Are We Prepared for ‘Act 2’ of Gen AI?

venturebeat.com

With every demonstration and experiment, the breathless excitement surrounding generative AI grows at a nearly unprecedented pace. Across healthcare, finance, transportation, manufacturing, media, retail and energy, gen AI is virtually rewriting the rules for how we work and think.

Of course, we’ve previously seen rapid adoption curves for game-changing technologies: The internet, smartphones, social media, robotics, streaming media and electric vehicles all provide lessons and models with varying degrees of relevance. The crucial difference: Those technologies largely automated tasks and communication to provide their substantial benefits — that is, the power to send and receive messages in real time, faster manufacturing and assembly, safer and smarter transportation. But with gen AI, we are automating (and profoundly accelerating) human analysis and insights. That places greater demands, constraints and challenges before us.

We are only in what I’d call “Act 1” of the gen AI story. Previously unimaginable amounts of data and compute have created models that demonstrate (a key word) what gen AI can deliver. However, these early experiments have also brought compromises, exceptions, cost concerns and, yes, errors. After all, fast-evolving technologies are inherently fragile at the start.

However, we must ensure we don’t remain mired in Act 1. Much more work remains on the pragmatics of operationalizing gen AI. This work — what I’d call “Act 2” — may be less glamorous, but it is no less essential to the success of this technology.

Think about it. The major breakthrough technologies of the past 30-plus years created some of the best-known names in business: Facebook, Tesla, Netflix and even my own company, Amazon. Those successes certainly provide a useful roadmap, but they became household names only after they built out the businesses that proved their value, after they created the infrastructure, applications, systems and processes that turned head-turning innovation into sustainable and scalable businesses.

Time to roll up our sleeves

In gen AI, that transition to Act 2 is just getting underway. We can’t point to four or five life-changing applications — yet. Reality has not caught up to the hype — yet. Why is that? Quite simply, it’s because Act 2 is really hard. In any segment of the technology industry, building a sustainable, scalable business requires years of heavy lifting. But in gen AI, with its higher profile and higher stakes, that work will be exponentially more challenging. Act 1 has shown us clearly the areas we must address:

  • Accuracy: Amid all of the amazing demonstrations of gen AI’s power and sophistication, we’ve also seen inaccuracies and “hallucinations” that, for now, disqualify it from broader usage until we resolve quality problems.
  • Bias: Early pilot programs have shown that gen AI is only as good as its training data, and biased data will lead to biased results. This flaw must be addressed for gen AI to earn the trust of most users.
  • Ethics: Regulators, thought leaders and ethicists have urged AI companies to integrate important guardrails and safeguards to prevent misuse, disinformation, fraud, misrepresentations and even runaway events. Responsible AI must be a primary consideration.
  • Scalability: The magnitude of computing resources required to build and operate gen AI applications at scale is nearly unprecedented. Since 2010, the amount of training compute for machine learning (ML) models has grown by a factor of 10 billion, significantly exceeding a naive extrapolation of Moore’s Law. The amount of data used to train ML models has increased 100X. And the size of models has grown more than 1,000 times. We’re still only in Act 1, and there’s no reason to expect this trajectory won’t continue.
  • Cost: The impressive feats of gen AI carry a high price due to their compute-intensive nature. These preliminary proof-of-concept demonstrations are unburdened by concerns over economic feasibility. But a mass-market gen AI application must provide its benefits at an acceptable cost that encourages and enables the broadest levels of usage. Gen AI must not be so expensive that it is restricted to a narrow subset of use cases.

Are you in Act 1 or Act 2 of gen AI?

Decades ago, the world immediately recognized that the jet engine represented an exponential improvement in transportation, one that would forever change the world through its ability to shrink distances and times and democratize the world of travel.

But the jet engine alone wasn’t anywhere near a complete solution. We needed to integrate it into robust vehicles with aerodynamic wings and space-efficient cabins, optimized fuels, maintenance procedures and safety protocols. We had to redesign runways and airports to accommodate these vehicles and their greater numbers of passengers. We had to upgrade air-traffic control systems. And we had to commit to safety as the primary directive. The engine alone wasn’t of great value without all of these supporting innovations and resources.

The lesson is clear: Life-changing applications require infrastructure. It’s a mistake to assume that an Act 1 AI demonstration will be enterprise-ready. In Act 2, we must take dazzling AI technology and develop it into a mature, ubiquitous system backed by a robust and reliable infrastructure that will integrate with almost every area of our lives. For companies moving into Act 2, the logical question arises: How do you accelerate that journey to broadly deployed gen AI and reap its benefits?

In my view, there are five keys to ensuring you don’t remain stuck in Act 1 — and for succeeding in the coming Act 2 for gen AI:

Differentiate with data

Even in gen AI’s Act 1, the importance of data quickly becomes clear. The quality of gen AI is heavily dependent on the quality of its training data. The data is your asset and your added value, so devote proper resources to data-cleansing routines. Whether it’s using multiple sources or enforcing security and access privileges, a sound and thoughtful data strategy makes a big difference.
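
To make that concrete, here is a minimal, hypothetical sketch (in Python) of what one cleansing pass might look like: deduplicating records, dropping rows that are missing critical fields and stripping leftover markup before the data ever reaches a training or fine-tuning pipeline. The record fields are placeholders, not a real schema.

    import re

    def clean_records(records):
        """Minimal data-cleansing sketch: dedupe, drop incomplete rows, strip markup.
        Assumes each record is a dict with hypothetical 'id' and 'text' fields."""
        seen_ids = set()
        cleaned = []
        for record in records:
            # Drop rows missing the fields we treat as critical.
            if not record.get("id") or not record.get("text"):
                continue
            # Deduplicate on the record id.
            if record["id"] in seen_ids:
                continue
            seen_ids.add(record["id"])
            # Strip leftover HTML tags and collapse whitespace.
            text = re.sub(r"<[^>]+>", " ", record["text"])
            text = re.sub(r"\s+", " ", text).strip()
            cleaned.append({**record, "text": text})
        return cleaned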

Choose the right hybrid mixture of models

It’s both logical and tempting to design your AI usage around one large model. You might think you can simply take a giant large language model (LLM) from your Act 1 initiatives and just get moving. However, the better approach is to assemble and integrate a mixture of several models. Just as a human’s frontal cortex handles logic and reasoning while the limbic system deals with fast, spontaneous responses, a good AI system brings together multiple models in a heterogeneous architecture. No two LLMs are alike — and no single model can “do it all.” What’s more, there are cost considerations. The most accurate model might be more expensive and slower.

For instance, a faster model might produce a concise answer in one second — something ideal for a chatbot. However, a different but similar model might produce a more comprehensive (but equally accurate) answer to the same question within 15 seconds, which might be better suited for a customer-service agent. That’s why many companies are identifying, evaluating and deploying a blended portfolio of models to support their various AI initiatives. Invest the time to fully analyze your options and choose the right mixture.
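
As a hypothetical illustration of that blended-portfolio idea, a thin routing layer can send latency-sensitive requests to a fast, inexpensive model and reserve a slower, more thorough model for tasks that can wait. The model names, latencies and costs below are placeholders, not real endpoints or prices.

    from dataclasses import dataclass

    @dataclass
    class ModelProfile:
        name: str
        latency_seconds: float  # typical response time
        cost_per_call: float    # illustrative cost, not a real price

    # Hypothetical profiles for a blended portfolio of models.
    FAST_MODEL = ModelProfile("fast-chat-model", latency_seconds=1.0, cost_per_call=0.002)
    DEEP_MODEL = ModelProfile("thorough-analysis-model", latency_seconds=15.0, cost_per_call=0.05)

    def route_request(task_type: str, latency_budget_seconds: float) -> ModelProfile:
        """Pick a model based on the task and how long the caller can wait."""
        if task_type == "chatbot" or latency_budget_seconds < DEEP_MODEL.latency_seconds:
            return FAST_MODEL
        return DEEP_MODEL

    # A chatbot turn stays on the fast model; a customer-service summary can wait.
    print(route_request("chatbot", latency_budget_seconds=2).name)        # fast-chat-model
    print(route_request("case_summary", latency_budget_seconds=30).name)  # thorough-analysis-model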

Integrate AI responsibly

Even in its early days, gen AI quickly presented scenarios and demonstrations that underscore the critical importance of standards and practices that emphasize ethics and responsible use. Gen AI should take a people-centric approach that prioritizes education and integrity by detecting and preventing harmful or inappropriate content — in both user input and model output. For example, invisible watermarks can help reduce the spread of disinformation.
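
One very simplified way to picture checks on both input and output is a moderation gate wrapped around the model call. The keyword blocklist and generate_fn below are stand-ins for whatever moderation service and model client you actually use; a production system would call a dedicated safety model, not a word list.

    # Hypothetical moderation gate; the blocklist stands in for a real safety model.
    BLOCKED_TERMS = {"example-banned-term"}

    def violates_policy(text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def moderated_generate(prompt: str, generate_fn):
        """Check the user's input before the model call and the output after it."""
        if violates_policy(prompt):
            return "Request declined: the prompt violates usage policy."
        response = generate_fn(prompt)
        if violates_policy(response):
            return "Response withheld: the output failed a safety check."
        return response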

Focus on cost, performance and scale

Success in gen AI will depend on a low-cost, highly performant ML infrastructure that enables rapid training. This encompasses both purpose-built hardware and resilient software optimized for scalability, fault tolerance and more, so that you can build, train, tune and deploy models in a cost-feasible manner. It’s also important to recognize that scaling an application inevitably exposes unexpected scenarios that can sidetrack gen AI expansion. And because the scale is so much higher, any failure will have a very high profile. Enterprises must anticipate these scenarios and build in the plans and infrastructure to handle them.
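
As one small, generic example of planning for failure at scale, a model call can be wrapped in retries with exponential backoff and a hard attempt budget, so that a transient endpoint error does not cascade into a high-profile outage. call_model here is a placeholder for whatever inference client your stack provides.

    import random
    import time

    def call_with_retries(call_model, prompt: str, max_attempts: int = 4):
        """Retry a flaky model call with exponential backoff and jitter.
        call_model is a placeholder for your actual inference client."""
        for attempt in range(max_attempts):
            try:
                return call_model(prompt)
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # Out of attempts: surface the failure to the caller.
                # Back off 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
                time.sleep((2 ** attempt) + random.random())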

Promote usability and accessibility

To succeed, gen AI must be broadly accessible (within security parameters) and intuitively usable within existing workflows. Direct your efforts toward non-experts and non-coders, and enable business analysts, finance pros, citizen data analysts and other domain experts to put gen AI to work in the tools and workflows they already use.
