AI governance can’t be left to the vested interests


A final report by the UN’s high-level advisory body on artificial intelligence makes for, at times, a surreal read. Named “Governing AI for Humanity,” the document underlines the contradictory challenges of making any kind of governance stick on such a fast-developing, massively funded, and heavily hyped technology.

On the one hand, the report observes — quite correctly — that there’s “a global governance deficit with respect to AI.” On the other, the UN advisory body dryly points out that: “Hundreds of [AI] guides, frameworks, and principles have been adopted by governments, companies and consortiums, and regional and international organizations.” Even as the report itself adds one more set of recommendations to the AI governance pile.

The overarching problem the report highlights is that a patchwork of approaches is building up around governing AI, rather than any collective coherence on what to do about a technology that’s both powerful and stupid.

AI automation can certainly be powerful: Press the button and you get outputs scaled on demand. But AI can also be stupid because, despite what the name implies, AI is not intelligence; its outputs are a reflection of its inputs; and bad inputs can lead to very bad (and unintelligent) outcomes.

Add scale to stupidity, and AI can cause very big problems indeed, as the report highlights. For instance, it can amplify discrimination or spread disinformation. Both of which are already happening, in all sorts of domains, at problematic scale, which leads to very real-world harms.

But those with commercial irons in the generative AI fire that’s been raging over the past few years are so in thrall to the potential scale upside of this technology that they’re doing everything they can to downplay the risks of AI stupidity.

In recent years, this has included heavy lobbying about the idea that the world needs rules to protect against so-called AGI (artificial general intelligence), or the concept of an AI that can think for itself and could even out-think humans. But this is a flashy fiction intended to grab policymakers’ attention and focus lawmakers’ minds on nonexistent AI problems, thereby normalizing the harmful stupidities of current gen AI tools. (So really, the PR game being played is about defining and defusing the concept of “AI safety” by making it mean let’s just worry about science fiction.)

A narrow definition of AI safety serves to distract from the vast environmental harms of pouring ever more compute power, energy, and water into building data centers big enough to feed this voracious new beast of scale. Debates about whether we can afford to keep scaling AI like this are not happening at any high level — but maybe they should be?

The conjured specter of AGI also serves to steer the conversation past the myriad legal and ethical issues chain-linked to the development and use of automation tools trained on other people’s information without their permission. Jobs and livelihoods are at stake. Even whole industries. And so are individual people’s rights and freedoms.

Words like “copyright” and “privacy” scare AI developers far more than the claimed existential risks of AGI because these are clever people who haven’t actually lost touch with reality.

But those with a vested interest in scaling AI choose to harp only about the potential upside of their innovations in order to minimize the application of any “guardrails” (to use the minimalist metaphor of choice when technologists are finally forced to apply limits to their tech) standing in the way of achieving greater profits.

Toss in geopolitical rivalries and a bleak global economic picture and nation states’ governments can often be all too willing to join the AI hype and fray, pushing for less governance in the hopes it might help them scale their own national AI champions.

With such a skewed backdrop, is it any wonder AI governance remains such a horribly confusing and tangled mess? Even in the European Union where, earlier this year, lawmakers did actually adopt a risk-based framework for regulating a minority of applications of AI, the loudest voices discussing this landmark effort are still decrying its existence and claiming the law spells doom for the bloc’s chances of homegrown innovation. And they’re doing that even after the law was watered down following earlier tech industry lobbying (led by France, with its eye on the interests of Mistral, its hope for a national GenAI champion).

A new push to deregulate EU privacy laws

Vested interests aren’t stopping there, either. We now have Meta, owner of Facebook and Instagram — turned Big AI developer — openly lobbying to deregulate European privacy laws to remove limits on how it can use people’s information to train AIs. Will no one rid Meta of this turbulent data protection regulation so it can strip-mine Europeans of their culture for ad profit?

Its latest open letter lobbying against the EU’s General Data Protection Regulation (GDPR), which was written up in the WSJ, loops in several other commercial giants also pushing for deregulation in pursuit of profit, including Ericsson, Spotify, and SAP.

“Europe has become less competitive and less innovative compared to other regions, and it now risks falling further behind in the AI era due to inconsistent regulatory decision making,” the letter reportedly suggests.

Meta has a long history of breaking EU privacy law — chalking up a majority of the 10 largest-ever GDPR fines to date, for example, and racking up billions of dollars in penalties — so it really shouldn’t be a poster child for lawmaking priorities. Yet, when it comes to AI, here we are! Meta has broken so many EU laws, and we’re apparently supposed to listen to its ideas for removing the obstacle of having laws to break in the first place? This is the kind of magical thinking AI can provoke.

But the really scary thing is there’s a danger lawmakers might inhale this propaganda and hand the levers of power to those who would automate everything — putting blind faith in a headless god of scale in the hopes that AI will automagically deliver economic prosperity for all.

It’s a strategy — if we can even call it that — which totally ignores the fact that the last several decades of (very lightly regulated) digital development have delivered the very opposite: a staggering concentration of wealth and power sucked in by a handful of massive platforms — Big Tech.

Clearly, platform giants want to repeat the trick with Big AI. But policymakers risk walking mindlessly down the self-serving pathways being recommended to them by Big Tech’s handsomely rewarded army of policy lobbyists. This isn’t remotely close to a fair fight — if it’s even a fight at all.

Economic pressures are certainly driving a lot of soul-searching in Europe right now. A much-anticipated report published earlier this month by the Italian economist Mario Draghi, on the ever-sensitive topic of the future of European competitiveness, itself chafes at self-imposed “regulatory burdens,” which it specifically describes as “self-defeating for those in the digital sectors.”

Given the timing of Meta’s open letter, it’s surely aiming to hook into the same conclusion. But that’s hardly surprising: Meta and several of the others adding their signatures to this push to deregulate EU privacy laws are among the long list of companies that Draghi directly consulted for his report. (Meanwhile, as others have pointed out, the economist’s contributor disclosure list does not include any digital rights or human rights groups, aside from the consumer group BEUC.)

Recommendations from the UN AI advisory group

The asymmetry of interests driving AI uptake while simultaneously seeking to downgrade and dilute governance efforts makes it hard to see how a genuinely global consensus can emerge on how to control AI’s scale and stupidity. But the UN AI advisory group has a few solid-looking ideas if anyone is willing to listen.

The report’s recommendations include setting up an independent international scientific panel to survey AI capabilities, opportunities, risks, and uncertainties and identify areas where more research is needed with a focus on the public interest (albeit, good luck finding academics not already on Big AI’s payroll). Another recommendation is intergovernmental AI dialogues that would take place twice a year on the margins of existing UN meetings to share best practices, exchange information, and push for more international interoperability on governance.

Natasha Lomas

Source: techcrunch.com
