Recommendation: “I went for a walk with Gary Marcus, AI’s loudest critic”


Gary Marcus meets me outside the post office of Vancouver’s Granville Island wearing neon-coral sneakers and a blue Arc’teryx jacket. I’m in town for a family thing, and Marcus has lived in the city since 2018, after 20 years in New York City. “I just find it to be paradise,” he tells me, as I join him on his daily walk around Granville Island and nearby Kitsilano Beach. We’ve agreed to walk and talk about—what else—the current state of AI.

“I’m depressed about it,” he tells me. “When I went into this field, it was not so that we could have a massive turnover of wealth from artists to big corporations.” I take a big sip of my black dark-roast coffee. Off we go.

Marcus, a professor emeritus at NYU, is a prominent AI researcher and cognitive scientist who has positioned himself as a vocal critic of deep learning and AI. He is a divisive figure. You might recognize him from the spicy feuds on X with AI heavyweights such as Yann LeCun and Geoffrey Hinton. (“All attempts to socialize me have failed,” he jokes.) It is on walks like this that Marcus often does most of his tweeting.

This week has been a big news week in AI. Google DeepMind launched the next generation of its powerful artificial-intelligence model Gemini, which has an enhanced ability to work with large amounts of video, text, and images. And OpenAI has built a striking new generative video model called Sora that can take a short text description and turn it into a detailed, high-definition film clip up to a minute long. AI video generation has been around for a while, but Sora seems to have upped the ante. My X timeline has been flooded with stunning clips people have generated using the software. OpenAI claims that its results suggest that scaling video generation models like Sora “is a promising path towards building general purpose simulators of the physical world.” You can read more about Sora from Will Douglas Heaven here.

But—surprise—Marcus is not impressed. “If you look at [the videos] for a second, you’re like, ‘Wow, that’s amazing.’ But if you look at them carefully, [the AI system] still doesn’t really understand common sense,” he says. In some videos, the physics is clearly off: animals and people spontaneously appear and disappear, and objects fly backwards.

For Marcus, generative video is yet another example of the exploitative business model of tech companies. Many artists, writers, and even the New York Times have sued AI companies, claiming that their practice of indiscriminately scraping the internet for data to train their models violates their intellectual property. Copyright issues are top of Marcus’s mind. He managed to get popular AI image generators to produce scenes from Marvel movies and famous characters such as the Minions, Sonic the Hedgehog, and Darth Vader. He has started lobbying for clearer rules on what goes into AI models.

“Video generation should not be done with copyrighted materials taken without consent, in systems that are opaque, where we can’t understand what’s going on,” he says. “It shouldn’t be a legal thing. It’s certainly not an ethical thing.”

We stop at a scenic spot. It’s a beautiful route, with views of the city, the mountains, and the beach. A speckle of sun hits the peak of a mountain just across the bay. We could not be further away from Silicon Valley, the epicenter of today’s AI boom. “I’m not a religious person, but these kinds of tableaux … just continue to blow my mind,” Marcus says.

But despite the tranquility of the surroundings, Marcus often uses these walks to rail on X against the power structures of Silicon Valley. Right now, he says, he identifies as an activist.

When I ask him what motivates him, he replies without missing a beat: “The people who are running AI don’t really care that much about what you might call responsible AI, and the consequences for society may be severe.”

Late last year he wrote a book called Taming Silicon Valley, which is coming out this fall. It is his manifesto on how AI should be regulated, but also a call to action. “We need to get the public involved in the struggle to try to get the AI companies to behave responsibly,” he says.

There are a bunch of different things people can do, he says, ranging from boycotting some of the software until companies clean up their act to choosing electoral candidates on the basis of their tech policies.

Action and AI policy are needed urgently, he argues, because we are in a very narrow window during which we can fix things in AI. The risk is that we make the same mistakes regulators made with social media companies.

“What we saw with social media is just going to be like an appetizer compared to what’s going to happen,” he says.

Around 12,000 steps later, we’re back at Granville Island’s Public Market. I’m starving, so Marcus shows me a spot that serves good bagels. We both get the lox with cream cheese and eat it outside in the sun before parting ways.

Later that day, Marcus would send out a flurry of tweets about Sora, having seen enough evidence to call it: “Sora is fantastic, but it is akin to morphing and splicing, rather than a path to the physical reasoning we would need for AGI,” he wrote. “We will see more systemic glitches as more people have access. Many will be hard to remedy.”

Don’t say he didn’t warn you.

Source: www.technologyreview.com
