Google’s GenAI model, Gemini, can be used to write misleading content about events like the upcoming United States presidential election. Similarly, when asked about the Super Bowl or the Titan submersible implosion, it has produced false information backed by fabricated citations. This has raised concerns among lawmakers and policymakers. In response, Google DeepMind has established the AI Safety and Alignment organization, a group tasked with improving the safety of the company’s GenAI models.
What’s Happening & Why This Matters
Formed by combining existing teams with specialized cohorts of GenAI researchers and engineers, the organization will focus on preventing the models from giving bad medical advice, ensuring child safety, and avoiding the amplification of bias and other injustices.
The AI Safety and Alignment organization also includes a team dedicated to the safety of artificial general intelligence (AGI). Anca Dragan, a former Waymo staff research scientist and a UC Berkeley professor of computer science, leads this team. She plans to help the models better understand human preferences, become robust against adversarial attacks, and account for diverse human values and viewpoints.
t/f Summary
While skepticism persists about GenAI tools and their potential to spread fake information, Google and other companies aim to address it through investments in AI safety. Such work could help prevent incidents like Microsoft’s Copilot suite making mistakes in meeting summaries, and it should ease worries about the spread of false or misleading information during election cycles. As developments in AI continue, these efforts will ideally make GenAI models safer over time.