
Digesting AI from VivaTech 2024

Posted at: 09.20.2024 in category: Session Digests
Balancing ethical regulation, combating misinformation, and empowering all of society with Artificial Intelligence. We recap what was said in the top AI sessions at VivaTech 2024.

Session: The Societal Impact of AI. Photo credit: VivaTech 2024

The Societal Impact of AI

As AI transforms society, the challenge is no longer just about innovation—it’s about making sure everyone, from tech-savvy youth to non-digital natives, can benefit from its power. The panelists in this session explored how AI can empower individuals and reshape societies.

Anne Bouverot of Ecole Normale Supérieure stressed the importance of making AI understandable and accessible to all, particularly non-digitally native and older populations. She called for capacity building and inclusive AI education, advocating for social dialogue to address common concerns around bias, job displacement, and disinformation. Bouverot referenced the report titled "AI: Our Ambition for France," which she presented to President Macron, focusing on the need for AI literacy across all sectors of society.

James Manyika, SVP of Tech & Society at Google, added to the discussion by highlighting the empowering potential of AI, especially for those with less specialized skills. He noted, "Quite often it is the less skilled workers... that should get the most out of it," underscoring how AI can democratize access to opportunities previously restricted by expertise. Manyika further emphasized the role of generative AI in leveling the playing field, explaining, "The fact that I can interact with these systems with very little expertise and still get something useful... is very empowering."

The session also touched on broader societal fears about AI, with a focus on understanding the risks to mitigate fear and anxiety. Bouverot called for ongoing social dialogue to study the challenges AI presents—like bias in algorithms and job automation—and create solutions that protect workers and uphold public interest.

Session: The AI Rulebook: What is to be Regulated, How and by Whom? Photo credit: VivaTech 2024

The AI Rulebook

Who controls how AI is developed? What is to be regulated, how, and by whom? This panel explored how different regions—Europe, the USA, and China—are approaching AI regulation and development.

Dragos Tudorache of the European Parliament emphasized the EU's shift from skepticism to adopting the AI Act, which aims to safeguard against AI risks while allowing innovation. He noted, "If you're a democracy and care about your people and societal risks, you need a rulebook because general principles have not been effectively applied in the past." This highlights Europe's commitment to a regulatory framework that prioritizes societal well-being.

Anu Bradford from Columbia University discussed the US perspective, where the focus has been more on AI development than regulation. Bradford advocated for a balanced approach, saying, "We need to recognize the importance of regulating AI to protect citizens and maintain technological leadership responsibly." Her comments underscored the tension between innovation and the need for regulation to avoid harm.

The session also touched on the global challenges of regulating AI, with Raffi Krikorian of Emerson Collective addressing the misconception that regulation stifles innovation. He explained, "There is a false dichotomy between regulation and innovation. You can have both at the same time." His statement highlights the importance of crafting regulation that fosters the right kind of innovation, ensuring technology benefits society without causing harm.

Session: Can We Have it “All”? Safe, Profitable, and Ethical AI. Photo credit: VivaTech 2024

Can We Have it All?

Can AI truly be ethical, profitable, and safe? That was the burning question tackled by leading voices in the AI world. From critiques of tech monopolies to calls for decentralization, this session sparked a lively debate about the future of AI. This panel shared a compelling vision for AI's future, one where inclusive, ethical frameworks shape technology and the societies it impacts.

Meredith Whittaker, from the Signal Foundation, took a critical stance on the current concentration of AI power within a few large U.S. tech companies. She argued that this monopoly undermines the potential of AI to benefit society at large. Whittaker advocated for a shift away from the surveillance business model, promoting decentralized, open-source AI development as an alternative. She called for the creation of new tech infrastructures that empower people, saying, “We must reject the surveillance business model and create alternative tech infrastructures.”

Sneha Revanur, Founder of Encode Justice and named to this year’s TIME list of the 100 most influential people in AI, echoed Whittaker's call for decentralization and regulatory measures. She stressed that while AI holds great potential, the current trajectory lacks inclusivity and proper regulation. Revanur highlighted the need for diverse voices, particularly youth, to shape AI’s future. She emphasized, “Right now, we’re heading down the wrong path, but it’s entirely reversible... this is a matter of human decision-making.”

On a more technical note, Jonas Andrulis, CEO of Aleph Alpha, discussed the challenges of developing AI technologies in Europe amidst the dominance of U.S. tech giants. He emphasized the need for tech sovereignty, urging Europe to build its own AI infrastructure to remain competitive. Andrulis offered a forward-looking perspective, stating, “The next generations will grow up and will be built by AI. If we don't have a positive vision... we’ll end up with a world we may not like.”

Session: Is Generative AI Feeding the Fake News Machine? Photo credit: VivaTech 2024

Is Generative AI Feeding the Fake News Machine?

Generative AI is not just a tool for creativity—it’s also fueling the spread of fake news at an alarming rate. Experts in this session discussed how AI is being weaponized to flood the media landscape with misleading content that erodes public trust, and explored potential solutions to mitigate these risks.

Mario Vasilescu, CEO of Readocracy, highlighted the erosion of trust between media and the public, driven by AI’s ability to flood the information landscape with misleading content. He invoked the "bullshit asymmetry principle", which holds that disproving false information takes far more effort than creating it. Vasilescu urged the tech industry to take responsibility for mitigating the spread of misinformation, calling for improved media literacy and alternative models that value quality content and critical thinking.

Alexander Nicholas, from XPRIZE, stressed the need for robust policy responses, particularly during election periods, when misinformation can have serious consequences. “What’s at stake is essentially our democratic systems. When citizens no longer believe that they can trust an election or that their voices count, they often resort to other means of expressing their voices.” Nicholas emphasized that beyond technological solutions, society needs prebunking strategies—proactive measures to arm people with the tools to recognize and reject false information.

Chine Labbé, from NewsGuard, echoed these sentiments, advocating for the responsible use of AI to combat misinformation at scale. She urged the industry to take a more active role in identifying harmful AI-generated content, noting how NewsGuard collaborates with companies like Microsoft to reduce the spread of such content. Labbé also pointed out that AI can be part of the solution, stating, “We have to use AI to empower us more in the fight against misinformation.”

Finally, Ana Carcani Rold of Diplomatic Courier raised a provocative question about the audience’s role in spreading misinformation, asking, "Even when we know something is fake, are we going to really care that it’s fake?" Her remarks underscored the broader cultural and behavioral challenges that AI-generated fake news presents.

AI for All

The AI leaders at VivaTech 2024 tackled pressing issues like ethical governance, the battle against misinformation, societal inclusion, and the balance between regulation and innovation. Beyond innovation itself, most speakers ultimately highlighted the need for responsible AI development that empowers all of society while safeguarding democratic values.

Want to dive into more VivaTech sessions? Check out all of our 2024 Session Recordings for more insights into other technologies.
