The AI Action Summit was held in Paris at the beginning of this February. The event, co-hosted by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, brought together several other prominent figures, including US Vice President JD Vance, Chinese Vice Premier Zhang Guoqing, and Canadian Prime Minister Justin Trudeau.
The summit highlighted how countries around the globe are taking different approaches to balancing innovation with regulation.
The EU: Regulation Before Innovation?
Currently, the EU has one of the most comprehensive frameworks for regulating AI use and development, called the AI Act. It identifies key risks that AI poses, as well as potential ways to combat these risks in order to prioritize safety and protection of human rights. The act establishes rules on data, as well as implementation of transparency, human oversight, and accountability. One of the major goals of the act is to create a unified legal framework for AI across all European countries and a single, cohesive European AI market.
President Macron has been a strong advocate of making France an "AI powerhouse." This goal has been backed by funding and financial support, as well as the recent appointment of Clara Chappaz as the first French Minister for AI and Digital.
Following the AI summit, Chappaz and others spoke about the AI and tech ecosystem in the EU and globally at an event by Visionaries Unplugged. She spoke about the competitive advantages of regulation, as well as her hopes regarding financing, talent, and energy in Europe.
Ethics vs. Security Risks in the UK
Following the AI Action Summit, both the US and the UK refused to sign an international declaration on artificial intelligence, though for different reasons. While the US was more concerned that the declaration might impede the AI industry from taking off, the UK pushed back over concerns about national security and global governance.
In 2023, the UK held the world's first AI Safety Summit, leaving the world to wonder what has changed in the past few years and where the UK now stands in the debate between regulation and innovation. According to the BBC, however, the UK government said it agreed with much of the declaration but felt that it "didn't provide enough practical clarity" and wasn't focused enough on national security for them to get on board.
As for regulation, the current leaders in the UK have made it clear that they want to take a pro-innovation approach to AI, potentially aligning more with the US than with Europe, though that remains unclear for the moment.
The government has, however, emphasized its intent to go after "serious AI risks with security implications," rather than more ethical risks like "bias or freedom of speech." The UK government has stated that it will specifically target issues such as "how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse."
The US: Innovation Above All
The US fell in line with the UK at the summit, also refusing to sign the same international declaration. This follows a major shift in attitude towards AI in the US after the transition from the Biden to the Trump administration. While the Biden administration was heavily focused on mitigating the potential risks of artificial intelligence, the Trump administration has voiced its intention to prioritize innovation above everything else.
This comes in the form of less regulation, with US Vice President JD Vance saying that "to restrict [AI’s] development now would not only unfairly benefit incumbents in the space, it would mean paralyzing one of the most promising technologies we have seen in generations," as reported by Time.
This attitude towards regulation is markedly different from the EU's, leaving many to wonder how the sector will develop differently as a result.
That's Not All
The development of AI has created a diverse regulatory environment, with different governments prioritizing different issues. How can we best regulate AI to ensure it serves humanity's best interests? And how can countries collaborate rather than compete in shaping the tech landscape?
If these questions interest you, join us at VivaTech 2025, where we'll be discussing everything related to ethics, privacy, and governance.