Following the AI Action Summit in Paris, the Commission removes the proposed AI liability directive from its 2025 work program, signaling a shift in its regulatory approach.
The European Commission has officially withdrawn the AI liability directive from its final 2025 work program, as detailed in a document published late on February 11. This decision follows significant criticism of the EU's regulatory strategies, prominently voiced by US Vice-President JD Vance during the AI Action Summit held in Paris on February 10-11.
The AI Action Summit aimed to foster a human-centric approach to artificial intelligence, yet its agenda was overshadowed by announcements of investment commitments from the EU and France worth hundreds of billions of euros.
These initiatives are seen as efforts to enhance the EU's position in the ongoing global AI competition.
The withdrawal of the AI liability directive indicates a strategic shift by the EU Commission, aiming to cultivate an environment more amenable to investment and technological advancement.
This move may also reflect the Commission's desire to signal willingness to cooperate with the new US administration under President Donald Trump.
For the past year, the AI liability directive had faced diminishing support within the EU, particularly as the AI Act—a regulation that establishes guidelines based on the societal risks associated with AI technologies—was adopted.
As a result, the necessity of an additional liability law had come into question.
The Commission articulated its reasons for withdrawing the directive, stating that there is 'no foreseeable agreement' on the law and indicating it would explore alternative approaches in this legislative area.
The 2025 work program outlines the Commission's priorities, emphasizing the simplification of rules and effective implementation of existing regulations.
Alongside the withdrawal of the AI liability directive, the Commission plans to withdraw a total of 37 legislative proposals as part of its re-evaluation strategy.