Lawmakers in the European Parliament approved the provisional agreement on groundbreaking artificial intelligence (AI) regulations on Feb. 13, setting the stage for the world’s first AI-focused legislation ahead of a parliamentary vote scheduled for April.

The Internal Market and Civil Liberties Committees voted 71–8 to endorse the provisional agreement on the AI Act. The act aims to establish rules for AI across various industries, including banking, automotive, electronics, aviation, security and law enforcement.

The regulations will also cover foundation models, or generative AI systems trained on extensive data sets, such as OpenAI’s ChatGPT.

‼️ AI Act takes a step forward:
MEPs in @EP_Justice @EP_SingleMarket have endorsed the provisional agreement on an Artificial Intelligence Act that ensures safety and complies with fundamental rights https://t.co/EbXtLBfIoY @brandobenifei @IoanDragosT pic.twitter.com/J3NXRhxd9p

— LIBE Committee Press (@EP_Justice) February 13, 2024

The endorsement follows approval by EU member states after France withdrew its objection, having secured concessions aimed at reducing the administrative burden on high-risk AI systems and strengthening protection for business secrets.

Following the December 2023 political agreement, work began on turning the agreed positions into a final compromise text for lawmakers’ approval, culminating in the Feb. 2 vote of Coreper, the committee of permanent representatives of all member states.

The European Parliament Committee on Civil Liberties described the endorsement as the AI Act taking a step forward in a post on X.

Related: Singapore becoming AI hub with commercial models in local languages

The AI Act is set to proceed to the European Parliament for a vote in March or April. If passed, it is expected to be fully applied 24 months after entering into force, with specific provisions taking effect earlier.

In November 2023, a group of businesses and tech companies issued a joint letter to EU regulators warning against overregulating powerful AI systems at the expense of innovation.

The letter, signed by 33 companies operating in the EU, emphasized that overly strict regulation of foundation models and general-purpose AI could discourage essential innovation in the region.

The European Commission is taking steps to establish an AI Office to monitor the compliance of high-impact foundation models deemed to carry systemic risks. It has also unveiled measures to support local AI developers, such as upgrading the EU’s supercomputer network for generative AI model training.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change