How to regulate ChatGPT? EU countries, MEPs strike political deal on landmark AI Act

The European Union has embarked on a groundbreaking journey to regulate large language models (LLMs) like ChatGPT through the recent political agreement on the AI Act. This landmark legislation is designed to establish a comprehensive framework governing the development, deployment, and usage of artificial intelligence (AI) across the EU, with a particular emphasis on mitigating the risks associated with powerful language models.

The AI Act introduces key provisions that directly impact LLMs, including ChatGPT, reshaping the landscape of AI governance within the European Union:

  1. Transparency and Explainability: A pivotal aspect of the act is the mandate that developers and users of AI systems, including LLMs, be transparent about their capabilities and limitations. This means disclosing the training methodologies and datasets used, and ensuring that system outputs are both comprehensible and traceable (one way a developer might record such disclosures is sketched after this list).
  2. Risk Mitigation: The act categorizes AI systems based on their risk levels, with a specific focus on high-risk systems, including LLMs like ChatGPT. Stringent requirements, such as mandatory audits and conformity assessments, are imposed on these systems to minimize potential risks and enhance accountability.
  3. Prohibited Uses: Addressing ethical concerns, the AI Act explicitly prohibits the use of AI, including LLMs, for malicious purposes. This encompasses activities like disseminating misinformation or generating harmful content. Additionally, the act forbids the deployment of AI in applications that could result in discrimination or unfair bias, aligning with the EU’s commitment to fostering ethical AI practices.
  4. Human Oversight and Accountability: The act underscores the significance of human oversight and accountability in the deployment of AI systems. Developers and users are required to take responsibility for the actions of their AI systems, ensuring they are utilized in a safe and ethical manner. This accountability principle aims to instill confidence in the public regarding the responsible use of AI technologies.
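
To make these obligations more concrete for engineering teams, here is a minimal Python sketch of how a developer might record the disclosures described in item 1 and the risk classification in item 2. Every name and field in it (RiskTier, ModelTransparencyRecord, and so on) is an illustrative assumption, not terminology drawn from the act itself.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the act's categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class ModelTransparencyRecord:
    """Hypothetical 'model card' capturing the disclosures item 1 calls for."""
    model_name: str
    training_method: str          # e.g. "pretraining + instruction tuning"
    training_data_summary: str    # plain-language description of datasets used
    known_limitations: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.LIMITED

    def requires_conformity_assessment(self) -> bool:
        # Item 2: high-risk systems face mandatory audits and assessments.
        return self.risk_tier is RiskTier.HIGH


record = ModelTransparencyRecord(
    model_name="example-llm-v1",
    training_method="pretraining on web text + instruction tuning",
    training_data_summary="publicly available web corpora (illustrative)",
    known_limitations=["may produce inaccurate or biased output"],
    risk_tier=RiskTier.HIGH,
)
print(record.requires_conformity_assessment())  # True -> schedule an audit
```

A record like this could feed an internal audit trail; the act itself does not prescribe any particular data format, so the structure above is purely one possible design.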

Although much of the debate around the act has focused on ChatGPT, its requirements extend to any LLM that meets the criteria of a “high-risk” AI system. Developers and users of comparable models, including Google’s Bard or LaMDA, must therefore adhere to the same stringent requirements outlined in the act.

It’s important to note that the AI Act is still pending formal approval by the European Parliament and the Council. Once approved, which is expected in 2024, it will usher in a new era of governance for LLMs within the EU, significantly influencing their development and application.

Beyond the EU, the regulation of LLMs is becoming a global consideration. The United States is exploring various approaches, including industry self-regulation and government oversight, while China has initiated steps to regulate AI, providing guidelines for the development and utilization of LLMs.

In conclusion, the regulation of LLMs represents a complex and evolving landscape. The EU’s AI Act, with its robust provisions, stands as a significant stride towards ensuring the responsible and ethical use of powerful language models. As other countries grapple with similar challenges, the EU’s approach will likely influence the global discourse on AI governance, shaping the trajectory of responsible AI development.
