Europe is making history by creating new rules for how artificial intelligence can be used. The EU has put its foot down early, and its regulations tend to ripple out across the globe.

But it’s strange, because many of the people making these new laws don’t really understand what AI is. An extremely oversimplified description: AI is a computer system that can learn and make decisions like a human.

Now, the 27 countries that make up the European Union (EU) are setting some guidelines to make sure that AI benefits everyone and doesn’t hurt people or invade their privacy.

This is huge because it’s the first time a big group of countries has come together to create such rules. And it’s all moving fast.

The new rules are part of the “EU AI Act”, which recently passed a significant milestone by getting approval from the European Parliament, a key body in the EU.

The next step is to iron out differences in the wording of the rules, and get a final version ready before the EU elections next year.

So, what do these new rules say?

  • Categorizing AI Systems Based on Risk: The EU AI Act classifies AI systems into four levels according to the risk they pose: minimal, limited, high, and unacceptable. This is akin to categorizing chemicals based on their potential hazards. For instance, an AI system that recommends songs (minimal risk) wouldn’t be scrutinized as much as an AI that assists in surgical procedures (high risk). Each category has its own set of rules and safeguards to ensure that the associated risks are properly managed.
  • Restrictions on Certain AI Applications: The EU has identified specific AI applications that are deemed unacceptable due to the inherent risks they pose to society. One of these is “social scoring,” where AI systems evaluate individuals based on various aspects of their behavior, potentially affecting their social benefits or career opportunities. Imagine a system that tracks your every move, from jaywalking to online purchases, and assigns you a score that could affect your job prospects. Additionally, the EU prohibits AI systems that manipulate or take advantage of vulnerable groups. Predictive policing, where AI anticipates criminal behavior, is also banned, as it could lead to bias and discrimination. Furthermore, the use of AI for real-time facial recognition in public spaces is restricted unless there is a significant public interest, protecting citizens’ privacy.
  • Transparency Requirements: In the same way that products have labels to inform consumers, the EU mandates that AI systems disclose when users are interacting with them. Moreover, AI systems must indicate when content such as images or videos is AI-generated (so-called deepfakes). For instance, if you’re engaging with a customer service chatbot, it should explicitly inform you that you’re conversing with an AI. This transparency empowers individuals to make informed decisions about their interactions with AI systems.
  • Penalties for Non-Compliance: The EU AI Act imposes substantial financial penalties on companies that fail to comply with the new regulations. Fines can reach $43 million or 7% of the company’s global revenue, whichever is greater. To put this in perspective, a company with a global revenue of $1 billion could face a penalty of $70 million. This serves as a strong incentive for companies to comply, and it underscores how seriously the EU takes responsible AI governance.
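The “whichever is greater” rule above can be sketched in a few lines of Python. This is an illustrative calculation only, using the figures quoted in the article (a $43 million fixed cap and a 7% revenue share); it is not a legal formula, and the function name is made up for this example.

```python
def max_fine(global_revenue: float,
             fixed_cap: float = 43_000_000,   # fixed cap quoted above, in USD
             revenue_share: float = 0.07) -> float:
    """Return the maximum possible fine: the greater of the fixed cap
    and a percentage of global revenue (illustrative sketch only)."""
    return max(fixed_cap, revenue_share * global_revenue)

# A company with $1 billion in global revenue: 7% is $70 million,
# which exceeds the $43 million cap, so the larger figure applies.
print(max_fine(1_000_000_000))  # → 70000000.0
```

For smaller companies the fixed cap dominates: a firm with $100 million in revenue would face the $43 million figure, since 7% of its revenue is only $7 million.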

But what about the companies that make AI? What do they think? OpenAI, which is the company behind the groundbreaking ChatGPT, has had mixed views about regulation.

While they do see the importance of some rules, they’re also worried that too many could make it hard to build and use AI effectively. They have been talking to lawmakers to make sure the rules make sense. How much of that is legitimate discussion and how much is corporate lobbying is hard to say.

To put it in perspective, Europe isn’t the biggest player in creating AI tech – that’s mainly the United States and China. But, Europe is really stepping up its game in setting the rules. This is important because, often, where Europe goes, the rest of the world follows in terms of making laws.

But it’s still going to take quite a long time for these rules to come into effect. The EU countries, the European Parliament, and the European Commission need to finalize the details. Plus, companies will have some time to adjust before the rules start applying.

Meanwhile, Europe and the U.S. are trying to make a ‘play nice’ agreement, which is like a promise to behave well when it comes to AI. This can be a guiding light for other countries, too.

Europe really has been taking the lead in making sure AI is used responsibly and doesn’t harm people or their rights. While this is a step in the right direction, it’s also important that these rules allow for creativity and innovation in AI. Just like in life, it’s all about finding the right balance!

Only time will tell what regulations and policies will be applied to these companies going forward.


