Tech Matters: What could the EU’s AI Act foretell for us?

By Leslie Meredith - Special to the Standard-Examiner | Aug 7, 2024

The European Union has taken a groundbreaking step by introducing the EU AI Act, the first legislation of its kind in the world. This act sets out comprehensive regulations on the development and deployment of artificial intelligence technologies. Its main aim is to ensure that AI systems used within the EU are safe and transparent and that they respect fundamental rights.

The act categorizes AI systems into three risk levels: unacceptable, high, and low or minimal risk. Unacceptable-risk AI systems, like those used for social scoring by governments, are banned entirely. The best-known example is China's Social Credit System, which uses financial records, social behaviors and compliance with laws and regulations to assign scores to citizens and organizations in an attempt to rate their "trustworthiness." Those who score high may have easier access to loans and travel, while low scorers may face travel bans, reduced access to social services and public shaming.
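
To make the concept concrete, here's a toy sketch, in Python, of how such a scoring system might work. Every signal, weight and cutoff in it is invented purely for illustration; it's the kind of design the act bans outright.

```python
# Hypothetical illustration only: a toy version of the kind of social
# scoring the EU AI Act prohibits. Every signal, weight and threshold
# here is invented for demonstration.

def social_score(on_time_payments, traffic_violations, rule_compliance):
    """Fold behavioral signals into a single 0-100 'trustworthiness' score."""
    score = 50.0
    score += 30.0 * on_time_payments    # fraction of bills paid on time
    score -= 5.0 * traffic_violations   # each recorded violation costs points
    score += 20.0 * rule_compliance     # fraction of regulations followed
    return max(0.0, min(100.0, score))

score = social_score(on_time_payments=0.9, traffic_violations=3, rule_compliance=0.8)
if score >= 70:  # arbitrary cutoff for "trusted"
    print(f"Score {score:.0f}: easier access to loans and travel")
else:
    print(f"Score {score:.0f}: travel bans, reduced access to services")
```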

High-risk systems, such as those used in critical infrastructure, health care and law enforcement, face strict requirements for transparency, oversight and accountability. Facial recognition systems used for identifying people in public spaces and systems that evaluate students’ performance and make decisions about their education paths fall into this category. Infrastructure-related systems include water supply systems and energy grid management systems, both essential to the public.

Low-risk AI systems, including chatbots and deepfakes, are subject to less stringent regulations but must still adhere to certain transparency obligations. They must be clearly labeled or otherwise identified as AI-generated. At the bottom of the scale are minimal-risk systems like AI-enabled video games, inventory management systems and email spam filters. Because they don't interact directly with people, or have very little impact when they do, these products may be developed and used freely.
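
As a rough illustration, a low-risk system could meet that labeling obligation with something as simple as the sketch below. The label wording and function name are mine, not the act's.

```python
# A minimal sketch of the transparency obligation for low-risk systems,
# assuming a simple text pipeline. The label wording and function name
# are invented, not taken from the act.

def label_ai_content(text):
    """Append a visible notice so readers can tell the text is AI-generated."""
    return text + "\n\n[This content was generated by an AI system.]"

print(label_ai_content("Here is the summary you asked for..."))
```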

Fines for noncompliance can be staggering and are tiered in line with the risk levels, set either as a fixed monetary amount or as a percentage of the company's worldwide annual turnover, whichever is higher. At the highest level, fines can reach 35 million euros or 7% of worldwide annual turnover. High-risk violations could incur fines of up to 15 million euros or 3% of annual turnover, while violations involving low-risk systems could be fined as much as 7.5 million euros or 1.5% of annual turnover.
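
The "whichever is higher" rule is easy to see in a few lines of code. This sketch uses the tier figures quoted above; the function and tier names are illustrative.

```python
# A minimal sketch of the tiered fine structure: each tier caps the
# penalty at a fixed euro amount or a percentage of worldwide annual
# turnover, whichever is HIGHER. Tier names are illustrative.

TIERS = {
    "unacceptable": (35_000_000, 0.07),   # 35M euros or 7%
    "high":         (15_000_000, 0.03),   # 15M euros or 3%
    "low":          (7_500_000, 0.015),   # 7.5M euros or 1.5%
}

def maximum_fine(tier, annual_turnover_eur):
    fixed, pct = TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# For a company with 2 billion euros in turnover, 7% (140M) exceeds the
# 35M floor, so the percentage applies.
print(f"{maximum_fine('unacceptable', 2_000_000_000):,.0f} euros")
```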

Much like the EU’s General Data Protection Regulation (GDPR), the AI Act applies to any company doing business in the EU, meaning American companies will need to comply if they operate in European markets. We’ve seen with GDPR that many companies structure their policies to meet EU levels rather than creating policies for different markets. It’s reasonable to expect the same thinking from tech companies like Alphabet (Google), Microsoft, Apple and Meta, all of which are working with EU regulators. As a result, we should see a standard level of transparency and privacy protections across these companies’ products.

While U.S. lawmakers are unlikely to pass AI legislation as strict as the EU's, that doesn't mean the U.S. will remain a free-for-all for AI development. There are already discussions and proposals for AI regulation in the U.S., signaling a move toward more controlled and responsible AI innovation.

Utah lawmakers have pioneered AI legislation at the state level. On the same day the EU Parliament adopted its AI Act (March 13), Utah Gov. Spencer Cox signed the state's AI law, which took effect on May 1 and was incorporated into Utah's consumer protection statutes. Its key elements include establishing liability for inadequate or improper disclosure of generative AI (when consumers interact with a chatbot) and creating the Office of Artificial Intelligence Policy to administer a state AI program.

Companies or individuals associated with businesses regulated by Utah’s Division of Consumer Protection must tell a person that they are interacting with a chatbot, not a human — but only if the person asks. For health care companies, a disclosure must be made before an interaction begins that clearly tells the individual that they will be communicating with a chatbot and not a real person.
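
In code, the two rules reduce to a simple branch. The sketch below assumes a hypothetical chatbot wrapper; only the disclose-on-request versus disclose-up-front distinction comes from the law as summarized here.

```python
# A sketch of the two disclosure rules described above. The wrapper,
# message strings and "are you a bot" check are invented; only the
# disclose-on-request vs. disclose-up-front distinction is from the law.

DISCLOSURE = "You are communicating with a chatbot, not a human."

def start_session(is_health_care):
    """Health care interactions must lead with the disclosure."""
    return [DISCLOSURE] if is_health_care else []

def reply(user_text, is_health_care):
    """Other regulated businesses disclose only when the person asks."""
    if not is_health_care and "are you a bot" in user_text.lower():
        return DISCLOSURE
    return "(normal chatbot reply)"

print(start_session(is_health_care=True))
print(reply("Are you a bot?", is_health_care=False))
```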

The liability piece is important: it holds the company responsible if the chatbot makes an error, and we all know how generative AI can make errors. Remember, these large language models are trained to predict the next most likely word and are prone to making things up, or "hallucinating." If a company covered by the law violates it, there are penalties. The Utah Division of Consumer Protection can impose administrative fines of up to $2,500 per violation, and if a company violates an administrative or court order, fines can be as high as $5,000 per violation.
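
A toy example shows why. The model below simply emits the most probable next word, with no check on whether it's true; the vocabulary and probabilities are invented for demonstration.

```python
# A toy look at next-word prediction: the model emits the most probable
# continuation and has no notion of whether it is true. The vocabulary
# and probabilities below are invented.

prompt = "The capital of France is"
next_word_probs = {"Paris": 0.46, "Lyon": 0.31, "purple": 0.01}

best = max(next_word_probs, key=next_word_probs.get)
print(prompt, best)  # prints "The capital of France is Paris"

# Nothing here checks facts: if skewed training data had ranked "Lyon"
# highest, the model would state it with the same confidence. That is
# a hallucination.
```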

Technology fueled by AI is developing at a rapid pace, and it’s reassuring to know that our legislators are leading the way to a safer future for Utahns.

Leslie Meredith has been writing about technology for more than a decade. As a mom of four, value, usefulness and online safety take priority. Have a question? Email Leslie at asklesliemeredith@gmail.com.
