At the end of 2022, OpenAI released ChatGPT to the public and, in doing so, changed the world. While AI and machine learning systems have been working behind the scenes in various contexts for decades, the step-function shift in accessibility to powerful Generative AI tooling triggered a collective “hands-on” experience which propelled AI into the mainstream consciousness.
Alongside the excitement generated by the potential of AI, the GenAI race has renewed public fear around AI. The consequences of technology misuse are still fresh in people’s minds, reminding us what happens when technology and the private sector go unchecked. Convincing deep fakes are already deceiving and scamming people. The public and politicians want to create rational guardrails for this power, walking the line between implementing control points for the safety and security of their constituents, whilst working to avoid stifling the opportunity (and international competition) afforded by AI. But how do you regulate something that’s new and not even fully developed, or even fully understood? In this blog, we’ll explore the common calls for regulation, how it can cause problems, and what regulation could look like.
“Do no harm”–The public call for regulation
AI is powerful, both in its depth and in its accessibility—it’s inherently “dual-use.” There is a widespread call for some agreement on the limits to what people and companies can do with AI. Although what is considered ethical can be subjective, there are certainly cases that people agree are inappropriate or predatory uses of the technology, such as phone scams run by AI bots and deepfakes of politicians weaponized to manipulate public opinion. These are direct examples of use of a technology to cause harm, and they are, in general, the easy ones to regulate. The need for ethical guidelines also extends to governments; AI regulation should consider the potential for governmental abuse of power and extreme surveillance, such as indiscriminate use of facial recognition in public places to track or target individuals.
Regulating companies from using AI in harmful, unsafe, or unethical ways prevents a race to the bottom against their competition, so companies aren’t left deciding between unethical AI use or risking obsolescence. The industry can focus on ways of using AI ethically to help businesses and their users, whilst protecting both.
Accountability and transparency
Occasionally, AI systems won’t work quite as planned, even with the best intentions. Any groundbreaking technology can potentially have some unintended or unfortunate impacts on society, individuals, and the environment. For example, a threat actor can use prompt injection, a straightforward attack on an LLM, that could cause a chatbot to inadvertently share user’s data, even if the company was otherwise following the correct established data privacy procedures. Adding to this, model creators and the model deployers are different entities, and when things go astray, a clearly established system for assigning responsibility can lead to faster resolutions of issues and protections for consumers.
We also want to build AI systems that we can trust. To mitigate, and ideally prevent, discriminatory AI systems, regulation may explore mandating transparency around bias. This can involve requirements for providing explanations of AI decisions, disclosing the underlying algorithms, the data used to train the models, as well as declaration of known bias through model cards. Biases that appear in AI models are the result of already biased datasets, so we need to have access to what causes these problems in the first place.
Ideally, people would also be informed about the limitations and accidentally misleading AI products. LLMs frequently have hallucinations, where they output coherent but incorrect information. While model producers continue to reduce and eliminate these issues, they can also make clear to the public that the information that they receive may be false.
Data privacy
An important thing to acknowledge is that there are different ways personal data is used, and that there is a difference between profiling individuals and aggregate data anonymized for model training. Effective guidelines would be clear on the use of personal data for training AI models, particularly when it comes to sensitive topics. Both the EU and many US states already have some data protection laws in place, GDPR having set the stage for data privacy worldwide in 2016. These laws provide some foundation for data in AI usage, which includes requirements for data anonymization, consent mechanisms, and safeguards against unauthorized access or misuse of personal data.
However, these laws aren’t fully prepared to handle all new AI use cases. If people are able to consent to their data being used in model training, and they later revoke that consent, would the AI creators have to change their data set, causing a number of problems in the continuity of research and training? Realistically, specific data retention rules should apply for such cases. If a model released to the public was created with someone’s data, data privacy laws alone don’t tell us if someone has the right to later protest the use of that model. Precise rules would both protect the individual and allow companies to build clear and compliant systems.
Safety and security
AI holds tremendous promise in improving some vital societal necessities, like our health care systems. However, the application of AI in essential services may unintentionally introduce new vulnerabilities that could compromise safety and security. For example, inaccuracies in diagnostic algorithms could lead to incorrect treatment recommendations or delayed interventions. Running infrastructure like a power grid on AI systems has the potential to improve public services, but would lead to new risks including increased surfaces for AI attacks or failures from buggy software.
In the case of the government, the use of AI raises concerns about national security. Poorly governed AI systems can unintentionally expose sensitive data, resulting in privacy breaches of citizens and, at a minimum, legal consequences. In more serious cases, foreign governments obtaining information on a country’s citizens or military secrets could put people in danger.
There is also the need to address the concern of protecting consumers from misleading AI. Recent examples of under-regulated technology causing harm to consumers are still fresh. The quick rise in popularity and access to cryptocurrencies resulted in financial scams, rug pulls, and disappearing funds. Crypto has run further ahead than the government has been able to regulate it, and has resulted in many regular people losing lots of money. Although AI is an entirely different world, the potential for risk to consumers is also extremely high.
Landscape of regulation today
Now that AI is popular and visible in everyday life, governments are rushing to regulate and control what some see as a potential threat. However, AI has been a part of our lives for some time already. Social media ranking algorithms, for example, have been around for years and fall under the category of AI. We’ll outline some of the first attempts at government regulation in the West.
AI regulation in the United States
In his recent State of the Union address in March of 2024, President Joe Biden specifically called out banning AI voice impersonation, highlighting that putting limits on AI and technology is a top priority for US lawmakers. Despite this seemingly stern warning, the US has so far taken a more hands off approach.
In 2022, the White House released its AI Bill of Rights, which outlined five protections that all Americans should have in regards to AI. The bill of rights didn’t enforce any protections, but instead invited companies to voluntarily commit to following the principles.
A year later, Biden signed an executive order for managing AI risks, which “establishes new standards for AI safety and security.” There aren’t many specific rules for businesses yet, as it delegates to other departments to make more specific legislation. However it does contain some guidelines and restrictions for how AI is used in government, creating some accountability and ethical standards for government use. Ultimately, the sentiment in the US is that AI is good, and that a “wait and see” approach is preferred. We can expect more legislation to come, especially as the use of AI in election interference bubbles up as a threat in 2024.
AI regulation in the EU and the EU AI Act
The EU recently approved the EU AI Act, a law that outlines and governs AI use. The EU proposed stricter guidelines than those proposed in the US, and goes further in attempts to protect citizens in many of the areas we talked about above.
It restricts AI use based on a risk based framework, categorizing AI based on use case and methodology and applying restrictions. AI that is considered high-risk, covering use cases related to law enforcement, employment, and education, will have lots of oversight and mechanisms for citizens to report concerns. Applications with unacceptable risk, such as predicting criminal behavior based solely on profiling, creating facial recognition databases through untargeted scraping, and inferring emotions in workplace and educational settings, are outright banned. The rules also intend to let people know when they are dealing with AI systems.
Tech companies weigh in
Tech companies understand that regulation is inevitable and necessary, and we’ve seen major players weighing in and collaborating with governments to co-create solutions. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI were the first to agree to the 2022 US AI Bill of Rights, with eight more companies joining later. These companies, by following some self-regulation, are forming blueprints for what legal restrictions crafted by technologists could look like. OpenAI, for example, in following principles proposed by the AI Bill of Rights, established a red teaming network to improve the safety of their AI models. Companies and the creators of AI models are arguably some of the most qualified entities to understand the potential and risks of increasingly more powerful AI models, so they will likely be important contributors in the safety mechanisms and security that will be required for AI.
Problems with regulation
Despite the clear need for regulation, there are potential pitfalls for regulation that we need to avoid. The EU AI Act has already received criticism for some of them. Rushing to create limits ignores the established use of AI and can result in some unintended consequences.
Shutting down innovation
In attempts to limit AI’s reach, the prescriptive laws in the EU AI may limit innovations and constrain people’s access to services. Some of the enacted legislation in the EU AI Act establishes rules that will favor larger players and potentially shut out competition and new startups, such as lots of oversight and compliance procedures. Lots of red tape might make it not worth it for small players to even enter the space. With so much to learn and develop in AI, leaving the development in the hands of a few resource rich players would ultimately slow and limit the advancement we can make in AI.
Without further research into some of these technologies, prematurely banning them instead of exploring them is likely going to drive away innovators and technologists, ultimately hurting the European technical innovative landscape. This is also costly to consumers and the general public, as they will have more expensive AI solutions if they exist at all. A more nuanced approach would allow for innovation, and can be supported with attentive and cautious lawmaking.
Nascent Technology, Slow Governments
Software evolves quickly, while laws get made slowly and are updated even more slowly. One poignant example is the Electronic Communications Privacy Act, enacted in the US in 1986, that allowed the government access to wiretapping. Technology changed, but the law wasn’t updated appropriately. As a result, until 2017, the US government could access emails older than 180 days without a warrant, because it was considered “abandoned property”. Since AI is still quite new, we are far from fully understanding the ways we need to manage it. Premature or fear-based rules have the potential to be irrelevant at best. Governments and lawmakers should be paying attention to AI and keeping up with the trends, resisting the temptation to act too quickly on stories, hype, and hypothetical use cases.
Is there hope for worldwide collaboration?
Across the industry, businesses and model creators desire consistent international standards so it’s easier to do business across borders. With the disparate AI regulations already arising after the EU AI Act, companies already have to manage different rules across different countries and different US states. The slower pace of regulation in the US is starting to create a patchwork of local and state laws governing AI. Such scattered legislation can make it difficult to create AI products that can work across borders, and will likely result in consumers in some regions missing out on some services entirely.
There is some hope. The US and China, who regularly disagree on technology and policy issues, have agreed to meet to discuss the ethical use and development of AI. Governments that haven’t rushed into heavy regulation yet still have the opportunity to collaborate and develop legislation based on needs as they arise. Determining which restrictions make sense for a fair competitive landscape, are enforceable and protect individuals, and enable further research and development of technology will require continued collaboration from the private and public sectors.
The field of artificial intelligence is still relatively young, and there are still so many possibilities to improve humanity. Collaboration between governments, companies, and the public is our best hope at creating a future where we have transparent, ethical, and helpful AI.