MBW Explains is a series of analytical features in which we explore the context behind key music industry talking points – and suggest what might happen next.
WHAT HAPPENED?
The European Union has taken a big step toward becoming the first major jurisdiction with a comprehensive law governing the development of AI – and in the process has potentially set itself up for a fight with American tech companies.
The European Parliament, the EU’s legislative body, on Wednesday voted in favor of the AI Act, a set of new rules that would, among other things, impose restrictions on generative AI tools like ChatGPT.
The bill would also ban a number of practices enabled by AI, such as real-time facial recognition, predictive policing tools, and social rating systems, such as those used by China to give citizens scores based on their public behavior.
“(This is) the world’s first-ever horizontal AI legislation, which we believe will establish a true governance model for these technologies with the right balance between supporting innovation and protecting fundamental values,” said Brando Benifei, Member of the European Parliament (MEP) from Italy, as quoted by Politico.
Under the proposed EU law, the use of AI would be assessed according to the degree of risk involved.
For “high risk” uses – such as operating critical infrastructure like power and water, or use in the legal system, employment, border control, education, and the delivery of public services and government benefits – AI developers will need to perform risk assessments, in a process the New York Times compares to the rules for approving new drugs.
When it comes to everyday AI apps like ChatGPT, the law doesn’t automatically regulate their use, but it does require developers of “foundation models” – AI systems trained on huge amounts of data – to declare whether copyrighted material was used to train the AI.
However, as Time magazine notes, the regulation falls short of some activists’ expectations because it does not require AI developers to disclose whether personal information was used to train their AI models.
WHAT IS THE CONTEXT?
Since ChatGPT exploded onto the scene late last year, governments around the world have struggled to adjust to the reality that pervasive artificial intelligence technology is no longer just around the corner – it’s here, and in the hands of businesses and consumers everywhere.
However, while some governments, like the US, are essentially starting from scratch on AI legislation, the EU has been working on the issue for over two years at this point.
But that doesn’t make the EU the first to issue AI regulation. In April, China’s Cyberspace Administration released its second set of rules guiding the development and use of AI.
According to the first set of rules, any AI-generated content must be clearly labeled, and if someone’s image or voice is used, the AI user must obtain prior permission.
The second set of rules would require tech companies to submit safety assessments of their AI technologies to a “national network information department” before their AI services can be offered to consumers. The rules also create a mechanism for consumer complaints about AI.
In this context, the United States – where much of the generative AI technology is being developed – appears to be lagging behind. According to the Washington Post, lawmakers are only beginning to work on the issue and are not expected to start discussing specific legislation until the fall.
In the meantime, the US executive branch has taken a few steps forward, with the Biden administration releasing some ideas for an “AI Bill of Rights”, and the United States Copyright Office launching an initiative to examine the copyright implications of AI.
While it is likely that AI regulations in different countries will see some convergence as they develop, one exception appears to be Japan, which hopes to become a major player in AI by taking a more lax approach to regulating the field.
At a public hearing in late April, Japan’s Minister of Education, Culture, Sports, Science and Technology, Keiko Nagaoka, said that, in the government’s view, Japan’s copyright laws do not prohibit training AI on copyrighted material.
This is a sign that Japan may be using some game theory principles to attract companies developing AI. Giving AI developers more latitude than they would have in the United States or Europe could entice them to locate in Japan.
WHAT HAPPENS NOW?
The proposed EU AI law will now move to the ‘trilogue’ stage of EU lawmaking, in which officials from the European Parliament will negotiate a final form of the law with the European Commission, which represents the executive branch, and the European Council, which represents individual EU member states.
This process will need to be completed by January if the law is to come into force before next year’s round of European parliamentary elections. In the meantime, the bill is likely to rally both supporters and opponents.
Among the likely supporters are recorded music companies, some of which have recently raised concerns about AI models training on copyrighted tracks in order to learn to create music.
They will likely support the part of the EU legislation that requires AI developers to disclose the use of copyrighted material when training AI models. However, the rule only requires disclosure – it does not outright prohibit the use of copyrighted material for training. That means some rights holders could push for tighter restrictions on AI development in the future.
“(This is) the first-ever horizontal AI legislation in the world, which we believe will establish a true governance model for these technologies with the right balance between supporting innovation and protection of fundamental values.”
Brando Benifei, MEP
But that same rule could put the EU on a path to conflict with some AI developers. Sam Altman, CEO of ChatGPT maker OpenAI, warned last month that his company could pull out of Europe if the proposed law proves too strict. However, he walked back those comments a few days later.
Still, it’s no secret that large language models – the foundational technology behind AI applications – train on large volumes of material, and it could be difficult for developers to separate copyrighted from non-copyrighted source material.
Besides rights holders and tech companies, other stakeholders will want a say in the legislation before it is passed. As Time reports, the European Council is expected to defend the interests of law enforcement agencies, which want an exemption from the AI Act’s risk assessment rules for their own uses of AI technology.
A LAST THOUGHT…
The new EU rules have sparked much discussion about Europe’s emerging role as a global leader in the development of digital policy.
The vote on the AI law “solidifies Europe’s position as the de facto global technology regulator, setting rules that influence technology policy-making around the world and standards that are likely to ripple across all consumers,” the Washington Post said.
“This moment is extremely important,” Access Now senior policy analyst Daniel Leufer told Time. “What the European Union says constitutes an unacceptable risk to human rights will be seen as a model around the world.”
This reputation for being at the forefront of digital law really began with the EU’s General Data Protection Regulation (GDPR), a set of rules designed to protect people’s privacy online, which entered into force in 2018. Although it only protects EU citizens, in the borderless online world it has effectively forced companies and organizations around the globe to adapt their privacy policies to EU law – and most have.
However, AI regulation is uncharted territory, and some in the tech industry fear that the EU is over-regulating the sector, which could push AI companies out of Europe and into jurisdictions with more lax rules – such as Japan, and possibly, as it may turn out, the United States.
“What worries me is how (the law is) constructed,” Robin Röhm, co-founder and CEO of Berlin-based artificial intelligence startup Apheris, said in a recent interview. “It will put a lot of unnecessary bureaucracy on companies that innovate quickly.”
Piotr Mieczkowski, Managing Director of Digital Poland, put it this way: “Startups will go to the United States, they will grow in the United States, and then they will come back to Europe as developed companies – unicorns – that can afford to pay lawyers and lobbyists… Our European businesses will not flourish, because nobody will have enough money to hire enough lawyers.”
If the AI law does indeed cause Europe to lag behind in the development of AI and other advanced digital technologies, that reputation for global rule-making could fall by the wayside.
But in the meantime, stakeholders seeking to influence the development of AI law may want to book a flight not to Washington, but to Brussels.