Happy new “#AI” year! This 25th issue of our TMT Bites marks the first anniversary of the AI Act proposal, published on April 21, 2021 by the European Commission. We will thus focus on the legal, ethical and economic implications of AI that regulators, companies and the public at large should carefully consider.
If you have any questions, ask our team or send an e-mail to tmtbites.italy@dentons.com.
This time last year, the European Union took a decisive first step toward regulating lawful, safe and trustworthy artificial intelligence technologies by publishing the so-called AI Act—officially known as the “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence”. The AI Act represents a crucial opportunity for the European Union to ensure that AI systems operating in the single market respect the EU’s fundamental rights and values. Some commentators anticipated that the AI Act might once again make the European Union a standard-setter on the international scene, much as it did with the General Data Protection Regulation (“GDPR”).
There are, in fact, a number of common features between the AI Act and the GDPR.
On March 2, 2022, the European Union Intellectual Property Office (“EUIPO”) published its “Study on the impact of artificial intelligence on the infringement and enforcement of copyright and designs”, describing current and future connections between IP law and artificial intelligence and the implications of this technological transformation for the IP system.
In 2021, Collins Dictionary named “NFT”, the abbreviation for non-fungible token, its word of the year. In 2017, the US-based Association of National Advertisers named AI (artificial intelligence) its marketing word of the year. Given the recent rise of both technologies, one may wonder whether they can operate together. In particular, the main question for IP rights holders and practitioners currently reads: Does AI represent a hurdle or rather a beneficial tool for IP protection in the realm of NFTs?
Earlier this year, the Intellectual Property Owners Association (“IPO”) released its White Paper on Best Practices for Protecting Inventions Relating to Artificial Intelligence (“White Paper”). The extensive guidance discusses the patenting process concerning AI in the United States as well as in other jurisdictions and aims to help practitioners and applicants make sound decisions.
The deployment of AI-enabled technologies has become a key focus in many societies and an important driver for the growth and development of high-skill, technology-focused economies. AI has the potential to transform business models and operations, jobs and the delivery of public services. If developed correctly, AI can lead directly to higher productivity and economic growth.
Despite the huge benefits that AI presents, there are many challenges that need to be addressed, and governments around the world are each making their own policy decisions as they grapple with the same issues.
Having reviewed both the opportunities and the legal and ethical risks related to the use of artificial intelligence, what’s next? To maximize the value that AI can bring to your organization, while avoiding the potential pitfalls, we strongly recommend developing a robust AI strategy for your business.
Our survey found that while 60% of respondents were using AI in some form, just under a fifth have such a strategy, while another 23% are currently formulating one. This lack of strategic focus can mean that AI is being implemented without due consideration of the risks, the relevant legislation or the internal controls required to ensure it is well-governed.
The European Commission’s “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence” (“Draft AI Regulation”) is far from being final and even further from taking effect. Indeed, it was published on April 21, 2021, remained open to consultation until August 6, 2021, and is subject to the ordinary legislative procedure, which typically takes no less than two years to conclude.
AI continues to assume greater importance in the framework of contracting and contract review. There are already products on the market that use AI-driven technologies to review contract terms. Such tools can be used to review large numbers of contracts in a fraction of the time that it would take lawyers to review each one manually.
The issue of data privacy is often among the first risks that come to mind in relation to artificial intelligence, since developing, testing and implementing AI technologies often involve the processing of personal data. According to Dentons’ AI survey, 81% of respondents cited personal data protection as a significant concern.
One of the most important questions in the field of AI is whether the traditional types of intellectual property (“IP”) protection are sufficient or even suitable to protect an AI and its byproducts.
This debate stems especially from the traditional view that IP protection has a human-centered nature. This humanistic view was echoed in Dentons’ AI survey: 58% believed that the user of the AI system should own the IP rights, while 20% believed the rights should go to the inventor of the AI system. Only 4% believed the AI system itself should hold the rights.
One of the legal issues that is always on the table when discussing AI is liability. It raises a number of concerns among regulators, businesses and customers alike.
What happens if an autonomous car crashes? Who is to blame? What kind of damages can be claimed? If a healthcare worker follows the recommendation of an AI-based tool to treat a patient, who would bear liability for any treatment injury?
Going back to the well-known driverless car dilemma, let’s take it a step further. Picture a driverless car equipped with the latest artificial intelligence driving down the road at considerable speed when a pedestrian absent-mindedly steps into its path. The car calculates that it can avoid the pedestrian and spare their life, but only by swerving onto the pavement and into another pedestrian.
The question of whether artificial intelligence (AI) systems, when acting independently of humans, can be considered inventors is not new. As these systems grow more sophisticated, the possibility that they can “create” in an increasingly broad sense (one that includes software) quickly comes to mind. Since software is protected by copyright in most legal systems, only a standard of originality is required for its creation. The question thus arises: If an AI system—acting as an extension of its original creator (a natural person)—creates new software, should the copyright in this program be automatically attributed to the creator of the AI?
One of the most significant outcomes of the so-called "Fourth Industrial Revolution" is the development of highly sophisticated artificial intelligence (“AI”) technologies. The scale, scope, and complexity of tasks that can be performed by AI are perceived to be endless. In fact, many human activities are now being carried out by or with the help of AI. One of those activities is the creation of artistic works, including music, painting, literature, and others. Traditionally, only humans could be considered to have created such works, especially in a legal sense, including under copyright law. But now, as AI can contribute significantly to the process of creating such works, oftentimes much more than humans themselves, the question arises as to whether copyright may be granted for AI-generated works and, if so, who is entitled to hold it. The purpose of this article is to discuss this issue primarily from a Korean law perspective. But as much of the discussion concerns copyright issues regarding AI generally, it may be applicable to other jurisdictions as well.
There is no single agreed definition of AI. The Organisation for Economic Co-operation and Development ("OECD") defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions […] designed to operate with varying levels of autonomy” while the World Intellectual Property Organisation ("WIPO") defines AI as including "machines and systems that can carry out tasks considered to require human intelligence, with limited or no human intervention … [and] techniques and applications programmed to perform individual tasks.”
In August 2021, the Czech Supreme Court issued a ruling, in which it found a provider of a file-sharing service liable for infringement of Czech laws against unfair competition. The decision takes a somewhat unorthodox approach to unfair competition, as it recognizes that particular business models benefitting from “free riding” may in themselves constitute an unlawful practice. For a European audience, it may also be interesting to learn how the court disapplied the safe harbor liability exception for hosting services and how the CJEU’s case law influenced the judgment.