Written by: Haim Ravia, Dotan Hammer
Legislatures around the world continue to advance laws and bills relating to Artificial Intelligence.
A new Arkansas Act directly addresses the ownership of model training data and of content generated by generative Artificial Intelligence (AI) tools. It stipulates that the person providing the input or directives owns the generated content, provided it does not infringe existing copyrights or other intellectual property (IP) rights. Similarly, the individual who provides data to train a generative AI model owns the resulting trained model, provided the training data was lawfully acquired and ownership rights were not previously transferred by contract. A significant “work made for hire” exception states that if an employee uses an AI tool as part of their job duties under the employer’s direction, the employer owns the resulting training data and AI-generated content. Crucially, the Act explicitly clarifies that it does not grant ownership over content that infringes pre-existing IP rights.
In New York, the State Senate passed the Responsible AI Safety and Education Act (the “RAISE Act”); the legislation is now pending Assembly approval and the Governor’s signature. The bill focuses on establishing transparency and safety requirements for “frontier models.” A “frontier model” is defined by its substantial compute cost (exceeding $100 million), and a “large developer” is a person who has trained such a model. The bill thus aims to capture large AI platforms such as OpenAI, Google, and Anthropic.
Before deploying a frontier model, large developers would be required to implement a written “safety and security protocol,” which must be conspicuously published (with appropriate redactions) and shared with the Attorney General and the Division of Homeland Security. The bill prohibits the deployment of a frontier model if it creates an “unreasonable risk of critical harm,” defined as severe consequences such as death or serious injury to 100 or more people, or at least $1 billion in property damage. Large developers must conduct annual reviews of their safety protocols and disclose any “safety incident” (e.g., critical harm or theft of model weights) to the Attorney General within 72 hours of discovery. Violations can lead to civil penalties of up to $10 million for a first offense and $30 million for subsequent violations, though the bill does not establish a private right of action.
In Japan, a Bill on the Promotion of Research and Development and Utilization of AI-related Technologies aims to comprehensively and systematically promote the research, development, and utilization of AI-related technologies, with the goal of improving national life and supporting the sound development of the national economy.
It establishes basic principles, emphasizing AI as a foundational technology for economic development and national security. A key principle is the need for “proper implementation” to prevent negative outcomes, specifically criminal use, personal information leakage, and copyright infringement. The bill outlines the responsibilities of the national government, local entities, R&D institutions, and businesses to foster AI advancement. It proposes measures like promoting R&D across stages, facilitating shared use of large-scale facilities and datasets, developing guidelines aligned with international norms, and conducting research on rights infringement cases stemming from improper AI use.
The bill also mandates an “AI Basic Plan” and establishes an “AI Strategy Headquarters,” led by the Prime Minister, to coordinate national AI policy.
Click here to read Arkansas’ Act Regarding the Ownership of Model Training and Content Generated by a Generative Artificial Intelligence Tool.
Click here to read New York’s Responsible AI Safety and Education Act.
Click here to read Japan’s Fundamental Law on Promoting Research, Development, and Use of AI-related Technologies (in Japanese).