OpenAI Launches Safety-Featured DALL·E 3 in ChatGPT. OpenAI announced the launch of DALL·E 3, a generative-AI image creator able to “create unique images from a simple conversation”. Currently, the feature is available only to ‘Plus’ and ‘Enterprise’ users, not in the free version of ChatGPT.
According to OpenAI, DALL·E 3 “is particularly good in responding to extensive, detailed prompts”, creating images that are “more visually striking” and “also crisper in detail”.
Interestingly, OpenAI reports using a safety system to restrict the creation of harmful content, including violent, adult, or hateful images. According to OpenAI, user feedback helped to “identify edge cases for graphic content generation, such as sexual imagery, and stress test the model’s ability to generate convincingly misleading images”.
Moreover, OpenAI states that it took measures to reduce the likelihood of generating images in the style of living artists and images of public figures. OpenAI also worked to “improve demographic representation across generated images”.
When asked to generate an image with the logos of known Internet platforms, for example, DALL·E 3 responds, “I’m sorry, but I cannot create direct combinations of copyrighted logos… However, I can create an original design inspired by the concepts of social media…” DALL·E 3 also refuses to create artwork inspired by the style of known artists, such as Roy Fox Lichtenstein.
Click here to read OpenAI’s blog post regarding DALL·E 3.
Click here to read the DALL·E 3 System Card.
UK’s Competition and Markets Authority Weighs in on AI and Competition. Foundation models, such as OpenAI’s GPT and models from other tech giants, have gained prominence due to their vast capabilities. They have also raised issues surrounding competition, ethics, and the societal implications of AI. The UK’s Competition and Markets Authority (CMA) recently issued a report on competition in the AI industry, underscoring the importance of equitable access to resources, transparent licensing, and avoiding monopolization of the foundation model market.
The CMA report suggests that a competitive market should offer diverse methods for integrating foundation models and support seamless data transfer and interoperability. The CMA’s report discourages anti-competitive behavior and advocates for regulations that address hurdles faced by newcomers. To tackle these concerns, the CMA recommends principles that mandate developer accountability, promote both open and closed-source models, and offer broad education about these models’ risks and limitations to consumers and businesses.
Click here to read a Short Version of the CMA report.
Study Illustrates Privacy Concerns in AI. A study conducted by a computer science professor at ETH Zurich evaluated language models developed by OpenAI, Google, Meta, and Anthropic. According to the study, these models can deduce a significant amount of personal information about users, including ethnicity, location, and profession, from seemingly harmless conversations. This stems from the models being trained on web content, which often contains personal details. Subtle linguistic patterns allow these models to ascertain specific details about users with remarkable precision, sometimes achieving an accuracy of 85%-95%. The study’s findings underscore potential privacy breaches. In response to these concerns, both OpenAI and Anthropic assert they have implemented measures to protect user data. However, Google and Meta have yet to publicly address the issue.
Click here to read about the study illustrating how AI models can deduce a user’s personal information.
New Lawsuits Against Operators of AI. Copyright holders are continuing to push back against AI tools. Mike Huckabee, former Arkansas governor, and a group of religious authors filed a class action suit against Meta, Microsoft, and financial data provider Bloomberg L.P., alleging that the tech giants infringed their copyrights by using the plaintiffs’ books to train artificial intelligence tools. According to the lawsuit, the training data included a pirated book collection, depriving the authors and their publishers of rightful compensation for their contributions.
In another matter, Jobiak LLC, an AI-based recruitment platform, filed a lawsuit against Botmakers LLC (d/b/a Tarta.ai), accusing Tarta.ai of data scraping and of unlawfully integrating content from Jobiak’s proprietary database into its own job listings. The lawsuit alleges unfair competition and violations of the Computer Fraud and Abuse Act, the California Comprehensive Computer Data Access and Fraud Act, and the Digital Millennium Copyright Act (DMCA). As evidence of the alleged infringement, Jobiak points to Tarta.ai’s use of similar layout designs and of specific keywords placed at the end of job descriptions. Notably, these descriptions also contain “dummy” keywords, unrelated to the job but strategically inserted by Jobiak, which indicates to Jobiak that Tarta.ai intentionally and without authorization “scraped” its data.
As a result, Jobiak asserts it has suffered significant financial setbacks, a tarnished reputation, and a diminished market presence. In addition to monetary compensation, including statutory damages of $150,000 for each violation, Jobiak seeks a restraining order against Tarta.ai.
In a case involving the major record label Universal Music Group, music publishers have taken legal action against Anthropic PBC, an AI company. The plaintiffs argue that Anthropic infringed the copyrights of song lyrics they represent, specifically through its advanced AI system “Claude,” touted as a state-of-the-art AI assistant.
The lawsuit emphasizes that Anthropic not only distributed copyrighted lyrics without the required licenses but also used them to train its language models, often omitting crucial copyright management information in the process.
Click here to read the complaint against Meta, Microsoft, and Bloomberg Over AI Copyrights.
Click here to read the complaint in JOBIAK LLC v. BOTMAKERS LLC.
Click here to read the complaint against ANTHROPIC PBC.
*Image generated by DALL·E 3