America’s Federal Trade Commission has started looking into whether OpenAI’s ChatGPT is breaking consumer protection laws by causing reputational or privacy damage.

Claims to that effect were made last month in private civil litigation, when a radio host in the US state of Georgia sued OpenAI, alleging ChatGPT defamed him and damaged his reputation by falsely associating his name with a criminal matter.

In April, a mayor in Australia threatened a defamation lawsuit against OpenAI after ChatGPT supposedly accused him of involvement in a foreign bribery scandal. The mayor's lawyers reportedly gave OpenAI 28 days to fix its AI model. Since then, there has been no further word of litigation.

Amid these disputes, the FTC wants OpenAI to open up its code books. According to The Washington Post, the trade watchdog this week sent the machine-learning outfit a 20-page Civil Investigative Demand letter [PDF] seeking details about the company, its AI model marketing and training, model risk assessment, mitigations for privacy and prompt injection attacks, API and plugin integrations, and details about data collection.

The letter also requests numerous company documents, including contracts with partners since 2017, and internal communications about the potential of AI models to “produce inaccurate statements about individuals” and to “reveal personal information.”

OpenAI did not immediately respond to a request for comment. The FTC also declined to comment.

Testifying before a US House oversight committee on Wednesday, FTC boss Lina Khan outlined her agency’s priorities, which include policing AI software.

“As companies race to deploy and monetize artificial intelligence, the Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices,” Khan said in prepared remarks [PDF].

OpenAI has been sued over the past few months for allegedly violating the copyrights of authors, comedians, and programmers, based on claims its AI models were trained on, and reproduce, material protected under copyright. It's also a defendant, along with Microsoft, in a lawsuit alleging its AI models have violated people's privacy.

Microsoft has been injecting OpenAI’s models into all aspects of its software empire.

AI applications, specifically those made possible by large language models that drive text-based chat applications and text-to-image applications, have become the focus of intense enthusiasm over the past few years among technology companies.

With the commercial release of the OpenAI-powered coding helper Copilot from Microsoft’s GitHub last year and Redmond’s integration of ChatGPT/GPT-4 into its Bing search engine this year, much of the tech industry’s pent-up desire – to move past the stasis of a Google-monopolized web and to replace costly workers with clever bots – has been channeled into hopes for AI.

The giddiness may have to be dialed back until the legal uncertainty settles, but there’s a lot of money at stake and laws to be worked out.

Back in May, OpenAI CEO Sam Altman told the US Senate Judiciary Committee that AI should be regulated, though he appeared to mean that AI should not be regulated too much.

“OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits,” he said in prepared remarks.

Altman may find that having the FTC rummaging about isn’t what he had in mind. ®

Updated to add

In a post on Twitter, Sam Altman expressed disappointment in the way the FTC inquiry was made public while reiterating the need to address AI safety concerns.

“It is very disappointing to see the FTC’s request start with a leak and does not help build trust,” he said. “That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”
