US Government Acts to Shut Down AI-Generated Pornographic Image Market

TapTechNews reported on May 25 that, according to the Associated Press, the US government is calling on the technology industry and financial institutions to shut down the increasingly rampant market for AI-generated pornographic images.

New generative AI tools reportedly make it easy to turn a person's likeness into a sexually explicit, AI face-swapped video, and such realistic deepfake images spread easily across the web. Victims, whether celebrities or children, have little means of stopping it.

Through a series of specific actions, the White House is calling on companies to cooperate voluntarily in curbing the creation and distribution of, and profiting from, AI images made without the subjects' consent, with a particular focus on sexual content involving children.

"When generative AI emerged, everyone was guessing where the first true harm would show up. I think we have the answer," said Arati Prabhakar, Biden's chief science adviser and director of the White House Office of Science and Technology Policy.

She told the Associated Press that AI tools have driven a sharp rise in the nonconsensual sharing of images, targeting women and girls in particular, and that such content can upend their lives. "We see this kind of problem accelerating because generative AI is developing rapidly. The fastest solution is to get these companies to step up and take responsibility," she said.

A document obtained by Associated Press reporters shows that the US government is calling not only on AI developers to act, but also on payment processors, financial institutions, cloud computing providers, search engines, and internet gatekeepers such as Apple and Google to step in and tighten control over the content in their app stores.

The White House argues that these commercial companies should do more to cut off the monetization of such abuse and restrict payment channels, particularly for websites that post explicit images of minors.

Prabhakar noted that many payment platforms and financial institutions have already stated that they will not provide services to companies that distribute such images. She stressed, however, that some have not strictly enforced these policies, and that terms of service meant to help victims quickly remove infringing content are, in practice, effectively nonexistent.

The Associated Press noted that perhaps the best-known victim of pornographic deepfakes is Taylor Swift. In January this year, a flood of AI-generated pornographic images of the singer circulated online, prompting her fans to fight back. Some of the forged images were later traced to Microsoft's AI visual design tool, and Microsoft pledged to strengthen its safeguards. In the US and elsewhere, many students are also struggling with deepfake nude photos, some created as a form of school bullying and others out of students' curiosity.

The US government signed an executive order last October intended to steer the healthy development of artificial intelligence so that major technology companies can profit without endangering public safety. Biden has also said that the government's AI safeguards need to be backed by legislation, and relevant groups are currently urging Congress to invest at least 32 billion US dollars (TapTechNews note: currently about 232 billion yuan) over the next three years to develop AI and fund efforts to control and regulate its capabilities.

Jennifer Klein, director of the White House Gender Policy Council, said: "Encouraging companies to get involved and make voluntary commitments does not change the fundamental need for Congress to take action here."

The Stanford Internet Observatory reported last December that it had found thousands of suspected child sexual abuse images in LAION, a massive artificial intelligence training dataset.

Earlier this month, federal prosecutors filed charges against a man in Wisconsin who allegedly used the AI image generator Stable Diffusion to create thousands of images of minors engaged in sexual acts.

In response, Stability AI, the company behind Stable Diffusion, said this week that it had not approved the release of the model the Wisconsin man used, but acknowledged that the open-source model is a Pandora's box that cannot be closed, since its technical components have already been published in full on the web.

Prabhakar said the harm is not limited to open-source AI technology. "This is a broader problem. Unfortunately, a lot of people seem to be using these kinds of AI image generators. We have just seen explosive growth in that area, and I don't think open-source systems and proprietary systems can be neatly separated," she said.
