Friday, September 13, 2024
Americans have grown increasingly concerned about generative artificial intelligence and the potential impact this technology can have on individuals’ privacy as well as artists' creative works, according to a poll conducted by Global Strategy Group.
The findings were released by the Human Artistry Campaign, a global coalition dedicated to the advancement of responsible AI.
The poll, which surveyed registered U.S. voters, found that 84 percent of respondents consider deepfakes and voice clones to be a problem. These concerns apparently cross partisan lines, as 85 percent of Democrats and 82 percent of Republicans agreed with the statement.
Americans were asked: How big of a problem are deepfakes and voice cloning?
Additionally, 89 percent think AI developers should always receive consent before using a person's voice or image to make a deepfake, and 90 percent believe they should get permission to use that person’s voice or image to train an AI model. And 84 percent of respondents think an artist's music or vocals should never be used by AI without prior authorization.
Overall, more than half (55 percent) said they would be less likely to listen to music created by an AI model that had been trained on an artist's music without that artist's permission, and 89 percent believe artists should have the right to take legal action against companies that use their work for AI purposes.
Of those polled, 60 percent said they would be more likely to accept AI-generated music if it was unique and didn’t attempt to copy or imitate an existing artist’s music.
Regardless of political affiliation, respondents widely agree that transparency and labeling are needed when it comes to generative AI technology, with 89 percent supporting a basic requirement that developers and platforms label AI-generated content. An additional 89 percent said they would support new legislation that protects people from unauthorized AI-developed deepfakes and voice clones, and 87 percent said they would support similar policies to protect artists' voices and images.
In late July, Senators Chris Coons (D-Conn.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.) and Thom Tillis (R-N.C.) introduced the “NO FAKES Act.” That proposed legislation would protect individuals from unauthorized computer-generated recreations of their voice and visual likeness by AI and other technologies. A companion bill was introduced in the House by Reps. Adam Schiff (D-Calif.), María Elvira Salazar (R-Fla.), Madeleine Dean (D-Pa.), Nathaniel Moran (R-Texas), Rob Wittman (R-Va.), and Joe Morelle (D-N.Y.).
Finally, the survey reported that 85 percent of respondents believe AI developers should maintain a public database of all content they use to train their models.
Global Strategy Group’s “Human Artistry Campaign AI Perception Survey” polled more than 800 registered voters nationwide in late July.