From fake images of war to celebrity hoaxes, artificial intelligence technology has spawned new forms of reality-warping misinformation online, and new analysis shows just how quickly the problem has grown. The research, co-authored by researchers from Google, Duke University and several fact-checking and media organizations, was published as a preprint last week.
The paper introduces a massive new dataset of misinformation, going back to 1995, that was fact-checked by websites like Snopes. According to the researchers, the data reveals that AI-generated images have quickly risen in prominence, becoming nearly as popular as more traditional forms of manipulation. The work was first reported by 404 Media after being spotted by the Faked Up newsletter, and it clearly shows that "AI-generated images made up a minute proportion of content manipulations overall until early last year," the researchers wrote.
Last year saw the release of new AI image-generation tools by major players in tech, including OpenAI, Microsoft and Google itself. Now, AI-generated misinformation is "nearly as common as text and general content manipulations," the paper said. The researchers note that the uptick in fact-checking of AI images coincided with a general wave of AI hype, which may have led websites to focus more attention on the technology.
