Long gone are the days of the hawk-eyed image sleuth. 🔍 We are at a point where it is, at best, next to impossible for the naked eye to detect AI-generated visual content. Worse yet, it takes little time or skill to produce and mass-disseminate hyper-realistic images, video, sound and other forms of AI-generated content intended to spread mis- and disinformation.
But there are ways to detect if text or images were AI generated. Keep reading (and watching!) to find out how.
Find tons more helpful tips below!
Often called "deepfakes," AI-generated (as well as manually altered) videos have been a growing part of the mis-/dis-/malinformation landscape for a number of years. Rapid advances in technology - and more widespread access to that technology - have accelerated the spread of these videos.
Manual, human-altered videos can involve changes as subtle as shading or coloration (to, say, give the appearance of lighter or darker skin) or altering the speed of a brief portion of a video (e.g., speeding up a physical assault to make a punch or slap seem as if it were delivered more swiftly, or slowing down someone's speech to make it sound slurred). They can also involve changes as overt as adding, replacing (i.e., "face-swapping") or removing an object or person.
Video script voiceovers can be another giveaway. Some AI-generated videos use copied-and-pasted footage (a copyright violation if done without permission) with an AI-generated voiceover, which often has a 'robotic' sound. Other subtle but telltale giveaways are few if any comments from viewers and an unrealistically high volume and frequency of output.
AI-generated deepfakes take this kind of content creation to an entirely new level, creating even greater opportunity to generate fake news via misleading videos. Keep reading and watching to learn more about how to spot a deepfake.
Although many have said that AI-generated text has a somewhat "vanilla" tone to it, this observation isn't always helpful when trying to detect such text. Current and future advances in generative AI will likely make unaided attempts at detection inadequate. But it is still possible to do a reasonable job of due diligence - check out some of the tips, advice and resources here.
In the meantime, here are some articles from outside the world of academe that you may find helpful. (This section is a work in progress.)
Note: This section was written/generated by ChatGPT, based on the article "Rise of the Newsbots: AI-Generated News Websites Proliferating Online" as it appeared in NewsGuard on May 1, 2023
There are several ways to detect AI-generated news content:
Bland Language and Repetitive Phrases: AI-generated content often has a noticeable lack of sophistication in the language. The articles can contain bland language and repetitive phrases, which may appear as a telltale sign of AI involvement.
Absence of Human Oversight: In many cases, AI-generated articles don't have any bylines or are credited to generic accounts like “Admin” or “Editor”. Other times, they may use fake author profiles, which can be detected via reverse image searches.
Algorithmically Generated Pages: Certain pages like About Us or Privacy Policy pages may appear to be algorithmically produced and not fully completed, offering clues about the nature of the site's content creation process.
Error Messages: AI-generated texts often include error messages that are uncommon in human-written pieces. Examples include phrases like “my cutoff date in September 2021,” “as an AI language model,” and “I cannot complete this prompt.”
Fabricated Information: AI-generated content may include fabricated information or “hallucinations” that could be identified as unusual or unlikely in human-written text.
AI Text Classifiers: Tools like GPTZero.com can be used to check if a text was likely written by an AI.
Remember, AI-generated content can closely resemble human-written text, so it can be challenging to identify. However, the strategies above can help in distinguishing between human-written and AI-generated news.
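The "error messages" strategy above lends itself to simple automation: scanning an article for known chatbot boilerplate. Here is a minimal sketch in Python using the example phrases quoted above; note that the phrase list and the `find_telltale_phrases` helper are illustrative assumptions, and any real screening tool would need a much larger, regularly updated list.

```python
# Illustrative phrase list drawn from the examples above; a real
# screening tool would need a larger, regularly updated list.
AI_TELLTALE_PHRASES = [
    "as an ai language model",
    "my cutoff date in september 2021",
    "i cannot complete this prompt",
]

def find_telltale_phrases(text: str) -> list[str]:
    """Return any known AI boilerplate phrases found in the text."""
    lowered = text.lower()
    return [p for p in AI_TELLTALE_PHRASES if p in lowered]

article = "Sorry, as an AI language model, I cannot complete this prompt."
print(find_telltale_phrases(article))
# → ['as an ai language model', 'i cannot complete this prompt']
```

A hit from a checker like this is strong evidence of AI involvement, but an empty result proves nothing - edited or higher-quality AI text will not contain these leftovers.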
Being AI Literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles.
Sandy Hervieux and Amanda Wheatley of McGill University created a tool you can use when reading about AI applications to help consider the legitimacy of the technology.
Reliability
Objective
Bias
Ownership
Type
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
See the original guide here: https://libraryguides.mcgill.ca/ai/literacy
Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test