
Misinformation and Media Literacy: How To Spot AI-Generated Content

There's soooo much AI-generated content now 🤯

Long gone are the days of the hawk-eyed image sleuth. 🔍 We are at a point where it is, at best, next to impossible for the naked eye to detect AI-generated visual content. Worse yet, it takes little time or skill to produce and mass-disseminate hyper-realistic images, video, sound and other forms of AI-generated content intended to spread mis- and disinformation.

But there are ways to detect whether text or images were AI-generated. Keep reading (and watching!) to find out how.

How to spot AI-generated...

  • Do a Reverse-Image Search
    Use Google Reverse Image Search or TinEye to learn if that image has any history on the internet and what that history is. 

     
  • Use this "Content Credentials" tool
    Examining the Content Credentials of an image can be helpful for validating its origin. Head to the Content Credentials website to get started. If the image's metadata lacks certain details, the website will compare it with similar images online. It can also tell whether those images were generated using AI.

     
  • Read the comments
    OK, this isn't always fun but if you're seeing the image on social media and you're feelin' brave, dive on into the comments section and see what others are saying about it. This can sometimes provide clues about the image's origin.

     
  • Look for a watermark
Some AI image generators, such as DALL-E 2, will place a watermark on their images. They're not always easy to spot, but look in the corners of the image.

     
  • Look for blurry or distorted features
Sadly, this tip will not be useful for much longer, but when viewing images of people, be on the lookout for atypical body features:
    • features that seem out-of-proportion, distorted, misaligned or asymmetrical 
    • missing/extra limbs
    • blurry or halo-like outline around the person
       
  • Garbled or nonsensical background
    This tip also has a limited shelf life, but... Is there text in the background? Does it look garbled, or like a completely foreign language? Do patches of the background seem oddly textured or blended in places? This could indicate the image is AI-generated.
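The reverse-image-search tip above can also be scripted if you have a batch of image URLs to check. Both Google Lens and TinEye accept a publicly hosted image URL as a query parameter; the exact endpoint paths below are an assumption based on their public search pages, not a documented API. A minimal Python sketch:

```python
from urllib.parse import quote

def reverse_image_search_urls(image_url: str) -> dict:
    """Build reverse-image-search links for a publicly hosted image.

    The endpoint paths are assumptions based on the services'
    public search pages, not an official API.
    """
    encoded = quote(image_url, safe="")
    return {
        # Google Lens accepts an image URL via its uploadbyurl route
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        # TinEye accepts a url query parameter on its search page
        "tineye": f"https://tineye.com/search?url={encoded}",
    }

# Generate links you can open in a browser
links = reverse_image_search_urls("https://example.com/photo.jpg")
for name, url in links.items():
    print(name, url)
```

Opening either link shows you whether the image has a history on the internet — an older, unaltered original is a strong sign the version you saw was modified.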

Find tons more helpful tips below!

Often called "deepfakes," AI-generated (as well as manually altered) videos have been a growing part of the mis-/dis-/malinformation landscape for a number of years. Rapid advances in technology - and more widespread access to such technology - have accelerated the spread of these videos.

Manually altered videos can include changes as subtle as shading or coloration (to, say, give the appearance of lighter or darker skin); altering the speed of a brief portion of a video (e.g. speeding up a physical assault to make a punch or slap seem as if it was delivered more swiftly, or slowing down someone's speech to make it seem slurred); or something as overt as adding, replacing (i.e. "face-swapping"), or removing an object or person.

Video script voiceovers can be another giveaway. Some AI-generated videos use copied-and-pasted footage (a copyright violation if done without permission) with an AI-generated voiceover, which often has a 'robotic' sound. Other subtle but telltale giveaways are few if any comments from viewers and an unrealistically high volume and frequency of output.

AI-generated deepfakes take this kind of content creation to an entirely new level, creating even greater opportunity to generate fake news via misleading videos. Keep reading and watching to learn more about how to spot a deepfake.  

Although many have said that AI-generated text has sort of a "vanilla" tone to it, this isn't always helpful when trying to detect such text. Current and future advances in generative AI will likely make unaided attempts at detection inadequate. But it is possible to do a reasonable job of due diligence - check out some of the tips, advice and resources here.

In the meantime, here are some articles from outside the world of academe that you may find helpful. (This section is a WIP.) 

How to Recognize an AI-generated Cookbook | Lifehacker

Note: This section was written/generated by ChatGPT, based on the article "Rise of the Newsbots: AI-Generated News Websites Proliferating Online" as it appeared in NewsGuard on May 1, 2023


There are several ways to detect AI-generated news content:

Bland Language and Repetitive Phrases: AI-generated content often has a noticeable lack of sophistication in the language. The articles can contain bland language and repetitive phrases, which may appear as a telltale sign of AI involvement.


Absence of Human Oversight: In many cases, AI-generated articles don't have any bylines or are credited to generic accounts like “Admin” or “Editor”. Other times, they may use fake author profiles, which can be detected via reverse image searches.


Algorithmically Generated Pages: Certain pages like About Us or Privacy Policy pages may appear to be algorithmically produced and not fully completed, offering clues about the nature of the site's content creation process.


Error Messages: AI-generated texts often include error messages that are uncommon in human-written pieces. Examples include phrases like “my cutoff date in September 2021,” “as an AI language model,” and “I cannot complete this prompt.”


Fabricated Information: AI-generated content may include fabricated information or “hallucinations” that could be identified as unusual or unlikely in human-written text.


AI Text Classifiers: Tools like GPTZero.com can be used to check if a text was likely written by an AI.

Remember, AI-generated content can closely hew to human text, so it can be challenging to identify. However, the strategies above can be helpful in distinguishing between human and AI-generated news.
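A couple of the strategies above - leftover error phrases and repetitive wording - are simple enough to check mechanically. Below is a naive Python sketch; the phrase list comes straight from the examples in this guide, and the repetition check just counts heavily repeated three-word sequences. Treat hits as prompts for closer reading, not as proof either way.

```python
import re
from collections import Counter

# Telltale leftover phrases quoted in the strategies above; illustrative only.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "cutoff date in september 2021",
]

def flag_ai_tells(text: str, min_repeats: int = 3) -> dict:
    """Naive heuristic: flag leftover AI error phrases and heavily
    repeated 3-word sequences. High false-positive and false-negative
    rates; this is a reading aid, not a detector."""
    lowered = text.lower()
    phrase_hits = [p for p in TELLTALE_PHRASES if p in lowered]

    # Count every 3-word sequence and keep the ones repeated often
    words = re.findall(r"[a-z']+", lowered)
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeated = [" ".join(t) for t, n in trigrams.items() if n >= min_repeats]

    return {"phrase_hits": phrase_hits, "repeated_trigrams": repeated}

sample = ("As an AI language model, I cannot browse the web. "
          "The market is growing fast. The market is growing fast. "
          "Analysts say the market is growing fast.")
print(flag_ai_tells(sample))
```

For serious checking, a dedicated classifier like the GPTZero tool mentioned above is a better bet; a script like this only catches the most careless cases.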

The ROBOT Test

Being AI Literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles. 

Sandy Hervieux and Amanda Wheatley of McGill University created a tool you can use when reading about AI applications to help consider the legitimacy of the technology.

Reliability

  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?

Objective

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?

Bias

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?

Owner

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?

Type

  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

See the original guide here: https://libraryguides.mcgill.ca/ai/literacy
Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
