Generative Artificial Intelligence (GenAI), Misinformation, and Disinformation: Learning Module

Joel Blechinger, Mount Royal University Library, Spring 2025

This module is licensed under an Attribution-NonCommercial-ShareAlike 4.0 International CC license.

This is a learning module that I developed during spring 2025 for GenAI-related library instruction at Mount Royal University in a General Education class (GNED 1403 Writing in a Digital Context, taught by Kara Toews). The documentation for this learning module comprises 3 parts:

● (1) a guided overview of some of the issues that GenAI technologies present related to misinformation and disinformation;
● (2) an active learning activity where students use reverse image searching tools in pairs or groups on a set of instructor-curated GenAI images to explore some of the issues; and
● (3) an "answer key" for the learning activity to be used by the instructor and optionally shown to students during the class-wide debrief of the activity.

NB: Working on GenAI necessarily presents issues related to currency, so this activity will likely need to be updated: for example, by including new figures from an organization like NewsGuard in the guided overview section, and perhaps more current or recently viral examples of GenAI-generated images or videos in the learning activity. The images included in this document reflect the time period when this module was first created.

1. Generative Artificial Intelligence, Misinformation, and Disinformation

Generative artificial intelligence (AI) is a hot topic that is having an impact on many areas of cultural life, education, and the economy. Generative AI tools can be used to create and spread misinformation and disinformation in multiple ways.
In NewsGuard's October 2024 audit of the 10 leading GenAI chatbots, the organization found that those chatbots "collectively repeated misinformation 24.67 percent of the time, offered a non-response 21.67 percent of the time, and a debunk 53.67 percent of the time. The 46.33 percent 'fail' rate (percentage of responses containing misinformation or offering a non-response) increased from NewsGuard's September audit." In addition, as of November 11, 2024, NewsGuard's team had identified 1,121 unreliable AI-generated news and information websites spanning 16 languages.

OpenAI, the company behind ChatGPT, states, in its own documentation for GPT-4, that "[t]he profusion of false information from [large language models] … has the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction." Two ways that this has already happened and made the news are fake image generation and textual fabrication. Our activity later will explore fake image generation specifically.

A Specific Textual Fabrication Example From ChatGPT

Be sure to scrutinize any source(s) that generative AI provides you with on a topic, because, at this point, it is prone to error: what some have called "hallucination," but that I prefer to call "fabrication." If generative AI provides you with a source: (1) make sure that the source actually exists; and, if it does exist, (2) make sure that the source actually contains the information that generative AI has attributed to it.

[Screenshot of a ChatGPT conversation:]

User: Can you give me five citations for books or articles related to ethics in influencer culture?

ChatGPT: I'm unable to provide specific citations or direct links to current academic books or articles, as my training data only goes up until September 2021, and I don't have live internet access to search for real-time information.
However, I can suggest five academic sources related to ethics in influencer culture that you can search for using academic databases like Google Scholar or your university's library resources:

1. Book: "Influencer Marketing for Dummies" by Kristy Sammis, Cat Lincoln, and Stefania Pomponi
○ This book covers various aspects of influencer marketing, including ethical considerations and best practices.
2. Book: "Influencer Marketing: Building Brand in a Digital Age" by Duncan Brown and Nick Hayes
○ While this book primarily focuses on influencer marketing strategies, it likely touches on ethical concerns within the context of influencer culture.
3. Academic Article: "The Rise of Influencer Marketing and Its Impact on Consumer Behavior" by Liu, Hu, and Zhang (2019)
○ This article discusses the ethical implications of influencer marketing and its effects on consumer behavior.
4. Academic Article: "Ethical and Legal Issues in Influencer Marketing" by Brenner, A. and Capron, L. (2019)
○ This academic paper explores the ethical and legal dimensions of influencer marketing, including issues related to transparency and authenticity.
5. Academic Article: "The Dark Side of Social Media: A Consumer Psychology Perspective" by Phua, J., Jin, S.V., and Kim, J.J. (2017)
○ This article discusses various ethical issues surrounding social media and could include insights related to influencer culture.

Please use these titles and author… research. You can search for… [remainder of the response is cut off in the screenshot]

An Investigation of ChatGPT's Sources

1. Book: Influencer Marketing for Dummies by Kristy Sammis, Cat Lincoln, and Stefania Pomponi
○ This source does exist and it was written by these authors, but it is a For Dummies book that wouldn't be considered scholarly.
2. Book: Influencer Marketing: Building Brand in a Digital Age by Duncan Brown and Nick Hayes
○ This source does exist and it was written by those authors, but ChatGPT has fabricated a subtitle for it that it doesn't have.
3.
Academic Article: "The Rise of Influencer Marketing and Its Impact on Consumer Behavior" by Liu, Hu, and Zhang (2019)
○ To the best of my searching abilities, this source does not exist.
4. Academic Article: "Ethical and Legal Issues in Influencer Marketing" by Brenner, A. and Capron, L. (2019)
○ To the best of my searching abilities, this source does not exist.
5. Academic Article: "The Dark Side of Social Media: A Consumer Psychology Perspective" by Phua, J., Jin, S.V., and Kim, J.J. (2017)
○ This source is a Frankenstein composite of 2 sources: the authors have been taken from this article, and the title has been taken from this edited book, with which those authors had no involvement.

2. GenAI & Mis/Disinformation Activity

Here is a Padlet board to use to track your research: https://padlet.com/bleching/gned1403_genai

1. Please put your names below an item in a comment to claim your information source.
2. Next, perform a "lateral reading" of your source by searching the web for it and finding other reputable sources that talk about it or that are hosting it online. Useful websites for analyzing your source include:
● TinEye
● Google's Reverse Image Search (instructions here) or (in Chrome) right-click and "Search with Google Lens."
● You might also want to use Hive Moderation's AI-Generated Content Detector.
If your source has text, you may also want to search that text on Google in quotes to find more info about it. For example: "example text"
3. Write what you find in other comments and try to answer some of these questions:
● Who created your source (if applicable/known)?
● Why did they create it (if applicable/known)?
● If you find that your source was AI generated, which GenAI tool(s) were used to create it (if applicable/known)?
● What issues related to GenAI does your source represent?
4. We will wrap up in ~10-12 mins and debrief the sources as a whole class.
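As a side note for instructors or students who prefer to script the exact-phrase step above: a quoted search is just the target text wrapped in double quotes and URL-encoded into the search engine's query string. A minimal Python sketch, where the helper name and the use of Google's `/search?q=` endpoint are illustrative assumptions rather than part of this module:

```python
from urllib.parse import quote_plus

def exact_phrase_search_url(phrase: str) -> str:
    # Wrap the phrase in double quotes so the search engine matches the
    # exact wording (not the individual words), then URL-encode it.
    return "https://www.google.com/search?q=" + quote_plus(f'"{phrase}"')

print(exact_phrase_search_url("example text"))
# -> https://www.google.com/search?q=%22example+text%22
```

Opening the resulting URL in a browser runs the same exact-phrase query you would get by typing "example text" (with the quotation marks) into the search box.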
● Source 1
● Source 2
● Source 3
● Source 4
● Source 5
● Source 6
● Source 7
● Source 8
● Source 9
● Source 10
● Source 11

3. GenAI Mis/Disinfo Activity "Answer Key"

Notable Fake Image Generation Examples

Trump's "Arrest" (March 2023) (Source 1)
https://twitter.com/EliotHiggins/status/1637927681734987777

"Balenciaga Pope" (March 2023) (Source 2)
https://www.reddit.com/r/midjourney/comments/120vhdc/the_pope_drip/

Pseudomnesia: The Electrician by Boris Eldagsen (April 2023) (Source 4)
https://www.theartnewspaper.com/2023/04/18/the-camera-never-lies-creator-of-ai-image-rejects-prestigious-photo-award
https://www.theguardian.com/artanddesign/2023/apr/18/ai-threat-boris-eldagsen-fake-photo-duped-sony-judges-hits-back
(This image is more difficult to analyze because it was deliberately created by an artist, and his intentions in doing so can be somewhat difficult to grasp.)

"Shrimp Jesus" on Facebook (Spring 2024) (Source 6)
https://x.com/TheHornetsFury/status/1767792068947329106
https://www.forbes.com/sites/danidiplacido/2024/04/28/facebooks-surreal-shrimp-jesus-trend-explained/

Crying Girl and Puppy After Hurricane Helene (October 2024) (Source 10)
https://www.rollingstone.com/culture/culture-news/ai-girl-maga-hurricane-helene-1235125285/
https://www.forbes.com/sites/larsdaniel/2024/10/04/hurricane-helena-deepfakes-flooding-social-media-hurt-real-people/

Baby Peacocks (October 2024) (Source 9)
https://ifunny.co/picture/all-crossed-out-images-have-been-generated-by-al-the-YdRMgSaqB

Hollywood Sign on Fire (January 2025) (Source 5)
https://x.com/Shayan86/status/1877322068892483891

Notable Textual Fabrication Examples

Lawyers Citing AI-Generated Cases (March 2023) (Source 7)
https://arstechnica.com/tech-policy/2023/06/lawyers-have-real-bad-day-in-court-after-citing-fake-cases-made-up-by-chatgpt/

AI-Generated Mushroom Foraging Books Sold on Amazon (August 2023) (Source 8)
https://twitter.com/newyorkmyc/status/1695689778224594959
https://www.404media.co/ai-generated-mushroom-foraging-books-amazon/

Google's AI Overview Feature Fabricating Answers (May/June 2024) (Source 11)
https://www.theverge.com/2024/5/23/24162896/google-ai-overview-hallucinations-glue-in-pizza

Notable Examples of AI Use to Deceive (Disinformation)

Willy's Chocolate Experience in Glasgow, Scotland (February 2024) (Source 3)
https://en.wikipedia.org/wiki/Willy%27s_Chocolate_Experience

Fake Halloween Parade in Dublin, Ireland (October 2024) (Not Used in Padlet, But Another Potential Image to Use in Learning Activity)
https://www.cbsnews.com/news/dublin-halloween-parade-hoax-ireland-prankai-fake-news/