While generative text tools such as ChatGPT offer opportunities in many fields, they are not neutral. They were built by humans and human-led companies, and they do not exist in a vacuum.
Beyond academic integrity and plagiarism, a number of other ethical concerns surround the development, deployment, and use of generative AI tools. For a quick rundown of the most salient issues, see 8 Big Problems With OpenAI's ChatGPT and AI's Trust Problem.
Professors who intend to encourage students to use generative AI tools should first ensure they themselves fully understand the technology's drawbacks. Exploring these topics in class discussions or projects may also help students better understand the limitations of generative AI tools and learn to use them more effectively and appropriately.
One of the primary concerns about text generators is that they are known to produce inaccurate information. Large Language Models (LLMs) like ChatGPT cannot tell fact from fiction, sometimes giving incorrect answers or citing nonexistent sources. This is sometimes called 'hallucinating,' though some researchers feel the term inappropriately anthropomorphizes AI models, a concern in its own right. Often the inaccuracies ChatGPT produces are harmless, but sometimes they have negative or even dangerous real-world consequences. ChatGPT maker OpenAI is currently under investigation by the Federal Trade Commission, in part over concerns about harmful inaccuracies produced by the text generator.
Students should be reminded that the onus is always on the end user to verify any and all information that comes from a text generator.
Human-created content is necessary for training, because training LLMs on synthetic media (e.g., text or images created by generative AI tools) leads to model collapse. Most generative AI tools are trained on massive amounts of copyright-protected text, usually without permission or compensation. There are currently numerous open lawsuits against ChatGPT maker OpenAI and other generative AI companies for copyright infringement, including suits brought by prominent authors, publications, and artists. The companies have argued that LLM training falls under 'fair use,' but given the existing and predicted effects on the marketplace and the commercial nature of OpenAI's business model, this claim is questionable.
Please note that entering copyrighted work into ChatGPT prompts (including your students' work) may infringe on intellectual property rights.
Even before the advent of ChatGPT, AI tools had well-documented problems with bias. Generative AI models often perpetuate the biases found in their training data, which frequently includes text from some of the worst corners of the internet.
ChatGPT's Terms of Service allow the company to use any data entered as prompts unless 'incognito mode' is activated. OpenAI is currently under investigation by the Federal Trade Commission (FTC), in part over potential privacy violations. Users should avoid entering personally identifiable or financial information in a ChatGPT prompt.
See 5 Things You Must Not Share With AI Chatbots.
A major worry about generative text tools is their potential use by bad actors to mislead, deceive, and generate spam and phishing attempts on a massive scale. ChatGPT and other generative tools are already being used to create and spread propaganda, spam, disinformation, and conspiracy theories.
It's easy to think of generative text tools like ChatGPT as entirely machine-based, but a vast amount of human labor is needed to make them work correctly. The data annotation and content moderation that enable LLMs to function as intended are mostly done by underpaid freelancers in the Global South, who must sometimes view and label the very worst content on the internet, including depictions of sexual violence and child sexual abuse.
An often-overlooked concern is the environmental impact of generative AI tools. The enormous computing power needed to generate predictive text or images requires far more energy and water than a traditional web search. Researchers have estimated that just five ChatGPT prompts consume about 16 ounces of water, and that each individual prompt uses energy equivalent to running a 5-watt LED bulb for more than an hour, roughly 15 times the resource use of a Google search.
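As a rough back-of-the-envelope check (assuming the commonly cited estimate of roughly 0.3 watt-hours per Google search, a figure not stated above), the per-prompt energy comparison works out as follows:

$$5\,\mathrm{W} \times 1\,\mathrm{h} = 5\,\mathrm{Wh}\ \text{per prompt}, \qquad \frac{5\,\mathrm{Wh}}{0.3\,\mathrm{Wh}\ \text{per search}} \approx 17$$

which is consistent with the "about 15 times" figure cited above.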
In today's interconnected, globalized world, many people have become aware of the importance of investigating what the companies they support, in turn, support themselves. Many have recently chosen not to patronize companies whose views do not align with their own.
Many of Silicon Valley's power players, including Sam Altman, CEO of ChatGPT maker OpenAI, have expressed support for Longtermist philosophies. "Longtermism," which should not be confused with mere long-term thinking, is a fringe philosophical belief about the future of humanity.
The plausibility of ChatGPT's output can give users the illusion of sentience, but generative AI should not be confused with artificial general intelligence, which does not exist. Humans are highly prone to anthropomorphization, which can lead users to imbue ChatGPT with abilities and consciousness it does not have. There are additional dangers and liabilities when generative AI is used as a substitute for human decision-making, human companionship, or professional psychological care.