A Word About Generative AI Large Language Models
A new and rapidly evolving phenomenon impacting higher education is the availability of generative artificial intelligence systems [such as Chat Generative Pre-trained Transformer, or ChatGPT]. These systems were developed by scanning text from millions of books, websites, and other sources, enabling the algorithms within them to learn patterns in how words and sentences are constructed. This allows them to respond to a broad range of questions and prompts, generate stories, compose essays, create lists, and more. Generative AI systems are not actually thinking or understanding like a human, but they are good at mimicking human-like text based on what they have learned from the sources of input data used to build and refine their artificial intelligence algorithms, protocols, and standards.
As such, generative AI systems [a.k.a., “Large Language Models” or LLMs] have emerged, depending on one’s perspective, as either a threat or an opportunity in how faculty create or modify class assignments and how students approach the task of writing a college-level research paper. We are in the very earliest stages of understanding how LLMs may impact learning outcomes associated with information literacy, i.e., fluency in applying the skills needed to effectively identify, gather, organize, critically evaluate, interpret, and report information. In the meantime, these systems will continue to improve and become more sophisticated, as will the academic integrity detection programs used to identify AI-generated text in student papers [e.g., Turnitin].
When you are given a research assignment that includes an element of writing, it is up to your professor whether using ChatGPT is permitted. Some professors embrace these systems as part of in-class writing exercises designed to help students understand their limitations, while others warn against their use because of those same limitations and biases. That said, the future of information seeking using LLMs suggests that the intellectual spaces associated with research and writing will likely collapse into a single online environment in which students will be able to perform in-depth searches for information connected to the Libraries' many resources.
As LLMs become more sophisticated, here are some potential ways generative artificial intelligence programs could facilitate organizing and writing your social sciences research paper:
Despite their power to create text, generative AI systems are far from perfect, and their ability to “answer” questions can be misleading, deceptive, or outright false. Described below are some current problems adapted from an essay written by Bernard Marr in Forbes and reiterated by researchers studying LLMs and writing. These issues focus on problems with using ChatGPT, but they are applicable to any current Large Language Model.
As they currently exist, ChatGPT and other Large Language Models truly are artificial in their intelligence. They cannot express thoughts, feelings, or other affective constructs that help a reader intimately engage with the author's written words; the output contains text, but the systems are incapable of producing creative expressions or thoughts, such as conveying the idea of willful deception and other narrative devices that you might find in a poem or song lyric. Although creative devices, such as metaphors, idioms, imagery, or subtleties in narrative rhythm, style, or voice, are rarely used in academic writing, this does illustrate that personalizing the way you present your research [e.g., sharing a personal story relating to the significance of the topic or being asked to write a reflective paper] cannot be generated artificially.
Ethical Considerations
In the end, the decision about whether to use ChatGPT or similar platforms to help write your research paper is up to you; it’s an introspective negotiation between you and your conscience. As noted by Bjork (2023) and others, though, it is important to keep in mind these overarching ethical problems related to the use of LLMs:
NOTE: If your professor allows you to use generative AI programs, or you decide on your own to use an LLM for a writing assignment, then this fact should be cited in your research paper, just as any other source of information used to write your paper should be acknowledged. Why? Because, unlike grammar or citation tools such as Grammarly or Citation Machine, which correct text you've already written, generative AI programs create new content that is not in your own words. Currently, the American Psychological Association (APA), the Modern Language Association (MLA), and the Chicago Manual of Style provide citation recommendations in this area.
ANOTHER NOTE: As described above, LLMs have significant deficiencies that still require thorough proofreading and source verification, an ability to discern quality information from misleading, false, irrelevant, or fabricated information, a capacity to interpret and critically analyze what you have found, and the skills required to extrapolate meaning from the research you have conducted. For assistance with these elements of research and writing, you should still contact a librarian.
YET ANOTHER NOTE: Researchers are finding early evidence suggesting that over-reliance on ChatGPT and other LLM platforms for even the simplest writing tasks may, over time, undermine students' confidence in their own writing ability. Just like giving a class presentation or participating effectively in a group project, good writing is an acquired skill that is only improved by the act of doing; the more you write, the more comfortable and confident you become expressing your own ideas, opinions, and judgments about the problem you have researched. Substituting LLM output for your own voice can inhibit your growth as a writer, so give yourself room to write creatively and with confidence by treating LLMs as a tool rather than a definitive source of text.
For more information about Generative AI platforms and guidance on their ethical use in an academic setting, review the USC Libraries' Using Generative AI in Research guide for students and faculty.
Introduction to ChatGPT for Library Professionals. Mike Jones and Curtis Fletcher. USC Libraries, Library Forum, May 18, 2023; ChatGPT. Library, Wesleyan University; Bjork, Collin. "ChatGPT Threatens Language Diversity." The Conversation, February 9, 2023; Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley. Center for Teaching & Learning, University of California, Berkeley; Ellis, Amanda R., and Emily Slade. "A New Era of Learning: Considerations for ChatGPT as a Tool to Enhance Statistics and Data Science Education." Journal of Statistics and Data Science Education 31 (2023): 1-10; Ray, Partha Pratim. "ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope." Internet of Things and Cyber-Physical Systems (2023); Uzun, Levent. "ChatGPT and Academic Integrity Concerns: Detecting Artificial Intelligence Generated Content." Language Education and Technology 3, no. 1 (2023); Lund, Brady D., et al. "ChatGPT and a New Academic Reality: Artificial Intelligence Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing." Journal of the Association for Information Science and Technology 74 (February 2023): 570-581; Rasul, Tareq, et al. "The Role of ChatGPT in Higher Education: Benefits, Challenges, and Future Research Directions." Journal of Applied Learning and Teaching 6 (2023); Marr, Bernard. "The Top 10 Limitations Of ChatGPT." Forbes (March 3, 2023): https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=41ae78e8f355; Thinking about ChatGPT? Academic Integrity at UBC, Office of the Provost and Vice-President Academic, University of British Columbia.