
Organizing Your Social Sciences Research Paper

Offers detailed guidance on how to develop, organize, and write a college-level research paper in the social and behavioral sciences.

Generative AI and Writing

A Word About Generative AI Large Language Models

A new and rapidly evolving phenomenon impacting higher education is the availability of generative artificial intelligence systems [such as Chat Generative Pre-trained Transformer, or ChatGPT]. These systems have been developed by scanning text from millions of books, web sites, and other sources so that their algorithms can learn patterns in how words and sentences are constructed. This allows them to respond to a broad range of questions and prompts, generate stories, compose essays, create lists, and more. Generative AI systems are not actually thinking or understanding like a human, but they are good at mimicking human-like text based on what they have learned from the input data used to build and refine their underlying algorithms.

As such, generative AI systems [a.k.a. “Large Language Models”] have emerged as either a threat or an opportunity, depending on one’s perspective, in how faculty create or modify class assignments and how students approach the task of writing a college-level research paper. We are in the very earliest stages of understanding how LLMs may impact learning outcomes associated with information literacy, i.e., fluency in the skills needed to effectively identify, gather, organize, critically evaluate, interpret, and report information. In the meantime, these systems will continue to improve and become more sophisticated, as will the academic integrity detection programs used to identify AI-generated text in student papers [e.g., Turnitin].

When you are assigned to write a research paper, it is up to your professor whether using ChatGPT is permitted. Some professors embrace these systems as part of an in-class writing exercise to help students understand their limitations, while others warn against their use because of those same limitations and biases. That said, the future of information seeking using LLMs means that the intellectual spaces associated with research and writing will likely collapse into a single online environment in which students will be able to perform in-depth searches for information connected to the Libraries' many resources.

As LLMs become more sophisticated, here are some potential ways generative artificial intelligence programs could facilitate organizing and writing your social sciences research paper:

  • Explore a Topic – develop a research problem related to the questions you have about a general subject of inquiry.
  • Formulate Ideas – obtain background information and explore ways to place the research problem within specific contexts.
  • Home in on Specific Research Questions and Related Sub-questions – create a query-based framework for how to investigate the research problem.
  • Locate Sources to Answer those Questions – begin the initial search for sources concerning your research questions.
  • Obtain Summaries of Sources – build a synopsis of the sources to help determine their relevance to the research questions underpinning the problem.
  • Outline and Structure an Argument – present information helpful to formulating an argument or an explanation for a stated position.
  • Draft and Iterate on a Final Essay – create a final essay by repeating the act of text generation on the results of each prior step [i.e., asking follow-up questions to build on or clarify initial results; an illustrative sketch of this iterative workflow follows the list].
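
The iterative workflow described in the last bullet can also be scripted by students who are comfortable with programming. The short sketch below is purely illustrative and rests on assumptions not covered in this guide: it assumes the OpenAI Python client (openai version 1.x or later) is installed, that an API key is available in the OPENAI_API_KEY environment variable, and that the model name and prompts shown are placeholders you would replace with your own. The same back-and-forth can be carried out entirely in the ChatGPT web interface without any code.

    # Purely illustrative sketch of iterative drafting with follow-up prompts.
    # Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
    # environment variable; the model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Keep the whole conversation so each request builds on the prior response.
    messages = [{"role": "user",
                 "content": "Outline a short argument about how remote work affects civic participation."}]

    follow_ups = [
        "Expand the second point of the outline into a full paragraph.",
        "Revise that paragraph to acknowledge one counterargument.",
    ]

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    draft = response.choices[0].message.content
    print(draft)

    for prompt in follow_ups:
        # Add the model's reply and the next follow-up question, then ask again.
        messages.append({"role": "assistant", "content": draft})
        messages.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        draft = response.choices[0].message.content
        print(draft)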

Despite their power to create text, generative AI systems are far from perfect, and their ability to “answer” questions can be misleading, deceptive, or outright false. Described below are some current problems, adapted from an essay written by Bernard Marr in Forbes and reiterated by researchers studying LLMs and writing. These issues focus on ChatGPT, but they apply to any current Large Language Model.

  • Not connected to the Internet – although generative AI systems may appear to possess a significant amount of information, most LLMs are currently not mining the Internet for that information [note that this is changing quickly; for example, an AI chatbot feature is now embedded into Microsoft’s Bing search engine, though you may need to pay for this feature in the future]. Without a connection to the Internet, LLMs cannot provide real-time information about a topic, so the scope of research is limited and any new developments in a particular field of study will not be included in the responses. In addition, these LLMs can only accept input in text format, so other forms of communication, such as videos, web sites, or images, are excluded from the inquiry prompts.
  • Trouble generating long-form, structured content – ChatGPT and other systems are inadequate at producing long-form content that follows a particular structure, format, or narrative flow. The models can create coherent and grammatically correct text, but they are currently best suited for generating shorter pieces of content, such as summaries of topics, bullet-point lists, or brief explanations. They remain poor at creating a comprehensive, coherent, and well-structured college-level research paper.
  • Limitations in handling multiple tasks – generative AI systems perform best when given a single task or objective to focus on. If you ask LLMs to perform multiple tasks at the same time [e.g., a question that includes multiple sub-questions], the models will struggle to prioritize them, which will lead to a decrease in the effectiveness and accuracy of the results.
  • Potentially biased responses – this is important to understand. While ChatGPT and other systems are trained on a large set of text data, that data has not been widely shared. You can ask the systems what sources they are using, but their responses cannot be independently verified. Therefore, one cannot know what hidden biases or prejudices may exist within the data [i.e., the systems do not cite their sources]. This means an LLM may generate responses that are biased or discriminatory.
  • Accuracy problems or grammatical issues – LLMs currently have very limited sensitivity to typographical errors, grammatical errors, and misspellings in prompts. The models may also produce responses that are technically correct but not entirely accurate in terms of context or relevance. This limitation can be particularly challenging when processing complex or specialized information where accuracy and precision are essential. Given this, never take the generated information at face value; always proofread and verify the results!

As they currently exist, ChatGPT and other Large Language Models truly are artificial in their intelligence. They cannot express thoughts, feelings, or other affective constructs that help a reader intimately engage with the written word; the output contains words, but the systems are incapable of producing creative expressions or thoughts, such as conveying the idea of willful deception as you might find in a poem or song lyric.


Ethical Considerations

In the end, the ethics of whether to use ChatGPT or similar platforms to help write your research paper is up to you; it’s an introspective negotiation between you and your conscience. As noted by Bjork (2023) and others, though, it is important to keep in mind these overarching ethical problems related to using LLMs:

  1. LLMs Do Not Understand the Meaning of Words. Without meaning as a guide, these systems rely on context clues, stylistic structures, writing forms, linguistic patterns, and word frequency to determine how to respond to queries. This functionality means that, by default, LLMs perpetuate dominant modes of writing and language use while minimizing or hiding less common ones. As a result,...
  2. LLMs Prioritize Standard American English. Since White English-speaking men have dominated most writing-intensive sectors of the knowledge economy, such as journalism, law, politics, medicine, academia, and, perhaps most importantly, computer programming, writers and speakers of African American or Indigenous English, who use forms of language with their own grammar, lexicon, slang, and history of resistance within the dominant culture, are penalized and shamed for writing as they speak. The default functionality and outputs of LLMs, therefore, can privilege forms of English writing developed primarily by the dominant culture.
  3. LLMs Do Not Protect User Privacy. ChatGPT and other platforms record and retain the entire content of your conversations with the systems. This means any information you enter, including personal information or any documents you ask the systems to revise, is retained and cannot be removed. Although the American Data Privacy and Protection Act was considered in the 117th Congress, there is no federal privacy law that regulates how these for-profit companies can store, use, or possibly sell information entered into their platforms. Given this, it is highly recommended that personal information never be included in any queries.

Given these and other factors, LLMs have significant deficiencies. Using them still requires thorough proofreading and source verification, an ability to discern quality information from misleading, false, irrelevant, or even made-up information, a capacity to interpret and critically analyze what you have found, and the skills required to extrapolate meaning from the research you have conducted. For help with these elements of research and writing, you should still contact a librarian.


Introduction to ChatGPT for Library Professionals. Mike Jones and Curtis Fletcher. USC Libraries, Library Forum, May 18, 2023; ChatGPT. Library, Wesleyan University; Bjork, Collin. “ChatGPT Threatens Language Diversity.” The Conversation, February 9, 2023; Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley. Center for Teaching & Learning, University of California, Berkeley; Ray, Partha Pratim. “ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope.” Internet of Things and Cyber-Physical Systems (2023); Uzun, Levent. “ChatGPT and Academic Integrity Concerns: Detecting Artificial Intelligence Generated Content.” Language Education and Technology 3, no. 1 (2023); Lund, Brady D. et al. “ChatGPT and a New Academic Reality: Artificial Intelligence Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing.” Journal of the Association for Information Science and Technology 74 (February 2023): 570–581; Rasul, Tareq et al. “The Role of ChatGPT in Higher Education: Benefits, Challenges, and Future Research Directions.” Journal of Applied Learning and Teaching 6 (2023); Marr, Bernard. “The Top 10 Limitations Of ChatGPT.” Forbes (March 3, 2023): https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=41ae78e8f355