
Organizing Your Social Sciences Research Assignments

This guide describes how to successfully complete specific assignments commonly assigned in social sciences and behavioral sciences courses.

Generative AI and Writing

A Word About Generative AI Large Language Models

A new and rapidly evolving phenomenon impacting higher education is the availability of generative artificial intelligence systems [such as Chat Generative Pre-trained Transformer, or ChatGPT]. These systems were developed by scanning text from millions of books, web sites, and other sources so that their algorithms could learn patterns in how words and sentences are constructed. This allows them to respond to a broad range of questions and prompts, generate stories, compose essays, create lists, and more. Generative AI systems are not actually thinking or understanding like a human, but they are good at mimicking human-like text based on the patterns they have learned from the input data used to build and refine their underlying models.

As such, generative AI systems [a.k.a., “Large Language Models”] have emerged, depending on one’s perspective, as either a threat or an opportunity in how faculty create or modify class assignments and how students approach the task of writing a college-level research paper. We are in the very earliest stages of understanding how LLMs may impact learning outcomes associated with information literacy, i.e., fluency in applying the skills needed to effectively identify, gather, organize, critically evaluate, interpret, and report information. Even before this is fully understood, these systems will continue to improve and become more sophisticated, as will the academic integrity detection programs used to identify AI-generated text in student papers [e.g., Turnitin].

When given a research assignment that includes an element of writing, it is up to your professor whether using ChatGPT is permitted. Some professors embrace these systems as part of an in-class writing exercise to help students understand their capabilities and limitations, while others warn against their use because of those same limitations and biases. That said, the future of information seeking with LLMs means that the intellectual spaces associated with research and writing will likely collapse into a single online environment in which students will be able to perform in-depth searches for information connected to the Libraries' many resources.

As LLMs become more sophisticated, here are some potential ways generative artificial intelligence programs could facilitate organizing and writing your social sciences research paper:

  • Explore a Topic – develop a research problem related to the questions you have about a general subject of inquiry.
  • Formulate Ideas – obtain background information and explore ways to place the research problem within specific contexts.
  • Zero in on Specific Research Questions and Related Sub-questions – create a query-based framework for how to investigate the research problem.
  • Locate Sources to Answer those Questions – begin the initial search for sources concerning your research questions.
  • Obtain Summaries of Sources – build a synopsis of the sources to help determine their relevance to the research questions underpinning the problem.
  • Outline and Structure an Argument – present information helpful to formulating an argument or an explanation for a stated position.
  • Draft and Iterate on a Final Essay – create a final essay through an iterative process of text generation, building on the results of each prior step [i.e., asking follow-up questions to expand on or clarify initial results].

Despite their power to create text, generative AI systems are far from perfect, and their ability to “answer” questions can be misleading, deceptive, or outright false. Described below are some current problems adapted from an essay written by Bernard Marr at Forbes and reiterated by researchers studying LLMs and writing. These issues focus on problems with using ChatGPT, but they are applicable to any current Large Language Model.

  • Not Connected to the Internet. Although generative AI systems may appear to possess a significant amount of information, most LLMs are currently not mining the Internet for that information [note that this is changing quickly; for example, an AI chatbot feature is now embedded into Microsoft’s Bing search engine, but you'll probably need to pay for this feature in the future]. Without a connection to the Internet, LLMs cannot provide real-time information about a topic. As a result, the scope of research is limited and any new developments in a particular field of study will not be included in the responses. In addition, most LLMs can only accept input in text format, so other forms of knowledge, such as videos, web sites, or images, cannot be included in your prompts.
  • Hallucinating Facts and Sources. ChatGPT and other large language models hallucinate. In other words, they are capable of making stuff up, particularly when asked to create a list of scholarly sources or when presenting factual statements that may read as semantically or syntactically credible but are actually false or nonsensical. Researchers have already noted that this has broad implications for using LLMs in health care, diagnostic medicine, and applied science disciplines, such as structural engineering, where false or inaccurate information could prove harmful or dangerous. Given this, all AI-generated text must be proofread for accuracy because submitting papers with false statements signals to your professor that you didn't take the assignment seriously and, more importantly, any unverifiable references to sources in your paper expose you to allegations of plagiarism.
  • Trouble Generating Long-form, Structured Content. ChatGPT and other systems are inadequate at producing long-form content that follows a particular structure, format, or narrative flow. While the models are capable of creating coherent and grammatically correct text, they are currently best suited for generating shorter pieces of content, such as summaries of topics, bullet-point lists, or brief explanations, and they are poor at creating a comprehensive, coherent, and well-structured college-level research paper.
  • Limitations in Handling Multiple Tasks. Generative AI systems perform best when given a single task or objective to focus on. If you ask LLMs to perform multiple tasks at the same time [e.g., a question that includes multiple sub-questions], the models struggle to prioritize them, which will lead to a decrease in the effectiveness and accuracy of the results.
  • Biased Responses. This is important to understand. While ChatGPT and other systems are trained on a large set of text data, that data has not been widely shared so that it can be reviewed and critically analyzed. You can ask the systems what sources they are using, but any responses cannot be independently verified. Therefore, it is not possible to identify any hidden biases or prejudices that exist within the data [i.e., the systems do not cite their sources]. This means an LLM may generate responses that are biased or discriminatory.
  • Accuracy Problems or Grammatical Issues. LLMs currently have a very limited ability to handle typographical errors, grammatical errors, and misspellings. The models may also produce responses that are technically correct but not entirely accurate in terms of context or relevance. This limitation can be particularly challenging when processing complex or specialized information where accuracy and precision are essential. Given this, never take the information that is generated at face value; always proofread and verify the results!

As they currently exist, ChatGPT and other Large Language Models truly are artificial in their intelligence. They cannot express thoughts, feelings, or other affective constructs that help a reader intimately engage with the author's written words; the output contains text, but the systems are incapable of producing creative expressions or thoughts, such as conveying the idea of willful deception and other narrative devices that you might find in a poem or song lyric. Although creative devices, such as metaphors, idioms, imagery, or subtleties in narrative rhythm, style, or voice, are rarely used in academic writing, this does illustrate that personalizing the way you present your research [e.g., sharing a personal story about the significance of the topic or being asked to write a reflective paper] cannot be generated artificially.


Ethical Considerations

In the end, the ethics of whether to use ChatGPT or similar platforms to help write your research paper is up to you; it’s an introspective negotiation between you and your conscience. As noted by Bjork (2023) and others, though, it is important to keep in mind these overarching ethical problems related to the use of LLMs:

  1. LLMs Do Not Understand the Meaning of Words. Without meaning as a guide, these systems use algorithms that rely on formulating context clues, stylistic structures, writing forms, linguistic patterns, and word frequency in determining how to respond to queries. This functionality means that, by default, LLMs perpetuate dominant modes of writing and language use while minimizing or hiding less common ones. As a result,
  2. LLMs Prioritize Standard American English. Since White English-speaking men have dominated most writing-intensive sectors of the knowledge economy, such as journalism, law, politics, medicine, academia, and, perhaps most importantly, computer programming, writers and speakers of African American or Indigenous English, who use forms of language with their own grammar, lexicon, slang, and history of resistance within the dominant culture, are penalized and shamed for writing as they speak. The default functionality and outputs of LLMs, therefore, can privilege forms of English writing developed primarily by the dominant culture.
  3. LLMs Do Not Protect User Privacy. ChatGPT and other platforms record and retain the entire content of your conversations with the systems. This means any information you enter, including personal information or, for example, any documents you ask the systems to revise, is retained and cannot be removed. Although the American Data Privacy and Protection Act was considered during the 117th Congress, there is no federal privacy law that regulates how these for-profit companies can store, use, or possibly sell information entered into their platforms. Given this, it is highly recommended that you never include personal information in any of your queries.

NOTE: If your professor allows you to use generative AI programs, or you decide on your own to use an LLM for a writing assignment, then this fact should be cited in your research paper, just as any other source of information used to write your paper should be acknowledged. Why? Because unlike grammar or citation tools, such as Grammarly or Citation Machine, that correct text you've already written, generative AI programs create new content that is not in your own words. Currently, the American Psychological Association (APA), the Modern Language Association (MLA), and the Chicago Manual of Style provide citation recommendations in this area.

ANOTHER NOTE: As described above, LLMs have significant deficiencies that still require thorough proofreading and source verification, an ability to discern quality information from misleading, false, irrelevant, or even made-up information, a capacity to interpret and critically analyze what you have found, and the skills required to extrapolate meaning from the research you have conducted. For help with these elements of research and writing, you should still contact a librarian.

YET ANOTHER NOTE: Researchers are finding early evidence suggesting that over-reliance on ChatGPT and other LLM platforms for even the simplest writing task may, over time, undermine confidence in students' own writing ability. Just like giving a class presentation or participating effectively in a group project, good writing is an acquired skill that is only improved by the act of doing; the more you write, the more comfortable and confident you become expressing your own ideas, opinions, and judgments about the problem you have researched. Substituting LLMs for your own voice can inhibit your growth as a writer, so give yourself room to write creatively and with confidence by treating LLMs as a tool rather than a definitive source of text.

For more information about Generative AI platforms and guidance on their ethical use in an academic setting, review the USC Libraries' Using Generative AI in Research guide for students and faculty.


Introduction to ChatGPT for Library Professionals. Mike Jones and Curtis Fletcher. USC Libraries, Library Forum, May 18, 2023; ChatGPT. Library, Wesleyan University; Bjork, Collin. "ChatGPT Threatens Language Diversity." The Conversation, February 9, 2023; Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley. Center for Teaching & Learning, University of California, Berkeley; Ellis, Amanda R., and Emily Slade. "A New Era of Learning: Considerations for ChatGPT as a Tool to Enhance Statistics and Data Science Education." Journal of Statistics and Data Science Education 31 (2023): 1-10; Ray, Partha Pratim. "ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope." Internet of Things and Cyber-Physical Systems (2023); Uzun, Levent. "ChatGPT and Academic Integrity Concerns: Detecting Artificial Intelligence Generated Content." Language Education and Technology 3, no. 1 (2023); Lund, Brady D., et al. "ChatGPT and a New Academic Reality: Artificial Intelligence Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing." Journal of the Association for Information Science and Technology 74 (February 2023): 570-581; Rasul, Tareq, et al. "The Role of ChatGPT in Higher Education: Benefits, Challenges, and Future Research Directions." Journal of Applied Learning and Teaching 6 (2023); Marr, Bernard. "The Top 10 Limitations Of ChatGPT." Forbes (March 3, 2023): https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=41ae78e8f355; Thinking about ChatGPT? Academic Integrity at UBC, Office of the Provost and Vice-President Academic, University of British Columbia.