Consider this advice from Wolfson College, Cambridge:
While we may already be comfortable with many forms of AI in our day-to-day lives, below are some points to consider as generative AI continues to create new possibilities for how we engage with learning and our own research.
Be wary of false citations, even if the sources sound legitimate or you are provided with quotes from the article. Large Language Models (LLMs) like ChatGPT can often "hallucinate," or invent sources that sound convincingly real. They can make up articles from established publications and even court cases with quotes from imaginary opinions.
Reading summaries is not a replacement for critically engaging with the text. Think of AI software that summarizes a text as your first scan of a reading: it can help you get a general sense of the piece, but you may miss the nuance and subtle features that can be essential when you use the text later in your own writing.
Be aware of the possibility of bias in the information you are presented. Since generative AI learns from existing raw data, it may at times provide responses that take data out of context or reinforce harmful stereotypes. Though the responses provided by generative AI may give the appearance of a complex, reflective, and deeply reasoned process, remember that current platforms are heavily dependent on the quality of the prompts they are given and lack moral understanding of the information they provide. Approach the information you are presented with the same critical eye you would apply to any other resource in your research.
Don't let generative AI do your work for you. Relying on generative AI as a substitute for the work needed to develop a skill or practice is not only a detriment to your own learning experience, but it also offloads, as Marc Watkins notes, "the entire moral, ethical, and responsible thinking we expect from a human being."
Last but not least, while generative AI can help you work through various stages of the research process, remember you are ultimately responsible for knowing what is needed at each stage of your research and for the information you submit for evaluation.
Most generative AI tools will, by default, collect the data you enter in your prompts to help train and develop the AI tool further. In many cases you will have the option to opt out of allowing the company to use your data this way, but you may need to actively select this option.
If you are entering research that you are not ready to share publicly, be aware that unless you are able to opt out of allowing your information to be used for training, it may reappear in the responses the AI tool provides to similar prompts from other users.
If you are using a generative AI tool to help transcribe interviews or analyze participant data, you will need to understand how the tool stores and manages that data. Improper use of generative AI for these types of tasks may constitute an ethics violation for your research project.
Here is some food for thought from Wolfson College, Cambridge:
As the capabilities of generative AI continue to evolve, it will be important to have a sense of each tool's limitations and how you can use it without breaching the guidelines of academic integrity. Below are some ways you can use various platforms to assist you in your study without committing academic misconduct:
If you use generative AI to assist your study, remember that it should not be your only study method. You will have greater success if you use it alongside other techniques that develop your subject knowledge and critical thinking skills.
NJIT has created Guidelines for Instructors on AI. Check often to see if this document has been updated.
The main takeaway is to look closely at each course's syllabus to understand what types of AI use are (and aren't) allowed in that course.
This page was adapted from Using AI for Academic Purposes from Wolfson College, Cambridge under a CC BY-NC-SA 4.0 license.