Innovative Artificial Intelligence (AI) technologies and Large Language Models (LLMs) will fundamentally change both teaching and learning in the near future. What consequences this will have, how these changes can be addressed, and what new teaching and learning methods are needed to continue to ensure high-quality education – these issues were discussed at the recent meeting of the International Network of Universities in Technical Communication and related disciplines (IUNTC).
Jenni Viratuoto from the University of Jyväskylä in Finland and Sissi Closs from Karlsruhe University of Applied Sciences focused their presentation on balancing the benefits and challenges of AI in academia – an important topic currently being discussed at many universities.
There are two aspects to consider when looking at AI in academia: Firstly, how is teaching changing, and what new methods, regulations and frameworks are needed? Secondly, how do students need to be prepared and trained for the future use of Artificial Intelligence in their work environment? What skills will be needed in working life in the future?
Professions are currently affected to varying degrees by the changes brought about by AI. The recent dramatic rise in the capabilities of AI language models has raised many questions about the impact of these technologies on business. In a recent study titled "How Will Language Modelers Like ChatGPT Affect Occupations and Industries?", Edward W. Felten (Princeton University) and his colleagues Manav Raj (University of Pennsylvania) and Robert Seamans (New York University – Leonard N. Stern School of Business) concluded that the professions most affected by language modeling include telemarketers and a range of higher education teachers, such as English, foreign language, and history teachers. The industries most affected by advances in language modeling include legal services as well as securities, commodities, and investments. The authors also found a positive correlation between wages and the use of AI language modeling.
The study also shows that technical writers rank 90th among the professions most affected. Jenni Viratuoto named several reasons why technical writing is not in the top 20: The content that LLMs produce is generic and based on statistical probabilities, whereas the content that technical communicators work with is context-specific and accurate. Technical communicators produce content that is new. The background materials that technical communicators use in their work are proprietary and cannot (currently) be fed into an LLM. Many types of technical communication products are governed by laws, regulations, and standards. And technical communication products are created by several people in a co-creation process.
In Finland, ChatGPT has completely changed the teaching and learning of English. Several apps are now used for different purposes: ChatGPT for general tasks, DeepL for translation, Elicit and Perplexity for academic research, Gamma for automatically generating presentations, Google Bard (coming soon), and others. And there will be more. It is hard to keep up with all the changes, which are coming very quickly.
For students, these tools make it easy to get everything done at the last minute. AI is affecting academic work, course assignments, research papers, and learning and teaching. This raises questions about how to deal with academic cheating, plagiarism, and unethical behavior. How can faculty ensure that students do the work themselves and actually learn?
One approach is the effective use of plagiarism detection apps. In addition, students are instructed in the proper use of AI and LLM tools – for example, Perplexity instead of ChatGPT for research. Clear guidelines are also needed on when the use of AI and LLMs is allowed and when it is forbidden. For example, LLMs should never be used to produce a final assignment or thesis text from scratch, nor should text created with a language model be presented as the student's own writing. Students are always responsible for the content of the assignments they turn in, including references and any factual errors.
Furthermore, new teaching methods need to be developed. Instructors should design course assignments so that students cannot complete them with a language model alone, without thinking independently. This can be done, for example, by strictly tying tasks to the course material or to a less familiar case or example. In the area of translation, for instance, instead of simply translating a piece of content, students can be given the following task: submit a self-written version V1 and a version V2 generated with DeepL or another tool, then analyze and reflect on the differences between the two versions as part of the course assignment. The final version they submit is then graded.
But many questions about the ethics of AI and LLMs remain unresolved. For example, companies are using individual users as unpaid trainers of their systems. There is bias; there is “garbage in, garbage out”. How can fair access for everyone in the world be ensured? How can privacy and security be guaranteed, and who owns – and is responsible for – the content that is produced?
To round off the joint presentation with a practical example, Sissi Closs showed how she uses ChatGPT in the classroom to teach XML. ChatGPT serves here directly as an aid for creating structured, XML-based texts. From a workplace perspective, knowledge of markup languages is not necessarily one of the core competencies of technical writers, so it makes sense to use AI here and let technical writers concentrate on more important aspects of their work.
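To illustrate what such structured, XML-based text can look like, here is a minimal sketch that assembles a small topic with Python's standard library. The DITA-style element names (topic, title, body, p) and the sample content are assumptions chosen for the example, not the exact structure used in the course:

```python
# Hypothetical sketch: the kind of structured XML-based text that an
# AI assistant can help students produce. Element names follow a
# DITA-like convention, assumed here for illustration only.
import xml.etree.ElementTree as ET

def build_topic(title, paragraphs):
    """Assemble a minimal DITA-like topic element from plain text."""
    topic = ET.Element("topic", id="example-topic")
    ET.SubElement(topic, "title").text = title
    body = ET.SubElement(topic, "body")
    for text in paragraphs:
        ET.SubElement(body, "p").text = text
    return topic

topic = build_topic(
    "Replacing the battery",
    ["Switch off the device.", "Open the battery compartment."],
)
print(ET.tostring(topic, encoding="unicode"))
```

In class, the prompt to the language model plays the role of `build_topic`: the student supplies the raw content, and the model returns it wrapped in a consistent, machine-processable structure.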
One thing is certain: We are at the very beginning, and Artificial Intelligence and Large Language Models will remain highly present in education at schools and universities as well as in the working world – and will bring many more changes.