Of Artificial Intelligence and Artificially Intelligent Learning: Missing Link in the Director's Cut

Together with the Heise Online editorial team and the Federal Competition for Artificial Intelligence (BWKI), I had the opportunity to write an article for one of the country's largest online magazines. As is usually the case, the text was shortened somewhat before final publication. Here is the full, unabridged version.


The final version of the text can be found at Heise Online under the following link: Missing Link: Of Artificial Intelligence and Artificially Intelligent Learning

A huge thank you to the BWKI and the Heise Online editorial team for the opportunity & enjoy reading!

For most students, it happens twice a year: exam period. Six months of study effort compressed into two weeks full of assessments: teaching yourself as much as possible at the very last second and then, shortly afterward, putting it down on paper. More and more, the now-omnipresent artificial intelligence is also influencing how and what is learned. A personal account of studying in times of seemingly all-knowing artificial intelligence, from a student's perspective.

It has been almost exactly 10 years since former Chancellor Angela Merkel raised eyebrows, especially among younger people, with the statement "The internet is uncharted territory for all of us": by the time she said it, the invention of the internet was already more than 20 years in the past, and the dot-com bubble a good 15. Back then, the internet undoubtedly created entirely new opportunities and problems in education. It did not disruptively change how we learn; rather, it brought an evolution with new forms of media and learning methods, and education became more accessible in many places. Without learning management systems like Moodle, which worked more or less well, the coronavirus period would probably have been even more destructive for education than it already was.

And no question: the buzzword-heavy topic of "artificial intelligence" (usually reduced to the subtopic of chatbots like ChatGPT, which are also what this article primarily deals with) brings both opportunities and challenges. Which outweighs which I want to leave aside for now, because one thing is a fact: chatbots have arrived in our everyday lives.

For us students, that means adapting to new circumstances in many areas. Whether lecturers and professors actively plan for the use of tools like chatbots in their courses, prohibit them, or are still sliding the same transparencies across the overhead projector as they did twelve years ago, in the end students will use artificial intelligence somewhere to expand, simplify, or vary tasks. And this happens — at least in my experience — across the entire student body: regardless of native language and technical background, the possibilities and access methods have reached everyone and every subject.

It gets more difficult when it comes to assessing whether using a chatbot in a given subject will actually lead to helpful results. Explaining the principle of emission spectroscopy in experimental physics works quite well, even in native languages other than German and English and even when follow-up questions are asked. But when it comes to solving higher-mathematics problems step by step, the subject-matter depth suddenly becomes very thin, even in languages strongly represented in chatbot training data, such as English and German. Too thin to give correct answers to follow-up questions, too thin to serve as the sole learning method for exams.

And yet that is exactly what often happens. Many people are not aware that "correctness" in the field of AI generally comes with substantial fuzziness; this gets drowned out by all the "AI headlines" portraying one success after another. If you are used to getting an expected output for a given input from computer algebra systems and simulation programs, chatbots are not exactly known for delivering the most consistent answers. Recent examples, such as the question of whether 9.11 or 9.9 is larger, which ChatGPT has vehemently answered with "Clearly, 9.11 is larger than 9.9!", show that not everything is always correct, not even primitive questions like this one. Worse still, it produces a chain of reasoning that dresses up the falsehood with context and makes it seem even more credible. But these stories rarely make it out of the tech bubble; the hype train is still far too loud and drowns out reports of AI missteps, so the impression of "perfect" systems remains. The result is blind trust in AI, a danger that higher education has so far addressed only minimally, if at all.
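The contrast with deterministic tools can be made concrete. A few lines of Python (a hypothetical illustration, not part of the original article) answer the 9.11-vs-9.9 question correctly and identically on every run, which is exactly the behavior we are used to from calculators and computer algebra systems:

```python
# Deterministic comparison: same input, same correct output, every time.
a, b = 9.11, 9.9

print(a > b)      # False: 9.11 is NOT larger than 9.9
print(max(a, b))  # 9.9
```

A chatbot asked the same question may answer differently from session to session; the program above cannot.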

There seems to be a kind of paralysis in which people either assume that students are not using ChatGPT and the like, or, if they are using them, that they have understood the topic of AI so thoroughly that they need no help at all in classifying the results. But that is mistaken: many students take what AIs generate at face value, as the Bavarian Research Institute for Digital Transformation was also able to show in an analysis.

There needs to be room for rethinking

But of course, introducing a new subject just for dealing with AI would not be particularly useful either. Far more helpful would be concrete examples in lectures of how AI can be used correctly in each subject: what tutorials already do for applying the material from lectures should from now on also include briefly highlighting subject-specific ways of dealing with artificial intelligence. Not three hours every week, but a note such as "careful, a chatbot cannot draw correct free-body diagrams" would immediately help many people. AI is going to be used anyway; better with context than behind closed doors, as currently happens. In the end, artificial intelligence remains matrix multiplication, not magic, even if many currently perceive it that way and use it with that mindset.
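The "matrix multiplication, not magic" point can be taken quite literally. A minimal sketch (illustrative only; the sizes and values are made up) of a single neural-network layer is nothing more than a matrix-vector product, a shift, and a simple nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer" of a neural network: weight matrix W, bias b, input x.
W = rng.normal(size=(4, 3))       # 4 outputs, 3 inputs
b = rng.normal(size=4)
x = np.array([1.0, -0.5, 2.0])    # an arbitrary input vector

# Forward pass: matrix multiplication plus bias, then a ReLU activation.
h = np.maximum(0.0, W @ x + b)

print(h.shape)  # (4,)
```

Large language models stack many such layers, but the core operation stays the same: multiply, add, apply a nonlinearity.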

One could now argue that students on the highest track in the German education system can be expected to assess such risks themselves ("ignorance is no defense"). But this assessment of dangers can only arise if space is opened up for discussion, far away from headlines and short videos with AI voices that promise the "killer prompt for PERFECT stoichiometric calculations" and the moon in 15 seconds. The willingness to talk about AI is there on the student side; now all that is needed is the opportunity to do so without the risk of consequences and intimidation.

The occasional statement by some lecturers that "anyone who uses ChatGPT in this course is out" is not very helpful, and neither is the mantra "they (the students) will be unemployed thanks to AI in the future anyway, so why even bother?" As Prof. Dr. Jörn Loviscach quite rightly stated in his Missing Link article: "technology and materials" alone are not enough to learn successfully. There also needs to be room for exchange and opportunities for ungraded application, precisely what tutorials at every technical university have provided for decades to clarify open questions and prepare for exams.

What still counts as achievement when an AI independently does my design work?

Meanwhile, some universities are slowly adapting organizationally to the new wave of technical possibilities, which in some places, especially in the humanities, has led to the discontinuation of term papers and bachelor's theses. In these disciplines, which traditionally work heavily with text production, the talkative chatbots have already brought about the first changes. It is only a matter of time before technical degree programs, too, can no longer rely as heavily on term papers, long lab reports, and theses: Autodesk has recently started building an AI assistant directly into its in-house CAD programs, and with Text to CAD there are now even approaches that decouple entire design projects from manual work. We have to ask ourselves how work samples can still be assessed fairly in the future, whether it is the supporting structure of future civil engineers, the shaft of mechanical engineers, or the electronic speed controller of electrical engineers. Right now, a lot points to universities answering this question with "more exams," probably the simplest answer to this complex issue. That is terribly unfortunate, because there has rarely been a better opportunity to weave old, proven concepts (written exams, oral exams) together with newer ideas (graded peer teaching, Socratic seminars, or group project work with peer review) and deliver real added value for students.

We do not need a revolution, but an evolution of assessment; just as the internet did, new concepts are bringing further possibilities onto the universities' playing field. It is time to play those cards as well instead of adding the seventh exam to the second semester because the documentation work otherwise spread across the semester has seemingly become pointless due to ChatGPT.

A déjà vu

Just as we students grew up with smartphones and the internet, the generation currently in early childhood will grow up directly with the possibilities and dangers of AI. For us students, for whom large transformer-based language models simply did not exist during our school years, that means learning a completely new way of dealing with them. Higher education can and must help with that: not only by creating space to exchange ideas about the meaningful use of chatbots, but also by providing modern chatbots. It cannot be the case that students can buy themselves better chatbots and thereby gain an advantage over less well-off fellow students. And since chatbots are used at least as much as the library (if not more!), it is not too much to ask that part of our semester fees be invested in access to modern language models.

Just as we young people responded to the 2013 statement "The internet is uncharted territory for all of us" with a certain smile, the tables have suddenly turned: "AI is uncharted territory for all of us." Before we lose touch as we did with broadband expansion, it is worth talking openly about the topic and looking for meaningful spaces for it in higher education. Not forcing AI into every subject and topic, but showing how AI can help us students and how it cannot. We will still need to know trigonometry in the future; writing the detailed accompanying prose for a technical data sheet, probably not. And if AI can help us understand trigonometry, all the better.