Artificial intelligence and artificially intelligent learning: My Heise Online Missing Link article translated
Together with the Heise Online editorial team and the Bundeswettbewerb Künstliche Intelligenz, I had the opportunity to write an article for one of the country's largest online magazines. This is the translated version of the German article.
FYI: The full article is available from Heise Online and the director's cut version in German is available on my blog.

For most students, a kind of state of emergency sets in twice a year: exam season. Six months of material compressed into two weeks full of assessments, trying to cram as much as possible at the last minute and then put it down on paper shortly afterward. The omnipresent artificial intelligence is increasingly influencing how and what we learn - and, above all, how knowledge is tested.
ChatGPT and co. have already made their way right into the middle of university life and are shaking up the system: Generative AIs not only help with research and summarizing studies in technical subjects, but also optimize the structure of CAD models or create images of different types of defects in production facilities.
The technology and the countless tools offer a wide range of possibilities for juggling knowledge and information, trying out new approaches to old challenges, and using acquired knowledge more efficiently.
However, many universities and lecturers remain in a kind of state of shock and reflexively reduce the discussion to one striking problem: the potential for unfair advantage in (online) exams and, above all, in term papers and theses. The supposed solutions are not very original and merely attempt to prevent what can no longer be prevented: that once-demanding bachelor's and master's theses can now be written quickly by, or with the help of, generative AI.
And that's a terrible shame! Such measures, which aim to preserve the status quo for as long as possible and confine us students to the good old internet and traditional search engines, waste energy and valuable potential. What we students actually need is a university that encourages and supports us in exploring AI for our own purposes - and teachers who use their expertise to casually remind us, again and again, of the limits and fallibility of the technology.
Help for self-help
Regardless of whether lecturers and professors actively plan for the use of tools such as chatbots in their courses, prohibit their use, or are still pushing the same slides across the overhead projector as they did twelve years ago: Students will use artificial intelligence one way or another to expand, simplify, and vary tasks. And this is happening - at least in my experience - across the entire student body: regardless of native language and previous technical experience, the possibilities and means of access have arrived for everyone and in every subject.
It becomes more difficult when it comes to assessing whether the use of a chatbot actually leads to helpful results in a given subject. While ChatGPT confidently explains the principle of emission spectroscopy in experimental physics in several languages, it finds itself on very thin ice when faced with arithmetic problems in higher mathematics. Too thin to answer questions correctly; too thin to serve as the sole learning method and source of knowledge for exams. While computer algebra systems and simulation programs reliably produce a predictable output for a given input, chatbots are not exactly known for giving the most consistent answers.
The fact that ChatGPT vehemently answers the question of whether 9.11 or 9.9 is greater with “Of course, 9.11 is greater than 9.9!” not only reveals that language models lack an elementary understanding of numbers (and that this lack cannot be compensated for by statistical skills). Worse still, the language model constructs a chain of reasoning that embellishes the falsehood with context and thus makes it even more credible.
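One plausible (though unproven) explanation for this particular failure is that the model falls back on the conventions of software version numbers, where 9.11 really does come after 9.9. A minimal Python sketch contrasting the two readings (the function names are my own, purely for illustration):

```python
# Two readings of "9.11 vs 9.9": as decimal numbers
# and as version-style strings split on the dot.

def larger_as_decimal(a: str, b: str) -> str:
    """Compare the strings as decimal numbers."""
    return a if float(a) > float(b) else b

def larger_as_version(a: str, b: str) -> str:
    """Compare the strings segment by segment, like software versions."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return a if pa > pb else b

print(larger_as_decimal("9.11", "9.9"))  # 9.9  (mathematically correct)
print(larger_as_version("9.11", "9.9"))  # 9.11 (the "version number" reading)
```

Read as decimals, 9.9 is larger; read as version numbers, 9.11 wins, which is exactly the answer the chatbot defends.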
Universities would actually be the perfect place to examine the new technology critically and in all its facets. Instead, they currently seem to be in a kind of state of shock - either they assume that students don't use ChatGPT and its ilk, or that they have a good grasp of AI and don't need any help classifying the results. But they are wrong: many students take what AIs generate at face value, as the Bavarian Research Institute for Digital Transformation was able to show in an analysis.
Room for experiments
But what certainly won't help us now is a new subject on dealing with AI, or an “AI driver's license” that quickly teaches us how to use it. Far more helpful would be concrete examples in lectures from each subject area that vividly convey how AI can be used in a targeted way. AI must be taught in context and in passing. Repeated reminders such as “Be careful, a chatbot can't draw correct free-form images” would help many people immediately.
Critics might now retort that students at the highest level of the German education system should be able to assess such risks themselves. But such judgment can only arise if space is opened up for discussion, away from headlines and short videos with TTS voices promising the “killer prompt for PERFECT stoichiometric calculations” and the moon in 15 seconds. And above all, without the threatening backdrop that teachers sometimes create.
Threats from some lecturers, such as “Anyone who uses ChatGPT in this course will be kicked out,” or demotivating remarks such as “In the future, you'll be unemployed anyway thanks to AI, so why bother at all?” are simply counterproductive. As Prof. Jörn Loviscach rightly pointed out: “Technology and materials” are not enough to learn successfully. It also requires space for exchange and opportunities for unevaluated application, which is exactly what tutorials at every technical university have been doing for decades to clarify open questions and prepare for exams.
What still counts as performance?
Some universities are already adapting to the new wave of technical possibilities. Particularly in the humanities, which have traditionally centered on text production, the first universities have already abolished term papers and bachelor's theses. It is only a matter of time before technical degree programs are forced to follow suit: Autodesk recently integrated an AI assistant directly into its in-house CAD software, and with Text to CAD there are now even approaches to decouple entire design projects from manual work. We have to ask ourselves how work samples can still be assessed fairly in the future - whether it's the supporting structure of future civil engineers, the crankshaft of mechanical engineers, or the electronic speed controller of electrical engineers.
Probably the simplest solution to this problem is “more exams,” and at the moment there are many signs that universities are taking this route. This is very unfortunate, because there has rarely been a better opportunity to interweave old, tried-and-tested concepts (written exams, oral exams) with newer ideas (assessed peer teaching, Socratic seminars, or group projects with peer review) and deliver real added value for students. It is time to finally play these trump cards and not tack a seventh exam onto the second semester because the documentation work otherwise spread over half a year has seemingly become pointless due to ChatGPT.
Despite all the discussion about the technical aspects, the topic of equal opportunity must not be neglected. Free versions of the relevant AI tools may offer a good introduction, but they quickly reach their limits with complex workflows and problems. Universities should therefore provide modern chatbots, AI tools, and computing power for working with and training artificial neural networks themselves, just as is already common practice with classic software: we are given access to the latest CAD and EDA software suites, licensed for huge sums of money, and computer labs let us use powerful workstations. Only when it comes to the world of artificial intelligence do universities seem to want to wait and see whether and what catches on before purchasing licenses for students. It cannot be the case that well-off students can buy better chatbots and thus gain an advantage over the less well-off. And since chatbots are used at least as much as the library (if not more!), a portion of our semester fees should go toward access to modern language models.
Neuland, here we come!
Just as we young people greeted the former German Chancellor's 2013 statement “The internet is new territory for all of us” with a certain smile, we must now admit: “(Generative) AI is new territory for all of us” - and before we fall behind, as we did with broadband expansion, it is worth exploring the terrain together. Not to force AI into every subject and topic, but to show how AI can and cannot help us students. We will certainly still need to be able to do trigonometry in the future, but we probably won't need to write a detailed text to accompany a technical data sheet. And if AI can help us understand trigonometry, all the better.