By Eben van Tonder, 28 May 2025

About this Series: In Conversation with Christa Berger
This series is published under Origins Global Meats, a section within EarthwormExpress dedicated to our consultancy company. We specialise in mass-market, gourmet, and meat-hybrid formulations. Our services span new factory design, production line set-up, profit optimisation, brand communication, and research and development.
It is in the final two areas, brand communication and R&D, that our work connects directly with Korrekturdienst, the company led by Christa Berger. Based in Austria, Korrekturdienst offers crystal-clear, academically and culturally precise German-language services, delivered with absolute confidentiality and the highest level of accuracy and personal attention. It is the definitive standard for error-free, context-aware academic and professional communication in German. Nothing vague. Nothing missed. Nothing less than exact.
This series provides insight and clarity into matters related to academic and scientific writing and the use of AI in writing and R&D.
Introduction
We met again in Graz, this time at the Kunsthaus café, with its sweeping glass curves and quiet vantage over the Mur. Outside, the city moved at its usual unhurried pace. Inside, our conversation turned sharply toward technology.
I had asked Christa to explain, not just philosophically, but technically, why AI, for all its usefulness, cannot replace what she does at Korrekturdienst.
She took out a notepad, drew a quick diagram, and began.
How AI Actually Works: A Functional Overview
Christa began by explaining that most AI language models work through probabilistic sequence prediction. They do not understand meaning; they calculate which word is statistically most likely to come next, based on patterns in their training data.
“The result sounds fluent because it mimics patterns, but those patterns aren’t governed by internal logic. They are echoes.”
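To make the mechanics concrete, here is a minimal sketch, in Python, of what probabilistic sequence prediction means, reduced to a toy bigram model. It is an illustration under heavy simplification, not how any production model is built: the tiny corpus is invented for this example, and real systems replace these frequency counts with neural networks. But the generating loop, picking a statistically likely next word, is the idea Christa described.

```python
# A toy illustration of probabilistic sequence prediction: a bigram model.
# Real language models use neural networks with billions of parameters,
# but the core loop is the same: score candidate next words by likelihood
# and pick one. The tiny "corpus" below is hypothetical, for illustration.
import random
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model does not understand the word "
    "the word is chosen because it is likely"
).split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    if not candidates:
        return None  # dead end: this word never appeared mid-sequence
    return random.choices(list(candidates), weights=candidates.values())[0]

# Generate a continuation. Any fluency here comes purely from frequency;
# at no point does the program know what the words mean.
text = ["the"]
for _ in range(8):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```

Run it a few times and the output reads locally fluent because it echoes the corpus, yet nothing in the program represents meaning. Scaling the statistics up changes the quality of the echo, not its nature, which is precisely the point Christa was making.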
She drew a simple comparison:
- AI is like a fast musical improviser with no ear.
- A human is slower, but can tell when a note carries the wrong emotional weight.
AI is superb for speed, breadth, and mechanical consistency. But it lacks:
- Contextual awareness
- Cultural literacy
- Ethical judgement
- Structural intent
Where AI Belongs in Research Workflows
Christa outlined where AI fits best:
- Literature discovery: Quickly surfacing recent publications or summarising complex findings
- Draft generation: Helping writers move past the blank page with basic scaffolds
- Language refinement: Offering clearer or more grammatically consistent rewordings
But she warned: “If you’re relying on AI to verify facts, cite references, or provide accurate source material, you’re on dangerous ground.”
I was fascinated when she explained that AI models are not connected to live databases or source-verification systems. They generate language by predicting plausible text sequences based on training data, not by retrieving or cross-checking factual information. This means they can, and often do, fabricate references, invent authors, and produce entirely fictional publications that sound convincing but don’t exist. These hallucinations occur because the model is optimising for fluency and likelihood, not truth. In academic or technical contexts, this can introduce serious errors, erode credibility, and even amount to accidental fraud. The same caution applies to using AI to write your argument when you don’t yet have one: AI helps execute, but it cannot formulate the insight.
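That gap suggests where the human belongs in the loop. As a small, hedged sketch, not anything Korrekturdienst itself uses, here is one way a reviewer’s tooling might test whether a DOI an AI tool has emitted actually exists, by querying the public CrossRef registry. The function name lookup_doi and the choice of API are my own; the DOI shown is a real, known-valid one, used purely for illustration.

```python
# A sketch of the kind of check a human (or their tooling) can run and a
# language model cannot: resolving a DOI against a live registry.
# Assumes network access and the public CrossRef REST API
# (https://api.crossref.org).
import json
import urllib.error
import urllib.request

def lookup_doi(doi):
    """Return CrossRef metadata for a DOI, or None if the DOI is unregistered."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)["message"]
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # no such DOI: the reference may be fabricated
        raise

record = lookup_doi("10.1145/3442188.3445922")
if record is None:
    print("DOI not found; treat the reference as suspect.")
else:
    print("Registered title:", record["title"][0])
```

A fabricated citation of the kind Christa describes fails this check immediately. What no script can check is whether a real source actually supports the claim attached to it; that judgement remains human.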
Why Human Proofreading Cannot Be Replaced
Christa was firm. Proofreading, in the full sense of the term, is not grammar correction. It is the final safeguard of meaning.
She explained:
- Humans understand narrative flow and whether sections align logically
- Humans catch contradictions, tonal drift, and broken reasoning
- Humans ask, “Is this true?” AI cannot.
“It’s not that AI makes mistakes. It’s that it doesn’t care if it does.”
At Korrekturdienst, this human oversight isn’t cosmetic; it’s ethical. It’s where authorship is reclaimed.
The Role of Structure in Holding Meaning
Christa sketched a diagram of how ideas often collapse under poor structure:
- Thesis unsupported
- Claims out of sequence
- Transitions missing
“Structure is not decoration. It is the vessel that carries clarity.”
AI can mimic outlines. But only a human can feel when the structure doesn’t fit the thought. AI does not build logical arguments; it presents related facts, and facts are no substitute for reasoning.
This is because AI does not understand meaning. It predicts patterns. It lacks a sense of hierarchy, emphasis, or argumentative flow. When it generates content, it often mistakes proximity for logic and surface fluency for depth. The result can be a sequence that sounds polished but falls apart under scrutiny. Without a human to impose intellectual order, deciding what matters, what leads, and what must be earned, AI-generated structures quickly become hollow or misleading.
The Future of Language Still Needs Humans
Christa reflected on where things are heading.
“We will use AI more and more. But the more we do, the more important it is that someone still listens. Not just for errors. Writing begins with intention, and intention is something AI can’t supply!”
What Christa offers at Korrekturdienst is not just editing. It’s human alignment. Between idea and sentence. Between author and reader.
In a world accelerating toward automation, she reminds us that meaning is not the product of fluency but the result of care. Logic! Intention! Thought!
In response to all this, I tell Christa that I use AI extensively. Almost every task I perform, whether scientific, strategic, or even exploratory, I run through AI. It has become a powerful extension of my thinking. But it comes with limitations that I’ve come to recognise sharply.
“There are no transitions,” I said. “No real stacking of arguments. The structure is always mechanical. The logic doesn’t evolve. It resets with every paragraph.”
Even in my meat science writing, the problem remains. There’s data, yes. But little interpretation. The prose runs too long. The arguments don’t flow to a tight thesis. It writes. But it doesn’t think.
“For sure, it speeds things up. What used to take me a week or a month, I can now get out in one night. But it always has to be checked. Every piece. Every sentence.”
I paused.
“And from a research perspective, the number of times it gives me information that’s just wrong is staggering. Worse, it doesn’t distinguish between information and interpretation. It serves you both with the same tone.”
Christa nodded.
“Exactly. That’s the problem. It presents fragments as if they were frameworks. But only humans know what matters.”
We sat in silence a while longer.
She closed her notebook and signalled the waiter. We paid the bill and stepped out into the chilly Graz air, both knowing this conversation wasn’t ending. It was only deepening.
Back to Series Home Page:
For more articles like these, visit the series home page at In Conversation with Christa Berger: Holding Meaning in a Machine Age