I’m just old enough to have done my first university studies at the turn of the millennium: a time when we worked on computers, but the internet was in its infancy, when the library catalogue was slowly coming online, but online resources were non-existent. Now I teach and research in a world where the library is conveniently on my desk, where I rarely need to head to a physical library, and if I do so — and have the luxury of access to such resources — it’s because I still love the smell and feel of books, and prefer flipping through them physically.
Yet our teaching schemes, at both secondary and university level, are often still content-based, focused on schooling and learning material rather than on encouraging critical thinking. So, while I understand the anxiety of teachers having to deal with fairness and ChatGPT, my generation often fed on SparkNotes, or their equivalents, and was assessed on material we often learnt without it properly touching our brains. Let’s face it: thinking critically has not been a strong point of our educational systems. One need only look at what is shared uncritically on social media (by people with multiple university degrees) to see that, despite stellar educations, too many people lack the most basic sense of critical thinking, and are far too gullible.
In the present day, of course, the great frenemy is ChatGPT and similar large language models. I understand and sympathise with teachers who dread having to assess work that could be AI-based, when even plagiarism tools are not exactly designed to pick this up (yet). Honestly, while in the immediate sense the urgency is to figure out a way forward, in the longer term the challenge is rather how to include AI in what we train for. (Mine was the last generation not allowed to use a calculator in Maths class, and we still had to work with logarithmic tables instead. When was the last time you used a logbook?)
What is AI?
A scout leader friend of mine, who’s way more into the field than I am, told me a few months ago something that set me thinking. AI is a misnomer. Artificial Intelligence is not “intelligent” in the way we humans would normally understand it. Rather, it is a statistical tool that works with a massive amount of data. So it’s useful, but it’s a statistical tool. If you understand how it works, you might figure out how to use it …
Just out of curiosity, I decided to ask ChatGPT the following question:
Do I understand correctly that chatGPT works as a statistical tool that formulates answers based on a massive amount of data?

Here’s the answer that ChatGPT gave me (ironically, of course, this too is generated in the same way as it describes):
Yes, that’s essentially correct! ChatGPT, like other large language models, is a type of artificial intelligence trained on vast datasets of text to predict and generate plausible responses to given inputs. Here’s a breakdown of how it works:
1. Statistical Modeling:
- The model learns patterns, relationships, and structures in the data during its training process. It doesn’t “know” or “understand” in a human sense but uses probabilities to determine the most likely sequence of words that fits a given input.
2. Training on Text:
- The training data includes books, articles, websites, and other publicly available text sources (up until its knowledge cutoff for the underlying training). By analyzing this data, the model captures syntactic, semantic, and contextual patterns.
3. Generating Responses:
- When you provide an input (e.g., a question or a statement), the model analyzes the context and generates a response based on what it “learned” during training.
- It doesn’t retrieve pre-written answers but rather constructs them in real-time based on probabilities of word sequences.
4. Limitations:
- Since it relies on patterns in data, it can sometimes produce responses that are incorrect, outdated, or overly generic.
- It doesn’t have consciousness or understanding; it mimics comprehension through complex statistical predictions.
5. Fine-Tuning:
- The model can be adapted or fine-tuned for specific tasks, such as customer support or scientific research, by exposing it to more specialized datasets.
In short, ChatGPT acts as a sophisticated statistical tool for language generation, crafting responses informed by learned patterns in its training data. While it can seem insightful or knowledgeable, it’s ultimately a probabilistic system rather than a reasoning or sentient entity.
Put in a nutshell, such AI tools already beat us humans hands down when it comes to the sheer volume of data they can handle and their capacity for working with statistical information. It is a probability tool, one that can process vast amounts of data and synthesise an answer.
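To make the “probability tool” idea concrete, here is a deliberately tiny sketch of the core principle of next-word prediction by frequency. This is only an illustration of the statistical idea, not how ChatGPT actually works: real language models use neural networks trained on billions of parameters, not word-count lookup tables, and the corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# A toy corpus, invented purely for illustration.
corpus = (
    "the library is on my desk . "
    "the library is online . "
    "the model predicts the next word ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("library"))  # "is" follows "library" twice
print(most_likely_next("the"))     # "library" is the most frequent follower
```

The point of the toy is the mechanism: the model never “knows” what a library is; it only knows which word tends to come next, given what it has already seen. Scale that idea up enormously and you get the kind of plausible-sounding text the quoted answer above describes.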
However, its limits are clear: it is not designed to be creative; it can only mimic human reasoning (or human writing, art, or music), and mimicry is craft, not creation. It does not think critically for itself, and it does not actually produce “new” knowledge. But it can take over the daily grind of plenty of tasks, and, honestly, if it can, why not?
The short-term concern is how to detect and mitigate the use of AI in the classroom. In the longer term, however, we rather need to change our teaching, to train for what are distinctly human strengths.