Cheraw Chronicle

“In the humanities, artificial intelligence is like mechanical steroids.”

Bram de Ridder says the debate around AI in universities is currently focused on how teaching and assessment practices will change. “Much less attention is being paid to the logical follow-up question: How will AI impact scientific practice as a whole?”

That artificial intelligence is spreading through higher education at lightning speed is no longer news. As a history lecturer at KU Leuven, my academic year was likewise full of AI. Several first-year students confirmed that AI helped them write their annual paper (their first introduction to scientific writing), while a master's student used AI-generated images in his presentation on twentieth-century revolutions (his final oral test of scientific skills). Teaching assistants now even brighten up their lessons with AI-generated slides.

Given this speed, it makes sense that both philosophers and AI specialists are suggesting that higher education needs a thorough rethink, up to and including abandoning the master's thesis. That may seem a radical step, but in the humanities in particular it looks all but inevitable.

Abandoning the master's thesis is almost inevitable, especially in the humanities.

In these disciplines, text is the primary means through which scientific ideas are expressed. If the text of a master's thesis was (partly) generated automatically, did the student contribute enough original ideas? And the linguistic nuances that matter so much to humanities scholars: what if they were introduced only after correction by AI? Have these master's students learned to think creatively and precisely themselves, or have they mainly learned to recognize creative and precise thinking?

As with plagiarism checks, the clearest examples of problematic AI use will be detectable. But there will always be loopholes. Compare it to doping in sport. Blatant doping gets noticed, such as the scheme for which the Russian team was exposed at the Winter Games. But the clever use of tools and tricks to stay just within the limits, as with Lance Armstrong's cycling team, only comes to light much later, or not at all.

There will always be loopholes for problematic uses of AI.

This comparison in fact applies to scientific practice as a whole. In 2021, Marcel Levi, head of the Netherlands Organisation for Scientific Research (NWO), compared science to elite sport. The comparison drew criticism, but the analogy seems valid: in the positive sense of striving for excellence, always wanting to raise the level, and seeking out new challenges; but also in the negative sense, such as the personal life that has to give way, the psychological pressure to excel at all times, and the competitive atmosphere that sometimes encourages fraud and transgressive behavior.

Just as athletes face criticism when they underperform (too few goals, no wins in the spring classics, too many injuries…), scientists face criticism when they deliver fewer results (too few publications, no project grant obtained, unwillingness to take on extra tasks…).

Peer review with artificial intelligence

In the elite sport that is science, AI is currently an undetectable and barely regulated mechanical stimulant. Take peer review, in which every scientific text is examined by several fellow scientists (peers) before publication.

For the top athletes of science, conducting a thorough peer review is often tedious extra work. But what if you could write those reviews faster, for instance by running your basic criticisms through an AI program and then simply correcting the result here and there? That frees up time for other work, work that counts as performance, right?

Richard Evans, a prominent British historian, reported this summer that such practices are becoming increasingly common, including among humanities scholars. But AI-generated peer reviews raise the same questions as student work. How much of a review's content comes from the peer and how much from the AI software? Who adds the linguistic nuances to reviews, nuances that for editors often make the difference between publishing and not publishing? Are there rules, or at least guidelines, and are they applied uniformly?

Project proposals with artificial intelligence

Project texts for future research are also being written with the help of artificial intelligence, as two fellow Flemish historians have done. In both cases it was quickly added that the AI was "just polishing" the project text, for example to make the English of the grant application smoother and more precise. But that does not change the basic questions: to what extent do the ideas originate in the "mind" of the AI program, and who deserves credit for the linguistic prowess with which those ideas are expressed? Humanities scholars like to claim that every word counts, so, just as with their students, it matters to know exactly where their words come from.

These new practices already pose a significant challenge to research funders. Suppose a research team submits a challenging, rigorous, and very smoothly written research proposal. Does it make a difference whether AI helped them or not? How do you compare a higher-scoring proposal that used a lot of AI with a slightly lower-scoring proposal that relied on it less? How do those two compare with a third project that used no AI at all, or at least claims not to have? And what guidelines have the reviewers themselves set for the use of AI when writing their project reviews?

The answers to these questions will in practice determine which grants are awarded and who emerges as "top scientific talent." Unfortunately, science has no equivalent of WADA, the World Anti-Doping Agency that tries to enforce fair play. As a researcher, you currently simply have to trust that your competitors are equally transparent in reporting their use of AI in project applications and publications; that their institution does not apply laxer usage standards than your university; that they did not have (institutional) access to more expensive and better AI tools; and that peer reviewers judge their use of AI the same way they judge yours.

Unfortunately, there is no global scientific anti-doping agency that attempts to enforce fair play.

In short, the question marks over the future of AI in higher education are entirely justified, especially in the text-oriented humanities. But the essence of writing a master's thesis mirrors that of all the scientific practices on which it rests. So if the master's thesis urgently needs to be rethought, why not immediately rethink every scientific activity in which AI is already being used?

Bram de Ridder is an independent researcher in applied history. No artificial intelligence was used when writing this article.