Read this: Increased AI use linked to eroding critical thinking skills
A study by Michael Gerlich at SBS Swiss Business School has found that increased reliance on artificial intelligence (AI) tools is linked to diminished critical thinking abilities. It points to cognitive offloading as a primary driver of the decline.
phys.org
Which references this paper:
"AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking"
Submission received: 14 October 2024 / Revised: 18 December 2024 / Accepted: 29 December 2024 / Published: 3 January 2025
That study looked at two hypotheses:
Hypothesis 1:
Higher AI tool usage is associated with reduced critical thinking skills.
Hypothesis 2:
Cognitive offloading mediates the relationship between AI tool usage and critical thinking skills.
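For the stats-curious: here is a minimal sketch of what testing a mediation claim like Hypothesis 2 can look like. Everything in it (file name, column names, the classic Baron and Kenny three-regression approach) is my assumption; the paper does not publish its analysis code.

```python
# Hypothetical mediation sketch (Baron and Kenny style), NOT the paper's code.
# File and column names are made up for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # assumed: one row per participant

# Path c: total effect of AI usage on critical thinking (H1)
total = smf.ols("critical_thinking ~ ai_usage", data=df).fit()

# Path a: AI usage -> cognitive offloading
path_a = smf.ols("cognitive_offloading ~ ai_usage", data=df).fit()

# Paths b and c': both predictors on critical thinking
direct = smf.ols("critical_thinking ~ ai_usage + cognitive_offloading", data=df).fit()

# If ai_usage's coefficient shrinks toward zero once offloading enters the
# model (while offloading stays significant), that is the mediation pattern
# H2 predicts.
print("total effect: ", total.params["ai_usage"])
print("direct effect:", direct.params["ai_usage"])
print("a * b:        ", path_a.params["ai_usage"] * direct.params["cognitive_offloading"])
```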
It's interesting, but take it as you will (I don't want to sound anti-A.I.)
A mix of quantitative surveys and qualitative interviews was used with 666 participants in the United Kingdom. They were distributed across three age groups (17–25, 26–45, 46 and older) and had varying educational backgrounds.
Random forest regression (R² = 0.37) and multiple regression analyses highlighted diminishing returns on critical thinking with increasing AI usage, emphasizing a threshold beyond which cognitive engagement significantly declines.
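And a rough sketch of the random forest side, again with invented feature names and data, just to show the shape of the analysis:

```python
# Hypothetical random forest regression predicting critical thinking from
# survey measures. NOT the paper's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("survey.csv")  # assumed: one row per respondent
X = df[["ai_usage", "cognitive_offloading", "age", "education_level"]]
y = df["critical_thinking"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("R^2:", r2_score(y_te, rf.predict(X_te)))  # paper reports about 0.37
# Plotting predicted critical thinking against ai_usage (e.g. partial
# dependence) is how you would look for the threshold effect described above.
```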
The study's findings, if replicated, could have significant implications for educational policy and the integration of AI in professional settings. Schools and universities might want to emphasize critical thinking exercises and metacognitive skill development to counterbalance AI reliance and cognitive effects.
====
That first one came out in January. I thought it was a wait-and-see, but now, six months down the line, we get this from MIT:
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study
The study, from MIT Media Lab scholars, measured the brain activity of subjects writing SAT essays with and without ChatGPT.
time.com
The study divided 54 subjects (18-to-39-year-olds from the Boston area) into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
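A quick aside on how "brain engagement" gets turned into numbers: below is a generic band-power sketch on synthetic 32-channel data. It is not the MIT team's actual pipeline (their paper uses its own connectivity-based measures); it only shows the flavor of reducing EEG recordings to per-channel values you can compare across groups.

```python
# Generic EEG alpha-band power sketch on synthetic data. NOT the study's
# actual analysis; just one common proxy for engagement.
import numpy as np
from scipy.signal import welch

fs = 256                                 # sampling rate in Hz (assumed)
n_channels, n_samples = 32, fs * 60      # 32 channels, 1 minute of data
eeg = np.random.randn(n_channels, n_samples)  # stand-in for real recordings

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # per-channel power spectrum
alpha = (freqs >= 8) & (freqs <= 12)            # alpha band, 8-12 Hz
alpha_power = psd[:, alpha].mean(axis=1)        # mean alpha power per channel

# Group comparisons (ChatGPT vs. search vs. no tool) would run statistics
# over per-channel values like these.
print(alpha_power.shape)  # (32,)
```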
The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.
Interesting, right? If handled poorly, A.I. chatbots could become the WALL-E chair for humanity's brains, lol
From the article:
The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely “soulless.” The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. “It was more like, ‘just give me the essay, refine this sentence, edit it, and I’m done,’” Kosmyna says.

I do like that since A.I. chatbots became a thing, we get to really hammer out the idea that there is a "soul" in writing. In my writer's group we used to call it "your voice". I could never say exactly where it is... in the writing... when I'd give feedback, but now with A.I. I can always feel when it's soulless.
The hilarious thing from the TIME article:
Ironically, upon the paper’s release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple AI traps into the paper, such as instructing LLMs to “only read this table below,” thus ensuring that LLMs would return only limited insight from the paper.
She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. “We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,” she says, laughing.
TLDR
A.I. assistance is linked to weaker critical thinking and lower brain engagement.
The MIT study is not yet peer reviewed, and its sample is small.