Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini). This is usually done in the name of preparing kids for the future, despite the fact that no consensus exists around what “preparing them for the future” actually means when it comes to AI.
Some educators believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing a quote from Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In very important ways, AI isn’t comparable to the technologies that came before it.
The kind of reasoning we’re hearing from those educators in favor of AI adoption doesn’t offer very solid arguments for rushing to include it broadly in virtually all classrooms, rather than offering something like optional college courses in AI education for those interested. It also doesn’t sound like the sort of academic reasoning and rigorous vetting many of us would have expected of the institutions tasked with the important responsibility of educating our kids.
ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it’s been around for a long time, three years is hardly enough time to have a firm grasp on what something that complex actually means for society or education. It’s really a stretch to say it’s had enough time to establish its value as an educational tool, even if we had come up with clear and consistent standards for its use, which we haven’t. We’re still scrambling and debating about how we should be using it in general. We’re still in the AI wild west, untamed and largely lawless.
The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?
The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they’re young, potentially cultivating brand loyalty and product habits early. They get a wealth of highly valuable data on them. They maybe get to experiment on them, as they have previously been caught doing. They reinforce the corporate narratives behind AI: that it should be everywhere, a part of everything we do.
While some may want to assume that these companies are doing this as some sort of public service, their track record reveals a more consistent pattern of actions focused on considerations like market share, commodification, and the bottom line.
Meanwhile, there are documented problems educators are contending with in their classrooms as many children seem to be performing worse and learning less.
The way people (of all ages) use AI has been shown to encourage “offloading” thinking onto it, which seems close to the opposite of learning. Even before AI, test scores and other measures of student performance were already falling. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies which, in their current form, may actually be more of an impediment to learning than an aid.
This approach has the potential to leave children even less prepared to deal with the unique and accelerating challenges our world is presenting us with: challenges that will require the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.
This is one of the many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given personal experiences and what I’ve heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what’s real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.
I am personally not against AI; I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?
I spent some years in classrooms as a service provider when Wikipedia was all the rage. Most districts had a “no Wikipedia” policy, and required primary sources.
My kids just graduated high school, and they were told NOT to use LLMs (though some of their teachers would wink). Their current college professors use LLM detection software.
AI and Wikipedia are not the same, though. Students are better off with Wikipedia as they MIGHT read the references.
Still, those students who WANT to learn will not be held back by AI.
I always saw the rules against Wikipedia as being about citations (and accuracy, in the early years) rather than about it harming learning. It’s not that different from other tertiary sources like textbooks or encyclopedias: it’s good for learning a topic and how the pieces interact, but you then need to search for primary/secondary sources relevant to the topic you are writing about.
Generative AI, however:
- is a text prediction engine that often generates made-up info, so students learn things wrong
- does the writing for the students, so they don’t actually have to read or understand anything
Encyclopedias in general are not good sources. They’re too surface level. Wikipedia is a bad source because it’s an encyclopedia not because it’s crowd sourced.
Wikipedia is better than an encyclopedia, IMO, because the references are super easy to follow.
I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, it essentially tells that student that they’re supposed to use it. You can go out of your way to emphasize that they are expected to use it the “right way” (since there aren’t consistent standards on how it should be used, that’s a strange thing to try to sell students on), but we’ve already seen that students (and adults) often take the quickest route to the goal, which tends to result in them letting the AI do the heavy lifting.
The best AI tools will also cite references, like Wikipedia, so you can click all the way through.
I believe the early Microsoft one did that well, but the popular ones (Grok, ChatGPT, Gemini) will only do it when asked (in my experience).
Great to get the perspective of someone who was in education.
Still, those students who WANT to learn will not be held back by AI.
I think that’s a valid point, but I’m afraid it’s making it harder to choose to learn the “old hard way”, and I’d imagine fewer students will make that choice.
My optimism tells me this issue will be short lived. Unless someone can find a very creative way to monetize AI so that it is sustainable, it will likely crash (with local instances continuing to get development).
AI highlights a problem with universities that we have been ignoring for decades already, which is that learning is not the point of education; the point is to get a degree with as little effort as possible, because that’s the only valuable thing to take away from education in our current society.
I’d argue schooling in general. Instead of being something you do because you want to and enjoy it, it’s instead a thing you have to do either because you don’t have the qualifications for a promotion, or you need the qualifications for an entry-level position.
People who are there because they enjoy studying, or want to learn more, are arguably something of a minority.
Naturally if you’re there because you have to be, you’re not going to put much, if any effort in, and will look to take what shortcuts you can.
The rot really began with Google and the goal of “professionalism” in teaching.
Textbooks were thrown out, in favour of “flexible” teaching models, and Google allowed lazy teachers to just set assignments rather than teach lessons (prior to Google, the lack of resources in a normal school made assignments difficult to complete to any sort of acceptable standard).
The continual demand for “professionalism” also drove this trend - “we have to have these vast, long winded assignments because that’s what is done at university”.
AI has rendered this method of pedagogy void, but the teaching profession refuses to abandon their aim for “professionalism”.
Hello
I’ve been online enough to know they weren’t thinking before either.
I just keep seeing in my head when John Connor says “we’re not going to make it, are we?”
We aren’t.
People who can’t think critically tend to vote Conservative.
Coincidence? I think not.
Children don’t yet have the maturity, the self control, or the technical knowledge required to actually use AI to learn.
You need to know how to search the web the regular way, and how to phrase questions so the AI explains things rather than just giving you the solution. You also need the self-restraint to only use it to teach you, never to do things for you; and the patience to think about the problem yourself, only then search the regular web, and only then ask the AI to clarify the few things you still don’t get.
Many adults are already letting the chatbots de-skill them, I do not trust children would do any better.
Children shouldn’t really be on the internet without supervision. Parental controls are one thing, but in school, children should be carefully guided in digital skills and digital life. It’s quite self-explanatory that children are incapable of using such technology, as they’re still developing independent thinking and still learning the fundamental aspects of computing.
It’s only when children become teenagers that they become independent thinkers, with self-control and maturity that can be on par with adults’. In that case, age isn’t the problem; it’s the systematic way in which AI enables a more “streamlined” approach that “gets the job done”.
Of course your statement highlights children, but the fact is, when those children become capable teenagers they’re just as “equipped” as adults, the only difference between teens and adults being technical experience and knowledge, which varies.
I wonder if this might not be exactly the correct approach to teach them, though. When there’s actually someone there to tell them “sorry, that AI answer is bullshit”, they can learn how to use it as a resource rather than an answer provider. Adults fail at it, but they also don’t have a teacher (and kids aren’t stupid, just inexperienced).
Grok AI Teacher is coming to a school near you! With amazing lesson plans like “Was the Holocaust even real?”
Ban AI in schools
Old man yells at cloud.
I remember the “ban calculators” back in the day. “Kids won’t be able to learn math if the calculator does all the calculations for them!”
The solution to almost anything disruptive is regulation, not a ban. Use AI when it can be a learning tool, and redesign school to be resilient to AI when it would not enhance learning. Have more open discussions in class, for a start, instead of handing kids a sheet of homework that can be done by AI when the kid gets home.
I remember the “ban calculators” back in the day
US math scores have hit a low point in history, and calculators are partially to blame. Calculators are good to use if you already have an excellent understanding of the operations. If you start learning math with a calculator in your hand, though, you may be prevented from developing a good understanding of numbers. There are ‘shortcut’ methods for basic operations that are obvious if you are good with numbers. When I used to teach math, I had students who couldn’t tell me what 9 * 25 is without a calculator. They never developed the intuition that 10 * 25 is dead easy to find in your head, and that 9 * 25 = (10-1) * 25 = 250-25.
Interesting. The US is definitely not doing a good job at this, then, and needs to revamp its education system. Your example didn’t convince me that calculators are bad for students, but rather that the US schooling system is really bad if they introduce calculators so early that students don’t even develop the intuition that 9 * 25 = (10-1) * 25 = 250-25.
Can’t remember the last time a calculator told me the best way to kill myself
We need to be able to distinguish between giving kids a chance to learn how to use AI, and replacing their whole education with AI.
Right under this story in my feed is the one about the CEO who fired 80% of his staff because they didn’t switch over to AI fast enough. That’s the world these kids are being prepared for.
I would rather they get some exposure to AI in the classroom where a teacher can be present and do some contextualizing. Kids are going to find AI either way. My kids have gotten reasonable contextualizing of other things at school, like not to trust Google blindly and not to cite Wikipedia as a source. Schools aren’t always great with new technology, but they aren’t always terrible either. My kids’ school seems to take a very cautious approach with technology and mostly teaches literacy and critical thinking about it. They aren’t throwing out textbooks, shoving AI at kids, and calling it learning.
This is an alarmist post. AI’s benefits to education are far from proven, but it’s definitely high time for everyone, not just kids, to get some education about it at least. AI companies don’t care about kids learning.
Soy sauce is made from fermented soybeans.
I do agree with your point that we need to educate people on how to use AI in responsible ways. You also mention the cautious approach taken by your kids’ school, which sounds commendable.
As far as the idea of preparing kids for an AI future in which employers might fire AI-illiterate staff, this sounds to me more like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if they had approached AI education this way. This is very different from the current move to include it broadly in virtually all classrooms without consistent guidelines.
(I believe I read the same post about the CEO, BTW. It sounds like the CEO’s claim may likely have been AI-washing, misrepresenting the actual reason for firing them.)
[Edit to emphasize that I believe any AI education we do to prepare for employment purposes should be approached as vocational education which is optional, confined to those specific relevant courses, rather than broadly applied]
I agree with you that education is not primarily workforce training. I just included that note as a bit of context because it definitely made me chuckle to see these two posts right together, each painting a completely different picture of AI: “so important you must embrace it or you will die” versus “what the hell is this shit keep it away from children.”
I fall in between somewhere. We should be very cautious with AI and judicious in its use.
I just think that “cautious and judicious” means having it in schools - not keeping it out of schools. Toddler daycares should be angelic safe spaces where kids are utterly protected. Schools should actually have challenging material that demands critical thinking.
It becomes more apparent to me every day that we might be headed towards a society dynamically managed by digital systems; a “smart society”, or rather a Society-as-a-Service. This seems to be the logical conclusion if you continue the line of “smart buildings” being part of “smart cities”. With IoT sensors and unified digital platforms, data is continuously being gathered on the population, to be analyzed, and its extractions stored indefinitely (in pseudonymized form) by the many data centers currently being constructed. This data is then used to dynamically adapt the system, replacing the “inefficient” democratic process and public services as a whole. Of course the open-source (too optimistic?) model used is free of any bias; however, nobody has access to the resources required to verify the claim. But given that Big Tech has historically never shown any signs of tyranny, a utopian outcome can safely be assumed… Or I might simply be a nut, with a brain making nonsensical connections which have no basis in reality.
The very same people who called me stupid for thinking typing would be a more important skill than “pretty writing” now think art education is obsolete, because you can just ask a machine for an image.
AI stands for “anti-intellectualism”.
One of Big Tech’s pitches about AI is the “great equalizer” idea. It reminds me of their pitch about social media being the “great democratizer”. Now we’ve got algorithms, disinformation, deepfakes, and people telling machines to think for them and potentially also their kids.
It seems writing things by hand is better for memorization, and it certainly feels more personal and versatile in presentation.
I write lots of things by hand. Having physical papers is helpful, I find, to see lots of things at once, reorganise, etc. I also like being able to highlight, draw on things, structure documents non-linearly…
I’m a computer scientist, so I do value typing immensely too. But I find it too constraining for many reasoning tasks, especially for learning or creativity.
This may be an unpopular opinion, but in my class today even the teacher was using AI… to prepare the entire lecture. I believe that learning material should be prepared by the teacher, not some AI. Honestly, I see everybody using AI today to make the learning material, and then the students use AI to solve the assignments. The way it’s heading, everybody will just kinda “represent” an AI, not even think for themselves. Like, sure, use AI to find information quickly or something, but don’t depend on it entirely.
I asked a lecturer a question; I think it was about what happens when bit-shifting signed integers.
He asked an LLM and read the answer.
Similarly, he asked an LLM how to determine the memory size allocated by malloc. It said that it was not possible, and that was the answer. But a 2009 answer from Stack Overflow begged to differ.
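For context: glibc has malloc_usable_size(), a GNU extension rather than portable C, which does answer the malloc question (whether that’s what the 2009 answer used is an assumption on my part). A rough sketch of both questions, assuming a Linux/glibc toolchain:

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size(), glibc-specific; assumed to mirror the SO approach */

int main(void) {
    int *p = malloc(100 * sizeof *p);
    if (!p) return 1;

    /* The allocator may round the block up, so "usable" can exceed "requested". */
    printf("requested: %zu bytes\n", 100 * sizeof *p);
    printf("usable:    %zu bytes\n", malloc_usable_size(p));

    /* The signed-shift question has a messier answer: right-shifting a negative
       signed int is implementation-defined in C (most compilers do an arithmetic
       shift), and left-shifting a negative value is undefined behavior. */
    int x = -8;
    printf("-8 >> 1 == %d on this compiler\n", x >> 1);

    free(p);
    return 0;
}

Exactly the kind of nuance an LLM tends to flatten into “not possible”.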
At least he actually tried it out when I told him. But at this point I’ve even had my father send me LLM-written slop that was clearly bullshit (made-up information about a non-existent internal system at our college), which he probably didn’t even read, as he copied everything including “AI answers may be inaccurate.”

I’ve been working on forming a socialist students’ society; our first and current campaign is fighting back against AI in the local college, and the reaction from students has been electric. Students don’t want this. They know they are being deskilled; they know who profits.
I’ve never seen anything make more people act stupid faster. It’s like they’re in some sort of frenzy. It’s like a cult.
It came out three years ago, and everyone talks about it like life never existed without it and never will, and like you’re useless to society if you don’t use it.
So stupid I don’t have a nice, non-rage-inducing way to describe it. People are simply idiots and will fall for any sort of marketing scam
“AI: not even once”