- cross-posted to:
- technology@beehaw.org
Saved you a click:
After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.
First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing-assistance tool. The policy warns, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”
The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.
AIbros: we’re creating God!!!
AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit
The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.
The “AI” is just streamlining the process to save time.
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
the user needs to be smart enough to do whatever they’re asking anyway
I’m gonna say that’s ideal but not quite necessary. What’s needed is that the user is capable of properly verifying the output. Which anyone who could do it themselves definitely can, but it can be done more broadly. It’s an easier skill to verify a result than it is to obtain that result. Think: how film critics don’t necessarily need to be filmmakers, or the P=NP question in computer science.
This is where domain expertise would come in, no? It’s speeding up the work but it usually outputs generic content, and whatever else it injects while hallucinating. Therefore the validation part holds up I’d say.
This is absolutely the case, and honestly, at least for now how it needs to be across the board.
No one should be using AI to do things they’re incapable of doing (or undoing).
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
Relying on it in any circumstances (though medical stuff is understandable if you’re simply too poor or don’t have access) while it is exhausting water supplies and polluting the planet is stupid and instantly proves that you are stupid and inconsiderate.
Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.
Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.
To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.
Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.
;-)
Treating it like a tool instead of treating it like a God. What a novel idea!
So, it should be used reasonably, as it should have always been.
Seems like there should be a third exception. For those occasions where the article is about LLM generated text. They should be able to quote it when it’s appropriate for an article.
That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.
Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.
An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.
It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries on the top of articles. So kudos to the editor community’s resistance! ✊
Good point. The real strength of Wikipedia truly lies in the editors.
I know at least one writing major who won an award from his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI, they need people like him.
Banned the people who openly admit it, anyway.
There are AI detectors, although I’m not sure about their accuracy
very bad
There should be only one exception: In case someone needs an example of an AI-generated text.
LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.
W Wikipedia. Would be better to remove the exceptions, but it’s fine tbh.
Why do they need AI at all? Wikipedia had existed long before it and was doing fine.
You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.
…except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.
It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.
Wikipedia has banned AI-generated text,

… with two exceptions

But how do they know it is AI-written?
I was about to link to that, and specifically the stuff that now seems to have been moved to Signs of AI writing.
I thought that was a very interesting read, because it’s so much better than the usual AI ragebait that led to people getting pilloried over the fact that they actually know how to use em dashes. You can’t detect LLM use just by the fact that someone uses em dashes. It’s a complicated stylistic issue that usually boils down to “well, you know what ChatGPT output looks like when you see it”.
Ok but surely there must be an automated way. You can’t throw manpower at this because they will lose
There are no reliable automated LLM output detectors. Anyone who says otherwise is either trying to sell you snake oil, or unwittingly helping someone else sell it to you, I guess.
So the question still stands: how do they detect AI use? I am all for it btw. It is absolutely necessary, but I am afraid it is impossible to do or implement.
I think they just try their best
Actually the manual, high-volume https://en.wikipedia.org/wiki/Wikipedia:Recent_changes_patrol is ridiculously good
So in other words, when used responsibly as a tool with limitations, AI has it’s uses? Though very environmentally unfriendly uses?
*its
There should be a Wikipedia LLM with a sole purpose to check that the tone of the text is objective and matches Wikipedia standards.
The LLM should flag any changes it would make, and if the changes are above a threshold, the edit should be flagged for further review by a human.
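That threshold idea can be sketched without any LLM at all: given the original text and whatever revision the checker would suggest (the revision source is hypothetical here, and the 15% threshold is an assumed number, not anything from Wikipedia’s policy), a simple diff ratio decides whether a human needs to look.

```python
import difflib

# Hypothetical threshold: flag the edit if more than 15% of the text would change.
REVIEW_THRESHOLD = 0.15

def change_ratio(original: str, revised: str) -> float:
    """Fraction of the text that differs, via a character-level similarity diff."""
    similarity = difflib.SequenceMatcher(None, original, revised).ratio()
    return 1.0 - similarity

def needs_human_review(original: str, revised: str) -> bool:
    """True when the suggested changes are large enough to warrant a human look."""
    return change_ratio(original, revised) > REVIEW_THRESHOLD

# Identical text never gets flagged; a wholesale rewrite always does.
needs_human_review("The film was well received.", "The film was well received.")
```

A toy sketch, of course: a real reviewer queue would care about *what* changed (facts, citations, tone), not just how many characters moved.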
This is actually fascinating from a discourse perspective. The RfC mentions that AI detectors are unreliable, which is the whole problem.
I work on mapping public opinion across thousands of responses using AI as a tool to find patterns, not to detect individual writers. The difference matters.
We can detect patterns across a corpus without needing to prove any single person wrote it. That scale of analysis is what lets us see where opinion clusters, not just label individual posts.
Wikipedia’s ban is probably the right call for their use case. They need verifiable authorship for accountability. But we shouldn’t conflate that with not being able to use AI for understanding large-scale discourse.
You’re not working on anything, clanker.
For those wondering, check the timestamps in this account’s comment history, especially comments from 4 days ago or longer. Fully formatted multi-paragraph comments made 10-30 seconds apart. This is an LLM-controlled account.
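The check that comment describes is simple enough to sketch: given an account’s comment timestamps, count how many consecutive posts landed closer together than a human could plausibly write them (the 30-second cutoff is taken from the comment above; everything else here is an assumed toy setup).

```python
from datetime import datetime, timedelta

# Cutoff suggested in the thread: multi-paragraph comments under 30 s apart.
MIN_HUMAN_GAP = timedelta(seconds=30)

def suspicious_gaps(timestamps: list[datetime]) -> int:
    """Count consecutive comment pairs posted less than MIN_HUMAN_GAP apart."""
    ordered = sorted(timestamps)
    return sum(
        1
        for earlier, later in zip(ordered, ordered[1:])
        if later - earlier < MIN_HUMAN_GAP
    )

posts = [
    datetime(2024, 1, 1, 12, 0, 0),
    datetime(2024, 1, 1, 12, 0, 12),  # 12 s after the previous comment
    datetime(2024, 1, 1, 12, 0, 25),  # 13 s after the previous comment
    datetime(2024, 1, 1, 12, 5, 0),   # a normal, human-sized gap
]
print(suspicious_gaps(posts))  # → 2
```

Timestamps alone are weak evidence (people paste prewritten replies too), which is why it works better as one signal among many than as a verdict on its own.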
I can’t even write a two-sentence comment in 30s without overthinking. I do like to use formatting, but that doesn’t make it quicker…