Wikipedia talk:Writing articles with large language models


FAQ

  • What is the purpose of this guideline?
    • To establish a ground rule against using AI tools to create articles from scratch.
  • This guideline covers so little! What's the point?
    • The point is to have something. Instead of trying to get consensus for the perfect guideline on AI, which doesn't exist, we have been in practice pursuing a piecemeal approach: restrictions on AI images, AI-generated comments, etc. This is the next step. Eventually, we might merge these into a single guideline on AI use. This proposal is designed to be as simple and straightforward as possible so as to easily gain consensus.
  • Why doesn't this guideline explain or justify itself?
    • Guidelines aren't information pages. We have plenty of information already about why using LLMs is usually a bad idea at WP:LLM.

Cremastra (talk · contribs) 20:43, 24 October 2025 (UTC) [reply ]

RfC


Should this proposal be accepted as a guideline? (Please consider reading the FAQ above before commenting.) Cremastra (talk · contribs) 20:56, 24 October 2025 (UTC) [reply ]

Introduction


Is this going to solve everything about AI? Of course not. But the current state of affairs of no real formal P&Gs on LLMs in articles – just WP:LLM, which is an essay – is a huge weakness. We need guidelines to point to, not essays. Let's write out what our practical ground rules are: for starters, that using a chatbot to write a Wikipedia article for you is not a good plan.

We could spend ages writing up one page that reflects all our views on AI. We tried that, and it failed. So I'm trying this instead.

This proposal doesn't say a lot of things. It doesn't say anything about using LLMs to find sources or rephrase paragraphs or even write a new section for you. It's just about writing a new article from scratch with a chatbot, which I think there's consensus against. Thank you. Cremastra (talk · contribs) 20:56, 24 October 2025 (UTC) [reply ]

Survey

  • Yes as proposer. It's about time we articulated this: LLMs may have their uses, and there are plenty of edge cases, but using ChatGPT to generate your article for you is not one of those. We have CSD G15, but that's a "practical" thing. We need the theory, the guidelines, to back it up. Statements of principle. Something very simple we can point new users to, that anyone can understand. This is that. Cremastra (talk · contribs) 20:56, 24 October 2025 (UTC) [reply ]
  • No. I would explain the reasons, but I would use more time on that than what has been used to write this "proposal". Really, just 5 lines? Even an AI-generated guideline against AIs would have done it better. Cambalachero (talk) 21:46, 24 October 2025 (UTC) [reply ]
    So the primary characteristic you look for in guidelines is length, rather than concision, efficacy, or accuracy? Good to know. I cordially direct you to WP:KISS. Cremastra (talk · contribs) 23:29, 24 October 2025 (UTC) [reply ]
    Not in guidelines, in proposals. I expect to see a rationale better than a "because I said so!" (and "this is forbidden because this policy forbids it" is just that in wikivoice). And being a complex topic, I expect to see a proposal that accommodates all those complexities in a fluid manner. Yes, "just ban everything and call it a day" is a simple proposal... simple like unleashing a bull in a china shop. Cambalachero (talk) 00:24, 25 October 2025 (UTC) [reply ]
  • Yes in principle and support the urgency, but I'd like to see it workshopped a little more. I love concise guidelines; this only needs a couple more sentences and to point to WP:LLM#Usage. I know this is a big issue over at AFC, but what's the substantial difference between banning the use of AI to generate new articles from scratch and not banning content generated from scratch in general? Imo what we need is a guideline banning uncritically using LLMs to generate content from scratch, and an essay which gives editors tips on how they can use AI to help them (i.e. personally understand the topic, find sources, spell/grammar check, etc.). Kowal2701 (talk) 22:27, 24 October 2025 (UTC) [reply ]
  • Yes, although I agree with Kowal2701 that this can also be extended to generating new content from scratch in general. This guideline doesn't specify any detection mechanisms, but that makes sense, as that isn't the purpose of a guideline. In the same way as we can agree to ban undisclosed paid editing or sockpuppetry without having to make an exhaustive list of all the ways they could be spotted. These will naturally be developed and evolve alongside AI technologies themselves, and we obviously shouldn't accuse people based on "tells" without any concrete evidence. Chaotic Enby (talk · contribs) 23:17, 24 October 2025 (UTC) [reply ]
    This guideline doesn't specify any detection mechanisms, but that makes sense, as that isn't the purpose of a guideline. In the same way as we can agree to ban undisclosed paid editing or sockpuppetry without having to make an exhaustive list of all the ways they could be spotted. Yes, this, exactly. Thank you. Cremastra (talk · contribs) 23:18, 24 October 2025 (UTC) [reply ]
    Undisclosed paid editing is a new and unfamiliar topic. Accordingly, is there a page that informs readers about this ban? I am very curious about what detection methods could be employed to identify chatbot content or if this is even possible? The correct identification of this type of content may or may not be elusive. Do any detection methods exist? 2603:3014:C06:3C00:C978:1FAE:A45C:D956 (talk) 06:45, 27 October 2025 (UTC) [reply ]
    WP:AITELLS –Novem Linguae (talk) 10:33, 27 October 2025 (UTC) [reply ]
    To note: this is my current position based on the signal-to-noise ratio of existing AI models. If, in 2 or 5 years, new models were capable of generating Wikipedia articles of acceptable quality with no significant issues, I would be open to revising the guideline to something like "Large language models should not be used to generate new Wikipedia articles without human review." Chaotic Enby (talk · contribs) 23:28, 24 October 2025 (UTC) [reply ]
    I agree. One hopes that, in the magical future, there will be a point at which an AI tool is capable of writing a decent Wikipedia article. That's not true now, so I support a simple "No, thank you" guideline. If/when that changes, I'd be open to revisiting it. WhatamIdoing (talk) 00:58, 25 October 2025 (UTC) [reply ]
    I concur with this idea. Cdr. Erwin Smith (talk) 10:51, 31 October 2025 (UTC) [reply ]
  • Yes, why not. I like this unique approach of building a policy/guideline page from the ground up and expanding it further based on talk page discussions. The time and effort that will be used to workshop this RfC are better spent discussing potential additions to the page instead once it becomes a guideline. Some1 (talk) 23:27, 24 October 2025 (UTC) [reply ]
  • Yes in principle. I suggest changing the page title to Wikipedia:Creating articles with large language models. My main worry is that if this WP:PROPOSAL succeeds, this appropriately narrow page may both grow significantly over time and also get worse with every expansion. "Don't do the whole article in LLM" is good. "Here are 67 different details that might help you spot a violation" is not good. WhatamIdoing (talk) 00:03, 25 October 2025 (UTC) [reply ]
    I agree with this. There is also Wikipedia:Large language models, which may later become the main policy. Bogazicili (talk) 12:27, 27 October 2025 (UTC) [reply ]
  • Yes – This is a good starting point, and I think simple is better per Ms WhatamIdoing above. This page documents what has already become established practice across the encyclopaedia. We can consider making more complex regulations later. The important thing to do now is to give a clear policy basis for AI-related enforcement and clean-up actions across the Wikipedia. This proposal will do that, and for that reason, I support it. Yours, &c. RGloucester 00:18, 25 October 2025 (UTC) [reply ]
  • Yes as an obvious principle. I do not think we need this to be perfect before adopting it, but agree with others that it should be any mainspace content, not just article creation. Especially if we consider it's probably easier to nip relevant content concerns in the bud at AfC, harder when it's half a paragraph here and there at an already-live article. I assume further discussion will proceed on possible refinements. Kingsif (talk) 00:26, 25 October 2025 (UTC) [reply ]
  • Yes. And please consider changing "should not" to "may not", although I am !voting yes either way. –Novem Linguae (talk) 01:26, 25 October 2025 (UTC) [reply ]
  • Procedural close for failure to follow WP:RFCBEFORE, particularly where several editors who are interested in writing such a policy have asked for the opportunity to workshop this proposal and where the community has been trying to come to consensus on these issues for years. We shouldn't let the first person to get past the gate on an RfC monopolize the proposal. Additionally, per my comments below, this proposal is sufficiently ambiguous such that it would potentially ban productive uses of LLMs. voorts (talk/contributions) 01:32, 25 October 2025 (UTC) [reply ]
    As of my comment here, there are 8 supporters, 1 opposer, and 3 people saying it should be workshopped.
    What exactly does "workshopping" mean in practical terms for a guideline such as this one? Cremastra (talk · contribs) 01:42, 25 October 2025 (UTC) [reply ]
    It could mean discussing:
    1. Different wording.
    2. Changing the proposal.
    3. Adding additional proposals to put to the community alongside this one.
    4. Letting relevant WikiProjects weigh in.
    voorts (talk/contributions) 01:57, 25 October 2025 (UTC) [reply ]
    @Voorts, I also objected to the wording for that reason and it has now been changed. I don't think it's so far into the RFC that this is problematic, given that responses so far heavily lean towards supports (that is, people were already willing to support something wider / more ambiguous, so one presumes they would also support the less ambiguous wording). -- asilvering (talk) 02:21, 25 October 2025 (UTC) [reply ]
    Voorts, I think your comments here are unbecoming of an administrator. WP:RFCBEFORE does not specify any requirement to workshop proposals, and in any case, Wikipedia is not a bureaucracy. That there are 'several editors who are interested in writing such a policy' does not negate the existence of this proposal, which was made in good faith and has already drawn some support. Those editors can make their own proposals whenever they see fit; in the meantime, trying to gatekeep who can or cannot make a policy/guideline proposal is inappropriate. Yours, &c. RGloucester 03:56, 25 October 2025 (UTC) [reply ]
    I couldn't disagree with you more strongly. I agree with voorts - WP:RFCBEFORE is clearly and simply defined - is my perspective unbecoming of an editor? NicheSports (talk) 03:59, 25 October 2025 (UTC) [reply ]
    I'm not trying to gatekeep anyone and I agree that anyone can make any proposal when they like; RFCBEFORE says that they need to discuss it, though, and I don't think it's bureaucratic to insist that a proposal be discussed before it be proposed to the community via RfC. I presume you think that my comment about monopolizing process is the one that's unbecoming. That comment was not meant to question Cremastra's good faith. It was merely descriptive of what occurs when someone doesn't follow RFCBEFORE. voorts (talk/contributions) 04:04, 25 October 2025 (UTC) [reply ]
    Administrators can enact sanctions, blocks, &c. against editors; your comments could have a chilling effect, discouraging editors who may not be part of the WikiProject Policies and Guidelines clique from making proposals, which is inappropriate. Instead of rowing over process, consider voicing opposition to the proposal on its merits. Yours, &c. RGloucester 04:09, 25 October 2025 (UTC) [reply ]
    I am participating as an editor in this RfC, not as an administrator. As an editor (and as an administrator), I am allowed to have opinions, including opinions on how we build consensus. You have pointed to nothing in any of my statements in this RfC that would suggest that I would ever enact sanctions, blocks, &c. against editors with whom I am involved in order to punish them. And, in particular, I have not and would never suggest that Cremastra, who I respect, should be blocked or sanctioned. I also never suggested that the RFCBEFORE had to go through WikiProject Policies and Guidelines, which is completely unrelated to the topic of our LLM PAGs. You seem to be misconstruing my mention of it below. If you look at Cremastra's edit summary in the diff I linked to, they implied that I don't like short PAGs; I pointed out WP:PROJPOL, the whole point of which is to shorten PAGs. WP:PROJPOL is also not a clique. There are currently several open talk page discussions that you or anyone else can participate in. voorts (talk/contributions) 06:12, 25 October 2025 (UTC) [reply ]
    Given that several WP:PROJPOL members have commented in favour of this proposal, it can't be very cliquey. I also genuinely thought I was already a member, but apparently I'm not. Joining now. Cremastra (talk · contribs) 16:04, 26 October 2025 (UTC) [reply ]
  • I don't think this is necessary. The problem with AI-generated content is that it often violates our existing content policies. The problem is not that it is AI generated. ChatGPT can be used to generate perfectly adequate articles from scratch, and has been. Telling people they shouldn't do this with all the weight of a guideline but zero reasoning or nuance won't solve any problems. Also, "...may be used as useful tools, but those uses do not include..." is grammatically suspect. If you want to be concise, just say "LLMs can be useful." Toadspike [Talk] 01:56, 25 October 2025 (UTC) [reply ]
    @Toadspike, I strongly disagree that it would not solve any problems. I have personally dealt with many editors through AFC and unblocks who have read WP:LLM and understand it to mean that AI use is not banned. A clear guideline that says "don't create articles from scratch with AI" will head off at least some of those people. -- asilvering (talk) 02:19, 25 October 2025 (UTC) [reply ]
    Will it really? I seriously doubt the distinction between an essay and a guideline matters. Maybe I'm jaded from patrolling G15s, but most people pasting in LLM drafts don't bother to communicate or read anything we tell them. (Even people who don't use LLMs do not often click on links we send them.) If an editor's LLM content has serious issues, I am already able to leave them a warning of "Stop using LLMs to write for you or you may be blocked". I have never seen someone turn around and go "well ackshually, there's no guideline against it". Maybe I'd be convinced if you could provide counterexamples. Toadspike [Talk] 08:14, 25 October 2025 (UTC) [reply ]
    I've had another look at the proposed guideline and the wording has been improved slightly, but it still doesn't make sense. It says "[LLMs] are sometimes useful tools, but those uses do not include...". Which uses? It doesn't name any uses. It's also fairly easy to disprove that the uses of LLMs don't include generating Wikipedia articles, which they can do poorly with minimal prompting effort and well with some effort.
    I suggest rewording to "[LLMs] can be useful tools, but using them to generate Wikipedia articles can introduce a variety of errors." Or similar. @Cremastra, thoughts? Toadspike [Talk] 08:22, 25 October 2025 (UTC) [reply ]
  • Yes. We don't do editors any favours by pretending that the community is more open to AI-generated articles than it is. Consensus for a clear, simple, "please don't" is welcome. -- asilvering (talk) 02:23, 25 October 2025 (UTC) [reply ]
  • Yes I boldly changed "should not" to "may not" in the guideline and will now support. I would like to see this extended (per Kowal2701 and Chaotic Enby) to also include generation of content in existing articles, but I will support without that clause added. NicheSports (talk) 02:32, 25 October 2025 (UTC) [reply ]
  • Yes as a start. I think a lot more could be (and as the discussion below has noted, perhaps should've been) said, but I do agree with this for a start. Perryprog (talk) 02:35, 25 October 2025 (UTC) [reply ]
  • Yes as a very weak start. Use of fabricated sources should be sufficient grounds for an indef on the first offense. Carrite (talk) 02:39, 25 October 2025 (UTC) [reply ]
  • Yes. I have seen editors use AI carefully and constructively, and I would oppose a guideline that discouraged that use; I have seen many more editors use it incautiously and disruptively, but still in good faith. I don't know the perfect way to distinguish between these, but "don't generate new articles from scratch" seems like a good start. jlwoodwa (talk) 02:52, 25 October 2025 (UTC) [reply ]
  • Oppose as currently not well enough constructed, somewhat seconding the vote by @Asilvering but I would go further. To expand on this we certainly have to have some policy statements, but this is too vague. Some key points that IMO should be included:
    1. I have seen the "classic" page full of hallucinations, these we need to say are against policy.
    2. I have seen editors using ChatGPT to respond on talk pages and/or at AfD. This should also be against policy.
    3. I have seen LLM used to construct and then the page edited and purged of issues. This should be acceptable, and stated as such.
    4. I have seen knee-jerk reactions "Delete as AI" at AfD. This should be against policy. Yes, I really mean this and think it has to be stated.
  • Lastly we must recognise that fairly soon some of the hallucination issues are going to be reduced or solved. We need a policy that does not become nonsense when this occurs. Ldm1954 (talk) 03:50, 25 October 2025 (UTC) [reply ]
    Lastly we must recognise that fairly soon some of the hallucination issues are going to be reduced or solved. No. Hallucinations are endemic to our current models and the rate of AI development has plateaued. Cremastra (talk · contribs) 14:05, 25 October 2025 (UTC) [reply ]
    1. That's literally what this proposal is proposing.
    2. Already in policy.
    3. This rarely, if ever, happens. A lot of the AI usage I have seen is for quick and easy creation of promotion.
    4. Already in policy somewhat as the G15 criterion. Like any good policy, it has proper limits and fully objective criteria.
    Also, hallucinations are an inherent feature of LLMs and not a problem that can be resolved. The only way to solve hallucinations is to completely ditch the LLM architecture for a better one. SuperPianoMan9167 (talk) 19:44, 25 October 2025 (UTC) [reply ]
  • Yes oh dear sweet jesus yes ship it. Thank you for finally belling this cat. I don't care whether it's perfect or whether it goes far enough -- we had 3 years to perfect something, we wasted them, and so now are 3 years late to doing literally anything at all. Gnomingstuff (talk) 04:43, 25 October 2025 (UTC) [reply ]
  • Yes. However, it is a bit vague, and is missing an RFCBEFORE process. JuxtaposedJacob (talk) | :) | he/him | 05:06, 25 October 2025 (UTC) [reply ]
  • Yes, per asilvering and to a lesser extent jlwoodwa. ♠PMC(talk) 05:19, 25 October 2025 (UTC) [reply ]
  • Oppose per Ldm1954. AI use is nowhere close to being as black and white as this page makes out. I know nuance is difficult on the internet, but that's no excuse, neither is a lot of people having an irrational fear that AI will kill Wikipedia. The problem is not AI, the problem is unreviewed AI Thryduulf (talk) 06:14, 25 October 2025 (UTC) [reply ]
    The problem is more that "AI" and "LLM" are not the same thing. LLMs are fundamentally flawed in ways that make their outputs unacceptable for Wikipedia. There can certainly be a proposal in the future to address better AI that isn't just larger large language models. SuperPianoMan9167 (talk) 19:36, 25 October 2025 (UTC) [reply ]
    Are there any forms of AI, besides LLMs, which are anywhere near being able to generate Wikipedia articles from scratch? jlwoodwa (talk) 19:38, 25 October 2025 (UTC) [reply ]
    No, not at present. SuperPianoMan9167 (talk) 19:44, 25 October 2025 (UTC) [reply ]
    No. However, AI may include digital writing tools, such as machine translation (e.g., Google Translate or DeepL), digital writing assistants (e.g., Grammarly), and automated paraphrasing tools (e.g., Grammarly or Quillbot). I would not argue that editors shouldn't be able to use some of these tools as long as they ensure the output is accurate. I would argue (as many have) that restricting use of certain tools is more likely to harm already marginalized groups, including those whose native language isn't English. Significa liberdade (she/her) (talk) 01:20, 7 November 2025 (UTC) [reply ]
    In the context of my comment, "AI" should be read to include "LLM". Reviewed LLM-output is exactly as acceptable as human output. It can be stellar, it can be trash, it can be anywhere in between, it needs to be treated for what it is, not what created it. Thryduulf (talk) 22:31, 25 October 2025 (UTC) [reply ]
    But why would we want it in the first place? It's inherently trash. Cremastra (talk · contribs) 02:08, 28 October 2025 (UTC) [reply ]
    Unreviewed LLM-output frequently (but not always) has errors. LLM-output that has been reviewed and, if necessary, had errors corrected is no more or less likely to be trash than human output. If you wish to crusade against the use of AI on moral grounds then there are plenty of places on the internet where you can do that, Wikipedia is not one of them. Thryduulf (talk) 02:13, 28 October 2025 (UTC) [reply ]
    What can be done is oppose AI usage for content creation from scratch (and maybe full content creation eventually) on the grounds that it causes all sorts of issues even if there are people reviewing the content being generated. GothicGolem29 (talk) 02:32, 28 October 2025 (UTC) [reply ]
    Why not? We are a large encyclopedia, a repository of knowledge and writing. LLMs distort knowledge and mock writing. Saying we can't take a stance on them is like insisting a hospital has no place promoting vaccines. Cremastra (talk · contribs) 02:37, 28 October 2025 (UTC) [reply ]
    Once again will you please stop with the false equivalences. If you are just going to ignore everything anybody who disagrees with you says and just repeat the same factually incorrect hyperbolic moral judgments then it's a waste of time communicating with you. Thryduulf (talk) 02:43, 28 October 2025 (UTC) [reply ]
  • Yes, simple and sensible. If Artificial Intelligences come up soon, they will agree that potentially hallucinating large language models are harmful. —Kusma (talk) 06:43, 25 October 2025 (UTC) [reply ]
  • Yes to the principle, whether it is to have this as a separate guideline or fold it in somewhere else. I have wasted several hours on dealing with AI junk in these last few weeks - it is a massive drain on volunteer time to clean up and respond to. We need to draw a clear line for everyone's benefit, and "generating entire articles" is a clear line that encompasses many of the most problematic uses, much like WP:AITALK does. I'm glad to see a simple straightforward statement proposed; it's urgently needed. It won't stop people doing it, but it will make it clearer what we do and we don't want them to do. Andrew Gray (talk) 09:05, 25 October 2025 (UTC) [reply ]
  • Yes, and I hope this is another step towards a unified guideline banning LLM use for pure generation of content, per Kowal2701 . The dust has settled, we know what these technologies can be expected to achieve or not, and it is thus time for clarity in order to save everyone's time. Choucas0 🐦‍⬛💬📋 13:34, 25 October 2025 (UTC) [reply ]
  • Yes we've workshopped until the cows came home, snoozed the night away, went out into the fields again, chewed a fat lot of cud, and blah blah blah. Let's bartender it instead. ~~ AirshipJungleman29 (talk) 14:09, 25 October 2025 (UTC) [reply ]
  • Yes. Articles that are LLM-generated from scratch are already subject to draftification and, in certain obvious cases, to speedy deletion. A scan of the recent archives of the incidents noticeboard shows that editors who repeatedly create LLM-generated articles can already expect to be indefinitely blocked. As policies and guidelines are created to reflect existing practice, we need this guideline to properly communicate to all editors that LLM-generated articles are unacceptable. — Newslinger talk 15:10, 25 October 2025 (UTC) [reply ]
  • Yes: it's a good start. We can elaborate on it later, but the first basic rule is clearly set out here and gives editors a Guideline to cite when rejecting a draft or proposing deletion at AfD. Pam D 15:17, 25 October 2025 (UTC) [reply ]
  • Yes as a start. Although I have multiple objections: The wording is not great. Keeping it short is important, but as a guideline, the ambiguity created by how short this is could easily make it misusable at an AfD to delete articles purely because they are AI-generated, even if they have no problems. Also, this proposed guideline should cover edits that are used to "revise" articles and not write them from scratch. Although the second point is less important because this is only a start (hence why I voted yes despite my objections). --not-cheesewhisk3rs ≽^•⩊•^≼ ∫ (pester) 15:29, 25 October 2025 (UTC) [reply ]
  • (struck: Yes, as a start. My thoughts mirror the ones of Not-cheesewhisk3rs just above me. -- Sohom (talk) 15:40, 25 October 2025 (UTC)) There is waaay too much confusion about the wording and interpretation for me to support at this time. This RfC has my moral support, but if we cannot find a stable interpretation of the wording, then we are fucked in terms of enforcing this policy. I do not feel confident lending my support to something so nebulous (are we banning any use of LLMs to generate parts of an article from scratch as part of the creation process? are we only banning using LLMs to completely generate articles without human supervision? or are we banning everything up to and including folks reviewing and heavily editing LLM output before it is published as an article?). I'm sorry, but I'm not comfortable rubber-stamping and writing a blank cheque here, especially when peeps can misconstrue what I am supporting. -- Sohom (talk) 00:17, 4 November 2025 (UTC) [reply ]
  • Yes with the caveat that it should be dated "2025." Right now, LLMs have very large practical and ethical problems that make them unsuitable for a project like Wikipedia, but part of that is that they are very new technology and society hasn't caught up yet. Marking the guideline with the year would be a suitable reminder that we mean 2025 LLMs. For example, Wikipedias in other languages have been flooded with incompetent LLM-generated articles by users who did not check and take responsibility for every fact. If people learn to knock it off and stop doing that, then we might consider LLMs worth it, even if their other problems remain. Darkfrog24 (talk) 16:13, 25 October 2025 (UTC) [reply ]
    LLMs are not new technology, and society has certainly caught up with them. ChatGPT came out November 2022. The paper that introduced the LLM architecture, Attention Is All You Need, was published in 2017.
    As long as the architecture stays the same, 2035 LLMs will have the same problems as 2025 LLMs. Because of this, including the year is unnecessary disambiguation. SuperPianoMan9167 (talk) 19:33, 25 October 2025 (UTC) [reply ]
    Cell phones became common more than three years before my state made laws forbidding phone calls while driving. Movie theaters didn't always have those big "No talking or texting" warnings before the previews. It was common for people to shout into their phones in no-I-can't-just-leave public places, like the bus or the theater or waiting in line at the post office. (Remember that line in Serenity where Shepherd Book refers to "people who talk at the theater"? That was a reference to people talking on cell phones.) That's what I mean by catching up. Maybe schools will stop assigning out-of-school papers in favor of pen-to-paper in-class essay writing with a proctor. Maybe the culture will just 100% get behind the idea that ChatGPT robs students of cognitive development. But we're not there yet. The people voting oppose are right that this issue is complicated. Darkfrog24 (talk) 19:06, 26 October 2025 (UTC) [reply ]
    I found that article I was thinking of: "How AI and Wikipedia Sent Vulnerable Languages into a Doom Spiral" in MIT Technology Review. Darkfrog24 (talk) 21:08, 26 October 2025 (UTC) [reply ]
  • Yes, though LLMs shouldn't be used for writing whole sections or significantly expanding existing pages either, not only making new articles from scratch, but I agree with PamD on the basic usefulness. Reywas92 Talk 17:13, 25 October 2025 (UTC) [reply ]
  • Yes, per Thryduulf, "The problem is not AI, the problem is unreviewed AI". Esculenta (talk) 18:19, 25 October 2025 (UTC) [reply ]
    you meant "no" I think? NicheSports (talk) 18:33, 25 October 2025 (UTC) [reply ]
    No, I meant "yes". Esculenta (talk) 18:37, 25 October 2025 (UTC) [reply ]
    Fair enough. Thryduulf voted oppose so was just looking out! NicheSports (talk) 18:50, 25 October 2025 (UTC) [reply ]
  • Yes: This is a reasonable starting place to build a more comprehensive policy to combat the persistent problems caused by LLM use. fifteen thousand two hundred twenty four (talk) 18:46, 25 October 2025 (UTC) [reply ]
  • Not yet I like the idea, but cannot support as written. Workshop it first (maybe at WP:AIC or WP:VPI). Currently there is a lot of ambiguity, and I am not a fan of the title, which seems like it would make this guideline a ripe candidate for policy misuse. Perfection is not required, but in this case we should try to have the guideline more hashed out before implementing it. md m.b la 19:27, 25 October 2025 (UTC) [reply ]
  • Yes. Editors keep being CBANned at ANI for AI usage to the point where there is already a de facto community-imposed prohibition on LLM-generated content creation, especially if it is high-volume. The template {{uw-ai4}} says editors will be blocked for disruptive LLM usage, but we don't actually have something to point to other than "it's disruptive". This proposal will solve at least some of the confusion and make clear to newcomers what the community already accepts: raw LLM output is unacceptable for Wikipedia. As for those asking for a workshop, I would like to point out that RFCBEFORE says "Try discussing the matter with any other parties on the related talk page. If you can reach a consensus or have your questions answered through discussion, then there is no need to start an RfC." It does not say that there must be a workshop discussion before any RfC starts. The problem with this case specifically is that almost three years of discussion have failed to explicitly answer the question of "What is Wikipedia's AI policy?" This proposal is a step in the right direction. And in any case, if RFCBEFORE hinders finding a resolution to this persistent problem, we should just ignore it. SuperPianoMan9167 (talk) 19:29, 25 October 2025 (UTC) [reply ]
  • Yes, with the caveat that it would just be the starting point for further refinement. I have some issues with the wording here ("from scratch" is strange wording and I would prefer to remove those words; what does it mean for an article to not be created from scratch?), but something along these lines is needed and what's written here is better than what we have now, ie. nothing, so we should start with this and refine and expand from there. I can understand the concerns about the lack of RFCBEFORE, terseness, etc. but we spent months trying to refine a perfect policy only for it to get bogged down in the details and rejected; starting with a basic statement of intent means that at least we'll have something. It also puts pressure on people to compromise and come up with improvements, and ensures that even if that goes nowhere we'll at least have this bare minimum. The generation of articles is a serious threat to Wikipedia, since automation can churn out sludge faster than we can realistically review it; and the technology is clearly not at a point where using it to generate Wikipedia articles from scratch is productive. --Aquillion (talk) 21:58, 25 October 2025 (UTC) [reply ]
    I agree that "from scratch" should be removed (or a better alternative to replace it). --not-cheesewhisk3rs ≽^•⩊•^≼ ∫ (pester) 17:29, 26 October 2025 (UTC) [reply ]
  • Yes. Like all guidelines, it will be open to modification through consensus-building. And so what that it's short? Not every Wikipedia guideline has to look like a Russian novel banged a tax code. Stepwise Continuous Dysfunction (talk) 22:04, 25 October 2025 (UTC) [reply ]
    may regret saying this but: the people who directed years-long rage campaigns at a few people creating database stubs with a handful of factual errors have been reeeeeeeeeaaaaaaaaaaallllllllll quiet about the proliferation of AI articles, despite them containing errors almost all the time Gnomingstuff (talk) 05:09, 26 October 2025 (UTC) [reply ]
  • Yes. The brevity may have the unintended consequence of giving slightly more leverage for slop-slinging in existing articles ("the policy just says you can't create an article with LLMs, not that you can't edit an article with LLMs"), but that can be fixed later with additional text. Einsof (talk) 22:19, 25 October 2025 (UTC) [reply ]
  • Yes As the nomination recounts, we (admittedly not me) tried building consensus to promote a comprehensive version of WP:LLM. Facing intractable disputes, we now turn to a proposal that no one seems to dispute the principle of with WP:KISS in mind. How will we handle the cases not governed by this succinct maxim? By pointing users to the more comprehensive WP:LLM essay representing weaker consensus and relevant P&G sections. The revival of WP:PROJPOL is great, but I cannot agree that this RfC is an unacceptable model for consensus building. ViridianPenguin🐧 (💬) 23:08, 25 October 2025 (UTC) [reply ]
  • Yes to start, with support for more additions over time. For the people who are !voting oppose here, I implore you to look at WP:ANI and notice how nearly half the threads are about unchecked use of AI. We desperately need an AI policy to keep situations like these from stealing volunteer time. Children Will Listen (🐄 talk, 🫘 contribs) 23:39, 25 October 2025 (UTC) [reply ]
  • Yes (with room for it to be expanded/adjusted in the future) During the recent new page patrol backlog drive, it was really apparent this is becoming an issue. It would be useful to have a guideline over an essay when sending something to drafts. Sariel Xilo (talk) 00:19, 26 October 2025 (UTC) [reply ]
  • Oppose. As Toadspike said, the issue with (what we can identify as) AI generated articles isn't that they're AI generated, it's that they're bad. I am concerned with a trend I sometimes see in discussions where a debate becomes not "did this user do something wrong and what is the wrong thing they did" but "did this user use LLMs". Yes, LLMs are very very often used in ways that allow for rapid introduction of bad or outright false content. We can already handle this: a hoaxer should be blocked as a hoaxer and whether or not they used an LLM is irrelevant to whether they should get the boot about it. Likewise with puffery, NPOV issues, source-text integrity, bot-like editing, etc etc etc. I would support strengthening the wording of our current essays/guidelines to indicate that using an LLM to create an article without knowing what you're doing is very stupid and will probably get both you and the article removed, if the issue we're trying to solve is people pointing at it and going "see!! it's fine!!". I'd also be sympathetic to an expanded G15-style guideline encouraging draftification for bad AI drafts, though I'm not sure how in practice one would implement this. But I don't think a blanket ban is the way to go. Rusalkii (talk) 00:24, 26 October 2025 (UTC) [reply ]
    I can think of any number of reasons why readers might receive AI-generated articles poorly, regardless of apparent quality:
    • because the environmental impact of generative AI (fossil fuel, water, toxins) is potentially greater than what might have otherwise been used to write the article without AI;
    • because it places Wikipedia downstream of a bunch of AI products, with murky implications for Wikipedia and potentially with perverse incentives for the AI products it is now enmeshed with; or
    • because of a general expectation that the information they read is prepared by another human (or to put it negatively, because of a conviction that making people read prose that wasn't prepared by a human is definitionally an antisocial act). Einsof (talk) 02:38, 26 October 2025 (UTC) [reply ]
    "I would support strengthening the wording of our current essays/guidelines to indicate that using an LLM to create an article without knowing what you're doing is very stupid and will probably get both you and the article removed"
    @Rusalkii: This is exactly what this proposal is proposing in the first place. SuperPianoMan9167 (talk) 17:33, 31 October 2025 (UTC) [reply ]
    No, this is proposing forbidding it? "Don't do this unless you know what you're doing and how to check that it's following policies, and be careful" is very different from "You should not do this". Rusalkii (talk) 18:16, 31 October 2025 (UTC) [reply ]
    Even if it says "You should not do this", if you really know what you're doing (the vast majority of editors using LLMs don't) and you rigorously scrutinize the output, ignore the rule and use LLMs to write a new article anyway. We don't need to directly build the exception into the rule. Exceptions should leave the rule intact. SuperPianoMan9167 (talk) 18:25, 31 October 2025 (UTC) [reply ]
    if you really know what you're doing [...] and you rigorously scrutinize the output, ignore the rule and use LLMs to write a new article anyway. multiple people in this discussion are explicitly arguing that this proposed guideline does and/or should explicitly ban all LLM use, even if the output is rigorously scrutinised. You cannot have it both ways - which is why the vagueness of this proposal is so problematic - different people are reading the same text and coming to completely opposite interpretations. Thryduulf (talk) 20:35, 31 October 2025 (UTC) [reply ]
    That is true. I see the points you're making about finding a suitable wording before starting an RfC. But what should we do instead? There is, at the very least, some kind of problem, as the number of LLM-related blocks is growing exponentially. The graph in that section was made by a sockpuppet, but that's not important; the content is what's important.
    Long-winded digression: Your description of the dispute in this comment is spot-on. Just by looking at the support and oppose !votes in this discussion, you can see the division:
    • The people in support generally belong to the first viewpoint, which is that all LLM-generated content is bad (this includes myself): Imo what we need is a guideline banning uncritically using LLMs to generate content from scratch, AI and LLMs have no place in Wikipedia articles full stop, (this person actually opposed for being too lenient) LLMs are an entirely destructive force with no positive role to play in the development of an encyclopedia, We have established that LLMs are generally unreliable, I think in the long term we need to forbid any LLM output from making its way onto Wikipedia, LLMs represent a long-term threat to the usefulness of this project in a way that trolls, vandals, or bigots would struggle to match, (me) LLMs are fundamentally flawed in ways that make their outputs unacceptable for Wikipedia, etc. etc.
    • The people in opposition generally belong to the second viewpoint, which is that LLM-generated content may or may not be bad, and the problem is more with the lack of proper review of LLM-generated content than anything else (you are one of the most prominent advocates of this viewpoint): the mere usage of AI is not a problem; mindlessly copy-pasting slop is, The problem with AI-generated content is that it often violates our existing content policies. The problem is not that it is AI generated, the issue with (what we can identify as) AI generated articles isn't that they're AI generated, it's that they're bad, I have no issues with LLMs being used by them as long as the output complies with our policies and guidelines and goes through human review, (you) The problem is not AI, the problem is unreviewed AI, etc. etc.
    The first group likely supports banning LLMs entirely. The second likely supports doing nothing and keeping the current policies and guidelines unchanged. If you were to ask me how to resolve this dispute, I'd be at a loss. Although, based on the creation of G15 in August, the numerical balance in this discussion, and the current practices for handling LLM issues at ANI, it seems that the first group is more likely to prevail in the long run.
    I'm thinking that this dispute would be a good topic for an essay.
    SuperPianoMan9167 (talk) 22:08, 31 October 2025 (UTC) [reply ]
    There is, at the very least, some kind of problem The very first thing you should have done is define what the problem actually is, specifically. The second is to define, specifically, why that specific thing is a problem. Only once you have those things nailed down can you even start to craft a workable policy or guideline or anything to tackle the problem. The G15 proposal succeeded because it was (at least relative to most other LLM-related proposals) specific and focused. What we have here is a good example of the politician's syllogism - something must be done, this is something, therefore we should do this. Unfortunately this vague something will not solve whatever it is that each of the supporters thinks is the problem (which is not the same as what other supporters think is the problem). If the proposal were harmless then it wouldn't be so frustrating to see so many otherwise intelligent Wikipedians fall into this trap, but unfortunately as others have pointed out more eloquently than I can, if enacted this will actually actively harm the project. Thryduulf (talk) 00:11, 1 November 2025 (UTC) [reply ]
    Okay.
    • The main problem is that people who don't know about the community's expectations for LLM use are repeatedly being blocked for LLM usage at ANI, and this becomes more and more frequent as time goes on, as evidenced by the number of LLM-related blocks doubling every 100 days.
    • This is a problem as it drives away newcomers who are trying to contribute in good faith using LLMs.
    • Implementing this proposal would hopefully solve at least some of the confusion on whether LLM use is allowed by increasing awareness of the expectations. The expectation, as currently practiced by the community, is that LLMs are not to be used to generate article content. If we indicate this in a guideline, this may reduce the underlying problem (low awareness of the community expectations for LLM content creation) because guidelines are more visible than essays, and a new guideline will likely receive press coverage with a headline like "Wikipedia bans AI-generated articles". Even if this is not entirely accurate, it will have the desired effect of increasing public awareness of the community's profound distaste for unreviewed LLM-generated content. If there's one thing that both sides can agree on, it's that most LLM-generated articles are bad. SuperPianoMan9167 (talk) 00:55, 1 November 2025 (UTC) [reply ]
    It's far too late for any of this now. This is where the workshopping should have started, long before it got anywhere near the RFC stage. Thryduulf (talk) 02:20, 1 November 2025 (UTC) [reply ]
  • Yes, sweet jesus yes already. What we have is not working, and it takes a whole lot longer to sort through the mess than to make it. We need something solid already, even if it does immediately need to be revised. I'd much rather our AI rules be a "living document" than a mere essay - it currently carries the same weight as whether there is or isn't a deadline. For this? There is. We've been nitpicking it for years, time's up for nuance. At this point I'd endorse a full LLM ban despite regularly letting it do my formatting, it's that pervasive a problem. ~ Argenti Aertheri (Chat?) 01:52, 26 October 2025 (UTC) [reply ]
  • Yes. AI has become a huge issue on Wikipedia (I see many threads at ANI on issues related to it, like hallucinations). This is a good starting point to tackle the problem, as it is clear that people cannot create articles from scratch using LLMs, and having that clear in the guidelines is vitally needed and long overdue. GothicGolem29 (talk) 04:06, 26 October 2025 (UTC) [reply ]
  • Oppose for being too lenient: I absolutely oppose what has been said above about expanding this proposed guideline, as it's already too long and already has two loopholes for LLM vandals. Only five words are necessary: Submitting LLM-generated content to Wikipedia, placed as a subcategory at WP:VANDTYPES. LLMs are an entirely destructive force with no positive role to play in the development of an encyclopedia (or anything else, really, but that's beyond our scope here). They are absolutely not sometimes useful tools[1] nor should the prohibition on them only extend to creating original Wikipedia articles.[2] This is no time for hairsplitting or nuancemongering, because LLMs have already wreaked havoc on Wikipedia in ways that will take editors years and many wasted man-hours to track down and fix. Evidently misinformative LLM slop can remain undetected for months on end, as anyone engaged in anti-vandalism can attest. This is why LLM vandalism is the worst and most serious type of vandalism, and should be treated with the harshest possible measures. Anyone who wants to open a "legitimate use-cases" loophole for LLM vandals should be specific down to the micron level on what exactly those alleged use-cases are, not just handwaving in their general direction. Festucalextalk 04:13, 26 October 2025 (UTC) [reply ]
    Wouldn't this be better than what we currently have though?
    And forming a consensus that They are absolutely not sometimes useful tools nor should the prohibition on them only extend to creating original Wikipedia articles is going to be impossible, given the surprising amount of opposition to even the most basic requirements for using AI (Wikipedia:Large language model policy). ARandomName123 (talk)Ping me! 06:43, 26 October 2025 (UTC) [reply ]
    Not with that attitude. We have enough critical mass (and plain evidence, editor experience, and common sense) to implement a sufficiently strict LLM policy. In any case, Wikipedia is WP:NOTDEMOCRACY and certainly WP:NOTANARCHY. LLMs being as catastrophically disruptive as they are, the interests and the fundamental mission of the encyclopedia come ahead of any other consideration or opposition. All I can say is that WP:CIR, and if someone can't edit without an LLM, they should find a different project to volunteer for. Festucalextalk 07:37, 26 October 2025 (UTC) [reply ]
    Then please propose a sufficiently strict LLM policy, because all the critical mass, plain evidence, editor experience, and common sense hasn’t managed to do that in the past three years. Maybe it’s changed? ~~ AirshipJungleman29 (talk) 10:03, 26 October 2025 (UTC) [reply ]
    There has been much hesitation and delay and gnashing of teeth precisely because we've been trying to come up with a whole new framework for LLM usage while the vandals run ahead. As I pointed out above, we can simply integrate LLM usage into the tried-and-tested vandalism framework, and there we can deal with it as we do with all other vandalism: revert, warn, ban. It literally takes only five words (again, Submitting LLM-generated content to Wikipedia) in WP:VANDTYPES, no need for whole new pages of guidelines. The for-some-reason-missing template:uw-ai4im can be created to accompany template:uw-ai1 through template:uw-ai4 and we can all get on with our business. (P.S. the reason I opposed here is that if we institute incomplete, loophole-ridden guidelines at this stage, it will require herculean labors to close the loopholes later on. On the other hand, if we keep it strict and minimal now (not much attack surface for loophole seekers) we can reconsider later and add exceptions if Sam Altman somehow develops something with irrefutably legitimate uses.) Festucalextalk 10:28, 26 October 2025 (UTC) [reply ]
    Um, have you read Wikipedia:Vandalism recently? The opening sentence begins: "On Wikipedia, vandalism has a very specific meaning: editing (or other behavior) deliberately intended to obstruct or defeat the project's purpose, which is to create a free encyclopedia".
    If someone finds an article tagged as needing copyediting, and they choose a chatbot for copyediting, do you really think they are deliberately intending to obstruct the creation of a free encyclopedia?
    Or do you think they're just well-intentioned people who are trying to help but whose help is the opposite of helpful? WhatamIdoing (talk) 01:34, 27 October 2025 (UTC) [reply ]
    The LLMs themselves warn their users that their output is unreliable and may contain hallucinations, it's not exactly a secret. If they go ahead and use it for editing Wikipedia, I cannot help but see that as deliberate, specifically towards the encyclopedia part of which is to create a free encyclopedia. In any case, we don't pillory people, we always WP:AGF where applicable and issue a level 1 user warning template just in case they missed the hallucination warning, as I'm sure you know. The point I'm making is twofold: 1) the rule should be as concise and restrictive as possible which will make it both airtight and hypothetically relaxable in the future, and 2) it should be attached to an existing policy, whether it's WP:VANDALISM or WP:DISRUPTIVE or something else if you think I'm being too uncharitable. Festucalextalk 02:15, 27 October 2025 (UTC) [reply ]
    Really? Something may – or may not! – contain problems, so if people use it, then you think they're deliberately trying to hurt Wikipedia?
    Ordinary writing may – or may not! – contain errors. Web browsers warn their users about spelling errors all the time, and most people have had at least 10 years of school teachers telling them about grammar, so it's not exactly a secret. The first sentence in your comment here contains a comma splice. Were you deliberately trying to hurt Wikipedia by posting that grammar error?
    I wonder if some experienced editors should get together to do an AI-focused edition of WP:NEWT. It might teach us just how bad (some) editors are at detecting AI-generated content. WhatamIdoing (talk) 02:22, 30 October 2025 (UTC) [reply ]
    Comma splices do not subvert Wikipedia's purpose, especially in talkspace. Hallucinations in mainspace do. jlwoodwa (talk) 04:01, 30 October 2025 (UTC) [reply ]
    I agree. Misinformation in the mainspace hurts Wikipedia. Misinformation hurts Wikipedia if it's written by hand, and misinformation hurts Wikipedia if it's generated by AI.
    But:
    • An AI-generated piece that contains no hallucinations – which is not exactly unusual – doesn't hurt Wikipedia, and is therefore not Wikipedia:Vandalism.
    • An AI-generated piece that an editor believes to be correct is not a sign of someone trying to hurt Wikipedia, and is therefore not Wikipedia:Vandalism.
    WhatamIdoing (talk) 04:12, 3 November 2025 (UTC) [reply ]
    jlwoodwa put it better than I could, so I'll just add that I follow GMEU, which allows comma splices. Quote: That is, most usage authorities accept comma splices when (1) the clauses are short and closely related, (2) there is no danger of a miscue, and (3) the context is informal. I doubt any usage authority will allow LLM-generated hallucinations in any encyclopedic context. Festucalextalk 02:15, 31 October 2025 (UTC) [reply ]
    What about LLM-generated non-hallucinations? LLMs don't hallucinate 100% of the content. WhatamIdoing (talk) 04:12, 3 November 2025 (UTC) [reply ]
    No, but by definition GPT content is a result of WP:SYNTH. There's simply no way for them not to synthesize data as They are pre-trained on large datasets of unlabeled content, and able to generate novel content. ~ Argenti Aertheri (Chat?) 17:22, 3 November 2025 (UTC) [reply ]
  • Yes It is a big yes from me. Recently, I wrote an article about ML (with some references from AI), but some editors tried their best to delete my article even though I tried hard to rewrite a better version without any intervention from AI-generation tools. They simply suspected the article had AI content and rejected it without considering my efforts. Alphama (talk) 08:42, 26 October 2025 (UTC) [reply ]
    Did you mean no instead of yes? I think your comment supports the no position more than the yes position. –Novem Linguae (talk) 09:56, 26 October 2025 (UTC) [reply ]
    I added more context. Alphama (talk) 11:54, 26 October 2025 (UTC) [reply ]
  • Yes. Even if it's a stop gap to have in place before something more nuanced or refined can be developed, we need to have something in place we as a community can point to when dealing with LLM-generated content and say 'no, because'. - SchroCat (talk) 09:48, 26 October 2025 (UTC) [reply ]
  • Yes It makes sense for Wikipedia to continue to refine our approach. It also makes sense that external changes to AI functionality (both good and bad) would continue to be an input into those ongoing discussions here. But right here and now, looking at the narrow proposal, this seems like a positive change. - RevelationDirect (talk) 10:10, 26 October 2025 (UTC) [reply ]
  • No, LLMs shouldn't be used in Wikipedia at all, not even after an article is created. Betseg (talk) 11:07, 26 October 2025 (UTC) [reply ]
    @Betseg: I 100% agree. But what you're saying is "no, don't build those house foundations, I want to build a mansion there!" The response is "um". Cremastra (talk · contribs) 13:17, 26 October 2025 (UTC) [reply ]
    If you build the foundations incorrectly, you'll spend a fortune to fix it underneath the finished structure. If we rush towards a half-baked guideline, it will be twice as hard to fix it than to do it right the first time. For example, if we allow loopholes for LLM slop and decide to close them later, there will be made an argument that we already have an LLM policy and that we can't prove that it isn't working. Bureaucratic inertia takes hold. Festucalextalk 03:35, 31 October 2025 (UTC) [reply ]
    To be fair, that's essentially already the case. WP:LLM already exists (albeit as essay not guideline) and provides a carveout that most read as "LLM use is allowed as long as you know what you're doing," which does very little to discourage LLM usage considering, well, everybody thinks they know what they're doing. I had one editor using AI to write an autobiography about themself (and to write their replies to the criticism) eventually say something to the effect of "I've integrated AI so much into my daily workflow I didn't even think twice about it." Athanelar (talk) 06:42, 31 October 2025 (UTC) [reply ]
    You're right; the comment you replied to almost had "LLM addicts will argue" instead of "there will be made an argument", but I changed it because I felt that my tone would come across as aggressive. It's true nonetheless. Many people who use LLMs end up functioning like addicts, they literally can't do work like a normal human anymore. That's why I never touch LLMs for any purpose whatsoever. Festucalextalk 11:56, 31 October 2025 (UTC) [reply ]
  • Oppose per Rusalkii. The number of clarifications needed on the talk page for what this allegedly simple guideline is actually meant to mean should speak for itself. That said, I might be inclined to support if something to the effect of "...without verification or human editing afterwards" were appended to the end. MarijnFlorence (talk) 13:53, 26 October 2025 (UTC) [reply ]
  • Yes, kind of. Yes to this, but it doesn't go far enough. AI and LLMs have no place in Wikipedia articles full stop, regardless of whether it's creation, expansion, etc. Wizardman 16:58, 26 October 2025 (UTC) [reply ]
  • Oppose – This issue is not as black-and-white as the proposed policy makes it seem. FaviFake (talk) 18:36, 26 October 2025 (UTC) [reply ]
  • No It's not clear what "from scratch" actually means, or how it could ever be determined conclusively that an article was created by LLM "from scratch". This will not solve our ANI woes as everyone accused will just swear up and down they didn't use AI (as they frequently already do) or say they only used it in the ways we ostensibly permit (Festucalex's loophole two looms large, for instance). Toadspike and Rusalkii have their eye on the ball: our existing content guidelines already prohibit careless AI use in multiple ways, although I don't oppose the idea of an overlapping guideline specifically proscribing it. That guideline should be explicit in what LLM use is unacceptable, as well as how such use can reasonably be identified. I can't support a new policy based on vague assurances that it will be fleshed out into something workable eventually. —Rutebega (talk) 22:30, 26 October 2025 (UTC) [reply ]
    A few editors have suggested expanding this to a restriction against generating any mainspace content. This would be sensible, actionable, and harder to circumvent, and I would support language to that effect. The only loophole is that if there are no problems with the addition, we won't catch it, which is really not a problem as Thryduulf and others point out. —Rutebega (talk) 22:51, 26 October 2025 (UTC) [reply ]
    Right. Something like "LLMs must not be used to generate article content. Under this restriction, LLMs must not be used to create new articles, expand or rewrite existing articles in part or in whole, or generate citations. Reviewing LLM-generated content does not make it acceptable; experience has shown that this review is rarely sufficient to remove violations of WP:V, WP:NPOV, WP:SYNTH, and other core content policies. The only exception is that LLMs may be used sparingly to suggest copy edits for small passages of existing, human-written prose (1-2 sentences), but this must not be used so liberally within a section as to approximate a rewrite of the section. LLMs may also be used to search for sources, similar to a search engine. However, it is unacceptable to add content to articles based on LLM summaries of sources, as this can introduce inaccuracies and verification failures into the article. Any sources found via LLM must be reviewed by the editor prior to their information being incorporated into the article." NicheSports (talk) 23:23, 26 October 2025 (UTC) [reply ]
    (struck) Also please note I would support the creation of an "llm-authorized" user right that would exempt certain highly experienced, vetted editors (such as Esculenta) from these restrictions. Ideally with the same experience requirements as Autopatrolled, although editors would have to apply for it separately. NicheSports (talk) 23:34, 26 October 2025 (UTC) (end struck) Reconsidering after being alerted to an unambiguous source-to-text integrity issue (that I was able to confirm) in one of these articles. NicheSports (talk) 02:40, 28 October 2025 (UTC) [reply ]
    I really like that idea, not too dissimilar from how people mass-creating articles have to get consensus to do it Kowal2701 (talk) 00:39, 27 October 2025 (UTC) [reply ]
    I would also support this (both for Esculenta and in general). Gnomingstuff (talk) 17:00, 27 October 2025 (UTC) [reply ]
    I'll dissect this for loopholes, if you don't mind. LLMs must not be used to generate article content. Under this restriction, LLMs must not be used to create new articles, expand or rewrite existing articles in part or in whole, or generate citations.[3] Reviewing[4] LLM-generated content does not make it acceptable; experience has shown that this review is rarely[5] sufficient to remove violations of WP:V, WP:NPOV, WP:SYNTH,[6] and other core content policies.[7] The only exception is that LLMs[8] may be used sparingly to suggest copy edits for small passages of existing, human-written prose (1-2 sentences), but this must not be used so liberally within a section as to approximate a rewrite of the section.[9] (struck) LLMs may also be used to search for sources, similar to a search engine. (end struck)[10] (struck) However, it is unacceptable to add content to articles based on LLM summaries of sources, as this can introduce inaccuracies and verification failures into the article. Any sources found via LLM must be reviewed by the editor prior to their information being incorporated into the article. (end struck)[11] My view is that Wikipedia's LLM policy should be extremely concise and restrictive, to save editor time if nothing else. Something like Do not use LLMs to edit Wikipedia for any purpose. Short, sweet, and hard to poke holes into. If some legit use shows up in the future, we can discuss exceptions to the rule. Festucalextalk 03:27, 27 October 2025 (UTC) [reply ]
  • Yes but for all text generation in the mainspace. Using LLMs to find sources is fine (using the technology like a search engine) but should be disclosed so that tagging works better and filters can be applied. Rolluik (talk) 22:34, 26 October 2025 (UTC) [reply ]
  • Yes. It is the responsibility of the user to be knowledgeable in the article's subject matter before writing it, and large language models often use unreliable sources when people enter prompts and the LLMs provide their output. We have established that LLMs are generally unreliable, and anyone who uses an LLM has a duty to fact-check and research what the LLM outputs. Z. Patterson (talk) 02:30, 27 October 2025 (UTC) [reply ]
  • Yes. While I think the wording should be more precise, at this point it would be better to have something specifically about this than nothing. I can see the wisdom of doing it like this with something simple, since previous attempts tried to be detailed and got bogged down. Having something like this allows it to be naturally discoverable for any good faith LLM-using editors who would be okay with following this guidance and maybe the LLMs they use if the prompt is something like "write a Wikipedia article on X that complies with all Wikipedia policies and guidelines". It also provides less leeway for problematic editors to argue that they can still use it and are competent enough to catch problems. Will this move the needle a lot? Maybe not, but it's better than having nothing. In any case, WP:IAR is still policy, and the editors who are truly competent enough to use LLMs and verify their sources can still do so even within a broader interpretation of the current wording. ---- Patar knight - chat /contributions 06:37, 27 October 2025 (UTC) [reply ]
  • Yes - This is direly needed because there is a problem with editors creating articles (and other content) with LLMs. The elements of the WP:RFCBEFORE process have already been attempted: asking for input or assistance at one or more relevant WikiProjects (see WP:AICLEAN), and many efforts at dispute resolution through WP:DRN have already been made (see this search for several attempts at dispute resolution in September and March 2025). - tucoxn \talk 07:56, 27 October 2025 (UTC) [reply ]
  • I supported in principle something that tells people in very explicit language not to do this, so I suppose this is very easy to support in full given it does exactly that and not much else. Alpha3031 (tc) 08:15, 27 October 2025 (UTC) [reply ]
  • Yes "Large language models must not be used to generate new Wikipedia articles from scratch" is a good start. There are so many problems with AI, such as Hallucination (artificial intelligence), which may include fake AI-generated sources. Obviously this policy will need expansion, and future RfCs may be needed, but we need to start somewhere. Bogazicili (talk) 12:20, 27 October 2025 (UTC) [reply ]
  • Yes; this is incomplete, it has weaknesses, but unfortunately some editors seem to need to be told that we are not a set of pre-calculated LLM queries. If you want what chatGPT thinks about photosynthesis, ask it directly (if Ldm is correct and LLMs are going to improve, the reader will get a better result by asking than by harvesting an out-of-date answer from WikipediaGPT). If you want what a bunch of humans have hammered out, look here on Wikipedia. There is no guarantee Wikipedia is, or will remain, better than LLMs, but it is different, and becomes pointless if it's merely a chatGPT mirror. Elemimele (talk) 13:48, 27 October 2025 (UTC) [reply ]
  • Yes, it's a start. Though brief, the current wording does reflect how editors are currently responding to articles generated entirely by LLMs. Rjjiii (talk) 14:17, 27 October 2025 (UTC) [reply ]
  • YES. This is long overdue — just look at my AfC log. Besides, this is already a de facto best practice, so it'll be good to finally have it in writing. It may be worth expanding or renaming the guideline in the future, but this is an excellent first step. pythoncoder (talk | contribs) 16:24, 27 October 2025 (UTC) [reply ]
    I would also support adding a sentence to WP:MEATBOT that clarifies that any LLM use on Wikipedia constitutes running an unauthorized bot, because that's exactly what an LLM is, even if the user is copy-pasting output as opposed to submitting the edits in an automated fashion. pythoncoder (talk | contribs) 16:28, 27 October 2025 (UTC) [reply ]
  • Yes, as a start. I'd like to eventually see something more thorough, but this is a valuable first step toward codifying what's already agreed-upon best practice. ModernDayTrilobite (talkcontribs) 16:45, 27 October 2025 (UTC) [reply ]
  • Yes Wikipedia needs to not fall into the "AI" craze. Too much slop out on the internet right now. IAm Chaos 17:00, 27 October 2025 (UTC) [reply ]
  • Yes. Write the article yourself. Find real, non-hallucinated references using the bevy of resources on and off the internet. Use the resources on Wikipedia to figure out how to structure a new article. Like Festucalex said above, something concise and restrictive is needed. jellyfish 17:21, 27 October 2025 (UTC) [reply ]
    If used in text/copy-editing, possibly adding on "attribution or at least saying you used some LLM is required". New and occasional editors should be made aware of a policy/guideline change like this in the various tutorials. jellyfish 17:30, 28 October 2025 (UTC) [reply ]
  • Yes: I have taken to very occasionally using AI as part of my process in finding sources, but even then it is only a small part of my research process that is less important than the Wikipedia Library search bar. We don't need LLMs writing articles that LLMs will then rely upon, especially as the biases within AI become more evident and more malevolent. ~ Pbritti (talk) 02:14, 28 October 2025 (UTC) [reply ]
  • Yes. I recently took part in two LLM-related AfDs[1] [2] which felt like a very egregious waste of several editors' time and the addition of incredibly poor content to the encyclopedia. I feel like a blanket ban approach will be the most effective and beneficial to the project. Orange sticker (talk) 11:31, 28 October 2025 (UTC) [reply ]
  • Yes. Long, long overdue. It's a start, and it will at least give AfC and NPP reviewers more leeway to reject LLM slop. More is needed, but for now this is a good stop-gap. Lovelyfurball (talk) 13:02, 28 October 2025 (UTC) [reply ]
  • Yes There have been many discussions already on how Wikipedia should react to AI. This proposal is narrow enough to be accurate in what it regulates and broad enough to have useful application. This is a great starting point for further and future guidelines on AI. Normally we should do WP:RFCBEFORE but many Wikipedia editors are already familiar with this issue. Also I appreciate the brevity and clarity of this proposal, and I do not see the point in workshopping it when I expect that many people understand and believe in the spirit of this guideline. Workshopping could wordsmith this, or it might add more text for more guidance, but with how focused this is, I think taking it to vote is a correct action. Bluerasberry (talk) 15:06, 28 October 2025 (UTC) [reply ]
  • Yes, since this would reflect the community's opinions on generative AI (it sucks). It's concise, simple, and works well. The loopholes mentioned don't convince me otherwise. Maybe a reference to LLM usage should be added to WP:EYNTK, but that's out of scope. mwwv converseedits 16:54, 28 October 2025 (UTC) [reply ]
  • Yes. It's comically short, but after reading the FAQ and witnessing the failure of more complex proposals, I am in favor of this as a start.--MattMauler (talk) 19:41, 28 October 2025 (UTC) [reply ]
  • Neutral Of course LLM output is beset with issues. But so is human output. And we have been doing our best to improve human output for 20 years. See, for example, Lānaʻi, where we said, until earlier today, "A volcanic collapse in Lānaʻi 100,000 years ago generated a megatsunami that inundated land to elevations higher than 300 metres (980 ft)." This is unclear so I checked the ref and it is also wrong. I replaced it with "A giant wave generated by a submarine landslide on a sea scarp south of Lānaʻi 100,000 years ago generated a megatsunami that inundated land to elevations higher than 300 metres (980 ft)." which is better, but only a little, even in terms of covering the paper cited. But the real point of the paper is that there is a gravel deposit (a named formation) on Lānaʻi in places many metres thick that was thrown up by the megatsunami. But subsequent scholarship suggests that this was caused by multiple events and may be glacial deposition. I actually think a decent LLM would do better than the humans in this case. And there are many such cases. I might be persuaded to support a temporary moratorium while we work out a more nuanced approach. All the best: Rich Farmbrough 23:17, 28 October 2025 (UTC).[reply ]
  • Yes: LLMs can't write well-sourced articles yet. Also, it's useful to have a guideline to point to, rather than an essay. ARandomName123 (talk)Ping me! 07:09, 29 October 2025 (UTC) [reply ]
  • Very weak support: I won't deny I've considered opposing making this into a guideline for lack of nuance as well as possible temporal shortsightedness (since sooner or later the problems discussed at WP:LLM with AI outputs will become uncommon enough to be considered negligible issues), but the need of the hour is a clear guideline disallowing AI use in article creation, which (as others have noted) is a sentiment that is hard to convey to newer users in the present circumstances. Java Hurricane 10:12, 29 October 2025 (UTC) [reply ]
    "since sooner or later the problems discussed at WP:LLM with AI outputs will become uncommon enough to be considered negligible issues" is incorrect. Hallucinations are mathematically inevitable due to the large language model architecture. Unless the industry ditches LLMs entirely this will not change. SuperPianoMan9167 (talk) 11:16, 29 October 2025 (UTC) [reply ]
    To which I must say that we humans have done rather well at creating a considerable mass of knowledge in spite of the human proclivity for hallucinations, both wilful and otherwise. Of course any AI architecture will not be wholly accurate without any scope for errors; but I see no reason why the rate of errors cannot be reduced in the future to a level comparable to (and maybe even lower than) that of humans. Java Hurricane 14:55, 29 October 2025 (UTC) [reply ]
  • Oppose for a couple of reasons. First, the wording of this proposal has changed from "should" to "may" to "must" through the course of this RfC, so a number of people have !voted for/against a policy that significantly changed before/after they saw it. (Because RfC proposals should not be changed after they begin, I've restored the "should" wording.) This is a great illustration of why WP:RFCBEFORE is important. Second, the proposed 33-word guideline doesn't have enough detail. If I used an LLM like Esculenta, i.e. taking a carefully tuned model and significantly editing/vetting its contents, doesn't that break the letter but not the spirit of the policy? If I used an LLM to expand a stub, why isn't that disallowed? Etc., etc. We can't treat guidelines as glibly as this proposal does. Ed [talk] [OMT] 02:19, 30 October 2025 (UTC) [reply ]
  • Support. Misuse of LLMs by would-be editors is now causing significant disruption; this should help mitigate the impact at AfC. As noted by the proposer, it is not intended to be a complete solution; following the closure of this RfC, I would like to see a separate workshop on how best to mitigate the impact on pre-existing articles. (While it's outside the scope of this discussion, I'd support a policy along the lines of: (1) edits containing LLM-generated text must be marked as LLM-assisted (WP:LLMDISCLOSE), (2) if such edits are reverted, they cannot be restored, in part or in full, without talk page consensus (akin to WP:CITEVAR), and (3) reversions of LLM-assisted edits lacking talk page consensus do not constitute edit-warring (WP:NOT3RR).) Preimage (talk) 03:02, 30 October 2025 (UTC) [reply ]
    Yes and we need to go farther. I don't think there's a single argument for inserting AI-generated content into Wikipedia which doesn't essentially boil down to 'I lack the competence to perform this task myself.'
Long rant about banning AI ahead; here be dragons
If you want to ask ChatGPT to teach you about a subject before you write about it, fine; but I think in the long term we need to forbid any LLM output from making its way onto Wikipedia, including LLM-retrieved sources. I don't care if you 'just use it to clean up your grammar' or 'to help with formatting' or whatever else. If you lack the competence to do those things yourself you shouldn't be doing them until you gain that competence. Lowering the skill floor and allowing people who can barely string a sentence together to contribute new material to Wikipedia can only ever be a bad thing, because making it possible to accomplish those kinds of tasks without having to think about it only encourages thoughtless contributions. For every 'good' use case someone can imagine there will be 10 other people using it to try to draft unnotable autobiographies about themselves and such-like. Will those things happen even without LLMs being permitted? Sure, but if we can speedy-delete any article which smells like LLM it vastly reduces the effort necessary to police that kind of content, and also raises the floor on the amount of effort someone would need to put in to try to get unnotable or unverifiable etc content included in Wikipedia. At the moment any random person can input "write a Wikipedia article about me/my favourite macguffin" into ChatGPT, 'review the output' (with zero understanding of our policies,) and submit it, and we all have to waste our time politely explaining to them how LLM output often contravenes Wikipedia's policies and guidelines and they should first carefully read those and then make sure they review the output so that it aligns with the policies and guidelines and so on and so on and so on...
Enough is quite enough.

Athanelar (talk) 07:40, 30 October 2025 (UTC) [reply ]

  • Yes in the strongest possible terms. Now, we can definitely go farther -- and I feel that we should -- but let's get the principle established first. Because of the profound damage they can do to articles, frequently without notice, LLMs represent a long-term threat to the usefulness of this project in a way that trolls, vandals, or bigots would struggle to match. This is triage. CoffeeCrumbs (talk) 09:04, 30 October 2025 (UTC) [reply ]
  • Yes to start with, and to be expanded significantly, I hope. My preference would be "must not" but I'll bat for any version at this point. --Elmidae (talk · contribs) 17:13, 30 October 2025 (UTC) [reply ]
  • Yes. Going further, I'd support complete prohibition on the use of LLMs for creating article content. Some editors will probably try to secretly use them anyway, but they would be forced to review the output so carefully that it would become indistinguishable from human writing. This seems like the ideal outcome. Apfelmaische (talk) 20:10, 30 October 2025 (UTC) [reply ]
  • No because it is not strong enough and will be cleverly misread by AI gadgeteers to mean that they can sometimes use LLMs. Robert McClenon (talk) 01:40, 31 October 2025 (UTC) [reply ]
  • No. This leaves too many unanswered questions and it is not clear that this addresses the root of our most pressing problems with LLMs. I note that this very brief guideline has undergone substantial revision since this RfC was posted. Which versions were the early !votes endorsing? Many other points of clarification have been raised with the answer that the FAQ addresses this or that interpretation is obvious. While I appreciate the goal of codifying something and not having another overwrought, caveat-filled guideline, this discussion reveals that this is not ripe for promotion. —Myceteae 🍄‍🟫 (talk) 02:29, 31 October 2025 (UTC) [reply ]
    Note: I see that the "original version" has been restored in the diff I posted. I was going to write this last night and had checked the revisions then, before it was restored, and honestly didn't notice when I clicked on the latest revision. The fact remains that people have quibbled about wording during the course of this RfC and have !voted 'yes' or 'no' on different versions throughout its course. Maybe some did as I did, read it, slept on it, and returned to !vote on an altered version. Who knows?! —Myceteae 🍄‍🟫 (talk) 02:38, 31 October 2025 (UTC) [reply ]
  • Question Would an article/guideline like this be deleted if this rule were implemented? Slyfamlystone (talk) 04:54, 31 October 2025 (UTC) [reply ]
    No, because it's a humorous essay, not a guideline. Cremastra (talk · contribs) 12:44, 31 October 2025 (UTC) [reply ]
  • Yes as I see no reason to deny the fact that LLMs (at least as of now) can't write large essays without bloat/walls of text. But a time will come (and it is coming very soon) when we will have to revisit this rule. Cdr. Erwin Smith (talk) 12:05, 31 October 2025 (UTC) [reply ]
  • Oppose It looks like it's going to pass, but it is unfortunate the editors who wrote this policy have no f'n clue how to write a policy. It's going to lead to extreme levels of abuse within AFC and NPP at every level because it doesn't define exactly what types of abuses we're already seeing that need to be excluded. There is an old quote in Roman/western law that I read years ago, I think it was in Latin (the reason I can't remember it), that basically says something like: if you don't define exactly what is allowed, then everything is allowed. That is exactly what this policy states. I am worried now that it will swamp WP in the same manner that UPE/Paid editing did in 2008-2012 before we got a handle on it, and will lead to a drastic reduction in the overall quality of Wikipedia at a fundamental level. You're placing unhindered trust in folk who will take it literally. They will read it, write a crap article and then use AI to clean it and expand it, and they will say "I wrote the article myself, the policy says that". It's essentially a field full of loopholes. It is simplistic and has no bearing on reality or experience. scope_creep Talk 12:55, 31 October 2025 (UTC) [reply ]
    And what stops people from doing this now? The AI edit deluge is ongoing (or at least the start of it). This will at least formalize the informal community norm that AI edits are generally disallowed and are considered disruptive 2A04:7F80:55:D888:189F:90F9:CC10:E8C (talk) 13:58, 31 October 2025 (UTC) [reply ]
    Read the FAQ. Cremastra (talk · contribs) 14:02, 31 October 2025 (UTC) [reply ]
    It should not be necessary to read an FAQ to understand the basics of a policy. Thryduulf (talk) 16:02, 31 October 2025 (UTC) [reply ]
    No, but it shouldn't be that hard to read the FAQ to better understand the purpose of it. Cremastra (talk · contribs) 16:03, 31 October 2025 (UTC) [reply ]
    It isn't that hard, but it should not be necessary. The purpose of a policy should be clear from the text of the policy itself, but this one is just hopelessly vague. Thryduulf (talk) 16:06, 31 October 2025 (UTC) [reply ]
  • Strong Oppose. I am against LLM slop as much as anyone else - I decline and CSD a heck of a lot at AFC - but this isn't a suitable guideline in its present form. LLM is a tool, and like a lot of tools should not be used by new, inexperienced editors, with no knowledge of Wikipedia's norms, guidelines, and policies. But in the hands of experienced editors? I have no issues with LLMs being used by them as long as the output complies with our policies and guidelines and goes through human review. I agree that we do need a codified policy for LLM usage, but this is far too broad and vague. The current wording "generate new Wikipedia articles from scratch" could mean several things that all have different interpretations:
    • an LLM creating an entirely new article with no human involvement, or
    • an LLM writing the first version of an article that a human later edits, or
    • any use of LLM at the start of an article.
  • Does "from scratch" cover the lead section only? the whole article? a stub? a list?This is banning a method without actually defining where it begins or ends. Since no one can reliably tell if an LLM was used (and we’re not about to install spyware on editors machines), enforcement would be impossible. LLM detection is unreliable, and we already have CSD G15 to handle unreviewed LLM slop. I would support a guideline that strongly discourages LLM usage for new editors, in the same way that we strongly discourage autobiographical writing. Focus on the risk factors not the tool. We can come up with something better. qcne (talk) 15:49, 31 October 2025 (UTC) [reply ]
  • Strong oppose. As others have already stated, there are numerous problems with the proposed guideline and RfC. Firstly, there was not enough WP:RFCBEFORE for a new guideline (especially something of this magnitude). Secondly, the mere usage of AI is not a problem; mindlessly copy-pasting slop is. I do not see why we should be banning experienced editors from using AI as long as they carefully review its output. The issues that AI usage creates (made-up sources, hallucinations, POV, etc.) are already banned by policy. Thirdly, the proposed text is as ambiguous and vague as it could be, so a huge amount of articles would be tagged, users would be swearing up and down that they didn't use AI, different admins would evaluate them differently, and this whole thing would turn into chaos. The closing admin of this RfC is strongly reminded to evaluate consensus based on the arguments presented, and not solely on head count. After closure, I would support a proper workshopping for a proper RfC regarding the AI issue. Kovcszaln6 (talk) 16:40, 31 October 2025 (UTC) [reply ]
    Firstly, there was not enough WP:RFCBEFORE for a new guideline (especially something of this magnitude). Your second point contains quite reasonable arguments, but I don't think a failure to pre-discuss a guideline is by itself grounds to oppose the content of the guideline itself. Cremastra (talk · contribs) 16:54, 31 October 2025 (UTC) [reply ]
    I was just expanding upon my statement of there are numerous problems with the proposed guideline and RfC. Kovcszaln6 (talk) 16:59, 31 October 2025 (UTC) [reply ]
    There's been almost three years of discussion that has failed to produce anything of substance. I think that's enough to satisfy WP:RFCBEFORE. SuperPianoMan9167 (talk) 17:26, 31 October 2025 (UTC) [reply ]
    The main reason why I have an issue with the lack of WP:RFCBEFORE is that if there were one, we could have made a better-worded proposal that would have been supported by those opposing (including me). Right now, we're just wasting time with RfC: either this will get closed with "no consensus"/"consensus against", and we'll start over with a new one, or it will be closed as "consensus for", and then the project will spiral into chaos (as I've explained above) until we repeal this. Kovcszaln6 (talk) 17:32, 31 October 2025 (UTC) [reply ]
    a huge amount of articles would be tagged, users would be swearing up and down that they didn't use AI, different admins would evaluate them differently, is kind of what we have already because we have no clear guidelines whatsoever. All this guideline attempts to do is establish a narrow set of behaviours that are definitely inappropriate. Cremastra (talk · contribs) 17:36, 31 October 2025 (UTC) [reply ]
    All this guideline attempts to do is establish a narrow set of behaviours that are definitely inappropriate. This proposal is the exact opposite of "a narrow set of behaviours". As others have pointed out, it is extremely vague. For example, what does "from scratch" mean? What about partially using AI? What if an experienced editor carefully reviews it? Different editors will interpret this guideline differently. When determining whether something violates this guideline, editors will probably decide based on whether the text follows already existing policies (WP:V, WP:NPOV, etc.), so there's no point to this guideline. Kovcszaln6 (talk) 17:49, 31 October 2025 (UTC) [reply ]
    huge amount of articles would be tagged: This is already happening. Category:Articles containing suspected AI-generated texts continues to get larger and larger as the months go by.
    users would be swearing up and down that they didn't use AI: This happens over and over and over again at ANI: a user is reported for using LLMs, they almost always deny it, evidence is compiled to prove they did use LLMs, and then they are CBANned. Seriously, just look at the recent ANI threads and archives to see just how often users are blocked for using LLMs.
    different admins would evaluate them differently: This is how G15 is used because each admin evaluates the validity of the {{db-g15 }} tag in distinct ways.
    The project is kind of already in chaos because we have no clear guideline on LLM use, only an essay that really is treated like a guideline at this point.
    My preferred outcome would be:
    • WP:LLM is promoted to guideline.
    • The two most widely accepted standards for LLM use, WP:LLMDISCLOSE and this proposal (which would be incorporated into WP:LLM as WP:LLMWRITE), are made policy, with each section labeled using {{policy|type=section}}.
    • WP:LLM is labeled with {{Policy section top }}.
    But I don't think that would get wide acceptance, so this proposal is a good start. SuperPianoMan9167 (talk) 17:50, 31 October 2025 (UTC) [reply ]
    I haven't seen significant disagreements over the interpretation or evaluation of WP:G15; its whole point is to have clear and objective criteria that obviously prove unreviewed AI usage. "We have no clear guideline on LLM use": this proposed guideline isn't clear either. I would absolutely support requiring WP:LLMDISCLOSE and some kind of more precise rule against using AI irresponsibly. If there were an actual WP:RFCBEFORE, we could have discussed these things. Kovcszaln6 (talk) 18:03, 31 October 2025 (UTC) [reply ]
    As I said below, nothing in WP:RFCBEFORE describes or even suggests the existence of a pre-RfC process for a situation like this where the entire point is to develop community-wide consensus on something. WP:RFCBEFORE is about trying to resolve small-scale disputes that might avoid going through RfC entirely. This is obviously nothing like that. Einsof (talk) 19:24, 31 October 2025 (UTC) [reply ]
    Please see Wikipedia:Policies and guidelines#Proposals, especially the "Brainstorming" subsection. Kovcszaln6 (talk) 19:47, 31 October 2025 (UTC) [reply ]
    So what was the point of repeatedly linking WP:RFCBEFORE if you actually meant WP:PROPOSAL? Also this is not specific to you — many other editors seem to be doing this. And also, nothing in WP:PROPOSAL indicates the necessity of a pre-RfC process either, other than discussions that have already occured. Einsof (talk) 19:51, 31 October 2025 (UTC) [reply ]
    Seeing people linking to WP:RFCBEFORE made me believe that it mentioned this, but apparently not (it should be). That's my bad. "Nothing in WP:PROPOSAL indicates the necessity of a pre-RfC process": it does. Please see the Brainstorming section. Also, the next section starts with Once you think the initial proposal is well written, and the issues involved have been sufficiently discussed among early participants to create a proposal that has a solid chance of success with the broader community, start a request for comment (emphasis added). Kovcszaln6 (talk) 19:58, 31 October 2025 (UTC) [reply ]
    We can still make exceptions when necessary, as Wikipedia is not a bureaucracy. Most ideas for LLM policies/guidelines never even make it out of the idea lab; having an actual RfC is beneficial because it draws in more feedback and participation. SuperPianoMan9167 (talk) 20:29, 31 October 2025 (UTC) [reply ]
    Most ideas should never make it out the idea lab. Writing policies and guidelines is hard, and good ones require significant workshopping with a wide range of input to ensure that people can actually agree:
    • That there is a problem
    • What the problem actually is, specifically
    • How the problem could be fixed
    • Which of those ideas actually do fix the problem
    • What the side effects of those fixes will be
    • Whether the impact of those side effects are, on balance, less problematic than the original problem
    • How those side effects could be mitigated
    • Whether those mitigations would cause other issues (and what their impact will be, how they will be mitigated, etc)
    • Then when you've done all that you start working out how to word the proposed policy/guideline so that everyone reading it understands it the same way, agrees that what it says matches what it is intended to say, etc. and get a feel for whether most people will support it as written.
    Only after all that do you bring it to an RFC. Very nearly all the previous proposals about LLM/AI use suffer from similar problems to this one: the proponents have only got as far as "there is a problem with LLMs" but can't even agree what specific problem with LLMs they're trying to solve, let alone defining one problem specifically or any of the other steps. Thryduulf (talk) 20:50, 31 October 2025 (UTC) [reply ]
    Would opening a workshop discussion at VPIL help, or would that be counterproductive now that there's already been 100,000+ bytes of discussion here? SuperPianoMan9167 (talk) 01:00, 1 November 2025 (UTC) [reply ]
    It's far too late to salvage this proposal. If it didn't have so many responses I'd say to withdraw it and start the discussion again from scratch, but again it's too late for that and yet another overlapping discussion about LLMs is not going to help anyone. What I suggest is that you wait for this discussion and the discussion about LLMs and GAs to both be closed (both should either be consensus against or no consensus), wait a few days to allow some space (and make sure nobody contests either closure; if they do, wait a few days after that plays out) and then start a discussion (either directly at the idea lab or on a dedicated page advertised there) that begins at first principles. Make it very explicit that it's not a venue for supporting or opposing nor for expressing general opinions about LLMs (and shut down any of that which does happen, leaving it open will only hinder) but solely about workshopping a proposed policy or guideline that has a chance of achieving consensus. Thryduulf (talk) 03:19, 1 November 2025 (UTC) [reply ]
    I am going to start one at WT:AIC after this closes NicheSports (talk) 03:20, 1 November 2025 (UTC) [reply ]
    I do not think we should prejudge the outcome. It could be either of those you mentioned, but "consensus for" is also possible; we have to see what the close says. As for workshopping, if this fails that is probably what will be done, but if even a basic start of "don't generate articles from scratch with AI" fails to get consensus, I am not sure a workshopping process whose ideas will be either more complex or at least no less complex than this one will generate consensus. But first let's see if this gets approved at the close. GothicGolem29 (GothicGolem29 Talk) 03:54, 1 November 2025 (UTC) [reply ]
    It's not about being more simple or more complex, it's about being more precise. Thryduulf (talk) 04:05, 1 November 2025 (UTC) [reply ]
    The only way I can see this being more precise is if it is more complex, and very possibly the scope expands too; when you start going down that route there are going to be more objections on other issues regarding scope, what AI can be used for, etc. I am sure there have been other proposals in the past that you would have considered more precise than this one, but they failed to achieve consensus, and I fear any workshopped proposal will face the same outcome and the status quo will remain. GothicGolem29 (GothicGolem29 Talk) 04:21, 1 November 2025 (UTC) [reply ]
    once again, any and all of you were free to weigh in one month ago when I tried to do an RFCBEFORE, but you didn't. Gnomingstuff (talk) 04:56, 1 November 2025 (UTC) [reply ]
  • No per Rusalkii and Toadspike. This proposal misidentifies the problem with LLMs, which is not the fact that LLMs are some inherent evil, but that they produce bad content that is already forbidden by our guidelines. There is no reason to add another vague and unclear guideline that is seemingly unenforceable. Guidelines are not where Wikipedia should be enshrining moral stances; they "are developed by the community to describe best practices, clarify principles, resolve conflicts, and otherwise further our goal of creating a free, reliable encyclopedia". I don't think this guideline does any of that, because it does not effectively "describe best practices" due to its complete lack of precision (evidenced by the fact the guideline has changed multiple times during this discussion), it does not "clarify principles" because it seems to be based on some moral principle that is opposed to LLM usage, a principle that has not been discussed or established as Wikipedia consensus, and has no use in resolving conflicts as our existing guidelines already preclude the bad content that editors have pointed out as the result of LLMs. In addition, the inability of editors to identify LLM-produced content will only produce more conflict, as there will be debate on whether or not particular text was produced by an LLM, highlighting the necessity for further community discussion before rashly pushing through a guideline. Editors have all highlighted their support 'as a start', but there is no rational reason to push through an unclear guideline without considering the implications and specifics. Katzrockso (talk) 00:50, 1 November 2025 (UTC) [reply ]
  • Yes - I think LLMs are a detriment to the project that have repeatedly proven to be massive time sinks for the community. Though I'd prefer stronger restrictions than what is currently proposed, I'd rather have something ok that can be improved upon later than nothing at all. - Butterscotch Beluga (talk) 00:56, 1 November 2025 (UTC) [reply ]
  • Yes in principle, per reasoning above. But also, this RfC seems very bare-bones. User:Bluethricecreamman (Talk·Contribs) 02:58, 1 November 2025 (UTC) [reply ]
  • Yes, yes and YES, we need to bell the cat and at least this would be a start. --Gurkubondinn (talk) 18:51, 1 November 2025 (UTC) [reply ]
  • Yes It is about time we had an AI policy; we have been too lenient on what could be apocalyptic to our encyclopedia. I look forward to helping expand this when it passes. Mikeycdiamond (talk) 19:11, 3 November 2025 (UTC) [reply ]
  • No LLM is a tool. Used without necessary skills, it produces junk. So does the visual editing interface of Wikipedia. First of all, there are areas where IMHO LLM is safe to use. For example, it seems perfectly OK to create new articles using translation from another language with the assistance of an LLM. I would dare to say that this translation will be better than what almost all editors of Wikipedia will be able to provide themselves. Why prohibit an option to get a better article text? By the way, just a few years ago, machine translation also produced bad results. Prohibiting a useful tool altogether is not useful and will not be sustainable, as LLMs evolve quickly and spotting their use would be much harder in a year or so. I would suggest that if one is willing to pay a few hundred USD per month today, producing texts on simpler topics (say, minor rivers) of quality similar to the human-made ones is already within grasp. Instead of a new Prohibition, I would suggest something like WP:BURDEN: bad text that is probably produced by an LLM should be easy to delete (no proof or discussion needed, suspicion is enough) and hard to reinstate (a point-by-point demonstration of adherence to WP:V - with references and page numbers - should be required if the text was challenged and deleted). We also might want to consider limiting the per-user/per-day contribution of new editors. --Викидим (talk) 06:32, 4 November 2025 (UTC) [reply ]
    I absolutely object to the idea that AI translations are of any value whatsoever. AI is not capable of producing translations that faithfully reproduce the meaning and nuance of the original text. It is no better than machine translation, which has long been rejected on Wikipedia (see WP:MACHINE; the 'worse than nothing' language is very applicable here) and often worse, because it frequently incorporates hallucinations into the translated text. I would much prefer an unpolished human translation to any AI translation. An unpolished human translation may use unnatural English, but at least it will be true to the text and can be spruced up; an AI translation, on the other hand, is likely to be full of factual inaccuracies and bewildering hallucinations that only those with deep knowledge of the source language will be able to pick up on. Allowing AI translation on Wikipedia is asking for trouble. Yours, &c. RGloucester 07:12, 4 November 2025 (UTC) [reply ]
    IMHO the situation has changed a lot since WP:MACHINE was written. We are both entitled to our own opinion; mine is that, as of today, any good AI model does a better job at translation than a gifted amateur (which most of us here are). The models struggle with obscure terminology, but so do nonprofessionals among the humans. In my experience, a typical Wikipedia article will require very little editing after translation from, say, Russian to English with, for example, Google Gemini 2.5 Pro.
    For the avoidance of doubt, I am not suggesting to paste any unedited AI text into Wikipedia or suggest attempting the translation without knowledge of both languages. Викидим (talk) 08:39, 4 November 2025 (UTC) [reply ]
The problem is, for anyone to evaluate whether it has done a better job or not, one needs to have full mastery of the source and target languages to go back and perform a detailed check for errors, and even worse, hallucinations. If one has already mastered the relevant languages, whether as a native speaker or otherwise, it is much more efficient to write one's own translation, no matter how imperfect it may be. Hallucinations are difficult enough to spot in non-translated texts, let alone those that are crossing linguistic boundaries. Mopping up the mess is a nightmare, and a waste of editor time. Finally, as you say, those who have incomplete mastery in either language should never use machine or AI translation to write articles, as they cannot review the results. While I assume good faith about any given editor's stated linguistic ability, permitting this kind of article-writing workflow is opening the door to endless headaches. Yours, &c. RGloucester 09:25, 4 November 2025 (UTC) [reply ]
Also, maybe Russian-to-English is near flawless—I wouldn't know—but I have not found Japanese-to-English machine translation to be on par with a "gifted amateur" human translator. Sometimes it produces results that are pretty good, but often it yields results that look superficially reasonable but actually contain significant errors in word interpretation, nuance, and/or tone, on top of generating awkward English fairly often. In particular, it often translates words in concrete, literal terms that are meant metaphorically in a way that doesn't come across in translation, directly translates English loanwords that are actually false friends, and tends to flatten out the tone of everything to a kind of beige neutral even when the original source text is blunt and rude, overweeningly polite, cheekily sarcastic, etc. I went through a phase of thinking it was remarkably dependable, but as I've gotten a better handle on the language I've become very wary of it.
I think it's something to use for casual purposes when you have no better option, but here we have to hold ourselves to a higher standard than casual and have a better option—editors that know both languages well. Japanese is so different from English that the best translation is often quite loose and paraphrastic, and knowing how to achieve that while staying as close as possible to the semantic meaning and tone of what was originally written requires the kind of understanding of the languages and cultures that an LLM can't possibly have. ('"') (Mesocarp ) (talk ) (@) 15:12, 4 November 2025 (UTC) [reply ]
All I can say is that Wikipedia articles about research journals or small rivers typically do not contain text that is blunt and rude, overweeningly polite, cheekily sarcastic. Yes, due to lawyers and political correctness, it is indeed hard to force AI to produce the "blunt and rude" language. Other stylistic variations are actually possible, just prefix the prompt with "you are ...". Викидим (talk) 20:37, 4 November 2025 (UTC) [reply ]
That may be true, but we cover a lot of topic areas beyond those. Sources like newspapers and magazines can display a wide range of affects (especially considering that they often quote people talking etc.), not to mention the stylistic range of excerpts from literature or other primary sources that we have on-wiki translations of (have you ever tried machine translating Japanese poetry from a thousand years ago? it will dutifully comply but not at all well ime). Also, if you understand the text you're trying to translate deeply enough to give an LLM fine-tuned instructions on its tone, I don't think you need it to do the translation for you. ('"') (Mesocarp ) (talk ) (@) 00:25, 5 November 2025 (UTC) [reply ]
"what almost all editors of Wikipedia will be able to provide themselves": why would a human who understands what they're doing produce a worse result than a machine which is just putting in what word seems right to come next, hallucinating madly all the while? This "AI is just a tool" stuff is starting to really irk me. All sorts of things are "just tools" but that doesn't make them naturally innocent. Some tools are too unreliable, but reliable-feeling enough to dupe humans into not using proper oversight, to be appropriate for a serious project. Cremastra (talk · contribs) 15:14, 4 November 2025 (UTC) [reply ]
Humans are imperfect. Editors that do not bother to check the AI output would produce just as sloppy a mess of a translation without using AI. (somewhat off-topic, but might be relevant) A long time ago, when I was young, the word LLM did not exist and AI was an obscure research topic; I was lucky to come across some truly good linguists. They explained to me that the "meaning" of a word is just the probabilities of it appearing in different contexts. The distributional hypothesis is not the only game in town, but it clearly points to a chance that humans actually create texts by just putting in what word seems right to come next, too. Викидим (talk) 20:52, 4 November 2025 (UTC) [reply ]
  • Yes, sure. I agree with other editors here that we ultimately would benefit from more extensive guidelines on this topic, but this seems like a good start.
elaborating further
Something I really wish a lot of the participants here would remember is that, ultimately, no one can practically stop someone from submitting LLM-generated content (entire articles or otherwise) that is truly indistinguishable from good handwritten content. The actual reason we need guidelines like this is to discourage people from using them recklessly, as has become rather commonplace; no one could possibly stop someone from using them effectively because we wouldn't even be able to tell. There's obviously a pressing practical need to discourage people from using them though, because, as many others here have noted, their use is taking up a huge amount of editor time. People who use them recklessly are generally unable to tell that their edits are of a special kind of poor quality that can take extensive work to clean up or cause drastic frustration and upset (e.g. rapid fire GA reviews via LLM), and they are sometimes even unwilling to recognize these problems when challenged, or will superficially apologize and then continue. So if we allow wiggle room that LLMs are "sometimes useful" or the like, people who want to use them recklessly will be very likely to take advantage of that wiggle room, at cost to the community (a glance at ANI right now will demonstrate this further if needed). Such editors are already saying things like "The WMF said I could do it" based merely on this blog post. Yes, this kind of thing may violate existing policy and we can ban the editors in question, but LLMs allow people to move fast enough that they can do a lot of damage before it gets to that point, hence the need for giving them special guidelines or policies. If someone is so confident that they can use LLMs well enough that no one will call them on it, well, they can try, but having strong guidelines or even policies in place with harsh disincentives to discourage their use outright will help ensure that only people who are really capable of using them well enough to totally evade detection will actually do it, which would be fine. Also, if LLMs somehow become magical in the future and capable of generating content on par with a seasoned editor unsupervised, obviously any such guidelines or policies will simply gather dust, so we really don't need to fret about them somehow "closing off the future" or the like.

(I feel a little sad making these arguments because some of the things people write on here with LLMs make me giggle immensely, but I know that's not worth the cost to the project...I wish we could just secretly redirect them to some kind of doppelgänger of Wikipedia so that I could read their silly edits without them making trouble.) ('"') (Mesocarp ) (talk ) (@) 14:29, 4 November 2025 (UTC) [reply ]

  • Yes I just took a couple months of Wikibreak as cleaning up AI slop was consuming all of my time and demotivating me to contribute in other ways. The few oppose votes here are the same arguments that have been presented at every AI discussion for over a year now:
AI is a just a tool - I agree, but it's a tool that has been consistently used to inflict massive damage to the wiki. VPNs are just a tool, but we still have WP:PROXY.
The problem isn't LLM content, it's unreviewed LLM content - I agree, and for over a year many editors like myself have spent nearly 100% of our available wiki time carefully reviewing and usually deleting LLM content, yet unreviewed content continues to flood onto the wiki.
The proposal is unnuanced/needs to be stronger/needs more RFCBEFORE - I agree that there is room for improvement, but we've been spilling massive quantities of digital ink on this issue for well over a year now in dozens of discussions. It is well past time to act. WP:G15 wasn't (and still isn't) perfect, but the wiki is still much better off because it exists.
LLM technology is constantly improving/this will need to be reevaluated in a few years - LLM technology has plateaued according to many sources. Even if it improves in the future, enormous harm is being done right now. We can reevaluate this guideline if needed, but we need to set a culture right now that only human written and reviewed content is acceptable here (this culture already exists, but the minority of experienced editors who disagree are creating space for the flood of LLM misusers to continue their behavior and to be confused when they receive pushback).
LLMs are helpful for accessibility/translation - the output of LLM translation must still be evaluated for accuracy, and anyone who lacks sufficient fluency in the receptor language to write quality content lacks competence to evaluate LLM output. Failure to acknowledge this has the potential to repeat the Scots Wikipedia debacle in every minority-language wiki.
We can't detect all LLM use/it will be too hard to enforce - There's lots of undisclosed paid editing that we can't detect, and enforcement is difficult, but that hasn't stopped us from making a very clear statement of what our expectations are.
LLM proponents have been given a full year and untold bytes of contributions to demonstrate that LLM use is a positive for the wiki, and the rest of us have wasted untold hours responding to these discussions and cleaning up LLM slop in wiki space. I for one am willing to accept any possible small harm this action may cause in exchange for reducing the large harm that is already occurring. -- LWG talk 15:21, 4 November 2025 (UTC) [reply ]
it really -- apologies for the AI-speak -- underscores the hypocrisy of people spending literally years dancing on the grave of one person who introduced a few inaccuracies, proclaiming it an imminent crisis, while an actual imminent crisis was taking place, producing far more inaccuracies than he ever did. Gnomingstuff (talk) 21:06, 4 November 2025 (UTC) [reply ]
There's no hypocrisy in the community - this proposal is receiving even more overwhelming support than the Lugnuts stuff had, and just as in the Lugnuts case there is a vocal minority who object to action for various reasons, mostly procedural although a few voices actually disagree with the community on the merits. I understand that these discussions and your work on AI cleanup have left you feeling embattled, but most of the community seems to share your general perspective and appreciate your efforts, though we may advocate slightly different practical measures. -- LWG talk 23:24, 4 November 2025 (UTC) [reply ]
Not saying everyone or even most people are being hypocritical. (I don't agree with the aforementioned "vocal minority" on everything, but the people who've opposed both are at least consistent in their reasoning which I appreciate.) Gnomingstuff (talk) 22:50, 5 November 2025 (UTC) [reply ]
Something I pointed out above is that, if someone produces content using an LLM that is truly indistinguishable from that of a well-seasoned editor working by hand, or LLMs advance to the point that they can produce material like that unsupervised, it will not cause any problems and these guidelines/policies simply won't be applied, precisely because no one will be able to tell the difference. There's not an issue there. The central point is not that there is some kind of religious impurity that surrounds LLMs, it's that their use is having constant bad effects in how little they generally live up to good material written by hand and how rapidly they allow people to generate this kind of distinctively hard-to-handle malign material. We don't need to carve out exceptions in the guidelines/policies because in practice there's no way to apply them to unproblematic LLM-sourced material anyway; the point is to discourage everything else. Right now, a lot of people clearly think they can produce good material with an LLM that are plainly wrong, and strong, unambiguous, airtight prohibitions against their use should at least make those people think twice. ('"') (Mesocarp ) (talk ) (@) 00:54, 5 November 2025 (UTC) [reply ]
  • Yes A good step to discourage LLM, especially important to guide those who, through inexperience, know no better. JMCHutchinson (talk) 10:03, 9 November 2025 (UTC) [reply ]
  • Oppose The current text is just two sentences and they are so simplistic that they seem at the same level as "burn the witch". Two immediate issues:
  1. What it specifically outlaws are "large language models", but that's a particular bit of technology jargon, and the user may not know whether one is under the hood of a virtual assistant, chatbot, search engine, or other power tool.
  2. As written, the prohibition is easy to work around. For example, one might start an article as a simple stub or skeleton and then use an LLM tool to expand it.
To move forward, we should aim to consolidate other pages on much the same subject such as WP:LLM, which already has the relevant shortcut; WP:CXT which governs use of another tool for generating articles; and WP:BOTPOL which governs automated editing in general.
Andrew🐉(talk) 11:11, 9 November 2025 (UTC) [reply ]
There's also WP:FREECOPY, which is already a guideline, and WP:COPYPASTE and WP:PARAPHRASE for more specific guidance. I think we could address both of your issues by making it clear that those pages still apply if the content in question was generated in response to a prompt. But this RFC is a starting point to help people crafting more nuanced guidance understand where the community is at on the issue. -- LWG talk 19:55, 10 November 2025 (UTC) [reply ]
I was going to say that we don't need to explicitly say that content policies and guidelines apply regardless of whether the content was machine-generated or human-generated, but given how many people seem to think that content that (they suspect) has been touched by an LLM in any way needs specific policies rather than just applying the existing ones in exactly the same way to all content, we actually probably do need to. Thryduulf (talk) 20:12, 10 November 2025 (UTC) [reply ]
We don't need more starting points, as it seems we have more than enough PAG pages for this already, so WP:CREEP and WP:TLDR apply. What we need more of are case studies, and I've just been dealing with one. The topic is Breaking Rust, an AI country singer with some chart success. Some editors have suggested that the article was AI-assisted, but they didn't give any specific evidence, so this just seems to be blind prejudice which should not be encouraged. I added an image, and what was especially interesting was that this was easy because such AI-generated images are public domain in our primary jurisdiction, the US. Andrew🐉(talk) 20:43, 16 November 2025 (UTC) [reply ]
  • Oppose per per. The likelihood of any text this far down the page being read by anybody is approximately zero, so I will not write any. jp×g 🗯️ 09:25, 10 November 2025 (UTC) [reply ]
    Some people subscribe to topics, y'know. Cremastra (talk · contribs) 13:33, 10 November 2025 (UTC) [reply ]
    +1 GothicGolem29 (Talk) 13:40, 10 November 2025 (UTC) [reply ]
    The likelihood of a good closer reading this far down is 100%. The likelihood of a good closer assigning any weight to your opinion as it stands is essentially 0%. The likelihood of a good closer assigning much greater weight to a comment that expresses the reasons for your opposition is extremely high. Obviously the likelihood of this being closed by a good closer is not 100% but it's much closer to that than it is to 0%. Thryduulf (talk) 14:31, 10 November 2025 (UTC) [reply ]
  • Support – This is becoming an urgent issue... we need a base standard to build on. SuperPianoMan9167's response to Ldm1954 and other opposes is more than sufficient. – Aza24 (talk) 00:50, 14 November 2025 (UTC) [reply ]
  • Yes There is absolutely no excuse for using LLMs to generate Wikipedia articles. Absolutely none. As I'm sure many have already mentioned, both in this discussion and others, LLMs are heavily prone to hallucination of information and sources, as well as editorializing, both of which greatly impact the quality of Wikipedia articles. Furthermore, LLMs pose a copyright risk to Wikipedia because they are trained on pre-existing resources. If the counterargument is to focus on the content rather than the tool, well, the time it takes to ensure LLM content is up to Wikipedia's standards could be better spent actually researching and actually writing the article on the subject in question. If the counterargument is that it allows non-English speakers to contribute, it's difficult to imagine how someone who can't speak or read English can ensure that an English article is up to standard, and it wouldn't be fair to place the burden of cleaning up their work on those who do speak English. While future work could perhaps improve this guideline (I personally don't think LLMs are useful tools at all), I think it's fine as a starting point. Lazman321 (talk) 18:31, 14 November 2025 (UTC) [reply ]
  • Support, but this is not enough. LLMs are very poor for transparency and bake in all the existing biases of their training data (which rarely reflects the entirety of the world that we wish to represent fairly under WP:DUE), so their usage should be restricted entirely on Wikipedia. There is already a de facto prohibition on using them to generate comments in discussions like this one.--Jasper Deng (talk) 20:02, 15 November 2025 (UTC) [reply ]
  • Support because this is slightly better than the status quo. If we want to expand or change it, we can discuss that once it's in place. Perfect is the enemy of good. Thebiguglyalien (talk) 🛸 21:49, 16 November 2025 (UTC) [reply ]
    I wrote Perfect is the enemy of good and that maxim encourages an easy-going pragmatism rather than the rigid intolerance which this putative guideline represents. Andrew🐉(talk) 22:54, 16 November 2025 (UTC) [reply ]
  • Needs work. The general concept is in the right direction, but the implication of the current text is ambiguous and misleading, insofar as it encourages people to let an LLM do everything except article creation per se. The wording should be broadened to disallow LLM rewrites or copyedits, LLM-written sections, LLM-written paraphrases of sources, and so on. People should use these tools to generate critiques of their work if they want, or to flag new edits for closer inspection, or even to automatically roll back likely vandalism, but LLM-generated content should not be going directly into the encyclopedia given the current state of the technology and the vast asymmetry in time and effort between LLM-led and human-led editing. LLM editing, like other kinds of automatic editing but in some ways even worse, creates messes that waste orders of magnitude more manual human effort to fix, once discovered, than it originally took to make. –jacobolus (t) 01:07, 18 November 2025 (UTC) [reply ]
  • Yes as a basic first step. LLMs should be prohibited for content creation in all instances. My stance is that LLMs and GPT content will always be flawed in ways that the contributor cannot see. The question of whether the LLM material is reviewed or not is immaterial when the reality is that virtually none of it will be reviewed. The LLM-using contributor is already too lazy or incompetent to read a breadth of the available sources, weigh the various interpretations, and summarize the facts. They will also be too lazy or incompetent to review the results, which would require reading the sources, etc, the same as if the LLM was not used. Binksternet (talk) 01:01, 23 November 2025 (UTC) [reply ]
    Is there a reason why you are assuming bad faith of good faith contributors? Thryduulf (talk) 10:51, 23 November 2025 (UTC) [reply ]
    I think the entire contention here is that LLM users are generally not good faith contributors, which I agree with. It's almost harder to use LLMs 'right' than to simply not use them at all, and most who try to use LLMs to generate contributions to Wikipedia use them as a competence shortcut because they can't find sources, can't extract information from sources, can't summarise well etc etc. Athanelar (talk) 14:16, 23 November 2025 (UTC) [reply ]
    I think the entire contention here is that LLM users are generally not good faith contributors do you have any evidence at all for this gross assumption of bad faith? Thryduulf (talk) 14:17, 23 November 2025 (UTC) [reply ]
    Most of the AI-generated content I come across on Wikipedia is the result of COI, promotional editing, brand-new editors slapping together articles about a topic of interest to them without taking even a second to familiarise themselves with how Wikipedia works, etc. I simply think there's a general air of NOTHERE about these kinds of contributions. Maybe it's not correct to say these contributions are made in 'bad faith' in the sense of 'with malice', but they are certainly made with a bull-in-a-china-shop level of impulsivity and ignorance that I think also makes it hard to argue they are made in 'good faith'. Athanelar (talk) 14:24, 23 November 2025 (UTC) [reply ]
    I very strongly disagree that you can equate ignorance with bad faith. Thryduulf (talk) 14:32, 23 November 2025 (UTC) [reply ]
    I agree with Thryduulf. See also Hanlon's razor. Ignorance does not equal bad faith. SuperPianoMan9167 (talk) 16:16, 23 November 2025 (UTC) [reply ]
    Then it's a good thing the !vote said lazy or incompetent and didn't assume any faith. ~ Argenti Aertheri (Chat?) 17:17, 23 November 2025 (UTC) [reply ]
    too lazy or incompetent to read a breadth of the available sources, weigh the various interpretations, and summarize the facts. They will also be too lazy or incompetent to review the results is unambiguously an assumption of bad faith. Thryduulf (talk) 17:43, 23 November 2025 (UTC) [reply ]

Comments

[edit ]
  • Where's the WP:RFCBEFORE discussion? voorts (talk/contributions) 21:38, 24 October 2025 (UTC) [reply ]
    • I have to admit I don't get it. WP:RFCBEFORE is about triaging disputes that could be resolved on a Wikiproject page, on the dispute resolution noticeboard, or by bringing a third opinion into a discussion between two editors. Would any of us accept the creation of a new Wikipedia-wide guideline through any of those means? Since WP:RFCBEFORE clearly states that its purpose is to avoid misspending a large amount of editor time, shouldn't we proceed directly to RfC, since we all know the other venues are not the right places to attempt a community-wide consensus? Einsof (talk) 00:35, 26 October 2025 (UTC) [reply ]
      • @Voorts and Einsof: I've made a few changes to RFCBEFORE to make this clearer. Voorts means that RfCs should be carefully planned and launched to avoid wasting editor time. For example, debating whether this proposal's most important words should be "should not", "may not", or "must not" is something that should have happened before the RfC started. Ed [talk] [OMT] 02:27, 30 October 2025 (UTC) [reply ]
      • +1. This subject has been discussed ad nauseam across countless pages, bringing us here. This isn't some brand-new or disputed idea that hasn't received input before. Demanding a specific discussion to be designated as the RFCBEFORE is just rules lawyering. Thebiguglyalien (talk) 🛸 21:54, 16 November 2025 (UTC) [reply ]
    I mean, I tried to have an RFCBEFORE discussion. Anyone here, including you, was welcome to participate. Gnomingstuff (talk) 13:35, 30 October 2025 (UTC) [reply ]
  • <s>Not ready for RFC, workshop first</s> (striking as I have now !voted in support above, but please follow WP:RFCBEFORE next time while proposing PAGs related to LLMs) Can we like, workshop this before going to an RFC? I think the community would consider a total ban on LLMs to generate article content, per both Kowal2701 and Chaotic Enby below (note: added) at this point - your proposal is just a ban on LLMs to create new articles, which leaves a gap. There are also implications around enforcement that I was hoping to have basic guidelines for before launching into this. NicheSports (talk) 21:44, 24 October 2025 (UTC) [reply ]
    Yes, it leaves a gap. That's the point.
    We keep trying to build a perfect house, but it's hard to gain consensus for that here. All I'm trying to do is build the foundations, so we have something. Is that really that hard? Cremastra (talk · contribs) 22:51, 24 October 2025 (UTC) [reply ]
    If you want to use the RfC process, yes, it is that hard. RFCBEFORE is very clear that you shouldn't bring something to the community for an RfC until it's been discussed. voorts (talk/contributions) 23:01, 24 October 2025 (UTC) [reply ]
    For a proposal to be accepted as a guideline, it is a clear status quo that it must be done via an RfC. Not all proposals, especially simple ones, need to be "workshopped" in advance. If you want to remove the RfC tag, go ahead. I do understand the RFCBEFORE argument. But a discussion is needed that simply asks whether we accept this as a statement of principle. Cremastra (talk · contribs) 23:05, 24 October 2025 (UTC) [reply ]
    There is no simple LLM policy because they can be used in many ways and identifying their use is often subjective. Let's discuss this. It doesn't have to take forever. I think we could have a robust policy proposal ready within weeks that would have a decent chance of community approval. NicheSports (talk) 23:12, 24 October 2025 (UTC) [reply ]
    That's why this isn't trying to be a catch-all. It's making a narrow statement of principle to back up current practices. Cremastra (talk · contribs) 23:13, 24 October 2025 (UTC) [reply ]
    There's no rush. This is an issue that the community has been grappling with for several years. The way we make reasoned decisions is by discussing things, not trying to blast our individual proposals through consensus-building processes. voorts (talk/contributions) 23:13, 24 October 2025 (UTC) [reply ]
    I'm not trying to rush this. I'm just proposing a narrow statement of principle to back up current practices. This doesn't need extensive discussion prior to the main discussion. We're discussing it right now and will probably do so for at least a month. If I was rushing I'd be proposing some all-encompassing guideline that tries to cover everything AI, which is not this. This is only a proposal, and now the community can decide whether or not it's a good one. What I object to is needlessly slowing things down with pre-discussions instead of just going ahead and having the main discussion. A narrow, two-sentence proposal doesn't have much scope for "workshopping" anyway. Cremastra (talk · contribs) 23:14, 24 October 2025 (UTC) [reply ]
    ChatGPT has been around for almost 3 years now. That is several years past "no rush." Even the technologically unsavvy dinosaur institutions of the world have been faster at meaningfully dealing with AI than we have. Gnomingstuff (talk) 04:49, 25 October 2025 (UTC) [reply ]
    I think several years of discussing things is more than enough WP:RFCBEFORE for our purposes. How much do we need? A decade? ~~ AirshipJungleman29 (talk) 14:11, 25 October 2025 (UTC) [reply ]
  • Let's back out of this and workshop it first - This topic needs guidance, but the current proposal feels half baked, incomplete, and arbitrarily narrow. Tazerdadog (talk) 21:50, 24 October 2025 (UTC) [reply ]
    @Tazerdadog Did you read the FAQ?
    This really isn't very hard. We want to build a house. I'm trying to build a foundation. We really don't need to waste breath "workshopping" every damn thing. For once, let's do something simple.
    Yes, it is incomplete!!! Again, did you read the FAQ? See my comment here. Let's actually start building the house instead of just talking about it!! Cremastra (talk · contribs) 22:54, 24 October 2025 (UTC) [reply ]
  • @Cremastra: are you amenable to workshopping this? voorts (talk/contributions) 21:51, 24 October 2025 (UTC) [reply ]
    Plug to do this at WT:AIC? I have been mulling ideas around for a while; sounds like now may be the time:
    • Policy options: total ban or LLM use restricted to an autopatrolled-type user right
    • Enforcement: need to address valid concerns about having AI "snipe hunts". Criteria for determining AI use should be as objective as possible, e.g. G15 criteria, repeated content verification failures, or unambiguous evidence that a user is not writing in their own voice. To what extent do we want to include guidelines like this in a proposed policy? What about blocking policy? 1LLM/2LLM? First block duration? Lots to consider.
    • Supporting data: do we want to present any data along with a policy proposal? Ideas here are mostly statistics from AfC and NPP declines; those will be the easiest to quantify.
    NicheSports (talk) 21:59, 24 October 2025 (UTC) [reply ]
    The one data point I have in mind is that 10% of AfC declines (3,026 out of 30,409) invoke the LLM decline criterion. This gives us a lower bound on the proportion of drafts with AI issues, since articles can have less clear-cut issues besides the one or two stated in the decline reason, and drafts deleted through G15 do not show up in the tracking categories. Chaotic Enby (talk · contribs) 23:33, 24 October 2025 (UTC) [reply ]
    Yep, makes sense. I would also like to chart total AfC submissions by outcome over time. If we see a bump in AfC submissions and/or decline rate post-2022 while the active user base hasn't really changed, then we can probably conclude something about LLM usage beyond just the number of AfC drafts declined while invoking the LLM criteria. I unfortunately don't know how to find this data. NicheSports (talk) 23:40, 24 October 2025 (UTC) [reply ]
    Since drafts that last got edited more than six months ago don't show up on Category:Declined AfC submissions, it gives us what is effectively a six-month rolling window of declined drafts. While categories don't keep a history of their page count, I can check if earlier versions have been archived on Wayback Machine to give us some past data. Chaotic Enby (talk · contribs) 23:59, 24 October 2025 (UTC) [reply ]
    Manually checking the first revision of each semester
    From what I see, most of the increase in declined drafts happened in 2025, rather than 2023–2024 like I expected. To make sure that we're accounting for aspects like higher/lower AfC traffic, we could also look at the number of accepted AfC drafts in each six-month window, to see if the numbers diverge at some point. Chaotic Enby (talk · contribs) 00:11, 25 October 2025 (UTC) [reply ]
    As for G15s, wouldn't admins have access to that data through the deletion log? NicheSports (talk) 23:44, 24 October 2025 (UTC) [reply ]
    As far as enforcement: the big problem (and of course I'm not telling you anything you don't already know) is that the only person who can definitively prove whether AI was used is the person who was using/not using AI. The obvious stuff really is obvious, in an "I know it when I see it" type of way; the problem is expressing that in a way that A) people will believe and B) will not just result in people making band-aid fixes to whatever 3-or-so things you pointed out. Gnomingstuff (talk) 04:57, 25 October 2025 (UTC) [reply ]
    @Voorts What needs workshopping? It's a very simple proposal that states a principle that I think many Wikipedians will agree with.
    Our collective addiction to "workshopping" and "discussing" everything to death has paralyzed our decision-making skills.
    I have presented a proposal for a guideline. It's up to the community to say "yes, this represents our principles" or "no, this doesn't represent our principles". Cremastra (talk · contribs) 22:53, 24 October 2025 (UTC) [reply ]
    It's an extreme proposal that might be susceptible to many exceptions. Does your proposal ban using AI spell correction? Does it ban turning a list of well-researched, bulleted notes into an article using an LLM and then checking the LLM output to ensure everything is correct? If so, why? Also, what does this proposal accomplish? We already delete articles that have objective indicia that they were generated with an LLM and not checked and we already block editors that indiscriminately rely on LLMs. voorts (talk/contributions) 22:58, 24 October 2025 (UTC) [reply ]
    Does your proposal ban using AI spell correction? Does it ban turning a list of well-researched, bulleted notes into an article using an LLM and then checking the LLM output to ensure everything is correct? If so, why? As stated in the FAQ, it does not. It is in fact very narrow, focusing only on articles wholly generated by AI from scratch. Being against that is not an extreme position.
    Also, what does this proposal accomplish? We already delete articles that have objective indicia that they were generated with an LLM and not checked and we already block editors that indiscriminately rely on LLMs. Because we should have a statement of position as well. We do these things, but we don't base them on any underlying guideline. In contrast, we do plenty of practical things that all rely fundamentally on, say WP:NPOV or WP:V. One might say: "why do we need WP:NPOV? We're already committed to the neutral point of view in our articles; we have cleanup tags and noticeboards for it." Well, those practical things have to rest on some kind of foundation. Cremastra (talk · contribs) 23:04, 24 October 2025 (UTC) [reply ]
    Would this guideline also exclude an article "wholly generated by AI from scratch", reviewed and modified by a human editor who is adding the article to Wikipedia? Katzrockso (talk) 13:29, 25 October 2025 (UTC) [reply ]
    Yes my understanding is it would exclude this. FWIW, how many editors do you know who use LLMs to generate articles and then sufficiently review them to remove all issues with V, NPOV, RS, OR, DUE, etc.? Basically everything I do here is AI cleanup and I only know of one such editor. NicheSports (talk) 14:26, 25 October 2025 (UTC) [reply ]
    I have no idea, I just don't know if a guideline excluding potential beneficial LLM usage would be appropriate, though I'm not sure how likely this is. Katzrockso (talk) 16:12, 25 October 2025 (UTC) [reply ]
    Also RE your comment here, I just restarted WP:PROJPOL. I'm not opposed to short PAGs. I'm opposed to jumping the gun. voorts (talk/contributions) 23:04, 24 October 2025 (UTC) [reply ]
    Fully agree. A guideline on this topic shouldn't be as short as IAR FaviFake (talk) 19:55, 26 October 2025 (UTC) [reply ]
    Cremastra, the issue is that it ostensibly gives no space to other editors to express their own opinions on how they might like the policy to be worded or adjusted, or even to discuss those things. Now all you can do is say "yes" or "no" when any productive discussion on writing PAGs for this area is going to be very far from black and white. Perryprog (talk) 00:39, 25 October 2025 (UTC) [reply ]
    This WP:PROPOSAL is for a {{guideline }}, not a {{policy }}.
    Also, there's plenty of room to discuss different ways to word it. There's this whole talk page, for example. Someone could say, e.g., that they'd like a philosophical explanation of why Wikipedia says 'no' added. Another editor could say that they'd like to take this simple little guideline and bloat it with a bunch of stuff that editors are less likely to agree with. Someone else could say she'd like a different page title. And so forth.
    What there isn't room for is turning it into a completely different proposal. For example, there's no point in having this be a somewhat reworded version of long pages that have already earned a {{failed proposal }} tag. We don't need another failed proposal. WhatamIdoing (talk) 00:57, 25 October 2025 (UTC) [reply ]
    Well, that is why I hedged myself with "ostensibly" :). What I'm trying to say, though, is that the vote has already started and it's a binary "accept guideline" or "don't accept guideline". You can bring up anything like you said, but it's not clear what bearing that has on the RFC. In my opinion it's better to first have these discussions before having the "make this a PAG" RFC since then you're more likely to be in a better place than just assuming the first draft is what everyone wants. (No disrespect to Cremastra, to be clear—I don't think you are assuming that.) Perryprog (talk) 01:09, 25 October 2025 (UTC) [reply ]
  • The wording is too unclear to !vote on. I've read through the discussion and I think what is meant by the wording of the proposal is "AI should not be used to generate articles from scratch" - that is, AI should not be responsible for the creation of the text, the markup, the references, etc all in one go. But the wording of the proposed bolded sentence could just as easily mean "you cannot use AI at any point in making a Wikipedia article", that is, at no point in your generation of the article should you make use of AI, or something somewhere between these two points. Can this please be clarified, explicitly, in the proposed text, before too many !votes come in on this? -- asilvering (talk) 01:38, 25 October 2025 (UTC) [reply ]
    @Asilvering I've rephrased it a bit, moving the "from scratch" into the bold text. Cremastra (talk · contribs) 01:40, 25 October 2025 (UTC) [reply ]
  • Wikipedia needs to go further, it needs to become 100% human only and disable the paste function. 95.129.20.22 (talk) 13:13, 26 October 2025 (UTC) [reply ]
  • I'm just commenting here to let people know that the cat is already out of the bag in regards to AI use on Wikipedia. I think lots of new-ish editors are using AI to create articles. It's not always easy to spot. User:Esculenta won $800 with AI-generated articles at The World Destubathon and no one even cared. ~WikiOriginal-9~ (talk) 19:15, 27 October 2025 (UTC) [reply ]
    Well, not no one. It seems MtBotany (who hopefully does not mind the ping) was driven away from the contest upon the realization that it was effectively an LLM speedrun. Einsof (talk) 23:05, 27 October 2025 (UTC) [reply ]
    I don't mind the ping. And also a fair assessment that I quit because the use of LLMs offended me greatly.
    I also had the notion that I might try to advocate against the use of LLMs on Wikipedia, and I didn't want to muddy things by "losing" a contest, so that advocates of LLM-generated text cannot say that that is why I am against it. I'm not super effective at organizing and Wikipedia advocacy, though. What I know and like to do is to make articles better.
    LLMs are fatally flawed at generating accurate texts. They are very interesting tools and useful in doing tasks like watching for possible vandalism, but they do not write well and they never will. I feel like I should clean up after the person who was making all those LLM-generated lichen articles, but it is just so discouraging. I randomly sampled one and was finding all sorts of subtle errors, but felt overwhelmed at the size of the task. Minutes to generate a text and hours of work to show all the ways in which it is flawed. 🌿MtBotany (talk) 01:21, 28 October 2025 (UTC) [reply ]
    Thryduulf will tell us that if the output's good enough, it doesn't matter. But it does. If people want to use an LLM, there are plenty of websites that let them do that. But we stand for quality, written by humans, and our readers deserve real articles from real human effort. We should take a moral stand against the proliferation of AI writing – or are such noble sentiments hopelessly naive and out of date? Cremastra (talk · contribs) 01:51, 28 October 2025 (UTC) [reply ]
    Please see WP:RGW. We are not here to make a moral judgement about AI use, we are here to write an encyclopaedia. As long as our articles are neutral and correct how they were written is completely irrelevant. Thryduulf (talk) 02:08, 28 October 2025 (UTC) [reply ]
    So, in your opinion, the end justifies the means?
    Also, I'm curious about your own moral perspectives. Outside of the encyclopedia context, do you think LLMs are generally no good, or do you think they are able to produce output of value? Cremastra (talk · contribs) 02:09, 28 October 2025 (UTC) [reply ]
    I'll also point out that's not what RGW is even about. We can, as a project, absolutely take a moral stance on important issues, such as how we blacked out the site in protest of internet censorship. We take a stand on moral issues when we promote Pride editing events. We take a stand on moral issues when we commit ourselves to an encyclopedia available under a free license, that anyone can edit. We take a stand on moral issues when we commit ourselves to a neutral point of view in our articles. Cremastra (talk · contribs) 02:12, 28 October 2025 (UTC) [reply ]
    So, in your opinion, the end justifies the means? no, because the means are completely irrelevant - they don't need justifying. My personal morals are completely irrelevant and so I leave them at the door, which is where you should be leaving yours. Trying to equate supporting improvements to our encyclopaedic coverage of topics related to LGBTQ+ people with banning a tool some people use constructively because you personally dislike it is the exact sort of moral judgement that is incompatible with a neutral encyclopaedia. Thryduulf (talk) 02:28, 28 October 2025 (UTC) [reply ]
    You're calling LLMs a "tool", but they're more than that. Using an LLM to write an article isn't like using a spellcheck or a ruler or a compass or any other tool, digital or practical. Because it uses machine learning. This isn't a moral issue like other moral issues, it's one directly up our alley that we need to fight about. Why should we roll over to the threat? Your approach of looking at the article content alone completely ignores the fact that there are larger issues at play here, and is impractically idealistic. Cremastra (talk · contribs) 02:30, 28 October 2025 (UTC) [reply ]
    If you cannot understand that an LLM is a tool and that your opinion about the morality or otherwise of machine learning is a moral judgement that is irrelevant to the output of that tool then it is not going to be possible for us to have a rational discussion because you are not capable of engaging rationally with the topic. Thryduulf (talk) 02:48, 28 October 2025 (UTC) [reply ]

When you are editing Wikipedia you are, in fact, signing up to adhere to a number of moral commitments, chief among them the commitment to write a free encyclopedia. We could be writing a restrictively licensed encyclopedia for profit; but by being here, we've agreed not to do that. We should also be articulating in policy that although we could write slop, we agree by being here that we aren't going to do that either. Our moral commitments do, in fact, restrict some of the materials and tools we agree to use. Einsof (talk) 02:50, 28 October 2025 (UTC) [reply ]

Of course we're not writing slop, nobody is arguing we should, so most of your comment is completely irrelevant. What matters is that the content we produce is good quality. What doesn't matter is whether that good quality content was written by a human, a machine or a combination. Thryduulf (talk) 02:56, 28 October 2025 (UTC) [reply ]
But many would argue that LLMs, as a tool, produce work that is inherently of lower quality and value. You can dismiss this as an "opinion" as much as you want, but you should understand that many Wikipedians hold this opinion. We have every right to take a moral stance against LLMs if we don't like them. If you don't like that, you're welcome to take your volunteer work somewhere else. Cremastra (talk · contribs) 02:57, 28 October 2025 (UTC) [reply ]
The point about value is very important. Given two texts of identical quality (whatever that means), many people will value one text less than the other if they find it was written by a machine rather than a person. Einsof (talk) 03:01, 28 October 2025 (UTC) [reply ]
There's a reason people pay more for handmade goods. Since the advantage of LLMs is extremely small compared to the advantage of industrial mass-production, there's not much cost to us insisting on running a quality establishment. Cremastra (talk · contribs) 03:03, 28 October 2025 (UTC) [reply ]
Cremastra I don't think you should be telling people to "take their work elsewhere." Especially where we haven't established anything about a consensus moral stance. And... there is so much wrong with what you just said, you might like to rethink or at least reword it. All the best: Rich Farmbrough 22:47, 28 October 2025 (UTC).[reply ]
@Cremastra and @Einsof some people do argue those things, but other people argue the opposite, because it is solely a matter of opinion. Regardless of how many people share your opinion, you do not get to declare it the only valid opinion and denigrate or dismiss those who disagree with you. Your opinion is not objectively correct (and indeed it is based in part on things that are objectively incorrect). You do not get to prohibit other people from submitting work of good quality to Wikipedia because you dislike the tool they used to write it, and you especially do not get to prohibit the inclusion of work because you guessed that they used a tool you personally disapprove of. Thryduulf (talk) 22:58, 28 October 2025 (UTC) [reply ]
All of those things are what this and future RfCs on LLMs will decide. We get to prohibit whatever community consensus empowers us to prohibit, and as you can see from the survey above there is already tremendous appetite for partial or total prohibition of LLM editing. Einsof (talk) 23:13, 28 October 2025 (UTC) [reply ]
There is a tremendous appetite to do something but almost none of the commenters actually engage constructively with the substance of this specific proposal and even fewer with the objections to it (note that you can engage with objections even if you disagree with them). Expressing a vague dislike of and/or moral objection to AI and/or LLMs is not engaging constructively with the specific proposal. Thryduulf (talk) 00:31, 29 October 2025 (UTC) [reply ]
I've not seen enough discussion about WP's ethos in regards to AI. It's a community of humans, and WP 'sells' itself as written by humans. Our purpose is to build the "sum of all human knowledge", it's fundamentally a human initiative. When you have people who are oblivious about the content they're uploading, it breaks the system, in which people are supposed to be actively procuring information for human consumption. Focussing on the single-case end result in a vacuum doesn't constitute "engaging constructively with the specific proposal", there's an element of soul-searching re WP in this that can't be ignored or dismissed. Kowal2701 (talk) 02:10, 29 October 2025 (UTC) [reply ]
Mr Thryduulf, I must protest. There is nothing 'vague' about my objection to the use of LLMs on Wikipedia. You speak of LLMs as if there were a mere 'tool', but the so-called 'tool' can, with very little human intervention, create an entire encyclopaedia-like entity in a trice. Grokipedia is only the beginning. The companies that own these 'tools' are content to use them to create these 'encyclopaedias', irrespective of the quality of the information they contain, in as much as these may provide a suitable profit. They have no concern for WP:NPOV, for WP:V, or any of the other ideals that the Wikipedia community claims to hold dear. They will revel in the opportunity to provide separate, forking knowledge-bases that appeal to one political tribe or another, balkanising the mass of human knowledge.
If I may be allowed to borrow the words of the capitalist, Wikipedia is in direct competition with these companies. If they succeed in convincing the public that their product is superior, that it will give them only the answers that they want to read, Wikipedia will have failed in its mission. Our only recourse as Wikipaedists is to compete on the merits of our work; to prove that an anonymous mass of men and women scattered right across the world can come together to create and maintain an encyclopaedia of the highest quality, and not demand a penny in return. To sacrifice the writing and research skills that this community has developed over these past two decades, to substitute them for the products of the very companies that have placed a target on Wikipedia's back, is to give away the only weapons we have in our fight to provide everyone and anyone the ability to access the sum of real, human knowledge. I, for one, will not put down my pen. I hope you will not either. Yours, &c. RGloucester 10:43, 29 October 2025 (UTC) [reply ]
As someone who cleans up after LLM use, I'm not sure why this is even a debate. The amount of time it takes to make the mess versus the time it takes to clean it up is not sustainable, and "I'll check more next time" isn't actually doing much besides triage. We need a policy we can definitively point to the first time someone leaves an LLM mess, one that doesn't allow for repeated violations because the editor "is trying". If anyone misused a bot like this, introducing errors with every use, they'd have their bot privileges revoked immediately, because you're responsible for your edits. I honestly expected this to snow close because, looking up the page, consensus is pretty clearly yes. If you can use the tool well enough that no one can tell, no one will care, just like any other bot. Fundamentally, LLMs are just really fancy bots, since they don't "understand what they're saying" and thus cannot ever actually check text-source integrity like a human can. IMO, and per our policies, WP:V trumps convenience. ~ Argenti Aertheri (Chat?) 18:41, 29 October 2025 (UTC) [reply ]
Yes. Our approach to LLMs should be similar to our approach to bots. You can use them, for some purposes, provided you know what you're doing. Cremastra (talk · contribs) 19:46, 29 October 2025 (UTC) [reply ]
(edit conflict) If you look only at bolded !votes then yes there is a very clear consensus. However, if you read what people are saying it's not at all clear that everyone is actually supporting the same thing or is actually engaging with what is actually proposed beyond "LLM bad!". Thryduulf (talk) 19:54, 29 October 2025 (UTC) [reply ]
@Thryduulf Yes, with reason. Are you saying the opinion "LLMs have no place on Wikipedia" is somehow illegitimate? Cremastra (talk · contribs) 19:57, 29 October 2025 (UTC) [reply ]
Yes. Because, as multiple people have to point out every time it's brought up, it is overly simplistic, incorrect and impossible to police. Thryduulf (talk) 20:00, 29 October 2025 (UTC) [reply ]
Well, "illegitimate" is the wrong word - anybody can have that opinion, but it's not an opinion that is useful to express in discussions for the reasons I mention (and others have repeatedly explained better than I apparently can). Thryduulf (talk) 20:02, 29 October 2025 (UTC) [reply ]
And that you are strongly opposed to this opinion has, I'm sure, no effect whatsoever on your judgement. Do you think that restrictions on bot accounts are "incorrect"? Cremastra (talk · contribs) 20:11, 29 October 2025 (UTC) [reply ]
No, because restrictions on bot accounts are based on rational assessments of defined problems, and directly address those problems, rather than just banning everything that looks like it might have been done by a bot, regardless of anything else. Thryduulf (talk) 20:14, 29 October 2025 (UTC) [reply ]
Some people do support banning LLM use completely. That is not what is being proposed here. What is being proposed here is a limitation on bot/LLM use based on evidence of a defined problem. You're the one having a kneejerk reaction here, much as you like to accuse the Wikipedians who are anti-LLM of being Luddites and reactionaries. Cremastra (talk · contribs) 20:16, 29 October 2025 (UTC) [reply ]
It is very reasonable to conclude that LLMs have had a net negative effect on Wikipedia and that the appropriate action is to ban their use in creating or editing articles. It seems like you are basing your vote off the question "is it possible to use LLMs constructively while editing in article space?" The answer to this is "theoretically" yes (although I cannot point to many examples of it). However, the point of Wikipedia is not to prove that certain tools can be used here effectively - the point of Wikipedia is to build an encyclopedia. We should be voting on PAGs related to LLMs based on their net contribution to this goal. And it is clear to most people here that the net impact of these tools on the project has been overwhelmingly negative. NicheSports (talk) 20:05, 29 October 2025 (UTC) [reply ]
Respectfully, every oppose except yours seems to be opposing it as insufficiently strict. Your points are certainly interesting, but this is beginning to feel very wp:dropthestick. Nobody here seems to be saying that the very short proposal is bad, just too short. So we should ship the damn thing, and then we can go back to workshopping until pigs fly. ~ Argenti Aertheri (Chat?) 20:12, 29 October 2025 (UTC) [reply ]
That's true if you ignore all the other reasons for opposition, such as it being vague and unenforceable. Thryduulf (talk) 20:15, 29 October 2025 (UTC) [reply ]
But it isn't enforcement, it's a general principle. The enforcement is CSD G15 and various other practical mechanisms. We have plenty of guideline principles that establish a general way things ought to be done, and leave the enforcement to other pages. Cremastra (talk · contribs) 20:17, 29 October 2025 (UTC) [reply ]
What is vague about "Large language models must not be used to generate new Wikipedia articles from scratch." That seems as clear a rule as you can get: this must not be used to do that. ~ Argenti Aertheri (Chat?) 20:24, 29 October 2025 (UTC) [reply ]
Well, to start with, "from scratch" is an idiom that people who aren't native English speakers are unlikely to be familiar with. People who aren't native English speakers are also more likely to use an LLM (or a tool like Grammarly) to polish up their writing, and therefore the group least likely to understand it is the group most likely to inadvertently violate it.
Once we write it in simpler language (maybe "must not be used to generate whole new Wikipedia articles from nothing"), then we have the next problem: How little of the article can be in my own words, before this ban kicks in? What if I write one sentence, and an LLM does the rest? Is that enough to stay on the good side of this rule?
And how are editors supposed to determine whether an editor used an LLM for some or all of a new article? I've put some of the articles I've written through free AI detectors, and they've told me that things I wrote years ago are from ChatGPT—which I've never used. Let's say that the dubious website says 90% AI. Or let's say that you see my correct use of the em dash, or a fondness for bullet lists, or some other signal that you believe is evidence of AI use. You tag the article. I deny it. Now what? How to enact this proposed guideline is unclear, too.
NB that I support this guideline in principle. But I'm not under any illusions that it's actually as clear as we need it to be. WhatamIdoing (talk) 02:46, 30 October 2025 (UTC) [reply ]
<s>I think it's reasonable to expect contributors to enwiki to be fluent enough to understand an idiom such as "from scratch". If a contributor is struggling with idioms such as that, then perhaps simplewiki or their native language wiki would be a better fit. –Novem Linguae (talk) 17:02, 30 October 2025 (UTC)</s> Striking. We've got fluent speakers saying they don't understand this phrase, so looks like I picked a bad phrase to make this point about. –Novem Linguae (talk) 22:20, 30 October 2025 (UTC) [reply ]
This is the English Wikipedia. Our priority for readers and editors should be fluent English speakers. My grasp of French is not good enough to understand all of fr.wiki's guidelines at first pass, but that's not their fault, nor should they need to dumb down the language so that I can understand it more easily. Cremastra (talk · contribs) 18:01, 30 October 2025 (UTC) [reply ]
Ah, yes, because understanding idiomatic speech is only related to English language fluency. [4] [5] [6] [7]
Just speaking personally, no, I have absolutely no idea what "from scratch" really means in this context. I can guess, but in my brain it's nothing more than a little mass of cartoon-like pencil-marking tumbleweed. I can make out some key features, but there's no innate understanding. Just another one of those hidden electric fences that everybody but me seems to know how to avoid, and looks at me like I'm an idiot when I mess it up. Should I be confined to the Simple English Wikipedia, @Novem? GreenLipstickLesbian 💌 🦋 18:27, 30 October 2025 (UTC) [reply ]
When I don't know what a word or phrase means or find the usage confusing, I look it up in the dictionary. I find our sister project Wiktionary is excellent: wikt:from scratch. Einsof (talk) 18:30, 30 October 2025 (UTC) [reply ]
Nope. You need not restrict yourself. You are clearly fluent enough to edit here. If the phrase is more regional than I thought, which if you've never heard it then maybe it is, then I am fine with it being copy edited to something clearer. –Novem Linguae (talk) 22:04, 30 October 2025 (UTC) [reply ]
It can be enforced: there are ANI threads that place sanctions based on AI, sanctions placed outside ANI, and speedy deletion based on AI. GothicGolem29 (talk) 22:26, 29 October 2025 (UTC) [reply ]
  • I understand the frustration of having to deal with AI slop, as I have encountered it myself. However, I'm not sure what the point of having this policy is. How would it be enforced? Alaexis ¿question? 23:05, 14 November 2025 (UTC) [reply ]
    It would be enforced by speedy deletion under G15. SuperPianoMan9167 (talk) 23:06, 14 November 2025 (UTC) [reply ]
    I'm probably missing something, but if this criterion already exists, what's the point of this guideline? How would it change our lives? Alaexis ¿question? 23:14, 14 November 2025 (UTC) [reply ]
    By our current enforcement mechanisms. This is a statement of basic principle, not a statement of enforcement. Cremastra (talk · contribs) 23:07, 14 November 2025 (UTC) [reply ]
    @Cremastra, what impact do you expect this policy to have on your day-to-day wikilife? I agree with you that an article generated by a current-generation LLM will almost never satisfy our policies. I'm just not sure how this guideline would help.
    Wouldn't it be better to come up with practical measures that would dissuade editors from submitting LLM-generated slop? Alaexis ¿question? 13:04, 15 November 2025 (UTC) [reply ]
    @Alaexis We already have practical measures. What we don't have is a statement of broad principle that justifies those measures and is something simple that we can point a newbie to. Did you read the FAQ? Cremastra (talk · contribs) 14:44, 15 November 2025 (UTC) [reply ]
    I'm not sure we need a justification, as LLM-generated content would almost always fail WP:V.
    You haven't answered my question about the impact of this guideline. I've read the FAQ but I didn't find it there either. Alaexis ¿question? 15:22, 15 November 2025 (UTC) [reply ]
    I think the problem is that even though current guidelines preclude virtually all LLM use, this is not explicitly stated anywhere (WP:LLM is just an essay) and users have a hard time understanding why things like WP:V and WP:FREECOPY apply to their LLM use. The pattern I currently see happening is:
    1. A new user joins and immediately starts submitting large quantities of LLM-generated content that fails WP:V and other policies.
    2. Eventually, someone notices, and raises objections on a talk page. The user responds to objections via LLM-generated text walls.
    3. The LLM-generated textwall gets hatted under WP:HATGPT and the user is advised that they should stop using LLMs if they are unable to ensure that the LLM output complies with policy.
    4. Here we fork: either the user denies LLM use, or they insist that their LLMs are compliant with policy: "I specifically include instructions to follow wiki policy in the prompt!"
    5. Either way, the user gets dragged to ANI and a long and contentious discussion occurs. If the user denies LLM use they get indeffed for lying under WP:NOTHERE as soon as someone digs up clear evidence, or if they try to argue that their LLM use is actually acceptable they get indeffed under WP:CIR after a long and repetitive policy debate.
    6. Editors like myself comb through their edit history checking their contributions. Usually the outcome of hours of work is essentially a damnatio memoriae on the user's contributions, but we still feel pressured to waste our time checking every single one lest we be accused of having an irrational anti-AI axe to grind.
    With more robust AI policy (of which this is only a start), the pattern I hope to see is:
    1. A new user joins and immediately starts submitting large quantities of LLM-generated content that fails WP:V and other policies.
    2. Eventually, someone notices, reverts, and informs the user that LLM-generated content is deemed unacceptable by the Wikipedia community due to a long and poor track record.
    3. Here we fork: either the user says "Oh, ok, sorry about that" and proceeds on the normal new contributor path, or they say "I refuse to stop using LLMs" and gets blocked for WP:NOTHERE.
    4. Either way, the behavior is hopefully averted before a contentious ANI thread spoils chances of reconciliation for that user, and the user is prevented from generating hours of additional cleanup work while the intervention process is going on.
    I think if we can get from the first pattern to the second pattern, that will be a huge improvement to the wiki. The corresponding loss will be that the vanishingly few editors who are inserting LLM-generated content that actually improves the wiki might get asked to stop using LLMs, which would decrease the rate at which they are able to contribute. Since I currently see low-quality LLM content overwhelming our content curation mechanisms and I currently see few to no examples of high quality LLM content being contributed, I consider this an acceptable tradeoff. -- LWG talk 17:54, 15 November 2025 (UTC) [reply ]
    100%, I agree with every word of this NicheSports (talk) 18:38, 15 November 2025 (UTC) [reply ]
    And if experienced contributors wish to, they can still submit high-quality LLM content; they just have to ignore the guideline. SuperPianoMan9167 (talk) 19:28, 15 November 2025 (UTC) [reply ]
    Going to reiterate here that I think a whitelist is a good idea, although I'm not sure who'd be on it besides Esculenta. Gnomingstuff (talk) 15:18, 17 November 2025 (UTC) [reply ]
    I am aware of one other experienced editor who is thoughtfully experimenting with LLMs, with full disclosure and collaboration, but not the same results. A big reason for Esculenta's (relative) success (although some editors such as Mt Botany have raised concerns) is the subject area they are writing in. It is difficult even for an LLM to editorialize about lichen. The other editor I am aware of is using them to create stubs of deceased people, and they are having a hard time controlling hallucination and synth, especially relating to subjects not from Western English-speaking countries (current LLMs seem to hallucinate at higher rates the further the subject is from the Anglo world). NicheSports (talk) 15:42, 17 November 2025 (UTC) [reply ]
    Assuming we end up having nuance eventually, we should also whitelist certain non-content generating uses (e.g. "put this data in this template following these rules"). I'd rather have to go back to doing my menial tasks by hand than keep the status quo though. The risks from GPT generated text, especially regarding WP:COPYVIO, are just too high. ~ Argenti Aertheri (Chat?) 18:06, 17 November 2025 (UTC) [reply ]
    Well, that is one pattern, and a relatively infrequent one. The pattern I see happening more often is this:
    1. A new user joins and immediately starts submitting large quantities of LLM-generated content and/or LLM-generated revisions that fail WP:V and other policies.
    2. No one notices.
    3. Here we fork: either no one notices indefinitely, or someone points it out.
    4. No one believes them or cares.
    5. The text stands, issues and all, because it is now the status quo.
    If we have an AI policy, or at least AI guideline, that at least will help -- maybe? optimistically? -- with the "no one cares" outcome. It may also encourage more than ~10 people to go looking for the large amount of AI text we now have. Gnomingstuff (talk) 15:17, 17 November 2025 (UTC) [reply ]
    I agree that a statement of principle on its own is insufficient. Whether or not there is one, processes to handle editors who don't pay any attention to guidance are still needed. However if it is true that there is a consensus against using program-generated text that has content that goes beyond any human input used to trigger its creation, then it's simpler to write guidance that starts with this as a baseline, rather than writing a lot of guidance that explains why program-generated text might not comply with other guidance. From a different perspective than day-to-day operations, such a statement of principle would also serve as a distinguishing aspect of English Wikipedia versus other encyclopedia-like sources. isaacl (talk) 19:40, 15 November 2025 (UTC) [reply ]

Notes

[edit ]
  1. ^ This is loophole one: I was using ChatGPT for a useful purpose! Cue 200 column inches of debate on what constitutes a useful purpose in this case.
  2. ^ This is loophole two: Strictly speaking, I added ChatGPT content after the article was created.
  3. ^ No mention of other possible activities: I used the LLM to suggest categories to add to this article / add captions to images / add formatting / add cleanup templates / come up with plausible-sounding AfD requests
  4. ^ By whom? I didn't have another LLM review it, I checked it by hand!
  5. ^ But this time I beat the odds!
  6. ^ Can you prove that my manually checked LLM content violates these policies? The onus of proof is on you!
  7. ^ [which? ]
  8. ^ Why? Other non-LLM AIs have been doing this job dutifully and silently for a long time. Maybe direct people to those instead.
  9. ^ What if it's a section made up of two long run-on sentences?
  10. ^ Dear living lord Lucifer of the seven fires, No. LLMs are particularly bad at this exact thing, especially the ones with a cut-off date for the training data like ChatGPT. If anything, an essay should be written discouraging this exact thing, maybe WP:LLMSEARCH or something. This whole part should be struck.
  11. ^ I did that! Can you prove that I didn't?
