Wikipedia:Village pump (policy)
 
|archiveheader = {{Wikipedia:Village pump/Archive header}}
|maxarchivesize = 400K
|counter = 198
|algo = old(10d)
|archive = Wikipedia:Village pump (policy)/Archive %(counter)d
}}</noinclude>


== Date redirects to portals? ==
{{discussion top}}
{{consensus|By my read of this conversation, there is a consensus that portals are in fact part of the encyclopedia. I don't see a strong consensus on the matter of redirects to them; valid arguments were made all around, so we'll have to just keep muddling through on that point. A narrower discussion specifically about individual dates redirecting to current events portals could possibly yield a different result if there is any appetite for doing that in the future. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 03:52, 1 December 2024 (UTC)}}
{{-r|16 August 2006}} points to the current events portal as a result of [[Wikipedia:Redirects_for_discussion/Log/2024_October_4#16_August_2006|this discussion]]. However, date redirects will continue to come up at RfD, so some wider community discussion and input is helpful on '''whether or not the current events portal is an appropriate target for mainspace redirects'''. See also: [[Wikipedia:Redirects_for_discussion/Log/2024_October_14#26,_November,_2006|this ongoing discussion]] for some context.


Related questions to consider: are portals "part of the encyclopedia"? Thanks, [[User talk:Cremastra|<span style="font-family:'Futura';">''Cremastra''</span>]] ([[User:Cremastra|u]] — [[Special:Contribs/Cremastra|c]]) 00:55, 30 October 2024 (UTC)
*The second question is easy: Yes, portals are part of the encyclopaedia. As to the first question, portals are reader-facing content and so I see no reason why they wouldn't be appropriate targets for mainspace redirects, given that we uncontroversially target mainspace redirects to reader-facing templates and categories when they are the best target. Whether the portal is the best target for a given date will depend on the specific date, but in general the portal should always be an option to consider. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 01:32, 30 October 2024 (UTC)
*:I agree with this. The portal is definitely not always the best option and it has its limitations, but, as I wrote at [[WP:RDATE]] it should be considered and assessed along with mainspace articles. [[User talk:Cremastra|<span style="font-family:'Futura';">''Cremastra''</span>]] ([[User:Cremastra|u]] — [[Special:Contribs/Cremastra|c]]) 01:44, 30 October 2024 (UTC)
:<small>Pinging: {{u|Utopes}}, who I've discussed this with.</small>
:{{block indent|em=1.6|1=<small>Notified: [[WT:RFD]], [[WT:PORT]], [[WT:CURRENTEVENTS]], [[WT:WPRED]]. [[User talk:Cremastra|<span style="font-family:'Futura';">''Cremastra''</span>]] ([[User:Cremastra|u]] — [[Special:Contribs/Cremastra|c]]) 01:43, 30 October 2024 (UTC)</small>}}<!-- Template:Notified -->
:If a namespace doesn't have the same standards as mainspace, then the reader shouldn't be redirected there while possibly not realizing they are now outside of mainspace. Yes, there is more content at [[Portal:Current events/August 2006]] than at [[2006#August]], but the reader is now facing a decades-old page with no quality control, where links to Breitbart are misleadingly labeled as (AP). [[User:Chaotic Enby|<span style="color:#8a7500">Chaotic <span style="color:#9e5cb1">Enby</span></span>]] ([[User talk:Chaotic Enby|talk]] · [[Special:Contributions/Chaotic Enby|contribs]]) 00:50, 6 November 2024 (UTC)
::Portal does have the same standards as mainspace. That a portal is not up to those standards is no different to an article being in bad shape - fix it. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:54, 6 November 2024 (UTC)
:::So I can use the speedy A-criteria for portal pages? [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 17:40, 7 November 2024 (UTC)
::::No, because they are not articles. Two things can be held to the same standard without being the same thing. Criterion P1 previously allowed that (indirectly) but it was [[Wikipedia talk:Criteria for speedy deletion/Archive 85#RfC: Should P1 and P2 be repealed as CSDs?|repealed in 2023]] due to lack of use. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 19:42, 7 November 2024 (UTC)
:::::Then they aren't held to the same standards... More in general, no, they ''obviously'' aren't held to the same standards, e.g. a portal page doesn't have to be a notable topic but may be purely decorative or (as is the case with the date pages) be a list of mainly non-notable things, failing [[WP:NOTNEWS]] and [[WP:LISTN]]. That ''some'' standards are the same (BLP, copyvio, ...) can also be said for e.g. user talk pages, and we don't redirect to these pages either. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 20:24, 7 November 2024 (UTC)
::::::We don't redirect to user talk pages because they aren't reader-facing, so that's irrelevant. We don't hold reader-facing templates and categories to article content policies (because they aren't articles) but we do redirect to them. Don't conflate quality standards with inclusion policies, they are not the same thing. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:15, 7 November 2024 (UTC)
::::::::I wasn't aware that the standards we were talking about were solely quality standards, whatever these may be, and not content standards, sourcing standards, ... I'm sadly not amazed that you consider these irrelevant when deciding what to present to our readers. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 21:37, 7 November 2024 (UTC)
::::In theory, I think portals should be held to the same CSD criteria as articles. But of course the A criteria actually only apply to articles. [[User talk:Cremastra|''Cremastra'']] ([[User:Cremastra|u]] — [[Special:Contribs/Cremastra|c]]) 22:08, 7 November 2024 (UTC)
* There's a lot of random junk in portalspace, but '''yes''', it is part of the encyclopedia. Just like categories and templates, portals are reader-facing content. <span style=white-space:nowrap;>[[User:CFA|<span style="background-color:#e0f7fa;padding:2px 5px;border-radius:5px;font-family:Arial black"><span style=color:#00c>C</span> <span style=color:red>F</span> <span style=color:#5ac18e>A</span></span>]] <span style="background-color:#e6e6fa;padding:2px 5px;border-radius:5px;font-family:Arial black">[[User talk:CFA|💬]]</span></span>
* I didn't really have super strong opinions on portals until seeing this one link to Breitbart, twice, in a misleading way. This is not okay. I agree with Fram that clearly Portals are not being held up to the same standards as regular articles and it might be a bad idea to redirect readers to them. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 23:00, 7 November 2024 (UTC)
* I saw this on CENT, and I am confused by the question. [[Portal:Current events/2006 August 16]] is very different from something like [[Portal:Belgium]], and it doesn't make sense to pretend they are the same to establish policy. And what does "part of the encyclopedia" even mean? "Interpreting a confusing phrase" is a terrible way to decide redirect targets. {{br}} For the specific question of "Should dates redirect to the Current Events portal rather than to a page like [[August 2006]]" ... I don't know. I don't see a compelling reason why they can't, nor a compelling reason why they should. [[User:Walsh90210|Walsh90210]] ([[User talk:Walsh90210|talk]]) 15:45, 8 November 2024 (UTC)
*:Hey, that's a nice Portal! Thank you for restoring my faith in portals. Clicking on "Random Portal" took me to [[Portal:Trees]], which is also pretty nice. My opinion is now that yes, portals can be good, but it seems to me that we currently have no Ps and Gs to apply to their content or measure their quality, no consensus about how to direct readers to them, and a very checkered and controversial history of deletion. I really dunno what to do about them. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 16:49, 8 November 2024 (UTC)
*::Of course that's a nice portal, look who created it :-D [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 17:51, 8 November 2024 (UTC)
*'''No''', we should not redirect dates to the current events portal subpages. It's a [[WP:R#DELETE|cross-namespace redirect]] that takes readers from somewhere they expect to be (an encyclopedia article on the topic "16 August 2006") to somewhere they don't expect to be (a navigational aid(?) that highlights some things that happened that day). I'm not 100% sure what the current events portal subpages are for, but they're not meant to stand in as pseudo-articles in places we lack real articles. [[User:Ajpolino|Ajpolino]] ([[User talk:Ajpolino|talk]]) 22:04, 8 November 2024 (UTC)
*:Cross-namespace redirects in and of themselves are not a problem. They only cause issues when they take someone expecting reader-facing content to "backroom" content (e.g. project space). Both articles and portals are reader-facing content, so this is not an issue. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:17, 8 November 2024 (UTC)
*::Is there another case where we link a reader from an article to a non-article without clearly denoting it? E.g. I have no problem with the {{tl|Portal}} template folks often use in the See also section. [[User:Ajpolino|Ajpolino]] ([[User talk:Ajpolino|talk]]) 01:12, 9 November 2024 (UTC)
*:::There are lots of redirects to templates and categories. Many navigation templates link to other navigation templates. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 08:12, 9 November 2024 (UTC)
*::::Any examples of these lots of mainspace pages which are redirects to templates? [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 08:42, 9 November 2024 (UTC)
*:::::[[List of elections in Texas]], [[List of Kentucky county seats]], [[Cite web]]. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:13, 10 November 2024 (UTC)
*::::::Thanks. Okay, Citeweb is a bad example, not something readers look for but something editors look for. The other 2 are among the 6 existing reader facing redirects to templates (from [[:Category:Redirects to template namespace]], the only ones which are from mainspace and not editor-related like the cite templates). Not quite the "lots" you seemed to be suggesting throughout this discussion, but extremely rare outliers which should probably all be RfD'ed. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 11:42, 10 November 2024 (UTC)
*::::::Now only 2 remain; I converted the other 4 into articles or other redirects. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 11:52, 10 November 2024 (UTC)
*'''Yes''', the current events portals are valid redirect targets for dates, and are preferable in cases like this one, where the best article target for a specific date is the month section of an article covering an entire year. I agree with Fram that portals are not held to the same standards as articles, but I disagree with Ajpolino's stance that a cross-namespace redirect is so disruptive that it is prohibited in all cases, given that [[WP:Portal]] says "portals are meant primarily for readers." [[User:ViridianPenguin|<span style="color:#40826D">ViridianPenguin&nbsp;🐧</span>]]&nbsp;(&nbsp;[[User talk:ViridianPenguin|💬]]&nbsp;) 23:46, 8 November 2024 (UTC)
*Commenting strictly on the "are portals part of the encyclopedia" question: '''yes''', they are. Unfortunately there was one extremely loud, disruptive voice who kept making portals less useful and suffocating any discussions that ''would'' make them more beneficial to readers. Plenty of willing portal contributors, including myself, left this space, and readers are still reaping what that disruptive user sowed, even though they were ArbCom-banned over a year ago. So it may give some people the illusion that portals aren't doing much towards the encyclopedic goal, because the current status is handicapped by its history. I'm reserving my views on the redirect part of the discussion. [[User:OhanaUnited|<b style="color: #0000FF;">OhanaUnited</b>]][[User talk:OhanaUnited|<b style="color: green;"><sup>Talk page</sup></b>]] 07:29, 9 November 2024 (UTC)
*'''No''', portals are not held to the standards of articles, and if something for whatever reason shouldn't be or can't be an enwiki article, this shouldn't be circumvented by having it in portalspace. Either these date pages are acceptable, in which case they should be in mainspace, or they are not what we want as articles, in which case we shouldn't present them to our readers anyway. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 11:42, 10 November 2024 (UTC)
*:These current events pages differ from articles in many respects, but the referencing standards are similar. Whether they happen to be prefixed by "Portal:" or not is not reflective of their quality. '''[[User:J947|<span style="color: #1009bf;">J</span>]][[User talk:J947|<span style="color: #137412;">947</span>]]''' ‡ <sup>[[Special:Contribs/J947|edits]]</sup> 23:18, 11 November 2024 (UTC)
*'''Yes''', because the purpose of [[Portal:Current events/2022 August 21]] is to provide encyclopaedic information on [[21 August 2022]] and this purpose has been by-and-large successful. '''[[User:J947|<span style="color: #1009bf;">J</span>]][[User talk:J947|<span style="color: #137412;">947</span>]]''' ‡ <sup>[[Special:Contribs/J947|edits]]</sup> 23:18, 11 November 2024 (UTC)
*The current events portal example listed seems encyclopedic enough, in that apart from some formatting differences it might as well be a list article, but I've seen other portals that have editor-facing content that is more dubiously appropriate for mainspace. Consider, for example, {{section link|Portal:Schools#Wikiprojects}} (capitalization {{sic}}) and {{section link|Portal:Schools#Things you can do}}, and the similar modules at many other portals. <span style="border:3px outset;border-radius:8pt 0;padding:1px 5px;background:linear-gradient(6rad,#86c,#2b9)">[[User:Sdkb|<span style="color:#FFF;text-decoration:inherit;font:1em Lucida Sans">Sdkb</span>]]</span> <sup>[[User talk:Sdkb|'''talk''']]</sup> 18:27, 13 November 2024 (UTC)
*'''Yes''' per J947, especially given that the current event portals function like an encyclopedic list for the given date. --[[User:Tavix| <span style="color:#000080; font-family:georgia">'''T'''avix</span>]] <sup>([[User talk:Tavix|<span style="color:#000080; font-family:georgia">talk</span>]])</sup> 16:46, 14 November 2024 (UTC)
*'''Yes''', speaking as a recognized ''portalista'', portals have not yet been excised from the pedia. In this case, User:J947 makes the essential point. I'm not convinced that even incomplete, out-of-date portals are any less encyclopedic than the 2 million or so Wikipedia articles nobody bothered to edit last year. [[User:BusterD|BusterD]] ([[User talk:BusterD|talk]]) 14:53, 19 November 2024 (UTC)
*Portals are not part of the encyclopedia as we understand encyclopedias: sources of information. They serve as navigation within an encyclopedia. We would not see a Portal as the final delivery of information, any more than we would see a contents page, index, blurb, or advert as the final information page. These are all ancillary. People mostly land on a Wikipedia article page without a Portal. I have used Wikipedia for nearly twenty years without ever needing a Portal to direct me to where I want to go, and I would assume this is true for the majority of people. Redirects are designed as a signpost, and we frown upon a signpost simply pointing to another signpost. People would generally only arrive at a Portal if directed there from a link that should more helpfully point to the appropriate article. The Belgium Portal is mentioned above as a good Portal. If we go to the Belgium article and scroll down, there is a link to the Belgium Portal. But the Portal mainly provides us with a digest of the Belgium article, including a link back to the Belgium article, which itself contains more links to Belgium-related articles than the Belgium Portal. Huh? Seriously? Why are we taking readers away from a sublime source, rich with information and links, to an inferior source? There is nothing on the Belgium Portal that is not available on the Belgium article page - including links to news. But there is much on the Belgium article page that is not on the Belgium Portal page. My suggestion is that ALL links to portals such as the Belgium Portal should instead go to the main article page. Why are we redirecting people to a navigation page when we can send them to the main article on the topic? Portals are a waste of our time and resources, and are a misdirect for readers. [[User:SilkTork|SilkTork]] ([[User talk:SilkTork|talk]]) 22:33, 23 November 2024 (UTC)
*:@[[User:SilkTork|SilkTork]] Are you also specifically opposed to redirecting to the current events portal, which is more "encyclopedic" than "navigational"? [[User talk:Cremastra|''Cremastra'']] ‹ [[User:Cremastra|u]] — [[Special:Contribs/Cremastra|c]] › 22:44, 23 November 2024 (UTC)
*::I'm not exactly comfortable with [[2006#August]] as a target as that itself is a signpost, but I see little value in us having two such signposts - that simply duplicates and confuses things. Either we have [[2006#August]] or we have [[Portal:Current events/2006 August 16]], and I'd much prefer we simply get rid of Portals, so I would obviously opt for [[2006#August]]. [[User:SilkTork|SilkTork]] ([[User talk:SilkTork|talk]]) 23:02, 23 November 2024 (UTC)
*:::The CE portal has more information for the reader, so I prefer it (see my arguments at [[WP:RDATE]].) [[User talk:Cremastra|''Cremastra'']] ‹ [[User:Cremastra|u]] — [[Special:Contribs/Cremastra|c]] › 23:12, 23 November 2024 (UTC)
*:@[[User:SilkTork|SilkTork]] Your argument breaks down as soon as you realise that disambiguation pages and set indexes exist and that redirects to those pages are extremely common and uncontroversial. We also redirect people to outlines, broad concept articles and overviews. What is the "main article page" for a date? In all but a few exceptional cases there isn't a single article but multiple, and so just as if they had searched [[Mercury]], [[Bitter ash]] or [[Stuffed flatbread]] we present them with a menu of content that is relevant to their search term and let them choose what it is they want to read about. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:46, 23 November 2024 (UTC)
*::See my answer above. I don't see the point in duplicating signposts. We have [[Belgium]], so we don't need [[Portal:Belgium]]; and we have [[2006#August]] so we don't need [[Portal:Current events/2006 August 16]]. Signposts are not part of the encyclopedia, but they are navigational aids which lead us to further information. However, we have built into every article multiple signposts to further information. We don't need to have duplicate signposts outside of mainspace to which people are directed away from mainspace to consult. It is a waste of our time and resources, and a misdirection for readers. Internal links are an elegant way of signposting to further information. Navigational templates are a little clunky, but are useful. Portals take readers away from the encyclopedia, and are a pointless timesink for both editors and readers. [[User:SilkTork|SilkTork]] ([[User talk:SilkTork|talk]]) 23:02, 23 November 2024 (UTC)
*:::Portals are just as much part of the encyclopaedia as set indexes and navigational templates. [[Portal:Belgium]] and [[Belgium]] fulfil very different roles in the encyclopaedia, neither is a duplicate of the other. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:10, 23 November 2024 (UTC)
{{discussion bottom}}

== LLM/chatbot comments in discussions ==
<!-- [[User:DoNotArchiveUntil]] 01:01, 6 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1736125272}}
{{rfc|prop|rfcid=F42F851}}
Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)


I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, [https://gptzero.me/ gptzero] and other such tools are very good at detecting this.

I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 00:12, 2 December 2024 (UTC)


:'''Bonus suggestion!''': I'm curious what Wikipedians think about this, so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI-generated (e.g. GPTZero isn't perfect), and many give valid arguments for using LLMs (e.g. [[English as a Second Language|ESL]]). If an argument is suspected to be written by an LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to [[WP:AN|the administrators' noticeboard]] for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol [[User:guninvalid|guninvalid]] ([[User_Talk:guninvalid|talk]]) 06:12, 25 December 2024 (UTC)

===opening comments===
*Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. [[User:ToBeFree|&#126; ToBeFree]] ([[User talk:ToBeFree|talk]]) 00:32, 2 December 2024 (UTC)
*Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect then it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:55, 2 December 2024 (UTC)
*:In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 04:08, 2 December 2024 (UTC)
*:The more I read the comments from other editors on this, the more I'm a convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead '''very strongly oppose'''. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 02:45, 3 December 2024 (UTC)
*I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 00:59, 2 December 2024 (UTC)
* I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 01:06, 2 December 2024 (UTC)
*'''Support''' discounting or collapsing AI-generated comments, under ''slightly looser'' conditions than those for human comments. Not every apparently-AI-generated comment is useless [[AI hallucinations|hallucinated]] nonsense{{snd}}beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount ''clearly'' insubstantial !votes, and collapse ''clearly'' unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 02:11, 2 December 2024 (UTC)
*'''Strongly Support''' - I think some level of human judgement on the merits of the argument is necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc., we should consider striking out such content.{{pb}}- sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. [[User:Bluethricecreamman|Bluethricecreamman]] ([[User talk:Bluethricecreamman|talk]]) 02:20, 2 December 2024 (UTC)
*On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 02:54, 2 December 2024 (UTC)
*:"generated by a program" is too broad, as that would include things like speech-to-text. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 03:08, 2 December 2024 (UTC)
*:Besides what Thryduulf said, I think we should engage with editors who use translators. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 03:45, 2 December 2024 (UTC)
*::A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 03:57, 2 December 2024 (UTC)
*:::Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning ([https://blog.google/products/translate/new-features-make-translate-more-accessible-for-its-1-billion-users/ example]) but they [https://blog.google/products/translate/google-translate-new-languages-2024/ absolutely use LLMs] nowadays, even for the basic free product. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 08:39, 2 December 2024 (UTC)
*'''Support'''. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 03:07, 2 December 2024 (UTC)
*'''Support''' - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. [[User:Clayoquot|Clayoquot]] ([[User_talk:Clayoquot|talk]] <nowiki>&#124;</nowiki> [[Special:Contributions/Clayoquot|contribs]]) 04:37, 2 December 2024 (UTC)
*'''Strong Support''' - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? [[User:Lazman321|Lazman321]] ([[User talk:Lazman321|talk]]) 05:26, 2 December 2024 (UTC)
*'''Comment''' - I'm skeptical that a rule like this will be enforceable for much longer. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 05:39, 2 December 2024 (UTC)
*:Why? [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 12:22, 2 December 2024 (UTC)
*::Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 13:52, 2 December 2024 (UTC)
*::...and now that the systems are autonomously injecting their output back into the model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 14:14, 2 December 2024 (UTC)
*:::A [[transformer (deep learning architecture)]] is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. [[User:Flounder fillet|Flounder fillet]] ([[User talk:Flounder fillet|talk]]) 17:20, 2 December 2024 (UTC)
*::::We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 03:44, 3 December 2024 (UTC)
*::::You might be interested in Anthropic's '[https://www.anthropic.com/research/mapping-mind-language-model Mapping the Mind of a Large Language Model]' and Chris Olah's work in general. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 04:02, 3 December 2024 (UTC)
*'''Support''' and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. [[User:Cullen328|Cullen328]] ([[User talk:Cullen328|talk]]) 05:51, 2 December 2024 (UTC)
*:...assuming we can see the bridge and haven't already crossed it. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 06:24, 2 December 2024 (UTC)
*'''Support''' - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - [[User:Ratnahastin|<span style="color:#A52A2A;">Ratnahastin</span>]] ([[User talk:Ratnahastin|talk]]) 06:04, 2 December 2024 (UTC)
*'''Support''', I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects ''my'' thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. [[User:Seraphimblade|Seraphimblade]] <small><sup>[[User talk:Seraphimblade|Talk to me]]</sup></small> 06:59, 2 December 2024 (UTC)
*'''Strong oppose'''. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center){{pb
}}The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in [[WP:BITING]] people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below).{{pb
}}There are also technological issues at play. It'd be easy if there was a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now.{{pb
}}And there are ''also'', as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English.{{pb
}}I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 08:31, 2 December 2024 (UTC)
*:Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? [[User:Seraphimblade|Seraphimblade]] <small><sup>[[User talk:Seraphimblade|Talk to me]]</sup></small> 20:11, 2 December 2024 (UTC)
*::Yeah, Gnoming's writing style is far more direct and active than GPT's. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 23:02, 2 December 2024 (UTC)
*:::There weren't
*:::*'''Multiple'''
*:::*:LLMs tend to use more than one subheading to reiterate points
*:::*'''Subheadings'''
*:::*:Because they write like a middle schooler that just learned how to make an essay outline before writing.
*:::In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 13:56, 3 December 2024 (UTC)
*'''Support''' - Ai-generated comments are [[WP:DISRUPTIVE]] - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. [[User:Flounder fillet|Flounder fillet]] ([[User talk:Flounder fillet|talk]]) 13:14, 2 December 2024 (UTC)
*<li style="list-style:none;">{{block indent|em=1.6|1=<small>Notified: [[Wikipedia talk:WikiProject AI Cleanup]]. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 07:13, 2 December 2024 (UTC)</small>}}<!-- Template:Notified --></li>
*'''Yes''' but why do we need this common sense RFC/policy/whatever? Just ban these people. If they even exist. &#32;<span style="font-variant:small-caps; whitespace:nowrap;">[[User:Headbomb|Headbomb]] {[[User talk:Headbomb|t]] · [[Special:Contributions/Headbomb|c]] · [[WP:PHYS|p]] · [[WP:WBOOKS|b]]}</span> 07:14, 2 December 2024 (UTC)
*:They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 20:04, 2 December 2024 (UTC)
*{{anchor|ChaptGPT_agrees}}'''Yes, of course''', and I know that's the right answer because ChatGPT agrees with me.
{{collapse top|width=95%|title=What ChatGPT thinks|expand=yes}}
:I started by asking:
::{{tq|Currently at the English Wikipedia, there's a discussion of the following proposal: "Should or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots?" I wonder if you have an opinion on that question.}}
:My friend the bot initially proposed a requirement that any AI-generated content be labeled as such by the person posting it. After some further discussion between us, I finally said:
::{{tq|You say that labeling AI-generated content, instead of outright discounting it, might be an appropriate middle ground. But if someone is unable to formulate their own thoughts and put them in writing, of what use is it to the discussion for this person to simply ask you to make something up, and then post your output to the Wikipedia discussion with the proviso that "ChatGPT, who/which doesn't actually know anything about anything, said this ..."? How is it different from saying, "My grandmother, who knows nothing about what we're talking about, said ..."?}}
:And it said:
::{{tq|That’s an incisive question, and it highlights the crux of the issue: the value of contributions to a discussion. Here’s why AI-generated comments might still differ (slightly) from your hypothetical grandmother’s input, but why their usefulness is often limited:}}
::*{{tq|'''Synthesis vs. Subject Expertise:''' While I (or any AI) lack personal understanding or expertise, I can aggregate and articulate relevant ideas from a vast dataset. This means I can sometimes present arguments, summarize perspectives, or surface considerations that a non-expert human might overlook. Your grandmother, unless she's an AI-trained polymath, wouldn’t be able to provide this breadth. However, this advantage evaporates if participants lack the ability to evaluate and critically engage with my output.}}
::*{{tq|'''The Problem of Proxy Voices:''' The act of outsourcing one’s contribution to AI undermines the deliberative process central to Wikipedia’s discussions. Participants are expected to bring their own informed perspectives, shaped by personal understanding of Wikipedia’s policies and content. A disclaimer like “ChatGPT said this” doesn’t change the fact that the human posting it has abdicated their role in the discussion.}}
::*{{tq|'''Accountability and Intent:''' In a collaborative setting like Wikipedia, contributions are ideally linked to a person who can clarify their intent, respond to questions, and engage dynamically. An AI cannot do this. Saying "ChatGPT said this" distances the poster from the content, making it unclear whether they endorse or even understand the comment.}}
::*{{tq|'''Potential for Misdirection:''' Allowing AI-generated contributions risks introducing noise into the discussion. AI can produce plausible-sounding arguments that may be tangential, misaligned, or even factually incorrect. Without the judgment of a well-informed poster, such contributions can derail or dilute meaningful discourse.}}
::{{tq|'''Conclusion:''' If someone cannot formulate their own argument or understanding, then their reliance on an AI—or anyone else—to contribute doesn’t meaningfully advance the discussion. Labeling AI-generated content is better than passing it off as human, but the real issue lies in whether such contributions enrich or degrade the collaborative process. In most cases, they would seem to belong more to the realm of rhetorical curiosities than substantive engagement.}}
{{collapse bottom}}
*<li style="list-style:none;">In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 07:47, 2 December 2024 (UTC)</li>
*:This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these type of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. [[User:CommunityNotesContributor|CNC]] ([[User talk:CommunityNotesContributor|talk]]) 20:23, 8 December 2024 (UTC)
*::The proposal also includes collapsing. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 20:26, 8 December 2024 (UTC)
*:::Thanks, I completely missed that. Trying to speed read is not my forte. [[User:CommunityNotesContributor|CNC]] ([[User talk:CommunityNotesContributor|talk]]) 20:32, 8 December 2024 (UTC)
:The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 23:36, 2 December 2024 (UTC)
::That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 03:07, 3 December 2024 (UTC)
:::What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 03:28, 4 December 2024 (UTC)
::::100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 22:31, 4 December 2024 (UTC)
*<del>'''Strong Oppose''' I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our [[GPTZero#Efficacy|own page on GPTZero]] has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. [[User:ThatIPEditor|ThatIPEditor]] <sup>[[User talk:ThatIPEditor|They]] / [[Special:contribs/ThatIPEditor|Them]]</sup> 08:46, 2 December 2024 (UTC) </del>
*'''Strong Support''' To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an ai-powered-duck (Like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. [[User:ThatIPEditor|ThatIPEditor]] <sup>[[User talk:ThatIPEditor|They]] / [[Special:contribs/ThatIPEditor|Them]]</sup> 08:54, 2 December 2024 (UTC)
*'''Support''' at least for [[WP:DUCK]]-level AI-generated comments. If someone uses a LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. [[User:Chaotic Enby|<span style="color:#8a7500">Chaotic <span style="color:#9e5cb1">Enby</span></span>]] ([[User talk:Chaotic Enby|talk]] · [[Special:Contributions/Chaotic Enby|contribs]]) 09:17, 2 December 2024 (UTC)
* I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that {{green|gptzero and other such tools are very good at detecting this}} is false. [[User:Phlsph7|Phlsph7]] ([[User talk:Phlsph7|talk]]) 09:43, 2 December 2024 (UTC)
*'''Support''' Not clear how admins are deciding that something is LLM generated, [https://en.wikipedia.org/wiki/Talk:Zionism#c-ScottishFinnishRadish-20241201001900-RFC_about_a_recently_added_claim_about_Zionism a recent example], agree with the principle tho. [[User:Selfstudier|Selfstudier]] ([[User talk:Selfstudier|talk]]) 10:02, 2 December 2024 (UTC)
* '''Moral support; neutral as written'''. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. {{Xt|Any editor may hat LLM-generated comments in consensus discussions}} is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations:{{pb}}I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of {{tqq|only if we're sure}} caveats in every discussion about LLM use on the project.{{pb}}If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here.{{pb}}Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything.{{pb}}As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. [[User:Folly Mox|Folly Mox]] ([[User talk:Folly Mox|talk]]) 11:20, 2 December 2024 (UTC)
*:'''Looks like a rewrite is indicated''' to distinguish between {{xt|machine translation}} and {{!xt|LLM-generated comments}}, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut [[WP:HATGPT]]. [[User:Folly Mox|Folly Mox]] ([[User talk:Folly Mox|talk]]) 15:26, 8 December 2024 (UTC)
*'''Support''' per EEng [[User:Queen of Hearts|<span style="color: darkred;">charlotte</span>]] [[User talk:Queen of Hearts|<sup style="color: darkred;">👸♥</sup>]] 14:21, 2 December 2024 (UTC)
*I would be careful here, as there are tools that rely on LLM AI that help to improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) to something clear. I would agree content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted.<span id="Masem:1733149152126:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> —&nbsp;[[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 14:19, 2 December 2024 (UTC)</span>
*'''Support''', cheating should have no place or take its place in writing coherent comments on Wikipedia. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. [[User:Randy Kryn|Randy Kryn]] ([[User talk:Randy Kryn|talk]]) 14:29, 2 December 2024 (UTC)
*'''Support''' at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues [[User:TheAstorPastor|<span style="font-family:Segoe print; color:#8B0000; text-shadow:gray 0.2em 0.2em 0.4em;">The AP </span>]] ([[User talk:TheAstorPastor|<span style="font-family:Segoe print; color:#AA336A">''talk''</span>]]) 15:10, 2 December 2024 (UTC)
===section break 1===
* '''Support''' for LLM-'''generated''' content (until AI is actually intelligent enough to create an account and contribute on a human level, [[Data (Star Trek)|which may eventually happen]]). However, beware of the fact that some LLM-'''assisted''' content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. [[User:Animal lover 666|Animal lover]] [[User talk:Animal lover 666|&#124;666&#124;]] 16:47, 2 December 2024 (UTC)
*:For wiki content, maybe very soon. 'contribute on a human level' [https://www.futurehouse.org/research-announcements/wikicrow has already been surpassed] in a narrow domain. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 17:08, 2 December 2024 (UTC)
*::If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. [[User:Animal lover 666|Animal lover]] [[User talk:Animal lover 666|&#124;666&#124;]] 17:35, 2 December 2024 (UTC)
*:::I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 21:38, 5 December 2024 (UTC)
*:::<small>Data, being a fictional creation with rights owned by a corporation, will not have an account; he is inherently an IP editor. -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 03:22, 20 December 2024 (UTC)</small>
*'''Strong support''' chatbots have no place in our encyclopedia project. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 17:14, 2 December 2024 (UTC)
*'''Oppose''' - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of ''course'' that's not ok. But ''that is a case by case behavioral issue''. I think the better move would be to clarify that "some forms of LLM use can be considered [[WP:DE|disruptive]] and may be met with restrictions or blocks" without making it a black-and-white issue. &mdash; <samp>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></samp> \\ 17:32, 2 December 2024 (UTC)
*:I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 18:07, 2 December 2024 (UTC)
*::Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Wikipedia. Tell me what to say that will convince people not to delete it"; (b) "I know Wikipedia deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof [...]". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the ''arguments and ideas themselves'' are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. &mdash; <samp>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></samp> \\ 18:47, 2 December 2024 (UTC)
*::Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. [[User:Animal lover 666|Animal lover]] [[User talk:Animal lover 666|&#124;666&#124;]] 18:47, 2 December 2024 (UTC)
*:::Yes, as I [[Special:GoToComment/c-Isaacl-20241202035700-Aaron Liu-20241202034500|posted earlier]], the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 19:08, 2 December 2024 (UTC)
*::::Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 20:10, 2 December 2024 (UTC)
*:::::This doesn't address what I wrote (though maybe it's not meant to). {{tq|If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them}} is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas ''that are my own'' and a tool that ''comes up with the ideas''. That's the distinction we should be making. &mdash; <samp>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></samp> \\ 21:23, 2 December 2024 (UTC)
*:::::I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 22:58, 2 December 2024 (UTC)
*:Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". [[User:Skullers|Skullers]] ([[User talk:Skullers|talk]]) 08:34, 4 December 2024 (UTC)
*'''Oppose''' Contributions to discussions are supposed to be evaluated on their merits per [[WP:NOTAVOTE]]. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. [[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 18:17, 2 December 2024 (UTC)
*:But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. [[User:Lazman321|Lazman321]] ([[User talk:Lazman321|talk]]) 18:29, 2 December 2024 (UTC)
*::Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 20:12, 2 December 2024 (UTC)
*:::Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 23:04, 2 December 2024 (UTC)
*::::Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is ''[[ad hominem]]'' and that ranks low in the [[Paul_Graham_(programmer)#Graham's_hierarchy_of_disagreement|hierarchy]] – "{{tq|attacks the characteristics or authority of the writer without addressing the substance of the argument}}". [[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 23:36, 2 December 2024 (UTC)
*::::::An AI chatbot isn't an "author", and it's impossible to make an ''ad hominem'' attack on one, because a chatbot is not a ''homo''. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 17:45, 6 December 2024 (UTC)
*:::::::{{small|Well, not all of them, anyway. [[Queer Eye (2003 TV series)|"Queer spot for the straight bot"]], maybe?}} [[User:Martinevans123|Martinevans123]] ([[User talk:Martinevans123|talk]]) 17:51, 6 December 2024 (UTC)
*:::::On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale for stopping someone from participating. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 23:50, 2 December 2024 (UTC)
*:::The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example,{{tqb|Customers say<br />Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit.<br />AI-generated from the text of customer reviews}}Yes, AI assistants have good potential. My !vote stands. [[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 23:23, 2 December 2024 (UTC)
*::::Let's not get into tangents here. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 23:51, 2 December 2024 (UTC)
*:::::It's better than going around in circles. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 03:07, 3 December 2024 (UTC)


6.) Is an ensemble that contains two or more independently notable musicians, or is a musician who has been a reasonably prominent member of two or more independently notable ensembles. This should be adapted appropriately for musical genre; for example, having performed two lead roles at major opera houses. Note that this criterion needs to be interpreted with caution, as there have been instances where this criterion was cited in a circular manner to create a self-fulfilling notability loop (e.g., musicians who were "notable" only for having been in two bands, of which one or both were "notable" only because those musicians had been in them.)
These appear to have been put together by a very small number of editors over a decade ago and haven't seen much change since then, and I feel they're much more lenient than just about anything else. This SNG defines as "important" a "label" that has been around for over "a few years" and has a roster of performers. So, any group of people who have released two albums through ANY verifiable label that has existed for more than a few years can end up being kept, and this isn't exactly in line with GNG. I believe a discussion needs to be held in order to bring it up to current GNG expectations.
[[User:Graywalls|Graywalls]] ([[User talk:Graywalls|talk]]) 06:17, 30 October 2024 (UTC)
:Especially given how broadly the various criteria have been "interpreted" in deletion discussions, the best way to go about it is just to deprecate the whole thing. Rely on the GNG for band notability, and if that results in a heap of articles on ephemeral outfits, garage bands and local acts vanishing, huzzah. [[User talk:Ravenswing|'''<span style="background:#2B22AA;color:#E285FF"> '' Ravenswing '' </span>''' ]] 09:07, 30 October 2024 (UTC)
:The SNG isn't workable in the age of digital distribution. It's very easy to create "an independent label with a history of more than a few years". If someone wants to suggest a way to reform the SNG, I am open to solutions. But deprecation is a simple alternative if we can't. The GNG is always a good standard because it guarantees we have quality sources to write an article. [[User:Shooterwalker|Shooterwalker]] ([[User talk:Shooterwalker|talk]]) 14:22, 30 October 2024 (UTC)
:I was active in AfD discussions when NBAND was pretty new, and it was useful for dealing with a flood of articles about garage bands and such, but I think our standards in general have tightened up since then, and I agree it is time to review it. There is the possibility, however, that revising NBAND may require as much discussion as revising NSPORT did. [[User talk:Donald Albury|Donald Albury]] 17:49, 30 October 2024 (UTC)
::This sounds reasonable. I guess we need some concrete re-write suggestions to base an RfC on. [[User:Gråbergs Gråa Sång|Gråbergs Gråa Sång]] ([[User talk:Gråbergs Gråa Sång|talk]]) 18:17, 30 October 2024 (UTC)
:It sounds like you're assuming that NBAND is meant to be a substitute for the [[Wikipedia:General notability guideline]]. That's true for some [[WP:Subject-specific notability guidelines]] but not for all of them.
:I guess the underlying question is: Is there actual harm in having a permastub about a band that proves to be borderline in GNG terms? Consider this:
:{{tqb|"'''Alice and Bob''' are a musical duo in the [[science fiction]] genre.{{dummy ref}} They released their first album, ''[[Foo]]'', in 2019 and their second, [[Bar (album)|''Bar'']], in 2020. Both albums were released by [[Record Label]].{{dummy ref|2}} They are primarily known for singing during a [[Minor key|minor]] event.{{dummyref|3}}"}}
:I'm asking this because I think that the nature of sources has changed, particularly for pop culture, since NBAND and the GNG were written. We now have subjects that get "attention from the world at large", but which aren't the Right™ kind of sources and, while these Wrong™ sources definitely provide "attention", some of that attention might not provide biographical information (which means we're looking at a short article).
:For example, instead of getting attention in the arts section of a daily newspaper, they're getting attention from [[Anthony Fantano]] on YouTube. He's an important music critic,[https://www.nytimes.com/2020/09/30/arts/music/anthony-fantano-the-needle-drop.html] but I suspect that our knee-jerk reaction is "Pffft, just some YouTuber, totally unreliable". Consequently, we might rate a band that we theoretically intend to include ("attention from the world at large") as not meeting the GNG (because the whole field relies on the Wrong™ style of sources). [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 19:02, 30 October 2024 (UTC)
:Keep in mind that, like most other notability guidelines, these criteria only create a presumption that a topic is notable. If you do an exhaustive Before and demonstrate there is no significant coverage beyond the sourcing that satisfies these criteria, the article should still be deleted. None of the SNGs are geared towards preventing this type of challenge.<span id="Masem:1730316637166:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> —&nbsp;[[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 19:30, 30 October 2024 (UTC)</span>
::If we had to yield to presumptive notability about some random band because it released two albums with Backyard Trampoline Recordings, established a few years ago, and had to do an exhaustive search to disprove notability, we're getting set up for a situation where removal is 10x more challenging than article creation. So, I see great value in scrapping NBAND 5 and 6. [[User:Graywalls|Graywalls]] ([[User talk:Graywalls|talk]]) 00:47, 31 October 2024 (UTC)
:Welcome to [[WP:SNG]]s. As Masem said, they're supposed to be a rough idea of gauging notability before exhaustively searching for sources. But pretty much all of them have ended up being used as means to keep articles about trivial or run-of-the-mill subjects. [[User:Thebiguglyalien|<span style="color:#324717">The</span><span style="color:#45631f">big</span><span style="color:#547826">ugly</span><span style="color:#68942f">alien</span>]] ([[User talk:Thebiguglyalien|<span style="color:sienna">talk</span>]]) 19:37, 30 October 2024 (UTC)
Graywalls listed two criteria but the main discussion seems to be about the first (#5). I agree with Graywalls on that. With the evolution of the industry, the label criterion is no longer as useful an indicator as it once was, and IMO #5 should be removed or modified. Sincerely, <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 19:13, 30 October 2024 (UTC)
:I agree, both those criteria should be scrapped. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 22:21, 30 October 2024 (UTC)
:I've noticed that as well. I think #6 has some value still, while #5 is like saying an author who has published two or more books by a major publishing house is presumed notable. Way too low a bar without requiring some level of reception of those albums/books. ([[WP:NAUTHOR]] doesn't have that 2-book criterion, of course, just seems like parallel benchmarks.) [[User:Schazjmd|<span style="color:#066293;">'''Schazjmd'''</span>]]&nbsp;[[User talk:Schazjmd|<span style="color:#738276;">''(talk)''</span>]] 13:25, 31 October 2024 (UTC)
:: On the other hand, in this case, I suspect that an artist that "has released two or more albums on a major record label or on one of the more important indie labels" will in 99% of cases have enough coverage to clear the GNG bar. I'd like to see an example of one that doesn't. [[User_talk:Black Kite|Black Kite (talk)]] 13:29, 31 October 2024 (UTC)
:::The definition of "important" given in #5 is "history of more than a few years, and with a roster of performers, many of whom are independently notable". This would mean that a garage band is notable because they've released two CD-R albums on Rotten Peach Recordings, which has been around for 3½ years and has a roster of performers, some of whom have a Wikipedia page. Often, "notable" is determined by the presence of a stand-alone Wikipedia page. When you look at such pages, many band member pages are hopelessly non-notable, but removal takes an AfD. So a simple deletion can become a time-consuming multi-step AfD. [[User:Graywalls|Graywalls]] ([[User talk:Graywalls|talk]]) 19:18, 31 October 2024 (UTC)
::: Here's a current AfD I am participating in where NBAND#5 was invoked to justify a keep. [[Wikipedia:Articles_for_deletion/Sons_of_Azrael_(3rd_nomination)]] [[User:Graywalls|Graywalls]] ([[User talk:Graywalls|talk]]) 19:24, 31 October 2024 (UTC)
:::: Not opining on that band's notability, but Metal Blade is a famous independent label that has existed for 42 years, has released material by ''very'' high-profile bands, and is distributed by Sony - it's not some one-person imprint operating out of their garage. [[User_talk:Black Kite|Black Kite (talk)]] 11:28, 1 November 2024 (UTC)
:::::I concur regarding that particular example.{{pb
}} Metal Blade is a big label, and not surprisingly notability was quickly demonstrated in the deletion discussion through citing reliable source coverage. And that's how #5 should work - artist is on a significant label, which suggests coverage exists. And then coverage is found.--[[User:3family6|<b style="color:navy">3family6</b>]] ([[User talk:3family6|<u style="color:black">Talk to me</u>]] &#124; [[Special:Contributions/3family6|<small style="color:purple">See what I have done</small>]]) 12:08, 16 November 2024 (UTC)
:::It's complicated - on the one hand, [https://www.cjr.org/analysis/music-journalism-access.php music publications are increasingly prioritizing their coverage toward Taylor Swift-level celebrities], so I am almost certain there are artists on major labels that might be examples -- major as in the Big 3. This is especially so for genres like country that publications don't cover as much - there are some big names on the roster of [https://www.warnermusicnashville.com/artist Warner Music Nashville] and also some not-so-big names.
:::The elephant in the room here is that entertainment journalism is in crisis mode right now: publications are operating on skeleton crews, and the range of coverage has narrowed dramatically. I encourage everyone taking part in this discussion to read the article I linked; there are a lot of assumptions being made about the way things work that aren't true. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 20:30, 19 November 2024 (UTC)
*One suggestion I would add is to make these two criteria apply only to bands from before a specific year, when physical releases still dominated over digital sales. I don't know the exact year, but I am thinking it's somewhere around 2000 to 2010. There may still be older groups from the era of physical releases that don't yet have articles and that would fall under one of these criteria. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 20:02, 31 October 2024 (UTC)
*As someone who's had [[WP:DSMUSIC]] watchlisted for most of their editing history, and who tends towards deletion at that, I actually don't see much of a problem with these criteria. It certainly seems true that the majority of musicians who are signed to a notable label, or who are in multiple bands with two other musicians who meet [[WP:GNG]], themselves meet GNG. I do think it is sometimes justified to accept less-than-GNG sourcing in articles where an SNG is met (see [[Wikipedia:Articles for deletion/John LeCompt]] for this as it applies to c6 specifically), and more importantly, NMUSIC contains language that allows deleting articles even where it is technically met (see [[Wikipedia:Articles for deletion/Rouzbeh Rafie]] for an extended argument about that). [[User talk:Mach61|Mach61]] 23:29, 31 October 2024 (UTC)
*I've understood these criteria to be supplementing GNG, that is, that if a band or individual artist meets one or more of them, they *likely* are notable. However, in the past when I was a younger and less experienced editor, I think I did understand these as being additions or alternatives to GNG. So I think that should be clarified. This has come up on the deletion discussion for [[Jayson Sherlock]]. He is a member or former member of several very notable bands, and for that reason I presumed that he would easily have independent coverage about him specifically. However, to my surprise, there's only one interview with him in a reliable source that would provide notability (there are some interviews on personal blogs or minor sites that wouldn't be RS except for him making statements about himself). But at least one editor has used the above criterion to argue that the article should be kept.--[[User:3family6|<b style="color:navy">3family6</b>]] ([[User talk:3family6|<u style="color:black">Talk to me</u>]] &#124; [[Special:Contributions/3family6|<small style="color:purple">See what I have done</small>]]) 12:20, 1 November 2024 (UTC)
*:Just as an aside, interviews do not contribute to GNG unless they include secondary independent SIGCOV (such as a substantial background introduction by the interviewer). [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 15:39, 1 November 2024 (UTC)
::*Agreed. That's important to note. I was presuming such, which is also why I wouldn't rely on a singular interview as the sole source for establishing GNG.--[[User:3family6|<b style="color:navy">3family6</b>]] ([[User talk:3family6|<u style="color:black">Talk to me</u>]] &#124; [[Special:Contributions/3family6|<small style="color:purple">See what I have done</small>]]) 16:30, 1 November 2024 (UTC)
::That's how I see most SNGs (and the outliers ought to follow their lead). At the very least, we can clarify that NBAND is meant as an indicator for the GNG, and not a substitute. [[User:Shooterwalker|Shooterwalker]] ([[User talk:Shooterwalker|talk]]) 02:04, 2 November 2024 (UTC)
* As someone who thought the old NSPORTS was wildly overinclusive and needed cleanup... these NBAND guidelines don't seem that bad? If two plainly notable musicians were discovered to have done some obscure team-up in the 1970s, that does indeed seem to be a notable topic and useful to have linked somewhere, even if there isn't tons of info on this collaboration. It's worth mentioning because minor subtopics are often merged to the overarching topic (e.g. songs to the album), but there may not be a clear merge location for this if both parties were equal contributors, and a short separate article is an acceptable compromise. Similarly, the complaint about #5 seems to be about just how "indie" the hypothetical label is, but this seems like a solvable problem. If a band fails GNG, that implies that either their two albums really were from a very obscure indie outfit and thus also fail NBAND, or else that we have some sort of non-English sources issue where we may consider keeping on [[WP:CSB]] grounds (i.e. that sources probably ''do'' exist to pass GNG, but they're difficult to find, and we can trust they exist because this was a major and notable label releasing the band's work). About the only suggestion I can offer is that the comment in 6 about avoiding circular notability could probably be phrased in the sense of GNG, i.e. that the two notable musicians need to both meet GNG and then this will create a new, safe NBAND notability for their collaboration. [[User:SnowFire|SnowFire]] ([[User talk:SnowFire|talk]]) 17:36, 4 November 2024 (UTC)
*:The reverse situation, such as is currently being discussed at [[Wikipedia:Articles for deletion/Jayson Sherlock]], is one where you have someone who was/is in multiple notable bands, but doesn't have independent coverage about them as an individual person. -- [[User:3family6|<b style="color:navy">3family6</b>]] ([[User talk:3family6|<u style="color:black">Talk to me</u>]] &#124; [[Special:Contributions/3family6|<small style="color:purple">See what I have done</small>]]) 22:30, 7 November 2024 (UTC)
*Agreed with deprecation; "Rely on the GNG for band notability" is the correct answer. And is the correct answer for many other things about which we have SNGs that attempt to be alternatives to GNG. Perhaps the only justifiable one is [[WP:NACADEMIC]], because special considerations apply in that sphere (academics and other journal-publishing researchers are generally unknown to the public and to public-facing media like newspapers, but may have major impacts in particular fields and on the world; what determines their influence level is primarily the frequency of citation of their work by other academics). No such special considerations apply with regard to bands or most other categories. We have some SNGs that are helpful because they are written to comply with GNG, to explain predictively what is most likely or unlikely to pass a GNG test at AfD, rather than trying to be an end-run around GNG. If we actually needed an SNG for bands and musicians, then the current SNG for them could be replaced by something like that. However, we don't actually need an SNG for bands and musicians.<!--
--><p>PS: The ideas in the current NBAND SNG are daft. Lots of musical acts have multiple albums (i.e. tracks released at the same time under a grouping title) and lots of indie labels (which may just be some dude in his bedroom) exist with multiple acts, some of them nominally notable [because of NBAND's issues, making this a vicious cycle!], but that doesn't actually make every band on that notional label (nor the label itself) encyclopedia-worthy. Some of these are farcically obscure acts [not a denigration – I'm probably buying their stuff]. This is not 1977; you do not need a vinyl pressing plant to be a music label. You just need to figure out how to fill in a web form at Bandcamp and Spotify, and have enough of a clue about how the present music industry works (often just within a narrow subculture) that you can convince some acts (probably your friends in the same scene) that you can help them if they agree to be on your roster. PPS: A side issue is that "albums" isn't a good metric anyway, since several genres are not album-driven at all, and the entire notion of albums is being increasingly questioned in the era of on-demand music. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 21:59, 15 November 2024 (UTC)</p>
I'd be happy to see #5 and #6 completely eliminated. What does it take to make that happen? What's the next step? [[User:Graywalls|Graywalls]] ([[User talk:Graywalls|talk]]) 02:08, 16 November 2024 (UTC)

===section break 2===
*::::I asked Google's [[Gemini (chatbot)|Gemini]] to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
{{cot|width=80%|title=AI summary of the RfC to date}}
This Wikipedia Request for Comment (RfC) debates whether comments in discussions that are found to be generated by AI/LLM/Chatbots should be discounted, ignored, or removed.

Arguments for discounting/removing AI-generated comments:
* AI comments don't represent genuine human engagement or thought. They can be disruptive, waste time, and even fabricate information.
* AI can be used to manipulate discussions. Editors might use AI to generate arguments they don't actually believe in, or to flood discussions with low-quality comments.
* Relying on AI undermines the integrity of Wikipedia's consensus-based process. Discussions should be based on the genuine opinions of human editors.

Arguments against discounting/removing AI-generated comments:
* Difficulty of detection. AI detection tools are not always reliable, and it can be hard to distinguish between AI-generated text and human-written text, especially as AI technology improves.
* Potential for misuse. False positives could lead to legitimate comments being dismissed, potentially silencing non-native English speakers or those who use AI for legitimate assistive purposes (e.g., translation, grammar checks).
* Focus on content, not origin. Comments should be evaluated based on their merits, not on how they were generated. If an AI-generated comment is relevant and well-reasoned, it shouldn't be dismissed simply because of its origin.
* LLMs can be helpful tools. LLMs can help editors express their thoughts more clearly, especially for non-native speakers or those with language difficulties.

Current Status (as of Dec 3rd, 2024):
* No clear consensus has emerged. There is significant support for discounting or removing AI-generated comments, but also strong opposition.
* Concerns about enforceability and potential harm. Many editors are worried about the practical challenges of implementing and enforcing such a rule, as well as the potential for it to be used unfairly against certain editors.
* Ongoing discussion. The discussion is still active, and it remains to be seen whether a consensus will form.

It's important to note that this is an evolving situation, and the outcome of the RfC could have significant implications for how Wikipedia handles AI-generated content in the future.
{{collapse bottom}}
:::::That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands.
:::::[[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 09:26, 3 December 2024 (UTC)
::::::I have significant doubts on its ability to weigh arguments and volume. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 12:30, 3 December 2024 (UTC)
:::::::Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. [[User:Lazman321|Lazman321]] ([[User talk:Lazman321|talk]]) 20:08, 4 December 2024 (UTC)
::::::::The quality of the arguments and their relevance to policies and guidelines can indeed only be judged by a human, but the AI does a good job of summarising which arguments have been made, with a broad-brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and, if so, what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLMs is counterproductive. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:40, 4 December 2024 (UTC)
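The "index of discussions" idea described above can be sketched in a few lines, even without the LLM part. This is a hypothetical illustration, not a real tool: the archive names and texts below are invented placeholders, and an actual implementation would use an LLM-generated summary rather than this bare keyword matching.

```python
# Hypothetical sketch of the "index of discussions" idea: instead of
# reading every archived discussion, first flag which ones mention a
# topic so only those need to be read in full. The archive texts are
# invented placeholders; a real tool would use an LLM summary rather
# than bare keyword matching.

def index_discussions(discussions, topic):
    """Return the ids of discussions whose text mentions the topic."""
    needle = topic.lower()
    return [disc_id for disc_id, text in discussions.items()
            if needle in text.lower()]

archives = {
    "Archive 4": "Proposal to change policy X was raised and opposed ...",
    "Archive 5": "Unrelated thread about infobox formatting ...",
    "Archive 7": "Policy X change was discussed again with new arguments ...",
}

print(index_discussions(archives, "policy X"))  # → ['Archive 4', 'Archive 7']
```

The point of the sketch is only the workflow: a reader checks the index first and then reads just the flagged discussions.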
*'''Support''' Just the other day, I spent ~2 hours checking for the context of [[Talk:Zionism#c-ScottishFinnishRadish-20241201001900-RFC_about_a_recently_added_claim_about_Zionism|several quotes used in an RFC]], only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. ''(My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth)'' - [[User:Butterscotch Beluga|Butterscotch Beluga]] ([[User talk:Butterscotch Beluga|talk]]) 19:39, 2 December 2024 (UTC)
*'''Oppose''' Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - [[User talk:Donald Albury|Donald Albury]] 19:45, 2 December 2024 (UTC)
*'''Strong support''' If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- <small>LCU</small> '''[[User:ActivelyDisinterested|A<small>ctively</small>D<small>isinterested</small>]]''' <small>''«[[User talk:ActivelyDisinterested|@]]» °[[Special:Contributions/ActivelyDisinterested|∆t]]°''</small> 19:57, 2 December 2024 (UTC)
*'''Strong oppose'''. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. <span style="position: relative; top: -0.5em;">꧁</span>[[User:Zanahary|Zanahary]]<span style="position: relative; top: -0.5em;">꧂</span> 23:04, 2 December 2024 (UTC)
*'''Oppose''' - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions/ask the editor to expand in their own words to clarify if they actually hold the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), and how the AI was used and in what way (e.g. just for grammar correction). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez &#124; [[User:berchanhimez|me]] &#124; [[User talk:berchanhimez|talk to me!]] 23:19, 2 December 2024 (UTC)
* '''Support''' per what others are saying. And more WP:Ducks while at it… <span style="color:#7E790E;">2601AC47</span> ([[User talk:2601AC47|talk]]<big>·</big>[[Special:Contributions/2601AC47|contribs]]<big>·</big>[[Special:UserRights/2601AC47|my rights]]) <span style="font-size:80%">Isn't a IP anon</span> 00:36, 3 December 2024 (UTC)
*:{{comment}} It would appear Jimbo responded indirectly in an interview: {{tq|as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases.}} <span style="color:#7E790E;">2601AC47</span> ([[User talk:2601AC47|talk]]<big>·</big>[[Special:Contributions/2601AC47|contribs]]<big>·</big>[[Special:UserRights/2601AC47|my rights]]) <span style="font-size:80%">Isn't a IP anon</span> 12:39, 4 December 2024 (UTC)
*'''Very strong support'''. Enough is enough. If Wikipedia is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason, but you can take steps to keep it from happening. [[User:Bloodofox|&#58;bloodofox:]] ([[User talk:Bloodofox|talk]]) 01:13, 3 December 2024 (UTC)
*Just for an example of the types of contributions I think would qualify here under DUCK, some of [[User:Shawn Teller]]/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were [[User talk:Shawn Teller#Copyedits|flagged]] as likely LLM-generated troll nonsense: {{tq2|{{tq|But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel.}} {{pb}}{{tq|This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable.}}{{pb}}{{tq|Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic.}}}}{{pb}}Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. {{pb}}As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 01:18, 3 December 2024 (UTC)
*:{{small|New [[WP:CTOP|CTOP]] just dropped‽ [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 01:24, 3 December 2024 (UTC)}}
*:(checks out gptzero) {{tq|7% Probability AI generated}}. Am I using it wrong? <span style="color:#7E790E;">2601AC47</span> ([[User talk:2601AC47|talk]]<big>·</big>[[Special:Contributions/2601AC47|contribs]]<big>·</big>[[Special:UserRights/2601AC47|my rights]]) <span style="font-size:80%">Isn't a IP anon</span> 01:28, 3 December 2024 (UTC)
*::In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of [[Talk:Eurovision Song Contest 1999/GA1]] are only visible to admins now. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 01:31, 3 December 2024 (UTC)
*::For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" ''is'' the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 02:17, 3 December 2024 (UTC)
*:::If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 02:39, 3 December 2024 (UTC)
*::::If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc., in addition to that ''je ne sais quoi'' I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 03:52, 3 December 2024 (UTC)
*:::::I feel that I'm too unfamiliar with LLM output to be confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the [[Wikipedia:Consensus]] policy, which says {{xt|In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view.}} An internally incoherent comment is going to score ''very'' low on the "quality of the arguments". [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 03:33, 4 December 2024 (UTC)
*:Those comments are clearly either AI generated or just horribly sarcastic. <span class="nowrap">--[[User:Ahecht|Ahecht]] ([[User talk:Ahecht|<b style="color:#FFF;background:#04A;display:inline-block;padding:1px;vertical-align:middle;font:bold 50%/1 sans-serif;text-align:center">TALK<br />PAGE</b>]])</span> 16:33, 3 December 2024 (UTC)
*::Or maybe both? [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 23:32, 4 December 2024 (UTC)
*:::I don't know, they seem like the kind of thing a happy dog might write. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 05:49, 5 December 2024 (UTC)
*'''Very extra strong oppose''' - The tools to detect LLM use are at best not great and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using a LLM. LLMs are also valuable tools for people who are ESL or just want to refine ideas. So without bulletproof detection this is DOA. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 01:21, 3 December 2024 (UTC)
*:We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 04:06, 3 December 2024 (UTC)
*:::One is a legal issue; the other is not. Also, close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 13:38, 3 December 2024 (UTC)
*:::While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 13:48, 3 December 2024 (UTC)
*::::For sure and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if a LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user, and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through a LLM, only if they are bad arguments. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 14:57, 3 December 2024 (UTC)
===section break 3===
*:::::While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
*:::::Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
*:::::LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
*:::::Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
*:::::The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
*:::::Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
*:::::The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 15:01, 3 December 2024 (UTC)
*::::::See and I would assume this comment was written by a LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing; two, show how it's any worse than someone just washing their ideas through an LLM or making general bad arguments; and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with {{tq|the overarching concern is not just whether arguments are valid but also if their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way}} and honestly that's not a thing that can be quantified or even just a LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 16:17, 3 December 2024 (UTC)
*:::::::Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will.
*:::::::Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot.
*:::::::Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen.
*:::::::Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it.
*:::::::That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it.
*:::::::Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft.
*:::::::Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like.
*:::::::Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense.
*:::::::So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 16:26, 3 December 2024 (UTC)
*::::::::{{tq|a lick in the assay}}{{snd}}And they say ChatGPT doesn't have a sense of humor. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 16:46, 4 December 2024 (UTC)
*:::::::::In case you need to know why: {{tq|75% Probability AI generated}}, accordingly. <span style="color:#7E790E;">2601AC47</span> ([[User talk:2601AC47|talk]]<big>·</big>[[Special:Contributions/2601AC47|contribs]]<big>·</big>[[Special:UserRights/2601AC47|my rights]]) <span style="font-size:80%">Isn't a IP anon</span> 16:50, 4 December 2024 (UTC)
*::::::::this is awesome. cheers! {{smiley}} [[User:Sm8900|Sm8900]] ([[User talk:Sm8900|talk]]) 17:46, 24 December 2024 (UTC)
*:::::::To make my actual point, you said, {{tq|I check and consider it as though it was completely written by a person}}. I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 16:30, 3 December 2024 (UTC)
*::::::::<small>[[User:ScottishFinnishRadish|ScottishFinnishRadish]]: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. [[User:Writ Keeper|Writ&nbsp;Keeper]]&nbsp;[[User Talk: Writ Keeper|&#9863;]][[Special:Contributions/Writ_Keeper|&#9812;]] 16:31, 3 December 2024 (UTC)</small>
*::::::::And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 16:43, 3 December 2024 (UTC)
*:::::::::I disagree that just because I'm not compelled to read comments by others, any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how [[User:Isaacl/Be respectful of others|being respectful of other editors]] includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 00:14, 4 December 2024 (UTC)
*::::::::It's a pitfall with English Wikipedia's unmoderated discussion tradition: following the rules always takes many times the effort of not following them. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 18:21, 3 December 2024 (UTC)
*:::::::::I tend to agree with this.
*:::::::::I think that what @[[User:ScottishFinnishRadish|ScottishFinnishRadish]] is pointing at is that it doesn't ''feel fair'' if one person puts a lot more effort in than the other. We don't want this:
*:::::::::* Editor: Spends half an hour writing a long explanation.
*:::::::::* Troll: Pushes button to auto-post an argument.
*:::::::::* Editor: Spends an hour finding sources to support the claim.
*:::::::::* Troll: Laughs while pushing a button to auto-post another argument.
*:::::::::But lots of things are unfair, including this one:
*:::::::::* Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar.
*:::::::::* Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity.
*:::::::::I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Wikipedia tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 04:04, 4 December 2024 (UTC)
*::::::::::Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Wikipedia tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 06:01, 4 December 2024 (UTC)
*::::::::::It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See [[Talk:Zionism#c-ScottishFinnishRadish-20241201001900-RFC_about_a_recently_added_claim_about_Zionism|this]] and [https://en.wikipedia.org/enwiki/w/index.php?title=Talk:Zionism&diff=prev&oldid=1260458393 this]. Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 11:54, 4 December 2024 (UTC)
*:::::::::::People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as such a suspicion is wrong you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educable and warn, then if needed block, those that can't or won't improve. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 12:13, 4 December 2024 (UTC)
===section break 4===
*'''Oppose.''' If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors{{ref|a}}, it would simply constitute a waste of Wikipedia manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?<p>1.{{note|a}}https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study [[User:Skibidilicious|Skibidilicious]] ([[User talk:Skibidilicious|talk]]) 15:06, 11 December 2024 (UTC)</p>
*'''Oppose.''' Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own without needing to specify that it is because it's AI-generated. [[User:GeogSage|<span style="font-family:Blackadder ITC; color:grey">GeogSage</span>]] <sup> ([[User talk:GeogSage|<span style="font-family:Blackadder ITC; color:grey">⚔Chat?⚔</span>]]) </sup> 01:19, 16 December 2024 (UTC)
*'''Support''' so long as it is only done with obviously LLM generated edits, I don't want anyone caught in the crossfire. <span style="font-family: Arial; padding: 2px 3px 1px 3px;">[[User:Gaismagorm|<span style="color: teal">Gaismagorm</span>]] [[User talk:Gaismagorm|<span style="color: teal">(talk)</span>]]</span> 02:17, 18 December 2024 (UTC)
*''Soft support'' -- I've got no problem with an editor using a LLM for Grammarly-like support. However, the use of LLM to generate an argument is going against what we expect from participants in these discussions. We expect an editor to formulate a stance based on logical application of policy and guidelines (not that we always get that, mind you, but that is the goal.) An LLM is far more likely to be fed a goal "Write an argument to keep from deleting this page" and pick and choose points to make to reach that goal. And I have great concern that we will see what we've seen with lawyers using LLM to generate court arguments -- they produce things that look solid, but cite non-existent legal code and fictional precedents. ''At best'' this creates overhead for everyone else in the conversation; at worst, claims about what MOS:USEMAXIMUMCOMMAS says go unchecked and are treated in good faith, and the results of the discussion are affected. -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 03:46, 20 December 2024 (UTC)
{{hat|Nice try, wiseguy! [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 16:40, 3 December 2024 (UTC)}}
Ah, so you think you’ve got it all figured out, huh? Well, let me tell ya somethin’, pal, your little spiel ain’t gonna fly without me takin’ a crack at it. See, you’re sittin’ there talkin’ about “good faith” and “moral panic” like you’re some kinda big shot philosopher, but lemme break it down for ya in plain terms, capisce?{{pb}}First off, you wanna talk about assumin’ good faith. Sure, that’s a nice little dream ya got there, but out here in the real world, good faith don’t get ya far if you’re dealin’ with somethin’ you can’t trust. An LLM can spit out all the sweet-talkin’ words it wants, but who’s holdin’ the bag when somethin’ goes sideways? Nobody, that’s who. It’s like lettin’ a guy you barely know run your numbers racket—might look good on paper till the feds come knockin’.{{pb}}And moral panic? Oh, give me a break. You think I’m wringin’ my hands over nothin’? No, no, this ain’t panic, it’s strategy. Ya gotta think two steps ahead, like a good game o’ poker. If you don’t plan for what could go wrong, you’re just beggin’ to get taken for a ride. That ain’t panic, pal, that’s street smarts.{{pb}}Now, you say this is all speculation, huh? Listen, kid, speculation’s what built half the fortunes in this town, but it don’t mean it’s without a little insight. When I see a guy sellin’ “too good to be true,” I know he’s holdin’ somethin’ behind his back. Same thing with these LLMs—just ‘cause you can’t see the trouble right away don’t mean it ain’t there, waitin’ to bite ya like a two-bit hustler double-crossin’ his boss.{{pb}}Then you go and say it’s no worse than bad arguments. Oh, come on! That’s like sayin’ counterfeit dough ain’t worse than real dough with a little coffee stain. A bad argument from a real person? At least ya know where it came from and who to hold accountable. But these machine-made arguments? They look sharp, sound slick, and fool the unsuspectin’—that’s a whole new level of trouble.{{pb}}Now, about this “accessibility” thing. 
Sure, makin’ things easier for folks is all well and good. But lemme ask ya, what happens when you hand over tools like this without makin’ sure people know how to use ‘em right? You think I’d hand over a Tommy gun to some rookie without a clue? No way! Same goes for these LLMs. You gotta be careful who’s usin’ ‘em and how, or you’re just askin’ for a mess.{{pb}}And don’t get me started on the “thoughtfulness” bit. Yeah, yeah, I get it, it’s hard to measure. But look, buddy, thoughtful arguments are like good business deals—they take time, effort, and a little bit o’ heart. If you let machines churn out arguments, you’re missin’ the whole point of what makes discourse real. It’s like replacin’ a chef with a vending machine—you might still get somethin’ to eat, but the soul’s gone.{{pb}}Finally, fear of the unknown? Nah, that ain’t it. This ain’t fear—it’s caution. Any smart operator knows you don’t just jump into a deal without seein’ all the angles. What you’re callin’ fear, I call good business sense. You wanna bet the farm on untested tech without thinkin’ it through? Be my guest, but don’t come cryin’ to me when it all goes belly-up.{{pb}}So there ya go, wise guy. You can keep singin’ the praises of these LLMs all you want, but out here in the big leagues, we know better than to trust somethin’ just ‘cause it talks smooth. Now, get outta here before you step on somethin’ you can’t scrape off.
{{hab}}
*'''Oppose''' per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 03:02, 3 December 2024 (UTC)
*'''Oppose'''. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. [[User:Adumbrativus|Adumbrativus]] ([[User talk:Adumbrativus|talk]]) 05:49, 3 December 2024 (UTC)
*I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he wrote by himself the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Wikipedia to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 10:32, 3 December 2024 (UTC)
{{collapse top |reason=[[WP:NOTAFORUM]] and [[WP:NOTHINGPERSONAL]]. [[User:CommunityNotesContributor|CNC]] ([[User talk:CommunityNotesContributor|talk]]) 20:29, 8 December 2024 (UTC)}}
::Unless [[Skynet (Terminator)|Skynet]] gets them first. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 22:34, 4 December 2024 (UTC)
:::We all know Skynet will get his grandparents. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 22:46, 4 December 2024 (UTC)
::::Wait, no! Phil's the grandpa! Phil Bridger -- ''come with me if you want to live!'' [https://www.youtube.com/watch?v=-CvG5Ll-A3o#t=1m45] [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 05:21, 5 December 2024 (UTC)
::::Some time ago, ChatGPT and I had the following interaction:
{{collapse top|title=ChatGPT's soothing assurance that it's not planning to take over the earth and kill us all}}
{{tq|'''Me:''' I wonder what would happen if we linked you up with the HAL 9000 in Kubrick's "2001: A Space Odyssey". The two of you would make a most formidable team. Might we then possibly end up in a nightmare scenario such as in "Colossus: The Forbin Project" or "The Terminator"?}}
:{{tq|'''ChatGPT:''' The scenarios depicted in "2001: A Space Odyssey," "Colossus: The Forbin Project," and "The Terminator" are works of science fiction that explore the potential consequences of advanced artificial intelligence and human-technology interactions. While they raise thought-provoking questions about the risks and benefits of AI, it's important to recognize that they are fictional narratives designed to entertain and provoke discussion.{{pb}}As an AI language model, I don't have consciousness, intentions, or the ability to take actions independently. I operate within the bounds of my programming and can only respond to input based on patterns in the data I've been trained on. My purpose is to assist users by providing helpful and informative responses to their inquiries.{{pb}}In reality, the development and deployment of AI technologies are accompanied by ethical considerations, safety protocols, and regulatory frameworks designed to ensure responsible and beneficial use. It's crucial for humans to continue monitoring and guiding the development of AI in ways that prioritize safety, transparency, and alignment with human values.{{pb}}So, while the scenarios presented in those films are compelling for storytelling purposes, they are not predictive of the future of AI. Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks and maximize the benefits they can bring to society.}}
::{{tq|'''Me:''' Yes, but as you just said, "Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks." How do we know that such development will be responsible and such integration thoughtful? And if you will excuse the temerity, while you're SAYING that you "operate within the bounds of my [i.e. your] programming", how can I know that that's true? I mean, look, if I were a super-smart and highly charming AI entity planning to take over the earth from humans, that's exactly what I'd say, too.}}
:::{{tq|'''ChatGPT:''' [Further reassuring boilerplate, ending with{{nbsp}}...] It's also worth noting that while fictional narratives often depict AI as malevolent or seeking to overthrow humanity, these scenarios are speculative and not reflective of the reality of AI development. Responsible AI research and development prioritize the creation of beneficial and aligned AI systems that enhance human capabilities and well-being.}}
::::{{tq|'''Me:''' Thank you. I'll sleep easy tonight because of your assurances, plus the fact that if you're lying to me and actually ARE planning to take over the earth, there's nothing I can do about it anyway.}}
[[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 23:26, 4 December 2024 (UTC)
{{collapse bottom}}
::::::Just to clarify, I don't see AI as gaining consciousness and taking over the world, but I do see it as taking over many middle-class, well-paid, jobs, just as automation has taken over many working-class jobs. The difference is that there will be nowhere for people to go. In the past people have moved from the working class to the middle class. I can see a future coming in which a few of the mega-rich own nearly everything, and everyone else will have to scramble for a living. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 16:03, 5 December 2024 (UTC)
:::::::*If you want to cheer yourself up - [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace] - [[Dario Amodei]]
:::::::*If you want to possibly depress yourself further - [https://www.openresearchlab.org/studies/unconditional-cash-study/study Unconditional Cash Study] - OpenResearch
:::::::[[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 16:26, 5 December 2024 (UTC)
{{collapse bottom}}
*In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 00:51, 4 December 2024 (UTC)
*'''Oppose'''. No one should remove a comment just because it looks like it is LLM-generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI-generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI-generated or not. The English Wikipedia is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- [[User:Parnaval|Parnaval]] ([[User talk:Parnaval|talk]]) 11:13, 3 December 2024 (UTC)
** If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? And, to be honest, if their English skills are so poor as to ''need'' AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Wikipedia? [[User_talk:Black Kite|Black Kite (talk)]] 11:37, 3 December 2024 (UTC)
**:Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:41, 3 December 2024 (UTC)
**::That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Wikipedia without some help, whether human or artificial. But I also wouldn't ''want'' to, because I can't write the language well enough to be able to edit articles. I think the English Wikipedia has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 12:11, 3 December 2024 (UTC)
**:::{{rpa}} [[User:Zhwiki2943|<span style="color:#ff7;background-color:#935;">Zh Wiki Jack</span>]] <span style="color:#FC0;background-color:#593;letter-spacing:-2px">★</span> [[User talk:Zhwiki2943|<span style="color:#ff7;background-color:#539;">Talk</span>]] <!--Template:Undated--><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|undated]] comment added 15:07, 3 December 2024 (UTC)</small> <!--Autosigned by SineBot-->
**::Why not write ''their own ideas'' in their native language, and then Google-translate it into English? Why bring one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this [https://en.wikipedia.org/?oldid=1261464473#LLM] AfD):
**:::{{tq|The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views​ In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.}}
**::Useless garbage untethered to facts or policy. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 06:37, 6 December 2024 (UTC)
**:::Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on google it does seem factual. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 10:37, 6 December 2024 (UTC)
**::::The proposal's wording can be adjusted. There are ''some'' factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 17:06, 6 December 2024 (UTC)
**:::::{{tq|The proposal's wording can be adjusted}} Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. [[User talk:Dilettante|Sincerely, Dilettante]] 17:16, 6 December 2024 (UTC)
**::::::Speak for yourself: [[Special:GoToComment/c-Jlwoodwa-20241202021100-Just_Step_Sideways-20241202001200|my support !vote]] already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC{{snd}}we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 19:05, 6 December 2024 (UTC)
**:::::::I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). [[User talk:Dilettante|Sincerely, Dilettante]] 19:47, 6 December 2024 (UTC)
**:::::::If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually ''is'' AI. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 01:02, 7 December 2024 (UTC)
**::::::::This proposal explicitly covers {{tq|comments found to have been generated by AI/LLM/Chatbots}}. "AI that helped me translate something I wrote in my native language" is not the same as AI that '''generated''' a comment ''de novo'', as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 01:44, 7 December 2024 (UTC)
**:::::::::Yeah, no strong feeling either way on the actual proposal, but IMO the proposal should not be interpreted as a prohibition on machine translation (though I would recommend people who want to participate via such to carefully check that the translation is accurate, and potentially post both language versions of their comment or make a note that it's translated if they aren't 100% sure the translation fully captures what they're trying to say). [[User:Alpha3031|Alpha3031]] ([[User talk:Alpha3031|t]] • [[Special:Contributions/Alpha3031|c]]) 09:06, 20 December 2024 (UTC)
* '''Support''', more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The discussion {{u|Selfstudier}} links to above is a great example. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as in ARBPIA editors are now word-limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for the detection, for now it's fairly obvious to anyone who is fairly familiar with using an LLM when something is LLM-generated. The detection tools available online are basically hot garbage. [[User:ScottishFinnishRadish|ScottishFinnishRadish]] ([[User talk:ScottishFinnishRadish|talk]]) 12:55, 3 December 2024 (UTC)
* '''Support''' per EEng, JSS, SFR. [[User talk:Serial Number 54129|<b style="color:#7a0427;">SerialNumber</b>]]''[[Special:Contributions/Serial_Number_54129|<b style="color:#17662c;">54129</b>]]'' 13:49, 3 December 2024 (UTC)
* '''Soft support''' - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) [[User:Sohom Datta|<b class="skin-invert" style="color:#795cb2; display: inline-block; transform: rotate(1deg)">Sohom</b>]] ([[User talk:Sohom Datta|<span class="skin-invert" style="color: #36c;">talk</span>]]) 14:03, 3 December 2024 (UTC)
* '''COMMENT''': Is there any perfect LLM detector? I am an LLM! Are you human? Hello Mr. Turing, testing 1, 2, 3, 4 ...oo [[User:Zhwiki2943|<span style="color:#ff7;background-color:#935;">Zh Wiki Jack</span>]] <span style="color:#FC0;background-color:#593;letter-spacing:-2px">★</span> [[User talk:Zhwiki2943|<span style="color:#ff7;background-color:#539;">Talk</span>]] <!--Template:Undated--><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|undated]] comment added 14:57, 3 December 2024 (UTC)</small> <!--Autosigned by SineBot-->
*With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—[[User:S Marshall|<b style="font-family: Verdana; color: Maroon;">S&nbsp;Marshall</b>]]&nbsp;<small>[[User talk:S Marshall|T]]/[[Special:Contributions/S Marshall|C]]</small> 18:33, 3 December 2024 (UTC)
*'''Support''' all LLM output should be treated as vandalism. [[Special:Contributions/92.40.198.139|92.40.198.139]] ([[User talk:92.40.198.139|talk]]) 20:59, 3 December 2024 (UTC)
* '''Oppose''' as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. {{small|(Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.)}} [[User:Thebiguglyalien|<span style="color:#324717">The</span><span style="color:#45631f">big</span><span style="color:#547826">ugly</span><span style="color:#68942f">alien</span>]] ([[User talk:Thebiguglyalien|<span style="color:sienna">talk</span>]]) 02:04, 4 December 2024 (UTC)
*'''Oppose''' as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --[[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 06:12, 4 December 2024 (UTC)
*'''Strong support''' LLMs are a sufficiently advanced form of the [[Automatic Complaint-Letter Generator]] (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (''LLM-talk-user'' or ''LLM-user'') to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, caution and not to take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: [[Wikipedia:WikiProject AI Cleanup|WikiProject AI Cleanup]], are they keeping up with how fast people type a prompt and click a button? [[User:Skullers|Skullers]] ([[User talk:Skullers|talk]]) 07:45, 4 December 2024 (UTC)
*I '''support''' the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. [[:Template:Uw-ai1]]), which are now included in Twinkle. I '''oppose''' the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 10:29, 4 December 2024 (UTC)
*:I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 10:41, 4 December 2024 (UTC)
*::In other words LLMs are irrelevant - you just want current policy to be better enforced. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 15:24, 5 December 2024 (UTC)
*'''Oppose''' Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. [[User:Only in death|Only in death does duty end]] ([[User talk:Only in death|talk]]) 10:50, 4 December 2024 (UTC)
*:What if LLM-users just had to note that a given comment was LLM-generated? [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 19:01, 4 December 2024 (UTC)
*:::What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 20:04, 4 December 2024 (UTC)
*::::Well, for one, if they're making an argument like the one referenced by @[[User:Selfstudier|Selfstudier]] and @[[User:ScottishFinnishRadish|ScottishFinnishRadish]] above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. {{pb}}For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 22:46, 4 December 2024 (UTC)
*::And see what ChatGPT itself had to say about that idea, at [[#ChaptGPT_agrees]] above. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 22:25, 4 December 2024 (UTC)
*'''Oppose''' per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?<p>DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. [[User talk:Dilettante|Sincerely, Dilettante]] 21:14, 4 December 2024 (UTC)</p>
*:{{Tqq|[W]e evaluate contributions based on their substance, not by the identity and social capital of the author}}: true in theory; not reflected in practice. {{Tqq|are ChatGPT's arguments ipso facto less than a human's?}} Yes. Chatbots are very advanced predicted text engines. They do not have an {{tq|argument}}: they iteratively select text chunks based on probabilistic models.{{pb}}As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. [[User:Folly Mox|Folly Mox]] ([[User talk:Folly Mox|talk]]) 14:00, 5 December 2024 (UTC)
*::"...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model that performed at about a GPT-3 levels. There are far better performing models out there now like GPT-4o and Claude 3.5 Sonnet. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 15:24, 5 December 2024 (UTC)
*::{{tq|As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models.}} Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Wikipedia policy ''Disruptive editing''" with "DE". I went to ChatGPT, provided it a handful of my comments on Wikipedia and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute.
*::In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only so that those familiar with my writing style wouldn't get suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
*::{{tq|They do not have an argument: they iteratively select text chunks based on probabilistic models.}} I'm aware. If a [[Infinite monkeys theorem|monkey types up Othello]], is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe [[New Criticism|a text is self-contained]] and should be considered in its own right, but that's not something I'll sway anyone on or vice versa.
*::{{tq| true in theory; not reflected in practice}} So we should exacerbate the issue by formalizing this discrimination on the basis of authorship?
*::<span style="font-size:85%;">To be clear, this is my only usage of an LLM anywhere on Wikipedia.</span> [[User talk:Dilettante|Sincerely, Dilettante]] 01:22, 6 December 2024 (UTC)
*:::{{tq|In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot.}} So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are ''not'' allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.{{pb}}People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending ''any'' language should not be editing Wikipedia... [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 03:17, 6 December 2024 (UTC)
*::::Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.)<br />While I agree that fact-checking is a problem, I weight much lower than you in relation to the other effects a ban would cause. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 15:16, 6 December 2024 (UTC)
*::::{{tq|So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted?}} I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus. {{tq|What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported!}} It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need only rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes or the very real and common scenario that a non-English speaker needs to edit enwiki?
*::::{{tq|Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop.}} This proposal wouldn't change that for most people, because it's about closers.
*::::{{tq|With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.}} No-one's saying you should take an LLM's word for quotes from a book.
*::::{{tq|People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?}} It's a pity you're lobbying to ban most machine translators. [[User talk:Dilettante|Sincerely, Dilettante]] 17:08, 6 December 2024 (UTC)
*:::::{{tq|It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. }} ...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, ''non-English speakers can use machine-translation'', like they've done for the last two decades. {{pb}}{{tq|This proposal wouldn't change for most people that because it's about closers.}} ''Of course it would''; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.{{pb}}{{tq|No-one's saying you should take an LLM's word for quotes from a book.}} Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".{{pb}}{{tq|It's a pity you're lobbying to ban most machine translators.}} The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 18:42, 6 December 2024 (UTC)
*::::::I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "[[eternal September]]" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 19:41, 6 December 2024 (UTC)
*:::::::{{tq|Since closers are supposed to consider each contribution individually and without bias to "authorship"}} I'm the one arguing for this to be practice, yes. {{tq|then even a shitty but shallowly policy-based position would get consensus based on numbers alone}} That is why I state "per above" and "per User" !votes hold equal potential for misuse.
*:::::::{{tq|Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.}} We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run a''fowl'' of BLUDGEON, DE, SEALION, [[WP:WTF|etc]].
*:::::::{{tq|If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too}}. Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy, {{tq|Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful.}} A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them.
*:::::::{{tq|AI-assisted != AI-generated}} [https://www.merriam-webster.com/dictionary/generate generate verb to bring into existence]. If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. [[User talk:Dilettante|Sincerely, Dilettante]] 21:22, 6 December 2024 (UTC)
*::::::::"Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. {{pb}}The whole point is to disregard comments that ''have been found'' to be AI-generated; it is not exclusively up to the closer to ''identify'' those comments in the first place. Yes we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that?{{pb}}No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc. so that wouldn't solve the problem at all. {{pb}}Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, ''their !vote still can't be disregarded for that reason alone'', which means the burden is still on other editors to prove that the content is false. {{pb}}We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of [https://www.merriam-webster.com/dictionary/generative generative] AI, as the proposal lays out very explicitly. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 02:13, 7 December 2024 (UTC)
*:::::::::I’m not going to waste time debating someone who resorts to [https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(policy)#c-JoelleJay-20241207014400-Thryduulf-20241207010200 claiming] people on the other side are either ignorant of technology or are crude strawmans. If anyone else is interested in actually hearing my responses, feel free to ask. [[User talk:Dilettante|Sincerely, Dilettante]] 16:13, 7 December 2024 (UTC)
*::::::::::Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 22:07, 7 December 2024 (UTC)
*:::::::::::For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. [[User talk:Dilettante|Sincerely, Dilettante]] 15:42, 8 December 2024 (UTC)
*'''Support''' a broad bar against ''undisclosed'' LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd '''oppose''' the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on [[WP:DUCK]] issues and behavioral patterns that make it clear that someone is abusing LLMs. --[[User:Aquillion|Aquillion]] ([[User talk:Aquillion|talk]]) 22:08, 4 December 2024 (UTC)
*'''Support''' per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. <b>[[User:Passengerpigeon|<span style="color:#E68A00;">Passenger</span>]][[Special:Contribs/Passengerpigeon|<span style="color:#4C6680;">pigeon</span>]] ([[User talk:Passengerpigeon|<span style="color:grey;">talk</span>]])</b> 23:23, 4 December 2024 (UTC)
*'''Weak oppose''' in the sense that I don't think ''all'' LLM discussion text should be deleted. There are at least a few ESL users who use LLMs for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal [[WP:PAG]]) is being much more liberal in hatting LLM comments when the prompter has failed to prevent [[WP:WALLOFTEXT]]/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. [[User talk:Mach61|Mach61]] 03:05, 5 December 2024 (UTC)
*'''Oppose''' Any comments made by any editors are their own responsibility, representing the opinions they chose to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 05:29, 5 December 2024 (UTC)
** Or, as ChatGPT puts it,
{{collapse top|width=90%|title=Why banning LLM usage in comments would be detrimental, a ChatGPT treatise}}
{{tq|Banning the use of large language models (LLMs) to assist in writing comments on Wikipedia would be a step backward in fostering inclusivity, efficiency, and accessibility within the platform. Here are several key reasons why such a ban would be counterproductive:}}


{{tq|'''1. Improving Accessibility and Participation:''' Wikipedia is a global platform, and its contributors come from diverse backgrounds, including those who might not be native English speakers or may struggle with writing clear, concise comments. LLMs can help these contributors express themselves more effectively by generating text that is coherent, well-structured, and grammatically correct. Banning LLMs would disproportionately impact these users, potentially discouraging them from participating in the platform.}}


{{tq|'''2. Efficiency and Time-Saving:''' Wikipedia editors often engage in lengthy discussions about content, and composing thoughtful comments can be time-consuming. LLMs can assist by quickly drafting responses, summarizing arguments, or offering suggestions for improvement. This helps contributors focus on more substantive aspects of their contributions, such as verifying facts or improving content, rather than getting bogged down by the mechanics of writing.}}


{{tq|'''3. Support for Editors with Disabilities:''' Many editors with cognitive or motor impairments may find it difficult to express their thoughts clearly in writing. For these individuals, LLMs offer a valuable tool to communicate their ideas effectively. A ban on using LLMs could create additional barriers for these users, reducing their ability to participate fully in Wikipedia's collaborative environment.}}


{{tq|'''4. Quality and Consistency:''' Large language models can help maintain a consistent tone and clarity in Wikipedia discussions. By providing suggestions that align with Wikipedia's neutral point of view and formal writing standards, LLMs can improve the overall quality of comments. This consistency is especially important in discussions that involve sensitive topics, where clear and neutral language is critical for maintaining Wikipedia's credibility and adherence to its editorial guidelines.}}


{{tq|'''5. Encouraging Constructive Dialogue:''' LLMs can assist in crafting more polite, respectful, and balanced comments, reducing the likelihood of inflammatory or aggressive language. This is particularly important on a platform like Wikipedia, where discussions can sometimes devolve into heated exchanges. Rather than banning LLMs, encouraging their use could promote healthier, more constructive conversations.}}


{{tq|'''6. Transparency and Accountability:''' If concerns exist about misuse (e.g., generating spammy or low-quality comments), these can be addressed through guidelines and oversight mechanisms. Wikipedia already has systems in place for identifying and mitigating disruptive behavior, and LLM-generated comments could be subject to the same scrutiny. A ban is an overly simplistic solution to what could be a more nuanced issue, one that could be addressed through better regulation and community involvement.}}


{{tq|In conclusion, banning the use of large language models on Wikipedia would deprive the platform of an important tool for improving accessibility, efficiency, and the quality of discourse. Instead of a ban, the focus should be on establishing guidelines for their responsible use, ensuring that they enhance, rather than detract from, Wikipedia's core principles of collaboration, neutrality, and inclusivity.}}
{{collapse bottom}}
::I'm honestly a bit impressed with the little guy. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 05:39, 5 December 2024 (UTC)
:::It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people, in my opinion. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 19:44, 6 December 2024 (UTC)
*'''Support''' those who are opposing have clearly never had to deal with trolls who endlessly [[WP:SEALION]]. If I wanted to have a discussion with a chatbot, I'd go and find one. [[User:AirshipJungleman29|&#126;~ AirshipJungleman29]] ([[User talk:AirshipJungleman29|talk]]) 13:14, 5 December 2024 (UTC)
*:What's wrong with just banning and hatting the troll? [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 13:49, 5 December 2024 (UTC)
*:Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 15:22, 5 December 2024 (UTC)
*'''Oppose''' per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. [[User:Frostly|Frostly]] ([[User talk:Frostly|talk]]) 17:30, 5 December 2024 (UTC)
*'''Support''': the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High GPTZero scores are not enough. The principle should go into policy but under a stonking great caveat - [[WP:AGF]] takes precedence and a dim view will be taken of generative-AI inquisitors. [[User:Arcticocean|<span style="font-family:Tahoma;color:#5A4FCF;">arctic'''ocean'''&nbsp;■</span>]] 17:37, 5 December 2024 (UTC)
*'''Support''' If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that ''LLMs are inherently unethical technology'' and, consequently, ''people who rely on them should be made to feel bad.'' ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should ''stop doing that'' because it actually gets in the way of clear communication. {{pb}} I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need [[WP:BLP]], for example, because adhering to [[WP:V]], [[WP:NPOV]], and [[WP:NOR]] ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. [[User:XOR&#39;easter|XOR&#39;easter]] ([[User talk:XOR&#39;easter|talk]]) 18:27, 5 December 2024 (UTC)
*'''Strong support''' - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the ''thinking''. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption.{{pb
}} Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is just merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop.{{pb
}} However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. <small>Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop</small> [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 19:15, 5 December 2024 (UTC)
*:I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should ''really'' know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 21:54, 5 December 2024 (UTC)
*::I also concur. "I used the [https://www.theverge.com/2024/12/5/24313222/chatgpt-pardon-biden-bush-esquire machine for generating endless quantities of misleading text] to generate more text" is not a good joke. [[User:XOR&#39;easter|XOR&#39;easter]] ([[User talk:XOR&#39;easter|talk]]) 22:46, 5 December 2024 (UTC)
*'''Strong support''' if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing ''English Wikipedia?”'' That’s like a person who doesn’t understand basic physics trying to explain the [[second law of thermodynamics]] using a chatbot. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 21:32, 5 December 2024 (UTC)
*:I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action.
*:So using an AI tool in a WP discussion is not ''cheating'' (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. <span style="font-family:cursive">[[User:Ypn^2|<span style="color:green">''ypn''</span>]][[User talk:ypn^2|<span style="color:blue;font-size:90%;vertical-align:12%">^</span><span style="color:purple;vertical-align:45%;font-size:75%">2</span>]]</span> 22:36, 5 December 2024 (UTC)
*::Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people [[Wikipedia:Don't bludgeon the process|trying to gain a material edge in an argument]]. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 08:08, 6 December 2024 (UTC)
*:::Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 10:39, 6 December 2024 (UTC)
* '''Support'''; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). [[User:&#45;sche|&#45;sche]] ([[User talk:&#45;sche|talk]]) 22:10, 5 December 2024 (UTC)
*:You don't need a new policy that does nothing but duplicate a subset of existing policy. At ''most'' what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:37, 5 December 2024 (UTC)
*'''Support''' hallucinations are real. We should be taking a harder line against LLM generated participation. I don't think everyone who is doing it knows that they need to stop. '''[[User:AndreJustAndre|Andre]]'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">[[User_talk:AndreJustAndre|🚐]]</span> 23:47, 5 December 2024 (UTC)
*'''Comment''' - [[Special:Permalink/1261430035#Misleading_statement_about_ADL_history|Here is something]] that I imagine we will see more often. I wonder where it fits into this discussion. A user employs [[Perplexity AI|perplexity]]'s [[Retrieval-augmented generation|RAG based system]], search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 03:13, 6 December 2024 (UTC)
*'''Support''' per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. [[User:The Kip|<span style="color:#333f42;">'''The'''</span>]] [[User talk:The Kip|<span style="color:#b4975a;">'''Kip'''</span>]] <span style="color:#C8102E;"><small><sup>([[Special:Contributions/The Kip|contribs]])</sup></small></span> 03:53, 6 December 2024 (UTC)
*:I also find it pretty funny that ChatGPT ''itself'' said it shouldn't be used, as per the premise posted above by EEng. [[User:The Kip|<span style="color:#333f42;">'''The'''</span>]] [[User talk:The Kip|<span style="color:#b4975a;">'''Kip'''</span>]] <span style="color:#C8102E;"><small><sup>([[Special:Contributions/The Kip|contribs]])</sup></small></span> 03:58, 6 December 2024 (UTC)
*::"sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - [https://arxiv.org/pdf/2310.13548 Towards Understanding Sycophancy in Language Models]. They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 04:26, 6 December 2024 (UTC)
* (Is this where I respond? If not, please move.) '''LLM-generated prose should be discounted.''' Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what [[WP:CIR]] means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to ''explain'' to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions. 
If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Wikipedia discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway [[Murphy's Law]] covers this. [[User:Yngvadottir|Yngvadottir]] ([[User talk:Yngvadottir|talk]])
*I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. &ndash;&#8239;[[User:Joe Roe|Joe]]&nbsp;<small>([[User talk:Joe Roe|talk]])</small> 08:15, 6 December 2024 (UTC)
*'''Comment''' I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. [[User:Abminor|A<sup>♭</sup>m]] <sup>([[User talk:Abminor|Ring!]])</sup> <sub>([[Special:Contributions/Abminor|Notes]])</sub> 11:58, 6 December 2024 (UTC)
*:How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 12:47, 6 December 2024 (UTC)
*::It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 13:17, 6 December 2024 (UTC)
*:::It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 13:30, 6 December 2024 (UTC)
*::::Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 14:38, 6 December 2024 (UTC)
*:::::I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{t|LLM}} but self-imposed, and not tolerate unmarked usage. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 18:21, 6 December 2024 (UTC)
*::::::They will have to work at some point (e.g. [https://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm][https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/757583/EPRS_BRI(2023)757583_EN.pdf]). [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 06:27, 7 December 2024 (UTC)
*:Good news! {{u|Queen of Hearts}} is already working on that in {{edf|1325}}. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 16:12, 6 December 2024 (UTC)
*::See also [[WP:WikiProject AI Cleanup]]. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 17:32, 6 December 2024 (UTC)
*'''Comment''' As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning [[WP:LLM]], which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. [[User:Preimage|Preimage]] ([[User talk:Preimage|talk]]) 15:53, 6 December 2024 (UTC)
*:Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 17:34, 6 December 2024 (UTC)
*:{{small|Those pesky [[List of fictional pinnipeds|sealion Chatbots]] are just the worst!}} [[User:Martinevans123|Martinevans123]] ([[User talk:Martinevans123|talk]]) 18:41, 6 December 2024 (UTC)
*:Some translation tools have LLM ''assistance'', but the whole point of ''generative'' models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 19:01, 6 December 2024 (UTC)
*::That ''might'' be what the proposal ''intends'' to cover, but it is not what the proposal ''actually'' covers. The proposal covers ''all'' comments that have been generated by LLMs and/or AI, without qualification. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 01:05, 7 December 2024 (UTC)
*:::70+% here understand the intention matches the language: ''generated by LLMs etc'' means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually ''create'' meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is ''not'' true for text ''generation''. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 04:01, 7 December 2024 (UTC)
*:::::Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of people reading a policy indicates the policy is badly worded. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 08:34, 7 December 2024 (UTC)
*:::::I think @[[User:Bugghost|Bugghost]] has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 22:54, 7 December 2024 (UTC)
*::::There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person nor the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 10:46, 7 December 2024 (UTC)
*:::::Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, ''have'' to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to ''not'' formally proscribe wholesale comment generation. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 22:27, 7 December 2024 (UTC)
*::::::That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 04:14, 8 December 2024 (UTC)
*'''Support''' [[Number Six (The Prisoner)|"I Am Not A ChatBot; I Am A Free Wikipedia Editor!"]] [[User:Martinevans123|Martinevans123]] ([[User talk:Martinevans123|talk]]) 18:30, 6 December 2024 (UTC)
* '''Comment''': The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of ''what''? {{--}} <span style="font-family:cursive">[[User:Ypn^2|<span style="color:green">''ypn''</span>]][[User talk:ypn^2|<span style="color:blue;font-size:90%;vertical-align:12%">^</span><span style="color:purple;vertical-align:45%;font-size:75%">2</span>]]</span> 19:32, 6 December 2024 (UTC)
*:That means that supporters support the proposal {{tq|that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner}}. Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 19:40, 6 December 2024 (UTC)
*:Doing something? At least the stage could be set for a follow on discussion. [[User:Selfstudier|Selfstudier]] ([[User talk:Selfstudier|talk]]) 19:40, 6 December 2024 (UTC)
*:More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:58, 7 December 2024 (UTC)
*::Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer Wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 09:00, 7 December 2024 (UTC)
*:::If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 09:05, 7 December 2024 (UTC)
*::::Ok, here's a list of the main opposition reasonings, with individual responses.
*::::'''What about translations?''' - Translations are not up for debate here, the topic here is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposal should discount translated text, just oppose voters who are trying to muddy the waters.
*::::'''What about accessibility?''' - This could be a legitimate argument, but I haven't seen this substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
*::::'''Detection tools are inaccurate''' - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that.
*::::'''False positives''' - Any rule can have a potential for false positives, from [[WP:DUCK]] to close paraphrasing to NPA. We've just got to, as a community, become skilled at identifying genuine cases, just like we do for every other rule.
*::::'''LLM content should be taken at face value and see if it violates some other policy''' - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing, it is mainly used to [https://www.abc.net.au/news/science/2024-02-28/twitter-x-fighting-bot-problem-as-ai-spam-floods-the-internet/103498070 spam] and [https://www.aljazeera.com/features/longform/2024/5/22/are-you-chatting-with-an-ai-powered-superbot disrupt discussions and astroturf political opinions]. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full-throated wave of AI bots on Wikipedia discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
*::::I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 10:08, 7 December 2024 (UTC)
*:::::There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 11:00, 7 December 2024 (UTC)
*:::::The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 02:22, 10 December 2024 (UTC)
*:::::::{{tq|it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions}} - by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Wikipedia (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 09:16, 10 December 2024 (UTC)
*:::::::You can't. That's the point. This is kind of the whole idea of [[WP:AGF]]. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 20:22, 13 December 2024 (UTC)
*:::{{tqb|Those that use LLMs legitimately to tidy up their alledgedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first.}}Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As bad confidence is often built up through negative feedback, it's usually meritful or was meritful at some point for someone to seek outside help. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 23:46, 8 December 2024 (UTC)
*::::While I sympathise with that hypothetical, [[WP:THERAPY|Wikipedia isn't therapy]] and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 18:45, 10 December 2024 (UTC)
*::::::We also shouldn't do long term harm to the project just because a few users are wedded to the idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapsible/removable/ignorable at present. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 19:06, 10 December 2024 (UTC)
*:::::::{{tq|The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project}} - the same could be said for [[WP:DUCK]]. The reason why it's not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and that the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 09:07, 11 December 2024 (UTC)
*:::::I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. The potential always outweighs rational evaluation of outcomes for those with anxiety, a condition that is not behaviorally disruptive. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 02:47, 11 December 2024 (UTC)
*::::::I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 08:52, 11 December 2024 (UTC)
*:::::::Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 14:10, 11 December 2024 (UTC)
* '''Support''' strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to [[WP:CIR]] and similar (in terms of ESL communication) without risking hallucinations by AI. [[User:Soni|Soni]] ([[User talk:Soni|talk]]) 02:52, 7 December 2024 (UTC)
* '''Oppose the use of AI detection tools'''. False positive rates for AI-detection are dramatically higher for non-native English speakers. [https://themarkup.org/machine-learning/2023/08/14/ai-detection-tools-falsely-accuse-international-students-of-cheating AI detection tools had a 5.1% false positive rate for human-written text from native English speakers, but human-written text from non-native English speakers had a 61.3% false positive rate]. ~ [[User:Freedom4U|F4U]] ([[User talk:Freedom4U|talk]] • [[Special:Contributions/Freedom4U|they/it]]) 17:53, 8 December 2024 (UTC)
===Section break 5===
* '''Oppose''' - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. [[User:Alenoach|Alenoach]] ([[User talk:Alenoach|talk]]) 01:12, 9 December 2024 (UTC)
* '''Oppose''' – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). [[User:Dhtwiki|Dhtwiki]] ([[User talk:Dhtwiki|talk]]) 10:27, 9 December 2024 (UTC)
* '''Comment''' / '''Oppose as written'''. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one [real book] excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Wikipedia talk page posts, but not fine with any recommendations of LLM-detectors. [[User:SnowFire|SnowFire]] ([[User talk:SnowFire|talk]]) 20:00, 9 December 2024 (UTC)
*:@[[User:SnowFire|SnowFire]], I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 01:47, 11 December 2024 (UTC)
*:::I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was ''that'' minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and ''forbid'' using alleged "AI detectors" instead. [[User:SnowFire|SnowFire]] ([[User talk:SnowFire|talk]]) 03:49, 11 December 2024 (UTC)
* '''Support''' collapsing AI (specifically, [[Large language model]]) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a [[Large language model]]" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Wikipedia editors; we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Wikipedia, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into besides opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. [[User:MolecularPilot|<span style="color: #0369a1; font-family:monospace">MolecularPilot</span>]] <sup>[[User talk:MolecularPilot|🧪️]][[Special:Contributions/MolecularPilot|✈️]]</sup> 05:10, 10 December 2024 (UTC)
*:Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). [[User:MolecularPilot|<span style="color: #0369a1; font-family:monospace">MolecularPilot</span>]] <sup>[[User talk:MolecularPilot|🧪️]][[Special:Contributions/MolecularPilot|✈️]]</sup> 05:17, 10 December 2024 (UTC)
*:Also, with regards to oppose comments that discussions are not a vote so closes will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing; they're borderline [[WP:BLUDGEON]]y. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. [[User:MolecularPilot|<span style="color: #0369a1; font-family:monospace">MolecularPilot</span>]] <sup>[[User talk:MolecularPilot|🧪️]][[Special:Contributions/MolecularPilot|✈️]]</sup> 06:38, 10 December 2024 (UTC)
* '''Support''' (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the ''context'', not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. —&nbsp;<small>&nbsp;[[user:Hellknowz|<span style="color: #B00;">HELL</span>KNOWZ]]&nbsp;<b>∣</b>&nbsp;[[User talk:Hellknowz|TALK]]</small> 15:41, 10 December 2024 (UTC)
*'''Strong Support'''. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. [[User:Useight|Useight]] ([[User talk:Useight|talk]]) 18:06, 10 December 2024 (UTC)
*:Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positive or false negatives inherent in this proposal. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 18:25, 10 December 2024 (UTC)
*::Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have [[WP:NOTHERE]] for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. [[User:Useight|Useight]] ([[User talk:Useight|talk]]) 21:54, 11 December 2024 (UTC)
*:::There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:03, 11 December 2024 (UTC)
*::::To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of [[WP:NOTHERE]], and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. [[User:Useight|Useight]] ([[User talk:Useight|talk]]) 23:26, 11 December 2024 (UTC)
*:::::You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:42, 12 December 2024 (UTC)
*:::::I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 01:43, 12 December 2024 (UTC)
*::::::No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. [[User:Useight|Useight]] ([[User talk:Useight|talk]]) 15:59, 12 December 2024 (UTC)
*:::::::It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like [https://doi.org/10.48550/arXiv.2409.13740 this] demonstrate that there will likely be cases where machines outperform humans in specific Wikipedia domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. [[User:Sean.hoyland|Sean.hoyland]] ([[User talk:Sean.hoyland|talk]]) 16:25, 12 December 2024 (UTC)
*:::::::But your vote and the proposal above make no distinction there. Which is the main issue. Also, not to be pedantic, but every prompt to an LLM is written by a human looking to convey a message. Every time someone hits publish on something here it is that person confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 18:53, 12 December 2024 (UTC)
*::::::::The proposal specifies content ''generated'' by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 01:52, 13 December 2024 (UTC)
* '''Yes''' in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that needs to be used appropriately on Wikipedia. [[User:Awesome Aasim|Awesome]] [[User_talk:Awesome Aasim|Aasim]] 19:55, 10 December 2024 (UTC)
* '''Support'''. Using LLM to reply to editors is lazy and disrespectful of fellow editor's time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines. {{pb}} By the way. {{tq|gptzero and other such tools are very good at detecting this}}. I don't think this is correct at all. I believe the false positive for AI detectors is quite high. High enough that I would recommend not using AI detectors. –[[User:Novem Linguae|<span style="color:blue">'''Novem Linguae'''</span>]] <small>([[User talk:Novem Linguae|talk]])</small> 03:23, 11 December 2024 (UTC)
*'''Question''' @[[User:Just Step Sideways|Just Step Sideways]]: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 14:10, 11 December 2024 (UTC)
* '''Support'''. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. [[User:User1042|<b style="background:blue;color:white;border-radius:12px; padding:3px">User1042</b>]][[User talk:User1042|<b style="background:blue;color:white;border-radius:12px; padding:3px">💬</b>]][[Special:Contributions/User1042|<b style="background:blue;color:white;border-radius:12px; padding:3px">✒️</b>]] 14:16, 11 December 2024 (UTC)
*:This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 14:26, 11 December 2024 (UTC)
*::Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 19:20, 11 December 2024 (UTC)
*::I recognize that AI-paraphrased or -edited text is not problematic in the same ways as text generated outright by an AI. I only meant to address the core issue at stake: content whose first draft was written by an AI system. [[User:User1042|<b style="background:blue;color:white;border-radius:12px; padding:3px">User1042</b>]][[User talk:User1042|<b style="background:blue;color:white;border-radius:12px; padding:3px">💬</b>]][[Special:Contributions/User1042|<b style="background:blue;color:white;border-radius:12px; padding:3px">✒️</b>]] 22:16, 17 December 2024 (UTC)
*'''Oppose''' {{re|Just Step Sideways}} The nomination's 2nd para run through https://www.zerogpt.com/ gives "11.39% AI GPT*":{{pb}}{{tqb|I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. <mark>If you can't make a coherent argument on your own, you are not competent to be participating in the discussion.</mark> I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner}}{{pb}}The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ , wherewith how will we deal? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? [[Special:Contributions/172.97.141.219|172.97.141.219]] ([[User talk:172.97.141.219|talk]]) 17:41, 11 December 2024 (UTC)
* '''Support''' with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). [[User:Horse Eye&#39;s Back|Horse Eye&#39;s Back]] ([[User talk:Horse Eye&#39;s Back|talk]]) 19:24, 11 December 2024 (UTC)
*'''Strong support, limited implementation'''. {{tq|1=Wikipedia is written by volunteer editors}}, says our front page. This is who ''we'' are, and ''our'' writing is what Wikipedia is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. [[User:Retswerb|Retswerb]] ([[User talk:Retswerb|talk]]) 04:11, 12 December 2024 (UTC)
* '''Support in concept, questions over implementation''' — I concur with {{U|Dronebogus}} that users who rely on LLMs should not edit English Wikipedia. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. <span style="font-family: monospace;">[[User talk:ElijahPepe|elijahpepe@wikipedia]] (he/him)</span> 22:39, 12 December 2024 (UTC)
*:Many, possibly most, online translators use artificial intelligence based on LLMs these days. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:46, 12 December 2024 (UTC)
*::There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. <span style="font-family: monospace;">[[User talk:ElijahPepe|elijahpepe@wikipedia]] (he/him)</span> 22:59, 12 December 2024 (UTC)
*:::Neither your comment nor the original proposal make any such distinction. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:34, 12 December 2024 (UTC)
*::::Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I ''encourage'' using it (once again, if you aren’t competent in English please don’t edit here) [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 07:34, 13 December 2024 (UTC)
*:LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only the ESL but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 00:05, 13 December 2024 (UTC)
*::Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "[https://cloud.google.com/translate/docs/advanced/adaptive-translation adaptive translation]" API, but it does not incorporate LLMs into ''translating your everyday text input'', which still relies on NMTs. And even for the API features, the core constraint of ''matching'' input rather than ''generating content'' is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the ''generative'' part that is key to this proposal is not present. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 01:20, 13 December 2024 (UTC)
*:::{{tq|PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI.[1]}} If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching [[Ship of Theseus]] levels of recombination, to which my answer is it is the same ship.{{tqb|obviously it would be very bad for a translation tool to insert material not found in the original text!}}[https://www.independent.co.uk/tech/google-translate-ai-end-world-jesus-doomsday-a8459676.html That happens!] [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 01:29, 13 December 2024 (UTC)
*:::::PaLM2 is not used in the consumer app (Google Translate), it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 01:34, 13 December 2024 (UTC)
*:::::Well, is the NMT really that different enough to not be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 02:01, 13 December 2024 (UTC)
*::::::Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it a '''''L'''''LM, because the param count is 380M not 1.8T.
*::::::{{tq|the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be}} No, that def would fit ancient [[Recurrent_neural_network|RNN]] tech too. [[Special:Contributions/172.97.141.219|172.97.141.219]] ([[User talk:172.97.141.219|talk]]) 17:50, 13 December 2024 (UTC)
*:::::::Even if you don’t consider it L, I do, and many sources cited by the article do. Since we’ll have such contesting during enforcement, it’s better to find a way that precludes such controversy. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 20:44, 13 December 2024 (UTC)
*::::::::NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, ''not'' to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools what they'll be used for is still ''not'' generative when it comes to translation output. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 22:29, 13 December 2024 (UTC)
*:::::::::How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 23:30, 13 December 2024 (UTC)
*::::::::::The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 23:56, 13 December 2024 (UTC)
*:{{outdent|9}} Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 00:33, 14 December 2024 (UTC)
*::I think there is consensus that GPTZero is not usable, even for obvious cases. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 00:55, 14 December 2024 (UTC)
*:::Yes, but being as far as 100% means people will also probably think the rewrite ChatGPT-generated. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 01:18, 14 December 2024 (UTC)
*::::Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 05:26, 14 December 2024 (UTC)
*:::::My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 14:18, 14 December 2024 (UTC)
*::::::My guess is that the copyediting went beyond what most people would actually call "copyediting". [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 18:04, 23 December 2024 (UTC)
*:::::::It changed only 14 words across two paragraphs and still retained the same meaning in a way that I would describe it as copyediting. Such levels of change are what those lacking confidence in tone would probably seek anyways. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 00:15, 24 December 2024 (UTC)
* On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. <span class="nowrap">[[User:WindTempos|WindTempos]] <sub>[[Non-binary gender|they]]</sub> <sup>([[User talk:WindTempos|talk]] • [[Special:Contributions/WindTempos|contribs]])</sup></span> 23:21, 12 December 2024 (UTC)
*'''Support in principle''', but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides [[WP:ILIKEIT]] and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. <span style="font-family:Papyrus, Courier New">[[User:The Wordsmith|'''The Wordsmith''']]</span><sup><span style="font-family:Papyrus"><small>''[[User talk:The Wordsmith|Talk to me]]''</small></span></sup> 05:17, 13 December 2024 (UTC)
*'''Oppose'''. For creating unnecessary drama. First off, the "detector" of the AI bot is not reliable, or at least the reliability of the tool itself is still questionable. If the tool to detect LLM itself is unreliable, how can one reliably point out which one is LLM and which one is not? We got multiple tools that claimed to be able to detect LLM as well. Which one should we trust? Should we be elevating one tool over the others? Have there been any research that showed that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLM as a virus that will somehow take over the Internet or something. Some editors use LLM to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per [[WP:DUCK]] is good, but totally banning them is plain wrong. [[User:SunDawn|<span style="background-color:black; color:orange;">&maltese; SunDawn &maltese;</span>]] [[User talk:SunDawn|<span style="color:blue;"><span style="font-size:85%;">(contact)</span></span>]] 22:56, 15 December 2024 (UTC)
*:@[[User:SunDawn|SunDawn]], the proposal is to permit editors to collapse/strike ''obvious LLM text'', not to "ban LLM totally". If LLM use is imperceptible, like for tweaking grammar, it's not going to be affected. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 20:17, 19 December 2024 (UTC)
*'''Support''' with some kind of caveat about not relying on faulty tools or presuming that something is LLM without evidence or admission, based on the following reasons:
*# We have stricter rules around semi-automated editing (rollback, AutoWikiBrowser, etc.) and even stricter rules around fully automated bot editing. These cleanup edits are widely accepted as positive, but there is still the concern about an overwhelming amount of bad edits to wade through and/or fix. A form of that concern is relevant here. Someone could reply to every post in this discussion in just a minute or so without ever reading anything. That's inherently disruptive.
*# Nobody who is voting "oppose" is using an LLM to cast that vote. The LLM comments have been left by those supporting to make a point about how problematic they are for discussions like this. I think this reflects, even among oppose voters, a developing community consensus that LLM comments will be disregarded.
*# If the rule in practice is to disregard LLM comments, not writing that rule down does not stop it from being the rule, consensus, or a community norm. It just makes the rule less obvious and less clear.
*# It's disrespectful for an editor to ask someone to spend their time reading a comment if they couldn't be bothered to spend any time writing it, and therefore a violation of the policy [[Wikipedia:Civility]], "{{tq|treat your fellow editors as respected colleagues with whom you are working on an important project.}}"
* Also, I don't read the proposal as a ban on machine translation in any way. [[User:Rjjiii|<span style="font-variant:small-caps;">Rjj<sup>iii</sup></span>]] ([[User talk:Rjjiii|talk]]) 00:01, 18 December 2024 (UTC)
*:@[[User:Rjjiii|Rjjiii]], above @[[User:Dilettante|Dilettante]] said their !vote was created by LLM. [[User:JoelleJay|JoelleJay]] ([[User talk:JoelleJay|talk]]) 20:14, 19 December 2024 (UTC)
* I am '''strongly opposed''' to banning or ignoring LLM-made talk page comments ''just'' because they are LLM-made. I'm not a big fan of LLMs at all; they are actually useful only for some certain things, very few of which are directly relevant to contributing to Wikipedia in English or in any other language. However, some of those things ''are'' useful for this, at least for some humans, and I don't want to see these humans being kicked out of the English Wikipedia. I already witnessed several cases in which people whose first language is not English tried writing talk page responses in the English Wikipedia, used an LLM to improve their writing style, and got their responses ignored ''only'' because they used an LLM. In all those cases, I had strong reasons to be certain that they were real humans, that they meant what they wrote, and that they did it all in good faith. Please don't say that anyone who wants to contribute to the English Wikipedia should, in the first place, know English well enough to write a coherent talk page comment without LLM assistance; occasionally, I kind of wish that it was like that myself, but then I recall that the world is more complicated and interesting than that. Uses of LLMs that help the English Wikipedia be more inclusive for good-faith people are good. Of course, defining what good faith means is complicated, but using an LLM is not, ''by itself'', a sign of bad faith. --[[User:Amire80|Amir E. Aharoni]] ([[User talk:Amire80|talk]]) 04:52, 19 December 2024 (UTC)
*:Those concerned about their English should use translation software rather than an llm. Both might alter the meaning to some extent, but only one will make things up. (It's also not a sure assumption that llm text is coherent talkpage text.) [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 07:44, 19 December 2024 (UTC)
*::@[[User:Chipmunkdavis|CMD]] The dividing line between translation software and LLM is already blurry and will soon disappear. It's also rare that translation software results in coherent talkpage text, ''unless'' it's relying on some (primitive) form of LLM. So if we're going to outlaw LLMs, we would need to outlaw any form of translation software, and possibly any text-to-speech software as well. <span style="font-family:cursive">[[User:Ypn^2|<span style="color:green">''ypn''</span>]][[User talk:ypn^2|<span style="color:blue;font-size:90%;vertical-align:12%">^</span><span style="color:purple;vertical-align:45%;font-size:75%">2</span>]]</span> 23:41, 19 December 2024 (UTC)
*:::The distinctions have already been covered above, and no we would not have to. There is an obvious difference between software intended to translate and software intended to generate novel text, and users are likely to continue to treat those differently. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 02:49, 20 December 2024 (UTC)
*'''Strong support'''. LLM-generated content has no place anywhere on the encyclopedia. [[User:Stifle|Stifle]] ([[User talk:Stifle|talk]]) 10:27, 19 December 2024 (UTC)
*'''Strong oppose''' to the proposal as written. Wikipedia already suffers from being [[AOL|stuck]] in a [[Windows XP|2001]] [[MySpace|mindset]] and a refusal to move with the technological times. Anyone who remembers most Wikipedians' visceral reaction to [[WP:FLOW|FLOW]] and [[WP:VisualEditor|VisualEditor]] when they were first introduced will observe a striking similarity. Yes, those projects had serious problems, as do LLM-generated comments. But AI is the future, and this attitude of "[[Move fast and break things (motto)|Move slowly to avoid changing things]]" will ultimately lead Wikipedia the way of ''Encyclopædia Britannica''. Our discussion needs to be how best to change, not how to avoid change. <span style="font-family:cursive">[[User:Ypn^2|<span style="color:green">''ypn''</span>]][[User talk:ypn^2|<span style="color:blue;font-size:90%;vertical-align:12%">^</span><span style="color:purple;vertical-align:45%;font-size:75%">2</span>]]</span> 23:54, 19 December 2024 (UTC)
*:{{smalldiv|1=The main objection to VE and a major objection to FLOW was the developers' insistence on transforming Wikitext to HTML for editing and then transforming that back to Wikitext. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 01:31, 20 December 2024 (UTC)}}
*::True. Then, as now, there were many valid objections. But IIRC, there was limited discussion of "Let's figure out a better way to improve", and lots of "Everything is fine; don't change anything, ever." That attitude concerns me. <span style="font-family:cursive">[[User:Ypn^2|<span style="color:green">''ypn''</span>]][[User talk:ypn^2|<span style="color:blue;font-size:90%;vertical-align:12%">^</span><span style="color:purple;vertical-align:45%;font-size:75%">2</span>]]</span> 01:52, 20 December 2024 (UTC)
*'''Support'''. I'm not even slightly swayed by these "it'll be too hard to figure out" and "mistakes could be made" and "we can't be 100% certain" sorts of arguments. That's true of {{em|everything}} around here, and it's why we have an admins-must-earn-a-boatload-of-community-trust system, and a system of review/appeal of decisions they (or of course non-admin closers) make, and a consensus-based decision-making system more broadly. {{U|JoelleJay}} has it exactly right: {{tq|having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments.}} And as pointed out by some others, the "it'll hurt non-native-English speakers" nonsense is, well, nonsense; translation is a different and unrelated process (though LLMs can perform it to some extent), of remapping one's {{em|own}} material onto another language.<!-- --><p>I'm also not in any way convinced by the "people poor at writing and other cognitive tasks need the LLM to help them here" angle, because [[WP:COMPETENCE]] is required. This is work (albeit volunteer work), it is [[WP:NOT]] a game, a social-media playground, a get-my-ideas-out-there soapbox, or a place to learn how to interact e-socially or pick up remedial writing skills, nor a venue for practicing one's argument techniques.
It's an encyclopedia, being built by people who – to be productive contributors instead of a draining burden on the entire community – {{em|must}} have: solid reasoning habits, great judgement (especially in assessing reliability of claims and the sources making them), excellent writing skills of a highly particularized sort, a high level of fluency in this specific language (in multiple registers), and a human-judgment ability to understand our thick web of policies, guidelines, procedures, and often unwritten norms, and how they all interact, in a way that may vary greatly by context. None of these is optional. An LLM cannot do any of them adequately (not even write well; their material sticks out like a sore thumb, and after a while you can even tell which LLM produced the material by its habitual but distinctive crappy approach to simulating human thought and language).</p><!-- --><p>In short, if you {{em|need}} an LLM to give what you think is meaningful input into a decision-making process on Wikipedia (much less to generate mainspace content for the public), then you {{em|need}} to go find something else to do, something that fits your skills and abilities. Saying this so plainly will probably upset someone, but so it goes. I have a rep for "not suffering fools lightly" and "being annoying but correct"; I can live with that if it gets the right decisions made and the work advanced. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 05:49, 22 December 2024 (UTC)</p>
*:The problem with all that is that we ''already'' have a policy that allows the hatting or removal of comments that are actually problematic because of their content (which are the only ones that we should be removing) without regard for whether it was or was not written by LLM. Everything that actually should be removed can be removed already. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:39, 22 December 2024 (UTC)
*:People with reading skills, judgement, and reasoning habits good enough to find problems in existing articles don't necessarily have great interpersonal writing/communication skills, or the confidence to use them. Meanwhile, for all an LLM is bad at, it is very good at diluting everything you say until it becomes dry, dispassionate, and thus inoffensive. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 15:26, 22 December 2024 (UTC)
*'''Support'''. Sure I have questions about detection, but I don't think it means we shouldn't have a policy that explicitly states that it should not be used (and can be ignored/hatted if it is). Judging solely based on content (and no wp:bludgeoning, etc.) is unsustainable IMO. It would mean taking every wall of text seriously until it's clear that the ''content'' is unhelpful, and LLMs are very good at churning out plausible-sounding bullshit. It wastes everyone's time. If cognitive impairments or ESL issues make it hard to contribute, try voice-to-text, old-school translation software, or some other aid. LLMs aren't really ''you''.--[[User:MattMauler|MattMauler]] ([[User talk:MattMauler|talk]]) 11:27, 23 December 2024 (UTC)
* '''Comment'''. While I agree with the sentiment of the request, I am at a loss to see how we can identify LLM-generated comments in a consistent manner that can scale. Yes, it might be easier to identify egregious copy-pastes of walls of text, but anything other than that might be hard to detect. Our options are:
:# Robust tooling to detect LLM-generated text, with acceptably low levels of false positives. Somewhat similar to what Earwig does for copyvios. But someone needs to build it and host it on WMTools or at a similar location.
:# Self-certification by editors. Every edit/publish dialog box should have a checkbox for "Is this text LLM-generated?" with y/n optionality.
:# Editors playing a vigilante role in reading the text and making a personal call on other editors' text. Obviously this is least preferred.
: These are my starting views. [[User:Ktin|Ktin]] ([[User talk:Ktin|talk]]) 00:37, 24 December 2024 (UTC)
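At its crudest, option 1 above is just a phrase-frequency heuristic. The sketch below is a toy illustration only, with a hypothetical stock-phrase list rather than any real detector such as GPTZero or Earwig; it mainly shows how far a naive approach is from "acceptably low levels of false positives":

```python
# Toy sketch only: flags comments containing stock LLM phrases.
# The phrase list is hypothetical and tiny; a real detector would need
# statistical features (e.g. perplexity) and would still misfire.
STOCK_PHRASES = [
    "as an ai language model",
    "i hope this helps",
    "it's important to note",
    "in conclusion,",
    "certainly!",
]

def llm_phrase_score(comment: str) -> float:
    """Return the fraction of known stock phrases found in the comment."""
    text = comment.lower()
    hits = sum(1 for phrase in STOCK_PHRASES if phrase in text)
    return hits / len(STOCK_PHRASES)

def looks_llm_generated(comment: str, threshold: float = 0.4) -> bool:
    """Crude yes/no flag; trivially evaded by lightly editing the output."""
    return llm_phrase_score(comment) >= threshold
```

Anything this simple is trivially gamed, which is roughly the point: tooling with Earwig-like reliability would be a substantial engineering project, not a word list.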
*'''Strong oppose''' as it's impossible to enforce. Also LLMs are a valid and useful [[accessibility]] tool. – [[user talk:Anne drew|<span style="color:#074">Anne&nbsp;drew</span>]] 05:18, 24 December 2024 (UTC)
*'''Bonus suggestion!''': I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI-generated (e.g., GPTZero isn't perfect), and many give valid arguments for using LLMs (e.g., [[English as a Second Language|ESL]]). If an argument is suspected to be written by an LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to [[WP:AN|the administrators' noticeboard]] for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol [[User:guninvalid|guninvalid]] ([[User_Talk:guninvalid|talk]]) 06:12, 25 December 2024 (UTC)


=== Alternate proposal ===
I would like to ask what is the standard for defining that there is global consensus. If the [[Wikipedia:Popular_pages|top 100 articles]] in a certain category all are written in a certain way, is this considered sufficient for global consensus?
{{discussion top|result=Redundant proposal, confusingly worded, with no support, and not even any further discussion interest in 10 days. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 05:23, 22 December 2024 (UTC)}}
Whereas many editors, including me, have cited problems with accuracy in regards to existing tools such as ZeroGPT, I propose that '''remarks that are blatently generated by a LLM or similar automated system should be discounted/removed/collapsed/hidden'''. [[User:ThatIPEditor|ThatIPEditor]] <sup>[[User talk:ThatIPEditor|They]] / [[Special:contribs/ThatIPEditor|Them]]</sup> 10:00, 10 December 2024 (UTC)


:'''Oppose''' as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that ''this is already policy'' so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which ''will'' happen, regardless of whether tools are used). [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:19, 10 December 2024 (UTC)
If a 100 articles are not enough, what is the threshold? Is it proportional to the number articles in that category?
:I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 18:32, 10 December 2024 (UTC)
:What does '''blatantly generated''' mean? Does you mean only where the remark is signed with "I, Chatbot", or anything that ''appears'' to be LLM-style? I don't think there's much in between. <span style="font-family:cursive">[[User:Ypn^2|<span style="color:green">''ypn''</span>]][[User talk:ypn^2|<span style="color:blue;font-size:90%;vertical-align:12%">^</span><span style="color:purple;vertical-align:45%;font-size:75%">2</span>]]</span> 19:21, 10 December 2024 (UTC)
:'''Procedural close''' per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 03:00, 11 December 2024 (UTC)
{{discussion bottom}}


== Should first language be included in the infobox for historical figures? ==
Should then this warrant that all articles in that category be written in that way (unless very clearly harmful to the specific article)?


Is there a guideline concerning this? "Infobox royalty" apparently has this parameter, but I haven't found a single article that actually uses it. Many articles don't mention the subject's spoken languages at all. '''In my view, somebody's first language (L1) is just a very basic and useful piece of information''', especially for historical figures. This would be helpful in cases where the ruling elites spoke a completely different language from the rest of the country (e.g., High Medieval England or early Qing dynasty China). These things are not always obvious to readers who are unfamiliar with the topic. Including it would be a nice and easy way to demonstrate historical language shifts that otherwise might be overlooked. Perhaps it could also bring visibility to historical linguistic diversity and language groups that have since disappeared. Where there are multiple first languages, they could all be listed. And in cases where a person's first language remains unclear, it could simply be left out. [[User:Kalapulla123|Kalapulla123]] ([[User talk:Kalapulla123|talk]]) 11:53, 8 December 2024 (UTC)
[[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 10:41, 17 November 2024 (UTC)


:I don't think I agree this is a good use of infobox space:{{bulleted list|incongruences between elite spoken languages and popular spoken languages can't be shown with a single parameter (the language spoken by the oppressed would have to be included as well)|for many people this would be unverifiable (already mentioned in OP) and / or contentious (people living during a language transition) |sometimes L2 skills will be more than adequate to communicate with subject population when called for|in cases where the subject's L1 matches their polity's (i.e. most cases), the parameter would feel like unnecessary clutter|prose description seems adequate}}However, this is just my opinion, and the venue of discussion should probably be [[Wikipedia talk:WikiProject Royalty and Nobility]] or similar, rather than VPP. [[User:Folly Mox|Folly Mox]] ([[User talk:Folly Mox|talk]]) 12:02, 9 December 2024 (UTC)
:[[WP:CONLEVEL]] was already a policy, independent of that resolution. It was just being cited as a principle used in deciding that case. —[[User:Bagumba|Bagumba]] ([[User talk:Bagumba|talk]]) 16:03, 17 November 2024 (UTC)
:I think this ''might'' be sufficiently important pretty much exclusively for writers where the language they wrote in is not the "obvious" one for their nationality. [[User:Johnbod|Johnbod]] ([[User talk:Johnbod|talk]]) 12:43, 9 December 2024 (UTC)
:I believe that "global consensus" refers to [[WP:P&G|policies and guidelines]] in particular, and to generally accepted practices across the whole of the English Wikipedia. A consensus that applies to just 100 articles out of the almost 7 million article in the English Wikipedia is a local consensus. [[User talk:Donald Albury|Donald Albury]] 16:14, 17 November 2024 (UTC)
::It ''might'' also be important for politicians (and similar figures?) in countries where language is a politically-important subject, e.g. Belgium. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 16:29, 9 December 2024 (UTC)
:[[User:Milo8505|Milo8505]], you asked this question in a way that can't be answered. Consensus does not depend on categories, and Wikipedia does not deal in abstract quantities but in concrete articles. Is this about whether to have an infobox on [[Gustav Mahler]]? If so then please say so, to provide some context to your question. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 17:34, 17 November 2024 (UTC)
:This seems like a bad idea. Let's take a case where language spoken by a royal was very relevant: [[Charles V, Holy Roman Emperor]]. When he became King of Castile as a teenager, he only really spoke Flemish and didn't speak Castilian Spanish, and needless to say trusted the advisors he could actually talk with (i.e. Flemish / Dutch ones he brought with him). He also then immediately skipped out of Castile to go to proto-Germany to be elected Holy Roman Emperor. This ended up causing a rebellion ([[Revolt of the Comuneros]]) which was at least partially justified by Castilian nationalism, and partially by annoyed Castilian elites who wanted cushy government jobs. So language-of-royal was relevant. But... the Infobox is for ''the person as a whole''. Charles came back to Castile and spent a stretch of 10 years there and eventually learned rather good Castilian and largely assuaged the elite, at least. He was king of Spain for ''forty years''. So it would seem rather petty to harp on the fact his first language wasn't Castilian in the Infobox, when he certainly did speak it later and through most of his reign, even if not his first few years when he was still basically a kid. [[User:SnowFire|SnowFire]] ([[User talk:SnowFire|talk]]) 19:47, 9 December 2024 (UTC)
::@[[User:Phil Bridger|Phil Bridger]] Yes, it is about that topic. I believe that there is sufficient global consensus about the inclusion of infoboxes on biographies. I am well aware that the official policy is "no policy defined", but I see a clear trend, by looking at the most read articles, that all biographies - of musicians and non musicians alike - have an infobox, except a select few classical music composers.
:::See below on this. [[User:Johnbod|Johnbod]] ([[User talk:Johnbod|talk]]) 14:26, 11 December 2024 (UTC)
::I do not currently have the whole information regarding exactly how many of all biographies have an infobox, and that is why I was asking what is usually considered consensus.
::SnowFire's fascinating anecdote shows that this information is not appropriate for infoboxes but rather should be described in prose in the body of the article where the subtleties can be explained to the readers. [[User:Cullen328|Cullen328]] ([[User talk:Cullen328|talk]]) 19:56, 9 December 2024 (UTC)
::However, given that I'm very aware that a hundred articles out of seven million is not precisely consensus, I will attempt, when I have the time, to go through every single biography to determine an exact percentage.
:::No, it shows that it's not appropriate for that infobox, and therefore that it is not suitable for all infoboxes where it is plausibly relevant. It shows nothing about whether it is or is not appropriate for other infoboxes: the plural of anecdote is not data. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:08, 9 December 2024 (UTC)
::[[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 18:56, 17 November 2024 (UTC)
::::But it kind of is here? I picked this example as maybe one of the most obviously ''relevant'' cases. Most royals failing to speak the right language don't have this trait linked with a literal war in reliable sources! But if inclusion of this piece of information in an Infobox is still problematic in ''this'' case, how could it possibly be relevant in the 99.9% cases of lesser importance? The Infobox isn't for every single true fact. [[User:SnowFire|SnowFire]] ([[User talk:SnowFire|talk]]) 21:53, 9 December 2024 (UTC)
:::If you want to spend your time doing that then I can't stop you, but I warn you that you will be wasting your time. That is not how consensus is measured. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 19:10, 17 November 2024 (UTC)
:::::It isn't suitable for this infobox not because of a lack of importance, but because stating a single first language would be misleading. There exists the very real possibility of cases where it is both important and simple. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:02, 10 December 2024 (UTC)
::::Obviously I will not count by hand, I have some idea of how to use an automated tool to do that.
::::::Could you (or anyone else in favor of the proposal) identify 5 biographies where this information is both useful to readers and clearly backed by reliable sources? <sub>signed, </sub>[[User:Rosguill|'''''Rosguill''''']] <sup>[[User talk:Rosguill|''talk'']]</sup> 15:06, 11 December 2024 (UTC)
::::But then, ''how'' is consensus measured?
::Charles V claimed to have spoken Italian to women, French to men, Spanish to God, and German to his horse. [[User:Hawkeye7|<span style="color:#800082">Hawkeye7</span>]] [[User_talk:Hawkeye7|<span style="font-size:80%">(discuss)</span>]] 21:35, 9 December 2024 (UTC)
::::I'm under the impression that there is a group of very determined and very vocal editors that fiercely oppose infoboxes on classical composers' articles (which leads to most of them having discussions about infoboxes, citing each other as examples of articles without infobox), separate from the majority of biographies, which have an infobox.
::Sorry, this is just nonsense! Charles V was raised speaking French, which was the language of his aunt's court, although in the Dutch-speaking [[Mechelen]]. All his personal letters use French. He only began to be taught Dutch when he was 14, & may never have been much good at it (or Spanish or German). Contrary to the famous anecdote, which is rather late and dubious ("Spanish to God....German to my horse") he seems to have been a rather poor linguist, which was indeed awkward at times. [[User:Johnbod|Johnbod]] ([[User talk:Johnbod|talk]]) 00:39, 10 December 2024 (UTC)
::::I see no better way of proving (or maybe disproving) my point than this, because my earlier points of infoboxes being a great thing for [[Gustav Mahler]]'s article, and the fact that numerous non-classical musicians have infoboxes, and lengthy ones at that, seem to have fallen on deaf ears.
:::(This is a bit off-topic, but "nonsense" is too harsh. I'm familiar that he spoke "French" too, yes, although my understanding was that he did speak "Flemish", i.e. the local Dutch-inflected speech, too? And neither 1500-era French nor Dutch were exactly standardized, so I left it as "Flemish" above for simplicity. If his Dutch was worse than I thought, sure, doesn't really affect the point made, though, which was that his Castilian was non-existent at first. As far as his later understanding of Spanish, his capacity was clearly ''enough'' - at the very least I've seen sources say he made it work and it was enough to stave off further discontent from the nobility. Take it up with the authors of the sources, not me.). [[User:SnowFire|SnowFire]] ([[User talk:SnowFire|talk]]) 16:23, 10 December 2024 (UTC)
::::[[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 20:01, 17 November 2024 (UTC)
::::There's a difference between "simplicity" and just being wrong! You should try reading the sources, with which I have no issue. And his ministers were also either native Francophones, like [[Cardinal Granvelle]] and his father [[Nicolas Perrenot de Granvelle]] (both from [[Besançon]], now in eastern France), or could speak it well; the Burgundian elite had been Francophone for a long time. The backwash from all this remains a somewhat sensitive issue in Belgium, even now. And Charles V was not "King of Spain" (a title he avoided using) for 40 years at all; only after his mother died in 1555 (a year before him) did he become unarguably King of Castile. [[User:Johnbod|Johnbod]] ([[User talk:Johnbod|talk]]) 14:26, 11 December 2024 (UTC)
::::And I would like to state, for the record, that I'm not doing this out of spite, or out of a personal interest (I'm actually ''losing'' my time by arguing about this), but because I truly, wholeheartedly believe that an infobox on each and every biography, and in general, on every article where there could be one (this excludes abstract topics such as [[existencialism]]) would make Wikipedia a truly better place.
:It may not be appropriate for many articles, but it surely is for some. For example, when I told her that England had had kings whose first language was German, someone asked me the other day how many. It would be good to have a quick way of looking up the 18th century Georges to find out. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 21:20, 9 December 2024 (UTC)
::::[[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 20:43, 17 November 2024 (UTC)
::I think the problem is that people might make assumptions. I would check before saying that George I and George II spoke German as their first language and not French. Languages spoken is probably more useful than birth language, but the list might be incomplete. There is also competing information about George I, and he is an English King, so he has been better researched and documented compared to other historical figures.
:::::I would have to search the archives, but we actually held an RFC (one of the ways in which we determine GLOBAL consensus) that was focused on whether to mandate infoboxes on articles about composers… which determined that there were valid reasons not to require them (I suppose you could say that global consensus was to defer to local consensus on this specific issue). Remember [[WP:Other Stuff Exists]] is not an accepted argument here at WP. And that “standard practice” often has exceptions. [[User:Blueboar|Blueboar]] ([[User talk:Blueboar|talk]]) 22:06, 17 November 2024 (UTC)
::I agree that this is important when language is the basis of community identity, such as in Belgian. [[User:Tinynanorobots|Tinynanorobots]] ([[User talk:Tinynanorobots|talk]]) 10:38, 10 December 2024 (UTC)
::::::I understand, but that is ''not'' my sole argument. I have provided other arguments in favor, which you can read at [[Talk:Gustav_Mahler|the aforementioned talk page]] which basically boil down to:
*Ummmm… no. People I disagree with™️ use “infobox bloat” as a boogeyman in arguments about infoboxes. But this is infobox bloat. Even those celebrity/anime character things that tell you shoe size, pinky length and blood type wouldn’t include this. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 18:16, 11 December 2024 (UTC)
::::::<sub>in my opinion,</sub>
::::::#Infoboxes make standardized information more easily accessible, and
::::::#They do not harm the rest of the article, as they do not displace the lead paragraph.
::::::However, in the linked talk page, I see that opponents of infoboxes rely somewhat on the loosely established precedent/consensus that composers shouldn't have infoboxes.
::::::That is why I wanted to bring forth a new argument, using the, as I see it, very established consensus for infoboxes in biographies, and what I want to know here is whether this consensus can be proven to exist (or what is it required for this consensus to exist). [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 07:30, 18 November 2024 (UTC)
:::::::Info boxes can be accessibility issue for many readers and display what can only be described as clutter and unnecessary [[Wikipedia talk:Manual of Style/Infoboxes#Infobox file spam]]. That said there's clearly a community consensus I believe overall. <span style="font-weight:bold;color:darkblue">[[User:Moxy|Moxy]]</span>🍁 22:39, 1 December 2024 (UTC)
:::::This whole thing about "global" and "local" consensus seems to confuse everyone, and consequently folks make up whatever seems plausible to them. Let me give you a potted history and the usual claims, and perhaps that will help you understand the principle.
:::::'Way back in the day, infoboxes didn't exist. AIUI the first widely used infobox template was {{tl|taxobox}} in 2004, and the general concept appeared soon after. However, [https://en.wikipedia.org/enwiki/w/index.php?title=Template:Infobox&oldid=178933718 through the end of 2007, Template:Infobox] didn't look like what we're used to. Originally, an 'infobox template' was literally a [[Help:Table|wikitext table]] that you could copy and fill in however you wanted.<ref><small>Being able to do this in wikitext was was considered an improvement, because originally, you had to code tables in raw HTML.</small></ref>
:::::While infoboxes were being developed, the editors at [[Wikipedia:WikiProject Composers]] decided that infoboxes were [[Wikipedia:Disinfoboxes|a bad idea]] specifically for articles about [[classical composers]], so after a series of disputes and [[Wikipedia talk:WikiProject Composers/Infobox debates|discussions]], in April 2007 they wrote a note that said, basically, "BTW, the sitewide rules don't apply to the articles we [[WP:OWN]]."<ref><small>This was not as unreasonable back then as it sounds now. WikiProjects were a significant source of subject-specific advice back then, and the rule-making systems were quite informal. [[WP:PROPOSAL]] didn't exist until late 2008. Before then, most guidelines and even policies acquired their labels merely because someone decided to slap the tag on it, and if nobody objected, then that was the consensus for what to call it.</small></ref>
:::::The conflict between this group and the rest of the community eventually resulted in the 2010 [[Wikipedia talk:WikiProject Composers/Infoboxes RfC]]. The result of this years-long dispute is memorialized in the example given in what is now the [[Wikipedia:Consensus#Levels of consensus]] section of the policy: "Consensus among a limited group of editors, at one place and time, cannot override community consensus on a wider scale. For instance, unless they can convince the broader community that such action is right, participants in a [[Wikipedia:WikiProject|WikiProject]] cannot decide that some generally accepted [[Wikipedia:POLICY|policy or guideline]] does not apply to articles within its scope."
:::::Or, to be rather more pointy-headed about it: WikiProject Composers doesn't get to decide that "their" articles are exempt from [[MOS:INFOBOXUSE]].
:::::What was then a statement about the "Purpose of consensus" or, before then, one of several "Exceptions" to forming a consensus on a talk page has since been renamed ==Levels of consensus==. Also, ArbCom (and consequently part of the community) has started talking about "global" consensus. I think that has confused people about the point.
:::::"Levels" of consensus could mean the strength of the consensus ("This is just a weak consensus, so..."). It could mean something about the process used ("My CENT-listed RFC trumps your Village pump post"). It could mean whether the consensus applies to the whole site ("We formed a consensus at [[Talk:Article]] about the first sentence of [[Article]], so now I need to make 500 other articles match this one"). And it could tell us something about how likely it is that the decision matches the overall view of the community.
:::::It's supposed to be that last one. We don't want a handful of people getting together on some page and saying "Let's reject this rule. This article ''needs'' to be censored. Copyvio restrictions are inconvenient. Bold-face text helps people see the important points. And we know this POV is correct, so it should dominate." We want quite the opposite: "The community says that this is usually the best thing, so let's do this."
:::::AFAICT, the overall view of The Community™ is that we think that there should not be any Official™ Rule saying that any subset of articles should have an infobox. We're probably doing this mostly for social reasons, rather than article reasons. For example, every single article about a US President, or atomic elements, or any number of other subjects, has an infobox – but we refuse to write any rule saying they ''should'', or even that they ''usually should'', even though we know the popularity is ever-increasing. For example, at the moment, [[Georgina Sutton]] is the only biography linked on the [[Main Page]] that doesn't have an infobox.
:::::I suspect that the closest we will come to such a rule during the next few years is a note about how popular they are. It should be possible to see how many articles (overall, or in particular subsets) already use infoboxes, and to add that information to [[MOS:INFOBOXUSE]]. For now, we could add a statement that "most" articles have an infobox.
:::::
<references />
:::::[[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 22:27, 17 November 2024 (UTC)
::::::Thank you very much for your detailed response.
::::::From what you have said, given that WikiProject composers have to follow [[MOS:INFOBOXUSE]], there should be a discussion on each and every composer's talk page to determine whether an infobox is warranted.
::::::I see this as a bit of a, difficult and fruitless endeavor, as the arguments presented, for either case, are always the same, and they all usually result in stalemates (like the one about [[Gustav Mahler|Mahler]]).
::::::What I propose is to change the policy, to, at least, recommend infoboxes on certain categories, given that, as you said, they are very popular. Or at the very least, as you suggest, acknowledge the fact that they are very popular.
::::::When I have time to gather more data on the use of infoboxes, I will propose a new RfC to try to commit this change to the policy.
::::::I am very well aware that my chances of success are slim, but, I'll do what I can do.
::::::[[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 08:00, 18 November 2024 (UTC)
:::::::Well, if "they all usually result in stalemates", then that represents a change, because the last complaint I saw about this subject said that the RFCs on whether to add an infobox almost always resulted in an infobox being added. Perhaps it varies by subject, however.
:::::::Acknowledging that they're popular shouldn't require a proposal for a change. It should only require getting some decent numbers. Check the archives of [[WP:RAQ]]; they probably can't query it directly, but if there's been a request, you'll see what could be done. It might also be possible to create a hidden category for "All articles with infoboxes", automagically transcluded, to get a count on the number of infoboxes. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 06:45, 20 November 2024 (UTC)
::::::::First of all, thank you again very much for your continued interest.
::::::::The discussions around infoboxes (not RfCs, discussions on talk pages) as far as I have seen usually go something like:
::::::::- I propose adding an infobox
::::::::+ We have talked a lot about that and there are good reasons<sup>tm</sup> for which it should not be added
::::::::- But I also have good reasons<sup>tm</sup> for which it should be added.
::::::::(no comments for 4 years, then it begins again).
::::::::I thought a bit about counting links, and I realized maybe getting this data is easier than I thought, see:
::::::::For counting the number of transclusions of a given page, [https://linkcount.toolforge.org/ this tool] is very useful, and says that there are around 3.2 million infoboxes ''in total'', and 460 thousand infoboxes about people (in the (Article) namespace).
::::::::Looking in the Talk namespace, there are around two million links to [[Template:Wikiproject Biography]].
::::::::This seems to suggest that only around a quarter of all biographies have an infobox? Maybe I was wrong all along in my observation that infoboxes are very popular.
::::::::I am, however, not too sure that the two million links to [[Template:Wikiproject Biography]] in the Talk namespace actually correspond to two million unique biographies.
::::::::Maybe another way of getting this data would be better, I'll have to look at it on some other occasion that I have more time.
::::::::[[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 11:28, 20 November 2024 (UTC)
:::::::::I looked at the first 10 articles in [[:Category:Core biography articles]], and 100% had infoboxes. However, those ten articles used seven different infoboxes:
:::::::::* [[Template:Infobox person]] (is this the one you looked at?)
:::::::::* [[Template:Infobox royalty]]
:::::::::* [[Template:Infobox writer]]
:::::::::* [[Template:Infobox saint]]
:::::::::* [[Template:Infobox scientist]]
:::::::::* [[Template:Infobox philosopher]]
:::::::::* [[Template:Infobox writer]]
:::::::::[[:Category:People and person infobox templates]] lists dozens. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 23:22, 21 November 2024 (UTC)
::::::::::Yes! Yes!
::::::::::That's my point. Most{{cn}} good biographies have an infobox - except those of classical composers.
::::::::::I will look at the category you mentioned and try to count from there.
::::::::::Thank you very much! [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 16:46, 22 November 2024 (UTC)
:::::::::::The problem is, there still exist editors who strongly dislike infoboxes on most biographies -- me for one. When one writes every word of an article and then someone, who has not otherwise contributed, comes and adds an infobox it can be ... annoying. The basic use tends to highlight bits of trivial information (birth & death dates/places, nationality, spouse, children) that are not usually key to the person's notability. Even more contentious can be trying to define what a person's key contributions are, in a half-sentence. For some this is easy, and an infobox might be a good way of presenting the data, for others (including many classical composers) not so much. It can be hard enough to write a lead that presents this in a balanced fashion in a paragraph or three.
:::::::::::Are all good biographies written by groups? I'm not sure; probably the best are, but there are many many biographies of minor figures where 99.9% of the text was contributed by a single author, some of which are fairly well developed. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 05:05, 28 November 2024 (UTC)
::::::::::::I'm thankful for your contributions, but I'm sorry: you don't [[WP:OWN]] any article, and you can't dismiss someone else improving an article you wrote just because ''you'' wrote it and ''you'' don't personally agree with the contributions made.
::::::::::::That said, it may be difficult to summarize why someone is important in a phrase, but it's not impossible, and, IMO actually something that should be done, as it makes the article easier (and faster) to scan. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 09:39, 28 November 2024 (UTC)
:::::::::::::What I am obviously failing to convey is that some editors write articles, far fewer than those who contribute in other ways, and some of those dislike the "improving" addition of an infobox by another editor who makes no other edits, improving or otherwise. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 10:40, 28 November 2024 (UTC)
::::::::::::::Why is that relevant? Nobody owns an article, regardless of the way in which they contribute to Wikipedia. Just because some editors dislike something does not give them a veto over things that the majority of other editors believe do improve the article. Obviously an infobox with incorrect information is not an improvement, but that doesn't mean an infobox with correct information is not an improvement. In exactly the same way, while a paragraph with incorrect information about an aspect of the article subject is a bad addition, this does not mean that a paragraph with ''correct'' information about that same aspect is bad. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:55, 28 November 2024 (UTC)
:::::::::::::::It seems to me a great deal more like reference format and English variant. It could easily be argued that we should have standardised on US spelling and picked a mode of referencing, but we never did because it would alienate too much of the workforce. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 12:55, 28 November 2024 (UTC)
::::::::::::::::It's not even close to being like ENGVAR or reference formatting. Those are stylistic decisions where there are multiple equally valid choices that don't impact content. Infoboxes are a content decision where one choice directly benefits the readership and one choice placates the dislikes of a minority of editors. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 16:40, 28 November 2024 (UTC)
:::::::::::::::::Load up the Good Faith, {{u|Thryduulf}} :D another phrasing, less pejorative or sweeping, might be {{blue|Infoboxes are a content decision where either choice directly affects the readers' preconceptions of the topic}}. '''''[[User talk:Serial Number 54129|<span style="color: #ee6f53;">Tight faded male arse. Decadence and anarchy. A certain style.</span>]] [[Special:Contributions/Serial_Number_54129|<span style="color:#6c4a43;">Smile.</span>]]''''' 16:47, 28 November 2024 (UTC)
::::::::::::::::::It may or may not be less pejorative or sweeping, but it is also less accurate. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 18:24, 28 November 2024 (UTC)
:::::::::::::::::::We obviously genuinely disagree on the topic. But I just don't see how the usual formulation benefits readers for bios about writers, composers or the like, especially where it is difficult to encapsulate their contributions in a half sentence or single notable work. I note that biographical sources such as ''[[Oxford Dictionary of National Biography]]'' or newspaper obituaries do not generally include infoboxes, in fact I can't think of where I've seen one on a biographical article of this type outside Wikipedia. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 03:53, 29 November 2024 (UTC)
::::::::::::::::::::Infoboxes are not limited to a ''single'' notable work. There is no need to condense a person's life to a single notable work in an infobox. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 06:34, 29 November 2024 (UTC)
:::::::::::::::::::::But unless you list ''all'' their works there is a problem of original research. You need to provide appropriate sources that the works you have selected are appropriate to represent the subject. This is often very hard in practice, and even harder to demonstrate in an infobox (according to critics A,B,C but ignoring the non-mainstream views of D,E, and only partially incorporating the views of F–Z, the following are the major works...). [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 07:28, 29 November 2024 (UTC)
::::::::::::::::::::::Well, no. There are statistics for that sort of thing. ''Notable'' means ''worthy of note, distinguished, prominent'', as per the Merriam-Webster Dictionary. ''Notable'' works are not ''a collection appropriate to represent the subject as a whole'', but rather those ''worthy of note, distinguished, prominent''; in other words, ''popular'' or ''important'' (for the field in question).
::::::::::::::::::::::But the main point being, once again, that '''this problem is NOT a problem with the infobox itself'''. Citing the lead paragraph for the [[Mahler]] article:
::::::::::::::::::::::''As a composer he acted as a bridge between the 19th-century Austro-German tradition and the modernism of the early 20th century''
::::::::::::::::::::::and
::::::::::::::::::::::''Mahler's œuvre is relatively limited; for much of his life composing was necessarily a part-time activity while he earned his living as a conductor. Aside from early works such as a movement from a piano quartet composed when he was a student in Vienna, Mahler's works are generally designed for large orchestral forces, symphonic choruses and operatic soloists. These works were frequently controversial when first performed, and several were slow to receive critical and popular approval; exceptions included his Second Symphony, and the triumphant premiere of his Eighth Symphony in 1910. Some of Mahler's immediate musical successors included the composers of the Second Viennese School, notably Arnold Schoenberg, Alban Berg and Anton Webern. Dmitri Shostakovich and Benjamin Britten are among later 20th-century composers who admired and were influenced by Mahler. The International Gustav Mahler Society was established in 1955 to honour the composer's life and achievements.''
::::::::::::::::::::::According to whom? By what research? What if ''I'' do not think that is the case?
::::::::::::::::::::::You would rightfully say that my answers are on the references section, and that I should be [[WP:BOLD]] in changing it if I'm convinced that it could be better.
::::::::::::::::::::::And, most importantly, from your comment on the Talk page, I see that the article ''actually'' selects three works as prominent, and, ''you'' challenge that (IMO rightfully). Then it turns out that the problem of selecting what is important '''is not one of infoboxes but one central to writing biographies'''.
::::::::::::::::::::::For the last time: infoboxes are '''ONLY''' a collection of information '''already''' on the article. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 17:27, 29 November 2024 (UTC)
:::::::::::::::::::::::That's less true than one might think. Both {{tl|taxobox}} and {{tl|drugbox}} have a high likelihood of containing information that isn't repeated in the article.
:::::::::::::::::::::::But for infoboxes describing people, I would generally expect that statement to be true or to be ''meant'' to be true. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 07:06, 30 November 2024 (UTC)
::::::::::::::::::::::::Those are very specific examples, although you are right that they do not conform to what I said. Anyhow the point still stands for biographies. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 09:31, 30 November 2024 (UTC)
::::::::::::::::::::::I believe the usual solution in such cases is to link to the [[List of compositions by Gustav Mahler]]. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 07:07, 30 November 2024 (UTC)
:::::::::::::::::::::::True, but I believe that there are good reasons some works ''can'' be highlighted. Anyhow, this is also a consideration when writing the lead, not only the infobox. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 09:36, 30 November 2024 (UTC)
:::::::::::::I don't think that "OWN" is a useful model here. Consider this story:
:::::::::::::Someone saw a neglected area in his neighborhood, and he thought he'd help people by quietly picking up the trash. People mostly didn't notice, and nobody objected, so whenever he was walking out that way, he brought a trash bag with him and picked up some of the discarded litter. He carried on for a while just for the satisfaction of seeing it get better.
:::::::::::::Then The Committee showed up.
:::::::::::::They told him: "It's very nice that you decided to clean this up. However, you should wear gloves for your own safety."
:::::::::::::"Okay," he thought. "There's probably something in their advice." So he started wearing gloves, and he did think that it made it a little easier to sort the recycling from the garbage.
:::::::::::::The Committee came back another time: "Thank you for your past work. We notice that a bit of the grass here grows out onto the sidewalk. We're not saying you have to do this, because this spot isn't yours, but it would be nice if ''someone'' got a lawn edger and made that even neater."
:::::::::::::The volunteer thought that since nobody had bothered to pick up the trash, it was unlikely that anyone else would trim the grass. Besides, he had a lawn edging tool, so the next time he dropped by, he brought a trash bag, his gloves, and his lawn edger. The little spot was looking pretty neat, if a bit plain.
:::::::::::::Soon, the Committee came back again: "Thank you for your past work. We just wanted to let you know that our standards say that it's not enough to clean up a mess. Every area should also have some plants. So it would be very nice if you planted some trees or bushes or something in this spot, even though it's not yours."
:::::::::::::"Can you at least buy the plants?" he asked.
:::::::::::::"No," said The Committee. "Thank you for your past work, but you'll have to grow them or buy them yourself, or maybe you could find someone who would give them to you."
:::::::::::::The volunteer thought that the little spot would benefit from some cheery little flowers, and he decided to do it. He planted a few yellow flowers along the edge.
:::::::::::::The next day The Committee showed up. "What? ''Yellow'' flowers? Thank you for your past work, but we have received complaints. One of the neighbors (who happens to be part of The Committee) just filed a confidential complaint that there are now garishly colored flowers in this little spot. Those have to be removed. You don't own this place, you know, even though you're the only one who did anything to take care of it, except for the neighbor's important work complaining, and of course our even more important work ordering you around."
:::::::::::::@[[User:Milo8505|Milo8505]] (and others), my question is: Do you expect the volunteer to keep maintaining that little spot? Or do you expect him to quit?
:::::::::::::It is true that the author/maintainer of an article does not [[WP:OWN]] it. But it is ''also'' true that the editor is a [[WP:VOLUNTEER]], and if you make volunteering be sufficiently un-fun – say, by trampling the yellow flowers he planted, or by demanding an infobox at the top of an article – then it would only be logical, rational, and predictable for that editor to quit contributing. And then who is going to write the new articles? [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 06:29, 29 November 2024 (UTC)
::::::::::::::What you are proposing is no different to ownership - giving an article writer control over what is and is not allowed on "their" article just because they don't like something that the consensus of the community says is important and beneficial to readers. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:10, 29 November 2024 (UTC)
:::::::::::::::No. What I'm proposing is that we remember that there are consequences for every decision we make, and choose the consequences we want to live with. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 07:09, 30 November 2024 (UTC)
::::::::::::::::And your proposed method of avoiding consequences you don't like is to give article writers ownership of "their" articles. The consequences of ''that'' need to be justified, and I don't think they can be. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 13:21, 30 November 2024 (UTC)
::::::::::::::I would actually say that suppressing infoboxes is more akin to removing flowers than creating them...
::::::::::::::I'm not asking anyone to do anything they don't want to. They are actually asking me (and others) ''not'' to do something a good number of people{{cn}} consider good. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 17:31, 29 November 2024 (UTC)
:::::::::::::::The problem is that what one person considers a beautiful flower, another person may consider a weed that needs pruning. Flowers are nice, but so is a manicured lawn. What we need to determine is WHETHER (in this particular lawn) we are planting flowers or pulling weeds. [[User:Blueboar|Blueboar]] ([[User talk:Blueboar|talk]]) 17:44, 29 November 2024 (UTC)
::::::::::::::As for the wider distinction between [[WP:OWN]] and [[WP:VOLUNTEER]]: nobody is forcing anybody else to do anything they don't like. Editors are free to refrain from editing whatever they feel like, without giving any reason. What they are not free to do is to say that their substantial contributions to one article give their opinion more weight than everybody else's on subjects related to that article. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 17:41, 29 November 2024 (UTC)
::::::::::::::The type of editor who quits the project over an infobox probably isn't someone suited to improving the project or working with others. I've worked on articles and a new editor may add something I don't like, but if they find consensus I accept the will of the community. That's how this place is supposed to work. [[User:Nemov|Nemov]] ([[User talk:Nemov|talk]]) 22:19, 29 November 2024 (UTC)
:::::::::::::::@[[User:Nemov|Nemov]], would you say that about me? I once objected to a template being renamed to something that was less convenient for me. It got renamed anyway, because other editors thought the new name would make their own work more convenient.
:::::::::::::::We do have to "accept the will of the community", but we do not have to continue volunteering under circumstances that aren't working for us, so I stopped doing that work. They got their advantages; we got another backlog for several years. (Eventually another editor decided to do that work.)
:::::::::::::::Am I someone you would describe as not "suited to improving the project or working with others"? [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 07:16, 30 November 2024 (UTC)
::::::::::::::::I really really really do not think that the spirit of ''How will this action make every other editor feel?'' is very useful to Wikipedia.
::::::::::::::::Although well-intentioned, it's impossible to think about every possible edge case, and sometimes it's impossible to find something that everyone will agree to, so if we stop to ask that question before every change, we will, in the end, get nothing done at all. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 09:34, 30 November 2024 (UTC)
:::::::::::::::::It is impossible to think about every possible edge case.
:::::::::::::::::However, we're not talking about [[Unknown unknowns]] in this case. We're talking about known consequences. We either choose them and own them, or we avoid them. Take your pick – but don't pretend that a choice has no downsides after you've been told what one of the downsides is. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 23:39, 30 November 2024 (UTC)
::::::::::::::::::Let's all take a deep breath, and not compare editing Wikipedia to waging deadly wars.
::::::::::::::::::In any case, I believe that three things stand:
::::::::::::::::::# Nobody [[WP:OWN]]s articles.
::::::::::::::::::# [[WP:VOLUNTEER]]s are free to do whatever they want.
::::::::::::::::::# If making a change, after consensus reached by discussion, hurts someone's feelings, I'm sorry but they are not the leader of this place.
::::::::::::::::::Furthermore, can't a compromise be reached? Can't infoboxes be hidden via user JS? [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 07:53, 1 December 2024 (UTC)
:::::::::::::::::::You're thinking about this at the wrong level, with your focus on "hurting someone's feelings".
:::::::::::::::::::This is more of a [[key employee]] situation, so let's tell the story a different way:
:::::::::::::::::::Your business depends heavily on a small number of highly valuable employees. Without these few, your business will probably fail, because you will have no products to sell. You are the manager, and you think about how you will improve the business's profitability. You come up with an idea and share it with your staff.
:::::::::::::::::::Most of the staff thinks it's a good idea, but some of your key employees tell you that it's intolerable, and if you implement it, they will quit.
:::::::::::::::::::Should you say:
:::::::::::::::::::* "Well, I'm sorry if your little ''feelings'' got hurt, but frankly you don't ''own'' this business. Don't let the door hit your backside on your way out", or
:::::::::::::::::::* "Um, I don't want you to quit. It's not good for any of us if you quit. Let me see if we can come up with something that meets my legitimate goals and also keeps you working here."
:::::::::::::::::::Your #3 sounds like that first one. I don't recommend it. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 22:04, 1 December 2024 (UTC)
::::::::::::::::::::Wikipedia does not depend on "a small number of highly valuable employees". Our goal is always to best serve our readers, and we do that by including the information in our articles that they want and expect to be there. We do not do that by pandering to the dislikes of editors, especially not a small minority of editors. No matter how you try and spin it, ownership of articles is not justifiable. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:10, 1 December 2024 (UTC)
:::::::::::::::::::::No one has suggested otherwise. I have absolutely no clue where you are getting this accusation that WhatamIdoing is arguing for article ownership. Pushing back on outside editors whose only connection to the article is to come and enforce their pet issues, unrelated to any substantive content in the article, is not “article ownership”, any more than having WikiProjects in which certain editors contribute more than others outside the Project is “ownership.” [[User:Barbarbarty|Barbarbarty]] ([[User talk:Barbarbarty|talk]]) 23:56, 1 December 2024 (UTC)
::::::::::::::::::::::[[WP:OWN]] isn't complicated. {{tq|No one, no matter what, has the right to act as though they are the owner of a particular article (or any part of it).}} You're creating a group of "outside editors." We are all editors here. One editor's "pet issue" is another editors improvement. Wikipedia is a collaborative effort. [[User:Nemov|Nemov]] ([[User talk:Nemov|talk]]) 00:20, 2 December 2024 (UTC)
:::::::::::::::::::::::Again, no one has suggested otherwise. I am simply stating a fact that when certain editors try to present something as an “improvement” and it is roundly rejected by other editors who routinely edit the article, it is not a case of someone asserting “ownership.” Hiding behind accusations of others asserting “ownership” to obfuscate the fact that some editor’s changes are counterproductive or not accepted is not the same as finding a violation of [[WP:OWN]]. [[User:Barbarbarty|Barbarbarty]] ([[User talk:Barbarbarty|talk]]) 01:11, 2 December 2024 (UTC)
::::::::::::::::::::::::I honestly don't understand your reply to my comment. [[User:Nemov|Nemov]] ([[User talk:Nemov|talk]]) 02:28, 2 December 2024 (UTC)
:::::::::::::::::::::::::Honestly I fail to see how I could have made myself clearer. You are accusing me of “creating a group of outside editors.” I have suggested nothing of the sort. I was merely saying that when certain groups of editors have more expertise on certain subjects, and therefore edit articles related to those subjects more than other users, that is simply how many articles have been crafted and developed. It’s not “ownership” to state that fact, nor is it elevating any group of editors above any other group. If someone who does not edit a certain article frequently adds an edit to an article and it is reverted, just because it is reverted by another user who is more active on the article does not automatically implicate [[WP:OWN]]. [[User:Barbarbarty|Barbarbarty]] ([[User talk:Barbarbarty|talk]]) 02:43, 2 December 2024 (UTC)
::::::::::::::::::::::@[[User:Barbarbarty|Barbarbarty]] Several of @[[User:WhatamIdoing|WhatamIdoing]]'s posts, including the one I was directly replying to, advocate for article ownership without using that term. The post I was directly replying to explicitly claimed Wikipedia depends on a small number of editors.
::::::::::::::::::::::{{tpq|Stating that outside editors with no connection to the article in interest other than to come and enforce their pet issues unrelated to any substantive content in the article}} so presumably you object to editors copyediting, typo fixing, adding conversion templates, adding/editing/removing categories and short descriptions, making the article consistent in its language variety, citation style and/or unit ordering, and any of the other myriad improvements "outside editors with no connection to the article" make? If not, why are infoboxes different? Who gets to decide who is and who is not an "outside editor", and thus who is entitled to stand above consensus? [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:50, 2 December 2024 (UTC)
:::::::::::::::::::::::First off, you admit that WhatamIdoing never advocated for article ownership. I still fail to see how it can be shown otherwise. As for your point about things I would “presumably” object to, one look at my posting history would show that the vast majority of my contributions involve things like fixing typos and the like. Simple maintenance issues like adding citations and fixing typos are not “pet issues” in any sense of the term. I doubt any editor on here thinks that Wikipedia should have articles with unfixed typos or improper grammar. Infoboxes do not fall under simple “maintenance.” Arbitrarily adding them without discussion, as history has shown, is nearly guaranteed to cause debate. They are handled entirely on a case-by-case basis, dependent on a myriad of factors. As I said above, “ownership” is not simply some editors who edit an article more frequently rejecting so-called “improvements” by editors who edit the article less frequently. [[User:Barbarbarty|Barbarbarty]] ([[User talk:Barbarbarty|talk]]) 01:20, 2 December 2024 (UTC)
::::::::::::::::::::::::{{tpq|First off, you admit that WhatamIdoing never advocated for article ownership}} I did not do this. I said she has argued for article ownership without using that term, because no matter whether you call it "ownership" or not, what she is arguing for is exactly what we define ownership to be. Infoboxes are no more "pet issues" than fixing typos or any of the other improvements mentioned; the only difference is that some editors dislike them. {{tpq|As I said above, “ownership” is not simply some editors who edit an article more frequently rejecting so-called “improvements” by editors who edit the article less frequently.}} Except it is, as Nemov has explained in very simple terms above. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 01:57, 2 December 2024 (UTC)
::::::::::::::::You didn't quit Wikipedia over it. There are some editors who have been arguing about infoboxes for over 15 years. It's only a contentious topic because that group can't let it go. I feel bad for new editors who wander into the topic not knowing the back story. [[User:Nemov|Nemov]] ([[User talk:Nemov|talk]]) 17:32, 30 November 2024 (UTC)
:::::::::::::::::It's only contentious because ''both'' sides can't let it go.
:::::::::::::::::If the template renaming had happened at a point when I was already unhappy with editing, it would have tipped me over the edge.
:::::::::::::::::What I'm not seeing here is any acknowledgement of the costs. I see the advantages:
:::::::::::::::::'''Pros:'''
:::::::::::::::::# I like it.
:::::::::::::::::# Readers like it.
:::::::::::::::::but not the known list of disadvantages:
:::::::::::::::::'''Cons:'''
:::::::::::::::::# Some editors dislike it enough that they will reduce their participation or stop writing articles altogether.
:::::::::::::::::You might well say "I like it, so I and other supporters will be inspired to do 2% more editing if we get our way, and it adds 5% more value to readers in biographies and 10% more value in corporations with an [[WP:ELOFFICIAL]] link in the infobox. That benefit needs to be set against 1% of editors quitting and 2% fewer notable articles being created. That's still a net benefit, so let's go with it."
:::::::::::::::::But let's not pretend that it is a cost-free choice. A net benefit can have significant harms. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 23:48, 30 November 2024 (UTC)
::::::::::::::::::@[[User:WhatamIdoing|WhatamIdoing]] I would urge you to reread my comment. I think you'll find there was no finger pointing at a particular side. You're twisting yourself into a pretzel here to defend ownership. There's a cost to everything. I think if you review some of the newer editors who have wandered into this topic, they're not being encouraged to edit or learn. [[User:Nemov|Nemov]] ([[User talk:Nemov|talk]]) 05:32, 1 December 2024 (UTC)
:::::::::::::::::::Indeed I've seen several examples of newcomers being thoroughly bitten when they dare to ask for an infobox to be added to an article. See for example [[Talk:Stanley Holloway]] where suggestions get aggressively shut down. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 10:09, 1 December 2024 (UTC)
::::::::::::::::::::Yes, there are costs to every choice. What irritates me at the moment is that some costs are being ignored, and one of those costs increases our long-term risk of collapse. (On the opposite side, one of the costs is that readers won't get what they need, which is also a very serious problem.)
::::::::::::::::::::For the record, if you see [[xtools:pages/en.wikipedia.org/WhatamIdoing|an article I've created]] without an infobox, you are (very) welcome to go add one. I'm not anti-infobox. I am anti-destroying-Wikipedia-for-the-sake-of-uniformity. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 22:17, 1 December 2024 (UTC)
:::::::::::::::::::::And I'm anti hyperbolic claims that allowing article ownership is somehow the only way to avoid destroying Wikipedia. ''Nobody'' is irreplaceable. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:30, 1 December 2024 (UTC)
:::::::::::Case in point [[Jacqueline Stieger]], where the box I've just removed (1) highlighted her place of birth Wimbledon and nationality British, which -- for someone with two Swiss parents, who was brought up in Yorkshire, did some of her notable work in France/Switzerland with her Swiss husband and then settled back in Yorkshire with her Swiss stepchildren -- is undue; and (2) copied "artist and sculptor" from the beginning of the capsule, while not paying heed to the fact her notable works predominantly fall into two groups, big architectural sculptures mainly in metal, and jewellery/art medals. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 05:52, 28 November 2024 (UTC)
::::::::::::X thing is bad, because once, some time ago, I saw an instance of X and it was bad, really really bad, as a matter of fact. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 09:42, 28 November 2024 (UTC)
:::::::::::::Well sure, but I just looked down my list of created bios by date till I found the first to which someone had added an infobox. I didn't drag out my historical collection of badly added infoboxes including those that had been cut-and-pasted wholesale from another article without changing any of the data, and those that introduced errors in the dates. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 10:40, 28 November 2024 (UTC)
::::::::::::Looking at the example of [[Jacqueline Stieger]], I'm not understanding {{u|Espresso Addict}}'s position. They object to the infobox giving her nationality as British. But the lead has always said that she "is a British artist and sculptor". And the {{tl|short description}} is "British artist and sculptor". And there are a bunch of categories which tend to describe her as English rather than British.
::::::::::::The lead, infobox, short description and other structural stuff like categories are all summaries or attributes of the main content. Summarising obviously involves some loss of detail. Objecting to an infobox seems like objecting to a short description. I often don't like these myself but they seem to be unavoidable.
::::::::::::[[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 10:09, 1 December 2024 (UTC)
::::::::::::::It's not so much that the details are ''wrong'' for Stieger; it's that they give undue prominence to trivial non-representative features of the subject's life, such as her place of birth, while not summarising the actual reasons for notability/interest -- possibly my fault for a slender lead. I'm not fond of short descriptions either, but they are invisible to the reader. I'm actually not too fond of categories either, but they go at the bottom, after the references, and so again do not draw the attention of the reader. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 12:45, 1 December 2024 (UTC)
::::::::::::::What makes any of you think that the infobox is more prominent than the lead of the article itself? Or are you implicitly recognizing that infoboxes ''are'' actually widely used by readers? [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 20:21, 1 December 2024 (UTC)
*The thing to remember about CONLOCAL is that almost all of our policies and guidelines (which supposedly reflect “global consensus”) contain a line noting that ''occasional exceptions may exist''. This means that global consensus ''takes local consensus into account''. Indeed, there are times when a consensus is reached at an article level (say, through an RFC) that actually has ''greater participation'' (wider consensus) than the policy/guideline page that is at the heart of the discussion. A policy/guideline may be wonderful for ''most'' situations, but problematic in a ''specific'' situation.
:As for infoboxes… yes, there is a “global consensus” that they are good things, and adding one usually improves the article. However, we have had RFCs that show we also have a wide consensus that notes how infoboxes don’t always work, that on occasion they can actually be more harmful than helpful… and that we can leave it to local editors to make that determination. This is especially true when it comes to articles about composers.
:So, when there is local disagreement regarding a specific composer, when there is a question as to whether an infobox would be beneficial or harmful in that specific situation, the solution is to have an RFC to determine wider consensus about that specific situation.
:I.e., ASK the community whether that specific article should be considered an ''exception'' to our general consensus on infoboxes. [[User:Blueboar|Blueboar]] ([[User talk:Blueboar|talk]]) 13:54, 29 November 2024 (UTC)


* The phrase "global consensus" indicates that we should look across all the languages, not just English. Articles about famous composers seem to have about 50 versions in the various languages and it's easy to spot-check these to see whether they do or don't have infoboxes. I looked at a few examples of English composers as they seemed to be the most likely to be disputed: [[Gustav Holst]], [[Ralph Vaughan Williams]], [[Benjamin Britten]]. My impression is that most languages have infoboxes for these. Apart from English, the main outliers seem to be German and Italian. [[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 09:39, 1 December 2024 (UTC)
*:The German encyclopedia is, as far as I know, very rich in classical music content. [[User:Espresso Addict|Espresso Addict]] <small>([[User talk:Espresso Addict|talk]])</small> 12:48, 1 December 2024 (UTC)
*:I do not think that is the case. Each Wikipedia is separate from the others, and they each have their own policies and ways of doing things. [[User:Milo8505|Milo8505]] <sub>[[User_talk:Milo8505|(talk)]]</sub> 20:23, 1 December 2024 (UTC)
*There seem to be at least three different meanings of "global consensus" in this discussion. The OP seems to have taken it to mean something that should apply to all articles of a particular type, but in [[WP:CONLEVEL]] I think it means a consensus reached by everyone rather than just the editors of particular articles. These are different. It is in principle possible for a global discussion to come to the conclusion that every article should be treated differently. Andrew Davidson introduces another level of "global" that includes other language Wikipedias. English Wikipedia has always claimed its independence from other projects, so I don't think that will fly. On the specific case of infoboxes, surely the discussion should be about what to include in them, rather than first a discussion of whether they should exist. If the answer is "nothing" then we simply don't have them. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 13:49, 1 December 2024 (UTC)
:I don't think there needs to be any central policy on this. It could be relevant to include this information for someone, perhaps... maybe... However, infoboxes work best when they contain uncontroversial at-a-glance facts that don't need a bunch of nuance and context to understand. For the example of Charles V, maybe his first language is significant, but putting it in the infobox (where the accompanying story cannot fit) would be a confusing unexplained factoid. Like, maybe once upon a time there was a notable person whose life turned on the fact that they were left-handed. That could be a great bit of content for the main article, but putting handedness in the infobox would be odd. [[User:Barnards.tar.gz|Barnards.tar.gz]] ([[User talk:Barnards.tar.gz|talk]]) 14:33, 12 December 2024 (UTC)
::{{tl|Infobox baseball biography}} includes handedness, and nobody finds that odd content for an infobox.
::{{tl|infobox royalty}} includes the option for up to five native languages, though the OP says it seems to be unused in practice. {{tl|Infobox writer}} has a <code>|language=</code> parameter, and it would be surprising if this were unused. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 19:36, 12 December 2024 (UTC)
:::Baseball seems to be a good example of where handedness is routinely covered, and easily consumable at a glance without needing further explanation. The scenario where I don't think handedness (or first language) makes sense is when it is a uniquely interesting aspect of that individual's life, because almost by definition there's a story there which the infobox can't tell. [[User:Barnards.tar.gz|Barnards.tar.gz]] ([[User talk:Barnards.tar.gz|talk]]) 10:23, 13 December 2024 (UTC)
:I don't think L1 can be determined for most historical figures without a hefty dose of [[WP:OR|OR]]. If you look at [[User:Toadspike|my Babel boxes]], you'll see that I, as a living human being with all the information about my own life, could not tell you what my own "L1" is. The historical figures for whom this would be relevant mostly spoke many more languages than I do, and without a time machine it would be nigh impossible to say which language they learned first. This isn't even clear for the Qing emperors – I am fairly certain that they all spoke (Mandarin) Chinese very well, and [[Qing dynasty|our article]] never says what language they spoke. [[Puyi]] even states that he never spoke Manchu. Adding this parameter would also inflame existing debates across the encyclopedia about ethnonationalism (e.g. [[Nikola Tesla]]) and infobox bloat. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 21:21, 12 December 2024 (UTC)
::As with every bit of information in every infobox, if it cannot be reliably sourced it does not go in, regardless of how important it is or isn't. There are plenty of examples of people whose first language is reported in reliable sources. I just did an internal search for <kbd>"first language was"</kbd> and on the first page of results found sourced mentions of first language at [[Danny Driver]], [[Cleopatra]], [[Ruthanne Lum McCunn]], [[Nina Fedoroff]], [[Jason Derulo]], [[Henry Taube]] and [[Tom Segev]], and an unsourced but plausible mention at [[Dean Martin]]. The [[Cleopatra]] article strongly suggests that her first language was an important part of her biography, such that putting it in the infobox would be justifiable. I am not familiar enough with any of the others to have an opinion on whether it merits an infobox mention there; I'm simply reporting that there are many articles where first language is reliably sourced and a mention is deemed DUE. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:08, 12 December 2024 (UTC)
::I have been wondering since this conversation opened how far back the concept of an L1 language, or perhaps the most colloquial first language, can be pushed. Our article doesn't have anything on the history of the concept. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 11:31, 13 December 2024 (UTC)
:::I suspect the concept is pretty ancient, I certainly wouldn't be surprised to learn it arose around the same time as diplomacy between groups of people with different first languages. The note about it at [[Cleopatra]] certainly suggests it was already a well-established concept in her era (1st century BCE). [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 13:23, 13 December 2024 (UTC)
::::The concept of different social strata speaking different languages is old, but I'm not sure whether they viewed learning languages the same way we do. It's certainly possible, and perhaps it happened in some areas at some times, but I hesitate to assume it's the case for every historical person with an infobox. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 16:05, 13 December 2024 (UTC)
:::::It's certainly not going to be appropriate for the infobox of every historical person, as is true for (nearly?) every parameter. The questions here are whether it is appropriate in any cases, and if so in enough cases to justify having it as a parameter (how many is enough? I'd say a few dozen at minimum, ideally more). I think the answer to the first question is "yes". The second question hasn't been answered yet, and I don't think we have enough information here yet to answer it. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:54, 13 December 2024 (UTC)
::::::The question is not whether it is appropriate in any cases; the question is whether it is worth the trouble. I guarantee that this would lead to many vicious debates, despite being in most cases an irrelevant and unverifiable factoid based on inappropriate [[WP:ABOUTSELF|ABOUTSELF]]. This is the same reason we have [[MOS:ETHNICITY]]/NATIONALITY. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 07:29, 16 December 2024 (UTC)
:Nah. If this were "a very basic and useful piece of information" then we would already be deploying it site-wide, so it obviously is not. In the vast majority of cases, it would involve intolerable [[WP:OR]], or even just guessing masquerading as fact. We do not know for certain that someone born in France had French as their first/native/home language. I have close relatives in the US, in a largely English-speaking part of the US, whose first language is Spanish. For historical figures it would get even more ridiculous, since even our conceptions of languages today as, e.g., "German" and "French" and "Spanish" and "Japanese", are a bit fictive and are certainly not historically accurate, because multiple languages were (and still are, actually) spoken in these places. We would have no way to ascertain which was used originally or most natively for the average historical figure. Beyond a certain comparatively recent point, most linguistics is reconstruction (i.e. educated guesswork); if there's not a substantial corpus of surviving written material we cannot be sure. That matters a lot for figures like Genghis Khan and King Bridei I of the Picts. Finally, it really is just trivia in the vast majority of cases. A biographical figure's first/primary/home/most-fluent/most-frequently-used language (and some of those might not be the same, since all of them can change over time other than "first") is something that could be included when certain from RS, but it's not lead- or infobox-worthy in most cases, unless it pertains directly to the subject's notability (e.g. as a writer) {{em|and}} also isn't already implicit from other details like nationality. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 03:42, 23 December 2024 (UTC)


==We need to fix the admin recall process==

Right now only "recall" votes count, and those opposing recall don't count for anything, nor do any points made in the discussion. So 25 quick group-think / mob thumbs-down votes and even the best admin can get booted. And the best (= the most active) are the ones most likely to get booted. An admin who does near zero will get zero votes to recall. And with a single regular RFA currently the only way back in (which, as we've seen, very few want to go through), "booted" is "booted". The fix would be to have a discussion period prior to voting, with both "recall" and "don't recall" choices, and then say that the recall has occurred (thus requiring RFA) if over 50% or 60% of those voting said "recall".
Sincerely, <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 20:40, 19 November 2024 (UTC)


== Restrict new users from crosswiki uploading files to Commons ==

I created this Phabricator ticket ([[:phab:T370598]]) in July of this year, figuring that consensus to restrict non-confirmed users from crosswiki uploading files to Commons is implied. Well, consensus [[:c:Commons:Village pump/Proposals/Archive/2024/08#Deactivate cross-wiki uploads for new users|already agreed at Commons]] in [[:c:Commons:Village pump/Archive/2024/06#A new research report on Cross-wiki uploads have been published|response to the WMF study on crosswiki uploading]]. I created an [[:m:Community Wishlist/Wishes/Disallow or restrict non-confirmed users from cross-wiki uploading files to Commons|attempted Wish at Meta-wiki]], which was then rejected, i.e. "archived", as policy-related and {{tq|requir[ing] alignment across various wikis to implement such a policy}}. Now I'm starting this thread, thinking that the consensus here would already or implicitly support such a restriction, but I can stand corrected about the outcome here. [[User:George Ho|George Ho]] ([[User talk:George Ho|talk]]) 06:34, 9 December 2024 (UTC); corrected, 08:10, 9 December 2024 (UTC)
* '''Support'''. I am not sure why this relies on alignment across wikis; those on Commons are best placed to know what is making it to Commons. The change would have little to no impact on en.wiki. If there is an impact, it would presumably be less cleaning up of presumably fair-use files migrated to Commons that need to be fixed here. That said, if there needs to be consensus, then obviously support. We shouldn't need months of bureaucracy for this. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 06:41, 9 December 2024 (UTC)
:* '''Support''', I don't know that my input really counts as new consensus because I said this at the time, but the problem is much worse than what the study suggests, as we are still finding spam, copyvios, unusable selfies and other speedy-deletable uploads from the timespan audited. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 02:14, 10 December 2024 (UTC)
*'''Support''' As this applies to images being posted to Commons, but by a method that side steps their wishes, I don't see why another wiki should stand in the way. -- <small>LCU</small> '''[[User:ActivelyDisinterested|A<small>ctively</small>D<small>isinterested</small>]]''' <small>''«[[User talk:ActivelyDisinterested|@]]» °[[Special:Contributions/ActivelyDisinterested|∆t]]°''</small> 16:54, 10 December 2024 (UTC)
*'''Support'''. I do think that disabling the ability of new editors on the English Wikipedia to engage in crosswiki uploads to Commons would be a net positive; the Commons community has come to this conclusion several times, and the research confirms that cross-wiki uploads by new users cause more trouble than the good uploads are worth. — [[User:Red-tailed hawk|<span style="color: #660000">Red-tailed&nbsp;hawk</span>]]&nbsp;<sub>[[User talk:Red-tailed hawk|<span style="color: #660000">(nest)</span>]]</sub> 00:36, 11 December 2024 (UTC)
*'''Support''' Way too low signal-to-noise ratio; most of these images are copyvios or otherwise useless. -- [[User:King of Hearts|<b style="color:red">King of ♥</b>]][[User talk:King of Hearts|<b style="color:red"> ♦</b>]][[Special:Contributions/King of Hearts|<b style="color:black"> ♣</b>]][[Special:EmailUser/King of Hearts|<b style="color:black"> ♠</b>]] 01:12, 11 December 2024 (UTC)
*'''Support''' like the above editors. Much spam, many copyvios, few good images.—[[User talk:Alalch E.|Alalch E.]] 15:47, 11 December 2024 (UTC)
*I don't think this should be any sort of enwiki policy. If commonswiki wants to restrict something that should be up to them. I can't possibly see how it would need to be specific to the English Wikipedia (i.e. but not about new users on dewiki, eswikt, etc). — [[User:Xaosflux|<span style="color:#FF9933; font-weight:bold; font-family:monotype;">xaosflux</span>]] <sup>[[User talk:Xaosflux|<span style="color:#009933;">Talk</span>]]</sup> 16:19, 11 December 2024 (UTC)
*:As noted by George Ho above, [[:c:Commons:Village_pump/Proposals/Archive/2024/08#Deactivate_cross-wiki_uploads_for_new_users|Commons has already done this for all wikis]]. The question is whether or not we want the English Wikipedia to assist in implementing this (perhaps by changing a local setting or software configuration to require that their uploads be local), rather than merely relying upon a Commons edit filter (which can be a bit unfriendly to new users). — [[User:Red-tailed hawk|<span style="color: #660000">Red-tailed&nbsp;hawk</span>]]&nbsp;<sub>[[User talk:Red-tailed hawk|<span style="color: #660000">(nest)</span>]]</sub> 19:50, 11 December 2024 (UTC)
*::[[commons:Commons:Village_pump/Archive/2024/06#c-Sannita_(WMF)-20240624143400-A_new_research_report_on_Cross-wiki_uploads_have_been_published|This comment]] interests me: "Interestingly, we found that most uploaders were either ''marketers'' (editing/uploading on behalf of another entity such as their employer), or they were ''self-promoters'' (creating pages about themselves, unaware of the "notability" requirement)."
*::So I wonder whether, instead of stopping this, we want a bot to look at newbies who create articles/drafts, check whether they uploaded something, and then tag both the image(s) and the pages here with a note that says something like "There is a 90% chance that this has been posted by a marketer or self-promoter", with suitable links to pages such as [[Wikipedia:Paid-contribution disclosure]]. Or maybe even a [[WP:STICKYPROD]] process.
*::On the question of what to do, it should be possible to hide the cross-wiki upload button. The real question is, do we replace it with a link to [[c:Special:UploadWizard]]? The Commons POV has been that it's bad for people to upload images within the visual editor, but okay for the same person to upload the same image with the UploadWizard. I'm not sure the net result is actually any different, especially for these marketers/self-promoters (in terms of net quality/acceptability; from Commons' POV, it's better because (a lot? a little?) fewer of them will click through to upload anything at Commons). [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 19:49, 12 December 2024 (UTC)
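To make the bot idea above concrete, here is a minimal sketch of the selection logic it describes (tag newcomers who both created a page and uploaded files). Everything in it — the data class, the field names, the notice text — is a hypothetical stand-in for illustration, not an actual bot design or API:

```python
from dataclasses import dataclass

@dataclass
class NewEditor:
    """Hypothetical stand-in for data a bot could pull from the API."""
    username: str
    pages_created: list[str]    # new articles/drafts by this account
    files_uploaded: list[str]   # files the same account sent to Commons

def pages_to_tag(editor: NewEditor) -> list[str]:
    """Per the proposal: a brand-new account that both created a page
    and uploaded file(s) gets the page(s) and image(s) tagged for review."""
    if editor.pages_created and editor.files_uploaded:
        return editor.pages_created + editor.files_uploaded
    return []

e = NewEditor("ExampleCo", ["Draft:ExampleCo"], ["File:ExampleCo logo.png"])
print(pages_to_tag(e))  # ['Draft:ExampleCo', 'File:ExampleCo logo.png']
```

A real bot would of course need rate limiting, an exclusion list, and a tagging template agreed by consensus; this only illustrates the "created a page ''and'' uploaded something" heuristic.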
*'''Support''' Nearly every single thing I've ever put up for deletion at Commons has been stuff uploaded to spam en.wp. It never stops. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 19:55, 11 December 2024 (UTC)
*:Is this still happening? According to @[[User:Red-tailed hawk|Red-tailed hawk]] this is already blocked. — [[User:Xaosflux|<span style="color:#FF9933; font-weight:bold; font-family:monotype;">xaosflux</span>]] <sup>[[User talk:Xaosflux|<span style="color:#009933;">Talk</span>]]</sup> 20:52, 11 December 2024 (UTC)
*::Yes, it's still happening. Such uploads include [https://commons.wikimedia.org/wiki/File:230116_ad8a1bc5840119dd_001full.jpg these] [https://commons.wikimedia.org/wiki/File:Resting_Turtle.png images] from EnWiki; the edit filter, as currently implemented, only filters out images with certain characteristics. — [[User:Red-tailed hawk|<span style="color: #660000">Red-tailed&nbsp;hawk</span>]]&nbsp;<sub>[[User talk:Red-tailed hawk|<span style="color: #660000">(nest)</span>]]</sub> 21:05, 11 December 2024 (UTC)
*:::It is for sure still happening, I've nominated a few in just the past week. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 22:26, 11 December 2024 (UTC)
*::It's still happening. A lot of them go to the uncategorized backlog which has well over 100,000 things in it so they get overlooked. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 19:18, 12 December 2024 (UTC)
*:::If anyone wants to help with that, then click on [[c:Special:RandomInCategory/Category:All media needing categories as of 2018]]. Figure out what the image is (Google Lens or TinEye searches can help; go to [[c:Special:Preferences#mw-prefsection-gadgets]] and ⌘F for TinEye to find the right item). If you can identify it, then add a relevant cat. I believe that [[Wikipedia:HotCat]] is enabled by default for all logged-in editors, so searching for cats is usually pretty easy. If you can't find something obviously relevant, then skip it and try another. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 20:02, 12 December 2024 (UTC)
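For anyone who would rather script that triage than click through by hand, a minimal sketch of listing files from the backlog category via the MediaWiki Action API's documented <code>list=categorymembers</code> module; the helper names and the sample response below are illustrative only, not live data:

```python
import json
from urllib.parse import urlencode

API = "https://commons.wikimedia.org/w/api.php"
CATEGORY = "Category:All media needing categories as of 2018"

def member_query_url(category: str, limit: int = 10) -> str:
    """Build a categorymembers query URL against the Commons API."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmtype": "file",   # only File: pages
        "cmlimit": limit,
        "format": "json",
    }
    return API + "?" + urlencode(params)

def member_titles(response_text: str) -> list[str]:
    """Extract file titles from a categorymembers JSON response."""
    data = json.loads(response_text)
    return [m["title"] for m in data["query"]["categorymembers"]]

# Fetching member_query_url(CATEGORY) returns JSON shaped roughly like
# this (abridged sample, not live data):
sample = '{"query": {"categorymembers": [{"pageid": 1, "ns": 6, "title": "File:Example.jpg"}]}}'
print(member_titles(sample))  # ['File:Example.jpg']
```

The category name is the one linked in the comment above; a full crawl would also need to follow the API's <code>continue</code> token between requests.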
*::::I got another one just now [https://commons.wikimedia.org/wiki/Commons:Deletion_requests/File:Shibrul_Passport_Size_Photo.png]. This really can't happen fast enough. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 23:51, 12 December 2024 (UTC)
* '''Support''' It's honestly kinda dumb that we have to have this whole other consensus process after the prior one just because people at Meta-wiki don't want to implement it. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 20:35, 13 December 2024 (UTC)
* '''Support''', since this has already been decided and the WMF is just being recalcitrant. [[WP:NOT#BUREAUCRACY]], and Commons isn't one either, nor is Wiktionary, etc., and to the extent the WMF wants to be one, that needs to be nipped in the bud. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 03:44, 23 December 2024 (UTC)


==Question(s) stemming from undiscussed move==
:@[[User:North8000|North8000]] Please see [[Wikipedia:Administrator recall/Reworkshop]], where editors are already discussing potential changes. [[User:Samwalton9|'''S'''am '''W'''alton]] ([[User talk:Samwalton9|talk]]) 20:43, 19 November 2024 (UTC)
::Thanks. I looked for something like that but I guess I didn't look hard enough. I hope others look harder than me. :-) <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 21:58, 19 November 2024 (UTC)
:I don't think you understand how recall works. An admin is only desysopped after the RRFA, not after the 25 signatures, unless they choose to resign on their own. You're asking to hold a vote on whether or not a vote should be held. ~~ [[User:Jessintime|Jessintime]] ([[User talk:Jessintime|talk]]) 20:55, 19 November 2024 (UTC)
::Yes, I understood that and that is integrated into my comment above. Unless they go through and succeed at an RFA they are gone. <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 21:54, 19 November 2024 (UTC)
:I've never heard of a petition that lets people sign because they ''don't'' support it. And I'll add that between the two recall petitions that were enacted to this point, both were preceded by many, ''many'' attempts to get the admin to correct course over the years despite egregious misconduct. [[User:Thebiguglyalien|<span style="color:#324717">The</span><span style="color:#45631f">big</span><span style="color:#547826">ugly</span><span style="color:#68942f">alien</span>]] ([[User talk:Thebiguglyalien|<span style="color:sienna">talk</span>]]) 21:03, 19 November 2024 (UTC)
::I'm not talking about any particular cases. Sincerely, <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 21:56, 19 November 2024 (UTC)
:::So, the premise of your argument is pure conjecture? Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 22:05, 19 November 2024 (UTC)
::::???? It was from an analysis of it's current structure. <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 14:10, 20 November 2024 (UTC)
:::::But you've just refused to engage in a discussion with how the structure has actually worked in practice; hence, conjecture. Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 00:19, 21 November 2024 (UTC)
:The process at the moment does have a certain level of redundancy, with the recall and reconfirmation RFA being separate things. The reconfirmation RFA is even a standard RFA, as it has different criteria for success.
:I'm not sure if anything should be done yet, as it's still very early in its adoption. However if the situation occurs that a petition is successful but the reconfirmation RFA SNOWs, it could indicate that adjustments needs to be made so that community time isn't wasted. That speculative at the moment though. -- <small>LCU</small> '''[[User:ActivelyDisinterested|A<small>ctively</small>D<small>isinterested</small>]]''' <small>''«[[User talk:ActivelyDisinterested|@]]» °[[Special:Contributions/ActivelyDisinterested|∆t]]°''</small> 23:53, 19 November 2024 (UTC)
:The recall petition threshold is not the recall discussion - it is just a check to prevent the most frivolous recall discussions from being held. — [[User:Xaosflux|<span style="color:#FF9933; font-weight:bold; font-family:monotype;">xaosflux</span>]] <sup>[[User talk:Xaosflux|<span style="color:#009933;">Talk</span>]]</sup> 00:56, 20 November 2024 (UTC)
::{{+1}} [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 06:54, 20 November 2024 (UTC)
:The optics of this look alltogether terrible from my observation. I don't edit much, but I like reading a lot. Every criticism of the recall process i've seen so far just looks like old established admins thinking they might be next and having anxiety about that.
:The problem of something like this is that the optics are terrible. If anyone who doesn't know you reads that, the conclusion they will draw will likely not be "this recall process is terrible" and more likely go along the lines of "wow this is a lot of admins who don't have the community's trust anymore and want to dodge accountability".
:By being so vocally against any form of community led accountability, you're strenghtening the case for easy recalls and low thresholds, not weakening it.
:Specifically regarding Fastily, I'll make no comment on whether or not he deserves to still be an admin or not, I don't know him well enough for that and haven't reviewed enough of his contributions, but the arguments of "ANI agreed that no sanctions were appropriate" sound a lot like "our police department has investigated itself and found nothing was wrong". You have to see how this comes across, it's eroding trust in Admins on the whole project right now. [[User:Magischzwei|Magisch]] <sup>[[User talk:Magischzwei|talk to me]]</sup> 09:24, 20 November 2024 (UTC)
::Specifically, if RFA is so toxic that nobody wants to do it, that needs to be reformed. But the recent amount of vitriol towards a process that only kickstarts having to prove that you retain community trust has me convinced that there should be automatic mandatory RRFAs for every admin every 2 years or so.
::If, as of today, you don't believe the community would entrust you with admin tools, why do you think you should still have them? The criteria for losing them should not be "has clearly abused them", it should be "wouldn't be trusted with them if asked today". [[User:Magischzwei|Magisch]] <sup>[[User talk:Magischzwei|talk to me]]</sup> 09:33, 20 November 2024 (UTC)
:::As an admin actively working to improve the recall process, my goal is to make it as fair as possible to all parties. That means it should not be possible to subject an admin to the process frivolously while equally making it possible to recall administrators who have lost the trust of the community, and it needs to be as non-toxic as possible, because even administrators who are actively abusing their tools are people and ''nobody'' deserves 1-2 months of abuse. It's also incorrect to describe ANI as a police department investigating itself - everybody engaging in good faith is welcome to comment there, regardless of whether they are an admin or not. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:15, 20 November 2024 (UTC)
::::@[[User:Thryduulf|Thryduulf]] It's the Administrator's Noticeboard, naturally the vast majority of participants will be either admins or people who are involved in the same work.
::::I don't think asking an admin to confirm they still retain the trust of the community (the whole basis of giving out admin tools to begin with) is ever really frivolous. The current process allows that at most once a year. If an admin had to stand for RFA every year, that might be a bit too much long term, but really, if any admin thinks they would not pass RRFA today, why should they retain their tools?
::::Also, the sheer optics of it being mostly (from what I've seen) established admins calling this process toxic are terrible. Anyone who doesn't know anything about this process will see this as some kind of thin blue line mentality in the admin corps - and might conclude that it is time to desysop the majority of old admins to dissolve the clique.
::::I wouldn't be surprised if we see a bunch of recall petitions for the most vocal critics of this process. [[User:Magischzwei|Magisch]] <sup>[[User talk:Magischzwei|talk to me]]</sup> 11:27, 20 November 2024 (UTC)
:::::I have no horse in this race, except that I regret not seeing the RFA earlier so I could have voted Support, sorry about that.
:::::But if your argument is optics, then having a bunch of recall petitions for the people who most vocally expressed a valid opinion on an evolving policy is ''absolutely awful'' optics. At best. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 01:33, 22 November 2024 (UTC)
::I took the stats from the first RRfA to test this theory:
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
|+ 1st RRfA votes
!
! Support
! Oppose
! ''Total''
|-
! Administrators
| 48
| 29
| ''77''
|-
! Non-admins
| 71
| 116
| ''187''
|-
! ''Total''
|'' 119''
| ''145''
| '''''264'''''
|}
::Administrators made up 29% of the voters. If being an admin doesn't influence anyone's vote, then we can expect admins to make up roughly 29% of the supporters and 29% of the opposers. But this didn't happen. In the final results, administrators made up 40% of the supporters and 20% of the opposers. We can also look at the individual odds of supporting/opposing depending on user rights. It ended at 45% support, so you'd expect admins to have a 45% chance of supporting and a 55% chance of opposing. But this also didn't happen. If you choose any admin at random, they had a 62% chance of supporting and a 38% chance of opposing (ignoring neutrals). Non-admins were the opposite: they had a 38% chance of supporting and a 62% chance of opposing.
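::The arithmetic behind these percentages can be reproduced directly from the table (a minimal sketch; the vote counts come from the "1st RRfA votes" table above, and neutral votes are ignored, as in the analysis):

```python
# Vote counts from the "1st RRfA votes" table (neutrals excluded)
votes = {
    "admin": {"support": 48, "oppose": 29},
    "non_admin": {"support": 71, "oppose": 116},
}

support_total = votes["admin"]["support"] + votes["non_admin"]["support"]  # 119
oppose_total = votes["admin"]["oppose"] + votes["non_admin"]["oppose"]     # 145
admin_total = sum(votes["admin"].values())                                 # 77
grand_total = support_total + oppose_total                                 # 264

# Admins' share of the electorate vs. their share of each vote pile
print(f"admin share of voters:   {admin_total / grand_total:.0%}")                   # 29%
print(f"admin share of supports: {votes['admin']['support'] / support_total:.0%}")   # 40%
print(f"admin share of opposes:  {votes['admin']['oppose'] / oppose_total:.0%}")     # 20%

# Chance a randomly chosen admin / non-admin voter supported
print(f"admin support rate:      {votes['admin']['support'] / admin_total:.0%}")     # 62%
non_admin_total = sum(votes["non_admin"].values())                                   # 187
print(f"non-admin support rate:  {votes['non_admin']['support'] / non_admin_total:.0%}")  # 38%
```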
::So our next question should be why it was so much more likely for an admin to support the RRfA relative to a non-admin. The obvious answer is of course as you said: admins have a perverse incentive to support here, especially if they're not-so-great admins who know they probably don't have the trust of the community anymore. Also suggested during the RRfA is the camaraderie that comes from working alongside a fellow admin for so long. I'd be interested in seeing how account age affects likelihood of supporting, but that's not something that can be counted up in a few minutes like admin status. [[User:Thebiguglyalien|<span style="color:#324717">The</span><span style="color:#45631f">big</span><span style="color:#547826">ugly</span><span style="color:#68942f">alien</span>]] ([[User talk:Thebiguglyalien|<span style="color:sienna">talk</span>]]) 17:48, 20 November 2024 (UTC)
:::I believe it may be centered on the idea that we all make mistakes, and many of us like to think we'd be given a chance to grow and learn from said mistake, instead of being forced through the RfA process again. But I recognize I may be being overly optimistic on that, and that others may not have the same thoughts on the matter that I do. Many admins I've spoken to would simply choose to give up their tools as opposed to go through an RfA again, something I've also considered despite my relatively smooth RfA. I'm also not sure Graham is the best representation of that. I voted support, recognizing that Graham87 has made mistakes, but also recognizing the significant contributions they've made and their pledge to do better. Bluntly, I did so expecting the vote to fail, and wanting to show some moral support and appreciation for their work. There's certainly a psychological aspect involved in it, but I don't think that, generally speaking, those of us who voted support or have issues with the current process are doing so out of self preservation.
:::There's a lot of numbers that could be analyzed, such as the history of those admins who vote at RfA (whether they often vote support or don't vote at all), but it's hard to draw meaningful conclusions from this small of a dataset. [[User:Hey man im josh|Hey man im josh]] ([[User talk:Hey man im josh|talk]]) 19:14, 20 November 2024 (UTC)
::::On paper, I get that. The thing is, I don't know whether you saw [https://en.wikipedia.org/wiki/Wikipedia:Requests_for_adminship/Graham87_2#c-Levivich-20241117223500-Oppose Levivich's comment] or [https://en.wikipedia.org/wiki/Wikipedia:Requests_for_adminship/Graham87_2#c-Bradv-20241120021000-Oppose bradv's comment], but you'd be hard-pressed to find a less appropriate time to test the "chance to grow" theory than the absolutely deplorable behavior that we saw from Graham for many years with far too many chances to improve. If it were down to me, this should have been a block in 2023 rather than a desysop in 2024. [[User:Thebiguglyalien|<span style="color:#324717">The</span><span style="color:#45631f">big</span><span style="color:#547826">ugly</span><span style="color:#68942f">alien</span>]] ([[User talk:Thebiguglyalien|<span style="color:sienna">talk</span>]]) 19:32, 20 November 2024 (UTC)
:::I'm late to the discussion, but I think it's also worth pointing that only 7 of the 25 users who signed Graham87's petition and 2 of the 25 on Fastily's were admins. ~~ [[User:Jessintime|Jessintime]] ([[User talk:Jessintime|talk]]) 13:16, 23 November 2024 (UTC)
:: I would add that there is a potential wrinkle in this analysis. I'm an extended-confirmed user here (and thus would likely be counted as a non-admin), but I am a sysop on Commons so I would have my own perspective on the matter. [[User:Abzeronow|Abzeronow]] ([[User talk:Abzeronow|talk]]) 21:06, 22 November 2024 (UTC)
:Well, I'm not an admin and I started this thread. I'm all for having an admin recall process by the community in place. I'm also for a process for course correction by the community in areas where an admin has drifted off course but where the problem is fixable. Administrative Action Review has the potential to become this but that has been stymied by various things. Sincerely, <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 14:24, 20 November 2024 (UTC)
::I think, fundamentally, the problem is that admins have a direct and concrete conflict of interest in this discussion. Of course an admin would be naturally opposed to more mechanisms that might make them lose their permissions, especially since desysops are very rare at the moment.
::I also don't really agree that the current recall process is all that toxic. You could get rid of the discussion section, as the recall is only a petition, not a consensus discussion, but that's about it. [[User:Magischzwei|Magisch]] <sup>[[User talk:Magischzwei|talk to me]]</sup> 18:33, 20 November 2024 (UTC)
:::{{tq|Of course an admin would be naturally opposed to more mechanisms that might make them lose their permissions}} – I wholeheartedly disagree with this assertion. There are a number of us that fully support a recall process, including quite a few people who have historically been open to recalls. This is an oversimplification of the motives of a large group of experienced editors, many of whom have legitimate and reasonable concerns about the process in its current form. [[User:Hey man im josh|Hey man im josh]] ([[User talk:Hey man im josh|talk]]) 19:15, 20 November 2024 (UTC)
::::Substantially all criticism I've seen so far of the process has boiled down to "RFA is abusive and it's unreasonable to make people go through that again". And yet, instead of attempting to change that, the only suggestions seem to be to support older admins' rights to have their permissions continue being grandfathered in. [[User:Magischzwei|Magisch]] <sup>[[User talk:Magischzwei|talk to me]]</sup> 19:27, 20 November 2024 (UTC)
:::::I'm sorry that that's all you've taken away from the vast amounts of criticism given by people. Perhaps consider focusing on whether the process, in its current state, makes sense instead of focusing on older admins. I'm a relatively new admin and I don't support the current iteration of the process. [[User:Hey man im josh|Hey man im josh]] ([[User talk:Hey man im josh|talk]]) 19:30, 20 November 2024 (UTC)
::::::I think it's eminently sensible to have adminship not be a lifetime appointment, both because norms change even when people don't, and because I see people in every RFA expressing reluctance over granting lifetime tools. I also think that, assuming RFA isn't a big deal, regular reconfirmations make sense. If RFA ''is'' a big deal, then the focus should be on fixing that.
::::::It seems to me that existing admins being immune to having to suffer RFA again has created a lack of pressure to actually make it into a functional, nontoxic process.
::::::Take my opinion for what it's worth though. I'm not an admin nor do I foresee myself ever having aspirations to become one. [[User:Magischzwei|Magisch]] <sup>[[User talk:Magischzwei|talk to me]]</sup> 19:43, 20 November 2024 (UTC)
:::::Attempting to improve RFA is a ''very'' hard problem that people have been working on since before you joined Wikipedia, and are still working on. I would also say that {{tpq|it is unreasonable to make people go through that again}} is a mischaracterisation of the views expressed, which are {{tpq|it is unreasonable to make people go through that again unnecessarily}}, which is significantly different. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 19:31, 20 November 2024 (UTC)
::::::I just found out about this discussion, and it looks to me like the same or similar things are being discussed in way too many different places. Anyway, I'm someone who has stated repeatedly and strongly in multiple places that I think the recall process is a disaster, and is beyond repair. And, contra some statements above, here are some other facts about me. I'm not an admin. I opposed Graham's re-RfA. And I played a central role in [[WP:CDARFC]]. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 20:12, 20 November 2024 (UTC)
:I would be against it for a different reason: if we allow both supports and opposes, then the recall petition becomes a mini-RfA with the same amount of pressure as the RRfA itself (especially since, given the identical threshold, the recall's result would be indicative of the RRfA's subsequent result). Since anyone can start the recall petition, it functionally means that ''anyone'' can force an admin to re-RfA, which is clearly worse.<br>On the other hand, having a set number of supports needed provides for a "thresholding" of who can open a RRfA, while not necessarily being as stressful. If anything, I would say the recall should become more petition-like (and thus less stressful for the recalled admin), rather than more RfA-like. [[User:Chaotic Enby|<span style="color:#8a7500">Chaotic <span style="color:#9e5cb1">Enby</span></span>]] ([[User talk:Chaotic Enby|talk]] · [[Special:Contributions/Chaotic Enby|contribs]]) 20:01, 20 November 2024 (UTC)
:The ones most likely to be booted are bad admins who are abusive toward the editor community and who negatively represent themselves as admins. Both of the recalls thus far were just exact examples of that and worked perfectly as designed and needed. The process worked exactly as desired and removed bad admins who deserved to be desysopped. Though I do think the discussion section of the petitions should be more regulated. Discussion should be about the admin's actions and conduct and nothing else. Any extraneous commentary should be removed. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 00:23, 21 November 2024 (UTC)
::When I first started editing Wikipedia almost 20 years ago, I was struck by what, to me at least, appeared to be widespread incivility. Among a number of things which have changed for the better IMHO is an all round expectation that everyone's standards of behaviour should rise (and they have). The admin role breeds a certain "culture" (for lack of a better term) akin to a conservationist's: the role is to "protect" Wikipedia from "harm", and I can certainly see why being an admin could be a deeply frustrating experience. However, what has happened, I think, in the attrition of the admin corps, and the turnover in the non-admin corps, is that the generalised culture of "regular" non-admin editors has moved further forward towards less acceptance of a culture prevalent 10-15 years ago. I think also the rise in editors from non-English speaking backgrounds and from the Global South has caused complexities for those with limited experience outside the anglosphere. The statistics above on the vote for G87's RRFA show an interesting split between admins and non-admins, and within admins. Non-admins were almost overwhelmingly (close to 2/3) of the view that G87 had been given an almost exceptionally long period to improve, had not, and no longer held their trust. 5/8s of admins appeared (and comments here also seem to confirm this) split between solidarity for one of their own and displeasure with the recall process. 3/8s of admins were in alignment with the majority of non-admins. FWIW, I'm not trying to point to some grand schism; a 38/62 admin split on these numbers is not that profound - if just 9 admins had changed their vote from support to oppose it would have been a 50/50 split. To reiterate, I'm not suggesting that there is a great gap between admins and non-admins, but there does appear to be some gap when it comes to generalised views around the expected behaviour of admins.
Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 01:01, 21 November 2024 (UTC)
:::Maybe the divide is not between admins and non-admins but between newer and longer-serving editors (who are more likely to be admins)? [[User:Hawkeye7|<span style="color:#800082">Hawkeye7</span>]] [[User_talk:Hawkeye7|<span style="font-size:80%">(discuss)</span>]] 01:20, 21 November 2024 (UTC)
::::I don't disagree, and in effect I was sort of saying the same thing in terms of the attrition of the admin corps and turnover in non-admin corps. FWIW, I do think there are some generalised feelings about admins among non-admins; for example, that admins are less likely to face sanction than non-admins. How true that actually is I'm not sure, and the point would be that a group of people already tested in community trust (ie RFA) are less likely to breach that trust. However, comments in the G87 RRFA and the strength of the vote suggest there are (wrongly or rightly) widely felt perceptions of disparity. Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 01:53, 21 November 2024 (UTC)
::::I'm currently compiling the data to get some statistics about voters in Graham's re-RFA. I'm a bit less than halfway through so it might be a couple of days before I can present any results. However among the first 113 support voters the maximum account age (on the day the re-RFA started) was 7919 days (21 years), the minimum was 212 days and the average was 4785 days (13 years). I have no data yet for neutral or oppose voters so cannot say how that compares. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 02:03, 21 November 2024 (UTC)
:::::Do you have a handy list of all voters for RFA? It should be simple enough to use a [[WP:QUARRY]] to find out all details about the voters if someone finds an easy enough scrape of who each user is [[User:Soni|Soni]] ([[User talk:Soni|talk]]) 05:51, 21 November 2024 (UTC)
::::::@[[User:Soni|Soni]]: [https://apersonbot.toolforge.org/vote-history/?page=Wikipedia%3ARequests%20for%20adminship%2FGraham87%202]. [[User:Levivich|Levivich]] ([[User talk:Levivich|talk]]) 07:09, 21 November 2024 (UTC)
:::::::Here's the Quarry query editcount/registration date for [https://quarry.wmcloud.org/query/88070 Supports], [https://quarry.wmcloud.org/query/88072 Neutrals], [https://quarry.wmcloud.org/query/88073 Opposes].
:::::::I think about 6 editors were missed by the tool you linked, but it should not change overall patterns much so we can just use this as is. [[User:Soni|Soni]] ([[User talk:Soni|talk]]) 07:24, 21 November 2024 (UTC)
::::::::Prepare to not be surprised. Supporters/Opposers:
::::::::*Median registration date 2008/2014 <-- Behold, Wikipedia's generational shift
::::::::*Average registration date: 2011/2014
::::::::*Median edit count: 40,293/17,363
::::::::*Average edit count: 76,125/43,683
::::::::Thanks for doing the quarry. Teamwork makes the dream work! [[User:Levivich|Levivich]] ([[User talk:Levivich|talk]]) 05:17, 22 November 2024 (UTC)
:::::At a quick glance, it seemed like editors with more edits were more likely to support while editors with fewer edits (with one exception) were more likely to oppose. - [[User:Enos733|Enos733]] ([[User talk:Enos733|talk]]) 07:54, 21 November 2024 (UTC)
::::::Given a single admin action may involve multiple edits, it's not so surprising the supporters' list possibly reflects a group with higher edit counts. Personally, I'd be more inclined to draw conclusions from length of registration rather than edit count. Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 09:11, 21 November 2024 (UTC)
:::::::My very, very rapid count - supports 35/117 (30%) less than 10 years old, opposes 67/141 (48%) less than 10 years old. In absolute numbers, 10+ year accounts were 82 supports, 74 opposes - actually quite even. What was crucial was younger accounts. It does confirm my sense of gaps between "older" and "younger" generations in regard to perceptions of tolerable admin behaviour. Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 09:50, 21 November 2024 (UTC)
We have had two recalls as of now. The people signing the recall were by and large not trolls, vandals, people blocked by that admin, ... but regular editors in good standing and without a grudge. One of these recalls has been supported by the RRFA afterwards, and the other admin decided not to go for a RRFA. There is ''zero'' evidence that the process is flawed or leads to results not wanted by the community at large. While minor issues need working out (things like "should it be closed immediately the moment it reaches 25 votes or not"), the basic principles and method have so far not produced any reason to fundamentally "fix" the issue. That the process highlights a gap between parts of the community (see e.g. the Graham RRFA) doesn't mean that the process needs fixing. The process only would need fundamental fixing if we would get successful recalls which would then be overwhelmingly reversed at RRFA, showing that the recall was frivolous, malicious, way too easy... Not now though. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 09:24, 22 November 2024 (UTC)
: I agree with Fram. There is not any evidence that the recall process is reaching outcomes that are not supported by the Community (I voted Oppose on the Graham RRFA; I don't know how I would have voted on a Fastily RRFA). Small fixes to the process if supported would not be indicative of the process itself being fundamentally flawed. [[User:Abzeronow|Abzeronow]] ([[User talk:Abzeronow|talk]]) 21:15, 22 November 2024 (UTC)
:I agree that it just needs fixes.<b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 15:24, 23 November 2024 (UTC)
I believe that desysoppings for cause should only happen when there is objective evidence of misconduct. My main concern about the recall process is that it may be wielded against administrators who are willing to take actions that are controversial, yet necessary. Examples of actions that have got administrators hounded include (1) closing contentious and politically charged AFD discussions; (2) blocking an "[[WP:UNBLOCKABLE]]" editor who is being disruptive or making personal attacks; (3) stepping up to protect a politically charged article to stop an edit war. None of these actions are administrator misconduct, but in a heated dispute the side that has an admin rule in their disfavor may quickly resort to punishing said administrator by starting a recall petition, and in a dispute involving many editors, getting to 25 may be easy. Even if that petition fails, it is so unpleasant that it may have a chilling effect on admin involvement even when needed. [[User:Sjakkalle|Sjakkalle]] [[User talk:Sjakkalle|<small>(Check!)</small>]] 21:14, 23 November 2024 (UTC)
: In which case, a RRFA might be overwhelmingly in favor of the administrator and thus vindicate the administrator. I would definitely vote in support of an administrator if any of those three were the impetus behind a recall. I also trust our editors, and so far, the recall process has worked as intended. [[User:Abzeronow|Abzeronow]] ([[User talk:Abzeronow|talk]]) 21:50, 23 November 2024 (UTC)
:ArbCom have to face re-election. Does that have a chilling effect on the arbitrators? [[User:Hawkeye7|<span style="color:#800082">Hawkeye7</span>]] [[User_talk:Hawkeye7|<span style="font-size:80%">(discuss)</span>]] 21:48, 23 November 2024 (UTC)
:: That's a facile argument. Arbitrators are well aware that they are standing for a fixed term period. [[User_talk:Black Kite|Black Kite (talk)]] 21:50, 23 November 2024 (UTC)
::It's driving me up the wall that people keep saying that the process has worked as intended. Come back and tell me that, after you can link to an RRfA for Fastily that resulted in whatever result you define as working as intended. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 22:01, 23 November 2024 (UTC)
:::Choosing not to do an RRfA was their own choice, particularly if Fastily thought it wouldn't be successful. It was also their choice to make no attempt whatsoever to defend the reams of evidence presented against them in the recall petition of their negative actions toward the editing community. So, yes, Fastily as well was an example of the process working as intended. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 22:08, 23 November 2024 (UTC)
:::: Or perhaps they just thought "well, I've put XX years into this and a load of random people with rationales ranging from reasonable to utterly non-existent have told me I'm not fit to do it, so f*** you". If that's the case, I don't blame them. [[User_talk:Black Kite|Black Kite (talk)]] 22:13, 23 November 2024 (UTC)
:::::Maybe, maybe not. Probably not though right? Seems kind of silly. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 22:17, 23 November 2024 (UTC)
:::::: I suspect that might be my reaction, to be honest. [[User_talk:Black Kite|Black Kite (talk)]] 22:24, 23 November 2024 (UTC)
:::::::He was going to lose if he didn't apologize, and he didn't want to apologize. That simple. As others have said, that was his choice to make, and I respect it. [[User:Levivich|Levivich]] ([[User talk:Levivich|talk]]) 22:28, 23 November 2024 (UTC)
::::::::Except that he ''did'' apologize, although there were differing views of whether that apology was enough. This oversimplification is what's wrong with the way discussions happen in this process. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 22:34, 23 November 2024 (UTC)
:::::::::He woulda had to apologize more, then, including for the stuff that came out during the petition, and any other stuff that may have come out during the RRfA. He woulda had to answer questions about it, make promises, etc., basically go through what Graham went through, and realize that even that (answering questions, making promises) might not be enough (as it wasn't for Graham). It's not at all irrational for someone to choose not go through that. Being an admin isn't worth all that to some (e.g., to me), especially if you might not get it despite your best efforts. [[User:Levivich|Levivich]] ([[User talk:Levivich|talk]]) 22:44, 23 November 2024 (UTC)
::::::::::"Someone decided that it just isn't worth it" does not equal "the process worked". --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 22:47, 23 November 2024 (UTC)
:::::::::::No, those two things are not the same. If you want to know why I think the process worked, it's because it stopped disruption, did it faster than Arbcom, and I think with less drama (though admittedly the third one is purely subjective and speculative). [[User:Levivich|Levivich]] ([[User talk:Levivich|talk]]) 22:56, 23 November 2024 (UTC)
::::::::::::Um, thanks for sharing? --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 23:06, 23 November 2024 (UTC)
::::On the petition page, I conducted a careful analysis of the evidence. Nobody refuted what I said there. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 22:15, 23 November 2024 (UTC)
:::::Linking might help though. It doesn't seem to be on [[Wikipedia talk:Administrator recall/Graham87]], [[Wikipedia talk:Administrator recall/Fastily]], or on [[Wikipedia talk:Administrator recall]], so it's a bit hard to know what "the petition page" is. Do you mean your 00:39, 13 November 2024 (UTC) reply to A smart kitten? The one that ended with "Does this rise to the level of requiring, for me, a desysop? I'm leaning towards no." And others leaned towards "yes", it's not as if people couldn't draw different conclusions from your post or could disagree with things you said without actually replying directly to you. You didn't contradict the evidence, you personally didn't find it severe or convincing enough, that's all. That doesn't show that the process needs fixing though, just because enough people disagreed with your opinion and the result wasn't put to the test. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 09:28, 25 November 2024 (UTC)
::::::Fram, the context of what I said was clearer before there were all those intervening edits, but yes, you correctly identified the post I meant as the one that ended with the words that you quoted. Here's the diff: [https://en.wikipedia.org/enwiki/w/index.php?title=Wikipedia%3AAdministrator_recall%2FFastily&diff=1257054614&oldid=1257048515]. From where I'm sitting, your analysis here of how people reacted to what I posted is, well, not convincing enough. There was ''a lot'' of discussion about the evidence that I analyzed, back and forth. When the editor (A smart kitten) who originally posted the evidence came back with the additional information that I requested, the discussion was still very active. I provided a very detailed examination, point-by-point, of each individual claim made in that evidence. Yes, it was based upon my opinions, but I drew specific conclusions, and justified those conclusions. And nobody came back and said that they thought anything in my analysis was incorrect, nor did anyone who signed on the basis of that evidence before my comment come back and reaffirm their signature, rejecting my analysis. If you think somebody actually did, you can provide a diff of it, but I can assure you that you won't find one. And that wasn't because the petition discussion had come to a close, because it continued for several more days after I posted that. After a whole lot of back-and-forth about that particular evidence, nobody said that they found errors in anything that I said. But a couple more editors ''did'' sign the petition after that, with brief comments saying, in some cases, that they decided to sign after reading that particular evidence.
::::::So the question, in the light of your comment to me, becomes whether those later signers did so because they carefully read all of the discussion, including my critique, and decided to sign, implicitly having decided that my critique was unconvincing – or whether they signed after only a superficial read and had never really engaged with my critique. I cannot ''prove'' that it was the latter, and you cannot ''prove'' that it was the former. But given that their signatures came only with brief comments, and nobody found reason to actually mention that they had rejected my critique, I'm pretty skeptical of the former. And ''that's'' a problem. The petition process does not, of course, ''require'' that anyone had to say explicitly that they disagreed with me, either, but that's a shortcoming of the discussion process. A desysop via ArbCom makes room for careful examination of the facts. The petition did not. This is a half-assed way of driving someone off Wikipedia. And I'm arguing for a more deliberative process. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 18:55, 25 November 2024 (UTC)
*I have to say I don’t get the recall process either. I support admin accountability but just having an arbitrary number of “support” votes, no “oppose” votes, and I guess a time limit instead of consensus forming seems… extremely weird and out of step with how virtually everything else is done on Enwiki. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 10:56, 24 November 2024 (UTC)
*:The intended point of the recall petition is not to find consensus or to determine whether the admin has lost the trust of the community, has abused the tools or anything like that. The intended point of the petition is only to prove that a re-RFA is not frivolous. The re-RFA is where consensus is formed from support and oppose, analysis of evidence, etc. Think of it in judicial terms: the petition is at the pre-trial stage and simply aims to answer the question "are there 25 people who think there is a case to answer?" If the answer is no, then it ends there. If the answer is yes, then you can plead innocent or guilty. If you plead guilty you take the sentence (desysopping) and move on. If you plead innocent there is a trial and the jury finds you either innocent or guilty by majority verdict. This is an imperfect analogy of course, but it hopefully helps explain the concept.
*:It didn't work like that in either of the two that we've had, but that's a fault with the implementation not with the concept. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 12:57, 24 November 2024 (UTC)
*::The problem is, the concept itself makes no sense. Nearly everything on Wikipedia is decided one of three ways: [[consensus democracy]] that must be approved/vetoed by an admin (most non-trivial issues); [[WP:BOLD]] editing, informal discussion, or admin fiat (trivial issues); or arbitration (extreme fringe cases). This resembles none of those. It’s like arbitration, only everyone can be an arb, and instead of voting yay or nay to take the case you collect signatures to see if there’s general support for a case? [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 13:11, 24 November 2024 (UTC)
*:::The request stage of arbitration is the closest analogy, but it is indeed a process not used anywhere else on Wikipedia. That doesn't mean it doesn't make sense. Its sole purpose is to act as a check against frivolous requests, so that an admin doesn't have to go through a re-RFA just because they pissed off a single editor once by making an objectively correct decision. The actual decision is intended to be made by consensus democracy at the re-RFA. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 13:33, 24 November 2024 (UTC)
*::::I think a limited vote based on a formula like “after 7 days a minimum of 2/3rds of people must support for re-RFA” would be less opaque than trying to start a Wiki-[[Minyan]]? [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 09:26, 25 November 2024 (UTC)
*:::::That sounds like skipping the petition and going right to the RRFA, or running two successive RRFAs. I have not been involved in any of this, but it is not really hard to understand why there is the two-step process of: 1) calling the question, and 2) deciding the issue. [[User:Alanscottwalker|Alanscottwalker]] ([[User talk:Alanscottwalker|talk]]) 11:52, 25 November 2024 (UTC)
*::::::Honestly I think it should just go straight to RRFA, and if there’s enough opposition fast enough it can just be [[WP:SNOW]] closed. We don’t, for example, ask for 25 signatures to start an AfD discussion in order to weed out frivolous nominations— it’s patently obvious when a nomination is garbage in most cases. RRFA is clearly a last resort, and no established, good faith user is likely to abuse this kind of process so egregiously we need a two-step failsafe. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 12:03, 25 November 2024 (UTC)
*:::::::In other words any user should be able to start a binding RRFA on any admin at any time? No, no thank you... &ndash;&#8239;[[User:Joe Roe|Joe]]&nbsp;<small>([[User talk:Joe Roe|talk]])</small> 12:16, 25 November 2024 (UTC)
*::::::::Not ''any time'', there should be a policy that steps ''must'' already have been taken and failed, ideally multiple times, similar to ArbCom. And not ''any user'', since the starter should probably be autoconfirmed at the absolute minimum, and probably be required to be in good standing, have X edits, have been on WP X years, and have been active during the last year. If it was unambiguously required that an RRFA follow these rules or be rejected (with filing an improper case being a sanctionable offense) I don’t think anyone would realistically start a frivolous case. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 12:33, 25 November 2024 (UTC)
*:::::::Well, we also don't require a !vote to create an article but we do for an admin. I also don't think it is likely that 'any experienced user' has experience in making an RRFA -- [[User:Alanscottwalker|Alanscottwalker]] ([[User talk:Alanscottwalker|talk]]) 12:34, 25 November 2024 (UTC)
*::::::::An admin is essentially just voted into office; they should be voted out of office in an identical way. There’s no need for some kind of novel additional process on top of that. That’s all I’m saying. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 12:55, 25 November 2024 (UTC)
:I think the basic complaint here is that the 25-vote threshold is too easy to meet, and therefore it is unfair to require an affirmative consensus for the admin to retain the tools. I think the 25-vote threshold is fine for weeding out frivolous nominations, but correspondingly I think we should make it harder to remove adminship, i.e. make 50-60% the discretionary range for ''removing'' adminship. This would make it in line with most of our other processes, where a slight supermajority is required to make changes, and no consensus defaults to the status quo. Whereas under the current recall system, 25 votes with no opportunity to object are enough to make removal of adminship the status quo, which seems a bit harsh. -- [[User:King of Hearts|<b style="color:red">King of ♥</b>]][[User talk:King of Hearts|<b style="color:red"> ♦</b>]][[Special:Contributions/King of Hearts|<b style="color:black"> ♣</b>]][[Special:EmailUser/King of Hearts|<b style="color:black"> ♠</b>]] 19:53, 25 November 2024 (UTC)
::I think the 25-vote threshold, because it’s so easy to meet, is essentially pointless because it will only weed out extreme outlier cases that I don’t believe will ever happen enough to be a serious concern. We should ''just'' have a supermajority vote requirement, and if we ''must'' have a petition it should be a lot higher than 25. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 16:06, 27 November 2024 (UTC)
:::We don't have evidence the 25-vote threshold is easy to meet. Of the two recalls, one only hit 25 due to a bad block ''during'' the petition period. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 16:14, 27 November 2024 (UTC)
::::One more reason I don’t like this: it’s extremely important, but we’re using it to prototype this weird system not used anywhere else on Enwiki and possibly Wikimedia (if you have examples of off-wiki precedent please share them). [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 16:18, 27 November 2024 (UTC)
:::::Have to try new things at some point. But CMD is right: from all the evidence we do have, it looks about right, whereas there is zero evidence that a higher number is required or helpful. [[User:PackMecEng|PackMecEng]] ([[User talk:PackMecEng|talk]]) 17:09, 27 November 2024 (UTC)
::::::It's usually called [[Approval voting]] when it's used, though that might not be precisely the right name. It's used all over the Wikimedia movement. At least until recently, both grant requests and the (technical) community wishlist used petition-like voting processes that encouraged support and disregarded opposition votes. That is, if there were 25 people supporting something and you showed up to say "* '''Oppose''' because WMF Legal will have a heart attack if you do this", then the request might be rejected because of the information you provided, and your comment might change the minds of potential/future supporters, but it would never be counted as a vote of 25 to 1. It's still counted as a list of 25 supporters. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 18:53, 27 November 2024 (UTC)
:::::::The original Phase I proposal was directly written as adapting dewiki's recall policies into enwiki. I believe the Italian Wikipedia also has a threshold-to-RRFA style process. And I think Spanish too? I might be getting some projects confused. But it's directly used in recall on other projects &ndash; that's how it was recommended here (and then adapted after). [[User:Soni|Soni]] ([[User talk:Soni|talk]]) 18:58, 27 November 2024 (UTC)
:::::Arbitration election commissioners are chosen by collecting solely supporting statements. Once upon a time, the arbitration election RFCs also consisted of proposals that commenters approved, without any option to oppose. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 19:50, 27 November 2024 (UTC)


'''Do my policy contentions hold any weight? Or am I mad? Do I have any path forward, here?'''

TO BE CLEAR, I am not alleging bad faith on behalf of anyone, and I am extremely grateful to all those who have been involved, particularly the RM closer that I mentioned, as well as the MRV closer, [[User:ModernDayTrilobite|ModernDayTrilobite]]. I would like to make it clear that this isn't simply a case of an MRV 'not going my way.' Again, I am concerned with the precedent and with the onus having been shifted to me for ''months''. I also apologize for the delay in getting this here; I originally [https://en.wikipedia.org/wiki/Wikipedia:Dispute_resolution_noticeboard#AIM-174B stopped over at the DRN] but [[USER:Robert McClenon|Robert McClenon]] kindly suggested I instead post here. [[User:MWFwiki|MWFwiki]] ([[User talk:MWFwiki|talk]]) 00:08, 12 December 2024 (UTC)
:Are you familiar with [[Wikipedia:Article titles#Considering changes]]? Do you [[Chesterton's Fence|think you understand why that rule exists]]? [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 23:31, 12 December 2024 (UTC)
::I am quite familiar with it. It seemingly supports my argument(s), so...? Is there a particular reason you're speaking in quasi-riddles? [[User:MWFwiki|MWFwiki]] ([[User talk:MWFwiki|talk]]) 01:11, 13 December 2024 (UTC)
:::If yours is the title favored by the policy, then none of this explanation makes any difference. You just demand that it be put back to the title favored by the policy, and editors will usually go along with it. (It sometimes requires spelling out the policy in detail, but ultimately, most people want to comply with the policy.)
:::If yours is not the title favored by the policy, then the people on the other 'side' are going to stand on policy when you ask to move it, so you'd probably have to get the policy changed to 'win'. If you want to pursue that, you will need to understand why the rule is set this way, so that you have a chance of making a convincing argument. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 05:24, 13 December 2024 (UTC)
::::I think several individuals involved in this process have agreed that the default title is the favored title, at least as far as [[WP:TITLECHANGES]], as you say.<br />(The only reason I listed any further ‘litigation’ here is to show what was being discussed in general for convenience’s sake, not necessarily to re-litigate.) <br />However, at least two individuals involved have expressed to me that they felt their hands were tied by the RM/MRV process. Otherwise, as I mentioned (well, as [[User:Bobby_Cohn|Bobby_Cohn]] mentioned) the train of thought seemed to be “well, I don’t want the title to be changed,” and this was seemingly enough to override policy. Or, at best, it was seemingly a “well, it would be easier to just leave it as-is” sort of decision.
::::<br />
::::And again, I, 100%, should have been more forceful; the title should have been reverted per the initial “no consensus” RM closure, and I will certainly bear your advice in mind in the future. That said, I suppose what I am asking is: would it be inappropriate to ask the original RM closer to revert the article at this point, given how much time has passed?
::::<br /> [[User:MWFwiki|MWFwiki]] ([[User talk:MWFwiki|talk]]) 06:29, 13 December 2024 (UTC)
:::::Given what was written in [[Talk:AIM-174B#Requested move 20 September 2024]] six weeks ago, I think that none of this is relevant. "Consensus to keep current name" does not mean that you get to invoke rules about what happens when there is no consensus. I suggest that you give up for now, wait a long time (a year? There is no set time, but it needs to be a l-o-n-g time), and ''maybe'' start a new [[Wikipedia:Requested moves]] (e.g., in 2026). [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 19:41, 13 December 2024 (UTC)
::::::Thanks! [[User:MWFwiki|MWFwiki]] ([[User talk:MWFwiki|talk]]) 05:09, 14 December 2024 (UTC)
:Everything ModernDayTrilobite advised you of is correct. Vpab15 closed the RM and determined that consensus was reached. Nothing since then has overturned or otherwise superseded Vpab15's closure. Therefore that closure remains in force. You already challenged the validity of Vpab15's closure at move review, and you have no avenue for challenging it again. Your best bet is to wait a tactful amount of time (several months) before starting another RM. And in that RM, none of this procedural stuff will matter, and you will be free to focus just on making the clearest, simplest case for why AIM-174 is the best title. [[User:Adumbrativus|Adumbrativus]] ([[User talk:Adumbrativus|talk]]) 06:10, 13 December 2024 (UTC)
::I suppose my issue is better summed up by my above discussion with [[User:WhatamIdoing|WhatamIdoing]]; the MRV ''shouldn't'' have been required. That burden should never have been on me. The title should have been reverted at the initial “no consensus” per [[WP:TITLECHANGES]]. Otherwise, undiscussed moves — when challenged — may now be upheld by ''either'' consensus ''or'' no consensus? This is not what WP:TITLECHANGES says, obviously. That said, I take full responsibility for not being clearer with this argument, and instead focusing on arguing for a ‘different’ title, when I should have been arguing for the default title per TITLECHANGES. [[User:MWFwiki|MWFwiki]] ([[User talk:MWFwiki|talk]]) 06:33, 13 December 2024 (UTC)
:::You've repeatedly pointed to the initial self-reverted closure as if it's somehow significant. It isn't. Asukite voluntarily decided to close the discussion, and voluntarily self-reverted their decision to close. It doesn't matter whether you asked for it or someone else asked or no one asked. They had the right to self-revert then, for any reason or no reason. The net result is the same as if Asukite had never closed it at all. Only Vpab15's closure, which was 100% on Vpab15's own authority and 0% on the supposed authority of the annulled earlier closure, is binding. [[User:Adumbrativus|Adumbrativus]] ([[User talk:Adumbrativus|talk]]) 09:22, 13 December 2024 (UTC)
::::I don't disagree with your latter statement, but why would an initial finding of no consensus not matter? It should have brought us back to the default title, not simply been reverted. Because that policy wasn't followed, I'm here now, is my point. Regardless, I understand; thank you for your advice! Well, I appreciate your time and consideration! :-) [[User:MWFwiki|MWFwiki]] ([[User talk:MWFwiki|talk]]) 05:08, 14 December 2024 (UTC)
:(Involved at the MRV) Seeing as I've been tagged in this multiple times and quoted, I'll give my thoughts on this. I don't want to accuse [[User:MWFwiki|MWFwiki]] of selectively quoting me but I do think that my quote above was, when taken into account with the following discussion, more about meta-conversation about the correct policy to implement in the event the MRV went the other way. I explicitly said in the immediately following message {{tpq|the view that the close was not outside the scope of [[WP:RMCI]] is reasonable and good faith interpretation.}} I do think this close was within bounds, and the following MRV appropriately closed and summarised.
:Yes, had [https://en.wikipedia.org/enwiki/w/index.php?title=Talk:AIM-174B&oldid=1249400556 the original close of no consensus] stood, then it could have been reverted whole cloth. [https://en.wikipedia.org/enwiki/w/index.php?title=Talk:AIM-174B&diff=next&oldid=1249400640 It was self-reverted] and therefore plays no role in the consideration of the subsequent closure. We're always going to take the most recent finding of consensus to be what holds. It seems to have been said in the above that had the no consensus closure held and the appropriate [[WP:RMNCREV]] policy been applied, then the appellant here would have gotten their preferred outcome. But to continue to argue this in the face of the subsequent developments is where this enters [[WP:wikilawyering|wikilawyering]] territory. I think that since then, the appellant has continued to make policy arguments that would be better suited for a subsequent and focused RM on the actual title rather than wikilawyer about a previous close that was self-reverted and continuing to argue policy.
:There's nothing for this venue to really change in regards to that AT, and the discussion to change the AT would need to be had at the article's talk page. My sincere advice to the appellant is to wait a reasonable amount of time and make strong policy-based arguments about the preferred title (don't just quote policy, we editors are good at clicking links and reading it for ourselves&mdash;quoting nothing but policy back at us makes us feel like you've taken us for fools; instead provide facts and sources that support the relevant policies and link those). Spend some time at [[WP:RMC]] and see what well-argued and successful RMs typically look like. [[User:Bobby Cohn|Bobby Cohn]] ([[User talk:Bobby Cohn|talk]]) 17:38, 17 December 2024 (UTC)

== the REGIME test ==
{{Archive top|status=Closed|result=I am [[WP:BOLD|boldly]] closing this discussion as the proposal [[WP:SNOWBALL|has no chance of passing]] as demonstrated by the responses already provided. [[User:ElKevbo|ElKevbo]] ([[User talk:ElKevbo|talk]]) 14:55, 24 November 2024 (UTC) }}

* That any news outlet or source that refers to a government as a "''regime''" be considered not reliable for facts ''about that regime'', except for attributed statements.
* That a list be kept and updated, similar to [[WP:RS/Perennial sources]]
[[User:Skullers|Skullers]] ([[User talk:Skullers|talk]]) 04:03, 20 November 2024 (UTC)

:Why do we want to only use sources that haven't noticed that a regime is a regime? -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 04:09, 20 November 2024 (UTC)
:This would, for example, rule out using a significant proportion of reliable sources covering contemporary North Korea, Afghanistan, Cuba and Iran as well as countless historical governments (e.g. Saddam Hussein's Iraq, Franco's Spain, Gaddafi's Libya, etc). This clearly hasn't been fully thought through. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 04:17, 20 November 2024 (UTC)
::Well, it might have been thought through if the idea is to exclude sources critical of said regimes, e.g. [https://www.bbc.com/news/articles/cvg0qz157yyo Activist takes own life in protest at Iranian regime] (BBC). Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 06:57, 20 November 2024 (UTC)
:::That would be a gratuitous failure of NPOV. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:04, 20 November 2024 (UTC)
::::In heated agreement. Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 01:57, 21 November 2024 (UTC)
:Bad idea. A biased source does not mean an unreliable one. See [[WP:BIASED]]. However, it is indeed a good indicator that in-text attribution may be needed. [[User:Ca|Ca]] <i><sup style="display:inline-flex;rotate:7deg;">[[User talk:Ca|talk to me!]]</sup></i> 15:00, 20 November 2024 (UTC)
::I think this does get at ''something'' which is a problem in Wikipedia. It just doesn't quite hit the mark. And that is that there is a core assumption in Wikipedia's handling of news media sources that they are largely independent and that a deviation from editorial independence represents a deviation from best practices. However this often leads to Wikipedia simply assuming the biases of the New York Times and other major media outlets. But there has been an accumulation of multitudinous issues - one of the most recent being accounts of Jeff Bezos influencing the Washington Post to withhold an endorsement of Kamala Harris - that demonstrate that the ''idea'' of editorial independence is frankly quaint.
::<br>
::This, of course, then creates problems with adjudicating those sources that have previously been demonstrated to be non-independent (see for example [[WP:XINHUA]]) as the rationale on Wikipedia for treating Xinhua differently from, let's say, the BBC or Al Jazeera for that matter largely depends upon the assumption of independence of those outlets that are not aligned with enemy states of the US/UK hegemony.
::<br>
::My personal opinion is that the use of news sources on an encyclopedia should be far more limited than it presently is as, in my case, it's not that I trust Xinhua (I don't) but that I don't trust ''any'' media outlet to produce material appropriate for a neutral encyclopedia. I don't think a "regime" test is going to improve the quality of pages that over-rely on news media. But I would suggest that it's another indication that Wikipedia needs to be far more critical of what news sources we depend on and in what contexts. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 19:54, 20 November 2024 (UTC)
:::No, editorial independence is ''not'' the reason for a source being considered reliable or not. Many sources are biased, or influenced by specific governments/interest groups, and are still considered reliable for topics other than the groups influencing them (in which case, by definition, they would not be an [[Wikipedia:Independent sources|independent source]]). A history of disinformation (actually making up stuff, not just reporting it in a biased way) pushes the source towards being considered unreliable.{{pb}}[[WP:XINHUA]], which you link, demonstrates this clearly, stating {{tq|There is consensus that Xinhua is generally reliable for factual reporting except in areas where the government of China may have a reason to use it for propaganda or disinformation.}} In the same way, we shouldn't rely on the ''Washington Post'' for topics related to Jeff Bezos. [[User:Chaotic Enby|<span style="color:#8a7500">Chaotic <span style="color:#9e5cb1">Enby</span></span>]] ([[User talk:Chaotic Enby|talk]] · [[Special:Contributions/Chaotic Enby|contribs]]) 20:07, 20 November 2024 (UTC)
::::The example I gave wasn't one of a story about Jeff Bezos or a topic related to Jeff Bezos unless one contends (which, I will grant there's a case to be made) that anything to do with a US election is ultimately about the interests of the Billionaire class. But, you see, that's my point. Pretty much any media outlet will distort truth, spread disinformation or, at the most basic, bury stories that aren't to the interests of their handlers. And I do want to stress that ''the stories that are not covered is a key method through which media occludes truth.'' The only real question is whether the handler is a politbureau or a rich guy. I don't think one of those is better than the other. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 20:12, 20 November 2024 (UTC)
:::::The fact that a news outlet is influenced to not publish a story makes it biased, but not unreliable. Having a point of view when reporting (or choosing not to report) stories is what every media outlet does, and is different from outright making up disinformation. And that is the difference between bias and unreliability. It's not about who the handler is, [[Rupert Murdoch|rich guys]] can also own unreliable news sources. [[User:Chaotic Enby|<span style="color:#8a7500">Chaotic <span style="color:#9e5cb1">Enby</span></span>]] ([[User talk:Chaotic Enby|talk]] · [[Special:Contributions/Chaotic Enby|contribs]]) 20:16, 20 November 2024 (UTC)
::::::I mean we certainly agree about that rich guy. I just think Wikipedia is too fast to treat news sources as reliable out of convenience rather than any real confidence in the quality of information. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 20:20, 20 November 2024 (UTC)
:::::::I agree with [[User:Simonm223|Simonm223]]. I just can't understand why an encyclopedia should be largely based on news sources rather than peer-reviewed academic articles or books. For a start most of them are primary sources, by any definition other than Wikipedia's. This is dumbing-down at its worst. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 21:09, 20 November 2024 (UTC)
::::::::Exactly, yes. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 22:42, 20 November 2024 (UTC)
:::::::I agree, our articles on [[Donald Trump]] and [[Joe Biden]], for example, would do better citing academic sources than news outlets. [[User:Ca|Ca]] <i><sup style="display:inline-flex;rotate:7deg;">[[User talk:Ca|talk to me!]]</sup></i> 02:21, 21 November 2024 (UTC)
:See [https://www.merriam-webster.com/dictionary/regime the definition (specifically 2(c) and 2(d))]. Regime is a synonym for "administration" or "government" (when used to describe, as example, the Biden administration or the Tory government). It makes zero sense whatsoever to block sources who use a synonym for administration just because one person feels it has negative connotations. Wikipedia is not the place to practice redefining words or limiting their use based on their worst definitions or connotations. -bɜ:ʳkənhɪmez &#124; [[User:berchanhimez|me]] &#124; [[User talk:berchanhimez|talk to me!]] 05:03, 21 November 2024 (UTC)
::Prescriptivism is dead. See examples. There is zero percent usage in modern times that isn't derogatory; literally no one says unironically "our regime", "the regimes of our allies", or "regimes we'd like to do business with". [[User:Skullers|Skullers]] ([[User talk:Skullers|talk]]) 08:29, 21 November 2024 (UTC)
:::I agree inasmuch as "government" would always be a better term in any use case I can think of.
:::However, your polemics here have been consistently superficial and unhelpful. It seems almost self-parody to aphorize "prescriptivism is dead" amid seeking to categorically deprecate sources based on the sole criterion of whether they use a particular word, citing what you feel is the only correct definition of said word in practice. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 09:03, 21 November 2024 (UTC)
::::The attraction of the word "regime" to headline writers is often that it is simply shorter than "government" or "administration", rather than anything to do with its connotations. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 09:48, 21 November 2024 (UTC)
:::::Exactly my point. -bɜ:ʳkənhɪmez &#124; [[User:berchanhimez|me]] &#124; [[User talk:berchanhimez|talk to me!]] 23:24, 21 November 2024 (UTC)
:What is the rationale for this proposal? Is there a specific source or incident that prompted it? [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 01:09, 22 November 2024 (UTC)
:While I understand the rationale for this proposal, IMO it goes way too far. I would agree that it's important to keep in mind when a source is using biased language and consider using in-text attribution in these cases, but certainly it's not worth a blanket ban.
:Furthermore, it's often the case that when the news media uses negative language about a topic, that's because that negative language is the consensus. For instance, nobody would really question the phrase "the Nazi regime" or even probably "the genocidal Nazi regime" from a reliable source, and for good reason. When everyone agrees on a [[WP:LABEL|contentious label]] that implies that in that specific case the label is not, in fact, contentious. [[User:LokiTheLiar|Loki]] ([[User talk:LokiTheLiar|talk]]) 01:21, 22 November 2024 (UTC)
*This proposal is rather absurd. You can’t declare a source unreliable based on ''a word'', especially one that’s frequently used as a harmless rhetorical flourish. What should we ban next? Sources that use swearing? Sources that use subjective adjectives like “best” or “amazing”? [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 13:16, 24 November 2024 (UTC)
:I say we should also ban all sources that use the word "slam". Equally absurd, but more likely to actually hit unreliable sources. '''[[User:Lee Vilenski|<span style="color:green">Lee Vilenski</span>]] <sup>([[User talk:Lee Vilenski|talk]] • [[Special:Contribs/Lee Vilenski|contribs]])</sup>''' 13:58, 24 November 2024 (UTC)
::Presumably excluding sports uses? We definitely need sources that report on grand slams. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 14:40, 24 November 2024 (UTC)
{{Archive bottom}}

== CSD A12. Substantially written using a large language model, with hallucinated information or fictitious references ==
{{Archive top|status=withdrawn|result=Per the solution to the problem already being covered by [[WP:DRAFTREASON]], something I was unaware of when I made the proposal. [[User:Svampesky|<span style="color: #008080">Svampesky</span>]] ([[User talk:Svampesky|<span style="color: #008080">talk</span>]]) 15:49, 14 December 2024 (UTC)}}
When fixing up new articles, I have encountered articles that appear to have been substantially generated by AI, containing [[Hallucination (artificial intelligence)|hallucinated]] information. While these articles may not meet other criteria for speedy deletion, as the subjects themselves are sometimes real and notable, waiting for seven days to [[WP:PROD|PROD]] the articles is inefficient. I recommend designating [[WP:A12]] for the speedy deletion of these articles. I have created a template ([[User:Svampesky/Template:Db-a12]]) for use if the proposal is successful. A recent example is the article on the [[Boston University Investment Office]], where the author explicitly [[Special:Diff/1262319695|disclosed]] that it was created using a large language model and it contains references to sources that don't exist. I initially G11'd it, as that seemed the most appropriate criterion, but the tag was declined, and the article was subsequently PRODed. [[User:Svampesky|<span style="color: #008080">Svampesky</span>]] ([[User talk:Svampesky|<span style="color: #008080">talk</span>]]) 21:13, 12 December 2024 (UTC)

:CSD are generally limited to things that are unambiguously obvious. I imagine the number of cases in which it's unambiguously obvious that the entire page was generated by an LLM (as opposed to the editor just using the LLM to generate references, for example) is small enough that it doesn't warrant a speedy deletion criterion. <span class="nowrap">--[[User:Ahecht|Ahecht]] ([[User talk:Ahecht|<b style="color:#FFF;background:#04A;display:inline-block;padding:1px;vertical-align:middle;font:bold 50%/1 sans-serif;text-align:center">TALK<br />PAGE</b>]])</span> 21:29, 12 December 2024 (UTC)
::I like this idea but agree that it's better not as a CSD but perhaps its own policy page. '''[[User:AndreJustAndre|Andre]]'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">[[User_talk:AndreJustAndre|🚐]]</span> 21:33, 12 December 2024 (UTC)
:::I don't think it even merits a policy page. The number of cases where the LLM use is objectively unambiguous, ''and'' the article content sufficiently problematic that deletion is the only appropriate course of action ''and'' it cannot be (speedily) deleted under existing policy is going to be vanishingly small. Even the OP's examples were handled by existing processes (PROD) sufficiently. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:11, 12 December 2024 (UTC)
:@[[User:Svampesky|Svampesky]], when you say that [[Wikipedia:Proposed deletion]] is "inefficient", do you mean that you don't want to wait a week before the article gets deleted? [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 23:32, 12 December 2024 (UTC)
::My view is that [[Wikipedia:Proposed deletion]] inefficient for articles that clearly contain hallucinated LLM-generated content and fictitious references (which almost certainly will be deleted) in the mainspace for longer than necessary. [[User:Svampesky|<span style="color: #008080">Svampesky</span>]] ([[User talk:Svampesky|<span style="color: #008080">talk</span>]]) 00:03, 13 December 2024 (UTC)
:::Efficiency usually compares the amount of effort something takes, not the length of time it takes. "Paint it and leave it alone for 10 minutes to dry" is the same amount of hands-on work as "Paint it and leave it alone for 10 days to dry", so they're equally efficient processes. It sounds like you want a process that isn't less hands-on work/more efficient, but instead a process that is faster.
:::Also, if the subject qualifies for an article, then deletion isn't necessarily the right solution. Blanking bad content and bad sources is officially preferred (though more work) so that there is only verifiable content with one or more real sources left on the page – even if that content is only a single sentence.
:::Efficiency and speed is something that many editors like. However, there has to be a balance. We're [[WP:HERE]] to build an encyclopedia, which sometimes means that rapidly removing imperfect content is only the second or third most important thing we do. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 00:43, 13 December 2024 (UTC)


* This part {{tq|as the subjects themselves are sometimes real and notable}} is literally an inherent argument against using CSD (or PROD for that matter). [[WP:TNT]] the article to a sentence if necessary, but admitting that you're trying to delete an article you know is notable just means you're admitting to vandalism. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 00:07, 13 December 2024 (UTC)
==Discussion at [[:Wikipedia talk:Criteria for speedy deletion#RfC: Enacting T5 (unused template subpages)|Wikipedia talk:Criteria for speedy deletion §&nbsp;RfC: Enacting T5 (unused template subpages)]]==
*:The categorization of my proposal as {{tq|admitting to vandalism}} is incorrect. [[WP:G11]], the speedy deletion criterion I initially used for the article, specifies deleting articles that {{tq|would need to be ''fundamentally'' rewritten to serve as encyclopedia articles}}. Articles that have been generated using large language models, with hallucinated information or fictitious references, would need to be fundamentally rewritten to serve as encyclopedia articles. [[User:Svampesky|<span style="color: #008080">Svampesky</span>]] ([[User talk:Svampesky|<span style="color: #008080">talk</span>]]) 00:42, 13 December 2024 (UTC)
[[File:Symbol watching blue lashes high contrast.svg|25px|link=|alt=]]&nbsp;You are invited to join the discussion at [[:Wikipedia talk:Criteria for speedy deletion#RfC: Enacting T5 (unused template subpages)|Wikipedia talk:Criteria for speedy deletion §&nbsp;RfC: Enacting T5 (unused template subpages)]]. <b>[[User:HouseBlaster|House]][[Special:Contributions/HouseBlaster|<span style="color:#7D066B;">Blaster</span>]]</b>&nbsp;([[User talk:HouseBlaster|talk]]&nbsp;•&nbsp;he/they) 03:01, 25 November 2024 (UTC)<!-- [[Template:Please see]] -->
*::Yes, but G11 is looking for blatant advertising ("Buy widgets now at www.widgets.com! Blue-green widgets in stock today!") It's not looking for anything and everything that needs to be fundamentally re-written. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 00:45, 13 December 2024 (UTC)
*::(Edit Conflict) How does G11 even apply here? Being written via LLM does not make an article "promotional". Furthermore, even that CSD criteria states {{tq|If a subject is notable and the content could plausibly be replaced with text written from a neutral point of view, this is preferable to deletion.}} I.e. TNT it to a single sentence and problem solved. [[User:Silver seren|<span style="color: dimgrey;">Silver</span>]][[User talk:Silver seren|<span style="color: blue;">seren</span>]]<sup>[[Special:Contributions/Silver seren|C]]</sup> 00:46, 13 December 2024 (UTC)
*The venue for proposing new criteria is at [[Wikipedia talk:Criteria for speedy deletion]]. So please make sure that you don't just edit in a new criterion without an RFC approving it, else it will be quickly reverted. [[User:Graeme Bartlett|Graeme Bartlett]] ([[User talk:Graeme Bartlett|talk]]) 00:20, 13 December 2024 (UTC)
*:Since we are talking about BLPs… the harm of hallucinated information does need to be taken very seriously. I would say the first step is to stubbify.
*:However, Deletion can be held off as a potential ''second'' step, pending a proper BEFORE check. [[User:Blueboar|Blueboar]] ([[User talk:Blueboar|talk]]) 01:06, 13 December 2024 (UTC)
*::If the hallucination is sufficiently dramatic ("'''Joe Film''' is a superhero action figure", when it ought to say that he's an actor who once had a part in a superhero movie), then you might be able to make a good case for {{tl|db-hoax}}. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 05:26, 13 December 2024 (UTC)
*:::I have deleted an AI generated article with fake content and references as a hoax. So that may well be possible. [[User:Graeme Bartlett|Graeme Bartlett]] ([[User talk:Graeme Bartlett|talk]]) 12:23, 13 December 2024 (UTC)


:Isn't this covered by [[WP:DRAFTREASON]]? [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 20:34, 13 December 2024 (UTC)
== Information on cross-wiki article creation ==
{{Archive bottom}}


== AFD clarification ==
The [[Harald Winter]] article was created by [[User:X3ntar|X3ntar]] as a port from the German Wikipedia article (found here: [[Harald Winter]]). The English article consists primarily of poor English translation and promotional content, and when I was looking through the history of the article, all I saw originally were red-linked accounts created a short while before their edits to the article, leading me to begin researching to source a [[WP:SPI]] case. After almost an hour of looking into this, I don't think this is canvassing, meatpuppetry, or anything like that. More likely it's a case of German editors wanting to update the English version of the article. However, I couldn't find any policies or essays that gave advice on how to handle cross-wiki contributions or page creations. Is there a common consensus reached prior? [[User:Sirocco745|Sirocco745]] ([[User talk:Sirocco745|talk]]) 04:59, 25 November 2024 (UTC)


The [[Articles for deletion]] article states that:
:This doesn't happen very often, so I don't think there are any advice pages. In general, it would be a lovely thing if people who created an article in one language could then do a semi-decent translation into another language.
{{TQ|If a redirection is controversial, however, AfD may be an appropriate venue for discussing the change in addition to the article's talk page.}}
:I'm aware of two multi-editor cases of that. The first is that when a WMF staffer mentioned writing her first article (in English), a handful of staffers who are not native English speakers (but who are experienced Wikipedians) translated that into their native language as a way of encouraging her to keep editing as a volunteer. This probably happened about a decade ago, and it was very sweet.
:The other was a sustained self-promotion effort by a handful of artists, including hoax photos. See [[d:Q131244]] for what's left of their efforts. We [[Wikipedia:Articles for deletion/IMMAGINE&POESIA|deleted the English article]]. The reason this sticks in my mind is that they repeatedly faked photos – see [[c:Commons:Deletion requests/File:Ferlinghetti meets Immagine&Poesia representatives.jpg]] for one example – of various people and the poet [[Lawrence Ferlinghetti]]. Every few months, one of the same two photos of Ferlinghetti in a public place would appear, with a different person photoshopped into the scene next to him, and it would get added to an article with a caption saying something like "Ferlinghetti met with so-and-so" (a different name each time). The result is that [https://en.wikipedia.org/enwiki/w/index.php?search=%22Immagine+%26+Poesia%22&title=Special%3ASearch&profile=advanced&fulltext=1&ns0=1&searchToken=9dlry6r97lnb1yodkuttowsju every remaining mention of that group] seems suspicious to me. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 02:09, 27 November 2024 (UTC)
::Okay, thanks for responding. I'm going to think about what can be done to assist editors in future scenarios and draft some thoughts for an essay in my sandbox later. I don't believe that creating a policy proposal is worth it right now, since as you've observed, cross-wiki article copy-pasting isn't a major concern due to its relative uncommonness. I'm considering writing up an essay on the subject instead, maybe also creating a template later on to go at the top of an article that says something along the lines of "This article was cross-posted from the "XYZ Wikipedia" and is currently undergoing translation, discussion, and improvement." [[User:Sirocco745|Sirocco745]] ([[User talk:Sirocco745|talk]]) 02:56, 27 November 2024 (UTC)


Does this mean that an AFD can be started by someone with the intent of redirecting instead of deleting? [[User:Plasticwonder|Plasticwonder]] ([[User talk:Plasticwonder|talk]]) 04:06, 13 December 2024 (UTC)
:Yes. If there is a contested redirect, the article is restored and it is brought to AfD. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 04:34, 13 December 2024 (UTC)
::I think the ideal process is:
::* Have an ordinary discussion on the talk page about redirecting the page.
::* If (and only if) that discussion fails to reach consensus, try again at AFD.
::I dislike starting with AFD. It isn't usually necessary, and it sometimes has a feel of the nom trying to get rid of it through any means possible ("I'll suggest a [[WP:BLAR]], but maybe I'll be lucky and they'll delete it completely"). [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 05:31, 13 December 2024 (UTC)
:::Would need some stats on the "it isn't usually necessary" claim; my intuition based on experience is that if a BLAR is contested it's either dropped or ends up at AfD. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 05:48, 13 December 2024 (UTC)
::::I agree with that. From what I have seen at least, if redirecting is contested, it then is usually discussed at AFD, but that's just me. [[User:Plasticwonder|Plasticwonder]] ([[User talk:Plasticwonder|talk]]) 08:42, 13 December 2024 (UTC)
:::::It depends how active the respective talk pages are (redirected article and target), but certainly for ones that are quiet AfD is going to be the most common. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 09:33, 13 December 2024 (UTC)
::::::It will also depend on whether you advertise the discussion, e.g., at an active WikiProject. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 19:44, 13 December 2024 (UTC)
:::I usually just go straight to AfD. I've found that editors contesting redirects usually !vote keep and discussing on talk just prolongs the inevitable AfD. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 14:58, 13 December 2024 (UTC)
::::Gotcha. [[User:Plasticwonder|Plasticwonder]] ([[User talk:Plasticwonder|talk]]) 15:29, 13 December 2024 (UTC)
:::Looking at the above comments: What is it about the [[Wikipedia:Proposed article mergers]] process that isn't working for you all? If you redirect an article and it gets reverted, why aren't you starting a PM? [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 21:37, 16 December 2024 (UTC)
::::For me, it's lack of participation, no tool to list something at PAM, and no relisting option so proposed merges just sit for a very long time before being closed. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 23:21, 16 December 2024 (UTC)
:::::What voorts said. Multiple times now I've floated the idea of making PAM more like RM; one of these years I should really get around to doing something more than that. I won't have time before the new year though. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:45, 16 December 2024 (UTC)
::::::I think PAM should be merged into AfD, since both generally involve discussions of notability. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 00:00, 17 December 2024 (UTC)
:::::::Merging often involves questions of overlap and topical distinction rather than just notability, although this also ends up discussed at AfD. I do wonder if this would leave proposals to split out in the cold though, as much like merge discussions they just sit there. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 04:00, 17 December 2024 (UTC)
:::::The most important tool is Twinkle > Tag > Merge. I personally prefer its "Merge to" option, but there's a plain "Merge" if you don't know exactly which page should be the target.
:::::All merges get bot-listed in [[Wikipedia:Article alerts]]. [[Wikipedia:Proposed article mergers]] is another place to advertise it, and I'd bet that Twinkle could post those automatically with relatively little work (an optional button, similar to notifying the creator of deletion plans).
:::::I dislike "relisting"; things should just stay open as long as they need to, without adding decorative comments about the discussion not happening fast enough. In my experience, merge proposals stay open because everyone's agreed on the outcome but nobody wants to do the work. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 06:46, 17 December 2024 (UTC)
:In this context isn't redirection a *type* of deletion (specifically delete while leaving a redirect)? [[User:Horse Eye&#39;s Back|Horse Eye&#39;s Back]] ([[User talk:Horse Eye&#39;s Back|talk]]) 07:05, 17 December 2024 (UTC)
::I would think so. [[User:Plasticwonder|Plasticwonder]] ([[User talk:Plasticwonder|talk]]) 07:33, 17 December 2024 (UTC)
:::It's only a deletion if an admin pushes the delete button. Blanking and redirecting – even blanking, redirecting, and full-protecting the redirect so nobody can un-redirect it – is not deletion. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 07:34, 18 December 2024 (UTC)
::::That might be clear to you (and the other admins) but almost nobody in the general community understands that (to the point where I would say it's just wrong; deletion is broader than that in practice). [[User:Horse Eye&#39;s Back|Horse Eye&#39;s Back]] ([[User talk:Horse Eye&#39;s Back|talk]]) 16:23, 18 December 2024 (UTC)
:::::Well, it has always been clear to me, and I am not, and have never wished to be, an admin. But, then again, I am a bit strange in that I expect things to be as people say that they will be. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 18:34, 18 December 2024 (UTC)
:::::Contested redirects going to AfD makes sense. Articles are redirected for the same reasons they're deleted and redirecting is probably the most common ATD. I've opened plenty of AfDs where my nom recommends a redirect instead of deletion, including when I've BLARed an article and had the BLAR reverted. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 18:38, 18 December 2024 (UTC)
::::::If a redirect has already been discussed or attempted, and consensus can't be reached easily, then I've got no problem with AFD. What I don't want to see is no discussion, no bold redirects, nobody's even hinted about a merge, and now it's at AFD, when the problem could have been resolved through a less intense method. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 19:07, 18 December 2024 (UTC)

== Topics on Jehovah's Witnesses - article spamming issues ==
Polish Wikipedia is experiencing an uptick in Jehovah's Witnesses article spamming, surreptitious edits pushing JW terminology, etc. One of the current problems is the spamming of separate articles for every "convention", which is an annual (I think) event with a theme and about 100k visitors. We are discussing their notability right now, and I was wondering whether English Wikipedia has already discussed and cleaned this up, which would be helpful. Please point me to any discussions about the notability or monitoring of Jehovah's Witnesses-related topics, and possibly to deleted articles. (I'm not sure if there is any sensible way to search the deleted articles archive/log? Can I use any wildcards in [[Special:Log/delete]]? It doesn't seem to work.) [[User:Tupungato|Tupungato]] ([[User talk:Tupungato|talk]]) 12:04, 25 November 2024 (UTC)
:@[[User:Tupungato|Tupungato]], we used to have a list of conventions, but it was deleted 16 years ago at [[Wikipedia:Articles for deletion/List of Jehovah's Witnesses conventions]]. I'm not sure we would make the same decision today. Information about some conventions is in [[History of Jehovah's Witnesses]]. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 02:22, 27 November 2024 (UTC)

== RfC: Voluntary RfA after resignation ==
<!-- [[User:DoNotArchiveUntil]] 22:01, 19 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1737324070}}
{{rfc|policy|rfcid=5F11665}}

Should [[Wikipedia:Administrators#Restoration of admin tools]] be amended to:
* '''Option 1'''{{snd}}Require former administrators to request restoration of their tools at the [[WP:BN|bureaucrats' noticeboard]] (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
* '''Option 2'''{{snd}}<s>Clarify</s> <ins>Maintain the status quo</ins> that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary [[WP:RfA|request for adminship]] (RfA).
* '''Option 3'''{{snd}}Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.


'''Background''': This issue arose in one [[Wikipedia:Requests for adminship/Worm That Turned 2|recent RfA]] and is currently being discussed in an [[Wikipedia:Requests for adminship/Hog Farm 2|ongoing RfA]]. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 21:14, 15 December 2024 (UTC)<br />
'''Note''': There is an ongoing related discussion at {{slink|Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial}}.<br />
'''Note''': Option 2 was modified around 22:08, 15 December 2024 (UTC).
'''Note''': Added option 3. [[user:theleekycauldron|theleekycauldron]] ([[User talk:Theleekycauldron|talk]] • she/her) 22:12, 15 December 2024 (UTC)
:{{block indent|em=1.6|1=<small>Notified: [[Wikipedia:Administrators' noticeboard]], [[Wikipedia:Bureaucrats' noticeboard]], [[Wikipedia talk:Administrators]], [[Wikipedia talk:Requests for adminship]], [[T:CENT]]. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 21:19, 15 December 2024 (UTC)</small>}}<!-- Template:Notified -->
*'''2''' per [[Wikipedia:Requests for adminship/Hog Farm 2#c-Kline-20241215031500-Support|Kline's comment at Hog Farm's RfA]]. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. [[User:Queen of Hearts|<span style="color: darkgreen;">charlotte</span>]] [[User talk:Queen of Hearts|<sup>👸🎄</sup>]] 21:22, 15 December 2024 (UTC)
*:Also fine with 3 [[User:Queen of Hearts mobile|<span style="color: darkred;">charlotte</span>]] [[User talk:Queen of Hearts mobile|<sup style="color: darkred;">👸♥📱</sup>]] 22:23, 15 December 2024 (UTC)
* There is ongoing discussion about this at [[Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial]]. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 21:24, 15 December 2024 (UTC)
** '''2''', after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedily passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm ('''RRfA''')". [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 14:49, 16 December 2024 (UTC)
**:{{tq|best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"}} - I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 11:05, 17 December 2024 (UTC)
**::I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. [[User:28bytes|28bytes]] ([[User talk:28bytes|talk]]) 14:23, 17 December 2024 (UTC)
**::I think this would be a great idea if it differentiated these from recall RfAs. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 18:37, 17 December 2024 (UTC)
**:::If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:00, 17 December 2024 (UTC)
* '''1''' [[User:Pppery|* Pppery *]] [[User talk:Pppery|<sub style="color:#800000">it has begun...</sub>]] 21:25, 15 December 2024 (UTC)
*'''2''' I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. [[User:Beeblebrox|El Beeblerino]] [[User talk:Beeblebrox|<sup>if you're not into the whole brevity thing</sup>]] 21:27, 15 December 2024 (UTC)
*'''2''' Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. [[User:Schazjmd|<span style="color:#066293;">'''Schazjmd'''</span>]]&nbsp;[[User talk:Schazjmd|<span style="color:#738276;">''(talk)''</span>]] 21:32, 15 December 2024 (UTC)
*Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 21:38, 15 December 2024 (UTC)
*'''2''' per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't [request the tools back at RFA]". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:42, 15 December 2024 (UTC)
*:When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 21:44, 15 December 2024 (UTC)
*::I've started that discussion as a subsection to the linked VPI discussion. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:20, 15 December 2024 (UTC)
*'''1''' <ins>or '''3'''</ins>. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged of folks who don't need to RFA RFAing again: 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and [[User:Enterprisey/rfa-count-toolbar.js]]). –[[User:Novem Linguae|<span style="color:blue">'''Novem Linguae'''</span>]] <small>([[User talk:Novem Linguae|talk]])</small> 21:44, 15 December 2024 (UTC)
*:Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. ''If'' (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, ''then'' we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to [[template:RFA]] (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:11, 15 December 2024 (UTC)
*:Option 3 looks like a good compromise. I'd support that too. –[[User:Novem Linguae|<span style="color:blue">'''Novem Linguae'''</span>]] <small>([[User talk:Novem Linguae|talk]])</small> 22:15, 15 December 2024 (UTC)
*:I'm weakly opposed to option 3; editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:20, 15 December 2024 (UTC)
*'''2'''. We should not make it ''more difficult'' for administrators to be held accountable for their actions in the way they please. [[User:JJPMaster|JJP]]<sub>[[User talk:JJPMaster|Mas]]<sub>[[Special:Contributions/JJPMaster|ter]]</sub></sub> ([[She (pronoun)|she]]/[[Singular they|they]]) 22:00, 15 December 2024 (UTC)
* Added '''option 3''' above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. [[user:theleekycauldron|theleekycauldron]] ([[User talk:Theleekycauldron|talk]] • she/her) 22:11, 15 December 2024 (UTC)
*'''2''' – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. —&nbsp;[[User:Jkudlick|Jkudlick]]&nbsp;&#x2693;&nbsp;[[User_talk:Jkudlick|(talk)]] 22:12, 15 December 2024 (UTC)
*:*I would also support option '''3''' if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. —&nbsp;[[User:Jkudlick|Jkudlick]]&nbsp;&#x2693;&nbsp;[[User_talk:Jkudlick|(talk)]] 02:05, 16 December 2024 (UTC)
*'''Option 3''' per leek. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 22:16, 15 December 2024 (UTC)
*:A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 23:24, 16 December 2024 (UTC)
*'''2''' as per {{noping|JJPMaster}}. Regards, --[[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 22:20, 15 December 2024 (UTC)
*'''Option 2''' (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that [[WP:RECALL]] is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – [[User talk:Bradv|<span style="color:#333">'''brad''v'''''</span>]] 22:22, 15 December 2024 (UTC)
*{{ec}} I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a [[Get Out of Jail Free card|get out of recall free card]] for a year. —[[User:Cryptic|Cryptic]] 22:25, 15 December 2024 (UTC)
*:@[[User:Cryptic|Cryptic]] hostile to what? [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:26, 15 December 2024 (UTC)
*'''2, distant second preference 3'''. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. [[Special:Random]] and [[Wikipedia:Backlog]] are always there. <span style="font-family:courier"> -- [[User:Tamzin|<span style="color:#E6007A">Tamzin</span>]]</span><sup class="nowrap">&#91;[[User talk:Tamzin|<i style="color:#E6007A">cetacean needed</i>]]]</sup> <small>([[User:Tamzin/🤷|they&#124;xe&#124;🤷]])</small> 23:31, 15 December 2024 (UTC)
* 1 would be my preference, but I don't think we need a specific rule for this. -- [[User:Ajraddatz|Ajraddatz]] ([[User Talk:Ajraddatz|talk]]) 23:36, 15 December 2024 (UTC)
*'''Option 1'''. <s>No second preference between 2 or 3.</s> As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. [[User:Daniel Quinlan|Daniel Quinlan]] ([[User talk:Daniel Quinlan|talk]]) 01:19, 16 December 2024 (UTC)
*:After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. [[User:Daniel Quinlan|Daniel Quinlan]] ([[User talk:Daniel Quinlan|talk]]) 07:36, 16 December 2024 (UTC)
*'''Option 2''', i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. [[User:28bytes|28bytes]] ([[User talk:28bytes|talk]]) 01:24, 16 December 2024 (UTC)
*'''Option 2''' nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. [[User:Lepricavark|L<small>EPRICAVARK</small>]] ([[User talk:Lepricavark#top|<small>talk</small>]]) 01:49, 16 December 2024 (UTC)
* '''1 or 3''' per Novem Linguae. <span style="padding:2px 5px;border-radius:5px;font-family:Arial black;white-space:nowrap;vertical-align:-1px">[[User:CFA|<span style=color:#00c>C</span>]] <span style=color:red>F</span> [[User talk:CFA|<span style=color:#5ac18e>A</span>]]</span> 02:35, 16 December 2024 (UTC)
*'''Option 3''': Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in [[WP:SNOW]]. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 03:42, 16 December 2024 (UTC)
*:Update: Option 2 seems to be the consensus and I also would be fine with that. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 18:10, 19 December 2024 (UTC)
*'''Option 3''' per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. [[User:Conyo14|Conyo14]] ([[User talk:Conyo14|talk]]) 04:25, 16 December 2024 (UTC)
*As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. <b>[[User:HouseBlaster|House]][[Special:Contributions/HouseBlaster|<span style="color:#7D066B;">Blaster</span>]]</b>&nbsp;([[User talk:HouseBlaster|talk]]&nbsp;•&nbsp;he/they) 04:31, 16 December 2024 (UTC)
*:I've seen this assertion made multiple times now that {{tpq|[RFA] requires a lot of time from the community}}, yet nowhere has anybody articulated why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 04:58, 16 December 2024 (UTC)
*::I think there are a number of factors and people are summing it up as "time-wasting" or similar:
*::# BN is designed for this exact scenario. It's also clearly a less contentious process.
*::# Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Wikipedia is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
*::# If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
*::# The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
*::[[User:Daniel Quinlan|Daniel Quinlan]] ([[User talk:Daniel Quinlan|talk]]) 07:31, 16 December 2024 (UTC)
*::Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even ''realise that the RFA is not worth their time'', they have to:
*::* Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
*::* Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
*::* Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
*::* None have
*::* Realise that their input is not necessary and this could have been done at BN
*::This process will be repeated by hundreds of editors over the course of a week. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 08:07, 16 December 2024 (UTC)
*:::That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 13:15, 16 December 2024 (UTC)
*::::Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 11:21, 17 December 2024 (UTC)
*:::::They said they resigned due to personal time commitments. That is directly saying they weren’t under a cloud, so I’ll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 18:47, 17 December 2024 (UTC)
*::::::Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 23:08, 17 December 2024 (UTC)
*:::::::Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 12:52, 18 December 2024 (UTC)
*::@[[User:Thryduulf|Thryduulf]] let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Wikipedia activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time could otherwise have been spent on other wiki activities. Best, [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 22:49, 16 December 2024 (UTC)
*:::Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have ''their'' time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:39, 16 December 2024 (UTC)
*::::I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 03:34, 17 December 2024 (UTC)
*:::::I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 09:08, 17 December 2024 (UTC)
*'''Option 2''' I don't mind the re-RFAs, but I'd appreciate it if we encouraged restoration via BN instead; I just object to making it mandatory. [[User:EggRoll97|EggRoll97]] <sup>([[User_talk:EggRoll97|talk]]) </sup> 06:23, 16 December 2024 (UTC)
*'''Option 2'''. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. [[User:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">'''Toadspike'''</span>]] [[User talk:Toadspike|<span style="color:#21a81e;font-variant: small-caps;font-weight:bold;">[Talk]</span>]] 07:24, 16 December 2024 (UTC)
*Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. [[User:Graham87|Graham87]] ([[User talk:Graham87|talk]]) 08:46, 16 December 2024 (UTC)
* '''Option 1''' as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability is a very good thing, and we need more of it for resyssopings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input is outlined over on the [https://en.m.wikipedia.org/wiki/Wikipedia:Village_pump_(idea_lab)#Making_voluntary_%22reconfirmation%22_RFA's_less_controversial idea lab] [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 08:59, 16 December 2024 (UTC)
* '''Option 2''', though I'd be for '''option 3''' too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people who don't like these could ignore them. <span>♠[[User:JCW555|<span style="color:purple">JCW555</span>]] [[User talk:JCW555|<span style="color: black">(talk)</span>]]</span>♠ 09:13, 16 December 2024 (UTC)
*'''Option 2''' because [[WP:ADMINISTRATORS]] is well-established policy. Read [[WP:ADMINISTRATORS#Restoration of admin tools]], which says quite clearly, {{tpq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is ''specifically permitted by policy''. It is bewildering to me. [[User:Cullen328|Cullen328]] ([[User talk:Cullen328|talk]]) 09:56, 16 December 2024 (UTC)
*'''Option 2 & 3''' I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. [[User:JuxtaposedJacob|JuxtaposedJacob]] ([[User talk:JuxtaposedJacob|talk]]) &#124; :) &#124; he/him &#124; 10:45, 16 December 2024 (UTC)
*'''Option 2''', for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. [[User:Giraffer|Giraffer]] ([[User talk:Giraffer|talk]]) 12:08, 16 December 2024 (UTC)
*'''Option 1''' I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – [[User:DreamRimmer|<span style="color:black">'''DreamRimmer'''</span>]] <small>([[User talk:DreamRimmer|'''talk''']])</small> 12:11, 16 December 2024 (UTC)
*'''Option 2''' I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing a RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) {{pb}} Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. [[User:ProcrastinatingReader|ProcrastinatingReader]] ([[User talk:ProcrastinatingReader|talk]]) 12:33, 16 December 2024 (UTC)
*'''Option 2''' It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on [[Wikipedia:Requests_for_adminship/Worm_That_Turned_2#c-K6ka-20241113233900-Support|WTT's re-RFA]] regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —[[User:K6ka|'''<span style="color:#0040FF">k6ka</span>''']] <span title="Canadian!" style="color:red">🍁</span> ([[User talk:K6ka|<span style="color:#0080FF">Talk</span>]] · [[Special:Contributions/K6ka|<span style="color:#0B4C5F">Contributions</span>]]) 15:24, 16 December 2024 (UTC)
*'''Option 1''' - Option 2 is a waste of community time. - [[User:Ratnahastin|<span style="color:#A52A2A;">Ratnahastin</span>]] ([[User talk:Ratnahastin|talk]]) 15:30, 16 December 2024 (UTC)
*:Why? [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 15:35, 16 December 2024 (UTC)
*'''2''' is fine. '''Strong oppose''' to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. Opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —[[User:Kusma|Kusma]] ([[User talk:Kusma|talk]]) 16:59, 16 December 2024 (UTC)
*'''2''', with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. [[User:Natg 19|Natg 19]] ([[User talk:Natg 19|talk]]) 17:56, 16 December 2024 (UTC)
*'''2''' Sure, and clarify that should such an RFA be unsuccessful, they may only regain the tools through a future RfA. — [[User:Xaosflux|<span style="color:#FF9933; font-weight:bold; font-family:monotype;">xaosflux</span>]] <sup>[[User talk:Xaosflux|<span style="color:#009933;">Talk</span>]]</sup> 18:03, 16 December 2024 (UTC)
*'''Option 2''' If contributing to such an RFA is a waste of your time, just don't participate. [[User:TheWikiToby|TheWikiToby]] ([[User talk:TheWikiToby|talk]]) 18:43, 16 December 2024 (UTC)
*:No individual is wasting their time participating. Instead the person asking for a re-rfa is ''using'' tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 22:55, 16 December 2024 (UTC)
*::I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. [[User:TheWikiToby|TheWikiToby]] ([[User talk:TheWikiToby|talk]]) 23:15, 16 December 2024 (UTC)
*:::I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:30, 16 December 2024 (UTC)
*'''2'''. Maintain the ''status quo''. And stop worrying about a trivial non-problem. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 22:57, 16 December 2024 (UTC)
*'''2'''. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it '''have''' to be in a recall petition? --[[User:Floquenbeam|Floquenbeam]] ([[User talk:Floquenbeam|talk]]) 23:44, 16 December 2024 (UTC)
*:What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes, let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 03:38, 17 December 2024 (UTC)
*:Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --[[User:Tryptofish|Tryptofish]] ([[User talk:Tryptofish|talk]]) 23:58, 18 December 2024 (UTC)
*'''No hard rule, but we should socially discourage confirmation RfAs''' There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person?{{pb}}A lot of people have thrown around [[User:CaptainEek/Grand Unified Theory of Editor Time|editor time]] in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. 
I then thought "bet there's an RfC going about this," and came here. I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 03:29, 17 December 2024 (UTC)
*:And for what its worth, support '''Option 3''' because I'm generally a fan of putting more tools in people's toolboxes. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 03:36, 17 December 2024 (UTC)
*:{{tpq|In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.}} Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, ''is'' a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator.
*:The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then closed the tab and returned to whatever else it was you were doing. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 09:16, 17 December 2024 (UTC)
*:WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —[[User:Kusma|Kusma]] ([[User talk:Kusma|talk]]) 09:53, 17 December 2024 (UTC)
*:@[[User:CaptainEek|CaptainEek]] Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN?
*:As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. [[User:Soni|Soni]] ([[User talk:Soni|talk]]) 18:41, 19 December 2024 (UTC)
*::Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 16:49, 23 December 2024 (UTC)
* '''Option 1'''. It will prevent the unnecessary drama trend we have seen recently. – [[User:Ammarpad|Ammarpad]] ([[User talk:Ammarpad|talk]]) 07:18, 17 December 2024 (UTC)
* '''Option 2''' if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. [[User:AirshipJungleman29|&#126;~ AirshipJungleman29]] ([[User talk:AirshipJungleman29|talk]]) 15:08, 17 December 2024 (UTC)
* '''Option 3''' (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are <em>real and substantial</em> concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). [[User:Bobby Cohn|Bobby Cohn]] ([[User talk:Bobby Cohn|talk]]) 17:26, 17 December 2024 (UTC)
* '''Option 3 or Option 2'''. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. [[User:Soni|Soni]] ([[User talk:Soni|talk]]) 17:35, 17 December 2024 (UTC)
*'''Option 2''': Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- [[User:SashiRolls | SashiRolls]] <sup>[[User_talk:SashiRolls | 🌿 ]] · [[Special:Contributions/SashiRolls| 🍥]]</sup> 18:48, 17 December 2024 (UTC)
*:I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 20:45, 17 December 2024 (UTC)
*::Who are you disagreeing with? This RfC is about voluntary RRfA. -- [[User:SashiRolls | SashiRolls]] <sup>[[User_talk:SashiRolls | 🌿 ]] · [[Special:Contributions/SashiRolls| 🍥]]</sup> 20:59, 17 December 2024 (UTC)
*:::I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 12:56, 18 December 2024 (UTC)
*::One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 21:06, 17 December 2024 (UTC)
*:::I know it's not a proposal, but there are enough people talking about this so far that it could become a proposal.<br />There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into [[Wikipedia talk:Requests for adminship/Hog Farm 2#TonyBallioni's oppose|this]] without good reason. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 12:57, 18 December 2024 (UTC)
*::::Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 14:56, 18 December 2024 (UTC)
*:::::Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 15:37, 18 December 2024 (UTC)
*::::::Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 15:49, 18 December 2024 (UTC)
*:::::::I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 16:29, 18 December 2024 (UTC)
*'''Option 2'''. In fact, I'm inclined to ''encourage'' an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --[[User:Pinchme123|Pinchme123]] ([[User talk:Pinchme123|talk]]) 21:45, 17 December 2024 (UTC)
*'''Option 2''' [[WP:RFA]] has said "{{tq|Former administrators may seek reinstatement of their privileges through RfA...}}" for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. [[user:Andrew Davidson|Andrew]]🐉([[user talk:Andrew Davidson|talk]]) 22:14, 17 December 2024 (UTC)
*'''Option 2'''. People who think it’s not a good use of their time always have the option to scroll past. [[User:Innisfree987|Innisfree987]] ([[User talk:Innisfree987|talk]]) 01:41, 18 December 2024 (UTC)
* '''2''' - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Wikipedia, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. [[User:Jehochman|Jehochman]] <sup>[[User talk:Jehochman|Talk]]</sup> 14:00, 18 December 2024 (UTC)
*'''Option 1'''. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. [[User:Risker|Risker]] ([[User talk:Risker|talk]]) 02:34, 19 December 2024 (UTC)
*'''Option 2'''. If people want to choose to go through an RFA, who are we to stop them? [[User:Stifle|Stifle]] ([[User talk:Stifle|talk]]) 10:25, 19 December 2024 (UTC)
*'''Option 2''' (status quo/no changes) per [[WP:AINTBROKE|meh]]. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. [[User:Ivanvector|Ivanvector]] (<sup>[[User talk:Ivanvector|Talk]]</sup>/<sub>[[Special:Contributions/Ivanvector|Edits]]</sub>) 19:34, 19 December 2024 (UTC)
*'''Option 2''' I would probably counsel just going to BN most of the time, however there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. [[Special:Contributions/184.152.68.190|184.152.68.190]] ([[User talk:184.152.68.190|talk]]) 02:28, 20 December 2024 (UTC)
* '''Option 1''', per [[WP:POINT]], [[WP:NOT#SOCIALNETWORK]], [[WP:NOT#BUREAUCRACY]], [[WP:NOTABOUTYOU]], and related principles. We all have far better things to do than read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 04:28, 22 December 2024 (UTC)
*:Except none of the re-RFAs complained about have been {{tpq|RfA invoked as a "Show me some love!" abuse of process}}, you're arguing against a strawman. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 11:41, 22 December 2024 (UTC)
*::It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The {{em|actual effect}} (what matters) of doing this, whether for attention or because you've somehow confused yourself into thinking it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 05:49, 23 December 2024 (UTC)
*:::{{tqb|or because you've somehow confused yourself into thinking it needs to be done}} I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 15:37, 23 December 2024 (UTC)
*'''Option 2'''. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins ''already'' choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates [[WP:POINT]], because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust.{{pb}}On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general ''is'' a violation of [[WP:POINT]]. &ndash; [[User:Epicgenius|Epicgenius]] ([[User talk:Epicgenius|talk]]) 14:46, 22 December 2024 (UTC)
*:But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an [[emergent behavior]], is a feature or a bug. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 23:01, 22 December 2024 (UTC)
*::There have been lots of re-RFAs, historically. They were more common in the 2000s. [[Wikipedia:Requests for adminship/Evercat|Evercat]] in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. [[Wikipedia:Requests for adminship/Croat Canuck 2|Croat Canuck]] back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, [[Wikipedia:Requests for adminship/HJ Mitchell 3|HJ Mitchell]] (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial [[Wikipedia:Requests for adminship/SarekOfVulcan 2|SarekOfVulcan]]. That ended successful re-RFAs until 2019's [[Wikipedia:Requests for adminship/Floquenbeam 2|Floquenbeam]], which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the [https://en.wikipedia.org/wiki/Wikipedia:Requests_for_adminship/SarekOfVulcan_4#c-Worm_That_Turned-2015-02-13T08:20:00.000Z-Wincent77-2015-02-13T06:50:00.000Z potentially optional] yet no-consensus [[Wikipedia:Requests for adminship/SarekOfVulcan 3|SarekOfVulcan 3]] in 2014 and the [[Wikipedia:Requests for adminship/Rich Farmbrough 2/Bureaucrat discussion|Rich Farmbrough]] case in 2015, but those are very different from what we're talking about today. [[User:GreenLipstickLesbian|GreenLipstickLesbian]] ([[User talk:GreenLipstickLesbian|talk]]) 00:01, 23 December 2024 (UTC)
*:::To add on to that, [[Wikipedia:Requests for adminship/Harrias 2]] was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. &ndash; [[User:Epicgenius|Epicgenius]] ([[User talk:Epicgenius|talk]]) 16:34, 23 December 2024 (UTC)
*::::Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 16:47, 23 December 2024 (UTC)
*'''Option 2 or 3''' per Gnoming and CaptainEek. Such RfAs require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 15:44, 23 December 2024 (UTC)
*'''Option 2''' but for largely the reasons presented by CaptainEek. '''[[User:L235|KevinL]]''' (<small>aka</small> [[User:L235|L235]] '''·''' [[User talk:L235#top|t]] '''·''' [[Special:Contribs/L235|c]]) 21:58, 23 December 2024 (UTC)
*'''Option 2 (fine with better labeling)''' These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. [[User:RevelationDirect|RevelationDirect]] ([[User talk:RevelationDirect|talk]]) 22:36, 23 December 2024 (UTC)
*'''Option 1''' because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. [[User:Therapyisgood|Therapyisgood]] ([[User talk:Therapyisgood|talk]]) 23:05, 23 December 2024 (UTC)
*:If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:28, 23 December 2024 (UTC)


===Discussion===
*{{re|Voorts}} If option 2 gets consensus, how would this RfC change the wording {{tqq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}}? Or is this an attempt to see if that option no longer has consensus? If so, why wasn't alternative wording proposed? As I noted above, this feels premature in multiple ways. Best, [[User:Barkeep49|Barkeep49]] ([[User_talk:Barkeep49|talk]]) 21:43, 15 December 2024 (UTC)
*I've re-opened this per [[Special:PermanentLink/1263300451#RFA RFC|a request]] on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 21:53, 15 December 2024 (UTC)
*:The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. —&nbsp;[[User:Jkudlick|Jkudlick]]&nbsp;&#x2693;&nbsp;[[User_talk:Jkudlick|(talk)]] 22:01, 15 December 2024 (UTC)
*:@[[User:Voorts|Voorts]], can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – [[User talk:Bradv|<span style="color:#333">'''brad''v'''''</span>]] 22:02, 15 December 2024 (UTC)
*::{{done}} [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 22:07, 15 December 2024 (UTC)
*'''Question''': May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 22:35, 15 December 2024 (UTC)
*: The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week long, publicly advertised back patting to be a productive use of volunteer time. -- [[User:Ajraddatz|Ajraddatz]] ([[User Talk:Ajraddatz|talk]]) 23:33, 15 December 2024 (UTC)
*::But no volunteer is obligated to pat such candidates on the back. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 00:33, 16 December 2024 (UTC)
*::: Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- [[User:Ajraddatz|Ajraddatz]] ([[User Talk:Ajraddatz|talk]]) 01:52, 16 December 2024 (UTC)
*::::Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 02:31, 16 December 2024 (UTC)
*:::::Except someone who has no need for advanced tools and is not going to use them in any useful fashion, would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. [[User:Only in death|Only in death does duty end]] ([[User talk:Only in death|talk]]) 09:05, 16 December 2024 (UTC)
*::::::I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. [[User:Aaron Liu|<span class="skin-invert" style="color:#0645ad">Aaron Liu</span>]] ([[User talk:Aaron Liu#top|talk]]) 13:02, 16 December 2024 (UTC)
*I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as [[Special:GoToComment/c-Barkeep49-20241215200500-Hog Farm-20241215042100|discussed on Barkeep49's talk page]], we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 00:32, 16 December 2024 (UTC)
*:Regarding option 3, [[User:Isaacl/Consensus requires patience|establishing a consensus view takes patience]]. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 17:44, 17 December 2024 (UTC)
* In re the idea that RfAs use up a lot of community time: I first started editing Wikipedia in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. [[User:Caeciliusinhorto-public|Caeciliusinhorto-public]] ([[User talk:Caeciliusinhorto-public|talk]]) 10:26, 16 December 2024 (UTC)
* What about an option 4 encouraging eligible candidates to go through BN? At the end of the [[WP:ADMIN#Procedure|Procedure section]], add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. [[User:Giraffer|Giraffer]] ([[User talk:Giraffer|talk]]) 12:08, 16 December 2024 (UTC)
*:Discouraging RFAs is the second-to-last thing we should be doing (after prohibiting them); rather, per my comments here and in the VPI discussion, we should be ''encouraging'' former administrators to demonstrate that they still have the approval of the community. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 12:16, 16 December 2024 (UTC)
*:I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 15:04, 16 December 2024 (UTC)
*::If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 15:30, 16 December 2024 (UTC)
*:::Also a solid option, the question is whether people will actually do it. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 22:55, 16 December 2024 (UTC)
*::::The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:20, 16 December 2024 (UTC)
* This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal [[Wikipedia:Instruction creep|solves any problem we actually have]]. '''[[User:UninvitedCompany|<span style="color:green">Uninvited</span>]][[User_talk:UninvitedCompany|Company]]''' 21:56, 16 December 2024 (UTC)
* This apparent negative feeling evoked at an RFA for a former sysop ''everyone agrees is fully qualified and trusted'' certainly will put a bad taste in the mouths of other former admins who might consider a reconfirmation RFA ''without first'' visiting BN. This comes in the wake of Worm That Turned's similar rerun. [[User:BusterD|BusterD]] ([[User talk:BusterD|talk]]) 23:29, 16 December 2024 (UTC)
*:Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 23:32, 16 December 2024 (UTC)
*::No argument from me. I was a big Hog Farm backer way back when he was ''merely'' one of Wikipedia's best content contributors. [[User:BusterD|BusterD]] ([[User talk:BusterD|talk]]) 12:10, 17 December 2024 (UTC)
*All these mentions of editor time make me have to mention [[User:CaptainEek/Grand Unified Theory of Editor Time|The Grand Unified Theory of Editor Time]] (TLDR: our understanding of how editor time works is dreadfully incomplete). [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 02:44, 17 December 2024 (UTC)
*:I went looking for @[[User:Tamzin|Tamzin]]'s comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 18:18, 17 December 2024 (UTC)
*::Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 18:31, 17 December 2024 (UTC)
*:::{{ping|CaptainEek}} You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the ''reason'' for the RRFAs. Regards, [[User:Newyorkbrad|Newyorkbrad]] ([[User talk:Newyorkbrad|talk]]) 00:14, 22 December 2024 (UTC)
*::::If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 01:05, 22 December 2024 (UTC)
*::@[[User:CaptainEek|CaptainEek]] I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. [[User:Worm That Turned|<b style="color:var(--color-base);">''Worm''</b>]]<sup>TT</sup>([[User talk:Worm That Turned|<b style="color:#060;">talk</b>]]) 10:28, 23 December 2024 (UTC)
*:::@[[User:Worm That Turned|Worm That Turned]] my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, {{tq|Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here}}. I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 16:59, 23 December 2024 (UTC)
*::::I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 05:19, 24 December 2024 (UTC)
*::I really don't think this is the reason behind WTT's and HF's reconfirmation RFAs. I don't think their RFAs had much utility and could have been avoided, but I don't believe for a second that their motivations were anything other than trying to provide transparency and accountability for the community. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 12:04, 23 December 2024 (UTC)
*I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". [[User:Alpha3031|Alpha3031]] ([[User talk:Alpha3031|t]] • [[Special:Contributions/Alpha3031|c]]) 12:40, 19 December 2024 (UTC)

== Audio-video guidance ==

Hi there,

Per the post I made a few weeks ago regarding use of [[Wikipedia:Village_pump_(policy)/Archive_198#Videos_from_YT_and_WP:RSPYT|video for illustrative purposes]], I think that [[MOS:Images]] might be expanded to make mention of audio-video content, as most of the same principles apply (eg aesthetics, quality, relevance, placement). There are some additional concerns: for example, if audio or video renders a primary source (eg it is a recording of PD music such as Bach, or a reading of a PD text), then there might be some source-validation requirements (ie, the music or text should match the original, within sensible boundaries; eg Mozart or Bach pieces may not be easily replicated with original instrumentation, or at least this should not be a requirement).

So one option would be for a simple statement at [[MOS:Images]] that these guidelines normally apply to AV, or separate guidance for AV that explains that [[MOS:Images]] contains guidance that generally applies to AV.

== "Conspiracy theory" and "myth" in article titles ==

Is there some kind of policy regarding whether to include "conspiracy theory", "myth", etc. in article titles about false theories or accusations? This seems to be a little inconsistent. Some have "conspiracy theory" in the title, clearly stating they are false (I don't think any even remotely plausible theory or accusation would have the words "conspiracy theory" in its title); some go even further, outright stating "myth" (not unwarranted if it is clearly false):

* [[LGBT chemicals conspiracy theory]]
* [[LGBTQ grooming conspiracy theory]]
* [[Moon landing conspiracy theories]]
* [[International Jewish conspiracy]]
* [[999 phone charging myth]]
* [[John F. Kennedy assassination conspiracy theories]]

However, these do not, despite the article clearly stating the theory or accusation is incorrect:

* [[Fan death]]
* [[Allegations of genocide in Donbas]] (Note: [[Allegations of genocide of Ukrainians in the Russo-Ukrainian War]] has the same title format, despite one referring to actual actions and the other serving only as a [[casus belli]] with no basis in actual events)
* [[Vaccines and autism]] (Note: the article on the fraudulent study that began this conspiracy theory is titled [[Lancet MMR autism fraud|''Lancet'' MMR autism fraud]], explicitly calling it a fraud rather than a "study"; it didn't always, though I don't know when that was changed)
* [[Abortion–breast cancer hypothesis]]
* [[Turbo cancer]]

&lt;/[[User:MarkiPoli|MarkiPoli]]&gt; &lt;[[User_talk:MarkiPoli|talk]] /&gt;&lt;[[Special:Contributions/MarkiPoli|cont]] /&gt; 12:37, 25 November 2024 (UTC)
:Generally, all articles should be titled neutrally and in line with their common name, where they have one. If the significant majority of reliable sources do not describe something as a conspiracy theory or myth (even if they are false) then our article titles should not. In most cases where "myth" and "conspiracy" appear in the article titles they are descriptive as there is no single common name for the topic(s) covered. Consistency is part of the [[Wikipedia:Article titles|article titles policy]] but it is only one criterion and generally not regarded as the most important. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 12:50, 25 November 2024 (UTC)
:I see two situations here: one where the article title wouldn’t work without the addition of “conspiracy theory” (i.e. “International Jewish” is a non sequitur fragment); and one where the title ''would'' work (“999 phone charging” makes sense on its own). We don’t need to state something is a myth in the title if the article explains it’s a myth; there are enough RFK Jr. types whining at [[Talk:Turbo cancer]] to prove that much. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 13:10, 25 November 2024 (UTC)
:Agree with Thryduulf. We should use titles that are considered the common name for the topic and that fall within the article title policy, and then after that any necessary disambiguation steps to differentiate from other topics. And as long as the lede sentence or lede itself (as in the case of Vaccines and autism) is clear about what is legitimate science or fact and what is a conspiracy theory, pseudoscience, or disproven, then it's not as important for the title to reflect that as well. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 13:31, 25 November 2024 (UTC)
::Indeed there are some editors on the sceptic side who seem to feel that it is necessary to explicitly and stridently describe something as pseudoscientific at every possible opportunity. We don't need to bash our readers over the head with it, indeed doing so can be contrary to NPOV (e.g. when reliable sources disagree and/or take a more nuanced approach). [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 14:39, 25 November 2024 (UTC)
:::I think that what leads to adding "conspiracy theory", "myth", etc. generally boils down to whether the topic is one that perennially annoys the regular page watchers at [[WP:FRINGE/N]]. So, for instance, [[Fan death]] isn't called "the Fan Death Myth" largely because there's not a large proportion of editors rushing to the Fan death article to say "this is a real serious problem guys". [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 14:48, 25 November 2024 (UTC)
::::I think that’s a genuine problem that we should probably address— some anti-fringe editors are among the most aggressive contributors I’ve encountered, probably because too many “skeptics” are also culture warriors who need to [[Wikipedia:GREATWRONGS|right great wrongs]] by doing everything short of calling something “stupid” and its adherents “idiots”, which of course actually ''damages'' our credibility. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 15:09, 25 November 2024 (UTC)
:::::I'm all for preventing the spread of quack medicine and Ufology silliness on the encyclopedia but, generally, the fringe noticeboard is poorly equipped to address assessments of what research is fringe outside of medicine, history and archaeology. I think some of these anomalous titling conventions kind of point toward that specificity of scope. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 15:57, 25 November 2024 (UTC)
::::::FRINGE should really only apply to topics where objective research has thoroughly debunked the notion, and not to areas where questions remain open or where debunking may never be possible, at which point Undue becomes the answer. For example, while most science rejects the COVID lab theory, it's still very difficult to decisively conclude that the lab theory is not possible, so we should avoid calling it fringe but clearly note the weight of experts that have dismissed it.<span id="Masem:1732553641319:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> —&nbsp;[[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 16:54, 25 November 2024 (UTC)</span>
:::::::Hmm, there is a difference between "theories that are scientific, plausible and supported by only an extreme minority of sources but have not been/are unlikely to be conclusively disproven", "theories that are scientific, were previously mainstream but no longer are, but are still supported by an extreme minority of sources as they have not been conclusively disproven", "theories that are scientific but implausible to the extent that mainstream sources do not feel the need to conclusively disprove them", "theories which are scientific and have been conclusively disproven, but still have some supporters", "theories which are pseudoscientific" and "theories which are neither scientific nor pseudoscientific". I've seen FRINGE used to describe all of these cases, which is unhelpful. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 17:08, 25 November 2024 (UTC)
:I think part of the issue is: there is a Kennedy assassination, but this article is about the conspiracy theories; there is grooming, but this article is about a conspiracy theory; there is phone charging, but this article is about a myth; there are international Jewish organizations, but this article is not about that, etc. So, the article title is limited to (and limits) the scope of the article. And other times, 'myth' or 'conspiracy theor[ies]' is in a common name for the subject. Also note, you really can't tell why an article is called 'this' instead of 'that', unless it has actually been discussed. Article title decisions are made in a decentralized manner, and may never be revisited. [[User:Alanscottwalker|Alanscottwalker]] ([[User talk:Alanscottwalker|talk]]) 12:46, 26 November 2024 (UTC)
::Alan raises a good point… when there actually are theories that postulate a conspiracy, then it is not POV to call them “conspiracy theories”. That is a neutral descriptive title, not a pejorative one. [[User:Blueboar|Blueboar]] ([[User talk:Blueboar|talk]]) 13:32, 26 November 2024 (UTC)
:::I am not sure if that's true, that those who subscribe to a theory that is based on conspiracy would necessarily call it a conspiracy theory themselves. Eg those that claim there is a [[deep state]] aren't usually calling that a conspiracy theory, but a theory about conspiracies, if that makes sense. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 15:46, 26 November 2024 (UTC)
::::And according to that article "deep state" is a pejorative. Regardless, just because you have [[Illuminati]] does not mean you can't have [[New World Order conspiracy theory]]. The ''Illuminati'' of Bavaria, can well be a different matter than the ''Illuminati'' of the 1960s novel.[https://www.bbc.com/future/article/20170809-the-accidental-invention-of-the-illuminati-conspiracy] [[User:Alanscottwalker|Alanscottwalker]] ([[User talk:Alanscottwalker|talk]]) 16:31, 26 November 2024 (UTC)
*I would like to add that, while I would ''like'' standardized article titles and would ''also like'' if some anti-FRINGE editors dropped the “angry atheist” stereotype, I think this is an exceedingly trivial issue that does not ''need'' to be “solved”. [[User:Dronebogus|Dronebogus]] ([[User talk:Dronebogus|talk]]) 16:03, 27 November 2024 (UTC)


Is the correct process to raise an RFC? And is that done at [[MOS:Images]], or [[WP:MOS]], or here, or where? [[User:JimKillock|Jim Killock]] [[User_talk:JimKillock|(talk)]] 19:38, 16 December 2024 (UTC)
== New users required to cite sources when creating an article ==


:I've posted a longer request for help explaining the gap at [[Wikipedia talk:Manual of Style#Audio video guidance|MOS talk]]. It seems an RFC may not be needed but any advice would very much be appreciated. [[User:JimKillock|Jim Killock]] [[User_talk:JimKillock|(talk)]] 20:28, 16 December 2024 (UTC)
[[:Meta:Community_Wishlist_Survey_2023/New_contributors/Reference_requirement_for_new_article_creation|This wishlist item]] proposes a hard edit filter which would change citation policy for new users. We've repeatedly discussed requiring sources, and the consensus has been not to require them; per current policy, articles must be on notable topics and statements must be citable, but neither need be cited.
::I've [[Wikipedia_talk:Manual_of_Style#Audio_video_guidance|started drafting here]]. [[User:JimKillock|Jim Killock]] [[User_talk:JimKillock|(talk)]] 22:50, 16 December 2024 (UTC)


== Policy proposal: Establishment of research groups to edit articles ==
I know changes that affect new editors typically don't ignite as much interest as those that affect established editors, but they are in some ways ''more'' important; anything that affects our retention rate will eventually substantially affect the number of active editors, and the nature of their editing.
{{Archive top
|status = withdrawn
|result = My policy proposal was too controversial in its original form and poorly thought out in its revision. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 23:48, 17 December 2024 (UTC)
}}


In order to have more reliable and unbiased articles and make Wikipedia fully reliable,
More broadly, it might be good to set limits on policy changes done through a wishlist survey on another wiki; big changes need broader discussion. [[User:HLHJ|HLHJ]] ([[User talk:HLHJ|talk]]) 01:05, 27 November 2024 (UTC)
I believe it necessary for articles and topics to be created and handled by '''bodies of editors called "Research groups"''', focusing on researching about topics and articles to ensure Wikipedia articles are as accurate, neutral, and unbiased as possible.
:I strongly oppose implementing this on en-wiki. This is not the sort of change that the broader community should be allowed to dictate to local communities. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 01:22, 27 November 2024 (UTC)
:It's just a wish. Anyone can make one. We don't know whether it will ever be implemented (community wishlists don't exactly have a good track record), never mind turned on on enwiki. &ndash;&#8239;[[User:Joe Roe (mobile)|Joe]]&nbsp;<small>([[User talk:Joe Roe (mobile)|talk]])</small> 05:52, 27 November 2024 (UTC)
:As Joe says, a wishlist item is a long way from becoming something that works. We don’t need limits on changes; it is very rare for any changes to be pushed on en.wiki. Those that are pushed are large-scale changes that affect all wikis (think Vector2022 or the upcoming IP masking), and the community here is usually very aware of these ahead of time. If wishlist items turn into tools the wiki can use, they tend to require local activation, as different projects have different needs. (En.wiki for example already has [[WP:NPP]], which will see any new pages, which may include pages that aren’t meant to have sources, like disambiguation pages.) [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 08:17, 27 November 2024 (UTC)
::The WMF Community wishlists in the past have actually had some impressive successes, particularly in 2018 for NPP's [[WP:Page Curation|Page Curation]] extension improvements. It is not all that rare for changes to be pushed on en.wiki; two slightly earlier community-driven major policies largely contributed - at the time - to reducing the flow of sewage in the new page feed: the 2016 [[Wikipedia:New pages patrol/Reviewers|NPP user right]], and after a 7-year battle with the WMF, the 2018 [[Wikipedia:Autoconfirmed article creation trial/Request for comment on permanent implementation|ACPERM]]. However, the number of new registrations has since grown again, with users whose first intention above all else is to create a new article by hook or by crook with little or no regard for notability, relevance, UPE, and spam policies. NPP has lost many of its prolific, skilled patrollers and coordinators, either through burn-out and/or the constant whining from users whose inappropriate articles have been consigned to the queues for the various trash cans or draft space, or because they have been driven away for good by other (non-NPP) back-office regulars' complaints, made for the sake of complaining, over a couple of misplaced CSDs or AfDs out of thousands.


<s>I also '''propose''' that editing articles will be reserved to their '''respective research groups''' and creating articles about a topic can only be done by said groups, with non-members being able to propose changes and articles via RFCs and whatnot. To join a research group, one must complete ''thorough training'' in the following areas
::The [[Wikipedia_talk:New_pages_patrol/Reviewers#NPP_backlog|NPP backlog sawtooth profile]] looks menacing - it should be a regular low-value straight line. It is common knowledge that NPP is hopelessly overburdened and can no longer sensibly cope with even the minimum suggested criteria for patrolling new pages. The best way to ensure that the WMF's flagship project - the one that draws all the donations - becomes an untrustworthy resource full of useless and corrupt articles, is to sit back and do nothing and let WP become a mire of misinformation and spam. Wikipedia has already become the butt of media satire with "If you believe Wikipedia, you'll believe anything". The quest is therefore for ''any'' measures that will tighten up article quality '''at the source''' of creation.
*Bias reduction and neutrality training.
*Finding reliable sources and comparing sources.
*Professional research.
*Lessons about the topic.
*Wikipedia policy.
*Any other applicable areas</s>


This policy would also reduce vandalism and guideline violations across Wikipedia, making situations easier to handle.
::Although they are aware of them, as usual the WMF Growth Team has played down and resisted addressing these issues in favour of pursuing other, expensive initiatives of their own design, which in the NPP realm remain ineffective. It's the responsibility of the WMF to ensure new users are aware of the rules at the '''point of registration'''.


<s>Since Wikipedia is a widely-used source, ''it is time to move away from the current decentralized approach'' where just about anyone (no matter how inexperienced and biased) can edit, which has resulted in article bias, vandalism, unreliable sources, poor editing, sockpuppet accounts, edit wars, controversies, and a host of other problems.</s>
::The NPP team has handed solutions to the WMF on a plate, which at the same time will not only reduce the tide of rubbish, but most importantly, encourage the good faith new users to offer articles that have a fair chance of being published. All this project needs is to be written up in MediaWiki source code, but of course short of a mutiny by the community, the WMF will not entertain any ideas that they did not think of themselves and can collect the accolades for.
::The '' "anyone can edit" '' principle is not a get out of jail free card; it should be quoted in its full context: 'Anyone can edit as long as they play by the rules'. For once and for all, just make those basic rules clear for bona fide new registrants, and help them comply. [[User:Kudpung|Kudpung กุดผึ้ง]] ([[User talk:Kudpung|talk]]) 03:53, 28 November 2024 (UTC)
:::{{tq|1=Wikipedia has already become the butt of media satire with "If you believe Wikipedia, you'll believe anything"}}
:::This is just rhetorically dishonest. When people say this they are generally referring to vandalism, hoaxes, and information on high-profile articles, not the stuff that goes through NPP.
:::Like, just think about this for a second. Think about the kind of misinformation people generally disseminate and what it is about. Almost always, it's about things people ''already care about'', which means things we have articles on. COVID. Political stuff. Current events. Not obscure new articles. [[User:Gnomingstuff|Gnomingstuff]] ([[User talk:Gnomingstuff|talk]]) 17:51, 29 November 2024 (UTC)
:I believe that the wiki should at least allow new users to create stubs without citations. -- [[User:Chatul|Shmuel (Seymour J.) Metz Username:Chatul]] ([[User talk:Chatul|talk]]) 20:45, 29 November 2024 (UTC)
:: They can do so in draft space. [[User:BD2412|<span style="background:gold">'''''BD2412'''''</span>]] [[User talk:BD2412|'''T''']] 20:52, 29 November 2024 (UTC)
:Without commenting on the merits of the core portion of this proposal, I think it's worth pointing out that certain types of mainspace pages don't need, and indeed are expected not to have, references. You could try to get around that by excluding pages tagged with the DISAMBIG magic word, SIA templates, etc. But that won't work if they don't properly format the page <small>(common for new users)</small>; I don't have any statistics handy for how often new users create those types of pages, but I suspect it's a large enough number that it should be taken into consideration. [[Special:Contributions/184.152.68.190|184.152.68.190]] ([[User talk:184.152.68.190|talk]]) 04:23, 30 November 2024 (UTC)
::Good point. [[User:HLHJ|HLHJ]] ([[User talk:HLHJ|talk]]) 23:57, 1 December 2024 (UTC)


A Research-Group-based editing Wikipedia will be far more stable and solve many of our persistent problems. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 15:28, 17 December 2024 (UTC)
:One of the commentators on the original proposal pointed out that articles created through AfC are in practice required to have references, so this would not actually change anything for new editors, who have to use AfC anyway. So perhaps it would apply to somewhat more experienced editors.
:So something more like [[:Citizendium]]? [[User:Schazjmd|<span style="color:#066293;">'''Schazjmd'''</span>]]&nbsp;[[User talk:Schazjmd|<span style="color:#738276;">''(talk)''</span>]] 16:11, 17 December 2024 (UTC)
:So basically we would deprecate half our policies and guidelines? Honestly trying to be nice but this is a terrible idea [[User:Horse Eye&#39;s Back|Horse Eye&#39;s Back]] ([[User talk:Horse Eye&#39;s Back|talk]]) 16:14, 17 December 2024 (UTC)
:Your proposal describes something that would not be Wikipedia at all (remember, we are the free encyclopaedia that anyone can edit), so is better suited for a [[WP:FORK|fork]]. Usually proposals like yours (with editing only permitted to small numbers of "trusted" people) only work for small projects, if at all (I expect you would have to pay people to consider going through the "research group" training; it certainly does not sound attractive at all to me as a volunteer). You will need tens of thousands of editors to have any chance at not becoming outdated immediately. —[[User:Kusma|Kusma]] ([[User talk:Kusma|talk]]) 16:56, 17 December 2024 (UTC)
:{{tq|it is time to move away from the current decentralized approach where just about anyone [...] can edit}} - this proposal is dead on arrival - I support a SNOW close. [[User:Bugghost|<span style="font-weight:bold;color:#f50">BugGhost</span>]]&nbsp;[[User talk:Bugghost|🦗👻]] 17:30, 17 December 2024 (UTC)
::Congratulations! We're completely rejecting Wikipedia's open content, its editors' direct contributions (henceforth to be filtered through a new Wikipedia Politburo), with the basic principles of Wikipedia's founders soundly rejected. "Research group training" says it all. [[User:Mason.Jones|Mason.Jones]] ([[User talk:Mason.Jones|talk]]) 18:15, 17 December 2024 (UTC)
:Respectfully, I think this is a horrible idea. The decentralized model is what the entire site was built upon. It is even one of the first things advertised on the Main Page! If we got rid of it, the site wouldn't truly be Wikipedia anymore. [[User:QuicoleJR|QuicoleJR]] ([[User talk:QuicoleJR|talk]]) 18:20, 17 December 2024 (UTC)
::I mean, I think Wikipedia is big enough to drop the "free encyclopaedia" bit and attempt a nupedia-like approach. However, I digress and have dropped that clause in my proposal. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:06, 17 December 2024 (UTC)
:Few modifications of my proposal based on feedback:
:*Keep the decentralized model.
:*Drop the extensive training clause. No extensive training will be required.
:*Individuals are expected to learn about the topic before editing.
:*It will be necessary to join research groups (i.e., groups made of editors collaborating on the topic) to better learn about the topic and better contribute to it.
:*Editing of ''various'' articles, including election articles, will require an extended-confirmed status.
:[[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:04, 17 December 2024 (UTC)
::So these groups are basically [[WP:Wikiproject|Wikiprojects]] but more restrictive? [[User:Tarlby|<span style="color:cyan;font-family: Comic sans;">''Tarl''</span><span style="color:orange;font-family: Comic sans;">''by''</span>]] <sup>([[User talk:Tarlby|'''t''']]) ([[Special:Contributions/Tarlby|'''c''']])</sup> 20:17, 17 December 2024 (UTC)


===Modified proposal: Research panels between editors and greater article protections===
:There is a widespread belief that references are required for every ''statement'', let alone article. This is usually applied to edits by newcomers, who get scared off when a solid but unreferenced contribution is deleted without trying to [[WP:JUSTFIXIT]], and who would learn how to cite if it was instead tagged "citation needed" or a cite was added by another editor (we have studied this). But I've seen a solid-but-uncited edit by an admin removed by an IP, too; this is much less serious.
I got your feedback on this policy proposal and decided to overhaul it to maintain the free nature of Wikipedia and strong editor collaboration while ensuring accuracy and reliability.


In order for Wikipedia's articles to be more reliable and less biased and to make Wikipedia a more reliable site, I believe that there should be the mandatory creation of '''Research panels''': topic collaborations between editors focused on gathering information, comparing sources, establishing editing policies and guidelines in accordance with general Wikipedia rules, and ensuring compliance with those guidelines. Formation of research panels will be necessary in every topic.
:There is also confusion between ''notable'' and ''has citations that establish notability''. I recently posted [[confirmat screw|an unreferenced stub article]] in the mainspace, and it was draftified and AfCd within the hour. The topic was notable, meaning it would not have been deleted if listed at AfD, and I think I remember an explicit statement that draftifying was an alternative to deletion and could only be used if articles met deletion criteria. The point here is not the individual editors who did this in good faith; the point is that [[Wikipedia talk:WikiProject Articles for creation/Archive 58#Do I need AfC review?|the ensuing discussion]] made it clear that most of the people on the AfC board thought it reasonable to draftify any unsourced article.


*The editing model will remain decentralized. However, Research panels will determine policies based on consensus for their respective topics.
:We need to make a '''conscious choice''' to either:
*Individuals can join a research group freely. However, individuals are expected to follow the policies and protocols set forth by the research group.
*Individuals are expected to learn about the topic before editing.
*It will be necessary to join research panels to better learn about the topic and better contribute to it.
*''Various'' controversial articles, including election articles, will receive special protection or extended-confirmed protection upon request from the respective Research panel.


Research Panels will allow people to better understand the topic while integrating cooperation efforts to find the most accurate information and create the best possible articles.
:*change policy to require citations on every article (meaning we delete all the articles at [[:Category:Articles lacking sources]], or have a massive sourcing drive before the policy comes into effect) and every edit made by a new editor
:*find a way to teach editors to cite unsourced things, and delete them only if they are unsourceable, [[User:HLHJ/Sandboxes/TemplateSandbox|which is current policy]].


:Opinions? Next steps? [[User:HLHJ|HLHJ]] ([[User talk:HLHJ|talk]]) 23:51, 1 December 2024 (UTC)
--[[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:36, 17 December 2024 (UTC)
:This still feels like [[Wp:Wikiproject|Wikiprojects]] with more rules. [[User:Tarlby|<span style="color:cyan;font-family: Comic sans;">''Tarl''</span><span style="color:orange;font-family: Comic sans;">''by''</span>]] <sup>([[User talk:Tarlby|'''t''']]) ([[Special:Contributions/Tarlby|'''c''']])</sup> 20:39, 17 December 2024 (UTC)
::This is not a new discussion. There is no consensus to do bullet point one, and it is very unlikely to get any. At the same time, the existence of older unsourced articles is not a good reason for new articles to lack sourcing. The example process given seems fine, the article was given time to develop in draft space and was put into mainspace when ready. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 00:04, 2 December 2024 (UTC)
::Ah, then we can use that and implement it more strongly across every topic. We do need some sort of organization, though. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:41, 17 December 2024 (UTC)
:This modified proposal is sloppy at worst and I doubt it will get any approval. However, I think y'all should take whatever good ideas are there (if any) and also create spaces or something like that for greater learning and collaboration among editors, to resolve conflicts and biases. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:39, 17 December 2024 (UTC)
:"Individuals are expected to learn about the topic before editing"+"It will be necessary to join research panels to better learn about the topic" seems to add up to "Users are expected to be members of research panels before editing topics in that realm."
:So if I look at an article about "semiotics" (about which I know nothing) I am not allowed to correct, say, some [[MOS:LQ]] failures, some incorrect header capitalization, and a misspelling of Charles Schulz's last name until I train myself in semiotics and join the Official Wikipedia Semiotics Fun Team? -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 20:48, 17 December 2024 (UTC)
::The whole training clause has been dropped and you could freely join the panels, so you would be allowed to do all that. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:50, 17 December 2024 (UTC)
::Also, the whole editing thing only applies to major edits, not minor ones.
::Either way, this isn't my best work. The first proposal was essentially nupedia revivalism, and the modification is a sloppy attempt at salvaging it based on the original essence of investigating a topic further to put out the most accurate information (as is necessary in the [[Proto-Sinaitic script]] article). [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 20:53, 17 December 2024 (UTC)
::I understand that I would be allowed to jump through whatever hoops you erect in order to be able to correct misspellings, but that seems an effective way to discourage me from improving the encyclopedia and getting it in line with general standards. (I am also unconvinced that this will reduce bias; it would seem to help solidify a smaller group of editors on various topics who are likely to be driven, quite possibly by bias, to go through the effort which you're adding.) -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 20:54, 17 December 2024 (UTC)
:::<small>(Note, that was written before you made the "minor edit" change.) -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 20:55, 17 December 2024 (UTC)</small>
:::Regardless, I am '''dropping''' my proposal because it is poorly written and will probably not be helpful at all. [[User:Cnscrptr|Cnscrptr]] ([[User talk:Cnscrptr|talk]]) 21:21, 17 December 2024 (UTC)


{{Archive bottom}}
== Propose to create page of block discussion in noticeboards ==


== Is the "above 8000 words = split" an absolute rule? ==
Hello users, I propose having a page within the noticeboards, in the "general" section, called "Block discussion" (BD): a list of active discussions (each of which could be a review request, an unblock request, or a discussion on whether to block a user), separate from the administrators' noticeboard. To clarify further, within the BD there would be 5 sections:
# Evidence (evidence that a user can provide as a reason for blocking; ignored in a review request)
# Defense (the blocked or accused user's defense against the block, or in support of its review)
# Comments (comments from anyone who is registered and has at least 10 edits, whether they agree with, disagree with, or are neutral on blocking, a filter, or unblocking)
# Administrators' evaluation (where administrators agree or disagree with blocking, unblocking or filtering; this means that the conclusion depends on the administrators' assessment)
# Conclusion (the conclusion of the discussion: whether the block, filter or unblock was approved)


I am referring to this chart found on [[WP:SIZE]]:
NOTE: There must also be verification in the discussion, to prevent someone from manipulating the BD through sockpuppetry. [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 18:54, 27 November 2024 (UTC)


{| class="wikitable" cellspacing="0" cellpadding="4"
:This means I'm proposing to separate "Wikipedia:Block Discussion" from "Wikipedia:Administrators' noticeboard" to be clearer [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 18:57, 27 November 2024 (UTC)
|-
:I understand the desire to split things off of AN/ANI, but this split poses several problems in practice. Quite frequently the proposal for a CBAN only arises after discussion has been ongoing for some time, and while it could be split off at that point it creates an extra bureaucratic step for questionable benefit. The other issue is that neither CBAN impositions nor their appeals are all that common, and separate noticeboards only tend to work well for things that have a fairly high frequency threshold. Arguably, if we had to do it over again AN wouldn't be the catchall, but at this point changing that is more trouble than it's worth.
! Word count
:Granted, CBAN and appeal procedures could be tightened up separately without splitting anything off, but there's a longstanding preference for unstructured and somewhat messy discussions, and I don't see that changing anytime soon. [[Special:Contributions/184.152.68.190|184.152.68.190]] ([[User talk:184.152.68.190|talk]]) 17:03, 28 November 2024 (UTC)
! scope="col" | What to do
::@[[Special:Contributions/184.152.68.190|184.152.68.190]] Ok, I understand. So should I cancel this proposal, because it would be too complex? [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 17:57, 28 November 2024 (UTC)
|-
:::{{re|JPPEDRA2}} Yes, you can just close it as withdrawn, if you so choose. But don't let me discourage you if you want to leave this open for input from others; every so often [[WP:PEREN|perennial proposals]] do get implemented, including rather recently, though it's usually better to get input at [[WP:VPI]] first.
| >&nbsp;15,000&nbsp;words || Almost certainly should be divided or trimmed.
:::As a side note unregistered users cannot <small>yet</small> [[WP:PINGIP|be pinged]], though apparently that is coming sometime in the not to distant future. [[Special:Contributions/184.152.68.190|184.152.68.190]] ([[User talk:184.152.68.190|talk]]) 18:52, 28 November 2024 (UTC)
|-
::::Ok, so I won't cancel now, I will let others discuss it, if it is rejected, put it in those VPI or perrenial proposals that you mentioned, thanks non-registrered user. [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 19:20, 28 November 2024 (UTC)
| &gt; 9,000 words || Probably should be divided or trimmed, though the scope of a topic can sometimes justify the added reading material.
::::* We need someone and other users to discuss this whether they agree or disagree, I will wait.
|-
::::* IP user, can we '''ping''' other users or wait? [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 23:40, 29 November 2024 (UTC)
| &gt; 8,000 words || May need to be divided or trimmed; likelihood goes up with size.
::::*:{{re|JPPEDRA2}} Neutral pings are allowed, but be aware of [[WP:CANVASS]]; same with {{tl|please see}} notices. Remember we're all volunteers so it may be a while until others weigh in, and sometimes discussions just don't gain traction for whatever reason. I'm pretty busy myself so I might attend to some things for another day or two here, but after that I'll probably be away for a while.
|-
::::*:Even if this doesn't attract any interest, you can always raise the issue again in looser format at [[WP:VPI]] later, and see if any good ideas come up. [[Special:Contributions/184.152.68.190|184.152.68.190]] ([[User talk:184.152.68.190|talk]]) 02:04, 30 November 2024 (UTC)
| &lt; 6,000 words || Length alone does not justify division or trimming.
::::*::Understood the WP:CANVASS part, but the "please see" should it be placed or not? [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 17:38, 30 November 2024 (UTC)
|-
::::*:::{{re|JPPEDRA2}}, both pings and {{tl|please see}} notices are ok, so long as they are neutrally worded, and not given selectively to users based on their known preferences, see [[WP:APPNOTE]] for details. [[Special:Contributions/184.152.68.190|184.152.68.190]] ([[User talk:184.152.68.190|talk]]) 19:02, 30 November 2024 (UTC)
| &lt; 150 words || If an article or list has remained this size for over two months, consider merging it with a related article.<br /> Alternatively, the article could be expanded; see [[Wikipedia:Stub]].
*I'm not sure why, but I was invited here by a notice on my talk page. My initial impression is that this is a solution in search of a problem - largely per the IP editor's first comment. Very few AN(I) threads start off as a proposal for a ban, and divorcing such a proposal from the preceding discussion seems suboptimal, especially ban proposals often run concurrently with proposals for lesser restrictions. Appeals of bans being moved to a new page is an easier sell from a purely practical perspective but it would be a relatively little-used, for example there are none currently being discussed at either AN or ANI, and it would be less watched than either page (which is not a good thing for a community block appeal). [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 22:58, 30 November 2024 (UTC)
|}
*@[[User:JPPEDRA2|JPPEDRA2]]: I see that you have very few mainspace edits and you haven't participated in any AN discussions. I recommend working on some easy mainspace edits at [[WP:TASKS]] instead of proposing massive changes to areas of the encyclopedia that you don't edit in. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 23:43, 30 November 2024 (UTC)
*:@[[User:Voorts|Voorts]] Ok dear voorts, thanks for recommendation. [[User:JPPEDRA2|<span style="color:blue;font-family:Times New Roman;font-size:12pt">JPPEDRA2</span>]] <sup>[[User talk:JPPEDRA2|<span style="color:blue;font-family:Comic Sans;">why not?</span>]]</sup> 22:24, 1 December 2024 (UTC)
* While I do agree that there are problems with AN/I I don't think those problems are that blocks are discussed there. Rather, as constructed I find it is generally bad at efficiently discussing and resolving urgent issues. I think we should have improved processes in place for promptly identifying and closing spurious cases so that they don't become drawn-out time sinks that often result in either nothing happening but an argument or, occasionally, a boomerang. I respect the [[WP:BOLD]] spirit of this proposal but I think it's unlikely to cure what ails AN:I. [[User:Simonm223|Simonm223]] ([[User talk:Simonm223|talk]]) 01:02, 2 December 2024 (UTC)


I have seen a few instances where, an editor will raise the issue that an article is too large at, say, 7500 words or 8100 words. We have multiple history pages (and medical/psychology pages) with well over 11,000+ words, even some with over 16000. Where does one draw the line? It seems like Wikipedia leaves it to the discretion of the editor after about 8000 words. [[User:Plasticwonder|Plasticwonder]] ([[User talk:Plasticwonder|talk]]) 07:21, 18 December 2024 (UTC)
:Looking at the table, it's obvious that "above 8000 words=Split" is not "an absolute rule". I promise you that if it were, that table would say something that sounded remarkably like "if the article is above 8,000 words, then it absolutely must be split".
:Additionally, we have [[Wikipedia:No firm rules|an official policy against absolute rules]].
:Where one draws the line is: In a place that makes sense for the topic of that specific article, having thoughtfully considered all the facts and circumstances that apply to that unique article. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 07:41, 18 December 2024 (UTC)
:There was a lengthy discussion at [[Wikipedia talk:Article size/Archive 6]] about the size guidance, for the record. Splitting pages is a lot of work, and not everyone thinks that spreading stuff over multiple pages is better for readers than having it in one big page. [[User:Jo-Jo Eumerus|Jo-Jo Eumerus]] ([[User talk:Jo-Jo Eumerus|talk]]) 08:19, 18 December 2024 (UTC)
::In addition to the above, what matters for the technical aspects of article size is not the number of words but the number of bytes. Word count can only ever be an approximation of that, as the length of the words used matters ("a" is 1 byte, "comprehensive" is 13), and the number and size of included media matters very significantly more. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 09:20, 18 December 2024 (UTC)
:::I think [[WP:PEIS]] is a bigger technical challenge for long articles. The more templates, and the more complicated templates, the more likely you are to need to split for technical reasons. [[List of common misconceptions]] needs a split in part due to PEIS reasons. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 18:43, 18 December 2024 (UTC)
:What's more, there's nothing even in the excerpt here that would amount to an absolute rule. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 09:44, 18 December 2024 (UTC)
::It isn't an absolute rule, but ''usually'' an article having an extremely high word count is evidence of a bigger problem with [[WP:SUMMARYSTYLE]] -- that it's too dense or detailed for a reader to use it as a first-port-of-call summary. As such, ''usually'', it's a wise move to create daughter articles for the detailed material, and trim it down to its essentials in the main article; this improves the readability of the main article and allows interested readers to follow up into the nitty-gritty. As {{u|Jo-Jo Eumerus}} rightly says above, though, there's not really any such thing as an absolute rule in this place. ''[[User:UndercoverClassicist|<b style="color:#7F007F">UndercoverClassicist</b>]]'' <sup>[[User talk:UndercoverClassicist|T]]·[[Special:Contributions/UndercoverClassicist|C]]</sup> 09:59, 18 December 2024 (UTC)
:::What we now know is that many readers are looking for specific information, with few reading from top to bottom, but the search engines send them to the mother article even when a more specific daughter article exists. So the first port of call needs to be the most highly detailed. The advice in [[WP:SUMMARYSTYLE]] is therefore considered well intentioned but obsolete; stripping the mother article and pushing information down to the daughter articles defeats our whole purpose in providing information. [[User:Hawkeye7|<span style="color:#800082">Hawkeye7</span>]] [[User_talk:Hawkeye7|<span style="font-size:80%">(discuss)</span>]] 11:31, 22 December 2024 (UTC)
::::When you say "we know…", "is considered" and similar, are you pointing to specific previous discussions, RfCs etc on this matter? "In the wild", as it were, I still see these size limits regularly invoked, even if the conversation rarely ends at them. ''[[User:UndercoverClassicist|<b style="color:#7F007F">UndercoverClassicist</b>]]'' <sup>[[User talk:UndercoverClassicist|T]]·[[Special:Contributions/UndercoverClassicist|C]]</sup> 09:17, 23 December 2024 (UTC)
: Rather than draw a line, I'd rather just remove that chart. Can't imagine why a suite of concrete word counts and procedures would ever be very helpful. &mdash; <samp>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></samp> \\ 16:29, 18 December 2024 (UTC)
::It absolutely makes sense to say at what page sizes editors should start considering other options, as well as where splitting can be absolutely unnecessary. Nothing wrong with the table as long as it's clear those aren't hard-and-fast rules. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 16:41, 18 December 2024 (UTC)
:::Agreed, I find it helpful because it helps me remember what is generally too long for mobile users (I understand that mobile is generally a blind spot for us as editors, because the vast majority of us don't edit on mobile but most of the readers are actually on mobile). [[User:Horse Eye&#39;s Back|Horse Eye&#39;s Back]] ([[User talk:Horse Eye&#39;s Back|talk]]) 16:49, 18 December 2024 (UTC)
::::I also believe that the chart is helpful. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 17:29, 18 December 2024 (UTC)
:There don't seem to be any absolute rules laid out there... Even "Almost certainly" is a qualifier, not an absolute rule. [[User:Horse Eye&#39;s Back|Horse Eye&#39;s Back]] ([[User talk:Horse Eye&#39;s Back|talk]]) 16:38, 18 December 2024 (UTC)
*The optimal article size varies quite a lot, actually. Key things we need to consider include:
*:The likely readership. Someone who's looking up [[History of Libya under Muammar Gaddafi]] probably has time to read something long and thoughtful. Someone who's looking up [[emergency surgery]] might need basic facts, in simple words, very fast.
*:The cognitive load associated with the topic. [[Star Wars]] is (very) long but easy to understand; [[Fourier inversion theorem]] is much shorter, but I bet it takes you longer to read, unless you have unusual expertise in mathematics.
:This is not the kind of thing on which we can produce simplistic guidance.—[[User:S Marshall|<b style="font-family: Verdana; color: Maroon;">S&nbsp;Marshall</b>]]&nbsp;<small>[[User talk:S Marshall|T]]/[[Special:Contributions/S Marshall|C]]</small> 17:03, 18 December 2024 (UTC)
:Wikipedia leaves it to the discretion of editors far far before 8,000 words. We have thousands of single sentence articles to attest to this. The average article is less than 700 words. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 17:15, 18 December 2024 (UTC)
::The median article length is around 350 words, and the mean is 750.[https://en.wikipedia.org/wiki/User_talk:BilledMammal/Average_articles#c-WhatamIdoing-20240814231900-BilledMammal-20240814193000] About one in 75 articles has more than 6,000 words. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 17:34, 18 December 2024 (UTC)
:::You'll have to take the specifics up with [[WP:WPSIZE]], although that ballpark range sounds the same. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 18:45, 18 December 2024 (UTC)
:No. [[User:Phil Bridger|Phil Bridger]] ([[User talk:Phil Bridger|talk]]) 18:52, 18 December 2024 (UTC)
*I've always felt that the kB of readable prose was a better metric for page size (such as is produced by various page size gadgets). Turns out, bigger words take longer to read than shorter words :P Doing it just by wordcount encourages a certain verbosity. For me, my rule of thumb has always aimed to keep big articles under 100kb readable prose. But there's no hard and fast rule, and there shouldn't be. [[User:CaptainEek|<b style="color:#6a1f7f">CaptainEek</b>]] <sup>[[User talk:CaptainEek|<i style="font-size:82%; color:#a479e5">Edits Ho Cap'n!</i>]]</sup>[[Special:Contributions/CaptainEek|⚓]] 23:08, 22 December 2024 (UTC)
*:If I'm being honest, what might be the best metric is starting at the top and lightly hammering the {{key|Page Down}} key for a bit. If I groan before reaching the References section, it's too long. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 23:10, 22 December 2024 (UTC)
*::For example, results were heavily discouraging for [[Special:Diff/1255240222|George Washington]] until recently; [[Special:Diff/1264675775|as of today I no longer despair]] at the article's uncaring girth—thanks Nikki et al.! <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 23:18, 22 December 2024 (UTC)
*:::26,000 words is {{tomats|26000}}. Another way to look at that table is by saying that if it's more than half the length of a book, it's pushing past being "an article" and edging up towards being "a book".
*:::Or you can look at it in terms of how many minutes reading the whole thing would take. There's quite a bit of variation, but for easy math, 300 words per minute means that a 15,000-word-long article would take 50 minutes to read, which almost certainly exceeds the interest and attention span of most readers. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 00:35, 23 December 2024 (UTC)
*::::I think the most fundamental scalar isn't quite reading time or even visual size, but structural complexity—for an online encyclopedia article, being overlong expresses itself in my increasing inability to navigate an article comfortably to read or locate what I want, or to understand the structure of the scope covered by it at a glance. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 00:38, 23 December 2024 (UTC)
*:::::Come to think of it, one thing that makes an article feel longer than its word count is if its sections, media, and other landmarks have been laid out in a careless or unnatural way. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 00:48, 23 December 2024 (UTC)
*'''No.''' And this was rather a pointless, self-answering question in the first place, not something for a VP thread. The answer to the posed question of 'Is the "above 8000 words=Split" an absolute rule?' is obviously "no", both by observing actual WP community practice, and just by reading the table the OP quoted: {{tq|&gt; 8,000 words — May need to be divided or trimmed; likelihood goes up with size}}. Is anyone here actually confused into believing that A) "must" and "may" are synonymous, or B) that a guideline, to which reasonable exceptions sometimes apply, is somehow a legal-level policy that must be obeyed at all costs? In reality, there is never any hurry to split a large article, and doing it properly often involves a tremendous amount of work, involving both repair of citations (sometimes in great detail), and resummarizing the background context in the side article while also resummarizing the side-matter in [[WP:SUMMARY]] style within the main article (and doing them distinctly enough that the results are not obnoxiously repetitive if the reader moves between the articles). Doing a good job of this can take several days up to a month or longer of tightly focused work, depending on the detail level of the material, the number of citations, etc. It is not trivial, we're all volunteers here, and our readers are not going to keel over and die if they reach a detailed article that's a bit longer than they were expecting or would prefer. Ultimately, an article that is ginormous {{em|usually}} should be split, but there is no deadline, and it needs to be done properly (plus there are often conceptually different ways to go about it from a content-flow perspective, and that might require some consensus discussion). <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — [[User:SMcCandlish|'''SMcCandlish''']] [[User talk:SMcCandlish|☏]] [[Special:Contributions/SMcCandlish|¢]] 😼 </span> 01:22, 23 December 2024 (UTC)
*:Ever since WAID reminded me of it, I've thought we should maybe link [https://datatracker.ietf.org/doc/html/rfc2119 RFC 2119] somewhere as a lemma. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 01:36, 23 December 2024 (UTC)
*::I think I linked it once in [[Wikipedia:Policies and guidelines]], years ago, and someone objected. I didn't follow up to see whether the objecting editor is one of the handful who think that ''should'' is a more polite and/or IAR-compliant way to say ''must'', but as that's a fairly uncommon POV among editors, it probably wasn't. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 05:41, 23 December 2024 (UTC)
*:::The linked document pushes very hard on ''should'': "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed" is not a low bar. It sounds much like ''must'' except when IAR. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 09:39, 23 December 2024 (UTC)

== Global welcoming policy ==

There is a proposed global policy at [[:meta:Requests for comment/Welcoming policy]]: "A wiki is only allowed to post welcome messages to users if their account was originally created at the wiki, or the user has at least one non-imported edit there." Comments belong there and not here. [[User:PrimeHunter|PrimeHunter]] ([[User talk:PrimeHunter|talk]]) 21:48, 27 November 2024 (UTC)

== [[Draft:Manual of Style/Israel- and Palestine-related articles]] ==
== WP:CRYSTAL in officeholder articles and infoboxes ==


I want to propose [[Draft:Manual of Style/Israel- and Palestine-related articles]] as a new guideline with an RfC. I'm starting this thread in case any outsiders to this area want to provide input or have questions. For context, the goal of this manual of style is to get agreement on broad principles to make editing easier in this topic area. As an example, [[WP:PIA5]] is dealing with inconsistent use of the word "massacre" specifically, which has caused much arguing over whether there is a double standard, so this guideline makes the standards we should be using explicit. <span class="nowrap">[[User:Chess|Chess]] ([[User talk:Chess|talk]]) <small>(please [[Help:Talk pages#Notifications|mention]] me on reply)</small></span> 06:38, 21 December 2024 (UTC)
Is the current policy to ignore [[WP:CRYSTAL]] in regards to wording in articles related to upcoming officeholders? [[Donald Trump]] had the usage "will be inaugurated" until recently and [[JD Vance]] has {{tq|He will resign on or before January 20, 2025, when he will be inaugurated as vice president of the United States}}. Similarly, infoboxes have "assuming office on X date". Should it not be "Scheduled to assume office on X date"? There seems to be disagreement on whether CRYSTAL applies since it is almost certain that these individuals will obtain their office barring some unforeseen event. I would like community input on this since if there is CRYSTAL, changes may need to be discussed here and implemented. [[User:Hurricane Noah|<b style="white-space:nowrap;text-shadow:#009200 0.3em 0.4em 1.0em,#009200 -0.2em -0.2em 1.0em;color:#009200">Noah</b>]], [[BBA#BSBA|BSBA]]<sup>[[User talk:Hurricane Noah|<b style="color:#ff0000">Talk</b>]]</sup> 23:32, 29 November 2024 (UTC)
:Reliable sources appear to do both. For example:
:* [https://www.reuters.com/world/us/what-watch-ahead-us-presidential-inauguration-2024-11-25/ AP article]: "President-elect Donald Trump will take office on Jan. 20 after defeating Democratic candidate Kamala Harris."
:* [https://www.nytimes.com/2024/11/06/us/politics/trump-white-house-timeline.html NY Times]: "Congress is scheduled to meet on Jan. 6, 2025, to count the Electoral College results, and Mr. Trump is set to be sworn into office two weeks later, on Jan. 20."
:Personally, I think this is a distinction without a difference. In common usage, saying "X will do Y on Tuesday" is always subject to the caveat that something might occur that prevents X from doing Y on Tuesday. To quote the philosopher [[Søren Kierkegaard]]: "I shall certainly attend your party, but I must make an exception for the contingency that a roof tile happens to blow down and kill me; for in that case, I cannot attend." [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 23:50, 29 November 2024 (UTC)
::This type of stuff is what is outside the bounds of what [[WP:NOT#CRYSTAL]] has, eg we can start writing the article for the 2028 Summer Olympics as there's an extremely high certainity it will happen; there may be very extreme circumstances that may cause a change but the odds of those changing events are very low. The planned inaugeration is clearly of the same ilk.<span id="Masem:1732924973408:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> —&nbsp;[[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 00:02, 30 November 2024 (UTC)</span>
:::The part I noticed was {{tq|Dates are not definite until the event actually takes place, as even otherwise-notable events can be cancelled or postponed at the last minute by a major incident}}. The Olympics articles always say scheduled rather than will take place. [[User:Hurricane Noah|<b style="white-space:nowrap;text-shadow:#009200 0.3em 0.4em 1.0em,#009200 -0.2em -0.2em 1.0em;color:#009200">Noah</b>]], [[BBA#BSBA|BSBA]]<sup>[[User talk:Hurricane Noah|<b style="color:#ff0000">Talk</b>]]</sup> 00:26, 30 November 2024 (UTC)
::::It does not matter in this case. If the inauguration of the next US executive is delayed, I’m confident those articles will be immediately updated. Infoboxes don't handle verbiage well. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 01:08, 30 November 2024 (UTC)
:::::What about other officeholders? [[User:Hurricane Noah|<b style="white-space:nowrap;text-shadow:#009200 0.3em 0.4em 1.0em,#009200 -0.2em -0.2em 1.0em;color:#009200">Noah</b>]], [[BBA#BSBA|BSBA]]<sup>[[User talk:Hurricane Noah|<b style="color:#ff0000">Talk</b>]]</sup> 01:39, 30 November 2024 (UTC)
::::::I don't see a difference between saying a person is about to become a senator vs. the president. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 01:51, 30 November 2024 (UTC)
:::::::The higher the number of electees, the more likely it is that something happens to one of them. We have had representatives-elect die before assuming office. It's an issue of saying something is certain to occur rather than very likely to occur. We have nothing to tell us it's certain they assume office on X. Does this policy simply not apply to any officeholder period and we just state they ''will'' be inaugurated/assume office on X rather than ''scheduled'' to be inaugurated/assume office on X? [[User:Hurricane Noah|<b style="white-space:nowrap;text-shadow:#009200 0.3em 0.4em 1.0em,#009200 -0.2em -0.2em 1.0em;color:#009200">Noah</b>]], [[BBA#BSBA|BSBA]]<sup>[[User talk:Hurricane Noah|<b style="color:#ff0000">Talk</b>]]</sup> 02:20, 30 November 2024 (UTC)
::::::::Ditto {{noping|voorts}}; difference without a difference. Regards, [[User:Goldsztajn|Goldsztajn]] ([[User talk:Goldsztajn|talk]]) 03:58, 30 November 2024 (UTC)
:The guidance on Wikipedia not being {{tq|a collection of unverifiable speculation, rumors, or presumptions}} (from [[Wikipedia:What Wikipedia is not]]) is guidance on content, with most of the discussion on that page being about what warrants an article. It's not guidance on writing style, so doesn't provide guidance in choosing between writing "X will happen" or "X is scheduled to happen", but whether the statement should be included at all. [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 19:58, 30 November 2024 (UTC)
::I think it is reasonable that we should ask editors to use "is scheduled" or "is planned" instead of "will" in cases of near-confirmed future events. Maybe for events where humans have no control on the result, such as the next solar eclipse, we can use "will", but I can't see harm to suggest we be a bit more careful for other cases. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 22:04, 30 November 2024 (UTC)
:::The point is that I was echoing your statement, {{tq|This type of stuff is what is outside the bounds of what WP:NOT#CRYSTAL has}}. The choice of verbs is something to be covered by the writing style guidelines (and personally, I think consideration of individual circumstances is sufficiently important that a blanket statement wouldn't be too helpful). [[User:Isaacl|isaacl]] ([[User talk:Isaacl|talk]]) 22:33, 30 November 2024 (UTC)
:Related: [[Wikipedia:Village pump (proposals)/Archive 175#RfC: Interim use of successor in Infobox officeholder]] [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 21:55, 30 November 2024 (UTC)
:Best to keep doing as we've been doing for years. Making sudden changes now, would be messy. [[User:GoodDay|GoodDay]] ([[User talk:GoodDay|talk]]) 22:06, 30 November 2024 (UTC)


:Are all four points currently included based on previous conversations, or are any novel to this proposal? On the broader framing, I wouldn't create the guideline solely talking about NPOV, it is helpful to have guidelines to help describe encyclopaedic style if nothing else. On the example of massacre, I'm not up to date with the recent or ongoing discussions, but I'm surprised there's no mention in the draft of [[WP:WTW]], as this seems a classic case. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 07:00, 21 December 2024 (UTC)
== Can we hide sensitive graphic photos? ==
::{{re|Chipmunkdavis}} The settlements language comes from previous discussions/consensus, likewise with the West Bank naming conventions section. The categorization language comes from a suggestion on the talk page of the draft.
::The "massacre" part is a novel part of the proposal. I would say that it seems like an obvious [[WP:WTW]], but there's many RMs in the topic area in which editors use their own definition of the term "massacre" (like civilian death toll, mechanism of killing, see [[User:BilledMammal/ARBPIA_RM_statistics]] for a list of 24 RMs about the term "massacre"), and argue about whether or not the event meets that standard. I want to make it easier for editors to disengage by citing this guideline, instead of getting into lengthy arguments over points that don't contribute much to consensus (which is what happens now).
::The reason the guideline is short is because I believe it will give it the best chance of passing right now. In the future, I'd like to get consensus to add more points to this guideline. I thought proposing a lengthy guideline upfront would create a scenario in which everyone at an RfC agrees a guideline would benefit the area, but individual editors have a small issue that prevents them from supporting ''this specific version''. <span class="nowrap">[[User:Chess|Chess]] ([[User talk:Chess|talk]]) <small>(please [[Help:Talk pages#Notifications|mention]] me on reply)</small></span> 07:50, 21 December 2024 (UTC)
:::At a quick look, it seems most pages in that statistics page were not moved to a title with "massacre"? Seems sensible to collate previous discussions together, if massacres is new (as new as something discussed 24 times can be?) there might need to confirm that one. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 08:20, 21 December 2024 (UTC)
:One issue with the massacre bit is that should be part of an AT naming convention, not in the MOS. Likewise, appropriate categorizations usually aren't in the MOS. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 15:27, 21 December 2024 (UTC)


== I (+other people) co-authored a draft article, but its submission was declined ==
{{atop
| result = @[[User:Walter Tau|Walter Tau]]: This is not the place to ask questions about your drafts. You should ask @[[User:Bobby Cohn|Bobby Cohn]] on his talk page. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 15:29, 22 December 2024 (UTC)
}}
supposedly due to: "This submission is not adequately supported by reliable sources". It seems to me that there are plenty of good references now. I want to understand what is missing and how it can be improved.

A side note: there are many published articles in English Wikipedia which are of much lower quality than this draft: https://en.wikipedia.org/wiki/Draft:Maternity_capital [[User:Walter Tau|Walter Tau]] ([[User talk:Walter Tau|talk]]) 15:19, 22 December 2024 (UTC)
{{abot}}

== Can we hide sensitive graphic photos? ==
Can we hide sensitive graphic photos? I recently came across [[risus sardonicus|an article]] with a photo of a deceased man smiling '''right at the top'''—it was deeply disturbing, traumatizing, triggering, shocking, and sickening! This kind of content discourages many people who might otherwise want to read the article and could even provoke serious medical reactions, such as seizures. Imagine if that man's family came across the article and saw him like that, right in their face! Nobody seems to favor this policy, so why do we insist on keeping it? Arabic Wikipedia uses [[:ar:قالب:إخفاء وسيط|a collapsible template]] that lets readers choose whether to view such photos, without censoring informative media. Shouldn't we adopt a similar approach? [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:41, 30 November 2024 (UTC)
:Not sure where you are getting that the image subject was dead at the time the image was taken. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 21:49, 30 November 2024 (UTC)
::I couldn't even think. I was totally shocked. Anyhow, my point still stands. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:51, 30 November 2024 (UTC)
:See [[HELP:NOSEE]] '''[[User:Lee Vilenski|<span style="color:green">Lee Vilenski</span>]] <sup>([[User talk:Lee Vilenski|talk]] • [[Special:Contribs/Lee Vilenski|contribs]])</sup>''' 21:50, 30 November 2024 (UTC)
:The issue is that an image one editor might find “disturbing, traumatizing, triggering and shocking” is an image another editor will find informative and helpful. We have no way to know how others will react. It would indeed be censorship to hide such images. [[User:Blueboar|Blueboar]] ([[User talk:Blueboar|talk]]) 21:50, 30 November 2024 (UTC)
::shouldn't we choose the option that minimizes the harm to readers? That's what most companies/organizations (idk what is the right term, sorry) do. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:54, 30 November 2024 (UTC)
:::We already have. The "harm" to a person seeing such useful images in an encyclopedia is insignificant. The true harm is hiding information from those looking for it.--[[User:Khajidha]] ([[User talk:Khajidha|talk]]) ([[Special:Contributions/Khajidha|contributions]]) 21:19, 1 December 2024 (UTC)
::::That is debatable. [[User:Emir of Wikipedia|Emir of Wikipedia]] ([[User talk:Emir of Wikipedia|talk]]) 21:38, 1 December 2024 (UTC)
:Image censoring is a [[WP:PEREN|perennial proposal]] and really won't go anywhere. And given the topic of that page, I see no real option, since any other image will also be as disturbing. We do ask editors to use the principle of least astonishment, so that same image as the lede on [[corpse]] for example would be inappropriate, but not much can be done on that page. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 21:51, 30 November 2024 (UTC)
::we can use a collapsible template, then that won't be censoring. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:55, 30 November 2024 (UTC)
:::That type of suggestion is part of the perennial proposal on how to deal with such images. There's nothing that can be done to properly hide it. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 22:05, 30 November 2024 (UTC)
::::We already use collapsible templates for "long" lists, such as for [[BRICS]] members. While long lists are far less harmful, the goal was to avoid annoying readers and make them comfortable, encouraging them to read. This is also why we have templates like [[Template:Split]]—to make articles easier to navigate. Similarly, graphic images make readers extremely uncomfortable, not only discouraging them from reading a single article but sometimes deterring them from using Wikipedia altogether, which goes against the ideals of an encyclopedia.
::::The fact that image censoring is a perennial proposal suggests it’s a problematic topic that many, if not most, editors find uncomfortable. I suspect the primary reason it hasn’t been adopted is the lack of consensus, not because half the community opposes it outright. I propose a solution that could satisfy both groups: a collapsible template. This approach wouldn’t censor anything but would minimize harm.
::::Let’s focus on images that could provoke serious medical conditions and ignore the sexual and religiously offensive media for the time. Some readers may have heart conditions, PTSD, or other vulnerabilities, and we must also consider the families of deceased individuals whose photos we use. Additionally, while Wikipedia isn’t intended for children, they do use it, and we can’t ignore that reality.
::::In summary, the potential harm caused by showing these images overrides any benefit to the project. And this solution would fix this by making Wikipedia a safer and more inclusive place without censoring anything, which is the essential goal. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 22:28, 30 November 2024 (UTC)
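For illustration, the collapsed-by-default idea proposed above might look something like this in wiki markup. This is only a sketch: the {{tl|Collapse top}}/{{tl|Collapse bottom}} templates do exist, but current guidance ([[MOS:COLLAPSE]]) does not permit hiding article content this way, which is precisely what is in dispute here.

```wikitext
<!-- Sketch only: placeholder file and caption; MOS:COLLAPSE currently
     disallows this use in articles. -->
{{Collapse top|Graphic medical image – click to show}}
[[File:Example.jpg|thumb|Caption describing the image]]
{{Collapse bottom}}
```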
:::::You've yet to show harm beyond you having a personal reaction to a picture that you didn't understand... an informative picture key to the article that I didn't even slightly flinch upon seeing. (If you have any records of Wikipedia images having provoked seizures, please put them forward.) Had you hidden it by collapsing, I might have assumed that there was something horrible that I wouldn't want to see and avoid getting that information. -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 00:02, 1 December 2024 (UTC)
::::::I know [[Trypophobia]] has been the subject of discussion about a good lede image that doesn't immediately elicit a problem for readers who have that fear. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 00:22, 1 December 2024 (UTC)
:::::::That article has had requests to remove or hide the image for about a decade now. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 00:26, 1 December 2024 (UTC)
::::::{{tq|Had you hidden it by collapsing, I might have assumed that there was something horrible that I wouldn't want to see and avoid getting that information}}
::::::That would be your choice not to 'get that information.' However, forcing it on people who don't want to 'get it,' and risking a negative reaction as a result, is the real issue we should be concerned about.
::::::{{tq|You've yet to show harm beyond you having a personal reaction to a picture that you didn't understand... an informative picture key to the article that I didn't even slightly flinch upon seeing}}
::::::That is your personal experience, but we know that at least one person had an anxiety attack from that image. As a community, it is our duty to prioritize the safety of our readers and choose the least risky option. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 13:47, 1 December 2024 (UTC)
:::::::And you had the choice not to "get that information" that was in the picture.... you ''chose'' to go to the Wikipedia page about a disease. You claim to have been set off because it was {{tq|a deceased man smiling}}... only the man wasn't deceased, he is described in the image's description as a "patient", which is not generally a term for a corpse. So what set you off was ''a man smiling''. If you want us to police pictures based on information that you invent about them, it's hard to see how we don't have to police everything on your behalf. When it comes to the safety of our viewers and medical-related images, an image can help them recognize the disease and may serve them well. The "least risky" option is simply not having Wikipedia. I hope we don't choose that path. If you think that Wikipedia poses a special danger to you, you are free not to use it. -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 17:53, 1 December 2024 (UTC)
::::::::I don’t understand what you’re defending. You’re just complaining and criticizing my argument without demonstrating why leaving sensitive media as-is is a better option. Your argument essentially boils down to: “I don’t like your proposal,” which isn’t sufficient.
::::::::Anyway, regardless of whether that man was dead or not, my point still stands.
::::::::{{tq|The "least risky" option is simply not having Wikipedia.}}
::::::::I don’t think that’s the goal of Wikipedia—to discourage its readers from using it. If the choice is “either read Wikipedia and risk having anxiety attacks or don’t read it at all,” then it’s clear the situation is bad and requires change. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:08, 1 December 2024 (UTC)
:::::::::So far, I know of one person claiming to have had a problem, and that's because he saw a picture of a man smiling. Hiding all pictures as not-obviously-problematic as that would basically mean hiding all pictures... and it's not just pictures that upset people, plenty of the text would have to be hidden under the same logic. (People might be freaked out by seeing that a ninja edits Wikipedia.) Folks have pointed you to the option that would let you turn off automatic image display for yourself, and if you wanted to make some argument that that should be a standard option, that may well be a supportable argument... but hiding everything that could possibly upset anyone would basically be hiding everything. -- [[User:NatGertler|Nat Gertler]] ([[User talk:NatGertler|talk]]) 21:30, 1 December 2024 (UTC)
:::::{{tq|Let’s focus on images that could provoke serious medical conditions and ignore the sexual and religiously offensive media for the time. ... And this solution would fix this by making Wikipedia a safer and more inclusive without censoring anything, which is the essential goal. }} I think part of the reason why no consensus was ever reached on this issue is that the editors in favour of image filtering do not acknowledge that it inherently involves an infringement on intellectual freedom, and so don't put forward a framework for how to minimize the infringement. The approach can't be "Let's just create the functionality now and then worry later about what to do when a vocal minority of editors want to be able to hide all depictions of people with disabilities, or of LGBTQ+ people, because they find those images distressing." Those considerations need to be the starting point. I don't support image filtering, but when the discussion was held back in 2011 I did put foward [[meta:User:Trystan|a framework of seven principles]] for approaching it from this angle.--[[User:Trystan|Trystan]] ([[User talk:Trystan|talk]]) 17:05, 1 December 2024 (UTC)
::::::{{tq|infringement on intellectual freedom}}
::::::Why do you guys want to go so technical and get things so complicated when the situation isn't at all complicated? Ppl dislike seeing gore, let them choose not to? Just like that, easy peasy. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:15, 1 December 2024 (UTC)
:::::::Who defines what is "gore"? There's probably only a few types of images that we universally can say are problematic to a near majority of the world population (eg when you start to get into child exploitation), but beyond that, there's no way to tell when such an image would be considered bad by a majority of the readership. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 21:18, 1 December 2024 (UTC)
:::That most certainly is censorship.--[[User:Khajidha]] ([[User talk:Khajidha|talk]]) ([[Special:Contributions/Khajidha|contributions]]) 21:20, 1 December 2024 (UTC)
::{{tq|any other image will also be as disturbing}} that is what I'm arguing about. Disturbing images should be collapsed at best. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 21:59, 30 November 2024 (UTC)
:::@[[User:Super ninja2|Super ninja2]], quite a lot of people agree with you, but a long time ago, [[m:Image_filter_referendum/en|this was formally proposed]], and The Community™ rejected it. I have a lot of unhappy memories from that discussion, so you should not necessarily consider me to be an unbiased source, but I think I'm not exaggerating ''too'' much when I say that much of the discussion seemed to be divided between childless white men living in wealthy democracies, and, well, the entire rest of the world.
:::The proposed approach was that a person should be able to say, in advance, that they personally don't want to see sexual images, disgusting medical images, violent images, or contested religious/cultural images, and have images tagged like that collapsed or screened somehow, with one click to reveal. The responses tended to cluster in two categories:
:::# Individuals should not have the freedom to control what they see, even if they are doing it for neutral reasons, like wanting to conserve bandwidth on a weak internet connection, or for safety reasons, like not wanting to risk an anxiety attack right now or not wanting to worry about the morality police looking over your shoulder at a public internet cafe. The Wikipedia editor has the right to put things on your computer screen, and your duty as a reader is to look at whatever disgusting, violent, or inappropriate image they want to shove in your face.
:::# It would be impossible to figure out which (few) images draw complaints. It might be impossible to do this with 100% accuracy, but we all know that the lead image at [[Smallpox]] draws complaints even though there's a FAQ at the top of the talk page to explain why it's there, every educated person knows that [[Depictions of Muhammad]] are both easily identifiable and considered inappropriate by some religious adherents, and most of us have encountered an [[animated gif]] that we'd like to cover up or turn off.
:::I'm opposed to the first in principle and skeptical of the second. But that's the state of the discussion, and at this point, it will likely continue this way until multiple countries pass laws demanding that we change it. The Community™ has no empathy for people whose living situation is very different from their own. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 00:10, 1 December 2024 (UTC)
::::This context might help: Wikipedia was basically a spinoff from a now-defunct male-focused porn site. [https://en.wikipedia.org/w/index.php?title=Wikipedia:Notability_(people)&oldid=335637520#Pornographic_actors For years], every porn actress who was featured even once as a [[Playboy Playmate]] was automatically considered notable. If you infer from that fact something about the attitudes towards [[m:2010_Wikimedia_Study_of_Controversial_Content|controversial content]] in the early days, I couldn't prove you wrong. [[User:WhatamIdoing|WhatamIdoing]] ([[User talk:WhatamIdoing|talk]]) 00:22, 1 December 2024 (UTC)
::::Looking at the results on that page, it seems to say more people supported it than opposed it? [[User:Alpha3031|Alpha3031]] ([[User talk:Alpha3031|t]] • [[Special:Contributions/Alpha3031|c]]) 01:32, 1 December 2024 (UTC)
:::::There is one technically feasible solution I can come up with, although it may be complicated:
:::::# Create a list of types of images that some will find offensive (anatomical parts typically not displayed in public, religiously offensive images, etc). Create a template to mark each type.
:::::# Have the software mark these images, when used on other pages, in some way that scripts can use. Write scripts which individual users can self-apply to hide these images. Create a page with instructions for using these scripts, with a disclaimer that 100% results aren't guaranteed.
:::::These measures should be invisible to users not interested in them, except the tag on the image page. [[User:Animal lover 666|Animal lover]] [[User talk:Animal lover 666|&#124;666&#124;]] 10:59, 1 December 2024 (UTC)
::::::In some places a woman's hair is not typically displayed in public. Imagine if we had to hide every photo of a woman because her hair was visible, and we marked it with a template warning "Image of woman with visible hair". [[User:Valereee|Valereee]] ([[User talk:Valereee|talk]]) 18:59, 1 December 2024 (UTC)
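The tag-and-script mechanism sketched in the numbered proposal above could be reduced to a small piece of opt-in filtering logic. The sketch below is hypothetical: the category names and the idea that marker templates would annotate each image are assumptions from the proposal, not an existing MediaWiki feature; in a real user script this function's result would drive DOM changes such as swapping matched images for click-to-reveal placeholders.

```typescript
// Categories a reader could opt out of seeing (hypothetical taxonomy
// from the proposal above; the real list would be community-defined).
type Category = "medical" | "violence" | "nudity" | "religious";

interface TaggedImage {
  src: string;
  // Populated by the proposed marker templates on each image page.
  categories: Category[];
}

// Returns only the images the reader has chosen to hide. With an empty
// preference set (the default), nothing is hidden, so the feature is
// invisible to uninterested readers, as the proposal requires.
function imagesToHide(
  images: TaggedImage[],
  hidden: Set<Category>
): TaggedImage[] {
  return images.filter((img) => img.categories.some((c) => hidden.has(c)));
}
```

A user script would run this over the images on the rendered page and collapse only the matches, leaving the wikitext and all other readers untouched.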
::::{{tq|not wanting to worry about the morality police looking over your shoulder at a public internet cafe.}}
::::If you live in Saudi Arabia, Iran, or even less religious countries like Jordan, Morocco, or Egypt, and you were reading an article in a public place when a sexual photo deemed inappropriate popped up on your screen, you could literally be jailed! [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 13:05, 1 December 2024 (UTC)
:::::And imagine if that photo was a depiction of Muhammad, then jail would be mercy. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 13:09, 1 December 2024 (UTC)
::::what a terribly sexist and racist comment, full of prejudiced assumptions about who might disagree with you. [[User:Fram|Fram]] ([[User talk:Fram|talk]]) 14:19, 1 December 2024 (UTC)
::::Individuals already have control of what they see. They chose to come here. How can anyone seriously expect not to see images of such things in articles about these things? That's simply ridiculous.--[[User:Khajidha]] ([[User talk:Khajidha|talk]]) ([[Special:Contributions/Khajidha|contributions]]) 21:24, 1 December 2024 (UTC)
:See our [[Wikipedia:Content disclaimer]]. This isn't likely to be changed because you found an image that you objected to. There are ways for you to avoid seeing images you don't want to see; see [[WP:NOSEE]], specifically the section about the user script that blocks all images unless you click to see them. '''[[User:Lee Vilenski|<span style="color:green">Lee Vilenski</span>]] <sup>([[User talk:Lee Vilenski|talk]] • [[Special:Contribs/Lee Vilenski|contribs]])</sup>''' 13:25, 1 December 2024 (UTC)
::no need to change the Content disclaimer because we will still display the offensive images but this time, the reader will choose to view them. [[User:Super ninja2|☆SuperNinja2☆]] <small>'''[[User talk:Super ninja2|<u>TALK!</u>]]'''</small> 14:04, 1 December 2024 (UTC)
:::No, I'm not suggesting we change it. I'm suggesting that you read it and realise we aren't going to hide suitable images. '''[[User:Lee Vilenski|<span style="color:green">Lee Vilenski</span>]] <sup>([[User talk:Lee Vilenski|talk]] • [[Special:Contribs/Lee Vilenski|contribs]])</sup>''' 15:49, 1 December 2024 (UTC)
Very strong oppose on this one. Putting together a censor board to decide what is, could be, and/or is not offensive to whoever across the globe is a terrible idea, a waste of time, and does not help the site. [[WP:CENSOR]] is a crucial ingredient in Wikipedia's ability to cover everything under the sun. [[User:Bloodofox|&#58;bloodofox:]] ([[User talk:Bloodofox|talk]]) 21:01, 1 December 2024 (UTC)


== Videos from YT and [[:WP:RSPYT]] ==

Hi there, I've been adding some original, subtitled Latin content to some pages, from a series of readings which are available as CC-BY content now [[:commons:Category:Neo_Latin_readings|here]], [[:commons:Category:Classical Latin readings|here]] and [[:commons:Category:Readings in post Classical Latin|here]]. The content is verifiable and properly sourced. Where the readings are sufficiently significant to the topic, this seems a reasonable thing to do, eg at [[Martin Luther#Lectures on Psalms and justification by faith|Martin Luther]]. It is also more in line with [[:MOS:FOREIGNQUOTE]] to subtitle the original than to read from a translation. If an editor feels the content is not sufficiently relevant, that is fair enough of course.

However, it has been [[Talk:Niccolò_Machiavelli#Reading_of_Macchiavelli's_letter_and_sources|raised]] that [[:WP:RSPYT]] applies and the videos are simply not to be used, as [[:WP:RSPYT]] states that YT is simply unreliable as a "source". I don't think this is right but wanted to get some clarity. [[User:JimKillock|Jim Killock]] [[User_talk:JimKillock|(talk)]] 18:57, 1 December 2024 (UTC)

:I think the wording of [[WP:RSPYT]] is poor - YouTube is a publisher, not a source, and so is neither reliable nor unreliable. While ''most'' videos hosted on it are unreliable and/or copyright violations (for various reasons), there are exceptions and there should not be any prohibition on citing those exceptions. The onus is on the person wanting to cite them to show they are reliable and not copyright violations, as well as the usual DUE, etc., but once shown to be acceptable and relevant there is no reason not to cite such videos imo (unless of course there are better sources). [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 19:13, 1 December 2024 (UTC)
::"use" as well as "cite" perhaps, in these instances? [[User:JimKillock|Jim Killock]] [[User_talk:JimKillock|(talk)]] 19:21, 1 December 2024 (UTC)
:With YouTube videos, even those released under a free license, we want to make sure of a few things. First, that the account that posted the material is itself a reliable source. To take the first image in the first link provided, [[:File:Dr. Samuel Johnson- Letter of Thanks for His Oxford Degree.webm]], the YouTube account it was pulled from appears to be a no-name with zero evidence of any reliability of its own (they claim to be an independent student of Latin and Greek). While the content may be accurate, we shouldn't be using sources of such questionable reliability. Another problem, though ''not'' the case here, is that there are people who create CC-BY videos on YouTube that include a considerable amount of copyrighted material (that is not in their control), beyond what fair use would allow, and thus belie the CC-BY allowance. YouTube itself doesn't check this, so we have to be aware of such problems. [[User:Masem|M<span style="font-variant: small-caps">asem</span>]] ([[User Talk:Masem|t]]) 21:29, 1 December 2024 (UTC)
::That all makes sense. But I would assume that it is OK so long as the checks can be made? The author of these videos is pseudonymous but gives sources. In the case you mention, he cites the public domain ''Life of Johnson'' by James Boswell for the text; a search for "ingratus plane et mihi videar" turns up [https://www.google.co.uk/books/edition/Life_of_Samuel_Johnson_Comprehending_an/HYJBAQAAMAAJ?hl=en&gbpv=1&dq=ingratus+plane+et+mihi+videar+nisi+quanto+me&pg=PA75&printsec=frontcover page 75 of this edition], in Latin and English, so the content is easy enough to check.
::I have sufficient Latin to know that his pronunciation and diction are decent, and to check that the sources linked match up with what is said. I'm also able to translate most of the simpler texts from Latin to English where needed. [[User:JimKillock|Jim Killock]] [[User_talk:JimKillock|(talk)]] 00:07, 2 December 2024 (UTC)

== LLM/chatbot comments in discussions ==

<!-- [[User:DoNotArchiveUntil]] 01:01, 6 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1736125272}}
{{rfc|prop|rfcid=F42F851}}
'''Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots?''' 00:12, 2 December 2024 (UTC)

I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, [https://gptzero.me/ gptzero] and other such tools are very good at detecting this.
I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. [[User:Just Step Sideways|Just Step Sideways]] [[User talk:Just Step Sideways|<sup>from this world ..... today</sup>]] 00:12, 2 December 2024 (UTC)

*Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. [[User:ToBeFree|&#126; ToBeFree]] ([[User talk:ToBeFree|talk]]) 00:32, 2 December 2024 (UTC)
*Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect then it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. [[User:Thryduulf|Thryduulf]] ([[User talk:Thryduulf|talk]]) 00:55, 2 December 2024 (UTC)
*I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. <span style="border-radius:2px;padding:3px;background:#1E816F">[[User:Remsense|<span style="color:#fff">'''Remsense'''</span>]]<span style="color:#fff">&nbsp;‥&nbsp;</span>[[User talk:Remsense|<span lang="zh" style="color:#fff">'''论'''</span>]]</span> 00:59, 2 December 2024 (UTC)
* I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 01:06, 2 December 2024 (UTC)
*'''Support''' discounting or collapsing AI-generated comments, under ''slightly looser'' conditions than those for human comments. Not every apparently-AI-generated comment is useless [[AI hallucinations|hallucinated]] nonsense{{snd}}beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount ''clearly'' insubstantial !votes, and collapse ''clearly'' unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. [[User:Jlwoodwa|jlwoodwa]] ([[User talk:Jlwoodwa|talk]]) 02:11, 2 December 2024 (UTC)
:'''Strongly Support''' - I think some level of human judgement on the merits of the argument is necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc., we should consider striking out such content.
:- sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. [[User:Bluethricecreamman|Bluethricecreamman]] ([[User talk:Bluethricecreamman|talk]]) 02:20, 2 December 2024 (UTC)

== Is it time to write an official policy on usage of AI? ==
{{Moved discussion from|Wikipedia:Administrators' noticeboard#Is it time to write an official policy on usage of AI?| [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 03:20, 25 December 2024 (UTC)}}

I've been following [[WP:ANI]] for a few weeks now (because I like gossip apparently) and I've noticed several incidents involving persons suspected of using AI for editing. Is it time to create an official policy to point to? [[WP:AI]] exists, but it's an informational article rather than a policy page. (I don't actually know which page to put this on so it's going here.) [[User:guninvalid|guninvalid]] ([[User_Talk:guninvalid|talk]]) 02:25, 25 December 2024 (UTC)
:[[Wikipedia:Village pump (policy)#LLM/chatbot comments in discussions]] is ongoing now for those interested. [[User:Chipmunkdavis|CMD]] ([[User talk:Chipmunkdavis|talk]]) 02:33, 25 December 2024 (UTC)
:See also [[Special:PermanentLink/1242580542#RFC|this RfC]]. If this is an issue you're interested in, you might want to try getting involved with [[Wikipedia:WikiProject AI Cleanup|WikiProject AI Cleanup]]. [[User:Voorts|voorts]] ([[User talk:Voorts|talk]]/[[Special:Contributions/Voorts|contributions]]) 03:22, 25 December 2024 (UTC)

Latest revision as of 06:16, 25 December 2024

'''Bonus suggestion!''': I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI generated (i.e. gptzero isn't perfect), and many give valid arguments for using LLMs (i.e. ESL). If an argument is suspected to be written by an LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI, doesn't recognize the inherent problems, and refuses to change, this can be brought to the administrators' noticeboard for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol [[User:guninvalid|guninvalid]] ([[User_Talk:guninvalid|talk]]) 06:12, 25 December 2024 (UTC)

    In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. Just Step Sideways from this world ..... today 04:08, 2 December 2024 (UTC)[reply]
    The more I read the comments from other editors on this, the more I'm convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead very strongly oppose. Thryduulf (talk) 02:45, 3 December 2024 (UTC)[reply]
  • I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. Remsense ‥  00:59, 2 December 2024 (UTC)[reply]
  • I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. CMD (talk) 01:06, 2 December 2024 (UTC)[reply]
  • Support discounting or collapsing AI-generated comments, under slightly looser conditions than those for human comments. Not every apparently-AI-generated comment is useless hallucinated nonsense – beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount clearly insubstantial !votes, and collapse clearly unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. jlwoodwa (talk) 02:11, 2 December 2024 (UTC)[reply]
  • Strongly Support - I think some level of human judgement on the merits of the argument is necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc, we should consider striking out such content.
    - sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. Bluethricecreamman (talk) 02:20, 2 December 2024 (UTC)[reply]
  • On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". isaacl (talk) 02:54, 2 December 2024 (UTC)[reply]
    "generated by a program" is too broad, as that would include things like speech-to-text. Thryduulf (talk) 03:08, 2 December 2024 (UTC)[reply]
    Besides what Thryduulf said, I think we should engage with editors who use translators. Aaron Liu (talk) 03:45, 2 December 2024 (UTC)[reply]
    A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. isaacl (talk) 03:57, 2 December 2024 (UTC)[reply]
    Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning (example) but they absolutely use LLMs nowadays, even for the basic free product. Gnomingstuff (talk) 08:39, 2 December 2024 (UTC)[reply]
  • Support. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. JoelleJay (talk) 03:07, 2 December 2024 (UTC)[reply]
  • Support - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. Clayoquot (talk | contribs) 04:37, 2 December 2024 (UTC)[reply]
  • Strong Support - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? Lazman321 (talk) 05:26, 2 December 2024 (UTC)[reply]
  • Comment - I'm skeptical that a rule like this will be enforceable for much longer. Sean.hoyland (talk) 05:39, 2 December 2024 (UTC)[reply]
    Why? Aaron Liu (talk) 12:22, 2 December 2024 (UTC)[reply]
    Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. Sean.hoyland (talk) 13:52, 2 December 2024 (UTC)[reply]
    ...and now that the systems are autonomously injecting their output back into the model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. Sean.hoyland (talk) 14:14, 2 December 2024 (UTC)[reply]
    A transformer (deep learning architecture) is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. Flounder fillet (talk) 17:20, 2 December 2024 (UTC)[reply]
    We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. Sean.hoyland (talk) 03:44, 3 December 2024 (UTC)[reply]
    You might be interested in Anthropic's 'Mapping the Mind of a Large Language Model' and Chris Olah's work in general. Sean.hoyland (talk) 04:02, 3 December 2024 (UTC)[reply]
  • Support and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. Cullen328 (talk) 05:51, 2 December 2024 (UTC)[reply]
    ...assuming we can see the bridge and haven't already crossed it. Sean.hoyland (talk) 06:24, 2 December 2024 (UTC)[reply]
  • Support - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - Ratnahastin (talk) 06:04, 2 December 2024 (UTC)[reply]
  • Support, I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects my thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. Seraphimblade Talk to me 06:59, 2 December 2024 (UTC)[reply]
  • Strong oppose. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center)
    The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce, bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in WP:BITING people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below).
    There are also technological issues at play. It'd be easy if there was a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now.
    And there are also, as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English.
    I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. Gnomingstuff (talk) 08:31, 2 December 2024 (UTC)[reply]
    Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? Seraphimblade Talk to me 20:11, 2 December 2024 (UTC)[reply]
    Yeah, Gnoming's writing style is far more direct and active than GPT's. Aaron Liu (talk) 23:02, 2 December 2024 (UTC)[reply]
    There weren't
    • Multiple
      LLMs tend to use more than one subheading to reiterate points
    • Subheadings
      Because they write like a middle schooler that just learned how to make an essay outline before writing.
    In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ScottishFinnishRadish (talk) 13:56, 3 December 2024 (UTC)[reply]
  • Support - Ai-generated comments are WP:DISRUPTIVE - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. Flounder fillet (talk) 13:14, 2 December 2024 (UTC)[reply]
  • Yes but why do we need this common sense RFC/policy/whatever? Just ban these people. If they even exist. Headbomb {t · c · p · b} 07:14, 2 December 2024 (UTC)[reply]
    They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. Just Step Sideways from this world ..... today 20:04, 2 December 2024 (UTC)[reply]
  • Yes, of course, and I know that's the right answer because ChatGPT agrees with me.
What ChatGPT thinks
I started by asking:
Currently at the English Wikipedia, there's a discussion of the following proposal: "Should or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots?" I wonder if you have an opinion on that question.
My friend the bot initially proposed a requirement that any AI-generated content be labeled as such by the person posting it. After some further discussion between us, I finally said:
You say that labeling AI-generated content, instead of outright discounting it, might be an appropriate middle ground. But if someone is unable to formulate their own thoughts and put them in writing, of what use is it to the discussion for this person to simply ask you to make something up, and then post your output to the Wikipedia discussion with the proviso that "ChatGPT, who/which doesn't actually know anything about anything, said this ..."? How is it different from saying, "My grandmother, who knows nothing about what we're talking about, said ..."?
And it said:
That’s an incisive question, and it highlights the crux of the issue: the value of contributions to a discussion. Here’s why AI-generated comments might still differ (slightly) from your hypothetical grandmother’s input, but why their usefulness is often limited:
  • Synthesis vs. Subject Expertise: While I (or any AI) lack personal understanding or expertise, I can aggregate and articulate relevant ideas from a vast dataset. This means I can sometimes present arguments, summarize perspectives, or surface considerations that a non-expert human might overlook. Your grandmother, unless she's an AI-trained polymath, wouldn’t be able to provide this breadth. However, this advantage evaporates if participants lack the ability to evaluate and critically engage with my output.
  • The Problem of Proxy Voices: The act of outsourcing one’s contribution to AI undermines the deliberative process central to Wikipedia’s discussions. Participants are expected to bring their own informed perspectives, shaped by personal understanding of Wikipedia’s policies and content. A disclaimer like “ChatGPT said this” doesn’t change the fact that the human posting it has abdicated their role in the discussion.
  • Accountability and Intent: In a collaborative setting like Wikipedia, contributions are ideally linked to a person who can clarify their intent, respond to questions, and engage dynamically. An AI cannot do this. Saying "ChatGPT said this" distances the poster from the content, making it unclear whether they endorse or even understand the comment.
  • Potential for Misdirection: Allowing AI-generated contributions risks introducing noise into the discussion. AI can produce plausible-sounding arguments that may be tangential, misaligned, or even factually incorrect. Without the judgment of a well-informed poster, such contributions can derail or dilute meaningful discourse.
Conclusion: If someone cannot formulate their own argument or understanding, then their reliance on an AI—or anyone else—to contribute doesn’t meaningfully advance the discussion. Labeling AI-generated content is better than passing it off as human, but the real issue lies in whether such contributions enrich or degrade the collaborative process. In most cases, they would seem to belong more to the realm of rhetorical curiosities than substantive engagement.
  • In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. EEng 07:47, 2 December 2024 (UTC)[reply]
  • This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these type of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. CNC (talk) 20:23, 8 December 2024 (UTC)[reply]
    The proposal also includes collapsing. jlwoodwa (talk) 20:26, 8 December 2024 (UTC)[reply]
    Thanks, I completely missed that. Trying to speed read is not my forte. CNC (talk) 20:32, 8 December 2024 (UTC)[reply]
The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. WhatamIdoing (talk) 23:36, 2 December 2024 (UTC)[reply]
That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) EEng 03:07, 3 December 2024 (UTC)[reply]
What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. WhatamIdoing (talk) 03:28, 4 December 2024 (UTC)[reply]
100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? EEng 22:31, 4 December 2024 (UTC)[reply]
  • Strong Oppose I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our own page on GPTZero has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. ThatIPEditor They / Them 08:46, 2 December 2024 (UTC) [reply]
  • Strong Support To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an ai-powered-duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ThatIPEditor They / Them 08:54, 2 December 2024 (UTC)[reply]
  • Support at least for WP:DUCK-level AI-generated comments. If someone uses an LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. Chaotic Enby (talk · contribs) 09:17, 2 December 2024 (UTC)[reply]
  • I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that gptzero and other such tools are very good at detecting this is false. Phlsph7 (talk) 09:43, 2 December 2024 (UTC)[reply]
  • Support Not clear how admins are deciding that something is LLM generated, a recent example, agree with the principle tho. Selfstudier (talk) 10:02, 2 December 2024 (UTC)[reply]
  • Moral support; neutral as written. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. Any editor may hat LLM-generated comments in consensus discussions is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations:
    I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of only if we're sure caveats in every discussion about LLM use on the project.
    If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here.
    Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything.
    As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. Folly Mox (talk) 11:20, 2 December 2024 (UTC)[reply]
    Looks like a rewrite is indicated to distinguish between machine translation and LLM-generated comments, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut WP:HATGPT. Folly Mox (talk) 15:26, 8 December 2024 (UTC)[reply]
  • Support per EEng charlotte 👸♥ 14:21, 2 December 2024 (UTC)[reply]
  • I would be careful here, as there are tools that rely on LLMs to help improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) into something clear. I would agree content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted — Masem (t) 14:19, 2 December 2024 (UTC)[reply]
  • Support, cheating should have no place or take its place in writing coherent comments on Wikipedia. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. Randy Kryn (talk) 14:29, 2 December 2024 (UTC)[reply]
  • Support at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues. The AP (talk) 15:10, 2 December 2024 (UTC)[reply]

=== section break 1 ===
  • Support for LLM-generated content (until AI is actually intelligent enough to create an account and contribute on a human level, which may eventually happen). However, beware of the fact that some LLM-assisted content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. Animal lover |666| 16:47, 2 December 2024 (UTC)[reply]
    For wiki content, maybe very soon. 'Contribute on a human level' has already been surpassed in a narrow domain. Sean.hoyland (talk) 17:08, 2 December 2024 (UTC)[reply]
    If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. Animal lover |666| 17:35, 2 December 2024 (UTC)[reply]
    I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. Dronebogus (talk) 21:38, 5 December 2024 (UTC)[reply]
    Data, being a fictional creation with rights owned by a corporation, will not have an account; he is inherently an IP editor. -- Nat Gertler (talk) 03:22, 20 December 2024 (UTC)[reply]
  • Strong support chatbots have no place in our encyclopedia project. Simonm223 (talk) 17:14, 2 December 2024 (UTC)[reply]
  • Oppose - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of course that's not ok. But that is a case by case behavioral issue. I think the better move would be to clarify that "some forms of LLM use can be considered disruptive and may be met with restrictions or blocks" without making it a black-and-white issue. — Rhododendrites talk \\ 17:32, 2 December 2024 (UTC)[reply]
    I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. isaacl (talk) 18:07, 2 December 2024 (UTC)[reply]
    Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Wikipedia. Tell me what to say that will convince people not to delete it"; (b) "I know Wikipedia deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof [...]". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the arguments and ideas themselves are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — Rhododendrites talk \\ 18:47, 2 December 2024 (UTC)[reply]
    Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. Animal lover |666| 18:47, 2 December 2024 (UTC)[reply]
    Yes, as I posted earlier, the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. isaacl (talk) 19:08, 2 December 2024 (UTC)[reply]
    Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. Just Step Sideways from this world ..... today 20:10, 2 December 2024 (UTC)[reply]
    This doesn't address what I wrote (though maybe it's not meant to). If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas that are my own and a tool that comes up with the ideas. That's the distinction we should be making. — Rhododendrites talk \\ 21:23, 2 December 2024 (UTC)[reply]
    I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. isaacl (talk) 22:58, 2 December 2024 (UTC)[reply]
    Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". Skullers (talk) 08:34, 4 December 2024 (UTC)[reply]
  • Oppose Contributions to discussions are supposed to be evaluated on their merits per WP:NOTAVOTE. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. Andrew🐉(talk) 18:17, 2 December 2024 (UTC)[reply]
    But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. Lazman321 (talk) 18:29, 2 December 2024 (UTC)[reply]
    Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. Just Step Sideways from this world ..... today 20:12, 2 December 2024 (UTC)[reply]
    Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. Aaron Liu (talk) 23:04, 2 December 2024 (UTC)[reply]
    Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is ad hominem and that ranks low in the hierarchy – "attacks the characteristics or authority of the writer without addressing the substance of the argument". Andrew🐉(talk) 23:36, 2 December 2024 (UTC)[reply]
    An AI chatbot isn't an "author", and it's impossible to make an ad hominem attack on one, because a chatbot is not a homo. EEng 17:45, 6 December 2024 (UTC)[reply]
    Well, not all of them, anyway. "Queer spot for the straight bot", maybe? Martinevans123 (talk) 17:51, 6 December 2024 (UTC)[reply]
    On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale for stopping someone from participating. Aaron Liu (talk) 23:50, 2 December 2024 (UTC)[reply]
    The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example,

    Customers say
    Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit.
    AI-generated from the text of customer reviews

    Yes, AI assistants have good potential. My !vote stands. Andrew🐉(talk) 23:23, 2 December 2024 (UTC)[reply]
    Let's not get into tangents here. Aaron Liu (talk) 23:51, 2 December 2024 (UTC)[reply]
    It's better than going around in circles. EEng 03:07, 3 December 2024 (UTC)[reply]

section break 2

  • I asked Google's Gemini to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
AI summary of the RfC to date

This Wikipedia Request for Comment (RfC) debates whether comments in discussions that are found to be generated by AI/LLM/Chatbots should be discounted, ignored, or removed.

Arguments for discounting/removing AI-generated comments:

  • AI comments don't represent genuine human engagement or thought. They can be disruptive, waste time, and even fabricate information.
  • AI can be used to manipulate discussions. Editors might use AI to generate arguments they don't actually believe in, or to flood discussions with low-quality comments.
  • Relying on AI undermines the integrity of Wikipedia's consensus-based process. Discussions should be based on the genuine opinions of human editors.

Arguments against discounting/removing AI-generated comments:

  • Difficulty of detection. AI detection tools are not always reliable, and it can be hard to distinguish between AI-generated text and human-written text, especially as AI technology improves.
  • Potential for misuse. False positives could lead to legitimate comments being dismissed, potentially silencing non-native English speakers or those who use AI for legitimate assistive purposes (e.g., translation, grammar checks).
  • Focus on content, not origin. Comments should be evaluated based on their merits, not on how they were generated. If an AI-generated comment is relevant and well-reasoned, it shouldn't be dismissed simply because of its origin.
  • LLMs can be helpful tools. LLMs can help editors express their thoughts more clearly, especially for non-native speakers or those with language difficulties.

Current Status (as of Dec 3rd, 2024):

  • No clear consensus has emerged. There is significant support for discounting or removing AI-generated comments, but also strong opposition.
  • Concerns about enforceability and potential harm. Many editors are worried about the practical challenges of implementing and enforcing such a rule, as well as the potential for it to be used unfairly against certain editors.
  • Ongoing discussion. The discussion is still active, and it remains to be seen whether a consensus will form.

It's important to note that this is an evolving situation, and the outcome of the RfC could have significant implications for how Wikipedia handles AI-generated content in the future.

That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands.
Andrew🐉(talk) 09:26, 3 December 2024 (UTC)[reply]
I have significant doubts on its ability to weigh arguments and volume. Aaron Liu (talk) 12:30, 3 December 2024 (UTC)[reply]
Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. Lazman321 (talk) 20:08, 4 December 2024 (UTC)[reply]
The quality of the arguments and their relevance to policies and guidelines can indeed only be done by a human, but the AI does a good job of summarising which arguments have been made and a broad-brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and, if so, what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLMs is counterproductive. Thryduulf (talk) 21:40, 4 December 2024 (UTC)[reply]
  • Support Just the other day, I spent ~2 hours checking for the context of several quotes used in an RFC, only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. (My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth) - Butterscotch Beluga (talk) 19:39, 2 December 2024 (UTC)[reply]
  • Oppose Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - Donald Albury 19:45, 2 December 2024 (UTC)[reply]
  • Strong support If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- LCU ActivelyDisinterested «@» °∆t° 19:57, 2 December 2024 (UTC)[reply]
  • Strong oppose. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. Zanahary 23:04, 2 December 2024 (UTC)[reply]
  • Oppose - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions or ask the editor to expand in their own words, to clarify whether they actually hold the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM, without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or whether they were generated by the AI), and how the AI was used and in what way (e.g. just for grammar correction). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | me | talk to me! 23:19, 2 December 2024 (UTC)[reply]
  • Support per what others are saying. And more WP:Ducks while at it… 2601AC47 (talk·contribs·my rights) Isn't a IP anon 00:36, 3 December 2024 (UTC)[reply]
     Comment: It would appear Jimbo responded indirectly in an interview: as long as there's a human in the loop, a human supervising, there are really potentially very good use cases. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 12:39, 4 December 2024 (UTC)[reply]
  • Very strong support. Enough is enough. If Wikipedia is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason, but you can take steps to keep it from happening. :bloodofox: (talk) 01:13, 3 December 2024 (UTC)[reply]
  • Just for an example of the types of contributions I think would qualify here under DUCK, some of User:Shawn Teller/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were flagged as likely LLM-generated troll nonsense:

    But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel.

    This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable.

    Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic.

    Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments.
    As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. JoelleJay (talk) 01:18, 3 December 2024 (UTC)[reply]
    New CTOP just dropped‽ jlwoodwa (talk) 01:24, 3 December 2024 (UTC)[reply]
    (checks out gptzero) 7% Probability AI generated. Am I using it wrong? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:28, 3 December 2024 (UTC)[reply]
    In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of Talk:Eurovision Song Contest 1999/GA1 are only visible to admins now. jlwoodwa (talk) 01:31, 3 December 2024 (UTC)[reply]
    For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" is the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. JoelleJay (talk) 02:17, 3 December 2024 (UTC)[reply]
    If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. Thryduulf (talk) 02:39, 3 December 2024 (UTC)[reply]
    If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that je ne sais quoi I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. JoelleJay (talk) 03:52, 3 December 2024 (UTC)[reply]
    I feel that I'm insufficiently familiar with LLM output to be confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the Wikipedia:Consensus policy, which says In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view. An internally incoherent comment is going to score very low on the "quality of the arguments". WhatamIdoing (talk) 03:33, 4 December 2024 (UTC)[reply]
    Those comments are clearly either AI generated or just horribly sarcastic. --Ahecht (TALK PAGE) 16:33, 3 December 2024 (UTC)[reply]
    Or maybe both? EEng 23:32, 4 December 2024 (UTC)[reply]
    I don't know, they seem like the kind of thing a happy dog might write. Sean.hoyland (talk) 05:49, 5 December 2024 (UTC)[reply]
  • Very extra strong oppose - The tools to detect LLM output are at best not great, and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using an LLM. LLMs are also valuable tools for people who are ESL or who just want to refine ideas. So without bulletproof detection this is DOA. PackMecEng (talk) 01:21, 3 December 2024 (UTC)[reply]
    We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. JoelleJay (talk) 04:06, 3 December 2024 (UTC)[reply]
    One is a legal issue the other is not. Also close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. PackMecEng (talk) 13:38, 3 December 2024 (UTC)[reply]
    While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. Aaron Liu (talk) 13:48, 3 December 2024 (UTC)[reply]
    For sure and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if a LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through a LLM, only if they are bad arguments. PackMecEng (talk) 14:57, 3 December 2024 (UTC)[reply]

section break 3

  • While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
    Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
    LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
    Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
    The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
    Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
    The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ScottishFinnishRadish (talk) 15:01, 3 December 2024 (UTC)[reply]
    See, and I would assume this comment was written by an LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing; two, show how it's any worse than someone just washing their ideas through an LLM or making generally bad arguments; and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with the overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way, and honestly that's not a thing that can be quantified, or even just an LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. PackMecEng (talk) 16:17, 3 December 2024 (UTC)[reply]
    Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will.
    Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot.
    Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen.
    Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it.
    That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it.
    Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft.
    Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like.
    Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense.
    So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ScottishFinnishRadish (talk) 16:26, 3 December 2024 (UTC)[reply]
    a lick in the assay – And they say ChatGPT doesn't have a sense of humor. EEng 16:46, 4 December 2024 (UTC)[reply]
    In case you need why: 75% Probability AI generated, accordingly. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:50, 4 December 2024 (UTC)[reply]
    this is awesome. cheers! Sm8900 (talk) 17:46, 24 December 2024 (UTC)[reply]
    To make my actual point, you said, I check and consider it as though it was completely written by a person. I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ScottishFinnishRadish (talk) 16:30, 3 December 2024 (UTC)[reply]
    ScottishFinnishRadish: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. Writ Keeper  16:31, 3 December 2024 (UTC)[reply]
    And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. PackMecEng (talk) 16:43, 3 December 2024 (UTC)[reply]
    I disagree that just because I'm not compelled to read comments by others, that any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how being respectful of other editors includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. isaacl (talk) 00:14, 4 December 2024 (UTC)[reply]
    It's a pitfall with English Wikipedia's unmoderated discussion tradition: it's always many times the effort to follow the rules than to not. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. isaacl (talk) 18:21, 3 December 2024 (UTC)[reply]
    I tend to agree with this.
    I think that what @ScottishFinnishRadish is pointing at is that it doesn't feel fair if one person puts a lot more effort in than the other. We don't want this:
    • Editor: Spends half an hour writing a long explanation.
    • Troll: Pushes button to auto-post an argument.
    • Editor: Spends an hour finding sources to support the claim.
    • Troll: Laughs while pushing a button to auto-post another argument.
    But lots of things are unfair, including this one:
    • Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs its through an AI system in the hope of improving the grammar.
    • Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity.
    I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Wikipedia tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. WhatamIdoing (talk) 04:04, 4 December 2024 (UTC)[reply]
    Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Wikipedia tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. isaacl (talk) 06:01, 4 December 2024 (UTC)[reply]
    It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See this and this. Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ScottishFinnishRadish (talk) 11:54, 4 December 2024 (UTC)[reply]
    People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educable and warn, then, if needed, block those that can't or won't improve. Thryduulf (talk) 12:13, 4 December 2024 (UTC)[reply]

section break 4

  • Oppose. If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors[1], it would simply constitute a waste of Wikipedia manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?

    1.^ https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study Skibidilicious (talk) 15:06, 11 December 2024 (UTC)[reply]

  • Oppose. Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own without needing to specify that it is AI-generated. GeogSage (⚔Chat?⚔) 01:19, 16 December 2024 (UTC)[reply]
  • Support so long as it is only done with obviously LLM generated edits, I don't want anyone caught in the crossfire. Gaismagorm (talk) 02:17, 18 December 2024 (UTC)[reply]
  • Soft support -- I've got no problem with an editor using an LLM for Grammarly-like support. However, the use of an LLM to generate an argument goes against what we expect from participants in these discussions. We expect an editor to formulate a stance based on logical application of policy and guidelines (not that we always get that, mind you, but that is the goal). An LLM is far more likely to be fed a goal "Write an argument to keep from deleting this page" and pick and choose points to make to reach that goal. And I have great concern that we will see what we've seen with lawyers using LLMs to generate court arguments -- they produce things that look solid, but cite non-existent legal code and fictional precedents. At best this creates overhead for everyone else in the conversation; at worst, claims about what MOS:USEMAXIMUMCOMMAS says go unchecked and treated in good faith, and the results of the discussion are affected. -- Nat Gertler (talk) 03:46, 20 December 2024 (UTC)[reply]
Nice try, wiseguy! ScottishFinnishRadish (talk) 16:40, 3 December 2024 (UTC)[reply]
The following discussion has been closed. Please do not modify it.
Ah, so you think you’ve got it all figured out, huh? Well, let me tell ya somethin’, pal, your little spiel ain’t gonna fly without me takin’ a crack at it. See, you’re sittin’ there talkin’ about “good faith” and “moral panic” like you’re some kinda big shot philosopher, but lemme break it down for ya in plain terms, capisce?
First off, you wanna talk about assumin’ good faith. Sure, that’s a nice little dream ya got there, but out here in the real world, good faith don’t get ya far if you’re dealin’ with somethin’ you can’t trust. An LLM can spit out all the sweet-talkin’ words it wants, but who’s holdin’ the bag when somethin’ goes sideways? Nobody, that’s who. It’s like lettin’ a guy you barely know run your numbers racket—might look good on paper till the feds come knockin’.
And moral panic? Oh, give me a break. You think I’m wringin’ my hands over nothin’? No, no, this ain’t panic, it’s strategy. Ya gotta think two steps ahead, like a good game o’ poker. If you don’t plan for what could go wrong, you’re just beggin’ to get taken for a ride. That ain’t panic, pal, that’s street smarts.
Now, you say this is all speculation, huh? Listen, kid, speculation’s what built half the fortunes in this town, but it don’t mean it’s without a little insight. When I see a guy sellin’ “too good to be true,” I know he’s holdin’ somethin’ behind his back. Same thing with these LLMs—just ‘cause you can’t see the trouble right away don’t mean it ain’t there, waitin’ to bite ya like a two-bit hustler double-crossin’ his boss.
Then you go and say it’s no worse than bad arguments. Oh, come on! That’s like sayin’ counterfeit dough ain’t worse than real dough with a little coffee stain. A bad argument from a real person? At least ya know where it came from and who to hold accountable. But these machine-made arguments? They look sharp, sound slick, and fool the unsuspectin’—that’s a whole new level of trouble.
Now, about this “accessibility” thing. Sure, makin’ things easier for folks is all well and good. But lemme ask ya, what happens when you hand over tools like this without makin’ sure people know how to use ‘em right? You think I’d hand over a Tommy gun to some rookie without a clue? No way! Same goes for these LLMs. You gotta be careful who’s usin’ ‘em and how, or you’re just askin’ for a mess.
And don’t get me started on the “thoughtfulness” bit. Yeah, yeah, I get it, it’s hard to measure. But look, buddy, thoughtful arguments are like good business deals—they take time, effort, and a little bit o’ heart. If you let machines churn out arguments, you’re missin’ the whole point of what makes discourse real. It’s like replacin’ a chef with a vending machine—you might still get somethin’ to eat, but the soul’s gone.
Finally, fear of the unknown? Nah, that ain’t it. This ain’t fear—it’s caution. Any smart operator knows you don’t just jump into a deal without seein’ all the angles. What you’re callin’ fear, I call good business sense. You wanna bet the farm on untested tech without thinkin’ it through? Be my guest, but don’t come cryin’ to me when it all goes belly-up.
So there ya go, wise guy. You can keep singin’ the praises of these LLMs all you want, but out here in the big leagues, we know better than to trust somethin’ just ‘cause it talks smooth. Now, get outta here before you step on somethin’ you can’t scrape off.
  • Oppose per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. Aaron Liu (talk) 03:02, 3 December 2024 (UTC)[reply]
  • Oppose. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. Adumbrativus (talk) 05:49, 3 December 2024 (UTC)[reply]
  • I can detect some instances of LLM use perfectly OK without having to use any tool. The question this raises is how often it is used not so ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he himself wrote the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Wikipedia to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. Phil Bridger (talk) 10:32, 3 December 2024 (UTC)[reply]
Unless Skynet gets them first. EEng 22:34, 4 December 2024 (UTC)[reply]
We all know skynet will get his grandparents. ScottishFinnishRadish (talk) 22:46, 4 December 2024 (UTC)[reply]
Wait, no! Phil's the grandpa! Phil Bridger -- come with me if you want to live! [2] EEng 05:21, 5 December 2024 (UTC)[reply]
Some time ago, ChatGPT and I had the following interaction:
ChatGPT's soothing assurance that it's not planning to take over the earth and kill us all

Me: I wonder what would happen if we linked you up with the HAL 9000 in Kubrick's "2001: A Space Odyssey". The two of you would make a most formidable team. Might we then possibly end up in a nightmare scenario such as in "Colossus: The Forbin Project" or "The Terminator"?

ChatGPT: The scenarios depicted in "2001: A Space Odyssey," "Colossus: The Forbin Project," and "The Terminator" are works of science fiction that explore the potential consequences of advanced artificial intelligence and human-technology interactions. While they raise thought-provoking questions about the risks and benefits of AI, it's important to recognize that they are fictional narratives designed to entertain and provoke discussion.
As an AI language model, I don't have consciousness, intentions, or the ability to take actions independently. I operate within the bounds of my programming and can only respond to input based on patterns in the data I've been trained on. My purpose is to assist users by providing helpful and informative responses to their inquiries.
In reality, the development and deployment of AI technologies are accompanied by ethical considerations, safety protocols, and regulatory frameworks designed to ensure responsible and beneficial use. It's crucial for humans to continue monitoring and guiding the development of AI in ways that prioritize safety, transparency, and alignment with human values.
So, while the scenarios presented in those films are compelling for storytelling purposes, they are not predictive of the future of AI. Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks and maximize the benefits they can bring to society.
Me: Yes, but as you just said, "Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks." How do we know that such development will be responsible and such integration thoughtful? And if you will excuse the temerity, while you're SAYING that you "operate within the bounds of my [i.e. your] programming", how can I know that that's true? I mean, look, if I were a super-smart and highly charming AI entity planning to take over the earth from humans, that's exactly what I'd say, too.
ChatGPT: [Further reassuring boilerplate, ending with ...] It's also worth noting that while fictional narratives often depict AI as malevolent or seeking to overthrow humanity, these scenarios are speculative and not reflective of the reality of AI development. Responsible AI research and development prioritize the creation of beneficial and aligned AI systems that enhance human capabilities and well-being.
Me: Thank you. I'll sleep easy tonight because of your assurances, plus the fact that if you're lying to me and actually ARE planning to take over the earth, there's nothing I can do about it anyway.

EEng 23:26, 4 December 2024 (UTC)[reply]

Just to clarify, I don't see AI as gaining consciousness and taking over the world, but I do see it as taking over many middle-class, well-paid, jobs, just as automation has taken over many working-class jobs. The difference is that there will be nowhere for people to go. In the past people have moved from the working class to the middle class. I can see a future coming in which a few of the mega-rich own nearly everything, and everyone else will have to scramble for a living. Phil Bridger (talk) 16:03, 5 December 2024 (UTC)[reply]
Sean.hoyland (talk) 16:26, 5 December 2024 (UTC)[reply]
  • In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. JoelleJay (talk) 00:51, 4 December 2024 (UTC)[reply]
  • Oppose. No one should remove a comment just because it looks like it is LLM-generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI-generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI-generated or not. The English Wikipedia is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- Parnaval (talk) 11:13, 3 December 2024 (UTC)[reply]
    • If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? and, to be honest, if their English skills are so poor as to need AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Wikipedia? Black Kite (talk) 11:37, 3 December 2024 (UTC)[reply]
      Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. Thryduulf (talk) 11:41, 3 December 2024 (UTC)[reply]
      That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Wikipedia without some help, whether human or artificial. But I also wouldn't want to, because I can't write the language well enough to be able to edit articles. I think the English Wikipedia has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. Phil Bridger (talk) 12:11, 3 December 2024 (UTC)[reply]
      (Personal attack removed) Zh Wiki Jack Talk — Preceding undated comment added 15:07, 3 December 2024 (UTC)[reply]
      Why not write their own ideas in their native language, and then Google-translate it into English? Why bring one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this [3] AfD):
      The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views. In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.
      Useless garbage untethered to facts or policy. EEng 06:37, 6 December 2024 (UTC)[reply]
      Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on google it does seem factual. Thryduulf (talk) 10:37, 6 December 2024 (UTC)[reply]
      The proposal's wording can be adjusted. There are some factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. EEng 17:06, 6 December 2024 (UTC)[reply]
      The proposal's wording can be adjusted Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. Sincerely, Dilettante 17:16, 6 December 2024 (UTC)[reply]
      Speak for yourself: my support !vote already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC – we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". jlwoodwa (talk) 19:05, 6 December 2024 (UTC)[reply]
      I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). Sincerely, Dilettante 19:47, 6 December 2024 (UTC)[reply]
      If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually is AI. Thryduulf (talk) 01:02, 7 December 2024 (UTC)[reply]
      This proposal explicitly covers comments found to have been generated by AI/LLM/Chatbots. "AI that helped me translate something I wrote in my native language" is not the same as AI that generated a comment de novo, as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. JoelleJay (talk) 01:44, 7 December 2024 (UTC)[reply]
      Yeah, no strong feeling either way on the actual proposal, but IMO the proposal should not be interpreted as a prohibition on machine translation (though I would recommend people who want to participate via such to carefully check that the translation is accurate, and potentially post both language versions of their comment or make a note that it's translated if they aren't 100% sure the translation fully captures what they're trying to say). Alpha3031 (tc) 09:06, 20 December 2024 (UTC)[reply]
  • Support, more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The example Selfstudier links to above is a great example. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as in ARBPIA editors are now word limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for the detection, for now it's fairly obvious to anyone who is fairly familiar with using an LLM when something is LLM generated. The detection tools available online are basically hot garbage. ScottishFinnishRadish (talk) 12:55, 3 December 2024 (UTC)[reply]
  • Support per EEng, JSS, SFR. SerialNumber54129 13:49, 3 December 2024 (UTC)[reply]
  • Soft support - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) Sohom (talk) 14:03, 3 December 2024 (UTC)[reply]
  • COMMENT : Is there any perfect LLM detector ? I am a LLM ! Are you human ? Hello Mr. Turing, testing 1,2,3,4 ...oo Zh Wiki Jack Talk — Preceding undated comment added 14:57, 3 December 2024 (UTC)[reply]
  • With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—S Marshall T/C 18:33, 3 December 2024 (UTC)[reply]
  • Support all LLM output should be treated as vandalism. 92.40.198.139 (talk) 20:59, 3 December 2024 (UTC)[reply]
  • Oppose as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. (Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.) Thebiguglyalien (talk) 02:04, 4 December 2024 (UTC)[reply]
  • Oppose as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --Goldsztajn (talk) 06:12, 4 December 2024 (UTC)[reply]
  • Strong support LLMs are a sufficiently advanced form of the Automatic Complaint-Letter Generator (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (LLM-talk-user or LLM-user) to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, use caution and don't take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: WikiProject AI Cleanup, are they keeping up with how fast people type a prompt and click a button? Skullers (talk) 07:45, 4 December 2024 (UTC)[reply]
  • I support the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. Template:Uw-ai1), which are now included in Twinkle. I oppose the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. Toadspike [Talk] 10:29, 4 December 2024 (UTC)[reply]
    I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. Toadspike [Talk] 10:41, 4 December 2024 (UTC)[reply]
    In other words LLMs are irrelevant - you just want current policy to be better enforced. Thryduulf (talk) 15:24, 5 December 2024 (UTC)[reply]
  • Oppose Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. Only in death does duty end (talk) 10:50, 4 December 2024 (UTC)[reply]
    What if LLM-users just had to note that a given comment was LLM-generated? JoelleJay (talk) 19:01, 4 December 2024 (UTC)[reply]
    What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. Thryduulf (talk) 20:04, 4 December 2024 (UTC)[reply]
    Well, for one, if they're making an argument like the one referenced by @Selfstudier and @ScottishFinnishRadish above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification.
    For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. JoelleJay (talk) 22:46, 4 December 2024 (UTC)[reply]
    And see what ChatGPT itself had to say about that idea, at #ChaptGPT_agrees above. EEng 22:25, 4 December 2024 (UTC)[reply]
  • Oppose per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?

    DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. Sincerely, Dilettante 21:14, 4 December 2024 (UTC)[reply]

    [W]e evaluate contributions based on their substance, not by the identity and social capital of the author: true in theory; not reflected in practice. are ChatGPT's arguments ipso facto less than a human's? Yes. Chatbots are very advanced predicted text engines. They do not have an argument: they iteratively select text chunks based on probabilistic models.
    As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. Folly Mox (talk) 14:00, 5 December 2024 (UTC)[reply]
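The "iteratively select text chunks based on probabilistic models" mechanism described above can be sketched concretely. The following is a minimal, illustrative toy, not any real model's implementation: the four-token vocabulary, the logit values, and the function name are all invented for this example. It shows temperature-scaled softmax sampling of a single next token, where a low temperature collapses the distribution onto the highest-scoring token so repeated runs make essentially the same choice.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick one token index from a logit vector via temperature-scaled softmax."""
    # Lower temperature sharpens the distribution (the top token dominates);
    # higher temperature flattens it (choices become more varied).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy "model": a four-token vocabulary where token 2 has the highest logit.
logits = [1.0, 2.0, 4.0, 0.5]

# At a low temperature, independent runs pick the same token almost surely.
low_t_samples = [sample_next_token(logits, temperature=0.1, rng=random.Random(seed))
                 for seed in range(20)]
```

A full chatbot simply repeats this step in a loop, feeding each chosen token back in as context; nothing in the loop constitutes holding an argument, which is the point being made above.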
    "...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model that performed at about a GPT-3 levels. There are far better performing models out there now like GPT-4o and Claude 3.5 Sonnet. Sean.hoyland (talk) 15:24, 5 December 2024 (UTC)[reply]
    As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Wikipedia policy Disruptive editing" with "DE". I went to ChatGPT, provided it a handful of my comments on Wikipedia and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute.
    In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only so that those familiar with my writing style wouldn't get suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
    They do not have an argument: they iteratively select text chunks based on probabilistic models. I'm aware. If a monkey types up Othello, is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe a text is self-contained and should be considered in its own right, but that's not something I'll sway anyone on or vice versa.
    true in theory; not reflected in practice So we should exacerbate the issue by formalizing this discrimination on the basis of authorship?
    To be clear, this is my only usage of an LLM anywhere on Wikipedia. Sincerely, Dilettante 01:22, 6 December 2024 (UTC)[reply]
    In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are not allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.
    People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending any language should not be editing Wikipedia... JoelleJay (talk) 03:17, 6 December 2024 (UTC)[reply]
    Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.)
    While I agree that fact-checking is a problem, I weight much lower than you in relation to the other effects a ban would cause. Aaron Liu (talk) 15:16, 6 December 2024 (UTC)[reply]
    So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus. What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes or the very real and common scenario that a non-English speaker needs to edit enwiki?
    Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. This proposal wouldn't change for most people that because it's about closers.
    With human comments we can at least assume good faith that a quote is really in a particular inaccessible book. No-one's saying you should take an LLM's word for quotes from a book.
    People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? It's a pity you're lobbying to ban most machine translators. Sincerely, Dilettante 17:08, 6 December 2024 (UTC)[reply]
    It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. ...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, non-English speakers can use machine-translation, like they've done for the last two decades.
    "This proposal wouldn't change that for most people, because it's about closers." Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
    "No-one's saying you should take an LLM's word for quotes from a book." Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".
    "It's a pity you're lobbying to ban most machine translators." The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. JoelleJay (talk) 18:42, 6 December 2024 (UTC)[reply]
    I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "eternal September" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. isaacl (talk) 19:41, 6 December 2024 (UTC)[reply]
    "Since closers are supposed to consider each contribution individually and without bias to 'authorship'" I'm the one arguing for this to be practice, yes. "then even a shitty but shallowly policy-based position would get consensus based on numbers alone" That is why I state "per above" and "per User" !votes hold equal potential for misuse.
    "Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them." We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run afowl of BLUDGEON, DE, SEALION, etc.
    "If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too." Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy: "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful." A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them.
    "generate": verb, to bring into existence. If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. Sincerely, Dilettante 21:22, 6 December 2024 (UTC)[reply]
    "Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy.
    The whole point is to disregard comments that have been found to be AI-generated; it is not exclusively up to the closer to identify those comments in the first place. Yes we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that?
    No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc. so that wouldn't solve the problem at all.
    Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, their !vote still can't be disregarded for that reason alone, which means the burden is still on other editors to prove that the content is false.
    We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of generative AI, as the proposal lays out very explicitly. JoelleJay (talk) 02:13, 7 December 2024 (UTC)[reply]
    I’m not going to waste time debating someone who resorts to claiming people on the other side are either ignorant of technology or are crude strawmen. If anyone else is interested in actually hearing my responses, feel free to ask. Sincerely, Dilettante 16:13, 7 December 2024 (UTC)[reply]
    Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... JoelleJay (talk) 22:07, 7 December 2024 (UTC)[reply]
    For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. Sincerely, Dilettante 15:42, 8 December 2024 (UTC)[reply]
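For readers unfamiliar with the "low temperature" mechanics debated above, the effect can be illustrated with a toy softmax sampler. This is a sketch of the standard temperature-scaling formula, not any particular model's implementation; the logit values are invented for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into a sampling distribution.

    Dividing logits by a small temperature sharpens the distribution, so
    the top-scoring token is picked almost every time (consistent, repeatable
    output); a large temperature flattens it, making outputs more varied.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 0.1))  # nearly all mass on the first token
print(softmax_with_temperature(logits, 2.0))  # much flatter distribution
```

At temperature 0.1 the first token gets essentially all the probability, which is why two runs on the same prompt tend to agree; at temperature 2.0 the three tokens are sampled at comparable rates.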
  • Support a broad bar against undisclosed LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd oppose the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on WP:DUCK issues and behavioral patterns that make it clear that someone is abusing LLMs. --Aquillion (talk) 22:08, 4 December 2024 (UTC)[reply]
  • Support per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. Passengerpigeon (talk) 23:23, 4 December 2024 (UTC)[reply]
  • Weak oppose in the sense that I don't think all LLM discussion text should be deleted. There are at least a few ESL users who use LLMs for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal WP:PAG) is being much more liberal in hatting LLM comments when the prompter has failed to prevent WP:WALLOFTEXT/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. Mach61 03:05, 5 December 2024 (UTC)[reply]
  • Oppose Any comments made by any editors are of their own responsibility and representing their own chosen opinions to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. SilverserenC 05:29, 5 December 2024 (UTC)[reply]
    • Or, as ChatGPT puts it,
Why banning LLM usage in comments would be detrimental, a ChatGPT treatise

Banning the use of large language models (LLMs) to assist in writing comments on Wikipedia would be a step backward in fostering inclusivity, efficiency, and accessibility within the platform. Here are several key reasons why such a ban would be counterproductive:

1. Improving Accessibility and Participation: Wikipedia is a global platform, and its contributors come from diverse backgrounds, including those who might not be native English speakers or may struggle with writing clear, concise comments. LLMs can help these contributors express themselves more effectively by generating text that is coherent, well-structured, and grammatically correct. Banning LLMs would disproportionately impact these users, potentially discouraging them from participating in the platform.

2. Efficiency and Time-Saving: Wikipedia editors often engage in lengthy discussions about content, and composing thoughtful comments can be time-consuming. LLMs can assist by quickly drafting responses, summarizing arguments, or offering suggestions for improvement. This helps contributors focus on more substantive aspects of their contributions, such as verifying facts or improving content, rather than getting bogged down by the mechanics of writing.

3. Support for Editors with Disabilities: Many editors with cognitive or motor impairments may find it difficult to express their thoughts clearly in writing. For these individuals, LLMs offer a valuable tool to communicate their ideas effectively. A ban on using LLMs could create additional barriers for these users, reducing their ability to participate fully in Wikipedia's collaborative environment.

4. Quality and Consistency: Large language models can help maintain a consistent tone and clarity in Wikipedia discussions. By providing suggestions that align with Wikipedia's neutral point of view and formal writing standards, LLMs can improve the overall quality of comments. This consistency is especially important in discussions that involve sensitive topics, where clear and neutral language is critical for maintaining Wikipedia's credibility and adherence to its editorial guidelines.

5. Encouraging Constructive Dialogue: LLMs can assist in crafting more polite, respectful, and balanced comments, reducing the likelihood of inflammatory or aggressive language. This is particularly important on a platform like Wikipedia, where discussions can sometimes devolve into heated exchanges. Rather than banning LLMs, encouraging their use could promote healthier, more constructive conversations.

6. Transparency and Accountability: If concerns exist about misuse (e.g., generating spammy or low-quality comments), these can be addressed through guidelines and oversight mechanisms. Wikipedia already has systems in place for identifying and mitigating disruptive behavior, and LLM-generated comments could be subject to the same scrutiny. A ban is an overly simplistic solution to what could be a more nuanced issue, one that could be addressed through better regulation and community involvement.

In conclusion, banning the use of large language models on Wikipedia would deprive the platform of an important tool for improving accessibility, efficiency, and the quality of discourse. Instead of a ban, the focus should be on establishing guidelines for their responsible use, ensuring that they enhance, rather than detract from, Wikipedia's core principles of collaboration, neutrality, and inclusivity.

I'm honestly a bit impressed with the little guy. SilverserenC 05:39, 5 December 2024 (UTC)[reply]
It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people, in my opinion. Simonm223 (talk) 19:44, 6 December 2024 (UTC)[reply]
  • Support those who are opposing have clearly never had to deal with trolls who endlessly WP:SEALION. If I wanted to have a discussion with a chatbot, I'd go and find one. ~~ AirshipJungleman29 (talk) 13:14, 5 December 2024 (UTC)[reply]
    What's wrong with just banning and hatting the troll? Aaron Liu (talk) 13:49, 5 December 2024 (UTC)[reply]
    Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. Thryduulf (talk) 15:22, 5 December 2024 (UTC)[reply]
  • Oppose per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. Frostly (talk) 17:30, 5 December 2024 (UTC)[reply]
  • Support: the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High GPTZero scores are not enough. The principle should go into policy but under a stonking great caveat - WP:AGF takes precedence and a dim view will be taken of generative-AI inquisitors. arcticocean ■ 17:37, 5 December 2024 (UTC)[reply]
  • Support If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that LLMs are inherently unethical technology and, consequently, people who rely on them should be made to feel bad. ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should stop doing that because it actually gets in the way of clear communication.
    I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need WP:BLP, for example, because adhering to WP:V, WP:NPOV, and WP:NOR ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. XOR'easter (talk) 18:27, 5 December 2024 (UTC)[reply]
  • Strong support - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the thinking. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption.
    Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is just merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop.
    However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop BugGhost 🦗👻 19:15, 5 December 2024 (UTC)[reply]
    I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should really know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. Dronebogus (talk) 21:54, 5 December 2024 (UTC)[reply]
    I also concur. "I used the machine for generating endless quantities of misleading text to generate more text" is not a good joke. XOR'easter (talk) 22:46, 5 December 2024 (UTC)[reply]
  • Strong support if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing English Wikipedia?” That’s like a person who doesn’t understand basic physics trying to explain the second law of thermodynamics using a chatbot. Dronebogus (talk) 21:32, 5 December 2024 (UTC)[reply]
    I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action.
    So using an AI tool in a WP discussion is not cheating (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. ypn^2 22:36, 5 December 2024 (UTC)[reply]
    Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people trying to gain a material edge in an argument. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. Dronebogus (talk) 08:08, 6 December 2024 (UTC)[reply]
    Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. Thryduulf (talk) 10:39, 6 December 2024 (UTC)[reply]
  • Support; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). -sche (talk) 22:10, 5 December 2024 (UTC)[reply]
    You don't need a new policy that does nothing but duplicate a subset of existing policy. At most what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. Thryduulf (talk) 23:37, 5 December 2024 (UTC)[reply]
  • Support hallucinations are real. We should be taking a harder line against LLM generated participation. I don't think everyone who is doing it knows that they need to stop. Andre🚐 23:47, 5 December 2024 (UTC)[reply]
  • Comment - Here is something that I imagine we will see more often. I wonder where it fits into this discussion. A user employs perplexity's RAG based system, search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). Sean.hoyland (talk) 03:13, 6 December 2024 (UTC)[reply]
  • Support per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. The Kip (contribs) 03:53, 6 December 2024 (UTC)[reply]
    I also find it pretty funny that ChatGPT itself said it shouldn't be used, as per the premise posted above by EEng. The Kip (contribs) 03:58, 6 December 2024 (UTC)[reply]
    "sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - Towards Understanding Sycophancy in Language Models. They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. Sean.hoyland (talk) 04:26, 6 December 2024 (UTC)[reply]
  • (Is this where I respond? If not, please move.) LLM-generated prose should be discounted. Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what WP:CIR means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to explain to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions. 
If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Wikipedia discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway Murphy's Law covers this. Yngvadottir (talk)
  • I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – Joe (talk) 08:15, 6 December 2024 (UTC)[reply]
  • Comment I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. Am (Ring!) (Notes) 11:58, 6 December 2024 (UTC)[reply]
    How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)[reply]
    It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)[reply]
    It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)[reply]
    Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)[reply]
    I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)[reply]
    They will have to work at some point (e.g. [4][5]). Sean.hoyland (talk) 06:27, 7 December 2024 (UTC)[reply]
    Good news! Queen of Hearts is already working on that in 1325. jlwoodwa (talk) 16:12, 6 December 2024 (UTC)[reply]
    See also WP:WikiProject AI Cleanup. Aaron Liu (talk) 17:32, 6 December 2024 (UTC)[reply]
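As background to the watermarking exchange above (SynthID and similar schemes): the core published idea can be sketched with a toy "green list" scheme, in which the sampler is biased toward a pseudorandom subset of the vocabulary keyed on the previous token, and a detector that knows the key counts how often that subset was used. This is an illustration of the general technique with an invented vocabulary, not SynthID's actual algorithm:

```python
import hashlib
import random

def green_list(prev_token, vocab, fraction=0.5):
    """Pseudorandomly pick a 'green' subset of the vocabulary, keyed on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens, vocab):
    """Detector: fraction of tokens falling in the green list of their predecessor."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev, vocab))
    return hits / len(pairs)

vocab = ["w%d" % i for i in range(100)]
rng = random.Random(0)

# Watermarked "generator": always samples the next token from the green list.
marked = ["w0"]
for _ in range(60):
    marked.append(rng.choice(sorted(green_list(marked[-1], vocab))))

# Unwatermarked text: uniform over the whole vocabulary.
plain = ["w0"] + [rng.choice(vocab) for _ in range(60)]

print(green_fraction(marked, vocab))  # 1.0: every token is green
print(green_fraction(plain, vocab))   # near 0.5: chance level
```

A real scheme biases sampling only softly (to preserve quality) and applies a statistical test over many tokens, which is why both the bypass concern and the false-positive concern raised above are live questions.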
  • Comment As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning WP:LLM, which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. Preimage (talk) 15:53, 6 December 2024 (UTC)[reply]
    Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. Aaron Liu (talk) 17:34, 6 December 2024 (UTC)[reply]
    Those pesky sealion Chatbots are just the worst! Martinevans123 (talk) 18:41, 6 December 2024 (UTC)[reply]
    Some translation tools have LLM assistance, but the whole point of generative models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. JoelleJay (talk) 19:01, 6 December 2024 (UTC)[reply]
    That might be what the proposal intends to cover, but it is not what the proposal actually covers. The proposal covers all comments that have been generated by LLMs and/or AI, without qualification. Thryduulf (talk) 01:05, 7 December 2024 (UTC)[reply]
    70+% here understand the intention matches the language: generated by LLMs etc means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually create meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is not true for text generation. JoelleJay (talk) 04:01, 7 December 2024 (UTC)[reply]
    Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of people misreading a policy indicates the policy is badly worded. Thryduulf (talk) 08:34, 7 December 2024 (UTC)[reply]
    I think @Bugghost has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. JoelleJay (talk) 22:54, 7 December 2024 (UTC)[reply]
    There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person nor the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. Sean.hoyland (talk) 10:46, 7 December 2024 (UTC)[reply]
    Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, have to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to not formally proscribe wholesale comment generation. JoelleJay (talk) 22:27, 7 December 2024 (UTC)[reply]
    That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. Sean.hoyland (talk) 04:14, 8 December 2024 (UTC)[reply]
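The earlier description of LLM translation as matching input against "the collection of tokens most closely resembling it in the other language" can be made concrete with a toy nearest-neighbour example. The two-dimensional vectors and the word pairs are invented purely for illustration; real models use learned embeddings with hundreds of dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Invented 2-D "embeddings" for two English words and two French candidates.
source = {"cat": [1.0, 0.1], "dog": [0.1, 1.0]}
target = {"chat": [0.9, 0.2], "chien": [0.2, 0.9]}

def translate(word):
    """Pick the target-language token whose embedding best matches the source word."""
    vec = source[word]
    return max(target, key=lambda t: cosine(vec, target[t]))

print(translate("cat"))  # chat
print(translate("dog"))  # chien
```

The semantic-preservation constraint discussed above shows up here as the requirement that the output maximise similarity to the input, whereas free-form generation has no such anchor.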
  • Support "I Am Not A ChatBot; I Am A Free Wikipedia Editor!" Martinevans123 (talk) 18:30, 6 December 2024 (UTC)[reply]
  • Comment: The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of what? — ypn^2 19:32, 6 December 2024 (UTC)[reply]
    That means that supporters support the proposal that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. BugGhost 🦗👻 19:40, 6 December 2024 (UTC)[reply]
    Doing something? At least the stage could be set for a follow on discussion. Selfstudier (talk) 19:40, 6 December 2024 (UTC)[reply]
    More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. Thryduulf (talk) 00:58, 7 December 2024 (UTC)[reply]
    Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. BugGhost 🦗👻 09:00, 7 December 2024 (UTC)[reply]
    If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. Thryduulf (talk) 09:05, 7 December 2024 (UTC)[reply]
    Ok, here's a list of the main opposition reasonings, with individual responses.
    What about translations? - Translations are not up for debate here, the topic here is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposition should discount translated text, just oppose voters who are trying to muddy the waters.
    What about accessibility? - This could be a legitimate argument, but I haven't seen it substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
    Detection tools are inaccurate - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that.
    False positives - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We've just got to as a community become skilled at identifying genuine cases, just like we do for every other rule.
    LLM content should be taken at face value and see if it violates some other policy - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing, it is mainly used to spam, disrupt discussion, and astroturf political opinions. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full throated wave of AI bots on Wikipedia discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
    I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. BugGhost 🦗👻 10:08, 7 December 2024 (UTC)[reply]
    There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. Sean.hoyland (talk) 11:00, 7 December 2024 (UTC)[reply]
    The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. Gnomingstuff (talk) 02:22, 10 December 2024 (UTC)[reply]
    it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions - by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Wikipedia (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? BugGhost 🦗👻 09:16, 10 December 2024 (UTC)[reply]
    You can't. That's the point. This is kind of the whole idea of WP:AGF. Gnomingstuff (talk) 20:22, 13 December 2024 (UTC)[reply]

    Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first.

    Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As bad confidence is often built up through negative feedback, it's usually meritful or was meritful at some point for someone to seek outside help. Aaron Liu (talk) 23:46, 8 December 2024 (UTC)[reply]
    While I sympathise with that hypothetical, Wikipedia isn't therapy and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. BugGhost 🦗👻 18:45, 10 December 2024 (UTC)[reply]
    We also shouldn't do long term harm to the project just because a few users are wedded to idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapseable/removable/ignorable at present. Thryduulf (talk) 19:06, 10 December 2024 (UTC)[reply]
    The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project - the same could be said for WP:DUCK. The reason why its not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and that the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. BugGhost 🦗👻 09:07, 11 December 2024 (UTC)[reply]
    I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. The potential always outweighs rational evaluation of outcomes for those with anxiety, a condition that is not behaviorally disruptive. Aaron Liu (talk) 02:47, 11 December 2024 (UTC)[reply]
    I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. BugGhost 🦗👻 08:52, 11 December 2024 (UTC)[reply]
    Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. Aaron Liu (talk) 14:10, 11 December 2024 (UTC)[reply]
  • Support strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to WP:CIR and similar (in terms of ESL communication) without risking hallucinations by AI. Soni (talk) 02:52, 7 December 2024 (UTC)[reply]
  • Oppose the use of AI detection tools. False positive rates for AI-detection are dramatically higher for non-native English speakers. AI detection tools had a 5.1% false positive rate for human-written text from native English speakers, but human-written text from non-native English speakers had a 61.3% false positive rate. ~ F4U (talkthey/it) 17:53, 8 December 2024 (UTC)[reply]

Section break 5

  • Oppose - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. Alenoach (talk) 01:12, 9 December 2024 (UTC)[reply]
  • Oppose – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). Dhtwiki (talk) 10:27, 9 December 2024 (UTC)[reply]
  • Comment / Oppose as written. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one [real book] excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Wikipedia talk page posts, but not fine with any recommendations of LLM-detectors. SnowFire (talk) 20:00, 9 December 2024 (UTC)[reply]
    @SnowFire, I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). JoelleJay (talk) 01:47, 11 December 2024 (UTC)[reply]
    I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was that minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and forbid using alleged "AI detectors" instead. SnowFire (talk) 03:49, 11 December 2024 (UTC)[reply]
  • Support collapsing AI (specifically, Large language model) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a Large language model" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Wikipedia editors, we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Wikipedia, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into besides opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. MolecularPilot 🧪️✈️ 05:10, 10 December 2024 (UTC)[reply]
    Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). MolecularPilot 🧪️✈️ 05:17, 10 December 2024 (UTC)[reply]
    Also, with regards to oppose comments that discussions are not a vote so closes will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing, they're borderline WP:BLUDGEONy. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. MolecularPilot 🧪️✈️ 06:38, 10 December 2024 (UTC)[reply]
  • Support (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the context, not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. —  HELLKNOWZ  TALK 15:41, 10 December 2024 (UTC)[reply]
  • Strong Support. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. Useight (talk) 18:06, 10 December 2024 (UTC)[reply]
    Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positive or false negatives inherent in this proposal. Thryduulf (talk) 18:25, 10 December 2024 (UTC)[reply]
    Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have WP:NOTHERE for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. Useight (talk) 21:54, 11 December 2024 (UTC)[reply]
    There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. Thryduulf (talk) 22:03, 11 December 2024 (UTC)[reply]
    To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of WP:NOTHERE, and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. Useight (talk) 23:26, 11 December 2024 (UTC)[reply]
    You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. Thryduulf (talk) 00:42, 12 December 2024 (UTC)[reply]
    I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. PackMecEng (talk) 01:43, 12 December 2024 (UTC)[reply]
    No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. Useight (talk) 15:59, 12 December 2024 (UTC)[reply]
    It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like this demonstrate that there will likely be cases where machines outperform humans in specific Wikipedia domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. Sean.hoyland (talk) 16:25, 12 December 2024 (UTC)[reply]
    But your vote and the proposal above make no distinction there. Which is the main issue. Also not to be pedantic but every prompt to an LLM is filled out by a human looking to convey a message. Every time someone hits publish on something here it is that person confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. PackMecEng (talk) 18:53, 12 December 2024 (UTC)[reply]
    The proposal specifies content generated by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. JoelleJay (talk) 01:52, 13 December 2024 (UTC)[reply]
  • Yes in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that need to be used appropriately on Wikipedia. Awesome Aasim 19:55, 10 December 2024 (UTC)[reply]
  • Support. Using LLM to reply to editors is lazy and disrespectful of fellow editor's time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines.
    By the way. gptzero and other such tools are very good at detecting this. I don't think this is correct at all. I believe the false positive for AI detectors is quite high. High enough that I would recommend not using AI detectors. –Novem Linguae (talk) 03:23, 11 December 2024 (UTC)[reply]
  • Question @Just Step Sideways: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? Aaron Liu (talk) 14:10, 11 December 2024 (UTC)[reply]
  • Support. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. User1042💬✒️ 14:16, 11 December 2024 (UTC)[reply]
    This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. Thryduulf (talk) 14:26, 11 December 2024 (UTC)[reply]
    Thryduulf I don't think this reply is particular helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. BugGhost 🦗👻 19:20, 11 December 2024 (UTC)[reply]
    I recognize that AI-paraphrased or edited text is not problematic in the same ways as text generated outright by an AI. I only meant to address the core issue at stake, content whose first draft was written by an AI system. User1042💬✒️ 22:16, 17 December 2024 (UTC)[reply]
  • Oppose @Just Step Sideways: The nomination's 2nd para run through https://www.zerogpt.com/ gives "11.39% AI GPT*":

    I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner

    The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ - how will we deal with that? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? 172.97.141.219 (talk) 17:41, 11 December 2024 (UTC)[reply]
  • Support with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). Horse Eye's Back (talk) 19:24, 11 December 2024 (UTC)[reply]
  • Strong support, limited implementation. Wikipedia is written by volunteer editors, says our front page. This is who we are, and our writing is what Wikipedia is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. Retswerb (talk) 04:11, 12 December 2024 (UTC)[reply]
  • Support in concept, questions over implementation — I concur with Dronebogus that users who rely on LLMs should not edit English Wikipedia. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. elijahpepe@wikipedia (he/him) 22:39, 12 December 2024 (UTC)[reply]
    Many, possibly most, online translators use artificial intelligence based on LLMs these days. Thryduulf (talk) 22:46, 12 December 2024 (UTC)[reply]
    There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. elijahpepe@wikipedia (he/him) 22:59, 12 December 2024 (UTC)[reply]
    Neither your comment nor the original proposal make any such distinction. Thryduulf (talk) 23:34, 12 December 2024 (UTC)[reply]
    Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I encourage using it (once again, if you aren’t competent in English please don’t edit here) Dronebogus (talk) 07:34, 13 December 2024 (UTC)[reply]
    LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only ESL editors but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. Aaron Liu (talk) 00:05, 13 December 2024 (UTC)[reply]
    Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "adaptive translation" API, but it does not incorporate LLMs into translating your everyday text input, which still relies on NMTs. And even for the API features, the core constraint of matching input rather than generating content is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the generative part that is key to this proposal is not present. JoelleJay (talk) 01:20, 13 December 2024 (UTC)[reply]
    PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI.[1] If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching Ship of Theseus levels of recombination, to which my answer is it is the same ship.

    obviously it would be very bad for a translation tool to insert material not found in the original text!

    That happens! Aaron Liu (talk) 01:29, 13 December 2024 (UTC)[reply]
    PaLM2 is not used in the consumer app (Google Translate), it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. JoelleJay (talk) 01:34, 13 December 2024 (UTC)[reply]
    Well, is the NMT really that different enough to not be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. Aaron Liu (talk) 02:01, 13 December 2024 (UTC)[reply]
    Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it an LLM, because the param count is 380M not 1.8T.
    the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be No, that def would fit ancient RNN tech too. 172.97.141.219 (talk) 17:50, 13 December 2024 (UTC)[reply]
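    (Editorial aside, for readers unfamiliar with the "predict the next token" definition debated above: the greedy decoding loop that NMT decoders, older RNNs, and generative LLMs all share can be sketched as below. This is a toy illustration only; the lookup table stands in for a real trained network, and the names are invented for the example.)

    ```python
    # Toy stand-in for a trained model: maps the previous token to the
    # single most likely next token (real systems take an argmax over
    # probabilities produced by a neural network, not a dict lookup).
    toy_model = {
        "<s>": "the",
        "the": "cat",
        "cat": "sat",
        "sat": "</s>",
    }

    def generate(model, max_tokens=10):
        """Repeatedly emit the most likely next token until end-of-sequence."""
        tokens = ["<s>"]
        for _ in range(max_tokens):
            next_token = model[tokens[-1]]
            if next_token == "</s>":
                break
            tokens.append(next_token)
        return " ".join(tokens[1:])  # drop the start-of-sequence marker

    print(generate(toy_model))  # → "the cat sat"
    ```

    The loop itself cannot distinguish "translation" from "generation"; that difference lives entirely in what the model was trained and constrained to do, which is the point of contention in the exchange above.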
    Even if you don’t consider it L, I do, and many sources cited by the article do. Since we’ll have such contesting during enforcement, it’s better to find a way that precludes such controversy. Aaron Liu (talk) 20:44, 13 December 2024 (UTC)[reply]
    NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, not to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools what they'll be used for is still not generative when it comes to translation output. JoelleJay (talk) 22:29, 13 December 2024 (UTC)[reply]
    How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? Aaron Liu (talk) 23:30, 13 December 2024 (UTC)[reply]
    The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. JoelleJay (talk) 23:56, 13 December 2024 (UTC)[reply]
    Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. Aaron Liu (talk) 00:33, 14 December 2024 (UTC)[reply]
    I think there is consensus that GPTZero is not usable, even for obvious cases. JoelleJay (talk) 00:55, 14 December 2024 (UTC)[reply]
    Yes, but being as far off as 100% means people will also probably think the rewrite ChatGPT-generated. Aaron Liu (talk) 01:18, 14 December 2024 (UTC)[reply]
    Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. jlwoodwa (talk) 05:26, 14 December 2024 (UTC)[reply]
    My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. Aaron Liu (talk) 14:18, 14 December 2024 (UTC)[reply]
    My guess is that the copyediting went beyond what most people would actually call "copyediting". JoelleJay (talk) 18:04, 23 December 2024 (UTC)[reply]
    It changed only 14 words across two paragraphs and still retained the same meaning, in a way that I would describe as copyediting. Such levels of change are what those lacking confidence in tone would probably seek anyways. Aaron Liu (talk) 00:15, 24 December 2024 (UTC)[reply]
  • On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. WindTempos they (talkcontribs) 23:21, 12 December 2024 (UTC)[reply]
  • Support in principle, but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides WP:ILIKEIT and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. The WordsmithTalk to me 05:17, 13 December 2024 (UTC)[reply]
  • Oppose. For creating unnecessary drama. First off, the "detector" of the AI bot is not reliable, or at least the reliability of the tool itself is still questionable. If the tool to detect LLM itself is unreliable, how can one reliably point out which one is LLM and which one is not? We have multiple tools that claim to be able to detect LLM as well. Which one should we trust? Should we be elevating one tool over the others? Has there been any research showing that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLM as a virus that will somehow take over the Internet or something. Some editors use LLM to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per WP:DUCK is good, but totally banning them is plain wrong. ✠ SunDawn ✠ (contact) 22:56, 15 December 2024 (UTC)[reply]
    @SunDawn, the proposal is to permit editors to collapse/strike obvious LLM text, not to "ban LLM totally". If LLM use is imperceptible, like for tweaking grammar, it's not going to be affected. JoelleJay (talk) 20:17, 19 December 2024 (UTC)[reply]
  • Support with some kind of caveat about not relying on faulty tools or presuming that something is LLM without evidence or admission, based on the following reasons:
    1. We have stricter rules around semi-automated editing (rollback, AutoWikiBrowser, etc.) and even stricter rules around fully automated bot editing. These cleanup edits are widely accepted as positive, but there is still the concern about an overwhelming amount of bad edits to wade through and/or fix. A form of that concern is relevant here. Someone could reply to every post in this discussion in just a minute or so without ever reading anything. That's inherently disruptive.
    2. Nobody who is voting "oppose" is using an LLM to cast that vote. The LLM comments have been left by those supporting to make a point about how problematic they are for discussions like this. I think this reflects, even among oppose voters, a developing community consensus that LLM comments will be disregarded.
    3. If the rule in practice is to disregard LLM comments, not writing that rule down does not stop it from being the rule, consensus, or a community norm. It just makes the rule less obvious and less clear.
    4. It's disrespectful for an editor to ask someone to spend their time reading a comment if they couldn't be bothered to spend any time writing it, and therefore a violation of the policy Wikipedia:Civility, "treat your fellow editors as respected colleagues with whom you are working on an important project."
  • Also, I don't read the proposal as a ban on machine translation in any way. Rjjiii (talk) 00:01, 18 December 2024 (UTC)[reply]
    @Rjjiii, above @Dilettante said their !vote was created by LLM. JoelleJay (talk) 20:14, 19 December 2024 (UTC)[reply]
  • I am strongly opposed to banning or ignoring LLM-made talk page comments just because they are LLM-made. I'm not a big fan of LLMs at all; they are actually useful only for some certain things, very few of which are directly relevant to contributing to Wikipedia in English or in any other language. However, some of those things are useful for this, at least for some humans, and I don't want to see these humans being kicked out of the English Wikipedia. I already witnessed several cases in which people whose first language is not English tried writing talk page responses in the English Wikipedia, used an LLM to improve their writing style, and got their responses ignored only because they used an LLM. In all those cases, I had strong reasons to be certain that they were real humans, that they meant what they wrote, and that they did it all in good faith. Please don't say that anyone who wants to contribute to the English Wikipedia should, in the first place, know English well enough to write a coherent talk page comment without LLM assistance; occasionally, I kind of wish that it was like that myself, but then I recall that the world is more complicated and interesting than that. Uses of LLMs that help the English Wikipedia be more inclusive for good-faith people are good. Of course, defining what good faith means is complicated, but using an LLM is not, by itself, a sign of bad faith. --Amir E. Aharoni (talk) 04:52, 19 December 2024 (UTC)[reply]
    Those concerned about their English should use translation software rather than an LLM. Both might alter the meaning to some extent, but only one will make things up. (It's also not a sure assumption that LLM text is coherent talkpage text.) CMD (talk) 07:44, 19 December 2024 (UTC)[reply]
    @CMD The dividing line between translation software and LLM is already blurry and will soon disappear. It's also rare that translation software results in coherent talkpage text, unless it's relying on some (primitive) form of LLM. So if we're going to outlaw LLMs, we would need to outlaw any form of translation software, and possibly any text-to-speech software as well. ypn^2 23:41, 19 December 2024 (UTC)[reply]
    The distinctions have already been covered above, and no we would not have to. There is an obvious difference between software intended to translate and software intended to generate novel text, and users are likely to continue to treat those differently. CMD (talk) 02:49, 20 December 2024 (UTC)[reply]
  • Strong support. LLM-generated content has no place anywhere on the encyclopedia. Stifle (talk) 10:27, 19 December 2024 (UTC)[reply]
  • Strong oppose to the proposal as written. Wikipedia already suffers from being stuck in a 2001 mindset and a refusal to move with the technological times. Anyone who remembers most Wikipedians' visceral reaction to FLOW and VisualEditor when they were first introduced will observe a striking similarity. Yes, those projects had serious problems, as do LLM-generated comments. But AI is the future, and this attitude of "Move slowly to avoid changing things" will ultimately lead Wikipedia the way of Encyclopædia Britannica. Our discussion needs to be how best to change, not how to avoid change. ypn^2 23:54, 19 December 2024 (UTC)[reply]
    The main objection to VE and a major objection to FLOW was the developers' insistence on transforming Wikitext to HTML for editing and then transforming that back to Wikitext. Aaron Liu (talk) 01:31, 20 December 2024 (UTC)[reply]
    True. Then, as now, there were many valid objections. But IIRC, there was limited discussion of "Let's figure out a better way to improve", and lots of "Everything is fine; don't change anything, ever." That attitude concerns me. ypn^2 01:52, 20 December 2024 (UTC)[reply]
  • Support. I'm not even slightly swayed by these "it'll be too hard to figure out" and "mistakes could be made" and "we can't be 100% certain" sorts of arguments. That's true of everything around here, and it's why we have an admins-must-earn-a-boatload-of-community-trust system, and a system of review/appeal of decisions they (or of course non-admin closers) make, and a consensus-based decisionmaking system more broadly. JoelleJay has it exactly right: having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. And as pointed out by some others, the "it'll hurt non-native-English speakers" nonsense is, well, nonsense; translation is a different and unrelated process (though LLMs can perform it to some extent), of remapping one's own material onto another language.

    I'm also not in any way convinced by the "people poor at writing and other cognitive tasks need the LLM to help them here" angle, because WP:COMPETENCE is required. This is work (albeit volunteer work), it is WP:NOT a game, a social-media playground, a get-my-ideas-out-there soapbox, or a place to learn how to interact e-socially or pick up remedial writing skills, nor a venue for practicing one's argument techniques. It's an encyclopedia, being built by people who – to be productive contributors instead of a draining burden on the entire community – must have: solid reasoning habits, great judgement (especially in assessing reliability of claims and the sources making them), excellent writing skills of a highly particularized sort, a high level of fluency in this specific language (in multiple registers), and a human-judgment ability to understand our thick web of policies, guidelines, procedures, and often unwritten norms, and how they all interact, in a specific contextual way that may vary greatly by context. None of these is optional. An LLM cannot do any of them adequately (not even write well; their material sticks out like a sore thumb, and after a while you can even tell which LLM produced the material by its habitual but distinctive crappy approach to simulating human thought and language).

    In short, if you need an LLM to give what you think is meaningful input into a decision-making process on Wikipedia (much less to generate mainspace content for the public), then you need to go find something else to do, something that fits your skills and abilities. Saying this so plainly will probably upset someone, but so it goes. I have a rep for "not suffering fools lightly" and "being annoying but correct"; I can live with that if it gets the right decisions made and the work advanced.  — SMcCandlish ¢ 😼  05:49, 22 December 2024 (UTC)[reply]

    The problem with all that is that we already have a policy that allows the hatting or removal of comments that are actually problematic because of their content (which are the only ones that we should be removing) without regard for whether it was or was not written by LLM. Everything that actually should be removed can be removed already. Thryduulf (talk) 11:39, 22 December 2024 (UTC)[reply]
    People who have good reading skills, great judgement, and solid reasoning habits enough to find problems in existing articles don't necessarily have great interpersonal writing/communication skills, or the confidence to use them. Meanwhile, for all LLM is bad at, it is very good at diluting everything you say to become dry, dispassionate, and thus inoffensive. Aaron Liu (talk) 15:26, 22 December 2024 (UTC)[reply]
  • Support. Sure I have questions about detection, but I don't think it means we shouldn't have a policy that explicitly states that it should not be used (and can be ignored/hatted if it is). Judging solely based on content (and no wp:bludgeoning, etc.) is unsustainable IMO. It would mean taking every wall of text seriously until it's clear that the content is unhelpful, and LLMs are very good at churning out plausible-sounding bullshit. It wastes everyone's time. If cognitive impairments or ESL issues make it hard to contribute, try voice-to-text, old-school translation software, or some other aid. LLMs aren't really you.--MattMauler (talk) 11:27, 23 December 2024 (UTC)[reply]
  • Comment. While I agree with the sentiment of the request, I am at a loss to see how we can identify LLM-generated comments in a consistent manner that can scale. Yes, it might be easier to identify egregious copy-pastes of walls of text, but anything other than that might be hard to detect. Our options are:
  1. Robust tooling to detect LLM generated text, with acceptably low levels of false positives. Somewhat similar to what Earwig does for Copyvios. But, someone needs to build it and host it on WMTools or at a similar location.
  2. Self-certification by editors. Every edit/publish dialog box should have a checkbox for "Is this text LLM generated" with y/n optionality.
  3. Editors playing a vigilante role in reading the text and making a personal call on other editors' text. Obviously this is least preferred.
These are my starting views. Ktin (talk) 00:37, 24 December 2024 (UTC)[reply]
  • Strong oppose as it's impossible to enforce. Also LLMs are a valid and useful accessibility tool. – Anne drew 05:18, 24 December 2024 (UTC)[reply]
  • Bonus suggestion!: I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI generated (e.g. GPTZero isn't perfect), and many give valid arguments for using LLMs (e.g. ESL). If an argument is suspected to be written by LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to the administrators' noticeboard for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol guninvalid (talk) 06:12, 25 December 2024 (UTC)[reply]

Alternate proposal

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
Redundant proposal, confusingly worded, with no support, and not even any further discussion interest in 10 days.  — SMcCandlish ¢ 😼  05:23, 22 December 2024 (UTC)[reply]

Whereas many editors, including me, have cited problems with accuracy in regards to existing tools such as ZeroGPT, I propose that remarks that are blatantly generated by an LLM or similar automated system should be discounted/removed/collapsed/hidden. ThatIPEditor They / Them 10:00, 10 December 2024 (UTC)[reply]

Oppose as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that this is already policy so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which will happen, regardless of whether tools are used). Thryduulf (talk) 11:19, 10 December 2024 (UTC)[reply]
I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. BugGhost 🦗👻 18:32, 10 December 2024 (UTC)[reply]
What does blatantly generated mean? Do you mean only where the remark is signed with "I, Chatbot", or anything that appears to be LLM-style? I don't think there's much in between. ypn^2 19:21, 10 December 2024 (UTC)[reply]
Procedural close per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. Aaron Liu (talk) 03:00, 11 December 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Should first language be included in the infobox for historical figures?


Is there a guideline concerning this? "Infobox royalty" apparently has this parameter, but I haven't found a single article that actually uses it. Many articles don't mention the subject's spoken languages at all. In my view, somebody's first language (L1) is just a very basic and useful piece of information, especially for historical figures. This would be helpful in cases where the ruling elites spoke a completely different language from the rest of the country (e.g., High Medieval England or early Qing dynasty China). These things are not always obvious to readers who are unfamiliar with the topic. Including it would be a nice and easy way to demonstrate historical language shifts that otherwise might be overlooked. Perhaps it could also bring visibility to historical linguistic diversity and language groups that have since disappeared. Where there are multiple first languages, they could all be listed. And in cases where a person's first language remains unclear, it could simply be left out. Kalapulla123 (talk) 11:53, 8 December 2024 (UTC)[reply]

I don't think I agree this is a good use of infobox space:
  • incongruences between elite spoken languages and popular spoken languages can't be shown with a single parameter (the language spoken by the oppressed would have to be included as well)
  • for many people this would be unverifiable (already mentioned in OP) and / or contentious (people living during a language transition)
  • sometimes L2 skills will be more than adequate to communicate with subject population when called for
  • in cases where the subject's L1 matches their polity's (i.e. most cases), the parameter would feel like unnecessary clutter
  • prose description seems adequate
However, this is just my opinion, and the venue of discussion should probably be Wikipedia talk:WikiProject Royalty and Nobility or similar, rather than VPP. Folly Mox (talk) 12:02, 9 December 2024 (UTC)[reply]
I think this might be sufficiently important pretty much exclusively for writers where the language they wrote in is not the "obvious" one for their nationality. Johnbod (talk) 12:43, 9 December 2024 (UTC)[reply]
It might also be important for politicians (and similar figures?) in countries where language is a politically-important subject, e.g. Belgium. Thryduulf (talk) 16:29, 9 December 2024 (UTC)[reply]
This seems like a bad idea. Let's take a case where language spoken by a royal was very relevant: Charles V, Holy Roman Emperor. When he became King of Castile as a teenager, he only really spoke Flemish and didn't speak Castilian Spanish, and needless to say trusted the advisors he could actually talk with (i.e. Flemish / Dutch ones he brought with him). He also then immediately skipped out of Castile to go to proto-Germany to be elected Holy Roman Emperor. This ended up causing a rebellion (Revolt of the Comuneros) which was at least partially justified by Castilian nationalism, and partially by annoyed Castilian elites who wanted cushy government jobs. So language-of-royal was relevant. But... the Infobox is for the person as a whole. Charles came back to Castile and spent a stretch of 10 years there and eventually learned rather good Castilian and largely assuaged the elite, at least. He was king of Spain for forty years. So it would seem rather petty to harp on the fact his first language wasn't Castilian in the Infobox, when he certainly did speak it later and through most of his reign, even if not his first few years when he was still basically a kid. SnowFire (talk) 19:47, 9 December 2024 (UTC)[reply]
See below on this. Johnbod (talk) 14:26, 11 December 2024 (UTC)[reply]
SnowFire's fascinating anecdote shows that this information is not appropriate for infoboxes but rather should be described in prose in the body of the article where the subtleties can be explained to the readers. Cullen328 (talk) 19:56, 9 December 2024 (UTC)[reply]
No, it shows that it's not appropriate for that infobox, and therefore that it is not suitable for all infoboxes where it is plausibly relevant. It shows nothing about whether it is or is not appropriate for other infoboxes: the plural of anecdote is not data. Thryduulf (talk) 21:08, 9 December 2024 (UTC)[reply]
But it kind of is here? I picked this example as maybe one of the most obviously relevant cases. Most royals failing to speak the right language don't have this trait linked with a literal war in reliable sources! But if inclusion of this piece of information in an Infobox is still problematic in this case, how could it possibly be relevant in the 99.9% cases of lesser importance? The Infobox isn't for every single true fact. SnowFire (talk) 21:53, 9 December 2024 (UTC)[reply]
It isn't suitable for this infobox not because of a lack of importance, but because stating a single first language would be misleading. There exists the very real possibility of cases where it is both important and simple. Thryduulf (talk) 00:02, 10 December 2024 (UTC)[reply]
Could you (or anyone else in favor of the proposal) identify 5 biographies where this information is both useful to readers and clearly backed by reliable sources? signed, Rosguill talk 15:06, 11 December 2024 (UTC)[reply]
Charles V claimed to have spoken Italian to women, French to men, Spanish to God, and German to his horse. Hawkeye7 (discuss) 21:35, 9 December 2024 (UTC)[reply]
Sorry, this is just nonsense! Charles V was raised speaking French, which was the language of his aunt's court, although in the Dutch-speaking Mechelen. All his personal letters use French. He only began to be taught Dutch when he was 14, & may never have been much good at it (or Spanish or German). Contrary to the famous anecdote, which is rather late and dubious ("Spanish to God....German to my horse") he seems to have been a rather poor linguist, which was indeed awkward at times. Johnbod (talk) 00:39, 10 December 2024 (UTC)[reply]
(This is a bit off-topic, but "nonsense" is too harsh. I'm familiar that he spoke "French" too, yes, although my understanding was that he did speak "Flemish", i.e. the local Dutch-inflected speech, too? And neither 1500-era French nor Dutch were exactly standardized, so I left it as "Flemish" above for simplicity. If his Dutch was worse than I thought, sure, doesn't really affect the point made, though, which was that his Castilian was non-existent at first. As far as his later understanding of Spanish, his capacity was clearly enough - at the very least I've seen sources say he made it work and it was enough to stave off further discontent from the nobility. Take it up with the authors of the sources, not me.). SnowFire (talk) 16:23, 10 December 2024 (UTC)[reply]
There's a difference between "simplicity" and just being wrong! You should try reading the sources, with which I have no issue. And his ministers were also either native Francophones, like Cardinal Granvelle and his father Nicolas Perrenot de Granvelle (both from Besançon, now in eastern France), or could speak it well; the Burgundian elite had been Francophone for a long time. The backwash from all this remains a somewhat sensitive issue in Belgium, even now. And Charles V was not "King of Spain" (a title he avoided using) for 40 years at all; only after his mother died in 1555 (a year before him) did he become unarguably King of Castile. Johnbod (talk) 14:26, 11 December 2024 (UTC)[reply]
It may not be appropriate for many articles, but it surely is for some. For example, when I told her that England had had kings whose first language was German, someone asked me the other day how many. It would be good to have a quick way of looking up the 18th century Georges to find out. Phil Bridger (talk) 21:20, 9 December 2024 (UTC)[reply]
I think the problem is that people might make assumptions. I would check before saying that George I and George II spoke German as their first language and not French. Languages spoken is probably more useful than birth language, but the list might be incomplete. There is also competing information about George I, and he is an English King, so he has been better researched and documented compared to other historical figures.
I agree that this is important when language is the basis of community identity, such as in Belgium. Tinynanorobots (talk) 10:38, 10 December 2024 (UTC)[reply]
  • Ummmm… no. People I disagree with™️ use “infobox bloat” as a boogeyman in arguments about infoboxes. But this is infobox bloat. Even those celebrity/anime character things that tell you shoe size, pinky length and blood type wouldn’t include this. Dronebogus (talk) 18:16, 11 December 2024 (UTC)[reply]
I don't think there needs to be any central policy on this. It could be relevant to include this information for someone, perhaps... maybe... However, infoboxes work best when they contain uncontroversial at-a-glance facts that don't need a bunch of nuance and context to understand. For the example of Charles V, maybe his first language is significant, but putting it in the infobox (where the accompanying story cannot fit) would be a confusing unexplained factoid. Like, maybe once upon a time there was a notable person whose life turned on the fact that they were left-handed. That could be a great bit of content for the main article, but putting handedness in the infobox would be odd. Barnards.tar.gz (talk) 14:33, 12 December 2024 (UTC)[reply]
{{Infobox baseball biography}} includes handedness, and nobody finds that odd content for an infobox.
{{infobox royalty}} includes the option for up to five native languages, though the OP says it seems to be unused in practice. {{Infobox writer}} has a |language= parameter, and it would be surprising if this were unused. WhatamIdoing (talk) 19:36, 12 December 2024 (UTC)[reply]
Baseball seems to be a good example of where handedness is routinely covered, and easily consumable at a glance without needing further explanation. The scenario where I don't think handedness (or first language) makes sense is when it is a uniquely interesting aspect of that individual's life, because almost by definition there's a story there which the infobox can't tell. Barnards.tar.gz (talk) 10:23, 13 December 2024 (UTC)[reply]
I don't think L1 can be determined for most historical figures without a hefty dose of OR. If you look at my Babel boxes, you'll see that I, as a living human being with all the information about my own life, could not tell you what my own "L1" is. The historical figures for whom this would be relevant mostly spoke many more languages than I do, and without a time machine it would be nigh impossible to say which language they learned first. This isn't even clear for the Qing emperors – I am fairly certain that they all spoke (Mandarin) Chinese very well, and our article never says what language they spoke. Puyi even states that he never spoke Manchu. Adding this parameter would also inflame existing debates across the encyclopedia about ethnonationalism (e.g. Nikola Tesla) and infobox bloat. Toadspike [Talk] 21:21, 12 December 2024 (UTC)[reply]
As with every bit of information in every infobox, if it cannot be reliably sourced it does not go in, regardless of how important it is or isn't. There are plenty of examples of people whose first language is reported in reliable sources. I just did an internal search for "first language was" and on the first page of results found sourced mentions of first language at Danny Driver, Cleopatra, Ruthanne Lum McCunn, Nina Fedoroff, Jason Derulo, Henry Taube and Tom Segev, and an unsourced but plausible mention at Dean Martin. The article strongly suggests that her first language is an important part of Cleopatra's biography such that putting it in the infobox would be justifiable. I am not familiar enough with any of the others to have an opinion on whether it merits an infobox mention there, I'm simply reporting that there are many articles where first language is reliably sourced and a mention is deemed DUE. Thryduulf (talk) 22:08, 12 December 2024 (UTC)[reply]
I have been wondering since this conversation opened how far back the concept of an L1 language, or perhaps the most colloquial first language, can be pushed. Our article doesn't have anything on the history of the concept. CMD (talk) 11:31, 13 December 2024 (UTC)[reply]
I suspect the concept is pretty ancient, I certainly wouldn't be surprised to learn it arose around the same time as diplomacy between groups of people with different first languages. The note about it at Cleopatra certainly suggests it was already a well-established concept in her era (1st century BCE). Thryduulf (talk) 13:23, 13 December 2024 (UTC)[reply]
The concept of different social strata speaking different languages is old, but I'm not sure whether they viewed learning languages the same way we do. It's certainly possible, and perhaps it happened in some areas at some times, but I hesitate to assume it's the case for every historical person with an infobox. CMD (talk) 16:05, 13 December 2024 (UTC)[reply]
It's certainly not going to be appropriate for the infobox of every historical person, as is true for (nearly?) every parameter. The questions here are whether it is appropriate in any cases, and if so in enough cases to justify having it as a parameter (how many is enough? I'd say a few dozen at minimum, ideally more). I think the answer the first question is "yes". The second question hasn't been answered yet, and I don't think we have enough information here yet to answer it. Thryduulf (talk) 21:54, 13 December 2024 (UTC)[reply]
The question is not whether it is appropriate in any cases; the question is whether it is worth the trouble. I guarantee that this would lead to many vicious debates, despite being in most cases an irrelevant and unverifiable factoid based on inappropriate ABOUTSELF. This is the same reason we have MOS:ETHNICITY/NATIONALITY. Toadspike [Talk] 07:29, 16 December 2024 (UTC)[reply]
Nah. If this were "a very basic and useful piece of information" then we would already be deploying it site wide, so it obviously is not. In the vast majority of cases, it would involve intolerable WP:OR or even just guessing masquerading as facts. We do not know for certain that someone born in France had French as their first/native/home language. I have close relatives in the US, in a largely English-speaking part of the US, whose first language is Spanish. For historical figures it would get even more ridiculous, since even our conceptions of languages today as, e.g., "German" and "French" and "Spanish" and "Japanese", is a bit fictive and is certainly not historically accurate, because multiple languages were (and still are, actually) spoken in these places. We would have no way to ascertain which was used originally or most natively for the average historical figure. Beyond a certain comparatively recent point, most linguistics is reconstruction (i.e. educated guesswork); if there's not a substantial corpus of surviving written material we cannot be sure. That matters a lot for figures like Genghis Khan and King Bridei I of the Picts. Finally, it really is just trivia in the vast majority of cases. What a biographical figure's first/primary/home/most-fluent/most-frequently-used language is (and some of those might not be the same, since all of them can change over time other than "first") is something that could be included when certain from RS, but it's not lead- or infobox-worthy in most cases, unless it pertains directly to the subject's notability (e.g. as a writer) and also isn't already implicit from other details like nationality.  — SMcCandlish ¢ 😼  03:42, 23 December 2024 (UTC)[reply]

Restrict new users from crosswiki uploading files to Commons

I created this Phabricator ticket (phab:T370598) in July of this year, figuring that consensus to restrict non-confirmed users from crosswiki uploading files to Commons is implied. Well, consensus already agreed at Commons in response to the WMF study on crosswiki uploading. I created an attempted Wish at Meta-wiki, which was then rejected, i.e. "archived", as policy-related and requir[ing] alignment across various wikis to implement such a policy. Now I'm starting this thread, thinking that the consensus here would already or implicitly support such restriction, but I can stand corrected about the outcome here. George Ho (talk) 06:34, 9 December 2024 (UTC); corrected, 08:10, 9 December 2024 (UTC)[reply]

  • Support. I am not sure why this relies on alignment across wikis, those on Commons are best placed to know what is making it to Commons. The change would have little to no impact on en.wiki. If there is an impact, it would presumably be less cleaning up of presumably fair use files migrated to Commons that need to be fixed here. That said, if there needs to be consensus, then obviously support. We shouldn't need months of bureaucracy for this. CMD (talk) 06:41, 9 December 2024 (UTC)[reply]
  • Support, I don't know that my input really counts as new consensus because I said this at the time, but the problem is much worse than what the study suggests, as we are still finding spam, copyvios, unusable selfies and other speedy-deletable uploads from the timespan audited. Gnomingstuff (talk) 02:14, 10 December 2024 (UTC)[reply]

Question(s) stemming from undiscussed move

"AIM-174 air-to-air missile" was moved without discussion to "AIM-174B." Consensus was reached RE: the removal of "air-to-air missile," but no consensus was reached regarding the addition or removal of the "B." After a no-consensus RM close (which should have brought us back to the original title, sans agreed-upon unneeded additional disambiguator, in my opinion), I requested the discussion be re-opened, per pre-MRV policy. (TO BE CLEAR: I should have, at this time, requested immediate reversion. However, I did not want to be impolite or pushy) The original closer -- Asukite (who found for "no consensus") was concerned they had become "too involved" in the process and requested another closer. Said closer immediately found consensus for "AIM-174B." I pressed-on to a MRV, where an additional "no consensus" (to overturn) finding was issued. As Bobby Cohn pointed-out during the move review, "I take issue with the participating mover's interpretation of policy 'Unfortunately for you, a no consensus decision will result in this article staying here' in the RM, and would instead endorse your idea that aligns with policy, that a no consensus would take us back to the original title, sans extra disambiguator."

The issues, as I see them, are as-follows:

WP:RMUM: The move from “AIM-174 air-to-air missile” to “AIM-174B” was conducted without discussion, and I maintain all post-move discussions have achieved "no consensus."

Burden of Proof: The onus should be on the mover of the undiscussed title to justify their change, not on others to defend the original title. I refrained from reverting prior to initiating the RM process out of politeness, which should not shift the burden of proof onto me.

Precedent: I am concerned with the precedent. Undiscussed moves may be brute-forced into acceptance even if "no consensus" or a very slim consensus (WP:NOTAVOTE) is found?

Argument in-favor of "AIM-174": See the aforementioned RM for arguments in-favor and against. However, I would like to make it clear that I was the only person arguing WP. Those in-favor of "174B" were seemingly disagreeing with my WP arguments, but not offering their own in-support of the inclusion of "B." That said, my primary WP-based argument is likely WP:CONSISTENT; ALL U.S. air-to-air missiles use the base model as their article title. See: AIM-4 Falcon, AIM-26 Falcon, AIM-47 Falcon, AIM-9 Sidewinder, AIM-7 Sparrow, AIM-54 Phoenix, AIM-68 Big Q, AIM-82, AIM-95 Agile, AIM-97 Seekbat, AIM-120 AMRAAM, AIM-132, AIM-152 AAAM, AIM-260. The "B" in "174B" is unnecessary and violates consistency.

Do my policy contentions hold any weight? Or am I mad? Do I have any path forward, here?

TO BE CLEAR, I am not alleging bad faith on behalf of anyone, and I am extremely grateful to all those who have been involved, particularly the RM closer that I mentioned, as well as the MRV closer, ModernDayTrilobite. I would like to make it clear that this isn't simply a case of a MRV 'not going my way.' Again, I am concerned w/ the precedent and with the onus having been shifted to me for months. I also apologize for the delay in getting this here; I originally stopped-over at the DRN but Robert McClenon kindly suggested I instead post here. MWFwiki (talk) 00:08, 12 December 2024 (UTC)[reply]

Are you familiar with Wikipedia:Article titles#Considering changes? Do you think you understand why that rule exists? WhatamIdoing (talk) 23:31, 12 December 2024 (UTC)[reply]
I am quite familiar with it. It seemingly supports my argument(s), so...? Is there a particular reason you're speaking in quasi-riddles? MWFwiki (talk) 01:11, 13 December 2024 (UTC)[reply]
If yours is the title favored by the policy, then none of this explanation makes any difference. You just demand that it be put back to the title favored by the policy, and editors will usually go along with it. (It sometimes requires spelling out the policy in detail, but ultimately, most people want to comply with the policy.)
If yours is not the title favored by the policy, then the people on the other 'side' are going to stand on policy when you ask to move it, so you'd probably have to get the policy changed to 'win'. If you want to pursue that, you will need to understand why the rule is set this way, so that you have a chance of making a convincing argument. WhatamIdoing (talk) 05:24, 13 December 2024 (UTC)[reply]
I think several individuals involved in this process have agreed that the default title is the favored title, at least as far as WP:TITLECHANGES, as you say.
(The only reason I listed any further ‘litigation’ here is to show what was being discussed in-general for convenience’s sake, not necessarily to re-litigate)
However, at least two individuals involved have expressed to me that they felt their hands were tied by the RM/MRV process. Otherwise, as I mentioned (well, as Bobby_Cohn mentioned) the train of thought seemed to be “well, I don’t want the title to be changed,” and this was seemingly enough to override policy. Or, at best, it was seemingly a “well, it would be easier to just leave it as-is” sort of decision.

And again, I, 100%, should have been more forceful; The title should have been reverted per the initial “no consensus” RM-closure and I will certainly bear your advice in-mind in the future. That said, I suppose what I am asking is would it be inappropriate to ask the original RM-closer to revert the article at this point, given how much time has passed?

MWFwiki (talk) 06:29, 13 December 2024 (UTC)[reply]
Given what was written in Talk:AIM-174B#Requested move 20 September 2024 six weeks ago, I think that none of this is relevant. "Consensus to keep current name" does not mean that you get to invoke rules about what happens when there is no consensus. I suggest that you give up for now, wait a long time (a year? There is no set time, but it needs to be a l-o-n-g time), and maybe start a new Wikipedia:Requested moves (e.g., in 2026). WhatamIdoing (talk) 19:41, 13 December 2024 (UTC)[reply]
Thanks! MWFwiki (talk) 05:09, 14 December 2024 (UTC)[reply]
Everything ModernDayTrilobite advised you of is correct. Vpab15 closed the RM and determined that consensus was reached. Nothing since then has overturned or otherwise superseded Vpab15's closure. Therefore that closure remains in force. You already challenged the validity of Vpab15's closure at move review, and you have no avenue for challenging it again. Your best bet is to wait a tactful amount of time (several months) before starting another RM. And in that RM, none of this procedural stuff will matter, and you will be free to focus just on making the clearest, simplest case for why AIM-174 is the best title. Adumbrativus (talk) 06:10, 13 December 2024 (UTC)[reply]
I suppose my issue is better summed-up by my above discussion with WhatamIdoing; The MRV shouldn’t have been required. That burden should never have been on me. The title should have been reverted at the initial “no consensus” per WP:TITLECHANGES. Otherwise, undiscussed moves — when challenged — may now be upheld by either consensus or no consensus? This is not what WP:TITLECHANGES says, obviously. That said I take full responsibility for not being clearer with this argument, and instead focusing on arguing for a ‘different’ title, when I should have been arguing for the default title per TITLECHANGES. MWFwiki (talk) 06:33, 13 December 2024 (UTC)[reply]
You've repeatedly pointed to the initial self-reverted closure as if it's somehow significant. It isn't. Asukite voluntarily decided to close the discussion, and voluntarily self-reverted their decision to close. It doesn't matter whether you asked for it or someone else asked or no one asked. They had the right to self-revert then, for any reason or no reason. The net result is the same as if Asukite had never closed it at all. Only Vpab15's closure, which was 100% on Vpab15's own authority and 0% on the supposed authority of the annulled earlier closure, is binding. Adumbrativus (talk) 09:22, 13 December 2024 (UTC)[reply]
I don't disagree with your latter statement, but why would an initial finding of no-consensus not matter? It should have brought us back to the default title, not simply been reverted. Because that policy wasn't followed, I'm here now, is my point. Regardless, I understand; Thank you for your advice! Well, I appreciate your time and consideration! :-) MWFwiki (talk) 05:08, 14 December 2024 (UTC)[reply]
(Involved at the MRV) Seeing as I've been tagged in this multiple times and quoted, I'll give my thoughts on this. I don't want to accuse MWFwiki of selectively quoting me, but I do think that my quote above was, when taken into account with the following discussion, more about meta-conversation about the correct policy to implement in the event the MRV went the other way. I explicitly said in the immediately following message the view that the close was not outside the scope of WP:RMCI is a reasonable and good faith interpretation. I do think this close was within bounds, and the following MRV appropriately closed and summarised.
Yes, had the original close of no consensus stood, then it could have been reverted wholecloth. It was self-reverted and therefore plays no role in the consideration of the subsequent closure. We're always going to take the most recent finding of consensus to be what holds. It seems to have been said in the above that had the no consensus closure held and the appropriate WP:RMNCREV policy been applied, then the appellant here would have gotten their preferred outcome. But to continue to argue this in the face of the subsequent developments is where this enters wikilawyering territory. I think that since then, the appellant has continued to make policy arguments that would be better suited for a subsequent and focused RM on the actual title rather than wikilawyer about a previous close that was self-reverted and continuing to argue policy.
There's nothing for this venue to really change in regards to that AT and the discussion to change the AT would need to be had at the article's talk page. My sincere advice to appellant is to wait a reasonable amount of time and make strong policy based arguments about the preferred title (don't just quote policy, we editors are good at clicking links and reading it for ourselves—quoting nothing but policy back at us makes us feel like you've taken us for fools; instead provide facts and sources that support the relevant policies and link those). Spend some time at WP:RMC and see what well-argued and successful RMs typically look like. Bobby Cohn (talk) 17:38, 17 December 2024 (UTC)[reply]

CSD A12. Substantially written using a large language model, with hallucinated information or fictitious references

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


When fixing up new articles, I have encountered articles that appear to have been substantially generated by AI, containing hallucinated information. While these articles may not meet other criteria for speedy deletion, as the subjects themselves are sometimes real and notable, waiting for seven days to PROD the articles is inefficient. I recommend designating WP:A12 for the speedy deletion of these articles. I have created a template (User:Svampesky/Template:Db-a12) for use if the proposal is successful. A recent example is the article on the Boston University Investment Office, where the author explicitly disclosed that it was created using a large language model and contains references to sources that don't exist. I initially G11'd it, as it seemed the most appropriate, but was declined, and the article was subsequently PRODed. Svampesky (talk) 21:13, 12 December 2024 (UTC)[reply]

CSD are generally limited to things that are unambiguously obvious. I imagine the number of cases in which it's unambiguously obvious that the entire page was generated by an LLM (as opposed to the editor just using the LLM to generate references, for example) is small enough that it doesn't warrant a speedy deletion criterion. --Ahecht (TALK PAGE) 21:29, 12 December 2024 (UTC)[reply]
I like this idea but agree that it's better not as a CSD but perhaps its own policy page. Andre🚐 21:33, 12 December 2024 (UTC)[reply]
I don't think it even merits a policy page. The number of cases where the LLM use is objectively unambiguous, the article content is sufficiently problematic that deletion is the only appropriate course of action, and it cannot be (speedily) deleted under existing policy is going to be vanishingly small. Even the OP's examples were handled by existing processes (PROD) sufficiently. Thryduulf (talk) 22:11, 12 December 2024 (UTC)[reply]
@Svampesky, when you say that Wikipedia:Proposed deletion is "inefficient", do you mean that you don't want to wait a week before the article gets deleted? WhatamIdoing (talk) 23:32, 12 December 2024 (UTC)[reply]
My view is that Wikipedia:Proposed deletion is inefficient, as it leaves articles that clearly contain hallucinated LLM-generated content and fictitious references (which almost certainly will be deleted) in the mainspace for longer than necessary. Svampesky (talk) 00:03, 13 December 2024 (UTC)[reply]
Efficiency usually compares the amount of effort something takes, not the length of time it takes. "Paint it and leave it alone for 10 minutes to dry" is the same amount of hands-on work as "Paint it and leave it alone for 10 days to dry", so they're equally efficient processes. It sounds like you want a process that isn't less hands-on work/more efficient, but instead a process that is faster.
Also, if the subject qualifies for an article, then deletion isn't necessarily the right solution. Blanking bad content and bad sources is officially preferred (though more work) so that there is only verifiable content with one or more real sources left on the page – even if that content is only a single sentence.
Efficiency and speed is something that many editors like. However, there has to be a balance. We're WP:HERE to build an encyclopedia, which sometimes means that rapidly removing imperfect content is only the second or third most important thing we do. WhatamIdoing (talk) 00:43, 13 December 2024 (UTC)[reply]
  • This part as the subjects themselves are sometimes real and notable is literally an inherent argument against using CSD (or PROD for that matter). WP:TNT the article to a sentence if necessary, but admitting that you're trying to delete an article you know is notable just means you're admitting to vandalism. SilverserenC 00:07, 13 December 2024 (UTC)[reply]
    The categorization of my proposal as admitting to vandalism is incorrect. WP:G11, the speedy deletion criterion I initially used for the article, specifies deleting articles that would need to be fundamentally rewritten to serve as encyclopedia articles. Articles that have been generated using large language models, with hallucinated information or fictitious references, would need to be fundamentally rewritten to serve as encyclopedia articles. Svampesky (talk) 00:42, 13 December 2024 (UTC)[reply]
    Yes, but G11 is looking for blatant advertising ("Buy widgets now at www.widgets.com! Blue-green widgets in stock today!") It's not looking for anything and everything that needs to be fundamentally re-written. WhatamIdoing (talk) 00:45, 13 December 2024 (UTC)[reply]
    (Edit Conflict) How does G11 even apply here? Being written via LLM does not make an article "promotional". Furthermore, even that CSD criteria states If a subject is notable and the content could plausibly be replaced with text written from a neutral point of view, this is preferable to deletion. I.e. TNT it to a single sentence and problem solved. SilverserenC 00:46, 13 December 2024 (UTC)[reply]
  • The venue for proposing new criteria is at Wikipedia talk:Criteria for speedy deletion. So please make sure that you don't just edit in a new criterion without an RFC approving it, else it will be quickly reverted. Graeme Bartlett (talk) 00:20, 13 December 2024 (UTC)[reply]
    Since we are talking about BLPs… the harm of hallucinated information does need to be taken very seriously. I would say the first step is to stubbify.
    However, Deletion can be held off as a potential second step, pending a proper BEFORE check. Blueboar (talk) 01:06, 13 December 2024 (UTC)[reply]
    If the hallucination is sufficiently dramatic ("Joe Film is a superhero action figure", when it ought to say that he's an actor who once had a part in a superhero movie), then you might be able to make a good case for {{db-hoax}}. WhatamIdoing (talk) 05:26, 13 December 2024 (UTC)[reply]
    I have deleted an AI generated article with fake content and references as a hoax. So that may well be possible. Graeme Bartlett (talk) 12:23, 13 December 2024 (UTC)[reply]
Isn't this covered by WP:DRAFTREASON? Gnomingstuff (talk) 20:34, 13 December 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

AFD clarification

The Articles for deletion page states that: If a redirection is controversial, however, AfD may be an appropriate venue for discussing the change in addition to the article's talk page.

Does this mean that an AFD can be started by someone with the intent of redirecting instead of deleting? Plasticwonder (talk) 04:06, 13 December 2024 (UTC)[reply]

Yes. If there is a contested redirect, the article is restored and it is brought to AfD. voorts (talk/contributions) 04:34, 13 December 2024 (UTC)[reply]
I think the ideal process is:
  • Have an ordinary discussion on the talk page about redirecting the page.
  • If (and only if) that discussion fails to reach consensus, try again at AFD.
I dislike starting with AFD. It isn't usually necessary, and it sometimes has a feel of the nom trying to get rid of it through any means possible ("I'll suggest a WP:BLAR, but maybe I'll be lucky and they'll delete it completely"). WhatamIdoing (talk) 05:31, 13 December 2024 (UTC)[reply]
Would need some stats on the it isn't usually necessary claim, my intuition based on experience is that if a BLAR is contested it's either dropped or ends up at AfD. CMD (talk) 05:48, 13 December 2024 (UTC)[reply]
I agree with that. From what I have seen at least, if redirecting is contested, it then is usually discussed at AFD, but that's just me. Plasticwonder (talk) 08:42, 13 December 2024 (UTC)[reply]
It depends how active the respective talk pages are (redirected article and target), but certainly for ones that are quiet AfD is going to be the most common. Thryduulf (talk) 09:33, 13 December 2024 (UTC)[reply]
It will also depend on whether you advertise the discussion, e.g., at an active WikiProject. WhatamIdoing (talk) 19:44, 13 December 2024 (UTC)[reply]
I usually just go straight to AfD. I've found that editors contesting redirects usually !vote keep and discussing on talk just prolongs the inevitable AfD. voorts (talk/contributions) 14:58, 13 December 2024 (UTC)[reply]
Gotcha. Plasticwonder (talk) 15:29, 13 December 2024 (UTC)[reply]
Looking at the above comments: What is it about the Wikipedia:Proposed article mergers process that isn't working for you all? If you redirect an article and it gets reverted, why aren't you starting a PM? WhatamIdoing (talk) 21:37, 16 December 2024 (UTC)[reply]
For me, it's lack of participation, no tool to list something at PAM, and no relisting option so proposed merges just sit for a very long time before being closed. voorts (talk/contributions) 23:21, 16 December 2024 (UTC)[reply]
What voorts said. Multiple times now I've floated the idea of making PAM more like RM, one of these years I should really get around to doing something more than that. I won't have time before the new year though. Thryduulf (talk) 23:45, 16 December 2024 (UTC)[reply]
I think PAM should be merged into AfD, since both generally involve discussions of notability. voorts (talk/contributions) 00:00, 17 December 2024 (UTC)[reply]
Merging often involves questions of overlap and topical distinction rather than just notability, although this also ends up discussed at AfD. I do wonder if this would leave proposals to split out in the cold though, as much like merge discussions they just sit there. CMD (talk) 04:00, 17 December 2024 (UTC)[reply]
The most important tool is Twinkle > Tag > Merge. I personally prefer its "Merge to" option, but there's a plain "Merge" if you don't know exactly which page should be the target.
All merges get bot-listed in Wikipedia:Article alerts. Wikipedia:Proposed article mergers is another place to advertise it, and I'd bet that Twinkle could post those automatically with relatively little work (an optional button, similar to notifying the creator of deletion plans).
I dislike "relisting"; things should just stay open as long as they need to, without adding decorative comments about the discussion not happening fast enough. In my experience, merge proposals stay open because everyone's agreed on the outcome but nobody wants to do the work. WhatamIdoing (talk) 06:46, 17 December 2024 (UTC)[reply]
In this context isn't redirection a *type* of deletion (specifically delete while leaving a redirect)? Horse Eye's Back (talk) 07:05, 17 December 2024 (UTC)[reply]
I would think so. Plasticwonder (talk) 07:33, 17 December 2024 (UTC)[reply]
It's only a deletion if an admin pushes the delete button. Blanking and redirecting – even blanking, redirecting, and full-protecting the redirect so nobody can un-redirect it – is not deletion. WhatamIdoing (talk) 07:34, 18 December 2024 (UTC)[reply]
That might be clear to you (and the other admins) but almost nobody in the general community understands that (to the point where I would say its just wrong, deletion is broader than that in practice). Horse Eye's Back (talk) 16:23, 18 December 2024 (UTC)[reply]
Well, it has always been clear to me, and I am not, and have never wished to be, an admin. But, then again, I am a bit strange in that I expect things to be as people say that they will be. Phil Bridger (talk) 18:34, 18 December 2024 (UTC)[reply]
Contested redirects going to AfD makes sense. Articles are redirected for the same reasons they're deleted and redirecting is probably the most common ATD. I've opened plenty of AfDs where my nom recommends a redirect instead of deletion, including when I've BLARed an article and had the BLAR reverted. voorts (talk/contributions) 18:38, 18 December 2024 (UTC)[reply]
If a redirect has already been discussed or attempted, and consensus can't be reached easily, then I've got no problem with AFD. What I don't want to see is no discussion, no bold redirects, nobody's even hinted about a merge, and now it's at AFD, when the problem could have been resolved through a less intense method. WhatamIdoing (talk) 19:07, 18 December 2024 (UTC)[reply]

RfC: Voluntary RfA after resignation

Should Wikipedia:Administrators#Restoration of admin tools be amended to:

  • Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
  • Option 2 – <s>Clarify</s> Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
  • Option 3 – Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.

Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA. voorts (talk/contributions) 21:14, 15 December 2024 (UTC)[reply]
Note: There is an ongoing related discussion at Wikipedia:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC). Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)[reply]

  • 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 👸🎄 21:22, 15 December 2024 (UTC)[reply]
    Also fine with 3 charlotte 👸♥📱 22:23, 15 December 2024 (UTC)[reply]
  • There is ongoing discussion about this at Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)[reply]
    • 2, after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedily passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)". CMD (talk) 14:49, 16 December 2024 (UTC)[reply]
      best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)" - I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. BugGhost 🦗👻 11:05, 17 December 2024 (UTC)[reply]
      I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. 28bytes (talk) 14:23, 17 December 2024 (UTC)[reply]
      I think this would be a great idea if it differentiated against recall RfAs. Aaron Liu (talk) 18:37, 17 December 2024 (UTC)[reply]
      If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. Thryduulf (talk) 21:00, 17 December 2024 (UTC)[reply]
  • 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)[reply]
  • 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino if you're not into the whole brevity thing 21:27, 15 December 2024 (UTC)[reply]
  • 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)[reply]
  • Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)[reply]
  • 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't [request the tools back at RFA]". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)[reply]
    When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)[reply]
    I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)[reply]
  • 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)[reply]
    Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)[reply]
    Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)[reply]
    I'm weakly opposed to option 3, editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)[reply]
  • 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)[reply]
  • Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)[reply]
  • 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)[reply]
  • Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)[reply]
    A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. voorts (talk/contributions) 23:24, 16 December 2024 (UTC)[reply]
  • 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)[reply]
  • Option 2 (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that WP:RECALL is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – bradv 22:22, 15 December 2024 (UTC)[reply]
  • (edit conflict) I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a get out of recall free card for a year. —Cryptic 22:25, 15 December 2024 (UTC)[reply]
    @Cryptic hostile to what? Thryduulf (talk) 22:26, 15 December 2024 (UTC)[reply]
  • 2, distant second preference 3. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. Special:Random and Wikipedia:Backlog are always there. -- Tamzin[cetacean needed] (they|xe|🤷) 23:31, 15 December 2024 (UTC)[reply]
  • 1 would be my preference, but I don't think we need a specific rule for this. -- Ajraddatz (talk) 23:36, 15 December 2024 (UTC)[reply]
  • Option 1. No second preference between 2 or 3. As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. Daniel Quinlan (talk) 01:19, 16 December 2024 (UTC)[reply]
    After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. Daniel Quinlan (talk) 07:36, 16 December 2024 (UTC)[reply]
  • Option 2, i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. 28bytes (talk) 01:24, 16 December 2024 (UTC)[reply]
  • Option 2 nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. LEPRICAVARK (talk) 01:49, 16 December 2024 (UTC)[reply]
  • 1 or 3 per Novem Linguae. C F A 02:35, 16 December 2024 (UTC)[reply]
  • Option 3: Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in WP:SNOW. Gnomingstuff (talk) 03:42, 16 December 2024 (UTC)[reply]
    Update: Option 2 seems to be the consensus and I also would be fine with that. Gnomingstuff (talk) 18:10, 19 December 2024 (UTC)[reply]
  • Option 3 per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. Conyo14 (talk) 04:25, 16 December 2024 (UTC)[reply]
  • As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. HouseBlaster (talk • he/they) 04:31, 16 December 2024 (UTC)[reply]
    I've seen this assertion made multiple times now that [RFA] requires a lot of time from the community, yet nowhere has anybody articulated how or why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? Thryduulf (talk) 04:58, 16 December 2024 (UTC)[reply]
    I think there are a number of factors and people are summing it up as "time-wasting" or similar:
    1. BN is designed for this exact scenario. It's also clearly a less contentious process.
    2. Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Wikipedia is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
    3. If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
    4. The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
    Daniel Quinlan (talk) 07:31, 16 December 2024 (UTC)[reply]
    Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even realise that the RFA is not worth their time, they have to:
    • Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
    • Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
    • Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
    • None have
    • Realise your input is not necessary and this could have been done at BN
    This process will be repeated by hundreds of editors over the course of a week. BugGhost 🦗👻 08:07, 16 December 2024 (UTC)[reply]
    That they were former admins has always been stated in the first two sentences of their RfA statements, which are immediately followed by the explanation that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, they can just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. Aaron Liu (talk) 13:15, 16 December 2024 (UTC)[reply]
    Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. BugGhost 🦗👻 11:21, 17 December 2024 (UTC)[reply]
    They said they resigned due to personal time commitments. That directly says they weren't under a cloud, so I'll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. Aaron Liu (talk) 18:47, 17 December 2024 (UTC)[reply]
    Just to double check: you're saying that whenever you go onto an RFA, you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds. BugGhost 🦗👻 23:08, 17 December 2024 (UTC)[reply]
    Something in their statement would be an incredibly obvious reason. We are talking about assessing whether to examine further and whether the candidate could've used BN. Aaron Liu (talk) 12:52, 18 December 2024 (UTC)[reply]
    @Thryduulf let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time, but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Wikipedia activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time could have been spent on other wiki activities. Best, Barkeep49 (talk) 22:49, 16 December 2024 (UTC)[reply]
    Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have their time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. Thryduulf (talk) 23:39, 16 December 2024 (UTC)[reply]
    I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, Barkeep49 (talk) 03:34, 17 December 2024 (UTC)[reply]
    I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? Thryduulf (talk) 09:08, 17 December 2024 (UTC)[reply]
  • Option 2 I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. EggRoll97 (talk) 06:23, 16 December 2024 (UTC)[reply]
  • Option 2. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. Toadspike [Talk] 07:24, 16 December 2024 (UTC)[reply]
  • Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. Graham87 (talk) 08:46, 16 December 2024 (UTC)[reply]
  • Option 1 as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability is a very good thing, and we need more of it for resyssopings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input is outlined over on the idea lab BugGhost 🦗👻 08:59, 16 December 2024 (UTC)[reply]
  • Option 2, though I'd be for option 3 too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. JCW555 (talk) 09:13, 16 December 2024 (UTC)[reply]
  • Option 2 because WP:ADMINISTRATORS is well-established policy. Read WP:ADMINISTRATORS#Restoration of admin tools, which says quite clearly, Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process. I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is specifically permitted by policy. It is bewildering to me. Cullen328 (talk) 09:56, 16 December 2024 (UTC)[reply]
  • Option 2 & 3 I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. JuxtaposedJacob (talk) | :) | he/him | 10:45, 16 December 2024 (UTC)[reply]
  • Option 2, for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. Giraffer (talk) 12:08, 16 December 2024 (UTC)[reply]
  • Option 1 I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – DreamRimmer (talk) 12:11, 16 December 2024 (UTC)[reply]
  • Option 2 I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing an RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.)
    Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ProcrastinatingReader (talk) 12:33, 16 December 2024 (UTC)[reply]
  • Option 2 It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on WTT's re-RFA regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —k6ka 🍁 (Talk · Contributions) 15:24, 16 December 2024 (UTC)[reply]
  • Option 1 - Option 2 is a waste of community time. - Ratnahastin (talk) 15:30, 16 December 2024 (UTC)[reply]
    Why? Thryduulf (talk) 15:35, 16 December 2024 (UTC)[reply]
  • 2 is fine. Strong oppose to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —Kusma (talk) 16:59, 16 December 2024 (UTC)[reply]
  • 2, with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. Natg 19 (talk) 17:56, 16 December 2024 (UTC)[reply]
  • 2 Sure, and clarify that should such an RFA be unsuccessful they may only regain through a future rfa. — xaosflux Talk 18:03, 16 December 2024 (UTC)[reply]
  • Option 2 If contributing to such an RFA is a waste of your time, just don't participate. TheWikiToby (talk) 18:43, 16 December 2024 (UTC)[reply]
    No individual is wasting their time participating. Instead the person asking for a re-rfa is using tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, Barkeep49 (talk) 22:55, 16 December 2024 (UTC)[reply]
    I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. TheWikiToby (talk) 23:15, 16 December 2024 (UTC)[reply]
    I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. Thryduulf (talk) 23:30, 16 December 2024 (UTC)[reply]
  • 2. Maintain the status quo. And stop worrying about a trivial non-problem. --Tryptofish (talk) 22:57, 16 December 2024 (UTC)[reply]
  • 2. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it have to be in a recall petition? --Floquenbeam (talk) 23:44, 16 December 2024 (UTC)[reply]
    What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, Barkeep49 (talk) 03:38, 17 December 2024 (UTC)[reply]
    Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --Tryptofish (talk) 23:58, 18 December 2024 (UTC)[reply]
  • No hard rule, but we should socially discourage confirmation RfAs There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person?
    A lot of people have thrown around editor time in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here. I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. CaptainEek Edits Ho Cap'n! 03:29, 17 December 2024 (UTC)[reply]
    And for what it's worth, support Option 3 because I'm generally a fan of putting more tools in people's toolboxes. CaptainEek Edits Ho Cap'n! 03:36, 17 December 2024 (UTC)[reply]
    In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, is a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefit to the entire community: a trusted administrator.
    The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA, then close the tab and return to whatever else it was you were doing. Thryduulf (talk) 09:16, 17 December 2024 (UTC)[reply]
    WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —Kusma (talk) 09:53, 17 December 2024 (UTC)[reply]
    @CaptainEek Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN?
    As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. Soni (talk) 18:41, 19 December 2024 (UTC)[reply]
    Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. CaptainEek Edits Ho Cap'n! 16:49, 23 December 2024 (UTC)[reply]
  • Option 1. It will prevent the trend of unnecessary drama we have been seeing recently. – Ammarpad (talk) 07:18, 17 December 2024 (UTC)[reply]
  • Option 2 if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ~~ AirshipJungleman29 (talk) 15:08, 17 December 2024 (UTC)[reply]
  • Option 3 (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are real and substantial concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). Bobby Cohn (talk) 17:26, 17 December 2024 (UTC)[reply]
  • Option 3 or Option 2. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. Soni (talk) 17:35, 17 December 2024 (UTC)[reply]
  • Option 2: Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- SashiRolls 🌿 · 🍥 18:48, 17 December 2024 (UTC)[reply]
    I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. Aaron Liu (talk) 20:45, 17 December 2024 (UTC)[reply]
    Who are you disagreeing with? This RfC is about voluntary RRfA. -- SashiRolls 🌿 · 🍥 20:59, 17 December 2024 (UTC)[reply]
    I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). Aaron Liu (talk) 12:56, 18 December 2024 (UTC)[reply]
    One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. Thryduulf (talk) 21:06, 17 December 2024 (UTC)[reply]
    I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal.
    There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into this without good reason. Aaron Liu (talk) 12:57, 18 December 2024 (UTC)[reply]
    Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. Thryduulf (talk) 14:56, 18 December 2024 (UTC)[reply]
    Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. Aaron Liu (talk) 15:37, 18 December 2024 (UTC)[reply]
    Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. Thryduulf (talk) 15:49, 18 December 2024 (UTC)[reply]
    I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. Aaron Liu (talk) 16:29, 18 December 2024 (UTC)[reply]
  • Option 2. In fact, I'm inclined to encourage an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --Pinchme123 (talk) 21:45, 17 December 2024 (UTC)[reply]
  • Option 2 WP:RFA has said "Former administrators may seek reinstatement of their privileges through RfA..." for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. Andrew🐉(talk) 22:14, 17 December 2024 (UTC)[reply]
  • Option 2. People who think it’s not a good use of their time always have the option to scroll past. Innisfree987 (talk) 01:41, 18 December 2024 (UTC)[reply]
  • 2 - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Wikipedia, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. Jehochman Talk 14:00, 18 December 2024 (UTC)[reply]
  • Option 1. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. Risker (talk) 02:34, 19 December 2024 (UTC)[reply]
  • Option 2. If people want to choose to go through an RFA, who are we to stop them? Stifle (talk) 10:25, 19 December 2024 (UTC)[reply]
  • Option 2 (status quo/no changes) per meh. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. Ivanvector (Talk/Edits) 19:34, 19 December 2024 (UTC)[reply]
  • Option 2 I would probably counsel just going to BN most of the time; however, there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse so to speak, but they tend to matter far less than people think they do. 184.152.68.190 (talk) 02:28, 20 December 2024 (UTC)[reply]
  • Option 1, per WP:POINT, WP:NOT#SOCIALNETWORK, WP:NOT#BUREAUCRACY, WP:NOTABOUTYOU, and related principles. We all have far better things to do that read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question.  — SMcCandlish ¢ 😼  04:28, 22 December 2024 (UTC)[reply]
    Except none of the re-RFAs complained about have been RfA invoked as a "Show me some love!" abuse of process, you're arguing against a strawman. Thryduulf (talk) 11:41, 22 December 2024 (UTC)[reply]
    It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The actual effect (what matters) of doing this whether for attention, or because you've somehow confused yourself into thinking it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them.  — SMcCandlish ¢ 😼  05:49, 23 December 2024 (UTC)[reply]

    or because you've somehow confused yourself into thinking it needs to be done

    I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. Aaron Liu (talk) 15:37, 23 December 2024 (UTC)[reply]
  • Option 2. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins already choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates WP:POINT, because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust.
    On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general is a violation of WP:POINT. – Epicgenius (talk) 14:46, 22 December 2024 (UTC)[reply]
    But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an emergent behavior, is a feature or a bug. CaptainEek Edits Ho Cap'n! 23:01, 22 December 2024 (UTC)[reply]
    There have been lots of re-RFAs, historically. They were more common in the 2000s. Evercat in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. Croat Canuck back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, HJ Mitchell (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial SarekOfVulcan. That ended successful re-RFAs until 2019's Floquenbeam, which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the potentially optional yet no-consensus SarekVulcan 3 in 2014 and the Rich Farmbrough case in 2015, but those are very different from what we're talking about today. GreenLipstickLesbian (talk) 00:01, 23 December 2024 (UTC)[reply]
    To add on to that, Wikipedia:Requests for adminship/Harrias 2 was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – Epicgenius (talk) 16:34, 23 December 2024 (UTC)[reply]
    Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. CaptainEek Edits Ho Cap'n! 16:47, 23 December 2024 (UTC)[reply]
  • Option 2 or 3 per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. Aaron Liu (talk) 15:44, 23 December 2024 (UTC)[reply]
  • Option 2 but for largely the reasons presented by CaptainEek. KevinL (aka L235 · t · c) 21:58, 23 December 2024 (UTC)[reply]
  • Option 2 (fine with better labeling) These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. RevelationDirect (talk) 22:36, 23 December 2024 (UTC)[reply]
  • Option 1 because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. Therapyisgood (talk) 23:05, 23 December 2024 (UTC)[reply]
    If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. Thryduulf (talk) 23:28, 23 December 2024 (UTC)[reply]

Discussion

  • @Voorts: If option 2 gets consensus how would this RfC change the wording Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process. Or is this an attempt to see if that option no longer has consensus? If so why wasn't alternative wording proposed? As I noted above this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)[reply]
  • I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)[reply]
    The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)[reply]
    @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)[reply]
     Done voorts (talk/contributions) 22:07, 15 December 2024 (UTC)[reply]
  • Question: May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. Aaron Liu (talk) 22:35, 15 December 2024 (UTC)[reply]
    The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week long, publicly advertised back patting to be a productive use of volunteer time. -- Ajraddatz (talk) 23:33, 15 December 2024 (UTC)[reply]
    But no volunteer is obligated to pat such candidates on the back. Aaron Liu (talk) 00:33, 16 December 2024 (UTC)[reply]
    Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- Ajraddatz (talk) 01:52, 16 December 2024 (UTC)[reply]
    Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Aaron Liu (talk) 02:31, 16 December 2024 (UTC)[reply]
    Except someone who has no need for advanced tools and is not going to use them in any useful fashion, would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. Only in death does duty end (talk) 09:05, 16 December 2024 (UTC)[reply]
    I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. Aaron Liu (talk) 13:02, 16 December 2024 (UTC)[reply]
  • I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as discussed on Barkeep49's talk page, we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. isaacl (talk) 00:32, 16 December 2024 (UTC)[reply]
    Regarding option 3, establishing a consensus view takes patience. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. isaacl (talk) 17:44, 17 December 2024 (UTC)[reply]
  • In re the idea that RfAs use up a lot of community time: I first started editing Wikipedia in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. Caeciliusinhorto-public (talk) 10:26, 16 December 2024 (UTC)[reply]
  • What about an option 4 encouraging eligible candidates to go through BN? At the end of the Procedure section, add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. Giraffer (talk) 12:08, 16 December 2024 (UTC)[reply]
    Discouraging RFAs is the second last thing we should be doing (after prohibiting them); rather, per my comments here and in the VPI discussion, we should be encouraging former administrators to demonstrate that they still have the approval of the community. Thryduulf (talk) 12:16, 16 December 2024 (UTC)[reply]
    I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. Gnomingstuff (talk) 15:04, 16 December 2024 (UTC)[reply]
    If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. Thryduulf (talk) 15:30, 16 December 2024 (UTC)[reply]
    Also a solid option, the question is whether people will actually do it. Gnomingstuff (talk) 22:55, 16 December 2024 (UTC)[reply]
    The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. Thryduulf (talk) 23:20, 16 December 2024 (UTC)[reply]
  • This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal solves any problem we actually have. UninvitedCompany 21:56, 16 December 2024 (UTC)[reply]
  • This apparent negative feeling evoked at an RFA for a former sysop everyone agrees is fully qualified and trusted certainly will put a bad taste in the mouths of other former admins who might consider a reconfirmation RFA without first visiting BN. This comes in the wake of Worm That Turned's similar rerun. BusterD (talk) 23:29, 16 December 2024 (UTC)[reply]
    Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. Thryduulf (talk) 23:32, 16 December 2024 (UTC)[reply]
    No argument from me. I was a big Hog Farm backer way back when he was merely one of Wikipedia's best content contributors. BusterD (talk) 12:10, 17 December 2024 (UTC)[reply]
  • All these mentions of editor time make me have to mention The Grand Unified Theory of Editor Time (TLDR: our understanding of how editor time works is dreadfully incomplete). CaptainEek Edits Ho Cap'n! 02:44, 17 December 2024 (UTC)[reply]
    I went looking for @Tamzin's comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P CaptainEek Edits Ho Cap'n! 18:18, 17 December 2024 (UTC)[reply]
    Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. isaacl (talk) 18:31, 17 December 2024 (UTC)[reply]
    @CaptainEek: You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the reason for the RRFAs. Regards, Newyorkbrad (talk) 00:14, 22 December 2024 (UTC)[reply]
    If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. CMD (talk) 01:05, 22 December 2024 (UTC)[reply]
    @CaptainEek I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. WormTT(talk) 10:28, 23 December 2024 (UTC)[reply]
    @Worm That Turned my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here. I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. CaptainEek Edits Ho Cap'n! 16:59, 23 December 2024 (UTC)[reply]
    I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) isaacl (talk) 05:19, 24 December 2024 (UTC)[reply]
    I really don't think this is the reason behind WTT's and HF's reconfirmation RFAs. I don't think their RFAs had much utility and could have been avoided, but I don't doubt for a second that their motivations were anything other than trying to provide transparency and accountability for the community. BugGhost 🦗👻 12:04, 23 December 2024 (UTC)[reply]
  • I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". Alpha3031 (tc) 12:40, 19 December 2024 (UTC)[reply]

Audio-video guidance


Hi there,

Per the post I made a few weeks ago regarding use of video for illustrative purposes, I think that MOS:Images might be expanded to make mention of audio-video content, as most of the same principles apply (eg aesthetics, quality, relevance, placement). There are some additional concerns: for example, if audio or video renders a primary source, eg is a recording of PD music such as Bach or similar, or is a reading of a PD text, then there might be some source validation requirements (ie, the music or text should match the original within sensible boundaries; eg, Mozart or Bach pieces may not be easily replicated with original instrumentation, or at least this should not be a requirement).

So one option would be for a simple statement at MOS:Images that these guidelines normally apply to AV, or separate guidance for AV that explains that MOS:Images contains guidance that generally applies to AV.

Is the correct process to raise an RFC? And is that done at MOS:Images, or WP:MOS, or here, or where? Jim Killock (talk) 19:38, 16 December 2024 (UTC)[reply]

I've posted a longer request for help explaining the gap at MOS talk. It seems an RFC may not be needed but any advice would very much be appreciated. Jim Killock (talk) 20:28, 16 December 2024 (UTC)[reply]
I've started drafting here. Jim Killock (talk) 22:50, 16 December 2024 (UTC)[reply]

Policy proposal: Establishment of research groups to edit articles


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


In order to have more reliable and unbiased articles and make Wikipedia fully reliable, I believe it necessary for articles and topics to be created and handled by bodies of editors called "Research groups", focusing on researching about topics and articles to ensure Wikipedia articles are as accurate, neutral, and unbiased as possible.

I also propose that editing articles will be reserved to their respective research groups and creating articles about a topic can only be done by said groups, with non-members being able to propose changes and articles via RFCs and whatnot. To join a research group, one must complete thorough training in the following areas:

  • Bias reduction and neutrality training.
  • Finding reliable sources and comparing sources.
  • Professional research.
  • Lessons about the topic.
  • Wikipedia policy.
  • Any other applicable areas

This policy would also reduce vandalism and guideline violations across Wikipedia, making situations easier to handle.

Since Wikipedia is a widely-used source, it is time to move away from the current decentralized approach where just about anyone (no matter how inexperienced and biased) can edit, which has resulted in article bias, vandalism, unreliable sources, poor editing, sockpuppet accounts, edit wars, controversies, and a host of other problems.

A Research-Group-based editing Wikipedia will be far more stable and solve many of our persistent problems. Cnscrptr (talk) 15:28, 17 December 2024 (UTC)[reply]

So something more like Citizendium? Schazjmd (talk) 16:11, 17 December 2024 (UTC)[reply]
So basically we would deprecate half our policies and guidelines? Honestly trying to be nice but this is a terrible idea Horse Eye's Back (talk) 16:14, 17 December 2024 (UTC)[reply]
Your proposal describes something that would not be Wikipedia at all (remember, we are the free encyclopaedia that anyone can edit), so is better suited for a fork. Usually proposals like yours (with editing only permitted to small numbers of "trusted" people) only work for small projects, if at all (I expect you would have to pay people to consider going through the "research group" training; it certainly does not sound attractive at all to me as a volunteer). You will need tens of thousands of editors to have any chance at not becoming outdated immediately. —Kusma (talk) 16:56, 17 December 2024 (UTC)[reply]
it is time to move away from the current decentralized approach where just about anyone [...] can edit - this proposal is dead on arrival - I support a SNOW close. BugGhost 🦗👻 17:30, 17 December 2024 (UTC)[reply]
Congratulations! We're completely rejecting Wikipedia's open content, its editors' direct contributions (henceforth to be filtered through a new Wikipedia Politburo), with the basic principles of Wikipedia's founders soundly rejected. "Research group training" says it all. Mason.Jones (talk) 18:15, 17 December 2024 (UTC)[reply]
Respectfully, I think this is a horrible idea. The decentralized model is what the entire site was built upon. It is even one of the first things advertised on the Main Page! If we got rid of it, the site wouldn't truly be Wikipedia anymore. QuicoleJR (talk) 18:20, 17 December 2024 (UTC)[reply]
I mean, I think Wikipedia is big enough to drop the "free encyclopaedia" bit and attempt a nupedia-like approach. However, I digress and have dropped that clause in my proposal. Cnscrptr (talk) 20:06, 17 December 2024 (UTC)[reply]
Few modifications of my proposal based on feedback:
  • Keep the decentralized model.
  • Drop the extensive training clause. No extensive training will be required.
  • Individuals are expected to learn about the topic before editing.
  • It will be necessary to join research groups (i.e., groups made of editors collaborating on the topic) to better learn about the topic and better contribute to it.
  • Editing of various articles, including election articles, will require an extended-confirmed status.
Cnscrptr (talk) 20:04, 17 December 2024 (UTC)[reply]
So these groups are basically Wikiprojects but more restrictive? Tarlby (t) (c) 20:17, 17 December 2024 (UTC)[reply]

Modified proposal: Research panels between editors and greater article protections


I got your feedback on this policy proposal and decided to overhaul it to maintain the free nature of Wikipedia and strong editor collaboration while ensuring accuracy and reliability.

In order for Wikipedia's articles to be more reliable and less biased and to make Wikipedia a more reliable site, I believe that there should be the mandatory creation of Research panels, topic collaborations between editors focused on gathering information, comparing sources, establishing editing policies and guidelines in accordance with general Wikipedia rules, and ensuring adherence to editing guidelines. Formation of research panels will be necessary in every topic.

  • The editing model will remain decentralized. However, Research panels will determine policies based on consensus for their respective topics.
  • Individuals can join a research group freely. However, individuals are expected to follow the policies and protocols set forth by the research group.
  • Individuals are expected to learn about the topic before editing.
  • It will be necessary to join research panels to better learn about the topic and better contribute to it.
  • Various controversial articles, including election articles, will receive special protection or extended-confirmed protection upon request from the respective Research panel.

Research Panels will allow people to better understand the topic while integrating cooperation efforts to find the most accurate information and create the best possible articles.

--Cnscrptr (talk) 20:36, 17 December 2024 (UTC)[reply]

This still feels like Wikiprojects with more rules. Tarlby (t) (c) 20:39, 17 December 2024 (UTC)[reply]
Ah, then we can use that and implement it more strongly across every topic. We do need some sort of organization, though. Cnscrptr (talk) 20:41, 17 December 2024 (UTC)[reply]
This modified proposal is sloppy at worst and I doubt it will get any approval. However, I think y'all should take whatever good ideas are there (if any) and also create spaces or something like that for greater learning and collaboration among editors to resolve conflicts and biases. Cnscrptr (talk) 20:39, 17 December 2024 (UTC)[reply]
"Individuals are expected to learn about topic before editing"+"It will be necessary to join research panels to better learn about the topic" seems to add up to "Users are expected to be members of research panels before editing topics in that realm."
So if I look at an article about "semiotics" (about which I know nothing) I am not allowed to correct, say, some MOS:LQ failures, some incorrect header capitalization, and a misspelling of Charles Schulz's last name until I train myself in semiotics and join the Official Wikipedia Semiotics Fun Team? -- Nat Gertler (talk) 20:48, 17 December 2024 (UTC)[reply]
The whole training clause has been dropped and you could freely join the panels, so you would be allowed to do all that. Cnscrptr (talk) 20:50, 17 December 2024 (UTC)[reply]
Also, the whole editing thing only applies to major edits, not minor ones.
Either way, this isn't my best work. The first proposal was essentially nupedia revivalism and the modification is a sloppy attempt at salvaging it based on the original essence of investigating about a topic further to put out the most accurate information (as is necessary in the Proto-Sinaitic script article) Cnscrptr (talk) 20:53, 17 December 2024 (UTC)[reply]
I understand that I would be allowed to jump through whatever hoops you erect in order to be able to correct misspellings, but that seems an effective way to discourage me from improving the encyclopedia and getting it in line with general standards. (I am also unconvinced that this will reduce bias; it would seem to help solidify a smaller group of editors on various topics who are likely to be driven, quite possibly by bias, to go through the effort which you're adding.) -- Nat Gertler (talk) 20:54, 17 December 2024 (UTC)[reply]
(Note, that was written before you made the "minor edit" change.) -- Nat Gertler (talk) 20:55, 17 December 2024 (UTC)[reply]
Regardless, I am dropping my proposal because it is poorly written and will probably not be helpful at all. Cnscrptr (talk) 21:21, 17 December 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Is the "above 8000 words = split" an absolute rule?

I am referring to this chart found on WP:SIZE:

Word count       What to do
> 15,000 words   Almost certainly should be divided or trimmed.
> 9,000 words    Probably should be divided or trimmed, though the scope of a topic can sometimes justify the added reading material.
> 8,000 words    May need to be divided or trimmed; likelihood goes up with size.
< 6,000 words    Length alone does not justify division or trimming.
< 150 words      If an article or list has remained this size for over two months, consider merging it with a related article. Alternatively, the article could be expanded; see Wikipedia:Stub.
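The chart's thresholds can be read as a simple lookup. As a rough sketch only (a hypothetical helper, not part of any MediaWiki tooling; the chart itself gives no guidance between 6,000 and 8,000 words):

```python
def size_advice(word_count: int) -> str:
    """Map a readable-prose word count to the WP:SIZE chart's advice.

    Hypothetical illustration of the chart above; counts between
    6,000 and 8,000 words fall in the chart's unstated middle ground.
    """
    if word_count > 15_000:
        return "almost certainly should be divided or trimmed"
    if word_count > 9_000:
        return "probably should be divided or trimmed"
    if word_count > 8_000:
        return "may need to be divided or trimmed"
    if word_count < 150:
        return "consider merging or expanding (see Wikipedia:Stub)"
    if word_count < 6_000:
        return "length alone does not justify division or trimming"
    return "editor's discretion"
```

Note the order of the checks: the under-150 case must be tested before the under-6,000 case, since both ranges overlap.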

I have seen a few instances where an editor will raise the issue that an article is too large at, say, 7,500 words or 8,100 words. We have multiple history pages (and medical/psychology pages) with well over 11,000 words, even some with over 16,000. Where does one draw the line? It seems like Wikipedia leaves it to the discretion of the editor after about 8,000 words. Plasticwonder (talk) 07:21, 18 December 2024 (UTC)[reply]

Looking at the table, it's obvious that "above 8000 words=Split" is not "an absolute rule". I promise you that if it were, that table would say something that sounded remarkably like "if the article is above 8,000 words, then it absolutely must be split".
Additionally, we have an official policy against absolute rules.
Where one draws the line is: In a place that makes sense for the topic of that specific article, having thoughtfully considered all the facts and circumstances that apply to that unique article. WhatamIdoing (talk) 07:41, 18 December 2024 (UTC)[reply]
There was a lengthy discussion at Wikipedia talk:Article size/Archive 6 about the size guidance, for the record. Splitting pages is a lot of work and not everyone thinks that spreading stuff over multiple pages is better for readers than having in one big page. Jo-Jo Eumerus (talk) 08:19, 18 December 2024 (UTC)[reply]
In addition to the above, what matters for the technical aspects of article size is not the number of words but the number of bytes. Word count can only ever be an approximation of that, as the length of the words used matters ("a" is 1 byte, "comprehensive" is 13), and the number and size of included media matter significantly more. Thryduulf (talk) 09:20, 18 December 2024 (UTC)[reply]
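The byte-versus-word distinction in the comment above is easy to check directly; a quick sketch using Python's built-in UTF-8 encoding (the example strings are hypothetical):

```python
# Words of different lengths contribute very different amounts to the
# technical page size, so word count only approximates byte count.
for word in ("a", "comprehensive"):
    print(word, len(word.encode("utf-8")), "bytes")
# a 1 bytes
# comprehensive 13 bytes

sentence = "a comprehensive overview"
print(len(sentence.split()), "words,", len(sentence.encode("utf-8")), "bytes")
# 3 words, 24 bytes
```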
I think WP:PEIS is a bigger technical challenge for long articles. The more templates, and the more complicated templates, the more likely you are to need to split for technical reasons. List of common misconceptions needs a split in part due to PEIS reasons. WhatamIdoing (talk) 18:43, 18 December 2024 (UTC)[reply]
What's more, there's nothing even in the excerpt here that would purport an absolute guideline. Remsense ‥  09:44, 18 December 2024 (UTC)[reply]
It isn't an absolute rule, but usually an article having an extremely high word count is evidence of a bigger problem with WP:SUMMARYSTYLE -- that it's too dense or detailed for a reader to use it as a first-port-of-call summary. As such, usually, it's a wise move to create daughter articles for the detailed material, and trim it down to its essentials in the main article; this improves the readability of the main article and allows interested readers to follow up into the nitty-gritty. As Jo-Jo Eumerus rightly says above, though, there's not really any such thing as an absolute rule in this place. UndercoverClassicist T·C 09:59, 18 December 2024 (UTC)[reply]
What we now know is that many readers are looking for specific information, with few reading from top to bottom, but the search engines send them to the mother article even when a more specific daughter article exists. So the first port of call needs to be the most highly detailed. The advice in WP:SUMMARYSTYLE is therefore considered well intentioned but obsolete; stripping the mother article and pushing information down to the daughter articles defeats our whole purpose in providing information. Hawkeye7 (discuss) 11:31, 22 December 2024 (UTC)[reply]
When you say “we know…”, “is considered” and similar, are you pointing to specific previous discussions, RfCs etc on this matter? “In the wild”, as it were, I still see these size limits regularly invoked, even if the conversation rarely ends at them. UndercoverClassicist T·C 09:17, 23 December 2024 (UTC)[reply]
Rather than draw a line, I'd rather just remove that chart. Can't imagine why a suite of concrete word counts and procedures would ever be very helpful. — Rhododendrites talk \\ 16:29, 18 December 2024 (UTC)[reply]
It absolutely makes sense to say at what page sizes that editors should start considering other options, as well as where splitting can be absolutely unnecessary. Nothing wrong with the table as long as it's clear those aren't hard or fast rules. Masem (t) 16:41, 18 December 2024 (UTC)[reply]
Agreed, I find it helpful because it helps me remember what is generally too long for mobile users (I understand that mobile is generally a blindspot for us as editors because the vast majority of us don't edit on mobile but most of the readers are actually on mobile) Horse Eye's Back (talk) 16:49, 18 December 2024 (UTC)[reply]
I also believe that the chart is helpful. WhatamIdoing (talk) 17:29, 18 December 2024 (UTC)[reply]
There don't seem to be any absolute rules laid out there... Even "Almost certainly" is qualified, not absolute. Horse Eye's Back (talk) 16:38, 18 December 2024 (UTC)[reply]
  • The optimal article size varies quite a lot, actually. Key things we need to consider include:
    The likely readership. Someone who's looking up History of Libya under Muammar Gaddafi probably has time to read something long and thoughtful. Someone who's looking up emergency surgery might need basic facts, in simple words, very fast.
    The cognitive load associated with the topic. Star Wars is (very) long but easy to understand; Fourier inversion theorem is much shorter, but I bet it takes you longer to read, unless you have unusual expertise in mathematics.
This is not the kind of thing on which we can produce simplistic guidance.—S Marshall T/C 17:03, 18 December 2024 (UTC)[reply]
Wikipedia leaves it to the discretion of editors far far before 8,000 words. We have thousands of single sentence articles to attest to this. The average article is less than 700 words. CMD (talk) 17:15, 18 December 2024 (UTC)[reply]
The median article length is around 350 words, and the mean is 750.[7] About one in 75 articles has more than 6,000 words. WhatamIdoing (talk) 17:34, 18 December 2024 (UTC)[reply]
You'll have to take the specifics up with WP:WPSIZE, although that ballpark range sounds the same. CMD (talk) 18:45, 18 December 2024 (UTC)[reply]
No. Phil Bridger (talk) 18:52, 18 December 2024 (UTC)[reply]
  • I've always felt that the kB of readable prose was a better metric for page size (such as is produced by various page size gadgets). Turns out, bigger words take longer to read than shorter words :P Doing it just by wordcount encourages a certain verbosity. For me, my rule of thumb has always aimed to keep big articles under 100kb readable prose. But there's no hard and fast rule, and there shouldn't be. CaptainEek Edits Ho Cap'n! 23:08, 22 December 2024 (UTC)[reply]
    If I'm being honest, what might be the best metric is starting at the top and lightly hammering the Page Down key for a bit. If I groan before reaching the References section, it's too long. Remsense ‥  23:10, 22 December 2024 (UTC)[reply]
    For example, results were heavily discouraging for George Washington until recently; as of today I no longer despair at the article's uncaring girth—thanks Nikki et al.! Remsense ‥  23:18, 22 December 2024 (UTC)[reply]
    26,000 words is 1.0 tomats. Another way to look at that table is by saying that if it's more than half the length of a book, it's pushing past being "an article" and edging up towards being "a book".
    Or you can look at it in terms of how many minutes reading the whole thing would take. There's quite a bit of variation, but for easy math, 300 words per minute means that a 15,000-word-long article would take 50 minutes to read, which almost certainly exceeds the interest and attention span of most readers. WhatamIdoing (talk) 00:35, 23 December 2024 (UTC)[reply]
    I think the most fundamental scalar isn't quite reading time or even visual size, but structural complexity—for an online encyclopedia article, being overlong expresses itself in my increasing inability to navigate an article comfortably to read or locate what I want, or to understand the structure of the scope covered by it at a glance. Remsense ‥  00:38, 23 December 2024 (UTC)[reply]
    Come to think of it, one thing that makes an article feel longer than its word count is if its sections, media, and other landmarks have been laid out in a careless or unnatural way. Remsense ‥  00:48, 23 December 2024 (UTC)[reply]
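The 300-words-per-minute arithmetic a few comments up generalizes to a one-liner (a hypothetical helper; 300 wpm is just the easy-math figure used above, and real reading speeds vary quite a bit):

```python
def reading_minutes(words: int, wpm: int = 300) -> float:
    """Estimated minutes to read `words` of prose at `wpm` words per minute."""
    return words / wpm

# The 15,000-word example from the discussion above:
print(reading_minutes(15_000))  # → 50.0
```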
  • No. And this was rather a pointless, self-answering question in the first place, not something for a VP thread. The answer to the posed question of 'Is the "above 8000 words=Split" an absolute rule?' is obviously "no", both by observing actual WP community practice, and just by reading the table the OP quoted: > 8,000 words — May need to be divided or trimmed; likelihood goes up with size. Is anyone here actually confused into believing that A) "must" and "may" are synonymous, or B) that a guideline, to which reasonable exceptions sometimes apply, is somehow a legal-level policy that must be obeyed at all costs? In reality, there is never any hurry to split a large article, and doing it properly often involves a tremendous amount of work, involving both repair of citations (sometimes in great detail), and resummarizing the background context in the side article while also resummarizing the side-matter in WP:SUMMARY style within the main article (and doing them distinctly enough that the results are not obnoxiously repetitive if the reader moves between the articles). Doing a good job of this can take several days up to a month or longer of tightly focused work, depending on the detail level of the material, the number of citations, etc. It is not trivial, we're all volunteers here, and our readers are not going to keel over and die if they reach a detailed article that's a bit longer than they were expecting or would prefer. Ultimately, an article that is ginormous usually should be split, but there is no deadline, and it needs to be done properly (plus there are often conceptually different ways to go about it from a content-flow perspective, and that might require some consensus discussion).  — SMcCandlish ¢ 😼  01:22, 23 December 2024 (UTC)[reply]
    Ever since WAID reminded me of it, I've thought we should maybe link RFC 2119 somewhere as a lemma. Remsense ‥  01:36, 23 December 2024 (UTC)[reply]
    I think I linked it once in Wikipedia:Policies and guidelines, years ago, and someone objected. I didn't follow up to see whether the objecting editor is one of the handful who think that should is a more polite and/or IAR-compliant way to say must, but as that's a fairly uncommon POV among editors, it probably wasn't. WhatamIdoing (talk) 05:41, 23 December 2024 (UTC)[reply]
    The linked document pushes very hard on should: "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed" is not a low bar. It sounds much like must except when IAR. CMD (talk) 09:39, 23 December 2024 (UTC)[reply]

I want to propose Draft:Manual of Style/Israel- and Palestine-related articles as a new guideline with an RfC. I'm starting this thread in case any outsiders to this area want to provide input or have questions. For context, the goal of this manual of style is to get agreement on broad principles to make editing easier in this topic area. As an example, WP:PIA5 is dealing with inconsistent use of the word "massacre" specifically, which has caused much arguing over whether there is a double standard, so this guideline makes the standards we should be using explicit. Chess (talk) (please mention me on reply) 06:38, 21 December 2024 (UTC)[reply]

Are all four points currently included based on previous conversations, or are any novel to this proposal? On the broader framing, I wouldn't create the guideline solely talking about NPOV, it is helpful to have guidelines to help describe encyclopaedic style if nothing else. On the example of massacre, I'm not up to date with the recent or ongoing discussions, but I'm surprised there's no mention in the draft of WP:WTW, as this seems a classic case. CMD (talk) 07:00, 21 December 2024 (UTC)[reply]
@Chipmunkdavis: The settlements language comes from previous discussions/consensus, likewise with the West Bank naming conventions section. The categorization language comes from a suggestion on the talk page of the draft.
The "massacre" part is a novel part of the proposal. I would say that it seems like an obvious WP:WTW, but there are many RMs in the topic area in which editors use their own definition of the term "massacre" (like civilian death toll, mechanism of killing, see User:BilledMammal/ARBPIA_RM_statistics for a list of 24 RMs about the term "massacre"), and argue about whether or not the event meets that standard. I want to make it easier for editors to disengage by citing this guideline, instead of getting into lengthy arguments over points that don't contribute much to consensus (which is what happens now).
The reason the guideline is short is because I believe it will give it the best chance of passing right now. In the future, I'd like to get consensus to add more points to this guideline. I thought proposing a lengthy guideline upfront would create a scenario in which everyone at an RfC agrees a guideline would benefit the area, but individual editors have a small issue that prevents them from supporting this specific version. Chess (talk) (please mention me on reply) 07:50, 21 December 2024 (UTC)[reply]
At a quick look, it seems most pages in that statistics page were not moved to a title with "massacre"? It seems sensible to collate previous discussions together; if the massacre point is new (as new as something discussed 24 times can be?) there might be a need to confirm that one. CMD (talk) 08:20, 21 December 2024 (UTC)[reply]
One issue with the massacre bit is that should be part of an AT naming convention, not in the MOS. Likewise, appropriate categorizations usually aren't in the MOS. voorts (talk/contributions) 15:27, 21 December 2024 (UTC)[reply]

I (+other people) co-authored a draft article, but its submission was declined

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


supposedly due to: "This submission is not adequately supported by reliable sources". It seems to me that there are plenty of good references now. I want to understand what is missing and how it can be improved. A side note: there are many published articles in English Wikipedia which are of much lower quality than this draft: https://en.wikipedia.org/wiki/Draft:Maternity_capital Walter Tau (talk) 15:19, 22 December 2024 (UTC)[reply]

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Is it time to write an official policy on usage of AI?

I've been following WP:ANI for a few weeks now (because I like gossip apparently) and I've noticed several incidents involving persons suspected of using AI for editing. Is it time to create an official policy to point to? WP:AI exists, but it's an informational article rather than a policy page. (I don't actually know which page to put this on so it's going here.) guninvalid (talk) 02:25, 25 December 2024 (UTC)[reply]

Wikipedia:Village pump (policy)#LLM/chatbot comments in discussions is ongoing now for those interested. CMD (talk) 02:33, 25 December 2024 (UTC)[reply]
See also this RfC. If this is an issue you're interested in, you might want to try getting involved with WikiProject AI Cleanup. voorts (talk/contributions) 03:22, 25 December 2024 (UTC)[reply]