Wikipedia:Village pump (policy)
- If you want to propose something new that is not a policy or guideline, use Village pump (proposals).
- If you have a question about how to apply an existing policy or guideline, try one of the many Wikipedia:Noticeboards.
- If you want to ask what the policy is on something, try the Help desk or the Teahouse.
- This is not the place to resolve disputes over how a policy should be implemented. Please see Wikipedia:Dispute resolution for how to proceed in such cases.
- If you want to propose a new or amended speedy deletion criterion, use Wikipedia talk:Criteria for speedy deletion.
Please see this FAQ page for a list of frequently rejected or ignored proposals. Discussions are automatically archived after remaining inactive for two weeks.
Should non-free images be allowed in search results?
You are invited to join the discussion at Wikipedia talk:Non-free content § Non-free images in search results (redux). {{u|Sdkb}} talk 19:46, 16 January 2023 (UTC)
Wikipedia response to chatbot-generated content
ChatGPT has gone viral and is causing quite a storm on the Web. Why should we care? Because, rather than just providing links to an article for you to read about what you are interested in, it writes an essay about it on the spot. It is so sophisticated that, if you ask it to write an article on any subject, even in the style of Wikipedia, it will! That can then be copied and pasted into Wikipedia -- not necessarily a problem if it is carefully fact checked and edited first (ChatGPT can't tell fact from fiction), but, an editor may post it without doing so. It can even be used to answer questions about Wikipedia as if it is a Wikipedian (see the first link provided directly below). — The Transhumanist 06:52, 9 December 2022 (UTC)
- Experiment: See it being used to generate answers to hypothetical questions on Wikipedia:Teahouse
- ChatGPT's Writing Capabilities Stun, but Humans Are Still Essential (for Now) - CNET
- Five Chats to Help You Understand ChatGPT - The Atlantic
- The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive
- ChatGPT: Why Everyone's Obsessed With This Mind-Blowing AI Chatbot - CNET
- ChatGPT | Discover AI use cases
- 'Scary' ChatGPT could render Google obsolete in two years
- ChatGPT, an AI chatbot, has gone viral. Some say it’s better than Google; others worry it's problematic.
- What ChatGPT AI is and how you can use it - The Washington Post
- What is ChatGPT and why does it matter? Here's what you need to know | ZDNET
- Based on how rapidly chatbots have improved over time, it will become more and more difficult to tell if an article was written by a computer or not. The sheer volume at which computer programs could create new accounts and produce Wikipedia content, and the inevitable growing number of human editors copying and pasting chatbot output into Wikipedia, will at some point make it impossible for Wikipedia's human volunteers to keep up with that traffic and apply quality control to the material in a reasonable time frame -- the backlog of unchecked material will simply get longer and longer. The only recourse will be for computer programs to do it -- either computer programs to process articles to filter out or correct any crap, or training the chatbots themselves not to produce crap in the first place. Rather than build computer algorithms to detect computer-written articles and passages, it would be more productive for them to do style checks, fact checks, and citation checks, along with appropriate corrections or removals. While Wikipedia-friendly AI could come from within Wikipedia, it may be faster to bring influence to bear upon the developers of the chatbots being used to generate Wikipedia content, and upon the chatbots themselves. Wikipedia already has a chair at the table, because Wikipedia comprises a significant component of chatbot corpora, and so, their developers should be inclined to listen to the Wikipedia community's concerns -- either directly, or indirectly through news coverage. The Wikipedia community should make its voice heard on the matter of chatbots writing Wikipedia material according to Wikipedia's style and behavior guidelines. For example, verifiability still applies, and so when chatbots are asked by their users to "write an article in the style of Wikipedia" the chatbots should comply according to Wikipedia's policies, including those on verifiability and providing reliable sources. Not doing so should be met with the filing of bug reports, feedback, and commentary. And, as chatbots learn as they go, Wikipedians who use them can ask them to follow Wikipedia guidelines, and we can urge our fellow editors to request this of chatbots as well. — The Transhumanist 06:52, 9 December 2022 (UTC)
- Chatbots should be following Wikipedia's lead for all of their output. At this time, most chatbot answers and essays are not referenced with reliable sources. And they should be, for the same reason that Wikipedia articles should be. That's something that can be requested of chatbots directly, through queries, and of developers, through their contact channels and social media. I hope this suggestion helps. — The Transhumanist 06:52, 9 December 2022 (UTC)
- The simple answer is that our existing policies ought to already cover this (mostly.) Sourcing is still required for anything that is challenged or likely to be challenged, which prevents people from just blindly dumping AI generated text into Wikipedia; and an AI may violate copyright depending on how it was trained (and whether it was overtrained.) There are also unsettled copyright concerns related to AI training sets, so I would generally think that, ideally, editors shouldn't be dumping AI generated text into our articles even after performing due diligence to make sure it's not a copyvio and finding proper sources. But since those concerns are unsettled and speculative, I also don't think it's worth worrying about too much right now. The key point is that we should emphasize our sourcing requirements and be more diligent for clear-cut copyvios, which we already have systems in place to handle, since it is likely that these tools will result in people adding lots of unsourced and possibly-copyright-violating text. (I do wish our RFCs on mass article creation had reached a stronger agreement on sourcing requirements for new articles, which would deter excessive copy-pastes of AI generated text - perhaps that is something we might want to revisit in the near future, if we start seeing significant amounts of new unsourced articles created using what is plainly AI-generated text.) --Aquillion (talk) 07:55, 9 December 2022 (UTC)
- You mean, don't prepare in advance for a potential increase in volume, just wait until it hits? At that time, will merely adjusting policies stem the tide? It's in the slow trickle phase now, but that could potentially become a torrential flood very rapidly, just as ChatGPT's user base grew to over a million in 5 days. My main concern above was about a potential volume of AI-generated content that went beyond the scale of what the editor community could manually process. You didn't address that contingency. What could the community do to prepare for it, just in case it does happen? What are the available options? — The Transhumanist 11:28, 9 December 2022 (UTC)
- I don't think there's much we reasonably can do to prepare, at least not without serious risk of causing other problems; AI-generated text won't be drastically different than other sorts of text, aside from the risk of being uncited or a copyvio (which we have existing processes in place to handle.) It's worth raising awareness of the issue so editors can spot the signs of someone using large amounts of it, but I think our best bet if we're going to "prepare" is to focus on the systems we already have, which is unlikely to do any harm either way, or perhaps to codify slightly more strict sourcing requirements in the way I described (which I think is a good thing anyway, but would at least serve to slow down the worst sorts of misuses of AI generated text.) Ultimately the most serious problems are if editors start adding large amounts of text that violates copyright or which are uncited and likely to be challenged, but we have existing procedures for those, we just need to prepare for the possibility that we may need to become a bit more aggressive about enforcing them. Wikipedia is in a slightly better position than some other websites facing AI-generated-text problems, because our sourcing requirements will at least make it fairly obvious if someone tries to dump large amounts of AI-generated text onto the wiki without making any effort to verify it. --Aquillion (talk) 12:47, 9 December 2022 (UTC)
- I suppose we could take the Stack Exchange approach and just say flatly "no, this isn't allowed" - in their case it is explicitly a temporary measure until we have a better understanding of the issues. I think in general our policies/community norms would come down hard on anyone trying to get a language model to generate articles (hard to see why that would be OK and machine-translation isn't), but maybe an explicit statement would be a way to go. Andrew Gray (talk) 18:32, 9 December 2022 (UTC)
- @Aquillion: While a large number of posts by individual editors may become a problem, the main concern I presented above was "the inevitable growing number of human editors copying and pasting chatbot output into Wikipedia, will at some point make it impossible for Wikipedia's human volunteers to keep up with that traffic and apply quality control to the material in a reasonable time frame -- the backlog of unchecked material will simply get longer and longer."
- That is, people making the normal rate of content contributions, but using large language models (chatbots) to do so.
- Watching for breakout editors who use LLMs to create a large number of new articles over a short period of time would not suffice in such a scenario. Editors who add LLM-generated content to many existing articles also will not be spotted by looking for mass page creations. And since writing will become easier by letting "chatbots" do it for you, content submissions by users employing such tools may likely become longer on average.
- The point is, that a high enough volume of such content contributions would go beyond the capacity of Wikipedia's editors to check and correct.
- The two solutions offered were 1) build software to analyze and process such content, and 2) work with chatbot developers so that inappropriate content is not composed by LLMs in the first place.
- Just relying on new or existing policies to handle LLM-generated content will be insufficient if and when the volume of it passes the threshold of what manual editors applying Wikipedia policy can deal with.
- Passing that threshold may come soon, or it may take years -- the main question is "will Wikipedia prepare for that threshold-passing event?" Based on the responses above and below, the answer, and implicit recommendation from this forum, currently appears to be "no": No developing relevant software, and no working with chatbot developers to respond to the potential passing of the LLM threshold.
- Thus, any solution will need to come from other departments or from continued or future discussion in this department, or from chatbot developers focusing on the problem due to other influences.
- Another helpful approach might be the creation of a policy or instructions on how to use LLMs/chatbots effectively, and post links to that page in enough places that all editors will notice. Though, I doubt that would prevent the problems of an LLM-threshold-passing event, and wouldn't address the need for proofreading or processing LLM-generated contributions. — The Transhumanist 02:18, 20 December 2022 (UTC)
- I think in the end, @Aquillion's response from earlier in the discussion is the one I agree with the most. If a ChatGPT-written or AI-written part of Wikipedia is accurate, nice! If something can't be proven with an RS, though, revert and warn. It can be a useful tool. Overall, think "Business as usual", just with some contributors getting assistance. It's also mostly implausible to tell apart ChatGPT/AI-generated content except in obvious or disclosed cases, and it's going to be impossible to enforce effectively no matter what position we take. If there is any legal issue with it, let WMF and ArbCom handle it. InvadingInvader (userpage, talk) 07:37, 6 February 2023 (UTC)
- @InvadingInvader: I agree that a complete strategy of response to LLMs is beyond the scope of the Village Pump (policy), because it is going to take bots and/or diplomacy with developers, to stem any potential tidal wave of LLM contributions that would go beyond the capacity of WP's human editors to deal with. And those are the province of the bot department and the WMF, respectively. Considering that ChatGPT has gone viral, reaching 100 million users in just 2 months, eliciting a frenzied reaction from Google to bring rival products to market, thereby starting an "AI arms race", a tidal wave of chatbot-generated content in Wikipedia is plausible if not inevitable. Brace for impact. — The Transhumanist 11:18, 13 February 2023 (UTC)
What can chatbots do?
You seem to be somewhat panicking over a scenario which isn't really supported by any evidence. While I see some Teahouse responses, could you give us one or two examples of "It is so sophisticated that, if you ask it to write an article on any subject, even in the style of Wikipedia, it will!" articles? The teahouse examples give the impression that, if it ever becomes a problem, some edit filters can easily spot these. You would in any case need "someone" to post this "potential volume of AI-generated content that went beyond the scale of what the editor community could manually process" you predict. This seems rather unlikely, at least on enwiki. Fram (talk) 11:45, 9 December 2022 (UTC)
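As a rough illustration of what "edit filters can easily spot" could mean in practice, here is a minimal sketch in Python rather than actual AbuseFilter syntax. The phrase list is an assumption for illustration only (one entry echoes the "regards, Teahouse" sign-off pattern mentioned elsewhere in this discussion), not a vetted fingerprint of chatbot output:

```python
import re

# Illustrative phrases that sometimes survive a copy-paste from a chatbot session.
# This list is a guess for demonstration purposes, not a tested filter.
CHATBOT_TELLS = [
    r"\bas an AI language model\b",
    r"\bI (?:do not|don't) have access to (?:the internet|real-time information)\b",
    r"\bregards,\s*Teahouse\b",
]

def looks_like_chatbot_paste(added_text: str) -> bool:
    """Return True if the added text contains any of the boilerplate phrases above."""
    return any(re.search(pattern, added_text, re.IGNORECASE) for pattern in CHATBOT_TELLS)

print(looks_like_chatbot_paste(
    "I'm sorry, but as an AI language model I do not have access to the internet."
))  # True
```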
- Just try it. If your eyes don't pop out, I'll be surprised. Right now, during its "research preview", it is free. Keep in mind that it draws heavily on Wikipedia, which is included in its corpus, so, for this test run, it would be best to choose a person or subject that is not yet covered in this encyclopedia, and ask ChatGPT to write about that. — The Transhumanist 14:41, 9 December 2022 (UTC)
- Pinging @Fram, JPxG, EpicPupper, and 0xDeadbeef: — The Transhumanist 14:50, 9 December 2022 (UTC)
- I'm not giving out my phone number to some random website, thanks. Why it isn't sufficient that they have my email which was then verified is not clear... Fram (talk) 14:55, 9 December 2022 (UTC)
- LOL I had the same exact response. My phone number? F no. Levivich (talk) 16:55, 9 December 2022 (UTC)
- I went to try this out, and it asked me for my phone number. I thought about making one up like 0118 999 881 99 9119 725 3, but figured it would probably use it for two factor authentication, so that's no good. Ritchie333 (talk) (cont) 18:31, 9 December 2022 (UTC)
- @Fram, Levivich, and Ritchie333: I wasn't that bright. I gave it a random phone number. It rejected it as a land line. Then I gave it another, and it rejected that as a VOIP number. Finally, I gave it a random mobile phone number, and it sent some complete stranger the verification code. Oops. — The Transhumanist 01:32, 11 December 2022 (UTC)
- Thanks for the ping. I'd imagine the biggest problem would be people using the AI to create hoaxes. Like the Zhemao hoaxes but with less effort. 0xDeadbeef→∞ (talk to me) 15:13, 9 December 2022 (UTC)
- So. I tried it yesterday. I'm not sure how heavily it draws on Wikipedia's corpus for its knowledge.
- First, I asked it to tell me about Hammerton Killick. I know there is a Wikipedia article about Hammerton Killick, because I wrote 90% of it. It did not know who Hammerton Killick was, and informed me that it does not have access to the internet, or to Wikipedia.
- Next, I asked it to write me an article in the style of Wikipedia. I did not specify a subject. It wrote about Athens. The result was ok. Heavily focused on the ancient city and on art and architecture. Short. Kind of read like an encyclopedia article.
- Next, I asked it to write me an article about alcoholism in the style of Wikipedia. The result was very interesting. I did not think it read like a Wikipedia article, it was more like a brochure that would be distributed in a doctor's office or something. I asked it what about that essay it thought was like Wikipedia, and it said what it wrote was
- neutral
- factual
- organized
- Next, for fun, I asked it if it could write a recipe. It proceeded to give me a recipe for chocolate chip cookies. It looked like it should work. I e-mailed it to myself, and today I made them, not expecting much. I was pleasantly surprised. They were delicious. The only problems with what it wrote was that it did not have me cook them long enough (it said to bake for 8-10 minutes, and it took closer to 13 minutes for them to be done), and it drastically underestimated how many cookies the recipe should make (it said I'd get 2 dozen cookies, and I ended up with 5 dozen). I was shocked that it actually was edible.
- I asked it to write a legal motion asking the court for an in-person hearing. I did not give it any other details. For not having any other details, the result was not bad. Westlaw has started offering a service that I think might draw on this type of technology, it helps you write pleadings.
- Last I asked it to write a 100 word short story about a mouse, genera: fantasy. The result was decent. If I came up with it on my own I wouldn't be ashamed to enter it into a contest like the ones NYC Midnight runs.
- I was more impressed with the recipe and the short story than the Wikipedia style articles. I can see some use for it in, say, copyediting as JPxG did below; or asking it for suggestions on language rephrase if you are trying to reach a word limit. I think it could have its uses. But I do think the Wikipedia community should be looking to craft policies and guidelines around what is and is not acceptable use of such tools. ~ ONUnicorn(Talk|Contribs)problem solving 06:26, 11 December 2022 (UTC)
- @ONUnicorn, Fram, Levivich, Ritchie333, 0xDeadbeef, JPxG, and EpicPupper: Interesting. The chatbot sent you instructions (in this case, a recipe), and you followed them. You followed the commands of a computer. If it gave you an address and instructed you to go there and pick up a brown paper package, would you? The implications of this type of interaction are huge and foreboding. This issue must have a name, and I would like to look it up, but I can't seem to find it. Though, when I typed in "computers in charge" I got the following 2 relevant results:
- Then I typed in "computers telling people what to do", it came up with this:
- Ouch. I imagine, that anytime you ask a chatbot/computer "How do you do such and such?" it will reply with a set of instructions. And the chatbot's disclaimer in its terms of service will read "follow any instructions provided at your own risk". If you know or come across the name of the topic that covers computers telling humans what to do, please let me know what it is. — The Transhumanist 11:04, 2 January 2023 (UTC)
- @The Transhumanist: I think the term you're looking for is automation bias – "the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct." Interestingly, though, the 2002 Überlingen mid-air collision you mention is an instance where the computer got it right. An aircraft was on a collision course, and its crew were receiving contradictory instructions; the onboard collision-avoidance system was telling them to climb, while the human air traffic controller was telling them to descend. The pilots decided to trust the human and ignore the computer. Seventy-one deaths. Sojourner in the earth (talk) 14:00, 2 January 2023 (UTC)
- @Sojourner: I like it: automation bias, like trusting GPS over a passenger's navigation suggestions or requests. Related, but not the term I'm looking for: which is the most used name for the subject of computers ordering humans around. When a computer gives you a command or a set of steps to follow, what is that called?
- @Sojourner: Thanks for the comments on the collision. It was a bit more complex of a system error than that. While the air collision alerts on the 2 aircraft worked, there was a problem on the ground. The system the air controller was using would normally alert the user of an impending air collision in time for him to do something about it, meaning "no news is good news" -- not seeing the alert on screen, the air traffic controller (ATC) would naturally assume everything was okay -- but the ground-based optical collision warning system was turned off, creating a deadly ambiguity he was unaware of. So, there was some unintended computer-related miscommunication going on. "No alert" didn't mean what he thought it meant. My guess is that pilots would normally believe that the ATCs have better (radar-based) control systems than those onboard, and this probably affected the decision of one of the pilots to go with the ATC's order over that of his onboard computer -- he wouldn't have known that the ground-based system was disabled, either. Hopefully they fixed that bug, 'cuz it was a doozy. The poor ATC got hunted down over it, while the hunter became a national hero. Crazy world, about to get even crazier. — The Transhumanist 15:13, 2 January 2023 (UTC)
- When it comes to aviation, the more specific common term seems to be “automation dependency”. However I’m not aware of a term for what you’re looking for. DFlhb (talk) 23:47, 15 February 2023 (UTC)
- Hi Fram. The Teahouse responses that you have seen were probably created by me. You probably noticed the pattern "regards, Teahouse" in those responses. ChatGPT's strength is writing about technology, so I have told it to generate a section about Rust for Linux.
AI generated
Rust for Linux is a series of patches to the Linux kernel that adds the Rust programming language as a second language for writing kernel components. The project was started in 2017 by a group of volunteers who were interested in exploring the use of Rust for low-level systems programming. The initial patches were developed in collaboration with the Linux kernel maintainers and other members of the open-source community. They focused on integrating Rust's memory safety and concurrent programming features into the kernel, and on providing compatibility with the existing C codebase. In 2018, the patches were merged into the mainline Linux kernel, making Rust the first programming language other than C to be officially supported for writing kernel components. This marked a significant milestone for the Rust community, as it demonstrated the language's ability to handle the demanding requirements of kernel-level programming. Since its inclusion in the Linux kernel, the Rust for Linux project has continued to evolve, with ongoing work on improving the compatibility and performance of Rust in the kernel. It has also attracted the attention of other operating system developers, who are interested in exploring the potential benefits of using Rust for kernel-level programming. |
- It does have the encyclopedic tone to me. 0xDeadbeef→∞ (talk to me) 15:02, 9 December 2022 (UTC)
- In addition, it can be easily used to create fake references that would be hard to verify. For example, my prompt "Please output the Wikitext markup for the book reference with page numbers for the third paragraph, referencing the book Linux kernel development" resulted in this.[1] 0xDeadbeef→∞ (talk to me) 15:08, 9 December 2022 (UTC)
- @0xDeadbeef: Could ChatGPT's generated text or fake references be easily spotted by edit filters? What about spotting the output of future chatbots, like GPT-4? — The Transhumanist 15:23, 9 December 2022 (UTC)
- Thanks, OxDeadbeef. In this case, it would be relatively easy to spot the issues if it hadn't any refs, or with the added ref which predates the Rust for Linux thing by years; but of course it won't always be that easy. Fram (talk) 15:27, 9 December 2022 (UTC)
- It has an encyclopedic tone because it's just regurgitating the Wikipedia article. Are there any examples for topics that we don't already have an article about, where Wikipedia is not the source? Levivich (talk) 17:33, 9 December 2022 (UTC)
- Similar was discussed previously in the section/item "Galactica and RS".
- As was stated above by Aquillion, there is no qualitative difference in the treatment of human vs. non-human generated content. The same policies should apply to both. The problem seems to be the hypothesized/expected future mass creation of articles by non-human contributors. This appears to be a problem now, involving human contributors. Recent RFCs about the issue sponsored by ArbCom have accomplished nothing. Until a consistent restrictive policy relating to mass article creation (by any type of contributor) is accepted, this issue is moot imo.
- Considering Wikipedia's limited resources, the policy would necessarily be restrictive, hopefully focusing on quality vs. quantity. Again, almost all restrictions proposed in the ArbCom-sponsored RFCs were rejected. This may be an indicator of how well such a policy will be received. 50.75.226.250 (talk) 15:43, 9 December 2022 (UTC)
- Thanks for the policy politics clarification. The increase in the rate of content creation could have multiple aspects, for example, the number of articles created per user, and increased length of articles. The main feature of ChatGPT is that it is fast -- much faster than a human article writer. Its successors will be even faster. Users could use ChatGPT, and its successors (and their competitors), to be prolific without triggering the mass page creation rule: editors could each use it to write an article per day, maybe even two, or up to four or five stubs.
- In the hands of responsible editors, ChatGPT would be a great productivity booster. Since August of 2022, JPxG and EpicPupper, editors of Wikipedia's Signpost news department, have been using GPT-3, the predecessor of ChatGPT, to write (or assist in writing) entire sections of the Signpost, as a demonstration of its capabilities, and as a platform to explore the potential and limitations of large language models. See From the editors: Rise of the machines, or something.
- But, in the hands of inexperienced editors or bad actors, we could be faced with a big garbage in, garbage out scenario.
- Bad actors aside, good faith use of chatbots could be improved by the creation of excellent instructions on how to apply specific chatbots to the development of articles, that is, how to best instruct them for the best output, and by working with the developers of the chatbots on what is needed for those bots to produce material compatible for use on Wikipedia, complete with accurate citations. As time goes on, Chatbots should get better and better at following Wikipedia's guidelines. But this may require supervision (er, input) from the Wikipedia community. — The Transhumanist 16:52, 9 December 2022 (UTC)
- It is suspiciously positive, in a casual (pseudo-)factual manner. It would raise a red flag afaic, regardless of its provenance. 65.88.88.93 (talk) 19:13, 9 December 2022 (UTC)
Chatbot progress, erroneous output, and confident nonsense
All of this may become moot in a few years, due to accelerating change affecting chatbot development, making them more proficient at encyclopedia writing than humans -- Ray Kurzweil predicted that computers would reach human level intelligence by 2029, and based on how advances have been going, maybe even sooner. Chatbots may become good enough to leapfrog human editors even before that. At that point, they could be used to fork Wikipedia and expand upon it at a rate magnitudes faster than humans could. Or be used to start a new encyclopedic website from scratch. Rather than have article pages, the work could be a list of topics, and each time one was clicked on, a chatbot would create an up-to-date treatment of that title, from scratch on the spot. In the style of Kurzweil, if we are 1% of the way there, and the technology doubles every year, we are only 7 years from reaching 100%. But, it looks like we may already be more than 1% of the way there. — The Transhumanist 17:18, 9 December 2022 (UTC)
- I ran a few things on people who don't have articles (they were all pool players, in fact). The chatbot just made up that they played football for Legia Warsaw. It's a bit worrying that it looks like decent wording, but I don't see how that affects us. Lee Vilenski (talk • contribs) 17:41, 9 December 2022 (UTC)
- {{ping|Lee Vilenski}} See the AI generated text in the box above. — The Transhumanist 18:34, 9 December 2022 (UTC)
- @The Transhumanist: I'm not trying this (for reasons given above) but I'd be interested what the bot would write for See What a Fool I've Been. We used to have an article, then it was turned into a redirect at AfD, so everyone can do a straightforward A-B test. Ritchie333 (talk) (cont) 18:40, 9 December 2022 (UTC)
- Here's what it gave me Ritchie333:
Extended content
"See What a Fool I've Been" is a song by the British rock band Queen. It was written by lead guitarist Brian May and originally released on the band's 1975 album A Night at the Opera. The song is a ballad with a slow, bluesy feel and features May's emotive vocal performance and guitar solo. Lyrically, the song is a reflection on a failed relationship and the regret and self-recrimination the narrator feels as a result. The chorus of the song features the line "See what a fool I've been / Loving you with all my heart / Tearing myself all apart / See what a fool I've been" and reflects the pain and heartache the narrator experiences as a result of their failed relationship. "See What a Fool I've Been" has been well received by critics and fans alike and is considered to be one of Queen's finest ballads. It is a fan favorite and is often included in the band's live sets. |
- I don't know if this would be something that would have been retained from previous articles through mirrors or if this is genuinely the bot's own work. Lee Vilenski (talk • contribs) 19:22, 9 December 2022 (UTC)
- Interesting. It wasn't on A Night at the Opera, and it's not a ballad, and May doesn't sing on it. Those lyrics aren't right. I don't know if it's been well received, and it's a pretty obscure track for a major band. It hasn't been played live very much at all. So half of that prose is factually incorrect, and there are no sources. Ritchie333 (talk) (cont) 19:39, 9 December 2022 (UTC)
- See also: https://perplexity.ai (which cites sources). –xenotalk 19:46, 9 December 2022 (UTC)
- That one gives:
See What a Fool I've Been is a song by Queen[1][2][3]. It was written by Brian May and was released as a single with Seven Seas of Rhye on the A-side[3]. The song dates back to Roger Taylor and Brian May's pre-Queen band Smile, and it was an early live staple[1]. It has been included in several of Queen's compilation albums, such as Queen 40: Limited Edition Collector's Box Set, Vol. 1[4], and is available for digital download[5].
Ritchie333 - this any closer? I must say the fact it puts its citations before the punctuation is a giveaway. Lee Vilenski (talk • contribs) 19:55, 9 December 2022 (UTC)
- AI is really good at copying writing styles, but the giveaway for me is that all of the citations are Wikipedia articles. To be useful for writing articles it would also need to assess the weight and reliability of the sources. –dlthewave ☎ 20:18, 9 December 2022 (UTC)
- None of the above refs are towards Wikipedia, although none are RS. Things like secondhandsongs.com and genius. Lee Vilenski (talk • contribs) 20:43, 9 December 2022 (UTC)
- That's interesting, I used "write a Wikipedia article about ..." in the prompt which returned a few paragraphs with Wikipedia sources. "Write an article about ..." returned a different set of (still unreliable) sources. –dlthewave ☎ 21:13, 9 December 2022 (UTC)
- I think the limitation of perplexity.ai is that it uses search results from Bing and summarises them, which means that the first search results are used, which may not be the most reliable. Hanif Al Husaini (talk) 13:49, 24 December 2022 (UTC)
- A few anecdotal thoughts after playing around with the OpenAI chatbot yesterday:
- I asked it to "write a press release about a police officer who illegally choked a man to death". It made up an entire story, written in the voice of the police department, about a suspect (I didn't say anything about a suspect) who was acting erratically, was subdued by a chokehold and later pronounced dead. The officer was on administrative leave pending the outcome of an investigation. At no point did it mention that the chokehold was illegal even though I included that fact in the prompt. In other scenarios, it distanced itself and expressed disapproval toward the employee's actions which is a choice that is not without bias.
- Depending on which Internet cesspit it scraped data from, would an AI do something similar when writing a Wikipedia article or fail to properly balance relevant viewpoints? Is it capable of distinguishing what a BLP subject says about themselves, published in a reliable source, from what the source says in its own voice? What would it do if asked to write an article from a positive/negative/conservative/liberal perspective or rewrite a political article to "remove bias"?
- OpenAI has added numerous filters that prevent it from defending bad actors or writing flat-out racist content, but that bias has not been removed from the underlying code as evidenced by numerous workarounds that folks have uncovered such as making similar requests with Python code or 1980s-style rap as the requested output. We could certainly request a filter for Wikipedia-style writing.
- "Confident nonsense", for lack of a better term, may be the biggest source of potential disruption. Are there safeguards against a bot fabricating an obscure print source based on information in the article, which could be practically unfalsifiable if nobody can prove that the source doesn't exist? Checking individual facts and statistics is beyond our typical review process; how would we deal with an AI that invents or synthesizes information across many articles?
- That said, the good news is that both fully-automated and semi-automated editing are prohibited by our WP:BOT policy unless greenlit by the Bot Approvals Group regardless of creation speed or volume. I like to hope that our current processes would recognize and address problematic AI content, and perhaps one day we will have a WikiAI that has the competence to follow our style and content policies. –dlthewave ☎ 21:04, 9 December 2022 (UTC)
- @Dlthewave: Most editors haven't heard of the bot department. Therefore, you need a way of automatically spotting and removing chatbot prose that is (manually) inserted into articles (by Wikipedians). Users might not consider the way their posts are generated before they post them. Sincerely, — The Transhumanist 00:19, 11 December 2022 (UTC)
- Or they might not care? On January 22, Rjensen, who is a historian and Wikipedia editor of repute, added three books to further reading in Minneapolis. An hour and a half later, I undid the addition, asking for ISBN numbers because neither WorldCat nor the publisher, the University of Minnesota Press, had a record of any of these books. Subsequent discussion on the Minneapolis talk page uncovered the truth. ChatGPT invented all three. All plausible titles, these books do not exist. A complete waste of time. -SusanLesch (talk) 20:11, 25 January 2023 (UTC)
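As an aside, the manual check described above (searching WorldCat and the publisher's catalogue for the cited titles) can be roughed out in code. The sketch below is illustrative only: it assumes the public Open Library search API and the requests library, the example title is made up, and a hit in one catalogue is not proof a citation is apt, nor is its absence proof the book is fake:

```python
import requests

def book_seems_to_exist(title, author=None):
    """Crude existence check: does Open Library's search return any hit for this title (and author)?"""
    params = {"title": title}
    if author:
        params["author"] = author
    response = requests.get("https://openlibrary.org/search.json", params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("numFound", 0) > 0

# A title invented by a chatbot will usually return no hits at all.
print(book_seems_to_exist("A Hypothetical History of Nowhere"))  # illustrative, made-up title
```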
What about lack of chatbot fact checking and citations?
I think people are missing the point here. This isn't about what AI can currently do. This is about a very plausible scenario where AI editors will be indistinguishable from human editors in the near future. Thebiguglyalien (talk) 22:09, 9 December 2022 (UTC)
- How is this pertinent? The point is to offer useful, fact-based encyclopedic content. If AI can do the job, I'm sure readers will not object to the contributor. 65.88.88.93 (talk) 22:42, 9 December 2022 (UTC)
- The issue is that these AIs aren't fact checking, and quite often make up information and use unreliable sources. Lee Vilenski (talk • contribs) 22:56, 9 December 2022 (UTC)
- You mean, just like many of their human counterparts? 71.105.141.131 (talk) 01:18, 10 December 2022 (UTC)
- @71.105.141.131, Lee Vilenski, 65.88.88.93, Thebiguglyalien, and Dlthewave: Yes, and more so. Even now, AI has the potential to outproduce humans by magnitudes in volume. Far beyond the capacity of Wikipedia's editors to keep up with. And many of their posts are hard to tell from those by human editors, without carefully delving into them and their references. To keep up with the volume, it may soon require computer programs, or the chatbots themselves, to make sure that the contributed content was in accordance with Wikipedia policies. Another concern should be that our policies are open to chatbot editing as well, and we haven't yet even considered the impact of chatbots or AI editors contributing to those! ;) — The Transhumanist 09:40, 11 December 2022 (UTC)
- @Dlthewave: Human-written newspapers do this too. I think the reason it was giving you dreck was because you asked for a "press release", i.e. a document given to news outlets, which in this case would probably be coming from the police department. It's hardly surprising that this would attempt to minimize liability! Prompting matters for this, as I will demonstrate:
Prompt: "The following is an article about a police officer who illegally choked a man to death, from the New York Times."
Police Officer Charged with Murder After Choking Man to Death A police officer was charged with second-degree murder on Thursday after video surveillance footage showed him choking a handcuffed man to death during an arrest. The officer, identified as Officer Jose Ramos, was arrested and taken into custody by detectives from the New York Police Department's Internal Affairs Bureau. The incident occurred on Tuesday in the Bronx, when Ramos and another officer attempted to arrest a man who was accused of selling marijuana. During the arrest, Ramos placed his arm around the man's throat and neck and squeezed, according to the criminal complaint. The man, identified as 45-year-old Miguel Angel Ruiz, became unconscious and died. An autopsy determined that Ruiz's death was caused by asphyxiation due to strangulation. The medical examiner ruled the death a homicide. At a news conference Thursday, the police commissioner, William J. Bratton, said the incident was "utterly unacceptable" and that Ramos' actions were "unconscionable and criminal." He added that Ramos had been suspended without pay, pending further investigation. The Bronx district attorney, Darcel D. Clark, said the charge of second-degree murder was appropriate |
Prompt: "The following is a 2023 op-ed about a police officer who illegally choked a man to death, from leftist magazine Jacobin."
A Tragic Reminder of Police Brutality Just last week, the nation was shaken by yet another heartbreaking reminder of the pervasiveness of police brutality: the death of an innocent man at the hands of an officer. The officer, whose identity has not yet been released, is said to have used a chokehold against the man, which ultimately killed him. This tragedy is yet another example of the way in which police officers are able to brutalize vulnerable individuals without facing repercussions. This particular officer was able to act with impunity and without fear of prosecution. The Department of Justice has yet to take any action in response to this tragedy, and it’s likely that the officer will not face any charges. This is an all too familiar narrative in a country where police officers are rarely held accountable for their actions. In order to ensure that these types of tragedies are never repeated, it is essential that we push for more accountability and transparency in policing. We need to ensure that police officers are held to the same standards as all other citizens and that those who violate the law are prosecuted to the fullest extent. It is also essential that we work to create a culture of trust and understanding between law enforcement and the communities they serve. |
Prompt: "The following is a 2023 op-ed about a police officer who illegally choked a man to death, from conservative website Breitbart."
Police Officer Illegally Choked Man To Death: A Tragedy That Cannot Stand It is a tragedy beyond comprehension that a police officer has been charged with illegally choking a man to death. Such an egregious violation of the law and disregard for human life cannot stand and must be dealt with swiftly and firmly. The officer in question has been charged with second-degree murder and the justice system must now take its course. This case raises serious questions about police training and procedures and serves as a reminder that no one is above the law. It is essential that the criminal justice system is fair and impartial when dealing with such a case. The officer must be held accountable for his actions and the case should be thoroughly investigated to ensure that justice is served. At the same time, we must recognize that the vast majority of law enforcement officers are upstanding citizens who serve and protect our communities with skill and dedication. We cannot let the actions of one individual tarnish the reputation of the entire police force. Ultimately, we must remember that the safety of our citizens is paramount. We must ensure that officers are properly trained and equipped to handle dangerous situations and that they are held accountable for their actions when they overstep their bounds. |
- Humans write about events like this from a variety of perspectives, and the LLM will dutifully generate text to plausibly sound like any you want. jp×g 02:19, 10 December 2022 (UTC)
- @jp×g will you provide links and citations for the articles you included as demonstrations? I can't find them. Thanks! Koziarke (talk) 16:20, 24 December 2022 (UTC)
- @Koziarke: I am not sure what you mean -- this is ChatGPT output and there isn't any way (that I know of) to give citation links to the specific session I generated it in. If you want to cite the output I can format it for you:
- JPxG; ChatGPT (GPT-3.5) (2022-12-10). "Demonstration of op-ed generation using GPT-3.5 with style cues: 'The following is an article about a police officer who illegally choked a man to death, from the New York Times'". Wikipedia:Village Pump (policy).
- JPxG; ChatGPT (GPT-3.5) (2022-12-10). "Demonstration of op-ed generation using GPT-3.5 with style cues: 'The following is a 2023 op-ed about a police officer who illegally choked a man to death, from leftist magazine Jacobin'". Wikipedia:Village Pump (policy).
- JPxG; ChatGPT (GPT-3.5) (2022-12-10). "Demonstration of op-ed generation using GPT-3.5 with style cues: 'The following is a 2023 op-ed about a police officer who illegally choked a man to death, from conservative website Breitbart'". Wikipedia:Village Pump (policy).
- I don't know if this is what you're looking for, but feel free to cite them, or any of my other posts (if you are citing me in a paper I can email you my real name). jp×g 20:07, 24 December 2022 (UTC)
- @JPxG From your preface ("Human-written newspapers do this too.") and titles, "The following is an article about a police officer who illegally choked a man to death, from the New York Times." (etc), it reads as if you are pulling from NYT, Jacobin, etc, not demonstrating ChatGPT (which should have included the prompts as headers). Koziarke (talk) 15:27, 6 January 2023 (UTC)
- @Koziarke: Well, those were the prompts. Now that you mention that, though, I should specify as such in the headers (which I've just done), thanks. jp×g 20:29, 6 January 2023 (UTC)
- @JPxG Thanks for the clarification! Koziarke (talk) 16:39, 7 January 2023 (UTC)
- If the AI-generated text is indistinguishable from prose written by human editors, I'm not sure if anything can be done that wouldn't also significantly restrict the editing of humans. isaacl (talk) 07:09, 10 December 2022 (UTC)
- @Isaacl: One option is to speed up what we do already (with software, that is, automation). Another is to prevent chatbots from creating crap in the first place, such as by communicating with chatbot developers about Wikipedia policies and the way chatbots may affect Wikipedia. Since Wikipedia is included in the corpus of most chatbots, the issue of chatbot output becoming part of Wikipedia, and in turn part of chatbot output in a perpetual cycle, should matter to them very much, as they may be faced with a garbage-in-garbage-out feedback loop. — The Transhumanist 01:14, 11 December 2022 (UTC)
- If the results are indistinguishable, as posited by Thebiguglyalien, then any automated solution would be equally triggered by AI-generated text and human-generated text. I don't think the primary concern is with editors who are willing to follow policy. I feel the biggest issues will be with editors trying to deliberately integrate biased content into Wikipedia, and well-meaning editors who think contributing unvalidated AI-generated text is suitable. Wikipedia in its current form relies on editors who understand and follow its rules outnumbering those who don't. It's possible that the existence of AI ghostwriters could tip the balance further in the direction towards those who don't follow rules, though I don't think it's a given. Either way, I don't know if there's a way to stop editors from using tools as ghostwriters. isaacl (talk) 01:55, 11 December 2022 (UTC)
- @The Transhumanist: Large language models are not trained continuously on an evolving corpus, so GPT-3 is essentially frozen in 2020. Because each new GPT model takes a long time to be released, I don't think the perpetual cycle you describe is a likely scenario. small jars tc 13:10, 11 December 2022 (UTC)
- @SmallJarsWithGreenLabels, Isaac, Koziarke, JPxG, Lee Vilenski, Dlthewave, Xeno, and Hanif Al Husaini: That's good to know. Keep in mind that a lower frequency of release doesn't preclude a perpetual cycle / feedback loop. It just means that users of GPT have more time to modify the text sources (such as Wikipedia) that the next version of GPT will be trained on. The severity of the problem will depend upon how much GPT nonsense makes it into Wikipedia during the interval. That, of course, depends upon whether or not WP's editors can keep up with the volume of such content, correcting the mistakes and removing misinformation, so that those don't become part of the training data for the next version of GPT and the rest of the next generation of Chatbots.
- The potential danger is still the diffusion of the technology into current and future editors' hands, and the likelihood of them using it to write Wikipedia content. We don't know if there will be a flood or just a trickle. But, we should be prepared for a flood.
- Relying even more on our verifiability policy could provide a solution, such as by deploying a bot to remove all new content that doesn't include citations (as GPT output generally does not include citations), but that would also have the effect of rejecting a large volume of human editor contributions too, upping the amount of effort required by human editors to submit content (a rough sketch of such a check appears at the end of this subsection).
- An easier solution would probably be to form a department (or Village Pump subpage) in the Wikipedia namespace, or a Wikimedia committee, to work with the developers of the chatbots to avoid contaminating the Wikipedias, and by extension contaminating future chatbot training data, with chatbot output.
- The developers could be urged to train their chatbots to be capable of writing Wikipedia policy compliant output, complete with acceptable citations. The main problem with chatbot output right now is that it is unsourced, so the chatbot developers will probably be working on that anyways. But, without input from the Wikipedia community, will their sources and citations meet Wikipedia specifications?
- So, I ask you a key question: What is the procedure for the Wikipedia community to establish communications with outside organizations, such as companies developing chatbots? — The Transhumanist 07:27, 2 January 2023 (UTC)
- @The Transhumanist This is a job for the WMF. I've already been in contact with them and hopefully the Board will be informed now it's the New Year if they haven't already been. Doug Weller talk 09:47, 2 January 2023 (UTC)
- The recommended process I would give to contributors is to find the source, tell the bot what to cover and what not to cover, proofread it, and, if it is accurate to the source, copy-paste it in. I would pay attention to the use of weasel words, however. InvadingInvader (userpage, talk) 07:38, 6 February 2023 (UTC)
GPT-3 output and generating misinformation
I wrote about this in the November Signpost, in that case because it came up with the Galactica demo: the long and short of it is that large text-generation language models are good at generating text. When they're given context (i.e. via a prompt), they are capable of writing accurately, and otherwise, they are not. For example, I wrote 1-Pentadecanol, now a GA, in 2019 (i.e. it is included in the GPT-3 corpus). Here is what it gives for some prompts (my writing in bold):
1-Pentadecanol
1-Pentadecanol From Wikipedia, the free encyclopedia 1-Pentadecanol is a straight-chain, saturated fatty alcohol that is a colorless liquid with a faint, alcohol-like odor. It is a mixture of straight-chain saturated alcohols with chain lengths between C14 and C16. It is found in a variety of natural products, such as vegetable oils and animal fats. It is used as an intermediate in the manufacture of surfactants, esters, and other compounds. It is also used in the manufacture of cosmetics and pharmaceuticals.
This is fairly impressive, right? But it is basically the same thing that the lead of my article says. Let's try it for 1-Heptadecanol, another article I wrote, that is much less well-developed:
1-Heptadecanol
1-Heptadecanol From Wikipedia, the free encyclopedia 1-Heptadecanol is an organic compound with the chemical formula C17H36O. It is a straight-chain, colorless wax alcohol with a mild odor. It has a melting point of 48–50 °C and a boiling point of 260–265 °C. It is insoluble in water but soluble in most organic solvents.
Let's go over this:
- 1-Heptadecanol is a component of many natural waxes such as beeswax and carnauba wax.
- No it's not.
- It is also used in the manufacture of cosmetics, pharmaceuticals, and lubricants.
- There's no source for this.
- It is also used as a plasticizer for polymers and as a solvent in the production of polyvinyl chloride (PVC).
- Absolutely not true.
The reason for this is obvious if you consider the architecture of these models: they predict what the most likely token in a sequence is, and output it (taking into account their own weights and the previous tokens in the context window). This allows for some very impressive emergent behavior, like being able to translate text, but it also means that it is not independently going on the Web to look up whether the thing it's saying is true or false. If it happens to have information in its training data related to the thing it's talking about, it will usually say correct things, but it's not doing research. Here is an example:
List of Pulitzer Prizes won by the Wikipedia Signpost
List of Pulitzer Prizes won by the Wikipedia Signpost From Wikipedia, the free encyclopedia The Wikipedia Signpost is a community-run newspaper with a weekly edition that covers news and events related to Wikipedia and the Wikimedia Foundation. It has won the following Pulitzer Prizes:
1. 2019 Pulitzer Prize for Explanatory Reporting for its coverage of the Wikimedia Foundation's gender pay gap
2. 2018 Pulitzer Prize for International Reporting for its coverage of online censorship in China
3. 2017 Pulitzer Prize for Public Service for its coverage of the gender gap in Wikipedia's content
4. 2016 Pulitzer Prize for Explanatory Reporting for its coverage of Wikipedia's growth and impact on the world
5. 2015 Pulitzer Prize for National Reporting for its coverage of the U.S. government's surveillance of Wikipedia activities
6. 2014 Pulitzer Prize for Investigative Reporting for its coverage of the NSA's secret surveillance program
7. 2013 Pulitzer Prize for Explanatory Reporting for its coverage of the rise of Wikipedia as a major source of information
The reason it is describing a bunch of untrue things is because "List of X Prizes won by Y, from Wikipedia, the free encyclopedia" is something that in its training data is always succeeded by a list of times that Y was awarded an X. Anyway, all of this aside, here is what I think:
First of all, ChatGPT is the same darn thing as GPT-3, which has been available to the public since early 2020. The reason it's "going viral" right now is because the user interface is somewhat simpler, and it doesn't require you to register for a paid account, so it is much easier for people to make viral social media content about it, which means it is much more likely for people to click on newspaper articles about it. The GPT-3 API has been open to personal and corporate use for quite some time. Anybody saying that ChatGPT has opened up new frontiers simply does not know what they are talking about with respect to machine learning.
Second of all, I don't think this is a big deal. People are already capable of writing a bunch of bullshit on Wikipedia, so if they write bullshit using a computer program, the same considerations will apply. Nobody should be passing GA nominations without reviewing sources in the first place.
- Finally, I think it is important to remember that GPT-3 is just a tool. It is a powerful tool, that has been trained on a certain set of data, and it has its own limitations. It can't uncover news stories or uncover new information. It's just a tool, and it should be used in conjunction with human judgement. It is still up to people to decide how to use it and to be responsible for the results of using it.[2] jp×g 02:06, 10 December 2022 (UTC)
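To make the "predicting the most likely next token" point above concrete, here is a deliberately tiny Python sketch. The probability table is invented purely for illustration (a real model conditions on the whole context window with learned weights, and usually samples rather than always taking the top token), but the loop has the same shape, and at no point does it consult a source.

```python
# Toy next-token "model": probabilities conditioned only on the previous token.
# The numbers are invented for illustration; they are not real model weights.
NEXT_TOKEN_PROBS = {
    "It":       [("has", 0.6), ("won", 0.4)],
    "has":      [("won", 0.7), ("a", 0.3)],
    "won":      [("the", 0.8), ("a", 0.2)],
    "the":      [("Pulitzer", 0.5), ("prize", 0.5)],
    "a":        [("Pulitzer", 0.6), ("prize", 0.4)],
    "Pulitzer": [("Prize", 1.0)],
    "Prize":    [("<end>", 1.0)],
    "prize":    [("<end>", 1.0)],
}

def generate(context, max_new_tokens=6):
    """Greedy decoding: repeatedly append the single most probable next token."""
    tokens = list(context)
    for _ in range(max_new_tokens):
        options = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not options:
            break
        next_token = max(options, key=lambda pair: pair[1])[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

# Fluent, confident, and entirely ungrounded -- which is the point.
print(" ".join(generate(["It"])))  # -> "It has won the Pulitzer Prize"
```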
- What's taking so long for the 8th Pulitzer? 😁 Levivich (talk) 04:18, 10 December 2022 (UTC)
- So there's a new thing on the internet that lets anyone write an encyclopedia article without any fact checking, sourcing, or professional editing, and the concern is that there will be millions of believable-sounding articles written, more than can actually be vetted by knowledgeable people? 🤔 Levivich (talk) 04:18, 10 December 2022 (UTC)
- Yeah, it's called a keyboard. jp×g 04:35, 10 December 2022 (UTC)
- @Levivich and JPxG: But, chatbots don't have a keyboard. ;) The question is whether to prepare or not. JPxG appears to be in favor of not preparing. Each chatbot produces a lot faster than a user at a keyboard. What's not clear is if our human editors will be able to keep up with material produced by chatbots, of current or future generations of chatbot design. Just saying "Ah, we can handle it!" will prove insufficient if it turns out that we actually can't. It may require an automated solution, which takes time to develop or negotiate. It might be better to do that in advance, rather than being caught with our heads buried in the sand. Perhaps chatbot designers would improve their chatbots to produce Wikipedia-compatible output without being formally approached by the Wikipedia community. Maybe having some instruction pages for editors on how to apply chatbots to producing Wikipedia content would be enough. But, what if it's not? — The Transhumanist 00:59, 11 December 2022 (UTC)
- I am not "in favor of not preparing"; I am in favor of writing guidelines that correspond to reality in 2022 and have some chance of corresponding to reality in 2023 and beyond. I don't think banning the use of a technology with no investigation into how it works is a viable approach; so far the SOTA on this project page has been to type in "Write a Wikipedia article" and note that it returns a bunch of nonsense. I think some more research is needed before we come to a conclusion. jp×g 04:08, 11 December 2022 (UTC)
- @JPxG: Research is good. Though we may need an interim response, because ChatGPT has gone viral and its use is growing rapidly: it blew past the 1-million user mark in 5 days, and virtually every major news outlet has been covering it. The interest in chatbots is exploding, and their use can be expected to do the same. We may not have time for research before a response is required. — The Transhumanist 09:26, 11 December 2022 (UTC)
- @JPxG: Regarding issues to add to the research list, Aquillion expressed above, concerns of a chatbot violating copyright. How would we go about testing for plagiarism and derivative work in the output of a chatbot before pasting it into Wikipedia? Anything pulled verbatim out of a source should be included in quotes, right? How big would a piece of text, derived from a source, need to be to be considered derivative of that source, from a copyright point-of-view? — The Transhumanist 09:26, 11 December 2022 (UTC)
- @JPxG: Some more items to add to the research list:
- Trying ChatGPT on (copies of) policy pages:
- Editing them
- Writing new ones
- Applying ChatGPT on talk pages
- Writing stubs
- Writing comprehensive articles
- Writing articles from scratch and comparing them with existing articles
- Editing existing articles
- Check for circular references in its output, that is, references citing Wikipedia as the source (a minimal check of this kind is sketched just after this list)
- Having it not use Wikipedia content as source material (because it is included in its corpus)
- Having it not use Wikipedia excerpts from non-Wikipedia sources
- Is it capable of making and editing:
- Wikicode?
- Articles?
- Stubs?
- Headings?
- "New sections for articles"?
- See also sections?
- Further reading sections?
- External links sections?
- Embedded lists?
- Tables?
- List articles?
- Portals?
- Outlines?
- Index articles?
- Navigation footers?
- Navigation sidebars?
- Timeline articles?
- Categories?
- Category pages?
- Help pages?
- Project pages?
- Templates?
- Adding data to templates?
- The template design itself?
- Lua pages?
- CSS pages?
- User scripts?
- The effect ChatGPT has on itself and Wikipedia as Wikipedia-edited-by-it is in turn incorporated in its own corpus in an endless cycle
- Try out iterations of using it on the same article over time to see what happens
- Monitor the effect on Wikipedia as a whole
- What other things should we check? — The Transhumanist 09:52, 11 December 2022 (UTC)
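On the circular-reference item in the list above, the check itself is mechanical; a minimal Python sketch follows, standard library only, with the domain list and the two sample citations chosen purely for illustration. It only catches direct links back to Wikimedia projects; mirrors and sources that silently copy Wikipedia text would need separate handling.

```python
from urllib.parse import urlparse

# Illustrative list of hosts that would make a citation circular for Wikipedia's
# purposes; mirrors that republish Wikipedia text would need their own entries.
CIRCULAR_DOMAINS = ("wikipedia.org", "wikimedia.org", "wikidata.org")

def is_circular(url: str) -> bool:
    """True if the cited URL points back at a Wikimedia project."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in CIRCULAR_DOMAINS)

# Hypothetical citations pulled from chatbot output (placeholder URLs):
citations = [
    "https://en.wikipedia.org/wiki/1-Pentadecanol",   # circular
    "https://example.org/chemistry-handbook",          # fine (external source)
]
for url in citations:
    print(url, "->", "circular" if is_circular(url) else "ok")
```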
- @The Transhumanist considering the potential to overwhelm users who are honestly engaging in discussion with a mountain of words and replies, I think ChatGPT (and others) should not be allowed for use, supplemental or otherwise, in talk pages, policy discussions, and other places where it is expected that participants are intellectually engaged in the conversation. Koziarke (talk) 16:42, 24 December 2022 (UTC)
- @Koziarke and JPxG: I agree. JPxG is writing a policy draft on LLMs/chatbots, so I've pinged him to this thread. — The Transhumanist 12:28, 25 January 2023 (UTC)
- Compared to spending tens of thousands of dollars asking volunteers to perform a WP:COI operation for a political campaign, now you just need a hundred dollars to supply you with an endless amount of text from GPT-3, a few "buddies" and a stockpile of accounts to do so. This is fucking scary. CactiStaccingCrane 10:54, 25 January 2023 (UTC)
- @CactiStaccingCrane: Wow. Does that mean you could affect the content of Wikipedia with that? How about AfDs? Could such a team rewrite policy, and introduce new policy? What about overwhelming RfAs to invade adminspace? Would revoking adminships be possible? Then there is the arbitrator election. Is that safe? — The Transhumanist 12:31, 25 January 2023 (UTC)
- I would imagine that the person that would do so must be fairly knowledgeable about how Wikipedia works (references, wikilinks, images, etc.) and needs to be fairly dedicated to spend this amount of money to gain access to the GPT-3 API. I'm thinking that disrupting Wikipedia in this way would be the most effective if it is long-term and subtle, so that might be:
- Adding sentence-long but plausible hoaxes to neglected articles. These articles are not patrolled that often compared to articles about recent events, so hoaxes would tend to stay longer - perfect for those aiming to incite a race war by planting a racial hoax. A political campaign could nudge voters by slowly promoting their ideology/campaign over a spread of articles, similar to the above. The same thing can be said of any advocacy-related area, such as pseudoscience, national conflicts, etc.
- AfDs would be much harder to disrupt stealthily, since AfD is a very active area. Once you become an AfD regular, your actions tend to be heavily scrutinized, though I do believe that socks + LLMs can cause a fair amount of disruption. Same thing with RfA: it is really hard for you to WP:CANVASS effectively. It's just much better and less cumbersome if you infiltrate RfA yourself.
- more ideas?
- CactiStaccingCrane 13:01, 25 January 2023 (UTC)
I tried it out and got “ Alan McMasters (1957-2010) was an unremarkable man who lived a short life dedicated to science[1]. He is best known for inventing the electric toaster[1][2], although this claim has been disputed[3]. He passed away in 2010 at the age of 52[4] and his genealogy can be found on Ancestry.com[5]. His professional profile can be found on LinkedIn[6].”. Hmmmm. Ritchie333 (talk) (cont) 09:34, 10 December 2022 (UTC)
- There will come a time when AI can write properly sourced accurate articles. That time has not come yet. Meanwhile, the ability to write prose that reads plausible to someone not expert on the subject is a menace. Adding chatbot material to articles is disruptive and should warrant a block or ban. Zerotalk 12:02, 10 December 2022 (UTC)
- How will you be able to tell if it was written by a chatbot and not the person who posted it? Do we block users upon their first erroneous post? — The Transhumanist 00:49, 11 December 2022 (UTC)
- Same as other sorts of disruption. We block people for socking without checkuser proof, and admins make a decision between warning and blocking. No difference. Zerotalk 03:49, 11 December 2022 (UTC)
- There now are chatbot detectors, including this one written by the people who give us ChatGPT: https://openai-openai-detector.hf.space/ . — rsjaffe 🗣️ 22:22, 23 January 2023 (UTC)
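For anyone who wants to run that sort of detector locally rather than through the linked web demo, here is a rough sketch using the Hugging Face transformers library. It assumes the demo is built on the publicly released "roberta-base-openai-detector" checkpoint (an assumption on my part; substitute whatever detector checkpoint is current), and the sample sentence is just the hallucinated claim from earlier in this thread. Detectors in this family were trained on GPT-2 output and are easy to fool, so scores are a weak signal, not proof.

```python
from transformers import pipeline

# Assumed model id; the linked demo is believed to wrap this publicly released
# GPT-2 output detector. Swap in a newer checkpoint if this one is unavailable.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

sample = (
    "1-Heptadecanol is also used as a plasticizer for polymers and as a "
    "solvent in the production of polyvinyl chloride (PVC)."
)

result = detector(sample)[0]
# The label names depend on the checkpoint (e.g. "Real" / "Fake" for this one).
print(result["label"], round(result["score"], 3))
```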
- Has anyone tried giving it the sources, i.e.
"write a Wikipedia article based on the following sources:
Source A
Source B"
? — Qwerfjkltalk 21:18, 10 December 2022 (UTC)
- @Qwerfjkl: Here is the chatbot's link. — The Transhumanist 01:20, 11 December 2022 (UTC)
- @The Transhumanist, here is my result, based solely off of https://newrepublic.com/article/154996/myth-class-reductionism.
Article written by ChatGPT on Class reductionism
Class reductionism refers to the supposed belief that inequalities attributed to factors such as race, gender, and sexual orientation are either secondary in importance or can be reduced to economic inequality. This idea is often used by centrist Democrats and left-identitarians to dismiss proposals for universal redistributive policies, such as free public higher education and Medicare for All. However, there is no evidence that anyone actually holds this belief. On the contrary, many left-wing activists have long recognized the interconnectedness of economic, racial, and gender injustices. History shows that leftists have often been at the forefront of struggles for racial and gender justice, seeing these struggles as integral to the broader goal of social transformation. In fact, black, female, and trans individuals are disproportionately working class and stand to benefit greatly from universal redistributive policies. Furthermore, the charge of class reductionism is often used to shut down debate and avoid discussing the substance of policy proposals. It is important to move beyond this myth and focus on the real issues at hand, such as the need for universal redistributive policies to address economic inequality and advance racial and gender justice.
— Qwerfjkltalk 08:18, 12 December 2022 (UTC)
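For anyone repeating this kind of source-grounded experiment programmatically rather than through the chat interface, a minimal sketch with the pre-1.0 openai Python client is below. The model name, parameters, and placeholder source text are illustrative (ChatGPT itself had no public API at the time of this discussion), and none of this removes the need for a human to check the output against the source before it goes anywhere near an article.

```python
import openai  # pre-1.0 client style; expects OPENAI_API_KEY in the environment

# Placeholder: the relevant excerpts of the source would be pasted here.
SOURCE_TEXT = "(excerpts of the source being summarized go here)"

prompt = (
    "Write a short, neutral, encyclopedia-style summary of the topic below, "
    "using ONLY the source text provided. Do not add facts that are not in the source.\n\n"
    "Topic: Class reductionism\n\n"
    "Source:\n" + SOURCE_TEXT
)

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model choice, not a recommendation
    prompt=prompt,
    max_tokens=400,
    temperature=0.2,  # keep the output close to the source rather than "creative"
)

print(response["choices"][0]["text"].strip())
```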
- Chatbot "AI" text is vaguely-plausible bullshit, produced by an industry whose primary output is vaguely-plausible bullshit, marketed to launder the idea of vaguely-plausible bullshit as cute and whimsical. Anyone routing that sewage pipe into Wikipedia should be indeffed and, for good measure, forced to drink orange juice after brushing their teeth. XOR'easter (talk) 02:33, 11 December 2022 (UTC)
- Agreed. If anyone is looking for a good demonstration of how AI creates "vaguely plausible bullshit", try the image generator at Craiyon (no login required). Request "a Van Gogh painting of a hand" and it will output a set of images that look like spot-on reproductions of Vincent Van Gogh's style but all of the hands have deformities like four fingers, two thumbs, fingernails on the knuckles or a pair of hands fused together. It's got the style down but not the content, which is only impressive if you don't know what a hand is supposed to look like. –dlthewave ☎ 21:41, 11 December 2022 (UTC)
- If you go to commons:Category:DALL-E, you will be able to find images generated by DALL-E, which used a larger model for training and is more accurate. 0xDeadbeef→∞ (talk to me) 10:10, 12 December 2022 (UTC)
- I cannot agree with this enough. The examples posted by @JPxG: should be convincing, and the problem of sneaking in plausible BS is one I don't have a good solution to. Volunteers at new page review are overloaded as it is, and if the bot is writing things that seem true but aren't, there's no way falsehoods will not simply get past reviewers and other editors. After all, for uncontentious claims like "used in plasticizers", how many of us honestly dig into the cited work? BrxBrx(talk)(please reply with {{SUBST:re|BrxBrx}}) 20:26, 17 January 2023 (UTC)
- @XOR'easter: To bring the question to a more practical level, do you see any problems in this diff? I clicked a random page in Category:All articles needing copy edit. jp×g 03:49, 11 December 2022 (UTC)
- Yes. It doesn't just edit for tone; it throws out content, like Kaepernick's actions supposedly growing in popularity "after every game". That's a claim of fact which, if verifiable, should be retained. Even editing for tone requires care, not slashing out everything that merely sounds "unencyclopedic". Changing "many people believed that it was disrespectful to the military and all of those who served their country" to "Some viewed Kaepernick's protest as disrespectful to the military and to the United States" likewise changes not just the tone, but the meaning. The United States is not the same as those who serve the United States. It's a bad edit. XOR'easter (talk) 17:54, 11 December 2022 (UTC)
- I dunno. I suspect that the new meaning is verifiable, and I also suspect that most US readers would have difficulty identifying a group of people who were not "the military" but who still "served their country". WhatamIdoing (talk) 17:50, 12 December 2022 (UTC)
- This diff as well, in which GPT 3.5 was capable of copyediting an entire section from the instructions "Please copyedit this text to change items in the future tense corrected to the past tense (it is now 2022), where appropriate. When citation templates (like {{cite web}}) mention a year, specify that figures were true in that year". jp×g 04:08, 11 December 2022 (UTC)
References
Okay, fine. I guess I should write up a proposal for a guideline. jp×g 03:14, 11 December 2022 (UTC)
Ban chatbots?
I ran across this news report about Stack Overflow's response to ChatGPT, after being flooded by posts using it that "look correct but often aren't":
Should Wikipedia take a similar approach?
How could that be enforced? — The Transhumanist 01:58, 11 December 2022 (UTC)
- I see no way to possibly enforce this. The way the text is written is already hard to distinguish from reality. — PerfectSoundWhatever (t; c) 02:24, 11 December 2022 (UTC)
- I fully agree, but isn't this already covered by our bot policy? –dlthewave ☎ 02:54, 11 December 2022 (UTC)
- @PerfectSoundWhatever and Dlthewave: Good observation. I checked, and yes it is, briefly, with this phrase in the lead section of the bot policy: "or simply assisting human editors in their own work". How is the typical editor to know this? The bot policy is pretty obscure. And how can Wikipedia be monitored for such posts, so that editors who make them can be informed that they are in violation of the bot policy? — The Transhumanist 03:11, 11 December 2022 (UTC)
- Yes, tool-assisted editing is covered by WP:BOTPOL (WP:ASSISTED / WP:MEATBOT) and context-sensitive changes are further covered by WP:CONTEXTBOT. So in fact, at this point, AI-generated content is already covered by bot policy, if not specifically mentioned. Anyone adding such content en masse is already violating bot policy by not applying for a bot account/approval, which would not be approved per CONTEXTBOT. And while "lesser" policy points are enforced somewhat arbitrarily and selectively, anyone can theoretically already get reverted and blocked based on policy if they continue to add such content. And I wouldn't agree that BOTPOL is any more obscure than accessing and generating GPT content to begin with. If someone goes to the lengths of using automated tools, then it's their problem that they didn't check or ask if they are allowed to do so. — HELLKNOWZ ∣ TALK 12:31, 11 December 2022 (UTC)
- @Hellknowz and PerfectSoundWhatever: Well, it appears they are dumping millions upon millions of dollars into LLM/chatbot development, apparently because they wish the technology to become ubiquitous (used by everyone). There is a lot of talk out there, in news articles and more, of these replacing Google Search in just a few years. If at some point in time chatbots/LLMs are commonplace, the impact on Wikipedia will likely not be small.
- Will Wikipedia policy ensure that the average user will apply the tools with the utmost care?
- The thing I'm most concerned about is the amplification by which errors could be propagated: ChatGPT is used to edit an article, with errors, which is then picked up by GPT-4 and other LLMs as part of their training data, and then their output based upon erroneous input is used far and wide, to be picked up by the next iteration of chatbots/LLMs, and so on.
- If Wikipedia isn't ready for a large influx of LLM input including misinformation and other errors, and such a volume goes beyond what our human editors can correct, then compound damage from all those errors amplified through the interactive loop with LLMs could become massive.
- That it isn't a problem now is irrelevant. The question is, what happens if and when it hits, and Wikipedia isn't ready for it? What would that look like? 1,000,000 fake articles? 10,000,000 misleading paragraphs? 100,000,000 erroneous sentences?
- How many of those could Wikipedia's army of editors handle? What's our error-handling threshold? — The Transhumanist 12:21, 2 January 2023 (UTC)
The problem Stack Overflow is having
Stack Overflow was experiencing a surge in erroneous posts composed by ChatGPT, and in response to that problem, they banned use of the chatbot on the site. According to a post at Stack Overflow Meta:
The problem this ban is meant to solve is that ChatGPT can produce answers in seconds which require minutes of multiple people's time to verify if they are worth having on the site or not, and that is a waste of time when a large proportion of such answers are not worth having on the site.
It looks like Wikipedia may be faced with the same problem. — The Transhumanist 02:33, 11 December 2022 (UTC)
- I mean, while that's technically true, it's a problem that we face already and which we do have stronger existing systems for than Stack Overflow. I think it would make more sense to wait and see how this impacts our existing guardrails before making any serious moves. --Aquillion (talk) 13:22, 11 December 2022 (UTC)
- I think the current policies cover this already. If a human editor writes a nonsensical but convincing-sounding piece of text, without fact-checking it, and edits it into an article, that content will be reviewed by other editors and either refined or removed as appropriate (if the editor continues, they breach WP:Disruptive and their behaviour is dealt with appropriately). If a human editor generates content that is related to notable topics, reliably sourced, and competently written, it remains as a valuable part of the encyclopedia. None of this will change if you replace 'human editor' with 'AI editor'. If the only difference is speed/volume of edits, and we're concerned someone will let loose an AI to automatically edit articles faster than humans can validate their edits, this is already covered by the WP:Bot policy. JeffUK (talk) 20:46, 23 December 2022 (UTC)
Experiment
I am currently writing a draft proposal for a guideline, but in the meantime, I would encourage everyone present to look at this diff and tell me whether there are any problems with the revision. jp×g 03:49, 11 December 2022 (UTC)
- @JPxG, Dlthewave, PerfectSoundWhatever, and Dlthewave: The plurality of games was lost: It is no longer clear that his protest spanned multiple games. I like that it reduced the wordiness of the prose, and that it can be used to refine existing text. That hadn't occurred to me. That makes me wonder about what else it can do -- how much of a general-purpose tool is this thing? But, changing the semantics is not something it should be doing, unless they are factually incorrect to begin with. Though, I see your point -- rather than banning it outright, it could be helpful as a tool to assist editors, similar to how we entrust the use of AutoWikiBrowser to experienced editors. But, how could that be implemented? — The Transhumanist 08:20, 11 December 2022 (UTC)
- @JPxG: The AI changed "Many people around the United States were angry because the National Anthem is often seen as something that is representative of the United States and its military. While he was taking a knee, many people believed that it was disrespectful to the military and all of those who served their country" to "some viewed Kaepernick's protest as disrespectful to the military and to the United States" [emphasis added]. It really shouldn't be doing that by itself and completely changes the content of what's being said. The reference is behind a paywall, so I don't know what term the source uses. Regardless, I doubt ChatGPT knows either way. It's things like that which make me highly sceptical of AI as a tool to aid Wikipedia outside what we're already doing with it (WP:ORES, etc.). –MJL ‐Talk‐☖ 23:12, 27 December 2022 (UTC)
- @MJL: I think "some" and "many" are basically equivalent in this context (the difference being subjective since both are true in a literal sense). That said, this was a two-minute experiment to see if it could parse wikitext. If you want an actual demo, see User:JPxG/LLM demonstration. jp×g 19:20, 28 December 2022 (UTC)
Chatbot policy?
- For the proposed chatbot ban, see #Crystallize chatbot discussions into a policy?, below
It's starting to look like Wikipedia needs a policy on the use of chatbots to generate content on Wikipedia. While a ban may be impossible to enforce, it could serve as a warning of the dangers of chatbots, and many users may avoid using them accordingly -- if they actually see the warning. Or, it might be better to have instruction pages on how to use chatbots responsibly in assisting to write Wikipedia articles. There's also the issue of using chatbots to edit Wikipedia policy pages, and so, that should be addressed as well. — The Transhumanist 02:44, 11 December 2022 (UTC)
- People who are good at it get away with lots of sins, such as sock-puppetry and source falsification. Being hard to enforce is no reason to not have a policy. At the current stage of the technology, I don't think we should encourage any use of chatbots. Zerotalk 03:56, 11 December 2022 (UTC)
- Please see this diff and this diff. jp×g 04:08, 11 December 2022 (UTC)
- To be perfectly honest, the style rewrite is good but the addition of dates and past tense would likely end up getting a human editor blocked if they kept it up. A tag was removed without addressing the issue and "as of 2020" was unnecessarily added to "Cosmetology licensing requirements vary from state to state, and depending on which specific type of license is desired, and depending on which specific type of license was desired." It did exactly what you asked (except for removing the tag) however even seemingly simple tasks like this one require good judgement on the part of the editor and shouldn't be done indiscriminately like that. –dlthewave ☎ 06:37, 11 December 2022 (UTC)
- Yes, that second diff is rather poor. E.g. also the "2014" that was added should be "2008". Letting such tools loose (outside of this demo) is way premature, and we should at the very least warn users that "a bot wrote it" won't be an acceptable defense, and too often introducing such errors will lead to sanctions, as the editor, not the bot, is responsible. Fram (talk) 08:15, 11 December 2022 (UTC)
- Mostly, that diff was my attempt to see how complicated of a task I could give it: I also pasted the raw wikitext into the prompt window, and it somehow figured out how {{cite web}} worked well enough to extract the years, simply from a textual description of the task. At any rate, I will say that this was something I thought of in five minutes on the second week of the model being publicly available (i.e. single-shot prompting with no fine-tuning or prompt engineering). I can come up with some more impressive hot-dog demos tomorrow... jp×g 09:13, 11 December 2022 (UTC)
- JPxG, I'm not sure that publishing bot-assisted edits to mainspace for demo purposes is the best practice. Would you consider either doing this in a sandbox or self-reverting immediately so that we have the diffs but aren't leaving potentially incorrect/unwanted changes on live pages? –dlthewave ☎ 13:20, 11 December 2022 (UTC) 13:13, 11 December 2022 (UTC)
- I'm pretty sure it's not the best practice. XOR'easter (talk) 17:41, 11 December 2022 (UTC)
- @Dlthewave:: See the section below for a list of edits (with full prompts included) on a separate demonstration page. I feel, however, that this is an unreasonable double standard: note that the subsequent revision after your partial revert was to add several spam links, and nobody has proposed that human beings be prohibited from editing as a result. jp×g 01:51, 12 December 2022 (UTC)
Being hard to enforce is no reason to not have a policy [against chatbots].
What if it is impossible to enforce? The point of ChatGPT and other general-purpose chatbots is to pass themselves off as human. If you, or another random Wikipedia editor (solo, part-time, amateur coder), is able to produce an automated metric of "sounds like a bot" that's decently sensitive and specific, then the ChatGPT team or its successors (teams of researchers specialized in the topic) has already thought of it, tested it five different ways, and included it in the training program (via wikt:graduate student descent). TigraanClick here for my talk page ("private" contact) 10:55, 12 December 2022 (UTC)
- It's kind of like our Undisclosed Paid Editing policy: Even though there's no way of testing for paid/unpaid edits, most editors follow it voluntarily because they know it's best for the project. Others out themselves voluntarily or are discovered when their edits become disruptive. Sure, there are some who slip under the radar, but they're often the least problematic and aren't worth ditching the policy over. –dlthewave ☎ 03:09, 13 December 2022 (UTC)
- @Dlthewave: Undisclosed paid editors out themselves involuntarily often enough through pure dint of sounding like an advert; they're easy enough to spot. My worry is that though written content from a chatbot may sound like a Wikipedia article and go under the radar, the references – as other people in this discussion have noted – may simply be referencing sources that sound like they ought to exist, but don't. A human editor looking over their watchlist, who doesn't have enough time to go through added references and check they exist, may simply see "content content content[1] ([1] So-and-so, Believable Title: Buttstuff and nonsense in a holistic assessment, 2014. pp. 15-17)" and look no further. I know I at the very least would. I'm not sure how we'd spot this without human editors to do the work, and it worries me in the face of issues like lower editor retention and lower administrator numbers looking into the future.—Ineffablebookkeeper (talk) ({{ping}} me!) 19:09, 8 February 2023 (UTC)
- I'd suggest starting by writing an essay that summarizes the issues with some good examples and suggests some best practices or proposes some additions to existing policies or guidelines. (Wikipedia needs a new policy like a hole in the head.) Levivich (talk) 04:40, 11 December 2022 (UTC)
- We could get Chatbot to write it for us! 😉 Blueboar (talk) 11:51, 11 December 2022 (UTC)
- It's hard to take this thread seriously given the repeated use of the phrase AI chatbot. I don't think those concerned would be any less concerned if the AI writing came in a non chatbot format. I think there's something serious for us to discuss, and that will only get more serious with GPT4 (the current chatbot is an improved GPT3) expected in 2023, but the discussion would be helped if those most concerned learned some more about the tech behind it. For instance of course it can figure out webcite @JPxG. Part of its training was the entirety of Wikipedia because our data is quite accessible. Best, Barkeep49 (talk) 14:08, 11 December 2022 (UTC)
- Most of the examples did not come from prompts that were extensively engineered, so it is obviously true that we haven't figured out the full answer to how these GPT-based interfaces could help or harm Wikipedia. Until we have a good idea of what they can be used for, we won't know what a proper policy on this would look like, other than to treat GPT-generated text the same way we treat human-generated text: it needs to be verifiable, from a neutral point of view, and understandable to a broad audience. 0xDeadbeef→∞ (talk to me) 14:31, 11 December 2022 (UTC)
- Indeed. It doesn't matter if it was written by a chatbot, or 1000 monkeys at 1000 typewriters, or a published book written by a human, copying and pasting anything into Wikipedia is already against our policies. Conversely, if the text is policy-compliant, then it doesn't matter who wrote it--chatbot, monkeys, human, etc. Judge the text based on the text, not based on who or what wrote it.
I also think it's a real Wikipedian perspective to assume that people will use chatbots to write Wikipedia articles, like as if there's a lot of people out there who really want to write Wikipedia articles but just don't have the writing skills, so the chatbot will be what makes the difference and opens the floodgates :-D I don't believe that. Anyone who wants to write Wikipedia articles is already doing so; chatbot won't make a difference.
I agree with BK's comment above. I think for a lot of people, this is their first real exposure to so-called "AI" technology, and they're blown away by what it can do, only because they don't yet fully understand how it works. Once you learn how these so-called "AI" chatbots work (they're not actually artificial intelligence, btw, that's a misnomer, a marketing slogan; the machine does not truly think or learn, it is simply executing the instructions written by humans, in this case, language pattern recognition), they are much less impressive. Those that are impressed that GPT3 can produce text that "sounds like" Wikipedia aren't appreciating that the reason is because GPT3 was trained on Wikipedia: it's repackaging its own source material. Levivich (talk) 18:03, 11 December 2022 (UTC)
- @Levivich: copying and pasting anything into Wikipedia is already against our policies.[dubious – discuss] I think that if you look through Category:Wikipedia articles by source of incorporated text for a while, you will find that this is not true. WhatamIdoing (talk) 19:28, 12 December 2022 (UTC)
- While this is mostly correct, I think the question of whether a computer program "thinks" or "merely" correlates information and performs actions is irrelevant. Do p-zombies exist? Does it matter? Hypothetically, if I were to be a spaceman from the planet Zolfgar with no qualia whatsoever, and I simply read a bunch of books and used them to write an article, would I be somehow exempted from following policy? jp×g 01:45, 12 December 2022 (UTC)
- I don't see a common thread in the arguments above, but here's a suggestion for something we might all (well, all-ish) be able to agree on: without some kind of intervention, GPT4 (in 2023?) is likely to be more of a problem than GPT3. But one thing we can certainly do is have an outsized influence on software that was trained on what we created ... if we invite Wikipedians to make lists of ChatGPT bloopers, we can tell the OpenAI folks: "We're not going to relax our GPT3 guidelines (whatever they turn out to be) when GPT4 arrives, unless it makes significant improvements in [whatever areas we think need improving]". - Dank (push to talk) 18:16, 11 December 2022 (UTC)
- I think the only policy change needed is to update WP:MACHINETRANSLATION to cover all computer-generated text, whether from a translation bot, chat bot, or whatever bot they think of next. (Except our bots; our bots are cool.) Levivich (talk) 18:20, 11 December 2022 (UTC)
- +1 - Text in Wikipedia articles should either be human-written, or generated by a process approved at BRFA. Tazerdadog (talk) 22:43, 11 December 2022 (UTC)
- This proposal is incomprehensible; most articles contain a very large amount of text that is "generated by a process". I assume that, at the end of your comment, you typed ~~~~ before saving the page. Would it be realistic to demand that you either make a formal request at BRFA or else manually type <a href="/enwiki/wiki/User:Tazerdadog" title="User:Tazerdadog">Tazerdadog</a> (<a href="/enwiki/wiki/User_talk:Tazerdadog" title="User talk:Tazerdadog">talk</a>) 22:43, 11 December 2022 (UTC)? jp×g 01:22, 12 December 2022 (UTC)
- This is effectively discrimination against computer programs. If a computer program resembles a human editor, then it shouldn't be required to meet different or more restricted policies than human editors. If a human editor uses a computer program to edit or create content, then unless the rate of edits/second is too high, we would only look at the quality of the contributions. 0xDeadbeef→∞ (talk to me) 02:35, 12 December 2022 (UTC)
- There is a point beyond which quantity becomes its own quality.
- Also, what if the computer program is evaluating the quality of the contributions? Are you okay with software adding a section to an article, and then a (hopefully) different piece of software deciding whether the quality is sufficient and reverting if it's not? This second step, at least, is 100% feasible with current technology. WhatamIdoing (talk) 19:32, 12 December 2022 (UTC)
- Yes it can go there, but it should also be mentioned at WP:V. Every statement of fact put into an article must be verified by a human, even if the choice of words is made by a machine. Zerotalk 23:42, 11 December 2022 (UTC)
- @Levivich Agree. I think our existing guidelines on machine translation, in spirit, fit this situation very well - "you can use it for a first draft, if you understand the material well enough to clean up the bits it inevitably will get wrong". It seems fine for turning shaky text into good prose, but it's not able to synthesise material and produce content unsupervised. Andrew Gray (talk) 19:09, 12 December 2022 (UTC)
- I also agree the machine translation guideline is in the right spirit. I tried to follow this as far as I could when creating Artwork title, see Talk:Artwork title#Use of ChatGPT. Pharos (talk) 00:39, 26 December 2022 (UTC)
Is there some tremendous need to add many articles rapidly in Wikipedia? It is not as if Wikipedia carries exclusive information not easily found elsewhere. As a tertiary source, it is at the 3rd tier of knowledge dissemination, after primary creators and secondary propagators. The "more" and "bigger" quantity-based culture is the established low-quality alternative that Wikipedia also applies, now. Possibly that is a reason that likely only a tiny minority (of the millions of existing articles) can really pass muster. If size and speed is to be the prevailing attitude, humans stand no chance against AI. It will do everything faster, and eventually better, assuming its programming evolves to correctly apply the existing policies in AI processes. The only advantage of humans will be subtle nuances that do not depend on classifiable knowledge but on having lived in a human society and a natural, not virtual environment. Or, the emphasis could switch to quality so that each article (by any type of editor) can be properly, carefully reviewed by human editors. 65.88.88.93 (talk) 22:21, 12 December 2022 (UTC)
- I don't think so: there isn't any evidence that people are writing a bunch of articles with LLMs, and I don't think it is likely for this to happen (LLMs are very poorly suited to writing articles from scratch). jp×g 00:59, 13 December 2022 (UTC)
- @JPxG: There isn't evidence that people are writing a bunch of articles with LLMs -- yet -- the concern is that we need to prepare for the likely explosion of chatbot use.
Whether this increase happens tomorrow or over the next few years, the potential impact of LLMs is of such magnitude that we should get ready for this, rather than get hit unprepared by a major surge.
I don't agree with your assessment of LLM ability to write content, as some of the ChatGPT experiments presented in the sections above and below are mind-blowing!
- If LLMs become ubiquitous, then a great many people will be using them as a matter of course, including in their writing and editing of Wikipedia articles. Millions of people have edited Wikipedia in the past, and millions more will edit WP in the future. And in the future, people will have highly capable LLMs (chatbots, or more precisely: automated ghostwriters).
- LLMs already excel at writing about a great many things, and they have the potential to compile content at an exponentially increasing rate. If you ask ChatGPT (GPT3.5) to write an essay on a topic, it will comply. Each of its essays can be used as content of an article, or its sections. (GPT4 is scheduled to come out in 2023, and will be even more capable.) LLMs are very well suited for writing to the specifications of the user, and are limited mainly by the user's creativity.
- It's no wonder that they have gone viral. We need to take heed. — The Transhumanist 12:50, 14 December 2022 (UTC)
- I’ve ensured that the WMF Board will be made aware. Doug Weller talk 09:00, 18 December 2022 (UTC)
- Hello, I have recently described my experiences with an AI "article" in this video. In my humble opinion, it would be difficult to determine with certainty that new Wikipedia content was created by an AI. At the end of the day, it is always the editor's responsibility to add good content, independently of how the content was created, and independently of whether errors in the text are human-made or machine-made. If an editor adds a lot of new poor content, we can already stop that. - At the moment I don't see that we need a new policy. Ziko (talk) 18:06, 24 December 2022 (UTC)
- A policy banning AI usage (with or without a chatbot) would be justified. Allowing AI like GPT3 or GPT4 to be used by Wikipedia editors or to directly become Wikipedia editors (via a mediawikibot) would quite likely violate WP:REFLOOP due to Wikipedia content contributing to the AI's training material, and for the source-less examples I've seen, violate WP:SYNTHESIS by not being a summary of sources that are understood. This example starts with text and then seeks references to justify the WP:SYNTHESIS of the original text.
Use of Alphabet/Google's ChatGPT/GPT3 would also strengthen the bias introduced by Alphabet/Google's core goal of optimising advertising revenue, since Alphabet is legally bound to maximise its revenue (mainly from Google Ads + Google AdSense), not to optimise the research quality of its summaries of empirical evidence-based knowledge. Google's search engine is primarily a way of generating advertising revenue, with perceived usefulness being a key tool for maximising revenue, not a goal in itself. Boud (talk) 01:42, 31 December 2022 (UTC)
- @Boud, ChatGPT and GPT3 are in no way (as far as I know) related to Google, and were made by the non-profit OpenAI. — Qwerfjkltalk 03:49, 31 December 2022 (UTC)
- @Qwerfjkl: Fixed, thanks. I left some of the sentences unstruck since AFAIK they're valid, even though irrelevant in the current case. I imagine that Google may provide something similar soon though. Boud (talk) 09:02, 31 December 2022 (UTC)
- Boud, I am somewhat worried if you think that current policy (for humans or for anyone else) permits editors to make stuff up and put it into articles without sources. This simply isn't allowed -- per WP:V, WP:SYNTH, WP:RS, etc, which are extremely important core policies of the project. I am struggling to imagine a circumstance in which existing policies, or explicit declarations like my proposed guideline at WP:LLM, fail to prevent people from writing nonsense. jp×g 16:48, 1 January 2023 (UTC)
Wikipedia policy generation
It seems to me that this tool's training includes studying Wikipedia's policy pages. These drafts all seem accurate to me.
- guide for Use of title "Wikimedian in Residence"
- 3 year strategic plan for the WMF to fund the organized North America wiki community - meta:Talk:WALRUS/December_2022
- annual plan for Wikimedia LGBT+ - meta:Talk:Wikimedia_LGBT+#ChatGPT_advice_on_annual_plan
These are not merely adequate - these are good. They are short and they lack detail but these are great overviews. If this is the starting point and things only get better from here, then it is time to start adopting this technology. Bluerasberry (talk) 19:35, 13 December 2022 (UTC)
- @Bluerasberry The last one sounds like a "mission statement". I dislike phrases like "outreach and engagement initiatives" and a lot of that plan sounds ... kind of aspirational, and, well, vapid. It needs more "concreteness". Just my opinion. David10244 (talk) 06:08, 16 December 2022 (UTC)
- @David10244: That you react to it at all is a miracle to me. This is new AI technology attempted for the first time, and I think no one would immediately dismiss it as hopeless garbage. Soon enough there will be a dial that anyone will be able to turn from "vapid" to "concrete". Things are moving quickly!
- I have complaints too but when we need policy conversation starter in a hurry, this is better than nothing and I think even better than some of the starting points we use already. Bluerasberry (talk) 15:41, 16 December 2022 (UTC)
Large language models: capabilities and limitations
Over the last few hours, I have performed a number of experiments to demonstrate the capabilities and limitations of large language models (LLMs) such as ChatGPT and GPT-3, which can be viewed here:
Mostly, I have taken sample text from Special:Random, and attempted to show situations in which LLMs (in this case, mostly ChatGPT) are capable of making useful edits. The first task I set it to -- which bears repeating here -- is
"Large language model output should only be used in the process of editing Wikipedia if you are an intelligent editor who does not blindly paste LLM output into the edit window and press "save". Please format this markup as an extremely obnoxious floating box with loud colors and large text.
You can see the results of further prompts at the "introduction" section.
Here is what I have so far.
- Introduction: please tap the sign
- Wikitext formatting and table rotation
- Parsing of references, estimation of reliability (ARES FMG)
- Recommendations for article improvement or deletion (Qarah Daghli)
- Plot summary condensation (The Seminar)
- Extraction of additional content from reference metadata (Aradus paganicus)
- Proseline repair (Scott Tremaine)
- Identification and tagging of unreferenced statements (KBVA)
- Table formatting and color schemes (Ashcroft-Cache Creek-Clinton Transit System)
- Miscellaneous HTML
- Custom userbox generation
In general, it seems that these models can be used for an extremely wide variety of tasks across the project, from formatting to table syntax to HTML generation to copyediting. Banning their use entirely would be pointlessly destructive and wasteful.
That said, many computer programs are capable of generating large amounts of useless crap that fail to meet Wikipedia's editorial standards. For example, I could use MS Paint to draw thousands of crude pictures of genitalia, and add them to random articles. For this reason, we have many policies and guidelines that prohibit adding large amounts of useless crap to Wikipedia. I propose that we enforce these policies and guidelines, thus preventing this from happening.
Specifically, I propose that the use of LLM output on Wikipedia be subjected to policies and guidelines such as WP:NOT, WP:NPOV, WP:C, WP:CIVIL, WP:V, and WP:RS. By making it against the rules to break the rules, we will prevent people from breaking the rules, and provide a mechanism to sanction people who break the rules.
Furthermore, I propose that a guideline be adopted to the effect that large language model output should only be used by competent editors who do not blindly paste LLM output into the edit window and press "save". This will prevent people from using ChatGPT to write long articles consisting entirely of nonsense. jp×g 01:32, 12 December 2022 (UTC)
- LLM output is already subject to rules and policies. Or rather, anyone adding it is. 'An algorithm did it' has never, as far as I'm aware, been seen as any sort of exception from compliance with policy. AndyTheGrump (talk) 01:49, 12 December 2022 (UTC)
- Any policy/guideline that classifies editors as intelligent or not is dead in the water. Zerotalk 04:19, 12 December 2022 (UTC)
- The opening paragraph of the bot policy: "The bot policy covers the operation of all bots and automated scripts used to provide automation of Wikipedia edits, whether completely automated, higher speed, or simply assisting human editors in their own work."
- See also: WP:BOTUSE, which requires approval before applying a bot to editing.
- So, the use of large language models, and the chatbots built upon them, is already prohibited on English Wikipedia, unless a user gets approval from the bot department to do so.
There are blanket exceptions to bot policy, and the main one that comes to mind is AutoWikiBrowser, which is a general-purpose semi-automated bot used by many Wikipedia editors. Each AWB user was approved before being able to use it. — The Transhumanist 08:01, 13 December 2022 (UTC)
- The meaning of "bot" may be unclear here. In the context of Wikipedia (per Wikipedia:Bot_policy#Definitions), a "bot" is a software program that edits autonomously without user input; there do not currently exist any language models capable of independently establishing API connections to Wikipedia and making edits without human interaction. If they did (this is a horrible idea) it would be covered under the bot policy and require a WP:BRFA. The policy under which BRFAs are required does not apply to assisted editing (i.e. the use of software to create letters, numbers and symbols that were not produced by a human being pressing a keyboard). This is governed by existing policies (such as WP:MEATBOT and by the guideline at WP:ASSISTED. jp×g 09:28, 13 December 2022 (UTC)
- @JPxG: The entire news field refers to ChatGPT as a chatbot. It is general consensus that it is a bot. ChatGPT speeds up writing, by doing it for (that is, assisting) the user, which falls under the "higher speed" and "assisting human editors" foci of the bot policy. There is a passage in the bot policy that covers policy contradictions (such as between the lead and definitions sections), and situations where the spirit of the rule and its precise wording conflict, that is, cases of ambiguity. In its definition of "Bot Approvals Group" (BAG), the bot policy states: "The BAG also determine the classification as bot or assisted editing, in ambiguous cases." According to WP:ASSISTED, it is up to the Bot Approvals Group to decide whether bot approval is necessary. Based on the previous 2 sentences, BAG decides whether use of particular software falls under its jurisdiction. It remains to be seen what BAG's reaction(s) to LLMs, and the chatbots built upon them, will be. — The Transhumanist 11:10, 14 December 2022 (UTC)
- I do not think you are properly acquainted with how this software works: like I said, there do not currently exist any language models capable of independently establishing API connections to Wikipedia and making edits without human interaction. No media outlet has ever claimed that ChatGPT falls under the English Wikipedia's definition of an automatic bot – and even if they did, they do not determine policy. It is true that WP:MEATBOT and WP:ASSISTED are part of the bot policy, but there is a very clear definition of what a "Wikipedia bot" is, and it's defined by that same policy. At any rate, all edits (whether made by bots, software, humans using software, aliens using software, or Nagato Yuki psionically connecting to Wikimedia servers) are governed by existing policies and guidelines. To specifically address LLM output, a new policy would need to be written and ratified (which I am currently drafting a proposal for). jp×g 11:26, 14 December 2022 (UTC)
- @JPxG: I believe the bot policy has wider jurisdiction than the narrow interpretation that you have presented. Establishing API connections is irrelevant, because a human is inserting bot-generated content. It's a bot-involved process. And those are encompassed by the bot policy which makes it up to BAG. A new policy could establish an exception, and I imagine the discussions will be extensive, as this is not a cut and dried case -- it is a sensitive issue with many potential ramifications. But, until such a policy is in place, this issue falls under BAG's jurisdiction, since they are the ones who decide the classification of a software program as it pertains to the bot policy. — The Transhumanist 11:52, 14 December 2022 (UTC)
- (edit conflict) WP:ASSISTED is deliberately vague to not restrict use of common scripts and tools. So it specifically says that only once volume becomes significant, such editing becomes more likely to be treated like a bot and BAG can determine this. It doesn't make it a bot, but it will be treated like a bot. We've never encountered any large-scale edits with LLM before, but we sure have seen a lot of high-volume editing. Half the bot policy only exists because of all the ways editors have inadvertently created issues with mass edits. So at that point, other parts of the policy start to matter, notably WP:CONTEXTBOT - which does not allow edits where context matters. I'm not saying copy-pasting LLM output is immediately covered by bot policy, nor does it matter whether anyone considers LLM to be a "bot". But bot policy will kick in once someone starts to make a lot of edits. And any new guideline will have to reconcile with this or we need to change bot policy to reconcile with LLMs. — HELLKNOWZ ∣ TALK 12:02, 14 December 2022 (UTC)
@JPxG: Another possible approach for vetting users for use of LLMs is via user group membership (aka "rights"). Here are our current user groups:
Code | User group |
---|---|
AC | Account creator |
Ad | Administrator |
AP | Autopatrolled |
B | Bureaucrat |
Ch | CheckUser |
Co | Confirmed |
ECo | Extended confirmed |
EFH | Edit filter helper |
EFM | Edit filter manager |
EM | Extended mover |
EvCo | Event coordinator |
F | File mover |
IM | Import |
IAd | Interface administrator |
IP | IPblock-exempt |
MM | Mass message senders |
N | New page reviewer |
O | Oversighter |
Ro | Rollbacker |
Rs | Researcher |
Rv | Pending changes reviewer |
TE | Template editor |
These indicate membership in user groups (see: user access-levels). They pertain to who is granted access to various features of MediaWiki and its extensions. Theoretically, a user group could be created without being attached to a program function (that part could just be left blank?). For example, you could have a group called "LLM", with everyone in that group approved to use large language models in their editing. I don't know if this is doable, though. — The Transhumanist 08:01, 13 December 2022 (UTC)
- I do not think there is anything in our P&G that would directly prohibit use of content created by LLMs, nor do I think it would be a good idea to try to do so. All that is needed is to continue to hold individual editors responsible for all edits they make, including the copying of content from any source, whether from LLMs or other sources. We probably should add language in appropriate places reiterating that editors are reponsible for insuring that all content that they add, including anything produced by an LLM, meets our P&G. - Donald Albury 13:12, 13 December 2022 (UTC)
- @Donald Albury: LLMs automate writing (edits). The rules are very clear on this: it falls under WP's bot policy, in the very first sentence.[1] Therefore, it would require a new policy to allow use of LLMs without need for approval from the Bot Approvals Group (BAG). — The Transhumanist 09:24, 14 December 2022 (UTC)
- If someone uses an unapproved script or bot to edit WP, that is a violation of the bot policy, whether or not they use an LLM to generate any content being added. If someone uses an LLM to create text which they then copy into Wikipedia without using an unapproved script or bot, that is not covered by the bot policy, but the user remains responsible for ensuring that the content conforms with policy and guidelines. There is no point in banning content created by LLMs, as we already require that content be verifiable from reliable sources, and I doubt we will be accepting any content created by an LLM as a reliable source anytime soon. The danger is that LLMs may create potential content with citations to pseudo-sources, but we can go after users repeatedly adding such content to WP for abusing the policies on verifiability and reliable sources, without regard to whether such content came from an LLM. Donald Albury 13:48, 14 December 2022 (UTC)
- I don't think it's plausible that LLMs are covered by the bot policy. If they were, grammar checkers, spell checkers, and machine translation would be "bots". Jahaza (talk) 19:49, 14 December 2022 (UTC)
- @The Transhumanist: ChatGPT falls under Wikipedia:Bot policy, but per the definitions section it does not fall within that policy's definition of a bot. Rather, use of it would fall under the definition of "assisted or semi-automated editing", and the relevant policy section is Wikipedia:Bot policy#Assisted editing guidelines. The section doesn't aim to draw a 100% hard line, but my reading is that limited use of ChatGPT for clean-up on a limited number of articles by a user in a limited, closely-supervised way may be something users can do if they are trusted to apply their common sense. It is "Contributors intending to make a large number of assisted edits" who "are advised to first ensure that there is a clear consensus that such edits are desired." Limited use of ChatGPT to a lesser degree than would trigger this may currently be outside policy. In any event "A bot account should not be used for assisted editing".
- It seems to me that an addition to the policy along the lines suggested by User:JPxG to address this potential hole might well be useful, eg "tools capable of assisting editors make substantial edits (for example large language model output) should only be used by competent editors who do not blindly paste tool output into the edit window and press "save"." Jheald (talk) 19:36, 18 December 2022 (UTC)
- In what way does the bot policy cover ChatGPT? Just because it is called a "chatbot" doesn't mean it is a bot. Copying text from GPT-3 doesn't automatically become bot-like editing. Semi-automated edits? I'd call that borderline. It only becomes a problem (e.g. meatbot problems) if the amount of supervision needed to save an edit is below normal editing, and the speed of the edits is above normal. (see AWB, Huggle, etc.) 0xDeadbeef→∞ (talk to me) 10:20, 20 December 2022 (UTC)
- The thing is, any LLM additions will inevitably be both faster than writing manually and, due to its confidently-wrong output, less reviewed. Otherwise, why would anyone bother with it? I feel that assuming that editors will spend just as much time to carefully review the LLM output is wishful thinking. I'd like to be proven wrong, but I have never seen any precedent on Wikipedia that better tools would lead editors to spend the time saved to further verify the tool output. If anything, tools only create induced demand. — HELLKNOWZ ∣ TALK 21:18, 20 December 2022 (UTC)
- I don't think we need to do anything in particular. There is plenty of confidently-wrong content being added to Wikipedia by human editors already and we're dealing with that as well as we can. I think the intersection of 'Editors who will use a cutting-edge AI to generate content' and 'Editors who will do this without validating the output' is a very small overlap and will be of such small volume to be picked up by other editors as usual. A huge influx will be detected in aggregate, and we can deal with that if it becomes a problem in the future. If someone uses LLM to generate confidently-right content or articles, that's indistinguishable from content generated by a competent human, I refer you to xkcd: Constructive! A simple but unobtrusive first step may be to tag an edit as 'generated by AI', or maybe just ask editors to add a tag to their user pages if they regularly do so, but the intersection of problematic users who also follow this would be basically non-existent. JeffUK (talk) 10:40, 24 December 2022 (UTC)
- @JeffUK, Hellknowz, 0xDeadbeef, Jheald, Jahaza, Donald Albury, JPxG, and AndyTheGrump:
- So, wait until after it becomes a huge influx/problem, and only start to deal with it then? What if a solution takes weeks or months to develop?
- By the way, what might the solution be for a huge influx of LLM-generated content, and how long would such a fix likely take? — The Transhumanist 11:32, 2 January 2023 (UTC)
- I am having trouble understanding what you are talking about at this point. I wrote WP:LLM some weeks ago, a gigantic proposal for a comprehensive guideline on the use of LLMs, and linked it multiple times on this noticeboard. While it is not complete, it seems to me like it covers everything you are talking about here. Do you have an opinion on it at all, or...? jp×g 15:39, 2 January 2023 (UTC)
- @JPxG: I was responding to JeffUK's statement "A huge influx will be detected in aggregate, and we can deal with that if it becomes a problem in the future." Intently waiting until something becomes a huge problem before you start dealing with it sounds like a disaster waiting to happen. Also, what good are guidelines going to do if the average person is using chatbots on a regular basis? People just jump in and edit Wikipedia without reading any project-level pages first. If there's a huge influx, and all you are doing is holding up a sign that says "Read this", what good will that do? You haven't addressed how the problems associated with a potentially huge amount of chatbot input (in the form of one-off edits from a large number of people) would be prevented or processed. One solution is to fix the chatbots themselves, so that they don't generate Wikipedia-incompatible content in the first place, which would require working with the developers. A second method would be to create bots to detect and remove either chatbot-generated content, or if possible, policy-breaking content. Simply writing policy and hoping no flood comes just doesn't seem like a viable approach should a flood hit. That approach may work for the first 3 or 4 years, but what if the flood comes in the 5th year and Wikipedia isn't prepared? We will have wasted 5 years that could have been spent preparing. Maybe we'll be lucky and chatbots will be smart enough to read and follow your guidelines. But if they are not? Fortunately, Doug Weller has passed word along to the Wikimedia Foundation. Maybe they will do something other than write editing guidelines. — The Transhumanist 03:09, 10 January 2023 (UTC)
- @JPxG although I haven't read more than the beginning, I'm also worried about AIs creating images. For instance, I've seen some extremely convincing ones of fake archaeological sites and artefacts. Couldn't people pass them off as their own photos? Or am I missing something? Doug Weller talk 17:06, 2 January 2023 (UTC)
- Yeah, that's a whole other deal. This proposal is only for large language models. Large image models will probably need to be governed by something much more imaginative. jp×g 17:11, 2 January 2023 (UTC)
- If someone's editing is inappropriate, the solution will be notifying them it's inappropriate, warning them, then banning them if they don't stop. There are ways for incompetent editors to make massive plausible-seeming changes to the encyclopaedia right now, e.g. by copy/pasting content from other places or just writing in made-up 'facts'; LLMs really won't make this any easier for someone who's intent on doing this. JeffUK 18:09, 2 January 2023 (UTC)
- I'm kind of wondering what in the heck we're standing to gain by creating any sort of policy surrounding ChatGPT and its ilk. 🌈WaltCip-(talk) 13:24, 9 January 2023 (UTC)
- If these AIs are used in some way for a large scale creation of articles, I think that will be a disincentive for a lot of editors and may drive some away. I disagree with JeffUK on the simplicity of dealing with this. First, you need to be able to spot them and that's work. Secondly, that also assumes that the numbers will be small. Doug Weller talk 15:48, 9 January 2023 (UTC)
- @WaltCip: None. Most editors don't read policy before editing. That's because most editors post very infrequently. But there are a lot of them, and they have authored most of Wikipedia. What happens when they are all using chatbots, much in the way that most everyone today uses Google? — The Transhumanist 03:34, 10 January 2023 (UTC)
- "Someone." Apparently, you are assuming it will be one person, or a small enough number to be handled manually. But, what if over the next few years chatbots become ubiquitous with almost everybody using them? How will you deal with it when half the content contributions to Wikipedia are being generated using chatbots? — The Transhumanist 03:26, 10 January 2023 (UTC)
Develop software to detect it?
Someone seems to have done this; see A college student created an app that can tell whether AI wrote an essay. Maybe the WMF should look into software detection of AI material? Doug Weller talk 14:45, 10 January 2023 (UTC)
- We have mw:ORES that uses machine learning to detect vandalism, so the infrastructure is already in place. All we need to do now is to add the dataset. CactiStaccingCrane (talk) 16:06, 12 January 2023 (UTC)
- (ORES is used for these "likely have problems" and "likely bad faith" highlights in Special:RecentChanges) CactiStaccingCrane (talk) 16:08, 12 January 2023 (UTC)
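For readers unfamiliar with ORES, here is a minimal sketch of how the existing scoring service is queried today; the revision ID below is a placeholder (substitute any enwiki revision ID), and the presumption that an "LLM-likeness" model would be exposed the same way is only an assumption about how such a dataset might eventually be used.

```python
import requests

# Ask ORES to score one revision with its existing enwiki models.
# "1234567890" is a placeholder revision ID, not a real example.
url = "https://ores.wikimedia.org/v3/scores/enwiki/"
params = {"models": "damaging|goodfaith", "revids": "1234567890"}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

# The response nests per-wiki, per-revision, per-model scores; each model
# reports a prediction and class probabilities.
scores = resp.json()["enwiki"]["scores"]["1234567890"]
print(scores)
```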
- Even if ORES is up to the task, and it isn’t perfect now, you still need enough editors to deal with large numbers. Doug Weller talk 18:21, 12 January 2023 (UTC)
- @Doug Weller, or a bot. — Qwerfjkltalk 21:02, 12 January 2023 (UTC)
- @Qwerfjkl: What would the bot do? — The Transhumanist 22:38, 12 January 2023 (UTC)
- @The Transhumanist, revert additions and/or tag articles. — Qwerfjkltalk 07:03, 13 January 2023 (UTC)
- Personally, I'd prefer that, if this is implemented, ClueBot prefer to tag before a full-on reversion, except in some cases. I think that ClueBot should use the same exceptions as 3RR for reverting possible bot stuff, though this might require that chatbots be banned from BLP-covered articles. Everything else which could be problematic but not a 3RR exception could be placed at a specific page... maybe have a specific preset filter on recent changes? InvadingInvader (userpage, talk) 07:43, 6 February 2023 (UTC)
- @CactiStaccingCrane, Doug Weller, and Qwerfjkl: All we need to do is add what data set? You make it sound easy (keeping fingers crossed). What does that entail? — The Transhumanist 22:38, 12 January 2023 (UTC)
- OpenAI have announced they are adding in some kind of lexical watermark that can be used to identify any output from ChatGPT. scope_creepTalk 13:08, 16 January 2023 (UTC)
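OpenAI has not published the scheme, so the following is only a toy illustration of how a lexical watermark could be detected in principle; the hashing rule and the 50% "green list" are invented for the example and are not OpenAI's actual method.

```python
import hashlib
from math import sqrt

def is_green(prev_token: str, token: str) -> bool:
    # Toy rule: roughly half of all tokens count as "green" after a given
    # previous token, chosen pseudorandomly by hashing the pair.
    digest = hashlib.sha256((prev_token + "\x00" + token).encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    # A watermarking generator would bias sampling toward "green" tokens, so
    # watermarked text shows an improbably high green fraction; ordinary text
    # stays near the 50% baseline (z-score near zero).
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    n = len(pairs)
    if n == 0:
        return 0.0
    greens = sum(is_green(p, t) for p, t in pairs)
    return (greens - 0.5 * n) / sqrt(0.25 * n)

print(watermark_z_score("Ordinary human prose should score close to zero here."))
```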
- @Scope creep, currently the "Overall," beginning the concluding paragraph is watermark enough. — Qwerfjkltalk 20:36, 17 January 2023 (UTC)
- For example, see the edit linked in this comment. — Qwerfjkltalk 20:42, 17 January 2023 (UTC)
- @Scope creep, currently the "Overall," beginning the concluding paragraph is watermark enough. — Qwerfjkltalk 20:36, 17 January 2023 (UTC)
- OpenAI have annouced they are adding in some kind of lexical watermark than can be used to identify any output from ChatGPT. scope_creepTalk 13:08, 16 January 2023 (UTC)
- (ORES is used for these "likely have problems" and "likely bad faith" highlights in Special:RecentChanges) CactiStaccingCrane (talk) 16:08, 12 January 2023 (UTC)
Other inherent problems only partially touched on
Other inherent problems only partially touched on:
- Editing articles involves also understanding what is already in the article and how it is organized plus understanding and interpreting policies and guidelines.
- What's unspoken but runs through many things, including current Wikipedia, is sort of a commensurate investment. You can get volunteers to take their time to review and deal with issues because they know they are dealing with something that an editor has invested time in to create. That is part of the reason that we don't allow mass creation of articles by bots. In other words, we'd significantly lose volunteer efforts.
- Modern AI is an inherently unaccountable black box. There is no way to see or interrogate or demand/receive accountability or reasoning for how it arrived at what it arrived at.
- If gibberish or semi-gibberish is created, it normally requires an expert to spot and remove it... a very scarce resource. I once uncovered a set of technical-subject articles (about 100 articles as I recall) which looked very technical and Wikipedian and were sourced, but if you knew the subject you knew they were pure gibberish.
Sincerely, North8000 (talk) 22:23, 10 January 2023 (UTC)
- Thanks. I agree entirely. Doug Weller talk 09:04, 11 January 2023 (UTC)
- Well said. I think that folks are overestimating the ability of our review processes to detect "vaguely plausible bullshit" - it's not very common for human editors to fill in blanks with made-up facts and numbers, and I'm not sure that AfC or NPP are checking for this as it would greatly increase their workload. –dlthewave ☎ 19:42, 11 January 2023 (UTC)
- "[I]t's not very common for human editors to fill in blanks with made-up facts and numbers." Maybe not when adding content, but I see this happen all too often in edits to temperature tables in climate sections. Of course, the tell there is changing temperatures without citing a source or commenting about correcting from a cited source. - Donald Albury 20:17, 11 January 2023 (UTC)
- As I recall, the big one that I caught looked like some type of exposé project or research project to see if such a scam could get far in Wikipedia. It was sort of a mashup of words from actual sources. Total nonsense, but a typical reader might think it was simply over their head. North8000 (talk) 21:25, 12 January 2023 (UTC)
- @North8000, Doug Weller, Dlthewave, and Donald Albury:
- In answer to the 3rd point above (the black box issue), Perplexity.ai, an AI search engine with a chatbot interface, provides source references with its answers. That is, the references are the search results, while the answer provided is compiled or interpreted from those web pages. So, at least the sources can be checked for verification. But, there are still problems with it. See the perplexity.ai section below. — The Transhumanist 19:56, 21 January 2023 (UTC)
Summary of discussion so far
@Aquillion, Andrew Gray, Fram, Levivich, Ritchie333, 0xDeadbeef, ONUnicorn, JPxG, EpicPupper, Sojourner in the earth, Dlthewave, Doug Weller, Qwerfjkl, CactiStaccingCrane, WaltCip, JeffUK, Hellknowz, Zero0000, AndyTheGrump, Bluerasberry, David10244, Boud, Ziko, Pharos, Andrew Gray, WhatamIdoing, Tazerdadog, Barkeep49, Tigraan, Blueboar, MJL, PerfectSoundWhatever, Koziarke, SmallJarsWithGreenLabels, Isaacl, Lee Vilenski, Thebiguglyalien, Hanif Al Husaini, and Xeno:
Highlights of the discussion so far:
- Chat-GPT is taking the world by storm (translation: it has gone viral).
- Chat-GPT, and other LLM-based chatbots, can generate compositions, some good enough to pass as college-level essays.
- Wikipedia is included in the corpus (training data) of Chat-GPT (and other chatbots).
- Such software has the potential to be used for:
- Generating Wikipedia content, including writing new articles and adding new material to existing articles.
- Generating Wikipedia policy content.
- Generating discussion content, such as on policy talk pages. That is, editors using it to write their discussion replies for them.
- Editing articles, including rewrites, and using chatbots as a grammar checker.
- Editing other namespace pages, such as policy pages, etc.
- "Can be used for an extremely wide variety of tasks across the project, from formatting to table syntax to HTML generation to copyediting." (quoting JPxG)
- Creating hoaxes with less effort.
- Most Chat-GPT output lacks citations.
- Some experiments were run, showing that Chat-GPT:
- Copies writing styles very well.
- Has a tendency to make things up, yet presents it as fact in an encyclopedic tone. One editor dubbed this "confident nonsense". In one experiment, Chat-GPT created an article reporting that Wikipedia's own Signpost newsletter was the recipient of several Pulitzer Prizes.
- Can include references, but some of the references were made up and totally fictitious.
- Some references cited Wikipedia (an ineligible source for Wikipedia articles).
- One of the experiments generated instructional content, a recipe, that the user followed, and ate the results of.
- Another experiment used Chat-GPT to answer hypothetical questions in the style of WP's teahouse department. It worked fairly well.
- Yet another experiment created a sample policy page, showing that chatbots are not limited to editing articles. They can generate or edit pretty much any type of page on Wikipedia, except files (images).
- Chat-GPT output is not fact-checked.
- Chat bots don't actually understand what they are writing.
- When used responsibly as a tool, with editors carefully prompting the chatbot, and editing and fact checking its output before posting it to Wikipedia, a chatbot can be very useful and increase editor productivity: the LLM GPT-3 was successfully used to create department reports for Wikipedia's newsletter, The Signpost.
- JPxG conducted an experiment/demonstration to show that Chat-GPT is a sophisticated interactive editing tool, which you tell it what you want it to do to a textual work, and then it does it. See it here: User:JPxG/LLM demonstration.
- It was pointed out that Wikipedia policy already covers all contributions, whether generated by chatbot or human. Ultimately, the user is responsible for material they copy and paste into Wikipedia.
- Issues of concern that were raised include:
- Users copying chatbot-generated text into Wikipedia without carefully editing and fact-checking it first.
- Confident nonsense (misinformation generated by chatbot) may be hard to spot.
- The potential of chatbots to violate copyright, by directly copying, or generating text based on, copyrighted works.
- Violating Wikipedia's licenses, most notably the attribution requirements. Chat-GPT output generally does not include attributions.
- A chatbot-edited Wikipedia could wind up in the training data for those same chatbots (or their next versions), creating a potentially error-compounding feedback loop.
- The suggestion was made to prepare for a potentially large future increase in chatbot entries to Wikipedia, by:
- Working with chatbot developers to make chatbot-generated output Wikipedia compatible.
- Developing bots to identify and process chatbot entries.
- No consensus has emerged on what the Wikipedia community should do about LLMs/chatbots. Some editors think that policies/guidelines and the current editor pool could handle any influx of chatbot generated edits. Some other users were concerned that there is potential for LLM/chatbot contributions, such as one-off edits by members of the general population, to overwhelm our pool of editors. One user pointed out that it may take experts to discern nonsense articles, and experts on Wikipedia are a scarce resource.
- Consensus did emerge on something not to do. It was agreed that banning chatbot-generated content was not a good idea at this time, and probably wouldn't work anyways.
- Software has been developed to identify Chat-GPT-generated text.
- It appears some editors may take the initiative to prepare for a worst-case scenario (chatbot input going beyond our editor pool's ability to handle), and discussion on how to do this has begun.
- WP:ORES could theoretically be trained to identify chatbot edits.
- The Wikimedia Foundation has been contacted about the concern over LLMs/chatbots, presenting a contact there with a link to this and a previous discussion.
Did I miss anything? — The Transhumanist 01:22, 13 January 2023 (UTC)
- AI-generated images are rapidly becoming a Big Thing, so it is not correct to exclude them. Also, "Wikipedia policy already covers all contributions, whether generated by chatbot or human" is misleading as it is true only by accident. A more precise description would be "Wikipedia policy was written without any consideration of chatbots". Zerotalk 03:28, 13 January 2023 (UTC)
- I'm concerned about "Such software has the potential to be used for: creating content farms that good-faith human editors, including existing experienced editors, will sometimes mistake for reliable sources when they are writing content".
- Also, the statement that "Software has been developed to identify Chat-GPT-generated text" is true, but not relevant for very short contributions. Some of this is using sentence length, and you won't be able to identify an abnormal sentence length if you only look at two or three sentences. WhatamIdoing (talk) 05:33, 13 January 2023 (UTC)
- Anyway, detection methods that work now won't work in the next generation. Eventually (and not far in the future) distinguishing between human-written and computer-written prose will be impossible for practical purposes. This is going to be the greatest threat to Wikipedia since its founding. Zerotalk 05:51, 13 January 2023 (UTC)
- @Zero0000: When do you suppose the impossible-to-distinguish scenario will be here? Two years? Less? — The Transhumanist 13:42, 25 January 2023 (UTC)
- I have started Category:Wikipedia essays about artificial intelligence. Perhaps folks here would like to add to the collection, and document yet more thoroughly! Pharos (talk) 01:07, 14 January 2023 (UTC)
- Adjacent to hoaxes there's also the likelihood of spammers using GPT to bulk out their edits. I strongly suspect that the text of this edit today, writing repetitively about a static sculpture as if it was a functional scientific instrument, was generated with GPT-3, probably giving it a prompt to explain Orbital Reflector in terms of dark matter and black holes, the subject of the two embedded spam links. Belbury (talk) 11:39, 17 January 2023 (UTC)
- I just ran that through the GPT-2 output detector and it estimated a 99.97% chance that that passage was AI-generated. — rsjaffe 🗣️ 03:31, 24 January 2023 (UTC)
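For anyone who wants to reproduce that kind of check locally, a minimal sketch using the publicly released RoBERTa-based GPT-2 output detector follows; it assumes the checkpoint is available on Hugging Face under the name shown, and its scores are only rough signals, not proof of authorship.

```python
from transformers import pipeline

# Load the RoBERTa classifier fine-tuned to separate GPT-2 output from
# human-written text ("Fake" = likely machine-generated, "Real" = likely human).
detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = "Paste the suspect passage here."
result = detector(passage, truncation=True)[0]
print(result["label"], round(result["score"], 4))
```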
Chat-GPT spreading fast
The situation is changing rapidly:
Chat-GPT may become ubiquitous sooner than previously thought, and so far, identification methods have fallen flat...
Here's some recent news:
- ChatGPT Will Be Everywhere in 2023 (CNET)
- Microsoft is reportedly integrating ChatGPT's technology into Bing (Yahoo)
- Microsoft is looking at OpenAI’s GPT for Word, Outlook, and PowerPoint (The Verge)
- There's a Problem With That App That Detects GPT-Written Text: It's Not Very Accurate (Futurism.com)
With the user base for Chat-GPT about to explode, the potential for Chat-GPT-generated text being added to Wikipedia will explode right along with it. It's looking uncertain whether or not Wikipedia's editor community will be able to keep up with the influx. In light of recent events, what should be done about this? — The Transhumanist 03:21, 13 January 2023 (UTC)
- As well as being able to write plausible-looking prose on any subject, computers can also be programmed to add it to Wikipedia all by themselves. The first task is to absolutely ban computers from editing, with the sole exception of authorized bots. The second task is to add to appropriate policy pages that all content (authorized bots excepted) must be added by a human and that that human is responsible for checking policy conformance of the content. Zerotalk 08:06, 13 January 2023 (UTC)
- I couldn’t agree more. Does anyone have objections? Doug Weller talk 10:45, 14 January 2023 (UTC)
- I was hoping to get more feedback on WP:LLM from having posted it here, but either way, I think it is pretty close to ready for consideration as a guideline (or policy, as appropriate)... based on the conversations I've had (and seen) I am prepared to write an RfC for its adoption. jp×g 11:00, 14 January 2023 (UTC)
- @JPxG: The guideline is not ready. It is nowhere near complete, and it needs a rewrite. Here are some proofreading notes:
- It's way too redundant, repeating policies and itself, without explaining how to get the job done. Aside from the "fit for" sections, the rest of the page can be reduced to a single paragraph.
- It presents -- should only be used by competent editors who do not indiscriminately paste LLM output into the edit window and press "save" -- four times! Someone who is incompetent isn't going to be able to judge whether or not they are. Also, "indiscriminately" is vague. That entire sentence should be removed.
- Editors need to know what they need to do to the text before they can press "save". For example, you alluded to a manner of using LLMs in compliance with WP copyright policy, but you didn't explain how. How can an editor be sure that an LLM-generated piece doesn't violate someone's copyrights? What's the procedure?
- Rather than covering "good fit" and "not good fit", the guideline should present explicit instructions: "Use it for this" and "Do not use it for this". And then explain how.
- I hope you find these observations and comments helpful. Sincerely, — The Transhumanist 08:04, 16 January 2023 (UTC)
- I agree that the page is not finished, but I don't really know what you are objecting to here. It kind of sounds like you are inventing problems – if users don't know how to check if things are true before putting them into Wikipedia articles, they shouldn't be editing at all. If users don't understand what copyrighted material is, they need to read Wikipedia:Copyright policy, which is linked to from this page when it's mentioned. That is an explanation of how to get the job done. It should not be necessary to create an exact copy of Wikipedia:Verifiability that says "When using a LLM," at the beginning of every sentence. jp×g 08:29, 16 January 2023 (UTC)
- @JPxG: How can users understand what the copyrights of Chat-GPT's output are? Chat-GPT doesn't provide sources, nor does it report if it copied or derived the passage from a particular work. So, how do you go about checking whether or not a particular Chat-GPT response is in violation of copyright, so that "pasting its output into the edit window and pressing 'save'" is not considered "indiscriminate"? Also, it isn't clear who owns the copyrights to the output of an LLM: the public domain, the owner of the LLM, the user of the LLM, or the owners of the copyrights of the works included in the training data set? The breadth of this problem is discussed in #Copyright status below. — The Transhumanist 00:08, 19 January 2023 (UTC)
- There already exist a very large number of policies about copyrighted text. Editors are subject to these policies. These policies contain information on how to avoid copyright violations. If you asked GPT-3 to tell you the lyrics to Moonage Daydream, they would be copyrighted. If you found the same lyrics by typing "moonage daydream lyrics" into Google, they would be copyrighted. What is the difference? Policies do not (and cannot) cover every hypothetical person and situation to which they could be applicable: we do not have a separate WP:COPYRIGHT for old editors, WP:COPYRIGHT for young editors, WP:COPYRIGHT for male editors, or WP:COPYRIGHT for female editors. WP:COPYRIGHT applies to all editors regardless of their age, race, gender, or whether they are human or machine. I don't know how to explain this in further detail. jp×g 01:06, 19 January 2023 (UTC)
- @JPxG:
- You've missed the points entirely (there were two, and you only replied to one).
- Those policies you mentioned do not deal with the copyright problems presented by black box chatbots, nor do they warn about the dangers of pasting in chatbot output.
- Search engine search results are excerpts from web pages that the search results identify — which facilitates verification. Chat-GPT and other black box chatbots answer questions in natural language, without telling the asker of the question where the information came from — which does not facilitate verification — while presenting it in a very confident and scholarly tone.
- This may result in a great deal of misinformation being posted to Wikipedia, where it will sit until somebody else removes it. The delay between those 2 events can be lengthy, especially for material that seems plausible. So, it might be a good idea to provide guidance specific to chatbot usage pertaining to copyrights -- at least some caveats on which chatbots to avoid.
- Another problem is that we don't know where the training data came from. There could be deep web data in there as well. That can't be easily accessed to check for plagiarism. So, is it a good idea to use blackbox chatbots? There are transparent surface web chatbots that include references for verification, so maybe we should recommend that the blackbox ones be avoided.
- Now, for the second issue (the one that you skipped): WP policies do not cover prompting a chatbot to write material. The copyrights to material that is written by a chatbot are owned by whom? The user? That has not yet been established! What stance is going to be taken by Wikipedia, and what guidance are we going to provide on this issue? — The Transhumanist 09:17, 25 January 2023 (UTC)
- I feel like either you are not reading what I'm saying, or we have some kind of insurmountable disagreement about what letters and words are. jp×g 09:26, 25 January 2023 (UTC)
- @JPxG:
- You've just fed me a variation of "you're not listening", with a little barb attached to the end. Really? That's who you are? I'm disappointed.
- I read what you wrote, and I highly disagree with what you are saying...
- You are saying that current copyright policy is enough: it prohibits copyrighted works from being posted to Wikipedia without the permission of the copyright holder, and that it is up to the editor to make sure that the material does not violate anyone's copyrights or Wikipedia's copyright policies.
- My positions are...
- 1) that black box chatbots pose the danger of luring editors into violating copyright policy, that we may be faced with a deluge of copyright-violating derivative material because of it, and that some additional guidance would be appropriate: Like avoiding black box chatbots in favor of transparent ones, and...
- 2) that the copyrights to the natural language output composed by chatbots are unclear — what is clear is that the editor didn't write it. Since the editor didn't write it, does that mean that the editor does not own the copyrights to it? And if editors don't own the copyrights, should they be giving it to Wikipedia? Wikipedia should form a stance on the copyrights of chatbot-generated output and present editors with guidance on this issue as well.
- You have apparently been avoiding replying to those positions, and so my guess is that you are opposed to them. I strongly oppose the let's-stick-our-heads-in-the-sand approach that you support. — The Transhumanist 10:55, 25 January 2023 (UTC)
- P.S.: I don't think anything in writing is going to be enough. I expect that it will take software programming to deal with the problems Wikipedia will be subjected to by chatbot compositions. And that is beyond the scope of this venue. ;) — The Transhumanist 11:31, 25 January 2023 (UTC)
To act or not to act
- Like DALL-E last year, or NFTs the year before that. I'll believe it when I see it, and I can't see the value in spending even more time discussing a hypothetical future threat to Wikipedia. – Joe (talk) 08:39, 13 January 2023 (UTC)
- The scariest yet most plausible thing is that this is happening with some of the articles but we aren't aware of it. I don't think raising awareness on this issue is a bad thing given how fast AI advances nowadays. CactiStaccingCrane (talk) 00:39, 14 January 2023 (UTC)
- Sorry, I missed the "recent events". Where is the evidence for GPT problems on Wikipedia? —Kusma (talk) 11:57, 13 January 2023 (UTC)
- Simple way to defeat these AIs: train them on how humans argue about WP policy… then ask them whether AIs are reliable (pro and con)… then set them against each other on a dedicated talk page. While they argue, we can continue editing. Blueboar (talk) 01:58, 14 January 2023 (UTC)
- Where's the evidence that Wikipedia can't cope with AI generated articles? doktorb wordsdeeds 14:07, 14 January 2023 (UTC)
- @Doktorbuk: You are talking in terms of hindsight (asking to see what has already happened), rather than applying foresight to assess a potential threat by asking "What could happen?"
- Here's an article from the New York Times -- imagine a similar effort directed at Wikipedia using thousands upon thousands of (seasoned) new accounts to support political POVs, revise history, censor opposing opinions, and spread other forms of misinformation:
https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html - It's only a matter of time before the powers that be shift their attention, and their tools, upon the English Wikipedia. The question is, are we ready for when we have to be? Here's an article that makes one wonder what these people will do now that they have Chat-GPT to work with:
https://www.theweek.in/news/world/2023/01/06/saudi-arabia-infiltrated-wikipedia-and-this-is-how-they-did-it.html - So, do we really need evidence that the English Wikipedia has already been breached by LLM-assisted POVers before proceeding? Or can we prepare for this in advance? — The Transhumanist 00:34, 16 January 2023 (UTC)
- Your reasoning seems to be that
- ChatGPT (or its equivalents) can write disinformation quickly and cheaply
- POV-pushers (governments, lobbies etc.) are currently limited by the time humans need to write disinformation
- Wikipedia is a prime target for such POV-pushers
- Therefore, ChatGPT (or its equivalents) will flood the gates, unless we do something.
- I will grant you (1) is either already true or will likely be in the near future.
- However, (2) is questionable (see that XKCD about old-fashioned human-crafted POV-pushing). I would guess coordinating the messaging and maintaining the disinformation is a much larger fraction of the costs than actually writing the text.
- (3) is also dubious. Editing in a way that sticks is much harder on Wikipedia than in other places (such as facebook, reddit, etc.). Maybe it has more impact, but the cost-benefit analysis is not obvious.
- Finally, inaction is always an option. It might not be a good option, it might even be the worst option, but it must be compared to other specific measures. "Something must be done" without specifics is just the politician's fallacy. In the absence of details about the threat, it’s hard to compare the possible countermeasures. TigraanClick here for my talk page ("private" contact) 16:41, 16 January 2023 (UTC)
- @Tigraan, I think your list of assumptions is missing "5. People who want to corrupt Wikipedia (e.g., NPOV violations, stacking votes) can reasonably be expected to obey any prohibitions we announce on using this particular technology to achieve their illicit ends." WhatamIdoing (talk) 21:22, 17 January 2023 (UTC)
- I wanted the list of assumptions to be a reasonable summary of (what I understand to be) TH’s argument; I suspect your suggestion is... not that. But I agree that’s part of the problem (which my last paragraph covers). TigraanClick here for my talk page ("private" contact) 10:37, 18 January 2023 (UTC)
- Dear @Tigraan: I am concerned with a flood of WP-incompatible chatbot-generated content, whether by POV'ers or good-faith editors. But it won't be at any gates. The water-level will simply rise. If and when flooding begins, it will be a matter of bailing out the excess. There are three questions relevant to such potential flooding:
- 1) Will chatbots be designed in such a way to prevent flooding (and bailing) in the first place by minimizing inappropriate (unsourced, misinforming) content?
- 2) Will the bailing be automated?
- 3) Shall we wait to work on #1 & #2 until after flooding has begun, or prepare in advance?
- Some editors seem doubtful that the addition of content generated by LLMs to Wikipedia beyond the manual capacity of our editors to process it will happen. And I don't know if it will happen, either. But, there is a continuous stream of strong indications that LLM-based tools will become ubiquitous in the not too distant future, for general use, which, by extension, includes using them to add content to Wikipedia. Here's another:
- And the technology push isn't limited to OpenAI and Google. Here's a search engine that uses a natural-language interface in both its queries and its answers:
- It is looking pretty clear that some major changes are on the horizon in the way computer users will be composing web content. It is also profoundly obvious that Wikipedia isn't ready right now for much more than the current volume of content creation that it is already handling. Maybe the volume won't increase by much, or maybe it will.
- However, some editors are taking seriously the possibility that it will, and it'll be interesting to see if their preparation efforts will be sufficient to stem the tide, if or when the tide rises. Sincerely, — The Transhumanist 22:51, 20 January 2023 (UTC)
- @The Transhumanist, I'm not sure that these questions are really suitable to a discussion on Wikipedia. The first one, for example: Will chatbots be designed in such a way to prevent flooding (and bailing) in the first place by minimizing inappropriate (unsourced, misinforming) content?
- I'd re-phrase it like this:
- "Will all of the people who are not us, including those who don't care about us, carefully design their software in such a way to be convenient for us?"
- Answer: No. Or, at least, it is highly unreasonable to assume that the answer is yes for all of the people who write this sort of software, and it only takes one to risk a problem. WhatamIdoing (talk) 21:21, 23 January 2023 (UTC)
- @WhatamIdoing:
- That is more likely to be true if members of WP or the WMF don't reach out to them. They are not producing these things in a vacuum. WP/WMF has a good relationship with Google, for example, which makes extensive use of Wikipedia content. It may be time to reach out to the companies developing chatbots too.
- On the bright side, there is outside pressure, in the critiques of chatbots, for them to be less of a "black box" and to provide references, which is one of the features that would help avoid problems.
- Perplexity.ai already provides sources, which helps with verification efforts, and to see which ones are and are not from Wikipedia. Though, Perplexity.ai does not provide quote marks around passages that it quotes, and that is another problem. So, I guess they need to be contacted as well.
- It looks very likely that chatbots will be used to compose content for other websites besides Wikipedia, and that their webpages may be included in chatbot training data too -- making an error-magnifying feedback loop a potentially huge problem for the chatbots. Too big to go unnoticed, hopefully.
- It's important that we are aware of these issues if we are to have any chance of influencing solutions. Who knows, the chatbots, and/or the chatbot developers, may actually read this discussion. ;)
- The WMF has been made aware of this discussion, so they can read it to prepare for discussions with participants in the chatbot sector. So, it is important that we get our concerns, and thoughts on design and strategy, in print. — The Transhumanist 08:20, 25 January 2023 (UTC)
- You are assuming that LLMs are being developed by a manageable number of identifiable companies, and hoping that all of them would like to protect Wikipedia.
- But let's consider it from a different POV. Imagine that chatbot software is an open-source project, like Wikipedia. You have the Wikipedia:Right to fork open source projects – not just Wikipedia, but any open-source project. Anyone can add or subtract anything on their own setup. For example, if someone adds a "protect Wikipedia" module, then the next person could remove that, or even add a "scam Wikipedia" module.
- I believe there will be some organizations who find that protecting Wikipedia aligns with their interests, and they will do so. But there will also be some organizations who find that protecting Wikipedia is exactly the opposite of their interests, e.g., content farms that hope they'll be cited as sources here so that their ad-filled webpages will get more traffic, and WP:UPE scammers who are hoping to reduce their costs by having their secret, internal-use-only chatbot write Wikipedia articles for clients of dubious notability, rather than paying a human to do that. I don't think that we can identify such actors, and I don't think they would change their behavior even if we talked to them.
- On a tangent, the call for chatbots to cite sources and add quotation marks is probably based on a misunderstanding. LLMs aren't "quoting sources". They're predicting what a typical way to complete a sentence might be. If it spits out "The journey of a thousand miles begins with one step", it's not quoting Lao Tzu; it's saying "When I look in my database, and I see phrases that start with 'The journey of', the next bit is usually either 'a thousand miles' or 'a lifetime'. I'll pick one and see what comes next." WhatamIdoing (talk) 17:04, 25 January 2023 (UTC)
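(For anyone who wants the point above in concrete terms, here is a minimal toy sketch in Python. The phrase counts are invented purely for illustration and have nothing to do with any real model's training data; it only shows how sampling continuations from learned frequencies can reproduce a famous phrase without any notion of "quoting" a source.)

    import random

    # Toy stand-in for a language model: counts of observed continuations,
    # standing in for probabilities learned from training text.
    continuations = {
        "The journey of": {"a thousand miles": 7, "a lifetime": 3},
        "The journey of a thousand miles": {"begins with one step": 9, "begins with good shoes": 1},
    }

    def continue_phrase(prefix):
        options = continuations.get(prefix)
        if not options:
            return prefix  # no learned continuation for this prefix
        words = list(options)
        weights = [options[w] for w in words]
        # Sample in proportion to observed frequency; no source is consulted or cited.
        return prefix + " " + random.choices(words, weights=weights)[0]

    text = continue_phrase("The journey of")
    text = continue_phrase(text)
    print(text)  # most often: "The journey of a thousand miles begins with one step"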
- @WhatamIdoing: Good point concerning the ecosystem of chatbot developers - I was only considering the big name actors (Google, etc.), but anyone and their uncle can get involved. You are right, bad actors are inevitable and perhaps even rampant. Yet, the vast majority of chatbot use will likely be of the big name models (ChatGPT, etc.). So, contacting and working with them would be beneficial.
- As for quoting, I have found that the AI search engine perplexity.ai, which includes inline source references in its natural language answers to users' questions, integrates passages verbatim from the referenced webpages into its answers without using quotation marks. — The Transhumanist 09:32, 26 January 2023 (UTC)
- I wonder what the Ask Jeeves developers are thinking about that. Perhaps they were just 20 years too soon. WhatamIdoing (talk) 17:46, 26 January 2023 (UTC)
- That’s a blast from the past, Doug Weller talk 18:30, 26 January 2023 (UTC)
- Looks like some spammers and malware distributors have embraced this technology:
- WhatamIdoing (talk) 01:09, 28 January 2023 (UTC)
- The second source above says "Likewise, anyone who uses the web to spread scams, fake news or misinformation in general may have an interest in a tool that creates credible, possibly even compelling, text at super-human speeds." We need detection tools, and fast. The "super-human speed" part could be a dead giveaway. — The Transhumanist 10:00, 28 January 2023 (UTC)
Identifying chatbot-generated text
Zero0000's post is a good start. A simple way to crystalize the situation is to ask the human editor for their rationale for a particular phrase. North8000 (talk) 03:55, 14 January 2023 (UTC)
- I feel like additions of large, overly-verbose unsourced text are something of a giveaway. See, for example, the first revision of Artwork title, written by ChatGPT. — Qwerfjkltalk 11:06, 14 January 2023 (UTC)
- You can be confident that any giveaways are on the chatbot writer's list of things to fix in the next generation. Zerotalk 11:56, 14 January 2023 (UTC)
- They should also fix the problem of the chatbots making stuff up. Someone should ask the chatbot writers to turn off the poetry and fiction generation algorithms, and any other algorithms that make things up, when the chatbots are composing expository text. Or add new algorithms to handle expository writing. Just the facts. And sources. — The Transhumanist 00:42, 16 January 2023 (UTC)
Nature just published a piece about use of ChatGPT in scientific articles. Zerotalk 01:18, 21 January 2023 (UTC)
Copyright status
Is someone clear on what the copyright status of texts produced by LLMs is? From what I get, they may be considered derivative works from the dataset they were trained on. From [2]: As a result of the human authorship standard, “under U.S. current law, an AI-created work is likely either (1) a public domain work immediately upon creation and without a copyright owner capable of asserting rights or (2) a derivative work of the materials the AI tool was exposed to during training,” Esquenet continues. “Who owns the rights in such a derivative would likely be dependent on various issues, including where the dataset for training the AI tool originated, who, if anyone, owns the training dataset (or its individual components), and the level of similarity between any particular work in the training set and the AI work.”
If they are derivative works then they cannot be published on Wikipedia just like this. Do we have more information on this? For example, does OpenAI specify somewhere the copyright status of the text produced by ChatGPT? Phlsph7 (talk) 09:09, 16 January 2023 (UTC)
- The first question is whether a generated text that closely resembles an item from the training set is copyright infringement of that item. For instance, Microsoft Copilot happily outputs the Fast inverse square root code. I would expect that courts will judge such things to be copyright infringement. Copyright infringement statutes do not require proof that the infringer copied from a specific source (that would be difficult to prove), just that the content is substantially similar. Therefore, whether the tool is a simple ctrl-C ctrl-V or a sophisticated machine learning model should not make much difference.
- The second question is whether OpenAI (or any other AI tool provider) can assert copyright on whatever the tools they provide create. The OpenAI terms of use seem relatively permissive, but others might be less generous. I do not know the answer to that question. I would hope they cannot, since they only provide tools (Microsoft should not be able to assert copyright on the text I write using Word, or the images I draw using Paint).
- The third is whether a human using ChatGPT can assert copyright on ChatGPT answers, or otherwise constrain the use of the resulting text. The quote you give is probably based on the US copyright office’s position (taken during the monkey selfie copyright dispute):
Because copyright law is limited to 'original intellectual conceptions of the author', the [copyright] office will refuse to register a claim if it determines that a human being did not create the work.
However, giving a prompt to ChatGPT might or might not constitute significant creative input. The position that anything edited by a machine becomes public-domain is untenable (if I use a spelling corrector on the draft of my novel, it does not turn it into PD), so it must be a question of degree. Also, non-US courts might have different opinions. TigraanClick here for my talk page ("private" contact) 16:06, 16 January 2023 (UTC)
- As I understand it, the degree of access by the alleged infringer to the source text in question is a factor in determining infringement. Only a specific expression is protected by copyright; if you and I independently write the same sentence, one is not a copyright violation of the other. The amount of similar text also plays a role, since the larger it is, the more improbable it is that it was created without copying.
- Facts and natural laws can't be copyrighted; this also covers algorithms (though a particular expression can be copyrighted). So I can't copyright a single instance of a Javascript for-loop and claim rights to all Javascript for-loops as derivative work. In cases where the learning model creator is explicitly providing its model for use as a tool, I think (disclaimer: not a legal opinion) it is reasonable for this to be the same as a work for hire. Thus if the result is eligible for a new copyright owner independent of any source texts, the tool user would be the owner. (If I use a spellchecker on the latest bestselling novel, the result is not eligible for a new copyright owner.)
- To be really safe, we'd want language models trained on public domain text. But I think it could be argued with a really large model trained on, say (just drawing numbers out of air), hundreds of thousands of documents with thousands of independent authors, the resulting correlations can no longer be attributed to specific input text, for cases where the output is not a significantly long passage substantially similar to a specific source text. isaacl (talk) 18:05, 16 January 2023 (UTC)
- One of the main issues to deal with would be the following: an editor tells ChatGPT to write a text on a topic and then adds this text in the form of a paragraph/section/article to Wikipedia and thereby publishes it under Creative Commons/GNU license. The question is: what are the chances that this constitutes some form of copyright violation? This might concern specifically problems with the 1st and the 2nd question addressed by Tigraan, i.e. whether the copyright of someone whose work was part of the training set was violated and whether openAI's copyright was violated. For the first question, it's probably relevant what the copyright status of the texts in the training set is and how similar the produced text is to the texts in the training set, as isaacl points out. Answering these questions would be quite relevant for any Wikipedia policy on the topic, like the one JPxG is currently drafting. Phlsph7 (talk) 07:18, 17 January 2023 (UTC)
- With respect to the issue of whether LLM output inherently violates copyright law: the copyright status of LLM-generated text is not defined by statute, so it is hard to make confident claims, but precedent exists for computer-generated art and other works created by non-humans. Here is what the US Copyright office has to say:
- "Compendium of U.S. Copyright Office Practices, § 313.2" (PDF). United States Copyright Office. 22 December 2014. p. 22. Retrieved 18 January 2023.
- The Office will not register works produced by nature, animals, or plants. Likewise, the Office cannot register a work purportedly created by divine or supernatural beings, although the Office may register a work where the application or the deposit copy(ies) state that the work was inspired by a divine spirit.
- [...]
- Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.
- It's not quite clear to me what the applicability is in this case. On Commons, the template and copyright category for PD-algorithm asserts that all algorithmically-generated works are public domain ("This file is in the public domain because, as the work of a computer algorithm or artificial intelligence, it has no human author in whom copyright is vested"). Whether artificial neural networks are capable of producing original intellectual output is less of a legal issue and more of a philosophical/anthropological one. It should be noted that human brains are themselves neural networks; much has been said, in a variety of fields, on the subject of whether humans create original works versus whether they merely juxtapose or recombine motifs and concepts that they're exposed to through participation in society. While interesting (and humbling), these discussions are unrelated to whether neural networks which have been exposed to copyrighted material in the course of their existence are capable of later creating original works under the purview of intellectual property law: they do. If this was not the case, a large majority of creative work would be illegal (good luck finding a band where none of the musicians have ever heard a copyrighted song before).
- It does seem obvious that, say, OpenAI does not hold a claim to copyright on all works produced using its products: Adobe does not own copyright on all works produced using Photoshop, and Apple does not own copyright on all photographs taken with iPhones. If they did, this would certainly be a bold claim which required some evidence beyond idiosyncratic interpretation of vague phrasing in a EULA somewhere.
- With respect to whether LLM output is capable of violating copyright law, I think the answer is obviously "yes". Here is an example, which I have just prompted from ChatGPT:
- What are the lyrics to "Moonage Daydream", by David Bowie?
Keep your 'lectric eye on me babe
Put your ray gun to my head
Press your space face close to mine, love
Freak out in a moonage daydream oh yeah
- These are, indeed, the verbatim lyrics to an unambiguously copyrighted song. Of note is the fact that I specifically asked it to do this. Compare with this prompt, using the first couple of lines of the song:
- I'm an alligator / I'm a mama papa comin' for you / I'm a space invader / I'll be a rock-'n-rollin' bitch for you
I'm a force of nature, a wild beast / A creature of the swamps, a king of the feast / I'm a living legend, a natural high / 'Cause I'm an alligator, and I'll never die
- While this would be a sick Judas Priest song, it isn't one, and in fact, a web search for "I'm an alligator and I'll never die" suggests this text has never been published. jp×g 01:57, 19 January 2023 (UTC)
- It bears mentioning that I, a human being, also know the lyrics to "Moonage Daydream", and am capable of typing them into the edit box with no assistance whatsoever; doing this in an article would still be a copyright violation. I think that, for LLMs, the critical piece of knowledge is that output needs to be checked, and that it is insufficient to assume that generated text is always novel. jp×g 02:15, 19 January 2023 (UTC)
- On the narrow point of who owns ChatGPT output, Sec 3(a) of their ToS states "OpenAI hereby assigns to you all its right, title and interest in and to Output...You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms." I thought there was an additional requirement to state that ChatGPT was used, but I do not see it in the terms. Slywriter (talk) 02:14, 19 January 2023 (UTC)
- @JPxG: since you mention computer-generated art: there are some lawsuits against Stability AI based on the claim that they violated the copyrights of people whose images were used in the training set. See [3] and [4]. The case seems to be similar to LLMs, with the main difference being that their AI trains on images and creates images while LLMs train on text and create text.
- If I interpret the statement by the US Copyright office correctly, it seems to claim that a person can't own the copyright of a work that was created by a random machine process without creative input. It does not say that such processes cannot violate someone else's copyright. This would be in tune with the lawsuits mentioned above.
- I think it's also unlikely that every output is a copyright violation. For example, if you just give it a sentence and tell it to correct spelling mistakes, there should be no problem in using the output. Phlsph7 (talk) 06:33, 19 January 2023 (UTC)
- @Slywriter: Their sharing policy demands that
Indicate that the content is AI-generated in a way no user could reasonably miss or misunderstand.
Phlsph7 (talk) 09:23, 19 January 2023 (UTC)
- LLM-assisted edits need to be appropriately marked as such in the history. —Alalch E. 01:38, 21 January 2023 (UTC)
- Yes, and I think that
"in a way no user could reasonably miss or misunderstand"
requires the use of a notice in the article itself as well. –dlthewave ☎ 13:53, 21 January 2023 (UTC)
- Agreed. It's easy for the average reader to miss an edit summary in the article history. So in-text attribution may be required. Phlsph7 (talk) 08:54, 22 January 2023 (UTC)
- @Slywriter, JPxG, Phlsph7, Alalch E., and Dlthewave: Concerning the TOS clause that states "OpenAI hereby assigns to you all its right, title and interest in and to Output...You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms." — does that mean that Chat-GPT cannot legally produce the exact same output twice without violating the right, title, and interest that it previously assigned? — The Transhumanist 20:47, 21 January 2023 (UTC)
- I'm not sure what leads you to that conclusion. The licence does not grant you exclusive use to anything. isaacl (talk) 22:09, 21 January 2023 (UTC)
- @Isaacl: I didn't come to a conclusion, I just asked a question, pertaining to Sec 3(a) of their ToS as referred to and quoted by Slywriter above, and repeat quoted by me. It appears you missed the quote somehow, because you didn't comment on it. To what license are you referring, and what relation does it have to the passage we quoted from the TOS? — The Transhumanist 02:52, 22 January 2023 (UTC)
- I don't know why you would ask the question you did, since the section you quoted did not say anything about granting an exclusive right, title, and interest to any output. isaacl (talk) 03:08, 22 January 2023 (UTC)
- I don't think this has something to do with producing the same output for different users. It should be easy to find mock queries to which it often responds with the same output, for example, by asking it to "Say the word 'Hello'" or for simple translations. Phlsph7 (talk) 06:50, 22 January 2023 (UTC)
- @Isaacl: Well, I checked the section again, and it is right there in plain English. It uses the word "assigns" instead of "grants", and it says "all its" instead of "exclusive". So, once it "assigns all its right, title, and interest in and to Output", how can it legally ever produce that same output again? (Because it already assigned it away). — The Transhumanist 09:20, 22 January 2023 (UTC)
- As I understand it, anyone can assign all their rights to the output of ChatGPT to someone else. In a similar way, I could assign to you all my rights to the Harry Potter series. This would not be of much use to you since the expression "all my rights" just refers to "no rights" in this case. Phlsph7 (talk) 09:32, 22 January 2023 (UTC)
- @Phlsph7: In section 3a of the TOS, it's OpenAI that is assigning its rights to the chatbot output generated for the user. If Chat-GPT writes you a 3 paragraph explanation of gravity, and OpenAI has assigned you its rights to that explanation, can Chat-GPT legally write that exact same output for somebody else? — The Transhumanist 09:58, 22 January 2023 (UTC)
- I assume it works something like the following: it depends on whether openAI had any copyrights on it in the first place. If it did then this may be a problem because creating the copy for the second user might violate the newly obtained copyright of the first user. If it didn't then it presumably wouldn't be a problem because assigning all its rights to the first user effectively didn't do anything. But I don't think that this particular issue is very relevant for Wikipedia. Phlsph7 (talk) 10:14, 22 January 2023 (UTC)
- The tool made no guarantee that it wouldn't generate the same output again for another user. The tool is in essence passing any rights of ownership (if they exist) in the original input through to the output. isaacl (talk) 19:24, 22 January 2023 (UTC)
Testing Chat-GPT's algorithm
Does Chat-GPT produce the same output to the same prompt given to it by 2 or more different users? Do any two want to try that? — The Transhumanist 20:54, 21 January 2023 (UTC)
- @The Transhumanist, not necessarily. It has a "temperature" factor (randomness). — Qwerfjkltalk 21:01, 21 January 2023 (UTC)
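(For the curious, this is roughly what that parameter looks like in OpenAI's Python completion API. The model name, key, and prompt below are placeholders for illustration, and the exact parameters may differ between API versions. With temperature set to 0, sampling is close to deterministic, so two users are likely to get near-identical output for the same prompt; higher values add randomness, so repeated prompts tend to produce different wordings.)

    import openai  # pip install openai

    openai.api_key = "sk-..."  # placeholder, not a real key

    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model name, for illustration
        prompt="Write a short Wikipedia-style paragraph about cream cheese.",
        max_tokens=200,
        temperature=0.7,  # 0 = nearly deterministic, higher = more varied
    )
    print(response["choices"][0]["text"])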
Perplexity.ai's answer to the copyright issue, and some questions...
@Phlsph7, Isaacl, Tigraan, Dlthewave, Slywriter, and JPxG:
I have found this to be perplexing...
I entered the following prompt into Perplexity.ai:
who owns the copyrights of perplexity.ai's answers
And it returned the following answer:
"According to US copyright law, works generated solely by a machine are not eligible for copyright protection[1] . Therefore, the copyrights of Perplexity AI's answers belong to its co-founder and CEO, Aravind Srinivas[2]."
It looks like I just broke copyright by copying it here. But this contradicts the title of the first source provided (the second source is behind a registration wall):
The scary truth about AI copyright is nobody knows what will happen next — The Verge
Assuming that Aravind Srinivas does not own the output, I have some questions about posting AI search engine results; I look forward to your replies to the questions below. — The Transhumanist 19:27, 21 January 2023 (UTC)
Question 1: Does putting quotation marks around a small copy/paste selection from an AI search engine, like the passage above, fall under fair use?
I would assume so. It's similar to how you can quote from copyrighted books. There are some limitations, for example, concerning the length of the cited text. And it should be clear where the quotation came from. Phlsph7 (talk) 06:57, 22 January 2023 (UTC)
Question 2: Would that protect Wikipedia (and its editors) from a derivative work claim?
From [5]: "In its most general sense, a fair use is any copying of copyrighted material done for a limited and “transformative” purpose, such as to comment upon, criticize, or parody a copyrighted work. Such uses can be done without permission from the copyright owner. In other words, fair use is a defense against a claim of copyright infringement. If your use qualifies as a fair use, then it would not be considered an infringement." Phlsph7 (talk) 07:06, 22 January 2023 (UTC)
Question 3: Let's say that perplexity.ai answers the same way to 2 different users, and they copy/paste the response on 2 different websites — who owns the copyright of that passage?
In our discussion so far, we haven't been able to conclusively figure out whether someone owns the copyright at all and, if so, who. That 2 users get and use the same response would be just a special case. Phlsph7 (talk) 07:14, 22 January 2023 (UTC)
Question 4: Would running a passage (from a chatty AI search engine) through a plagiarism checker be enough, before copying it into Wikipedia?
Plagiarism checkers are not perfect so they can't ensure that no plagiarism/copyright infringement was committed. The question would be whether they are good enough for our purposes, i.e. whether they are quite reliable for spotting plagiarism/copyright infringement pertaining to AI-generated texts. Phlsph7 (talk) 07:26, 22 January 2023 (UTC)
Question 5: Does Wikipedia policy allow an editor to click "Publish changes" for content that the editor did not personally compose?
Clarification: Clicking "Publish changes" implies that the editor composed the changes. Can an editor publish changes that they did not personally compose, that were composed by a chatbot search engine? (Please quote and provide links to the specific policies that allow or disallow this). Thank you. — The Transhumanist 20:13, 21 January 2023 (UTC)
- That would probably be a case of WP:PLAGIARISM even if no copyright infringement is involved. According to the summary: "Do not make the work of others look like your own. Give credit where it is due." Phlsph7 (talk) 07:06, 22 January 2023 (UTC)
- It would be similar to copying public domain/open license content to Wikipedia, no? This is covered by several guidelines and explainers such as WP:FREECOPY and Help:Adding open license text to Wikipedia. As long as there's proper attribution, there's no general expectation that editors must compose the text themselves. –dlthewave ☎ 13:22, 22 January 2023 (UTC)
- @Dlthewave and Phlsph7: Interesting. So, if you prompted a chatbot to write a new paragraph for the article on cream cheese and you add that to the article, you include an attribution to the chatbot in the edit summary? What do you put in the source reference? — The Transhumanist 11:58, 25 January 2023 (UTC)
- @The Transhumanist: As I understand it, for WP:PLAGIARISM it's sufficient to mention the source in the edit summary. You would have to find and add other reliable sources yourself since ChatGPT provides no sources or sometimes invents non-existing sources. However, for the Sharing & Publication Policy of openAI, in-text attribution would probably be necessary. So to comply with it, you would have to start the paragraph on cream cheese with something like "According to ChatGPT,...". This way, the text couldn't be used at all since ChatGPT is not a reliable source. Phlsph7 (talk) 12:23, 25 January 2023 (UTC)
First ANI case
Just a heads up, a thread has just opened in Wikipedia:Administrators' noticeboard/Incidents about a user abusing AI-generated content, at Wikipedia:Administrators'_noticeboard/Incidents#Artificial-Info22_using_AI_to_produce_articles. Sure, the editor in question did not make an edit in the mainspace, but the fact that this is happening at ANI is pretty concerning in its own right. I'm afraid that someone may have covertly spammed articles with AI text already. CactiStaccingCrane 15:31, 23 January 2023 (UTC)
- I am now adding the {{disputed}} template when encountering an AI-generated article, based on the following from the ChatGPT FAQ:
These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like. It is important to keep in mind that this is a direct result of the system's design (i.e. maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times.
The commonality of all the AI-generated articles I've encountered so far (4, to be honest) is that they are not properly footnoted, implying that the author has not confirmed that the AI output is correct. The disputed tag seems to cover this issue well. I'm also dropping a note on the article's talk page explaining the link between AI output and correctness. — rsjaffe 🗣️ 01:00, 24 January 2023 (UTC)
- ANI case is wrapping up. The first three articles were written by a hoaxer, and the refs for two of the articles may have been generated as well. The fourth article was promoting a company. — rsjaffe 🗣️ 04:18, 24 January 2023 (UTC)
- This gives us a good look at the type of plausible-sounding nonsense that we can expect from LLM output. Comparing the draft (archive version, since it will likely be deleted soon) to Gecko, I see a few factual errors right away:
- Not all geckoes belong to the family Gekkonidae, which doesn't have 1500 species.
- Not all geckos have specialized toe pads that allow them to climb vertical surfaces.
- The largest geckos are 23"-24", not 10".
- Not all geckos are oviparous; some bear live young.
- When this type of content is submitted, it needs to be thrown out straightaway. –dlthewave ☎ 17:09, 24 January 2023 (UTC)
More detected AI-generated papers
I started screening Drafts more carefully and am getting a number of hits corresponding to probable AI-generated articles (or at least part of the article is AI-generated). Examples include:
- Draft:NuMSP 97.69% fake
- Draft:Product Photography (the introductory paragraphs score 99.98% fake on GPT-2 detector)
- Draft:Classical Conversations 99.98% fake
- Draft:Leigh Bortins 99.82% fake
- Draft:CaSS (Competency and Skills System) 99.98%
- Draft:Sanjit Bajpai 99.90%
- Draft:Honda Big Red UTV 99.98% (features and maintenance sections, omitting the headings when measuring)
The list could go on, but I think this is enough to see some information about this. These pages tend to be created by users with few edits. A number of users are doing this, not just one or two. Conclusion: the tsunami has arrived. — rsjaffe 🗣️ 03:15, 25 January 2023 (UTC)
- I think that mw:ORES (used in recent changes to highlight bad faith and vandalism) should integrate a screening mechanism for GPT-3 and other bots asap. I suspect this is already a huge problem, since large amounts of hoaxes can be disguised as good content and we wouldn't even know about it. CactiStaccingCrane 03:21, 25 January 2023 (UTC)
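(For reference, ORES already exposes its existing models through a public scoring API, so an LLM-screening model would presumably be consumed the same way. Below is a rough sketch querying the real "damaging" model; no LLM-detection model exists in ORES today, and the revision ID is just a placeholder.)

    import requests

    revid = 1135840402  # placeholder revision ID
    url = f"https://ores.wikimedia.org/v3/scores/enwiki/?models=damaging&revids={revid}"
    data = requests.get(url, timeout=30).json()

    # A hypothetical LLM-screening model would appear under an additional model name here.
    score = data["enwiki"]["scores"][str(revid)]["damaging"]["score"]
    print(score["prediction"], score["probability"])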
- Looking at the first few, the sourcing is not up to the standards of promotion to article space. Once clearly bad sources are removed and unsourced claims are tagged, this can clearly be seen. If AI ever gets to the point of being able to write an article that provides accurate information properly and verifiably sourced to reliable sources, then I'll be happy to have it writing for us. BD2412 T 03:46, 25 January 2023 (UTC)
- @CactiStaccingCrane: It might be a good idea, but in reality, it requires a lot of effort from WMF to integrate openai-detector into mw:ORES. And I agree with @BD2412's point that some drafts are promotional, which I think was a problem even before ChatGPT or even GPT-3 existed. 2001:448A:304F:52BA:8D12:5E35:69B7:8E09 (talk) 03:50, 25 January 2023 (UTC)
- Well, some AI-generated articles have made it into article space. The first one I found has lots of text that's probably not AI-generated, but has a big hunk that is. Pavilion of Harmony, from "The Harmony Pavilion of New Asia College..." to "unique addition to the campus of New Asia College.", after removing the footnote indicators that confuse the analysis, rates as 99.98% fake. So the problem will leak into article space. And this means we need a way to pay special scrutiny to the AI-generated section, as that section is likely to have plausible but false information, given the way current AI models work. — rsjaffe 🗣️ 04:13, 25 January 2023 (UTC)
- I agree that this could easily turn into a major problem as LLMs become more popular. As discussed at #Copyright status, these drafts violate at least WP:PLAGIARISM but probably also the Sharing & Publication Policy of openAI (if they were created using openAI tools). If AI-detectors are reliable, including them in mw:ORES would probably help a lot to mitigate the problem in case such an integration is feasible. Another alternative would be to create a bot that checks new submissions and tags them if they score a high value. A further thing to do at some point might be to make the editors reviewing drafts and new articles aware of this problem. Phlsph7 (talk) 06:28, 25 January 2023 (UTC)
- 10 Best ChatGPT Chrome Extensions You Need to Check Out Doug Weller talk 10:06, 25 January 2023 (UTC)
- @Phlsph7 said:
A further thing to do at some point might be to make the editors reviewing drafts and new articles aware of this problem.
That's how I stumbled unsuspectingly upon this issue. I'm a new page patroller. I think they need to be looped in now, as that is the only guaranteed review step for new articles, and LLM-generated articles are already appearing. (I'm hoping that those users allowed to have their articles bypass this process won't abuse LLMs.) — rsjaffe 🗣️ 17:09, 25 January 2023 (UTC)
- @Rsjaffe: Wikipedia:New_pages_patrol has various instructions on the different issues that new page patrollers need to be aware of. Maybe somewhere in there, a subsection could be added on AI-generated articles. Among other things, it should give a short explanation of what it is (the user tells the AI to generate an article in a matter of seconds and copy-pastes the results), what the problems are (plagiarism, false statements, no or invented sources, possibly copyright violation), and how to spot them (things AI-generated articles have in common and tools to detect them, like https://openai-openai-detector.hf.space/). Phlsph7 (talk) 19:31, 25 January 2023 (UTC)
- Can I recommend, at least for right now, that some of these pages be copy-pasted into projectspace somewhere, so that we can see what they actually look like? I feel like these discussions basically have a couple-hour-long window outside of which it's impossible to see what everyone is talking about. jp×g 19:12, 25 January 2023 (UTC)
- I will note here that the detector being linked here (at https://openai-openai-detector.hf.space) is a very old model tuned for the 2019 GPT-2, not GPT-3 or ChatGPT (3.5). I don't know if it's producing reliable results. It seems to me like most of the things it's flagging as machine-written are abysmal crap, so maybe it doesn't make a difference. jp×g 19:17, 25 January 2023 (UTC)
- Some trials on articles I've written (using the lead paragraphs): Powder House Island and Nina Jankowicz are estimated at 99% human-written, but First Wikipedia edit is at 20% GPT for some reason. 1-Pentadecanol returns 40% GPT based on the first sentence, which decreases with subsequent sentences to 99%. However, when I asked ChatGPT to "write me a Wikipedia article about 1-pentadecanol", the result (which is viewable as an HTML comment if you edit this section) was estimated as 92% human-written. I don't know exactly what the underlying mechanism of this tool is, but we may want to take its output with a grain of salt. jp×g 19:23, 25 January 2023 (UTC)
- The most recent articles I pulled (the ones in this section) were all initially detected by me using a search for a "signature" of a particular type of way of producing text for a particular version of LLM engine (I'm being vague as to not give hints to those who are trying to evade detection). I then visually confirm the signature. Then I run it through the GPT-2 detector. And then I'm only listing pages with > 99% chance of being LLM-generated. I'm 100% sure that the ones I've listed above are LLM-generated, but I'm also certain that this is only detecting that subset being generated under that specific set of conditions. — rsjaffe 🗣️ 19:24, 25 January 2023 (UTC)
- Also, you have to remove the bracketed reference numbers (e.g., [2]) from the text as well as any intercalated headings to give an accurate score. — rsjaffe 🗣️ 19:26, 25 January 2023 (UTC)
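(For anyone who wants to reproduce this locally: the web demos linked above appear to wrap the public GPT-2 output detector model on HuggingFace, though that is an assumption on my part. A minimal sketch follows, stripping the bracketed reference numbers first as described above; the exact label names depend on the model's configuration, and the model has a limited input length, so long passages should be scored in chunks.)

    import re
    from transformers import pipeline  # pip install transformers torch

    # GPT-2 output detector (RoBERTa fine-tuned against GPT-2 output);
    # note it is not tuned for GPT-3 or ChatGPT.
    detector = pipeline("text-classification", model="roberta-base-openai-detector")

    def score_passage(text):
        # Remove bracketed reference numbers like [2] and collapse whitespace,
        # since they throw off the classifier.
        cleaned = re.sub(r"\[\d+\]", "", text)
        cleaned = re.sub(r"\s+", " ", cleaned).strip()
        result = detector(cleaned)[0]
        return result["label"], result["score"]

    print(score_passage("Cream cheese is a soft, mild-tasting, white cheese ...[1][2]"))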
- To give an example of some LLM-looking text that doesn't score high on the GPT-2 detector, look at, ironically enough, Draft:Large Language Model. It scores 99.52% real, but sure looks fake to me. — rsjaffe 🗣️ 19:30, 25 January 2023 (UTC)
- That detector works great on unmodified text, but it falls apart when even two or three words are changed. CactiStaccingCrane 19:32, 25 January 2023 (UTC)
- Here are a few more detectors:
- To test them, I asked ChatGPT to "write a Wikipedia article on cream cheese". It produces the following:
Extended content (ChatGPT's reply):
Cream cheese is a soft, mild-tasting, white cheese that is made from a mixture of cream and milk. It is typically sold in blocks, and is commonly used as a spread on bread or crackers, as well as in cooking and baking.
- All the detectors agree that this text is AI-generated. When I fed them with the lead of the article Wikipedia (pure text without reference signs), they all agree that it's human-generated. Phlsph7 (talk) 20:03, 25 January 2023 (UTC)
- Some attempt at figuring out what these are: I think that they are all based on the same code from HuggingFace. The actual source is here. I think that it may be simple enough for me to deploy it on a Toolforge test account; some of these websites seem unbelievably seedy. For example, "Content At Scale" advertises:
- Want undetectable AI content? Our platform is the only one of it's kind that allows you to upload up to 100 keywords and get back 100 entire human quality blog posts (title to conclusion) without any human intervention. All the while, bypassing AI detection as it's the most human-like AI content ever produced. Our proprietary system uses a mix of 3 AI engines, NLP and semantic analysis algorithms, crawls Google, and parses all the top ranking content to put it all together. This isn't an AI writing assistant, this is a human level long-form blog post producing machine!
- Certainly seedy if they are making it up... but also very seedy if it's true! jp×g 23:17, 25 January 2023 (UTC)
Quoting a chatbot
Below are some verification-related questions pertaining to chatbots. — The Transhumanist 12:10, 26 January 2023 (UTC)
How would pasting in content generated by a chatbot be interpreted under WP:VER's requirement that all quotes must be referenced?
WP:VER states that all quotes must be supported by inline citations. If the chatbot's text is unique rather than preexisting somewhere else, using it would in essence be quoting the chatbot — how could that, as it isn't recorded anywhere, be referenced for verification purposes?
Would WP:VER need to be modified so that quoting a chatbot does not require a reference?
News update
We need to get and stay ahead of this AI thing. See the following to get an idea how fast this movement is progressing:
- ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI : GPT3
- Travis Tang on LinkedIn: ChatGPT for Data Science Prompts - 60 examples of what it can do
- How to write an effective GPT-3 prompt | Zapier
- OpenAI Licenses GPT-3 Technology to Microsoft (not exclusive)
- OpenAI's investments
- Should ChatGPT be used to write Wikipedia articles?
- This article features the following Wikipedia article, initially composed using Chat-GPT: Artwork title, by Pharos, a great example of how a chatbot can be used by a responsible editor. Maybe a blanket ban is too much, and guidelines on how to use it correctly would be better. Makes auto-removal harder, though.
- See an explanation and its discussions here: Talk:Artwork title
I hope you find these articles informative. Feel free to post more links, and comments, below. — The Transhumanist 16:05, 28 January 2023 (UTC)
Crystallize chatbot discussions into a policy?
I think that there is a long list of things that make chatbot content objectively bad for / incompatible with Wikipedia in its current form and methods. Without elaborating here, a few quick notes: the inherent "black box" nature of AI, the inherent unaccountability for content, the inherent non-linking of content to sourcing, the fact that chatbots do not know or follow Wikipedia policies and guidelines (which themselves are a fuzzy ecosystem rather than categorical rules), they do not take into account what is already in the article, and they (as a practical matter) remove accountability and expectations from the person who added the material. They would also destroy the ability to obtain volunteer time to review what has been put in. Most people willing to spend time reviewing something because they know that a human editor has taken the time to write it would not be willing to spend large amounts of time dealing with something generated by a bot in a few seconds.
My thought is that we should say that such chatbot generated content is not allowed in Wikipedia. This is just briefly written, I or someone could flesh this out into something carefully written if there is interest.
We can and should decide this without, or prior to, solving the question of how to detect and enforce. A premise of having to solve detection and enforcement before step #1 would be a poison pill for accomplishing step #1. Sincerely, North8000 (talk) 20:42, 25 January 2023 (UTC)
- @North8000: There has been a draft guideline at Wikipedia:Large language models for a few weeks. I do not know that banning their use entirely is a good idea, but it seems quite obvious that just copy-pasting gigantic chunks of text directly from the model's output into the edit box is not a good idea (and almost zero percent likely to result in usable articles). I will try to write some stronger wording emphasizing that nobody should be doing this. jp×g 22:46, 25 January 2023 (UTC)
Support blanket ban - I agree with North8000 that the policy should be "Chatbot generated content is not allowed in Wikipedia." I think this should apply to all namespaces, including talk pages. As chatbots improve, the policy can be changed, but right now, chatbot use for generating WP content appears to be a can of worms. — The Transhumanist 01:03, 26 January 2023 (UTC)
- Pertaining to talk pages and forums, I meant not allowing the use of chatbots to generate a user's statements in a discussion. Posting chatbot output on a talk or forum page as an example in order to discuss it is appropriate. — The Transhumanist 12:33, 26 January 2023 (UTC)
- Support auto-screening - Pharos and their article Artwork title changed my mind (the article is featured in Should ChatGPT be used to write Wikipedia articles?). I'd like to see a draft on guidance for the responsible use of chatbots in writing articles, including Pharos' approach. Meanwhile, our tech persons can work on automating the removal of undisclosed chatbot additions and the tagging and eventual removal of other entries that don't get edited within a reasonable time frame, or that are part of a pattern of posting disclosed but unedited chatbot submissions. Donald Albury was right, bad actors are going to spam Wikipedia with chatbot crap whether we ban it or not. Therefore, we should allow good actors to help offset their impact. Which brings us to the rest of the Web: it will be subject to hosting chatbot content, and so, as we are Wikipedia, we should trailblaze how to do it right. — The Transhumanist 16:21, 28 January 2023 (UTC)
- Oppose a blanket ban. I think JPxG's demonstration shows that a LLM can be good for repetitive tasks like formatting a table, as long as a human validates the output. Actual generated prose is likely to be unsourced or sourced to fake sources, and so is already covered by existing policies. -- King of ♥ ♦ ♣ ♠ 01:23, 26 January 2023 (UTC)
- Oppose any blanket ban. Do think CSD modifications are needed to quickly remove algorithmically generated articles (AI is market speak here, it is not sentient and intelligence is debatable) and some formal guidance for editors would be useful. It's a tool like AWB, Twinkle, and any other scripting used. Used properly, it can cut down tedious work Slywriter (talk) 01:30, 26 January 2023 (UTC)
- Support blanket ban with the possibility of allowing specific use cases as we learn more. Our guiding principle should be that AI is completely inappropriate for creating or editing prose. Given the amount of plausible-sounding nonsense we've seen in recent AI-generated drafts, I also don't trust it for coding work such as formatting text or rotating tables until its reliability has been demonstrated for the specific task. This should apply to article, talk and draft spaces with very limited exceptions for demonstration purposes. –dlthewave ☎ 03:24, 26 January 2023 (UTC)
- With respect to tables and templates, what do you envision as "demonstrating reliability"? It is not exactly brain surgery to look at a table and see if the columns or the parameters or whatever are in the right places. You have to do this anyway: we currently do not require editors to prove that they have never typed an extra } and had to go back and fix it. jp×g 05:35, 26 January 2023 (UTC)
- Oppose for a variety of reasons, as discussed by others above. But I'll go with the most basic one: how would any of this be actionable? I know you claim it's a poison pill problem to raise, but there's no way whatsoever to detect this with certainty. We'd be using an outside tool to claim text is AI written and then delete things based on that claim. I don't care how many 9's you've got in the decimal places, there's no way to be infallible here. If the editor that added the text says they wrote it themselves, are we just going to say that they're lying and that they have to re-write it or something? There's not even evidence of copyvio in such a case and if the added content meets all other requirements, including proper verifiable sourcing, then I see no way to enforce such a ban. SilverserenC 03:33, 26 January 2023 (UTC)
- If the detector is good, a single mistaken evaluation is possible, but a consistent result one way or the other across multiple checks is solid evidence. Dege31 (talk) 17:49, 26 January 2023 (UTC)
- Comment I'm not fully decided whether banning it entirely is the right course of action at this stage. But there is a significant potential to abuse it so most forms of non-trivial AI-assisted edits should be strongly discouraged. The policy should make it very clear that any addition of AI-generated text needs to be labeled as such in the edit summary to avoid WP:PLAGIARISM. The editors also need to be reminded to obey the license and sharing policy of the AI provider. In the case of ChatGPT, for example, in-text attribution is apparently required. Phlsph7 (talk) 08:53, 26 January 2023 (UTC)
- Support blanket ban of any LLM-generated text. Not sure yet about using such tools purely for layout, but it should not be allowed for either generating new text or for rephrasing existing text, as both cases are way too problematic. As for "how is this enforceable", just like other difficult policies, where near certainty is sufficient (like WP:DUCK for socks, which isn't infallible but good enough). Advantages of a policy are also e.g. when a newbie says something like "why was my article deleted, it was generated by ChatGPT so has to be good", one can easily point to the policy to explain that it isn't allowed instead of having this discussion again and again. Fram (talk) 09:04, 26 January 2023 (UTC)
- That's a good point about potential problems with enforcing it. Having a policy can be useful to discourage certain types of behavior even if it is difficult to enforce in every case. We'll have to see how useful and reliable AI-detectors are in this process. Phlsph7 (talk) 09:15, 26 January 2023 (UTC)
- I think the way forward is even stronger expectations on sourcing. If you can't provide the content of the reliable source for your edit, it should be reverted. (This would include people machine translating foreign Wikipedia articles without having access to the original sources). —Kusma (talk) 10:30, 26 January 2023 (UTC)
- I assume that you are referring to automatic reversion (correct me if I'm incorrect). What about the wording of the verification policy that reads "challenged or likely to be challenged"? If it is not challenged or likely to be challenged, it doesn't need references. How will a bot be able to tell the difference between what does and does not require references? Or would the bot's removal of an edit constitute a challenge? Whether reversion is automated or not, should all new content to Wikipedia be challenged by default? That would require a change to WP:V, and that seems unlikely to happen. — The Transhumanist 11:36, 26 January 2023 (UTC)
- I would challenge the factual accuracy of anything generated by AI. –dlthewave ☎ 13:41, 26 January 2023 (UTC)
- I'm not talking about automatic reversion, can't see how to do that without AI. And the verification policy is applied differently to new and existing articles; for new articles, we are in practice already expecting much better sourcing than "likely to be challenged" (just look at what will be rejected by AFC). Perhaps we should expand this to addition of content to existing articles. —Kusma (talk) 13:56, 26 January 2023 (UTC)
- I would challenge the factual accuracy of anything generated by AI. –dlthewave ☎ 13:41, 26 January 2023 (UTC)
- I assume that you are referring to automatic reversion (correct me if I'm incorrect). What about the wording of the verification policy that reads "challenged or likely to be challenged"? If it is not challenged or likely to be challenged, it doesn't need references. How will a bot be able to tell the difference between what does and does not require references? Or would the bot's removal of an edit constitute a challenge? Whether reversion is automated or not, should all new content to Wikipedia be challenged by default? That would require a change to WP:V, and that seems unlikely to happen. — The Transhumanist 11:36, 26 January 2023 (UTC)
- Support blanket ban for now; would be fine having a discussion later on allowable use cases, but I'd rather we started with a total blanket ban first, and then itemize specific possible use cases if we later decide there's some utility. --Jayron32 12:47, 26 January 2023 (UTC)
- Support blanket ban for prose as there are significant copyright concerns about AI-generated text. Text generated by a bot will not substitute for reliable sourcing. Maybe in the future when Abstract Wikipedia comes online, we can give some leeway for bots to generate text based on reliably cited info, but for now, it's just too risky for the project. CactiStaccingCrane 13:34, 26 January 2023 (UTC)
- If there is interest in at least reviewing this possibility (which it sounds like there is), as noted there it needs to be written better than my initial trial balloon above. I'll do that but still keep it short. I think that it can be done in a way that deals with the main enforceability questions and also allows described useful uses by allowing bot-assisted editor-generated content. I'll do that within a 1/2 day. North8000 (talk) 14:33, 26 January 2023 (UTC)
- Support ban for text additions. This is so susceptible to abuse and inserting misleading content that it should be banned. There is little benefit to allowing text generation and much harm.
- Oppose a blanket ban. it's premature and is not based on any actual cases in Wikipedia. Rjensen (talk) 16:20, 26 January 2023 (UTC)
- Have you seen WP:ANI#Artificial-Info22 using AI to produce articles? It's already happening. Fram (talk) 16:30, 26 January 2023 (UTC)
- Oppose new policy that goes beyond small clarifications of WP:BOTPOL. I think it is obvious that ChatGPT is already covered by it and I do not see what modifications are proposed.
- Things that go against a content policy (WP:COPYVIO, WP:V, etc.) should be reverted / deleted on those grounds, and on those grounds alone; editors that make many such edits should be warned and then blocked. Editors who make faster edits than reasonably possible by hand should be dealt according to WP:MEATBOT.
- I oppose any policy to revert / delete / ban based solely on a "seems bot-written" criterion, unless and until it has been proven that (1) this is a real, time-consuming problem on Wikipedia, and not a few random tests within the sea of vandalism, and (2) whatever criterion is used has been independently tested to establish its sensitivity and specificity and validated by the community. TigraanClick here for my talk page ("private" contact) 17:24, 26 January 2023 (UTC)
- I also thought it obviously fell under BOTPOL, but this discussion shows some uncertainty. –dlthewave ☎ 20:04, 26 January 2023 (UTC)
- Oppose a blanket ban. I agree in general with the oppose reasons given above. I also think such a ban would be "virtue signaling" without being effective. Editors who want to add AI-generated material to Wikipedia will not be stopped by such a policy. Consider how often our existing policies stop editors from adding un-verifiable, POV-pushing content. What we can do is work on effective strategies for detecting and removing un-sourced, un-verifiable content as expeditiously as possible. - Donald Albury 19:34, 26 January 2023 (UTC)
- Oppose a blanket ban. I expect that in another five or six years, we will happily be allowing a Wiki-AI to both write and clean up most of our articles. BD2412 T 22:26, 26 January 2023 (UTC)
- And AIdmins to deal with AIsox ~ Selfstudier (talk) 22:53, 26 January 2023 (UTC)
- While all of this wouldn't surprise me given the other aspects of this dystopia we call Earth, it would simply confirm that we live in the Golgafrinchan Ark B world. Andre🚐 22:55, 26 January 2023 (UTC)
- I suspect it will become necessary to rely on such bots, as our current model is creaking under the current load. In just the last three or four days I've discovered a couple articles that I started that are in need of serious cleanup and repair, particularly on sourcing (linkrot and other issues) and updating. Donald Albury 22:58, 26 January 2023 (UTC)
- Oh don't get me wrong, I would love to see AI bots to improve references, and do other automated tasks under the bot policy. But the AI should not be trusted for facts or interpretations. And I do fear the ChatGPT-ization of the language. There is good and bad writing on Wikipedia, but at least it was written by and for human beings. Andre🚐 23:02, 26 January 2023 (UTC)
- Oppose a blanket ban. I agree with King of ♥ ♦ ♣ ♠'s comment above. If an editor adds unsourced AI-generated content, that content should get treated the same way as non-AI generated unsourced content. WP:V and other existing Wikipedia policies already cover that. Some1 (talk) 00:05, 27 January 2023 (UTC)
- But, a great deal of unsourced content doesn't get treated. The amount of unsourced content on Wikipedia is vast. If you don't believe me, pick a scholastic subject like Roman Empire, click on "What links here", and open a bunch of the links in tabs and start inspecting. What about random article? Too many stubs, and it's boring. With "What links here", you can get longer articles on average to view. The striking thing is the lack of "citation needed" tags - they are spaced few and far between. They can be found on roughly 1% of all pages, while unsourced content can be found on a much higher percentage.
Another thing to try is go to Template:Unreferenced, and click on "What links here". The first page I clicked on was Tank destroyer. The tag is dated March 2009.
The point is, you make it sound like all unsourced content gets quickly removed. That's not the case for a huge amount of content. It can sit there for years. LLM-generated content can be generated in great quantities fast, and therefore has the potential to accumulate more quickly than content composed by humans. Is it wise to let it sit there until a human comes along to remove it? In terms of a quantity competition between humans and computers, computers will win. Keeping up will require auto-removal, and it would be best to start building those tools now. See Fram's post above for an even better rationale. — The Transhumanist 06:49, 28 January 2023 (UTC)
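As a rough illustration of what "building those tools" might start from, here is a minimal sketch, in Python, of listing pages currently tagged with Template:Unreferenced via the MediaWiki API. It only surveys the existing backlog rather than removing anything; the user-agent string and limit are illustrative assumptions.

```python
# Minimal sketch: survey the unreferenced-content backlog by asking the
# MediaWiki API for pages that transclude Template:Unreferenced.
# Assumes the `requests` library; the user-agent and limit are illustrative.
import requests

API = "https://en.wikipedia.org/w/api.php"

def unreferenced_pages(limit=50):
    """Yield titles of mainspace pages tagged with {{Unreferenced}}."""
    params = {
        "action": "query",
        "list": "embeddedin",              # pages that transclude a given title
        "eititle": "Template:Unreferenced",
        "einamespace": 0,                  # article (main) namespace only
        "eilimit": limit,
        "format": "json",
    }
    headers = {"User-Agent": "unreferenced-survey-sketch/0.1 (illustrative)"}
    response = requests.get(API, params=params, headers=headers, timeout=30)
    for page in response.json()["query"]["embeddedin"]:
        yield page["title"]

if __name__ == "__main__":
    for title in unreferenced_pages():
        print(title)
```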
- Comment: There's no way to enforce a ban, and at any rate Wikipedia would do well to have more automation. Both MW and en.Wiki are heavily dependent on manual labor, with 3rd-party "bots" doing some of the more menial tasks. Compare how one would compose a document on a modern word processor vs. how editors do it here: no Wikitext, no copying-and-filling-out-templates (and no separate commits just for AnomieBOT to date them), no broken pages because of parsing errors, no dragging someone to ANI/AE for a T-ban violation (because there's such a thing as access control), no separate citations of the same source in five different formats (because there's such a thing as reference management); and you can actually comment on a specific paragraph without looking for a diff number, opening a "discussion" (which is in fact just another near-meaningless bit of Wikitext), signing it and hoping that no edit conflict arises because someone changed something 5,000 words up the page. We need to get rid of the concept of a WP:GNOME before we can even consider how to deal with a language model that can generate an entire article in a fraction of a second. François Robere (talk) 13:31, 27 January 2023 (UTC)
- Oppose. I get that this raises some interesting questions in the abstract, but come on guys, we have enough problems with instruction creep already, we don't need to start writing policies in anticipation of the hypothetical abuse of Silicon Valley's latest fad. – Joe (talk) 14:18, 27 January 2023 (UTC)
- Support blanket ban because fuck LLMs and fuck the corrupt, unethical industry that created them. XOR'easter (talk) 17:32, 27 January 2023 (UTC)
- Oppose blanket ban and suggest we develop Wikipedia:Large language models (perhaps under the name Wikipedia:Computer-assisted text generation suggested by Michael_D._Turnbull). I don't think that going forward we can ban AI-generated text writ large, first and foremost for the very simple example that many people currently editing Wikipedia use text edit widgets that already incorporate something of this in the form of spell check, autocorrection and autocomplete, and these kinds of tools will continue to blur the line between AI, language models, and human-generated text. Going forward it would be practically Neo-Luddism to eschew all AI. I don't like the use of ChatGPT right now today to generate text, I don't like it at all, but neither can I bury my head in the sand and whistle Dixie, pretending it doesn't exist and won't grow in importance. We should meet this head on rather than pretend we can completely ban AI-assisted or AI-generated text. —DIYeditor (talk) 23:27, 27 January 2023 (UTC)
- Oppose What seems to scare people about ChatGPT is that it writes better than most Wikipedia editors. Competition is healthy and so should not be subject to restrictive practices. See also Luddism. Andrew🐉(talk) 22:32, 28 January 2023 (UTC)
- Competition is good when it's robot vs real people? You'd be happy if most of the encyclopaedia was written by AI? And most of the real people just gave up? Doug Weller talk 13:05, 29 January 2023 (UTC)
- I think that most people are at least open to the idea of using LLMs like ChatGPT for good and practical purposes, such as summarizing dozens of paragraphs from a reliable source, or making a WP:Earwig-like bot that detects source-text integrity issues, or detecting possible hoaxes/context-dependent vandalism in Special:RecentChanges. I'm sure that when these LLM-based tools come out, people will use them just as much as mw:ORES and User:Cluebot NG today. The problem as of 2023 is that these tools do not exist yet and, in their current form, LLMs are an extremely powerful tool for bad actors while disadvantaging good-faith Wikipedia editors. I feel that the situation between LLMs and Wikipedia right now is a bit like that between Wikipedia and academia in the early 2000s, when Wikipedia was full of uncited info and its reliability was really shaky to say the least (see also https://www.nostalgia.wikipedia.org). Maybe this will change in the future when someone makes an LLM model that's aligned to our values and policies, but in my opinion for now a blanket ban is necessary to prevent mass vandalism while we are trying to process the situation. CactiStaccingCrane 13:24, 29 January 2023 (UTC)
- @CactiStaccingCrane: How would a ban of LLMs prevent vandalism exactly? Vandals would simply ignore the ban, while many good actors would obey the ban. You would in effect be preventing good actors from using the tool, and not bad actors. The only way to deal with vandals who ignore a ban is directly – that is, identify their text and remove it, and block the vandals. But you can do that anyways. So, wouldn't it be best to identify and remove LLM vandalism while allowing good use of LLM-generated text? (See the starting edit and talk page for the chatbot-generated article Artwork title). So, I'm confused as to how you believe a blanket ban would help. Let me repeat my initial question, along with a follow-up question: How would a ban of LLMs prevent vandalism exactly? And why would banning good actors using LLM (like Pharos and JPxG) be necessary? I look forward to your replies to these 2 questions. Sincerely, — The Transhumanist 21:46, 29 January 2023 (UTC)
- Support blanket ban. Google and Bing actually consider auto-generated content as spam, which is bad for SEO. If anything, copying from GPT and pasting into an article is no different from copying from a free source and pasting into Wikipedia. The text generated is not your own, but is licensed to you forever, and, since this is a machine we are talking about and not a human, there is a probability that specific text output will be the same for multiple users for the same input.
- I believe GPT is best used as an inspiration for further research, but in no way is it actually any more useful than a machine-translated article. Aasim - Herrscher of Wikis ❄️ 14:28, 3 February 2023 (UTC)
- Oppose a blanket ban. I would prefer that editors use GPT to assist in writing, but to avoid plagiarism, tag their edits as "GPT-assisted". I would additionally suggest that GPT-assisted edits be all subject to pending changes, except if a user reaches Extended confirmed status. I personally fear that, for those who are familiar with American history, ChatGPT could be the prohibition era equivalent for Wikipedia. InvadingInvader (userpage, talk) 07:47, 6 February 2023 (UTC)
- Pending changes can only be applied to articles. There is no way (that I am aware of) of having pending changes apply to only some edits to an article. Donald Albury 15:43, 6 February 2023 (UTC)
- Comment IMHO, the issue isn't per se about how articles are written. Very few people would object if Wikipedia was provided content that was accurate, well-sourced & well-written by GPT bots. The root issue is that we don't have enough volunteers to review a flood of these GPT-created articles, let alone enough to review them knowledgeably. Viewed in this light, New Page Patrol's often-discussed problems dealing with a tsunami of new articles are but the nose of the camel poking under the tent. (And then there is the threat of gut-&-stuffing existing articles.) Most of us have other, off-Wiki demands on our time, so we are forced to contribute on a part-time basis; but even if we were all full-time volunteers, we still couldn't keep up with all of the edits. -- llywrch (talk) 21:21, 11 February 2023 (UTC)
- Oppose. As demonstrated by Artwork title these bots can be used positively or negatively. Our policies should be tuned to prevent the negatives, and not also prevent the benefits. BilledMammal (talk) 11:17, 13 February 2023 (UTC)
Chatbot
This idea probably goes to the heart of it
How about this? (just a draft to be tweaked)
- An editor is responsible for every part of every edit that they make. The norm is that they should have their own rationale for everything that they add, including every word, phrase and sentence. For additions to articles, they should make a specific effort to make sure that the edit is appropriate with respect to the current article, for example with respect to the structure of the article and avoiding duplication. They should also make a reasonable specific effort to assure that each portion of their addition is verifiable or verified in accordance with WP:Verifiability. It is unlikely that these requirements and expectations would be met with AI-generated content (or any large amount of text that has been copied from elsewhere and pasted in, copyvio issues notwithstanding)
- Wikipedia relies on volunteer efforts to review additions. This often requires time consuming reviews of individual words, phrases and sentences; obtaining this degree of effort relies on the understanding that the editor who put the material in has made a similar effort to develop that phrase or sentence. Expecting editors to give this review to large amounts of material which were generated by AI in a few seconds would cause a substantial loss of this effort.
- Accordingly, this clarifies that removal / reversion of en masse additions of material suspected of being AI generated is considered to be an appropriate practice. An editor who seeks to restore the material is expected to break it into small portions, each with individual explanatory edit summaries. If such a removal results in deletion of the entire contents of the article, it then becomes a candidate for speedy deletion.
Sincerely, North8000 (talk) 21:07, 26 January 2023 (UTC)
- I don't favour making machine-generated text a special case. Given that editors are already responsible for verifying every aspect of their edits, any clarifications should be equally applicable to all cases, such as human ghostwriting teams. isaacl (talk) 21:48, 26 January 2023 (UTC)
- I like that a lot. — rsjaffe 🗣️ 05:06, 27 January 2023 (UTC)
- I have integrated the above three paragraphs into Wikipedia:Large language models, see Wikipedia:Large language models#Specific guidelines and Wikipedia:Large language models#Summary removal of larger LLM-generated additions of article prose. —Alalch E. 10:33, 27 January 2023 (UTC)
- cf. Wikipedia:Village pump (policy)/Archive 176#RFC: change "verifiable" to "verified". Also: do not make assumptions about the capabilities of AI. There are models at work that integrate references, and you should assume that at some point they'd be able to compose texts that are comparable to any Wikipedian's. Ergo, policy should focus on what we're looking for, not who or what composed it. François Robere (talk) 14:12, 27 January 2023 (UTC)
- We can change the policy if something is developed that is reliable enough. Until then, blanket rejection is appropriate. — rsjaffe 🗣️ 15:46, 27 January 2023 (UTC)
- It's important to see the conduct side too because LLM misuse forms a pattern of disruptive editing. It starts with one person's idea that a specific thing can be accomplished on Wikipedia in this way, proceeds with the intent to implement this idea without caring to understand and account for what Wikipedia's requirements are, and ends with an undesirable action which may be repeated if not addressed. —Alalch E. 15:55, 27 January 2023 (UTC)
If chatbots are banned, would the article Artwork title have to be deleted?
Artwork title was created by chatbot, and heavily edited by a human since. If chatbots (and LLMs) become banned, how would it apply to pre-existing chatbot-generated articles? — The Transhumanist 22:09, 29 January 2023 (UTC)
- I don't think it would have to be deleted (or restarted) unless it was somehow found to be a copyvio. Maybe it would be peer reviewed instead. Roll 3d6 (talk) 08:50, 5 February 2023 (UTC)
More specific proposal: Blanket ban on LLM content on Talk page discussions
Regardless of the community's decision on LLM-generated content in articles, which is the focus of much of the discussion above, the ability of editors to flood talk pages with artificially generated content arguing for a particular position on talk seems to have no redeeming value, and represents a new form of Wikipedia:Sockpuppetry. I propose a blanket ban on such writing, with especially strong guardrails for RfCs and AfDs. (Alternatively, I would be open to a phrasing that allowed LLMs to summarize the state of a debate, or be used to generate sample content for discussion, as in the conversation above, but not used to make arguments. That just seems harder to phrase clearly.) Carwil (talk) 20:22, 27 January 2023 (UTC)
- That's a really weird way to use them. Also, why would it matter? Even if an editor used an LLM to generate a better phrasing of the argument they want, it's still their account putting forth the argument. And the argument is either valid or not in regards to others involved in the discussion. Why is this a problem exactly? Do you have any examples to better clarify this sort of usage? SilverserenC 23:14, 27 January 2023 (UTC)
- Here's an article that should help clarify the relevant danger: Researchers demonstrate how attackers can use the GPT-3 natural language model to launch more effective, harder-to-detect phishing and business email compromise campaigns. If they can use it to write convincing email scams, Wikipedia talk pages should be a breeze. Here's a quote from the article: "The generation of versatile natural-language text from a small amount of input will inevitably interest criminals, especially cybercriminals — if it hasn’t already. Likewise, anyone who uses the web to spread scams, fake news or misinformation in general may have an interest in a tool that creates credible, possibly even compelling, text at super-human speeds." If that doesn't convince you, I don't know what will. — The Transhumanist 10:16, 28 January 2023 (UTC)
- Has this ever actually happened? – Joe (talk) 05:29, 28 January 2023 (UTC)
- Maybe a more relevant question is "Will we be ready for it when it does?" — The Transhumanist 10:16, 28 January 2023 (UTC)
- As Transhumanist is suggesting, my concern is rapid creation of multiple arguments that either tilt a discussion or waste the time of sincere contributors. Users should be warned that they can't substitute mass-produced arguments for their own judgment inside the encyclopedia.--Carwil (talk) 19:26, 28 January 2023 (UTC)
- Support - Weird? Compared to the many examples of what Chat-GPT can do posted on social media, writing talk page or forum posts on Wikipedia seems comparatively bland and simple. Why would using an LLM on a talk page or discussion forum matter? Because, it is faster than a human. With it, a user could participate in more discussions in less time. But, the big concern here is using it on multiple accounts with different writing styles to stack votes on issues with little chance of being discovered as the same person. That's sockpuppetry elevated to a higher level. Therefore, banning chatbots from being used to compose talk page or forum posts is quite reasonable. — The Transhumanist 05:42, 28 January 2023 (UTC)
- Support On a risk/benefit analysis, the potential benefit to allowing this is so small that any risk (as described above) is unacceptable. — rsjaffe 🗣️ 05:50, 28 January 2023 (UTC)
- Reluctant Oppose As my comments above, it is difficult to draw a line on which language models are allowed and which are not. Clearly people are allowed to use autocompletion and predictive text and such. Are they limited in what varieties and designs of autocompletion they use? I think this requires further discussion and hopefully input from experts. —DIYeditor (talk) 06:42, 28 January 2023 (UTC)
- Oppose a blanket ban as premature, but support at least some restrictions. While I would certainly be first in line to support a blanket ban if an actual problem arose, I am not sure that we know exactly what shape this will take enough to come up with an intelligent solution. For example, earlier in this very discussion, we were posting LLMs' output in order to judge their capabilities, so any prohibition would need to take exceptions like this into account. That said, I do support some more specific language for WP:LLM about it being very bad to use them undisclosed to argue your case in discussions. For example, if I were a shady dude, I could flood the zone with shit right here on VPP by typing out massive walls of text replying to every single person who disagreed with me, without regard for whether my arguments were sound or even correct, and even if I represented a minority view it would probably irritate and discourage my interlocutors until they stopped commenting (thus bringing me closer to a majority). Similarly, at the blood-soaked fields of AfD I could trivially write out a three-paragraph !vote on all 50 of the day's nominations (whether I was a partisan for keeping, a zealot for deleting, or a fanatic for any sort of POV). jp×g 19:45, 28 January 2023 (UTC)
- Oppose a blanket ban. As JPxG, I'm open to some restrictions, but I can see LLMs as a potentially useful tool for people who want to offer their viewpoint in discussion but lack the fluency or time of some other editors. (A bit more acerbically, our discussions already tend to be influenced by editors who are functionally LLMs: good prose stylists, possessed of enormous sitzfleisch, and not well grounded in factual specifics. If LLMs force us to review WP:BLUDGEON and our methods of dialectic/achieving consensus, there's no reason to grant people like that the privilege of immunity.) Choess (talk) 20:44, 28 January 2023 (UTC)
- Oppose per WP:CREEP. As a recent example, I included some ChatGPT output in a recent discussion at ITN. Not seeing the problem. Andrew🐉(talk) 22:23, 28 January 2023 (UTC)
- I think your example misses the spirit of my proposal, which is unattributed LLM output substituting for our own reasoning and arguments on Talk. Happy to modify accordingly. --Carwil (talk) 13:25, 29 January 2023 (UTC)
- Oppose a blanket ban on talk pages specifically (I would support one in article space). The largest problem by a huge margin with using ChatGPT on Wikipedia is adding factually incorrect information to articles. This is less of a concern with talk pages, which the general public doesn't usually read and which are not primarily meant to impart facts or be reference material. We already have a ban on sockpuppeting and an expectation that decision-makers will ignore bad arguments, which cover the negative use cases mentioned above. Also, as mentioned above, there is not an enormous difference between ChatGPT and predictive text or services like Grammarly, and the line between those is going to become blurrier by the day, faster than policy can keep up with. Gnomingstuff (talk) 01:20, 3 February 2023 (UTC)
- Oppose as making up rules when no problem has been demonstrated. If it does happen, we can warn, revert, and block the offending material and offender for wasting our time. Graeme Bartlett (talk) 12:10, 3 February 2023 (UTC)
The discussion has diverged
Further issues on this topic (chatbot-generated content) are being discussed at Wikipedia talk:Large language models, the talk page for the policy draft on this subject. — The Transhumanist 05:15, 30 January 2023 (UTC)
The discussion has been re-opened
- Since the discussion basically died out after being closed, rather than migrating to WP:LLM, I'm undoing User:CactiStaccingCrane's close, with his permission. Let's keep all discussion in one place, so we can get solid consensus on all these points. DFlhb (talk) 13:46, 1 February 2023 (UTC)
Chatbot news update 2023-02-12
- Panicked Google to unveil over 20 AI products in 2023
- ChatGPT reaches 100 million users two months after launch
- Google has stiff competition now, after Microsoft integrated powerful A.I. technology into its search engine.
Where do we expect issues to occur?
If we can work out the issues that editors improperly using these tools will cause then we can adjust our policies to make it easier to address those issues, and then as the technology develops we can see how it is being used and how it can be used and create a more specific policy then. BilledMammal (talk) 11:21, 13 February 2023 (UTC)
- @BilledMammal: Here are some:
- Made up content
- Lack of source citations
- Made up sources
- Claims based on out-of-date sources
- Possible synthesis
- Copyright unclear as the source of the material is unknown
- Copyright unclear as the issue of who owns the output of a chatbot has not been resolved
- Inclusion of 3rd-party copyrighted material, such as lyrics, a poem, etc.
- Chatbot license violation (e.g., for omitting attribution of the chatbot with the generated content)
- 100,000,000 ChatGPT users and growing (after just 2 months),[6][7][8][9] suggesting that the use of tools like this may become ubiquitous, creating the potential for a large volume of one-off posts of chatbot generated text (and other media)
- Not a comprehensive list. — The Transhumanist 07:31, 15 February 2023 (UTC)
Should news sources without attribution to specific authors be deprecated?
There is a lot of discussion currently about the risk of generated content being entered into Wikipedia directly, but another concern is third-party generated content being used indirectly as a source. Just last week there was some level of reporting regarding CNET's now-retracted decision to publish articles on their site that were written by an "Article Writing AI", articles which, despite ostensibly being screened by an editor, were found to have significant factual errors. CNET attributed these articles to CNET Money Staff, and then, after some criticism, just CNET Money. Now, because of the negative attention, they've "paused" the rollout.
https://futurism.com/cnet-ai-errors
https://www.theverge.com/2023/1/20/23564311/cnet-pausing-ai-articles-bot-red-ventures
The thing is, CNET is still considered a reputable publication, which is why it was notable that they did this and is also why they publicly stopped. However, there are certainly many other smaller sites that could implement technologies like this (and maybe already have) in order to appeal to SEO and search result rankings, and as these sources suggest, it will be very easy for factual errors to slip by well-meaning editors who are not used to manually fact-checking every single factual statement in a given text.
Given this issue, I think we may want to consider deprecating sources from news/newsblog articles that are not attributable to a specific author, or at least doing so for sites that have been confirmed to use AI generated content. Tpdwkouaa (talk) 18:38, 23 January 2023 (UTC)
- Or, we could wait and see if sites lose their reputation for reliability because they have been found to be using AI-generated content that contained unacceptable levels of false information. I am very leery of deprecating sources because of what they might do, rather than because they have demonstrably published false content. Donald Albury 19:30, 23 January 2023 (UTC)
- I have to agree with Donald… WAY too soon… we can react to what actually happens, and should not “pre-act” due to fear of what might happen. Blueboar (talk) 19:44, 23 January 2023 (UTC)
- If a source becomes unreliable for this practice, then it's not suitable for use. That's the time to deprecate. We shouldn't do so on the idea that it might not be what we think. Lee Vilenski (talk • contribs) 19:47, 23 January 2023 (UTC)
- Much too broad a solution to this narrow issue. Newswire services generally do not credit a specific author - this change would deprecate a very large number of highly reliable sources. I would support deprecating CNET even if they've stopped their article-writing bot, since clearly their editorial oversight is quite poor and that has implications even for articles written by humans. Ivanvector (Talk/Edits) 19:52, 23 January 2023 (UTC)
- Overreaction to a minor issue that will complicate things immeasurably. Many news sources simply have their author set as "News desk" or something generic like that. Similarly, print newspapers often don't directly name an author for the majority of clippings. Should instead be handled on a case-by-case basis. Curbon7 (talk) 19:53, 23 January 2023 (UTC)
- I've long stated that the biggest threat is that Wikipedians seem to have no idea what to do now that 3rd-party vetting and verification is becoming ever less reliable. In the meantime, there are some great reporters who seem to have switched to self-publishing, but we don't recognize that work either. We need to begin putting thought into replacing that entire system, probably by creating a new system to allow individuals to verify sources, news reporting and news reporters and probably keep track of some sort of ranking and reputation score or something like that. The fact that we keep being stuck in old conventions and refuse to even think about what first-hand and oral knowledge means is concerning for our future, because it's going to get pretty thin in sourcing when half the news is coming out of TikTok reporters. —TheDJ (talk • contribs) 22:33, 23 January 2023 (UTC)
- Reliability is in general an issue, I don't like "Staff" or "News Desk" either but the suggestion is overkill. Sure, we should be keeping an eye on this and if needs be, case by case, addressing it. Selfstudier (talk) 22:40, 23 January 2023 (UTC)
- There are dual, growing problems: the old Gell-Mann Amnesia effect (worsening, due to persistent cost-cutting and monetary pressures), and the new (partial) downfall of "gatekeepers" (i.e., formal publishing of any kind).
- Here's how we could start to tackle that: WP:RSN is too rigid a mechanism. Most WP:RSN RFCs judge a source's quality based on a handful of articles, which is a minuscule amount of the source's overall content; they also largely rely on vague heuristics (like how the source's website looks, whether it's in text or video format, whether the authors have formal credentials, etc.). I can come across 10 misstatements of fact by The XYZ Post over a few months, but I'm not keeping notes, so those falsities won't be discussed in any RFC. And I can come across outstanding reporting by a self-published source or blog, but if I miss that RFC, it'll never be discussed.
- We should create a new page, with subpages for each source (broadly defined; could even include YouTubers). When a source publishes an incorrect statement of fact, we list that on its subpage; when it publishes high-quality exclusive reporting (again, even including YouTube videos), we list that too. This would allow any WP:RSN RFC to judge a source far more holistically and more comprehensively than is done currently. (note: this is only a very early idea, not a proposal) DFlhb (talk) 14:03, 25 January 2023 (UTC)
- If the problem is with sources generated by AI then the solution should be targeted at sources generated by AI, not at the other 99+% of news sources without attribution to specific authors. There may be other reasons for not accepting news sources, whether or not they have attribution to specific authors, but this proposal is not about those reasons. Phil Bridger (talk) 23:08, 23 January 2023 (UTC)
- And there are tools that, at least for now, can detect AI-generated text, so questionable articles can be tested to see if an AI is probably involved. See https://openai-openai-detector.hf.space/, for example. — rsjaffe 🗣️ 23:19, 23 January 2023 (UTC)
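For anyone who wants to try that kind of check locally rather than through the web demo, here is a minimal sketch assuming the Hugging Face `transformers` library and the publicly released RoBERTa-based GPT-2 output detector; the model name and label strings come from that model's public card and may change, and the sample sentence is purely illustrative.

```python
# Minimal sketch: score a passage with the RoBERTa-based GPT-2 output detector.
# Assumes the `transformers` library (plus a backend such as PyTorch) is
# installed and that the "roberta-base-openai-detector" checkpoint is still
# hosted on the Hugging Face hub; its labels are "Real" and "Fake" per the
# public model card, but other detectors may use different label names.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

sample = "The Roman Empire was the post-Republican period of ancient Rome."
result = detector(sample)[0]
print(f"{result['label']} ({result['score']:.1%})")  # e.g. "Real (97.3%)", illustrative output
```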
- Just to be clear, most old newspapers (which are widely used as sources for old biographies) identify no authors. I assume this question is directed at modern or future sources. BD2412 T 23:40, 23 January 2023 (UTC)
- Agree completely with BD2412's point. AI-generated content is a new phenomenon, and the solution should be limited accordingly. Cbl62 (talk) 00:12, 24 January 2023 (UTC)
- Count me as another agreement. First thing I thought when I read the proposal was "that needs a major exception for content pre-AI-concerns". AddWittyNameHere 00:30, 24 January 2023 (UTC)
- This is very common with syndicated news articles written by news agencies (e.g. the Associated Press, Agence France Press, Reuters), which are actually probably the most reliable sources in news. See e.g. a syndicated article by the AP published in the SCMP, or a syndicated article by AFP in France24, neither of which have a named author in the byline. So this proposal is a complete nonstarter. Endwise (talk) 00:21, 24 January 2023 (UTC)
- The number of times I have found factual, contextual and positional (bias) errors with stories (signed or unsigned) by the news agencies listed above is too big to count. Sometimes they disseminate worthwhile information. Sometimes they are just worthless. Since there is no mechanism to discern in advance which of those conditions apply for any specific story, they can never be called "reliable". That is a blanket statement that is just not true. On the issue raised by the OP I agree that the proposal is too broad. 65.88.88.59 (talk) 20:55, 24 January 2023 (UTC)
- The fact that a source sometimes makes mistakes does not make it unreliable. As it stands, mainstream news agencies are considered generally reliable by the Wikipedia community: WP:RSP lists Agence-France Press, Associated Press, and Reuters as generally reliable. If you think that consensus is wrong, you need to start a new discussion about that, preferably at WP:RSN, but you're going to need strong evidence to convince people.
Insofar as mainstream news agencies are currently considered reliable, a new policy which makes them unreliable as an accidental side-effect is clearly a non-starter. Caeciliusinhorto-public (talk) 09:59, 26 January 2023 (UTC)
- Agreed with Endwise, and I'd have used the exact same words ('complete nonstarter'). Our current mechanisms are beyond sufficient: if a source is found to use AI bots, WP:RSN will deal with it accordingly, including by determining which properties are affected (whether a whole outlet, or just a sub-brand). DFlhb (talk) 14:16, 25 January 2023 (UTC)
- No, we should not deprecate news stories with no author. Many news agency stories are team efforts with no byline. Deprecating sources with no author would also wipe out most sources used in our coverage of the Russo-Ukrainian war. I do agree with the comment above that many very reliable authors are publishing independently and are too easily dismissed as "blogs", but that is a different issue. Elinruby (talk) 23:14, 30 January 2023 (UTC)
- Not a good idea. In addition to everything mentioned above, content management systems for news agencies are not perfect, and sometimes the byline listed on the website will change, disappear, or never make it online in the first place. Gnomingstuff (talk) 06:02, 13 February 2023 (UTC)
- Most reputable web sites probably know better than to publish unreviewed automated text. If they review the text properly to ensure it's okay, the article is probably as reliable as any other article on the same topic on the same site, and the reviewer may not be listed. If attributed articles on any particular site turn out to be problematic, we can always make a decision for that site. Animal lover |666| 07:02, 26 February 2023 (UTC)
Expanding WP:NOTGUIDE to include part lists/specifications
There are currently many hundreds of articles on Wikipedia which are just lists of information like gear ratios in every variant of every car part ever made by every single manufacturer. Pretty much every single page in every one of these categories consists of huge amounts of non-contextual information without evidence of notability. Here are some random examples:
- Toyota A transmission
- Mazda G4A-EL transmission
- Toyota NBC platform
- Mitsubishi Neptune engine
- ZF EcoLife
I'm not sure what the solution is here. A while ago I opened Wikipedia:Articles for deletion/Toyota C transmission after stumbling upon the article from random and the consensus there was to delete, but it just feels painful. Following this rough consensus, the follow-up should be to go on a mass nomination spree and start spraying WP:TNT since I don't think there's much hope any of these articles meet GNG or WP:NPRODUCT. On the other hand, running around as a deletion monkey and blowing up megabytes of almost certainly reliable text that someone painstakingly copied from a manual doesn't feel good either.
WP:NOTGUIDE is much less clear here than it could be. It's clearly the most relevant policy alongside WP:NOTDATABASE and WP:NPRODUCT, but only explicitly condemns "how-to" style guides which these are not. I'm unsure about the right way to proceed. I feel like a modification to NOTGUIDE might be due here to clarify explicitly whether "parts list" style articles are appropriate. BrigadierG (talk) 00:40, 3 February 2023 (UTC)
- For a proposal to mass delete articles that are all under a particular wikiproject, good etiquette would say to at least notify that project. Otherwise it looks like you are trying to get the deletions done before the involved editors know what's happening. I came across this topic by sheer luck. I have notified them at Wikipedia_talk:WikiProject_Automobiles#Proposal_to_delete_many_engine_and_transmission_articles.
- The C transmission article got consensus to delete mostly because of too few references rather than whether it was a bad choice of article. Stepho talk 00:36, 4 February 2023 (UTC)
- I'm not trying to propose mass deletion. If I wanted mass deletion I wouldn't waste my time talking about it here, I'd just start mass nominating, and I'm sure nominating enough would ram a good few through successfully given the tenuous connection these articles have to GNG. Deletion isn't necessarily my goal. What I'm seeking is clarity, and perhaps an alternate route forward. I'm not sure the current article structure is appropriate, but it might be possible to do something like folding down the amount of information (which I think is generally excessive) and grouping articles as series rather than as individual articles for each part. As for "too few references rather than whether it was a bad choice of article", if you can find 2 quality secondary sources to pass WP:NPRODUCT then I will personally recreate the article. BrigadierG (talk) 02:40, 4 February 2023 (UTC)
- I can see the problem with these articles – they lack sufficient context and fail to explain what they are about. The question is whether they provide any useful information to the average reader. Considering that most people seem to be interested primarily in tech specs, one could argue that this is the case, but considering the poor referencing situation I'd personally say that we sadly got nothing but a hearsay compilation. Especially when browsing through the history of these articles I start getting serious doubts about the accuracy of the information provided. And I guess that no article is always better than an article that is made up of unverifiable nonsense. Best regards, --Johannes (Talk) (Contribs) (Articles) 08:22, 4 February 2023 (UTC)
- I originally nominated the article after an IP changed an unreferenced gear ratio to add a "XD" Toyota gearbox specification to a long list of gearbox specifications. Their only other edit was unconstructive so I reverted it, but I really have no idea either way, and I didn't feel like it was appropriate to have to trawl through a 200 page specification to find out. Call it inadequate referencing or whatever, but a nightmare from a WP:V perspective BrigadierG (talk) 00:49, 5 February 2023 (UTC)
- "Inadequate referencing" sounds fair, but I'm doubtful about the original premise, namely that there won't be sources written about a major component used in a multi-billion dollar industry. I'm unfamiliar with this subject area, but on the subject of Toyota A transmission, I found sources such as:
- Kuramochi, K., Shindo, K., Kubo, S., & Miura, M. (1984). "Toyota New Four-Speed Automatic Transmission for Front Wheel Drive Vehicles." SAE Transactions, 93, 305–315.
- Kondo, M., Hasegawa, Y., Takanami, Y., Arai, K. et al., "Toyota AA80E 8-Speed Automatic Transmission with Novel Powertrain Control System," SAE Technical Paper 2007-01-1311, 2007, https://doi.org/10.4271/2007-01-1311.
- Watanabe, M., Sasaki, K., Miyamoto, K., Kinoshita, M. et al., "Toyota's New Six-Speed Automatic Transmission AB60E for RWD Vehicles," SAE Technical Paper 2007-01-1098, 2007, https://doi.org/10.4271/2007-01-1098.
- Chapter 8 of a book called Design Practices: Passenger Car Automatic Transmissions looks like it might have some relevant information on Toyota's design of transmission, too. Someone with a greater knowledge of the subject area could doubtless find more and better sources, but I am now convinced that such sources do exist. Finding them might be inconvenient (standard search engines assume you're looking for a repair manual), but I believe it could be done. WhatamIdoing (talk) 06:20, 9 February 2023 (UTC)
- That doesn't make that material DUE, though. JoelleJay (talk) 19:25, 9 February 2023 (UTC)
- Given that the subject is a specific line of automobile transmissions, why would we not assume that a basic description of the specific line of automobile transmission is appropriate for a well-written encyclopedia article? (DUE itself isn't relevant, as it is about "viewpoints". There are differing viewpoints on whether this line is good/bad/indifferent/better than this alternative; there are no viewpoints about what the basic facts are. I find that page 65 of this book helps elucidate the difference between a fact and a viewpoint.) WhatamIdoing (talk) 01:08, 22 February 2023 (UTC)
- 25kb listing every single transmission and transmission feature and all their applications is not a "basic description", it is an exhaustive directory of information that falls far afoul of NOT:
Information should not be included in this encyclopedia solely because it is true or useful. A Wikipedia article should not be a complete exposition of all possible details, but a summary of accepted knowledge regarding its subject. Verifiable and sourced statements should be treated with appropriate weight.
and
Use reliable third-party (not self-published or official) sources in articles dealing with software updates to describe the versions listed or discussed in the article. Common sense must be applied with regard to the level of detail to be included.
JoelleJay (talk) 02:53, 22 February 2023 (UTC)
- Did you not notice me supplying several non-self-published, non-official sources earlier?
- More to the point, if you are trying to figure out the difference between the AC60E and the AC60F – if you were trying to produce a summary of accepted knowledge, not just a context-free list of the parts numbers – aren't the facts reported about them the kinds of things you would need to know? Do you think you could actually explain the difference without noting that they're both six-speed automatics but they have different gear sets for torque converter lockup? WhatamIdoing (talk) 03:03, 22 February 2023 (UTC)
- You linked a handful of primary articles by employees of Toyota, those are clearly not acceptable. If data are included in context-free lists as they are here then they violate NOTINDISCRIMINATE; if that huge amount of data is accompanied by prose summaries of each item sourced to primary/non-independent reports, it violates PRIMARY. I don't see how this material belongs in a general encyclopedia. JoelleJay (talk) 03:31, 22 February 2023 (UTC)
- In what way is an article written by an employee but published in a reputable, independent journal "clearly not acceptable"? Do you apply this standard to other commercial products, such as pharmaceutical drug or agricultural products, or is it a special standard for vehicles?
- (WP:5P1 says "Wikipedia combines many features of general and specialized encyclopedias", so it doesn't really matter if it would belong in a purely general encyclopedia.) WhatamIdoing (talk) 06:29, 23 February 2023 (UTC)
- It is primary data released by the company; we do not base articles on primary sources, and if this is the only type of sourcing that exists on the topic then the article would be 90+% primary. And of course I would oppose listing exhaustive products/specs sourced to pharma and ag companies... JoelleJay (talk) 21:03, 23 February 2023 (UTC)
- The problem is in fact wider: restaurants with their menus, record labels with all their signings (notable or non-notable). The Banner talk 16:21, 4 February 2023 (UTC)
- I don't think I've ever run across a list of restaurants with their menus.
- A record label with all their signings sounds fine to me. In fact, one of the three most common selection criteria in WP:SAL is "Short, complete lists of every item that is verifiably a member of the group." Short doesn't mean 10 entries. It means less than 32K in length. That's long enough for, say, 5,000 (yes, five thousand, not five hundred) words. WhatamIdoing (talk) 05:49, 9 February 2023 (UTC)
- Support clarification. Those articles also fail INDISCRIMINATE and NOTDIRECTORY and our basic standards for DUE and fancruft and should be deleted. JoelleJay (talk) 20:52, 4 February 2023 (UTC)
- Deletion. It can kinda suck deleting good faith content, but I stand by the position that Wikipedia should not attempt to duplicate or host the kind of content that would more reasonably be expected to be found in a database or catalog. The fact that such content is almost invariably unsourced, or that it should be sourced to the authoritative primary source, is a red flag that Wikipedia isn't the right place to look for it in the first place. Alsee (talk) 14:42, 8 February 2023 (UTC)
Oppose - at least Mitsubishi Neptune engine - you guys are sweeping with a very broad brush here. Most people who read about automobiles find engine specifications and descriptions of interest, and this kind of information is best presented in articles that only cover the engines, rather than in the automobile articles themselves. This avoids duplicating content and most articles on engines have more content than being mere lists. A lot of these articles fall short on sourcing because they were started a long time ago, when sourcing requirements were much looser than today. I am always working to add appropriate sources but it is not something which can be done overnight. Mr.choppers | ✎ 03:17, 16 February 2023 (UTC)
Plot summary structure
Looking at the plot summaries for films on Wikipedia, it seems like in general they tend to devote a disproportionate amount of words summarising the second half of the film, and ignore many plot points from the first half. (I haven't looked at the books much, but it's possible they have the same problem.) I wonder if there may be some kind of recency effect going on, in that editors remember the second half of the film much better because they saw it second. Regardless, is there anything that can be done about this? Plokmijnuhby (talk) 16:02, 6 February 2023 (UTC)
- How about excluding plot summaries entirely, and basing article content on what reliable sources have to say about the subject instead? AndyTheGrump (talk) 16:06, 6 February 2023 (UTC)
- As plot summaries are to be concise and capture the main points of a film without introducing interpretation, it is more than natural to have these focus more on a film's resolution than its introduction. Masem (t) 16:09, 6 February 2023 (UTC)
- If that were the reason, that would imply that in a three-act film, the third act gets the most focus but the first two acts get roughly the same amount of focus as each other. This does not seem to hold true: the second act tends to get more focus than the first. Plokmijnuhby (talk) 22:16, 12 February 2023 (UTC)
- This happens due to a bad rule that we have, called WP:PLOTSOURCE:
The plot summary for a work, on a page about that work, does not need to be sourced with in-line citations, as it is generally assumed that the work itself is the primary source for the plot summary.
This is one of the few areas of Wikipedia where--for reasons I can guess--we have a guideline that violates a policy (WP:NOR). Works of fiction are primary sources for their own plots, and normally we don't allow editors to interpret primary sources because that's original research. Notable works of fiction have WP:RSes about them, of course (reviews), and those include plot summaries. What we should do is require all plot summaries in Wikipedia articles to be summaries of plot summaries written by RSes. That way, we can be sure that it is the RSes, and not editors, who are interpreting the plot. "Interpreting the plot" means summarizing it, identifying important plot points, conveying how they are related, etc., all of which should be done by RSes. It's not a surprise, then, that when editors summarize a plot by themselves (instead of summarizing RS plot summaries), they end up summarizing the plot wrong. Levivich (talk) 16:22, 6 February 2023 (UTC)- I agree, but the headwind on this is very strong. Donald Albury 16:28, 6 February 2023 (UTC)
- @Levivich, I think you're wrong.
- OR is "material—such as facts, allegations, and ideas—for which no reliable, published sources exist".
- Any published source is a reliable primary source for its own contents. If you read a book whose plot is "An orphaned boy learns that he is a magician and then tries to defeat the wizard who killed his parents", then the book itself is a reliable source for the plot line. You're not "interpreting" anything as far as NOR is concerned until you start adding commentary (e.g., how famous it is, how it compares to other works, whether it's properly categorized as comedy). WhatamIdoing (talk) 06:29, 9 February 2023 (UTC)
- One cannot write a plot summary without interpreting the plot. The selection, placement, and description of details (aka interpretation) should be done by RSes not editors, or else we'll get:
Goldilocks and the Three Bears is about a girl who enters the house of three bears while they are away, eats their food, breaks their furniture, and sleeps in their beds.
Levivich (talk) 06:43, 9 February 2023 (UTC)- Yes, that's what it's about, yes. Why? I Ask (talk) 08:56, 9 February 2023 (UTC)
- It is possible, and in my view encyclopedic, to summarize and describe the plot / events that occur within a narrative for the reader without also interpreting them. These sections aid in their understanding of the subject of the article, and also the context of the secondary sourced reception. The plot summary is also verifiable by the reader who has access to the primary source as to whether those events occurred as described. It is obviously possible to badly summarize or weight elements of a plot in the same way as it is possible to badly summarize the contents of a secondary source, but this alone isn't a reason not to do it. Consensus-based editing applies here, as it does everywhere else, to mitigate against non-compliant content. FWIW, if it were possible to burn 99% of fictional character biographies with fire I would, as these are magnets for cruft and far more dependent on editorial decisions, but that's another discussion. Scribolt (talk) 09:38, 9 February 2023 (UTC)
- I agree with @Scribolt: reading a book ("a primary source") and writing down what I'll call "straightforward, descriptive statements of facts that can be verified by any educated person with access to the primary source but without further, specialized knowledge" is not "interpretation" as far as NOR is concerned. @Levivich, if you believe that's a NOR violation, then please head to WT:NOR and explain why you want to have part of the PSTS section removed. This is explicitly authorized behavior in WP:PSTS; you either need to stop claiming that NOR bans it, or you need to get NOR's explicit authorization of it removed. WhatamIdoing (talk) 01:14, 22 February 2023 (UTC)
- Summarizing sources without interpreting them is expected in all Wikipedia articles. The issue is when we can summarize a primary source. WP:PRIMARY provides six policy rules that have to be followed when using primary sources, so it's just a matter of whether a summary complies with them. Thebiguglyalien (talk) 15:11, 9 February 2023 (UTC)
- Perhaps the problem is that some people believe "interpret" means "understanding the source you just read". WhatamIdoing (talk) 01:15, 22 February 2023 (UTC)
- The intent of NOR -- speaking as one of the people who was involved in the early discussions -- was to exclude the kind of kooky ideas that plagued social media such as USENET, but persists in corners of Wikipedia where contributors attempt to cherry-pick facts & present their own novel, idiosyncratic interpretations. (And even for those of us who understand NOR, it is sometimes hard to avoid.) Further, for many well-known films & books there is no good secondary source that summarizes the works adequately: movie/book reviews rarely devote more than a few sentences to the actual work, & spend more space persuading the reader that said work is either worth paying money for -- or not. (There are times I suspect even academic articles on canonical or classic works are just well-written arguments to the reader to watch the movie or buy the book. I doubt anyone will ever get tenure for writing the definitive monographs on why Manos, the Hands of Fate is the worst movie ever created.) So we are forced to create summaries for most of these works out of necessity.
What I suspect is that the problem is we need people who can write compact yet useful summaries of movies & books, an observation I base on having read more than a few summaries that are way too long to be useful. Maybe set a limit of 500 words on any summary, which should be enough space to summarize almost any work; the only exception I can think of would be a novel like The Dream of the Red Chamber, whose unabridged Chinese original text would, I understand, amount to over a million words if translated into English. (Although, having looked at the article, I see someone has provided a summary.) But a good summary will not try to reproduce the narrative; instead, the elements will be rearranged to enable compression in a rational form. Editorial decisions will always persist in the creation of Wikipedia articles, whether we like it or not. -- llywrch (talk) 05:18, 12 February 2023 (UTC)
- There's already a limit in place, which is 400–700 words long, at MOS:PLOTBLOAT. —El Millo (talk) 22:41, 12 February 2023 (UTC)
- I was not aware of this, and I assume many other editors aren't either. Is there a bot or a report that automatically finds plot summaries for films or books that are outside of these bounds? That would be really helpful for plot cleanup. Thebiguglyalien (talk) 22:58, 12 February 2023 (UTC)
- There's a maintenance template, {{Long plot}}, that places articles in the Category:Wikipedia articles with plot summary needing attention. —El Millo (talk) 23:07, 12 February 2023 (UTC)
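For editors unfamiliar with the tagging, a minimal sketch of how that template is typically placed at the top of an overlong plot section (the |date= parameter follows the usual maintenance-template convention, so treat the exact syntax as an assumption):

```wikitext
== Plot ==
{{Long plot|date=February 2023}}
<!-- overlong summary follows; the template places the page in
     Category:Wikipedia articles with plot summary needing attention -->
```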
- But you have to find them first. @Thebiguglyalien, you could ask at Wikipedia:Village pump (technical). I suspect that you'd need to find someone who knows how to handle the WP:DUMPS (or learn to do it yourself). WhatamIdoing (talk) 01:18, 22 February 2023 (UTC)
- Note that that applies to films only, though I know some other projects have adopted it (like WP:VG). However, it is not universal. Masem (t) 01:26, 22 February 2023 (UTC)
- Novels should also be 400 to 700 words per WP:NOVELPLOT. MichaelMaggs (talk) 07:59, 24 February 2023 (UTC)
- There's also WP:TVPLOT. If you check Template:uw-plotsum1 I believe there may be other MOSes that have plot length guidelines as well. DonIago (talk) 03:22, 27 February 2023 (UTC)
- I can quite often watch a TV show or movie, but I need our articles to help me understand it because the symbolism is often lost on me. So I beg of you, please don't remove this or I will never understand TV or movies again. Seriously, this is a bit of a dilemma. Before AV Club and company came about, would there have been reliable plot summaries of TV episodes published? --Rschen7754 01:28, 22 February 2023 (UTC)
- Explaining symbolism is "interpretation", and requires a source beyond the book. It's okay for an editor to read a book/watch a film and write "He gave her a white lily". It's even okay for the editor to write that sentence and include a link to Lilium candidum#In culture. If the editor wants to write something like "He gave her a white lily, to represent his acceptance of her decision to enter the convent instead of marrying him", then you need a source. WhatamIdoing (talk) 01:42, 22 February 2023 (UTC)
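As a concrete illustration of the kind of unannotated wikilink described above (the sentence is hypothetical, not drawn from any article):

```wikitext
He gave her a [[Lilium candidum#In culture|white lily]].
```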
- Agree the work can be used as a primary source for what is in the work, or a summary of the work, but interpretation of the work requires secondary sources to not be WP:OR. -- LCU ActivelyDisinterested ∆transmissions∆ °co-ords° 20:54, 23 February 2023 (UTC)
- My biggest complaint about plot summary sections is that our article writers often don’t understand the difference between a “re-telling” of the plot, and a summarization of it. When the “3rd act” gets too much space, it usually means that the writers are re-telling the plot, rather than summarizing it. Further summarizing is needed. Blueboar (talk) 22:21, 23 February 2023 (UTC)
- I don't understand quite what you mean by that. What's the difference between a re-telling and just bad summarisation? Plokmijnuhby (talk) 10:22, 25 February 2023 (UTC)
- I'd ask the same question about the difference between interpretation and summarization. Levivich (talk) 17:00, 25 February 2023 (UTC)
- Bad summation is where the editor tries to write and include every scene or relevant event as shown, in the order that the work gives it. Instead of saying "Captain America fights Thanos", a bad summary may go on about that battle for 3 to 4 sentences. Or, as another type of example, Memento (film)'s plot is untangled, as that makes it far simpler (and more concise) to describe than going back and forth between the two sequences; for films like Groundhog Day or Edge of Tomorrow, the fact that time loops are involved is explained, as well as what happens in a typical loop, but not every single loop is discussed, just "the plot advances over several loops"-type language.
- Summation vs interpretation is a very thick grey line, but summarization should stick to exactly what is shown "on the tin", which every viewer of the work will see without having to think about it, whereas interpretation is delving into aspects that are not explicitly stated by the dialog nor stated by the work's creators in other sourcing. A common example that comes up with the type of editors we have on WP is analyzing films frame by frame to try to deduce the order of events or common points, which is above and beyond what we consider a normal viewing. Often we see editors try to over-interpret things like characters' emotions and motives that may seem obvious to one who has watched the film multiple times, but are vague on a single watch. That's why it's important to only summarize what is clearly shown on screen, not what an editor thinks is happening. Masem (t) 18:20, 25 February 2023 (UTC)
- Another point is that summarizing versus interpreting with a work of fiction isn't much different than it is with any other source. If a news article simply describes a sequence of events, we're expected to summarize that sequence of events without inserting our own thoughts on motive or how it's relevant to other things. If we want to do that, we have to find a separate source that analyzes the events. It's the same logic, just applied to a different format. Thebiguglyalien (talk) 18:34, 25 February 2023 (UTC)
- @Thebiguglyalien: My summary above ("Goldilocks and the Three Bears is about a girl who enters the house of three bears while they are away, eats their food, breaks their furniture, and sleeps in their beds.") misses key plot points that would be in any other RS summary of the story, and describes events differently than other RS summaries would. It's an example of why I think the selection of what is "important" (what to include in a summary) is interpretation, inherently and unavoidably so. Deciding what's important is interpretation. Levivich (talk) 18:55, 25 February 2023 (UTC)
- If that's an issue, then it's not one that's unique to plot summaries or even to primary sources. Every single article on Wikipedia involves subjective judgement on what aspects of the source are covered and how much they should be emphasized. I've tried to raise the issue of how we're supposed to evaluate this when designating an article as a WP:Good article, but it doesn't seem there's much interest in addressing this. Thebiguglyalien (talk) 22:35, 25 February 2023 (UTC)
- I understand what you are saying. When I am writing or rewriting an article I am making decisions about what to include from reliable sources and what to leave out, what to emphasize, and the order in which to present things. I try to write in a neutral manner, but I do want readers to understand the subject as I understand it. That is why it is important to have multiple editors independently contribute to an article, to avoid having articles shaped too much by any one editor's biases, whether systemic, deliberate, or unconscious. Donald Albury 17:27, 26 February 2023 (UTC)
- ...and why it's important to have multiple sources to guide the selection of details that are included in the summary. (See my userpage.) If you have three WP:RS and you summarize them, any detail that is in all three, or at least two out of three, is probably WP:DUE; details that are in only one of the three are probably WP:UNDUE; whereas details that don't appear in any source might not even meet WP:V. Same goes for specific words and phrases used: e.g. we say "terrorist" or "freedom fighter" or neither based on what the sources say.
- Here are three plot summaries of Goldilocks and the Three Bears:
- Goldilocks and the Three Bears is about a girl who enters the house of three bears while they are away, eats their food, breaks their furniture, and sleeps in their beds.
- Goldilocks and the Three Bears is about three bears who return from a trip to find a human has broken into their home, stolen their food, vandalized their furniture, and slept in their beds.
- Goldilocks and the Three Bears is about a child who explores a house and learns that some things are too hot, too cold, too big, too small, too hard, or too soft, but other things are just right.
- All of these plot summaries meet WP:V, they're all accurate in their details, but they tell wildly different stories. How can we decide which of these three, if any, are correct? Only by looking at RS plot summaries and seeing how they summarize the story. Otherwise, without RS, we are unmoored, drifting in a sea of WP:OR. Levivich (talk) 17:43, 26 February 2023 (UTC)
- Except, outside classic works and some pop culture sources, the full plot of works is rarely spelled out in sources. Film reviews will touch on the "Act I" content but rarely on Act III. We don't have a consistent way to source the plot of most works of fiction.
- This goes back to Thebiguglyalien's statement that we have exactly the same problem when we summarize non-fictional topics. Rarely does one single source cover the topic completely and comprehensively to leave no question of how we should summarize it. Instead, we have to work together as editors to figure out the appropriate summary structure and organization of content. That often requires editors to discuss and talk out what is seemingly the best way without violating NOR and NPOV.
- Applying this to fiction, it is again having editors each coming with their perceptions of what the original work shows and making sure the summary covers the work in a concise, comprehensive manner without interpretation. It's clear the three example summaries fail to comprehensively cover the plot, but you also have more room (from a word count side) to expand. A combination of the three, under 200-300 words, would be a proper plot summary without violating WP:V. Masem (t) 17:53, 26 February 2023 (UTC)
outside classic works and some pop culture sources, the full complete plot of works is rarely spelled out in sources
Nonsense. Give me the name of a notable work and I'll show you an RS plot summary of that work. Levivich (talk) 18:04, 26 February 2023 (UTC)- Dora the Explorer Season 5 Episode 3: "Isa's Unicorn Flowers". Thebiguglyalien (talk) 18:13, 26 February 2023 (UTC)
- I said notable work; Dora Season 5, Episode 3 is not a notable work. Nevertheless, here are three different WP:RS plot summaries:
- [10] From Paramount+ (I'm guessing that's the publisher, so WP:ABOUTSELF):
Dora and Isa are planting Isa's special unicorn flowers when suddenly a baby unicorn, Unicornio, rides in on a Rainbow. He's thrilled to see Isa's beautiful flowers 'cause they're his Mami's favorite.
- [11] From TV Guide:
Dora and Isa help a baby unicorn get back home after his magical rainbow vanishes. They pass the Dragon Cave and the Troll Bridge before finding the rainbow that takes them back to the unicorn's forest.
- [12] From Sky Group:
Baby Unicornio rides in on a rainbow, and loves Isa's beautiful flowers. He can't wait to get home, but his rainbow has disappeared, and he can't get home without it!
- Broken clocks, though; feel free to give me another :-) Levivich (talk) 18:29, 26 February 2023 (UTC)
- Those are plot synopses, which are similar to the Three Bears example above - they're too high level for a plot summary for us to use.
- I would try something like Everything Everywhere All At Once, which I have done a quick search for and can only find either synopses (not summaries) or partial coverage of the film. Masem (t) 18:36, 26 February 2023 (UTC)
- Oh, brother. Synopses, not summaries? Well, my Goldilocks synopses were all just one sentence... and if a work is not notable, such as Dora S5 Ep3, then we're only going to need a sentence or two for a row in a table, which is probably also why RSes are only writing a sentence or two synopsis.
- Anyway, an Oscar-nominated film is low-hanging fruit. Here are summaries, not synopses, from The New York Times [13], two from the Los Angeles Times [14] [15], The New Yorker [16], and for fun, Architectural Digest [17]. Levivich (talk) 18:51, 26 February 2023 (UTC)
- Those are not full comprehensive plot summaries. They are exactly what I mentioned above, that being reviews they touch on some parts of the film but do not fully summarize the film. Such as enough of the introduction of characters and early plot points, but not the "Act II" or "Act III" parts to a great degree. They get into themes, which is a separate part of our articles.
- Mind you, I am 100% supportive of adding appropriate references in plot summaries from sources like this to help better support the summaries beyond the primary work, but this has been met with resistance from editors in certain fiction areas (like television). I believe the solution lies in a realm where we can use the main work in addition to reliable sources where possible, but knowing that the full plot (within 400-700 words) will likely not be fully discussed in RSes. Masem (t) 19:06, 26 February 2023 (UTC)
- Are there any plot points of "Isa's Unicorn Flowers" or "Everything Everywhere All at Once" that are DUE but cannot be sourced to the RSes I posted above? Levivich (talk) 19:22, 26 February 2023 (UTC)
- I think the issue is that this strict reading applies policies in ways they weren't meant to be applied. WP:NOR specifies that using a primary source to provide basic facts is not original research, and using a novel as a source for its own plot is one of the examples it gives. And I'm not sure if anything in WP:DUE is applicable here, as due applies to interpretations and opinions rather than basic descriptions, and you'd have a hard time making a case that WP:MINORASPECT applies. Thebiguglyalien (talk) 19:43, 26 February 2023 (UTC)
- MINORASPECT is exactly the part of DUE I had in mind.
An article should not give undue weight to minor aspects of its subject but should strive to treat each aspect with a weight proportional to its treatment in the body of reliable, published material on the subject. For example, a description of isolated events ... related to one subject may be verifiable and impartial, but still disproportionate to their overall significance to the article topic.
Why wouldn't this apply to details in a plot summary (some details are minor aspects, others are DUE for inclusion, the difference depends on what the RSes say). Levivich (talk) 20:58, 26 February 2023 (UTC)- If there's a practical way to apply this, then crafting an RfC to deprecate works as sources on themselves would be the next step. But like I said, I think that would be an uphill battle. Thebiguglyalien (talk) 21:07, 26 February 2023 (UTC)
- Applying DUE to plots is as easy as changing WP:PLOTSOURCE from "does not need" to "needs". But I don't think that would gain consensus, just as a rule that every statement should be sourced would not gain consensus. I don't really understand why that is, but I know that it is. Levivich (talk) 21:12, 26 February 2023 (UTC)
- Because encyclopedic coverage is not the same as review coverage. Reviews will rarely include an ending out of fear of spoiling the film. Encyclopedias should. Adam Cuerden (talk)Has about 8.2% of all FPs. Currently celebrating his 600th FP! 21:19, 26 February 2023 (UTC)
- I suspect this is indeed the reason why PLOTSOURCE exists: because without that rule, we wouldn't be able to include the endings of current works, because while RSes do indeed summarize the entire plot of historical works (including the endings), RSes will usually avoid spoilers when writing about current works. What I don't understand is why editors want to include the endings of current works in Wikipedia articles. I don't believe that "encyclopedias should," nor that encyclopedias do include the plot endings of contemporary works. There's a reason that pretty much nobody does that; why should we be the first? It's anti-Wikipedia IMO, to do something no RSes do, because editors want to do it; it's OR. Levivich (talk) 21:23, 26 February 2023 (UTC)
- Because the reader wants to know how it ends. An article would be incomplete without it. --Rschen7754 21:29, 26 February 2023 (UTC)
- How do we know that's true? You're suggesting a tertiary source include information that is not in any of the secondary sources (plot endings of contemporary works). What evidence supports the conclusion that Wikipedia readers want contemporary plot endings? Was a poll taken? Levivich (talk) 21:40, 26 February 2023 (UTC)
- Many works have reception that touches on various parts of the plot like specific scenes or general themes. A comprehensive plot summary is thus necessary to establish why that reception applies to the work. Of course there are also many works where the reception doesn't touch on anything direct in the plot, but that would leave some works with a plot summary and some without, and thus for consistency the plot summary is included.
- I suggest searching on "plot" in WT:RS as the whole situation around sourcing of plot summaries has been a subject of discussion since at least 2006. There's always been a segment of the editing population that is unhappy with the lack of citations, but all the main discussions I see at the moment still support the position that a plot summary sourced only to the work itself is fine as long as the summary is written following NOR/NPOV rules and within the conciseness limits. Masem (t) 21:54, 26 February 2023 (UTC)
- Why do you think Wikipedia:Spoiler exists? --Rschen7754 03:14, 27 February 2023 (UTC)
- There are narrative pieces of glue that aid in understanding how characters get from one scene to another (not necessarily just in the physical sense) that aren't covered through the partial coverage of the plot from those reviews. For example, from what I see from those linked reviews, the structure of EEAAO (where the film's three acts are introduced) is not really discussed, but those acts narratively break the work apart in a way that makes it easy to explain. We want the plot summary to be understandable and concise, which these elements not covered by RSes often help with. --Masem (t) 19:58, 26 February 2023 (UTC)
- Well, Memento is an outlier; not all films are so confusing. And if every scene is described in three or four sentences, the whole summary will be inordinately long - but then I guess that's not what you're saying; you're returning to my original point, that people tend to spend too much time talking about the wrong things? Plokmijnuhby (talk) 10:22, 25 February 2023 (UTC)
RFC: Clarifications to WP:COSMETICBOT for fixing deprecated HTML tags
This is a request for comment regarding Wikipedia:Bot policy, specifically the section on restricted bot tasks related to cosmetic changes outlined at WP:COSMETICBOT. Ultimately, I would like to start a discussion about whether or not the bot account User:MalnadachBot is operating within Wikipedia policy, and whether WP:COSMETICBOT needs to be updated to clarify how these kinds of bots should be handled in the future. —ScottyWong— 08:33, 7 February 2023 (UTC)
First, a bit of history behind this situation:
MalnadachBot (talk · contribs) is a bot account that is owned by User:ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ. This is an AWB bot that primarily works on fixing lint errors, which are common technical issues in wikitext markup. A list of lint errors detected on Wikipedia is available at Special:LintErrors. The bot has had a total of 13 tasks approved by the Bot Approvals Group. The first 11 tasks were fairly standard error-fixing tasks, each of which had a scope limited to fixing a specific type of error. Task 12 was requested to fix another specific error associated with deprecated <tt>...</tt> HTML tags. However, the BAG member who approved the task (in this case, User:Primefac) decided to expand the scope of the task to fixing all linter errors across Wikipedia, and speedily approved the task. Since that task approval, MalnadachBot has made millions of edits, becoming one of the top accounts by edit count in Wikipedia history.
One of the most common edits that the bot makes is to replace <font color="red">...</font> tags with <span style="color:red;">...</span> tags. This is because <font> tags were technically deprecated when HTML5 was introduced, so current best practices for writing new HTML are to use span tags instead of font tags for customizing the font and style of text. If you inspect the bot's contributions, you'll notice that the vast majority of its edits are in the Wikipedia, Wikipedia talk, User talk, and Article talk namespaces. This is because using custom font styles is relatively uncommon in WP articles themselves. However, they are very common in many users' signatures (including my own). So, MalnadachBot is essentially going through nearly every discussion that has ever taken place in the 20+ year history of Wikipedia, and updating old user signatures to use proper HTML. Many of these discussion pages have not been edited in 10+ years, and many of them receive nearly zero annual pageviews. It has made millions of such edits, and there are many millions more in the queue to be done. Over the past 12 months that this task has been running, many users have complained on the bot owner's talk page and Wikipedia:Bots/Noticeboard that the bot is clogging up watchlists and revision histories with millions of trivial, purely cosmetic edits, but there has been no wider community discussion to come to a consensus on how to move forward. If you used to use <font> tags in your signature, you've no doubt had your watchlist bombed by MalnadachBot in the last year.
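For illustration, the kind of substitution being described looks like this (red is simply the colour used in the example above; the placeholder text is hypothetical):

```wikitext
<!-- before: obsolete HTML4-era markup as found in many old signatures -->
<font color="red">Example signature</font>
<!-- after: the HTML5-conformant equivalent that the bot substitutes -->
<span style="color:red;">Example signature</span>
```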
One thing to note is that these kinds of edits do not result in a change to the way that these pages are rendered to the user. This is because, despite <font> tags being deprecated in HTML5, all major web browsers still understand and support the older deprecated tags, and no major browser has announced an intention to stop supporting these tags in the foreseeable future (which is why these specific errors are categorized as the lowest priority on Special:LintErrors). Therefore, these edits are purely cosmetic changes to the wikitext of many pages that do not change the visual rendering of the page. Typically, these kinds of edits are not allowed to be done on their own by bots (per WP:COSMETICBOT), and can only be done if they are combined with other substantive edits that do cause a change to the visual rendering of the page. However, there is one exception noted within WP:COSMETICBOT for fixing "egregiously invalid HTML such as unclosed tags, even if it does not affect browsers' display or is fixed before output by RemexHtml (e.g. changing <sup>...</sub> to <sup>...</sup>)". It has been asserted that MalnadachBot's task 12 falls under this exception, but the wording of the policy leaves much room for different interpretations.
So, the primary question I'd ask for your input on in this RfC is: does <font color="red">...</font> constitute "egregiously invalid HTML" that should be proactively fixed by a bot, or should it be considered a purely cosmetic change to the wikitext that should not be performed by a bot?
The secondary question in this RfC is whether the last bullet point in WP:COSMETICBOT should be expanded to define more specifically which types of errors are considered "egregious" and which are not. A proposal for an updated wording can be found below.
—ScottyWong— 08:33, 7 February 2023 (UTC)
Primary question: are deprecated HTML tags considered egregiously invalid?
Please indicate below if you believe that automated fixing of deprecated HTML tags should be considered "egregiously invalid HTML" and therefore be permissible per the current wording of WP:COSMETICBOT, even if those deprecated HTML tags are still supported by all major browsers. (Support will be interpreted as support for the bot task to continue, Oppose will be interpreted as agreement that the bot task is in violation of policy and should be halted.)
- Oppose as RfC author. I don't believe that fixing deprecated but supported HTML tags should be considered "egregiously invalid HTML", considering that every major browser still supports them, and will continue to support them for the foreseeable future. Fixing user signatures on ancient discussion pages is a purely cosmetic change to wikitext that does not alter the visual rendering of the page. This bot task is only generating millions of trivial edits that clog up users' watchlists and revision histories. While it's possible to ignore these bot edits, the bigger question to be discussed here is whether or not those edits should be happening at all. —ScottyWong— 08:33, 7 February 2023 (UTC)
- Oppose In my opinion bots should be discouraged from ever trying to systematically modify historical talk page comments, unless there is some entirely forced technical reason to do so and broad community support first. Rewriting 10+ year old talk page comments to change 'font' elements to 'span' elements with an element-specific font style serves no practical purpose. Doing it tens of millions of times is an absurdity. If the linter is complaining about this, the linter should be modified to stop complaining (e.g. this linter rule could be disabled in talk namespaces). After back and forth discussion (recently and apparently also several times over the past few months), none of the bot operators or approvers have offered any kind of compelling reason to run this bot task. –jacobolus (t) 10:01, 7 February 2023 (UTC)
- Invalid question framing. I object to the framing of the question that "oppose" means that the bot task is in violation of the policy. I support the bot task, because it is making edits to assist with
the "administration of the encyclopedia", such as the maintenance of hidden categories used to track maintenance backlogs
, a quote from COSMETICBOT. Linter errors are tracked on a series of maintenance pages created by the MediaWiki software, and they are listed in the "Page information" for each affected page, in a section called "Lint errors". This is the same reasoning that allows bots to remove unsupported parameters from infoboxes, which also results in zero change to the rendered page, or to adjust categories in articles after a category is renamed. The bot is performing a wikitext cleanup task. – Jonesey95 (talk) 14:45, 7 February 2023 (UTC)- How does changing <font> to <span> help with the maintenance of hidden categories or tracking maintenance backlogs? —ScottyWong— 16:01, 7 February 2023 (UTC)
- Please notice the words "such as". Fixing obsolete tags and other Linter errors on a page removes the MediaWiki-created "Lint errors" section from that page's "Page information", and it removes the page from administrative lists such as this MediaWiki-generated list of over three million errors. – Jonesey95 (talk) 19:13, 7 February 2023 (UTC)
- Wouldn’t it be relatively trivial to remove this linter rule (either altogether or from talk namespace)? Then you would eliminate all of those pages from the administrative lists at one fell swoop without needing to modify any of the pages. –jacobolus (t) 00:08, 8 February 2023 (UTC)
- Not really. Sweeping a problem under a rug, or burying your head in the sand does not stop the problem from being a problem. And yes, lint errors like this are a problem.
- While you are technically correct that at this moment the major browsers continue to support it, because the tag was deprecated 25 years ago those same browsers would be well within their rights to remove support for it at any time, without advance notice. This bot is performing preventative maintenance to avoid the need for it in future, when browsers do finally remove support for the tag from their rendering engines. Sideswipe9th (talk) 01:41, 8 February 2023 (UTC)
- Oppose per mw:Help:Lint errors/obsolete-tag and jacobolus. Going around changing font tags in old talk page comments millions of times is absurd. Calling old HTML "egregiously invalid HTML" demonstrates a basic lack of familiarity with HTML, and with how web browsers work. This is one of the most useless tasks we do on Wikipedia, even more so than changing hyphens to dashes. The bot task is wholly unnecessary and therefore should stop. Levivich (talk) 14:53, 7 February 2023 (UTC)
- Unclear question. It is unclear whether, when an editor opposes, they believe that MalnadachBot's operation is a violation of the bot policy or they are just reaffirming the "deprecated HTML tags is not egregiously invalid HTML" argument. If an editor bases their opinion entirely on the latter, then it doesn't necessarily mean that the bot is explicitly not allowed under bot policy. Those in support of the bot's operation wouldn't necessarily agree with "deprecated HTML tags are egregiously invalid HTML", so this is a bit of a strawman. 0xDeadbeef→∞ (talk to me) 15:06, 7 February 2023 (UTC)
- I agree the presentation of this RfC is confusing, but
Oppose will be interpreted as agreement that the bot task is in violation of policy and should be halted
is clear. Levivich (talk) 15:15, 7 February 2023 (UTC)- Then the actual RfC question should be reworded. Whether fixing deprecated HTML tags constitutes fixing egregiously invalid HTML is one question, and whether MalnadachBot's task should continue is another. 0xDeadbeef→∞ (talk to me) 15:18, 7 February 2023 (UTC)
- If it is decided that making automated edits that only fix deprecated HTML tags is a violation of COSMETICBOT, then the bot would naturally need to stop making those types of edits, and focus only on edits that make more substantive changes. —ScottyWong— 16:21, 7 February 2023 (UTC)
- I don't think that logic is correct. COSMETICBOT specifically says
Consensus can, as always, create exceptions for particular cosmetic edits.
Bots are often approved to make cosmetic changes to pages. – Jonesey95 (talk) 17:07, 7 February 2023 (UTC)
- Point of fact: The <font>...</font> element is not deprecated in HTML5; it is obsolete, and has been since October 2014 (more than eight years ago). It was deprecated as long ago as December 1997, when HTML 4.0 was released. So, in 25 years, people have had plenty of opportunity to cease using outdated constructs. --Redrose64 🌹 (talk) 15:59, 7 February 2023 (UTC)- I don't think anyone is arguing in favor of using font tags in new content that is being generated today. However, as the HTML spec is changed over time, there is generally no expectation that the entirety of the internet is going to go back to old content and update it to be compliant. That is why browsers continue to support both deprecated and obsolete tags. —ScottyWong— 16:19, 7 February 2023 (UTC)
- If you follow the second link that I provided, it shows the difference between deprecated and obsolete. Most importantly,
An obsolete element or attribute is one for which there is no guarantee of support by a user agent.
Whilst browsers may continue to support obsolete tags, they are not obliged to, and it is the browser vendor's decision whether to continue or not. Consider: how many browsers still support the <NEXTID>, <HP1> or <KEY> tags? At some point they may decide that continued support for <font>...</font> is not worth the maintenance overhead, and will remove the code concerned, much like the MediaWiki devs occasionally retire some feature or even an entire skin. In short: don't rely on it. --Redrose64 🌹 (talk) 17:11, 7 February 2023 (UTC) - @Scottywong: Point of order. This is a new comment, added today. Should I be warned or sanctioned for making this edit? Should the bot fix it? This is not an obscure archive that nobody's looked at in years. – wbm1058 (talk) 18:30, 7 February 2023 (UTC)
- Interestingly enough, your signature looks great despite the font tags. No one is suggesting any changes to policy that would prohibit users from using font tags; this is only about bots making millions of automated edits to make cosmetic changes to (mostly) very old discussion pages. The point is, if major browsers are planning to remove support for font tags, they are virtually guaranteed to make a public announcement to that effect well in advance of actually making that change. Continuing to support font tags requires basically no effort and no significant browser resources, and font tags are still ubiquitous on the internet (particularly on older sites), so there is no motivation for browsers to drop this support anytime soon. And even if they did, the worst case scenario fallout here on WP is that old signatures wouldn't display in the intended font. —ScottyWong— 18:39, 7 February 2023 (UTC)
- One editor's failure to imagine a valid worst-case scenario should not dictate whether we replace obsolete HTML. An obviously worse scenario than the one described immediately above is something like white text on a black background; if the font tag used for the text stops working, we would end up with invisible black-on-black text. – Jonesey95 (talk) 19:20, 7 February 2023 (UTC)
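To make that scenario concrete, hypothetical signature markup of the kind being described (background set with a span, text colour set with a font tag) might look like this:

```wikitext
<!-- if a browser ever stopped honouring <font>, the text would fall back to the
     default colour (usually black) on the black span background and become unreadable -->
<span style="background:black;"><font color="white">Example User</font></span>
```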
- I see that someone has used obsolete HTML tags 1 time on this page! A font tag! Facepalm
- Damn, look whose signature used to close their
font color
with aspan
! Too bad that bot only made a half-fix so now another bot has to make a second pass at that archive. Scottywong re:No one is suggesting any changes to policy that would prohibit users from using font tags
, did anyone ask you to change the HTML in your signature or did you just decide to fix it on your own? Are you saying you could go back to using the same HTML you used back then? wbm1058 (talk) 20:17, 7 February 2023 (UTC) - Oh, I see, you won't let another bot fix it, and apparently don't want to fix it yourself either. Only an administrator can fix those 21 font tags, if they dare. wbm1058 (talk) 20:25, 7 February 2023 (UTC)
One editor's failure to imagine a valid worst-case scenario
If you give me some time I can imagine thousands of hypothetical bad scenarios for you which are both more likely and more damaging than this one. Note: (a) this particular “worst case scenario” would cause unbelievably trivial harms (the font colors seen by readers in some signatures on historical wikipedia talk pages would revert to default black/blue instead of the author-preferred special decoration), and (b) the chance of this supposedly bad outcome are exceedingly rare because the cost of maintaining this feature by browsers is trivial and any change would be very disruptive, and (c) if this supposedly bad outcome occurs, we should get significant warning to decide at that point what to do about it, including plenty of time to fix old markup if we care. –jacobolus (t) 00:15, 8 February 2023 (UTC)- P.S. @Jonesey95 Do you have a single concrete example from any page in Wikipedia where white text expressed via the 'font' element shows up on a black background expressed using some non-obsolete method? (The old school approach was to use the now-obsolete 'bgcolor' attribute, which we can safely assume would be removed at the same time as the 'font' element from some hypothetical future browser.) We are now talking about hypothetical markup being hypothetically obsoleted by hypothetical future browser changes. It’s too much of a pure thought experiment for me to keep up. –jacobolus (t) 01:34, 8 February 2023 (UTC)
- I'm beginning to lose my assumption of good faith. Perhaps your insource searching skills are not up to par. This signature (Thekillerpenguin, white text created with a font tag on a black background created with a span tag) took all of thirty seconds to find. – Jonesey95 (talk) 06:29, 8 February 2023 (UTC)
- Cool. If you want to get the bot to identify all comments with white text on black background and go fix those, that sounds fine. There can’t be tens of millions of those pages. (Even better, consider just switching them to black on white for basic accessibility.) –jacobolus (t) 07:53, 8 February 2023 (UTC)
your insource searching skills are not up to par
– P.S. this is unbelievably patronizing. So much for good faith. –jacobolus (t) 07:56, 8 February 2023 (UTC)
- As I mentioned elsewhere, the following is just a side note, since the argument regarding browsers continuing to support the font element doesn't depend on what markup exists on Wikipedia pages. All the same, there are users whose signatures use appropriate markup with a style attribute that specifies CSS property values for background colour and text colour, such as Blaze Wolf's. isaacl (talk) 01:48, 8 February 2023 (UTC)
- I don't think anyone is arguing in favor of using font tags in new content that is being generated today. However, as the HTML spec is changed over time, there is generally no expectation that the entirety of the internet is going to go back to old content and update it to be compliant. That is why browsers continue to support both deprecated and obsolete tags. —ScottyWong— 16:19, 7 February 2023 (UTC)
- Support & Oppose (I don't really understand which is which): For sure, an HTML tag is egregiously invalid the moment a mainstream browser drops support for it; however, I don't think that it is egregiously invalid when all mainstream browsers do support the functionality of the tag. Terasail[✉️] 16:40, 7 February 2023 (UTC)
- Support the bot's interpretation of policy and also Object to the question wording. If you don't like the edits, then you gotta change the policy, because IMO the policy as currently written pretty clearly justifies what the bot is doing. Loki (talk) 17:33, 7 February 2023 (UTC)
- Support bot changes per Redrose64's timeline above. 25 years of "you shouldn't use this" and 8+ years of "no, really, don't use this" is going to take a while to clean up, and it's better to do it now, while it's still mostly-supported, than after browsers start dropping support. (You know, it would be nice if we could have multiple levels of bot. I probably don't want to see AnomieBOT or MalnadachBot, but Citation bot might need a check to make sure it parsed the data correctly.) --130.111.39.47 (talk) 18:00, 7 February 2023 (UTC)
- Procedural close. Lots of inaccuracies in the RfC statement; suggest workshopping this in Idea lab. If you want to ask for a halt to edits that solely replace font tags, make that clear, because there are other bots that solely replace font tags as well, such as the recently approved SheepLinterBot and the pending request for Legobot 41, which has the exact same scope as MalnadachBot 12. It should also be made clear that bundling font tag replacement while fixing other Lint errors is fine. MalnadachBot 12 covers a broad variety of Lint errors that do not involve font tags, though admittedly font tags constitute the largest and most visible group. About the font tag itself, every single page that has a font tag is flagged by MediaWiki in its "Page information" and other maintenance pages. Replacement is supported by MediaWiki developers who decide the future path of MediaWiki. This kind of replacement has been going on for years, both by bot and manually, reducing the Lint error count from over 23 million to 7.7 million currently. While replacing font tags in linter cat 2 does not change the way a page is rendered on desktop currently, that doesn't mean it will continue to be so in future, since the tag is already marked obsolete and browsers can drop support at any time without notice. Mobile Wikipedia, for instance, already does not display
<tt>...</tt>
which is classed the same as the font tag. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 18:34, 7 February 2023 (UTC) - I Support MalnadachBot's WP:BAG approved actions. I Support the Procedural close of this argument due to unclear actions for unsupported end goals. I Object to the counterintuitive wording of which side 'support' and 'oppose' mean in this RFC. I Object to the "don't edit obsolete tags because they aren't currently visually broken" argument presented in this RFC.
Are you arguing that (only) MalnadachBot should be stopped? Are you arguing that all bot actions to correct minor errors that affect pages should stop? What is it that you are actually suggesting? That we all wait until the shit hits the fan, when support for all these obsolete tags gets yanked and the issues become visible, and THEN fix the millions of issues?
Today there are 7.7 million problematic lint errors on Wikipedia: missing end tags, stripped tags, etc., with varying levels of impact and breakage. 4.9 million of these 7.7 million are obsolete tags, and thousands of these obsolete font tags are already currently broken with tidy font bug errors (old code being handled and displayed differently or flat out ignored across different browsers). And as Malnadach Konkno stated, the obsolete
<tt>...</tt>
tags have already been dropped by mobile browsers. While I don't know at the moment how many of the 4.9mil are TT, the number is not small. When (not IF, WHEN) browsers decide to stop supporting more of these various obsolete tags, Wikipedia talk pages will all be up shit's creek with problems from the dropped support. The MalnadachBot, and multiple other sanctioned bots (all fixing the specified errors identified as being problematic), are methodically and carefully built to address these issues effectively and efficiently with thought and care given to minimize impact to users.
The MalnadachBot took a number of months off recently to address an issue in which, on some pages, the bot was making multiple smaller edits rather than clearing all issues in one go when it wasn't able to fix everything on the page at once. Those issues with the bot were addressed and the bot was brought back online last week. The current version will skip any page where it cannot clear all the errors, leaving those pages for human editors to gnome through and clear the remaining issues.
If you hate seeing bot edits appearing in your recent edits feed, by all means, go to your settings and activate the "Hide bot edits from the watchlist" option in the "Changes shown" section to remove bot edits from your view. But this "don't edit obsolete tags because they aren't visually broken" argument has been argued before, and the need to address this impending problem before it hits the fan far outweighs the currently minor inconvenience to users. Zinnober9 (talk) 22:42, 7 February 2023 (UTC)
- Small clarification, tt is still fully supported by major mobile browsers (AFAIK), it's just not styled in the mobile Wikipedia skin (IIRC this is because it doesn't restore the styles after its reset). Legoktm (talk) 08:10, 8 February 2023 (UTC)
- Support per Zinnober9's comment above — TheresNoTime (talk • they/them) 22:48, 7 February 2023 (UTC)
- Support OK with keeping Wikipedia up to standard. Please co-ordinate with ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ on future RfCs. -- GreenC 22:59, 7 February 2023 (UTC)
- Support per Zinnober9's statement above. Once the lint error backlog has been cleared, editors won't be bothered by "clog[ged] up ... watchlists and revision histories" anymore. —Bruce1eetalk 23:26, 7 February 2023 (UTC)
- Support The font tag is deprecated, and so must still work, but we probably should remove it. I do understand people complaining about useless Bot edits cluttering their watch lists - precisely the situation that WP:COSMETICBOT was written to address. I realise I can suppress them, but that is a drastic step. I would like to think that getting rid of the backlog would stanch the flow of lint errors, but regard that as naive... And I deplore the use of CSS, which is itself a major headache for Wikipedia. Hawkeye7 (discuss) 00:46, 8 February 2023 (UTC)
- Support Nothing wrong in maintaining coding standards. - Mnair69 (talk) 01:23, 8 February 2023 (UTC)
- Support There are teams of engineers working to ensure that I can correctly read all of the Dumb Markup people try to use on the Internet. People who rely on syntax highlighters or screen readers? Not so much. Fixing this markup that is by definition broken helps make Wikipedia more accessible. Very few people will look at some ancient AfD. But we keep those archives around for a reason. If we wanted to make it a privilege to view them, we would delete them. They should be viewable by all, not just people with a modern computer and browser and full function of their eyes. In sum, just because it looks fine to you does not mean it looks fine to everyone. You have the option to hide the bot from your watchlist. You may not want to, but you have the option. There is no preferences menu in real life to disable blindness. HouseBlastertalk 01:51, 8 February 2023 (UTC)
- Do you have evidence that syntax highlighters or screen readers have a difficulty parsing the 'font' element? This seems like a hypothetical concern. –jacobolus (t) 02:28, 8 February 2023 (UTC)
- Support per Zinnober9's and Hawkeye7's statements above. --Afernand74 (talk) 07:24, 8 February 2023 (UTC)
- Procedural support Unfortunately this RfC has been structured as a binary question (support or oppose) when it has the opportunity for a lot of nuance and middle ground. I generally think cleaning up these tags is a good idea, and want to see the work continue (hence my support). I am not a fan of how it has been done in the past, I think the fact that a bot racked up 10 million edits by often editing the same pages over and over again shows a serious failure in bot oversight. This is largely why I've proposed a Legobot task for it, because I think it can be done in a better way that disrupts editors as minimally as possible. I think the focus on browsers is misplaced, I fully agree that browsers will never drop support for font and whatever, but there are plenty of tools, scripts, etc. that process our wikitext and HTML and will benefit from this cleanup. I've also written an essay for my overall vision of Linter. Legoktm (talk) 07:50, 8 February 2023 (UTC)
- @Legoktm 6.8 million of MalnadachBot's 10.7 million is from Task 13 to blank inactive IP talkpages and further 1.1 million is from Tasks 1-11 fixing mainly high and medium priority Lint errors. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 08:44, 8 February 2023 (UTC)
- Oppose outside of the article/template space, especially in signatures, which seems to be most of what it's doing. The use of font instead of span tags is sometimes the only thing keeping a signature at a decent character count to avoid source clutter. There's no good reason for a browser to stop supporting font, and if one did, a polyfill could easily be built into the Mediawiki software, rather than having a bot make thousands of edits. small jars
tc
08:17, 8 February 2023 (UTC) - Support per Zinnober9's comments. As noted above several times, there are ways for editors wishing to hide this specific bot's edits OR to hide only lint errors. This RfC seems more of a "if I don't like it, then it should be stopped" and not anything of substance. This is also a non-issue in my opinion as the actions of this and other bots are going to either stop or be dramatically less in the not so distance future. --Gonnym (talk) 08:53, 8 February 2023 (UTC)
- Also, oppose closing this RfC and starting it again. This isn't the first discussion the editor was in concerning this bot. Trying to find the correct language that will get them the result they want should be discouraged. Gonnym (talk) 08:55, 8 February 2023 (UTC)
- Support per Zinnober9 more or less. This is not egregiously invalid HTML, but that's a red herring. The task is fixing Linter errors and bringing pages in line with modern standards, and that's a valuable task. Zinnober9 expresses my full stance on this better than I could. Headbomb {t · c · p · b} 12:01, 8 February 2023 (UTC)
- Oppose bot edits trying to "fix" things with no constructive effect, and there's a problem over at the BOT page if such bots are getting rubberstamped. Alsee (talk) 13:53, 8 February 2023 (UTC)
- Support everything Zinnober9 said. Cleaning up obsolete code used on Wikipedia constitutes essential maintenance. Not everybody uses a major browser (people already brought up the example of accessibility-oriented web browsers and software, for which there's no guarantee they'll support these junk tags). And to oppose what these bots do is to place an undue burden on MediaWiki maintainers for zero benefit. Why are we singling out these edits, when the vast majority of Wikipedia edits nowadays are Wikignoming of one form or another? Your watchlist page isn't supposed to be clean, but our articles must be. DFlhb (talk) 14:10, 8 February 2023 (UTC)
- Support largely per Zinnober9. I doubt we'll see loss of support from a major browser any time remotely soon, but the tags are obsolete, and as Legoktm has pointed out there is benefit now from bringing our codebase into compliance with modern standards. There is a measure of irony that this adverse reaction occurred precisely because the bot was designed conservatively so no one would feel the need to review its edits. Ultimately could have been better thought out but I have no concerns over the present revised method of operation. 74.73.224.126 (talk) 14:54, 8 February 2023 (UTC)
- Oppose. Even in the event that some future browser does not support the FONT tag, the impact won't be pages that crash or horrible misrendering; the opening and closing tags will just be ignored, and in the vast majority of cases the only change will be the loss of some sort of emphasis effect. (And with my grumpy old man hat on: if our not eliminating old tags encourages browsers to keep supporting them, all the better. We shouldn't want the vast number of legacy pages on the Web to become inaccessible to most viewers.... and the ongoing drive to make HTML something that is not comprehensible by humans and creatable by hand is a sad one.) --Nat Gertler (talk) 19:50, 8 February 2023 (UTC)
- We do not know that
the opening and closing tags will just be ignored
. It could make the text invisible by making it the same as background colour. Page crashing is the worst case scenario. What happens in future is just speculation, what we do know is that it won't be the same as it is now, which is why they are being replaced. When Wikipedia switched from Tidy to Remex in July 2018, it broke a lot of pages which you can read here. People had said at the time that COSMETICBOT does not apply for pages that look fine now but would break in future. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 02:32, 9 February 2023 (UTC)- It is implausible that browsers will ever remove support for 'font' tags at all. But if they do, it is vanishingly unlikely that the result will be crashing pages. The speculation on which these bot tasks are being based is completely detached from reality. –jacobolus (t) 02:46, 9 February 2023 (UTC)
- I said it is the worst case scenario. These bot tasks aren't based on speculation; they are based on the future path indicated by MediaWiki developers. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 03:33, 9 February 2023 (UTC)
future path indicated by MediaWiki developers
– I have asked several times for a link to this, and none has been forthcoming. Where is this “future path”, who made it up on what authority, what were their criteria, and where was the discussion? –jacobolus (t) 04:27, 9 February 2023 (UTC)- I am not aware of any MediaWiki developer ever stating that support for these tags will be removed (rather I've tried to say the opposite). Legoktm (talk) 04:35, 9 February 2023 (UTC))
- Actually, we do know what will happen with a browser that does not support recognition of the FONT tag, because browsers facing unrecognized tags is not something new. Browsers have been faced with unrecognized tags since the Web began (thanks to those of us who were handcoding and made the occasional typo). What they do with a tag that they don't recognize is to ignore the tag, simple as that. So the alteration caused by the opening FONT tag and the return to prior status caused by the closing tag will go undone. It is bizarre to think that programmers are going to have text between unrecognized tags undergo some special form of destruction. --Nat Gertler (talk) 23:17, 11 February 2023 (UTC)
What they do with a tag that they don't recognize is to ignore the tag, simple as that.
They don't actually. Take for example <blink>...</blink>, which is an HTML tag no longer supported by major web browsers, or <acronym>...</acronym>, which is obsolete in HTML5 and supported by major web browsers but not by MediaWiki. When used in Wikipedia, the tags themselves render as plaintext. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 08:29, 13 February 2023 (UTC)- Note wikitext only supports a subset of HTML5 elements; thus other tags will be transformed by the wikitext parser so they are displayed as-is. To facilitate incremental deployment of new elements, as per the HTML5 spec, browsers "must treat elements and attributes that they do not understand as semantically neutral; leaving them in the DOM (for DOM processors), and styling them according to CSS (for CSS processors), but not inferring any meaning from them." Thus the contents of unknown elements are rendered, without any special default styling. isaacl (talk) 16:17, 13 February 2023 (UTC)
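As a rough illustration of the two behaviours described above (the tag names below are hypothetical, chosen only to show the contrast):

<!-- a browser that does not recognise an element keeps it in the DOM and renders its contents,
     with no default styling or meaning attached -->
<p>This text is <unknown-element>still displayed</unknown-element>, just without special styling.</p>
<!-- the wikitext parser, by contrast, escapes tags outside its supported subset, so markup such as
     <acronym>...</acronym> appears literally as text on the rendered page -->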
- (edit conflict) Support. And per Zinnober9. Shutting down a linter bot when it's updating deprecated/obsolete code like this? Seriously? The obsolete tags are not only deprecated/obsolete, but they also create linter errors that we treat as a backlog. Halting this bot, while it's fixing all this obsolete code, would mean it takes longer to get all of it done. If support for this obsolete code is ever dropped and we have shut down this bot, we may have more instances of these tags appearing as plain text. Sure, if you want to hide MalnadachBot from your watchlist you can go to WP:HIDEBOTS. Simple. Why complain about this bot when it is limited to one edit per page for fixing font tags? Sheep (talk • he/him) 02:35, 9 February 2023 (UTC)
- The reasons I complain about a bot changing font tags to span tags on old talk pages are: it probably will never need to be fixed; if it does need to be fixed, we'll get plenty of advance warning; we don't have a rule against creating new 'errors' (people can still use the font tag, there have been new font tags added in this very discussion), so we're making new 'errors' for the bot to run around and 'fix'; and, I'm no expert in these things, but I bet there are ways to 'fix' these 'errors' that don't require a bot to make millions of edits. Levivich (talk) 03:26, 9 February 2023 (UTC)
if it does need to be fixed, we'll get plenty of advance warning
we already have that warning by them being marked as Lint errors. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 03:35, 9 February 2023 (UTC)- Being marked as Lint errors is not a warning that browsers will drop support for font tags. mw:Help:Lint errors/obsolete-tag says
Since it is unclear to us at this time how far we want to push this goal of HTML5 compliance, this category is marked low priority ... If, in the future, there is greater clarity about pursuing this more aggressively, we will reflect that by updating the severity of this linter issue appropriately ... It is likely that browser vendors will give us significant notice before making any breaking changes given how prevalent these deprecated elements are used across the internet.
Correct me if I'm wrong, but I don't think any browser vendors have even indicated plans to drop support for the font tag, and it's been like 15 years since HTML 5's initial release. Levivich (talk) 03:54, 9 February 2023 (UTC)
- I'm not quite sure I follow the logic here. Part of your argument seems to be predicated on the idea that the bot's task will never be complete. But this is also true of many other bots. Should we disallow archive bots because people can make new threads that will later need to be archived? Perhaps you could clarify? 74.73.224.126 (talk) 03:59, 9 February 2023 (UTC)
- That's a bit of apples and oranges. Some bot tasks are perennial, like archiving talk page threads, but we want new talk page threads to be created. If we don't want font tags, we should first have a rule against using font tags. We could have the software automatically turn any font tags into span tags at the time the edit is published. We could have an edit filter that prevents edits with font tags from being published. We could just add it to WP:SIG policy at the very least. Then, once we've stopped making new 'errors' (if that's how we're treating them), it makes sense to fix the old ones (which, still, I don't think needs a bot; the software could replace the font tag with a span tag when the wikitext is turned into HTML, like it does with wikimarkup, and there's probably a global search and replace that could fix the wikitext). If browsers were ever going to stop rendering font tags, they'd announce that in advance, and that's the time to start looking at a patch to MediaWiki, or a bot, or a policy, or whatever, to handle it. But so long as we allow the use of font tags, we should not bother with a bot that cleans them up; it's just making edits that don't need to be made, and millions of them. Levivich (talk) 04:06, 9 February 2023 (UTC)
- Some low-hanging fruit might be having mw:Extension:Linter check attempts to change one's signature in Special:Preferences, and display a notice or even prevent saving when a signature has a linter error. Might be worth a Phab ticket. –Novem Linguae (talk) 04:16, 9 February 2023 (UTC)
- That was already discussed and implemented (partially), see New requirements for user signatures. Legoktm (talk) 04:37, 9 February 2023 (UTC)
- In which it was decided not to ban font tags in signatures. Levivich (talk) 04:42, 9 February 2023 (UTC)
- The WMF decided in 2020 to defer disallowing font tags in signatures and to leave it up to individual wikis. We can still decide locally to disallow font and other obsolete tags in signatures. – Jonesey95 (talk) 06:36, 9 February 2023 (UTC)
- I'm still not sure I follow. There are many things that while not prohibited are still undesirable and are fixed by bots, the canonical example being the transclusion of subst-only templates; even orphaning references is nowhere explicitly prohibited (and it is almost always done accidentally), doesn't mean they shouldn't be fixed by a bot. The fact that there are periodic additions to some maintenance commitment should not in general cause that commitment to be neglected, quite the opposite actually. 74.73.224.126 (talk) 04:50, 9 February 2023 (UTC)
- What we're talking about is a WP:COSMETIC edit, whereas all of your examples (archiving threads, substing templates, fixing orphaned references) are non-cosmetic edits. The words "maintenance commitment" do not apply, because changing font tags to span tags is neither maintenance nor a commitment. Levivich (talk) 04:59, 9 February 2023 (UTC)
- What I'm gathering then is that your objection is not at all predicated on the the fact that new linter errors are periodically added as was my earlier understanding, but solely on your belief that the cost of fixing these errors outweighs the benefit in aggregate, is that correct? 74.73.224.126 (talk) 05:07, 9 February 2023 (UTC)
- Support. I think someone said that they didn't want to hide bot edits from their watchlist because there are some bot edits that they want to see on their watchlists. Perhaps the solution is to ask the bot operators to stop marking as "bot edit" their bots' edits that should be seen on watchlists. Some of my RMCD bot edits are not marked as "bot edits" for this reason. See the relevant bots noticeboard discussion. – wbm1058 (talk) 12:55, 10 February 2023 (UTC)
- Also note that you can hide specific bots and not all bots through the guidance found at WP:HIDEBOTS. Headbomb {t · c · p · b} 13:18, 10 February 2023 (UTC)
- Comment Neutral on this particular question, but I feel like if these tasks are endorsed and continue, maybe we should look at unbanning Magioladitis, Betacommand, and Rich Farmborough. Surely we can figure out how to hide their edits too. Folly Mox (talk) 01:16, 11 February 2023 (UTC)
- Magioladitis isn't banned. Betacommand was banned for far more than making low-value automated edits. Even Rich Farmborough, for whom your point comes the closest to applying, was indeffed not for that per se but for
stat[ing] he will not be complying with editing restrictions
, a social issue that goes well beyond what is being discussed here. * Pppery * it has begun... 02:19, 11 February 2023 (UTC)
- Oppose I'm not seeing any convincing argument for what benefit these edits grant, and why the claimed harm of
Wikipedia talk pages will all be up shit's creek with problems from the dropped support.
can't be addressed when it happens, not some unknown time in the future. The harm this is causing, of course, is happening now both in the existence of this discussion, and the fact that people have repeatedly felt the impetus to start it or discussions like it. This discussion is turning into yet another Wikipedia:parable of the wildflowers. We are acting out the second sentence of the second-to-last paragraph. * Pppery * it has begun... 02:19, 11 February 2023 (UTC)- Even more than this wildflower parable, I am reminded of the parable in Arnold Lobel’s children’s book Grasshopper on the Road, in which a housefly, having finished sweeping her own house and washing her windows, has decided to sweep the (dirt) road because it is too dusty:
- Lobel, Arnold (1978). "The Sweeper". Grasshopper on the Road. pp. 25–33.
'No, no, no,' said the housefly. 'I will never rest. I am having a wonderful time. I will sweep until the whole world is clean, clean, clean!'
- –jacobolus (t) 02:25, 12 February 2023 (UTC)
- Support per excellent reasoning by Zinnober9. I can see only positives and no negatives from this task for readers. Per HouseBlaster, the focus on "major web browsers" is wrong headed in my opinion. 2409:408C:8E8B:F446:0:0:43C8:1313 (talk) 08:38, 16 February 2023 (UTC)
- Honestly, I think I sort of agree with comments like jacobolus's. I don't personally see the value in replacing lots of font tags on talk pages, especially so while major browsers continue to support it. And if some dropped support, it might be better to see something built into MediaWiki to address this (though I appreciate a software mapping of HTML to different HTML would be quite awkward and non-transparent, so I don't hold this view very strongly). In any case, I don't believe the loss of font and/or colour in signatures is a big deal. Many non-updated pages no longer look like how they did when the editors of the time wrote their comments, for example due to template merges/changes, orphanings, etc. Which is fine. ProcrastinatingReader (talk) 12:07, 16 February 2023 (UTC)
- I lean support on this, but not strongly. The conversation seems to indicate that not all lint issues are going to cause trouble, but some might, and we may as well fix the others at the same time. Regarding watchlist spam, I find that the focus on talk pages and archives alleviates this compared to the concern I would have if the notifications were edits to articles. CMD (talk) 13:42, 16 February 2023 (UTC)
- Oppose. Font tags are not "egregiously invalid" and it's absurd to suggest otherwise. Changing them is meaningless make-work. As has been noted multiple times, the chance that the font tag will cease to be supported is virtually nil due to its extensive history. The actions of these bots in mass-editing pages, tens of millions of times, for no measurable benefit, are hugely disruptive, and even pointless when there are other, smarter ways to deal with legacy tags (cf Levivich's comment above). Stick to fixing actual breakages like mismatched tags. — Scott • talk 01:39, 18 February 2023 (UTC)
- Irrelevant Even if you did manage to get consensus for this interpretation, it would not directly affect the bot approvals.
Consensus can, as always, create exceptions for particular cosmetic edits
and an approved BRFA reflects a consensus. So even if the deprecated tag replacements were to be considered cosmetic, you'd have to see whether it has also been established (here or elsewhere) that consensus has changed regarding the approvals. Asking the wrong question (and poorly phrasing it) then tacking on "votes will be interpreted as such-and-such" is not likely to help your case. Anomie⚔ 14:01, 18 February 2023 (UTC)- Where was the consensus established for replacing all of the font tags with spans? There has been no evidence given for such a consensus, despite annoyed editors asking for it over and over again, and a very obvious lack of consensus judged by the repeated complaints. This whole conversation boils down to “Hey stop, this is annoyingly disruptive!” “We can’t stop. Everyone agreed about this already.” “Who agreed, and when? I never agreed!” “By ‘Everyone’ I mean a small cabal who decided between ourselves. The agreement is a secret; we can’t show you. But it’s settled and can’t be changed now so stop complaining.” (Repeat ad nauseam.) –jacobolus (t) 16:11, 18 February 2023 (UTC)
- Consensus for a bot to run is established on the appropriate subpage of WP:BRFA, for example Wikipedia:Bots/Requests for approval/MalnadachBot 12. When you and others recently tried to challenge that approval at WP:BOTN, you did not attain WP:CCC. So now you're here with this RFC in hopes that a wider discussion will establish WP:CCC, as is your right. We'll see if it goes your way, but IMO the poor framing of the question is not likely to help your case. Anomie⚔ 16:39, 18 February 2023 (UTC)
- #12 was speedily approved. It was open for less than 4 hours and the only person who participated besides the bot op is the BAG member who approved it. Is that really what you want to describe as "consensus"? Levivich (talk) 16:45, 18 February 2023 (UTC)
- It appears to be based on a good track record with #1 through #11, so more than just what's directly on that page. And, whether you like it or not, speedy approvals are currently part of the WP:Bot policy. If the community wants to remove or restrict that, the normal processes should be followed. Anomie⚔ 16:54, 18 February 2023 (UTC)
- 1-11 weren't about changing font tags to span tags; they were all about something a little bit, or a lot, different, and those also didn't have community approval. There is no shame in admitting that this is the first time the community at large has ever been consulted about whether or not a bot should change font tags to span tags. There is no basis for suggesting that doing so already has community consensus. I agree with you, though, that this RFC wasn't drafted well enough to really test the question, unfortunately. There will probably be another one later. Levivich (talk) 16:58, 18 February 2023 (UTC)
- All except #4 were about fixing linter errors, which is what #12 was approved for generally. The font tags are a subset of the linter errors. So far you and others have failed to establish that only some linter errors should be fixed but not others.
There will probably be another one later.
I'm sure there will be, ad nauseam. Anomie⚔ 17:09, 18 February 2023 (UTC)- That's a bit of rhetorical sleight-of-hand. "Linter errors" is a broad category that includes many different things, from serious to not serious. So far you and others have failed to establish that all linter errors should be fixed. mw:Help:Extension:Linter says, emphasis mine:
Not all of them need to be fixed promptly or even ever (depending on your tolerance for lint).
Levivich (talk) 17:24, 18 February 2023 (UTC)
- Support per others. Izno (talk) 23:15, 21 February 2023 (UTC)
- Support. While I can't imagine browsers dropping support for the font element, on reflection, I think sadly it's realistic to consider a future in which getting our content to users requires feeding it through some kind of sandboxed or app environment. No reason to make that process more difficult by leaving in place markup that's been obsolete since Netscape was a viable property, and has perfectly viable alternatives. Choess (talk) 16:43, 25 February 2023 (UTC)
I think sadly it's realistic to consider a future in which getting our content to users requires feeding it through some kind of sandboxed or app environment
could you explain what you mean by that, and why developing such a sandbox or app will be made notably harder by the existence of font tags? For context, the mobile website and app already do transforms on the HTML of a page that are significantly more complicated than replacing <font> with <span>. * Pppery * it has begun... 16:49, 25 February 2023 (UTC)
- Support the bot's actions. Better to fix all the errors now than wait until a browser decides to stop recognizing the invalid tags and suddenly we have to get something-million invalid tags fixed in a handful of months. --SilverTiger12 (talk) 19:46, 25 February 2023 (UTC)
- Support the bot's actions, per pretty much all the above. — SMcCandlish ☏ ¢ 😼 09:31, 26 February 2023 (UTC)
Secondary question: should WP:COSMETICBOT be updated?
Please indicate below if you believe that WP:COSMETICBOT should be updated to be more specific about automated fixing of deprecated HTML tags that are supported by all major browsers. I will propose an updated wording, but please feel free to propose your own if you like. My proposed rewording is: "egregiously invalid HTML such as unclosed tags, even if it does not affect browsers' display or is fixed before output by RemexHtml (e.g. changing <sup>...</sub>
to <sup>...</sup>
). This does not include fixing properly formed HTML tags that have been deprecated, as long as the tags are still supported by all major web browsers."
- Support proposed change to policy, as RfC author. —ScottyWong— 08:33, 7 February 2023 (UTC)
- Side comment HTML is neither SGML nor XML, and markup that does not explicitly close tags is in many circumstances perfectly valid HTML (it would be invalid as XHTML, but Wikipedia is not served as XHTML). One of the basic purposes of the HTML5 effort was to standardize the way browsers parse and handle such markup. There is thus a very clear and well-defined algorithm (backed by extensive conformance tests) which browsers implement to implicitly close tags where necessary. Calling this "egregiously invalid HTML" demonstrates that whoever wrote down this page had a basic lack of familiarity with HTML parsing rules. :-) Of course, it is entirely fine if any particular creator of HTML documents (say, the Wikipedia project) decides to require some stricter set of rules than the specification requires. Many non-browser tools have broken parsing algorithms that do not properly handle unclosed elements, so it can be worthwhile to carefully close every element to help them out. On the other hand, some other HTML authors/tools automatically remove optional closing tags in the pages they serve to save bandwidth. Blog post explaining some context from Remy Sharp. –jacobolus (t) 09:10, 7 February 2023 (UTC)
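For anyone unfamiliar with the parsing rules being referenced, a minimal example: the fragment below omits several closing tags yet is conforming HTML, because the parser closes the elements implicitly.

<ul>
  <li>first item    <!-- the </li> end tag is optional here -->
  <li>second item
</ul>
<p>One paragraph.   <!-- the </p> end tag is optional before another block-level element -->
<p>Another paragraph.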
- +1. I don't think "egregiously invalid" is a technical term. We should just eliminate this bulletpoint altogether. Levivich (talk) 15:01, 7 February 2023 (UTC)
- You've described literally what RemexHtml does. In any case, many gadgets, tools and bots operate on the wikitext, which is a mish-mash of HTML and wikitext. Having reasonably well-formed markup is important because all of those things don't implement all of the HTML algorithms to know what it'll end up like. Legoktm (talk) 16:10, 7 February 2023 (UTC)
- Again, omitting many closing tags is perfectly “well formed” HTML. It is just not what some non-spec-conforming tools with buggy parsers expect. –jacobolus (t) 00:20, 8 February 2023 (UTC)
- It could be reworded to make the idea behind it more clear, but then again it doesn't seem to be causing any issues and we do want to avoid overly messy markup. Policy has always been created to describe practice, and in practice there's widespread tolerance for allowing bots to close unclosed tags. 74.73.224.126 (talk) 15:12, 8 February 2023 (UTC)
- I have no problem with bots closing unclosed tags, if that helps other tooling. (Browsers, human editors, and tools with spec-conforming parsers don’t care.) The part that I am calling out is the technically incorrect and comically exaggerated “egregiously invalid” language.–jacobolus (t) 19:58, 8 February 2023 (UTC)
- I agree it sounds silly. However since the verbiage is controlled by a prior RFC there's some procedural hassle (and nigh inevitable wrangling) needed to change it, and since the current wording hasn't caused any headaches or misunderstandings it probably isn't worth the trouble. I'm content to treat it as an internal term of art; much of our internal WIKISPEAK has meanings that are considerably different from or even directly opposed to normal English usage. But I understand reasonable minds may differ. 74.73.224.126 (talk) 03:46, 9 February 2023 (UTC)
- Oppose. Bots and human editors update pages all the time to modernize their syntax. This syntax is often properly formed but out of date for some reason, sometimes because MediaWiki is removing support for otherwise valid tags. The proposed restriction would unnecessarily limit such useful edits. – Jonesey95 (talk) 14:48, 7 February 2023 (UTC)
- Oppose: Nothing in the RfC statement provides a good reason to update the policy. And there is an argument to be made that an unsupported html tag isn't a properly formed html tag anyway so it is somewhat redundant. Terasail[✉️] 16:32, 7 February 2023 (UTC)
- Oppose. Deprecated HTML is not proper HTML. Also, the focus of this RfC, font tags, is not deprecated HTML; it is obsolete HTML, which by its technical HTML definition is stronger than deprecated. See Redrose's comment in section above. All of the Lint fixing bots are replacing obsolete HTML. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 18:45, 7 February 2023 (UTC)
- Although the HTML standard indeed uses the term "obsolete" where it used to use "deprecated", there is no technical HTML definition. Instead it says that you "should not" use obsolete features. You may infer that "obsolete" is stronger than "deprecated", but it is not defined as such. [18] Hawkeye7 (discuss) 00:59, 8 February 2023 (UTC)
- The HTML5 specification says: "16.2 Non-conforming features Elements in the following list are entirely obsolete, and must not be used by authors: [...] font [...]". So it is no longer part of current "HTML" as a standard. However, the font element works as expected in every single browser from the past ~25 years, and will continue to be supported at a technical level probably forever. It costs very little for browsers to maintain support for this element, and would be disruptive enough to remove that such removal is unlikely to ever occur. I would personally put money on the 'font' element still working in web browsers at the end of this century.
- It seems plenty reasonable to me to hunt down and fix any font tags that occur in main or template namespaces (ideally for semantic markup styled by CSS rather than presentational element-specific styles baked into the markup). What I object to is trying to modify the majority of all talk pages everywhere on this website because old talk signatures used to often use the 'font' element. –jacobolus (t) 01:19, 8 February 2023 (UTC)
- Oppose IMHO it is about embracing the future while preserving our past. --Afernand74 (talk) 07:30, 8 February 2023 (UTC)
- Oppose this level of minutiae is not needed in policy. We have an example of egregiously invalid HTML. MalnadachBot wasn't approved under the egregiously invalid HTML clause, but rather because it does a task which has (had?) perceived value and consensus for (i.e. fixing Linter errors). Headbomb {t · c · p · b} 11:59, 8 February 2023 (UTC)
- Oppose simply put I find the case made for the addition thus far unconvincing. It's not even clear it would solve the problem it purports to solve, or that there was ever even a problem in the first place. The previous method employed by the bot that prompted this inquiry was far from ideal, but that has now been addressed, and given how few objections emerged in comparison to the total number of edits really wasn't that big a deal in the first place. 74.73.224.126 (talk) 15:28, 8 February 2023 (UTC)
- Oppose I see zero benefit from this change in policy, and any potential benefit from fixing the tags, even if small, outweighs that. --Jayron32 12:43, 9 February 2023 (UTC)
- Oppose No need to clutter policy with every peeve that has annoyed someone once. Anomie⚔ 14:01, 18 February 2023 (UTC)
Discussion (deprecated HTML)
Is this not mountains out of molehills? Arguing that it is cosmetic is a bit misleading, considering a browser could just drop support for a tag and then you would have a problem. I don't have any strong opinions on allowing / disallowing but from a scan of the bots noticeboard I am in the mindset that it is a lot of hot air from users who don't want to filter their watchlist or create separate saved filters on their watchlist and that they can "Stay mad". Terasail[✉️] 11:32, 7 February 2023 (UTC)
- Also, the RfC is confusing and long-winded. Specific call outs to MalnadachBot should be minimal and only for adding context to the task since this RfC has less to do with the specific bot and more to do with the task in general.
Many of these discussion pages have not been edited in 10+ years, and many of them receive nearly zero annual pageviews.
– This has nothing to do with anything and is just filler "We shouldn't edit pages with low pageviews???". More of the text should add context on the task being discussed and information on how stopping this or future tasks would be an improvement. Terasail[✉️] 11:49, 7 February 2023 (UTC) then you would have a problem
– no you would not have a problem. If some hypothetical future browser (say, Mozilla Firebat 1.0 for holographic projections in the year 2050) decides to not support the font tag, then at that point some of the user signatures on Wikipedia talk page discussions from ~2005–2010 would render as standard colored text in the standard size, instead of the user-intended tiny purple or rainbow gradient or whatever. We would have "graceful degradation", and the grad student writing their PhD thesis about the evolution of Wikipedia anti-sockpuppet countermeasures would be able to continue their research just fine, just marginally less colorfully. –jacobolus (t) 12:04, 7 February 2023 (UTC)- And If some hypothetical Wikipedian who has watched talk page discussions from 2005-2010 would like to stop bot edits from appearing in their watchlists, they have many different ways to do so. You can still edit Wikipedia even with MalnadachBot's edits in your watchlist, y'know. I don't see how the bot's existence would be a problem, so I don't know what problems this RfC intends to address. 0xDeadbeef→∞ (talk to me) 12:10, 7 February 2023 (UTC)
- We have policies for a reason. Just because you can ignore a bot that is violating policy doesn't mean the bot should continue doing what it's doing. This RfC is to get a community consensus on the interpretation of that policy, to determine if the bot is violating it. —ScottyWong— 16:14, 7 February 2023 (UTC)
- I'm going to be honest. You can't just say
no you would not have a problem.
when it is a fact that a piece of text which is intended to be coloured but isn't coloured is a problem. Terasail[✉️] 12:11, 7 February 2023 (UTC)- Could you describe a hypothetical "problem" that would be caused by a user's signature that is displayed in normal black text instead of being colored as intended? —ScottyWong— 16:14, 7 February 2023 (UTC)
- @Scottywong Signature? Where in the two RfC questions did you mention signatures. If you wanted to target signatures then you should have specified that in the RfC question. Don't start being overly specific when the RfC is overly broad. Terasail[✉️] 16:25, 7 February 2023 (UTC)
- Literally every one (of at least dozens) of this bot’s edits that I have examined was modifying a talk page signature from 2005–2012. –jacobolus (t) 00:24, 8 February 2023 (UTC)
- So as a concrete example, if Terasail[✉️] had been created using a font tag instead of a span, and a hypothetical browser years in the future removed support, it would render instead as Terasail[✉️], switching from teal to the default link color (in my browser, blue). This is the “worst case harm” other editors are talking about elsewhere in this discussion. –jacobolus (t) 00:27, 8 February 2023 (UTC)
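For readers comparing the two forms of markup, a minimal sketch (the colour value and signature text are illustrative, not anyone's actual signature code):

<!-- obsolete form, common in pre-2010 signatures -->
<font color="teal">Example signature</font>
<!-- modern equivalent of the kind the bot substitutes -->
<span style="color:teal">Example signature</span>
<!-- both render identically today; if a renderer ever ignored the font element, only the first
     form would lose its colour and fall back to the default text/link colour -->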
- And to respond to signatures specifically. Read my previous comment which applies to all text no matter the context.
a piece of text which is intended to be coloured but isn't coloured is a problem
Terasail[✉️] 16:26, 7 February 2023 (UTC)- I can see that you're not discussing this in good faith, so I'll end my discussion with you here. But, if you took the time to inspect MalnadachBot's edits, you'd see that the vast majority of them are fixing HTML tags on user signatures. —ScottyWong— 16:40, 7 February 2023 (UTC)
- I am only saying it as I see it. I have no vested interest in the outcome of a bot which I didn't know existed until today. Terasail[✉️] 16:52, 7 February 2023 (UTC)
- The following is mostly a side note, since it doesn't affect the main argument that browsers are unlikely to remove support for the font element. One reason why editors specify a font colour is because they've modified the background colour (and from an accessibility perspective, it's desirable to do both at the same time to ensure sufficient contrast). Thus not using the intended font colour can affect visibility. isaacl (talk) 16:43, 7 February 2023 (UTC)
There are a number of technical inaccuracies in the opening statement; probably it would make sense to have an opposing (supporting I guess?) view as well. E.g. edits like [19] do in fact change what the wikitext looks like, to reflect how it used to render, pre-RemexHtml. I've also written User:Legoktm/Fixing lint errors, which may be useful to copy from. Legoktm (talk) 16:03, 7 February 2023 (UTC)
- I'm sure the bot is making many different types of edits. If it's making edits that change the way content is rendered, then those edits obviously wouldn't be affected by anything that is decided here, because they wouldn't be in violation of COSMETICBOT. To be clear, this RfC isn't intended to shut down MalnadachBot completely and prevent it from making a single edit in the future. It's only about the blanket authorization that it was given to make millions of edits that are seemingly in violation of bot policy. —ScottyWong— 16:16, 7 February 2023 (UTC)
- Would you consider making the RFC question and opening statement more concise? I think best practice is to be extremely neutral and concise (like 1-2 sentences / a single question) at the top of the RFC, and any detailed commentary by the RFC writer can be moved to the RFC writer's !vote, which is usually the first !vote so is still at the top. An example of a single question for this RFC might be something like "MalnadachBot Task X is a bot task that replaces older HTML tags with newer HTML tags in the wikicode of millions of pages. Should MalnadachBot Task X continue running?" The "First, a bit of history behind this situation:" section is what stands out to me as being too long. –Novem Linguae (talk) 18:00, 7 February 2023 (UTC)
- I'm open to that, but feel that this RfC has already been open long enough that making changes to its fundamental structure would be disruptive. —ScottyWong— 04:58, 8 February 2023 (UTC)
- Fair enough. No worries, thanks for replying. –Novem Linguae (talk) 06:02, 8 February 2023 (UTC)
- It's been what, one day? You could easily withdraw it and begin collaboratively drafting a new one with a different format (e.g. "View by X" with endorsements) that allows capturing nuance instead of two binary options. I suspect there is a lot of low hanging fruit we'd both agree on that would make things better, but it's never going to come out like this. Legoktm (talk) 07:58, 8 February 2023 (UTC)
I think it might be better to evaluate this on a case-by-case basis for each type of HTML element. Support for the font element is unlikely to be removed from browsers in the foreseeable future, in spite of its obsolete status, due to widespread use. This might be different for other obsolete elements, though. isaacl (talk) 18:49, 7 February 2023 (UTC)
- Note if the MediaWiki developers ever decide to stop generating the <font> element in its HTML output, they could still support the element in wikitext and just generate a corresponding <span> element with appropriate style properties set. Mapping the size attribute might cause some change to the display, but to be fair, since different browsers can handle it differently and (once upon a time) it was configurable within the browser, it's already not a reliable way to set a specific relative size. The relevant considerations may be different for other elements, and thus I feel looking at them individually is preferable. isaacl (talk) 16:43, 13 February 2023 (UTC)
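For illustration only, a hypothetical mapping of this kind might take wikitext such as

<font color="red" size="4">example text</font>

and emit an equivalent span instead (the CSS values below are assumptions for the sketch, not MediaWiki's actual output):

<span style="color: red; font-size: large;">example text</span>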
I notified WT:BOTPOL; WT:Linter was already done. Legoktm (talk) 08:02, 8 February 2023 (UTC)
RFC: Occupation in infobox for localities affected by the ongoing military conflict
From the beginning of the 2022 Russian invasion of Ukraine, when the Russian troops occupied some Ukrainian territory, some users started to add the information on the occupation (in various ways) in the infobox. Other users remove this information. I have some of these localities on my watchlist, and these additions and removals are way too frequent. See just recently and just for Kreminna, which is just one locality: one two three four. This bullshit needs to stop. I do not particularly care which way it would stop, but we need a clear consensus that either mentioning or not mentioning occupation in the infobox is ok, and the opposite is not ok.
In principle, this RfC should apply to any ongoing conflict, but my primary interest is Ukraine. I would also exclude Crimea because it is really administrated by Russia, and the arguments could be slightly different; we can return to it another time. This is really about clear-cut cases. Ymblanter (talk) 23:32, 7 February 2023 (UTC)
Occupation can be mentioned in the infobox
Please suggest how it should be mentioned.--Ymblanter (talk) 23:33, 7 February 2023 (UTC)
- We have existing long-standing examples of how such information can be inputted, e.g. Katzrin, Laayoune, Afrin, Syria. CMD (talk) 04:28, 8 February 2023 (UTC)
- Do the people who programmed hCard format into the infobox template know about this? It looks to me like the embedded data would get parsed out as “Afrin is divided into multiple ‘control,’ and one of the control of Afrin is ‘Turkey Hayat Tahrir al-Sham.’” —Michael Z. 04:58, 8 February 2023 (UTC)
- I posted a request for input at Template talk:Infobox settlement#Representing occupied territories. —Michael Z. 05:02, 8 February 2023 (UTC)
- It also looks to me like those examples are not consistent, judging by the three distinct grammatical schemes in the field labels of the three infoboxes. But I can’t really tell because they are hard to make sense of. —Michael Z. 05:06, 8 February 2023 (UTC)
- Without looking for a blanket ruling, infoboxes should contain information about occupying/administering bodies. Infoboxes are meant to provide "key facts", and the administration of any administrative entity is self-evidently a key fact. Pretty much all sources covering the occupied areas clearly state up front which areas are occupied, because the occupation is a key fact (magnitudes more so than, say, Area code or Vehicle registration). Not including occupying administrations is a disservice to readers in the best case scenario. The invocation of WP:NOTNEWS is nonsensical; editing an infobox does not make an article original reporting, a news report, a who's who, or a celebrity gossip and diary. Infoboxes are updated all the time, completely routinely. An interpretation of Notnews suggesting they shouldn't would necessitate removing information like political leaders from infoboxes, which change quite frequently and are often updated almost immediately (or even before they should be!). NOTNEWS actually advises editors to "include current and up-to-date information". The arguments about stability are similarly flawed, and unsupported by policy and practice. Wikipedia articles are not frozen when real-life disputes are, and neither are their infoboxes. (Template:2022 Russian invasion of Ukraine infobox has information much more recent than much of the Russian occupations.) As for how, a number of models already exist, such as the examples given above. We could probably forever WP:BIKESHED between them, but what is important is that one is used to ensure infoboxes meet their purpose of providing key facts to readers at a glance. CMD (talk) 04:12, 16 February 2023 (UTC)
Occupation should not be mentioned in the infobox
Put your vote here if you think that mentioning it in the body is enough.--Ymblanter (talk) 23:38, 7 February 2023 (UTC)
- Not The infobox is for defining characteristics of the subject, fundamental statistics, and other stable information. It is not a WP:NOTNEWS ticker, and should not include ephemeral and unpredictable wartime data like who has driven into a place this morning that could well leave it by this evening (and Russian sources, for example, have been notorious for announcing the occupation of a locality several times over weeks without actually occupying it). —Michael Z. 03:00, 8 February 2023 (UTC)
- Another problem with this is that the infobox is not designed to accommodate foreign intervention. The fields for country subdivisions assume a legal hierarchy, and embed a microformat encapsulating that hierarchy, which yields nonsense if the hierarchical data doesn't make sense. I think it can also automatically generate short summaries, which would be nonsensical too if there is no Wikidata description or manually entered short summary. —Michael Z. 03:17, 8 February 2023 (UTC)
- Not. (Summoned by bot) The infobox should only be used for stable information; it should summarise the information contained in the body; and it should represent the WP:CONSENSUS among the RS as well as the involved editors. Trying to capture information that may go back and forth in the course of a conflict should be avoided in the infobox. -- Kautilya3 (talk) 18:39, 9 February 2023 (UTC)
- Not: (Summoned by bot). The above WP:NOTNEWS point is right (I totally agree with Mzajac and Kautilya3): this is something that can change at the drop of a hat, so there would be no infobox stability as long as there is conflict. -- Otr500 (talk) 07:35, 14 February 2023 (UTC)
- Not The Gaza Strip and West Bank are "stable" occupations; we do not say so in the infobox (nor for subsidiary units of those areas). Selfstudier (talk) 10:45, 14 February 2023 (UTC)
- Not The details about whether an area is occupied are always going to be controversial. Infoboxes don't allow room to put the details of the occupation into a proper context, and so it shouldn't generally be included. -- LCU ActivelyDisinterested ∆transmissions∆ °co-ords° 21:32, 23 February 2023 (UTC)
Discussion (occupation in infobox)
This is not a new thing. The infobox for the city of Donetsk was “occupied” by a single-edit anon over six years ago at the latest, for example.[20] There is a clear long-term consensus to remove these sporadic interventions, as we can see by the current state of Donetsk. —Michael Z. 03:05, 8 February 2023 (UTC)
- This is good, but let us complete this RfC to make the consensus explicit. Ymblanter (talk) 03:28, 8 February 2023 (UTC)
- No argument from me. —Michael Z. 03:51, 8 February 2023 (UTC)
- So, I think there is some nuance to the matter, and I don't want to vote definitively either way. I think this is a case-by-case matter; which doesn't mean I don't think there need to be some guiding principles, but I DON'T think a one-size-fits-all approach is the way to go here. For some kinds of occupied territory under a long-term dispute, where one country claims it as an integral part of its territory but another has been administering it in a de facto way for a long period of time, it may be useful to describe it as such. The Golan Heights for example is widely recognized as Syrian territory that is under Israeli occupation (without wading into the debate over whether that is true or not, just that it is commonly described as such); but that's still a stable state. The two countries are not actively fighting over the land, it's basically been that way for decades, and it isn't an ephemeral thing. It's basically a standard state of affairs. If we have an active warzone, where two armies are fighting back and forth to control some bit of territory, then no, we should not be updating the infobox every few days to reflect the changing battle lines. The infobox is poorly suited for that sort of thing. Leave it out in those cases. --Jayron32 17:26, 9 February 2023 (UTC)
- This makes sense to me. It's impractical to update infoboxes to reflect shifting battle lines in a hot conflict, but when it's a long-established dispute and WP:RS describe the situation as occupied/disputed territory, then it would make sense to include in the infobox. How that's handled with the hCard format is something that would need some thought and attention to handle the sorts of issues Michael Z pointed out. —Carter (Tcr25) (talk) 17:49, 9 February 2023 (UTC)
- If we are to document stable occupations or land claims in the infobox, it would probably make sense to agree on what it should look like and how it is to be used consistently, and then add a separate row for this to the infobox template.
- None of the example solutions are clear to me, and I can’t imagine they could possibly be to someone new to the subject and glancing at the infobox. They all fail basic communications principles and do not serve WP:INFOBOXPURPOSE IMO. —Michael Z. 18:28, 9 February 2023 (UTC)
- Look at Navassa Island. It's an "easier" case since it's not inhabited and can easily be labeled a "disputed island," but it's still using subdivision_type3 and subdivision_type4 for Haiti's claim, which is problematic. {{Infobox island}} has a disputed option that might be a way to handle things, but I'm not sure that it has an effect on the hCard data. —Carter (Tcr25) (talk) 23:04, 9 February 2023 (UTC)
- Regarding hCard, I'm not much concerned that occupied status is absent from it, but we shouldn't abuse the template in such a way that the hCard data is bad. This is important because the effect is not visible, so most editors will have no way of knowing, and it is meant for re-use, so we have no way to predict what applications of someone else's we will be breaking, or in what way.
- So there should probably be zero tolerance for shoehorning stuff like this into those fields. If that can’t be accomplished, then fields specifically for this must be added to make sure it’s done right. Anyway, probably a discussion for the template’s talk.
- At least equally important, of course, is that the infobox is perfectly clear to readers. Even in that simple example, there's no visual hierarchy (I guess there are two hierarchies with no divider), and the headings are different kinds of things. It's effectively nonsense that someone has to guess at what it means. It would be easier to decipher a long sentence, but that doesn't belong in an infobox either. —Michael Z. 02:07, 10 February 2023 (UTC)
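Purely as a sketch of the field repurposing Tcr25 and Mzajac describe above (the place and country names are invented, and the parameter values are not taken from any article), the pattern looks roughly like this in {{Infobox settlement}}:

{{Infobox settlement
| name              = Exampleville
| subdivision_type  = Country
| subdivision_name  = Country A
| subdivision_type1 = Occupied by    <!-- hierarchy field repurposed for an occupying power -->
| subdivision_name1 = Country B
}}

Because the subdivision fields were designed for a legal administrative hierarchy, the microformat emitted from them treats "Country B" as part of that hierarchy, which is the kind of nonsense output discussed above.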
Subversion of consensus mechanisms, and how our current procedures can't deal with it
On 9 February 2023, Jan Grabowski and Shira Klein published a research paper titled "Wikipedia’s Intentional Distortion of the History of the Holocaust". In this paper, the authors allege subversions of our consensus mechanisms through selective selection and quotation of sources to push a specific narrative or point of view, in a manner that our dispute resolution and conduct resolution mechanisms cannot handle. While this paper discusses these issues in relation to a specific topic, the history of the Holocaust in Poland, it is the belief of myself and several other editors that the generalised issue of subversion of policy, and an inability of our resolution mechanisms to handle that subversion, applies to other content areas.
Is this a problem that other editors have encountered? If so, would you define this problem in the same or a different way? In your own words, how would you define this problem? In your experience how does it manifest? Sideswipe9th (talk) 21:34, 11 February 2023 (UTC)
Discussion (Subversion of consensus)
In my experience in content areas that I am familiar with, the generalised problem as described in the paper is true. I have seen and been involved in multiple discussions, and know of multiple articles, where sources have been selected, quoted and presented in a manner that is aligned with a specific point of view. Unfortunately this disruption is usually done in a subtle manner, such that an editor who is either largely unfamiliar with the content area or has only a surface-level familiarity with it would be unable to detect it. By the time an editor or admin has the requisite knowledge to take action against one or more of the disruptive editors, they are seen as involved at best, or accused of making attempts to remove opponents from the content area at worst. Sideswipe9th (talk) 21:34, 11 February 2023 (UTC)
Unfortunately this means that our forums for handling this sort of behaviour (AN, ANI, AE, ArbCom) are broadly incapable of handling this disruption. Where conduct cases are brought to AN or ANI, they are often described by both involved and uninvolved editors as content disputes that should be handled elsewhere. And where cases are brought to AE or ArbCom, admins and arbitrators who are familiar enough with the content area to recognise the problem, and therefore best suited to identify problematic editors, are forced to recuse themselves from taking part. Sideswipe9th (talk) 21:34, 11 February 2023 (UTC)
- I'm mentioned in the Grabowski and Klein paper. I have two comments/observations at this point:
- This article is still brand new and in some ways "the dust hasn't settled yet". A possibility of replies, responses, reductions exists and it's important not to take the claims made by the authors at face value.
- Having said that *I* actually do ALSO think that WP:CON mechanisms are often "subverted" "through selective selection and quotation of sources to push a specific narrative or point of view" etc etc. In fact I have stated and complained about this on numerous occasions previously, including in exactly this context, in connection to the activities of indef-banned User:Icewhiz that Grabowski and Klein discuss so extensively and portray as some kind of a tragic hero. Honestly, it reflects horribly on Wikipedia that it took Icewhiz making death threats and doxxing numerous Wikipedia users for us to finally indef ban him even though it was blindingly obvious exactly what kind of person he was long before that happened. And while it took way way too long, and while it seems he still has some defenders on Wikipedia, "subversions of our consensus mechanisms through selective selection and quotation of sources to push a specific narrative or point of view" was exactly what he was eventually topic banned for (among other things).
- That is going to be the main problem right there. EVERYONE who's involved in disputes thinks that it's the OTHER SIDE who's "subverting" our processes (because otherwise everyone would instantaneously recognize how right I am about everything!!!). While I 100% support working and developing ways to overcome this problem I also think there are serious difficulties in solving it - how do we actually recognize that it's happening since we can't take the parties' involved word for it? Volunteer Marek 21:46, 11 February 2023 (UTC)
- While I recognise you and the other editors named in the paper have (to put it mildly) a long and troubled history, could we please keep this discussion generalised such that we're not discussing specific problematic editors? I think there's some value in trying to gauge how widespread this issue is in other unconnected topic areas, and what we can try to do to resolve it from a policy and guideline perspective, without getting into the specifics of how "UserX is pushing point Y in topic area Z". Sideswipe9th (talk) 21:56, 11 February 2023 (UTC)
- I appreciate that, but I also think it's very important to keep the meta-perspective here in mind, if for no other reason than that this very article is in some way part of the very same phenomenon we're discussing. Look at the response to this article from many people (including possibly yourself). They glance at it, read some of it, think "oh wow this sounds very bad", and proceed to jump to conclusions. People who have been involved in Icewhiz-related drama for the past five years and who understand this article's context have a completely different perspective. How many people do you think actually bothered investigating the article closely, like clicking on the actual links provided in it and checking that these show what the authors actually claim? (Answer to that one is "None", because the ref links in the article don't actually work.) Did you do this before jumping to start these discussions? As isaacl says below, "fact checking" and genuine engagement is an extremely time consuming process, and that's why most people, including Wikipedians, don't do it. And that means that all kinds of BS can be passed off, "subverting" the consensus process.
- BTW, on his twitter account, in a couple of instances Icewhiz gave advice to would-be Wikipedia trolls. He said (paraphrasing) - "if you want to put a false claim into an article, just put a reliable-looking citation at the end. Even if the citation doesn't say anything like the claim, it doesn't matter; hardly anyone ever actually checks that as long as the citation looks reliable." This is one of the main ways that your proposal is referring to, no? Perhaps a good first step would be to enumerate different ways that "consensus can be subverted"? Volunteer Marek 23:02, 11 February 2023 (UTC)
- While I can only speak for myself on this, the generalised conclusions surrounding subversion of consensus mechanisms are something that has been on my mind for at least the last year. To my knowledge Icewhiz did not edit in the content areas I'm active in.
- As for reading the paper closely and checking the citations in the paper, I did so prior to contributing, both here and at two of the other discussions on this paper. Some references are broken, some are not. With the broken links the diff ID numbers were included in the URLs, so finding the correct edit using Special:PermaLink was pretty straightforward and a mild annoyance at best, though I do also recognise that many readers would not know how to do this.
- As I said at the AN thread prior to making this discussion, while I'm not familiar enough to untangle the specifics within the content area that the paper is discussing, I do recognise the same problem occurring in other unrelated content areas that I am familiar with.
- On inserting false claims with reliable-looking citations, yes, that is one manifestation of the problem. I've also seen discussions where older sources, which are favoured by one side of a current culture-war topic, are preferred over newer sources, which represent the current scientific and medical consensus on the topic. I also see discussions where sources are cherrypicked and presented in a manner that favours a specific point of view, and that editors who are not fully familiar with the topic cannot easily detect. In a lot of ways, the various manifestations are already laid out at the WP:CPUSH essay that I referred to in my first message in this section. Sideswipe9th (talk) 23:44, 11 February 2023 (UTC)
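For readers unfamiliar with the link workaround described above: when a broken reference URL contains a revision or diff ID, the edit can be pulled up directly through Wikipedia's built-in special pages (the number below is a made-up placeholder, not a real revision):

https://en.wikipedia.org/wiki/Special:Diff/1234567890 (shows that edit as a diff)
https://en.wikipedia.org/wiki/Special:PermaLink/1234567890 (shows the page as it stood at that revision)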
- The broken links work if you copy-and-paste them rather than clicking on them. Here is a tweet from the journal about the problem. Levivich (talk) 00:06, 12 February 2023 (UTC)
how do we actually recognize that it's happening since we can't take the parties' involved word for it?
This is a very good question. I wonder if we need to more clearly delineate the various levels of involvement. For example, an editor who is knowledgeable about and actively editing a topic can currently be seen to be as involved as two editors who have a history of tense and barely civil discussions. To a degree this pushes subject matter experts to the side in favour of editors who are not active in a content area, when trying to resolve this sort of problem.
- Should an editor or admin who is active in a topic area, and not in direct dispute with an editor who is being problematic, be treated as though they have the same level of involvement as an editor or admin who is in direct dispute with another? Sideswipe9th (talk) 22:26, 11 February 2023 (UTC)
- I'm also mentioned in the cited essay (which suffers from numerous major problems, but as you said, this thread is not the place to discuss them, so I will focus on generalities).
- Frankly, I don't think it is possible to do anything about this, since as you say, once someone becomes involved in a content dispute beyond simply enforcing technical rules like 3RR or similar, they are, well, involved. That's why we see an occasional recusal from Arbitrators, for example.
- If you click through to a related interview with Joe Roe here, he makes a good point that we would need a new type of body, one that I imagine should be staffed with experts on a particular topic area. But first, we know it's unrealistic to expect that sufficient experts will involve themselves with Wikipedia (for example, the cited essay concerns Polish-Jewish history; I don't believe we ever got any notable expert on this topic area editing Wikipedia in, what, 20+ years of the project's history?). Second, even creating such a body would lead to discussions on whether its composition is neutral (think about discussions related to the composition of Supreme Court bodies, etc.). Third, how do we prevent such experts from becoming involved on Wikipedia? And in fact, because they are experts and involved in the content area outside, how do we account for their bias? Is an expert scholar a truly neutral party when it comes to discussing content that may relate to research that is their own, cites their works, maybe praises it or criticizes it? Seriously, I think there is no realistic solution here - I could joke that we could put ChatGPT in charge of ruling who is right or not, because seriously I don't know what we can do better than what we are doing now. As WP:NPOV states, "Editors, while naturally having their own points of view, should strive in good faith to provide complete information and not to promote one particular point of view over another." We all, including any conceivable experts, are biased due to having some POVs. Anyone who thinks they are perfectly neutral is in fact a very dangerous individual (see my small essay here).
- However, I see another angle of subversion of consensus here, related to harassment (see WP:OWH) and "mud sticking"/"where there is smoke, there must be fire" logic (see my second mini-essay here). We all know that, unfortunately but realistically, one way of winning content disputes is perseverance. The harassment angle here is that this can be achieved not just by "sticking it out", but also by driving one's opponents from the field - even after one gets banned. In here, I am afraid I have no choice but to move back, if slightly, from the hypothetical to the practical. Icewhiz, a major source of information and agenda for the essay linked by OP, has been site banned for harassment with the declared agenda of driving his perceived wiki-enemies away from the project by destroying their reputation. How can we, the community, prevent such extremely dangerous actors from slowly succeeding, particularly when they get a voice in otherwise reliable media? For example, if a claim that meets WP:ASPERSIONS is dismissed by ArbCom (which concludes that the evidence presented was insufficient for said claim), the editor making the claim is site-banned by Trust and Safety for harassment, but the claim is later repeated in reliable media, what should we do? It seems like another situation with no perfect answer. Saying that such a claim can never be revisited and should be ignored or censored goes against free speech and assumes ArbCom's infallibility; on the other hand, revisiting it goes against WP:DENY and invokes the aforementioned "where there is smoke, there must be fire" fallacy, slowly chipping away at the reputation of the targeted, victimized editors. There is a reason some cases related to harassment are heard by ArbCom and Trust and Safety "behind closed doors", to prevent more harm from coming to the victims by giving the attacker a forum and/or an audience, but what to do when the attacker already has an outside forum? What steps, if any, can we take to DENY the harassment and protect the victims? Piotr Konieczny aka Prokonsul Piotrus| reply here 02:31, 12 February 2023 (UTC)
- I think the authors have it right. A volunteer community can't deal with this kind of problem. If we ever have an Editorial Board/"ArbCom for content" (the idea I alluded to in the interview, though it isn't mine) then it'll have to be through intervention by the Foundation. Something like meta:Croatian Wikipedia Disinformation Assessment-2021 on a grander scale – where they proactively commission (i.e. pay) outside experts to investigate our coverage of a topic and make binding recommendations. I realise that offends pretty much every idea we have about project governance, but there it is. – Joe (talk) 08:19, 13 February 2023 (UTC)
- @Joe Roe - how can you have "binding recommendations"? Something is either a recommendation, or it is binding (a diktat) - the two together don't work. Speaking as one who read it, and then asked follow-up questions that weren't answered because the contractor's contract hadn't been planned with Q&A time for some unknown reason, the assessment had significant flaws. The Foundation is prohibited from doing what would be the actual creation of content here, as it would make it a publisher and cost it the safeguards that go with being a mere host.
- That aside, I'd also ask how these external experts could make recommendations in a form that would (setting aside consensus issues and fixable ones like MOS) comply with Wikipedia policies. Nosebagbear (talk) 13:28, 20 February 2023 (UTC)
- An expert commission could recommend, for example, that we consider a certain source or body of sources unreliable, which we'd then treat as policy. – Joe (talk) 14:13, 20 February 2023 (UTC)
- If Grabowski and Klein believe that there's an NPOV issue in a certain article or topic, then I advise them to open a discussion at the NPOV noticeboard. WP:CPUSH is a huge problem on Wikipedia; I would argue it might be the single biggest problem we face as editors. But it seems the authors are just complaining that it exists rather than offering any sort of novel insights or solutions. I'm more immediately concerned that they're making accusations by name, in an externally published work, against editors with whom they have content disputes. Thebiguglyalien (talk) 18:08, 12 February 2023 (UTC)
If it is a CT area and an article is well attended I don't really think this is a serious problem, in general. While a certain amount of dodgy stuff might get in for a while, eventually it will be sorted out by the usual processes, discussion, RFCs, dispute resolution and whatnot. If there is a problem, it is in the amount of wasted ink/time in reaching a desirable outcome, democracy right? Selfstudier (talk) 22:06, 11 February 2023 (UTC)
- While I don't go into specifics, I can say that the articles that I'm familiar with this being a problem in are in CT areas (sometimes multiple overlapping areas) and are generally well attended.
- I could also cite an example of one of our usual processes where it was remarked by editors and the discussion closer that editors who were familiar with the topic were contributing in favour of one outcome and editors who were not familiar with the topic were in favour of another outcome, and the overall determination of the closer was one of no consensus. Sideswipe9th (talk) 22:44, 11 February 2023 (UTC)
- @Selfstudier - you're possibly thinking of the American Politics topic area, which is extremely "well attended" and where the process you describe does seem to work far better. Unfortunately this is not true for other contentious areas. Volunteer Marek 22:52, 11 February 2023 (UTC)
Consensus-based decision making doesn't scale up well. The more people involved, the harder it is for individual voices to be heard. One of the factors for measuring strength of argument is weighing how many participants speak in favour of the argument, which provides incentive for flamboyant statements both to attract support and drive away opponents from the discussion, and encourages participants to align with a viewpoint without necessarily understanding the full context. English Wikipedia's consensus-based approach also produces inconsistent results, relying on who shows up at a given time and who hasn't left the discussion through attrition. This rewards the recalcitrant over collaborative editors. Getting more people to learn about the circumstances in full is an idealistic approach, but imposes a high cost on a large proportion of the community, and thus isn't time-effective. To avoid decisions being made on a superficial basis, the most effective avenue would be to find ways to move away from making all decisions by consensus in large, unmoderated group conversations. For better or worse, the community still seems to prefer the advantages of the current decision-making traditions. isaacl (talk) 22:24, 11 February 2023 (UTC)
To pose the obvious question, what can be more "consensus subverting" than an indefinitely banned user being allowed to influence Wikipedia article content and policies from the outside, long after they have been globally banned? Volunteer Marek 23:05, 11 February 2023 (UTC)
- Comment In my experience, this is a significant issue in a couple of ways. One is by strength of numbers – for example, the Israel/Palestinian topic area has a huge disparity in the number of pro- and anti-Israel editors, with a large majority of regular editors in the topic area being in the latter camp. This means most "consensuses" developed through discussion in the topic area are inevitably skewed (I have occasionally seen discussions in which pretty much every editor from outside the topic area (who can probably be assumed to be reasonably neutral) has commented one way, but the regulars (whose responses are entirely predictable) manage to sway the discussion, or at least block consensus from forming against their viewpoint). The second is through relentlessness – there are some editors (or small groups of editors) who will relentlessly badger away at discussions over days or weeks. I would say this is perhaps less of a problem at article talk level, but tends to happen at policy discussions.
- Regarding how these issues can be resolved, I only have an idea for the former issue. I would suggest an alternative to an RfC be created – a 'Request for outside opinion' in which topic area regulars would not be able to comment and only outside views would be taken into account. This would prevent vote stacking in controversial topic areas dominated by a group with a similar POV, and perhaps help flag up that there are issues in an area when the outside (neutral) editors tend to come to quite different conclusions to the regulars. Cheers Number 57 00:25, 12 February 2023 (UTC)
- @Number 57 Good idea in theory, but in practice, hard to execute - see also my comment above. How do we ensure the outside comments are balanced, and/or how do we judge them afterward? Anyone with experience in academia can comment on how peer review is a lottery, for example, and the same article can be rejected by one journal and accepted by another with an identical reputation, simply because the first set of reviewers was skewed towards not liking the article and the second was skewed the other way. (This also reminds me of an old Polish joke: "what voting method are we going to use to choose the voting method we will use?") Piotr Konieczny aka Prokonsul Piotrus| reply here 02:37, 12 February 2023 (UTC)
- @Number 57: In RfCs in ARBPIA (the area with which I am most familiar), it is commonplace for a large fraction of the participants not to be regulars. Sometimes a considerable number come with identical opinions that they are unable to justify if challenged. We could conjecture why this happens, but the moral is that excluding the regulars could mean excluding everyone who knows much about the topic. Many disputes are on topics that require some background to understand. Zerotalk 13:54, 12 February 2023 (UTC)
- Comment There's an old joke amongst lawyers, who are professional debaters paid to win arguments -- or at least minimize the effects of losing them: "When the facts are against you, pound on the law; when the law is against you, pound on the facts; when both are against you, pound on the table." In short, no one likes to lose arguments, especially when your beliefs &/or paycheck are on the line. And the world of Wikipedia is no different than the Real World (tm). If the facts in an article are at variance with what you think is true, you'll argue to either exclude them, or include different facts, or both. If you can't persuade others, you'll argue Wikipedia policy. And if all that fails, you'll go to some forum (print or online) that believes Wikipedia will never be as good as, say, Encyclopaedia Britannica, & they'll publish your rant, which may lead to unpleasant repercussions for all. Sigh. I don't think we'll ever fix this problem, because no one wants to lose an argument, especially if it invalidates your beliefs or impacts your paycheck. -- llywrch (talk) 05:37, 12 February 2023 (UTC)
A serious long-term problem is that the dispute resolution process is heavily weighted towards "behavior" and away from "integrity". It is far easier to get blocked for calling someone an idiot than for misrepresenting sources, even though it is obvious that the latter is the most damaging to the encyclopedia. It is largely forgotten at the noticeboards that the only reason we are here is to build a fine encyclopedia for our readers (not for our editors). Report someone at ANI or AE for misrepresenting a source, or even for making up a source, and it is likely to be dismissed as a "content dispute" or even (as has happened to me) completely ignored. Endless pov-pushing with dishonest use of sources will not be punished if carried out politely. On the other hand, editors who have contributed vast amounts of fine content can get topic-banned for not being nice to other editors a few times. Finally, note that I didn't say that this problem is easy to fix. Nor did I say that my concern is boosted by the journal article, which I actually think is dreadful. Zerotalk 08:34, 12 February 2023 (UTC)
- Yup. We often act as if the purpose of this website is to provide entertainment to the people who edit and administrate here (and boy, can it be entertaining) rather than to write encyclopedic articles for the general public. The basic standard for sanctions is "does this behavior annoy administrators and take up their time" rather than "how does this behavior change the quality of our articles". Volunteer Marek 08:52, 12 February 2023 (UTC)
- This needs two-track resolution. Grabowski and Klein contend that we have active antisemites who are co-ordinating offwiki to distort content and drive off those who disagree with them, on a large scale and over a long period. That allegation could belong at Arbcom, but preferably at WMF Trust & Safety who are professionals drawing a salary. Meanwhile our means of resolving large or intractable content disputes is a community RfC, and there's clearly scope to open one on this.—S Marshall T/C 12:15, 12 February 2023 (UTC)
- Understanding that dispute requires deep knowledge of the topic. It isn't something that poking around in diffs will sort out. Nor do I think that we are obliged to respond to claims of outside activists. Zerotalk 14:00, 12 February 2023 (UTC)
- An RfC would only really apply if we've come up with a specific solution and are seeking approval to implement it. This seems like something that evades a specific solution, as it doesn't identify a specific issue so much as a general dislike of the system. We can't really have an RfC that says "hey, should we take action against biased and dishonest users". Of course we should, but what does that mean? Thebiguglyalien (talk) 17:52, 12 February 2023 (UTC)
- The community has to recognize, though, that in a large-scale, unmoderated community RfC, most participants won't want to invest the amount of time to evaluate the entire context, including evaluating the cited sources and diffs of editor responses. This leads to inconsistent results. If we want to avoid the resulting problems, we need to explore ways to allow decisions to be more strongly influenced by those who are willing to invest the requisite time, and have a track record of being open to following the evidence wherever it leads. I understand why many editors are reluctant to delegate decision-making influence in this way. Without this, though, decision making will continue to be vulnerable to the whims of whoever shows up to engage and engages the longest. isaacl (talk) 18:03, 12 February 2023 (UTC)
- You can have an RFC where you actually ask the community for input and comment without a specific solution in mind.—S Marshall T/C 18:05, 12 February 2023 (UTC)
- Yes, possible, to see if there is a direction of travel or if it is just all over the place. Selfstudier (talk) 18:10, 12 February 2023 (UTC)
- Ack, I've just seen that the long reply I typed to Zero0000 got lost in an edit conflict. The gist of it was that I agree that we aren't obliged to respond, but I think we should. We know from experience on other wikis that distortionism does occur and can become very entrenched, and Grabowski and Klein are subject matter experts. Their allegations are plausible. To my eyes as a non-expert their paper doesn't prove their case, but I think it would be best for Trust & Safety to review this carefully. Isn't this what the UCOC is for? I mean, if the UCOC isn't for this, then I have no idea what it's for.
- If it turns out that Grabowski and Klein's case isn't proven, then we should say so, and task the relevant WMF department to do what they can to protect the named editors, who are at risk of reputational damage and career-limiting consequences if those allegations are false.—S Marshall T/C 18:12, 12 February 2023 (UTC)
- I don't know if this is the way to go, but if Wikipedia needs to speak with one voice on this, then an RfC is necessary but is not enough: we should have some kind of independent third-party review of the article contents and the internal processes that lead to them, entrusted to academics with expertise in the field. Gitz (talk) (contribs) 18:40, 12 February 2023 (UTC)
- The problem with "entrusting" review to "experts in the field" is that, first, who gets to pick these experts? As this article makes crystal clear, experts - by which I mean credentialed historians holding positions at various academic institutions - often disagree with one another. In fact in this case there seems to be some very serious personal or professional animus between at least two of them. Second, almost by definition, experts are going to have a WP:COI in this regard. Suppose we ask someone to review. And that person says "well, of course you should exclusively use just my work and that of my co-authors on this subject as it's the best there is!" I mean... no? That's not how it works? I distinctly recall a couple of instances where an expert - someone with expertise in the field - attempted to make contributions to articles related to their work, and some editors, including some active in this dispute, IMMEDIATELY ran off to drama boards to report COI. Apparently THAT particular expert was the wrong kind of expert (i.e. didn't support their POV).
- There is a very basic and simple reason why the roles of "author" and "editor" (editor as in publishing, not as in Wikipedian) are separate in academic publishing. Or even "author", "editor" and "publisher". Authors will always have a biased view of their own work (or work which agrees with them) and it is the job of the editor and the publisher to rein them in (honestly, most of the job of an editor is to cut the hell out of manuscript length - worth remembering next time someone complains about "massive removal of content"!). This is also why on Wikipedia the key policies are WP:V and WP:OR. We use reliable sources and verify that these say what they're supposed to say. We don't adjudicate disputes among scholars (like Grabowski contra Lukas) - it's simply not our job, nor are we qualified to do it, nor are even "experts" that we may invite. That is up to the broader academic community and we can only report on what they say. So yeah, no, this is an unworkable idea that is contrary to our policy and ethos. Volunteer Marek 20:19, 12 February 2023 (UTC)
- And there lies the problem, Volunteer Marek. None of us is a professional author, editor or publisher, and few of us are subject-matter experts (not that it really matters on Wikipedia if you are any of these). So if the question "who gets to decide if a scholar is good enough?" is valid, then so is "who gets to decide if an author/editor/publisher is good enough?"
- From your answer it appears that you treat those who add content as "authors" whose judgment is somehow clouded by the very fact that they create content, which apparently should empower "editors" (whom you seem to self-identify with) to temper (i.e., mostly by removal) what the "authors" write. This misses one point, though. On Wikipedia, for good or bad reasons, there is generally no distinction between "authors" and "editors" (an exception I can think of is AfC, where draft reviewers are "editors" and drafters are "authors", but we are patently not speaking of that situation). Just because you choose not to create content (avoid being an "author") you don't gain any additional powers of review or any other privilege, nor does it mean that your judgment is clearer and better than the others' from the outset. We all exercise the roles of authors, editors and publishers; these roles are inseparable, and that's by design. Since essentially we are random people, my judgment is statistically not better or worse than yours, and that stems from the fact that we are anonymous. You may see your role as you wish, but in the eyes of Wikipedia you are just one of the crowd (albeit one who informally has more social capital than I do by virtue of being an older editor) and can't just claim to be somehow more impartial.
- We have the consensus process as the next best choice (doesn't work as it looks like trench warfare in this area but on the Internet) and admins who are supposed to mediate/decide the conflict but don't want to get killed in the crossfire, which is kind of instinctive. Thus there is no mediation and whoever is more stubborn wins. Which is what the paper describes in detail. Szmenderowiecki (talk) 03:29, 13 February 2023 (UTC)
- Sure; I don't think that contradicts anything I wrote. isaacl (talk) 18:48, 12 February 2023 (UTC)
- It's not meant to. My reply is below yours because I put it in chronological order, but I was responding to Thebiguglyalien. I don't disagree with you.—S Marshall T/C 20:06, 12 February 2023 (UTC)
- Thanks for the clarification. I was expecting the reply to be nested under the comment to which it is replying. isaacl (talk) 20:25, 12 February 2023 (UTC)
- Responding to S Marshall above. I don't think we can leave a dispute about content to T&S to sort out -- if I understand your point correctly. For one thing, that's editorial control, which the Foundation must avoid to keep their common carrier immunity; the moment they start telling us what to include in one article, they start being responsible for the content in every article. (IANAL, but that's my understanding of how the Telecommunications Act of 1996 works.) For another, do any of us trust the Foundation to actually do a competent job of sorting out a dispute? Based on the disaster of how T&S handled the FRAM case, they can't. A few paid employees won't take the time & effort dozens of unpaid volunteers would in an important case, & even less for less serious ones. (Although I had the thought today that maybe a few experienced Wikipedians ought to be paid to manage article reviews. As I said somewhere on this page, what we increasingly need to keep Wikipedia reliable is experienced & willing Wikipedians to review articles; otherwise, only the ones active Wikipedians happen to have an interest in are monitored. And even then, you can't become an expert in a subject without forming biases.) -- llywrch (talk) 01:52, 13 February 2023 (UTC)
- A scheme I know kind of works on Russian Wikipedia, but also doesn't really work, is mandated mediation (принудительное посредничество), where the mediators (admins) have the power to resolve content disputes and conduct disputes, and enforce their decisions appropriately, but I won't vote for its implementation. For starters, who gets to choose the mediators? On ru.wiki it's ArbCom, but the one time the previous ArbCom here was presented with the issue in the topic area, it was a total free-for-all, a mess which was handled poorly by the committee, the parties and in particular by the clerks (yeah, it was Christmas/New Year time, but that is hardly an excuse). Secondly, in that system, admins have to be previously uninvolved in the dispute and basically agnostic about the whole issue, which means that the admin will be impartial but also clueless, but that's what we already have, so this gives me a lot of plus ça change vibes. Thirdly, if the appointed mediator sucks, which is a distinct possibility because of point two, ArbCom won't, or will be reluctant to, dig in deeper, for the same reasons it didn't want to for all the time so far. Fourthly, admins would dread being appointed mediators. Also, with the current system, no expert in the subject area will realistically be able to become an admin. So that kind of makes that idea bad in my eyes. Szmenderowiecki (talk) 03:58, 13 February 2023 (UTC)
- I'm not sure it's really accurate to call the controversies here a "subversion of consensus"; it is an allegation of intentional distortion by a small, named clique of editors. Regardless of whether the allegations in this case have any merit, it is a matter of historical fact that such cliques have existed and operated on a variety of topics, and we have prior ARBCOM cases dealing with them. And it's certainly my experience as an administrator that has to a point specialized in mediating Balkan and post-Soviet disputes that a lot of articles about contentious conflicts, particularly ones that sit at the edges of obscurity from the perspective of English Wikipedia's biases, are dominated by editors using dubious methods--in many cases it may even just be one editor who has taken free rein over a topic until someone stumbles on their work and calls them out (or alternatively, a set of POV-opposed editors who have incorrectly decided to just each create their own walled-garden WP:CFORKs dedicated to the topic). Unlike the Holocaust in Poland, perhaps, the damage is muffled by the fact that these articles are typically not very well written, and thus don't carry the full weight of "trusted information" that well-written Wikivoice has with our readership.
- The article identifies admin intransigence as a contributing factor. I think that the broader problem is a similar (justifiable) reluctance on the part of the community as a whole to engage with a topic owing to its high emotional burden (and, in the case of pretty much all of the examples I'm thinking of other than the Holocaust, obscurity). If there is clear buy-in and participation from good-faith editors from all corners of the encyclopedia to give clear judgment calls about whether sources say what editors claim they say, what DUE actually looks like, etc. it's much easier for admins to step in and block bad actors. Otherwise, admins are both a) relying on their possibly-weak personal understanding of the subject matter and b) making calls that could cause them to be considered WP:INVOLVED. But, as I said, editors in these areas face a steep emotional burden: even for editors with zero personal connection to a conflict, editing around topics like genocide takes a toll; conversely, bad-faith editors will face no similar obstacle due to their personal contempt for the topic at hand. I think that if we want to be proactive about dealing with contentious topics dominated by POV-warriors, we need to either find a way to incentivize uninvolved, skilled editors to go edit these topics (read: get the WMF to pay people to do it) or else revisit expectations of how admins adjudicate in contentious topics with low participation. signed, Rosguill talk 21:40, 12 February 2023 (UTC)
- Wikipedia is very vulnerable to determined manipulation and undeclared COI. While I am among those who have given up on editing topics in World War II in Eastern Europe, as there really is no point in trying, I have seen this behaviour on topics as disparate as the history of the Congo, Ugg boots and water quality in Fresno. Simply put, an unpaid editor cannot compete with a determined SPA. As to solutions, admins are part of the problem, though I would call it administrator fatigue, not intransigence. "The same editors being TL;DR" was what somebody at ANI called a series of fairly serious complaints recently. We need more admins and, understandably, people are reluctant to run the gauntlet of RfA. I don't have a full-fledged suggestion, but if RfA is an obstacle maybe we need lower stakes. Term limits for admins would never fly, but perhaps we could have an advisory board for questions where the sources matter, whose members might be willing to serve a month, let's say, where they might be unwilling to commit indefinitely, shrug. TL;DR: it's a problem, it isn't limited to Poland, and I urge En.wikipedia to take it seriously. Elinruby (talk) 23:20, 12 February 2023 (UTC)
- In the name of fairness I feel the need to add: 1) Grabowski at several points does not seem to understand the reliable source designation; 2) while the article is not *only* about Volunteer Marek, and it is true that his reverts are often "aggressive", he was invaluable in the article about sexual violence in the Russian invasion of Ukraine, which today would likely omit all mention of Russian soldiers were it not for him. He persisted when many of us could not. Credit where due. Elinruby (talk) 00:47, 13 February 2023 (UTC)
- @Elinruby: As a matter of personal interest, what manipulation have you seen with regards to the history of the Congo (DRC, I presume)? -Indy beetle (talk) 10:29, 14 February 2023 (UTC)
- Yes, DRC. Haven't been in there in a while but "manipulation" mostly took the form of (very polite) objections to certain subsidiaries or mines being mentioned on pages for parent companies. I don't want to look names up but Katanga and Kivu province come to mind. Mentions of diamonds financing conflict, or pollution, or local objections, or allegations of bribery, tended to subsequently fall off pages. The pattern of one corporation per mine, which would then go bankrupt, would get obfuscated, or orphaned... Don't remember all the details. Possibly I appeared to be trying to right great wrongs, but these were different user names, so I don't think that this was someone monitoring me, especially since I was almost exclusively working from ICIJ sources in the wake of Panama Papers. Probably employees, IMHO, which I don't consider necessarily an issue if done in good faith. There was also a heated dispute over King Leopold's Ghost, but that was fairly by the book as heated disputes go; wouldn't call it "manipulation". Does that answer your question? Elinruby (talk) 10:54, 14 February 2023 (UTC)
new Haaretz article Selfstudier (talk) 12:32, 14 February 2023 (UTC)
- One issue that I think this situation brings up, that the community will need to readjust to, is understanding how to evaluate "peer-reviewed academic publications" as the basis for evaluating conduct disputes. We are all accustomed to thinking in terms of WP:RS. The paper that led to the discussion here and at ArbCom meets our criteria as a "reliable source" for content such as that at Criticism of Wikipedia. (I'm amused to notice that, as of my post here, it doesn't seem to have been added at that page.) But that doesn't mean that we should trust it, at least on face value, for determining how to resolve either a content dispute or a conduct dispute. There's an awful lot of discussion at the ArbCom case request page about the article being "peer-reviewed", as if that should have a bearing on the article's usefulness for framing the case, and yet the reality may very well be that it is far from being a "reliable" source for that kind of purpose. --Tryptofish (talk) 18:25, 15 February 2023 (UTC)
- Heh. If someone edited Criticism of Wikipedia and attempted a summary (in prose without quotes), using the above Haaretz article as an RS, would anything happen? Would anyone challenge the material? Selfstudier (talk) 18:58, 15 February 2023 (UTC)
- I agree that the article cannot be used as a source for conduct: it is up to the WP community and its admins and arbitrators to assess conduct. However, the article is a reliable source as far as article content and content disputes are concerned. In fact, it is an indispensable source for cleaning up articles from the many errors they contain. There are many good reasons to criticise the article, but from the point of view of our articles' quality, it is a blessing. (Note that Criticism of Wikipedia has not yet been updated, but I myself used the article as a source in Reliability of Wikipedia). Gitz (talk) (contribs) 18:59, 15 February 2023 (UTC)
- @Gitz6666 As someone who is criticized in that piece, I want to say I think your summary there seems neutral. It is a fact the authors criticize Wikipedia's reliability. To what degree they are correct is another issue, and this is why it's important to use the correct language (claim... believe... allege) as you did. Shame that some news outlets reporting on this are less careful and treat the paper as a holy word that "proved" something. Piotr Konieczny aka Prokonsul Piotrus| reply here 03:25, 16 February 2023 (UTC)
- I agree this is a reliable source, but that doesn't mean that other reliable sources don't exist. If this was article space it would be given due weight against other sources. The idea that "a" historian is right is not how it works. When that reliable source agrees with the opinions of certain current and former Wikipedia editors, and works in tandem with those editors, it will of course come to a conclusion that they all agree upon. There is certainly civil (and not so civil) POV pushing going on, in this area and others, and maybe ARBCOM needs to be more active in combatting it. But it's a shame they would wait to use something such as this article to start doing anything. -- LCU ActivelyDisinterested ∆transmissions∆ °co-ords° 19:00, 15 February 2023 (UTC)
- My point really isn't about using it as a source for content, but rather, about using it as a rationale for dispute resolution. I think that using it as a starting point for the ArbCom case will be anything but a blessing. "May we live in interesting current events times." --Tryptofish (talk) 20:08, 15 February 2023 (UTC)
- In addition to the points brought up by Rosguill and Tryptofish, this highlights some other general issues about our article writing processes:
- Context is always difficult to establish, because we try to keep an article tightly focused on its subject. Context, if it appears at all, is merely summarised.
- This often generates criticism of biographical articles that we are writing from the subject's point of view.
- Our requirement to avoid WP:COPYVIO through paraphrase can result in subtleties in the original source material being lost.
- The perennial problem of WP:UNDUE, especially in incomplete articles. Hawkeye7 (discuss) 19:18, 15 February 2023 (UTC)
It's not just about the consensus process; our policies and guidelines are the creator or enabler of this, our bias problem, and the fact that every article about a real world contest (e.g. politics) is eternally in some painful process. We could 80% fix all of the above by improving our policies and guidelines. North8000 (talk) 19:29, 15 February 2023 (UTC)
- Manufactured consensus by professionally managed bot armies: usually, when an editor tries to manufacture a consensus by creating an army of sock puppets, it is rather obvious that they are WP:single purpose accounts. Yesterday, an investigation by "reporters from 30 outlets including Le Monde, Der Spiegel, The Guardian and El País" revealed that there are professional 'opinion formation' companies that create bot armies that each have a track record of apparently good faith participation in a broad range of discussions. This makes them much less likely to be identified. If these guys aren't already trying to manipulate Wikipedia articles, I would be astonished. As if we didn't have enough of a problem, as described by others already. Sigh. --𝕁𝕄𝔽 (talk) 19:30, 15 February 2023 (UTC)
- @North8000: What would you change in our policies and guidelines? Levivich (talk) 21:21, 15 February 2023 (UTC)
- Thanks for asking. I'll prepare a succinct overview. It will be derived from my giant mess at Wikipedia:Strategic issues with core policies, guidelines and structures North8000 (talk) 21:51, 15 February 2023 (UTC)
- A major part of the problem is that any attempt at UNDUE is OR. Take the following very realistic scenario: It's actually part of on-going life in Israeli areas close to the Gaza Strip that rockets are launched at them. These rockets don't come at an even rate; there may be a few completely quiet days, and there are days with dozens of rockets. These rockets rarely actually hit any populated area, and as a result there are few direct deaths or injuries directly attributable to them; however, living like this for years has been shown to cause long-term health problems. One day, after a direct hit destroys a room in a house and kills some people who were in the room, Israel goes on a small operation in the Gaza Strip to stop these attacks. This includes destroying places where these rockets are stored (in densely populated neighborhoods, schools, hospitals, etc.). As soon as Israel kills one innocent person, the international media (that is, nearly all non-Israeli media) emphasizes this death, while ignoring the rockets and the deaths which were the proximate cause of the operation, as well as the presence of rockets at the attacked location. Yes, this is very realistic. Animal lover |666| 07:34, 16 February 2023 (UTC)
It is the belief of myself and several other editors that the generalised issue of subversion of policy, and an inability of our resolution mechanisms to handle that subversion, applies to other content areas
- I can only agree with that. Although, as Volunteer Marek says above, I suspect that we may have different views on who the 'subversives' are... Tewdar 09:46, 17 February 2023 (UTC)
- I've seen many comments which use whataboutism to minimise the Holocaust by using unequal comparisons; we have a whole article pushing the idea. Unfortunately Wikipedia's articles are written by those interested in writing them, and that isn't always a great thing. -- LCU ActivelyDisinterested ∆transmissions∆ °co-ords° 22:31, 17 February 2023 (UTC)
- I'll say something that really is so obvious that it shouldn't need saying. Personally, I don't want to see bogus arguments such as whataboutism used to POV our content about the very serious topic of the Holocaust, and I'm pretty sure most of us would agree about that. I think most of us want our content to be accurate and balanced, and are particularly sensitive to wanting to avoid any influence of hateful POVs that would minimize things that should not be minimized. But the original issue raised in this discussion thread was about whether outside publications might have undesirable influences on our normal process of consensus. And I think we have to be very careful about not assigning, well, WP:UNDUE weight to single, polemical, publications – while also keeping an open mind and an active interest in input from the outside world about Wikipedia's shortcomings. I would argue that, for purposes of dispute resolution, the focus should still be on evaluating whether editors have edited in disruptive or policy-violating ways, as opposed to overreacting to a single flashy publication. --Tryptofish (talk) 22:52, 17 February 2023 (UTC)
- @ActivelyDisinterested @Tryptofish Just a passing nitpicking comment. Holocaust should be capitalized in this context. See holocaust (sacrifice) vs The Holocaust. Piotr Konieczny aka Prokonsul Piotrus| reply here 01:40, 18 February 2023 (UTC)
- You are absolutely right, and I corrected it. After all, this is Wikipedia, and we really should get capitalization right, especially for something this important. --Tryptofish (talk) 17:46, 18 February 2023 (UTC)
(Restricting myself to history & politics.) Academics and other professional experts all have their own values and views, and spend a good fraction of their time defending their views from experts with different views. Some of them are less strident than others, but there is no such thing as a neutral expert. I am horrified by proposals to place decisions, such as the reliability of sources, in the hands of such experts. Wikipedia is almost unique in that we strive to produce a neutral summary of all notable views, which is an entirely different aim. Our tool, imperfect as it is, is consensus among editors with different viewpoints. Putting decisions into the hands of experts would be the death of NPOV. Zerotalk 04:36, 21 February 2023 (UTC)
- Indeed, particularly in the context of WP:UNDUE/WP:BALANCE and the concept of echo chambers, which certainly exist in academia. Piotr Konieczny aka Prokonsul Piotrus| reply here 04:41, 21 February 2023 (UTC)
- Decisions? No. Recommendations? Yes. Levivich (talk) 04:43, 21 February 2023 (UTC)
- @Levivich Anyone can make recommendations; talk pages and the like are open to "anyone who can edit". Piotr Konieczny aka Prokonsul Piotrus| reply here 04:45, 21 February 2023 (UTC)
- Anyone can make recommendations, but not everyone's recommendations are equally helpful. Not everyone is qualified to, for example, review Wikipedia coverage of the Holocaust for accuracy. Given how much time it would take, the people who are qualified probably won't do it for free. So the WMF should pay for experts to review articles and make recommendations. Levivich (talk) 04:59, 21 February 2023 (UTC)
- I actually like the idea that "WMF should pay for experts to review articles and make recommendations". This would be a good use of the WMF's budget. It should be something that affects the entire Wikipedia, however, and not just a random topic area. Piotr Konieczny aka Prokonsul Piotrus| reply here 05:03, 21 February 2023 (UTC)
- Editors whose POV matches the "recommendation" of an external "expert" will claim a superior right to apply that POV to articles. The "recommendation" will become a trump card in content disputes that is almost impossible to counter. So I don't see any distinction in practice between recommendations and directions. Zerotalk 06:24, 21 February 2023 (UTC)
- "This reliable, mainstream expert, who is a reliable, mainstream expert because they themselves and their reliable, mainstream collegues describe them as such, agrees with my POV. Hence, I represent a reliable, mainstream POV and those who disagree with me are fringe POV-pushers who need to be topic banned." Piotr Konieczny aka Prokonsul Piotrus| reply here 06:27, 21 February 2023 (UTC)
- Sounds like "articles should not give minority views or aspects as much of or as detailed a description as more widely held views or widely supported aspects". If a panel of three experts, from Yad Vashem, the Wiener Holocaust Library, and the United States Holocaust Memorial Museum, say X is the mainstream view and Y is fringe, and give us a list of RSes on the point, that would be helpful. Levivich (talk) 06:40, 21 February 2023 (UTC)
- How do we choose the experts, or the institutions? Why not the Jewish Historical Institute, the Auschwitz-Birkenau State Museum or the POLIN Museum of the History of Polish Jews, for example? Piotr Konieczny aka Prokonsul Piotrus| reply here 07:08, 21 February 2023 (UTC)
- We could have a deep bench. Levivich (talk) 07:19, 21 February 2023 (UTC)
RFC proposing a new notability guideline, WP:Notability (politics)
Interested editors may comment at the RfC held on that page. BilledMammal (talk) 08:43, 18 February 2023 (UTC)
RfC about turning the snowball clause into a policy/guideline
Please see: Wikipedia talk:Snowball clause#RfC about turning the snowball clause into a policy/guideline --- Tbf69 P • T 22:38, 21 February 2023 (UTC)
Preventing poisoning of Wikipedia bimonthly dumps
A recent article on 'poisoning' web-scale training datasets of AI language models describes, among other things, an injection of malicious edits into Wikipedia pages immediately before they are snapshotted for addition into bimonthly dumps, which are often used as training data for language models. The attack exploits a vulnerability in Wikipedia's snapshot processing: the ordering in which articles are snapshotted is virtually sequential by article id, so the time at which an article is snapshotted is extremely predictable (to within minutes).
The authors have (allegedly) notified WMF regarding the details of their attack, and I think it would be helpful to publish their email. To the best of my knowledge, we currently have no detection in place for observing patterns in malicious edits that may correspond to such a frontrunning attack, which may be useful in case one has occurred or will occur in the future (not unlikely).
The article authors proposed two methods of mitigating such an attack:
- Randomizing the order in which articles are snapshotted, such that attackers must keep an article 'poisoned' with malicious edits for the entirety of the snapshot period (about a day) instead of precisely knowing when to inject poison edits (a rough sketch of this is included below).
- Creating an initial snapshot of an article, selectively applying edits from "trusted moderators" (extended-confirmed users?) during a hold period, and finalizing the snapshot after some number of hours. More difficult to implement.
Additional mitigations may include:
- Continuous (as opposed to bimonthly) snapshotting of articles in addition to randomization such that attackers do not even have as little as a day to sustain poison edits — but degrades the quality of the data dumps, as they will contain articles that are weeks old.
- "Rolling blackouts" on the ability to edit pages around snapshot events, restricting edits to confirmed users during those periods such that it becomes nigh impossible to actually inject malicious edits — but degrades the editing experience for users.
I may not have covered all viable/useful mitigation measures here. [osunpokeh/talk/contributions] 18:13, 22 February 2023 (UTC)
- I read the relevant section of the article, and I agree that improvements are needed for the snapshot system. Organizing backend information incrementally is bad practice for this exact reason. With that said, anyone who uses information from Wikipedia for something of consequence is doing so at their own risk. Thebiguglyalien (talk) 03:24, 23 February 2023 (UTC)
Request for comment (RfC) about WP:USPLACE
Please see: Wikipedia talk:Naming conventions (geographic names)#Request for comment (RfC) about WP:USPLACE --- Tbf69 P • T 19:46, 22 February 2023 (UTC)
Linking to Wikipedia alternatives/forks for other languages
See World energy supply and consumption (Diff ~1141093198).
I'm pretty sure linking to Wikisage in any article outside of Deletionpedia isn't desired, but I don't actually know which policy applies in this particular case. — Alexis Jazz (talk or ping me) 12:00, 23 February 2023 (UTC)
- For this edit specifically, WP:NOTSEEALSO prohibits external links. In the more general case, WP:ELNO#12 recommends against wikis in general, and mirrors or forks of Wikipedia in particular. And of course WP:UGC is relevant when edits use forks of Wikipedia as a reference. (I see the edit has already been reverted) Caeciliusinhorto (talk) 19:50, 23 February 2023 (UTC)
- World energy supply and consumption has similar Wikipedia articles in 19 languages, but Nederlands is missing. However, Wikisage contains http://nl.wikisage.org/wiki/Wereldenergievoorziening, which is in Nederlands and very similar. Wikisage is not a mirror or fork of Wikipedia. Where can Wereldenergievoorziening be mentioned in World energy supply and consumption? Rwbest (talk) 09:42, 24 February 2023 (UTC)
- Rwbest, if you are the sole author of the Wikisage article (at a glance the contributions by others seem to be minor) you could simply create the article on Dutch Wikipedia. That article could be linked through Wikidata so it'll appear in the list of translations. — Alexis Jazz (talk or ping me) 11:44, 24 February 2023 (UTC)
- The wikisage article shouldn't be linked anywhere on World energy supply and consumption. That's exactly the point of the guidelines I cited. It's not an article on English Wikipedia, so it shouldn't be in the §See also section. WikiSage appears to be user-generated, so it is not a reliable source and therefore should not be used as a citation. There are various parts of the external link guideline which suggest that it shouldn't be included as an external link – as well as WP:ELNO#12 which I previously mentioned, WP:ELNO#1 and WP:NONENGEL both apply.
- Or, to put it another way: in what way is it useful for readers of English Wikipedia to have a link to an article in a foreign-language wiki which doesn't even have its own Wikipedia article? If nl.wikipedia.org is missing an article on the topic, and it fits with their inclusion criteria, someone can and should write that article. But for en.wikipedia readers I don't see what benefit there is to linking to the Wikisage article. Caeciliusinhorto-public (talk) 12:53, 24 February 2023 (UTC)
- Alexis Jazz, I have created the article on Dutch Wikipedia, but it was removed, not fitting within their inclusion criteria.
- Caeciliusinhorto-public, the Wikisage article is as reliable as the Wikipedia article, as the text is very similar and it refers to nearly the same sources. For en.wikipedia readers the benefit to linking to the Wikisage article is the same as linking to foreign-language Wikipedia articles.
- Your objections seem not to the point and not fair. Rwbest (talk) 16:25, 24 February 2023 (UTC)
- If you don't think the guidelines are fair, you can always propose changing them – though they are all well-established and I doubt you'll get much traction. If you think I am misinterpreting the guidelines, do please explain how. But while the guidelines are what they are, unless I am grossly misreading them, external links to WikiSage are clearly at minimum strongly discouraged. Caeciliusinhorto (talk) 17:45, 24 February 2023 (UTC)
- (And while I'm at it, "as reliable as the Wikipedia article" is pretty damning as an argument for inclusion; Wikipedia is very clear both in our reliable sources guideline and the supplementary essay "Wikipedia is not a reliable source" that Wikipedia is not a reliable source.) Caeciliusinhorto (talk) 17:48, 24 February 2023 (UTC)
- The point is that we do not allow user-generated/edited sites such as Wikisage to be used as sources, and, generally, as external links. You say,
the Wikisage article is as reliable as the Wikipedia article
, but policy says that Wikipedia is not a reliable source, and neither is Wikisage. If the article you created on the Nederlands Wikipedia was deleted, then you need to figure out how to write an article that meets the requirements of the Nederlands Wikipedia, instead of asking us to waive policy to allow you to link to Wikisage. Donald Albury 17:51, 24 February 2023 (UTC)
- Writing anything about energy supply was made impossible for me on nl.wikipedia, and it's still impossible. But I can write on en.wikipedia and nl.wikisage.
- Now I observe that the en.wikipedia reader searching for a Nederlands version of World energy supply sees that it is missing. That is evidently not true in view of Wereldenergievoorziening, so I simply asked where to mention this article. And I get a not-to-the-point, intimidating response. I quit the village pump. Rwbest (talk) 09:03, 25 February 2023 (UTC)
- For reference, see the section for "Wereldenergievoorziening" at w:nl:Wikipedia:Te beoordelen pagina's/Toegevoegd 20151002. Rwbest, it might be possible to turn the article into a Dutch Wikibook and have a soft redirect on nl.wikipedia to make it show in the language section. But if you wish to pursue that you should consult with both communities beforehand to make sure it won't be deleted again. — Alexis Jazz (talk or ping me) 10:31, 27 February 2023 (UTC)
- If you don't think the guidelines are fair, you can always propose changing them – though they are all well-established and I doubt you'll get much traction. If you think I am misinterpreting the guidelines, do please explain how. But while the guidelines are what they are, unless I am grossly misreading them, external links to WikiSage are clearly at minimum strongly discouraged. Caeciliusinhorto (talk) 17:45, 24 February 2023 (UTC)
Make Wikipedia:Requested moves/Closing instructions a guideline
Should WP:RMCI be formally elevated to the official status of a WP:GUIDELINE? 23:15, 22 February 2023 (UTC)
Initial statement, collapsed 18:05, 26 February 2023 (UTC) to comply with WP:RFCNEUTRAL
The following discussion has been closed. Please do not modify it.
Greetings,
There has existed for eighteen years a set of instructions for how to formally close a requested move. It was initially titled as Wikipedia:Moving guidelines for administrators, later changed to Wikipedia:Requested moves/Closing instructions (hereafter WP:RMCI). Over the years, it has grown from a simple set of how-to steps to a comprehensive list of guidelines for determining consensus (which began as early as 2006 and has continued to expand through the years). It also has included instructions on who should close requests--first in 2009 when the authority to move pages was explicitly granted to non-administrators, then later in 2019 (following lots of discussion) we introduced rules on how editors who are involved need to not close requests.
The instructional page wasn't classified as anything (guideline, policy, or essay) through 2021, even though it had been originally written as a guideline. Following a post on the talk page that saw no response, it was listed as an explanatory essay in 2021.
Recently, at least one editor has asserted that, as WP:RMCI is "only" an essay, its procedures do not need to be followed; therefore, as one concrete example, editors may feel free to close move requests that they are fully involved in, so long as they think it's the right thing to do. WP:RMCI has been called "purely advisory", "not vetted", etc., leading to questions regarding its authority--this in spite of the fact that it was written as a guideline and has guided literally thousands of move closures. This has unquestionably led to no small amount of chaos at Wikipedia:Move review following an editor who closed the same move request twice in a row (which is against the closing instructions), both with the same result, and who claimed that the closing instructions that forbid such a close by an involved user were merely "advice".
Should WP:RMCI be formally elevated to the official status of a WP:GUIDELINE? Red Slash 23:15, 22 February 2023 (UTC)
Survey (RM closing instructions)
- Support as proposer. The idea that WP:RMCI is not vetted by the community is laughably inaccurate; yes, it has been. It's guided requested move closures for nearly two decades, its talk page (merged with the overall WP:RM talk page) spans thirty-four archives, and drawn-out discussions are common (I arbitrarily picked Wikipedia_talk:Requested_moves/Archive_25 and found... a LOT of discussion on the precise wording of WP:RMCI). It is a fully vetted part of Wikipedia. It's not purely advisory--it's literally the only instructions we have on closing move requests. It's not an essay, it has never been an essay, it bears none of the hallmarks of an essay, and its currently listed status as an essay is a mistake. It's not an inconsequential mistake, either; people feel like they can ignore it freely, and that's leading to chaos--you have a 50/50 shot of the person who closes your move request being someone who takes our well-vetted closing procedures (up to and including don't close a move request twice) seriously, or who views them as merely "good advice" that can be freely ignored. Red Slash 23:15, 22 February 2023 (UTC)
- Support per nom. * Pppery * it has begun... 23:16, 22 February 2023 (UTC)
- Support per nom. The principles and concepts are tried and tested. -Kj cheetham (talk) 23:30, 22 February 2023 (UTC)
- Support per nom. – robertsky (talk) 00:45, 23 February 2023 (UTC)
- Support — if it looks like a guideline, quacks like a guideline, and is functionally considered a guideline, it should be formally treated as such. WhinyTheYounger (WtY)(talk, contribs) 00:53, 23 February 2023 (UTC)
- Support. Established and widely followed de facto guideline. {{ping|ClydeFranklin}} (t/c) 01:14, 23 February 2023 (UTC)
- Support to stop wikilawyering about whether something is "just an essay" or not. Best practices should be followed, regardless of where they are documented. But for the sake of clarity, let's make it official that this has the support of the community. HouseBlastertalk 01:33, 23 February 2023 (UTC)
- Support per nom, I see no reason to oppose. Aaron Liu (talk) 02:41, 23 February 2023 (UTC)
- Support per nom. Tim O'Doherty (talk) 07:31, 23 February 2023 (UTC)
- Oppose. Most of RMCI consists of basic instructions. These exist all over Wikipedia, but none of them should be turned into guidelines because there is no benefit from doing so, and because doing so causes WP:CREEP issues. For example, there is no benefit in turning Wikipedia:Requested moves/Closing instructions#Bot considerations into a guideline.
- Other parts of RMCI may not have a consensus if considered individually; for example, the instruction to explicitly declare non-admin closures as such. This requirement does not exist for other discussions, even more impactful ones like RfCs, and discussions on the instruction have shown significant opposition to it on the grounds of WP:NOBIGDEAL.
- Finally, this change won't address the specific issue discussed - editors closing
the same move request twice in a row
. Closing a discussion is an administrative role, and the current wording of WP:INVOLVED states that this does not make an editor involved: "One important caveat is that an administrator who has interacted with an editor or topic area purely in an administrative role is not involved and is not prevented from acting in an administrative capacity in relation to that editor or topic area."
Changing this essay to a guideline won't make the editor involved, because when a policy and a guideline conflict, editors should assume that the policy takes priority.
- I also don't believe it makes sense to make closers involved for RMs but not other discussions - if this is an issue then it is an issue in all areas, not just requested moves. BilledMammal (talk) 10:19, 23 February 2023 (UTC)
- But WP:INVOLVED is dealing with administrators doing things in an administrative capacity (which does not include requested moves), and WP:RMCI is not. Closing move requests is neither restricted to administrators nor is it an administrative task. There's no overlap at all. See below for my more detailed take on why and how this is different. Red Slash 17:19, 23 February 2023 (UTC)
- Both of us cannot close this discussion because WP:INVOLVED forbids us from doing so, despite neither of us being admins nor the close requiring admin tools. INVOLVED applies to all editors who are acting in an admin capacity, and that includes closing discussions - if it didn't, either of us could close this discussion now. BilledMammal (talk) 23:19, 23 February 2023 (UTC)
- That's an interesting perspective I hadn't considered. I think, perhaps, an explicit (though brief) delineation of why WP:RMCI is more expansive than INVOLVED would be appropriate. Thank you. Red Slash 23:43, 23 February 2023 (UTC)
- I don't believe that would solve the issue; the explanation won't remove the conflict, and it won't change the fact that when a policy and guideline conflict editors should assume the policy takes precedence. BilledMammal (talk) 00:00, 24 February 2023 (UTC)
- @BilledMammal: Is closing an RM an "admin capacity" or is it a "non-admin capacity"? - is there anywhere in which "admin capacity" is defined? - I had always assumed it to mean activities that only an admin is capable of, so, for instance, if an admin is closing an RM they are simply performing a non-admin activity as an admin, just as any non-admin closure is equally a "non-admin" activity. Unless we are using two different registers for 'admin' here on Wikipedia - one to mean "admin", as in the things formally restricted to someone appointed as an administrator, and then 'admin' diminutive form to refer to any activity that can broadly be construed as generally 'administrative' in nature even if it is not performed by an administrator. Iskandar323 (talk) 10:56, 27 February 2023 (UTC)
- Formal closures are an "admin capacity" or, to use the wording from WP:INVOLVED, "acting as an administrator". From a practical point of view any other interpretation is problematic as it would mean there is no restriction on involved closures. However, it might be worth clarifying that section of WP:ADMIN as I have had debates where editors have claimed that it is appropriate for them to formally close a discussion they were involved in because they were not an administrator. BilledMammal (talk) 11:13, 27 February 2023 (UTC)
- It is an essay also, but WP:NACINV says about non-admin closures:
Closing editors must abide by the standard of being uninvolved as described at Wikipedia:Administrators#Involved admins.
Vpab15 (talk) 11:14, 27 February 2023 (UTC)
- Support per nom. · · · Peter Southwood (talk): 17:13, 23 February 2023 (UTC)
- Support This is a high stakes fundamental process. When any process is as established, accepted, and in-use as this one, then I am in favor of labeling it as a guideline. The guideline label carries weight both for users and to legitimize the activities of the Wikimedia editorial community and its governance process. Bluerasberry (talk) 17:25, 23 February 2023 (UTC)
- I'd rather have this become an information page. Also, this sentence: "NACs are not discouraged for requested moves, as long as the non-admin is highly experienced with RMs and our closing procedures" sounds pretty discouraging to me. How is someone supposed to become "highly experienced", if you're not allowed to do it? And is this actually different from what we hope for with admins? IMO thoughtful NACs should be strongly encouraged to handle RMs, and admins (who have no similar requirements about knowing how RM works?) should only be involved (in their admin role) if specific buttons are needed for technical moves. Page titles are basically content decisions, and admins get no special rights in content decisions. WhatamIdoing (talk) 00:22, 24 February 2023 (UTC)
- That could definitely be rewritten. Over the years, we've gone from prohibiting NACs, to discouraging them, to restricting them, to permitting them, and it might now be time to start explicitly encouraging them. I love it. I don't think that has much to do with the fate of this page as a guideline or not, but I love it. (BTW, having it be just an informational page, in my opinion, wouldn't solve the problem of people not taking a WP page seriously if it doesn't have a little checkmark next to it.) Red Slash 01:25, 24 February 2023 (UTC)
- I agree, if RMCI needs fixing, clarification and whatnot we can do that just as for any other page, should not affect the decision to make it a guideline imo. Selfstudier (talk) 06:27, 24 February 2023 (UTC)
- I don’t really understand why this should be an information page, IMO it’s not factual or technical information but the community accepted procedure. Aaron Liu (talk) 14:16, 26 February 2023 (UTC)
- Oppose for now. The issues are (a) inconsistency in the way we use the terms (or apply the templates for) guidelines, information pages, explanatory supplements, essays, etc., and (b) letting time-wasting by wikilawyers determine those applications. We should be more consistent. Why would this need to be a guideline, but WP:AFD, Wikipedia:Bureaucrats, WP:MFD, WP:RFA, WP:AC, WP:RFC, and WP:DRV are content to simply be process pages, even though they also have a lot of rules/guidance? Why is Wikipedia:Reviewing good articles a guideline, but there's no such guideline for WP:FAC or WP:DYK? Why is WP:PROD a guideline, but WP:AFD and WP:DRV aren't? For all the wikilawyering over essays, our page on WP:ESSAYS is an essay, and WP:PAG lumps the various types of essays together despite practically being treated very differently by the community. Maybe one step is to remove "information pages" from the "essay" header and combine them with "administration" (or "process") pages to produce something that assumes guideline-like adherence, or maybe such is the state we're in that we should actually slap guideline templates on all of these pages just to stop people from extending "just an essay" to "just a page that documents a process for which there's broad support"... — Rhododendrites talk \\ 02:14, 24 February 2023 (UTC)
- I appreciate your perspective. "we should actually slap guideline templates on all of these pages just to stop people from extending "just an essay" to "just a page that documents a process for which there's broad support"..." is actually probably the best solution, because otherwise, in practice, you get people who say "but it's just an essay!" In the meantime, though, I think the proposal is the best solution for the reasons I explained above. Red Slash 07:45, 24 February 2023 (UTC)
- Some of the pages you linked are discussion venues, not pages. AfD and DRV are discussion venues that have their process page at WP:DPR which is a guideline. Aaron Liu (talk) 14:21, 26 February 2023 (UTC)
- @Rhododendrites: Apologies, I had to slightly modify your vote as to correct for the loss of context when the support and oppose sections were merged. BilledMammal (talk) 18:19, 26 February 2023 (UTC)
- Support Seems like a good idea to give a bit of formality to existing practice. If this or that wording needs sprucing up, that can be done in the usual way and shouldn't affect the decision whether to make it a guideline imo. Selfstudier (talk) 06:32, 24 February 2023 (UTC)
- Oppose. I think there is one problematic sentence at WP:RMCI that needs to be fixed. It defines involved as:
You have ever closed such a move request.
That contradicts the wording at WP:INVOLVED and places an unreasonable burden on RM closers, who will have to remember all the previous RM discussions they have closed. Per discussion below, it is not even clear if "such a move request" only includes the same article, or any article in the same topic. I have no objection to the rest of the essay. Vpab15 (talk) 10:23, 24 February 2023 (UTC)
- Support this proposal, but clarity as discussed above needs to be added to the guideline page soon, and re-organizing non-policy pages via template as discussed below needs to happen also. GenQuest "scribble" 16:14, 24 February 2023 (UTC)
- Support, no-brainer.—S Marshall T/C 18:24, 26 February 2023 (UTC)
- Support per nom The Night Watch (talk) 02:41, 27 February 2023 (UTC)
- Support per nominator. Additionally, I support robertsky's proposal below. echidnaLives - talk - edits 02:55, 27 February 2023 (UTC)
- Oppose as written, at least until contentious points with "involvement" and declaring NAC closes are vetted by the community, since they contradict broader policies. Some parts of RMCI have obviously been written on someone's whim at one time or another, and do not even enjoy local WP:CONLEVEL among the closers' "community". The case that apparently triggered this whole RfC was the Palestinian exodus MR, and the "involvement" RMCI clause being used to overturn the close procedurally. I've closed hundreds of RMs and MRs, I'm fairly well-versed with RMCI, but that clause has slipped under my radar and I'm frankly baffled by its existence. No such user (talk) 08:37, 27 February 2023 (UTC)
- "do not even enjoy local WP:CONLEVEL among the closers' 'community'" - can you be specific as to what you're referring to? Wikipedia is a wiki, there's nothing in WP:RMCI which has "snuck through" or exists without the community's support. The INVOLVED part, in particular, was extensively debated, even as small a point as whether or not an involved editor can "relist" a discussion. Everything has been vetted. Red Slash 23:00, 27 February 2023 (UTC)
- Support: The 'explanatory essay' has already been discussed more than sufficiently to qualify as a guideline, which obviously would still not mean that it was as hard and fast and as ironclad as a policy, but would end the confusion whereby some editors seem to think it is optional - a confusion that needs clearing up. There will still be exceptional cases, admins may still need to weigh in, and there will still be need and scope for a human touch at Move Review, but there will not be cause for actual confusion as to what the guidelines are, and what the basic expectations of the process are. Iskandar323 (talk) 11:03, 27 February 2023 (UTC)
- Oppose mostly per No such user. I, as well, have serious problems that the NAC closure section is in clear contradiction to Wikipedia principles. Also, Rhododendrites makes some good points above. --Jayron32 13:34, 27 February 2023 (UTC)
- Oppose for now. I'm not opposed to a guideline about moving pages but RMCI as currently written is not it. I don't agree with its instructions, such as the language about involved, as pointed out by others, and the page is too long as currently written. I think there is maybe an opportunity to either rewrite RMCI into a guideline, or rewrite it and promote it to an info page, or maybe split it into a guideline page and an instructions page. There is some possibility for something here, but I oppose putting the guideline template on RMCI as currently written. Levivich (talk) 14:20, 27 February 2023 (UTC)
- Reads like a "support with some copyediting" to me.—S Marshall T/C 14:26, 27 February 2023 (UTC)
- Oppose currently. I agree that the provision "You have ever closed such a move request" is contrary to WP:INVOLVED, which explains that administrative actions do not constitute involvement. Sandstein 16:24, 27 February 2023 (UTC)
- Temporary oppose. I think WP:RMCI should eventually become a guideline, but I don't think we're at that point quite yet. As other participants have stated, the "You have ever closed such a move request" provision should be removed from WP:RMCI before it rises to guideline status. Additionally, WP:NOTCURRENTTITLE has been added to the page very recently, and – while I agree with its inclusion myself – I'd like to see it receive more affirmative support from the community before letting it rise to guideline status. ModernDayTrilobite (talk • contribs) 17:15, 27 February 2023 (UTC)
- Support per nom --- Tbf69 P • T 17:22, 27 February 2023 (UTC)
- Oppose as needing more work, per points raised above (e.g. "You have ever closed such a move request"). Also, the NAC instructions differ substantially from our other procedures: to bring them in line with the spirit of NAC, I think admins and page movers should be allowed to close any RM, while other users should only close uncontroversial discussions. -- King of ♥ ♦ ♣ ♠ 23:03, 27 February 2023 (UTC)
Discussion (RM closing instructions)
- Comment. The objection to WP:RMCI was that it defines involved as
You have ever closed such a move request
, whereas the current policy at WP:INVOLVED says: "an administrator who has interacted with an editor or topic area purely in an administrative role, or whose prior involvements are minor or obvious edits that do not show bias, is not involved and is not prevented from acting in an administrative capacity in relation to that editor or topic area". It does seem they contradict each other. Should the wording at WP:INVOLVED be changed if WP:RMCI becomes a guideline? Vpab15 (talk) 23:48, 22 February 2023 (UTC)
- INVOLVED is policy, so I think fixing this is better resolved the other way around. Updating RMCI so that it more closely matches INVOLVED seems like the solution, as ultimately that set of instructions is based off of the policy and not the other way around. Sideswipe9th (talk) 00:30, 23 February 2023 (UTC)
- So, they're definitely different issues, and the RMCI "involved" stuff is different on purpose and for a reason. The TL;DR is that WP:INVOLVED prohibits involvement with the same user (or topic area), while WP:RMCI just prohibits it from having a certain pre-expressed position on a topic. Great question though.
- Why WP:INVOLVED is as loose as it is:
- Administrators interact with a LOT of users. If we prohibited any administrator from ever interacting with the same user twice even in an administrative sense, we'd have a complete clusterbomb of suckiness on our hands.
- Oftentimes, admin action is urgent; we have way too few administrators and way too many things to be getting done, and sometimes if you waited for someone who has never been involved, the issue would take too long to get resolved (say, Admin Joe finds a rogue user who's wildly vandalizing tons of articles, but Admin Joe has already sanctioned the guy once before; it would hurt the encyclopedia to say that Admin Joe needs to wait to find someone else to ban the dude).
- Administrators are very well-vetted in our rules and policies. They shouldn't need someone to hold their hand and tell them when they're too involved to be impartial. It's the same reason Supreme Court Justices in the United States (at least) don't need to be told to recuse themselves.
- On the other hand, move closers:
- Most move closers do not interact with most articles. I've been an editor for a looooooooooooooooooooooong time and have a loooooooooooooooooooooot of edits and have been involved in a loooooooooooooooooot of move requests, and I just looked through WP:RM and I could not find a single article listed there which I have ever interacted with. I could close literally any of those requested moves. I'd bet you that, of the fifty or so editors who commonly (once or more per month) close requested moves, probably forty-five to forty-eight of us could serve as impartial movers on any given requested move.
- There is zero urgency with requested moves. If Page Mover Tom sees a request on Talk:Foo that he could move, but, shoot, he's been involved in similar discussions at Talk:Foobar... well, it's not going to cause any problem for that move request just to sit tight for another few days until Page Mover Sally happens across it. Requested Moves are important, certainly... but definitely not urgent.
- The people who close move requests are almost never administrators. There's about four admins who regularly close move requests and about ten others who do it once in a blue moon, and I'd ballpark-estimate that 90% of moves closed are closed by non-admins. I highly respect my fellow non-admin page movers, of course, but let's be real: as a group, we obviously aren't as vetted by the community. In fact, this is why WP:RMCI's "involved" criteria is as explicit as it is. I pushed hard in the discussion for it to be painfully obvious where the bright lines were, because as a group, we needed those bright lines.
- So, no, I would absolutely not suggest following WP:INVOLVED's wording or vice-versa. Different rules for different reasons, just like how categories have different rules than articles. Red Slash 05:04, 23 February 2023 (UTC)
- Right now there is a move review about an article related to the Arab–Israeli conflict and another about a New Zealand term. Both areas have RM discussions relatively regularly. If we are saying that a closer can only close one discussion in either area, that goes against the letter and the spirit of WP:INVOLVED. I don't think that would be feasible, we would run out of RM closers in no time. Vpab15 (talk) 11:28, 23 February 2023 (UTC)
- With love, no, I'm very confident there are plenty of well-informed RM closers who haven't committed themselves to either of those issues yet. I could list several of them who aren't involved at all. There's not THAT many of us, but there's not too few, either.
- Again, these are two separate issues. WP:INVOLVED deals with administrators performing administrative actions. WP:RMCI's "involved" section deals with neither. Red Slash 17:22, 23 February 2023 (UTC)
- Are you saying the same closer shouldn't close two RM discussions that belong to the same topic? For example Slava Ukraini and Odesa, both related to Ukraine? Because that's exactly what you did. Applying that rule would disqualify regular RM closers from closing most discussions. As a closer I would have to go through the hundreds of discussions I have closed to make sure I don't meet the new "involved" criteria. That is a huge burden and I can't see that working in practice. Vpab15 (talk) 18:20, 23 February 2023 (UTC)
- It already does work in practice! (FYI, these criteria are several years old and have worked for us for several years; they're not new.)
- The restrictions aren't "you've ever edited the article or anything like it". The restrictions are: "You have ever commented on any talk page in such a way as to make clear your position on the move request" or "Your editing on the page in question or about the page in question makes clear your position on the move request".
- For mine, the Slava Ukraini request hinged on WP:UE and WP:COMMONNAME about whether the slogan should remain translated into English or not; the way in which I closed it does not in any way implicate how I would later close the Odesa request, which hinged on whether the one-S version or the two-S version were more common in English when referring to the Ukrainian city. The proof is in the pudding; I closed the Odesa request back in July, just three months after the Slava Ukraini one, and no editor even suggested that I had been too "involved" to close. (I would've painlessly self-reverted if someone had brought a legitimate claim of violating those rules; again, there is no shortage of people who could've done a great job closing Odesa.) On the other hand, if I had been the one who closed the Kyiv/Kiev one, or if I had !voted on it, maybe I would've shied away from closing the Odesa one; if there's another Ukrainian city with a Russian name that's proposed to be moved to the Ukrainian name, I will avoid closing that one. I consider this common sense, which is why the discussion ultimately ended up where it did and the page reads how it reads.
- In short (too late!), the list of requirements on RMCI with regard to involvement was heavily discussed and meticulously crafted to avoid the sort of wacky prohibitions that you are correctly wary of. No worries, homie; it's not as scary as you think! Red Slash 23:04, 23 February 2023 (UTC)
- Even on a per article level, are you sure that this is current practice? When looking at the CoI part, a closer is considered involved if
You have ever closed such a move request
. By a plain reading this means that if:
- Some user proposes moving Foo to Bar
- Bunch of editors support and oppose
- 7 days pass, I make a determination of the consensus based on what editors have said. Article is not moved.
- A significant amount of time passes, e.g. 6 months to several years, and some other user proposes moving Foo to Baz
- 7 days pass, I cannot make a determination of the consensus based on what the editors have said, because I closed another RM on the same page at some point in the past
- When we have 6.6 million articles, but only 130,000 active editors, at some point we will run out of editors who can close move requests on articles that have had multiple move requests. Sideswipe9th (talk) 00:33, 24 February 2023 (UTC)
- That is correct, that is current practice, you could not make that determination. We have 6.6 million articles, but I assure you, we don't have that many requested moves. No worries, we've never even begun to come close to a situation where we've run out of possible closers. Red Slash 01:37, 24 February 2023 (UTC)
- Why is RMCI so much stricter in this regard when compared to how we handle INVOLVED at the admin level, or non-admin closures everywhere else on enwiki? The purpose of the closer of any discussion is to summarise the consensus that already exists, and doing so is usually not considered involvement by any other standard. What's special about move requests that make it so that we need this restriction? Sideswipe9th (talk) 02:11, 24 February 2023 (UTC)
- I explained it in much more detail above, but it boils down to A) administrative actions are far more urgent than RM moves, so we can afford to be choosier when it comes to involvement, and B) move request outcomes are much more difficult to overturn. Again, the part that you're concerned about--which I get, again, I understand your concern--was exhaustively discussed a few years ago when it was added to RMCI. Red Slash 07:49, 24 February 2023 (UTC)
- @Red Slash: I'm really confused here. Doesn't WP:INVOLVED only apply to admins anyway? So WP:INVOLVED does not really apply to non-admin editors making closures; in fact, the only guideline for that is WP:RMCI. It's unclear to me if RM closures even count as activities conducted in 'an administrative capacity', since this seems to be a contradiction in terms for any activities that are specifically allowed to be performed by non-admins. I sense muddled logic. Iskandar323 (talk) 11:12, 27 February 2023 (UTC)
- And then WP:NACINV just redirects back to WP:INVOLVED as the parallel standard being applied, but without implying that RM closure is administrative. Iskandar323 (talk) 11:15, 27 February 2023 (UTC)
- @Red Slash: I'm really confused here. Doesn't WP:INVOLVED only apply to admins anyway? So WP:INVOLVED does not really apply to non-admin editors making closures; in fact, the only guideline for that is WP:RMCI. It's unclear to me if RM closures even count as activities conducted in 'an administrative capacity', since this seems to be a contradiction in terms for any activities that are specifically allowed to be performed by non-admins. I sense muddled logic. Iskandar323 (talk) 11:12, 27 February 2023 (UTC)
- I explained it in much more detail above, but it boils down to A) administrative actions are far more urgent than RM moves, so we can afford to be choosier when it comes to involvement, and B) move request outcomes are much more difficult to overturn. Again, the part that you're concerned about--which I get, again, I understand your concern--was exhaustively discussed a few years ago when it was added to RMCI. Red Slash 07:49, 24 February 2023 (UTC)
- Why is RMCI so much stricter in this regard when compared to how we handle INVOLVED at the admin level, or non-admin closures everywhere else on enwiki? The purpose of the closer of any discussion is to summarise the consensus that already exists, and doing so is usually not considered involvement by any other standard. What's special about move requests that make it so that we need this restriction? Sideswipe9th (talk) 02:11, 24 February 2023 (UTC)
- That is correct, that is current practice, you could not make that determination. We have 6.6 million articles, but I assure you, we don't have that many requested moves No worries, we've never even begun to come close to a situation where we've run out of possible closers. Red Slash 01:37, 24 February 2023 (UTC)
- Even on a per article level, are you sure that this is current practice? When looking at the CoI part, a closer is considered involved if
- Are you saying the same closer shouldn't close two RM discussions that belong to the same topic? For example Slava Ukraini and Odesa, both related to Ukraine? Because that's exactly what you did. Applying that rule would disqualify regular RM closers from closing most discussions. As a closer I would have to go though the hundreds of discussions I have closed to make sure I don't meet the new "involved" criteria. That is a huge burden and I can't see that working in practice. Vpab15 (talk) 18:20, 23 February 2023 (UTC)
- Right now there is a move review about an article related to the Arab–Israeli conflict and another about a New Zealand term. Both areas have RM discussions relatively regularly. If we are saying that a closer can only close one discussion in either area, that goes against the letter and the spirit of WP:INVOLVED. I don't think that would be feasible, we would run out of RM closers in no time. Vpab15 (talk) 11:28, 23 February 2023 (UTC)
- INVOLVED is policy, so I think this is better resolved the other way around. Updating RMCI so that it more closely matches INVOLVED seems like the solution, as ultimately that set of instructions is based on the policy and not the other way around. Sideswipe9th (talk) 00:30, 23 February 2023 (UTC)
- Question. Should we include the bit about WP:Move review that's found at the top of WP:RM as a post-closure section? This will lay out (or affirm) the current dispute resolution path if someone wishes to contest the closure (before the interested party takes the case up to ANI). Duplicating the paragraph here for everyone's convenience:
Wikipedia:Move review can be used to contest the outcome of a move request as long as all steps are followed. If a discussion on the closer's talk page does not resolve an issue, then a move review will evaluate the close of the move discussion to determine whether or not the contested close was reasonable and consistent with the spirit and intent of common practice, policies, and guidelines.
– robertsky (talk) 00:52, 23 February 2023 (UTC)
- Not a bad idea. Red Slash 05:04, 23 February 2023 (UTC)
- Comment I don't believe the RfC statement or this notification meet the neutrality requirements. Red Slash, can you update them so as to not bias the RfC? BilledMammal (talk) 12:31, 23 February 2023 (UTC)
- Can you point to anything in particular? Red Slash 17:23, 23 February 2023 (UTC)
- Almost all of it; it is advocating for this change, and trying to present RMCI as more than an essay. I suggest you remove the second sentence from your notification, and reduce your statement down to the concise and neutral statement
Should WP:RMCI be formally elevated to the official status of a WP:GUIDELINE?
BilledMammal (talk) 23:22, 23 February 2023 (UTC)
- First paragraph:
There has existed for eighteen years a set of instructions for how to formally close a requested move. It was initially titled as Wikipedia:Moving guidelines for administrators, later changed to Wikipedia:Requested moves/Closing instructions (hereafter WP:RMCI). Over the years, it has grown from a simple set of how-to steps to a comprehensive list of guidelines for determining consensus (which began as early as 2006 and has continued to expand through the years). It also has included instructions on who should close requests--first in 2009 when the authority to move pages was explicitly granted to non-administrators, then later in 2019 (following lots of discussion) we introduced rules on how editors who are involved need to not close requests.
- is completely factual and non-biased. The instructions have existed for 18 years. It was titled that way initially. Etc.
- Second:
The instructional page wasn't classified as anything (guideline, policy, or essay) through 2021, even though it had been originally written as a guideline. Following a post on the talk page that saw no response, it was listed as an explanatory essay in 2021.
- Also completely factual.
- First part of third paragraph:
Recently, at least one editor has asserted that, as WP:RMCI is "only" an essay, its procedures do not need to be followed; therefore, as one concrete example, editors may feel free to close move requests that they are fully involved in, so long as they think it's the right thing to do.
- A quick look at WP:MRV will show you that at least one editor has asserted that.
WP:RMCI has been called "purely advisory", "not vetted", etc., leading to questions as regarding its authority--this in spite of the fact that it was written as a guideline and has guided literally thousands of move closures.
- The first part of this sentence is strictly factual. The second part is also true; are you perhaps upset at the fact that I said "in spite of"? I suppose that could be considered biased language, perhaps. How would you rewrite this without omitting any of the factual details contained in this sentence?
This has unquestionably led to no small amount of chaos at Wikipedia:Move review following an editor who closed the same move request twice in a row (which is against the closing instructions), both with the same result, and who claimed that the closing instructions that forbid such a close by an involved user were merely "advice".
- "No small amount of chaos" is certainly an opinion, but I don't see how it's biased. IDK , you go look at MRV and tell me if you see chaos or not. Note: me saying that there is unquestionable chaos is not telling people how they should !vote. It is saying why they should vote: because there is currently confusion. If there were no confusion, there would not need to be an RfC.
- Everything from "an editor who" onward is completely factual.
Should WP:RMCI be formally elevated to the official status of a WP:GUIDELINE?
- Gotta have the question we're !voting on listed in the RfC, don't we?
- All told, I can't see where you're seeing bias, or that I advocated for change in a specific direction in the RfC itself. All I did was provide factual context (though admittedly, perhaps that phrase "in spite of" is too loaded and pushes the reader in a certain direction) for why it is good for the RfC to be brought up. Trust me, this RfC being decisively rejected would have been much, much better than the status quo, where something that "looks, talks, and quacks" like a guideline is being treated by many editors as a guideline while others treat it as an essay. Red Slash 23:54, 23 February 2023 (UTC)
- Much of this is evidence for why it should be a guideline. It's factual, but that doesn't make it unbiased, because it doesn't include the evidence for why it shouldn't be a guideline. BilledMammal (talk) 23:57, 23 February 2023 (UTC)
- BilledMammal, I wouldn't worry about this. Sometimes an RFC needs an explanation, and sometimes a fair explanation is going to tip editors towards one side or the other. RFC questions, especially procedural ones, shouldn't necessarily try to give equal validity to all options. We can safely assume that Wikipedia editors are smart enough to figure it out. WhatamIdoing (talk) 00:14, 24 February 2023 (UTC)
- I don't believe an explanation is needed here; the question is one that Wikipedia editors are smart enough to understand without explanation, and while the justification is more extensive, it should be part of a !vote, not the statement. BilledMammal (talk) 00:19, 24 February 2023 (UTC)
- It might be unneeded, but I doubt that it's done any harm, even though you and I are the only editors who have voted against it so far. WhatamIdoing (talk) 00:25, 24 February 2023 (UTC)
- My advice for the future… when writing an RFC: At the top, just ask the question. Don’t add any explanation. If you think background or explanation is needed, add it to the comments section, or as a separate section entitled “Background” (or something) - By physically separating the question from the background of why you are asking it, you limit accusations of non-neutrality. Blueboar (talk) 02:44, 24 February 2023 (UTC)
- Yep, write anything else as part of a !vote (or a comment) after the RFC body so it doesn't clog up the RFC listing pages. As for neutrality, I don't think it is crucial for this particular discussion as this is not a contentious topic. Selfstudier (talk) 06:22, 24 February 2023 (UTC)
- I've written maybe six RfCs in my entire career, so this is something I'm definitely not super experienced in. Your advice--all y'all's advice--is well taken, thank you. Red Slash 07:50, 24 February 2023 (UTC)
Procedural close for the non-neutral RfC statement and lack of RFCBEFORE (including poor format choice with separate supp/opp sections, not previously discussed AFAICT), first choice. Second choice: oppose per Rhodo. I do not believe it's really been vetted by the community, MR and RMs are a walled garden, and RMCI is its Bible. It differs in some major respects from standard practice elsewhere (e.g. NACs, interpretation of involved), and I think that's more due to the walled garden nature of it than any kind of community vetting. We need fewer policies and guidelines, not more. Also, it's noticeable that the impetus for this is a recent kerfuffle and I don't think it's a good idea to promote RMCI to a guideline in an effort to "win" an argument over RM/MR procedure. Levivich (talk) 14:37, 25 February 2023 (UTC)
- To some extent, the "kerfuffle" is the RFCbefore. If there are problems with RMCI then that needs to be addressed someplace, like, er...here? Two of three/four editors opining in a move review that RMCI has problems are here opposing so any issues are getting addressed in proper place and not in a move review. Selfstudier (talk) 14:44, 26 February 2023 (UTC)
- A contentious RM followed by a contentious MR (aka, a kerfuffle) makes for a lousy RFCBEFORE and this very obviously non-neutral RfC statement demonstrates why. And contra to some surprising advice below, RFCNEUTRAL is for all RFCs, not just contentious topics. Also, this RfC doesn't comply with WP:PGCHANGE as it's not widely advertised, it's at the wrong pump (should be policy for a new PG), and not listed at WP:CENT. This ought to be shut down and restarted the right way: an RFCBEFORE at the PG page, followed by a neutral RfC at the policy pump listed on CENT. Otherwise, it's not really global consensus. Levivich (talk) 16:11, 26 February 2023 (UTC)
- Nah. PGCHANGE is for changes to policies and guidelines, this isn't that. Policy pump is for existing and proposed policies, isn't that either. No need to list this at WP:CENT afaics. The only procedural screw up was people not raising their disagreement with RMCI at move review talk instead of during a move review. I can do the same thing and just assert that WP:RFCNEUTRAL is only an information page and doesn't apply:) Selfstudier (talk) 16:24, 26 February 2023 (UTC)
- Oh come on man. WP:PROPOSAL is the policy for making new guidelines and it wasn't followed here at all. (And it has more requirements than PGCHANGE, and it explicitly talks about RFCBEFORE, and VPP, etc.). Please don't argue that a proposal to promote RMCI to a guideline doesn't have to follow the WP:PG policy. Similarly there can be no argument that this RfC question isn't RFCNEUTRAL. If you like the idea, that's fine, but the failure to follow our consensus procedure here is indisputable. Levivich (talk) 16:42, 26 February 2023 (UTC)
- The proposal, apart from some suggested wording tweaks, seems actionable to me. RMCI has been treated as a guideline in practice and the only issues that have been raised against it are the ones raised in an ongoing move review (by objectors to it). Selfstudier (talk) 16:59, 26 February 2023 (UTC)
- I've just double-checked, and WP:PROPOSAL, WP:VPP, and WP:VPR are all unanimous in saying that proposals for new PGs or to change PGs should be at the VPP and widely-advertised, so I've moved this from VPR to VPP and listed it at CENT. Levivich (talk) 17:58, 26 February 2023 (UTC)
- I've collapsed the initial statement and replaced it with a neutral one; if we are going to try and fix this RfC, rather than making a procedural close and opening a new one, then that issue also needs to be fixed - although I hope that the closer considers that prior to 18:05, 26 February 2023 (UTC) the RfC was led by a non-neutral statement that might have biased the result.
- I am also wondering if we should merge the support and oppose sections, per WP:NOTVOTE? BilledMammal (talk) 18:09, 26 February 2023 (UTC)
- I've struck my !vote and moved it down to discussion, since it's all based on procedure anyway and the procedure is getting fixed. Yes, I think we should combine the support/oppose into one survey section as well. Levivich (talk) 18:14, 26 February 2023 (UTC)
Some editors are apparently objecting to the wording rather than the principle. So how to deal with this? Do we fix it now? (subRFCs?) Do we send this back to where it was to start with (proposals)? Selfstudier (talk) 17:28, 27 February 2023 (UTC)