To run a bot on the English Wikipedia, you must first get it approved. Follow the instructions below to add a request. If you are not familiar with programming, consider asking someone else to run a bot for you.
If your task could be controversial (e.g. most bots making non-maintenance edits to articles and most bots posting messages on user talk pages), seek consensus for the task. Common places to start include WP:Village pump (proposals) and the talk pages of the relevant policies, guidelines, templates, and/or WikiProjects. Link to this discussion in your request for approval.
You will need to create an account for your bot if you haven't already done so. Click here when logged in to create the account, linking it to yours. (If you do not create the bot account while logged in, it is likely to be blocked as a possible sockpuppet or unauthorised bot until you verify ownership.)
Create a userpage for your bot, linking to your userpage (this is commonly done using the {{bot}} template) and describing its functions. You may also include an 'emergency shutoff button'.
II. Filing the application
easy-brfa.js can be used for quickly filing BRFAs. It checks for a bunch of filing mistakes automatically! It's recommended for experienced bot operators, but the script can be used by anyone.
Enter your bot's user name in the box below and click the button. If this is a request for an additional task, put a task number as well (e.g. BotName 2).
Complete the questions on the resulting page and save it.
Your request must now be added to the correct section of the main approvals page: Click here and add {{BRFA}} to the top of the list, directly below the comment line.
For an additional task request: use {{BRFA|bot name|task number|Open}}
III. During the approvals process
During the process, an approvals group member may approve a trial for your bot (typically after allowing time for community input), and AnomieBOT will move the request to this section.
Run the bot for the specified number of edits/time period, then add {{Bot trial complete}} to the request page. It helps if you also link to the bot's contributions, and comment on any errors that may have occurred.
AnomieBOT will move the request to the 'trial complete' section by moving the {{BRFA}} template that applies to your bot.
If you feel that your request is being overlooked (no BAG attention for ~1 week) you can add {{BAG assistance needed}} to the page. However, please do not use it after every comment!
At any time during the approvals process, you may withdraw your request by adding {{BotWithdrawn}} to your bot's approval page.
IV. After the approvals process
After the trial edits have been reviewed and enough time has passed for any more discussion, a BAG member will approve or deny the request appropriately.
For approved requests: The request will be listed here. If necessary, a bureaucrat will flag the bot within a couple of days and you can then run the task fully (it's best to wait for the flag, to avoid cluttering recent changes). If the bot already has a flag, or is to run without one, you may start the task when ready.
For denied/expired/withdrawn requests: The request will be listed at the bottom of the main BRFA page in the relevant section.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: Patrol redirects where the title and target match the form Foo[-‒–—―]Bar -> Foo[-‒–—―]Bar, meaning that the only difference is the use of a dash/hyphen/minus sign, etc.
Ok, thanks. While I don't see this particular task saving more than perhaps a minute or two of time per week for new page patrollers, it doesn't seem like it could do much harm either. ‑Scottywong| gossip _00:41, 14 August 2019 (UTC)[reply]
Approved for trial (10 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.@DannyS712:, please post a permalink to the contributions page when completed. As usual, take as much time as you need. --TheSandDoctorTalk03:09, 19 August 2019 (UTC)[reply]
@TheSandDoctor: Thanks. I've started - only 2 pages were currently in the queue, so I patrolled those: [1]. The pattern is if (comparePages( target.replace( /[-‒–—―]/g, '-'), title.replace( /[-‒–—―]/g, '-' ) ) ) return true;. Thanks, --DannyS712 (talk) 03:16, 19 August 2019 (UTC)[reply]
I came here from a link from AN just checking to see what was open at the moment and was surprised to find an NPP-related request I knew nothing about. I knew about the initial scope of the redirect patrolling and I think one extension beyond that. Have I missed a discussion with NPP about this? Have there been any redirect NPP-related bot approvals between the first one and this that I also might have missed discussion of? Thanks and best, Barkeep49 (talk) 22:07, 21 August 2019 (UTC)[reply]
So I did think I knew about all three of those past requests. I don't have any special rank or privilege within the NPP community, I just show up to discussions. I would think the better thing is that a note be placed at WT:NPR alerting anyone who might be interested in this. Best, Barkeep49 (talk) 22:30, 21 August 2019 (UTC)[reply]
@Barkeep49: sure, I'll leave a note there. Just to be on the safe side and make sure people see it though, since I know that there are lots of other conversations on that page, I'm pinging here the people who participated in the previous BRFAs - if you would like to be alerted in the future, please let me know, and I'll alert you in addition to posting at NPP. Pings: @Rosguill, Fastily, Uanfala, Sdkb, Jo-Jo Eumerus, Kudpung, and Winged Blades of Godric:DannyS712 (talk) 22:36, 21 August 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Edit period(s): One time run, probably with secondary run(s) when problems encountered by the bot have been fixed manually and the bot handles more edge cases.
Estimated number of pages affected: The template has 1868 transclusions. The bot will only be able to convert about half without human assistance; in a secondary run, where problems have been manually fixed and/or the bot can handle more edge cases, up to a few hundred more could be converted.
Function details: The bot will go through all transclusions of {{Aircraft specifications}}, replacing the template with {{Aircraft specs}} after reformatting the information to be compatible with the new template. If the bot encounters anything unexpected, it will skip the page and report the problem at User:PearBOT/Aircraft specs problems. After the original run, editors can fix problems listed there (usually unit problems or extra text in parameters that usually only contain a number and a unit), or I can make the bot handle more edge cases and perform a secondary run converting more templates.
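For illustration only, here is a minimal sketch of the skip-on-anything-unexpected conversion using mwparserfromhell. This is not PearBOT's code, and the parameter mapping shown is hypothetical; the real correspondence is defined by the two templates' documentation.

import mwparserfromhell

# Hypothetical old-name -> new-name mapping (illustrative only).
PARAM_MAP = {
    "crew": "crew",
    "length main": "length m",
    "max speed main": "max speed kmh",
}

def convert(wikitext):
    """Return converted wikitext, or None if the page should be skipped."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        if not tpl.name.matches("Aircraft specifications"):
            continue
        new = mwparserfromhell.nodes.Template("Aircraft specs")
        for param in tpl.params:
            name = str(param.name).strip()
            if name not in PARAM_MAP:
                # Unexpected parameter: skip the page and report it for manual review.
                return None
            new.add(PARAM_MAP[name], str(param.value).strip())
        code.replace(tpl, new)
    return str(code)

A page that returns None here would be logged to the problems subpage instead of being edited.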
Jo-Jo Eumerus There were many editors expressing concerns about difficulties with the conversion, but I believe my solution can satisfy all of these concerns by skipping a lot of pages and changes to the template (currently in an edit request). I've tagged everyone who participated in the TfD discussion at Wikipedia talk:WikiProject Aircraft#Template:Aircraft specs merger bot, giving them a chance to review test edits, and no one has objected to it in its current state. The biggest issue in the TfD seemed to be double rounding leading to less precise figures, which was a major concern with the old version of the template, but now, after I've modified {{aircraft specs}}, it's a much smaller issue. It now uses the parameter values if they're available, which makes all figures the same before and after the conversion. For values not in the pre-conversion templates there could still be double conversions, but since adding values in units (usually knots and nautical miles) not previously displayed is an unambiguous improvement, I believe this would be acceptable. If it's not acceptable, the bot will only be able to handle a few without some manual assistance, but even in this case the bot can still do most of the work. --Trialpears (talk) 20:14, 10 August 2019 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Please do not mark the edits as minor - I'd like as many eyes on this conversion as possible due to the previous issues and concerns. Primefac (talk) 00:06, 1 September 2019 (UTC) (please do not ping on reply)[reply]
Armbrust, I have continued working on it, but I have been encountering more and more edge cases, and issues have been cropping up. It may someday be appropriate to run automatically, but right now I feel like I have to mark this as Request withdrawn and continue working on this as a fully supervised script. --Trialpears (talk) 13:09, 1 November 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function overview: Add and maintain supported identifiers to citation templates (mostly {{cite journal}}), including related metadata such as access level but excluding the |url= parameter.
Automatic: A queue of edits is created automatically (manually triggered); a cursory manual review of its contents is then performed to exclude anomalies, and selected items are moved to a queue for the bot to perform automatically. Edits are then sampled for manual checks, and some manual fixes are performed by the operators in the few hours or days following a bot run, on the pages which ended up in Category:CS1 maintenance (typically fewer than one in a thousand).
Function details: Following the success of task OAbot 2, we're proposing to extend the functionality of the bot to all identifiers. The addition of arxiv and PMC identifiers (about 25k edits) has been a success: it has encountered few mistakes and the bot has been made more robust in response (for instance we are now stricter in matching publications).
After this is done, other identifiers will be handled depending on demand and volumes. The most consequential work will be to eventually add |doi-access=free to all relevant citations (an estimated 200k DOIs): this functionality was part of the original request (and not challenged by anybody) but later dropped when the bot became a user-triggered tool, as the number of required edits is incompatible with human editing.
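As a rough illustration (not OAbot's actual implementation), adding |doi-access=free could look something like the following, assuming a set of DOIs already verified as free to read (e.g. from an open-access aggregator such as Unpaywall) is supplied.

import mwparserfromhell

def add_doi_access(wikitext, free_dois):
    """free_dois: DOIs already verified as free to read (assumed input)."""
    code = mwparserfromhell.parse(wikitext)
    changed = False
    for tpl in code.filter_templates():
        if not tpl.name.matches("cite journal"):
            continue
        if not tpl.has("doi") or tpl.has("doi-access"):
            continue  # nothing to add, or the access level is already set
        doi = str(tpl.get("doi").value).strip()
        if doi in free_dois:
            tpl.add("doi-access", "free")
            changed = True
    return str(code), changed

The |url= parameter is deliberately never touched, matching the function overview above.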
Expected improvements in the near future, if this task is approved, include:
maintenance of existing identifiers, e.g. to remove or report on broken identifiers (e.g. CiteSeerX records which may have been taken down);
avoiding more publisher URLs even in manual mode, instead adding DOIs or DOI access data where relevant (to avoid creating more work for Citation bot, which removes redundant URLs);
giving the community full prior control over which identifiers are added by the bot, by adding a subpage to Wikipedia:OABOT where users would be able to blacklist individual URLs (and therefore identifiers) they consider undesirable for whatever reason, including suspected errors in open access repositories (mismatch between record and DOI, files mistakenly open for download, etc.), even if such cases are a minuscule minority.
Discussion
This might not be relevant yet, but I take that the bot won't add identifiers without some kind of procedure to reject unsuitable identifiers? This bot has had some copyright issues in the past. It also won't replace already existing URLs? Because that might be problematic under WP:SAYWHEREYOUGOTIT. Jo-Jo Eumerus (talk, contributions) 16:07, 25 July 2019 (UTC)[reply]
The current procedure to reject unwanted identifiers is to either blacklist the bot on the specific page with {{bots}} or comment out the identifier in the specific citation template. The proposed additional procedure is to let any user blacklist an identifier by means of linking it on a central subpage, so that it's no longer added to any other page: this will allow users to reject one, ten or a thousand identifiers with a single edit and have the community decide it by consensus.
This task proposes that no edits are made to the |url= parameter at all using the bot account. I'll note however that WP:SAYWHERE specifically states that «You do not have to specify how you obtained and read it. So long as you are confident that you read a true and accurate copy, it does not matter [...]». Nemo16:24, 25 July 2019 (UTC)[reply]
I have no objection to adding hdl identifiers. But I am currently seeing huge numbers of OABot edits on my watchlist, making it difficult to find any other changes and impossible to manually check them for accuracy, and would be interested in knowing whether there are any plans for throttling the bot to a more reasonable rate of updates. Also, if the "other identifiers" to be added are to be included in this BRFA, they need to be specified explicitly. For instance, I would be opposed to automatically adding citeseerx identifiers, for all the previously-discussed reasons, and wouldn't want this BRFA to be taken as sidestepping that discussion. —David Eppstein (talk) 17:57, 25 July 2019 (UTC)[reply]
On the first point, I agree we need a frank conversation on the scope of the task; I just suggest to avoid having the same conversation over and over for each new identifier. On the second, as far as I can see the bot has respected the typical rate limit of 12 edits per minute, but it would not be a problem to reduce the speed. Nemo20:06, 25 July 2019 (UTC)[reply]
This request, sic stantibus rebus, would not produce any addition of links to Zenodo, as there is no identifier for it. As for the existing identifier parameters, which evidently were added because the target websites are considered good resources rather than systematic copyright infringement rackets, the proposal is the blacklist of specific URLs above. The discussions you linked were often focused on hypothetical or apodictic statements, impossible to discuss constructively; if users instead can focus on explaining which URLs are bad for which reasons, a consensus will be easier to find. Nemo19:56, 30 July 2019 (UTC)[reply]
Back to the CiteSeerX issue, to rekindle the discussion: in my opinion it falls squarely under Wikipedia:Copyrights#Linking_to_copyrighted_works "It is currently acceptable to link to internet archives such as the Wayback Machine, which host unmodified archived copies of webpages taken at various points in time" for the cached PDFs, while the rest of the functions (citation graphs etc.) are uncontroversially helpful and unproblematic. Therefore the current policies support an automatic addition and we should only handle the rare exceptions where a link would be problematic: a blacklist is a possible technical solution, but we could consider other ideas. Nemo17:08, 24 August 2019 (UTC)[reply]
If this is purely about adding already-supported identifiers (sans CiteSeerX) and converting existing URLs to identifiers (e.g. |url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=... → |citeseerx=... is fine), then I see little that is objectionable. So let's see a trial at least, with an explicit list of identifiers covered, and we'll have a better idea what's in store. Approved for trial (10 edits per identifier). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Headbomb {t · c · p · b}17:59, 22 September 2019 (UTC)[reply]
Approved for extended trial. Please provide a link to the relevant contributions and/or diffs when the trial is complete., again 10 per identifier. Are you planning on only adding handle links with this one? Headbomb {t · c · p · b}13:08, 25 September 2019 (UTC)[reply]
Thanks. I don't have much to work on apart from Handle and CiteSeerX identifiers at the moment. If the bot is approved, I'll write the code to add doi-access=free, which would make up the bulk of the edits, and any other identifiers for which demand happened to arise in the future. For instance, biorxiv.org content does not seem to be in big demand right now, but that might change in the future (there's plenty!), in which case adding it will be trivial. Nemo19:49, 25 September 2019 (UTC)[reply]
Indeed in this run it was only editing one template per page, even when it identified more. I was just writing a patch to make sure it did everything in a single edit rather than multiple passes. Nemo17:25, 28 September 2019 (UTC)[reply]
Then Approved for extended trial. Please provide a link to the relevant contributions and/or diffs when the trial is complete., 25 edits, with the 'do everything it can at once' logic. Or however many you need to have at least 5 instances of multiple doi/hdl added/flagged in the same edit. Headbomb {t · c · p · b}18:09, 28 September 2019 (UTC)[reply]
Approved. After having a read of this, and given the lack of opposition after 12 days, as well as the intended edits looking as if they are being made correctly, I'm approving this.—CYBERPOWER(Chat)20:11, 11 October 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Withdrawn by operator.
Function overview: An extension of WP:Bots/Requests for approval/TheSandBot 3, this task moves categories and their associated subpages. This task is different enough to necessitate its own BRFA.
Function details: Reads category page names from User:Alex 21/sandbox2 into memory. Then the script repeats the following process on all listed categories (a rough sketch follows the list):
Moves category
Reads the category page itself to determine if any of its categories need switching (if so, they are done now)
Iterates through category contents (currently under previous name) and moves them to the current category
Looks for any links from the old category name. If they are found, corrects them to new name
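The sketch referred to above, assuming pywikibot; the edit summaries and the naive wikitext replacement are illustrative rather than TheSandBot's actual code, and backlink correction is omitted because, as discussed below, not every backlink should be changed.

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def rename_category(old_name, new_name):
    old_title = "Category:" + old_name
    new_title = "Category:" + new_name
    # 1. Move the category page itself, so members are never left in a red-linked category.
    pywikibot.Category(site, old_title).move(new_title, reason="Renaming per naming convention")
    # 2. Recategorise every member still carrying the old name.
    for page in pywikibot.Category(site, old_title).members():
        new_text = page.text.replace("[[" + old_title, "[[" + new_title)
        if new_text != page.text:
            page.text = new_text
            page.save("Updating category after rename")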
My apologies for the delay in my response, Primefac. I just had a moment to quickly respond so here it is. This was almost covered by the previous BRFA related to the convention change but determined it should be its own run. As such, I filed this and wrote the script to do it. (There shouldn’t be any issues with it.) I was not aware that there was a bot for this already as I do not frequent CfD, but also do not think that that should necessarily disqualify this task. Having more than one bot wouldn't be a conflict as they would never interact and could be a potential ’back-up’ of sorts. (This is also a one-off run). Though I definitely want to avoid a WP:OTHERSTUFF argument, there is one that could be made as we do have multiple bots for the same sort of tasks. Of course, I will respect any decision on this. —TheSandDoctorTalk20:58, 15 June 2019 (UTC)[reply]
I have gone through them all, JJMC89. A total of 12 would need manual cleanup (display title updates), which I am totally happy to do. Otherwise, I am not sure what "necessary cleanup" would be needed outside of what the script already does? --TheSandDoctorTalk04:51, 17 June 2019 (UTC)[reply]
In updating the description, I just realized that the order of the operations most likely needs to be reversed or references may be lost/null. Probably to the order of 2,3,1, would you agree? --TheSandDoctorTalk04:37, 19 June 2019 (UTC)[reply]
The problem with fixing backlinks automatically is that not all backlinks should be changed. From my example category, I would only update that one link. In terms of order: for CfD the bots 1) move the category, 2) move the category contents, including any subcats then a human 3) cleans up any backlinks. #1 then #2 makes sure the pages aren't categorized in a nonexistent category. — JJMC89 (T·C) 02:33, 20 June 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Bots in a trial period
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
The usual problem with having two or more bots do the same task is that if at some point in the future a modification becomes necessary, it's more work. And chasing up bot owners can be especially difficult, as so often happens, if they have retired (while still keeping the bot running) or are less than keen to implement changes. This shouldn't be a problem here as the task (am I right to conclude?) is not fully automated but supervised. – Uanfala (talk)23:00, 21 August 2019 (UTC)[reply]
I'm recusing as a BAG member, but for the record here are the BRFAs for SporkBot and PrimeBOT (orphaning and merging), which as mentioned have a remit to implement TFD outcomes. I'm mostly ambivalent about having a third bot with the same remit. Primefac (talk) 12:58, 9 June 2019 (UTC) (please do not ping on reply)[reply]
If it can be demonstrated that this new bot will not be conflicting/getting in the way of the existing bots, then this is probably harmless/some net-benefit. -FASTILY08:05, 16 June 2019 (UTC)[reply]
@DannyS712:Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. As per usual, please post a permalink here when done and take all the time you need for the trial. --TheSandDoctorTalk04:04, 22 June 2019 (UTC)[reply]
@TheSandDoctor:Trial complete. 50 edits made: [9]. The only issue is that AWB edit summaries are too short, so I couldn't link to both the BRFA and the TFD, but once approved that shouldn't be an issue. --DannyS712 (talk) 09:58, 22 June 2019 (UTC)[reply]
For the record, I find it in poor taste that Danny is using this trial/task to sidestep the decline of a previous BRFA. I have added the template in question to the auto-subst list and recommend sending this back to trial after said transclusions are removed. Primefac (talk) 12:28, 22 June 2019 (UTC)[reply]
@Primefac: After you closed the previous BRFA, I realized that AnomieBOT generally isn't used for TfD substitutions - see here. The point of this BRFA was to implement TfD closes that are missed by the other bots, and Double image is exactly the type of template I was thinking about. I have no issue with an extended trial, and I apologize if I unintentionally misled you or TSD --DannyS712 (talk) 18:12, 22 June 2019 (UTC)[reply]
From a "things to think about" perspective, not only for this task but in general:
If you're linking to a TFD, link to the day (not the specific #subheading)
If the links/descriptions are getting too long, create a user subpage (e.g. I link to User:PrimeBOT/24 for Task 24 to keep things short).
I'm not sure what's changed. I'm still objecting to your sidestepping the previous TFD, and my opinions regarding this request (in general) have not changed; I'm still mostly ambivalent. I'm also still recusing from BAG duties on this task. Primefac (talk) 16:35, 16 July 2019 (UTC)[reply]
Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. We got off on a bit of the wrong foot, let's see if we can correct it. Primefac (talk) 23:01, 7 August 2019 (UTC)[reply]
@Trialpears: sure - I'm hesitant to close it myself and then orphan it, but I'll check back at 0000 UTC and, once it's closed (and I assume it will be closed as delete), will remove 19 of the transclusions (only 19 edits left in the trial) DannyS712 (talk) 21:19, 7 September 2019 (UTC)[reply]
Approved. Looks good to me. Under normal circumstances, I would prefer to leave the close for someone else as I was involved in trialing this. However, given the backlog, lack of recent BAG activity (myself included), the fact that Primefac's objection appears to have been resolved, and based on how well the trial went, I am inclined to make an exception for this. As per usual, if amendments to - or clarifications regarding - this approval are needed, please start a discussion on the talk page and ping. --TheSandDoctorTalk01:15, 12 October 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: Will run a script similar to that used to generate Wikipedia:Database reports/Templates transcluded on the most pages (which is no longer regularly updated). All edits will be to subpages of a single module. No more than 27 edits will be made per week outside of the bot's userspace.
Not exactly. The module will take a template name as input, extract the first character, load the appropriate data table based on that character (either "A"–"Z" or "other"), and return the transclusion count. I could have the bot load the data into a submodule of Module:Data and leave the name parsing to Wikitext (e.g.
{{#invoke:Data|Module:Data/Template transclusions/{{#invoke:String|match|s={{PAGENAME}}|pattern=^[A-Z]|nomatch=other}}|{{PAGENAME}}}}), but since that requires spinning up Lua twice it seemed cleaner to do it all in one step in a new module and just call {{#invoke:Transclusion count}}. --Ahecht (TALK PAGE) 14:12, 7 June 2019 (UTC)[reply]
Well, I don't see the benefit of splitting the data pages by letter either, and the code you posted falls far below my standard of "merits a Lua module" -- but it's probably better to let TfD decide this rather than arguing about it on a bot request. * Pppery *it has begun...19:00, 7 June 2019 (UTC)[reply]
Splitting pages by letter code was mainly due to the size. Most of the Database reports tend to split lists with over a thousand entries, and splitting by letter makes lookup of any particular template straightforward. There are thousands of templates that will end up on this list, and having the documentation page for each of the thousands of pages using {{high use}} loading and parsing a giant table weekly seemed inefficient from a server load standpoint. --Ahecht (TALK PAGE) 13:47, 9 June 2019 (UTC)[reply]
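For concreteness, here is a rough sketch of the bot side, under the assumption that pywikibot is used and that the transclusion counts have already been obtained from a Toolforge replica query; the data-page titles and edit summary are illustrative only, not necessarily what the task will use.

import pywikibot
from collections import defaultdict

site = pywikibot.Site("en", "wikipedia")

def write_data_pages(counts):
    """counts: {template title without namespace: transclusion count}."""
    by_letter = defaultdict(dict)
    for title, n in counts.items():
        first = title[:1].upper()
        key = first if "A" <= first <= "Z" else "other"
        by_letter[key][title] = n
    for key, data in sorted(by_letter.items()):
        # Emit a Lua data table that the lookup module can mw.loadData().
        body = "return {\n" + "".join(
            '["%s"] = %d,\n' % (t.replace('"', '\\"'), n)
            for t, n in sorted(data.items())
        ) + "}\n"
        page = pywikibot.Page(site, "Module:Transclusion count/data/" + key)
        page.text = body
        page.save("Bot: weekly transclusion count update")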
Approved for trial (28 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete.Ahecht, I know it might take a bit to create the module and related content, so run the 4 weeks from whenever the bot starts editing. As far as implementation goes I'll leave that to you, but from a "proof of concept" I think replacing a few calls to {{high risk}} with a sandbox version that invokes the module will work. Primefac (talk) 12:49, 15 June 2019 (UTC)[reply]
@Primefac:On hold. The trial is currently on hold. There is currently an issue with the toolforge database replicas, and the transclusions query that had been taking about 15 minutes is now timing out after 30 minutes or so (the same query, which had run successfully at https://quarry.wmflabs.org/ is also now failing there, so this appears to be a global issue, not a bot-specific one). I will run the trial once this gets resolved. --Ahecht (TALK PAGE) 16:54, 20 June 2019 (UTC)[reply]
Just a quick update: The database replica issue seems to be resolved (and, according to the phab:T226050, future slow-downs should last hours, not days), but I will be away from the internet next week so I don't want to start a 4-week trial before that. I will re-activate this BRFA when I return. --Ahecht (TALK PAGE) 03:13, 22 July 2019 (UTC)[reply]
I started the first run on Friday, and had to make a few tweaks to get it working properly when saving in the Module: namespace. It's now set to run each Sunday morning for the next few weeks. --Ahecht (TALK PAGE) 00:28, 12 August 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
Estimated number of pages affected: 30 per day to begin with, can increase to 100 per day if the community sees it helpful. Speed is completely controllable. Overall, there are a few thousand between major wikis, e.g. EN-JA (~3000) and EN-DE (~5000).
The bot will notify editors by writing a new section on the Talk page of a subject, if that subject has inconsistent birthdays between this and another Wikipedia language.
The inconsistency data comes from a publicly available dataset on GitHub, called Project WikiLoop. An example edit looks like this
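To illustrate the mechanics only (this is not the bot's actual code, and the dataset column names here are assumptions), a pywikibot sketch of such a notice might look like the following.

import pywikibot

site = pywikibot.Site("en", "wikipedia")

SECTION = (
    "\n\n== Possible birth date inconsistency ==\n"
    "An automated check found different birth dates for this subject: "
    "%(en_date)s here versus %(other_date)s on %(other_lang)s.wikipedia.org. "
    "Please verify against reliable sources. ~~~~\n"
)

def notify(row):
    """row: one record from the WikiLoop dataset (column names assumed)."""
    talk = pywikibot.Page(site, "Talk:" + row["en_title"])
    if "Possible birth date inconsistency" in talk.text:
        return  # already notified; avoid posting a duplicate section
    talk.text += SECTION % row
    talk.save("Bot: noting a cross-wiki birth date inconsistency")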
{{TakeNote}} This request specifies the bot account as the operator. A bot may not operate itself; please update the "Operator" field to indicate the account of the human running this bot.AnomieBOT⚡06:49, 20 February 2019 (UTC)[reply]
{{TakeNote}} This bot appears to have edited since this BRFA was filed. Bots may not edit outside their own or their operator's userspace unless approved or approved for trial.AnomieBOT⚡06:49, 20 February 2019 (UTC)[reply]
This bot helps with cross-language inconsistencies and therefore will be editing other languages; how should I apply for global bot permission? Xinbenlv (talk) 06:53, 20 February 2019 (UTC)[reply]
Thank you, RhinosF1! It seems that m:BP requires the bot to obtain local community permission and keep it running locally for a while. Therefore, I think I shall apply for approval from multiple local communities, each individually, for now. Do I understand it correctly? Xinbenlv (talk) 18:48, 20 February 2019 (UTC)[reply]
Thanks to everyone who is interested. Just so that you know, the bot has two trial edits on German wiki, as encouraged by the BRFA discussion. Feel free to take a look; advice is welcome! Xinbenlv (talk) 21:59, 21 February 2019 (UTC)[reply]
Am I reading your datasets correctly that there are somewhere in the order of 5k pages that this applies to? Could you please add the approximate number to the BRFA documentation?
How often is the dbase updated? Could this potentially result in one page receiving multiple notices simply because no one has either seen or cared enough to fix the missing information?
The database will be updated on a daily / weekly basis; it is currently still in development. I plan to also rely on "Xinbenlv_bot" to suppress articles that have already been touched by the same bot. Xinbenlv (talk) 17:15, 25 February 2019 (UTC)[reply]
This seems like a reasonable task to deal with cross-wiki data problems, just want to get a better feel for the size and scope of the task. Primefac (talk) 20:26, 24 February 2019 (UTC)[reply]
@Primefac: The EN-JA file contains around ~3000 inconsistencies of birthdays, the EN-DE contains around ~5000 inconsistencies. To begin with, I think we can limit to 100 - 200 edits on English Wikipedia. Xinbenlv (talk) 16:47, 25 February 2019 (UTC)[reply]
@Xover's suggestion regarding using maintenance template
Would adding a maintenance template (that adds a tracking category) be a viable alternative to talk page notices? It might be more effort due to the inherently cross-project nature of the task, but talk page notices are rarely acted on, are extra noise on busy talk pages, and may cause serious annoyance since the enwp date may be correct (it's, for example, the dewp article that's incorrect) and the local editors have no reasonable way to fix it. A tracking category can be attacked like any gnome task, and the use of a maint template provides the option of, for example, flagging a particular language Wikipedia as having a verified date or specifying that the inconsistency comes from Wikidata. In any case, cross-project inconsistencies are an increasingly visible problem due to Wikidata, so kudos for taking on this issue! --Xover (talk) 18:41, 25 February 2019 (UTC)[reply]
@Xover: thank you. So far, I am applying to 5 different wikis for a bot flag at the same time. I received 3 suggestions:
1. use template and transclusion
2. add category
3. put it as an over-article "cleanup" message box or Talk page message.
For the #1 and #2, there is consensus amongst all responding communities (EN, DE, ZH, FR). So now the trial edits on these communities are using template and category, see ZH examples:
For #3 (putting it as an over-article "cleanup" message box), in the DE community some editors prefer a Talk page message, while some prefer an over-article message box. My personal opinion is that we can start slow, do some Talk page messages (like 200) for trial edits, and then when they look good, we can start to approve the bot for writing over-article messages. The reason being, I hope it demonstrates more stability before writing in the (article) namespace, especially for such a high-impact wiki as English Wikipedia.
Well, assuming the technical operation of the bot is good (no bugs) maint. templates in article space are generally less "noisy" than talk page messages (well, except the big noisy banners that you say dewp want, but that's up to them). I suspect the enwp community will prefer the less noisy way, but I of course speak only for myself. In any case, I did a small bit of copyediting on the talk page message template. It changed the tone slightly, so you may not like it, and in any case you should feel free to revert it for whatever reason. Finally, you should probably use {{BAGAssistanceNeeded}} in the "Trial edits" section below. --Xover (talk) 05:22, 26 February 2019 (UTC)[reply]
There was a consensus to stop InternetArchiveBot from adding talk page notices. I suspect that if this bot were to start running that there would be a similar consensus to stop adding the same. My suggestion is not to do #3. --Izno (talk) 23:29, 29 March 2019 (UTC)[reply]
Dear Admin, I just realized that English Wikipedia requires approval before running trial edits, and I have already made 9 trial edits in the (Article) namespace. Shall I revert the trial edits? I am sorry Xinbenlv (talk) 21:13, 15 March 2019 (UTC)[reply]
@Xaosflux:, OK, thank you! By the way, is there anything else I need to do other than just wait for people to comment? It seems the discussion has halted.
Could I just verify something? I notice that all of the sandbox trials are placing what appear to be talk page sections, while it sounds like the majority of participants (on multiple languages) feel either a maintenance template or category are more appropriate to fix this issue.
In other words, the template you've made looks like it's a wall of text that (as mentioned previously) users aren't generally thrilled about dealing with. Is there another way to make this template look more like a "maintenance" template? Maybe just the intro line ("An automated process has determined...") and the table, with instructions to remove when checked? Something that can be placed at the top of a talk page? Primefac (talk) 20:16, 4 April 2019 (UTC)[reply]
Message box: My understanding of the consensus is the other way around; for example, on EN Wiki, "My suggestion is not to do #3. --Izno". In German, on de:HD:Personendaten, after a long discussion they reached a consensus that a talk page section (not something that looks like a message box) is preferred, in their opinions.
Actually I had an iteration that did a message-box-like notification, but it was then suggested to change to a talk page section.
Something that makes this process very challenging is that this is a cross-language project, so we are trying to accommodate suggestions from different language Wikis while trying to keep them as aligned as possible so we can effectively maintain them across languages. See FAQ m:User:Xinbenlv_bot
On hold. I feel there's a sweet spot to be had. A short message done through a template would be ideal.
== Possible Wikidata issue ==
{{Inconsistent Interwiki/Wikidata Issue<!-- Come up with a better name than this please -->
|lang1=fr |subject1=Ernst Joll |date1=1902-06-19
|lang2=en |subject2=Ernst Joll |date2=1902-09-10
|fixed=yes/no
}}
Automated notice by ~~~~
@RexxS and Pigsonthewing:, you're the resident Wikidata experts here. Could you come up with a template that generalizes to other Interwiki/Wikidata conflicts? @Xinbenlv: feel free to participate in those efforts too. Until that template is designed, I'm going to put this on hold. Headbomb {t · c · p · b}05:21, 9 April 2019 (UTC)[reply]
@Headbomb and Xinbenlv: It's quite difficult to shorten the documentation by much, but I've made a demo at User:Rexxbot/msg/Inconsistency. It takes 11 named parameters, because if you want to generalise it to other inconsistency issues, you need to supply the name of the issue as well as the other parameters. Here's an extract from the rudimentary documentation that I knocked up:
{{User:Rexxbot/msg/Inconsistency
| issue = birth date
| lang1 = en
| article1 = Ernst Joll
| value1 = 1902-09-10
| lang2 = fr
| article2 = Ernst Joll
| value2 = 1902-06-19
| bot = Xinbenlv bot
| date = 28 April 2019
| status =
| by =
}}
If it's any use to you, please feel free to hack at these pages until you have something to your liking and/or take it for your own bot space (no attribution needed). Let me know if you want me to fix any of it. Cheers --RexxS (talk) 21:40, 28 April 2019 (UTC)[reply]
@RexxS: I've made a small tweak. The headers should be made by the bot, since we want those to give editable sections. Or at least sections that are editor-friendly. For the rest, I'm generally indifferent to the output and exact functionality, although the eyes of @Pigsonthewing and Xinbenlv would be appreciated to see if the design of the template is solid and scaleable. If everyone agrees it's a good design (and agree on a template name, e.g. {{Interwiki issue}}), we can proceed to trial. There's an option to have that as a wrapper template to create issue-specific sub-templates, but that might be a case of over engineering. Headbomb {t · c · p · b}21:48, 28 April 2019 (UTC)[reply]
@Headbomb: That's fine, but I was under the impression that the template would only be deployed by the bot, so it really isn't likely to care what the template is called. --RexxS (talk) 21:57, 28 April 2019 (UTC)[reply]
Well the bot wouldn't really care, but it's less WTF-y to have a template in template space for this. It could be in the bot's userspace, but that makes it a bit harder to find if similar bots are deployed in other languages, which may harm some internationalization efforts. Not a huge issue, but might as well do things right when we can. Headbomb {t · c · p · b}22:01, 28 April 2019 (UTC)[reply]
That makes sense, even though the only technical difference between template space and any other namespace is you don't have to include the namespace prefix when transcluding it. You're right though, if you're anticipating using this sort of template with other bots, then template space is the best place for ease of location. Good thinking. --RexxS (talk) 22:19, 28 April 2019 (UTC)[reply]
we expect to provide more than 2 inconsistent languages, such as 3–5; what will the template look like in that case?
we hope to ensure cross-language consistency; if this template is going to be internationalized and copied to other languages' Wikis, what is the best way to do so?
For 1, I believe you can just scale |lang1/article1/value1= to |lang3/article3/value3= etc. RexxS can confirm. For 2, no idea. It's good to think about, but that's not a blocker for the English Wikipedia or this bot. Headbomb {t · c · p · b}05:06, 13 May 2019 (UTC)[reply]
@Headbomb and Xinbenlv: I'm pretty certain that it will scale gracefully to more languages, but I'd really recommend getting the bot working and approved before trying to modify it. It's far easier to get approval for improvements once there's evidence of it already working on a simpler task or smaller scale. As for internationalisation, you can call the Lua module Module:Complex date to render dates in the wiki's language if that's the part you feel may need translating, as long as the module is available on the wiki you're working on. I assume that you've already taken care of the article titles in other languages. Beyond that, you just need to translate the text and documentation. Cheers --RexxS (talk) 16:48, 13 May 2019 (UTC)[reply]
Then once the template is moved to the Template namespace to its 'official name' (i suggest {{Interwiki issue}}, but it could be something else), Approved for trial (10 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.. Headbomb {t · c · p · b}17:10, 13 May 2019 (UTC)[reply]
@RexxS:, very helpful, in particular the date internationalisation that I hadn't thought of.
Sorry for my late arrival to this discussion. Xinbenlv, will there be some way for a user to mark the template, so that it is removed from the tracking category, but not retagged? For instance, when the date on enwp is confirmed as correct. Also, how does the dataset accommodate different calendars, such as when one wiki may list a date in the Gregorian calendar and another in the Julian? StudiesWorld (talk) 21:11, 30 May 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Denied.
Function details: I've been holding a few tests over at the testwiki. Runs a check every 15 minutes. Uses data from soccerway.com (provided by Opta Sports)
Discussion
Looking over the contributions at testwiki, that page doesn't appear to have any sources. How exactly would you add sources for these edits here on enwiki? --DannyS712 (talk) 04:52, 7 April 2019 (UTC)[reply]
Unless the plan is for a subset of footballers, there are more than 100,000 football biographies, so I think you'd want to think about how often the statistics need to be updated (even once a month would be in the range of tens of thousands of edits a month, depending on whether it is offseason of course) and discuss at WT:FOOTBALL. Galobtter (pingó mió) 07:44, 7 April 2019 (UTC)[reply]
The plan is for a case-by-case basis starting out. Afterwards if all goes well, I'll seek to gain consensus on categories for a specific league/country before mass-implementing any changes. Dat GuyTalkContribs13:30, 7 April 2019 (UTC)[reply]
Approved for trial (10 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.@DatGuy: As per usual, take all the time that you need to complete this trial and post the results here when done (preferably diffs or perma link to contribs section). --TheSandDoctorTalk18:20, 2 May 2019 (UTC)[reply]
So about the citations... In edits such as this it looks like you are updating a table that is already cited, to a source that is older than your 'as of' new date stamp. So, it looks like the citation no longer matches the article text but is being left there. How can this be improved? — xaosfluxTalk15:33, 28 May 2019 (UTC)[reply]
@Xaosflux: Very sorry about the delayed response/hiatus in editing. Before adding a page I'd check if there's a Soccerway.com reference, and if not add it. I've also fixed a bug. Can the trial be restarted and for ~6 edits? Thanks. Dat GuyTalkContribs05:14, 18 July 2019 (UTC)[reply]
@DatGuy: so in the edit I mentioned above, you changed content, but that content was already cited. Your changed content is newer than the existing citation - so it should no longer be supported by the citation that you are leaving there. — xaosfluxTalk11:58, 18 July 2019 (UTC)[reply]
@DatGuy: OK, in this edit there is a table, it is already cited, with a citation from 2019-03-25. You changed the data in the table to be about an event on 2019-04-04 - how is your new content still supported by that citation? — xaosfluxTalk11:23, 19 July 2019 (UTC)[reply]
To be honest, the BBC Sport reference isn't necessary. It's nice to have since the player had only played one international game and the article is more detailed, but it could work only with the National-Football-Teams.com reference, since that one automatically updates on the same page. Dat GuyTalkContribs20:19, 1 August 2019 (UTC)[reply]
This is definitely a reference WP:HIJACK. The existence of {{Soccerway}} somewhere else on the page doesn't negate the hijack, it would have to come directly after the modified content. IMO sports statistics are like weather and other frequently changing data that is high volume and rapid frequency. This creates problems with watchlist churn and diffs. The right way is a single data file that a template reads from. The bot's job is to keep the data file updated. The template's job is to display the data in the page. It requires a single edit to the page (add the template). Also the template can display a citation, which can be updated in a single location as the date changes ("last accessed on"). -- GreenC02:39, 13 November 2019 (UTC)[reply]
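A toy sketch of that pattern, assuming pywikibot; the module title and data shape are invented for illustration. The bot rewrites one data page, a display template {{#invoke}}s it, and a single bot edit then refreshes every transcluding article.

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def update_stats(stats, as_of):
    """stats: {player name: (appearances, goals)}; as_of: 'YYYY-MM-DD'."""
    lines = ["return {", '  asof = "%s",' % as_of]
    for name, (apps, goals) in sorted(stats.items()):
        lines.append('  ["%s"] = {apps = %d, goals = %d},' % (name, apps, goals))
    lines.append("}")
    page = pywikibot.Page(site, "Module:Football statistics/data")  # hypothetical title
    page.text = "\n".join(lines) + "\n"
    # One edit here refreshes every article whose infobox reads this module,
    # and the "last accessed" date in the displayed citation can come from asof.
    page.save("Bot: refreshing player statistics data")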
I find myself in agreement with GreenC on this one; it would be much better to have a central database of statistics that can be updated all at once, saving potentially thousands of edits. I'm not sure what the best layout would be for this data module, but at the moment this task seems potentially contentious so I'm going to mark this as Denied.Primefac (talk) 16:09, 8 December 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Bots that have completed the trial period
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: Fix some cases of misnested tags as reported by Special:LintErrors/misnested-tag. The idea is to fix cases where there's no doubt about the solution. For example, on 101 Ways to Leave a Gameshow, it will replace ''心跳阿根廷<br><small>Xin Tiao A Gen Ting''</small> by ''心跳阿根廷<br><small>Xin Tiao A Gen Ting</small>'' (correct nesting of italic and small).
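As a standalone illustration of this class of unambiguous fix (a deliberately narrow regex sketch, not the bot's actual detection logic), the swap could look like:

import re

# ''…<small>…''</small>  →  ''…<small>…</small>''  (only when nothing else intervenes)
MISNESTED = re.compile(r"(''[^'\n]*<small>[^'<\n]*)''(\s*)</small>")

def fix_misnested(text):
    return MISNESTED.sub(r"\1</small>\2''", text)

print(fix_misnested("''心跳阿根廷<br><small>Xin Tiao A Gen Ting''</small>"))
# ''心跳阿根廷<br><small>Xin Tiao A Gen Ting</small>''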
Approved for trial (50). Please provide a link to the relevant contributions and/or diffs when the trial is complete.NicoV, please post a permanent link of the bot's contributions page which shows the edits. If you have any questions, please do ask away. --TheSandDoctorTalk03:26, 6 August 2019 (UTC)[reply]
Approved. Under normal circumstances, I would prefer to leave the close for someone else. However, given the backlog, lack of recent BAG activity (myself included), and the fact that this task is uncontroversial and based on how well the trial went, I am inclined to make an exception for this. As per usual, if amendments to - or clarifications regarding - this approval are needed, please start a discussion on the talk page and ping. --TheSandDoctorTalk02:57, 19 August 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
Function details: Sometimes an RM is closed with consensus to disambiguate, and a new dab page is created. All incoming links to that page were, minutes ago, working links to the page that was moved, and after the dab page is created they must be retargeted to keep working. Having closed a recent requested move and then made over 250 edits to retarget links, I'd like to use this task to avoid flooding recent changes (though the script does limit the rate of edits)
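A rough sketch of the mechanical part, assuming pywikibot; in practice each edit is reviewed by hand (see the discussion below), and this naive replacement only handles plain [[Title]] links.

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def retarget(dab_title, moved_title):
    """Repoint plain links at dab_title to the page that was just moved."""
    dab = pywikibot.Page(site, dab_title)
    old_link = "[[%s]]" % dab_title
    new_link = "[[%s|%s]]" % (moved_title, dab_title)
    for page in dab.backlinks(namespaces=[0], follow_redirects=False):
        if old_link in page.text:
            page.text = page.text.replace(old_link, new_link)
            page.save("Retargeting links after requested move")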
Discussion
This is a WP:CONTEXTBOT issue, many of the links this bot will be "fixing" will be incorrect (as the initial link will be wrong in the first place), making the issue worse, not better. Leave this to the human dab solvers to deal with. Iffy★Chat -- 08:39, 29 July 2019 (UTC)[reply]
@Iffy: my intention was to make these edits manually, and to only use the bot account to avoid flooding recent changes (since enwiki doesn’t have a “flooder” user group). I’ve changed it to be manual instead of supervised - sorry for the confusion. I agree that this has context issues, but all edits are being made manually by me, so I think it should be okay. Thanks, —DannyS712 (talk) 16:48, 29 July 2019 (UTC)[reply]
{{BAGAssistanceNeeded}} - this has been open for a week and a half. I hope my clarification that this is a manual task (meaning that I will personally use the script to make each individual change) helps. If desired I could do this with a different bot account to separate "automatic" (bot) tasks from "manual" (flood) tasks --DannyS712 (talk) 21:32, 30 July 2019 (UTC)[reply]
Approved for trial (20 edits or 14 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. (whichever comes first) with the clarification that this is 100% manual, but as a bot-run task it should not change the meaning of any article content. — xaosfluxTalk00:49, 6 August 2019 (UTC)[reply]
@DannyS712: Since this is running as a bot, it should not be changing the "meaning" of statements in articles, such as changing a wikilink to go from pony to pony; that is you will need to manually make sure the context of the link persists through the edit. — xaosfluxTalk00:57, 6 August 2019 (UTC)[reply]
Approved. with the stipulation that it is indeed always run manually with care to ensure that "meaning" of statements in articles is preserved (per Xaosflux). As per usual, if amendments to - or clarifications regarding - this approval are needed, please start a discussion on the talk page and ping. --TheSandDoctorTalk03:04, 19 August 2019 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.
This will edit pages created by qbugbot 2, updating references, photos, and common names, and making a few other minor edits. Not all changes will be made to all pages, and some pages will not be changed.
Source code available: Yes. I will update User:Qbugbot/source before the first test.
Links to relevant discussions (where appropriate): There have been some comments, requests, and edits over the past year that have motivated me to do this, but I have not requested a consensus on ToL. I think it will be non-controversial.
Qbugbot2 created around 18,000 pages about a year ago. I'd like to make corrections and updates to these pages. These changes are a result of comments and page edits. Edits made to these pages since they were created will be preserved. The first 100+ edits by this bot will be reviewed manually.
1. "Further reading" and "External link" references will be updated, and in most cases cut back or eliminated. Any references in Further reading and External links that were created with the page will be removed and replaced with the new references from the current qbugbot database. This will provide fewer and more specific references in these areas. Any reference added by other editors will be retained as is. References are matched by title, or by authors and year. This item will affect most pages, and has been the source of most negative comments about qbugbot articles.
2. If the prose, infobox, and inline references have not been edited since an article was created, it will be updated with the following changes:
Wording in the prose may be updated, usually for the distribution range or common names, sometimes to correct errors.
Inline references will be updated. Sometimes more specific references will be added, and sometimes non-specific references may be removed (such as EOL, some redundant database references, and some database references without specific data on the article.)
The database sources for lists of taxonomic children (species list, etc.) will be removed. While this information might be handy, it makes it difficult for people to update the list. When the list is edited, the source database information tends to be omitted.
Occasionally, the taxonomic information and children will be updated.
3. Photos will be added if they are available and not already on the page. This will affect a minority of pages. The photos have been manually reviewed.
4. Unnecessary orphan and underlinked tags will be removed.
5. The external link to Wikimedia Commons will be updated to handle disambig links properly, without displaying the "(beetle)" in something like "Adelina (beetle)"
6. The formatting of many references has been improved, correcting errors, adding DOIs, etc. These will be updated in most cases. If a reference has been edited since creation, it will not be changed.
Here is an example of a page edited manually using qbugbot 3 content: Muellerianella
Some pages have been changed so much that the bot can't successfully revise them without altering other people's edits, something I'd rather not do automatically and something that's probably not necessary in pages with significant additions. Some other pages won't need any of these changes, either because the changes have already been made through manual edits, or because the original pages happened not to need them. I am just estimating the 1,000 pages. It could be more or less than that. Bob Webster (talk) 00:38, 30 March 2019 (UTC)[reply]
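One much-simplified way to implement the "only rewrite pages nobody else has touched" gate, assuming pywikibot; this is a guess at the approach rather than qbugbot's code, and the real check is finer-grained (prose, infobox, and inline references are considered separately).

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def only_edited_by_bot(title, bot_username="Qbugbot"):
    """True if every revision of the page so far was made by the bot itself."""
    page = pywikibot.Page(site, title)
    return all(rev.user == bot_username for rev in page.revisions())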
I looked at this and decided to postpone it for another update. The main problem is that I could see no easy way to determine what was described in 1956 (or any year) -- insects? moths? spiders? animals? beetles? North American millipedes? I was also considering narrowing down some of the categories (bees to sweat-bees, etc.) as some editors have been doing, but I haven't found a reliable list of categories to use. The same thing applies to the -stub templates. I would prefer to do these three tasks in another bot session. Bob Webster (talk) 03:00, 31 March 2019 (UTC)[reply]
Having done some of this categorisation [caveat: not so much recently], I have to agree that this problem exists, and there are various schemes of parent categories that are in use if the category you are assigning needs to be created. One could put everything into a higher level category, to await sorting, but I see no great advantage. I would accept it as WP:WORKINPROGRESS. William Avery (talk) 08:56, 1 April 2019 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Since I fixed a crapton of those citations myself, I'm rather enthusiastic about qbugbot cleaning up after its own mess. Headbomb {t · c · p · b}05:25, 9 April 2019 (UTC)[reply]
Trial complete. 50 pages were updated, and are listed on the bot talk page. I found and fixed a couple of bugs. One prevented the introduction from being updated sometimes, and the other was a minor line spacing error. Bob Webster (talk) 23:04, 10 April 2019 (UTC)[reply]
I've significantly reduced the number of references in further reading in new pages created by qbugbot. This was a manual, subjective process. The pages edited in qbugbot3 will have the original further reading references replaced with this new set. If a further reading reference has been added by an editor since page creation, it will be included in the edited page. The inline citations of qbugbot have also been updated. If the text of a page has not been edited, the original set of inline citations will be replaced.
I've corrected the references you listed, and fixed the problem of ending up with the same references in both inline citations and further reading.
@Edibobb: I think that given the number of articles/citations affected, it would be a good idea to have a sandbox version of all references that will be used. Then you (or I, if you don't know how) could run citation bot on them, and see what the improvements are, and those could get implemented, reducing the future cleanup load. Headbomb {t · c · p · b}15:12, 17 April 2019 (UTC)[reply]
I think that would be good. I don't know how to run the citation bot, but I've copied all the citations to these sandbox pages. Can you run the bot on them? (A few of the citations are leftover and will never be used. It's easier to fix them all than sort them out, so don't worry if you see a few weird titles and dates.) Thanks!
User:Citation bot/use explains the various methods. Right now the bot is blocked, so only the WP:Citation expander gadget works. I'll run the bot on these pages though. There's an annoying bug concerning italics and titles though, so just ignore that part of the diffs that will result. Headbomb {t · c · p · b}16:01, 17 April 2019 (UTC)[reply]
Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.Headbomb {t · c · p · b}13:37, 22 August 2019 (UTC)[reply]
Trial complete. 50 pages were updated, and are listed on the bot talk page. I found and fixed these things:
If an editor had added multiple columns to the Further Reading section, the section would not be changed by the bot.
Inline commons tag was being added even if there was already a commons tag.
Subdivisions were being added to the taxobox for species and subspecies.
Approved for extended trial (100 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete.@Edibobb: My bad, I thought I gave another extended trial for this, but the edit must have been lost somehow. Anyway, here goes another 100 edits to make sure all kinks are worked out. Headbomb {t · c · p · b}17:39, 19 September 2019 (UTC)[reply]
Trial complete. 100 pages were updated, and are listed on the bot talk page. I didn't find any problems on any of the updated pages. (No problem on the delay -- it actually fit my schedule better)
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.
Approved requests
Bots that have been approved for operations after a successful BRFA will be listed here for informational purposes. No other approval action is required for these bots. Recently approved requests can be found here (edit), while old requests can be found in the archives.
Denied requests
Bots that have been denied for operations will be listed here for informational purposes for at least 7 days before being archived. No other action is required for these bots. Older requests can be found in the Archive.
Expired/withdrawn requests
These requests have either expired, as information required by the operator was not provided, or been withdrawn. These tasks are not authorized to run, but such lack of authorization does not necessarily follow from a finding as to merit. A bot that, having been approved for testing, was not tested by an editor, or one for which the results of testing were not posted, for example, would appear here. Bot requests should not be placed here if there is an active discussion ongoing above. Operators whose requests have expired may reactivate their requests at any time. The following list shows recent requests (if any) that have expired, listed here for informational purposes for at least 7 days before being archived. Older requests can be found in the respective archives: Expired, Withdrawn.