Wikipedia:Bot requests
Commonly Requested Bots
This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as a request for comment) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).
You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.
Before making a request, please see the list of frequently denied bots - requests that are turned down either because they are too complicated to program or because they lack consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).
- Alternatives to bot requests
- WP:AWBREQ, for simple tasks that involve a handful of articles and/or only need to be done once (e.g. adding a category to a few articles).
- WP:URLREQ, for tasks involving changing or updating URLs to prevent link rot (specialized bots deal with this).
- WP:USURPREQ, for reporting a usurped domain, e.g. to mark links with |url-status=usurped.
- WP:SQLREQ, for tasks which might be solved with an SQL query (e.g. compiling a list of articles according to certain criteria).
- WP:TEMPREQ, to request a new template written in wiki code or Lua.
- WP:SCRIPTREQ, to request a new user script. Many useful scripts already exist, see Wikipedia:User scripts/List.
- WP:CITEBOTREQ, to request a new feature for WP:Citation bot, a user-initiated bot that fixes citations.
Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did so with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).
Checking language links
Is it possible to scan our language articles to check that they link to the correct ISO codes?
There are 7,500 blue links on Wikipedia:WikiProject Languages/Primary language names in Ethnologue 16 by ISO code. If I could get a bot to check which are obviously correct, I could check any exceptions manually.
The parameter is whether the ISO code at the target article matches what we have on the ISO list. For example, the first link on the list page is [[Ghotuo language|aaa]]. The article Ghotuo language has a language infobox with the parameter "iso3" set equal to aaa, so that link is good. That should be easy to check by bot, assuming it can follow redirects.
The potentially matching parameters in {{Infobox language}} are iso3, lc1, lc2, lc3, .... (lc-n is used where there is more than one ISO code.)
I'm hoping for a list of any language names which do not link to the matching ISO code. I suspect there are a fair number of circular links which need to be fixed. It would be nice if both pieces of data could be returned. So, if the example link were bad, we'd get back "Ghotuo language : aaa" or something similar.
Is that possible?
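Something along these lines, perhaps - a minimal sketch assuming pywikibot and mwparserfromhell, with the list-page parsing simplified and the lc4/lc5 upper bound a guess, so treat it as illustrative only:

```python
# Sketch only: verify that each [[X language|code]] link on the list page
# points to an article whose {{Infobox language}} declares the same ISO code.
import re
import mwparserfromhell
import pywikibot

ISO_PARAMS = ('iso3', 'lc1', 'lc2', 'lc3', 'lc4', 'lc5')  # lc4/lc5 are a guess at the upper bound

site = pywikibot.Site('en', 'wikipedia')
listpage = pywikibot.Page(
    site, 'Wikipedia:WikiProject Languages/Primary language names in Ethnologue 16 by ISO code')

# Links on the list look like [[Ghotuo language|aaa]]
for target, code in re.findall(r'\[\[([^|\]]+)\|([a-z]{3})\]\]', listpage.text):
    page = pywikibot.Page(site, target)
    while page.isRedirectPage():              # follow redirects
        page = page.getRedirectTarget()
    codes = set()
    for tpl in mwparserfromhell.parse(page.text).filter_templates():
        if tpl.name.matches('Infobox language'):
            codes.update(str(tpl.get(p).value).strip() for p in ISO_PARAMS if tpl.has(p))
    if code not in codes:
        print(f'{target} : {code}')           # mismatch or missing infobox, for manual review
```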
Thanks, — kwami (talk) 06:42, 3 February 2013 (UTC)
Dutch municipalities tagging
I am trying to reactivate WikiProject Dutch municipalities. One of the steps would be to assess all relevant articles, which are the articles on the municipalities and the articles about their subdivisions. All these articles are already tagged with the {{WikiProject Netherlands}} banner as far as I can tell. These would need to be updated with |muni=yes |muni-importance=Low/Mid.
All municipalities would have to be automatically tagged as Mid and their subdivisions as Low importance. For each of 12 provinces there are lists of Cities, towns and villages and categories with all the municipalities. All articles occurring in both would need a Mid-importance tag, the articles only occurring in the former a Low-importance tag. Both would need the |muni=yes as well.
I am not fully familiar with the Wikipedia bot process, but I assume this would be possible to do automatically. Could any of you advise me how to proceed with this? CRwikiCA (talk) 20:22, 15 February 2013 (UTC)
- To elaborate a bit more on this, is it possible for all articles in the category and subcategories of Category:Populated places in the Netherlands to have |muni=yes |muni-importance=Low added to the template {{WikiProject Netherlands}}? CRwikiCA (talk) 11:04, 16 February 2013 (UTC)
- I am currently tagging the municipal articles with Mid and up, so please do not blindly perform the previous request; skip all articles that already have the |muni=yes tag. CRwikiCA (talk) 21:14, 19 February 2013 (UTC)
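A rough sketch of what the category-based version could look like, with the skip rule from the comment above - pywikibot plus mwparserfromhell, untested:

```python
# Sketch: add |muni=yes |muni-importance=Low to {{WikiProject Netherlands}} on
# the talk pages of articles under Category:Populated places in the Netherlands,
# skipping anything already tagged (per the comment above). Untested.
import mwparserfromhell
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Populated places in the Netherlands')

for article in cat.articles(recurse=True):
    talk = article.toggleTalkPage()
    if not talk.exists():
        continue
    code = mwparserfromhell.parse(talk.text)
    for tpl in code.filter_templates():
        if tpl.name.matches('WikiProject Netherlands') and not tpl.has('muni'):
            tpl.add('muni', 'yes')
            tpl.add('muni-importance', 'Low')
            talk.text = str(code)
            talk.save(summary='Tagging for WikiProject Dutch municipalities per request')
            break
```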
For generating an HTML page with current category structure
We on Sanskrit wiki want a bot to generate an HTML page containing all our categories in the form of an HTML tree-view control, something like this. I know that there is a special page on wiki for category viewing, but the problem is that it doesn't work readily (loading time is required on each click); whereas if we can download a single HTML page, we can view the category structure easily by collapsing & expanding as needed, even offline, and I want to review the category structure using that method. We have a manageable number of categories on sawiki. So please tell if that is feasible. -Hemant wikikosh (talk) 11:58, 17 February 2013 (UTC)
- The proper way to do this would be with XML and XSLT. Smallman12q (talk) 00:33, 24 February 2013 (UTC)
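For illustration, a sketch of one way to do it straight from the API, writing nested <details> elements for the collapse/expand behaviour - the root category name is a placeholder and there is no continuation handling, so it assumes fewer than 500 subcategories per node:

```python
# Sketch: dump sawiki's category tree to one offline HTML page, using
# <details>/<summary> for collapse/expand. No continuation handling (assumes
# <500 subcategories per node); the root category is a placeholder. Untested.
import requests

API = 'https://sa.wikipedia.org/enwiki/w/api.php'

def subcats(title):
    params = {'action': 'query', 'list': 'categorymembers', 'cmtitle': title,
              'cmtype': 'subcat', 'cmlimit': 'max', 'format': 'json'}
    data = requests.get(API, params=params).json()
    return [m['title'] for m in data['query']['categorymembers']]

def tree(title, depth=0, max_depth=10):       # depth cap guards against category cycles
    html = f'<details><summary>{title}</summary>'
    if depth < max_depth:
        html += ''.join(tree(t, depth + 1, max_depth) for t in subcats(title))
    return html + '</details>'

with open('categories.html', 'w', encoding='utf-8') as f:
    f.write(tree('Category:ROOT'))            # replace with the actual sawiki root category
```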
Billboard URL repair
Billboard has revamped its site, and we have tens of thousands of dead links. The old link looks like
http://www.billboard.com/artist/<artist name, urlencoded>/chart-history/<magic number>?f=<chart number>&g=Singles with "#" inserted at random locations.
for example
- http://www.billboard.com/artist/Tim+McGraw/chart-history/32771?f=357&g=Singles (or g=Albums, if it's an album chart)
- http://www.billboard.com/#/artist/lil-wayne/chart-history/352101?f=379&g=Singles <- notice the optional pound sign
- http://www.billboard.com/artist/keith-urban/241828#/artist/keith-urban/chart-history/241828?f=357&g=Singles
These all now take the form of http://www.billboard.com/artist/<different magic number>/<artist name, with varying punctuation>/chart?f=<chart number>
All the magic numbers for the artists changed. The artists' names stayed the same, even though the formatting has shifted. The chart numbers remain the same.
I've done the crawl of Billboard to find the new tokens. The results are now in templates. {{BillboardID}} will return the appropriate number for the artist. {{BillboardChartNum}} will return the chart number from a chart name (which should stay more stable). The new template {{BillboardURLbyName}} will take that data and return a correct URL. The purpose of the template is to keep from having to go through this again when Billboard revamps again. It seems to happen every few years.
For size considerations, {{BillboardID}} is actually split into 40 separate templates, divided by first character. Take a peek at {{BillboardID/Q}} and {{BillboardID/R}} and it will be obvious.
So, what the bot needs to do:
- For each URL of the form http://www.billboard.com/artist/<artist name, urlencoded>/chart-history/<magic number>?f=<chart number>&g=Singles or http://www.billboard.com/#/artist/<artist name, urlencoded>/chart-history/<magic number>?f=<chart number>&g=Singles:
  - extract the artist name and chart number
  - if the artist name is not translated by {{BillboardID}}, log an error and skip
  - else if the chart number is not translated by {{BillboardChartNum}}, log an error and skip
  - else replace the URL with {{BillboardURLbyName|artist=<artist name>|chart=<chart name>}}
When the bot is done, it should provide a log of every time it found an artist that it couldn't handle or a chart that it couldn't handle. I'll take those errors and fix the templates to handle those cases, and we can rerun as necessary.
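For illustration, a sketch of the core match-and-replace step. The regex is simplified - it covers the first two URL shapes above but not '#' at arbitrary positions, as in the third example - and the lookup dicts stand in for data read out of the templates:

```python
# Sketch of the replacement step. billboard_ids and chart_names stand in for
# data read out of {{BillboardID}} and {{BillboardChartNum}}. The regex covers
# the first two URL shapes above; '#' at arbitrary positions is not handled.
import re

OLD_URL = re.compile(
    r'http://www\.billboard\.com/(?:#/)?artist/([^/?#]+)/chart-history/\d+'
    r'\?f=(\d+)&g=(?:Singles|Albums)')

def fix_urls(text, billboard_ids, chart_names, log):
    def repl(m):
        artist = m.group(1).replace('+', ' ').replace('-', ' ')
        chart = chart_names.get(m.group(2))
        if artist not in billboard_ids:
            log.append('unknown artist: ' + artist)       # log error and skip
            return m.group(0)
        if chart is None:
            log.append('unknown chart: f=' + m.group(2))  # log error and skip
            return m.group(0)
        return '{{BillboardURLbyName|artist=%s|chart=%s}}' % (artist, chart)
    return OLD_URL.sub(repl, text)
```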
To see an example of a before and after rework, look at the Usher discography revamp or the Nicki Minaj discography revamp.—Kww(talk) 16:28, 18 February 2013 (UTC)
- An excellent idea. Much easier than doing it all by hand. Doesn't seem like a very hard bot to code. If someone can do it, please! This is a big issue we are currently facing. Status 01:07, 20 February 2013 (UTC)
I decided to dive in, and I'm at the 95% complete stage on this bot. I'll ask again if I need help crossing the finish line.—Kww(talk) 01:58, 21 February 2013 (UTC)
Creating year categories
About 20% of the 2200+ categories in Wikipedia:Database reports/Categories categorized in red-linked categories are year-related (sort by Member category to get an idea - the three pages have different biases; page 1 has a lot of 1st-millennium dates, page 3 has the most in total). Some of these cats have been red links for 2+ years, but they are quite amenable to creation by bot, which would let human editors concentrate on the more demanding categories. I'm thinking of pseudo-code along the lines of the following (a code sketch of the establishments part follows the list):
- Scan the right column of Wikipedia:Database reports/Categories categorized in red-linked categories and reject everything that doesn't have a year (including BC years) - say between 1000BC and (CURRENTYEAR+10) - quite a lot of cats deal with the near future, upcoming Olympics, elections etc. Decades, centuries and millennia would be nice too.
- Check to see if category has been created - the report is normally created on Sunday morning, so a weekly run soon after this would be optimal
- Any non-existent categories named "(dis)establishments in" - check against a list of countries (eg the ones in Category:21st-century establishments by country) and US states, and if the place is on the list, create the category with the following templates - note how the decade ones require you to calculate the century:
- 1970s establishments in Ruritania has {{estcatCountryDecade|19|7|20th|Ruritania}}
- 1970s disestablishments in Ruritania has {{disestcatCountryDecade|19|7|20th|Ruritania}}
- 1979 establishments in Ruritania has {{estcatCountry|197|9|Ruritania}}
- 1979 disestablishments in Ruritania would have {{disestcatCountry|197|9|Ruritania}}
- 1970s establishments in New Jersey has {{estcatUSstateDecade|19|7|20th|New Jersey}}
- 1970s disestablishments in New Jersey has {{disestcatUSstateDecade|19|7|20th|New Jersey}}
- 1979 establishments in New Jersey has {{estcatUSstate|197|9|New Jersey}}
- 1979 disestablishments in New Jersey has {{disestcatUSstate|197|9|New Jersey}}
- There's also {{EstcatCountryCentury}}, {{EstcatCountry2ndMillennium}} and {{EstcatCountry3rdMillennium}} if you can be bothered.
- Categories like 1979 in Ruritania have simply {{year in country category|1|9|7|9|Ruritania|Europe||}} {{Commons category|1979 in Ruritania}}
- Categories like 1979 by country get filled in as per eg Category:2012 by country
- Categories like Years of the 20th century in Ruritania get {{Portal|Ruritania}} and the categories Category:20th century in Ruritania, 20th, Ruritania and Ruritania
- Categories like 1970s in Ruritania get filled in per eg Category:2010s in France
- Categories like Years of the 20th century in Ruritania get filled in per eg Category:Years of the 20th century in France
- A common pair in the first century is eg Category:347 in international relations and Category:347 in politics which want filling out per Category:2000 in international relations and Category:2000 in politics
- I think there's scope to apply some fuzzier logic to the cases of "nnnn in foo", "nnnn foo" or "nnnn–nn in foo" (mainly sports leagues). Since the 2012 equivalent will almost certainly exist, just grab the categories and templates from the (CURRENTYEAR - 1) version of the category, adjust the numbers accordingly, and use that content to create the category. It may not be quite perfect, but I wouldn't let perfection be the enemy of the good; at least it gets the category into the hierarchy, where it is visible to the subject experts, and errors in the year hierarchies tend to be very visible. To be honest, duplicating the 2012 category actually works pretty well as a heuristic when doing this manually, so the error rate should be acceptable. Maybe have a "handbrake": base no more than three new categories on a 2012 category without asking a human to double-check that it's OK?
- Then check that there are no category red links on what you've just created - it'd be nice to create those in turn, although they could wait for the next run of the report.
- Could do the same for the similar report WP:Database reports/Red-linked categories with incoming links
- Something to watch for in the above is the sorting - BC dates are assigned negative numbers for sort purposes, so eg 270 BC will be given -30 in a 3rd-century BC category. — Preceding unsigned comment added by Le Deluge (talk • contribs) 18:50, February 18, 2013 (UTC)
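A sketch of the "(dis)establishments" part of the above - name parsing and template choice per the examples; BC years and the fuzzier "nnnn in foo" cases are deliberately left to a human, and the whole thing is untested:

```python
# Sketch: map a red-linked "(dis)establishments in <place>" name to the
# creation template listed above. BC years and the fuzzier cases are left out.
import re

def ordinal(n):
    suffix = 'th' if 11 <= n % 100 <= 13 else {1: 'st', 2: 'nd', 3: 'rd'}.get(n % 10, 'th')
    return str(n) + suffix

def creation_text(name, countries, us_states):
    m = re.match(r'Category:(\d{2})(\d)0s (dis)?establishments in (.+)$', name)
    if m:   # decade, e.g. Category:1970s establishments in Ruritania
        cc, d, dis, place = m.groups()
        kind = 'USstate' if place in us_states else 'Country'
        if kind == 'Country' and place not in countries:
            return None                        # not on the whitelist; skip
        century = ordinal(int(cc) + 1)         # 19 -> 20th
        return '{{%sestcat%sDecade|%s|%s|%s|%s}}' % (dis or '', kind, cc, d, century, place)
    m = re.match(r'Category:(\d{3})(\d) (dis)?establishments in (.+)$', name)
    if m:   # year, e.g. Category:1979 establishments in Ruritania
        yyy, y, dis, place = m.groups()
        kind = 'USstate' if place in us_states else 'Country'
        if kind == 'Country' and place not in countries:
            return None
        return '{{%sestcat%s|%s|%s|%s}}' % (dis or '', kind, yyy, y, place)
    return None
```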
Discussion on whether this is a good idea (conclusion: yes it is)
Simple category creator
This is a nice easy task that might suit a bot beginner, or could be bolted onto something else.
- Every Wednesday night, scan WP:Database reports/Red-linked categories with incoming links for category names containing "sockpuppets_of"
- Check that they don't exist
- Create with the simple template "{{Sockpuppet category}}"
That should be good enough - if you read {{Sockpuppet category}} you can see there's a bit of scope to get cute about encoding certain non-alphanumeric characters, but the template is smart enough to sideline "problem" names into a maintenance category, so that's not really necessary. It's a simple enough little task that could be done manually with AWB, but it might as well be automated; sockpuppet cats make up 20% of those red-link categories, and anything that helps out the anti-sock guys is worth doing. — Preceding unsigned comment added by Le Deluge (talk • contribs) 18:50, February 18, 2013 (UTC)
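For concreteness, the whole task is only a few lines - a pywikibot sketch, with the report parsing reduced to a scan for category links, untested:

```python
# Sketch: create red-linked sockpuppet categories with {{Sockpuppet category}}.
# Report parsing reduced to scanning for category links. Untested.
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
report = pywikibot.Page(
    site, 'Wikipedia:Database reports/Red-linked categories with incoming links')

for title in re.findall(r'\[\[:?(Category:[^|\]]*sockpuppets of[^|\]]*)', report.text):
    cat = pywikibot.Page(site, title)
    if not cat.exists():                       # the report may be a few days stale
        cat.text = '{{Sockpuppet category}}'
        cat.save(summary='Creating sockpuppet category per [[WP:BOTREQ]]')
```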
- Coding.... GoingBatty (talk) 01:21, 19 February 2013 (UTC)
- BRFA filed here. GoingBatty (talk) 01:37, 19 February 2013 (UTC)
- Line 83 of the report is just Category:Wikipedia sockpuppets of - should that be ignored, or is there something that should be fixed? GoingBatty (talk) 01:43, 19 February 2013 (UTC)
- Ignore it, that was from someone using an old template in 2008 (!) Thanks for the speedy response.Le Deluge (talk) 11:28, 19 February 2013 (UTC)
Language name typo correction
There are hundreds of articles, mostly about settlements in Brazil, that specify 'language=Portguese' as a citation template parameter. There are too many to correct to 'Portuguese' by hand, but it would be an easy job for a bot. Colonies Chris (talk) 22:09, 19 February 2013 (UTC)
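The fix itself, restricted to the citation parameter so it is not general prose spell-checking, could be as small as this sketch:

```python
# Sketch: fix 'Portguese' only where it appears as a citation-template
# language parameter, never in article prose.
import re

def fix(text):
    return re.sub(r'(\|\s*language\s*=\s*)Portguese\b', r'\1Portuguese', text)
```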
- It would be a simple find & replace exercise with AWB. However, I'm not sure I could argue how this would be an exception to Wikipedia:Bots/Frequently denied bots#Fully automatic spell-checking bots. Thanks! GoingBatty (talk) 02:09, 20 February 2013 (UTC)
- I am going to file a BRFA with an argument that I believe is good enough, and will also run it semi-automatically if required. -- Cheers, Riley 02:22, 20 February 2013 (UTC)
- Thanks for taking this up. Colonies Chris (talk) 11:38, 20 February 2013 (UTC)
- Done. Thanks for filing this request! -- Cheers, Riley 07:45, 23 February 2013 (UTC)
- Cheers. That just leaves a couple of dozen occurrences of this typo, which I can fix by hand. Colonies Chris (talk) 11:24, 23 February 2013 (UTC)
Archive or Clean of User:Addbot/log/wikidata
Hi guys! I currently don't really have much time, but it would be great if someone has, or could write, a bot that could archive or just remove sections from User:Addbot/log/wikidata that have {{done}} or {{notdone}} on them! ·Add§hore· Talk To Me! 23:21, 19 February 2013 (UTC)
- Won't User:ClueBot III do this? I'll be happy to configure it for you, if that's what you need. —Theopolisme (talk) 23:47, 20 February 2013 (UTC)
- If the page is to work as intended, it would need to be checked every 30 mins or so, which unfortunately ClueBot does not do. Although adding the ClueBot config to the page would still probably help the current state! ·Add§hore· Talk To Me! 03:51, 21 February 2013 (UTC)
- Done - made something myself :) ·Add§hore· Talk To Me! 23:05, 22 February 2013 (UTC)
Purging Main Page at regular intervals
At Talk:Main Page#Today's article for improvement on the Main Page, there appears to be consensus (pending an uninvolved party's discussion closure) to proceed with the addition of a TAFI section to the main page, with the dynamic display of three article links from a pool of ten (example).
A concern is that the links, generated via {{random subpage}}, change only when the page's cache is purged. So I'm requesting that a bot purge Main Page's cache with whatever frequency is feasible and acceptable (once per minute, perhaps?). —David Levy 18:27, 21 February 2013 (UTC)
- The server admins would have the operator's head on a pike before you could blink. The main page is cached for a reason: it is the most viewed page on Wikipedia. Breaking the cache system that often for something that trivial would cause some backlash. Werieth (talk) 03:05, 22 February 2013 (UTC)
- Right. Just display 3 articles at a time, and then switch them out whenever DYK gets updated or something. Don't dynamically change them. Legoktm (talk) 03:14, 22 February 2013 (UTC)
- For the reason noted below, the community approved the section's addition on the condition that the article links be randomly pulled from a large pool. (For the record, I wasn't involved in that discussion, but I would have expressed the same concern.) —David Levy 03:39, 22 February 2013 (UTC)
- I understand the importance of caching, particularly when a page is viewed millions of times per day, but I'm unclear on how the proposed setup constitutes "breaking the cache system" or poses a problem. In terms of overhead, how would it differ from any other cache purge (a common occurrence across the encyclopedia)? In the subsequent minute, whether a page is requested once or 7,000 times, isn't the same cached version being sent? (Please forgive me if I've misunderstood how this works.)
- Note that we already include a purge link on the main page. The TAFI proposal previously called for an additional one to be included in the new section (with readers encouraged to use it repeatedly), which probably would have resulted in significantly more than one purge per minute.
- Also note that the purpose of varying the links is to avoid sending too many editors to articles at the same time (thereby causing endless edit conflicts and potentially driving away new contributors), which isn't a trivial matter. —David Levy 03:39, 22 February 2013 (UTC)
- What did we do for the 2008 election FA? That was two articles displayed together, with the order rotated randomly; presumably we hit the same problems then, but we may also have solved it :-). Andrew Gray (talk) 09:44, 22 February 2013 (UTC)
- That relied on JavaScript code, which failed gracefully for users without JavaScript enabled (who saw the blurbs in a static order). I don't know whether something similar is feasible in this instance. —David Levy 11:57, 22 February 2013 (UTC)
- Does a cron job which requests http://en.wikipedia.org/w/index.php?title=Main_Page&action=purge every hour, say, really require bot approval? If it's not editing, it doesn't require a bot, does it? Neo Poz (talk) 19:09, 23 February 2013 (UTC)
- Probably not, technically, but I did go through the process for User:Joe's Null Bot, which does roughly four times that many purges per day. *shrug* The original request was for one per minute, not once per hour, of course. --j⚛e deckertalk 19:54, 23 February 2013 (UTC)
- How do you feel about adding the 1/hour to Joe's Null, and if so, going through the approval for the addition or not? Neo Poz (talk) 20:08, 23 February 2013 (UTC)
- "Once per minute" was merely a suggestion. I've requested "whatever frequency is feasible and acceptable", meaning that which can be programmed without inconveniencing the bot's operator or causing any server-side problems (though I suspect that only a ridiculous rate would have the latter effect).
- The page needn't be purged once per minute, but I hope that it can occur more often than once per hour (as that seems long enough for the articles to be flooded). —David Levy 20:36, 23 February 2013 (UTC)
- Does anyone think once every 15 minutes is too much or too little? Neo Poz (talk) 03:44, 24 February 2013 (UTC)
- Let's try 15 minutes for starts and we can adjust it faster if needed. --NickPenguin(contribs) 00:45, 25 February 2013 (UTC)
- That seems reasonable. —David Levy 19:21, 25 February 2013 (UTC)
Just as a question - does this HAVE to be done by something hard-coded that gets refreshed periodically? Since the aim is just to present 3 out of 10 links, could you not slice it by some other way than by a cache-purge? I'm thinking of using Javascript to do something like If CURRENTTIME-in-milliseconds ends in 1, show links 1,2,3; if CURRENTTIME-in-milliseconds ends in 2, show links 2,3,4 and so on. Doesn't have to be done on time either - it could be done on the ASCII value of their user name, sum of their IP address (with a bit of help server-side), whatever. It seems a better way of doing something that is fundamentally quite simple, rather than messing with the caching on such a heavy-traffic page. Le Deluge (talk) 19:46, 25 February 2013 (UTC)
- My concern is that not all users have JavaScript enabled. Assuming that the code were to fail gracefully (as in the instance discussed above), perhaps the number of affected users would be acceptably small.
- But unless I've misunderstood the caching system, the fact that Main Page is "a heavy-traffic page" is irrelevant. A cache purge causes the page to be rebuilt once, after which the same version is sent until the cache is purged again. Whether a page is requested zero times or 100,000 times in the following 15 minutes, it's been rebuilt only once. So the cost of purging Main Page's cache should be no greater than that of purging any other page's cache. A 15-minute interval would result in 96 cache purges per day, which is negligible. —David Levy 20:23, 25 February 2013 (UTC)
- Usual assumption is that about 1.5% of web users have no Javascript - in this case one might assume that the percentage of likely editors is less than that, but we'll go with it. If your NOSCRIPT page just presented people with links 1,2,3, that would mean that 11.35% of people see links 1,2,3 and 9.85% see each of the other combinations. I don't think that's unreasonable. If you had 3 out of 100 links, then you might set aside one combination of links specifically for the NOSCRIPT people. As for the caching, you need to think like a user - or rather like the cache nearest the user, rather than at the Wikipedia end. The fact that the en.wiki main page is so popular means that almost every cache on the planet will need a current copy. Let's assume one of their users requests it once every minute for the sake of this argument. If MP is purged once an hour, then 59 users can be supplied with the version on the cache, and only one has to wait for the cache to request the new version up through the hierarchy of other caches until a request gets made to Wikipedia. Purge it once a minute and all 60 users think "Gee, Wikipedia is slow today". OK, an extreme version. The real problem is the extra network traffic you're incurring; if this is a popular page it will be on 100,000s of servers throughout the internet. Compare that with, say, the main page of Welsh Wikipedia, where not only are the numbers much smaller, the likely users can all be "fed" by getting a new copy to a few servers in places like Cardiff (and one in Patagonia). A cache in Hong Kong or Romania isn't likely to need a new copy of the Welsh page every minute. Apologies to any network engineers out there, but you get a flavour of some of the argument. Le Deluge (talk) 21:42, 25 February 2013 (UTC)
Well I have the one line cron job ready to go, and am thinking about being bold about it, but I would much rather Joe add the 15 minute interval to Joe's Null Bot. Joe? Neo Poz (talk) 22:13, 25 February 2013 (UTC)
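For reference, the job under discussion is tiny - a minimal sketch using the API's purge action, to be run from cron every 15 minutes; it assumes anonymous POST purges are accepted, so treat it as illustrative:

```python
# Sketch of the proposed job: purge Main Page via the API every 15 minutes
# (scheduling left to cron). Assumes anonymous POST purges are accepted.
import requests

requests.post('https://en.wikipedia.org/enwiki/w/api.php',
              data={'action': 'purge', 'titles': 'Main Page', 'format': 'json'})
```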
Request: a very quick search-and-replace in ~250 articles
Over 200 articles that cite the Cancer Dictionary all need the same trivial change to their links to the original source: the base URL, but not the article codes, for those entries has been changed on the cancer.gov website. Example: in Peritoneal mesothelioma, the link to
http://www.cancer.gov/Templates/db_alpha.aspx?CdrID=44992
should be replaced by
http://www.cancer.gov/dictionary?CdrID=44992
A full list of the articles that need changing is available for this set of pages.
I could easily fix this myself with my own bot in a few lines of code, but don't want to have to go through the process of obtaining permission to do so for such a small task: if someone with an already authorized search-replace bot is in a position to do this without putting themselves to any great effort, and would be kind enough to help, I'd be very grateful.
Alternatively, if this is such a small task that it would be OK to just do this without going through the usual bot-task process, I'd be happy to do it myself.
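For the record, the substitution itself is a one-liner; a sketch:

```python
# Sketch: rewrite the old cancer.gov dictionary links to the new base URL.
import re

def fix(text):
    return re.sub(r'http://www\.cancer\.gov/Templates/db_alpha\.aspx\?CdrID=(\d+)',
                  r'http://www.cancer.gov/dictionary?CdrID=\1', text)
```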
Thanks, -- The Anome (talk) 18:45, 22 February 2013 (UTC)
- Filing BRFA -- Cheers, Riley 19:03, 22 February 2013 (UTC)
- Done. Thanks for filing this request! -- Cheers, Riley 07:04, 23 February 2013 (UTC)
Reference collisions
I've recently discovered that AnomieBOT will create conflicting references when there's a citation error in an article and there are references that are created by {{singlechart}}. Note this edit, for example, where AnomieBOT's response to a broken reference named "Hungary" was to treat the reference "Australia" as orphaned, even though the Australia reference was fine. Given the discussion at User talk:AnomieBOT#breaking references, I'm not expecting a fix from AnomieBOT any time soon. To control the damage it's causing, I need to get a list of articles that are in Category:Singlechart making named ref and have a colliding reference defined both through <ref name=xxxxx> and through {{singlechart|...|refname=xxxx}}. Hopefully one of the existing bots with a good search capability can do that for me. I'll do the fixes manually, but I don't want to trawl through 450 articles by hand, searching for something that doesn't even give an easily visible error message.—Kww(talk) 07:00, 23 February 2013 (UTC)
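A regex-based sketch of the search, using both defining patterns from above - untested:

```python
# Sketch: list pages where the same name is defined both by <ref name=...>
# and by {{singlechart|...|refname=...}}. Untested.
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Singlechart making named ref')

for page in cat.articles():
    text = page.text
    refs = {n.strip('"\' ') for n in
            re.findall(r'<ref\s+name\s*=\s*("[^"]+"|\'[^\']+\'|[^\s>/]+)\s*>', text)}
    tpls = {n.strip() for n in
            re.findall(r'\{\{\s*[Ss]inglechart\b[^}]*\|\s*refname\s*=\s*([^|}]+)', text)}
    collisions = refs & tpls
    if collisions:
        print(page.title(), sorted(collisions))
```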
- Coding... Mutley1989 (talk) 08:12, 23 February 2013 (UTC)
- Done I think. Let me know if this is ok or if I've misunderstood your request. Code used here. Results (76 pages):
- Mutley1989 (talk) 10:12, 23 February 2013 (UTC)
- That looks like the list I need. Thanks. I'll take it from there, and Anomie did agree to insert code to keep the problem from spreading.—Kww(talk) 15:10, 23 February 2013 (UTC)
Bot to add missing merge tags
This sounds like something that may have been proposed before, but can we run a bot to add missing merge tags on all the articles in Category:Articles to be merged? I would say about half of the articles proposed to be merged do not have the accompanying tag on the target article. What I mean is that, if an article has {{mergeto}} or {{mergefrom}}, the bot would check that the target article has the reciprocal merge tag. My thinking here is that if there's an article missing one of them, and the merge is clearly not supported, someone would remove the tag from both articles.
Also, on a similar train of thought, a more permanent bot would be one that could check that these tags stay in place until the merge is resolved. I think many proposed merges are inactive simply because 50% of the editors have no idea there has even been such a proposal. Usually the merge tag winds up on the crappier, less viewed article rather than the highly viewed target. --NickPenguin(contribs) 01:49, 25 February 2013 (UTC)
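A sketch of the reciprocity check - detection only, and reading the merge target from the first positional parameter is an assumption about how the tags are usually filled in:

```python
# Sketch: detect merge proposals missing their reciprocal tag. Detection only;
# assumes the target is the first positional parameter of the tag. Untested.
import mwparserfromhell
import pywikibot

RECIPROCAL = {'mergeto': 'mergefrom', 'mergefrom': 'mergeto'}

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Articles to be merged')

for page in cat.articles():
    for tpl in mwparserfromhell.parse(page.text).filter_templates():
        name = str(tpl.name).strip().lower().replace(' ', '')
        if name in RECIPROCAL and tpl.params:
            target = pywikibot.Page(site, str(tpl.params[0].value).strip())
            # crude check: is the reciprocal template anywhere in the target?
            if target.exists() and RECIPROCAL[name] not in target.text.lower().replace(' ', ''):
                print('%s -> %s: missing {{%s}}' % (page.title(), target.title(), RECIPROCAL[name]))
```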
Search for missing Featured Picture pages
Featured pictures are (with one or two exceptions) stored on Commons but have a local page that links to their FP nomination page and indicates the date they were picture of the day, if applicable. Sometimes, however, the FP templates are replaced due to vandalism, and then the pages are deleted per WP:CSD#F2 (example 1; example 2). These bother me and I would like to fix them; those local templates are the primary advertisement the FP project gets. I find these at random, but it seems like it would be fairly straightforward to write a script that would list all FPs with no local page, just by going through the subpages of the directory (one way might be to compare images linked in the directory to Category:Featured pictures; if they're not in the category they're missing the local page or its templates). Piece of cake, right? Thanks. Chick Bowen 19:14, 25 February 2013 (UTC)
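A sketch of the comparison - the directory root and the way images are linked from its subpages are assumptions, so adjust to the actual FP page layout:

```python
# Sketch: images linked from the FP directory's subpages but absent from
# Category:Featured pictures probably lack the local page or its templates.
# Directory root and link structure are assumptions; untested.
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
directory = pywikibot.Page(site, 'Wikipedia:Featured pictures')
in_cat = {p.title() for p in
          pywikibot.Category(site, 'Category:Featured pictures').members(namespaces=6)}

listed = set()
for sub in directory.linkedPages(namespaces=4):   # the thematic subpages
    listed.update(img.title() for img in sub.imagelinks())

for title in sorted(listed - in_cat):
    print(title)
```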
Update languages refs
{{Ethnologue}} has been used as a shortcut to {{Ethnologue16}}. However, with the publication of the new edition of Ethnologue today, this is no longer appropriate. Please convert all transclusions of {{Ethnologue}} to {{Ethnologue16}}, then delete {{Ethnologue}}.
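The conversion itself would be a simple substitution, e.g. this sketch, which preserves parameters but leaves the deletion step to the process:

```python
# Sketch: retarget {{Ethnologue}} transclusions to {{Ethnologue16}},
# keeping all parameters intact.
import re

def fix(text):
    return re.sub(r'\{\{\s*[Ee]thnologue\s*([|}])', r'{{Ethnologue16\1', text)
```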
— kwami (talk) 21:23, 25 February 2013 (UTC)
- WP:TfD is the third door on your left. Werieth (talk) 21:29, 25 February 2013 (UTC)
- Why does this need discussion? It's just a redirect. — kwami (talk) 21:42, 25 February 2013 (UTC)
- Bot operators do not make unilateral changes. Consensus needs to be reached prior to having a bot change things. Werieth (talk) 22:00, 25 February 2013 (UTC)
- Why does this need discussion? It's just a redirect. — kwami (talk) 21:42, 25 February 2013 (UTC)