Wikipedia:Village pump (technical)
Newcomers to the technical village pump are encouraged to read these guidelines prior to posting here. Questions about MediaWiki in general should be posted at the MediaWiki support desk.
Lua 10-second timeout unusable for articles
We cannot use big-scale Lua in articles at this point, such as the Lua-based wp:CS1 cite templates, due to Lua crashing pages at a 10-second timeout. I have been very worried about trying to use Lua in actual large articles, because during months of testing on the test2 wiki, I noticed how real articles would occasionally get "Script error" when formatting hundreds of Lua-based cite templates. The appearance was:
- "Gee, sometimes Lua gets tired of running and cries 'Script error' for fun".
- The problem was so rare that I just wondered about it, and who really has time for this, but months later someone finally noted "Lua seems to have a 10-second timeout", which is severely small compared to markup-based templates, which can keep reformatting text within a 60-second timeout. Of course, server load varies greatly by the minute (rarely dragging to triple-slow), but it often doubles a 6-second Lua run into a 12-second one, and bingo, the whole page exceeds the 10-second Lua timeout, and further Lua-based templates start storing "Script error Script error Script error" repeatedly into the formatted cache-copy of a page, for thousands of readers to view. Fortunately, we have not deployed any large, major Lua-based templates (I think), so there are unlikely to be pages studded with "Script error" which readers can view daily. However, at this point, I cannot recommend using big-scale Lua until the 10-second timeout is raised, to perhaps a 30-second timeout, pending other reasons why Lua modules must be limited by total run-time duration. I think Lua provides a potential net benefit to Wikipedia, but we must find a way to allow Lua modules to run longer than a 10-second timeout on busy servers. Meanwhile, we can continue to rewrite the markup-based templates to be several times faster (such as {cite_quick}), until the Lua technology is configured to handle the "big league" world of major articles, with templates used in over 1.5 million pages. -Wikid77 (talk) 07:37, 2 March 2013 (UTC)
- Can you give some example test articles where Lua cites time out or come close to timing out? It would be good to see how much wiggle room we actually have and if there are ways to make the scripts faster. Dragons flight (talk) 08:16, 2 March 2013 (UTC)
- Tests show that frame:preprocess is very slow (I think around 150 of them hit the 10 second limit). The manual says that frame:expandTemplate should be used if expanding a template, but it is slow as well. Is the citation module using either of those? I don't feel like finding it, so perhaps a link? Johnuniq (talk) 10:37, 2 March 2013 (UTC)
- i might be wrong here, and if so i'd love it if someone "in the know" will put things straight, but i think the above is based on misunderstanding. the 10 seconds budget is not for wall-clock time, but rather for CPU time, and as such is independent of server load. when the server is loaded, it might take more time (wall-clock) to allocate the same 4 seconds of CPU time because the CPUs are busy elsewhere, but it will still show the same "4 seconds", and is not more likely to hit the 10-seconds budget limit than when the server is idle.
- i second the request to point to a page with high (let's say, above 3 seconds) Lua-engine time, as reported in the "newPP" comment inside the html. it is very possible such a page exists, and if you can find one it will be very interesting to try and understand the issue. if you have an experimental Lua-based template that can replace an existing template, why not create a sandbox with a copy of some large article where the template is switched, and see if you can push Lua time to 4-5 seconds? peace - קיפודנחש (aka kipod) (talk) 15:04, 2 March 2013 (UTC)
- I would think it is CPU execution time (when executing a Lua module), and not wall clock time. I did not mean to suggest otherwise. BTW, when I said "around 150" I forgot that 150 was the approximate number of operations, but each operation actually invoked preprocess three times. Johnuniq (talk) 01:08, 3 March 2013 (UTC)
- Imagining the fox in the Lua henhouse: When people cannot believe that Lua script in the henhouse could ever reach the 10-second timeout, and that there is no such problem as a "sly fox", well, here is what the dead chickens look like in the Lua henhouse; they state:
- "The time allocated for running scripts has expired."
- "The time allocated for running scripts has expired."
- "The time allocated for running scripts has expired."
- The way this Lua fox works in the Lua henhouse is that he, very slyly, causes a Lua module which normally runs in just a few seconds to expand its Lua CPU time to over 10 seconds, such as running "11.23" seconds, and then all subsequent Lua functions are rejected due to time "expired". The problem is rare, and might take dozens of trials to reappear, so perhaps we should create an artificially large testcase to make the rare timeouts more likely to occur. It does not help to claim a "misunderstanding" about the dead chickens in the Lua henhouse, as if the 10-second timeout could never, ever happen. Hence, step 1, let's start thinking about why Lua must be limited to a 10-second timeout, and consider whether a 30-second timeout would be OK. If the Lua timeout limit cannot be raised beyond 10 seconds, then Lua cannot be used in large-scale templates, and there would be no need to "prove" anything. Then, instead, we can get back to adding rapid wp:parser functions, and restructuring markup logic, so that templates can run much faster. In future years, as technology improves to allow Lua-based templates to run over a 10-second duration, we could reconsider using Lua to help rather than crash articles with "Script error Script error Script error". Fortunately, I already found a way to make the wp:CS1 cite templates run about 4x faster without using Lua. Also, Lua can be used for other purposes, rather than rewriting large templates from markup into Lua. -Wikid77 (talk) 16:06, 2 March 2013 (UTC)
- i guess i am just stupid, but i could not understand your last message at all.
- i did not see anyone claiming that it's physically impossible for lua script to exceed its 10 seconds budget - such a claim would be plain silly. but can you please point to a page (can be a sandbox in user space) where lua script executes in, say, 5 seconds? i do not mean a test page that does something with the sole intention of stretching the limits, but rather a copy of a "real" page from article space, where some templates are replaced with lua counterparts. this may help better understand the issue, and may even expose poorly written or misused scripts. thx, peace, קיפודנחש (aka kipod) (talk) 16:32, 2 March 2013 (UTC)
- Here's one: Bays of the Philippines -- WOSlinker (talk) 17:20, 2 March 2013 (UTC)
- We also have Category:Pages with script errors, which contains a few things. Most of the ones that are timeout based come from having a very large number of {{coord}} or a very large number of {{convert}}, neither of which is a great example, since both template families are still slated to be overhauled. However, it is noteworthy that the timing list shows "Scribunto_LuaSandboxCallback::getExpandedArgument" as the slowest step (96% of the time for Bays of the Philippines). If I understand correctly, "Scribunto_LuaSandboxCallback::getExpandedArgument" is actually a part of the process that Mediawiki uses during the #invoke step and not something that the guts of the script is doing. That suggests that the developers may need to better optimize that step, or perhaps not count the #invoke setup time against the Lua execution time limit. Dragons flight (talk) 18:12, 2 March 2013 (UTC)
- I migrated {{coord}} entirely to Lua, and the Lua execution time for the Bays article is now 0.8 seconds. Dragons flight (talk) 06:00, 6 March 2013 (UTC)
- Wikid, obviously Lua can time out. Any idiot could write a script that would time out simply by designing a very bad script. That's not really the point. In creating Lua, the developers had several design goals, which included replacing some of the more complex templates such as the cite family. The best way of making a case that 10 seconds is too low is to show examples of pages in actual use that can't be converted to Lua cites (or similar Lua conversions) because the timeout limit prevents it. Developers want there to be a limit to help prevent against denial of service attacks, but they also want to improve the functionality and performance of Wikipedia. If 10 seconds is inadequate for dealing with cite and coord and infoboxes, etc., then we can make the case for higher limits, but it is better to make that case using clear examples. For the record, I tried converting Barack Obama to Lua cites (sandbox) and those 336 references formatted in 2.7 seconds of Lua execution. That's a little long, but still has a lot of headroom, so I'd be interested in finding examples that come closer to the limit. Dragons flight (talk) 17:51, 2 March 2013 (UTC)
regarding Bays of the Philippines: this is an excellent example why we should look at those things. the problem here is that the "Math" module tries to be smarter than what's good for it, and to accept arguments which are expressions as well as numbers. IMO this is a wrong way to use the tools we have. IOW, i would yank the call to "preprocess" out of Math._clean_number(), and when a template wants to be able to accept expressions as well as numbers as its arguments, the template should pass this parameter to #expr before passing it to the module. so, if you want to allow something like {{order of magnitude|11 * 12}}, you should not force Math.order to evaluate its input and understand that 11 * 12 is really 132, but rather make {{order of magnitude}} look like so: {{#invoke:Math|order|{{#expr:{{{1}}}}}}} or somesuch. i.e., use #expr *in the template that invokes math*, instead of "preprocess" #expr from the module. doing so will bring Bays of the Philippines Lua time consumption to the sub-second region - prolly 100ms or so.
so yes, there can definitely be pages that consume more than 10 sec on their Lua evaluation, but the remedy isn't necessarily to bump up the Lua budget, but rather to do things the right way instead of the wrong way.
peace - קיפודנחש (aka kipod) (talk) 19:27, 2 March 2013 (UTC)
- You're actually mistaken, Math only calls preprocess if the input can't be understood as a number. The problem here is actually with the construction in {{coord/prec dec}}: {{#switch:{{max/2|{{precision1|{{{1}}}}}|{{precision1|{{{2}}}}}}}|0=d|1|2=dm|dms}}.
- Max is a Lua function, but the Lua timer is being charged for the time to expand and evaluate {{precision1|{{{1}}}}} and {{precision1|{{{2}}}}}. Further, the expansion isn't in Math._clean_number but rather in Mediawiki, since getExpandedArgument has to be called to fully expand all templates (in this case {{precision1}}) before the Lua function can be called. Essentially, {{#invoke:Math|max}} is being charged for the time required to evaluate {{precision1}}, which isn't actually a Lua issue, but rather a Mediawiki template issue. Dragons flight (talk) 19:45, 2 March 2013 (UTC)
- PS. Just to be clear, removing preprocess from Math._clean_number will do absolutely nothing here, since it isn't being called in this case. Dragons flight (talk) 20:03, 2 March 2013 (UTC)
- i verified that what you wrote above is correct, by copying the Math module to a sandbox, removing the call to preprocess, and using Special:TemplateSandbox to evaluate Bays of the Philippines. i still do not think that Math should call preprocess like this (i.e., i believe that the template should bear the burden of calling #expr if it wants to allow expressions as arguments), but this is beside the point. so the question is: is the time required to calculate/generate the parameters to max really deducted from the Lua 10 seconds budget? if this is the case, then i think it's wrong, no? so the remedy should not be to increase Lua's time budget, but rather to calculate Lua's time consumption more correctly, no? peace - קיפודנחש (aka kipod) (talk) 20:39, 2 March 2013 (UTC)
- I edited {{coord/prec dec}} to make the calls more directly Lua and remove the references to {{precision1}}, and now Bays of the Philippines completes with 6.9s of Lua runtime, so we can take that one off the list (and it will be even faster once coord is entirely a Lua module). Regarding the question of whether the #expr calls should be in the template or in the Math module, the main reason it is in Math is to facilitate cases like {{max}} where the arguments are passed implicitly and wrapping each term is impossible. Also, I'm not sure whether it is better to have a template that wraps all arguments in #expr on every call or a Lua script that checks if the input is a number and then only calls #expr if it isn't. The latter case might be faster on average, but I'm not entirely sure. To the last point, I generally agree that it is strange to have a situation where Lua is being charged for the time it takes Mediawiki to evaluate the arguments being sent to Lua. Dragons flight (talk) 00:02, 3 March 2013 (UTC)
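A minimal Lua sketch of the "check first, preprocess only if needed" pattern discussed here (an illustration only, not the actual Module:Math code; the function name is hypothetical):

-- Sketch: accept a plain number directly, and fall back to the parser's
-- #expr only when the argument is not already a number.
local function cleanNumber( frame, arg )
    local n = tonumber( arg )
    if n == nil and arg ~= nil and arg ~= '' then
        -- Not a plain number: let the parser evaluate it as an expression.
        n = tonumber( frame:preprocess( '{{#expr:' .. arg .. '}}' ) )
    end
    return n
end

The alternative discussed above would drop the fallback entirely and have the calling template wrap its parameter in {{#expr:...}} itself.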
- Possibly that was done on the cautionary principle: when creating a new system (Scribunto), make sure that denial-of-service attacks are avoided by checking CPU time in a simple and reliable manner. That might be adjusted after some months of experience. However, given the good fix that you implemented, and the scope for enormous improvements when {{coord}} is replaced by a module, I don't think any change to the way Lua execution time is charged is warranted. We can think of it as a hint that all time-intensive procedures should be updated. My brain has no spare capacity at the moment so I can't investigate, but I have implemented some precision and rounding code in Module:Convert. It was hideously tricky, and would benefit from checking, but anyone wanting to work on similar things might like a look. Johnuniq (talk) 02:40, 3 March 2013 (UTC)
- Charging Lua for markup parameters risks 10-second limit: I have an example to confirm that a single use of a Lua function, passing templates as parameters, can kill the Lua operation (due to the 10-second timeout) and show "Script error" rather than store formatted text into a page. For that reason, the Lua timeout limit should be greatly increased from a 10-second timeout to perhaps a 30-second timeout or more. Note: we do not need to change Lua's CPU time counter, just allow a higher limit. To run a fatal Lua-timeout test, try the following use of Lua-based {hexadecimal} in edit-preview:
- # {{hexadecimal|{{#expr: 10 +
- +{{convert|1|m|ft|0|disp=number}} +{{convert|1|m|ft|0|disp=number}}
- +{{convert|1|m|ft|0|disp=number}} +{{convert|1|m|ft|0|disp=number}}
- + ...repeat to have 300 lines of {convert}...
}} }}
- Because rounded {Convert} runs at about 55 per second, 600 {convert} typically take over 11 seconds, so the Lua CPU time will clock nearly 12 seconds, decide the 10-second limit has expired, and not perform the hexadecimal formatting of the result after totalling the 600 conversions. Although this is an artificial example, it shows the danger of the 10-second timeout in Lua-based templates, where the time needed to evaluate markup-based parameters is added to the Lua CPU clock, and then every subsequent Lua-based template (on the page) will crater and store the words "Script error" into the formatted page, for thousands of readers to view. -Wikid77 (talk) 14:51, 3 March 2013 (UTC)
- Er... that's insane, and just the kind of thing I'd hope would be blocked by limits. Why would someone use 600 {{convert}} templates as arguments to a single Lua-based template? The only reason I can think of would be a deliberate denial-of-service attack trying to tie up Wikimedia's servers. If there is a genuine reason to do this, then there is most certainly a better way to do it.
- In any case, I think it's a bug that the time taken to expand templates used as arguments to Lua-based templates is counted as time used by Lua. If this is the most significant cause of Lua timeouts, the solution is to stop counting template expansion as time used by Lua. There are already other limits that restrict regular template expansion. – PartTimeGnome (talk | contribs) 16:39, 3 March 2013 (UTC)
Clearing up some things here:
- The 10-second limit is CPU time, not wall clock time, as surmised by קיפודנחש and others. Specifically, in the LuaSandbox PHP extension used on WMF wikis, it uses POSIX timers with CLOCK_PROCESS_CPUTIME_ID to implement the timeout.
- "Scribunto_LuaSandboxCallback::getExpandedArgument" is not used during #invoke. It is used to handle accesses to
frame.args
: the first time any particular key is accessed, that function is called to get the value for the argument, and getting that value includes expanding the passed wikitext. Which means that it's actually worse to pull all possible arguments into local variables even if they won't be needed. There's nothing to be done to speed the function up, the code is very simple:
function getExpandedArgument( $frameId, $name ) {
    $args = func_get_args();
    $this->checkString( 'getExpandedArgument', $args, 0 );
    $frame = $this->getFrameById( $frameId );
    $result = $frame->getArgument( $name );
    if ( $result === false ) {
        return array();
    } else {
        return array( $result );
    }
}
The slow bit is the call to PPFrame::getArgument() on line 398, which is a part of the core parser. So anything that might speed that up would speed up non-Scribunto wikitext processing, too.
- Yes, frame:preprocess() and frame:expandTemplate() can be slow. They're not quite as simple as getExpandedArgument() quoted above, but they're close. frame:expandTemplate() is slightly faster than passing an equivalent string to frame:preprocess(), but the real advantages are in not having to construct the equivalent string in the first place and in not having to worry about trying to escape the argument values when doing so. frame:callParserFunction() is on my TODO list, BTW. (A minimal sketch of these calls appears after this list.)
- Yes, the time spent to parse an argument is counted against the Lua time limit. The counting of calls into PHP against the time limit in general is intentional, but there is a good case for making an exception for getExpandedArgument().
Hope that helps. BJorsch (WMF) (talk) 21:48, 3 March 2013 (UTC)
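A minimal Lua sketch of the calls discussed in points 2 and 3 above (a hypothetical Module:Example, not an actual module; the {{convert}} arguments are only illustrative):

-- Hypothetical Module:Example -- a sketch only, not an actual en.wiki module.
local p = {}

function p.demo( frame )
    -- frame.args is expanded lazily: the wikitext passed as |1= is only
    -- expanded (via getExpandedArgument) the first time the key is read,
    -- so copying every argument into locals forces expansions you may not need.
    local first = frame.args[1] or ''

    -- Direct template expansion; slightly faster than building the
    -- equivalent wikitext string for frame:preprocess.
    local viaTemplate = frame:expandTemplate{
        title = 'convert',
        args = { '1', 'm', 'ft', '0', disp = 'number' }
    }

    -- The same expansion via preprocess: the string must be constructed,
    -- and any escaping of argument values handled, by hand.
    local viaPreprocess = frame:preprocess( '{{convert|1|m|ft|0|disp=number}}' )

    return first .. ' ' .. viaTemplate .. ' / ' .. viaPreprocess
end

return p

From wikitext this would be invoked as {{#invoke:Example|demo|some argument}}.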
- Thank you for describing those issues. We certainly wish there were some clever way to speed up the #invoke interface, which is showing a limit of ~180 per second, while simple markup-based templates can run at 350-1,200 per second. However, for now, just raising the 10-second timeout to about a 30-second timeout would help. I have an example below, {{Weather_box/cols/sandbox}}, which shows how, when Lua quits, the "scribunto-error" tags are stored in the page-cache version, where readers can see the "permanent" stoppage of Lua formatting until the page is reformatted to reset the cached copy. -Wikid77 (talk) 22:28, 3 March 2013 (UTC)
- Example of Lua 10-second timeout in template doc page: Based on the above conversation, which questioned how hundreds of calculations could appear in an actual page, I remembered that most template limitations are first noticed in template /doc pages, and that is where a typical case of Lua hitting the 10-second timeout can be demonstrated: {{Weather_box/cols/sandbox}}. In supporting Template:Weather_box, many cell-color helper templates have used the Lua-based Template:Hexadecimal over 200 times per article, but the /doc text of the helper subtemplates uses {hexadecimal} many hundreds of times. For the total Lua time used, each use of {hexadecimal} also runs several #ifexpr or #expr parser functions, and the total time often exceeds the 10-second timeout limit. At that point, the final table, of "rain days in a month", stops getting the hexadecimal values for blue table-cell colors and all table cells show: <td class="scribunto-error"> or such. Look inside the bottom of the HTML version of that /cols/sandbox page. More later. -Wikid77 (talk) 22:28, 3 March, 13:12, 4 March 2013 (UTC)
- Lua can perform an amazingly large amount of work in well under 10 seconds, so there is some other factor responsible for time outs. In my sandbox are 1689 template calls, and each call invokes Module:Convert, and each of those invokes causes Module:Convertdata to be "required". That is a stupendous amount of work, yet when I just looked the Lua time usage was 4.306s. See Module talk:Convert for info, including mention of how mw.loadData is likely to at least double that speed. Johnuniq (talk) 02:37, 4 March 2013 (UTC)
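A minimal sketch of the mw.loadData approach mentioned above (it assumes a data module named Module:Convertdata returning a plain table; the field names used here are hypothetical and the real module's layout may differ):

-- mw.loadData loads and caches the data table once per page render,
-- whereas a plain require() ends up re-loading it for each #invoke,
-- as described above.
local data = mw.loadData( 'Module:Convertdata' )

local p = {}

function p.symbol( frame )
    local unit = frame.args[1] or ''
    local entry = data[unit]        -- read-only access to the cached table
    if entry then
        return entry.symbol or ''   -- "symbol" is a hypothetical field name
    end
    return ''
end

return p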
- Yes, there are other factors responsible. The other factor is that Lua is getting charged for the time that the parser takes to compute the parameters being sent to Lua. For the weather box example, 85% of the time is being used to compute the parameters that are being sent to Lua, while only 15% of the time is actually being used to do computations within the 100s of Lua calls. If we didn't charge the Lua clock for the template expansions and parser functions that are being executed to generate the inputs for Lua, then the present issue would be largely avoided. This is essentially just another case of the parser being slow, but because of the implementation, that slow parser can also cause Lua to run out of time through no fault of the actual Lua code. Dragons flight (talk) 06:42, 4 March 2013 (UTC)
Counting the preprocessing of #invoke parameters against the Lua time limit certainly seems problematic. I've taken the liberty of opening bug 45684. More generally, I don't understand the need for a Lua time limit at all. As long as Lua counts against the overall limit on page render time (60 seconds, if I understand correctly), doesn't that suffice? To take an extreme example, if a page with 45 seconds of markup-based templates could be converted into 15 seconds of Lua-based templates, wouldn't that be a good thing? Toohool (talk) 06:05, 4 March 2013 (UTC)
- Meanwhile focus on faster markup templates: Thank you for filing that Bugzilla #45684. I agree with your logic about the 60-second limit, and I have noted another danger of the 10-second timeout here: if the timeout is not fixed (somehow), then rather than recode 45 seconds of markup into 19 seconds, as 8 markup and 11 Lua seconds, the page will continue to use the 45-second markup-based templates, to avoid the risk of a Lua timeout above the 10-second limit. However, the near future is not totally grim; we can optimize the markup-based templates, without Lua, to run faster. In fact, I have written a streamlined Template:Hexdigits2 which is 3x faster than the Lua-based Template:Hexadecimal but only handles 2 digits, not full hexadecimal numbers. Also, some Lua-based templates might be used only a few dozen times per page (not 2 thousand {hexadecimal}), and those could run within 2 seconds of Lua, far below the 10-second Lua timeout limit. -Wikid77 (talk) 13:12, 4 March 2013 (UTC)
- Example of March 10 Lua timeout at 10-second limit: I have re-created the example of a Lua timeout, in Template:Weather_box/cols/sandbox2, after the prior /sandbox timeout from last week was rewritten away. Running several edit-preview sessions has confirmed that the timeout varies slightly, at 10.030-10.045 seconds each time, after ~2,000 Lua function calls with Template:Hexadecimal. The timing is generally consistent, but not exact to within 1 or 2 functions; rather it varies by 500 or more function calls each time. So, for example, the timeout might re-occur at 3 different wikitables formatted on a page. Also, other timeouts have logged "11.3" Lua seconds, finishing whatever steps were in progress before checking against the 10-second timeout. Because of the variation, the timeouts might occur sporadically, where a large page could display fine many times, and then, "Bam! Lua kills the page". Overall, the severe 10-second limit seems to be a rare problem, mainly just causing additional fears of using Lua, but it should not stop progress in transitioning to some Lua-based templates that are 5x-7x faster than markup templates. -Wikid77 (talk) 11:48, 10 March 2013 (UTC)
Tools to identify high-priority and high-yield articles?
Is there a tool to identify short, "low-quality", and highly-viewed articles? Limits for quality might include a certain number of cleanup templates, {{multiple issues}}, or Category:Wikipedia articles needing rewrite.
Also, is anyone aware of a list that sorts Category:Wikipedia articles needing rewrite based upon page views? I'm hoping to help a classroom (or classrooms, eventually) identify some targets for editing. Thanks. Biosthmors (talk) 19:36, 4 March 2013 (UTC)
- I'm not sure if it's what you're looking for, but right now SuggestBot does this for the Community Portal and to populate Special:GettingStarted (which is an experimental interface delivered to new users immediately after signup). We're starting to build a more permanent system for doing that within the Special page, including expanding from copyediting to two more tasks from Wikipedia:Backlog. The kind of queries necessary to gather this data are expensive to run (searching across categories is slow, and parsing wikitext for templates or other issues is just as bad, if not worse), but we're trying to get to a place where we can have some kind of dashboard of simple tasks. Regarding the page view issue, we initially prioritized the GettingStarted list based on pageviews, but found that two things happened:
- The list almost never fully refreshed. The number of very popular articles that are tagged with even common issues like copyediting is radically small.
- In the case of GettingStarted, we're focusing on tasks for newbies. It turns out that some very popular pages with issues tend to be articles like Pornography in India or BLPs, which attract more vandalism than we like. Popular articles as measured via page views are likely to be more appropriate for experienced editors to focus on.
- Anyway, there's a general interest at the Foundation from multiple angles in trying to develop a better task system. People have been vaguely thinking about ways Wikidata could be used for these as well. Steven Walling (WMF) • talk 20:26, 4 March 2013 (UTC)
- We've been experimenting with adding info about viewership and predictions of article quality to SuggestBot. For those who do not know SuggestBot, it posts a list of article suggestions to user talk pages (or a page in your userspace, if you'd prefer that), where each article has to be a member of a specific category identifying it as needing improvements (e.g. "needs sources" or "cleanup"). One of the things we've been working on lately is a redesign of SuggestBot's posts where info on viewership (avg # of views/day) and assessed and predicted quality are available, so editors can use that info when they decide which articles to work on. Hopefully we'll have that launched in a few weeks. Regards, Nettrom (talk) 20:49, 4 March 2013 (UTC)
- There's also a new-user-friendly workflow for signing up for SuggestBot suggestions, based on wikiproject article categories, up on the Teahouse, which Nettrom and I put together. It's more a functional proof of concept right now, but I can see a bunch of ways of using it to route people to both relevant editing tasks and interesting projects. I'm (slowly) developing and documenting the code for generating lists of article categories now. - J-Mo Talk to Me Email Me 22:30, 4 March 2013 (UTC)
- There are also lists of popular pages by WikiProject. The lists show both the assessed class and importance (which is usually a fairly accurate indication of quality), and since they are grouped by WikiProjects, it makes finding poor quality articles in an area that interests you quite easy.—Ëzhiki (Igels Hérissonovich Ïzhakoff-Amursky) • (yo?); March 4, 2013; 22:47 (UTC)
- ^ The toolserver tool is what I was going to suggest. --Izno (talk) 22:57, 4 March 2013 (UTC)
- Many high-pageview articles need cleanup monthly: Examine many of the top 5,000 most-viewed articles (see list wp:5000), and note the continual amount of copy-editing needed, since the last major cleanup, perhaps 1 month earlier. The article which cemented this concept, in my mind, was for the uber-famous singer/actress "Jennifer Lopez" which needed about a hundred punctuation and phrase changes after numerous edits by various editors. I can go to almost any major article and find over 50 clerical corrections, such as use of commas or italic titles, and many need to set each image "alt=" text to describe the image for sight-impaired readers who have difficulty discerning the image contents. The implication of this is staggering: every multi-edited article needs cleanup, after a period of open editing by the general public. Compare article-edit traffic to people walking through retail stores, where enough dirt, debris, or mud must be cleaned away, following a period of heavy foot-traffic. There is no need to even use a tool for a list: just think of a famous person, place or thing, then go to that article and keep editing until 50 clerical corrections are made, plus add a few sources where needed, then repeat in a month or two. -Wikid77 (talk) 23:10, 4 March 2013 (UTC)
- So is the number of IP edits received recently a valid "signal" for a low quality article? Jennifer Lopez is semi-protected so I guess that wasn't the issue there. This comes back to a wikitrust-like system, I suppose, that can differentiate between the editing patterns of different registered users. As a new editor I've been surprised to learn WP is so actively propped up by ongoing vigilance. It seems like blowing into a balloon that has holes in it; you can't stop. Long term I would hope artificial intelligence would take on more of the burden, like it does with some vandalism today, but I'm not holding my breath. Silas Ropac (talk) 13:31, 5 March 2013 (UTC)
- "Artificial intelligence is no match for natural stupidity." — Arthur Rubin (talk) 21:51, 10 March 2013 (UTC)
- Answers: Generally, ~30% of IP edits are "hack edits" as graffiti or misguided facts, and that pattern seems very predictable across many articles. Although vandalism might seem a great danger, excessive outdated detail is worse, as in the quote, "The problem is not what they don't know, it's what they know for sure that just ain't so". Also, too many editors add trivial "fun facts" to articles (wp:Datahoard), or add speculative cruft text (wp:MINDREAD); we had a recent controversy because WP editors insisted on including an author's supposed "inspiration" for writing a specific novel, and the author complained the article was wrong about why he wrote his book, but debates arose about him being a wp:reliable source about his own thoughts, and so I wrote the essay "wp:Beware mindreader text" to warn against including inspiration rumors. As for the active vigilance to revert hack-edits, I would not worry too much, because numerous long-term editors still enjoy reverting everyone. Perhaps there is an alluring sense of "censorship power" to revert and shut up other people, so we have no shortage of editors patrolling articles. Some bots seem to already use "AI" to detect hack edits, but also, I have suggested using rapid Lua scripts to detect omitted facts or added rumors, as in Template:Watchdog, so the emphasis could be expanded to ensure crucial facts are not removed from a particular article, or common related fringe ideas are not re-added, by watching for hot-topic words inserted during an edit. Hence, we will be using more human intelligence to judge articles, with better computer-assisted tools, not just traditional AI techniques. The future is very exciting at this point, with over 9,200 editors each updating many articles every month. -Wikid77 (talk) 06:47, 11 March 2013 (UTC)
Link to AfD discussion in template incorrectly displaying as redlink
I've noticed a very weird and specific problem with the AfD template, or at least one that is occurring at Periphery (BattleTech). If I view the page while I am not logged in, the link to the AfD discussion is displayed as a redlink with alt-text indicating the page doesn't exist. However, if I click on the link, it takes me to the AfD discussion (which does in fact exist). The link displays correctly (as blue, not red) if I'm logged in.
I'm using Chrome and Windows 7. Does anyone else see this occurring? Some guy (talk) 23:56, 4 March 2013 (UTC)
- Interesting. Same here, with Opera 11.61 / Linux. - Nabla (talk) 00:02, 5 March 2013 (UTC)
- This often happens with AFDs and is browser/OS-independent. It can happen if the AFD page is created after the AFD template is added to the article; what has happened is that the MediaWiki software hasn't yet got around to changing the link colour from red to blue. All you need do is WP:PURGE the article page. I've switched on 'Add a "Purge" option to the top of the page, which purges the page's cache' at Preferences → Gadgets to allow this to be done quickly.
- I've seen it happen with other types of XFD, like CFD, but it's rarer because they often have per-day discussion pages not per-article. --Redrose64 (talk) 00:13, 5 March 2013 (UTC)
- This is frequent when Twinkle is used for XfD nominations as in Periphery (BattleTech). Twinkle edits in the best order by creating the discussion page first, but the deletion tag is added to the nominated page a moment later, too fast for the servers to register that the discussion has been created. I have seen at least 10 reports of this. Twinkle changes with purging or a delay after page creation have been suggested, for example at Wikipedia talk:Twinkle/Archive 28#AfD template red link, but no changes have been made. PrimeHunter (talk) 02:52, 5 March 2013 (UTC)
- Ah, interesting. Thanks for the info! Some guy (talk) 03:52, 5 March 2013 (UTC)
- Ditto - Nabla (talk) 20:43, 7 March 2013 (UTC)
Article feedback tool gone?
Just curious: has the (old) article feedback tool been switched off? It no longer seems to be shown on articles I view, where I think it was still around yesterday or thereabouts. I'm talking about the old one, where people just graded articles but didn't leave comments. Will the new tool be deployed to all these pages now? Fut.Perf. ☼ 19:26, 5 March 2013 (UTC)
- Yes, it was undeployed this morning. The new tool is deployed but is done so on an "opt-in" basis.--Jorm (WMF) (talk) 19:32, 5 March 2013 (UTC)
- See more info in this update from Fabrice Florin.--Eloquence* 00:01, 6 March 2013 (UTC)
Table sort question
The sorting in List of dog breeds doesn't seem to be working.
I see some recent discussions about sort issues, but at least one said there were problems that went away.
Can anyone see the problem?--SPhilbrick(Talk) 21:26, 5 March 2013 (UTC)
- I think that it's the refs in the column headings. --Redrose64 (talk) 21:53, 5 March 2013 (UTC)
- no, it's not. sortable tables have this quirk, where every single row must have exactly the same # of columns. in this case, 5 rows missed the last column. adding "||" to each of them solved the problem. however, i'm not sure how valuable it is to be able to sort on a column, if so many of the cells in this table are empty. peace - קיפודנחש (aka kipod) (talk) 22:25, 5 March 2013 (UTC)
- Thanks! I work with sortable tables a bit, but missed that. It makes sense, but it didn't occur to me to look for that. I'm guessing it was working, then someone added an entry and assumed that they did not need to fill out the rows when some cells were blank, so it may happen again. Good to know.--SPhilbrick(Talk) 14:06, 6 March 2013 (UTC)
- i doubt anyone did it this way intentionally: regardless of disabling the sortability, it makes the table ugly (the right border of the table appears broken), and if it was intentional, why create the other empty cells? these rows ended with |||||||||| instead of |||||||||||| or somesuch (i bet anyone can spot the difference instantly). i believe it was just that whoever added these rows (prolly not the same person who added them all) just did not count the columns carefully enough. wiki-code for tables is far from being friendly... (not that html table code is any better - counting <tr> and <td> is not easier than counting |- and ||). peace - קיפודנחש (aka kipod) (talk) 15:39, 6 March 2013 (UTC)
- I assumed it was accidental, rather than deliberate, but I haven't actually looked at the edit that introduced the problem.--SPhilbrick(Talk) 19:23, 6 March 2013 (UTC)
Long file edits in PHP
I was all excited about finally getting time today to get Chartbot to work, but failed. It scans the files, performs the URL substitutions, verifies our links against Billboard, everything, but dies when it tries to write the file. Stripping my broken code down to its simplest form yields User:Chartbot/simplified, which works when I set the filelength to 634 bytes, but crashes at 635 with an HTTP 417 error. This seems to be because our servers want me to use a different content type. Try as I might, I can't figure out a way to control the content type that http_post_fields uses when it posts the data. Anyone got a clue? I really hate having a bot that can do all the complicated logic but can't store the final result, and I've struggled with this all day.—Kww(talk) 01:55, 6 March 2013 (UTC)
- I have a feeling this is to do with the Expect: 100-continue header. Certainly that's been an issue for me in .NET in the past (in .NET it's very easy to fix too, simply set System.Net.ServicePointManager.Expect100Continue to false). I believe the correct way to fix this in PHP is to add 'Expect' with a blank value to your headers array. Not really looked at your code closely, or how you are handling headers, but it looks like that would be a case of adding $http_options['Expect'] = ''; after setting $http_options['cookies']. - Kingpin13 (talk) 02:47, 6 March 2013 (UTC)
- Got a version working with Curl, which will let me suppress the "Expect" header. It also handles the long file encoding more reliably, so it wasn't just busywork.—Kww(talk) 16:22, 6 March 2013 (UTC)
- Good to hear. Yeah, curl will probably be better in the long run. - Kingpin13 (talk) 21:51, 6 March 2013 (UTC)
Lua migration of Coord
I just migrated {{coord}} to use the Lua Module:Coordinates. This provides a great improvement in rendering speed for pages with many coordinate templates. Under normal circumstances the output of the Lua version should be identical to the output of the old template, except for a few edge cases. Specifically, the old template would drop trailing zeros and make some anomalous precision choices in displaying decimal formats. This has now been standardized. The Lua version also displays more verbose error messages when given malformed content.
Dozens of test cases were run and validated before making the change. That said, there may still be some unexpected edge cases, so please look out for any problems with coordinate tags. In particular, Category:Pages with malformed coordinate tags may catch most errors. Dragons flight (talk) 05:53, 6 March 2013 (UTC)
- not sure if this question is related to the lua conversion, or if this was the same even before the conversion, but i find the decimal display of coordinates (say, in Bays of the Philippines) a bit strange. shouldn't it be displayed as DMS rather than a decimal fraction? how did it look before the conversion? regarding performance improvement, this is very impressive. i would not have guessed that innocent-looking {{coord}} would be so bad, performance-wise. i sure hope we'll see similar improvements with the "cite" family of templates. peace - קיפודנחש (aka kipod) (talk) 20:12, 6 March 2013 (UTC)
- The display characteristics are unchanged. The template gives a decimal display by default when fed decimal parameters, {{coord|1.124|34.567}} = 1°07′26″N 34°34′01″E / 1.124°N 34.567°E, and a DMS display when fed DMS data, {{coord|1|14|N|34|56|E}} = 1°14′N 34°56′E / 1.233°N 34.933°E. {{coord}} also has a format= parameter to force a DMS or decimal display when fed the other kind of data. Lastly, Template:Coord#Per-user_display_customization also gives instructions for changing the appearance of coord data on a personal basis using CSS. Dragons flight (talk) 20:32, 6 March 2013 (UTC)
CSD transclusion chaos
Category:Candidates for speedy deletion is looking pretty busy this morning - it seems that every page which transcludes the template {{Independent Production}} is now up for deletion, even though the category doesn't appear on any of the individual articles (that I've checked so far) or the template itself. I've asked CsDix, who was the last person to edit the template, to take a look, but if other technical bods wouldn't mind giving it the once over, I'd be grateful. Yunshui 雲水 09:09, 6 March 2013 (UTC)
- Found the source of the problem - all the affected pages transclude the redirect {{Independent production}} (lowercase "p"), which was recently tagged with {{db-g6}} (declined). I would have thought removing the G6 tag ought to have fixed the problem, though. Yunshui 雲水 09:21, 6 March 2013 (UTC)
- The page links table has had issues with updating lately; technically, a purge on each individual page might be needed. Legoktm (talk) 10:27, 6 March 2013 (UTC)
- I've tried purging several of the affected pages (and CAT:CSD as well, for good measure), but it doesn't seem to be having an effect. Yunshui 雲水 10:37, 6 March 2013 (UTC)
- Purging only affects the purged page itself. It doesn't affect categories where the page is displayed. That requires a null edit. I haven't found an article with the problem so it seems to be gone. Please post an example if you still see it, or if you report a problem another time. PrimeHunter (talk) 12:42, 6 March 2013 (UTC)
- Seems to be resolved now, yes. Clearly the technical solution was for me to go away and have a sandwich; I shall have to remember that fix in future. Yunshui 雲水 13:04, 6 March 2013 (UTC)
- The "cup of tea and a cookie/biscuit" fix also works in my experience!--ukexpat (talk) 15:37, 6 March 2013 (UTC)
- To clear them from CSD, do a null edit of each article. — RHaworth (talk · contribs) 15:43, 8 March 2013 (UTC)
Unresponsive script
Hello, I'm getting a script error whenever I open any article, pointing to
Script: http://bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&lang=en&modules=jquery%2Cmediawiki%2CSpinner%7Cjquery.triggerQueueCallback%2CloadingSpinner%2CmwEmbedUtil%7Cmw.MwEmbedSupport&only=scripts&skin=vector&version=20130218T165645Z:52
How to fix it? — Preceding unsigned comment added by 201.229.236.208 (talk) 10:37, 6 March 2013 (UTC)
createaccount function API
Does anybody know what the name of the token for this feature happens to be? I can't find it. If I'm asking in the wrong place, please let me know.—cyberpower ChatOnline 15:27, 6 March 2013 (UTC)
- Moar docs at mw:API:Account creation, if that's helpful at all. Steven Walling (WMF) • talk 21:46, 6 March 2013 (UTC)
- Is the documentation even updated?—cyberpower ChatOnline 23:56, 6 March 2013 (UTC)
- (1) Not really and (2) you can scrape a token from Special:Userlogin/signup. The alternative is repeating the request in a similar manner to mw:API:Login. (This I find rather stupid, as fetching a token through prop=info or action=tokens is more than sufficient for preventing CSRF.) MER-C 05:15, 7 March 2013 (UTC)
- Hey thanks. Looking through the source code on the signup page for the tokens, it appears it's called "wpCreateaccount". I'll try it out and see.—cyberpower ChatOnline 14:52, 7 March 2013 (UTC)
- Doesn't work.—cyberpower ChatOnline 02:01, 8 March 2013 (UTC)
- You'll need to supply the cookies you got on that page as well. MER-C 02:17, 8 March 2013 (UTC)
- I'm convinced that this feature isn't ready to be used. The API seems to be confusing itself when I try to use it. Also, there is no way to get a create token from the API; if page scraping is the only way to do it, then the function still has to be a work in progress.—cyberpower ChatOnline 13:45, 8 March 2013 (UTC)
- Is the API even ready for this feature? It's throwing back an unrecognized parameter that I'm not even pushing. I'm feeding it a token, and it's throwing back need token. HELP!!—cyberpower ChatOnline 02:14, 8 March 2013 (UTC)
- Hi. I've significantly updated the docs for that api module. I also have tested, and can confirm it works properly. (There's one issue: sometimes a "nocookiesfornew" error is returned when it shouldn't be. If that happens, do the request again and it should work. I submitted a fix for that issue, and it will hopefully disappear in a couple of weeks.) What is the request you are making, exactly, and what is it giving you back that is unexpected? needtoken means that the token parameter is either not specified, or incorrect (or cookies are not being sent). When you receive a result=needtoken, you should repeat the request but with the token parameter set to the token specified in the previous request (this is the proper way to get a token for this module). Bawolff (talk) 09:21, 9 March 2013 (UTC)
- I got it to work. There was a disconnect in the token retrieval function. I was doing it right all along but with a typo in the token name, which is why it wasn't getting a token.—cyberpower ChatOnline 13:26, 11 March 2013 (UTC)
PDF rendering error reported to OTRS
Folks, we have received an e-mail reporting a PDF rendering error. I am pasting the text of the issue below.
- I've tried to get the PDF of 'Stabat Mater' over the past week. It does not form into a proper PDF with the proper layout - this is the first time I have come across this issue.
- The message at the top of the PDF reads: "WARNING: Article could not be rendered - ouputting plain text. Potential causes of the problem are: (a) a bug in the pdf-writer software (b) problematic Mediawiki markup (c) table is too wide" --- this one on attempt dated 04Mar13
Would someone please take a look and see if there really is a problem and, if so, what the cause/solution may be. I assume the article being referred to is Stabat Mater. Thanks in advance.--ukexpat (talk) 15:34, 6 March 2013 (UTC)
- A quick test in my sandbox shows that the problem is with the section Stabat Mater#Text and translation. I guess the PDF renderer doesn't like {{multicol}}. -- Toshio Yamaguchi 16:02, 6 March 2013 (UTC)
- Thanks for troubleshooting. I have, temporarily, put a copy of the article in a user subpage, removed the multicol formatting and asked the user to try and create a PDF from that version.--ukexpat (talk) 18:18, 6 March 2013 (UTC)
The user was able to create a PDF from the version of the article that I saved in my userspace without the multicol formatting. Would someone please report this at Bugzilla for me - the bug being that multicol formatting as in the parallel text section of Stabat Mater breaks the creation of a PDF. Thanks.--ukexpat (talk) 15:17, 7 March 2013 (UTC)
- Filed a bug report myself.--ukexpat (talk) 18:38, 7 March 2013 (UTC)
Remove redirects from search bar
Is there any way to tag a non-neutral or factually incorrect, but otherwise acceptable, redirect so that it doesn't autofill as if it is an article title in the search bar? Ryan Vesey 15:54, 6 March 2013 (UTC)
- I believe that if the redirect page is in Category:Unprintworthy redirects, it will be excluded from the search box. It's best to use a WP:TMR - {{R unprintworthy}} will do this, but some others will too. --Redrose64 (talk) 17:47, 6 March 2013 (UTC)
- selected at random one "unprintable redirect" from the category, and started typing it in the box. sure enough, it was autocompleted. maybe this is not so for all of them, but i'd be surprised.
- afaik, the autocomplete is fed from the same database that powers the regular search, so something like the "noindex" magic word would do it, if there is such a trick that affects internal search. ttbomk, "noindex" only affects external search engines, and i'm not familiar with a similar magic word that removes a page from internal search, but if there is one, i'll bet it would prevent autocomplete also. (if you find something and want to try it, you should wait a day or two before it actually takes effect, just like it takes a day or two for a new article to appear in the autocomplete). peace - קיפודנחש (aka kipod) (talk) 18:04, 6 March 2013 (UTC)
- This has been brought up before at Wikipedia:Village pump (technical)/Archive 105#Unprintworthy cross-namespace redirects are not being excluded from the search dropdown as expected. AFAIK, no bug has been filed for this. – PartTimeGnome (talk | contribs) 23:27, 6 March 2013 (UTC)
Bug Day focusing on triaging MediaWiki - General/Unknown reports
You are invited to join the next Bug Day tomorrow Mar 07. We will go through fresh open reports filed against MediaWiki in the General/Unknown component. It is a good chance to learn about bug management and meet the people working on a daily basis with the bug database (which includes Wikipedia reports, yes). No MediaWiki expertise is actually required. There is work for everybody. :) --Qgil (talk) 18:57, 6 March 2013 (UTC)
- Reminder: This is in 10 minutes and everybody is welcome to join and say hello! --AKlapper (WMF) (talk) 14:52, 7 March 2013 (UTC)
Posting to the math reference desk is really slow
Posting to Wikipedia:Reference desk/Mathematics has been really slow for a few hours now. Whenever I try to save an edit, I get the Wikimedia Foundation error page ("Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes"). The edit does eventually get saved, but I never get a notification of that; eventually on one of my attempts I will get an edit conflict with my own edit, or I give up and try reloading the page and see my edit there.
I am not noticing this problem with other pages. I have experienced the problem on two different Internet connections. There is also an anonymous editor who has posted the same question six or eight times now, spread over several hours, presumably because he or she keeps getting this error and doesn't realize that the post has been successful, so waits for a while and tries again. —Bkell (talk) 19:38, 6 March 2013 (UTC)
- I am experiencing the same slowness. I wonder if the math(s) rendering is causing the problem? In any event, I did manage to remove the duplicate postings.--ukexpat (talk) 20:02, 6 March 2013 (UTC)
- If it's helpful, here are the details from the bottom of the error page that I get:
- Request: POST http://en.wikipedia.org/enwiki/w/index.php?title=Wikipedia:Reference_desk/Mathematics&action=submit, from 208.80.154.134 via cp1017.eqiad.wmnet (squid/2.7.STABLE9) to 10.64.0.134 (10.64.0.134)
- Error: ERR_READ_TIMEOUT, errno [No Error] at Wed, 06 Mar 2013 20:14:42 GMT
- —Bkell (talk) 20:16, 6 March 2013 (UTC)
- Same here. Just got the WMF error on the Math desk. The edit is listed in my contributions, though. -- Toshio Yamaguchi 21:39, 6 March 2013 (UTC)
- The basic issue is simple: you have too many uses of <math>. They seem to run around 0.3 to 0.6 seconds each, so having 20 or 30 equations is a little slow, but very workable. At a high point yesterday, the desk had 108 distinct uses of <math>, which is pretty extreme and starts getting into the territory where timeouts are possible depending on the performance of the individual equations. Obviously the Math desk cares about and needs to use math, so I don't know what to tell you. On occasions when things become unworkable, you probably need to look for ways to refactor to reduce or consolidate the equations used. It is possible there was some software change that made this issue worse recently, but I have no knowledge of any change that would have affected it. Dragons flight (talk) 17:24, 7 March 2013 (UTC)
- Use fewer math-tags or combine them: I have changed page wp:Reference_desk/Mathematics to show many math-tags in preformatted sections, to reduce reformat time by nearly 30 seconds, from 50 to 20 seconds. Timing tests have confirmed the numerous math-tags are using nearly 1 second each to format equations, but if combined into a single math-tag, multiple equations could be formatted in under 1 second of the 60-second timeout limit which triggers the wp:Wikimedia Foundation error. For example, the phrase ", or," could be inserted between 2 equations, within just a single math-tag: <math>2y-\lambda+2y=0, or, \lambda=4y</math>.
- On talk pages, if 20 math-tags were combined into just 4 math-tags, then the long reformat time could drop to nearly 3 seconds, although the formatting might be somewhat awkward. Otherwise, use the text form Template:J which can be displayed instantly. Perhaps a Lua-based template could be developed to simulate the math-tag to handle a simple formula. -Wikid77 (talk) 18:25, 7 March 2013 (UTC)
- I thought rendered math images were cached so that they wouldn't have to be regenerated all the time. In fact, in the past I've seen bugs where some math was rendered incorrectly, and then the bug was fixed, but the incorrect rendering was still seen in articles until the actual content of the <math> tags was changed (generally by adding \, or removing a space or something). In the present case, why should the MediaWiki engine be regenerating all of the math images on the entire page when I edit only one section? —Bkell (talk) 21:33, 7 March 2013 (UTC)
- afaik, mediawiki always renders the whole page when hitting "save", regardless of whether you edit a single section, the whole page, or nothing at all (aka "null edit"). to me it makes sense: e.g., when editing a section and changing a "ref", the "references" section should be re-rendered also; when adding or removing a ref, the reference numbers for all references in sections that come after the one you edit will change; when changing the section title, the TOC should change; and there might be other, less obvious connections between the content of one section and the rest of the page. so mediawiki does not try to optimize "on save rendering", and renders the whole page in all cases.
- peace - קיפודנחש (aka kipod) (talk) 22:09, 7 March 2013 (UTC)
- Perhaps the continual slow re-rendering of previous math PNG images (rather than cached) is a new problem. Earlier, the PNG software was thumbnailing various PNG images into slow high-resolution format, even though the PNG technology allows a quicker display with regular resolution, as seen on other websites. -Wikid77 (talk) 22:23, 7 March 2013 (UTC)
- Kipod: That makes sense. What I meant about caching was that I thought the MediaWiki software hashed the contents of the <math> tag, and then just used a cached rendering of the math if one existed. So even if MediaWiki is rendering the whole page, I thought it was able to skip the LaTeX/PNG rendering step for <math> tags whose contents hadn't changed. —Bkell (talk) 22:38, 7 March 2013 (UTC)
- Template:Bigmath runs over 220 per second: Although the symbols can be grainy, the Template:Bigmath generates similar math-formula output, but using ampersand-semicolon entities, such as "&lambda;" to show "λ". I have run several timing tests to confirm {bigmath} can process over 220 formulae per second, which I have explained in the {bigmath/doc} page. Using {bigmath} now could be over 200 times faster, until the math-tag software is streamlined, or fixed, to run faster. -Wikid77 (talk) 22:23, 7 March 2013 (UTC)
- I don't think there is a general problem with rendering <math> tags. We have not had previous problems with rendering times for the math reference desk, which often has many more equations than it currently does, or Help:Displaying a formula, which has lots of <math> tags. My guess is that something has changed in the backend causing re-rendering of the png images. Hopefully this is a temporary problem that will go away.--Salix (talk): 23:47, 7 March 2013 (UTC)
- Template:Math and Template:Bigmath do not “process” or “generate” anything, they just put a <span> around their arguments, so it’s no wonder they take negligible server time to render.—Emil J. 19:09, 8 March 2013 (UTC)
I've filed a bugzilla ticket, bugzilla:45973, that describes the apparent problems that have recently appeared with the math caching system. Dragons flight (talk) 02:50, 11 March 2013 (UTC)
Ghost bot?
My watchlist shows a raft-full of recent edits by User:Sk!dbot, but these edits don't appear in the page history and this user has no contributions (including no deleted contributions). Is this a ghost bot? --Orlady (talk) 22:18, 6 March 2013 (UTC)
- I expect these are wikidata edits (the new system for interwiki links on pages). You can see them over at www.wikidata.org. Click "Hide Wikidata" at the top of your watchlist to get rid of them. - Kingpin13 (talk) 22:26, 6 March 2013 (UTC)
Edit mode problem
I've repeatedly run across this problem: when I edit a redirect that contains absolutely nothing but the #REDIRECT statement, add one or more "R" templates and/or one or more categories, and then do a Preview, no list of included templates and categories is provided. But when I save my modifications and re-edit the redirect, everything is as it should be, and further added templates and categories appear in the lists when previewed. Cbbkr (talk) 00:46, 7 March 2013 (UTC)
- I think a change was made at some point to not process any content on a page past the REDIRECT tag, but that it still will process on a diff. Someone else would know more of the background on that. Chris857 (talk) 01:21, 7 March 2013 (UTC)
- For some years it has been the case that the content is processed on the rendered page in order to extract the categories, as with Plympton railway station, but the behaviour has been changed, at least twice: the most recent was a few weeks ago, when diffs stopped showing the content after the
#REDIRECT [[]]
, although the categories are still shown: see here for example. This change in behaviour was noted at Wikipedia:Village pump (technical)/Archive 107#Redirect from misspelling. --Redrose64 (talk) 11:41, 7 March 2013 (UTC)
- Chris857, I've been trying to use parser functions to redirect my talk page on my home wiki to no avail; are you saying that if I put the parser functions in front of the redirect and save it to a variable, then call the variable from inside the redirect it should work? In other words, this doesn't work:
#REDIRECT [[User_talk:Technical_13/{{#time:Y|now -5 hours}}/{{#expr:ceil({{#time:n|now -5 hours}}/3)}}]]
BUT my understanding of what you are saying is that this would:
{{#vardefine:Year|{{#time:Y|now -5 hours}}}}{{#vardefine:Quarter|{{#expr:ceil({{#time:n|now -5 hours}}/3)}}}}#REDIRECT [[User_talk:Technical_13/{{#var:Year}}/{{#var:Quarter}}]]
Is that correct? — Technical 13 ( Contributions • Message ) 21:56, 8 March 2013 (UTC)
- No. #REDIRECT must be the first thing on the page; there can't be anything before it. I'm not certain, but I don't think parser functions can be used in the redirect target either. (I'm guessing that your home wiki isn't a WMF one, since Extension:Variables isn't installed here.) – PartTimeGnome (talk | contribs) 22:49, 8 March 2013 (UTC)
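For illustration only (not an existing gadget): since nothing dynamic is expanded in the redirect itself, a personal-script workaround could compute the year/quarter target client-side and navigate there. The page name and the "-5 hours" offset below are simply taken from the example above; this is a rough sketch, not a recommendation.
<syntaxhighlight lang="javascript">
// Rough sketch for a personal common.js: jump from the static talk page
// to its current year/quarter subpage, using "now -5 hours" as above.
if ( mw.config.get( 'wgPageName' ) === 'User_talk:Technical_13' ) {
    var d = new Date( Date.now() - 5 * 60 * 60 * 1000 );
    var quarter = Math.ceil( ( d.getUTCMonth() + 1 ) / 3 ); // 1..4
    window.location.href = mw.config.get( 'wgScript' ) + '?title=' +
        encodeURIComponent( 'User_talk:Technical_13/' + d.getUTCFullYear() + '/' + quarter );
}
</syntaxhighlight>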
Filter log and API
I don't know if this is indicative of a bug somewhere, but the filter log associated with the redirect API is surprisingly extensive (link). Many apparently innocent users get reported by the bot at Wikipedia:Administrator intervention against vandalism/TB2 because of this. Just posting here in case anyone is interested in investigating ... -- Ed (Edgar181) 14:29, 7 March 2013 (UTC)
- This is mainly the WikiLove tracking filter. Ruslik_Zero 19:18, 7 March 2013 (UTC)
- This is bizarre, as the API redirect has been fully protected since 2010, so users shouldn't even be able to reach the point that would trigger the filter to begin with. My best guess is some kind of bug in the MediaWiki API. jcgoble3 (talk) 19:20, 7 March 2013 (UTC)
- Could it be that filter hits triggered by an editing request through api.php (such as those made by the WikiLove extension) are erroneously logged as if the edit was done to the API page?—Emil J. 19:48, 7 March 2013 (UTC)
- That's very likely, given that all hits on the WikiLove filter have been logged on the API page since February 11. Something that changed on February 11 broke it. Could someone file a bugzilla report? jcgoble3 (talk) 20:30, 7 March 2013 (UTC)
- I believe this is already fixed by this changeset, which should be deployed here on Monday (see mw:MediaWiki 1.21/Roadmap for the schedule). Note that Feb 11 was the date 1.21wmf9 was deployed here, so rather than "something that was changed on February 11" it's more likely "some change that was included along with 1.21wmf9". BJorsch (WMF) (talk) 03:12, 8 March 2013 (UTC)
Automatic RFC linking should be disabled
There's an extremely old feature in MediaWiki (so old, in fact, that it was inherited from the first engine we used for Wikipedia, UseModWiki) that any mentions of RFCs (link fixed, thanks PrimeHunter) are automatically displayed as hyperlinks. For example, when I type RFC 1149 I get the external link you see here. This behavior contradicts the first sentence of WP:EL by unnecessarily causing external links to be displayed in article body text.
This feature was useful before we had formatted citations, a style guide and conventions for the proper placement of links in articles, but those days are long gone. So, I propose that we switch it off. — Hex (❝?!❞) 17:30, 7 March 2013 (UTC)
- Related info from last month's VPT archive: RFC "magic links" have been a MediaWiki feature since 2004. Documented at Help:Magic#RFC, Help:Link#ISBN, RFC and PMID automatic links and mw:Manual:RFC. The decision to implement special links for RFCs probably reflects the greater significance that IETF and RFCs had at that time among typical internet users and Wikipedia editors. — Richardguk (talk) 20:07, 7 March 2013 (UTC)
- You know, I was considering mentioning that but decided to keep it to stylistic reasons? :) It's true, though, there was a big bias towards techies in those days. — Hex (❝?!❞) 20:19, 7 March 2013 (UTC)
- RFCs in the original post is a disambiguation page. The meaning here is Request for Comments (not the Wikipedia process). bugzilla:10626 "RFC links should not be generated from plaintext" was closed as WONTFIX in 2007. RFC is also an interwiki prefix so RFC:1149 gives an easy way to make a link without the automatic feature. PrimeHunter (talk) 20:25, 7 March 2013 (UTC)
- That was nearly six years ago. It might be worth reopening it now; a "good reason to remove the feature" has been provided by Hex in that it violates WP:EL. jcgoble3 (talk) 20:36, 7 March 2013 (UTC)
- All sounds sensible. Should the proposal extend to the other two magiclink words (ISBN and PMID)? They are similarly anomalous and none of them can be localised at present (though the destination link roots can be). I'm sure the consensus would be to retain ISBN linking, as the book sources are so useful and fit better with current reader expectations than the more obscure RFC (they are also mainly confined to footnotes). And a PubMed ID (PMID) is more likely to be used as a footnote like an ISBN. So it seems that it is only RFC magiclinking which has a strong case for being removed on enwiki.
- The relevant code is in Parser::doMagicLinks(). A developer would need to amend the code to allow enwiki to opt out. The flexibility to prevent links (or possibly to localise the words) would be of even greater benefit to wikis in other languages or covering unrelated content.
- — Richardguk (talk) 00:36, 8 March 2013 (UTC)
- Thanks for finding the bug, PrimeHunter. Reopening it seems like a timely idea, as Jonathan notes. Good catch - having an interwiki prefix means this feature is now provably redundant.
- Richard: I agree about not widening this to ISBN and PMID. Those two are very unlikely to occur in article body text. — Hex (❝?!❞) 10:23, 8 March 2013 (UTC)
- PMID agreed; ISBN less so. For example, I've seen quite a few Wikipedia pages about authors (for example, Isaac Asimov) with bibliography lists that include ISBNs. Naraht (talk) 15:31, 8 March 2013 (UTC)
- Hmm, not a use case I'd encountered. (I'm not sure if they should actually be listed like that at all; it's awfully messy. I would expect to find an ISBN in an article about the book it belongs to. In other words, the "notability" of an ISBN is that of its book.) ISBNs link to Special:BookSources, though, which is at least not external. — Hex (❝?!❞) 17:09, 8 March 2013 (UTC)
- There was also a similar proposal on Portuguese Wikipedia. Helder 19:30, 9 March 2013 (UTC)
Lua templates not compatible with PDF Export
Earlier today it was noticed that the "Download as PDF" function and associated tools are completely ignoring all of the new Lua based templates. The issue was referred to Mediawiki developers, but as we have already migrated a number of templates to Lua (e.g. Category:Lua-based templates), I wanted to make sure the larger community was aware that there may be missing information or other errors in PDF exports until the problem is resolved. Dragons flight (talk) 20:41, 7 March 2013 (UTC)
New version of Special:GettingStarted
Hey all,
I wanted to update you about two developments on the projects by the experiments team at the WMF:
- We just finished A/B testing guided tours, and so far the results are quite promising. With the tour, we see users try to edit at a greater rate (+13%) and save their first article edit within 24 hours more often (+4%). Both of these are statistically significant results, so we're going to keep trying to find ways to use tours effectively.
- Today we launched a new version of Special:GettingStarted which is delivered to all newly-registered editors (more docs, including a screenshot of the previous version). The big change is that we've added two additional task types from the backlog -- articles with too few wikilinks and articles tagged as confusing, vague or unclear. In addition to the copyediting task, we hope that we've picked things to do that don't require an expansive knowledge of policy or subject matter expertise.
Let me know if you have any questions. As always, you can keep tabs on the edits made by new users via these interfaces by watching the gettinstarted RecentChanges tag. Steven Walling (WMF) • talk 00:51, 8 March 2013 (UTC)
Technical difficulty with editing ring (mathematics)
Hi,
I have been experiencing a very weird editing difficulty with ring (mathematics) for the past few days. At least 2 more editors have confirmed the same issue. Basically, I try to submit an edit but then either it times out or I get an error page. Interestingly, the problem is limited to this particular article; not even its talk page exhibits the problem. Any idea? -- Taku (talk) 01:28, 8 March 2013 (UTC)
- It might be the same issue that has been raised above: #Posting to the math reference desk is really slow. —Bkell (talk) 02:04, 8 March 2013 (UTC)
- I would say so; there's a huge amount of
<math>...</math>
going on. It is possible to save edits, though not every attempt goes through. Here's my technique. Have two browser tabs open: one containing Ring (mathematics) itself, the other containing your contributions page. Make your edit to Ring (mathematics) as usual, and when it times out with "Wikimedia Foundation Error Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes." etc., switch to the other tab and refresh your contributions. If your new edit gets listed, all well and good; but if it doesn't, switch back to the error screen and click the "try again" link. Repeat until it succeeds (as revealed by the contribs page) or you get pissed off. Using this method, this edit went through on the second attempt. --Redrose64 (talk) 10:30, 8 March 2013 (UTC)
- It's not just maths articles that cause time-outs - a couple of editors and I are working on the WP:RDT-based User:Optimist on the run/WCML South, which has a large number of template calls. This has also suffered from time-outs: see discussions here and here. Has the time-out tolerance changed recently? An optimist on the run! 11:18, 8 March 2013 (UTC)
- The WCML South article uses the slow template {BS8} (which runs at only 9 calls per second); I am rewriting it in Template:BS8/sandbox to make the page reformat 6-11 seconds faster, so that should avoid any future 60-second timeouts, even when the servers are very slow. -Wikid77 (talk) 22:01, 8 March 2013 (UTC)
- Timeout is 60 seconds, but a busy 35-second reformat can slow past 61: At many times during the day (mostly Monday-Friday), the busy squid servers can almost double a long reformat time, stretching, say, 37 seconds to over 61 seconds. In rare cases the slowdown can be threefold, so even a 21-second edit-preview would exceed 60. -Wikid77 (talk) 19:41, 8 March 2013 (UTC)
- Split or combine related math-tag formulae: Since math-tags are each rendering in ~0.7 seconds (85 in 60 seconds), I am recommending that people either split small related equations into a separate math-tag for each symbol, or combine complex equations into a single math-tag, separating the equations with ", or," or ", \text{ and }" or something similar. Also, perhaps some equations can be coded with Template:Math, while selecting a font face similar to the math-tag font, or with Template:Bigmath, although that font is bigger than math-tag size. -Wikid77 (talk) 19:41, 8 March 2013 (UTC)
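For illustration, here is a minimal example of the combining approach described above, with two arbitrary placeholder equations joined inside a single math-tag:
<math>x + y = 5, \text{ and } x - y = 1</math>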
- Hi. I have noticed a lot of your edits to improve the rendering performance. Thank you very much for that, and thanks also for the insight into the matter. Unfortunately, the tuning(?) didn't work or was not enough: my attempted edit timed out, though it still went through. I also noticed that Gamma function exhibits exactly the same issue when I tried to correct an error (that article also has many formulae). Clearly, we need a server-side fix. -- Taku (talk) 17:39, 9 March 2013 (UTC)
- Article "Ring" cut to 19 seconds by splitting equations but not matrices: I was able to reduce article "Ring (mathematics)" from 54 seconds to 19-second reformat time, by splitting small equations into separate math tags for each symbol, as processed over 600-1000 symbols per second, to then run about 105x faster for many of those equations. The current timeouts, for editing that article, are relatively rare now, as the 19-second reformat only rarely slows beyond 60-second busy reformat. However, the 4-x-4 matrix formulae still use the complex math tags, and they still use 2 seconds to render each 3 matrix tags. The separate timings of small expressions (such as "x+y" or "xy" or "Vn") show evidence that those partial symbol groups are re-using cache images, as are some small equations. In one section, I changed 2 complex formulae into 14 symbol math tags, which now run over 105x faster. In conclusion, I am seeing clear evidence that some PNG formula cache images are being reused in many equations, but not all. When I have more time, I will edit article "Gamma function" to be faster. -Wikid77 (talk) 18:37, 9 March 2013 (UTC)
- There is a discussion at Wikipedia talk:WikiProject Mathematics#User:Wikid77 and "breaking up complex formulas" where editors do not seem to be happy with these edits, which have now been reverted.--Salix (talk): 20:54, 10 March 2013 (UTC)
Pending changes glitch?
An edit on Eleanor Roosevelt showed up as a pending revision, but when I looked at the revision it was listed as "accepted" and was also listed as "automatically accepted" in the review log, even though it was still showing up in the queue of pending revisions in the page history and on the pending changes page after I refreshed those pages several times. I had to unaccept the revision and then accept it to fix that issue. The edit was made with Huggle so perhaps there was some weird conflict.--The Devil's Advocate tlk. cntrb. 17:58, 8 March 2013 (UTC)
- Did you try purging your cache first? Sounds like your cache was a bit outdated.—cyberpower ChatOnline 19:04, 8 March 2013 (UTC)
- I don't think that was it. The edit showed as accepted in the diff, but not when I looked at the revision history. Also, the log says it was automatically accepted, but it still showed up as a pending revision on the page for pending changes.--The Devil's Advocate tlk. cntrb. 20:09, 8 March 2013 (UTC)
Is there a way to get this page information?
Can one find for a given article
- whether as a newly created article it's ever been patrolled
- (if it was patrolled) when it was patrolled?
- (if it was patrolled) what user patrolled it?
Signed: Basemetal (write to me here) 20:21, 8 March 2013 (UTC)
- I don't know the answer to this. What I find strange is this: I checked Special:NewPages and found the article Hydrothermal Liquefaction. It had the [Mark this page as patrolled] tag at the bottom. I viewed the page in edit mode and couldn't find the tag; I then returned to read mode and the tag was gone, so it seems that viewing the page once in edit mode is enough for the tag to vanish. I wonder what the point of this is, though. -- Toshio Yamaguchi 20:42, 8 March 2013 (UTC)
- Toshio, see Template:Bugzilla and upvote it! — Technical 13 ( Contributions • Message ) 20:50, 8 March 2013 (UTC)
- Somehow I can't log into Bugzilla with my account, so it seems my unified account doesn't work on Bugzilla. Is there a reason why Bugzilla is excluded from SUL? I guess I will almost never use it, so my desire to create a separate account over there is pretty low. -- Toshio Yamaguchi 21:07, 8 March 2013 (UTC)
- IIRC, Bugzilla is completely different software from MediaWiki, and so cannot have the Module:UnifiedLogin loaded onto it. Just create an account; I never use mine, but I still have it :) It doesn't spam your mail, promise! gwickwiretalkediting 21:30, 8 March 2013 (UTC)
- Just to clarify something - bugzilla does indeed send you emails whenever a bug you commented on changes, which some people might consider spam (This can be controlled in preferences). It also publicly releases your email to any logged in user. Bawolff (talk) 11:03, 9 March 2013 (UTC)
- Go to Special:Log/patrol, fill in the page name after "Target (title or user)" and hit "Go". It'll come back with "No matching items in log." if the page is unpatrolled. --Redrose64 (talk) 22:05, 8 March 2013 (UTC)
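For anyone who prefers the API, here is a minimal sketch (JavaScript, using the jQuery bundled with MediaWiki; the article title is just the example from this thread) that asks the same questions programmatically; each returned log entry includes the patrolling user and the timestamp:
<syntaxhighlight lang="javascript">
// Query the patrol log entries for one page via the API.
$.getJSON( mw.config.get( 'wgScriptPath' ) + '/api.php', {
    action: 'query',
    list: 'logevents',
    letype: 'patrol',
    letitle: 'Hydrothermal Liquefaction', // example page from this thread
    format: 'json'
}, function ( data ) {
    // An empty logevents array means no patrol entry exists for the page.
    console.log( data.query.logevents );
} );
</syntaxhighlight>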
- Ok, I did that for Dobroslav_Jevđević, today's featured article, and it comes back with "No matching items in log". Do you mean to say that this page was never patrolled? Signed: Basemetal (write to me here) 03:40, 9 March 2013 (UTC)
- The page was created in 2007, long before new-page patrol existed. Bielle (talk) 03:50, 9 March 2013 (UTC)
- You are right! And it works with newer new pages. Great! Signed: Basemetal (write to me here) 10:15, 9 March 2013 (UTC)
One more question on this topic: are unreviewed (as marked in, say, the new pages feed) and unpatrolled the same thing? Signed: Basemetal (write to me here) 10:32, 9 March 2013 (UTC)
Mobile layout not properly rendering with BlackBerry device.
The mobile layout for this site seems to have a CSS overflow: none; everything is getting cut off, and most things are not functioning well, if at all. I would be happy to try to upload some screenshots at my first convenience. If this is a duplicate of another discussion, I apologize; please direct me to that discussion, as I could not find it on my own. — Technical 13 ( Contributions • Message ) 20:31, 8 March 2013 (UTC)
- Is there anyone who can help with this? I can't even use the site with my BlackBerry anymore. The section headings are gray and won't let me expand them to read, pages run off my screen with no ability to see what is out there (overflow: none;), and even my menu runs off the bottom of my screen so I can't edit settings. — Technical 13 ( Contributions • Message ) 12:27, 11 March 2013 (UTC)
Proposal to rename Catalogue of CSS classes
Please comment at Wikipedia talk:Catalogue of CSS classes#Suggested move. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:32, 8 March 2013 (UTC)
Edit count question
Is there a way to track who has made the most edits per namespace? I.e., who has the most for User or User talk or File or Wikipedia... etc. AutomaticStrikeout (T • C) 23:44, 8 March 2013 (UTC)
- You may want to check out Extension:MW-EditCount — Technical 13 ( Contributions • Message ) 23:55, 8 March 2013 (UTC)
- Yes, but does that show a list of the editors with the most edits per namespace, or is it just an edit count per namespace for an individual editor? AutomaticStrikeout (T • C) 23:59, 8 March 2013 (UTC)
- I don't know, but it isn't installed on Wikipedia in any case. See Special:Version#Installed extensions for a list of which extensions are installed here. – PartTimeGnome (talk | contribs) 00:04, 9 March 2013 (UTC)
- It is currently set up to show the edit count per namespace for an individual editor, but since it is not maintained and requires repair to be used on any MW installation 1.18+, it wouldn't be all that hard to cannibalize it with Extension:CountEdits — Technical 13 ( Contributions • Message ) 00:10, 9 March 2013 (UTC)
Internal Error after Save Page
I was editing Audie Murphy. After Save Page, it flipped to a page headed "Internal Error" with this message: [ba13be97] 2013-03-09 00:40:45: Fatal exception of type MWException. I think I saw a similar post here not too many weeks ago. My edit actually was successful, so I don't know why this MWException displayed, but I thought I'd mention it here in case it's important. — Maile (talk) 00:48, 9 March 2013 (UTC)
- I got a similar message when trying to view an image on Commons yesterday evening. Keith D (talk) 01:58, 9 March 2013 (UTC)
- I got it earlier tonight after logging in; fatal error. Truthkeeper (talk) 02:04, 9 March 2013 (UTC)
Template include size is too large
The template size limit needs to be raised on the Carlos Alberto Parreira article if all the navboxes that are normal for these types of articles are going to work. – Terje Christiansen (talk) 03:41, 9 March 2013 (UTC)
- Listing 28 navboxes is not normal but data hoarding: There need to be some rules about the use of excessive numbers of navboxes. Meanwhile, the 2-megabyte include-size limit (2,048 KB) provides a reminder that appending 28 boxes to the bottom of a team manager's article, to boxify the name of every major player ever managed, is fanatical wp:data hoarding of wp:UNDUE details in an encyclopedia article. I have condensed the bottom 14 navboxes into a table of navpages, to reduce the article size by 60%. However, we need an essay, wp:LOCKERROOM, which explains how an encyclopedia article is not a locker room where the names of every related player are stenciled onto boxes which line the entire room, from wall to wall. Perhaps there should be a rule of thumb: when a biography article devotes over 70% of the page to naming other people, then the names of those other people have exceeded wp:UNDUE weight by a few hundred too many. Please limit navboxes to just 1 or 2 per article. -Wikid77 (talk) 05:55, 9 March 2013 (UTC)
JavaScript help please
At my Common.js page (on Commons), I took someone's script, but instead of having it go to "Special:Upload", I actually want it to go to the specific URL: Special:Upload&uselang=experienced. How can I do this so I can have a quick one-click uploading experience? – Kerαunoςcopia◁galaxies 04:46, 9 March 2013 (UTC)
- I take it you saw that your most recent version works? ;) Writ Keeper (t + c) 05:18, 9 March 2013 (UTC)
- Absolutely! Someone very kindly helped me at MediaWiki and I was rushing around trying to solve other problems before coming here to cancel this query. Thanks for checking in. I'll have to keep this board in mind in the future. – Kerαunoςcopia◁galaxies 05:32, 9 March 2013 (UTC)
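For the record, here is a minimal sketch of the kind of one-click link involved (assuming a personal common.js and the mediawiki.util module; the link text, id and tooltip are made up, and the uselang value is the one from the question above):
<syntaxhighlight lang="javascript">
// Add a toolbox link that opens the upload form with uselang=experienced.
mw.loader.using( 'mediawiki.util', function () {
    mw.util.addPortletLink(
        'p-tb',                                               // toolbox portlet
        mw.config.get( 'wgScript' ) + '?title=Special:Upload&uselang=experienced',
        'Upload (experienced)',                               // link text
        't-upload-experienced',                               // link id
        'One-click upload form with the experienced-user layout'
    );
} );
</syntaxhighlight>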
Mediawiki
Hello from a Latvian Wikipedia sysop! I am having a problem finding the MediaWiki interface message for the "Edit links" line in the interwiki links list. Can somebody help? --Edgars2007 (talk/contribs) 08:45, 9 March 2013 (UTC)
- It's at Mediawiki:wikibase-editlinks - at Special:Preferences, Gadgets tab, there is a checkbox to "Create a toolbox link to show the page with messages from the user interface substituted with their names", which solves problems like this one. -- John of Reading (talk) 08:59, 9 March 2013 (UTC)
- Or just add uselang=qqx to the url to help find the name of the page -- WOSlinker (talk) 09:00, 9 March 2013 (UTC)
- Really big thanks :) Unfortunately at lv wiki we don't have the Gadgets tab, but the uselang=qqx option works well. --Edgars2007 (talk/contribs) 09:14, 9 March 2013 (UTC)
- The code is at MediaWiki:Gadget-ShowMessageNames.js. It works in your personal js. PrimeHunter (talk) 12:34, 9 March 2013 (UTC)
Archiving help
I've been prepping Gone with the Wind (film) for a GA review, but Variety magazine has gone and deleted a whole bunch of articles, including one I've used as a reference. Unfortunately it was deleted before I had a chance to run it through WebCite, and it isn't archived at the Wayback Machine, but I have found a copy in the Google cache at [1]. Now, the Google cache isn't permanent, and WebCite won't archive the Google cache page, so is there any way I can get this page permanently archived somewhere? It's a 70-year-old article, so this is probably my only way of getting a permanent copy. Assistance would be appreciated. Betty Logan (talk) 17:33, 9 March 2013 (UTC)
- It's available at [2], but it is behind a paywall. Werieth (talk) 17:52, 9 March 2013 (UTC)
- PS I created a PDF copy of the cache if you can use that somehow. Werieth (talk) 17:57, 9 March 2013 (UTC)
- I downloaded the cache copy so I can refer to it, but thanks anyway. I will cite the paywalled copy and provide the Google cache copy through the archive link, and hopefully it will last long enough to go through its GA assessment. Betty Logan (talk) 08:09, 10 March 2013 (UTC)
Slow wiki
Why is Wikipedia so painfully slow?
Served by mw1181 in 19.987 secs.
I get similar times on different pages.—cyberpower ChatOnline 18:10, 9 March 2013 (UTC)
- Rather difficult to say without knowing which pages, but generally because people like complex templates. Are you getting times like that for cached hits too? (If so, that is rather slow. If pages are significantly faster when logged out, it can be caused by certain prefs like the stub threshold.) Bawolff (talk) 21:21, 9 March 2013 (UTC)
- It's not my preferences, as I haven't changed them for months. It took 15 minutes to post the above statement.—cyberpower ChatOnline 21:50, 9 March 2013 (UTC)
PRODs not expiring
See Category:Proposed deletion as of 1 March 2013 which has 28 articles - despite more than 7 days passing, none have 'expired' (Category:Expired proposed deletions is empty) - why would this be? GiantSnowman 19:11, 9 March 2013 (UTC)
- Joe's Null Bot (BRFA · contribs · actions log · block log · flag log · user rights) is supposed to take care of this, but may have failed (and due to the nature of the bot, we can't tell if it has failed because it never edits). In the meantime I have manually performed a forcelinkupdate purge for both the March 1 and March 2 categories via the API sandbox. jcgoble3 (talk) 20:18, 9 March 2013 (UTC)
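For reference, here is a rough sketch of how such a purge might be scripted rather than done through the API sandbox (JavaScript with the bundled jQuery; the category name is one of those mentioned above, and this is not the bot's actual code):
<syntaxhighlight lang="javascript">
// POST a purge with forcelinkupdate for every member of one PROD category,
// so the time-based category links (e.g. expired PRODs) are re-evaluated.
$.post( mw.config.get( 'wgScriptPath' ) + '/api.php', {
    action: 'purge',
    generator: 'categorymembers',
    gcmtitle: 'Category:Proposed deletion as of 1 March 2013',
    gcmlimit: 'max',
    forcelinkupdate: 1,
    format: 'json'
}, function ( data ) {
    console.log( data.purge ); // list of pages that were purged
} );
</syntaxhighlight>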
- Thanks - but this is by no means a new problem. I have seen it for as long as I can remember. GiantSnowman 21:24, 9 March 2013 (UTC)
- This is still ongoing - 24 listed in Category:Proposed deletion as of 3 March 2013 but none showing as expired. GiantSnowman 19:08, 11 March 2013 (UTC)
Examples of convolution
I saw the wiki page, but I couldn't find any examples evaluating the formula with actual numbers. Could you give some examples of convolution, please? Mathijs Krijzer (talk) 22:41, 9 March 2013 (UTC)
Quoted content from our article on Convolution
Definition
The convolution of f and g is written f∗g, using an asterisk or star. It is defined as the integral of the product of the two functions after one is reversed and shifted. As such, it is a particular kind of integral transform:
<math>(f * g)(t)\ \stackrel{\mathrm{def}}{=}\ \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau</math>
Domain of definition
The convolution of two complex-valued functions on R<sup>d</sup> is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g.
Circular discrete convolution
When a function g<sub>N</sub> is periodic, with period N, then for functions, f, such that f∗g<sub>N</sub> exists, the convolution is also periodic and identical to:
<math>(f * g_N)[n] \equiv \sum_{m=0}^{N-1} \left( \sum_{k=-\infty}^{\infty} f[m + kN] \right) g_N[n - m]</math>
Circular convolution
When a function g<sub>T</sub> is periodic, with period T, then for functions, f, such that f∗g<sub>T</sub> exists, the convolution is also periodic and identical to:
<math>(f * g_T)(t) \equiv \int_{t_0}^{t_0 + T} \left[ \sum_{k=-\infty}^{\infty} f(\tau + kT) \right] g_T(t - \tau)\, d\tau</math>
where t<sub>0</sub> is an arbitrary choice. The summation is called a periodic summation of the function f.
Discrete convolution
For complex-valued functions f, g defined on the set Z of integers, the discrete convolution of f and g is given by:
<math>(f * g)[n]\ \stackrel{\mathrm{def}}{=}\ \sum_{m=-\infty}^{\infty} f[m]\, g[n - m]</math>
When multiplying two polynomials, the coefficients of the product are given by the convolution of the original coefficient sequences, extended with zeros where necessary to avoid undefined terms; this is known as the Cauchy product of the coefficients of the two polynomials.
- Please ask at the maths reference desk. Most people here are not maths experts; you'll find people more knowledgeable about maths there. Wikipedia:Village pump (technical) is primarily for technical discussion relating to Wikipedia itself; knowledge questions are more suited to the reference desks.
- (I was also going to recommend posting to the article's talk page to suggest that examples would improve the article, but I see you have already done so.) – PartTimeGnome (talk | contribs) 23:15, 9 March 2013 (UTC)
Portland, Oregon ref problems
I tried asking on the Freenode chat channel about this. I just added three citations to Portland, Oregon [3] in the "Economy" section, and noticed they weren't displaying in the ref section. They're citation numbers 81, 82, and 83 in the text, and when I click on those numbers it leads nowhere. The refs currently shown as 81, 82, and 83 in the ref section lead to the "Demographics" section when clicked on.
And the number of unique citations in the text exceeds the number displayed in the ref section, which proves that there are other citations written in the text (not just 81, 82, and 83) that aren't being displayed below. First observation: #174 in the "Sister cities" section leads to #93 in the ref section. 159, e.g., leads to 78. But citations 9, 18, 35, 37, 40, 50, 59, 77, 89 in the text (for example) lead nowhere. 1 through 8 do, but starting with 9 it appears they don't, until you get to 90.
I'm on Safari 5.1.7. I don't know what browser User:Steven Zhang was using when he tried to help me on the chat channel, but he saw these same problems. Jsayre64 (talk) 01:12, 10 March 2013 (UTC)
- I had the same issues and traced it back to the 8th reference [4]; incorrect formatting screwed things up. Werieth (talk) 01:22, 10 March 2013 (UTC)
- Wow, nice catch. It seems to be working properly now. Thanks! Jsayre64 (talk) 01:28, 10 March 2013 (UTC)
Create a username for IP editors
I just searched the web for a name generator; I put 2 letters in a box and got all sorts of usable names for wizards. It shouldn't be that hard to write a Wikipedian name generator that produces usable names for Wikipedians. We make a database entry for each first edit from an IP address. Visiting Wikipedia from this IP address automatically logs you into your account. When your IP address changes, we just create a new username for you. It is fairly simple.
Technical accomplishments:
- stop showing visitors' IP addresses to other visitors
- stop assumption of bad faith
- more effective discussion
Stop showing visitors' IP addresses to other visitors: Wikipedia leaks information about people's contributions when they visit websites elsewhere on the Internet. There is a loss of privacy, but we are not getting anything in return. It is all very nice to argue that the editor was supposed to have known this before making his contribution, and to be right while still being wrong. lol In the end the encyclopedia doesn't benefit from drawbacks. If we can make contributing easier, we should.
Stop assumption of bad faith: We can twist the argument, but the paradox of Schrödinger's cat is not resolved by writing an IP number on the box. Even if IP editors had never made a single useful contribution in the history of Wikipedia, "wp:Assume good faith" doesn't allow you or anyone else to assume the next guy will be just like that. We should aim to remove a feature that invites and encourages people to assume bad faith.
More effective discussion: The primary purpose of a user name is to be able to identify the editor around the wiki; IP addresses are not very useful for that, since one editor can show up under many of them. 84.106.26.81 (talk) 19:38, 10 March 2013 (UTC)
- You might be interested in User naming convention proposal which is not the same thing, but related.--SPhilbrick(Talk) 19:45, 10 March 2013 (UTC)
- Bad names can be rerolled. 84.106.26.81 (talk) 01:07, 11 March 2013 (UTC)
- Why should our developers waste time doing something for them that they have no interest in doing for themselves? Evanh2008 (talk|contribs) 20:12, 10 March 2013 (UTC)
- I think I provided 3 reasons? 84.106.26.81 (talk) 01:07, 11 March 2013 (UTC)
I'm against the proposal as such. Here's one I'd like, though: Compress IPv6 addresses into something more readable/tractable/shorter. I would still prefer that these identifiers be obviously from an IP, because otherwise you don't know whether you can contact the person. Ideally the IP address should be recoverable from the compressed version, so that we can use geolocate, and at least get some information on the editor and possible biases etc. --Trovatore (talk) 20:32, 10 March 2013 (UTC)
- I'm opposed, as well. See, for example, User:Arthur Rubin/IP list (no longer maintained by me) for a list of IP addresses obviously used by the same person. It would be significantly more difficult to tell that the pattern continued if the raw IP addresses weren't available. — Arthur Rubin (talk) 21:46, 10 March 2013 (UTC)
- If we give IP addresses a name, you can still propose this idea for a public wp:CheckUser tool. I think checkuser for auto-approved users would never make it through the proposal stage? Or do you think the investigative tools should be different for a self-named user? 84.106.26.81 (talk) 01:02, 11 March 2013 (UTC)
- I'm not sure I even understand what the proposal is. Assign names to IPs? Or just give someone a name when they edit from an IP, and they're stuck with it? Or what? I don't get it. How would this be better than IP editing? How would we know, if the name is assigned to an IP, that it is the same user each time? Was that even the point? Etc. Beeblebrox (talk) 23:45, 10 March 2013 (UTC)
- My take is that the proposal is that Wikimedia would assign "editor names" to each IP, and automatically log it in to that account. I don't see that it helps, though, except by preventing tracing (by non-CHECKUSERs) and making it impossible to contact the ISP to tell them someone is vandalizing, which might violate the ISP's terms of service. — Arthur Rubin (talk) 00:46, 11 March 2013 (UTC)
- The IP information doesn't vanish. It will be just as easy to come by as the IP information of registered self-named users. 84.106.26.81 (talk) 01:02, 11 March 2013 (UTC)
- In 2006 we talked about this - obviously there was no consensus at the time (can't find the talk) - but as mentioned above, for technical reasons it did not go anywhere. A few solutions were proposed for the fact that IP addresses are visible to all (thus traceable to an extent). The most popular solution was to automatically assign a random 10-digit number to all IP edits. The other popular idea was for IP edits to be named "Anonymous". The name "anonymous" would still be linked to the IPs so that talk page discussions can happen, but only admins would actually see the IP address (this one sounds confusing to say the least). Moxy (talk) 01:10, 11 March 2013 (UTC)
- I think the challenge presented by the exercise is to develop a review process that 1) doesn't involve any form of judging contributions that haven't been made, and 2) doesn't rate users by qualities other than contributions & conduct.
- What we want is: contributions => quality of user => mythology
- Rather than: contributions <= quality of user <= mythology
- The slightest leakage in your formula will ruin the results.
- A lot of IP contributions get reverted, and it gets harder for editors who know this to stay objective. When you see the IP number, how do you shut off the subconscious? By what means is a person supposed to ignore the statistically obvious? Right, you can't. No amount of steering after disclosure can balance anyone's review. Not me, not anyone posting here, not even Harry Wu could do it.
- Building a good review process should take priority over anything else. Persecution can't be an excuse to abandon the core mission. 84.106.26.81 (talk) 14:44, 11 March 2013 (UTC)
- Meh, the accounts would still have to be marked as anonymised IP addresses. Editors wouldn't treat these any differently than they do IP addresses. The only thing this would prevent would be instances of real life harassment using information obtained via the IP address (e.g.), but this isn't exactly a big issue. -- 92.2.76.62 (talk) 02:09, 11 March 2013 (UTC)
- No, the only thing this would prevent would be instances of contacting ISPs to inform them that their users are vandalizing Wikipedia. If you call that "harassment",.... — Arthur Rubin (talk) 05:05, 11 March 2013 (UTC)
- First there is the assumption of good faith, then there is the vandalism; if the vandalism is repeated, there is a valid excuse to stop assuming good faith and do the investigation. Whether the user typed his own username has very little to do with it. You should forget about all of your ideas that involve abandoning wp:assume good faith before there is a valid reason to abandon it. If your account doesn't allow viewing registered users' IP addresses, then there is probably a good reason for that? Or not? 84.106.26.81 (talk) 15:39, 11 March 2013 (UTC)
- Sigh. I provided a clear instance of real life harassment (an attempt to get someone fired to win a content dispute) using information from an IP address, so yes, anonymised IP addresses would prevent such harassment. It's the only workable positive in 84's proposal that I can see; that it would prevent normal editors from contacting ISPs or employers (which is almost never acceptable anyway) isn't a bad thing. -- 92.2.76.62 (talk) 20:13, 11 March 2013 (UTC)
- I like the idea of anonymizing users editing without an account. There has to be some way to do it efficiently, though. IPv6 addresses will be horrible to have to read and look at once IPv6 is fully implemented and replaces IPv4... What if we offered a JavaScript-style popup box that allowed people to "edit as" and let them pick a name? The system would still register the edit as the IP, but only admins would see the IP and everyone else would see the "alias." — Technical 13 ( Contributions • Message ) 12:54, 11 March 2013 (UTC)
- Only if the account type is exactly the same can we avoid our bias blind spot and the horrors that come with it. 84.106.26.81 (talk) 15:39, 11 March 2013 (UTC)
- Oppose. If IPs want a name, then they should register an account. This would also make things more complicated in areas where IPs can't !vote (like RFA or FPC). Armbrust The Homunculus 17:27, 11 March 2013 (UTC)
- Comment 84 is coming from the position that IP editors should be equal in all respects and should not be discriminated against in any way. Is that true? Let's say it forthrightly: No, it is not true. Failure to register makes it more difficult to interact with an editor and to understand the background behind his/her contributions. Registration is not a burden and should be encouraged, including by having some disadvantages attach to editing from an IP address. --Trovatore (talk) 19:07, 11 March 2013 (UTC)
weirdness
Cross posting as this seems like a good place to find someone who might be able to figure out the issue under discussion at WP:AN#Can someone check something? Beeblebrox (talk) 23:41, 10 March 2013 (UTC)
Bizarre null edit thingy
What on earth happened with this edit to Radiation therapy? The diff shows it as a null edit that occurred 11 seconds after the IP's first edit (found by changing the date format in preferences to ISO 8601), but the relevant history entry shows that the "edit" added one byte to the length of the article. This sounds rather weird. Graham87 02:26, 11 March 2013 (UTC)
- A newline was added at the end. Trailing newlines are supposed to be stripped on save but something apparently went wrong here. bugzilla:42616 was also about trailing newlines but after page moves. There was no page move here. PrimeHunter (talk) 03:33, 11 March 2013 (UTC)
- Even if it isn't supposed to happen, it would still be nice if the diff showed when it did happen. bugzilla:42669 is about that. PrimeHunter (talk) 03:37, 11 March 2013 (UTC)
- Thanks, very strange indeed! How did you figure out where the line break had been added? I can't find a difference in the edit window. I've added the diff as an example in the second bug you linked. Graham87 05:34, 11 March 2013 (UTC)
- When I replied, the edit box for the newer revision had an extra newline at the end. It's gone now. I don't know why but consider it another example of inconsistent treatment of trailing newlines. I used Firefox both then and now. I remembered the trailing newline issue from the old bug so it was the first thing I looked for earlier. PrimeHunter (talk) 12:49, 11 March 2013 (UTC)
No search index from main page
A reader posed an interesting question via OTRS.
If you are at the main page of a language, say, EN Main Page and start typing in the search bar, auto-complete will deliver suggested topics. E.g. type "NC" and the list will start with NCAA as well as others.
In contrast if you are at the overall main page, and start typing in the search box, it will not.
Why not, and shouldn't it?
My first thought was that we have search indices by language, and it might be too much to search all of them simultaneously, but there is a language listed, so why can't it default to searching in the index of that language?--SPhilbrick(Talk) 11:53, 11 March 2013 (UTC)
- Currently it is not technically possible to enable auto-complete at Wikipedia's (or any other) central portal. I think it is theoretically possible to write a JS script that will retrieve suggestions asynchronously, but nobody has done this. See m:Talk:Project_portals. Ruslik_Zero 12:17, 11 March 2013 (UTC)
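For what it's worth, here is a minimal sketch of what such an asynchronous script might look like (this is not existing portal code; the input element id, the target language wiki, and the availability of jQuery on the portal page are all assumptions):
<syntaxhighlight lang="javascript">
// Fetch title suggestions for the portal search box from one language's
// opensearch API; JSONP is used because the portal and the language wikis
// are on different domains.
var input = document.getElementById( 'searchInput' ); // assumed element id
input.addEventListener( 'input', function () {
    $.ajax( {
        url: 'https://en.wikipedia.org/w/api.php',     // the language chosen in the selector
        dataType: 'jsonp',
        data: { action: 'opensearch', search: input.value, limit: 10, format: 'json' }
    } ).done( function ( data ) {
        // data[1] is the array of suggested titles, e.g. ["NCAA", ...]
        console.log( data[1] );
    } );
} );
</syntaxhighlight>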
Kindle problem reported to OTRS
Folks, we have just had an e-mail from a Kindle user who says that recently they have been unable to read Wikipedia on their Kindle and are being prompted to create an account. Does anyone have any idea what might be causing this and what the fix is? Thanks in advance.--ukexpat (talk) 13:03, 11 March 2013 (UTC)
- I have no problem accessing and reading Wikipedia on my Kindle 4. If I click on the star in the upper right, the site correctly tells me that I must log in or sign up to watch the page, but that does not hinder reading. jcgoble3 (talk) 19:34, 11 March 2013 (UTC)
Help improving browser test automation for Wikipedia
WHAT: Writing a Wikipedia search feature description in plain English for the Wikimedia automated testing process. Let's feed the Test backlog!
WHEN: We will start with a video streamed demo on Wednesday, March 13, 2013 at 17h UTC and we will be helping volunteers during the rest of the week. This is an ongoing activity: you can arrive / leave at any time.
WHERE: The #wikimedia-dev IRC channel.
WHO: Anybody interested, including you! The only requirement is a basic level of plain English in order to describe the features to be tested automatically. It is that simple.
More details at mw:QA/Browser testing/Search features. --Qgil (talk • contribs) 18:32, 11 March 2013 (UTC)
The default message above my watchlist. Can I modify it?
Right now, the default message above my watchlist is "Editors are invited to comment on a proposal to start a new sister project to take over the WebCite archiving service used to archive citations on Wikipedia." What is this type of notice called and where is it located? I would like to customize this message. Right now, I'm curious how many peer reviews have yet to receive any feedback. Maybe another day I might want to change it to see how many articles are in Category:Articles needing additional medical references. Can this be done? Biosthmors (talk) 19:14, 11 March 2013 (UTC)
- This is a notice for all users coming from MediaWiki:Watchlist-details. See WP:WLN for details.—Emil J. 19:29, 11 March 2013 (UTC)
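The sitewide notice itself can only be changed by administrators, but for a personal variation, here is a rough sketch (JavaScript for your own common.js; the category and the placement above the list are just examples) that shows a category's current size at the top of the watchlist:
<syntaxhighlight lang="javascript">
// On Special:Watchlist, fetch the size of one category via the API and
// prepend a one-line note above the page content.
if ( mw.config.get( 'wgCanonicalSpecialPageName' ) === 'Watchlist' ) {
    $.getJSON( mw.config.get( 'wgScriptPath' ) + '/api.php', {
        action: 'query',
        prop: 'categoryinfo',
        titles: 'Category:Articles needing additional medical references',
        format: 'json'
    }, function ( data ) {
        $.each( data.query.pages, function ( id, page ) {
            var count = page.categoryinfo ? page.categoryinfo.pages : 0;
            $( '#mw-content-text' ).prepend(
                $( '<p>' ).text( count + ' articles currently need additional medical references.' )
            );
        } );
    } );
}
</syntaxhighlight>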
Admin viewing of deleted pages
Since I got my mop, I've been wondering why the viewing of deleted pages is set up as it is. When a link to a deleted page is clicked, instead of a page view, an 'edit page' type view appears, with two buttons for preview and changes. Preview gives the view I want: an easily readable page with infoboxes and tags displayed in all their glory. I consider that an actual view of the page as it was would be more useful than the edit-screen type view, and for anyone wanting to see the coded version, the current 'preview' button could be set to show that instead. Is there a reason why things are set as they are, or is it just a 'that's the way it's allus bin' situation? Peridon (talk) 19:55, 11 March 2013 (UTC)