Wikipedia:Reference desk/Computing
Welcome to the computing section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
July 20
non-obvious GUI elements
I have always understood that one of the cornerstones of GUI design philosophy was that it was always supposed to be obvious -- visually obvious -- what your choices were. In his seminal book The Design of Everyday Things, Don Norman talks about the duality between "knowledge in the head" versus "knowledge in the world". The Unix command line epitomizes a system where knowledge in the head is paramount -- you can do almost anything, but you have to somehow know the name of the command to type. A GUI, on the other hand, shows you all your choices -- you don't necessarily have to know anything. You just have to find the thing to click on.
More and more, however, I'm seeing graphical applications and web pages that seem to go out of their way to hide your options. Icons are getting smaller, more generic, and less obvious; more and more you have to hover over them so that the mouseover text will tell you what they do. What's even more startling (but also increasingly common) is when there are active elements which don't even appear until they're hovered over. I've noticed this especially with Ubuntu Linux: the menu bar in most windows is blank until you hover over it, at which point the menus magically appear. Most windows don't even have scroll bars, until you hover over the right edge of the window at which point this weird little scroll tool appears. But if you're used to seeing your options, or if you haven't discovered the right spot to hover over, some/all of your options are just about as obscure as if they were Unix commands you hadn't learned the names of yet. I'm reminded of graphical video games where half of the gameplay is just discovering which elements of a scene can be manipulated to do something. (But it's not just Ubuntu that does this; I'm starting to see the same sort of thing even on the Mac.)
So my questions are:
- Does this pattern have a name,
- What are the arguments in favor of it, and
- How do its proponents defend against the criticism that it tends to go against the GUI philosophy of transparent approachability for beginning or casual users?
—Steve Summit (talk) 00:32, 20 July 2015 (UTC)
- You mention Ubuntu's HUD display on Unity. The apparent goal is to hide stuff you don't use a lot and make stuff you do use more prominent. I've read your posts here and I'm sure you just had a shudder as you remembered how much of a failure that experiment was with Microsoft back in the 90's. Ubuntu development is driven by kids who have no concept of the past, so they repeat mistakes others have already made in an attempt to be "cool." As for the hidden menu thing, that is actually separate. Apple has a long history of abusing its blindly devoted followers. The rule is form before function. Having a display with no interactivity looks very pretty. It isn't important that the followers can use it. Ubuntu, in another attempt to be cool, copies Apple in what they call a minimalist design. So, it appears to me that you are looking at the convergence of two design styles: HUD (which I feel is a very improper name for that design) and minimalist. They argue that HUD makes computers easier to use by adapting to what you do. They argue that a minimalist design removes clutter so you can focus on the content more easily. I believe that history has already proven that the HUD design does not make computers easier to use. It makes them harder to use and troubleshoot. I prefer the minimalist design to cluttered messes with four or five buttons and menus for every function - do you really need a print button, a print menu item, a print shortcut, and a "print this" link all displayed at the same time? However, it is important to know what CAN be done before you hide it. History has also shown that nobody will read the manual to learn what is possible. 209.149.113.45 (talk) 12:35, 20 July 2015 (UTC)
- Heh. In ~1990 my dad asked how I learned some feature in MS Word 4 (such as the caret as an escape in search for certain special characters). "I cheated: I read the manual." He expressed mock outrage. —Tamfang (talk) 08:35, 25 July 2015 (UTC)
- One term I hear in design contexts is the "discoverability" of the design. Our article is more about information science and metadata concerns, but it's also applied to user interfaces - see e.g. this article here [1]. SemanticMantis (talk) 14:06, 20 July 2015 (UTC)
- Other possibly relevant terms are skeuomorphism (making UI elements look like real-life objects), affordance (making UI elements look as if they do something) and flat design (what it says). AndrewWTaylor (talk) 15:40, 20 July 2015 (UTC)
- As computers get more complicated, more and more features and functions are added, and they cannot be displayed all at once or else GUI clutter would occur. The solution is to hide things away or put them in submenus so you'll see them only when you need them. It's unintuitive and clunky but it's better than the alternative. KonveyorBelt 19:18, 20 July 2015 (UTC)
- Of course, we are basing this on the premise that computers are getting more complicated. That is an opinion, not a fact. It could very well be that computers are less complicated, but users are less capable to comprehend the computer interface. 209.149.113.45 (talk) 19:27, 20 July 2015 (UTC)
- On the whole, over a history of decades, computers are definitely getting more complicated. The new Mac may look easier to use than a command-line program, but it is also way more complex in terms of what it can do. KonveyorBelt 20:27, 22 July 2015 (UTC)
Why did mathematical notation converge, but programming notation diverge
For example, math has one symbol for equality, =, but programming languages came to different symbols for assignment; sometimes it's =, sometimes :=. Or different ways of marking a block of code.--Scicurious (talk) 01:26, 20 July 2015 (UTC)
- In mathematics, the equals sign usually means equality, not assignment. Assignment is represented in different ways (for example, you can put an uppercase Delta over the equals sign, or you can use := like Pascal, or you can just use the equals sign and let context take care of it). So I'm not sure your premise really holds. --Trovatore (talk) 01:38, 20 July 2015 (UTC)
- The basic reason is that mathematics is meant to be a single international language, whereas each programming language is just one language among many, so there isn't any reason why different programming languages should use the same symbols. Another reason is that, in the earlier days of programming, the language designer was limited by the keyboard: the 029 card punch, for instance, didn't have 100 symbols. Also, different languages were designed for different purposes, and with different amounts of overloading. Basically, there isn't a reason why different programming languages should use the same symbols. Robert McClenon (talk) 01:45, 20 July 2015 (UTC)
- Um, Robert, you seem to be repeating the OP's premise, which I have already refuted. --Trovatore (talk) 01:46, 20 July 2015 (UTC)
- What premise do you claim to have refuted? The OP is stating that in Pascal, := is assignment. In FORTRAN, = is assignment. So what are you saying has been refuted? Robert McClenon (talk) 02:15, 20 July 2015 (UTC)
- The one about mathematical notation "converging". As I said, the equals sign in mathematics usually means equality, not assignment, and assignment is represented in different ways. (A complication is whether you consider "equality by definition" to be assignment — I generally think of it as assignment, but there could be arguments both ways.) --Trovatore (talk) 02:19, 20 July 2015 (UTC)
- Trovatore is refuting a claim that I did not make. Robert McClenon is answering the question.
- The concept of equality in math is represented by "=", and this symbol has spread across all mathematics. You don't see mathematicians around using • ¶ or § to represent equality. It does not matter whether the symbol also represents other concepts. In the same way 1/2 and 3+4 have spread as the canonical forms, instead of / 1 2 or + 3 4. In computer languages there has not been such simplification (maybe it's on its way). In programming you find different symbols to express the same concepts, which McClenon's answer above does not see as a problem. However, I see it as a source of confusion, since we don't stick with a single computer language forever. Dealing with the curious design decisions of many languages is quite tiresome. --Scicurious (talk) 03:07, 20 July 2015 (UTC)
- Well, then you expressed yourself badly. Assignment and equality are completely different. If you had expressed your question in terms of equality (for example, == in C versus just = in Pascal) then it might have made more sense. --Trovatore (talk) 05:19, 20 July 2015 (UTC)
- OK, sorry, that was more aggressive than it needed to be. Just the same, it was confusing to compare notations for assignment in programming languages with notations for equality in math, totally different things. --Trovatore (talk) 05:26, 20 July 2015 (UTC)
- I agree that the differences in notation are a factor that may complicate learning another programming language. However, after learning several programming languages, a programmer learns what sorts of differences and similarities there are in programming languages. (Similarly, if one has learned multiple human languages, one learns what features they share and how they differ.) As to different ways of marking blocks of code, some languages, like FORTRAN, don't have blocks of code in the C sense. Robert McClenon (talk) 03:33, 20 July 2015 (UTC)
- I think our OP assumes mathematical notation is "converged" because our OP has not been reading a wide variety of published mathematical literature. There are immense differences in mathematical notation conventions: even simple expressions like addition can be notated in totally different fashions. Sometimes, different notation represents some detail or nuance; other times, it is a purely arbitrary editorial convention.
- Here's a great book: Scheinerman's Mathematical Notation. It focuses on the notation you will probably see in undergraduate mathematics for science and engineering. As the author notes, it is impossible to completely describe all mathematical notation: there are just too many variations.
- Nimur (talk) 09:53, 20 July 2015 (UTC)
- Another reason, that I did not see mentioned so far, is parsing. Humans parse mathematical expressions. As has been demonstrated repeatedly, humans don't follow logical or even sensible steps when parsing things. Some start at the end. Some start at the beginning. Some break things up into chunks. Some drown in anything more complex than three items. Overall, math has been designed for humans to learn and understand. Programming languages are parsed by computers. They follow a precise, deterministic algorithm. If a new character is required to mean something, that new character must not break existing rules that the parser has put in place. So, if = already has a meaning to the computer parser, it will require rewriting the parser or using a new character, such as :=. That is how you end up with == and ===. Many times, the goal is to make something new for the parser while keeping it easy for programmers to type. 209.149.113.45 (talk) 12:16, 20 July 2015 (UTC)
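- To make the assignment-versus-equality distinction above concrete, here is a tiny C++ illustration (the variable names are invented for the example; the Pascal spellings appear only in the comments). C-family languages spell assignment = and equality ==, while Pascal spells the same two ideas := and =; whatever the spelling, the parser needs the two meanings to be lexically distinct.
#include <iostream>

int main()
{
    int x = 5;              // '=' is assignment: store 5 in x (Pascal: x := 5)
    bool same = (x == 5);   // '==' is an equality test (Pascal: x = 5)
    std::cout << same << std::endl;  // prints 1 (true)
    return 0;
}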
- One obvious answer is: give it a few centuries. I'll note that C's conventions have been taken up by younger languages, unlike those of Fortran and Pascal. —Tamfang (talk) 08:40, 25 July 2015 (UTC)
How to make website
Thus far, all the websites I've created have been either developed through wordpress/drupal software, or downloaded whole off the creator's site and posted to my server. Trying to create a new website now, different to what currently exists, though largely based off chatroom style sites, I find myself unable to do either of these. Instead, I'd like to take this opportunity to learn how to actually work on making a new website for myself, rather than always relying on others. Trouble is, the only language I have any real familiarity with is C, which I suspect is not appropriate here, but I'm not sure what is, or where would be best to go to learn to use it effectively. Any thoughts?
86.24.139.55 (talk) 17:02, 20 July 2015 (UTC)
- There is a lot of information on the web about making websites. If you want to make one "from scratch", you either need to learn HTML or use an HTML editor. WegianWarrior (talk) 17:14, 20 July 2015 (UTC)
- HTML, that's the one, couldn't remember what it was called. I had a search online but only found a couple of halfway decent guides to the sort of site I'm aiming for, and both had lots of comments posted saying the instructions given didn't work. 86.24.139.55 (talk) 17:17, 20 July 2015 (UTC)
- If you like this type of chat, you could create a wiki. StuRat (talk) 17:33, 20 July 2015 (UTC)
- For modern websites, learn HTML, CSS, and JavaScript. Then, if you want to get more complex with server-side programming, pick one of the common server-side languages, such as PHP or Ruby. If you do that, you will likely want a database. MySQL is a very common choice. Finally, you will likely realize that you need professional looking graphics. Most people can't afford Photoshop and refuse to download a virus-laden "Free" copy of it. Gimp is a free alternative that, in my experience, is more difficult to learn than everything else combined. 209.149.113.45 (talk) 18:32, 20 July 2015 (UTC)
- I'd been reading over the articles for CSS, PHP and MySQL; from what I've picked up before, I thought those were involved somehow, but I'm still not clear on exactly what each does or how they relate to each other. Looks like I've got a lot of work ahead of me. 86.24.139.55 (talk) 19:07, 20 July 2015 (UTC)
- HTML contains the content. CSS describes how to display the content. JavaScript gives extra functionality to the interface. PHP allows you to create dynamic content. MySQL is a simple data storage application to store and retrieve content. 209.149.113.45 (talk) 19:23, 20 July 2015 (UTC)
- PHP is a terrible programming language and I would advise you to avoid it whenever possible. MySQL isn't too hot either. I will concede that if you don't have the luxury of choosing your job, you generally don't have a lot of choice over what tools you're forced to use, and there is unfortunately a lot of software using one or the other, but you sound like you're teaching yourself, in which case I exhort you to learn some decent tools first. For one thing, it'll be easier, because you won't have to wrestle with all the brokenness of PHP and MySQL. The first linked article points you towards how to get started with Web programming in Python, and also suggests Ruby and Perl, which together with PHP are the mainstream "Web languages" (although Perl's popularity has waned). Of course you can write a Web backend in any language, including C, or for that matter COBOL, though I wouldn't advise it. --108.38.204.15 (talk) 07:39, 21 July 2015 (UTC)
- It is important to note that it is not possible for a programming language to be "broken" or "terrible". It may have bugs (which are actually rare in the language and usually found in the interpreter or compiler). It may be a poor choice for a specific task while still perfectly functional for another task. Programmers are far too often broken and terrible and make very stupid choices - and then blame those choices on the programming language. "Why did PHP and MySQL allow me to make my website vulnerable to SQL injection!? It shouldn't allow me to idiotically assume some stranger isn't sending me bad data! It shouldn't allow me to run a query without validating the data! I shouldn't have to learn to program before writing a program! PHP and MySQL are terrible and broken! Boo hoo! Boo hoo!" Therefore, whenever you see someone claim that a programming language is terrible, it is very likely that the programmer is the problem. 209.149.113.45 (talk) 13:47, 21 July 2015 (UTC)
- Of course it is possible for a programming language to be "broken" or "terrible". Human beings are just as capable of messing up the design and implementation of programming languages as they are of anything else. AndyTheGrump (talk) 05:02, 22 July 2015 (UTC)
- I am sadly disappointed that "COBOL on Cogs" is not a real development framework, because that would have been awesome. OldTimeNESter (talk) 18:51, 24 July 2015 (UTC)
My PC is spontaneously rebooting
Windows 7, 32 bit.
Is there a log I can check that will tell me why ? It's intermittent, but doesn't seem to be due to overheating, and I checked the power cord to make sure it wasn't loose. StuRat (talk) 17:39, 20 July 2015 (UTC)
- You can check the event viewer like so [2]. You can disable automatic restarting like so [3]. This user had a similar problem [4]. SemanticMantis (talk) 19:12, 20 July 2015 (UTC)
- Yes, you should disable automatic rebooting to see the actual BSoD. Ruslik_Zero 19:15, 20 July 2015 (UTC)
- If you can't find the reason in software, check for hardware issues like dried-out thermal grease, dust on heat sinks, damaged fans, or failed bearings in the fan motors. Take a closer look at the capacitors on the power supply and mainboard. Careful: the PSU can keep a hazardous voltage even after the power plug is removed. To discharge those capacitors, turn the computer on, and as soon as you see the BIOS or UEFI screen or see the fans begin blowing, remove the power plug from the wall before the operating system starts booting. Leaving the machine running while the mains is cut like this lets the capacitors discharge. --Hans Haase (有问题吗) 20:06, 20 July 2015 (UTC)
- Yes, I may have been premature in thinking it wasn't overheating. I took off the cover, pointed a big box fan at it on full blast, and it stopped rebooting. StuRat (talk) 03:34, 21 July 2015 (UTC)
- In case you don't know, this probably means you need to clean the dust off the cooling fan inside the computer. Looie496 (talk) 13:06, 22 July 2015 (UTC)
- Yep, I will give that a try. StuRat (talk) 13:38, 22 July 2015 (UTC)
- The computer stopped rebooting when you pointed a fan at it? It cannot overheat within a minute from cold. Something else is wrong. --Hans Haase (有问题吗) 09:09, 23 July 2015 (UTC)
- I think you misunderstood me. Previously it kept rebooting intermittently, not continuously. With the box fan pointed at the innards, it stops doing that. StuRat (talk) 03:47, 24 July 2015 (UTC)
- A computer certainly can overheat within a minute from being cold. All it takes is a processor that puts out a lot of heat and a CPU heat sink that came loose on one edge so that there is an air gap. Or something drawing way too much power and driving a voltage regulator into shutdown. StuRat is on the right path; clean out all of the dust, make sure none of the fans have stopped spinning, and then try to figure out what is getting hot. Selectively shielding parts of the computer from the box fan might be a useful exercise at this point.
- Or you can always use the problem as an excuse to spend too much on a new computer.... (smile) --Guy Macon (talk) 02:43, 24 July 2015 (UTC)
July 21
SOFTWARE FOR STAMP COLLECTING
Is there any free software which can manage a stamp collection? Something like "My Family Tree". Thank you. 175.157.40.27 (talk) 03:22, 21 July 2015 (UTC)
- I don't understand the comparison. Do stamps have complex relationships, like 3rd step-cousin, twice removed ? Also, do you mean to scan the stamps and organize images of them online, with an index number you can use to find the actual stamps ? StuRat (talk) 03:32, 21 July 2015 (UTC)
- Have you taken a look at what Google has to offer? You could probably find something that suits your requirements. CambridgeBayWeather, Uqaqtuq (talk), Sunasuttuq 11:26, 21 July 2015 (UTC)
Youtube question
Sometimes youtube videos play well on my system. Others keep pausing as if there is not enough bandwidth.
Is there a way to play the audio from youtube without having the video signal transmitted? For much of the stuff that interests me, I only want to hear the sound.
Thanks, CBHA (talk) 03:33, 21 July 2015 (UTC)
- Don't know, but you can set it to the minimum screen resolution, which will dramatically reduce the bandwidth needed. StuRat (talk) 03:35, 21 July 2015 (UTC)
- Reducing the resolution typically reduces the audio quality too, which is annoying. There are several apps and websites which let you listen to the audio only of YouTube videos. I've used a website that lets you download YouTube videos as MP3s; that way it will never "pause" even if you completely lose your internet connection. Just google "youtube mp3 converter"; I think there are several websites where you just put the YouTube video address in and it downloads an MP3. Vespine (talk) 23:22, 21 July 2015 (UTC)
ICT
Does ICT include all the current developments on the internet? Are MOOCs and OCW all under ICT?
Learnerktm 16:31, 21 July 2015 (UTC)
- Assuming that you are referring to Information and communications technology, the answer is "it depends." Some people use the acronym ICT to refer to any means of storing, moving, or displaying information. Therefore, the entire Internet would fall under that massive umbrella - as would something like a magazine. Others use it only to cover communications. As such, the communications technology used for the Internet would be covered, but display and storage technologies would not. 209.149.113.45 (talk) 17:03, 21 July 2015 (UTC)
Gmail printing acting strangely
Hello.
Normally, when you get an attachment in an email from Gmail, like a PDF, you can click on the box representing it. This will bring up a view of the PDF and gray out the background so that other stuff isn't distracting you from the popup layer showing the PDF. Then, there is a little print icon directly in the middle at the top which auto-prints this PDF.
Somehow, my colleague managed to goof this feature up now. The popup still shows properly, but when you hit print, instead of printing, it loads some strange url in another tab and attempts to download something like:
ACFrOgA9-kgnF73NDVUOBjbI2Jpnyt02tkAakjoI1ZyPTZfZnjcPFh7YpAnHarap0mth8C8uop2NlfVkLbVDZyEMDKfBsAKFn8guciwyJpwjdqO50e38jNkWTrrq6wE=.pdf
I have tried resetting the default printer in Mozilla Firefox by resetting some property called "Printer_Print" or something like this with no results. My colleague claims he must have clicked something that asked him if he wanted to print with some other application, or something like this.
How can I restore the default action of printing in Gmail? Thanks!
216.173.144.188 (talk) 16:32, 21 July 2015 (UTC)
To add more info, the NORMAL place the print icon goes seems to be in a new tab as well, and involves https://doc-04-8o-apps-viewer.googleusercontent.com/viewer/secure/pdf ...
I think the goofed-up Gmail account is going somewhere similar, but instead of popping up the print dialogue, it downloads a PDF. I'm starting to wonder if it is doing everything correctly, except printing to a PDF-writer-style fake printer. However, if this was the case, I'd still expect the dialog to show up asking to confirm printer selection, # of pages, etc. This is not happening.
216.173.144.188 (talk) 16:45, 21 July 2015 (UTC)
- Yes, it does sound like it's set to print directly to a file rather than bring up the printer dialog. I don't know where to change that, unfortunately. But, presumably you could then open the PDF file it generates and print from there, as a workaround, until you fix the problem. Another workaround is to use the Print Screen button, then paste into MS Paint, etc., and print from there. This is limited to one screen's worth at a time, but does allow you to cut out any portion of the page you don't want to print. StuRat (talk) 13:32, 22 July 2015 (UTC)
Windows API (Shell32.dll)
I'm using SHBrowseForFolder from Shell32.dll to display a directory selection box, setting the initial directory using the BFFM_SETSELECTION message in the callback, as described on the MSDN page. I'm setting the BIF_USENEWUI flag (only). This is working OK for "normal" directories (such as C:\MyData\MyDataDirectory), but not working for directories in "My Documents". The actual name of such a directory is something like C:\Users\Tevildo\Documents\Temp, but using this in the BFFM_SETSELECTION message doesn't work. If I manually browse to the directory, I have to go C: > Users > Tevildo > My Documents > Temp instead. The returned value of the directory name is correct (with "Documents" instead of "My Documents"). Is there anything I can do to fix this? In particular, do I need to set some additional flags? Tevildo (talk) 19:10, 21 July 2015 (UTC)
- Someone reported the same thing here as a bug, and while I don't see any resolution there, it does at least mention a couple of workarounds that you might try, namely removing BIF_NEWDIALOGSTYLE (leaving only BIF_EDITBOX in your case since BIF_USENEWUI combines those flags) or not specifying a root. -- BenRG (talk) 04:30, 22 July 2015 (UTC)
- Thanks for the link. Getting rid of the root directory has fixed the problem (although it's not an ideal solution, of course). Tevildo (talk) 10:48, 22 July 2015 (UTC)
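- For anyone hitting the same problem later, here is a minimal Win32 C++ sketch of the arrangement discussed in this thread (illustrative only: the path is Tevildo's example, error handling is omitted, and it assumes a console TCHAR build linked against shell32 and ole32). It keeps BIF_USENEWUI, sets the initial directory by sending BFFM_SETSELECTION from the callback, and leaves pidlRoot as NULL, which is the workaround that fixed the problem; the other workaround mentioned above would be to drop the BIF_NEWDIALOGSTYLE half of BIF_USENEWUI.
#include <windows.h>
#include <shlobj.h>
#include <tchar.h>

/* When the dialog has finished initialising, select the initial folder. */
static int CALLBACK BrowseCallbackProc(HWND hwnd, UINT uMsg, LPARAM, LPARAM lpData)
{
    if (uMsg == BFFM_INITIALIZED)
        SendMessage(hwnd, BFFM_SETSELECTION, TRUE, lpData); /* TRUE: lpData is a path string */
    return 0;
}

int main()
{
    CoInitialize(NULL); /* required for the new-style dialog */

    TCHAR displayName[MAX_PATH] = TEXT("");
    const TCHAR *initialDir = TEXT("C:\\Users\\Tevildo\\Documents\\Temp"); /* example path */

    BROWSEINFO bi = {0};
    bi.pszDisplayName = displayName;
    bi.lpszTitle = TEXT("Choose a directory");
    bi.ulFlags = BIF_USENEWUI;          /* = BIF_NEWDIALOGSTYLE | BIF_EDITBOX */
    bi.lpfn = BrowseCallbackProc;
    bi.lParam = (LPARAM)initialDir;
    /* bi.pidlRoot stays NULL - not specifying a root is what made the selection work */

    LPITEMIDLIST pidl = SHBrowseForFolder(&bi);
    if (pidl)
    {
        TCHAR path[MAX_PATH];
        SHGetPathFromIDList(pidl, path); /* comes back as "...\Documents\Temp", not "My Documents" */
        CoTaskMemFree(pidl);
    }
    CoUninitialize();
    return 0;
}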
July 22
android vulnerabilities
I was surfing the web on my phone. All entirely respectable sites that I've read for a long time. But one site (or a banner ad on it) redirected me somewhere, and that "somewhere" showed a message box with illegible (binary) text in it, and then a page opened where it said, in broken German, that my phone was infected and that I needed to install some app from Google Play (needless to say, I didn't).
Should I be concerned? The random text in the first message box, in particular, looked like it could be an attempt to stage a stack overflow attack or trip the browser up in some other way (why else call alert() with a binary string?).
Is there something else I can do apart from changing the passwords? Should I also change the passwords of the sites Chrome knows the passwords to, because I logged in on them at some time in the past and had Chrome remember them? Thank you for all your help. Asmrulz (talk) 20:50, 22 July 2015 (UTC)
- You are probably fine. It sounds like the banner was using JavaScript to generate a popup, which looks scary (I've never heard of any overflow attacks that involve transmitting binary over HTTP), followed by a script that directs you to the store. If your phone's browser intercepts a link that begins with market:// it will open up the Google Play market to the address specified. (See here). As long as you didn't install any applications you are almost certainly safe. I'd say you don't even need to change your passwords, but there's no harm if you want to be cautious. 81.138.15.171 (talk) 16:28, 23 July 2015 (UTC)
- All of that binary text designed to make you think you had been hacked was probably just generated by the web site itself. You might want to report this to Google Play, as they can pull that app off their list, for such behavior. They probably won't do it based on your word alone, but if enough people report it they will. StuRat (talk) 16:21, 24 July 2015 (UTC)
July 23
C++ pointer question
Let's say I have a local variable and a pointer, and I assign the address of the variable to it. When the variable is removed from the stack, does the pointer become a stray pointer? Thanks Kayau (talk · contribs) 06:57, 23 July 2015 (UTC)
- Yes, the address of a local variable should not be returned, because its lifetime will end when the function returns. Notice that you *may* be able to use the address, as it is still on the stack and the program can access the stack at any time, but you should *not* rely on it, as it is not guaranteed. Look at the following example:
#include <iostream>

int *f1(void)
{
    int localvar;
    int *localptr;
    localvar = 5;
    localptr = &localvar;
    return localptr;
}

void f2(void)
{
    int a = 20;
    int b = 30;
    int d = 50;
    /* dummy function, but calling it will rewrite the stack */
}

int main(int argc, char *argv[])
{
    int *mainptr;
    mainptr = f1();
    /* will print 5 */
    std::cout << *mainptr << std::endl; /* the lifetime of the local vars has ended, but the value is
                                           still there since the stack hasn't changed, so you should
                                           not rely on it */
    f2(); /* this will rewrite the stack */
    /* now let's try printing the number one more time */
    std::cout << *mainptr << std::endl; /* will not print 5, but a junk value */
    return 0;
}
The output is as follows on my system:
> ./a.out
5
20
- Thanks for the detailed answer. The example was very easy to understand. :) Kayau (talk · contribs) 17:31, 23 July 2015 (UTC)
- A very similar question came up on StackOverflow yesterday. The poster observed that, although a returned pointer to a local array became invalid, it was transiently valid if observed using gdb before another called function had the opportunity to trash it. (Needless to say, this is all hugely sketchy and should never be relied upon!) —Steve Summit (talk) 19:24, 23 July 2015 (UTC)
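- As a footnote to the example above, here is a minimal sketch of two common ways to avoid the dangling pointer altogether (purely illustrative; the function names are invented): either return the value itself, so the caller gets its own copy, or return owning heap storage whose lifetime is not tied to the stack frame.
#include <iostream>
#include <memory>

/* Option 1: return by value - the caller receives a copy, no pointer involved. */
int f1_by_value(void)
{
    int localvar = 5;
    return localvar;
}

/* Option 2: return owning heap storage - the object outlives the function (C++14). */
std::unique_ptr<int> f1_on_heap(void)
{
    return std::make_unique<int>(5);
}

int main(void)
{
    int v = f1_by_value();
    auto p = f1_on_heap();
    std::cout << v << " " << *p << std::endl; /* prints "5 5"; nothing dangles */
    return 0;
}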
First paint bucket tool?
What was the first consumer available graphics editor to have a "paint bucket" fill tool? We have an article Flood fill which doesn't discuss early implementations. I know MacPaint had it in 1984. Is there an earlier example? Staecker (talk) 14:18, 23 July 2015 (UTC)
- This is going to be tricky... the algorithm existed since - well, it probably preceded the digital computer! Flood fill is, at its core, just a recursive search with path marking. One of the most obvious applications of this algorithm is to mark a two-dimensional array of data that represents a raster graphic. Raster graphics have existed for a long time, too... they also preceded the interactive graphical user interface. As you dive deeper into the history of computer application software, graphics editing looks a lot more like software programming; there isn't a hard "line" where suddenly consumers had access to point-and-click tools. It was a gradual transition.
- You can read early history of computer graphics. Sketchpad (1963) constitutes what I would call "Application Software"; but it was designed in the late 1950s for the TX-2, and when you look at the details of machines from that era, it's not straightforward to distinguish "applications" from "system software" or even from "hardware." The author, Ivan Sutherland, implemented a recursive function (!) infrastructure for copying picture elements; but this recursion did not appear to be used for raster graphics. (Well, it's sort of on the boundary: the computer used a sparse matrix to represent a framebuffer, so it's almost modern and incredibly efficient!) The Sketchpad software even had a "graphical user interface." This infrastructure could have permitted the user (programmer!) to program a "flood fill," but that feature is not one that is described in the author's thesis. The distinction between "software user" and "software programmer" is a more recent invention than these machines! So I think it's fair to say that a user of Sketchpad could have used "flood fill."
- Almost all of the early work on CAD involved graphical user interfaces: it was, of course, originally meant to stand for computer aided drafting. I expect if you deep-dive this history, you'll find a steady progression towards greater usability.
- Here's Filling algorithms for raster graphics, presented by Theo Pavlidis at SIGGRAPH 1978. Apparently "flood fill" was novel enough for SIGGRAPH... but then again, the Porter-Duff algorithm was amazingly still considered novel in 1984... yes, things can be in front of other things.
- Nimur (talk) 15:47, 23 July 2015 (UTC)
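- To illustrate the point above that flood fill is, at its core, just a recursive search with path marking, here is a minimal C++ sketch (the 4x4 grid and colour codes are invented for the example; paint programs of the era often used non-recursive scan-line variants of the same idea to save memory):
#include <iostream>
#include <vector>

/* Recursively repaint every pixel reachable from (x, y) that still has the old colour. */
void floodFill(std::vector<std::vector<int>>& img, int x, int y, int oldColour, int newColour)
{
    if (y < 0 || y >= (int)img.size() || x < 0 || x >= (int)img[y].size())
        return;                                       /* off the canvas */
    if (img[y][x] != oldColour || oldColour == newColour)
        return;                                       /* wrong colour, or nothing to do */
    img[y][x] = newColour;                            /* mark this pixel... */
    floodFill(img, x + 1, y, oldColour, newColour);   /* ...then visit its four neighbours */
    floodFill(img, x - 1, y, oldColour, newColour);
    floodFill(img, x, y + 1, oldColour, newColour);
    floodFill(img, x, y - 1, oldColour, newColour);
}

int main()
{
    std::vector<std::vector<int>> img = { /* 0 = background, 1 = drawn outline */
        {0, 0, 0, 0},
        {0, 1, 1, 0},
        {0, 1, 1, 0},
        {0, 0, 0, 0},
    };
    floodFill(img, 0, 0, 0, 2);           /* "paint bucket" clicked on the background */
    for (const auto& row : img) {
        for (int p : row) std::cout << p;
        std::cout << '\n';
    }
    return 0;
}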
- Thanks- I agree the algorithm is "obvious" to the point of not having a specific origin. Sketchpad is a great example, but as you say none of its 40 buttons did a flood fill. (I'm no expert, but it doesn't seem to do any shading or "filling" at all.)
- I see PCPaint has a paint-roller tool- anybody know what that button does? Staecker (talk) 18:42, 23 July 2015 (UTC)
- In defense of the Porter and Duff alpha-blending paper, (a) image compositing ("things in front of other things") goes back to the 19th century and they certainly don't claim that as original; (b) αx + (1−α)y alpha blending they also describe as "well known" already, and their approach is rather more sophisticated; (c) their algorithm is based on an assumption that's generally false (that the portions of each pixel obscured by each image are "statistically independent") and it's not obvious that it will give useful results in practice given that dubious foundation, so there's value in people from Lucasfilm disclosing that it works well enough for their feature-film special effects or whatever it was they worked on. -- BenRG (talk) 22:26, 23 July 2015 (UTC)
- I know I implemented flood fill in my graphic editor "Gredi", really only intended for my own use (but a publisher grabbed it anyways and offered it to the masses, which largely and rightly ignored it). I'm not sure when I wrote that, but it was reviewed in a computer magazine in 1985, and I borrowed the flood fill idea from somewhere earlier and the concrete algorithm from Chip magazine. So by that time flood fill already was well known and well understood. I'm fairly sure that Sinclair ZX Spectrum and Commodore 64 graphics programs had flood fill no later than late 1982. --Stephan Schulz (talk) 19:29, 23 July 2015 (UTC)
- The Hobbit (1982 video game) certainly does (youtube). user:87.114.100.65 sits down and starts singing about gold @ 19:46, 23 July 2015 (UTC)
Picture MW Analyzer
Hi. Does someone know an online site that specifically analyzes pictures (.jpg, .png, etc.) for malware hidden inside (shellcode, PHP, etc.)? The main reason is that I found a suspicious picture @ Commons that includes a web injection and some suspicious shellcode. However, I do not mean VirusTotal or anything like it (because I already analyzed it there), but an online site as mentioned, one that just focuses on picture (or PDF) files. Thanks for some advice. 83.99.17.37 (talk) 22:11, 23 July 2015 (UTC)
- I didn't find any such site in a brief search. It would be an oddly specific thing to specialize in. Did VirusTotal fail to detect the malware? Which image is it? -- BenRG (talk) 02:04, 24 July 2015 (UTC)
- Hi. I moved my question @ Commons from the talk page of admin Jameslwoodward (as he is offline for the moment) to Yann. If you have a special (wiki) mail address, I can send you some more specific details. The reason why is the one you can read at Yann's talk page. Regards --Gary Dee 15:42, 24 July 2015 (UTC)
- If someone has advice, please join the discussion @ https://commons.wikimedia.org/wiki/User_talk:Yann
- thx --Gary Dee 16:57, 24 July 2015 (UTC)
- Hi, Two engineers from the WMF looked at it, and saw no issue. Regards, Yann (talk) 08:27, 26 July 2015 (UTC)
July 24
Not sure if the piggybacking routers are helping matters?
Hoo boy. I just moved into a co-op, and I've been elected the IT guy and I've been learning everything I can to improve the house's internet speed. It hasn't been easy, and I'm basically going to walk you through what I've learned:
- There are four routers in this house, we'll call them A, B, C and D. Every router has its own network with its own password.
- I had purchased a set of four wifi boosters in an attempt to improve their speed. But I was getting very confused at the fact that the boosters were only successfully connecting to Router A. Every other router could not connect to its assigned booster, either wirelessly or with an Ethernet cable, making it impossible for me to boost their signal.
- I just learned today that only Router A has actual service from our cable company provider (TWC). It's also the only router TWC provided. The other three routers were purchased separately by the house and (I can only assume) are piggybacking off of router A. We pay a single bill to TWC for all our internet.
My question: Should I leave those other routers in? Are they actually limiting traffic from router A and freeing up bandwidth or helping in any way, or are they only making the problem worse since router A is now having to handle multiple networks? I was planning to upgrade the routers, should I upgrade all of them? --Aabicus (talk) 01:32, 24 July 2015 (UTC)
- I'm guessing router A couldn't handle all the traffic alone, so they added more routers to "help". I suspect upgrading router A so it can handle all the traffic alone would have been the better option. My suggestion is to upgrade A to a level that can handle all the traffic alone, then test performance with the rest connected, then disconnected. StuRat (talk) 04:06, 24 July 2015 (UTC)
- If it's a large house, that would explain their setup. Typically, large sites have to be served by multiple access points to provide adequate range.
- Typically, you design large wireless networks with multiple access points (APs, or what you refer to as routers). For example, an 802.11g wireless network, under ideal conditions outside, has a range of 300 feet. Any obstructions (e.g., walls) will lower that range significantly. So, you first perform a site survey by measuring the strength of the signal at various distances from the access point. You can just use a laptop for the survey. Once the signal becomes too weak, you place another access point at that location. But you put the AP on a different channel. For example, 802.11g has three clean channels -- 1, 6, and 11. If the first AP was on channel 1, you place the next one on either channel 6 or 11. That way, they don't interfere with each other. So, my guess is that's what someone already did. They put four APs at your site because it's too big to be served by a single AP. They put each one on a different wireless "network" so it could have a different channel.
- However, you don't have to create different networks to get a different channel. You can give them all the same name (i.e., SSID) but just change the channel. Keep the password the same for all the APs. That's how I would set it up.
- Having said that, the channel numbers and ranges I gave above are only valid for 802.11g networks. Different 802.11 standards have different ranges and channels, so I'd need to know the model number of the APs to give you more specific information. It'd also help to know the IP subnet you're using along with the wireless settings the APs are using (namely, the channel, 802.11 standard, and frequency band -- 2.4 or 5 GHz). Also, are all the APs on the same subnet? You can find this information by logging into the Web page for the AP. I'm also guessing the APs are connected to each other using Ethernet cables? Typically, the AP will have a single router port and multiple switch ports. They're usually colored differently. The router port is used to connect different subnets and the switch ports connect to the same subnet. Are they just cabled through the switch ports or the router ports? I would just connect them using the switch ports. Connect Router A to the cable modem using its router port but connect each AP to Router A using the switch ports. Then, make sure each AP is on the same subnet. Then, disable DHCP on all the APs except for one. Then, reboot every end-user device and AP at the site so they get new, consistent IPs. That will simplify things and make everything operate faster.—Best Dog Ever (talk) 09:20, 24 July 2015 (UTC)
Different colors in digital photo
If I remember rightly from last Saturday, both the nearer-to-camera tree and farther-from-camera group of trees (in the center, not the stuff farther away on the other side of the river) were the same color. Can we guess how they ended up in different colors in this image? For this photo, I used the 55-200 mm zoom lens for my Nikon D3200, zoomed in all the way, and it wasn't taken through a window or other medium. With other images on the same settings, e.g. File:Robert Buckles Barn from the road.jpg, it's not produced the same result, so I doubt that it's a systemic problem with the camera. Nyttend (talk) 04:14, 24 July 2015 (UTC)
- I'm not sure that this is really a matter for the Reference Desk, but to my eye it just looks as if the air is a bit hazy. If you look at groups of trees at progressively larger distances from the camera, the farther they are the duller the colors are. --65.94.50.73 (talk) 04:31, 24 July 2015 (UTC)
- Also bluer, due to Raleigh scattering. The camera has accurately recorded the difference in colour (I'm a Brit) that was actually there. However, when you were looking, your brain was automatically processing the "raw data" from your eyes to correspond better with your "mental model" of the world. It does this all the time in several different ways, which is why, for example, you don't usually perceive the more distant of two similarly sized objects as "smaller", even though its image is smaller on your retina. Our article Visual perception hopefully gives some leads relating to this subject. {The poster formerly known as 87.81.230.195} 212.95.237.92 (talk) 13:22, 24 July 2015 (UTC)
- Also, when you say the trees were the same color, did you go closer to them to determine this ? If so, the apparent colors of the trees would then be the same, because there wasn't as much hazy air in between as there is in the photo.
- As for the Raleigh scattering, that is exemplified in the America the Beautiful line "For purple mountain majesties". (The mountains aren't really purple, but appear that way from enough of a distance.)
- One way the camera actually could change the color of just certain trees is if the far trees were out of focus, and thus the leaf color was blended with the color of the background. However, it doesn't look to me like that happened here. StuRat (talk) 15:40, 24 July 2015 (UTC)
- Couldn't go closer (thus the poorer-quality distant photo), because it's on the grounds of a power plant, and this is the closest the road goes to the site. I suppose the Rayleigh scattering is the right answer; it never occurred to me that the brain would "fix" it, but your statement makes complete sense. And of course the camera doesn't know to "fix" such a thing. Here I figured it was something with the digital camera; that's why the question came here, not WP:RDS. Thanks for the help! Nyttend (talk) 16:46, 24 July 2015 (UTC)
- It's not impossible to fix this digitally. The camera would need to be able to determine depth at various points in the frame, measure the color change with depth, and then compensate for it. Of course, this wouldn't be perfect, as the absolute color might actually change for items farther away. Also, I think many people like the natural "purple mountains" effect. StuRat (talk) 16:59, 24 July 2015 (UTC)
- That's purple mountains' majesty. —Tamfang (talk) 08:43, 25 July 2015 (UTC)
- Not according to America the Beautiful#Lyrics and America the Beautiful#Idioms. StuRat (talk) 01:23, 26 July 2015 (UTC)
- Well, damn. I'll still maintain that the version I learned makes more sense. —Tamfang (talk) 18:00, 26 July 2015 (UTC)
- I agree that this is most likely either Raleigh scattering or Mie scattering - primarily because you mention the distance disparity between the two trees. But there are other ways this could theoretically come about. Suppose one tree had leaves that reflected only green light - and the other had some small amount of reflectivity in red light as well as in green. If you viewed the scene in white light - at midday, perhaps - the difference in color between the two trees might be too subtle to notice. But at dawn and at dusk, when the sunlight is very reddish/orange - the tree that reflects no red light would appear significantly darker than the one that reflects a tiny bit of red, so that small difference in color would be greatly magnified. There are other possibilities too - the average orientation of the leaves to the direction of the sunlight might be different in one than in the other, and that would produce dramatic differences depending on the angle of the sunlight when the photo was taken. If one tree had very shiny leaves and the other was relatively dull, then the difference might be hard to see on a cloudy day, but very evident when it's sunny. There are many possibilities. SteveBaker (talk) 03:01, 26 July 2015 (UTC)
- Looking at the Y channel of the pic in the CMYK color space the foreground tree in question is very bright, nearly all the background black. Is that Raleigh scattering?—eric 04:08, 26 July 2015 (UTC)
- More or less. The Rayleigh scattering (note spelling, everyone) makes distant objects bluer, and yellow is complementary to blue. I don't know much about CMYK, but when you're on the blue side of white, it will probably use little or no yellow ink (unless it's a very dark blue). -- BenRG (talk) 20:40, 26 July 2015 (UTC)
- The camera is using color balance. At night the visible spectrum is cut down by the artificial light: incandescent bulbs are missing blue light, while compact fluorescent lamps and other fluorescent lamps do not give enough red light. The digital camera does the color balance by gamma correction, looking in the picture's histogram for the lightest dots (i.e. pixels); several more parameters are used for a good gamma correction. Some cameras allow you to turn off the color balancing or to save raw images (uncompressed bitmaps from the sensor) to the flash memory. Gamma correction always loses quality: by amplifying the missing blue in pictures lit by light bulbs, the blue channel becomes noisy. For that reason the camera takes multiple pictures to cover up the sensor's noise, which makes it impossible to capture fast-moving objects clear and sharp even when they are focused correctly; the settings are a trade-off between color noise and sharply captured fast-moving objects. A simple trick to help the color-balance algorithm is to put a sheet of white paper or similar beside the objects and keep it in the camera's view, then crop it out or remove it by photo editing later. --Hans Haase (有问题吗) 08:51, 27 July 2015 (UTC)
Measuring computer literacy
Are there reliable statistics about computer literacy? For example, how many people can navigate the web, send and read emails, write a simple text with a word processor, make a table with a spreadsheet, install a program, configure a mouse, and so on? The corresponding article here in wk is kind of thin, btw. I'd like to find stats for Europe, US and Japan mainly. --YX-1000A (talk) 08:29, 24 July 2015 (UTC)
- It is very difficult to tell if people can use what they have. It is easier to tell what they have and assume that if they are paying for something, they know how to use it. (As an analogy, I run reports on medications. I don't know what people take. I know what their insurance paid for and I know what the doctor prescribed. I assume that most people take some of what the insurance pays for and, if it is over-the-counter, they take some of what is prescribed. When working with millions of patients, I ignore the specifics about any one patient. Statistics over anecdotes.) Internet access is rather easy to track because someone has to pay for it. You can see U.S. statistics on Internet use here. UK internet statistics are here. I didn't find Japanese statistics, but I found Japanese websites that appear to be Internet statistics. If I could read Japanese, I could be certain. 209.149.113.45 (talk) 13:10, 24 July 2015 (UTC)
- Schools and colleges might be a good way to check computer literacy for kids and young adults, as that is one factor on which they are likely to report these days. Employment agencies might be another source of info for adults. StuRat (talk) 17:01, 24 July 2015 (UTC)
- Here is a "Survey of Literacy, Numeracy, and ICT Levels in England" (from 2011), published by the Department for Business Innovation and Skills (a ministry of the UK government). It's 425 pages, and computer literacy is only one of its topics, but there might be something useful in there. AndrewWTaylor (talk) 19:29, 24 July 2015 (UTC)
July 25
Group by
Hello guys! I have a problem with SQL, which I will show with an example. I have this table (of course, it is only a small part of it):
ll_from | ll_lang | ll_title |
---|---|---|
1 | ab | lorem |
1 | az | ipsum |
1 | bf | foo |
2 | bab | some |
2 | baz | random |
2 | bbf | text |
3 | bab | some |
3 | baz | random |
3 | bbf | text |
So I group them by the ll_from column. I want to get those groups which don't have the value az in any of their ll_lang rows; that is, I would need ll_lang<>"az", which of course doesn't work in a WHERE clause. My query so far is:
select ll_from, count(ll_lang)
from table
group by ll_from;
For this example I would like to get this output:
ll_from | count(ll_lang) |
---|---|
2 | 3 |
3 | 3 |
Yes, it is for Wikipedia. Speaking in Wikipedia-related terms, I want to get pages (ll_from) which aren't in some language's Wikipedia (ll_lang), sorted by the number of interwiki links.
- I don't know if this is a performant way of doing it but you could try it. Put this after the group by clause:
- having ll_from not in (select distinct ll_from from foo where ll_lang = 'az');
- 91.155.193.199 (talk) 19:54, 25 July 2015 (UTC)
Removing the default profile from Chrome
Hi, I updated to the latest version of Chrome yesterday, and would like to remove the default profile, you know the one where your username appears in the top right of the screen. Usually it can be hidden through navigating to chrome://flags/#enable-new-avatar-menu and selecting disabled. But this option no longer works. There is apparently another option to add --disable-new-avatar-menu after the chrome.exe part of the target attribute, but when I tried that just now it took absolutely no notice. I know this thing is billed as an easy way to switch between profiles, but as I'm the only person using my desktop it's completely unnecessary, so I'd rather reclaim the space for open windows. Can anyone help? Thanks in advance, This is Paul (talk) 13:15, 25 July 2015 (UTC)
- ok, I'm now gonna answer my own question, since I just discovered why it wasn't working. It seems that the change only applies to a specific shortcut and not all of them. Having modified the shortcut on my desktop I then opened Chrome from the taskbar, but this shortcut was still unchanged. So I had to delete the one from the taskbar and then add the modified desktop shortcut to the taskbar. The start menu shortcut also needed to be changed separately. Anyway it all works fine now and that irritating profile has gone. Hope this helps anyone else who downloads the latest version of Chrome and doesn't want the new facility. Cheers, This is Paul (talk) 15:12, 25 July 2015 (UTC)
Crowdlogs
Last month I asked about crowd funding tracking sites, and User:X201 was kind enough to tell me about Crowdlogs, which tracks both KickStarter and Indiegogo, but they stopped working a few days ago, with "Something went wrong." appearing on their front page. Their twitter account doesn't have any recent news and their blog page just says "Error establishing a database connection". Does anyone here know if they are experiencing technical difficulties or if they are shutting down operations?
Kicktraq is still working (though it doesn't track Indiegogo) but Kickspy shut down at the end of March after receiving some heat from Kickstarter (though they say their decision to shut down was completely independent of Kickstarter's displeasure). Are there any Indiegogo tracking sites still in operation? -- ToE 20:34, 25 July 2015 (UTC)
- There is an email address for Crowdlogs on their 'About' page - you might maybe want to ask. SteveBaker (talk) 11:55, 26 July 2015 (UTC)
How many levels of abstraction when running ?
When I run a java program, how many levels of abstraction are there? Is it bytecode - JVM - OS - machine language - physical switch?--Bickeyboard (talk) 00:33, 26 July 2015 (UTC)
- That's gonna be implementation-dependent - and very often, there is a micro-code layer below machine code and above the level of physical gates (which in turn are abstractions of transistors and such). But the OS (Operating system) isn't a layer anywhere here. In a sense, the OS is nothing more than a different program that happens to be running on the same computer. SteveBaker (talk) 02:41, 26 July 2015 (UTC)
- I wouldn't say the OS isn't a layer anywhere here. AFAIK, Java doesn't usually take care of things like file system implementations and CPU scheduling on its own; that's what the OS should do! That said, it is possible to run a complete program without any operating system (which is quite often the case in embedded systems). --Link (t•c•m) 14:51, 26 July 2015 (UTC)
- No, of course Java doesn't do things like file systems - but the OS is more like a library as far as Java code is concerned. It's not a level of abstraction between JVM and machine code...no way. If it were a level of abstraction, then you'd be able to point to a complete description of your algorithm described in terms of the operating system - and that sentence doesn't even mean anything! And even if it was - it would still be implementation-dependent - you can run Java code on computers that don't even have operating systems. Consider something like Haiku-vm for Arduino - there is no operating system - there is no JVM either. It converts Java source code into bytecode, then uses a small C program to interpret the bytecode. The bytecode is never converted into machine-code either. Without reference to a specific implementation, the question doesn't even mean anything. You could (in principle) write a program to run on a Babbage analytical engine that would interpret ASCII Java source code directly from punched cards. It would still be a Java program - but there would be no byte code, no JVM, no OS, no machine language, no electronics...just a bunch of gearwheels. Java doesn't care how it gets executed...so it's meaningless to make generalizations about how it runs. Now, if you said "How is such-and-such implementation of Java run under Windows 8?", then we could provide a more definite answer. SteveBaker (talk) 01:16, 27 July 2015 (UTC)
- It looks to me like the questioner thinks that the OS operates like a JVM. The OS is often referred to as an abstraction layer because programmers usually program for the OS, not the hardware. However, the OS does not convert the running program into machine code. It is a program that is running at all times and provides a common way to talk to a variety of different hardware devices. It is hard to pin down a specific answer where "operating systems" are concerned, because there is no universal answer to "What is an operating system?" It is clear, though, that it is not a "virtual machine". 209.149.113.45 (talk) 13:41, 27 July 2015 (UTC)
What language uses if!, enter!, exit!?
What language uses if!, enter!, exit!?--Bickeyboard (talk) 00:35, 26 July 2015 (UTC)
- I don't know of one. Where did you see them? The context must provide some clue. Scheme has a builtin named set! and a bunch of functions with exclaimed names, but not the three you listed. Vim (text editor) uses ! as a modifier for some commands, but not those. -- BenRG (talk) 21:23, 26 July 2015 (UTC)
replace character vs add character
I don't know how I got into this (perhaps some accidental control key combination?) but suddenly when I type text in the middle of a line, instead of just adding the next character, Notepad++ replaces the existing character. So, instead of inserting 'c' as the sixth character of 'charater' to get 'character', I get 'characer'. What can I do? --Halcatalyst (talk) 16:49, 26 July 2015 (UTC)
- I don't know NotePad++ but I guess you pressed the Insert key or chose it elsewhere. Try to disable the unwanted feature by pressing the Insert key. PrimeHunter (talk) 17:23, 26 July 2015 (UTC)
- It worked, thank you! --Halcatalyst (talk) 17:47, 26 July 2015 (UTC)
Unicode Alternatives to: \ / : * ? " < > | in Windows filenames?
The following characters are not permitted in Windows 7 filenames:
\ / : * ? " < > |
Q: What are the Unicode characters that will look the most like them on a web page (from any browser or operating system) without giving any protests or problems from Windows when I want to use them in filenames? (The three most urgently needed are alternatives to the colon :, the slash /, and the question mark ?.) Do you have any suggestions?
--Seren-dipper (talk) 17:44, 26 July 2015 (UTC)
- One standard method is to use the Halfwidth and fullwidth forms - FULLWIDTH COLON (U+FF1A, ：), FULLWIDTH SOLIDUS (U+FF0F, ／), FULLWIDTH QUESTION MARK (U+FF1F, ？). See the table in the article for the other symbols. Tevildo (talk) 18:10, 26 July 2015 (UTC)
- Try the Unicode Consortium's confusables utility for some options. For : you can often substitute a dash. -- BenRG (talk) 20:27, 26 July 2015 (UTC)
Perfect! Thank you both! ☺
--Seren-dipper (talk) 01:51, 27 July 2015 (UTC)
- Presumably you're aware that people will hate you if you do stuff like that. They'll try to type the file names and it won't work. Looie496 (talk) 13:09, 27 July 2015 (UTC)
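If anyone wants to apply the fullwidth substitution suggested above to many filenames at once, here is a minimal sketch in Python (my own illustration, not anything from the posts above). It simply shifts each forbidden ASCII character up by 0xFEE0, which is where the fullwidth lookalikes sit in the Halfwidth and fullwidth forms block:

# Minimal sketch: replace the characters Windows forbids in filenames with their
# fullwidth lookalikes (U+FF01..U+FF5E are the ASCII characters shifted by 0xFEE0).
FORBIDDEN = '\\/:*?"<>|'
FULLWIDTH = {ord(c): chr(ord(c) + 0xFEE0) for c in FORBIDDEN}

def safe_filename(name):
    """Return a copy of name with forbidden characters swapped for lookalikes."""
    return name.translate(FULLWIDTH)

print(safe_filename('Is this a file? 50/50: "maybe"'))
# -> Is this a file？ 50／50： ＂maybe＂

As Looie496 notes above, the result looks right but is no longer typeable on a normal keyboard, so use it with care.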
hard drive error
This morning Windows started giving me warnings about a hard drive error. I ran SeaTools on it, and it detected a problem. It said to run SeaTools for DOS. I did that but it said "No hard drive found"... "no controllers detected"... I ran Chkdsk /f but it didn't find any problems. Is there some other free or cheap way to test for HD errors and try to fix them? Bubba73 You talkin' to me? 23:06, 26 July 2015 (UTC)
- I suspect your hard disk controller requires a driver that was not available in the DOS version of SeaTools you have. If it needs to be said: the very next thing you do, before you run any more tests, is ensure that any data you want to keep on this disk is backed up. In my experience, regardless of whether you find some software that claims to have repaired your problems, a hard disk that's had an issue is just a ticking time bomb. At the cost of disks these days, just replace it ASAP. Vespine (talk) 23:11, 26 July 2015 (UTC)
- Thanks - it is a secondary HD and I have current backups. Bubba73 You talkin' to me? 23:22, 26 July 2015 (UTC)
- What warnings did you get from Windows, and what problem did SeaTools detect? The only problem I can think of that might be (temporarily) fixable is bad sectors. You can run chkdsk /r to scan the whole disk for bad sectors and mark them as bad so that the filesystem won't try to write to them later. If any of the sectors were in use, you'll probably lose that data. Chkdsk will allocate a new sector for that part of the file, but I don't know what it does about the unreadable data; it may replace it with zeros, which could contaminate your backup. Hard drives with bad sectors are likely to develop more of them, causing more data loss, so it would be better to replace the drive unless you really don't care about the files stored on it. -- BenRG (talk) 23:53, 26 July 2015 (UTC)
- I don't remember the exact errors - something about sectors, I think. I think that replacing it now is probably the best idea. I went out to get a replacement, but the stores that have internal drives were already closed. Bubba73 You talkin' to me? 01:24, 27 July 2015 (UTC)
- Warning: an inadequate repair attempt can cause avoidable data loss. Use an external disk and a Linux live CD to back up your data: download the CD/DVD ISO image on another computer, then boot the affected machine from the CD, but do not install Linux on it. In the case of hardware damage, CHKDSK may not be able to write essential blocks because of physically damaged sectors. First, do not let the drive overheat. Second, back up your data before anything else, bearing in mind that some files may already be damaged, and do not overwrite or delete existing backups. Booting from another device also means the installed operating system is bypassed while you access your files. Once the backup is finished, work out whether the drive is physically damaged or whether only the file system is - the latter can be repaired, or rebuilt by wiping all data on the drive. For a further recovery attempt I have heard of a piece of software called SpinRite. I suggest not using a drive with physical damage again; even in notebooks the drive can be removed quickly, as today's drives are usually mounted in a tray or drawer. --Hans Haase (有问题吗) 08:23, 27 July 2015 (UTC)
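To make the "boot a live CD and back up first" advice concrete, one commonly used tool is GNU ddrescue - my suggestion, not something named in the post above. A typical invocation, assuming the failing drive shows up as /dev/sdb and an external disk is mounted at /mnt/backup (verify the device names very carefully before running anything like this):

# Copy the whole failing drive to an image file, retrying bad areas up to 3 times;
# the map file lets ddrescue resume later without re-reading the good sectors.
ddrescue -d -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map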
."Wikipedia" - the name
How did The Free Encyclopedia get its name "Wikipedia"? Does the beginning of the name "Wiki" come from Latin?
- "Wiki" is the Hawaiian word for "Quick" - see History of wikis. AndyTheGrump (talk) 00:54, 27 July 2015 (UTC)
- See also Wikipedia. Dismas|(talk) 01:15, 27 July 2015 (UTC)
Decoding QRCodes the hard way.
I want to write my own QRCode recognizer from the ground up...I have lots of programming and graphics experience - so you don't need to use baby-talk - but I've not done much image recognition.
What are the basic steps in doing such a thing? Assume I have a 2D array of pixels from a camera at no particular angle to the QRcode as a starting point.
24.242.75.217 (talk) 01:26, 27 July 2015 (UTC)
- Erm? Do you want someone to write you a tutorial? Have you read QR code? There are more than a few resources online for similar projects. Vespine (talk) 01:32, 27 July 2015 (UTC)
- Hmmm - I doubt either of those things will be of much use to our OP. QR Code doesn't say how it's decoded, and the Swift-reader description you linked can be summarized as "Call the QR-code reading library". Our OP said "from the ground up". I can't find a 'ground up' explanation.
- I believe you start off using some kind of edge-recognition approach to recognize the three corner boxes (to be honest, I'm a little vague on that part!) - and once you know those positions, you can generate a matrix that you can use to transform the image of the QR code into a simple, axially aligned 2D square...and from that point on, it's just a matter of reading and thresholding the pixels in the areas where you expect the data dots to lie. There is an error-correcting code (Reed–Solomon error correction) that is applied to extract the actual data.
- First you need to detect that there is a QR code. To do that you need to search for the FIPs (the three "finder pattern" squares). This paper talks about using Haar-like features; this discussion talks about using contour detection and analysis, which is a kind of feature extraction. Both of those examples use OpenCV to do the lower level stuff, but you can always choose to do that yourself. The ISO QR spec gives a rather simpler scanline-ratio based algorithm, but I don't know how robust that will be for real-world photos of QR codes. Once you've figured out the locations of the three corner FIPs, you need to transform the image so that it's square (because a real picture of a QR code will likely have some perspective distortion and some rotation) - that's discussed in the second link I give above. At some point (I guess now) you'll need to downscale so one black/white box becomes one pixel, and convert to a 1-bit image (a clever system will presumably use contrast for that, to accommodate images which have a shade gradient over them; a simple decoder might just choose to use a threshold (based on the range of tones within the inter-FIP area)). From there, you can use the decode algorithm detailed in the ISO specification - this page links to that. If you want to read someone else's code that already solves the problem, you could try ZBar's, although I had a brief look at their QR code and it's rather tulgey. -- Finlay McWalterᚠTalk 11:18, 27 July 2015 (UTC)
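To illustrate the simple global threshold mentioned above (one possible approach, not how any particular library does it), here is a minimal Python sketch that turns a greyscale pixel array into a 1-bit image using the midpoint of the darkest and lightest values:

# Minimal sketch: binarise a greyscale image (rows of 0-255 values) by thresholding
# at the midpoint of the darkest and lightest pixels. Real decoders usually prefer
# local/adaptive thresholds to cope with uneven lighting, as noted above.
def binarise(gray):
    lo = min(min(row) for row in gray)
    hi = max(max(row) for row in gray)
    threshold = (lo + hi) / 2
    # 1 = dark module, 0 = light module
    return [[1 if px < threshold else 0 for px in row] for row in gray]

image = [[ 12,  15, 240],
         [200,  10, 230],
         [ 14, 220, 225]]
print(binarise(image))   # [[1, 1, 0], [0, 1, 0], [1, 0, 0]]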
- The algorithms to locate barcodes can be explained rather easily. Locating a linear one, like an EAN or UPC, is nearly identical to locating a QR code (a small sketch of the run-pattern matching follows this list).
- Pick an angle from 0 to 180 degrees (this can be random, it can step through the angles, whatever...)
- Begin at a location on the left or top of the image and follow the angle across the image (the starting point can be random or step through every point...)
- If the pixel is blackish, record a 1.
- If the pixel is whitish, record a 0. (Some algorithms preprocess the image to get a point halfway between the darkest and lightest pixel to be the border between black and white)
- Scan the image for the sequence that indicates the beginning/end of the code, such as 1n0n1n1n0n1n where n is a quantity. You would match 101101 or 110011110011 or 111000111111000111 or 111100001111111100001111. Some algorithms allow for variance. So, this would match even though there is a missing 0: 11110000111111110001111.
- If you didn't find TWO beginning/end sequences (most barcodes require a special code at the beginning and end, so they come in pairs), go to the first step and start over.
- For QR codes, you now have a beginning/end box and the angle of the line between them. Scan at a 90 degree angle off both boxes to find the elusive third box. If you don't find it, go back to the first step.
- Once you have the beginning and end of the barcode, scanning the lines or squares between them is a completely different process. This is just for locating the barcode. There are many algorithms that do this. The big trick is finding the "fastest" and "most reliable" method of locating the beginning/end codes. 209.149.113.45 (talk) 13:24, 27 July 2015 (UTC)
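To make the "scan for a characteristic run pattern, allowing some variance" step concrete, here is a minimal Python sketch (my own illustration, not code from any library mentioned here). It uses the actual QR finder-pattern proportions, which along any scanline through a finder square read dark:light:dark:light:dark in roughly 1:1:3:1:1:

# Minimal sketch: find the QR finder-pattern signature in one row of 1s (dark) and
# 0s (light). The pattern is dark:light:dark:light:dark in roughly 1:1:3:1:1
# proportions; 'tolerance' allows the variance mentioned above.
def run_lengths(row):
    """Collapse a row of bits into [value, length] runs."""
    runs = []
    for bit in row:
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1
        else:
            runs.append([bit, 1])
    return runs

def find_finder_patterns(row, tolerance=0.5):
    """Yield (run_index, module_size) for every approximate 1:1:3:1:1 dark/light match."""
    runs = run_lengths(row)
    for i in range(len(runs) - 4):
        window = runs[i:i + 5]
        if [r[0] for r in window] != [1, 0, 1, 0, 1]:
            continue
        lengths = [r[1] for r in window]
        module = sum(lengths) / 7.0   # the five runs span 7 modules in total
        if all(abs(l - e * module) <= tolerance * module
               for l, e in zip(lengths, [1, 1, 3, 1, 1])):
            yield i, module

row = [0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
print(list(find_finder_patterns(row)))   # [(1, 2.0)] - the 2:2:6:2:2 run matches

A real scanner repeats this along many rows (and columns) and then cross-checks the candidates, as the steps above describe.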