Wikipedia:Reference desk/Computing

From Wikipedia, the free encyclopedia


:::::Chrome does have a well-reported difficulty with accessing, of all things, the Google website. Google claims that it has something to do with prefetching, but no fix they've suggested has actually fixed the problem. It isn't only Google. I've found other websites that fail to load. Even here on Wikipedia, there is a periodic failure to load the stylesheet, making the page look weird. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 23:43, 6 October 2011 (UTC)
::::::Yes. I've seen the wikipedia missing style sheet thing a couple of times. It's difficult to imagine what sort of software bug could produce such specifically consistent problems (in a temporally inconsistent way) - all DNS lookups are equal I would have assumed. [[Special:Contributions/87.102.42.171|87.102.42.171]] ([[User talk:87.102.42.171|talk]]) 00:00, 7 October 2011 (UTC)


== If I have an infected XP Pro x32 with the heur zero day threat and want to dual boot it with either an XP Pro x64 or a Windows 7 x64, will the virus pass over? ==

Revision as of 00:00, 7 October 2011

Welcome to the computing section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


October 1

Vulnerabilities in find and workarounds

I'm aware of the inherent vulnerability of using find with -exec, as it's discussed ([1]) elsewhere. So, for a workaround, a few questions. 1) Is there a convenient list of all bash tokens that could be used for escaping and exploits like this? In other words, can someone point me either to some regexes that sterilize things for the command line, or at least to the list so I can write them myself? And 2) Is there a simple way for me to call an arbitrary command-line program without dealing with the shell? In Perl, for instance, I seem to remember hearing that this was possible. Any other practical workarounds would be useful too. Shadowjams (talk) 00:32, 1 October 2011 (UTC)[reply]

At the shell prompt, (in bash), type man builtins to get a complete listing of all shell built-ins, special-characters, and commands.
The safest thing to do is to use file-permissions to deny a script from accessing, executing, editing, deleting, or copying files that it doesn't have permission for. Often, this means creating a separate user-account with the minimum permissions you need for the script; this allows Unix to sandbox the script-process for you (automatically eliminating an entire class of security-risks).
Perl can execute shell commands in several ways; the easiest is to use the ` character, documented here (as PERLOP `STRING`); or the similar system and exec commands. Perl isn't necessarily safer - if you don't know what the script is doing, translating it to another language sure doesn't help. Nimur (talk) 01:15, 1 October 2011 (UTC)[reply]
No, you shouldn't use backticks (`) for this because it invokes shell command parsing, which is precisely what the OP wants to avoid. You should use system with more than one argument.
I'm not sure I understand the question, though, because find -exec doesn't use shell parsing either. If you look at the linked page, the problems are (1) filenames that begin with - being interpreted as options, which has nothing to do with the shell, and (2) explicitly executing the shell with -exec sh -c "...". To avoid the first problem you have to read the documentation for the particular utility you're using, since the exact treatment of command-line arguments is program-specific. Many (not all) utilities accept -- as a signal that all later arguments are file names, not options. To avoid the second problem, just don't explicitly invoke the shell.
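To make the "more than one argument" point concrete, here is a minimal Python sketch using the subprocess module (the filename is a made-up example): passing the arguments as a list hands them directly to the new process, so shell metacharacters are never parsed as shell syntax.

```python
import subprocess
import sys

# A hypothetical filename full of shell metacharacters (harmless here,
# because no shell ever sees it).
tricky = "; rm -rf / #"

# Safe: the argument list is passed straight to the new process
# (execvp-style), so nothing is re-parsed as shell syntax. Compare
# shell=True or backticks, which would interpret the ';' and '#'.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", tricky],
    capture_output=True, text=True,
).stdout.strip()
print(out)  # ; rm -rf / #
```

The child process receives the string intact, exactly as the list element was given.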
I would strongly suggest avoiding shell utilities in favor of a decent programming language with a large selection of library routines. Perl is okay, but ever since I learned Python I've stopped using Perl for this kind of thing. In Python you can get find-like functionality with os.walk:
      import os

      size = 0

      for dirpath, dirnames, filenames in os.walk('.'):
          for filename in filenames:
              if filename.endswith('.zip'):
                  size += os.stat(os.path.join(dirpath, filename)).st_size

      print(size)
This prints the total size in bytes of all files with names ending in .zip below the current directory. (Not that that's what you wanted to do, but I wanted a nontrivial example.) The point is that you can do a lot of things without invoking command-line utilities at all, and thus you avoid having to stringify the command arguments and parse the output, and all of the security risks and bugs associated with that process. -- BenRG (talk) 03:37, 1 October 2011 (UTC)[reply]
Both excellent answers, thank you. That covers pretty much all the issues I was wondering about. Shadowjams (talk) 03:53, 3 October 2011 (UTC)[reply]

Torrents - are these just pure leechers?

When you're seeding a torrent and you see peers downloading lots of data from you, but their completed percentage never rises above 0.0% - are these typically users who have modified their client (or are using a hacked client) to make it report incorrect stats to avoid having to upload anything? i.e. being pretty much a textbook leech (I'm using uTorrent, btw)? I've noticed a few of these on my torrents recently and have considered blocking those specific IPs. --Kurt Shaped Box (talk) 01:30, 1 October 2011 (UTC)[reply]

Not necessarily. At least some clients, as a bandwidth optimization, don't report the acquisition of a block to a peer that is known to already have that block. (See "HAVE suppression" in the spec here.) -- BenRG (talk) 03:40, 1 October 2011 (UTC)[reply]
You can limit how much you upload by right-clicking the torrent and choosing an upload cap; it's not a hack. Bluefist talk 17:10, 6 October 2011 (UTC)[reply]

Looking for spam

Hello. I'm doing an assignment for which I need a fairly large sample of junk email. Viagra ads, enlarge your manhood, Nigerian scam... everything is fine. Is there some service from which I could retrieve such sample? 88.112.55.242 (talk) 07:57, 1 October 2011 (UTC)[reply]

Simply register a domain name (and get your ISP to host it and forward any emails), sign up for some (free?) porn, join a warez forum, and express an interest in buying drugs over the internet. It won't be long before your mailbox is flooded with more than enough spam to keep you busy for years. Responding to any of the spam will almost certainly increase its amount and variety. In case it is not obvious, never actually give anyone information about your bank account, and use a disposable email account for your research. Astronaut (talk) 09:36, 1 October 2011 (UTC)[reply]
There are dozens of spam collections on the internet for testing algorithms. [2], [3], [4], to give just a few examples. gnfnrf (talk) 14:28, 1 October 2011 (UTC)[reply]

Turning off Search Indexer

Microsoft's Search Indexer appears to be quite a resource hog on my Windows Vista laptop, and I almost never need to search for things - I'm quite organised and I have a good memory. I'm hoping I can free up some resources by stopping the indexer, without it breaking something else. Older versions of Windows had a way to turn off the Search Indexer, but I cannot find the control in Vista. So where has it been moved to?

Type services.msc into the start menu search box or command prompt, find the entry called "Windows Search" and double click it, set "Startup type" to disabled and click "stop". AvrillirvA (talk) 10:27, 1 October 2011 (UTC)[reply]

Learning PHP

After half a year of learning XHTML, CSS and JavaScript, it is time to pick up PHP. For files on localhost, can I use my own computer as a server? Do I need to install extra software, such as a PHP interpreter? — Preceding unsigned comment added by 59.189.219.114 (talk) 13:53, 1 October 2011 (UTC)[reply]

Yes, you'll need to install server software. When I did this (five years ago) I chose XAMPP, or in fact xampplite which is smaller. (It's Apache, really.) You can start the server, test some PHP locally, and stop the server when you're finished.  Card Zero  (talk) 14:15, 1 October 2011 (UTC)[reply]
Depending on what your operating system is, you'll need to install LAMP, WAMP, or XAMPP. Then, you start the webserver on your computer and access the PHP pages with a web browser connecting to localhost. -- kainaw 14:25, 1 October 2011 (UTC)[reply]
(edit conflict) There are lots of bundled packages of Apache, MySQL, and PHP. Depending on your OS, they are called WAMPs, LAMPs, or MAMPs. (Why we have three separate articles for what are essentially the same concepts, but on different OSes, I do not really know.) There are oodles to choose from. On my Mac I use XAMPP and have never had troubles with it. On my work PC I use EasyPHP and it works fine. --Mr.98 (talk) 14:27, 1 October 2011 (UTC)[reply]

making a film

So, I wanted to fit a bunch of pictures together to create a short film, like a slide show but a bit quicker than I expect those could manage, and while experimenting with my video editor (Avidemux), I found that I could open pictures in it and stitch them together into just such a film. However, if I try to add a picture that is not the only one in its folder, it adds the whole contents of that folder, with all but the first in the wrong colours, all bright and jumbled up. If I move things in and out of the folder one by one, it can only find the last one, so there's no easy way around that. And now it turns out that if I save the film and load it again, it all comes up in the wrong colours anyway, with coloured dots all over the pictures as well. Meanwhile, even the right-coloured images are of rather lower quality than the originals.

So, firstly, is there any way I can stop it doing all of these, and actually put the film together like this? If not, is there anything else I can get that would do a better job?

148.197.81.179 (talk) 19:31, 1 October 2011 (UTC)[reply]

http://electron.mit.edu/~gsteele/ffmpeg/ ¦ Reisio (talk) 19:50, 1 October 2011 (UTC)[reply]

I'm afraid that just looks like a long string of random letters and words to me, I have no idea what I am supposed to do with this, or even if it is something that can do what I want or merely a description of a program that exists elsewhere. 148.197.81.179 (talk) 12:14, 2 October 2011 (UTC)[reply]

How to view .TIF (.tif) files?

I downloaded the 25,000*16,000 resolution wallpapers for the Rage video game. I'm wondering what software can open these pics? Does anyone know? 65.66.126.217 (talk) 21:03, 1 October 2011 (UTC)[reply]

Assuming Windows XP / Vista / 7, MS Paint should be able to open them and convert them to jpg or something else more suitable. AvrillirvA (talk) 21:02, 1 October 2011 (UTC)[reply]
Well, I'm on Vista and MS Paint wasn't able to open them. Paint displays the following.
"Paint cannot open this file. This is not a valid bitmap file, or its format is not currently supported." 65.66.126.217 (talk) 21:09, 1 October 2011 (UTC)[reply]
Hmm. Try IrfanView, it should be able to open almost any image format AvrillirvA (talk) 21:17, 1 October 2011 (UTC)[reply]
Hey thanks AvrillirvA, that's an impressive software. CHRISTIANgamer97 (talk) 04:03, 2 October 2011 (UTC)[reply]

Download the smaller versions if you want smaller ones. There's one that's 2560x1600 and it's probably still bigger than you'll need. ¦ Reisio (talk) 23:30, 1 October 2011 (UTC)[reply]

Backing up/cloning a failing hard drive

A computer that I have is complaining of "imminent hard drive failure." I take it to mean that the hard drive is failing the BIOS's health checks. You can still make it boot to Windows but some files may not be reliably readable.

How do you back up or clone a failing hard drive before it fails completely? --71.185.179.174 (talk) 23:24, 1 October 2011 (UTC)[reply]

I'd use SystemRescueCD and ddrescue, but only if you're having trouble copying personal data — the OS files aren't worth it. ¦ Reisio (talk) 23:35, 1 October 2011 (UTC)[reply]


October 2

Telnet port numbers

I'm pointing out here and now that this is a homework question. Don't you wish all homework questions were that blatant about it? The question asks if clients A and B initiate telnet connections to server S at about the same time, what source and destination port numbers would A, B, and S use. Wouldn't A & B both use port 23 since that's the default port for telnet? It then goes on to ask whether the ports would be the same or different based on whether the connections (A and B) were coming from the same or different hosts. Wouldn't it work in a similar fashion to HTTP and its use of port 80? I can't seem to find the text relating to this and don't recall reading it. Thanks for any assistance, Dismas|(talk) 02:54, 2 October 2011 (UTC)[reply]

Hi Dismas. Clients A and B would send packets to port 23 on Server S. However, they would each send these packets from a different, randomly-generated source port. You can see this if you open two command prompts by going to Start → Run... → cmd twice and typing telnet towel.blinkenlights.nl into one of the windows. In the other window, you type netstat -an and you'll see each TCP/IP connection on your computer. Look for the connection with a destination like this: 94.142.241.111:23. That is a socket address: an IP address followed by a colon and the port number. But, in short, port numbers are pre-determined for the server (for the listening application). They are not pre-determined for the client application. As for the second question, they would also be different, even if they were coming from the same computer. Operating systems determine the return path by using different port numbers. One telnet session may have the return address of 192.168.1.100:5000 and another 192.168.1.100:5001. If the telnet server sends a packet to 192.168.1.100:5000, your computer will know that the packet is destined for the first telnet session.—Best Dog Ever (talk) 04:59, 2 October 2011 (UTC)[reply]
For more about the client port selection, see Ephemeral port. Unilynx (talk) 13:39, 2 October 2011 (UTC)[reply]
Thanks to you both! A much better explanation than what my teacher normally gives me! Dismas|(talk) 02:55, 3 October 2011 (UTC)[reply]
What may be missing from a complete picture is that while the server is listen()ing on port 23, as soon as it accept()s the connection, a new ephemeral port is created to handle that particular TCP connection. So while connections are initiated at port 23, none of them occupies it permanently. --Stephan Schulz (talk) 20:15, 3 October 2011 (UTC)[reply]
What? The connection is permanently identified by the (sourceaddr,sourceport,destaddr,destport) tuple. One of those numbers is 23. Run netstat if you don't believe me. 67.162.90.113 (talk) 23:29, 3 October 2011 (UTC)[reply]
I always assumed that the new socket returned by accept() also implied a new port. Seems that I was wrong. Thanks for enlightening me. BTW, I ran netstat on an ssh connection, given that the number of open telnet ports is really quite limited nowadays ;-) --Stephan Schulz (talk) 13:26, 4 October 2011 (UTC)[reply]
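The points in this thread can be checked with a short Python sketch, with a throwaway local listener standing in for server S (actual port numbers will vary per run): both clients share the destination port but get distinct ephemeral source ports, and accept() does not move the server off its listening port.

```python
import socket

# A throwaway local listener stands in for "server S"; port 0 asks the OS
# for any free port (playing the role of telnet's well-known port 23).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(2)
srv_port = server.getsockname()[1]

# "Clients A and B", both connecting from the same host.
a = socket.create_connection(("127.0.0.1", srv_port))
b = socket.create_connection(("127.0.0.1", srv_port))
conn_a, _ = server.accept()
conn_b, _ = server.accept()

# Same destination port for both, but distinct ephemeral source ports.
distinct_sources = a.getsockname()[1] != b.getsockname()[1]
# accept() returns a new socket, but its local port is still the
# listening port; the connection is told apart by the full 4-tuple.
keeps_listen_port = (conn_a.getsockname()[1] == srv_port
                     and conn_b.getsockname()[1] == srv_port)
print(distinct_sources, keeps_listen_port)  # True True

for s in (a, b, conn_a, conn_b, server):
    s.close()
```

Each connection is identified by the (source address, source port, destination address, destination port) tuple, which is why many clients can share destination port 23.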

YPbPr Problem

Hello all.

I've got a slight problem related to my old TV. See, for a few years now, I've been playing my Xbox on this fairly old TV, at standard resolution. As the TV's about 3 years old, I'd assumed it couldn't display HD. But this morning, I noticed a little sticker on the front that says 'HD Ready'. Now the problem is, I've got an Xbox HD cable, which I worked out is a YPbPr cable. The table at HD Ready says that if the TV says it is HD Ready, it must be able to accept YPbPr input, and indeed, when I cycle through the channels, there is one labelled 'YPbPr'. The problem is, I can't see where I should plug my YPbPr cables in. Here's a picture of the back: http://www.flickr.com/photos/68190540@N04/6203422208/lightbox/ (Apologies for the terrible quality). But basically, there's an input marked DVI (I understand that is an HD computer cable), one marked D-SUB, one called PC Audio, a yellow one (I think S-Video?) and two sound inputs, and a SCART input. There's no YPbPr input that I can see. Do I need to buy a signal transformer of some sort? Thanks. 92.7.30.242 (talk) 10:56, 2 October 2011 (UTC)[reply]

Also, here's the product details: http://www.ciao.co.uk/Swisstec_J19_1__6615521 . Any help is appreciated! 92.7.30.242 (talk) 10:58, 2 October 2011 (UTC)[reply]

Are you sure the cable is YPbPr? The colour of the three plugs should be a guide so if you have red, white and yellow that is just sound + composite analogue video. Do the TV's menus or the user manual give you any information about switching one of the labelled inputs to YPbPr? Sometimes, the SCART connector can be set to receive composite or some kind of component video (though perhaps not YPbPr). It also depends on which type of Xbox you have: this one: Xbox, or this one: Xbox 360 as to what output it is capable of producing. Astronaut (talk) 16:05, 2 October 2011 (UTC)[reply]
It seems that your TV doesn't actually have a YPbPr input. I've helped someone set up an XBox before. I think it has an HDMI output that carries digital video & audio to a TV that has an HDMI input. In your case, you have only a DVI input but no HDMI. Maybe you can look into getting a HDMI-to-DVI adapter. The audio may not work, as DVI is video-only, at least the older implementations are. If the audio doesn't work, you can use analog or optical audio, but there's some complication. Once you plug in an HDMI cable, you cannot plug in the original analog A/V connector. There are slim audio adapters that can plug into the analog A/V output connector even with an HDMI cable plugged in. People have invented hacks to work around the plug interference problem without an adapter, but that involves removing the plastic housing of the standard A/V cable. See [5]. If this seems a bit complicated, I'll admit it is, but I don't have another suggestion. Good luck. --72.94.148.76 (talk) 16:19, 2 October 2011 (UTC)[reply]
Some TVs have YPbPr via the scart. I would check your manual to see if there is any mention of it. BTW, does your TV say HD ready or does it have the HD ready logo in the above article? If it's the former, this may not mean the same thing as implied by the logo. Nil Einne (talk) 16:40, 2 October 2011 (UTC)[reply]

On closer inspection, although on flicking through the channels it just says 'YPbPr', when you look in the menu, it says 'D-Sub (YPbPr)'. So I guess I'm going to have to get one of these: http://forums.bit-tech.net/showthread.php?t=154360 . Incidentally, it's a 360, and definitely YPbPr cables; they're red, green, and blue. 92.7.30.242 (talk) 17:39, 2 October 2011 (UTC)[reply]

I once owned a television in which the component inputs were striped downward across the same plugs as the composite inputs (which ran across). Looking at your photos, though, this doesn't seem to be the case. And as an aside (mainly to other answerers, not the OP), early XBox 360s did not have HDMI, it was added with the Zephyr series motherboards. See Xbox 360 hardware for details. gnfnrf (talk) 04:07, 3 October 2011 (UTC)[reply]

Right, the cables have arrived, I have hooked it all up, and my Xbox is now displaying in glorious HD! Thank you everyone who answered this question, I know that having to do tech support must be pretty annoying. Could someone please mark this as resolved? 92.7.31.251 (talk) 18:51, 4 October 2011 (UTC)[reply]


October 3

How do I make Windows MediaPlayer12 the default player again

Resolved

How do I reclaim all the file types, that MSMediaPlayer can handle, to play in MediaPlayer as default again?
(I use Windows Media Player 12 under Windows7Home).
--89.9.63.203 (talk) 06:05, 3 October 2011 (UTC)[reply]

Start -> Control Panel -> Programs / Default Programs -> Set your default programs -> Windows Media Player -> Set this program as default AvrillirvA (talk) 21:50, 3 October 2011 (UTC)[reply]
Perfect! Thank you! :-)
-- (OP) 46.212.181.97 (talk) 13:43, 4 October 2011 (UTC)[reply]

I have a table with some fields, and one of them is a combo box. I have created a form for this. I added another field with a combo box. How can I make it so that, when I select a value in one combo box (for example, a state), the other combo only lets me choose cities from that state? -- first question

Related to the first: I also did some testing, like creating two tables (one for states, the other for cities), and I have linked them with ID - state ID (one to many). In the state table, it shows me, with a plus (+), the cities that I want. But I want to see them in a form, which is not related to the two tables. So I use a combo box form control in the form to select the field for the state, and another combo for the city. I see the states with the first combo box, but when I choose from the other (the city combo) it shows me all cities... PLEASE HELP...

Thank you in advance. 79.106.5.224 (talk) 06:57, 3 October 2011 (UTC)[reply]

Here's a way to do it with only one extra table:
1. Create a table called tblCities with text fields name and state (and add a primary key column if you want one). Populate it with some data (e.g. San Francisco/CA, Boston/MA, whatever).
2. In your form, create your two combo boxes. I will refer to them a cboState and cboCity. For cboState, set its Rowsource to SELECT tblCities.state FROM tblCities GROUP BY tblCities.state ORDER BY tblCities.state;. This will make it list all of the states in alphabetical order, but only list one of each state.
3. For cboCity, set the Rowsource to SELECT tblCities.name FROM tblCities WHERE ((tblCities.state)=[cboState]) ORDER BY tblCities.name;. This just means, select all the names (in alphabetical order) from the cities table where the state value is the same as whatever is selected in cboState.
4. Now, one last thing. By default cboCity will not update when you change the state. So we have to tell it to do this manually. Click on cboState, go into the Properties window, and then click on Events, then click in the space for on change and click the button with the three dots that appears, then choose "code builder." This will open the VBA editor.
5. Your screen should have your cursor just after a place that says Private Sub cboState_Change(). Add the following one line of code below this line: cboCity.Requery. Then save and close the VBA editor. This just means, every time cboState is changed, it should cause cboCity to refresh its options.
Let me know if that works for you. Now if you wanted to link back to the cities table for your data, it is slightly more tricky — you need to add a primary key field to that table, and have that primary key be linked to the cboCity combo box (you don't need to link the state combo box to anything because it is implied by whatever city it is, in this scheme). If you need the above modified for that purpose, just let me know. --Mr.98 (talk) 12:59, 3 October 2011 (UTC)[reply]
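The two Rowsource queries above can be tried outside Access; here is a sketch against the same hypothetical tblCities layout, using Python's sqlite3 (Access SQL syntax differs slightly, but the GROUP BY / WHERE logic is identical):

```python
import sqlite3

# In-memory stand-in for the Access table tblCities(name, state).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tblCities (name TEXT, state TEXT)")
db.executemany("INSERT INTO tblCities VALUES (?, ?)",
               [("San Francisco", "CA"), ("Boston", "MA"), ("Oakland", "CA")])

# cboState's Rowsource: one row per state, alphabetical
# (GROUP BY collapses duplicate states into a single row each).
states = [row[0] for row in db.execute(
    "SELECT state FROM tblCities GROUP BY state ORDER BY state")]
print(states)  # ['CA', 'MA']

# cboCity's Rowsource: only cities in the chosen state; the parameter
# plays the role of [cboState], and the query is re-run on each change
# (which is what cboCity.Requery does in the form).
cities = [row[0] for row in db.execute(
    "SELECT name FROM tblCities WHERE state = ? ORDER BY name", ("CA",))]
print(cities)  # ['Oakland', 'San Francisco']
```

The Requery call in step 5 is just the form-level equivalent of re-running the second query with the newly selected state.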

In-browser PDF display

A lot of my workflow relies on various PHP/Javascript programs I have made that display alongside PDFs in a browser window. I had made these programs to be compatible with Adobe Reader.

I run a Mac with OS X 10.6.8, using Safari 5.1 as my primary browser.

So I was somewhat surprised recently when Safari stopped using Adobe's plugin to display PDFs. At first I thought I had done something wrong, but now I see that this is a major compatibility problem with Adobe Reader and Safari 5.1.

Instead of opening a PDF in the Adobe plugin, it now sometimes displays a PDF with minimal editing features (it doesn't even display page numbers, which I need for my work), and the default Safari PDF reader does not recognize any hashtags like #page=, which used to make Adobe Reader jump to whatever page I told it to.

So I'm looking for a PDF reader that can do four things, in descending importance:

1. Display in browser in OS X 10.6, either in Safari 5.1, or Firefox, or Chrome.
2. When viewing the PDF, it needs to have some easy, straightforward sort of way to see the page numbers, adjust the zoom, things like that. Adobe Reader's default tool bar at the top worked fine for this.
3. Ideally, extra bonus, it would be great if there was some way to programmatically (e.g. with a hashtag or Javascript) be able to tell the reader to jump to a given page (e.g. #page=5). I don't care how it does this — I'm happy to make small adjustments in my code to whatever the method is — but it needs some way of doing this. (This is because I deal with big databases that are indexed to various pages within these giant PDFs, and I need to be able to quickly see page 135 or whatever without too much hassle or tabbing back and forth.)
4. Can handle very large PDFs relatively fast — most of the PDFs I am using are between 20 and 150 MB in size, and some PDF readers (like the Preview application for the Mac) take forever to try and cache all thumbnails or other things, and are thus super slow. (The PDFs are composed of lots of grayscale photographs, primarily.)

Any suggestions? --Mr.98 (talk) 12:14, 3 October 2011 (UTC)[reply]

You could load it via SwishPaper or FlashPaper, though then you're relying on Adobe Flash. The only foolproof thing I can think of is fully converting away from any binary format, to ordinary HTML. I feel your pain, man. ¦ Reisio (talk) 18:12, 3 October 2011 (UTC)[reply]
HTML would really not work. I don't mind relying on Adobe if it works, but I don't think SwishPaper or FlashPaper are fast enough to serve as an on-the-fly PDF reader, are they? --Mr.98 (talk) 20:01, 3 October 2011 (UTC)[reply]
You might want to take a look at pdf.js. It's very primitive in a lot of ways, but (a) it does do the page-number-as-anchor thing that you want, and (b) being written in JavaScript, you don't have to muck around with browser plug-ins. The demo seems to be very slow, but it's under active development (the main repo seems to be getting commits every day, even on weekends). Paul (Stansifer) 19:29, 3 October 2011 (UTC)[reply]
That's an interesting idea. I'm a little skeptical it will be able to handle my files without blowing up my browser, but it might be worth a shot... --Mr.98 (talk) 21:48, 3 October 2011 (UTC)[reply]
Although the default Chrome PDF reader does recognize #page tags, it inexplicably doesn't show page numbers. I swear I read somewhere that it was based on FoxIt, the only Windows PDF reader that doesn't suck monkey nuts, but unfortunately the full version of FoxIt doesn't seem to be available for the Mac OS. Have you tried skim? Maybe worth a go. Other than that.. why not downgrade? It's not like every new revision of the Mac OS gets worse as in Windows, but there have been inadvisable upgrades before (7.6 to 8.0, for example). Nevard (talk) 23:45, 4 October 2011 (UTC)[reply]

Odd Problem with Cell in Excel 2007

I copypasted the contents of a comment in the margin of an .rtf file from Word 2007 directly into a cell in Excel 2007. Now I want to remove the comment, as it has since become irrelevant. However, I can't. I can't even select the cell anymore. Is there anything I should do here? KägeTorä - (影虎) (TALK) 14:10, 3 October 2011 (UTC)[reply]

Have you tried deleting the entire row or entire column in which the cell lies? One thing I have learned to do in Excel is that when pasting and weird stuff happens, I first "Undo" ... then I select the destination cell, click up in the text area as though I wanted to type something into the cell, and then choose Paste. This tends to paste just text and does not attempt to paste text formatting. Comet Tuttle (talk) 16:06, 3 October 2011 (UTC)[reply]
I have tried to delete the row, but it won't let me. Clicking on the row just highlights the first cell, unlike normally where it highlights the entire line. KägeTorä - (影虎) (TALK) 17:16, 3 October 2011 (UTC)[reply]
1: Try Shift+Space to select the entire row. 2: If you can't select the cell, maybe the protection settings got turned on? --Bavi H (talk) 23:58, 3 October 2011 (UTC)[reply]
You could also try including the rows above and below in your selection to be deleted (having copied necessary data elsewhere first of course). Dbfirs 08:49, 4 October 2011 (UTC)[reply]

I can't select the cell, or delete the row/column (it just deletes the first cell in each row/column). I suspect this may be due to some pre-formatting that my agent has done. But they can't get rid of the comment, and nor can I. KägeTorä - (影虎) (TALK) 11:51, 4 October 2011 (UTC)[reply]

As a last resort, if this is possible for your sheet, you could try exporting to CSV (comma-separated values), which you could edit in a text editor (such as Notepad) if necessary, then re-import to a new sheet. Dbfirs 15:51, 4 October 2011 (UTC)[reply]

RSS for Reference desk?

How can I subscribe to Wikipedia:Reference_desk questions? I love to read it.

Yes, I can add it to my watchlist, but it's not comfortable enough. - Ewigekrieg (talk) 15:51, 3 October 2011 (UTC)[reply]

There is an ATOM feed for the changes to any page - go to the history of that page and there's a link to the ATOM syndication on the bar on the left - for this page it's this. I think if you substitute feed=rss for feed=atom in that URL that should give you RSS (but most syndication clients will take ATOM too now). 2.122.75.122 (talk) 16:25, 3 October 2011 (UTC)[reply]

Windows script

I need to rename a number (ca. 1000) of files in Windows (XP). Is there a way to automate this? All files are of the form "x_y.jpeg" with "x" and "y" numbers. I want them to be renamed to "y_x.jpeg". So for instance, "1_2.jpeg" should be renamed to "2_1.jpeg", or "12_34.jpeg" should be renamed to "34_12.jpeg". Can I do this automatically in Windows only or do I need to install an additional program? If so, how do I go about it? bamse (talk) 15:55, 3 October 2011 (UTC)[reply]

Here is how to do it in PowerShell:
   Get-ChildItem -Path "C:\Users\Santa\Pictures\" -Filter "*.jpeg" | % {
       $file = $_
       $parts = $file.Name.Split("_")
       $new = $parts[1] + "_" + $parts[0] + $file.Extension
       $file.FullName
       Rename-Item -Path $file.FullName -NewName $new
   }

TheGrimme (talk) 17:49, 3 October 2011 (UTC)[reply]

Hugging my rename 's/(.*?)_(.*?).jpeg/\2_\1.jpeg/g' *.jpeg right about now. ¦ Reisio (talk) 18:22, 3 October 2011 (UTC)[reply]
Thanks, TheGrimme. The script returns "y.jpeg_x.jpeg" instead of "y_x.jpeg", which is irrelevant for my purpose, so no need to fix it. bamse (talk) 22:02, 3 October 2011 (UTC)[reply]
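For later readers who do want the exact "y_x.jpeg" form: the stray ".jpeg" appears because the script splits $file.Name, which still contains the extension. A Python sketch that splits the extension-free stem instead (the demo folder is a throwaway placeholder; point it at the real picture folder, and note it assumes no source/target name pair collides):

```python
import os
import tempfile

def swapped_name(filename):
    """Turn 'x_y.jpeg' into 'y_x.jpeg'. Splitting the stem rather than
    the full name avoids the duplicated extension seen above."""
    stem, ext = os.path.splitext(filename)   # e.g. ('1_2', '.jpeg')
    x, y = stem.split("_", 1)
    return f"{y}_{x}{ext}"

# Demo on throwaway files; replace `folder` with the real directory.
folder = tempfile.mkdtemp()
for name in ("1_2.jpeg", "12_34.jpeg"):
    open(os.path.join(folder, name), "w").close()

for name in os.listdir(folder):
    if name.endswith(".jpeg"):
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, swapped_name(name)))
```

The equivalent PowerShell fix would be to build the new name from $file.BaseName rather than splitting $file.Name.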

XP TO ME

Is there a program to makes Windows Xp recolonize Windows ME? — Preceding unsigned comment added by 98.71.63.46 (talk) 16:13, 3 October 2011 (UTC)[reply]

I sure hope there isn't — what do you even want it for? ¦ Reisio (talk) 18:23, 3 October 2011 (UTC)[reply]
Do you mean install over? If so, see the following page: [6].--Best Dog Ever (talk) 19:17, 3 October 2011 (UTC)[reply]
No I mean like how Vista can recolonize XP's language. — Preceding unsigned comment added by 98.71.63.46 (talk) 01:03, 4 October 2011 (UTC)[reply]
Query, what do you mean by "recolonize"? Do you mean to run a program written for one operating system under another? i.e. "recognize", not "recolonize"? 220.101.24.249 (talk) 01:54, 4 October 2011 (UTC)[reply]
Operating systems cannot colonize or recolonize. You are obviously using a word that you don't know the definition of or you are trying to spell a word you cannot spell and typing an entirely different word. Unless you can define what you intend "recolonize" to mean, no answer can be given. If you insist on using "recolonize", it is apparent that this question is intended to be nonsense and will not be answered. -- kainaw 13:44, 4 October 2011 (UTC)[reply]
If you're talking about programming languages, you should know that operating systems do not have associated programming languages. (Individual compilers and libraries must support specific programming languages, but XP and ME are similar enough that pretty much anything that supports one will support the other.) Paul (Stansifer) 15:45, 4 October 2011 (UTC)[reply]

First of all shut up Kainaw, second of all yeah Paul I mean that.

October 4

Output smartphone to monitor

Most smartphone owners want the ability to output their device display to a monitor as well as maintain their input capabilities, essentially allowing the user to "dock" with a monitor. Which smartphones allow you to do this in the easiest manner, and what's holding up wireless smartphone to monitor docking? Viriditas (talk) 03:24, 4 October 2011 (UTC)[reply]

There are a small number of Android phones that have an HDMI output. Many modern monitors and TVs will take an HDMI input. This fact is typically mentioned in Comparison_of_Android_devices, but be careful: some of them will only output HDMI in certain situations, and some of them will blank the output if you're watching a copy-protected movie. So don't base buying decisions on that list alone.
I'm not 100% sure that "most smartphone owners" want this functionality. In my experience most people couldn't care less about it. It's mostly just us developers who find it handy. APL (talk) 09:56, 4 October 2011 (UTC)[reply]
I'm fully aware of the Android functionality, but the HDMI is useless. Nobody wants that. People want wireless smartphone to monitor docking. The cloud computing model implies that devices will be used to view, not store data, which is fine when we're mobile, but when we get into a car, we want our phone to connect to the speakerphone, and when we get home, it would be convenient to dock the phone to a large monitor if needed. If you're at all familiar with human computer interaction, then you know that people interact with technology in the same way that they have relationships with their jackets, their briefcase, purses, and other possessions they lug around. People don't want multiple devices, they want one universal device that can be integrated into their daily life and interact wirelessly with other devices and technology. With the cloud computing model, this means smartphones become primarily viewing devices, with the data living in the clouds. This implies that the ultimate working and entertainment environment would be able to connect with the device to enlarge the viewing experience. Wireless smartphone to monitor docking is exactly what users want because it gives them the ultimate mobile experience. In the corporate world, this means you can go from cubicle to office to meeting room to the car and just about anywhere without having to use anything but your smartphone to facilitate data viewing. Viriditas (talk) 02:12, 5 October 2011 (UTC)[reply]
Ah! You say "This implies that the ultimate working and entertainment environment would be able to connect with the device to enlarge the viewing experience". This may be the source of the misunderstanding. It doesn't imply any such thing. What it implies is that the ultimate working and entertainment environment would also connect to the cloud. There's no need for a complicated intermediary step. All that would do is introduce complications and potential incompatibilities. (And headaches for the non-nerds in the crowd!)
The goal in cloud computing is for every electronic device you own to have instant, up-to-date access to all your stuff, your contacts, your photographs, your documents, etc. All without any direct connection between any of the devices individually.
So long as the cloud remains online, this greatly simplifies life for users. Especially non-nerd users. Instead of having to remember and manage which device connects to which, all devices will magically have what you need at all times. APL (talk) 20:17, 5 October 2011 (UTC)[reply]
Thanks, but there's no misunderstanding on my end. I know very well how cloud computing works. What I don't know is why smartphone users can't wirelessly connect their phones to any monitor as a viewing device. The working environment I am talking about is not connected to the cloud at all, it is just for viewing. Surely you've seen Minority Report or the Iron Man films, which show how you can view data with multiple viewing devices? In this case, I'm talking about a basic viewing environment, such as a monitor or television. Ideally, there would be a touchscreen involved or some kind of gestural input. The point is, your smartphone would allow you to connect to it wirelessly. This has nothing to do with the cloud. Viriditas (talk) 22:27, 5 October 2011 (UTC)[reply]
I admit that would be neat. And in fact I've worked in environments where I've got half a dozen computers around me all working together on the same thing, and it's super fun. However, what most people would do with such a system could be achieved just as easily with the cloud based systems.
As a computer programmer I've often been disappointed that everyone now owns these remarkably multi-purpose machines, but for the most part they all do the same dozen things on them. For the vast majority of people, computers are for Documents, Email, Web, mp3, and games. Maybe with the occasional music or video editing. The kind of sci-fi set you're describing wouldn't do anything useful for any of that that couldn't be achieved better and easier with a cloud solution.
Anyway, right now I'm viewing my google calendar in three different ways on three different devices, any change I make to one will be instantly reflected on the others. Besides the fact that none of them are hologram projectors, I'm not sure how this is different than Tony Stark (AKA Iron Man) viewing a document on three different computers at once. You don't know that his computers weren't connected to an in-house cloud server. In fact, that would probably be a very sensible way to run a multipurpose inventor's workshop like he seemed to have. APL (talk) 00:16, 6 October 2011 (UTC)[reply]
Unfortunately, the desktop metaphor needs to be retired. Running an extended desktop is interesting, as it allows you to widen your perspective, but its time is over. I completely disagree that computers are for documents, email, web, mp3, and games as you say, and that's the kind of stale thinking that has held progress back on human-computer interaction for so long. What I'm describing is not science fiction in terms of fundamentals; I can connect a laptop to a wireless monitor right now. So, why can't I connect a smartphone? You're still talking in terms of devices, when I'm talking about viewing and manipulating information in any environment, regardless of location. For example, let's say you're walking out of the office and your wife calls you on your smartphone. "Honey, meet me at the new Chinese restaurant on 3rd and Main," she says as you're getting into your car. Well, because she called you from that location and her phone was location enabled, that call has a location tag embedded in the log, telling you where to go, flagged as a destination. But you don't even look at it. As you get into your car, your in-dash speakerphone and GPS recognize your phone automatically, pull up the flagged location from your call log, and announce that you'll be going to the Chinese restaurant as soon as you put your seatbelt on. You don't know where it is, but the GPS knows, and that's all that matters. This is the fluid, pervasive environment I'm talking about. It doesn't matter what device you are using or what you are using to view the information with, because the smartphone talks easily with every appliance it comes into contact with. It's all about allowing this rich environment to flourish, not holding it back. Allowing smartphones to wirelessly connect to any viewing device or appliance is a step in this direction. So anyway, you get into your car and it takes you to the restaurant. 
You see your wife sitting at a table near the front next to a clear window screen above the table. As you sit down, the screen recognizes your phone and asks you, by voice, if you would like to connect. You say "yes", and your wife asks how the new architectural design for the Smith account is going. You respond with, "let me show you", and you say "open house view, show Smith account", and the screen above the table comes into action. This is what I'm talking about. It isn't about the hardware, and it isn't about the device. It is and always has been about the data. And even if the data is coming from the cloud to your phone, you will still want to view it externally for interaction or presentation. Viriditas (talk) 05:12, 6 October 2011 (UTC)[reply]
No that wouldn't work at all. I don't have a wife. APL (talk) 10:15, 6 October 2011 (UTC)[reply]
Inability to empathize with an archetypal example used for illustration, noted. Viriditas (talk)
Lots of Android phones have HDMI out these days, but hardly any have 1080 lines as opposed to 720. I'm going to hold out for 1080. 71.141.89.0 (talk) 11:11, 4 October 2011 (UTC)[reply]
Agree with APL's second paragraph. Even with tablets it doesn't seem that important to most people. Nil Einne (talk) 12:51, 4 October 2011 (UTC)[reply]
I'm afraid you're wrong. Try to pay attention to how people use technology. Viriditas (talk)
I do, which is how I know you're wrong; in fact you seem to have completely misunderstood what cloud computing is about. (In fact what you're suggesting seems to be the pre-cloud computing model where the phone was supposed to be the central computer because it had all the content.) In the cloud computing model the phone is just one way to access the content and apps; that's in fact one of the key points. No one wants to need their phone to view stuff on the monitor, particularly not when their phone battery life means they get perhaps 2 hours of viewing if they're lucky or they need to connect their mobile to a power source. (Even worse of course if you're using some kind of wireless 'docking', which implies the phone not only needs to pointlessly retrieve the content from the cloud when the monitor could do that itself, but probably also has to re-encode it to serve to the monitor. I'm presuming you do appreciate uncompressed 1080p50 needs about 2.5 Gbit/s bandwidth, so serving it uncompressed wirelessly, as HDMI and every other physical interconnection method does, is unrealistic; you'd need to compress it in real time and worry about the sync, latency and power issues that entails.) They'd much rather their monitor had direct access to the content and apps via the cloud, with the phone completely uninvolved or at most used to control the monitor if that is needed, probably with a well designed interface intended for the purpose rather than simply a duplication of what's on the monitor. In the corporate world, this means you go to another office or a meeting room and your content is already there whether or not you have your phone, it's on, it has battery life or whatever; that's one of the key points of the cloud computing model. (Good way to impress your boss, NOT: Sorry but we need to end this meeting, my phone's battery is running low and even though my content is all on this fancy cloud you're paying for, I still can't show it to you without my phone. 
Even better if your phone automatically displays a text message and the whole room sees the private text message from your partner.) You may use your phone to control the content, but maybe not; it depends entirely on the meeting room and what you're doing. Again, even if you do use your phone to control the content, often you will prefer a well designed interface rather than one simply duplicating what you're presenting; for example it means you can fix problems without having to show the whole room. Of course, as always, if you have any actual evidence for your claim, you're welcome to present it. But it seems it may help if you follow your own advice; the best evidence of course is to ask any non-geek with a smart phone, and most geeks as well, and they'd think your suggestion is strange or frankly dumb. (Given how much success they've had, looking at Apple is likely a decent bet, and their cloud computing model is definitely not one where the phone is the centre of everything but rather one like I suggested, where the phone is just one of the devices you can use to access your content and apps, with the content and apps ideally automatically and always being there for any device.) Perhaps thinking about what you're suggesting would also help. Games and apps are one of the hot features for phones, but trying to control most of these using your touch screen phone while viewing a monitor doesn't work very well. Browsing the internet using your phone works slightly better but will still often be limiting; many would much rather have a keyboard and mouse and ditch the phone completely. Ditto for composing documents etc. Really it's primarily listening to and viewing media (audio, photos and videos, personal and public) where people may want their phones involved when they are using a monitor, although again they'll often prefer a suitable interface intended for control rather than a simple duplication of what's on the monitor. 
Nil Einne (talk) 10:55, 5 October 2011 (UTC)[reply]
Hilarious! You didn't understand a word of what I said above. Reading comprehension is important. I said that people want to use their phones to connect wirelessly to monitors. This does not mean using the phone interface at all. It means using the phone as a universal viewing device to facilitate wireless viewing on a larger monitor and accessing your data and apps in the clouds. In the scenario I presented, you would presumably have the phone near you or on your body, but you would not be physically using it at all. This means monitors, of whatever kind, become the interface, and the phone isn't used at all. I said this quite clearly above. And yes, this is what non-geeks want, as I said above, based on human-computer interaction habits. The current environment for information device design benefits the manufacturer, not the user. If you're working on a project on your mobile, you should be able to sit down at your desk and continue that project using the same workspace without any change. And, if you are away from your desk, you should be able to walk up to any monitor and wirelessly connect to that same workspace using your phone as the intermediary viewing device, while it is in your back pocket. That doesn't mean you would be viewing your phone's screen. So, back to my original question: where are the wireless viewing devices that allow you to connect to your phone? Viriditas (talk) 19:40, 5 October 2011 (UTC)[reply]
Viriditas, where the heck do you live? Mars? This question and others illustrate that you have a very strange understanding of consumer desires and motivations. I'm pretty sure, as I sit here right now drinking an energy drink from a clear frosty mug just like I always do, that I've never met the hypothetical "average consumer" you keep describing. Please take a moment to consider the possibility that you are describing yourself, and not the mass of consumers.
To answer this thread, I'm afraid that Nil Einne is mostly right. While we computer nerds tend to prefer powerful self-contained machines with a zillion attachments, the general public is quickly moving towards thin clients and cloud computing. The functionality you desire is being implemented through a completely different mechanism than what you're imagining. I won't say that "cloud computing" is entirely "finished", but when it is, everything you can do on your phone will be available on everything else you own, from your laptop to your TV to your tablet. Not because it's been implemented in all those different places, but because you're not actually doing it on those devices; you're doing it on the cloud and the devices are just your 'window' to the cloud.
I'm sure you don't believe me, because you've decided that not only are you an average consumer, but that the only way of satisfying your desires is the exact technical solution that you've imagined.
Finally, please consider the possibility that someone has actually read and understood your post, and still disagrees with your fundamental premise. APL (talk) 20:06, 5 October 2011 (UTC)[reply]
I'm afraid you are only reading what you want, and nobody has answered my question regarding why it is that a smartphone can't connect wirelessly to a monitor. I'm not talking about myself at all but how people use computers and how we are in the midst of a transition from a desktop to a pervasive paradigm, with smartphones representing one step in that change. Nil Einne's ridiculous misrepresentation of my comments shows that he actually agrees with what I've said. I'm not talking about power users who require a desktop for processing power, so what computer nerds prefer has no bearing on this discussion. What I'm talking about is not just using your smartphone as a window to the cloud (as I've already previously described) but being able to access any monitor, television, or view screen with that smartphone, in effect using the smartphone as a viewing device to connect me to the cloud. It's an easy question, but why it is nobody can address it is fascinating. Contrary to what you and others continue to claim, the marketplace recognizes that consumers want to do this with their computers, and they have offered several products to do just that, including Intel Wireless Display and Warpia Easy Dock as only two examples. There is essentially no difference in wanting to do this with your smartphone. Easy question yet no easy answer. Awaiting your next misrepresentation... Viriditas (talk) 22:22, 5 October 2011 (UTC)[reply]
The basic answer is that device-to-device communication is a pain, requires that every device in the chain honor the same standards, and is unnecessary in a world of thin-clients. Not to be insulting, but that kind of connectivity would probably be considered "old fashioned" in the same sense that flying cars and jet packs are old fashioned visions of the future.
Direct device-to-device communication and control produces unnecessary dependencies and configuration hassles. The precise way you're describing it is also a ridiculous bandwidth waste, though admittedly that could be addressed, with only minor changes to your vision.
(Laptop docking stations are a special case. They're typically laptop specific, and should really be viewed as two halves of a single device. Anyway, it's been years since I've seen anyone use one, despite the huge increase in laptop sales.)
As an example: right now, my favorite new productivity tool is "Simple note", a cloud-based note-taking app. There are web/pc/mac/android/iphone clients for it. I can start typing a note on my desktop, then pick up my tablet, press one button, and instantly pick up where I left off. I could even type alternate letters on different devices if I felt like being goofy. Meanwhile, at any moment, if I took out my phone, it would also have an up-to-the-second version of that same note. All of this is achieved without, at any time, my computer becoming a "viewing device" for my tablet or my tablet becoming a "viewing device" for my computer. (Also on my desk: a clear frosty mug of ice tea. The mug that previously held the energy drink is now washed and back in my freezer.)
I haven't tried it, but I'm sure I could use the web-based Simple Note client on my Playstation, and if I was into that sort of thing (Documents, on a tv?) I could get one of those set-top android boxes.
Admittedly, this doesn't have the slick coolness of the scene in Avatar where a document slides off a computer monitor and onto a tablet, but in general it's immensely more useful, and easier for end-users. (I don't have to worry about compatibility, configuration, or security of a computer->tablet link-up, because no such link-up exists. )
So, the answer to your question is that such a system could be created without too much trouble, but no one is going to buy a new phone and a new tv and a new computer monitor, and then fiddle with configuration and security, when that way of doing things literally gives no increase in functionality over a cloud-based approach that will largely work with their current equipment.
Of course, all of this depends on trusting the cloud itself, which is a legitimate point of contention, but as far as I can understand, that's not your point, so I won't address it. APL (talk) 00:16, 6 October 2011 (UTC)[reply]
You don't need a new TV and a new computer monitor. Please try to think outside the box for once. Apple TV is a network appliance and for the most part has some of the same or similar capability to what I am talking about, such as letting a monitor view data on multiple devices. Again, this discussion is not about cloud computing. It's about being able to view your data (on the cloud or not) and interact with your phone on any screen anywhere at any time. A wireless virtual laser keyboard, a finger mouse, and any number of accessories that you could probably carry on a keychain would make this work. As you are no doubt aware, Microsoft's Surface table already has this functionality, so it already exists. Viriditas (talk) 04:52, 6 October 2011 (UTC)[reply]
If you insist on your old fashioned master/slave view of how technology would work, I can see why that would make sense to you. But it's out-dated. Try to think outside of your box, instead of stubbornly holding onto the future we were promised in the '90s. Every example you've given so far would work better with a cloud/thin-client model. (Especially once you grant that you're going to buy a separate computer to plug your TV into! At that point, why use the damn phone at all? There's nothing in my phone that I can't access from elsewhere.)
I could start doing work on my phone, stop mid-sentence, throw my phone into a wood-chipper, and continue the work on my TV seconds later without losing a single keystroke of work.
Even your wife story above could be much more easily accomplished with a cloud-based solution. There's no need to transfer the GPS fix from the phone to the carputer, because the carputer would be connected to your Google cloud and would get the data the same instant your phone gets it. The table computer wouldn't get the architectural drawings from your phone, it'd get them from your cloud once it's identified you. Admittedly, the phone could be used as an authentication device, but that's a nicety. It'd work just as well if you took a second to type in your email/password, or if you scanned a QR code you keep on your key-fob, or whatever. You don't need to wait for the future to do this. Find yourself an internet cafe, invite your wife, order a couple of coffees, and look at some architectural drawings.
That's the wonderful thing about cloud computing. The data is completely separated from the devices it's on. The idea that you're going to physically carry your data around with you inside of a small plastic and metal box is outdated and old-fashioned. Stop obsessing about the specific piece of hardware. Just trust that wherever you go, your data will be there. Your car, your phone, your computer, the computer at the cafe, the computer at the office, your tablet, your TV, whatever. Doesn't matter. Did your wife say "Hey honey, use your small plastic box to control this larger plastic box."? No! Of course not! She just wants to see the architectural drawings.
You keep saying "This has nothing to do with the cloud", but that's like asking why you can't buy good buggy whips anymore and insisting that your question has nothing to do with automobiles.
(And yes, I'm aware of Microsoft's $10,000 coffee table. I've used them, and their competitors, and frankly haven't been impressed by any of them. It'd be a neat way to play "Monopoly", though.) APL (talk) 10:15, 6 October 2011 (UTC)[reply]
This may surprise you, but it's 2011, the year Google Wallet came out for use on your smartphone. Ever heard of near field communication? Next time I see you lugging your desktop and television through the checkstand, I'll shed a tear in your honor. Seriously, talk about not getting it. Keep truckin' with your "desktop" way of thinking, old man. Cloud computing has absolutely nothing to do with what I am talking about, as you have been repeatedly informed over and over again. Viriditas (talk) 11:13, 6 October 2011 (UTC)[reply]
Uh, Google Wallet is an example of cloud computing. The money is not literally stored on the phone, neither is the cash register being used to "view" data on the phone. Try to keep up.
Point is this: next time the two of us bring dates to an internet cafe and the two lovely ladies ask to see our 'architectural drawings', you're going to launch into a diatribe about how The Man won't give you the technology you need to connect your phone to the computers in the cafe, etc, etc, etc.
Meanwhile, I'll spend five seconds bringing up the drawings without ever touching my phone. APL (talk) 11:27, 6 October 2011 (UTC)[reply]
(Incidentally, thanks for reminding me what year it is. Reading your posts makes me think I've somehow time-warped back to about 1995. Did you just find a copy of "The Road Ahead", or something? Because I have to tell you, that book, while visionary, is badly out of date. APL (talk) 11:35, 6 October 2011 (UTC))[reply]
Er, why would I need to connect my smartphone to a computer? Seriously, you've lost the plot completely. 1996 called, they want their brick back. Viriditas (talk) 11:43, 6 October 2011 (UTC)[reply]
A rose by any other name.
You can make it in any shape you want, but it's still a computer. Even your phone is a computer. Sorry that not all computers are shaped like the ones in your sci-fi inspired imagination. But while you're bitching about what kind of computers you have available to you, I'll be successfully completing every single one of the use-cases you've described here. APL (talk) 11:47, 6 October 2011 (UTC)[reply]
You've lost the plot completely. This isn't about connecting a smartphone to a computer or using the clouds for access and storage. This is about viewing and interfacing with data from a screen/monitor/television/surface. For example, I can connect a laptop wirelessly to a monitor. I should be able to do the same with a smartphone. Why can't I? Viriditas (talk) 11:54, 6 October 2011 (UTC)[reply]
I dunno. Maybe there's some obscure phone that does this, but like we said at the beginning, very few people want to do that because it's completely unnecessary. (And a serious battery-suck.) Some phones have docking stations that can be connected to monitors via HDMI just like I said at the very beginning; however, like we've said a million times before, that kind of thing tends to be vendor-specific. Which isn't very useful in the general case. (Like at the Chinese restaurant where your wife wants to see architectural drawings.) That's why people have moved away from that master/slave metaphor (just like people are moving away from the desktop metaphor) to a new paradigm where all the displays are smart devices. (Which I won't describe again.) APL (talk) 12:10, 6 October 2011 (UTC)[reply]
It's not unnecessary at all, it is the future. Viriditas (talk) 12:11, 6 October 2011 (UTC)[reply]
You seem to be imagining a future where data is carried around in small plastic boxes.
That's come and gone. What the industry is striving for now (and don't argue with me, because I can't change it) is a world where "devices" and "viewing technology" (as you put it) are one and the same. You'll probably get your coffee table display soon enough, but it won't be a dumb computer monitor, it'll be a "device" that doesn't need your phone at all. (But works absolutely seamlessly with it, through the cloud.) APL (talk) 12:36, 6 October 2011 (UTC)[reply]
I'm afraid you've stubbornly stuck to a misinterpretation of something I've never said or implied. This has nothing to do with data, and I've already said, several times, that the smartphone in question isn't used for its data but to connect with viewing devices. The cloud delivers the data to the device, but that is completely irrelevant to this discussion. Not sure why that is so hard for you to understand. Viriditas (talk) 13:11, 6 October 2011 (UTC)[reply]
I understood what you said earlier, and it's clear you were wrong. There's no reason for the phone to be involved period. The phone is just one device to view content. The phone doesn't need to dock with the monitor. The monitor itself retrieves the content or apps from the cloud. There is no reason why the phone should be involved, that's the whole point of the cloud, keeping the phone involved is just silly and misses the point of the cloud. It's possible the phone could be involved as a user identification and geolocation device (to the monitors), but that's about all and most people are going to want it to be optional.
Also, for someone who complains about people missing their points, you seem to have missed my point. What I was saying is that a lot of the time people do not want to see their phone's screen on another monitor, not that people are still going to be viewing their phone screen despite the monitor. They may want certain apps and certain content to show on the monitor, but they would often want to be able to use their phones for other purposes, and they definitely don't want that private text message from their partner showing up while they are giving a presentation because the monitor is just showing what's on their phone. So no, the monitor doesn't just become a view port for the phone.
BTW, right now plenty of people do go between computers in offices, or even go home to work, and keep their workspaces. (For security and related reasons, they do usually need to go through a short login process.) They don't usually use these same workspaces on their phones because, again, a workspace optimised for a larger monitor doesn't work well on a phone.
And a lot of the time, the interfaces they would want on a big monitor vs a small mobile are quite different. (In fact, Apple has demonstrated this quite successfully; it's generally accepted that one of the reasons they were so successful was that they recognised that a touch-screen smartphone's and a tablet's interfaces need to be specialised, not the same as what you use for an older computer or, for that matter, a large monitor.) And I didn't say anything about needing a desktop for processing power, quite the contrary. And again, you have completely ignored the power issues which I raised earlier.
And in the same vein, I did explain (and APL hinted at) why it isn't easy for a phone to connect wirelessly to a monitor. If you still don't understand that 2.5 gigabit/s is a lot of bandwidth even for a fairly nearfield wireless communication device with likely hefty power and space requirements; or that realtime compression of the video (which is needed if you want a more reasonable-bandwidth solution) raises latency, sync and power issues of its own, that's hardly my fault. It's easy to find info on this if you're interested, e.g. [7] mentions a WirelessHD chip which uses between 1.3 and 2 watts.
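For context on why a figure like 2.5 gigabit/s comes up at all: uncompressed 1080p video at 60 frames per second and 24 bits per pixel works out to roughly 3 Gbit/s. This is only a back-of-envelope order-of-magnitude check, not a specification for any particular wireless link:

```python
# Rough bandwidth needed to carry uncompressed video to a monitor.
width, height = 1920, 1080   # 1080p resolution
bits_per_pixel = 24          # 8 bits each for R, G, B
frames_per_second = 60

bits_per_second = width * height * bits_per_pixel * frames_per_second
print(round(bits_per_second / 1e9, 2))  # ~2.99 Gbit/s
```

Compression brings that figure down dramatically, but at the cost of the latency, sync and power issues mentioned above.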
Anyway, as with APL, it appears you are once again, as you have done before on the RD, convinced you are right, and no amount of reasoning or examples from multiple people is going to convince you otherwise, so this will be my last comment.
Nil Einne (talk) 07:40, 6 October 2011 (UTC)[reply]
Forgive me, but I am not in the slightest bit convinced by your continual misrepresentation and misreading of everything I've ever written. You may have an issue with reading comprehension, in which case, thanks for your comments and good luck to you. Viriditas (talk) 08:37, 6 October 2011 (UTC)[reply]
Anyway, the simple answer to your question is that few devices interconnect in the way you're imagining because the problems those sorts of interconnects solve can also be solved by pervasive cloud computing, and that's what everyone's doing nowadays. Every single potential use-case you've described (including the movie scenes) works the same, if not better, with a cloud solution, so there is little to no demand for the type of thing you're asking for.
There just isn't a need for two separate ways of doing the same things. APL (talk) 11:09, 6 October 2011 (UTC)[reply]
Actually, smartphones are being used, not desktops and not appliances. My concern is with connecting smartphones to viewing devices. Cloud computing has nothing whatsoever to do with this topic. You appear to be obsessed with pushing the antiquated "desktop" paradigm, when it is more than likely that wireless viewing stations/screens will replace them and are replacing them as smartphones become the universal device. All we need is a way to view and manipulate the data in the clouds using our smartphones as the wireless interface. The technology is already here. Viriditas (talk) 11:15, 6 October 2011 (UTC)[reply]
I haven't mentioned "desktop" once. In fact, I'm not sure if you've noticed, but the desktop metaphor is in the process of being deprecated as people move to web-based solutions. (The same cloud based solutions I've been yammering on about.)
You keep saying this : "All we need is a way to view and manipulate the data in the clouds using our smartphones as the wireless interface. "
Why? Why involve a specific plastic box in the process?
The whole thrust here is to separate data and hardware.
I've got a tablet sitting here. It's lovely. Why would I want to use my phone to manipulate it? Why not manipulate it directly, let it access the data directly, and leave the phone out of the loop? Same with my TV, my computer, my other tablet, my PDA, my Pandora, etc. Why should my phone hold any special place of pride in this pantheon of diverse ways of viewing the data? APL (talk) 11:27, 6 October 2011 (UTC)[reply]
You evidently must live in a basement and not get out very much. When you open the door and go outside, there's this thing that's called a "smartphone" that keeps you connected with the rest of the world. Sorry, I don't have time to explain it to you. Viriditas (talk) 11:44, 6 October 2011 (UTC)[reply]
Actually, this apartment is a good foot below grade, so I guess you've got me pegged.
Seriously though, the future you want is here. All these smart devices work together to do exactly what you want. You just need to accept that the "connection" you're looking for is called "cloud computing". Then it all works. Just like you've been asking all along. Really. Free your mind, and join the 21st century. You don't need a desktop computer at all. Not even a little. Get yourself an android TV box, an android car-computer, and a nice, large tablet, and they will all work together exactly as you've described. (If it helps, you can call everything except your phone a "Display Device".) Only the technical detail of how they're connected will differ, but you don't care about technical details; you've said so yourself. APL (talk) 11:58, 6 October 2011 (UTC)[reply]
Sorry, but you're not getting it and I don't think you ever will. The future of the computer isn't the desktop or a tablet. It's a device that disappears completely but allows you to interact with your environment. When you are truly mobile, you aren't carrying a large desktop around with you, nor are you carrying a television box, a car computer or a "large" tablet, as you put it. You're carrying a small, thin, tiny smartphone that is virtually undetectable on your body, and you're interacting with the world around you as if the hardware were truly transparent and the data ubiquitous. This is why in the interim, the smartphone is the dominant device when you're mobile, more so when it's voice-driven and location-aware. And users want to be able to go from environment to environment, from office to car to restaurant as I explained in my example above, without doing anything with a device. Wireless augmented reality devices can also feed off the smartphone and display data, using wearable computing clothes, eyeglasses and sunglasses, and even shoes. At the end of the day, however, it is the viewing technology that becomes important, not the device. The device essentially disappears. Viriditas (talk) 12:08, 6 October 2011 (UTC)[reply]
Why do you need to carry the phone? (Besides actually making phone calls, of course.) APL (talk) 12:11, 6 October 2011 (UTC)[reply]
Data collection, data sharing, and communicating with viewing devices. Imagine a third grade field trip. A teacher carries a smartphone in her pocket. Her class of 5 kids each wears wireless, transparent glasses upon which are projected augmented reality labels. The teacher pulls a smartphone out, turns on a wireless hotspot, adjusts the AR teaching app and sets the display to "botany". The botanical scavenger hunt and quiz begins. Wherever the kids look, little, unobtrusive botanical names pop up. If a kid runs up to a shrub and stares at it for longer than two seconds, another menu comes up, allowing them to drill down for further info, phylogenetic tree, etc. It also records which kid has studied which plant and allows them to take a quiz in the field, the results of which are recorded, sent back to the teacher's smartphone, uploaded to the school server and assigned a grade. By the time they get home, their parents have already received their report card. Viriditas (talk) 13:09, 6 October 2011 (UTC)[reply]

Regenerative Keyboarding

Why hasn't anyone apparently conceived of regenerative keyboarding, especially when speaking of laptops? When the energy from typing on keys gets recaptured, it goes back to the battery to recharge it (or speed along the recharging if it's plugged in.)

How would a laptop keyboard be made to recapture the energy given by the fingers clacking on the keys and give it to the battery, and why haven't such keyboards been made yet? What kinds of challenges would need to be overcome to make them? --70.179.176.157 (talk) 06:57, 4 October 2011 (UTC)[reply]

The amount of energy you could retrieve from keypresses would be so small it's difficult to see how that could power anything, let alone a laptop.--Shantavira|feed me 07:20, 4 October 2011 (UTC)[reply]
Lots of people have conceived of it. Googling generating energy from keyboard returns a lot of discussion. Here is a NY Times article about Compaq patenting a method to extend battery life. But as already mentioned, you're not going to be able to power your laptop solely by keystrokes, not unless its power requirements are dramatically reduced. --Colapeninsula (talk) 10:03, 4 October 2011 (UTC)[reply]
You'd need to know a few things first:
1. How much energy could be captured from one keypress (this is probably variable depending on how difficult you want to make pressing keys to be)
2. How many key presses on average people use per hour or so (this probably varies a lot with different type of uses — browsing the internet uses very few, for example)
3. How much energy per hour the computer requires (it would be great to correlate this with the different types of uses)
The odds are that 3 dwarfs the product of 1 and 2, in the same way that adding a bike pedal to a car for recharging the battery would not get you very much actual energy. The number of watts generated by human power is generally pretty low compared to our standard level of energy consumption, as anyone who has slaved over exercise equipment that reports such things would know. You could try to decrease 3 by making the computer extremely energy efficient, but even then, it seems unlikely to me that key pressing would be more advantageous than, say, cranking a handle (per the OLPC XO-1), which is probably a more efficient way to transfer force than moving keys a tenth of an inch. --Mr.98 (talk) 14:05, 4 October 2011 (UTC)[reply]
On a stationary handbike, I can burn calories at a rate of roughly 30 or 40W, but it's tiring, and I probably couldn't keep going at that rate for more than 15 minutes. If the device could 100% efficiently capture that energy (which it can't), it could just about power my laptop. Typing probably takes a couple orders of magnitude less power, and people don't type continuously, so there just isn't enough energy to make an appreciable difference, even if efficiency weren't a problem. Paul (Stansifer) 14:27, 4 October 2011 (UTC)[reply]
From an article in Journal of Dynamic Systems, Measurement, and Control (DOI: 10.1115/1.1902823), I get these numbers as reasonable values:
  • key displacement 4mm
  • rate for a typist 90 words/minute or 450 key presses per minute
  • Force for space bar 0.8N
  • Force for normal keys 0.4N
  • Laptop power consumption between 20 and 90 Watt (from wikipedia)
Taking 0.5 N as the average force, a typing rate of 450 keystrokes per minute, and power consumption of 30 W, the results of one hour of typing would be:
  • Energy produced: 54 J
  • Energy used: 108,000 J
DS Belgium (talk) 01:19, 6 October 2011 (UTC)[reply]
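The arithmetic above can be reproduced in a few lines; the per-keystroke energy is just force times key travel, using the figures quoted from the journal article:

```python
# Reproduce the keyboard energy-harvesting estimate above.
key_travel_m = 0.004            # 4 mm key displacement
avg_force_n = 0.5               # average of 0.4 N (keys) and 0.8 N (space bar)
keystrokes_per_hour = 450 * 60  # 450 presses/minute at 90 words/minute
laptop_power_w = 30             # assumed laptop power draw

energy_per_press_j = avg_force_n * key_travel_m        # 0.002 J per keystroke
produced_j = energy_per_press_j * keystrokes_per_hour  # energy harvested in 1 hour
consumed_j = laptop_power_w * 3600                     # energy used in 1 hour

print(round(produced_j), round(consumed_j), round(consumed_j / produced_j))
# 54 108000 2000
```

So even with perfectly efficient capture, typing supplies about 1/2000 of what the laptop consumes.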
Good work on that math. I can't help but point out that another way of looking at the data, is that this would only work if you could learn to type at 180,000 WPM. APL (talk) 10:01, 6 October 2011 (UTC)[reply]

Kinetically-regenerative phones

When users place phones in their pockets, the movements from walking and etc. would involve energy. Couldn't a kinetic energy-recapturing mechanism be miniaturized enough to be placed in phones so that their batteries can last longer (or even be recharged with more juice) thanks to this? What engineering challenges would need to be overcome in order for this regenerative system to work? --70.179.176.157 (talk) 06:57, 4 October 2011 (UTC)[reply]

Yes, you are referring to a motion-powered charger. These are still in their infancy, and the amount of energy they capture is very small, but if applied over a sustained period it could be useful. However, you would have to do an awful lot of walking to recharge a battery.--Shantavira|feed me 07:31, 4 October 2011 (UTC)[reply]
For one, I think phones would have to consume a lot less energy for it to be worthwhile. Even self-winding watches will run down if you don't lead an active life, and watches take very little energy to run. APL (talk) 09:51, 4 October 2011 (UTC)[reply]
While I don't have any actual power numbers, this sort of thing is probably possible with phone technology available today- you could probably add motion charging to a proper cell phone like a Nokia 1200 to bring standby life to three weeks or a month. Switch to an e-ink screen like the Motorola FONE to save even more power, and use the latest battery technology, and maybe you'd never have to recharge if you lived an active enough life. But then you'd be looking at a price point similar to or in excess of that of a lot of flashy smartphones for something that is just an honest phone. The market for such phones among people who are actually two weeks away from a wall jack or genset just isn't that great. Nevard (talk) 23:20, 4 October 2011 (UTC)[reply]
Note that this is two weeks away from a wall jack and you don't use the phone. I find it hard to believe that this sort of thing would really compensate for the amount of energy it takes to actually make calls (which seems to be the majority drain on the phones). --Mr.98 (talk) 20:12, 5 October 2011 (UTC)[reply]

BlackBerry not updating Facebook

My BlackBerry Bold has very suddenly stopped updating Facebook via the 'Social feeds' app – it says it's done it. It updates Twitter fine. But the Facebook posts just don't appear. I've tried logging out and in again. No luck. Can anyone advise...? ╟─TreasuryTagvoice vote─╢ 10:30, 4 October 2011 (UTC)[reply]

And the relevance this has to Wikipedia is...? (FWIW, iPhone app isn't updating either) The Rambling Man (talk) 12:41, 4 October 2011 (UTC)[reply]
It's a computing-related question - seems reasonable enough to me. AndrewWTaylor (talk) 12:48, 4 October 2011 (UTC)[reply]
Indeed ... this is the Reference Desk, not the Help Desk - questions posted here don't have to about Wikipedia. Gandalf61 (talk) 12:49, 4 October 2011 (UTC)[reply]
Working on iPhone again. What a useful thread! The Rambling Man (talk) 13:54, 4 October 2011 (UTC)[reply]
meta discussion moved to WT:RD
The following discussion has been closed. Please do not modify it.
Question: is this just a chat-board that has no relevance to Wikipedia at all, other than dubious links to, say, Facebook (although not in this case)? Presumably the aim here would be to link responses to Wikipedia articles? So questions like "my app doesn't work" surely don't belong here? The Rambling Man (talk) 16:48, 4 October 2011 (UTC)[reply]
There is a link to the Ref Desk Guidelines in the header. Beyond that, any discussion should take place on the Talk page. --LarryMac | Talk 17:05, 4 October 2011 (UTC)[reply]
Gotcha. So whenever an app fails to work on a Blackberry or an iPhone we should entertain a thread here. That doesn't happen too often, after all.... The Rambling Man (talk) 17:08, 4 October 2011 (UTC)[reply]
What's the matter with you? This is the computing RefDesk. Furthermore, you've been told something that you're experienced enough to know anyway: that if you wish to discuss the functions and concept of the RefDesk, WT:RD is ready and waiting. This isn't the place to do it. ╟─TreasuryTagActing Returning Officer─╢ 17:15, 4 October 2011 (UTC)[reply]

Nothing, just surprised we waste resources here by answering "why doesn't my app work?" questions which clearly have no interest in improving Wikipedia. A total waste of time. The Rambling Man (talk) 17:17, 4 October 2011 (UTC)[reply]

You want me to say it again? I'll say it again. You've been told something that you're experienced enough to know anyway: that if you wish to discuss the functions and concept of the RefDesk, WT:RD is ready and waiting. This isn't the place to do it. ╟─TreasuryTagCANUKUS─╢ 17:18, 4 October 2011 (UTC)[reply]
Done already. Surprised someone with your "experience" needs to ask why a particular "app" stops working for a few hours. On the other hand, not that surprised. The Rambling Man (talk) 17:20, 4 October 2011 (UTC)[reply]
I don't have to justify myself to you, particularly over such a trivial non-issue. If you aren't interested in providing helpful responses then the RefDesk probably isn't the best environment for you. ╟─TreasuryTagCounsellor of State─╢ 17:24, 4 October 2011 (UTC)[reply]
Ironic. The Rambling Man (talk) 17:30, 4 October 2011 (UTC)[reply]
I wasn't "hiding my behaviour" (per your helpful edit summary), I was doing as you asked, and moving to the talk page. The Rambling Man (talk) 17:37, 4 October 2011 (UTC)[reply]

Signal boost for samsung intensity

I want to know if there's any way I can boost my cell signal for my Samsung Intensity 2. I do NOT want to buy anything to do this. I have seen people boosting cell signals with wire stuck in the antenna jack, but my phone does not have one. I want to know if there is some way to boost my signal strength without an antenna jack. (I am somewhat okay with opening it up to do this. I unscrewed it and opened it up today but wasn't able to tell which part was the antenna.) — Preceding unsigned comment added by 99.89.176.228 (talk) 21:40, 4 October 2011 (UTC)[reply]

Since nobody else has answered, I'll suggest that as far as I know it might be possible to improve reception by lengthening the antenna (some people suggest simply inserting a USB cable in the USB socket), but not the transmission strength as this is controlled automatically. (Transmission might even be adversely affected by doing this.) If you Google "how to boost cell signal" you will find plenty of tips (some of which might void your warranty).--Shantavira|feed me 16:15, 5 October 2011 (UTC)[reply]
Another possibility is to make your reception/transmission more directional versus omni-directional. Try this article, Cantenna, and point it at your nearest cell tower. Can't guarantee it will work at whatever frequency your phone uses. See also Unwired Forum, section 4.3, Antenna for a few ideas about reflectors. - 220.101 talk\Contribs 17:41, 5 October 2011 (UTC)[reply]
Lots of ideas here. Cell phone antenna (the ScienceRef desk might have been a better venue for this question?) - 220.101 talk\Contribs 18:54, 5 October 2011 (UTC)[reply]

WiFi USB 5dbi

If you compare a WiFi USB adapter (5dbi) to a normal plain laptop wifi, how much is the difference in distance? Quest09 (talk) 23:07, 4 October 2011 (UTC)[reply]

October 5

What could cause a download to 'shrink' in size?

Recently I downloaded a "SystemRescueCD" (systemrescuecd-x86-2.3.1.iso) from "www.sysresccd.org/". It took several hours and about 352 MB of bandwidth, measured using a utility called NetWorx. However, when it finished saving to my HDD as a Disc Image File it was only "14.1 MB (14,876,672 bytes)". Has my download failed (seems likely), or is there another explanation for this? 220.101 talk\Contribs(aka user:220.101.28.25) 02:09, 5 October 2011 (UTC)[reply]

md5sum systemrescuecd-x86-2.3.1.iso — if it doesn't spit out 8813aa38506f6e6be1c02e871eb898ca, then the image is no good. ¦ Reisio (talk) 11:34, 5 October 2011 (UTC)[reply]

Thanks for the reply, Reisio. Does that check have to be run via a command line? (NB: I am using WinXP, if that makes any difference.) I added a title to the question too! - 220.101 talk\Contribs 17:59, 5 October 2011 (UTC)[reply]

Yes. You can get graphical md5/sha1/etc. sum checking apps, but that's out of my jurisdiction. ¦ Reisio (talk) 19:07, 5 October 2011 (UTC)[reply]
At 14.1 MB, it can't be right. Alternative link: http://en.sourceforge.jp/.. systemrescuecd-x86-2.3.1.iso/ (it's also available as a torrent, btw) DS Belgium (talk) 23:08, 5 October 2011 (UTC)
Something went wrong. The download page says the file should be 352 megabytes. Or more specifically, 352 MiB (mebibytes, binary megabytes, 2^20 bytes). If I use wget to see the response headers for the download link, it says the file is exactly 369,342,464 bytes. --Bavi H (talk) 01:50, 6 October 2011 (UTC)[reply]
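As a quick sanity check that the two figures agree (the 352 MiB on the download page is just the rounded form of the exact byte count from the response headers):

```python
size_bytes = 369342464         # Content-Length reported by wget above
size_mib = size_bytes / 2**20  # convert bytes to mebibytes
print(round(size_mib, 1))      # 352.2, i.e. "352 MiB" once rounded
```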
I agree, Bavi H and DS Belgium, something is wrong! But the 'Date modified' (Monday, 3 October 2011, 1:02:12 PM) was when I started the download, and 'Date created' when it finished at 4:16:34 PM. I was downloading something, at up to 110 MB per hour. Maybe I saved it somewhere else? (Nope, did a search of my HDD, no luck!) - 220.101 talk\Contribs 12:11, 6 October 2011 (UTC)

If an A.I.'s objective is to improve itself, couldn't that speed up and spiral out of control?

Say a new supercomputer, not yet built, will have the objective of finding quicker, easier and lower-cost ways to clean up the planet and bring harmony/eudaimonia to the whole human race. While on the quest to find these solutions, it's also given a secondary objective: to improve itself (its algorithms, processes, hardware composition, etc.) so that it can teach/equip itself to heal humanity even faster.

Once it starts on its secondary objective, wouldn't it then gain the ability to work faster, even on its own secondary objective? Therefore, when it improves itself even faster, it not only accelerates its own self-improvement, but it accelerates the acceleration of its own self-improvement. This would be something of a recursive feedback loop.

What happens if said loop gets out of control? What is that phenomenon called, and what else will come out of this when this happens? How do we keep control of this phenomenon? --70.179.161.22 (talk) 11:55, 5 October 2011 (UTC)[reply]

We make sure we build in the Three Laws of Robotics. Mitch Ames (talk) 12:40, 5 October 2011 (UTC)[reply]
And an off switch! ;) - 220.101 talk\Contribs 13:11, 5 October 2011 (UTC) [reply]
I think it is a legitimate concern. As far as I know nobody has ever designed a system that works that way -- it would be like having a computer that is capable of swapping boards inside its box and editing its own boot ROM: the danger of instability would be pretty high. I'm not aware of a name for that sort of instability, though -- I expect Douglas Hofstadter would call it a kind of strange loop. Looie496 (talk) 13:46, 5 October 2011 (UTC)
See technological singularity. --Goodbye Galaxy (talk) 14:36, 5 October 2011 (UTC)[reply]
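One toy way to see why "improvement that speeds up further improvement" gets called a singularity: if a capability I grows at a rate proportional to I squared, the solution doesn't just grow exponentially, it diverges at a finite time. This is purely illustrative mathematics, not a model of any real AI system:

```python
# Toy model: dI/dt = k * I**2 has solution I(t) = I0 / (1 - k*I0*t),
# which blows up (a "finite-time singularity") at t = 1 / (k * I0).
k, I0 = 0.1, 1.0              # arbitrary growth constant and starting capability
blowup_time = 1 / (k * I0)    # t = 10.0 in these units

def capability(t):
    assert t < blowup_time, "model has already diverged"
    return I0 / (1 - k * I0 * t)

for t in (0, 5, 9, 9.9):
    print(t, capability(t))   # capability explodes as t approaches 10
```

Plain exponential growth (rate proportional to I) never diverges in finite time; it's the feedback of improvement on the improver that produces the blow-up in this model.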
It's hard to imagine a singularity happening. After decades of research, we have not come to a good understanding of how intelligence works, and so we don't have anything like a roadmap for producing a generally intelligent program. If, after a couple of centuries, we were able to produce a program as smart as a human, it wouldn't contribute any more to the research effort than raising a kid who grows up to be an AI researcher. We would need to produce something not as smart as a human genius, but far superior to one in order for this to happen. Paul (Stansifer) 16:04, 5 October 2011 (UTC)[reply]
I'm not sure I find this a compelling line of analysis, though I am myself suspicious of Kurzweil's singularity for another reason (viz.: most exponential processes, in the real world, hit practical problems pretty quickly, and go from being a hockey-stick to being an S-curve; it's not clear to me what the "resource" would be that exponential AI growth would run out of, but there is likely something out there that would serve as a cap, in the same way that quantum mechanics threatens to eventually put a stop to a strict definition of Moore's law). I find our mere decades of research to have been pretty fruitful so far (a decade is a long time to an individual human... but not even that long; I can still remember what I was doing a decade ago quite vividly!). And the major difference between raising an AI-researching human and writing an AI program is that once you have the program, you can duplicate it with a negligible amount of additional resources. The same cannot be said for the human! --Mr.98 (talk) 16:16, 5 October 2011 (UTC)[reply]
I think that an analogy can be made between artificial intelligence and the P vs. NP problem. We aren't very far along towards solving the problem (at least, Scott Aaronson says so, and I'll trust him), but we're making great progress on building better SAT solvers. A SAT solver can do a great job at model checking problems, but it shouldn't be taken as evidence that we're getting close to proving something about P and NP. Watson can do a great job at answering Jeopardy! questions, but it shouldn't be taken as evidence that we know anything about intelligence. This isn't to say that complexity theory and artificial intelligence/cognitive science researchers aren't accomplishing anything, just that their core problems are very large. Paul (Stansifer) 18:12, 5 October 2011 (UTC)[reply]
But P vs. NP isn't necessarily solvable in a rigorous way; in any case, it's an entirely artificial sort of problem (even if applications of it exist). Intelligence — at least what we call it — is not only solvable, but can emerge via natural processes. Nobody serious thinks SAT solvers or Watson are anything close to real artificial intelligence — except in the sense that raw memory and computational power does matter. At the moment, we're still a few orders of magnitude off from having the computing power to simulate, even in crude terms, a human brain. But our capacity for memory storage and our capacity for computing power still grows exponentially. In any case, there can't be anything too magical about intelligence if evolution can produce it. Evolution is clever, but it's not magical. --Mr.98 (talk) 00:10, 6 October 2011 (UTC)[reply]
The Science Fiction author Vernor Vinge has written several works dealing with the technological singularity, but most pertinently to the OP's query, his novel A Fire Upon the Deep explicitly portrays problems caused to advanced AIs by their exponentially self-increasing intelligence, and may therefore be of interest. {The poster formerly known as 87.81.230.195} 90.193.78.36 (talk) 17:23, 5 October 2011 (UTC)[reply]
That particular novel is probably not super-pertinent to the "runaway AI" question because of the stuff about the Unthinking Depths vs. the outer rim entities; but I highly, highly recommend the novel anyway. As to the original poster, you'll find that reference #1 in our technological singularity article points to this key 1993 paper by Vinge, which discusses the movement of humans away from center stage in the world, once runaway AI occurs. Comet Tuttle (talk) 20:31, 5 October 2011 (UTC)[reply]

For anyone who is interested, the best way to model the complexity of the proposed singularity (which is unlikely to happen the way it is described in this thread) is to take a look at astrobiology, particularly what is known about the emergence of life from inorganic matter, such as abiogenesis. You'll find a lot of answers there. Viriditas (talk) 22:33, 5 October 2011 (UTC)[reply]

Another fictional view on this "runaway AI" concept is "The Metamorphosis of Prime Intellect" (parts of this story are very NSFW, by the way), wherein the titular AI, "Prime Intellect", gains a level of understanding of how the Universe works at a low level, based on extrapolating from available experimental data on a fictional phenomenon known as the "Correlation Effect", which sounds a bit like Quantum tunnelling. With this new-found knowledge it is able to redesign and manufacture new and improved hardware for "itself", creating the exponential growth the OP mentions. The author doesn't give this process a name, however. I highly recommend the story, which has an Asimov-esque exploration of the Three Laws of Robotics at its core. --Rixxin (talk) 10:52, 6 October 2011 (UTC)[reply]

Are there any reputable, free RAM "scrubbers" out there?

They would operate like (and hopefully better than) MemTurbo except that I would like a free utility. (MemTurbo is paid.)

Since my computer seems to keep running slow no matter what I try to close or minimize, I need to find a good RAM scrubber so that after it does its work, the computer runs faster than before it started.

So could someone please point me in the right direction? Thanks in advance! --70.179.161.22 (talk) 13:24, 5 October 2011 (UTC)[reply]

To point you in the right direction, "scrubbing" RAM does not, and can not, improve your computer's performance. Such programs are proverbial snake oil. Ensure that you are not running any unwanted programs that consume processor and memory resources. If your computer is still running below your expectations, you probably have to upgrade to newer hardware. Nimur (talk) 13:31, 5 October 2011 (UTC)[reply]
Come back, QEMM, all is forgiven. --Tagishsimon (talk) 15:05, 5 October 2011 (UTC)[reply]
MemTurbo gets 5 stars from cnet in the editor review, not sure if that says anything.
Windows 2000 had an option to optimize performance for either "Applications" or "Background Services". I don't know what a "scrubber" does, but predictive caching, pre-fetching, and swap priorities based on user settings can all help. Whether they could do a better job than the OS itself, I don't know. A fragmented or variable-size swap file (if that still exists) can slow a PC down. Color settings, desktop themes, appearance, transparency of windows, application settings (like limits on the number and size of undos and redos, fonts to load, ...): there's so much that has an influence. DS Belgium (talk) 00:07, 6 October 2011 (UTC)
IOLO drive scrubber. — Preceding unsigned comment added by 74.178.177.94 (talk) 19:41, 5 October 2011 (UTC)[reply]
Don't think he wants to wipe his harddrive. Was this perhaps meant for the next section? DS Belgium (talk) 23:29, 5 October 2011 (UTC)[reply]

Dod Compliant wiping

Hello, I need to wipe a server and it needs to be DoD-compliant. After looking on my own I got lost and confused. Can someone help me with what I need and places to get it? I got this old Dell server and it needs a DoD-compliant wipe before I can use it for my own means. I have never done anything with servers and was wondering if anyone could help.

Thank you 152.27.56.61 (talk) 23:04, 5 October 2011 (UTC)[reply]

Have you read National Industrial Security Program#Data sanitization and checked out references 5 & 6? --Tagishsimon (talk) 23:08, 5 October 2011 (UTC)[reply]

See DBAN, but unless you've signed some legally binding agreement don't waste your time: one wipe with zeroes will suffice. ¦ Reisio (talk) 23:21, 5 October 2011 (UTC)[reply]
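For the single zero pass mentioned above, `dd` from a Linux boot CD is the usual tool. The first (commented-out) line is the real, destructive form, with /dev/sdX as a placeholder for the actual target disk; the rest demonstrates the same idea harmlessly on an ordinary file:

```shell
# Real disk wipe -- DESTRUCTIVE, triple-check the device name first:
# dd if=/dev/zero of=/dev/sdX bs=1M

# Harmless demonstration on a 4 MiB file instead of a disk:
dd if=/dev/urandom of=demo.img bs=1M count=4 2>/dev/null            # "old data"
dd if=/dev/zero of=demo.img bs=1M count=4 conv=notrunc 2>/dev/null  # one zero pass
cmp -n $((4 * 1024 * 1024)) demo.img /dev/zero && echo "all zeroes"
rm -f demo.img
```

`conv=notrunc` overwrites in place rather than truncating, which mirrors how a real device wipe behaves.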

"DoD Compliant" is not specific enough; somebody needs to give you a specific requirement that you need to comply with. Different specifications exist for different types of data. If you aren't sure what you need to comply with, you should absolutely escalate the issue to your supervisors.
If somebody else gave you a server, and it actually did need to be wiped, it really was their responsibility to do so. Nimur (talk) 00:14, 6 October 2011 (UTC)[reply]
Yea, what a weird thing to do. "Here's a computer, but be sure to erase the files, because if you got your hands on them there would be trouble!" APL (talk) 03:16, 6 October 2011 (UTC)[reply]
I have worked for many different agencies under the DoD. All of them have specific wiping procedures. In one, there was a CD that you put in the computer and rebooted; the CD wiped the drive, and all was good. In another, the drives had to be sent to a wiping center; when you got a used computer, you had to order a drive from the wiping center. In another, drives had to be shredded. If the user is looking for "DoD compliant", whatever agency he or she is with will have specific procedures for doing this. -- kainaw 12:51, 6 October 2011 (UTC)

Iolo drive scrubber. — Preceding unsigned comment added by 184.44.146.86 (talk) 16:31, 6 October 2011 (UTC)[reply]

October 6

Browser keeps imploding

Hi. My Internet Explorer browser, quite problematic in itself, is continuously closing and re-starting very frequently, often when I save or preview an edit on Wikipedia. What can I do to correct this? Moreover, is the problem likely inherent within my browser or the wiki-syntax, and what is a likely source? Thanks. ~AH1 (discuss!) 00:34, 6 October 2011 (UTC)[reply]

FWIW, that's started happening to me too over the last few weeks, initially only when ascending levels on a couple of games on the Miniclip site, and now occasionally with Wikipedia (happened twice when I tried to save an edit on this very RefDesk a few hours ago), and probably one or two other sites I haven't consciously registered yet. Sometimes IE "recovers" the tab, sometimes it simply shuts down instantly. I'm running Windows XP and consequently IE6: my inexpert suspicion is that because these programs are getting long in the tooth, minor incompatibilities are starting to crop up with more up-to-date sites' programming, but I too would like an informed diagnosis if possible. {The poster formerly known as 87.81.230.194} 90.197.66.118 (talk) 00:56, 6 October 2011 (UTC)[reply]
It's probably both. IE6 is notoriously troublesome, and as it gets less used, web designers are probably thinking about it less and less (hehe, I just found a sarcastic website trying to save it), and making changes without testing them thoroughly on IE. At the risk of being obvious, have you considered switching to Firefox, or at least upgrading to a recent IE? Paul (Stansifer) 02:05, 6 October 2011 (UTC)[reply]
Speaking for myself (sorry, AH1 - I don't want to hijack your thread!), I already have Firefox (and Google Chrome) available, and occasionally resort to it when IE6 is being recalcitrant (something on Scientific American's hosted blogs is currently breaking it, for example), but prefer to stick with the more familiar engine (and avoid having to work out how to transfer all my Favorites entries) while it still seems viable. As for updating IE from version 6, my limited understanding is that IE6 is standard for and integrated with Windows XP, and despite some searching on the Microsoft site, I haven't found out how (or if it's even possible) to upgrade it to a more recent version. Experience over the decades has taught me not to try to fix something before it's (too) broken, in case I wind up with a worse situation which due to my present circumstances I couldn't easily afford to pay to have corrected. Similar considerations are, of course, why I'm also sticking to XP for the nonce. {The poster formerly known as 87.81.230.195} 90.197.66.175 (talk) 14:18, 6 October 2011 (UTC)[reply]

Hello. This section may be of interest to you: Wikipedia:VP/T#IE8. --Dweller (talk) 14:34, 6 October 2011 (UTC)[reply]

No sound in laptop

Without warning, my laptop just ceased to give sound. This isn't after a particular update of any sort that might've adversely affected it. It just randomly happened. I've even restarted it in hopes that it might come back, but still no sound. And no, the computer hasn't sustained any physical damage, either. If it's the sound card, how do I diagnose the problem and how do I fix it? I'm using Windows 7 on a T-series Lenovo Thinkpad, if this helps. 70.29.250.180 (talk) 03:31, 6 October 2011 (UTC)[reply]

Never mind, it was the mute button that was behind the issue (although I pressed it several times at first and it did nothing). 70.29.250.180 (talk) 03:33, 6 October 2011 (UTC)[reply]

word finder

Please recommend simple software to find the words that can be formed from the letters of a word. E.g., what words can be formed from the letters of the word "building"? (build, bind, guild, etc.) Thank you. 175.157.80.224 (talk) 05:39, 6 October 2011 (UTC)[reply]

The Internet Anagram Server (Or, "I, Rearrangement Servant") may be what you're looking for.
If you choose the "advanced" settings, and turn on "Show candidate words only", you'll get a complete list of every word that can be made from the letters you inputted.
Here are all the words that can be made from "building".
Here is a list of anagrams of the word "building". (In an anagram, all letters must be used.)
Hope this helps. APL (talk) 08:52, 6 October 2011 (UTC)[reply]
By the way, some of these words are pretty obscure. I don't even know what a "blini" is. APL (talk) 08:53, 6 October 2011 (UTC)[reply]
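A sketch of what such a tool does internally, in Python: a candidate word qualifies if it needs no letter more often than the source word supplies it. (The candidate list below is a toy stand-in; a real tool scans a full dictionary file.)

```python
from collections import Counter

def formable(word, letters):
    """True if `word` can be spelled from the letters of `letters`,
    using each letter at most as often as it appears there."""
    have = Counter(letters)
    return all(have[c] >= n for c, n in Counter(word).items())

# Toy candidate list; "bingo" fails (no 'o'), "dull" fails (one 'l').
candidates = ["build", "bind", "guild", "bingo", "dull"]
print([w for w in candidates if formable(w, "building")])
# → ['build', 'bind', 'guild']
```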

9 phone-only apps, but only 30 MB of space left? Nonsense!

My Xperia Play is down to 9 "phone-only" apps (according to the App2SD app.)

The 9 apps consume a grand total of 9.038 megabytes.

However, it shows that I only have 30.37 MB available, and the total internal memory is 380 MB.

Clearing the cache will remove 5-7 MB, tops. I surmise that it does no good to delete all SMS messages and phone history logs because a Freenode chatter stated that "the entire text of War & Peace only amounts to 500 KB." That was after I deleted all SMSes. (I had happened to back them up to my Gmail, hence I didn't mind trying.) I had to inquire as to why it hardly made a dent; the War & Peace remark was the response.

Next, I tried deleting/uninstalling apps that I no longer needed, plus the history and temp files of the Android browser. It still didn't make the expected impact.

I remember having more than 9 phone-only apps early in my ownership and still hundreds of MB of internal storage left. This was even before I learned to move the movable ones to the SD card. (All of that has already been done, as you'd expect.)

So why am I still left with 30.37 MB of empty space? What have I not found that I can safely delete, and where in the file directory would it be?

If I need to download a special app to find what I need to delete, please share. Thanks. --70.179.174.63 (talk) 08:14, 6 October 2011 (UTC)[reply]

There is a lot to check - not just cache. How many text messages are sitting in memory? Do you have any photos sitting in memory (make sure they are saving to the card)? Does your browser have a lot in cache? Does your browser have a lot of old windows open in the background? How many downloads do you have sitting in memory? Did your contact list explode in size before that bug was fixed? For me, the three biggest offenders were: my web browser had about 20 old windows open and sitting in the background; I had a huge PDF sitting in downloads; and my contact list was about 30 times larger than it should be, so I deleted it and restored it from backup. -- kainaw 15:21, 6 October 2011 (UTC)[reply]

Tool Bar Blacked Out

Sometimes, out of the blue, my bars at the top of the screen become black and I can't read any words on them. Sometimes only some are black, but always the tool bar becomes black when it happens. Sometimes there are "holes" in the bars where I can see a blue screen behind them peeking out. Sometimes there are vertical white bars on the left. Most times they go away by restarting, but I'd like to not have this happen in the first place. I'm in IE6. — Preceding unsigned comment added by 98.77.183.51 (talk) 08:44, 6 October 2011 (UTC)
Me again: I solved the problem myself. I changed the theme in Properties (right-clicked on the blue screen) to Windows Classic, then checked Internet Explorer and the problem was gone. I then changed back to my favorite theme and it was still gone. — Preceding unsigned comment added by 98.77.183.51 (talk) 09:37, 6 October 2011 (UTC)[reply]

I'm glad you solved your problem, but, by the way, you really need to upgrade to a newer browser than IE6, which has been labeled "the least secure software on the planet". Comet Tuttle (talk) 15:44, 6 October 2011 (UTC)[reply]
This is actually a common problem [8] - it happens consistently to me when attempting to use IE7 or IE8 - the fix is as described - reset themes. I think it may be an XP issue. Microsoft claims to have published fixes (for my particular problem) but they do nothing. It may be worth searching Microsoft support for the specific case you have - they may have a script to fix it, but don't hold your breath. 87.102.42.171 (talk) 22:54, 6 October 2011 (UTC)[reply]

From checksum to DVD

If a checksum identifies a DVD, then just with this checksum, should I be able to test 700 MB long strings to discover it? Maybe it's a lot of processing, but is it theoretically possible? Wikiweek (talk) 12:30, 6 October 2011 (UTC)[reply]

No.
There is more than one 700 MB long string that would produce the same checksum. (You should be able to convince yourself of this via the Pigeonhole principle: there could not possibly be enough checksums to match up with every possible 700 MB DVD!) Checksums are not compression techniques; they're just error-checking. APL (talk) 12:39, 6 October 2011 (UTC)[reply]
(By the way you say a "lot" of processing, but perhaps you don't realize how much. If you tried this experiment The Universe would end before you finished.) APL (talk) 12:44, 6 October 2011 (UTC)[reply]
I don't know that I agree with APL on this one. If the number of possible DVDs were 256 to the power of 700 million (every possible 700 MB string) then, yes, but the number of DVDs ever produced (including one-time writable DVD-Rs!) is many orders of magnitude less than this, and if you use CRC or a similar, more sophisticated "checksum" technique, I don't think there's any likelihood of duplication in the real world. Comet Tuttle (talk) 15:42, 6 October 2011 (UTC)[reply]
Oh, OK then. The mention of testing "700 MB long strings" made me think that the checksum was for any arbitrary possible DVD. Like, if I throw some of my files onto a DVD, then give you the checksum, it would take you past the end of the universe to check all possible combinations, and you'd wind up with a lot of them that matched the checksum.
I also agree with Comet Tuttle, though. If you know for sure that the DVD in question is a commercially produced DVD, I guess you could hole up in a Netflix shipping center for a few years and figure out which movie the checksum belongs to. APL (talk) 23:05, 6 October 2011 (UTC)[reply]
Uh, remembering of course that DVDs hold a lot more than 700 MB. Wikiweek may be thinking of CDs. APL (talk) 23:06, 6 October 2011 (UTC)[reply]
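The pigeonhole point is easy to see with a deliberately tiny checksum. A Python sketch using a toy 8-bit "sum of bytes" checksum (real checksums such as CRC32 are much longer, but the same many-inputs-per-output argument applies at scale):

```python
def byte_sum_checksum(data: bytes) -> int:
    """A toy 8-bit checksum: the sum of all byte values modulo 256.
    Only 256 possible outputs exist, so collisions are unavoidable."""
    return sum(data) % 256

# Two different inputs, one checksum: the checksum alone cannot tell
# you which input produced it, so it cannot be "decoded" back to data.
print(byte_sum_checksum(b"ab"), byte_sum_checksum(b"ba"))
# → 195 195
```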

using ms access

I have a table containing names and codes of some 250 employees. I have kept their photos in JPG format in a separate directory. The name of each picture file is the code of the respective employee. I wish to connect these pictures with the table so that whenever I print letters to the employees, their photo is also included in the letter. How can I include the pictures in the letter form? The table is in MS Access. — Preceding unsigned comment added by 117.241.56.189 (talk) 14:16, 6 October 2011 (UTC)[reply]

Access has a form control called an Image control. It has a field where you can specify a filename. What you'd need to do is make a script in VBA that, on printing (or opening, or whenever you want the image to be used), it populates the contents of the Image control with a link to the image file in question. See this for more information on using the Image control, or your Access help file. If you have more specific questions, don't hesitate to ask on here, but take a look at how the Image control works first. --Mr.98 (talk) 15:23, 6 October 2011 (UTC)[reply]
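The lookup the VBA script performs is just filename construction from the employee code. The same logic, sketched here in Python rather than VBA (the directory name and the `.jpg` extension follow the convention described in the question; `photo_path` is a hypothetical helper name):

```python
import os

def photo_path(photo_dir, employee_code):
    """Return the photo file for an employee, following the convention
    that each picture is named <employee code>.jpg; returns None when
    the file is missing, so a letter can fall back to no photo."""
    path = os.path.join(photo_dir, f"{employee_code}.jpg")
    return path if os.path.exists(path) else None
```

In the Access version, the VBA event handler would feed the resulting path into the Image control's picture property for each record as it prints.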

Steve Jobs

Steve Jobs died on October 5, 2011. What's next for Apple?

Post by Larsona; Send me a message at my talk page. 14:35, 6 October 2011 (UTC)[reply]

Tim Cook. -- kainaw 15:17, 6 October 2011 (UTC)[reply]
Please see WP:CRYSTALBALL. Dismas|(talk) 15:55, 6 October 2011 (UTC)[reply]

Official statements are available at the Apple Press Info webpage. Nimur (talk) 19:25, 6 October 2011 (UTC)[reply]

There are also any number of "experts" giving their analysis/opinions all over the internet. Try What's next for Apple. Akrabbim talk 20:09, 6 October 2011 (UTC)[reply]


Difficulty accessing google

Recently (over the last month) I've been having periodic difficulty accessing Google (in the UK). At first I assumed it was my internet connection - but I subsequently discovered that other sites are working fine - I've had to use Bing to do searching. It appears to go off for ~10 minutes and come back. It's happening now. I checked "down or just for me" and it worked http://downorjustforme.com/google.com - the page gave me an IP to try .. and yes - this works, e.g. http://209.85.148.103/ .. But at the same time http://www.google.com or http://www.google.co.uk wasn't working..

Two questions:

  • What's the numerical IP about? Is it what the address resolves to? Does that work globally?
  • Does anyone recognise this problem? (It's definitely only happening to Google - the rest of the web is fine - very odd) (and it's happening more than once per day, almost every day) - using Google Chrome/XP. 87.102.42.171 (talk) 22:31, 6 October 2011 (UTC)[reply]
The IP points to one of Google's servers in Mountain View. (You can find this out by Googling "WHOIS IP" and using one of the Whois services there to locate it).
The reason it's happening is probably because of some problem with your DNS service. The DNS servers are what turn URLs like "google.com" into IPs (like the one you have there). Usually these things resolve themselves given enough time, but if it's been happening for a month, it might be worth looking into how to change your DNS lookup service to something like OpenDNS. How to do that exactly will depend on the specific OS you are using. --Mr.98 (talk) 22:50, 6 October 2011 (UTC)[reply]
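The failing step can be checked outside the browser. A minimal Python sketch of the same lookup the browser performs against the configured DNS service ("localhost" is used as the example hostname because it resolves without a network round-trip):

```python
import socket

def resolve(hostname):
    """Translate a hostname into an IPv4 address via the system's
    configured DNS resolver; returns None when the lookup fails,
    which is the same symptom class as the browser's DNS error."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

print(resolve("localhost"))
```

Running this with "www.google.com" during an outage window, versus a site that still loads, would show whether the problem really is name resolution rather than the connection itself.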
OK thanks (Q1 answered) - I just about understand about DNS - so I understood that - but it's consistently google .. surely the problem would be more widespread for me?
Also some googling turns up an issue relating to google chrome doing "DNS pre-fetching" to improve stuff - some reports say this may be the issue - I'm not convinced - this used to be an option to turn off in chrome's settings - but it seems to have gone now - possibly replaced by the even more optimistic "Predict network actions to improve page load performance" - I don't think that can be a cause - is that irrelevant here?
The fact that the problem appears confined to Google seems to tell me that the DNS server is OK? (The idea that Google is down several times a day consistently seems to be impossible.) 87.102.42.171 (talk) 23:10, 6 October 2011 (UTC)[reply]
I suggest, if you can, that you use another browser such as Firefox for a while, in an effort to see whether it is a chrome-specific issue. --Tagishsimon (talk) 23:30, 6 October 2011 (UTC)[reply]
I'll try that next time. 87.102.42.171 (talk) 23:35, 6 October 2011 (UTC)[reply]
Chrome does have a well-reported difficulty with accessing, of all things, the Google website. Google claims that it has something to do with prefetching, but no fix they've suggested has actually fixed the problem. It isn't only Google. I've found other websites that fail to load. Even here on Wikipedia, there is a periodic failure to load the stylesheet, making the page look weird. -- kainaw 23:43, 6 October 2011 (UTC)[reply]
Yes. I've seen the wikipedia missing style sheet thing a couple of times. It's difficult to imagine what sort of software bug could produce such specifically consistent problems (in a temporally inconsistent way) - all DNS lookups are equal I would have assumed. 87.102.42.171 (talk) 00:00, 7 October 2011 (UTC)[reply]

If I have an infected XP Pro x32 with the heur zero day threat and want to dual boot it with either an XP Pro x64 or a Windows 7 x64, will the virus pass over?

  Hi to all,
      I have a Windows XP Pro 32-bit operating system and it is infected with the Heur zero-day threat virus. I am in the process of getting it fixed up through PC Tools, the anti-spyware company that bought out Spyware Doctor. It is not fixed yet, but I wanted to dual boot this operating system with a Windows XP Pro 64-bit or maybe a Windows 7 64-bit operating system instead. My question is: is it safe to dual boot with either of these operating systems, or will the Heur zero-day threat virus migrate to the newer 64-bit partition containing the operating system and infect that also? Also, which of the two operating systems would be the least likely one for it to migrate to, if it can? I should add that I would rather use up my Windows XP Pro 64-bit operating system first. I would like to do this before I return to work next week if I can, as I have nothing else to do while I am off work. It has already been a week that PC Tools has had my problem in their hands. I would like to hear from people with knowledge please, and not just hearsay. Thank you. — Preceding unsigned comment added by 124.189.33.41 (talk) 23:56, 6 October 2011 (UTC)[reply]

October 7