Welcome to the computing section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 2
e-mail listserve
What's an e-mail listserve? — Preceding unsigned comment added by Kj650 (talk • contribs)
- LISTSERV was a specific software package for electronic mailing list management. Its name was catchy, so "list serve" is now a genericized name for any email list management tool. Nimur (talk) 01:45, 2 November 2010 (UTC)
Firesheep
Can someone explain how this works to me? When I learned networking in school (2003-04), we learned that packets were filtered out by NICs at the hardware level, and as such it was impossible for software to sniff packets from other users unless one specifically purchased a NIC with built-in promiscuous mode, which our teachers told us was rare. However, everything I've been reading about this software makes it sound like this is not rare. Can someone explain this? Magog the Ogre (talk) 04:00, 2 November 2010 (UTC)
- Every network card I've seen, even the old cheap Realtek ones, offered a promiscuous mode. As you suspect, it's not rare at all. I even wonder if network cards without promiscuous mode exist at all, as the online FAQs of some capturing product (libpcap, Wireshark) don't seem to mention this as a possible failure cause. Unilynx (talk) 05:02, 2 November 2010 (UTC)
- Promiscuous network card mode is not rare - it's just rarely enabled by software drivers. Almost all hardware supports this mode. Installing a network monitor like Wireshark is the easiest way to experiment with enabling promiscuous mode. Strictly speaking, Firesheep does not have a packet sniffer; you must install pcap, a well-known software packet sniffer (that functions by replacing your network interface driver with its own driver). (See Firesheep installation procedure). Firesheep then provides a user-friendly interface, not unlike Wireshark, except that Firesheep is tuned specifically to filter for Facebook login traffic and similar sorts of things, and then display that information in an impressive way. The real meat-and-potatoes of the software is pcap - which has been around for ages and has always been capable of these sorts of data interceptions. Firesheep is just a "pretty-printer" for packet-sniffers. Nimur (talk) 05:08, 2 November 2010 (UTC)
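If you want to see for yourself what promiscuous capture looks like from code, the sketch below is one minimal way to do it in Python. It assumes Linux, root privileges, and the third-party scapy library (which drives libpcap, the same layer Wireshark and Firesheep rely on); the interface name and filter string are placeholders.

from scapy.all import sniff

def show(pkt):
    # Print a one-line summary of every captured frame - including, once the
    # card is in promiscuous mode, frames addressed to other hosts on the segment.
    print(pkt.summary())

# By default scapy asks libpcap for promiscuous capture on the interface.
sniff(iface="eth0", filter="tcp port 80", prn=show, count=20)

Nothing here is specific to Firesheep; it simply shows that once the capture library has the card in promiscuous mode, filtering for interesting traffic is the easy part.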
- Agree with the previous comments, promiscuous mode is not rare but normally set off because it is normally thought to be not useful. Regards, SunCreator (talk) 15:38, 2 November 2010 (UTC)
- Magog the Ogre, you may be interested in Wikipedia:Wikipedia Signpost/2010-11-01/Technology report#Browsing securely.
- —Wavelength (talk) 16:51, 2 November 2010 (UTC)
How do you run the add-on if you add it? 84.203.243.10 (talk) 10:46, 3 November 2010 (UTC)
- You might want to start a new section, or frankly, just google it. Magog the Ogre (talk) 23:09, 3 November 2010 (UTC)
- I can certainly believe professors saying it was rare in 2003, but by 2003 it was hardly rare. Check out Orinoco cards if you're interested (one of the most talked-about wireless chipsets around that time), but even Intel was getting in on it with its Centrino-related cards back then, all of which supported promiscuous mode even in Windows. It may have been rare before then, but I'll leave that to somebody older than me. Shadowjams (talk) 09:37, 4 November 2010 (UTC)
Video games
If a human and a computer were to play a game of Starcraft, with neither receiving any advantages or handicaps, who would win? How about in other video games, like Counterstrike, Rise of Nations, or Civilization 4? Is there any well-known real-time strategy game in which the AI has ultra-superhuman capabilities (i.e. it can easily beat the best human player)? --99.237.232.254 (talk) 04:08, 2 November 2010 (UTC)
- Edit: let me change Starcraft to Starcraft II, since the former was made at a time when artificial intelligence was in its infancy. --99.237.232.254 (talk) 04:11, 2 November 2010 (UTC)
- Historical note: The artificial intelligence seen in computer games isn't representative of the state of the art of AI research. See AI Winter for a historical overview of the rise and fall of AI research. Paul (Stansifer) 12:32, 2 November 2010 (UTC)
- With RTS games it's usually down to APM (actions per minute)... the best players in the world can get (incredibly) over 300 APM if memory serves me correctly about some Starcraft 1 Pro Korean players. Actions comprise keystrokes and mouse movements and clicks. I think there was a documentary where their brain functions were monitored, and the casual gamer would get up to about 60 APM and be cognitively aware of every action, whereas the pro would get up to 300 APM and have an instinctive awareness of the whole map... different areas of the brain lit up in the monitoring. However, given your question, you would allow the computer to have as high an APM as possible for its CPU (and the computer player's APM is limited even on the toughest level)... so in my opinion any human would not stand a chance, unless the computer AI is terribly flawed or the human exploits a bug. There was a time when a computer would not have been able to beat the best humans (compare chess and the progress of computer ELO ratings) but the modern computer is way too powerful in terms of raw computational speed. Now turn-based - like Civ - that's another argument altogether... I think a top human player ( or a forum based game) would be able to beat a computer player simply because it's incredibly difficult to write AI for Civ (the highest levels in Civ employ resource cheating). For FPS I think a godlike computer player would own a human (the computer is fast enough to dodge anything at close range for example)- but I've seen humans do some incredible things in those games (hide, snipe, etc.) so I don't know. Sandman30s (talk) 11:09, 2 November 2010 (UTC)
- I agree with Sandman30s, mostly. Suppose a series of games is made, each of which has one primary test that the user is continuously competing against. On one extreme the test is reflexes with no strategy (perhaps electronic whack-a-mole) and on the other extreme the test is strategy with no reflexes (perhaps chess or go). The closer you get to the "reflexes" side of this spectrum, the more obvious it is that the computer will beat any human. As you get closer to the "strategy" side, it becomes less obvious — a run-of-the-mill chess program will beat your average human chess player 99% of the time, but that's not true for go. An RTS I would place somewhere about halfway between reflexes and strategy, and one way to analyze the probable outcome is to say that the computer will completely beat the human on the reflexes component, and it's less obvious on the strategy component. Personally I think the strategy element of RTS games is pretty elementary and so it's going to be the computer in a landslide, if the developer has spent a decent amount of time writing good AI — but everything does hinge on that. Comet Tuttle (talk) 16:27, 2 November 2010 (UTC)
- Here's a story that was linked by Slashdot today discussing using genetic algorithms to optimize build orders for the Zerg. Comet Tuttle (talk) 17:13, 2 November 2010 (UTC)
- Specifically, it's about optimizing a zerg rush. --Carnildo (talk) 22:47, 3 November 2010 (UTC)
- The strategy component in most RTS's, including Starcraft, is by no means simple. A simple Google search would reveal lots of webpages, and even entire books, devoted to exploring the best strategies in Starcraft--and very few of these require a high APM. These strategies are very difficult for a human to think up or defend against, so I don't think it would be easy for a computer. --99.237.232.254 (talk) 05:40, 3 November 2010 (UTC)
- I would go with what a multiple world champion Korean pro does, rather than what a random internet guide says. However, if the human and computer knew exactly the same strategies, it would come down to APM and no human can be remotely close when it comes to raw computation. Sandman30s (talk) 08:57, 3 November 2010 (UTC)
Difficulty in choosing username
Pretty much every website on the internet requires a username these days if you intend to participate. I have extreme difficulty in choosing a username; I have few defining interests or hobbies to base a username on, I don't want to include any part of my real identity, and I don't want to divulge my gender. I thought about just using random numbers or something, but almost every site requires a combination of letters and numbers, and some block random usernames as "confusing". Any suggestions? —Preceding unsigned comment added by 114.37.143.221 (talk) 11:07, 2 November 2010 (UTC)
- How about something obvious you can see around you - eg "22inchsonytv" or "nextdoorsfordcar" ? Sf5xeplus (talk) 13:14, 2 November 2010 (UTC)
- This topic comes up once every few months here. Here is a thread from last December containing some suggestions. Personally I like choosing two random nouns in a row. Comet Tuttle (talk) 16:18, 2 November 2010 (UTC)
- Just be sure to avoid this issue... ;-) -- 78.43.71.155 (talk) 21:53, 2 November 2010 (UTC)
- Try one of these. I haven't used them so not sure what they give you. Mo ainm~Talk 16:25, 2 November 2010 (UTC)
- There's an obvious security problem with asking some website to decide your username and password for other websites. Comet Tuttle (talk) 16:31, 2 November 2010 (UTC)
- Password yes and I would never suggest anyone used one, but a username is different. Mo ainm~Talk 16:37, 2 November 2010 (UTC)
- Here is a page that just picks random adjectives and pairs them with random nouns. Click it a few times until you see something you find enjoyable. Of the ones I got, I liked "BourgeoisBasement" and "EverydaySubstitute" the most, personally. --Mr.98 (talk) 16:56, 2 November 2010 (UTC)
Something that's quite simple: take your initials and stick a random number on the end, or a number that actually means something to you. Say, for example, you like seven: you could square that and add it to your initials, e.g. gyd49. Or take a character you like and stick your birth year behind it. 70.241.22.82 (talk) 17:41, 2 November 2010 (UTC)
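Combining the two suggestions above (a random adjective-noun pair plus a tacked-on number), a throwaway Python sketch might look like this; the word lists are just placeholders to swap for longer ones.

import random

ADJECTIVES = ["everyday", "quiet", "amber", "rapid", "bourgeois"]   # placeholder lists
NOUNS = ["substitute", "lantern", "harbour", "pencil", "basement"]

def make_username() -> str:
    # Adjective + noun + a couple of digits, so sites that insist on
    # letters-and-numbers are satisfied; nothing reveals identity or gender.
    return (random.choice(ADJECTIVES)
            + random.choice(NOUNS).capitalize()
            + str(random.randint(10, 99)))

print(make_username())   # e.g. "quietLantern47"

Run it a few times and keep whichever result you can live with.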
- See Check Username Availability at Multiple Social Networking Sites.
- —Wavelength (talk) 18:21, 2 November 2010 (UTC)
Translate your real name into some other language and use that. Jean Aigle 22:04, 2 November 2010 (UTC)
Chinese ?
Anyone seeing Chinese characters in their watchlists or contribution listings (on Wikipedia), or is it just me? OK, it's stopped now - but explain this (see right). I suppose I should ask what the character is too (Chinese for 'you've been hacked', no doubt)?!! Sf5xeplus (talk) 13:14, 2 November 2010 (UTC)
- Request - whoever does identify the character (maybe somebody more skilled in the use of a Chinese IME), could they paste it in plain text as well? I have a sneaking suspicion it's going to be a character encoding glitch - most likely "-1" in one of the unicode code-pages or something. Nimur (talk) 14:34, 2 November 2010 (UTC)
- It is a character encoding glitch. It is, for some reason, using unicode when it shouldn't. The character is 楷, meaning "good". -- kainaw™ 14:38, 2 November 2010 (UTC)
- Would you elaborate on how an HTML page (especially one encoded in UTF-8) can fail to use Unicode?—Emil J. 14:46, 2 November 2010 (UTC)
- It is using a code that it shouldn't be using - if you want to be painfully pedantic. -- kainaw™ 16:45, 2 November 2010 (UTC)
- It's marked all my edits as good - that's a feature not a bug :)
- I suppose a bug report is pointless, as I can't reproduce it.Sf5xeplus (talk) 16:46, 2 November 2010 (UTC)
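For the curious, this kind of mix-up is easy to reproduce: the raw bytes of a character under one encoding, read back under another, turn into something entirely different. A small Python illustration (which encodings were actually confused here is anyone's guess):

# Look at the character's code point and UTF-8 bytes, then misread those
# bytes as Latin-1 - the classic mojibake, just in the other direction.
ch = "楷"
raw = ch.encode("utf-8")
print(hex(ord(ch)), raw)        # the code point and its three UTF-8 bytes
print(raw.decode("latin-1"))    # the same bytes shown as Latin-1 gibberish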
Simple email forwarding
For a club I'm a member of, I'd like to be able to set up an email address that will forward messages to a (small) specified list of recipients, with no, or minimal, action required by the recipients themselves. The purpose its to allow members of the group to send a message to all other members by emailing to a single address. We're currently using Yahoo Groups, but are finding that rather cumbersome to use, and we don't need all the facilities it provides. Can anyone recommend a suitable way of achieving this? (Preferably free, of course, or at least very cheap.) AndrewWTaylor (talk) 15:23, 2 November 2010 (UTC)
- A distribution/mailing list? Not too savvy about it myself, but this page might be worth a look: http://email.about.com/od/outlooktips/qt/Distribution_List_Outlook.htm - as might the google search I got it from http://www.google.co.uk/search?hl=en&biw=1260&bih=810&q=outlook+mailing+list&aq=f&aqi=g1g-c4g1g-c1g1g-c1g1&aql=&oq=&gs_rfai= Darigan (talk) 16:24, 2 November 2010 (UTC)
- Do you want this to be hosted by some other web service, as Yahoo Groups is; or do you control a web server yourself so you can install some software? If the latter, see LISTSERV or Electronic mailing list and follow the links from there for various solutions you can set up on your server. Comet Tuttle (talk) 16:30, 2 November 2010 (UTC)
- Thanks both. @Darigan: it needs to be independent of the email client and usable by e.g. Hotmail users. @Comet Tuttle: Ideally I want it externally hosted - there isn't a server I can use; we do have a couple of Internet domains, but the company they are registered with (FreeParking) only allows forwarding to a single email address (as far as I can tell). AndrewWTaylor (talk) 17:26, 2 November 2010 (UTC)
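If the club ever does get access to a machine that can poll a mailbox, the forwarding itself is not much code. The sketch below is a rough, hypothetical Python loop using the standard imaplib and smtplib modules; every hostname, login and address is a placeholder, and resending raw messages this way is crude (it ignores loops, bounces and unsubscribing), so a proper mailing-list service is still the easier route.

import imaplib, smtplib

MEMBERS = ["alice@example.org", "bob@example.org"]   # the club's recipient list
LIST_ADDR = "club@example.org"

imap = imaplib.IMAP4_SSL("imap.example.org")
imap.login(LIST_ADDR, "password")
imap.select("INBOX")
_, data = imap.search(None, "UNSEEN")                # numbers of unread messages
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")        # the raw message bytes
    raw = msg_data[0][1]
    with smtplib.SMTP_SSL("smtp.example.org") as smtp:
        smtp.login(LIST_ADDR, "password")
        smtp.sendmail(LIST_ADDR, MEMBERS, raw)       # re-send to every member
imap.logout()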
Upgrading my HP ATi Mobility Radeon 4530
I was trying to run a program that didn't work on my Hewlett Packard laptop. Talking with someone on the net, I was told that my ATi Mobility Radeon 4530 driver was outdated. I needed to upgrade in order to run the program, and I followed a link given to me to download it:
http://www.nvidia.com/Download/index.aspx?lang=en-us
on this site I chose option 2, to let the site determine what upgrades I needed, so I followed that and ended up with this site:
I downloaded the files and installed the program. But at the end of installation a message told me it was finished and that I already had these upgrades, and that I was now only upgrading atop upgrades I already had. So I figured OK, then it must be in order and I have the necessary upgrades. But another program called "Driver Mender", which I also downloaded from the links I got to check the computer's system and let me easily upgrade my computer with the newest stuff, tells me that I still have ATi Mobility Radeon 4530 and that it is in need of an upgrade. And I don't understand why this is happening when I downloaded the HP ATi Mobility Radeon HD 4530/4650 Driver upgrade from this link:
So I'm confused, since every time I try to install it again it still says I already have the upgrades, but then Driver Mender says the opposite. Where on my computer can I go to check my system information, including of course what version of the ATi Radeon driver I have?
Any advice on what to do, or what sites to go to to get the latest upgrades? Free of course, like this one was.
As for the program I was supposed to run in the first place, never mind that, because I'm not running that program now after all (don't ask), but I still want my computer to be upgraded as well as possible while I have the chance.
Any clever computer minds out there? :) Krikkert7 (talk) 18:14, 2 November 2010 (UTC)
- For an ATI driver (not an Nvidia one, which is what your first link points to and which isn't going to work), either go to ATI's own website or to HP's website and pick the appropriate driver for your model of laptop. I can't think of any reason you'd get it from anywhere else. -- Finlay McWalter ☻ Talk 19:57, 2 November 2010 (UTC)
Yes I know, the Nvidia page only helped me determine what upgrades I needed, and I didn't download anything from that site but found my way to the other link I showed and downloaded from there. —Preceding unsigned comment added by Krikkert7 (talk • contribs) 01:29, 3 November 2010 (UTC)
base64_decode
I came across a footer.php file that looks like it is encrypted, as it just contains random letters and digits; base64_decode is the only part that makes any sense, so I assume encryption was used on it. Is there any way to decode this page? Mo ainm~Talk 19:51, 2 November 2010 (UTC)
- That sounds like it's base 64 encoded; you really shouldn't see that (the browser should automatically decode it), but you can manually decode it. You can do that on your own computer (using one of the many base64 libraries) or with an online thing like this. -- Finlay McWalter ☻ Talk 19:55, 2 November 2010 (UTC)
- That's great, thanks, I'll have a look at that. Mo ainm~Talk 19:59, 2 November 2010 (UTC)
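If you would rather decode it locally, Python's standard base64 module does the job in two lines. The string below is a placeholder; note that obfuscated PHP footers are often wrapped in eval(base64_decode("...")) and sometimes compressed as well, in which case one decode pass will not give you readable source.

import base64

blob = "aGVsbG8gd29ybGQ="   # paste the Base64 string from footer.php here
print(base64.b64decode(blob).decode("utf-8", errors="replace"))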
Weird networking trouble
I wrote a few days ago for advice about my computer (http://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Computing#Unallocated_space). However, I have a bigger problem starting yesterday. Basically, this computer (Dell Alienware) connects fine to the port in the wall but my computer (Asus n61jv-x2) does not. To confound the problem, the wireless router stopped working* the same time as the ASUS did and I have been scratching my head since. Both systems are running Windows 7 and as far as I know, the same wired networking hardware (Atheros AR8132 on the Alienware and AR8131 on the Asus) too. I am at a loss to what to do. Any ideas please? I don't know how much IT can help as the port is clearly functional (right?). Thank you! You guys are my hero. Kushal (talk) 20:16, 2 November 2010 (UTC)
*To clarify, we were both using the wireless router Netgear WGR614v7 but now that it is not working, I have the Alienware hooked up to the wall. The router seems to be pretty much alive, but the logo light on the router stays a blinking yellow. Resetting the router did not help. Kushal (talk) 20:20, 2 November 2010 (UTC)
- Just to clarify, did you try unhooking the Alienware machine from the router so you could make sure to use exactly the same plug on the router when attempting to hook up your computer to the router? What exactly happens when your computer is hooked up to the router and you restart your computer? What happens when you ping the router's IP address from the command line? Comet Tuttle (talk) 20:45, 2 November 2010 (UTC)
- Comet, Thanks for answering. Yes. I tried various things. When I hook up the Alienware to the wall, everything is fine. When I hook up the Asus to the wall, there is no connection. When I hook up either machine to the Netgear, I am connected to the router but there is no Internet. Because I was able to get into the router's management system at routerlogin.com , I did not think about pinging the router. I mean, the connection between the router and the computer seems fine. It is just the router to the wall part that's messing me. Kushal (talk) 21:54, 2 November 2010 (UTC)
- Hold up, that's not clear enough - when you hook up the Asus to the router, you are able to ping it and connect to its management web page? I had started to suspect that, against the odds, you had experienced a simultaneous hardware failure of both the router and the Ethernet circuitry on the Asus, so the solution was to get a new router and also get a new Ethernet adapter for the Asus (like a PC card or maybe a USB Ethernet adapter). But if the Asus is able to communicate with the router, then you may be able to fix this by just replacing the router and twiddling with the Asus's network settings. Comet Tuttle (talk) 00:07, 3 November 2010 (UTC)
- Any ideas on what tweaks I could do to it? I am new to Windows 7. Kushal (talk) 15:50, 3 November 2010 (UTC)
- OK, I have more details. I can connect via ethernet at another place. In my room, whenever I connect by ethernet, I see multiple connections called Network 2 and Network 3 (or something) simultaneously connected. Is that a problem? The place I connected successfully at only shows one active connection. Help? Kushal (talk) 16:02, 3 November 2010 (UTC)
- (Is that plugged straight into the wall in your room, or through the router?) This is good news; your laptop's network circuitry seems fine. It is unusual that you would see both of those connections. I will take a guess that your machine is connecting via the Ethernet cable on one of them, and via a wireless connection on the other. From the "Network" Control Panel, take a look at each of the two connections, and right-click and disconnect whichever one isn't the wired Ethernet connection. Right-click the other and choose to repair the connection and see if that automagically fixes your connection? Comet Tuttle (talk) 18:28, 3 November 2010 (UTC)
- Yes, I have pretty much ruled out any hardware faults. Got in touch with IT. Could not get the issue resolved, so we did a workaround. We made the computer believe I have a static IP address (and, knowing the DHCP server won't issue those IP addresses in the foreseeable future), we are good to go for now. I did try to disable and repair connections, but there is something with DHCP and the 192.168 servers that just doesn't fit. In other news, I gave the wireless router a very similar IP address and it is running now too! I am on wireless on the Asus now. The only thing is that the whole thing has left me feeling strange. I think I need to wipe the computer and reinstall Windows as per the discussion above. The only problem is that there is so much software I have installed and it is such a chore to reinstall them. :?
- Thank you so much for your help. Did anything in what I said ring a bell with you? Please let me know if you think there's a solution to this. Thanks Kushal (talk) 21:59, 3 November 2010 (UTC)
- Hey, if you got it working, you got it working; don't wipe the system needlessly; you might have to just do the same kludge again to get networking working again even after a wipe and reinstall. Enjoy your networking! Glad you had access to an IT person who was able to come up with a workaround. Comet Tuttle (talk) 22:46, 3 November 2010 (UTC)
As you say. The guy actually told me that this was not a solution but just a workaround and I would need to get it fixed. I would hate to have to reinstall all applications one at a time. :/ Kushal (talk) 19:44, 4 November 2010 (UTC)
MacBook Pro
Hi. I have a Macbook Pro that is now about four and a half years old. I'm not sure what state it should be in by this stage but it seems to have slowed down remarkably in the last few weeks, since giving it a lot more use after a few months of respite. I've never done anything to 'clean up' my hard drive or anything technical like that, I've just used it normally over the years but I've not put so much on it to warrant it taking up to ten minutes to load a web page and constantly tell me programs are not responding. Does anyone have any ideas on what I could do to speed it up? 128.232.247.49 (talk) 21:59, 2 November 2010 (UTC)
- Are you running low on disk space? It is preferable to have at least a few gigabytes of free disk space on your computer. Kushal (talk) 23:05, 2 November 2010 (UTC)
- Do you live near an Apple Store? Going to a Genius Bar would probably be an easy way to get a quick diagnosis (which wouldn't cost anything). Otherwise it is pretty hard to tell what the issue is from the description you've given. When something is acting up funny on mine, I try to figure out what is going on from the Activity Monitor (Utilities > Activity Monitor; then go to Window > Activity Monitor to make sure the main screen is displaying). Sort the processes by CPU — what's at the top? Is there something hogging up the processor? Sort by "Real Mem" — is something unusual hogging all the RAM? Look at the "System Memory" tab — are you consistently out of RAM? Look at Disk Activity when you are doing something that might take a long time — does it seem to lock up at all? Paying attention to these kinds of indicators can let you know if the problem is, say, rooted in a hardware issue (e.g. not enough RAM or a buggy hard drive) rather than a software issue. Also, what OS are you using? I found that my MacBook running 10.4.11 got very slow for awhile and that many of my programs were just not very efficient about its usage of memory, and a lot of that was fixed when I upgraded to 10.6 (which was fairly cheap and easy, in the end). --Mr.98 (talk) 23:14, 2 November 2010 (UTC)
- The disk is about the only part that can slow down, either if files are fragmented heavily, or if bad blocks are being mapped out and the replacement blocks are non-local. If you want to keep the computer going, try reformatting and reinstalling, or buy a replacement drive. --Stephan Schulz (talk) 23:32, 2 November 2010 (UTC)
Disk defragmenting might be useful for this... 70.241.22.82 (talk) 16:35, 3 November 2010 (UTC)
- Disk fragmentation is not an issue on OS X - never worry about it. Run Disk Utility (or Disk Warrior, if you own it) on the hard drive to see if it's failing; try reseating the RAM sticks (bad or loose RAM can confuse the system); add more RAM if you have 1 GB or less; check to see if some background app is hogging a lot of CPU (usual culprits are the Finder and mdworker - both of those should settle out over time); if all else fails, download Yasu (or equivalent) and clear all the caches and system swap files, update the prebindings, etc. (sometimes corrupt font or system caches can cause a bit of a tizzy). --Ludwigs2 16:42, 3 November 2010 (UTC)
November 3
help for connecting to server
I installed this game, London Law, on my Ubuntu 10.10... they ask for a host to connect to... how do I find that out, and how do I find out the port number? Thanks Metallicmania (talk) 03:15, 3 November 2010 (UTC)
- Was it blocked by Windows firewall? It may prevent the program from accessing the internet. General Rommel (talk) 21:03, 3 November 2010 (UTC)
- Can we at least read the questions? He said he's installed it on Ubuntu, so Windows firewall wouldn't be an issue. --99.224.10.2 (talk) 22:39, 6 November 2010 (UTC)
Slash in the IP address
What does it mean when there is a slash in the IP address, such as 192.168.0.0/24 ? 220.253.253.75 (talk) 06:58, 3 November 2010 (UTC)
- The concept is explained in our article on CIDR notation. Regards, decltype (talk) 07:02, 3 November 2010 (UTC)
It's an IP range. In that example, it means everything in the range "192.168.0.0" to "192.168.0.255". 82.44.55.25 (talk) 09:49, 3 November 2010 (UTC)
In other words, it is the number of 1s in the subnet mask. So /24 is equivalent to the subnet mask of 11111111111111111111111100000000 or 11111111.11111111.11111111.00000000 or 255.255.255.0 - WikiCheng | Talk 12:19, 3 November 2010 (UTC)
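If you would rather let a library do the bit-counting, Python's standard ipaddress module performs the same calculation; the /24 below is just the example from the question.

import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(net.netmask)             # 255.255.255.0
print(net.network_address)     # 192.168.0.0
print(net.broadcast_address)   # 192.168.0.255
print(net.num_addresses)       # 256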
IT milestones in 2007
Hi, Can you direct me to a website / article which lists the major milestones of Information Technology in 2007 ? - WikiCheng | Talk 08:21, 3 November 2010 (UTC)
- Where on there does it list the specifically IT milestones? --Mr.98 (talk) 12:03, 3 November 2010 (UTC)
- I don't think it does. The closest I could find is Timeline of computing 2000–2009. Although by that, it doesn't look like 2007 was a very busy year. Indeterminate (talk) 15:08, 3 November 2010 (UTC)
Thank you ! In case you find some other websites (other than Wikipedia), I will be interested. Nonetheless, this has been helpful - WikiCheng | Talk 07:50, 4 November 2010 (UTC)
Items with multiple attributes
I want to re-write a simple book-keeping program that labels, categorises, and group-totals the entries in my bank statements, as supplied in CSV format. The earlier version written in old Basic used an array to represent the various parts of each line: date, amount, description, category, etc.
In more modern languages, is there any way of representing this kind of multi-attribute data that is better than an array? Or should I just stick with arrays? Is there any language, especially a basic-like language, that makes things such as totaling a column in an array easy to do? I do not want to use a spreadsheet. Thanks 92.15.0.194 (talk) 14:33, 3 November 2010 (UTC)
- I would suggest you could represent the items as a struct or a class. Then you would have item.date, item.amount, etc. You would then have an array (or, in many languages, a list) of items, and could quickly iterate over them to produce the sums. C# will certainly do this, and I would strongly expect VB.net to be the same. No doubt there are other alternatives. --Phil Holmes (talk) 14:40, 3 November 2010 (UTC)
- (edit conflict)If the set of attributes is fixed, you'd mostly represent each entry as a tuple (aka a struct in C or an object in Java), where each element of the tuple had a type appropriate for the kind of data that attribute would store. If, on the other hand, attributes could be entirely arbitrary, you'd probably use a hashtable to store each of the attribute-name:attribute-value pairs. Then you'd probably have an array of entries (other data structures might be appropriate depending on how you'd be accessing them). Pretty much any modern programming language can do all this very simply. I try not to specifically evangelise, but certainly doing this in Python would be straightforward. -- Finlay McWalter ☻ Talk 14:46, 3 November 2010 (UTC)
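To make the Python suggestion above concrete, here is a rough sketch along those lines; the column names are assumptions about what the bank's CSV export looks like, so adjust them to match the real file.

import csv
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Entry:                      # the "record"/tuple for one statement line
    date: str
    description: str
    category: str
    amount: float

entries = []
with open("statement.csv", newline="") as f:
    for row in csv.DictReader(f):                 # assumes a header row with these columns
        entries.append(Entry(row["Date"], row["Description"],
                             row.get("Category", "uncategorised"),
                             float(row["Amount"])))

totals = defaultdict(float)
for e in entries:                                 # "totalling a column" is one short loop
    totals[e.category] += e.amount
for category, total in sorted(totals.items()):
    print(f"{category:20s} {total:10.2f}")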
- Was just wondering why you say "I do not want to use a spreadsheet", and how hard and fast that restriction is. Sorting line items by category and sub-totalling within categories (plus a whole lot more) is exactly what pivot tables in MS Excel do for you. If you want the fun and challenge of rolling your own code then that is fine - just as long as you realise that you are doing a 5 mile run when you could take a taxi instead. Gandalf61 (talk) 15:12, 3 November 2010 (UTC)
- Exaggeration. Totaling a column in an array takes a line of code. 92.24.178.95 (talk) 17:33, 3 November 2010 (UTC)
- That depends entirely on the programming language and functions previously written. Given enough functions written, I could do this entire project in "one line of code" with: do_all_the_work();. -- kainaw™ 18:24, 3 November 2010 (UTC)
- It takes one line of code in Basic. 92.24.178.95 (talk) 18:27, 3 November 2010 (UTC)
- Sure, totalling values in an array is easy. But how does the user get the data into the array in the first place ? And how are the results presented back to the user ? And how and where is the data in the array stored when the program is not in use ? And how does the user change the data in the array ? Or what if they change their mind about how they want to analyse the data, or want to change the structure of the data ? Your "one line" program can't do any of that, whereas it all comes for free in a spreadsheet. Your are telling the guy running the marathon "it's easy - you just have to put one foot in front of the other". Gandalf61 (talk) 13:49, 4 November 2010 (UTC)
- Some of your questions are answered by reading the original question. The rest of them are simple programming. 92.28.250.172 (talk) 14:12, 4 November 2010 (UTC)
- Obviously all those problems can be solved - eventually - by "simple programming", otherwise spreadsheets would not exist. My point was that your facile "one line of code" response ignored all of the issues that make writing useful programs that will be used by real people in real-life situations incredibly challenging (or deeply frustrating, depending on your point of view). From your responses I get the impression that you have little or no experience of doing that. Gandalf61 (talk) 14:07, 5 November 2010 (UTC)
- I find it simple. Please do not troll. 92.29.112.206 (talk) 19:43, 5 November 2010 (UTC)
- Is there a need to change? At best you would be replacing array[n][5] (where the 5th element is price or whatever) with array[n].price (and using a for-next loop over n); even something old like Pascal (e.g. FreePascal) can do this using 'records', e.g. http://www.hkbu.edu.hk/~bba_ism/ISM2110/pas048.htm (fully object-oriented languages can do more, but in the example you gave that would probably be of little use)
- FreeBASIC can do the same type of thing... there will be other examples of languages not far removed from the Basic you're using. It's not really any simpler, and you have to learn the new syntax etc. So... Sf5xeplus (talk) 19:53, 3 November 2010 (UTC)
Exactly what I was thinking, Sf5xeplus; the struct or class things seem difficult and, unless there is a function for adding them all up, not any easier. 92.28.241.78 (talk) 20:21, 3 November 2010 (UTC)
The article I forgot to link is Record (computer science) - just about every language has a version, including newer BASICs; in general they're of good use when you've got strings and numbers associated with the same item. Easier to read (in principle) but not necessarily simpler or easier when only writing small programs. Sf5xeplus (talk) 22:25, 3 November 2010 (UTC)
Story Management software
I'm looking for some software to manage stories (for a load of short stories),
so for example, if I make a story with the title Example I can't make another one with the same title, as it's already taken (getting rid of duplicates).
It can be either offline (on my computer) or online (a bit like a wiki), and for txt and/or doc and/or PDF; I don't mind which.
thanks in advance :D
Sophie (Talk) 16:34, 3 November 2010 (UTC)
- I looked up "story management software" on Google and this is freeware that got a favourable review: http://www.spacejock.com/yWriter5.html I don't know if its what you are looking for. Otherwise, have you considered saving Example followed in the title by the date or time, eg "Example 1 1 10"? 92.24.178.95 (talk) 18:25, 3 November 2010 (UTC)
- Nice idea for the date, but say it goes like this:
- * Title: Hello there 1/nov/10. Content: Hello and welcome to Wikipedia...
- * Title: My userpage 2/nov/10. Content: This is my user page...
- * Title: Hello there 3/nov/10. Content: Hello and welcome to Wikipedia...
- Titles 1 and 3 are the same :(
- The software is cool :) but it's more for long stories rather than loads of small ones
- Still good though :) Sophie (Talk) 19:46, 3 November 2010 (UTC)
- Online, Google Docs lets you have files of the same name; it stores them by date (last modified).
- Offline, I don't know of an example. Sf5xeplus (talk) 19:37, 3 November 2010 (UTC)
- thanks ill take a look :) Sophie (Talk) 19:46, 3 November 2010 (UTC)
- I'm very confused. Do you want a document management system, or a word processor? You can save files with any filename you want; and you can use a document management system (or even just a Word Document with hyperlinks) to reference each document by title. For example, you can create a "Table of Contents" document in Word, and insert hyperlinks to each other story file. The text ("title") can be anything you want - you can use the same title for two different links to different story files. Nimur (talk) 19:55, 3 November 2010 (UTC)
- I know, it's hard to explain :( So let's say it's a wiki. I create the page Example and put the story there; so now I can no longer make another page with the title Example, because it's already there. So it's more like a doc management system. Does that make a bit more sense? Sophie (Talk) 20:16, 3 November 2010 (UTC)
- I'm unclear: do you want something that a) prevents you from having more than one file with the same title, or b) allows you to have more than one file with the same title, or c) deletes files that have the same content? If it is c), then the freeware Duplicate Cleaner would do that. 92.28.241.78 (talk) 20:36, 3 November 2010 (UTC)
- I also think that Sophie is unnecessarily equating "title" with "filename." Filenames must be unique (a different file name is required for each file). But you can have the same title for as many files as you like: the question is, how do you define a title? Well, that depends on your file-type. A plain text file can have a line that says Title: _____; or a Word Document actually can save a title as metadata. If you want an index of documents by title (instead of by file name), Windows Explorer can do this, or other file managers; or you can use a hyperlink to point to each file. Nimur (talk) 21:20, 3 November 2010 (UTC)
- Sophie could also choose to save many files named Example if he or she placed each of them in a differently named directory. Comet Tuttle (talk) 22:44, 3 November 2010 (UTC)
@92.28.241.78 - a combo between A and C - Sophie (Talk) 18:21, 4 November 2010 (UTC)
- Any text editor or wordprocessor would do a) by default as far as I am aware. For c), get Duplicate Cleaner to scan the directory your files are in. 92.15.10.141 (talk) 12:49, 5 November 2010 (UTC)
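If running a small script is an option, a rough sketch of (a) plus (c) could look like the following; the folder name and the convention that the first line of each file is its title are assumptions, and it only handles plain-text files.

import hashlib
from pathlib import Path

seen_titles = {}
seen_hashes = {}
for path in Path("stories").glob("*.txt"):
    text = path.read_text(encoding="utf-8", errors="replace")
    title = text.splitlines()[0].strip() if text else path.stem   # first line as the title
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()        # fingerprint of the content
    if title in seen_titles:
        print(f"duplicate title {title!r}: {path} and {seen_titles[title]}")
    else:
        seen_titles[title] = path
    if digest in seen_hashes:
        print(f"identical content: {path} and {seen_hashes[digest]}")
    else:
        seen_hashes[digest] = path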
question
How could I automatically archive web pages I visit —Preceding unsigned comment added by 140.121.130.67 (talk) 18:04, 3 November 2010 (UTC)
- On Wikipedia? You could add them to your watchlist. For web browsing, they are in your history, but I'm not sure about whole pages. Sorry. Sophie (Talk) 19:39, 3 November 2010 (UTC)
- You might find the Internet Wayback Machine useful for looking at older versions of websites - that has archives going back to the early 1990s in some cases. Chevymontecarlo 21:21, 3 November 2010 (UTC)
- Some pages might not be available if the site has a robots.txt in the root directory - Sophie (Talk) 18:23, 4 November 2010 (UTC)
- The Scrapbook plug-in for Firefox will allow you to archive web pages locally with about as much effort as it takes to bookmark them, but it is not strictly automatic. --Mr.98 (talk) 21:36, 3 November 2010 (UTC)
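None of this is fully automatic, but a small helper that saves a dated copy of any URL you feed it is only a few lines of Python; the folder name and filename scheme below are arbitrary choices, and it only grabs the HTML itself, not images or stylesheets.

import urllib.request
from datetime import datetime
from pathlib import Path

def archive(url: str, folder: str = "archive") -> Path:
    Path(folder).mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    name = url.replace("://", "_").replace("/", "_")   # crude but filesystem-safe name
    dest = Path(folder) / f"{stamp}_{name}.html"
    with urllib.request.urlopen(url) as resp:
        dest.write_bytes(resp.read())
    return dest

print(archive("http://example.com/"))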
Subnetting
The network 131.217.0.0 has been split into subnets using the subnet mask 255.255.255.192.
Find the number of bits that have been borrowed from the host field, the number of usable subnets, and the number of usable addresses per subnet.
Hence, show the range of usable IP addresses by stating the first and last usable host addresses for the first and last subnets.
My conclusion is that this is a class B network and hence the number of bits borrowed is 10. The number of usable subnets is therefore 1022 and the number of usable addresses per subnet is 62. How do you do the last part of the question? 115.178.29.142 (talk) 23:10, 3 November 2010 (UTC)
- The last two numbers of the IP address are 0.0. That is what can be split up. The network mask on the last two numbers is 255.192, or 11111111.11000000. So, 10 bits are for the subnet and 6 bits are for the host. For the first subnet, the 10 binary digits will be 0000000000. For the last subnet, the 10 binary digits will be 1111111111. For the first host, the 6 binary digits will be 000000. For the last host, the 6 binary digits will be 111111. So, for the first subnet, you have 131.217.00000000.00000000 to 131.217.00000000.00111111 (yes, I mixed decimal and binary). In all decimal, that is 131.217.0.0 to 131.217.0.63. The last subnet will replace the 10 zeros for the subnet with 10 ones. -- kainaw™ 23:44, 3 November 2010 (UTC)
- You need to consider, though, that the subnet broadcast address is not usable as a host's IP. PleaseStand (talk) 23:51, 3 November 2010 (UTC)
Here's a visual. In binary, 131.217.0.0 is (converting each octet into binary, putting them together in order, and breaking it up into "bitfields"):
10000011110110010000000000000000
|--------------||--------||----|
    original     subnet #  host
For the first address, you want to set the subnet and host bitfields to binary one (assuming that subnet zero is not usable in this problem). For the last address, you want to set the subnet and host bitfields to all ones except the last bit in each, which should be a zero (the last address in a subnet is the broadcast address and is not available for assignment). Converting back to standard IP address notation will give you your final answer. PleaseStand (talk) 23:51, 3 November 2010 (UTC)
- And for the last address in the first subnet and the first address in the last subnet, it should be then clear enough what to do. PleaseStand (talk) 23:56, 3 November 2010 (UTC)
- I was purposely ignoring IP rules to avoid confusion, but I now see that the question specifically states "usable IP addresses". -- kainaw™ 23:55, 3 November 2010 (UTC)
Unless this is a class on computer history, you (and your teacher) should probably know that the question is horribly out of date. Classful_networks were used on the internet from 1981 to 1993. These days terms such as "Class B network" only have meaning in a historical context. 130.188.8.12 (talk) 13:37, 4 November 2010 (UTC)
- Yes, using network "classes" was a big waste of IP addresses that should have been ended much sooner. In the current CIDR notation, your original network is 131.217.0.0/16 and the subnets you have broken it into are /26 networks (there are twenty-six bits before the host number part). PleaseStand (talk) 21:06, 4 November 2010 (UTC)
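As a cross-check on the arithmetic, Python's standard ipaddress module will enumerate the subnets directly. Note that it follows CIDR rules, so it reports all 1024 /26 subnets rather than the 1022 "usable" ones of the old classful convention, but hosts() does skip the network and broadcast addresses.

import ipaddress

net = ipaddress.ip_network("131.217.0.0/16")
subnets = list(net.subnets(new_prefix=26))     # split the /16 into /26 networks
print(len(subnets))                            # 1024 subnets of 64 addresses each
for s in (subnets[0], subnets[-1]):            # first and last subnet
    hosts = list(s.hosts())                    # usable addresses only
    print(s, hosts[0], hosts[-1])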
November 4
Newsgroup / computer newbie type question
Having recently joined a couple of newsgroups (one is alt.usage.english) I have been happily posting. I type in MS Word, and then paste it into the newsgroup. What I would like to do is set up a blank MS Word document template so that it has exactly the proper margins, left and right, so I know that it is WYSIWYG. That is, I know that my MS Word writing will appear just like that in the newsgroup, so I can make it look perfect. I don't know where else to ask for help, as Google have discontinued their help desk forum. Any idea on how to do that?
Also while I’m here, I’m told that Google does not have the capacity for a sig file on newsgroup contributions. I don’t know why not, but I would like one on my posts. Any idea on how that could be done? Myles325a (talk) 04:23, 4 November 2010 (UTC)
- Unless there has been a mighty change in how usenet works, newsgroups do not have any of the formatting you are concerned about. It is plain text. Google performs formatting based on common trends in newsgroups. For example, if I type _this_, Google will make it look like this. Margins do not exist. You are just seeing the text in the width that your browser allows. But, ignoring that this is based on a misconception... you want a document template. Open Word. Get the page set up just like you like. Save it as a template. In the save window, you will see types of files to save. Choose template. Then, next time you want to work, open the template and it will have all your formatting in place. -- kainaw™ 12:05, 4 November 2010 (UTC)
- You can send encoded text, e.g. HTML. However it's considered very bad form in the vast majority of newsgroups (well I don't know any that allow it but I always say it's a bad idea to use absolutes). Nil Einne (talk) 20:08, 4 November 2010 (UTC)
Building a Storage Area Network Head - Operating System
Hello Everyone,
I am exploring the option of building my own SAN head. I'm aware that a SAN head consists of a motherboard, processor(s), memory, a RAID controller, and a load of HDDs, but I want to know what software a SAN head runs. I assume that it must have an operating system which handles the management and assignment of LUNs to multiple hosts, and I want to know what software provides these functions (and whether there are any open-source SAN operating systems available)?
Thanks as always. Rocketshiporion♫ 05:43, 4 November 2010 (UTC)
- You can make a network accessible file system using Linux and NFS, FUSE/SSH, or SAMBA. Technically, that is a file server. Network-attached storage, storage area network, and file server differ in the "layer" that they provide. You'll have a hard time building a SAN out of commodity hardware and free software; and you probably won't reap the cost/performance benefits unless you are scaling to truly enormous enterprise sizes. As CPU costs decrease and performance becomes trivial, file servers are probably your best bet - let the remote system handle the file system details and provide the attached storage space at a high-level, as a network drive. Regarding specific software: you can run a NAS or file server using any Linux; BSD is popular; and FreeNAS is basically a pre-configured FreeBSD installation with fewer general-purpose features. The best choice ultimately depends on your needs. Nimur (talk) 06:12, 4 November 2010 (UTC)
- You can build an open source SAN using technologies such as Logical Volume Manager (Linux) (to create and manage the LUNs) and a block device exporter (eg vblade or an iSCSI-target). Googling 'linux san' will give you various guides and distributions on how to set such things up. Whether you would want a SAN (exporting block devices) or a NAS (exporting file systems) depends a lot on your requirements. Unilynx (talk) 06:48, 4 November 2010 (UTC)
The specific purpose is for four diskless computers to each be connected via iSCSI to its boot volume. AFAIK, it's not possible for computers to boot from a NAS or file server. The four diskless computers will also be clustered, and be connected to a shared LUN (which is also to be stored on the SAN). As for cost, all I really need (other than the operating-system) in terms of commodity hardware are; a motherboard, a processor, some RAM, a quad-port Ethernet card and a few SATA HDDs; which I estimate can be obtained for under $2,000. Rocketshiporion♫ 13:05, 4 November 2010 (UTC)
BitTorrent
I was reading the article on it, including the part where it mentions that computers on the network that have full versions of the file are called seeders. That there's a special name for computers that have full versions implies that there are other computers that have pieces which would seem to be useless to them. Is having a bunch of pieces which are of no use to you (in addition to whatever full files for which you are a seeder) just the cost of being a reputable member of a swarm? 20.137.18.50 (talk) 13:35, 4 November 2010 (UTC)
- A computer with only some pieces of the file can share those pieces with others, and vice versa. So even if there are no seeders at all, the various pieces across the swarm can account for the entire file and thus the entire file can still be downloaded. This is good because it means seeders aren't the only source for the file 82.44.55.25 (talk) 13:53, 4 November 2010 (UTC)
- A main benefit of bittorrent is that while you are downloading a file, you can share the parts of the file you already downloaded with other people who are downloading it. Often, more people are downloading a file than there are seeders. So, most people only have parts of the file. As a courtesy, it is common to keep connected after you download a file, becoming a seeder. You no longer need any parts of the file, but you share what you've got for a while just to help others. -- kainaw™ 13:56, 4 November 2010 (UTC)
- Nobody has files which are "useless" to them, because they are, presumably, interested in eventually downloading the entire file. So just because I have the last 25% of the file in question and not the first 25%, doesn't mean that the last 25% is useless to me (even if I can't "use" it for anything at this point), because presumably I'm hoping to get the entire file, and the last 25% puts me that much closer to that goal. Remember that people aren't just hosting because they are being generous — the entire point of a torrent is to distribute the file on all of the computers participating, including those who are participating in distributing it.
- Seeders are special because they have the entire file yet are still distributing - thus they are being somewhat altruistic, because at that point there is no personal gain in distributing the file (because the only "gain" in distributing is that you get a copy of it yourself). They aren't required, but they greatly help make sure there are redundant copies of the entire file on the network, which speeds things up (because maybe otherwise the only fellow who has one part has a very slow internet connection, thus introducing a bottleneck until others get that part... without seeders, it is not uncommon to see torrents "stuck" at 99%, never quite able to find that last 1%). --Mr.98 (talk) 14:06, 4 November 2010 (UTC)
- I think there's a basic misunderstanding here, but I'm not sure where. When you join a swarm you have none of the file. While you're downloading you'll be accumulating pieces of the file. The pieces are not, on their own, useful to you because you want the entire file, but you can redistribute those pieces to people who don't have them yet. (Everyone downloads the pieces in a different order, so even if you've only downloaded one piece, you can still help people who are halfway finished.) Eventually, if you keep downloading, you'll have the entire, complete file. (You'll have all the pieces.) You are now a "seeder", and ideally you'll continue distributing pieces to people that need them, though that's not strictly necessary.
- Unless there's a glitch, at no point will you download a 'piece' that you don't personally need to complete a file that you personally are trying to download. APL (talk) 15:17, 4 November 2010 (UTC)
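The piece-trading point above is easy to demonstrate with a toy Python simulation: three peers, no seeder at any time, yet between them the swarm holds every piece, so everyone eventually completes the file. The piece counts and peer names are made up purely for illustration.

import random

NUM_PIECES = 8
peers = {                      # which pieces each peer starts with; nobody has them all
    "A": {0, 1, 2},
    "B": {3, 4, 5},
    "C": {5, 6, 7},
}

def swarm_complete():
    return all(len(have) == NUM_PIECES for have in peers.values())

while not swarm_complete():
    for name, have in peers.items():
        available = set().union(*(p for n, p in peers.items() if n != name))
        wanted = available - have
        if wanted:
            have.add(random.choice(sorted(wanted)))   # grab one missing piece per round

print(peers)   # every peer ends with all 8 pieces, without a seeder ever existing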
does exist any c syntax that is forbidden in c++?
t.i.a. --217.194.34.103 (talk) 14:19, 4 November 2010 (UTC)
int new = 0;
and many others. --Stephan Schulz (talk) 14:22, 4 November 2010 (UTC)
- To clarify, C++ reserves some extra keywords. In C they would be valid as variable names or function names, but in C++ they are reserved for the use of the language. Here's a list of C++ keywords not reserved in C:
asm         dynamic_cast   namespace   reinterpret_cast   try
bool        explicit       new         static_cast        typeid
catch       false          operator    template           typename
class       friend         private     this               using
const_cast  inline         public      throw              virtual
delete      mutable        protected   true               wchar_t
- APL (talk) 15:09, 4 November 2010 (UTC)
- And this is valid C89, but not C++
int test()
{
    int i;
    i=1 //**/1;    /* C89 reads this as i = 1/1; C99 and C++ treat everything after // as a comment */
    return i;
}
- The trick requires // not to be a comment marker, which is true for C89/C90 and older. In C99 (and C++), // starts a comment, so the /1 and the terminating semicolon are swallowed and the function no longer compiles.
- Also, in C,
- The return type of a function can be left unspecified in C89 (it defaults to int); in C++ it must be specified
- In C an empty parameter list () leaves the parameters unspecified (K&R style), so (void) is used to declare a function that takes none; in C++ an empty list already means no parameters
- K&R-style parameter declarations are allowed in C (but discouraged); C++ does not accept them
- The parameters to main can be whatever you want - in C++ main must be one of
- int main()
- int main(int, char**)
- int main(int, char**, /* any other params */)
- In C void* can be implicitly cast to any pointer type; C++ needs an explicit cast. —Preceding unsigned comment added by Csmiller (talk • contribs) 15:27, 4 November 2010 (UTC)
- CS Miller (talk) 15:24, 4 November 2010 (UTC)
- See Constructs valid in C but not C++. --Sean 18:12, 4 November 2010 (UTC)
- The article linked to by Sean is a very, very good starting point. For more differences, see Annex C of the C++ standard (ISO/IEC 14882:2003). For example:
int main(void) { return main(); /* valid (but useless) C, invalid C++ */ }
Regards, decltype (talk) 05:20, 5 November 2010 (UTC)
hard drive
Can I leave my external hard drive on 24/7, or should I turn it off when not in use? My computer's internal drive is on 24/7; is there a difference between internal and external? —Preceding unsigned comment added by 91.193.69.210 (talk) 14:51, 4 November 2010 (UTC)
- Most computers will put a drive to sleep if it is inactive for a period of time, so leaving the drive powered up is not itself a problem. Any drive will wear out over time (so always make backups) but for the most part drives will outlast computers, unless you keep your computer a long time or make heavy, heavy use of the drive (constant read-write action). --Ludwigs2 15:25, 4 November 2010 (UTC)
- [citation needed] on the claim that "for the most part drives will outlast computers". Comet Tuttle (talk) 16:28, 4 November 2010 (UTC)
- I'd read this as "computers get replaced before hard drives fail", not as "usually other components fail before the hard drive". But yes, a source for either interpretation would be nice. --Stephan Schulz (talk) 16:32, 4 November 2010 (UTC)
- Depending on how you look at it, the MTBF of a hard disk (and all the other components) is less than the average time a computer is in primary use thanks to the software lifecycle. Greenpeace quotes a figure of 2 years for the average PC lifespan in developed countries, but this is not cited. Regardless, it's obvious by the tonnage of computers that show up in the landfill and the number that fly off shelves to replace them that the used life isn't that long. I would be shocked if it were more than 5 years in the US; most HDD warranties are as long. --Jmeden2000 (talk) 17:56, 4 November 2010 (UTC)
- Again, please provide citations. This is a reference desk. Simply discussing warranties and the MTBF claims (which are notoriously exaggerated by the manufacturers) isn't evidence of anything. Comet Tuttle (talk) 18:48, 4 November 2010 (UTC)
- But ah, we are not writing a scholarly article; we are trying to come up with a useful answer to the question at hand. No offense, but his question is no closer to being answered as a result of putting [citation needed] next to every response. --Jmeden2000 (talk) 13:40, 5 November 2010 (UTC)
- CT, relax. the point is that a hard disk is not going to unduly suffer by being powered on continuously, unless it is also subjected to extreme conditions (high levels of disk read/writes, excessive temperatures, etc). A hard disk is obviously more likely to fail than the computer itself (by virtue of moving parts), but hard disks are constantly increasing their lifespan through technological improvement, and the average consumer replaces his computer at regular intervals. The OP should not worry about it beyond the normal caution to maintain regular backups. --Ludwigs2 21:50, 4 November 2010 (UTC)
- "but hard disks are constantly increasing their lifespan" - well, that's arguable. We have a batch in a large recording system over here where 20% per year are failing...big-brand, heavy duty server class drives at that. Hard drives are optimized for capacity, speed, price, and reliability, and I would expect priorities to be roughly in that order. --Stephan Schulz (talk) 21:59, 4 November 2010 (UTC)
- All of the computers that have failed on me (2 of them) did so because of a failed hard drive (well, I had a CD drive fail, but I still used the computer for a bit). There have been many more computers that I've gotten rid of before anything fails, but I would say that a hard drive, with mechanical moving parts, is one of the most likely things to fail on a computer. Interestingly this site says that power supply issues are the number one way to fry your computer. I've yet to have a power supply problem. Buddy431 (talk) 01:22, 5 November 2010 (UTC)
- Here's a highly non-scientific poll of failing parts; storage drives have the plurality. Buddy431 (talk) 01:24, 5 November 2010 (UTC)
polynomial shift
I have the coefficients of a polynomial of one variable. I want the coefficients of the equivalent polynomial in terms of another variable (u=t+constant); in other words, the function that defines the same curve with a shifted origin. Before I reinvent the square wheel, is there a more efficient way than expanding and adding up the results? (Or better yet a numpy library function?) I'd search but I don't know the appropriate keyword. —Tamfang (talk) 17:11, 4 November 2010 (UTC)
- You've got a function
f(x) = a_0 + a_1*x + a_2*x^2 + a_3*x^3 + ...
- and you want to know the b's in
f(x) = b_0 + b_1*(x-c) + b_2*(x-c)^2 + b_3*(x-c)^3 + ...
- then Taylor series will work (in many cases, in particular for polynomials) and gives you bn's fairly trivially, since
b_n = (1/n!) * d^n[f(x)]/dx^n evaluated at x = c
- Is that what you want? It's easier to implement (and quicker, I think) than the expansion (especially when it's a long polynomial) - ask if you want pseudocode
- (apologies if that isn't what you are asking, I'm a bit sleepy). I've given you two functions that give the same value for a given x? (on second thoughts it may not be any better than the binomial expansion, maybe, maybe not) 94.72.205.11 (talk) 20:35, 4 November 2010 (UTC)
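- Since pseudocode was offered above, here is a minimal Python/numpy sketch of that Taylor-shift idea (to my knowledge numpy has no single built-in call for this; the function name and the np.polyval-style coefficient order are just choices made for illustration):

import numpy as np

def shift_poly(coeffs, c):
    """Given coefficients in np.polyval order (highest degree first),
    return b such that f(x) == sum(b[n] * (x - c)**n for n in range(len(b))).
    For the questioner's u = t + k, use c = -k."""
    p = np.asarray(coeffs, dtype=float)
    b = []
    factorial = 1.0
    for n in range(len(coeffs)):
        b.append(np.polyval(p, c) / factorial)            # b_n = f^(n)(c) / n!
        p = np.polyder(p) if len(p) > 1 else np.zeros(1)  # next derivative
        factorial *= n + 1
    return b

# Example: x**2 expressed in powers of (x - 1) is 1 + 2*(x - 1) + (x - 1)**2
print(shift_poly([1, 0, 0], 1.0))   # -> [1.0, 2.0, 1.0]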
- Treating it as a Taylor series makes eminent sense; it may not be speediest but it's easy on the programmer, since I already have a "deriv" routine. Thanks. —Tamfang (talk) 19:45, 5 November 2010 (UTC)
Windows command prompt FOR loop
First, I am really sorry for clearing all previous information by accident. I don't know how it happened. Second, does anybody have a solution for this problem? If I use the FOR loop at the Windows command prompt as in this example:
for /l %i in (1,1,10) do (@set x=%i
@echo %x% , %i)
The result is:
)
10 , 1
C:\>(
)
10 , 2
.
.
.
)
10 , 10
Why does x always take the value of 10 rather than i?--Email4mobile (talk) 17:54, 4 November 2010 (UTC)
- See this page for the solution. -- BenRG (talk) 04:46, 5 November 2010 (UTC)
- So all I had to do was just set the local variable at the beginning as in this line:
setlocal ENABLEDELAYEDEXPANSION
- Thank you very much.--Email4mobile (talk) 09:03, 5 November 2010 (UTC)
How has this image been generated?
How has this image been generated? I understand that the "crisp" version on the left is simply a screen grab, but I'm asking about the "blurry" version on the right. The "blur" is, after all, only present in the real, analog world, and the computer's memory is completely oblivious to it. So therefore a screen grab would produce an identical copy of the version on the left side, regardless of what the image was viewed on. Grabbing it via the real world instead of straight from the computer's memory, in other words by photographing it, would very likely not produce such a pixel-perfect copy with only the "blur" effect, because various real-world details would induce minute differences. Is the "blur" effect merely a simulation added afterwards, or what magic has been used? JIP | Talk 18:56, 4 November 2010 (UTC)
- I think you'd have to ask the person who made it to know for sure. I guess if someone took the left image, made the 3 colour layers into 3 separate layers (e.g. in gimp) and then moved one colour a pixel right, and another a pixel left, you'd get something like this. -- Finlay McWalter ☻ Talk 19:05, 4 November 2010 (UTC)
- No, it's a more complex effort. I verified it with xmag, and it's not as simple as shifting one RGB channel left and another right. I've asked the original creator, User:NewRisingSun, about it. Let's see if he responds. JIP | Talk 19:49, 4 November 2010 (UTC)
- The source code used to generate the image is on the image page. Does that help at all? --Tagishsimon (talk) 20:05, 4 November 2010 (UTC)
- No. The source code is only usable for direct screen grabs. It has no effect whatsoever on the image quality, because after all, the program itself is a mathematical abstraction, and as far as it is concerned, the image quality is always 100% perfect. JIP | Talk 20:27, 4 November 2010 (UTC)
- This is almost certainly a variant of what Finlay has suggested. Definitely generated by taking 3 slightly shifted (+/- 2 pixels) copies of the original image, possibly modifying them and superimposing them in some manner. So an attempt to digitally reverse engineer the imperfections of the analog world. 213.160.108.26 (talk) 23:34, 4 November 2010 (UTC)
- The image was probably generated with an emulator like DOSBox or MESS, or a program that uses similar algorithms to emulate CGA composite artifacts. Read on for a little bit about how such an algorithm might be created.
- To put the image in context for other responders here, it illustrates the color artifacts seen when the CGA adaptor is connected to an NTSC composite monitor, see Color Graphics Adapter - Special effects on composite color monitors. I myself was fascinated by the palette images shown a little lower in that section, and wondered how the patterns generated colors on the composite monitor.
- Edit: Just to clarify, in the image shown above, the screen on the left is what you'd see on an RGB monitor. The screen on the right is what you'd see on an NTSC monitor. The CGA adaptor sends different signals to an RGB monitor than to an NTSC monitor, so it's slightly incorrect to think of them as a "perfect" screen shot and a "blurry" real-world simulation of the same image. The CGA adaptor is using the same video memory pixels to generate both signals, but the RGB signal is able to represent the color of each individual pixel, whereas the NTSC signal must use a color wave that ends up being four pixels wide. —Preceding unsigned comment added by Bavi H (talk • contribs) 02:06, 5 November 2010 (UTC)
- Part of the answer requires you to know about how the NTSC composite signal works, especially the color burst and color encoding. I can't find a good description of the NTSC signal to link to yet. You can find information about how the NTSC signal works online, but you might have to read several different documents to get a good understanding of it.
- The other part of the answer is knowing what signals the CGA adaptor sends on its composite output. Go to CRTC emulation for MESS and scroll down to the section "Composite output" for details about that.
- Basically, for each CGA color, the CGA adaptor composite output generates a square wave shifted by a certain amount with respect to the color burst. In the 640×200 mode, the color wave is four pixels wide, so you can actually use groups of four black-or-white pixels to make your own color wave. If you shift the pattern by one pixel, you'll get a different hue. Here's an example I made with QBASIC running in DOSBox to help understand the order to the black and white patterns: cga-composite.png
- In the 320×200 modes the color wave is two pixels wide, because the pixels are twice as wide. By carefully calculating where the CGA composite output color wave for each color is sliced and combined, then decoding the wave as an NTSC color signal, you can predict the resulting color on the composite monitor screen.
- After studying this, I began to see how the colors are predicted, but didn't go far enough into the math to understand it all. (For example, I don't yet understand how a square wave is "seen" as a sinusoidal wave of NTSC color signal. Edit: Or how partial or irregular color waves become color fringes like those in the text-mode image above.)
- While researching this I also found Colour Graphics Adapter Notes - Color Composite Mode which has a zip file with images captured from an actual CGA adaptor (captured using a TV card with an NTSC composite video input). It includes image captures of the same palettes simulated in the CGA article above. It also includes image captures of Flight Simulator, which is interesting to compare to the Flight Simulator images on the MESS video emulation page linked above. --Bavi H (talk) 01:18, 5 November 2010 (UTC)
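- If it helps to see the "square wave seen as a colour" idea concretely: below is a rough Python sketch of my own (not the algorithm DOSBox, MESS or the pages above actually use) that treats one subcarrier period of pixels as an idealised NTSC signal - the average of the pattern is the luma and the fundamental frequency component is the chroma vector, so shifting a 4-pixel pattern by one pixel rotates the hue by 90 degrees. The phase reference against the colour burst is an assumption here, not a measured value.

import math, cmath

def pattern_to_color(pattern, burst_phase=0.0):
    # pattern: one subcarrier period of pixel values (4 pixels in 640x200 mode),
    # each 0 (black) or 1 (white).  Idealised decoder: DC term = luma,
    # fundamental of the square wave = chroma (its angle, measured against an
    # assumed burst_phase, is the hue; its magnitude is the saturation).
    n = len(pattern)
    luma = sum(pattern) / n
    chroma = (2 / n) * sum(p * cmath.exp(-2j * math.pi * k / n)
                           for k, p in enumerate(pattern))
    saturation = abs(chroma)
    hue_degrees = math.degrees(cmath.phase(chroma) - burst_phase) % 360
    return luma, saturation, hue_degrees

# The same two-on/two-off pattern at four different shifts: same luma and
# saturation each time, with the hue stepping around by 90 degrees.
base = [1, 1, 0, 0]
for shift in range(4):
    pattern = base[shift:] + base[:shift]
    print(pattern, pattern_to_color(pattern))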
who's got money? How do I find an investor with vision?
I want to find an investor with vision, so that if I explain to them in a few words what I would like their money for, they will see that (or whether) it works. I don't want to waste my breath on people who wouldn't understand anyway! How do I find these people? If anyone here knows, they can also leave some contact information and I can ask them personally. Thank you! 84.153.205.142 (talk) 20:42, 4 November 2010 (UTC)
- It depends entirely on the quantity of money you seek and what you plan to do with it. You can start by investigating bank loans and credit card advances. These organizations will happily lend you large sums of money, at market interest-rates, for you to use for almost any purpose. Nimur (talk) 21:47, 4 November 2010 (UTC)
- Unfortunately, your request is unlikely to be handed to you like this, because human communication is more difficult than anyone thinks, and everyone has their own opinions about things like risk and how likely your idea is to succeed. Business owners' ability to raise money is a core requirement of being a business owner, for most businesses; and usually they have to pitch their idea and plan many, many times before an investor says "yes". Our article Angel investor has some links in the References section that may help you. Comet Tuttle (talk) 22:24, 4 November 2010 (UTC)
- With the exception of people who already have a track record of building successful businesses, most inventors and entrepreneurs have great difficulty getting an angel investor even to speak to them. If they do, they mostly want to see a working prototype or a business that's generating revenue (Dragon's Den, for all its faults, isn't a bad indication of what angel investors are looking for). Tony Fernandes, who started Air Asia after seeing an Easyjet ad, mortgaged his house to pay for it. No-one was interested in James Dyson's ballbarrow, despite his having many working prototypes, so he mortgaged his house too. It took Ron Hickman, inventor of the Black and Decker Workmate years, and apparently about 100 prototypes, before he persuaded someone to market it; and Hickman had an impeccable record of design and engineering management, as the chief engineer of Lotus Cars he had already designed the Lotus Elan and Lotus Europa. Paul Graham (whose investment company Y Combinator is roughly an angel investor) has a bunch of what he/they look for when investing in new enterprises here. Several VC books I've read come to roughly the same conclusions: they invest in people (that is, people who have a proven track record of making stuff and getting things done) and in working product; some are pretty blunt in thinking that if you're still working at BigCo and haven't quit to work on the project yourself, to your own obvious cost and risk, then you don't believe enough in the thing, so why would they. -- Finlay McWalter ☻ Talk 23:09, 4 November 2010 (UTC)
- Nice overview, Finlay McWalter! Almost the first question any investor will ask (or research) is "What do you have in it?" And the answer should include great amounts (by your personal standard) of time, money and experience. Bielle (talk) 23:21, 4 November 2010 (UTC)
- Also, as a minor correction, as mentioned in our articles Tony Fernandes didn't start Air Asia. He bought the then-failing airline for a nominal sum and turned it around into an extremely successful budget airline. Nil Einne (talk) 09:20, 8 November 2010 (UTC)
- The people I know who can drum up quick cash for projects use angel investors (as opposed to venture capitalists — the difference is the amount of money and control, angels being less in both, thus a bit easier to work with, they say). There are lots of sites that come up if you google "finding angel investors"; I've no experience in it (other than chats with friends who have made good on such things), so I can't tell what's good advice or not. I would note that exemplars (like those Findlay names) are not necessarily "normal" models — they are notable because they are rare cases for one reason or another. --Mr.98 (talk) 00:55, 5 November 2010 (UTC)
- No-one (except perhaps a close relative) is going to give you wads of money in return for a "few words". You need to have a well-researched business plan. 92.29.112.206 (talk) 19:48, 5 November 2010 (UTC)
- To add a quick note here and a twist on how difficult it is to get investment... and this is from personal experience. The people I went to with my partner didn't want to give us the money because they were concerned that if either of us were hit by a truck, the business would not have a driver and would suffer. So make sure you go with a solid business plan as well as a backup plan, even for yourself, because the investor wants to avert risk too. Sandman30s (talk) 07:11, 9 November 2010 (UTC)
November 5
Problem after installing new video driver
Hi Reference Desk, I recently installed the ATI Catalyst 10.10 drivers for my ATI Mobility Radeon HD 5450 on Win7 64 bit. Now, it seems to be limiting the number of display colours, and when things with colour gradients are visible, for example the background of Microsoft Word, it looks like a compressed JPEG screenshot and there are very visible steps between the colours. I've recalibrated the display, reinstalled the driver, reset to factory settings, fiddled with Windows colour settings, all to no avail. I was wondering if this was a known issue, and/or if anyone had a clue how to fix it?
Thanks 110.175.208.144 (talk) 06:30, 5 November 2010 (UTC)
- At a rough guess, you might have been reset to basic colour. Press F1 for help, type Aero and open the Aero troubleshooter and follow the prompts to get Aero back... this might also fix your colour problems and get the 24/32 bit colour gradients back. The troubleshooters in Win7 are surprisingly good and can make some low-level changes when they have to. Worst case scenario - you can go into Device Manager and roll back the drivers. Sandman30s (talk) 09:12, 5 November 2010 (UTC)
- That didn't help, but, knowing that the Aero troubleshooter did nothing, eliminating the possibility of a DWM problem, I thought about what other things manage the visuals of the computer, and I thought of the Themes service. I restarted that... and voila! Colours :) But now, I don't know why I had to manually restart the Themes service, and why the problem did not get fixed on a reboot previously :/ Thanks for your help! 110.175.208.144 (talk) 23:52, 5 November 2010 (UTC)
Automatic form filling application required
Job applications and other bureaucratic documents take too long to fill in neatly. Is there any application (preferably free, with source code in VB6 or VC++6) that I could either use directly or modify to (1) recognise the various rows, columns and common questions that need filling in, by both text and graphics recognition, with a manual mail merge type option if this fails, and (2) using a database, fill in the form in all the right places. Unlike mail merge in MS Word, it would have to fill in lots of separate sections instead of just one (the address) and of course the size would be standard A4 - I can't get this size using MS Word, or at least my version, which is a few years old. —Preceding unsigned comment added by 80.1.80.5 (talk) 13:19, 5 November 2010 (UTC)
- No. The field labels in forms are often ambiguous. So, a computer will need to understand the purpose of the form to attempt to understand the meaning of the field label. Since computers cannot think, they cannot fill out forms. At best, a computer can assume "Name" means your full name and "DOB" means your date of birth. But, it wouldn't understand what to do if "Name" was the name of your dog or "DOB" is the date on the back of your passport. In the end, you (the human) must read every label and decide what to put in every field. -- kainaw™ 14:22, 5 November 2010 (UTC)
- Yes, with caveats. See http://www.laserapp.com.
- DaHorsesMouth (talk) 22:25, 5 November 2010 (UTC)
"number of cores" on 15" Macbook Pros?
Hi,
It is not clear to me, there are three configurations of Macbook Pro:
1) 2.4 Ghz i5 2) 2.53 Ghz i5 3) 2.66 Ghz i7 (with 2.28 Ghz)
What is the real performance difference? Are the first two both dual-core? Only the third one says "which features two processor cores on a single chip"??
Plus, as an added point of confusion, i7 has hyperthreading enabled, doesn't it? So, is it the case that option 1 and 2 are two cores shown to the OS as such whereas option 3 is two cores shown as four cores to the OS?
Or, is it exactly half of what I just said? Thanks! 84.153.205.142 (talk) 15:01, 5 November 2010 (UTC)
- based on the Intel page the i5 processors are either 2 or 4 core processors (depending on model - the Apple specs are not precise, though the 'features' page at Apple says dual core).
- cores are cores, they are not 'shown to the OS'. software needs to be written to take advantage of multiple cores, but for those apps that are, you will see moderate increases in performance with the higher-end chips. This will be noticeable in casual use (apps opening slightly faster, long processes finishing slightly sooner), very noticeable in processor intensive tasks, and may increase the practical longevity of the machine itself (in the sense that future improvements in hardware, and the consequent revisions to software, won't leave the machine in the dust quite as soon). --Ludwigs2 15:35, 5 November 2010 (UTC)
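- To make the "software needs to be written for it" point concrete, here is a tiny Python sketch (Python purely as an illustration; the same idea applies in any language): the work only spreads across cores because it is explicitly split into chunks and handed to a pool of worker processes. Note also that the count the OS reports is usually logical processors, so a dual-core chip with hyperthreading typically shows up as 4.

from multiprocessing import Pool, cpu_count

def busy_work(n):
    # deliberately CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    print("logical CPUs reported by the OS:", cpu_count())  # e.g. 4 on a 2-core chip with hyperthreading
    with Pool() as pool:                            # defaults to one worker per logical CPU
        results = pool.map(busy_work, [10**6] * 8)  # 8 independent chunks, run in parallel
    print(len(results), "chunks finished")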
- Wandering around the apple website confirms that all three chips are dual core. However this http://www.macworld.com/article/150589/2010/04/corei5i7_mbp.html gives the chip part numbers: i5 520M, i5 540M, and i7 620M; assuming that wasn't speculative and is true, then all three have two cores, with hyperthreading, meaning a total of 4 threads (or "4 virtual cores"). Finding out more info on these chips is trivial - just use search and the first result probably takes you to the intel page eg http://ark.intel.com/Product.aspx?id=47341 . The article MacBook Pro has the same info.
- The second i5 is (as far as I can tell) just a faster clock. The differences between the i7 and i5 include a larger cache in the i7, but I wouldn't be surprised if there are other architectural differences. Actually, looking at generic benchmarks between these seems to suggest that the i7 part is no different from an i5 with a bigger cache and higher clock, but that's speculation. 94.72.205.11 (talk) 16:36, 5 November 2010 (UTC)
- Probably not my place to say, but looking at UK prices [1] it really looks like the additional cost of the 15" models with the better processors is way, way beyond the underlying difference in processor price, and on the borderline (or above) of what is worth paying for - the base model is easily good enough for 90+% of people. You can easily find 'real world' comparisons by searching for "apple macbook pro benchmarks i5 i7". 94.72.205.11 (talk) 16:50, 5 November 2010 (UTC)
- Thank you: when you say that "software needs to be [specially] written to take advantage of multiple cores", are you just talking about hyperthreading, meaning that if I were just running a SINGLE processor-intensive application, it needs to be written in that way? Or, are you talking about something more general? Because don't two concurrently running different applications AUTOMATICALLY get put on their own core by the OS? In that sense, my reasoning is informed by the "dual-processor" behavior I had learned of a few years ago. In fact, isn't a dual-core, logically, just 2 processors? Or, is there a substantial difference as seen by applications and the OS between a dual-core processor, and two processors each of which is exactly the same as a single core? (I don't mean difference in whether they access as much separate level-1 and level-2 cache, I mean as seen by the OS and applications). If there is a substantial difference, what is that difference?
- I guess my knowledge is not really up to date and what I would really like to know is the difference between multiple CPU's (dual and quad CPU machines) of yesteryear power desktops, and multiple cores of today? Thank you! 84.153.205.142 (talk) 16:48, 5 November 2010 (UTC)
- (replying to question for Ludwigs) There hasn't been any change of definition (one possible source of confusion is that sometimes a processor can have two separate physical chips within it, or one chip with two processors on it - both are, as far as end results are concerned, the same...)
- As per multiple processes on multiple cores or threads - yes you are right - the only time there isn't any advantage is when you run a single (non threaded) program on a multicore machine. (but there are still a lot of examples of this)
- OS's can handle multiple threaded processors in just the same way they can handle multiple core processors - ie an OS will act like it's got 4 processors on a 2 core hyperthreaded machine , no further interaction required.94.72.205.11 (talk) 16:54, 5 November 2010 (UTC)
- If I understand it correctly, the dual-core advantage is that a multi-threaded app that's designed to take advantage of it can toss different threads onto different cores, making the handling of some processor-intensive tasks more efficient. Apps need to be designed for it because there are some technical issues involved in choosing which core to send a thread to and how to handle data that's being processed on different cores. Basically it's the difference between an office with one photocopier and an office with two photocopiers - you can get a lot of advantages from sending different jobs to each photocopier if you plan it out, but the 'old guy' in the office is just going to chug away on one photocopier mindlessly. most apps made in the last few years support it - you're only going to lose the performance advantage if you have (say) an old version of some high-powered app that you're continuing to use to save buying an upgrade. I don't know enough about hyper-threading to know whether that also requires specially-coded apps or whether it's transparent on the app level. --Ludwigs2 17:12, 5 November 2010 (UTC)
- Key term here is Processor affinity which mentions the type of problem you describe. (Or the analogy where we have 4 photocopiers, two in each of two rooms... and I prefer to use the two in the same room to prevent walking up and down the stairs. That's an analogy of a dual core hyperthreading processor - total 4 threads.) Programming tools such as OpenMP#Thread_affinity can set it; whether OS's can detect and set thread affinity without being told is something I don't know. 94.72.205.11 (talk) 17:22, 5 November 2010 (UTC)
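- For what it's worth, a process can also query and pin its own affinity at runtime; here is a Linux-only Python illustration (the CPU numbers are arbitrary, and Windows needs a different API for the same thing):

import os

print(os.sched_getaffinity(0))    # logical CPUs this process may run on, e.g. {0, 1, 2, 3}
os.sched_setaffinity(0, {0, 1})   # pin the process to the first two logical CPUs
print(os.sched_getaffinity(0))    # now {0, 1}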
- (reply to OP) As an example of the difference between yesteryear and today - an old quad core mac eg [2] used 2 dual core chips, whereas a modern quad core mac has a single chip with 4 cores on it. Ignoring that they've changed from IBM's POWER chip family to intel's x86/64 family, the only difference I can think of is that today's multicore chips (4 or more) have L3 cache, whereas the old ones tended not to. Obviously things have got faster, and the chips improved, but there isn't anything I can think of that represents a major break of progression from one to the other. (probably missing something obvious). 94.72.205.11 (talk) 17:45, 5 November 2010 (UTC)
- There's some confusion about what it means for an operating system to "expose" a core. In a modern multicore system, the multicore hardware may or may not be exposed by the operating system. In other words, different operating systems (and hardware) have different "contracts" between multi-threaded programs and the hardware that will execute the multiple threads. If you create a new kernel thread (on a POSIX system), or a new Process (on Windows), the operating system must implement the necessary code to task a particular process to a particular core (otherwise, there is no performance gain by threading - all threads execute sequentially on one core). When an operating system "exposes" a core, it means that a programmer is able to guarantee a particular mapping between software processes and hardware processors (or at the very least, receive an assurance from the OS that the scheduling and delegation to a CPU will be managed by the system thread API).
- An operating system might be using multiple CPUs, even if it doesn't show that implementation to the programmer/user . Or, it might be showing cores as software abstractions, even though they do not exist. The number of "exposed cores" and the number of "actual cores" are not explicitly required to be equal. This detail depends entirely on the OS' kernel. See models of threading for more details.
- Modern programming languages, such as Java or C#, use hybrid thread models - meaning that the system library will decide at runtime how the kernel should schedule multiple threads. This guarantees that an optimal execution time can be delivered - especially if the other CPU cores are occupied. It invalidates the sort of simplistic multi-core assumptions that many programmers make (i.e., "my system has 4 cores, so I will write exactly 4 threads to achieve 100% utilization") - and replaces this with a dynamic scheduler that knows about current system usage, cache coherency between cores, and so on. Nimur (talk) 18:28, 5 November 2010 (UTC)
display size
Sorry folks. I know I asked this question a while back and got a great answer. problem is: I can't find the answer. I tried searching the archives but no luck. So, I will ask the question again (and save the answer!).
I am using Vista on an LCD monitor. When I go to the net, the size is 100% but this is too small. I set it at 125% but can't get the settings to stay there and have to adjust them each time. Can someone help me (again)? 99.250.117.26 (talk) 15:54, 5 November 2010 (UTC)
- It's Wikipedia:Reference_desk/Archives/Computing/2010 May 5#125% screen size.—Emil J. 16:00, 5 November 2010 (UTC)
Hmmm. That answer came in just under two minutes . . . Wikipedia is getting slow! lol. Thanks a lot. 99.250.117.26 (talk)
List associated values in MS Access
I have an access database. There are two tables in this database. The primary key in the first may be associated with multiple primary keys in the second. I would like to find a way to list in the first table the primary keys from table 2 associated with a primary key from table 1. Is this even possible? 138.192.58.227 (talk) 17:39, 5 November 2010 (UTC)
- The usual way to do what you are asking for (if I understand it correctly) is to have a third table that sits between those two tables and maintains the associations (e.g a Junction table, among its many names). It's a lot easier than trying to put that information into the first table, and the associations can be viewed with clever SQL queries. --Mr.98 (talk) 17:57, 5 November 2010 (UTC)
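- If it helps to see the junction-table idea concretely, here is a small sketch using SQLite through Python purely for illustration (the table and column names are invented; Access's SQL dialect differs in minor details, but the same three-table layout and JOIN query apply):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (key1 INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE table2 (key2 INTEGER PRIMARY KEY, name TEXT);
    -- the junction table: one row per association between the two tables
    CREATE TABLE junction (
        key1 INTEGER REFERENCES table1(key1),
        key2 INTEGER REFERENCES table2(key2),
        PRIMARY KEY (key1, key2)
    );
""")
con.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, "A"), (2, "B")])
con.executemany("INSERT INTO table2 VALUES (?, ?)", [(10, "x"), (11, "y"), (12, "z")])
con.executemany("INSERT INTO junction VALUES (?, ?)", [(1, 10), (1, 11), (2, 12)])

# For each primary key in the first table, list the associated keys from the second:
for key1, key2 in con.execute("""
        SELECT t1.key1, t2.key2
        FROM table1 t1
        JOIN junction j ON j.key1 = t1.key1
        JOIN table2 t2  ON t2.key2 = j.key2
        ORDER BY t1.key1, t2.key2"""):
    print(key1, key2)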
Help
I made the mistake of leaving my crippled tower connected to the internet, and the damn thing auto-updated last night. The trouble is that after auto-updating, the tower automatically rebooted; however, the hard drive in my home tower is on its last legs and now the machine will not reboot: every time I clear the Windows XP screen I get taken to a blue screen announcing a boot-up error and telling me the system has been shut down. There is precious material on the hard drive that I desperately want to put on an external hard drive before the tower goes down permanently, so I am asking if there is any way at all to get the machine back up and running one last time so I can salvage what I need from it. TomStar81 (Talk) 19:30, 5 November 2010 (UTC)
- Consider placing the bad hard-drive in another system (that boots off a good hard-drive); or booting from a live CD. Nimur (talk) 19:48, 5 November 2010 (UTC)
- (after e/c)
- Two ways come to mind.
- One is a boot disk. You can make a disk that will boot the computer off the CDrom drive. You'll only get a "dos" prompt, but that should be enough to copy files. (You could also make a linux boot disk easy enough, if you're comfortable with Linux.)
- Another way is to take the drive out, and put it into a USB drive enclosure. This will turn it into an external drive. Plug both drives into some other computer and copy them that way.
- However, if the files you're hoping to retrieve are corrupted, you're going to have difficulties in either case. There are professionals that can retrieve almost anything but they're quite pricey. APL (talk) 19:49, 5 November 2010 (UTC)
- Actually, far from the old fashioned boot disks I was imagining, it looks like some Linux LiveCDs can give you a fully usable, graphical user interface. Might be the easiest way to go.
- Try (on some other, working, computer) to make yourself a live CD of a nice and user-friendly version of Linux (ubuntu for example) and copy the files that way. "Learning Linux" can be intimidating, but you don't have to learn anything to drag and drop some files from one drive to another. APL (talk) 19:55, 5 November 2010 (UTC)
- Indeed. The Live CD is the "boot disk" of the new millennium - it provides as many features as a full-blown graphical operating system. It should be fairly easy to operate - simply create the disc, boot from it, and copy your hard-disk to a safe location (like a USB drive or a network drive). Here is the official Ubuntu distribution - it is free to download and use. Nimur (talk) 20:01, 5 November 2010 (UTC)
Monitor as TV
I'm about to move in to a small house in the UK. I will purchase either a laptop or desktop PC. I also want to watch television.
I noticed that quite large-screen monitors have dropped in price, and read a review of an example product in 'PC-Pro' magazine - a 27 inch monitor for 200 pounds. Not wishing to advertise it here, but it was their 'best buy', and it is this one
I'll be using UK Freeview TV, and will probably buy a Freeview+ box to act as a receiver and recorder.
So - one question is, how to connect it up so that I could watch TV on it. I don't want to use an in-computer TV card, because I'd want to keep the PC/laptop free for other things, and also because I've found TV-cards to be somewhat unstable.
Basically, I want to watch TV on a reasonable-sized screen, and sometimes use the big screen as a computer screen.
It seems that these monitors mostly make reasonable TVs - is that correct? Whereas TVs are often poor monitors.
-Would this type of monitor make a reasonable TV?
-How would it compare to a similar-price actual TV?
-How can I use it as a TV without needing the computer switched on (ie how to connect it to a freeview receiver box)?
-Is this a reasonably sensible approach? —Preceding unsigned comment added by 92.41.19.146 (talk) 21:40, 5 November 2010 (UTC)
- You can get TV/monitors with integrated freeview, however for £200 the size would be about 23" , so not as big. You definitely get more screen for £200 if you just buy a monitor.
- However, the monitor only has DVI and VGA inputs, which means it will not work with a standard Freeview box's SCART output; it would, however, work with a Freeview HD box with HDMI output (connect via an adaptor to DVI - the monitor has HDCP, so it will work with an HDMI adaptor)
- The monitor is likely to have an absolutely fine display. (I use mine to watch stuff off Freeview in standard definition - it's fine.) Old TVs made terrible monitors (too low resolution), but modern hi-def TVs actually make fine monitors.
- The only other issue is that monitors typically have no speakers, or very poor sound - so you can expect to need a sound system - that could be an additional expense. (I'd expect to be able to get something suitable to make sound to TV standard for £50, but more if you want 'cinema sound'). Make sure the freeview box has the right sort of audio out you can use.
- The big issue here is that you'll need a Freeview HD box, which adds a lot to the price (~£80+ currently, probably soon cheaper as it's relatively new).
- It appears to be a better deal than the comparative standard price; however, if you check large shops you can get TVs which will work as monitors eg random pick http://direct.asda.com/LG-32%22-LD450-LCD-TV---Full-1080P-HD---Digital/000500571,default,pd.html 32" at under £300 - it's a little bigger, and will work with any input. If you compare the additional costs of the monitor route it might seem attractive.. (note it doesn't have freeview HD, just freeview though). Generally there is usually a sub £300 30"+ hidef TV on special offer at one of the large supermarkets.. (ie these offers are common) 94.72.205.11 (talk) 22:46, 5 November 2010 (UTC)
- Thanks; interesting comments and info - especially re. HD Freeview. As I plan to buy a freeview recorder anyway, the HD version is not much extra cost, and that sounds a reasonable solution.
- If anyone has actual experience with a monitor of this kind of size, I wonder if 1920 x 1080 starts to look like far too low a resolution for using as a 'regular' PC desktop when it gets up to the 27-30 inch sizes?
- The sound isn't a problem, by the way - I have a decent PC-speaker system that I'd use (altec lansing with a sub-woofer) which is gonna be way better than any built-in stuff.
- The 200-pound monitor, plus an HD freeview box w/ HDMI out, is sounding like quite a good option so far. —Preceding unsigned comment added by 92.41.19.146 (talk) 23:20, 5 November 2010 (UTC)
- Bit of maths - because it's a wide screen monitor the 27" converts into ~13" screen height for 1080 pixels. A bog standard 1024 high screen is ~11" high - so the pixels are only 13/11 times bigger (or 18%) noticeable but probably no big deal.
- Also equals about 80 dots per inch if my maths is correct.. better article is Pixel density 94.72.205.11 (talk) 23:59, 5 November 2010 (UTC)
C or C++
Hello there, I want to learn a programming language. One of my friends told me to start with C, but somehow I started with C++. What's the difference between C and C++? I don't have any prior programming experience. What I want to do is make different kinds of software. So which one should I choose?--180.234.26.169 (talk) 22:31, 5 November 2010 (UTC)
- The primary difference between C and C++ is that C++ allows for object-oriented programming. You can write C++ programs without objects. You can fake objects in C with structs and function pointers. But, the main reason to choose C++ over C is the ease of object-oriented programming. -- kainaw™ 22:48, 5 November 2010 (UTC)
- "Software" is very broad. What kind of software do you want to write?
- C++ was one of my first programming languages, and I wish it weren't; it is too big and complicated. In particular, C++ requires you to think about things that aren't important unless you are really concerned about speed. C has some of the same problems, but at least it's simple, so it's not a bad choice. I think Python and Scheme (in particular, Racket, the Scheme-derived language my school uses) are excellent choices for learning to program. Some people are put off by the fact that these languages are not as popular as, say, Java. But (1) if you work on your own, you should choose the best tool, not the one that everyone else chooses, and (2) learning with a language designed for elegance rather than industry makes you a better programmer in any language. Paul (Stansifer) 02:46, 6 November 2010 (UTC)
- I agree with everything except your last statement. Learning how the computer works at a very low level makes you a better programmer. Understanding exactly how using a floating-point operation instead of an integer operation will affect your program is important. Understanding what may happen when you try to compare two floating-point numbers is important. Understanding how the stack is affected when you use recursion - especially unnecessary tail-end recursion - is important. Understanding how the memory cache is being used with loops is important. You can use an "elegant" language that makes guesses at what is optimal, but you are left hoping that the programming language is making good decisions. More often than not, the high-level languages make poor decisions and lead to slower execution and a waste of resources. Personally, I teach PHP first, then Java (since I don't like the implementation of objects in PHP), then C++. I don't teach C because anyone who knows C++ should be capable of learning C quickly. -- kainaw™ 03:05, 6 November 2010 (UTC)
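- As a two-line illustration of the floating-point comparison point above (Python here only because it is quick to try; C, C++ and Java behave the same way with IEEE-754 doubles):

print(0.1 + 0.2 == 0.3)                 # False: neither side is exactly representable in binary
print(abs((0.1 + 0.2) - 0.3) < 1e-9)    # True: compare against a tolerance instead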
- All of those things can be important, but performance only matters some of the time. Some, even most, projects will succeed just fine if they run 100 times more slowly than they could, so programmers shouldn't worry about wasting cycles. (Knowing enough about algorithms to get optimal big-O is usually more worthwhile.) But writing well-designed programs always requires thought; novices should start solving that problem, and worry about performance when they have to. Paul (Stansifer) 02:41, 7 November 2010 (UTC)
- Per Program optimization#Quotes, don't bother optimising unless and until it's really time to do so.
- With regards to the original question, I'm partial to a "C first" approach. Learn C, because it teaches you important things that many or most other modern languages neglect, such as dealing with pointers and memory management. Only when you have a decent grasp of C do I recommend learning C++. C++ has some niceties that, when learnt first, can leave you confused or frustrated when starting to learn a language without those features. It can generally be said that almost* any language has some advantages over other ones, and C and C++ both have their advantages over one another. I find C to be a good choice for single-purpose, fast programs, where objects are not required. C++ has some weight over C when it comes to large, multi-purpose programs, since the object-oriented aspect, and the added "sugar" of not having to deal with a lot of the lower-level bookkeeping such as pointer and memory management, allow you to focus more on the goal than on the design. On the other hand, it can be argued that it's much easier to become sloppy with C++, which is another good reason to get into a C programmers' habit of cleaning up resources et al.
- *I say "almost" here, because there are some languages out there that are more disgusting than the idea of blobfish mating. --Link (t•c•m) 09:24, 7 November 2010 (UTC)
- "C is to C++ as lung is to lung cancer" ;-). Seriously, C is a very good language for its niche. It's rather small, and rather consistent. It only has a small number of warts, and most of them turn out to be rather well-considered features on closer inspection. As a result, C is fairly easy to learn. It also exposes much of the underlying machine, so it's pedagogically useful if you want people to be aware of how computers work. Everybody who knows assembler and C also has a fairly good idea of how nearly all C features are mapped into assembler. C++, on the other hand, is a very large, very rich, very overladen language. I'd be surprised to find anybody who actually "knows C++" in a full sense. C++ is rather ugly - it has come to fame because it allowed people to reuse C knowledge and C infrastructure (compilers, linkers, and most of the tool chain) while supporting objects and classes (in fact, early on an intermediate form was called C with classes). Because of that, it took off and is now widely used, albeit still butt-ugly. The only reason to learn C++ is if you expect to be required to use it for some project. Java (programming language) is a better language, as is C Sharp (programming language), and if you go into exotics, Smalltalk and Common Lisp Object System are both cleaner and prettier (although one might argue that Scheme is to Common Lisp as C is to C++ ;-). --Stephan Schulz (talk) 10:03, 7 November 2010 (UTC)
- Java, as a language, is indeed quite nice, although various Java Virtual Machines (notably HotSpot, IcedTea and Blackdown) have been known to make me want to remodel my own face using an angle grinder. I don't really think C++ is that bad, but it's true that it's quite convoluted (as is Java, for that matter, but it's less obvious because it hides the low-level parts). Personally, I prefer Python: I find it much easier to get a working prototype in Python than in C or C++. Generally, my preferences by application are such: C for embedded programming and small-ish things that need to be self-contained/run very fast/etcetera, C++ for things that need C's low-level capabilities but benefit greatly from object-oriented design (e.g. 3D games), and Python for essentially everything that isn't critically dependent on self-containedness or speed. I used to be a Java fanboy, but I haven't done anything with it for a long time, since I've become increasingly frustrated with Sun, and Python can give you almost everything Java can. --Link (t•c•m) 18:10, 7 November 2010 (UTC)
November 6
Batch file to control network settings
My friends and I often get together and play games via LAN, but to do so, we always have to change our network settings considerably. First, we disable Windows Firewall. Then, we open 'Network Connections' and disable 'Wireless Internet Connection'. I don't see how those two steps are necessary, but some of our games seem to take issue if we don't do them. :\ Then we right-click on 'Local Area Connection', select 'Properties', then 'Internet Protocol (TCP/IP)', then 'Properties', select 'Use the following IP address', and type one in. I think that's called setting up a static IP? Anyway, then we go back to 'Network Connections' and do the same for '1394 Connection 13'. Is there a way to create a batch file that would automate this? And hopefully one that would reverse it too. KyuubiSeal (talk) 01:12, 6 November 2010 (UTC)
- The netsh command is used to manipulate most of the Windows IP stack parameters from the command line. here is an example. 87.115.152.166 (talk) 01:54, 6 November 2010 (UTC)
- It sets the 'Local Area Connection' correctly, but nothing else. At least that's a third done though. KyuubiSeal (talk) 03:31, 6 November 2010 (UTC)
- netsh interface set interface "Local Area Connection" DISABLE
- netsh interface set interface "Local Area Connection" ENABLE
- 87.115.152.166 (talk) 03:39, 6 November 2010 (UTC)
- I can get it to set the static IPs correctly, but I can't disable Windows Firewall and the wireless network. When I try swapping in 'Wireless Network' for 'Local Area Connection' in the above lines, it gives me this error:
One or more essential parameters not specified
The syntax supplied for this command is not valid. Check help for the correct syntax.

Usage set interface [name = ] IfName
          [ [admin = ] ENABLED|DISABLED
            [connect = ] CONNECTED|DISCONNECTED
            [newname = ] NewName ]

Sets interface parameters.

   IfName  - the name of the interface
   admin   - whether the interface should be enabled (non-LAN only).
   connect - whether to connect the interface (non-LAN only).
   newname - new name for the interface (LAN only).

Notes:
- At least one option other than the name must be specified.
- If connect = CONNECTED is specified, then the interface is automatically enabled even if the admin = DISABLED option is specified.
—Preceding unsigned comment added by KyuubiSeal (talk • contribs) 00:07, 7 November 2010 (UTC)
Is there a way to get a "wikitable sortable" table to correctly sort by "year"?
I am specifically working with the table at The 100 Best Books of All Time, the entries of which have been altered and provided below as an example:
Title | Author | Year | Country |
---|---|---|---|
Dts Example | No one | 2000 BC | Nowhere |
Things Fall Apart | Chinua Achebe | 1958 | Nigeria |
Epic of Gilgamesh | Anonymous | 18th or 17th century BC | Mesopotamia |
Book of Job | Anonymous | ? | Israel |
Mahabharata | Anonymous | 4th century BC – 4th century AD | India |
Dtsh Example | No one | Template:DtshLate 2nd century BC | Nowhere |
Dts Example | No one | 3000 BC | Nowhere |
One Thousand and One Nights | Anonymous | 9th century | Arabia, Persia, India |
The Decameron | Giovanni Boccaccio | 1349–1353 | Italy |
Don Quixote | Miguel de Cervantes | 1605–1615 | Spain |
Ramayana | Valmiki | Template:Dtsh3rd century BC – 3rd century AD}} | India |
Aeneid | Virgil | 29 – 19 BC | Italy |
Leaves of Grass | Walt Whitman | 1855 | USA |
See what happens when you sort by the "Year" column.
I started with template:Sort (eg. "{{sort|850|9th century}}" which should sort by "850" and display "9th century", right...?).
Template:Dts seems to work well enough, but does not permit an "alternate text".
Template:Dtsh-with-text-following-it just doesn't seem to work.
Is there some way to deal with this -- ie., to get that "Year" column with entries of the kind shown above to sort correctly -- that I am just not seeing? WikiDao ☯ (talk) 01:45, 6 November 2010 (UTC)
- I would have thought {{ntsh}} would work, but for some reason it doesn't like those -ve values:
Title | Author | Year | Country |
---|---|---|---|
Dts Example | No one | -2000 | Nowhere |
Things Fall Apart | Chinua Achebe | 1958 | Nigeria |
Epic of Gilgamesh | Anonymous | 18th or 17th century BC | Mesopotamia |
Book of Job | Anonymous | ? | Israel |
Mahabharata | Anonymous | 4th century BC – 4th century AD | India |
Dtsh Example | No one | Late 2nd century BC | Nowhere |
Dts Example | No one | -3000 | Nowhere |
One Thousand and One Nights | Anonymous | 9th century | Arabia, Persia, India |
The Decameron | Giovanni Boccaccio | 1349–1353 | Italy |
Don Quixote | Miguel de Cervantes | 1605–1615 | Spain |
Ramayana | Valmiki | 3rd century BC – 3rd century AD | India |
Aeneid | Virgil | 29 – 19 BC | Italy |
Leaves of Grass | Walt Whitman | 1855 | USA |
- 87.115.152.166 (talk) 03:30, 6 November 2010 (UTC)
- Yes I had tried {{ntsh}} too, forgot to mention that. WikiDao ☯ (talk) 12:36, 6 November 2010 (UTC)
- Ah, which {{Nts}} explains. You could hack it by having a search index that you manually order (e.g. {{dtsh|3}}-417BC). 87.115.152.166 (talk) 03:35, 6 November 2010 (UTC)
- Yes I tried that as mentioned in question. WikiDao ☯ (talk) 12:36, 6 November 2010 (UTC)
- Have you read Help:Sorting? ---— Gadget850 (Ed) talk 03:39, 6 November 2010 (UTC)
- Yes, the problem is that the information there doesn't seem to help with my specific set of difficulties. {{nts}} says "Negative numbers do not sort correctly with this template" but doesn't really say why, or how to get them to do so instead.
- I have also tried using just html, for example: "<span style="display:none" class="sortkey">-1750</span> 18th or 17th century BC". Haven't got that sort of thing to work yet either; still working on that.
- The primary problem seems to be with sorting "negative" or "BCE" dates along with "positive" or "CE" dates in the same column. Because none of what I would have thought to be the applicable templates seem to treat negative numbers numerically. WikiDao ☯ (talk) 12:36, 6 November 2010 (UTC)
Windows 7 file permissions
I have recently installed Win 7 Professional 64-bit. Upon trying to copy over my personal Apache setup, I was stumped by my inability to save a configuration file change. At first I thought the file(s) might be locked, but this issue is system-wide: I simply can't modify a file outside of my User directory. I've reviewed the web and can see that many users have had similar issues, but can't find anything that fully addresses this.
My account is an administrator and if I review the "effective permissions" for a particular file, it says that I have full control of the file. Yet I can't save a modification to it. The only broad solution to date is to completely turn off User Account Control. Here are two additional clues: explicitly making my account name the "owner" by itself does not change anything, but if I then add my account to the main Permissions area, then I can edit the file. Granting "full control" to the Users group also fixes the problem, but this is not an appropriate solution.
This situation is, to put it mildly, absurd. Is there actually a robust solution to this, one that does not require messing with permissions to accomplish simple tasks? I thought Win7 was supposed to be a panacea from Vista, and I encounter an even stranger problem right from the start, setting me back hours. I've about had it with MSFT.
Thanks, Riggr Mortis (talk) 06:40, 6 November 2010 (UTC)
- I don't understand. Preventing modification of files outside the user directory is kind of the point of UAC, and you seem to want to keep UAC turned on, but you say the behavior you observe is absurd. What behavior do you want? You can right-click a program and select "run as administrator" to run it without UAC restrictions.
- The "improvement" in Windows 7 was that Microsoft exempted its bundled software, such as Explorer, from UAC at the default notification level. That makes UAC nearly useless, since some of the exempt programs can be coerced into executing arbitrary code, but people think it's better because they see fewer annoying prompts. -- BenRG (talk) 08:26, 6 November 2010 (UTC)
- Actually I'm more mystified than that. AFAIK, programs running without UAC can modify most files outside the user directory. The root, the Program Files directory, the Windows directory and other system directories are protected against modification for obvious reasons. If you are trying to save a config file to a program's own directory (usually considered a bad idea in most OSes nowadays AFAIK, even if it was common in some Windows versions in the past, except for portable apps, which shouldn't be in the Program Files directory anyway), you can do so by putting that program outside the Program Files directory. Nil Einne (talk) 10:33, 6 November 2010 (UTC)
Regarding config files, of course... but Apache defaults to \Program Files\ for its entire install, and since I do one thing with Apache I'm not getting fancy.
What behavior do I want? Well, my account is marked Administrator, and I would expect that fact to be sufficient to edit a bloody text file anywhere on the drive, all other things being equal. Are you suggesting it's not odd that the system tells me I have effective "full control" permissions over, say, "C:\Program Files (x86)\Apache\README.txt", yet I don't? Riggr Mortis (talk) 21:09, 6 November 2010 (UTC)
BenRG: it's not that I want to keep UAC on. I will probably end up leaving it off entirely. I did on Vista, but at the same time I don't remember UAC being that intrusive on Vista. If someone simply said "UAC and administrative permissions logically conflict on Win 7", well that would answer my question, I suppose. Riggr Mortis (talk) 21:20, 6 November 2010 (UTC)
- The idea of UAC is that your user account has administrative privileges but most processes are started with only some of those privileges, so yes, this is expected behavior. It's somewhat more like the Java applet security model than the traditional Unix model. I don't know the details well enough to respond to Nil Einne above. -- BenRG (talk) 23:23, 6 November 2010 (UTC)
- AFAIK Vista has the same behaviour. See [3] which mentions similar issues. However virtualisation is available on some versions as a stopgap measure, see [4]. User Account Control mentions this as well. Of course, another option, if Apache really wants to put its config files in the Program Files directory, would be for it to be better designed to utilise UAC and raise a prompt when it wants to modify the config. As BenRG has said, Windows's UAC is sort of a way of limiting privileges given to apps even if you are technically running as administrator: partially because very few people were actually using limited accounts, as has been recommended for a long time, and partially because many Windows programs are shoddily designed and expect full administrator privileges (UAC was fairly effective in convincing developers to design their programs better, I would say). If you genuinely want full administrative privileges all the time and for all programs, then turn it off. Of course this isn't recommended in any common OS I know of (and in fact some, like Ubuntu, actively try to stop people running as root/superuser all the time). Nil Einne (talk) 01:41, 7 November 2010 (UTC)
URLs
I have 700 html files, and I need to extract all urls from them which contain "/example123/". I'm on Windows 7. How could this be done, preferably with free software? 82.44.55.25 (talk) 10:30, 6 November 2010 (UTC)
- You can get grep for free for Windows and then grep for "http[^'\"]*/example123/[^'\"]*". The exact syntax of the grep may depend on the implementation of grep in Windows. -- kainaw™ 14:31, 6 November 2010 (UTC)
- I found a windows gui version for it, which has worked quite well. Thanks 82.44.55.25 (talk) 14:59, 6 November 2010 (UTC)
- Grep is good. I also wrote a short VBScript for this in '07. Maybe it will be of use to somebody.Smallman12q (talk) 15:25, 6 November 2010 (UTC)
FindURLsinHTML.vbs
'FindURLsinHTML.vbs
'Public Domain November 2010
'Version 1.1
'Written by Smallman12q in Visual Basic Script (VBS)
'Desc: This VBS script will find the URLs in HTML files and save them to a text file at the path set in targetdirectory below
'Usage: Simply drag-and-drop the html files onto the script.
'1.0 (April 2007)
'1.1 (Nov 2010) Change-Added support for multiple drag and drop
Dim urlarray(2000)'Change number to whatever will be the max number of urls you expect to find
Dim urlcounter
urlcounter = 0 'Start the URL count at zero (VBScript treats the uninitialised value as 0, but being explicit is clearer)
Set objRegEx = CreateObject("VBScript.RegExp")
objRegEx.Pattern = "https?://([-\w\.]+)+(:\d+)?(/([\w/_\.]*(\?\S+)?)?)?"'Change the regex pattern here (this one does all websites)
objRegEx.IgnoreCase = True 'Change to false to not ignore case
objRegEx.Global = True
Dim targetdirectory
targetdirectory = "C:\SomeExistingFolder\SomeFile.txt" 'Place the file path where to record
Sub findit(item)
'Make sure its an html/htm file
If (Right(item, 5) = ".html") Or (Right(item, 4) = ".htm") Then
''''
Const ForReading = 1
'Read file
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(item, ForReading)
Do Until objFile.AtEndOfStream
strSearchString = objFile.ReadLine
Set colMatches = objRegEx.Execute(strSearchString)
If colMatches.Count > 0 Then
For Each strMatch in colMatches
urlarray(urlcounter) = strMatch.value
urlcounter = urlcounter + 1
Next
End If
Loop
objFile.Close
End If
End Sub
'Check to make sure drag-and-drop
If( WScript.Arguments.Count < 1) Then
MsgBox "You must drag and drop the file onto this."
WScript.Quit 1 'There was an error
Else 'There are some arguments
Set objArgs = WScript.Arguments
For I = 0 To objArgs.Count - 1 'Check all the arguments
findit(objArgs(I))
Next
'Write urlarray to file
Const ForAppending = 8
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.OpenTextFile(targetdirectory, ForAppending, True)
For count = 0 To urlcounter - 1 'urlcounter is the number of URLs found, so the last used index is urlcounter - 1
objTextFile.WriteLine(urlarray(count))
Next
objTextFile.Close
MsgBox("Found " & urlcounter & " urls.")
End If
WScript.Quit 0 'Exit okay
'Resources
'http://blogs.technet.com/b/heyscriptingguy/archive/2007/03/29/how-can-i-search-a-text-file-for-strings-meeting-a-specified-pattern.aspx
'http://www.tek-tips.com/viewthread.cfm?qid=1275069&page=1
'http://www.activexperts.com/activmonitor/windowsmanagement/adminscripts/other/textfiles/
'http://snipplr.com/view/2371/regex-regular-expression-to-match-a-url/
'https://secure.wikimedia.org/wikipedia/en/wiki/User:Smallman12q/Scripts/Transperify
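For comparison, a rough Python sketch of the same idea as the grep and VBScript approaches above, filtered to URLs containing "/example123/" as the original question asked. The folder path, output file name and regex are my own placeholders, not anything the posters above used.

import glob
import re

URL_RE = re.compile(r"https?://[^\s\"'<>]+")   # crude URL pattern, like the ones above

found = []
for path in glob.glob(r"C:\pages\*.htm*"):      # hypothetical folder of saved .htm/.html pages
    with open(path, encoding="utf-8", errors="ignore") as f:
        for url in URL_RE.findall(f.read()):
            if "/example123/" in url:
                found.append(url)

with open("urls.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(found))
print("Found", len(found), "matching URLs")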
My graphics card: not good enough?
I have an ATI Mobility Radeon HD 4530 graphics card/processor.
I suspect it might not be enough for a program I will soon run on my computer.
The program's system requirements: Minimum: ATI Radeon 7200. Recommended: ATI Radeon X1600.
Is my graphics card outdated? Is it absolutely necessary that I upgrade or get a new graphics card altogether if I am to be able to run the program, or is my current card good enough to match the minimum requirements? I know there's more to it than just the graphics card, but it seems to me this is probably where my computer's weakness lies. It's a laptop, so maybe it's not so surprising if the graphics card isn't among the best.
Is it possible to upgrade or get new graphics cards like the ATI Radeon 7200 or ATI Radeon X1600 on the internet, or do I have to go to a software store?
Krikkert7 (talk) 12:02, 6 November 2010 (UTC)
- It is not outdated, at least for these requirements. The 7200 is very old (from around 2001 or 2002) and the X1600 is newer, but still older than the 4530 (the X1600 is from 2006). The 4530 might be quite a low-end device, but it should still be comparable to the X1600. -Yyy (talk) 12:28, 6 November 2010 (UTC)
hm.. that's good news I would say, but quite different than what I read earlier. But that's the good thing I guess about asking around, getting several opinions. Krikkert7 (talk) 13:00, 6 November 2010 (UTC)
is it possible to distribute a large database as many ways as you want?
or, in general, no? 84.153.222.232 (talk) 13:25, 6 November 2010 (UTC)
- You need to define "distribute". If you mean "send the data to another location", the answer is "yes". You can send it over the web. You can remote connect through an ODBC interface. You can SFTP CSV files. You can put it on a CD, duct-tape it to a monkey, and send him overseas on a freighter. If you intend "distribute" to mean "show the data to users", you are referring to what most people call a "view". You can set up as many views as you like. -- kainaw™ 14:29, 6 November 2010 (UTC)
- If you mean, can you have the database exist in different chunks on different machines, you could, but it would present certain practical difficulties. --Mr.98 (talk) 14:46, 6 November 2010 (UTC)
- I think the OP means federating the database - distributing its data across many servers. This is possible, but is a feature that is significantly easier on commercial database platforms (like IBM DB2), as compared to, say, MySQL. Here's a technical-series from IBM called Data Federation and Data Interoperability. Here, for comparison, is the same concept in MySQL. Oracle calls this "clustering" although it has fewer features and capabilities; they have a white paper comparing federation to clustered distribution: Database Architecture: Federated vs. Clustered. Nimur (talk) 15:02, 6 November 2010 (UTC)
Whole Disk Encryption
Does whole disk encryption cause extra stress on a hard drive, such as for re-reading the keys and so forth? I'm specifically thinking of an SSD drive, and whether whole disk encryption will shorten its useful life due to lots of extra read/writes. —Preceding unsigned comment added by 75.75.29.90 (talk) 14:49, 6 November 2010 (UTC)
- No, it doesn't cause any significant extra stress for the hard drive. The keys (or key files) are relatively small, usually less than 1MB, so there's no extra stress there (the keys are read once on boot and stored in RAM). The only increase in read/write will come from the actual extra space the encrypted form takes...though this usually isn't significant (depends on algorithm). The actual encryption may stress the CPU a bit though...unless the encryption is performed by a dedicated chip on the hard drive. But overall, the answer is no...disk encryption doesn't stress/reduce the life of the HD.Smallman12q (talk) 17:07, 6 November 2010 (UTC)
- When using popular full-disk encryption products like TrueCrypt and BitLocker, the encrypted data doesn't take up any more space. There are no extra disk accesses whatsoever except at boot time. Well, technically, since the encryption code takes up a small amount of RAM, there might be a little bit more virtual memory paging, but I doubt you'd notice the difference. -- BenRG (talk) 23:13, 6 November 2010 (UTC)
smallest, most compact and self-contained C++ compiler for Windows
so, my favorite editor is notepad2, which is TINY, and totally self-contained. Really just an .exe and maybe an .ini to go with it. I realize, due to at least standard libraries, that I do not have much of a hope of finding a similar Windows c++ compiler. Still, I dare hope.
what is the smallest, most compact and self-contained, C++ programming environment for Windows? We're talking something that you could run on a netbook and is not a messy, huge environment, but just a compact set of files. thank you! 84.153.207.135 (talk) 17:56, 6 November 2010 (UTC)
- Dev-C++ is pretty self-contained. It is a complete IDE, including a text editor and build system; it uses MinGW (gcc) as its compiler. Alternately, you can download gcc, packaged for Windows as MinGW ("Minimalist GNU for Windows"), if you only want a command-line compiler. Microsoft Visual Studio Express is pretty light-weight as well. Nimur (talk) 18:03, 6 November 2010 (UTC)
- Microsoft's C++ compiler runs from the command line and is configured with environment variables, very much like gcc (though all of the command line switches have different names). I haven't compared their sizes, but I suppose they're similar if you only keep the headers and libraries that you plan to use. Microsoft's compiler is quite a bit faster than gcc, which might matter if you want to do a lot of big builds on battery power. It also optimizes somewhat better for x86. Other than that, you could go either way. -- BenRG (talk) 23:38, 6 November 2010 (UTC)
how does this work?
how does the "westley" entry (1988) at http://www0.us.ioccc.org/years.html#1988 work? I mean, could someone write a 300-word paragraph describing in excruciating detail how the pseudocode would look if it were written in nearly formal English? I am just having too much trouble following it... What the program does: SPOILER: it's supposed to calculate pi by looking at its own area. (strikeout for spoiler) /SPOILER 84.153.207.135 (talk) 18:22, 6 November 2010 (UTC)
- Did you read the hint? It's important that you either make the adjustment in the hint, or use a really old compiler that does dumb preprocessing. I'll assume you're interested in the original version, with the dumb preprocessing.
- The first line of F_OO is 4 instances of the "_" macro separated by 3 minuses. Here's how it looks after preprocessing:
-F<00||--F-OO--;--F<00||--F-OO--;--F<00||--F-OO--;--F<00||--F-OO--;
- It has become a sequence of 4 statements. The first one is different from the others, because it didn't have a minus before it. Each statement is a pair of clauses separated by "||", the short-circuit operator which evaluates the expression on the right only if the expression on the left was false. The result of the "||" operator is not used. So the statements are equivalent to
if(!(-F<00)) --F-OO--; if(!(--F<00)) --F-OO--;
- The double 0 isn't important, it's just a zero that helps put more "foo"-looking things in the code. -F<0 is true if F>0, which will never happen in this program because F starts at 0 and decreases from there. So in the first if statement, the condition is true and the --F-OO-- gets executed. --F-OO-- is equivalent to (--F)-(OO--) meaning F is decremented, and OO is decremented, and the old value of OO is subtracted from the new value of F, but that last part doesn't matter since the result of the subtraction is not used. From their initial values of 0, F and OO have both become -1.
- In the second statement, --F<00 causes F to first be decremented again, then its new value (-2) is compared to 0. -2<0 is true so the --F-OO-- is not evaluated. The third and fourth statements act likewise, bringing F down to -4 while OO stays at -1.
- The following lines are similar. For each line, the number of times F is decremented is the number of "_" expansions in the line, while OO is decremented only once per line. At the end, OO is the count of lines, corresponding to the diameter of the circle, and F is the total count of "_" expansions, corresponding to the area of the circle. They're both negative, but that's fixed in the printf statement. 4*area/diameter/diameter is printed. 67.162.90.113 (talk) 21:48, 6 November 2010 (UTC)
- There's an article at International_Obfuscated_C_Code_Contest#Examples...though it's scant on details.Smallman12q (talk) 01:30, 7 November 2010 (UTC)
bandwidth
Approximately how much bandwidth is used when a program, such as a web browser or crawler, asks the remote server how new a file is? 82.44.55.25 (talk) 21:22, 6 November 2010 (UTC)
- OK so here's my browser asking Wikimedia about the file http://bits.wikimedia.org/skins-1.5/vector/images/arrow-down-icon.png?1
- My browser sends:
GET /skins-1.5/vector/images/arrow-down-icon.png?1 HTTP/1.1
Host: bits.wikimedia.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US;rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://bits.wikimedia.org/skins-1.5/vector/main-ltr.css?283-5
If-Modified-Since: Wed, 05 May 2010 19:06:38 GMT
If-None-Match: "bc-485dd86856b80"
Cache-Control: max-age=0
- And the server replies the file was modified in May:
HTTP/1.1 304 Not Modified
Date: Sun, 07 Nov 2010 08:35:46 GMT
Via: 1.1 varnish
X-Varnish: 1179536182
Last-Modified: Wed, 05 May 2010 19:06:38 GMT
Cache-Control: max-age=2592000
Etag: "bc-485dd86856b80"
Expires: Tue, 23 Nov 2010 14:42:06 GMT
Connection: keep-alive
- That's 830 bytes, but most of what the browser sends is not necessary, so you can make it less than half a K. F (talk) 08:45, 7 November 2010 (UTC)
- It's also gzip encoded (Look at the Accept-Encoding line), so it's smaller. However there is the TCP/IP overhead, which could easily make up for what the compression gains. Shadowjams (talk) 04:19, 8 November 2010 (UTC)
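If you want to reproduce this exchange yourself, here is a small Python sketch (my own illustration, not anything the posters above used) that sends the same kind of conditional GET; urllib reports a 304 Not Modified as an HTTPError, which is the cheap "only headers came back" case discussed above.

import urllib.error
import urllib.request

req = urllib.request.Request(
    "http://bits.wikimedia.org/skins-1.5/vector/images/arrow-down-icon.png?1",
    headers={"If-Modified-Since": "Wed, 05 May 2010 19:06:38 GMT"},
)
try:
    resp = urllib.request.urlopen(req)
    print(resp.status, "body length:", len(resp.read()))    # file changed, full body was sent
except urllib.error.HTTPError as e:
    if e.code == 304:
        print("304 Not Modified - only headers came back")  # nothing but a few hundred bytes
    else:
        raise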
November 7
.sla extension on Scribus
I'm trying to open a .sla document on the latest Scribus version, but it says it doesn't support this file format. I know .sla is a Scribus file extension, so what is the problem? Thank you a lot. Leptictidium (mt) 10:59, 7 November 2010 (UTC)
- Three possibilities spring to mind:-
- 1 - The file has a .sla extension, but was created by another program that uses the same extension rather than Scribus
- 2 - The file has had its extension changed and is actually (for example) a .doc file
- 3 - The file is corrupt.
Exxolon (talk) 11:26, 7 November 2010 (UTC)
- The file in question is the Welcome to Wikipedia brochure, downloaded from the Wikimedia Bookshelf website. Is there any problem with the file? --Leptictidium (mt) 11:52, 7 November 2010 (UTC)
- The outreach page you cite says the document needs Scribus 1.3.5, and indeed the header info in the file says "Version="1.3.5.1". 1.3.5 is newer than the standard version available for download from the Scribus site or in Linux package repositories. The outreach page also says where to get a sufficiently new version. -- Finlay McWalter ☻ Talk 16:16, 7 November 2010 (UTC)
Ping
What is the use of the ping command in the command prompt?
like, "ping wikipedia.com" ? Max Viwe | Wanna chat with me? 14:56, 7 November 2010 (UTC)
- To test network connectivity, response time, status of a sites server, etc. The Ping article explains in more detail. 82.44.55.25 (talk) 15:01, 7 November 2010 (UTC)
you're playing ping-pong with Wikipedia. (This means it must be configured to play ping-pong -- not all servers are.) The numbers you see are how long (in milliseconds) the ball takes to get to Wikipedia and back (1000 would mean it takes one second to go there and back). Normally ping is not recreational; it's just to see whether you have a network connection with the other side and how long it takes to get a virtual ping-pong ball (a small packet of data) there and back again. 84.153.212.109 (talk) 16:10, 7 November 2010 (UTC)
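If you'd rather drive it from a script than type it at the prompt, a minimal Python sketch (my own illustration, assuming a Windows machine; on Linux/macOS the count flag is -c rather than -n):

import subprocess

# Shell out to the system ping tool and ask for four echo requests.
result = subprocess.run(["ping", "-n", "4", "en.wikipedia.org"],
                        capture_output=True, text=True)
print(result.stdout)   # per-reply times in ms plus the min/max/average summary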
Python object creation failure idiom
I'd like to write
Thing = MyObject(...mymin,mymax...)   ## returns None if mymin ≥ mymax
if Thing:
but a moment's experimentation shows that this won't work: Thing receives an object even if MyObject.__init__ says return None. Several less elegant approaches occur to me:
- Raise a custom exception.
- Give MyObject a field self.valid, to be read only once.
- Take the validity testing (typically more complicated than this toy example) out of MyObject.
What's the customary way? —Tamfang (talk) 20:05, 7 November 2010 (UTC)
- I think it's really going to depend on context, and your example is a bit too abstract to clarify (I appreciate it's deliberately abstract). If constructing a MyObject with those parameters is bad (like asking for a new network socket with an invalid socket family) then that's surely a plain old programming error, and an exception is called for (and you do the caller a favour if it's as soon as possible - at the constructor, rather than returning an essentially useless object). If, however, an "invalid" MyObject isn't entirely useless (or that it might, by some other means, sensibly become useful) then maybe MyObject should have a state variable (I'm-useless-now, I'm-useful, I-used-to-be-useful). Some objects evolve (sockets do that), so if the bad condition is a temporary one, an exception doesn't seem appropriate. As a general rule, exceptions should be exceptional, and someone who reads the docs properly and upon whom fortune smiles should never see one (or should be able to avoid one without much effort). -- Finlay McWalter ☻ Talk 20:51, 7 November 2010 (UTC)
- A related point, about custom exceptions. What kind of exception, or whether you need to define a bunch of different exceptions, is down to you thinking about what a caller would reasonably do with the different exceptions. If your code can raise three different types of custom exceptions, that means you think there are circumstances in which a reasonable caller would want to catch only one or two of those three; if they'll always catch all or none, then they might as well all be the same type (and you can still stuff any relevant details into the exception object if needed). I occasionally see things that amount to the-first-parameter-was-wrong-exception vs the-second-parameter-was-wrong-exception, when surely wrong-parameter-exception would be all that's useful (with a value inside it, or just a string, saying which). -- Finlay McWalter ☻ Talk 21:00, 7 November 2010 (UTC)
- The customary way is to raise a ValueError; that's meant for exactly this kind of situation, where the type of the function arguments is correct, but the values are meaningless to the function. Specify what went wrong in the exception message, e.g.
raise ValueError("mymin must be less than mymax")
- Yes, that's overwhelmingly true. Curiously, the very example I randomly chose is the one thing I can find that, in the case of an obviously bad parameter, doesn't - socket.socket(1234, socket.SOCK_STREAM) raises a socket.error. Everything else (in my epic 10-minute-long quest typing stupid values into various Python functions) gets a ValueError every time. -- Finlay McWalter ☻ Talk 22:48, 7 November 2010 (UTC)
"but a moment's experimentation shows that this won't work"
- That's not entirely true. You can easily make it work by assigning a factory function, which creates your objects, to the variable MyObject, so that the call MyObject() calls this function instead of the class directly. For all intents and purposes, it would work the same way as before, except that this function now has the option of returning None. --128.32.129.245 (talk) 01:02, 8 November 2010 (UTC)
- The class method that creates a new instance is called __new__, not __init__. __init__ initializes an already-created instance, which it gets as an argument, and doesn't return anything. But I really don't think that you want to override __new__. You should either throw an exception from __init__ or make a separate factory function, as other people have already said. -- BenRG (talk) 10:15, 8 November 2010 (UTC)
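Pulling the suggestions above together, a small sketch (toy class and names of my own, not the real MyObject) showing both the raise-ValueError style and a factory classmethod that returns None, which keeps the "if Thing:" test working:

class Range:
    def __init__(self, lo, hi):
        if lo >= hi:
            raise ValueError("lo must be less than hi")   # the customary style
        self.lo, self.hi = lo, hi

    @classmethod
    def create(cls, lo, hi):
        # Factory alternative: swallow the error and hand back None instead.
        return cls(lo, hi) if lo < hi else None

thing = Range.create(5, 2)
if thing:
    print("got", thing.lo, thing.hi)
else:
    print("no valid Range")        # this branch runs for (5, 2)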
Network Security
Hello. When I returned home, the wireless light on my modem was on, but my laptop was off and my peripheral hardware aren't connected wirelessly. Was someone trying to piggyback on my Internet? Thanks in advance. --Mayfare (talk) 22:36, 7 November 2010 (UTC)
- At least for the Netgear and Linksys devices I've seen, the wireless light being on just means the modem's wireless function is enabled. If it's flashing that would suggest traffic (but I don't know if some random device somewhere innocently interrogating the networks it sees would make for much if any flashing). The best thing for you to do is to make sure you've got the wireless security settings turned on (WPA/WPA2, not WEP, not unencrypted). -- Finlay McWalter ☻ Talk 22:52, 7 November 2010 (UTC)
November 8
EXIF, meaning of "Optical Zoom Code"
Does anyone know how to interpret the line
Optical Zoom Code : 7
in the EXIF data of a photo? The camera is a Canon PowerShot A590 IS. I'm assuming it means 4X optical zoom (the maximum) but I'd like to find a resource for interpreting this stuff more generally. Thanks, --Trovatore (talk) 01:36, 8 November 2010 (UTC)
- Considering its widespread use, EXIF is surprisingly nonstandardized. The "official EXIF specification" is not really official - it is defined by a small organization that has loose affiliations with several Japanese camera manufacturers called CIPA (Camera & Imaging Products Association). Our EXIF article links to the EXIF DCF specification version 2.3 - and as you can see, "optical zoom code" is not a standard tag. It is an "internal use only" tag for the manufacturer, by the manufacturer - and any conformity to any spec should be considered "use at your own risk."
- Optical Zoom Code usually directly maps to a zoom-state; your camera might use codes (0-7) or (0-127) or so on to denote all possible steps between minimum and maximum zoom. The exact mapping of zoom-code to focal-length will vary from manufacturer to manufacturer (and model to model, even firmware to firmware). You can use a tool like gphoto2 to look up the mappings between zoom-codes and actual focal-lengths for your particular camera (if you trust their tables).
- If you really need to know the true optical zoom, the EXIF tag you should use is FocalLengthIn35mmFilm. You can compare this to your camera's FocalLength tag to determine the optical zoom as a "1x" or "2x" number. Nimur (talk) 04:09, 8 November 2010 (UTC)
- Thanks much. Can't seem to find that tag. But there's FocalLength and LongFocal and ShortFocal, and FocalLength equals LongFocal which is four times ShortFocal, so I take that to mean the picture was shot at 4x optical zoom. --Trovatore (talk) 04:58, 8 November 2010 (UTC)
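For what it's worth, a small Python sketch that reads the standard focal-length tags Nimur mentions (this assumes the Pillow library; _getexif() is a Pillow convenience method, and the file name is a placeholder):

from PIL import Image, ExifTags

img = Image.open("IMG_1234.JPG")                 # hypothetical photo
exif = img._getexif() or {}
tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}   # map numeric tag IDs to names

print("FocalLength:", tags.get("FocalLength"))
print("FocalLengthIn35mmFilm:", tags.get("FocalLengthIn35mmFilm"))
# Dividing the actual focal length by the lens's shortest focal length
# gives the "1x"/"2x"-style optical zoom figure.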
Mac OS X version adoption rates
Where could I find a breakdown percentage of Mac users who are running Mac OS X 10.6, 10.5, 10.4, even Mac OS 9? --70.167.58.6 (talk) 08:38, 8 November 2010 (UTC)
- They did a poll over at MacOSXHints a few months back, if I remember correctly, which showed that 10.6 adoption rate was over 70%. 10.5 was something like 18% or 19%, 10.4 less than 5%, and a smattering of people with lower systems. Unfortunately I can't find the link. Of course, that's a bit of a geek site which might queer the results a bit. I suspect upgrade rates are lower for people who use their machines for business purposes (why risk the possibility of having to rewrite your business webpage because of software changes?). --Ludwigs2 09:41, 8 November 2010 (UTC)
- ah, found it: [5] as of last January. 68% 10.6, 21% 10.5, 8% 10.4, and the remainder of the X systems accounting for roughly 2%. 21,000 votes, though, so it's a decent sample even for a convenience sample. The poll didn't go into OS 9 users, though the previous poll on system version, which was held back in 2006, showed only 0.69% of users using OS 9 and only 0.21% using OS 8 or earlier. --Ludwigs2 09:52, 8 November 2010 (UTC)
- It may be a 'decent sample' but clearly not a random one, not even close (to state the obvious, people visiting a site like MacOSXHints are likely to be fairly technically minded). I would trust a decent scientific poll with only 2,000 people more than this one with 20,000 votes. Nil Einne (talk) 12:30, 8 November 2010 (UTC)
HELP. How do I remove specific auto-suggestions from my Google Chrome's URL bar?
Anytime I start typing any web address with "d," guess what is the first site Google Chrome suggests???
It's DailyDiapers.com!
I feel haunted by that reappearance; even though I only go there on Incognito mode now, I still get reminders that I used to surf it on normal mode.
A BIG problem would be if a friend borrowed my laptop and decided to type up a website that started with "d." You can imagine what could happen next!
Now how do I remove that offending auto-suggest from ever showing up again?
(I had to register a new name just to ask embarrassing questions; I wasn't even going to post this from my IP because that could trace back to me as well.) --EmbarrassedWikipedian (talk) 09:58, 8 November 2010 (UTC)
- Step by step instructions from http://www.google.com/support/forum/p/Chrome/thread?tid=6501481bab1c67eb&hl=en
Turning off Auto-Suggestions (also see Reference 1)
1. Clear your browsing history
2. Click the Tools menu
3. Select Options
4. Click the Under the Hood tab and find the Privacy section
5. Deselect the 'Use a suggestion service to help complete searches and URLs typed in the address bar' checkbox.
6. Click Close.
General Rommel (talk) 10:16, 8 November 2010 (UTC)
Extra note: Under my dev version of Chrome, it appears as 'Use a prediction service', not 'suggestion service'. General Rommel (talk) 10:18, 8 November 2010 (UTC)
Long-Term Data Archival
Hi Everyone,
I am seeking to store large amounts (~2TB) of data for a very long (minimum <s>50</s> 25 years) period of time. The media has to be rewritable, and the data will be very infrequently (annually or semi-annually) rewritten. The data must be preserved for at least <s>50</s> 25 years. Which of the following types of storage media will (or is at least most likely to) preserve data for this length of time?
- Ultrium LTO-5 750GB Rewritable Tape Cartridge
- Removable Serial ATA Hard Disk Drive
- Ultra Density Optical 30GB Rewritable Disk
- Multi-Level Cell Solid State Drive
Thanks, everyone. Rocketshiporion♫ 12:36, 8 November 2010 (UTC)
- One of the big problems you'll have either way is whether you'll be able to edit any of those mediums in 50 years. 50 years is a loonnngg time in terms of information technology — consider that this is what a computer looked like even less than 50 years ago. (Computers from 1960 did not even use integrated circuits!) Will SATA be around in 2060? I wouldn't bet on it. SATA itself dates from only 2000 or so, as far as I can tell. Even parallel ports only go back 40 years.
- Rather than trying to find one system that will last for 50 years, might I recommend that a rotating system be used? Pick a system that will last for 10 years. After 10 years, upgrade it to whatever the equivalent is at that time. After another 10 years, repeat. You've both reduced your necessary lifespan for the system significantly, and also basically guaranteed that it'll be compatible with whatever fancy new computers there are in 2060. 10 years is an acceptable tech jump — people have all sorts of common methods of playing 10-year-old software, or using 10-year-old peripherals — whereas 50 is a bit much. And if you forget to upgrade it after a point... well, at least you're 10 years more up to date than you would have been otherwise.
- A brief analogy. The Bible did not survive to modern times because people made one very good copy of it and kept it very safe. (There are a couple verrry old copies, but the fact that they have survived is basically attributable to luck.) It survived because people were constantly re-copying it, on modern paper, with modern technology.
- All of the above is making the assumption that you'd have said drives in a place accessible to upgrades. If you're putting it on a rocket ship, well, I suppose that would introduce additional variables. --Mr.98 (talk) 14:00, 8 November 2010 (UTC)
- As the first Parallel ATA HDDs appeared in 1986, and there are still motherboards being sold with PATA connectors, I am quite confident that SATA (which AFAIK appeared in 2004) will be around at least as long, until 2028. In addition, most of the SSDs of which I'm aware use SATA, and those that don't use FC. Magneto-optical disks (which have been superseded by UDO disks) were introduced in 1985, and they are still in use today, 25 years later. I agree that 50 years is a very looooong time for data storage media, so I shall amend my question - see above. Rocketshiporion♫ 19:38, 8 November 2010 (UTC)
What does this line of assembly mean?
Hi! I'm working on some assembly code for class and I haven't been able to understand this line:
movzbl (%eax), %eax
For some context, %eax, before the operation above, was a char *. After this operation, the value of %eax became an integer. From the little I picked up, this has something to do with turning off the first 24 bits (?) of %eax, and doing something to the rest (?). Could someone please help me out? Thanks! —Preceding unsigned comment added by Legolas52 (talk • contribs) 14:58, 8 November 2010 (UTC)
- movzbl means retrieve (move) a byte (a char), add 24 zero bits to it (to make it a long, which is 32 bits in x86 assembly, whatever long is in C), and store it. The parentheses mean "the memory at address" (so they correspond to the * operator in C). So it's equivalent to
eax=*(char*)eax;
in C, bearing in mind that assembly has no strong type system so things like casts are entirely implied by how you use the values in question. You thought it meant "turning off the first 24 bits", but it actually means "adding 24 'off' bits". --Tardis (talk) 15:25, 8 November 2010 (UTC)
Eye Altitude in Google Earth
What does the eye altitude shown on the Google earth screen mean? —Preceding unsigned comment added by 113.199.204.115 (talk) 15:15, 8 November 2010 (UTC)
Mongolian script support
Hi, I know this is becoming a recurring question here, but I checked Multilingual support as well as Multilingual support (East Asian) and did some Google searching, and yet didn't manage to find where I can download support for the Mongolian script so my browser can display this properly: ᠤᠯᠠᠭᠠᠨᠪᠠᠭᠠᠲᠤᠷ. Could you help me? --Theurgist (talk) 15:40, 8 November 2010 (UTC)
Subliminal
I spend all day in front of a computer. I want to flash subliminal messages to myself on the screen at random intervals, for example "Get a girlfriend" or "Improve your life". What programs could do this, easily and at low or zero cost? Thanks. —Preceding unsigned comment added by 178.66.8.154 (talk) 19:40, 8 November 2010 (UTC)
- "J", would be the first one. "Y" would be the last. In between, in no particular order, "I", "O", "N", "H", "T", "E", "N", "V", "A".... 85.181.151.31 (talk) 20:31, 8 November 2010 (UTC)