Wikipedia:Reference desk/Computing: Difference between revisions

From Wikipedia, the free encyclopedia
== Excel Question ==

In Excel, how do I create a list for one cell?

Revision as of 23:15, 20 November 2007


Welcome to the computing reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



After reading the above, you may ask a new question in the appropriate section.
How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
See also: Help desk, Village pump, Help Manual


November 14

Firefox cache

I posted this thinking it was a Wikipedia issue, but I'm convinced now that it isn't, so I've moved my question here. Can anyone think of a reason why Firefox's cache would stop "working"? I mean, files appear to be cached, but every time I hit the back button, the old page reloads (and if I click a link, things like images and stylesheets are reloaded, even though they should be cached). I'm running Firefox 2.0.0.9, and this problem doesn't occur in Opera, nor on another computer I have running Firefox.-Andrew c [talk] 14:43, 14 November 2007 (UTC)[reply]

This could be a long shot, but you might want to check the value for browser.cache.check_doc_frequency in the about:config screen. If it is set to 1, that means it will reload a page every time. The other valid values are 0 = Once per session, 2 = Never, and 3 = When appropriate/automatically. 3 is the default. I have no idea how FF determines "when appropriate". --LarryMac | Talk 15:26, 14 November 2007 (UTC)[reply]
Good idea. But it has to be something else. It was set for 3, the default. I changed it to 0, and the pages would still reload when I hit back. At this point, I think I'm going to try to contact Firefox support. Thanks for your help though.-Andrew c [talk] 14:48, 15 November 2007 (UTC)[reply]

How did the spammers find my gmail account?

I have a gmail account whose address I've never published. I use it strictly as an anti-spam filter for another address (which I also keep unpublished). I include a (slightly edited) copy of the spam headers below. Any thoughts on how the spammers managed to find this gmail address and if there's anything I can do to stop this from happening in the future?

Delivered-To: REDACTED@gmail.com
Received: by 10.78.164.8 with SMTP id m8cs45882hue;
        Tue, 13 Nov 2007 21:02:02 -0800 (PST)
Received: by 10.70.72.11 with SMTP id u11mr2554614wxa.1195016518126;
        Tue, 13 Nov 2007 21:01:58 -0800 (PST)
Return-Path: <kimala_nair@yahoo.com>
Received: from mail.com ([59.92.80.99])
        by mx.google.com with SMTP id h20si474712wxd.2007.11.13.21.01.11;
        Tue, 13 Nov 2007 21:01:58 -0800 (PST)
Received-SPF: neutral (google.com: 59.92.80.99 is neither permitted nor denied by domain of kimala_nair@yahoo.com) client-ip=59.92.80.99;
Authentication-Results: mx.google.com; spf=neutral (google.com: 59.92.80.99 is neither permitted nor denied by domain of kimala_nair@yahoo.com) smtp.mail=kimala_nair@yahoo.com
Message-Id: <473a8146.1486460a.600b.3e78SMTPIN_ADDED@mx.google.com>
Reply-To: <kimala_nair@yahoo.com>
From: "Jonathan Mesmar" <kimala_nair@yahoo.com>
Subject: Get over 4000 TV Stations for a small one-time fee!
Date: Wed, 14 Nov 2007 10:31:56 +0530
MIME-Version: 1.0
Content-Type: text/plain;
	charset="Windows-1251"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2600.0000
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000

- Donald Hosek 16:08, 14 November 2007 (UTC)[reply]

If the removed part is an everyday, regular word then it may simply have been guessed, especially if the spammer was SMTPing themselves and guessing starts of e-mails. Lanfear's Bane | t 16:20, 14 November 2007 (UTC)[reply]
I have some gmail addresses from its early history that are single words, and they get pummeled by spam --ffroth 16:58, 14 November 2007 (UTC)[reply]
Spammers have botnets that can send billions of spams out every day. They can easily send spam to a@gmail.com, b@gmail.com ... aabc@gmail.com, aabd@gmail.com ... kainaw34@gmail.com, kainaw35@gmail.com ... so you can see that they don't "know" your gmail address. They are sending spam to every possible gmail.com address. -- kainaw 17:41, 14 November 2007 (UTC)[reply]
Damn, that means that they're going through in excess of 10^9 pure alphabetic combinations just to get to me. At the moment it's a fairly small volume, so I'll live with it (I think I probably average less than one per day, and Google's spam filter catches it), but it makes me feel a bit better knowing that it's a brute-force attack (although I'm a little annoyed that Google doesn't do a better job of protecting against it). Donald Hosek 18:05, 14 November 2007 (UTC)[reply]
It is difficult to stop. Consider yourself a mail host. You have mail coming in from thousands of computers located all over the world (OK - so most are in the U.S. - but we're being theoretical). The mail comes in designed specifically to get past all your known filters - changing the subject and message just enough to look unique. You can't block by a block of IP addresses, because this is being sent by infected computers all over the Internet. You can't block by subject line because it keeps changing. You can't block by the message because it keeps changing too. When it comes down to it, it isn't the spammers who should be put against the wall and shot. It is the people who feel it is perfectly fine to infect their computers with free music trading garbage, turning their PC into another spamming bot. Stop people from turning their computers into bots and you'll stop the botnets. -- kainaw 19:22, 14 November 2007 (UTC)[reply]
It's completely ridiculous for spammers to go to jail- for what? Sending too many emails? As if that's not exactly what the infrastructure is made for.. same with malware authors- people are voluntarily running the code on their machines, the writers of the code aren't responsible for any damage -_- --ffroth 21:19, 14 November 2007 (UTC)[reply]
They absolutely should (and indeed sometimes do) go to jail for doing it. In the US at least they are violating the CAN-SPAM Act of 2003 and costing us billions a year in networking gear to carry email - of which 90% is not needed nor asked for. Worse still, as responsible businesses are too smart to advertise that way, Spam is almost always something disreputable. Sending 10 year old girls emails about making their penises longer and what the consequences of this might be is certainly prosecutable in many jurisdictions under indecency laws. So you are 100% wrong. If we could find a better way to enforce these laws, Spammers would and most certainly should go to jail. To the extent that CAN-SPAM has been used, people have gone to jail. SteveBaker 00:57, 15 November 2007 (UTC)[reply]
*rolls eyes* It's just email. It costs billions a year because mail servers are designed to serve indiscriminately- a mind-bogglingly stupid idea. Almost as stupid as enacting a law specifically to prevent sending too much email.. 100 years in the future the local newspaper will feature a "hilarious laws from the last century that are still technically in effect" column, and some kid will ask "what was 'eee maol' mommy, and why did they spend so much time regulating it instead of solving real problems?" It's astonishing that email has lasted this long; it should either go peer-to-peer or be a small network of databases run by Google, yahoo, ISPs, etc, instead of using the internet to let SMTPs contact mail servers. Anyway, it's not like it's costing "American taxpayers" billions per year- business is paying these costs and still staying profitable, so it's not crippling to the internet --ffroth 01:25, 15 November 2007 (UTC)[reply]
Umm, "businesses" don't pay for my bandwidth, I do. People are sending messages to my e-mail server at my expense without my wanting them to do so (in fact, with my explicitly wanting them to not do so). They go to great lengths to circumvent my trying to get them to stop. I'm opposed to the death penalty. But I'd make an exception for spammers. There does need to be a better e-mail system, but pretending that spam isn't a problem because "businesses" can absorb these costs is absurd. Donald Hosek 06:42, 15 November 2007 (UTC)[reply]
What's really absurd is killing people for sending too much email! --ffroth 17:13, 15 November 2007 (UTC)[reply]
It's astoundingly naive to say "business pays for it". Where do the businesses get their money? It's from goods and services that they make or sell. They either sell them to other businesses or to us. If they have to pay more for their internet access - then that puts up their operating costs - which means that we have to pay more for our homes/cars/food/service/utilities - so in the end we definitely pay for Spam. In fact, because businesses have to make a percentage markup on their products, we pay not only for the cost of the spam but also for the business profit margin on the cost of the spam - so it's actually more expensive if they pay for it than if we do directly. SteveBaker 18:46, 15 November 2007 (UTC)[reply]
You don't have to pay for their products- it's not like taxation where you have to pay. I don't know, I'm just saying that is the businesses' problem, and the government should butt out. If businesses want to run open mail servers that just indiscriminately accept email without cryptographic signatures, then they have to deal with the inevitable spam. They're just asking for it. And they don't have to pay for the spam-handling infrastructure if they don't want to. --ffroth 22:30, 15 November 2007 (UTC)[reply]
Um, malware is usually not strictly voluntary (don't confuse ignorance with consent). And I think it's a little silly to suggest that people who write code that lets people do really nasty things to other people's property aren't in any way responsible. (If they should be legally culpable is a different question, a subset of responsibility. I'm not necessarily arguing that—there are times when they should and times when they aren't.) People with specialized knowledge are always somewhat responsible for how that is used, whether they are engineers, scientists, computer programmers, whatever. Obviously the people who actually use the tools take up the brunt of the responsibility, but the person who put them out there to use certainly is in an important way responsible as well. Tools don't self-construct. --24.147.86.187 22:55, 14 November 2007 (UTC)[reply]
Personally I think the spam issue by itself is a bit overrated. OK, it's a little annoying, but honestly, it's not that hard to just delete it. OK, the total resources it consumes are large, but on an individual level it doesn't amount to much. Botnets, though, are a major problem—they make a lot of things other than spam possible, such as distributed infrastructure attacks, which is a recipe for bad news. But spam itself — who cares? It's one of the many small annoyances that come with any new technology, and not half as bad as some of the annoyances that come with other technologies (e.g. I consider almost all of the negative side effects of automobiles—pollution, accidents, noise, gridlock, suburban sprawl, oil dependency—to make spam look like a laughable problem). Anyway, if you really hate spam, the real perpetrators are the guys who cooked up SMTP and made it such an easy protocol to hijack. It's basically an ideal system for spam by design. --24.147.86.187 22:55, 14 November 2007 (UTC)[reply]
If it takes me (say) 2 minutes per day to erase junk mail from my inbox at work, that's 1/30th of an hour or 1/240th of an 8-hour day. So around 0.4% of the labour costs for staff with email use comes from the cost of deleting it (not including the cost of extra network bandwidth, etc). I think that's actually a huge underestimate, because a lot of people actually read through each message before deleting it and/or are confused by why they are getting it and so forth. But even 0.4% is actually a huge amount when multiplied over the entire economy. As for the idea that the SMTP designers are at fault, would you consider it not to be an offense to steal someone's car if they leave it unlocked? That's what you're saying. Then there is the problem of younger children and the sheer volume of obscene shit that comes from these idiots...the only defense for caring parents is to keep kids from using email until they are old enough to understand this crap. My elderly mother got very excited about email and being able to send photos of grandchildren back and forth and getting in touch with her old friends...then she started getting all of the usual junk mail and was so horrified that she turned off the computer and won't turn it on again. Spam is a terrible thing and I'm greatly disappointed that so many people here have not thought through the issues to even the slightest degree - I didn't appreciate that there were so many shallow thinkers here. (sigh) SteveBaker 18:46, 15 November 2007 (UTC)[reply]
If there's a machine designed to send mail, using it more than you should isn't equivalent to stealing a car if it's unlocked. And I'm not saying the SMTP designers are at fault- they could never have anticipated that the internet wouldn't always be so trusting a place. But the mail protocols are outdated and anyone still implementing them just has to deal with the inevitable consequences. I'd give an example, but this very case is really the best example- suppose you set up a mail server that accepts email from anyone on the internet. You are literally inviting anyone on the internet to send you as much mail as they want, because that's what the protocol allows. There's no moral code in protocols- if the server will accept it, and it works, then you're just using the system just as it was designed. If the protocol is stupid and allows massive transmission of unsolicited messages, then get rid of the protocol, don't try to control how people use it. --ffroth 22:36, 15 November 2007 (UTC)[reply]
Since we are now talking about spam in general, I often wonder exactly who responds to these spam messages. Is there REALLY such a market for penis enlargement and erectile dysfunction drugs? Obviously only a tiny, tiny fraction of people respond, but I am still amazed that there is money to be made. I bet the people who respond to spam are the same ones who buy HeadOn. It's really disheartening to think that a sizable number of people can be duped so easily. -- Diletante 00:24, 16 November 2007 (UTC)[reply]

Name of a data structure

For one project I'm working on, I've (re-?)invented a data structure, and I was wondering if there is a name or existing literature for it. Basically, it is a list of values and the indices where they start, sort of like a sparse list except that the previous value continues until the next value starts. At the moment I'm calling it an 'index list'.

If we have this binding:

ex = index_list((0, 'f'), (1, 'o'), (3, 's'))

Then, as_joined_string(ex) -> "foos", but as_normal_list(ex).index(20) -> 's'. Does anyone have a more specific name than 'index list' for this? —Preceding unsigned comment added by 79.75.201.99 (talk) 19:44, 14 November 2007 (UTC)[reply]

The number isn't exactly a repeat count though - it's the start index at which the letter starts repeating. The example we're being given isn't very good - so here is another:
ex = index_list((0, 'a'), (2, 'r'), (5, 'g'), (9, 'h') )
Which yields "aarrrggggh" ...followed by an infinite number of h's or something. If it were runlength encoded, it would be:
ex = runlength_encoded_list((2, 'a'), (3, 'r'), (4, 'g'), (1, 'h') )
There aren't names for all possible tiny variations on a standard data structure - so who cares? Just invent one. This is something like a cross between a classic 1D sparse matrix and classic 1D run-length encoding. Programmers are inventing weird hybrid data structures all the time - we don't usually feel the need to give them names. SteveBaker 21:02, 14 November 2007 (UTC)[reply]

Dedicated Laptop Video RAM

I'm a little confused about video memory in laptops. A fellow at a local computer place told me that the video hardware installed in laptops never have their own RAM, and that they take RAM from the main system's RAM (like most on-board video in PC motherboards). Browsing around a few computer outlets I noticed some of the laptops have the video card advertised with "dedicated RAM". So does this guarantee that the video card has its own RAM (and doesn't go through the same bus as system RAM accesses do)? Or does it simply mean that the RAM it grabs from the system is fixed at all times? Thanks. --Silvaran 20:47, 14 November 2007 (UTC)[reply]

He probably shouldn't have said "never". Most laptops have fairly weak graphics capabilities. But there are exceptions. I can't imagine by "dedicated RAM" they mean anything other than what you're describing. My laptop has 128 MB of video RAM, very separate from main system RAM, on a PCIe bus. So, yes, they do exist. Friday (talk) 20:52, 14 November 2007 (UTC)[reply]
My laptop has a discrete graphics card (low end, but even so it's a heat beast) with 128 MB of video memory. That tech is wrong. Go with an integrated graphics card though - you should have a desktop computer for gaming, and a dedicated graphics card in your laptop will cost you weight, heat, and battery life (though some of those costs can be mitigated). --ffroth 21:15, 14 November 2007 (UTC)[reply]
Obviously Froth is not a fan of gaming in the tub... I got used to my laptop and would never consider a desktop ever again. Of course, I do my "heavy" gaming on a console and save the Mac for strategy games and whatnot. Friday (talk) 21:18, 14 November 2007 (UTC)[reply]
I'm not a fan of consoles either - if you can afford a decent computer, there's no reason to buy a dedicated machine just for games. You'll get much better graphics and a FAR larger library of games (like 2 decades worth of backward compatibility + console emulators + make your own) out of a PC. Also I'm not enthusiastic about spending so much money on a computer locked down with DRM and signed code, to the point where you can't even control your own machine without an illegal mod chip. --ffroth 23:13, 14 November 2007 (UTC)[reply]


November 15

Domplayer

I just downloaded a video off BitTorrent (not illegal... don't worry). When I open the .avi with VLC (newest version), the video says that I must view it with Domplayer. It looks fishy; I don't even want to download it. Is there a way to convert the video so VLC can use it? I'm on a Mac, using Leopard. —Preceding unsigned comment added by 71.195.124.101 (talk) 00:35, 15 November 2007 (UTC)[reply]

This is a growing problem - people offer video for download for free - and indeed, you can download it for free - however, they have used their own non-standard codec to encode it. So to replay your "free" video you have to get their player - which either costs money or requires you to give up personal information to someone whom you already know is a sleazebag - or occasionally may take over your computer and turn it to evil purposes. Give up - just delete the darned thing. SteveBaker 00:46, 15 November 2007 (UTC)[reply]
Exactly what he said. It's some sort of racket; you're better off just not giving into it. Don't encourage people with these irritating schemes. --24.147.86.187 00:50, 15 November 2007 (UTC)[reply]
Yep it's a scam --ffroth 01:22, 15 November 2007 (UTC)[reply]

computer ram,specifically dual channel

I am a new computer builder, and I was just wondering what my options are when I want 4 GB of memory, which my motherboard says it can handle, and I'm shopping around. I ran into 2 x 2048 MB, which is 4 GB, and 4 x 1024 MB, which is 4 GB also, and I can't understand the difference. I want to get the 2 x 2048 because it saves slots and it's the same speed and also DDR2, so I don't understand. —Preceding unsigned comment added by 24.34.113.110 (talk) 01:24, 15 November 2007 (UTC)[reply]

I would go with 2 sticks of 2 GB, which leaves room for a future upgrade. But be warned: you'll need a 64-bit system to take advantage of any RAM above 4 GB. --antilivedT | C | G 06:10, 15 November 2007 (UTC)[reply]
also be warned that your motherboard might be 4GB max, so if it has 4 slots each one might be 1GB max. Check your motherboard manual.--Dacium 01:25, 16 November 2007 (UTC)[reply]

If you are running a 32-bit OS (if you don't know what I mean, you probably are) then your PC can only have 4 GB max, including graphics card memory and sound card memory (as well as virtual memory, I think), so you might find yourself only being able to use 3.5 GB of your new memory or similar. TheGreatZorko 09:50, 16 November 2007 (UTC)[reply]

No, your video RAM doesn't count, as it cannot be addressed by the CPU directly, so you can have both 4 GiB of RAM and chain two 8800GTXs together with 1.5 GiB of VRAM (if you have the money). Sound cards generally don't have RAM, and what they have can't be reached by the CPU directly anyway. The 4 GB limit is due to the CPU running out of addresses: it uses a 32-bit value for memory addresses, and so can only have 2^32 of them. In addition, the OS holds page tables and the like (or so I heard) in the same space, and thus the actual amount of memory you can use under a 32-bit system is less than 4 GiB. Things like Physical Address Extension can be used to overcome that limitation (other than upgrading to 64-bit processors), but the software support is very erratic, so there's not much you can do about it. --antilivedT | C | G 09:44, 17 November 2007 (UTC)[reply]

Photoshop 7.0 Help

Hi, I use Photoshop 7.0 and I am trying to make a nice picture collage. I used to know how to do this but I forgot, so someone may be able to help me. I want to import these pictures into Photoshop (about 400 of them, yeah it's a BIG collage) but I want to shrink them to about 2 inches, grayscale them and possibly crop them as well. How do I set Photoshop to do this to these pictures automatically? I know I've seen it before. I thought it might be in the filters menu but I can't remember... Any help would be GREATLY appreciated!! Thanks a million! --Zach 02:06, 15 November 2007 (UTC)[reply]

What you want is the Actions menu. Once you have recorded a set of instructions (size, grayscale, crop) as a new Action, then you can apply it to a whole directory of images via File > Automate > Batch. --24.147.86.187 02:31, 15 November 2007 (UTC)[reply]
AHHHH!!! You are a LIFESAVER!!! Thank you SOOOO much!! --Zach 02:34, 15 November 2007 (UTC)[reply]

Get Mozilla Thunderbird 2.0 to recognize folders in email account

I just got Mozilla Thunderbird to help get a lesser-used email address under control. Basically, I am using Thunderbird only for online email purposes, not downloading to offline. Anyway, the program recognizes my main inbox and lets me look through that, but Thunderbird does not pick up my other email folders.

How can I get it to do that?

I am using Thunderbird 2.0, and I have the email account set as an IMAP, which was what the email account administrator recommended. Guroadrunner 05:31, 15 November 2007 (UTC)[reply]

Answer -- right click the Inbox on the left hand side menu and select "subscribe". From there you go to a subscription menu where you can choose which folders you want to show up or access from the remote email account. (I got this advice from the support IRC channel Mozilla set up for Thunderbird) -- Guroadrunner 06:12, 15 November 2007 (UTC)[reply]
The Mozilla IRC support channel was #thunderbird @ irc.mozilla.org and there are a bunch of tutorials out there:
There's also a small mention at http://kb.mozillazine.org/IMAP:_advanced_account_configuration (but it needs more coverage) --Jeremyb 06:35, 15 November 2007 (UTC)[reply]

region-free DVD player (laptop)

Is there a legal and free way of watching DVDs independent of region on a laptop? I've heard about firmware (without any clear idea of what it is), and the Wikipedia article DVD region code claims that "most freeware and open-source DVD players ignore region coding". I've read that firmware can cause problems with your DVD drive, and haven't found any free player yet that claims to be "region-free". So - is there a possibility or not? Thanks, Ibn Battuta 07:16, 15 November 2007 (UTC)[reply]

VLC media player will play any DVD region --Jeremyb 07:57, 15 November 2007 (UTC)[reply]
Also you can flash the firmware of your DVD drive- firmware is just the code in the device that tells it how to work, and if you replace it with code that ignores region coding then that's not a problem. It'll void your warranty and is probably a violation of the DMCA though --ffroth 17:13, 15 November 2007 (UTC)[reply]
My Linux machines have all played DVDs from the US, Japan, UK, France and Australia/New Zealand regions using mplayer without problems and without firmware changes, so I don't entirely understand how the firmware matters. Having said that, I've heard of a Windows user who played a British DVD on their US computer's DVD drive as the very first video they ever played on it. After that, the darned thing wouldn't play US DVDs. It's like it 'bonded' itself to a particular region on the very first disc it played. The whole system is a complete mess. But I would certainly try some open-source DVD software. SteveBaker 18:10, 15 November 2007 (UTC)[reply]
Yep. That windows user could have changed the region coding though- the default firmware allows you to change it up to 5 times (usually) before it locks in. It's pretty obscure though, you have to go digging in settings --ffroth 22:25, 15 November 2007 (UTC)[reply]

AnyDVD from SlySoft does a good job of negating all the region coding. Google 'AnyDVD' and check out their website. 88.144.64.61 07:08, 16 November 2007 (UTC)[reply]

DVD Decrypter does similar but is free. I can't actually access the SlySoft website from where I am, but from memory I think it costs money. Both of these programs might be illegal depending on where you live, by the way, but then again making your drive region-free might be too. TheGreatZorko 09:48, 16 November 2007 (UTC)[reply]

Issue with MS excel file

A sheet in MS Excel has data in only 2 columns and 30 rows. There are no formulas or formatting, except that some of the cells in these two columns have a fill colour.

The problem is that the size of the file comes to 24.6 MB. If I select all the cells with data and set that area as the print area, then the file size comes down to around 10 MB. Is there any other way to decrease the size of this file to something much smaller? There isn't that much data in the file; it should be around 20 KB. —Preceding unsigned comment added by 210.18.82.102 (talk) 10:23, 15 November 2007 (UTC)[reply]

Compress it using something like DEFLATE? It seems awfully big for such a simple spreadsheet. --antilivedT | C | G 10:30, 15 November 2007 (UTC)[reply]
The simplest method is to copy and paste the data into a completely new workbook. If this workbook had a lot of material that was deleted, then that may account for the size. Excel seems to not really delete stuff; it just hides it. That in itself can be a vulnerability: if you send a workbook off to someone, they may recover material you didn't expect them to see. Opening such a workbook in OpenOffice.org Calc will reveal any "hidden" data, as Calc does not seem to have this problem. --— Gadget850 (Ed) talk - 10:31, 15 November 2007 (UTC)[reply]
There is also a MS plugin to remove hidden data. [1] --— Gadget850 (Ed) talk - 10:35, 15 November 2007 (UTC)[reply]


First of all, thanks a lot for all your suggestions. Will there be any compatibility issues if I install the OpenOffice.org applications on a system that already has the MS Office applications installed? —Preceding unsigned comment added by 210.18.82.102 (talk) 11:05, 15 November 2007 (UTC)[reply]

I run both at work with no issues. I've also used OpenOffice.org Writer to repair corrupted MS Word files for customers. Let us know how it works out. --— Gadget850 (Ed) talk - 11:16, 15 November 2007 (UTC)[reply]

C help: Why two gets() in this program?

Can someone please tell me why I need two gets() calls in this program, one after the other? If I don't, the program skips the statement and doesn't read any value into the string.

#include <stdio.h>
struct emp
{
int m,n;
char line[20];
}x;
main()
{
scanf("%d %d",&x.n,&x.n);
gets(x.line);
gets(x.line); ///WHY THIS?
printf("%d %d",x.m,x.n);
puts(x.line);
}

Likewise, I need to give two scanf statements at the end of a do..while loop

do
{
//blah blah
printf("Do you want to continue? (Y/N)");
scanf("%c", &ch);
scanf("%c", &ch); ////WHY THIS?
}while(ch=='y');

Any help would be appreciated. Thanks!--202.164.142.88 14:31, 15 November 2007 (UTC)[reply]

The gets() function reads the input until it encounters a newline character (your pressing of the Enter key). It does not read the newline character, though, but leaves it in the buffer. The next time gets() is called, it immediately gets the newline character and won't let you enter anything. You may want to try fflush(stdin) before gets() See e.g. http://www.physicsforums.com/archive/index.php/t-77069.html. ›mysid () 14:46, 15 November 2007 (UTC)[reply]
That helped. Thank you!--202.164.142.88 14:58, 15 November 2007 (UTC)[reply]
  • Actually, Mysid's not quite right. gets() does indeed remove the newline from the input buffer (although it replaces it with a '\0' in the caller's memory buffer). The reason you need two gets() calls in this program is that scanf() is only reading until it gets a second integer, as its format directed it. The first gets() removes the newline following those two numbers, and then the second gets() reads the text line. Adding a newline to your scanf() format will fix it, and you can then remove the second gets():
scanf("%d %d\n",&x.n,&x.n);
All that said, scanf() has lousy behavior that will eventually make you cry, and gets() should *never* be used for any reason. Consider what will happen if the user enters 30 characters on the line. Where will those excess characters be stored? If you're very lucky, you'll get an immediate crash, but that kind of luck never holds out.
Also, FYI, 1) you can use the <code> tag for code, 2) you're scanf()ing into x.n twice, and 3) fflush() is only meaningful for output buffers. --Sean 16:42, 15 November 2007 (UTC)[reply]


AAAARRRGGGHHHHH!!! WHAT!! Msid's was totally 100% the wrong answer! It's so far wrong that it's wrapped around and is almost right again!
From the 'man' page for 'gets':
Reads  characters  from standard input until a newline is found.
The characters up to the newline are stored in BUF. The newline
is discarded, and the buffer is  terminated with a 0.
So gets() most definitely reads the newline! Jeez - isn't that C programming 101?
Calling flush is a completely stupid 'fix' because it throws away any type-ahead - which will utterly screw you up if someone does a multi-line cut/paste into your program or something! It also has all sorts of icky consequences if the input of your program is redirected from some kind of fancy I/O device that may do something majorly unwanted in response to a flush call!
What's actually happening is that the 'scanf' was not told to read the newline - so it stopped reading after the last character of the number. The first 'gets' read the newline that scanf forgot from the end of your first line of text and the second gets grabbed the second line. The correct fix is to change:
 scanf("%d %d",&x.n,&x.n);
...to... scanf("%d %d\n",&x.n,&x.n);
^^
...which correctly tells scanf to read the newline at the end. Then you need just one 'gets()' to read the next line - and no flush call is ever needed!
SteveBaker 16:56, 15 November 2007 (UTC)[reply]
Oh no.. I apologize! I promise to only answer Java questions in the future. :-) ›mysid () 17:23, 15 November 2007 (UTC)[reply]
  • FYI, fflush() doesn't mean "discard any queued-up input", but "send any *output* you've queued up to wherever it's going". fflush() on an input handle does nothing. --Sean 20:00, 15 November 2007 (UTC)[reply]
In terms of general advice here, both scanf and gets are kinda 1970's functions and should be avoided.
  • gets() does no checking so if you type in more characters than the buffer can hold you'll probably crash the program. In the 1970's, that was considered a simple programming error - no biggie. In the 2000's, this is called a 'buffer overrun' and it's not just that it can crash your program, it's also that some evildoer with access to your program can shove cunningly designed data into the overrun area and take control of your machine. gets was at one time the number one hole through which viruses could get into a computer! Switch to using 'fgets()' instead - it is passed the length of your buffer array and will take care not to overrun it. Beware though that fgets (for some totally unaccountable reason) stores the newline into the buffer - where gets reads the newline but doesn't store it.
  • scanf() suffers from the problem that you have no error recovery possibilities. You can look at the return value from scanf and tell that the user only typed one number instead of two - but since the data he actually input is now gone forever, you can't help him out very much. Generally, it's better to grab the data from a line of text into a string (using fgets) and then use sscanf() to parse it. If sscanf fails then you can try again (perhaps reading only one number and defaulting the other one to some sensible value). At the very least you can say "You typed '24,fiftysix' and I needed two decimal numbers separated by a space."
SteveBaker 17:27, 15 November 2007 (UTC)[reply]
fgets() stores the newline because it can stop reading at a newline or at EOF, and the caller needs to know which occurred. You can use feof() to test it, but if for some reason fgets() reads ahead, EOF could be set upon reading the last line of a file even if it bore a newline. Terminals might also make EOF-testing weird. --Tardis 21:24, 15 November 2007 (UTC)[reply]
I dislike that explanation. C's standard library originated on UNIX and I/O redirection was standard back then - so gets and fgets could both be reading either from a keyboard or a file on disk or...anything really. That doesn't excuse the differences. SteveBaker 04:57, 16 November 2007 (UTC)[reply]
Thanks again, guys! The detailed explanations have really been useful. I've noticed the warning from gcc about gets being dangerous, and I always wondered why. Now it makes sense. I use gets and scanf mainly because that is what is what I'm instructed to use. Maybe I'll be told to use the other functions as my course progresses. But I'll be using fgets from now on. Thanks again!--202.164.136.56 14:28, 16 November 2007 (UTC)[reply]

Regular Expressions

This is a question about regular expressions in java. Say I'm trying to find a string of a's surrounded on both sides by one or more b's. Normally, I'd use "[b]+[a]+[b]+" to find this. But what if I want the returned string (when I call Matcher.group()) to be just "aaaaa" instead of "bbbaaaaabbbb"? Is it possible to do this with regular expressions in a single go, or do I need to throw another regular expression at the result from the first? In other words, can I constrain the string I'm searching for by specifying characters that are not in the string that matches it? risk 17:01, 15 November 2007 (UTC)[reply]

Do you allow bbbaaaa and aaaabbb also? If so, use b*a+b* - the * allowing zero or more. -- kainaw 17:04, 15 November 2007 (UTC)[reply]
But he wants just the As --ffroth 17:09, 15 November 2007 (UTC)[reply]
Also he said "both sides by one or more b's" --ffroth 17:10, 15 November 2007 (UTC)[reply]

I don't know Java regexps off the top of my head, but in Perl, you'd say "[b]+([a]+)b+" and after the regexp, if it matches, the variable $1 will contain the value of the (first) parenthesized portion of the match. Perhaps Java can execute this Perlish regexp as well?

Atlant 17:18, 15 November 2007 (UTC)[reply]

(edit conflict)
Yes. I misread the question. Use b+(a+)b+ - which makes the a's a captured group. Then call Matcher.group(1) - the 1 being the first captured group. -- kainaw 17:19, 15 November 2007 (UTC)[reply]
This works beautifully, thank you. I actually tried this a couple of times, but I thought it didn't work, because of another bug. risk 17:57, 15 November 2007 (UTC)[reply]
Also note that one "b" on each side would suffice. Then you can also use look-ahead and look-behind assertions: "(?<=b)a+(?=b)", and then match.group() would contain the match. This has the added benefit of adjacent groups of "a"s separated by "b"s all matching correctly, because the "b"s are not gobbled up in the match. --Spoon! 21:57, 15 November 2007 (UTC)[reply]
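A small self-contained sketch of both approaches discussed above (the class and method names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    // Capture-group approach: the b's are consumed by the match,
    // and group(1) holds just the run of a's.
    static String captured(String input) {
        Matcher m = Pattern.compile("b+(a+)b+").matcher(input);
        return m.find() ? m.group(1) : null;
    }

    // Look-around approach: the b's are only asserted, not consumed,
    // so adjacent runs of a's separated by single b's all match too.
    static String lookAround(String input) {
        Matcher m = Pattern.compile("(?<=b)a+(?=b)").matcher(input);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        System.out.println(captured("bbbaaaaabbbb"));   // aaaaa
        System.out.println(lookAround("bbbaaaaabbbb")); // aaaaa
    }
}
```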

program in c language

How we can make a program for finding a circular number in c language —Preceding unsigned comment added by satya narayan sharma (talk) 18:10, 15 November 2007 (UTC)[reply]


Well, I'm not going to do your homework for you by writing it - but here are some hints: Circular numbers are also called Automorphic numbers. Our article on that subject offers a little help - most significantly that there are only 2 numbers with the same number of digits that are automorphic: one ends in a 5, the other in a 6. So you need only test numbers 0-9 until you find 5 and 6 (which are automorphic), then 10 to 99 - testing only 15,16,25,26,35,36,...95,96 - then 100 to 999, testing 105,106,115,116...995,996...and so on. For each number of digits you can do just two separate searches - one testing numbers that end in 5 and another only for those that end in 6 - and once you've found one of each, you can stop the search and go on to numbers with one more digit. This will be much faster than exhaustive testing of all of the integers. To test whether you have a match, you need to square the number and then take the remainder after dividing by 10^(number of digits) (using our friend the '%' operator) and see if that's equal to your original number. In practical terms, you can do this efficiently only up to numbers around 65535, because the square of that number will be too big for an integer. You can (with some care) use long integers to get up to numbers around 4 billion - but beyond that you'd need to either find an arbitrary-precision math library or write your own. SteveBaker 18:59, 15 November 2007 (UTC)[reply]

Power supply question

Does the reliability and durability of a computer power supply have any correlation to its actual power? If I want a power supply that lasts long without breaking, should I go for low-power or high-power models, or does it even matter? JIP | Talk 18:49, 15 November 2007 (UTC)[reply]

Running a power supply close to its limit will certainly shorten its life. Hence a higher-powered supply should live longer than a low-powered one that's closer to its limit. SteveBaker 19:00, 15 November 2007 (UTC)[reply]
Power supplies are one of those computer parts that do commonly break. You might consider using two, also. Some hardware (mainly aimed at the server market I think) can be had with redundant power supplies. Friday (talk) 19:13, 15 November 2007 (UTC)[reply]
OK, so if I ever decide to replace my current PC with my "dream machine" (3.0 GHz processor, 4 GiB RAM, 320 GiB hard disk, DVD/RW drive, two network cards) I have to consider specifying two power supplies. Thanks for the answers. JIP | Talk 19:17, 15 November 2007 (UTC)[reply]
It's unlikely that you'll be able to "specify" 2 if you're ordering it through an interactive application, but if you're building it yourself you should consider 2 --ffroth 22:23, 15 November 2007 (UTC)[reply]
Using two is unnecessary and is likely to result in high-frequency data errors, as there will be a capacitance difference between the grounds. The only thing that matters with power supplies is the temperature. If you run a power supply at its fully rated load, it won't die any faster than at a lower load - provided you can keep both at the same temperature. The reason people typically see them die faster at high loads is that they can't take the heat away as well. If you have good air-con that actually cycles the air and stops the air near the PC from heating up, most likely you can run a power supply at 100% of its load with no problems. If you don't have air-con, expect the local temperature to increase 10 to 20 degrees C and the power supply to die much quicker. Some power supplies will die almost instantly at 100% load if you do not have a 25 degree C ambient temperature. The way to avoid this is to get a more powerful power supply, so that it will not heat up as much. For your computer specs (depending on what video card) I would just get a good-brand 600W supply. Also remember that 99% of power supply failures can be fixed for a few cents - they are always capacitors that have dried out from heat and failed. So if one fails, open it up, look for capacitors that have blown, and just replace them. I am still running power supplies from 2001; almost every second summer I have to replace the caps in them - the ambient temperature here can reach 45 degrees C. Indeed, the ATX standard has power-good / power-on signals, so it's almost impossible to destroy your parts when a power supply fails from bad caps: you will just get the power supply refusing to turn on, or turning off when you have too much load, both because the motherboard detects bad voltages as the output caps fail.--Dacium 01:21, 16 November 2007 (UTC)[reply]
OP: Please note that replacing capacitors in a power supply is not something you should do yourself unless you know exactly what you're doing. There's a ton of excess voltage in those things and they are very dangerous. -Wooty [Woot?] [Spam! Spam! Wonderful spam!] 04:33, 16 November 2007 (UTC)[reply]
You tried to be "correct" using GiB and GB and failed. You shouldn't use GiB when referring to hard drives. Their sizes are advertised in decimal gigabytes. —Preceding unsigned comment added by 85.206.56.248 (talk) 13:58, 16 November 2007 (UTC)[reply]
So that's why my LaCie F.C.Porsche USB hard drive shows up as a "298.1 GB volume" on Fedora 7 Linux. It's advertised as 320 GB, but Linux shows its size in GiB (although it claims to do so in GB). I guess the GB/GiB distinction still takes time to work out. JIP | Talk 09:12, 17 November 2007 (UTC)[reply]

Microsoft's documentation for compilers

Has Microsoft published (free) documentation for writing compilers on its operating systems? (Or how should this be formatted in English?) I'm just not very good in searching for these things. --212.149.216.233 19:10, 15 November 2007 (UTC)[reply]

Microsoft doesn't make processor architectures, just an operating system. You use header files to link into the Windows API - I believe Microsoft publishes these --ffroth 22:22, 15 November 2007 (UTC)[reply]
A compiler is just a normal program. You don't need Microsoft-specific information to write one that will run on Windows. --Sean 02:16, 16 November 2007 (UTC)[reply]
Maybe the intended question was: "Where do I get documentation on the EXE file format (so I can create files that the OS will load) and the assembly-level calling conventions used for system calls so I can request system services?" Those are things an OS vendor should document if they want to encourage compiler development. --tcsetattr (talk / contribs) 03:02, 16 November 2007 (UTC)[reply]
Probably the simplest way is to download the OpenSourced compiler system from Cygwin (which is the GNU CC) - you can get full source code for their compiler, linker and EXE file writer. SteveBaker 04:50, 16 November 2007 (UTC)[reply]
But then you have to link in Cygwin dlls right? Not exactly ideal --ffroth 14:42, 16 November 2007 (UTC)[reply]
Not necessarily-- you can read the source and learn how it's done. The Wednesday Island (talk) 14:01, 21 November 2007 (UTC)[reply]

All the answers above seem good, depending on how you understood me, but because there are no direct links to anything, I'll ask a new question. Supposing that I already knew all about the PE format's structure, could somebody tell me how one really makes a call into Windows using those import tables (on x86/x86-64)? I would guess the program first passes the arguments on the stack/in registers and then uses some interrupt to make the actual call. How is the call actually made? --212.149.216.233 15:33, 16 November 2007 (UTC)[reply]

You'll find some interesting stuff in x86 calling conventions (_stdcall having taken over from FAR PASCAL as being Windows' calling convention du jour), and (as much of the Windows API is calls to DLLs) in Dynamic-link library. I think if you want to see how stuff is done in practice (as opposed to how you'd logically think it should be done, or how the documentation might imply it is done) then a couple of hours with OllyDbg should be very enlightening. -- Finlay McWalter | Talk 17:59, 16 November 2007 (UTC)[reply]
NT, unlike e.g. Linux, has no public system call interface. The public API consists entirely of ordinary functions exported by system DLLs. The Nt* functions in ntdll.dll are thin wrappers for system calls. They use the stdcall calling convention, which is the same as the C calling convention except that the callee, rather than the caller, pops the arguments from the stack. -- BenRG (talk) 22:27, 16 November 2007 (UTC)[reply]


November 16

Newsgroups & Music

I'm not sure if this should be in the entertainment section, computing or where but here goes... With the large record companies having the clout to shut down music sharing internet sites, busting pirate companies all over the world and even taking individual people to court, is there any reason why the usenet groups have remained relatively untouched and generally out of the debate? For a resource that lets people easily download just about everything it seems low on the 'get rid of it' priority list. Kirk UK 88.144.64.61 07:03, 16 November 2007 (UTC)[reply]


It is mostly because the usenet providers don't actually post content; they automatically mirror it. Neither do they flaunt it as a place for illegal files. One usenet provider did recently and has had legal action taken against them by the RIAA (see here). I also think maybe usenet isn't high-profile enough: I'm pretty sure the average internet user knows what Limewire is, and probably even torrents by now, but most won't know what usenet is, and even if they do, they wouldn't be willing to pay for it, or go through the hassle of setting it up. Add to that the fact that it has a large number of legitimate uses (more so than torrents, I would say). Finally, you can't really get rid of usenet; you may as well try to get rid of HTTP. It is just a method of storing data and accessing it. TheGreatZorko 09:45, 16 November 2007 (UTC)[reply]

Also, DMCA notices tend to be sent to people who POST the content. I know when I frequented a posting IRC channel, they would discuss this issue, and several people either had notices sent to them or knew people who had.--152.2.62.27 14:00, 16 November 2007 (UTC)[reply]
It would be far easier to get rid of usenet and http than to get rid of bittorrent.. aren't there centralized servers for usenet that resolve the usenet alt.whatever.whatever style names to locations of machines? Same with the internet.. the primary DNS root is critical for basically the entire consumer internet, and it's probably inextricably integrated into billions of dollars of software that depends on it. Just take out the like 8 data centers that serve root DNS requests. Their location is supposedly secret, but the locations of the US ones are known and I doubt it would be too hard for the MAFIAA to track them down. Meanwhile, bittorrent is decentralized and usually encrypted, and a true peer-to-peer network would be completely impossible to stop without drastic changes to networking standards. --ffroth 14:49, 16 November 2007 (UTC)[reply]
Usenet is a "global, decentralized, distributed" network. --LarryMac | Talk 15:02, 16 November 2007 (UTC)[reply]
There are no centralized servers for Usenet. All the recent articles on every group are physically stored in every NNTP server that carries the group. There's no central directory of NNTP servers, either; each server gets its articles from one or more peers, and each of these peering relationships is set up individually (the administrators get together and draw up a contract). It's less centralized than BitTorrent in that there are generally far more servers carrying a particular newsgroup than trackers tracking a particular torrent. I assume you're joking about taking down the root nameservers. This is not something you can do by court order; it would take a major global catastrophe. I'm sure Usenet and the Internet will eventually die, but only of natural causes. Of course, some people would say that Usenet is already dead; in fact, they've been saying that since 1983. -- BenRG (talk) 20:23, 16 November 2007 (UTC)[reply]
I'm entirely serious about the root nameservers. Granted it might take a global catastrophe to provoke the action, but really. If someone in power really wanted to take them out, all it would take is a special forces team.. they could even take them out one by one at their leisure - who's going to stop them? Rent-a-cops? I don't know if other root nameservers can propagate new root nameserver IPs down to ISPs before their turn is up, but it seems like it would basically break the WWW, at least in the short term --ffroth 05:28, 17 November 2007 (UTC)[reply]
Let's go over this again. Disabling all thirteen (really seventy-odd) root nameservers, even assuming it was physically something that could be done, would not kill Usenet. This is because Usenet does not require the DNS; rather, each newsserver only needs to be able to contact its peers, which can be done by IP address. (Heck, Usenet doesn't even require the Internet, never mind the DNS). However, it would effectively shut down the entire WWW for the entire world (and email, bittorrent...), which might have one or two other political ramifications for anyone foolish enough to try it, even a group as powerful as the RIAA. Marnanel (talk) 14:06, 21 November 2007 (UTC)[reply]
Let me echo what TheGreatZorko said. It is a problem of visibility, most internet users have no idea that usenet exists. Usenet as a discussion forum has (sadly) been eclipsed by web-based forums like this one. Usenet as a binaries distribution hub has been eclipsed by highly visible P2P networks. The usenet paradigm is foreign to most people, joining and decoding binaries posted to usenet is well beyond the technical expertise of most users. -- Diletante 15:41, 16 November 2007 (UTC)[reply]
You can add to that the fact that many (dare I say most) usenet users never returned to usenet after the September that never ended. -- kainaw 15:47, 16 November 2007 (UTC)[reply]
A bigger nail in the coffin for usenet is that mailing lists are more convenient - and their sole disadvantage (that they are a horrible waste of bandwidth) is largely irrelevant in a world where bandwidth has become cheap. Setting up a new usenet group was a major political exercise - setting up a mailing list is a job you can do without anyone's say-so and have ready to roll in about 2 minutes flat. Sure, you could put your new usenet group up in 'alt' - but the odds of the usenet servers of all your potential subscribers actually carrying it were almost zero. Between mailing lists and forums - we've got a better solution. Usenet's only remaining benefit (anonymity) means that these days it's mainly a repository for porn. Sad - but it served its purpose. SteveBaker (talk) 04:29, 18 November 2007 (UTC)[reply]

Sizes of USB flash drives

Hello, My question is how the size of USB sticks are advertised? Are they in binary gigabytes (as RAM) or in decimal gigabytes (as hard- drives) —Preceding unsigned comment added by 85.206.56.248 (talk) 14:05, 16 November 2007 (UTC)[reply]

According to Binary prefix#Flash drives, these particular drives are measured in "'powers of two' multiples of decimal megabytes; for example, a '256 MB' card would hold 256 million bytes." Ian 14:20, 16 November 2007 (UTC)[reply]
A card/stick labeled 256MB almost certainly has a memory chip with an exact power-of-2 of storage locations, but some of it may not be user-accessible. And, how much is really useable will depend upon OS and formatting options. This seems (I work with these every day, so this is "OR") to vary by manufacturer, brand, and even model. The best answer is "You may be able to use up to 256MB of space for storing your stuff, but certainly no more, and probably significantly less". -SandyJax 14:31, 16 November 2007 (UTC)[reply]
If you want any kind of filesystem there's going to be enough overhead to make the difference negligible --ffroth 14:45, 16 November 2007 (UTC)[reply]
My experience with CF cards suggests that the size changes rather dramatically depending on the individual card -- even those from the same manufacturer. My understanding is that different chips have different defects, and so the internal defect managment may mark large areas as unusable. This makes the card report a smaller size than the actual chip should allow. ---- Mdwyer (talk) 17:30, 16 November 2007 (UTC)[reply]
Almost every memory chip uses binary, so you would have the size as per RAM. Keep in mind that because of the file system you lose some memory. A 256MB card could fit 256MB of RAM's data, but not a 256MB file, because the file allocation table etc. take up room.--Dacium (talk) 01:58, 22 November 2007 (UTC)[reply]

Mac Advantage?

Is there any particular reason why Macs have traditionally been favoured by the design industry? I've looked on the entry for Macs and nothing seems to stand out as a massive plus in terms of DTP or Graphics work. Is it that they just look more contemporary? Thanks in advance. 88.144.64.61 14:46, 16 November 2007 (UTC)[reply]

IMO: yes. *brace for impact! --ffroth 14:51, 16 November 2007 (UTC)[reply]
Why would anyone argue with you? You admit that you "hate" the Macintosh operating system[2] so if you're happy in your hatred, carry on; no argument is likely to affect you.
Atlant 16:33, 16 November 2007 (UTC)[reply]
Yeah but I still have to answer arguments; that was what I was bracing for --ffroth 20:28, 16 November 2007 (UTC)[reply]
The software used by the print industry was originally only available on the Mac. Equivalents are now available on Windows (and Linux). However, the Mac-only mindset is well entrenched in the industry. -- kainaw 14:59, 16 November 2007 (UTC)[reply]
I think Kainaw probably has cited the most important reason: Mac's clear early superiority in the "design" environment, but I think it's also worth noting that the user experience on a Mac remains substantially "smoother" than the user experience on a PC. So if your goal is to use a computer as a tool to get some not-directly-computer-oriented work (such as graphic design) done, you may be happier using a Mac. And I say this as someone who, every day, uses Windows/XP, Windows/2K, Sun Solaris, and Mac OS X. If I could, I'd do all my work on the Mac (although a lot of it would be done down in the Unix shell).
Atlant 16:33, 16 November 2007 (UTC)[reply]
Network effects can cause this kind of entrenchment after the original reasons have gone away. Mac software also has a certain design aesthetic that seems to appeal to people who care a lot about that sort of thing. --Sean 15:49, 16 November 2007 (UTC)[reply]
I think the Mac also has better font systems or font-based things, so it is often favoured for this too. They certainly feel like they are designed by designers, for designers. Whether that is pretentious or true is in the eye of the beholder, but presumably due to this the 'better' versions of photo-editing/design software become available on Macs. It will become a self-fulfilling prophecy over time: the more Macs are associated with design, the more designers see the Mac as their platform. -- ny156uk (talk) 17:04, 16 November 2007 (UTC)[reply]
I believe it had better color support, too. That is, it supported the ability to handle Pantone colors accurately. ---- Mdwyer (talk) 17:28, 16 November 2007 (UTC)[reply]
Two words: Aldus PageMaker and LaserWriter. See Desktop publishing for more. --— Gadget850 (Ed) talk - 17:51, 16 November 2007 (UTC)[reply]
Yup, Apple's early support for postscript printing made it the de facto standard for desktop publishing. Speaking as someone who's done design and commercial printing work on both a Mac and a PC, there isn't a huge difference anymore except support of legacy tools (especially font software), and personal preference. -- dcole (talk) 19:53, 19 November 2007 (UTC)[reply]

Ranked list of Wiki visitor user-agent strings available?

Has anyone compiled a ranked list of User-Agent strings reported by Wikipedia visitors' browsers? ---- 64.236.170.228 (talk) 20:37, 16 November 2007 (UTC)[reply]

Saving a page in Internet Explorer

Is there a way to change the default format in which Internet Explorer saves pages? Currently, as I go to File→Save As..., the "Save as type" drop-down box shows "Web page (complete)". Is there an easy way (a registry tweak, perhaps) to permanently change the default to "Web page (HTML only)" or to "Web Archive, single file"? I need this for IE6. Please do not suggest switching to different browser or installing third-party apps, as it is not a realistic option for me. Any other help (if only to confirm that it is impossible, so I wouldn't waste any more time on this issue) would be extremely appreciated.—Ëzhiki (Igels Hérissonovich Ïzhakoff-Amursky) • (yo?); 21:38, 16 November 2007 (UTC)[reply]

I do not have IE (no Windows here), but I am certain that an alternative is to view the source (I assume you know how to do that) and then save the source. Hopefully there is a more straightforward method. This is just in case nobody has one. -- kainaw 23:07, 16 November 2007 (UTC)[reply]
There seem to be plenty of people asking about this, but I haven't been able to find an answer that doesn't involve 3rd-party applications. I thought IE6 might remember which type you chose between sessions, but it doesn't appear to remember it between webpages! I think that Kainaw's solution might be the best, although someone else might know differently. -- Kateshortforbob 11:36, 18 November 2007 (UTC)[reply]
Thanks, Kainaw! Embarrassingly enough, this obvious solution did not occur to me. It is perfect for my needs. Thanks again!—Ëzhiki (Igels Hérissonovich Ïzhakoff-Amursky) • (yo?); 16:34, 19 November 2007 (UTC)[reply]

November 17

Importing DVDs into iTunes

It seems so ironic to me that they make it so easy to illegally download a movie off Limewire and put it on your iPod, but a DVD that you have bought and paid for is so difficult. =o) I've tried googling it, but everything I've found talks about such complicated things like file extensions and converters and stuff. So, in dumbed-down layman's terms, what do I have to do to get a movie from a DVD and onto iTunes? When I go to My Computer, then click on the D drive, then there's a little folder that's labeled VIDEO, but when I click on that, there's a whole bunch of little files instead of one neatly packaged little video file. I'm so confused! 131.162.146.86 (talk) 02:56, 17 November 2007 (UTC)[reply]

I assume by your mentioning of My Computer that you are using Windows (you need to tell us that kind of stuff!!). I don't know the specifics for Windows but in general you need to:
  1. Get a DVD ripper, which will grab the raw VOB video from the DVD for you (it'll be a couple of gigabytes in size)
  2. Get a video file converter/compressor that can resize and compress the VOB file so that it will fit onto your iPod
Some DVD rippers can do both of these functions at the same time (that is, they will let you rip the DVD video into a 320x240 MP4 file, which is what iPods use for video). Perhaps someone can recommend an easy one. --24.147.86.187 (talk) 16:03, 17 November 2007 (UTC)[reply]

Yes, I am using Windows. Sorry, didn't realize that was important! And forgive my ignorance and complete stupidity where computers are concerned, but what does VOB mean? 131.162.146.86 (talk) 03:54, 18 November 2007 (UTC)[reply]

VOB stands for Video OBject. But mainly, it's the filename extension of the file on DVD that contains the actual movie. We even have an article on it (called VOB of course). SteveBaker (talk) 04:13, 18 November 2007 (UTC)[reply]

Okay, thank you. 131.162.146.86 (talk) 18:04, 18 November 2007 (UTC)[reply]

Deezer down?

Why might the music service Deezer (www.deezer.com) be down? When I try to access the website, it says that the server cannot be found. Acceptable (talk) 03:49, 17 November 2007 (UTC)[reply]

Works for me, maybe it was a temporary server crash? · Dvyjones Talk 09:09, 17 November 2007 (UTC)[reply]
I'm in Canada and all the Canadian computers I've tried (school, library, friends) will not connect to Deezer. Could it be that they blocked Canadian IP's like Pandora? Thanks. Acceptable (talk) 23:15, 18 November 2007 (UTC)[reply]

Splitting 1 audio into instrumental music, human voices and something.

As the title says. Well, I don't know how. (Here's software I have; Sony Vegas 7.0 and GoldWave)--JSH-alive (talk)(cntrbtns)(mail me) 12:46, 17 November 2007 (UTC)[reply]

We get this question a lot. It's not possible to get high quality tracks, but you can use GoldWave to get a "Karaoke" track that will probably sound bad but might sound all right depending on the song --ffroth 16:28, 17 November 2007 (UTC)[reply]
Yeah, there's software that tries to do the job, but it's inherently impossible to do with a high degree of accuracy. What people do in real life is to retain the source material as separate tracks indefinitely, so they can be mixed as needed whenever you want. Of course, when you're starting with something already mixed, you have to settle for some less good solution. Friday (talk) 16:45, 17 November 2007 (UTC)[reply]
The trick for removing vocals for karaoke is to note that on most music with vocals, the stereo mixing is set up to place the voice in the middle of the stereo image with the instruments off to the sides. If you subtract the left-side image from the right, then anything that's common to both sides will disappear. This works surprisingly well for vocals. We use 'Audacity' (a free/open-source audio processing package) - and its vocal-removing system is quite amazing. But translating that idea to removing (say) the trumpet from a piece of music when it is in no special place in the stereo image would be much harder. SteveBaker (talk) 17:42, 17 November 2007 (UTC)[reply]
Steve, so then if you subtract the resulting karaoke track from the original, will you get a vocal-only (well, maybe not "only") track? hydnjo talk 20:06, 17 November 2007 (UTC)[reply]
Sadly (and surprisingly to me), no - I don't think you can. My first reaction to your question was "well, of course you can!"...but when I started to figure out how, I couldn't. To understand why not, you need to do this with a bit of basic algebra: if A is the sound coming from instruments on the left side of the stage and B is the sound coming from the instruments on the right, with V being the vocals - then the sound on the left channel of the original stereo recording ('L') is L=A+V, and the sound on the right ('R') is R=B+V. L and R are our 'givens'. We have two equations and three unknowns (A, B and V) - which first-year algebra says is not something you can solve for all three variables. Fortunately we CAN say L-R=(A+V)-(B+V)=A-B - so we can calculate A-B without V. We'd really like a stereo signal with A and B separately - but we can't do that, because if we could, we'd have performed the magic of solving for three unknowns with only two equations. But A-B is a mono signal - just one value - with no 'V' in it! In audio terms, a proper mono signal would be A+B - but negating an audio waveform just switches the phase of the signal 180 degrees - and that doesn't sound too bad...certainly good enough for karaoke! So if (for example) we subtracted our karaoke track from the original L signal to try to get the vocals by themselves, we'd get L-(A-B), which is (A+V)-(A-B), which is V+B; similarly, adding the karaoke track to R gets us (B+V)+(A-B), which is V+A. In other words, instead of getting rid of the instruments A and B, we just swapped them over. We could try making a mono track first (L+R) and subtracting our mono karaoke track from that - but then we'd have (A+V)+(B+V)-(A-B) = 2B+2V - still we have B mixed up in it. There simply isn't any way to do this...which surprises me! SteveBaker (talk) 04:05, 18 November 2007 (UTC)[reply]
Yeah, I actually came to that realization myself while falling asleep last night. Amazing how clarity arrives when our everyday environmental noise departs! Also, the A-B karaoke track must have some weird stuff going on, as the original A+V track probably had varying amounts of B in it and so on. hydnjo talk 14:03, 18 November 2007 (UTC) [reply]
Ehh, maybe you addressed this, but why not mix the 2 channels of the song together and the 2 channels of karaoke together, invert the karaoke, and superimpose the mono waveforms? It's no stereo signal, but as far as I can tell it would work. Also, Audacity is the biggest piece of crap software I've ever laid eyes on - even the geek who occasionally has to work with audio will tell you that GoldWave blows it out of the water. --ffroth 06:46, 18 November 2007 (UTC)[reply]
Do the algebra:
mix the 2 channels of song together - OK M=L+R=A+B+2V
and 2 channels of karaoke together, - the karaoke track is already mono - K=A-B
invert the karaoke, - OK: K=-(A-B)
and superimpose the mono waveforms - Result = (M+K)/2 = (A+B+2V-(A-B))/2 = B+V.
Nope - that's the same as the original right channel.
You can't do it because you have two equations and three unknowns. SteveBaker (talk) 16:42, 18 November 2007 (UTC)[reply]
Well despite your step 3 coming out of nowhere (K should = -(A-B)) that sounds right.. but what about the other K.. K=B-A? There's your third equation. It seems distinct from K=A-B.. it is, right? --ffroth 19:57, 18 November 2007 (UTC)[reply]
Yeah - sorry - that was a typo. It's fixed (above) now. I got the final line right though. The bottom line is the same - you can't get three unknowns from two equations no matter what. SteveBaker (talk) 21:44, 18 November 2007 (UTC)[reply]
Geesh Steve, I wuz jist askin', ya know? This is amazing! hydnjo talk 22:28, 18 November 2007 (UTC)[reply]
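The channel algebra above can be checked numerically. Here is a small Python sketch (not from the thread) using made-up three-sample "waveforms" standing in for the A, B and V signals:

```python
# Numeric check of the argument above: L = A + V, R = B + V,
# so L - R = A - B -- the vocals V cancel, but A and B stay entangled.
# The sample values are arbitrary stand-ins for short audio waveforms.
A = [0.1, -0.3, 0.5]   # left-side instruments
B = [0.2, 0.4, -0.1]   # right-side instruments
V = [0.7, -0.2, 0.3]   # centre-panned vocals

L = [a + v for a, v in zip(A, V)]
R = [b + v for b, v in zip(B, V)]

karaoke = [l - r for l, r in zip(L, R)]   # equals A - B: no vocals left

def close(xs, ys):
    # compare sample-by-sample, tolerating floating-point rounding
    return all(abs(x - y) < 1e-9 for x, y in zip(xs, ys))

assert close(karaoke, [a - b for a, b in zip(A, B)])

# Trying to recover the vocals by subtracting the karaoke track from
# the left channel gives V + B (i.e. just the right channel), not V:
attempt = [l - k for l, k in zip(L, karaoke)]
assert close(attempt, [v + b for v, b in zip(V, B)])
```

As the algebra predicts, the "recovered vocals" still contain the B instruments: two equations, three unknowns.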

Lower screen resolution for speed?

I may have come across this question before. Would lowering the screen resolution on a Windows Vista machine allow the computer to run faster when graphic editing, gaming and everyday surfing? Acceptable (talk) 17:52, 17 November 2007 (UTC)[reply]

Well, high resolutions do take more processing power and require more work from your graphics card. So a lower res should run faster; the question is whether it would be appreciably faster or not, and that would probably depend on your processing speed, your graphics card, and what sort of programs you are intending to use. Photoshop is going to be a processor hog no matter what resolution you run at, for example. --24.147.86.187 (talk) 18:58, 17 November 2007 (UTC)[reply]
It depends whether the applications you are using are CPU-bound, vertex-processing-bound or fill-rate-bound. Only the fill rate is affected by the screen resolution. Games are most likely to benefit - but even then, only if they run full-screen. I doubt you'll notice much benefit. SteveBaker (talk) 20:52, 17 November 2007 (UTC)[reply]
Games can be made to run faster anyway by reducing the quality settings (e.g. Anti-Aliasing) in-game. --Dave the Rave (DTR)talk 21:32, 17 November 2007 (UTC)[reply]
I didn't think about games, but in that case resolution can matter a LOT, depending on the game. My computers are usually a bit slow for games (or are running them through virtualizers) so I usually end up running very low resolutions (640x480 or 800x600) with them (my native resolution is 1400xwhatever, so that's a pretty serious cut!), but it speeds them up a huge amount. My machine generally can't even do them at anything close to a native resolution; it requires way too much out of its puny graphics card. --24.147.86.187 (talk) 22:02, 17 November 2007 (UTC)[reply]
If your graphics card can handle it, and it's not trying to do other things like render a 3D scene, then keep your resolution high. It will make no difference to lower it. --ffroth 06:16, 18 November 2007 (UTC)[reply]


With 3D applications, there is always a bottleneck that limits the frame rate - but unless you know where it is, you can't know how best to optimise it. The CPU could be overloaded by the AI, the pathfinding, collision detection, etc - if it can't feed triangles to the graphics card fast enough then no amount of messing around with the graphics card or the display resolution will make any difference to your frame rate. Similarly, it might be that the game draws an enormous number of very small triangles - in this case the vertex processing stage of the graphics card will be overwhelmed and speeding up the CPU or messing with the display resolution won't help. Only if the CPU has time to spare and the vertex processor is blocked waiting for pixels to be pushed to the frame buffer memory will reducing the display resolution (or reducing the antialiasing quality which is almost exactly the same thing) help.
The trouble is that different games have different bottlenecks - and unless you are equipped with the source code and a boatload of specialised tools, it's hard to know which games have which bottlenecks. Worse still, this information isn't published anywhere because it depends too much on your precise computer setup - something that's CPU-bound on a 2GHz CPU may be vertex-limited on a 3GHz processor. If the final stage is the bottleneck, then you'll find that reducing the display resolution a bit improves the frame rate a bit. However, dropping the resolution down further may not help, because you've already removed the pixel fill rate bottleneck and now the system is blocked somewhere else. So even if reducing the resolution helps, you might want to try a range of different screen resolutions to see which one gives you the best frame rate while retaining the most pixels on the screen.
If you're still with me at this point - I guess I should complicate the story still more by pointing out that many games are CPU-limited in some areas of the game map, vertex-limited in others and fill-rate limited in yet others. In producing the optimum 'game experience', designers have to trade these various limitations on the grades of hardware they expect their users to have (and, probably, on a couple of console platforms too). Some parts of the game might not need blazingly fast frame rates because you are on some kind of a slow-moving stealth mission - but other areas that involve fast movement through the world (driving a vehicle for example) may demand higher frame rates.
SteveBaker (talk) 16:37, 18 November 2007 (UTC)[reply]
I was talking about just for desktop use, but all that is true! When playing F.E.A.R. my Yonah can push out all Maximum settings for the "CPU" graphics options, but my low-end-mobile 3D card can barely handle the Minimum settings for the "GPU" graphics options. (they appear in separate columns in the options menu) --ffroth 19:50, 18 November 2007 (UTC)[reply]
I have found it to be true that lowering the screen resolution AND the color depth speeds up CPU-intensive programs on computers that share video memory with main memory, i.e. use main system memory for video memory (which most of the lower-priced ones seem to do these days). Must be because of bandwidth to the main memory. Bubba73 (talk), 01:54, 19 November 2007 (UTC)
Yep - that's to be expected. In 'unified memory' systems like that texture mapping and even simply writing to the screen all use main memory bandwidth - which will clobber the CPU (and heavier CPU activity will clobber your fill rate). The efficiency of these systems depends heavily on how well their texture caches work - which is a REALLY complicated thing for developers to deal with. If the game gives you the option, make sure you have MIPmapping turned ON and Anisotropic texture filtering turned OFF. You should probably prefer disabling antialiasing to reducing screen resolution - but you may need to do both. SteveBaker (talk) 03:55, 20 November 2007 (UTC)[reply]

Converting Matlab code to PHP/Actionscript/Javascript/whatever

Somebody helpfully gave me some Matlab code to get X,Y points for a given set of latitude and longitude coordinates with a Robinson projection. But I can't make heads or tails of how it deals with arrays. Could someone convert it into PHP, Actionscript, Javascript, and/or just pseudocode for me? It's totally opaque to me in its current form, but someone who has used Matlab could probably convert it pretty easily. (Note that you can just substitute a fake interpolation function if you want—I have interpolation functions I can use, you don't have to write me one.)

Matlab code snippet
robval = [
00 1.0000 0.0000 
05 0.9986 0.0620 
10 0.9954 0.1240 
15 0.9900 0.1860 
20 0.9822 0.2480 
25 0.9730 0.3100 
30 0.9600 0.3720 
35 0.9427 0.4340 
40 0.9216 0.4958 
45 0.8962 0.5571 
50 0.8679 0.6176 
55 0.8350 0.6769 
60 0.7986 0.7346 
65 0.7597 0.7903 
70 0.7186 0.8435 
75 0.6732 0.8936 
80 0.6213 0.9394 
85 0.5722 0.9761 
90 0.5322 1.0000 
];

robval(:,3) = robval(:,3) * 0.5072;
robval = [robval(end:-1:2,:);robval(1:end,:)];
robval(1:90/5,[1,3]) = -robval(1:90/5,[1,3]);

rvals2 = interp1(robval(:,1),robval(:,2),latitude,'cubic');
rvals3 = interp1(robval(:,1),robval(:,3),latitude,'cubic');
y = -rvals3;
x = rvals2/2.*longitude/180*2;

Thanks a ton! (No, this is not homework at all—I'm well out of my homework phase of life! I'm just working on a Flash project which uses a Robinson projection and it's driving me a bit nuts.) --24.147.86.187 (talk) 20:45, 17 November 2007 (UTC)[reply]

Uncommented Matlab is largely write-only. But here is an explanation in English/pseudocode:
- Define robval as a three-column, 19-row matrix with given values
- Multiply third column (in all rows) of the matrix by 0.5072
- "Mirror" the matrix.
    The matrix now contains the original matrix backwards (minus first row), followed by
    the original matrix, like so:
        90 0.5322 1.0000 
        85 0.5722 0.9761 
        80 0.6213 0.9394 
        75 0.6732 0.8936 
        ...
        05 0.9986 0.0620 
        00 1.0000 0.0000 
        05 0.9986 0.0620 
        10 0.9954 0.1240 
        ...
        85 0.5722 0.9761 
        90 0.5322 1.0000 
     (Of course the third column would have been multiplied by 0.5072)
- Invert all values in rows 1 to 18 (i.e. the backwards part), in columns 1 and 3.

- rvals2 = interp1(column 1 of robval, column 2 of robval, latitude, 'cubic'); ← for all rows of robval
- rvals3 = interp1(column 1 of robval, column 3 of robval, latitude, 'cubic'); ← for all rows of robval

- y = -rvals3;
- x = rvals2 / 2. * longitude / 180 * 2;
Hope this helps. ›mysid () 22:25, 17 November 2007 (UTC)[reply]
God, that's totally unobvious from the code as was given! Thank you. (What godawful syntax Matlab uses!) --24.147.86.187 (talk) 23:30, 17 November 2007 (UTC)[reply]
But oh-so-powerful... :) And by the way, note that rvals2, rvals3, y and x are all matrices (or arrays if you wish) as well, so the operations are actually performed on every element. ›mysid () 00:12, 18 November 2007 (UTC)[reply]
Powerful but totally opaque. Question: if all of them are matrices/arrays, how does it know which values to assign to x and y in the end? Or are x and y matrices themselves? I'm confused. --24.147.86.187 (talk) 06:16, 18 November 2007 (UTC)[reply]
Yes, y and x will be matrices as well, with the same dimensions as rvals3 and rvals2 respectively, but with the said operations applied. I guess they are one-dimensional, assuming that interp1 returns one-dimensional matrices when fed with one-dimensional parameters. ›mysid () 08:26, 18 November 2007 (UTC)[reply]
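For what it's worth, the pseudocode above translates fairly directly into other languages. Here is one possible Python rendering (the same structure carries over to PHP or ActionScript); note it substitutes plain linear interpolation for Matlab's interp1 'cubic', as the OP said a stand-in interpolation function was fine:

```python
# Sketch of the Matlab snippet above in Python. Linear interpolation
# stands in for interp1(..., 'cubic'); swap in your own if needed.

# latitude (deg), x-scale, y-scale -- the robval table from the snippet
ROBVAL = [
    (0, 1.0000, 0.0000), (5, 0.9986, 0.0620), (10, 0.9954, 0.1240),
    (15, 0.9900, 0.1860), (20, 0.9822, 0.2480), (25, 0.9730, 0.3100),
    (30, 0.9600, 0.3720), (35, 0.9427, 0.4340), (40, 0.9216, 0.4958),
    (45, 0.8962, 0.5571), (50, 0.8679, 0.6176), (55, 0.8350, 0.6769),
    (60, 0.7986, 0.7346), (65, 0.7597, 0.7903), (70, 0.7186, 0.8435),
    (75, 0.6732, 0.8936), (80, 0.6213, 0.9394), (85, 0.5722, 0.9761),
    (90, 0.5322, 1.0000),
]

def build_table():
    # robval(:,3) * 0.5072, then mirror the table for negative latitudes
    rows = [(lat, xs, ys * 0.5072) for lat, xs, ys in ROBVAL]
    mirror = [(-lat, xs, -ys) for lat, xs, ys in reversed(rows[1:])]
    return mirror + rows          # latitudes now run -90 .. 90

def interp(table, col, lat):
    # linear stand-in for Matlab's interp1
    for lo, hi in zip(table, table[1:]):
        if lo[0] <= lat <= hi[0]:
            t = (lat - lo[0]) / (hi[0] - lo[0])
            return lo[col] + t * (hi[col] - lo[col])
    raise ValueError("latitude out of range")

def robinson_xy(lat, lon):
    xs = interp(build_table(), 1, lat)    # rvals2
    ys = interp(build_table(), 2, lat)    # rvals3
    return xs * lon / 180.0, -ys          # x = rvals2/2 * lon/180 * 2
```

For example, `robinson_xy(0, 180)` (equator, date line) gives x close to 1.0 and y close to 0. Treat this as a sketch: verify it against the Matlab output before relying on it.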

Jetman Cheats

Does anyone know any cheats for the game Jetman on Facebook? Much obliged
мιІапэџѕ (talk) 21:49, 17 November 2007 (UTC)[reply]

Firefox's Wikipedia search function

I just had to reinstall Windows XP on my computer, and afterward I reinstalled Firefox 2.0. I then added the tool that allows Wikipedia to be searched in the little window in the upper right-hand corner, which I had before the reinstall. Previously, when I entered a term in that search box, it took me straight to the article (much like typing an article name in Wikipedia's own search box and hitting "Go"). Now, it takes me to Wikipedia's search page, where I have to click on the article name to get there (much like hitting "Search" in Wikipedia's search box). Does anyone know how to fix it back to the convenient way? Also, previously I could go to a Wikipedia article simply by typing "wp article" in the URL box at the top of the screen; that doesn't work any more. Any ideas how to fix that? —Angr 22:23, 17 November 2007 (UTC)[reply]

There are two "English" Wikipedia tools for that thing. You want the one that is called "Wikipedia (EN)". The other one (forgot what it was called) does go to the search results instead of the page. -- kainaw 22:52, 17 November 2007 (UTC)[reply]
The one I'm using says "Wikipedia (EN)" grayed out when it's empty. Isn't that the right one? —Angr 22:55, 17 November 2007 (UTC)[reply]
To go straight to an article by typing "wp article" in the address bar, go to Bookmarks > Organize Bookmarks > New Bookmark. Type "Wikipedia", "http://en.wikipedia.org/wiki/%s" and "wp" in the first three fields, hit OK and close the Bookmarks Manager. — Matt Eason (Talk • Contribs) 10:27, 18 November 2007 (UTC)
That is a really cool trick! Thank you! It seems to be case sensitive, though, which it didn't use to be. —Angr 12:34, 18 November 2007 (UTC)[reply]
WOW that's _exactly_ the keyword and bookmark that I use. wp EXPLOAD!!1. --ffroth 19:47, 18 November 2007 (UTC)[reply]

November 18

Building gnu coreutils-6.9 on Mac OS X 10.5

I don't have a lot of experience in building programs from source (none, actually). I decided to experiment by building the GNU core utilities. I downloaded 6.9 (the newest version) from the GNU website. I ran the 'configure' executable and it determined that my Mac should be able to build the core utilities. When I actually ran the 'make' command it ran for a few minutes without problems. Abruptly, it stopped and gave me the message:

Making all in lib
make all-am
make[2]: Nothing to be done for `all-am'.
Making all in src
make all-am
gcc -std=gnu99 -g -O2 -o date date.o ../lib/libcoreutils.a ../lib/libcoreutils.a
Undefined symbols:
"_rpl_putenv$UNIX2003", referenced from:
_main in date.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make[2]: *** [date] Error 1
make[1]: *** [all] Error 2
make: *** [all-recursive] Error 1

Can anyone tell me what is going wrong? I thought Leopard had full Unix certification, so this should work, right?

Google _rpl_putenv and find this exchange where the same problem has been reported before. There's a patch there, which will be in the next release of coreutils. --tcsetattr (talk / contribs) 06:02, 18 November 2007 (UTC)[reply]

Thank you! Worked like a charm. Quick question: How do I build in 64 bit? I have Core 2 Duo.

I don't think you can unless you're running from a 64-bit OS. There's no 64-bit version of Mac OS X, but there are 64-bit versions of Windows and Linux you could obtain. — User:ACupOfCoffee@ 05:13, 19 November 2007 (UTC)[reply]
Actually, there has been a 64-bit version of Mac OS X ever since 10.4 came out; however, it only supported the 64-bit PPC. In 10.5, there is now support for x86_64 Intel systems. I suggest that the OP read Apple's 64 Bit Transition Guide; there are some links there for how to instruct GCC to generate 64-bit objects on Intel. -- JSBillings 12:37, 19 November 2007 (UTC)

Algorithm for finding cycles on directed graphs

Are there any WP articles on algorithms to find cycles in directed graphs? (I couldn't find any. The graph is currently in an adjacency matrix.) Bubba73 (talk), 07:05, 18 November 2007 (UTC)[reply]

You could just do something like a depth-first search, and when you come across the starting vertex, you have got a cycle. --Spoon! (talk) 08:16, 18 November 2007 (UTC)[reply]
Yeah, but what if the cycle does not involve your starting vertex? I think you are looking for Strongly connected component (Tarjan's strongly connected components algorithm). You can compute this in linear time. If every node is its own component, the graph is acyclic. Otherwise a cycle can be found by doing what spoon said starting at a node within a strongly connected component containing more than one vertex. Finding all cycles is more of a hassle, as there may be a lot of them (a lot = n! and some more).
The application is that there will be graphs with about 50 to 75 vertices (perhaps 100). Each vertex will have at most 20 directed edges connected to it. Quite a few will have 15 or more. For several pairs of vertices I need to see if they are in a directed cycle or not. So it sounds like what I might need to do is first find the strongly connected components and then if they are not in the same component then there is no cycle, otherwise there is, right? Bubba73 (talk), 15:34, 18 November 2007 (UTC)[reply]
Yes. A single pass over a graph suffices for all pairs in that graph.
Thanks. One little wrinkle is that cycles of length 2 may not count as cycles. Bubba73 (talk), 16:53, 18 November 2007 (UTC)[reply]

One way to do this is extend the idea of a DG to "paintable". The vertices have two states, painted and unpainted. For each root vertex in the DG, do a normal depth-first search, painting vertices as you go along. If you ever come across a vertex that has already been painted, you have a cycle. JIP | Talk 15:44, 18 November 2007 (UTC)[reply]

Now really, why do you propose a quadratic algorithm after a linear one was linked?
I don't see how my algorithm is quadratic. I've looked at the pseudocode for the Tarjan algorithm, and it seems that it has to be done for each source vertex separately. The way I see it, both algorithms are in linear time for each source vertex: they're just depth-first searches that do extra constant-time work at each vertex. If there are multiple source vertices it gets more complex though. JIP | Talk 17:03, 18 November 2007 (UTC)[reply]
A single run of Tarjan's algorithm finds all components. You do not need to rerun it for every source vertex. See the link in the article, it is much better than the article itself. —Preceding unsigned comment added by 153.96.188.2 (talk) 10:05, 19 November 2007 (UTC)[reply]
If you have control of the data structure used for the nodes of the graph, then just add a boolean "has_been_visited" which you default to 'false' in the constructor function for the class. To perform your depth-first traversal of the graph, call this recursive function on the root node of the graph:
 bool NodeType::isCyclic ()
 {
   if ( has_been_visited )  // If we already visited this node then it's a cyclic graph
     return true ;
   has_been_visited = true ;
   for ( int i = 0 ; i < num_child_nodes ; i++ )
     if ( child [ i ] -> isCyclic () ) // If the subgraph is cyclic then so is this one
       return true ;
   has_been_visited = false ;
   return false ;
 }
If you don't have control over the data structure then you need to create a hash table containing the addresses of the nodes you've visited and use that table to replace the 'has_been_visited' boolean.
SteveBaker (talk) 17:28, 18 November 2007 (UTC)[reply]
The algorithm I gave is a simplified version of this. It does not separately check for subgraphs, otherwise it is the same thing. JIP | Talk 17:34, 18 November 2007 (UTC)[reply]
That's not true - your approach doesn't work. Consider this directed graph:
      A
     / \
    B   C
     \ /
      D
      |
      E
(The links all point downwards). This graph doesn't have a 'cycle' because it's 'directed' - but it fails your approach because node 'D' is visited twice during the traversal. If you look at my code, you'll see that I unmark the nodes as it backs up the graph - so this graph is (correctly) returned as non-cyclic. SteveBaker (talk) 21:35, 18 November 2007 (UTC)[reply]
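The mark/unmark approach above can also be written with a third "finished" state, so that each vertex is visited at most once (linear time even on diamond-shaped DAGs like the one in the diagram, where the plain unmarking version revisits shared subgraphs). A Python sketch, with the graph given as a dict of successor lists:

```python
def has_cycle(graph):
    """True if the directed graph {vertex: [successors]} contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2    # unvisited / on current path / finished
    color = {}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY:           # back edge: w is on the current path
                return True
            if c == WHITE and visit(w):
                return True
        color[v] = BLACK            # fully explored, never revisit
        return False

    return any(color.get(v, WHITE) == WHITE and visit(v) for v in graph)

# The diamond graph from the diagram above is correctly reported acyclic:
diamond = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
assert not has_cycle(diamond)
assert has_cycle({'A': ['B'], 'B': ['A']})   # a 2-cycle
```

Note this treats a 2-cycle as a cycle; if, as mentioned above, cycles of length 2 shouldn't count, the back-edge check would need an extra condition.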

Thanks for your help guys - I think I can get it. I'll be working on it tomorrow. Bubba73 (talk), 01:55, 19 November 2007 (UTC)[reply]

I have a problem?

I have an Acer Aspire laptop. It has a black touch pad to move the pointer or cursor around on the screen. I could click on something by tapping the touch pad twice, and I could draw the scroller up and down by tapping twice and then move my finger up and down. It used to work, but now it doesn't move the pointer around anymore. The pointer still moves when I do it with a wireless mouse. But I want to fix the touch pad. How can I make the touch pad work again?

Hard to say; could even be broken. But for first check (I'll assume you have Windows): go to Start -> Settings -> Mouse. "Hardware" tab should say "this device is working properly". Check that the touchpad is enabled on the "Device Settings" tab. Weregerbil (talk) 11:40, 18 November 2007 (UTC)[reply]
Also make sure you haven't disabled your touchpad. Should be an option in the taskbar or in the aforementioned folder Heirware (talk) 11:46, 18 November 2007 (UTC)[reply]
In many cases the touchpad automatically disables when you plug a mouse into the computer. Try unplugging/disabling the mouse, see if that changes anything. --24.147.86.187 (talk) 16:38, 18 November 2007 (UTC)[reply]

search and replace in 1000 .txt files

Say I have 1000 .txt files in different folders; they all have different content, but all have a line of text in the middle or at the end that I would like to remove to save HD space on an mp3 player. How would I go about doing this batch-style, without having to open all the folders or files individually? ~Thank you. Keria (talk) 17:30, 18 November 2007 (UTC)[reply]

I would use UltraEdit, but there are probably a number of other editors that could do a batch find and replace. See Category:Windows text editors. --— Gadget850 (Ed) talk - 17:34, 18 November 2007 (UTC)[reply]
  • You don't say what platform you're on, but if you're on MacOS or something Unixy, running this command will do the trick:
perl -pi -e 's/the line of text you want to remove.*\n?//' *.txt
--Sean 17:54, 18 November 2007 (UTC)[reply]
All right! Sorry yes I'm running Windows XP er ... x64. Thank you gadget I'll try that. Keria (talk) 17:59, 18 November 2007 (UTC)[reply]
Aw shoot, it looked so promising. UltraEdit doesn't work on 64-bit machines. Keria (talk) 18:19, 18 November 2007 (UTC)[reply]
Well duh. Ultraedit is fantastic though --ffroth 19:45, 18 November 2007 (UTC)[reply]
The good news is that you can get perl running on XP, but getting 1000 files onto the command line is dubious in XP. You may have to run it under a bash shell instead, say with cygwin. Graeme Bartlett (talk) 20:02, 18 November 2007 (UTC)[reply]
Not to nitpick the whole project, but is the amount of space to be freed worth the trouble? I can't imagine it adding up to more than 2 MB or so, which is nothing compared to the size of most MP3s. You'd probably spend your time better finding just one song you don't ever listen to and just deleting that, no? --24.147.86.187 (talk) 21:22, 18 November 2007 (UTC)[reply]
At the moment it's 5 GB (it's more than 1000 files I just used that as an example, sorry for the misunderstanding). I hope I can save at least 1 GB. If anyone has an idea I'm listening (well, reading). I tried NoteTab (freeware) but it needs to open the files to work with them so it can't handle the job. Graeme you use a lot of words I have never seen before but I'm sure I can figure it out (I don't have to bash shells, right?). Is it as complicated as it sounds? Apparently there is some complicated way to install UltraEdit on x64. Does it need to load the files to replace text in them or can it just go through them as they are sitting on the HD? Keria (talk) 22:49, 18 November 2007 (UTC)[reply]
You probably can't shave 1GiB off just by removing 1 line off every file, but if you can't find an editor that can do it, you can always install perl and run the command mentioned by User:TotoBaggins (aka Sean); or, you can install cygwin, and run these commands:
cd /path/where/you/put/the/files
ls | while read line; do sed -i "/^Your text here$/d" "${line}"; done
They basically do the same thing, but it's always better to try it out on a single file first by copying that file to a new directory, change the working directory of your command prompt (aka shell) to the same directory, and run either one of the commands to try it out on first. --antilivedT | C | G 23:07, 18 November 2007 (UTC)[reply]
Right. I tried it with cygwin but it doesn't go into sub-folders; it says: sed: couldn't edit <name of subfolder>: not a regular file. It didn't replace anything in the files that were in the root folder, but changed them so that Notepad has trouble reading them. It replaced ^P's with some unknown character.
The text I have to replace is of this kind of format: ^P^P^P^P^Pyadiyadiyada^P^P^P^PBlablabla for lines and lines of rubish^P^P^P where ^P is a line break. Is it possible to copy-paste it into the command window? It's really long to type and I have to enter a dozen different versions.
I'll try with perl even if it looks even more complicated. BUT: How can I go into subfolders? Does perl do it? Is there a command way of deleting all files smaller than 2KB? Is there a command for deleting empty folders? Is there a command to delete files containg a certain string of characters or by their file extension? Keria (talk) 15:17, 19 November 2007 (UTC)[reply]
find or ls -R gets you into subdirectories
find ./ -type f -size -2k -exec rm \{\} \; removes regular files smaller than 2K
find ./ -type d -exec rmdir \{\} \; removes empty directories
ls -R | while read line; do if ( grep -q "string of characters" "$line" ); then rm "$line"; fi; done remove files which contain "string of characters"
find ./ -name \*.ext -exec rm \{\} \; remove files with extension .ext
Untested, use at your own risk. -- Diletante (talk) 16:27, 19 November 2007 (UTC)[reply]
Thank you Diletante. That's in cygwin right?
Then find ./ -name \*old*.* -exec rm \{\} \; would remove all files with "old" in their name? Keria (talk) 16:42, 19 November 2007 (UTC)[reply]
I tried the first one find ./ -type f -size -2k -exec rm \{\} \; when I press enter a ">" symbol appears on the next line, if I press enter again it says: "find: missing argument to "-exec" Keria (talk) 16:58, 19 November 2007 (UTC)[reply]
Are you sure you typed it correctly? Try copy and paste? ls -r will list subdirectories, but won't get you in there because it doesn't give you the path to it, you have to use find instead. just replace the ls with find ./ so it becomes something like this:
find ./ | while read line; do sed -i "/^Your text here$/d" "${line}"; done
or just use the -exec argument in find
find ./ -type f -exec sed -i "/^Your text here$/d" \{\} \;
Which will remove any line that starts and ends with "Your text here". If you want to do more you need to give us more information: What exactly do you want to delete, and do you want to retain empty lines in other places? A sample file would be nice as well. --antilivedT | C | G 22:01, 19 November 2007 (UTC)
Antilived, you are right about the ls -R, I was mistaken trying to use it like that. Keria, it seems like you aren't including the semicolon at the end. Also when you use wildcards in find you should escape them with a backslash like \* so the shell doesn't try to expand them. Same deal with the semicolon, it is an argument to find, so you don't want the shell to interpret it. -- Diletante (talk) 23:33, 19 November 2007 (UTC)[reply]
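If the Cygwin quoting keeps biting, a plain Python script can do the recursive walk on Windows directly. A sketch, assuming the unwanted text can be matched as a fixed substring (TARGET is a placeholder - put your real text in before running, and try it on a copied folder first):

```python
import os

TARGET = "line of text to remove"   # placeholder: the real text goes here

def strip_lines(root):
    """Recursively delete lines containing TARGET from every .txt file."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".txt"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                lines = f.readlines()
            kept = [ln for ln in lines if TARGET not in ln]
            if len(kept) != len(lines):     # rewrite only when changed
                with open(path, "w") as f:
                    f.writelines(kept)
```

Unlike the sed one-liners, os.walk descends into sub-folders on its own, so the "not a regular file" problem doesn't arise.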

Of course I had checked again and again I had typed it correctly ... without the semicolon! Thank you Diletante you were right and thank you Antilived and everybody who helped. The -2k has been running for 5 hours now and it seems to be working. I'll try all the other commands once that one is done and report back. Cheers! Keria (talk) 16:31, 20 November 2007 (UTC)[reply]

Standby → Shutdown

Whenever I try to put my computer on standby (or whenever it's idle long enough and automatically goes on standby), it will just shut down instead. I've tried adjusting the power scheme options, but I can't seem to find the right thing to change, because all of my adjustments have been futile. I'm running a Windows XP SP2. How can I fix this and allow my computer to go on standby instead of just shutting down? Thanks for the help. --72.69.146.66 (talk) 18:53, 18 November 2007 (UTC)[reply]

Check with your motherboard vendor and see if there is a BIOS upgrade. My MB is a five year old Abit, and I think it had the same problem until an upgrade. --— Gadget850 (Ed) talk - 18:55, 18 November 2007 (UTC)[reply]
Older motherboards seemed to have difficulties with going into standby. You could check the BIOS and see if there is any option in there for Power. GaryReggae (talk) 20:08, 18 November 2007 (UTC)[reply]

Family Tree software

Hi folks, a software question for you from a frustrated computer user.

About a couple of years ago, I purchased some software via the internet called "Family Tree Maker" (FTM) (version unknown); if I remember rightly, I downloaded the trial version, liked what I saw and paid for a registration code to upgrade it to the full version.

A few months ago, my computer had become virtually unusable due to Windows playing silly games and being on a constant go-slow, so I backed up all my data files and reformatted and reinstalled Windows and all my software. I had forgotten about the FTM software but wasn't concerned, as I had an email with the registration code and had assumed it was like most software of the same ilk (GameMaker, for example) where if they release a new version, you are entitled to upgrade for free or a nominal fee. But when I tried to find the software and reinstall it, all their website www.familytreemaker.com came up with is an entirely different new version, with no option for existing users to upgrade or for acquiring previous versions. This seems like a con to me and I am very loath to give them any more of my money, as the same thing will probably happen the next time. I am also quite happy with my 'old' version and don't need any new features. I can understand them not supporting older versions any more, but withdrawing something somebody has paid for is not on. If I had known, I would have tried to salvage the program files, but that often doesn't work.

Does anyone know where I can download some FREEWARE family tree software? It must be able to load GEDCOM files and preferably Family Tree Maker (.FTW) files as well. I know I could Google it, and I have, but it comes up with a list of totally irrelevant stuff, some of which is definitely NOT freeware and some of which includes spyware; so much for the powers of Google. Alternatively, I would consider a reasonably priced (< £20/$40) paid-for package if there were an assurance they wouldn't pull the rug out from underneath their loyal existing users and force them to cough up again for a new version every so often. GaryReggae (talk) 20:25, 18 November 2007 (UTC)[reply]

Try Comparison of genealogy software and Category:Free genealogy software. --— Gadget850 (Ed) talk - 20:45, 18 November 2007 (UTC)[reply]

Macbook video editing

I am thinking of purchasing a new computer, preferably a Macintosh laptop. Due to a lack of funds, the cheapest MacBook is highest on my list. I would like to do some rudimentary video editing with it. Can it handle editing with Final Cut (possibly some sort of off-line editing at low resolution)? As I understand it, Apple recently upgraded the graphics capabilities of the MacBooks. Will this make any difference to video editing? Do you have any recommendations? --Oskar 22:04, 18 November 2007 (UTC)[reply]

Yes it can handle Final Cut just fine, though you'll want to get some sort of monster external firewire drive for it (the scratch disk takes at least 1GB a minute or so, sometimes more). I have a MacBook bought earlier in the year (cheapest model that came with a DVD-R drive; a 2 GHz Intel Core 2 Duo, 1 GB of RAM) and it ran Final Cut Pro just wonderfully when I used it a little while back. --24.147.86.187 (talk) 23:57, 18 November 2007 (UTC)[reply]
Just as an addendum, my MacBook can even run Half-Life 2 fairly well under the Parallels Desktop virtualizer. They pack quite a wallop even under quite a lot of computational stress. --24.147.86.187 (talk) 23:38, 19 November 2007 (UTC)[reply]

November 19

Cryptography: Which is more secure for authentication: RSA or DSA?

I'm doing some personal research learning about public key infrastructure and asymmetric cryptography. After Googling a bit, I thought I'd submit a question to which I've been struggling to find an answer that satisfies me. I think the Computing category is better suited for this than the Mathematics category. Which cryptographic algorithm is more secure for authentication, specifically key exchange and digital signing: RSA or DSA? -- PaperWiki (talk) 02:20, 19 November 2007 (UTC)[reply]

I don't know whether one is in theory more secure but AFAIK neither one has been compromised, so they're both 100% right now --ffroth 15:38, 19 November 2007 (UTC)[reply]
RSA can be broken by a quantum computer with enough qubits. So far, no quantum computer large enough to break an RSA key of any realistic size has been built. —Preceding unsigned comment added by 84.187.67.90 (talk) 18:40, 19 November 2007 (UTC)
I thought quantum computers don't work with bits =_= --ffroth 02:06, 20 November 2007 (UTC)[reply]
They are digital devices - so they deal in bits - but they do their calculations using qubits, which exist in a quantum superposition of many possible states. It's true that we don't yet have usable quantum computers - but when we do, RSA will become highly vulnerable to attack by even fairly small quantum computers, because Shor's algorithm can factor the key in polynomial rather than exponential time. Brute-force key search also gets easier: with conventional computers, adding one bit to the key length doubles the cracking effort, but Grover's algorithm effectively halves the key length, so a quantum computer needs only two extra bits to double its effort. So a 64 bit code takes a regular computer over four billion times longer to crack than a 32 bit code - but a quantum computer would take only about 65,000 times longer. Sheer brute force makes our present codes unbreakable - but we're going to have to come up with something much cleverer in the future. SteveBaker (talk) 03:33, 20 November 2007 (UTC)[reply]

PCI video cards

I have an old Gateway PIII PC I'd like to get working. I pulled the graphics card a while back to clean it and see if it was PCI (to try to diagnose a video card or other failure in a new rig). It turned out to be AGP. The motherboard's AGP alignment is off with the case, so while I can get it to work by fiddling around when the case is open, once I close it it never works correctly. Whoever put it in and got it to boot was apparently blessed with divine powers, and now it just plain won't work. I'd like to get a cheap secondhand PCI card, just to make the box boot and be able to do basic things like word processing and Internet (there is no on-board video). Nonetheless, since PCI graphics cards are so old and slow, I figure I can get a pretty decent one at the same price as a mediocre one. Does anyone have any suggestions for higher-end (of their day) graphics cards that are PCI and not AGP? I was looking at the GeForce 5 cards but our article does not specify if they are exclusively AGP or have PCI versions. -Wooty [Woot?] [Spam! Spam! Wonderful spam!] 05:15, 19 November 2007 (UTC)[reply]

PCI graphics cards are a pretty rare commodity these days. Even AGP is starting to become outdated. Everything's moving towards PCI-E now. You might be able to find one used, perhaps on eBay or something. — User:ACupOfCoffee@ 05:45, 19 November 2007 (UTC)[reply]
Yeah, I understand - I'm just trying to find the highest-end PCI card for its day, specifically a model name, because I figure as they're all old I should be able to get them at roughly the same price. -Wooty [Woot?] [Spam! Spam! Wonderful spam!] 05:56, 19 November 2007 (UTC)[reply]
I remember seeing a PCI GeForce 5200 card, but expect to pay quite a premium for it. They are rare; roughly 4 years ago I paid the price of an AGP GeForce 4 Ti for my PCI Radeon 7500. --antilivedT | C | G 08:09, 19 November 2007 (UTC)
I purchased a PCI video card from Wal-Mart just a few months ago to repair an older PC for a relative. [3] --— Gadget850 (Ed) talk - 15:22, 19 November 2007 (UTC)[reply]
Personally - since this is a 'junker' PC - I'd take a hacksaw to the case and if necessary use duct-tape to get the AGP card to stay put! Enlarging the place on the back of the case where the video connector comes out should be pretty easy - and then you have a $0 solution that'll almost certainly be a lot faster than any PCI card you could buy. Remember - it's not just the speed of the graphics card - it's the rate you can give it work to do that matters. The PCI bus is unbelievably slow compared to even 1x AGP (and you might have 2x, 4x or even 8x AGP). In all likelihood, it's irrelevant how fast the graphics card is because it'll be spending most of its time sitting there starved for data. Even a slow AGP card will likely beat out a fast PCI card. (Caveat: This is a gross generalisation - a lot depends on...um...everything really!) SteveBaker (talk) 16:26, 19 November 2007 (UTC)[reply]

Windows XP: File | Save As

In the dialog box of any program in Windows XP when you select: File | Save As

There is an option in the "View Menu" to select "Thumbnails"

Is there any registry tweak to make "Thumbnails" the default choice?

If so, what is the tweak?

Also, is there any way to change the "default size" and "default location" of the dialog box?

multimedia

What are the server requirements of distributed multimedia systems?

Presumably you mean video for your multimedia. It needs high bandwidth for the network connection and the disk drive connections. If you need to run 24*7, 365 days a year, you will need an operating system that does not need to be restarted (for whatever reason). You may need to take an analogue video input and convert it to MPEG-2 or something like it. There has to be a way to load up the new content. And perhaps you will need digital rights management for your content. A distributed system will have a lower demand than a single central server, but it will be much more difficult to keep the content loaded. Graeme Bartlett (talk) 10:58, 19 November 2007 (UTC)[reply]
Multimedia is far too fuzzy a term. You might mean some still images and some simple JavaScript to animate them - in which case your server-side requirements are minimal. You might mean still images plus audio or flash animations or host-side PHP or other programming - or you might mean full-up streaming video. The amount of traffic you expect to get is also a concern. My ancient home web server is a 600MHz PC with a single, very slow hard drive and nothing but a DSL connection to the net. You can get streaming video off of it if you're the only person using it - or it could manage dozens of simultaneous users for some JavaScripted game or something that only requires a few images to be downloaded. At the other end of the scale, consider something like YouTube that serves 100 million streaming videos per day and pays a million dollars a year in bandwidth costs alone! We can't possibly answer your question without MUCH more information. SteveBaker (talk) 16:14, 19 November 2007 (UTC)[reply]

Books about Web 2.0

Can someone tell me about some books on Web 2.0? I am from Brazil, so those books can be in Portuguese or English. Exdeathbr (talk) 14:05, 19 November 2007 (UTC)[reply]

Here's a list: [4]. --Sean 14:21, 19 November 2007 (UTC)[reply]

Finding authors of deleted youtube videos

I have a YouTube video bookmarked that was deleted by the user. I have tried Delutube but to no avail. Is there a way of finding the user who uploaded the video just by looking at the video ID code? Thanks. Jutwdev99 (talk) 14:57, 19 November 2007 (UTC)[reply]

No. Try archive.org? --ffroth 15:37, 19 November 2007 (UTC)[reply]
Depending on when it was deleted you could check the Google cache of the page. Exxolon (talk) 23:12, 19 November 2007 (UTC)[reply]
Also you might try searching for the name of the video in YouTube or Google Videos. Often videos are mirrored by other users. --24.147.86.187 (talk) 23:28, 19 November 2007 (UTC)[reply]

Unix batch renaming of files to remove illegal Linux characters

I'm using Mac OS X connected to a Linux server. Some Mac file names have characters that Linux won't allow. I'm looking for some clever speedy Unix terminal command to look at a folder of files and batch rename all illegal Linux characters (like : \ " > ’ ? |) into normal hyphens. Any ideas? --24.249.108.133 (talk) 19:23, 19 November 2007 (UTC)[reply]

  • The only illegal characters in a Linux file name are "/" and "\0". Everything else is legal, if ugly to work with. That said, I've used the following script for years to fix up unpleasant file names. Just save it to a file, and do a:
perl -w this-script.pl *
in your directory of bad files. It tries hard to do the right thing, but you should probably back up your files first anyway. --Sean 19:55, 19 November 2007 (UTC)[reply]

#!/usr/bin/perl -w

use strict;

for (@ARGV)
{
   # Skip arguments that don't name an existing file.
   unless (-e)
   {
       warn "$0: '$_' doesn't exist, skipping\n";
       next;
   }

   # Split the argument into its directory part and file name.
   my ($dir, $orig_file) = m#^(.*/)?(.+)$# or die $!;
   $dir = './' unless defined $dir;
   $_ = $orig_file;

   s/%([\dA-Fa-f]{2})/sprintf '%c', hex($1)/ge;  # decode %XX URL escapes
   s/[^\w._-]+/-/g;                              # squash runs of unwanted chars to '-'
   s/[-=_]+/-/g;                                 # collapse runs of separators
   s/^[-=_]+(.)/$1/g;                            # strip leading separators
   s/-*\.-*/./g;                                 # tidy hyphens next to dots

   next if $orig_file eq $_;                     # nothing changed; leave it alone

   # If the cleaned name is already taken, prefix "1-", "2-", ... until free.
   my $i = 0;
   my $fname;
   for ($fname = $_; -e "$dir$fname"; $fname = "$i-$_")
   {
       $i++;
   }
   $orig_file = $dir . $orig_file;
   $fname     = $dir . $fname;
   print "rename '$orig_file' => '$fname'\n";
   rename $orig_file, $fname or die "rename '$orig_file', '$fname': $!";
}
I'm in bash mode and Terminal doesn't seem to like your code. What am I doing wrong? --24.249.108.133 (talk) 22:42, 20 November 2007 (UTC)[reply]
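For anyone hitting the same wall: the script above is Perl, so it has to be saved to a file and run with perl script.pl *, not pasted at a bash prompt. A bash-only sketch of the same idea follows; the character set is illustrative, and the demo deliberately works in a throwaway directory it creates itself, so test something like it there before pointing it at real files.

```shell
# Map characters that are awkward in shells (':', '<', '>', '?', '|', '"', '\')
# to hyphens, demonstrated on a scratch directory so it is safe to run as-is.
dir=$(mktemp -d)
touch "$dir/bad:name?.txt"
for f in "$dir"/*; do
  base=$(basename "$f")
  new=$(printf '%s' "$base" | tr ':<>?|"\\' '-')  # each listed char becomes '-'
  [ "$base" = "$new" ] || mv -- "$f" "$dir/$new"
done
ls "$dir"    # bad-name-.txt
rm -rf "$dir"
```

Note that tr pads its second set by repeating the last character, so a single '-' covers the whole first set.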

Stripping an MP4

Does anyone know a program that can easily strip ALL tags and metadata off an MP4 (specifically audio only, i.e. M4A) and leave just the stream in an MP4 container? The reason I ask is that when I convert a particular type of file (best not to mention which, for legal reasons - it probably doesn't matter anyway), the output M4A doesn't work with my Nokia 6300 (normal ones do). When I use VLC to put the stream in a new MP4 container, the phone will then play the file, but if I then add tags (even with Nokia's own software), the phone won't recognise any tags on the file (which it's supposed to). Any help would be appreciated - EstoyAquí(tce) 21:23, 19 November 2007 (UTC)[reply]

If you're on Linux try EasyTag, Windows try tinkering with Foobar2000. --antilivedT | C | G 21:29, 19 November 2007 (UTC)[reply]

November 20

word virus

I have Word 2000 and lately my documents will not send in email because Gmail has decided they have a virus. So does every other program out there. The only thing that I can figure out is that on other computers, they ask about disabling macros. I did not install a macro, nor do any show up in the macro list. What is going on? --Omnipotence407 (talk) 01:04, 20 November 2007 (UTC)[reply]

What's going on? You have a virus!! It is writing itself into your Word files as a macro so that it can try to infect other computers. This is seriously bad stuff! Have you tried running a full virus scan first? Get AVG Free if you don't have one that is up to date. --24.147.86.187 (talk) 01:36, 20 November 2007 (UTC)[reply]

Ooh, thanks. Any other suggestions for free antivirus software? Last time I tried installing AVG, this computer crashed, so I'd rather not use AVG. I'm running a Trend Micro scan now; is that sufficient?--Omnipotence407 (talk) 01:51, 20 November 2007 (UTC)[reply]

If it detects it then it's sufficient. Avast is also free if you don't like AVG --ffroth 02:04, 20 November 2007 (UTC)[reply]

It found it. It actually found two things; 3 instances of W97M_GENERIC in what looked like the word program files, and 16 instances of W97M_MARKER.A in the actual word documents. It says that the second one sends a log to its author via FTP once a month. Seems to me that some computer savvy person with the necessary authority could track that back. Why hasn't this been done? Thanks for all the help. --Omnipotence407 (talk) 04:23, 20 November 2007 (UTC)[reply]

Its author is probably using another computer that's also been taken over as its FTP destination... or perhaps the destination account is simply outside the jurisdiction of anyone who cares. Many countries have too many other problems to be bothered with arresting people who are perpetrating "Internet crimes" that don't affect them and that they may not even understand. SteveBaker (talk) 12:33, 20 November 2007 (UTC)[reply]

PostgreSQL: Denormalized input

I recently normalized my PostgreSQL/pgforms database of Magic: The Gathering cards to deal with split cards. The result is that each physical card now requires a row on two separate tables, and it would be a pain to have to switch back and forth between two forms when entering one physical card. But pgforms can't handle more than one table in a form, and I'm told that using a denormalized view with rules at the back-end would be nearly impossible, even with the rules already pseudocoded. Is there a standard solution to database situations where unnormalized storage would cause problems and normalized input would be awkward? NeonMerlin 01:42, 20 November 2007 (UTC)[reply]

P.S. Anyone reading the pseudocode should know that the PK of cards is "Name","Set", the PK of spells is "Card","Set","Spell", and the FK of spells onto cards and left outer join of the denormalized view is cards."Name" = spells."Card" AND cards."Set" = spells."Set". NeonMerlin 02:35, 20 November 2007 (UTC)[reply]
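For what it's worth, one standard PostgreSQL approach to "normalized storage, denormalized input" is an ON INSERT rule on a denormalized view, so a single form insert populates both base tables. The sketch below uses the column names from the post; the view name card_entry and everything else is hypothetical, and whether pgforms will drive such a view is a separate question.

```sql
-- Hypothetical sketch: card_entry is assumed to be a view joining
-- cards and spells. Inserting one row into the view writes both tables.
CREATE RULE card_entry_insert AS
ON INSERT TO card_entry DO INSTEAD (
    INSERT INTO cards ("Name", "Set")
        VALUES (NEW."Name", NEW."Set");
    INSERT INTO spells ("Card", "Set", "Spell")
        VALUES (NEW."Name", NEW."Set", NEW."Spell");
);
```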

excel problems

I have Excel 97 on another computer. Recently, it has decided that when I double-click on a .xls, it tries every group of letters before trying the whole filepath. So, for example, if I was to try opening C:\Documents and Settings\Owner\Test Spreadsheet 2007.xls ... First an error message pops up saying that it can't find C:\Documents.xls, then one for and.xls, then one for Settings\Owner\Test.xls, then Spreadsheet.xls, then 2007.xls. After clicking OK on all those error messages, it opens the file. Why is it doing this and how can I fix it? --Omnipotence407 (talk) 01:45, 20 November 2007 (UTC)[reply]

Never use spaces in your filenames- it breaks old programs and command-line syntax. Use underscores instead --ffroth 02:03, 20 November 2007 (UTC)[reply]

It has never done this before. Besides, the "Documents and Settings" is where "My Documents" is, and those are XP defaults.--Omnipotence407 (talk) 02:11, 20 November 2007 (UTC)[reply]

Try switching to OpenOffice.org Calc. It's free, more secure against macro viruses and compared against such an old version of Excel should be fully compatible (except for the features OOo will have and Excel 97 won't). Or, you could switch to a Linux distro such as Kubuntu (which doesn't force or default any folder names to include non-alphanumeric characters) and run Excel through Wine. Either Excel 97 or Windows XP probably has to go sooner or later, but it doesn't have to cost any money. NeonMerlin 02:40, 20 November 2007 (UTC)[reply]
Oy, except that Calc kinda sucks at the moment, like much of OOo. Slow, ugly, unintuitive, not-quite-fully-documented; reproducing all of the worst features of Excel... but even worse! --24.147.86.187 (talk) 02:51, 20 November 2007 (UTC)[reply]
What department of Microsoft are you working for? Even if it's not unqualifiedly better than Excel 2007, Calc should dominate Excel 97 in any fair comparison. NeonMerlin 02:56, 20 November 2007 (UTC)[reply]
Believe you me, I hate Excel too. I think Calc's biggest problem, aside from having its interface standards set by computer geeks, is that they are trying to replicate something that is barely usable in the first place. Excel (like all of Microsoft Office) is a shitty program and making a free version of a shitty program is not an improvement, especially if it is a very slow version of said shitty program. But I digress. My hope is that once OOo gets into a more stable phase a bunch of designers will descend upon its code and make a fork for people who actually want to not have to battle with their office tools to get them to work. But if I am going to have to battle with my software, I want to at least battle at a good pace, so the slowness issue (and the fact that everything produced with OOo looks about 200% more ugly than the already ugly things that come out of Office) means a lot to me. --24.147.86.187 (talk) 02:58, 20 November 2007 (UTC)[reply]
The interface standards are not set by geeks: one of OOo's strengths is that it's good at responding to bug reports and feature requests from non-programmers. As for it looking ugly, the only significant difference in appearance from Excel is the icon theme, and that can be changed (Tools > Options > OpenOffice.org > View > Icon size and style). Many other aspects of the GUI can also be customized that can't in Excel. NeonMerlin 03:09, 20 November 2007 (UTC)[reply]
Sorry, but the interface is super ugly and super clunky looking. Alas, the ugliness does not end there. Try to make a good looking graph with Calc. I dare you. One that doesn't look like it was cobbled together by programmers with no idea of how graphs should look, one that takes Excel's already ugly approach to making graphs and makes it even uglier. It can't be done, as far as I can tell. Everything looks like crap; it would be totally unusable in anything but a setting where appearances did not matter (which is unfortunately the case amongst programmers). Not to mention they seem to have spent more time allowing you to make 3D graphs (which are methodologically problematic, as anyone concerned with visual representation of data knows) than they have on simple things like simple XY plots (you can't plot circles at all unless you are using ugly drop-in bitmapped "custom" plot images). This is the sort of thing that consulting with people who actually care about visual representation of data (or at least had read a book or two by Edward Tufte) would have stopped from the get-go. But the culture of OOo is to create a "replacement" for MS Office; recreating a flawed product will not end up with a good product, and everyone knows how awful MS Office is. (And I won't get into things like OOo Base, which is totally unusable for even basic things as far as I can tell, as a database programmer.) Anyway, I wish the OOo people all the luck but at the moment it's not a great program and I wouldn't wish it on anyone who has to use programs like that on a daily basis (like myself). As far as I'm concerned it's a neat tech demo (based on a flawed idea). --24.147.86.187 (talk) 14:59, 20 November 2007 (UTC)[reply]
Wow, I didn't mean to start this argument, but I have to agree with 24. I tried using Impress for a presentation for school, and it just kept crashing, and took about 5 minutes to save any progress. I flipped back to PowerPoint and whipped off the presentation that had been taking days in a matter of an hour or two. I've generally found OOo to be pretty slow, and not a viable alternative to any version of Microsoft Office, including 97. The only thing OOo seems to have on Microsoft, in my use, is the price tag. --Omnipotence407 (talk) 04:28, 20 November 2007 (UTC)[reply]
The odds are that the file association has somehow gotten mangled and Windows is trying to execute it without the quotes it needs around the filename. If I recall, you have to fish around in the registry to fix it. This post sounds like what I am talking about - it's the quotes around the %1 that are probably missing (for some reason). --24.147.86.187 (talk) 02:51, 20 November 2007 (UTC)[reply]
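For reference, the fix described there amounts to restoring the quotes around %1 in the shell-open command for .xls files. A hypothetical .reg fragment is below - the ProgID and the Excel path vary by Office version and machine, so export the existing key as a backup and copy the real path from it rather than trusting this one:

```
Windows Registry Editor Version 5.00

; Illustrative values only - verify the key name and path on the machine.
[HKEY_CLASSES_ROOT\Excel.Sheet.8\shell\Open\command]
@="\"C:\\Program Files\\Microsoft Office\\Office\\EXCEL.EXE\" \"%1\""
```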

Ok, I'm gonna try the registry fix tomorrow after the computer is scanned for the same virus my other computer had. I'll let you know if it works. --Omnipotence407 (talk) 04:28, 20 November 2007 (UTC)[reply]

MediaWiki, JavaScript and PHP

(This is not a Wikipedia question)

If I have my own MediaWiki system, can I add JavaScript or PHP to specific pages in the Wiki to make them interactive? For example, if I have a JavaScript snippet to create a little interactive widget to convert fahrenheit to centigrade - can I set up the system to allow me to put that into a regular Wiki page? How about PHP code to do stuff on the server-side?

I could obviously do this outside the Wiki on some other web page - but I want the ability to edit it in a browser and to use the Wiki to do version control. Since this is for a private Wiki, I'm not concerned with vandalism or anything.

TIA SteveBaker (talk) 03:18, 20 November 2007 (UTC)[reply]

Well for Javascript you can edit the skin's js file and do something similar to all the javascript tools on here like WP:POPUPS. --antilivedT | C | G 04:05, 20 November 2007 (UTC)[reply]
Yeah - I knew about that - but it's not what I need. Editing your monobook.js allows one user to stick in some JavaScript that affects all pages he visits. I want the opposite - something I can stick into one page that affects all users who visit it. Think specifically about something like having a little type-in box in the article on Temperature that would let you type in a temperature in Fahrenheit, click a 'Convert' button and see the result appear in Centigrade. This is really easy to do in HTML - but MediaWiki kills the usual tags for JS. I'm kinda hoping there is a configuration option to change that behavior. SteveBaker (talk) 05:21, 20 November 2007 (UTC)[reply]
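The conversion logic such a widget needs is tiny; the function name below is made up for illustration, and wiring it to a text box and 'Convert' button is ordinary HTML once a script can reach the page at all.

```javascript
// Sketch of the logic a Fahrenheit-to-Centigrade widget would call
// from its 'Convert' button (function name is illustrative).
function fahrenheitToCelsius(f) {
  return (f - 32) * 5 / 9;
}

console.log(fahrenheitToCelsius(212)); // 100
console.log(fahrenheitToCelsius(-40)); // -40
```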
I don't have an answer to your question since I don't know MediaWiki all that well (although I doubt it'd be hard to implement a <script>-tag in mediawiki that does what you want), but I do want to point out that you should be VERY careful about this, since this would be a major security issue. Your whole wiki would become one big XSS vulnerability. So you'd have to, at the very least, figure out some way to do it so only admins can edit such pages or add such code (which, since it's a private wiki, you may already have done). If you do implement this in some way, keep that in mind. 161.52.15.110 (talk) 11:42, 20 November 2007 (UTC)[reply]
As I explained, this is a private Wiki, it's set up so that only registered users can edit or move pages, WikiSysop is the only account that can create users - there will only be a handful of users and they are all trusted people. The <nowiki><script></nowiki> trick doesn't work - the script tag ends up surrounded by &lt;...&gt; instead of <...> so the browser doesn't see it. This is obviously an essential protection for a regular Wiki - but I need to circumvent it somehow. SteveBaker (talk) 12:25, 20 November 2007 (UTC)[reply]
If I understand correctly, what you want to do is make it so that MediaWiki doesn't automatically escape out Javascript or PHP code, yes? If there isn't a setting for such a thing, I bet you could find the function that does the escaping and disable it? (Sorry, I don't know MediaWiki at all so I can't give any specifics.) If I were going to guess where such a setting would be, it would be around the same place where you can presumably enable or disable HTML tags. --24.147.86.187 (talk) 15:03, 20 November 2007 (UTC)[reply]
My guess is that if you run a search over all of the PHP code for "strip_tags" you'll find the function(s) that remove the PHP and HTML, etc. --24.147.86.187 (talk) 21:34, 20 November 2007 (UTC)[reply]
Why not just do the same approach as WP, except it's integrated into the skin? Once you have the JS in then you can simply reference to it using plain HTML code. But, I think this belongs to somewhere like Village Pump/Technical where people are more experienced with MediaWiki. --antilivedT | C | G 22:08, 20 November 2007 (UTC)[reply]
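If memory serves, MediaWiki of this era does have a configuration switch for exactly this: with raw HTML enabled, anything wrapped in <html>...</html> tags in wikitext is passed through unsanitized, script tags included. Check the manual page for $wgRawHtml before relying on it, and note it is only sane on a private wiki with fully trusted editors, since every page then becomes a potential XSS vector.

```php
# In LocalSettings.php - allow raw <html>...</html> blocks in pages.
# Private, fully-trusted wikis only.
$wgRawHtml = true;
```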

"Bunu kimse Yapmaz" vanishing email

A friend tells me he received an email through Outlook Express that mysteriously disappeared from his computer. Fortunately he had copied the above text from the subject line to do a Google search before it disappeared. Otherwise he would have had no record. An Outlook Express internal "find" revealed nothing, no record whatsoever.

A week earlier, after purchasing a computer peripheral from Ecoolstore on eBay from China, he also received an email stating that processing of their PayPal payment had been "completed." When the item did not arrive from Hong Kong within the allocated 14 business day limit he requested a refund but the seller responded that his PayPal payment had not been "completed" so when he went to look for the email it had also mysteriously vanished.

What is going on? Can email that has been received and displayed simply self destruct like the mission assignment tapes from Mission Impossible, or did my friend delete them by mistake without knowing what he had done?

Also, is it possible for computer peripherals from China to have spyware installed inside them in read-only memory, and if so, how can this be determined?

Thanks in advance for any response. —Preceding unsigned comment added by 71.100.5.134 (talk) 18:56, 20 November 2007 (UTC)[reply]


It is probably too soon to tell anything with the little information that we have.

If the user has a desktop search program, I would urge to use it.

If a secondary computer is available, please try moving the hard disk to that computer and using it as a secondary drive there.

I doubt that the email vanished into thin air because even if the email contained a strange request like that, there is no reason why Windows Outlook Express would conform to it.

Any ideas, Wikipedians?

To the OP: Before doing anything, make sure you understand the disclaimers above. If the issue in the email was critical, I would turn off the computer and have it sent to a reputable data recovery company. It would be expensive, and I would probably finish eating my nails (and probably my toenails as well) while they recovered the data, but if the situation warranted it, I would do it. --Kushalt 19:06, 20 November 2007 (UTC)[reply]

  • Desktop search is faster than Outlook Express or Windows XP searches but does not appear to be configurable to do a search on text or any sequence of characters within a file whereas "Bunu kimse Yapmaz" appeared in the subject text of the email and in the body of the email rather than as the email's name.
  • The peripheral device mentioned was not a hard drive.
  • The issue with the email is that it contradicts the claim that the PayPal payment was not "completed." There is no need for a data recovery program but only the ability to scan the hard drive at the byte or bit level. —Preceding unsigned comment added by 71.100.5.134 (talk) 19:58, 20 November 2007 (UTC)[reply]


Thank you for the correction, 71.100.5.134. I used data recovery in the sense that even if the worst case scenario of the data being deleted from the file allocation table, it might still exist on the hard disk and therefore recoverable. --Kushalt 20:37, 20 November 2007 (UTC)[reply]

To me, data recovery implies at worst a hard drive found in the ashes of a house fire, and at best a motor or other circuit that has burned out, making a cleanroom necessary to disassemble the hard drive and remount the platters in a new case with new electronics to hopefully make the data accessible again. If the data on the hard drive can still be read independent of format, then all that is needed should be scanner software that can read sequential bits and bytes looking for keywords. I have data recovery software but it is not keyword friendly. Instead of allowing a bit or byte pattern keyword, it merely restores all the data it can, leaving the user to do his own keyword search by conventional means after all possible data has been restored. I do not expect that such a thing will work in this case.
Also, I assume that it is possible for anti-spam, antivirus or anti-malware software to allow the text of an email to be displayed but then delete it when it recognizes that it contains a pattern it does not like. —Preceding unsigned comment added by 71.100.5.134 (talk) 21:20, 20 November 2007 (UTC)[reply]

Advertising new store online.

I have a family member who just opened up a new store, of the brick-and-mortar type, and I have offered to help with online sales. A friend set up the basic skeleton of a sales web site, but it needs a lot of work. Mainly it needs some type of sales software (to keep track of the shopping cart, process transactions, generate receipts, etc.). When I look for this type of software I either find what looks like a scam to me, or over-priced options that want to do everything for me. Any suggestions here?

I also need to advertise so the web site can get some traffic, the most important step would be that when someone google searches the name of the store they get the web site. I've looked at the Pagerank page, and I'm a little confused about how to go about this. It seems like the best way to improve the site's visibility in searches is to go to other sites (like blogs and forums) and post links back to my site (especially contextual links). However, this sounds under-handed to me. For instance I could insert a link to the site here in this question, and since google loves Wikipedia this would increase my rank. But the purpose of this question isn't to insert a link back to my site, it is to ask if there is a legitimate way to accomplish the same task? I also plan on eventually using google's ad service to place context-sensitive text-based ads elsewhere, will this increase my search ranking in-and-of itself? Thanks for your help. 128.223.131.21 (talk) 20:35, 20 November 2007 (UTC)[reply]

As far as I know, link farms on Wikipedia no longer give you a boost on Google as Wikipedia has tags to ask Google not to crawl external links and Google accepts the meta tags. --Kushalt 20:41, 20 November 2007 (UTC)[reply]
Google doesn't love Wikipedia, in the way you think. All external links from Wikipedia have the nofollow tag set on them, which means Google doesn't follow them and doesn't give them any value. That doesn't stop dumb people from trying it anyway, and we're really pretty good at removing that stuff and blocking the spammer (for that is what people who do stuff like that are). And if they're persistent, and do something stupid like make a whole article about their business, when it gets deleted here it leaves a track (like a deletion discussion) that Google does like. So when you search Google for that business, you find the deletion discussion, and that's something that says "scammer" to your customer. guerrilla marketing is one thing, but dumb stuff like that undoes thousands of dollars of positive press and advertising. -- Finlay McWalter | Talk 20:42, 20 November 2007 (UTC)[reply]
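Concretely, MediaWiki emits external links roughly like this (the markup shown is illustrative and the URL a placeholder); search engines that honour rel="nofollow" exclude the link when computing rank:

```html
<a rel="nofollow" class="external text" href="http://example.com/">My store</a>
```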
And similarly a lot of the "cunning" search engine optimization tricks you've heard of, including stuffing blogs with backlinks, turn out to trigger Google's (and Yahoo's, and MSN's) sophisticated "we're being scammed" detectors, which blacklist your site and again prove to be vastly counterproductive. -- Finlay McWalter | Talk 20:55, 20 November 2007 (UTC)
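For anyone curious what the nofollow mechanism actually looks like: a link carries the attribute rel="nofollow", and a crawler that honors it simply skips that link when assigning value. Here is a minimal sketch in Python of how such a filter could work (the sample HTML is made up for illustration, not taken from a real Wikipedia page):

```python
from html.parser import HTMLParser

# Minimal sketch of a crawler-side filter: collect only the links
# that are NOT marked rel="nofollow".
class NofollowFilter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followable = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "nofollow" not in rels:
            self.followable.append(attrs.get("href"))

# Hypothetical sample markup.
html = (
    '<a rel="nofollow" href="http://example.com/my-shop">spammy link</a>'
    '<a href="http://example.com/cited-source">normal link</a>'
)
parser = NofollowFilter()
parser.feed(html)
print(parser.followable)  # -> ['http://example.com/cited-source']
```

Real crawlers are of course far more elaborate, but the effect on a nofollow'd link is exactly this: it never enters the pool of links that earn the target any ranking credit.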
Regarding software, our Shopping cart software article doesn't have a comparison (which is disappointing) but does link to some external lists. Google for "open source shopping cart" and you'll find some you can use for free (and can see the source for, making it much less likely to be a scam). But the big pain is accepting credit cards - for a small online retailer that tends to be rather pricey. For that you need to find a trustworthy "merchant services" provider - there are many providers, but I can't say which are trustworthy. Going with an established brand is probably the path of least risk, but will add a cost (PayPal UK's merchant services account charges 3.4% + £0.20 GBP per transaction, which seems like a lot to me). -- Finlay McWalter | Talk 20:50, 20 November 2007 (UTC)
Besides PayPal, you will also find that Google and Amazon offer cash-register services for online credit card payments. When you look at the percentage taken, compare it to the percentage that a small merchant would pay for any other credit card transaction. Also consider the costs of website programming, bookkeeping, returns, fraud prevention, etc.; the transaction fee might not be the deciding factor in whether to do online sales. EdJohnston (talk) 21:09, 20 November 2007 (UTC)
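To put the quoted rate in perspective, here is a quick sketch of what the fixed 20p component does to small orders. The 3.4% + £0.20 figure is the PayPal UK rate mentioned above; the order amounts are made-up examples:

```python
# Per-transaction cost at the quoted PayPal UK rate (3.4% + £0.20).
# Order amounts below are hypothetical examples.
def paypal_fee(amount_gbp, rate=0.034, fixed=0.20):
    """Fee in GBP for a single transaction."""
    return round(amount_gbp * rate + fixed, 2)

for amount in (5.00, 20.00, 100.00):
    fee = paypal_fee(amount)
    print(f"£{amount:.2f} sale -> £{fee:.2f} fee ({fee / amount:.1%} of the sale)")
```

The fixed fee dominates on small orders: a £5 sale loses about 7.4% to fees, while a £100 sale loses only 3.6%, which is why low-ticket online retail tends to be disproportionately expensive to run.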

Republic Commando Soundtrack

Hi all,

The German and English articles say the soundtrack was made available to the public by LucasArts, but LA seems to have removed the whole product page. :( Does anyone have a download link for the soundtrack, or at least the credits song by Ash?

88.64.74.49 (talk) 21:42, 20 November 2007 (UTC)

Origin of h.264 name?

I know h.264 was "descended" from its predecessors, h.261 through h.263. But where did the "h" and the "26x" part come from? Do they have any special meaning? --24.249.108.133 (talk) 22:35, 20 November 2007 (UTC)


Windows 95 platform with the 'newest new'

I have a lot of old games that only run well on Win 95 (Railroad Tycoon II among them). This is a weird request, but what is the most modern hardware you can put in a Win 95 machine and still expect it to run old games like these well? I guess I'm thinking about what was brand new in '00 or so. Do the new SM3.0-compatible graphics cards have problems running old games like these, or is it purely the OS? Because heck, I guess I can just dual-boot any new computer with Win95/XP. So there's potential here for the request to be not so weird, but somehow I doubt Win95 would work with a GeForce 8800... Still, I'd love to hear it from the techies. Thanks a lot in advance. =) 81.93.102.185 (talk) 22:36, 20 November 2007 (UTC)

Excel

If I have a list like the following in Excel and I want to sort it from A to Z by last name while keeping each person's phone number and address with their name, how do I do that in Excel?

Last Name, Phone Number, Address
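The key in Excel is to select the entire range (all three columns) before sorting, then use Data → Sort and choose the Last Name column; each row then moves as a unit instead of just the one column being reordered. The same row-wise sort sketched in Python, with made-up sample rows:

```python
# Row-wise sort: each record (name, phone, address) stays together,
# just as Excel keeps a row intact when the whole range is selected.
rows = [
    ("Smith", "555-0101", "12 Oak St"),
    ("Adams", "555-0102", "9 Elm Ave"),
    ("Jones", "555-0103", "3 Pine Rd"),
]

rows_sorted = sorted(rows, key=lambda row: row[0])  # sort by last name
for last_name, phone, address in rows_sorted:
    print(last_name, phone, address)
```

Sorting only the Last Name column by itself (in Excel or in code) would scramble the pairing between names and their phone numbers, which is exactly the pitfall the whole-range selection avoids.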

Excel Question

In Excel, how do I create a list for one cell?