Wikipedia:Reference desk/Computing
Welcome to the computing section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
August 29
Scala IDE
I have the Scala IDE installed and I'm trying to learn the language. I tried compiling and running a program today and got "Error occurred during initialization of VM java/lang/NoClassDefFoundError: java/lang/Object" in the console. This is on a Mac running OS X 10.11. I have Java 8 Update 101 installed. I thought that the error in the console might be trying to tell me that the version of Java couldn't be found or some such thing. This is a screenshot of the preferences panel of the IDE. Is it pointing at the wrong JRE? If so, where would I find the correct version of the JRE?
Or, am I completely off base?
Thanks! †Dismas†|(talk) 23:34, 29 August 2016 (UTC)
- You are definitely on the right track - you pointed at a Java 1.7 (Java 7) installation (though in your screenshot, the full paths are not visible! Expand your Location viewer...) You probably have multiple JREs installed (and that's perfectly okay), but you need to point to the Java 8 installation. The default location is /Library/Java/, but if you used a non-standard installation, it could be in ~/Library/ or lots of other locations... see, e.g., Important Java Directories on Mac OS X.
- You should be able to browse for the correct location by clicking the "Add..." button. You can use the currently-selected Java SE 7 location as a good place to start looking; then navigate up the directory tree until you find the Java 8 version.
- Nimur (talk) 00:44, 30 August 2016 (UTC)
- This is interesting... The Java control panel says I have Java 8 but I can't find any file with 1.8 in the name... And running the "/usr/libexec/java_home" command that is on that page you linked to only brings up 1.7. †Dismas†|(talk) 01:42, 30 August 2016 (UTC)
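For anyone hitting the same wall: /usr/libexec/java_home -V lists every JVM the system has registered, which makes version mismatches like this one easy to spot. A minimal Python wrapper, purely as an illustrative sketch:

```python
# Illustrative sketch: list the JVMs installed on a Mac by wrapping
# /usr/libexec/java_home. Its -V flag prints every registered Java
# home (version, vendor, and path); the listing goes to stderr.
import subprocess

def list_java_homes() -> str:
    result = subprocess.run(
        ["/usr/libexec/java_home", "-V"],
        capture_output=True, text=True,
    )
    return result.stderr.strip()

if __name__ == "__main__":
    print(list_java_homes())
```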
After an uninstall and reinstall of Java, it's all set. I'm getting different errors but they're errors that I expected. Thanks for the assist! †Dismas†|(talk) 02:25, 30 August 2016 (UTC)
August 30
Are people still training to become Fortran or COBOL programmers?
Or, are they legacy programmers who trained when these legacy technologies were mainstream? --Llaanngg (talk) 17:12, 30 August 2016 (UTC)
- (Anecdotal disclaimer)... but I can cite sources for my stories!
- Indeed, there are still academic and industrial training programs for both Fortran and COBOL.
- I studied FORTRAN 77, Fortran 90 and Fortran 95, and RATFOR, while I was a student. I know of several industries where these specific programming-language skills are still desired. Here is a website of one major research lab at one major American university where this software is still part of the formal training program: a software tour of SEPlib, which is still well-regarded by certain industrial sponsors.
- I have a friend who has studied COBOL informally as part of on-the-job training (in 2016!) at a major American financial institution. They work with IBM mainframes, and those still exist and are still part of the new-hire career track in certain specialized business units. IBM still advertises COBOL on z/OS.
- All this being said: if you had to decide what to specialize in, you will probably broaden your horizons by learning Java, C, and Python. But if you are a serious student of computer science, you should learn a few dozen languages, and develop the specific skill of learning how to learn computer languages. Most programmers, at some point in their career, will have to work with some unfamiliar language, which may be a domain-specific language, a proprietary software system, or an esoteric or antique project that needs maintenance.
- If you haven't already read Teach Yourself Programming in Ten Years by Peter Norvig, ... go read it!
- Nimur (talk) 18:24, 30 August 2016 (UTC)
- "learn a few dozen languages"? Does it mean 24, 36, 48? That looks like an overkill, even for people who are really serious about computer science. Learning 4 languages in 4 main paradigms, maybe add a 5th really exotic language to them and aim for the depth - that seems like a more reasonable approach. Hofhof (talk) 22:41, 30 August 2016 (UTC)
- I was not exaggerating.
- Nimur (talk) 23:01, 30 August 2016 (UTC)
- Once you learn C/C++, you are functionally literate in dozens of languages. All I did was flip through a reference book to learn Java. I learned PHP by looking at code someone else wrote. I forced myself to learn Lisp a long time ago, so I know the extensions, such as ML. I do a lot of command-line administration, so I regularly use awk, sed, and perl. The military had me using FORTRAN and Ada. With that background, I see new languages and really need nothing more than a reference guide. In my opinion, it all comes down to learning C first. Learn C and you learn dozens of languages. Then, learn Lisp and you learn a dozen more. 209.149.113.4 (talk) 11:45, 31 August 2016 (UTC)
- Indeed, that summarizes my thoughts pretty concisely.
- I think the distinction is, some people program to draw a paycheck, and that's fine... they should develop proficiency and excellence at the most in-demand marketable language. This kind of person usually "maxes out" at one or two programming languages.
- But some people program computers because they are inspired - they want to speak the binary language of moisture vaporators or they have a really solid affinity for working with and thinking about data. Those people will learn sed, and awk, and perl, and lisp, and they'll dump binaries to decode the machine language by hand, and ...
- Last week I had the (mis)fortune of hand-decoding a bitstream recording of PCIe link-layer and transaction-layer packets. With my trusty copy of the PCIe specification in hand, I had to write a program to turn hexadecimal numbers into information so that I could diagnose a hardware or software problem. I discovered that the PCIe transaction protocol was Turing complete - to experienced engineers, it's not actually very surprising, is it? For the novice reader: this means that the data link between, say, your hard drive and your main computer is controlled by a full-fledged, fully programmable computer language. This doesn't only mean that we can reconfigure the machine for the data reads and writes: it means that we can use your PCIe link to (very inefficiently) play Pong, to execute the artificial-intelligence simulator called Siri, or to run a cryptographically-secure random-number generator (and conditionally inject those random numbers into your precious data files!) The interface is a programming language, even though most people would prefer to call it "just a bunch of bits and bytes." The program is the data! You will probably never find a textbook on "hacking the PCIe link layer to make a Turing-complete computer language." You just accidentally learn this kind of nonsense on the job!
- But because I have trained my brain to think like a machine, to grok data the way a machine groks data, it is easier for me to see emergence in places where you might not expect it. It gives me some unique insights into the realistic and practical issues on abstract topics like intelligence, machine-learning, and fundamental computer theory. When you see other programmers who get it, you connect on a level that is a lot deeper than just sharing a common syntax and dialect. Computer scientists think similarly about complexity.
- For the novice readers, here is an example of something I would call a non-trivial software program: Peter Norvig's spelling checker. It's a toy solution to a real-world problem. It took the author about 12 hours to "solve" - in Python - and you can bet that he did not spend very much time struggling with the Python syntax. Once he "solved" the problem, many of his friends translated it into dozens of languages - including weird ones like R and D. Translating wasn't difficult. Syntax wasn't difficult. Understanding that the solution is a bunch of simple probabilistic calculations was the key to the problem. Observing how different languages represent that calculation (and deal with, say, character-string syntax) is a great learning experience. (A heavily condensed sketch of the idea follows below.)
- Nimur (talk) 15:18, 31 August 2016 (UTC)
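For readers who want the flavor of that spelling corrector without reading the full essay, here is a heavily condensed Python sketch of the same idea: word frequencies as a crude probability model, plus candidates one edit away. The corpus file name is an assumption; Norvig's original is longer and much better.

```python
# Condensed sketch of Norvig-style spelling correction: among known
# words within one edit of the input, pick the most frequent one.
# Assumes a plain-text file 'corpus.txt' exists to count words from.
import re
from collections import Counter

WORDS = Counter(re.findall(r"[a-z]+", open("corpus.txt").read().lower()))

def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    # known word as-is, else known words one edit away, else give up
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=WORDS.get)  # most frequent candidate wins

print(correct("speling"))  # hopefully 'spelling', given a decent corpus
```

The essay walks through the probability model properly; the point here is just how little syntax the core idea needs.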
- "learn a few dozen languages"? Does it mean 24, 36, 48? That looks like an overkill, even for people who are really serious about computer science. Learning 4 languages in 4 main paradigms, maybe add a 5th really exotic language to them and aim for the depth - that seems like a more reasonable approach. Hofhof (talk) 22:41, 30 August 2016 (UTC)
- Did you go so far as to run Linux on the controller? (See Spritesmods for "running Linux on a hard drive".) 209.149.113.4 (talk) 16:19, 31 August 2016 (UTC)
- I like Linux, and I think it's great for a lot of stuff. But around these parts, when I want to get clever and run software in places where I shouldn't, I usually prefer to boot XNU, rather than Linux! But in this case, no, I did not want to try to abuse the link layer so badly - I just found it to be a fun observation! Nimur (talk) 19:27, 31 August 2016 (UTC)
- I regularly take contracts for FORTRAN, COBOL, and Ada jobs. Most are government, but other industries use them. I don't mind that it is old technology. I get to charge more because there is less competition. 209.149.113.4 (talk) 19:09, 30 August 2016 (UTC)
- Yes. That's what I do on a daily basis, and it is something that I learned in the last decade (and these languages have been around for much longer than that). In particular, in the field of high-performance computing, Fortran is still king. Titoxd(?!?) 19:57, 30 August 2016 (UTC)
- Fortran is still widely used in science. Intel, for instance, regularly releases new versions of Intel Fortran Compiler, which uses all features of modern processors. Ruslik_Zero 20:40, 30 August 2016 (UTC)
- The entire Department of Defense payroll is calculated on two mainframes running COBOL: they can't communicate, so the staff prints out data from one computer and then types it into the other. Apparently, they've never been audited by Congress, either: http://www.npr.org/2013/07/16/202360167/investigation-reveals-a-military-payroll-rife-with-glitches OldTimeNESter (talk) 16:28, 31 August 2016 (UTC)
- This is personal experience, but I know that a major military contractor runs its engineering simulations using legacy FORTRAN routines from the 1970s. I've also heard that FORTRAN is still used by some insurance companies, because the code is seen as reliable, and replacing it is both risky and expensive. OldTimeNESter (talk) 16:34, 31 August 2016 (UTC)
- I am inclined to disbelieve the sensational claims that were made in this now-famous news report about the Department of Defense payroll. I suspect that the Department of Defense has no reason to disavow such claims - why should they choose to provide corrected information about sensitive computer systems? It is in their best interest that the misinformation persists. The general public, including all the news editors who work for NPR and Reuters, and all the hackers who seek to cause harm to the infrastructure, do not have any useful insight into the inner workings or implementations of their systems.
- People who need to know, like our senators and representatives who oversee the budget, and our civil servants who implement the process, almost certainly have privileged access to information that is not made public. Have a look at some search results for 'security clearance' at the website of the Government Accountability Office, or the same at the website of the OPM.
- Our government, and our Department of Defense, both have many inefficiencies; I believe many stories and anecdotes I hear about dinosaur computers in government offices... but I do not believe for even a brief minute that an investigative reporter managed to successfully and accurately discover any meaningful technical details about how "all the payroll" for our military is handled.
- Nimur (talk) 19:39, 31 August 2016 (UTC)
- Without explicitly stating the warrant for my belief, I'm inclined to believe it's even worse than the article states. OldTimeNESter (talk) 19:47, 31 August 2016 (UTC)
August 31
Googling caribou in reindeer country
I noticed that Googling for "reindeer" in the Canadian woods brings up the Caribou factbox instead, but titles it "Reindeer". Can a Scandinavian (or, failing that, any European) confirm or deny that Googling for "caribou" brings up Reindeer titled "Caribou"? InedibleHulk (talk) 02:09, August 31, 2016 (UTC)
- For me in the UK, googling "caribou" gets me Dan Snaith. Googling "reindeer" gets me the same result as you. Rojomoke (talk) 03:19, 31 August 2016 (UTC)
- That's even weirder than I feared. Caribou gets caribou for me, but the box is still called "Reindeer". InedibleHulk (talk) 03:33, August 31, 2016 (UTC)
- You can click on the "feedback" button below the box to report this.--Shantavira|feed me 06:45, 31 August 2016 (UTC)
- I'm in Denmark and don't get any factbox (Google Knowledge Graph) on "reindeer" or "caribou". On "reindeers" and "caribous" I get the same factbox titled "Reindeer" but with a linked paragraph from our Caribou article. Note that caribou and reindeer are sometimes used as synonyms, as our articles say. See Template:HD/GKG for the limited relationship between a Google Knowledge Graph and Wikipedia. PrimeHunter (talk) 10:56, 1 September 2016 (UTC)
- Good to know, thanks. And yeah, they're basically the same creatures in different environments, so it's not something that really needs to be fixed. Just a bit of an odd choice for the Googlebot. If New World porcupine starts redirecting to Old World porcupine (or vice versa), that would be serious. InedibleHulk (talk) 00:03, September 2, 2016 (UTC)
Layering different encryption methods
IIRC, it's a standard result that layering several encryption methods does not increase the security compared to the best of the employed methods. But is that really true? I can see the result for perfect algorithms, when the only possible attack is brute force. But real cyphers often have other weaknesses that make breaking the cypher a lot easier than simply brute-forcing it. Wouldn't layering different algorithms (say AES/Rijndael, 3DES and Blowfish) mask the respective weaknesses of each individual algorithm? --Stephan Schulz (talk) 06:07, 31 August 2016 (UTC)
- AFAICT multiple encryption doesn't seem to mention that "layering several encryption methods does not increase the security compared to the best of the employed methods", but does suggest care needs to be taken in implementation. There's also [1] by a cryptographer. The general consensus I'm getting, also in non-RS sources [2], is that multiple encryption may have some advantages, but you have to be careful about implementation, and it doesn't help with what tends to be the biggest weakness in encryption systems. Nil Einne (talk) 09:28, 31 August 2016 (UTC)
- Much of this depends on HOW you do multiple encryption. What if I take one key and use that key for 100 different encryption schemes, one after the other? Once you figure out the key for the final encryption scheme, the rest are quickly decrypted. It is like locking a door with a dozen locks that all use the same key. So, asking the user to type in one key for encryption and then using that for all forms of encryption won't work. What if I go the DVD/Blu-ray method and package the key with the encryption? I ask for one encryption key from the user and then pack random keys for all the other encryption methods into the encrypted code. Once you decrypt the first stage with the key from the user, it will contain the key for the next stage, which contains the key for the next stage, and so on. It is like locking a box with a padlock. Inside that is a smaller box with the key taped to it. Inside that is a smaller box with the key taped to it, etc... You can hard-code how the key the user types in is manipulated to create a vast set of keys for multiple encryption schemes. That assumes that people trying to hack your system won't know anything about disassembler techniques. Obviously, they will. You are essentially back to the one key for all encryption schemes again. So, you end up asking the user for a vast set of keys, one for each encryption scheme. Commonly, we ask users for two or three forms of "key". We ask for a password. Many times, we ask for the user's public key from a public/private keyset. It is also becoming common to do something with the user's phone. That is just three forms of security for a single transaction. If you want to get into multiple forms of encryption, imagine being asked for 8 unique passwords to encrypt something - and then needing them in the reverse order to decrypt on the other end. While it is functional on paper, it isn't really any better than one good encryption scheme. 209.149.113.4 (talk) 11:31, 31 August 2016 (UTC)
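For concreteness, here is a minimal sketch of that box-in-a-box key packaging, using the third-party Python cryptography package's Fernet recipe. The layer count and the framing byte sequence are invented for illustration; this is not a vetted design.

```python
# Sketch of layered encryption where each outer layer carries the key
# for the layer inside it, as in the "boxes within boxes" analogy.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

def wrap(plaintext: bytes, layers: int = 3):
    """Encrypt repeatedly; each outer layer stores the inner layer's
    key. Returns (ciphertext, outermost_key)."""
    key = Fernet.generate_key()
    blob = Fernet(key).encrypt(plaintext)
    for _ in range(layers - 1):
        outer_key = Fernet.generate_key()
        # "tape the key to the box", then lock both in the outer box;
        # '||' is safe framing because Fernet output is urlsafe base64
        blob = Fernet(outer_key).encrypt(key + b"||" + blob)
        key = outer_key
    return blob, key

def unwrap(blob: bytes, key: bytes, layers: int = 3) -> bytes:
    for _ in range(layers - 1):
        key, blob = Fernet(key).decrypt(blob).split(b"||", 1)
    return Fernet(key).decrypt(blob)

data, outer_key = wrap(b"attack at dawn")
assert unwrap(data, outer_key) == b"attack at dawn"
```

This makes the comment's point concrete: whoever recovers the single outermost key unwraps every layer, so the extra layers add work for the legitimate user, not strength against an attacker who finds that key.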
- You have it backwards: layering several encryption methods can't decrease the security compared to the best one (if they use independent random keys). Proof: if it could, the attacker could break the best method alone by composing it with the others, which is a contradiction.
- Layered encryption can certainly increase security, even if you use the same cipher for each layer: 3DES is an example. But you have to be careful: "2DES" would be hardly more secure than DES because of the meet-in-the-middle attack, and 3DES can be broken in 2^112 (not 2^168) time by the same attack. (A toy demonstration of the meet-in-the-middle idea follows this reply.)
- If you lock your valuables in a safe and lock that safe in another safe, a vulnerability in one safe isn't enough to get the valuables. If you lock your valuables in a safe and lock the key to that safe in another safe, a vulnerability in either safe is enough to get the valuables. When people say "a system is only as strong as its weakest link", they're talking about the latter situation. In TLS, for example, the payload is protected by a symmetric cipher, the key to that cipher is protected by an asymmetric cipher, the key to that cipher is protected from tampering by a certificate authority, that certificate authority is certified reliable by another one, and so on up to a trusted root that ships with the web browser. If you can compromise any link of that chain (the chain of trust), you can compromise the payload. -- BenRG (talk) 19:20, 1 September 2016 (UTC)
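To make the meet-in-the-middle point concrete, here is a toy Python demonstration with a deliberately weak, hash-derived XOR "cipher" and 16-bit keys: attacking the double encryption costs roughly 2 x 2^16 cipher operations (one forward table plus one backward probe per key), not the 2^32 of brute force.

```python
# Toy meet-in-the-middle attack on double encryption with 16-bit keys.
# The "cipher" is a hash-derived XOR pad stand-in, NOT real crypto.
import hashlib

def toy_encrypt(key: int, block: int) -> int:
    pad = int.from_bytes(hashlib.sha256(key.to_bytes(2, "big")).digest()[:4], "big")
    return block ^ pad

toy_decrypt = toy_encrypt  # XOR pad, so the cipher is self-inverse

k1, k2 = 12345, 54321                    # the secret key pair
p = 0xDEADBEEF                           # known plaintext
c = toy_encrypt(k2, toy_encrypt(k1, p))  # double encryption

# Build a table of all 2^16 forward half-encryptions of p, then probe
# all 2^16 backward half-decryptions of c and look for a meeting point.
middle = {toy_encrypt(k, p): k for k in range(2**16)}
candidates = [(middle[toy_decrypt(k, c)], k)
              for k in range(2**16) if toy_decrypt(k, c) in middle]

# A real attack would confirm candidates against a second known
# plaintext/ciphertext pair; here we just check the true keys appear.
print(len(candidates), (k1, k2) in candidates)
```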
- I would say that layering multiple encryptions can be correctly implemented as a form of defense in depth. This is widely regarded as good practice.
- For example, I (sometimes) use 802.11ac with WPA2 and I also use secure sockets (TLS). On top of this, I sometimes use application encryption.
- When a serious error was discovered in some versions of SSL - CVE-2014-0160 - I had confidence that my potential exposure was lower than it could have been. As Stephan Schulz has aptly pointed out, "real (algorithms) often have other weaknesses...." The bug in that case was not actually an error in the cryptographic mathematics. It was, roughly speaking, a privilege escalation bug that took advantage of an error in the memory allocation code.
- Nimur (talk) 22:05, 31 August 2016 (UTC)
- I think you have some misconceptions. WPA2 and friends only encrypt traffic while it travels via radio waves from you to the access point. You should definitely use WPA(2), but it only protects you from someone physically nearby sniffing your Wi-Fi transmissions. None of what you mentioned made you any safer from Heartbleed. Heartbleed is a "buffer over-read" that allows an attacker to read data they shouldn't be able to from the server's memory. An attacker can use this to grab all your data in plaintext from the server, or obtain the server's private key and decrypt your traffic. For that matter, Heartbleed works on clients as well, so if you were using a vulnerable OpenSSL version, an attacker could have exploited you if they were able to get you to connect to a system they controlled. --47.138.165.200 (talk) 02:03, 1 September 2016 (UTC)
- I understand: my point was, an attacker can't sniff my SSL traffic (or try to attack my SSL server) if they can't get on my WiFi network (in my case, I do not publish my SSL services to the external network). Even if I have a compromised SSL server, having my link layer remain encrypted provides a different, orthogonal layer of protection. And even if the attacker breaks all those layers, if my SSL session is transferring data files that are additionally encrypted, the attacker still cannot make use of a successful breach of my network traffic. In principle, the attacker could use Heartbleed to disclose memory, but if the contents of that memory are just more encrypted data, there is minimal harm from the unwanted disclosure! Obviously, having all the encryption schemes operating correctly provides even more protection. But by encrypting at many different layers, an attacker isn't successful unless every single layer of the system is compromised. Nimur (talk) 07:21, 1 September 2016 (UTC)
- An attacker can be somewhere other than on your LAN. And on the Web you frequently don't have the option of adding encryption layers. Your bank probably doesn't let you force your online banking to use only PGP-encrypted messages. Anyway this is a largely academic discussion; most real-world attacks involve attackers breaking into the systems of banks, merchants, etc. or malware being planted on your computer, neither of which will be impeded by encrypting your traffic. --47.138.165.200 (talk) 03:11, 2 September 2016 (UTC)
Mistake - Age Of Death
On Wikipedia, when a person dies, their infobox displays something like "Died: 1 January 2000 (aged 12)". I recently noticed a mistake in one, so I went to change it, but I was really confused by your coding. There must be easier ways, or ones that don't cause mistakes, like: 1. just write the date and age out directly, or 2. have Wikipedia calculate the age automatically: death year minus birth year, minus 1 if the death month/day comes before the birth month/day. (Or similar.)
69.165.177.132 (talk) 12:34, 31 August 2016 (UTC)
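For what it's worth, the calculation sketched in point 2 is easy to state precisely. A minimal Python version, illustrative only (the actual infoboxes are driven by wiki templates such as {{Death date and age}}, not Python):

```python
# Age at death: year difference, minus one if the person died before
# reaching their birthday anniversary in the year of death.
from datetime import date

def age_at_death(born: date, died: date) -> int:
    before_birthday = (died.month, died.day) < (born.month, born.day)
    return died.year - born.year - before_birthday

print(age_at_death(date(1987, 6, 15), date(2000, 1, 1)))  # 12
```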
I think you might want to ask this question on the Wikipedia:Help desk instead of here. Margalob (talk) 02:20, 1 September 2016 (UTC)
September 1
Rule 110
I read on Wikipedia that Rule 110, an elementary cellular automaton, is Turing-complete, which I believe means that any calculation is possible within that language. Is my understanding of Turing completeness correct? If it is, how would I perform basic arithmetic (for instance, 5 x 2 = 10) using Rule 110?
Thanks for your help, and let me know if this isn't the right reference desk for my question. Margalob (talk) 02:18, 1 September 2016 (UTC)
- Since the Turing completeness proof is constructive, you can follow the proof: first figure out how to do multiplication in a cyclic tag system, then encode that in a rule-110 initial state as described in the rule 110 article. It will be difficult (or at least tedious) to make it work. There's probably no easier way. -- BenRG (talk) 08:01, 2 September 2016 (UTC)
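Simulating Rule 110 itself (as opposed to programming in it) is straightforward, and may help build intuition before tackling the cyclic tag system construction. A minimal Python sketch:

```python
# Minimal Rule 110 simulator. Each cell's next state is looked up from
# its 3-cell neighborhood: 110 decimal is 01101110 binary, and bit p of
# that number is the new state for neighborhood pattern p (0..7).
RULE = 110

def step(cells):
    n = len(cells)
    return [
        # neighborhood (left, center, right) read as a 3-bit number;
        # cyclic boundary conditions, for simplicity
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1]  # single live cell at the right edge
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```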
Matching whitespace in regex
While writing regexes to match a single whitespace character, is it better practice to write "\s" or " "? Since both are valid solutions, I'd expect both to occur equally frequently, but I see my programmer colleagues prefer "\s" all the time. Is there any particular reason for that? La Alquimista 09:45, 1 September 2016 (UTC)
- In some implementations "\s" will also match some other characters, like tab. See here. manya (talk) 09:52, 1 September 2016 (UTC)
- \s only has any special meaning in some regex flavors. There isn't one single thing called "regex"; there are multiple different regular expression languages. Read this. As noted above, in most implementations that support it, \s will match things other than the space character, e.g. tab. To find out what it matches, see the documentation for whatever it is you're using. So the real question here is: what do you want to match? If you only want to match a space, use a space. If you want to match more than that, use something else. --47.138.165.200 (talk) 23:56, 1 September 2016 (UTC)
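A quick Python illustration of the difference (Python's re module is just one regex flavor, but this behavior is typical):

```python
# \s matches any whitespace character; a literal space matches only itself.
import re

text = "a b\tc\nd"
print(re.findall(r"\s", text))  # [' ', '\t', '\n']  (space, tab, newline)
print(re.findall(r" ", text))   # [' ']              (literal space only)
```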
Windows Multiple Folder Name System
I have a need for a product that I haven't seen for Windows. I use it in Linux, and an office that exclusively uses Windows wants the same system. What I have is a network storage system. It has a hierarchical folder system. I store files based on funding source, contract ID, and assigned team. That is the default view. I can change the view to date. I get top-level folders for year. I select a year and I get months. I select a month and I get days. Then, I can open a day folder and see all files last modified on that day. Once I select a file by day, I can revert the view to normal folders and I will be in the team folder that contains the file. Basically, they want a Windows file share that lets them flip between the actual folder names and a hierarchy of files by last modified date. Anything like that available? 47.49.128.58 (talk) 14:25, 1 September 2016 (UTC)
- What you're describing is essentially a Document management system. -- Finlay McWalter··–·Talk 15:35, 1 September 2016 (UTC)
- One option might be to use a relational database system to store links to the files. You could then have "reports" that list those links organized in any way you'd like. The db system would be more flexible, such as allowing you to just list files with a particular combo of funding source, assigned team, and date range, even if that need was not anticipated when the system was set up. StuRat (talk) 16:09, 1 September 2016 (UTC)
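As a sketch of that database approach, using Python's built-in sqlite3 module (the schema, paths, and field names here are invented for illustration):

```python
# Illustrative file index in SQLite: one row per file with its
# metadata, so any "view" is just a query. Schema and data made up.
import sqlite3

con = sqlite3.connect("file_index.db")
con.execute("""CREATE TABLE IF NOT EXISTS files (
    path TEXT PRIMARY KEY,
    funding_source TEXT,
    contract_id TEXT,
    team TEXT,
    modified DATE)""")

con.execute("INSERT OR REPLACE INTO files VALUES (?,?,?,?,?)",
            (r"\\nas\grants\c42\report.docx", "NIH", "C42", "alpha", "2016-08-30"))
con.commit()

# The "by date" view: everything touched in a window, from any folder.
for row in con.execute(
        "SELECT path, team FROM files WHERE modified >= ? ORDER BY modified",
        ("2016-08-28",)):
    print(row)
```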
- I can't make out what the OP wants, but here's a tip. If you have multiple dated files with the same name (example, "Meeting"), name them with a leading or trailing YYYY-MM-DD date format. Example: 2015-12-22 Meeting, 2016-02-07 Meeting, 2016-03-12 Meeting, etc. Or: Meeting 2015-12-22, Meeting 2016-02-07, Meeting 2016-03-12. You'll find that the folder opens with the files in strict year, month, day order. Akld guy (talk) 21:59, 1 September 2016 (UTC)
- If you have the date before the filename, they will open in date order whatever the rest of the file name, e.g. 2015-12-22 Meeting, 2016-02-07 Party, 2016-03-12 Meeting - of course, that means you can't sort by the rest of the filename as conveniently. Generally, sorting by "date created" will give a similar result (though it may not, for example if the files have been downloaded from another source) MChesterMC (talk) 09:25, 2 September 2016 (UTC)
- (OP here - using a public computer at the hospital) Sorting by date (date created or using dates in the name) won't work because the files are scattered all over the drive. The files are primarily sorted by subject matter. Every day, a few people need to see what has been added or edited in the last 2 or 3 days. They want to view the files based on date, regardless of what subfolder they are in. In practice, they ask me to do it because I use Linux and it is trivial to change my view from a folder view to a date-modified view. I'm trying to find something similar in Windows so I don't have to do it every day. 209.149.113.4 (talk) 12:15, 2 September 2016 (UTC)
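If a full document management system is overkill, a small script can produce that "modified in the last few days" view on Windows as well. A minimal Python sketch (the share path is a placeholder):

```python
# List every file under a share that changed in the last N days,
# regardless of which subfolder it lives in. ROOT is hypothetical.
import os, time

ROOT = r"\\fileserver\projects"
DAYS = 3
cutoff = time.time() - DAYS * 86400

recent = []
for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        full = os.path.join(dirpath, name)
        try:
            mtime = os.path.getmtime(full)
            if mtime >= cutoff:
                recent.append((mtime, full))
        except OSError:
            pass  # file vanished or access denied; skip it

for mtime, path in sorted(recent, reverse=True):
    print(time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime)), path)
```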
- Would something like OpenDocMan [ http://www.opendocman.com/features/ ] meet your needs? --Guy Macon (talk) 13:22, 2 September 2016 (UTC)
September 2
Windows 10 CPUs
micro$oft is banging the drum about new "Kaby Lake" CPUs only working with Windows 10 from now on. Please tell me, are they going to actually make it so Windows 7 won't work at all? Because as I understand it modern x86-64 CPUs can still be used by MS-DOS and other ancient operating systems, so if MS-DOS can work on Kaby Lake why would Windows 7 stop working unless micro$hit deliberately put in a block — Preceding unsigned comment added by Bobbartdf93493 (talk • contribs) 13:13, 2 September 2016 (UTC)
- According to this PC World article Microsoft will not be releasing drivers for Kaby Lake processors for Windows 7 or 8/8.1. The article speculates that "the processor would boot, though without driver support and security updates the experience would be “a bit glitchy”", so really I guess it depends on your definition of 'work at all'. x86 processors work on Windows 7 because there are specific drivers that allow the OS to interact with the CPU. Without those drivers, things will be difficult. As for why MS would do this, I guess their reasoning would be that by focussing resources on supporting a single modern OS they will be able to offer the best possible experience in that OS. Another way to phrase that would be 'profit' - they are a business, after all. - Cucumber Mike (talk) 13:23, 2 September 2016 (UTC)