Wikipedia:Reference desk/Computing
Welcome to the computing section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
February 4
Question (How are CPS made?)
how are cps made — Preceding unsigned comment added by Jake200503 (talk • contribs) 02:34, 4 February 2016 (UTC)
- Can you please clarify? It's not at all clear what you mean by CPS; even if we narrow the field to computing there are half a dozen possibilities for what you could mean. Vespine (talk) 04:12, 4 February 2016 (UTC)
- I added to the title to make it unique (although still not clear). StuRat (talk) 04:39, 4 February 2016 (UTC)
- Possibly the OP meant CPUs, although even in that case some clarification would help. In the case of a computer, the lay person sometimes uses CPU to mean the whole computer (minus the monitor, keyboard, mouse and other user-facing parts), particularly with a desktop-like device. Nil Einne (talk) 13:31, 5 February 2016 (UTC)
SQL query condition question
I recently ran into the following problem at work.
We have a database table that includes rows that should be processed every x months. The value of x depends on the row and is stored in a field of the row. There is another field saying when the row was last processed.
I wanted to make a database query selecting every row that should now be processed, because enough time has passed since it was last processed. It turns out this was not so easy, as the condition in the "where" clause would have to change between individual rows.
Let's say the table name is "thing", "lastprocessed" means when a row was last processed, and "processinterval" means how many months should pass between processings. So the query is something like: select * from thing where lastprocessed < :date and processinterval = :interval.
I ended up making separate queries for every value of "processinterval" (there are a finite number of them, and not very many), computing :date separately for each of them, and then combining the results.
Is there an easier way to do this? JIP | Talk 21:02, 4 February 2016 (UTC)
- If you're using SQL Server you can do something like this. If not, you could write a function that returns the specific date based on the processinterval and then call the function in the lastprocessed < :date portion, setting the :date to be getdatebyinterval(processinterval)'s return value. I'm not sure if you need specific permissions to create a function on a database; I assume so, but YMMV. I'd also assume that if you're doing this at work then you probably have the permissions. I'm not sure how good a solution this is for your particular workplace. The third option is to hit the database once and get all relevant rows in thing, and then have whatever application is using the data figure out which to process instead of having the database try to. FrameDrag (talk) 21:41, 4 February 2016 (UTC)
- Without data types, this is hard to answer. I have to ask why you can't add lastprocessed to interval and compare it to the current date. — Preceding unsigned comment added by 47.49.128.58 (talk) 01:37, 5 February 2016 (UTC)
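For what it's worth, here is a minimal sketch of that single-query idea in Python with SQLite, assuming the column names from the question and that lastprocessed is stored as an ISO date string; other engines have equivalent date arithmetic (e.g. DATEADD in SQL Server, or adding an interval in PostgreSQL), so only the date expression would change:

    import sqlite3

    conn = sqlite3.connect("work.db")  # hypothetical database file
    # One query for all rows: add each row's own processinterval (in months)
    # to its lastprocessed date and compare the result with today.
    due_rows = conn.execute(
        "SELECT * FROM thing "
        "WHERE date(lastprocessed, '+' || processinterval || ' months') <= date('now')"
    ).fetchall()

The point is simply that the interval can be part of the date expression itself, so the per-interval :date parameter (and the separate queries) are no longer needed.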
- See your table "thing": does "thing" store a creation date+time for each record? Does it have an increasing record identifier? If so, create a table which stores a high-water mark when your processing runs. Use "> (select max(highwatermark) from processruns)" unjoined in the query to select only the unprocessed records. When the query has finished, insert the high-water mark from "thing" or add the current date. --Hans Haase (有问题吗) 02:06, 5 February 2016 (UTC)
- BTW, this isn't a very efficient way to do things. You might get away with it, with only a small number of rows, but checking every row every time, when only a small number need to be processed, wastes resources. I would suggest a "Scheduled Events" table that lists dates and rows (in the main table) that need to be processed on those dates. You would only have the next event for each row listed in this table, and replace the date with the next date, as part of the processing step. So, if you had a million rows in the main table, and only a thousand need to be processed each time, this should be on the order of a thousand times faster (not including the actual processing time). StuRat (talk) 03:03, 5 February 2016 (UTC)
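A rough sketch of that scheduled-events idea, again using SQLite from Python; the schedule table and its column names (thing_id, due_date) are made up for illustration, and it assumes thing has an integer primary key id:

    import sqlite3

    conn = sqlite3.connect("work.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schedule "
        "(thing_id INTEGER PRIMARY KEY, due_date TEXT NOT NULL)"
    )
    # Each run only looks at the rows that are actually due,
    # instead of scanning and date-checking every row of "thing".
    due = [row[0] for row in conn.execute(
        "SELECT thing_id FROM schedule WHERE due_date <= date('now')")]
    for thing_id in due:
        # ... process the corresponding row of "thing" here ...
        # then push this row's next due date forward by its own interval
        conn.execute(
            "UPDATE schedule SET due_date = date('now', '+' || "
            "(SELECT processinterval FROM thing WHERE id = ?) || ' months') "
            "WHERE thing_id = ?",
            (thing_id, thing_id))
    conn.commit()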
- If the "process" is each month and a date is in "thing", extract and filter on year and month. --Hans Haase (有问题吗) 21:24, 5 February 2016 (UTC)
In short: will adding an SSD drive generally give notable improvements to thrashing issues?
In long: I have a series of computations I need to run for my research. This partly involves smacking together lots of relatively large arrays. Things go great for size parameters of about 45x45x10k, and the things I want to run complete in a few hundred seconds. Apparently things mostly fit into my 16GB RAM, and only a few larger swaps to virtual need to be done. But if I increase those sizes to say 50x50x15k, I hit a memory wall, get into heavy swapping/thrashing, and the thing can take about 20 times longer, and that's not ideal. It's not so much the time increase that bothers me, but that I've shifted into a whole new and worse domain of effective time complexity. So: would buying a few gigs (16?) of SSD to use as virtual memory generally help speed things up for the latter case? I know more memory would help, but all my DIMM slots are full, and I think I can get a solid-state drive for a lot less than 4 new RAM chips above 4GB each. Actually I may be very confused and wrong about this, as I haven't paid much attention to hardware for years; maybe more RAM could be comparable cost and more effective. Any suggestions? (And if you happen to be decently skilled at scientific computing and are looking for something to work on, let me know ;) SemanticMantis (talk) 21:14, 4 February 2016 (UTC)
- In short, yes, sticking the Windows swap file on an SSD is highly recommended, and considering you can get a 64GB SSD for about $70 (I don't think you'll find a 16GB one these days), it's a bit of a no-brainer these days. It used to be a bit controversial in the early days of SSDs because a swap file heavily utilizes the disk and early SSDs didn't have very high read/write cycle limits, but that's not so much of a concern anymore. How much performance increase you see is hard to predict, but it could be anything from "a bit" to "loads". Vespine (talk) 22:00, 4 February 2016 (UTC)
- Thanks, it seems I am indeed out of touch with prices and sizes. This is for use with OSX by the way, but I'm sure I can figure out how to use the SSD for virtual memory if/when I get one. If anyone cares to suggest a make/model with good latency and value for the price, I'd appreciate that too. SemanticMantis (talk) 22:17, 4 February 2016 (UTC)
- Adding more memory will generally be MUCH MUCH more effective than using a faster (SSD) disk. Even if the memory in the SSD is as fast as RAM (and it almost certainly isn't), the overhead in passing a request through the virtual memory system and I/O system, sending it over the (slow) interface to the disk, getting the response back over the same slow interface, getting it back up the software stack to the requesting thread, waking up and context-switching into the thread, is going to be many times slower than a single memory access. Now consider that nearly every CPU instruction could be doing this (which is pretty much the definition of thrashing). Moving to a SSD disk will certainly improve performance over a mechanical disk, but it will be nowhere near the performance increase you would get by increasing the size of physical memory to be larger than the working set of your processes. Mnudelman (talk) 22:24, 4 February 2016 (UTC)
- I'm not as familiar with OSX or Linux, but I get the impression moving the swap file is not quite as straightforward as on Windows. If I were you, I would consider getting a bigger SSD instead, doing a full backup to an external drive and then restoring your entire system to the SSD. That way you will see improvement benefits above and beyond what you'd get from just sticking the swap file there. I personally use the Samsung 840 (now the 850); they're stalwarts in the reviews for the "best bang for the buck" category. Vespine (talk) 22:30, 4 February 2016 (UTC)
- A factor of 20 is not really much for thrashing. The really horrible cases are in the thousands. The most important difference between physical HDD and SSD is the access time, so if there are lots of accesses, the SSD could lower a factor of 20 well into the single digits. If there are many threads working, e.g. 45 threads, the HDD could be busy delivering data from 45 places "at the same time". SATA NCQ helps here but cannot eliminate the physical seeks, just order them in a time-saving way. On SSDs, NCQ could well eliminate part of the communication loop, because all threads could run until they need paged data, and then the OS could ask for the whole wad in one big query. I'm not sure how that would work out in practice, though. The savings will probably be bigger with the HDD, but still not enough to catch up with a quality SSD without queueing. - ¡Ouch! (hurt me / more pain) 19:03, 6 February 2016 (UTC)
- Kingston 8GB 1600MHz DDR3 (for MacBook Pro, but I suspect thats not too atypical) is a bit over US$40, so for 32 GB you'd pay US$ 170 or so. As others have said, an SSD is better than a mechanical drive, but RAM is so much better than SSD that it's not even funny. --Stephan Schulz (talk) 22:41, 4 February 2016 (UTC)
- Well, I agree RAM is "better than SSD", but ask anyone what the best improvement to their computers has been in the last 10 years and almost universally it's getting an SSD. The other thing you COULD consider is getting 16GB in 2 sticks and replacing 2 of your sticks for a total of 24GB. It is "more" usual to have 16 or 24, but having 2 matching but different pairs should still work fine. If that's enough to push you "over the line" you can leave it at that; if it's not, you can always get another 16GB pair later. Vespine (talk) 22:46, 4 February 2016 (UTC)
- Yes, that does change things in the cost/value analysis; I forgot not all RAM chips have to match (anymore)? SemanticMantis (talk) 00:16, 5 February 2016 (UTC)
- As a general concept it's true that SSD is a big improvement to performance but only if the bottleneck is the disk in the first place. That's not what the OP is describing; he is in a THRASHING scenario. Improving disk speed is a terrible way to address thrashing.
- Also consider that when you need to swap to read a single word, you have to free up some RAM first, which probably means WRITING a PAGE of memory to the SSD, then reading another PAGE of from the SSD back to RAM. Depending on the page size, this will be hundreds or thousands of times slower than it would be to read the word from RAM if swapping is not needed, even ignoring other overhead like disk/interface speed and context switching. Mnudelman (talk) 22:49, 4 February 2016 (UTC)
- Improving disk speed is a terrible way to address thrashing. Except it's not a "terrible" way at all; Microsoft recommends sticking your swap file on an SSD if you can. We're getting superlatives mixed up. YES, more RAM is better than a faster disk, but a faster disk is NOT a "terrible" upgrade. Vespine (talk) 23:27, 4 February 2016 (UTC)
- My first reply was actually going to be "it might be hard to predict how much improvement you will see upgrading to an SSD", but then I read the first line again, "WILL AN SSD GIVE NOTABLE IMPROVEMENT IN A THRASHING SCENARIO", and my answer, in short, which I still stand by, is yes, yes it will. More RAM will probably be better, but getting an SSD IS also just a GOOD upgrade overall. Vespine (talk) 23:29, 4 February 2016 (UTC)
- Yes, an SSD for virtual memory will be faster than a HD, but not a whole lot faster. Definitely maximize your motherboard's RAM first. Bubba73 You talkin' to me? 23:56, 4 February 2016 (UTC)
- Right, so I know more RAM will be the best way to solve the problem. But given a fixed budget, I'm not clear on how to optimally spend it. For example, I can get 8 more GB of ram for around $110 [1]. That will give me more room, but may well put me straight back to thrashing at 52x52x15k, to continue the example numbers from above. I can get 128GB SSD for $60 [2]. For a certain fixed size computation, that will not give me as much speed increase as more RAM would. But, if I'm hitting a RAM wall regardless, then the increased read/write speed should help me out with a lot of thrashing issues, no? It's not like it's thrashing so bad it never stops, or crashes the computer. Just puts me into a much higher exponent on time complexity. To make things up: say I was at O(n^1.1) up to a certain size N. For M>N, the thrashing puts me at O(M^3). With 8 (or 16)GB more RAM, thrashing may set in at M2=M+K, but I'm still at O((M+K)^3) after that. With a new SSD, I thought maybe I could get to O(M^2) for M>N, up to some larger cap on the size of the SSD. (yes I know this is not exactly how time complexity works, I'm just speaking in effective, functional terms of real-world performance on a certain machine, not analysis of algorithms. For that matter, nobody has yet suggested I just get better at managing my computing resources and being more clever at organizing things efficiently, but rest assured I'm working on that too :) SemanticMantis (talk) 00:10, 5 February 2016 (UTC)
- In terms of bang for your buck, have you looked into just buying computing resources from a "cloud" provider instead of running things on your personal computer? --71.119.131.184 (talk) 00:33, 5 February 2016 (UTC)
- (EC)And THERE is the rub. This is a complex enough problem that it might be hard or impossible to give you a good answer. Will an SSD give you an improvement? yes. Will getting 8GB more RAM give you an improvement? yes. Which will be "better" or more "worth while"? This is going to be very hard to predict without actually just TRYING it. If I were you, I think upgrading your system disk to an SSD is just a "good upgrade" to do regardless AND it has the added benefit that your thrashing will probably improve somewhat. How big is your system disk? If you get 8GB more ram and your problem doesn't improve (because you need 16 or 32 more) , then it WILL be a waste, if your problem doesn't improve much with an SSD then at least you will have speedier boot times and an overall performance increase. Vespine (talk) 00:36, 5 February 2016 (UTC)
- Right, I think I'm leaning toward SSD because it is cheaper and will certainly help at least a little bit in almost all cases, even just normal tons-of-applications-open scenarios. I thought about buying/renting cloud resources but that stuff is fiddly and annoying to me, plus this shouldn't really be out in the world until it is published. SemanticMantis (talk) 15:45, 5 February 2016 (UTC)
- I always maximize my RAM, even if it means taking out some sticks. I usually get it from Kingston or Crucial. Besides my main computer, I have three computers that I use for numerical work. Two are Core i5s which I bought cheaply and bumped up to the maximum 16GB RAM. One of them is an i7 with 16GB of RAM and an SSD. My main computer is an i7 with an SSD and 32GB.
- I did a speed test of sequential access, my SSD vs. my HD: the SSD does about 395MB/sec whereas the HD does 175MB/sec, so the SSD is 2.25x as fast. (Of course, random access will show a much larger benefit to the SSD.) So I think swapping to an SSD instead of a HD will be about twice as fast. I think you will probably be better off making the RAM as large as possible first. Bubba73 You talkin' to me? 00:41, 5 February 2016 (UTC)
- Page file access tends to be highly non-sequential, so you should see a far higher gain than that from the SSD in theory. Surprisingly, I can't find any SSD vs HDD paging benchmarks online.
- You would probably get large gains from using explicitly disk-based algorithms that are tuned to the amount of physical RAM in the system, instead of relying on virtual memory. But that is a lot of work and programmer time is expensive. -- BenRG (talk) 01:06, 5 February 2016 (UTC)
- Yep, but I'm the only "programmer", and too lazy/unskilled for that kind of optimization, and I have bigger fish to fry :) SemanticMantis (talk) 15:45, 5 February 2016 (UTC)
- According to Solid-state drive, random access times are about 100 times faster for SSD than for HD, and data transfer speeds for both are within one order of magnitude (in both cases, there are huge differences for different models). But random access time for the SSD is still around 0.1ms. Random access time for RAM (assuming it is not cached, and assuming it's not pre-fetched) for current DDR3 SDRAM is about .004 μs, or about 25000 times faster than the SSD just to access a word - and that ignores all the additional overhead of writing back dirty pages, updating the MMU tables, and so on. So yes, an SSD is a good upgrade. I like SSDs, and I have SSDs exclusively in all my machines (even the ones I paid for out of my own pocket, and even at a time when a 1 GB SSD set me back a grand). It will certainly improve paging behaviour. But it is a very poor second best if the system is really thrashing. --Stephan Schulz (talk) 12:02, 5 February 2016 (UTC)
- I see your point but beg to differ.
- If the SSD can save 9 hours of runtime on the problem I have and the RAM upgrade can save 10 but is more expensive, the SSD can be good enough and even overall better. For example if it saves 30 seconds of boot time (optimistic but not unheard of), the SSD would overtake the RAM after 120 boot cycles. - ¡Ouch! (hurt me / more pain) 19:08, 6 February 2016 (UTC)
- Ok, thanks all. I know this is too complicated to give one simple answer of which option is best; I mostly wanted your help in framing some of the pros/cons, and some more current estimates of price and performance. SemanticMantis (talk) 15:45, 5 February 2016 (UTC)
- I'm actually really curious which direction you will go and how it works out for you. Vespine (talk) 22:02, 8 February 2016 (UTC)
February 5
How I rename system files on pcbsd?
How do I rename system files on pcbsd? I probably need to use the root password (like when updating or installing printers, which asks for it), but I can't find a way to "enter root mode" to rename the files needed to change some boot file stuff I want to change. — Preceding unsigned comment added by 201.79.69.164 (talk) 10:05, 5 February 2016 (UTC)
- mv? try "man mv" for instructions. Maybe you need the "sudo" command as well. Or, ehm, "rename"? To be honest I have never used pcbsd. The Quixotic Potato (talk) 11:34, 5 February 2016 (UTC)
- PCBSD uses KDE as the desktop manager by default. If you are logged in as yourself and not root, you won't be able to rename the system (root) files through the GUI. You can do it in one of three ways: You can logout and login as root. You can open a shell and run Dolphin as root (sudo dolphin). You can open a shell and mv (sudo mv oldname newname). 209.149.115.90 (talk) 14:30, 5 February 2016 (UTC)
- Thanks for the help, I was able to rename files with this info.201.79.72.126 (talk) 15:40, 5 February 2016 (UTC)
- Agreed...and 'sudo' is only available if your account is a member of the 'sudoers' group...which may not be the case for your regular login account. If you know the root password, you can use 'su' (hit return, enter password, hit return) to become root - and then just 'mv' the file or add yourself to the sudoers group so you can use sudo for this kind of thing in the future.
- I've gotta say that if you need to ask this question, then you're probably not sufficiently experienced to start renaming system files! It's very, very easy to accidentally 'brick' your system so it won't even reboot! There is a reason these files are locked up so only 'root' can change them!
How do I change the default boot order on PCBSD?
I tried to rename files in the grub.d directory to 10_RestOfTheFileName, or 30_Rest_of_name, etc., like the readme told me, but nothing worked. The default boot is pcbsd, and I want the boot loader to start with Windows XP selected (and so boot into it if no key is pressed). 201.79.72.126 (talk) 15:40, 5 February 2016 (UTC)
- Given this (and the previous) question, you might be better off using a GUI tool to change the GRUB settings - there is "Grub customizer", for example. Not sure if it works with PCBSD - but I see no reason why not.
- SteveBaker (talk) 15:47, 5 February 2016 (UTC)
February 7
Is there any computational method that's neither a numerical method, nor a symbolic method?
Is there any computational method that's neither a numerical method, nor a symbolic method, nor a combination of both? I cannot imagine another possibility, but my lack of imagination is definitely not a proof.--Llaanngg (talk) 00:42, 7 February 2016 (UTC)
- What do I get when I divide one by three?
- Numerically, I get 0.33333333....
- Symbolically, I get 1/3 (read: "one divided by three").
- Verbally, or conceptually, I simply get: a third.
- HOTmag (talk) 01:03, 7 February 2016 (UTC)
- Verbally = symbolically. --Llaanngg (talk) 01:52, 7 February 2016 (UTC)
- @Llaanngg:: 1/3 is "one divided by three" (just as 1/x is "one divided by ex"): it's symbolic, i.e. it contains some symbols, e.g. "divided by" and likewise. It's not the same as "a third", being the conceptual computation.
- Please note that not every computation can be made conceptually, just as not every computation can be made symbolically: For example:
- The solution of the equation 3x=1 can be reached, both symbolically - as 1/3 (read "one divided by three"), and conceptually - as "a third".
- The solution of the equation x² = 2 can be reached symbolically - as √2 (read: "square root of two"), but cannot be reached conceptually.
- The solution of the equation x⁵ + x = 1 cannot be reached conceptually nor symbolically.
- Btw, there is also the "geometric computation". For example: the solution of the equation x² = 2 can be computed - not only symbolically as √2 i.e. as "the square root of two" (and also numerically of course) - but also geometrically as the length of a diagonal across a square with sides of one unit of length.
- HOTmag (talk) 07:41, 7 February 2016 (UTC)
- Verbally = symbolically. --Llaanngg (talk) 01:52, 7 February 2016 (UTC)
- Fuzzy logic ? StuRat (talk) 01:54, 7 February 2016 (UTC)
- On one hand "numerical" is a kind of symbolic reasoning. On the third hand, if you can think nonsymbolically, then you can compute nonsymbolically. With yet another hand, graphical calculations are possible, such as Euclidean constructions using compass and straight edge. GangofOne (talk) 02:21, 7 February 2016 (UTC)
- Computable real arithmetic is arguably not numerical (since I think "numerical methods" are approximate by definition) and arguably not symbolic (since it works with computable real numbers "directly", not formulas). -- BenRG (talk) 02:44, 7 February 2016 (UTC)
- Neural networks could be counted as neither. Fuzzy logic might also fit there too, but you could argue that all of these are symbolic, as the computation has to represent something in the problem. Graeme Bartlett (talk) 10:19, 7 February 2016 (UTC)
- Neural networks use numerical methods: the errors in the output converge to a minimum, so the output approaches a numerical value. Fuzzy logic uses symbolic methods, as you've indicated. HOTmag (talk) 10:32, 7 February 2016 (UTC)
- The terms are kinda vague - but I'd definitely want to add "geometrical" to "symbolical" and "numerical". There are some wonderful things that can most easily be visualized geometrically...the dissection proofs of Pythagoras' theorem come to mind here, but there are many good examples out there. SteveBaker (talk) 16:12, 7 February 2016 (UTC)
- Analog computers were once used to solve differential equations. Also even now people use scale models for architecture, hydrology or wind tunnel simulations. Graeme Bartlett (talk) 00:59, 8 February 2016 (UTC)
- Standard digital computers can be understood as doing everything by symbolic methods, including numerical computation; and the way I see the word "computation", that's really the only kind there is. However, you may consider what an analog computer does to qualify as computation (rather than as an alternative method used instead of computation). In that case it would qualify as an answer. --76.69.45.64 (talk) 23:13, 7 February 2016 (UTC) (by edit request) ―Mandruss ☎ 06:45, 8 February 2016 (UTC)
Scraping of .asp?
How can I scrape a page accessed with www.address.org/somescript.asp? It has two fields (name of artist, works) and two buttons (search, reset). How could I tell a program to go to the name-of-artist field, pick a name from a list that I have stored, press search, and retrieve and store the resulting page? --Scicurious (talk) 16:36, 7 February 2016 (UTC)
- wget has parameters to fill in forms. Also if all the names are linked or are findable on a query, you may be able to do a recursive query to get all the pages. Otherwise you could make a list of URLs and pass that to wget. Graeme Bartlett (talk) 00:54, 8 February 2016 (UTC)
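If a small script is an option, here is a rough sketch using Python's requests library. The form field and button names ("artist", "works", "search") are guesses for illustration; the real names have to be read from the page's HTML source, and whether the form submits via GET or POST also depends on that source:

    import requests

    SEARCH_URL = "http://www.address.org/somescript.asp"
    artists = ["First Artist", "Second Artist"]  # your stored list

    for artist in artists:
        # Assumes the form posts its fields; use requests.get(..., params=...)
        # instead if the form's method is GET.
        response = requests.post(
            SEARCH_URL,
            data={"artist": artist, "works": "", "search": "search"},
        )
        response.raise_for_status()
        filename = artist.replace(" ", "_") + ".html"
        with open(filename, "w", encoding="utf-8") as f:
            f.write(response.text)

If the page turns out to be an ASP.NET WebForms page, it may also require hidden fields such as __VIEWSTATE to be sent back, in which case it is easiest to GET the page first, copy those hidden values, and include them in the POST data.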
- Go through the whole process once or twice manually. Is there something similar each time, e.g. the button to be clicked is always in the same place, or the text you need is always formatted the same way? If so, you could perhaps use Macro Express to automate the process; it has the ability to control mouse placement (so you could automatically move the cursor to a certain space, for example) as well as merely clicking and pressing keys. Since you have the list of names, you could have it copy/paste from the list. Code for that operation follows my signature. Nyttend (talk) 01:12, 8 February 2016 (UTC)
Extended content
[Save your list in Notepad if you're using Windows, or a comparable no-frills file if you're on a Mac. Place your cursor at the top of the list before starting your macro. Be sure to have each name on a separate line of the file.] <SHIFTD> <END> <SHIFTU> <CTRLD> C <CTRLU> [insert the menu command to activate your web browser window] <CTRLD> S <CTRLU> [insert the menu command to activate your Notepad window] <HOME> ` <END> [insert the menu command for a right arrow key press]
With a macro program like MacroExpress, it's just simulating the keystrokes that you'd be using anyway, so just write down the keys you'd press and have the program press those keys in those orders. Be careful about timing: the computer often takes slight bits of time to load windows, and while this isn't significant when you're doing things manually, it's significant for the macro, which essentially does everything instantaneously. As a result, you'll need to insert slight timing breaks (very rarely will you need anything more than a couple hundred milliseconds) after commands that bring up new windows to ensure that it has time to bring up the window before you have it start performing things in the window. Also, you should use something like Notepad, because it won't insert additional characters, and every character matters in this kind of setting. Things like C are instructions to type whatever you've written, while things within <> characters are instructions to press specific keys instead of writing those letters: CTRLD is push down the control key, CTRLU is let it up, and the same for SHIFTD/U. Since you have a list of names in Notepad, with each name on a separate line, you'll find it helpful to mark which ones you've done. I've told it to place a ` character at the start of each line with an already saved title (after it saves the page, it adds the character before the name, and then goes to the next line, where it's ready to start the next page) because that's an easy way of marking which lines you've already done, and the ` character, being quite rare in normal text, isn't likely to be found elsewhere in the document, so when you're done with the list, you can simply do a find/replace command in Notepad to delete the character, and you won't worry about deleting significant characters. Nyttend (talk) 03:59, 8 February 2016 (UTC)
Time Machine's persistence
My external HD has suddenly become unreliable. (Nothing vital is on it.) It could be some time before I can replace it. I currently have about six months of Time Machine backups. If a year goes by before I replace the flaky drive, will Time Machine throw away what was on it, or keep the last known versions of those volumes? —Tamfang (talk) 21:58, 7 February 2016 (UTC)
- The question is unclear. Time Machine will keep adding backups as long as there is space on the drive; once the drive is full, it will delete the oldest backups to make space. How much room it needs depends on how many changes you have made since the last time it backed up. Does that answer your question? Vespine (talk) 05:27, 9 February 2016 (UTC)
February 8
The Hunting of the Snark
As a young child in the early 1990s, I enjoyed playing a range of little computer games on Grandmother's computer whenever we visited my grandparents; I'm looking for one of them now. It had a title similar to, or identical to, The Hunting of the Snark; you had to find little snark characters in a gridded board (most spaces were empty, a few had snarks, and one had a boojum that ended the game if you found it), presumably findable through some method, but I was young enough that I couldn't find them except by clicking spaces randomly. Can anyone point me to any information about such a game? Google searches produce results mostly related to the namesake original poem, and the game-related things I found were talking about a simple program that you could write in BASIC twenty years earlier, not something that would be sold commercially on par with programs such as Chip's Challenge. Nyttend (talk) 00:42, 8 February 2016 (UTC)
- My memory of that game is from much earlier than the 1990s. It would be more around the early 1980s. The source code was in a magazine or on a floppy included with a magazine. Likely, it was Byte magazine. However, all my memories from the 80s are merged together into a heaping pile of big hair, bright colors, and piles of floppy disks. 209.149.115.90 (talk) 19:49, 8 February 2016 (UTC)
- To clarify, Nyttend, are you describing a graphic game? Given the amount of shovelware that came with PCs in the 90s, it may be that somebody took the basic (as well as BASIC) Snark game and put a rudimentary graphical front end on it. As you mentioned, Google searches are difficult, not least because of the more modern, colloquial meaning of snark. --LarryMac | Talk 20:27, 8 February 2016 (UTC)
- Maybe some variant of Hunt the Wumpus? Some versions had tile graphics [3]. 21:34, 8 February 2016 (UTC)
Google DNS Server
What could be some caveats or cautions about using Google DNS Server (IP address 8.8.8.8) as my DNS server? Privacy issues, maybe? ←Baseball Bugs What's up, Doc? carrots→ 05:20, 8 February 2016 (UTC)
- There are two issues: performance and privacy.
- "The reality is that Google's business is and has always been about mining as much data as possible to be able to present information to users. After all, it can't display what it doesn't know. Google Search has always been an ad-supported service, so it needs a way to sell those users to advertisers -- that's how the industry works. Its Google Now voice-based service is simply a form of Google Search, so it too serves advertisers' needs. In the digital world, advertisers want to know more than the 100,000 people who might be interested in buying a new car. They now want to know who those people are, so they can reach out to them with custom messages that are more likely to be effective. They may not know you personally, but they know your digital persona -- basically, you. Google needs to know about you to satisfy its advertisers' demands. Once you understand that, you understand why Google does what it does. That's simply its business. Nothing is free, so if you won't pay cash, you'll have to pay with personal information. That business model has been around for decades; Google didn't invent that business model, but Google did figure out how to make it work globally, pervasively, appealingly, and nearly instantaneously."
- The question is whether your ISP's DNS servers are worse. Are they selling your information as well? (I am looking at you, AT&T).
- Performance: Most major websites use Content Delivery Networks (Amazon, Akamai, etc.) to serve content. A Content Delivery Network looks up your computer's IP address and directs you to the nearest server. With a public DNS server, the CDN might serve you content from a distant server, and your download speeds will thus be slower than if you use your ISP's DNS server. Google's DNS server information page says:
- "Note, however, that because nameservers geolocate according to the resolver's IP address rather than the user's, Google Public DNS has the same limitations as other open DNS services: that is, the server to which a user is referred might be farther away than one to which a local DNS provider would have referred. This could cause a slower browsing experience for certain sites"
- If you are in Australia, using the US-based Google DNS server means that the "closest" Akamai cache will be chosen as being in the US, and you'll see very slow download speeds as your file downloads over the international link. It's not as bad in the continental US, but it is still slower.
- BTW, wikileaks keeps a list of alternative DNS servers.[6] --Guy Macon (talk) 08:30, 8 February 2016 (UTC)
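As a practical aside, raw lookup latency (though not the CDN-placement effect described above) is easy to measure yourself. Here is a minimal sketch using the third-party dnspython package (2.x API); the ISP resolver address 192.168.1.1 is a placeholder for whatever your router or ISP actually uses:

    import time
    import dns.resolver  # pip install dnspython

    def best_lookup_time(nameserver, hostname="en.wikipedia.org", tries=5):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        times = []
        for _ in range(tries):
            start = time.perf_counter()
            resolver.resolve(hostname, "A")  # repeated queries will mostly hit the resolver's cache
            times.append(time.perf_counter() - start)
        return min(times)

    print("Google 8.8.8.8 :", best_lookup_time("8.8.8.8"))
    print("ISP resolver   :", best_lookup_time("192.168.1.1"))

This only tells you how fast names resolve, not which CDN node the answer points you at, so it complements rather than settles the geolocation question above.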
- That information is somewhat outdated, Google supports an extension which can provide your subnet to the CDN's DNS server so they can provide more accurate resolution [7] and it's been enabled at least for Akamai.
Also, while the quoted part may be from Google, I'm not certain your interpretation is correct even ignoring the extensions. Talking about a US-based Google DNS server from Australia is confusing since both 8.8.8.8 and 8.8.4.4 are anycast addresses. In NZ the servers responding are generally in Australia (you can tell by the latency). I didn't test the IPv6 servers but I'm pretty sure they're the same. I suspect this is normally the case in Australia too, since Google will definitely want their Australian servers to be used for Australians and I doubt many Australian ISPs care enough to fight Google; in fact I strongly suspect Google has the clout to resolve any routing/peering disputes which may cause problems. As a home end user, there's not much you can generally do about routing, so most likely you're going to be sent to the Australian DNS servers in Australia. And I strongly suspect the Australian DNS servers will do lookups with the CDNs' name servers specific to the Australian servers. That seems to be what this page is saying [8].
In other words, I strongly suspect that if you're in Australia it's fairly unlikely you'll be connecting to Google's US DNS, and it's also fairly unlikely you'll get US CDNs (unless they're the closest). You may still not get the best CDNs, particularly if they don't support the extension. For example, some ISPs work with CDNs to provide specific servers for their customers. Likewise, I have no idea where Google has DNS servers in Australia; do they have them in both Melbourne and Sydney, for example? I wouldn't be surprised if some CDNs do, which means that if Google doesn't, you may not get the best geographically located servers even in Australia. Obviously in my case, without the extension I'll be getting CDNs in Australia and not NZ even if they exist, and there will be countries where the responding name server is an even worse choice. (It can be complicated, but your assumption should be that if your ISP is remotely competent, their name servers should provide CDNs that give the best routing.)
One final comment: I'm in NZ, not Australia, but one of our only major internet cables also connects to Australia anyway, and I can say things are not nearly as bad as they were 5-10 years ago. I'm using VDSL2, although the cable to my house is a bit crap or far, so I only get about 50mbit/s. I can maximise this even connecting to the US, sometimes even at peak times. (In fact, if you're not connecting to a CDN it's easily possible the US server will be faster than the local one.)
It obviously depends significantly on the ISP and how much international bandwidth they have, and it's possible NZ ISPs tend to have more because there are fewer CDNs (and I'm not sure whether trans-Tasman bandwidth is much cheaper than Californian bandwidth). The SCC is not even close to capacity (and I'm presuming a number of the cables connected only to Australia are similar), so it is only a cost issue. And it can get confusing what you're actually connecting to because of the transparent caching/proxying that many ISPs use. Still, the takeaway message is you shouldn't assume connecting to the US is going to be slower (in terms of bandwidth; latency is obviously going to be higher). Of course, where it does happen, your ISP won't particularly like you wasting their international bandwidth that way. Actually, that's another reason why it's likely they will work with Google to ensure their customers who choose to use Google Public DNS end up connecting to the right server.
P.S. This assumes that the CDN and your ISP rely only on name server lookups to ensure you end up on the right server. If they have a more complicated system, it may be that you will still end up connected to the right server even if your DNS does its resolutions with the CDN's name servers from the wrong location.
- Generally, DNS queries can be logged. When using Google Chrome it does not matter for navigating web pages: the license of Google Chrome gives Google all input you enter into the URL field of the browser. Other programs can be tracked by monitoring their DNS queries. When using a DNS server, you need to trust it; I think you can trust Google. Modifying the DNS entry is also a modification to your computer. Imagine the consequences of a hacked DNS server when doing online banking or giving passwords to the page your browser displays. DNS servers can also be used as a quick way to block (web)servers hosting malware. The DNS entries in your computer and router tell it what "phonebook" to use, and the computer will connect to the returned IP address. --Hans Haase (有问题吗) 11:16, 8 February 2016 (UTC)
External hard drive on Windows 10
I've backed up my files from another computer onto an external hard drive. I've connected the hard drive to Windows 10, but there is no obvious way to access it. How do I extract the files? Theskinnytypist (talk) 19:42, 8 February 2016 (UTC)
- If you just copied the files over it should be a drag-and-drop copy, with the caveat that you may need to take full control & ownership of the folder first as explained here (instructions are for Windows 7, but are valid for Windows 10). If you used a backup/restore application then you might have to use that same application to restore your backup. If you used Windows 7's backup, it has a specific option in Windows 10 for restoring. FrameDrag (talk) 20:45, 8 February 2016 (UTC)
Battery dying issue
Peeps, I'm having a bit of a problem with the Laptop battery that I bought recently.
1) I bought it before/after Christmas. I read the guideline where it stated (in a sentence): "Charge to 100% when it goes to 2% for the first time. For maximum battery life keep the charge up to 70%".
a) I've charged it to 100% as stated by taking it to 2%.
b) I don't really get the time to keep the battery up to 70% then turn it off, because I turn on the Laptop, then work until it goes to 2%, then recharge to 100% while the Laptop stays on, then turn it off for about 15-20 mins, then turn it on again. I do take the occasional breaks, e.g. when I'm watching TV or eating, sleeping, showering or when I go out...
c)The battery is dying like an "idiot"!
2) I've not followed any rules whatsoever with my other battery that came with the computer and it lasted four to four and a half years.
Now, I'm confused and worried about how the current battery is dying; it's already at 17%. What do you guys suggest I should do? Note: I have a warranty for 6 months too...
Apostle (talk) 22:39, 8 February 2016 (UTC)
February 9
faster cube root calculation?
Is there a way to calculate the real cube root of a real number that is faster than the log and exponential method? Bubba73 You talkin' to me? 04:48, 9 February 2016 (UTC)
- Sure, there are loads of options... what are your problem constraints? How accurate do you need to be? Can you use look-up tables for some or all calculations? Do you know that the input is centered around a particular value (suitable for a truncated Maclaurin series or other approximate method)? May we assume you have conventional floating-point computer hardware, or do we need to work with some other type of machine? Are we allowed to parallelize calculation work?
- My first instinct was to formulate the cube root of k as a zero of the function f(x) = x³ − k, and then to apply (essentially) Newton's method to find the zero. You have the advantage of knowing, analytically, that the function is monotonic and that there is a single zero crossing; so you can use that fact to your advantage. This is, basically, the Fundamental Theorem of Algebra.
- Next I referred to my numerical analysis book, Numerical Analysis, Burden and Faires, which suggested applying Horner's method to accelerate convergence of Newton's method. This book actually provides code examples (in Maple), and works the numerical method for a few examples. In this specific case, I'm not sure it will make any difference, as most of the polynomial coefficients are zero. There are a lot of similar dumb tricks named for smart mathematicians; each one can shave off a couple of adds and multiplies. This probably won't actually change the execution time in any significant way on modern computer hardware.
- These are appropriate accelerations if you are solving numerically using an ordinary type of computer; but if you're working with weird computational equipment - like, say, using constructive geometry to analytically solve for the root - there may be faster ways of finding the answer.
- Nimur (talk) 05:22, 9 February 2016 (UTC)
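For concreteness, here is a minimal sketch of the Newton iteration described above (x ← x − (x³ − k)/(3x²)) in Python; the starting guess and tolerance are arbitrary choices:

    def cbrt(k, tol=1e-15, max_iter=100):
        """Real cube root of k via Newton's method on f(x) = x**3 - k."""
        if k == 0.0:
            return 0.0
        sign = 1.0 if k > 0 else -1.0
        a = abs(k)
        x = a if a > 1.0 else 1.0          # crude but safe starting guess
        for _ in range(max_iter):
            x_next = x - (x * x * x - a) / (3.0 * x * x)
            if abs(x_next - x) <= tol * abs(x_next):   # relative convergence test
                x = x_next
                break
            x = x_next
        return sign * x

In practice, math.pow(abs(k), 1.0/3.0) or numpy.cbrt is already heavily optimised, so it is worth benchmarking against the library call before hand-rolling anything.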
- Here is a machine architecture enhancement to enable hardware-accelerated Taylor series expansion of the square root, for an IEEE-754 floating point multiply/divide unit: Floating-Point Division and Square Root using a Taylor-Series Expansion Algorithm, (Kwon et al, 2007). If you can follow their work, you can see how, by extension, one could build the same hardware for the cube-root polynomial expansion.
- Is that kind of hardware worth the cost? Well, only if you really need to compute a lot of cube roots, and even then, only if you can convince the team who builds your floating-point multiplier into silicon. Most mere mortals never get to provide such feedback to their silicon hardware architect. But, once this type of enhancement is built and done, you get to compute cube roots in "one machine cycle," for the arbitrarily-defined time interval that is "one machine cycle." Nimur (talk) 16:57, 9 February 2016 (UTC)
Smart device flasher boxes/dongles
I know this sounds illicit or illegal, but upon seeing cracks, loaders or dongle emulators for certain software used on service boxes for mobile phones, it had me wondering if the dongles or boxes in question aren't any different from the ones used on high-end software like Pro Tools or Autotune for licencing enforcement, or if they do indeed contain actual circuitry to carry out any operation like removing SIM locks on phones and the like. Blake Gripling (talk) 05:31, 9 February 2016 (UTC)
Hard drive, drivers, or faulty SATA cables may cause a drive to crash
Before you panic when your HDD makes strange noises, you can run a check-disk utility, reload the HDD drivers, or replace the SATA cable.
Buying digital cameras compatible with legacy analog lenses
My father has hundreds of dollars' worth (over $1000) of cameras with special lenses bought in the '70s and '80s. He keeps asking me why they don't sell digital backs that are compatible with the fronts he has. My answer is prohibitive economics. (I can explain the economics, I just don't know the mechanics.) But I would like to confirm that there isn't such a thing as what he is asking for: a way to take digital photos with his old lenses. Does such a thing exist? Thanks. μηδείς (talk) 18:50, 9 February 2016 (UTC)
- The only technical reason I can think of is that older lenses would be manually adjustable, which interferes with a digicam's ability to do things like autofocus. StuRat (talk) 19:04, 9 February 2016 (UTC)
- In the Nikon world, many lenses with the Nikon F-mount (which was introduced in 1959) can be used on even their most modern digital SLR cameras, although there are limitations and some incompatibilities. I don't know what the situation is for other camera or lens manufacturers, however the first sentence in the History section of the F-mount article gives a clue - "The Nikon F-mount is one of only two SLR lens mounts (the other being the Pentax K-mount) which were not abandoned by their associated manufacturer upon the introduction of autofocus, but rather extended to meet new requirements related to metering, autofocus, and aperture control." Both cameras and lenses have had more and more functionality added over the years. An older Nikkor lens on a Nikon D90 likely would not support autofocus or aperture setting. Taking another approach, there have been various attempts to create a digital back for film SLRs, but none seem to have really taken off - search "Digipod" on your favorite search engine for one of the most recent attempts. --LarryMac | Talk 19:07, 9 February 2016 (UTC)
- (edit conflict) Yes, that's perfectly possible. You will just need a lens adapter to be able to physically mount the lens onto the camera body. Note that, as StuRat says, you'll miss out on most of the focussing tricks that modern DSLRs offer, but it will certainly work and you'll be able to take pictures. There's a guide here that applies specifically to Canon EOS bodies, but the principles are the same for any manufacturer. - Cucumber Mike (talk) 19:13, 9 February 2016 (UTC)