Talk:List of Nvidia graphics processing units
WikiProject Computing | Start-class
This article may be too technical for most readers to understand.
Imprecisions in OpenGL version note
I don't like this paragraph at all. I'm working with GLSL for my degree project, and the paragraph could mislead the reader. For each version, "GLSL x.x" should be replaced by "at least GLSL x.x", because OpenGL does not always limit the GLSL version. In addition, OpenGL 1.5 supports at least GLSL 1.0, as stated in the spec. I have even tried GLSL 1.1 with OpenGL 1.5 and it works properly. In fact, it depends more on the graphics card than on the OpenGL version. I will change it if nobody says anything. —Preceding unsigned comment added by Capagris (talk • contribs) 16:23, 1 October 2009 (UTC)
Can we make a section for Chipset GPUs?
I think it's important to have a separate section, since integrated GPUs are a class in their own right. At the very least, desktop and mobile GPUs that are actually IGPs should be clearly marked as such.
p.s. Who put the "first, second, third generation" marketing BS in? —Preceding unsigned comment added by 207.38.162.22 (talk) 15:23, 18 April 2009 (UTC)
Why is this article considered 'too technical'
Why is this article considered 'too technical' and yet the ATi equivalent article, Comparison of ATI Graphics Processing Units, is not? Also, the 7900 GX2 is of course 2 GPUs on one board; in this light, should it not be the TOTAL MT/s and pipes x TMUs x VPUs that are stated, and not the specs of half the card?
- A quick look at the article and it didn't seem that bad as far as tech speak; after all, you are talking about a comparison of GPUs. Either you keep it as it is and maybe add a brief explanation of terms, or you dumb it down to a "this is faster than that, and that is faster than this" article. --AresAndEnyo 21:48, 21 December 2006 (UTC)
512MB GeForce 6800 (AGP8X)
Why is this version of the 6800 not listed here? My card, as listed in nVidia's nTune utility is a standard GeForce 6800 chip with 512MB of memory, with clock speeds of 370MHz Core and 650MHz Memory. These were the factory clock speeds I received the card with; it was purchased from ASUS. --IndigoAK200 07:34, 27 November 2006 (UTC)
This seems like a comparison of graphics cards not of GPU chips ... and in that vein, why is there no mention of nVidia's workstation products (Quadros)?--RageX 09:00, 22 March 2006 (UTC)
This article especially needs an explanation of the table headers (eg. what is Fillrate? What is MT/s?) ··gracefool |☺ 23:56, 1 January 2006 (UTC)
- While I agree that an explanation would be nice, I have to ask why such a page is needed. It seems to have unexplained inaccuracies, or at the very least questionable info. As cards are released, it will need constant maintenance. Not only that, but 3rd party manufacturers often change specs, so while a certain nVidia card might have these specs, a card you buy might not. I'm certainly willing to clean up this page, but I want some input on how valuable it is to even have it in the first place before I go to the trouble. --Crypticgeek 01:45, 2 January 2006 (UTC)
- It's a handy reference. If you can find another one on the 'net (I'm sure there's a good, accurate one somewhere) we could think about replacing this with a link to it. Note that it is a comparison of GPUs, not cards, so 3rd party manufacturers don't matter. New GPUs aren't released that often. ··gracefool |☺ 22:39, 11 January 2006 (UTC)
NVIDIA's website indicates that the 7300 GS has a 400 MHz RAMDAC. Is there a reason that everyone is changing that to 550 MHz? Where did you acquire that information? --bourgeoisdude
- See RAMDAC for explanation. RAMDAC frequency determines maximum possible resolution and/or refresh rate. ONjA 16:52, 24 January 2006 (UTC)
The process fabrication (gate length) should be listed in nm instead of μm; the fractional values are quite cumbersome (e.g. 0.13 μm is simply 130 nm). Besides, the industry more commonly uses nm than μm now that we see processing units manufactured on a 45 nm process being announced.
The bus column does not list PCI for many of the cards in the FX family and the GeForce 6200. I suspect the PCI bus has also been wrongly excluded from the MX family. I will add PCI as one of the bus options for the 6200 and 5500, as I am sure these two cards support PCI. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:45, 10 May 2006 (UTC)
I have made the 6200 PCI a separate row because of its differences from the other 6200 versions (it boasts an NV44, not an NV44a core, yet doesn't support TurboCache). I have named this section the 6200 PCI. Please correct me if you think this isn't suitable. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:52, 10 May 2006 (UTC)
Open GL?
Wouldn't it be apropos to include a column for the highest version of OpenGL supported? Not all of us use Windows. :) OmnipotentEntity 21:53, 22 June 2006 (UTC)
memory bandwidth
Bandwidth was calculated incorrectly. I've changed it to use GB/s, where 1 GB/s = 10^9 bytes/second. To properly calculate bandwidth in GiB/s it's (bus width in bits * effective memory clock in Hz) / 8 (bits/byte) / 1073741824 (bytes/GiB).
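For clarity, a minimal sketch of both conversions in Python (the helper names are mine; the 7600 GT figures come from the example table further down this page):

    def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
        # decimal gigabytes: 1 GB = 10^9 bytes
        return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

    def bandwidth_gib_s(bus_width_bits, effective_clock_mhz):
        # binary gibibytes: 1 GiB = 2^30 = 1073741824 bytes
        return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 2**30

    # GeForce 7600 GT: 128-bit bus, 1400 MHz effective GDDR3 clock
    print(bandwidth_gb_s(128, 1400))   # 22.4 GB/s
    print(bandwidth_gib_s(128, 1400))  # ~20.9 GiB/s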
NV17, NV34, NV31 and NV36
GeForce4 MX does not have a VPU of any kind. Nvidia's drivers allow certain vertex programs to use the NSR that has been around since the NV11 days, but only if the (very simple) vertex program can be run on the GPU; otherwise it is done by the CPU. http://www.beyond3d.com/forum/showthread.php?t=142
GeForce FX 5200 is a 4 pixel unit / 1 texture unit design, as stated here http://www.beyond3d.com/misc/chipcomp/?view=chipdetails&id=11&orderby=release_date&order=Order&cname= and here http://www.techreport.com/etc/2003q1/nv31-34pre/index.x?pg=2
Updated the note to reflect that NV31, NV34 and NV36 all have only 2 FP32 units, as described here http://www.beyond3d.com/forum/showthread.php?p=512287#post512287
DirectX and NV2x
DirectX 8.0 introduced PS 1.1 and VS 1.1. DirectX 8.1 introduced PS 1.2, 1.3 and 1.4.
Sources: ShaderX,
http://www.beyond3d.com/forum/showthread.php?t=5351
http://www.beyond3d.com/forum/showthread.php?t=12079
http://www.microsoft.com/mscorp/corpevents/meltdown2001/ppt/DXG81.ppt
Thus NV20 was DirectX 8.0, but NV25 and NV28 supported the added ability of PS 1.2 and 1.3 as introduced in 8.1.
VPUs
I've listed any card with a T&L unit as having 0.5 VPUs since it can do vertex processing, but it is not programmable. This also allows better compatibility with Radeon comparisons.
Sheet Change
The sheets are too tall to see the explanation columns and card specs at the same time; if I want to compare, I need to scroll back and forth. Could someone edit the tables to have the column explanations at both the top and the bottom?
Fillrate max (MT/s) for 8800GTS is incorrect
The fillrate listed for each graphics card on both the Comparison of ATI and Comparison of NVIDIA GPU pages is based on "core speed * number of pixel shaders" for discrete shaders, or "core speed * number of unified shaders / 2" for unified shaders.
The fillrate listed would be correct only if the 8800GTS had 128 unified shaders (500 * 128/2 = 32,000) instead of 96. The correct fillrate should be 24,000 (500 * 96/2 = 24,000).
Should this be changed, or do we need a source explicitly stating 24,000 MT/s as the fillrate?
Nafhan 20:44, 24 January 2007 (UTC)
Found a page on the NVIDIA homepage listing 24,000 MT/s as the fillrate for the 8800 GTS, and made the update.
Nafhan 21:21, 26 January 2007 (UTC)
It's all wrong. Fillrate is the number of pixels that can be written to memory, i.e. core speed * number of ROPs; the 8800 GTS would then have 500 * 20 = 10,000 MT/s. To confirm, I ran a benchmark and got "Color Fill: 9716.525 M-Pixel/s".
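To make the competing conventions in this thread concrete, a small Python sketch (helper names are mine; the 8800 GTS numbers, 500 MHz core, 96 unified shaders, 20 ROPs, are the ones quoted above):

    def texture_fillrate_discrete(core_mhz, pixel_shaders):
        # convention used on this page for pre-unified GPUs (MT/s)
        return core_mhz * pixel_shaders

    def texture_fillrate_unified(core_mhz, unified_shaders):
        # convention used on this page for unified-shader GPUs (MT/s)
        return core_mhz * unified_shaders / 2

    def pixel_fillrate(core_mhz, rops):
        # peak pixels actually written to memory (MP/s)
        return core_mhz * rops

    # GeForce 8800 GTS (G80)
    print(texture_fillrate_unified(500, 96))  # 24000, matches Nvidia's product page
    print(pixel_fillrate(500, 20))            # 10000, matches the Color Fill benchmark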
GeForce4 MX4000
The graphics library version for this card is given as 9 in this entry, which is not true; it does not even fully support 8.1. Proof: http://translate.google.com/translate?hl=en&sl=zh-TW&u=http://zh.wikipedia.org/wiki/GeForce4&sa=X&oi=translate&resnum=3&ct=result&prev=/search%3Fq%3Dnvidia%2BNV18b%2Bengine%26hl%3Den%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26sa%3DG —The preceding unsigned comment was added by Acetylcholine (talk • contribs) 18:22, 24 February 2007 (UTC).
PCX
The PCX 4300, PCX 5300, PCX 5750, PCX 5900, and PCX 5950 need to be added.
New Columns
Hi,
There are at least two very important values missing: the vertex throughput and the power consumption. The fillrate does not say much today; the overwhelming fillrate is mostly used for anti-aliasing, and in my opinion it is no criterion for buying a new GPU.
As for me, I want to compare my current hardware to those that I might buy. Take this for example:
Model | Year | Code name | Fab (nm) | Bus interface | Memory max (MiB) | Core clock max (MHz) | Memory clock max (MHz) | Core config | Fillrate max (MT/s) | Vertices max (MV/s) | Power consumption est. (W) | Bandwidth max (GB/s) | Memory bus type | Memory bus width (bit) | DirectX | OpenGL | Features
GeForce FX 5900 XT | Dec 2003 | NV35 | 130 | AGP 8x | 256 | 400 | 700 | 3:4:8:8 | 3200 | less than 356, more than 68, maybe 316 | ? | 22.4 | DDR | 256 | 9.0b | 1.5/2.0** |
GeForce 7600 GT | Mar 2006 | G73 | 90 | PCIe x16, AGP 8x | 256 | 560 | 1400 | 5:12:12:8 | 6720 | 700 | ? | 22.4 | GDDR3 | 128 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, Dual Link DVI
GeForce 7900 GS | May 2006 (OEM only), Sept 2006 (Retail) | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 7:20:20:16 | 9000 | 822.5 | ? | 42.2 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
GeForce 7900 GT | Mar 2006 | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 8:24:24:16 | 10800 | 940 | ? | 42.2 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
GeForce 7950 GT | Sept 2006 | G71 | 90 | PCIe x16 | 256, 512 | 550 | 1400 | 8:24:24:16 | 13200 | 1100 | ? | 44.8 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, HDCP, 2x Dual Link DVI
You can find the estimated power consumption at http://geizhals.at/deutschland/?cat=gra16_256, but I believe it is not allowed to take it from there...
Does anyone know where to get real tech specs from Nvidia?
JPT 10:02, 2 March 2007 (UTC)
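As a rough cross-check for a "Vertices max" column, a Python sketch under the assumption that each vertex unit of this generation retires one vertex every four clocks (an assumption on my part, but one that reproduces the 7-series figures in the table above):

    def vertices_mv_s(vertex_units, vertex_clock_mhz, clocks_per_vertex=4):
        # peak vertex throughput in MV/s; clocks_per_vertex is an assumption
        return vertex_units * vertex_clock_mhz / clocks_per_vertex

    # GeForce 7950 GT: 8 vertex units at 550 MHz
    print(vertices_mv_s(8, 550))  # 1100.0, matching the table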
It would be very helpful to add columns for connection types, both motherboard (PCI, PCI Express, PCI Express 2.0) and video (DVI, HDMI, S-Video, VGA). The information must exist somewhere, but is nearly impossible to find. I recently bought a PC from one of the 'big 3,' and the graphics card does not have the promised S-video output; I don't think the sales staff lied to me, it's just that the personnel who interact with customers are limited to marketing blandishments like "this one is for games and that one is for word processing." I think many people choose cards based on what will connect to the equipment they already have, especially where some formats are difficult to convert to others, so making that information accessible would help a lot.TVC 15 (talk) 18:13, 16 July 2008 (UTC)
Different Versions?
There are models that have additional suffixes (e.g. 7600 GS KO). Should we add entries for these cards, or explain what the suffixes mean on this page? Otherwise this is a fantastic reference page. Thanks everyone!
66.194.187.140 18:53, 1 April 2007 (UTC)Scott
Layout
I've changed the layout back to how it was a week or so ago, keeping the desktop graphics cards together and the laptop cards together - it is far easier to compare cards this way, as the Go series is not really comparable to the desktop range anyway. Also - what is the difference between the 7950GX2 and the 7900GX2? They use the same core running at the same clock speeds; in fact the only difference apparent from this article is the date of release, and since the earlier one was OEM, it implies that they are the same card! Yazza 18:26, 21 May 2007 (UTC)
- The 7900GX2 and 7950GX2 appear to basically be the same thing. As stated in the table, one was only available as part of an OEM system while the other was retail. Here is an article that talks about both of them: [1] VectorD 09:01, 22 May 2007 (UTC)
DirectX
DirectX 8.1 introduced features supported by NV25/NV28 in the form of Pixel Shader 1.3 (with VS 1.1 carried over from DirectX 8.0). DirectX 9.0 contained support for the extended shader model 2 supported by NV3x (HLSL targets ps_2_a and vs_2_a). The DirectX section and the relevant GPU sections have been modified.
Latest video card?
I would like to inquire about the latest video card. Why is the GeForce 8800 not listed yet? If I am not mistaken, this card is already available in the USA. I got the information from the latest edition of PC Gamer, September 2007. --Siva1979Talk to me 08:45, 20 July 2007 (UTC)
- Double-check this article; the 8800 series are indeed listed. Coldpower27 12:33, 20 July 2007 (UTC)
- Oh yes! My mistake! --Siva1979Talk to me 08:28, 21 July 2007 (UTC)
7500 LE
Could an expert please add the 7500 LE; it's missing. Tempshill 16:41, 19 August 2007 (UTC)
- NVIDIA GeForce 7500E is missing, also. I wonder if these two are similar enough to ignore the "LE" or "E". Brian Pearson (talk) 06:00, 11 June 2008 (UTC)
12 pixel per clock claim on Quadro FX
The recent NVIDIA Quadro FX datasheets boast a 12-pixel-per-clock rendering engine across all product ranges, even though many of these products do not have 12 pixel/vertex shaders, or even 12 raster operator engines, or even generate 12 pixels per clock. Does anyone know what the statement really means? Jacob Poon 23:08, 20 September 2007 (UTC)
Error in Tesla table?
The Tesla table lists "Pixel (MP/s)" in the Memory column. I think this is supposed to be "Bandwidth". Can anyone confirm and fix if necessary? Anibalmorales 20:24, 11 October 2007 (UTC)
Power
I think it would be good to add the TDP when that's known.-- Roc VallèsTalk|Hist - 17:11, 25 October 2007 (UTC)
- Agreed, was just about to suggest the same thing actually!--81.215.13.145 (talk) 10:25, 11 January 2008 (UTC)
First off, I'm glad you added the TDP.
Secondly, I think the numbers need to be checked. This site has a pretty comprehensive breakdown of the power requirements of different ATI (is that a swear word here?) and NVidia GPUs:
http://www.atomicmpc.com.au/forums.asp?s=2&c=7&t=9354&p=0
The 9600 GT on the wiki states a TDP of 92 watts, while the other site claims 61 watts. I wouldn't be surprised if the wattage is lower, as the 9600 has a smaller die and fewer transistors.
...yes, I know the 9600 GT has a slightly higher core and shader frequency, but it has about half the number of shaders.
Also, the TDP varies with memory: a single TDP value is given, when a card often comes in 256, 512 and 1024 MB variants that draw different amounts of power. —Preceding unsigned comment added by 206.191.62.18 (talk) 13:12, 22 July 2008 (UTC)
8300 GS?
Where is the GeForce 8300 GS? —Preceding unsigned comment added by 201.66.31.220 (talk) 07:05, 21 November 2007 (UTC)
9 series
On the subject of which version of DirectX this video card will use: people keep changing my edit of "10" to "10.1". From http://en.wikipedia.org/wiki/GeForce_9_Series , if you check source #1 of that page, it is an old article from DailyTech stating which version of DirectX the card will use, but if you check source #4, you'll see that the source DailyTech quoted actually stated that the card will use DirectX 10.0, not 10.1. Obviously DailyTech made a typo. To reinforce that the chip only supports DirectX 10, please check source #5 of http://en.wikipedia.org/wiki/GeForce_9_Series , which contains a full review of the card. I will change it back to "10" to reflect my findings. If there is any new information regarding the card, please change it to reflect that information, and please cite a source. Baboo (talk) 06:35, 27 January 2008 (UTC)
- It seems the person who did the editing also changed OpenGL to version 3, which does not currently exist, with no source supporting this change. I reverted. Baboo (talk) 06:44, 27 January 2008 (UTC)
Isn't there supposed to be a 9800 GTS? —Preceding unsigned comment added by 71.104.60.85 (talk) 19:11, 11 February 2008 (UTC)
The 9600 GT is already launched. The 9800 GX2 will be launched in March followed by the 9800 GTX and the 9800 GT around the end of March and the beginning of April. The 9600 GS will come out in May. The 9500 GT will be launched in June while the 9500 GS will launch in July. I can't confirm the 9800 GTS... (Slyr)Bleach (talk) 01:56, 24 February 2008 (UTC)
9900gtx
[2] Yeah, I'll modify the TMUs to 128, because it is a single chip with "dual G92b like" cores.
9-series cards
Can someone tell me why they removed my edits on the 9 series? The 9600 GSO has been out for a few days now (check the Nvidia site for specs), but when I added it, as well as details for the 9800 GT (early specs are out on this card), they were edited out. I've put them back up now. Sure enough, the specifications on the 9900 GTX/GTS are a little speculative, but the specs for the 9600 GSO are rock solid; I just need to verify that it has 12 ROPs like the 8800 GS. I put the 9800 GT specs (early) up too; I don't know why no one added this card sooner. There has been discussion about, and specs for, the 9800 GT for a while, though I've yet to see anything concrete about the 9800 GTS. —Preceding unsigned comment added by 78.148.132.151 (talk) 09:51, 6 May 2008 (UTC)
Core Config
All ROP numbers where ROPs > pixel units are wrong. A card should not have more ROPs than pixel pipelines, because a card can't render more pixels than it is processing. Further, IIRC the FX 5800 and FX 5900 can issue 8 pixel ops if no Z test is done. Finally, there needs to be consistency in differentiating cards with no vertex units at all from those that have a fixed-function vertex unit. Both are 0 right now, but it's a rather significant difference.
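A tiny Python sketch of the consistency check being proposed, assuming the article's "Vertex shader : Pixel shader : Texture mapping unit : Render output unit" core-config syntax (the helper name is mine):

    def rop_count_plausible(core_config):
        # Parse a "VS:PS:TMU:ROP" string and flag entries that claim
        # more ROPs than pixel pipelines.
        vs, ps, tmu, rop = (int(x) for x in core_config.split(":"))
        return rop <= ps

    print(rop_count_plausible("3:4:8:8"))  # False -> suspect table entry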
GTX series
Does anyone think that the GeForce GTX series should be split into its own section? Nvidia doesn't seem to be using the GeForce 9 series name for these chips and they are based on a different design than the GeForce 9/8 series(es?) are. (I hate trying to figure out the plural of series! :)) -- Imperator3733 (talk) 14:19, 23 May 2008 (UTC)
- Even though I have nothing to back me up on this, I think it should split (which I now see has happened in the last hour or two while I was out). Its title has no 9xxx in it at all. With that being said, Tom's Hardware suggests that we shouldn't call this the "GTX Series"; GTX, GT, GTS, etc. remain the same, just moved to the front. Perhaps "200 Series" is more appropriate? Or, until we get confirmation on what to call it, maybe just stick with GT200 series/chips. BlueBird05 (talk) 02:26, 26 May 2008 (UTC)
- The plural of series is series. c.f. Series_(mathematics) "Finite series may be handled with elementary algebra, but infinite series require tools from mathematical analysis if they are to be applied in anything more than a tentative way." Mr. Jones (talk) 11:04, 16 June 2008 (UTC)
double asterisk
In the section on the Nvidia FX cards, their OpenGL support is listed as 1.5/2.0**, but there is no explanation of what this syntax means. Asterisks with no annotations are a frequent sight on Wikipedia and need to be dealt with. I have no idea what is meant by this instance. Dwr12 (talk) 21:20, 2 July 2008 (UTC)
motherboard GPUs are missing
The whole GeForce 8x00 integrated GPU line (8100, 8200, 8300) is missing from the tables. The GeForce_8200_Chipset page contains a bit more information, but it's subject to merging into the GeForce_8_Series page. 195.38.110.188 (talk) 23:38, 31 July 2008 (UTC)
Quadro Mobile GPUs completely missing
I've noticed the entire current Quadro Mobile line is missing from this page.
High End
NVIDIA Quadro FX 3700M
NVIDIA Quadro FX 3600M
NVIDIA Quadro FX 2700M
Mid-Range
NVIDIA Quadro FX 1700M
NVIDIA Quadro FX 1600M
NVIDIA Quadro FX 770M
NVIDIA Quadro FX 570M
Entry Level
NVIDIA Quadro FX 370M
NVIDIA Quadro FX 360M
They are listed on Nvidia's homepage here: http://www.nvidia.com/page/quadrofx_go.html Evil genius (talk) 07:48, 8 September 2008 (UTC)
Error in Features table for Geforce 6
There are two columns both labeled "PureVideo w/WMV9 Decode" but with different content! --Xerces8 (talk) 11:38, 5 October 2008 (UTC)
- Done. The NVIDIA PureVideo article should've helped you find that info. Em27 (talk) 13:08, 13 May 2009 (UTC)
Removed some entries
Removed the GF 300 series plus some speculation cards from the GF 200 series. Unless the specs of a new card are officially announced, they should not be here. —Preceding unsigned comment added by 213.35.167.28 (talk) 19:06, 25 October 2008 (UTC)
OpenGL 3.0 support
See http://developer.nvidia.com/object/opengl_3_driver.html. Some of these now do OpenGL 3.0 with the correct driver. Jesse Viviano (talk) 07:46, 6 February 2009 (UTC)
Citation for GF 300 Series...
...is needed, otherwise the remarks made are pretty much common gossip which shouldn't be on here. Anon —Preceding unsigned comment added by 85.102.53.150 (talk) 22:41, 8 April 2009 (UTC)
- Citation for the GTX 390 has been added, but only the GTX 390, as it is the only card that has any specs confirmed.
- Just a heads up, members from your favorite site 4chan are changing the values of the 300 series on an hourly basis to either make nVidia look worse than or better than ATI, depending on where an individual's brand loyalties lie.
GeForce 4xx Series
This section is completely unnecessary; nothing has been announced or even revealed about this series, and whoever added it obviously had no quantifiable evidence or citation to back it up. Adding DirectX 12 to the section was a rookie troll mistake. —Preceding unsigned comment added by 59.167.36.93 (talk) 02:09, 20 April 2009 (UTC)
Core config - pixel shaders
How come cards without pixel shaders have their core config listed as if they do? For example, the GF2 Ti has a core config of 0:4:8:4. However, the footnote for the core config syntax is: Vertex shader : Pixel shader : Texture mapping unit : Render Output unit. This suggests the GF2 Ti has no vertex shaders but 4 pixel shaders. It's pretty common knowledge the Geforce 3 was nVidia's first consumer card to incorporate pixel shaders. I noticed this a while back and it's never been changed, so I'm thinking it's not an error. Can someone explain why the core config syntax is the way it is? 24.68.36.117 (talk) 19:42, 16 June 2009 (UTC)
Series 100 and 200 OpenGL support
So I checked the GeForce 200 Series page, and it says that all of the Nvidia GeForce 200 series cards support OpenGL 3.0, yet this page reads that they all support OpenGL 2.1. Also, this page holds that the GeForce 130M supports OpenGL 2.1, but the GeForce 200 Series page says that it is a modified 9600 GSO, which this page says does in fact support 3.0. Can anyone make sense of this? RCIWesner (talk) 17:03, 14 July 2009 (UTC)
GT 300 postponed?
Any references to support the alleged postponing of GTX 380 (and other GT 300 cards) from Q4/09 to Q3/10 and the changes in specifications? To me the changes made by 70.131.80.5 seem like vandalism. See also edits to GeForce 200 Series. — Preceding unsigned comment added by Anttonij (talk • contribs) 8:32, 23 September 2009 (UTC)
- The user in question seems to have lowered the specs on all GT300 GPUs, in part by up to a factor of 8. Since the specs are mostly speculative at this point, it's hard to tell whether it's just vandalism. There are rumors about the GT300 release being postponed, but nothing is confirmed as far as I can tell. LukeMadDogX (talk) 11:02, 23 September 2009 (UTC)
- Yet another one flying the flag by adding model tags such as "Ultra" or "GT"... Joy. I don't believe rumors or any kind of speculative information should be shared on Wikipedia. This is an encyclopedia, not a tech gossip portal, FFS. — Preceding unsigned comment added by 85.96.68.251 (talk • contribs) 11:23, 28 September 2009 (UTC)
Comment moved here from the main page
"October 3, 2009 3:01 AM EST @ person who keeps making all these messy speculations regarding the GF100 cards- Do you really believe that Nvidia will release 18 graphics cards for the GT300? Your basis is ludicrous and seems like the figures were just materialized from nowhere. At least the other speculations are more logical and consistent. No video card will provide such a horrible performance for such a high price neither will the enthusiast end cost ridiculous prices. Please stop taking any medications or abusing alcohol. Take a walk and let the oxygen in your blood flow to your head. "
^--- this doesn't belong in the article, try to keep dialogue like this private. —Preceding unsigned comment added by 78.96.215.71 (talk) 08:39, 3 October 2009 (UTC)
Suggestion
I believe that the table for the GT300 series should be scrapped until the release of the actual graphics cards later this year or early next year. This is the best way to prevent any unwanted changes or speculative, fraudulent rumors on the specifications so that the factual integrity in Wikipedia remains steadfast. —Preceding unsigned comment added by 71.189.49.39 (talk) 16:22, 4 October 2009 (UTC)
- *Somebody* suggested it a few pixels above, didn't he? —Preceding unsigned comment added by 88.233.118.5 (talk) 20:22, 4 October 2009 (UTC)
Vandalism by 75.56.50.233
Today 75.56.50.233 tried to vandalise the Geforce 300 section and the Geforce 200 section. Can something be done about this? —Preceding unsigned comment added by 60.50.150.249 (talk) 22:44, 12 October 2009 (UTC)
- Semi-protected for about one week per my request here. ConCompS (talk) 03:34, 13 October 2009 (UTC)
I believe 70.131.87.247 may also be a vandal of the GeForce 300 section, as extremely high specifications (resulting in 6264 GFLOPS, without the GFLOPS column being updated, not to mention the other columns that clock-speed changes affect) were edited over the values sourced from Tech ARP. I've undone this user's edits and corrected my own edits as best I can. I'll continue to undo future edits as vandalism unless the user responds to comments on their talk page. Ltwizard (talk) 04:18, 20 November 2009 (UTC)
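For anyone checking such edits: the GFLOPS column for unified-shader parts is derived from the shader count and shader clock, so a clock change must change it too. A Python sketch, assuming the MAD+MUL dual-issue counting (3 flops per shader per clock) Nvidia used for G80/GT200-class parts:

    def gflops(unified_shaders, shader_clock_mhz, flops_per_clock=3):
        # flops_per_clock=3 assumes a dual-issued MAD (2 flops) + MUL (1 flop)
        return unified_shaders * shader_clock_mhz * flops_per_clock / 1000

    # GeForce GTX 280: 240 shaders at 1296 MHz
    print(gflops(240, 1296))  # ~933 GFLOPS, matching the published figure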
Add remarks about Shader Clocks
Nvidia cards have unlinked shader and core clocks; AMD cards have linked ones. —Preceding unsigned comment added by 112.201.119.209 (talk) 21:57, 19 November 2009 (UTC)
Vandalism by 75.57.69.93
75.57.69.93 vandalised the GeForce 200 and GeForce 300 sections, I believe it's the same person as the "Vandalism by 75.56.50.233" section. —Preceding unsigned comment added by 60.51.99.254 (talk) 10:10, 2 December 2009 (UTC)
Vandalism by 75.57.69.93, 95.64.94.7 & 119.74.232.44
Again, some kiddies are trying to vandalise the GeForce 300 sections. I request the page be protected for 1 month. —Preceding unsigned comment added by 60.50.148.130 (talk) 23:58, 4 December 2009 (UTC)
Vandalism by 84.86.163.122
84.86.163.122 reverted the performance numbers of the GeForce 300 series back to the old values before my edit on December 10th, without providing any reason.
I was using the latest numbers leaked to the press by Nvidia: http://www.brightsideofnews.com/news/2009/12/8/nvidia-gf100fermi-sli-powered-maingear-pc-pictured.aspx
Is there any reason why we should trust the older numbers more than this?