GeForce 600 Series

From Wikipedia, the free encyclopedia
GeForce 600 Series
Release date March 22, 2012
Codename(s) GK104, GK106, GK107
Model(s) GeForce Series
  • GeForce GT Series
  • GeForce GTX Series
Transistors and fabrication process
  • 292M 40 nm (GF119)
  • 585M 40 nm (GF108)
  • 1,170M 40 nm (GF116)
  • 1,950M 40 nm (GF114)
  • 1,270M 28 nm (GK107)
  • 1,270M 28 nm (GK208)
  • 2,540M 28 nm (GK106)
  • 3,540M 28 nm (GK104)
  • 7,080M 28 nm (GK110)
Entry-level cards GT 605M, GT 610M, GT 620, GT 620M, GT 630M, GT 635M, GT 640, GT 645M
Mid-range cards GTX 650, GTX 650M, GTX 650 Ti, GTX 660
High-end cards GTX 660 Ti, GTX 670, GTX 680, GTX 680M
Enthusiast cards GTX 680MX, GTX 690, GTX TITAN
Direct3D support Direct3D 11.0[1]
OpenCL support OpenCL 1.2
OpenGL support OpenGL 4.3
Predecessor GeForce 500 Series
Successor GeForce 700 Series

The GeForce 600 Series is a family of graphics processing units developed by Nvidia for desktop and laptop PCs. It introduced the Kepler architecture (GK-codenamed chips), named after the German mathematician, astronomer, and astrologer Johannes Kepler. GeForce 600 series cards were first released in 2012.

Overview

Where the goal of the previous architecture, Fermi, was to increase raw performance (particularly for compute and tessellation), Nvidia's goal with the Kepler architecture was to increase performance per watt, while still striving for overall performance increases.[2] The primary way it achieved this was through the use of a unified clock. By abandoning the separate shader clock found in its previous GPU designs, Nvidia increased efficiency, even though more cores are needed to achieve similar levels of performance. This is not only because the cores are more power-efficient (two Kepler cores use about 90% of the power of one Fermi core, according to Nvidia's numbers), but also because the reduction in clock speed delivers a 50% reduction in power consumption in that area.[3]

Kepler also introduced a new form of texture handling known as bindless textures. Previously, textures needed to be bound by the CPU to a particular slot in a fixed-size table before the GPU could reference them. This led to two limitations: one was that because the table was fixed in size, there could only be as many textures in use at one time as could fit in this table (128). The second was that the CPU was doing unnecessary work: it had to load each texture, and also bind each texture loaded in memory to a slot in the binding table.[2] With bindless textures, both limitations are removed. The GPU can access any texture loaded into memory, increasing the number of available textures and removing the performance penalty of binding.
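
The two limitations and their removal can be illustrated with a toy model. The 128-slot table mirrors the description above; the class names and the handle scheme are invented for illustration (real bindless access is exposed through GPU APIs, not Python):

```python
# Toy contrast between slot-based texture binding and bindless handles.
# The 128-slot limit comes from the text; everything else is illustrative.

class BoundTextureTable:
    """Classic model: the CPU must bind each texture to one of 128 slots."""
    SLOTS = 128

    def __init__(self):
        self.slots = [None] * self.SLOTS

    def bind(self, slot, texture):
        # Extra CPU work per texture, and only SLOTS textures usable at once.
        if not 0 <= slot < self.SLOTS:
            raise IndexError("only 128 textures can be bound at one time")
        self.slots[slot] = texture

    def sample(self, slot):
        return self.slots[slot]


class BindlessMemory:
    """Kepler-style model: the GPU dereferences a handle directly."""

    def __init__(self):
        self.memory = {}

    def load(self, texture):
        handle = id(texture)      # any number of resident textures
        self.memory[handle] = texture
        return handle

    def sample(self, handle):
        return self.memory[handle]   # no per-draw binding step
```

In the second model the number of usable textures is bounded only by memory, and the CPU-side binding step disappears, which is the performance point made above.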

Finally, with Kepler, Nvidia was able to increase the memory clock to 6 GHz. To accomplish this, they needed to design an entirely new memory controller and bus. While still shy of the theoretical 7 GHz limitation of GDDR5, this is well above the 4 GHz speed of the memory controller for Fermi.[3]
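
That memory-clock difference translates directly into bandwidth: peak GB/s is the effective data rate multiplied by the bus width in bytes. A quick check against the figures in the product table below (the helper function is illustrative):

```python
def memory_bandwidth_gbs(data_rate_mtps, bus_width_bits):
    """Peak memory bandwidth in GB/s: transfers/s times bytes per transfer."""
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# GTX 680: 6008 MT/s effective GDDR5 on a 256-bit bus -> 192.256 GB/s
gtx_680 = memory_bandwidth_gbs(6008, 256)

# GTX 660 Ti: same data rate on a 192-bit bus -> ~144.2 GB/s
gtx_660_ti = memory_bandwidth_gbs(6008, 192)
```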

Features

The GeForce 600 Series contains products from both the older Fermi and newer Kepler generations of Nvidia GPUs. Kepler based members of the 600 series add the following standard features to the GeForce family:

  • PCI Express 3.0 interface
  • DisplayPort 1.2
  • HDMI 1.4a 4K x 2K video output
  • PureVideo VP5 hardware video acceleration (up to 4K x 2K H.264 decode)
  • Hardware H.264 encoding acceleration block (NVENC)
  • Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround)
  • Next Generation Streaming Multiprocessor (SMX)
  • A New Instruction Scheduler
  • Bindless Textures
  • CUDA Compute Capability 3.0
  • GPU Boost
  • TXAA
  • Manufactured by TSMC on a 28 nm process

Next Generation Streaming Multiprocessor (SMX)

The Kepler architecture employs a new streaming multiprocessor design called SMX. The SMX is the key to Kepler's power efficiency, as the whole GPU runs from a single "core clock" rather than the double-pumped "shader clock" of earlier designs.[3] Although the single unified clock improves power efficiency (two Kepler CUDA cores running at the lower clock consume about 90% of the power of one double-clocked Fermi core), the SMX consequently needs additional processing units to execute a whole warp per cycle. Kepler also needed to raise raw GPU performance to remain competitive. As a result, the SMX doubles the CUDA cores per array from 16 to 32, the number of core arrays from three to six, and the load/store and SFU groups from one each to two each. The supporting resources are doubled as well: from two warp schedulers to four, from four dispatch units to eight, and the register file grows to 64K entries. Because all of this doubling consumes die space, the PolyMorph Engine was not doubled but enhanced, making it capable of emitting a polygon in two cycles instead of four.[4] Since Nvidia had to work on area efficiency as well as power efficiency, and Kepler's regular CUDA cores are not FP64-capable, it opted for eight dedicated FP64 CUDA cores per SMX, saving die space while still offering FP64 capability. The result of these changes is a large increase in graphics performance while downplaying FP64 (double-precision) performance.
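
The resource counts above imply the usual peak-throughput arithmetic: cores × clock × 2 FLOPs (one fused multiply-add per core per cycle, the standard counting convention, assumed here). A quick sketch checking this against the GTX 680 entries in the product table:

```python
def peak_gflops(cores, clock_mhz, flops_per_core_cycle=2):
    """Peak throughput in GFLOPS, counting a fused multiply-add as 2 FLOPs."""
    return cores * clock_mhz * flops_per_core_cycle / 1000

# GTX 680: 8 SMX units of 192 CUDA cores at a 1006 MHz unified clock
sp = peak_gflops(8 * 192, 1006)   # ~3090 GFLOPS single precision

# FP64 comes only from the 8 dedicated FP64 cores per SMX mentioned above
dp = peak_gflops(8 * 8, 1006)     # ~129 GFLOPS double precision
```

The single-precision figure matches the 3090.4 GFLOPS listed for the GTX 680 in the table below, and the much smaller FP64 figure shows why double-precision throughput is described as downplayed.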

A New Instruction Scheduler

Additional die area was gained by replacing the complex hardware scheduler with a simpler software-assisted one: warp scheduling was moved into Nvidia's compiler, which is possible because the GPU's math pipeline now has a fixed latency. Static scheduling lets the compiler exploit instruction-level parallelism and superscalar execution in addition to thread-level parallelism; since the latency of the math pipeline is known in advance, dynamic scheduling inside a warp becomes redundant. This increased area and power efficiency at some cost in compute performance, because instructions can no longer be reordered in the most efficient way at run time and instead execute largely in the order the compiler emitted them.[3]
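
The idea of scheduling against known latencies at compile time can be shown with a toy scheduler. The three-instruction "ISA" and its latencies are invented for illustration, not Kepler's:

```python
# Toy compile-time scheduler: with fixed, known instruction latencies, the
# issue cycle of every instruction (including any stalls) can be decided
# entirely ahead of time, with no dynamic scheduling hardware.

LATENCY = {"mul": 4, "add": 2, "ld": 6}   # invented latencies, in cycles

def schedule(program):
    """program: list of (op, dest, sources). Returns the issue cycle of each."""
    ready = {}            # register -> cycle at which its value is available
    issue_cycles = []
    cycle = 0
    for op, dest, srcs in program:
        # Stall until all source registers are ready; latency is static,
        # so this is computable before the program ever runs.
        cycle = max([cycle] + [ready.get(r, 0) for r in srcs])
        issue_cycles.append(cycle)
        ready[dest] = cycle + LATENCY[op]
        cycle += 1        # at most one issue per cycle in this toy model

    return issue_cycles

prog = [("ld", "r0", []), ("mul", "r1", ["r0"]), ("add", "r2", ["r1"])]
```

Here the load's 6-cycle latency forces the dependent multiply to issue at cycle 6, a decision a hardware scheduler would otherwise have to make at run time.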

GPU Boost

GPU Boost is a new feature roughly analogous to a CPU's turbo boost. The GPU is always guaranteed to run at a minimum clock speed, referred to as the "base clock". This clock speed is set to a level that ensures the GPU stays within its TDP specification even at maximum load.[2] When the load is lower, however, there is room to raise the clock speed without exceeding the TDP. In these scenarios, GPU Boost gradually increases the clock speed in steps until the GPU reaches a predefined power target (170 W by default).[3] With this approach, the GPU ramps its clock up and down dynamically, providing the maximum possible speed while remaining within TDP specifications.

The power target, as well as the size of the clock increase steps that the GPU will take, are both adjustable via third-party utilities and provide a means of overclocking Kepler-based cards.[2]
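
The ramping behaviour can be sketched as a simple control loop. The base clock is the GTX 680's, and the 170 W target comes from the text; the step size and the linear power model are invented for illustration:

```python
# Minimal sketch of a GPU-Boost-style loop: raise the clock in fixed steps
# while estimated power stays under the target, then stop.

BASE_CLOCK = 1006      # MHz, the guaranteed minimum (GTX 680 base clock)
STEP = 13              # MHz per boost step (illustrative value)
POWER_TARGET = 170.0   # watts, the default target mentioned above

def boost_clock(power_at):
    """power_at: function mapping clock (MHz) -> estimated board power (W)."""
    clock = BASE_CLOCK
    while power_at(clock + STEP) <= POWER_TARGET:
        clock += STEP
    return clock

# Hypothetical linear power model for a light workload
light_load = lambda mhz: 120 + 0.35 * (mhz - BASE_CLOCK)
```

Under a heavy load whose power already exceeds the target, the loop never advances and the GPU simply runs at the base clock, matching the guarantee described above.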

Microsoft Direct3D Support

Nvidia Fermi and Kepler GPUs of the GeForce 600 series support the Direct3D 11.0 specification. Nvidia originally stated that the Kepler architecture has full DirectX 11.1 support, which includes the Direct3D 11.1 path.[5] The following "Modern UI" Direct3D 11.1 features, however, are not supported:[6][7]

  • Target-Independent Rasterization (2D rendering only).
  • 16xMSAA Rasterization (2D rendering only).
  • Orthogonal Line Rendering Mode.
  • UAV (Unordered Access View) in non-pixel-shader stages.

According to Microsoft's definition, Direct3D feature level 11_1 must be complete; otherwise the Direct3D 11.1 path cannot be executed.[8] The integrated Direct3D features of the Kepler architecture are the same as those of the GeForce 400 series Fermi architecture.[7]

TXAA

Exclusive to Kepler GPUs, TXAA is a new anti-aliasing method from Nvidia designed for direct implementation into game engines. TXAA combines the MSAA technique with custom resolve filters. It is designed to address a key problem in games known as shimmering, or temporal aliasing: TXAA smooths out the scene in motion, clearing in-game scenes of aliasing and shimmering.[9]

NVENC

NVENC is Nvidia's power-efficient fixed-function encoder that can decode, preprocess, and encode H.264-based content. NVENC's output is limited to the H.264 format, but within that limit it supports encoding at resolutions up to 4096×4096.[10]

Like Intel’s Quick Sync, NVENC is currently exposed through a proprietary API, though Nvidia does have plans to provide NVENC usage through CUDA.[10]

New Driver Features

In the R300 drivers, released alongside the GTX 680, Nvidia introduced a new feature called Adaptive VSync. It is intended to combat a limitation of v-sync: when the framerate drops below 60 FPS, the v-sync rate falls to 30 FPS, and then to further factors of 60 if needed, producing visible stutter. Below 60 FPS, however, v-sync is unnecessary, since the monitor can display frames as they become ready. To address this (while keeping v-sync's protection against screen tearing), Adaptive VSync can be turned on in the driver control panel. It enables v-sync when the framerate is at or above 60 FPS and disables it when the framerate drops lower. Nvidia claims that this results in a smoother overall display.[2]

While the feature debuted alongside the GTX 680, this feature is available to users of older Nvidia cards who install the updated drivers.[2]
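
The driver's decision rule reduces to a comparison against the refresh rate. A sketch, assuming a 60 Hz display as in the text (the factor-of-60 fallback list is illustrative):

```python
# Adaptive VSync decision sketch: v-sync on at or above the refresh rate
# (no tearing), off below it (no snap-down to 30 FPS and its stutter).

REFRESH_HZ = 60

def vsync_enabled(framerate_fps):
    return framerate_fps >= REFRESH_HZ

def effective_fps(framerate_fps, adaptive=True):
    """Displayed framerate under v-sync, with or without the adaptive rule."""
    if framerate_fps >= REFRESH_HZ:
        return REFRESH_HZ
    if adaptive:
        return framerate_fps            # frames shown as soon as they are ready
    for divisor in (2, 3, 4, 6):        # plain v-sync: snap down to 30, 20, 15, 10
        if framerate_fps >= REFRESH_HZ // divisor:
            return REFRESH_HZ // divisor
    return framerate_fps
```

For example, a game rendering at 50 FPS displays 50 FPS with Adaptive VSync but snaps down to 30 FPS with plain v-sync, which is exactly the stutter case described above.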

History

In September 2010, Nvidia first announced Kepler.[11]

In early 2012, details of the first members of the 600 series parts emerged. These initial members were entry-level laptop GPUs sourced from the older Fermi architecture.

On March 22, 2012, Nvidia unveiled the 600 series GPUs: the GTX 680 for desktop PCs and the GeForce GT 640M, GT 650M, and GTX 660M for notebook/laptop PCs. The GK104 (which powers the GTX 680) has 1536 CUDA cores, in eight groups of 192, and 3.5 billion transistors. The GK107 (GT 640M/GT 650M/GTX 660M) has 384 CUDA cores.

On April 29, 2012, the first dual-GPU Kepler product joined the 600 series. The GTX 690 carries two of the GTX 680's GPUs, for a total of 3072 CUDA cores and, effectively, a 512-bit memory interface.

On May 10, 2012, the GTX 670 joined the series. The card features 1344 CUDA cores, 2 GB of GDDR5 VRAM, and a 256-bit memory bus.

On June 4, 2012, the GTX 680M joined the series. This mobile GPU, based on the GTX 670, features 1344 CUDA cores, 4 GB of GDDR5 VRAM, and a 256-bit memory bus.

On August 16, 2012, the GTX 660 Ti joined the series. The card has 1344 CUDA cores along with 2 GB of GDDR5 VRAM and a 192-bit memory bus.

On September 13, 2012, the GTX 660 and GTX 650 joined the series. The GTX 660 has 960 CUDA cores, 2 GB of GDDR5 VRAM, and a 192-bit memory bus; the GTX 650 has 384 CUDA cores, 1 GB of GDDR5 VRAM, and a 128-bit memory bus.

On October 9, 2012, the GTX 650 Ti joined the series, with 768 CUDA cores, 1 GB of GDDR5 VRAM, and a 128-bit memory bus.[12]

Products

GeForce 600 Series

  • 1 SPs - shader processors - unified shaders (vertex shader / geometry shader / pixel shader) : TMUs - texture mapping units : ROPs - render output units
  • 2 The GeForce 605 (OEM) card is a rebranded GeForce 510.
  • 3 The GeForce GT 610 card is a rebranded GeForce GT 520.
  • 4 The GeForce GT 620 (OEM) card is a rebranded GeForce GT 520.
  • 5 The GeForce GT 620 card is a rebranded GeForce GT 530.
  • 6 The GeForce GT 630 (DDR3) card is a rebranded GeForce GT 440 (DDR3).
  • 7 The GeForce GT 630 (GDDR5) card is a rebranded GeForce GT 440 (GDDR5).
  • 8 The GeForce GT 640 (OEM) card is a rebranded GeForce GT 545 (DDR3).
  • 9 The GeForce GT 645 (OEM) card is a rebranded GeForce GTX 560 SE.
Model Launch Code name Fab (nm) Transistors (Million) Die Size (mm2) Die Count Bus interface Memory (MiB) SM count Config core 1 Clock rate Fillrate Memory Configuration API support (version) GFLOPS (FMA) TDP (watts) GFLOPS/W Release Price (USD)
Core (MHz) Average Boost (MHz) Max Boost (MHz) Shader (MHz) Memory (MHz) Pixel (GP/s) Texture (GT/s) Bandwidth (GB/s) DRAM type Bus width (bit) DirectX OpenGL OpenCL
GeForce 605² April 3, 2012 GF119 40 292 79 1 PCIe 2.0 x16 512 / 1024 1 48:8:4 523 N/A N/A 1046 1798 2.1 4.3 14.4 DDR3 64 11.0 4.3 1.1 100.4 25 4.02 OEM
GeForce GT 610³ May 15, 2012 GF119 40 292 79 1 PCIe 2.0 x16, PCI 1024 1 48:8:4 810 N/A N/A 1620 1800 3.24 6.5 14.4 DDR3 64 11.0 4.3 1.1 155.5 29 5.36 Retail
GeForce GT 620⁴ April 3, 2012 GF119 40 292 79 1 PCIe 2.0 x16, PCI 512 / 1024 1 48:8:4 810 N/A N/A 1620 1798 3.24 6.5 14.4 DDR3 64 11.0 4.3 1.1 155.5 30 5.18 OEM
GeForce GT 620⁵ May 15, 2012 GF108 40 585 116 1 PCIe 2.0 x16, PCI 1024 2 96:16:4 700 N/A N/A 1400 1800 2.8 11.2 14.4 DDR3 64 11.0 4.3 1.1 268.8 49 5.49 Retail
GeForce GT 625 February 19, 2013 GF119 40 292 79 1 PCIe 2.0 x16 512 / 1024 1 48:8:4 810 N/A N/A 1620 1798 3.24 6.5 14.4 DDR3 64 11.0 4.3 1.1 155.5 30 5.18 OEM
GeForce GT 630 April 24, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 1 192:16:16 875 N/A N/A 875 1782 7 14 28.5 DDR3 128 11.0 4.3 1.2 336 50 6.72 OEM
GeForce GT 630 (DDR3)⁶ May 15, 2012 GF108 40 585 116 1 PCIe 2.0 x16, PCI 1024 2 96:16:4 810 N/A N/A 1620 1800 3.2 13 28.8 DDR3 128 11.0 4.3 1.1 311 65 4.79 Retail
GeForce GT 630 (GDDR5)⁷ May 15, 2012 GF108 40 585 116 1 PCIe 2.0 x16, PCI 1024 2 96:16:4 810 N/A N/A 1620 3200 3.2 13 51.2 GDDR5 128 11.0 4.3 1.1 311 65 4.79 Retail
GeForce GT 635 February 19, 2013 GK208 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 1 192:16:16 875 N/A N/A 875 1782 7 14 28.5 DDR3 128 11.0 4.3 1.2 336 50 6.72 OEM
GeForce GT 640⁸ April 24, 2012 GF116 40 1170 238 1 PCIe 2.0 x16 1536 / 3072 3 144:24:24 720 N/A N/A 1440 1782 17.3 17.3 42.8 DDR3 192 11.0 4.3 1.1 414.7 75 5.53 OEM
GeForce GT 640 (DDR3) April 24, 2012 GK107-301-A2 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 2 384:32:16 797 N/A N/A 797 1782 12.8 25.5 28.5 DDR3 128 11.0 4.3 1.2 612.1 50 12.24 OEM
GeForce GT 640 (DDR3) June 5, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 2048 2 384:32:16 900 N/A N/A 900 1782 14.4 28.8 28.5 DDR3 128 11.0 4.3 1.2 691.2 65 10.63 $100
GeForce GT 640 (GDDR5) April 24, 2012 GK107 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 2 384:32:16 950 N/A N/A 950 5000 15.2 30.4 80 GDDR5 128 11.0 4.3 1.2 729.6 75 9.73 OEM
GeForce GT 645⁹ April 24, 2012 GF114-400-A1 40 1950 332 1 PCIe 2.0 x16 1024 6 288:48:24 776 N/A N/A 1552 1914 18.6 37.3 91.9 GDDR5 192 11.0 4.3 1.1 894 140 6.39 OEM
GeForce GTX 650 September 13, 2012 GK107-450-A2 28 1300 118 1 PCIe 3.0 x16 1024 / 2048 2 384:32:16 1058 N/A N/A 1058 5000 16.9 33.8 80 GDDR5 128 11.0 4.3 1.2 812.5 64 12.7 $110
GeForce GTX 650 Ti October 9, 2012 GK106-220-A1 28 2540 221 1 PCIe 3.0 x16 1024 / 2048 4 768:64:16 928 N/A N/A 928 5400 14.8 59.2 86.4 GDDR5 128 11.0 4.3 1.2 1420.8 110 12.92 $150
GeForce GTX 650 Ti Boost March 26, 2013 GK106-240-A1 28 2540 221 1 PCIe 3.0 x16 2048 4 768:64:24 980 1033 N/A 980 6002 23.5 62.7 144.2 GDDR5 192 11.0 4.3 1.2 1505.28 134 $170
GeForce GTX 660 September 13, 2012 GK106-400-A1 28 2540 221 1 PCIe 3.0 x16 2048 / 3072 5 960:80:24 980 1033 1084 980 6000 23.5 78.5 144.2 GDDR5 192 11.0 4.3 1.2 1881.6 140 13.44 $230
GeForce GTX 660 (OEM[13]) August 22, 2012 GK104-200-KD-A2 28 3540 294 1 PCIe 3.0 x16 1536 / 2048 6 1152:96:24 / 1152:96:32 823 888 Unknown 823 5800 19.8 79 134 GDDR5 192 / 256 11.0 4.3 1.2 2108.6 130 16.22 OEM
GeForce GTX 660 Ti August 16, 2012 GK104-300-KD-A2 28 3540 294 1 PCIe 3.0 x16 2048 / 3072 7 1344:112:24 915 980 1058 915 6008 22.0 102.5 144.2 GDDR5 192 11.0 4.3 1.2 2460 150 16.40 $300
GeForce GTX 670 May 10, 2012 GK104-325-A2 28 3540 294 1 PCIe 3.0 x16 2048 / 4096 7 1344:112:32 915 980 1084 915 6008 29.3 102.5 192.256 GDDR5 256 11.0 4.3 1.2 2460 170 14.47 $400
GeForce GTX 680 March 22, 2012 GK104-400-A2 28 3540 294 1 PCIe 3.0 x16 2048 / 4096 8 1536:128:32 1006[2] 1058 1110 1006 6008 32.2 128.8 192.256 GDDR5 256 11.0 4.3 1.2 3090.4 195 15.85 $500
GeForce GTX 690 April 29, 2012 2× GK104-355-A2 28 2× 3540 2× 294 2 PCIe 3.0 x16 2× 2048 2× 8 2× 1536:128:32 915 1019 1058[14] 915 6008 2× 29.28 2× 117.12 2× 192.256 GDDR5 2× 256 11.0 4.3 1.2 2× 2810.88 300 18.74 $1000

GeForce GTX Titan

Contrary to some initial reports, the GTX Titan and GTX 780 are not the same product.[15]

Model Launch Code name Fab (nm) Transistors (Million) Die Size (mm2) Die Count Bus interface Memory (MiB) SMX count Config core 1 Clock rate Fillrate Memory Configuration API support (version) Processing Power (peak, TFLOPS) TDP (watts) GFLOPS/W Release Price (USD)
Base (MHz) Boost (MHz) Memory (MHz) Pixel (GP/s) Texture (GT/s) Bandwidth (GB/s) DRAM type Bus width (bit) DirectX OpenGL OpenCL Single Precision Double Precision Single Precision Double Precision
GeForce GTX Titan February 19, 2013 GK110 28 7080 561 1 PCIe 3.0 x16 6144 14 2688:224:48 837 876 6008 40.2 187.5 288.4 GDDR5 384 11.1 4.3 1.2 4.5 1.3 250 18 5.2 $999

GeForce 600M (6xxM) series

The GeForce 600M series comprises GPUs for notebook PCs. The processing power is obtained by multiplying the shader clock speed, the number of cores, and the number of instructions the cores can perform per cycle.
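
A minimal check of that formula against two table entries, counting a fused multiply-add as two operations per core per cycle (the standard convention, assumed here):

```python
def processing_power_gflops(cores, shader_clock_mhz, flops_per_cycle=2):
    """Processing power in GFLOPS: cores x shader clock x ops per cycle."""
    return cores * shader_clock_mhz * flops_per_cycle / 1000

# GT 640M (Kepler): 384 cores, shader clock equals the 625 MHz core clock
gt_640m = processing_power_gflops(384, 625)    # 480.0, matching the table

# GT 620M (Fermi): 96 cores at a doubled 1250 MHz shader clock
gt_620m = processing_power_gflops(96, 1250)    # 240.0, matching the table
```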

Model Launch Code name Fab (nm) Bus interface Memory (MiB) Config core1 Clock speed Fillrate Memory API support (version) Processing Power2
(GFLOPS)
TDP (watts) Notes
Core (MHz) Shader (MHz) Memory (MT/s) Pixel (GP/s) Texture (GT/s) Bandwidth (GB/s) Bus type Bus width (bit) DirectX OpenGL
GeForce 610M [16] Dec 2011 GF119 (N13M-GE) 40 PCIe 2.0 x16 1024 / 2048 48:8:4 900 1800 1800 3.6 7.2 14.4 DDR3 64 11.0 4.2 142.08 12 OEM. Rebadged GT 520MX
GeForce GT 620M [17] Apr 2012 GF117 (N13M-GS) 28 PCIe 2.0 x16 1024 / 2048 96:16:4 625 1250 1800 2.5 10 14.4 / 28.8 DDR3 64 / 128 11.0 4.2 240 15 OEM. Die-shrink of GF108
GeForce GT 625M October 2012 GF117 (N13M-GS) 28 PCIe 2.0 x16 1024 / 2048 96:16:4 625 1250 1800 2.5 10 14.4 DDR3 64 11.0 4.2 240 15 OEM. Die-shrink of GF108
GeForce GT 630M[17][18][19] Apr 2012 GF108 (N13P-GL) / GF117 40 / 28 PCIe 2.0 x16 1024 / 2048 96:16:4 660 / 800 1320 / 1600 1800 / 4000 2.6 / 3.2 10.7 / 12.8 28.8 / 32.0 DDR3 / GDDR5 128 / 64 11.0 4.2 258.0 / 307.2 33 GF108: OEM, rebadged GT 540M; GF117: OEM, die-shrink of GF108
GeForce GT 635M[17][20][21] Apr 2012 GF106 (N12E-GE2) / GF116 40 PCIe 2.0 x16 2048 / 1536 144:24:24 675 1350 1800 16.2 16.2 28.8 / 43.2 DDR3 128 / 192 11.0 4.2 289.2 / 388.8 35 GF106: OEM, rebadged GT 555M; GF116: 144 unified shaders
GeForce GT 640M LE[17] March 22, 2012 GF108 / GK107 (N13P-LP) 40 / 28 PCIe 2.0 x16 / PCIe 3.0 x16 1024 / 2048 96:16:4 / 384:32:16 762 / 500 1524 / 500 3130 / 1800 3 / 8 12.2 / 16 50.2 / 28.8 GDDR5 / DDR3 128 11.0 4.2 292.6 / 384 32 / 20 GF108: Fermi architecture; GK107: Kepler architecture
GeForce GT 640M[17][22] March 22, 2012 GK107 (N13P-GS) 28 PCIe 3.0 x16 1024 / 2048 384:32:16 625 625 1800 / 4000 10 20 28.8 / 64.0 DDR3 / GDDR5 128 11.0 4.2 480 32 Kepler architecture
GeForce GT 645M October 2012 GK107 (N13P-GS) 28 PCIe 3.0 x16 1024 / 2048 384:32:16 710 710 1800 / 4000 11.36 22.72 28.8 / 64.0 DDR3 / GDDR5 128 11.0 4.2 545 32 Kepler architecture
GeForce GT 650M[17][23][24] March 22, 2012 GK107 (N13P-GT) 28 PCIe 3.0 x16 1024 / 2048 384:32:16 835 / 745 / 900* 835 / 745 / 900* 1800 / 4000 / 5000* 13.4 / 11.9 / 14.4* 26.7 / 23.8 / 28.8* 28.8 / 64.0 / 80.0* DDR3 / GDDR5 128 11.0 4.2 641.3 / 572.2 / 691.2* 45 Kepler architecture
GeForce GTX 660M[17][24][25][26] March 22, 2012 GK107 (N13E-GE) 28 PCIe 3.0 x16 2048 384:32:16 835 835 5000 13.4 26.7 80.0 GDDR5 128 11.0 4.2 641.3 50 Kepler architecture
GeForce GTX 670M[17] April 2012 GF114 (N13E-GS1-LP) 40 PCIe 2.0 x16 1536 / 3072 336:56:24 598 1196 3000 14.35 33.5 72.0 GDDR5 192 11.0 4.2 803.6 75 OEM. Rebadged GTX 570M
GeForce GTX 670MX October 2012 GK104 (N13E-GR) 28 PCIe 3.0 x16 1536 / 3072 960:80:24 600 600 2800 14.4 48.0 67.2 GDDR5 192 11.0 4.2 1152 75 Kepler architecture
GeForce GTX 675M[17] April 2012 GF114 (N13E-GS1) 40 PCIe 2.0 x16 2048 384:64:32 620 1240 3000 19.8 39.7 96.0 GDDR5 256 11.0 4.2 952.3 100 OEM. Rebadged GTX 580M
GeForce GTX 675MX October 2012 GK104 (N13E-GSR) 28 PCIe 3.0 x16 4096 960:80:32 600 600 3600 19.2 48.0 115.2 GDDR5 256 11.0 4.2 1152 100 Kepler architecture
GeForce GTX 680M June 4, 2012 GK104 (N13E-GTX) 28 PCIe 3.0 x16 4096 1344:112:32 720 720 3600 23 80.6 115.2 GDDR5 256 11.0 4.2 1935.4 100 Kepler architecture
GeForce GTX 680MX October 23, 2012 GK104 28 PCIe 3.0 x16 4096 1536:128:32 720 720 5000 23 92.2 160 GDDR5 256 11.0 4.2 2234.3 100+ Kepler architecture

Chipset table

See also

References

  1. ^ "Fermi and Kepler DirectX API Support". NVIDIA. 
  2. ^ a b c d e f g NVIDIA GeForce GTX 680 Whitepaper (PDF, 1405 KB), page 6 of 29
  3. ^ a b c d e Smith, Ryan (22 March 2012). "NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown". AnandTech. Retrieved 2012-11-25. 
  4. ^ "GK104: The Chip And Architecture". Tom's Hardware. 22 March 2012. 
  5. ^ "NVIDIA Launches First GeForce GPUs Based on Next-Generation Kepler Architecture". Nvidia. 22 March 2012. 
  6. ^ Edward, James (22 November 2012). "NVIDIA claims partially support DirectX 11.1". TechNews. 
  7. ^ a b "Nvidia Doesn't Fully Support DirectX 11.1 with Kepler GPUs, But…". BSN. 
  8. ^ "D3D_FEATURE_LEVEL enumeration (Windows)". MSDN. 
  9. ^ "Introducing The GeForce GTX 680 GPU". Nvidia. 22 March 2012. 
  10. ^ a b "Benchmark Results: NVEnc And MediaEspresso 6.5". Tom’s Hardware. 22 March 2012. 
  11. ^ Yam, Marcus (22 September 2010). "Nvidia roadmap". Tom's Hardware US. 
  12. ^ http://www.geforce.com/whats-new/articles/nvidia-geforce-gtx-650-ti/
  13. ^ "GeForce GTX 660 (OEM)". GeForce.com. Retrieved 2012-09-13. 
  14. ^ http://www.anandtech.com/show/5805/nvidia-geforce-gtx-690-review-ultra-expensive-ultra-rare-ultra-fast/17
  15. ^ http://www.pcmag.com/article2/0,2817,2415528,00.asp
  16. ^ http://www.nvidia.in/object/geforce-610m-in.html#pdpContent=2
  17. ^ a b c d e f g h i http://www.anandtech.com/show/5697/nvidias-geforce-600m-series-keplers-and-fermis-and-die-shrinks-oh-my/2
  18. ^ http://www.nvidia.in/object/geforce-gt-630m-in.html#pdpContent=2
  19. ^ http://www.geforce.com/hardware/notebook-gpus/geforce-gt-630m/specifications
  20. ^ http://www.nvidia.in/object/geforce-gt-635m-in.html#pdpContent=2
  21. ^ http://www.geforce.com/hardware/notebook-gpus/geforce-gt-635m/specifications
  22. ^ http://www.anandtech.com/show/5672/acer-aspire-timelineu-m3-life-on-the-kepler-verge
  23. ^ http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03
  24. ^ a b http://content.dell.com/us/en/home/d/help-me-choose/hmc-aw-video-card-laptops
  25. ^ http://www.engadget.com/2012/01/08/lenovo-ideapad-laptops-CES-2012/
  26. ^ 660m power draw tested in Asus G75VW

External links