Confused by Kepler

By Design Corps / May 2, 2012 / 2 comments

The potential of unbiased rendering engines like iRay has made me pay even more attention to GPU developments than usual. The awesome render speeds shown in the demos make this approach very interesting; however, up until recently the only cards capable of processing very large scenes were at the upper end of the Quadro range – you know, the end with the eye-watering price tags.

This is all down to the way the data is handled: the scene to be rendered needs to fit in the physical memory of each GPU by itself. You can’t plug in three GPUs and expect the renderer to access their memory collectively; you get the performance of those three cards working together, but the scene is loaded onto each card separately. This is why cards like the Quadro 5000 and Tesla C2075 are so popular for this application…although at £1600 and £2100 respectively that is quite an investment!
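To make the distinction concrete, here is a minimal sketch (hypothetical numbers and function names, nothing to do with the actual iRay API): speed scales with the number of cards, but the scene still has to fit in each card’s memory on its own.

```python
def can_render(scene_gb, cards_gb):
    """Each GPU must hold the WHOLE scene; VRAM is not pooled across cards."""
    return all(scene_gb <= vram for vram in cards_gb)

def relative_speed(cards_gb):
    """Throughput, by contrast, does scale: the cards render in parallel.
    Naive estimate assuming identical cards."""
    return len(cards_gb)

three_geforces = [1.5, 1.5, 1.5]   # three 1.5GB consumer cards
one_quadro = [6.0]                 # one 6GB Quadro

print(can_render(2.5, three_geforces))   # False: 2.5GB scene exceeds each card's 1.5GB
print(can_render(2.5, one_quadro))       # True: the single big card holds it
print(relative_speed(three_geforces))    # 3: but speed-wise, three cards beat one
```

In other words, adding cards buys you render speed, never scene capacity – which is exactly why the big-memory Quadro and Tesla cards command those prices.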

Later releases of the Fermi-based GeForce cards – with 3GB of memory – were a move in the right direction, but the upcoming release of the Kepler cards had me holding back to see what was coming. Well, now they are here and, on paper at least, they look to blow the Fermi cards out of the water; the initial 680 series has 2GB of memory, but there are already a couple of higher-end cards with 4GB. Add that to the 1536 CUDA cores (triple the count on the Fermi cards) and these cards are surely the answer to my rendering prayers.

Well, at this point in time it seems not. Whilst Nvidia made a conscious effort to focus on compute performance with the Fermi cards (thereby losing out to ATI in pure graphics speed), they have gone the other way with Kepler and focused on graphics – great for gaming, less so for what I was hoping for. As I understand it, this is down to smaller shared data bandwidth between cores (a third of what Fermi had) and the loss of hardware scheduling; but whatever it’s down to, it is, for me, a big disappointment to read that Kepler GPUs, despite their core and memory advantages, perform (again, on paper) pretty much the same as – if not worse than – Fermi GPUs.

Of course, no actual benchmarks for this specific use have surfaced yet, as iRay doesn’t even support Kepler cards at the moment (although an update is coming). But, like I say in the title, this is confusing: Nvidia have spent years talking up CUDA and GPU compute capabilities, not to mention the amount of resource sunk into developing tech for it (Nvidia now own Mental Images, who make iRay). It seems strange to move away from it now, when it felt like we were on the cusp of something big.

Or perhaps not: Nvidia are primarily a graphics card company after all, and Kepler is just that – a graphics card. Maybe making the consumer cards less compute-capable is a deliberate way to differentiate them from the Quadro cards; the current crop of those are all still Fermi-based, so we shall see what happens when they get a Kepler upgrade. One thing’s for sure though: I’ll be hanging on before making a purchase for a while yet :)

CT

2 Responses to Confused by Kepler

  • Nezih Kanbur

    Looking at the previous course of CPU/GPU development, one thing is obvious: hardware performance has never improved by more than 15–20% between two consecutive models. Even during architectural changes, the first model of the new architecture has often performed the same as the last model of the previous one. (E.g. you can switch to LGA2011 with a four-core 3820 and get similar performance to a 2600K – but you are on the 2011 platform.)

    So the odds are against feeling relieved by high CUDA core counts; recent history whispers that the GTX 670, with its 1344 cores, will not give 15% more performance than the last graphics card of the previous architecture. Even on the Nvidia website, under the “compare and buy” link, you can see the gradual improvement in performance. The confusing thing is, the computing power of the gaming Kepler cards is stated as N/A – if that means these cards will not be doing any computing, I don’t know what to say.
    But this is Nvidia! – a company that stopped supporting ordinary frame-sequential stereoscopic 3D and created a new brand of its own, forcing consumers to throw all their OEM 3D glasses in the trash and buy its own 3D Vision glasses.

    Of course, time will tell which way this goes.

    Any news on Kepler? I’m on the verge of buying a new graphics card and my time is limited (I’ll be buying in a day or two). I’m torn between the GTX 580 and the 680 (or maybe the 670). I wish to use iRay, though it still has a looong road of improvement ahead.

    Best regards

  • Design Corps

    True, that’s the ‘tick-tock’ model that Intel use: launch a new architecture, then do a refresh.

    In terms of Kepler and iRay, there are still no benchmarks, as iRay still doesn’t support Kepler. I’ll hold judgement until I see them, but I suspect I’ll be waiting until the next generation of Kepler cards comes out; it has been mentioned that these will be a lot more compute-capable than the first generation.
