The cost of bad design

Posted By Design Corps / July 24, 2013

Autodesk recently sent me this infographic highlighting some good points about the use of 3D design software in product development. It is presumably based on US data, but I'm sure the same things happen worldwide…and across different industries, for that matter.

[Infographic: autodesk_PrDS_infographic_large_924]

Just goes to show the value of good design, and the cost of bad design (9 billion dollars…ouch!). Click on the image to view a bigger version, or view the original page over at Autodesk here.

CT

Confused by Kepler

Posted By Design Corps / May 2, 2012

The potential of unbiased rendering engines like iRay has made me pay even closer attention to GPU developments than usual. The render speeds shown in the demos make this approach very interesting; however, until recently the only cards capable of processing very large scenes were at the upper end of the Quadro range – you know, the end with the eye-watering price tags.

This is all down to the way in which the data is handled: the scene to be rendered needs to fit in the physical memory of each GPU by itself. You can't plug in three GPUs and expect the renderer to access their memory collectively; you will get the combined performance of those three cards, but the scene will be loaded onto each card separately. This is why cards like the Quadro 5000 and the Tesla C2075 are so popular for this application…although at £1600 and £2100 respectively, that is quite an investment!
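The constraint above can be sketched in a few lines: because the whole scene is replicated onto every card rather than split across them, adding GPUs multiplies compute but never pools memory. This is just an illustrative model (the function name and the GB figures are mine, not real benchmarks):

```python
def can_render(scene_gb, card_vram_gb):
    """Return True if the scene fits on every card individually.

    VRAM does not pool across GPUs here: more cards means more
    compute, but the full scene must still fit within each card's
    own memory.
    """
    return all(scene_gb <= vram for vram in card_vram_gb)

# Three 1.5 GB cards do NOT combine into 4.5 GB of usable scene memory:
print(can_render(4.0, [1.5, 1.5, 1.5]))  # False
# A single card with 6 GB can hold the same scene:
print(can_render(4.0, [6.0]))            # True
```

Which is exactly why the high-memory Quadro and Tesla cards command such a premium for this workload: the single biggest card, not the total across cards, sets the ceiling on scene size.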

Later releases of the Fermi-based GeForce cards – with 3GB of memory – were a move in the right direction, but the upcoming release of the Kepler cards had me holding back to see what was coming. Well, now they are here and, on paper at least, they look to blow the Fermi cards out of the water; the initial 680 series has 2GB of memory, but there are already a couple of higher-end cards with 4GB. Add that to the 1536 CUDA cores (triple the count on the Fermi cards) and these cards are surely the answer to my rendering prayers.

Well, at this point in time, it seems not. Whilst Nvidia made conscious efforts to focus on compute performance with the Fermi cards (thereby losing out to ATI in pure graphics speed), they have gone the other way with Kepler and focused on graphics – great for gaming, less so for what I was hoping for. As I understand it, this is down to smaller shared data bandwidth between cores (a third of what Fermi had) and the loss of hardware scheduling; but whatever it's down to, it is, for me, a big disappointment to read that Kepler GPUs, despite their core and memory advantages, perform (again, on paper) pretty much the same as Fermi GPUs, if not worse.

Of course, no actual benchmarks for this specific use have surfaced yet, as iRay doesn't even support Kepler cards at the moment (although an update is coming). But, like I say in the title, this is confusing: Nvidia have spent years talking up CUDA and GPU compute capabilities, not to mention the amount of resource sunk into developing tech for it (Nvidia now own Mental Images, who make iRay). It seems strange to move away from it now, when it felt like we were on the cusp of something big.

Or perhaps not; Nvidia are primarily a graphics card company after all, and Kepler is just that – a graphics card. Maybe this move to making the consumer cards less compute-capable is a deliberate way to differentiate them from the Quadro cards; the current crop of those are all still Fermi-based, so we shall see what happens when they get a Kepler upgrade. One thing's for sure though: I'll be hanging on before making a purchase for a while yet 🙂

CT