The marketing people over at Nvidia are, quite naturally, eager to take advantage of the GTX 295 launch to push what they call “Graphics Plus”, or non-traditional functions of the GPU.
Since nobody wanted to buy the standalone PhysX PPU, Nvidia did the smart thing and integrated PhysX hardware support into GeForce 8 (and newer) cards via CUDA. This support has been available since the August 2008 driver release.
In its new form, PhysX support can be used in several configurations:
- Single card, rendering graphics and calculating physics simultaneously
- Two cards, one rendering graphics and the other calculating physics
- Two cards in SLI mode, both rendering graphics and calculating physics simultaneously
- Three cards, two rendering graphics in SLI mode while the third calculates physics
The first configuration diverts some of the stream processors to PhysX duty. Fortunately, the proportion of processing power PhysX needs (at least in the games we've seen so far) isn't significant on a high-end card, so you'll likely see only a small drop in framerates in return for the physics eye candy.
The second configuration comes in handy if you’re upgrading from an older card that hails from the GeForce 8 family or newer, and your motherboard has a second PCIe x16 slot available. In such a scenario you can use that older, less powerful card for physics calculations. The downside is that the secondary card sucks up power and generates heat as well.
The third configuration is for those with two equally powerful graphics cards, who obviously wouldn't want one of the cards to go to waste doing nothing but PhysX processing. And the last configuration will only be of use on motherboards with three x16 slots and sufficient PCIe lanes to prevent the cards from being bottlenecked.
We think Nvidia has pretty much covered all the bases here, and this is naturally a good sales strategy as it encourages people to stick with Nvidia for future upgrades.
The ability of PhysX to increase immersion has already been demonstrated in several games – whether it truly takes off and becomes widespread remains to be seen, though. And bear in mind that ATI users are still shut out from the whole affair.
Nvidia is also trying hard to push CUDA into the mainstream. Right now many CUDA applications are highly specific, and they also tend to seem as though they require a Ph.D. to operate.
So far the only application where we see potential mass appeal is media encoding, and even there the implementation is still rather stunted. TMPGEnc, for instance, only supports CUDA acceleration when encoding to MPEG-1 or MPEG-2 video, neither of which is a format of choice outside of DVD video burning.
GeForce 3D Vision
Support for stereo gaming has actually been lurking on the sidelines for several years now. What's new is that Nvidia has changed tack and is now pushing active shutter glasses rather than the passive polarised displays and anaglyph (red-blue) displays it currently supports.
We won’t be going into detail here, but if you’re interested The Inquirer has a good (if decidedly opinionated) editorial piece on the upsides and downsides of the various different technologies here.