Integrated Intel graphics aren't a big deal for serious 3-D gamers. But what about CAD professionals?
Even in the current Sandy Bridge generation, Intel processors don't really compete with discrete 3-D cards in heavy-duty 3-D gaming or multimedia creation tasks; their shader, texture, pixel and effects processing capabilities were never intended for that. But what about a market that is just as lucrative yet far more stable over the decades – 3-D CAD for engineers and architects, for instance?
Well, have a good look at the contents of a large 3-D building or city model compared to a complex environment simulated within a modern 3-D game.
In the city or building case, you basically get a lot of polygons, since even curved buildings defined by NURBS ultimately get tessellated into them. Depending on the level of detail involved, a typical 50-storey building with some actual facade design effort put into it, like the beautiful Art Deco style towers popping up in China or Malaysia, may take anywhere between 500,000 and 10 million polygons in total. There would likely be no more than 10 different texture types for the outside facade, and another few dozen at most for the interior, if that is included at the initial modelling stage at all. Look at CapitaLand's Chaotianmen project in Chongqing, a site the author is very familiar with, as a good example of a maximum-complexity 3-D building model: its 8 towers with skybridge, plus coastal modelling of the two rivers' confluence, could take as many as 20 million polygons – still nothing compared to the most complex game scenes with all the effects turned on.
If, on the other hand, we take the Singaporean 'quick, boring design to maximise early profits' approach – the 50-storey One Raffles Quay, for instance, resulting in an ugly box – you can model the whole thing precisely in just 50 polygons plus a very, very simple flat texture, something even an early Pentium CPU could visualise in real time on its own, no GPU needed.
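A back-of-envelope sketch shows why the two cases differ by roughly five orders of magnitude. The per-storey element count and polygons-per-element figures below are illustrative assumptions, not measured data; only the 50-polygon box and the rough 5-million-polygon total come from the discussion above:

```python
# Rough polygon-budget estimate for a 50-storey tower.
# Per-element counts are illustrative assumptions for this sketch.

def tower_polygons(storeys, elements_per_storey, polys_per_element):
    """Facade budget: storeys x facade elements x polygons per element."""
    return storeys * elements_per_storey * polys_per_element

# Ornate Art Deco facade: many mullions, setbacks, decorative mouldings.
ornate = tower_polygons(storeys=50, elements_per_storey=400,
                        polys_per_element=250)

# Plain extruded box with one flat texture, as estimated above.
plain_box = 50

print(f"ornate tower: ~{ornate:,} polygons")  # ~5,000,000
print(f"plain box:     {plain_box} polygons")
```

The exact multipliers don't matter much; any plausible combination lands the detailed tower in the millions while the box stays in the tens, which is why the box was real-time on a mid-1990s CPU.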
Notice that, aside from some 'sky' and 'trees' and a bit of water surface here and there, no game-like special effects are required anywhere here. This is unlike contemporary games, where the combination of complex, changing environments, the effects and the player-experience extras taxes even the more advanced discrete GPUs.
Yet architectural and engineering CAD is a very demanding, mission-critical market, with requirements for precision, error-free simulation and certified application support. These users do pay thousands per application licence in some cases, after all – look at AutoCAD, MicroStation, Pro/ENGINEER and so on.
Back to Intel and their Sandy Bridge: yes, it's a non-starter for big-time games, but it has pretty decent 3-D polygon processing throughput, sufficient texture and effects capability for architectural and engineering CAD, and it allows for very compact system designs in the fashionable little casings that design-conscious creative firms like to deploy.
So Intel came up with an interesting strategy for the Xeon versions of its Sandy Bridge LGA1155 processor line. The Xeon E3-1200 series CPUs are now OpenGL certified, including support and certification for a number of these high-end architecture and engineering design applications. You can therefore build a 'certified 3-D engineering workstation' base on a simple mini-ITX or micro-ATX (if more memory is needed) mainboard, for a very compact system.
I have played with AutoCAD on the E3-1275, essentially a Xeon version of the Core i7-2600K, and the performance is satisfactory even for real-time 3-D manipulation of a (textureless, though) model of the central KL city area. You can pan, zoom and 3-D rotate in both isometric and perspective views easily, and this is roughly a 5-million-polygon-class model – more triangles than most typical building models handled in, say, AutoCAD right now. So most other users should be happy with it as well.
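To put that viewport test in perspective, here is the raw triangle throughput such a session implies. The 5-million-polygon model size comes from the test above; the 30 fps interactivity target is an assumption for illustration, and real CAD viewports cull and batch heavily rather than redrawing every triangle each frame:

```python
# Naive worst-case throughput for interactive viewport work:
# every triangle redrawn every frame (no culling, no level of detail).
model_triangles = 5_000_000   # KL city model class, per the test above
target_fps = 30               # assumed interactivity target

triangles_per_second = model_triangles * target_fps
print(f"~{triangles_per_second / 1e6:.0f} M triangles/s")  # ~150 M triangles/s
```

Even allowing for the optimisations a real viewport applies, sustaining smooth manipulation of a model this size is a meaningful workload for an integrated GPU, which is what makes the result notable.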
This way, Intel takes a slice of Nvidia's and AMD's highest-profit market, workstation graphics cards, by cutting into their entry level with a 'good enough' solution that still gives users peace of mind, without requiring an extra GPU purchase. The saving is welcome: even if, later in the design cycle, the user needs faster graphics, a discrete GPU can always be added. Or, more cost-effectively in Intel's case, the CPU can be upgraded in the same socket to an Ivy Bridge-class Xeon E3 with roughly double the graphics speed, plus the side CPU benefits.
While this is not a solution for more demanding multimedia 3-D content creation, or for real-time Virtual Reality (VR), where the demands exceed even the toughest games, the allure of capturing quite a chunk of entry-level CAD market share and mindshare wasn't lost on Intel. What can the competitors do about it?
There is not much Nvidia can do, except drastically reduce its entry-level Quadro card prices to offer a 'still faster' alternative at a small premium. The argument that users who spend a lot on the applications won't mind spending more on the hardware doesn't hold; this author has seen many times that this class of user is often stingier on the hardware than on the software running on it.
AMD has a chance here, though, if it is willing to take it: by offering selected bins of its high-end Llano and Trinity Fusion APUs as 'FirePro-class, OpenGL certified' parts, and throwing in full certification and support for selected key applications, it could ride the same wave as the Xeon E3s above. After all, 'good enough' combined CPU and GPU performance was one of the key Fusion APU marketing points.
The question is whether AMD can set aside enough extra money and resources for the certification and support effort, and accept a little cannibalisation of the low-end FirePro card line. Either way, I feel they should do it – Intel is only going to up the stakes as its Ivy Bridge and then Haswell E3 Xeons advance both CPU and GPU performance beyond the 'good enough' level over the next year. In the end, building model complexity grows more slowly than processor speed…