Question: Are GPU architectures and Direct3D evolving toward a design where the distinction between vertex and pixel shaders essentially goes away?—davesalvator
David Kirk: For hardware architecture, I think that's an implementation detail, not a feature.
For sure, the distinction between the programming models and instruction sets of vertex shaders and pixel shaders should go away. It would be soooo nice for developers to be able to program to a single instruction set for both.
As to whether the architectures for vertex and pixel processors should be the same, it's a good question, and time will tell the answer.
It's not clear to me that an architecture for a good, efficient, and fast vertex shader is the same as the architecture for a good and fast pixel shader. A pixel shader needs far, far more texture math performance and read bandwidth than an optimized vertex shader. So, if you used that pixel shader to do vertex shading, most of the hardware would be idle, most of the time. Which is better: a lean, mean, optimized vertex shader paired with an equally optimized pixel shader, or two less-efficient hybrid shaders? There is an old saying: "Jack of all trades, master of none."
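Kirk's idle-hardware argument can be illustrated with a back-of-the-envelope utilization model. The unit counts and workload split below are invented purely for illustration, not real hardware figures:

```python
# Hypothetical utilization model: what fraction of a shader core's
# execution units does a given workload keep busy?
# All numbers here are made up for illustration.

def utilization(alu_units: int, tex_units: int, needs_tex: bool) -> float:
    """Fraction of a core's units kept busy by a workload.

    A vertex workload (needs_tex=False) exercises only the ALUs,
    so any texture hardware built into the core sits idle.
    """
    busy = alu_units + (tex_units if needs_tex else 0)
    return busy / (alu_units + tex_units)

# A pixel-oriented hybrid core: half its area spent on texture hardware.
hybrid_alu, hybrid_tex = 4, 4

print(utilization(hybrid_alu, hybrid_tex, needs_tex=True))   # pixel work: 1.0
print(utilization(hybrid_alu, hybrid_tex, needs_tex=False))  # vertex work: 0.5
```

Under these assumed numbers, running vertex shading on the pixel-style hybrid core leaves half the hardware dark, which is the efficiency cost Kirk is weighing against the convenience of a single shader design.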
~
Question: Will we see non-uniform rasterization, streaming ray-casts, or equivalent features to enable the kind of graphics we really want—real-time, dynamic lighting with a large number of lights?—Jason_Watkins
David Kirk: Yes.
Over time, everything that has been hardwired and fixed-function will become general-purpose. This will enable much more variety in graphics algorithms and, ultimately, much more realism.
The good news, for my job security, is that graphics is still ultimately very, very hard. Tracing streaming rays in all directions, reflecting between all of the objects, lights, and fog molecules in parallel is extremely hard. Nature does it . . . in real time! However, nature does it by using all of the atoms in the universe to complete the computation. That is not available to us in a modern GPU.
Graphics will continue to be a collection of clever tricks, to do just enough work to calculate visibility, lighting, shadows, and even motion and physics, without resorting to brutishly calculating every detail. So, I think that there's a great future both in more powerful and flexible GPUs as well as in ever more clever graphics algorithms and approaches.
~
Question: With all of the push toward more powerful graphics cards to handle features such as FSAA and anisotropic filtering, why do we still use inefficient, "fake" methods to achieve these effects?—thalyn
David Kirk: As I said in my answer to the question about ray-casting and lighting effects, graphics is all "fake" methods. The trick is to perform a clever fake and not get caught! All graphics algorithms do less work than the real physical universe but attempt to produce a realistic simulation.
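A classic instance of such a "fake" is Lambertian diffuse shading, which replaces the physics of photon scattering off a surface with a single dot product per light. This is a minimal sketch with hand-picked vectors, not any particular renderer's code:

```python
# Lambertian diffuse shading: a classic graphics "fake" that replaces
# photon-transport simulation with one dot product per light.

def lambert(normal, light_dir, intensity):
    """Diffuse intensity at a surface point.

    normal and light_dir are unit 3-vectors; light arriving behind
    the surface is clamped to zero rather than traced further.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(0.0, n_dot_l)

# Surface facing up, light from directly above: full intensity.
print(lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 1.0))   # 1.0
# Light coming from below the surface: clamped to zero.
print(lambert((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), 1.0))  # 0.0
```

The point of the fake is exactly what Kirk describes: the result looks plausibly like diffuse reflection while doing a vanishingly small fraction of the work nature does.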