DirectX Raytracing

DirectX Raytracing (DXR) is a new feature introduced in the latest version of Microsoft DirectX, shipped with the Windows 10 October 2018 Update. This update provides an API (Application Programming Interface) for hardware-accelerated real-time raytracing on supporting hardware, such as the Nvidia RTX 20 series graphics cards announced in August 2018. DXR also introduces several new concepts and procedures designed specifically for the raytracing process. Microsoft's DXR, along with the Nvidia RTX 20 series graphics cards, provides a solid software and hardware foundation for raytracing, making real-time raytracing possible for video games and 3D effects rendering.

Disadvantages of Rasterization Compared to Raytracing
Rasterization has been the industry's standard way of rendering for decades. However, it is reaching its limits as people's expectations for better images keep growing. The way rasterization works is straightforward: objects on the screen are built from a mesh of virtual triangles, or polygons, that form 3D models. The computer then converts the triangles of the 3D models into pixels, or dots, on a 2D screen, and each pixel can be assigned an initial color value from the data stored in the triangle vertices. However, images created through rasterization can look unrealistic, since the technique does not mimic the behavior of light in the real world.
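The coverage step described above can be sketched in a few lines. The following is a minimal, illustrative example (plain Python, not how DirectX itself is implemented): it tests which pixels of a small grid fall inside one 2D triangle using edge functions and assigns each covered pixel a flat color taken from the triangle's data.

```python
# Minimal rasterization sketch: coverage test for one 2D triangle.
# Illustrative only -- real rasterizers also interpolate vertex
# attributes (color, depth, texture coordinates) across the triangle.

def edge(a, b, p):
    """Signed area term: >= 0 means p lies to the left of the edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height, color):
    """Return a dict mapping each covered (x, y) pixel to a flat color.
    Vertices are assumed to be in counter-clockwise order."""
    pixels = {}
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            # Inside if the center is on the inner side of all three edges.
            if (edge(v0, v1, p) >= 0 and
                edge(v1, v2, p) >= 0 and
                edge(v2, v0, p) >= 0):
                pixels[(x, y)] = color
    return pixels

# One triangle covering the lower-left half of an 8x8 grid:
covered = rasterize((0, 0), (8, 0), (0, 8), 8, 8, (255, 0, 0))
```

Pixels near the diagonal are included or excluded based on their centers, which is why rasterized edges alias without further filtering.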

How Raytracing works
Raytracing is, in fact, not new. It has long been used for non-real-time rendering. For example, it is widely utilized in modern movies' post-production to generate gorgeous and realistic 3D effects, especially in sci-fi movies.

The theory underneath this technology is simple. It provides realistic lighting by simulating the physical behavior of light in the real world. Raytracing calculates the color of pixels by tracing the path that light would take if it were to travel from the eye of the viewer through the virtual 3D scene. As it traverses the scene, the light may reflect from one object to another (causing reflections), be blocked by objects (causing shadows), or pass through transparent or semi-transparent objects (causing refractions). All of these interactions are combined to produce the final color of a pixel, which is then displayed on the screen.
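The per-pixel tracing described above can be illustrated with a minimal, self-contained sketch (plain Python, not DXR itself): one sphere, one light, and a single primary ray per call. A hit is classified as lit or shadowed depending on whether the surface faces the light; reflections and refractions are omitted for brevity.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray hits the
    sphere, or None if it misses (solves a quadratic in t)."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_pos):
    """Color one primary ray: background on a miss; on a hit, 'lit' if the
    surface faces the light and 'shadowed' otherwise."""
    t = hit_sphere(origin, direction, center, radius)
    if t is None:
        return "background"
    hit = [origin[i] + t * direction[i] for i in range(3)]
    normal = [(hit[i] - center[i]) / radius for i in range(3)]
    to_light = [light_pos[i] - hit[i] for i in range(3)]
    facing = sum(normal[i] * to_light[i] for i in range(3))
    return "lit" if facing > 0 else "shadowed"

# Eye at the origin, a sphere straight ahead, light behind the viewer:
eye = (0.0, 0.0, 0.0)
color = shade(eye, (0.0, 0.0, -1.0), (0.0, 0.0, -5.0), 1.0, (0.0, 0.0, 10.0))
```

A full ray tracer repeats this for every pixel and recursively spawns secondary rays for reflections and refractions; the per-ray cost is what made real-time use impractical before hardware acceleration.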

By rendering a scene more like the way the human eye works – focusing on where rays of light come from, what they interact with, and how they interact with those objects – raytracing can produce a far more accurate image overall. It enables buildings, fire, explosions, smoke, shadows, and lights to look much the same as people see them in the real world.

Why we need DirectX Raytracing
Nowadays, raytracing is commonly used in movie production. Since most computers are not fast enough and lack specialized hardware, most moviemakers can only render their images offline, frame by frame. This is called non-real-time raytracing. However, that approach will not work for video games, which generally require at least 60 frames per second to achieve smooth motion.

But things have changed since Nvidia released its 20 series graphics cards and Microsoft released DirectX Raytracing. The Nvidia 20 series graphics cards, for the first time in history, provide specialized hardware for real-time raytracing, which makes raytracing much faster than before. Accordingly, Microsoft released DirectX Raytracing, an extension to DirectX 12, to provide software support for this new hardware.

DirectX Raytracing API
Since the way of rendering is no longer rasterization but raytracing, the structure of DirectX needs to change as well.

Four New Concepts
DXR introduces four new concepts to the DirectX 12 API:


 * 1) Acceleration Structure: The acceleration structure is a representation of the objects in the 3D scene. It is optimized for ray traversal on the GPU and for updating dynamic objects.
 * 2) DispatchRays: DispatchRays is a new command list method. It is the starting point for tracing rays into the scene. The games will use this method to submit DXR workloads to the GPU.
 * 3) New HLSL shader types: DXR introduces a set of new HLSL shader types, including ray-generation, closest-hit, and miss shaders. When DispatchRays is called, the ray-generation shader starts to trace rays into the scene. Depending on where a ray goes in the scene, the game can assign each object its own set of shaders and textures, resulting in a unique material.
 * 4) Raytracing pipeline state: A graphics pipeline is the sequential flow of data inputs and outputs as the GPU renders frames. Raytracing pipeline state is similar to today’s Graphics and Compute pipeline state objects. It encapsulates the raytracing shaders and other state relevant to raytracing workloads.
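How the four concepts fit together can be sketched in plain Python. This is NOT the real DXR API (which is exposed through C++ and HLSL); every class and function name below is illustrative only.

```python
# Conceptual sketch of the four DXR concepts. All names are hypothetical;
# the real API lives in D3D12 (C++) and HLSL.

class AccelerationStructure:
    """Scene representation optimized for ray traversal (concept 1)."""
    def __init__(self, objects):
        self.objects = objects  # a real implementation would build a BVH

class RaytracingPipelineState:
    """Bundles the raytracing shaders and related state (concept 4)."""
    def __init__(self, ray_generation, closest_hit, miss):
        self.ray_generation = ray_generation  # new shader types (concept 3)
        self.closest_hit = closest_hit
        self.miss = miss

def dispatch_rays(pipeline, scene, width, height):
    """Entry point launching one ray-generation invocation per pixel (concept 2)."""
    return [[pipeline.ray_generation(scene, x, y, pipeline)
             for x in range(width)] for y in range(height)]

# A toy ray-generation shader: a "ray" through pixel (x, y) trivially
# "hits" any object stored at that coordinate, otherwise misses.
def ray_gen(scene, x, y, pipeline):
    for obj in scene.objects:
        if obj == (x, y):
            return pipeline.closest_hit(obj)
    return pipeline.miss()

scene = AccelerationStructure(objects=[(0, 0)])
pipeline = RaytracingPipelineState(ray_gen,
                                   closest_hit=lambda obj: "hit",
                                   miss=lambda: "miss")
image = dispatch_rays(pipeline, scene, 2, 2)
```

The shape of the flow matches DXR: the application builds a pipeline state from its shaders, then calls DispatchRays, and the hit/miss shaders determine each pixel's result.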

Comparison to DX12
DXR does not introduce new GPU engines to DX12; DXR workloads can run on any of DX12's existing engines. The primary reason for this is that, fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. The second reason is that Microsoft wants it to be adaptable, so that in addition to raytracing it can eventually be used to create Graphics and Compute pipeline states, as well as any future pipeline designs.

Step 1
The first step in rendering images using DXR is to build the acceleration structure. This structure is a two-level hierarchy. The bottom level is a set of geometries, mostly vertex and index buffers, representing distinct objects in the world. At the top level, the application specifies a list of references to those geometries, each with additional data such as a transformation matrix; these references can be updated every frame, which supports dynamic objects in games. Combining these two levels, DXR provides an efficient way to traverse multiple complex objects.
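The two-level hierarchy can be sketched as follows. The class names are illustrative stand-ins, not the actual DirectX types; the point is the shape of the data: geometry lives once at the bottom level, and the top level holds lightweight, per-frame-updatable instances of it.

```python
# Sketch of DXR's two-level acceleration structure hierarchy.
# Names are hypothetical; real DXR uses D3D12 buffer descriptions.

class BottomLevelAS:
    """Bottom level: the geometry itself (vertex and index buffers)."""
    def __init__(self, vertices, indices):
        self.vertices = vertices
        self.indices = indices

class Instance:
    """Top-level entry: a reference to bottom-level geometry plus
    per-instance data such as a transformation matrix."""
    def __init__(self, blas, transform):
        self.blas = blas
        # Updated every frame for dynamic objects; geometry is untouched.
        self.transform = transform

class TopLevelAS:
    """Top level: the list of instances the rays will traverse."""
    def __init__(self, instances):
        self.instances = instances

# One triangle mesh, instanced twice with different 3x4 transforms:
tri = BottomLevelAS(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                    indices=[0, 1, 2])
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shifted  = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0]]
scene = TopLevelAS([Instance(tri, identity), Instance(tri, shifted)])
```

Note that both instances share the same bottom-level geometry; moving an object only means rewriting its small top-level entry, not rebuilding its mesh.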

Step 2
The second step is using DXR to create the raytracing pipeline state. Nowadays, most games use a method called batching to achieve better rendering efficiency. This method divides objects into different categories and renders them group by group: for instance, rendering all metallic objects first, and then all plastic objects. However, batching loses its advantage in raytracing, since we cannot predict what material a ray will hit until it actually hits it. To solve this issue, the raytracing pipeline state allows specification of multiple sets of raytracing shaders and texture resources. This enables an application to specify, for example, that any ray intersections with object A should use shader P and texture X, while intersections with object B should use shader Q and texture Y. This allows applications to have ray intersections run the correct shader code with the correct textures for the materials they hit.
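The object-to-material association described above amounts to a lookup table consulted at hit time rather than at submission time. A minimal sketch, with hypothetical names mirroring the A/P/X and B/Q/Y example from the text:

```python
# Sketch of per-object shader/texture association, as the raytracing
# pipeline state allows. All names are illustrative, not DXR API types.

# One entry per object: which closest-hit shader and texture to run on a hit.
shader_table = {
    "object_A": {"shader": "shader_P", "texture": "texture_X"},
    "object_B": {"shader": "shader_Q", "texture": "texture_Y"},
}

def on_ray_hit(hit_object):
    """Look up the material for whatever object the ray actually hit;
    no up-front batching by material is needed."""
    entry = shader_table[hit_object]
    return entry["shader"], entry["texture"]
```

Because the lookup happens per intersection, rays hitting A and B in any order still run the right shader with the right texture, which is exactly why batching is unnecessary here.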

Step 3
The last step is using DXR to call DispatchRays, which invokes the ray-generation shader. In this shader, the application calls a method named TraceRay to initiate traversal of the acceleration structure, followed by execution of the appropriate hit or miss shader.
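The DispatchRays-to-TraceRay control flow can be sketched as follows. This is illustrative pseudocode in Python; in real DXR, TraceRay is an HLSL intrinsic called from shaders, and the acceleration structure traversal happens in hardware.

```python
# Sketch of the DispatchRays -> ray-generation -> TraceRay flow.
# All names are illustrative; the scene here is a trivially flat list
# rather than a real acceleration structure.

def trace_ray(scene, ray, closest_hit, miss):
    """Find intersections, then run the closest-hit shader on the nearest
    one, or the miss shader if nothing was hit."""
    hits = [obj for obj in scene if obj["intersects"](ray)]
    if not hits:
        return miss(ray)
    nearest = min(hits, key=lambda obj: obj["distance"](ray))
    return closest_hit(nearest, ray)

def dispatch_rays(width, height, ray_generation):
    """Launch the ray-generation shader once per pixel."""
    return [[ray_generation(x, y) for x in range(width)] for y in range(height)]

# One object covering the left half of a 4x1 "screen":
scene = [{
    "intersects": lambda ray: ray[0] < 2,
    "distance": lambda ray: 1.0,
}]

def ray_gen(x, y):
    return trace_ray(scene, (x, y),
                     closest_hit=lambda obj, ray: "hit",
                     miss=lambda ray: "miss")

image = dispatch_rays(4, 1, ray_gen)
```

The structure mirrors the three steps: the scene stands in for the acceleration structure from step 1, the shader arguments stand in for the pipeline state from step 2, and dispatch_rays drives the whole frame.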

Future Developments
Initially, DXR will simply be used to supplement current rendering techniques. In the next few years, there will likely be an increase in the use of DXR for techniques that are simply impractical for rasterization. Ultimately, raytracing may completely replace rasterization and become the next-generation standard for 3D rendering.

Significance
Raytracing has long been used in non-real-time rendering, such as movie production. Microsoft's DirectX Raytracing, along with the Nvidia 20 series graphics cards, brings real-time raytracing to market. This enables game developers to produce more realistic and authentic images in their games, and it may bring a revolution to the current video game market.