
I've kind of wondered about this a bit too, specifically the visual quality side of it. Especially in a context where you're actually playing a game, where you're not just sitting there staring at side-by-side still frames looking for minor differences.

What I have assumed given the trend, but could be completely wrong about, is that the raytraced version of the world might make it easier on the software & game dev side to get great visual results without the overhead of meticulously engineering, tuning, and composing different lighting systems, shader effects, etc.



For the vast majority of scenes in games, the best balance of performance and quality is precomputed visibility, lighting and reflections in static levels with hand-made model LoDs. The old Quake/Half-Life bsp/vis/rad combo. This is unwieldy for large streaming levels (e.g. open world games) and breaks down completely for highly dynamic scenes. You wouldn't want to build Minecraft in Source Engine[0].
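A rough sketch of why the baked approach is so cheap at runtime (illustrative names only, nothing from an actual engine): after the offline radiosity-style solve, per-pixel lighting collapses to a single lookup.

    // C++ sketch; the expensive solve (the "rad" step) ran offline.
    struct RGB { float r, g, b; };

    // Runtime cost of baked lighting: one lightmap fetch per sample.
    RGB shadeBaked(float u, float v, const RGB* lightmap, int w, int h) {
        int x = int(u * (w - 1));
        int y = int(v * (h - 1));
        return lightmap[y * w + x];  // radiance precomputed per texel
    }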

However, that's not what's driving raytracing.

The vast majority of game development is "content pipeline" - i.e. churning out lots of stuff - and engine and graphics tech is built around removing roadblocks to that content pipeline, rather than presenting the graphics card with an efficient set of draw commands. e.g. LoDs demand artists spend extra time building the same model multiple times; precomputed lighting demands the level designer wait longer between iterations. That goes against the content pipeline.

Raytracing is Nvidia promising game and engine developers that they can just forget about lighting and delegate that entirely to the GPU at run time, at the cost of running like garbage on anything that isn't Nvidia. It's entirely impractical[1] to fully raytrace a game at runtime, but that doesn't matter if people are paying $$$ for roided out space heater graphics cards just for slightly nicer lighting.

[0] That one scene in The Stanley Parable notwithstanding

[1] Unless you happen to have a game that takes place entirely in a hall of mirrors


Yep. I worked on the engine of a PS3/360 AAA game long ago. We spent a lot of time building a pipeline for precomputed lighting. But in the end the game was 95% fully dynamically lit.

For the artists, being able to wiggle lights around all over in real time was an immeasurable productivity boost over even just tens of seconds between baked lighting iterations. They had a selection of options at their fingertips and used dynamic lighting almost all the time.

But, that came with a lot of restrictions and limitations that make the game look dated by today’s standards.


I get the pitch that it is easier for the artists to design scenes with ray-tracing cards. But I don’t really see why we users need to buy them. Couldn’t the games be created on those fancy cards, and then bake the lighting right before going to retail?

(I mean, for games that are mostly static. I can definitely see why some games might want to be raytraced because they want some dynamic stuff, but that isn’t every game).


The player often carries a light, and the player is usually pretty dynamic.

One of the effects I really like is bounce lighting, especially with proper color. If I point my flashlight at a red wall, it should bathe the room in red light. It can be used to especially great effect in horror games.
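A tiny sketch of that effect (illustrative names, not any engine's API): the bounced light is just the incoming light filtered by the surface it hits, which a path tracer gets from its ordinary bounce step, while rasterizers need dedicated machinery (probes, screen-space GI) to fake it.

    struct RGB { float r, g, b; };

    // White flashlight (1,1,1) on a red wall (0.9,0.1,0.1) re-emits
    // red into the room.
    RGB bounceColor(RGB light, RGB wallAlbedo) {
        return { light.r * wallAlbedo.r,
                 light.g * wallAlbedo.g,
                 light.b * wallAlbedo.b };
    }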

I was playing Tokyo Xtreme Racer with ray tracing, and the car's headlights are light sources too (especially when you flash a rival to start a race). My red car will also bounce lighting on the walls in tunnels to make things red.

It doesn't even have to be super dynamic either. I can't think of a single game where opening a door to the sunlit outdoors changes the indirect lighting in a room (without ray tracing it), something I do every day in real life. It would be possible to bake that too, assuming your door only has two positions.


When path tracing works, it is a much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated hacks in current rasterization-based renderers that barely manage to capture crude approximations of the first indirect light bounces. Rasterization as a rendering model for realistic lighting has outlived its usefulness. It overstayed because optimizing ray-triangle intersection tests for path tracing in hardware is a hard problem that took some 15 to 20 years of research to even get to the first generation of RTX hardware.
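For a sense of what "vastly saner" means, here is a minimal sketch of the path tracing core loop; the scene query and sampling helpers are assumed stubs, not a real API.

    struct RGB  { float r, g, b; };
    RGB operator+(RGB a, RGB b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    RGB operator*(RGB a, RGB b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };
    struct Hit  { bool found; Vec3 pos, normal; RGB albedo, emission; };

    Hit  intersect(const Ray&);          // assumed: closest-hit query
    Vec3 sampleHemisphere(const Vec3&);  // assumed: cosine-weighted dir
    RGB  sky(const Ray&);                // assumed: environment light

    // The entire "global illumination" algorithm: emission plus
    // albedo-weighted incoming light, recursively. No shadow maps,
    // SSAO, reflection probes, or other special cases.
    RGB trace(const Ray& ray, int depth) {
        if (depth == 0) return {0, 0, 0};
        Hit h = intersect(ray);
        if (!h.found) return sky(ray);
        Ray bounce = { h.pos, sampleHemisphere(h.normal) };
        return h.emission + h.albedo * trace(bounce, depth - 1);
    }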


>When path tracing works, it is a much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated hacks in current rasterization-based renderers that barely manage to capture crude approximations of the first indirect light bounces.

It's ironic that you harp on the "hacks" used in rasterization, when raytracing is so computationally intensive that you need layers upon layers of performance hacks to get decent performance. The raytraced result needs to be denoised because not enough rays are used. The output of that needs to be upscaled (because you need to render at low resolution to get acceptable performance), and then on top of all of that you need to hallu^W extrapolate frames to hit high frame rates.
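That chain of hacks, in sketch form (stage names are illustrative stand-ins; in real engines the last two steps are things like DLSS/FSR). Each stage exists to paper over the shortfall of the previous one.

    struct Scene {};
    struct Image {};

    Image pathTraceLowRes(const Scene&, int spp); // too few rays, low res
    Image denoise(const Image&);     // hide the noise from low sample count
    Image upscale(const Image&);     // hide the reduced render resolution
    Image genFrame(const Image& prev, const Image& cur); // invented frames

    Image renderFrame(const Scene& s, const Image& prevFrame) {
        Image noisy = pathTraceLowRes(s, /*spp=*/1);
        Image clean = denoise(noisy);
        Image full  = upscale(clean);
        return genFrame(prevFrame, full); // extrapolated to hit frame rates
    }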


And you still need rasterization for ray traced games (even "fully" path traced games like Cyberpunk 2077) because the ray tracing sample count is too low to produce an acceptable image even after denoising. So the primary visibility rendering is done via rasterization (which captures all the fine texture and geometry detail without shading), and the ray traced (and denoised) shading is layered on top.

You can see the purely ray traced part in this image from the post: https://substack-post-media.s3.amazonaws.com/public/images/8...

This combination of techniques is actually pretty smart: Combine the powers of the rasterization and ray tracing algorithms to achieve the best quality/speed combination.
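In sketch form, the hybrid split looks something like this (illustrative names only): detail comes from the raster G-buffer, lighting from the denoised ray traced pass.

    struct Scene {};
    struct Image {};
    struct GBuffer { Image albedo, normal, depth; }; // full detail, unshaded

    GBuffer rasterize(const Scene&);         // assumed: crisp primary visibility
    Image   rayTraceShading(const GBuffer&); // assumed: low-sample lighting
    Image   denoise(const Image&);
    Image   modulate(const Image& albedo, const Image& light);

    Image renderHybrid(const Scene& s) {
        GBuffer g     = rasterize(s);
        Image   light = denoise(rayTraceShading(g));
        return modulate(g.albedo, light); // raster detail * ray traced light
    }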

The rendering implementation in software like Blender can afford to be simpler in comparison: it's not for real-time animation, so it doesn't use rasterization at all and can lean on high sample counts instead of aggressive denoising. That's why rendering even a simple scene takes seconds to converge in Blender but only milliseconds in modern games.


Not quite correct.

For primary visibility, you don't need more than one sample. It's just "send a ray from the camera, stop on the first hit, done". No Monte Carlo needed, no noise.
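In sketch form (helper names are assumptions): one deterministic ray per pixel, so there is no random sampling and nothing to denoise.

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };
    struct Hit  { bool found; Vec3 pos, normal; };

    Ray rayThroughPixel(int px, int py); // assumed: camera projection
    Hit closestHit(const Ray&);          // assumed: BVH closest-hit query

    // This is the whole "1 sample" primary visibility pass.
    Hit primaryVisibility(int px, int py) {
        return closestHit(rayThroughPixel(px, py));
    }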

On recent hardware, for some scenes, I've heard of primary visibility being faster to raytrace than rasterize.

The main reasons why games are currently using raster for primary visibility:

1. They already have a raster pipeline in their engine, have special geometry paths that only work in raster (e.g. Nanite), or want to support GPUs without any raytracing capability and need to ship a raster pipeline anyway, so they might as well just use raster for primary visibility.

2. Acceleration structure building and memory usage are a big, unsolved problem at the moment. Unlike with raster, there aren't established solutions like LODs, streaming, compression, or frustum/occlusion culling to keep memory and computation costs down. Not to mention that updating acceleration structures every time something moves or deforms is a really big cost. So games use low-resolution "proxy" meshes for raytraced lighting and their existing high-resolution meshes for rasterization of primary visibility. You can then apply your (relatively) low-quality lighting to your high-quality visibility and get a good overall image.

Nvidia's recent extensions and Blackwell hardware are changing the calculus, though. Their partitioned TLAS extension lowers the acceleration structure build cost when moving objects around; their BLAS extension allows for LOD/streaming solutions to keep memory usage down, as well as cheaper deformation for things like skinned meshes, since you don't have to rebuild the entire BLAS; and Blackwell has special compression for BLAS clusters to further reduce memory usage. I expect more games in the ~near future (remember, games take 4+ years to develop, and they have to account for people on low-end and older hardware) to move to raytracing primary visibility and ditch raster entirely.


Meanwhile, rasterization is fundamentally incapable of producing the same image.


As a sibling post already mentioned, rasterization-based hacks are incapable of producing lighting as accurate as path tracing can, given enough processing time.

I will admit that I was a bit sly in that I omitted the word "realtime" from the path tracing part of my claim on purpose. The amount of denoising that is currently required doesn't excite me either, from a theoretical purity standpoint. My sincere hope is that there is still a feasible path to a much higher ray count (maybe ~100x) and much less denoising.

But that is really the allure of path tracing: a basic implementation is at once much simpler and more principled than any rasterization-based approximation of global illumination can ever be.
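For reference, the principled core is the rendering equation (Kajiya, 1986), which a path tracer estimates directly by Monte Carlo sampling, while each rasterization technique approximates only one slice of it:

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, \mathrm{d}\omega_i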


This doesn't hold at all. Path tracing doesn't "just work"; on its own it is computationally infeasible. It needs acceleration structures, ray traversal scheduling, denoisers, upscalers, and a million other hacks to get anywhere close to real-time.


Except that it isn't like that at all. All you get from the driver in terms of ray tracing is the acceleration structure and ray traversal. Denoisers and upscalers are provided as third-party software. But games still ship with thousands of materials, and it is up to the developer to manage lights, shaders, etc., and to use the hardware and driver primitives intelligently to get the best bang for the buck. Plus, given that primary rays are a waste of time/compute, you're still stuck with G-buffer passes and rasterization anyway. So now you have two problems instead of one.



