

It’s clear that several of the people in charge of the YouTube livestream have no idea how to do it correctly. I think the difference is just effort. Viewership was tiny compared to Apollo 11, as was the hype leading up to it. It’s clear that NASA could provide much better footage if even a random youtuber (Everyday Astronaut) can beat them. So that aspect is, as you said, because as a society we don’t really care about the Artemis launch. SpaceX puts a fair amount of effort into their livestreams, and you can easily tell by watching them.
For the recorded footage, film often has a much higher dynamic range than digital cameras and usually looks a whole lot better when recording a launch up close.
Far shots are limited by atmospheric distortion and by the physical diffraction limit for a given aperture size. None of that can change.
IDK anything about the quality of the original live broadcast of Apollo 11, so I don’t have anything to compare in that regard.




As an amateur computer graphics person, the best way to draw accurate stars is just to pre-render them onto a cubemap. But if you really need that subpixel’s worth of parallax to be completely accurate for every star, there are a couple of ways I can think of off the top of my head. With any of them you’d want to make sure you only store position, size, and color, since stars are all spheres anyway. With effort, you can be very flexible with how these are stored (4 bits color temperature, 4 bits size, 3×32 bits coordinates, maybe).
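Just to make that packing concrete, here’s a sketch in Python using the field widths guessed above (a real engine would do this in a typed GPU buffer, and the indices into temperature/size tables are made up):

```python
import struct

def pack_star(x, y, z, temp_index, size_index):
    """Pack one star into 13 bytes: 3x float32 position plus one byte
    holding a 4-bit color-temperature index and a 4-bit size index."""
    assert 0 <= temp_index < 16 and 0 <= size_index < 16
    attrs = (temp_index << 4) | size_index
    return struct.pack("<fffB", x, y, z, attrs)

def unpack_star(blob):
    x, y, z, attrs = struct.unpack("<fffB", blob)
    return x, y, z, attrs >> 4, attrs & 0xF

blob = pack_star(1.0, -2.5, 0.25, temp_index=7, size_index=3)
print(unpack_star(blob))  # (1.0, -2.5, 0.25, 7, 3)
```

The 4-bit fields would index into small lookup tables of actual temperatures and radii.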
Worse ideas:
This is not well suited to most usual rendering techniques, because most stars are going to be much smaller than a pixel. Ray tracing would mean you need to hit every star by chance (or artificially increase star size and then deal with tons of transparency); hardware rasterization is basically the same, and is additionally inefficient with small triangles. I guess you could just live with only hitting stars by chance and throw TAA at it; there are enough stars that it doesn’t matter if you miss some. That would react badly to parallax, though, and defeats the purpose of rendering every star in the first place.
It’s much more efficient to do a manual splatting thing, where for each star you work out which pixel(s) it lands in. You can also group stars together to cull out-of-view stars more efficiently. Subpixel occlusion will be wrong, but it probably doesn’t matter.
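A toy version of that splatting loop might look like this (simple pinhole camera looking down −z, additive accumulation so subpixel occlusion is just ignored; all names and parameters here are my own, and a real renderer would vectorize this or run it on the GPU):

```python
import numpy as np

def splat_stars(positions, luminosities, width, height, fov_y=1.0):
    """Additively splat point stars into an intensity image.

    positions: (N, 3) star positions in camera space (camera looks down -z).
    luminosities: (N,) per-star brightness.
    Occlusion between stars is ignored; they almost never share a pixel.
    """
    img = np.zeros((height, width), dtype=np.float64)
    f = (height / 2) / np.tan(fov_y / 2)  # focal length in pixels
    for (x, y, z), lum in zip(positions, luminosities):
        if z >= 0:  # behind the camera, cull
            continue
        px = int(width / 2 + f * x / -z)
        py = int(height / 2 - f * y / -z)
        if 0 <= px < width and 0 <= py < height:
            # brightness falls off with distance squared
            img[py, px] += lum / (x * x + y * y + z * z)
    return img

# one star straight ahead, 10 units away
img = splat_stars(np.array([[0.0, 0.0, -10.0]]), np.array([100.0]), 64, 64)
```

The per-group culling mentioned above would wrap this loop: test a bounding cone of each star cluster against the view frustum before iterating its stars.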
This is all just for the viewport, though. Presumably there are other objects in the game besides stars, which need to have reflections on them of the stars. Then that becomes an entirely different problem.
The real answer, though, is that you wouldn’t try to render all of the stars, even if you want parallax. Maybe render some of the closer and larger ones as actual geometry, simplify a ton of stuff in the background, render things as volumes or 2D billboards, have a cubemap for the far distance, etc.
Edit: also, ofc, this presumes you know the position, scale, and temperature of every star.
I also like the idea of baking all of the stars into a volume in spherical coordinates, centered around the origin.
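A rough sketch of what that baking could look like, with bin layout and cell contents entirely my own assumptions (here each cell just accumulates total luminosity over a polar/azimuth/radius grid):

```python
import numpy as np

def bake_star_volume(positions, luminosities,
                     n_theta=64, n_phi=128, n_r=8, r_max=1000.0):
    """Bake stars into a spherical-coordinate volume centered on the origin.

    Axes: radial distance r, polar angle theta in [0, pi],
    azimuth phi in [0, 2*pi). Each cell sums the luminosity that lands in it.
    """
    vol = np.zeros((n_r, n_theta, n_phi), dtype=np.float64)
    for (x, y, z), lum in zip(positions, luminosities):
        r = np.sqrt(x * x + y * y + z * z)
        theta = np.arccos(np.clip(z / r, -1.0, 1.0))
        phi = np.arctan2(y, x) % (2 * np.pi)
        ir = min(int(n_r * r / r_max), n_r - 1)
        it = min(int(n_theta * theta / np.pi), n_theta - 1)
        ip = min(int(n_phi * phi / (2 * np.pi)), n_phi - 1)
        vol[ir, it, ip] += lum
    return vol

# one star on the +z axis, 5 units out
vol = bake_star_volume(np.array([[0.0, 0.0, 5.0]]), np.array([2.0]))
```

The nice part is that camera rotation around the origin only changes which cells you sample, and the radial axis gives you the parallax bins for free.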