This essay was contributed by Bhavesh Upadhyaya, an Emmy Award-winning independent consultant and SVTA member. When not geeking out on NASA news, he helps companies with product and operations strategy for streaming media and AI.
The next moon landing may become the largest live streaming event in internet history, and the streaming industry is not yet fully organized around what that means.
That was my biggest takeaway from NAB, after listening to a session opened by NAB’s John Clark with NASA’s Kevin Coggins, Rebecca Sirmons, and Lee Erickson. The conversation had the emotional force you would expect from a discussion about human spaceflight, but what stayed with me was how practical it became. Artemis is a story about exploration, but it is also a story about communications, video, latency, certification, bandwidth, distribution, and the work required to let the world experience spaceflight in real time.
That is where our industry comes in.
The audience is already there
Rebecca shared numbers from Artemis that should make everyone in streaming pay attention. NASA saw roughly 19.1 million viewers on broadcast, while streaming partners collectively reached around 50 million. She noted that YouTube alone drew approximately 47 million views across a 10-day live broadcast.
Third-party reporting points in the same direction. The Planetary Society reported that more than 18 million people watched the Artemis II launch across major U.S. cable networks, while splashdown drew 27 million on those same channels. It compared that audience to Game 7 of the World Series and noted that Artemis II reached “Taylor Swift levels” of public awareness, with polling suggesting roughly 70–80% of Americans had at least some awareness of the mission while it was happening.
Those numbers matter because Artemis II was the preview.
The next lunar landing will be distributed across broadcast, NASA+, YouTube, streaming platforms, connected TVs, social platforms, FAST channels, news outlets, apps, and viewing experiences that may not exist yet. Demand will be fragmented, global, emotional, and simultaneous.
That scale should sound familiar to anyone who has worked on large live events. The difference is that the origin camera is on or near the Moon.
Communication is the mission
One of the most striking parts of the panel came from Kevin Coggins, who described NASA’s shift from traditional RF communications to optical communications. In plain English: lasers.
Streaming people are used to thinking about bandwidth as a scaling problem. Add capacity. Add CDN footprint. Add cloud resources. Tune the ABR ladder. In space, bandwidth starts as a physics problem.
Kevin described the precision required to hold an optical link between a spacecraft moving thousands of miles per hour and a receiving point on Earth. The spacecraft is moving. Earth is rotating. The atmosphere is shifting. Clouds interfere. The link depends on extraordinary pointing accuracy across a system that is constantly in motion.
And yet, during Artemis testing, optical communications worked well enough that the difference was visible. When the optical link came on, the video got noticeably better.
That success changes the trajectory. Optical communications become foundational to the future media path from deep space. Higher bandwidth makes better video possible: higher resolution, higher fidelity, better frame rates, more camera angles, and eventually immersive experiences. It also raises expectations. Once the audience sees what is possible, grainy “good enough” mission video becomes harder to justify.
Kevin also made clear that optical introduces its own realities. Clouds still matter. Atmosphere still matters. Ground stations still matter. He described optical operating around 1550 nanometers and the challenge of getting through the atmosphere. The long-term answer may include optical links from lunar systems to Earth-orbiting constellations, then down through more conventional terrestrial paths. That is where space communications and modern distribution architecture begin to meet.
Space hardware is a hostile production environment
Lee and Rebecca grounded the conversation in the reality of making media technology survive space.
In our world, “production-grade” usually means reliable under operational pressure: a live event, a bad network path, a flaky encoder, a CDN incident, a player bug, or a control-room failure. NASA has to deal with all of that, plus launch forces, radiation, thermal extremes, vacuum conditions, oxygen constraints, human safety requirements, and failure scenarios where nobody can walk into a rack room and swap a box.
Rebecca and Lee talked about what that means for encoders and video equipment. Devices may be dropped to simulate G-forces. They may be exposed to radiation-like conditions. They may be thermally stressed. Every component has to be evaluated for environments where failure may be unrecoverable.
That reality creates one of the hardest gaps between NASA and industry. The cameras and related systems used on Artemis were based on technology that was effectively a decade old. NASA did not arrive there because it prefers old technology. It got there because requirements, testing, certification, and mission assurance take time.
The streaming industry reinvents itself every year.
That gap is one of the most important diagnoses from the panel. NASA needs modular pathways that allow modern encoders, codecs, cameras, and related systems to be evaluated and upgraded without forcing the entire media stack back through a decade-long cycle.
This is an architecture problem as much as a hardware problem. The industry can help define systems where components are isolated, tested, certified, swapped, and improved while preserving mission assurance.
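As a purely hypothetical sketch of that pattern (my illustration, not a NASA design): freeze the contract a component must satisfy, certify implementations against that contract, and swap them without re-qualifying everything downstream.

```python
from abc import ABC, abstractmethod

class VideoEncoder(ABC):
    """A frozen contract. Certification targets this interface,
    not any single vendor's hardware or firmware."""

    @abstractmethod
    def encode_frame(self, raw: bytes) -> bytes:
        """Compress one captured frame for downlink."""

class EncoderSlot:
    """The rest of the media stack talks only to the slot, so a
    newer encoder can replace an older one in isolation."""

    def __init__(self, encoder: VideoEncoder) -> None:
        self._encoder = encoder

    def swap(self, replacement: VideoEncoder) -> None:
        # Only the replacement goes back through qualification,
        # because the contract it must meet has not changed.
        self._encoder = replacement

    def process(self, raw: bytes) -> bytes:
        return self._encoder.encode_frame(raw)
```

The point of the design is where the certification boundary sits: at the interface, not at the whole stack.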
Bandwidth is a prioritization problem
Another detail from the session stayed with me: even when NASA has a link, every bit has a job.
Kevin referenced a 6 Mbps RF Deep Space Network data rate to Orion. For streaming professionals, that number immediately triggers practical questions: resolution, frame rate, codec efficiency, error resilience, latency, quality ladders, and compression tradeoffs. But on a spacecraft, video shares the path with mission-critical telemetry, spacecraft health, astronaut safety data, and operational communications.
Video quality does not win every argument. Sometimes it should not.
During Artemis, that meant practical tradeoffs: lowering frame rate, changing compression strategy, and allocating bandwidth around higher-priority mission data. Where a streaming engineer sees an image-quality challenge, a mission operator sees a safety and assurance problem, with video as just one of several payloads competing for constrained transport.
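As a back-of-the-envelope illustration of that prioritization (the payload names and reservations below are my assumptions, not NASA's actual allocation), the mission-critical streams get reserved first and video absorbs whatever remains:

```python
LINK_KBPS = 6_000  # the 6 Mbps DSN rate Kevin referenced

# Hypothetical reservations; real mission allocations will differ.
RESERVED_KBPS = {
    "crew_safety_data": 1_000,
    "spacecraft_health_telemetry": 1_500,
    "operational_voice_and_commanding": 500,
}

def video_budget(link_kbps: int, reserved: dict[str, int]) -> int:
    """Video gets whatever is left after higher-priority payloads."""
    return max(link_kbps - sum(reserved.values()), 0)

print(f"video budget: {video_budget(LINK_KBPS, RESERVED_KBPS)} kbps")
# -> 3000 kbps, which is why frame rate and compression strategy
#    become the knobs you actually get to turn.
```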
The hard requirement is to make both worlds coexist.
If future lunar missions carry more cameras, more live feeds, more immersive systems, and more public-facing video, the prioritization layer becomes increasingly important. The question is no longer only how to produce the best image. It is how to produce the best image inside a system where astronaut safety, spacecraft operations, latency, resiliency, and public experience all share the same constrained path.
Latency changes the product
The Moon is close enough that live interaction feels possible, but physics still applies. Mars is a different category altogether. Kevin noted that Mars communications can involve delays measured in minutes, depending on orbital position. Voyager pushes the idea even further, with signal travel times measured in many hours.
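The floor on those delays is set by geometry alone. A quick calculation (distances are approximate averages and extremes) shows why a lunar stream can still feel live while a Mars feed cannot:

```python
C_KM_PER_S = 299_792.458  # speed of light

DISTANCES_KM = {
    "Moon (average)": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (farthest)": 401_000_000,
    "Voyager 1 (~165 AU)": 165 * 149_597_870,
}

for body, km in DISTANCES_KM.items():
    print(f"{body}: {km / C_KM_PER_S:,.1f} s one-way")
# Moon: ~1.3 s. Mars: ~3 to ~22 minutes. Voyager 1: ~23 hours.
# And any "live" interaction pays the delay twice, there and back.
```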
That matters because latency changes the media experience.
A lunar stream can still feel live. A Mars experience will need different expectations, different workflows, and likely a different user experience. Troubleshooting, control, interactivity, live commentary, social participation, and mission operations all change when the delay becomes minutes instead of seconds.
This is where the streaming industry’s experience with live sports, remote production, low-latency streaming, multi-CDN architectures, and cloud production becomes relevant, but only as a starting point. Deep-space media will require new patterns for delay-aware storytelling, autonomous capture, store-and-forward publishing, asynchronous interaction, and quality-of-experience measurement.
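Store-and-forward is the simplest of those patterns to picture. A minimal sketch of the idea, not flight software: buffer captured segments while the link is down, then flush them in capture order once it returns.

```python
from collections import deque

class StoreAndForward:
    """Hold segments on board during an outage; drain the backlog
    in order when the link comes back."""

    def __init__(self) -> None:
        self._pending: deque[bytes] = deque()

    def capture(self, segment: bytes, link_up: bool) -> list[bytes]:
        """Queue a new segment; return whatever can be transmitted now."""
        self._pending.append(segment)
        if not link_up:
            return []          # nothing leaves the spacecraft yet
        ready = list(self._pending)
        self._pending.clear()  # link is up: send the backlog in order
        return ready

buffer = StoreAndForward()
buffer.capture(b"segment-001", link_up=False)        # outage: held on board
print(buffer.capture(b"segment-002", link_up=True))  # both flush, in order
```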
The Moon is the proving ground.
From mission broadcast to continuous presence
The biggest structural shift is NASA’s move from episodic mission communications to continuous lunar presence.
Rebecca described NASA+ as a kind of “Field of Dreams”: build the place where the story can live, make the data accessible, and let audiences and partners come to it. Kevin described a future where cameras are everywhere: on landers, rovers, relay satellites, astronaut-following systems, and eventually lunar infrastructure. Lee talked about 360 video, immersive experiences, and the creative possibilities of opening NASA’s media assets to partners who can build new experiences on top of them.
That is the moment the story turns from broadcast to platform.
A future lunar base becomes a persistent media environment. There may be live feeds, VOD archives, multi-angle views, AI-indexed moments, immersive streams, educational experiences, accessibility workflows, and public datasets that can be remixed in ways NASA will not invent alone.
The archive itself becomes part of the mission. Continuous lunar media will need indexing, search, provenance, rights metadata, descriptive metadata, accessibility metadata, and tooling that lets educators, broadcasters, scientists, journalists, creators, and platforms find the moments that matter.
That ecosystem has to be designed.
The actual gaps industry should help solve
The call to action should be specific.
First, NASA and industry need modular certification pathways for media technology. If every camera, encoder, or codec improvement is trapped behind a multi-year validation cycle, the media stack will always trail the commercial world. The industry can help define architectures where components can be tested, isolated, swapped, and upgraded without compromising mission assurance.
Second, we need shared distribution choreography for an event that could draw hundreds of millions of simultaneous viewers across fragmented platforms. This involves origin strategy, redundancy, partner access, stream packaging, live operations, traffic forecasting, failover, observability, CDN coordination, content protection where appropriate, accessibility workflows, and clear rules for who gets what signal under what conditions.
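A toy sketch of one small piece of that choreography, weighted CDN steering with health-based failover (the CDN names and weights are invented for illustration):

```python
import random

CDNS = [
    {"name": "cdn-a", "weight": 0.5, "healthy": True},
    {"name": "cdn-b", "weight": 0.3, "healthy": True},
    {"name": "cdn-c", "weight": 0.2, "healthy": False},  # failed health check
]

def pick_cdn(cdns: list[dict]) -> str:
    """Steer a session to a healthy CDN in proportion to its capacity share."""
    healthy = [c for c in cdns if c["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy CDN: fall back to origin-direct delivery")
    r = random.uniform(0, sum(c["weight"] for c in healthy))
    for cdn in healthy:
        r -= cdn["weight"]
        if r <= 0:
            return cdn["name"]
    return healthy[-1]["name"]

print(pick_cdn(CDNS))  # traffic splits roughly 62/38 between cdn-a and cdn-b
```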
Third, we need standards and practices for AI indexing and metadata around always-on lunar media. If NASA is sending back continuous video from landers, rovers, relay satellites, and astronauts, AI can help identify events, objects, anomalies, educational moments, accessibility cues, and reusable clips. That only works if the media, metadata, provenance, rights, and access patterns are designed intentionally.
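For the metadata piece, a record for an AI-indexed moment might carry fields like these. The schema is a sketch of my own, not a NASA standard:

```python
from dataclasses import dataclass, field

@dataclass
class IndexedMoment:
    source_feed: str   # e.g. "lander-cam-02"
    start_utc: str     # ISO 8601 capture time
    duration_s: float
    labels: list[str] = field(default_factory=list)  # detected events, objects, anomalies
    provenance: str = ""              # capture -> downlink -> processing chain
    rights: str = ""                  # reuse terms for broadcasters, educators, creators
    captions_available: bool = False  # accessibility metadata travels with the clip
    model_confidence: float = 0.0     # lets humans triage low-confidence tags

# Hypothetical usage: the values here are invented for illustration.
moment = IndexedMoment("lander-cam-02", "2031-03-14T09:26:53Z", 42.0, ["EVA start"])
```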
Fourth, we need a shared operating model for public-private contribution. NASA can build the systems that only NASA can build. It can get the spacecraft there. It can operate the mission. It can manage the risks of human spaceflight. The industry around NAB knows how to move live video at scale, recover from failures, distribute across platforms, build media workflows, measure QoE, and turn raw signals into experiences people can actually use.
Lee said it clearly: no idea is too crazy.
Kevin framed it another way: NASA wants to enable others to plug in.
That is the door opening.
The last moon landing was a broadcast. The next one will be a global streaming event. The one after that may be a persistent media platform from another world.
The camera is no longer just documenting the mission.
It is part of the mission.
Personal note: I’m grateful to the Streaming Video Technology Alliance for being at the forefront of this conversation and for giving me the opportunity to help serve as a liaison to NASA. I’m also grateful to Megan Cruz, Nilufar Ramji, Sarah Volkman, and the broader NASA and industry teams doing the hard work of connecting space exploration with real-world media delivery.