How Disney Hotstar (now JioHotstar) Scaled Its Infra for 60 Million Concurrent Users
Disclaimer: The details in this post have been derived from the details shared online by the Disney+ Hotstar (now JioHotstar) Engineering Team. All credit for the technical details goes to the Disney+ Hotstar (now JioHotstar) Engineering Team. The links to the original articles and sources are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
In 2023, Disney+ Hotstar faced one of the most ambitious engineering challenges in the history of online streaming: supporting 50 to 60 million concurrent live streams during the Asia Cup and the Cricket World Cup, events that attract some of the largest online audiences in the world. For perspective, Hotstar had previously peaked at about 25 million concurrent users, served from two self-managed Kubernetes clusters.
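To give a sense of what a jump from 25 to 60 million concurrent viewers means in raw bandwidth, here is a back-of-envelope sketch. The bitrate and cache-hit numbers are illustrative assumptions, not figures published by Hotstar, but they show why serving traffic from CDN edges (rather than origin capacity) dominates the design:

```python
# Back-of-envelope egress estimate for a large live-streaming event.
# All input numbers below are illustrative assumptions, not Hotstar figures.

def egress_tbps(concurrent_viewers: int, avg_bitrate_mbps: float) -> float:
    """Total egress bandwidth in terabits per second."""
    return concurrent_viewers * avg_bitrate_mbps / 1_000_000

# Assume ~3 Mbps average bitrate for a mobile-heavy audience.
peak = egress_tbps(60_000_000, 3.0)

# Assume 99% of video-segment requests are served from CDN edge caches,
# so only 1% of the load ever reaches origin infrastructure.
origin = peak * (1 - 0.99)

print(f"Peak egress: {peak:.0f} Tbps")                      # 180 Tbps
print(f"Origin load at 99% CDN hit rate: {origin:.1f} Tbps")  # 1.8 Tbps
```

Even with these rough assumptions, the peak is on the order of hundreds of terabits per second, which no origin fleet could serve directly. That arithmetic is why the CDN layer ends up absorbing not just video delivery but API gateway, caching, and rate-limiting duties as well.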
To make things even more challenging, the company introduced a “Free on Mobile” initiative, which allowed millions of users to stream live matches without a subscription. This move significantly expanded the expected load on the platform, creating the need to rethink its infrastructure completely.
Hotstar’s engineers knew that simply adding more servers would not be enough. The platform’s architecture needed to evolve to handle higher traffic while maintaining reliability, speed, and efficiency. This led to the migration to a new “X architecture,” a server-driven design that emphasized flexibility, scalability, and cost-effectiveness at a global scale.
The journey that followed involved a series of deep technical overhauls. From redesigning the network and API gateways to migrating to managed Kubernetes (EKS) and introducing an innovative concept called “Data Center Abstraction,” Hotstar’s engineering teams tackled multiple layers of complexity. Each step focused on ensuring that millions of cricket fans could enjoy uninterrupted live streams, no matter how many viewers tuned in at once.
...This excerpt is provided for preview purposes. Full article content is available on the original publication.
