Case Study: How Hulu Scaled Serving 4 Billion Videos Using Redis

November 26, 2013 Adam Bloom

Today, Hulu has over 4 million paying Hulu Plus subscribers and approximately 30 million monthly unique viewers on the free, ad-supported Hulu service. In 2013, Hulu viewers streamed more than 1 billion content videos in each of Q1, Q2, and Q3, and are on pace to stream another billion in Q4. Hulu viewers stayed for 50 minutes per session in Q3 and watch at least 3.5 hours of TV monthly.

In this post, we have the pleasure of interviewing Andres Rangel, Senior Software Engineer at Hulu. Hulu uses Redis as a data structure cache to store close to 4 billion records and respond to around 7,000 queries per second at peak. Below, Andres provides background on Hulu’s interest in Redis, outlines the problems Hulu needed to solve, and gives us insight into how Hulu approaches its Redis architecture.

Q1. What do you work on at Hulu?
A1. At Hulu, I work on creating and maintaining backend applications to provide all the services needed by the website as well as mobile and tablet applications. As a software engineer on this team, I was tasked with redesigning the viewed history tracking system. How viewed history works: every time a user watches a video on Hulu, we send information from the player to keep track of the video and the viewing position or timeframe. When the video application is closed, the stored information allows the user to resume the video where they left off. We also use the information to recommend what videos to watch next. We are able to personalize recommendations for each individual user based on their preferences and viewing history. This makes for a superior, tailored Hulu experience. For example, if a user just watched episode 3 of last season’s South Park, we will recommend episode 4.

Q2. What challenges was Hulu facing that made Redis the right choice?
A2. The viewed history tracking system was originally designed as a Python application with Memcached for reads on top of a sharded MySQL database for writes. This architecture allowed us to effectively provide the service to our users as we were first building out the system. As our Hulu Plus subscriber count began to grow, we needed the viewed history tracking system to keep up with that growth. We found that the only way to scale MySQL was to add more shards. Additionally, Memcached couldn’t be replicated to other data centers, so traffic from the East Coast was consistently being sent cross-country: we had our load balancer redirect all East Coast traffic to the West Coast. As we continued to grow, the architecture wasn’t holding up at peak times.

Given the above challenges, we felt this was a good opportunity to try out a NoSQL database as a strategy to boost performance. As we analyzed the problems above, we had four overarching requirements that led us to Redis:

  1. Speed: The ability for fast writing and retrieval of information.
  2. Replication: Replication of data across datacenters.
  3. Scale: The ability to handle at least 10,000 queries per second with low latency.
  4. High Availability: If one Memcached node went down in the viewed history tracking architecture, we lost 1/n of the cached data. So, we wanted to avoid this problem in the future.

Q3. How did Redis help you solve problems and what did you discover along the way?
A3. After looking at a variety of alternatives like MongoDB, Riak, and LevelDB, we decided to go with Redis. We chose Redis for several key reasons: it was simple to set up, had great documentation, offered replication, and allowed us to use data structures. Data structures are extremely powerful and allow us to architect solutions to many use cases very efficiently. For example, depending on the operation, we need to query either a specific video a user watched or all of them. With Redis, this was easy using hashes. By setting the key to the user_id and the hash field to the video_id, you can use HSET/HGET(user_id, video_id) for a single video, or HGETALL(user_id) if you want to retrieve all of them.
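
To make the hash layout concrete, here is a minimal sketch using the redis-py client; the key names, port, and playback position are illustrative, not Hulu’s actual schema.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    user_key = "user:12345"   # one hash per user
    video_id = "video:67890"  # one field per watched video

    # Record the playback position (in seconds) for one video.
    r.hset(user_key, video_id, 1425)

    # Fetch the position for that single video...
    position = r.hget(user_key, video_id)

    # ...or everything the user has watched, in one call.
    all_positions = r.hgetall(user_key)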

To meet all of our requirements, there were a couple of minor areas that needed additional development. For one, our data consisted mainly of user_id, video_id, and a video position. Since we only query by user and never for multiple users at the same time, we knew we could easily shard on user_id. So, we scale Redis by sharding the data, and the intelligence about shards lives in the application logic. Also, Redis didn’t yet have the Sentinel implementation for monitoring and automatic failover when we started redesigning Bigtop in 2012, so we created our own sentinel mechanism to support high availability.
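
As a rough illustration of that client-side sharding, the sketch below picks a shard from the user_id with a stable hash; the shard count, hostnames, and helper names are placeholders rather than Hulu’s implementation.

    import zlib
    import redis

    NUM_SHARDS = 64  # Hulu pre-shards into 64 instances (see Q4 below)
    shards = [redis.Redis(host="redis-shard-%d.example.com" % i)
              for i in range(NUM_SHARDS)]

    def shard_for(user_id):
        # A stable hash (not Python's randomized hash()) keeps every key
        # for a given user on the same shard across processes and restarts.
        return shards[zlib.crc32(str(user_id).encode()) % NUM_SHARDS]

    def viewed_history(user_id):
        # Single-user queries only ever touch one instance.
        return shard_for(user_id).hgetall("user:%s" % user_id)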

One of the lessons learned along the way had to do with high availability. We started with client code on top of Redis-Py and used a load balancer with a VIP for the master server and a VIP for slave servers (where slaves serve all reads). Unexpectedly, the load balancer was failing over incorrectly on a regular basis. After a good deal of troubleshooting, we decided to redesign the solution. We used Zookeeper to keep the state of the cluster, and moved the load balancer code to the clients. There is now a set of sentinels that monitor Redis instances and, if an instance goes down, the sentinels make the failover updates within Zookeeper. Clients listen to events from Zookeeper to update their internal representations. The writes are only sent to the masters, and the reads are balanced on all available slaves for the specified shard.
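
Here is a hedged sketch of that routing logic using the kazoo Zookeeper client; the znode path, its JSON layout, and the hostnames are assumptions for illustration only, not Hulu’s actual implementation.

    import json
    import random

    import redis
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    zk.start()

    cluster = {}  # shard_id -> {"master": "host:port", "slaves": ["host:port", ...]}

    @zk.DataWatch("/redis/cluster-state")
    def refresh(data, stat):
        # The sentinels rewrite this znode after a failover; every client
        # gets the event and updates its in-memory view of the cluster.
        if data:
            cluster.clear()
            cluster.update(json.loads(data.decode()))

    def connect(addr):
        host, port = addr.split(":")
        return redis.Redis(host=host, port=int(port))

    def master_for(shard_id):
        # Writes go only to the shard's master.
        return connect(cluster[str(shard_id)]["master"])

    def slave_for(shard_id):
        # Reads are balanced across the shard's available slaves.
        return connect(random.choice(cluster[str(shard_id)]["slaves"]))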

Q4. What other interesting insights can you give us about your Redis architecture?
A4. Besides the fact that there are around 4 billion records stored in Redis and around 7,000 queries per second at peak, there are some other elements of our Redis architecture that people should find interesting.

  • Once Redis 2.6 came out, it gave us Lua scripting. We use Lua extensively in our application. For example, we created a lookup table to store a mapping of show_ids to video_ids so we can query by show_id instead of by video_id. With a Lua script, we can query for videos of a given show and then query the user’s videos from the user hash in one trip to the server.
  • When we needed to sort the viewed videos by updated date for some new queries, we used a ZSET where the score is the updated date and the member is the video_id. We created a Lua script that uses the ZSET to obtain a given user’s video_ids in order, maps in the position data from the user’s hash, and returns everything in one trip to the server (see the first sketch after this list).
  • For performance considerations, we decided to pre-shard the system into 64 instances. We replicate each master shard to a list of slaves in the same datacenter and to a list of slaves in the second datacenter. This way, applications in the other datacenter read locally from the Redis slaves and achieve greater performance. The result was that the 75th-percentile latency for reads coming from the East Coast dropped from 120 ms to less than 15 ms, and the 90th percentile went from 300 ms to around 25 ms.
  • From a durability perspective, we decided to use Cassandra as the persistent data store where all writes are made. Data is then loaded on demand from Cassandra into Redis. The first time a request comes in for a user, the system creates a job to load all the videos for this user into Redis. Once this is done, the system sets a flag. The next time a request comes in for this user, the flag is checked; if it is set, the system returns whatever it has from Redis without hitting Cassandra. This way access to the database is greatly reduced, and we don’t have to keep every record in Redis. For additional performance improvements, we also built in intelligent querying: active users get low-latency reads because their data is loaded into Redis, where queries are faster than Cassandra by a huge margin, and we leave Cassandra for batch reports where latency is not important (a simplified sketch of this pattern follows the list).
  • Since Redis usually can only use one CPU core, and our boxes have 24 CPUs, we run 16 shards per box, trying to achieve a one-to-one CPU ratio to maximize our hardware utilization. However, when BGSAVE is enabled, Redis forks a new process and may even use double the RAM. To keep performance consistent on the shards, we disabled writing to disk across all the shards, and we have a cron job that runs at 4am every day doing a rolling BGSAVE on each individual instance (see the operational sketch after this list). We also use this RDB file for analyzing the Redis data without affecting the performance of the application.
  • We do a lot of monitoring across all the Redis shards. Specifically, we monitor the number of keys on each instance and the size of the database to know when it is necessary to rebalance or expand the shards. Redis’s INFO command gives us all the information we need.
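
To make the two Lua bullets above concrete, here is a hedged sketch of the one-round-trip query: a per-user ZSET orders video_ids by updated date, and a per-user hash maps each video_id to its position. The key names and the script itself are illustrative, not Hulu’s actual code.

    import redis

    r = redis.Redis()

    RECENTLY_WATCHED = """
    -- KEYS[1] = the user's ZSET (video_id scored by updated date)
    -- KEYS[2] = the user's hash (video_id -> playback position)
    -- ARGV[1] = how many recent videos to return
    local ids = redis.call('ZREVRANGE', KEYS[1], 0, ARGV[1] - 1)
    local rows = {}
    for i, video_id in ipairs(ids) do
        rows[i] = {video_id, redis.call('HGET', KEYS[2], video_id)}
    end
    return rows
    """

    recently_watched = r.register_script(RECENTLY_WATCHED)

    # The ten most recently watched videos, with positions, in one server trip.
    rows = recently_watched(keys=["user:12345:by_date", "user:12345"], args=[10])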

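Below is a simplified, hedged sketch of the load-on-demand flag pattern described above (done inline here rather than as a background job); the flag key, user key, and fetch_from_cassandra() helper are hypothetical names.

    import redis

    r = redis.Redis()

    def fetch_from_cassandra(user_id):
        # Placeholder for the real Cassandra read that returns
        # {video_id: position, ...} for this user.
        raise NotImplementedError

    def get_viewed_history(user_id):
        user_key = "user:%s" % user_id
        if r.get("loaded:%s" % user_id):
            # Flag is set: answer from Redis without touching Cassandra.
            return r.hgetall(user_key)

        # First request for this user: pull their history into Redis,
        # then set the flag so subsequent reads stay in Redis.
        history = fetch_from_cassandra(user_id)
        if history:
            r.hset(user_key, mapping=history)
        r.set("loaded:%s" % user_id, 1)
        return history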
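
Finally, a hedged sketch of the operational side from the last two bullets: a rolling BGSAVE that snapshots one shard at a time, plus a simple INFO-based report. Hostnames, the shard count, and the polling interval are placeholders.

    import time

    import redis

    SHARDS = ["redis-shard-%d.example.com" % i for i in range(64)]

    def rolling_bgsave():
        for host in SHARDS:
            r = redis.Redis(host=host)
            r.bgsave()
            # Wait for this shard's snapshot to finish before moving on, so
            # only one instance at a time pays the fork / copy-on-write cost.
            while r.info("persistence")["rdb_bgsave_in_progress"]:
                time.sleep(5)

    def report():
        for host in SHARDS:
            r = redis.Redis(host=host)
            print("%s: %d keys, %s used"
                  % (host, r.dbsize(), r.info("memory")["used_memory_human"]))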