Speeding up Redis with Compression

April 29, 2024

I used to work on a chat product similar to Slack that was offered to universities. One university had over 20,000 active students on the platform, with public channels for each section and broadcast channels for university-wide communications. Whenever the university broadcast a message to everyone, network traffic surged as thousands of users requested the same information at the same time. Since so many users were opening the same content, we decided to cache the data for those channels in Redis to improve performance and reduce load on our database. However, storing large amounts of data in Redis caused high memory usage and slower response times. This blog post explores how we addressed these issues using LZ4 compression.

The Problem: Large Data Transfers and High Memory Usage

Each public channel contained messages and some metadata totaling 35-50 KB. Because so many users accessed these channels simultaneously, data transfers between Redis and our application servers consumed too much network capacity. Response times slowed, and Redis memory usage climbed from storing large, uncompressed payloads. Our goal was to reduce both network traffic and Redis memory usage to maintain acceptable performance.

Finding the Right Compression Algorithm

To tackle the problem, we evaluated several compression algorithms commonly recommended for frequently accessed Redis data, focusing on compression speed, decompression speed, and compression ratio. Our objective was to shrink the payloads and relieve the strain on the network. I ran benchmarks using 128 KB of serialized JSON data, storing it 10,000 times in a loop with each compression technique. The results were as follows:

| Operation        | Compression Type | Time (Seconds) | Memory Usage |
| ---------------- | ---------------- | -------------- | ------------ |
| Set 10,000 times | Uncompressed     | 16.867         | 2.14 GB      |
| Get 10,000 times | Uncompressed     | 14.365         |              |
| Set 10,000 times | LZ4              | 10.112         | 149.07 MB    |
| Get 10,000 times | LZ4              | 4.726          |              |
| Set 10,000 times | Snappy           | 11.933         | 627.63 MB    |
| Get 10,000 times | Snappy           | 4.769          |              |

As seen in the table, LZ4 provided superior performance, with faster set and get times and significantly reduced memory usage. While Snappy also improved over uncompressed data, LZ4 stood out due to its compression ratio and fast decompression speed.

Implementation and Results

Based on these benchmarks, we implemented LZ4 compression in our Redis caching strategy. This change led to a significant decrease in data transfer times and a reduction in Redis memory usage. After deploying the solution, we observed lower latency during peak usage times and a smaller Redis memory footprint compared to uncompressed data.

Conclusion: Optimizing Redis with Compression

Implementing LZ4 compression in Redis proved an effective solution for our high-traffic chat application. It cut the volume of data traveling between Redis and our application servers, which improved response times and Redis memory efficiency. If you're facing similar issues, consider using LZ4 or another fast compression algorithm to optimize your Redis operations.

My backend used Node.js; the libraries I used were lz4-napi and snappy.

You can refer to this gist for how to implement LZ4 and Snappy with Node.js and Redis.


Liked my work? Buy me a coffee.

Do write down your reviews or send a mail via the contact form if you have any doubts, and remember to subscribe for more content like this.