

Each year for April Fools', rather than a prank, we like to create a project that explores the way that humans interact at large scales. This year we came up with Place, a collaborative canvas on which a single user could place only a single tile every five minutes. This limitation de-emphasized the importance of the individual and necessitated the collaboration of many users in order to achieve complex creations. Each tile placed was relayed to observers in real-time.

Multiple engineering teams (frontend, backend, mobile) worked on the project, and most of it was built using existing technology at Reddit. This post details how we approached building Place from a technical perspective.

But first, if you want to check out the code for yourself, you can find it here.

One requirement was that board size and tile cooldown should be adjustable on the fly in case data sizes were too large or update rates were too high.

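As a rough illustration of that requirement (the file path and setting names below are assumptions, not the actual configuration system), the two knobs just need to come from a source that can be re-read without a deploy:

```python
# Illustrative only: a sketch of live-adjustable settings. The JSON-file source
# and key names are assumptions; the post does not describe the real config system.
import json

CONFIG_PATH = "place_settings.json"  # e.g. {"board_size": 1000, "tile_cooldown_seconds": 300}


def current_settings():
    """Re-read board size and tile cooldown on every call so they can change on the fly."""
    with open(CONFIG_PATH) as f:
        settings = json.load(f)
    return settings["board_size"], settings["tile_cooldown_seconds"]
```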

The main challenge for the backend was keeping all the clients in sync with the state of the board. Our solution was to initialize the client state by having it listen for real-time tile placements immediately and then make a request for the full board. The full board in the response could be a few seconds stale as long as we also had real-time placements starting from before it was generated. When the client received the full board, it replayed all the real-time placements it received while waiting. All subsequent tile placements could be drawn to the board immediately as they were received.

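To make that ordering concrete, here is a minimal Python sketch of the bootstrap sequence described above; the function names are placeholders rather than actual client code:

```python
# Illustrative sketch of the client bootstrap: listen first, fetch the full board,
# then replay buffered placements. Names are assumptions, not the real client.
def initialize_board(connect_realtime, fetch_full_board, draw_tile):
    buffered = []        # placements that arrive before the full board does
    board_ready = False

    def on_placement(placement):
        if board_ready:
            draw_tile(placement)        # after init, draw placements immediately
        else:
            buffered.append(placement)  # before init, hold them for replay

    # 1. Start listening for real-time placements first...
    connect_realtime(on_placement)

    # 2. ...then request the full board. It may be a few seconds stale, which is
    #    fine because the buffer covers everything placed since it was generated.
    board = fetch_full_board()

    # 3. Replay the buffered placements on top of the (possibly stale) snapshot.
    for placement in buffered:
        draw_tile(placement)
    board_ready = True
    return board
```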

For this scheme to work, we needed the request for the full state of the board to be as fast as possible. Our initial approach was to store the full board in a single row in Cassandra, and each request for the full board would read that entire row. The format for each column in the row was the tile's (x, y) coordinate mapped to its details (color, author, and timestamp). Because the board contained 1 million tiles, this meant we had to read a row with 1 million columns. On our production cluster this read took up to 30 seconds, which was unacceptably slow and could have put excessive strain on Cassandra.

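For illustration, a single-row ("wide row") layout along these lines could be expressed with the Python cassandra-driver roughly as follows; the keyspace, table, and column names are assumptions, as the actual schema is not shown here:

```python
# Illustrative sketch of the rejected single-row layout. Keyspace, table, and
# column names are assumptions, not the actual production schema.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("place")  # assumes a "place" keyspace exists

# One partition key ("board") with (x, y) clustering columns puts every tile in
# the same row -- about 1 million columns for a 1000x1000 board.
session.execute("""
    CREATE TABLE IF NOT EXISTS board_row (
        board int,
        x int,
        y int,
        color int,
        author text,
        placed_at timestamp,
        PRIMARY KEY (board, x, y)
    )
""")

# Serving the full board means reading that entire million-column row.
rows = session.execute("SELECT x, y, color FROM board_row WHERE board = 0")
```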

Our next approach was to store the full board in redis. We used a bitfield of 1 million 4-bit integers. Each 4-bit integer was able to encode a 4-bit color, and the x,y coordinates were determined by the offset (offset = x + 1000y) within the bitfield. We could read the entire board state by reading the entire bitfield. We were able to update individual tiles by updating the value of the bitfield at a specific offset (no need for locking or read/modify/write). We still needed to store the full details in Cassandra so that users could inspect individual tiles to see who placed them and when. We also planned on using Cassandra to restore the board in case of a redis failure. Reading the entire board from redis took less than 100ms, which was fast enough.

[Illustration: how colors were stored in redis, using a 2×2 board]

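A minimal sketch of that scheme using redis-py is below. The 1000-wide board, the offset formula, and the 4-bit-per-tile encoding come from the description above; the key name, connection details, and helper names are assumptions:

```python
# Sketch of the redis bitfield scheme: 1 million 4-bit colors, one per tile.
# Key name and helpers are illustrative, not the actual production code.
import redis

r = redis.Redis()
BOARD_KEY = "place:board"  # assumed key name
WIDTH = 1000               # 1000 x 1000 = 1 million tiles


def set_tile(x, y, color):
    """Write one 4-bit color in place -- no locking or read/modify/write needed."""
    bit_offset = 4 * (x + WIDTH * y)  # BITFIELD offsets are in bits; each tile is 4 bits
    r.bitfield(BOARD_KEY).set("u4", bit_offset, color).execute()


def get_tile(x, y):
    """Read a single tile's 4-bit color."""
    bit_offset = 4 * (x + WIDTH * y)
    return r.bitfield(BOARD_KEY).get("u4", bit_offset).execute()[0]


def get_full_board():
    """Read the whole bitfield (~500 KB) in one GET to serve the full board state."""
    return r.get(BOARD_KEY)
```
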
We were concerned about exceeding maximum read bandwidth on redis. If many clients connected or refreshed at once, they would simultaneously request the full state of the board, all triggering reads from redis. Because the board was a shared global state, the obvious solution was to use caching.

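This excerpt does not say where that cache lived, so purely as a sketch, a short-TTL cache in front of the full-board read would collapse simultaneous requests onto a single redis fetch (a few seconds of staleness being acceptable, as noted earlier):

```python
# Illustrative only: a minimal in-process TTL cache for the full-board read.
# The actual caching layer is not specified in this excerpt.
import threading
import time

_cache = {"board": None, "fetched_at": 0.0}
_lock = threading.Lock()
CACHE_TTL = 1.0  # seconds; a slightly stale board is fine, clients replay placements


def get_full_board_cached(fetch_from_redis):
    """Serve the cached board when fresh enough; otherwise do one redis read."""
    with _lock:
        now = time.monotonic()
        if _cache["board"] is None or now - _cache["fetched_at"] > CACHE_TTL:
            _cache["board"] = fetch_from_redis()
            _cache["fetched_at"] = now
        return _cache["board"]
```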
