How Uber migrated from DynamoDB to Docstore
Hey Everyone,
Today we’ll be talking about
How Uber migrated from DynamoDB to Docstore for their Financial Transaction Data
Uber uses a database called LedgerStore to keep track of their financial transaction data.
LedgerStore uses DynamoDB as its storage backend. But that was becoming too expensive.
Uber migrated the storage backend to Docstore, resulting in $6 million in yearly savings, better performance and fewer external dependencies.
We’ll talk about why Uber switched, how they did the migration and the final results.
Josh Comeau wrote a great article on how to learn quickly in the tech field
Use a mix of guided and unguided learning to avoid tutorial hell
Use Spaced-Repetition Systems (SRS) to remember things forever
Learn in public
Plus, a few awesome tech snippets on
Yann LeCun is one of the godfathers of Deep Learning. He taught a class on Deep Learning at NYU earlier this year. We link the recorded lecture videos.
How to build a second brain as a software engineer
How Etsy built their bidding system for Etsy Ads
We have a solution to our last Facebook interview question and a new question from Google.
Quastor Daily is a free Software Engineering newsletter that sends out FAANG Interview questions (with detailed solutions), Technical Deep Dives and summaries of Engineering Blog Posts.
How Uber Migrated their Financial Transaction Database from DynamoDB to Docstore
In order to keep track of their financial transaction data, Uber built an immutable ledger-style database called LedgerStore.
LedgerStore’s storage backend was AWS DynamoDB but Uber decided to migrate away because it was becoming expensive.
They switched the storage backend to Docstore, a general-purpose, multi-model database that was developed internally at Uber.
The project resulted in
$6 million of yearly savings for Uber
Fewer external dependencies
Technology Consolidation (since Docstore is an Uber product, many other services inside Uber also use it)
Latency improvements
Uber was able to do this without a single production incident or data inconsistency across the 250 billion unique records that were migrated from DynamoDB to Docstore.
Piyush Patel, Jaydeepkumar Chovatia and Kaushik Devarajaiah wrote a great article on the migration. Here’s a summary.
Uber moves millions of people around the world and delivers tens of millions of food orders daily. This generates a massive amount of financial transactions that need to be stored.
To do this, Uber uses LedgerStore, an append-only, ledger-style distributed database.
A datastore in LedgerStore is a collection of tables and each table contains a set of records modeled as documents.
Here’s an example of a LedgerStore table schema
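To make that concrete, here’s a hypothetical sketch of what a ledger-style table definition could look like (the table, field and index names below are made up for illustration; this is not Uber’s actual schema):

```python
# Hypothetical, illustrative table definition (not Uber's actual schema)
trip_charges_table = {
    "name": "trip_charges",
    "fields": [
        {"name": "trip_uuid",  "type": "UUID"},
        {"name": "rider_uuid", "type": "UUID"},
        {"name": "amount",     "type": "Decimal"},
        {"name": "currency",   "type": "String"},
        {"name": "created_at", "type": "Timestamp"},
    ],
    # Every record is keyed by the trip it belongs to
    "primary_key": ["trip_uuid"],
    # Secondary indexes let you query by other fields,
    # e.g. all charges for a given rider over time
    "indexes": [
        {"name": "by_rider", "fields": ["rider_uuid", "created_at"]},
    ],
}
```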
Tables can have one or many indexes and an index belongs to exactly one table.
Indexes in LedgerStore are strongly consistent, so when a write to the main table succeeds, all indexes are updated at the same time using a 2-phase commit.
LedgerStore also provides automatic data-tiering functionality.
The data is mostly read within a few weeks or months after being written. Because it’s expensive to store data in hot databases like DynamoDB, Uber offloads the data to colder storage after a time period.
LedgerStore was designed to provide the following data integrity guarantees.
Individual records are immutable
Corrections are trackable
Unauthorized data changes and inconsistencies are detected
Queries are reproducible within a bounded time after a write
In order to provide these guarantees, LedgerStore introduced the concept of Sealing.
Sealing
Sealing is the process of closing a past time range of data for changes and maintaining signatures of the data within that sealed time range.
Once a sealing window has been closed and signed, no further updates to it are permitted.
If you need to correct data in an already-sealed time range, LedgerStore uses Revisions, which we’ll discuss below.
Because there are no updates, any query that only reads data from a sealed time range is guaranteed to be reproducible.
Revisions
If you need to correct data in already-sealed time ranges, LedgerStore uses the notion of Revisions.
A revision is a table-level entity consisting of all sealed record corrections and the associated business justifications. All records, both corrected and the original, are maintained to allow reproducible queries.
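To tie Sealing and Revisions together, here’s a minimal, conceptual sketch (this is not Uber’s implementation; the class and method names are made up):

```python
import hashlib
import json

class SealedLedgerTable:
    """Toy model of an append-only table with sealing windows and revisions."""

    def __init__(self):
        self.records = {}          # (window_id, record_id) -> record
        self.sealed_windows = {}   # window_id -> signature of the sealed data
        self.revisions = []        # corrections to records in sealed windows

    def write(self, window_id, record_id, record):
        # New writes are only accepted into windows that haven't been sealed yet
        if window_id in self.sealed_windows:
            raise ValueError(f"window {window_id} is sealed; use a revision instead")
        self.records[(window_id, record_id)] = record

    def seal(self, window_id):
        # Close the window and store a signature of its contents, so later
        # tampering can be detected by recomputing and comparing the signature
        window_data = {
            rid: rec for (wid, rid), rec in self.records.items() if wid == window_id
        }
        payload = json.dumps(window_data, sort_keys=True).encode()
        self.sealed_windows[window_id] = hashlib.sha256(payload).hexdigest()

    def revise(self, window_id, record_id, corrected_record, justification):
        # Corrections never overwrite the original record; both versions are
        # kept so queries over the sealed window stay reproducible
        if window_id not in self.sealed_windows:
            raise ValueError("the window isn't sealed yet; just write the record")
        self.revisions.append({
            "window_id": window_id,
            "record_id": record_id,
            "corrected_record": corrected_record,
            "justification": justification,
        })
```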
Choosing Docstore
LedgerStore was designed to abstract away the underlying storage technology so that switching technologies could be done if the business need arose.
As the database scaled, using AWS DynamoDB as a storage backend became extremely expensive.
Additionally, having different backend databases in the tech stack created fragmentation and made it difficult to operate.
The requirements for the storage backend were
High availability - 99.99% availability guarantees
Can be easily scaled horizontally
Change Data Capture (a.k.a. streaming)
Secondary Indexes
A flexible data model
Docstore, Uber’s homegrown database, was a perfect match for those requirements.
The only issue was that Docstore didn’t have Change Data Capture (streaming) functionality.
Uber wanted streaming functionality because reading data from a stream of updates is more efficient than reading from the table directly.
You don’t have to perform table scans or range reads spanning a large number of rows. Also, the stream data can be stored on cheaper, commodity hardware in a system like Apache Kafka, which is optimized for stream reads.
Uber solved this issue by building a streaming framework for Docstore called Flux. Read the article for more details on Flux.
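As a rough illustration of the change-data-capture pattern (this shows the general idea only, not how Flux actually works), every committed write can be published to an append-only changelog that consumers tail in order, instead of scanning the table:

```python
from collections import deque

class ChangeLog:
    """Toy in-memory changelog; in practice this would be a system like Kafka."""

    def __init__(self):
        self.events = deque()

    def publish(self, event):
        self.events.append(event)

    def consume(self):
        # Consumers read new change events in order; no table scans needed
        while self.events:
            yield self.events.popleft()

table = {}
changelog = ChangeLog()

def write_record(key, value):
    table[key] = value
    # Emit a change event for every committed write
    changelog.publish({"op": "put", "key": key, "value": value})

write_record("trip-123", {"amount": 25.40, "currency": "USD"})
for event in changelog.consume():
    print(event)
```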
DynamoDB to Docstore Migration
When migrating from DynamoDB to Docstore, Uber had several objectives they wanted to meet
Zero stakeholder involvement - clients who rely on LedgerStore should not be involved or exposed to the migration. They shouldn’t have to change any code.
High Availability - no downtime during migration
Maintaining 100% Read-Your-Writes data consistency
Maintaining pre-existing performance SLOs, such as latency
Ability to switch back, in case of emergency
Historical Data Migration
One part of the migration was moving all the historical data from DynamoDB to Docstore in real time.
The data consisted of more than 250 billion unique records and was 300 terabytes of data in total.
Engineers did this by breaking the historical data down into subsets and then processing them individually via checkpointing.
LedgerStore already supported cold storage offloading (part of automatic data-tiering discussed earlier) and the offloading worked at a sealing window granularity.
Therefore, the data was already broken down into subsets, where each subset is an individual sealing window.
Engineers built a backfill framework to process individual sealing windows and maintain a checkpoint of them.
The framework is multi-threaded, spawning multiple workers where each worker processes a distinct sealing window.
As a result, the framework is capable of processing 1 million messages per second, which allowed all the historical data to be backfilled in a couple of weeks.
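Here’s a minimal sketch of that checkpointed, multi-worker backfill idea (not Uber’s actual framework; the reader/writer functions and the checkpoint store are stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

def backfill_window(window_id, checkpoints, read_window, write_batch):
    # Skip windows that were already migrated, so the job can be restarted safely
    if checkpoints.get(window_id) == "done":
        return
    for batch in read_window(window_id):   # read this sealing window's records from the source
        write_batch(batch)                 # write them into the destination database
    checkpoints[window_id] = "done"        # checkpoint progress per sealing window

def run_backfill(window_ids, checkpoints, read_window, write_batch, workers=32):
    # Each worker processes a distinct sealing window in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for window_id in window_ids:
            pool.submit(backfill_window, window_id, checkpoints,
                        read_window, write_batch)
```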
Online Traffic Redirection
The second part of the migration was online traffic redirection.
To do this, engineers configured LedgerStore to talk to 2 different databases (DynamoDB and Docstore), with each database assuming a primary or secondary role depending on the phase of the migration.
The goal was to keep the 2 databases consistent during every phase of the migration so that rolling back (or forward) was always possible.
The online traffic redirection was divided into 4 phases
Shadow Writes - engineers added a shadow writer module in LedgerStore’s write path to insert incoming data into the secondary database along with the primary.
The secondary database writes happened asynchronously, to keep write latency low.
A two-phase commit protocol was used to track the asynchronous writes.
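A simplified sketch of the shadow-write idea (illustrative only; the primary and secondary database clients are assumed to expose a put method):

```python
from concurrent.futures import ThreadPoolExecutor

class ShadowWriter:
    """Writes go to the primary synchronously and to the secondary in the background."""

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.pool = ThreadPoolExecutor(max_workers=8)

    def write(self, key, record):
        # The client's write latency is bound only by the primary database
        self.primary.put(key, record)
        # The shadow write to the secondary happens asynchronously
        self.pool.submit(self.secondary.put, key, record)
```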
Dual Read and Merge - To guarantee 100% consistency and 99.99% availability, a new module called Dual Read and Merge was added in the read path for LedgerStore.
This module served read requests by reading from both databases, merging the results, and then returning them to the client.
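A simplified sketch of that read path (illustrative only; here the merge simply falls back to the secondary when the primary doesn’t have the record yet):

```python
def dual_read(key, primary, secondary):
    # Read from both databases so a record that has only landed in one of
    # them (e.g. an in-flight shadow write) is still returned to the client
    primary_result = primary.get(key)
    if primary_result is not None:
        return primary_result
    return secondary.get(key)
```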
Swap Databases - In this phase, engineers compared the Docstore database with the DynamoDB database to ensure the data matched.
They were able to validate 250 billion unique records through a Spark job within a single week.
After validation, they swapped the databases and promoted Docstore as the primary and DynamoDB as the secondary.
After Docstore was promoted to primary, they slowly stopped sending traffic to DynamoDB.
First, they gradually removed reads from DynamoDB and served them out of Docstore.
Final Cutover - Once reads were fully served out of Docstore, it was time to stop the shadow writes to DynamoDB. Afterwards, engineers backed up the DynamoDB database and finally decommissioned it.
Read the full article for more details
Tech Snippets
Deep Learning Course (Spring 2021) Lectures - Yann LeCun is regarded as one of the 3 “godfathers of AI” (along with Geoffrey Hinton and Yoshua Bengio).
He recently taught NYU’s Deep Learning course with Alfredo Canziani.
How to build a second brain as a software engineer - A second brain is a personal knowledge management system (PKM) that you can use to organize your thoughts and references. Two popular ways of doing this are Zettelkasten and the PARA method.
This article goes through how to build your own PKM so you can become a better engineer.
How Etsy built their bidding system for Etsy Ads - Etsy Ads is an advertising platform for Etsy sellers who want to advertise their goods across Etsy’s website and mobile app.
Sellers place bids in Etsy’s real-time auction for ad spots across Etsy’s web pages.
In this blog post, Etsy discusses their contextual bidding algorithm, which uses machine learning to automatically determine how much a seller should bid for an ad spot to maximize their earnings.
How to Learn Tech Stuff Quickly
Josh Comeau wrote a great article on his blog on how to learn things quickly in the technology space.
We’ll be summarizing the article here.
Mixing Guided and Unguided Learning
There are two main categories of learning
Guided - Reading a tutorial, taking a course, watching a YouTube video. Anything where you’re following a guide.
Unguided - Creating your own projects from scratch or extending a tutorial. Anything where you aren’t following a guide.
If you only follow guided resources, you’ll end up in tutorial hell (getting stuck doing tutorial after tutorial without ever feeling like you’re making substantive progress).
However, if you just focus on unguided learning, then it’ll take forever. You need an experienced guide to help put you on the right path.
Therefore, you should be mixing unguided learning into guided tutorials.
Some ways you can mix them are
Make intentional mistakes - Don’t just copy/paste code verbatim in a tutorial. Instead try experimenting with the code and see what happens.
Extending Tutorials - After finishing a tutorial, try to add on features to what you just built and try implementing them.
Building related Projects - After doing the guided tutorial, try to come up with a project that’s similar to the guided tutorial and build it from scratch (without looking back at the tutorial).
At the beginning of your learning journey for a technology, you should focus primarily on guided learning.
As you become more comfortable, shift your focus to unguided projects.
Remembering Things
Use a spaced repetition system (SRS) to improve your memory.
The core idea behind SRS systems (like Anki) is that they calculate how long you’ll remember an idea and then quiz you on that idea right as you’re about to forget it.
This helps you strengthen the memory for that item.
Learning in Public
Swyx has an awesome philosophy called Learn in Public.
The idea is that whenever you learn something new, you should create an artifact that documents it.
A tweet, a blog post or a YouTube video (for larger concepts).
The benefits of learning in public are
By teaching other people, you’re strengthening your own understanding of the concept
You can reference past blog posts/tweets in the future if you forget the solution
It can be great for making friends and building your network
However, don’t fall into the trap of spending weeks setting up the perfect blog from scratch!
Try and set up your blog in a day (use a pre-existing template) or just start posting on Twitter!
Read the full article for the rest of Josh’s tips
Interview Question
Design an iterator that supports the peek operation on an existing iterator, in addition to the hasNext and next operations.
Implement the PeekingIterator class:
PeekingIterator(Iterator<int> nums) - Initializes the object with the given integer iterator.
int next() - Returns the next element in the array and moves the pointer to the next element.
boolean hasNext() - Returns true if there are still elements in the array.
int peek() - Returns the next element in the array without moving the pointer.
Here’s the question in LeetCode.
We’ll send a detailed solution in our next email, so make sure you move our emails to primary, so you don’t miss them!
Gmail users—move us to your primary inbox
On your phone? Hit the 3 dots at the top right corner, click "Move to" then "Primary"
On desktop? Back out of this email then drag and drop this email into the "Primary" tab near the top left of your screen
Apple mail users—tap on our email address at the top of this email (next to "From:" on mobile) and click “Add to VIPs”
Previous Solution
As a reminder, here’s our last question
You are given an array of integers where every integer occurs three times except for one integer, which only occurs once. Find and return the non-duplicated integer.
Input - [6, 1, 3, 3, 3, 6, 6]
Output - 1
Input - [13, 19, 13, 13]
Output - 19
Do this in O(N) time and O(1) space.
Here’s the question in LeetCode.
Solution
We can find a unique number in an array where every other number appears twice by XORing all the numbers in the array together.
This cancels out every bit that is set an even number of times, leaving only the bits of the unique number.
But how do we use this for three duplicates?
Well, instead of cancelling out bits that are set an even number of times, we want to cancel out bits that are set a multiple of three times.
Let's assume all integers fit in 32 bits.
Then let's create a list of 32 zeroes, and while iterating over each number in our array, add each of its bits to the corresponding position in the list.
Finally, we'll go over each count in the list and take it modulo 3.
This means that any bit set a multiple of 3 times is effectively cleared, leaving only the bits of the unique number. Reading those remaining bits off as a binary number gives us the answer.
The time complexity is linear and the space complexity is constant.
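Here’s a minimal Python sketch of the approach described above:

```python
def find_unique(nums):
    # Count how many times each of the 32 bit positions is set across all numbers
    bit_counts = [0] * 32
    for num in nums:
        for i in range(32):
            if num & (1 << i):
                bit_counts[i] += 1

    # Bits set a multiple of 3 times came only from the triplicated numbers,
    # so whatever survives the mod-3 belongs to the unique number
    result = 0
    for i in range(32):
        if bit_counts[i] % 3:
            result |= 1 << i
    return result

print(find_unique([6, 1, 3, 3, 3, 6, 6]))  # 1
print(find_unique([13, 19, 13, 13]))       # 19
```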
Exercise for the reader - Our program will fail one specific type of test case.
Can you figure what that test case is? (Try doing this without running our code)
Reply back to this email with the failure case.
Bonus points if you can also come up with a fix!