2023-11-19

Innovating at Scale: Pinterest’s Journey to a Unified Key-Value Storage System

Pinterest’s engineering team embarked on a complex, two-year migration project to consolidate over 500 key-value use cases into a single storage system. This ambitious move aimed to reduce system complexity, lower operational overhead, and improve performance, all while managing over 4 petabytes of data and handling hundreds of millions of queries per second (QPS). The article by Jessica Chan, Engineering Manager at Pinterest, outlines three key innovations instrumental in achieving these goals.

  1. API Abstractions for Seamless Data Migration: Pinterest introduced a new unified API, the KVStore API, to replace four separate systems. This allowed for a seamless customer data migration, with the first phase targeting read-only data and the second phase addressing read-write data.
  2. Wide-Column Format for Efficient Data Access: To address the inefficiencies of large payload retrievals, Pinterest developed a wide-column format that allowed for more efficient data access patterns. This innovation significantly reduced CPU and network load, contributing to the observed performance improvements.
  3. Timestamp-Based Versioning for Read-Only Data: The introduction of timestamp-based versioning replaced the round-robin versioning system, enabling instant data migrations between clusters and allowing for a flexible number of data versions to balance cost savings and rollback safety.

These innovations not only streamlined Pinterest’s key-value storage but also led to a 40-90% performance improvement and significant cost savings. The technical challenges matched the organizational ones, requiring extensive coordination and buy-in across the company.

For a comprehensive understanding of the technical and strategic advancements made by Pinterest, read the full article on StackShare: 3 Innovations While Unifying Pinterest’s Key-Value Storage.

Migration Strategy: The Role of API Abstractions

API abstractions play a crucial role in large-scale data migrations by providing a layer of separation between the client’s requests and the underlying service implementations. This separation allows the underlying services to be modified, upgraded, or replaced without requiring changes to the client code. In Pinterest’s case, the KVStore API acted as a unified interface, enabling data migration from multiple storage systems without disrupting the services that depended on them.

The potential risks involved with API abstractions include the possibility of creating overly generic interfaces that do not perform optimally for specific use cases. Additionally, there’s a risk of abstraction leakage, where details of the underlying system inadvertently surface through the API, potentially leading to a reliance on these details that can complicate future changes.
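As a minimal sketch of the abstraction pattern described above (not Pinterest's actual KVStore API, whose interface is not shown in the source), the idea is that clients call a single unified API that routes each logical dataset to a backend, so the backend can be swapped during a migration without any client code changing. All class and method names here are illustrative assumptions:

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional


class KVBackend(ABC):
    """One of the underlying storage engines hidden behind the unified API."""

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...


class InMemoryBackend(KVBackend):
    """Toy backend standing in for a real storage system."""

    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value


class KVStoreAPI:
    """Unified entry point: clients never see which backend serves a dataset."""

    def __init__(self, routing: Dict[str, KVBackend]) -> None:
        self._routing = routing

    def get(self, dataset: str, key: str) -> Optional[bytes]:
        return self._routing[dataset].get(key)

    def put(self, dataset: str, key: str, value: bytes) -> None:
        self._routing[dataset].put(key, value)

    def migrate(self, dataset: str, new_backend: KVBackend) -> None:
        # Repoint a dataset at a new backend; callers are unaffected
        # because they only ever talk to KVStoreAPI.
        self._routing[dataset] = new_backend
```

The routing table is the crux of the design: because the dataset-to-backend mapping lives behind the API, a migration reduces to copying data and flipping one entry, which is what decouples client deployments from storage-system changes.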

Performance Optimization: Benefits of a Wide-Column Format

A wide-column format improves performance in key-value storage systems by allowing for more efficient data access patterns. Instead of retrieving and deserializing entire large blobs of data, a wide-column format enables the retrieval of only the necessary pieces of data. This reduces network and CPU load, as seen with Pinterest’s implementation, which allowed for fast range scans or single-point key-value lookups.

Other organizations can apply this innovation by analyzing their data access patterns and identifying opportunities to store and retrieve data in more granular, column-based segments. This approach is particularly beneficial for workloads that frequently access only a subset of a data structure’s fields.
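A rough sketch of the access-pattern difference, assuming a plain dictionary as a stand-in for the storage engine (the real system would use an on-disk structure such as an LSM tree): each field is stored under a composite (row key, column) key, so a read touches only the columns it needs rather than deserializing one large blob per key.

```python
# Stand-in for an ordered key-value engine; keys are (row_key, column) pairs.
store = {}


def put_row(row_key, columns):
    """Write each field of a record under its own composite key."""
    for col, value in columns.items():
        store[(row_key, col)] = value


def get_columns(row_key, wanted):
    """Fetch only the requested columns: single-point lookups,
    no full-blob deserialization."""
    return {
        col: store[(row_key, col)]
        for col in wanted
        if (row_key, col) in store
    }


# A record with one small field and one large one (illustrative values).
put_row("user:42", {
    "last_login": "2023-11-19",
    "embedding": b"\x00" * 4096,  # large payload we usually don't need
})

# Reading last_login never touches the 4 KB embedding.
recent = get_columns("user:42", ["last_login"])
```

The saving in the blob model is that a caller wanting `last_login` would still have paid the network and CPU cost of shipping and deserializing the whole record, including the embedding.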

Data Versioning: Advantages of Timestamp-Based Versioning

Timestamp-based versioning enhances flexibility and efficiency in data management by assigning each data upload a unique timestamp, which is registered in a central metastore. This scheme allows an arbitrary number of data versions to be stored, enabling easy rollbacks and cost savings by adjusting the number of versions kept on disk.

In large-scale storage systems, this lets data be migrated between clusters almost instantly, without extensive coordination or downtime. Organizations can benefit from this approach by implementing a similar versioning scheme that allows for dynamic data management and reduces the operational overhead associated with data migrations and rollbacks.
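The mechanics can be sketched as a small metastore model, with all names and the pruning policy being illustrative assumptions rather than Pinterest's implementation: each upload registers a new version under its timestamp, readers follow a "live" pointer, and both rollback and retention become cheap metadata operations.

```python
import time


class VersionMetastore:
    """Sketch of timestamp-based versioning: uploads register under unique
    timestamps; rollback and migration are metadata-only pointer updates."""

    def __init__(self, max_versions: int = 3) -> None:
        self.versions = {}  # dataset -> {timestamp: data location}
        self.live = {}      # dataset -> timestamp readers should serve
        self.max_versions = max_versions

    def register_upload(self, dataset: str, location: str, ts: int = None) -> int:
        ts = ts if ts is not None else time.time_ns()
        self.versions.setdefault(dataset, {})[ts] = location
        self.live[dataset] = ts  # new uploads become live immediately
        self._prune(dataset)
        return ts

    def rollback(self, dataset: str) -> None:
        # Point readers at the most recent version older than the live one.
        older = sorted(t for t in self.versions[dataset] if t < self.live[dataset])
        if older:
            self.live[dataset] = older[-1]

    def _prune(self, dataset: str) -> None:
        # Keep only the newest max_versions: fewer versions saves disk,
        # more versions widens the rollback window.
        kept = sorted(self.versions[dataset])[-self.max_versions:]
        self.versions[dataset] = {t: self.versions[dataset][t] for t in kept}
```

Contrast this with round-robin versioning, where a fixed set of slots is overwritten in turn: there, the number of retained versions is baked in, whereas here `max_versions` is just a tunable retention knob.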