While vector databases still have many legitimate use cases, organizations including OpenAI are leaning on PostgreSQL to get things done.
In a blog post on Thursday, OpenAI disclosed how it's using the open-source PostgreSQL database.
OpenAI runs ChatGPT and its API platform for 800 million users on a single-primary PostgreSQL instance: not a distributed database, not a sharded cluster. One Azure PostgreSQL Flexible Server handles all writes. Nearly 50 read replicas spread across multiple regions handle reads. The system processes millions of queries per second while maintaining low double-digit millisecond p99 latency and five-nines availability.
The setup challenges conventional scaling wisdom and offers enterprise architects insight into what actually works at massive scale.
The lesson here isn't to copy OpenAI's stack. It's that architectural decisions should be driven by workload patterns and operational constraints, not by scale panic or trendy infrastructure choices. OpenAI's PostgreSQL setup shows how far proven systems can stretch when teams optimize deliberately instead of re-architecting prematurely.
"For years, PostgreSQL has been one of the most critical, under-the-hood data systems powering core products like ChatGPT and OpenAI's API," OpenAI engineer Bohan Zhang wrote in a technical disclosure. "Over the past year, our PostgreSQL load has grown by more than 10x, and it continues to rise quickly."
The company achieved this scale through targeted optimizations, including connection pooling that cut connection time from 50 milliseconds to 5 milliseconds, and cache locking to prevent "thundering herd" problems, where cache misses trigger database overload.
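Connection pooling wins that 50ms-to-5ms improvement by paying the connection setup cost once and reusing open connections afterward. A minimal sketch of the idea, where the pool size and the `connect` factory are illustrative stand-ins rather than OpenAI's actual configuration:

```python
import queue


class ConnectionPool:
    """Minimal fixed-size connection pool.

    `connect` is a caller-supplied factory (a placeholder here); in a real
    deployment it would open a PostgreSQL connection through a driver such
    as psycopg. This is a sketch of the pattern, not OpenAI's code.
    """

    def __init__(self, connect, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # Pay the expensive (~50ms) connection setup once per slot.
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # Reusing an existing connection takes microseconds, not milliseconds.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for the next caller instead of closing it.
        self._pool.put(conn)
```

Bounding the pool size also acts as a crude backpressure mechanism: when every connection is checked out, new callers wait instead of piling more load onto the primary.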
Why PostgreSQL matters for enterprises
PostgreSQL handles operational data for ChatGPT and OpenAI's API platform. The workload is heavily read-oriented, which makes PostgreSQL a good fit. However, PostgreSQL's multiversion concurrency control (MVCC) creates challenges under heavy write loads.
When updating data, PostgreSQL copies entire rows to create new versions, causing write amplification and forcing queries to scan through multiple row versions to find the current data.
Rather than fighting this limitation, OpenAI built its strategy around it. At OpenAI's scale, these tradeoffs aren't theoretical; they determine which workloads stay on PostgreSQL and which must move elsewhere.
How OpenAI is optimizing PostgreSQL
At large scale, conventional database wisdom points to one of two paths: shard PostgreSQL across multiple primary instances so writes can be distributed, or migrate to a distributed SQL database like CockroachDB or YugabyteDB, designed to handle massive scale from the start. Most organizations would have taken one of these paths years ago, well before reaching 800 million users.
Sharding or moving to a distributed SQL database eliminates the single-writer bottleneck. A distributed SQL database handles the coordination automatically, but both approaches introduce significant complexity: application code must route queries to the correct shard, distributed transactions become harder to manage and operational overhead increases considerably.
Instead of sharding PostgreSQL, OpenAI established a hybrid strategy: no new tables in PostgreSQL. New workloads default to sharded systems like Azure Cosmos DB. Existing write-heavy workloads that can be horizontally partitioned get migrated out. Everything else stays in PostgreSQL with aggressive optimization.
This approach offers enterprises a practical alternative to wholesale re-architecture. Rather than spending years rewriting hundreds of endpoints, teams can identify specific bottlenecks and move only those workloads to purpose-built systems.
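The routing burden that sharding puts on application code can be as simple, and as sticky, as a hash function. A hypothetical sketch (the shard names are invented for illustration):

```python
import hashlib

# Hypothetical shard identifiers; in practice these would be connection
# strings for separate PostgreSQL primaries.
SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2", "pg-shard-3"]


def shard_for(user_id: str) -> str:
    """Deterministically map a user to one shard.

    md5 keeps the mapping stable across processes and restarts (unlike
    Python's randomized built-in hash()). The catch is the stickiness:
    changing the shard count later means physically moving data, which is
    part of the operational overhead described above.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Every query path in the application now has to call something like `shard_for` before it can talk to the database, and cross-shard joins and transactions need special handling; that is the complexity OpenAI chose to avoid for its existing tables.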
Why this matters
OpenAI's experience scaling PostgreSQL reveals several practices that enterprises can adopt regardless of their scale.
Build operational defenses at multiple layers. OpenAI's approach combines cache locking to prevent "thundering herd" problems, connection pooling (which dropped connection time from 50ms to 5ms), and rate limiting at the application, proxy and query levels. Workload isolation routes low-priority and high-priority traffic to separate instances, ensuring a poorly optimized new feature can't degrade core services.
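The cache-locking defense is essentially a "single flight" pattern: when a hot key is missing, exactly one caller recomputes it while the rest wait, instead of every concurrent request hitting the database at once. A minimal thread-based sketch under that assumption (this is an illustration of the pattern, not OpenAI's implementation):

```python
import threading

_cache = {}
_locks = {}
_locks_guard = threading.Lock()


def get_with_lock(key, load_from_db):
    """Cache lookup where only ONE caller loads a missing key.

    `load_from_db` stands in for the real database query. Without the
    per-key lock, a popular key expiring would send every concurrent
    request to the database simultaneously (a thundering herd).
    """
    if key in _cache:
        return _cache[key]
    with _locks_guard:
        # Find or create the lock dedicated to this key.
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        # Re-check: another thread may have filled the cache while we waited.
        if key not in _cache:
            _cache[key] = load_from_db(key)
        return _cache[key]
```

A production version would also need expiry and lock cleanup, but the double-check inside the lock is the essential move: waiters that lose the race read the freshly cached value instead of issuing duplicate queries.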
Review and monitor ORM-generated SQL in production. Object-relational mapping (ORM) frameworks like Django, SQLAlchemy and Hibernate automatically generate database queries from application code, which is convenient for developers. However, OpenAI found one ORM-generated query joining 12 tables that caused multiple high-severity incidents when traffic spiked. The convenience of letting frameworks generate SQL creates hidden scaling risks that only surface under production load. Make reviewing these queries a standard practice.
Enforce strict operational discipline. OpenAI allows only lightweight schema changes; anything triggering a full table rewrite is prohibited. Schema changes have a five-second timeout. Long-running queries get automatically terminated to prevent them from blocking database maintenance operations. When backfilling data, OpenAI enforces rate limits so aggressive that operations can take over a week.
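The week-long backfills follow from simple arithmetic: batch size divided by a rows-per-second budget. A sketch of a throttled backfill loop, where the batch size and rate are invented numbers and `apply_batch` is a placeholder for the real UPDATE:

```python
import time


def backfill(total_rows, batch_size=1000, rows_per_second=200,
             apply_batch=None, sleep=time.sleep):
    """Backfill in small batches, sleeping to honor a rows/sec budget.

    At 200 rows/sec, a 100-million-row backfill takes roughly six days,
    which is the scale of deliberate slowness described above. All
    numbers here are illustrative, not OpenAI's limits.
    """
    done = 0
    while done < total_rows:
        n = min(batch_size, total_rows - done)
        if apply_batch:
            # Placeholder for e.g. UPDATE ... WHERE id >= done AND id < done + n
            apply_batch(done, done + n)
        done += n
        # Throttle so the backfill never competes with production writes.
        sleep(n / rows_per_second)
    return done
```

Injecting `sleep` as a parameter keeps the throttle testable; the production default is simply `time.sleep`.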
Read-heavy workloads with burst writes can run on single-primary PostgreSQL longer than commonly assumed. The decision to shard should depend on workload patterns rather than user counts.
This approach is particularly relevant for AI applications, which often have heavily read-oriented workloads with unpredictable traffic spikes. Those characteristics fit the pattern where single-primary PostgreSQL scales effectively.
The lesson is straightforward: identify actual bottlenecks, optimize proven infrastructure where possible, and migrate selectively when necessary. Wholesale re-architecture isn't always the answer to scaling challenges.

