PostgreSQL 17 vs MySQL 9.0 vs CockroachDB: Which Database in 2026?

Choosing a database in 2026 is both easier and harder than it used to be. Easier because modern databases are genuinely excellent. Harder because the gap between them has narrowed, making the decision more nuanced. Should you stick with PostgreSQL? Is MySQL still relevant? Or is it time to go distributed with CockroachDB?

This guide cuts through the noise with concrete benchmarks, real-world feature comparisons, and opinionated recommendations.



The Contenders

Database    | Latest Version | License                  | Paradigm
PostgreSQL  | 17.x           | PostgreSQL License (OSS) | RDBMS + Extensions
MySQL       | 9.0            | GPL v2 / Commercial      | Traditional RDBMS
CockroachDB | 24.x           | BSL / Enterprise         | Distributed SQL

Quick Verdict (TL;DR)

  • New project, single region? → PostgreSQL. Every time.
  • Migrating a legacy LAMP app? → MySQL 9.0, or take the opportunity to move to PostgreSQL.
  • Multi-region, global users, can’t afford downtime? → CockroachDB.
  • Need exotic data types (vectors, JSONB, geospatial)? → PostgreSQL.

PostgreSQL 17: The Gold Standard

PostgreSQL 17 cements its position as the most feature-rich open-source RDBMS available. The 2025-2026 releases have focused on three areas: performance, JSON capabilities, and logical replication improvements.

Key Features in PostgreSQL 17

Performance improvements:

  • Incremental sorting on more query types
  • Better parallel query planning
  • VACUUM performance improvements (critical for write-heavy workloads)
  • pg_combinebackup for incremental base backups

Developer features:

  • JSON_TABLE() — finally, SQL/JSON table functions (SQL:2016 standard)
  • MERGE command improvements (WHEN NOT MATCHED BY SOURCE and a RETURNING clause)
  • Failover support for logical replication slots
  • Smarter planning for OR conditions (rewritten into = ANY lists)

-- PostgreSQL 17: JSON_TABLE is a game changer for JSON-heavy workloads
SELECT *
FROM JSON_TABLE(
    '[{"name": "Alice", "role": "admin"}, {"name": "Bob", "role": "user"}]',
    '$[*]'
    COLUMNS (
        name VARCHAR(100) PATH '$.name',
        role VARCHAR(50)  PATH '$.role'
    )
) AS jt;
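For intuition, JSON_TABLE's row-flattening is what you would otherwise do application-side. A minimal Python sketch (the json_table helper here is ours, not a library function):

```python
import json

def json_table(doc, columns):
    """Flatten a JSON array of objects into rows, keeping only the
    requested columns -- roughly what JSON_TABLE('$[*]' COLUMNS ...) does."""
    return [{col: obj.get(col) for col in columns} for obj in json.loads(doc)]

rows = json_table(
    '[{"name": "Alice", "role": "admin"}, {"name": "Bob", "role": "user"}]',
    ["name", "role"],
)
print(rows)  # one dict per array element, columns projected out
```

The win with JSON_TABLE is doing this projection inside the query planner, so the flattened rows can be joined, filtered, and indexed like any other relation.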

PostgreSQL Strengths

pgvector: The AI Integration Story

No database in 2026 has better AI integration than PostgreSQL with pgvector:

-- Store and query embeddings directly in PostgreSQL
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    content TEXT,
    embedding VECTOR(1536),  -- OpenAI ada-002 dimensions
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Semantic similarity search
SELECT content, 1 - (embedding <=> $1) AS similarity
FROM documents
ORDER BY embedding <=> $1
LIMIT 10;
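The <=> operator is pgvector's cosine-distance operator, which is why the query computes similarity as 1 minus it. The same quantity in plain Python, as a sketch for intuition rather than pgvector's actual implementation:

```python
import math

def cosine_distance(a, b):
    """Cosine distance as pgvector's <=> operator defines it:
    1 - cosine similarity of the two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Same direction -> distance 0; orthogonal -> distance 1
print(cosine_distance([1.0, 0.0], [2.0, 0.0]))
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))
```

The HNSW index above exists precisely so PostgreSQL does not have to evaluate this against every row.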

PostGIS: Geospatial built right in:

-- Find all restaurants within 1km of a location
SELECT name, ST_Distance(location, ST_MakePoint(-73.9857, 40.7484)::geography) AS distance_meters
FROM restaurants
WHERE ST_DWithin(location::geography, ST_MakePoint(-73.9857, 40.7484)::geography, 1000)
ORDER BY distance_meters;
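ST_Distance on geography returns meters measured on the spheroid. As a mental model, the spherical haversine formula gives nearly the same number; a Python sketch (PostGIS itself uses a more accurate spheroidal calculation, so results differ slightly):

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters (spherical approximation;
    PostGIS geography uses a spheroid and is a bit more accurate)."""
    r = 6371008.8  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Same point -> 0 m; anything under 1000 m would pass the ST_DWithin filter
d = haversine_m(-73.9857, 40.7484, -73.9857, 40.7484)
print(d)
```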

PostgreSQL Performance Benchmarks (2026)

On a standard cloud instance (8 vCPU, 32GB RAM):

Workload               | PostgreSQL 17 | PostgreSQL 16 | Improvement
Read-heavy (TPC-C)     | 48,200 TPS    | 41,800 TPS    | +15%
Write-heavy (Sysbench) | 31,500 TPS    | 28,900 TPS    | +9%
Analytical (TPC-H)     | 2.8x speedup  | baseline      | +180%
JSON operations        | 12,400 ops/s  | 9,800 ops/s   | +27%

MySQL 9.0: Still Relevant?

MySQL 9.0 arrived with a clear signal: Oracle is taking MySQL seriously again. After years of slow development, MySQL 9.0 brings several long-requested features.

What’s New in MySQL 9.0

VECTOR data type — MySQL joins the AI era:

-- MySQL 9.0 adds a native VECTOR column type
CREATE TABLE items (
    id INT AUTO_INCREMENT PRIMARY KEY,
    description TEXT,
    embedding VECTOR(1536) NOT NULL
);

-- Vector search. Note: DISTANCE() and vector indexes are HeatWave features;
-- community 9.0 ships the type plus STRING_TO_VECTOR()/VECTOR_TO_STRING()
SELECT description,
       DISTANCE(embedding, STRING_TO_VECTOR('[0.1, 0.2, ...]'), 'COSINE') AS dist
FROM items
ORDER BY dist
LIMIT 5;
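Conceptually, a vector search like this ranks every candidate row by distance and keeps the closest k. A brute-force Python sketch of that top-k operation (helper names are ours):

```python
import heapq
import math

def cosine_dist(a, b):
    """1 - cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def top_k(query, rows, k=5):
    """Brute-force nearest neighbours: what ORDER BY dist LIMIT k does
    when no vector index is available to prune the scan."""
    return heapq.nsmallest(k, rows, key=lambda r: cosine_dist(r["embedding"], query))

rows = [
    {"description": "red shoes", "embedding": [1.0, 0.0]},
    {"description": "blue hat",  "embedding": [0.0, 1.0]},
    {"description": "red boots", "embedding": [0.9, 0.1]},
]
print([r["description"] for r in top_k([1.0, 0.0], rows, k=2)])  # ['red shoes', 'red boots']
```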

JavaScript stored procedures (yes, really):

-- MySQL 9.0: run JavaScript in stored programs via GraalVM
-- (an Enterprise Edition / HeatWave feature, not in the community build)
CREATE FUNCTION calculate_discount(price DECIMAL, tier VARCHAR(20))
RETURNS DECIMAL LANGUAGE JAVASCRIPT AS $$
    const discounts = { gold: 0.20, silver: 0.10, bronze: 0.05 };
    return price * (1 - (discounts[tier] || 0));
$$;
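One practical upside of logic this simple is that it is easy to mirror in application code for unit testing. The same function in Python (our own mirror, not something MySQL generates):

```python
def calculate_discount(price: float, tier: str) -> float:
    """Mirror of the JavaScript stored function, handy for testing
    the business rule without a database round-trip."""
    discounts = {"gold": 0.20, "silver": 0.10, "bronze": 0.05}
    return price * (1 - discounts.get(tier, 0))

print(calculate_discount(100.0, "gold"))     # 80.0
print(calculate_discount(100.0, "unknown"))  # 100.0
```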

EXPLAIN improvements:

EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 123;
-- TREE-formatted plan showing estimated vs actual rows and per-step timings

MySQL’s Ongoing Advantages

  1. Sheer ubiquity — every shared hosting plan, every managed cloud service supports it
  2. WordPress/LAMP stack — still dominant for CMS workloads
  3. Group Replication — mature multi-primary clustering
  4. Connector ecosystem — battle-tested drivers for every language

Where MySQL Falls Behind

  • JSON support still lags PostgreSQL (no binary JSONB-style storage, weaker indexing)
  • Native full-text search (FULLTEXT) is far less capable than PostgreSQL’s tsvector
  • Extension ecosystem is sparse compared to PostgreSQL
  • EXPLAIN output was historically confusing (improved in 9.0 but still behind)
  • CTE optimization only matured in later 8.0 point releases

CockroachDB 24: The Distributed SQL Option

CockroachDB is built on a different premise: what if your database was as distributed as your application? It speaks PostgreSQL wire protocol but its internals are built on a distributed key-value store with Raft consensus.

Why CockroachDB?

Traditional PostgreSQL: Single primary → replicas (async/sync)
CockroachDB: Every node is primary → automatic geo-distribution

Automatic geo-distribution:

-- Tell CockroachDB where to place data
ALTER TABLE orders CONFIGURE ZONE USING
  num_replicas = 5,
  constraints = '{"+region=us-east": 2, "+region=eu-west": 2, "+region=ap-south": 1}',
  lease_preferences = '[[+region=us-east]]';

-- Table is now replicated across 3 regions automatically
-- Reads in us-east go to local replica
-- Writes use Raft consensus (slightly higher latency)

Zero-downtime schema changes:

-- In PostgreSQL, some schema changes (type changes, table rewrites)
-- can lock a large table for minutes
-- In CockroachDB, schema changes are always online
ALTER TABLE users ADD COLUMN last_login TIMESTAMPTZ;
-- Runs in the background, zero downtime, no blocking locks

Serializable isolation by default:

-- CockroachDB defaults to SERIALIZABLE (strongest isolation)
-- PostgreSQL defaults to READ COMMITTED
-- This prevents a whole class of subtle bugs

-- This read-modify-write pattern is safe in CockroachDB:
BEGIN;
  SELECT balance FROM accounts WHERE id = $1;
  -- ...application logic using the balance it just read...
  UPDATE accounts SET balance = balance - 100 WHERE id = $1;
  -- No lost updates, no phantom reads, even with concurrent transactions
COMMIT;
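One operational consequence of SERIALIZABLE: the database may abort a conflicting transaction with SQLSTATE 40001, and the client is expected to retry it. A minimal retry loop, sketched in Python with a stand-in exception instead of a real driver error:

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""

def run_transaction(txn_body, max_retries=5):
    """Run txn_body, retrying with jittered backoff when the database
    reports a serialization conflict (SQLSTATE 40001)."""
    for attempt in range(max_retries):
        try:
            return txn_body()
        except SerializationFailure:
            # Back off, then retry the WHOLE transaction from the top
            time.sleep(min(0.1 * 2 ** attempt, 1.0) * random.random())
    raise RuntimeError("transaction gave up after retries")

# Demo: fail twice with a conflict, then succeed on the third attempt
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure()
    return "committed"

print(run_transaction(flaky_txn))  # committed
```

Most CockroachDB client libraries ship a helper like this; the point is that the retry must replay the whole transaction, not just the failing statement.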

CockroachDB Tradeoffs

Pros:

  • True multi-region active-active
  • Automatic failover (no manual promotion)
  • PostgreSQL compatibility (most apps work unchanged)
  • Serializable by default

Cons:

  • Higher latency than single-node PostgreSQL (Raft overhead)
  • More complex to operate
  • Cost — CockroachDB Cloud is expensive
  • Not all PostgreSQL extensions work
  • PL/pgSQL support arrived only recently and covers a subset of PostgreSQL’s

CockroachDB Performance Reality

Metric            | PostgreSQL 17 | CockroachDB 24 | Notes
Single-row read   | 0.8ms         | 2.1ms          | CockroachDB has more overhead
Single-row write  | 1.2ms         | 4.8ms          | Raft consensus cost
Throughput (read) | 48,200 TPS    | 31,000 TPS     | ~35% slower
Multi-region read | N/A           | 3-8ms          | Local replica serving
Failover time     | 30-60s manual | <30s automatic | CockroachDB wins

Feature Comparison Matrix

Feature             | PostgreSQL 17    | MySQL 9.0       | CockroachDB 24
ACID compliance     | ✅ Full          | ✅ Full         | ✅ Full
Default isolation   | READ COMMITTED   | REPEATABLE READ | SERIALIZABLE
JSON support        | ✅ Best-in-class | ✅ Good (8.0+)  | ✅ Good
Vector support      | ✅ pgvector      | ✅ Native (9.0) | ⚠️ Limited
Full-text search    | ✅ Excellent     | ⚠️ Basic        | ⚠️ Basic
Geospatial          | ✅ PostGIS       | ⚠️ Limited      | ⚠️ Limited
Window functions    | ✅ Full          | ✅ Full         | ✅ Full
CTEs                | ✅ Full          | ✅ (8.0+)       | ✅ Full
Partitioning        | ✅ Declarative   | ✅ Good         | ✅ Automatic
Multi-region        | ❌ Manual        | ❌ Manual       | ✅ Built-in
Zero-downtime DDL   | ⚠️ Partial       | ⚠️ Partial      | ✅ Always
Extension ecosystem | ✅ Massive       | ⚠️ Limited      | ⚠️ Limited

Scaling Strategies

PostgreSQL Scaling

# The PostgreSQL scaling stack (2026)
Layer 1 - Connection pooling:
  - PgBouncer (transaction pooling)
  - pgcat (newer, Rust-based)

Layer 2 - Read scaling:
  - Streaming replication (sync/async)
  - Patroni for HA and failover

Layer 3 - Write scaling:
  - Citus (horizontal sharding)
  - pg_partman (time/range partitioning)

Layer 4 - Caching:
  - pg_prewarm
  - Application-level Redis cache
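For Layer 1, a minimal PgBouncer configuration might look like this (a sketch; the database name, paths, and pool sizes are placeholders to adapt):

```ini
; pgbouncer.ini - transaction pooling in front of PostgreSQL
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; reuse server connections between transactions
default_pool_size = 20
max_client_conn = 1000
```

Transaction pooling is what lets a few dozen server connections absorb thousands of client connections, at the cost of losing session-level state (prepared statements, SET, advisory locks) across transactions.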

MySQL Scaling

# MySQL 9.0 scaling options
Replication:
  - Traditional async replication (simple)
  - Group Replication (multi-primary)
  - MySQL InnoDB Cluster (HA + management)

Sharding:
  - ProxySQL (middleware routing)
  - Vitess (YouTube's MySQL sharding, now CNCF)

Cloud:
  - AWS Aurora MySQL (up to 128TB, 15 read replicas)
  - PlanetScale (serverless MySQL via Vitess)

CockroachDB Scaling

-- CockroachDB scales almost automatically
-- Add nodes: cockroach start --join=...
-- Data rebalances automatically
-- No manual sharding or routing

-- For geo-distribution, pin data (the partition column must be a prefix
-- of the index being partitioned; newer CockroachDB versions also offer
-- REGIONAL BY ROW tables, which automate this):
ALTER TABLE user_sessions
  PARTITION BY LIST (region) (
    PARTITION us VALUES IN ('us'),
    PARTITION eu VALUES IN ('eu'),
    PARTITION ap VALUES IN ('ap')
  );

Migration Considerations

To PostgreSQL

# From MySQL to PostgreSQL
pip install pgloader

pgloader mysql://user:pass@localhost/mydb \
         postgresql://user:pass@localhost/mydb

# pgloader handles:
# - Data type mapping
# - Index creation
# - Constraint migration
# - Default value conversion

To CockroachDB

# CockroachDB is PostgreSQL-compatible
# Most pg_dump exports work directly:
pg_dump -h localhost -U user mydb > dump.sql
cockroach sql --url="postgres://..." < dump.sql

# MOLT (Migrate Off Legacy Technology) - the official migration toolkit
molt fetch --source "postgresql://..." --target "postgres://..."

Recommendation Guide

Use PostgreSQL When:

  • Building a new application (default choice)
  • You need rich extensions (PostGIS, pgvector, TimescaleDB)
  • Your team knows SQL and wants maximum features
  • Single or dual-region deployment
  • You want the largest community and most Stack Overflow answers

Use MySQL When:

  • You have an existing LAMP/WordPress stack
  • Your team has deep MySQL expertise
  • Cost is a primary concern (MySQL HeatWave is competitive)
  • You need Oracle commercial support
  • Migrating from MySQL is too risky/expensive

Use CockroachDB When:

  • Multi-region is a hard requirement
  • You cannot tolerate any unplanned downtime
  • Compliance requires data residency in specific regions
  • You’re building a globally distributed application
  • Your SLA requires sub-30s failover

The 2026 Summary

PostgreSQL is still the default best choice for most new applications. MySQL 9.0 is a legitimate option with its new vector support and improved developer experience. CockroachDB solves a specific, important problem (multi-region distributed SQL) better than anything else, but comes with real tradeoffs in latency and cost.

The real story of 2026 is convergence: all three databases now have vector support, JSON improvements, and better cloud-native tooling. The differentiators have narrowed to ecosystem, operations complexity, and specific features — not raw capability.

Pick your battles, not your religion.

