Xata built analytics capabilities directly on Postgres using materialized views and pg_cron. Builders can now skip expensive OLAP databases and leverage vanilla Postgres for warehouse workloads.

Run your analytics warehouse on vanilla Postgres with materialized views instead of managing a separate OLAP database, reducing operational complexity and infrastructure costs.
Signal analysis
Xata has chosen to build its product analytics warehouse directly on vanilla Postgres with materialized views instead of adopting a traditional OLAP stack. This is a significant architectural choice that challenges the conventional wisdom of separating transactional databases from analytical workloads. Rather than spinning up separate infrastructure for ClickHouse, BigQuery, or Redshift, Xata leverages Postgres's native materialized views - pre-computed query results that refresh on a schedule.
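To make the mechanism concrete, here is what a materialized view looks like in plain Postgres. The table and column names are illustrative, not Xata's actual schema:

```sql
-- Hypothetical events table; names are illustrative, not Xata's schema.
CREATE MATERIALIZED VIEW daily_active_users AS
SELECT
    date_trunc('day', occurred_at) AS day,
    count(DISTINCT user_id)        AS active_users
FROM events
GROUP BY 1
WITH DATA;  -- populate the view immediately at creation time

-- Later, recompute and store the result set again:
REFRESH MATERIALIZED VIEW daily_active_users;
```

Unlike a regular view, the result set is stored on disk, so dashboard queries read pre-aggregated rows instead of re-scanning the raw event table on every request.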
The implementation uses pg_cron for automated view refresh scheduling, meaning analytics pipelines run on Postgres' own task scheduler. This eliminates the need for external orchestration tools or ETL infrastructure. For builders, this translates to fewer services to manage, fewer credential rotations, and fewer points of failure. The warehouse runs on the same database engine your application already uses.
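With the pg_cron extension, that refresh can be scheduled from inside the database itself, with no external orchestrator. A minimal sketch, assuming a materialized view named `daily_active_users` (an illustrative name, not Xata's):

```sql
-- Requires pg_cron to be loaded (shared_preload_libraries = 'pg_cron').
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Refresh the analytics view every hour, on the hour.
SELECT cron.schedule(
    'refresh-daily-active-users',                      -- job name
    '0 * * * *',                                       -- standard cron syntax
    $$REFRESH MATERIALIZED VIEW daily_active_users$$   -- command to run
);

-- Inspect scheduled jobs and their recent runs:
SELECT jobid, jobname, schedule FROM cron.job;
SELECT status, start_time FROM cron.job_run_details
ORDER BY start_time DESC LIMIT 5;
```

The `cron.job_run_details` table doubles as a lightweight audit log for the pipeline, which is exactly the kind of thing a separate ETL tool would otherwise provide.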
Xata also implemented copy-on-write branches for the analytics layer. This means you can create isolated, lightweight copies of your data for analytics workloads without duplicating storage or impacting production queries. Each branch operates independently, and branches merge back to main when analytics are finalized.
If you're currently managing separate transactional and analytical databases, this model offers a different cost structure. You're trading OLAP database licensing and infrastructure for Postgres storage and compute. Materialized views consume disk space, but they're significantly cheaper than maintaining a parallel warehouse system. Refresh cycles run on your existing Postgres cluster, so you're not buying additional compute resources - just using what you already have more efficiently.
The branch mechanism changes how you approach analytics development. Instead of writing views against production data and hoping they don't block queries, you work on isolated branches. This is particularly valuable for teams iterating on complex analytics - you don't need to worry about lock contention or expensive queries blocking users. Once the analytics are working, you merge the branch and production data flows through the same views.
One practical consideration: materialized views need refresh scheduling. With pg_cron, you define when views refresh. This means your analytics have a defined staleness - data isn't real-time, but it's deterministic. For product analytics this is usually acceptable. For operational dashboards requiring sub-minute freshness, traditional OLAP still has advantages.
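One wrinkle worth knowing: a plain `REFRESH MATERIALIZED VIEW` takes an exclusive lock on the view, blocking readers for the duration of the rebuild. For views queried continuously by dashboards, Postgres offers `REFRESH ... CONCURRENTLY`, which requires a unique index on the view. A sketch, assuming an illustrative view `daily_active_users` with a `day` column:

```sql
-- CONCURRENTLY requires a unique index on the materialized view.
CREATE UNIQUE INDEX ON daily_active_users (day);

-- Readers keep seeing the old contents while the new result set is
-- computed; changed rows are then applied without locking out SELECTs.
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_active_users;
```

The concurrent variant is slower than a plain refresh, so it's a trade between refresh cost and read availability.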
This architecture makes sense if several conditions align: your data volume is within Postgres's comfort zone (low terabytes rather than petabytes), your analytics team is small to medium-sized, and you're already running Postgres in production. If you're managing several different databases already, consolidating to one reduces operational surface area significantly. Your on-call rotation doesn't need to understand a separate query language and failure mode for each system.
It's less ideal if you need true real-time analytics across massive datasets, or if your query patterns are fundamentally different between OLAP and OLTP. Postgres excels at balanced workloads; pure analytical queries at extreme scale still benefit from purpose-built engines. But for most SaaS applications with product analytics, event data, and reporting workloads, vanilla Postgres with materialized views covers the requirement effectively.
The copy-on-write branch feature matters most for teams experimenting with analytics schemas. If you're frequently adding new metrics or event types, or creating experimental dashboards, branches let you iterate without impacting production. Once the analytics stabilize, they live on as permanent materialized views.