How we migrated a production SaaS off Supabase


I run MessengerFlow, a Facebook Messenger outreach automation platform. It's been running on Supabase since day one: auth, database, storage, the whole thing. In January 2026 I migrated everything off Supabase to a self-hosted stack. This post covers why, how, and what I'd do differently.

I want to be clear: this is not a "Supabase is bad" post. Supabase got us from zero to a working product fast. If you're starting a project today and need to move quickly, Supabase is a solid choice. But there's a difference between a good starting point and a good long-term foundation, and for us it had stopped being the latter.

Why we left

Vendor lock-in was real

Every part of our stack depended on Supabase. Auth was Supabase Auth (GoTrue). Database was their managed Postgres. The frontend used @nuxtjs/supabase with direct client calls and Row-Level Security. The backend called serverSupabaseClient() in every API route.

That sounds fine until something breaks and you realize you can't swap any single piece without rewriting everything. When we wanted to add custom auth flows or change how we handle sessions, we had to work within Supabase's constraints. When their Postgres had connection issues, we couldn't just point at a different database. Everything was one package deal.

Pricing didn't scale for our use case

Supabase prices partly on row counts and database size. We had over a million rows across 20+ tables and that number was growing fast. The pricing model meant our costs scaled directly with usage in ways that were hard to predict month to month. At some point you start doing math on whether managed Postgres is worth it when a Hetzner box with more resources costs a fraction of the price.

We also had 73 stored procedures in the database, most of which were dead code from earlier iterations. Supabase doesn't make it easy to audit what's actually running on your database, and the dashboard gets cluttered fast.

Speed and latency

Supabase runs managed Postgres on AWS, and our instance was in us-east-1. Our server is in Europe. Every database call had transatlantic latency baked in. For a product that processes campaigns in tight loops (fetch lead, send message, update status, repeat), that latency adds up.

With self-hosted Postgres on the same network as our application server, database calls went from ~80ms to under 1ms. That's not an exaggeration. The difference was immediately visible in campaign processing speed.

Availability

I don't want to pile on here because every service has outages, but Supabase has had several that hit us directly. When your auth provider goes down, your entire app goes down. When your database is unreachable, nothing works. We had days where campaign processing just stopped because the database connection dropped and nothing reconnected cleanly.

When it's your own Postgres, you can set up connection pooling, automatic reconnection, health checks. When it's managed, you open a status page and wait.

What we migrated to

| Component | Before | After |
| --- | --- | --- |
| Auth | Supabase Auth (GoTrue) | Keycloak OIDC |
| Database | Supabase PostgreSQL (us-east-1) | Self-hosted PostgreSQL |
| Frontend auth | @nuxtjs/supabase module | Custom composable + jose JWT |
| API calls | supabaseClient.from('table') | postgres tagged template literals |
| Hosting | Netlify | Docker on Hetzner |

We went with Keycloak for auth because it's self-hosted, supports OIDC, handles social login (Google and Twitter as built-in providers, Discord via a generic OIDC configuration), and has admin APIs for user management. It's not the prettiest thing to configure, but once it's running it just works.

For the database we used the postgres npm package with tagged template literals. No ORM. Every query is a parameterized SQL string. We had Drizzle ORM briefly during the migration but ripped it out because the abstraction wasn't worth the overhead for our use case.
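For anyone who hasn't used the postgres package: the tagged template compiles interpolated values into `$1`, `$2` placeholders, so queries are parameterized by construction. A minimal sketch of the shared connection setup (option values and table names here are illustrative, not our production config):

```javascript
// db.js — one shared connection pool for all API routes
import postgres from 'postgres'

// postgres.js connects lazily and pools internally; these options
// cover the reconnection behavior we missed on managed Supabase.
const sql = postgres(process.env.DATABASE_URL, {
  max: 10,            // pool size
  idle_timeout: 30,   // close idle connections after 30 seconds
  connect_timeout: 10 // fail fast instead of hanging on a dead host
})

export default sql

// Usage in a route — interpolations become bind parameters,
// never string concatenation:
// const rows = await sql`
//   SELECT id, name FROM campaigns
//   WHERE owned_by = ${userId} AND status = ${'active'}
// `
```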

How we did it

Database migration

We exported all data from Supabase's Postgres and imported it into our self-hosted instance: over a million rows across 20+ tables. The tricky part was user IDs. Supabase Auth generates UUIDs for users, and every owned_by foreign key in our database references those IDs. Keycloak generates its own UUIDs, so we had to build a mapping between old Supabase IDs and new Keycloak IDs, matched by email, and then update every users.id and owned_by column across the entire database.

We wrote two migration scripts. The first created Keycloak users from a Supabase Auth export (over 1,200 accounts), with email pre-verified and a password reset required on first login. The old Supabase ID was stored as a Keycloak user attribute. The second script fetched all Keycloak users, matched them to Supabase users by email, built the ID mapping, and ran UPDATE queries across the users table (with FK cascade) and every other table with an owned_by column.
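The heart of the second script is just an email join between the two user exports. A sketch of the mapping step (function and field names are simplified; the real script also had to deal with the unmatched accounts it collects here):

```javascript
// Build { oldSupabaseId -> newKeycloakId }, matching accounts by
// normalized email. Emails with no Keycloak counterpart are
// collected for manual review instead of being silently dropped.
function buildIdMap(supabaseUsers, keycloakUsers) {
  const byEmail = new Map(
    keycloakUsers.map(u => [u.email.toLowerCase(), u.id])
  )
  const map = {}
  const unmatched = []
  for (const u of supabaseUsers) {
    const newId = byEmail.get(u.email.toLowerCase())
    if (newId) map[u.id] = newId
    else unmatched.push(u.email)
  }
  return { map, unmatched }
}

// The UPDATEs then run once per table with an owned_by column, e.g.:
// for (const [oldId, newId] of Object.entries(map)) {
//   await sql`UPDATE campaigns SET owned_by = ${newId} WHERE owned_by = ${oldId}`
// }
```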

Auth migration

This was the biggest piece of work. We had to:

  1. Remove the @nuxtjs/supabase module entirely
  2. Write a custom useAuth() composable that handles login, logout, token refresh, and registration against Keycloak
  3. Implement JWT verification on the server side using jose against Keycloak's JWKS endpoint
  4. Replace every serverSupabaseClient() call with direct SQL using requireAuth(event) + getUserId(user)
  5. Build a global fetch interceptor that catches 401s, refreshes the access token, and retries the request
  6. Set up social login with Google, Twitter, and Discord through Keycloak's identity provider system
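Step 5 is the piece that keeps the app usable across token expiry. Stripped of the Nuxt plumbing, the interceptor is a small wrapper like this (names are ours and simplified; the real version also deduplicates concurrent refreshes so a burst of 401s triggers only one token request):

```javascript
// Wrap a fetch implementation: on a 401, refresh the access token
// once via the provided callback, then retry the original request
// with the new token.
function createAuthFetch(fetchImpl, getToken, refreshToken) {
  return async function authFetch(url, options = {}) {
    const withAuth = token => ({
      ...options,
      headers: { ...(options.headers || {}), Authorization: `Bearer ${token}` }
    })
    let res = await fetchImpl(url, withAuth(getToken()))
    if (res.status === 401) {
      const newToken = await refreshToken() // hits Keycloak's token endpoint
      res = await fetchImpl(url, withAuth(newToken))
    }
    return res
  }
}
```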

The social login was surprisingly annoying. Google and Twitter are built-in Keycloak providers, but Discord isn't. You have to configure it as a generic OIDC provider and use Discord's OIDC-compliant endpoints (/api/oauth2/authorize, /api/oauth2/token, /api/oauth2/userinfo), not the REST API endpoint (/api/users/@me) that most tutorials show. The REST endpoint returns id instead of sub, which breaks Keycloak's OIDC processing silently.

One thing that saved time: we reused the same OAuth client credentials from Supabase. Google, Twitter, and Discord don't care which auth provider is handling the redirect, as long as the redirect URI matches. So we pointed the same client IDs and secrets at Keycloak and it worked. The catch was Keycloak's default "first broker login" flow. When an existing user signs in with a social provider for the first time after the migration, Keycloak tries to send them to its own UI to link the accounts manually. For a SaaS product that's a terrible experience. We had to create a custom auth flow that disables the Review Profile step and auto-links accounts by email instead. Since all three providers verify email addresses, this is safe and the user never sees a Keycloak page.

API rewrite

Every single API route was rewritten. We went from this:

const client = await serverSupabaseClient(event)
const { data, error } = await client.from('campaigns').select('*').eq('owned_by', userId)

To this:

const user = await requireAuth(event)
const userId = getUserId(user)
const data = await sql`SELECT * FROM campaigns WHERE owned_by = ${userId}`

Input validation went from nothing to Zod schemas on every endpoint:

const body = await readValidatedBody(event, b => schema.safeParse(b))

This was tedious but worth it. We had 60+ API routes and every one of them got rewritten. The upside: we dropped all the Supabase client dependencies and RLS policies. The downside: it took about a week of solid work.
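To show the shape of that validation, here is a hand-rolled stand-in for what a Zod schema's safeParse does (the real code uses z.object(...).safeParse; the endpoint and field names here are illustrative):

```javascript
// Minimal stand-in for a Zod schema: safeParse returns
// { success: true, data } or { success: false, error }.
const createCampaignSchema = {
  safeParse(body) {
    if (typeof body?.name !== 'string' || body.name.length === 0) {
      return { success: false, error: 'name must be a non-empty string' }
    }
    if (!Number.isInteger(body?.dailyLimit) || body.dailyLimit < 1) {
      return { success: false, error: 'dailyLimit must be a positive integer' }
    }
    // Only pass through known fields — unexpected keys are dropped.
    return { success: true, data: { name: body.name, dailyLimit: body.dailyLimit } }
  }
}

// In the route, exactly as in the one-liner above:
// const body = await readValidatedBody(event, b => createCampaignSchema.safeParse(b))
```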

Database cleanup

While we were in there, we normalized the schema properly. The old database had some rough spots:

  • Lead collections stored leads as a uuid[] array column. We normalized that into a proper leads table.
  • Campaigns stored assigned accounts as a uuid[] array. Normalized into a campaign_accounts junction table.
  • Campaign targeting was a deeply nested JSONB column called mapping where each account email mapped to {targets: [...], cursor: N}. We replaced that with a campaign_leads table with proper status tracking.
  • Daily sending limits were stored in a limits JSONB column that got overwritten daily by the worker. One time it hit 2.8MB. We replaced it with a deterministic algorithm based on a seed value.
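The replacement for the limits blob is worth spelling out: instead of storing each day's limit, we derive it from a seed, so the worker never persists anything. A sketch of the idea (the hash choice and the range here are illustrative, not our production constants):

```javascript
// Derive a stable pseudo-random daily limit from (accountId, date).
// Same inputs always produce the same output, so nothing needs to
// be written back to the database — no JSONB column to overwrite.
function dailyLimit(accountId, dateStr, min = 40, max = 60) {
  // FNV-1a hash of the combined seed string
  let h = 0x811c9dc5
  for (const ch of `${accountId}:${dateStr}`) {
    h ^= ch.codePointAt(0)
    h = Math.imul(h, 0x01000193) >>> 0
  }
  return min + (h % (max - min + 1))
}
```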

We ran 19 database migrations total. The campaigns table went from 38 columns to 23. We dropped 62 dead stored procedures out of 73 (yes, we only needed 11). Dropped ~10 unused tables.

The final diff: 308 files changed, 9,750 lines added, 19,259 lines removed

The codebase got almost 10,000 lines shorter while gaining significant new functionality. Most of the removed lines were Supabase client code, dead stored procedures, and the old worker monolith.

What went wrong

We found 44 bugs during production testing. Some highlights:

  • PostgreSQL doesn't allow FOR UPDATE with DISTINCT ON. Had to wrap it in a subquery.
  • The JavaScript postgres driver returns bigint and numeric columns as strings (they can exceed JavaScript's safe integer range), and COUNT(*) is a bigint. Code like count + limit was producing "5060" instead of 110. We had to add explicit Number() coercion everywhere.
  • Logout didn't actually end the Keycloak session because the refresh token wasn't being sent in the POST body. Users would "log out" and immediately get logged back in.
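The string-coercion bug is easy to reproduce in isolation. Once a count column comes back as a string, JavaScript's + operator silently switches from addition to concatenation:

```javascript
// What the driver hands back for a COUNT(*) (bigint) column:
const count = '50'   // a string, not a number
const limit = 60

// '+' with a string operand concatenates instead of adding.
const broken = count + limit          // '5060'
const fixed = Number(count) + limit   // 110
```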

Each of these took anywhere from 10 minutes to 3 hours to diagnose and fix. The production testing phase took about 4 days.

What I'd do differently

Start with your own auth from day one. Supabase Auth is convenient but it's the hardest thing to migrate away from. Every route, every client call, every token validation is coupled to it. If I were starting over, I'd use Keycloak or even a simpler JWT setup from the start.

Don't use JSONB for relational data. The mapping JSONB column for campaign targeting was the single worst architectural decision in the project. It made queries unreadable, prevented proper indexing, and was impossible to debug. If your data has relationships, use tables.

Write API routes with raw SQL from the start. Supabase's client library is nice for prototyping, but you lose visibility into what's actually hitting the database. Tagged template literals with the postgres package are just as readable and you always know exactly what query is running.

Was it worth it?

Campaign processing went from randomly stopping to running continuously with automatic retries and monitoring. Database queries went from 80ms to under 1ms. We haven't had an auth outage since the migration. Our hosting costs are predictable and lower. We can modify any part of the stack without asking permission from a third-party dashboard.

If you're running a production SaaS on Supabase and feeling the same friction, it's not as scary as it looks. The hardest part is the auth migration, and even that is mostly mechanical once you understand OIDC.

#supabase #migration #infrastructure #postgresql #keycloak