Build a Personal Dashboard with Supabase: Sync 7 APIs for Free
This is a complete guide to building a personal analytics dashboard that syncs health data (Whoop, Yazio), content analytics (YouTube, Twitter, TikTok, Reddit, LinkedIn), and task management (Obsidian) into a single Supabase database — with automated pipelines, zero infrastructure cost, and self-healing data syncs. The entire system runs on free tiers: Supabase for the database and scheduled Edge Functions, Vercel for the Next.js frontend. If you've ever wanted to build a quantified self dashboard without paying for servers, this is how I did it.
I use Whoop for recovery. Yazio for food tracking. YouTube Studio for video analytics. Twitter/X's built-in dashboard. TikTok analytics. Reddit stats. LinkedIn, which barely shows you anything useful.
That's 7 apps. 7 logins. 7 different definitions of what "this week" means. 7 dashboards that each show me one slice of what's happening.
I got tired of tab-switching between my health data and my content analytics. So I built a personal dashboard called Life OS — one screen, one database, automated data pipelines that pull everything together using Supabase, pg_cron, and Next.js. It's the same kind of developer-first approach I use for my Claude Code setup — automate everything, keep it free, make it self-healing.
Here's the full build log: the architecture, the decisions, and what I'd do differently.
What You'll Need
Before you start building, here's what you need:
- Supabase account (free tier) — your database, scheduler, and serverless runtime
- Vercel account (free tier) — hosts the Next.js frontend
- Whoop band — or any wearable with an OAuth API (Whoop Developer docs)
- Basic TypeScript knowledge — Edge Functions run on Deno, but the syntax is standard TS
- Node.js 18+ and a package manager (npm, pnpm, or yarn)
You don't need to know TypeScript deeply — I had Claude write all the code while I focused on architecture and data modeling. But you should be comfortable reading it.
What does a personal analytics dashboard track?
Life OS is a quantified self dashboard that consolidates health metrics, content analytics, and task management into a single screen. It tracks recovery, sleep, nutrition, social media growth, and tasks — replacing 7 separate apps with one automated view.
It has three sections:
Body: recovery, sleep, workouts, nutrition. All auto-synced from Whoop and Yazio.
Brand: YouTube, Twitter, TikTok, Reddit, LinkedIn. Followers, views, engagement, all in one place.
Tasks: a Kanban board that syncs with my Obsidian vault. I manage tasks in markdown files on my laptop, they show up on the dashboard. I drag a card on the dashboard, the file updates.
The whole thing runs on Next.js, deployed on Vercel, with Supabase as the single source of truth.
Zero-cost architecture: Supabase + pg_cron + Edge Functions
The architecture uses Supabase's pg_cron extension to schedule Edge Functions that fetch data from 7 APIs at set intervals. The database itself is both the scheduler and the data store — no external servers, no Lambda functions, no GitHub Actions cron jobs. All components run on Supabase and Vercel free tiers. Total monthly cost: $0. Here's how it fits together:
Whoop    --(every 15 min)--+
Yazio    --(every hour)----+
YouTube  --(daily)---------+      +--------------+      +---------------+
TikTok   --(daily)---------+----->|   Supabase   |----->|    Life OS    |
Reddit   --(daily)---------+      |  (Postgres)  |      |   Dashboard   |
Twitter  --(CSV upload)----+      |   pg_cron    |      |   (Next.js)   |
LinkedIn --(manual)--------+      |   Edge Fns   |      +---------------+
                                  +--------------+              |
                                                                v
Obsidian vault --(file watcher)--------------------------> Task sync
pg_cron is a Postgres extension built into Supabase that lets you schedule SQL statements or HTTP requests directly from the database. I use it to trigger Supabase Edge Functions — serverless TypeScript functions that run on Deno — on a schedule. Those functions fetch data from each API and write it back to the same Postgres database they were called from.
Here's what the pg_cron schedule looks like for the Whoop sync:
-- Schedule Whoop sync every 15 minutes
select cron.schedule(
'whoop-sync',
'*/15 * * * *',
$$
select net.http_post(
url := 'https://your-project.supabase.co/functions/v1/whoop-sync',
headers := jsonb_build_object(
'Authorization', 'Bearer ' || current_setting('app.settings.service_role_key')
),
body := '{}'::jsonb
);
$$
);
-- Schedule Yazio sync every hour
select cron.schedule(
'yazio-sync',
'0 * * * *',
$$
select net.http_post(
url := 'https://your-project.supabase.co/functions/v1/yazio-sync',
headers := jsonb_build_object(
'Authorization', 'Bearer ' || current_setting('app.settings.service_role_key')
),
body := '{}'::jsonb
);
$$
);
The database is both the scheduler and the data store. Total extra infrastructure cost: $0.
Each data source syncs at a different frequency based on how often the underlying data actually changes. Whoop recovery scores update throughout the day, so 15-minute intervals catch the latest values. Social platforms only need daily pulls.
Syncing Whoop and Yazio health data into one dashboard
Both Whoop and Yazio write to the same Postgres table using idempotent upserts keyed on date — one row per day, no JOINs needed. Whoop fills recovery columns, Yazio fills nutrition columns. One query returns a complete picture of any day.
This single-table design is the decision I'm proudest of in the entire project.
The Body section pulls from two sources. Whoop gives me recovery score, HRV, resting heart rate, sleep stages, workout strain, calories burned. Yazio gives me calories eaten, protein, carbs, fat, water.
| Data Source | Metrics Collected | Sync Method | Frequency |
|---|---|---|---|
| Whoop | Recovery score, HRV, resting heart rate, sleep stages, strain, calories burned | OAuth API | Every 15 min |
| Yazio | Calories eaten, protein, carbs, fat, water intake | Reverse-engineered mobile endpoints | Every hour |
| YouTube | Per-video views, channel stats, daily view counts | YouTube Data API v3 | Daily |
| TikTok | Views, likes, shares, comments, follower count | TikTok API | Daily |
| Reddit | Post karma, comment karma, per-post stats | Reddit API | Daily |
| Twitter/X | Per-tweet stats, daily account metrics | CSV file upload (drag-and-drop) | Manual |
| LinkedIn | Follower count | Manual entry in dashboard | Manual |
The best part of this is Energy Balance. One chart showing intake (from Yazio) vs. burn (from Whoop) over time. Deficit shows in green, surplus in red. I can see at a glance whether I'm eating enough for my training load. Before this, I'd open Yazio, look at calories in, then open Whoop, look at calories burned, and try to do math in my head. Now it's one line minus another line on the same graph.
Whoop syncs every 15 minutes because recovery scores update throughout the day as the band processes more data. Yazio syncs hourly because food logging is less frequent.
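The Energy Balance chart falls out of the single-table design almost for free: it's a per-day subtraction. Here's a minimal sketch of the computation — the row shape and column names (`calories_eaten`, `calories_burned`) are my assumptions based on the table described above, not the exact schema:

```typescript
// Hypothetical shape of a daily_body row; column names are illustrative.
interface DailyBodyRow {
  date: string;
  calories_eaten: number;  // filled by the Yazio sync
  calories_burned: number; // filled by the Whoop sync
}

interface EnergyBalancePoint {
  date: string;
  // Negative = deficit (rendered green), positive = surplus (rendered red).
  balance: number;
}

// Intake minus burn per day — one line minus another line, ready to chart.
function energyBalance(rows: DailyBodyRow[]): EnergyBalancePoint[] {
  return rows.map((r) => ({
    date: r.date,
    balance: r.calories_eaten - r.calories_burned,
  }));
}
```

Because both sources write to the same row, this needs no JOIN and no timestamp reconciliation — just a map over the query result.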
Why idempotent upserts matter for data pipelines
Every sync operation is an idempotent upsert — a database write that inserts a new row or updates the existing one if it already exists. This means the same data can arrive twice without creating duplicates. If a sync fails at 2 AM, the next one at 2:15 catches it. I've never had to manually fix anything.
Here's the upsert pattern every Edge Function uses:
// Every Edge Function follows this pattern
const { error } = await supabase
.from('daily_body')
.upsert(
{
date: today, // unique key
recovery_score: whoop.recovery, // [!code highlight]
hrv: whoop.hrv,
resting_hr: whoop.restingHeartRate,
sleep_hours: whoop.sleepDuration / 3600,
strain: whoop.strain,
calories_burned: whoop.caloriesBurned,
},
{ onConflict: 'date' } // [!code highlight]
);
This pattern eliminates an entire class of data integrity bugs. No deduplication logic, no conflict resolution, no "which version is correct" problems. The latest write wins, and that's fine because the source data doesn't change retroactively.
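The semantics are easy to see in isolation. This is a toy in-memory model of the keyed write — not the Supabase client, just the behavior the `onConflict: 'date'` upsert guarantees: same key twice leaves one row, the latest write wins, and two sources can fill different columns of the same day:

```typescript
// Toy model of an upsert keyed on `date`.
type Row = { date: string; [column: string]: string | number };

function toyUpsert(table: Map<string, Row>, row: Row): void {
  const existing = table.get(row.date);
  // Merge so Whoop and Yazio each fill their own columns on the same day;
  // re-delivered data simply overwrites the previous values.
  table.set(row.date, existing ? { ...existing, ...row } : row);
}
```

Run the Whoop write twice and the Yazio write once for the same date, and you still end up with exactly one merged row — which is why a failed sync needs no cleanup, only a retry.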
For Yazio, there's a catch. They don't have a public API. I'm using their internal mobile app endpoints that I found by digging through open-source projects that had reverse-engineered it. Could break any day. But it's been stable for weeks, and the alternative is manually typing my macros into a spreadsheet.
Tracking social media analytics across 5 platforms in one dashboard
The Brand section consolidates YouTube, Twitter, TikTok, Reddit, and LinkedIn analytics into a single page with cross-platform comparison charts. Each platform's data flows through its own Edge Function into platform-specific Supabase tables, then the dashboard queries them all at once.
This was the part I kept procrastinating on. Opening YouTube Studio, then Twitter analytics, then TikTok, then Reddit, trying to remember which platform had what numbers. Each one formats data differently. Each one has different time ranges.
YouTube syncs daily. Per-video metrics, channel stats, daily view counts.
TikTok syncs daily. Views, likes, shares, comments, follower count.
Reddit syncs daily. Post karma, comment karma, per-post performance.
Twitter doesn't use an API at all. I just drag and drop a CSV export from Twitter's analytics page onto the dashboard. It auto-detects whether it's a content CSV (per-tweet stats) or an overview CSV (daily account metrics) and imports accordingly. This was a deliberate choice. Twitter's API costs money I didn't want to spend. CSV works fine.
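The auto-detection can be sketched from the header row alone. The column names below are illustrative guesses at what distinguishes the two export types — the real CSV headers may differ, so treat them as placeholders:

```typescript
// Classify a dropped Twitter/X analytics export by its header row.
// Header column names here are assumptions, not the exact export format.
type CsvKind = "content" | "overview" | "unknown";

function detectCsvKind(headerLine: string): CsvKind {
  const cols = headerLine
    .toLowerCase()
    .split(",")
    .map((c) => c.trim().replace(/^"|"$/g, ""));
  if (cols.includes("tweet id")) return "content"; // per-tweet stats
  if (cols.includes("date") && cols.includes("impressions")) {
    return "overview"; // daily account metrics
  }
  return "unknown";
}
```

Whatever the real header names are, the pattern is the same: branch on a column that only one export type contains, then hand the rows to the matching importer.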
LinkedIn is manual entry for now. Their API is restrictive and the data updates slowly anyway. I just update the follower count when I check it.
The three main sections of Life OS. Each section pulls from different data sources but presents everything in a unified interface.
Two-way Obsidian task sync: Markdown to Kanban board
The task sync uses a Python file watcher daemon that monitors an Obsidian vault directory and mirrors changes to Supabase in both directions. Create a task in Obsidian as a markdown file, it appears on the dashboard Kanban board. Drag a card to "done" on the dashboard, the markdown file updates automatically.
I manage my tasks as markdown files in Obsidian with YAML frontmatter (status, area, type, due date). The dashboard has a Kanban board where I can drag cards between columns. Obsidian wins on conflicts.
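Reading those task files starts with the frontmatter. Here's a minimal sketch that handles only the flat `key: value` fields described above — a real vault would warrant a proper YAML parser, and the function name is mine, not from the actual watcher:

```typescript
// Parse flat "key: value" YAML frontmatter from a markdown task file.
// Handles only simple scalar fields like status, area, type, and due.
function parseFrontmatter(markdown: string): Record<string, string> {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  const fields: Record<string, string> = {};
  if (!match) return fields; // no frontmatter block at the top of the file
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return fields;
}
```

The watcher can then diff these fields against the row in Supabase and push whichever side changed — with Obsidian winning ties, per the conflict rule above.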
Here's the basic skeleton every sync Edge Function in the system follows — fetch, transform, upsert:
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts';
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';
serve(async (req: Request) => {
const supabase = createClient(
Deno.env.get('SUPABASE_URL')!,
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
);
// 1. Fetch data from external API // [!code highlight]
const data = await fetchFromApi();
// 2. Transform to match your schema // [!code highlight]
const rows = transformData(data);
// 3. Upsert (idempotent write) // [!code highlight]
const { error } = await supabase
.from('your_table')
.upsert(rows, { onConflict: 'unique_key' });
return new Response(
JSON.stringify({ success: !error, count: rows.length }),
{ headers: { 'Content-Type': 'application/json' } }
);
});
This is probably the most over-engineered part of the whole system. But I live in Obsidian for notes and planning, and I wanted tasks to be there too without giving up a visual board.
Building with AI: What Claude coded vs. what I designed
Claude wrote 100% of the TypeScript — every Edge Function, the OAuth flows, the dashboard components, the file watcher. I don't know TypeScript. This is the same workflow I describe in how I give Claude Code permanent memory — I handle the decisions, Claude handles the implementation.
Here's what I actually did:
I chose the architecture. pg_cron + Edge Functions over a separate server. I'd been reading Supabase docs and realized the database itself could be the scheduler. No extra moving parts.
I designed the data model. Claude's first version had separate tables with a JOIN view. I looked at it and said no, one table, both sources write to it. Simpler.
I made the pragmatic trade-offs. CSV upload for Twitter instead of paying for API access. Reverse-engineered Yazio endpoints instead of manual data entry. Sync frequencies based on how each data source actually behaves.
I caught the edge cases. Whoop recovery data arrives late — your score sometimes doesn't show up until mid-afternoon. I told Claude to use a 48-hour lookback window instead of just fetching today. I know this from wearing the band for months.
My job was architecture, research, and knowing when something was wrong. Claude's job was writing the code that made the decisions real.
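The 48-hour lookback is tiny in code. A sketch, assuming the Whoop fetch takes a start timestamp (the parameter name is illustrative):

```typescript
// Compute the start of a lookback window so late-arriving recovery
// scores get re-fetched — the upsert makes the repeat fetch harmless.
function lookbackStart(now: Date, hours: number = 48): string {
  const start = new Date(now.getTime() - hours * 60 * 60 * 1000);
  return start.toISOString();
}
```

The sync then requests everything from `lookbackStart(new Date())` onward instead of just "today", and the date-keyed upsert overwrites yesterday's row once the final score lands.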
| Role | Human (me) | AI (Claude) |
|---|---|---|
| Architecture | pg_cron over Lambda, single-table design, CSV for Twitter | -- |
| Code | -- | All TypeScript, Edge Functions, OAuth, dashboard components |
| Data modeling | One table for body data, upsert strategy | Generated SQL and migrations |
| Edge cases | 48-hour Whoop lookback, Yazio endpoint discovery | Implemented the logic |
| Trade-offs | Free tier only, no paid APIs, accept Yazio fragility | -- |
3 architecture decisions that made everything work
The three decisions that shaped everything:
1. Supabase as the single source of truth. One Postgres database holds all personal data — health metrics, social analytics, task state. pg_cron schedules the syncs. Edge Functions execute them. No external infrastructure needed.
2. Separation of data collection from presentation. If I deleted the entire Next.js frontend tomorrow, the data would still flow into Supabase every 15 minutes. The pipeline doesn't care about the dashboard. The dashboard is just a window.
3. Idempotent upserts everywhere. Every sync writes with an upsert pattern keyed on date (for body data) or post ID (for social data). Same data arrives twice, it just overwrites. Failures self-heal on the next run. No monitoring needed.
The whole thing is honestly more about the approach than the result. A clear separation between data collection and presentation means I can rebuild the dashboard without losing a single data point.
What's broken, what's next, and what I'd do differently
I'm still not sure if the Yazio reverse-engineering will hold up long term. LinkedIn is still manual. The task sync has a slight delay.
But this personal dashboard works. Every morning, one tab, full picture. That's all I wanted.
For anyone building personal tools: the best architecture decision I made wasn't technical. It was separating data collection from presentation completely. Your pipelines should work even if your UI doesn't exist yet. Build the plumbing first. The interface can change. The data shouldn't.
Frequently asked questions
How much does it cost to run a personal dashboard like this?
The entire Life OS dashboard runs on free tiers. Supabase's free plan includes 500 MB of database storage, 50,000 monthly Edge Function invocations, and pg_cron scheduling — more than enough for a single-user dashboard syncing 7 APIs. Vercel's free tier hosts the Next.js frontend. Total monthly cost: $0.
Can you use pg_cron to schedule API calls in Supabase?
Yes. pg_cron is a Postgres extension included in every Supabase project. You can schedule it to call Edge Functions via net.http_post() using cron syntax. For example, */15 * * * * runs every 15 minutes. The Edge Function handles the API call and writes data back to the same database. This eliminates the need for external schedulers like GitHub Actions or AWS Lambda.
Does Yazio have a public API?
No. Yazio does not offer a public API as of March 2026. The integration in this dashboard uses reverse-engineered mobile app endpoints discovered in open-source projects. These endpoints are undocumented and could break without notice. The alternative — manual data entry — was impractical for daily macro tracking, so I accepted the fragility.
How do you handle API failures in automated data pipelines?
Every sync in Life OS uses idempotent upserts — database writes that insert or update based on a unique key (date for body data, post ID for social data). If the Whoop sync fails at 2 AM, the next run at 2:15 picks up the data automatically. There's no alerting or monitoring because the system self-heals by design — no manual intervention needed.
What's the best way to sync Whoop data to a custom dashboard?
Whoop's developer API uses OAuth 2.0 authentication and provides endpoints for recovery, sleep, workouts, and body measurements. The key insight: don't just fetch today's data. Whoop recovery scores can take hours to finalize. Use a 48-hour lookback window to catch late-arriving data, and write everything as upserts so repeat fetches don't create duplicates.
What is a quantified self dashboard?
A quantified self dashboard is a personal analytics tool that consolidates data from wearables, apps, and services into a single interface. It tracks metrics like sleep, recovery, nutrition, exercise, and productivity — replacing the need to check multiple apps separately. The term "quantified self" refers to the practice of self-tracking with technology to gain insights about health, habits, and performance.
Can you build a personal dashboard for free?
Yes. Using Supabase's free tier (500 MB database, 50,000 Edge Function invocations, pg_cron scheduling) and Vercel's free hosting, you can build a fully automated personal dashboard at $0/month. The free tier limits are more than sufficient for a single-user dashboard syncing data from 5-10 APIs on scheduled intervals.
How do you sync data from multiple APIs into one database?
Use scheduled serverless functions (like Supabase Edge Functions triggered by pg_cron) to fetch data from each API on a set interval. Write all data using idempotent upserts — database operations that insert new rows or update existing ones based on a unique key. This pattern ensures data consistency: if a sync fails, the next run self-corrects. No deduplication logic needed.
What is an idempotent upsert and why does it matter for data pipelines?
An idempotent upsert is a database write that produces the same result whether it runs once or many times. It inserts a new row if the key doesn't exist, or updates the existing row if it does. In data pipelines, this eliminates duplicate records, removes the need for conflict resolution, and makes failed syncs self-healing — the next run simply re-fetches and overwrites.
Is it legal to reverse-engineer a mobile app's API?
In general, reverse-engineering for interoperability purposes is protected under laws like the DMCA (Section 1201) and the EU Software Directive. However, you should always check the app's Terms of Service — many prohibit reverse-engineering. For personal, non-commercial use like building a private dashboard, the legal risk is typically low, but undocumented APIs can break without notice and have no guaranteed stability.