
Personal Dashboard with Supabase: How I Sync 7 APIs for Free

Mykhailo Babkin

This is a complete guide to building a personal analytics dashboard that syncs health data (Whoop, Yazio), content analytics (YouTube, Twitter, TikTok, Reddit, LinkedIn), and task management (Obsidian) into a single Supabase database. The pipelines are automated, the infrastructure costs nothing, and the data syncs heal themselves when something breaks. The entire system runs on free tiers: Supabase for the database and scheduled Edge Functions, Vercel for the Next.js frontend. If you've ever wanted to build a quantified self dashboard without paying for servers, this is how I did it.

I use Whoop for recovery. Yazio for food tracking. YouTube Studio for video analytics. Twitter/X's built-in dashboard. TikTok analytics. Reddit stats. LinkedIn, which barely shows you anything useful.

That's 7 apps. 7 logins. 7 different definitions of what "this week" means. 7 dashboards that each show me one slice of what's happening.

I got tired of tab-switching between my health data and my content analytics. So I built a personal dashboard called Life OS: one screen, one database, automated data pipelines that pull everything together using Supabase, pg_cron, and Next.js. It's the same developer-first approach I use for my Claude Code setup, where I automate everything, keep it free, and make it self-healing.

Here's the full build log: the architecture, the decisions, and what I'd do differently.

What You'll Need

Before you start building, here's what you need:

  • Supabase account (free tier): your database, scheduler, and serverless runtime
  • Vercel account (free tier): hosts the Next.js frontend
  • Whoop band or any wearable with an OAuth API (Whoop Developer docs)
  • Basic TypeScript knowledge: Edge Functions run on Deno, but the syntax is standard TS
  • Node.js 18+ and a package manager (npm, pnpm, or yarn)

You don't need to know TypeScript deeply. I had Claude write all the code while I focused on architecture and data modeling. But you should be comfortable reading it.

What does a personal analytics dashboard track?

Life OS is a quantified self dashboard that consolidates health metrics, content analytics, and task management into a single screen. It tracks recovery, sleep, nutrition, social media growth, and tasks, replacing 7 separate apps with one automated view.

It has three sections:

Body: recovery, sleep, workouts, nutrition. All auto-synced from Whoop and Yazio.

Brand: YouTube, Twitter, TikTok, Reddit, LinkedIn. Followers, views, engagement, all in one place.

Tasks: a Kanban board that syncs with my Obsidian vault. I manage tasks in markdown files on my laptop, and they show up on the dashboard. I drag a card on the dashboard, and the file updates.

The whole thing runs on Next.js, deployed on Vercel, with Supabase as the single source of truth.

Zero-cost architecture: Supabase + pg_cron + Edge Functions

The architecture uses Supabase's pg_cron extension to schedule Edge Functions that fetch data from 7 APIs on set intervals. The database itself is both the scheduler and the data store. All components run on Supabase and Vercel free tiers. Total monthly cost: $0.

The entire pipeline runs on Supabase's free tier. No external servers. No Lambda functions. No GitHub Actions cron jobs. Here's how it fits together:

  Whoop --(every 15 min)--+
  Yazio --(every hour)-----+
                           |
  YouTube --(daily)--------+     +----------------+     +-----------------+
  TikTok --(daily)---------+---->|   Supabase     |---->|  Life OS        |
  Reddit --(daily)---------+     |  (Postgres)    |     |  Dashboard      |
                           |     |                |     |  (Next.js)      |
  Twitter --(CSV upload)---+     |  pg_cron       |     +-----------------+
  LinkedIn --(manual)------+     |  Edge Fns      |            |
                                 +----------------+            v
  Obsidian vault --(file watcher)-----------------> Task sync

pg_cron is a Postgres extension built into Supabase that lets you schedule SQL statements or HTTP requests directly from the database. I use it to schedule calls to Supabase Edge Functions: serverless TypeScript functions that run on Deno. Those functions fetch data from each API and write it back to the same Postgres database that triggered them.

Here's what the pg_cron schedule looks like for the Whoop sync:

-- Schedule Whoop sync every 15 minutes
select cron.schedule(
  'whoop-sync',
  '*/15 * * * *',
  $$
  select net.http_post(
    url := 'https://your-project.supabase.co/functions/v1/whoop-sync',
    headers := jsonb_build_object(
      'Authorization', 'Bearer ' || current_setting('app.settings.service_role_key')
    ),
    body := '{}'::jsonb
  );
  $$
);

-- Schedule Yazio sync every hour
select cron.schedule(
  'yazio-sync',
  '0 * * * *',
  $$
  select net.http_post(
    url := 'https://your-project.supabase.co/functions/v1/yazio-sync',
    headers := jsonb_build_object(
      'Authorization', 'Bearer ' || current_setting('app.settings.service_role_key')
    ),
    body := '{}'::jsonb
  );
  $$
);
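
Both schedules assume the pg_cron and pg_net extensions are enabled and that the service role key is available as a database setting (that's what `current_setting('app.settings.service_role_key')` reads). On a fresh Supabase project, the one-time setup looks roughly like this; storing the key as a setting is one option, Supabase Vault is another:

```sql
-- Enable the scheduler and the HTTP extension (once per project)
create extension if not exists pg_cron;
create extension if not exists pg_net;

-- Make the service role key readable from cron jobs via current_setting().
-- Assumption: a custom database setting; replace the placeholder with your key.
alter database postgres
  set app.settings.service_role_key = 'your-service-role-key';
```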

The database is both the scheduler and the data store. Total extra infrastructure cost: $0.

Sync frequency by data source:

| Source | Frequency | How |
| --- | --- | --- |
| Whoop | Every 15 min (96x/day) | Edge Function + pg_cron |
| Yazio | Every hour (24x/day) | Edge Function + pg_cron |
| YouTube | Daily | Edge Function + pg_cron |
| TikTok | Daily | Edge Function + pg_cron |
| Reddit | Daily | Edge Function + pg_cron |
| Twitter | On demand | CSV drag-and-drop |
| LinkedIn | On demand | Manual entry |

Each data source syncs at a different frequency based on how often the underlying data actually changes. Whoop recovery scores update throughout the day, so 15-minute intervals catch the latest values. Social platforms only need daily pulls.

Syncing Whoop and Yazio health data into one dashboard

Both Whoop and Yazio write to the same Postgres table using idempotent upserts keyed on date: one row per day, no JOINs needed. An idempotent upsert inserts a new row if the key doesn't exist, or updates the existing row if it does. The same data can arrive twice without creating duplicates. If a sync fails at 2 AM, the next run at 2:15 catches it automatically. No deduplication logic, no conflict resolution, no "which version is correct" problems. I've never had to manually fix anything.

This single-table design is the decision I'm proudest of in the entire project.

The Body section pulls from two sources. Whoop gives me recovery score, HRV, resting heart rate, sleep stages, workout strain, calories burned. Yazio gives me calories eaten, protein, carbs, fat, water. Whoop fills recovery columns, Yazio fills nutrition columns. One query gets a complete picture of any day.

| Data Source | Metrics Collected | Sync Method | Frequency |
| --- | --- | --- | --- |
| Whoop | Recovery score, HRV, resting heart rate, sleep stages, strain, calories burned | OAuth API | Every 15 min |
| Yazio | Calories eaten, protein, carbs, fat, water intake | Reverse-engineered mobile endpoints | Every hour |
| YouTube | Per-video views, channel stats, daily view counts | YouTube Data API v3 | Daily |
| TikTok | Views, likes, shares, comments, follower count | TikTok API | Daily |
| Reddit | Post karma, comment karma, per-post stats | Reddit API | Daily |
| Twitter/X | Per-tweet stats, daily account metrics | CSV file upload (drag-and-drop) | Manual |
| LinkedIn | Follower count | Manual entry in dashboard | Manual |

The best part of this is Energy Balance. One chart showing intake (from Yazio) vs. burn (from Whoop) over time. Deficit shows in green, surplus in red. I can see at a glance whether I'm eating enough for my training load. Before this, I'd open Yazio, look at calories in, then open Whoop, look at calories burned, and try to do math in my head. Now it's one line minus another line on the same graph.
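
The chart's math is just one subtraction per row of the daily table. Here's a sketch of how the dashboard could derive it; the row shape is illustrative (column names follow the daily_body table described above), not the actual component code:

```typescript
// Illustrative row shape; column names mirror the single daily_body table.
interface DailyBody {
  date: string;
  calories_eaten: number | null;   // filled by the Yazio sync
  calories_burned: number | null;  // filled by the Whoop sync
}

// Energy balance = intake minus burn. Negative means a deficit,
// positive means a surplus. Days missing either source are skipped.
function energyBalance(
  rows: DailyBody[]
): { date: string; balance: number }[] {
  return rows
    .filter((r) => r.calories_eaten !== null && r.calories_burned !== null)
    .map((r) => ({
      date: r.date,
      balance: r.calories_eaten! - r.calories_burned!,
    }));
}
```

Because both syncs write to the same row, this never needs a JOIN: the subtraction happens across two columns of one record.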

Whoop syncs every 15 minutes because recovery scores update throughout the day as the band processes more data. Yazio syncs hourly because food logging is less frequent.

The upsert pattern every Edge Function uses

Here's the actual code. Every Edge Function follows this same structure:

// Every Edge Function follows this pattern
const { error } = await supabase
  .from('daily_body')
  .upsert(
    {
      date: today,                    // unique key
      recovery_score: whoop.recovery,
      hrv: whoop.hrv,
      resting_hr: whoop.restingHeartRate,
      sleep_hours: whoop.sleepDuration / 3600,
      strain: whoop.strain,
      calories_burned: whoop.caloriesBurned,
    },
    { onConflict: 'date' }            // update today's row instead of duplicating
  );

The onConflict: 'date' parameter is what makes this work. If a row for today already exists, it gets updated instead of duplicated. Latest write wins, which is fine because the source data doesn't change retroactively.

For Yazio, there's a catch. They don't have a public API. I'm using their internal mobile app endpoints that I found by digging through open-source projects that had reverse-engineered it. Could break any day. But it's been stable for weeks, and the alternative is manually typing my macros into a spreadsheet.

Tracking social media analytics across 5 platforms in one dashboard

The Brand section consolidates YouTube, Twitter, TikTok, Reddit, and LinkedIn analytics into a single page with cross-platform comparison charts. Each platform's data flows through its own Edge Function into platform-specific Supabase tables, then the dashboard queries them all at once.

This was the part I kept procrastinating on. Opening YouTube Studio, then Twitter analytics, then TikTok, then Reddit, trying to remember which platform had what numbers. Each one formats data differently. Each one has different time ranges. Now it's one page.

YouTube syncs daily: per-video metrics, channel stats, daily view counts. TikTok syncs daily too, pulling views, likes, shares, comments, and follower count. Reddit syncs daily for post karma, comment karma, and per-post performance.

Twitter doesn't go through an API at all. I just drag and drop a CSV export from Twitter's analytics page onto the dashboard. It auto-detects whether it's a content CSV (per-tweet stats) or an overview CSV (daily account metrics) and imports accordingly. This was a deliberate choice: Twitter's API costs money I didn't want to spend, and CSV works fine.
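
The auto-detection can be as simple as inspecting the header row. This is a sketch, not the dashboard's actual code, and the column names are assumptions about Twitter's export format; adjust them to match your own CSVs:

```typescript
type CsvKind = 'content' | 'overview' | 'unknown';

// Guess which Twitter analytics export was dropped, from its header row.
// Column names here are assumptions; check your actual export files.
function detectCsvKind(headerLine: string): CsvKind {
  const cols = headerLine
    .toLowerCase()
    .split(',')
    .map((c) => c.trim().replace(/"/g, ''));

  if (cols.includes('tweet id')) return 'content'; // per-tweet export
  if (cols.includes('date') && cols.includes('impressions')) {
    return 'overview'; // daily account metrics
  }
  return 'unknown';
}
```

Once the kind is known, each branch maps rows into its own table and upserts keyed on tweet ID or date, the same pattern as every other source.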

LinkedIn is manual entry for now. Their API is restrictive and the data updates slowly anyway. I just update the follower count when I check it.

The dashboard has three sections:

| Section | What it tracks | Data source |
| --- | --- | --- |
| Body | Recovery score, HRV, sleep, calories, macros | Whoop + Yazio (auto-synced) |
| Brand | Follower growth, views, engagement, per-post stats | 5 platforms (API + CSV + manual) |
| Tasks | Kanban board with drag-and-drop, two-way sync | Obsidian vault (file watcher daemon) |

Each section pulls from different data sources but presents everything on one page.

Two-way Obsidian task sync: Markdown to Kanban board

The task sync uses a Python file watcher daemon that monitors an Obsidian vault directory and mirrors changes to Supabase in both directions. Create a task in Obsidian as a markdown file, and it appears on the dashboard Kanban board. Drag a card to "done" on the dashboard, and the markdown file updates automatically.

I manage my tasks as markdown files in Obsidian with YAML frontmatter (status, area, type, due date). The dashboard has a Kanban board where I can drag cards between columns. Obsidian wins on conflicts.
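
For illustration, a task file might look like this. The frontmatter fields mirror the list above (status, area, type, due date); the values and filename conventions are invented:

```markdown
---
status: in-progress
area: brand
type: video
due: 2025-06-30
---

# Script the Supabase dashboard video

Outline the pipeline section, record the pg_cron demo.
```

The watcher parses the frontmatter into a row keyed on the file path; dragging the card to another column rewrites only the `status:` line.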

Here's the basic structure of an Edge Function that handles the sync:

import { serve } from 'https://deno.land/std@0.168.0/http/server.ts';
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

serve(async (req: Request) => {
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  );

  // 1. Fetch data from external API
  const data = await fetchFromApi();

  // 2. Transform to match your schema
  const rows = transformData(data);

  // 3. Upsert (insert or update on conflict)
  const { error } = await supabase
    .from('your_table')
    .upsert(rows, { onConflict: 'unique_key' });

  return new Response(
    JSON.stringify({ success: !error, count: rows.length }),
    { headers: { 'Content-Type': 'application/json' } }
  );
});

This is probably the most over-engineered part of the whole system. But I live in Obsidian for notes and planning, and I wanted tasks to be there too without giving up a visual board.

Building with AI: What Claude coded vs. what I designed

Claude wrote 100% of the code: every TypeScript Edge Function, the OAuth flows, the dashboard components, the Python file watcher. I don't know TypeScript. This is the same workflow I describe in how I set up Claude Code's memory system. I handle the decisions. Claude handles the implementation.

Here's what I actually did:

I chose the architecture. pg_cron + Edge Functions over a separate server. I'd been reading the Supabase docs and realized the database itself could be the scheduler. No extra moving parts.

I designed the data model. Claude's first version had separate tables with a JOIN view. I looked at it and said no, one table, both sources write to it. Simpler.

I made the pragmatic trade-offs. CSV upload for Twitter instead of paying for API access. Reverse-engineered Yazio endpoints instead of manual data entry. Sync frequencies based on how each data source actually behaves.

I caught the edge cases. Whoop recovery data arrives late. Your score sometimes doesn't show up until mid-afternoon. I told Claude to use a 48-hour lookback window instead of just fetching today. I know this from wearing the band for months.
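
The lookback itself is just a wider date range on the fetch. A minimal sketch (the function name is mine, not from the project; the Whoop API call that consumes the range is omitted):

```typescript
// Build a 48-hour lookback window instead of fetching only "today",
// so recovery scores that finalize hours late get picked up by a later run.
function lookbackWindow(
  hours = 48,
  now = new Date()
): { start: string; end: string } {
  const start = new Date(now.getTime() - hours * 60 * 60 * 1000);
  return { start: start.toISOString(), end: now.toISOString() };
}
```

The window pairs with the date-keyed upsert: re-fetching yesterday just overwrites yesterday's row with the finalized score, so widening the range costs nothing.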

My job was architecture, research, and knowing when something was wrong. Claude's job was writing the code that made the decisions real. It's the same division of labor I apply to everything I ship.

| Role | Human (me) | AI (Claude) |
| --- | --- | --- |
| Architecture | pg_cron over Lambda, single-table design, CSV for Twitter | -- |
| Code | -- | All TypeScript, Edge Functions, OAuth, dashboard components |
| Data modeling | One table for body data, upsert strategy | Generated SQL and migrations |
| Edge cases | 48-hour Whoop lookback, Yazio endpoint discovery | Implemented the logic |
| Trade-offs | Free tier only, no paid APIs, accept Yazio fragility | -- |

3 architecture decisions that made everything work

1. Supabase as the single source of truth. One Postgres database holds all personal data: health metrics, social analytics, task state. pg_cron schedules the syncs. Edge Functions execute them. No external infrastructure needed.

2. Separation of data collection from presentation. If I deleted the entire Next.js frontend tomorrow, the data would still flow into Supabase every 15 minutes. The pipeline doesn't care about the dashboard. The dashboard is just a window.

3. Idempotent upserts everywhere. Every sync writes with an upsert keyed on date (for body data) or post ID (for social data). Same data arrives twice, it just overwrites. Failures heal on the next run.

The whole thing is more about the approach than the result. A clear separation between data collection and presentation means I can rebuild the dashboard without losing a single data point.

If you want to see other products I've built with this approach, check out Oxys, a 9-app YouTube toolkit. Same philosophy: automate the boring parts, keep the architecture simple.

What's broken, what's next, and what I'd do differently

I'm still not sure if the Yazio reverse-engineering will hold up long term. LinkedIn is still manual. The task sync has a slight delay.

But this personal dashboard works. Every morning, one tab, full picture. That's all I wanted.

For anyone building personal tools: the best architecture decision I made wasn't technical. It was separating data collection from presentation completely. Your pipelines should work even if your UI doesn't exist yet. Build the plumbing first. The interface can change. The data shouldn't.

Want something like this built for you? I offer this as a service. Check out AI Workflow as a Service if you'd rather skip the build and get the result.


Frequently asked questions

How much does it cost to run this?

Nothing. Supabase's free plan includes 500 MB of database storage, 50,000 monthly Edge Function invocations, and pg_cron scheduling. That's more than enough for a single-user dashboard syncing 7 APIs. Vercel's free tier hosts the frontend. I haven't paid a cent.

Can pg_cron call Edge Functions on a schedule?

Yes, and it's surprisingly easy. You use net.http_post() inside a pg_cron schedule to hit your Edge Function URL. Standard cron syntax: */15 * * * * for every 15 minutes. No external scheduler needed. The database calls itself.

Does Yazio have a public API?

No. As of March 2026, Yazio has no public API. I'm using reverse-engineered mobile app endpoints I found in open-source projects. They could break any time. I accepted that trade-off because the alternative was typing my macros into a spreadsheet every day.

What happens when an API sync fails?

Nothing bad. Every sync uses upserts, so if the Whoop call fails at 2 AM, the next run at 2:15 picks up the same data. No duplicates, no gaps, no manual intervention. The system just tries again.

What's the best way to sync Whoop data to a custom dashboard?

Whoop's developer API uses OAuth 2.0 and gives you endpoints for recovery, sleep, workouts, and body measurements. The thing most people miss: don't just fetch today's data. Recovery scores can take hours to finalize. Use a 48-hour lookback window and write everything as upserts so repeat fetches don't create duplicates.

Is it legal to reverse-engineer a mobile app's API?

Generally, reverse-engineering for interoperability is protected under laws like the DMCA (Section 1201) and the EU Software Directive. That said, check the app's Terms of Service, since many prohibit it. For personal, non-commercial use like a private dashboard, the legal risk is typically low. The bigger risk is the endpoints breaking without notice.

If you run a business and want AI built into your workflows, book a free 45-min walkthrough. I will look at how your team works and map out where AI saves real hours.