shipped 2026

NVIDIA Ecosystem Tracker

A map of NVIDIA's partner ecosystem — who they're in business with, when each relationship started, and how much it matters to the partner.

NVIDIA Ecosystem Tracker — workflow diagram

Why this exists

I wanted to understand NVIDIA’s ecosystem better. NVIDIA has a profound impact on the financial success of the businesses it partners with — and as an investor, having that map in one place is useful context.

Existing trackers either flatten everything to logos on a slide or bury the interesting parts in long press releases. I wanted something in between — visual, scannable, but with the why of each partnership a click away.

What it does

Three views over one dataset.

How it works

The data flow is intentionally simple — most of the intelligence happens at maintenance time, not at request time.

  1. Collect — A daily GitHub Action scrapes the NVIDIA newsroom and a list of relevant RSS feeds, dedupes against existing articles by URL hash, and commits new ones to the repo
  2. Extract — Once a week, I open Claude Code and run /extract. Claude reads new articles, classifies each as REJECT / UPDATE / PROPOSE NEW, and either appends a milestone to an existing partner or stages a proposal for a new one
  3. Review significance — A second weekly command, /review-significance, refreshes the tier and narrative for any partner whose milestone list has grown since the last review
  4. Render — Astro builds a static site from the JSON data file, deployed to Cloudflare Workers. Everything visible is generated at build time
  5. Remind — A Sunday email surfaces both queues — articles waiting for /extract, partners waiting for /review-significance — so I never miss a maintenance window
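The dedupe step in Collect can be sketched in a few lines. This is a hypothetical illustration, not the actual scraper: it assumes articles are stored as dicts with a `url` field, and the function names are made up.

```python
import hashlib

def url_hash(url: str) -> str:
    # Normalize lightly (here: strip whitespace and a trailing slash)
    # so the same article URL always produces the same hash.
    return hashlib.sha256(url.strip().rstrip("/").encode()).hexdigest()

def new_articles(scraped: list[dict], existing: list[dict]) -> list[dict]:
    # Keep only scraped articles whose URL hash isn't already committed.
    seen = {url_hash(a["url"]) for a in existing}
    return [a for a in scraped if url_hash(a["url"]) not in seen]
```

Hashing a normalized URL rather than comparing raw strings means a trailing slash or stray whitespace doesn't create a duplicate entry in the repo.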
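The queue behind /review-significance reduces to a simple comparison. A minimal sketch, assuming each partner record carries a counter of how many milestones existed at the last review; the field names here are illustrative:

```python
def review_queue(partners: list[dict]) -> list[str]:
    # A partner needs a fresh significance pass when milestones have
    # been appended since the last review.
    return [
        p["name"]
        for p in partners
        if len(p.get("milestones", [])) > p.get("milestones_at_last_review", 0)
    ]
```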

The choice that mattered: keeping the runtime deterministic. Claude never runs when a visitor loads the site; it runs only during maintenance, on data I review before it lands.

What’s next

What I learned

  1. Parallel agents only work if you plan for them. When tasks split cleanly into independent work — UI components, isolated commands — dispatching subagents in parallel collapses hours into minutes. The trick is designing the plan so the parallelizable bits are obvious before you start, not improvised mid-execution.
  2. Settle the data model before the UI. I committed to the schema (milestones, significance tier and narrative) before mocking the hover card. Every UI decision after that was constrained and easy. The visual iterations on this project were all style tweaks — never structural rework, because the structure was right from the start.
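As a rough sketch, a partner record in that schema might look like the following. The field names are illustrative, not the actual schema:

```json
{
  "name": "Example Partner",
  "significance": {
    "tier": 1,
    "narrative": "Why this relationship matters to the partner.",
    "reviewed_through": "2026-01-04"
  },
  "milestones": [
    {
      "date": "2025-11-18",
      "title": "Partnership announced",
      "source": "https://nvidianews.nvidia.com/..."
    }
  ]
}
```

Committing to a shape like this first means the hover card, the tier badges, and the timeline all read from fields that never need to move.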

Status

Shipped. Running. Two weekly slash commands keep it current; everything else is automated.
