Unicorn Lessons from RapidAPI’s Founder: A Conversation with Iddo Gino
October 9, 2025
Founder Iddo Gino tells host Damien Filiatrault how a GitHub repo called “Awesome APIs” became RapidAPI, a global marketplace later acquired by Nokia, and what sky-high valuations actually change: credibility and hiring more than outcomes. He argues that instead of chasing new standards like MCP, teams should fix the REST/GraphQL APIs they already have with accurate specs and living docs. Gino then explains Datawizz, his new platform that routes requests to tiny, task-specific models and falls back to large LLMs only when needed, often cutting costs by 85–95% while improving latency and reliability. He details an OpenAI-compatible router, when volume justifies specialization, and how edge and on-device options (Cloudflare AI Workers, iOS’s built-in models with adapters, and Chrome’s Gemini Nano) unlock faster, cheaper, and more private inference.
Host Damien Filiatrault welcomes Iddo Gino, founder of RapidAPI (launched at 17, later a unicorn, acquired by Nokia) and now CEO of Datawizz. They trace Rapid’s journey from a GitHub list to a global API marketplace, unpack what sky-high valuations actually change, and debate MCP versus simply fixing the APIs we already have. Iddo then shares how Datawizz slashes LLM bills with tiny, task-specific models and why the future leans hard toward edge and on-device inference.
What you’ll learn
- How “Awesome APIs” (a GitHub repo) became RapidAPI’s interactive playground, marketplace, and eight-year scaling story, ending in a Nokia acquisition
- What high valuations really buy (credibility, recruiting) and why raising “too much” is a double-edged sword for efficiency
- A contrarian take on Anthropic’s Model Context Protocol (MCP): why reinventing interfaces may just create a second integration surface to maintain
- The boring fix that beats new protocols: clean REST/GraphQL, accurate OpenAPI specs, and docs that match reality
- Datawizz’s playbook: route requests to small, specialized models (and only fall back to big LLMs when needed) for 85–95% cost reductions
- How the router works in practice (OpenAI-compatible endpoint), when volume justifies specialization, and why clustering real traffic matters
- Edge & on-device AI: running custom adapters on iOS’s built-in models, using Chrome’s Gemini Nano, and deploying at the edge on Cloudflare AI Workers for latency, privacy, and cost wins
Memorable sound bites
- “V1 of Rapid was just a GitHub repo called Awesome APIs.”
- “If LLMs can’t use your API, your developers probably can’t either. Fix the API.”
- “MCP feels like reinventing APIs. In a few years it could be just as messy, and now you’re maintaining two surfaces.”
- “Our customers see 85–95% cost reduction by routing to small, specialized models.”
- “On-device is free, fast, and private.”
—
Tune in for a founder’s-eye view of scaling an API marketplace, a pragmatic critique of shiny new protocols, and a concrete roadmap to cheaper, faster AI through specialization and the edge.