Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. More than 200,000 developers build with Deepgram’s voice-native foundational models, accessed through APIs or as self-managed software, because of our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth over the past four years, more than 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.
Opportunity:
Deepgram is looking for a Voice AI Evaluation Lead to take ownership of how we benchmark and evaluate the performance of our voice AI models. This role is pivotal to the integrity and impact of our AI offerings. You’ll be building robust benchmarking pipelines, producing clear and actionable model cards, and partnering cross-functionally with research, product, QA, marketing, and data labeling to shape how our models are measured, released, and improved. If you love designing evaluations that matter, aligning metrics with product goals, and translating data into insight, this role is for you.
What You’ll Do
- Build and maintain scalable benchmarking pipelines for model evaluations across STT, TTS, and voice agent use cases (a minimal flavor of this work appears in the sketch after this list).
- Run regular evaluations of production and pre-release models on curated, real-world datasets.
- Partner with Research, Data, and Engineering teams to develop new evaluation methodologies and integrate them into our development cycle.
- Design, define, and refine evaluation metrics that reflect product experience, quality, and performance goals.
- Author comprehensive model cards and internal reports outlining model strengths, weaknesses, and recommended use cases.
- Work closely with Data Labeling Ops to source, annotate, and prepare evaluation datasets.
- Collaborate with QA Engineers to integrate model tests into CI/CD and release workflows.
- Support Marketing and Product with credible, data-backed comparisons to competitors.
- Track market developments and maintain awareness of competitive benchmarks.
- Support GTM teams with benchmarking best practices for prospects and customers.
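For a concrete flavor of the benchmarking work above: a common first step in an STT evaluation is scoring model transcripts against human references with word error rate (WER). The sketch below is illustrative only, assuming the open-source jiwer package and toy placeholder transcripts; it is not a statement about Deepgram’s internal tooling or datasets.

```python
# Illustrative sketch: corpus-level word error rate (WER) for an STT eval set,
# using the open-source `jiwer` package. Transcripts are toy placeholders.
import jiwer

# (reference transcript, model hypothesis) pairs for one evaluation set
pairs = [
    ("thanks for calling how can i help you today",
     "thanks for calling how can help you today"),
    ("please hold while i transfer your call",
     "please hold while i transfer you call"),
]

# Apply identical, minimal normalization to both sides so the score reflects
# recognition quality rather than casing or stray whitespace.
refs = [ref.lower().strip() for ref, _ in pairs]
hyps = [hyp.lower().strip() for _, hyp in pairs]

# jiwer.wer accepts lists of sentences and returns corpus-level WER
# (total word errors divided by total reference words).
print(f"corpus WER: {jiwer.wer(refs, hyps):.3f}")
```

The same pattern generalizes: swap in latency percentiles, TTS quality proxies, or per-domain slices, then gate releases in CI by asserting that the score on a pinned eval set stays within budget.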
You’ll Love This Role If You
- Enjoy translating model outputs into human insights that guide product strategy.
- Are motivated by precision, fairness, and transparency in evaluation.
- Have a data-minded approach to experimentation and thrive on uncovering what’s working—and what’s not.
- Take pride in designing clean, repeatable benchmarks that bring clarity to complex systems.
- Get satisfaction from cross-functional collaboration, working with researchers, product teams, and engineers alike.
- Want to shape how we define quality and success in speech AI.
- Are excited by the idea of being a key voice in when—and how—we release new models into the world.
It’s Important to Us That You Have
- Experience designing, executing, and iterating on evaluation pipelines for ML models.
- Proficiency in Python and data analysis libraries.
- Ability to develop automated evaluation systems—whether scripting analysis workflows or integrating with broader ML pipelines.
- Comfort working with large-scale datasets and crafting meaningful performance metrics and visualizations.
- Experience using LLMs or internal tooling to accelerate analysis, QA, or pipeline prototyping.
- Strong communication skills—especially when translating raw data into structured insights, documentation, or dashboards.
- Proven success working cross-functionally with research, engineering, QA, and product teams.
It Would Be Great if You Have
- Prior experience evaluating speech-related models, especially STT or TTS systems.
- Familiarity with model documentation formats (e.g., model cards, eval reports, dashboards).
- Understanding of competitive benchmarking and landscape analysis for voice AI products.
- Experience contributing to or owning internal evaluation infrastructure—whether integrating with existing systems or proposing new ones.
- A background in startup environments, applied research, or AI product deployment.
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.
We are happy to provide accommodations for applicants who need them.
Compensation Range: $135K - $165K