At Klaviyo, we value the unique backgrounds, experiences and perspectives each Klaviyo (we call ourselves Klaviyos) brings to our workplace each and every day. We believe everyone deserves a fair shot at success and appreciate the experiences each person brings beyond the traditional job requirements. If you’re a close but not exact match with the description, we hope you’ll still consider applying. Want to learn more about life at Klaviyo? Visit klaviyo.com/careers to see how we empower creators to own their own destiny.
GTM Data Strategy & Operations was stood up from scratch, with no predecessor. Today the function runs on three offshore contractors and zero FTEs, managed by a single leader who is simultaneously building the agentic infrastructure, operating it in production, and driving major initiatives (hierarchy redesign, data quality assessment, vendor optimization).
The operating model is deliberately agentic AI–first: a multi-agent pipeline (Cartographer, Sentinel, Resolver, Reporting) handles detection, enrichment, hierarchy mapping, and conflict resolution at scale. This is not a future-state vision: these agents are live and processing enterprise account families in production today.
The problem: one person cannot build, operate, and extend this system while also managing strategic workstreams. The function currently covers only core Tier‑1 fields. Dozens of account, contact, and lead signals remain unaddressed. Every pipeline run, every failure diagnosis, and every offshore handoff flows through a single point of failure.
This role is the first onshore execution hire: an agent operator who can keep the system running, improve it, and extend detection and resolution coverage as GTM leadership prioritizes new data elements.
Role Summary
Sit between AI systems and GTM data. Operate, tune, and extend our agentic data quality pipeline (detection, enrichment, hierarchy mapping, conflict resolution) so it runs reliably, improves continuously, and expands to cover more of the data landscape. Own the handoff between automated output and human review, managing quality and throughput with our offshore team. You don’t build agents from scratch, but you run them, evaluate their output with GTM data judgment, and make them better.
Core Responsibilities
Agent Pipeline Operations
- Run and monitor production pipeline sessions (Cartographer, Sentinel, Resolver) across scheduled cadences; diagnose and resolve failures (API errors, session timeouts, data anomalies) without escalating to the function lead.
- Execute pipeline runs in Claude Code and tmux; manage long-running batch processes; interpret logs and output to confirm data integrity before downstream handoff.
- Maintain pipeline orchestration scripts and configuration; extend agent coverage as new data elements are prioritized by GTM leadership.
- Refine detection rules, prompt logic, and confidence thresholds based on output analysis and false-positive/negative patterns.
- Evaluate agent accuracy by segment (Enterprise vs. MM/SMB) and recommend rule or workflow changes backed by evidence.
- Run bake-offs (vendor vs. AI enrichment) to optimize cost, coverage, and accuracy; document results for decision-making.
- Own the handoff between Sentinel detection output and Concentrix triage queues; define queue structure, priority tiers, and resolution instructions.
- Monitor offshore resolution quality and throughput; refine detection rules based on patterns surfaced through triage.
- Close the feedback loop: track resolution outcomes back to agent configuration to reduce recurring false positives and improve detection precision.
- Maintain ops-only staging fields; manage the promote-to-production flow with audit controls.
- Design and run AI-assisted enrichment workflows (Clay + LLM prompts) with evidence links and confidence thresholds.
- Monitor fill-rate, sampled accuracy, freshness, and cost-per-record by source and segment; surface vendor performance issues and recommend changes.
- Keep data dictionaries, SOPs, and runbooks current as agents and processes evolve.
Cross-Functional Partners
- GTM Systems (SFDC): field configuration, permission sets, automation, flows.
- Data Engineering: source availability, ID mapping, lineage (no pipeline coding).
- Reporting: define metrics and acceptance criteria; partner on dashboard requirements.
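To make the triage handoff concrete: a priority-tiering rule for routing detection findings into resolution queues might look like the sketch below. All field names, segment labels, tier labels, and confidence cutoffs here are hypothetical illustrations, not Klaviyo's actual schema or thresholds.

```python
# Hypothetical sketch: route detection findings into triage priority tiers.
# Field names, segments, and cutoffs are illustrative assumptions only.

def priority_tier(finding: dict) -> str:
    """Assign a triage tier from segment and detection confidence."""
    segment = finding.get("segment", "SMB")
    confidence = finding.get("confidence", 0.0)
    if segment == "Enterprise" and confidence >= 0.9:
        return "P1"  # high-confidence enterprise issues reviewed first
    if confidence >= 0.8:
        return "P2"  # solid signals across all segments
    return "P3"      # low-confidence findings reviewed last

findings = [
    {"account": "A1", "segment": "Enterprise", "confidence": 0.95},
    {"account": "A2", "segment": "SMB", "confidence": 0.85},
    {"account": "A3", "segment": "MM", "confidence": 0.40},
]
queue = sorted(findings, key=priority_tier)
```

The point of codifying tiers this way is that resolution outcomes can be tracked back to specific rules, which is what makes the feedback loop above measurable.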
This is a triage environment, not a steady-state one. The function is young, the data has known gaps, and the work is to stabilize and extend, not maintain and optimize. You’ll be building the plane while flying it, alongside a small team that operates with high autonomy and a bias toward measurable outcomes. If ambiguity and mess energize you, this is the right fit.
Success Metrics (6–12 Months)
Pipeline Reliability
- Scheduled pipeline runs execute without function-lead intervention; failure-to-resolution cycle time under 24 hours for non-blocking issues.
- Agent coverage extended to new data elements as prioritized (measured by number of signals under active detection).
- Sentinel detection precision and recall improve quarter over quarter, tracked by segment.
- Concentrix resolution queue throughput and accuracy meet defined acceptance thresholds.
- False-positive rate decreases through feedback-loop refinement.
- Tier-1 field fill-rates: Country ≥95%; Vertical ≥90% at ≥85% sampled accuracy; Revenue bands ≥90%.
- Hierarchy coverage 65–80%+ across target segments.
- Enterprise cost-per-record reduction of 30–40% via AI-first + selective vendor usage.
Qualifications
- 3–6 years in Data Ops, Sales Ops, or GTM Ops with hands-on data quality ownership for account and contact data.
- Proficiency with Snowflake (SQL for querying, analysis, validation) and SFDC (object model, field configuration, data flows).
- Working experience with Claude Code or comparable LLM-based tooling in an operational (not just experimental) context.
- Experience designing and running AI-assisted enrichment workflows (e.g., Clay + LLM prompts) and evaluating accuracy/coverage.
- Comfort operating in a command-line environment: tmux, shell scripts, log analysis, batch process monitoring.
- Process design mindset with a bias toward measurable outcomes; strong written communication.
- Experience with account/contact data vendors (D&B, ZoomInfo, Clearbit, StoreLeads) and waterfall enrichment logic.
- Python for QA scripting, sampling, or light automation.
- Familiarity with prompt engineering, confidence scoring, and AI guardrails (evidence capture, versioned prompts, QA sampling gates).
- Core: Snowflake (SQL), SFDC, Claude Code, Clay
- Pipeline: Shell orchestration, Cartographer / Sentinel / Resolver agents
- Enrichment: D&B, ZoomInfo, Clearbit, StoreLeads, LLM prompts
- Nice to Have: Python, SOQL, prompt engineering frameworks
- AI Guardrails (Expected Practice): Confidence floors, evidence capture, versioned prompts, 10% QA sampling gates, audit-on-promote, drift alerts, and privacy/compliance checks. This role is expected to uphold and improve these practices, not just follow them.
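A minimal sketch of the QA sampling gate named above: the 10% sample rate and 85% sampled-accuracy floor follow the numbers in this posting, but the record structure and function shape are hypothetical assumptions for illustration.

```python
import random

# Hypothetical sketch of a QA sampling gate: sample 10% of a batch,
# compare sampled values against reviewed ground truth, and only allow
# promote-to-production if sampled accuracy clears the floor.

def passes_qa_gate(batch, ground_truth, sample_rate=0.10,
                   accuracy_floor=0.85, seed=0):
    rng = random.Random(seed)  # deterministic sampling for audit replay
    k = max(1, int(len(batch) * sample_rate))
    sample = rng.sample(batch, k)
    correct = sum(1 for rec in sample
                  if ground_truth.get(rec["id"]) == rec["value"])
    return correct / k >= accuracy_floor

batch = [{"id": i, "value": "US"} for i in range(100)]
truth = {i: "US" for i in range(100)}
print(passes_qa_gate(batch, truth))  # all sampled values match → True
```

Seeding the sampler is one way to satisfy the audit-on-promote expectation: a reviewer can replay exactly which records were sampled for any given promotion decision.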
Massachusetts Applicants:
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Our salary range reflects the cost of labor across various U.S. geographic markets. The range displayed below reflects the minimum and maximum target salaries for the position across all our US locations. The base salary offered for this position is determined by several factors, including the applicant’s job-related skills, relevant experience, education or training, and work location.
In addition to base salary, our total compensation package may include participation in the company’s annual cash bonus plan, variable compensation (OTE) for sales and customer success roles, equity, sign-on payments, and a comprehensive range of health, welfare, and wellbeing benefits based on eligibility.
Your recruiter can provide more details about the specific salary/OTE range for your preferred location during the hiring process.
This role may require up to 10% travel for purposes such as new hire onboarding, client or partner work if applicable, team meetings, and industry events. Travel is coordinated in advance.
Get to Know Klaviyo
We’re Klaviyo (pronounced clay-vee-oh). We empower creators to own their destiny by making first-party data accessible and actionable like never before. We see limitless potential for the technology we’re developing to nurture personalized experiences in ecommerce and beyond. To reach our goals, we need our own crew of remarkable creators—ambitious and collaborative teammates who stay focused on our north star: delighting our customers. If you’re ready to do the best work of your career, where you’ll be welcomed as your whole self from day one and supported with generous benefits, we hope you’ll join us.
AI fluency at Klaviyo includes responsible use of AI (including privacy, security, bias awareness, and human-in-the-loop). We provide accommodations as needed.
By participating in Klaviyo’s interview process, you acknowledge that you have read, understood, and will adhere to our Guidelines for Using AI in the Klaviyo Interview Process. For more information about how we process your personal data, see our Job Applicant Privacy Notice.
Klaviyo is committed to a policy of equal opportunity and non-discrimination. We do not discriminate on the basis of race, ethnicity, citizenship, national origin, color, religion or religious creed, age, sex (including pregnancy), gender identity, sexual orientation, physical or mental disability, veteran or active military status, marital status, criminal record, genetics, retaliation, sexual harassment or any other characteristic protected by applicable law.
Klaviyo Denver, Colorado, USA Office
1200 17th Street, Floor 25, Denver, CO, United States, 80202