
Machinify

Data Engineer

In-Office or Remote
2 Locations
180K-200K
Mid level
As a Data Engineer, you'll transform data into insights, develop data pipelines, and collaborate with teams to enhance product decisions.

Machinify is the leading provider of AI-powered software products that transform healthcare claims and payment operations. Each year, the healthcare industry generates over $200B in claims mispayments, creating incredible waste, friction, and frustration for all participants: patients, providers, and especially payers. Machinify’s AI platform has enabled the company to rapidly develop and deploy industry-specific products that increase the speed and accuracy of claims processing by orders of magnitude.

Why This Role Matters

As a Data Engineer, you’ll be at the heart of transforming raw external data into powerful, trusted datasets that drive payment, product, and operational decisions. You’ll work closely with product managers, data scientists, subject matter experts, engineers, and customer teams to build, scale, and refine production pipelines — ensuring data is accurate, observable, and actionable.

You’ll also play a critical role in onboarding new customers, integrating their raw data into our internal models. Your pipelines will directly power the company’s ML models, dashboards, and core product experiences. If you enjoy owning end-to-end workflows, shaping data standards, and driving impact in a fast-moving environment, this is your opportunity.

What You’ll Do
  • Design and implement robust, production-grade pipelines using Python, Spark SQL, and Airflow to process high-volume file-based datasets (CSV, Parquet, JSON); a minimal illustrative sketch follows this list.

  • Lead efforts to canonicalize raw healthcare data (837 claims, EHR, partner data, flat files) into internal models.

  • Own the full lifecycle of core pipelines — from file ingestion to validated, queryable datasets — ensuring high reliability and performance.

  • Onboard new customers by integrating their raw data into internal pipelines and canonical models; collaborate with SMEs, Account Managers, and Product to ensure successful implementation and troubleshooting.

  • Build resilient, idempotent transformation logic with data quality checks, validation layers, and observability.

  • Refactor and scale existing pipelines to meet growing data and business needs.

  • Tune Spark jobs and optimize distributed processing performance.

  • Implement schema enforcement and versioning aligned with internal data standards.

  • Collaborate deeply with Data Analysts, Data Scientists, Product Managers, Engineering, Platform, SMEs, and AMs to ensure pipelines meet evolving business needs.

  • Monitor pipeline health, participate in on-call rotations, and proactively debug and resolve production data flow issues.

  • Contribute to the evolution of our data platform — driving toward mature patterns in observability, testing, and automation.

  • Build and enhance streaming pipelines (Kafka, SQS, or similar) where needed to support near-real-time data needs (see the streaming sketch after this list).

  • Help develop and champion internal best practices around pipeline development and data modeling.
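
To make the batch side of this concrete, here is a minimal sketch of the kind of pipeline described above, assuming a recent Airflow 2.x deployment and PySpark. The bucket paths, topic of the DAG, and canonical columns are invented for illustration and are not Machinify's actual data models.

```python
# Hypothetical sketch only: paths, table names, and schema are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from pyspark.sql import SparkSession


def canonicalize_claims(ds: str, **_):
    """Read one day of raw CSV claim files, validate, and publish a canonical Parquet partition."""
    spark = SparkSession.builder.appName("canonicalize_claims").getOrCreate()

    raw = (
        spark.read.option("header", "true")
        .csv(f"s3://example-raw-bucket/claims/{ds}/*.csv")  # hypothetical path
    )
    raw.createOrReplaceTempView("claims_raw")

    # Spark SQL transform into an (invented) canonical layout.
    canonical = spark.sql(
        """
        SELECT claim_id,
               member_id,
               CAST(service_date AS DATE) AS service_date,
               CAST(billed_amount AS DECIMAL(12, 2)) AS billed_amount
        FROM claims_raw
        WHERE claim_id IS NOT NULL
        """
    )

    # Simple data-quality gate: fail the task rather than publish bad data.
    if canonical.filter("member_id IS NULL").count() > 0:
        raise ValueError("Null member_id found; refusing to publish partition")

    # Overwriting the same date partition keeps reruns idempotent.
    canonical.write.mode("overwrite").parquet(
        f"s3://example-curated-bucket/claims_canonical/service_date={ds}/"
    )


with DAG(
    dag_id="claims_canonicalization",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="canonicalize_claims", python_callable=canonicalize_claims)
```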
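
And a compact sketch of the near-real-time path, using the kafka-python client; the topic names and required fields are likewise hypothetical.

```python
# Hypothetical sketch only: topics and record shape are illustrative.
import json

from kafka import KafkaConsumer, KafkaProducer  # kafka-python

consumer = KafkaConsumer(
    "claims-raw",                       # hypothetical source topic
    bootstrap_servers="localhost:9092",
    group_id="claims-canonicalizer",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=True,
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

REQUIRED_FIELDS = {"claim_id", "member_id", "service_date"}

for message in consumer:
    record = message.value
    # Route records missing required fields to a dead-letter topic instead of dropping them.
    if not REQUIRED_FIELDS.issubset(record):
        producer.send("claims-dead-letter", record)
        continue
    producer.send("claims-canonical", record)
```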

What You Bring
  • 4+ years of experience as a Data Engineer (or equivalent), building production-grade pipelines.

  • Strong expertise in Python, Spark SQL, and Airflow.

  • Experience processing large-scale file-based datasets (CSV, Parquet, JSON, etc.) in production environments.

  • Experience mapping and standardizing raw external data into canonical models.

  • Familiarity with AWS (or any cloud), including file storage and distributed compute concepts.

  • Experience onboarding new customers and integrating external customer data with non-standard formats.

  • Ability to work across teams, manage priorities, and own complex data workflows with minimal supervision.

  • Strong written and verbal communication skills — able to explain technical concepts to non-engineering partners.

  • Comfortable designing pipelines from scratch and improving existing pipelines.

  • Experience working with large-scale or messy datasets (healthcare, financial, logs, etc.).

  • Experience building or willingness to learn streaming pipelines using tools such as Kafka or SQS.

  • Bonus: Familiarity with healthcare data (837, 835, EHR, UB04, claims normalization).

🌱 Why Join Us
  • Real impact — your pipelines will directly support decision-making and claims payment outcomes from day one.

  • High visibility — partner with ML, Product, Analytics, Platform, Operations, and Customer teams on critical data initiatives.

  • Total ownership — you’ll drive the lifecycle of core datasets powering our platform.

  • Customer-facing impact — you will directly contribute to successful customer onboarding and data integration.

Equal Employment Opportunity at Machinify

Machinify is committed to hiring talented and qualified individuals with diverse backgrounds for all of its positions. Machinify believes that the gathering and celebration of unique backgrounds, qualities, and cultures enriches the workplace. 

Top Skills

Airflow
AWS
Python
Spark SQL
SQL

