
Novaprime

Staff Data Engineer

Reposted 18 Days Ago
In-Office
Denver, CO
170K-185K Annually
Senior level
The Staff Data Engineer will architect and operate the Databricks lakehouse, managing data ingestion, governance, and analytics in collaboration with cross-functional teams.

***This role is fully remote within the U.S. (occasional travel to meet teams or partners). If you'd like to differentiate yourself, shoot me a connection request on LinkedIn and tell me your favorite book of all time***

About Us:

Novaprime is a mortgage technology company dedicated to reducing the cost of originating loans by leveraging emerging technologies, with a strong focus on AI and Distributed Ledger Technology (DLT). We accomplish our goals by focusing on data-driven innovation, working with some of the world's largest institutions, and delivering measurable outcomes. Novaprime is backed by key investors from the mortgage, venture capital, and financial services industries.

Job Description:

Novaprime is hiring a Staff Data Engineer to architect, build, and operate our Databricks-centric lakehouse on AWS. You will own the full data lifecycle (streaming and batch ingestion, modeling, governance, quality, observability, and cost/performance) using Delta Lake, Delta Live Tables, and Databricks Workflows. This is a hands-on leadership role: you will set technical direction, deliver mission-critical pipelines, mentor engineers, and directly drive analytics by defining trusted metrics, instrumentation, and monitoring alongside product and ML teams. To succeed, you must enjoy thinking in systems and learning continuously.

Responsibilities:

  • Implement new technologies that yield competitive advantages and are aligned with our business goals.

  • Drive development from concept to market by combining various technologies and collaborating with a cross-functional team.

  • Define the lakehouse architecture and standards on Databricks (Unity Catalog governance, Workflows, DLT, Delta Lake).

  • Build and operate high-reliability streaming and batch pipelines with Structured Streaming, Auto Loader, CDC patterns, and backfills.

  • Design medallion data models and canonical domains; implement SCDs, schema evolution, and versioned/time-travel datasets.

  • Establish data quality, SLAs/SLOs, lineage/traceability, and audit-ready documentation aligned to SOC 2.

  • Drive analytics: define and govern KPI/metric definitions, build metrics pipelines, enable semantic consistency, and implement monitoring/alerting for data and dashboards.

  • Optimize cost and performance on Databricks (cluster policies, sizing, Photon, AQE, partitioning, file sizing, skew mitigation, Z-ORDER/OPTIMIZE).

  • Enforce security and privacy (Unity Catalog permissions, row/column-level controls, PII masking/tokenization, secrets management).

  • Enable self-serve with standardized, well-documented datasets; collaborate with ML on feature pipelines and Feature Store.

  • Champion software excellence: Git-based workflows, code reviews, automated testing, CI/CD for data, and IaC.

  • Collaborate with product managers, designers, and other stakeholders to develop strategies and implement new products and features.

  • Stay current with the latest technologies to maintain competitiveness and technological leadership in the market.

  • Take on various engineering tasks that advance the organization's mission.
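
To illustrate the SCD pattern named above: on Databricks, SCD Type 2 is typically implemented as a Delta Lake MERGE, but the core versioning rule can be sketched in plain Python. The table fields below (customer_id, address) are hypothetical and for illustration only, not Novaprime's actual schema:

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, key="customer_id", tracked=("address",),
                as_of=date(2024, 1, 1)):
    """SCD Type 2 sketch: when a tracked attribute changes, close the
    current row (set valid_to, clear is_current) and append a new
    current version; brand-new keys simply get a first version."""
    out = list(dim_rows)
    current = {r[key]: r for r in out if r["is_current"]}
    for rec in incoming:
        cur = current.get(rec[key])
        if cur is None:
            out.append({**rec, "valid_from": as_of, "valid_to": None,
                        "is_current": True})
        elif any(cur[c] != rec[c] for c in tracked):
            cur["valid_to"] = as_of          # close the old version
            cur["is_current"] = False
            out.append({**rec, "valid_from": as_of, "valid_to": None,
                        "is_current": True})  # open the new version
        # unchanged records produce no new version
    return out

dim = [{"customer_id": 1, "address": "12 Elm St",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
updated = scd2_upsert(dim, [{"customer_id": 1, "address": "99 Oak Ave"}])
```

A production version would express this same close-and-append rule as a single MERGE INTO statement against the Delta dimension table rather than row-by-row Python.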

Requirements:

  • B.S. in Computer Science or equivalent experience.

  • 8+ years building and operating production data platforms; 4+ years deep, hands-on Databricks/Spark (PySpark + SQL).

  • Proven ownership of a production lakehouse (S3 + Delta Lake) with strict SLAs and compliance requirements.

  • Expertise with Delta Lake (MERGE/CDC, schema evolution, time travel, OPTIMIZE/Z-ORDER, VACUUM) and DLT, Workflows, Auto Loader; Feature Store experience in production.

  • Strong data modeling (dimensional, canonical), SCD Types 1/2, and handling slowly changing entities and schema drift.

  • Track record delivering trustworthy datasets with monitoring, alerting, lineage, and clear documentation; able to define and maintain metric layers consumed by product and business.

  • Advanced Python and SQL; testing culture (pytest), CI/CD (GitHub Actions), and Terraform for Databricks; solid Git practices.

  • AWS foundations: S3, IAM, networking basics; event ingestion.

  • Excellent communication and leadership; able to drive design reviews, write clear technical docs, and mentor engineers in a remote, async environment.
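
The data-quality and testing-culture requirements above can be illustrated with a minimal expectation-style check. This is a plain-Python sketch in the spirit of DLT expectations; the rule names and loan fields are hypothetical, and it does not use Databricks' API:

```python
def check_expectations(rows, expectations):
    """Evaluate named data-quality predicates over a batch of rows and
    return per-rule pass/fail counts, suitable for alerting thresholds."""
    report = {}
    for name, predicate in expectations.items():
        failed = sum(1 for r in rows if not predicate(r))
        report[name] = {"failed": failed, "passed": len(rows) - failed}
    return report

batch = [
    {"loan_id": "L-1", "amount": 250_000},
    {"loan_id": None, "amount": 410_000},   # fails not-null rule
    {"loan_id": "L-3", "amount": -5},       # fails positivity rule
]
report = check_expectations(batch, {
    "loan_id_not_null": lambda r: r["loan_id"] is not None,
    "amount_positive": lambda r: r["amount"] > 0,
})
```

Predicates of this shape map naturally onto DLT expectation decorators in pipelines, or onto pytest assertions over sample DataFrames in CI.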

Desired Experience:

  • Databricks SQL/Serverless, Unity Catalog lineage/system tables, and semantic layer experience.

  • Product analytics and observability: Mixpanel and New Relic.

  • Prior leadership of SOC 2 audits/readiness and data platform on-call rotations.

  • Previous startup experience.

Benefits:

  • Competitive salary, equity, and benefits.

  • Mostly remote work.

  • Opportunity to make a difference for millions and their ability to be homeowners.

We are an equal-opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Top Skills

AWS
Databricks
Delta Lake
Git
Python
SQL
Terraform


