
MeshyAI

Data Infrastructure Engineer

Reposted 2 Days Ago
In-Office or Remote
2 Locations
Senior level
About Meshy

Headquartered in Silicon Valley, Meshy is the leading 3D generative AI company on a mission to Unleash 3D Creativity by transforming the content creation pipeline. Meshy makes it effortless for both professional artists and hobbyists to create unique 3D assets—turning text and images into stunning 3D models in just minutes. What once took weeks and cost $1,000 now takes just 2 minutes and $1.

Our world-class team of top experts in computer graphics, AI, and art includes alumni from MIT, Stanford, and Berkeley, as well as veterans from Nvidia and Microsoft. Our talent spans the globe, with team members distributed across North America, Asia, and Oceania, fostering a diverse, innovative, multi-regional culture focused on solving global 3D challenges. Meshy is trusted by top developers, backed by premier venture capital firms like Sequoia and GGV, and has raised $52 million in funding.

Meshy is the market leader, ranked No. 1 in popularity among 3D AI tools (2024 A16Z Games report) and No. 1 in website traffic (SimilarWeb, 3 million monthly visits). The platform has over 5 million users and has generated 40 million models.

Founder and CEO Yuanming (Ethan) Hu earned his Ph.D. in graphics and AI from MIT, where he developed the acclaimed Taichi GPU programming language (27K stars on GitHub, used by 300+ institutions). His work is highly influential, earning an honorable mention for the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award and over 2,700 research citations.

About the Role
We are seeking a Data Infrastructure Engineer to join our growing team. In this role, you will design, build, and operate distributed data systems that power large-scale ingestion, processing, and transformation of datasets used for AI model training. These datasets span traditional structured data as well as unstructured assets such as images and 3D models, which often require specialized preprocessing for pretraining and fine-tuning workflows.
 
This is a versatile role: you’ll own end-to-end pipelines (from ingestion to transformation), ensure data quality and scalability, and collaborate closely with ML researchers to prepare diverse datasets for cutting-edge model training. You’ll thrive in our fast-paced startup environment, where problem-solving, adaptability, and wearing multiple hats are the norm.
What You’ll Do:
  • Core Data Pipelines
    • Design, implement, and maintain distributed ingestion pipelines for structured and unstructured data (images, 3D/2D assets, binaries).
    • Build scalable ETL/ELT workflows to transform, validate, and enrich datasets for AI/ML model training and analytics.
  • Distributed Systems & Storage
    • Architect pipelines across cloud object storage (S3, GCS, Azure Blob), data lakes, and metadata catalogs.
    • Optimize large-scale processing with distributed frameworks (Spark, Dask, Ray, Flink, or equivalents).
    • Implement partitioning, sharding, caching strategies, and observability (monitoring, logging, alerting) for reliable pipelines.
  • Pretraining Data Processing
    • Support preprocessing of unstructured assets (e.g., images, 3D/2D models, video) for training pipelines, including format conversion, normalization, augmentation, and metadata extraction.
    • Implement validation and quality checks to ensure datasets meet ML training requirements.
    • Collaborate with ML researchers to quickly adapt pipelines to evolving pretraining and evaluation needs.
  • Infrastructure & DevOps
    • Use infrastructure-as-code (Terraform, Kubernetes, etc.) to manage scalable and reproducible environments.
    • Integrate CI/CD best practices for data workflows.
  • Data Governance & Collaboration
    • Maintain data lineage, reproducibility, and governance for datasets used in AI/ML pipelines.
    • Work cross-functionally with ML researchers, graphics/vision engineers, and platform teams.
    • Embrace versatility: switch between infrastructure-level challenges and asset/data-level problem solving.
    • Contribute to a culture of fast iteration, pragmatic trade-offs, and collaborative ownership.
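
To give a concrete flavor of the sharding and validation work described above, here is a minimal, hypothetical sketch in Python (standard library only; all names, formats, and the shard count are invented for illustration and do not describe Meshy's actual stack):

```python
import hashlib

NUM_SHARDS = 8  # hypothetical shard count

def shard_for(asset_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map an asset key to a shard via a stable hash."""
    digest = hashlib.sha256(asset_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

def validate_record(record: dict) -> list[str]:
    """Basic quality checks before a record enters a training manifest."""
    errors = []
    if not record.get("uri", "").startswith(("s3://", "gs://")):
        errors.append("uri must point at object storage")
    if record.get("size_bytes", 0) <= 0:
        errors.append("size_bytes must be positive")
    if record.get("format") not in {"png", "jpg", "glb", "obj"}:
        errors.append("unsupported asset format")
    return errors

# Partition a small manifest into shards, keeping only valid records.
manifest = [
    {"uri": "s3://assets/a.png", "size_bytes": 1024, "format": "png"},
    {"uri": "/local/bad/path", "size_bytes": 0, "format": "exe"},
]
shards: dict[int, list[dict]] = {}
for rec in manifest:
    if not validate_record(rec):
        shards.setdefault(shard_for(rec["uri"]), []).append(rec)
```

Because the shard assignment is a pure function of the asset key, reprocessing the same manifest is idempotent: a given asset always lands in the same shard, which keeps incremental pipeline runs reproducible.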
What We’re Looking For:
  • Technical Background
    • 5+ years of experience in data engineering, distributed systems, or similar.
    • Strong programming skills in Python; Scala, Java, or C++ a plus.
    • Solid skills in SQL for analytics, transformations, and warehouse/lakehouse integration.
    • Proficiency with distributed frameworks (Spark, Dask, Ray, Flink).
    • Familiarity with cloud platforms (AWS/GCP/Azure) and storage systems (S3, Parquet, Delta Lake, etc.).
    • Experience with workflow orchestration tools (Airflow, Prefect, Dagster).
  • Domain Skills (Preferred)
    • Experience handling large-scale unstructured datasets (images, video, binaries, or 3D/2D assets).
    • Familiarity with AI/ML training data pipelines, including dataset versioning, augmentation, and sharding.
    • Exposure to computer graphics or 3D/2D data processing is strongly preferred.
  • Mindset
    • Comfortable in a startup environment: versatile, self-directed, pragmatic, and adaptive.
    • Strong problem solver who enjoys tackling ambiguous challenges.
    • Commitment to building robust, maintainable, and observable systems.
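
At their core, the orchestration tools listed above (Airflow, Prefect, Dagster) all execute tasks in dependency order. A minimal sketch of that idea using only Python's standard-library `graphlib` (the task names are hypothetical, purely for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: ingest -> {validate, extract_metadata} -> transform -> publish.
# Each key maps a task to the set of tasks it depends on.
pipeline = {
    "validate": {"ingest"},
    "extract_metadata": {"ingest"},
    "transform": {"validate", "extract_metadata"},
    "publish": {"transform"},
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
```

Real orchestrators add scheduling, retries, and observability on top of this ordering, but the dependency graph is the common abstraction.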
Nice to Have:
  • Kubernetes for distributed workloads and orchestration.
  • Data warehouses or lakehouse platforms (Snowflake, BigQuery, Databricks, Redshift).
  • Familiarity with GPU-accelerated computing and HPC clusters.
  • Experience with 3D/2D asset processing (geometry transformations, rendering pipelines, texture handling).
  • Rendering engines (Blender, Unity, Unreal) for synthetic data generation.
  • Open-source contributions in ML infrastructure, distributed systems, or data platforms.
  • Familiarity with secure data handling and compliance.
Our Values
  • Brain: We value intelligence and the pursuit of knowledge. Our team is composed of some of the brightest minds in the industry.
  • Heart: We care deeply about our work, our users, and each other. Empathy and passion drive us forward.
  • Gut: We trust our instincts and are not afraid to take bold risks. Innovation requires courage.
  • Taste: We have a keen eye for quality and aesthetics. Our products are not just functional but also beautiful.
Why Join Meshy?
  • Competitive salary, equity, and benefits package.
  • Opportunity to work with a talented and passionate team at the forefront of AI and 3D technology.
  • Flexible work environment, with options for remote and on-site work.
  • Opportunities for fast professional growth and development.
  • An inclusive culture that values creativity, innovation, and collaboration.
  • Unlimited, flexible time off.
Benefits
  • Competitive salary, benefits and stock options.
  • 401(k) plan for employees.
  • Comprehensive health, dental, and vision insurance.
  • The latest and best office equipment.

Top Skills

Airflow
AWS
Azure
Dagster
Dask
Delta Lake
Flink
GCP
Kubernetes
Parquet
Prefect
Python
Ray
S3
Spark
SQL
Terraform
