At NVIDIA, we're not just building the future; we're generating it. Our Cosmos generative AI engineering team is pushing the boundaries of what's possible across multimodal learning, video generation, synthetic data, intelligent simulation, and agentic systems. We are looking for exceptionally driven engineers and applied scientists with deep experience in generative modeling to help define the next era of AI computing.
What you'll be doing:
Design and post-train foundation models (LLMs, VLMs, VLAs, and DiTs) for real-world applications.
Contribute to highly collaborative development of large-scale training infrastructure, high-efficiency inference pipelines, and scalable data pipelines.
Work with teams in research, software, and product to bring world models from idea to deployment.
Collaborate on open-source and internal projects, author technical papers or patents, and mentor junior engineers.
Prototype and iterate rapidly on experiments across cutting-edge AI domains, including agentic systems, reinforcement learning, reasoning, and video generation.
Design and implement model distillation algorithms for size reduction and diffusion-step optimization. Profile and benchmark training and inference pipelines to meet production performance requirements.
What we need to see:
We are looking for stellar experience building and deploying generative AI systems (a minimum of 8 years in industry, or 5+ years in research/postdoc roles).
Proficiency in PyTorch, JAX, or other deep learning frameworks is a must!
We work on a full range of foundation models, so you should have expertise in one or more of: LLMs, coding agents, diffusion models, autoregressive models, VAE/GAN architectures, retrieval-augmented generation, neural rendering, or multi-agent systems.
Our models are predominantly built on transformer architectures, so you should be intimately familiar with the major variants of the attention mechanism.
Hands-on experience with large-scale training (e.g., ZeRO, DDP, FSDP, TP, CP) and data processing (e.g., Ray, Spark).
Nearly all of our work is in Python, and we open-source our products, so production-quality software engineering skills are highly relevant.
MS or PhD or equivalent experience in Computer Science, Machine Learning, Applied Math, Physics, or a related field.
12+ years of relevant software development experience.
Ways to stand out from the crowd:
Familiarity with high-performance computing and GPU acceleration.
Contributions to influential open-source libraries or publications at top conferences (NeurIPS, ICML, CVPR, ICLR).
Experience working with multimodal data (e.g., vision-language, VLA, audio).
Prior work with NVIDIA GPU-based compute clusters or simulation environments.
You will also be eligible for equity and benefits.