Outrider, the pioneer in autonomous yard operations for logistics hubs, helps large enterprises improve safety, increase efficiency, and optimize their workforce. The only company exclusively focused on automating all aspects of yard operations, Outrider eliminates manual tasks that are hazardous and repetitive. Outrider’s mission is to drive the rapid adoption of sustainable freight transportation by deploying zero-emission systems. Outrider is a private company backed by NEA, 8VC, and other top-tier investors.
We’re searching for a talented computer vision and machine learning engineer with a track record of high achievement who can take responsibility for the full software development lifecycle, including a) writing software modules from a set of specifications; b) implementing module interfaces; and c) reviewing bug reports, tracing bugs to a subsystem, identifying corner cases, and working within the team to propose fixes.
This position will be responsible for the design, development, and unit testing of all aspects of vehicle sensing functions, including perception, computer vision, sensor fusion, object classification, and text detection.
The Perception Engineer will report to the Principal Computer Vision Engineer and develop perception software capabilities through all phases of Outrider's pilot and deployment programs. This position plays an essential role in delivering a reliable, profitable, performant, safety-critical system. It offers a very talented software engineer the chance to help develop a market-defining enterprise product that combines autonomous vehicle technology with a software-as-a-service (SaaS) business model.
The ideal candidate will embrace our goal of driving zero-emission, self-driving vehicle adoption and help us realize our potential to define, build, and lead a new category of robotic automation for the enterprise.
Duties and responsibilities
- Build robust and performant perception approaches for safety-critical autonomous vehicle operations
- Develop and train models for object detection, tracking, segmentation, pose estimation, and classification of obstacle types (vehicle, truck, pedestrian, etc.) in multi-modality sensor data streams
- Define labeling ontologies and create training, validation, and testing sets across customer sites and weather conditions
- Develop algorithms, perception software modules, and libraries with responsibility for the full software engineering lifecycle: requirements, design, source code implementation, unit testing, integration, and system testing
- Travel and perform fieldwork, depending on initial customer locations (up to 25%)
Required qualifications
- Master's degree in computer science or a relevant field with exposure to classic and modern computer vision techniques
- 3+ years of professional C++ and Python experience
- Expertise in training, evaluating, and deploying models with a deep learning framework such as PyTorch or TensorFlow
- Experience working on a team in a Linux environment and targeting embedded deployment
- Excellent written and verbal communication skills
- Exceptional analytical skills
- Demonstrated strong leadership and people skills
- Sterling references
Preferred qualifications
- AWS experience with S3, SQS, Lambda, DynamoDB, and EC2
- ROS / software for ground robotic systems
- Experience with embedded computer vision
- Prior experience designing annotation ontologies and working with data labeling vendors
- Familiarity with as many of the following as possible: stereo vision, LIDAR, radar, and thermal sensing technologies
- Prior use of Git for software version control
- FOSS libraries/frameworks such as OpenCV, the Point Cloud Library (PCL), and similar packages
- FOSS tools supporting software engineering, such as CMake, continuous integration packages, the Google Test framework, and others
- PhD with relevant publications and patents