Position Overview:
We are seeking a Senior AI Engineer to develop next-generation autonomous driving systems powered by end-to-end (E2E) learning.
In this role, you will help build unified models that learn directly from multi-modal sensor inputs (e.g., camera, LiDAR) and produce structured representations of the driving environment. Moving beyond traditional modular pipelines, you will focus on advancing end-to-end perception and scene understanding, working across model design, large-scale training, and real-world deployment.
Responsibilities:
- Design and develop end-to-end (E2E) models for autonomous driving using multi-modal sensor data
- Build and optimize large-scale deep learning models, including transformer-based and sequence modeling architectures
- Develop unified representations for scene understanding, object dynamics, and environment modeling
- Contribute to multi-task and multi-modal learning frameworks across perception-related tasks
- Collaborate with system and platform teams to ensure real-time performance and production readiness
- Work on data pipelines, training strategies, and evaluation frameworks
- Explore and implement state-of-the-art research (e.g., BEV-based models, foundation models, VLM, generative approaches)
- Improve model robustness, generalization, and scalability across diverse driving scenarios
Qualifications/Requirements:
- Master’s or Ph.D. in Computer Science, Machine Learning, Electrical Engineering, or a related field
- Strong experience in deep learning, computer vision, or autonomous driving systems
- Hands-on experience with end-to-end (E2E) modeling or large-scale model training
- Proficiency in Python and/or C++
- Experience with PyTorch or TensorFlow
- Solid understanding of multi-modal learning, transformers, or sequence models
- Experience deploying models in real-time or production environments
- Strong problem-solving skills with the ability to bridge research and engineering