Eka Robotics
Eka Robotics is on a mission to build intelligence for the physical world: robots that are fast, general, and reliable. Our approach, grounded in physics, unlocks superhuman capabilities. We are defining the frontier of robotics research and deployment.
Our team consists of pioneers in robotics and machine learning. We are now hiring to scale our R&D effort. We are looking for hands-on individuals who are excited to help shape the future of robotics.
Responsibilities
- Build computer vision and visual representation learning pipelines for robotic manipulation, including RGB, RGB-D, depth, segmentation, pose, keypoint, and object-centric representations.
- Develop visual models that support reinforcement learning and imitation learning policies, including end-to-end visuomotor policies that map visual observations to robot actions.
- Improve our data pipeline for vision-based manipulation policies through domain randomization, photorealistic rendering, synthetic data generation, sensor noise modeling, and real-world fine-tuning.
- Design and train perception models that are robust to lighting changes, camera viewpoint shifts, texture variation, clutter, occlusion, object instance variation, and imperfect calibration.
- Evaluate learned visual representations and policies on real robotic manipulation tasks, identify failure modes, and iterate on models, data, and training procedures.
- Collaborate with robotics, robot learning, and simulation engineers to define the perception strategy for robotic manipulation.
- Set up, calibrate, and evaluate camera and depth sensing systems when needed, with an emphasis on how sensor choices affect learned policies and real-world robustness.
Minimum Qualifications
- Ph.D. in computer vision or 3+ years of experience working on a computer vision product.
- Strong background in machine learning for computer vision, especially deep learning-based visual perception.
- Experience training modern computer vision models in JAX, PyTorch, or similar frameworks.
- Practical experience with visual representation learning, object detection, segmentation, pose estimation, depth estimation, tracking, or 3D perception.
- Strong Python programming skills.
- Ability to move fluidly between research code and production-quality systems.
- Strong understanding of how data distribution, sensor noise, calibration, lighting, and scene variation affect model performance.
Preferred Qualifications
- Experience training policies from visual observations, including RGB, RGB-D, point clouds, object-centric representations, or learned latent representations.
- Experience with domain randomization, synthetic data generation, differentiable rendering, neural rendering, or photorealistic simulation.
- Experience with robotics simulators or synthetic data tools such as Isaac Sim, MuJoCo, or similar environments.
- Familiarity with robot learning methods such as reinforcement learning, behavior cloning, diffusion policies, offline RL, or learning from demonstrations.
- Experience with real robot deployment, including camera calibration, hand-eye calibration, depth sensors, ROS/ROS2, or robot data collection pipelines.
- First-author publications in top computer vision, robotics, or machine learning venues such as CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, RSS, CoRL, ICRA, or IROS.