About Eventual
Every breakthrough Physical AI system — humanoid robots, autonomous vehicles, video generation models — is trained on petabytes of video, lidar, radar, and sensor data. But today's data platforms (Databricks, Snowflake) were built for spreadsheet-like analytics, not the multimodal corpora that power AI. As a result, robotics and video-AI teams iterate on model improvement about once a week. Most of that week isn't training — it's finding the right data: writing CV heuristics over raw footage, paying annotators for edge cases, hand-curating clips before a cluster ever spins up. GPU bandwidth has grown 2-3× per generation. Storage and pipelines haven't. The gap widens every year.
Eventual was founded in 2022 to close it. Our open-source engine, Daft, is the distributed data engine purpose-built for multimodal AI — already running 2 PB/day at Amazon, 60-100 PB at another FAANG company, and in production at Mobileye, TogetherAI, and CloudKitchens. We are building a video-native index on top of our engine for Physical AI that collapses the data iteration loop. Describe the dataset you want, get a curated table in minutes, feed it to your GPUs at line rate. One iteration per day becomes the norm.
We're building this in partnership with top Physical AI labs and public AI infrastructure companies today. We have raised $30M from Felicis, CRV, Microsoft M12, Citi, Essence, Y Combinator, Caffeinated Capital, Array.vc, and angel investors including the co-founders of Databricks and Perplexity. We've assembled a world-class team from AWS, Render, Pinecone, and Tesla. We have spent our careers powering the last generation of Physical AI in self-driving, and are excited to now do the same for the next.
Join our small (but powerful!) team working together 4 days/week in our SF Mission District office.
Your Role
As a Research Engineer on the Visual Understanding team, you'll own the layer that makes petabytes of video queryable by content. Physical AI teams have raw footage, lidar, radar, and sim outputs scattered across object stores, with no way to find what they need short of weeks of human annotation. We change those economics: we run vision-language models over every clip in a corpus along the axes the customer cares about (gripper type, failure mode, object class, scene, motion density), so a researcher can ask for "left-arm grasp failures on deformable objects" and get a curated dataset in minutes.
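To make the workflow concrete, here is a minimal sketch (not Eventual's implementation, and not the Daft API): a miniature content index in which each clip carries VLM-derived annotations, and a query selects clips along those axes. All field names and paths are hypothetical.

```python
# Hypothetical schema: each clip is annotated along axes a customer
# might care about (arm, failure mode, object class). In practice
# these annotations would come from VLMs run over the raw footage.
from dataclasses import dataclass


@dataclass
class Clip:
    uri: str
    arm: str           # e.g. "left", "right"
    failure_mode: str  # e.g. "grasp_failure", "none"
    object_class: str  # e.g. "deformable", "rigid"


corpus = [
    Clip("s3://corpus/a.mp4", "left", "grasp_failure", "deformable"),
    Clip("s3://corpus/b.mp4", "right", "grasp_failure", "rigid"),
    Clip("s3://corpus/c.mp4", "left", "none", "deformable"),
]


def query(clips, **criteria):
    """Return clips whose annotations match every criterion."""
    return [c for c in clips
            if all(getattr(c, k) == v for k, v in criteria.items())]


# "left-arm grasp failures on deformable objects"
hits = query(corpus, arm="left", failure_mode="grasp_failure",
             object_class="deformable")
print([c.uri for c in hits])  # → ['s3://corpus/a.mp4']
```

At corpus scale the filtering runs as a distributed query over annotation tables rather than an in-memory list comprehension, but the shape of the interaction is the same: describe the slice, get back the matching clips.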
You'll define the roadmap for our visual understanding capabilities, train and select the models that make corpus-scale annotation tractable at single-digit cents per hour of video, and build the rich datasets that go on to train customer models. This is a research engineering role — meaning you'll read papers and run experiments, but you ship to production and your work is judged by what it does for customer training runs.
Key Responsibilities
What we look for
Nice to have
Perks & Benefits
$150,000–$250,000 / year
Full time · Senior
4/29/2026