ML Inference Engineer

Reactor
San Francisco, US
On-site

Job Description

About Us

At Reactor, our mission is to unlock a future where anyone can create interactive media applications that delight, educate, and simulate. We're building a new kind of platform for real-time generative media, enabling developers to go from idea to immersive, dynamic experience in seconds.

We're a small, focused team of YC and unicorn founders and senior engineers with deep expertise in 3D, generative video, developer platforms, and creative tools. We aspire to continuously push the boundaries of what's possible: if you're driven to do the same, we'd love to hear from you. www.reactor.inc.

About the role

Founding Engineer, ML Inference

San Francisco, CA · Full-time

We're looking for a Founding Engineer, ML Inference with deep expertise in high-performance ML engineering. This is a highly technical, high-impact role focused on squeezing every drop of performance from generative media models.

You'll work across the inference stack, designing novel frameworks, optimizing inference performance, and shaping Reactor's competitive edge in ultra-low-latency, high-throughput environments.

What You'll Do

  • Drive our frontier position on model performance for diffusion models
  • Design and implement a high-performance in-house inference runtime
  • Implement optimizations using torch.compile, custom CUDA kernels, and specialized inference frameworks
  • Optimize neural network models through quantization, pruning, and architectural modifications
  • Profile and benchmark model performance to identify computational bottlenecks
  • Collaborate directly with model partner teams to integrate their models into our platform

Required Skills

  • Bachelor's degree in Computer Science, Electrical Engineering, or a related technical field (or equivalent practical experience)
  • Strong foundation in systems programming, with a track record of identifying and resolving bottlenecks
  • Deep expertise in PyTorch, TensorRT, TransformerEngine, Nsight, ONNX Runtime
  • Experience with model compilation, quantization (INT8/FP16), and advanced serving architectures
  • Working knowledge of GPU hardware (NVIDIA)
  • Strong understanding of transformer architectures and modern ML optimization techniques

Logistics

In-person in San Francisco. We believe the best ideas come from being together.

Benefits

  • Competitive SF salary and meaningful early equity
  • Visa sponsorship and relocation support
  • Generous health, dental, and vision coverage

Skills & Requirements

Technical Skills

PyTorch, TensorRT, TransformerEngine, Nsight, ONNX Runtime

Employment Type

Full-time

Level

Senior

Posted

4/9/2026
