About the Team
Compute Infrastructure builds the platform that turns enormous amounts of compute into a reliable engine for frontier AI. We design, provision, schedule, operate, and optimize the systems that connect accelerators, CPUs, networks, storage, data centers, orchestration software, agent infrastructure, developer tools, and observability into one coherent experience for researchers and product teams.
Our work spans the entire stack: capacity planning and cluster lifecycle, bare-metal automation, distributed systems, Kubernetes and scheduling, deep system optimization, high-performance networking, storage, fleet health, reliability, workload profiling, benchmarking, and the developer experience that lets teams use enormous compute systems with confidence. At this scale, small improvements to communication, scheduling, hardware efficiency, or debugging workflows can compound into meaningful gains in research velocity. We are hiring across Compute Infrastructure rather than for a single narrow team, and we use this opening to match strong engineers to the problems where they can have the most leverage.
About the Role
We are looking for engineers who want to build the compute platform behind OpenAI's research and products. You may be strongest in low-level systems, high-performance computing, distributed infrastructure, reliability, CaaS, agent infrastructure, developer platforms, tooling, or the user experience around infrastructure. What matters is that you can reason carefully about complex systems, write durable software, and raise the quality and velocity of the people around you.
Depending on your background and interests, you might work close to hardware, close to users, on CaaS and agent infrastructure, or on the control planes and data planes in between. You could help bring new supercomputing capacity online, optimize training workloads from profiler traces and benchmarks, improve NCCL and collective communication behavior, reason about GPUs, NICs, topology, firmware, thermals, and failure modes, or design abstractions that make heterogeneous clusters feel like one coherent platform.
We do not expect every candidate to have worked at every layer. Some engineers will go deep on systems performance, kernel or runtime behavior, large-scale networking protocols, RDMA, NCCL, GPU hardware behavior, benchmarking, scheduling, or hardware reliability; others will make the platform more usable through APIs, tools, workflows, and developer experience. The common thread is strong engineering judgment and excitement about making enormous compute systems faster, more reliable, and easier to use.
This is a general opening for Compute Infrastructure. We will consider candidates for teams across Compute Infrastructure and match you based on your strengths, the problems that motivate you, and where the infrastructure needs are highest.
Where you might work
In this role, you will:
You might thrive in this role if you:
Qualifications
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Compensation: $230,000 – $405,000 per year · Full time · Senior · 4/27/2026