Research Engineer - LLM Infra Training - Seed Infra

ByteDance
Seattle, Washington, US
On-site

Job Description

Location: Seattle

Team: Technology

Employment Type: Regular

Job Code: A78978

Responsibilities

Team Information:

The Seed Infrastructures team oversees the distributed training, reinforcement learning framework, high-performance inference, and heterogeneous hardware compilation technologies for AI foundation models.

Responsibilities

  • Conduct research and development on large-scale LLM training infrastructure and efficiency
  • Design and optimize distributed training strategies for LLMs, including parallelism schemes, computation and communication optimization, and throughput scaling on large GPU clusters
  • Investigate system reliability and resilience techniques, such as fast checkpointing, fault tolerance, and failure diagnosis for long-running training workloads
  • Research and optimize network, scheduling, and GPU memory management across the training stack, driving cross-layer performance improvements
  • Analyze performance bottlenecks in exascale training systems and propose principled, data-driven optimization methods
  • Bridge cutting-edge research and large-scale production deployment by translating research ideas into scalable, real-world AI infrastructure solutions

Qualifications

Minimum Qualifications

  • Experience with large-scale distributed training for LLMs
  • Strong programming skills in Python and/or C++
  • Strong background in ML systems / training infrastructure development
  • Proficiency in parallelism strategies (DDP, FSDP, model/pipeline/expert parallelism)
  • Solid understanding of training stack internals (PyTorch, CUDA, NCCL)
  • Experience in performance optimization (memory, communication, throughput)
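The parallelism strategies listed above (DDP in particular) all reduce to one core operation: each replica computes gradients on its own data shard, then an all-reduce averages them so every replica applies the identical update. A pure-Python sketch of that semantics (real stacks do this with NCCL ring/tree all-reduce via PyTorch DDP; these function names are hypothetical):

```python
def allreduce_mean(per_worker_grads: list[list[float]]) -> list[float]:
    """Average gradients across workers, element-wise. This is the
    synchronization step at the heart of data parallelism: after it
    runs, every worker holds the same averaged gradient."""
    n_workers = len(per_worker_grads)
    n_params = len(per_worker_grads[0])
    return [
        sum(worker[i] for worker in per_worker_grads) / n_workers
        for i in range(n_params)
    ]

def sgd_step(params: list[float], grads: list[float], lr: float) -> list[float]:
    """One synchronous SGD update, identical on every replica,
    so model copies never drift apart."""
    return [p - lr * g for p, g in zip(params, grads)]
```

Communication optimization in DDP then amounts to when and how this averaging happens: overlapping it with the backward pass, bucketing small tensors into fewer collectives, and choosing topology-aware reduction algorithms on the cluster fabric.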

Preferred Qualifications

  • Hands-on experience with distributed training frameworks and large-scale LLM infrastructure
  • Experience leading or mentoring engineering teams or cross-functional projects
  • Publications in top-tier AI, systems, or HPC conferences (ICML, OSDI, SOSP, NSDI, SIGCOMM, MLSys) or strong open-source contributions
  • Familiarity with benchmarking AI accelerators or large-scale LLM evaluation (e.g., ByteMLPerf)

Job Information

[For Pay Transparency] Compensation Description (Annually)

The base salary range for this position in the selected city is $232,560 - $427,500 annually.

Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units.

Benefits may vary depending on the nature of employment and the country work location. Employees have day-one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, and wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year, and 17 days of Paid Personal Time (prorated upon hire, with accruals increasing by tenure).

The Company reserves the right to modify or change these benefits programs at any time, with or without notice.

For Los Angeles County (unincorporated) Candidates:

Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse and negative relationship on the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:

About Doubao (Seed)

Established in 2023, the ByteDance Seed team is dedicated to pioneering new paths toward artificial general intelligence. We aspire to advance the frontier of intelligence to drive progress for both technology and society.

With a long-term vision for the AI sector, the Seed team's research spans MLLM, GenMedia, AI for Science, and Robotics. We maintain a global presence with laboratories and career opportunities across China, Singapore, and the United States. To date, we have launched industry-leading general foundation models and cutting-edge multimodal capabilities. Our technology powers over 50 application scenarios — including Doubao, Jimeng, TRAE, Dola and Dreamnia — and serves enterprise customers through Volcano Engine and BytePlus. Third-party data shows that the Doubao App ranks first in user volume in the Chinese market, while Doubao foundation models lead the industry in

Skills & Requirements

Technical Skills

Large-scale distributed training, Python, C++, ML systems, training infrastructure development, parallelism strategies, DDP, FSDP, model/pipeline/expert parallelism, PyTorch, CUDA, NCCL, performance optimization, memory, communication, throughput, benchmarking AI accelerators, large-scale LLM evaluation, research, optimization, cross-functional work, leadership, mentoring, publications, open-source contributions, AI foundation models, distributed training, reinforcement learning, high-performance inference, heterogeneous hardware compilation

Salary

$232,560 - $427,500 per year

Employment Type

Full-time

Level

Mid-Level

Posted

4/23/2026
