Research Scientist - Technologies of Data Management, LLM and AI Agents - Global Frontier Tech Recruitment Program - 2027 Start (PhD)

ByteDance
Seattle, Washington, US
On-site

Job Description

Location:

Seattle

Team:

Technology

Employment Type:

Regular

Job Code:

A220261

Responsibilities

We are looking for talented individuals to join our team in 2027. As a graduate, you will have opportunities to pursue bold ideas, tackle complex challenges, and unlock limitless growth. Launch your career where inspiration is infinite at ByteDance.

Successful candidates must be able to commit to an onboarding date by the end of 2027. Please state your availability and expected graduation date clearly in your resume.

Team Introduction:

With the rapid growth of ByteDance's business, ByteDance's system infrastructure now operates at a massive scale and requires versatile system solutions. Our lab collaborates with HQ teams on advanced R&D projects focused on LLM/AI + Infrastructure technologies, covering both infrastructure for LLM/AI and LLM/AI for infrastructure. For example, we are building a new cloud-native vector index library, our Text-to-SQL project ranks at the top of well-known industry benchmarks, and we develop advanced AIOps technologies used by our Volcano cloud products.

Beyond achieving strong business impact, we also encourage publishing at top-tier conferences. In 2025 alone, our lab published nearly 20 papers at top-tier venues such as SIGMOD, VLDB, FSE, ICLR, EuroSys, and WWW. We hire students with strong technical skills, a willingness to learn and solve complex technical challenges, and a passion for making an impact on millions of users.

With the large-scale deployment of large language models (LLMs) and AI Agents, traditional cloud-native infrastructure can no longer meet the extreme performance and elasticity demands of AI workloads. This project conducts systematic research across the full stack of AI infrastructure, focusing on the following areas:

Data Management for LLM and Agents

Intelligence & Agent Architecture

  • Explore infrastructure auto-optimization based on AI Agent workflows. Build a self-evolving business Agent framework, enabling full-stack intelligent optimization through “AI for Infrastructure”. We investigate various ways to apply AI/LLMs to infrastructure problems, such as AIOps, NL-to-SQL, and Auto Skills. This project aims to build next-generation AI-native infrastructure to support LLMs and AI Agents, improving resource utilization, reducing costs, enabling elastic scalability, and driving the evolution of AI infrastructure technologies.


Qualifications

Minimum Qualifications:

  • Individuals who are completing or have recently completed a PhD in Software Development, Computer Science, Computer Engineering, or a related technical discipline.
  • Skilled in at least one mainstream programming language (e.g., C/C++, Python).

Skills & Requirements

Technical Skills

Data Management, LLM, AI Agents

Employment Type

FULL TIME

Level

Principal

Posted

4/16/2026
