Our vision is to transform how the world uses information to enrich life for all.
Join an inclusive team passionate about one thing: using their expertise in the relentless pursuit of innovation for customers and partners. The solutions we build help make everything from virtual reality experiences to breakthroughs in neural networks possible. We do it all while committing to integrity, sustainability, and giving back to our communities. Because doing so can fuel the very innovation we are pursuing.
The Smart Manufacturing and AI team at Micron Technology is looking for a GPU Performance Engineer. Our mission is to deliver industry-winning machine learning, custom GenAI, and Agentic AI solutions to power Micron’s dominance in the highly competitive memory solutions market. Qualified applicants will have experience with a variety of data and cloud technologies, as well as extensive experience in data modeling, querying, and deploying scalable data pipelines to execute machine learning models and AI agents. You will collaborate with Data Scientists, Data Engineers, and expert users to build and deploy scalable AI/ML solutions that drive value and insight from Micron’s manufacturing processes and systems.
Responsibilities Include, But Are Not Limited To
- Architect and execute large-scale custom model training and fine-tuning jobs (SFT, RLHF) on multi-node, multi-GPU clusters.
- Optimize training throughput and memory efficiency using distributed training strategies (FSDP, DeepSpeed, Megatron-LM) and mixed-precision techniques (FP16/BF16).
- Design and develop autonomous AI Agents capable of multi-step reasoning, planning, and tool execution to automate complex manufacturing workflows.
- Analyze and profile complex workloads (e.g., LLM training, Rendering pipelines) to identify bottlenecks in compute, memory bandwidth, and latency.
- Write and optimize high-performance kernels using CUDA, HIP, or custom assembly (PTX/SASS) to unlock hardware capabilities.
- Collaborate with Hardware Architects to define features for next-generation GPUs based on workload characterization.
- Design and implement performance regression testing suites to catch degradations in drivers or compilers.
- Mentor junior engineers on parallel programming paradigms and optimization techniques.
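To give a flavor of the workload analysis described above, the sketch below classifies kernels as compute- or memory-bound with a simple roofline model. This is an illustrative toy, not Micron tooling; the hardware figures are example numbers (roughly A100-class), not a spec for any particular product.

```python
# Illustrative roofline check: is a kernel compute- or memory-bound?
# Hardware numbers below are example figures, not a product spec.

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte of DRAM traffic."""
    return flops / bytes_moved

def bottleneck(intensity: float, peak_flops: float, peak_bw: float) -> str:
    """A kernel is memory-bound below the ridge point (peak_flops / peak_bw)."""
    ridge = peak_flops / peak_bw
    return "memory-bound" if intensity < ridge else "compute-bound"

PEAK_FLOPS = 312e12   # example: ~312 TFLOP/s BF16 tensor-core peak
PEAK_BW    = 2.0e12   # example: ~2 TB/s HBM bandwidth

# A large GEMM: C[M,N] += A[M,K] @ B[K,N], M=N=K=4096, BF16 (2 bytes/elem)
M = N = K = 4096
gemm_flops = 2 * M * N * K              # one multiply + one add per MAC
gemm_bytes = 2 * (M * K + K * N + M * N)  # read A and B, write C (ideal reuse)
print(bottleneck(arithmetic_intensity(gemm_flops, gemm_bytes),
                 PEAK_FLOPS, PEAK_BW))  # -> compute-bound

# An elementwise op (e.g., a bias add): 1 FLOP per element, one load + one store
elems = M * N
ew_flops = elems
ew_bytes = 2 * 2 * elems                # BF16 load + BF16 store
print(bottleneck(arithmetic_intensity(ew_flops, ew_bytes),
                 PEAK_FLOPS, PEAK_BW))  # -> memory-bound
```

The same arithmetic explains why large matmuls saturate tensor cores while elementwise ops are limited by memory bandwidth, which is the kind of bottleneck reasoning this role applies to real LLM training and rendering workloads.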
Education Qualifications
- Technical degree required. A Ph.D. in Computer Science, Statistics, or a related field is highly desired.
Minimum Qualifications
- Deep understanding of GPU architecture (memory hierarchy, tensor cores, interconnects like NVLink) and experience managing GPU resources in both cloud environments and on-prem.
- Hands-on experience with Distributed Data Parallel (DDP), Fully Sharded Data Parallel (FSDP), and model parallelism techniques.
- Proficiency in fine-tuning Large Language Models using PEFT techniques (LoRA, QLoRA) and optimizing inference engines (vLLM, TensorRT-LLM).
- Experience developing GenAI applications and AI Agents using frameworks like LangChain, LangGraph, LlamaIndex, or AutoGen.
- Proficiency with Large Language Models (LLMs), including prompt engineering, function calling/tool use, and Chain-of-Thought (CoT) reasoning.
- Experience building and running end-to-end ML systems that automate the training, testing, and deployment of machine learning models.
- Familiarity with machine learning frameworks (PyTorch required; TensorFlow, scikit-learn, etc. are a plus).
- Software development skills and the desire to work on cutting edge development in a Cloud environment.
- Strong scripting and programming skills in Python or Java (Python preferred).
- Experience with continuous integration/continuous delivery (CI/CD) tools (Jenkins, Git, Docker, Kubernetes).
- 9+ years of experience in performance optimization, parallel computing, or low-level systems programming.
- Deep expertise in C++ and at least one GPGPU framework (CUDA is preferred, but HIP/OpenCL/Metal are acceptable).
- Outstanding analytical thinking and interpersonal, oral, and written communication skills.
- Ability to prioritize and meet critical project timelines in a fast-paced environment.
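As a pointer to what the PEFT fine-tuning bullet refers to, here is a toy sketch of the LoRA idea in plain Python: a frozen weight matrix W is adapted by a low-rank update (alpha/r) * B @ A, so only r * (d_out + d_in) parameters are trained. The matrices and values are made up for illustration; this is the underlying math, not the PEFT library API.

```python
# Toy LoRA illustration: W_eff = W + (alpha / r) * B @ A
# All values are made-up illustrative numbers.

def matmul(X, Y):
    """Naive dense matmul for small illustrative matrices."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

d_out, d_in, r, alpha = 4, 4, 1, 2

W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]  # frozen
B = [[1.0] for _ in range(d_out)]   # d_out x r, trainable
A = [[0.5] * d_in]                  # r x d_in, trainable

scale = alpha / r
delta = [[scale * v for v in row] for row in matmul(B, A)]
W_eff = [[W[i][j] + delta[i][j] for j in range(d_in)] for i in range(d_out)]

full_params = d_out * d_in          # parameters a full fine-tune would update
lora_params = r * (d_out + d_in)    # parameters LoRA actually trains
print(W_eff[0])                     # -> [2.0, 1.0, 1.0, 1.0]
print(lora_params, "<", full_params)
```

At realistic dimensions (d_out = d_in = 4096, r = 16) the trainable-parameter count drops by roughly two orders of magnitude, which is why LoRA and QLoRA make LLM fine-tuning tractable on modest GPU budgets.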
Preferred
- Experience with HPC job schedulers (e.g., Slurm) or orchestrating GPU workloads on Kubernetes (Ray, KubeFlow).
- Knowledge of lower-level optimization (CUDA programming, Triton kernels, or custom C++ extensions for PyTorch).
- Experience with Multi-Agent Systems and orchestrating collaboration between specialized agents.
- Deep knowledge of math, probability, statistics and algorithms.
- Demonstrated ability to study and transform data science prototypes into production solutions.
- Knowledge of computer vision and/or signal processing including techniques for classification and feature extraction.
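To illustrate the agent patterns named above, the sketch below runs a minimal function-calling loop: plan, call a tool, observe, repeat. The "model" is a hypothetical hard-coded stand-in and the tools and lot IDs are invented; a real system would pair an LLM with a framework such as LangChain, LangGraph, or AutoGen.

```python
# Hedged sketch of a function-calling agent loop with a stand-in "model".
# Tool names, lot IDs, and thresholds are invented for illustration.

TOOLS = {
    "lookup_lot_yield": lambda lot_id: {"lot": lot_id, "yield_pct": 97.2},
    "flag_for_review":  lambda lot_id: f"lot {lot_id} queued for review",
}

def fake_model(observations):
    """Stand-in policy: choose the next tool call from prior observations."""
    if not observations:
        return {"tool": "lookup_lot_yield", "args": {"lot_id": "L-001"}}
    last = observations[-1]
    if isinstance(last, dict) and last.get("yield_pct", 100.0) < 98.0:
        return {"tool": "flag_for_review", "args": {"lot_id": last["lot"]}}
    return None  # nothing left to do

def run_agent(max_steps=5):
    """Multi-step loop: plan -> call tool -> observe -> repeat until done."""
    observations = []
    for _ in range(max_steps):
        action = fake_model(observations)
        if action is None:
            break
        observations.append(TOOLS[action["tool"]](**action["args"]))
    return observations

print(run_agent()[-1])   # -> lot L-001 queued for review
```

The multi-agent work listed above extends this same loop: several specialized agents, each with its own tools, hand observations to one another under an orchestrator instead of a single policy.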
About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational exc