Principal AI/ML Engineer - AI Safety and Evaluation job at A10 Networks in San Jose, CA


Job Description

Title: Principal AI/ML Engineer - AI Safety & Evaluation

Location: San Jose, United States

Full time

About the Team

We're building a future where AI systems are not only powerful but safe, aligned, and robust against misuse. Our team focuses on advancing practical safety techniques for large language models (LLMs) and multimodal systems, ensuring these models remain aligned with human intent and resist attempts to produce harmful, toxic, or policy-violating content.

We operate at the intersection of model development and real-world deployment, with a mission to build systems that can proactively detect and prevent jailbreaks, toxic behaviors, and other forms of misuse. Our work blends applied research, systems engineering, and evaluation design to ensure safety is built into our models at every layer.

About the Role

We're looking for a Principal Engineer to lead the technical strategy and architecture for protecting foundation models against misuse, such as jailbreaks, prompt injection, toxic outputs, and custom policy violations. In this role, you'll apply your expertise in scalable systems design, applied machine learning, and model-level defenses to build core infrastructure that ensures AI systems behave safely and responsibly in production. You'll set technical direction and drive architectural decisions across a broad surface area of AI safety systems: designing safety interventions, integrating evaluation workflows, and developing models and tooling that detect and prevent harmful or non-compliant behavior. This role is ideal for someone who wants to work at the intersection of model behavior, product safety, and systems engineering.

What You'll Do

Architect and lead the development of model-level defenses against jailbreaks, prompt injection, and custom policy violations

Define and drive evaluation strategies, including adversarial testing and stress-testing pipelines, to identify safety weaknesses before deployment

Set technical direction for scalable mitigation techniques such as safety-focused fine-tuning, prompt shielding, and post-processing methods to reduce harmful or non-compliant outputs

Collaborate with red teamers and researchers to convert emerging threats into measurable evaluations and system-level safeguards

Scale and improve human-in-the-loop pipelines for detecting toxic, biased, or non-compliant outputs

Stay up to date with LLM safety research, jailbreak tactics, and adversarial trends, and apply insights to real-world defenses

What We're Looking For

7+ years of experience in applied machine learning, AI infrastructure, or safety-critical systems, with 3+ years in a senior or staff-level technical leadership role

Deep understanding of transformer-based architectures and experience building or evaluating safety interventions for LLMs

Proven expertise in analyzing and addressing adversarial behaviors, edge-case failures, and misuse scenarios

Demonstrated ability to guide long-term technical strategy, influence organizational direction, and mentor cross-functional teams

Strong written and verbal communication skills, with experience influencing technical direction at the org or platform level

Bachelor's, Master's, or PhD in Computer Science, Machine Learning, or a related field

Nice to Have

Experience applying techniques such as reinforcement learning from human feedback (RLHF), adversarial training, or safety fine-tuning at scale

Hands-on work designing prompt-level defenses, content filtering systems, or mechanisms to prevent jailbreaks and policy violations

Contributions to AI safety research, industry standards, or open-source tools related to model robustness, alignment, or evaluation

Familiarity with model governance frameworks, including safety policies, model cards, red teaming protocols, or risk classification methodologies

A10 Networks is an equal opportunity employer and a VEVRAA federal subcontractor. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. A10 also complies with all applicable state and local laws governing nondiscrimination in employment.

Targeted compensation guideline: $225,000 - $245,000. Compensation will vary based on a number of factors, including market demand for specific skills, role type, job level, and individual qualifications. Final salary offers are determined by considerations including, but not limited to, subject matter expertise, demonstrated skill level, relevant experience, geographic location, education, certifications, and training.

Skills & Requirements

Technical Skills

AI safety, LLMs, multimodal systems, model-level defenses, adversarial testing, stress-testing pipelines, safety-focused fine-tuning, prompt shielding, post-processing methods, reinforcement learning from human feedback (RLHF), adversarial training, safety fine-tuning, prompt-level defenses, leadership, communication, teamwork, problem-solving, innovation, adaptability, creativity, collaboration, AI, ML, safety, evaluation, security

Salary

$225,000 - $245,000 per year

Employment Type

Full-time

Level

Principal

Posted

4/20/2026
