Senior Machine Learning Engineer - Policy & Safety

Spotify
London, GB
Hybrid

Job Description

We design Spotify’s consumer experience—end to end, moment to moment, across every screen, platform, and partner integration. Our mission is to make listening feel effortless, personal, and joyful for billions of users around the world. That means turning complexity into clarity across hundreds of touchpoints—from our mobile and desktop apps to the smart speakers, TVs, cars, and integrations where Spotify shows up every day. If it touches a consumer, we shape it. We bring deep insight into human behavior, design, and technology to craft experiences that feel intuitive, expressive, and unmistakably Spotify.

The Policy & Safety team sits within Content Platform in the Experience Mission, building the systems that keep Spotify safe, compliant, and trusted by millions of users and creators. This team owns Spotify’s content moderation infrastructure — from detection models to policy enforcement systems and compliance data pipelines.

Working at the intersection of machine learning, platform engineering, and regulatory compliance, the team partners closely with Trust & Safety, Legal, and Public Affairs. They’re on the critical path for every new content type and social feature — including messaging, comments, and collaborative experiences — ensuring safety is built in from day one. With a strong focus on “safety by default,” the team is investing in large-scale rearchitecture and ML-driven systems to proactively protect users and empower safer interactions across the platform.


What You'll Do

Design, build, and ship production-grade machine learning systems that power content safety and policy enforcement at Spotify scale

Own and lead key technical initiatives across detection, classification, and policy evaluation systems

Develop and maintain ML models for content moderation, including multimodal and LLM-based systems

Build robust evaluation frameworks, including standardized datasets, offline and online metrics, and continuous improvement loops

Drive experimentation to improve model performance, reliability, and fairness in safety-critical systems

Collaborate closely with cross-functional partners in Trust & Safety, Legal, and Public Affairs to align on policy and enforcement needs

Provide technical leadership within the team, mentoring engineers and contributing to ML strategy and prioritization

Represent technical decisions and trade-offs in stakeholder discussions and influence product direction

Who You Are

You have solid experience building and deploying machine learning systems in production environments at scale

You are experienced with training, evaluating, and maintaining ML models using modern frameworks such as PyTorch

You have a deep understanding of machine learning evaluation, including dataset design, metrics, and continuous improvement systems

You know how to design systems that balance performance, reliability, and real-world impact in high-stakes domains

You care about building safe, responsible, and user-centric ML systems

You are comfortable working across disciplines, partnering with legal, policy, and product stakeholders

You have experience leading technical projects and influencing direction within a team or product area

You have experience with distributed systems or backend technologies (e.g., Scala)

Where You'll Be

This role is based in London or Stockholm

We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows the flexibility to work from home.


Skills & Requirements

Technical Skills

Machine learning, Python, PyTorch, Distributed systems, Backend technologies, Scala, Technical leadership, Collaboration, Communication, Influence, Platform engineering, Regulatory compliance, Content moderation, Policy enforcement, ML-driven systems

Employment Type

Full-time

Level

Senior

Posted

5/3/2026
