Audio ML Engineer (Research)

HARMAN International
Los Angeles, US
On-site

Job Description

Introduction: A Career at HARMAN Corporate

We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN Corporate, you are integral to our company’s award-winning success.

  • Enrich your managerial and organizational talents – from finance, quality, and supply chain to human resources, IT, sales, and strategy
  • Augment your comprehensive skillset with expert training across decision-making, change management, leadership, and business development
  • Obtain 360-degree support throughout your career life cycle, from early-stage to seasoned leader

About The Role

The Audio ML Engineer (Research) develops learning-based perception and personalization models that enhance Intelligent Audio experiences across devices and contexts. You will build models that understand audio scenes, predict perceptual outcomes, personalize tuning, and drive adaptive behavior—designed from the start for embedded and cloud deployment paths. In Year 1, your work is expected to feed directly into productization by delivering models that are measurable, reproducible, and deployable (or easily productizable) with clear compute/memory tradeoffs. Success means your models improve user experience in controlled testing and remain robust in the messiness of real-world use cases.

What You Will Do

  • Learning-Based Perception Models: Develop ML models for perception-related tasks (e.g., quality prediction, artifact detection, scene/context classification, personalization embeddings, preference modeling).
  • Embedded + Cloud Deployment Focus: Design solutions that can run on-device (quantized, efficient inference) and/or scale in cloud pipelines (batch evaluation, fleet learning, offline training + on-device inference).
  • Personalization & Adaptation: Build personalization and adaptation strategies that integrate with DSP pipelines (e.g., model outputs drive adaptive EQ/DRC/spatial parameters) while maintaining stability and explainability.
  • Data Strategy & Tooling: Define data collection and labeling strategies, data QA, augmentation, bias checks, and experiment tracking—so results are reproducible and transferable to product.
  • Model Optimization: Apply compression/acceleration techniques (quantization, pruning, distillation, ONNX export, hardware-aware training) to meet latency and footprint constraints.
  • Cross-Functional Handoff: Partner with DSP, perceptual, and productization engineers to deliver reference pipelines, integration guidelines, and acceptance metrics for OneUX releases.
  • AI Tools: Use modern AI tooling (LLM-based coding assistants, data analysis copilots, automated report generation) to accelerate iteration while keeping rigorous review and validation.

What You Need To Be Successful

  • Education: MS or PhD in CS/EE/Statistics/Applied ML (or BS with strong equivalent experience).
  • Experience: 5+ years applied ML engineering experience; 2+ years specifically in audio/speech or time-series ML strongly preferred.
  • ML Stack: Strong proficiency in Python, PyTorch/TensorFlow, dataset pipelines, evaluation methodology, and experiment tracking.
  • Deployment Skills: Experience deploying models to embedded (TFLite / ONNX Runtime / custom inference) and/or cloud (service or batch pipelines, MLOps practices).
  • Signal + Perception Understanding: Working knowledge of DSP/audio fundamentals and how ML interacts with perceptual outcomes.
  • AI Tools: Demonstrated experience using AI-assisted tools to speed up coding, testing, debugging, and documentation.

Bonus Points if You Have

  • Experience with audio ML domains (speech enhancement, denoising, source separation, spatial audio ML, perceptual audio metrics, recommendation/personalization).
  • Familiarity with on-device acceleration (NNAPI, Core ML concepts, CUDA/TensorRT-like optimization where applicable).
  • Experience with privacy-preserving learning or on-device personalization approaches.
  • Patents/publications or shipped ML features in consumer/automotive audio products.

What Makes You Eligible

  • Successfully complete a background investigation and drug screen as a condition of employment (post-offer).

What We Offer

  • Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location
  • Access to employee discounts on world-class products (JBL, HARMAN Kardon, AKG, and more)
  • Extensive training opportunities through our own HARMAN University
  • Competitive wellness benefits
  • Tuition reimbursement
  • “Be Brilliant” employee recognition and rewards program
  • An inclusive and diverse work environment that fosters and encourages professional and personal development

Pay Transparency

$134,250 - $196,900

Dependent on the position offered, other forms of compensation are also available, such as bonuses or commission.

Pay is based on a wide range of factors, including, without limitation, skill set, experience, training, location, and business needs.

Skills & Requirements

Technical Skills

Python, PyTorch, TensorFlow, DSP, audio fundamentals, ML, time-series ML, TFLite, ONNX Runtime, custom inference, cloud pipelines, MLOps, AI-assisted tools, teamwork, problem-solving, communication, attention to detail, technical discussions, audio ML, perception models, personalization, DSP pipelines, audio scene understanding, perceptual outcomes

Soft Skills

Problem-solving, Communication

Domain Knowledge

Audio, Machine learning, DSP

Salary

$134,250 - $196,900 per year

Employment Type

Full-time

Level

Senior

Posted

3/25/2026
