Senior Data Science Engineer, GenAI Platforms & Data Infrastructure

Adobe
Chicago, US

Job Description

The Opportunity

Adobe Customer Solutions is hiring a Senior Data Science Engineer to build practical AI and data systems for Adobe's Digital Experience business.

This role focuses on the infrastructure behind GenAI agents, customer intelligence products, and field productivity workflows. The work includes data pipelines, Databricks workflows, LLM-powered agents, reusable platform services, and systems that help teams understand customer health, retention, growth, adoption, and value.

This is a hands-on engineering role. The team needs someone who can take an unclear business problem, shape the technical approach, build the data foundation, and ship reliable AI-enabled workflows into production. It is a great opportunity for someone who likes building systems that people use every day!

What You'll Do

In this role, you'll build and operate data and AI infrastructure used by Customer Success, Customer Engineering, Professional Services, and go-to-market teams.

You will:

  • Build production data pipelines, feature workflows, and platform services using Python, SQL, Spark, Databricks, Delta Lake, APIs, and cloud tools.
  • Create LLM-powered agents and AI workflows that summarize customer signals, generate insights, recommend actions, and reduce manual work.
  • Own platform components such as data ingestion, orchestration, semantic layers, tool integrations, access patterns, monitoring, and reliability.
  • Combine structured and unstructured data from usage, adoption, support, success, value, account, and operational systems.
  • Improve GenAI quality through evaluation, retrieval design, prompt and tool design, feedback loops, and production monitoring.
  • Strengthen data quality, lineage, alerting, access control, governance, and operational support.
  • Partner with product, engineering, data science, business operations, and customer-facing teams to turn priority problems into working systems.
  • Apply strong engineering practices through Git, code review, CI/CD, Databricks Repos, documentation, and reproducible development.

A few questions this role will help answer: Which customer signals matter most? Where can AI remove repetitive work? How should agents connect to trusted data? What platform capability would help multiple teams move faster?

What You Need to Succeed

Strong candidates bring data engineering depth, GenAI fluency, platform thinking, and sound delivery judgment.

Required qualifications:

  • 8+ years in data engineering, machine learning engineering, data science engineering, analytics engineering, platform engineering, or a related technical role.
  • Production work with Python, SQL, Spark, Databricks, Delta Lake, distributed data processing, and workflow orchestration.
  • Hands-on work with GenAI or LLM systems, including agents, copilots, retrieval-augmented generation, semantic search, tool/function calling, prompt workflows, or AI automation.
  • Strong knowledge of data modeling, data quality, lineage, access control, observability, and scalable pipeline design.
  • Ability to guide work from discovery through architecture, development, deployment, monitoring, adoption, and iteration.
  • Good judgment on when to prototype, when to harden for production, and how to manage technical debt.
  • Clear communication with technical teams, business stakeholders, and senior leaders.
  • Ability to work independently, navigate ambiguity, prioritize high-impact work, and deliver in a fast-moving environment.
  • Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Statistics, Mathematics, or a related field, or equivalent practical experience.

Preferred Qualifications

Helpful additional experience includes:

  • Internal AI platforms, agent platforms, customer intelligence systems, or reusable data infrastructure.
  • LLM evaluation, prompt evaluation, model monitoring, human feedback loops, AI governance, or responsible AI practices.
  • Azure, AWS, or GCP, including secure deployment patterns and service integrations.
  • Databricks Workflows, Airflow, Dagster, or similar orchestration tools.
  • APIs, microservices, event-driven workflows, or application integrations.
  • Vector databases, embeddings, semantic search, knowledge graphs, graph databases, Elastic Stack, Kafka, or Kinesis.
  • Customer health, retention, adoption, growth, value realization, or enterprise SaaS operating models.
  • Adobe Experience Cloud, Adobe Experience Platform, Adobe Analytics, Customer Journey Analytics, or related Digital Experience products.

What Success Looks Like

In the first 90 days, this person will learn the core data and AI platform landscape, contribute to priority GenAI and data infrastructure work, and take ownership of meaningful production components.

Within six months, this person will own one or more foundation areas such as agent infrastructure, customer intelligence pipelines, orchestration, data quality, or reusable AI workflow services.

Over time, this role will help Adobe Customer Solutions move from manual, one-off workflows toward reusable, AI-enabled systems that serve multiple teams.

Skills & Requirements

Technical Skills

Python, SQL, Spark, Databricks, Delta Lake, APIs, cloud tools, LLM-powered agents, AI workflows, data ingestion, orchestration, semantic layers, tool integrations, access patterns, monitoring, reliability, structured and unstructured data, data quality, lineage, alerting, access control, governance, operational support, Git, code review, CI/CD, Databricks Repos, documentation, reproducible development, delivery judgment, clear communication, working independently, navigating ambiguity, prioritizing high-impact work, delivering in a fast-moving environment, GenAI, data infrastructure, customer intelligence products, field productivity workflows, data pipelines, Databricks Workflows, reusable platform services, customer health, retention, growth, adoption, value

Level

Senior

Posted

4/28/2026

Apply Now

You will be redirected to Adobe's application portal.