Data Engineer, Staff

Qualcomm
San Diego, US
On-site

Job Description

Company:

Qualcomm Incorporated

Job Area:

Information Technology Group > IT Data Engineer

General Summary:

We are seeking a Staff Data Engineer to design, build, and operate a modern, scalable data platform with Databricks Lakehouse as a core foundation.

In this role, you will focus on building reusable data frameworks, shared platform components, and standardized pipelines that enable teams to deliver data products efficiently and consistently. Your work will support analytics, reporting, and downstream advanced use cases (including AI and machine learning), with a strong emphasis on reliability, governance, developer productivity, and intelligent automation.

This is a hands-on role with meaningful ownership across data engineering, framework development, AI‑driven automation, platform reliability, security, and cost management, while contributing to architectural decisions and data standards.

This role requires full-time onsite work in San Diego, CA (5 days per week).

Minimum Qualifications:

  • 5+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field; OR 7+ years of IT-related work experience without a Bachelor's degree.

  • 3+ years of work experience with programming (e.g., Java, Python).
  • 3+ years of work experience with SQL or NoSQL Databases.
  • 3+ years of work experience with Data Structures and algorithms.

What You’ll Do

Data Engineering, Frameworks & AI‑Driven Automation

  • Design, build, and maintain scalable batch and streaming data pipelines
  • Develop reusable data engineering frameworks, libraries, and templates for ingestion, transformation, validation, and publishing
  • Establish standardized patterns for data modeling, transformations, and pipeline orchestration
  • Implement end-to-end data workflows from raw ingestion to curated analytical datasets
  • Leverage AI‑based techniques to automate and optimize data engineering workflows, such as:
      • Intelligent schema inference and evolution
      • Automated data quality checks and anomaly detection
      • Pipeline failure detection and self-healing mechanisms
  • Ensure data quality, reliability, and performance across pipelines and shared frameworks
  • Support downstream consumers such as analytics, reporting, and AI/ML teams

Reliability, Operations & Intelligent Automation

  • Define and monitor SLIs/SLOs for data pipelines, frameworks, and platform availability
  • Participate in incident response, on-call rotations, and post-incident reviews
  • Apply AI‑assisted monitoring and alerting to proactively detect performance issues, data drift, and operational anomalies
  • Implement security, compliance, and data governance controls across shared data assets
  • Drive performance tuning and cost optimization, including automated recommendations for resource utilization and workload optimization

Collaboration & Technical Leadership

  • Partner with analytics, application, and platform teams to understand common data needs and platform gaps
  • Drive adoption of standardized data frameworks, automation patterns, and best practices across teams
  • Contribute to data architecture decisions, platform standards, and design guidelines
  • Mentor junior engineers and provide technical guidance, including best practices for automating data workflows

Qualifications

Data Engineering, Frameworks & System Design

  • 8+ years of experience building and operating data platforms or distributed data systems
  • Proven experience designing and building reusable data engineering frameworks, libraries, or platform components
  • Strong experience designing scalable, reliable data pipelines using standardized patterns
  • Solid understanding of data modeling, storage formats, schema evolution, and query performance
  • Experience implementing automation in data pipelines, including rule‑based or AI‑assisted approaches
  • Ability to reason about architectural trade-offs across scalability, cost, reliability, and security

Cloud & Data Platform Experience

  • Strong hands-on experience with AWS, including IAM, networking, and multi-account setups
  • Proven experience with Databricks Lakehouse, including:
      • Delta Lake
      • Unity Catalog
  • Strong proficiency in Python for framework development, data processing, and automation
  • Experience building data platforms that support multiple consumers and automated workflows

Security & Communication

  • Understanding of cloud security best practices and data governance
  • Experience working in regulated or compliance-driven environments
  • Strong communication skills and the ability to drive adoption of shared frameworks and automation patterns across teams

Nice-to-Have

  • Experience building AI‑assisted or intelligent automation for:
      • Data quality monitoring
      • Pipeline observability
      • Cost or performance optimization
  • Experience building internal data platforms or enablement frameworks
  • Experience supporting AI/ML teams as platform consumers

Skills & Requirements

Technical Skills

Java, Python, SQL, NoSQL databases, Data structures, Algorithms, Communication, Data engineering, AI, Machine learning

Employment Type

Full-time

Level

mid

Posted

4/10/2026
