Senior Data Engineer – Python ETL (Data Quality, Spark/Databricks)

MSR Technology Group
Washington, US
Remote

Job Description

Remote - US Based

12+ Month Contract

Not Open to Third Party Firms

We are seeking a hands-on Senior Data Engineer (ETL / Python Developer) to support an enterprise data warehouse and analytics program within a regulated healthcare environment. This role focuses on designing, building, and modernizing large-scale data ingestion and transformation pipelines that support analytics, reporting, and compliance-driven data initiatives.

The ideal candidate has strong Python-based data engineering experience and deep exposure to enterprise ETL environments, including legacy modernization and cloud-based platforms. This is a delivery-focused engineering role, not a QA or orchestration-only position.

Key Responsibilities

  • Design, develop, and maintain enterprise ETL pipelines supporting large-scale data platforms
  • Build and optimize Python-based data transformation logic (source-to-target transformations implemented in Python)
  • Develop scalable data processing solutions using Spark and Databricks
  • Support enterprise analytics and regulated reporting initiatives
  • Implement data validation, reconciliation, and audit-traceable pipelines
  • Write and optimize complex SQL across enterprise data platforms (Snowflake, Oracle, SQL Server, Teradata)
  • Participate in legacy ETL modernization initiatives (e.g., converting Informatica workflows or shell scripts to Python)
  • Support cloud-based data architectures within Azure environments
  • Collaborate with architects, analysts, QA, and reporting teams to ensure data quality and accuracy
  • Participate in CI/CD, code reviews, and source control using Azure DevOps and GitHub
  • Support production operations, incident resolution, and root-cause analysis

Required Qualifications

  • 5+ years of enterprise data engineering experience
  • 5+ years of hands-on ETL development (Informatica PowerCenter, Azure Data Factory, or similar tools)
  • 5+ years of Python development focused on data engineering and transformation logic
  • 3+ years of Spark-based processing (Databricks or equivalent)
  • Strong SQL expertise across large relational databases
  • Experience working in regulated, audit-sensitive environments
  • Strong analytical, troubleshooting, and problem-solving skills
  • Bachelor’s degree or higher in Computer Science, Engineering, Analytics, or related field

Preferred Qualifications

  • Experience supporting large enterprise data warehouse environments
  • Healthcare or public-sector data experience
  • Experience with data quality frameworks and reconciliation processes
  • Scripting experience (PowerShell or Bash)
  • Experience designing or consuming REST APIs
  • Cloud-based data engineering experience in Azure
  • Azure data or analytics certifications

Work Environment

This role is fully remote within the continental U.S. Occasional travel to Springfield, IL may be required based on project needs.

Onboarding: This role will require a background check and drug screen.

Skills & Requirements

Technical Skills

Python, ETL, Spark, Databricks, SQL, Snowflake, Oracle, SQL Server, Teradata, Informatica PowerCenter, Azure Data Factory, Azure DevOps, GitHub, PowerShell, Bash, REST APIs, Azure, Collaboration, Problem Solving, Communication, Data Engineering, Data Quality, Data Transformation, Data Ingestion, Data Processing, Data Validation, Data Reconciliation, Data Audit, Data Analytics, Data Reporting, Data Compliance

Employment Type

CONTRACT

Level

Senior

Posted

5/1/2026
