Enterprise Data Engineer (Databricks)

Santcore Technologies
Austin, US
Hybrid

Job Description

Job Title: Enterprise Data Engineer (Databricks)

Location: Hybrid – Austin, TX

Duration: 5 Months

Pay Rate: $65/hr on 1099

Role Summary

The Enterprise Data Engineer will design, build, and operate scalable data pipelines within an Azure-based Databricks Lakehouse architecture. The primary focus is to deliver and maintain a software-driven data model for analytics and data consumption.

This is a hands-on, execution-focused role responsible for engineering reliable data ingestion from multiple data sources, performing transformations, implementing data quality checks, and delivering curated datasets integrated with ServiceNow (ITSM/ITSLM) and ApptioOne (ITFM).

The role involves close collaboration with data architects, platform teams, providers, and stakeholders to translate architectural designs into scalable, governed, and production-ready data solutions. The work follows Agile software engineering practices, including GitHub-based workflows and CI/CD-driven SDLC processes.

Key Responsibilities

  • Design, build, and maintain data models supporting data consumption, integration, semantic analytics, reporting, and executive dashboards.
  • Develop scalable data ingestion and transformation pipelines using Azure PaaS services, Databricks, Delta Lake, Python, and Spark SQL.
  • Implement integrations for ServiceNow operational data (SLA, incidents, CMDB) and ApptioOne financial and cost allocation data.
  • Develop and enforce data quality checks, validation rules, and monitoring mechanisms for end-to-end pipeline reliability.
  • Apply Unity Catalog governance including data access control, lineage management, and schema enforcement as per architectural standards.
  • Optimize Databricks Lakehouse performance including pipeline efficiency, storage layout, and query optimization.
  • Support CI/CD pipelines and DevOps automation for data engineering workflows using Azure DevOps and GitHub Actions.
  • Collaborate with architects, stakeholders, Capgemini teams, and service providers to deliver reporting and analytics solutions.
  • Troubleshoot production data issues and ensure operational stability of analytics and reporting systems.
  • Maintain documentation, runbooks, and operational standards for Databricks data pipelines.

Required Skills & Experience

  • 5+ years of experience in Data Engineering or Analytics Engineering roles.
  • Hands-on experience with Databricks, Delta Lake, and Spark-based data pipelines.
  • Strong understanding of Medallion Architecture, especially Gold/Platinum layer implementation.
  • Proficiency in Python, SQL, and Spark (PySpark or Spark SQL).
  • Experience integrating enterprise systems such as ServiceNow (SLA, incident, CMDB data).
  • Experience working with financial or cost management platforms (e.g., ApptioOne or similar ITFM tools).
  • Strong understanding of data modeling techniques and methodologies.
  • Familiarity with Unity Catalog for data governance and access control.
  • Experience with Power BI or similar BI tools consuming Lakehouse datasets.
  • Experience with Azure data services (e.g., ADLS Gen2, orchestration tools, integration patterns), Azure DevOps, and GitHub-based CI/CD pipelines.

Preferred Qualifications

  • Experience supporting public sector data initiatives.
  • Familiarity with the ITIL 4 framework and SLA-based reporting.
  • Experience with financial systems, SLA analytics, operational KPIs, or cost transparency dashboards.
  • Exposure to MLflow, Feature Store, or AI/ML pipelines (implementation support role, not architecture ownership).

Skills & Requirements

Technical Skills

Databricks, Delta Lake, Spark, Python, SQL, ServiceNow, ApptioOne, Azure DevOps, GitHub Actions, Collaboration, Communication, Problem-solving, Data Engineering, Data Pipelines, Data Quality, Data Governance

Salary

$65/hour

Employment Type

CONTRACT

Level

Mid

Posted

4/14/2026
