Senior Principal Data Engineer

StarHub
SG
On-site

Job Description

Role Mission:

To lead and scale the Data Engineering, DataOps and Data Stewardship functions within StarHub’s Digital Experience Platform (DXP) Data organization. This role ensures end-to-end delivery excellence of the cloud-native data platform – spanning data ingestion, transformation, modeling, and operations – to enable reliable, high-quality, and self-service analytics across business domains.

Accountabilities:

  • Build and lead the Data Engineering & DataOps team (engineers and data stewards) under the DXP Data domain.
  • Manage and mentor a hybrid team of internal engineers and vendor resources (augmented team) to maintain delivery speed and cost efficiency while progressively strengthening in-house capability.
  • Drive engineering standards, observability, and quality across data ingestion, transformation, and orchestration.
  • Govern day-to-day data operations, ensuring SLA compliance, cost efficiency, and audit readiness.
  • Implement enterprise-level data quality and stewardship frameworks across business domains.
  • Partner with business, BI, and platform engineering teams to enable new data use cases and model extensions.
  • Partner with Platform Engineering, Architecture & Governance, and cross-domain teams to align on data standards, automation, and governance.

Responsibilities:

  • Team Leadership: Recruit, mentor, and lead a hybrid team of data engineers and stewards across Singapore, Malaysia and India, establishing in-house technical leadership and delivery ownership.
  • Data Engineering Delivery: Oversee design, development, and optimization of ELT/ETL pipelines and data models, ensuring scalable, reusable, and cost-efficient workflows.
  • Data Quality & Stewardship: Institutionalize stewardship processes — define ownership models, implement DQ monitoring, and drive remediation workflows with cross-functional data users.
  • Operational Excellence: Manage daily pipeline operations, SLA compliance, and production issue resolution with strong root-cause analysis and continuous improvement.
  • Technical Governance: Set engineering standards for observability, RBAC, cost tagging, and CI/CD practices.
  • Collaboration & Enablement: Enable self-service analytics by curating trusted datasets and modelled views, working with BI and business teams.
  • Strategic Contribution: Drive the evolution of the DXP data architecture, supporting StarHub’s broader digital transformation and AI/ML readiness.

Team Scope / Stakeholders:

  • Scope: StarHub DXP Data Platform (C360; Datapipe ingestion solution based on Apache Airbyte & Airflow; Snowflake; SageMaker; cloud-native tooling) and the enterprise data quality ecosystem.
  • Decision Rights: Technical design approval, pipeline engineering standards, operational and DQ prioritization, vendor oversight, and team structure decisions.
  • Stakeholders: Platform Engineering, Architecture & Governance, BI, Data Science, and Business Data Owners, Infrastructure, Cybersecurity/ISO, Application domain teams.
  • Resources: Core team of ~6–8 (StarHub employees and augmented engineers) across Singapore, Malaysia and India; expanding to include 2–3 data stewards.

Requirements:

  • 8–12 years of experience in cloud-native data engineering, with strong architecture and delivery experience on AWS.
  • Proven leadership of cross-functional and hybrid engineering teams, including vendor-augmented resources.
  • Experience partnering with BI and business teams to design modelled datasets and enable self-service analytics.
  • Deep hands-on technical expertise, including:
      • Snowflake: schema design, Streams/Tasks, Stored Procedures, UDFs, RBAC, performance tuning, Cortex AI, Streamlit, cost monitoring.
      • Airflow or similar data orchestration tools: orchestration, scheduling, dependency management, and observability.
      • Python and SQL: pipeline scripting, transformation logic, and data validation.
      • ELT/ETL frameworks: Airbyte, Fivetran, and custom connector development.
      • AWS services: S3 (data lake structures and archival), Lambda, KMS, Transfer Family, CloudWatch, SageMaker.
  • Demonstrated success delivering medallion architecture (Bronze/Silver/Gold) and enabling self-service data use cases.
  • Experience building data quality frameworks, stewardship policies, and data lineage tracking across enterprise datasets.
  • Familiarity with machine learning integration using platforms like AWS SageMaker.
  • Proven ability to troubleshoot complex data issues, lead root-cause analysis, and ensure production stability.
  • Track record of transitioning delivery ownership from vendors to internal teams while maintaining quality and velocity.

Skills & Requirements

Technical Skills

Data engineering, Cloud-native data platform

Employment Type

FULL TIME

Level

mid

Posted

4/14/2026
