Senior Associate/Manager Data Engineer Consultant (Databricks) / Technology Consulting

EY
Singapore, SG
On-site

Job Description

At EY, we develop you with future-focused skills and equip you with world-class experiences. We empower you in a flexible environment, and fuel you and your extraordinary talents in a diverse and inclusive culture of globally connected teams.

We work together across our full spectrum of services and skills powered by technology and AI, so that business, people and the planet can thrive together.

We’re all in, are you?

Join EY and shape your future with confidence.

The opportunity

EY AI & Data is the data and advanced analytics capability within EY Asia-Pacific, with over 500 specialist employees working across multiple industry sectors. You will be part of a dynamic team that implements information-driven strategies, data platforms and advanced data analytics solutions that help grow, optimise and protect client organisations. We are seeking an experienced Data Engineer Consultant to join our team as a senior individual contributor responsible for designing, implementing, and optimising robust data infrastructure and pipelines. This role offers the opportunity to work hands-on with cutting-edge AWS and Databricks technologies whilst providing technical guidance to team members and collaborating with stakeholders across the organisation on complex data engineering challenges.

Your key responsibilities

  • Design and implement enterprise-scale data architectures, including data lakes, warehouses, and real-time streaming platforms.
  • Develop and maintain ETL/ELT pipelines that efficiently process large volumes of structured and unstructured data from diverse sources.
  • Ensure data quality, governance, and security standards are embedded throughout all data engineering processes.
  • Hands-on development of Databricks notebooks using PySpark, Python, and SQL for ETL automation.
  • Create and optimise PySpark scripts for efficient data extraction, transformation, and loading from large datasets.
  • Implement custom data manipulation, validation, and error handling solutions to enhance ETL robustness.
  • Provide technical mentorship to junior data engineers and analysts.
  • Lead code reviews, establish best practices, and drive adoption of modern data engineering tools and methodologies.
  • Collaborate with cross-functional teams including data scientists, analysts, and software engineers to deliver integrated solutions.
  • Monitor and optimise data pipeline performance, implementing solutions for scalability and cost-effectiveness.
  • Conduct testing, debugging, and troubleshooting of data transformation processes.
  • Verify data integrity throughout pipeline stages and resolve complex technical issues.

To qualify for the role, you must have

  • Bachelor's Degree in Computer Science, Engineering, Data Science, or related field OR Master's Degree in Data Engineering, Computer Science, or MBA (preferred)
  • 5-7 years of data engineering experience with AWS-native implementations
  • Proven experience working as part of a data engineering team on ETL migration projects
  • Hands-on experience implementing Databricks notebooks using PySpark, Python, and SQL for ETL automation
  • Demonstrated expertise developing PySpark scripts for efficient data extraction, transformation, and loading from large datasets
  • Experience utilising Python for custom data manipulation, validation, and error handling in ETL processes
  • Proficiency employing SQL for complex joins, aggregations, and database operations within Databricks environments
  • Track record of testing, debugging, and optimising data transformation processes for accuracy and performance
  • Experience verifying data integrity throughout pipeline stages and resolving complex technical issues
  • Proven ability collaborating with cross-functional teams to align ETL migration tasks with project goals and deadlines
  • Experience in government sector data projects and compliance requirements (preferred)
  • Experience in HR domain including workforce analytics, payroll systems, and employee data management (preferred)
  • Experience providing technical mentorship to junior team members (preferred)
  • Certifications in one or more of the following:
      ◦ AWS Certified Data Engineer - Associate (preferred)
      ◦ AWS Certified Solutions Architect or AWS Certified Big Data - Specialty (preferred)
      ◦ Databricks Certified Data Engineer Associate or Professional
      ◦ Amazon Redshift or Snowflake certifications
      ◦ Agile/Scrum Master certification (preferred)

Essential Skills:

  • AWS cloud platform expertise (Redshift, S3, Glue, EMR, Kinesis, Lambda)
  • Databricks platform proficiency with PySpark, Python, and SQL implementation
  • Data warehouse design and implementation (Amazon Redshift, Snowflake on AWS)
  • ETL/ELT migration project experience and pipeline development
  • Real-time streaming technologies (Kinesis Data Streams, Kafka)
  • Data exchange platforms and API integration (AWS Data Exchange, Partner A…)

Skills & Requirements

Technical Skills

  • AWS cloud platform
  • Databricks platform
  • PySpark
  • Python
  • SQL
  • ETL/ELT migration
  • Real-time streaming technologies
  • Data exchange platforms and API integration
  • Data engineering
  • Data science

Employment Type

FULL TIME

Level

Senior

Posted

4/11/2026
