Data Engineer (AWS Data Platform & Pipelines) (4 Years Experience & Above) (1 Year Contract)

Rhino Partners
SG
On-site

Job Description

Role Overview

We are seeking a Data Engineer (AWS Data Platform & Pipelines) to design, build, and operate scalable data pipelines and data infrastructure supporting enterprise analytics and data-driven initiatives.

This role focuses on AWS-based data engineering, including AWS Glue, Redshift, S3, and Lambda, alongside CI/CD, automation, and data platform optimisation. You will work closely with infrastructure, application, and analytics teams to ensure reliable, performant, and well-governed data platforms.

You will play a key role in strengthening the organisation’s data engineering foundations, improving pipeline reliability, and supporting enterprise data warehouse and analytics workloads.

Key Responsibilities

Data Pipeline Development & Management

  • Design, build, and maintain scalable data pipelines using AWS Glue and AWS-native services (an illustrative sketch follows this list)
  • Implement ETL/ELT workflows to ingest data from multiple internal and external sources
  • Optimise data pipeline performance, scalability, and cost efficiency
  • Troubleshoot pipeline failures and implement resilient error-handling strategies
  • Implement incremental loading, partitioning, and data lifecycle management
  • Support batch and near real-time data ingestion patterns where required
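
As a rough illustration of this kind of pipeline work, the sketch below shows a minimal AWS Glue (PySpark) job that reads incrementally via job bookmarks and writes date-partitioned Parquet to S3. It assumes the Glue job runtime (where the awsglue library is available); the database, table, column, and bucket names are illustrative assumptions, not part of this role's actual environment.

  import sys

  from awsglue.context import GlueContext
  from awsglue.job import Job
  from awsglue.transforms import Filter
  from awsglue.utils import getResolvedOptions
  from pyspark.context import SparkContext

  # Standard Glue job bootstrap.
  args = getResolvedOptions(sys.argv, ["JOB_NAME"])
  glue_context = GlueContext(SparkContext())
  job = Job(glue_context)
  job.init(args["JOB_NAME"], args)

  # Incremental read: with job bookmarks enabled, only new source data is picked up.
  source = glue_context.create_dynamic_frame.from_catalog(
      database="raw_db",                   # hypothetical Glue Data Catalog database
      table_name="orders",                 # hypothetical source table
      transformation_ctx="source_orders",  # required for bookmark tracking
  )

  # Basic cleansing before landing in the curated zone.
  clean = Filter.apply(frame=source, f=lambda row: row["order_id"] is not None)

  # Partitioned Parquet output so downstream queries can prune by date.
  glue_context.write_dynamic_frame.from_options(
      frame=clean,
      connection_type="s3",
      connection_options={
          "path": "s3://example-data-lake/curated/orders/",  # illustrative path
          "partitionKeys": ["order_date"],
      },
      format="parquet",
      transformation_ctx="sink_orders",
  )

  job.commit()  # persists bookmark state for the next incremental run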

Data Infrastructure & Engineering

  • Manage and optimise AWS Redshift data warehouse performance and operations
  • Design and maintain data lake architecture using AWS S3
  • Implement data partitioning, indexing, compression, and performance tuning strategies
  • Support data infrastructure deployment using Infrastructure as Code (IaC) practices
  • Collaborate with infrastructure teams to ensure secure, scalable, and reliable data platforms
  • Support data access governance and data lifecycle management (a lifecycle-policy sketch follows this list)
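
One concrete slice of the lifecycle-management work is applying tiering and expiry rules to the S3 data lake with boto3, sketched below; the bucket name, prefixes, and retention periods are placeholder assumptions rather than actual policy.

  import boto3

  s3 = boto3.client("s3")

  # Apply lifecycle rules to an assumed data lake bucket: tier raw data to cheaper
  # storage classes over time and keep the curated zone tidy.
  s3.put_bucket_lifecycle_configuration(
      Bucket="example-data-lake",  # hypothetical bucket name
      LifecycleConfiguration={
          "Rules": [
              {
                  "ID": "raw-zone-tiering",
                  "Filter": {"Prefix": "raw/"},
                  "Status": "Enabled",
                  "Transitions": [
                      {"Days": 30, "StorageClass": "STANDARD_IA"},
                      {"Days": 90, "StorageClass": "GLACIER"},
                  ],
                  "Expiration": {"Days": 365},
              },
              {
                  "ID": "curated-zone-housekeeping",
                  "Filter": {"Prefix": "curated/"},
                  "Status": "Enabled",
                  "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
              },
          ]
      },
  )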

CI/CD & DevOps for Data

  • Develop and maintain CI/CD pipelines for data workflows using GitLab CI/CD
  • Implement automated testing for data pipelines and data quality validation (a test sketch follows this list)
  • Support version control and release management for data engineering assets
  • Configure and maintain AWS Lambda functions for data processing automation
  • Implement deployment automation and rollback strategies for data pipelines
  • Promote DevOps best practices across data engineering workflows
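
To make the testing expectation concrete, the sketch below shows pytest-style data-quality checks that could run as a GitLab CI/CD stage before a pipeline release. The schema, table, and column names are illustrative, and the Redshift credentials are assumed to be injected as CI/CD variables.

  import os

  import pytest
  import redshift_connector  # AWS's open-source Redshift Python driver

  @pytest.fixture(scope="module")
  def conn():
      # Credentials are assumed to come from CI/CD variables, never hard-coded.
      connection = redshift_connector.connect(
          host=os.environ["REDSHIFT_HOST"],
          database=os.environ["REDSHIFT_DB"],
          user=os.environ["REDSHIFT_USER"],
          password=os.environ["REDSHIFT_PASSWORD"],
      )
      yield connection
      connection.close()

  def test_orders_has_no_null_keys(conn):
      # Completeness: every curated row must carry a primary key.
      cur = conn.cursor()
      cur.execute("SELECT COUNT(*) FROM curated.orders WHERE order_id IS NULL")
      assert cur.fetchone()[0] == 0

  def test_orders_loaded_for_latest_date(conn):
      # Freshness: the incremental load should have produced rows for today.
      cur = conn.cursor()
      cur.execute("SELECT COUNT(*) FROM curated.orders WHERE order_date = CURRENT_DATE")
      assert cur.fetchone()[0] > 0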

Monitoring, Performance & Support

  • Set up monitoring, alerting, and observability for data pipeline health (a CloudWatch sketch follows this list)
  • Monitor AWS CloudWatch logs, metrics, and alerts for data platform reliability
  • Troubleshoot production data issues and support operational stability
  • Optimise query performance and database operations in AWS Redshift
  • Collaborate with technical teams on data architecture improvements
  • Provide technical support for data-related incidents and operational issues
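
As an example of the observability side of the role, the sketch below publishes a custom pipeline-health metric and an alarm using boto3 and CloudWatch; the namespace, metric, pipeline, and SNS topic names are illustrative assumptions.

  import boto3

  cloudwatch = boto3.client("cloudwatch")

  def report_pipeline_run(pipeline_name: str, succeeded: bool) -> None:
      # Publish 1 on failure and 0 on success so the alarm can sum failures over time.
      cloudwatch.put_metric_data(
          Namespace="DataPlatform/Pipelines",  # hypothetical namespace
          MetricData=[
              {
                  "MetricName": "FailedRuns",
                  "Dimensions": [{"Name": "Pipeline", "Value": pipeline_name}],
                  "Value": 0.0 if succeeded else 1.0,
                  "Unit": "Count",
              }
          ],
      )

  # One-off setup: page the on-call channel if any failure lands in a 15-minute window.
  cloudwatch.put_metric_alarm(
      AlarmName="orders-pipeline-failures",
      Namespace="DataPlatform/Pipelines",
      MetricName="FailedRuns",
      Dimensions=[{"Name": "Pipeline", "Value": "orders"}],
      Statistic="Sum",
      Period=900,
      EvaluationPeriods=1,
      Threshold=1.0,
      ComparisonOperator="GreaterThanOrEqualToThreshold",
      AlarmActions=["arn:aws:sns:ap-southeast-1:123456789012:data-oncall"],  # placeholder ARN
  )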

Documentation & Governance

  • Document data pipeline architectures and technical specifications
  • Maintain operational runbooks and support documentation
  • Track engineering work and operational tasks using Jira
  • Maintain technical documentation using Confluence
  • Participate in monthly system health and engineering progress reviews
  • Ensure adherence to data engineering standards and best practices

Required Skills & Experience

Technical Skills

  • Strong experience in data engineering and data pipeline development
  • Proficiency in Python, SQL, and shell scripting
  • Hands-on experience with AWS data services, including:
      • AWS Glue
      • AWS Redshift
      • AWS S3
      • AWS Lambda
      • AWS CloudWatch
  • Experience designing and optimising data warehouse architectures
  • Experience implementing ETL/ELT pipelines
  • Strong understanding of data modelling and database design principles
  • Experience with CI/CD pipelines (GitLab preferred)

DevOps & Infrastructure

  • Experience with Infrastructure as Code (Terraform or CloudFormation)
  • Experience deploying data infrastructure using automated pipelines
  • Understanding of data platform security and governance practices
  • Familiarity with data quality validation and monitoring frameworks

Core Competencies

  • Strong troubleshooting and problem-solving skills
  • Experience optimising performance and scalability of data platforms
  • Ability to work in production support and operational environments
  • Strong documentation and technical communication skills
  • Ability to collaborate across infrastructure, application, and analytics teams

Preferred Experience (Nice to Have)

  • Exper

Skills & Requirements

Technical Skills

Python, SQL, shell scripting, AWS Glue, AWS Redshift, AWS S3, AWS Lambda, AWS CloudWatch, ETL/ELT, CI/CD, GitLab CI/CD, Jira, Confluence, troubleshooting, problem-solving, collaboration, documentation, technical communication, data engineering, data pipeline development, data infrastructure, data warehousing, data platform optimisation, data access governance, data lifecycle management, DevOps, monitoring, performance optimisation, data quality validation, data security

Employment Type

CONTRACT

Level

Senior

Posted

4/6/2026
