Role Summary
We are seeking a highly skilled Senior Data Engineer with strong expertise in cloud-based data platforms, big data processing, and modern data architectures. The ideal candidate will have hands-on experience building scalable data pipelines, implementing lakehouse architectures, and enabling advanced analytics and machine learning use cases across enterprise environments.
Key Responsibilities
Data Engineering & Architecture
- Design and implement scalable data pipelines (ETL/ELT) for batch and real-time processing.
- Build and maintain modern data platforms using a lakehouse architecture (Bronze, Silver, Gold layers).
- Develop and optimize data models (star/snowflake schemas) for analytics and reporting.
- Ensure high data quality, integrity, and governance across systems.
Cloud & Platform Management
- Develop and deploy solutions on Microsoft Azure and/or AWS ecosystems:
  - Azure: Data Factory, Databricks, ADLS Gen2, Azure SQL, Key Vault, Azure DevOps
  - AWS: S3, Redshift, EMR, Glue, Lambda
- Implement secure, scalable, and cost-efficient data storage solutions.
Big Data & Processing
- Develop large-scale data processing workflows using Apache Spark / PySpark, Kafka, Hive, Hadoop, and Airflow.
- Optimize performance of distributed data processing systems.
Microsoft Fabric & Lakehouse (Preferred)
- Implement Microsoft Fabric-based data solutions, including:
  - Lakehouse architecture
  - Medallion design (Bronze/Silver/Gold)
  - Delta Lake optimization
- Build Fabric pipelines and integrate with Power BI.
Data Integration & Migration
- Lead data migration initiatives from legacy/on-prem systems to cloud platforms.
- Integrate multiple data sources (SAP, Oracle, SQL Server, APIs, etc.).
- Implement incremental data loading and performance optimization techniques.
Analytics & BI Enablement
- Enable business intelligence and reporting using tools such as Power BI, SSRS, Kibana, and Grafana.
- Implement Row-Level Security (RLS) and data access controls.
Machine Learning & Advanced Analytics (Good to Have)
- Support ML pipelines using frameworks such as Scikit-learn, TensorFlow, PyTorch, and Keras.
- Collaborate with data scientists on model deployment and integration.
DevOps & Automation
- Implement CI/CD pipelines using Azure DevOps/Git.
- Use Docker for containerization.
- Automate data validation, monitoring, and deployment processes.
Monitoring, Security & Governance
- Implement monitoring using Azure Monitor, Log Analytics, and related tools.
- Ensure compliance with data governance frameworks (e.g., Microsoft Purview).
- Maintain security standards using Key Vault, IAM, and encryption mechanisms.
Required Skills & Qualifications
Technical Skills
- Strong programming skills in Python and/or Java
- Advanced SQL and data warehousing concepts
- Hands-on experience with:
  - Spark / PySpark
  - ETL/ELT pipeline development
  - Data modeling and optimization
Cloud Expertise
- Experience with Microsoft Azure (preferred) or AWS
- Exposure to Databricks, ADF, ADLS, and Snowflake is highly desirable
Big Data Technologies
- Apache Spark, Kafka, Hive, Airflow
Tools & Technologies
- Git, Docker, CI/CD pipelines
- BI tools (Power BI preferred)
Experience Requirements
- 7-10 years of experience in Data Engineering / Big Data
- Proven experience in:
  - Designing scalable data architectures
  - Cloud data platform implementations
  - Data migration and transformation projects
Educational Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- Certifications in Azure, AWS, or Data Engineering are a plus
Soft Skills
- Strong problem-solving and analytical thinking
- Ability to work in Agile/DevOps environments
- Excellent stakeholder communication and collaboration skills
- Ability to lead technical discussions and mentor junior engineers
Nice to Have
- Experience with Microsoft Fabric
- Experience with real-time streaming architectures
- Knowledge of AI/ML pipelines
- Experience in enterprise-scale data governance
Employment Type: Full Time · Level: Senior · 5/1/2026