Data Engineer (Microsoft Fabric | Azure | Databricks)

Brickstech
AE
On-site

Job Description

Position: Data Engineer (Microsoft Fabric | Azure | Databricks)

Role Summary

We are seeking a Data Engineer with hands‑on experience in Azure‑based data platforms and Microsoft Fabric to build scalable data pipelines and analytics solutions. The ideal candidate will have strong skills in PySpark, SQL and modern data engineering frameworks, with exposure to lakehouse architecture, real‑time data processing, and cloud‑native tools.

Key Responsibilities

Data Pipeline Development

  • Develop and maintain ETL/ELT pipelines using Azure‑native tools and frameworks.
  • Process structured and semi‑structured data (JSON, XML, CSV, APIs).
  • Implement incremental data loading and transformation logic.
  • Ensure data quality, validation, and consistency across pipelines.
Microsoft Fabric & Lakehouse Implementation

  • Work with Microsoft Fabric components, including Eventstream, Eventhouse, and Fabric pipelines.
  • Implement lakehouse architecture using the Medallion model (Bronze, Silver, Gold layers).
  • Utilize Delta Lake for efficient storage and processing.
Azure Data Platform Development

  • Build data solutions using Azure Data Factory (ADF), Azure Databricks, and ADLS Gen2.
  • Ingest data from multiple sources, including ERP/SAP systems, APIs, and other external data sources.
  • Support cloud‑based data integration and transformation.
Big Data Processing

  • Develop data transformation workflows using PySpark / Apache Spark.
  • Optimize pipelines with partitioning, caching, and efficient joins.
  • Work on both batch and streaming data processing.

Data Modeling & Analytics

  • Build analytics‑ready datasets for reporting and BI.
  • Apply dimensional modeling concepts.
  • Deliver curated datasets for downstream consumption.
Real‑Time Data Processing (Preferred)

  • Implement real‑time data pipelines using Microsoft Fabric Eventstream and IoT/streaming data sources.
  • Enable real‑time dashboards and monitoring use cases.
Analytics & Visualization Support

  • Integrate data pipelines with Power BI dashboards.
  • Support business teams with clean and reliable datasets.
Modern Data Tools & Frameworks

  • Work with dbt for transformation and modeling, and with Snowflake for analytics workloads.
  • Develop modular, reusable, and scalable data workflows.

DevOps & Best Practices

  • Use GitHub / Git for version control.
  • Follow CI/CD and deployment best practices.
  • Document data pipelines and processes.

Required Skills & Technologies

Core Skills

  • Python / PySpark
  • SQL (Strong proficiency)
  • ETL/ELT pipeline development

Cloud & Tools

  • Microsoft Azure:
  • Azure Data Factory (ADF)
  • Azure Databricks
  • ADLS Gen2
  • Microsoft Fabric (preferred)

Data Engineering Concepts

  • Lakehouse architecture
  • Delta Lake
  • Data modeling (basic dimensional modeling)
  • Incremental data processing

Additional Tools

  • dbt (preferred)
  • Snowflake (good to have)
  • Power BI

Experience Requirements

  • 3–5 years of experience in Data Engineering
  • Hands‑on experience in:
  • Azure data ecosystem
  • Databricks / PySpark
  • Building data pipelines and transformations

Educational Qualifications

  • Bachelor’s degree in Computer Science / Engineering / IT
  • Relevant certifications are a plus:
  • Microsoft Fabric (DP-600 / DP-700)
  • Azure Data Engineer (DP-203)
  • Databricks certifications

Soft Skills

  • Strong analytical and problem‑solving skills
  • Ability to work in a collaborative Agile environment
  • Good communication skills
  • Attention to detail and data accuracy

Nice to Have

  • Exposure to real‑time/streaming data pipelines
  • Experience with API integrations
  • Basic understanding of machine learning workflows
  • Experience working with ERP/SAP data


Skills & Requirements

Technical Skills

Python, PySpark, SQL, Azure Data Factory, Azure Databricks, ADLS Gen2, dbt, Snowflake, Power BI, Git, CI/CD, dimensional modeling, lakehouse architecture, Delta Lake, Medallion model, batch processing, streaming data processing, API integrations, machine learning workflows, ERP/SAP data, analytical skills, problem-solving, collaboration, communication, attention to detail, data accuracy, Microsoft Fabric, Azure data engineering, Databricks, data engineering, cloud computing, big data, data modeling, data pipelines, data transformation, data quality, data validation, data consistency, data storage, data processing, data integration, data analytics, data visualization, data warehousing, data governance, data security, data privacy, data compliance, data ethics

Salary

$120,000 - $200,000 per year

Employment Type

Full-time

Level

Mid

Posted

4/23/2026

Apply Now

You will be redirected to Brickstech's application portal.