BI Engineer

Kayali
Dubai, AE

Job Description

Who We Are

Fueled by passion, KAYALI was founded in 2018 by beauty mogul and fragrance fanatic, Mona Kattan. Translating to ‘my imagination’ in Arabic, KAYALI provides a modern fragrance experience inspired by Mona’s rich Middle Eastern heritage and the art of layering scents to help you create your mood; where sharing scents is a ritual and smelling good is both an act of goodwill and self-love. Mona collaborates with some of the world’s most renowned perfumers and sources the finest ingredients to create unique juices that are infinitely memorable, long-lasting, and cruelty-free. Each luxurious fragrance is an ode to true craftsmanship and tells a special story, from the addictive notes to the multi-faceted jeweled bottles.

Our Mission

To make everyone feel like the diamond they are! To build a global community of fragrance lovers through the power of scent and by providing them with the most innovative & luxurious fragrances, education and sharing our Middle Eastern fragrance rituals with the world.

Summary

Introduction of a new role to provide expertise across data warehousing platforms, including Azure Databricks and Power BI, along with all the necessary integrations. The BI Engineer will be responsible for supporting, implementing, and maintaining data warehouse pipelines and reports to streamline business processes.

RESPONSIBILITIES:

Data Platform & Architecture

Manage a scalable, modern data platform leveraging the Azure ecosystem, including Azure Databricks, Azure Synapse Analytics, Azure Data Factory, and Azure Data Lake Storage.

  • Design and implement Lakehouse architecture using Medallion (Bronze, Silver, Gold) patterns with Delta Lake and ACID-compliant data architectures to ensure data reliability, scalability, and performance.
  • Apply data modelling techniques (dimensional modelling, star/snowflake schemas) to support both operational and analytical use cases.
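The Medallion (Bronze, Silver, Gold) pattern above can be sketched as three refinement stages. This is a minimal illustration in plain Python; in a real Lakehouse each stage would be a Delta table written via Databricks/PySpark, and the field names here (`order_id`, `country`, `amount`) are illustrative only:

```python
# Minimal Medallion-pattern sketch: Bronze = raw landing, Silver = cleaned
# and typed, Gold = business-level aggregate. Plain-Python stand-in for
# Delta tables in a Lakehouse.
from collections import defaultdict

def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, adding only lineage metadata."""
    return [dict(r, _source="orders_api") for r in raw_rows]

def silver_clean(bronze_rows):
    """Silver: drop malformed rows and normalise types."""
    out = []
    for r in bronze_rows:
        if r.get("order_id") is None or r.get("amount") is None:
            continue  # a real pipeline would quarantine these rows
        out.append({"order_id": int(r["order_id"]),
                    "country": str(r.get("country", "unknown")).upper(),
                    "amount": float(r["amount"])})
    return out

def gold_aggregate(silver_rows):
    """Gold: revenue per country, ready for a BI semantic model."""
    totals = defaultdict(float)
    for r in silver_rows:
        totals[r["country"]] += r["amount"]
    return dict(totals)

raw = [{"order_id": "1", "country": "ae", "amount": "120.5"},
       {"order_id": None, "country": "ae", "amount": "10"},   # malformed
       {"order_id": "2", "country": "sa", "amount": "80"}]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'AE': 120.5, 'SA': 80.0}
```

Each stage only ever reads the layer beneath it, which is what makes the pattern auditable: a bad Gold number can always be traced back through Silver to the raw Bronze record.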

Data Engineering & Pipelines

Develop and maintain robust, high-performance data pipelines using cloud-native technologies (Databricks, PySpark, Lakehouse Workflows) to support ingestion, transformation, and curation of large-scale datasets.

  • Identify and resolve complex data discrepancies across multi-stage ETL/ELT pipelines, with a focus on root cause analysis (RCA) of failures, data skew, and late-arriving data.
  • Build and optimize end-to-end data integration frameworks, including ingestion from relational databases, flat files, and third-party REST APIs (handling authentication, pagination, and incremental loads).
  • Troubleshoot and resolve complex data pipeline failures, performance bottlenecks, and data inconsistencies, performing root cause analysis in production environments.
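The API ingestion pattern described above (pagination plus incremental loads) can be sketched as follows. The source is stubbed in memory, and the names (`fetch_page`, the `since` watermark parameter) are illustrative, not a real API:

```python
# Sketch of paginated, incremental ingestion from a REST-style source.
# The server is stubbed in-memory; a real ingestion job would also handle
# authentication and retries, which this sketch omits.
PAGE_SIZE = 2
_SOURCE = [  # pretend server-side table, ordered by updated_at
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-03"},
    {"id": 3, "updated_at": "2024-01-05"},
    {"id": 4, "updated_at": "2024-01-07"},
]

def fetch_page(since, page):
    """Stub for GET /records?since=...&page=... with server-side paging."""
    rows = [r for r in _SOURCE if r["updated_at"] > since]
    start = page * PAGE_SIZE
    return rows[start:start + PAGE_SIZE]

def incremental_load(watermark):
    """Pull only rows newer than the stored watermark, page by page."""
    rows, page = [], 0
    while True:
        batch = fetch_page(watermark, page)
        if not batch:
            break
        rows.extend(batch)
        page += 1
    # advance the watermark so the next run skips already-loaded rows
    new_watermark = max((r["updated_at"] for r in rows), default=watermark)
    return rows, new_watermark

rows, wm = incremental_load("2024-01-02")
print(len(rows), wm)  # 3 2024-01-07
```

Persisting the returned watermark between runs is what turns a full reload into an incremental one: each execution fetches only rows updated since the previous high-water mark.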

Business Intelligence & Power BI

Design, develop, and optimize Power BI datasets, reports, and dashboards, applying best practices in data modelling, DAX, and performance tuning.

  • High-level proficiency in performance tuning and semantic model debugging, specifically using tools such as DAX Studio and Tabular Editor to isolate issues within filter context, row context, and complex relationship cardinalities.
  • Strategic ability to troubleshoot data issues and investigate gateway connectivity failures, DirectQuery latency issues, and visual-level calculation errors to ensure report accuracy and faster response times for executive stakeholders.

Data Integration & Consumption

Manage the integration of data platforms with Power BI, ensuring efficient data models and seamless data consumption for reporting and analytics.

Data Governance & Security

Implement and manage data security frameworks, including role-based access control (RBAC), Row-Level Security, and governance using Unity Catalog or equivalent tools.
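Row-Level Security, as mentioned above, amounts to attaching a filter predicate to each role so that a report only ever sees permitted rows. The sketch below illustrates the idea in plain Python; it is not the Unity Catalog or Power BI API (in those platforms this is configured via GRANTs, row filter functions, or RLS roles), and the role names are invented:

```python
# Illustrative row-level security: each role maps to a predicate that
# filters rows before they reach a report. Role names are hypothetical.
RLS_RULES = {
    "regional_manager_ae": lambda row: row["region"] == "AE",
    "global_admin":        lambda row: True,  # unrestricted access
}

def secure_query(rows, role):
    """Return only the rows the given role is allowed to see."""
    predicate = RLS_RULES[role]
    return [r for r in rows if predicate(r)]

sales = [{"region": "AE", "amount": 100},
         {"region": "SA", "amount": 200}]
print(len(secure_query(sales, "regional_manager_ae")))  # 1
print(len(secure_query(sales, "global_admin")))         # 2
```

The key design point is that the filter is applied centrally in the data layer, not in each report, so every consumer of the dataset inherits the same security boundary.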

DevOps, CI/CD & Deployment

Build and maintain CI/CD pipelines using Azure DevOps or GitHub Actions to enable automated, reliable, and consistent deployment of data solutions.

Performance Optimization & Cost Management

Monitor and optimize data platform performance and cloud resource utilization, identifying cost optimization opportunities and ensuring efficient workload execution.

Data Quality & Reliability

Implement data quality, validation, and monitoring frameworks to ensure accuracy, consistency, and reliability of data assets.
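A data quality framework of the kind described above is typically a set of declarative checks evaluated against each batch before it is published. A minimal sketch, with invented rule names and a toy batch (real platforms would use tools such as Delta Live Tables expectations or Great Expectations):

```python
# Minimal data-quality validation sketch: named, declarative checks run
# over a batch, returning pass/fail per rule. Rule names are illustrative.
CHECKS = {
    "no_null_ids":      lambda rows: all(r.get("order_id") is not None for r in rows),
    "amounts_positive": lambda rows: all(r["amount"] > 0 for r in rows),
    "ids_unique":       lambda rows: len({r["order_id"] for r in rows}) == len(rows),
}

def run_quality_checks(rows):
    """Evaluate every check; a failing batch would be quarantined upstream."""
    return {name: check(rows) for name, check in CHECKS.items()}

batch = [{"order_id": 1, "amount": 50.0},
         {"order_id": 2, "amount": -5.0}]  # violates amounts_positive
results = run_quality_checks(batch)
print(results)  # {'no_null_ids': True, 'amounts_positive': False, 'ids_unique': True}
```

Keeping the rules in a single registry (rather than scattered through pipeline code) is what makes the framework monitorable: every run emits one pass/fail result per named check.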

Stakeholder Collaboration & Delivery

Collaborate with cross-functional teams including business stakeholders, analysts, and engineers to translate business requirements into scalable technical solutions.

Requirements

Minimum 5 years of experience across both the reporting and data engineering domains, with at least 3 years focused on Azure-based data platforms, mainly Power BI and Azure Databricks.

Comprehensive knowledge of the Azure data stack and strong hands-on experience with Azure Databricks, Azure Data Lake Storage, Synapse, ADF, and open-source cloud-native tools.

Extensive experience in Power BI, including dataset design and optimization, and report and dashboard development.

Skills & Requirements

Technical Skills

Azure Databricks, Power BI, Azure Synapse Analytics, Azure Data Factory, Azure Data Lake Storage, Delta Lake, PySpark, DAX, Tabular Editor, DAX Studio, data warehousing, data engineering, business intelligence

Level

mid

Posted

4/5/2026
