Data Engineering Tech Lead (Databricks - Data Warehousing - Data Modeller - Spark)

Akkodis
Toronto, ON, Canada
Hybrid

Job Description

Position: Data Engineering Tech Lead (Databricks - Data Warehousing - Data Modeller - Spark)

Duration: 6 months+

Location: Toronto, ON

Work Model: Hybrid

About Our Client:

A global leader in renewable energy solutions, driving innovation to create a sustainable future.

About this opportunity:

On behalf of our client, we are seeking two (2) experienced Data Engineering Tech Leads to support critical data platform initiatives within a large enterprise financial services environment. This is an urgent requirement for hands-on technical leaders who bring deep expertise in Databricks and Apache Spark, while also demonstrating strong capabilities in solution design, deployment automation, and end-to-end delivery governance.

What You’ll Do:

Data Warehousing & Data Modeling

  • Design and deliver enterprise-grade data warehouse and lakehouse models, including star schemas, conformed dimensions, facts, and aggregations supporting financial use cases (e.g., revenue, exposure, limits, liquidity, profitability, and deal pipelines).
  • Establish and enforce modeling standards across Bronze / Silver / Gold layers (raw, conformed, curated marts).
  • Implement incremental loading strategies, Slowly Changing Dimensions (Type 1 & 2), de-duplication, and reconciliation logic aligned with financial controls and audit requirements.
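By way of illustration, the Slowly Changing Dimension Type 2 pattern referenced above is typically expressed on Databricks as a Delta `MERGE`; the underlying logic (expire the old version, append the new one, keep the audit trail) can be sketched in plain Python. The `apply_scd2` helper and its column names are hypothetical, illustrative only:

```python
from datetime import date

def apply_scd2(dim_rows, incoming, key, tracked_cols, as_of):
    """SCD Type 2 sketch (hypothetical helper, not a Databricks API).

    dim_rows: existing dimension rows as dicts with 'is_current',
              'valid_from', and 'valid_to' columns.
    incoming: latest source rows as dicts keyed by `key`.
    """
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for row in incoming:
        existing = current.get(row[key])
        changed = existing is None or any(
            existing[c] != row[c] for c in tracked_cols
        )
        if not changed:
            continue
        if existing is not None:
            # Expire the old version rather than overwrite it,
            # preserving history for financial controls and audit.
            existing["is_current"] = False
            existing["valid_to"] = as_of
        out.append({**row, "is_current": True,
                    "valid_from": as_of, "valid_to": None})
    return out
```

A Type 1 change would instead overwrite attributes in place; Type 2, as above, versions the row so that point-in-time reporting remains possible.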

Databricks Engineering (Lakehouse Implementation)

  • Lead the development of scalable ELT/ETL pipelines using Databricks (Spark, PySpark, SQL) and Delta Lake.
  • Implement ingestion frameworks using Auto Loader (cloudFiles), structured streaming where applicable, and batch orchestration for daily and monthly financial processing cycles.
  • Optimize Delta tables using best practices (partitioning, OPTIMIZE, Z-ORDER, file sizing, caching) to support downstream BI and analytics performance.
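Auto Loader's `cloudFiles` source is Databricks-specific and not runnable locally, but its core behaviour — processing only files not yet recorded in a checkpoint, for exactly-once ingestion — can be sketched in plain Python. The `discover_new_files` helper below is hypothetical and greatly simplified:

```python
def discover_new_files(all_files, checkpoint):
    """Incremental file discovery sketch (hypothetical helper).

    Mimics, in a greatly simplified form, the checkpointed discovery
    that Auto Loader (cloudFiles) performs on Databricks: only files
    absent from the checkpoint are returned for processing.
    """
    new = [f for f in sorted(all_files) if f not in checkpoint]
    # In a real pipeline the checkpoint is committed only after the
    # files are processed successfully.
    checkpoint.update(new)
    return new
```

Each listing of the landing zone then yields only the unprocessed files, which is what makes daily and monthly ingestion cycles re-runnable.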

Governance & Security (Unity Catalog)

  • Implement and operationalize governance using Unity Catalog, including:
      ◦ Catalog, schema, and table design aligned to data domains and environments (dev/test/prod)
      ◦ Fine-grained access controls at catalog, schema, table, and column levels
      ◦ Row-level and column-level security where required
      ◦ Auditability and lineage readiness for regulated environments
  • Partner with security, risk, and compliance teams to ensure appropriate access models for sensitive datasets.
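For context, Unity Catalog addresses objects through a three-level namespace (`catalog.schema.table`) and grants privileges per principal at each level. A small helper that renders such GRANT statements is sketched below; the catalog, schema, and group names are hypothetical:

```python
def grant_statements(catalog, schema, table, grants):
    """Render Unity Catalog GRANT statements for one table.

    grants: mapping of principal (e.g. a workspace group) to a list of
    privileges, e.g. {"finance_analysts": ["SELECT"]}. All names here
    are illustrative, not taken from any real environment.
    """
    fqn = f"{catalog}.{schema}.{table}"  # three-level namespace
    return [
        f"GRANT {priv} ON TABLE {fqn} TO `{principal}`;"
        for principal, privs in grants.items()
        for priv in privs
    ]
```

In practice these statements would be executed per environment (dev/test/prod catalogs), with row filters and column masks layered on top where required.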

Data Quality & Controls

  • Define and implement Data Quality Expectations using Databricks DQE, Delta Live Tables expectations, or equivalent frameworks.
  • Implement key controls including:
      ◦ Null, type, range, and referential integrity checks
      ◦ Duplicate detection and key constraints
      ◦ Source-to-target reconciliation and financial totals validation
  • Publish data quality metrics, operational alerts, and support SLA and production-readiness reporting.
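In Delta Live Tables these controls are declared as expectations on the pipeline; outside Databricks, the same null/range checks and totals reconciliation can be sketched in plain Python. The helpers and column names below are hypothetical, illustrative only:

```python
def check_rows(rows, not_null, ranges):
    """Return per-rule failure counts (hypothetical DQ helper).

    not_null: columns that must be populated.
    ranges:   mapping of column -> (lo, hi) inclusive bounds.
    """
    failures = {f"null_{c}": 0 for c in not_null}
    failures.update({f"range_{c}": 0 for c in ranges})
    for row in rows:
        for c in not_null:
            if row.get(c) is None:
                failures[f"null_{c}"] += 1
        for c, (lo, hi) in ranges.items():
            v = row.get(c)
            if v is not None and not (lo <= v <= hi):
                failures[f"range_{c}"] += 1
    return failures

def reconcile_totals(source_total, target_total, tolerance=0.0):
    """Source-to-target reconciliation on a financial total."""
    return abs(source_total - target_total) <= tolerance
```

Failure counts like these are what would feed the published data quality metrics, operational alerts, and SLA reporting mentioned above.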

Delivery Leadership & Operations

  • Act as a technical lead, translating business and functional requirements into scalable data solutions.
  • Guide deployment automation by applying CI/CD and data platform deployment frameworks and fundamentals.
  • Support technical and functional defect management, including triage, root-cause analysis, and remediation.
  • Produce clear technical documentation, including data definitions, lineage, runbooks, and operational procedures.
  • Support production operations and continuous improvement initiatives.

What You Bring:

  • 5+ years of hands-on data engineering experience in enterprise environments; financial services experience strongly preferred.
  • Strong expertise in Databricks, Apache Spark (PySpark/SQL), and Delta Lake.
  • Proven experience in data warehousing and dimensional modeling, including facts, dimensions, star schemas, SCD patterns, and data marts.
  • Experience leading solution design and guiding end-to-end delivery of data platforms.
  • Solid understanding of deployment automation frameworks, CI/CD concepts, and production support models.
  • Experience implementing data governance, security, and quality controls in regulated environments.
  • Strong problem-solving skills with the ability to communicate complex technical concepts to both technical and non-technical stakeholders.

Nice to Have:

  • Experience conducting UX research for AI‑driven or conversational experiences (e.g., copilots, chat‑based interfaces, or productivity tools).
  • Background supporting enterprise, B2B, or large‑scale platform products across multiple devices and surfaces.
  • Familiarity working with cross‑platform ecosystems (web, desktop, mobile, and operating systems).
  • Experience partnering closely with engineering and data science teams to translate research insights into product improvements.
  • Knowledge of accessibility and inclusive design research practices.
  • Exposure to international or global research studies, including remote or unmoderated testing.
  • Prior experience working in Agile or Scrum environments with overlapping product timelines.
  • Interest or experience in applying AI‑assisted synthesis and research automation tools to improve e

Skills & Requirements

Technical Skills

Databricks, Apache Spark, Delta Lake, Unity Catalog, Databricks DQE, Delta Live Tables, Communication, Collaboration, Data Engineering, Data Warehousing, Data Modelling

Employment Type

Full-time

Level

Senior

Posted

4/9/2026
