Data Engineer Manager

Gunvor Group
London, GB

Job Description


Mission: create a repeatable, governed, AI-ready data factory on Azure + Snowflake and event architectures—so every dollar invested turns into reliable, high-performance data products and services the trading business can trust at market speed.

Reporting line: Global Head of Data

Scope: Enterprise-wide data engineering across oil, gas and power trading; teams in multiple regions; close partnership with Enterprise Data Architecture, Data Governance, Platform/Operations, and Security.

Own the strategy and execution of best-in-class data engineering to deliver state-of-the-art data products and services at scale. Build and operate a modern estate on Azure + Snowflake, centred on event-driven architectures and high-throughput ingestion/pipelines that feed analytics, risk, and AI/ML safely and cost-effectively. Establish the standards, tooling and talent model that convert complex trading data into fast, reliable, governed, and reusable products, aligned to the firm’s semantic/knowledge-graph backbone.

Main Responsibilities

Engineering Strategy & Roadmap
  • Define and execute the global data engineering strategy (ingest → govern → serve → observe), aligned with enterprise architecture and governance.
  • Standardise event patterns (Kafka/Flink), ELT (dbt/Spark/SQL), and serving layers (APIs/SQL/Graph) across regions.

Platform & Product Delivery
  • Industrialise scalable pipelines for market/curve, ETRM/CTRM, SCADA/time-series, logistics, and finance/settlement data with SLOs, lineage, and DR/BCP.
  • Enable AI/ML at scale: feature/label pipelines, vector stores, policy-aware retrieval, evaluation hooks, and model/LLM registry integration.

Standards & Quality
  • Mandate data contracts, DQ rules, OpenLineage, security baselines (RBAC/ABAC, masking, retention), and FinOps tagging; codify golden paths and templates.
  • Drive performance engineering (p95/p99 targets, replay, back-pressure) and cost optimisation (tiering, compression, autoscaling).

People, Capacity & Sourcing
  • Build and coach high-performing squads; manage the engineering capacity plan; anticipate peaks and scale out via vetted staff-augmentation partners without lowering the bar.
  • Run an objective skills framework, hiring rubric, and career paths; ensure global follow-the-sun support on critical flows.

Run & Reliability
  • Own operational excellence: observability (metrics/logs/traces), incident management, post-incident reviews, and continuous hardening of critical paths (market close, risk runs, nominations).

Stakeholder Leadership
  • Partner with Trading, Risk, Ops/Logistics, Finance/Settlement, and Compliance to prioritise a value backlog; communicate trade-offs on latency, cost, and control.
  • Align with Architecture on ontology/knowledge-graph mapping; with Governance on evidence and controls; and with Platform/Operations on environments, access, and DR.
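As a rough illustration of the "data contracts and DQ rules" mandated under Standards & Quality, a contract can pair a declared schema with declarative quality rules and validate records against both. The field names, types, and rule below are hypothetical examples, not Gunvor's actual standards:

```python
# Minimal sketch of a data contract with declarative DQ rules.
# Schema, field names, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class DataContract:
    name: str
    schema: dict[str, type]                       # column -> expected type
    rules: list[Callable[[dict], bool]] = field(default_factory=list)

    def validate(self, record: dict) -> list[str]:
        """Return a list of violations for one record (empty list = pass)."""
        errors = []
        for col, typ in self.schema.items():
            if col not in record:
                errors.append(f"missing column: {col}")
            elif not isinstance(record[col], typ):
                errors.append(f"{col}: expected {typ.__name__}")
        for rule in self.rules:
            if not rule(record):
                errors.append(f"rule failed: {rule.__name__}")
        return errors


def positive_price(r: dict) -> bool:
    return r.get("price", 0) > 0


# Hypothetical contract for a market-curve feed.
curve_contract = DataContract(
    name="market_curve_v1",
    schema={"curve_id": str, "tenor": str, "price": float},
    rules=[positive_price],
)

good = {"curve_id": "BRENT", "tenor": "M1", "price": 82.4}
bad = {"curve_id": "BRENT", "tenor": "M1", "price": -1.0}
assert curve_contract.validate(good) == []
assert curve_contract.validate(bad) == ["rule failed: positive_price"]
```

In practice such contracts would be expressed in tooling like dbt tests or schema registries rather than hand-rolled classes; the sketch only shows the schema-plus-rules shape the mandate implies.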

What “Good” Looks Like (Outcomes & KPIs)

  • Reliability: SLOs met on market-critical paths; deterministic replay proven quarterly; MTTR trending down.
  • Speed & Reuse: Time-to-first-value for new products reduced by >50%; adoption of golden paths/templates across squads >60%.
  • Cost: Unit economics (cost per product/feature/inference) visible; ≥15–25% cost-to-serve reduction through optimisation/deprecation.
  • Compliance: Zero critical audit findings on lineage, access, retention; automated evidence packs.
  • Talent & Capacity: Bench strength in core skills; surge capacity activated without quality or security regressions.
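The p95/p99 latency targets referenced under performance engineering and the reliability KPIs reduce to a percentile check over latency samples. The 250 ms and 400 ms targets below are made-up placeholders, not actual SLOs:

```python
# Sketch of a p95/p99 SLO check over pipeline latency samples (milliseconds).
# Targets are illustrative placeholders.
import math


def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value covering p% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]


def slo_met(samples: list[float],
            p95_target: float = 250.0,
            p99_target: float = 400.0) -> bool:
    return (percentile(samples, 95) <= p95_target
            and percentile(samples, 99) <= p99_target)


# 100 samples: 95 fast, 4 slow, 1 slowest.
latencies = [120.0] * 95 + [300.0] * 4 + [390.0]
print(slo_met(latencies))  # True: p95 = 120 ms, p99 = 300 ms
```

Production systems would compute these from streaming histograms (e.g. in an observability stack) rather than sorting raw samples; the nearest-rank form just makes the KPI concrete.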

Profile

  • Bachelor’s degree or higher in Computer Science, Engineering, Applied Mathematics, or a related field.
  • 12+ years in data engineering/platform roles, 5+ years leading multi-region teams in real-time, regulated environments (ideally commodity trading/energy/financial markets).
  • Track record delivering at scale on Azure (Identity/Key Vault/AKS/Functions/ADF) and Snowflake (performance, security, cost controls).
  • Deep hands-on leadership in Kafka/Flink, dbt/Spark/SQL, API/stream serving, and performance/DR design.
  • Proven enablement of AI/ML foundations (feature pipelines, vector/RAG datasets, evaluation, registries) integrated with governance.
  • Demonstrated vendor management and staff augmentation leadership (selection, onboarding, QA, and exit/portability).
  • English (fluent); any additional language is an asset.

Core Competencies & Skills

  • Engineering excellence: event design, time-series/curve patterns, schema evolution, replay, SLAs/SLOs.
  • Governance by design: data contracts, DQ/lineage, RBAC/ABAC, masking/retention, SoD; audit-ready automation.
  • FinOps literacy: tagging discipline, capacity planning, rightsizing and lifecycle policies; clear cost storytelling.
  • People leadership: hiring, coaching, performance management; builds inclusive, high-accountability culture.
  • Executive communication: crisp updates, escalation discipline, clear trade-offs; trusted by desk heads and co

Skills & Requirements

Technical Skills

Azure, Snowflake, event architectures, AI/ML, feature/label pipelines, vector stores, policy-aware retrieval, model/LLM registry integration, Kafka/Flink, dbt/Spark/SQL, APIs/SQL/Graph, ETRM/CTRM, SCADA/time-series, logistics and finance/settlement data, OpenLineage, RBAC/ABAC, masking/retention, FinOps tagging, metrics/logs/traces, incident management, post-incident reviews, continuous hardening, ontology/knowledge-graph mapping, evidence and controls, environments/access/DR, leadership, communication, coaching, performance management, inclusive culture, executive communication, oil, gas, power trading

Salary

$60,000+ per year

Level

Manager

Posted

4/22/2026
