Join our team as a Data Engineer, where you will leverage cutting-edge technologies and your expertise to drive data innovation and optimization in a dynamic, client-facing environment.
Responsibilities
- Develop and optimize data engineering pipelines using Databricks and Kafka.
- Implement data governance and management solutions including user access management and metadata handling.
- Collaborate closely with clients to meet their data management needs and expectations.
- Apply knowledge of the oil and gas industry to enhance data processes and insights.
- Ensure successful implementation of data management MVPs in a client-centric setting.
Skills
- Proven hands-on experience with Databricks and Unity Catalog.
- Strong communication skills for effective client interaction and project delivery.
- Skilled in performance optimization and advanced data engineering development scenarios.
Preferred Skills
- Experience with Power BI and a passion for data governance and management.
- Familiarity with the oil and gas sector to drive industry-specific data solutions.
Looking for a Senior Data/Software Engineer focused on delivering a step change in SEAm’s Data Management and Governance maturity. The role is implementation-led, building practical tools, automation, and AI-enabled capabilities on the Databricks Lakehouse to operationalize and scale established governance frameworks, and position SEAm for the next phase of its data journey.
Role Description
Senior Data / Software Engineer | SEAm Data Management & Governance Enablement (AI-Accelerated)
Role Purpose
This is a hands-on Senior Data / Software Engineer role, accountable for progressing the Data Management and Data Governance capabilities and maturity of the SEAm business, delivered through the SEAm Lakehouse on Databricks.
SEAm has already invested significantly in establishing its Data Management and Governance foundations — including frameworks, guiding principles, and draft processes. The organisation is now entering the next phase of its data journey: moving deliberately from definition to implementation, adoption, and measurable maturity uplift.
The primary expectation of this role is therefore to drive delivery of a step change in SEAm’s data management maturity, by:
- Implementing agreed governance and data management capabilities in practice
- Using AI wherever it adds real value to accelerate, enhance, and scale those capabilities
- Turning frameworks and intent into working tools, automation, and repeatable patterns that teams can actually use
This role plays a critical part in setting SEAm up for the next exciting phase of its data journey, ensuring that data is well-governed, trusted, discoverable, and ready to support advanced analytics and AI at scale.
Key Responsibilities
Progressing Data Management & Governance Maturity (Primary Focus)
- Lead hands-on implementation of Data Management and Governance capabilities across the SEAm Lakehouse
- Translate agreed frameworks and draft processes into operational reality
- Focus on practical maturity uplift across areas such as:
  - Data ownership and accountability
  - Data quality and trust
  - Metadata completeness and consistency
  - Access management and controls
  - Usability and adoption of governed data
AI-Enabled Capability Development
- Identify and implement AI-enabled solutions that meaningfully enhance Data Management and Governance, for example:
  - AI-assisted metadata enrichment
  - Accelerated data quality rule creation and validation
  - Intelligent automation of governance activities
  - Improved discoverability and usability of governed data
- Ensure AI is applied pragmatically and safely, aligned to governance intent and business needs
- Focus on AI as an accelerator of maturity and adoption, not experimentation for its own sake
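To make "AI-assisted metadata enrichment" concrete, here is a minimal, illustrative sketch: it flags catalog columns that lack descriptions and drafts candidate text for an owner to review. In practice the drafting step would call an LLM; a naive heuristic stands in for it here, and all table and column names are hypothetical.

```python
from __future__ import annotations


def draft_description(column_name: str) -> str:
    """Heuristic stand-in for AI-assisted description drafting."""
    words = column_name.replace("_", " ").split()
    return " ".join(words).capitalize() + " (draft - needs owner review)"


def enrich_metadata(columns: dict[str, str | None]) -> dict[str, str]:
    """Return drafted descriptions for columns missing a description."""
    return {
        name: draft_description(name)
        for name, desc in columns.items()
        if not desc
    }


# Hypothetical column metadata: name -> existing description (or None)
catalog_columns = {
    "well_id": "Unique identifier of the well",
    "spud_date": None,
    "daily_oil_rate": None,
}

drafts = enrich_metadata(catalog_columns)
for name, desc in drafts.items():
    print(f"{name}: {desc}")
```

The key design point is human-in-the-loop enrichment: AI drafts, data owners approve, keeping the process aligned with governance intent.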
Hands-on Databricks Implementation
- Implement Data Management and Governance capabilities directly within Databricks, including:
  - Unity Catalog configuration and enablement
  - User Access Management (UAM) patterns
  - Metadata structures, capture, and publishing
  - Data quality checks, metrics, and remediation
- Work in real delivery cycles, iterating toward usable, scalable outcomes
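As a flavour of the "data quality checks, metrics, and remediation" work, the following is a minimal, platform-agnostic sketch of declarative quality rules with per-rule pass rates. On Databricks such rules would typically run against Delta tables via scheduled jobs or pipeline expectations; plain Python dicts stand in for rows here, and all rule and field names are hypothetical.

```python
# Declarative data quality rules: name -> predicate over a row (dict).
RULES = {
    "well_id_present": lambda row: bool(row.get("well_id")),
    "oil_rate_non_negative": lambda row: row.get("daily_oil_rate", 0) >= 0,
}


def run_checks(rows):
    """Return the pass rate per rule, a simple data quality metric."""
    results = {}
    for name, rule in RULES.items():
        passed = sum(1 for row in rows if rule(row))
        results[name] = passed / len(rows)
    return results


sample = [
    {"well_id": "W-001", "daily_oil_rate": 1250.0},
    {"well_id": "", "daily_oil_rate": -5.0},
]
print(run_checks(sample))
```

Keeping rules declarative like this makes them easy to version, review with data owners, and reuse as a template across teams.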
Templates, Automation & Reference Implementations
- Build reusable templates, patterns, and reference implementations that:
  - Clearly demonstrate “how to do it” for delivery teams
  - Reduce ambiguity, rework, and reliance on central support
- Prefer automation and tooling over manual processes or documentation wherever possible
Analytics, Insight & Transparency
- Provide insight into data quality and governance health
- Track and visualise progress in data management maturity
- Support transparency and decision-making for both technical teams and the business
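One simple maturity metric of the kind described above is metadata completeness per table: the fraction of columns with a non-empty description, trackable over time on a governance dashboard. This is an illustrative sketch only; the table and column names are hypothetical.

```python
def metadata_completeness(tables):
    """Fraction of columns with a non-empty description, per table."""
    scores = {}
    for table, columns in tables.items():
        described = sum(1 for desc in columns.values() if desc)
        scores[table] = described / len(columns)
    return scores


# Hypothetical catalog snapshot: table -> {column: description or None}
tables = {
    "production.daily_rates": {"well_id": "Well identifier", "rate": None},
    "reference.wells": {"well_id": "Well identifier", "name": "Well name"},
}

print(metadata_completeness(tables))
```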
Collaboration & Enablement
- Work closely with Data Engineers, Architects, Platform teams, and Business Data Owners
- Enable teams to be increasingly self-sufficient