Senior Data Engineer (Data Federation & Lakehouse)
As a Senior Data Engineer, you will be responsible for breaking down data silos. This role focuses on building a unified, high-performance data layer using Data Federation techniques. You won't just move data; you will architect a Data Lakehouse environment where disparate sources feel like a single, cohesive database for our analytics and AI teams.
### Core Responsibilities
- Data Federation Architecture: Design and implement federated query layers (e.g., Starburst/Trino) to allow high-speed analytics across distributed data sources without unnecessary data movement (see the Trino sketch after this list).
- ETL/ELT Pipeline Development: Build scalable, distributed data processing pipelines using Python and Apache Spark (PySpark).
- Lakehouse Implementation: Manage and optimize modern table formats like Delta Lake, Apache Iceberg, or Apache Hudi to bring ACID transactions to our data lake (see the upsert sketch after this list).
- Performance Tuning: Optimize Spark jobs and SQL queries across the federation layer to minimize latency and manage compute costs.
- Governance & Security: Implement fine-grained access control and data masking within the federation engine to ensure data privacy across all connected platforms.
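To make the federation work concrete, here is a minimal sketch of a cross-source query issued through Trino's Python client. The coordinator host, catalog names (`postgres_crm`, `lake`), and table names are hypothetical placeholders; it assumes `pip install trino` and a reachable Trino or Starburst coordinator.

```python
# A minimal federated-query sketch; hosts, catalogs, and tables are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # assumed coordinator endpoint
    port=8080,
    user="analytics",
)
cur = conn.cursor()

# One SQL statement joins a Postgres source with an S3-backed lake table,
# so no data has to be copied into a warehouse first.
cur.execute("""
    SELECT c.customer_id, c.region, SUM(o.amount) AS lifetime_value
    FROM postgres_crm.public.customers AS c
    JOIN lake.sales.orders AS o
      ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.region
    ORDER BY lifetime_value DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```

Because the join executes inside the federation engine, only the final result crosses the wire; neither source table is moved.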
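And a sketch of the ACID upsert pattern these table formats enable, using the delta-spark Python API. The S3 paths and the `order_id` merge key are hypothetical placeholders.

```python
# A hypothetical idempotent upsert into a Delta table (paths are placeholders).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("orders-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# New records landed by an upstream extract job (assumed path).
updates = spark.read.parquet("s3://landing/orders/2024-06-01/")

# MERGE runs as a single atomic transaction: concurrent readers never
# observe a half-applied batch.
target = DeltaTable.forPath(spark, "s3://lake/silver/orders")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```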
### Technical Requirements
- Python & Spark: 5+ years of experience with Python and deep expertise in Apache Spark tuning (partitioning, shuffling, caching).
- Data Federation Tools: Hands-on experience with Starburst Enterprise, Trino (formerly PrestoSQL), or Dremio.
- Lakehouse Ecosystem: Proven track record working with Delta Lake or Iceberg architectures.
- Cloud Platforms: Extensive experience with AWS (EMR, S3, Glue), Azure (Databricks, ADLS), or GCP.
- SQL Mastery: Expert-level SQL skills for complex analytical queries and query plan analysis.
- Data Modeling: Proficiency in designing Star/Snowflake schemas and understanding the Medallion Architecture (Bronze, Silver, Gold layers); a compact Medallion example also follows this list.
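To illustrate the tuning levers named above, a minimal sketch covering shuffle-partition sizing, a broadcast hint, selective caching, and physical-plan inspection. The table names are hypothetical and assumed to already be registered in the session catalog.

```python
# Hypothetical tuning sketch; table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Right-size shuffle parallelism instead of accepting the default of 200.
spark.conf.set("spark.sql.shuffle.partitions", "400")

facts = spark.table("silver.orders")      # large fact table (assumed)
dims = spark.table("silver.customers")    # small dimension table (assumed)

# Broadcast the small side so the large side avoids a full shuffle.
joined = facts.join(F.broadcast(dims), "customer_id")

# Cache only results that several downstream jobs will reuse.
joined.cache()

# Read the physical plan to confirm a BroadcastHashJoin was selected.
joined.explain(mode="formatted")
```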
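And a compact, hypothetical Bronze → Silver → Gold flow in the Medallion style; all paths, columns, and cleansing rules are placeholders.

```python
# Hypothetical Medallion flow: raw -> cleaned -> business-ready.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("medallion-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: raw events exactly as they landed.
bronze = spark.read.json("s3://lake/bronze/events/")

# Silver: deduplicated, typed, and validated.
silver = (
    bronze.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/events")

# Gold: business-level aggregates ready for BI tools.
gold = silver.groupBy(F.to_date("event_ts").alias("day")).agg(
    F.count("*").alias("event_count")
)
gold.write.format("delta").mode("overwrite").save("s3://lake/gold/daily_events")
```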
### Preferred "Bonus" Skills
- Experience with Infrastructure as Code (IaC) tools such as Terraform or Pulumi (a minimal Pulumi sketch follows this list).
- Familiarity with dbt (data build tool) for modeling within the federation layer.
- Knowledge of Kubernetes (K8s) for deploying and scaling Spark/Trino clusters.
- Background in Data Mesh or Data Fabric methodologies.
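As a small illustration of the IaC item above, a minimal Pulumi (Python) program declaring a hypothetical, versioned S3 bucket for the lake; the resource names are placeholders, and it requires `pip install pulumi pulumi-aws` plus AWS credentials.

```python
# Hypothetical Pulumi program; bucket and export names are placeholders.
import pulumi
import pulumi_aws as aws

# Versioning protects Delta/Iceberg metadata against accidental overwrites.
lake_bucket = aws.s3.Bucket(
    "data-lake",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

pulumi.export("lake_bucket_name", lake_bucket.id)
```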
### Compensation, Benefits and Duration
- Minimum Compensation: USD 40,000
- Maximum Compensation: USD 140,000
Compensation is based on the actual experience and qualifications of the candidate. The range above is a reasonable, good-faith estimate for this role.
Medical, vision, and dental benefits, a 401(k) retirement plan, variable pay/incentives, paid time off, and paid holidays are available to full-time employees.
This position is not available to independent contractors. No applications will be considered if received more than 120 days after the date of this post.