Feldspar & Flint LLC is a recruiting and staffing firm specializing in operational strategy across core business functions.
Our client is looking for a Data Engineer with 3+ years of experience to support and evolve a mortgage-focused data platform. This role combines ownership of a SQL Server-based data warehouse with contributions to a modern, Python-driven data stack. You will work closely with business stakeholders and engineering teams to ensure reliable, high-quality data pipelines and scalable data architecture.
Key Responsibilities
- Own and optimize a SQL Server-based mortgage data warehouse, including performance tuning (queries, indexing, execution plans) and overall system reliability
- Design and maintain ETL pipelines (SSIS and API-based) to ingest and integrate data from external servicers and third-party sources
- Partner closely with business stakeholders to translate complex business requirements into scalable technical solutions
- Manage daily and month-end data processing workflows, troubleshooting failures and ensuring consistent data availability
- Standardize, reconcile, and model data across multiple sources into analytics-ready data marts and support semantic layer development (e.g., SSAS Tabular)
- Contribute to modern data platform initiatives (Python, Spark, DuckDB, Polars, Delta Lake), including data quality frameworks, governance (metadata, lineage), documentation, and cloud-ready architecture
Required Qualifications
- 3+ years of experience in data engineering, data warehousing, or a related field
- Strong hands-on experience with SQL Server (T-SQL, performance tuning, indexing strategies)
- Proven experience building and maintaining ETL pipelines, preferably using SSIS and/or Python-based frameworks
- Solid understanding of data modeling (dimensional modeling, data marts, normalization vs. denormalization)
- Experience integrating data from external systems via APIs or batch ingestion
- Familiarity with reconciling data and ensuring consistency across multiple data sources
- Exposure to BI/semantic layers such as SSAS Tabular
- Proficiency in Python and experience working with modern data tools (e.g., Spark, DuckDB, Polars, Delta Lake)
- Strong problem-solving skills with the ability to debug production data issues