Department: Platform & Operations
Location: Atlanta HQ (preferred); Remote US considered
Reports to: Director, Platform & Operations
The data behind every pedal stroke, heart rate reading, and training session tells a story, and at Wahoo that story flows through every part of the business. As our Staff Engineer, Data Platform, you'll own the business data layer as a company-wide asset: defining how operational and analytical data moves, how it's trusted, and how it can be leveraged across Product, Engineering, Sales, Finance, and beyond. This isn't a role about maintaining pipelines; it's about setting the technical direction for a data platform the entire organization depends on, and earning the credibility to shape decisions across teams that don't report to you.
You'll inherit a functioning infrastructure and make it genuinely excellent: closing gaps, raising standards, and writing the technical documents that govern future decisions. You'll anticipate problems 6–18 months before they surface and address them before they become crises. You'll be the most experienced technical voice on business data at Wahoo — a force multiplier whose judgment, standards, and guidance make every team that touches data more effective.
Key Responsibilities
- Data Platform Strategy & Architecture: Own the technical direction of Wahoo's business data platform as an organization-wide capability. Define a clear, prioritized roadmap for how it should evolve to meet needs across Product, Engineering, Sales, Finance, and Operations — not just the Platform & Operations team.
- RFC & Technical Documentation: Write RFCs, architecture decision records, and technical strategy documents that drive alignment across teams and serve as durable references. Architectural choices should be documented in ways that others can learn from, challenge, and build on.
- Cross-Team Technical Influence: Shape data architecture decisions across teams without relying on authority. Build consensus through technical credibility, clear reasoning, and the ability to make tradeoffs legible to stakeholders with different priorities.
- ELT Pipeline Ownership: Own and evolve pipelines moving data from source systems — including e-commerce, ERP, mobile, and operational integrations — through transformation and into our Redshift data warehouse. Set the reliability and observability standard across the board.
- Cloud Systems Partnership: Work closely with the Cloud team to source key operational and product data from the systems they own. Define clear data contracts at the boundary, advocate for the downstream needs of BI and business stakeholders, and build relationships that make cross-team data flows reliable and well understood.
- Data Modeling & SQL: Write and optimize SQL for complex views, materialized views, and datasets in Redshift. Define modeling standards and best practices that become the baseline others follow and extend.
- Security & Compliance by Default: Own the data security posture of the pipeline layer. Embed access controls, data classification, PII handling, audit logging, and least-privilege design into architecture from the start — working with the Director of Information Security to align with company policy and close gaps proactively.
- Proactive Risk Management: Identify and address technical risks in the data platform 6–18 months before they become operational problems. Anticipate schema decisions, capacity constraints, compliance gaps, and vendor dependencies, and drive action before the urgency becomes acute.
- Tooling & Reliability: Maintain and evolve our data stack (dbt Cloud, Airflow, Airbyte, Rivery/Boomi) with a systems operator's mindset. Introduce improvements in a disciplined, non-disruptive way. Make the case for tooling changes with evidence and clear tradeoff analysis.
- BI Partnership: Partner closely with the BI team (Metabase, Power BI) to ensure data modeling decisions serve real analytical needs. The goal is datasets teams can extend independently — not ones that require ongoing engineering intervention to use.
- Pipeline Health & Data Quality: Define and enforce data quality standards with alerting, testing, and observability tooling. Problems should surface in your systems before they reach downstream consumers.
- Force Multiplication: Build data engineering literacy and capability across the company — not just the Platform & Operations team. Create the shared context, standards, and documentation that allow others to make good data decisions without you in the room.
- Budget-Aware Decision Making: Operate within real budget constraints and advocate clearly inside them. Know when a simpler solution is the right one, and when an investment now prevents a larger cost later. Bring proposals with tradeoffs quantified, not just technical preferences stated.
Wahoo Context