Staff Data Engineer (Python, AI/ML) – Remote
Location: Remote (U.S. Only)
Compensation: $220,000 – $240,000 + full benefits
About the Opportunity
An international media and digital analytics organization is seeking a Staff Data Engineer to help lead the evolution of its data platforms and AI capabilities. This is a senior, high-impact technical role focused on large-scale data architecture, cross-platform integration, and AI/LLM-driven product development. You’ll work across multiple teams to shape long-term architecture, drive innovation, and ensure systems scale effectively as the organization grows.
Key Responsibilities
- Design and own scalable, cross-platform data architecture across complex systems
- Lead integration of data platforms to enable unified insights and product capabilities
- Build and optimize data pipelines (batch + streaming) and data models at scale using Python and modern data frameworks
- Develop and maintain production-grade Python code for data processing, automation, and AI/ML workflows
- Drive development and deployment of AI/ML and LLM-powered features
- Establish best practices for data modeling, pipeline design, and system observability
- Mentor engineers and raise the technical bar across teams
- Translate business needs into clear, scalable architectural decisions
- Contribute to architecture reviews, roadmap planning, and on-call support for production systems
Required Qualifications (Must-Have)
- 8+ years of experience in data engineering or data architecture
- Proven experience working in complex, multi-system or multi-team environments
- Strong Python expertise (hands-on, production-level development)
- Experience building AI/ML or LLM-based solutions in production
- Deep experience with large-scale data architecture (pipelines, modeling, warehousing)
- Strong communication skills, with the ability to translate complex technical concepts clearly
- Demonstrated success mentoring engineers and elevating team performance
Strongly Preferred
- Experience with Snowflake, BigQuery, or similar cloud data warehouse platforms
- Familiarity with Kafka, Kubernetes, or real-time streaming infrastructure
- Experience with cross-division or cross-company data/platform integration initiatives
- Passion for mentorship, teaching, and contributing to engineering culture, not just individual delivery
Technical Environment
- Cloud data warehouses (Snowflake, BigQuery, or similar)
- Data processing & orchestration (Spark, Kafka, Airflow, or equivalent)
- Batch and streaming data pipelines
- Cloud-native / containerized environments (Kubernetes a plus)
What Success Looks Like
- First 90 days: Deliver an architectural assessment and identify integration opportunities
- 6 months: Advance or deliver a cross-platform data or AI initiative
- 12 months: Own the data architecture roadmap and mentor multiple engineers
Benefits
- Medical, Dental, and Vision coverage
- 401(k) with company match
- Generous paid parental leave
- Flexible work schedule + unlimited PTO
- Monthly stipends (home office, internet, wellness)
- Collaborative culture with a strong work-life balance
Application Question(s):
- Please describe your large-scale data architecture experience, including pipeline design, data modeling, and warehouse technologies.
- Please describe any experience with Snowflake, BigQuery, Kafka, Kubernetes, and/or real-time streaming infrastructure.
- How many years of data engineering and/or data architecture experience do you have?
- How many years of Python development experience do you have?
- How many years of hands-on experience do you have building LLM-backed products?