About Warp Speed Holdings
Warp Speed Holdings is a growing holding company that invests in and supports a diverse portfolio of start-ups, early-stage ventures, and established businesses. In addition to providing capital, the company delivers hands-on operational support across key areas such as finance, accounting, human resources, technology, compliance, and other essential back-office functions.
By combining investment resources with experienced operational leadership, Warp Speed Holdings helps its portfolio companies build strong foundations and scale efficiently. True to its name, the organization is known for moving quickly, making strategic decisions, and driving sustainable growth across the companies it supports.
This position supports AI Senior Manager, a workforce intelligence consulting firm that helps mid-market companies build the operational data foundation they need - structured workforce tracking, automated reporting, and AI-generated management insights - so leadership can make faster, smarter decisions backed by real data.
Position Summary
This is a ground-floor engineering role: the lead engineer designs the architecture, builds every system from scratch, and owns the full pipeline - data extraction from client platforms via API, metric computation and analysis, AI-generated narrative insights via the Anthropic Claude API, automated report assembly, and scheduled delivery. This position plays a key role in turning client workforce data into the executive-grade intelligence reports that are the core product of the business.
The right candidate operates comfortably with ambiguity, makes technical decisions independently, and ships working systems without waiting for detailed specifications.
This role includes potential equity participation in a growing start-up with over a million dollars in annual revenue.
Essential Duties and Responsibilities
- Design, build, and maintain the complete data extraction pipeline, connecting to client ActivTrak instances and project management tools (Asana, Monday.com, ClickUp) via REST APIs to pull workforce analytics data on a scheduled cadence
- Build and maintain the analysis and computation layer that transforms raw extracted data into workforce metrics including attendance rates, productivity scores, focus time, task completion rates, workload distribution, trend comparisons, and anomaly detection
- Integrate with the Anthropic Claude API to generate AI-powered narrative insights for client reports, including prompt engineering, structured output parsing, and programmatic validation that ensures the AI never fabricates statistics
- Build the report generation and delivery system - Markdown templates populated with computed data and AI narratives, converted to branded PDFs, and delivered via email on each client's contracted schedule
- Build and maintain monitoring and alerting for the full pipeline - sync failures, data completeness drops, AI generation errors, and missed delivery windows - so problems are caught before they reach clients
- Design and manage the data storage layer, migrating to PostgreSQL as client volume grows
- Configure and manage scheduled pipeline execution, initially via GitHub Actions, migrating to AWS Lambda and EventBridge as the infrastructure matures
- Onboard new clients into the pipeline - configure API connections, build extraction scripts, verify data quality, set up reporting templates, and confirm automated delivery within 3 business days of receiving credentials
- Ensure strict client data isolation across all storage and pipeline systems so that one client's data can never appear in another client's reports or analysis
- Maintain a prompt library and per-client context documents for the AI narrative engine, iterating on prompt quality based on client feedback to ensure insights are specific, accurate, and actionable rather than generic
- Support infrastructure scaling decisions - database migration timing, compute architecture, caching strategies, and capacity planning based on client growth trajectory
- Build Phase 2 features as the pipeline stabilizes, including automated KPI tracking for senior manager performance, predictive workforce modeling, anomaly detection, and eventually a client-facing analytics dashboard
The above functions are intended to describe the general nature and level of work performed by individuals assigned to this job. They are not designed to contain, or to be interpreted as, a comprehensive list of all duties, responsibilities, and qualifications required of employees assigned to this job.
Education and Experience
- 3+ years of hands-on experience building data pipelines and working with REST APIs in Python
- Strong experience with API integration patterns including OAuth authentication, pagination