Visa sponsorship or transfers not available.
Must have experience with quantitative hedge funds, systematic trading firms, or fintech platforms with research-heavy data needs.
The role:
You know how every investment team says "data is everything"? At this firm, that's not a talking point. It's literally the job. You'd be the person making sure the data layer actually works.
This is a hands-on engineering seat at a technology-driven investment firm in San Francisco. You'll own the plumbing: the pipelines, the schemas, the infrastructure that the entire research and investing operation runs on top of. Think of it as building the engine room for a ship full of quant researchers and data scientists who need clean, fast, reliable data to do their work.
Day to day, you'll be designing ingestion pipelines that pull from a variety of third-party sources, wrangling both structured and messy unstructured datasets, and shipping features that make internal teams faster and more productive.
You'll sit at the intersection of engineering and research: part builder, part thought partner. One week you might be rearchitecting a schema with a data scientist. The next you're debugging a production pipeline or evaluating a new orchestration tool.
The stack is modern and opinionated: Python, SQL, Trino, Apache Iceberg, Polars, Spark, Dagster, and Ray all make appearances depending on the problem. You won't be stuck maintaining legacy systems. There's real greenfield work here.
This is a five-days-a-week, in-office role in San Francisco. They want someone who's physically present and embedded with the team.
Qualifications:
What matters most is that you've actually built and shipped data systems end to end, not just written specs or maintained someone else's work. You should be comfortable in Python and SQL, have real experience standing up ingestion pipelines from multiple data sources, and know your way around relational databases like PostgreSQL.
You need to be someone who communicates well with non-engineers. You'll be working directly with data scientists, researchers, and product managers, so if you prefer to stay heads-down in a terminal with no cross-functional interaction, this isn't the right fit.
Nice-to-haves that will put you ahead: cloud infrastructure chops (AWS, S3), familiarity with streaming, messaging, or RPC tools like Kafka or gRPC, experience with container orchestration (Kubernetes), and any exposure to distributed computing frameworks like Ray. Bonus points if you've done this kind of work in or around financial services, quant research, or trading.
Ideal candidate: 3 to 7 years of data engineering experience. Strong Python/SQL. Has built pipelines end to end and shipped them to production. Comfortable working cross-functionally with researchers and PMs. Prior finance/quant exposure is a plus but not required. What matters is engineering quality and communication skills.
Full-time
Mid-level
4/7/2026
You will be redirected to BlackFern Recruitment's application portal.