Qdrant is an open-source vector search engine powering the next generation of AI applications, from semantic search and retrieval-augmented generation (RAG) to AI agents and real-time recommendations.
Trusted by global leaders like Canva, HubSpot, Tripadvisor, Bosch, and Deutsche Telekom, we're building the retrieval infrastructure layer for modern AI. Having recently raised $50M in Series B funding, we are growing rapidly and committed to transforming how AI understands and interacts with data.
As a remote-first company, we believe diverse backgrounds, perspectives, and experiences fuel innovation. Here, you’ll own meaningful work, tackle challenges, and grow alongside passionate individuals dedicated to shaping the future of AI.
We are looking for a Research Engineer, Agentic Retrieval. You'll work at the seam between agent systems research and retrieval engineering, running a tight loop between hypothesis, experiment, and shipped artifact.
The questions you'll chase may not have settled answers yet: how agents should structure memory, when they should re-query versus reason, how skills and tools should be retrieved and composed, what retrieval primitives the agent loop actually needs, and what "good" even means when success is a multi-step trajectory rather than a ranked list.
You'll go deep on how real agent stacks use Qdrant today, where the abstractions around them help or hurt, and what we should build (or change) so they can do more with less. The agent ecosystem moves fast, and part of the job is staying current with it without getting captured by it.
You'll have a lot of latitude to choose what to investigate. Whatever direction you take, the bar is the same: every cycle should produce something the field, our customers, or the rest of the company can act on.
What you will own
• Define what good agentic retrieval looks like. Characterize the retrieval patterns inside real agent loops, name the failure modes, and turn that vocabulary into something the team and the field can build against.
Who you are
• You read and reason about LLM behavior directly. You can distinguish prompt issues from planning issues from retrieval issues from tool design issues, and you've internalized how models actually use retrieved content versus ignore it.
Seniority: Senior
5/8/2026