Research Engineer, Search and Post-Training

Menlo Ventures
New York, US
On-site

Job Description

Position: Research Engineer, Search and Knowledge Post-Training

Location: New York

Overview

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

We want future AI systems to have superhuman epistemics: the ability to parse evidence at enormous scale and draw rigorous conclusions, both for the model itself and for the user. Search is the capability that determines whether a model can pick signal out of noise, weigh conflicting evidence, and know what it doesn't know. Every higher-order capability we care about depends on search being trustworthy.

If we want Claude to be a trustworthy collaborator on real knowledge work, it has to be a trustworthy searcher.

We're hiring a Research Engineer to advance the science and engineering that make Claude this trustworthy searcher. This is a research role for someone who is unusually rigorous: you'll define hypotheses about what makes a model an epistemically sound searcher, design the experiments that test them, and turn search post-training from a craft into a measurable science.

You'll insist on cleanly isolated variables, calibrated metrics, and reproducible signal, while also having the engineering skill to build the infrastructure necessary to get them. This work sits at the intersection of reinforcement learning, retrieval, and evaluation, and it directly shapes how Claude behaves in any setting where evidence matters: research, analysis, agentic workflows, and beyond.

What You'll Do

  • Own a research direction for a class of search post-training problems end-to-end: form hypotheses about latent capabilities, design experiments that isolate them, run training, and decide what to try next.
  • Build the instrumentation that turns environment design into a controlled experiment so we can study how each environment factor contributes to the capabilities we care about, rather than overfitting to any one regime.
  • Design frontier-discriminating evaluations that distinguish genuine reasoning over evidence from plausible pattern matching and that hold up as models improve.
  • Drive optimization rigor across the stack: efficient experiment design, ablations, training run economics, and the discipline to know when a result is real.
  • Collaborate deeply with researchers across post-training, RL infrastructure, and product to translate model behavior in the wild into concrete training signals and back again.
  • Set the bar for the team's experimental standards: what we measure, how we measure it, how we know a result is real.

Minimum (must-have)

  • You have an unusually rigorous, quantitative mindset.
  • You are an outstanding software engineer in Python, comfortable across the stack from data pipelines to RL training to evaluation infrastructure.
  • You have shipped real ML research repeatedly, with taste for which experiments are worth running.
  • You instinctively reach for ablations, controls, and confidence intervals to understand why a result holds.
  • You operate well under high autonomy and ambiguity, and can identify the most impactful problem to work on next without being told.
  • You want to set research direction, advocate for experimental rigor, and raise the bar for the people around you.
  • You communicate research clearly in writing and in person; you can defend a design choice and update on evidence.
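
The list above calls out ablations, controls, and confidence intervals. As a purely illustrative sketch of that habit (all names here are hypothetical, not part of the role), a percentile-bootstrap confidence interval over binary eval outcomes might look like:

```python
import random

def bootstrap_ci(outcomes, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of 0/1 eval outcomes."""
    rng = random.Random(seed)
    n = len(outcomes)
    # Resample with replacement and collect the mean of each resample.
    means = sorted(
        sum(rng.choices(outcomes, k=n)) / n for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Example: 70 correct out of 100 eval questions.
outcomes = [1] * 70 + [0] * 30
low, high = bootstrap_ci(outcomes)
```

A wide interval here is a signal that an apparent gap between two training runs may not be real without more eval samples.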

Preferred (nice-to-have)

  • Hands-on experience with RL on large language models — environments, reward design, training stability, scaling behavior.
  • Background in search, retrieval, RAG, or agents that reason over external information sources.
  • Experience building evaluations for open-ended or knowledge-intensive LLM behavior.
  • Prior work in a research-heavy environment — frontier AI lab, quant research firm, or similarly demanding empirical setting — where rigor is the default.
  • Published research on LLMs, RL, retrieval, calibration, or related topics.
  • Experience with distributed training systems and large-scale experimentation infrastructure.

Representative projects

  • Designing a controlled-noise search environment where you can dial up failure…
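
One way to picture the controlled-noise idea (a hypothetical sketch, not the team's actual harness): mix a gold document with distractors at a dial-able noise rate, then measure a searcher's accuracy as a function of that rate.

```python
import random

def make_episode(gold_doc, distractors, noise_rate, rng):
    """One search episode: the gold doc plus a noise-rate-controlled
    number of distractors, shuffled together."""
    k = int(noise_rate * len(distractors))
    docs = [gold_doc] + rng.sample(distractors, k)
    rng.shuffle(docs)
    return docs

def accuracy_vs_noise(searcher, gold_doc, distractors, noise_rates,
                      episodes=200, seed=0):
    """Score a searcher callable (docs -> chosen doc) at each noise level."""
    rng = random.Random(seed)
    results = {}
    for rate in noise_rates:
        hits = sum(
            searcher(make_episode(gold_doc, distractors, rate, rng)) == gold_doc
            for _ in range(episodes)
        )
        results[rate] = hits / episodes
    return results

# Toy searcher: picks the doc sharing the most words with the query.
def keyword_searcher(query):
    def search(docs):
        qwords = set(query.split())
        return max(docs, key=lambda d: len(qwords & set(d.split())))
    return search

gold = "anthropic claude search post training"
noise_docs = [f"unrelated document number {i}" for i in range(50)]
curve = accuracy_vs_noise(keyword_searcher("claude search"),
                          gold, noise_docs, [0.0, 0.5, 1.0])
```

Because the noise rate is an explicit dial, each environment factor can be varied in isolation rather than confounded across regimes.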

Skills & Requirements

Technical Skills

Python, ML research, Reinforcement learning, Retrieval, Evaluation infrastructure, Rigor, Quantitative mindset, Software engineering, Communication, Autonomy, Ambiguity, Problem solving, Teamwork, Leadership, AI, Search, LLMs, Evaluation

Employment Type

FULL TIME

Level

Mid-Level

Posted

5/8/2026

Apply Now

You will be redirected to Menlo Ventures' application portal.
