Job Description
How do you make a large language model genuinely human-centred, capable of reasoning, empathy, and nuance rather than just pattern-matching?
This team is built to answer that question. They’re a small, focused group of researchers and engineers working on the post-training challenges that matter most: RLHF, RLAIF, continual learning, multilingual behaviour, and evaluation frameworks designed for natural, reliable interaction.
You’ll work alongside researchers and engineers from NVIDIA, Meta, Microsoft, Apple, and Stanford, in an environment that combines academic rigour with production-level delivery. Backed by over $400 million in funding, the team has the freedom, compute, and scale to run experiments that push beyond the limits of standard alignment research.
This is a role where your work moves directly into deployed products. The team’s models are live, meaning every insight you develop, every method you refine, and every experiment you run has immediate, measurable impact on how large-scale conversational systems behave.
What you’ll work on
- Developing post-training methods that improve alignment, reasoning, and reliability
- Advancing instruction-tuning, RLHF/RLAIF, and preference-learning pipelines for deployed systems
- Designing evaluation frameworks that measure human-centred behaviour, not just accuracy
- Exploring continual learning and multilingual generalisation for long-lived models
- Publishing and collaborating on research that informs real-world deployment
Who this role suits
- Researchers or recent PhDs with experience in LLM post-training, alignment, or optimisation
- A track record of rigorous work: published papers, open-source projects, or deployed research
- Curiosity about how large models learn and behave over time, and how to steer that behaviour safely
- Someone who values autonomy, clarity of purpose, and research that turns into impact
You’ll find a culture driven by technical depth rather than hype, where thoughtful research is backed by meaningful compute and the best ideas scale fast.
Location: South Bay (on-site, collaborative setup)
Compensation: $200,000–$250,000 base + equity + bonus
If you’re ready to work on post-training research that shapes how large language models behave, we’d love to hear from you.
All applicants will receive a response.