
Want to build the systems that make AI actually useful inside real companies?

This Series A startup is tackling one of the hardest problems in enterprise AI. Models are generic. Company processes aren’t. They’re building AI agents that learn how work actually happens, then run those operations end-to-end.

Backed by top-tier investors, they’ve built a deeply technical team across engineering, AI research, and strategy. The focus is simple. Build things properly, with people who care about the craft.

You’ll join as an early full stack engineer, shaping both the product and the foundations it’s built on.

The work sits around the models, not inside them. You’ll build the platform, workflows, and interfaces that make AI usable in real-world environments. That means designing systems that are reliable, observable, and genuinely pleasant to work with, both for users and other engineers.

There’s no separation between building and shipping here. You’ll take ideas from whiteboard to production, owning the outcome end to end. The bar is high, but so is the autonomy.

You’ll spend time designing clean API contracts, modelling data properly, and building frontends that don’t fight you six months later. Velocity matters, but not at the expense of quality.

Your focus will include:

  • Designing and building backend systems in Python using FastAPI, from API design through to database schema and infrastructure
  • Creating high-quality frontend experiences in TypeScript and React, with strong typing and clean component architecture
  • Building shared libraries, internal tooling, and component systems that improve how the whole team ships
  • Owning problems end to end, from shaping ambiguous requirements through to production deployment
  • Developing integrations, connectors, and data pipelines that tie the platform into external systems

You’ll also have real input into how the product evolves, working closely with design and product to understand how customers use what you build.

This is greenfield work. The decisions you make now will compound over time.

They’re looking for engineers who care about how things are built, not just that they work.

  • You enjoy writing Python and TypeScript to a high standard, with strong typing and clear structure
  • You think carefully about data models and take pride in getting schema design right
  • You’ve built libraries, SDKs, or internal tooling that improved developer experience
  • You’re comfortable owning problems end to end, even with ambiguity
  • You have good product instinct and care about how things feel to use

Experience-wise, around 3+ years is a useful guide, but what matters more is how you think and build.

You’ll join a small, high-calibre team where you can influence tooling, patterns, and technical direction from day one.

Compensation: Up to $250,000 base + equity
Location: New York (in-person)

If building foundational systems properly, with real ownership, sounds like your kind of environment, it’s worth a conversation.

Location: NYC
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 27/04/2026
Job ID: 35899

Want to build product experiences for AI agents that actually understand how companies operate?

This team is tackling a core limitation in enterprise AI. Models are general, but workflows are not. They’re building agents that learn how processes really run, then execute them. This is not surface-level UI work; you’re building the interface layer to systems that directly operate inside real business workflows.

They’re a Series A company backed by Sequoia, with a deeply technical team across engineering and AI. As an early frontend engineer, you won’t just build features; you’ll shape how the product feels, how it’s structured, and how other engineers build on top of it.

You’ll work primarily in TypeScript and React, building high-quality, user-facing experiences that sit on top of complex AI systems. Strong typing, clean abstractions, and thoughtful API design matter here. This is a team that values well-modelled systems over quick fixes.

There’s no separation between building and shipping. You’ll take ideas from concept to production, owning decisions across architecture, UX, and implementation. You’ll work closely with design, contributing to interface decisions and helping define how users interact with agent-driven workflows.

You’ll also have real influence on frontend architecture, tooling, and patterns. Whether that’s building a component library, shaping state management decisions, or improving how the frontend integrates with backend systems, your decisions will compound as the team scales.

What you’ll focus on:

  • Building production-grade frontend applications using TypeScript and React
  • Designing and contributing to frontend architecture, patterns, and component systems
  • Collaborating closely with design to shape UX and interface decisions
  • Owning features end-to-end, from scoping through to deployment
  • Debugging across the stack, including tracing issues into API and backend layers

What you’ll bring:

  • Strong experience with TypeScript, with a focus on well-typed, maintainable code
  • Solid React fundamentals, building scalable and performant interfaces
  • Experience designing APIs, data models, or internal tooling that improves developer workflows
  • Good product and interaction judgement, comfortable working closely with design
  • Comfort owning ambiguous problems and turning them into clear, deliverable solutions

You’ll likely have around 3+ years in software engineering, but what matters more is how you think about systems. If you enjoy building from scratch, care about clean abstractions, and take pride in code that other engineers enjoy working with, you’ll fit well here.

Bonus if you’ve worked with Next.js, built design systems, or developed libraries and SDKs. But strong fundamentals in React and TypeScript are the priority.

This is a frontend-focused role, but you’ll be expected to understand the wider system. You should be comfortable debugging issues beyond the UI when needed.


Comp: Base Salary up to $250,000 + equity
Location: New York (also growing in London)

If you’re motivated by building thoughtful systems, not just shipping features, this is the kind of environment where your work compounds over time.

Location: NYC
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 27/04/2026
Job ID: 35866

Want to build the infrastructure that makes AI agents actually work inside real companies?

AI models are powerful, but they’re generic. Enterprise workflows aren’t. This team is closing that gap, building a learning layer that turns messy internal context into structured, executable systems that AI agents can actually use.

You’ll join a deeply technical team working on a platform that learns from tickets, Slack, emails, logs, and knowledge bases, then converts that into versionable “skills” for AI. Think of it as a “GitHub for context”, a system that makes company knowledge readable, maintainable, and executable.

This isn’t model training. It’s everything that makes models useful in production.

You’ll design and build the backend systems that power this layer: APIs, data models, integrations, and tooling that connect into real enterprise environments like ServiceNow, Jira, Zendesk, and Salesforce. The platform is already operating at serious scale, processing vast amounts of operational data across large organisations.

The work is high ownership. You won’t be handed tickets. You’ll take problems from idea to production, shaping architecture, building systems, and seeing how they perform in real-world use.

Your focus will include:

  • Building backend systems in Python (FastAPI), from API design through to database schema
  • Creating integrations, connectors, and data pipelines across enterprise tools
  • Developing internal tooling and libraries that improve engineering velocity
  • Owning systems end-to-end, including deployment and observability

You’ll enjoy this if you care about how software is built. Strong typing, clean interfaces, and well-structured data models aren’t afterthoughts here; they’re core to how the team works.

You’re likely someone who takes pride in designing schemas properly, enjoys building systems other engineers rely on, and prefers thoughtful, robust solutions over quick fixes.

The company has raised a $28M Series A led by Sequoia and is already working with large enterprise environments, processing data at significant scale. It’s still early enough that your decisions will shape the platform and engineering culture long-term.

Package:

Comp: $190K–$250K + meaningful equity
Location: New York (also expanding in London)

If you’re interested in building the systems that make AI actually usable in the real world, this is worth exploring.

Location: NYC
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 27/04/2026
Job ID: 35833

Machine Learning Researcher – World Models (Generative Video & Simulation)

Ready to build models that learn the structure and dynamics of the physical world?

This role focuses on developing world models: large-scale generative systems capable of simulating environments, actions, and interactions over extended time horizons. The goal is to move beyond short video generation toward models that can represent persistent environments and evolving dynamics.

As a Machine Learning Researcher, you’ll work on spatiotemporal generative models that learn how the world changes over time. These models aim to capture physical interactions, causality, and long-term dynamics, forming the foundation for intelligent systems that can reason about future outcomes and learn through simulation.

Your work will explore how generative models transition from producing short visual sequences to maintaining coherent simulations of environments where objects move, interact, and evolve consistently over time.

You’ll contribute across the full modelling lifecycle, from architecture design and training infrastructure through to evaluation and iteration. The role blends deep research with practical implementation, where experimental ideas are tested at scale and integrated into real systems.

This is a research-driven environment where engineers have significant ownership over model design, training strategies, and evaluation frameworks.

Your focus will include:

  • Designing and training spatiotemporal world models capable of learning long-horizon dynamics
  • Advancing video generation systems into persistent simulations that maintain coherence across time
  • Running large-scale training experiments on multi-billion parameter generative models
  • Improving temporal consistency, memory, and controllability in generative architectures
  • Developing evaluation methods for physical plausibility, causal consistency, and simulation stability
  • Working with large video datasets, including synthetic environments and real-world recordings

Hands-on experience with video generation, spatiotemporal modelling, or multimodal generative models is essential. This could include work with diffusion models, autoregressive approaches, transformers, or related architectures.

You should be comfortable implementing recent research, designing experiments, and iterating quickly on large training runs. Experience managing experiments on large GPU clusters and training large models at scale is highly valuable.

Strong coding ability in Python is required, with C++ or Rust considered beneficial.

You’ll have significant ownership over modelling decisions and the opportunity to shape how world models evolve within a small, technically ambitious AI research team.

Compensation: $200,000 – $350,000 base (negotiable depending on level) + equity + benefits

Location: San Francisco (On-site)

If you’re motivated by pushing generative models beyond video into world simulation and long-horizon reasoning, we’d like to speak with you.

All applicants will receive a response.

Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 20/04/2026
Job ID: 35305

Define how large-scale AI systems for scientific discovery are actually built, trained, and run in production.

This team is building autonomous AI scientists that run full research loops — ingesting large bodies of literature, forming hypotheses, designing experiments, and producing traceable outputs already used across biotech and pharma.

The challenge isn’t just model capability. It’s building the systems that allow these models to be trained, evaluated, and deployed reliably at scale.

You’ll sit at the intersection of model training and systems — owning the infrastructure, pipelines, and experimentation platforms that make long-horizon reasoning systems possible.

This is not research in isolation. It’s building the engine that research runs on.

You’ll work closely with the wider team, translating ambiguous scientific problems into systems that can be trained, iterated on, and deployed in real-world environments.

The company comes from one of the earliest groups working seriously on AI for science, including early language agents and AI-generated biological discoveries. They’re now pushing further with systems capable of reasoning across thousands of papers and large-scale analyses, and moving toward pre-training their own models end-to-end.

The platform is already operating at scale, with tens of thousands of users and millions of queries, and is actively used in scientific workflows today.


What you’ll work on

  • Building and scaling training pipelines for large-scale LLM systems
  • Developing experimentation platforms that enable fast, reliable iteration
  • Designing data pipelines and systems for observability and reproducibility
  • Improving how training runs are orchestrated, monitored, and debugged
  • Supporting model deployment and inference for complex reasoning systems
  • Working closely with researchers to translate ideas into production systems

What they’re looking for

  • Experience building and scaling ML systems in production
  • Strong background across model training, data pipelines, and deployment
  • Experience with large-scale training or distributed systems
  • Fluency in frameworks like PyTorch, JAX, or similar
  • Strong engineering fundamentals and systems thinking
  • Ability to operate across ambiguity and own problems end-to-end

The company

  • ~$70M raised, with another round planned
  • Platform already at meaningful scale (tens of thousands of users, hundreds of millions of lines of code written by the agent)
  • Strong commercial traction 
  • Small, high-calibre team working at the intersection of AI and science

📍 San Francisco (on-site or hybrid, remote considered case by case)
💰 $250K–$400K base + equity
Levels: Senior, Staff, Principal
Roles available: ML Engineer, ML Infra Engineer, Research Engineer & Research Scientist

All applicants will receive a response.

Location: San Francisco, CA
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 16/04/2026
Job ID: 35767
Are you looking to scale GPU infrastructure up to and beyond 10,000 GPUs?

You'll help push an already high-performing team past their current operating level, using your skills and experience to scale training workloads, improve cluster reliability and utilisation, and build systems that hold up under real pressure.

Your focus will be on distributed training and GPU infrastructure, making large-scale training actually usable for researchers—not just possible.

You'll be working across frontier model training, scientific workloads, and robotics environments, so you're dealing with high-throughput systems and real-world constraints, not just controlled experiments.

You'll join a team that owns compute end-to-end—infra, systems, and operations—working closely with researchers to make training at this scale reliable.

They've raised over $500M, have real customers, and are now integrating models directly into robotics environments and beyond.

Key experience
  • Experience scaling GPU infrastructure from 2,000 to 10,000+ GPUs
  • Experience with Ray, Slurm or similar
  • Experience supporting core model training

The culture is collaborative and hands-on:
  • Strong focus on knowledge sharing and upskilling
  • Cross-team collaboration with researchers
  • 6-week cycles to allow deep focus and meaningful impact
  • A team that works hard but also likes to keep it fun

Up to $350K base + bonus + equity DOE
Remote across the US or hybrid options available in SF

All applicants will receive a response. 
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 08/04/2026
Job ID: 35635

Want to build the interface layer for an AI scientist?

You’ll join a team building autonomous AI agents designed to accelerate scientific discovery. The goal is simple: science moves too slowly, and they’re building systems that can change that.

This isn’t a typical frontend role. The product is an integrated research environment where scientists interact directly with AI models, workflows, and generated insights. Your work defines how usable that system actually is.

You’ll sit within the Platform team, working closely with researchers and product to turn complex, often messy scientific workflows into clear, intuitive interfaces.

The challenge is translating depth into clarity without losing fidelity.

You’ll be building high-performance frontend systems where data density, responsiveness, and usability all matter. Real-time interactions, dynamic visualisations, and scalable UI patterns are core to the product.

Your focus will include:

  • Building performant React applications for data-heavy workflows
  • Designing interfaces for real-time AI interactions and streaming data
  • Creating modular, scalable design systems used across the platform
  • Translating scientific and model outputs into usable visual interfaces

You’ll need strong frontend fundamentals, but more importantly, the ability to think in systems: understanding how users navigate complexity, how interfaces guide decision-making, and how performance impacts usability at scale.

There’s a strong emphasis on performance engineering. You’ll be profiling rendering behaviour, optimising asset loading, and ensuring smooth interaction across browsers and devices.

The product itself sits at the intersection of AI, biology, and research tooling. If you’ve worked on complex internal tools, data platforms, or visualisation-heavy applications, this will feel familiar, just at a deeper technical level.

You’ll likely have experience building production frontend systems with React (or similar), working with TypeScript, and handling real-time data flows such as WebSockets or GraphQL subscriptions. Experience with visualisation libraries like D3, Deck.gl or Three.js is highly relevant here.

The environment is highly collaborative. You’ll work closely with researchers to anticipate how the product should evolve, not just respond to specs.

This is an onsite role based in San Francisco, working with a team focused on building something that genuinely pushes forward how science gets done.

Salary: $175,000 – $240,000 + equity
Location: San Francisco, onsite

If you’re interested in shaping how scientists interact with AI systems, apply today.

Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 01/04/2026
Job ID: 35602

Want to build systems that actually hold up under long-running AI workloads?

Most agentic systems for science don’t fail at the model layer. They fail because the infrastructure can’t support long-horizon execution.

You’ll join a team building autonomous AI agents that run full research cycles. Ingesting thousands of papers, forming hypotheses, running experiments, and producing traceable outputs used by real scientific teams.

The challenge is making that work in production.

You’ll own the systems behind it: APIs, data pipelines, and platform architecture designed for long-running workloads, large-scale ingestion, and iterative experimentation loops. This is full-stack in scope but backend in depth, where system design decisions directly impact what the platform can do.

You’ll be working across:

  • Backend services in Python or Node, building scalable APIs (FastAPI/REST)
  • Data pipelines supporting agent execution and scientific workflows
  • Cloud infrastructure (AWS/GCP), containerisation (Docker, Kubernetes)
  • CI/CD, observability, and reliability for systems under continuous load

This isn’t a generalist full-stack role. You’ll need to understand how systems behave under heavy data and compute demands, and be comfortable making architectural trade-offs across distributed systems.

The team is small, high-calibre, and already running real workloads with revenue traction. Backed by $70M+, they’re building infrastructure that defines how AI is applied to scientific discovery.

Salary: $200,000–$350,000 + equity
Location: San Francisco (onsite)

Location: San Francisco, CA
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 30/03/2026
Job ID: 35569

Senior Applied Researcher

Want to build vision-language models that understand complex, real-world environments?

You’ll join a small, highly technical team working on foundational problems in multimodal AI, focused on training models that can interpret, reason, and act on large-scale first-person video data.

You’ll work directly with the Chief Science Officer, shaping how models are designed, trained, and evaluated. The work sits at the intersection of VLMs, long-context reasoning, and real-world deployment.

The focus is on building systems that move beyond static perception, towards temporal understanding, activity recognition, and higher-level reasoning across dynamic environments.

Your work will centre on:

  • Designing and training VLMs on large-scale video datasets
  • Developing post-training approaches including SFT, RLHF, and parameter-efficient tuning
  • Building scalable training and evaluation pipelines
  • Exploring long-context and temporal modelling
  • Designing efficient systems across edge and server-side inference
  • Defining benchmarks for spatial and behavioural understanding

You’ll bring strong experience training deep learning models, ideally transformer-based, along with hands-on work in vision, language, or multimodal systems.

Experience with large datasets, model optimisation, or deploying models into production environments will be valuable. Exposure to video data or long-context modelling is particularly relevant.

This is a team that values speed, ownership, and first-principles thinking. You’ll be working on open-ended problems with real-world impact, with the freedom to explore and define approaches.

Compensation: Highly competitive salary + equity
Location: San Francisco, onsite

If you’re interested in building multimodal systems that operate in real-world settings, and want to join a well-funded, highly skilled research team, please apply now!

All applicants will receive a response.

Location: San Francisco, CA
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 30/03/2026
Job ID: 35437

Ready to own the data pipeline powering the voice of the next generation of AI characters?

You'll be joining a well-funded startup building AI character technology, where speech is a core part of the product experience.

Think super natural conversations, handling interruptions, personality shifts and more!

You'll own the datasets that power their speech systems — from raw, messy audio through to clean, versioned training corpora that directly drive TTS and ASR model performance.

Your focus

  • Own the full data lifecycle — defining specs, auditing and curating large-scale audio and text corpora
  • Build automated quality metrics and dashboards across SNR, VAD, WER, speaker verification and safety, validated against listening tests
  • Train and deploy lightweight classifiers for noise detection, diarisation, language ID, and content moderation

What you'll bring

  • Deep experience working with speech and audio data at scale — 1M+ hours
  • Strong ML engineering skills in Python and PyTorch, including training and fine-tuning models like Whisper or Wav2Vec
  • Practical knowledge of audio processing — torchaudio, librosa, spectrograms, DSP basics
  • A solid understanding of audio quality metrics — MOS, WER, PESQ/STOI, SNR, speaker verification

Nice to have

  • Experience with Spark/Beam, Airflow, SQL or similar data engineering tools
  • Open-source contributions or publications in speech or audio ML
  • Background in denoising and enhancement, and how it affects downstream model quality

Remote, with a preference for European or overlapping timezones. Competitive compensation and equity.

Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 27/03/2026
Job ID: 34412