Your search has found 4 jobs

Build the 3D perception that gives AI agents real spatial intelligence.

How do AI systems truly see and reason about 3D geometry? This Applied Scientist role puts you at the centre of that challenge — developing models that bridge the physical world and intelligent reasoning systems.

You’ll join a well-funded startup building AI agents for advanced design and engineering workflows — across manufacturing, aerospace, and medtech. Your work will enable agents to understand CAD data, meshes, and point clouds deeply enough to plan, analyse, and make autonomous decisions.

This is a rare opportunity to establish the 3D foundation within the research team. You’ll define evaluation strategies, model objectives, and technical direction — building models that become the perception backbone for intelligent agents.

What you’ll do:
• Develop models that learn transferable 3D representations across CAD, mesh, and point cloud data
• Handle messy, lossy, real-world data — not just clean synthetic geometry
• Scale training across segmentation, classification, correspondence, and eventually generation
• Design robust evaluation pipelines for continuous performance monitoring
• Work toward a unified 3D foundation model supporting both discriminative and generative tasks

You’ll bring:
• Deep expertise in 3D computer vision (PhD or equivalent experience)
• Strong knowledge of modern 3D architectures (PointNet++, MeshCNN, Gaussian Splatting, diffusion models, VLMs)
• Proven ability to train large-scale models in PyTorch
• Strong applied research instincts — turning papers into working systems
• Experience with multimodal or vision-language models

Bonus points:
• Background with CAD data or industrial design workflows
• Experience in robotics, autonomous driving, or AR/VR 3D perception
• Familiarity with SLAM, pose estimation, or differentiable rendering

You’ll join a small, research-driven team with full autonomy and major compute access — free to explore foundational methods while delivering practical impact.

Compensation & location:
• Base salary: $200K–$300K (negotiable by level)
• Up to 20% bonus + stock
• Full medical, dental, and vision coverage
• 401k (3% match) and 20+ vacation days

Based in the SF Bay Area (currently remote, moving hybrid soon).
Applicants must hold valid US work authorisation (US Citizenship or Green Card).

If you’re excited about building the 3D understanding that will power the next generation of intelligent agents — we’d love to hear from you.
All applicants will receive a response.

Location: United States
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 07/01/2026
Job ID: 33515

Want to build the large-scale RL environments frontier labs use to train agents that can truly reason and act?

This team are creating complex reinforcement learning environments — simulations where advanced agents learn to plan, adapt, and solve multi-step problems that stretch beyond standard benchmarks. The focus isn’t on training the models themselves, but on building the worlds that make meaningful learning and evaluation possible — the foundation for more capable, aligned systems.

You’ll work end-to-end across environment design, reward dynamics, and scalable simulation — developing the feedback loops that define what “good” looks like for intelligent behaviour. It’s open-ended, research-driven work where the task definition, data, and reward structure are often the hardest and most important problems to solve.

You’ll collaborate closely with researchers tackling unsolved challenges in reinforcement learning and agent behaviour, shaping experiments, scaling infrastructure, and refining how agents learn in the loop.

It suits someone with strong ML and RL experience, deep intuition for agent dynamics, and the curiosity to explore problems that don’t come with clear instructions.

On-site in San Francisco. Compensation up to $300K base (negotiable, depending on experience) plus equity.

If you want to help build the environments that teach the next generation of AI systems how to think, act, and adapt — we’d love to hear from you.

All applicants will receive a response.

Location: San Francisco, CA
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 06/01/2026
Job ID: 34645

Are you the kind of engineer who enjoys building complex systems that help models learn, not by training them directly, but by shaping the worlds they inhabit?

This team builds large-scale environments and benchmarks that frontier AI labs use to test and steer their models. Their goal is to make reinforcement learning measurable, creating rich, hyperrealistic simulations where agents can reason, act, and be safely evaluated.

You’ll work at the intersection of software engineering, reinforcement learning, and experimental research, designing the frameworks and pipelines that let agentic AI systems act, learn, and improve through interaction, not static data.

You'll Bring

  • Strong Python and software engineering fundamentals, with an enjoyment of building ML infrastructure.

  • Experience in reinforcement learning: reward design, environment dynamics, and evaluation loops.

  • Experience with browser/API simulations (Playwright, Selenium) or distributed compute.

  • Experience with open-ended problem spaces and a desire to shape the tools driving safe AGI progress.

It’s a technically deep team of ML engineers and researchers from leading labs and tech companies, developing the simulation and evaluation backbone for next-generation agents.

Compensation: $200,000–$250,000 base + equity
Location: San Francisco (on-site, relocation supported)

All applicants will receive a response.

Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 18/11/2025
Job ID: 34513

Build the foundational models that will give AI agents true 3D understanding

Want to solve the fundamental challenge of how AI systems perceive and reason about 3D geometry?

This Lead Applied Scientist role puts you at the forefront of creating perception capabilities for the next generation of agentic AI systems. You'll lead a team building discriminative and generative models that plug into agentic workflows, solving complex perception challenges for industrial applications.

You'll be joining a well-funded startup developing AI agents for advanced design and manufacturing workflows. Your work will bridge the gap between the physical world and intelligent reasoning systems, creating models that understand CAD data, meshes, and point clouds at a level that enables autonomous decision-making.

This role offers the opportunity to lead a team hands-on, building 3D computer vision capabilities from the ground up. You'll be establishing an entirely new domain within the research team, with significant autonomy to define evaluation strategies, model objectives, and technical direction. Your models will form the perception backbone that enables agents to truly understand and manipulate the 3D world.

Your technical challenges:

  • Build models that understand diverse 3D data types (CAD, mesh, point cloud) and learn transferable representations across formats
  • Handle messy, lossy, or incomplete real-world data, moving beyond clean synthetic geometry to tackle industrial reality
  • Scale training across multiple 3D tasks: segmentation, classification, correspondence, and eventually generation
  • Create evaluation pipelines that meaningfully assess model performance and enable continuous production monitoring
  • Work toward a foundational 3D model supporting both discriminative and generative tasks, integrated into broader agentic AI architecture

Your expertise should include:

  • Deep specialisation in 3D computer vision (ideally including a PhD in Computer Vision)
  • Strong knowledge of modern 3D architectures (PointNet++, MeshCNN, 3D Gaussian Splatting, diffusion models, VLMs)
  • Proven ability to train large-scale deep learning models with PyTorch
  • Solid applied research skills: you can implement novel architectures from papers and make them work in practice
  • Experience with multimodal or vision-language model development

Nice to have:

  • Background working with CAD data or industrial design workflows
  • Experience in adjacent domains such as robotics, autonomous driving, or AR/VR with a 3D perception focus
  • Familiarity with SLAM, pose estimation, or differentiable rendering

You'll join a research team that values ownership and rapid iteration, with the resources to pursue ambitious technical goals. The company provides abundant compute resources and the freedom to explore foundational approaches whilst ensuring practical impact.

Package includes:

  • Base salary: $300,000
  • Performance bonus up to 20%
  • Medical, dental, and vision coverage
  • 401k with up to 3% company match
  • 20+ vacation days

You'll be based in the SF Bay Area or Miami, joining a collaborative team environment that encourages innovation and technical excellence.

You must have valid right to work in the US without sponsorship (US Citizenship or Green Card).

If you're excited about creating the 3D perception capabilities that will power the next generation of intelligent agents, we'd love to hear from you.

All applicants will receive a response.

Location: United States
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 15/09/2025
Job ID: 33847