Want to build simulated RL environments that push frontier models to their limits?

This role is about advancing the science of post-training, reinforcement learning, and scalable evaluation. Instead of static benchmarks, you’ll create dynamic simulations that probe reasoning, planning, and long-horizon behaviour — work that defines how the next generation of AI will be trained and supervised.

You’ll design new post-training algorithms (RLHF, DPO, GRPO and beyond), develop reward models that move beyond exact-match signals, and publish your findings while seeing them deployed in production systems. The work spans both core research and practical implementation, giving you the chance to shape frameworks already being adopted by industry leaders.

We’re looking for:

  • Research experience in post-training or RL methods with LLMs.

  • Strong background in transformers and evaluation frameworks.

  • Publication record at top venues (NeurIPS, ICLR, ICML, ACL, EMNLP).

  • PhD in CS/ML/NLP/RL or equivalent research experience.

Package: Up to $300k base (DOE) + meaningful equity, with comprehensive benefits, 401k, unlimited PTO, relocation support and sponsorship available. Location is San Francisco preferred, with NYC also considered.

Ready to help define how AI learns and is evaluated in simulated environments?
All applicants will receive a response.

Location: San Francisco
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 22/08/2025
Job ID: 33119

Build production systems that bring 3D AI models to life in real-world applications

Ready to bridge cutting-edge 3D computer vision research with robust, scalable production systems? This ML Engineer role focuses on deploying 3D perception models into live agentic workflows where reliability and performance are paramount.

You'll be joining a well-funded startup developing AI agents for advanced design and manufacturing. Your role centres on creating the infrastructure that makes 3D understanding truly practical - from real-time inference pipelines to comprehensive monitoring systems that ensure geometry-aware agents perform reliably in production.

This position offers the opportunity to shape how 3D AI models integrate into agent decision-making pipelines. You'll work closely with applied scientists to productionise breakthrough research whilst building robust systems that handle the unique challenges of geometric data in mission-critical applications.

Your technical focus:

  • Architect inference pipelines for 3D vision models handling diverse data types (CAD, mesh, point cloud)
  • Build monitoring systems that meaningfully evaluate model performance on real-world, messy geometric data
  • Create robust deployment infrastructure scaling across multiple 3D tasks: segmentation, classification, correspondence, and generation
  • Implement model lifecycle management supporting both discriminative and generative 3D capabilities
  • Design observability frameworks enabling continuous production assessment of 3D model performance

Your background should include:

  • 3-10+ years of industry experience as an ML Engineer or Computer Vision Engineer
  • Proven experience deploying models, especially vision or 3D models
  • Strong Python and PyTorch skills with engineering discipline around testing and performance profiling
  • Experience with observability tools and ML monitoring best practices
  • Deep understanding of challenges specific to deploying 3D models (geometric artifacts, mesh quality, robustness)

Valuable additional experience:

  • Working with CAD systems, robotics stacks, or AR/VR environments
  • Agent frameworks, planning pipelines, or LLM-integrated systems
  • 3D data evaluation methodologies and debugging tools
  • 3D visualisation tools such as WebGL, Three.js, or Blender scripting (useful but not essential)

You'll be establishing the infrastructure foundation for an entirely new capability domain, with high ownership and responsibility for defining production standards and deployment strategies.

Package includes:

  • Competitive salary: $180,000-$240,000 
  • Performance bonus up to 20%
  • Medical, dental, and vision coverage
  • 401k with up to 3% company match (after 3 months)
  • 20 vacation days, 10 sick days, and flexible working arrangements

Based in SF Bay Area or Miami, working alongside a research team that values practical impact and technical excellence.

You must have valid right to work in the US without sponsorship (US Citizenship or Green Card).

If building the systems that make breakthrough 3D AI research truly useful appeals to you, we'd love to discuss this opportunity. All applicants will receive a response.

Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 28/06/2025
Job ID: 33548

Ready to pioneer deep generative modeling for real-time video synthesis?

Join a pioneering startup developing the foundation layer for the next big AI unlock: natural video behaviour and conversational realism in video generation. This in turn will change the game for embodied agents with natural behaviours, real-time expression, and conversational intelligence that goes far beyond current avatar technology.

This Research Scientist role focuses on advancing embodied AI through groundbreaking generative modeling research. While existing solutions rely on looped animations with basic lip-sync, this company is building behaviour-driven models that power authentic, real-time interactions capable of natural conversation flow, interruption handling, and emotional expression.

Founded 18 months ago by an exceptional team where 7 out of 12 members hold AI PhDs, they're solving fundamental challenges in visual generation for embodied intelligence. Their beta platform already demonstrates sophisticated real-time video generation systems with advanced generative models creating natural facial expressions and body movements.

The company is building foundational generative technology that creates dynamic visual content from multimodal inputs, developing systems that generate realistic human-like expressions and movements. Their research sits at the intersection of computer vision, deep generative modeling, and real-time video synthesis.

Your focus:

  • Conduct cutting-edge research in deep generative modeling for vision and video generation
  • Develop sophisticated generative models for facial expressions, body dynamics, and full avatar synthesis
  • Create novel architectures using diffusion models and flow matching for video generation
  • Build real-time generative pipelines for dynamic visual content creation
  • Advance state-of-the-art techniques in multimodal generative modeling
  • Collaborate with engineering to productionise generative models into real-time systems
  • Publish findings at top-tier conferences while deploying in real-world applications

Technical challenges: You'll work with cutting-edge techniques including diffusion models, flow matching, and advanced generative architectures for video synthesis. The focus is on creating high-quality, temporally consistent video generation that can power natural embodied agents, emphasising real-time performance and visual fidelity.

Requirements:

  • PhD in Computer Vision, Machine Learning, or related field
  • Strong publication record at top conferences (CVPR, NeurIPS, ICCV, ECCV, ICML, ICLR, SIGGRAPH)
  • Publications in video generation or embodied agent/avatar research within the past 2 years (essential)
  • Expertise in flow matching and diffusion models
  • Experience with one or more: dyadic conversational avatars, behaviour modelling via LLMs, real-time multimodal generation
  • PyTorch proficiency and large-scale training experience

Nice to have:

  • Industry experience deploying generative models in real-time applications
  • Background in 3D generation, neural rendering, or Gaussian splatting
  • Experience with video generation frameworks and temporal consistency methods

Environment: You'll join a distributed team working primarily in Pacific Time zones, collaborating with specialists in generative modeling, computer vision, and video synthesis. The culture emphasises high ownership, velocity with purpose, and collaborative problem-solving in a fast-moving research environment.

Package:

  • Competitive salary: $200k-$300k base (flexible based on experience)
  • Meaningful equity package
  • Comprehensive healthcare (90% covered)
  • Unlimited PTO
  • Fully remote work with regular team offsites
  • Life and disability coverage

Location: Fully remote position with preference for Pacific Time alignment.

If you're excited about conducting pioneering research in deep generative modeling for vision while shaping the future of embodied agents, this offers an exceptional opportunity to work on genuinely transformative technology.

Ready to help create the next generation of visual AI?

Contact Marc Powell at Techire AI. All applicants will receive a response.

Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 18/06/2025
Job ID: 33449