Want to build speech AI that actually sounds human?

You'll be joining a well-funded speech AI startup with strong customer traction. They're building ultra-realistic voice technology that handles natural laughter, breathing, seamless language switching, and accurate pronunciation across languages and accents.

As their Speech Research Lead, you'll have the resources and real-world applications to work on frontier speech research: real-time two-way conversations with emotional awareness, novel architectures that balance speed with control, and advancing their multi-lingual capabilities.

What you'll do

  • Lead SOTA research advancing their core speech models and product capabilities

  • Oversee large-scale model training and data system development

  • Lead and grow the ML team during a critical scaling phase

What you'll bring

  • Extensive experience in speech synthesis or generative modeling across multiple modalities

  • Strong background in LLMs and modern language model architectures

  • Proven ability to take research from concept to deployed systems

  • Experience training large-scale models in production environments

Nice to have

  • Understanding of cross-lingual speech challenges and linguistic fundamentals

  • Published research in speech or generative modeling

Ideally based in San Francisco, but open to international remote candidates. Competitive compensation up to $400K base (depending on experience) plus a substantial equity package.

 

Location: San Francisco, CA
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 01/11/2025
Job ID: 34146

Looking to tackle novel speech challenges at scale?

You'll be joining a small but mighty speech AI company building proprietary speech tech from the ground up. With a strong customer base, your research will directly impact production systems serving enterprise customers, with the opportunity to see your work deployed at scale in real-world voice applications.

They're a well-funded startup with healthy revenue streams and immediate opportunities for high-impact research.

Your research

You'll be working on breakthrough speech research that pushes the boundaries of naturalness and real-time performance. The company has achieved ultra-low latency and is now advancing toward unified speech-to-speech architectures.

You'll develop emotional expression and natural speech generation, advance multilingual support across 30+ languages, and enhance voice cloning robustness.

Your focus

  • Lead cutting-edge research in SOTA speech models (TTS, ASR, or speech-to-speech)
  • Design, execute and iterate on experiments end-to-end
  • Drive speech controllability and naturalness improvements
  • Develop evaluation methodologies for speech quality assessment

What you'll bring

  • Deep understanding of cutting-edge speech models with end-to-end pipeline experience
  • Experience with large-scale model training
  • Strong background in speech model development and optimisation
  • Published work with demonstrable results in industry or academic settings

Nice to have

  • Performance optimisation experience for latency and compute efficiency
  • Experience with model fusion and unified architectures

This is a remote role, based in either the US or Europe. Competitive compensation based on experience.

Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 23/09/2025
Job ID: 33913

Ready to pioneer the speech intelligence behind the next generation of embodied AI?

Join a pioneering startup developing foundational technology for natural conversation in embodied agents. You'll advance the speech systems that power avatars with authentic behaviours, real-time expression, and conversational intelligence that handles interruptions and turn-taking just like humans.

This Lead Research Scientist role focuses on advancing real-time speech systems for interactive avatars. You'll develop full-duplex dialogue models and speech-to-speech architectures that enable natural conversational flow, interruption handling, and emotional expression.

Founded by ex-Googlers, they're building proprietary behaviour models that learn from two-way interactions, creating systems where speech timing, prosody, and contextual responses work in harmony with facial expressions and physical behaviours to drive authentic embodied intelligence.

Your focus:

  • Research & develop full-duplex speech systems with natural interruption handling
  • Develop expressive voice models with controllable prosody and timing
  • Build speech-to-speech architectures preserving identity and emotion
  • Create real-time audio generation systems for conversational avatars
  • Publish research while deploying systems in production
  • Collaborate across teams integrating speech with visual behaviour

Requirements:

  • PhD in Speech, Machine Learning, or related field
  • First-author publications at top conferences (Interspeech, ICASSP, NeurIPS, ICLR, etc.)
  • Expertise in text-to-speech, speech-to-speech models, or voice cloning
  • Large-scale training experience
  • Experience in prosody modelling or real-time audio generation

Nice to have:

  • Experience with full-duplex speech research
  • Speech-visual alignment expertise (lip sync, expressions)
  • Real-time audio deployment optimisation

Package:

  • Competitive salary: $200k-$300k base (based on experience)
  • Meaningful equity package
  • Comprehensive healthcare (90% covered)
  • Unlimited PTO
  • Fully remote work with regular team offsites
  • Life insurance and disability coverage

Location: Fully remote position, globally, with preference for Pacific Time alignment.

Ready to make AI conversations feel authentically human?

Contact Allys at Techire AI. All applicants will receive a response.

 
Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 02/07/2025
Job ID: 33482

Do you want to create emotionally expressive AI that transforms healthcare conversations?

A pioneering healthtech unicorn is building AI digital health agents designed to safely and empathetically assist patients. Their immediate focus is developing conversational AI with genuine emotional intelligence, with longer-term vision for full-duplex communication capabilities.

As the Staff Research Scientist, you'll play a key part in making this a reality, building foundational speech models that understand and respond with the human-like emotion and natural conversation that healthcare demands.

What you'll do

  • Design and develop emotionally expressive speech models for healthcare conversations, working end-to-end from research through to productionizing models
  • Build conversational AI systems that can interpret and respond with appropriate emotional intelligence
  • Work on post-training techniques to enhance speech models' conversational and emotional capabilities
  • Tackle unique challenges including response time optimization, maintaining emotional consistency, and operating in noisy healthcare environments
  • Have the opportunity to publish your groundbreaking research

What you'll bring

  • 5+ years in speech technologies or related field
  • Hands-on experience with speech-to-speech systems (highly preferred), or strong experience in Text-to-Speech, Speech LLMs, emotional/expressive speech synthesis, or similar
  • Experience training models on large speech datasets
  • Ability to implement research papers from scratch

Bonus points for

  • Experience pre-training foundation models with speech (HuBERT, Wav2Vec, or similar)
  • Multimodal experience
  • Experience with inference technologies (vLLM, CUDA)

You'll be based in the Bay Area or willing to relocate. You'll receive highly competitive compensation (up to $350K base, depending on experience) with substantial equity.

If you're excited about creating the next generation of emotionally intelligent speech AI that will revolutionise healthcare communication, click apply!

Location: Bay Area
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 16/04/2025
Job ID: 33086