
Ready to own the data pipeline powering the voice of the next generation of AI characters?

You'll be joining a well-funded startup building AI character technology, where speech is a core part of the product experience.

Think highly natural conversations: handling interruptions, personality shifts, and more!

You'll own the datasets that power their speech systems — from raw, messy audio through to clean, versioned training corpora that directly drive TTS and ASR model performance.

Your focus

  • Own the full data lifecycle — defining specs, auditing and curating large-scale audio and text corpora
  • Build automated quality metrics and dashboards across SNR, VAD, WER, speaker verification and safety, validated against listening tests
  • Train and deploy lightweight classifiers for noise detection, diarisation, language ID, and content moderation
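The automated quality metrics above (SNR, WER, and friends) are standard enough to sketch. Below is a minimal, pure-Python illustration of two of them — word error rate via edit distance and signal-to-noise ratio in dB. This is an assumption-free toy, not the company's pipeline; production systems would use batched, vectorised implementations.

```python
import math

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def snr_db(signal, noise) -> float:
    """Signal-to-noise ratio in dB from raw sample sequences."""
    p_signal = sum(x * x for x in signal) / len(signal)
    p_noise = sum(x * x for x in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)
```

In practice these per-utterance numbers would feed the dashboards mentioned above, with thresholds calibrated against the listening tests.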

What you'll bring

  • Deep experience working with speech and audio data at scale — 1M+ hours
  • Strong ML engineering skills in Python and PyTorch, including training and fine-tuning models like Whisper or Wav2Vec
  • Practical knowledge of audio processing — torchaudio, librosa, spectrograms, DSP basics
  • A solid understanding of audio quality metrics — MOS, WER, PESQ/STOI, SNR, speaker verification

Nice to have

  • Experience with Spark/Beam, Airflow, SQL or similar data engineering tools
  • Open-source contributions or publications in speech or audio ML
  • Background in denoising and enhancement, and how it affects downstream model quality

Remote, with a preference for European or overlapping timezones. Competitive compensation and equity.

Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 27/03/2026
Job ID: 34412

Looking to define ASR strategy for the next generation of social AI?

You'll be joining a well-funded social AI company building lifelike AI characters that interact naturally across voice, video, and text. Founded by a prominent tech entrepreneur, they're creating new media formats for AI-driven interaction where agents handle group conversations, interruptions, and multi-agent dynamics.

Your mission

You'll own the ASR function from day one, starting with evaluating and implementing existing solutions, then moving toward building proprietary models as the platform scales. This means hands-on work testing APIs and open-source models, followed by developing custom systems for multi-agent group conversations and social interactions.

You'll shape the technical direction, balance short-term delivery with long-term innovation, and drive individual research initiatives while collaborating on broader team objectives.

Your focus

  • Define and execute the ASR roadmap from evaluation through production deployment
  • Build and train models that handle natural conversation dynamics
  • Develop evaluation systems to measure accuracy, speed, and reliability
  • Define data requirements and create pipelines for ASR training
  • Work from low-level performance optimizations to high-level architecture decisions
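The evaluation bullet above usually boils down to tracking accuracy alongside speed. A minimal sketch of such a harness follows — `transcribe(audio) -> text` is a hypothetical stand-in for whatever API or model is under test, and exact-match is used here as a simple accuracy proxy (real systems would use WER):

```python
import time

def evaluate(transcribe, samples):
    """Score a hypothetical `transcribe(audio) -> text` callable.

    `samples` is an iterable of (audio, duration_seconds, reference_text).
    Returns an exact-match accuracy proxy and the real-time factor (RTF).
    """
    correct, total_audio_s, total_wall_s = 0, 0.0, 0.0
    for audio, duration_s, reference in samples:
        t0 = time.perf_counter()
        hypothesis = transcribe(audio)
        total_wall_s += time.perf_counter() - t0
        total_audio_s += duration_s
        correct += hypothesis == reference
    return {
        "exact_match": correct / len(samples),
        "rtf": total_wall_s / total_audio_s,  # < 1.0 means faster than real time
    }
```

The same loop extends naturally to per-condition breakdowns (noise level, accent, overlap) when the samples carry that metadata.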

What you'll bring

  • Proven track record building and deploying ASR systems at scale
  • Strong familiarity with SOTA ASR models and architectures (Whisper, Conformer, etc.)
  • Understanding of data quality assessment for speech systems

Nice to have

  • Experience leading technical initiatives or ML teams

Remote with competitive comp + stock.

Ready to define the future of social AI interactions? Apply today.

Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 05/12/2025
Job ID: 34546

Looking to tackle novel speech challenges at scale?

You'll be joining a small but mighty speech AI company building proprietary speech tech from the ground up. With a strong customer base, your research will directly impact production systems serving enterprise customers, with the opportunity to see your work deployed at scale in real-world voice applications.

They're a well-funded startup with healthy revenue streams and immediate opportunities for high-impact research.

Your research

You'll be working on breakthrough speech research that pushes the boundaries of naturalness and real-time performance. The company has achieved ultra-low latency and is now advancing toward unified speech-to-speech architectures.

You'll develop emotional expression and natural speech generation, advance multilingual support across 30+ languages, and enhance voice cloning robustness.

Your focus

  • Lead cutting-edge research in SOTA speech models (TTS, ASR, or speech-to-speech)
  • Design, execute and iterate on experiments end-to-end
  • Drive speech controllability and naturalness improvements
  • Develop evaluation methodologies for speech quality assessment
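One common building block behind the evaluation-methodology bullet is aggregating listener ratings into a mean opinion score (MOS) with an uncertainty estimate. A minimal sketch, assuming 1–5 scale ratings and a normal approximation for the interval:

```python
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score (1-5 scale) with a normal-approximation 95% CI.

    `ratings` is a sequence of per-listener scores for one system/condition.
    """
    mean = statistics.mean(ratings)
    half_width = z * statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, (mean - half_width, mean + half_width)
```

Overlapping intervals between two systems are a quick signal that more listening-test data is needed before declaring a quality win.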

What you'll bring

  • Deep understanding of cutting-edge speech models with end-to-end pipeline experience
  • Experience with large-scale model training
  • Strong background in speech model development and optimisation
  • Published work with demonstrable results in industry or academic settings

Nice to have

  • Performance optimisation experience for latency and compute efficiency
  • Experience with model fusion and unified architectures

This is a remote role, based in either the US or Europe. Competitive comp based on experience.

Location: Remote
Job type: Permanent
Emp type: Full-time
Salary type: Annual
Salary: negotiable
Job published: 23/09/2025
Job ID: 33913