In the US, audio intelligence platform AssemblyAI has raised $28m in a Series A round of funding, which it will use to further develop its AI models for automatic speech recognition, speech understanding, sentiment analysis and natural language processing.
The San Francisco-based firm offers speech-to-text APIs that are used to convert online audio and video data - from Zoom meetings, podcasts, audio-first social media platforms, enterprise conversation intelligence platforms, live streaming and more - into text. Beyond transcription, the APIs are used to moderate user-generated audio content at scale, deliver insights to customer support and contact center agents in call center AI products, serve more targeted ad campaigns into audio content, and support a number of other use cases.
Future plans include enabling the analysis of emotion in audio files, automatically identifying the start and end time of voice and video ads, translation into more than 80 languages, and intent recognition.
Funding has been led by Accel, with participation from a group of investors including Y Combinator, John and Patrick Collison (Stripe), Nat Friedman (GitHub), and Daniel Gross (Pioneer). Dylan Fox (pictured), founder and CEO of AssemblyAI, comments: 'We never would have gotten here without the amazing support of our customers, and the thousands of developers that have come to use our APIs over the years. We're excited to double down on everything we've been working on, and to rapidly accelerate our pace of product development and growth'.
Web site: www.assemblyai.com.
All articles 2006-23 written and edited by Mel Crowther and/or Nick Thomas, 2024- by Nick Thomas, unless otherwise stated.