
I used OpenAI’s new tech to transcribe audio right on my laptop

Illustration: blue microphones on a teal background. The benefits of AI without the drawbacks of the cloud. | Kristen Radtke / The Verge; Getty Images

OpenAI, the company behind image-generation and meme-spawning program DALL-E and the powerful text autocomplete engine GPT-3, has launched a new, open-source neural network meant to transcribe audio into written text (via TechCrunch). It’s called Whisper, and the company says it “approaches human level robustness and accuracy on English speech recognition” and that it can also automatically recognize, transcribe, and translate other languages like Spanish, Italian, and Japanese.

As someone who’s constantly recording and transcribing interviews, I was immediately hyped about this news — I thought I’d be able to write my own app to securely transcribe audio right from my computer. While cloud-based services like Otter.ai and Trint work for most things and are relatively secure, there are just some interviews where I, or my sources, would feel more comfortable if the audio file stayed off the internet.

Using it turned out to be even easier than I’d imagined; I already have Python and various developer tools set up on my computer, so installing Whisper was as easy as running a single Terminal command. Within 15 minutes, I was able to use Whisper to transcribe a test audio clip that I’d recorded. For someone relatively tech-savvy who didn’t already have Python, FFmpeg, Xcode, and Homebrew set up, it’d probably take closer to an hour or two. Someone is already working on making the process much simpler and more user-friendly, though, which we’ll talk about in just a second.
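
For the curious, here’s a minimal sketch of what that looks like, assuming Python and FFmpeg are already set up; the install command and model names come from OpenAI’s Whisper repository, and the audio file name is just a placeholder.

    # Rough sketch of the setup, assuming Python and FFmpeg are already installed.
    # Install (from OpenAI's Whisper repo): pip install git+https://github.com/openai/whisper.git
    # The roughly equivalent Terminal one-liner: whisper interview.mp3 --model base
    import whisper

    model = whisper.load_model("base")          # models range from "tiny" to "large"
    result = model.transcribe("interview.mp3")  # placeholder file name
    print(result["text"])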

Command-line apps obviously aren’t for everyone, but for something that’s doing a relatively complex job, Whisper’s very easy to use.

While OpenAI definitely saw this use case as a possibility, it’s pretty clear the company is mainly targeting researchers and developers with this release. In the blog post announcing Whisper, the team said its code could “serve as a foundation for building useful applications and for further research on robust speech processing” and that it hopes “Whisper’s high accuracy and ease of use will allow developers to add voice interfaces to a much wider set of applications.” This approach is still notable, however — the company has limited access to its most popular machine-learning projects like DALL-E or GPT-3, citing a desire to “learn more about real-world use and continue to iterate on our safety systems.”

Image: a text file with Whisper’s transcribed lyrics for Yung Gravy’s song “Betty (Get Money),” containing many inaccuracies. The text files Whisper produces aren’t exactly the easiest to read if you’re using them to write an article, either.

There’s also the fact that it’s not exactly a user-friendly process to install Whisper for most people. However, journalist Peter Sterne has teamed up with GitHub developer advocate Christina Warren to try and fix that, announcing that they’re creating a “free, secure, and easy-to-use transcription app for journalists” based on Whisper’s machine learning model. I spoke to Sterne, and he said he decided the program, dubbed Stage Whisper, should exist after he ran some interviews through Whisper and determined that it was “the best transcription I’d ever used, with the exception of human transcribers.”

I compared a transcription generated by Whisper to what Otter.ai and Trint put out for the same file, and the quality was broadly comparable. There were enough errors in all of them that I would never just copy and paste quotes from them into an article without double-checking the audio (which is, of course, best practice anyway, no matter what service you’re using). But Whisper’s version would absolutely do the job for me; I can search through it to find the sections I need and then just double-check those manually. In theory, Stage Whisper should perform exactly the same since it’ll be using the same model, just with a GUI wrapped around it.
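
The model’s output also includes timestamped segments, which is what makes that search-and-verify workflow practical. Here’s a rough sketch of dumping them (again, the file name and model size are placeholders):

    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("interview.mp3")  # placeholder file name

    # Each segment carries start/end times, so you can jump straight to the
    # right spot in the recording when double-checking a quote.
    for seg in result["segments"]:
        print(f"[{seg['start']:6.1f}s - {seg['end']:6.1f}s] {seg['text'].strip()}")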

Sterne admitted that tech from Apple and Google could make Stage Whisper obsolete within a few years — the Pixel’s voice recorder app has been able to do offline transcriptions for years, a version of that feature is starting to roll out to some other Android devices, and Apple has offline dictation built into iOS (though there’s currently not a good way to actually transcribe audio files with it). “But we can’t wait that long,” Sterne said. “Journalists like us need good auto-transcription apps today.” He hopes to have a bare-bones version of the Whisper-based app ready in two weeks.

To be clear, Whisper probably won’t make cloud-based services like Otter.ai and Trint obsolete, no matter how easy it is to use. For one, OpenAI’s model is missing one of the biggest features of traditional transcription services: the ability to label who said what. Sterne said Stage Whisper probably wouldn’t support this feature either: “we’re not developing our own machine learning model.”

And while you’re getting the benefits of local processing, you’re also getting the drawbacks. The main one is that your laptop is almost certainly significantly less powerful than the computers a professional transcription service is using. For example, I fed the audio from a 24-minute-long interview into Whisper, running on my M1 MacBook Pro; it took around 52 minutes to transcribe the whole file. (Yes, I did make sure it was using the Apple Silicon version of Python instead of the Intel one.) Otter spat out a transcript in less than eight minutes.
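
If you want to gauge how your own machine fares, a quick timing loop like the sketch below will show the speed-versus-accuracy trade-off between Whisper’s smaller and larger checkpoints (the file name is a placeholder):

    import time
    import whisper

    AUDIO = "interview.mp3"  # placeholder file name

    # Smaller checkpoints run several times faster on a laptop, at some cost in accuracy.
    for size in ("tiny", "base", "small"):
        model = whisper.load_model(size)
        start = time.time()
        model.transcribe(AUDIO)
        print(f"{size}: {time.time() - start:.0f} seconds")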

OpenAI’s tech does have one big advantage, though — price. The cloud-based subscription services will almost certainly cost you money if you’re using them professionally (Otter has a free tier, but upcoming changes are going to make it less useful for people who are transcribing things frequently), and the transcription features built into platforms like Microsoft Word or the Pixel require you to pay for separate software or hardware. Stage Whisper — and Whisper itself — is free and can run on the computer you already have.

Again, OpenAI has higher hopes for Whisper than serving as the basis for a secure transcription app — and I’m very excited about what researchers end up doing with it or what they’ll learn by looking at the machine learning model, which was trained on “680,000 hours of multilingual and multitask supervised data collected from the web.” But the fact that it also happens to have a real, practical use today makes it all the more exciting.



Source: The Verge
