Apple Introduces Transcription Function in Podcasts

According to TapTechNews on June 18th, Apple released the iOS 17.4 update in March this year, and one of its main improvements is the introduction of a transcription feature for the Podcasts app.


Ben Cave, the global head of Apple Podcasts, told The Guardian in an interview that the feature had been six years in the making, with the team polishing details throughout that period in pursuit of a better user experience.

Motivation

Cave said that about 15% of Americans have some form of hearing loss, and they largely rely on closed captions when watching movies and TV shows, or even when listening to music.

TapTechNews note: Closed captions are subtitles provided with TV programs or DVDs for viewers with particular circumstances or needs, such as hearing impairments or the need to watch with the sound off. They may include descriptive text that explains the program's content.

Apple's stated goal, Cave said, is to make podcasts more accessible and immersive. The company has set a high bar for itself, aiming to give users transcripts that are accurate and easy to follow.

Development

Apple began the related research and development work in 2018, building on an indexing feature that helps users find a specific podcast by searching for a sentence they remember from an episode.

Cave said: What we did at that time was surface a line of the transcript so that users could see the context of a result when searching for something specific. Over the years we have done a few different things, and they have all converged into this [transcription] feature.

Broader Coverage

Apple will first transcribe each new episode uploaded to its platform and then expand coverage to the entire catalog, though it has not given a timeline for transcribing older episodes.

Cave said: We think this is important from an accessibility perspective, because we want people to be able to expect that transcripts will be available for all the content they want to interact with.

Related Reading:

Apple iOS/iPadOS 17.4 Officially Released: Expanded Emoji and Podcast Transcription Introduced
