
Warner Music Launches A Podcast Network + 3 more stories for April 29, 2022

The Download

Season 0 • Episode 19

ICYMI: Warner Music launches a podcast network, Spotify weathers the storm, and personnel changes at Edison Research.

Warner Music Group is dipping its toes into podcasting with its first network: Interval Presents. The new network’s slate promises a variety of content led by popular musicians and celebrities who work with WMG.

“The initiative makes WMG the first major music label to follow Sony Music’s lead; Sony entered the podcast arena five years ago, in May 2017.”

WMG Senior VP of Digital Strategy & Business Development Allan Coye has stepped into the role of General Manager of content for Interval Presents. CDO and EVP of Business Development Oana Ruxandra set the tone for what Interval Presents intends to accomplish.

She says, “There’s a hunger for more inclusive and authentic podcast content and, with Allan leading the charge, we’re thrilled to launch an audio platform that will connect with this growing audience and spotlight a breadth of voices and perspectives.”

While this might initially look like just another company jumping into the field of celebrity podcasts, that alone is enough to help grow the industry. With more celebrity-hosted podcasts comes a higher chance of graduating those who only listen to music into full-fledged podcast listeners who seek out content beyond their initial introduction, be it a Jason Derulo-hosted fiction podcast or a Lupita Nyong’o series on the African diaspora.

This week Spotify’s Q1 numbers went public and quickly became the subject of much discussion. On Wednesday, Bloomberg’s Ashley Carman published “Spotify Tumbles as Investors Question Podcast Investments.”

“Spotify Technology SA has spent more than a billion dollars in an effort to become the No. 1 name in podcasting, but investors’ patience is wearing thin on how much that will cost.”

Carman’s article paints a cloudy sky for the big green dot: investors are getting antsy about the amount of money invested in podcasting for long-term growth over short-term returns, including a gross margin of 25.2% that falls short of the 30 to 40% target. That said, both paid subscriptions and unpaid ad-supported users are up, despite locking out Russian users and the much-publicized Joe Rogan backlash. Sarah Perez, writing on the same subject for TechCrunch this Wednesday:

“Despite losing 1.5 million users in Russia, Spotify’s premium subscribers grew 15% year-over-year in the first quarter to reach 182 million, largely in line with analyst estimates. Ad-supported users, meanwhile, grew 21% to reach 252 million.”

The #deletespotify movement, sparked by a transphobic conversation in Rogan’s latest Jordan Peterson interview, his history of COVID-19 disinformation, and a compilation of him saying a racial slur, led musicians and podcasters alike to pull their content from Spotify or threaten to cancel contracts. As Sarah Perez reports:

“But app store data at the time indicated rival streaming apps were not getting a boost from this latest PR headache, as Spotify’s app had continued to see millions of weekly downloads — a significantly larger figure than its nearest rivals — even amid the #deletespotify campaign on social media.”

That lack of attention to rival apps likely stings especially hard for Neil Young, a figurehead of the Rogan backlash who pulled all of his music from Spotify in protest of Rogan’s COVID disinformation. Young, a vocal critic of low-quality MP3 streaming on services like Spotify, also happened to be releasing high-quality versions of his discography on Amazon Music shortly after the much-publicized stunt.

As with all things, Spotify’s growth remains a complicated beast. Subscribers are up and stock value is down, all while the company has successfully weathered a weeks-long PR storm.

Last Thursday Spotify published a post on its official blog announcing the company’s big entrance into video podcasting.

Quoting the article, “Last fall, Spotify began activating Video Podcasts for creators on a limited basis. Since then, we’ve found that podcasters love having the option to accompany their audio with visual components, and fans love having the opportunity to more deeply connect with the content.”

As of Thursday, creators in the US, Canada, New Zealand, Australia, and the UK gained access to the feature, along with a handful of new tools to ease the transition for video podcasters with backlogs. The new system requires a podcast to be hosted on Spotify’s own service, Anchor, meaning any existing video podcaster interested in trying the feature will either need to make a Spotify-only spinoff feed or transfer wholesale from their current host.

Once integrated into Spotify, video podcasts appear to function identically to watching a video podcast on YouTube, and those who prefer pure audio can leave the app or lock their phone to background the video.

Video in podcasting challenges an open ecosystem of creators to think of themselves as medium-agnostic while also pushing them into siloed solutions. Podcast-first creators exploring video as a channel is powerful, even if the current options dead-end into proprietary platforms, such as Spotify’s requirement that a show be hosted on its own service. Anyone currently producing video alongside their podcast has to weigh the pros and cons of porting everything into Spotify’s silo purely to gain one more place to upload the same video content already going up on YouTube and social media.

There’s promise in the concept of podcasts-with-video, but current offerings are lacking as they all appear to exist to push an open podcasting world into producing siloed content.

And finally, while we don’t often cover personnel changes here on The Download, this one is important enough to mention. Tom Webster announced just today that he is leaving his position with Edison Research, but both Tom and Edison will still be with us in the podcasting industry. As Tom says in his newsletter, I Hear Things:

“My work with Edison is far from over, and we have established an agreement to partner on many things in the future.”

So what will Tom be doing with his time? That’s not been announced just yet, but again quoting from today’s newsletter:

“I want to continue to work to establish a podcast industry: a place where established networks and independent podcasters alike have fair access to information, revenue, and opportunity. I think there are some structural issues in podcasting, and some information arbitrage as well. I want to work on both of these issues, and help to create the sandbox I wish to continue to play in for years to come.

I’m excited about what is next, and I’ll have more to say on that in the next edition of I Hear Things, which isn’t going away, by the way. Just as I am doubling down on podcasting, I am also going to be evolving I Hear Things into something very exciting, broad-reaching, and ultimately useful for podcasters of every stripe.”

The podcast industry might be grateful for everything Tom has done at Edison Research to grow the platform, but I'm personally grateful for everything Tom has done for me. See, what you may not know is that I have worked closely with Tom for five years at Edison Research. Now, he's said before that he wishes he could have been a better mentor, but to him I say: you did an incredible job. Clearly, your wisdom is invaluable and I've absorbed a lot, but it is your confidence in my abilities that has allowed me to face challenges I didn't think I was capable of facing. Suggesting I take the lead on presenting research for the first time, or asking for my advice as if I were the expert, served as ammo to fight off my imposter syndrome. As you did for much of the podcast industry, you opened doors for me to bring my own passion projects to life, my own research on Latino and Black podcast audiences. You helped me evolve from a project coordinator to a Director of Research, and listen to me now: the host of a podcast. I don't think there's a better way to say that I'm forever grateful than on audio that will forever live in the world you've helped build. Thank you for everything.

Transcript

00:00:03
Bryan Barletta: What does it mean for publishers to produce transcripts that are actually helpful and compliant? That's what we're talking about on this week's episode of Sounds Profitable: Adtech Applied, with me, Bryan Barletta.

00:00:14
Arielle Nissenblatt: And me, Arielle Nissenblatt.

00:00:15
Bryan Barletta: Special thanks to our sponsors for making Sounds Profitable possible. Check them out by going to soundsprofitable.com and clicking on their logos in the article.

00:00:23
Arielle Nissenblatt: Bryan, welcome to the show. How are you doing?

00:00:26
Bryan Barletta: Oh, yeah. Thanks for having me.

00:00:26
Arielle Nissenblatt: Oh, yeah.

00:00:26
Bryan Barletta: Good to be here.

00:00:29
Arielle Nissenblatt: Well, we have a lot to discuss today. It's all about transcripts and accessibility with Ma'ayan Plaut of 3Play Media. Tell me, how did you first get in touch with 3Play?

00:00:39
Bryan Barletta: Well, I met Ma'ayan on Twitter. We had interacted a few times before, and I didn't realize that Ma'ayan was over at 3Play. And 3Play actually recently came on as a sponsor at Sounds Profitable, and we were digging into the concept of transcripts and the value there. And I was trying to learn a little bit more from them about how transcripts and all that are handled outside of podcasting because I think the truth is that we make a lot of assumptions with podcasting that we're the start and end of what we need to do. But realistically, we can learn so much from the other industries that come before us. I think transcript's an easy one. I mean, we're already experiencing transcripts on TV and streaming video, and even live content with video has transcripts in most situations now. So they were the first people that I really got to sit down and learn a lot from. And Ma'ayan was one of the people leading that conversation. And we really hit it off, so I asked her to come on and talk through it a little bit more with me.

00:01:40
Arielle Nissenblatt: I want to read a tweet that Ma'ayan put out recently. She wrote, "Podcast transcripts need non speech elements in order to be accessible to deaf and hard of hearing audiences. What constitutes a non speech element? And how do you know what's important to include in your transcripts?" And then she presented a mini thread on making podcasting more accessible, so we're going to link to that in the show notes of this episode because even just that anchor tweet introduced some concepts to me that I was not familiar with before, the concept of non speech elements to include in your transcripts. I mean, it's something that I'd seen before, when you see in parentheses, somebody entering the room, or maybe laughing, or something like that. It's things that I had seen, that I'd read, but I didn't consider. And so I really want folks to be able to experience all of the knowledge that Ma'ayan Plaut was able to bring to this conversation, as well as to the Twitter space in aiding us better understanding this conversation.

00:02:39
Bryan Barletta: Yeah. I think she has a very unique perspective that is going to help us open our eyes because everything that you just talked about when you're reading a transcript as a separate file that doesn't match up directly with the audio, hearing that someone, or reading rather, that someone entered the room doesn't necessarily resonate as well because it's not tracked. It's not displayed by the player in a way that you can read it at the same time that it's happening in time, in tempo. So that video mindset is really where we need to get to, while we are as an industry just struggling to provide them as a base response when someone says, "Can I have the episode?" And it says, "Oh, well, here is the transcript as well." So these people are experts at it, and it was a really great conversation, and I'm excited for all of you to get a chance to listen to it.
So transcription is such a big part of video as well as podcasting. And video has probably a longer history with it than podcasting. So when we say transcription, what do we actually mean?

00:03:44
Ma'ayan Plaut: So that is an excellent place for us to start because when I say the word transcripts to podcasters, they usually immediately jump to the process production piece of things and where transcripts fit into that. And for the most part, what they're talking about is turning audio into words that will help them in the production process, usually the editing process. It takes a long time to listen to a lot of audio. It is easier to scan a lot of words. And these rough transcripts are mainly for people to improve their interview and narrative podcasts. That is a super narrow way to think about transcripts and actually not the way in which I and a lot of us in the accessibility space are thinking about podcast transcripts. And it's mainly the one where transcripts are a means of podcast consumption for a podcast audience. And that actually has a very, very specific definition and also purpose.
So a transcript is the way that deaf and hard of hearing audiences will experience your podcast. And you want them to experience it as fully as a hearing person who's listening to your podcast, and that is more than just getting all of the words that someone is saying down on paper. That's only a portion of what's going on. It also has to include all of those non speech elements that really aid in somebody's comprehension of the content. All those non speech elements, that's sound effects, audience reaction, music, the way in which somebody might be saying something, and also who's saying it as well. So speaker identification is actually a really big, important component of an accessible transcript and that helps with comprehension probably more than anything else.
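
For illustration, here is a minimal sketch of what speaker identification and non-speech elements can look like in a time-synced transcript, written out as a WebVTT caption file with <v> voice tags. The cue times and wording are hypothetical, and WebVTT is just one common output format, not something prescribed in this conversation.

```python
from pathlib import Path

# Hypothetical cues: (start, end, speaker, text). A speaker of None marks a
# non-speech element, written in brackets per common captioning practice.
CUES = [
    ("00:00:03.000", "00:00:08.500", "Bryan Barletta",
     "What does it mean for publishers to produce transcripts that are actually helpful?"),
    ("00:00:08.500", "00:00:10.000", None, "[upbeat theme music]"),
    ("00:00:14.000", "00:00:15.500", "Arielle Nissenblatt", "And me, Arielle Nissenblatt."),
]

def write_vtt(cues, path: Path) -> None:
    """Write cues as a WebVTT file, using <v> voice tags for speaker identification."""
    lines = ["WEBVTT", ""]
    for start, end, speaker, text in cues:
        lines.append(f"{start} --> {end}")
        lines.append(f"<v {speaker}>{text}" if speaker else text)
        lines.append("")  # a blank line terminates each cue
    path.write_text("\n".join(lines), encoding="utf-8")

write_vtt(CUES, Path("episode19.vtt"))
```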

00:05:18
Bryan Barletta: Yeah. The speaker identification thing is really interesting because I have a great surround sound system. I really like entertainment stuff, so I put a lot of effort into it. But I also have very young kids, and it feels like nothing's mixed well anyways on TV, so I can either hear the dialogue and wake up my kids with the explosions, or the explosions can be normal volume and I can't hear the dialogue. So I do everything with subtitles now, which has also made going to the movies really uncomfortable for me lately. Super getting off track there, but it feels like a lot of them don't even do speaker identification.

00:05:55
Ma'ayan Plaut: Yeah. So within the movie world, this is also one of the reasons I prefer streaming video and not to mention, COVID, most of my movie media consumption has been at home on a screen, on my own screen with my own controls for two plus years now. I love subtitles because it makes it very easy for me to know who is speaking and what's going on, which especially when there's a lot of sound, especially when there's a lot of back and forth really quickly, every single media producer has a slightly different standard about how they want someone identified, whether it's by name, or speaker one, or whatever. And I love that because it really helps me connect the dots. I'm also not deaf or hard of hearing, and if that is the way in which you are experiencing audio, you actually need to know.
And it makes a really big difference if somebody is off screen, to know who is speaking. And if it's important to know who is speaking off screen, to me, the parallel is pretty clear for podcasting as well. Everybody is off screen, so how are you going to know who is going to say what? There's two of us on this podcast right now. Who is saying what?

00:06:57
Bryan Barletta: Yeah. No, this is all really exciting because I think that everybody listening here does understand the value of making content more accessible for more people, but when we think about it, this is about compliance too. Right? This is about making sure that all of these people can consume all forms of media. And when we think about the quick AI transcriptions, which have a good place, it is a good starting place, a human can go in and improve it. It can get you some of the way and reduce the manual effort if you need to do that internally. But it's not the end answer because this isn't an SEO tool. This isn't something to copy the output and just slam it on a website. It's not to hope that it can tell which one of us is speaking right now and name that appropriately. We need to do more to make sure that we're following the laws and regulations on this. Right?

00:07:46
Ma'ayan Plaut: Absolutely. Absolutely. And as someone with a name that is very difficult for a computer system to figure out how do you actually spell Ma'ayan Plaut, I would love for a human to actually take a look at that and make sure that when I'm speaking, my name is actually spelled correctly as well.

00:08:00
Bryan Barletta: Yeah, yeah. That's really important. So I touched on a little bit there, we've had some lawsuits in podcasting about transcription. But I've got to assume that the video industry, like everything that we're dealing with in podcasting, this isn't net new. And I think the podcast industry struggles constantly from the fact that they believe sometimes like, "Oh, man. It's the first time it's happened." No, we can learn from every industry before us. So we've had lawsuits. They're not quite settled yet. We're not really clear where to go. There's still no standards in podcasting. What can you tell us about what we can learn from the video industry and apply to podcasting today, instead of just expecting that we need to come up with it?

00:08:39
Ma'ayan Plaut: Yeah. So there has been a ton of legislation around video broadcasts and digital media accessibility. And it's been over the last several decades, and since podcasting is just now seeing it, we don't know where it's going. But as you said, we can sort of look back at what's happened so far to know what's going to happen in the future. So there's two really big accessibility laws in the United States that apply to just accessibility broadly, and there are sections that talk about audio accessibility. So the Rehabilitation Act of 1973, that one has two sections that talk about audio accessibility, section 504, section 508. Section 508 is the one that actually talks about federal communication and information technology, and the fact that those have to be made accessible. And the most recent refresh of that particular section talks about the guidelines, the WCAG guidelines, I'll get to that in a moment, the web content accessibility guidelines, and those have specific requirements for audio only things.
The second major accessibility law is the Americans With Disabilities Act. It is entirely about just making sure that people can access things. And there's two parts that talk about media accessibility, title two is talking about how public entities deal with this, and then title three is very broad. It talks about places of public accommodation, and that also includes private organizations that provide some sort of public thing. So usually, it's applied to things like a doctor's office, a library, a hotel, things like that. But it actually becomes really interesting because that can also become something that applies to internet only businesses, and it's where places like Netflix and Amazon have sort of fallen under attack about whether or not they need to provide accommodations for their audiences as well. So with Netflix, they were specifically sued around closed captioning and audio description. And in both of those specific cases, the outcome was that they had to have accurate captions for their streaming shows, and they also needed to provide audio description for all of their original content as well.
So if we look at the parallel, how might this play out in podcasting? We are providing things in a public arena. We are making sure that things are available to the public. It is open access. People can get it anywhere. We could see a similar thing play out as well. There are two other just small things that don't have very immediate parallels for audio, but we see them in the broader media and broadcast space and they could add audio in the future. The 21st Century Communications and Video Accessibility Act, the CVAA, and then of course, the FCC, which is about captioning quality standards in broadcast. So there's a ton of press on it for video, especially streaming video. And I would assume that we would see streaming audio following suit as well pretty soon.

00:11:25
Bryan Barletta: Now does that mean that streaming audio separate from podcasting does not already adhere to all of this?

00:11:31
Ma'ayan Plaut: As far as I know, no, because I don't know that people are paying attention to it in this way. But if it is a podcast from a government organization, if it is a podcast from a higher education institution, or a place that provides ... I don't know. If they're doing video already and they're providing captions because they've been sued in the past, I would hope that they're also following suit and doing so with podcasting. But they might not have gone as far or as deep as they might need to yet to say, "Oh, yes. Our audio only content also needs to be accessible in this way."

00:11:59
Bryan Barletta: Yeah. Interesting. And in podcasting, I wrote a little bit about how right now some people pass it through the RSS feed, some people put it in their episode description and link out to somewhere else. There's nobody currently putting it in the ID3 tag, which would allow it to happen in real time, so a player could respond to it and share it, so it's visually there lined up with the content to take into account for dynamic ad insertion. Is there a clear standard in video that everybody can just ... If I decided I was going to go compete with Netflix tomorrow, is there an organization I can look to that says, "Here's exactly what you have to follow because this is what Netflix and streaming video solutions have adhered to, and this is the framework of the video industry for that"?
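
One way a transcript can already travel "through the RSS feed," as Bryan describes, is the Podcasting 2.0 <podcast:transcript> tag. Below is a minimal sketch of attaching that tag to a feed item with Python's standard library; the feed path and transcript URL are hypothetical, and a hosting platform that supports the namespace would normally generate this for you.

```python
import xml.etree.ElementTree as ET

# Namespace that defines <podcast:transcript> (the Podcasting 2.0 namespace).
PODCAST_NS = "https://podcastindex.org/namespace/1.0"
ET.register_namespace("podcast", PODCAST_NS)

def add_transcript_tag(item: ET.Element, url: str, mime_type: str = "text/vtt") -> None:
    """Attach a <podcast:transcript> element to an RSS <item> element."""
    transcript = ET.SubElement(item, f"{{{PODCAST_NS}}}transcript")
    transcript.set("url", url)
    transcript.set("type", mime_type)
    transcript.set("rel", "captions")  # marks the file as time-synced captions

# Hypothetical usage against an existing feed on disk:
# tree = ET.parse("feed.xml")
# item = tree.getroot().find("channel/item")
# add_transcript_tag(item, "https://example.com/ep19/captions.vtt")
# tree.write("feed.xml", xml_declaration=True, encoding="utf-8")
```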

00:12:41
Ma'ayan Plaut: Yes and no. So the big, big picture is that the W3C, the World Wide Web Consortium, they're the ones that provide these recommendations about how content should be made accessible. Those guidelines are the WCAG, the web content accessibility guidelines. And there are three levels of compliance that people need to aim for. Number one, level A is just considered baseline. These are table stakes. And that actually does specifically talk about audio and then when we talk about audio, I'm specifically going to say for podcasts, that means transcripts, which by definition in the WCAG guidelines, which is redundant because it's web content accessibility guidelines, under WCAG, podcasts have to have a text version of speech and the non speech audio information that's going to be needed to also understand the content completely.
Level two, which is if most places are saying that you have to be WCAG compliant, level two is usually what's stated. There's nothing additional there for audio. It's possible we could see it in future guidelines. And then level three, level triple A, is most comprehensive. It is hard to get to, but places really do need to try and strive for it. That one actually talks specifically about live audio and live events, so if podcasts are thinking about how we might livestream, how we might expand our audiences in other ways, that level of three, the triple A, might actually apply to them as well.
So in terms of who's actually saying, "What is it?" Usually what ends up happening is the lawsuits are saying, "Here's what you need to do in order to be compliant," and they'll cite some of the high level things, or the most trusted places that are sort of the places to look to in order to get yourself up to code. But in terms of who's actually saying, "Is it being enforced or not?" For the most part, it comes mainly from lawsuits or from civil rights organizations and advocacy groups seeing this is not actually happening and then trying to make it so. So the National Association of the Deaf is actually the one who's bringing some of the newest lawsuits for podcasting, along with obviously some advocates who are immediately affected by this as well. So it's multi layered, right?
There's the people who say how it's supposed to go, there's the people who need to implement it, and if they're not implementing it, people are going to call them on it because they cannot access the content that is intended for everybody.

00:15:00
Bryan Barletta: Yeah. I mean, first off, huge bummer that lawsuits are what's leading adoption here. But if I wanted to adhere to WCAG, the spec, is it comprehensible for someone without a technical background? Could a producer of a podcast read through it, get it, and understand exactly what they have to apply?

00:15:17
Ma'ayan Plaut: Usually, yes. But for the most part, the easiest thing to do is just search for audio within all of these guidelines and just make sure that you understand what the specific ... For audio, it means text needs to be written for all audio things and for non speech elements. And I guess more broadly here, the output of it, audio is just a little bit behind on this front. Video, there's lots of streaming players. There are a lot of broadcast entities. There are user generated platforms like YouTube, Vimeo, et cetera. What they've done is built sort of the technical ways in which a transcript then shows up with content. Those are captions. It's a slightly different output file.
But there's no immediate parallel for that within the audio space as well. To be fully compliant as a podcast player, any app that you might be using should be showing that transcript at the same time as somebody might be listening to the audio. So the equal experience thing is not let's have a transcript on our website. That to me is sort of like a stop gap solution. It really has to be in the place in which the podcast is being consumed. I don't even want to say audio because this is part of the progression of how we start to also talk about podcasting being for everybody, is to move away from the idea that podcast listening is the only way in which this gets done. It is also podcast audiences, podcast consumption.
Podcast consumption is more than just listening to audio. It is also reading transcripts. And that's something that we'll get to a place of normalization soon enough, but we're not really there yet either.

00:16:48
Bryan Barletta: But it's the right time to say it, as we push podcasting into video and other formats, it is what we're telling people in an open framework is that you are a content creator. And your primary channel has been audio, but now you're being challenged to explore text, and you're being challenged to explore video and see if those channels do work for you. Some of them, you're going to need to comply with. Others, like video, might not be what works for you. Sounds Profitable doesn't do numbers on video yet. I don't have really the focus to push that avenue yet. But it's one of those interesting things. Right? Not everything works for everyone. But the transcription aspect is really critical. And I think that what's clear here is the WCAG guidelines, like you said, redundant. It's clear, you can start implementing it now. There are partners like 3Play Media that adhere to that, that can make you compliant immediately. I think that's kind of crazy that we're not moving forward with that as soon as possible.
It's not like someone in podcasting is raising their hand and saying, "Hey, we'd like to challenge that and make a podcasting unique one," which we shouldn't do because we cannot continue to be a unique silo, ignoring the history that's come before us. So adopting that is really smart because here's the truth. Every podcast player can transcribe and make best guesses using AI to figure out what's going on there. But they are definitely not putting a human in front of that. And that transcript is not going to be as accurate or as complete, or even usable, for the person consuming it. It might put them off completely. The only way for adoption is if the publishers, who own very little, remember the publishers get download stats back. They own their episode. They distribute it to places that can call to them for the episode content.
The publishers should defend that. That is a translation, a transcription, an interpretation of their content. And allowing anybody else to do that for you and not give you the rights to edit it, wild. We need to get ahead of that. And what this means is we need to follow this guideline, implement it as fast as possible, and make sure it comes from the same destination where the file comes from, so that we can say to Apple, Spotify, Amazon and Google, "Hey, enough of us do this. Please acknowledge it and display it correctly in your app." And then they're the ones that we can sic the lawsuits after.

00:19:13
Ma'ayan Plaut: Your future sounds good.

00:19:15
Bryan Barletta: Hey, I'm trying. We've got to be optimistic on this stuff. And it's definitely really interesting that audio, which predates video, is behind on this, when video is leading the charge. We need to get this together. Podcasting is on demand. It doesn't even need to be there at time of launch. Should it? Absolutely. But if you're down to the wire, you're pushing your episode out, and you can get it out 24 hours later, you know what, that's a good move today. Probably like me, should have more bandwidth in your production cycle, but for daily shows, it can take a little bit more effort.

00:19:48
Ma'ayan Plaut: Yeah. And I guess it seems pretty easy to just start doing right now. But really what we're talking about is changing workflows to include more people, and that means both what it means to make the podcast, but also what it means for somebody to interact with your show on the other end. My seven years coming up to now in podcasting has always been about, we just want people to love what you've made. And if the only way in which you, a producer, interact with your show is through listening to it, you're not thinking about the 20% of audiences that are probably going to try and interact with it in another way. So one of the ways in which to start to narrow down what are the most important non speech elements, of course speaker identification is one of the most important parts, but all of the different sounds, if they're not important for comprehension, they don't need to be in the transcript.
But if they're not important for comprehension, why are they in the show? And if they're in the show to elicit tone, that is absolutely something to include in a transcript. So I think it actually makes producers more aware if they're starting to think about transcripts not as it's going to help me edit my show, but I am thinking about the transcript as part of the production experience because I want the thing that I'm making at the end to be understood and felt by everybody who wants to interact with my show.

00:21:09
Arielle Nissenblatt: Okay. Bryan Barletta, I've got takeaways.

00:21:13
Bryan Barletta: I hope you do.

00:21:16
Arielle Nissenblatt: That's my job. Isn't it? All right, so Ma'ayan brought something up at the beginning of the conversation that I want to go back to, which is that when podcasters hear the word transcript, for the most part they're thinking about how they can use the transcript to aid in the creation process, and I became aware of this tactic when I was at Salt, the audio documentary school in Maine, because if you get hours and hours of tape with somebody, you're going to want an easy way to comb through that. And an easy way to do that is to upload it to a transcription service like Otter, or 3Play, or Descript, or Trint, I could go on all day. And then control F for the words or the phrases that you know you want to include in that conversation, or you know you want to cut out of the conversation.
And then what do you do? You tailor your voiceover based on the transcript that you have created through this AI process. What Ma'ayan is saying is that transcripts should be transcripts first, for the sake of being transcripts for accessibility reasons, and there are certain elements like identifying the active speaker that are not present in the editing version of the transcript. So first of all, I'd love to know what you think about that, and if you've experienced that as a creator, and then I want to talk a little bit about my experience transcribing my podcast and where I'm falling short.

00:22:41
Bryan Barletta: Yeah. Well, let me start first by saying that Sounds Profitable falls short. I think we strive to try out all these different tools, but some of them slip off. Right? We use different transcription services. We've migrated from Whooshkaa to now we're on Trint, and we've used Descript, Adobe, we've manually edited certain things. It's not easy to do all of this, but that's not a good excuse. We all can and should do better, but it's a great example of it. Right?
If there was ad dollars, if there was a driving force behind it, which was what I tried to push in an article I wrote recently, that advertisers should be asking for transcripts to shore up through machine learning and through other tools, what they're actually buying on, then it becomes a non starter for the mid to enterprise side of podcasting to have to provide them. So I think that I strive for us to present a really good face to all of this. But I think the problem is that because it's not in my face as a consumer, it's not something that I can turn on transcripts in the Apple, Spotify, Amazon or Google apps and see it in real time, and it's synced perfectly and it takes into account dynamic ad insertion. It's easy for me to think that it's not yet part of podcasting, and that's not healthy. That's not right because this is about accessibility and reach. We're in the second phase of podcasting where the truth is that podcasting needs to be part of what you do.
We are content creators. This is our favorite channel. This might be our best performing channel. It might be one of the more successful channels as we grow it. But by taking your podcast and making it a transcript, you are now also a writer. You have potential for a website for someone to go to. You have potential for a newsletter. You have different ways for people to interact with you, just like video. So I definitely agree that transcripts for accessibility should be the first goal. Accessibility extends reach. It's not just performative. But I really do fear that unless ad dollars force people's hands to make this a standard and we get an overwhelming number of podcasts that pass it, and the apps respond by including it, it's not going to take on any time soon.

00:24:49
Arielle Nissenblatt: I think the apps responding by including it is huge. I don't even think most folks can imagine what that might look like because right now, what do you do when you're listening to a podcast, you hit play, you put your phone in your pocket, or you put it on the other side of the room, and you listen to the show while you're cleaning, while you're cooking, while you're walking around the house or the neighborhood. So what would it mean actually to have a transcript on the phone that you could follow along if you wanted to, or if you needed to?

00:25:17
Bryan Barletta: I've started putting transcripts on when I watch anything on TV.

00:25:20
Arielle Nissenblatt: Me too.

00:25:20
Bryan Barletta: To a point where now when I go to a movie theater, my experience is completely jarring. It's very uncomfortable now. The mixing is awful on these things. I can't quite tell what's going on. And I love it. There are little things that I just miss. And when I get to see the transcript and I get to see the captions and the subtitles that explain what's going on, I feel more engaged. I feel more pulled in. I feel like I didn't miss out and I have to read a recap article, or listen to a recap podcast afterwards because everything was presented to me, even if I didn't completely notice it.
So I think for podcasting, especially for narrative, I'd really like that because there's a lot where I want to go back and listen to it and it still doesn't sink in, or I'm not clear who's talking. I mean, some of my favorite narrative podcasts, I've struggled to figure out who the speaker was for easily five or 10 minutes, sometimes entire episodes, or you're coming back to something and you're not remembering who it is, so I think with the app having an option, I'm not picturing a world where we're going to keep our phone on and read transcripts while we're listening to a podcast. But I think it becomes something that if you know it's there, and it's really important, it'll be really powerful. It would also be incredibly powerful to pass clips, make little headliner type videos from all the players as a consumer.

00:26:39
Arielle Nissenblatt: Well, I mean, even the way that I wrote out the outline for today's conversation was with the Descript transcript, which of course is not perfect, and it's not going to be the transcript that we put up word for word on the website. But it really helped me to be able to see which word was being highlighted when. I could read along. It's a fun way to experience listening and reading a podcast. Paul F. Tompkins tweeted this week, "I know I have become a true fan of a podcast when I experience that magical moment I can now differentiate the hosts voices." So this is something that could be aided by a transcript. And you can become a fan much sooner.

00:27:23
Bryan Barletta: Deep cut for Bryan, Paul F. Tompkins is actually, he was on my first podcast I ever listened to, Thrilling Adventure Hour.

00:27:29
Arielle Nissenblatt: No way.

00:27:30
Bryan Barletta: Yeah. Huge fan of him.

00:27:31
Arielle Nissenblatt: Wow. Love that. I want to share a practical tip from Ma'ayan. She says that if you're a publisher who wants to get compliant, but all of these guidelines are long and the legal jargon is confusing, just control F the word audio and make sure you understand everything there because WCAG, W-C-A-G, applies to a lot more than just audio, so you're going to want to know what applies to you as a podcast publisher. And how do we practically move towards a world where accessibility is not an afterthought?
I really liked Ma'ayan's suggestion of not just calling it podcast listening, because some people are going to consume a podcast in all sorts of ways. And one of those ways might be reading a transcript. They may never even hear the podcast, whether that's because they are hard of hearing or deaf, or because they don't want to. Maybe they just want to read the podcast. Maybe they consume content, maybe they are stronger readers than they are listeners. I'm the opposite of that. But some people just want to consume a podcast by way of reading, and that should be okay and that should be accessible to them.

00:28:34
Bryan Barletta: I'm right there with you. And we're hitting a point in podcasting where we're going to see people pull pieces of this apart, and what is audio only is going to be incredibly challenged because we're looking at silos and video solutions pulling people out. We're looking at text options, turning a podcast into a newsletter or a website, all of those things. If you don't acknowledge that you're creating amazing content and make it accessible to wherever your audience is, and you should explore that, then you are going to be left behind. The industry is rapidly going to change in the open nature of podcasting and the ability for it to lend itself to so many other formats is going to make it easy for someone to slice this pie up and say, "Well, podcasting shrunk," when really, podcasting at its core has spread into so many different things. So transcripts for that reason are fantastic, but for accessibility are a killer. I think we need to really do that. And Ma'ayan made one of my favorite points. She doesn't want a robot trying to write her name out.

00:29:36
Arielle Nissenblatt: Yeah. Ma'ayan Plaut, even Arielle Nissenblatt, I get some pretty ridiculous AI transcriptions. I've gotten Arian as in black, I've gotten just-

00:29:48
Bryan Barletta: That's rough.

00:29:49
Arielle Nissenblatt: I know. Yeah, you definitely need somebody going through that afterwards and making sure that is not what's going to be published on a website.

00:29:56
Bryan Barletta: God, that's your dark Harry Potter persona.

00:30:00
Arielle Nissenblatt: So listeners, what do you think about the show? We want to hear from you. Please reach out if you have any questions or comments. We're on Twitter at Sound Prof News, at Bryan Barletta or at Arithisandthat. And if you want to send us an email, that's podcast@soundsprofitable.com.

00:30:16
Bryan Barletta: This show is recorded with Squadcast, the best place to record studio quality video and audio for content creators. I use Squadcast for every single podcast recording and my product deep dives. Check out the latest one we did with Triton Digital at soundsprofitable.com/deepdives. And check out Squadcast.fm for a free seven-day trial. And please let me know what you think.

00:30:36
Arielle Nissenblatt: Do you want more from Sounds Profitable? Well, you're in luck because we have two more podcasts that you can explore. First up is Sounds Profitable, the narrated articles, and next, The Download, our podcast about the business of podcasting. And both of those are available in Spanish. You can find links to them in the episode description. Thank you to Evo Terra and Ian Powell for their help on this episode.

00:30:57
Bryan Barletta: And thanks to you for listening to this episode of Sounds Profitable, Adtech Applied, with me, Bryan Barletta.

00:31:03
Arielle Nissenblatt: And me, Arielle Nissenblatt. Until next time.

00:31:06
Bryan Barletta: Rad.