Bryan Barletta: What does it mean for publishers to produce transcripts that are actually helpful and compliant? That's what we're talking about on this week's episode of Sounds Profitable: Adtech Applied, with me, Bryan Barletta.
Arielle Nissenblatt: And me, Arielle Nissenblatt.
Bryan Barletta: Special thanks to our sponsors for making Sounds Profitable possible. Check them out by going to soundsprofitable.com and clicking on their logos in the article.
Arielle Nissenblatt: Bryan, welcome to the show. How are you doing?
Bryan Barletta: Oh, yeah. Thanks for having me.
Arielle Nissenblatt: Oh, yeah.
Bryan Barletta: Good to be here.
Arielle Nissenblatt: Well, we have a lot to discuss today. It's all about transcripts and accessibility with Ma'ayan Plaut of 3Play Media. Tell me, how did you first get in touch with 3Play?
Bryan Barletta: Well, I met Ma'ayan on Twitter. We had interacted a few times before, and I didn't realize that Ma'ayan was over at 3Play. And 3Play actually recently came on as a sponsor at Sounds Profitable, and we were digging into the concept of transcripts and the value there. And I was trying to learn a little bit more from them about how transcripts and all that are handled outside of podcasting because I think the truth is that we make a lot of assumptions with podcasting that we're the start and end of what we need to do. But realistically, we can learn so much from the other industries that come before us. I think transcript's an easy one. I mean, we're already experiencing transcripts on TV and streaming video, and even live content with video has transcripts in most situations now. So they were the first people that I really got to sit down and learn a lot from. And Ma'ayan was one of the people leading that conversation. And we really hit it off, so I asked her to come on and talk through it a little bit more with me.
Arielle Nissenblatt: I want to read a tweet that Ma'ayan put out recently. She wrote, "Podcast transcripts need non speech elements in order to be accessible to deaf and hard of hearing audiences. What constitutes a non speech element? And how do you know what's important to include in your transcripts?" And then she presented a mini thread on making podcasting more accessible, so we're going to link to that in the show notes of this episode because even just that anchor tweet introduced some concepts to me that I was not familiar with before, the concept of non speech elements to include in your transcripts. I mean, it's something that I'd seen before, when you see in parentheses, somebody entering the room, or maybe laughing, or something like that. It's things that I had seen, that I'd read, but I didn't consider. And so I really want folks to be able to experience all of the knowledge that Ma'ayan Plaut was able to bring to this conversation, as well as to the Twitter space, in helping us better understand this conversation.
Bryan Barletta: Yeah. I think she has a very unique perspective that is going to help us open our eyes because everything that you just talked about when you're reading a transcript as a separate file that doesn't match up directly with the audio, hearing that someone, or reading rather, that someone entered the room doesn't necessarily resonate as well because it's not tracked. It's not displayed by the player in a way that you can read it at the same time that it's happening in time, in tempo. So that video mindset is really where we need to get to, while we are as an industry just struggling to provide them as a base response when someone says, "Can I have the episode?" And it says, "Oh, well, here is the transcript as well." So these people are experts at it, and it was a really great conversation, and I'm excited for all of you to get a chance to listen to it.
So transcription is such a big part of video as well as podcasting. And video has probably a longer history with it than podcasting. So when we say transcription, what do we actually mean?
Ma'ayan Plaut: So that is an excellent place for us to start because when I say the word transcripts to podcasters, they usually immediately jump to the production process piece of things and where transcripts fit into that. And for the most part, what they're talking about is turning audio into words that will help them in the production process, usually the editing process. It takes a long time to listen to a lot of audio. It is easier to scan a lot of words. And these rough transcripts are mainly for people to improve their interview and narrative podcasts. That is a super narrow way to think about transcripts and actually not the way in which I and a lot of us in the accessibility space are thinking about podcast transcripts. The way we think about them is as a means of podcast consumption for a podcast audience. And that actually has a very, very specific definition and also purpose.
So a transcript is the way that deaf and hard of hearing audiences will experience your podcast. And you want them to experience it as fully as a hearing person who's listening to your podcast, and that is more than just getting all of the words that someone is saying down on paper. That's only a portion of what's going on. It also has to include all of those non speech elements that really aid in somebody's comprehension of the content. All those non speech elements, that's sound effects, audience reaction, music, the way in which somebody might be saying something, and also who's saying it as well. So speaker identification is actually a really big, important component of an accessible transcript and that helps with comprehension probably more than anything else.
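To make that concrete, here's a minimal sketch in Python of how a publisher's tooling might represent transcript segments so that speaker identification and non-speech elements travel with the words. All the names here are illustrative, not any particular service's API; the bracketed-cue convention is just one common style.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Segment:
    """One chunk of an accessible transcript (illustrative model)."""
    speaker: Optional[str] = None          # who is talking, or None for pure sound
    text: str = ""                         # the spoken words, if any
    non_speech: List[str] = field(default_factory=list)  # e.g. ["laughs"], ["door opens"]


def render(segments: List[Segment]) -> str:
    """Render segments as a plain-text transcript, with speaker IDs
    and non-speech elements in brackets (one common convention)."""
    lines = []
    for seg in segments:
        cues = " ".join(f"[{c}]" for c in seg.non_speech)
        if seg.speaker and seg.text:
            line = f"{seg.speaker}: {seg.text}"
            if cues:
                line += f" {cues}"
        else:
            # No speech in this segment: the sound itself is the content.
            line = cues
        lines.append(line)
    return "\n".join(lines)


episode = [
    Segment(non_speech=["upbeat theme music"]),
    Segment(speaker="Bryan", text="Welcome back to the show."),
    Segment(speaker="Ma'ayan", text="Happy to be here.", non_speech=["laughs"]),
]
print(render(episode))
# [upbeat theme music]
# Bryan: Welcome back to the show.
# Ma'ayan: Happy to be here. [laughs]
```

The point of a structure like this is that speaker and non-speech information are first-class fields rather than something a human has to remember to type in, which is exactly what raw AI speech-to-text output tends to drop.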
Bryan Barletta: Yeah. The speaker identification thing is really interesting because I have a great surround sound system. I really like entertainment stuff, so I put a lot of effort into it. But I also have very young kids, and it feels like nothing's mixed well anyways on TV, so I can either hear the dialogue and wake up my kids with the explosions, or the explosions can be normal volume and I can't hear the dialogue. So I do everything with subtitles now, which has also made going to the movies really uncomfortable for me lately. Super getting off track there, but it feels like a lot of them don't even do speaker identification.
Ma'ayan Plaut: Yeah. So within the movie world, this is also one of the reasons I prefer streaming video and not to mention, COVID, most of my movie media consumption has been at home on a screen, on my own screen with my own controls for two plus years now. I love subtitles because it makes it very easy for me to know who is speaking and what's going on, which especially when there's a lot of sound, especially when there's a lot of back and forth really quickly, every single media producer has a slightly different standard about how they want someone identified, whether it's by name, or speaker one, or whatever. And I love that because it really helps me connect the dots. I'm also not deaf or hard of hearing, and if that is the way in which you are experiencing audio, you actually need to know.
And it makes a really big difference if somebody is off screen, to know who is speaking. And if it's important to know who is speaking off screen, to me, the parallel is pretty clear for podcasting as well. Everybody is off screen, so how are you going to know who is going to say what? There's two of us on this podcast right now. Who is saying what?
Bryan Barletta: Yeah. No, this is all really exciting because I think that everybody listening here does understand the value of making content more accessible for more people, but when we think about it, this is about compliance too. Right? This is about making sure that all of these people can consume all forms of media. And when we think about the quick AI transcriptions, which have a good place, it is a good starting place, a human can go in and improve it. It can get you some of the way and reduce the manual effort if you need to do that internally. But it's not the end answer because this isn't an SEO tool. This isn't something to copy the output and just slam it on a website. It's not to hope that it can tell which one of us is speaking right now and name that appropriately. We need to do more to make sure that we're following the laws and regulations on this. Right?
Ma'ayan Plaut: Absolutely. Absolutely. And as someone with a name that is very difficult for a computer system to figure out how do you actually spell Ma'ayan Plaut, I would love for a human to actually take a look at that and make sure that when I'm speaking, my name is actually spelled correctly as well.
Bryan Barletta: Yeah, yeah. That's really important. So I touched on it a little bit there: we've had some lawsuits in podcasting about transcription. But I've got to assume that for the video industry, like everything that we're dealing with in podcasting, this isn't net new. And I think the podcast industry struggles constantly with the fact that they believe sometimes like, "Oh, man. It's the first time it's happened." No, we can learn from every industry before us. So we've had lawsuits. They're not quite settled yet. We're not really clear where to go. There's still no standards in podcasting. What can you tell us about what we can learn from the video industry and apply to podcasting today, instead of just expecting that we need to come up with it?
Ma'ayan Plaut: Yeah. So there has been a ton of legislation around video broadcasts and digital media accessibility. And it's been over the last several decades, and since podcasting is just now seeing it, we don't know where it's going. But as you said, we can sort of look back at what's happened so far to know what's going to happen in the future. So there's two really big accessibility laws in the United States that apply to just accessibility broadly, and there are sections that talk about audio accessibility. So the Rehabilitation Act of 1973, that one has two sections that talk about audio accessibility, section 504, section 508. Section 508 is the one that actually talks about federal communication and information technology, and the fact that those have to be made accessible. And the most recent refresh of that particular section talks about the guidelines, the WCAG guidelines, I'll get to that in a moment, the web content accessibility guidelines, and those have specific requirements for audio only things.
The second major accessibility law is the Americans With Disabilities Act. It is entirely about just making sure that people can access things. And there's two parts that talk about media accessibility, title two is talking about how public entities deal with this, and then title three is very broad. It talks about places of public accommodation, and that also includes private organizations that provide some sort of public thing. So usually, it's applied to things like a doctor's office, a library, a hotel, things like that. But it actually becomes really interesting because that can also become something that applies to internet only businesses, and it's where places like Netflix and Amazon have sort of fallen under attack about whether or not they need to provide accommodations for their audiences as well. So with Netflix, they were specifically sued around closed captioning and audio description. And in both of those specific cases, the outcome was that they had to have accurate captions for their streaming shows, and they also needed to provide audio description for all of their original content as well.
So if we look at the parallel, how might this play out in podcasting? We are providing things in a public arena. We are making sure that things are available to the public. It is open access. People can get it anywhere. We could see a similar thing play out as well. There are two other just small things that don't have very immediate parallels for audio, but we see them in the broader media and broadcast space and they could add audio in the future. The 21st Century Communications and Video Accessibility Act, the CVAA, and then of course, the FCC, which is about captioning quality standards in broadcast. So there's a ton of press on it for video, especially streaming video. And I would assume that we would see streaming audio following suit as well pretty soon.
Bryan Barletta: Now does that mean that streaming audio separate from podcasting does not already adhere to all of this?
Ma'ayan Plaut: As far as I know, no, because I don't know that people are paying attention to it in this way. But if it is a podcast from a government organization, if it is a podcast from a higher education institution, or a place that provides ... I don't know. If they're doing video already and they're providing captions because they've been sued in the past, I would hope that they're also following suit and doing so with podcasting. But they might not have gone as far or as deep as they might need to yet to say, "Oh, yes. Our audio only content also needs to be accessible in this way."
Bryan Barletta: Yeah. Interesting. And in podcasting, I wrote a little bit about how right now some people pass it through the RSS feed, some people put it in their episode description and link out to somewhere else. There's nobody currently putting it in the ID3 tag, which would allow it to happen in real time, so a player could respond to it and share it, so it's visually there lined up with the content to take into account for dynamic ad insertion. Is there a clear standard in video that everybody can just ... If I decided I was going to go compete with Netflix tomorrow, is there an organization I can look to that says, "Here's exactly what you have to follow because this is what Netflix and streaming video solutions have adhered to, and this is the framework of the video industry for that"?
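For context on "passing it through the RSS feed": the Podcasting 2.0 podcast namespace does define a tag for linking a transcript from the feed itself, so a compliant player can fetch and display it alongside the audio. A rough sketch of what that looks like inside an RSS item follows; the URLs are placeholders, and the namespace spec should be checked for the current attribute list:

```xml
<item>
  <title>Example Episode</title>
  <enclosure url="https://example.com/episode.mp3" type="audio/mpeg" length="12345678"/>
  <!-- Transcript travels with the episode; the type might be text/vtt,
       application/srt, text/html, or application/json depending on the
       format the publisher produces. -->
  <podcast:transcript url="https://example.com/episode.vtt" type="text/vtt"/>
</item>
```

Because the transcript URL lives next to the enclosure, it comes from the same destination as the audio file, which is the arrangement Bryan argues for later in the conversation.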
Ma'ayan Plaut: Yes and no. So the big, big picture is that the W3C, the World Wide Web Consortium, they're the ones that provide these recommendations about how content should be made accessible. Those guidelines are the WCAG, the web content accessibility guidelines. And there are three levels of compliance that people need to aim for. Number one, level A is just considered baseline. These are table stakes. And that actually does specifically talk about audio and then when we talk about audio, I'm specifically going to say for podcasts, that means transcripts, which by definition in the WCAG guidelines, which is redundant because it's web content accessibility guidelines, under WCAG, podcasts have to have a text version of speech and the non speech audio information that's going to be needed to also understand the content completely.
Level two, level AA, is usually what's stated if places are saying that you have to be WCAG compliant. There's nothing additional there for audio. It's possible we could see it in future guidelines. And then level three, level triple A, is the most comprehensive. It is hard to get to, but places really do need to try and strive for it. That one actually talks specifically about live audio and live events, so if podcasts are thinking about how we might livestream, how we might expand our audiences in other ways, that level three, the triple A, might actually apply to them as well.
So in terms of who's actually saying, "What is it?" Usually what ends up happening is the lawsuits are saying, "Here's what you need to do in order to be compliant," and they'll cite some of the high level things, or the most trusted places that are sort of the places to look to in order to get yourself up to code. But in terms of who's actually saying, "Is it being enforced or not?" For the most part, it comes mainly from lawsuits or from civil rights organizations and advocacy groups seeing this is not actually happening and then trying to make it so. So the National Association for the Deaf is actually the one who's bringing some of the newest lawsuits for podcasting, along with obviously some advocates who are immediately affected by this as well. So it's multi layered, right?
There's the people who say how it's supposed to go, there's the people who need to implement it, and if they're not implementing it, people are going to call them on it because they cannot access the content that is intended for everybody.
Bryan Barletta: Yeah. I mean, first off, huge bummer that lawsuits are what's leading adoption here. But if I wanted to adhere to WCAG, the spec, is it comprehensible for someone without a technical background? Could a producer of a podcast read through it, get it, and understand exactly what they have to apply?
Ma'ayan Plaut: Usually, yes. But for the most part, the easiest thing to do is just search for audio within all of these guidelines and just make sure that you understand what the specific ... For audio, it means text needs to be written for all audio things and for non speech elements. And I guess more broadly here, the output of it, audio is just a little bit behind on this front. Video, there's lots of streaming players. There are a lot of broadcast entities. There are user generated platforms like YouTube, Vimeo, et cetera. What they've done is built sort of the technical ways in which a transcript then shows up with content. Those are captions. It's a slightly different output file.
But there's no immediate parallel for that within the audio space as well. To be fully compliant as a podcast player, any app that you might be using should be showing that transcript at the same time as somebody might be listening to the audio. So the equal experience thing is not let's have a transcript on our website. That to me is sort of like a stop gap solution. It really has to be in the place in which the podcast is being consumed. I don't even want to say audio because this is part of the progression of how we start to also talk about podcasting being for everybody, is to move away from the idea that podcast listening is the only way in which this gets done. It is also podcast audiences, podcast consumption.
Podcast consumption is more than just listening to audio. It is also reading transcripts. And that's something that we'll get to a place of normalization soon enough, but we're not really there yet either.
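For a sense of what that "slightly different output file" looks like in video: a caption format like WebVTT carries timestamps, speaker voice tags, and non-speech cues, which is exactly the timing information a podcast app would need to display a transcript in sync with playback. A small illustrative fragment, with made-up speakers and times:

```
WEBVTT

00:00:00.000 --> 00:00:03.500
[upbeat theme music]

00:00:03.500 --> 00:00:06.000
<v Bryan>Welcome back to the show.

00:00:06.000 --> 00:00:08.200
<v Ma'ayan>Happy to be here. [laughs]
```

Each cue pairs a time range with its text, and the `<v>` voice tag identifies the speaker, so a player can highlight the right line as the audio plays rather than presenting one undifferentiated wall of text.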
Bryan Barletta: But it's the right time to say it: as we push podcasting into video and other formats, what we're telling people in an open framework is that you are a content creator. And your primary channel has been audio, but now you're being challenged to explore text, and you're being challenged to explore video and see if those channels do work for you. Some of them, you're going to need to comply with. Others, like video, might not be what works for you. Sounds Profitable doesn't do numbers on video yet. I don't have really the focus to push that avenue yet. But it's one of those interesting things. Right? Not everything works for everyone. But the transcription aspect is really critical. And I think that what's clear here is the WCAG guidelines, like you said, redundant. It's clear, you can start implementing it now. There are partners like 3Play Media that adhere to that, that can make you compliant immediately. I think that's kind of crazy that we're not moving forward with that as soon as possible.
It's not like someone in podcasting is raising their hand and saying, "Hey, we'd like to challenge that and make a podcasting unique one," which we shouldn't do because we cannot continue to be a unique silo, ignoring the history that's come before us. So adopting that is really smart because here's the truth. Every podcast player can transcribe and make best guesses using AI to figure out what's going on there. But they are definitely not putting a human in front of that. And that transcript is not going to be as accurate or as complete, or even usable, for the person consuming it. It might put them off completely. The only way for adoption is if the publishers, who own very little, remember the publishers get download stats back. They own their episode. They distribute it to places that can call to them for the episode content.
The publishers should defend that. That is a translation, a transcription, an interpretation of their content. And allowing anybody else to do that for you and not give you the rights to edit it, wild. We need to get ahead of that. And what this means is we need to follow this guideline, implement it as fast as possible, and make sure it comes from the same destination where the file comes from, so that we can say to Apple, Spotify, Amazon and Google, "Hey, enough of us do this. Please acknowledge it and display it correctly in your app." And then they're the ones that we can sic the lawsuits on.
Ma'ayan Plaut: Your future sounds good.
Bryan Barletta: Hey, I'm trying. We've got to be optimistic on this stuff. And it's definitely really interesting that audio, which predates video, is behind on this, when video is leading the charge. We need to get this together. Podcasting is on demand. It doesn't even need to be there at time of launch. Should it? Absolutely. But if you're down to the wire, you're pushing your episode out, and you can get it out 24 hours later, you know what, that's a good move today. You probably, like me, should have more bandwidth in your production cycle, but for daily shows, it can take a little bit more effort.
Ma'ayan Plaut: Yeah. And I guess it seems pretty easy to just start doing right now. But really what we're talking about is changing workflows to include more people, and that means both what it means to make the podcast, but also what it means for somebody to interact with your show on the other end. My seven years coming up to now in podcasting has always been about, we just want people to love what you've made. And if the only way in which you, a producer, interact with your show is through listening to it, you're not thinking about the 20% of audiences that are probably going to try and interact with it in another way. So one of the ways in which to start to narrow down what are the most important non speech elements, of course speaker identification is one of the most important parts, but all of the different sounds, if they're not important for comprehension, they don't need to be in the transcript.
But if they're not important for comprehension, why are they in the show? And if they're in the show to elicit tone, that is absolutely something to include in a transcript. So I think it actually makes producers more aware if they're starting to think about transcripts not as it's going to help me edit my show, but I am thinking about the transcript as part of the production experience because I want the thing that I'm making at the end to be understood and felt by everybody who wants to interact with my show.
Arielle Nissenblatt: Okay. Bryan Barletta, I've got takeaways.
Bryan Barletta: I hope you do.
Arielle Nissenblatt: That's my job. Isn't it? All right, so Ma'ayan brought something up at the beginning of the conversation that I want to go back to, which is that when podcasters hear the word transcript, for the most part they're thinking about how they can use the transcript to aid in the creation process, and I became aware of this tactic when I was at Salt, the audio documentary school in Maine, because if you get hours and hours of tape with somebody, you're going to want an easy way to comb through that. And an easy way to do that is to upload it to a transcription service like Otter, or 3Play, or Descript, or Trint, I could go on all day. And then control F for the words or the phrases that you know you want to include in that conversation, or you know you want to cut out of the conversation.
And then what do you do? You tailor your voiceover based on the transcript that you have created through this AI process. What Ma'ayan is saying is that transcripts should be transcripts first, for the sake of being transcripts for accessibility reasons, and there are certain elements like identifying the active speaker that are not present in the editing version of the transcript. So first of all, I'd love to know what you think about that, and if you've experienced that as a creator, and then I want to talk a little bit about my experience transcribing my podcast and where I'm falling short.
Bryan Barletta: Yeah. Well, let me start first by saying that Sounds Profitable falls short. I think we strive to try out all these different tools, but some of them slip off. Right? We use different transcription services. We've migrated from Whooshkaa to now we're on Trint, and we've used Descript, Adobe, we've manually edited certain things. It's not easy to do all of this, but that's not a good excuse. We all can and should do better, but it's a great example of it. Right?
If there was ad dollars, if there was a driving force behind it, which was what I tried to push in an article I wrote recently, that advertisers should be asking for transcripts to shore up through machine learning and through other tools, what they're actually buying on, then it becomes a non starter for the mid to enterprise side of podcasting to have to provide them. So I think that I strive for us to present a really good face to all of this. But I think the problem is that because it's not in my face as a consumer, it's not something that I can turn on transcripts in the Apple, Spotify, Amazon or Google apps and see it in real time, and it's synced perfectly and it takes into account dynamic ad insertion. It's easy for me to think that it's not yet part of podcasting, and that's not healthy. That's not right because this is about accessibility and reach. We're in the second phase of podcasting where the truth is that podcasting needs to be part of what you do.
We are content creators. This is our favorite channel. This might be our best performing channel. It might be one of the more successful channels as we grow it. But by taking your podcast and making it a transcript, you are now also a writer. You have potential for a website for someone to go to. You have potential for a newsletter. You have different ways for people to interact with you, just like video. So I definitely agree that transcripts for accessibility should be the first goal. Accessibility extends reach. It's not just performative. But I really do fear that unless ad dollars force people's hands to make this a standard and we get an overwhelming number of podcasts that pass it, and the apps respond by including it, it's not going to take on any time soon.
Arielle Nissenblatt: I think the apps responding by including it is huge. I don't even think most folks can imagine what that might look like because right now, what do you do when you're listening to a podcast, you hit play, you put your phone in your pocket, or you put it on the other side of the room, and you listen to the show while you're cleaning, while you're cooking, while you're walking around the house or the neighborhood. So what would it mean actually to have a transcript on the phone that you could follow along if you wanted to, or if you needed to?
Bryan Barletta: I've started putting transcripts on when I watch anything on TV.
Arielle Nissenblatt: Me too.
Bryan Barletta: To a point where now when I go to a movie theater, my experience is completely jarring. It's very uncomfortable now. The mixing is awful on these things. I can't quite tell what's going on. And I love it. There are little things that I just miss. And when I get to see the transcript and I get to see the captions and the subtitles that explain what's going on, I feel more engaged. I feel more pulled in. I feel like I didn't miss out and I have to read a recap article, or listen to a recap podcast afterwards because everything was presented to me, even if I didn't completely notice it.
So I think for podcasting, especially for narrative, I'd really like that because there's a lot where I want to go back and listen to it and it still doesn't sink in, or I'm not clear who's talking. I mean, some of my favorite narrative podcasts, I've struggled to figure out who the speaker was for easily five or 10 minutes, sometimes entire episodes, or you're coming back to something and you're not remembering who it is, so I think with the app having an option, I'm not picturing a world where we're going to keep our phone on and read transcripts while we're listening to a podcast. But I think it becomes something that if you know it's there, and it's really important, it'll be really powerful. It would also be incredibly powerful to pass clips, make little headliner type videos from all the players as a consumer.
Arielle Nissenblatt: Well, I mean, even the way that I wrote the outline for today's conversation was with the Descript transcript, which of course is not perfect, and it's not going to be the transcript that we put up word for word on the website. But it really helped me to be able to see which word was being highlighted when. I could read along. It's a fun way to experience listening and reading a podcast. Paul F. Tompkins tweeted this week, "I know I have become a true fan of a podcast when I experience that magical moment I can now differentiate the hosts' voices." So this is something that could be aided by a transcript. And you can become a fan much sooner.
Bryan Barletta: Deep cut for Bryan: Paul F. Tompkins was actually on the first podcast I ever listened to, Thrilling Adventure Hour.
Arielle Nissenblatt: No way.
Bryan Barletta: Yeah. Huge fan of him.
Arielle Nissenblatt: Wow. Love that. I want to share a practical tip from Ma'ayan. She says that if you're a publisher who wants to get compliant, but all of these guidelines are long and the legal jargon is confusing, just control F the word audio and make sure you understand everything there because WCAG, W-C-A-G, applies to a lot more than just audio, so you're going to want to know what applies to you as a podcast publisher. And how do we practically move towards a world where accessibility is not an afterthought?
I really liked Ma'ayan's suggestion of not just calling it podcast listening, because some people are going to consume a podcast in all sorts of ways. And one of those ways might be reading a transcript. They may never even hear the podcast, whether that's because they are hard of hearing or deaf, or because they don't want to. Maybe they just want to read the podcast. Maybe they consume content, maybe they are stronger readers than they are listeners. I'm the opposite of that. But some people just want to consume a podcast by way of reading, and that should be okay and that should be accessible to them.
Bryan Barletta: I'm right there with you. And we're hitting a point in podcasting where we're going to see people pull pieces of this apart, and what is audio only is going to be incredibly challenged because we're looking at silos and video solutions pulling people out. We're looking at text options, turning a podcast into a newsletter or a website, all of those things. If you don't acknowledge that you're creating amazing content and make it accessible to wherever your audience is, and you should explore that, then you are going to be left behind. The industry is rapidly going to change, and the open nature of podcasting and the ability for it to lend itself to so many other formats is going to make it easy for someone to slice this pie up and say, "Well, podcasting shrunk," when really, podcasting at its core has spread into so many different things. So transcripts for that reason are fantastic, but for accessibility are a killer. I think we need to really do that. And Ma'ayan made one of my favorite points. She doesn't want a robot trying to write her name out.
Arielle Nissenblatt: Yeah. Ma'ayan Plaut, even Arielle Nissenblatt, I get some pretty ridiculous AI transcriptions. I've gotten Arian as in black, I've gotten just-
Bryan Barletta: That's rough.
Arielle Nissenblatt: I know. Yeah, you definitely need somebody going through that afterwards and making sure that is not what's going to be published on a website.
Bryan Barletta: God, that's your dark Harry Potter persona.
Arielle Nissenblatt: So listeners, what do you think about the show? We want to hear from you. Please reach out if you have any questions or comments. We're on Twitter at Sound Prof News, at Bryan Barletta or at Arithisandthat. And if you want to send us an email, that's email@example.com.
Bryan Barletta: This show is recorded with Squadcast, the best place to record studio quality video and audio for content creators. I use Squadcast for every single podcast recording and my product deep dives. Check out the latest one we did with Triton Digital at soundsprofitable.com/deepdives. And check out Squadcast.fm for a free seven day trial. And please let me know what you think.
Arielle Nissenblatt: Do you want more from Sounds Profitable? Well, you're in luck because we have two more podcasts that you can explore. First up is Sounds Profitable, the narrated articles, and next, The Download, our podcast about the business of podcasting. And both of those are available in Spanish. You can find links to them in the episode description. Thank you to Evo Terra and Ian Powell for their help on this episode.
Bryan Barletta: And thanks to you for listening to this episode of Sounds Profitable, Adtech Applied, with me, Bryan Barletta.
Arielle Nissenblatt: And me, Arielle Nissenblatt. Until next time.
Bryan Barletta: Rad.