
Wonks and War Rooms


Content Moderation with Andrew Strait


Former content moderator Andrew Strait and Elizabeth chat about what content moderation is, why it is always flawed, and how the way platforms are constructed impacts the flow of content. They talk about a bunch of related issues, including how to (and how not to) regulate tech companies in order to minimize harms.

Additional Resources

  • Andrew recommended two great books that look at content moderation and content moderators: Behind the Screen by Sarah T. Roberts and Custodians of the Internet by Tarleton Gillespie.

  • After the interview Andrew also mentioned the work of Daphne Keller and Robyn Caplan.

  • The German regulation mentioned in this episode is NetzDG. Here is a primer written by academics Heidi Tworek and Paddy Leerssen in April 2019, just over a year after the regulation came into effect.

  • Andrew quickly mentioned "safe harbor" (in the US you might hear "Section 230"). Here is a brief explainer from Reuters.

 

Episode Transcript: Content Moderation with Andrew Strait


Read the transcript below or download a copy in the language of your choice.



Elizabeth: [00:00:03] Welcome to Wonks and War Rooms, where pol[itical] comm[unication] theory meets on the ground strategy. I'm your host, Elizabeth Dubois, and I'm an associate professor at the University of Ottawa. Today, we're talking about content moderation with Andrew. Andrew, can you please introduce yourself?

Andrew: [00:00:16] Yeah. Hi, my name is Andrew Strait. I'm the head of research partnerships at the Ada Lovelace Institute. It's a non-profit think tank based in London that focuses on making AI and data work for everyone. Prior to this, I worked at DeepMind under [the] Ethics & Society Team for two years, and prior to that I worked at Google as a content moderator for about six and a half years.

Elizabeth: [00:00:35] All right, so you know all about this idea of content moderation?

Andrew: [00:00:40] Yes [laughs], I'm afraid I do.

Elizabeth: [00:00:43] [Chuckles] All right, I am excited to dig in. So content moderation, when we're thinking about it from this political communication perspective, we're thinking about it often in the context of these tech[nology] companies that basically set up a set of rules for what information is and isn't allowed out on their platforms. And content moderators, or tools that are made to do content moderation instead of people, they basically follow those rules to apply them to pieces of content to decide, "Yes, this content can flow through the system," or "No, it actually can't." And sometimes that means taking the content down, sometimes it means flagging the content, [and] sometimes it means deprioritizing the content. And so there's a lot of different kinds of things that content moderation can look like, but the basic idea is it's these companies setting up rules for what is and isn't allowed to flow through their system normally. Does that track for your understanding?

Andrew: [00:01:42] Yeah, at a high level I think that's pretty accurate. I think there is a slight "wrench in the gears", which I would say we can throw in, which is around the different ways that you can moderate content. What you described is how I would have moderated content when I was at Google and working at a very large platform. But I have seen and witnessed and spoken with people who moderate content in a very different way by shaping the actual architecture and affordances of the tool itself to encourage certain behaviour.

[00:02:11] So to give one example, there is a platform called, I believe, "Penguin Town" or "My Penguin" (I'm blanking on the name of the exact game), [and] it was a game where the designers made it so that if you started to write content in the chat that was abusive or causing harm, that content would become invisible to other users, but you wouldn't be notified. So you would think that you were writing all this content, but no one would respond to it. And the thinking behind that was: If we create this affordance that basically identifies this content and just essentially removes it from [the] site, that will discourage that person (through social cues) from continuing to type that content because it just won't get a response. So yes, [in content moderation] you're enforcing rules, but I think there is a key thing: it's not all about just removing or deleting content, it can also be about shaping the affordances of the product or trying to nudge behaviour in particular ways.

Elizabeth: [00:03:05] Yeah, that's awesome. And I think that the focus on the architecture and the structure of the environment in which people are creating and sharing content is really interesting. You use the term "affordances", which I know from academic literature—we haven't covered it on the podcast yet, though. I wonder, can you explain to us what you mean by affordances, just so we're all on the same page?

Andrew: [00:03:27] Yes, sure. So, I will acknowledge that my understanding of this term might be different from the literature because I'm not an expert in the way that you are. I would describe [affordances] as essentially what the platform or product allows users to do. And [an affordance] isn't necessarily prescriptive (it's not encouraging a particular intentional behaviour), but it's essentially a freedom that users can then extrapolate and create their own uses within. And that's something that, traditionally in content moderation, particularly early in my career, we didn't really think about as something that was relevant to our work. But I think increasingly, as we've seen content moderation and platform governance studies develop, that is becoming more and more part of the conversation.

Elizabeth: [00:04:08] Yeah, yeah. That's roughly my understanding of affordances, too. It's the idea that technologies can have these opportunities for use built in; it doesn't mean you have to use them in that way, but [the technology] can allow it and sometimes incentivize certain kinds of use over other kinds without determining what you can do.

Andrew: [00:04:26] I'm glad I got that one right, oouff! Well, that's a relief.

Elizabeth: [00:04:28] We're off to a good start! [Laughs]

[00:04:31] Can we go back to [what] you talked about—back when you were at Google, it was kind of the earlier days of content moderation. Can you describe a little bit what that was like? What were your tasks? How did you go about doing content moderation?

Andrew: [00:04:45] Yeah. So I was at Google during a very interesting six years where, I think, we witnessed what I would describe as a cultural shift in how our moderation teams thought about what they were doing. So, when I first joined, the primary focus of our team was to protect free expression online, which is not what I think people think of [when they imagine a] content moderation team—you'd think it would be to remove harmful content. But our job was really to try to keep up as much of the content that existed on the web as possible and to remove the content that was bad. We were trying to filter that way, with a deep respect for free expression. And that was felt in the kinds of questions that you were asked in the interviews. A lot of the questions were coming from this almost free speech absolutist position of, you know, trying to put you in a position where you would be asked to remove content that might be illegal in one jurisdiction but be considered political speech in another, and testing how you would perform. And you also felt that in the way that we talked about our role as sort of like the front line of free expression. Those were kind of the terms in the ways that we understood what our job was.

[00:05:45] I would say that the big defining moment where that really started to shift was around the time of the right to be forgotten [legal] case in 2014. Suddenly you had this culture of First Amendment, U.S. centred understandings of free expression being challenged by [the] European concept of moderation, which was much more about balancing other rights relating to things like privacy or reputation. And I can't describe how jarring that was. I remember literally seeing lawyers stalking the hallways of Google saying, "This is it! This is the end of the free web and the conception of search as this neutral reflection of information!" And over the ensuing years after that, we really were forced to confront, and I think realize, that you don't need to adopt this free speech absolutist mentality, but really [we] adopted this more, what I call, European mindset: that free expression isn't necessarily the only important thing. There are a lot of other important things we need to consider in some of these tools, particularly search, and we need to find that right balance between these different rights in a way that I think we hadn't traditionally considered.

Elizabeth: [00:06:54] And so balancing those rights... you've briefly mentioned reducing harm or harassment—are those the rights? Are there other rights that need to be balanced against free expression in this sort of newish or updated European mentality that you're mentioning?

Andrew: [00:07:09] I mean, I think a few of the ones that came up for us were definitely the different understandings of what concepts like hate speech meant, for example. Traditionally, to give one example, the way that we construed that was this very U.S.-centric notion, which referred to hate speech as a call for violence against a particular individual or group based on their religion, creed, sexual orientation, gender, et cetera. And that turned out to be very much out of whack with how many European countries understood that concept. And it wasn't so much that we didn't care about hate speech, but it's just that I don't think [that] up until those later years we really considered how [content moderation] needs to be much more region specific and tailored to particular domains.

[00:07:57] And then I think there was another notion around particular products and the kind of journey that they took. Search, I think, was really the first product that Google created, and it was really created out of this libertarian Silicon Valley ethos. This notion that the web is this beautiful utopia of information [and] that we should enable access to it, and access to it will create a better world. And so when you had this new obligation (the right to be forgotten) come into force, that really flew in the face of this deeply-rooted understanding of what search was from within the company. And so I think the key thing here is not that we didn't consider privacy at all, but it was where that dial was. How we balanced privacy or data protection or defamation or cyber-harassment versus this concern around free expression really shifted over the time I was there.

Elizabeth: [00:08:51] That's super interesting. And as an outsider watching the trajectory of conversations around content moderation and what should be done [and] what shouldn't be done, I have seen those parallels in the kinds of discussions about, you know, "The internet is democratizing, it's going to allow everybody to participate," to, "Oh hey, we have a lot of the same structures of marginalization reproducing themselves within these spaces online that we've created." And, "Do we need to also then have protections for people who might be unfairly harmed in those systems?" So, it's interesting to hear that from the inside it felt similar too.

[00:09:31] I wonder... you mentioned a little bit about needing to focus in on regional aspects, or the particular differences in the U.S. versus Europe—Canada is somewhere in between most of the time. Are content moderators trained to do their job differently in these different contexts? And how do you deal with that? [For example,] I lived in England for a while [and] I lived in the U.S. for a while—I have friends in both of those places—and when I'm sharing information online [I'm] transferring [information] across these different geographic bounds.

Andrew: [00:10:03] Yeah, it's a really good question. I would say that we tried to split up our moderation into two layers of policies, the first being essentially what the platform or product's policies were. And these are meant to be universal. So they are meant to be things around concepts like harassment, for example. If there was a campaign of someone really targeting a particular individual—it didn't matter where you were—our policy was just "that gets removed." And that'd be the first level review.

[00:10:30] Then there would be a second level, which is legal review. And that's more on individual jurisdictions and regional law, which is a lot of the work that I did. So, even if we might not remove something (for example, for harassment) under our product harassment policy, we might then remove it for a local harassment law that had intermediary liability implications if we didn't take action. Or, to take an extreme example, there are laws in places like Thailand that prevent any defamation or derogatory remarks against the King of Thailand—and those were laws that we had to enforce, in some cases, in Thai jurisdictions. And that creates a very particular [and] interesting dynamic where, on certain products, there's an incentive to balkanize the experience in that product. [Meaning an incentive] to create a "Thailand version" of Blogger and to create a U.S. and Canada version of Blogger so that you don't have the lowest common denominator of laws affecting global visibility for all.

[00:11:24] I do think—I will acknowledge this is a bit of a personal, controversial opinion—that it is impossible to ever adequately moderate on a global product at scale. I don't think we necessarily did this very well, because to do it at scale requires having an unbelievable number of individuals who work for you who are trained in that local language. And I can tell you firsthand, it's extremely difficult to get people who are willing to do the job of content moderation, particularly from places like Sweden or Norway where the standard of living is very high. If you speak those languages you can get a job doing anything else out there, so why would you make your job reviewing complaints and copyright complaints online? You know, to understand that context? And that's a real challenge that I think any large firm or platform deals with: how do you ensure that you have an appropriate level of contextual review for a particular jurisdiction and a particular language?

Elizabeth: [00:12:24] Yeah, that's a really good point. The actual work of content moderation is not necessarily fun and it's certainly not easy, and it can be very taxing emotionally and mentally. And so finding people to do that for an amount of money that a company can offer somebody when they need that many people is hard. I wonder a little bit then, what are companies like Google and other tech firms doing? How are they responding to this challenge?

Andrew: [00:12:55] So I can't speak for Google and Facebook because obviously I haven't worked there recently. But what I can say, from what I have seen, is they're falling in[to] two solutions. One is to hire armies of moderators who are housed in a particular region, or located there, or who speak those languages. And what they should be doing is increasing the pay for those workers to encourage individuals who speak a bespoke language or any language to do that work. I mean, it really is—I can't emphasize enough—very underpaid work. It is often contracted out to smaller third-party services that, in the U.S., don't provide things like health benefits, [and] they very rarely provide the kinds of mental health coverage that you need for particularly traumatic kinds of content review. I think that is a failing of the industry writ large right now. I just don't think anyone's doing this well.

[00:13:52] I think the other thing that these firms have been doing is then taking all of these decisions that have been labeled by people from those domains and in those languages and trying to use automated moderation interventions in various ways. The majority of those interventions are very rarely automatically deciding an outcome "off the bat" or proactively filtering for that kind of content. The majority of interventions are more about trying to help agents prioritize which of the requests in their queue are duplicates, which ones they should look at first, which ones have a very high sensitivity (like if there's a risk of suicide or any kind of other element like that). There are some services that are fully automating decisions—I know that that's the case in areas like copyright and definitely in ones like child sexual abuse imagery, which is just very rampant and difficult to control and monitor without automated interventions. But those, I think, are the two broad ways: creating armies of human moderators to review this kind of content, ideally with local language support, and then, from that, training automated systems to help in that filtration and decision-making process.

Elizabeth: [00:15:04] Ok, so just to make sure I understand it: the armies of content moderators, they would be the ones who are given, like, "Here's a set of rules of what's OK. [And,] here's some stuff that's been flagged by someone or some machine, tell us if it matches the rules or not." And then, the automation you're talking about is essentially flagging content as being top priority—things that need a human to look at it first.

Andrew: [00:15:32] Yeah, yeah, that's the majority of it. There are some that do render final decisions right off the bat, and it's usually for particular types of content. Like, in the case of copyright infringement, if you know that a site is almost assuredly a piracy website, you might not have a human review that kind of content—those kinds of things [are often moderated using machine-only automated analysis].

[00:15:52] Now, I will throw in a grenade here to make it even more complicated: I think it's very important to acknowledge that when you have more people reviewing content under a set of rules, that creates a huge quality and training problem. So, it is not the case that the more people you have, the better things will get. And in fact, you can have quite the opposite. The more people you have, the more issues you can have monitoring whether people are effectively reviewing this kind of content and enforcing [the rules]. There are also issues with the region-specific variations that can arise under a policy. So, for example, in my job (and my job [was] to write out the bare bones of a policy), we needed to set up a very complicated escalation and review procedure so that frontline moderators could flag really strange scenes they're seeing or "grey area" cases for us to review. And then [using those flagged cases, we could in turn] ultimately update the policy. So it's not like you get a book that you follow and [use to] sort it out. It does take a tremendous amount of time.

[00:16:55] And the other thing I'll just flag—to make this even more complicated—is that policies are not the same from platform to platform. We're very focused right now on platforms like Facebook and YouTube and Google Search because those are the biggest. But there are massive platforms and places where moderation happens on a day-to-day basis that are not included in those broad categories, and where moderation and the types of speech or policies you might need to enforce will look very different. And that raises a real problem for regulation because, if you're trying to regulate this space... I think right now we're using the Facebooks and YouTubes as our model for how to do that without thinking about, "What are the other places where moderation might look different and where different regulatory interventions might need to be a bit more bespoke or different?"

Elizabeth: [00:17:42] Yeah, I think the importance of thinking about it as an evolving process and as one that needs to be specific to the particular platform—we're talking about the particular experiences that people are having in those online spaces. That's a really important point. Can you think of examples of places where the approaches—the regulatory approaches that are being discussed right now—wouldn't fit? And what makes them so different from the major cases of search, YouTube, [and] Facebook?

Andrew: [00:18:12] It's a good question. I think one place that I would wonder about is in-game video chat forms of moderation. I remember, for my thesis, one of the things I did when I was doing my master's program was to speak to content moderation service providers about how they understand moderation versus how a large platform like Google or YouTube would. And the kinds of problems [in-game chat moderators] came across were very different. Most of the in-game chat issues were ones where they were trying to discourage people from creating abusive ways of talking that discourage others from engaging. So it wasn't so much hate speech or harassment, it was talking in a way that discourages others from engaging and enjoying the platform. And so the kinds of interventions they would have would be more around trying to identify if someone is feeling hurt or upset or angry, rather than if they had violated some particular rule or intervention. And, I think, that's just a very different way of thinking of moderation that I don't think many regulation proposals I've seen consider.

[00:19:16] To give an example of a piece of regulation I think hasn't worked well: the NetzDG law. I won't be able to pronounce it in German, but the German hate speech law that was passed, I think back in like twenty-fifteen or twenty-fourteen [the law was in fact passed in 2017]. It created this very strange incentive where firms had to remove content within a very narrow window after it was reported. And what happens is that, when you encourage and incentivize the speed of review and removal, you essentially remove any nuance in how you implement the policy. Just remove first and ask questions later. And in that case, you had some very high-profile cases of politicians on Twitter and other platforms having their content removed because it was flagged as hate speech, and, you know, [the moderators] didn't have the 24 hours to review it—they had to just get it offline, so they took it down right away. I know that we had some similar cases like that when I was at Google right before I left, where it was just very rushed removals and very little time to actually analyze it. And those are the kinds of challenges, I think, in the regulatory space, we haven't really grappled with yet. How do you enforce timely removal of content with high quality and [avoid] these kinds of bad outcomes where you need to go through a very lengthy appellate process to get your content reinstated? Which, oftentimes, by the time it's reinstated, the harm's already done and it's too late.

Elizabeth: [00:20:38] Yeah, when we think about how things flow across the internet, speed is the major factor here. It's the thing that allows, you know... You know within a minute or two whether or not a tweet is going to blow up, for the most part. And we know for sure that with things like disinformation—particularly the COVID-19 example, I think, is pretty salient at the moment; disinformation related to the pandemic—once that's out there, once people are convinced that masks don't work, that vaccines don't work, [or] that ~whatever it is~ that's factually incorrect, once it's out there, even if you pull the content down later, somebody's screencapped it and shared it on another platform already.

Andrew: [00:21:23] Well, you're hitting on a real challenging issue when we talk about moderation. Moderation isn't going to save us, alone, from misinformation networks. Because, unless you're changing the affordances of the platform to remove the ability for that kind of use of the network, you're playing Whack-A-Mole. You're always a step behind as a moderator. You're reviewing what people flag to you and what has been flagged to you by an automated system, and usually by the time that happens, it's too late. I think that the New Zealand mosque shooting that was livestreamed on Facebook Live is a great example. I read the report of how the Facebook team identified that video and removed it, and I think it was within a span of like three or four hours. I can tell you from firsthand experience: three or four hours is an unbelievable response time to identify that kind of content and remove it from all throughout the network. And even then, it wasn't fast enough. And so it raises this question: with the current type of online activity and platforms that we have, [and] the current ability to share content so easily and so quickly, is moderation ever going to keep up with expectations? I personally think not. I think there need to be other ways to look more towards a platform['s] affordances and how we design technologies. But, that's my personal "crazy ranting" view, which I will spare you. [Laughs]

Elizabeth: [00:22:46] No, I think it's such an important part because we've been talking a lot about content moderation in the last little while with COVID [and] with the U.S. election—and before that, there was a lot of focus on disinformation; and before that, we worried about child pornography; and before that, we worried about spam. And so the conversation about content moderation has been a long one. I think it's really interesting to hear from you that you don't think content moderation is the right conversation—or the only conversation—and that we need to be spending more time on this idea of affordances.

[00:23:25] Do you imagine, then, that content moderation teams need to be working intimately with the engineering teams that are designing the tools? Do you imagine that the engineering teams just need to take on board this "protection of the health of the space" as their goal without content moderators? What do you imagine that looks like for a company?

Andrew: [00:23:50] Yeah, it's a really good question. I think the first change I would love to see is having people who are in trust and safety teams involved in the design of a product from the very early stages. Too often it felt like we were brought in at the very last second to set up abuse flow and to identify ways this platform's affordances could be abused or misused. And, when you're starting from a position of assuming best intentions and designing a product from there, I think that's how we oftentimes get into these really nasty, messy places. Whereas if we start from a position of assuming, "What's the worst thing that could happen? What's the worst way this platform could be used?" it at least helps us think through [the question of] "How do we mitigate and build out either workflows or features of the product that can prevent that kind of harm from occurring?" I will acknowledge that that is a much easier thing to say than do.

Elizabeth: [00:24:42] Yeah, why hasn't it been done?

Andrew: [00:24:44] Yeah, well I think partly it's because moderation work has been seen as, to use Tarleton Gillespie's words, like "the custodian of the internet" kind of work, right? It's something that people will do... it's kind of a form of cleaning up. I think it's... I'm so bad, I'm forgetting her name... I think Sarah Thompson, who wrote this amazing book called Behind the Screen in which she interviews moderators about their experiences. And... I think one of the most important themes from that book and from the experiences of those moderators is that they're seen as very disposable—as people who are brought in and driven to the point of just burnout, and then they are sort of cycled through to the next batch of new incoming individuals. And I saw that firsthand as well. So I think part of it is that the moderation work is not respected within the tech industry. It is seen as this disposable form of work. I think the expertise that moderators bring is not respected. It is seen as this very disposable form of labour that can be outsourced and handed over to someone else. And I do think that the A.I. ethics space and all these conversations around how we better design A.I. technologies [have] raised this question of design principles and being more thoughtful about how we create technologies. But I have yet to see that conversation get married with content moderation to really consider moderators as a source of experiential expertise when designing a new technology or platform.

Elizabeth: [00:26:21] Yeah, that is unfortunate to hear but rings pretty true. It's really interesting to think about how we ended up in a space where there's a whole industry that views content moderators and their expertise in this way that is so different from how it feels. We interact with these technologies—most of us on a daily basis—and the content that does or doesn't show up on our screen is what's tangible about the platform for each of us. And to think that the people who are making a lot of the decisions about how to make that content less harmful aren't central [to designing the platform] doesn't match up well with our experience of the tools.

Andrew: [00:27:15] Just to flag, by the way, I misspoke on the name: it's not Sarah Thompson, I'm so sorry! It's Sarah T. Roberts. She's the one who wrote Behind the Screen, which is a great book I'd recommend.

Elizabeth: [00:27:24] Awesome.

Andrew: [00:27:25] Yes, I think you're completely right. That's a serious challenge when we talk about content moderation as an area of study. I think it is... We oftentimes foist responsibility onto moderators to fix a lot of the problems that we're seeing on platforms when we probably should be thinking of it more as a systems problem. How are these platforms being designed? What is the burden that moderators carry? What is within their power to change and fix and what isn't? And that is something that I hope that we'll see change in the years ahead. But unfortunately, I think, with the direction that a lot of regulation is going in, it seems to still be very much focused on, "If we just define the problem a bit better, then moderators will do a better job." I'm not convinced that's going to be a fruitful solution.

Elizabeth: [00:28:17] And so do you think that regulation could be the impetus for that change? Or is that something [where] the tech companies themselves have to make those choices initially? Maybe it's both?

Andrew: [00:28:30] I think it's going to be both. I don't think tech companies will act on this unless it involves the risk of regulatory involvement to force them to do it. I think it's very difficult to change a product that has been up and running for a very long time. It's very difficult, particularly, to change a product when the incentives for how that product pays for itself and operates are oriented towards sharing content very widely, accessing content, having content recommended to you, [and] making content very easy to post and to share and to link and to get to. There are also—just to call out a few incentives that are very difficult for industry—legal incentives for them [to not] start more proactively editorializing or monitoring this content. If you do that, you really lose a lot of your safe harbour [protection from] liability in many of these jurisdictions and you take on the legal risk. So, it's very complicated.

[00:29:28] I don't think there is a silver bullet solution to this. But what I would love to see is regulators focusing more on the business model and the structure of these platforms rather than solely on their policies. I don't think that the problem is Facebook's hate speech policy. I don't think the problem is that they don't have enough moderators. I think the problem is the fundamental ease with which you can share and disseminate and link to information on those platforms with very little oversight.

Elizabeth: [00:29:59] That makes a lot of sense, and it tracks with how we would expect that, as these technologies become more and more a part of our lives and as we learn more and more about how they function and how they impact the flow of information, we would start to learn that we need to have more nuanced understandings of those tools, and our responses to them need to be more nuanced because they aren't simple. It's not simply just like, "OK, well, we've got a list of URLs, here's the World Wide Web." That's not where we are anymore.

[00:30:32] Before we end, I have just one last question for you. It's just a little pop quiz.

Andrew: [00:30:38] Oh no [Laughs].

Elizabeth: [00:30:41] It's OK—it's short answer, so nice and open ended. Can you define for me what content moderation is in your own words? How would you define content moderation?

Andrew: [00:30:51] Oh goodness, this is tricky. I would describe it as—I even wrote this down once [when] trying to wrap my head around this... Off the top of my head, I would describe content moderation as the process of reviewing and judging content based on a set of rules that we have created, and also of nudging behaviour in a particular way on the platform. I think that's kind of my very terrible sort of merging of those two thoughts that I had had at the beginning of the show. [Laughs] So I apologize for not having that a bit more dialed in...

Elizabeth: [00:31:35] Oh my gosh, not terrible at all! That's perfect. And I think, yeah, it hits on exactly the right things: Yes, it's this review of content and a judgment on it, and noting that it's not just about removing it or leaving it, but also about nudging behaviour. And that [nudging of behaviour] lends itself to those conversations about the affordances and the business models that are, or are not, incentivizing different kinds of reactions. I think that's really great, and that points to the version of content moderation that we need moving forward. Our definitions of these key terms are evolving, and I think that content moderation, as we colloquially understand it, needs to evolve in order to respond to the reality of how these tools do and do not constrain the flow of information. So, I think you did great.

[00:32:31] Thanks for listening, everyone. That was our episode on content moderation. If you'd like to learn more about any of the concepts or theories we talked about today, please go ahead and check the show notes or head over to polcommtech.ca.


