Katherine Maher on how big tech can be as trusted as Wikipedia


360/Open Summit: The world in motion

June 22 – 25, 2021

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosts 360/Open Summit: The World in Motion on June 22-25 online.

Event transcript

Speaker
Katherine Maher
Former CEO, Wikimedia
Nonresident Senior Fellow, Atlantic Council

Moderated By
Brandy Zadrozny,
Senior Reporter, NBC News

BRANDY ZADROZNY: Thank you so much. I really appreciate it. You know, I am a senior tech reporter for NBC News, but I wasn’t always. In another life in the early aughts I used to be a schoolteacher and a librarian. And a big part of my job was to teach young people how to do research. That was, you know, how to find sources, how to cite. And very often that included the preaching of this commonly held belief at the time that no one serious should ever rely on Wikipedia, it was this online encyclopedia written and edited by these rando volunteers. (Laughs.) And I am just really pleased to admit that I was incredibly wrong, and the rest of the world has too.

So, Fred, you did a great job of setting that up, and so I won’t talk about all the great things that Wikipedia means to so many people around the globe. But I feel really lucky to be sitting here virtually with Katherine Maher, until very recently, again, CEO of Wikimedia. So, Katherine, thank you so much for being here.

KATHERINE MAHER: Oh, it’s a pleasure. I’m thrilled to speak to a former librarian and educator. This is exactly who I should be in conversation with. (Laughs.)

BRANDY ZADROZNY: OK, so let’s start this off and talk about trust. I work in an industry that is really experiencing this dire level of public trust. And that’s pretty close to the public polling on the level of trust people have in most tech platforms. So—but people do trust Wikipedia. So how did y’all do that? (Laughter.) And what lessons can the rest of us learn from that?

KATHERINE MAHER: I think one of the really interesting things about Wikipedia is that it started from a position that it needed to earn trust, rather than having institutional trust. If we look around institutions today—whether institutions of the media, or institutions of governance, or institutions of the policy world—they sort of come to the public with an expectation that because of their place and their prestige that they should be trusted. Wikipedia, as we already heard, was a scrappy bunch of upstarts who needed to prove their place.

And I think that that really has created a sense of humility within the Wikipedia community and organization: we’re constantly trying to get better. We’re very much public about the fact that we’re a work in progress. And it invites the public in to say, ‘When we get things wrong we actually very much appreciate you pointing that out to us. In fact, could you join us in improving the platform as a whole?’ I think that that is a very unusual posture, and perhaps one of the things that institutions more broadly should be looking to emulate as we think about this conversation around how to really serve the public in a way that builds that confidence.

BRANDY ZADROZNY: Yeah. (Laughs.) I mean, I think that’s a really interesting thought because, like, whether it’s in the media, or Facebook, or any of the other big platforms, there is a large gate keeping people out, in a lot of ways. And Wikipedia was—you know, has this spirit of the open internet, of the beginning of the promise of the internet where everybody is invited, even if the—it’s not always the most welcoming platform, as you have been working on in the last couple of years. And we’ll talk about that too.

I just want to talk about—I want to frame things with the frame of rights for a second. Because I think we hear a lot about freedom and rights right now, and particularly that debate has been framed—at least in the States and in Western countries—around the freedom of speech, right, and expression online. It revolves around whether moderation of speech or algorithmic amplification or quieting violates a person’s freedom. And Wikipedia isn’t immune to that debate, right? In its own ways, it amplifies voices and points of view at the exclusion of others.

So what are your thoughts about this current fight that’s been the subject of congressional hearings and Facebook Oversight Board decisions? How are you thinking these days about the concept of freedom online?

KATHERINE MAHER: Yeah. It is a very complicated conversation, certainly more so than when we started having it maybe 15, 20 years ago. I think that what we look at with Wikipedia is that it is a very different platform. It is a purpose-driven platform rather than an expression platform.

I think as we look at social media broadly, the solution sets around how we address questions of governance, questions of content integrity, questions of anti-harassment, for example, are really going to be—need to be tailored to the purpose of those platforms.

One of the things that I think is useful is to see sort of how we are transitioning from a conversation around how do we just simply transpose this idea that online rights are human rights—you know, there’s no distinction; I think this was very much the sort of Clinton State Department understanding of rights in the digital space—to an understanding that, actually, we’ve never really thought about rights at the scale of a network of three billion people, and what our norms and expectations are, and whether traditional structures of governance can really meet this moment. The platforms that I’m having a conversation with are really looking at what might we need to do to fundamentally rethink the concept of sort of governance at scale that is representative in a way that sort of suits the needs of the platforms while also respecting rights in a way that doesn’t—that isn’t impractical to be able to apply at the scale of three billion people.

So in the context of Wikipedia, for example, we think a lot about the right to knowledge as being just as important as the right to expression. Wikipedia isn’t a free expression platform. It really is about creating content that people can have confidence in, that they can use to make determinations in their lives, and so that right to have access to high-integrity content often sort of trumps the right to speech.

And that’s sort of a test and a balance that I think, you know, more and more we’re going to need to be able to explore in platforms in an open way as opposed to an absolutist determination of sort of hierarchical rights ranking.

BRANDY ZADROZNY: I think that’s really interesting because what you’re saying there is what I think about a lot, and it’s in terms of, like, does an organization have a North Star. So when you always have something to point to and say, does this further our mission of, you know, providing knowledge to the world or gathering all the knowledge together in one place, like, I think when you always have that North Star to point to it’s easy to make decisions based on that, right, and I think a lot of the other platforms, the struggles that they’re having, really come out of the fact that there’s no ultimate guiding North Star or overarching spirit to the platform. It’s just about growth and scale. What are your thoughts?

KATHERINE MAHER: Oh, I think that’s very right, and I think that’s where some of these platforms are having significant challenges. The way—I think some are actually better positioned to take this on than others.

But if we look at sort of the structure of the platform decisions, the product decisions, the policy decisions that have gotten us to where we are, the primary imperative has been, as you pointed out, scale, market acquisition, time on site, and none of those have really given sort of the checks and balances.

If you think of that in sort of a way that is analogous to the way that we think of checks and balances into the offline governance space, none of them has really built in the checks and balances that are essential to these sort of small groupings, understanding what community norms are at the scale of three billion, as I mentioned earlier, whether that’s in privacy or speech or sort of community standards.

And it really feels to me as though it is an opportunity for us to think about, as we’re in this sort of transitional moment in these legacy social media platforms and some of these upstart social media platforms—you know, it’s not about gatekeeping, but it is really about what are the moments and markers that we understand of how practices of expression and interaction are working for communities that are seeking to grow so large.

BRANDY ZADROZNY: So let’s talk a little bit more about rights. When we, when we talk about rights—you sort of alluded to this before—it really means different things to different people, depending on who is demanding those rights and where that person lives in the world, right. And the truth is that these conversations are really focused often on North America. A lot of that is the fault of me and journalists like me.

But equal access to a shared body of knowledge is a really radical idea in many parts of the world. So I’d love to ask you about how you operate in those places specifically, because a lot of platforms have been in the news lately because they have a habit of appeasement. Places like India, Israel, have been in the news lately, but also Russia, Vietnam, Myanmar, et cetera, et cetera, et cetera.

So how did Wikipedia navigate similar challenges or how do they resist demands from state regulators and censors? How did you navigate that space?

KATHERINE MAHER: Yeah, I love that you used the term radical. I always talk about—and free knowledge is radical. If you don’t live in an environment in which you have access to free expression or access to free information as sort of a default standard, something like a free encyclopedia is fundamentally transformative in all sorts of ways—politically, economically—and that actually can be quite threatening to many governments around the world. And one of the things that we dealt with often at Wikipedia is really thinking about, how do we protect the people who participate? How do we protect the privacy of people who read?

One of the ways in which we did this is we were very intentional in actually building the platform architecture so that we couldn’t take content down. There was no technical solution to be able to selectively censor in one country or another. That was simply an impossibility. If content was going to be removed, it was going to be removed universally, which meant that we had a very compelling argument to say we’re just not removing content. We’ll have a conversation around standards with the editing community if something is inaccurate, but, by and large, when a government would come to us and say, we’d like to take this down, it was actually just an argument for us to improve the content.

So, for example, at one point we got a notice from the Russian regulators; they had an objection to an article about marijuana in Russian. The community responded to that and said, you’re right, there are these sort of drug laws in Russia, but marijuana has a chemical compound, it’s THC, let’s really lean into that and make that article much more robust around sort of what the nature of the botany of this plant is. What is the chemical function of something like the chemical compound of THC? And overall, the article was improved significantly, and there was, concurrently, public pressure from the Russian public, which is one of the most active readerships of Wikipedia in terms of overall markets, in response to that regulator’s decision. That is one way that we navigated some of that pressure.

Another way that we really thought about this, beyond the platform architecture, was we didn’t put people in countries where we knew that there was going to be the opportunity for governments to use them as brokers and pawns. So in 2016, I believe it was, the Turkish government sent us a takedown notice asking us to remove two pages that they did not appreciate: references to President Erdogan and his family and their involvement in the Syrian civil war as a state sponsor of terrorism. We said, this information is completely accurate. It has been well reported by UN agencies, by the BBC, you name it. We’re not removing this. And Turkey went ahead and censored us for two-and-a-half years. We fought that all the way through the Turkish supreme court. We won in that battle after two-and-a-half years. We’re still continuing to fight that in the European Court of Human Rights.

But in the meantime, one of the reasons we were able to both let that market be censored and continue to fight this fight was that we didn’t have assets on the ground. We didn’t have people who were going to be at risk. We were able to protect our editing community’s privacy because we had invested very heavily in ensuring the security of our platforms and the security of their identities. And because we are a nonprofit organization, we could take the hit for those 80 million people.

Now, do we want to be not accessible in Turkey? Absolutely not. This is part of our mission. We wanted to ensure that free information was available. It’s a critical resource for people living in Turkey. And yet, we knew that if we ever took content down and bowed to political pressure, it would just be that slippery slope argument all the way around the world. And how could you ever really consider what you could trust on Wikipedia again?

And so really thinking about sort of our jurisdictional outlay, where we had people on the ground, where our servers are located, what our platform was designed to do has been one of the ways of making ourselves very censorship-resilient and also, you know, just not having that data so that governments can’t come after us or our people.

BRANDY ZADROZNY: I love that. (Laughs.) Let’s talk about—

KATHERINE MAHER: We stole that last one from libraries. You know, libraries—(laughs)—they don’t track, you know, the records of what people take out. There’s a lot of interesting sort of US constitutional precedent around this. But one of the things that we do is we just—we don’t track any of our users precisely so that no one can come after us and say, you know, what is someone looking at in Belarus? What is someone looking at in India? Can we have that data in order to go identify them? And certainly we’ve been subject to those requests.

BRANDY ZADROZNY: Yeah, I think also, you know, that there’s an ethos that drives the library, which is public service. Like, at the very heart of it, it is about public service and knowledge and providing a service to anyone who needs it, whether that’s a child who needs a children’s book or someone who needs to, you know, borrow a rake from a Vermont public library that lent out gardening tools. I mean, it’s what every user needs, and I really appreciate that. And I think it shares an ethos with Wikipedia that I appreciate.

But I want to talk a little bit about what you just sort of stated as part of your power which came from your funding model. And, you know, I think that the Internet, as most people experience it, is through social media, and that’s generally been entrusted to the free market, you know, the idea that the best information will flow to the top and that information will be collected, organized, and then offered back up to users in the furtherance of a goal, which is profit, right? (Laughs.) And that model incentivizes lots of unhealthy behaviors by users and it certainly incentivizes predatory behaviors from platforms, whether that’s dealing with your privacy or your data or having you spend more time on site with increasingly polarized opinions.

But Wikipedia is a nonprofit. So I wonder, and I have an idea, but do you think that that’s a fundamental ingredient to Wikipedia’s mission and its success in fulfilling that mission? And if it is, do you think it’s even possible for a for-profit tech company to create a community for the public good?

KATHERINE MAHER: Yeah. So I do think it’s absolutely essential to Wikipedia’s success on a number of levels. Number one, as I said, we didn’t have a market incentive, so we were able to make decisions that are really about the long term. Over the course of multiple years we can make investments knowing, for example, that we’re not going to be able to turn a profit on ad sales by going and really investing in indigenous African languages. But that’s fine. That’s not our goal. Our goal is to bring knowledge to the world.

And we can offset that by really leaning on asking for support in mature, rich markets—wealthy markets for donations. And very often, people are very happy to do that because they see themselves as part of contributing to this broader social mission. So that is obviously really important. As I mentioned earlier, you know, when we are censored in a place like Turkey, that’s 80 million people, that would be a hard hit for somebody who is running a for-profit organization. For us, that was just a question of principles and values. You know, we’re not going to bend to censorship.

But I also think that there are other ways in which this is sort of under-the-radar very important in terms of thinking about content integrity. If we had a model by which we needed to sell you ads or get you to stay on-page for a longer period of time, we would be incentivized to show you all sorts of different types of information, to think about what that user journey is as you’re learning about something that you’re interested in, to redirect you to something that’s more profitable, away from something that’s a little bit perhaps more esoteric.

In saying that all content is neutral, all information is equally valid, I don’t care if you’re reading about pop stars or philosophy, it really changes the way in which we approach the product. It really changes the way in which we’re able to serve our readers. And I think that that, you know, to your opening conversation about trust, it is fundamental and critical. By asking volunteers to edit, but also by asking people to volunteer donations, it’s giving them a stake and agency in this product.

When we—I used to go and travel around the world to see members of our communities. I remember speaking to someone in Chile who told me, you know, it’s incredibly important that Chileans are the number-one contributors to Wikipedia in South America because it means that this is ours too. This isn’t something that’s owned out of the United States. This is something that we all have a stake in.

Now, transitioning, can for-profit tech companies do something like this? Absolutely. I heard in the opening speech someone referenced this idea of externalities and the friction of externalities that occur through large social media platforms. What are the sort of harms that are externalized from their product and from their networks? One of the conversations that I’ve been having with these larger social media platforms is that you already have tremendously high friction, high-cost externalities to your products. You’re dealing with, you know, responding to regulators who are frustrated—very often rightfully so—with sort of your lack of responsiveness to democratic norms, lack of responsiveness to being able to be governed under any particular jurisdiction.

You’re dealing with PR harms relative to people’s confidence and trust because you keep sort of screwing things up in public and going back on what you’ve said. One of the ways in which I think social media platforms, or tech giants as a whole, could really address some of these challenges is by taking on models that really bring individuals into thinking about the setting of policies for content—content questions of integrity, truth, and sort of the nature of what community standards actually are. The same thing holds true for harassment and violent behaviors against women and minorities online.

It is expensive. It is hard. It takes multiple years to set up. But I don’t believe that those costs—in fact, I know that those costs are not significantly greater than what is already being expended by these companies to manage their reputations and to manage the sort of regulatory environment. And, in fact, the end outcome is something that is going to make the product stronger and more serviceable to a broader number of people, that is more aligned with what we’ve been talking about from the beginning of this conversation—which is human rights, civil rights, and pluralistic, flourishing societies.

So, yeah, I absolutely think it’s possible. I do think it requires a degree of appetite for doing something that hasn’t been done before. But then again, nothing like these platforms has existed previously. And perhaps we are in this moment of paradigmatic shift, where we really have to think very differently.

BRANDY ZADROZNY: Yeah. Let’s see. (Laughter.) I like your optimism, though. I do.

KATHERINE MAHER: I want to say it’s not—it’s actually not optimism. I think that—as I said, this is really hard. There are not easy solutions to this. But I think that we’ve run into a series of walls here where the sort of easy solutions or the quick fixes have not proven to be functional, and the negative harms to our societies are manifest. We are, you know, in this sort of moment of pause, I think, here in the United States, but we’re going to be back in an election cycle in just a few months’ time really. And all of this is going to come right back out to the fore. So I want to say I’m actually sort of a pragmatic optimist. But what I like to say is I think that sometimes you have to be ambitious and take on things that are seemingly impossible. Wikipedia wouldn’t exist if we hadn’t tried that.

BRANDY ZADROZNY: OK. So now that you’ve been—now that you’ve mentioned the upcoming election—excuse me (laughter)—let’s talk about, let’s talk about disinformation. And that’s really the thing that I report on most. So I report a lot about disinformation and misinformation. And I don’t report a lot on Wikipedia. And I think that that’s—those two things go hand in hand, right? Outside of—there are a few notable exceptions from state actors and PR shops trying to manage reputations, but Wikipedia just isn’t known for being a hub of mis- and disinformation, which is pretty insane when you think about it because what better way to skew the narrative or spread misinformation and disinformation than by attacking the world’s encyclopedia?

So that’s pretty interesting and exciting. But 2020 was really something else. Obviously, we had the pandemic and confusion over what we were supposed to do and how we were supposed to respond, both medically and socially. And then we had—you had just energized contingents of anti-vaxxers that were really dying to influence the conversation. And then, you know, you also had the election, which was just rife with misinformation and disinformation, and just a real threat to democracy, actually.

So I think it’s fair to say—and you might think I’m wrong here, and I’m willing to be corrected—but that the largest and most influential platforms—like Facebook, YouTube, Twitter—that they overall failed when it came to keeping their spaces free of this junk. Were there threats to Wikipedia in terms of mis- and disinformation? And did Wikipedia, as it seems to me, really make it through unscathed? And if so, how?

KATHERINE MAHER: Yeah. I mean, I would never sit here and say that there was no misinformation or disinformation. I think that we—I started by saying why is Wikipedia trusted? We have the humility to admit we’re a work in progress. What I would say is that we took a very active approach to disinformation and misinformation coming into not just the last election but also looking at how we supported our editing community in an unprecedented moment where we were not only dealing with the global pandemic, but we were dealing with a novel virus, which by definition means we knew nothing about it in real time. And we were trying to figure it out as the pandemic went along.

And so we really set something up, in response to both the pandemic but also in response to the upcoming US election, and as a model for future elections outside of the US—including a number that are happening this year. We just obviously went through yet another Israeli election. (Laughs.) The model was around how do we create sort of a clearinghouse of information that brings together the institution of the Wikimedia Foundation and the editing community in order to be able to identify threats early on through conversations with government, of course, as well as other platform operations, to understand sort of what the landscape looks like.

It’s important to understand that in the disinformation space people don’t view individual platforms in isolation. They view the whole space as a domain in which individual platforms play off of each other. So Wikipedia was often used, or would be attempted to be used, as a reference source that could then be shared in other spaces to buttress arguments. And those spaces were often not, like, the Facebooks of the world. They would be instead, like, the 4chans of the world, or the Reddits of the world, in which people were having conversations. And so just a different sort of function of the disinformation landscape.

The way in which we addressed this was sharing that information with our editing community. I think Wikipedia’s very unusual in that unlike other platforms its entire business model for the last 20 years has been about really, not perfecting, but strengthening the process of how to get to good information from a sea of sort of diffuse, undifferentiated sources. And so what that means is that the editing community has a very high sort of scrutiny on what constitutes a reliable source.

There are clear practices in place about, if you’re editing on a sensitive topic, such as medical information or a biography of an individual or an upcoming political event or a breaking news story, what constitutes something that’s reliable enough to go into that. And protections about who can actually contribute. You can’t just be sort of a new, random individual, or what we would think of as, like, a sock puppet, which is part of an influence campaign, and be able to contribute to that.

And of course, the sort of—the best part of it all is that you can lock or undo any change to a Wikipedia article as soon as it, essentially, happens. Which means that, you know, if there is a disputed event that is taking place, that article can be in a—can sort of sit in a moment of stasis until that dispute is unwound. And so these are some of the protections that occurred to prevent Wikipedia from really sort of sliding down that rabbit hole of disinformation.

I think it worked fairly well. And I think that there are things that can be learned from the way that the editing community approaches that, in order for us to sort of take that to other platforms and really think about how to—how to increase sort of the visibility and the scrutiny of content and content policies overall.

BRANDY ZADROZNY: So we have some questions coming in, but I’m going to—I have one more question that I’m really curious about. And then I’ll let everybody else weigh in. I just want to ask you about labor really quickly. Because we’re at this moment where we’re thinking, as everyone—we’re just thinking about who fuels the things that we use, and the systems that we benefit from. Like, who’s doing the backbreaking work here? And we can talk about physical labor, but also digital labor, right? And specifically, you know, the labor that powers and polices websites.

I think about groups on Facebook, admins that spend all day sometimes just, like, looking through these comments and trying to be a good steward of information there. I think of subreddits. I think of Nextdoor. All varying degrees of good jobs on these platforms by admins. (Laughs.) But how do you think about the digital labor of editors at Wikipedia? You know, what are the ethics of something that, you know, people might consider exploitation of unpaid workers, really, even though, you know, it’s very clear that people who are very active on Wikipedia love it, right? Like, that’s why they do it. There is a real joy there, and a real just mission there. And it’s theirs and it belongs to them. But how do we—how do we balance that with also caring about the people who are powering and keeping our websites running?

KATHERINE MAHER: Yeah. I do absolutely agree that the people who contribute to Wikipedia do it because they love it. That is what we see time and time again. However, one of the things that that means is that it’s because they prioritize that time, it’s because they have the free time to do it, it’s because they’re in a socioeconomic status that enables them to have the leisure time. And I think that that’s so critically important. You know, I don’t want to compare Wikipedia editors to essential workers that, you know, were on the front lines during the pandemic and who were at risk of physical harm. But there is no question that we saw burnout and exhaustion in the editing communities managing these front lines of waves of disinformation, waves of new information. And that really did raise questions about the sustainability of something that relied on volunteer labor. So that was an active conversation for us over the course of the past year.

Now, adding to that—and it’s something we haven’t talked about yet—is who constructs information? It comes back to this who has the leisure time to do it. And everybody who gets just, like, a step into Wikipedia knows that it is primarily men who contribute to Wikipedia. It’s about 85 percent male. And there are significant gaps in terms of coverage of women, minorities, people from the global south, minority languages. Anyone who’s part of a group that has been historically marginalized or excluded, you’re going to see a gap on Wikipedia about that content. You’re also going to see a relative deficit of folks from those communities editing Wikipedia.

Now, there’s many reasons for this. But there is no question in my mind that part of this is because volunteer labor is not something that is equally—volunteer opportunities are not something that is equally available to everyone. And so as we’ve been really pushing over the course of the last few years, under my leadership, to think about what does it mean, what are our obligations, what’s our mission commitment to folks in emerging markets, folks that have been historically underserved? One of the big questions is, is it time to actually pay people to participate?

Now, I’m going to come down and say, controversially, I actually think that for participating in an administrative function in any of those forums you just mentioned, or in something like Wikipedia, it may be time to really think about what a model of compensation looks like in order for us to be able to redress some of the exclusion that we’ve seen over time. And overall, if we are able to bring more people in as contributors in various different functions—paid roles and unpaid roles, what have you.

And do so in a way that doesn’t create sort of its own complications in terms of incentive structures that could be really harmful—and I recognize that’s a big challenge—the net benefit of having a more representative body of knowledge, whether that’s on Wikipedia or off Wikipedia, is a fundamental benefit to all of us in society here today, right? The more representative our knowledge is, the better our data is, the more that it is truly encompassing and reflective of all those experiences, the better my understanding of the world is going to be, the better I’m going to feel like I participate in that body of knowledge, the more agency that I’m going to have as an informed citizen. And you know, that’s—if we’re talking about open and pluralistic societies, an informed citizenry is a foundation of that.

BRANDY ZADROZNY: OK. I’m going to ask you some of these questions that I’ve got because they’re super interesting. Do you think that the lack of online user interaction features like liking, resharing, commenting on Wikipedia contributed to it being a disinformation-free platform?

KATHERINE MAHER: Hmm. You know, I think that there is no question that some of the incentive structures for sticking, sharing contributions—we talk about it as sort of intrinsic versus extrinsic motivations: intrinsic, like I really like participating; extrinsic, someone tells me that I did a good job, right? I do think that that makes a difference.

I do also want to point out, though, there’s a perception that Wikipedia is not a social platform. That’s just fundamentally not true. You know, the more edits you contribute, the more work you do, the more sort of internal cred you get within that community. It’s just not as visible to readers.

But yeah, I mean, I think that made a big difference, and I think it’s part of what makes the model work.

BRANDY ZADROZNY: So on models, funding, and costs behind technology and industry and the impact that it’s had on the content, you said it’s possible for companies to address the negative harms. Should this shift—and this is a really good question for where we are today because I think we’re going to have a lot of these stakeholders watching—should this shift come from within the industry or come from the government side, a combination, civil society, another authority? Where are the solutions going to be found?

KATHERINE MAHER: Yeah. I’m here to say multistakeholder always. (Laughter.) I think that civil society has certainly been doing their part, and some of the most influential conversations that happen behind closed doors are often with civil society holding these platforms to account. It’s important—I mean, they have those conversations outside of closed doors too, but often the more effective ones happen behind closed doors where you can really have a give and take.

I do think that platforms have an opportunity to step up here and to really be more experimental. I have my criticisms, for example, about the Facebook Oversight Board, but, you know, a $130-plus million commitment to really trying to rethink what governance looks like is not an insubstantial commitment, and I’d like to see some, you know, more bold efforts along those lines. Some of them are going to fail. Some of them are going to be successful. We’re all going to end up with egg on our face. But it certainly feels like it’s a—it’s a better step forward than sort of sticking with the status quo.

On the side of governmental regulation, the number-one challenge here that we see is, of course, that the First Amendment in the United States is a fairly robust protection of rights. And that is a protection of rights both for platforms—which I actually think is very important, that platforms have those rights to be able to regulate what kind of content they want on their sites—but it also means that it is a little bit tricky to really address some of the real challenges of where does that information come from and sort of the influence peddlers who have made a real market economy around it. So I think that there are interventions—things like political ad regulation and, you know, the sort of dark space of clickbait advertising writ large—that are real opportunities in that space. But in the general sort of media speech space, I think that is really tricky and we don’t have a lot of good theorizing around what would work there.

BRANDY ZADROZNY: Let me ask you—I’m really interested in this one. Do you think editing communities would work for social media platforms like Facebook and Twitter? And how would you envision that working, if so?

And I just want to add I was really excited about Twitter’s Birdwatch platform, and they—it was sort of borrowed from the best of Wikipedia and the best of Reddit, and it had big promises, and I’ve been feeling as I’ve been watching a little let down. So I’m wondering what you think about possibly that, but generally how you can apply Wikipedia’s editing feature to the rest of the social Web.

KATHERINE MAHER: Yeah, I was excited about Birdwatch, too, but I felt like from a product standpoint it was unlikely to be successful. It was—there was no closed feedback—there was no closed loop. So a closed-loop system in sort of the idea of computer science or really any system’s design is that, you know, you have inputs to the system, you have an outcome. The outcome informs the inputs. And really, you know, what works is then—is built in and what doesn’t work is sort of discarded, and the system continues to evolve.

Birdwatch was essentially an external site. It’s very useful and perhaps interesting from an academic standpoint to be able to have that annotation or understanding of what kind of disinformation is shared, but it wasn’t actually informing my experience as a user of Twitter and it didn’t seem like it was informing product design in terms of the ways in which Twitter continued to evolve as a platform.

So, like, a good first step. I’m not going to say that it wasn’t—I’m glad Twitter did it, but it didn’t seem to me as though it was structurally set up to be successful. But, hey, it’s a pilot. Maybe they’re learning from it and going to continue to iterate. I certainly don’t want to, you know, discourage that.

I think that one thing that is true of Twitter that makes it slightly different—and I’m actually going to come out and say I think Facebook might have an advantage here—is that Twitter is a—fundamentally, an expression platform. It is not a community. There are intra-Twitter communities, like, you know, there’s Black Twitter, there’s journalism Twitter, and they have their own sort of norms that have evolved over time, and those norms have somewhat addressed product. But Twitter is not a community. It’s like—it’s interest groups and communities. And I feel like there’s much greater opportunity there for those groups to create some approaches to what content integrity norms actually look like on that site in ways that are responsive to all sorts of different questions—human rights questions or community norms questions, community standards questions.

Now, I think—and there is sort of an interesting conversation about how that matches up at the global level and is applied at scale. But, still, I think that there’s some good space for exploration there. So yeah, I mean, as I said, nothing’s working now. Might as well continue to try new things.

BRANDY ZADROZNY: Yeah, I’m really—I’m interested in that idea of scale, too, because we—I’ve heard it several times. It’s what everybody talks about. For a while it was used as a goal, but recently—(laughter)—it’s almost like a, like, a dirty word. In lots of responsible tech circles, if you get too big, right, you’re going to fail to host the communities or stop disinformation or stop harassment or just be responsible.

I guess, do you—do you feel that way? Do you feel like nothing that could scale to a global audience can ever be—I don’t know. Do you think that that’s inherently dangerous, just the idea of scale in and of itself, necessarily, is just a danger zone?

KATHERINE MAHER: So yes and no. I think that scale at the size of a singular product is dangerous, and if you think about this, like, there’s this whole idea of this mythical median individual, right, and, you know, there’s all sorts of interesting stories about how, like, when designing fighter jets and trying to build ejection seats, and you sort of take the median of all measurements of a human body and you build an ejection seat, and then it doesn’t work because nobody actually is that median—like, people are short, people are tall, people are heavy, people are, you know, light.

When we build product to be sort of this mythical median that meets everyone’s need and then try to apply that at scale or policies at scale to the size—I’m just going to say, again, three billion people, about half the world is connected—it does not work.

Now, it doesn’t mean that you can’t build products that serve that number of people. We serve—at Wikipedia, we served a billion people every single month. But nothing about the platform was really about scale. It was the opposite of scale, which makes it very challenging for engineers to be able to work on it, which is, you know, why we invested so heavily in engineers at the foundation.

But what it does mean is that every single decision is really an individualized decision. Every content decision is made at the level of that edit. Every policy decision is made at the review of a specific line, interpreted against sort of a body of policies that apply in different contexts because you’re going to have different considerations for 18th century or 16th century Islamic art versus current breaking news events, right?

And so the idea of trying to scale to the size of the entire network and govern at the scale of the network, I think, is an implausibility and here’s my sort of, like, pop history reason why. All of our norms of governance emerged at the level of our communities. We used to live in villages. We used to walk around the corner. We used to have sort of understandings of what privacy meant in that context, what expression meant when we were standing in the public square.

And over time, we codified that into laws, into regulation, through decisions, and it took us, literally, centuries to be able to come up with the systems that work today. And I don’t think anyone is saying that they work perfectly, right? Like, they’re still open for revision in many ways. Democracy, the best—the worst—the worst form except for all the others, right? So the idea that we could have figured out what governance at the scale of a network of three billion looks like over the course of, basically, thirty years, maybe even twenty, seems like a ridiculous telescoping of the ways in which norms evolve in human civilization and societies.

I don’t even know what I want privacy to look like at the scale of three billion for myself. And so I think that a really important sort of starting point is to say, actually, how can we chunk this down a little bit more in order to be able to be responsive to the very specific problems that we’re having? Whether that’s looking at some of the questions that are coming out of India around freedom of expression, you know, understanding some of the unique problems that folks who are in the caste activism community are addressing, whether that’s conversations around climate misinformation, how do we chunk that down a little bit smaller and then assess within that context what that means against sort of our normative understanding of human and civil rights?

BRANDY ZADROZNY: So let me ask you one more question from the audience because there are many. I think we can keep talking—(laughs)—for a much longer time. But, OK, so Wikipedia is famous for regulating editors—for its self-regulating editors community. And this one asks, you know, did the leaders of Wikipedia know, and did you even imagine in, you know, 2016, that community-driven content management would be effective—when Wikipedia was launched and then when you joined? Do you still think that a platform can maintain its content quality by relying on an all-volunteer editors community? And this is—you sort of got to this in labor, but I’d like to hear more.

KATHERINE MAHER: (Laughs.) I mean, Wikipedia is the happiest accident, perhaps, that the world has known, right? Like, nobody thought it was going to be this successful, including our founder, Jimmy Wales—the first to tell you that. We’re all joyful that it is. And it’s one of the responsibilities of, I think, everyone who works within the community as a volunteer or as a paid employee to really think about what stewardship means for the next twenty, next forty, next sixty years because none of us view this as sort of, like, a quick win, right? (Laughs.)

Do I think that it is sustainable? I—as I mentioned earlier, I think that there are real challenges relative to sort of this question of representation and who gets to participate, but overall having a governance model that is participatory and volunteer-driven and highly porous, and which anyone can be a part of, I think is absolutely essential to the long-term stewardship of a project like Wikipedia. I would never want to see it sort of codified or ossified into, you know, a pure sort of paid understanding of responsibility at the governance level, at the board level, what have you. It is much more akin to the ways in which representative democracy works. Or maybe a better model might be a co-op model—(laughs)—in which, you know, people do all have a voice. And I—and I think that is essential for what Wikipedia essentially is, which is it’s meant to be a public good.

Knowledge is a public trust. It doesn’t belong to any of us. It belongs to all of us. And to the extent that Wikipedia plays a fundamental, you know, epistemological backbone of the internet role, that really should be something in which we all feel like we have agency and a role.

BRANDY ZADROZNY: Just really quickly, I think we talk a lot about platforms getting new features and rolling out new bells and whistles, and Wikipedia has always seemed basically the same—a reliable source of—it’s an encyclopedia. Like it is what it is. Are there—are there pressures to expand Wikipedia to be something else, to be something extra for the community? Or not even pressures, but are there plans? Is there some great new plan for Wikipedia that will even expand the world’s knowledge in some way?

KATHERINE MAHER: Absolutely. So, first of all, Wikipedia’s changed a lot if you go back and look at 2001, but it has changed—(laughs)—in ways that have been very intentionally incremental because we need to be available to people whether you’re on a dialup connection running Microsoft Internet Explorer 6—which some people still run—and still able to load relatively quickly, still be able to access whatever it is that you need that you’re looking for. And so that lightweight sort of interface is very intentional, and it is actually a performance issue. It’s a security issue. It’s by design so that you can get the information that you need and get out of there.

But to the extent that what we—what Wikipedia is thinking about for the future, my vision of what Wikipedia can and should be is what I said earlier, the epistemological backbone of the internet. There are no other sources of high-quality information that so many people rely on in so many languages as sort of an omnibus place to just find out context about the world, history about the world. Certainly, I mean, there’s room for improvement. There’s a lot that’s missing in it. But it is—has become the de facto place.

The way that we have been looking to address this is really by thinking of ourselves less as a website and more as a body of data that can and should be able to be reused across multiple different experiences. So when you talk to a voice assistant, which is a hugely beneficial assistive technology for folks with disabilities or folks who are second-language learners, lower literacy, a lot of the answers today come from Wikipedia. And so we should be investing in languages there that are underserved currently.

When we think about the future of what information seeking looks like, I am going to tell you it’s not—well, the Web is always going to be there, but it’s going to diminish as a percentage of our overall information-seeking environment. So building really robust APIs and endpoints to be able to utilize that data, structure that data, that’s going to be a huge part of the future of Wikipedia. The design of apps and things like that, always good, but really thinking much more structurally at this point about what twenty years down the line looks like.

BRANDY ZADROZNY: Again, I could keep talking to you all day. I think this is so fascinating, and I just really appreciate your time and your work to expand the body of knowledge for us all. And I hope some of these solutions can make their way to the other platforms and the rest of the internet. Thank you so much, Katherine.

KATHERINE MAHER: Thank you so much, Brandy. I really enjoyed it.
