Guest Post: Safiya Umoja Noble and the Ethics of Social Justice in Information (Part 1) – The Scholarly Kitchen

July 21, 2018


Dr. Safiya Umoja Noble presented the opening keynote address at the Society for Scholarly Publishing Annual Meeting in May 2018. She is an assistant professor at the University of Southern California Annenberg School of Communication and a co-founder of the Information Ethics & Equity Institute. Noble’s academic research focuses on the design of digital media platforms and their impact on society. Her monograph on racist and sexist algorithmic bias is entitled Algorithms of Oppression: How Search Engines Reinforce Racism (2018). She is an Associate Editor for the Journal of Critical Library and Information Studies, and co-editor of two books: The Intersectional Internet: Race, Sex, Culture and Class Online (2016), and Emotions, Technology & Design (2015).
Editor’s Note: Jocelyn Dawson and Rebecca McLeod interviewed Safiya Noble for this post on behalf of the Society for Scholarly Publishing’s Diversity and Inclusion Task Force. Jocelyn is the Journals Marketing Manager at Duke University Press and Rebecca is a Strategic Account Manager at Copyright Clearance Center (CCC).


You have had a very robust speaking schedule this year. Do you feel an imperative to get your message out? What do you hope that your audiences will take away from your talks?

There’s definitely a lot of interest because we’re at a moment where people really understand, or are coming to understand, that our technologies are not neutral, that there are a whole host of biases embedded in them. The book, Algorithms of Oppression, is landing at the right moment, when the public is ready to hear some of these messages. I wrote it in a way that would be both legible to academics and accessible to the public.

With the scandal around Cambridge Analytica and Facebook, it seems as though there is a moment of reckoning for some of the technology providers. Do you feel like your work was prescient in that way?

I do. I think there are several academics like me who have been trying to talk about the labor politics and the racial and gender biases involved in technology. Most of those people are scholars of color or queer scholars, gay scholars, women in particular, who experience marginalization in their own everyday lives. And we, of course, were able to see some things coming out of Silicon Valley, for example, that maybe others weren't quite ready to accept.

It’s been wonderful to see journalists, in particular, pick up our work and start to run with it. They’ve done a lot more of the translating of the research for lay audiences, which has been really remarkable and important. Back in 2012 when I was talking to NYU Press about publishing this book as Algorithms of Oppression, we had this long running joke because their first response was, “no way — nobody knows what an algorithm is.” And, of course, now kids know what algorithms are, the elderly know what algorithms are, and we’re all clear that there are these kinds of technical processes that have impacts on society.

I’m glad that I fought for that title, and I’m glad that NYU Press understood that there might be a moment when these words and these concepts would make broader sense. A lot has changed since 2012 — now in 2018, major journalists are writing stories on the topic. Wired had a story a few months ago about how 2017 was the year we fell out of love with algorithms. Now this seems like common sense, but certainly years ago when I was taking up this research, it wasn’t common sense.

You wrote that misinformation can be debilitating for a democracy. I am curious to hear how particular events in the last year have resonated with your work.

When I first started writing about racist and sexist algorithmic bias in search engines, which are not dissimilar from how biases happen in other types of technology platforms, people were broadly sympathetic. But when the same precise techniques are used to throw a presidential election, people care a lot more. And they’re like, “Wait. What? Platforms can be biased? How can that be?” In some ways it’s disappointing that when structural biases in technology platforms harm vulnerable populations, people are less interested, but when it moves into mainstream politics, for example, then people start to notice. Having said that, I’m glad people notice, and we’re finally at a moment where we can talk about the multiple ways that platforms can be gamed, they can be co-opted, they can be purchased. Content can be skewed and disinformation can flourish. Disinformation has flourished around people of color for a long time. In search engines, for example, or in social media, the platforms have been reticent to acknowledge and take up interventions, but now Congress is forcing them to take up these issues as they relate to our broader political life.

You’ve mentioned that education and public libraries are the necessary supplements to search engines.

Absolutely. I think it’s interesting to see Congress invite Mark Zuckerberg and Facebook to come in and talk about the influence and power of social media in our cultural landscape, and to call for regulation and requirements around Silicon Valley’s tech practices. But, at the same time, Congress and the White House are defunding education, defunding public libraries, defunding public media, defunding public research. You can’t have all of the alternatives to proliferation of evidence-based research and knowledge shut down, or severely hamstrung, while you call for regulation of the private sphere.

You have to have a strong and robust public sphere as a counterweight to what private industry will do. And one of the things that people, including Congress, don’t understand, is that Facebook, Google, and properties like YouTube and Twitter are all interested in making money, and their advertising model is about profiting from those who are willing to pay the most for content. And, of course, we know that the most popular things are not necessarily the most true things.

I think it’s disingenuous to ask advertisers to change their business practices and to expect them to operate like public interest information spheres or portals, because that’s not what they’re designed to do. What we need is greater investment by taxpayers in public-interest alternatives, which can be part of the broader information landscape. Doing so would help us understand digital advertising platforms like Google, Facebook, and others, for what they are, rather than thinking of them as the public library, which they are not.

Safiya Noble

Has anybody from these platforms reached out to you since your book has been published?

I regularly communicate with some people who work in policy at Facebook, because I think the culture at Facebook is a bit more open to deep engagement with academics, and they have a lot of conversations with my colleagues around the country to figure out what they could do differently. I've had some informal conversations with Twitter, just kind of information sharing. And I think they, too, are very committed to figuring out how to deal with the health of the platform and the adverse impacts of Twitter on social relationships.

Google and YouTube tend to be less engaged with their critics, but that doesn’t mean they aren’t engaging with my work. I see that some of the critiques that I’ve made publicly for years have been quietly resolved by Google. But I do not have a direct personal or professional relationship with Google. I often tell people that if Microsoft were the industry leader, I’d be talking about Microsoft, but Microsoft isn’t the leader in search.

Many of the publishers and the technology vendors at this conference [The Society for Scholarly Publishing Annual Meeting in May 2018], particularly the vendors who host content, have existing relationships with Google Scholar. Do you have any advice for people who work with Google Scholar about ways that they could provide feedback?

One of the challenges for scholarly publishing is that it’s often behind a paywall. This means that the public can’t access research broadly the way they can access any random thing that gets put on the internet. I think that it’s complicated, because there is more and more consolidation and control in scholarly publishing around traditional journals or companies that have a particular reputation as being the most credible or the most legitimate in the eyes of many academics, which sometimes makes new entrants or open access journals less viable due to the old academic models relied on for tenure and promotion. So there’s a tension between traditional scholarly publishing business models and making research accessible to the public. And there is tension over monopolistic control by a few players in the scholarly publishing industry.

In practice, we see how some of those challenges play out if you’re in a smaller academic field. Let’s say you’re in African-American Studies, and you want to publish your work in Black Studies journals. There will be a smaller scholarly community that’s engaging with that material. That material may not be indexed by Google Scholar, so it may be harder to find. And because of budget constraints, some universities may not carry those journals. But an African-American Studies or Black Studies scholar will feel pressure to publish in those journals.

Now, if that work published in those journals is not find-able — if it is not indexed in a way that people can find it or access it — it can’t get cited. So the African-American Studies or Black Studies scholar also is dealing with the pressures of citation metrics that determine the value of their research and productivity in the eyes of the academy, and that limited sphere of influence is used against them in promotion and tenure processes.

These are some of the disparate impacts that metrics, digitization, and consolidation have for people in smaller fields. Google Scholar has become a major player. At a lot of research universities, evaluators are looking at Google's metrics. That's become a proxy for evaluation during the tenure and promotion processes for faculty. There is a lot at stake that publishers and universities may not realize, which can have an incredibly adverse impact, especially on scholars of color or people working in ethnic studies and women's studies or smaller subfields in the humanities and social sciences.

Google Scholar has come to be regarded, inappropriately, as a standard measure of the value and contribution and rigor of work. But the measure is skewed. A piece could be cited heavily because people are talking about how terrible that piece is; a citation count carries no judgment, positive or negative, about the work itself. So maybe you get cited a lot because your work is terrible. Well, the citation analytics are not going to tell you that; you are just going to have a lot of hits. You look amazing, and this is a distortion of, I think, how scholarly knowledge should proliferate and be rewarded and valued.

Why do we let Google have such a huge stake in proving the veracity of scholarly knowledge? This is where the publishers might have conceded too much in their partnerships. My biggest call to our field is that those of us who work in information production and dissemination need to be at the forefront of organizing knowledge on the open web, too. And so, I have been trying to secure funding and networking with university librarians and the Library of Congress, trying to think about imagining public interest search, for example, where librarians and, let’s say, information professionals broadly speaking, are deeply involved in the curation of information and knowledge from a public-interest perspective, rather than an advertising, for-profit motive.

Publishers are going to play an important role in that process. The call to action from my work, then, is to determine how we can organize ourselves to better respond to the information needs of our society, and to extend ourselves more broadly beyond academia and beyond keeping knowledge behind paywalls.

Part Two of the interview will post tomorrow…
