Automating Automaticity: How the Context of Human Choice Affects the Extent of Algorithmic Bias / Amanda Y. Agan, Diag Davenport, Jens Ludwig, Sendhil Mullainathan.

By: Amanda Y. Agan, Diag Davenport, Jens Ludwig, Sendhil Mullainathan
Material type: Text
Series: Working Paper Series (National Bureau of Economic Research) ; no. w30981
Publication details: Cambridge, Mass.: National Bureau of Economic Research, 2023
Description: 1 online resource: illustrations (black and white)
Other classification:
  • A12
  • D63
  • D83
Available additional physical forms:
  • Hardcopy version available to institutional subscribers
Abstract: Consumer choices are increasingly mediated by algorithms, which use data on those past choices to infer consumer preferences and then curate future choice sets. Behavioral economics suggests one reason these algorithms so often fail: choices can systematically deviate from preferences. For example, research shows that prejudice can arise not just from preferences and beliefs, but also from the context in which people choose. When people behave automatically, biases creep in; snap decisions are typically more prejudiced than slow, deliberate ones, and can lead to behaviors that users themselves do not consciously want or intend. As a result, algorithms trained on automatic behaviors can misunderstand the prejudice of users: the more automatic the behavior, the greater the error. We empirically test these ideas in a lab experiment, and find that more automatic behavior does indeed seem to lead to more biased algorithms. We then explore the large-scale consequences of this idea by carrying out algorithmic audits of Facebook in its two biggest markets, the US and India, focusing on two algorithms that differ in how users engage with them: News Feed (people interact with friends' posts fairly automatically) and People You May Know (people choose friends fairly deliberately). We find significant out-group bias in the News Feed algorithm (e.g., whites are less likely to be shown Black friends' posts, and Muslims less likely to be shown Hindu friends' posts), but no detectable bias in the PYMK algorithm. Together, these results suggest a need to rethink how large-scale algorithms use data on human behavior, especially in online contexts where so much of the measured behavior might be quite automatic.
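The mechanism described in the abstract (an algorithm that infers preferences from observed behavior inherits more bias the more automatic that behavior is) can be illustrated with a small simulation. The sketch below is purely hypothetical and is not the paper's experimental design or any production ranking system; the automaticity parameter, the out-group penalty, and the click model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inferred_bias(automaticity, n_users=2000, n_items=50):
    """Toy model: true preferences are group-neutral, but more 'automatic'
    behavior adds an out-group penalty to observed choices. A naive algorithm
    that estimates preferences from those choices inherits the penalty."""
    # Whether each item is in-group (1) or out-group (0) relative to the user.
    item_in_group = rng.integers(0, 2, size=(n_users, n_items))

    # Deliberate (true) engagement utility: identical across groups by design.
    true_utility = rng.normal(0.0, 1.0, size=(n_users, n_items))

    # Automatic behavior mixes in a snap-judgment penalty on out-group items.
    out_group_penalty = 1.0  # assumed size of the automatic bias
    behavioral_utility = true_utility - automaticity * out_group_penalty * (1 - item_in_group)

    # Observed clicks are a noisy function of behavioral (not true) utility.
    click_prob = 1.0 / (1.0 + np.exp(-behavioral_utility))
    clicks = rng.random((n_users, n_items)) < click_prob

    # The "algorithm": estimate preference from observed click rates,
    # as any model trained on this behavioral log would.
    in_rate = clicks[item_in_group == 1].mean()
    out_rate = clicks[item_in_group == 0].mean()
    return in_rate - out_rate  # inferred out-group bias (0 = unbiased)

for a in [0.0, 0.5, 1.0]:  # 0 = fully deliberate, 1 = fully automatic
    print(f"automaticity={a:.1f}  inferred out-group bias={simulate_inferred_bias(a):.3f}")
```

With automaticity set to zero the inferred bias is near zero, and it grows as behavior becomes more automatic, mirroring the paper's qualitative claim that the more automatic the behavior, the greater the algorithm's error about user preferences.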
Holdings
  • Item type: Working Paper
  • Home library: Biblioteca Digital
  • Collection: Colección NBER
  • Call number: nber w30981
  • Status: Not For Loan
  • Total holds: 0

February 2023.

System requirements: Adobe [Acrobat] Reader required for PDF files.

Mode of access: World Wide Web.

Print version record
