
Aligned with Whom? Direct and Social Goals for AI Systems / Anton Korinek, Avital Balwit.

By: Korinek, Anton | Balwit, Avital
Material type: Text
Series: Working Paper Series (National Bureau of Economic Research) ; no. w30017
Publication details: Cambridge, Mass.: National Bureau of Economic Research, 2022
Description: 1 online resource: illustrations (black and white)
Other classification:
  • D6
  • O3
Online resources:
Available additional physical forms:
  • Hardcopy version available to institutional subscribers
Abstract: As artificial intelligence (AI) becomes more powerful and widespread, the AI alignment problem--how to ensure that AI systems pursue the goals that we want them to pursue--has garnered growing attention. This article distinguishes two types of alignment problems depending on whose goals we consider, and analyzes the different solutions necessitated by each. The direct alignment problem considers whether an AI system accomplishes the goals of the entity operating it. In contrast, the social alignment problem considers the effects of an AI system on larger groups or on society more broadly. In particular, it also considers whether the system imposes externalities on others. Whereas solutions to the direct alignment problem center around more robust implementation, social alignment problems typically arise because of conflicts between individual and group-level goals, elevating the importance of AI governance to mediate such conflicts. Addressing the social alignment problem requires both enforcing existing norms on the developers and operators of AI systems and designing new norms that apply directly to the systems themselves.
Holdings
Item type: Working Paper
Home library: Biblioteca Digital
Collection: Colección NBER
Call number: nber w30017
Status: Not for loan
Total holds: 0

May 2022.

System requirements: Adobe [Acrobat] Reader required for PDF files.

Mode of access: World Wide Web.

Print version record
