
Man Versus Machine? Self-Reports Versus Algorithmic Measurement of Publications / Xuan Jiang, Wan-Ying Chang, Bruce A. Weinberg.

By: Jiang, Xuan | Chang, Wan-Ying | Weinberg, Bruce A.
Material type: Text
Series: Working Paper Series (National Bureau of Economic Research) ; no. w28431.
Publication details: Cambridge, Mass.: National Bureau of Economic Research, 2021.
Description: 1 online resource: illustrations (black and white)
Available additional physical forms:
  • Hardcopy version available to institutional subscribers
Abstract: This paper uses newly available data from Web of Science on publications matched to researchers in the Survey of Doctorate Recipients to compare scientific publications collected by survey and algorithmic approaches. We aim to illustrate the different types of measurement error in self-reported and machine-generated data by estimating how publication measures from the two approaches relate to career outcomes (e.g., salaries, placements, and faculty rankings). We find that the potential biases in the self-reports are smaller than those in the algorithmic data. Moreover, the errors in the two approaches are quite intuitive: measurement error in the algorithmic data arises mainly from the accuracy of matching, which depends primarily on the frequency of names and the data available for making matches, while noise in self-reports is expected to increase over the career as researchers' publication records become more complex, harder to recall, and less immediately relevant for career progress. This paper provides methodological suggestions for evaluating the quality and advantages of the two approaches to data construction. It also provides guidance on how to use the new linked data.
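The abstract's core idea can be illustrated with the classical measurement-error model: regressing an outcome on a noisily measured publication count attenuates the estimated coefficient, and a noisier measure attenuates it more. The following is a minimal synthetic sketch of that logic, not the paper's actual data or code; the noise levels, salary equation, and variable names are all illustrative assumptions.

```python
# Synthetic illustration of attenuation bias from measurement error.
# Assumed setup: salary depends linearly on a true publication count;
# two noisy measures of that count stand in for self-reports (smaller
# error) and algorithmic matches (larger matching error).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_pubs = rng.poisson(5, n).astype(float)
salary = 50_000 + 2_000 * true_pubs + rng.normal(0, 5_000, n)

self_report = true_pubs + rng.normal(0, 1.0, n)   # smaller reporting error
algorithmic = true_pubs + rng.normal(0, 3.0, n)   # larger matching error

def ols_slope(x, y):
    """Slope from a univariate OLS regression of y on x."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

print(ols_slope(self_report, salary))   # closer to the true 2,000
print(ols_slope(algorithmic, salary))   # attenuated further toward 0
```

Under this model, the slope shrinks by the reliability ratio var(true) / (var(true) + var(noise)), so the measure with larger error variance yields the more attenuated estimate — the pattern the paper uses to compare the two data sources.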

February 2021.



System requirements: Adobe [Acrobat] Reader required for PDF files.

Mode of access: World Wide Web.

Print version record

