If in a Crowdsourced Data Annotation Pipeline, a GPT-4

Research Poster, Engineering, 2025 Graduate Exhibition

Presentation by Zeyu He

Exhibition Number 102

Abstract

Recent studies indicated that GPT-4 outperforms online crowd workers in data-labeling accuracy, notably workers from Amazon Mechanical Turk (MTurk). However, these studies were criticized for deviating from standard crowdsourcing practices and for emphasizing individual workers' performance over the whole data-annotation process. This paper compared GPT-4 against an ethical, well-executed MTurk pipeline, in which 415 workers labeled 3,177 sentence segments from 200 scholarly articles using the CODA-19 scheme. Two worker interfaces yielded 127,080 labels, which were then used to infer the final labels through eight label-aggregation algorithms. Our evaluation showed that, despite these best practices, the MTurk pipeline's highest accuracy was 81.5%, whereas GPT-4 achieved 83.6%. Interestingly, when GPT-4's labels were combined with the crowd labels collected via an advanced worker interface before aggregation, 2 of the 8 algorithms achieved even higher accuracy (87.5% and 87.0%). Further analysis suggested that when the crowd's and GPT-4's labeling strengths are complementary, aggregating them can increase labeling accuracy.
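To make the aggregation step concrete, the simplest label-aggregation algorithm is majority voting over the redundant labels collected for each sentence segment. Below is a minimal Python sketch, not the paper's implementation: the function name, the GPT-4 weighting scheme, and the tie-breaking behavior are illustrative assumptions; the label set is the CODA-19 aspect categories.

```python
from collections import Counter

# CODA-19 aspect categories: Background, Purpose, Method,
# Finding/Contribution, and Other.
CODA19_LABELS = {"background", "purpose", "method", "finding", "other"}

def aggregate_majority(crowd_labels, gpt4_label=None, gpt4_weight=1):
    """Majority vote over the crowd labels for one sentence segment.

    Optionally folds GPT-4's label in as `gpt4_weight` extra votes,
    mirroring the idea of combining crowd and GPT-4 labels before
    aggregation (the weighting scheme here is hypothetical).
    """
    votes = Counter(crowd_labels)
    if gpt4_label is not None:
        votes[gpt4_label] += gpt4_weight
    # Ties fall to the label encountered first; a real pipeline
    # would need a deliberate tie-breaking rule.
    return votes.most_common(1)[0][0]

# Example: three crowd workers disagree; an extra-weighted GPT-4
# vote can flip the outcome.
crowd = ["method", "finding", "finding"]
print(aggregate_majority(crowd))                                      # finding
print(aggregate_majority(crowd, gpt4_label="method", gpt4_weight=2))  # method
```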

Importance

Our paper examines human labor and Large Language Model (LLM) capability on text-annotation tasks. Our main findings are that (1) GPT-4 alone outperforms pure human annotation, and (2) applying aggregation methods to combine GPT-4's and humans' annotations can surpass the accuracy of GPT-4's annotations alone. In conclusion, our research indicates a probable increase in the importance of expert, gold-standard labeling; the research focus might shift from “using AI to support human labelers” to “using humans to enhance AI labeling”; and HCI challenges in the human annotation process will become central again. Our study highlights key areas in both crowdsourcing and LLMs, underscoring the need for high-quality data acquisition, user-friendly interfaces, and improved methods for integrating human input to enhance the effectiveness of LLMs.
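Beyond majority voting, aggregators that model per-worker reliability, such as the classic Dawid-Skene EM algorithm, are standard in crowdsourcing pipelines. Whether Dawid-Skene is among the paper's eight algorithms is not stated here, so the following is a generic sketch under that assumption; it illustrates how GPT-4 can be treated as simply one more annotator whose labels enter the aggregation (the data layout and names are illustrative).

```python
import numpy as np

def dawid_skene(task_worker_label, n_labels, n_iter=50):
    """Minimal Dawid-Skene EM.

    task_worker_label: dict {task_id: {worker_id: label_index}}
    Returns an (n_tasks, n_labels) array of posterior label probabilities.
    """
    tasks = sorted(task_worker_label)
    workers = sorted({w for d in task_worker_label.values() for w in d})
    t_idx = {t: i for i, t in enumerate(tasks)}
    w_idx = {w: i for i, w in enumerate(workers)}
    n_tasks, n_workers = len(tasks), len(workers)

    # Initialize each task's label distribution from raw vote fractions.
    T = np.zeros((n_tasks, n_labels))
    for t, answers in task_worker_label.items():
        for lab in answers.values():
            T[t_idx[t], lab] += 1
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-worker confusion matrices.
        prior = T.mean(axis=0) + 1e-6
        prior /= prior.sum()
        conf = np.full((n_workers, n_labels, n_labels), 1e-6)  # smoothing
        for t, answers in task_worker_label.items():
            for w, lab in answers.items():
                conf[w_idx[w], :, lab] += T[t_idx[t]]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: re-estimate each task's label posterior.
        logT = np.tile(np.log(prior), (n_tasks, 1))
        for t, answers in task_worker_label.items():
            for w, lab in answers.items():
                logT[t_idx[t]] += np.log(conf[w_idx[w], :, lab])
        logT -= logT.max(axis=1, keepdims=True)
        T = np.exp(logT)
        T /= T.sum(axis=1, keepdims=True)
    return T

# GPT-4 enters as just another annotator id when combining label sources.
labels = {
    0: {"w1": 0, "w2": 1, "gpt4": 1},
    1: {"w1": 2, "w2": 2, "gpt4": 2},
}
posterior = dawid_skene(labels, n_labels=3)
print(posterior.argmax(axis=1))  # inferred label per segment
```

Because such models learn a confusion matrix per annotator, they can exploit exactly the complementarity the paper points to: segments where GPT-4 is systematically weak get weighted toward reliable crowd workers, and vice versa.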
