An Evaluation of Prompt Engineering Strategies by College Students in Competitive Programming Tasks

Research Poster Engineering 2025 Graduate Exhibition

Presentation by Sita Vaibhavi Gunturi

Exhibition Number 218

Abstract

Generative AI, powered by Large Language Models (LLMs), has the potential to automate aspects of software engineering. This study used a quantitative approach to examine how 10 teams of computer science students utilized generative AI tools during a competitive programming contest held across multiple campuses. Participants used tools such as ChatGPT, GitHub Copilot, and Claude and submitted transcripts documenting their interactions for analysis. Drawing from the prompt engineering literature, the study applied six key prompt engineering strategies for competitive programming. These included clarifying instructions, streamlining prompt context, employing chain-of-thought prompting, providing feedback to refine solutions, and leveraging LLM meta-capabilities for problem-solving. The transcripts were analyzed for adherence to these practices, and the results were summarized as descriptive statistics. Findings revealed considerable variability in adherence, with an average compliance rate of 33.75% across strategies. While simpler strategies achieved adherence rates as high as 96.2%, more complex strategies saw minimal or no usage. These results indicate that students readily adopt basic prompt engineering techniques but struggle with more complex strategies, suggesting the need for structured prompt engineering instruction in computer science curricula to maximize the potential of generative AI tools.

Importance

Generative AI technologies have rapidly become important tools in computer science, significantly affecting programming tasks and software development practices. However, successfully utilizing these powerful tools requires clear and effective communication with the model, a skill known as "prompt engineering." This research is significant because it identifies how well college students currently apply prompt engineering strategies when interacting with AI tools, revealing strengths and gaps in their skills. Understanding these gaps helps educators and industry professionals recognize where students struggle most, enabling them to better prepare future software developers. By highlighting specific strategies students commonly overlook, this research informs the development of educational modules designed to improve the efficiency and accuracy of AI-assisted workflows. Ultimately, this aids in aligning computer science education more closely with evolving industry expectations.
