Human-Centered AI for Review Aggregation: Enhancing Game Legitimacy Assessment on Google Play
Research Poster · Social & Behavioral Sciences · 2025 Graduate Exhibition
Presentation by Sam Moradzadeh
Exhibition Number 89
Abstract
With the increasing prevalence of deceptive mobile game marketing strategies, including fake reviews, misleading advertisements, and pay-to-win mechanics, users often struggle to determine the legitimacy of a game before investing time or money. This research presents an AI-powered browser extension designed to aggregate and analyze Google Play reviews, providing users with a legitimacy assessment based on sentiment analysis, review authenticity detection, and game publisher reputation. The methodology employs natural language processing (NLP) and machine learning techniques to classify reviews, detect bot-generated patterns, and identify sudden spikes in sentiment polarity. The system generates a legitimacy score and explanatory insights, allowing users to make informed decisions about game downloads. We conducted a user study to evaluate the tool’s effectiveness in improving users' ability to discern game quality compared to raw review browsing. Results indicate that users equipped with AI-driven insights demonstrated increased confidence and accuracy in identifying potentially deceptive games. The study underscores the need for AI interventions in digital marketplaces to enhance transparency and user trust. This research contributes to the broader discussion on AI-powered consumer protection tools and trust-building mechanisms in digital gaming ecosystems.
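The pipeline described above — scoring review sentiment, flagging sudden spikes in sentiment polarity, and aggregating the results into a single legitimacy score — can be illustrated with a minimal sketch. The lexicon, the z-score spike threshold, and the penalty weight below are illustrative assumptions, not the actual model used in the study:

```python
from statistics import mean, pstdev

# Hypothetical word lists for illustration only; the study's classifier
# uses NLP/ML techniques, not a fixed lexicon.
POSITIVE = {"great", "fun", "love", "amazing", "good"}
NEGATIVE = {"scam", "fake", "broken", "ads", "bad"}

def polarity(review: str) -> float:
    """Crude lexicon-based polarity score clipped to [-1, 1]."""
    words = review.lower().split()
    raw = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 5))

def spike_days(daily_means: list[float], z: float = 2.0) -> list[int]:
    """Flag days whose mean polarity deviates sharply from the overall
    distribution (a crude stand-in for bot-driven review bursts)."""
    mu, sigma = mean(daily_means), pstdev(daily_means) or 1e-9
    return [i for i, m in enumerate(daily_means) if abs(m - mu) / sigma > z]

def legitimacy_score(reviews_by_day: list[list[str]]) -> float:
    """Combine average sentiment with a penalty per polarity spike (0-100).
    The 15-point penalty is an arbitrary illustrative weight."""
    daily = [mean(polarity(r) for r in day) for day in reviews_by_day if day]
    base = (mean(daily) + 1) / 2 * 100  # map [-1, 1] onto [0, 100]
    return max(0.0, base - 15 * len(spike_days(daily)))
```

In a full system, `polarity` would be replaced by a trained sentiment classifier and `spike_days` by a proper time-series anomaly detector, but the aggregation structure — per-review scoring, temporal anomaly flagging, then score fusion with explanatory insights — mirrors the approach described here.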
Importance
The rapid expansion of mobile gaming has led to an influx of games with deceptive marketing tactics that exploit users through manipulative design and fraudulent reviews. Existing review aggregation systems lack AI-driven legitimacy assessments, leaving users vulnerable to misleading information. This study bridges that gap by introducing an AI-powered solution that automates the analysis of user-generated reviews, distinguishing authentic feedback from deceptive practices. By integrating AI into digital well-being strategies, this research advances efforts to enhance trust and transparency in digital marketplaces. The findings are valuable to game developers, platform regulators, and digital consumers, offering a scalable approach to detecting game legitimacy and protecting users from predatory monetization models.
DEI Statement
Digital gaming platforms serve a global audience, yet deceptive practices disproportionately affect marginalized and underserved communities, including users with limited digital literacy or financial constraints. This research contributes to equitable access to trustworthy gaming experiences by ensuring AI-driven transparency in review assessments. By mitigating deceptive practices, this study aligns with DEI principles by empowering diverse gaming communities, fostering fair decision-making in digital spaces, and advocating for more ethical gaming ecosystems. This work also emphasizes the importance of algorithmic fairness, ensuring that AI-driven assessments are inclusive and unbiased across different gaming demographics.