Post by Nadica (She/Her) on Oct 30, 2024 1:02:16 GMT
Unveiling the Veiled Threat: The Impact of Bots on COVID-19 Health Communication - Published Sept 8, 2024
Abstract
This article presents the results of a comprehensive study examining the influence of bots on the dissemination of COVID-19 misinformation and negative vaccine stance on Twitter over a period of three years. The research employed a tripartite methodology of text classification, topic modeling, and network analysis to explore this phenomenon. Text classification, leveraging the Turku University FinBERT pre-trained embeddings model, was used for both misinformation detection and vaccine stance detection. Bot-like Twitter accounts were identified using the Botometer software, and further analysis distinguished COVID-19-specific bot accounts from regular bots. Network analysis illuminated the communication patterns of COVID-19 bots within retweet and mention networks. The findings reveal that these bots exhibit distinct characteristics and tactics that enable them to influence public discourse, showing particularly increased activity in COVID-19-related conversations. Topic modeling analysis uncovered that COVID-19 bots predominantly focused on themes such as safety, political/conspiracy theories, and personal choice. The study highlights the critical need to develop effective strategies for detecting and countering bot influence. Essential actions include using clear and concise language in health communications, establishing strategic partnerships during crises, and ensuring the authenticity of user accounts on digital platforms. The findings underscore the pivotal role of bots in propagating misinformation related to COVID-19 and vaccines, highlighting the necessity of identifying and mitigating bot activities for effective intervention.
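For anyone curious what the text-classification step described above might look like in practice, here is a minimal sketch (my own illustration, not the authors' code) of an embedding-based tweet classifier built on the Turku FinBERT model the abstract mentions. The Hugging Face model id, the example tweets, the labels, and the logistic-regression head are all assumptions made for illustration.

# Sketch only: an embedding-based tweet classifier of the kind the abstract
# describes. Model id, labels, and classifier head are assumptions, not
# details taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_ID = "TurkuNLP/bert-base-finnish-cased-v1"  # assumed id for Turku FinBERT
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
encoder = AutoModel.from_pretrained(MODEL_ID)

def embed(texts):
    # Mean-pool the final hidden states into one fixed-size vector per tweet.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)      # ignore padding tokens
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Hypothetical labelled tweets: 1 = misinformation / negative stance, 0 = other.
train_texts = ["example tweet one", "example tweet two"]
train_labels = [1, 0]

clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
print(clf.predict(embed(["new tweet to score"])))

The same pooled embeddings could, in principle, feed both the misinformation classifier and the stance classifier; the paper's actual training data, labels, and model head are not reproduced here.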