
Artwork designed by Evan Chan, edited by Nola Mellon

AI, Propaganda and Misinformation

Wed, 23 Apr 2025

By Evan Chan, Program Assistant

Upon rewatching SFU Public Square's recent event, AI: Beyond the Hype - Shaping the Future Together, and seeing first-hand the prevalence of artificial intelligence in classrooms and workplaces, I began to question the information AI chatbots are feeding us. Because this technology is so widespread, AI has the potential to alter our perceptions - but how accurate is the information? When a Computer Science Professor at Princeton University tested ChatGPT in 2022, he found it to be "spreading persuasive nonsense." AI chatbots purposefully use big words and complex ideas to confuse us and convince us that they know what they're talking about. For this reason, I wanted to examine how AI can try to change our beliefs on social topics by exploring the new chatbot, DeepSeek AI.

Near the end of January 2025, the first wave of DeepSeek AI, an open-source AI model available to the world, emerged from China and hit the global market, posing a serious threat to the U.S. AI sector. China has been known for its censorship and spreading of misinformation, raising my concern that the prevalence of this new AI tool could filter out human rights concerns. Furthermore, the Chinese government could use the technology to disseminate propaganda in an effort to strengthen the country's reputation, erase awareness of social issues in a global context, and influence the politics or economies of other nations.

China's censorship and desire for greater influence is not new. "Free Taiwan" and "Tiananmen Square" are terms banned in a video game created by Marvel and a Chinese developer. From video games to their latest AI chatbot model, the country is known to censor sensitive topics across various media channels. To examine the censorship in DeepSeek AI, I explored the chatbot using a variety of terms.

These examples demonstrate DeepSeek AI's censorship and, perhaps, China's intention to expand its information control over other nations through this new technology. Given past incidents in which the Chinese government secretly pressured Cambridge University Press to remove articles criticizing the country, China is turning the internet into a political and propaganda battlefield through censorship and misinformation.

Screenshots taken April 15th, 2025
Screenshots taken February 3rd, 2025

Research shows the power of Chinese censorship. One study found that Chinese students, who had only been fed censored content and had no external incentive to question state media, had less than a 5% likelihood of exploring uncensored content through foreign news. Not only does that demonstrate the effectiveness of Chinese censorship, but it also suggests how DeepSeek AI could suppress citizens' critical thinking while propagating biased information to democratic societies around the world. Undoubtedly, the power of artificial intelligence is going to be one of the biggest contributors in this "media combat zone."

Besides DeepSeek AI, social media accounts and media affiliated with the Chinese Communist Party (CCP) have reportedly spread disinformation by impersonating voters and citizens of other countries. For example, during a recent natural disaster, China used AI-generated images to falsely claim the event was the result of the U.S. government testing a "weather weapon." The CCP also used the new technology to spread false information about a candidate in the 2024 Taiwanese Presidential Election, in an effort to disrupt the election.

Even though, in my observation, many studies only explore censorship in authoritarian countries, signs of censorship can be found in widely used AI chatbot tools, like Gemini AI from Google. For instance, Gemini AI refuses to answer certain questions related to elections or politics due to Google's concern about the spread of misinformation. Despite this reasonable apprehension, the restriction raised public concern about Gemini's accuracy on other topics. As an Associate Professor of Information Science at Cornell University commented, "if Google's generative AI tools are too unreliable for conveying information about democratic elections, why should we trust them in other contexts, such as health or financial information?" For example, Gemini gives very limited information about Palestine and dodged related questions back in 2024, yet remained uncensored when it comes to Israel.

Indeed, AI does provide inaccurate information on historical prompts. For example, Gemini AI generated pictures showcasing historically inaccurate depictions and, during the 2024 U.S. Presidential Election, ChatGPT spread misinformation on voting rules. Taking the voting rules of Pennsylvania as an example, ballots must be received by a certain time on election day, but ChatGPT indicated that ballots could be received up to the Friday after election day. As a result, some voters could be confused by the incorrect information and fail to submit their ballots on time. Given that research shows 67% of U.S. teens aged 13 to 17 are familiar with AI chatbots, this technology can cause especially severe problems for younger generations through its misinformation and censorship.

As Stephanie Dick, one of the speakers at AI: Beyond the Hype - Shaping the Future Together, said, "the very best thing you can do for yourself as a citizen of this world of AI is to invest in your own knowledge." We should learn how to think critically; we should see AI as a minor assistant instead of our director; and we should consistently invest time in ourselves.

Interested in learning more? Check out the recording of SFU Public Square's co-hosted event AI: Beyond the Hype - Shaping the Future Together.

The views and opinions expressed in SFU Public Square's blogs are those of the authors, and they do not necessarily reflect the official position of Simon Fraser University or SFU Public Square, or any other affiliated institutions in any way.