Disinformation Propaganda: How False Narratives Threaten Cybersecurity | Chester Hosmer

Imagine seeing photos and videos of a massive political rally flooding your social media feed. It looks real—except it never happened. That’s the power of disinformation propaganda: creating false realities that influence thoughts, decisions, and even national security.

What is Disinformation Propaganda?

Disinformation propaganda is false information spread to mislead and manipulate. It’s used to push specific agendas, influence opinions, and create confusion. The goal isn’t just to misinform—it’s to control the narrative and drive behavior based on falsehoods.


Types of Disinformation Propaganda

Chester Hosmer highlights the key forms of disinformation campaigns targeting organizations and individuals:

1. Brand Impersonation

Attackers impersonate trusted brands to push their products or steal information. They use the authority of a brand’s name to deceive people into trusting false content.
Result: Loss of trust and stolen data.
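One common defensive check against brand impersonation is flagging lookalike domains such as `paypa1.com`. A minimal sketch, assuming a hypothetical in-house allowlist (`TRUSTED_DOMAINS`) and a similarity threshold of 0.8 chosen only for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical list of brand domains an organization wants to protect.
TRUSTED_DOMAINS = ["paypal.com", "microsoft.com", "cisco.com"]

def lookalike_score(candidate: str, trusted: str) -> float:
    """Similarity ratio between a candidate domain and a trusted one (0.0-1.0)."""
    return SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()

def flag_impersonation(candidate: str, threshold: float = 0.8) -> list:
    """Return trusted domains the candidate closely resembles but does not match."""
    return [
        t for t in TRUSTED_DOMAINS
        if candidate.lower() != t and lookalike_score(candidate, t) >= threshold
    ]

print(flag_impersonation("paypa1.com"))  # → ['paypal.com']
```

Real-world tooling adds homoglyph tables and Unicode normalization; edit-distance similarity alone catches only the simplest typosquats.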

2. Nation-State Fake Intelligence

False information is strategically spread to infiltrate organizations and influence their decisions. Nation-states use fake intelligence to push political and economic agendas.
Result: Misguided decisions and organizational disruption.

3. Catfish Rallies

Fake events are staged online to manipulate public perception. Photos and information are shared to convince people that the event actually happened, even when it didn’t.
Result: Public confusion and manipulation of social opinion.

4. Malware-Laced Disinformation

Disinformation campaigns often include phishing links that deliver malware. Once clicked, the malware spreads through networks, compromising systems and data.
Result: System breaches and data loss.
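Because malware-laced disinformation typically arrives as a link inside a message, a first-pass filter can extract URLs and flag any host outside an organization's allowlist. A minimal sketch, where `ALLOWED_DOMAINS` and the example message are hypothetical:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization considers safe.
ALLOWED_DOMAINS = {"example.com", "cisoplatform.com"}

URL_PATTERN = re.compile(r"https?://[^\s<>\"']+")

def suspicious_urls(text: str) -> list:
    """Extract URLs from text and return those whose host is not allowlisted."""
    flagged = []
    for url in URL_PATTERN.findall(text):
        host = urlparse(url).hostname or ""
        # Treat subdomains of allowed domains as safe; flag everything else.
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            flagged.append(url)
    return flagged

msg = "Urgent update: http://login.example.com/reset and http://evil-site.net/payload"
print(suspicious_urls(msg))  # → ['http://evil-site.net/payload']
```

An allowlist check like this is a coarse heuristic; production filters combine it with reputation feeds and sandbox detonation of the linked content.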

5. Political Disinformation

False information about political candidates is spread globally, not just domestically. Manipulated audio files and false narratives are used to mislead voters.
Result: Misinformed voters and weakened political trust.


Why Platforms Aren’t Stopping It

Section 230 of the U.S. Communications Decency Act (1996) shields platforms from liability for user-generated content.

  • Platforms are not treated as publishers but as distributors of information.
  • If a newspaper publishes false information, it can be sued.
  • If a social media platform distributes false information, it is protected under Section 230.

Platforms resist moderating or altering content because doing so could shift their legal standing from distributor to publisher, making them liable for what they host.


The Cybersecurity Risk

Disinformation propaganda isn’t just a media problem—it’s a direct threat to cybersecurity.

  • Phishing and malware are embedded in false content.
  • Business Email Compromise (BEC) attacks use false authority to steal data and funds.
  • Manipulated data can compromise AI-driven business decisions.
  • Emotional triggers increase the success rate of social engineering attacks.


Conclusion

Disinformation propaganda isn’t going away—it’s evolving. False narratives, fake events, and manipulated content are now part of the threat landscape. Defending against disinformation is just as critical as defending against network intrusions.


Join CISO Platform—The Cybersecurity Community

Stay ahead of evolving threats. Connect with 50,000+ cybersecurity professionals and gain access to exclusive resources, insights, and best practices.
Join Now


By: Chester Hosmer (Technical Author & President, Python Forensics, Inc.)
