Biswajit Banerjee's Posts - CISO Platform


An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.


Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
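Scanners like GitGuardian’s typically combine known-provider key patterns with entropy heuristics to flag credentials in code. The sketch below is a minimal, illustrative version of that approach — the regex, thresholds, and the sample key are all hypothetical, not GitGuardian’s actual detection logic:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    if not s:
        return 0.0
    freqs = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freqs.values())

def find_candidate_secrets(text: str, min_len: int = 20, min_entropy: float = 4.0):
    """Flag long, high-entropy tokens that look like API keys."""
    tokens = re.findall(r"[A-Za-z0-9_\-]{%d,}" % min_len, text)
    return [t for t in tokens if shannon_entropy(t) >= min_entropy]

# A hypothetical source file with a hard-coded key (this is not a real xAI key):
snippet = 'XAI_API_KEY = "xai-9fQ2kLm8PzR4vTn7WyB3cJd6HsA1eGx0UoI5NqZr"'
print(find_candidate_secrets(snippet))  # flags the random-looking token, not the variable name
```

Real scanners layer on provider-specific prefixes and live validation (calling the API to see if the key still works), which is how GitGuardian could tell the xAI key remained usable.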

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
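One common safeguard against the “long-lived credential” problem Caturegli describes is a maximum-age policy enforced by automated rotation checks. The sketch below is illustrative only — the 30-day limit and the helper are hypothetical, not xAI policy; the dates come from the article’s timeline (GitGuardian first alerted the employee on March 2, and the key was still valid on April 30):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=30)  # illustrative rotation policy

def key_is_stale(created_at: datetime, now: datetime) -> bool:
    """True if the credential has outlived the rotation policy."""
    return now - created_at > MAX_KEY_AGE

# The leaked key sat valid from at least March 2 to April 30 -- about 59 days:
created = datetime(2025, 3, 2, tzinfo=timezone.utc)
checked = datetime(2025, 4, 30, tzinfo=timezone.utc)
print(key_is_stale(created, checked))  # True -- should have been rotated weeks earlier
```

A check like this, wired into CI or a secrets manager, would have forced the exposed key out of service long before the public disclosure.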

 

By: Brian Krebs (Investigative Journalist, Award-Winning Author)

Original link to the blog: Click Here


A DoorDash driver stole over $2.5 million over several months:

The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office.

Interesting flaw in the software design. He probably would have gotten away with it if he’d kept the numbers small. It’s only when the amount missing is too big to ignore that the investigations start.
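The flaw is a missing guard on the order’s state transitions: completing an order always pays, and nothing prevents reverting a paid order back to “in process.” This toy model — not DoorDash’s actual system, all names hypothetical — shows the loop and the one-line fix:

```python
from enum import Enum, auto

class OrderState(Enum):
    IN_PROCESS = auto()
    COMPLETED = auto()

class Order:
    """Toy order model illustrating the missing transition guard."""
    def __init__(self):
        self.state = OrderState.IN_PROCESS
        self.payouts = 0

    def mark_complete_unguarded(self):
        # Flawed: completing always triggers a payout, and nothing
        # stops reverting COMPLETED back to IN_PROCESS afterward.
        self.state = OrderState.COMPLETED
        self.payouts += 1

    def revert_unguarded(self):
        self.state = OrderState.IN_PROCESS

    def mark_complete_guarded(self):
        # Fixed: an order can trigger at most one payout, ever.
        if self.payouts > 0:
            raise ValueError("order already paid out")
        self.state = OrderState.COMPLETED
        self.payouts += 1

# Replaying the fraud loop against the unguarded model:
order = Order()
for _ in range(3):
    order.mark_complete_unguarded()
    order.revert_unguarded()
print(order.payouts)  # 3 payouts for a single order
```

The fix is to make the payout idempotent per order — a property the fraudulent loop depends on being absent.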

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here


One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.

A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.

It would be hard for U.S. users to avoid the Chinese VPNs. The ownership of many appeared deliberately opaque, with several concealing their structure behind layers of offshore shell companies. TTP was able to determine the Chinese ownership of the 20 VPN apps being offered to Apple’s U.S. users by piecing together corporate documents from around the world. None of those apps clearly disclosed their Chinese ownership.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

By Byron V. Acohido

SAN FRANCISCO — The first rule of reporting is to follow the tension lines—the places where old assumptions no longer quite hold. Related: GenAI disrupting tech jobs

I’ve been feeling that tension lately. Just arrived in the City by the Bay. Trekked here with some 40,000-plus cyber security pros and company execs flocking to RSAC 2025 at Moscone Center.


Many of the challenges they face mitigating cyber risks haven’t fundamentally changed, just intensified, over the past two decades I’ve been coming to RSAC. But the arrival of LLMs and Gen AI has tilted the landscape in a new, disorienting way.

Yes, the bad actors have been quick to leverage GenAI to scale up their tried-and-true attacks. The good news is that the good guys are doing so, as well. Incrementally, and mostly behind the scenes, language-activated agentic AI is starting to reshape network protections.

 

Calibrating LLMs

In recent weeks, I’ve sat down with a cross-section of innovators—each moving methodically to calibrate LLMs and GenAI to function as a force multiplier for defense.

Brian Dye, CEO of Corelight, a specialist in open-source-based network evidence solutions, told me how the field is being split: smaller security teams scrambling to adopt vendor-curated AI while large enterprises spin up their own tailored LLMs.


DiLullo

John DiLullo, CEO of Deepwatch, a managed detection and response firm focused on high-fidelity security operations, has come to an unexpected discovery: LLMs, carefully cordoned and human-vetted, are already outperforming junior analysts at producing incident reports—more consistent, more accurate, less error-prone.

Jamison Utter, security evangelist at A10 Networks, a supplier of network performance and DDoS defense technologies, offers another lens: adversaries are racing ahead, using AI to craft malware and orchestrate attacks at speeds no human scripter could match. The defenders, he notes, must become equally adaptive—learning not just to wield AI, but to think in its native tempo.

There’s a pattern here.


Cybersecurity solution providers are starting to discover, each in their own corner of the battlefield, that mastery now requires a new kind of intuition:

• When to trust the machine’s first draft.

• When to double-check its cheerful approximations.

• When to discard fluency in favor of friction.

 

Getting to know my machine

It’s not unlike what I’ve found using ChatGPT-4o as a force multiplier for my own beat reporting.

At first, the tool felt like an accelerant—a way to draft faster, correlate more, test ideas with lightning speed. But over time, I’ve learned that speed alone isn’t the point. What matters is knowing when to lean on the machine—and when to lean away.

The cybersecurity innovators I’ve spoken with, thus far, are internalizing a similar lesson.


Dye

Dye’s team sees AI as a triage engine—brilliant at wading through common attack paths, but unreliable on the crooked trails where nuance matters. “‘Help me do more with less’ is one of the cybersecurity industry’s most durable problems,” Dye observes. “So, ‘Help me understand what this alert means in English’ can actually be incredibly valuable, and that’s actually something that AI models do super well.”

DiLullo’s analysts now trust AI to assemble the bones of a report—but know to inspect each joint before sending it out the door. In cybersecurity, DiLullo noted, making educated inferences is essential — and LLMs excel at scaling that process, efficiently surfacing insights in plain English where humans might otherwise struggle.

Utter’s colleagues have begun leveraging AI-derived telemetry—but only after investing serious thought into how the tools should be constrained.

 

Intentional orchestration

In each case, calibration is the hidden skill. Not just deploying AI, but orchestrating its role with intention. Not ceding judgment, but sharpening it.

Tomorrow, as I walk the floor at RSA and continue these Fireside Chat conversations, I expect to hear more versions of this same evolving art form.

The vendors who will thrive are not those who see AI as a panacea—or a menace. They’re the ones treating it as what it actually is: a powerful, fallible partner. A new compass—helpful, but requiring a steady hand to navigate the magnetic distortions.

This is not the end of human-centered security; it’s the beginning of a new kind of craftsmanship.

And if the early glimpses are any guide, the quiet genius of this next chapter won’t be found in flashy demos or viral headlines.

 

Prompt engineering is the key


Utter

As A10’s Utter pointed out, it’s a craft that will increasingly depend on prompt engineers—practitioners skilled at shaping AI outputs without surrendering judgment. Those who master the art of asking better questions, not just accepting faster answers, will set the new standard.

It will surface, instead, in the way a well-trained SOC analyst coaxes a hidden thread out of a noisy alert queue.

Or the way a vendor team embeds invisible friction checks into their AI pipeline—not to slow things down, but to make sure the right things get through.


The machine can accelerate the flow, but the human will still shape the course.

Observes Utter: “Prompt engineering, I think, is the key to understanding how to get the most out of AI.”

Where this leads, I’ll keep watch — and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here


Russia is proposing a rule that all foreigners in Moscow install a tracking app on their phones.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

This isn’t the first time we’ve seen this. Qatar did it in 2022 around the World Cup:

“After accepting the terms of these apps, moderators will have complete control of users’ devices,” he continued. “All personal content, the ability to edit it, share it, extract it as well as data from other apps on your device is in their hands. Moderators will even have the power to unlock users’ devices remotely.”

 

By: Bruce Schneier (Cryptographer, Author & Security Guru)

Original link to the blog: Click Here


A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.


A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to legitimate costs of over USD $1,500, and claimed they could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

 


In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

 

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.


A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 


Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

 

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.


A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.


360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com —  alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.


Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

 

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.


The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.


A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others. The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office where Ms. Riley had served her lawsuit. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.


A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.

 

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”


Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
By Byron V. Acohido

SAN FRANCISCO — The cybersecurity industry showed up here in force last week: 44,000 attendees, 730 speakers, 650 exhibitors and 400 members of the media flooding Moscone Convention Center in the City by the Bay. Related: RSAC 2025 by the numbers

Beneath the cacophony of GenAI-powered product rollouts, the signal that stood out was subtler: a broadening consensus that artificial intelligence — especially the agentic kind — isn’t going away. And also that intuitive, discerning human oversight is going to be essential at every step.


Abdullah

Let’s start with Dr. Alissa “Dr. Jay” Abdullah, Mastercard’s Deputy CSO who gave a keynote address at The CSA Summit from Cloud Security Alliance at RSAC 2025. She spoke passionately about being a daily power user of AI, recounting an experiment in which she tried to generate a collectible 3D action figure of herself using multiple GenAI platforms.

Her prompts were clear, detailed, and methodical — yet the results were laughably off-base. The takeaway? Even well-crafted prompts can be derailed by flawed models or skewed training data. In this case, none of the models managed to reliably portray her likeness or professional context — despite the input being consistent.

 

AI needs a human chaperone

This wasn’t just a quirky user experience — it underscored deeper concerns about bias, hallucination, and the immaturity of enterprise-grade AI. Abdullah’s takeaway: lean in, yes. But test relentlessly, and don’t take the output at face value.

That kind of real-world friction — where AI promise meets AI reality — showed up again and again in RSAC’s meatier panels and threat briefings. The SANS Institute’s Five Most Dangerous New Attack Techniques panel highlighted how authorization sprawl is giving attackers frictionless lateral movement in hybrid cloud environments. The fix? Better privilege mapping and tighter identity controls — areas ripe for GenAI-powered solutions, if used responsibly.


Similarly, identity emerged as RSAC’s dominant theme, fueled by Verizon’s latest Data Breach Investigations Report showing credential abuse remains a top attack vector. Identity, as Darren Guccione of Keeper Security framed it, is the modern perimeter. Yet AI complicates the landscape: it can accelerate password cracking even as it enables smarter detection. Once again, the takeaway was clear — context, not hype, must drive deployment.


Krebs

Meanwhile, the emotional centerpiece of the conference came from Chris Krebs, the embattled former CISA director. Facing political heat at home, Krebs nonetheless took the stage alongside Jen Easterly and Rob Joyce to reflect on fictional and real-world cyber catastrophes. His call to arms was unflinching: “Cybersecurity is national security. Every one of you is on the front lines of modern warfare.”

And he’s right. Because behind the RSAC glitz lies a gnawing truth: complexity has outpaced human capacity. AI may be the only way defenders can keep up — if regulators allow it, and if we wield it wisely.

 

Customer-ready — on the fly

For all the stage talk about escalating threats, tightening regulations, and the urgent need to shore up identity defenses, it was the hallway conversations — the unscripted, sometimes offbeat stories from seasoned professionals — that offered the clearest glimpse of what comes next.

To wit: just a few moments after Mastercard’s Abdullah gave her keynote at the CSA Summit, I happened to run into a senior sales rep from a mobile app security firm, whom I’ve known for a few years. I asked him if he was using GenAI, and he shared how he has trained a personal agentic assistant to help field technical questions from prospects.

This veteran sales rep described how he uses ChatGPT to synthesize technical answers and generate customer-ready language on the fly. He told me he takes seriously his responsibility to rigorously vet every GenAI output — especially when it produces information relayed back to customers with engineering backgrounds. Any hint of a hallucinated response could destroy credibility he’s spent months building. So he validates, revises and retrains constantly. It’s not about cutting corners; it’s about enhancing fluency without sacrificing integrity, he told me.

 

Natively supported GenAI

I also had an enlightening discussion with Tim Eades, CEO of year-old Anetac, a GenAI-native platform focused on real-time identity risk, who offered sharp insight into why newer vendors have an inherent edge. Older enterprise systems, he explained, are like heritage homes that need to be put on stilts before the foundation can be replaced.

Retrofitting LLMs onto legacy infrastructure is not just expensive; it can be futile without rethinking data pipelines and user interfaces from the ground up. Because Anetac was built in the GenAI era, Eades told me, it can natively support real-time data integration, dynamic prompt generation, and intuitive user-level customization. This agility doesn’t just reduce hallucinations — it accelerates meaningful innovation, Eades asserts.

 

Curated knowledge sets

Meanwhile, Jason Keirstead, Co-founder and VP of Security Strategy of Simbian, a GenAI-native platform automating alert triage and threat investigation, walked me through how his team integrates LLMs into security operations workflows. We met in the nearby financial district, inside the high-rise offices of Cota Capital, one of Simbian’s early investors.

Unlike platforms that simply bolt on a chatbot and hope users will “talk to the AI,” Simbian embeds agentic AI directly into workflows—handling alert triage, threat hunting, and vulnerability prioritization behind the scenes, Keirstead told me. The user never interacts with a prompt window. Instead, Simbian’s internal RAG (retrieval-augmented generation) system, combined with extensive prompt libraries tuned for cybersecurity use cases, processes each alert and surfaces recommended actions automatically.
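A RAG-style triage flow of the kind Keirstead describes can be sketched in a few lines. To be clear, this is a minimal illustration and not Simbian’s implementation: the toy bag-of-words “embedding,” the sample precedent library, and the prompt wording are all my own assumptions, stand-ins for a real embedding model and a curated knowledge set.

```python
from collections import Counter
from math import sqrt

# Toy "embedding": a bag-of-words vector. A production system would use a
# dedicated embedding model; this stand-in keeps the sketch self-contained.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical curated knowledge set: past alerts paired with vetted outcomes.
KNOWLEDGE = [
    ("multiple failed logins followed by success from new geo",
     "Escalate: possible credential stuffing"),
    ("outbound dns queries to newly registered domain",
     "Escalate: possible C2 beaconing"),
    ("scheduled vulnerability scanner traffic from internal host",
     "Close: known benign scanner"),
]

def triage_prompt(alert: str, k: int = 2) -> str:
    """Retrieve the k most similar known cases and build a tightly scoped prompt."""
    ranked = sorted(KNOWLEDGE,
                    key=lambda kv: cosine(embed(alert), embed(kv[0])),
                    reverse=True)
    context = "\n".join(f"- {desc} => {outcome}" for desc, outcome in ranked[:k])
    return ("You are an alert-triage assistant. Using ONLY the precedents below, "
            f"recommend an action for the new alert.\n"
            f"Precedents:\n{context}\nNew alert: {alert}")

print(triage_prompt("failed logins then success from unusual country"))
```

The point of the scoping is visible even in this toy: the LLM never sees an open-ended question, only a narrow task grounded in retrieved, pre-vetted precedents, which is one practical way to constrain hallucination.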

Keirstead didn’t downplay the complexity of making this work. While LLMs can still hallucinate, he emphasized that Simbian avoids generic, open-ended use cases in favor of tightly scoped applications. By combining curated knowledge sets, domain-specific tuning, and hands-on collaboration with early adopters, the company has engineered a system designed to deliver consistent, trustworthy results.

 

The 100X effect

A similar dynamic was at play at Corelight, a network detection and response provider focused on high-fidelity telemetry. I spoke with CEO Brian Dye who underscored how agentic AI is beginning to boost threat detection — but only when closely guided. Their team uses LLMs to streamline analysis of noisy telemetry and surface relevant insights faster.

Yet Dye cautioned that simply injecting a chatbot doesn’t cut it; analysts still need domain expertise to steer the tool, validate results, and keep it from introducing blind spots. It’s the human-machine combo, he emphasized, that delivers real value.

Meanwhile, John DiLullo, CEO of Deepwatch, a managed detection and response firm focused on high-fidelity security operations, framed GenAI as a conversation accelerator — but only when harnessed with clarity and intent. He described how top-tier cybersecurity veterans are using it not to replace judgment but to distill technical nuance for broader audiences. This aligns with what some are calling the ‘100x effect’ — experienced practitioners using GenAI not to automate away their judgment, but to scale their expertise and speed of execution.

 

Must have skill: prompt engineering

Jamison Utter, security evangelist at A10 Networks, a supplier of network performance and DDoS defense technologies, was especially candid. He explained how attackers are already using LLMs to write custom malware, simulate attacks, and bypass traditional defenses — at speed and scale. On defense, A10 has begun tapping GenAI to analyze DDoS telemetry in real time, dramatically reducing time-to-insight. The payoff? Analysts who know how to prompt effectively are seeing gains, but only after substantial trial-and-error. His bottom line: prompt engineering is now a frontline skill.

Akela

Anand Akela, CMO of Alcavio, a deception-driven threat detection company, sketched out a different angle: using AI not to interpret threats, but to camouflage critical assets. Alcavio blends traditional deception tech with AI-powered customization — generating realistic honeypots, honeytokens, and decoy credentials at scale. The idea is to use AI’s generative muscle to outwit AI-generated threats. Akela admitted they don’t rely on full-blown LLMs yet, but said their roadmap includes using GenAI to tailor decoy strategies dynamically, based on evolving attack vectors.
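The decoy-credential idea Akela describes can be illustrated with a short sketch. This is purely my own illustration of how honeytokens generally work, not Alcavio’s product: the naming patterns and record fields are invented, and the essential property is simply that any authentication attempt using a planted decoy is a high-confidence intrusion signal.

```python
import secrets
import string

# Illustrative only: generate plausible-looking decoy credentials (honeytokens)
# that defenders can seed into config files, vaults, or code repos. These
# credentials are never valid, so any attempt to use one indicates compromise.
FIRST = ["svc", "backup", "deploy", "admin"]
SYSTEMS = ["jenkins", "s3", "vpn", "db"]

def make_honeytoken() -> dict:
    # Decoys mimic common service-account naming so they look worth stealing.
    user = f"{secrets.choice(FIRST)}_{secrets.choice(SYSTEMS)}"
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    # A unique marker lets the detection side map a triggered token back to
    # exactly where it was planted.
    return {"username": user, "password": password, "marker": secrets.token_hex(8)}

decoys = [make_honeytoken() for _ in range(3)]
for d in decoys:
    print(d["username"], d["marker"])
```

In a deployed system the markers would be registered with a monitoring service, so an alert fires the moment a decoy credential is tried anywhere; the GenAI angle Akela hints at would be tailoring the decoys’ names and placement to match an organization’s real environment.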

 

Guided speed, common sense

At Cyware, a cyber fusion platform unifying threat intelligence and incident response, Patrick Vandenberg, Senior Director of Product Marketing, emphasized speed. Their threat intelligence chatbot reduces days of manual triage to seconds, surfacing relevant patterns and flagging threats for human review.

But it’s not autopilot. The system only works well when guided by seasoned analysts who understand what to ask for — and how to interpret the results. It’s the classic augmentation model: the AI expands reach and efficiency, but the analyst still holds the reins.

Willy Leichter, CMO of PointGuard AI, a startup focused on visibility and risk governance for GenAI use, captured the unease many feel. His firm helps companies discover and govern shadow AI projects — especially open-source tools and rogue models flowing into production. The market, he said, hasn’t had its “SolarWinds moment” for GenAI misuse yet, but everyone’s bracing for it. His message to worried CISOs: start with visibility, then layer on risk scoring and usage controls. And don’t let urgency erase common sense.

 

Driving resilience — not risk

Across each of these conversations, a common thread emerged: we’re beyond the point of deciding whether to use GenAI. The question now is how to use it well. The answer seems to hinge not on the models themselves, but on the context in which they’re deployed, the clarity of the prompts, and the vigilance of the humans overseeing them.

Agentic AI is here to stay. It’s versatile, powerful, and rapidly evolving. Agentic AI doesn’t wait to be prompted — it’s goal-driven, context-aware, and built to act. But like any high-performance engine, it demands an attentive driver. Without careful prompting, constant tuning, and relentless validation, even the most promising assistants can steer us off course. That tension — powerful augmentation versus potential misfire — defined the conference.

RSAC 2025 didn’t just showcase agentic AI’s momentum; it clarified the mandate. This isn’t about chasing silver bullets. It’s about embracing a tool that demands human vigilance at every turn.

If we want AI to drive resilience — not risk — we’ll need to stay firmly in the driver’s seat. I’ll keep watch and keep reporting.


Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

Original Link to the Blog: Click Here

Read more…

In what experts are calling a novel legal outcome, the 22-year-old former administrator of the cybercrime community Breachforums will forfeit nearly $700,000 to settle a civil lawsuit from a health insurance company whose customer data was posted for sale on the forum in 2023. Conor Brian Fitzpatrick, a.k.a. “Pompompurin,” is slated for resentencing next month after pleading guilty to access device fraud and possession of child sexual abuse material (CSAM).


A redacted screenshot of the Breachforums sales thread. Image: Ke-la.com.

On January 18, 2023, denizens of Breachforums posted for sale tens of thousands of records — including Social Security numbers, dates of birth, addresses, and phone numbers — stolen from Nonstop Health, an insurance provider based in Concord, Calif.

Class-action attorneys sued Nonstop Health, which added Fitzpatrick as a third-party defendant to the civil litigation in November 2023, several months after he was arrested by the FBI and criminally charged with access device fraud and CSAM possession. In January 2025, Nonstop agreed to pay $1.5 million to settle the class action.

Jill Fertel is a former prosecutor who runs the cyber litigation practice at Cipriani & Werner, the law firm that represented Nonstop Health. Fertel told KrebsOnSecurity this is the first and only case where a cybercriminal or anyone related to the security incident was actually named in civil litigation.

“Civil plaintiffs are not at all likely to see money seized from threat actors involved in the incident to be made available to people impacted by the breach,” Fertel said. “The best we could do was make this money available to the class, but it’s still incumbent on the members of the class who are impacted to make that claim.”

Mark Rasch is a former federal prosecutor who now represents Unit 221B, a cybersecurity firm based in New York City. Rasch said he doesn’t doubt that the civil settlement involving Fitzpatrick’s criminal activity is a novel legal development.

“It is rare in these civil cases that you know the threat actor involved in the breach, and it’s also rare that you catch them with sufficient resources to be able to pay a claim,” Rasch said.

Despite admitting to possessing more than 600 CSAM images and personally operating Breachforums, Fitzpatrick was sentenced in January 2024 to time served and 20 years of supervised release. Federal prosecutors objected, arguing that his punishment failed to adequately reflect the seriousness of his crimes or serve as a deterrent.


An excerpt from a pre-sentencing report for Fitzpatrick indicates he had more than 600 CSAM images on his devices.

Indeed, the same month he was sentenced Fitzpatrick was rearrested (PDF) for violating the terms of his release, which forbade him from using a computer that didn’t have court-required monitoring software installed.

Federal prosecutors said Fitzpatrick went on Discord following his guilty plea and professed innocence to the very crimes to which he’d pleaded guilty, stating that his plea deal was “so BS” and that he had “wanted to fight it.” The feds said Fitzpatrick also joked with his friends about selling data to foreign governments, exhorting one user to “become a foreign asset to china or russia,” and to “sell government secrets.”

In January 2025, a federal appeals court agreed with the government’s assessment, vacating Fitzpatrick’s sentence and ordering him to be resentenced on June 3, 2025.

Fitzpatrick launched BreachForums in March 2022 to replace RaidForums, a similarly popular crime forum that was infiltrated and shut down by the FBI the previous month. As administrator, his alter ego Pompompurin served as the middleman, personally reviewing all databases for sale on the forum and offering an escrow service to those interested in buying stolen data.

A yearbook photo of Fitzpatrick unearthed by the Yonkers Times.

The new site quickly attracted more than 300,000 users, and facilitated the sale of databases stolen from hundreds of hacking victims, including some of the largest consumer data breaches in recent history. In May 2024, a reincarnation of Breachforums was seized by the FBI and international partners. Still more relaunches of the forum occurred after that, with the most recent disruption last month.

As KrebsOnSecurity reported last year in The Dark Nexus Between Harm Groups and The Com, it is increasingly common for federal investigators to find CSAM material when searching devices seized from cybercriminal suspects. While the mere possession of CSAM is a serious federal crime, not all of those caught with CSAM are necessarily creators or distributors of it. Fertel said some cybercriminal communities have been known to require new entrants to share CSAM material as a way of proving that they are not a federal investigator.

“If you’re going to the darkest corners of Internet, that’s how you prove you’re not law enforcement,” Fertel said. “Law enforcement would never share that material. It would be criminal for me as a prosecutor, if I obtained and possessed those types of images.”

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
By Byron V. Acohido

The response to our first LastWatchdog Strategic Reel has been energizing — and telling. Related: What is a cyber kill chain?

The appetite for crisp, credible insight is alive and well. As the LinkedIn algo picked up steam and auto-captioning kicked in, it became clear that this short-form format resonates. Not just because it’s fast — but because it respects the intelligence of the audience.

This second-day snapshot continues where we left off: amplifying frontline voices from RSAC 2025. What’s most striking is the consistency of message across these interviews. Whether from Fortinet or ESET, Corelight or Anomali, the theme is clear: GenAI is no longer theoretical. It’s here — and it’s already influencing how SOC teams operate, triage, and respond.

Each voice captured in this reel isn’t reading from a script. These are compressed bursts of clarity from senior technologists who live this reality every day.

The goal with Strategic Reels is simple: create a format that works at the speed of LinkedIn but doesn’t sacrifice substance. The result? A tool that helps thought leaders cut through the noise — and stay top of mind.

If this approach resonates with your team or client, reach out. There’s room in this series for more real voices, more credible takes — and more relevance, exactly when it’s needed.

Watch the embedded reel, and follow me on LinkedIn for upcoming drops. For sponsorship opportunities, I’m happy to discuss what’s possible.

 

 

 


By Byron Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

Original Link to the Blog: Click Here

Read more…

The U.S. government today unsealed criminal charges against 16 individuals accused of operating and selling DanaBot, a prolific strain of information-stealing malware that has been sold on Russian cybercrime forums since 2018. The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware.

 


DanaBot’s features, as promoted on its support site. Image: welivesecurity.com.

 

Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud.

Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform.

The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. “JimmBee,” and Artem Aleksandrovich Kalinkin, 34, a.k.a. “Onix”, both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is “Maffiozi.”

According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot — emerging in January 2021 — was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia.

“Unindicted co-conspirators would use the Espionage Variant to compromise computers around the world and steal sensitive diplomatic communications, credentials, and other data from these targeted victims,” reads a grand jury indictment dated Sept. 20, 2022. “This stolen data included financial transactions by diplomatic staff, correspondence concerning day-to-day diplomatic activity, as well as summaries of a particular country’s interactions with the United States.”

The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.

“In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware,” the criminal complaint reads. “In other cases, the infections seemed to be inadvertent – one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake.”

 


Image: welivesecurity.com

 

A statement from the DOJ says that as part of today’s operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team Cymru, and Zscaler.

It’s not unheard of for financially-oriented malicious software to be repurposed for espionage. A variant of the ZeuS Trojan, which was used in countless online banking attacks against companies in the United States and Europe between 2007 and at least 2015, was for a time diverted to espionage tasks by its author.

As detailed in this 2015 story, the author of the ZeuS trojan created a custom version of the malware to serve purely as a spying machine, which scoured infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents.

The public charging of the 16 DanaBot defendants comes a day after Microsoft joined a slew of tech companies in disrupting the IT infrastructure for another malware-as-a-service offering — Lumma Stealer, which is likewise offered to affiliates under tiered subscription prices ranging from $250 to $1,000 per month. Separately, Microsoft filed a civil lawsuit to seize control over 2,300 domain names used by Lumma Stealer and its affiliates.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
In this week's highlights, we spotlight essential developments every cybersecurity leader should track. Explore how nation-state actors like Russia’s Fancy Bear and APT28 are intensifying their focus on logistics and IT firms to monitor geopolitical
Read more…

 


Gemini imagines RSA 2025 (very tame!)

 

Ah, RSA. That yearly theater (Carnival? Circus? Orgy? Got any better synonyms, Gemini?) of 44,000 people vaguely (hi salespeople!) related to cybersecurity … where the air is thick with buzzwords and the vendor halls echo with promises of a massive revolution — every year.

And this year, of course, the primary driver was (still) AI. To put it in a culinary analogy — as it is well known, I like my analogies well-done — if last year’s event felt like a hopeful wait for a steak (“where’s the beef?”), this year feels like we got served a plate with a lot of garnish. Very visually stimulating garnish. But still no meat.

And I still can’t shake the feeling that in a year we might be in the same place. Hopefully not.

But let’s break it down. Just like a good stew, let’s delve (guess who wrote this sentence?) into the ingredients that made up RSA 2025.

 

1. The AI Hype Train: All Aboard! (But Where Are We Going?)

First off, let’s address the elephant in the room, or rather, the “hype-intelligent” [A.C. — I wrote this joke, not AI, cool typo, eh?] chatbot in the cloud: AI. Everyone and their grandmother seemed to have an “AI-powered” solution; some even went further with “AI-native” (more on this particular creation later).

Booths were festooned with AI logos, and conversations invariably veered towards gen AI and… yes… agentic AI too (so 2025 of them!). It was as if vendors had once again discovered a magical incantation that could solve all cybersecurity woes. “Add AI and bam!”, or something like that. Like perhaps zero trust in 2022 or so?

But here’s the rub: under the surface, how much was “sizzle” and how much was “steak”? As noted, many discussions felt like “AI addressable” rather than “AI solvable” (the idea for this term comes from this podcast episode, coined by Eric Foster of Tenex.AI … yes… AI). Which means, sure, we can point AI at a problem, but AI is not actually solving it completely and requires humans to do a non-trivial amount of work. But it does help!

You know those “agentic use cases”? Those real-world game changer use cases that actually deliver significant benefits right now? I was looking for them. And I didn’t find many. In fact, I didn’t find even a single robust one. And we really looked!

We saw a lot of people imagining the future of security, and I saw not much evidence of solid outcomes in the present. A lot of vendors slapped AI mentions onto their existing products (OK, some just onto their booths!), creating what I like to call “AI washing” or gratuitous mentions of AI.

So many AI applications in MDR (Managed Detection and Response) were “AI addressable but not AI solvable.” And let’s talk for a moment about the whole “AI SOC” concept. This is the dream we keep chasing. It echoes the promises made with SOAR (Security Orchestration, Automation, and Response) systems of yesteryear.

Frankly, the more I look at the “AI SOC” vendors with their “triage agents” (just $10 per alert! buy now!), the more I see SOAR circa 2015. These guys are marching down the same general path that SOAR trod 10 years ago — powered by modern tools, yet veering toward the same ditch…

Remember when SOAR was supposed to automate everything, eliminating the need for human intervention in security operations? How did that work out? Turns out you still need humans to remediate and interpret the (dirty) data, and deal with messed up IT environments. And I see the “AI SOC” is in danger of repeating the exact same trajectory. The idea of a fully automated security operations center powered by AI is just not realistic at all today.

So “AI in a SOC” — strong YES, “AI SOC” — hard no!

You still need people, humans, the real ones, to deal with the complicated situations, understand the context, use tribal knowledge, and make hard decisions. At most, those “AI SOC” tools can give guidance — “LLM says, hey, you guys should consider doing blah, blah, blah” — but it is ultimately humans who make the final call and do things. Today this is true. Please ask me again after RSA 2026…

 

2. The Resilience of the Past: What is Dead May Never Die (Or at Least Takes a Very Long Time to Do So)

Another striking observation was the continued presence and resilience of “legacy” technologies and vendors (some parallels to RSA 2022, as I recall). Think about it: many vendor names that a security manager from 2004 would recognize (or their merged and renamed descendants) were still prominent on the show floor.

Mobile security, our favorite example of a security island merging with the mainland, also appeared, though not as a central theme. It seemed like many technologies thought to be on their last legs are, well, not. I was wondering: who buys from “3rd tier AV vendors” or from “54th tier SIEM vendors”? What keeps them afloat? Well, I think part of it is explained by the “change budget” concept that some of my Deloitte colleagues use.

Essentially, organizations have a limited capacity for change, and when they finally update one security solution, they might not have the resources or will to update others, no matter the need. We do not have capacity to change everything, all at once. Change fatigue is real!

And this inertia allows older technologies to persist, even if better alternatives are available. Change is just hard. And companies keep sticking with what is familiar and what just “works” (even if it really doesn’t). It might be inefficient, it might be outdated, but it is here and is already integrated with other systems. Which, of course, creates even more “fun” problems! Just imagine: there are still some people somewhere working with COBOL and Windows 2003. Terrifying, indeed!

 

3. The Security of AI: Protecting the Protector

An ironic twist in this AI-palooza was the relative scarcity of discussions on securing AI itself (we did a fun presentation on this, BTW). While everyone was touting AI’s ability to defend systems, not enough attention was paid to defending AI systems themselves. Are we going back to “WAF-but-for-AI” type solutions? Will we build special boxes to protect those AI systems? I hope not, as that would be the wrong approach. As somebody said, “‘known bad’ filtering never truly works” (sounds like Marcus Ranum?).

If AI is to become a critical part of our cybersecurity infrastructure, we must ensure it is robust and resilient against attacks. But I think the relative lack of focus on this area suggests that buyers aren’t ready to buy AI security, or haven’t even considered it at this stage.

Think for a moment: you are ready to deploy “AI for security” but you are not yet ready to “secure AI” — including that AI you just deployed for security. Please get terrified already!

 

4. Quick Hits and Hallway Chatter

Beyond the big themes, a few other observations:

  • Cloud Security: Wiz continued to market itself with a focus on brand recognition, perhaps showing how a powerful brand is cutting through the show’s noise. Their booth messaging focused on “Hi, we’re Wiz” and jokes, rather than detailing capabilities. So we seem to be in the “platforming” stage of cloud security.
  • SecOps/SOAR/SIEM: “AI Native” is now a thing, but its advantages over just “AI capabilities added to existing platforms” are still debated. Can we have an “AI native SIEM” or “AI native SOAR”? I think we will see many attempts, but the actual value here is yet to be proven. The jury is still out. Far out.
  • Pipelines: There are many vendors focused on log and telemetry collection pipelines, with some claiming to be faster or have better UX than existing solutions. The need is real, but whether we need a dozen such vendors remains to be seen.
  • Misc: There were goats, puppies, and unfortunately no bees. Also, some vendors were “shredding” or “destroying” adversaries. Which sounds fun, but maybe not that practical in the real world? And I really missed the NSA booth and Enigma machines. Maybe next time? We did ask somebody in the FBI booth about the NSA booth and we got an epic eye roll as a response…

 

 

Random Hot Take (Sorry, Gemini Thinks I Needed One!)

I have a strong feeling that in a year, at RSA 2026 we might be having the same discussions. We might be again waiting for a “steak” while getting a lot of “sizzle”. We might be talking again about how “AI will fix everything” without actually seeing it fixed. We might be looking at the same old technologies staying alive for another year. I really hope I am wrong. I really want the real “game changer” AI use cases to finally emerge. We will see…

You can check out our related presentations from the conference:

And don’t forget to listen to the recap podcast that inspired some of these thoughts!

 

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

Top 10 posts with the most lifetime views (excluding paper announcement blogs, Medium posts only):

  1. Security Correlation Then and Now: A Sad Truth About SIEM
  2. Can We Have “Detection as Code”?
  3. Detection Engineering is Painful — and It Shouldn’t Be (Part 1)
  4. NEW Anton’s Alert Fatigue: The Study
  5. Revisiting the Visibility Triad for 2020 (update for 2025 is coming soon)
  6. Beware: Clown-grade SOCs Still Abound
  7. Why is Threat Detection Hard?
  8. A SOC Tried To Detect Threats in the Cloud … You Won’t Believe What Happened Next
  9. Top 10 SIEM Log Sources in Real Life? [updated/modified version]
  10. How to Think about Threat Detection in the Cloud

 

Top posts with paper announcements:

 

NEW: 3 recent fun posts, must-read:

 

Top 7 Cloud Security Podcast by Google episodes (excluding the oldest 3!):

  1. EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil (our best episode! officially!)
  2. EP8 Zero Trust: Fast Forward from 2010 to 2021
  3. EP47 “Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security”
  4. EP17 Modern Threat Detection at Google
  5. EP109 How Google Does Vulnerability Management: The Not So Secret Secrets!
  6. EP103 Security Incident Response and Public Cloud — Exploring with Mandiant
  7. EP153 Kevin Mandia on Cloud Breaches: New Threat Actors, Old Mistakes, and Lessons for All

Now, fun posts by topic.

 

Security operations / detection & response:

(if you only read one, choose this one!)

 

Cloud security:

 

HGD:

 

CISO, culture, FMC, etc

 

AI security:

(if you only read one, choose this one!)

 

NEW: fun presentations shared:

Enjoy!

 

Previous posts in this series:

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

Thank you to everyone who joined us on board for the CISO Cocktail Reception at RSA Conference 2025! It was a truly special evening, and we’re so glad to have shared it with our incredible cybersecurity community. We were thrilled to be a part of the CISO Cocktail Reception during the RSA Conference USA 2025 — not just any reception, but one set aboard a private yacht, cruising the beautiful San Francisco Bay! With the iconic skyline as our backdrop, the event offered the perfect blend of high-level networking and relaxed, memorable conversations.

It’s always powerful to see this community come together — not just in conference rooms but also in moments like these. The yacht party gave CISOs, CSOs and senior cybersecurity executives a chance to connect beyond the day-to-day, share real stories and enjoy some well-deserved downtime. The evening was organized by EC-Council, with CISO Platform and FireCompass as proud community partners. From stunning views and sunset selfies to lively chats about the future of cybersecurity, this was more than an event — it was a celebration of community.

CISO Platform was proud to support this exclusive experience. As a trusted peer network of 40,000+ cybersecurity leaders, we’re committed to enabling real-world collaboration, sharing proven frameworks, and helping CISOs stay ahead of emerging threats.

We’re excited about what’s ahead — and we’d love for you to get a sneak peek too. Thanks again for being a part of this. Until next time! 

>> If you wish to join us next year, express interest here: Express Interest Here

 

 

Read More: (Sneak Peek) RSA Conference USA Innovation Sandbox 2025 | Top Cyber Security Companies
Curious about the top 10 cybersecurity companies that made it to the finals of the RSA Innovation Sandbox 2025? Click here to explore the full list.

 

Read more…

Welcome to the April edition of CISO Platform Highlights – your quick snapshot of the most insightful content, expert conversations, and community updates from the world of cybersecurity leadership.

This month, we delved into the often-hidden journey of stolen data on the dark web – from breach to monetization – in an eye-opening Fireside Chat. Plus, we spotlight two deeply analytical community reads that explore the evolution of SOCs and the formalization of cybersecurity weaknesses. Also, a quick heads-up: Nominations for the CISO 100 Awards & Future CISO Awards USA 2025 are now open! Recognize the cybersecurity leaders making a difference in your network—or put your own name forward!

 


 

Fireside Chat You Can’t Miss

The Dark Path of Stolen Data – Understanding the Underground Economy

A powerful discussion featuring:

  • Matthew Maynard - Security Operations Specialist, BJC Healthcare

  • Erik Laird - Vice President (North America, FireCompass)

These experts unpack the lifecycle of breached data, its economic implications, and how organizations can better protect themselves in the face of organized cybercrime.

>>Read the Executive Summary 

 


 

 

Featured Reads from the Community

1) The Return of the Baby ASO: Why SOCs Still Suck? | Anton Chuvakin


SOCs still suck—why? Security legend Anton Chuvakin dives into the surprising return of the “Baby ASO” and what it reveals about modern security ops. A must-read for anyone frustrated with the state of SOCs.

>>Read More 

 

2) Bugs Framework (BF): Formalizing Cybersecurity Weaknesses and Vulnerabilities | Irena Bojanova 



Discover how the BUGS Framework brings clarity by formalizing cybersecurity weaknesses. Don't miss this game-changing approach to smarter, more structured vulnerability management!

>>Read More

 


 

Call for Nominations: CISO 100 Awards & Future CISO Awards (USA) | In Association With EC Council

We’re thrilled to open up nominations for the CISO 100 Awards & Future CISO Awards – USA Edition. Know someone who’s leading the charge in cybersecurity? Or think you should be recognized? 

Date: 1st & 2nd October 2025
Venue: Renaissance Atlanta Waverly Hotel & Convention Center

>>Nominate Yourself or a Peer 

 

(Sneak Peek) RSA Conference USA Innovation Sandbox 2025 | Top Cyber Security Companies

For over 20 years, the RSAC Innovation Sandbox contest has put the spotlight on cybersecurity’s newest innovators and their potentially game-changing ideas. Each year, 10 finalists get a three-minute pitch to demonstrate groundbreaking security technologies to the broader RSA Conference community. Since the start of the contest, the top 10 finalists have collectively seen over 90 acquisitions and $16.4 billion in investments.

>>Read More 

 


 

Join The Cyber Security Community 

At CISO Platform, our mission is to deliver high-quality insights and create meaningful connections among senior cybersecurity professionals. With a global network of 6,500+ CISOs and InfoSec leaders, you’ll always find ideas, answers, and allies here. 

Want to contribute your insights? Share a blog on CISOPlatform.com and help others make smarter security decisions.


>>Sign Up 

Read more…

One of my friends, Greg van der Gaast, tells this great story that perfectly illustrates one of the biggest challenges we face in cybersecurity today. It goes something like this…

“Imagine someone who loves coffee. They have a fantastic coffee shop just steps from their home, serving the best lattes and espressos in town. But instead of strolling over to enjoy this local gem, they hop in their car and drive miles away for an average cup from a chain café. Why? Not because the coffee is better, but because they love cars and driving so much more—it’s their joy, their comfort zone, and safe space.”

This simple analogy speaks volumes about how cybersecurity operates today. Instead of focusing on accessible, impactful solutions like human risk management, we gravitate toward shiny new technologies—tools and systems that feel exciting, measurable, and comfortably within our domain of expertise. While these technological investments have their value, they’re not enough to solve the fundamental problem: the majority of risks come from humans. Much like driving to a chain café, this approach might feel familiar, but it often delivers underwhelming results.

To achieve true resilience in cybersecurity, we need to break out of this tech-first mindset. Greg’s coffee story pushes us to think differently. It’s not about the excitement of the drive or the allure of the car but about returning to what truly delivers value—the human side of cybersecurity. Leadership, culture, and human risk management need to become the core focus if we’re to build a sustainable and secure framework for the future.

 

The Allure of Technology in Cybersecurity

Cybersecurity professionals, like Greg’s car-loving coffee enthusiast, often find comfort in technology. Tools like Generative AI, advanced encryption systems, quantum computing, and automated threat detection are thrilling to evaluate, offering dashboards full of data and the tantalising promise of cutting-edge solutions. Technology feels tangible, and it gives us a sense of control in a rapidly evolving threat landscape.

But just like the coffee drinker who bypasses their local shop, our focus on technology often distracts us from what’s most important. The hard truth is that technology alone can’t fix the root causes of cyber risk. Whether it’s a mis-click on a phishing email, poor password management, acting on a deepfake, or a misconfiguration, human error accounts for most breaches.

These are challenges that require more than just a flashy new tool to overcome. They require addressing the people behind the processes.

 

Why Human Risk Management Matters

Greg’s analogy has a direct lesson for us in cybersecurity: just as the best coffee is right outside the door in his scenario, the most impactful cybersecurity solution for organisations is already available to them – it’s their people! When we invest in cybersecurity human risk management, we build stronger foundations that improve resilience across the board.

Here’s how human-centered strategies can transform cybersecurity:

1. Leadership Creates the Framework

Strong leadership is the foundation for a successful cybersecurity strategy. Leaders must set the tone, providing vision, fostering accountability, and—as Greg might put it—ensuring we “park the car and start walking toward what really matters.” A leadership culture that emphasises psychological safety enables teams to ask questions, admit mistakes, and innovate confidently. Without such commitment at the leadership level, it’s impossible to truly address deeper, human-related cybersecurity risks.

2. Culture Shapes Everyday Decisions

Leadership sets the tone, but organisational culture turns cybersecurity into a collective habit. A strong culture integrates security into the organisation’s DNA, helping everyone from entry-level employees to executives become active participants in defence.

The problem is that many organisations treat culture-building as an afterthought. They rely on compliance-driven security awareness training that barely scratches the surface. A meaningful security culture is only possible through engagement, diversity, and collaboration. When everyone in an organisation feels responsible for cybersecurity, its security posture improves exponentially.

3. Cybersecurity Human Risk Management Simplifies the Complex

Another reason we focus on technology is that it feels like the straightforward answer to overwhelming complexity. Hundreds of dashboards, endless alerts, and a flood of metrics, however, create decision paralysis within cybersecurity teams. Paradoxically, tools that are implemented with the intention of providing simple solutions to complex problems often end up further complicating them.

A human-focused approach to cybersecurity human risk management emphasises clarity and focus. Fewer, more targeted metrics allow teams to home in on what truly matters, empowering them to act decisively without being overwhelmed by noise. By simplifying processes, we can improve outcomes while reducing stress on cybersecurity professionals.

4. Technology as a Tool, Not the Strategy

Technology absolutely has a role in cybersecurity, but it should amplify human efforts, not serve as a substitute for them. When we start with a foundation of leadership, culture, and people-focused processes, technology becomes exponentially more effective. It’s the complement, not the crutch.

 

Breaking Out of the Comfort Zone

Greg’s coffee lover isn’t making the best choice—they’re operating inside their comfort zone. Similarly, cybersecurity professionals often stay in the familiar realm of tech solutions, avoiding the more challenging territory of human risk management. But real change happens when we address these foundational issues. By investing in people-first strategies, organisations can finally achieve the resilience they’ve been chasing through technology alone.

It’s time to ask ourselves a hard question. Are we driving miles for an average cup of coffee, or are we ready to step outside our comfort zone and grab the great one waiting on our doorstep?

 

Boost Cybersecurity Strategy Through Human Risk Management

The strongest cybersecurity strategies don’t rely on the latest tools. They depend on the strongest foundations—leadership, culture, and people. If you’re still stuck in the tech-comfort zone, now is the time to step into a new way of thinking.

Greg’s story reminds us that better results are closer than we think. Walk to the coffee shop. Build a foundation around cybersecurity human risk management. And create a safer, more resilient future for your organization.

If you’re ready to shift your focus to people and put human risk management at the centre of your cybersecurity strategy, we’re here to help.

 

Now I want to hear from you

If you’re ready to shift your focus to people and put human risk management at the centre of your cybersecurity strategy, I’m here to help. Contact me today to start the conversation.

 

By Jane Frankland (Business Owner & CEO, KnewStart)

Original link of post is here

Read more…

Imagine building a house on sand or precariously stacking blocks in a game of Jenga. No matter how carefully you place the materials or how advanced the tools you use, the structure is doomed to collapse without a strong, stable foundation.

This is the state of cybersecurity today.

Organisations invest heavily in governance, risk, and compliance (GRC) and risk management efforts while neglecting foundational elements like leadership and culture. The result? Fragile systems that fail to keep pace with attackers.

To break free from this cycle, we must rethink how we approach cybersecurity. A useful analogy is Maslow’s hierarchy of needs—a psychological framework that explains human motivation as a progression from fundamental needs to self-actualisation. Likewise, cybersecurity demands a layered approach, starting with foundational human-centered elements and building toward a resilient, secure business environment. Without these foundations, all the technology in the world won’t secure your organisation.

 

The Illusion of Security Built on Sand

Organisations are pouring resources into cybersecurity technologies, from generative AI to emerging quantum solutions. These tools undoubtedly offer opportunities to enhance defences, detect threats, and streamline operations. However, technology alone cannot solve the security puzzle. By focusing disproportionately on tech and GRC metrics, organisations are neglecting the deeper structural issues—much like stacking new blocks onto a shaky Jenga tower.

Consider this problem in light of Maslow’s hierarchy. Just as safety and belonging must precede human accomplishments, leadership, culture, and people-centric processes must underpin any secure environment. Without these base layers, organisations are left vulnerable, spending millions but achieving little more than an illusion of security.

 

The Cybersecurity Hierarchy of Needs

To secure a business—truly secure it—we need to reframe our strategies, moving away from tech-dependent approaches and focusing on what really matters. Here’s how applying the principles of Maslow’s hierarchy can transform cybersecurity:

1. Leadership Is the Foundation (Physiological Needs)

Leadership acts as the bedrock of effective cybersecurity. Strong leaders set vision, build trust, and foster accountability. Yet, today’s cybersecurity leaders often operate in a culture of fear, where asking questions feels unsafe and decisions are made with uncertainty. This weak leadership results in cracks at the very foundation of cybersecurity efforts.

To build securely, organisations must prioritise psychological safety. Teams need leaders who understand the complexity of cybersecurity and support innovation, not just compliance. When leadership is strong, the rest of the structure can rise.

 

2. Culture Embeds Security into Daily Life (Safety Needs)

If leadership is the foundation, culture is the frame that gives the structure its shape. A strong cybersecurity culture ensures that security isn’t just an afterthought—it becomes part of the organisation’s DNA. But too many businesses still approach cybersecurity with a compliance checklist mindset, treating it as a box to tick rather than a way to embed awareness and responsibility across the enterprise.

An effective culture prioritises continuous education, diversity of thought, and collaboration. It transforms employees into active participants in defence, rather than passive liabilities. Without this layer, even the best technology will fail because the human element is left unaddressed.

 

3. Risk Management Brings Clarity (Belonging and Love Needs)

The middle of the hierarchy addresses our need for connection and clarity. For organisations, this is the role of risk management. However, many businesses today drown in data, bombarded with endless alerts, metrics, and dashboards. This overload leads to analysis paralysis, distracting teams from what matters most.

Simplifying risk management through targeted metrics and actionable insights strengthens an organisation’s focus. By subtracting noise and zeroing in on critical threats, we can empower cybersecurity teams to act quickly and decisively, avoiding the chaos that often occurs during high-stress scenarios.

 

4. Defence Enhances Confidence (Esteem Needs)

Defence strategies are like esteem in Maslow’s hierarchy—they provide the confidence and trust that organisations need to function securely. But focusing solely on perimeter defences or siloed solutions isn’t enough. Attackers evolve constantly, and static defence mechanisms quickly become irrelevant.

Layered, adaptive security strategies that protect both operational reputation and critical assets are essential. However, these defences must also balance usability. Overly restrictive security measures can cripple operations, alienate teams, and even drive risky workarounds, which is what we regularly see.

 

5. Community Unlocks Purpose and Growth (Self-Actualization)

At the top of the hierarchy is community—collaboration beyond the organisation itself. When businesses engage with industry peers, share threat intelligence, and partner with external stakeholders, they elevate their security posture while contributing to a broader, safer digital ecosystem.

From cross-industry alliances to public-private partnerships, building community collaboration unlocks the full potential of a cybersecurity strategy. It transforms the fight against cyber threats from an isolated battle to a shared mission.


 

Technology Alone Is Not Enough

Generative AI, quantum computing, and other technological advancements offer promising possibilities, but they’re not silver bullets. Generative AI, for instance, can streamline threat detection—but it can also generate hallucinations or misuse data. Similarly, quantum computing may disrupt cryptography but also brings new vulnerabilities. Without the grounding of people and processes, such technologies can exacerbate risk rather than reduce it.

To move forward, we must place people at the centre of our cybersecurity strategies. Technology is a tool—when used in isolation, it lacks the capacity to drive meaningful change. Only by anchoring it in strong leadership, a supportive culture, and effective processes can you achieve the ultimate goal of doing business securely.

 

The Human Cost of Neglect

Failing to address foundational cybersecurity needs isn’t just a strategic misstep—it’s a human crisis. Overworked and overwhelmed, cybersecurity professionals face alarming rates of burnout, absenteeism, and even industry attrition. According to recent studies:

When human capacity is stretched too thin, mistakes happen. Alert fatigue, decision-making paralysis, and mental health challenges undermine the very professionals tasked with protecting your organization.

 

Rebuilding Cybersecurity from the Ground Up

The way forward is clear. Stop building cybersecurity strategies on the unstable sands of GRC metrics and isolated tech investments. Start with people.

Reassess your approach today. Are you missing foundational layers like leadership and culture? Are your cybersecurity strategies propped up by technology without addressing the people at their core? If so, it’s time to rebuild.

True security isn’t about doing cybersecurity better—it’s about doing business securely. This means investing in leadership, fostering a culture of security, and prioritising the health and well-being of your cybersecurity teams before layering on technology and process improvements.

 

To End: The Human-Centric Cybersecurity Alternative

We have a choice. Continue stacking blocks into a fragile cybersecurity Jenga tower or start building a resilient structure with strong foundations.

Emerging approaches like cybersecurity human risk management enable organizations to better measure, evaluate, and understand the behaviors and risk profiles of the humans that make up the foundational layer of truly effective cybersecurity.

Adaptive security awareness training solutions leverage individuals’ data to personalize their security awareness training, ensuring that the right person receives the right training, at the right time.

These approaches reflect the foundational insight that human-centric cybersecurity starts by putting human beings at the heart of cybersecurity, ensuring that the technology layered thereafter are compatible with the people they’re intended to protect.

The choice is simple.

 

Now I want to hear from you

Tell me in the comments, what’s the biggest challenge you’ve faced in getting people to engage with cybersecurity from a human risk management perspective—and how did you tackle it?

If you want to move toward a people-first cybersecurity strategy, and are unsure how to do that, join in the conversation on LinkedIn or, better still, schedule a discovery call.

 

By Jane Frankland (Business Owner & CEO, KnewStart)

Original link of post is here

Read more…

 

 

Article content

 

Key Cybersecurity Challenges In 2025—Trends and Observations

by Chuck Brooks

 

In 2025, cybersecurity is gaining significant momentum. However, there are still many challenges to address. The ecosystem remains unstable in spite of investments and the introduction of new tools. In addition to adding my own findings, I have examined some recent statistics, trends, and remedies. Among the subjects covered are ransomware, DDoS attacks, quantum technology, healthcare breaches, artificial intelligence and AI agents, and cybersecurity for space assets. No doubt, there are many more that could be added.

 

Artificial Intelligence, Cybersecurity, and AI Agents

“87% of security professionals report that their organization has encountered an AI-driven cyber-attack in the last year, according to a new study by SoSafe, Europe’s largest security awareness and human risk management solution.” 87% of firms hit by AI cyber-attacks

“Agents are the talk of the AI industry—they’re capable of planning, reasoning, and executing complex tasks like scheduling meetings, ordering groceries, or even taking over your computer to change settings on your behalf. But the same sophisticated abilities that make agents helpful assistants could also make them powerful tools for conducting cyberattacks. They could readily be used to identify vulnerable targets, hijack their systems, and steal valuable data from unsuspecting victims.” Cyberattacks by AI agents are coming | MIT Technology Review

Alongside benefits such as cyber protection technologies, AI may also have disadvantages, as described in the articles above: threat actors can use it too. Malicious hackers and antagonistic countries can already use AI agents to recognize and exploit vulnerabilities in threat detection models.

However, agentic AI enabled cybersecurity holds enormous potential for detecting, filtering, neutralizing, and remediating cyberthreats. Agentic AI can tackle the core issues of threat detection, response time, and analyst burden. Security teams can function more efficiently in a more hostile digital environment thanks to these technologies, which automate operations while preserving human oversight.

Additionally, GenAI and predictive algorithms may be able to use predictive models in cybersecurity more effectively, producing better outcomes and more reliable security data. AI agents combined with GenAI could be used to recommend paths for mitigation and optimize cybersecurity knowledge and incident response for businesses and organizations.
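The idea of automating triage while preserving human oversight can be sketched as a simple decision gate. This is purely an illustrative toy, assuming a made-up alert format and thresholds (`Alert`, `triage`, `SEVERITY_THRESHOLD` are all hypothetical, not any vendor's API):

```python
# Toy "agentic" alert-triage loop with a human-approval gate.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int        # 0 (info) .. 10 (critical)
    auto_containable: bool

SEVERITY_THRESHOLD = 7   # at or above this, a human must approve any action

def triage(alert: Alert) -> str:
    """Decide what the agent may do on its own vs. escalate to an analyst."""
    if alert.severity < 3:
        return "auto-close"             # low-risk noise: agent handles it
    if alert.auto_containable and alert.severity < SEVERITY_THRESHOLD:
        return "auto-contain"           # routine containment, logged for review
    return "escalate-to-human"          # high impact: human makes the final call

alerts = [
    Alert("edr", 2, True),
    Alert("siem", 5, True),
    Alert("siem", 9, False),
]
decisions = [triage(a) for a in alerts]
print(decisions)  # ['auto-close', 'auto-contain', 'escalate-to-human']
```

The point of the gate is exactly the balance described above: the agent absorbs the high-volume, low-stakes work, while anything consequential still routes to an analyst.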

 

AI Agents Trending

“The growth in the popularity of AI agents in the latter months of 2024 mirrors how ChatGPT and other generative AI systems catapulted into and transformed the AI market in 2022. Vendors seemingly jumped from developing the latest large language models (LLMs) and AI chatbots to creating agents and action models.” 2025 will be the year of AI agents | TechTarget

 

AI Agents For Good- Artificial General Decision Making™ (AGD™)

“A San Francisco company founded in 2023 called Klover AI defines Artificial General Decision Making™ (AGD™) as the creation of systems designed to enhance human decision-making capabilities, ultimately leading to “superhuman productivity and efficiency” for individuals. The fundamental goal of AGD™, according to the company, is to empower individuals to such an extent that every person on the planet can achieve a state of “superhuman” capability through the use of advanced decision-making systems. Dany Kitishian, the founder of Klover AI, describes these AI agents as sophisticated software entities capable of perceiving their environment, making informed decisions, and performing actions to achieve specific objectives, thereby significantly enhancing communication and user interactions. This vision is rooted in the idea of augmenting human capabilities rather than replacing them, aligning with a “people-centered AI strategy” that aims to amplify human strengths and provide individuals with more opportunities through better-informed systems.” Google Gemini Deep Research confirms Klover pioneered and Coined Artificial General Decision Making™ (AGD™) | by Dany Kitishian | kloverai | Mar, 2025 | Medium

CB Thoughts: Advancements in artificial intelligence have led to significant changes in businesses and societal norms. This new era may alter our self-perception through AI and machine learning-based computing, and agentic AI will be a catalyst, helping lead the way. The integration of engineering, computer algorithms, and culture is ushering in an era of rapidly advancing, interconnected devices, and these scientific and technological developments are anticipated to significantly shape societal progression.

 

Healthcare Breaches Continue to Rise

“In 2024, healthcare data breaches reached an all-time high, with 276,775,457 records compromised – a 64.1% increase from the previous year’s record and equivalent to 81.38% of the United States population. Despite managing sensitive patient data, findings reveal that healthcare organizations still struggle with corporate customer data protection.” Data breaches rock leading US hospitals| Cybernews

“Cyberattacks targeting healthcare organizations are rising, and the financial and operational toll they take is growing. A recent report from Proofpoint found 92% of healthcare organizations reported experiencing a cyberattack in 2024, up from 88% in 2023, while the average cost of the most expensive attack was $4.7 million.” The Biggest Healthcare Cybersecurity Threats in 2025 | HealthTech

CB Thoughts: It is hardly surprising that criminal hackers are still focusing on the healthcare industry. As medical care grows more networked and connected through computers and other devices, the digital environment of health administration, clinics, hospitals, and patients has become increasingly vulnerable. It is necessary to safeguard many facets of the cybersecurity healthcare environment. These include safeguarding patient privacy, securing medical devices and equipment, and protecting hospital and medical facility information security networks. Healthcare organizations must implement intrusion detection and response systems, conduct regular security audits, and use penetration testing to safeguard sensitive data. In addition to reducing the impact of bot assaults and improper IT configurations, these techniques can be used to identify potential insider threats.

Multifactor authentication and employee training are two aspects of good cyber hygiene that hospitals and other healthcare organizations should implement. They should also employ multiple firewalls, multilayer protection, and real-time network monitoring. To reduce security risks, medical devices should be encrypted, and hospitals and other healthcare facilities should have backup, recovery, and continuity plans in place. The stakes are too high to overlook the necessity of a holistic approach to cybersecurity.

 

Quantum Cybersecurity Becoming an Imperative

“Quantum computing is becoming real and will soon be able to solve problems well beyond the capabilities of today's fastest supercomputers. In the wrong hands, however, quantum computers will also create a new pain level for cybersecurity professionals.” How quantum cybersecurity changes the way you protect data | TechTarget

“In a striking development, researchers have created a quantum algorithm that allows quantum computers to better understand and preserve the very phenomenon they rely on – quantum entanglement.” Quantum Computers Just Got Smart Enough to Study Their Own Entanglement

“These computers work by harnessing quantum physics — the strange, often counterintuitive laws that govern the universe at its smallest scales and coldest temperatures. Today’s quantum computers are rudimentary and error-prone. But if more advanced and robust versions can be made, they have the potential to rapidly crunch through certain problems that would take the current computers years. That’s why governments, companies and research labs around the world are working feverishly toward this goal.” Quantum Computing Explained | NIST

CB Thoughts: There is concern that data protected today may be cracked by quantum computers in the future. The processing power of quantum computers poses a risk to cybersecurity through their ability to rapidly solve the mathematical problems that underpin modern encryption. This situation poses an immediate threat to financial systems and critical infrastructure.

The RSA-2048 encryption standard would require a billion years for a conventional computer to break, but a quantum computer could theoretically do so in less than two minutes. Quantum researchers refer to the day when large-scale quantum computers can use Shor's algorithm to break all public key systems based on integer factorization as "Q-Day".
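For intuition on why factoring-based encryption resists classical attack, here is a minimal, purely illustrative sketch of classical trial-division factoring. Its running time grows roughly with the square root of the number being factored, which is why a conventional computer cannot factor an RSA-2048 modulus in any feasible time, while Shor's algorithm on a sufficiently large quantum computer would run in polynomial time. The toy modulus below is hypothetical and chosen only for demonstration.

```python
import math


def trial_division(n: int) -> list[int]:
    """Return the prime factors of n by checking divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining n is prime
    return factors


# A toy RSA-style modulus: the product of two small primes, 53 and 61.
# Trivial here, but the search space explodes for a 617-digit modulus.
print(trial_division(3233))
```

The point of the sketch is the scaling, not the code: each extra digit in the modulus multiplies the classical search space, whereas Shor's period-finding approach sidesteps that search entirely.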

The era of quantum computing is approaching faster than anticipated, with artificial intelligence likely to be integrated with quantum technology. The convergence of these technologies will have significant implications. It is important to prepare for both the positive and negative impacts of quantum technologies due to their disruptive potential.

 

Cybersecurity for Space Assets

“As the space domain continues to evolve, so do its threat actors. In the proverbial game of keeping data safe and secure, how is the cybersecurity world keeping up?

Via Satellite spoke with cybersecurity and space experts to predict what’s to come in 2025, including the impact of rapid advancements in Artificial Intelligence (AI) and quantum technologies.” Game-Changing Predictions for Cybersecurity in 2025 | April/May 2025

“Protecting the frontier of space systems is unquestionably a security priority for governments and industry. Due to our increasing reliance on space, and particularly satellites, for communications, security, intelligence, and business, satellite and space cybersecurity is becoming increasingly important in this new digital era.” Cybersecurity of Space Systems | LinkedIn

CB Thoughts: Nations increasingly rely on space for information exchange and surveillance, monitoring threats and geopolitical developments that are essential to national security. The national security apparatus recognizes the rising cyber threat to satellites.

The reliance on space and satellites for communications, security, intelligence, and commerce highlights the growing importance of satellite and space security in the digital era. In recent years, the number of satellite launches has increased, resulting in thousands of satellites in low-Earth orbit that are susceptible to cyberattacks. Satellites facilitate data transfer over long, international distances, and many communication networks are transitioning from land-based communications to cloud systems. As launch costs have decreased, the number of satellites in orbit has surged, expanding the potential targets for hackers both in space and at ground control centers.

 

Alarming Ransomware Attacks Continue

“A new report from Ivanti surveyed more than 2,400 security leaders and found that the top predicted threat for 2025 is ransomware. According to the report, nearly 1 out of every 3 security professionals (38%) believe ransomware will become an even greater threat when powered by AI. The report found a gap in preparedness for ransomware attacks, with only 29% of security leaders saying they are very prepared for ransomware incidents.” 1 in 3 security leaders say AI will make ransomware a greater threat | Security Magazine

“The Travelers Companies, an insurer, published findings indicating that ransomware remains a significant threat. The fourth quarter of 2024 experienced the highest level of ransomware activity recorded in any prior quarter, with a total of 1,663 known victims posted on leak sites, according to that research. In addition, 55 new ransomware groups emerged last year — a 67% increase in group formation compared with 2023, the Travelers report said.” Ransomware attacks surged 50% in February: NCC | CFO Dive

CB Thoughts: Businesses are facing ransomware more frequently because of AI-enabled phishing attacks combined with social engineering. In a ransomware attack, hackers encrypt vital files so victims cannot access their data, then demand a ransom to restore the systems and data. These attacks can spread fear and disrupt company networks and systems, especially for businesses dependent on supply chain coordination.

Small businesses, healthcare facilities, and higher education institutions have proven to be the sectors most susceptible to ransomware attacks because they lack cybersecurity expertise and significant security resources. They have paid a high price, frequently and covertly paying ransoms in cryptocurrencies to avoid liability and closure, even though doing so is discouraged.

 

DDoS Attacks Problematic

“The number of Distributed Denial of Service (DDoS) attacks has shot up since the first half of last year, according to new research, with DDoS-for-hire services becoming increasingly sophisticated. Figures from Netscout show there were almost nine million DDoS attacks in the second half of 2024, up 12.75% on the first half. The rise is driven by the increasing use of DDoS attacks as a tool of choice in cyber warfare linked to socio-political events such as elections, civil protests, and policy disputes.” Surging DDoS attack rates show no sign of slowing down – here’s why | IT Pro

CB Thoughts: A Distributed Denial-of-Service (DDoS) attack occurs when an adversary uses many devices to flood a target system, network, or website with traffic. This technique prevents authorized users from accessing the target by overloading its processing power.

In DDoS attacks, hackers often target internet-facing networking equipment, exploiting common server and network device behavior. Attackers therefore focus on edge network elements (such as switches and routers) rather than individual servers, overloading the devices that deliver bandwidth, the network's pipe. Criminals also use DDoS-as-a-service platforms to launch attacks against corporate websites and demand ransom payments, threatening to degrade the service if the money is not paid.
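The volumetric pattern described above is what simple DDoS detection keys on: an abnormal request rate from a source within a short window. The following is a minimal sketch of that idea only; the window size, threshold, and IP addresses are hypothetical, and real mitigation systems operate at the network edge with far more sophisticated signals.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10.0   # assumed observation window
MAX_REQUESTS = 100      # assumed per-source limit within the window

# Per-source timestamps of recent requests.
_history: dict[str, deque] = defaultdict(deque)


def is_flooding(src_ip: str, now: float) -> bool:
    """Record one request from src_ip at time `now`; return True once the
    source exceeds MAX_REQUESTS within the sliding WINDOW_SECONDS window."""
    q = _history[src_ip]
    q.append(now)
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS


# Example: 150 requests in 1.5 seconds from one source trips the check.
flagged = any(is_flooding("203.0.113.7", now=i * 0.01) for i in range(150))
print(flagged)
```

The design choice worth noting is the sliding window: counting only requests inside the last few seconds avoids penalizing a source for old traffic, which a naive lifetime counter would do.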

As innovative technologies like artificial intelligence and quantum computing advance in capabilities and comprehension, 2025 will see a variety of both old and new cyberthreats. For everyone concerned, defending their data and business continuity against cyberattacks will be particularly difficult this year.

 

- By Chuck Brooks (President, Brooks Consulting International)

Original link of the post is here


Cyber Crime: Stages of Trial in Court

The cybercrime criminal trial in India generally consists of three main stages: the pre-trial stage, the trial stage, and the post-trial stage. Together these include steps such as filing a First Information Report (FIR), police investigation, charge sheet submission, framing of charges, examination of witnesses, presentation of evidence, closing arguments, and finally the judgment and potential appeals.

There are four types of trials under the Bharatiya Nagarik Suraksha Sanhita, 2023 (BNSS):

1. Summons Trial

2. Warrant Trial

  • I. Cases instituted on police report
  • II. Cases instituted otherwise than on police report

3. Sessions Trial

4. Summary Trial

 

1. Filing a Complaint

  • Reporting the Crime: The victim or informant files a complaint with the police or a specialized cybercrime police station or cell. This is the first step in initiating legal action.

  • FIR Registration: A First Information Report (FIR) is registered, earlier under Section 154 of the CrPC and now under Sections 173(1) and 173(2) of the BNSS, which mandate the recording of information about a cognizable offense.

 

2. Investigation

  • Evidence Collection: The police or investigating agency collects digital evidence, such as IP addresses, transaction records, and forensic data. This is governed by Section 157 of BNSS, which outlines the procedure for investigation.

  • Identifying the Culprit: Investigators trace the origin of the cybercrime, often involving international collaboration if the crime crosses borders.

  • Filing the Charge Sheet: Once the investigation is complete, a charge sheet is filed under Section 173 of BNSS, which requires the police to submit a report to the magistrate.

Relevant Sections:

  • BNSS Section 157: Investigation by the police.

  • BNSS Section 173: Submission of the charge sheet.

 

3. Framing of Charges

  • Court Review: The magistrate or sessions court reviews the charge sheet and evidence to determine if there is sufficient ground to proceed.

  • Framing Charges: Charges are framed under Section 228 of BNSS, which allows the court to formally charge the accused based on the evidence.

Relevant Sections:

  • BNSS Section 228: Framing of charges.

 

4. Trial Proceedings

  • Prosecution’s Case: The prosecution presents its case, including evidence and witness testimonies, under Section 244 of BNSS. This stage aims to prove the guilt of the accused beyond a reasonable doubt.

  • Defence’s Case: The defense presents its arguments and evidence under Section 247 of BNSS, challenging the prosecution’s case.

  • Cross-Examination: Both sides cross-examine witnesses under Section 137 of BNSS, which governs the examination and cross-examination of witnesses.

Relevant Sections:

    • BNSS Section 244: Prosecution evidence.

    • BNSS Section 247: Defense evidence.

    • BNSS Section 137: Examination of witnesses.

 

5. Judgment

  • Final Arguments / Verdict / Quantum of Punishment / Judgment under Sections 257 to 258

This is the final stage of the trial, where both parties, after proper evaluation of statements, evidence, and witness testimony, put their case before the court through oral arguments. Based on the arguments and the material evidence on record, the judge pronounces whether the accused is convicted or acquitted of the charges leveled against them. If the judge convicts the accused, the court must first hear the accused on the quantum of sentence under Section 401 of the BNSS, that is, the term to be served for the offence committed. After hearing the accused, the judge passes a detailed judgment recording the reasons why the accused is to be punished for the offence.

  • Court’s Decision: The judge delivers a verdict under Section 392 of BNSS, either acquitting or convicting the accused.

Relevant Sections:

  • BNSS Section 352: Final arguments.

  • BNSS Section 392: Judgment.

 

6. Appeal

  • Right to Appeal: If either party is dissatisfied with the judgment, they can appeal to a higher court under Section 413 of BNSS.

  • Final Resolution: The appellate court reviews the case and may uphold, modify, or overturn the original decision.

Relevant Sections:

  • BNSS Section 413: Right to appeal.

 

7. Execution of Sentence

  • Implementation: If the accused is convicted, the sentence is executed as per the court’s orders under Section 458 of BNSS.

  • Rehabilitation: In some cases, the court may recommend rehabilitation programs for the accused.

Relevant Sections:

  • BNSS Section 458: Execution of sentence.

 

Key Points to Remember

  • Burden of Proof: The prosecution must prove the accused’s guilt beyond a reasonable doubt, as per BNSS Section 101.

  • Types of Trials: Cybercrime trials are typically conducted as sessions trials under BNSS, given the severity of such offenses.

  • Electronic Evidence: The admissibility of digital evidence is governed by Section 65B of the Indian Evidence Act, whose provisions are carried forward in the Bharatiya Sakshya Adhiniyam, 2023.

 

Conclusion

The stages of a cybercrime trial in India are meticulously structured under the Bharatiya Nagarik Suraksha Sanhita (BNSS) and Bharatiya Nyaya Sanhita (BNS). From filing an FIR to executing the sentence, each stage ensures that justice is served while addressing the unique challenges posed by digital offenses. By understanding these stages and the relevant legal provisions, victims, defendants, and legal professionals can navigate the system more effectively.

 

By: Adv. (Dr.) Prashant Mali, Founder at Cyber Law Consulting (Advocates & Attorneys)

Original link to the blog: Click Here

 