Biswajit Banerjee's Posts - CISO Platform

Biswajit Banerjee's Posts (95)

The cybercriminal economy is a continuously evolving connected ecosystem of many players with different techniques, goals, and skillsets.
 
Ransomware as a service (RaaS) is a subscription-based model that enables affiliates to use already-developed ransomware tools to execute ransomware attacks. Affiliates earn a percentage of each successful ransom payment.


Ransomware as a Service (RaaS) adapts the Software as a Service (SaaS) business model to cybercrime.
RaaS users don't need to be skilled, or even experienced, to use these tools proficiently. RaaS offerings therefore empower even novice hackers to execute highly sophisticated cyberattacks.

RaaS operations pay their affiliates very high dividends. The average ransom demand increased by 33% since Q3 2019 to $111,605, with some affiliates earning up to 80% of each ransom payment. The low technical barrier to entry and prodigious affiliate earning potential make RaaS offerings purpose-built for victim proliferation.

In the same way our traditional economy has shifted toward gig workers for efficiency, criminals are learning that there’s less work and less risk involved by renting or selling their tools for a portion of the profits than performing the attacks themselves. This industrialization of the cybercrime economy has made it easier for attackers to use ready-made penetration testing and other tools to perform their attacks.

Ransomware attacks have become even more impactful in recent years as more ransomware-as-a-service ecosystems have adopted the double extortion monetization strategy. All ransomware is a form of extortion, but now, attackers are not only encrypting data on compromised devices but also exfiltrating it and then posting or threatening to post it publicly to pressure the targets into paying the ransom. Most ransomware attackers opportunistically deploy ransomware to whatever network they get access to, and some even purchase access to networks from other cybercriminals. Some attackers prioritize organizations with higher revenues, while others prefer specific industries for the shock value or type of data they can exfiltrate.

The RaaS affiliate model, which has allowed more criminals, regardless of technical expertise, to deploy ransomware built or managed by someone else, is weakening this link. As ransomware deployment becomes a gig economy, it has become more difficult to link the tradecraft used in a specific attack to the ransomware payload developers.

The dark web is a criminal-infested network, so any leaked information on the platform will give multiple cybercriminal groups free access to your sensitive data and those of your customers. The fear of further exploitation compels many ransomware victims to comply with cybercriminal demands.

To make the ransom payment, victims are instructed to download a dark web browser and pay through a dedicated payment gateway. Most ransomware payments are made in cryptocurrency, usually Bitcoin, because it is perceived as difficult to trace.

Reporting a ransomware incident by assigning it with the payload name gives the impression that a monolithic entity is behind all attacks using the same ransomware payload and that all incidents that use the ransomware share common techniques and infrastructure. However, focusing solely on the ransomware stage obscures many stages of the attack that come before, including actions like data exfiltration and additional persistence mechanisms, as well as the numerous detection and protection opportunities for network defenders.

 

How to Protect Yourself from Ransomware Attacks

The most effective ransomware attack mitigation strategy is a combination of educating staff, establishing defenses, and continuously monitoring your ecosystem for vulnerabilities.

Here are some suggested defense tactics:

  • Monitor all endpoint connection requests and establish validation processes
  • Educate staff on how to identify phishing attacks
  • Set up DKIM and DMARC to prevent attackers from using your domain for phishing attacks.
  • Monitor and remediate all vulnerabilities exposing your business to threats
  • Monitor the security posture of all your vendors to prevent third-party breaches
  • Schedule regular data backups
  • Do not rely solely on cloud storage; back up your data to external hard drives as well
  • Avoid clicking on questionable links. Phishing scams do not only occur via email; malicious links can lurk on web pages and even Google documents.
  • Use antivirus and anti-malware solutions
  • Ensure all your devices and software are patched and updated.
  • Provide your staff and end-users with comprehensive social engineering training
  • Introduce Software Restriction Policies (SRP) to prevent programs from running in locations commonly abused by ransomware, e.g. temporary folder locations
  • Apply the Principle of Least Privilege to protect your sensitive data
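On the DKIM and DMARC item in the list above: both are published as DNS TXT records on your sending domain. A minimal, hypothetical sketch in zone-file form (the selector "s1", the domain "example.com", the reporting address, and the truncated DKIM public key are all placeholders):

```
; DKIM: publish the public key that receivers use to verify mail
; signed by your servers, under the chosen selector
s1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."

; DMARC: tell receiving servers to reject mail that fails SPF/DKIM
; alignment, and where to send aggregate reports
_dmarc.example.com.         IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A common rollout path is to start with p=none (monitor only), then tighten to p=quarantine and finally p=reject once the aggregate reports confirm that all legitimate mail sources sign correctly.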

 

Ransomware: Should You Pay the Ransom?

Whether or not you should pay a ransom is a difficult decision. If you pay, you are trusting that the cybercriminals will deliver on their promise of supplying a decryption key.

Cybercriminal operations are inherently immoral; you cannot trust criminals to uphold even a fragment of morality and follow through on their promises. In fact, many RaaS affiliates don't bother providing decryption keys to paying victims; their time is better spent seeking out new ones.

Because a ransom payment never guarantees the decryption of seized data, the FBI strongly discourages paying ransoms. Yet companies do pay, and I personally know many clients who have budgeted for ransom payments, treating them as an ever-present business risk in spite of good cybersecurity practices. Some of my clients have cyber insurance that covers ransom payments, though frankly speaking, I do not know the legality of such coverage.

 

- By Adv (Dr.) Prashant Mali 

Original link of post is here

Read more…

Basic structure of legal argument

  1. If conditions A, B and C are satisfied, then legal consequences X, Y and Z follow. (Major premise: legal rule)
  2. Conditions A, B and C are satisfied (or not). (Minor Premise: the facts of the case)
  3. Therefore, legal consequences X, Y and Z do (or do not) follow. (Conclusion: legal judgment in the case).

 

As I mentioned in part one, the first premise of this argument structure tends to get most of the attention in law schools. The second premise — establishing the actual facts of the case — tends to get rather less attention. This is unfortunate for at least three reasons.

First, in practice, establishing the facts of a case is often the most challenging aspect of a lawyer’s job. Lawyers have to interview clients to get their side of the story. They have to liaise with other potential witnesses to confirm (or disconfirm) this story. Sometimes they will need to elicit expert opinion, examine the locus in quo (scene of the crime/events) and any physical evidence, and so on. This can be a time-consuming and confusing process. What if the witness accounts vary? What if you have two experts with different opinions? Where does the truth lie?

Second, in practice, establishing the facts is often critical to winning a case. In most day-to-day legal disputes, the applicable legal rules are not in issue. The law is relatively clearcut. It’s only at the appeal court level that legal rules tend to be in dispute. Cases get appealed primarily because there is some disagreement over the applicable law. It is rare for appeal courts to reconsider the facts of case. So, in the vast majority of trials, it is establishing the facts that is crucial. Take, for example, a murder trial. The legal rules that govern murder cases are reasonably well-settled: to be guilty of murder one party must cause the death of another and must do this with intent to kill or cause grievous bodily harm. At trial, the critical issue is proving whether the accused party did in fact cause the death of another and whether they had the requisite intent to do so. If the accused accepts that they did, they might try to argue that they have a defence available to them such as self-defence or insanity. If they do, then it will need to be proven that they acted in self defence or met the requirements for legal insanity. It’s all really about the facts.

Third, the legal system has an unusual method of proving facts. This is particularly true in common law, adversarial systems (which is the type of legal system with which I am most familiar). Courts do not employ the best possible method of fact-finding. Instead, they adopt a rule-governed procedure for establishing facts that tries to balance the rights of the parties to the case against both administrative efficiency and the need to know the truth. There is a whole body of law — Evidence Law — dedicated to the arcana of legal proof. It’s both an interesting and perplexing field of inquiry — one that has both intrigued and excited commentators for centuries.

I cannot do justice to all the complexities of proving facts in what follows. Instead, I will offer a brief overview of some of the more important aspects of this process. I’ll start with a description of the key features of the legal method for proving facts. I’ll then discuss an analytical technique that people might find useful when trying to defend or critique the second premise of legal argument. I’ll use the infamous OJ Simpson trial to illustrate this technique. I’ll follow this up with a list of common errors that arise when trying to prove facts in law (the so-called ‘prosecutor’s fallacy’ being the most important). And I’ll conclude by outlining some critiques of the adversarial method of proving facts.

 

1. Key Features of Legal Proof

As mentioned, the legal method of proving facts is unusual. It’s not like science, or history, or any other field of empirical inquiry. I can think of no better way of highlighting this than to simply list some key features of the system. Some of these are more unusual than others.

 

Legal fact-finding is primarily retrospective: Lawyers and judges are usually trying to find out what happened in the past in order to figure out whether a legal rule does or does not apply to that past event. Sometimes, they engage in predictive inquiries. For example, policy-based arguments in law are often premised on the predicted consequences of following a certain legal rule. Similarly, some kinds of legal hearing, such as probation hearings or preventive detention hearings, are premised on predictions. Still, for the most part, legal fact-finding is aimed at past events. Did the accused murder the deceased? Did my client really say ‘X’ during the contractual negotiations? And so on.
Legal fact-finding is norm-directed: Lawyers and judges are not trying to find out exactly what happened in the past. Their goal is not to establish what the truth is. Their goal is to determine whether certain conditions — as set down in a particular legal rule — have been satisfied. So the fact-finding mission is always directed by the conditions set down in the relevant legal norm. Sometimes lawyers might engage in a more general form of fact-finding. For instance, if you are not sure whether your client has a good case to make, you might like to engage in a very expansive inquiry into past events to see if something stands out, but for the most part the inquiry is a narrow one, dictated by the conditions in the legal rule. At trial, this narrowness becomes particularly important as you are only allowed to introduce evidence that is relevant to the case at hand. You can’t go fishing for evidence that might be relevant and you can’t pursue tangential factual issues that are not relevant to the case simply to confuse jurors or judges. You have to stick to proving or disputing the conditions set down in the legal rule.
Legal fact-finding is adversarial (in common law systems): Lawyers defend different sides of a legal dispute. Under professional codes of ethics, they are supposed to do this zealously. Judges and juries listen to their arguments. This can result in a highly polarised and sometimes confusing fact-finding process. Lawyers will look for evidence that supports their side of the case and dismiss evidence that does not. They will call expert witnesses that support their view and not the other side’s. This is justified on the grounds that the truth may emerge when we triangulate from these biased perspectives but, as I will point out later on, this is something for which many commentators critique the adversarial system. There is a different approach in non-adversarial systems. For instance, in France judges play a key role in investigating the facts of a case. At trial, they are the ones who question witnesses and elicit testimony. The lawyers take a backseat. Sometimes this is defended on the grounds that it results in a more dispassionate and less biased form of inquiry, but this is debatable given the political and social role of such judges, and the fact that everyone has some biases of their own. Indeed, the inquisitorial system may amplify the biases of a single person.
Legal fact-finding is heavily testimony-dependent: Whenever a lawyer is trying to prove a fact at trial, they have to get a witness to testify to this fact. This can include eyewitnesses (people who witnessed the events at issue in the trial) or expert witnesses (people who investigated physical or forensic evidence that is relevant to the case). The dependence on testimony can be hard for people to wrap their heads around. Although physical evidence (e.g. written documents, murder weapons, blood-spattered clothes etc) is often very important in legal fact-finding, you cannot present it by itself. You typically have to get a witness to testify as to the details of that evidence (confirming that it has not been tampered with etc).
Legal fact-finding is probabilistic: Nothing is ever certain in life, but this is particularly true in law. Lawyers and judges are not looking for irrefutable proof of certain facts. They are, instead, looking for proof that meets a certain standard. In civil (non-criminal) trials, facts must be proved ‘on the balance of probabilities’, i.e. they must be more probable than not. In criminal trials, they must be proved ‘beyond reasonable doubt’. What this means, in statistical terms, is unclear. The term ‘reasonable doubt’ is vague. Some people might view it as proving something is 75% likely to have occurred; others may view it as 90%+. There are some interesting studies on this (LINK). They are not important right now. The important point is that legal proof is probabilistic and so, in order to be rationally warranted, legal fact-finders ought to follow the basic principles of probability theory when conducting their inquiries. This doesn’t mean they have to be numerical and precise in their approach, but simply that they should adopt a mode of reasoning about facts that is consistent with the probability calculus. I’ll discuss this in more detail below.
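To make the probabilistic point concrete (and to preview the ‘prosecutor’s fallacy’ discussed elsewhere in this piece), here is a small Bayes’ theorem illustration with purely hypothetical numbers: a forensic test has a one-in-a-million random-match probability, and any of five million people could in principle be the perpetrator.

```python
# Prosecutor's fallacy: P(match | innocent) is not P(innocent | match).
# All numbers below are hypothetical, for illustration only.

population = 5_000_000          # possible perpetrators
p_match_given_guilty = 1.0      # the guilty person always matches
p_match_given_innocent = 1e-6   # random-match probability

p_guilty = 1 / population       # flat prior: any one person could be the culprit

# Bayes' theorem: P(guilty | match)
numerator = p_match_given_guilty * p_guilty
denominator = numerator + p_match_given_innocent * (1 - p_guilty)
p_guilty_given_match = numerator / denominator

print(f"P(guilty | match) = {p_guilty_given_match:.3f}")  # ~0.167
```

Despite the tiny random-match probability, a match alone makes guilt only about one in six here, because roughly five innocent people in that population would also match. This is why a rational fact-finder must reason with priors and base rates, not just with the probability of the evidence.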
Legal fact-finding is guided by presumptions and burdens of proof (in an adversarial system): Sometimes certain facts do not have to be proved; they are simply presumed to be true. Some of these presumptions are rebuttable — i.e. evidence can be introduced to suggest that what was presumed to be true is not, in fact, true — sometimes they are not. The best known presumption in law is, of course, the presumption of innocence in criminal law. All criminal defendants are presumed to be innocent at the outset of a trial. It is then up to the prosecution to prove that this presumption is false. This relates to the burden of proof. Ordinarily, it is up to the person bringing the case — the prosecution in a criminal trial or the plaintiff in a civil trial — to prove that the conditions specified by the governing legal rule have been satisfied. Sometimes, the burden of proof shifts to the other side. For instance, if a defendant in a criminal trial alleges that they have a defence to the charge, it can be up to them to prove that this is so, depending on the defence.
Legal fact-finding is constrained by exclusionary rules of evidence: Lawyers cannot introduce any and all evidence that might help them to prove their case. There are rules that exclude certain kinds of evidence. For example, many people have heard of the so-called rule against hearsay evidence. It is a subtle exclusionary rule. One witness cannot testify to the truth of what another person may have said. In other words, they can testify to what they heard, but they cannot claim or suggest that what they heard was accurate or true. There are many other kinds of exclusionary rule. In a criminal trial, the prosecution cannot, ordinarily, introduce evidence regarding someone’s past criminal convictions (bad character evidence), nor can they produce evidence that was obtained in violation of someone’s legal rights (illegally obtained evidence). Historically, many of these rules were strict. More recently, exceptions have been introduced. For example, in Ireland there used to be a very strict rule against the use of unconstitutionally obtained evidence; more recently this rule has been relaxed (or “clarified”) to allow such evidence if it was obtained inadvertently. In addition to all this, there are many formal rules regarding the procurement and handling of forensic evidence (e.g. DNA, fingerprints and blood samples). If those formal rules are breached, then the evidence may be excluded from trial, even if it is relevant. There is often a good policy reason for these exclusions.

 

Those are some of the key features of legal fact-finding, at least in common law adversarial systems. Collectively, they mean that defending the second premise of a legal argument can be quite a challenge as you not only have to seek the truth but you have to do so in a constrained and, in some sense, unnatural way.

 

- By Adv (Dr.) Prashant Mali 

Original link of post is here

Read more…

1. Art 21 of the Constitution guarantees the fundamental right to life and personal liberty. This article of the Constitution has been interpreted by the Judiciary with the widest amplitude, so as to include several other rights such as the right to food and shelter and, most importantly, the right to a fair trial, which includes the right to a fair investigation. In Anbazhagan’s case, the apex court observed that ‘if the criminal trial is not free and fair and not free from bias the judicial fairness and the criminal justice system would be at stake, shaking the confidence of the public in the system and woe would be the rule of law’.1 Trial should be fair to all concerned, and ‘denial of fair trial is as much an injustice to the accused as is to the victim and the society’.2

2. The right to a fair trial includes ‘fair investigation’.3 Fair trial and fair investigation are pre-requisites for obtaining the justice which the parties deserve as per law, and one without the other cannot yield fair justice. A victim of a crime is entitled to a fair investigation,4 and if required the case can be entrusted to a specialized agency like the CBI; the courts have enough power to do complete justice to the parties by giving appropriate directions.

3. The investigating authorities are empowered to submit a report to the Magistrate that there is no evidence or reasonable ground of suspicion to justify forwarding the accused to the Magistrate, and to release the accused from custody on his executing a bond, with or without sureties, as the police officer directs, to appear, if and when so required, before a Magistrate empowered to take cognizance of the offence on a police report and to try the accused or commit him for trial.5 The 41st report of the Indian Law Commission recommended that an accused person must get a fair trial in accordance with the principles of natural justice, that efforts must be made to avoid delay in investigation and trial, and that the procedures should aim at ensuring a fair deal to the poorer sections of society.6 The report under Sec 169 Cr PC is referred to as a ‘closure report’. The Magistrate, however, can direct the police to make further investigation. Where the police report states that there is no evidence to proceed further, and there really is no evidence in the case at all, whether an order directing further investigation can be justified or held valid needs examination.



4. In one case, the Director-General of the Anti-Corruption Bureau passed a ‘speaking order’ and report under Sec 169 Cr PC, containing reasons showing that there was absolutely no evidence to prosecute the accused. Where the case itself contains no evidence to proceed further, a direction given by the Magistrate to proceed has to be viewed as bad in law. This view finds support where there is a finding by the Lokayukta that there is no material against the accused. The apex court has ruled that where a reference is made by the investigating officer or the courts to Section 169 Cr PC, the same has to be read as a reference to Sec 173 Cr PC.7

5. In exercising the power to take cognizance of a case, the court examines whether there is sufficient ground for taking judicial notice of the offence in order to initiate further proceedings. The apex court examined this issue in the Chief Enforcement Officer’s case8 and stated thus:-
“The expression ‘cognizance’ has not been defined in the code. But the word ‘cognizance’ is of indefinite import. It has no esoteric or mystic significance in criminal law. It merely means ‘become aware of’ and when used with reference to a court or a Judge, it connotes ‘to take notice of judicially’. It indicates the point when a court or a Magistrate takes judicial notice of an offence with a view to initiating proceedings in respect of such offences said to have been committed by someone”


It was further elucidated thus:-9

i) Taking cognizance does not involve any formal action of any kind;

ii) It occurs as soon as the Magistrate applies his mind to the suspected commission of an offence;

iii) It is prior to the commencement of criminal proceedings;

iv) It is an indispensable requisite for holding a valid trial;

v) Cognizance is taken of an offence and not an offender;

vi) Whether the Magistrate has taken cognizance of an offence or not depends on the facts and circumstances of each case, as no rule of universal application can be laid down;

vii) Under Sec 190 of Cr Pc, it is the application of the Judicial mind to the averments in the complaints that constitutes ‘cognizance’;

viii) The Magistrate has to consider whether there is sufficient ground for proceeding further, not whether there is sufficient ground for conviction; sufficient ground for conviction can be considered only at the trial;

ix) If there is sufficient ground for proceeding, then the Magistrate can issue process under Sec 204 Cr PC.10 The Magistrate has the undoubted discretion, to be judicially exercised, in determining whether there is a prima facie case to take cognizance;11 and

x) Despite a report of the police that no case is made out, the Magistrate can reject the report, take cognizance, and order further investigation under Sec 173 (8) Cr PC.



6. The main object of taking cognizance is to commence proceedings against the accused. At the stage of cognizance, the court is concerned with the involvement of the person, not with his innocence. When there is no material to proceed, there is no point in taking cognizance and proceeding further. Prosecution becomes a futile exercise when the materials available do not show that an offence has been committed. The apex court observed thus:-

i) Summoning of an accused in a criminal case is a serious matter. Criminal law cannot be set in motion as a matter of course;12

ii) The process of criminal court shall not be permitted to be used as a weapon of harassment. Once it is found that there is no material on record to connect an accused with the crime, there is no meaning in prosecuting him. It would be a sheer waste of public time and money to permit such proceedings to continue against such a person;13

iii) Unmerited and undeserved prosecution is an infringement of the guarantee under Art 21 of the Constitution;14 and

iv) No court can issue a positive direction to an authority to give sanction for prosecution when there is a police report that no case is made out, unless the court finds otherwise.15 Criminal law should not be used for vexatious prosecution. (This applies in cases where sanction is required to prosecute, such as for offences under the Prevention of Corruption Act.)



7. Thus, fair investigation requires that the police thoroughly examine the entire evidence to find out whether any prima facie case is made out against the accused. If no case is made out, there should be a closure report under Sec 169, which will be regarded as a report under Sec 173 Cr PC.



It is again the duty of the Magistrate to find out whether there is any material on record to proceed against the accused. If there is no material to proceed further, there is no point in taking cognizance. In other words, fair investigation and trial require that an accused be protected from unwanted and vexatious prosecutions, so as to avoid harassment of the persons concerned.


References:


1 AIR 2004 SC P.524.

2 Best Bakery Case, for details refer to AIR 2004 SC P.3114.

3 Kalyani Baskar Vs. M.S.Sampoornam, (2007)2 SCC P.259.

4 Nirmal Singh Kahlon’s case, AIR 2006 SC P.1367.

5 See for details Sec 169 of the Criminal Procedure Code, 1973.

6 See for details report submitted in September, 1969.

7 Sanjay Sinh Ram Rao Chavan Vs. Dattatray Gulab Rao Phalke (2015)3 SCC P.126 at P.133

8 (2008)3 SCC P.492 at P.499.

9 Ibid, See para 20.

10 The expression Cr PC has been used for the Criminal Procedure Code, 1973 throughout this study.

11 See for details Nagawwa Vs. Veeranna Shivaligappa Konjaligi (1976)3 SCC P.736.

12 Pepsi Foods Ltd. Vs. Judicial Magistrate (1998)3 SCC P.749 Para 28.

13 State of Karnataka Vs. Muniswamy (1977)2 SCC P.699 at P.803 Para 8.

14 State of Bihar Vs. P.P.Sharma, (1992) Supp (1) SCC P.222 at P.265 Para 60.

15 Mansukhlal Vithaldas Chauhan Vs. State of Gujarat (1997)7 SCC P.622 at P.635 Para 32.



- By Adv (Dr.) Prashant Mali 

Original link of post is here

 

Read more…

Learn Modern SOC and D&R practices for free from Google! Yes, really! That’s the message. Join *hundreds* of others who already signed up!

Now, with full details….

After some ungodly amount of work, the original ASO crew (but really Iman!) put together an epic Modern Security Operations training, now launched at Coursera at no cost.

“Today, Google Cloud is excited to announce the launch of the Modern SecOps (MSO) course, a six-week, platform-agnostic education program designed to equip security professionals with the latest skills and knowledge to help modernize their security operations, based on our Autonomic Security Operations framework and Continuous Detection / Continuous Response (CD/CR) methodology. “ (launch blog)



What’s in the class? Here is an outline!

[Image: MSO course outline (src: MSO class)]

So, in simple words:

  • No, taking the class won’t make your SOC like our D&R teams (example), just as reading the ASO paper won’t do it.
  • However, you will learn how we think modern D&R needs to be run, whether you call it a SOC or not! A version of what works for us and quite a few others.
[Image: MSO class screenshot (src: MSO class)]

[Image: MSO class screenshot (src: MSO class)]

Anyhow, enough rambling! Go take this class!

P.S. There is also a video of me talking about the awesomeness of ASO somewhere in there, find it! :-)



 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

Do I go to my Cloud Service Provider (CSP) for cloud security tooling or to a third party vendor?

Who will secure my cloud use, a CSP or a focused specialty vendor?

Who is my primary cloud security tools provider?


This question, asked in many ways, has haunted me since my analyst days, and I’ve been itching for a good, fiery debate on it. So we did just that on our Cloud Security Podcast by Google: the co-hosts divided the positions, researched the arguments in advance of the debate and then just … WENT AT EACH OTHER :-)

The results were so fun and interesting that this blog was born!



The Case for Third-Party Vendor Tooling

These arguments hinge on three primary concerns: trust, consistency, and innovation.

Some observers also highlight the theoretical conflict of interest when a CSP is responsible for both building and securing the cloud (no idea why people say this, as IMHO there is no conflict here). This side also stresses the importance of consistency across multi-cloud environments and argues that dedicated security vendors innovate more rapidly. They also may address client needs faster, especially narrow vertical needs.

  • You just can’t trust the cloud builder to secure their own stuff (or “letting the cat guard the cream” as somebody weirdly opined on social media). Third-party vendors promise unbiased security analysis and can uncover security issues that CSPs might deprioritize, benefiting the broader public and individual users. This separation of duties suggests a more objective evaluation of cloud security.
  • Consistency is super critical for multicloud. Third-party tools provide a consistent security framework across multiple cloud platforms. This simplifies management and reduces the need for specialized knowledge in each CSP’s unique security offerings.
  • Startups just build better tools; this is their focus and sole mission; CSPs suffer from “security from a big company” syndrome, being slow and political. Third-party vendors, whose core business is security, are more likely to develop innovative and effective security solutions compared to CSPs, who may view security as a secondary concern.
  • Auxiliary argument: Would you ever trust the CSP to secure the network/environment that belongs to their competitor?



The Case for CSP-Native

These arguments hinge on three primary concerns: deep platform knowledge, built-in security, and a seamless stack.

The deep platform knowledge that CSPs possess suggests robust, “automatic” default security. The seamlessness of CSP-native tools and the vast (we mean it, BTW!) resources that CSPs dedicate to security also play a key role. CSPs are well positioned to keep pace with the rapid evolution of cloud services and to secure them as they are built.

  • CSP knows the platform and cloud in general best, can use unlisted or poorly documented capabilities to secure the cloud. Security deeply integrated into the platform is “more secure”, and also better linked with asset tracking, and other IT ops / DevOps capabilities. This deep knowledge translates into superior security capabilities, both practical and conceptual.
  • Built-in beats bolt-on, with fewer seams to break and break through. CSP-native tools offer seamless integration with other services, streamlining workflows, and reducing the risk of security gaps that can arise from stitching together disparate tools. This results in a simpler and more manageable security stack. Recent breaches highlight the risks associated with these integration points, underscoring the advantage of built-in security.
  • Using native tools reduces the number of third-party vendors and solutions you need to manage, leading to a simpler security stack and less administrative overhead. When cloud platforms and security tools share the same foundation, operational teams benefit from streamlined access and workflows.
  • Auxiliary argument: CSP keeps pace with securing new services as they are being launched. And there are a lot of cloud services being launched.



The Verdict

  • “It depends” wins! It really does. No, we are not hedging or fudging. Are you disappointed?
  • To make it practical, we need to answer “depends on what?” Organizational realities: how you use cloud, what cloud, how many clouds, what is your threat model, etc.
  • None of the arguments from either side include a “killer” or a clincher argument that stops the debate and hands the victory to one side.
  • Often the way to go is starting with CSP-native tools and then supplementing with third-party solutions to address any gaps (this also was Gartner advice in my days, BTW).


Listen to the audio version
 (better jokes!). And, yes, do read “Snow Crash” if you somehow failed to before.



Resources:

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

So some of you are thinking “ewwww … another security transformation paper” and this is understandable. A lot of people (and now … a lot of robots too) have written vague, hand-wavy “leadership” papers on how to transform security, incorporate security into digital transformation or move to the cloud (now with GenAI!) the “right” way, while reaping all the benefits and suffering none of the costs. Because thought leadership!

This is not one of those, promise! Why not? Because our new paper helps answer two real — and really hard — questions:

#1 Based on the experience of others, what does a “modern” or transformed organization’s security capability look like?

#2 Given what you have today, how do you transition from whatever you have to what we discussed in #1 above?

I bet you’d agree that this is really tricky. Hence our paper!


Let’s start with my favorite insights and surprises below (and, yes, Gemini via Gems had a “hand” in this, curation though is very human):

  • The Primacy of Organizational Transformation: The guide emphasizes that digital transformation is not solely — or even largely — about technology adoption, but fundamentally about transforming the organization, its operations, its team structure and its culture. This may surprise security leaders from traditional organizations who might primarily focus on technical solutions and “let’s just get new tools!”
  • The OOT (Organization, Operations, Technology) Approach: The guide advocates for prioritizing organizational and operational changes before finalizing technology decisions. This may challenge the conventional approach in traditional organizations where technology choices often precede organizational adaptation.
 



Roadmap of how “classic” teams fuse into modern ones

  • The Significance of a Generative Culture: The guide stresses the critical role of a generative culture in achieving successful transformation. Cultivating a generative culture is essential for fostering adaptability and thus ultimately for modernizing security. Such a culture, characterized by high trust, information flow, and shared responsibility, may be a departure from the hierarchical and siloed structures prevalent in traditional organizations.
  • The Distribution of Security Responsibilities: We propose a shift away from centralized security functions towards a model where product teams assume greater ownership of security throughout the development lifecycle. The distributed responsibility model emphasizes empowering product teams to build security into their applications from the outset. This may surprise — and upset — security leaders accustomed to a centralized security model.
  • The Difficulty of Letting Go: We remind everybody that moving away from legacy processes and controls can be unexpectedly challenging, even painful. Teams may be attached to familiar processes or resistant to change, even if it leads to visibly greater efficiency and security. Security leaders might be surprised by the internal resistance they encounter when trying to implement new ways of working.
 

Transform process we use


As usual, my favorite quotes from the paper:

  • “As we’ve helped more security teams make the move to the cloud, we’ve identified nuanced challenges that they face — namely those related to team structure, changing business operations, and establishing culture — that are critical to their success”
  • “Where do we start when we talk about transforming the cybersecurity organization within a company that’s historically delivered security to on-premise systems within a highly centralized function? Ideally, we think this conversation should start with defining security goals framed in business outcomes like capabilities, velocity, quality, cost, and risk.”
  • “You’ll find many opinions about how cybersecurity enables a successful digital transformation, but most observers are unaware of the complexity involved in effectively collaborating and sharing responsibilities, skills, tooling, and other capabilities with fast-moving product-based teams who own the full set of responsibilities — including cybersecurity — for the applications they build and run.”
  • “Moving away from the toil often associated with securing on-premise systems can be challenging for unexpected reasons. We think security in the cloud is a better future that can be difficult to imagine without inspiration and intentional culture development.” [A.C. — this is not some snide remark about ‘server huggers’ but a very human tendency to like whatever they invested their blood and soul into…]

  • “Our first step in helping customers work through transition to the cloud and more modern ways to work starts with backing away from the belief that it’s the technology that’s transforming.” [A.C. — my fave example is here]



Now, go and read our new paper!

P.S. “Anton, but I like SOC papers, can I haz moar?” — Yes, there is one coming in a few weeks! Part 4.5 of our glamorous SOC of the Future series.


Related:

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

One more idea that has been bugging me for years is an idea of “detection as code.” Why is it bugging me and why should anybody else care?

First, is “detection as code” just a glamorous term for what you did when you loaded your Snort rules into CVS in, say, 1999? Well, not exactly.

What I mean by “detection as code” is a more systematic, flexible and comprehensive approach to threat detection that is somewhat inspired by software development (hence the “as code” tag). Just as infrastructure as code (IaC) is not merely about treating your little shell scripts as real software, but about machine-readable definition files and descriptive models for infrastructure.
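By analogy with IaC, “detection as code” implies detections kept as machine-readable, versionable definitions rather than clicks in a console. A minimal hypothetical sketch (all field names invented for illustration):

```python
import json

# A detection as a declarative definition -- data you can diff, review and
# version, not console clicks. The schema here is purely illustrative.
definition = json.loads("""
{
  "name": "suspicious_service_install",
  "version": "0.3.1",
  "severity": "medium",
  "source": "windows_security",
  "condition": {"event_id": 7045, "service_path_contains": "\\\\temp\\\\"}
}
""")

def matches(event, cond):
    """Evaluate the declarative condition against one normalized event."""
    return (event.get("event_id") == cond["event_id"]
            and cond["service_path_contains"].lower()
                in event.get("service_path", "").lower())

event = {"event_id": 7045, "service_path": "C:\\Temp\\evil.exe"}
print(matches(event, definition["condition"]))  # True
```

The point is not this particular toy schema, but that the definition file is an artifact your tooling (and your git history) can reason about.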

Why do we need this concept? This is a good question! Historically, from the days of first IDS (1987) to the sad days of “IDS is dead” (2003) and then to today, detection got a bit of a bad reputation. We can debate this, to be sure, but most would probably agree that threat detection never “grew up” to be a systematic discipline, with productive automation and predictable (and predictably good!) results. In fact, some would say that “Your detections aren’t working.” And this is after ~35 years of trying …

Detection engineering is a set of practices and systems to deliver modern and effective threat detection. Done right, it can change security operations just as DevOps changed the stolid world of “IT management.” You basically want to devops (yes, I made it a word) your detection engineering. I think “detection as code” is a cool name for this shift!

As you see, this is not so much about treating detections as code, but about growing detection engineering to be a “real” practice, built on modern principles used elsewhere in IT (agile this, or DevOps whatever).

Now, to hunt for the true top-tier APTs, you probably need to be an artist, not merely a great security engineer (IMHO, best threat hunting is both art and science, and frankly more art than science….). But even here, to enable “artistic” creativity in solving threat detection problems we need to make sure those solutions function on a predictable layer. Moreover, for many other detection pursuits, such as detecting ransomware early, we mostly need automated, systematic, repeatable, predictable and shareable approaches.


OK, how do we do “detection as code”? How would I describe the characteristics of this approach?

  • Detection content versioning so that you can truly understand what specific rule or model triggered an alert — even if this alert was last July. This is even more important if you use a mix of real-time and historical detections.
  • Proper “QA” for detection content that covers both testing for broken alerts (such as those that never fire, those that won’t fire when the intended threat materializes, and of course those that fire where there is no threat) and testing for gaps in detection overall. “False positives” handling, naturally, gets thrown into this chute as well.
  • Content (code) reuse and modularity of detection content, as well as community sharing of content, just as it happens for real programming languages (I suspect this is what my esteemed colleague describes here). As a reminder, detection content does not equal rules; but covers rules, signatures, analytics, algorithms, etc.
  • Cross-vendor content would be nice, after all we don’t really program in “vendor X python” or “big company C” (even though we used to), we just write in C or Python. In the detection realm, we have Sigma and YARA (and YARA-L too). We have ATT&CK too, but this is more about organizing content, not cross-vendor writing of the content today.
  • I also think that getting to cross-tool detection content would be great, wherever possible. For example, you can look for a hash in EDR data and also in NDR, and in logs as well; SIEM alone won’t do.
  • Metrics and improvement are also key; the above items will give you plenty of metrics (from coverage to failure rates), but it is up to you to structure this process so that you get better.
  • While you may not be looking at building a full CI/CD pipeline for detections to continuously build, refine, deploy and run detection logic in whatever product(s), I’ve met people who did just that. To me, these people really practice detection as code.
  • Finally, I don’t really think this means that your detections need to be expressed in a programming language (like Python here and here or Jupyter notebooks). What matters to me is the approach and thinking, not actual code (but we can have this debate later, if somebody insists)
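To make the “QA for detection content” point above concrete, here is a minimal sketch (rule, event IDs and names all hypothetical) of unit-testing a detection rule exactly the way you would test code:

```python
# A detection "unit" under version control: data plus logic, tested like code.
RULE = {
    "id": "win_audit_log_cleared",
    "version": "1.2.0",
    "logic": lambda event: event.get("event_id") == 1102,  # security log cleared
}

def detect(events, rule):
    """Run one rule over a batch of normalized events."""
    return [e for e in events if rule["logic"](e)]

# "QA" as plain test cases: the rule must fire on the intended threat...
def test_fires_on_log_clearing():
    events = [{"event_id": 1102, "host": "dc01"}]
    assert len(detect(events, RULE)) == 1

# ...and must stay quiet on benign activity (the false-positive check).
def test_quiet_on_benign_events():
    events = [{"event_id": 4624, "host": "ws17"}]  # a normal logon
    assert detect(events, RULE) == []
```

Run these in CI on every change to the rule, and “broken alert” stops being something you discover last July’s incident later.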

Anything else I missed?


For our recent SANS paper / webcast, that mentioned this topic, we crafted this example visual:

 

 Source: recent SANS paper.



Finally, let’s cattle-prod the elephant in the room: what about the crowd that just does not want anything “as code”? They also don’t like to create their own detections at all. In fact, they like their detections as easy as pushing an ON button or downloading a detection pack from a vendor. This is fine.

Personally, I’ve met enough security people who run away screaming from any technology that is “too flexible”, “very configurable” and even “programmable” (or: “… as code”) because their past experience indicates that this just means failure (at their organization). However, to detect, you need both a tool and content. Hence, both will have to come from somewhere: you can build, buy, rent, but you must pick.

Now, upon reading this, some of you may say “duh … what is not painfully obvious about it?” but I can assure you most people in the security industry do NOT think like that. In fact, such thinking is alien to most, in my experience. Maybe they think detection is a product feature. Or perhaps they think that detection is some magical “threat” content that comes from “the cloud.”

Hence, “detection as code” is not really an approach change for them, but a more philosophical upheaval. Still, I foresee that threat detection will always be a healthy mix of both an engineering and a creative pursuit….

Thoughts?


P.S. Thanks to Brandon Levene for hugely useful contributions to this thinking!

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

We all know David Bianco’s Pyramid of Pain, a classic from 2013. The focus of this famous visual is on indicators that you “latch onto” in your detection activities. This post will reveal a related mystery connected to SIEM detection evolution and its current state. So, yeah, this is another way of saying that a very small number of people are perhaps very passionate about it …

But who am I kidding? I plan to present a dangerously long rant about the state of detection content today. So, yes, of course there will be jokes, but ultimately that is a serious thing that had been profoundly bothering me lately.

First, let’s travel to 1999 for a brief minute. Host IDS is very much a thing (but the phrase “something is a thing” has not yet been born), and the term “SIEM” is barely a twinkle in a Gartner analyst’s eye. However, some vendors are starting to develop and sell “SIM” and “SEM” appliances (it is 1999! Appliances are HOT!).

Some of the first soon-to-be-called-SIEM tools have very basic “correlation” rules (really, just aggregation and counting of a single attribute like username or source IP) and have rules like “many connections to the same port across many destinations”, “Cisco PIX log message containing SYNflood, repeated 50 times” and “SSH login failure.” Most of these rules are very fragile, i.e. a tiny deviation in attacker activity will cause them not to trigger. They are also very device dependent (i.e. you need to write such rules for every firewall device, for example). So the SIM / SEM vendor had to load up many hundreds of these rules. And customers had to suffer through enabling/disabling and tuning them. Yuck!
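A sketch of what such a fragile, device-specific rule amounts to (the log strings are invented for illustration):

```python
# The 1999 approach: raw substring matching against one device's exact
# log format, plus a simple threshold counter.
def pix_synflood_rule(raw_lines, threshold=50):
    """Fires only if the literal Cisco PIX message shows up `threshold` times."""
    hits = [line for line in raw_lines if "SYNflood" in line]
    return len(hits) >= threshold

logs = ["%PIX-2-210003: SYNflood detected"] * 60
print(pix_synflood_rule(logs))                        # True: 60 >= 50

# Any deviation breaks it: another firewall brand, a renamed message,
# or even different capitalization, and the rule silently never fires.
print(pix_synflood_rule([l.lower() for l in logs]))   # False
```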

While we are still in 1999, a host IDS like say Dragon Squire, a true wonder of 1990s security technology, scoured logs for things like “FTP:NESSUS-PROBE” and “FTP:USER-NULL-REFUSED.” For this post, I reached deep into my log archives and actually reviewed some ancient (2002) Dragon HIDS logs to refresh my memory, and got into the vibe of that period (no, I didn’t do it on a Blackberry or using Crystal Reports — I am not that dedicated).

Now fast forward to about 2003–2004 — and the revolution happened! SIEM products unleashed normalized events and event taxonomies. I spent some of that time categorizing device event IDs (where does Windows Event ID 1102 go?) into SIEM taxonomy event types, and then writing detection rules on them. SIEM detection content writing became substantially more fun!

This huge advance in SIEM gave us the famous correlation rules like “Several Events of The Exploit Category Followed By an Event of Remote Access Category to Same Destination” that delivered generic detection logic across devices. Life was becoming great! These rules were supposed to be a lot more resilient (such as “any Exploit” and “any Remote Access” vs a specific attack and, say, VNC access). They also worked across devices — write it once, was the promise, and then even if you change the type of the firewall you use, your correlation still detects badness.
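That generic rule could be sketched roughly like this (categories, field names and the threshold are invented for illustration):

```python
from collections import defaultdict

def correlate(events, n_exploits=3):
    """'Several events of the Exploit category followed by an event of the
    Remote Access category to the same destination' -- written once, over
    normalized categories, regardless of which device produced the raw logs."""
    exploits_seen = defaultdict(int)
    alerts = []
    for e in events:  # events assumed time-ordered and already normalized
        if e["category"] == "Exploit":
            exploits_seen[e["dst"]] += 1
        elif e["category"] == "Remote Access" and exploits_seen[e["dst"]] >= n_exploits:
            alerts.append(e["dst"])
    return alerts

events = (
    [{"category": "Exploit", "dst": "10.0.0.5"}] * 3
    + [{"category": "Remote Access", "dst": "10.0.0.5"}]
)
print(correlate(events))  # ['10.0.0.5']
```

Note that the rule never mentions a vendor, a product, or a raw log string; that is the whole promise of the taxonomy.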

Wow, magic! Now you can live (presumably) with dozens of good rules without digging deep into regexes and substrings and device event IDs across 70 system and OS version types deployed. This was (then) perceived as essential progress of security products, like perhaps a horse-and-buggy to a car evolution.

Further, some of us became very hopeful our Common Event Expression (CEE) initiative will take off. So, we worked hard to make a global log taxonomy and schema real and useful (circa 2005).

But you won’t believe what happened next!

Now, let’s fast forward to today — 2020 is almost here. Most of the detection content I see today is in fact written in the 1990s style of exact and narrow matching to raw logs. Look at all the sexy Sigma content, will you? A fellow Network Intelligence enVision SIM user from 1998 would recognize many of the detections! Sure, we have ATT&CK today, but it is about solving a different problem.

An extra bizarre angle here is that as machine learning and analytics rise, the need for clean, structured data rises if we were to crack more security use cases, not just detection. Instead, we just get more data overall, but less data that you can feed your pet ML unicorn with. We need more clean, enriched data, not merely more data!

To me, this feels like the evolution got us from a horse and buggy to a car, then a better car, then a modern car — and then again a horse and buggy ...

So, my question is WHY? What happened?

I’ve been polling a lot of my industry peers about it, ranging from old ArcSight hands who did correlation magic 15 years ago (and who can take a good joke about kurtosis) to people who run detection teams today on modern tools [I am happy to provide shout-outs, please ping me if I missed somebody, because I very likely did due to some of you saying that you want to NOT be mentioned]

But first, before we get to the answer I finally arrived at, after much agonizing, let’s review some of the things I’ve heard during my data gathering efforts:

  • Products that either lack event normalization or do it poorly (or lazily rely on clients to do this work) won the market battle for unrelated reasons (such as overall volume of data collected), and a new generation of SOC analysts have never seen anything else. So they get by with what they have. Let’s call this line of reasoning “the raw search won.”
  • Threat hunters beat up the traditional detection guys because “hunting is cool” and chucked them out of the window. Now, they try to detect the same way they hunt — by searching for random bits of knowledge of the attack they’ve heard of. Let’s call this line of thinking “the hunters won.”
  • Another thought was that tolerance for “false positives” (FP) has decreased (due to growing talent shortages) and so writing more narrow detections with lower FP rates became more popular (‘“false negatives” be damned — we can just write more rules to cover them’). These narrow rules are also easier to test. Let’s call this “false positives won.”
  • Another hypothesis was related to the greater diversity of modern threats and also a greater variety of data being collected. This supposedly left the normalized and taxonomized events behind since we needed to detect more things of more types. Let’s call this one “the data/threat diversity won.”

So, what do you think? Are you seeing the same in your detection work?

Now, to me all the above explanations left something to be desired — so I kept digging and agonizing. Frankly, they sort of make some sense, but my experience and intuition suggested that the magic was still missing…

What do I think really happened? I did arrive at a very sad thought, the one I was definitely in denial about, but the one that ultimately “clicked” and many puzzle pieces slid into place!

The normalized and taxonomized approach in SIEM never actually worked! It didn’t work back in 2003 when it was invented, and it didn’t work in any year since then. And it still does not work now. It probably cannot work in today’s world unless some things change in a big way.

When I realized this, I cried a bit. Given how much I invested in building, evangelizing and improving it, then actually trying to globally standardize it (via CEE), it feels kinda sad…


Now, is this really true? Sadly, I think so! SIEM event taxonomization is …

  • always behind the times, and more behind now than ever
  • inconsistent across events and log sources, for every vendor today
  • seriously different between vendors, and hence cannot be learned once
  • riddled with an ever-increasing number of errors and omissions that accumulate over time
  • impossible to test effectively against the real threats people face today.


So, I cannot even say “SIEM event taxonomy is dead”, because it seems like it was never really alive. For example, “Authentication Failure” event category from a SIEM vendor may miss events from a new version of software (such as a new event type introduced in a Windows update), miss events from an uncommon log source (SAP login failed), or miss events erroneously mapped to something else (say to “Other Authentication” category).
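The failure mode from the paragraph above is easy to demonstrate: the taxonomy is only as good as its mapping table, and unmapped events silently fall through (the mapping here is a toy, invented for illustration):

```python
# A toy taxonomy: raw, source-specific event IDs mapped to categories.
TAXONOMY = {
    ("windows", 4625): "Authentication Failure",
    ("sshd", "Failed password"): "Authentication Failure",
    # ...the SAP login-failure event was never added.
}

def categorize(source, event_id):
    # Unmapped events get dumped into a catch-all bucket.
    return TAXONOMY.get((source, event_id), "Other")

print(categorize("windows", 4625))        # Authentication Failure
print(categorize("sap", "LOGIN_FAILED"))  # Other -- so any rule written
                                          # on "Authentication Failure"
                                          # silently misses this source
```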

In essence, people write stupid string-matching and regex-based content because they trust it. They do not — en masse — trust the event taxonomies if their lives and breach detections depend on it. And they do.

What can we do? Well, I am organizing my thinking about it, so wait for another post, will you?

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

We had a community session on Evaluating AI Solutions in Cybersecurity: Understanding the "Real" vs. the "Hype" featuring Hilal Ahmad Lone, CISO at Razorpay & Manoj Kuruvanthody, CISO & DPO at Tredence Inc.

In this discussion, we covered key aspects of evaluating AI solutions beyond vendor claims and assessing an organization’s readiness for AI, considering data quality, infrastructure maturity, and how well AI can meet real-world cybersecurity demands. 

Key Highlights:

  • Distinguishing marketing hype from practical value: Focus on ways to assess AI solutions beyond vendor claims, including real-world impact, measurable results, and the AI’s role in solving specific cybersecurity challenges.

  • Evaluating AI maturity and readiness levels: Assessing whether an organization is ready for AI in its cybersecurity framework, especially regarding data quality, infrastructure readiness, and overall maturity to manage and scale AI tools effectively. This also includes gauging the AI model’s maturity level in handling complex, evolving threats.

  • AI Maturity and Readiness - Proven Tools vs. Experimental Models: Evaluate the readiness level of AI models themselves, where real maturity is marked by robust performance in varied cyber environments, while hype often surrounds models that are still experimental or reliant on ideal conditions. Organizational readiness, such as infrastructure and data integration, also plays a critical role in realizing real-world results versus theoretical benefits.


About Speaker

  • Hilal Ahmad Lone, CISO at Razorpay 
  • Manoj Kuruvanthody, CISO & DPO at Tredence Inc.

 

Executive Summary (Session Highlights):

  • Navigating AI Risk Management: Standards and Frameworks:
    This session explored the significance of adopting industry standards and frameworks like Google's SAIF (Secure AI Framework), ISO/IEC 42001:2023, and the NIST Cybersecurity Framework in ensuring responsible AI adoption. Experts emphasized the need for organizations to fine-tune these frameworks based on their unique risks and objectives.

  • Risk Assessments and Maturity Models for AI Systems:
    The conversation highlighted the necessity of performing thorough risk assessments tailored to AI environments. Maturity models, including red teaming and vulnerability assessments, were discussed as pivotal methods for evaluating the robustness of AI implementations. Emerging techniques such as jailbreaking LLMs and prompt injections were also examined for their role in testing AI vulnerabilities.

  • The Case for Chaos Engineering:
    Chaos engineering was underscored as a critical approach to stress-testing AI systems in real-world conditions. Experts advocated for implementing chaos testing in production environments to uncover hidden vulnerabilities and ensure resilience under unpredictable scenarios.

  • Quantum Computing and AI: A Transformational Combination:
    Participants discussed the profound security implications of quantum computing, particularly when paired with AI. While quantum technology poses immediate threats to existing cryptographic systems, its integration with AI accelerates both opportunities and risks. The session stressed the importance of preparing for the quantum era by adopting quantum-resistant cryptography and evolving defense strategies.

  • AI and Data Loss Prevention (DLP): Harmonizing Technologies:
    The discussion explored the coexistence of AI and DLP technologies, emphasizing the challenges of aligning AI-driven systems with non-AI DLP solutions. Fine-tuning and adaptability were identified as key enablers for integrating these technologies effectively without compromising data security.

  • Preparing for the Future of AI and Quantum Security:
    Concluding the session, experts advised organizations to focus on defense-in-depth strategies while preparing for quantum-resistant solutions. They stressed the importance of proactive learning, collaboration, and incremental adoption of advanced security measures to fortify defenses in an era shaped by AI and quantum innovations.
Read more…

Many organizations are looking for trusted advisors, and this applies to our beloved domain of cyber/information security. If you look at LinkedIn, many consultants present themselves as trusted advisors to CISOs or their teams.


Untrusted Advisor by Dall-E via Copilot


This perhaps implies that nobody wants to hire an untrusted advisor. But if you think about it, modern LLM-powered chatbots and other GenAI applications are essentially untrusted advisors (RAG and fine-tuning notwithstanding).


Let’s think about the use cases where using an untrusted security advisor is quite effective and the risks are minimized.

To start, naturally intelligent humans remind us that any output of an LLM-powered application needs to be reviewed by a human with domain knowledge. While this advice has been spouted many times — with good reasons — unfortunately there are signs of people not paying attention. Here I will try to identify patterns and anti-patterns and some dependencies for success with untrusted advisors, in security and SOC specifically.

First, tasks involving ideation, creating ideas and refining them are very much a fit to the pattern. One of the inspirations for this blog was my eternal favorite read from years ago about LLMs “ChatGPT as muse, not oracle”. If you need a TLDR, you will see that an untrusted cybersecurity advisor can be used for the majority of muse use cases (give me ideas and inspiration! test my ideas!) and only for a limited number of oracle use cases (give me precise answers! tell me what to do!).

So let’s create new ideas. How would you approach securing something? What are some ideas for doing architecture in cases of X and Y constraints? What are some ideas for implementing controls given the infrastructure constraints? What are some of the ways to detect Z? All of these produce useful ideas that can be turned by experts into something great. Ultimately, they shorten time to value and they also create value.

A slightly more interesting use case is the Devil’s Advocate use case (this has been suggested by Gemini Brainstormer Gem during my ideation of this very post!). This implies testing ideas that humans come up with to identify limitations, problems, contradictions or other cases where these things may matter. I plan to do X with Y and this affects security, is this a good idea? What security will actually be reduced if I implement this new control? In what way is this new technology actually even more risky?

Making “what if” scenarios is another good one. After all, if the scenarios are incorrect, ill-fitting or risky, a human expert can reject them. No harm done! And if they’re useful, we again see shorter time to value (epic example of tabletops via GenAI)

Now think about all the testing use cases. Given the controls we have, how would you test X? This makes me think that perhaps GenAI will end up being more useful for the red team (or: red side of the purple team). The risks are low and the value is there.

Report drafting and data story-telling. By automating elements of data-centric story telling, GenAI can produce readable reports, freeing humans for more fun tasks. Furthermore, GenAI excels at identifying patterns. This enables the creation of compelling narratives that effectively communicate insights and risks. And, back to the untrusted advisor: it’s still essential to remember that experts should always review GenAI-generated content for accuracy and relevance (thanks for the reminder, Gemini!)


Summary — The Good:

  • Ideation and Brainstorming: LLMs excel at generating ideas for security architectures, controls, and approaches. They can help overcome mental blocks and accelerate the brainstorming process.
  • Devil’s Advocate: LLMs can challenge existing ideas, identify weaknesses, and highlight potential risks. This helps refine strategies and improve overall security posture.
  • “What-if” Scenarios: LLMs can create various scenarios to test the effectiveness of security controls and identify vulnerabilities.
  • Security Testing: LLMs can be valuable tools for testing, proposing simulated attacks and identifying weaknesses in defenses.
  • Report drafting: LLMs can help you write reports that make sense and flow well.


On the other hand, let’s talk about the anti-patterns. It goes without saying that if it leads to deployment of controls, automated reconfiguration of things, or remediation that is not reviewed by a human expert, that’s a “hard no”.

Admittedly, any task that requires sharing detailed knowledge of my environment is also on that “hard no” list (some bots leak, and leak a lot). I just don’t trust the untrusted advisor with my sensitive data. I also assume that some results will be inaccurate, but only a human domain expert will recognize when this is the case…

Summary — The Bad:

  • Direct Control: Allowing LLMs to directly deploy controls, reconfigure systems, or automate remediation without human review is a major risk.
  • Access to Sensitive Information: Avoid sharing detailed knowledge of your environment with an untrusted LLM (which is another way of saying “an LLM”).



Bridging the Trust Gap

The key to safely using LLM-powered “untrusted security advisor” for more use cases is to maintain a clear separation between their (untrusted) outputs and your (trusted) critical systems.


Forrester via Allie Mellen webinar https://www.forrester.com/technology/generative_ai_security_tools_webinar/


A human domain expert should always review and validate LLM-generated suggestions before implementation. This choice is obvious, but it is also a choice that promises to be unpopular in some environments. What are the alternatives, if any?


Alternatives and Considerations

While relying on non-expert human review or smaller, grounded LLMs might seem appealing, they ultimately don’t solve the trust issue. Clueless human review does not fix AI mistakes. Another AI may fix AI mistakes, or it may not…

Perhaps a promising approach involves using a series of progressively smaller and more grounded LLMs to filter and refine the initial untrusted output. Who knows … we live in fun times!

Agent-style evaluation is another route (if an LLM wrote remediation code, I can run it in a test or simulated environment, and then decide what to do with it, perhaps automatically prompting the LLM to refine it until it works well).

But still: will you automatically act on it? No! So think real hard about the trust boundary between your “untrusted security advisor” and your environment! Perhaps we will eventually invent a semantic firewall for it?
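One way to enforce that trust boundary in code is a simple review gate between the advisor’s output and anything that can touch production. Everything below is a hypothetical sketch, not a real product API:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An LLM-produced artifact: inert data until a human approves it."""
    text: str
    approved: bool = False
    reviewer: str = ""

def review(suggestion, reviewer, accept):
    # The only path to 'approved' runs through a named human expert.
    suggestion.approved = accept
    suggestion.reviewer = reviewer
    return suggestion

def apply_to_production(suggestion):
    # The trust boundary: unreviewed advisor output can never execute.
    if not suggestion.approved:
        raise PermissionError("untrusted advisor output: human review required")
    return f"applied (signed off by {suggestion.reviewer})"

s = Suggestion("block egress to 203.0.113.7 at the edge firewall")
try:
    apply_to_production(s)          # raises: no human in the loop yet
except PermissionError as e:
    print(e)
print(apply_to_production(review(s, "alice", accept=True)))
```

The gate itself is trivial; the discipline of routing every advisor output through it is the hard part.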

Conclusion

LLMs can be powerful tools for security teams, but they must be used responsibly given lack of trust. By focusing on appropriate use cases and maintaining human oversight, organizations can leverage the benefits of LLMs while mitigating the risks.

Specifically, LLMs can be valuable “untrusted advisors” for cybersecurity, but only when used responsibly. Ideation, testing, and red teaming are excellent applications. However, direct control, access to sensitive data, and unsupervised deployment are off-limits. Human expertise remains essential for validating LLM outputs and ensuring safe integration with critical systems.

  • LLMs can be valuable “untrusted advisors” for ideation and testing in cybersecurity.
  • Human experts should always review and validate LLM output before implementation.
  • LLMs should not (yet?) be used for tasks requiring high trust or detailed environmental knowledge.
  • Striking the right balance between human expertise and AI assistance is crucial.


Thanks Gemini, Editor Gem, Brainstormer Gem and NotebookLM! :-)


Related:

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

Mention “alert fatigue” to a SOC analyst. They would immediately recognize what you are talking about. Now, take your time machine to 2002. Find a SOC analyst (far fewer of those around, to be sure, but there are some!) and ask him about alert fatigue — he would definitely understand what the concern is.

Now, crank up your time machine all the way to 11 and fly to the 1970s where you can talk to some of the original NOC analysts. Say the words “alert fatigue” and it is very likely you will see nods and agreement about this topic.

So the most interesting part is that this problem has immense staying power, even though the cybersecurity industry changes quickly. This one problem would be familiar to people doing a similar job 25 years apart. Are we doomed to suffer from this forever?

I think it is a bit mysterious and worth investigating. Join me as we uncover the dark secrets behind this enduring pain.


Why Do We Still Suffer?

First, let’s talk about people’s least favorite question: WHY.

An easy answer I get from many industry colleagues is that we could have easily solved the problem at 2002 levels of data volumes, environment complexity and threat activity. We had all the tools, we just needed to apply them diligently. Unfortunately, more threats, more data, more environments came in. So we have alert fatigue in 2024.

Personally, I think this is a part of why this has been tricky, but I don’t think that’s the entire answer. Frankly, I don’t recall any year during which this problem was considered close to being solved, pay no heed to shrill vendor marketing. The early SIM/SEM vendors in the late 1990s (!) promised to solve the alert fatigue problem. At the time, these were alerts from firewalls and IDS systems. The problem was not solved with the tools at the time, and then again not solved with better tools, better scoring. I suspect that throwing the best 2024 tools at the 2002 levels of alerts will in fact solve it, but this is just a theoretical exercise…

Have false positive (FP) rates increased? I frankly don’t know and don’t have a gut feel here. In theory they should have decreased over the last 25 years, if we believe that security technology is improving. Let me know if anybody has data on this, but any such data set would include a lot of apples/oranges (1998 NIDS vs 2014 EDR vs 2024 ADR, anybody?)

Some human (Or was it a bot? Definitely a bot!) suggested that our fear of missing attacks is driving false positives (FP) up. Perhaps this is also a factor adding to high FP rates. If you have a choice of killing 90% of FPs by increasing FNs by 10%, would you take it? After all, merely 1 new FN (aka real intrusion not detected) may mean that you are owned…

Manual processes persisting at many SOCs mean that even a 2002 volume of alerts would have overrun them, but back then they hired enough people to cover the gap. Then alert volumes grew with the IT environment (and the threats), and they were not able to keep hiring (or transform to a better, automation-centric model).

More tools that are poorly integrated probably contributed to the problem not being solved. IDS was the sole problem child of the late 1990s. Later, this expanded and evolved to EDR, NDR, CDR, and other *DR, as well as lots of diverse data types flowing into the SIEMs.

All in all, I am not sure there is one factor that explains why “alert fatigue” has been a thing for 25+ years. We are where we are.

Where are we exactly?


Some [Bad] Data

With the help of a new Gem-based agent, I collected a lot of data on alert fatigue, and let me tell you…. based on the data, it is easy to see why we struggle. A lot of “data” is befuddling, conflicting and useless. Examples (mix of good and bad, your goal is to separate the two):

“70% of SOC teams are emotionally overwhelmed by the volume of security alerts” (source)

“43% of SOC teams occasionally or frequently turn off alerts, 43% walk away from their computer, 50% hope another team member will step in, and 40% ignore alerts entirely.” (source)

“55% of security teams report missing critical alerts, often on a daily or weekly basis, due to the overwhelming volume.” (source)

“A survey found that SOC teams deal with an average of 3,832 alerts per day, with 62% of those alerts being ignored.” (source)

“56% of large companies (with over 10,000 employees) receive 1,000 or more alerts per day.” (source)

“78% of cybersecurity professionals state that, on average, it takes 10+ minutes to investigate each alert.” (source)

“Security analysts are unable to deal with 67% of the daily alerts received, with 83% reporting that alerts are false positives and not worth their time.” (source)

In brief, the teams are flooded with alerts, leading to burnout and pain. While the exact figures vary across studies (like, REALLY, vary!), a pattern emerges: teams are overwhelmed by the volume of alerts, and often a majority of them are false. The data barely teaches us anything else…


What Have We Tried?

The problem persists, but the awareness of this problem is as old as the problem (see the hypothetical 2002 SOC analyst conversation above). In this section, let’s go quickly through all the methods we’ve tried, largely unsuccessfully.

First, we tried aggregation. Five (50? 500? 5000? 5 gazillion?) alerts of type such-and-such get duct-taped together and shipped off to pester a human. That clearly did not solve the problem. Don’t get me wrong, aggregation helps. But clearly this 1980s trick has not fixed alert fatigue.
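For the record, the 1980s trick looks roughly like this; the alert fields and the (rule, host) grouping key here are made-up illustrations:

```python
from collections import defaultdict

def aggregate(alerts):
    """Collapse alerts that share (rule, host) into one summary record
    with a count -- classic de-duplication before a human sees them."""
    buckets = defaultdict(list)
    for a in alerts:
        buckets[(a["rule"], a["host"])].append(a)
    return [
        {"rule": rule, "host": host, "count": len(group),
         "first_seen": min(a["ts"] for a in group)}
        for (rule, host), group in buckets.items()
    ]

alerts = [
    {"rule": "brute_force", "host": "web-1", "ts": 100},
    {"rule": "brute_force", "host": "web-1", "ts": 105},
    {"rule": "new_admin", "host": "db-1", "ts": 110},
]
print(aggregate(alerts))  # 3 raw alerts collapse into 2 summary records
```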

Then we tried correlation, where we try to logically relate and group alerts, assign priority to the “correlated event” (ah, so 2002!), and then give them to an analyst. Nah, didn’t do it.

We also tried filtering, both on what goes into the system that produces alerts (input filtering; just collect less telemetry) and on the alerts themselves (output filtering; just suppress these alerts).
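Output filtering can be sketched as a suppression list applied before alerts reach a human; the rule shape and names below are hypothetical:

```python
SUPPRESS = [  # hypothetical output-filter rules
    {"rule": "after_hours_login", "host_prefix": "dev-"},
]

def output_filter(alerts, suppress=SUPPRESS):
    """Drop alerts matching a suppression rule before a human sees them."""
    def suppressed(a):
        return any(
            a["rule"] == s["rule"] and a["host"].startswith(s["host_prefix"])
            for s in suppress
        )
    return [a for a in alerts if not suppressed(a)]

alerts = [
    {"rule": "after_hours_login", "host": "dev-3"},
    {"rule": "after_hours_login", "host": "pay-api-1"},
]
print(output_filter(alerts))  # only the pay-api-1 alert survives
```

The obvious risk, and the reason this never fully worked, is that a suppression rule is just a false negative you wrote yourself.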

We obviously tried tuning, i.e. carefully turning off alerts for cases where such an alert is false or unnecessary. This has evolved into one of the least popular pieces of advice in security ops (“just tune the detections” is up there with “just patch faster” and “just zero trust it”).

We tried — and are trying — many types of enrichment where the alerts are deemed to be extra fatigue-inducing because context was missing. So various automation was used to add things to alerts. IP became system name, became asset role/owner, past history was added and a lot of other things (hi 2006 SIEM Vendor A). Naturally, enrichment on its own does not solve anything, but it reduces fatigue by letting machines do more of the work.
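A minimal enrichment sketch, assuming a CMDB-style lookup keyed by source IP; the `ASSET_DB` contents and field names are invented for illustration:

```python
ASSET_DB = {  # hypothetical CMDB / asset-inventory lookup
    "10.0.0.5": {"hostname": "pay-api-1", "owner": "payments-team", "role": "prod"},
}

def enrich(alert, asset_db=ASSET_DB):
    """Attach asset context (hostname, owner, role) to an alert so the
    analyst does not have to chase it down by hand."""
    ctx = asset_db.get(alert.get("src_ip"), {})
    return {**alert, **ctx}

a = {"rule": "odd_login", "src_ip": "10.0.0.5"}
print(enrich(a)["owner"])  # payments-team
```

Unknown assets simply pass through unchanged, which is itself a useful signal (an alert you cannot enrich is often an alert you cannot triage).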

We tried many, many types of so-called risk prioritization of alerts. Using ever-more-creative algorithms from the naively idiotic threat x vulnerability x asset value to more elaborate ML-based scoring systems. It sort of helped, but also hurt when people focused on top 5 alerts from the 500 they needed to handle. Ooops! Alert #112 was “you are so owned!” Prioritization alone is not a solution to alert fatigue.
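The “naively idiotic” product score mentioned above fits in one line, which is part of why it fails: it ranks the queue but says nothing about the alerts below the cut. The weights here are made up:

```python
def risk_score(alert):
    """The classic threat x vulnerability x asset-value product;
    all three factors are illustrative numbers in [0, 1]."""
    return alert["threat"] * alert["vuln"] * alert["asset_value"]

queue = [
    {"id": 1, "threat": 0.9, "vuln": 0.2, "asset_value": 0.5},
    {"id": 2, "threat": 0.4, "vuln": 0.9, "asset_value": 0.9},
]
ranked = sorted(queue, key=risk_score, reverse=True)
print([a["id"] for a in ranked])  # [2, 1]
```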

Then there was a period of time when beautiful, hand-crafted, artisanal SOAR playbooks were the promised way to solve alert fatigue.

Meanwhile, some organizations decided that the SIEM system itself was the problem and that they needed to focus on narrow detection systems such as EDR, where alerts are supposedly easier to triage. Initially, there was some promise … and now you can see more and more people complaining about EDR alert fatigue. So narrow-focus tools weren’t the answer either. BTW, as EDR evolved to XDR (whatever that is), this solution “unsolved” itself (hi again, SIEM).

Today, as I’m writing this in 2024, many organizations naively assume that AI would fix it any day now. I bet some 2014 UEBA vendor already promised this 10 years ago…

So:

  1. Aggregation
  2. Correlation
  3. Filtering (input [logs] and output [alerts])
  4. Alert source tuning
  5. Context enrichment
  6. “Risk-based” and other prioritization
  7. SOAR playbooks
  8. Narrow detection tools (SIEM -> EDR)
  9. AI…

Good try, buddy! How do we really solve it?


Slicing the Problem

Now, enough with whining and towards something useful. I want to start by suggesting that alert fatigue is not one problem. Over the years, I’ve seen several distinct cases for alert fatigue.

To drastically oversimplify:

You may have alert fatigue because a high ratio of your alerts are either false positives (or: other false alerts), or they indicate activities that you simply don’t care to see. In other words, bad alerts type A1 (false) and bad alerts type A2 (benign / informational / compliance).


A1. FALSE ALERTS

A2. BENIGN ALERTS

You also have alert fatigue when your alerts are not false, but a high ratio of them are particularly fatigue-inducing and hard to triage (it’s not the volume, but the poor information quality of the alert that kills; also bad UX, or, as Allie says, AX). In other words, bad alerts, type B (high fatigue).

NEW: this also applies to malicious (i.e. not benign and not false) alerts where the risk is accepted by the organization (“yes, this student machine always gets malware, no action” kinda thing)


B. HARD TO TRIAGE ALERTS

Finally, there’s the scenario where you have perfectly crafted alerts indicating malicious activities, but your team just isn’t sufficient for the environment you have. In other words, good alerts, but just too many.


C. SIMPLY TOO MANY ALERTS

Naturally, in real life we will have all problems blended together: high ratio of bad alerts AND high overall volume of alerts AND false alerts being hard to triage, leading to (duh!) more high fatigue.

Frankly, “false alerts x hard to triage alerts x lots of them = endless pain.” If you are taking 2 hours to tell that the alert is a false positive, I have some bad news for you: this whole SOC thing will never work…

 

Alert fatigue dimensions (Anton, 2024)

Anything I missed in my coarse-grained diagnosis?


What Can We Do

Now, I don’t promise to solve the alert fatigue problem with one blog, even a long one. But I do propose a framework for diagnosing the problem that we face and for trying to sort the solutions into more promising and less promising for your situation.

For example, if you are specifically flooded with false positive alerts (e.g. high severity alert that triggers on an unrelated benign activity), unfortunately the answer is the one you won’t like: you do need to tune. Aggregation, correlation, etc are not the answer; “fix the bug in your detection code” is. If some alerts are false in bulk, these just should not be produced. If you rely on vendor alerts and your vendor alerts suck, change your vendor. Perhaps in the future some AI will tune your detection content based on the alerts for you, but today, sorry buddy, you are doing it…

So the answer here is not to use excessively more complicated SOAR playbooks. It is about actually making sure that alerts with high false positive ratios are not produced.

Huh? You think, Anton? Yup, in the case of proper false positives, “fix the detection code” really is the answer (or otherwise tune by limiting which systems are covered by the detection, this of course has tradeoffs…). I cringe a bit since I feel that I am dispensing 2001-style advice here (“tune your NIDS!”) but it does not change the fact that it is the right thing to do. BTW, most clients are just not brutal enough with their vendors in this regard…

What about the alerts that are just not useful, but also not false? Here, the main solution avenue is enrichment. That is, after you take a good look at the alerts that serve no purpose whatsoever (not even informational) and turn those off, you add enrichment dimensions so that the remaining alerts become more useful and easier to triage.

For example, logging in after hours may not be bad over the entire environment (a classic useless alert 1996–2024), but may be great for a subset (or perhaps one system, like a canary or a honeypot, actually). Enriched alerts are dramatically easier to process via automation (so a SIEM/SOAR tool may do both for you).

Another scenario involves alerts that, while valid, are exceptionally difficult and painful to triage. This is where enrichment combined with SOAR is again the right answer. I remember a story where a SOC analyst had to open tickets with 3 different IT teams to get the missing context, only to conclude (after 2 days — DAYS!) that the alert was indeed an FP.

Another situation is that alerts are hard to triage and cause fatigue simply because they go to the wrong people. All the modern federated alerting frameworks, where alerts flow down the pipelines to the correct people, seek to fix this, but somehow few SOC teams have discovered the approach (we make heavy use of it in ASO, of course). For a (very basic) example, routing DLP alerts to data owners instead of the SOC can be more efficient, but this requires careful consideration and planning (not diving into this flooded rathole at the time…)
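Federated alerting at its most basic is a routing table keyed by alert type; the team names and alert types below are hypothetical:

```python
ROUTES = {  # hypothetical routing table: alert type -> owning team
    "dlp": "data-owners",
    "edr": "soc-tier1",
}

def route(alert, routes=ROUTES, default="soc-tier1"):
    """Send each alert type to the team best placed to triage it,
    instead of funneling everything through the SOC."""
    return routes.get(alert["type"], default)

print(route({"type": "dlp"}))      # data-owners
print(route({"type": "unknown"}))  # soc-tier1
```

The default route matters: anything unrecognized still needs an owner, or the routing table quietly becomes a suppression list.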

Naturally, some lessons from other fields where the alerting problem is “more solved” help. Here, I am thinking of SREs. In our ASO/MSO approach, we have spent lots of time on the relentless drive to automation. “Study what SREs did and implement it in SOC/D&R” is essentially the essence of ASO (here is our class on it). Relating to the alert fatigue problems we covered, automation (here including enrichment) plus a rapid feedback loop to fix bad detection content is basically the whole of it. No magic! No heroes!

I do want to present the final case: giving more alert triage decisions to machines. A “human-less”, fully automated “AI SOC” is of course utter BS (despite these arguments). However, the near future where AI helps by handling much of the cognitive work of alert triage is coming. This may not always reduce alert volume, but it likely reduces human fatigue.

Despite all these efforts, alert fatigue may persist. In some cases, the issue might simply be a lack of adequate staffing and that’s that…


Summary

So, a summary of what to do:

Diagnose the fatigue: Begin by identifying the root cause of your specific alert fatigue. Is it due to false positives, benign alerts, hard-to-triage alerts, or simply an overwhelming volume of alerts? Or wrong people getting the alerts perhaps?

Targeted treatment: Once diagnosed, apply the appropriate solutions based on the symptoms identified:

  • False positives: Focus on tuning detection rules, improving alert richness/quality, and potentially changing vendors if necessary.
  • Benign alerts: Implement enrichment to add context and make alerts more actionable. Then SOAR playbooks to route.
  • Hard-to-triage alerts: Utilize enrichment and SOAR playbooks to streamline the triage process. This item has a lot more “it depends”, however, to be fair…
  • Hard-to-triage alerts for specific analysts: Start adopting federated alerting for some alert types (e.g. DLP alerts that go to data owners)

If in doubt, focus on developing more automation for signal triage.

Expect some fun AI and UX advances for reducing alert fatigue in the near future.

Wish for some luck, because this won’t solve the problem but it will make it easier.

Share your experience with security alert fatigue and — ideally — how you solved it or made it manageable…

Final thought: Let’s collectively aim for Security Alert Fatigue (1992–202x)

v1.1 11–2024 (more updates likely in the future)

v1.0 11–2024 (updates likely in the future)


Related resources:



- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

Read more…

The present application was filed for quashing proceedings in a case pending for offences punishable under Sections 66-C and 67 of the Information Technology Act, 2000 (‘the IT Act, 2000’). The Hon. HC stated that it could not be concluded, without any evidence, that the applicant was the only person who could have created the fake Facebook accounts from which the alleged defamatory posts were made in respect of Respondent 2 and his family members, including the applicant’s wife.

The Court was of the opinion that printouts of Facebook screenshots would not prove that the said post was created from the alleged fake account.

Case Law : Mahesh Shivling Tilkari v. State of Maharashtra, Criminal Application No. 2850 of 2019, decided on 22-10-2024


Read more: Link to the Criminal Application


-By Adv (Dr.) Prashant Mali

Original link of post is here

Read more…

We are hosting an exclusive CISO Platform Talks session on Evaluating AI Solutions in Cybersecurity: Understanding the "Real" vs. the "Hype" featuring Hilal Ahmad Lone, CISO, Razorpay and Manoj Kuruvanthody, CISO & DPO, Tredence Inc.

In the evolving world of cybersecurity, distinguishing real AI innovation from marketing hype is crucial. This discussion explores key aspects of evaluating AI solutions beyond vendor claims and assessing an organization’s readiness for AI, considering data quality, infrastructure maturity, and how well AI can meet real-world cybersecurity demands. 


 

Key Discussion Points: 

  • Distinguishing marketing hype from practical value: Focus on ways to assess AI solutions beyond vendor claims, including real-world impact, measurable results, and the AI’s role in solving specific cybersecurity challenges.

  • Evaluating AI maturity and readiness levels: Assessing whether an organization is ready for AI in its cybersecurity framework, especially regarding data quality, infrastructure readiness, and overall maturity to manage and scale AI tools effectively. This also includes gauging the AI model’s maturity level in handling complex, evolving threats.

  • AI Maturity and Readiness - Proven Tools vs. Experimental Models: Evaluate the readiness level of AI models themselves, where real maturity is marked by robust performance in varied cyber environments, while hype often surrounds models that are still experimental or reliant on ideal conditions. Organizational readiness, such as infrastructure and data integration, also plays a critical role in realizing real-world results versus theoretical benefits.

Join us live or register to receive the session recording if the timing doesn’t suit your timezone.

>> Register here

Read more…

We had a community session on "Offensive Security: Breach Stories to Defense Using Offense" with Saravanakumar Ramaiah, (Director - Technology Risk Management, Sutherland) & Rajiv Nandwani (Global Information Security Director, BCG).

In this discussion, we explore the importance of penetration testing and red team exercises in identifying security gaps within organizations, the tactics attackers employ in phishing campaigns to gain initial access, and the simulation of advanced persistent threats (APTs) to uncover risks from zero-day vulnerabilities and social engineering attacks. We also examine the critical role of social engineering in physical penetration testing and strategies to bolster defenses against these threats.

 

Key Highlights

  • Leveraging penetration testing and red team exercises to identify security gaps within organizations.

  • Techniques attackers use in phishing campaigns to gain initial access and navigate networks to access sensitive data.

  • Simulating advanced persistent threats (APTs) to understand risks from zero-day vulnerabilities and social engineering attacks.

  • Examining the role of social engineering in physical penetration testing and methods to strengthen defenses against it.

 

About Speaker

  • Saravanakumar Ramaiah, Director - Technology Risk Management, Sutherland 
  • Rajiv Nandwani, Global Information Security Director, BCG

 

CISO Platform Talks (Recorded Version)

 

Executive Summary (Session Highlights) : 

  1. Identifying Security Gaps with Penetration Testing
    In this session, experts discuss the critical role of penetration testing and red team exercises in identifying vulnerabilities within organizations. These proactive measures simulate real-world attacks, enabling companies to uncover weaknesses before they can be exploited by malicious actors.

  2. Understanding Phishing Campaigns
    The conversation highlights the techniques employed in phishing campaigns that attackers use to gain initial access to networks. Recognizing these tactics is essential for developing effective security protocols and training programs to defend against such threats.

  3. Simulating Advanced Persistent Threats (APTs)
    The chat delves into the simulation of APTs to understand the risks associated with zero-day vulnerabilities and social engineering attacks. By mirroring advanced tactics used by threat actors, organizations can better prepare their defenses.

  4. The Role of Social Engineering in Physical Penetration Testing
    Experts analyze the impact of social engineering in physical penetration tests, emphasizing the need for comprehensive training and awareness to strengthen defenses. Participants discuss methods for mitigating risks associated with these covert tactics.

  5. Strengthening Organizational Defenses
    Finally, the discussion underscores the importance of integrating findings from penetration tests and simulations into broader security strategies. By doing so, organizations can enhance their resiliency against evolving cyber threats and improve their overall security posture.
Read more…

We are hosting an exclusive CISO Platform Talks session on "Offensive Security: Breach Stories to Defense Using Offense" featuring Saravanakumar Ramaiah, Director - Technology Risk Management, Sutherland and Rajiv Nandwani, Global Information Security Director, BCG.

In today’s constantly evolving threat environment, it is essential for security leaders to adopt an offensive approach to capitalize on emerging opportunities. As boards become more aware of the consequences of security incidents, these leaders need to guide their colleagues on effective mitigation strategies.


 

Key Discussion Points: 

  • Leveraging penetration testing and red team exercises to identify security gaps within organizations.

  • Techniques attackers use in phishing campaigns to gain initial access and navigate networks to access sensitive data.

  • Simulating advanced persistent threats (APTs) to understand risks from zero-day vulnerabilities and social engineering attacks.

  • Examining the role of social engineering in physical penetration testing and methods to strengthen defenses against it.

 

Join us live or register to receive the session recording if the timing doesn’t suit your timezone.

 

>> Register here 

Read more…