Background:
It has been suggested that any new development will soon include less than 1% original code. If this is not true today, it likely will be as time progresses.
With any security program, the goal is to identify the vulnerabilities, the related risks, and the mitigations or compensating controls that can be implemented. With so much development relying on libraries and binaries from third-party/open source repositories such as GitHub, Stackify, or Microsoft, additional steps and processes need to be implemented to ensure system and data owners are aware of the risks related to any system.
Using third-party code can greatly accelerate application development, but it brings with it a certain amount of risk. Some of these risks can be mitigated; however, modifying third-party code is likely outside your organization's capabilities. Those risks need to be properly documented, either as part of the overall risk assessment or separately. With this in view, you can have one or more risk discussions about which risks are to be accepted, avoided, mitigated, or transferred, based on the risk owner's risk appetite for your organization.
A third-party script might have unintended consequences, such as overwriting your variables. Also, many tracking scripts do not sanitize data properly, which can allow attackers to inject malicious code.
Additionally, some third-party scripts still use non-secure HTTP. This can let attackers capture users' information, and it can trigger security warnings that scare away users on secure pages. Third-party scripts often load other third-party scripts of their own. When the third-party scripts you trust bring in scripts you don't expect, this multiplies the potential for all of the security and privacy risks mentioned thus far.
This is just a sampling of the risks that can be introduced to an application, so it is important to get your security team involved early in the SDLC.
Assumptions:
Tools - Your organization has some or all of the tools described below in place. In short, your organization should be doing static application security testing (SAST), dynamic application security testing (DAST), operating system (OS) scans, and architectural reviews using a threat modeling methodology such as STRIDE, PASTA, or VAST. Note that in no way should this document be considered an endorsement of any specific product over another; specific products are listed as examples only.
SDLC - You have a documented secure System (or Software) Development Life Cycle (SDLC) plan and policy that includes security at every stage/phase of a software development project, from feasibility study and planning through maintenance. Security should be incorporated in the earliest steps of your SDLC regardless of development model, and must be part of the implementation/development (coding) phase. It is also possible, but less desirable because of the cost, to complete these activities during a later phase such as maintenance or continual service improvement.
Risk - You have risk appetite statements and a risk register for your organization, or specifically for the data owner. Your organization should have clear, documented requirements for remediation documentation and timelines for the risk taxonomy (Critical, High, Medium, Low, etc.) you use. The National Institute of Standards and Technology (NIST) publishes a Risk Management Framework (RMF) that is very useful if you are new to risk management.
Basic 5-Step Process:
1. Secure Architecture Review: Review the architecture to be sure it is working in your favor.
2. SAST: Statically scan the code for vulnerabilities.
3. DAST: Scan the code dynamically for vulnerabilities.
4. Infrastructure Vulnerability Assessment: Scan the platform for OS and configuration vulnerabilities.
5. Risk Assessment: Conduct a risk analysis based on the data owner’s requirements or the framework your organization has implemented.
Depending on the tools in use by your organization, you may be able to run the scans (SAST/DAST/OS) and the architecture review in parallel. This may or may not be advantageous: you may instead want to build the process out so that specific tasks are sequenced and vulnerabilities are filtered and reported to the staff/teams that have operational responsibility for mitigating them.
Below is an outline of how to inject your security tools into the software development cycle. Your organization and your business process may differ substantially, so treat this as a guide, not a framework.
Detail of the Process:
Architecture Review
This is either the initial design process in the early stages of the SDLC or a review of the design.
Architecture findings are related to the data flow diagram. How does the data, including authentication/authorization data, move through your application?
Threat modeling
. MS TMT & STRIDE https://en.wikipedia.org/wiki/STRIDE_(security)
. MS TMT 2016 https://www.microsoft.com/en-us/download/details.aspx?id=49168
. https://docs.microsoft.com/en-us/azure/security/azure-security-threat-modeling-tool (based on STRIDE)
. OWASP Threat Dragon https://threatdragon.org/login
Create the data flow diagram – the System Owner or system architect should be able to provide at least the high-level flow, if not the details.
Let the threat modeling tool help guide the conversation about what is and is not in place, and what could or should be in place to secure the data. All of this is based on the data in use, its sensitivity, and the risk appetite of the business.
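If it helps to structure that conversation outside of a dedicated tool, the following is a minimal sketch (in Python) of enumerating data-flow elements against the STRIDE categories. The element names and the "control" placeholder are illustrative assumptions, not output from any particular threat modeling product.

```python
# Minimal sketch: walk data-flow-diagram elements against the STRIDE categories
# to seed the threat modeling conversation. Element names and the "control"
# placeholder are illustrative assumptions only.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical data-flow elements for a simple web application.
dfd_elements = [
    "Browser -> web server (login form)",
    "Web server -> database (credentials, session data)",
    "Session token store",
]

def threat_worksheet(elements):
    """Yield (element, STRIDE category, control) rows to discuss and document."""
    for element in elements:
        for category in STRIDE:
            # Record whether a control exists, is planned, or is an accepted gap.
            yield element, category, "TBD"

for element, category, control in threat_worksheet(dfd_elements):
    print(f"{element:<50} {category:<25} control: {control}")
```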
The important item here is to know what vulnerabilities exist in your application and to document them and/or their remediation.
Identify and document any false positives
If you are starting this late in the SDLC, it is still prudent to complete the design review before a code review.
SAST Scans
Third-party code identification – The DevOps team should be able to list the libraries and common scripts used, as well as the current version in use and the latest version available. For example, bootstrap.js, jquery-1.10.2.js, or jquery-3.3.1.js.
In your SAST tools, you should be able to identify these files/libraries.
As you isolate these files, prepare a report on the vulnerabilities in just the third-party code. To identify and evaluate known vulnerabilities in your third-party code, use sources such as the following (a lookup sketch follows this list):
. National Vulnerability Database (NVD)
. Common Weakness Enumeration (CWE)
. Common Vulnerabilities and Exposures (CVE)
. Common Vulnerability Scoring System (CVSS)
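As a starting point for that lookup, here is a minimal sketch that queries the NVD CVE API 2.0 by keyword for a library named in your inventory. The endpoint, parameters, and response fields should be verified against the current NVD documentation, and a dedicated SCA tool or Retire.js will do this more reliably; the jQuery version shown is taken from the example above.

```python
# Minimal sketch: look up published CVEs for a third-party library by keyword
# using the NVD CVE API 2.0. Verify the endpoint and response fields against the
# current NVD documentation; an API key raises the rate limits.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup(keyword: str, limit: int = 5) -> None:
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [{}])
        score = metrics[0].get("cvssData", {}).get("baseScore", "n/a")
        print(cve.get("id"), "CVSS base score:", score)

# Example from the library inventory above; adjust the keyword for your own list.
lookup("jquery 1.10.2")
```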
Third-party code risk acceptance
Include vulnerabilities identified by tools such as Retire.js or Black Duck.
A repository manager, i.e., something to block/allow specific binaries, build artifacts, or release candidates, should be used, such as Sonatype.
Put the vulnerabilities into risk language and ensure the business risk appetite is current (a simple severity-mapping sketch follows this list).
Engage the system/business owners, as ultimately the risk is theirs to own.
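To help put the vulnerabilities into risk language, the sketch below maps CVSS v3.x base scores onto the standard CVSS qualitative severity bands (Critical/High/Medium/Low). Your own risk taxonomy and the data owner's appetite thresholds may differ; the CVE identifiers and scores shown are illustrative and should be pulled from the NVD rather than trusted as written.

```python
# Minimal sketch: translate CVSS v3.x base scores into the qualitative severity
# bands defined by the CVSS v3.x specification. Map these onto your own risk
# taxonomy and the data owner's appetite; these are the standard CVSS bands,
# not an organizational policy.
def cvss_to_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Illustrative third-party findings; pull authoritative scores from the NVD.
findings = [("CVE-2015-9251", 6.1), ("CVE-2019-11358", 6.1)]
for cve_id, score in findings:
    print(f"{cve_id}: CVSS {score} -> {cvss_to_severity(score)}")
```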
Third-party code isolation - The intent here is to mask these findings so your developers can focus on "their" code and implement whatever bug/fix mechanism they need to.
Some tools will allow you to identify findings by file name; in others it will be by CWE, by tool, or by a combination.
If the latter, ensure a rule change doesn't impact an "in-house developed" file.
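One simple way to do this outside of the tool itself is to partition an exported findings list by file path, as in the minimal sketch below. The field names and path patterns are assumptions; every SAST product exports findings in its own format.

```python
# Minimal sketch: partition exported SAST findings into third-party and in-house
# buckets by file path so each report goes to the right audience. Field names
# and path patterns are assumptions; real SAST exports vary by tool.
THIRD_PARTY_MARKERS = ("node_modules/", "vendor/", "jquery", "bootstrap")

def is_third_party(finding: dict) -> bool:
    return any(marker in finding["file"] for marker in THIRD_PARTY_MARKERS)

findings = [
    {"file": "Scripts/jquery-1.10.2.js", "cwe": "CWE-79", "severity": "Medium"},
    {"file": "src/Controllers/LoginController.cs", "cwe": "CWE-89", "severity": "High"},
]

third_party = [f for f in findings if is_third_party(f)]
in_house = [f for f in findings if not is_third_party(f)]
print(f"third-party findings: {len(third_party)}, in-house findings: {len(in_house)}")
```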
Baseline the code – this is the review process for "your" code, with the aim of identifying both capability and architecture false-positive findings.
. Capability findings are related to the code itself, e.g., can changing the color of an HTML element be leveraged by attackers?
. Architecture findings are related to the data flow diagram; see the Architecture Review section above.
. Consider a Software Composition Analysis (SCA) tool, such as Veracode or Black Duck.
. Identify and document any false positives
OS Scans
Scan the operating system and the application/web server configurations for vulnerabilities.
Use a scanning tool, e.g., Nessus, PowerShell scripts, or Microsoft System Center Configuration Manager (SCCM).
All of your systems should at least have the latest security updates installed.
The key is to validate that your systems are hardened with up-to-date patches and baseline configuration settings, as this is both a potential audit finding and a compensating control for risk (a small triage sketch follows at the end of this section).
Identify and document any false positives
Ideally, this would all be viewable in a single-pane-of-glass application; such products exist.
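As a small example of triaging OS scan output for the operations team, the sketch below counts findings by severity in an exported Nessus (.nessus v2) file. The element and attribute names assume the standard v2 export format; verify them against your own export.

```python
# Minimal sketch: summarize an exported Nessus (.nessus v2) scan by severity so
# patching and hardening gaps can be reported to the operations team. The XML
# element/attribute names assume the standard v2 export; verify against yours.
import xml.etree.ElementTree as ET
from collections import Counter

SEVERITY_LABELS = {0: "Info", 1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

def summarize(nessus_file: str) -> Counter:
    counts = Counter()
    for item in ET.parse(nessus_file).getroot().iter("ReportItem"):
        counts[SEVERITY_LABELS.get(int(item.get("severity", "0")), "Unknown")] += 1
    return counts

for severity, count in summarize("scan_export.nessus").most_common():
    print(f"{severity}: {count}")
```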
IAST Scans (Optional)
There is another class of test called Interactive Application Security Testing (IAST), offered in solutions such as those from Synopsys or Contrast Security. These tools leverage an agent or software instrumentation to monitor an application as it runs and gather information about what it does and how it performs.
A variation on this is Runtime Application Self-Protection (RASP), which works like an application firewall.
Identify and document any false positives
DAST Scans
Conduct dynamic scans of your application using a Dynamic Application Security Test (DAST) tool.
The DAST scans can be credentialed (authenticated user accounts) or non-credentialed (without authentication), depending on the web application. There are various tools, such as IBM AppScan, Fortify WebInspect, Rapid7 AppSpider/Nexpose, and PortSwigger Burp Suite.
Identify and document any false positives
Consider whether the scan was invasive, and whether it was planned based on where the application can go.
Complement the automated tool scans with some manual vulnerability tests, such as user privilege escalation attempts against the critical functions of the application.
Your test cases should include both successes (what a user should be able to do) and failures (what they should not be able to do).
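A minimal sketch of such paired test cases is shown below, using hypothetical endpoints and session tokens: an admin user should reach an admin-only function, while a standard user should be denied.

```python
# Minimal sketch of paired positive/negative authorization tests: an admin user
# should reach an admin-only function, a standard user should not. The URL,
# endpoint, and tokens are hypothetical placeholders.
import requests

ADMIN_ENDPOINT = "https://app.example.com/admin/users"  # hypothetical admin-only function

def status_for(token: str) -> int:
    resp = requests.get(
        ADMIN_ENDPOINT,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return resp.status_code

admin_token = "REPLACE_WITH_ADMIN_SESSION"      # success case: should be allowed
user_token = "REPLACE_WITH_STANDARD_SESSION"    # failure case: should be denied
assert status_for(admin_token) == 200, "admin unexpectedly blocked"
assert status_for(user_token) in (401, 403), "standard user reached an admin function"
print("authorization test cases passed")
```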
The results should be correlated with those from your SAST/IAST and OS scanning tools. The aim here is a single consolidated report from which an accurate assessment of the risk can be produced.
Put the vulnerabilities into risk language and ensure the business risk appetite is current.
Engage the system/business owner, as ultimately the risk is theirs to own.
Risk Analysis
Risk Register – what? You don't have a register yet? Seriously, you need to have a risk register. Document your risks! (A minimal register-entry sketch follows below.)
Risk appetite: this is a statement of how much or how little risk the data owner or business will accept. This may also be referred to as risk tolerance. Your organization may use the expression risk retention, where the organization acknowledges that the potential loss from the risk is not great enough to justify spending money to avoid it.
Risk appetite - A target level of loss exposure that the organization views as acceptable, given business objectives and resources.
Risk tolerance - The degree of variance from the organization’s risk appetite that the organization is willing to tolerate or retain.
Neither IT nor security specifically owns risk; rather, they make decisions based on the business's comfort with risk.
You cannot make a risk decision without this information.
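If you are building a register from scratch, the sketch below shows one minimal shape for a register entry that supports accept/avoid/mitigate/transfer decisions. The field names are illustrative, not a standard; align them with your own framework (e.g., NIST RMF) and risk taxonomy.

```python
# Minimal sketch of a risk register entry. Field names are illustrative, not a
# standard; align them with your own framework (e.g., NIST RMF) and taxonomy.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    asset: str
    risk_owner: str                   # the data/business owner, not IT or security
    inherent_severity: str            # e.g., Critical / High / Medium / Low
    treatment: str                    # accept / avoid / mitigate / transfer
    compensating_controls: List[str] = field(default_factory=list)
    target_date: Optional[date] = None
    status: str = "Open"

example = RiskRegisterEntry(
    risk_id="RSK-0001",
    description="Known XSS in bundled jquery-1.10.2.js (third-party code)",
    asset="Customer portal",
    risk_owner="Data owner, customer services",
    inherent_severity="Medium",
    treatment="mitigate",
    compensating_controls=["WAF rule", "output encoding on affected pages"],
    target_date=date(2025, 12, 31),
)
print(example)
```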
AOR – acceptance or acknowledgement of risk. Someone, such as the data owner or the business, needs to acknowledge two things:
1) That there is inherent risk in using third-party code.
2) The overall risk of the application, less the compensating controls and documented false positives.
This could be in one or more documents. Having it in two documents might make it an easier pill to swallow.
Consolidate the actual risks with the false positives. Prepare a risk document for management to review and take action on.
Some organizations will prefer two AOR documents; one for the third-party software and one for the overall application.
Single pane of glass:
Do you have a tool to bring together the reports from these tools? There are some, and they make the overall process easier, but they could create complexity if you can't filter out the OS vulnerabilities to show the application development team or, conversely, the application vulnerabilities to show the operations team.
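Even without a commercial product, the basic idea can be illustrated: normalize findings from the different tools into a common schema, then filter them by audience. The sketch below is a minimal illustration; the source labels, layers, and field names are assumptions, not any vendor's schema.

```python
# Minimal sketch of the "single pane of glass" idea: normalize findings from the
# different tools into one schema, then filter by audience so developers see
# application findings and operations sees OS/configuration findings.
# Source labels, layers, and field names are illustrative assumptions.
findings = [
    {"source": "SAST", "layer": "application",    "title": "SQL injection in LoginController", "severity": "High"},
    {"source": "DAST", "layer": "application",    "title": "Reflected XSS on /search",         "severity": "Medium"},
    {"source": "OS",   "layer": "infrastructure", "title": "Missing OS security update",       "severity": "High"},
]

def report_for(audience: str) -> list:
    layer = "application" if audience == "development" else "infrastructure"
    return [f for f in findings if f["layer"] == layer]

for team in ("development", "operations"):
    print(f"--- {team} report ---")
    for f in report_for(team):
        print(f'[{f["severity"]}] {f["source"]}: {f["title"]}')
```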
Risks:
You could get a lot of pushback from developers or operations staff for doing this, and you could get pushback from the business as well.
The trick to doing application security is to present the vulnerabilities and their risk level in the context of the known risk appetite. You don't want to be "Mr. No"; rather, you want to partner with the business and show them that a better way exists.