In April 2014, the Heartbleed vulnerability took the internet by surprise. Heartbleed was the name given to CVE-2014-0160, a simple-to-exploit flaw in OpenSSL that allowed attackers to view cryptographic keys, login credentials and other private data. OpenSSL was one of the most widely used (supposedly) secure transport libraries, running on Apache and Nginx web servers. It is estimated that up to 55% of the Alexa Top 1 Million HTTPS-enabled websites were open to the vulnerability at the time of its announcement. The flaw affected Bitcoin clients and exchanges, Android devices, email servers, firewalls made by big names like Cisco and Barracuda, and millions of websites. How was the bug found? By Google and Codenomicon security engineers scanning and testing OpenSSL.
Unfortunately, the vulnerability had been in the code since December 2011, more than two years without anyone scanning for and finding the flaw. That no one thought to scan such a critical piece of software for so long seems strange in hindsight. However, it is far too common.
Businesses (and individuals) far too often choose, install and run software without any due diligence. Heartbleed revealed that too many firms fail to perform risk analysis.
There are steps both large and small businesses can take to review the security of software. For a small firm, it can be as simple as performing some Internet searches on software you are about to purchase, or already have. Larger firms have the resources and time to be more rigorous, provided they follow defined processes and take this security risk seriously.
Secure Development Lifecycle
Secure Development Lifecycle (SDLC) is a process that drives security into product development. One of the first questions to ask your vendors is how they perform this activity. Firms with a documented SDLC process reportedly lower their risk of critical vulnerabilities by 80%, and organizations that purchase products built under a proper SDLC see reductions in configuration management and incident response costs of around 75%.
An SDLC process can involve many steps, but at a high level there are five: Business Requirements, Design, Test Plans, Coding and Testing. Within those five steps there should be sub-steps that describe items such as User Risk Analysis (within Business Requirements) or Static Code Analysis (within Coding). However the software vendor approaches its SDLC, as security professionals we want to ask them whether they have a documented process.
If the answer is anything less than an unqualified yes, then they are not performing their due care. Once you get a firm “yes” from them, you can review the actual process in an onsite visit, where physical validation can occur.
The decision whether to perform a remote assessment (simply asking the software vendor if they follow a documented SDLC process) or an onsite assessment should be risk based. Onsite assessments are costly and should be reserved for third-party software that poses real risk: money-movement software, software that protects your data, or software that protects your network, for example.
Whether the security assessment is remote or onsite, ask your vendor what tools they use for static analysis. Static analysis tools do not execute the software, but can find defects early in the development of a program, before integration and further testing. This step is critical for catching bugs early and ensures that individual developers are testing their own work.
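To make the idea concrete, here is a toy static check in Python, a minimal sketch of what real static analysis tools do at much larger scale (the rule and sample code are invented for illustration): walk a program's syntax tree and flag risky calls without ever running the code.

```python
import ast

# Hypothetical single-rule static analyzer: flag calls to eval/exec.
# Real static analysis tools apply hundreds of such rules.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = input()\nresult = eval(x)\n"
print(find_risky_calls(sample))  # flags the eval on line 2
```

The point is that the defect is surfaced before the code is ever executed, which is what lets vendors catch such bugs before integration.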
Next, inquire about what tools they use for dynamic analysis, where testing is done while executing the program on a computer (real or virtual). This identifies vulnerabilities at runtime and can validate static code analysis findings.
Lastly, ask how the software vendor prioritizes and processes the bugs these tests produce. Ensure they have a tiering system for risk (high, medium, low) and that they attack findings in risk order (high first, low last). Ask the vendor how quickly they handle critical security gaps (within 24 hours?) and how they notify customers.
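The triage discipline described above can be sketched in a few lines (the finding IDs and response windows are hypothetical): tier findings by risk, work them in risk order, and attach a target response window to each tier.

```python
# Hypothetical risk tiers and response windows (hours).
SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}
RESPONSE_WINDOW_HOURS = {"high": 24, "medium": 72, "low": 168}

def triage(findings):
    """Sort findings so high-risk items are attacked first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])

findings = [
    {"id": "BUG-7", "severity": "low"},
    {"id": "BUG-3", "severity": "high"},
    {"id": "BUG-5", "severity": "medium"},
]
for f in triage(findings):
    print(f["id"], f["severity"],
          f"fix within {RESPONSE_WINDOW_HOURS[f['severity']]}h")
```

However a vendor implements it, the essential properties are the same: a defined tier for every finding and a defined clock for every tier.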
Once due diligence has been completed on both the vendor and their SDLC, the software can be installed on your organization’s network. A simple step to lower risk is to ensure the application gets only the access and permissions required under the least-privilege model.
If the vendor requires the application to have elevated privileges to operate properly, investigate and ask questions about that level of permission. Be wary if the application requires access to services or areas that do not seem to match the service it provides.
Even though you have asked for and received confirmation from the vendor that they perform security testing on their program, that should not prevent you from performing your own testing. Solutions range from low-cost tools to expensive enterprise suites; some are free with limited functionality, while professional versions can cost thousands of dollars per seat.
Search for Application Security Testing (AST) tools and you can find one that fits your organization’s budget and resource needs. For applications developed by a third party, dynamic application security testing tools are the best fit. Also, if you have access to them, origin analysis/software composition analysis (SCA) tools tell you whether any components have widely known vulnerabilities.
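At its core, SCA is a lookup of your bill of materials against advisory data. A minimal sketch of that idea (the advisory table here is a stub containing only the real Heartbleed entry; real tools query live vulnerability databases):

```python
# Stub advisory database; real SCA tools pull from live CVE feeds.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1"): "CVE-2014-0160 (Heartbleed)",
}

def scan_bom(bom):
    """Return advisories matching components in a bill of materials."""
    return [
        (name, version, KNOWN_VULNERABLE[(name, version)])
        for name, version in bom
        if (name, version) in KNOWN_VULNERABLE
    ]

# Hypothetical bill of materials for a purchased application.
bom = [("openssl", "1.0.1"), ("zlib", "1.2.11")]
print(scan_bom(bom))
```

Even this trivial version would have answered the question most firms could not answer in April 2014: does anything we run contain a vulnerable OpenSSL?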
Once test results for the vendor’s software are complete, they need to be prioritized by risk. If any vulnerabilities carry a high-risk rating, a conversation with the vendor about your findings is appropriate. Some vendors may not be pleased to hear you are performing this level of testing, but this is about the security of your firm.
This is where an active relationship with your organization’s software vendors is important; it allows sometimes uncomfortable conversations to evolve into an understanding that this process is designed to help the security of both vendor and customer.
Work with your vendors to understand whether they are aware of the high-risk findings your testing uncovered and, if so, how they are addressing them. If they are unaware, or deny that a finding is a real vulnerability (perhaps they believe it is a false positive), the discussion can turn to how they can best demonstrate that. Depending on the options available, that can take the form of the vendor running their own tests and showing that their results lack your finding, or walking through that area of the code to demonstrate the flaw is not present.
Software running in the cloud, Software-as-a-Service (SaaS), is ubiquitous and great for fast, lower-cost deployment of applications. However, just as with on-prem software, there is no absolute guarantee it is free of security or stability risks. Running static and dynamic analysis on cloud-based software is different, but can produce great results if done properly.
Static analysis for cloud apps focuses on three core areas: signatures, APIs and strings. Signatures are digital identifications produced to validate that a software package is authentic and comes from the correct manufacturer; while not commonly done, validating the signature is a must-have for static analysis. Another simple but useful static test is extracting strings with the Unix strings command-line tool; this produces the printable character sequences (anything above a minimum length, four characters by default) embedded in a binary, which supports inferences about the application’s internal workings.
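The string extraction just described can be sketched as a minimal Python equivalent of the Unix strings tool (the sample binary blob is invented): pull runs of printable ASCII out of raw bytes without executing anything.

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return printable ASCII runs of at least min_len characters,
    mirroring the default behavior of the Unix `strings` tool."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Hypothetical binary blob: strings hint at HTTP use and an API key.
blob = b"\x00\x01GET /login HTTP/1.1\x00\xffapi_key=\x02ok\x03"
print(extract_strings(blob))
```

Strings like URLs, file paths and configuration keys recovered this way are exactly the kind of clues that let an analyst infer how an application works internally.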
API calls are extremely common in cloud-based applications, allowing other applications and users to interact remotely. Not all APIs are created equally or securely, so running static analysis against API calls, and ensuring that the APIs are secure, should be discussed with the vendor. Input validation is a requirement: the API should always reject empty or null values when they are unacceptable; validation of input type and size must be built in; and for any input, the API must produce the expected output.
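The validation rules listed above can be sketched as the kind of server-side check an API handler might run before acting on a request (the field name and size limit are hypothetical):

```python
def validate_username(value):
    """Reject empty/null values and enforce type and size limits."""
    if value is None or value == "":
        return False, "value must not be empty or null"
    if not isinstance(value, str):
        return False, "value must be a string"
    if len(value) > 64:  # hypothetical maximum size
        return False, "value exceeds maximum size"
    return True, "ok"

print(validate_username(None))
print(validate_username("alice"))
```

The predictable (False, reason) / (True, "ok") shape illustrates the last requirement as well: for any input, the caller gets the expected form of output rather than an unhandled failure.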
As above, dynamic testing of SaaS can focus on three areas: registry, network and memory. For the registry, look at the registry keys and any modifications made by the program. Dynamic analysis of network traffic tests whether it is possible to change the username sent to the cloud, which would allow access to restricted data. Similarly, memory can be inspected with dynamic testing to verify whether any non-public data is leaking out of it.
Lastly, if you are using a cloud-based application, insist on having testing and sandbox environments. The testing site is crucial for proper development and analysis. To test properly and thoroughly prior to production, the sandbox environment should have security controls similar to production, along with (anonymized) production data. Running tests in the sandbox provides the closest-to-real-world analysis of security gaps.
Open source software
Open Source software is often viewed as “free” and safe to use, whether as a whole program on its own or as code embedded in a program you have purchased. While it is free of direct cost to a developing company, it is not free when you consider the overhead needed to ensure it is secure and stable. The Heartbleed vulnerability was in the open source OpenSSL library.
One of the things that added to the confusion and risk was the widespread yet undocumented use of the open source OpenSSL library in many software tools. Thousands of companies had built OpenSSL into their software, but had not documented that fact, had not documented where it sat in thousands of lines of code, and had not designated an ‘owner’ (someone who would look for vulnerabilities and patch it).
Ask your third-party software vendors whether they use open source components in their product. That is not a deal killer, but how they manage open source can be a high risk if not done properly. The vendor must have a way to track and identify the open source code in their product, so that when (not if) a vulnerability is posted against it, they can quickly and expertly correct it and develop a patch. There is software that can track this automatically, but even if they use a spreadsheet, the need to track it is paramount.
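The tracking discipline need not be elaborate. A minimal sketch of such an inventory (the product names are invented), whose whole job is to answer, the moment an advisory lands, "which of our products embed this component?":

```python
# Hypothetical inventory mapping each product to its embedded
# open source components. A spreadsheet can hold the same data;
# what matters is that the mapping exists and is kept current.
INVENTORY = {
    "payments-gateway": ["openssl", "libcurl"],
    "report-builder": ["zlib"],
    "admin-portal": ["openssl", "zlib"],
}

def products_using(component):
    """List every product that embeds the given open source component."""
    return sorted(p for p, parts in INVENTORY.items() if component in parts)

print(products_using("openssl"))
```

This is exactly the question thousands of companies could not answer about OpenSSL when Heartbleed was announced.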
Contracts that define liabilities and expectations are a fact of life in business. Language in contracts with software companies must include several provisions to ensure you are protected. First, there should be language requiring the vendor to guarantee no backdoors. Backdoors are hidden entries into a program, sometimes used by developers early in the process for easier access to alter their piece of the application.
Many firms forbid their developers from doing this, but there are still instances of it being done. Backdoors are reportedly the fourth most common threat detected in software applications. A great example was the Borland Interbase backdoor.
From 1994 to 2001, Borland Interbase versions contained a hard-coded backdoor placed there by its own engineering staff. It allowed a remote user to connect via port 3050 and take full control of the database. To make matters worse, the credentials were hard-coded as well: username “politically”, password “correct”.
Another example is the Juniper Networks backdoor that allowed an attacker to remotely eavesdrop on encrypted traffic. Whether a backdoor is introduced deliberately or left in by accident, the results can be catastrophic for businesses running the software.
Second, there must be protections against malware or other malicious code being added to the program. This might sound self-evident, but having this language places the responsibility onto the vendor. Malicious code does not have to be criminal; it can simply perform operations you have not authorized, such as spyware and adware exposing users to unwanted solicitations, or even key-logging and similar activities.
Some firms will be too big to negotiate a legal document for this type of risk reduction. Whether they have enough market share or are simply resistant to this type of legal protection, it can be an obstacle to closing a contract, and as a security professional you have to weigh your options in these cases.
In cases where, despite best efforts, there is no option, it returns to a risk-based decision and taking steps to reduce the risk on your own. First, read and understand their legal documentation and what they obligate themselves to contractually. Second, once the software is installed and running, ensure that you have granted the application least-privilege access. Logging and monitoring must also be stepped up on these applications, and reviewed at regular intervals for potential issues or risks.
The term caveat emptor is part of a lengthier declaration: Caveat emptor, quia ignorare non debuit quod jus alienum emit: “Let a purchaser beware, for he ought not to be ignorant of the nature of the property which he is buying from another party.”
The assumption is that buyers will inspect the product and otherwise ensure they are confident in its integrity before the purchase is completed. Your organization must ask questions and perform due diligence on software vendors; validating that the vendor has a proper process to build security into the product before purchase is required.
Once the software is running, either on-prem or in the cloud, testing new releases and patches for vulnerabilities is an ongoing effort. Ensure the vendor has controls and processes around open source software. Whenever possible, create legal language that clearly identifies to the software manufacturer the security due care your firm expects. Take a skeptical eye to the high-risk, critical applications that support your business operations.
Article Provided By: Security Magazine