Research shows that most enterprises are rightly more concerned about what their own people are doing within their cloud environments, and about their lack of visibility and control, than about the threat of external attacks.
Many IT leaders and professionals make the mistake of approaching security in the cloud the same way they approached security in a traditional data center. Migration to the cloud has led to an explosion in resources and users, yet the number of people managing the security of those resources and users hasn't increased. Additionally, self-service capabilities have empowered an entire range of new users. Historically, these capabilities were limited to those who had spent their entire careers operating infrastructure securely. However, the shift to the cloud has transformed the landscape of who can provision and configure infrastructure. This is a wonderful thing, but only if it is coupled with new ways to minimize the resulting risk of misconfiguration.
The incidents making headlines each week are more often than not the result of common misconfigurations. Without a holistic approach to security, you open yourself up to undue risk, mostly caused by:
- Inexperienced users
- A lack of unified visibility across cloud service providers and environments
- Failure to adjust from perimeter-oriented security to configuration-managed security (including identity)
- An unprecedented rate of change, scale, and scope
Mitigating Cloud Security Risks
An important part of mitigating these new risks is understanding how configuration choices impact cloud security. To help with this process, we are going to review some common misconfigurations that impact three key cloud computing areas:
- Identity and Access Management (IAM)
- Network
- Data
Investing in Cloud Security Operations (CloudSecOps) helps ensure that your organization mitigates this risk consistently and continuously. CloudSecOps is the combination of people, processes, and tools that allows organizations to consistently manage and govern cloud services at scale. By focusing on hiring and developing the right people, identifying processes that address the unique operational challenges of the cloud, and automating those processes with the right tools, you can establish a set of best practices for success. One vital element of a CloudSecOps toolkit is software that provides real-time, unified visibility into configuration choices, checks them against security policies, and remediates violations when they occur. InsightCloudSec is exactly this kind of tool. With customers including General Electric, Kroger, Fannie Mae, Discovery, and Autodesk, InsightCloudSec provides a solution for achieving continuous security across cloud and container environments.
1. Identity and Access Management
In the public cloud, IAM is the new perimeter: organizations have reached an intersection of people, devices, and applications that requires security based on identity. The explosion in resources and users is pushing security professionals to reassess their tools and strategies. They have to develop new ways to identify and verify users, human or machine. If a hacker leverages a trusted identity, they can work under the radar to extract company data, with no one the wiser.
This is especially true if a company isn't applying multi-factor authentication (MFA) or risk-based access controls to limit lateral movement after unauthorized access. A study by Centrify and Dow Jones Customer Intelligence showed that CEOs can mitigate the risk of a security breach by reassessing their IAM strategies. According to Verizon's Data Breach Investigations Report, 81% of all hacking-related breaches leverage either stolen or weak passwords. Unfortunately, many organizations suffer from a lack of expertise around IAM in the cloud. Developers and engineers may hesitate to make configuration changes, or may inadvertently make poor choices. IAM should include the organizational policies for managing digital identity as well as the technologies needed to support identity management. Without smart policies, an organization faces enormous risk when IAM is handled incorrectly.
Best Practices = Least Privilege
One example of this risk occurs in a scenario that is common when granting privileges to cloud resources. A security best practice for creating IAM policies is ensuring that any new user or service is granted least privilege, that is, only the permissions required to perform a task. To do this, you need to determine what users need to do and then draft policies around those predefined tasks. For example, when granting permissions for an AWS EC2 instance, start with a minimum set of permissions and grant additional permissions only as necessary. Too often, if the account is created by a developer, it will start with permissions that are too lenient. Sometimes this is due to a lack of understanding; other times it is intentional, so that early work isn't hindered by insufficient permissions, with plans to rescope later.
For example:
{
  "Effect": "Allow",
  "Action": ["*"],
  "Resource": ["*"]
}
In the policy above, the permissions may solve the access issues for a specific user or application, but they also expose the account to needless risk. Additionally, a policy like this one can get lost among hundreds of other policies and be incredibly difficult to locate and eliminate later.
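For contrast, here is a minimal sketch of what a least-privilege version of that policy might look like when attached with boto3. This is an illustration rather than InsightCloudSec functionality, and the user name, account ID, and instance ARN are hypothetical placeholders; the point is that actions and resources are enumerated explicitly instead of wildcarded.

```python
import json
import boto3

iam = boto3.client("iam")

# A scoped-down policy: only the EC2 actions this user actually needs,
# limited to a single (hypothetical) instance rather than "*".
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
        }
    ],
}

# Attach the policy inline to a hypothetical user.
iam.put_user_policy(
    UserName="example-developer",
    PolicyName="ec2-start-stop-single-instance",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```

Scoping permissions this way means that if the credentials are ever compromised, the blast radius is limited to the actions and resources named in the policy.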
With InsightCloudSec, customers can quickly identify IAM choices that violate security and compliance policies. Once identified, a customer can choose to have this automatically trigger an InsightCloudSec Bot (a workflow that automates processes and best practices as defined by the customer). This workflow can chain together a set of actions, such as reconfiguring an IAM policy or driving human intervention, to remediate the issue. For example, with InsightCloudSec's Jira integration, users can create Jira tasks: you can configure a Bot that creates a Jira task populated with detailed information any time InsightCloudSec detects a cloud user account without multi-factor authentication, and assigns it to the person who owns the resource for remediation.
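To make the "detect, then open a ticket" pattern concrete, here is a rough, generic sketch that calls Jira's REST API directly. It is not InsightCloudSec's Bot implementation, and the Jira URL, project key, and credentials are hypothetical.

```python
import requests

JIRA_BASE = "https://example.atlassian.net"   # hypothetical Jira instance
AUTH = ("bot@example.com", "api-token")        # hypothetical credentials

def open_remediation_task(user_name: str, owner: str) -> None:
    """Create a Jira task asking the resource owner to enable MFA."""
    issue = {
        "fields": {
            "project": {"key": "SEC"},  # hypothetical project key
            "summary": f"Enable MFA for cloud user {user_name}",
            "description": f"{user_name} has no MFA device registered. Resource owner: {owner}.",
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=issue, auth=AUTH)
    resp.raise_for_status()
```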
MFA & Verification
Another example of an IAM best practice is requiring MFA on all cloud accounts. While it's just as easy to set up a cloud account without MFA as with it, every account should have MFA enabled. To support this best practice, InsightCloudSec provides an out-of-the-box policy called “Cloud User Accounts Without MFA.” This control identifies cloud user accounts that do not have MFA enabled, and a Bot can be configured to enable (or re-enable) MFA on accounts that are not in compliance with the policy.
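The underlying check is simple to sketch with boto3, as a generic illustration rather than InsightCloudSec's own implementation: list the IAM users and flag any that have no MFA device registered.

```python
import boto3

iam = boto3.client("iam")

def users_without_mfa() -> list:
    """Return IAM user names that have no MFA device registered."""
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                flagged.append(user["UserName"])
    return flagged

print(users_without_mfa())
```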
Root Account Management
And finally, using the root account in cloud service providers is a big “no-no” because it can perform any action within your account and offers no detailed attribution. For example, the cloud account administrator at one of our customers gave four other team members root access. A few days later, the logs showed that the “root user” had deleted 20 instances. That administrator was left wondering which actual person had deleted the instances and why, with no ability to audit those actions.
AWS recommends deleting root user access keys and creating IAM credentials for everyday interactions. InsightCloudSec provides visibility into a customer's credential report to determine if the root account is being used. With InsightCloudSec, customers have visibility across all of their root accounts, including the last time that the account was used, and the count of active/inactive API credentials. And importantly, customers can configure a Bot to remediate policy violations.
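For readers who want to see what this kind of visibility looks like at the API level, here is a rough boto3 sketch (again, not InsightCloudSec's implementation) that pulls the IAM credential report and inspects the root account row.

```python
import csv
import io
import time
import boto3

iam = boto3.client("iam")

# Generating the credential report is asynchronous; poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    if row["user"] == "<root_account>":
        print("Root password last used:", row["password_last_used"])
        print("Root access key 1 active:", row["access_key_1_active"])
        print("Root access key 2 active:", row["access_key_2_active"])
```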
2. Networking in the Cloud
Networking is one of the most common areas where organizations fail to understand how to manage the security of their cloud. This is primarily because the default network security settings in typical cloud providers are not configured to enforce an organization's access rules and security policies. For example, if an organization is working in AWS and launches a virtual machine, AWS will suggest creating a security group. In AWS, a security group acts as a network firewall. However, AWS defaults to suggesting that you leave the new security group completely open, to enable access to the virtual machine that was just created.
If an organization launches a Linux virtual machine (VM) in AWS, it connects to it using Secure Shell (SSH). Again, AWS will suggest leaving SSH open. Leaving the VM open to yourself is not a problem, but AWS will suggest leaving it open to the world, creating a vulnerability. Organizations using InsightCloudSec, however, can identify the vulnerability with an out-of-the-box control designed to find open security groups. With the control “Instance Exposing SSH To World,” InsightCloudSec identifies instances with security groups that have SSH (port 22) open to the world, i.e., 0.0.0.0/0. Since instances can reside in multiple security groups, this policy automates monitoring of every attached security group to discover public access in any of them.
Let's take a quick look at the logic behind this policy: enumerate each instance's attached security groups and flag any ingress rule that allows SSH (port 22) from 0.0.0.0/0.
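InsightCloudSec's own implementation is proprietary, but a simplified sketch of the same check with boto3 might look like the following; the region is a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

def rule_opens_port_to_world(permission: dict, port: int) -> bool:
    """True if an ingress rule covers the port and allows 0.0.0.0/0."""
    from_port = permission.get("FromPort")
    to_port = permission.get("ToPort")
    covers_port = permission.get("IpProtocol") == "-1" or (
        from_port is not None and from_port <= port <= to_port
    )
    open_to_world = any(
        r.get("CidrIp") == "0.0.0.0/0" for r in permission.get("IpRanges", [])
    )
    return covers_port and open_to_world

# Find security groups with SSH open to the world (use 3389 to check RDP instead).
open_groups = set()
for page in ec2.get_paginator("describe_security_groups").paginate():
    for group in page["SecurityGroups"]:
        if any(rule_opens_port_to_world(p, 22) for p in group["IpPermissions"]):
            open_groups.add(group["GroupId"])

# Flag any instance attached to one of those groups.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if any(sg["GroupId"] in open_groups for sg in instance["SecurityGroups"]):
                print("Instance exposing SSH to world:", instance["InstanceId"])
```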
Next, a customer can take the violations detected by the “Instance Exposing SSH To World” policy and trigger an automated workflow. For example, the workflow could delete the offending SSH rule from the resource's access list using a “schedule deletion” action. InsightCloudSec customers often use this approach because it is so simple and powerful: “anytime SSH is open to the world, close it.”
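The remediation half can be sketched the same way. Assuming the offending rule is a plain 0.0.0.0/0 ingress on port 22 and a hypothetical security group ID, a single revoke call closes it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Close SSH to the world on an offending (hypothetical) security group.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```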
Just as Linux users should close SSH, Windows users should close Remote Desktop Protocol (RDP, port 3389) to the world.
3. Data
Misconfiguring a cloud database, storage container, or search engine can have massive consequences, especially if it contains confidential information. According to the Breach Level Index, this epidemic of misconfigurations has led to the leakage of more than 14 billion data records in the last five years.
Databases, storage containers, search engines, and other cloud data repositories are often incorrectly configured. For example, permissions may be too broad, allowing anyone to access the data. These misconfigurations are often the result of a developer who was unaware of how to properly secure a cloud asset, or of a simple oversight. For example, a developer may have tweaked a storage container configuration as part of troubleshooting, leaving it open to the public. Once the application began working again, they moved on to another project, completely forgetting about the exposed storage container.
There are dozens of situations that can result in changes to cloud service configurations. Organizations are often left vulnerable because they don't have processes in place to prevent, detect, and repair improperly configured cloud data services. How do you avoid exposing your data? For starters, when in doubt, confirm that the default settings provide a secure configuration. Cloud storage services like Amazon S3 buckets are private by default (although this wasn't always the case) and can only be accessed by users who have been explicitly granted access. By default, the account owner and the resource creator are the only users with access to an S3 bucket and key. All cloud provider storage assets have a layer of permissions managed through configuration, yet customers using Microsoft Azure and GCP haven't had as much of an issue with data breaches.
In part, this is because GCP and Azure encrypt stored data by default; it took AWS until November 2017 to add basic protections to S3, its cloud storage service. Because AWS was the first big cloud service provider, Azure and GCP were able to build their tooling with AWS's inefficiencies in mind.
In Amazon’s defense, they have been actively working to help companies avoid breaches caused by misconfiguration. In November 2017, AWS added a number of new Amazon S3 features to augment data protection and simplify compliance. They made it easier to ensure encryption of all new objects, along with monitoring and reporting on their encryption status. AWS also provides guidance on approaches to combat this issue, like the use of AWS Config to monitor for and respond to S3 buckets allowing public access.
As a basic step to avoid data leaks, we recommend taking advantage of native cloud capabilities. Ensure that you prevent unauthorized access and that you are always purposefully using the cloud provider's storage access policies to define access to the objects stored within. Training is critical: make sure your team knows not to open access to the public unless absolutely necessary, and that they understand that incorrectly configured policies can result in the exposure of PII and other sensitive data. The challenge is that many organizations struggle to adopt and enforce best practices consistently. This is why an investment in a cloud security automation tool like InsightCloudSec is a vital additional step.
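As one example of those native capabilities, here is a hedged boto3 sketch that checks and enforces S3's Block Public Access settings on a single bucket; the bucket name is a hypothetical placeholder.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-app-data"  # hypothetical bucket name

# Check whether Block Public Access is already configured on the bucket.
try:
    config = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
except ClientError as err:
    if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
        raise
    config = {}

# Enforce all four settings if any are missing or disabled.
settings = ("BlockPublicAcls", "IgnorePublicAcls", "BlockPublicPolicy", "RestrictPublicBuckets")
if not all(config.get(name) for name in settings):
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={name: True for name in settings},
    )
```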
InsightCloudSec's customers leverage automation to remove public permissions from the access control list where necessary. Customers can also use data storage policies in place of access control lists for finer-grained access control. Consider the earlier example: a developer troubleshooting an issue changes a configuration and, as soon as the application begins working again, moves on to another project. Scenarios like this have caused data breaches in the past, but InsightCloudSec's monitoring detects the misconfiguration and enables real-time remediation, preventing a possible data breach. There are a number of actions a customer can take, and InsightCloudSec provides the flexibility for organizations to choose what works best for them. For example, a “Cleanup Action” will remove all permissions from the Storage Container, and this can be used as a “lockdown” until the issue is resolved.
With a “Lockdown Storage Container Action,” a customer who already knows the format of their Storage Container policy can apply that policy to any Storage Container that isn't in compliance.
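A minimal sketch of that kind of lockdown with boto3, assuming an S3 bucket as the storage container and a hypothetical bucket name, is to reset the ACL to private and remove any bucket policy that might still grant public access:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-exposed-bucket"  # hypothetical bucket name

# Reset the ACL so only the bucket owner has access.
s3.put_bucket_acl(Bucket=BUCKET, ACL="private")

# Remove any bucket policy that might still grant public access.
s3.delete_bucket_policy(Bucket=BUCKET)
```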
No matter what issues you are trying to prevent, the best way to avoid exposing data is common sense. Don’t allow configurations that expose infrastructure containing sensitive data to the public. Organizations moving to or already operating in the cloud need to learn about security best practices while evaluating their options. Bringing on a Cloud Security Posture Management (CSPM) tool like InsightCloudSec will help manage these diverse security concerns. Otherwise, it’s only a matter of time before an unintended change adds you to the growing list of organizations who have to explain to their customers (and often regulators) that their information has been compromised.
The InsightCloudSec approach to minimizing cloud security risk represents a complete shift in how organizations build and deploy applications in the cloud. Adapting to the cloud requires a change in perspective: because of the nature of access to infrastructure, your IT department's approach has to change to meet the needs of this new landscape. With so many users, engineering and otherwise, old processes aren't going to work. The simple truth is that the rate of change, and the dynamic nature of software-defined infrastructure, has outstripped human capacity. When faced with 1,000 problems, even an organization with 100 people dedicated to addressing them simply cannot keep up. By the time your people reach a given problem, it either no longer exists, has shifted in scope, or is no longer a priority.
Companies need to be able to deal with problems and shifting priorities in real time. Previous generations of security professionals were often observers who intervened manually. Today's technology creates issues that require a solution beyond manual action, where automation augments people to provide fast, smart responses at scale. With InsightCloudSec, you can make cloud misconfigurations a thing of the past, now and forever.