Tuesday, May 21, 2019

The Greatest Risk Is Not Doing a Risk Assessment

Recently, I had an interesting discussion with the Dutch members of Parliament about cybersecurity. The politicians wanted to know my views on 5G security and what I thought about a cybersecurity tender put out by an association of 380 government municipalities. 

The tender aimed to acquire security products such as firewalls, endpoint protection systems, and CASB (Cloud Access Security Broker) products, possibly from three different security vendors. 

I told them that this would be the wrong way to approach a cybersecurity tender. Protection from cyber threats is not just about buying siloed point products which provide discrete solutions to single problems. Nor does it depend on simply replacing one set of products with a slightly cheaper version. 



Effective cybersecurity requires a holistic strategy that begins with creating a risk assessment.  

The first task of a risk assessment is to identify the crown jewels of the business — the key assets and data that must be best protected. This could be intellectual property, customers’ credit card details, or personally identifiable information. It could be confidential medical information or sensitive industrial data. 

The next step is to assess the risks of cyberattacks that threaten those important assets. A pragmatic approach to creating a risk assessment is to gather 10 to 15 employees from departments across your organization into a room and brainstorm the cybersecurity risks in the business. At the same time, the employees should consider how likely these risks are to materialize.  

When I was chief information security officer (CISO) at a hosting company, we created a useful risk assessment plan through a series of brainstorms where we assigned a value to each risk. The likelihood of a risk was rated from one to five, one being the lowest and five the highest. Then we evaluated the impact of the risk occurring, again from one to five. The risk value was calculated by simply multiplying the two numbers together.

Over the course of several workshops, we came up with a total of 225 cybersecurity risks. Some of them had a risk value of over 20 — they were likely to happen and could badly affect the company. There were also less urgent risks. 
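The likelihood-times-impact scoring described above can be sketched in a few lines of Python. The risk entries and scores below are illustrative examples, not the actual register produced in those workshops:

```python
# Illustrative risk register: (risk, likelihood 1-5, impact 1-5).
# These entries are hypothetical examples, not the actual 225 risks.
risks = [
    ("Departing employee keeps network credentials", 4, 5),
    ("Power loss in a data centre", 2, 5),
    ("Misconfiguration leaves data unprotected", 5, 5),
    ("Phishing email reaches an employee inbox", 5, 2),
]

# Risk value = likelihood x impact; sort so the board sees the worst first.
scored = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in risks),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, value in scored:
    flag = "URGENT" if value > 20 else "review"
    print(f"{value:>2}  [{flag}]  {name}")
```

Ranking the products this way makes the board's trade-off concrete: resources go to the top of the sorted list first.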

The threats we identified included things such as an employee leaving the company and taking their username and password with them so they could access the network at will. Or the possibility of a loss of power in a data centre that restricted the availability of data. Another risk could be a misconfiguration of the system leading to data being left unprotected. 

Once those risk values have been calculated, it is up to the board of directors to decide what resources they are prepared to dedicate to protecting against these threats. That might mean taking measures against the top 15 threats, with less attention paid to less harmful threats. 

The beauty of creating risk values is that it allows the company’s board to take decisions rather than the CISO. Managing risk is, after all, one of the board’s core responsibilities.

We judged the chance of an employee leaving with login details to be quite high, so we put in place a measure to ensure that any departing staff member had to visit the IT department first to have their username and password cancelled. They could not be signed off by HR without producing a document from IT showing they had done this. While this introduces bureaucracy into the system, it helps reduce the threat of hacking. This is the kind of trade-off that each company’s board of directors must make.  

Another risk-reducing solution could be enforcing two-factor authentication for sensitive data. This has a cost and can slow things down. Again, it is the job of the board of directors to evaluate the risks and see whether the solutions are warranted. 

Unfortunately, in today’s fast-moving world, there are still too few organizations that carry out a decent risk assessment for their cybersecurity. Though, to be fair, the idea is gradually catching on. 

Cybersecurity has evolved through piecemeal steps taken to tackle specific problems as they arose. Over the past 10 years, this has ballooned so much that each organization has an average of 34 security point products in place, each one creating its own little silo. As a result, CISOs seek individual replacements for their firewall or anti-virus software. But this just threatens to further complicate their cybersecurity framework. 

Only a well-constructed risk assessment will allow all concerned — from the CISO and IT staff to the board of directors — to gain a clear view of what’s at stake when it comes to protecting their organization from a world of evolving threats.  

Hopefully, the municipalities of the Netherlands — and every other organization — will understand that the greatest risk they face is failing to do a risk assessment. 

Friday, April 12, 2019

8 Azure Security Best Practices


As a natural extension of Microsoft’s on-premises offerings, Azure cloud is enabling hybrid environments. In fact, 95% of the Fortune 500 is using Azure. But there are some common misconceptions when it comes to security.

Oftentimes, organizations jump into Azure with the false belief that the same security controls that apply to AWS or GCP also apply to Azure. This is simply not the case. Outlined below are some common challenges, along with security best practices, to help you mitigate risks and keep your Azure environment secure.



1. Visibility


According to our research, the average lifespan of a cloud resource is two hours and seven minutes. Many companies have environments that involve multiple cloud accounts and regions. This leads to decentralized visibility and makes it difficult to keep track of assets. Since you can’t secure what you can’t see, detecting risks becomes a challenge.

Best Practice: Use a cloud security approach that provides visibility into the volume and types of resources (virtual machines, load balancers, security groups, gateways, etc.) across multiple cloud accounts and regions through a single pane of glass. Having visibility and an understanding of your environment enables you to implement more granular and contextual policies, investigate incidents, and reduce risk.

While Microsoft’s cloud native security products, such as Azure Security Center, work well within Azure, monitoring at scale or across clouds requires third-party visibility from platforms such as RedLock from Palo Alto Networks.

2. Privileges for Active Directory global admin accounts


Your Azure Active Directory user accounts with admin privileges can do the most harm if unauthorized parties gain access to them. Administrators often forget to limit the scope of what Azure AD users can do.

Best Practice: Not even your top admins should have access to the global admin role the vast majority of the time. Make sure you’re creating limited-scope roles in RBAC and applying them to resources only when needed. Azure AD users must be protected by multifactor authentication (MFA).

3. Privilege and scope for all users


As with #2 above, it is way too easy to allow your users to have too much privilege. Often, it’s done out of expediency or because you just want to solve that production issue at 3:00 a.m.

Best Practice: Make use of RBAC, ensuring that you limit the permissions needed by entities for a specified role and to a specific scope (subscription, resource group or individual resources). Permissions are only part of the story, however. Make sure you’re coupling RBAC with Azure Resource Manager to assign policies for controlling creation and access to resources and resource groups.
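The key property of scoped roles is that a role granted at one level applies only to that scope and everything beneath it. A minimal sketch of that check, with hypothetical users, roles, and scope paths (loosely modelled on Azure RBAC scope strings, not taken from a real tenant):

```python
# A role assigned at a scope applies to that scope and everything beneath it.
# All users, roles, and scope paths here are hypothetical.
assignments = {
    # (user, role) -> scope the role was granted at
    ("alice", "Contributor"): "/subscriptions/sub1/resourceGroups/web",
    ("bob", "Reader"): "/subscriptions/sub1",
}

def is_allowed(user: str, role: str, resource_scope: str) -> bool:
    """True if the user holds the role at the resource's scope or above it."""
    granted_at = assignments.get((user, role))
    if granted_at is None:
        return False
    return resource_scope == granted_at or resource_scope.startswith(granted_at + "/")

print(is_allowed("alice", "Contributor",
                 "/subscriptions/sub1/resourceGroups/web/vm1"))  # inside her scope
print(is_allowed("alice", "Contributor",
                 "/subscriptions/sub1/resourceGroups/db/vm2"))   # outside her scope
```

Because bob's Reader role was granted at the subscription, it reaches every resource group underneath; alice's Contributor role stops at the one resource group it was assigned to.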

4. Authentication


Lost or stolen credentials are a leading cause of cloud security incidents. It is not uncommon to find access credentials to public cloud environments exposed on the internet. Organizations need a way to detect account compromises.

Best Practice: Strong password policies and multifactor authentication should always be enforced. Azure provides several ways to implement MFA protection on your user accounts, but the simplest of these is to turn on Azure MFA by changing the user state.
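What "strong password policy" means in code is just a set of checks applied before a credential is accepted. The rules below (length and character classes) are example values for illustration, not Azure's actual policy:

```python
import re

# Illustrative password policy; the thresholds are example values,
# not Azure's actual requirements.
def meets_policy(password: str) -> bool:
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_policy("correct-Horse-battery-9"))  # long, mixed character classes
print(meets_policy("hunter2"))                  # too short, too simple
```

Even a password that passes every check here is still one stolen credential away from compromise, which is why MFA has to sit on top of it.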

5. Access keys


As mentioned above, lost or stolen credentials are a leading cause of security incidents. Unfortunately, admins often assign overly permissive access to Azure resources, and the keys used to manage those resources are often given overly permissive privileges. At all times, you should protect those keys from accidental or malicious leaking.

Best Practice: Storing credentials in application source code or configuration files will create the conditions for compromise. Instead, store your API keys, application credentials, passwords and other sensitive credentials in Azure Key Vault.
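Before secrets can be moved into Key Vault, you first have to find where they are hardcoded. A rough sketch of such a scan; the patterns and the sample config are illustrative, and a real scanner would cover far more cases:

```python
import re

# Rough patterns for credentials left in source or config files.
# These regexes are illustrative; real secret scanners go much further.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|pwd)\s*[=:]\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)(api[_-]?key|access[_-]?key)\s*[=:]\s*['\"][^'\"]+['\"]"),
]

def find_hardcoded_secrets(text: str) -> list:
    """Return the lines that look like they embed a credential."""
    return [
        line.strip()
        for line in text.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

# Hypothetical config fragment with two embedded credentials.
config = """
db_host = "db.example.internal"
password = "s3cr3t!"
api_key: "AKIA-EXAMPLE"
"""
for hit in find_hardcoded_secrets(config):
    print("possible hardcoded secret:", hit)
```

Every hit is a candidate for migration to a vault reference, so the secret itself never lives in the repository.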

6. Broad IP ranges for security groups and unrestricted outbound traffic


Network Security Groups (NSGs) are like firewalling mechanisms that control traffic to Azure VMs and other compute resources. Unfortunately, admins often assign NSGs IP ranges that are broader than necessary. Adding to the concern, 85% of resources associated with security groups don’t restrict outbound traffic at all.

Research from Unit 42’s cloud intelligence team also found an increasing number of organizations were not following network security best practices and had misconfigurations or risky configurations. Industry best practices mandate that outbound access should be restricted to prevent accidental data loss or data exfiltration in the event of a breach.

Best Practice: Limit the IP ranges you assign to each security group in such a way that everything networks properly, but you aren’t leaving more open than you need. Additionally, make sure you segment your virtual networks into subnets to control routing to VMs. Finally, ensure that you are restricting or disabling SSH and RDP access to VMs.
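Auditing for overly broad source ranges can be sketched with the standard library’s `ipaddress` module. The rule names, CIDRs, and the breadth threshold below are illustrative; pick a threshold that reflects how your networks are actually carved up:

```python
import ipaddress

# Illustrative NSG inbound rules: (rule name, source CIDR). Names are made up.
rules = [
    ("allow-office-ssh", "203.0.113.0/24"),
    ("allow-anything", "0.0.0.0/0"),
    ("allow-partner-api", "198.51.100.0/28"),
]

# Flag any rule whose source range is wider than a /16. This threshold is an
# example; tune it to your own network layout.
MAX_ADDRESSES = ipaddress.ip_network("10.0.0.0/16").num_addresses  # 65,536

too_broad = [
    (name, cidr)
    for name, cidr in rules
    if ipaddress.ip_network(cidr).num_addresses > MAX_ADDRESSES
]
print(too_broad)
```

Here only the 0.0.0.0/0 rule is flagged, which is exactly the "open to the entire internet" pattern the audit is meant to catch.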

7. Reviewing audit logs


Organizations need visibility into user activities to reveal indicators of account compromise, insider threats and other risks. The virtualization underpinning cloud networks, and the ability to use the infrastructure of a very large and experienced third-party vendor, afford agility: privileged users can make changes to the environment as needed. The downside is the potential for insufficient security oversight. To avoid this risk, user activities must be tracked to identify account compromises and insider threats and to ensure that a malicious outsider hasn’t hijacked an account. Fortunately, businesses can effectively monitor users when the right technologies are deployed.

Best Practice: Monitoring activity logs is key to understanding what’s going on with your Azure resources. Use anomaly detection, such as RedLock’s ML-based UEBA, to detect unusual user activity, excessive login failures, or account hijacking attempts, all of which could be indicators of account compromise.
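The simplest of those signals, excessive login failures, is just a count over the audit log. A minimal sketch with a made-up event stream and an example threshold (real sign-in events would come from Azure activity logs, and real UEBA is far more sophisticated):

```python
from collections import Counter

# Illustrative audit log: (user, outcome). In practice these events would
# come from Azure sign-in/activity logs; these entries are made up.
events = [
    ("alice", "failure"), ("alice", "success"),
    ("mallory", "failure"), ("mallory", "failure"),
    ("mallory", "failure"), ("mallory", "failure"),
    ("bob", "success"),
]

FAILURE_THRESHOLD = 3  # example cut-off for "excessive" failures

failures = Counter(user for user, outcome in events if outcome == "failure")
suspicious = [user for user, count in failures.items() if count >= FAILURE_THRESHOLD]
print(suspicious)  # accounts worth investigating
```

A fixed threshold like this is the crudest possible detector; the value of ML-based approaches is learning what "normal" looks like per user instead of hardcoding a number.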

8. Patch VMs


It is your responsibility to ensure the latest security patches have been applied to hosts within your environment. The latest research from Unit 42 provides insight into a related problem. Traditional network vulnerability scanners are most effective for on-premises networks but miss crucial vulnerabilities when they’re used to test cloud networks.

Best Practice: Make sure hosts are frequently patched and apply any necessary hotfixes that are released by your OEM vendors. Also, ensure that new VM images are created with the latest patches and updates for that OS.
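Checking for stale hosts is a comparison of each host’s patch level against the newest rollup. A trivial sketch; the hostnames and (year, month) patch levels are invented for illustration:

```python
# Illustrative patch-level check: hostnames and patch dates are made up.
latest_patch = (2019, 4)  # (year, month) of the newest OS patch rollup

hosts = {
    "web-01": (2019, 4),
    "web-02": (2019, 1),
    "db-01": (2018, 11),
}

# Tuple comparison orders (year, month) chronologically.
stale = sorted(host for host, patch in hosts.items() if patch < latest_patch)
print(stale)  # hosts missing the latest patches
```

Feeding a report like this back into the VM image pipeline is what keeps newly launched instances from being born already behind on patches.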

Azure recently released Azure CIS 1.1 benchmarks, so if Azure is a part of your strategy, I highly encourage you to implement the new benchmarks. RedLock supports Azure CIS 1.0, and we look forward to supporting 1.1 in the near future.