The headlong rush to the cloud continues to accelerate, promising increased efficiency, flexibility and security, but CSOs are not off the hook when it comes to fortifying the privacy and security of their sensitive data.
During the past five years, cloud and security have been generally siloed, but more recently they’ve begun to converge as cloud and hybrid cloud infrastructures have grown in popularity. Partnerships with AWS or Microsoft do provide a high level of security, but the most common vulnerabilities sit on the customer’s side of that relationship.
During a cloud migration, it’s critical to remember that the security protocols that work in the enterprise will not necessarily work in the cloud. A strong security posture will include elements of people, process and technology, and it’s the “people” and “process” categories that companies most often miss during a cloud migration.
Cloud providers like Microsoft Azure and Amazon Web Services have brought cloud computing to the next level. Each cloud provider brings its own unique set of security capabilities to help prevent perimeter breaches and compromise.
Would you believe that even this state-of-the-art security can fail? The technology is solid, but the people and processes are where the real concerns lie, especially when migrating from your data center to the cloud.
Most organizations start by taking a few applications to the cloud; perhaps they are home-grown applications built on open standards. Traditionally, these applications have been protected behind the corporate firewall, accessible only after authentication or from within the company’s four walls. Then you decide you want to make the applications more accessible and easier to use, and you begin hosting them in the cloud.
When you take an application to the cloud, you need to adopt different standards for vulnerability detection and management. Many applications are insecure by default because a security-as-code methodology was never adopted. Imagine how many backdoors potentially exist in the code or the protocols themselves, even with a strong authentication and authorization mechanism.
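To make “security as code” concrete, here is a minimal sketch of the kind of gate that could run in a build pipeline: it fails the build when a pinned dependency matches an internal advisory list. The package names and advisory data here are hypothetical; a real pipeline would pull from a live vulnerability feed.

```python
"""Minimal security-as-code gate: fail the build if a pinned dependency
appears on an advisory list. The advisory data below is hypothetical --
in practice it would come from a vulnerability feed."""

import sys

# Hypothetical advisories: package name -> known-vulnerable versions.
KNOWN_VULNERABLE = {
    "example-auth-lib": {"1.0.2", "1.0.3"},
    "example-json-parser": {"2.1.0"},
}

def parse_requirements(path):
    """Yield (package, version) pairs from a pip-style requirements file."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, _, version = line.partition("==")
            yield name.strip().lower(), version.strip()

def main(path):
    findings = [
        (name, version)
        for name, version in parse_requirements(path)
        if version in KNOWN_VULNERABLE.get(name, set())
    ]
    for name, version in findings:
        print(f"VULNERABLE: {name}=={version}")
    # A non-zero exit code fails the CI stage, blocking the deployment.
    sys.exit(1 if findings else 0)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
```

The point is not this particular script but the pattern: the security check lives in code, runs on every build, and cannot be skipped by a busy engineer.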
Before taking an application to the cloud, double down on assessing it for potential vulnerabilities. Applications, however, are not the worst of your worries.
To help put things in context, imagine the simple scenario of remote access. Best practice in an on-premises data center is to disable remote access to servers, especially for traffic coming from the internet. Management of servers is often left to virtualization consoles, which are not available for workloads hosted in clouds like Azure and AWS. We are left with traditional remote access protocols, which are not inherently insecure. The problem is that security is only effective when layered appropriately.
Let’s say you have an employee who uses the same password for everything, or perhaps your IT organization neglected to close remote access on a server that was migrated to the cloud. For the record, this happens all the time, and just one workload out of hundreds is enough. It is all but guaranteed that an attacker will find the open port and hammer the protocol with stolen and brute-forced credentials until they gain access. Once in, the attacker will attempt to elevate privileges, move laterally through the environment and establish command and control.
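That open port is exactly the kind of thing a short audit script can catch before an attacker does. Below is a sketch using AWS’s boto3 SDK that flags security groups exposing SSH or RDP to the entire internet; the region and port list are assumptions to adapt to your own environment.

```python
"""Sketch: flag AWS security groups that expose remote access ports to the
internet. Assumes boto3 is installed and credentials are configured; the
region and port list are illustrative."""

import boto3

REMOTE_ACCESS_PORTS = {22, 3389}  # SSH and RDP

def find_exposed_groups(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    exposed = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            from_port = rule.get("FromPort")
            if from_port is None:  # e.g. "all traffic" rules have no port range
                continue
            to_port = rule.get("ToPort", from_port)
            ports_hit = [p for p in REMOTE_ACCESS_PORTS if from_port <= p <= to_port]
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if ports_hit and open_to_world:
                exposed.append((group["GroupId"], ports_hit))
    return exposed

if __name__ == "__main__":
    for group_id, ports in find_exposed_groups():
        print(f"{group_id} exposes ports {ports} to 0.0.0.0/0")
```

Run on a schedule, a check like this turns “someone forgot to close a port” from a months-long exposure into an alert measured in minutes.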
Generally, applications go through vulnerability scanning before being taken to the cloud. As a best practice, most organizations allow only unique administrator accounts, protected with multi-factor authentication, to remotely access servers and applications.
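In AWS, that control can be enforced in policy rather than left to convention. The sketch below attaches AWS’s documented “deny unless MFA” pattern to a group of administrator accounts; the group name (“cloud-admins”) is an assumption for illustration.

```python
"""Sketch: attach an inline policy that denies actions unless the caller
authenticated with MFA. The group name is an assumption; adapt it to your
own account structure."""

import json
import boto3

# Standard AWS pattern: deny everything except MFA self-enrollment when no
# MFA is present. BoolIfExists also catches requests where the MFA context
# key is absent entirely (e.g. long-lived access keys).
DENY_WITHOUT_MFA = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptMFASetupIfNoMFA",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "sts:GetSessionToken",
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="cloud-admins",  # assumed group of admin accounts
    PolicyName="require-mfa",
    PolicyDocument=json.dumps(DENY_WITHOUT_MFA),
)
```

With the policy in place, an administrator account with a stolen password is still locked out of everything except enrolling an MFA device.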
The point to remember is that most environments are very complex, and without clearly defined processes a step will get missed. People make mistakes; processes give them something to follow that reduces error. In fact, organizations should automate those processes as often as possible to limit the possibility of compromise, as in the sketch below.
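As one illustration of that principle, the detection script above can be extended from flagging to fixing: a scheduled job that automatically revokes any world-open remote access rule closes the window between a mistake and its exploitation. This is a sketch under the same boto3 assumptions; a cautious team would run it in report-only mode before letting it modify rules.

```python
"""Sketch: automated remediation -- revoke world-open SSH/RDP rules as they
are found, so human error is corrected without waiting on a ticket queue.
Assumes credentials with ec2:RevokeSecurityGroupIngress permission."""

import boto3

REMOTE_ACCESS_PORTS = {22, 3389}  # SSH and RDP

def revoke_world_open_remote_access(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            from_port = rule.get("FromPort")
            if from_port is None:
                continue
            to_port = rule.get("ToPort", from_port)
            if not any(from_port <= p <= to_port for p in REMOTE_ACCESS_PORTS):
                continue
            world_ranges = [
                r for r in rule.get("IpRanges", [])
                if r.get("CidrIp") == "0.0.0.0/0"
            ]
            if not world_ranges:
                continue
            # Revoke only the world-open portion of the rule, leaving any
            # narrower CIDR ranges in place.
            ec2.revoke_security_group_ingress(
                GroupId=group["GroupId"],
                IpPermissions=[{
                    "IpProtocol": rule["IpProtocol"],
                    "FromPort": from_port,
                    "ToPort": to_port,
                    "IpRanges": world_ranges,
                }],
            )
            print(f"Revoked 0.0.0.0/0 on ports {from_port}-{to_port} "
                  f"in {group['GroupId']}")

if __name__ == "__main__":
    revoke_world_open_remote_access()
```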