Every day brings another headline about a major IT security breach that leaked the personal information of thousands or millions of users. This should give every business pause and make them want to check the security posture of their applications. But how do you even start? Such examination can be especially tricky if your applications are running in the cloud. When the applications were running in your datacenter, you knew what the security risks were and how to protect your customers’ data. But when it comes to the cloud, how should your approach change? Are there cloud tools and tactics available that weren’t available in your on-premises environment?
Fortunately, the fundamental approach to cloud security is not altogether different from your on-premises datacenter. In some regards, things are actually simpler. Your public cloud provider is responsible for the physical security of their datacenter and for the security of the services they provide. But that doesn’t mean you’re off the hook! There exists a line of demarcation, beyond which you become the responsible party. AWS calls this division the Shared Responsibility Model. This post explains what you can do to make your environment and applications more secure in a public cloud setting.
Cloud Security Design Principles
Let’s say you run a retail store chain called Echidna Electronics and recently migrated your e-commerce system, Wallaby, to AWS. It’s composed of a web front-end, an application middle tier, a NoSQL database layer, and a traditional relational database layer. The web front-end and application middle tier are using EC2, with Redis providing a caching layer for customer data. The NoSQL layer is using DynamoDB, and the relational database is using MySQL on RDS. Your job is to ensure the application is secure. Where do you even get started? Time to check out the security design principles!
- Implement a strong identity foundation – Using a central identity provider will lower the management overhead of Wallaby. And you don’t want to give developers or the help desk the keys to the castle, so be sure to grant them the fewest privileges they need to do their job.
- Enable traceability – “Logs or it didn’t happen.” Anytime someone makes a change to Wallaby, or accesses a system component, it should be logged somewhere. And these logs should be immutable and centralized. You don’t want a sneaky hacker erasing your logs to cover their tracks.
- Apply security at every layer – Your security should be like a kid in Minnesota in January: covered in layers of protection. If an attacker makes it through one layer, they should be stymied by the next. Wallaby’s data needs to be nestled securely in its mother’s pouch, protected from the predators that roam the Outback.
- Automate your security – Humans are fallible and forgetful. Manual processes are fraught with inconsistencies and missed steps. It doesn’t matter that Tom the admin meant to remove remote access from the Wallaby web server when he was done troubleshooting, it matters whether he did it or not. By implementing automated controls, your web server stays secured and Tom keeps his job.
- Protect data all the time – ALL THE TIME. That means encrypting at rest and in transit. It means storing your encryption keys in a highly secure way. The more sensitive the data is, the more protection it needs. The Wallaby product catalog might not need to be an encrypted table in your database, but the customer profile data had better be encrypted whenever possible.
- Keep people away from data – There’s no reason Jimmy from the dev team needs full access to all customer data in the database. If Jimmy doesn’t have access, then Jimmy doesn’t have to worry about his access being compromised. Automated processes in Wallaby can handle the processing of data. Let’s keep the people out of this.
- Prepare yourself – At some point, you’re going to get attacked and hacked. That’s almost a guarantee. When Wallaby is being hacked, you need to have the proper processes and procedures to detect, assess, and mitigate the attack as quickly as possible. Sure, you can automate some of this, but your reaction will still rely on people following the process. To this end, you need to practice, practice, practice, and practice again.
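To make the least-privilege principle concrete, here’s a minimal sketch in Python. The function name, table ARN, and the exact set of allowed actions are illustrative assumptions, not anything prescribed by AWS; the idea is simply that a developer role gets read-only access to one table and nothing else.

```python
import json


def developer_readonly_policy(table_arn: str) -> str:
    """Build a least-privilege IAM policy document (as a JSON string)
    granting read-only access to a single DynamoDB table.
    The action list is a hypothetical minimal set for illustration."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyCatalogAccess",
                "Effect": "Allow",
                "Action": [
                    "dynamodb:GetItem",
                    "dynamodb:Query",
                    "dynamodb:DescribeTable",
                ],
                # Scope the statement to one table, never "Resource": "*"
                "Resource": table_arn,
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

A document like this could then be attached to a role via the IAM console, CloudFormation, or an API call; the point is that the blast radius of a compromised developer credential is limited to reads on one table.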
Top 3 Principles to Focus on When Securing Your Cloud-Based Application
Those are some fancy design principles, and they’ll get you headed in the right direction. Within the context of your application, there are also three primary areas to focus on:
Identity and Access Management (IAM)
When it comes to IAM, you’ll need to delve into two primary spheres. The first is how users are authenticated into your system and granted access. The customers who use the Wallaby e-commerce system are using an OAuth implementation that leverages Amazon Cognito. This makes it super easy for new users to create an account and start shopping. The second sphere is how internal users are granted access to the backend systems that comprise Wallaby. Those credentials are managed by AWS IAM. In this case, users should rely on multi-factor authentication (MFA) whenever possible. Resources, like those front-end EC2 instances, should be using their Instance Profile to gain access to other systems in the application. Oh, and for the love of all that is holy: enable MFA for the Root User, delete its Access Keys, and change its password to something impossible. Go ahead, I’ll wait…
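You can even audit that root-user hygiene programmatically. The sketch below assumes a hypothetical helper, `root_account_issues`, that interprets the `SummaryMap` dict returned by IAM’s `get_account_summary` API; the live boto3 call is left as a comment so the function itself stays self-contained.

```python
def root_account_issues(summary_map: dict) -> list:
    """Flag root-account hygiene problems from an IAM account summary.
    Keys match those in iam.get_account_summary()["SummaryMap"]."""
    issues = []
    if summary_map.get("AccountMFAEnabled", 0) != 1:
        issues.append("Root user has no MFA device enabled")
    if summary_map.get("AccountAccessKeysPresent", 0) != 0:
        issues.append("Root user still has active access keys")
    return issues


# Live usage (requires boto3 and AWS credentials):
# import boto3
# iam = boto3.client("iam")
# print(root_account_issues(iam.get_account_summary()["SummaryMap"]))
```

Running a check like this on a schedule (or as an AWS Config rule) turns the “go fix your root user” advice into an automated control instead of a one-time chore.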
Infrastructure Protection
The proper protection of your infrastructure takes the layered security approach to a whole new level. Wallaby has a web front-end, but that’s not where customers should be hitting it. That front-end should sit behind an Elastic Load Balancer with Security Groups, which in turn should sit behind a Web Application Firewall. And if that isn’t enough, you can place the WAF behind another cloud service that provides DDoS protection and filters out known bad actors and common injection attacks. Those front-end EC2 instances should use Security Groups as well, and should sit in a separate set of subnets from the application and database layers. Security Groups ensure that only certain components of Wallaby can talk to each other. There’s no reason the front-end servers should talk directly to DynamoDB, so that traffic is not allowed. As another layer, the egress traffic of all non-internet-facing servers can be pushed through a pair of IDS/IPS servers to monitor your traffic and make sure unexpected flows aren’t exfiltrating data out of your application. Naturally, you’re going to automate the configuration of all this security using CloudFormation so that you’ll never misplace a Security Group rule again.
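As a small illustration of that tier-to-tier locking, the sketch below builds the `IpPermissions` payload used by EC2’s `authorize_security_group_ingress` API. The helper name and the specific IDs are assumptions; the pattern is the point: a tier accepts traffic on one port only from the Security Group in front of it, never from `0.0.0.0/0`.

```python
def tier_ingress_rules(source_sg_id: str, port: int) -> list:
    """Build an IpPermissions payload allowing TCP traffic on a single
    port, and only from members of another security group (e.g. the
    application tier accepting traffic solely from the web tier's SG)."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing a source SG instead of a CIDR means no public
            # IP range can ever match this rule.
            "UserIdGroupPairs": [{"GroupId": source_sg_id}],
        }
    ]


# Live usage (requires boto3 and AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-app-tier-id",  # hypothetical app-tier SG
#     IpPermissions=tier_ingress_rules("sg-web-tier-id", 8080),
# )
```

In practice you would express these same rules in CloudFormation rather than ad-hoc API calls, so the whole rule set is versioned and repeatable.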
Data Protection
The information living in your infrastructure needs to be protected, too. Data in transit should always be encrypted. Thanks to TLS, the effort is trivial. So, DO IT. Anything using unencrypted traffic over port 80 should be dropped with extreme prejudice. Data at rest should also be encrypted, and this is where the public cloud really shines. Adding encryption to RDS or DynamoDB is as simple as enabling KMS and ticking a box. By using custom AMIs that are already encrypted, you can encrypt both the boot volume and the data volume of all of Wallaby’s EC2 instances. For an added layer of security, you can take advantage of the DynamoDB Encryption Client, which applies encryption before the data leaves the client and stores it encrypted in DynamoDB. For really sensitive data types, you’re going to want this level of end-to-end encryption.
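That “ticking a box” really does come down to one extra parameter. The sketch below builds keyword arguments for DynamoDB’s `create_table` call with its `SSESpecification` parameter; the table name, key schema, and KMS key ARN are illustrative assumptions for a customer-profile table like Wallaby’s.

```python
def encrypted_table_spec(table_name: str, kms_key_arn: str) -> dict:
    """Build create_table keyword arguments for a DynamoDB table
    encrypted at rest with a customer-managed KMS key. The key schema
    here is a hypothetical single-key layout for illustration."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "CustomerId", "AttributeType": "S"}
        ],
        "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
        # This block is the "tick the box" part: server-side encryption
        # backed by a KMS key you control (and can audit/rotate).
        "SSESpecification": {
            "Enabled": True,
            "SSEType": "KMS",
            "KMSMasterKeyId": kms_key_arn,
        },
    }


# Live usage (requires boto3 and AWS credentials):
# import boto3
# dynamodb = boto3.client("dynamodb")
# dynamodb.create_table(**encrypted_table_spec(
#     "CustomerProfiles",
#     "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
# ))
```

Note that this covers encryption at rest on the server side; the DynamoDB Encryption Client mentioned above is a separate, additional layer applied before data ever leaves your application.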
These are just a few examples of how you can properly secure your application in a public cloud context. To learn more about the Security Pillar of the AWS Well-Architected Framework, check out the official whitepaper. Or if you’d like to see how your application stacks up, please go ahead and schedule a FREE Well-Architected Review with a Certified AWS Solutions Architect from Anexinet.
Director, Cloud Solutions and Microsoft MVP: Cloud (Azure/Azure Stack) & DC Mgmt