Cloud security best practices

This is a simplified list of security and configuration best practices to follow and use as a reference:

Design for failure: the concept of designing for failure also maps to security practices, aligning with the “Defense in Depth” strategy: by implementing multiple layers of security, if one layer fails, others are in place to prevent a breach. For example: using a combination of security groups and network access control lists (NACLs) in AWS means that even if an attacker gets through one layer, they would still need to breach the other layers to access sensitive data.
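The layered model above can be sketched in a few lines. This is an illustrative simulation, not a cloud API: the rule dictionaries and CIDR values are mock data, and the key idea is that traffic must be permitted by every layer, so one failing layer still blocks the request.

```python
from ipaddress import ip_address, ip_network

# Mock rules for two independent layers (e.g., a NACL and a security group).
# Shapes and values are illustrative, not a real AWS API response.
nacl_rules = [{"cidr": "10.0.0.0/16", "port": 443, "allow": True}]
sg_rules = [{"cidr": "10.0.1.0/24", "port": 443, "allow": True}]

def layer_permits(rules, src_ip, port):
    """Return True if any rule in this single layer allows the traffic."""
    return any(
        r["allow"] and r["port"] == port and ip_address(src_ip) in ip_network(r["cidr"])
        for r in rules
    )

def allowed(src_ip, port):
    """Defense in depth: traffic passes only if EVERY layer permits it."""
    return all(layer_permits(layer, src_ip, port) for layer in (nacl_rules, sg_rules))

print(allowed("10.0.1.5", 443))  # True: permitted by both layers
print(allowed("10.0.2.5", 443))  # False: NACL allows it, but the SG layer blocks it
```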

Embrace microservices architecture: microservices architecture allows applications to be built as a collection of loosely coupled, independently deployable services. This architecture provides several advantages, including scalability and isolation, meaning a failure in one service doesn’t directly impact others. This approach aligns with the “Security by Design” principle, as each microservice can be designed with its own security controls, reducing the blast radius in case of a breach. For example: instead of building a monolithic application, a cloud engineer could design an application as a series of microservices, each with its own security controls. If one microservice is compromised, the effect on the rest of the system can be minimized.

Ensure data encryption: encryption at rest and in transit is a critical part of cloud security. Data should be encrypted when stored (at rest) and when it’s sent across a network (in transit) to protect it from unauthorized access or tampering. This aligns with the ISO 27001 standard’s “Cryptography” control, where information shall be encrypted when being stored or transmitted. For example: a cloud engineer might ensure that all data stored in cloud storage (e.g., Amazon S3, Google Cloud Storage) is encrypted at rest using keys managed via a service like AWS Key Management Service (KMS). Additionally, the engineer might enforce the use of HTTPS (HTTP over SSL/TLS) for all communications to encrypt data in transit.
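One widely used way to enforce encryption in transit on S3 is a bucket policy that denies any request not made over TLS (the `aws:SecureTransport` condition key). The sketch below just builds that policy document as JSON; the bucket name is a placeholder, and applying it would be done via the console, CLI, or SDK.

```python
import json

def tls_only_policy(bucket):
    """Build an S3 bucket policy that denies all non-HTTPS requests."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            # Both the bucket itself and every object in it.
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            # aws:SecureTransport is "false" when the request is not over TLS.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

# "my-example-bucket" is a placeholder name.
print(json.dumps(tls_only_policy("my-example-bucket"), indent=2))
```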

Continuously monitor and audit: continual monitoring and auditing are key to identifying and responding to security threats promptly. Cloud providers offer tools for collecting logs and monitoring system activity that can detect suspicious behavior. For example: tools like AWS CloudTrail or Azure Monitor can be used to detect unusual API calls or anomalous activity that could indicate a security threat.
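A minimal sketch of the kind of log-based flagging this enables, run over CloudTrail-style records. The event shapes, action list, and sample data are all illustrative; a real deployment would query CloudTrail, Azure Monitor, or a SIEM instead of in-memory dictionaries.

```python
# Actions that commonly indicate tampering (e.g., an attacker disabling
# audit logging). This list is illustrative, not exhaustive.
SENSITIVE_ACTIONS = {"DeleteTrail", "StopLogging", "PutUserPolicy"}

def flag_suspicious(events):
    """Return the names of events that warrant investigation."""
    flagged = []
    for e in events:
        if e.get("eventName") in SENSITIVE_ACTIONS or e.get("errorCode") == "AccessDenied":
            flagged.append(e["eventName"])
    return flagged

sample = [
    {"eventName": "ListBuckets"},
    {"eventName": "StopLogging"},                      # audit logging disabled
    {"eventName": "GetObject", "errorCode": "AccessDenied"},  # repeated denials may signal probing
]
print(flag_suspicious(sample))  # ['StopLogging', 'GetObject']
```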

Optimize for cost: while this may not directly relate to security, ensuring resources are optimized can prevent over-provisioning, which might expose unnecessary vectors for attack. Plus, cloud cost optimization often includes removing unused resources, reducing the attack surface area. For example: an unused and unmonitored EC2 instance can be a security risk if it gets compromised. Regular cost optimization measures would flag this unused instance for removal.
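The flagging step can be sketched as a simple scan over utilization metrics. The instance IDs, CPU samples, and threshold below are mock data; in practice these figures would come from CloudWatch or an equivalent monitoring service.

```python
def idle_instances(metrics, threshold=2.0):
    """Flag instances whose average CPU stays below the threshold (percent)."""
    return [iid for iid, samples in metrics.items()
            if sum(samples) / len(samples) < threshold]

# Mock CPU utilization samples per instance.
mock_cpu = {
    "i-web-01": [35.0, 42.1, 38.9],
    "i-forgotten": [0.4, 0.3, 0.5],  # unused instance: removal candidate
}
print(idle_instances(mock_cpu))  # ['i-forgotten']
```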

Stay current: security threats evolve rapidly, and new vulnerabilities are discovered regularly. Staying current on the latest threats and security best practices helps ensure your cloud environment remains secure. For example: regularly reviewing updates from cloud providers and security agencies can alert engineers to new threats (like zero-day exploits) or newly discovered vulnerabilities in systems they use.

Enforce Multi-Factor Authentication (MFA) across all cloud platforms: MFA provides an extra layer of protection that helps secure accounts even if passwords are compromised. This aligns with the “Access Control” section of ISO 27001. For example: an engineer would enforce MFA for all IAM users in AWS, for all accounts in Azure Active Directory, and for all users in Google Cloud Identity. This ensures that even if a user’s credentials are stolen, an attacker would still need access to the second authentication factor.
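Auditing who actually has MFA enabled can be as simple as parsing an access report. The sketch below works on an inlined CSV with a made-up two-column shape; AWS's real IAM credential report has many more columns, but the `mfa_active` field shown here does exist in it.

```python
import csv
import io

# Mock credential report; the real AWS report is a CSV with more columns.
report = """user,mfa_active
alice,true
bob,false
carol,true
"""

def users_without_mfa(report_csv):
    """List users whose MFA is not active."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [r["user"] for r in rows if r["mfa_active"] != "true"]

print(users_without_mfa(report))  # ['bob']
```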
Restrict public access to cloud resources: often, resources are inadvertently exposed to the internet due to misconfigurations, and these should be promptly corrected. This aligns with the “Infrastructure Protection” pillar of the AWS Well-Architected Framework. For example: an engineer should ensure that all storage resources (like Azure Blob Storage, AWS S3 buckets, or GCP Cloud Storage) are not publicly accessible unless absolutely necessary. If a bucket needs to be public, it should be configured to log access requests for security auditing.
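That rule can be expressed as a small audit check. The bucket configuration dictionaries are mock data; real checks would use the provider's APIs (for example, S3's public access block and bucket policy status) rather than in-memory dicts.

```python
# Mock bucket configurations. In this sketch, a public bucket is only
# acceptable if access logging is enabled for security auditing.
buckets = [
    {"name": "internal-reports", "public": False, "access_logging": False},
    {"name": "static-site", "public": True, "access_logging": True},
    {"name": "oops-open", "public": True, "access_logging": False},
]

def public_access_findings(buckets):
    """Flag public buckets that do not log access requests."""
    return [b["name"] for b in buckets if b["public"] and not b["access_logging"]]

print(public_access_findings(buckets))  # ['oops-open']
```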

Enable cloud service-specific security controls: most cloud services provide built-in security controls that can be enabled or configured. For example: in AWS, turning on S3 bucket versioning can help recover from both unintended user actions and security incidents. On Azure, consider enabling Advanced Data Security on SQL databases for advanced threat detection. For GCP, use VPC Service Controls to mitigate data exfiltration risks.

Limit security group and firewall rule exposure: open security groups and firewall rules are a common attack vector. For example: in all cloud platforms, ensure that security groups and firewall rules are not overly permissive. For instance, AWS Security Groups and Network ACLs, Azure Network Security Groups, and GCP firewall rules should not allow unrestricted access to resources (avoid rules like allowing inbound traffic from 0.0.0.0/0 unless absolutely necessary).
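A quick scan for the 0.0.0.0/0 pattern looks like this. The rule shape is simplified mock data rather than a real cloud API response, and the list of ports considered acceptable for public exposure is an assumption you would tune per environment.

```python
def overly_permissive(rules, allowed_public_ports=(80, 443)):
    """Flag rules open to the whole internet on non-public-facing ports."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] not in allowed_public_ports]

sg_rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: usually acceptable
    {"port": 22, "cidr": "0.0.0.0/0"},     # SSH open to the world: flag it
    {"port": 5432, "cidr": "10.0.0.0/16"}, # database restricted to the VPC: fine
]
print(overly_permissive(sg_rules))  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```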

Regularly rotate and review secrets: secrets like API keys, passwords, and tokens should be rotated regularly. For example: using AWS Secrets Manager, Azure Key Vault, or Google Cloud Secret Manager, engineers can store, audit, and rotate secrets. Automated rotations can be scheduled to ensure secrets are not static and are replaced at regular intervals.
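The review half of this practice can be sketched as a rotation-age audit. The secret names, dates, and 90-day window below are mock assumptions; the managed services named above can automate the rotation itself on a schedule.

```python
from datetime import date, timedelta

def overdue_secrets(secrets, today, max_age_days=90):
    """List secrets last rotated more than max_age_days ago."""
    cutoff = today - timedelta(days=max_age_days)
    return [name for name, rotated in secrets.items() if rotated < cutoff]

# Mock inventory: secret name -> date of last rotation.
inventory = {
    "db-password": date(2024, 1, 10),
    "api-token": date(2024, 5, 1),
}
print(overdue_secrets(inventory, today=date(2024, 6, 1)))  # ['db-password']
```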