In the previous article, we covered the release process and how to secure its parts and components. The deploy and operate processes are where developers, IT, and security meet in a coordinated handoff to send an application into production.
The traditional handoff of an application is siloed: developers send installation instructions to IT, IT provisions the physical hardware and installs the application, and security scans the application after it is up and running. A missed instruction could cause inconsistency between environments. A system might not be scanned by security, leaving the application vulnerable to attack. The DevSecOps focus is to incorporate security practices by leveraging the security capabilities within infrastructure as code (IaC), blue/green deployments, and application security scanning before end-users are transitioned to the system.
Infrastructure as Code
IaC starts with a platform like Ansible, Chef, or Terraform that can connect to the cloud service provider’s (AWS, Azure, Google Cloud) Application Programming Interface (API) and programmatically tell it exactly what infrastructure to provision for the application. DevOps teams consult with developers, IT, and security to build configuration files that describe everything the cloud service provider needs to provision for the application. Below are some of the more critical areas that DevSecOps covers using IaC.
Capacity planning – This includes rules around scaling out (automatically and elastically adding servers to handle additional demand) and scaling up (increasing the performance of the infrastructure, such as adding more RAM or CPU). Elasticity from autoscaling helps prevent non-malicious or malicious Denial of Service incidents; see the autoscaling sketch after this list.
Separation of duties – While IaC helps break down silos, developers, IT, and security still have direct responsibility for certain tasks even when those tasks are automated. Accidental deployments are avoided by making specific steps of the deploy process the responsibility of a specific team so that they cannot be bypassed.
Principle of least privilege – Applications have the minimum set of permissions required to operate, and IaC ensures consistency even as resources automatically scale up and down to match demand. The fewer the privileges, the more protection systems have from application vulnerabilities and malicious attacks. A minimal permissions sketch follows this list.
Network segmentation – Applications and infrastructure are organized and separated based on the security requirements of the business system. Segmentation protects business systems from malicious software that hops from one system to the next, otherwise known as lateral movement in an environment; see the segmentation sketch after this list.
Encryption (at rest and in transit) – Hardware, cloud service providers, and operating systems have encryption capabilities built into their systems and platforms. Using the built-in capabilities or obtaining third-party encryption software protects the data where it is stored, while using TLS certificates for secured web communication between the client and the business system protects data in transit. Encryption is also a requirement for adhering to industry-related compliance and standards criteria. A sketch for enabling encryption at rest follows this list.
Secured (hardened) image templates – Security and IT develop the baseline operating system configuration and then create image templates that can be reused as part of autoscaling. As requirements change and patches are released, the baseline image is updated and redeployed.
Antivirus and vulnerability management tools – These tools are updated frequently to keep up with the dynamic security landscape. Instead of baking them into the baseline image, consider installing them through IaC so the most current versions are deployed.
Log collection – The baseline image should be configured to send all logs created by the system to a log collector outside of the system for distribution to the Network Operations Center (NOC) or Security Operations Center (SOC), where additional inspection and analysis for malicious activity can be performed. Consider using a DNS name instead of an IP address for the log collector destination; a log-forwarding sketch follows this list.
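The sketches that follow make a few of these areas concrete, using boto3 (the AWS SDK for Python) as one possible way to call a cloud provider's API; every resource name, ID, and value in them is a hypothetical placeholder, and any IaC platform could express the same intent. First, capacity planning: an autoscaling group with a floor, a ceiling, and a target-tracking policy.

```python
# Minimal autoscaling sketch using boto3 (AWS SDK for Python).
# The group name, launch template, and subnet IDs are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out and in between a floor and a ceiling so a demand spike,
# malicious or not, cannot exhaust a fixed pool of servers.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    LaunchTemplate={"LaunchTemplateName": "hardened-web-server", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# Target-tracking policy: add or remove instances to hold average CPU near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```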
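For the principle of least privilege, a sketch along the same lines might codify the one permission the application actually needs; the policy name and bucket ARN are assumptions.

```python
# Least-privilege sketch using boto3: the application may only read objects
# from a single S3 bucket. The policy name and bucket ARN are placeholders.
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-data/*",
        }
    ],
}

# Because the grant is codified, every instance created during autoscaling
# receives the same minimal set of permissions.
iam.create_policy(
    PolicyName="example-app-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```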
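Network segmentation can be codified the same way. In this hypothetical sketch, only the web tier's security group may reach the database port, which limits lateral movement; the VPC ID and port are assumptions.

```python
# Segmentation sketch using boto3: the database tier accepts traffic only from
# the web tier's security group. The VPC ID and port are placeholders.
import boto3

ec2 = boto3.client("ec2")

web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web tier", VpcId="vpc-12345678"
)
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="Database tier", VpcId="vpc-12345678"
)

# Allow the database port only from the web tier; everything else is denied by default.
ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": web_sg["GroupId"]}],
        }
    ],
)
```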
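Encryption at rest can also be switched on as part of provisioning rather than after the fact. This sketch assumes a hypothetical storage bucket and uses the provider's built-in KMS encryption.

```python
# Encryption-at-rest sketch using boto3: default server-side encryption is
# enforced on a storage bucket at provisioning time. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-app-data",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```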
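Finally, the log collection guidance can be baked into the baseline image or applied through IaC. The sketch below forwards application events to a collector addressed by DNS name; the hostname and port are assumptions.

```python
# Log-forwarding sketch using the Python standard library: events are shipped to
# a central collector addressed by a DNS name rather than an IP, so the collector
# can move without changing the image. The hostname is a placeholder.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(
    address=("logs.collector.internal", 514)  # DNS name resolves to the collector
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)

logger = logging.getLogger("example-app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Application started; forwarding events to the central collector")
```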
Blue/green deployment
A blue/green deployment is a system architecture that seamlessly replaces an old version of the application with a new version. This strategy increases application availability during upgrades, and if there is a problem, the system can be quickly reverted to a known good, secured working state.
Deployment validation should happen as the application is promoted through each environment because configuration items (variables and secrets) differ between environments. Traditionally, validation happens during non-business hours and is extremely taxing on the different groups supporting the application. With a blue/green deployment, the new version of an application can be deployed and validated during business hours, and even if end-users are still switched over during non-business hours out of caution, fewer employees are needed to participate.
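As a concrete illustration, the cutover itself can be a single, reversible API call. The sketch below assumes a hypothetical AWS application load balancer whose listener is repointed from the blue target group to the validated green one; the ARNs are placeholders.

```python
# Blue/green cutover sketch using boto3: the load balancer listener is repointed
# from the old (blue) environment to the validated new (green) one.
# The ARNs are hypothetical placeholders.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example/abc/def"
)
GREEN_TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-green/0123"
)

# Switch all end-user traffic to green in one step; rolling back is the same
# call with the blue target group ARN.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": GREEN_TARGET_GROUP_ARN}],
)
```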
Automate security tool installation and scanning
Attacks on Internet-facing applications continue to increase because of the ease of access to malicious tools, the speed at which some vulnerabilities can be exploited, and the value of the data extracted. Dynamic Application Security Testing (DAST) tools are a great way to identify vulnerabilities and fix them before the application is moved into production and released for end-users to access.
DAST tools provide visibility into real-world attacks because they mimic how hackers would attempt to break an application. Automating and scheduling application scans on a regular cadence helps find and resolve vulnerabilities quickly. Company policy may also require vulnerability scanning for compliance with regulations and standards like PCI, HIPAA, or SOC 2.
DAST for web applications focuses on the OWASP Top 10 vulnerabilities, such as SQL injection and cross-site scripting. Manual penetration (pen) testing is still required to cover other vulnerabilities like logic errors, race conditions, customized attack payloads, and zero-day vulnerabilities. Also, not all applications are web-based, so it is important to select and use the right scanning tools for the job. Manual and automated scanning can also help spot configuration issues that lead to errors in how the application behaves.
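One way to automate scanning on a schedule is through the OWASP ZAP API. The sketch below assumes a ZAP daemon already running on localhost and a hypothetical staging URL and API key; a CI scheduler or cron job would invoke it on the desired cadence.

```python
# Scheduled DAST sketch using the OWASP ZAP Python API (zapv2 package).
# Assumes ZAP is already running in daemon mode on localhost:8080;
# the target URL and API key are placeholders.
import time

from zapv2 import ZAPv2

TARGET = "https://staging.example.com"
zap = ZAPv2(
    apikey="changeme",
    proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"},
)

# Crawl the application first so the scanner knows which URLs exist.
spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(5)

# The active scan probes for issues such as SQL injection and cross-site scripting.
scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(10)

# Pull the alerts so they can be triaged, or used to fail the pipeline on high-risk findings.
for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])
```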
Next Steps
Traditional application deployments are a laborious process for the development, IT, and security teams. But that has all changed with the introduction of Infrastructure as Code, blue/green deployments, and the Continuous Delivery (CD) methodology. Tasks performed in the middle of the night can be moved to normal business hours. Projects that take weeks can be reduced to hours through automation. Automated security scanning can be performed regularly without user interaction. With the application deployed, the focus switches to monitoring and, eventually, decommissioning it as the final steps in the lifecycle.