
Welcome

Defining policies and recommendations for repos under this Organisation.

DevOps SDLC

Dev-Sec-Ops Software Development Life Cycle

Status: This document is currently a draft.
Updated: April 24, 2019 (by: @gocGrabinski, @jeanbenoitrochon)

Expected Development Team Behaviours in a DevOps SDLC

These tables list the expected behaviours of development teams at each stage of a Dev-Sec-Ops software development life cycle. The list is not exhaustive, but it indicates the many behaviours a development team needs to demonstrate in a Dev-Sec-Ops culture.

The establishment and implementation of Dev-Sec-Ops key performance indicators (DSO KPIs) will help development teams identify which behaviours need improvement as they move to Dev-Sec-Ops. The Communities of Practice and/or the Centres of Excellence should establish and maintain standard DSO KPIs for the identified behaviours.

To achieve a Dev-Sec-Ops culture, development teams must demonstrate self-sufficiency in all of these behaviours.

Stages

All Stages

Continuous Improvement

Implement Dev-Sec-Ops key performance indicators to help identify areas where the development team can improve.

Accept that team failures will occur. Teams will learn from failures to continuously improve development processes and practices in all stages of the Dev-Sec-Ops software development life cycle.

Share lessons learned with Communities of Practice so that other development teams can benefit.

Involves: all CoPs & CoEs
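As an illustration, a team's DSO KPIs could be computed from its own deployment history. This is a minimal sketch under assumed data shapes: the record fields (`deployed_at`, `caused_incident`) and the two example KPIs (deployment frequency, change failure rate) are illustrative choices, not the standard KPIs the CoPs/CoEs are to define.

```python
from datetime import date

# Hypothetical deployment records; real data would come from the
# team's pipeline and production-monitoring tooling.
deployments = [
    {"deployed_at": date(2019, 4, 1), "caused_incident": False},
    {"deployed_at": date(2019, 4, 8), "caused_incident": True},
    {"deployed_at": date(2019, 4, 15), "caused_incident": False},
    {"deployed_at": date(2019, 4, 22), "caused_incident": False},
]

def deployment_frequency(deploys, period_days):
    """Deployments per week over the observed period."""
    return len(deploys) / (period_days / 7)

def change_failure_rate(deploys):
    """Fraction of deployments that caused a production incident."""
    failures = sum(1 for d in deploys if d["caused_incident"])
    return failures / len(deploys)

print(deployment_frequency(deployments, period_days=28))  # 1.0 per week
print(change_failure_rate(deployments))                   # 0.25
```

Trends in such numbers over several sprints, rather than single values, are what point a team at the behaviours needing improvement.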

Work in the open

Make source code repositories, task boards, build pipeline dashboards, non-sensitive production-monitoring data available (readable) to all of IITB (ESDC?).

Source code is open by default. Justification is required if parts of the code are to be made inaccessible.

Working in the open helps the entire organisation identify and address issues and bottlenecks in IT processes and procedures.

Allows teams, communities of practice, and centres of excellence to see how other teams have addressed common issues.

Involves: all CoPs & CoEs

Plan

Define expected outcomes for Sprint/Iteration

Select existing or newly defined backlog items.

Define/refine User stories, Acceptance Criteria and Work tasks to complete in this iteration.

Target 20% of Sprint/Iteration tasks for reduction of any technical debt. Technical debt reflects the implied cost of additional rework caused by choosing an easy solution now instead of using a better approach that would take longer.

Artifacts - Define expected outcomes for Sprint/Iteration

Involves: Agile Practices (?)
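The 20% technical-debt target translates directly into a per-sprint task count. A small sketch (the 15-task sprint is a made-up example; rounding up is one reasonable convention, not a prescribed one):

```python
import math

def debt_task_target(total_tasks, debt_ratio=0.20):
    """Number of sprint tasks to reserve for technical-debt reduction.

    Rounds up so that even small sprints dedicate at least one task
    when any debt exists.
    """
    return math.ceil(total_tasks * debt_ratio)

print(debt_task_target(15))  # 3 of 15 tasks go to paying down debt
```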

Threat Modeling

Rapid Risk Assessment: a 30-minute exercise at the beginning of the Sprint/Iteration.

Identify and address security concerns as early as possible.

Artifacts - Threat Modeling

Involves: IT Security

Accessibility and Usability Modeling

User interface prototype: layout, behaviour and navigation.

Identify and address any accessibility and usability issues/concerns as early as possible.

Involves: Testing & Accessibility

Code

Development Standards

Formatting Standards; Implement departmental standards for Code “Linting”; Appropriate variable scoping; Secrets management; Adopt Git branching / versioning best practices;

Artifacts - Development Standards

Involves: Developer

Unit Testing

Apply unit testing methodology, such as Test Driven Development (TDD)

Artifacts - Unit Testing

Involves: Developer
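A minimal red-green illustration of the TDD cycle, using Python's standard `unittest` module. The `add_tax` function is hypothetical; in TDD the test below is written first and fails ("red"), and the implementation is then written to make it pass ("green").

```python
import unittest

def add_tax(amount, rate):
    """Illustrative function under test: written only after the
    test below existed and was failing."""
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    # Written before the implementation; drives its design.
    def test_adds_percentage_tax(self):
        self.assertEqual(add_tax(100.00, 0.13), 113.00)

# Run with: python -m unittest <this module>
```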

Test Data Management

Provide test data for each scenario of the code coverage, i.e. inputs and expected results.

Artifacts - Test Data Management

Involves: Testing, Storage/Database (CDO?)
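One common way to manage such data is to keep the input/expected-result pairs in a table separate from the test logic, so that testers and data owners can extend coverage without touching code. A sketch with an invented `cart_total` function and made-up scenarios:

```python
# Hypothetical scenario table: one row per code-coverage scenario,
# pairing an input with its expected result. In practice this could
# live in a file or database managed alongside the tests.
scenarios = [
    {"name": "empty cart",  "input": [],           "expected": 0.0},
    {"name": "single item", "input": [9.99],       "expected": 9.99},
    {"name": "many items",  "input": [1.50, 2.50], "expected": 4.00},
]

def cart_total(prices):
    """Illustrative function under test."""
    return round(sum(prices), 2)

# Data-driven check: the test logic is written once and every
# scenario in the table is exercised against it.
for s in scenarios:
    actual = cart_total(s["input"])
    assert actual == s["expected"], f"{s['name']}: {actual} != {s['expected']}"
print("all scenarios passed")
```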

Continuous Integration

The practice of merging all developers' working copies into a shared mainline several times a day.

Encourage coders to work with smaller changes so that integration issues are more easily debugged.

Artifacts - Continuous Integration

Involves: Developer

Automated Static Code Analysis

The goal is to keep it fast: shallow scans on code check-in, deeper scans nightly/weekly.

Artifacts - Automated Static Code Analysis

Involves: IT Security

Code Review

Initiated on shared mainline merge request. (Git pull request)

Encourage coders to work with smaller changes to keep reviews effective. The effectiveness of code reviews depends on the speed of reviewing.

Artifacts - Code Review

Involves: Developer

Build & Test

Continuous Build & Test Integration

All automated tests must pass.

Build Artifacts published to central ‘snapshot’ repository.

Artifacts - Continuous Build & Test Integration

Involves: Developer

Automated Build

The build needs to complete rapidly so that integration problems can be identified and addressed quickly.

Involves: Developer

Automated Artifact CVE Scan

Automated build artifact scanning for CVEs. Will require tuning to reduce false positives.

Artifacts - Automated Artifact CVE Scan

Involves: IT Security
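The false-positive tuning mentioned above often takes the form of a reviewed suppression list applied to the scanner's raw findings. A sketch with invented finding and suppression shapes (real scanners define their own report and suppression-file formats):

```python
# Invented data shapes for illustration only.
findings = [
    {"cve": "CVE-2019-0001", "artifact": "app.jar"},
    {"cve": "CVE-2019-0002", "artifact": "app.jar"},
]

# Each suppression should carry a reviewed justification and an expiry
# date so that suspected false positives are re-examined periodically.
suppressions = [
    {"cve": "CVE-2019-0002",
     "reason": "vulnerable code path not reachable",
     "expires": "2019-12-31"},
]

def actionable(findings, suppressions):
    """Filter out findings the team has already reviewed and suppressed."""
    suppressed = {s["cve"] for s in suppressions}
    return [f for f in findings if f["cve"] not in suppressed]

remaining = actionable(findings, suppressions)
print(remaining)  # only CVE-2019-0001 is left to triage
```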

Automated Artifact License Scan

Automated build artifact scanning for license compliance.

Artifacts - Automated Artifact License Scan

Involves: Developer

Automated Smoke Test

A simplified test that ensures that build artifacts actually run and basic functionality works before more in-depth tests are run.

Verify expected VMs/Containers/Services are running.

Port scans.

Connectivity tests.

Artifacts - Automated Smoke Test

Involves: Testing
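The connectivity checks above can be as simple as attempting a TCP connection to each expected service. A minimal sketch; the host names and ports are hypothetical, and a real smoke test would read the service list from the deployment's configuration:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Smoke-test helper: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical service list for illustration.
services = [("app.example.internal", 443), ("db.example.internal", 5432)]
for host, port in services:
    status = "up" if port_open(host, port) else "DOWN"
    print(f"{host}:{port} {status}")
```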

Automated Acceptance Testing

Done during end-to-end testing.

Artifacts - Automated Acceptance Testing

Involves: Testing

Automated Accessibility Testing

Done during end-to-end testing.

Artifacts - Automated Accessibility Testing

Involves: Testing & Accessibility

Automated Compliance Testing

Verify that all the required test artifacts were created.

Artifacts - Automated Compliance Testing

Involves: Testing
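Verifying that the required test artifacts exist can itself be a small automated check in the pipeline. A sketch; the artifact file names below are invented, and the real list would come from the organisation's compliance requirements:

```python
from pathlib import Path

# Hypothetical required artifacts, for illustration only.
REQUIRED_ARTIFACTS = [
    "unit-test-report.xml",
    "static-analysis-report.json",
    "cve-scan-report.json",
]

def missing_artifacts(build_dir, required=REQUIRED_ARTIFACTS):
    """Return the required test artifacts that the build did not produce."""
    build = Path(build_dir)
    return [name for name in required if not (build / name).exists()]

missing = missing_artifacts("build/reports")
if missing:
    print("compliance check FAILED, missing:", ", ".join(missing))
else:
    print("compliance check passed")
```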

Automated Attack Testing (App Security Testing)

DAST: Behaviour-Driven Development scenarios written from a malicious actor's point of view.

Artifacts - Automated Attack Testing

Involves: IT Security

Release (Staging)

Deploy to Staging Environment

Deployment to a staging environment.

Manual Tests

Accessibility & User Acceptance Tests

Artifacts - Manual Tests

Involves: Accessibility Testing

Publish Release Artifacts

Build Artifacts published to ‘Release’ repository.

A centralized ‘Release’ artifact repository (or repositories) simplifies CVE and license scanning/checks.

Use tags on release artifacts to link to source code.

Artifacts - Publish Release Artifacts

Compliance Review

Verify that the previous processes and procedures were followed. Check ‘Audit Artifacts’.

Artifacts - Compliance Review

Involves: Release Management

Deploy

Automated Deployment to Production

Manually triggered automated deployment.

Auditability and Accountability of deployment to production

Only ‘Release’ repository artifacts deploy to production.

Deployment option examples:

Artifacts - Automated Deployment to Production

Deployment Audit log (Who, When, What etc.).

Involves: Operations
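The Who/When/What audit log can be produced by the deployment automation itself. A sketch under assumptions: the record fields, the JSON-lines file format, and the artifact names are illustrative choices, not a prescribed format.

```python
import getpass
import json
from datetime import datetime, timezone

def audit_record(artifact, version, environment="production"):
    """Build a Who/When/What record for a production deployment."""
    return {
        "who": getpass.getuser(),                        # who triggered it
        "when": datetime.now(timezone.utc).isoformat(),  # when it happened
        "what": {"artifact": artifact, "version": version,
                 "environment": environment},
    }

def append_audit_log(record, path="deploy-audit.log"):
    # Append-only JSON lines keep the log simple to write and to audit.
    with open(path, "a") as log:
        log.write(json.dumps(record) + "\n")

entry = audit_record("app.jar", "1.4.2")  # hypothetical artifact/version
append_audit_log(entry)
print(entry["what"]["artifact"])
```

Keeping the log append-only and outside the team's control supports the accountability requirement above.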

Automated Smoke Tests

A simplified test that ensures that the production deployment actually runs and basic functionality works.

Artifacts - Automated Smoke Tests

Involves: Testing

Operate & Monitor

Read access for monitoring production

To support production applications, development teams need access to a wide range of data. Examples:

Artifacts - Read access for monitoring production

Involves: Operations

Automated Artifact CVE Scans

Automated build artifact scanning for CVEs.

Artifacts - Automated Artifact CVE Scans

Involves: IT Security

Automated Artifact License Scans

Automated build artifact scanning for license compliance.

Artifacts - Automated Artifact License Scans

Involves: Developer

Attack monitoring

To properly address threats to production applications, development teams need access to data on what attackers are actually attempting.

Artifacts - Attack monitoring

Involves: IT Security

Accessibility and Usability monitoring

To properly address accessibility and/or usability issues, development teams need access to telemetry data that gives metrics on how users are actually interacting with the UI.

Artifacts - Accessibility and Usability monitoring

Involves: Testing & Accessibility

Dev-Sec-Ops Development Team Composition

In a Dev-Sec-Ops culture, development teams are responsible for the planning, development, operation and maintenance of reliable, secure and accessible software that the team produces.

Development team size should be small: between five and twelve members. Keeping teams small reduces internal team communication errors and bottlenecks.

All members of these small teams are required to have a solid knowledge of all aspects of their software product’s Dev-Sec-Ops life cycle. This ‘end-to-end’ team responsibility will require that development team members develop cross-functional skills (development, testing, security, operations) so that the development team is self-sufficient in all stages of the Dev-Sec-Ops life cycle.

It is important to note that the level of knowledge for various Dev-Sec-Ops skills will vary on a team-by-team basis. For example, a team whose products are based on cloud Platform-as-a-Service offerings will need less in-depth operational skills than a team whose products rely more on cloud Infrastructure-as-a-Service.

To help teams develop the cross-functional skills required for Dev-Sec-Ops, teams will designate one or more team members as ‘Team Champions’ for specific skills required by the team.

Team Champions

A Team Champion’s role is to:

E.g. online learning, available training, Community of Practice resources.

A Team Champion is not:

When Team Champions find that their team’s coaching requirements are beyond their current capabilities, they can:

  1. Reach out to the Community of Practice to see if other teams can provide coaching/guidance.
  2. Ask the Centre of Excellence for coaching from a subject matter expert resource.