What does a DevOps way of working really mean? Why do you need to cut development pipeline pass-through time? What is the difference between continuous integration and continuous delivery?
Many of the everyday terms have an established meaning in the agile and DevOps context. To help you out, we put together a DevOps glossary.
Agile refers to the Manifesto for Agile Software Development. Its key argument is continuous collaboration with customers, delivering working software in small iterations, and gathering feedback from end-users instead of relying on bureaucratic processes.
An application platform is a high-level platform to enable and support application teams in running their applications. In the DevOps context, the aim is to have self-service and API-driven application platforms that are often built on top of cloud services.
An application team is an end-to-end responsible team for planning, designing, implementing, building, maintaining and supporting specific applications.
"The term “as-code” is an approach and practice where items other than source code are treated as source code and with equal importance. Typical areas/items could be configuration and infrastructure-related artefacts.
In practice, this means that you apply the same processes and best practices that are used for software development on these items.
The final goal in treating items as code is to secure and raise the quality to a level where the items can be used effectively in automation and replace manual work.
In DevOps, “as code” is an important concept because it lets developers and teams take full responsibility for the applications and the systems and tools used in the software development lifecycle. When breaking down silos between development and operations, this is an important technique that allows the team to work independently without the need for handovers.
Canary deployment is a deployment strategy aiming to reduce the impact of failures when bringing an application update to production. An updated version of the application is brought up alongside the existing one, and user traffic is gradually shifted over to the updated version. If any anomalies arise, traffic shifting is halted and can easily be diverted back to the previous version, which is still running, thus returning to a “known good state” with minimal or no impact on the user experience.
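As a rough illustration of the traffic-shifting idea (in practice this logic lives in a load balancer, service mesh or ingress controller rather than in application code, and the weight and backend names below are made up), a weighted choice between the stable and canary versions could look like this:

```python
import random

# Illustrative canary traffic split: 5% of requests go to the new version.
CANARY_WEIGHT = 0.05

def pick_backend() -> str:
    """Route a single request to either the stable or the canary deployment."""
    return "app-v2-canary" if random.random() < CANARY_WEIGHT else "app-v1-stable"

# Rolling back simply means setting CANARY_WEIGHT to 0 while v1 is still running.
```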
Chaos Engineering, also known as running chaos experiments, is a disciplined approach for identifying potential failures before they become outages. This is commonly done by actively introducing failures into the system and verifying that the system can automatically alleviate or correct them without service disruption. Some tools in this area are Chaos Monkey, Gremlin and Litmus.
Chaos engineering is an example of "shifting testing right" (along with e.g. A/B testing), i.e. moving or extending parts of testing to the production environment.
Cloud-native is an approach where you adopt the principles and delivery model of cloud computing. A cloud-native application is built for the cloud and utilises cloud-native technologies. These technologies empower organisations to work according to DevOps principles, carry out continuous deployments to production, use a microservices-oriented architecture, and embrace containers or higher-level runtimes and managed services (SaaS). In combination, these enable bringing new ideas to market faster and meeting customer demands sooner.
Complicated-subsystem team is a term introduced by Team Topologies. It refers to one of four fundamental team types, where significant mathematics/calculation/technical expertise is needed. Read more about Team Topologies.
Configuration as code is an approach and practice where configuration items are treated as source code and with equal importance (see “as code” definition).
Configuration items are defined as something used to change the behaviour of an existing application, tool or infrastructure component.
By introducing the “as code” concept, configuration becomes stable, its quality improves and it can be used to automate activities and processes. Introducing guardrails, processes, tests and automation also increases collaboration and shared responsibility for configuration items.
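A minimal sketch of the idea, assuming a JSON configuration file kept in version control (the file name and required keys are illustrative): the configuration is validated by an automated test before it can be deployed, just like source code.

```python
import json
from pathlib import Path

# Keys the application expects; changing them goes through review like any code change.
REQUIRED_KEYS = {"log_level", "feature_flags", "db_pool_size"}

def load_config(path: str) -> dict:
    """Load a versioned configuration file and fail loudly if it is incomplete."""
    config = json.loads(Path(path).read_text())
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"Invalid configuration, missing keys: {sorted(missing)}")
    return config

def test_production_config_is_valid():
    # Runs in the pipeline as a guardrail before the configuration is applied.
    config = load_config("config/production.json")
    assert config["db_pool_size"] > 0
```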
Containerisation is the act of bundling your application together with all its runtime dependencies like tools, operating system and libraries. Containers have become the standard for building and running modern applications. Containers can be used effectively when building a continuous delivery pipeline.
Containers as a Service (CaaS) is a managed service for running containerised applications where the orchestration is taken care of, and all you as a user need to be concerned about is your application as a container. CaaS is a specialised type of Platform as a Service (PaaS).
Continuous Delivery (CD) is a software development practice where an application is kept in a deployable state all the time. This means for example that code, configurations and other artefacts needed for deployment are delivered together, and the continuous delivery pipeline deploys and verifies that the application is kept in a deployable state.
Continuous Delivery benefits include real-time production readiness, as well as reduced production deployment risk and time to repair.
Continuous Deployment is a release strategy where changes are automatically deployed to a production environment if verified successfully through the pipeline. Continuous Deployment enables multiple deployments to production each day, empowering the organisation to quickly deliver new features to the end-users. Since the application can be deployed several times per day, it also makes the application progress more tangible and enables quick user feedback on changes.
In practice, to achieve Continuous Deployment, organisations need to have Continuous Integration and Continuous Delivery practices in place.
Continuous feedback is an essential component of the DevOps way of working. The development team works closely with customers and end-users to get their feedback on the product. That user feedback is a source of further tasks and an input to prioritisation. It is augmented with feedback obtained from monitoring system behaviour in production, and with other immediate feedback during development, such as test and monitoring results from system testing. It is also normal to get feedback on processes by measuring different aspects of them.
Continuous improvement is one of the essential practices in the DevOps way of working. In an ever-changing world, the common expectation is that quality, speed and efficiency should keep improving. Organisations and teams continuously need to improve their way of working and adapt to changing requirements. Having a continuous improvement mindset and realising improvements is a team responsibility that requires a change in thinking and acting.
Continuous integration (CI) is a software development practice where developers commit and merge their changes to the mainline frequently, leading to multiple integrations per day. After each commit or merge, an automated pipeline builds and verifies the changes. The key goal of continuous integration is to secure the quality of the code and enable collaboration and transparency.
A CI/CD server is the tool responsible for the pipeline logic: it acts on events and triggers a chain of actions to drive the delivery pipeline. A good Continuous Integration / Continuous Deployment server integrates with a variety of tools and supports scheduling, monitoring and visualisation. One of the most popular CI/CD servers is the open-source tool Jenkins, thanks to the large community that develops plugins for different tools, scenarios and use cases. Packaged solutions like GitLab, Azure DevOps and GitHub also come with integrated CI/CD servers.
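To make the pipeline logic concrete, here is a toy sketch of what a CI/CD server automates: run each stage in order and stop on the first failure. The stage commands are illustrative placeholders, not a recommendation for any particular toolchain.

```python
import subprocess
import sys

# Illustrative pipeline stages; a real CI/CD server would trigger these on
# events (e.g. a push to mainline) and report results back to developers.
STAGES = [
    ("build", ["python", "-m", "build"]),
    ("unit tests", ["pytest", "-q"]),
    ("static analysis", ["ruff", "check", "."]),
]

for name, command in STAGES:
    print(f"--- stage: {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"Stage '{name}' failed - aborting the pipeline")

print("All stages passed - the artefact is ready for delivery")
```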
Continuous learning culture refers to continually increasing knowledge, competence and performance by becoming a learning organisation committed to relentless improvement and innovation. SAFe, for example, provides its own definition of a continuous learning culture.
A cross-functional team is the industry term for a team with all the competency required to design, implement, test, deliver, operate and monitor a service or product. The team can be e.g. an Application Team, a Feature Team or a DevOps Team. Read more about Team Topologies.
DevOps is the cultural movement of breaking down silos between developers and operators, bringing them together and making them responsible for the entire life cycle of applications. This is aided by automated tools to deliver and monitor applications.
DevOps pipeline is the mechanism supporting DevOps best practices like Automation, Continuous Integration, Continuous Deployment, test automation, predictive monitoring and feedback loops. The pipeline provides agile teams with a paved way, so they can become effective DevOps teams.
DevXOps has been introduced as DevOps has become a more common and better-understood approach, giving rise to variations with a particular focus, such as DevSecOps (security), DevTestOps (testing) and DevBizOps (business value creation). All these focus areas are essentially always present in the DevOps approach; only the emphasis varies. DevXOps is used as an umbrella term for any such variation.
Distributed tracing is a technology aiming to ensure application observability, traceability and monitoring. It has become popular in microservices-oriented application architectures because it enables tracing the long chain of requests between different services over the network. Standards like OpenTelemetry define how trace data is emitted in a standardised format and collected in a centralised location, preserving the sequence of calls.
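A small sketch using the OpenTelemetry Python SDK (assuming the opentelemetry-api and opentelemetry-sdk packages are installed; the service and span names are illustrative, and a console exporter stands in for a real tracing backend):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure where spans are sent; a real setup would export to a tracing backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("order-service")

def handle_order(order_id: str) -> None:
    # Each span becomes part of a trace that can follow the request across services.
    with tracer.start_as_current_span("handle-order"):
        with tracer.start_as_current_span("charge-payment"):
            pass  # call the payment service here
```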
Enabling team is one of the four fundamental team types according to Team Topologies. An enabling team helps a stream-aligned team to overcome obstacles and detect missing capabilities. Read more about Team Topologies.
Feature flags or feature toggling is a software development technique to minimise deployment-related risks. It allows a developer to hide functionality and provide “a dark launch”, where a new feature is deployed to production but hidden from users. Once the new version runs reliably, the new functionality can be enabled for selected users and gradually rolled out to the entire user base. If the new feature causes problems or does not work as expected, it can simply be disabled without changing the code or re-deploying the application.
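A flag check can be as simple as a lookup plus a deterministic user bucket. The sketch below is illustrative only (the flag name, in-memory storage and rollout percentage are assumptions); real systems typically read flags from a feature-flag service or configuration store.

```python
import hashlib

# Illustrative flag configuration.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the feature is on for this user (gradual rollout by user hash)."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user id so each user consistently lands in the same bucket (0-99).
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

def render_checkout(user_id: str) -> str:
    if is_enabled("new-checkout", user_id):
        return "new checkout page"   # dark-launched code path
    return "existing checkout page"  # flipping the flag requires no redeploy
```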
Functions as a Service (FaaS) provides services for operating and triggering independent functions. The services aim to empower developers to run self-contained functions triggered by external events, with the benefit of scaling from zero to thousands of parallel executions on demand. Applications built on FaaS consist of a set of functions that are run and triggered independently of each other, but together form (or are part of) a complete system.
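A self-contained function in the style of an AWS Lambda handler behind an HTTP trigger is a typical shape for FaaS code; the event fields below are illustrative and other FaaS platforms use slightly different signatures.

```python
import json

def handler(event, context):
    """Triggered by an external event; the platform scales instances up and down to zero."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```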
Immutable infrastructure is the practice of creating new resources with the latest version or configuration. Instead of modifying an existing server or application, you create a new one and shift traffic over to the new instance. The key benefit is avoiding potential unplanned service interruptions. This practice is very well suited to cloud-based environments combined with Infrastructure as Code or Configuration as Code approaches. One common configuration is to mount file systems read-only, both to avoid configuration drift and to reduce the impact of security breaches.
Infrastructure as code is an approach and practice where infrastructure items are treated as source code and with equal importance (see “as code” definition). Modern infrastructure services are self-serviced and API (Application Programming Interface) driven. This means that instead of clicking through a graphical user interface, you can program the infrastructure to increase speed, reduce errors and ensure parity between environments.
Infrastructure items are defined as scripts, definitions, and templates used to create and set up infrastructure components like tools, virtual machines, networks, storage, etc. In DevOps, this is an important technique since it enables teams to take responsibility and ownership of infrastructure, remove the handover to an operations team and evolve infrastructure according to the innovation pace of the application itself.
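One way to express infrastructure as code in Python is with a tool such as Pulumi. The sketch below assumes the pulumi and pulumi_aws packages and a configured AWS account; the resource name is illustrative.

```python
import pulumi
from pulumi_aws import s3

# Declaring the bucket in code means the definition is versioned, reviewable and
# testable, and every environment can be created identically from it.
artifact_bucket = s3.Bucket("app-artifacts")

pulumi.export("artifact_bucket_name", artifact_bucket.id)
```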
Jenkins is a DevOps pipeline automation server that is the most commonly used general-purpose CI/CD workflow tool today. Read more about Jenkins.
Jira is software that helps to plan, track, and manage agile software development projects. It is developed by Atlassian. Many popular extensions exist to aid teams in different techniques and technologies. Read more about Jira.
Kubernetes is a container orchestrator: it automates the deployment, scaling, and management of containerised workloads. Kubernetes is available from all major cloud and infrastructure providers and runs everywhere from Windows, Linux and Mac machines to IoT devices. Read more about Kubernetes.
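A small sketch using the official Kubernetes Python client (assuming the kubernetes package and access to a cluster via a local kubeconfig; the deployment name and namespace are illustrative) shows the declarative idea: you tell Kubernetes the desired state and the orchestrator makes it so.

```python
from kubernetes import client, config

config.load_kube_config()        # read credentials from the local kubeconfig
apps = client.AppsV1Api()

# Ask for 3 replicas of an existing deployment; Kubernetes handles scheduling,
# restarts and rollout of the change.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```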
Low-code is an approach to software development where developers create software through modelling instead of traditional programming. A low-code development platform provides developers with graphical interfaces, configuration management and software development automation features. Thanks to its visual layout, set of ready-made components and minimal hand-coding, low-code speeds up application creation.
Microservices is an application architecture style aiming to design systems of loosely coupled and independent services that can be built and maintained by separate teams. The key is that each microservice is scoped to handle only one single responsibility.
Microservices simplify the implementation of continuous delivery and deployment.
Monitoring is the principle of gathering data from running applications, supporting services and infrastructure. This provides insights and visibility into the behaviour of the application and its runtime environment. Monitoring can also trigger alerts on application health, and it speeds up debugging and finding root causes.
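As a small sketch of application-side monitoring, assuming the prometheus_client package and a Prometheus-style scraper (metric names and the port are illustrative):

```python
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency in seconds")

def handle_request() -> None:
    start = time.perf_counter()
    try:
        ...  # real work here
        REQUESTS.labels(status="200").inc()
    except Exception:
        REQUESTS.labels(status="500").inc()
        raise
    finally:
        LATENCY.observe(time.perf_counter() - start)

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for the monitoring system to scrape
    while True:
        handle_request()
        time.sleep(1)
```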
NoOps (no operations) is a concept aiming for IT environments that are so automated and abstracted from the underlying infrastructure that there is no need for a dedicated team to manage the IT operations environment.
Observability is the engineering practice of making the inner workings of an application visible from the outside, building it in from the start rather than adding it as an afterthought at the end. The goal is to answer one crucial question, "is the application working as expected?", and to catch problems before customers start noticing them. Logging, metrics and distributed tracing are typically considered the three core pillars of observability.
Pipeline management refers to the activities needed to keep the DevOps pipeline running optimally and fit for the team(s) using it. Pipeline management covers following the load, identifying bottlenecks, and correcting any shortcomings. It can also include further development of the pipeline based on needs. A separate team can provide these services either permanently (Pipeline as a Service) or temporarily, or the team can handle this itself.
Platform as a Service (PaaS) is a compute paradigm put forward by Heroku and since adopted by all the major cloud providers. You as a developer are only responsible for the application code and deploy that to the PaaS service to run. You don't need to worry about maintaining the underlying infrastructure and runtime. PaaS can be a great option for the right application, but can also result in inflexible and expensive solutions when it is not a good fit for the architecture. When planning the architecture, costs need to be considered as part of the design to create a good result.
A platform team is one of the four fundamental team types according to Team Topologies: a grouping of other team types that provides a compelling internal product to accelerate delivery by stream-aligned teams. Read more about Team Topologies.
Policy as code is the idea of writing code in a high-level language to manage and automate policies. The policies can be anything as disparate as access policies in databases or resource management rules in cloud environments. By representing policies as code in text files, proven software development best practices such as version control, automated testing and automated deployment can be adopted.
Policy as code makes it possible to apply and enforce the policy automatically: a single policy can be defined once and applied to many different systems, increasing consistency across a wide set of services and making the policy easier to maintain and change. To achieve this, you need a configuration language to describe the policy and a way of applying it to the target services. An example of policy as code is the Open Policy Agent framework.
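As a minimal illustration in plain Python (real-world tooling such as the Open Policy Agent uses a dedicated policy language, Rego, instead; the policy fields below are made up), a policy check can be an automated function run against every resource definition in the pipeline:

```python
# Illustrative policy: allowed regions and mandatory tags for cloud resources.
POLICY = {
    "allowed_regions": ["eu-north-1", "eu-west-1"],
    "required_tags": ["owner", "cost-center"],
}

def validate_resource(resource: dict) -> list[str]:
    """Return a list of policy violations for a resource definition."""
    violations = []
    if resource.get("region") not in POLICY["allowed_regions"]:
        violations.append(f"region {resource.get('region')} is not allowed")
    for tag in POLICY["required_tags"]:
        if tag not in resource.get("tags", {}):
            violations.append(f"missing required tag '{tag}'")
    return violations

print(validate_resource({"region": "us-east-1", "tags": {"owner": "team-a"}}))
```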
A quality gate is a checkpoint between different stages in a software development process. A predefined set of criteria defines whether the process can proceed to the next stage.
Quality gates are designed to give fast feedback to stakeholders and secure quality. They also reduce waste in the organisation since they stop the process if criteria are not met.
An example of a quality gate is the commit gate, which contains criteria that new code needs to fulfil, for example passing unit tests or a SonarQube analysis. A smoke test is another example: if new changes do not pass the smoke tests, there is no point deploying them to higher-level test environments.
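A commit gate can be expressed as a small automated check that stops the pipeline when any criterion fails. The sketch below is illustrative; the thresholds and inputs are assumptions, not prescribed values.

```python
def commit_gate(unit_tests_passed: bool, coverage: float, blocker_issues: int) -> bool:
    """Evaluate the commit-gate criteria and report each result."""
    criteria = {
        "unit tests pass": unit_tests_passed,
        "coverage >= 80%": coverage >= 0.80,
        "no blocker issues": blocker_issues == 0,
    }
    for name, ok in criteria.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(criteria.values())

if not commit_gate(unit_tests_passed=True, coverage=0.85, blocker_issues=0):
    raise SystemExit("Quality gate failed - stopping the pipeline")
```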
A radiator visualises real-time status from the delivery pipeline and development lifecycle, and the information it presents is well suited for screens in a team room. Radiators aggregate information from multiple sources and present it to stakeholders, which reduces the need to consult several sources to understand the real-time status during the development lifecycle. Radiators create the fast feedback loop that is a vital part of DevOps.
Release automation refers to automating the activities of release management. The most common and simplest automation is deploying the package to production and composing the accompanying release documentation automatically. This may be extended by, for instance, automating post-deployment testing, updating the configuration management database automatically and automating the composition of the release package.
Release management refers to overseeing a software release within an organisation, including planning, scheduling, composing, testing, and deploying the software package. The aim is to ensure that the expected contents are of the expected quality and are made available to the users at the expected time.
Scaled Agile Framework (SAFe) is a framework for scaling Agile. SAFe is designed to help businesses continuously and more efficiently deliver value on a regular and predictable schedule. It provides a knowledge base of proven, integrated principles and practices to support enterprise agility.
One of the most important key success factors in a Lean-Agile transformation is leadership engagement combined with education and training.
Serverless is a paradigm where an application developer does not have to care about the underlying (virtual) servers; they are completely managed. Billing is typically consumption-based: you only pay for actual usage, with no up-front commitment, and if the application is not used you pay nothing. Serverless is often used in conjunction with FaaS and SaaS services.
A Service Level Indicator (SLI) is a specific metric of an application that directly corresponds to the satisfaction of a typical user. SLIs are used to build targets (SLOs) for the level of reliability one aims to achieve and are a core part of Site Reliability Engineering (SRE). Good examples of SLIs are the percentage of HTTP requests that result in an error page, or the 99th percentile of the time it takes to respond to HTTP requests.
Service Level Objectives (SLOs) are the goals for the level of reliability one wants to achieve over a certain time (often measured over the last 28 days). SLOs are used to adjust development priorities: when the objective is at risk, the team focuses more on reliability; when the required reliability is being met, the team can spend more of its time developing new features.
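A worked example tying SLI and SLO together (the request counts and the 99.9% target are illustrative): compare the observed availability with the objective and see how much error budget is left.

```python
# Error-rate SLI against a 99.9% availability SLO over a 28-day window.
total_requests = 4_200_000
failed_requests = 2_900

sli = 1 - failed_requests / total_requests    # observed availability
slo = 0.999                                   # target availability
error_budget = (1 - slo) * total_requests     # failures we can "afford" in the window
budget_remaining = error_budget - failed_requests

print(f"SLI: {sli:.5f} (target {slo})")
print(f"Error budget remaining: {budget_remaining:.0f} requests")
# A negative remaining budget signals that the team should prioritise reliability work.
```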
Service virtualisation is an approach where impediments caused by dependencies between software components are alleviated through well-defined interfaces and virtual implementations of those interfaces (stubs, virtual services). The virtual implementation typically returns some simple valid answer for the interface call, which thus allows for the development of other dependent components before the actual implementation is available. Also commonly referred to as mocking a service.
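As a minimal illustration (the class and method names are made up), a hand-written stub can stand in for a payment service that does not exist yet, so that dependent components can be developed and tested against the interface:

```python
class StubPaymentService:
    """Virtual implementation that returns a simple valid answer for the interface call."""

    def charge(self, order_id: str, amount: float) -> dict:
        return {"order_id": order_id, "amount": amount, "status": "APPROVED"}

def checkout(order_id: str, amount: float, payments) -> str:
    """Dependent component that only relies on the well-defined interface."""
    result = payments.charge(order_id, amount)
    return "confirmed" if result["status"] == "APPROVED" else "rejected"

# The stub is swapped for the real implementation once it becomes available.
assert checkout("order-1", 49.90, StubPaymentService()) == "confirmed"
```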
Shift left refers to moving activity to an earlier stage in the software engineering process. For example, shift left testing refers to moving test activities earlier in the software engineering lifecycle. This may involve e.g. writing end-user tests following behaviour-driven development to enable end-to-end testing afterwards.
Shift right refers to moving some part of the process to a later stage in the software engineering process. For example, shift right testing refers to moving part of testing to a production environment. This may involve practices such as chaos engineering and A/B testing.
Site Reliability Engineering (SRE) is the practice of running modern and complex applications in production by balancing new features and stability with agreed-upon metrics (SLOs). SRE differs from traditional operations by being part of the application team and sharing the responsibility. Usually, SRE brings strong system internals and network (shared infrastructure) competence, with a specific focus on building reliability into the application.
A stream-aligned team is one of four fundamental team types according to Team Topologies. A stream-aligned team is aligned to a flow of work from (usually) a segment of the business domain. Read more about Team Topologies.
A System Team is a type of Enabling team. The System Team is a specialised Agile team that assists in building and supporting the Agile development environment, typically including development and maintenance of the toolchain that supports the Continuous Delivery Pipeline. The System Team also supports and educates the Stream-aligned Teams in e.g. DevOps implementation, ways of working, automated builds and testing.
Team Topologies is a framework for “organising business and technology teams for fast flow”, defined by Skelton and Pais. Key concepts include four fundamental team types and three core interaction modes. Team Topologies aims to minimise required interactions and costly handovers, stabilise trust levels, increase the flow of deliveries and create high-performing teams. Read more about Team Topologies.
Test automation refers to the execution of software tests with the use of dedicated tools or scripts.
Test data management is the process of creating and maintaining data for testing purposes. Typically test data is used in non-production environments only, but it can also include production environments. Test data can be synthetic (created from scratch) or anonymised or pseudonymised from production data. Test data management is also needed because of regulations such as the GDPR that govern personal data.
Test data needs to be managed in the same way as test environments, see Test environment management.
Test environment management is the process of creating and maintaining non-production environments for development purposes. The aim is to ensure that test environments are in the expected state when tests are run, so that test results accurately reflect the quality of the test subject. This is especially important with automated tests, which typically require the environments to be continuously in a valid state. An easy way to achieve this is to create the environments dynamically through infrastructure as code and containerisation, so that the environment exists only during test execution.
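A sketch of a disposable, on-demand test environment, assuming the testcontainers and SQLAlchemy Python packages and a local container runtime (the image and SQL are illustrative):

```python
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_schema_change_applies_cleanly():
    # The database exists only for the duration of the test, so every run starts
    # from a known, valid state.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text("CREATE TABLE example (id int)"))
```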
Value stream mapping allows you to create a visual representation of the activities in a value stream, organised into sequences of steps. Each step is analysed for its characteristics like the value it generates, the time delay it introduces and the effort it requires.
Value stream mapping improves your understanding of the value stream dynamics. It can be used for improving the value stream process by e.g. identifying and removing wasteful activities and reducing process cycle times.
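As a small illustration of the kind of analysis value stream mapping enables (the step names and timings below are made up), comparing value-adding time to total lead time makes waiting time and handovers visible:

```python
# Per-step timings from a mapped value stream, in hours.
steps = [
    {"name": "code review", "value_add": 1.0, "wait": 8.0},
    {"name": "test",        "value_add": 2.0, "wait": 24.0},
    {"name": "deploy",      "value_add": 0.5, "wait": 40.0},
]

lead_time = sum(s["value_add"] + s["wait"] for s in steps)
value_add_time = sum(s["value_add"] for s in steps)
flow_efficiency = value_add_time / lead_time

print(f"Lead time: {lead_time:.1f} h, flow efficiency: {flow_efficiency:.0%}")
# Low flow efficiency points at waiting time (e.g. handovers) as the main improvement target.
```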
Follow the authors on LinkedIn, Hans Kristian Flaatten, Nicklas Hilmersson, Bjørge Solli and Tapani Tirkkonen.
Tapani has over 30 years of experience in the IT sector, from software development to consulting, working both in development and management positions. Currently, he works as Modernization Advisor, helping to adopt DevOps practices. Tapani is a certified Scrum Master and TOGAF 9.1 Enterprise Architect.