Measuring your success

One of the core concepts of effective agile methods is to continuously improve how you perform your work. This can be done to improve quality or to get something done more quickly. To improve how you work, you need to know how well you’re doing now. That means applying metrics to identify opportunities for improvement and then changing what you do or how you do it. A metric is a general measurement of success in either achieving business goals or complying with a standard or process. A related concept – a Key Performance Indicator (KPI) – is a quantifiable measurement of accomplishment against a crucial goal or objective. The best KPIs measure achievement of goals rather than compliance with a plan. The problem with metrics is that they measure something that you believe correlates to your objective, but not the objective itself. Some examples from software development are shown in Table 1.2:

| Objective | Metric | Issues |
| --- | --- | --- |
| Software size | Lines of code | Lines of code for simple, linear software aren’t really the same as lines of code for complex algorithms |
| Productivity | Shipping velocity | Ignores the complexity of the shipped features, penalizing systems that address complex problems |
| Accurate planning | Compliance with schedule | Rewards people who comply with even a bad plan |
| Efficiency | Cost per defect | Penalizes quality and makes buggy software look cheap |
| Quality | Defect density | Treats all defects the same, whether they involve using the wrong-sized font or something that brings an aircraft down |

Table 1.2: Examples from software development

See The Mess of Software Metrics by Capers Jones (2017) at http://namcook.com/articles/The%20Mess%20of%20Software%20Metrics%202017.pdf

Consider a common metric for high-quality design: cyclomatic complexity. It has been observed that highly complex designs contain more defects than designs of low complexity. Cyclomatic complexity is a software metric that computes complexity by counting the number of linearly independent paths through some unit of software. Some companies have gone so far as to require that all software stay below some arbitrary cyclomatic complexity value to be considered acceptable. This approach disregards the fact that some problems are harder than others, and any design addressing such problems must be more complex. A better application of cyclomatic complexity is to use the metric as a guide: it can identify those portions of a design that are more complex so that they can be subjected to additional testing. Ultimately, the problem with this metric is that complexity correlates only loosely with quality. A better metric for the goal of improving quality might be the ability to successfully pass tests that traverse all possible paths of the software.
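
To make the path-counting idea concrete, the following is a minimal sketch, not from the book, that approximates cyclomatic complexity for a single Python function by counting branch points and adding one. Dedicated analysis tools such as radon or lizard do this far more rigorously, and the classify function here is purely hypothetical.

```python
# Rough approximation of cyclomatic complexity for one Python function:
# count the branch points (decisions) and add 1.
import ast
import textwrap

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) = number of decision points + 1."""
    tree = ast.parse(textwrap.dedent(source))
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)):
            decisions += 1                       # each branch adds an independent path
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1    # 'and'/'or' short-circuit branches
    return decisions + 1

sample = """
def classify(speed, limit):
    if speed > limit and speed - limit > 20:
        return "reckless"
    elif speed > limit:
        return "speeding"
    return "legal"
"""

print(cyclomatic_complexity(sample))  # 4 = three decision points + 1
```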

Good metrics are easy to measure and, ideally, easy to automate. Creating test cases for all possible paths can be tedious, but it is possible to automate with appropriate tools. Metrics that require additional work by the engineering staff will be resented, and achieving compliance with the use of the metric may be difficult.

While coming up with good metrics may be difficult, the fact remains that you can’t improve what you don’t measure. Without measurements, you’re guessing where problems are and your solutions are likely to be ineffective or solve the wrong problem. By measuring how you’re doing against your goals, you can improve your team’s effectiveness and your product quality. However, it is important that metrics are used as indicators rather than as performance standards because, ultimately, the world is more complex than a single, easily computed measure.

Metrics should be used for guidance, not as goals for strict compliance.

Purpose

The purpose of metrics is to measure, rather than guess, how your work is proceeding with respect to important qualities so that you can improve.

Inputs and preconditions

The only preconditions for this workflow are the desire, ability, and authority to improve.

Outputs and postconditions

The primary output of this recipe is objective measurements of how well your work is proceeding or the quality of one or more work products. The primary outcome is the identification of some aspect of your project work to improve.

How to do it

Metrics can be applied to any work activity for which there is an important output or outcome (which should really be all work activities). The workflow is fairly straightforward, as shown in Figure 1.12:

Figure 1.12: Measuring success

Identify work or work product property important to success

One way to identify a property of interest is to look at where your projects have problems or where the output work products fail. For engineering projects, low work efficiency is a common problem. For work products, the most common problem is the presence of defects.

Define how you will measure the property (success metric)

Just as important as identifying what you want to measure is coming up with a quantifiable measurement that is easy to apply, easy to measure, and easy to automate, and that accurately captures the property of interest. It’s one thing to say “the system should be fast” but quite another to define a way to measure the speed in a fashion that can be compared to other work items and iterations.
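
As an illustration of that difference, the following minimal sketch, not from the book, turns “the system should be fast” into a number that can be compared across iterations by reporting 95th-percentile latency; the measured operation is a hypothetical stand-in for real work.

```python
# Make "fast" measurable and repeatable: the 95th-percentile latency of an
# operation over many trials.
import statistics
import time

def measure_p95_latency_ms(operation, trials: int = 200) -> float:
    """Time the operation repeatedly and return the p95 latency in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000.0)
    # quantiles(n=20) yields 19 cut points; the last one (index 18) is the p95
    return statistics.quantiles(samples, n=20)[18]

p95 = measure_p95_latency_ms(lambda: sum(range(10_000)))  # hypothetical workload
print(f"p95 latency: {p95:.3f} ms")
```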

Frequently measure the success metric

It is common to gather metrics for a review at the end of a project. This review is commonly called a project post-mortem. I prefer to do frequent retrospectives, at least one per iteration, which I refer to as a celebration of ongoing success. To be applied in a timely way, you must measure frequently. This means that the measurements must require low effort and be quick to compute. In the best case, the environment or tool can automate the gathering and analysis of the information without any ongoing effort by the engineering staff. For example, time spent on work items can be captured automatically by tools that check out and check in work products.
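
As a sketch of what such automation could look like, the following hypothetical example, not from the book, derives time spent per work item from check-out/check-in timestamps of the kind a configuration-management tool might log.

```python
# Derive time spent per work item from check-out/check-in records.
from datetime import datetime, timedelta

# (work item, checked out, checked in) records as a tool might export them.
events = [
    ("UC-07 specification", datetime(2022, 6, 1, 9, 0), datetime(2022, 6, 1, 16, 30)),
    ("UC-07 specification", datetime(2022, 6, 2, 9, 15), datetime(2022, 6, 2, 11, 0)),
    ("UC-09 interfaces", datetime(2022, 6, 2, 13, 0), datetime(2022, 6, 2, 17, 45)),
]

time_spent: dict[str, timedelta] = {}
for item, checked_out, checked_in in events:
    time_spent[item] = time_spent.get(item, timedelta()) + (checked_in - checked_out)

for item, spent in time_spent.items():
    print(f"{item}: {spent.total_seconds() / 3600:.1f} hours")
```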

Update the success metric history

For long-term organizational success, recorded performance history is crucial. I’ve seen far too many organizations miss their project schedules by 100% or more, only to do the very same thing on the next project, and for exactly the same reasons. A metric history allows the identification of longer-term trends and improvements. That enables the reinforcement of positive aspects and the discarding of approaches that fail.
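
One low-effort way to keep such a history is sketched below, assuming a simple CSV file and hypothetical values; this is illustrative only, not a prescribed tool.

```python
# Append each iteration's measurement to a history file and report a trend.
import csv
from pathlib import Path
from statistics import mean

HISTORY = Path("se_velocity_history.csv")  # hypothetical file name

def record(iteration: int, velocity: float) -> None:
    """Append one iteration's measurement to the history file."""
    new_file = not HISTORY.exists()
    with HISTORY.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["iteration", "velocity"])
        writer.writerow([iteration, velocity])

def trend() -> str:
    """Compare the last three iterations against the overall average."""
    with HISTORY.open() as f:
        values = [float(row["velocity"]) for row in csv.DictReader(f)]
    if len(values) < 4:
        return "not enough history yet"
    return "improving" if mean(values[-3:]) > mean(values) else "flat or declining"

record(5, 2.5)
print(trend())
```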

Determine how to improve performance against the success metric

If the metric result is unacceptable, then you must perform a root cause analysis to uncover what can be done to improve it. If you discover that you have too many defects in your requirements, for example, you may consider changing how requirements are identified, captured, represented, analyzed, or assessed.

Make timely adjustments to how the activity is performed

Just as important as measuring how you’re doing against your project and organizational goals is acting on that information. This may mean changing a project schedule to be more accurate, performing more testing, creating some process automation, or even getting training on some technology.

Assess the effectiveness of the success metric application

Every so often, it is important to look at whether applying a metric is generating project value. A common place to do this is the project retrospective held at the end of each iteration. Metrics that are adding insufficient value may be dropped or replaced with other metrics that will add more value.

Some commonly applied metrics are shown in Figure 1.13:

Figure 1.13: Some common success metrics

It all comes back to the principle that you can’t improve what you don’t measure. First, you must understand how well you are achieving your goals now. Then you must decide how you can improve and make the adjustment. Repeat. It’s a simple idea.

Velocity is commonly visualized with a velocity chart or a burn down chart. The former shows the planned velocity in work items per unit of time, such as use cases or user stories per iteration. The latter shows the rate of progress in handling the work items over time. It is common to show planned values alongside actual values. A typical velocity chart is shown in Figure 1.14.

Velocity is the amount of work done per time unit, such as the number of user stories implemented per iteration. A burn down chart is a graph showing the decreasing number of work items during a project.

Figure 1.14: Velocity chart
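
As a rough illustration, a velocity chart like the one in Figure 1.14 could be produced as follows, assuming matplotlib is available; the planned and actual values shown are purely hypothetical.

```python
# Plot planned versus actual velocity per iteration.
import matplotlib.pyplot as plt

iterations = list(range(1, 9))
planned = [3, 3, 3, 3, 3, 3, 3, 3]   # planned use cases per iteration (hypothetical)
actual = [2, 3, 2, 2, 3, 2, 2, 2]    # measured use cases per iteration (hypothetical)

plt.plot(iterations, planned, marker="o", label="Planned velocity")
plt.plot(iterations, actual, marker="s", label="Actual velocity")
plt.xlabel("Iteration")
plt.ylabel("Use cases completed")
plt.title("Velocity chart (hypothetical data)")
plt.legend()
plt.show()
```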

Example

Let’s look at an example of the use of metrics in our project:

Identify work or work product property important to success

Let’s consider a common metric used in agile software development, velocity, and apply it to systems engineering. Velocity underpins all schedules because it represents how much functionality is delivered per unit time. Velocity is generally measured as the number of completed user stories delivered per iteration. In our scope, we are not delivering implemented functionality, but we are incrementally delivering a hand-off to downstream engineering. Let’s call this SE Velocity, which is measured in “specified use cases per iteration” and includes the requirements and all related SE work products.

This might not provide the granularity we desire, so let’s also define a second metric, SE Fine-Grained Velocity, which is the number of story points specified in the iteration.

Define how you will measure the property (success metric)

We will measure the number of use cases delivered, but we must have a “definition of done” to ensure consistency of measurement. SE Velocity will include:

  • Use case with:
    • Full description identifying purpose, pre-conditions, post-conditions, and invariants
    • Normative behavioral specification in which all requirements that trace to and from the use case are represented in the behavior; this is a “minimal spanning set” of scenarios in which all paths in the normative behavior are represented in at least one scenario
  • Trace links to all related functional requirements and quality of service (performance, safety, reliability, security, etc.) requirements
  • Architecture into which the implementation of the use cases and user stories will be placed
  • System interfaces with a physical data schema to support the necessary interactions of the use cases and user stories
  • Logical test cases to verify the use cases and user stories
  • Logical validation cases to ensure the implementation of the use cases and user stories meets the stakeholder needs

SE Velocity will simply be the number of such use cases delivered per iteration. SE Fine-Grained Velocity will be the estimated effort specified per iteration (as measured in story points; see the Estimating effort recipe).
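
As a concrete illustration, the following sketch, not from the book, treats the definition of done above as a checklist so that only fully specified use cases count toward SE Velocity; all names and data are hypothetical.

```python
# A use case counts toward SE Velocity only when every listed work product is done.
from dataclasses import dataclass, fields

@dataclass
class UseCaseDone:
    full_description: bool
    normative_behavior_with_spanning_scenarios: bool
    trace_links_to_requirements: bool
    architecture_allocation: bool
    interfaces_with_data_schema: bool
    logical_test_cases: bool
    logical_validation_cases: bool

    def is_done(self) -> bool:
        return all(getattr(self, f.name) for f in fields(self))

delivered = [
    UseCaseDone(True, True, True, True, True, True, True),
    UseCaseDone(True, True, True, True, False, True, True),  # interfaces incomplete
    UseCaseDone(True, True, True, True, True, True, True),
]

se_velocity = sum(uc.is_done() for uc in delivered)
print(f"SE Velocity this iteration: {se_velocity} use cases")  # 2
```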

Frequently measure the success metric

We will measure this metric each iteration. If our project has 35 use cases, our project heartbeat is 4 weeks, and the project is expected to take one year, then our SE Velocity should be 35/12 or about 3. If the average use case is 37 story points, then our SE Fine-Grained Velocity should be about 108 story points per iteration.
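
The arithmetic behind these planned figures can be written out as a short calculation; the numbers below are simply the ones used in the text.

```python
# Planned velocity figures for the example project.
total_use_cases = 35
iterations = 12                       # one-year project at roughly a 4-week heartbeat
avg_story_points_per_use_case = 37

se_velocity = total_use_cases / iterations                       # use cases per iteration
se_fine_grained_velocity = se_velocity * avg_story_points_per_use_case

print(f"Planned SE Velocity: {se_velocity:.2f} use cases per iteration")            # ~2.92
print(f"Planned SE Fine-Grained Velocity: {se_fine_grained_velocity:.0f} points")   # ~108
```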

Update the success metric history

As we run the project, we will get measured SE Velocity and SE Fine-Grained Velocity. We can plot those values over time to get velocity charts:

Figure 1.15: SE velocity charts

Determine how to improve performance against the success metric

Our plan calls for 3 use cases and 108 story points per iteration; from the measured values in Figure 1.15, we can see that we are underperforming. This could be because 1) we overestimated the planned velocity, or 2) we need to improve our work efficiency in some way. We can, therefore, attack the problem on both fronts simultaneously.

To start, we should replan based on our measured velocity, which is averaging 2.25 use cases and 81 story points per iteration, as compared to the planned 3 use cases and 108 story points. This will result in a longer but hopefully more realistic project plan and extend the planned project by an iteration or so.

In addition, we can analyze why the specification effort is taking too long and perhaps implement changes in process or tooling to improve.

Make timely adjustments to how the activity is performed

As we discover variance between our plan and our reality, we must adjust either the plan or how we work, or both. This should happen at least every iteration, as the metrics are gathered and analyzed. The iteration retrospective that takes place at the end of the iteration performs this service.

Assess the effectiveness of the success metric application

Lastly, are the metrics helping the project? It might be reasonable to conclude that the fine-grained metric provides more value than the more general SE Velocity metric, so we abandon the latter.
