Agile Model-Based Systems Engineering Cookbook: Improve system development by applying proven recipes for effective agile systems engineering
Dr. Bruce Powel Douglass
eBook, Mar 2021, 646 pages, 1st Edition

Chapter 2: System Specification

This chapter contains recipes related to capturing and analyzing requirements. The first four recipes are alternative ways to achieve essentially the same thing: functional analysis generates high-quality requirements, use cases, and user stories, all of which are means to understand what the system must do.

By high-quality requirements, I mean requirements focused around a use case that are demonstrably the following:

  • Complete
  • Accurate
  • Correct
  • Consistent
  • Verifiable

The problem with textual requirements is that natural language is ambiguous, imprecise, and only weakly verifiable. Keeping text human-readable is very useful, especially for non-technical stakeholders, but is insufficient to ensure we are building the right system. The recipes covered in this chapter are as follows:

  • Functional analysis with scenarios
  • Functional analysis with activities
  • Functional analysis with state machines
  • Functional analysis with user stories
  • Model-based safety analysis
  • Model-based threat analysis
  • Specifying logical system interfaces
  • Creating the logical data schema

Why aren't textual requirements enough?

There are many reasons why textual requirements by themselves fail to result in usable, high-quality systems.

First, it is difficult to ensure all the following functionality is present:

  • All normal (sunny day) functionality
  • All edge cases
  • All variations of inputs, sequences, and timings
  • All exception, error, and fault cases
  • Qualities of service, such as performance, range, precision, timing, safety, security, and reliability
  • All stakeholders appropriately represented

Getting that much is a daunting task indeed. But even beyond that, there is an air gap between realizing a possibly huge set of shall statements and actually meeting the stakeholder needs. The stakeholder believes that if the system performs a specific function, then in practice, their needs will be met. Experience has shown that is not always true. Customers often ask for features that don't address their true needs. Further, requirements are volatile and interact in often subtle but potentially catastrophic ways.

We address this issue by capturing requirements both textually and formally via modeling. The textual requirements are important because they are human-readable by anyone, even without modeling training. The model representation of the requirements is more formal and lends itself to more rigorous thought and analysis. In general, both are necessary.

Definitions

Before we get into the recipes, let's agree on common terms:

  • Requirement: A stakeholder requirement is a statement of what a stakeholder needs. A system requirement is a statement of what the system must do to satisfy a stakeholder need. We will focus on system requirements in this chapter. Normally, requirements are written in an active voice using the shall keyword to indicate a normative requirement, as in the following example:

    The system shall move the robot arm to comply with the user directive.

  • Actor: An actor is an element outside the scope of the system we are specifying that has interactions with the system that we care about. Actors may be human users, but they can also be other systems, software applications, or environments.
  • Use Case: A use case is a collection of scenarios and/or user stories around a common usage of a system. One may alternatively think of a use case as a collection of requirements around a usage-centered capability of the system. Still another way to think about use cases is that they are a sequenced set of system functions that execute in a coherent set of system-actor interactions. These all come down to basically the same thing. In practice, a use case is a named usage of a system that traces to anywhere from 10 to 100 requirements and 3 to 25 scenarios or user stories.
  • Activities: An activity diagram in SysML is a composite behavior of some portion of a system. Activities are defined in terms of sequences of actions which, in this context, correspond to either a system function, an input, or an output.

    Activities can model the behavior of use cases. Activities are said to be fully constructive in the sense that they model all possible behavior of the use case.

  • State Machines: A state machine in SysML is a composite behavior of a system element, such as a block or use case. In this context, a state machine is a fully constructive behavior focusing on conditions of the system (states) and how the system changes from state to state, executing system functions along the way.
  • Scenarios: A scenario is an interaction of a set of elements in a particular case or flow. In this usage, a scenario represents a partially complete behavior showing the interaction of the actors with the system as it executes a use case. The reason that it is partially complete is that a given scenario only shows one or a very small number of possible flows within a use case. Scenarios are roughly equivalent to user stories. In SysML, scenarios are generally captured using sequence diagrams.
  • User Story: A user story is a statement about system usage from a user or actor's point of view that achieves a user goal. User stories describe singular interactions and so are similar in scope to scenarios. User stories use a canonical textual formulation such as As a <user> I want <feature> so that <output or outcome>.

    Here's an example:

    As a pilot, I want to control the rudder of the aircraft using foot pedals so that I can set the yaw of the aircraft.

    User stories tend to be most beneficial for simpler interactions, as complex interactions are difficult to write out in understandable text. Scenarios are generally preferred for complex interactions or when there is a lot of precise detail that must be specified. Consider the following somewhat unwieldy user story:

    As a navigation system, I want to measure the position of the aircraft in 3 dimensions with an accuracy of +/- 1 m every 0.5s so that I can fly to the destination.

And that's still a rather simple scenario.
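Because the canonical form is so regular, it can even be checked mechanically. The following is a small illustrative Python sketch (the helper name and the regular expression are my own, not from any requirements tool) that splits a user story into its user, feature, and outcome parts:

```python
import re

# Hypothetical helper: checks that a user story follows the canonical form
# "As a <user> I want <feature> so that <outcome>".
STORY_PATTERN = re.compile(
    r"^As an? (?P<user>.+?),? I want (?P<feature>.+?) so that (?P<outcome>.+)$",
    re.IGNORECASE,
)

def parse_user_story(text: str):
    """Return the (user, feature, outcome) parts, or None if malformed."""
    match = STORY_PATTERN.match(text.strip())
    if match is None:
        return None
    return match.group("user"), match.group("feature"), match.group("outcome")

story = ("As a pilot, I want to control the rudder of the aircraft using "
         "foot pedals so that I can set the yaw of the aircraft.")
parts = parse_user_story(story)
```

A bare shall statement fails the check, which makes a helper like this handy as a lightweight lint over a user story backlog.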

Functional analysis with scenarios

As stated in the chapter introduction, functional analysis is a means to both capture and improve requirements through analysis. In this case, we'll begin with scenarios as a way to elicit the scenarios from the stakeholder and create the requirements from those identified interactions. We then develop an executable model of the requirements that allows us to verify that the requirements interact how we expect them to, identify missing requirements, and perform what-if analyses for additional interactions.

Purpose

The purpose of this recipe is to create a high-quality set of requirements by working with the stakeholders to identify and characterize interactions of the system with its actors. This is particularly effective when the main focus of the use case is the interaction between the actors and the system or when trying to gather requirements from non-technical stakeholders.

Inputs and preconditions

The input is a use case naming a capability of the system from an actor-use point of view.

Outputs and postconditions

There are several outcomes, the most important of which is a set of requirements accurately and appropriately specifying the behavior of the system for the use case. Additional outputs include an executable use case model, logical system interfaces to support the use case behavior, along with a supporting logical data schema and a set of scenarios that can be used later as specifications of test cases.

How to do it

Figure 2.1 shows the workflow for this recipe. There are many steps in common with the next two recipes:

Figure 2.1 – Functional analysis with scenarios

Identify the use case

This first step is to identify the generic usage of which the scenarios of interest, user stories, and requirements are aspects.

Describe the use case

The description of the use case should include its purpose, and a general description of the flows, preconditions, postconditions, and invariants (assumptions). Some modelers add the specific actors involved, user stories, and scenarios, but I prefer to use the model itself to contain those relations.

Identify related actors

The related actors are those people or systems outside our scope that interact with the system while it executes the current use case. These actors can send messages to the system, receive messages from the system, or both.

Define the execution context

The execution context is a kind of modeling sandbox that contains an executable component consisting of executable elements representing the use case and related actors. The recommended way to achieve this is to create separate blocks representing the use case and the actors, connected via ports. Having an isolated simulation sandbox allows different systems engineers to progress independently on different use case analyses.

Capture use case scenarios

Scenarios are singular interactions between the system and the actors during the execution of the use case. When working with non-technical stakeholders, capturing scenarios is an effective way to understand the desired interactions of the use case. We recommend starting with normal, sunny day scenarios before progressing to edge case and exceptional rainy day scenarios. It is important to understand that every message identifies or represents one or more requirements.

Create ports and interfaces in the execution context

Once we have a set of scenarios, we've identified the flows from the use case to the actors and from the actors to the system. By inference, this identifies ports relating the actors and the system, and the specific flows within the interfaces that define them.

Create an executable state model

This step creates what I call the normative state machine. Executing this state machine can recreate each of the scenarios we drew in the Capture use case scenarios section. All states, transitions, and actions represent requirements. Any state elements added only to assist in the execution that do not represent requirements should be stereotyped «non-normative» to clearly identify this fact. It is also common to create state behavior for the actors in a step known as instrumenting the actor to support the execution of the use case in the execution context.

Verify and validate requirements

Running the execution context for the use case allows us to demonstrate that our normative state machine in fact represents the flows identified by working with the stakeholder. It also allows us to identify flows and requirements that are missing, incomplete, or incorrect. These result in Requirements_change change requests to fix the identified requirement defects.

Requirements_change

Parallel to the development and execution of the use case model, we maintain the textual requirements. This workflow event indicates the need to fix an identified requirement defect.

Update the requirements set

In response to an identified requirement defect, we fix the textual requirements by adding, deleting, or modifying requirements. This will then be reflected in the updated model.

Add trace links

Once the use case model and requirements stabilize, we add trace links using the «trace» relation or something similar. This is generally a backtrace to stakeholder requirements as well as forward links to any architectural elements that might already exist.

Perform the use case and requirements review

Once the work has stabilized, a review for correctness and compliance with standards may be done. This allows subject matter experts and stakeholders to review the requirements, use case, states, and scenarios for correctness and for quality assurance staff to ensure compliance with modeling and requirements standards.

Let's have a look at an example.

Identify the use case

This example will examine the Emulate Basic Gearing use case. The use case is shown in Figure 2.2:

Figure 2.2 – Emulate Basic Gearing use case

Describe the use case

All model elements deserve a useful description. In the case of a use case, we typically use the format shown here:

Figure 2.3 – Use case description

Identify related actors

The related actors in this example are the Rider and the Training App. The rider signals the system to change the gearing via the gears control and receives a response in terms of changing resistance. The training app, when connected, is notified of the current gearing so that it can be displayed. The relation of the actors to the use case is shown in Figure 2.2.

Define the execution context

The execution context creates blocks that represent the actors and the use case for the purpose of the analysis. In this example, the following naming conventions are observed:

  • The block representing the use case has the use case name (with white space removed) preceded by Uc_. Thus, for this example, the use case block is named Uc_EmulateBasicGearing.
  • Blocks representing the actors are given the actor name preceded by the letter a and an abbreviation of the use case. For this use case, the prefix is aEBG_, so the actor blocks are named aEBG_Rider and aEBG_TrainingApp.
  • The interface blocks are named as <use case block>_<actor block>. The names of the two interface blocks are iUc_EmulateBasicGearing_aEBG_Rider and iUc_EmulateBasicGearing_aEBG_TrainingApp. The normal form of the interface block is associated with the proxy port on the use case block; the conjugated form is associated with the corresponding proxy port on the actor block.

All these elements are shown in the Internal Block Diagram (IBD) in Figure 2.4:

Figure 2.4 – Emulate Basic Gearing execution context
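The naming conventions above are mechanical enough to automate. Here is a hypothetical Python sketch (these helpers are not part of SysML or any modeling tool, and the rule of abbreviating the use case by taking each word's initial is my assumption) that derives the block and interface names used in this recipe:

```python
# Hypothetical helpers applying the naming conventions described above:
# Uc_ prefix for the use case block, a<abbrev>_ for actor blocks, and
# i<use case block>_<actor block> for interface blocks.
def use_case_block_name(use_case: str) -> str:
    return "Uc_" + use_case.replace(" ", "")

def actor_block_name(use_case: str, actor: str) -> str:
    abbrev = "".join(word[0] for word in use_case.split())  # e.g. "EBG"
    return "a{}_{}".format(abbrev, actor.replace(" ", ""))

def interface_block_name(use_case: str, actor: str) -> str:
    return "i{}_{}".format(use_case_block_name(use_case),
                           actor_block_name(use_case, actor))

# Reproduces the names from this recipe, for example:
uc_block = use_case_block_name("Emulate Basic Gearing")
rider_iface = interface_block_name("Emulate Basic Gearing", "Rider")
```

Consistent, derivable names like these make it easy to tell at a glance which sandbox a given block belongs to when several use case analyses proceed in parallel.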

Capture use case scenarios

Scenarios here are captured to show the interaction of the system with the actors using this use case. Note that continuous flows are shown as flows with the «continuous» stereotype. This resistance at a specific level is applied continuously until the level of resistance is changed. As is usual in use case analysis, messages between the actors are modeled as events and invocations of system functions on the use case lifeline are modeled as operations.

The first scenario (Figure 2.5) shows normal gear changes from the rider. Note that the messages to self on the use case block lifeline indicate system functions identified during the scenario development:

Figure 2.5 – Emulate Basic Gearing scenario 1

The next scenario shows what happens when the rider tries to increment the gearing beyond the maximum gearing allowed by the current configuration. It is shown in Figure 2.6:

Figure 2.6 – Emulate Basic Gearing scenario 2

The last scenario for this use case shows the rejection of a requested gear change below the provided gearing:

Figure 2.7 – Emulate Basic Gearing scenario 3

Based on these sequences, we identify the following requirements:

  • The system shall respond to applied pedal torque with resistance calculated from the base level of resistance, current gearing, and applied torque to simulate pedal resistance during road riding.
  • The system shall send the current gearing to the training app when the current gearing changes.
  • The system shall respond to a rider-initiated increase in gear by applying the new level of gearing provided that it does not exceed the maximum gearing of the gearing configuration.
  • The system shall respond to a rider-initiated decrease in gear by applying the new level of gearing provided that it does not go below the minimum gearing of the gearing configuration.

Create ports and interfaces in the execution context

It is a simple matter to update the ports and interface blocks to contain the messages going between the actors and the use case. The sequence diagrams identify the messages between the use case and actor blocks, so the interface blocks must support those specific flows (Figure 2.8):

Figure 2.8 – Emulate Basic Gearing ports and interfaces
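Conceptually, the derivation collects every message that crosses the system boundary and assigns it to the interface of the actor on the other end. The Python sketch below illustrates the idea; the message triples and event names are invented stand-ins for the flows in the scenarios, not the book's actual interface contents:

```python
from collections import defaultdict

# Illustrative sketch: each scenario message is a (sender, receiver, event)
# triple read off the sequence diagrams; the interface toward each actor
# must support every event exchanged with that actor, in each direction.
def derive_interfaces(messages, system="Uc_EmulateBasicGearing"):
    interfaces = defaultdict(lambda: {"to_system": set(), "from_system": set()})
    for sender, receiver, event in messages:
        if receiver == system and sender != system:
            interfaces[sender]["to_system"].add(event)
        elif sender == system and receiver != system:
            interfaces[receiver]["from_system"].add(event)
    return dict(interfaces)

# Hypothetical event names for the gearing scenarios:
messages = [
    ("aEBG_Rider", "Uc_EmulateBasicGearing", "evIncrementGear"),
    ("aEBG_Rider", "Uc_EmulateBasicGearing", "evDecrementGear"),
    ("Uc_EmulateBasicGearing", "aEBG_TrainingApp", "evSendCurrentGearing"),
]
ifaces = derive_interfaces(messages)
```

The to_system set maps naturally onto the normal form of the interface block and the from_system set onto its conjugate, mirroring the proxy port pairing described earlier.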

Create an executable state model

This step constructs the normative state machine for the use case as well as instrumenting the actors with their own state machines. The state machine of the use case block is the most interesting because it represents the requirements. Figure 2.9 shows the state machine for the Emulate Basic Gearing use case:

Figure 2.9 – Emulate Basic Gearing state machine

To support the execution, the system functions must be elaborated enough to support the execution and simulation. These system functions include applyResistance(), checkGearing(), and changeGear(). Figure 2.10 shows their simple implementation:

Figure 2.10 – Emulate Basic Gearing system functions

The system variable gear is represented as a Real (from the SysML value type library), representing the gear multiplier, in a fashion similar to gear-inches, a commonly used measure in cycling. The flow properties appliedTorque and resistance are likewise implemented as Reals.

The state machines for the actor blocks are even simpler than those of the use case block. Figure 2.11 shows the Rider state machine:

Figure 2.11 – Rider actor block state machine

Figure 2.12 shows the TrainingApp state machine and the implementation of its displayGearing() function:

Figure 2.12 – TrainingApp actor block state machine

Lastly, some constants are defined. DEFAULT_GEARING is set to the same value as MIN_GEARING; in this case, 30 gear-inches. MAX_GEARING is set to about the same as a 53x10 gearing, 140. The GEAR_INCREMENT is used for incrementing or decrementing the gearing and is set to 5 gear-inches for the purpose of simulation.
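Since Figure 2.10 is not reproduced here, the following Python sketch is an illustrative stand-in for the system functions and constants. The constant values match the text; the function bodies, and especially the resistance formula, are placeholder guesses rather than the book's implementation:

```python
# Constants from the text (gear-inches).
MIN_GEARING = 30.0
DEFAULT_GEARING = MIN_GEARING
MAX_GEARING = 140.0       # about the same as a 53x10 gearing
GEAR_INCREMENT = 5.0      # per up/down shift, for simulation purposes

class EmulateBasicGearing:
    """Sketch of the use case block's system functions."""
    def __init__(self):
        self.gear = DEFAULT_GEARING   # system variable 'gear' (a Real)
        self.resistance = 0.0         # flow property, applied continuously

    def check_gearing(self, requested: float) -> bool:
        """Reject gear changes outside the configured range."""
        return MIN_GEARING <= requested <= MAX_GEARING

    def change_gear(self, delta: float) -> bool:
        requested = self.gear + delta
        if not self.check_gearing(requested):
            return False              # request rejected, gearing unchanged
        self.gear = requested
        return True

    def apply_resistance(self, applied_torque: float, base_level: float = 1.0):
        # Placeholder formula: resistance grows with the base level of
        # resistance, the current gearing, and the applied torque.
        self.resistance = base_level * self.gear * applied_torque
        return self.resistance
```

Even a crude stand-in like this is enough to step through the three scenarios by hand: increments above MAX_GEARING and decrements below MIN_GEARING are rejected, everything else shifts the gear and changes the applied resistance.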

Verify and validate requirements

To facilitate control of the execution, a panel diagram is created. The buttons insert events in the relevant objects and the text boxes display and support modification of the value and flow properties. A panel diagram is a useful feature of the IBM Rhapsody modeling tool used to create these models:

Figure 2.13 – Emulate Basic Gearing panel diagram

The execution of the state model recreates the sequence diagrams. Figure 2.14 shows the recreation of Scenario 1 (Figure 2.5) by the executing model. The creation of such sequence diagrams automatically from execution is another useful Rhapsody feature:

Figure 2.14 – Animated sequence diagram from model execution

During the review, the project lead notices that there is no requirement to display the initial starting value for the gearing before a specific gear has been selected. Additionally, we see that the requirement to notify the training app was missing. These are identified as missing requirements that must be added.

Requirements_change

In this example, we notice that we omitted a requirement to update the rider display of the gearing. The change has already been made to the state machine.

Update requirement set

We add the following requirements to the requirements set:

  • The system shall display the currently selected gear.
  • The system shall default to the minimum gear during initialization.

Add trace links

In this case, we ensure there are trace links back to stakeholder requirements as well as from the use case to the requirements. This is shown in the use case diagram in Figure 2.15. The newly identified requirements are highlighted with a bold border:

Figure 2.15 – Emulate Basic Gearing requirements

Perform a use case and requirements review

The requirements model can now be reviewed by relevant stakeholders. The work products that should be included in the review include all the diagrams shown in this section, the requirements, and the executing model. The use of the executing model allows a what-if examination of the requirements set to be easily done during the review. Such questions as What happens to the gearing if the rider turns the system off and back on? or What is the absolute maximum gearing to be allowed? can be asked. The simulation of the model allows each question either to be answered by running the relevant case or to be flagged as an item that requires resolution.

Functional analysis with activities

Functional analysis can be performed in a number of subtly different ways. In the previous recipe, we started with the sequence diagram to analyze the use case. That is particularly useful when the interesting parts of the use case are the interactions. The workflow in this recipe is slightly different, although it achieves exactly the same objectives. This workflow starts with the development of an activity model and generates scenarios from that. In this recipe, just as in the previous one, when the work is all complete, it is the state machine that forms the normative specification of the use case; the activity diagram is used as a stepping stone along the way. The objective of the workflow, as with the previous recipe, is to create an executable model to identify and fix defects in the requirements, such as missing requirements, or requirements that are incomplete, incorrect, or inaccurate. Overall, this is the most favored workflow among model-based systems engineers.

Purpose

The purpose of the recipe is to create a set of high-quality requirements by identifying and characterizing the key system functions performed by the system during the execution of the use case capability. This recipe is particularly effective when the main focus of the use case is a set of system functions and not the interaction of the system with the actors.

Inputs and preconditions

A use case naming a capability of the system from an actor-use point of view.

Outputs and postconditions

There are several outcomes, the most important of which is a set of requirements accurately and appropriately specifying the behavior of the system for the use case. Additional outputs include an executable use case model, logical system interfaces to support the use case behavior and a supporting logical data schema, and a set of scenarios that can be used later as specifications of test cases.

How to do it…

Figure 2.16 shows the workflow for this recipe. It is similar to the previous recipe. The primary difference is that rather than beginning the analysis by creating scenarios with the stakeholders, it begins by creating an activity model of the set of primary flows from which the scenarios will be derived:

Figure 2.16 – Functional analysis with activities

Identify the use case

This first step is to identify the generic usage of which the scenarios of interest, user stories, and requirements are aspects.

Describe the use case

The description of the use case should include its purpose, general description of the flows, preconditions, postconditions, and invariants (assumptions). Some modelers add the specific actors involved, user stories, and scenarios, but I prefer to use the model itself to contain those relations.

Identify related actors

The related actors are those people or systems outside our scope that interact with the system while it executes the current use case. These actors can send messages to the system, receive messages from the system, or both.

Define the execution context

The execution context is a kind of modeling sandbox that contains an executable component consisting of executable elements representing the use case and related actors. The recommended way to achieve this is to create separate blocks representing the use case and the actors, connected via ports. Having an isolated simulation sandbox allows different systems engineers to progress independently on different use case analyses.

Identify primary functional flows

The activity model identifies the functional flows of the system while it executes the use case capability. These consist of a sequenced set of actions, connected by control flows, with control nodes (notably, decision, merge, fork, and join nodes) where appropriate. In this specific recipe step, the focus is on the primary flows of the system – also known as sunny day flows – and less on the secondary and fault scenarios (known as rainy day scenarios). The actions are either system functions, reception of messages from the actors, sending messages to the actors, or waiting for timeouts.

This activity model is not complete in the sense that it will not include all possible flows within the use case. The later Create executable state model recipe step will include all flows, which is why the state machine, rather than the activity model, is the normative specification of the use case. This activity model allows the systems engineer to begin reasoning about the necessary system behavior. Most systems engineers feel very comfortable with activity models and prefer to begin the analysis here rather than with the scenarios or with the state machine.

Derive use case scenarios

The activity model identifies multiple flows, as indicated by control nodes, such as decision nodes. A specific scenario takes a singular path through the activity flow so that a single activity model results in multiple scenarios. The scenarios are useful because they are easy to review with non-technical stakeholders and because they aid in the definition of the logical interfaces between the system and the actors.
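The relationship between one activity model and many scenarios can be made concrete with a small sketch. In the hypothetical Python fragment below, the activity flow is reduced to an adjacency map; each distinct path from the initial node to the final node corresponds to one derivable scenario (the node names are invented for illustration):

```python
# Illustrative sketch: decision nodes split the flow, so a single activity
# model yields multiple scenarios, one per singular path through it.
def enumerate_paths(flow, node="start", path=None):
    """Depth-first enumeration of every path from 'start' to 'end'."""
    path = (path or []) + [node]
    if node == "end":
        return [path]
    paths = []
    for successor in flow.get(node, []):
        paths.extend(enumerate_paths(flow, successor, path))
    return paths

# Toy activity flow: the decision after 'checkGearing' splits into an
# accept branch and a reject branch, giving two scenarios.
flow = {
    "start": ["receiveGearRequest"],
    "receiveGearRequest": ["checkGearing"],
    "checkGearing": ["changeGear", "rejectRequest"],   # decision node
    "changeGear": ["applyResistance"],
    "applyResistance": ["end"],
    "rejectRequest": ["end"],
}
scenarios = enumerate_paths(flow)
```

Note that loops or concurrency (fork/join) make the path count explode or unbounded, which is one reason only a representative subset of paths is drawn as sequence diagrams in practice.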

Note

The activity diagram can be made complete, but it is usually easier to do that with a state machine. If you prefer to work entirely in the activity diagram, then evolve the activity model to be executable rather than develop a state machine for this purpose.

Create ports and interfaces in the execution context

Once we have a set of scenarios, we've identified the flow from the use case to the actors and from the actors to the system. By inference, this identifies ports relating the actors and the system, and the specific flows within the interfaces that define them.

Create an executable state model

This step identifies what I call the normative state machine. Executing this state machine can recreate each of the scenarios we drew in the Capture use case scenarios section. All states, transitions, and actions represent requirements. Any state elements added only to assist in the execution but that do not represent requirements should be stereotyped «non-normative» to clearly identify this fact. It is also common to create state behavior for the actors in a step known as instrumenting the actor to support the execution of the use case in the execution context.

Verify and validate requirements

Running the execution context for the use case allows us to demonstrate that our normative state machine in fact represents the flows identified by working with the stakeholder. It also allows us to identify flows and requirements that are missing, incomplete, or incorrect. These result in Requirements_change change requests to fix the identified requirements defects.

Requirements_change

Parallel to the development and execution of the use case model, we maintain the textual requirements. This workflow event indicates the need to fix an identified requirements defect.

Update requirement set

In response to an identified requirements defect, we fix the textual requirements by adding, deleting, or modifying requirements. This will then be reflected in the updated model.

Add trace links

Once the use case model and requirements stabilize, we add trace links using the «trace» relation or something similar. This generally means backtraces to stakeholder requirements as well as forward links to any architectural elements that might already exist.

Perform the use case and requirements review

Once the work has stabilized, a review for correctness and compliance with standards may be done. This allows subject matter experts and stakeholders to review the requirements, use case, activities, states, and scenarios for correctness and for quality assurance staff to ensure compliance with modeling and requirements standards.

Example

Let's see an example here.

The example used for this recipe is the Control Resistance use case, shown in Figure 2.17 along with some other use cases:

Figure 2.17 – Use cases for analysis

Describe the use case

All model elements deserve a useful description. In the case of a use case, we typically use the format shown in Figure 2.18:

Figure 2.18 – Control Resistance use case description

Identify related actors

The related actors in this example are the Rider and the Training App. The rider signals the system to change the gearing via the gears control and receives a response in terms of changing resistance, as well as setting the resistance mode to ERG or SIM mode. The training app, when connected, is notified of the current gearing so that it can be displayed, provides a simulated input of incline, and can optionally change between SIM and ERG modes. The relation of the actors to the use case is shown in Figure 2.17.

Define the execution context

The execution context creates blocks that represent the actors and the use case for the purpose of the analysis. In this example, the following naming conventions are observed:

  • The block representing the use case has the use case name (with white space removed) preceded by Uc_. Thus, for this example, the use case block is named Uc_ControlResistance.
  • Blocks representing the actors are given the actor name preceded by the letter a and an abbreviation of the use case. For this use case, the prefix is aCR_, so the actor blocks are named aCR_Rider and aCR_TrainingApp.
  • The interface blocks are named as <use case block>_<actor block>. The names of the two interface blocks are iUc_ControlResistance_aCR_Rider and iUc_ControlResistance_aCR_TrainingApp. The normal form of the interface block is associated with the proxy port on the use case block; the conjugated form is associated with the corresponding proxy port on the actor block.

All these elements are shown on the IBD in Figure 2.19:

Figure 2.19 – Control resistance execution context


Identify the primary functional flow

This step creates an activity model for the primary flows in the use case. The flow consists of a set of steps sequenced by control flows and mediated by a set of control nodes. In this example, we will only consider SIM mode to keep the content short and easy to understand. In SIM mode, we simulate the outside riding experience by measuring the position, speed, and force applied to the pedal, and compute the (simulated) bike inertia, speed, acceleration, and drag. From that and the currently selected gear, the system computes and applies resistance to the pedal's movement. The high-level flow is shown in Figure 2.20:

Figure 2.20 – Activity flow for Compute Resistance

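To make the SIM-mode computation concrete, here is a deliberately simplified C++ sketch of one physics update step. The force model and every coefficient (mass, drag factor, crank-to-wheel ratio) are illustrative assumptions for a medium-fidelity simulation, not the actual physics in the downloadable model.

```cpp
#include <algorithm>
#include <cmath>

// Simplified SIM-mode physics; all coefficients are illustrative placeholders.
struct BikeState {
    double speed = 0.0;        // simulated bike speed, m/s
    double acceleration = 0.0; // m/s^2
    double drag = 0.0;         // N
};

const double PI = 3.14159265358979323846;

// One update of the simulated physics.
//   pedalForce - measured force at the pedal, N
//   gearRatio  - front teeth / rear teeth
//   inclineDeg - simulated incline, degrees
//   dt         - physics update period, s
BikeState stepPhysics(BikeState s, double pedalForce, double gearRatio,
                      double inclineDeg, double dt) {
    const double mass = 85.0;          // rider + simulated bike, kg (assumed)
    const double g = 9.81;
    const double cda = 0.32;           // lumped aerodynamic coefficient (assumed)
    const double crankOverWheel = 0.5; // crank length / wheel radius (assumed)

    // Force driving the (simulated) rear wheel from pedal force and gearing
    double driveForce = pedalForce * crankOverWheel / gearRatio;

    // Aerodynamic drag grows with the square of speed
    s.drag = cda * s.speed * s.speed;

    // Gravity component along the simulated slope
    double slopeForce = mass * g * std::sin(inclineDeg * PI / 180.0);

    s.acceleration = (driveForce - s.drag - slopeForce) / mass;
    s.speed = std::max(0.0, s.speed + s.acceleration * dt);
    return s;
}

// Resistance applied back at the pedal: the reaction of drag and slope
// forces through the drivetrain (again, purely illustrative)
double resistanceAtPedal(const BikeState& s, double gearRatio, double inclineDeg) {
    const double mass = 85.0, g = 9.81, crankOverWheel = 0.5;
    double slopeForce = mass * g * std::sin(inclineDeg * PI / 180.0);
    double wheelLoad = s.drag + std::max(0.0, slopeForce);
    return wheelLoad * gearRatio / crankOverWheel;
}
```

The point of such a sketch is not fidelity but traceability: each computation corresponds to a system function (compute drag, compute acceleration, compute resistance) that must appear as a requirement.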

Directly from the activity diagram, we can identify a number of requirements, shown in tabular form in Table 2.1:

Table 2.1 – Control Resistance requirements (first cut)


Derive use case scenarios

The activity flow in Figure 2.20 can be used to create scenarios in sequence diagrams. It is typical to create a set of scenarios such that each control flow is shown at least once. This is called the minimal spanning set of scenarios. In this case, because of the nature of parallelism, a high-level scenario (Figure 2.21) is developed with the more detailed flows put on the reference scenarios:

Figure 2.21 – Compute Resistance main scenario


The first reference scenario (Figure 2.22) reflects the inputs, gathered via system sensors, of the pedal status. This part of the overall scenario flow provides the necessary data for the computation of resistance:

Figure 2.22 – Process Pedal Inputs scenario


The second reference scenario (Figure 2.23) outlines the execution of the physics model per se. It shows how the simulated bike speed, acceleration, and drag are computed, and how these outputs are then used to compute the resistance the system will apply to the pedal. It is important to note that this is not intended to provide a design but rather to identify and characterize the system functions that must be part of the design:

Figure 2.23 – Execute Physics Model scenario


Create ports and interfaces in the execution context

Now that we have defined some interactions between the system and the actors, we can make the interfaces to support those message exchanges. This is shown in the IBD in Figure 2.24:

Figure 2.24 – Compute Resistance Interfaces


Create an executable state model

Figure 2.25 shows the state machine for the Control Resistance use case:

Figure 2.25 – Control Resistance use case state machine


Astute readers will note that event parameters for sending between the actors and the use case have been added. For example, evSendFilteredPowerToApp now passes measuredPedalForce, of type Real, to the Training App.

To complete the execution, we need to create (simple) implementations for the system functions referenced in the state machine, and create simple state models for the actors to support the simulation. The details of the implementation are not provided here but are available in the downloadable model:

  • setPedalPosition()
  • setPedalSpeed()
  • computePedalCadence()
  • setMeasuredPedalForce()
  • applyTimeFilterToPower()
  • computeInertia()
  • retrieveCurrentIncline()
  • computeDrag()
  • computeSpeed()
  • computeAcceleration()
  • computeResistanceToApplyAtThePedal()
  • applyResistance()
  • storeIncline()
  • computeGearRatio()
  • storeGearRatio()

A few of these functions, while they must be elaborated in the actual design, can have empty implementations in the simulation:

  • sendPedalCadenceToApp()
  • sendFilteredPowerToApp()
  • sendSpeedToApp()
  • sendAccelerationToApp()

Also, to support simulation, the following value properties are defined:

  • gearFront: int – this is the number of teeth in the front (simulated) chainring.
  • gearRear: int – this is the number of teeth in the rear (simulated) cassette ring.
  • gearRatio: Real – this is the ratio gearFront/gearRear.
  • incline: int – this is the simulated incline on the bike, from -15 to +20 degrees.
  • measuredPedalForce: Real – this is the force on the pedals provided by the rider.
  • pedalPosition: Real – this is the position, in degrees, of the pedal.
  • pedalSpeed: Real – this is the angular speed of the pedal movement.
  • cadence: int – this is the pedal RPM (derived directly from pedal speed).

Lastly, we need to define the value properties APP_UPDATE_TM and PHYSICS_UPDATE_TM. In the real world these would run quickly, but we might slow them down for debugging and simulation on the desktop. Here, we'll set APP_UPDATE_TM to 10,000 ms and PHYSICS_UPDATE_TM to 5,000 ms.
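As a sketch of how these value properties and constants hang together, here they are as a plain C++ struct with simplified bodies for computeGearRatio() and computePedalCadence(). Only the names and types come from the model; the function bodies, the default gear teeth, and the assumption that pedalSpeed is in degrees per second are mine, for illustration.

```cpp
// Value properties of the Control Resistance use case block, as a struct.
struct ControlResistanceProps {
    int    gearFront = 50;           // teeth in the front (simulated) chainring
    int    gearRear  = 15;           // teeth in the rear (simulated) cassette ring
    double gearRatio = 0.0;          // gearFront / gearRear
    int    incline   = 0;            // simulated incline, -15 to +20 degrees
    double measuredPedalForce = 0.0; // force on the pedals from the rider
    double pedalPosition = 0.0;      // pedal position, degrees
    double pedalSpeed = 0.0;         // angular pedal speed (assumed deg/s here)
    int    cadence = 0;              // pedal RPM, derived from pedalSpeed
};

// Update periods, slowed down for desktop simulation as described above
const int APP_UPDATE_TM     = 10000; // ms
const int PHYSICS_UPDATE_TM = 5000;  // ms

// computeGearRatio(): gearRatio is simply front teeth over rear teeth
void computeGearRatio(ControlResistanceProps& p) {
    p.gearRatio = static_cast<double>(p.gearFront) / p.gearRear;
}

// computePedalCadence(): degrees per second -> revolutions per minute
void computePedalCadence(ControlResistanceProps& p) {
    p.cadence = static_cast<int>(p.pedalSpeed / 360.0 * 60.0);
}
```

For example, with a 50-tooth chainring and a 25-tooth cog the gear ratio is 2.0, and a pedal speed of 540 deg/s corresponds to a cadence of 90 RPM.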

We also need to instrument the actors for simulation support. A simple state behavioral model for the aCR_Rider is shown in Figure 2.26:

Figure 2.26 – Ride state machine


The state machine for the Training App is shown in Figure 2.27:

Figure 2.27 – State machine for the Training App for the Control Resistance use case


Verify and validate requirements

The simulation is not meant to be a high-fidelity physics simulation of all the forces and values involved, but instead aims to be a medium-fidelity simulation to help validate the set of requirements and to identify missing or incorrect ones. A control panel was created to visualize the behavior and input the values (Figure 2.28):

Figure 2.28 – Control Resistance panel diagram


Simulation of different scenarios results in many sequence diagrams capturing the behavior, such as the (partial) one shown in Figure 2.29:

Figure 2.29 – A (partial) animated sequence diagram example of the Control Resistance use case


Requirements_change

A number of minor requirements defects are identified and flagged to be added to the requirements set.

Update the requirements set

The creation and execution of the use case simulation uncovers several new requirements related to timing:

  • The system shall update the physics model frequently enough to provide the rider with a smooth and road-like experience with respect to resistance.
  • The system shall update the training app with the pedal cadence at least every 1.0 seconds.
  • The system shall update the training app with rider-filtered power output at least every 0.5 seconds.
  • The system shall update the training app with the simulated bike speed at least every 1.0 seconds.

Also, we discover a missing data transmission to the training app:

  • The system shall send the current power output, in watts per kilogram, to the training app at least every 1.0 seconds.

Add trace links

The trace links are updated in the model. This is shown in matrix form in the following screenshot:

Figure 2.30 – Control Requirements use case requirements trace matrix


Perform a use case and requirements review

The requirements model can now be reviewed by relevant stakeholders. The work products in the review should include all the diagrams shown in this section, the requirements, and the executing model. The executing model allows a what-if examination of the requirements set to be easily done during the review. Questions such as How quickly does the resistance control need to be updated to simulate the road riding experience? or What is the absolute maximum resistance that must be supported? can be asked. Simulation of the model allows each such question either to be answered by running the relevant case or to be flagged as an item that requires resolution.

Functional analysis with state machines

Sometimes, beginning with the state machine is the best approach for use case analysis. This is particularly true when the use case is obviously modal in nature, with different operational modes. This approach generally requires systems engineers who are very comfortable with state machines. This recipe is much like the previous use case analyses and can be used instead; the output is basically the same for all three of these recipes. The primary differences are that no activity diagram is created and the sequence diagrams are created from the executing use case state behavior.

Purpose

The purpose of the recipe is to create a set of high-quality requirements by identifying and characterizing the key system functions performed by the system during the execution of the use case capability. This recipe is particularly effective when the use case is clearly modal in nature and the systems engineers are highly skilled in developing state machines.

Inputs and preconditions

A use case naming a capability of the system from an actor-use point of view.

Outputs and postconditions

There are several outcomes, the most important of which is a set of requirements accurately and appropriately specifying the behavior of the system for the use case. Additional outputs include an executable use case model, logical system interfaces to support the use case behavior and a supporting logical data schema, and a set of scenarios that can be used later as specifications of test cases.

How to do it…

Figure 2.31 shows the workflow for this recipe. It is similar to the previous recipe. The primary difference is that rather than beginning the analysis with an activity model of the primary flows, it begins by creating the use case state machine, from which the scenarios will be derived:

Figure 2.31 – Functional analysis with states


Identify the use case

This first step is to identify the generic usage of which the scenarios of interest, user stories, and requirements are aspects.

Describe the use case

The description of the use case should include its purpose, a general description of the flows, preconditions, postconditions, and invariants (assumptions). Some modelers add the specific actors involved, user stories, and scenarios, but I prefer to use the model itself to contain those relations.

Identify related actors

The related actors are those people or systems outside our scope that interact with the system while it executes the current use case. These actors can send messages to the system, receive messages from the system, or both.

Define the execution context

The execution context is a kind of modeling sandbox that contains an executable component consisting of executable elements representing the use case and related actors. The recommended way to achieve this is to create separate blocks representing the use case and the actors, connected via ports. Having an isolated simulation sandbox allows different systems engineers to progress independently on different use case analyses.

Create ports and interfaces in the execution context

As the analysis identifies flows from the use case to the actors and from the actors to the system, we can infer the ports relating the actors and the system, and the specific flows within the interfaces that define those ports.

Create executable state model

This step identifies what I call the normative state machine. Executing this state machine can recreate each of the scenarios we drew in the Capture use case scenarios section of the Functional analysis with scenarios recipe. Almost all states, transitions, and actions represent requirements. Any state elements added only to assist in the execution that do not represent requirements should be stereotyped «non-normative» to clearly identify this fact. It is also common to create state behavior for the actors in a step known as instrumenting the actor to support the execution of the use case in the execution context.

Generate use case scenarios

The state model identifies multiple flows, driven by event receptions and transitions, executing actions along the way. A specific scenario takes a singular path through the state flow, so a single state machine model results in multiple scenarios. The scenarios are useful because they are easy to review with non-technical stakeholders and because they aid in the definition of the logical interfaces between the system and the actors. Because the state machine is executable, the scenarios can be created automatically from its execution, provided that you are using a supportive tool.

Verify and validate requirements

Running the execution context for the use case allows us to demonstrate that our normative state machine in fact represents the flows identified by working with the stakeholders. It also allows us to identify flows and requirements that are missing, incomplete, or incorrect. These result in Requirements_change change requests to fix the identified requirement defects.

Requirements_change

Parallel to the development and execution of the use case model, we maintain the textual requirements. This workflow event indicates the need to fix an identified requirements defect.

Update the requirement set

In response to an identified requirements defect, we fix the textual requirements by adding, deleting, or modifying requirements. This will then be reflected in the updated model.

Add trace links

Once the use case model and requirements stabilize, we add trace links using the «trace» relation or something similar. These are generally backtraces to stakeholder requirements as well as forward links to any architectural elements that might already exist.

Perform a use case and requirements review

Once the work has stabilized, a review for correctness and compliance with standards may be done. This allows subject matter experts and stakeholders to review the requirements, use case, states, and scenarios for correctness, and for quality assurance staff to ensure compliance with modeling and requirements standards.

Example

Let's see an example.

The example used for this recipe is the Emulate Front and Rear Gearing use case. This use case is shown in Figure 2.32, along with some closely related use cases:

Figure 2.32 – Emulate Front and Rear Gearing use case in context


Describe the use case

All model elements deserve a useful description. In the case of a use case, we typically use the format shown in Figure 2.33:

Figure 2.33 – Emulate Front and Rear Gearing use case description


Identify related actors

The related actors in this example are the Rider and the Training App. The rider signals the system to change the gearing via the gears control and receives a response in terms of changing resistance as well as setting the resistance mode to ERG or SIM mode. The training app, when connected, is notified of the current gearing so that it can be displayed, provides a simulated input of incline, and can optionally change between SIM and ERG modes. The relation of the actors to the use case is shown in Figure 2.32.

Define the execution context

The execution context creates blocks that represent the actors and the use case for the purpose of the analysis. In this example, the following naming conventions are observed:

  • The block representing the use case has the use case name (with white space removed) preceded by Uc_. Thus, for this example, the use case block is named Uc_EmulateFrontandRearGearing.
  • Blocks representing the actors are given the actor name preceded by a followed by an abbreviation of the use case name. For this use case, the prefix is aEFRG_, so the actor blocks are named aEFRG_Rider and aEFRG_TrainingApp.
  • The interface blocks are named i<use case block>_<actor block>. The names of the two interface blocks are iUc_EmulateFrontandRearGearing_aEFRG_Rider and iUc_EmulateFrontandRearGearing_aEFRG_TrainingApp. The normal form of the interface block is associated with the proxy port on the use case block; the conjugated form is associated with the corresponding proxy port on the actor block.

All these elements are shown in the IBD in Figure 2.34:

Figure 2.34 – Emulate Front and Rear Gearing use case execution context


Create ports and interfaces in the execution context

The (empty) ports and interfaces are added between the use case block and the actor blocks, as shown in Figure 2.34. These will be elaborated as the development proceeds in the next step.

Create an executable state model

Figure 2.35 shows the state machine for the Emulate Front and Rear Gearing use case. It is important to remember that the state machine is a restatement of textual requirements in a more formal language and not a declaration of design. The purpose of creating this state machine during this analysis is to identify requirement defects, not to design the system:

Figure 2.35 – State machine for Emulate Front and Rear Gearing


The use case block contains a number of system functions, value properties, and constants. These are shown in Table 2.2:


Table 2.2 – Emulate Front and Rear Gearing use case features

The behaviors of the operations on the state machine are system functions. These must be elaborated for the purpose of simulation, and they trace to requirements. For example, Figure 2.36 shows the behavior of the system functions that set up the gearing defaults for the rear cassette and front chainrings, and the function that computes the gear inches when the gear is changed. This can be done in the action language used for the model (C++ in this case) or in activity diagrams. For this example, I used activity diagrams:

Figure 2.36 – Setting the defaults for Emulate Front and Rear Gearing

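The gear-inches computation set up in Figure 2.36 follows the standard cycling formula: wheel diameter in inches multiplied by the ratio of front to rear teeth. A minimal C++ sketch is shown below; the wheel diameter constant and the default gearing values are illustrative stand-ins for the defaults set in the model's setup functions, not the model's actual values.

```cpp
#include <vector>

// Conventional wheel diameter used for gear-inches calculations (assumed)
const double WHEEL_DIAMETER_IN = 27.0;

// Gear inches = wheel diameter (in) * front chainring teeth / rear cog teeth
double computeGearInches(int frontTeeth, int rearTeeth) {
    return WHEEL_DIAMETER_IN * static_cast<double>(frontTeeth) / rearTeeth;
}

// Illustrative default gearing, standing in for the model's setup functions
std::vector<int> defaultChainrings() { return {34, 50}; }
std::vector<int> defaultCassette()   { return {11, 12, 13, 14, 15, 17, 19, 21, 24, 28}; }
```

For instance, a 54-tooth chainring driving a 27-tooth cog gives 54 gear inches with a 27-inch wheel.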

To support the simulation, the actor block aEFRG_Rider was instrumented with a state machine to interact with the use case block. This is shown in the following screenshot:

Figure 2.37 – Rider actor state machine


Generate use case scenarios

Scenarios are specific interaction sequences that identify the ordering, timing, and values of different example uses of a system. Sequence diagrams are generally easy to understand, even for non-technical stakeholders. In this recipe, sequences are created by exercising the use case state machine, changing the inputs to drive different transition paths. It is important to understand that there is usually an infinite set of possible scenarios, so we must constrain ourselves to a small representative set. The criterion we recommend is the minimal spanning set: a set of scenarios such that each transition path and action is executed at least once. More scenarios of interest can be added, but the set of sequences should at least meet this basic criterion.
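To make the minimal spanning set criterion concrete, the following C++ sketch treats the state machine as a flat list of transitions, enumerates simple paths from the initial state, and then greedily keeps paths until every transition has been executed at least once. This is one illustrative way to derive such a set by hand, not the algorithm of any particular tool, and it ignores guards, hierarchy, and parallelism for simplicity.

```cpp
#include <set>
#include <string>
#include <vector>

// A transition in the use case state machine: 'trigger' takes 'from' to 'to'.
struct Transition { std::string from, trigger, to; };

using Path = std::vector<int>; // indices into the transition list

// Depth-first enumeration of simple paths (each transition used at most once
// per path); a path that cannot be extended is recorded as a scenario.
static void walk(const std::vector<Transition>& ts, const std::string& state,
                 Path& cur, std::set<int>& used, std::vector<Path>& out, int depth) {
    bool extended = false;
    if (depth < 8) { // bound path length for this sketch
        for (int i = 0; i < static_cast<int>(ts.size()); ++i) {
            if (ts[i].from == state && !used.count(i)) {
                used.insert(i); cur.push_back(i);
                walk(ts, ts[i].to, cur, used, out, depth + 1);
                cur.pop_back(); used.erase(i);
                extended = true;
            }
        }
    }
    if (!extended && !cur.empty()) out.push_back(cur);
}

// Greedy cover: keep a candidate path only if it executes at least one
// transition not yet covered by the scenarios chosen so far.
std::vector<Path> spanningScenarios(const std::vector<Transition>& ts,
                                    const std::string& initial) {
    std::vector<Path> all, chosen;
    Path cur;
    std::set<int> used;
    walk(ts, initial, cur, used, all, 0);
    std::set<int> covered;
    for (const Path& p : all) {
        bool adds = false;
        for (int i : p) if (!covered.count(i)) adds = true;
        if (adds) { chosen.push_back(p); for (int i : p) covered.insert(i); }
    }
    return chosen;
}
```

For a tiny machine (Idle, Riding) with a start transition, a shift self-transition, and a stop transition, a single scenario that starts, shifts, and stops covers all three transitions, so the spanning set has exactly one path.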

Let's consider two different scenarios. The first (Figure 2.38) focuses on setting up the gearing for the bike prior to riding:

Figure 2.38 – Scenario for the gearing setup


The second scenario shows the rider changing gears while riding in Figure 2.39:

Figure 2.39 – Scenario for gear changes while riding


Verify and validate requirements

The creation of the state machines in the previous section and their execution allow us to identify missing, incorrect, or incomplete requirements. The panel diagram in Figure 2.40 allows us to drive different scenarios and to perform what-if analyses to explore the requirements:

Figure 2.40 – Emulate Front and Rear Gearing panel diagram


Requirements_change

Parallel to the development and execution of the use case model, we maintain the textual requirements. This workflow event indicates the need to fix an identified requirements defect.

Update the requirement set

In this example, we'll show the requirements in a table in the modeling tool. Figure 2.41 shows the newly added requirements:

Figure 2.41 – Emulate Front and Rear Gearing requirements


Add trace links

Now that we've identified the requirements, we can add them to the model and add trace links to the Emulate Front and Rear Gearing use case. This is shown in the table in Figure 2.42:

Figure 2.42 – Emulate Front and Rear Gearing requirements trace


Perform a use case and requirements review

With the analysis complete and the requirements added, a review can be conducted to evaluate the set of requirements. This review typically includes various subject matter experts in addition to the project team.

Functional analysis with user stories

The other functional analysis recipes in this chapter are fairly rigorous and use executable models to identify missing and incorrect requirements. User stories can be used for simple use cases that don't have complex behaviors. In the other functional analysis recipes, the validation of the use case requirements can use a combination of subject matter expert review, testing, and even formal mathematical analysis prior to their application to the system design. User stories only permit validation via review and so are correspondingly harder to verify as complete, accurate, and correct.

A little bit about user stories

User stories are approximately equivalent to scenarios in that both scenarios and user stories describe a singular path through a use case. Both are partially constructive in the sense that, individually, they describe only part of the overall use case. User stories do it with natural language text, while scenarios do it with SysML sequence diagrams. The difference between user stories and scenarios is summarized in Figure 2.43:

Figure 2.43 – User story or scenarios


User stories have a canonical form:

As a <user> I want <feature> so that <reason>|<outcome>

A few examples of user stories are provided in Chapter 1, Basics of Agile Systems Modeling, in the Estimating effort recipe. Here's one of them.

User Story: Set Resistance Under User Control

As a rider, I want to set the resistance level provided to the pedals to increase or decrease the effort for a given gearing, cadence, and incline so that the system simulates the road riding effort.

Each user story represents a small set of requirements. A complete set of user stories includes almost all requirements traced by the use case.

In SysML, we represent user stories as stereotypes of use cases and use «include» relations to indicate the use case to which the user story applies. The stereotype adds the acceptance_criteria tag to the user story so that it is clear what it means to satisfy the user story. An example relating a use case, user stories, and requirements is shown in Figure 2.44:

Figure 2.44 – User stories as a stereotype of a use case


Here are some guidelines for developing good user stories:

  • Focus on the users: Avoid discussing or referencing design, but instead focus on the user-system interaction.
  • Use personae to discover the stories: Most systems have many stakeholders with needs to be met. Each user story represents a single stakeholder role. Represent all the users with the set of user stories.
  • Develop user stories collaboratively: User stories are a lightweight analytic technique and can foster good discussions between the product owner and stakeholders, resulting in the identification of specific requirements.
  • Keep the stories simple and precise: Each story should be easy to understand; if it is complex, then try to break it up into multiple stories.
  • Start with epics or use cases: User stories are small, fine-grained things, while epics and use cases provide a larger context.
  • Refine your stories: As your understanding deepens and requirements are uncovered, the user stories should be updated to reflect this deeper understanding.
  • Be sure to include acceptance criteria: Acceptance criteria complete the narrative by providing a clear means by which the system design and implementation can be judged to appropriately satisfy the user need.
  • Stay within the scope of the owning epic or use case: While it is true that in simple systems, user stories may not have an owner epic or use case, most will. When there is an owner epic or use case, the story must be a subset of that capability.
  • Cover all the stories: The set of user stories should cover all variant interaction paths of the owning epic or use case.
  • Don't rely solely on user stories: Because user stories are natural language narratives, it isn't clear how they represent all the quality of service requirements. Be sure to include safety, reliability, security, performance, and precision requirements by tracing each user story to those requirements.
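As an illustration of the information the «user story» stereotype carries, the following C++ sketch models a story in canonical form together with its acceptance criteria and its trace links to requirements. The types, field names, and well-formedness check are assumptions made here for illustration; they are not a modeling tool API.

```cpp
#include <string>
#include <vector>

// A user story in canonical form, with the acceptance_criteria tag content
// and trace links to requirement IDs (all names illustrative).
struct UserStory {
    std::string role, feature, outcome; // As a <role> I want <feature> so that <outcome>
    std::vector<std::string> acceptanceCriteria;
    std::vector<std::string> tracedRequirements; // e.g. quality of service requirement IDs
};

// Render the canonical "As a ... I want ... so that ..." form
std::string canonicalForm(const UserStory& s) {
    return "As a " + s.role + " I want " + s.feature + " so that " + s.outcome;
}

// Guideline checks from the list above: every story needs acceptance
// criteria and should trace to at least one requirement
bool isWellFormed(const UserStory& s) {
    return !s.acceptanceCriteria.empty() && !s.tracedRequirements.empty();
}
```

A review tool built on such a structure could flag stories that lack acceptance criteria or trace to no requirements, mechanizing two of the guidelines above.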

Purpose

User stories are a lightweight analytic technique for understanding and organizing requirements. Most commonly, these are stories within the larger capability context of an epic or use case. User stories are approximately equivalent to a scenario.

Inputs and preconditions

A use case naming a capability of the system from an actor-use point of view.

Outputs and postconditions

The most important outcome is a set of requirements accurately and appropriately specifying the behavior of the system for the use case and acceptance criteria in terms of what it means to satisfy them.

How to do it…

Figure 2.45 shows the workflow for this recipe. It is a lighter-weight, more informal approach than the preceding recipes but may be useful for simple use cases. Note that unlike the previous recipes, it does not include a behavioral specification in a formal language such as activities or state machines:

Figure 2.45 – Functional analysis with user stories


Identify the use case

This first step is to identify the generic usage of which the scenarios of interest, user stories, and requirements are aspects.

Describe the use case

The description of the use case should include its purpose, a general description of the flows, preconditions, postconditions, and invariants (assumptions). Some modelers add the specific actors involved, user stories, and scenarios, but I prefer to use the model itself to contain those relations.

Identify related actors

The related actors are those people or systems outside our scope that interact with the system while it executes the current use case. These actors can send messages to the system, receive messages from the system, or both.

State the user stories

This step includes more than creating the As a <role> … statements. It also includes creating «include» relations from the owning use case and the addition of acceptance criteria for each user story. If this is the first time this is being done, you will also have to create a «user story» stereotype that applies to use cases to be able to create the model elements.

Specify the related requirements

User stories are a way to capture required system behavior from the actor's perspective. They generally represent a small number of textual system requirements. This step enumerates them.

Identify the quality of service requirements

It is very common to forget to include various kinds of qualities of service. This step is an explicit reminder to specify how well the services are provided. Common qualities of service include safety, security, reliability, performance, precision, fidelity, and accuracy.

Verify and validate the requirements

For this recipe, validating the requirements is done in a review with the relevant stakeholders. This should cover the use case, the set of user stories, the individual user stories and their acceptance criteria, and the functional and quality of service requirements.

Requirements_change

Parallel to the development and execution of the use case model, we maintain the textual requirements. This workflow event indicates the need to fix an identified requirements defect.

Update the requirement set

In response to an identified requirement's defect, we fix the textual requirements by adding, deleting, or modifying requirements. This will then be reflected in the updated model.

Add trace links

Once the use case model and requirements stabilize, we add trace links using the «trace» relation or something similar. These are generally backtraces to stakeholder requirements as well as forward links to any architectural elements that might already exist.

Perform a use case and requirements review

Once the work has stabilized, a review for correctness and compliance with standards may be done. This allows subject matter experts and stakeholders to review the requirements, use case, and user stories for correctness, and for quality assurance staff to ensure compliance with modeling and requirements standards.

Example

Here's an example.

Identify the use case

For this recipe, we will analyze the Emulate DI Shifting use case. In many ways, this use case is an ideal candidate for user stories because the use case is simple and not overly burdened with quality of service requirements.

Describe the use case

The use case description is shown in Figure 2.46:

Figure 2.46 – Description of the Emulate DI Shifting use case


Note

Interested readers can learn more about DI shifting here: https://en.wikipedia.org/wiki/Electronic_gear-shifting_system

Identify related actors

The only actor in this use case is the Rider, as shifting gears is one of the three key ways that the Rider interacts with the system (the other two being pedaling and applying the brakes).

State the user stories

Figure 2.47 shows the three identified user stories for the use case: using buttons to shift gears, handling gearing cross-over on upshifting, and handling gearing cross-over on downshifting. Note that this diagram is very similar to Figure 2.44; however, rather than using an icon for the user stories, it uses standard SysML notation. Additionally, the canonical form of the user story in the description and the acceptance criteria in the tag are exposed in comments:

Figure 2.47 – Emulate DI Shifting user stories


Specify the related requirements

As these are simple user stories, there are a small number of functional requirements. See Figure 2.48:

Figure 2.48 – Emulate DI Shifting functional requirements


Identify the quality of service requirements

The previous step specified a small number of requirements but didn't clarify how well these system functions are to be performed. Most notably, performance and reliability requirements are missing. These are added in Figure 2.49, shown this time in a requirements table:

Figure 2.49 – Emulate DI Shifting quality of service and functional requirements


Verify and validate the requirements

The next step is to validate the user stories and related requirements with the stakeholders to ensure their correctness, and look for missing, incorrect, or incomplete requirements.

Requirements change

During this analysis, a stakeholder notes that nothing is said about how the system transitions between mechanical shifting and DI shifting. The following requirements are added:

  • The system shall enter DI Shifting Mode by selecting that option in the Configuration App.
  • Once DI Shifting Mode is selected, this selection shall persist across resets, power resets, and software updates.
  • Mechanical shifting shall be the default on startup or after a factory-settings reset.
  • The system shall leave DI Shifting mode when the user selects the Mechanical Shifting option in the Configuration App.
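The persistence behavior these requirements describe can be sketched in code. The fragment below is a minimal illustration, assuming a file-backed configuration store; the file name, mode names, and function names are hypothetical, not part of the specification.

```python
import json
from pathlib import Path

CONFIG_FILE = Path("shifting_mode.json")  # hypothetical persistent store
DEFAULT_MODE = "MECHANICAL"               # default on startup or factory reset

def load_shifting_mode() -> str:
    """Return the persisted mode, defaulting to mechanical shifting."""
    try:
        return json.loads(CONFIG_FILE.read_text())["mode"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return DEFAULT_MODE

def select_shifting_mode(mode: str) -> None:
    """Persist the rider's selection so it survives resets and updates."""
    if mode not in ("MECHANICAL", "DI"):
        raise ValueError(f"unknown mode: {mode}")
    CONFIG_FILE.write_text(json.dumps({"mode": mode}))

def factory_reset() -> None:
    """A factory-settings reset reverts to mechanical shifting."""
    CONFIG_FILE.unlink(missing_ok=True)

select_shifting_mode("DI")
print(load_shifting_mode())  # DI
```

Because the selection lives outside volatile memory, it survives a power reset; deleting the stored selection models the factory-reset requirement.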

Update the requirements set

The requirements are updated to reflect the stakeholder input from earlier.

Add trace links

Trace links from both the use case and user stories to the requirements are added. These are shown in diagrammatic form in Figure 2.50. Note: to simplify the diagram, the figure does not show the Emulate DI Shifting use case's trace links to all of these requirements:

Figure 2.50 – Emulate DI Shifting trace links


Perform a use case, user story, and requirements review

With the analysis complete and the requirements added, a review can be conducted to evaluate the set of requirements. This review typically includes various subject matter experts in addition to the project team.

Model-based safety analysis

The term safety can be defined as freedom from harm. Safety is one of the three pillars of the more general concern of system dependability. Safety is generally considered with respect to the system causing or allowing physical harm to persons, up to and including death. Depending on the industry, different systems must conform to different safety standards, such as DO-178 (airborne software), ARP4761 (aerospace systems), IEC 61508 (electronic systems), ISO 26262 (automotive safety), IEC 63204 (medical), IEC 60601 (medical), and EN50159 (railway), just to name a few. While there is some commonality among the standards, there are also a number of differences that you must take into account when developing systems to comply with those standards.

This recipe provides a generic workflow applicable to all these standards, but you may want to tailor it for your specific needs. Note that we recommend this analysis is done on a per-use case basis so that the analysis of each relevant use case includes safety requirements in addition to the functional and quality of service requirements.

A little bit about safety analysis

Some key terms for safety analysis are as follows:

  • Accident – A loss of some kind, such as injury, death, equipment damage, or financial loss. Also known as a mishap.
  • Risk – The product of the likelihood of an accident and its severity.
  • Hazard – A set of conditions and/or events that inevitably results in an accident.
  • Fault tolerance time – The period of time a system can manifest a fault before an accident is likely to occur.
  • Safety control measure – An action or mechanism that improves system safety either by 1) reducing the likelihood of an accident, hazard, or risk or 2) reducing its severity.

The terms faults, failures, and errors are generally used in one of three ways, depending on the standard employed:

  • Faults lead to failures, which lead to errors:

    a. Fault – An incorrect step, process, or data.

    b. Failure – The inability of a system or component to perform its required function.

    c. Error – A discrepancy between an actual value or action and the theoretically correct value or action.

    d. A fault at one level can lead to a failure one level up.

  • Faults are actual behaviors that are in conflict with specified or desired behaviors:

    a. Fault – Either a failure or an error.

    b. Failure – An event that occurs at a point in time when a system or component performs incorrectly.

    - Failures are random and may be characterized with a probability distribution.

    c. Error – A condition in which a system or component systematically fails to achieve its required function.

    - Errors are systematic and always exist, even if they are not manifest.

    - Errors are the result of requirement, design, implementation, or deployment mistakes, such as a software bug.

    d. Manifest – When a fault is visible. Faults may be manifest or latent.

  • Faults are undesirable anomalies in systems or software (ARP-4761):

    Failure – A loss of function or a malfunction of a system

    Error – The occurrence arising as a result of an incorrect action or decision by personnel operating or maintaining a system, or a mistake in the specification, design, or implementation

The most common way to perform the analysis is with a Fault Tree Analysis (FTA) diagram. This is a causality diagram that relates normal conditions and events, and abnormal conditions and events (such as faults and failures), with undesirable conditions (hazards). A Hazard Analysis is generally a summary of the safety analysis from one or more FTAs.

FTA

An FTA diagram connects nodes with logic flows to aid understanding of the interactions of elements relevant to the safety concept. Nodes are either events, conditions, outcomes, or logical operators, as shown in Figure 2.51. See https://www.sae.org/standards/content/arp4761/ for a good discussion of FTA diagrams:

Figure 2.51 – FTA elements


The logical operators take one or more inputs and produce a singular output. The AND operator, for example, produces a TRUE output if both its inputs are TRUE, while the OR operator returns TRUE if either of its inputs is TRUE. There is also a TRANSFER operator, which allows an FTA diagram to be broken up into subdiagrams.
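The gate semantics can be made concrete with a small evaluator. The following Python sketch builds a miniature fault tree for a hypothetical failure-to-brake hazard; the node names and tree shape are illustrative assumptions, not taken from any standard.

```python
from typing import Callable, Dict

Node = Callable[[Dict[str, bool]], bool]

# Gates combine their inputs; leaves look up whether a primitive
# condition or event is present in the environment.
def AND(*inputs: Node) -> Node:
    return lambda env: all(node(env) for node in inputs)

def OR(*inputs: Node) -> Node:
    return lambda env: any(node(env) for node in inputs)

def leaf(name: str) -> Node:
    return lambda env: env[name]

# Hypothetical hazard: braking is intended AND at least one fault is present
failure_to_brake = AND(
    leaf("driver_intends_to_brake"),
    OR(leaf("pedal_input_fault"),
       leaf("internal_fault"),
       leaf("wheel_assembly_fault")),
)

env = {"driver_intends_to_brake": True, "pedal_input_fault": False,
       "internal_fault": True, "wheel_assembly_fault": False}
print(failure_to_brake(env))  # True: one fault plus braking intent manifests the hazard
```

Evaluating the tree against different environments is exactly the reasoning an FTA diagram supports visually: which combinations of conditions propagate up to the hazard.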

Figure 2.52 shows an example FTA diagram. This diagram shows the safety concerns around an automotive braking system. The hazard under consideration is Failure to Brake. The diagram shows that this happens when the driver intends to brake and at least one of three conditions is present: a pedal input fault, an internal fault, or a wheel assembly fault:

Figure 2.52 – Example FTA diagram


Cut sets

A cut is a collection of faults that, taken together, can lead to a hazard. A cut set is the set of such collections such that all possible paths from the primitive conditions and events to the hazard have been accounted for. In general, if you consider n primitive conditions as binary (present or not present), then there are 2^n cuts that must be examined. Consider the simple FTA in Figure 2.53. The primitive conditions are marked as a through e:

Figure 2.53 – Cut set example


With 5 primitive conditions, 32 prospective cut sets should be considered, of which only 3 can lead to the hazard manifestation, as shown in Figure 2.54. Only these three need to be subject to the addition of a safety measure:

Figure 2.54 – Cut sets example (2)

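Cut-set enumeration can also be done programmatically. The sketch below uses a hypothetical five-primitive tree (not the exact structure of Figure 2.53): it brute-forces all 2^5 assignments and then reduces the resulting cuts to their minimal cut sets, which are the combinations that actually need safety measures.

```python
from itertools import product

# Hypothetical tree over primitives a..e (illustrative only):
# hazard = a AND (b OR (c AND d) OR e)
def hazard(a, b, c, d, e):
    return a and (b or (c and d) or e)

names = "abcde"
# A cut is a set of present primitives that manifests the hazard;
# a minimal cut set contains no smaller cut.
cuts = [frozenset(n for n, v in zip(names, bits) if v)
        for bits in product([False, True], repeat=5)
        if hazard(*bits)]
minimal = [c for c in cuts if not any(other < c for other in cuts)]
print(sorted(sorted(m) for m in minimal))  # [['a', 'b'], ['a', 'c', 'd'], ['a', 'e']]
```

For this illustrative tree, only three minimal combinations lead to the hazard, so only those three need to be examined for the addition of safety measures.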

Hazard analysis

There is normally one FTA diagram per identified hazard, although that FTA diagram can be decomposed into multiple FTA diagrams via the transfer operator. A system, however, normally has multiple hazards. These are summarized into a hazard analysis. A hazard analysis summarizes the hazard-relevant metadata, including the hazard name, description, severity, likelihood, risk, tolerance time, and possibly, related safety-relevant requirements and design elements.

UML Dependability Profile

I have developed a UML Dependability Profile that can be applied to UML and SysML models in the Rhapsody tool. It is free to download from https://www.bruce-douglass.com/safety-analysis-and-design. The ZIP repository includes instructions on the installation and use of the profile. All the FTA diagrams in this recipe were created in Rhapsody using this profile.

Purpose

The purpose of this recipe is to create a set of safety-relevant requirements for the system under development by analyzing safety needs.

Inputs and preconditions

A use case naming a capability of the system from an actor-use point of view that has been identified, described, and for which relevant actors have been identified. Note: this recipe is normally performed in parallel with one of the functional analysis recipes from earlier in this chapter.

Outputs and postconditions

The most important outcome is a set of requirements specifying how the system will mitigate or manage its safety concerns. Additionally, a safety concept is developed identifying the need for a set of safety control measures, which is summarized in a hazard analysis.

How to do it…

Figure 2.55 shows the workflow for the recipe:

Figure 2.55 – Model-based safety analysis workflow


Identify the hazards

A hazard is a condition that can lead to an accident. This step identifies the hazards relevant to the use case under consideration that could arise from the system behavior in its operational context.

Describe the hazards

Hazards are specified by their safety-relevant metadata. This generally includes the hazard name, description, likelihood, severity, risk, and safety integrity level, adopted from the relevant safety standard.

Identify related conditions and events

This step identifies the conditions and events related to the hazard, including the following:

  • Required conditions
  • Normal events
  • Hazardous events
  • Fault conditions
  • Resulting conditions

Describe conditions and events

Each condition and event should be described. A typical set of aspects of such a description includes the following:

  • Overview
  • Effect
  • Cause
  • Current controls
  • Detection mechanisms
  • Failure mode
  • Likelihood or Mean Time Between Failure (MTBF)
  • Severity
  • Recommended action
  • Risk priority (product of likelihood and severity or MTBF/severity)
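The risk priority computation in the last bullet can be sketched directly. In this minimal illustration, the 1-10 scales and the fault entries are assumptions for demonstration, not values from the example system.

```python
# Risk priority as defined above: the product of likelihood and severity.
# Scales (1-10) and the specific fault entries are illustrative assumptions.
def risk_priority(likelihood: int, severity: int) -> int:
    return likelihood * severity

faults = [
    ("Gas Supply Valve Fault", 2, 10),
    ("Improper Intubation", 4, 9),
    ("Message Corruption", 3, 6),
]

# Rank highest-risk faults first to prioritize recommended actions
for name, lik, sev in sorted(faults, key=lambda f: -risk_priority(f[1], f[2])):
    print(f"{name}: RPN = {risk_priority(lik, sev)}")
```

Sorting by the risk priority number gives a defensible ordering for where to spend safety-measure effort first.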

Create a causality model

This step constructs an FTA connecting the various nodes with logic flows and logic operators flowing from primitive conditions up to resulting conditions and, ultimately, to the hazard.

Identify cut sets

Identify the relevant cuts from all possible cut sets to ensure that each is safe enough to meet the safety standard being employed. This typically requires the addition of safety measures, as discussed in the next step.

Add safety measures

Safety measures are technical means or usage procedures by which safety concerns are mitigated. All safety measures either reduce the likelihood or the severity of an accident. In this analysis, care should be taken to specify the effect of the measures rather than their implementation, as much as possible. Design-level hazard analysis will be conducted later to ensure the adequacy of the design realization of the safety measures specified here.

Review the safety concept

This step reviews the analysis and the set of safety measures to ensure their adequacy.

Add safety requirements

The safety requirements specify what the design, context, or usage must meet in order to be adequately safe. These requirements may be specially annotated to indicate their safety relevance or may just be treated as requirements that the system must satisfy.

Example

Let's see an example.

The Pegasus example problem isn't ideal for showing safety analysis because it isn't a safety-critical system. For that reason, we will use a different example for this recipe.

Problem statement – medical gas mixer

The Medical Gas Mixer (MGM) takes in gas from wall supplies for O2, He, N2, and air and mixes them and delivers a flow to a medical ventilator. When operational, the flow must be in the range of 100 ml/min to 1,500 ml/min with a delivered O2 percentage (known as the Fraction of Delivered Oxygen, or FiO2) of no less than 21%. The flows from the individual gas sources are selected by the physician via the ventilator's interface.
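The delivery envelope in the problem statement can be captured as a simple range check, which is the kind of check that reappears later as a safety measure on commanded values. The function name and return convention below are illustrative assumptions, with flow in ml/min and FiO2 in percent.

```python
# Envelope from the problem statement: flow 100-1,500 ml/min, FiO2 >= 21%.
MIN_FLOW_ML_MIN, MAX_FLOW_ML_MIN = 100, 1500
MIN_FIO2_PCT = 21.0

def validate_command(flow_ml_min: float, fio2_pct: float) -> list:
    """Return the list of range violations for a commanded mixture."""
    errors = []
    if not MIN_FLOW_ML_MIN <= flow_ml_min <= MAX_FLOW_ML_MIN:
        errors.append(f"flow {flow_ml_min} ml/min outside "
                      f"{MIN_FLOW_ML_MIN}-{MAX_FLOW_ML_MIN} ml/min")
    if fio2_pct < MIN_FIO2_PCT:
        errors.append(f"FiO2 {fio2_pct}% below {MIN_FIO2_PCT}%")
    return errors

print(validate_command(50, 18))   # both limits violated
print(validate_command(500, 40))  # [] - command is within the envelope
```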

Neonates face an additional hazard of hyperoxia – too much oxygen in the blood, as this can damage their retinas and lungs.

In this example, the focus of our analysis is the Mix Gases use case.

Identify the hazards

The fundamental hazard of this system is hypoxia – delivering too little oxygen to sustain health. The average adult breathes about 7-8 liters of air per minute, resulting in a delivered oxygen flow of around 1,450 ml O2/minute. For neonates, required flow can be as low as 40 ml O2/minute, while for large adults the need might be as high as 4,000 ml O2/minute at rest.

Describe the hazards

The «Hazard» stereotype includes a set of tags for capturing the hazard metadata. This is shown in Figure 2.56:

Figure 2.56 – Mix Gases hazards


Identify related conditions and events

For the rest of this example, we will focus exclusively on the Hypoxia hazard. There are two required conditions (or assumptions/invariants): first, that the gas mixer is in operation and second, that there is a physician in attendance. This latter assumption means that the physician can be part of the safety loop.

There are a number of faults relevant to the Hypoxia hazard:

  • The gas supply runs out of either air or O2, depending on which is selected.
  • The gas supply valve fails for either air or O2, depending on which is selected.
  • The patient is improperly intubated.
  • A fault in the breathing circuit, such as disconnected hoses or leaks.
  • The ventilator commands an FiO2 level that is too low.
  • The ventilator commands a total flow of the specified mixture that is too low.

Describe conditions and events

The «BasicFault» stereotype provides tags to hold fault metadata. The metadata for three of these faults, Gas Supply Valve Fault, Improper Intubation, and Commanded FiO2 Too Low, are shown in Figure 2.57. Since the latter has more primitive underlying causes, it will be changed to a Resulting Condition and the primitive faults added as follows:

Figure 2.57 – Fault metadata


Create a causality model

Figure 2.58 shows the initial FTA. This FTA doesn't include any safety mechanisms, which will be added shortly. Nevertheless, this FTA shows a causality tree linking the faults to the hazard with a combination of logic operators and logic flows:

Figure 2.58 – Initial FTA


Identify cut sets

There are 10 primitive fault elements, so there are potentially 2^10 (1,024) cuts in the cut set, although we are only considering cases in which the assumptions are true, which immediately reduces the set to 2^8 (256) possibilities. All of these are ORed together, so it is enough to independently examine just the 8 basic faults.

Add safety measures

Adding a safety measure reduces either the likelihood or the severity of the outcome of a fault to an acceptable level. This is done on the FTA by creating anding-redundancy. This means that for the fault to have its original effect, both the original fault must occur and the safety measure must fail. The likelihood of both failing is the product of their probabilities. For example, if the Gas Supply Valve Fault has a probability of 8 x 10^-5 and we add a safety measure, an automatic gas supply backup, with a probability of failure of 2 x 10^-6, then the resulting probability of both failing is 16 x 10^-11. Acceptable probabilities of hazards can be determined from the safety standard being used.
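The anding-redundancy arithmetic can be checked directly. This fragment simply multiplies the two failure probabilities from the example, assuming the two failures are independent.

```python
# Anding-redundancy: the hazard manifests only if the original fault AND
# the safety measure both fail, so (independent) probabilities multiply.
p_valve_fault = 8e-5    # Gas Supply Valve Fault
p_backup_fails = 2e-6   # Secondary Gas Supply fails to kick in

p_both = p_valve_fault * p_backup_fails
print(f"{p_both:.1e}")  # 1.6e-10, i.e. 16 x 10^-11 as in the text
```

Note that the multiplication is only valid if the fault and the measure fail independently; common-cause failures (say, a shared power supply) would invalidate this estimate.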

For the identified faults, we will add the following safety measures:

  • Gas Supply Valve Fault safety measure: Secondary Gas Supply
  • Gas Supply Exhausted fault safety measure: Secondary Gas Supply
  • Improper Intubation fault safety measures: CO2 Sensor on Expiratory Flow and Alarm On Fault
  • Breathing Circuit Fault safety measures: Inspiratory Limb Flow Sensor and Alarm On Fault
  • Physician Error In Commanded O2 safety measures: Range Check Commanded O2 and Alarm On Fault
  • Computation Error fault safety measures: Secondary Parallel Computation and Alarm On Fault
  • Message Corruption fault safety measure: Message CRC
  • Commanded Flow Too Low fault safety measures: Inspiratory Limb Flow Sensor and Alarm On Fault
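As a sketch of the Message CRC countermeasure listed above, the following fragment appends a CRC-32 to each message and rejects corrupted frames on receipt; the framing format and message content are illustrative assumptions, not the system's actual protocol.

```python
import zlib

# Message CRC countermeasure sketch: a 4-byte CRC-32 trailer per message.
def frame(payload: bytes) -> bytes:
    """Append a big-endian CRC-32 trailer to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(framed: bytes) -> bool:
    """Recompute the CRC and compare against the received trailer."""
    payload, crc = framed[:-4], framed[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

msg = frame(b"FiO2=35,flow=600")
print(check(msg))                          # intact message accepted
corrupted = bytes([msg[0] ^ 0xFF]) + msg[1:]
print(check(corrupted))                    # single-byte corruption detected
```

A CRC detects corruption, not tampering; an adversarial channel would need a keyed integrity check instead, which is a design decision left to the later recipes.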

Adding these results in a more detailed FTA. To ensure readability, transfer operators are added to break up the diagram by adding a sub-diagram for Commanded FiO2 Too Low. Figure 2.59 shows the high-level FTA diagram with safety measures added. Note that they are added in terms of what happens when they fail. Failure of safety measures is indicated with a red bold font for emphasis.

Figure 2.59 – Elaborated FTA diagram


Note also the use of the transfer operator to connect this diagram with the more detailed one for the sub-diagram shown in Figure 2.60:

Figure 2.60 – Commanded FIO2 flow Too Low FTA


Review the safety concept

The set of safety measures addresses all the identified safety concerns.

Add safety requirements

Now that we have identified the safety measures necessary to develop a safe system, we must create the requirements that mandate their inclusion. These are shown in Figure 2.61:

Figure 2.61 – Safety requirements


Model-based threat analysis

It used to be that most systems were isolated and disconnected; the only way to attack such a system required a physical presence. Those days are long gone.

These days, most systems are internet-enabled and connected via apps to cloud-based servers and social media. This presents opportunities to attack these systems, compromise their security, violate their privacy, steal their information, and cause damage through malicious software.

Unfortunately, little has been done to protect systems in a systematic fashion. The most common response I hear when consulting is "Security. Yeah, I need me some of that," and the issue is ignored thereafter. Sometimes, some thought is given to applying security tests ex post facto, or perhaps doing some code scans for software vulnerabilities, but very little is done to methodically analyze a system from a cyber-physical security posture standpoint. This recipe addresses that specific need.

Basics of cyber-physical security

Security is the second pillar of dependability. The first, safety, was discussed in the previous recipe. Reliability, the remaining pillar, is discussed in the next recipe. Key concepts for a systematic approach to cyber-security needs are as follows:

Security – Resilience to attack.

Asset – A security-relevant feature of a system that the system is responsible for protecting. Assets have the following properties:

  • Access Level Permitted
  • Accountability
  • Asset Kind:

    a. Actor

    b. Information Asset

    c. Currency Asset

    d. Resource Asset

    e. Physical Asset

    f. Service Asset

    g. Security Asset

    h. Tangible Asset

    i. Intangible Asset

  • Availability
  • Clearance Required
  • ID
  • Integrity
  • Value

Asset Context – The system or extra-system elements enshrouding one or more assets; a safe in which money is kept is a simple example of an asset context. An asset context may be decomposed into contained asset context elements.

Security field – The set of assets, asset contexts, vulnerabilities, and countermeasures for a system (also known as the system security posture).

Vulnerability – A weakness in the security field of an asset that may be exploited by an attack.

Threat – The means by which a vulnerability of the security field of an asset may be exploited.

Attack – The realization of a threat invoked by a threat agent.

Attack chain – A type of attack that is composed of sub-attacks, sometimes known as a cyber kill chain. Most modern attacks are of this type.

Threat agent – A human or automated threat source that invokes an attack, typically intentionally.

Security countermeasure – A means by which a vulnerability is protected from attack. Countermeasures may be passive or active, and may be implemented by design elements, policies, procedures, labeling, training, or obviation. Countermeasure types include the following:

  • Access control
  • Accounting
  • Active detection
  • Authentication
  • Recovery
  • Boundary control
  • Backup
  • Encryption
  • Deterrence
  • Obviation
  • Nonrepudiation
  • Policy action
  • Response
  • Scanning detection

Role – A part a person plays in a context, such as a user, administrator, or trusted advisor.

Authenticated role – A role with explicit authentication, which typically includes a set of permissions.

Permission – The right or ability to perform an action that deals with an asset. A role may be granted permissions to perform different kinds of access to an asset.

Access – A type of action that can be performed on a resource. This includes the following:

  • No access
  • Unrestricted access
  • Read access
  • Modify access
  • Open access
  • Close access
  • Entry access
  • Exit access
  • Create access
  • Delete access
  • Remove access
  • Invoke access
  • Configure access
  • Interrupt access
  • Stop access

Security violation – The undesired intrusion into, interference with, or theft of an asset; this may be the result of an attack (intentional) or a failure (unintentional).

Risk – The possibility of an undesirable event occurring or an undesirable situation manifesting. Risk is the product of (at least) two values: likelihood and severity. Severity in this case is a measure of the asset value.

Risk Number – The numeric value associated with a risk (likelihood multiplied by severity).

Modeling for security analysis

The UML Dependability Profile used in the previous recipe also includes cyber-physical threat modeling using the previously mentioned concepts. The security information can be captured and visualized in a number of diagrammatic and tabular views. It may be downloaded at https://www.bruce-douglass.com/safety-analysis-and-design.

Security Analysis Diagram

The Security Analysis Diagram (SAD) is a logical causality diagram very similar to the FTA diagram used in the previous recipe. A SAD shows how assets, events, and conditions combine to express vulnerabilities, how countermeasures address vulnerabilities, and how attacks cause security violations. The intention is to identify when and where countermeasures are or should be added to improve system security. This diagram uses logical operations (AND, OR, NOT, XOR, and so on) to combine the presence of assets, asset context, situations, and events. Figure 2.62 shows a typical SAD. You can identify the kind of element by the stereotype, such as «Asset», «Asset Context», «Countermeasure», «Vulnerability», and «Threat».

Figure 2.62 – A typical SAD


Asset diagram

Another useful diagram is the asset diagram. The asset diagram is meant to show the relationships between assets, asset contexts, vulnerabilities, countermeasures, supporting security requirements, and security-relevant design elements. Figure 2.63 shows an asset diagram in use:

Figure 2.63 – Asset diagram


Attack flow diagram

The last diagram of particular interest is the attack flow diagram. It is a specialized activity diagram with stereotyped actions to match the canonical attack chain, shown in Figure 2.64:

Figure 2.64 – Canonical attack chain


The purpose of this diagram is to allow us to reason about how attacks unfold so that we can identify appropriate spots to insert security countermeasure actions. Figure 2.65 shows an example of its use:

Figure 2.65 – Example attack flow diagram


The stereotyped actions either identify the action as a part of the attack chain or identify the action as a countermeasure. The actions without stereotypes are normal user actions.

Tabular views

Tables and matrices can easily be constructed to summarize the threat analysis. The Security Posture Table, for example, is a tabular summary for assets, asset context, vulnerabilities, and countermeasures and their important security-relevant metadata, including Name, Description, Risk Number, Severity, Probability, Consequence, and Impact.

Purpose

The purpose of this recipe is to identify system assets subject to attack, how they can be attacked, and where to best apply countermeasures.

Inputs and preconditions

A use case naming a capability of the system from an actor-use point of view that has been identified, described, and for which relevant actors have been identified. Note: this recipe is normally performed in parallel with one of the functional analysis recipes from earlier in this chapter.

Outputs and postconditions

The most important outcome is a set of requirements specifying how the system will mitigate or manage its security concerns. Additionally, a security posture concept is developed identifying the need for a set of security control measures, which is summarized in a cyber-physical threat analysis.

How to do it…

The workflow for this recipe is shown in Figure 2.66:

Figure 2.66 – Security analysis workflow


Identify assets and asset contexts

Assets are system or environmental features of value that the system is charged to protect. Assets can be classified as being one of several types, including the following:

  • Information: Information of value, such as a credit card number
  • Currency: Money, whether in physical or virtual form
  • Resource: A capability, means, source, system, or feature of value, such as access to GPS for vehicle navigation
  • Physical: A tangible resource that can be physically compromised, threatened, or damaged, such as a gas supply for a medical ventilator
  • Service: A behavior of the system that provides value, such as delivering cardiac therapy
  • Security: A security measure that can be compromised as a part of an attack chain, such as a firewall

Of course, these categories overlap to a degree, so categorize your assets in a way that makes sense to you and your stakeholders.

Create one or more asset diagrams to capture the assets and asset contexts. You can optionally add access roles, permissions, and vulnerabilities, but the primary purpose is to identify and understand the assets.

Describe assets and asset contexts

Assets have a number of properties you may want to represent. At a minimum, you want to identify the asset kind and the value of the asset. Asset value is important because you will be willing to spend greater cost and effort to protect more valuable assets. You may also want to specify the asset availability, clearance, or access level required.

Identify vulnerabilities

Vulnerabilities are weaknesses in the system security field; in this context, we are especially concerned with vulnerabilities specific to assets and asset contexts. If you are using known technology, then sources such as the Common Vulnerability Enumeration (CVE) or Common Weakness Enumeration (CWE) are good sources of information.

Note

Refer to https://cve.mitre.org/cve/ and https://cwe.mitre.org/ for further reading.

Specify attack chains

Most attacks are not a single action, but an orchestrated series of actions meant to defeat countermeasures, gain access, compromise the system, and then perform actions on objectives to exploit the asset. Use the attack flow diagram or attack scenario diagrams to model and understand how an attack achieves its goals and where countermeasures might be effective.

Create a causality tree

Express your understanding of the causal relations between attacks, vulnerabilities, and countermeasures on security analysis diagrams. These diagrams are similar to the FTA diagrams used in safety analysis.

Add countermeasures

Once a good understanding is achieved of the assets, their vulnerabilities, the attack chains used to penetrate the security field, and the causality model, you're ready to identify what security countermeasures are appropriate and where in the security field they belong.

Review the security posture

Review the updated security posture to ensure that you've identified the correct set of vulnerabilities, attack vectors, and countermeasures. It is especially important to review this in the context of the CVE and CWE.

Add security requirements

When you're satisfied with the proposed countermeasures, add requirements for them. As with all requirements, these should specify what needs to be done and not specifically how, since the latter concern is one of design.

Add trace links

Add trace links from your security analysis to the newly added requirements, from the associated use case to the requirements, and from the use case to the security analysis. If an architecture already exists, also add trace links from the architectural elements to the security requirements, as appropriate.

Perform a use case and requirements review

This final step of the recipe reviews the set of requirements for the use case, including any requirements added as a result of this recipe.

Next, let's see an example.

Example

For this example, we'll consider the use case Measure Performance Metrics. This use case is about measuring metrics such as heart rate, cadence, power, (virtual) speed, and (virtual) distance and uploading them to the connected app. The use case is shown in Figure 2.67:

Figure 2.67 – Measure Performance Metrics use case


Identify assets and asset contexts

There are two kinds of assets that might be exposed: the login ID and password used during the connection to the app, and the rider's privacy-sensitive performance data. The assets of concern are the Rider Login Data and Rider Performance Metrics.

Other use cases potentially expose other assets, such as the Update Firmware use case exposing the system to malware, but those concerns would be dealt with during the analysis of the latter use case.

Describe assets and asset contexts

The asset metadata is captured during the analysis. It is shown in Figure 2.68. Both assets are of the INFORMATION_ASSET asset kind. The Rider Login Data is a high-valued asset, while the Rider Performance Data is of medium value:

Figure 2.68 – Asset metadata


Identify vulnerabilities

Next, we look to see how the assets express vulnerabilities. We can identify three vulnerabilities that apply to both assets: impersonation of a network, impersonation of the connected app, and sniffing the data as it is sent between the system and the app. See Figure 2.69:

Figure 2.69 – Asset vulnerabilities


Specify attack chains

Figure 2.70 shows the attack chain for the Measure Performance Metrics use case. These attack chains show the normal processing behavior along with the attack behaviors of the adversary and the mitigation behaviors of the system:

Figure 2.70 – Measure Performance Metrics attack chain


The attack chain is further decomposed into a call behavior, shown in Figure 2.71:

Figure 2.71 – Process Messages attack chain


Create a causality tree

Now that we've identified and characterized the assets, vulnerabilities, and attacks, we can put together a causality model. This is shown in Figure 2.72 for compromising login data and credentials:

Figure 2.72 – SAD for rider login data


We also have a causality model in Figure 2.73 for the rider metric data:

Figure 2.73 – SAD for rider metric data

Add countermeasures

We can see in the previous two figures that our causality diagram has identified two security countermeasures: the use of credentials for Bluetooth connections and the addition of encryption for message data.

Review the security posture

In this step, we review our security posture. The security posture is the set of assets, asset contexts, vulnerabilities, and countermeasures. In this case, the assets are the rider login data and the rider metrics data. The login data includes the username and password. The metrics data includes all the metrics gathered, including speed, distance, elapsed time, date and time of workout, power, cadence, and heart rate.

There are two asset contexts: the system itself and the phone hosting the app. The latter context is out of the system's scope and we have limited ability to influence its security, but we can require the use of the protections it provides. Notably, this includes Bluetooth credentials and the encryption of data during transmission. Other use cases may allow us better control over the security measures in this asset context. For example, the Configure System use case uses a configuration app of our own design that we can ensure stores data internally in encrypted form; we have no such control over the third-party training apps.

We have identified three vulnerabilities. During login, the system can be compromised either by network or app impersonation. By pretending to be a trusted conduit or trusted actor, an adversary can steal login information. We address these concerns with two explicit countermeasures: message encryption and the use of credentials. Security could be further enhanced by requiring multi-factor authentication, but that was not considered necessary in this case. During rides, the system transmits metric data to the app for storage, display, and virtual simulation. An adversary could monitor such communications and steal that data. This is addressed by encrypting messages between the system and the app.

Add security requirements

The security requirements are simply statements requiring the countermeasure design and implementation. In this case, there are only two such requirements:

  • The system shall require the use of a Bluetooth credentials agreement between the system and the app to permit message traffic.
  • The system shall encrypt all traffic between itself and the app with at least 128-bit encryption.
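To make the intent of the credentials requirement concrete, here is a minimal sketch in Python. It uses a standard-library HMAC challenge–response over a shared pairing secret as a stand-in for the Bluetooth credentials agreement; the function names are hypothetical and real Bluetooth pairing is considerably more involved:

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch of the "use of credentials" countermeasure: a shared
# secret established at pairing time must be proven (via HMAC) before any
# message traffic is permitted.

def issue_challenge() -> bytes:
    """System side: generate a fresh random challenge."""
    return secrets.token_bytes(16)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """App side: prove possession of the pairing secret."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """System side: admit message traffic only on a valid response."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = secrets.token_bytes(32)   # 256-bit secret, exceeding the 128-bit floor
challenge = issue_challenge()
assert verify(secret, challenge, respond(secret, challenge))
assert not verify(secret, challenge, b"\x00" * 32)
```

The second requirement (encrypting all traffic) would additionally wrap each message in an authenticated cipher keyed from the pairing secret, which is omitted here.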

Add trace links

The new requirements trace to the Measure Performance Metrics use case. Further, trace links are added from the countermeasures to the requirements, linking our analysis to the requirements.

Perform a use case and requirements review

We can now review the use case, functional analysis, and dependability analyses for completeness, accuracy, and correctness.

Specifying logical system interfaces

System interfaces identify the sets of services, data, and flows into and out of a system. By logical interfaces, we mean abstract interfaces that specify the content and precision of the flows but not their physical realization. For example, a system interface to a radar might include a message such as sendRadarTrack(r: RadarTrack) as a SysML event carrying a radar track as a parameter, without specifying what communication means will be used, let alone the bit-mapped structure of the 1553 bus message. Nevertheless, the specification of the interface allows us to consider the set of services requested from the system by actors, the set of services needed by the system from the actors, and the physical flows across the system boundary.

The initial set of interfaces are a natural outcome of our use case analysis. Each use case characterizes a set of interactions of the system with a group of actors for a similar purpose. These interactions necessitate system interfaces. This recipe will focus on the identification of these interfaces and the identification of the data and flows that they carry; the actual definition of these data elements is described in the last recipe in this chapter, Creating the logical data schema.

The logical interfaces from a single use case analysis are only a part of the entire set of system interfaces. The sets of interfaces from multiple use cases are merged together during system architecture definition. This topic is discussed in the recipes of the next chapter, Chapter 3, Developing System Architectures. Those are still logical interfaces, however, and abstract away implementation detail. The specification of physical interfaces from their logical specification is described in Chapter 4, Handoff to Downstream Engineering.

A note about SysML ports and interfaces

SysML supports a few different ways to model interfaces and this is intricately bound up with the topic of ports. SysML has the standard port (from UML), which is typed by an interface. An interface is similar to an abstract class; it contains specifications of services but no implementation. A block that realizes an interface must provide an implementation for each operation specified within that interface. UML ports are typed by the interfaces they support. A port may either provide or require an interface. If an interface is provided by the system, that means that the system must provide an implementation that realizes the requested services. If an interface is required, then the system can request an actor to provide those services. These services can be synchronous calls or asynchronous event receptions and can carry data in or out, as necessary. Note that the difference between provided and required determines where the services are implemented and not the direction of the data flow.

These interfaces are fundamentally about services that can, incidentally, carry data. SysML also defines flow ports, which allow data or flow to be exchanged without services being explicitly involved. Flow ports are bound to a single data or flow element and have an explicit flow direction, either into or out from the element. Block instances can bind flow ports to internal value properties and connect them to identically typed flow ports on other blocks.

SysML 1.3 and later versions deprecate the standard and flow ports and add the proxy port. Proxy ports essentially combine the standard port and the flow port. The flows specified as sent or received by a proxy port are defined as flow properties rather than value properties, a small distinction in practice. More importantly, proxy ports are not typed by interfaces but rather by interface blocks. Interface blocks are more powerful than interfaces in that they can contain nested parts and proxy ports themselves. This allows the modeling of some complex situations that are difficult with simple interfaces. With proxy ports, the lollipop and socket notations are gone; they are replaced by the port and port conjugate (~) notation. In short, standard ports use interfaces, while proxy ports use interface blocks. The examples in this book exclusively use proxy ports and interface blocks, not standard ports.

Note

To be clear, deprecated means that the use of these ports is discouraged but they are still part of the standard, so feel free to use them.

This recipe specifically refers to the identification and specification of logical interfaces during use case specification, as experience has shown this is a highly effective means for identifying the system interfaces.

Continuous flows

Systems engineering must contend with something that software development does not: continuous flows. These flows may be information but are often physical in nature, such as materiel, fluids, or energy. SysML extends the discrete nature of UML activities with the «continuous» stereotype for continuous flows. The «stream» stereotype (from UML) refers to object flows (tokens) that arrive as a series of flow elements at a given rate. «continuous» is a special case where the time interval between streaming flow elements approaches zero. In practice, «stream» is used for a flowing stream of discrete elements, often at a rate specified with the SysML «rate» stereotype, while «continuous» is used for truly continuous flows. An example of «stream» might be a set of discrete images sent from a video camera at a rate of 40 frames per second. An example of «continuous» flow might be water flowing through a pipe or the delivery of electrical power.
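The distinction between «stream» and «continuous» can be sketched in code. Below, a hypothetical Python generator models a «stream» flow: discrete tokens at a fixed rate, as in the 40 frames-per-second camera example; a «continuous» flow is the limiting case where the inter-token interval approaches zero:

```python
from typing import Iterator, Tuple

def frame_stream(rate_hz: float, n_frames: int) -> Iterator[Tuple[float, int]]:
    """A «stream» flow: discrete tokens (frames) arriving at a fixed rate.
    Yields (timestamp_s, frame_index) pairs, e.g. 40 frames per second."""
    period = 1.0 / rate_hz
    for i in range(n_frames):
        yield (i * period, i)

frames = list(frame_stream(40.0, 5))
# Tokens are discrete and evenly spaced 25 ms apart:
assert frames[1][0] - frames[0][0] == 0.025

# A «continuous» flow (water in a pipe, electrical power) has no useful
# token structure; it is better modeled as a rate (liters/s, watts) that
# is active throughout its execution context.
```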

In my work, I use these stereotypes on flows in sequence diagrams as well. I do this by applying the stereotypes to messages and through the use of a continuous interaction operator. An example is shown in Figure 2.74:

Figure 2.74 – Continuous flows on sequence diagrams

The figure shows flows (messages with dash lines) marked with the «continuous» stereotype. This indicates that the flow is continuous throughout its execution context. That context can be the entire diagram or limited to an interaction operator, as it is in this case. Within a context, there is no ordering among «continuous» flows; this is in contrast to the normal partial ordering semantics of SysML sequence diagrams in which lower in the diagram corresponds (roughly) to later in time. However, «continuous» flows are active throughout their execution context, and so the ordering of continuous flows is inherently meaningless (although the ordering of non-continuous messages is still in force).

The use of the «continuous» interaction operator emphasizes the unordered nature of the flows. Any events with the interaction operator still operate via the normal partial ordering semantics.

Purpose

The purpose of this recipe is to identify the exchange of services and flows that occur between a system and a set of actors, especially during use case analysis.

Inputs and preconditions

The precondition is that a use case and set of associated actors have been identified.

Outputs and postconditions

Interfaces or interface blocks are identified, as well as which actors must support which interfaces or interface blocks.

How to do it…

Figure 2.75 shows the workflow for this recipe. This overlaps with some of the other recipes in this chapter but focuses specifically on the identification of the system interfaces:

Figure 2.75 – Specify logical interfaces workflow

Identify the use case

The first step is to identify the generic usage of the system that will use the to-be-identified system interfaces.

Identify related actors

The related actors are those people or systems outside our scope that interact with the system while it executes the current use case. These actors can send messages to the system, receive messages from the system, or both using the system interfaces.

Create the execution context

The use case execution context is a kind of modeling sandbox that contains an executable component consisting of executable elements representing the use case and related actors. The recommended way to achieve this is to create separate blocks representing the use case and the actors, connected via ports. Having an isolated simulation sandbox allows different systems engineers to progress independently on different use case analyses.

Create the activity flow

This step is optional but is a popular way to begin to understand the set of flows in the use case. This step identifies the actions – event reception actions, event send actions, and internal system functions – that define the set of flows of the use case.

Capture the use case scenarios

Scenarios are singular interactions between the system and the actors during the execution of the use case. When working with non-technical stakeholders, they are an effective way to understand the desired interactions of the use case. We recommend starting with normal, sunny day scenarios before progressing to edge cases and exceptional rainy day scenarios. It is important to understand that every message identifies or represents one or more requirements and must be supported by the derived interfaces. If the Create the activity flow task is performed, then the sequence diagrams can be derived from those flows.

Recommendation:

Use asynchronous events for all actor-to-system and system-to-actor service invocations. This specifies the logical interfaces, so the underlying communication mechanism should be abstracted away. Later, in the definition of the physical interfaces and data schema, these can be specified in a technology-specific fashion.

Add message parameters

These events often carry data. This data should be explicitly modeled as event arguments.

Add flows

Use UML flows to indicate discrete flows of information, materiel, fluids, or energy exchanges between the system and an actor that are not intimately bound to a service request. Stereotype these flows as «continuous» when appropriate, such as the flows of energy or fluids.

Create parameter and flow types

The event arguments must be typed by elements in the logical data schema (see the Creating the logical data schema recipe). The same is true for flow types. Because these types are specifications, they will include not only some (logical) base type, but also units, ranges, and other kinds of metadata.

Create ports and interfaces

Based on the defined interaction of the system with the actors while executing the use case, add ports between the actor and use case blocks and type these ports with interface blocks. These interface blocks will enumerate the services and flows going between the actors and the system. Technically speaking, this can be done using UML standard ports and SysML flow ports, or the more modern SysML 1.3 proxy ports.

Example

We will now look at an example.

This example will use the Control Resistance use case, but we will follow a different approach than we used for this use case in the Functional analysis with activities recipe, just to demonstrate that there are alternative means to achieve similar goals in MBSE.

Identify the use case

The Control Resistance use case focuses on how resistance is applied to the pedals in response to simulated gearing, conditions, and user-applied force. The description is shown in Figure 2.18.

Identify related actors

There are three actors for this use case: Rider, Training App, and Power Source. The Rider provides power to and receives resistance from the pedals. The Training App is sent the rider power information. The Power Source provides electric power to run the system motors and digital electronics.

Create the execution context

Creating the execution context creates blocks that represent the actors and the use case for the purpose of analysis and simulation. They contain proxy ports that will be defined by the interfaces identified in this workflow:

Figure 2.76 – Control Resistance execution context for interface definition

Create the activity flow

The activity flow shows the object and control flows for the use case. In this example, we will show continuous flows in addition to discrete flows. The activity is decomposed into three diagrams. The top level is shown in Figure 2.77. This diagram shows the distribution of electric power on the left. This section contains an interruptible region that terminates the entire behavior when an evPowerOff event is received. The center part, containing the Determine Base Pedal Resistance call behavior, does the bulk of the functional work of the use case. Note that it takes the computed base resistance on the pedal and adjusts it for the pedal's current angular position. On the right, the Training App is updated periodically with bike data.

Discrete events, such as turning the system on and off or changing the gears, are simple to model in the activity diagrams; they can easily be modeled as either event receptions for incoming events or send actions for outgoing events. Of course, these events can carry information as arguments as needed.

It is somewhat less straightforward to model continuous inputs and outputs. What I have done here is use an object node, stereotyped as both «external» and «continuous» for such flows. An example of a continuous flow coming from an external actor is electrical power from the wall supply (see wallPower in Figure 2.77). Conversely, the resistance the system continuously applies to the pedal is an example of an output (see RiderPedalResistance in the same figure). These will be modeled as flow properties in the resulting interfaces:

Figure 2.77 – Control Resistance activity flow for creating interfaces

Figure 2.78 shows the details of the Determine Base Pedal Resistance call behavior from the previous figure. In it, we see that the base pedal resistance is computed using another call behavior, Compute Bike Physics. The Determine Base Pedal Resistance behavior never terminates (at least until the entire behavior terminates), so it uses the «rate» stereotype to indicate that data streams out of this activity's parameters. Remember that normal activity parameters require the activity to terminate before they can output a value:

Figure 2.78 – Determine Base Pedal Resistance activity

Lastly, we have the Compute Bike Physics call behavior, shown in Figure 2.79. This simulates the physics of the bike using the rider mass, current incline, current speed, and the power applied by the rider to the pedal to compute the resistance to movement, and couples that with the combined bike and rider inertia to compute the simulated bike speed and acceleration:

Figure 2.79 – Compute Bike Physics activity
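The physics described above can be sketched as a few lines of Python. This is not the book's model: the resistance terms (gravity, rolling, aerodynamic drag) and their coefficients are illustrative assumptions, but the structure follows the narrative: compute resistance to movement, then use the combined bike-and-rider inertia to get acceleration and speed:

```python
import math

def compute_bike_physics(rider_mass_kg: float, bike_mass_kg: float,
                         incline_rad: float, speed_mps: float,
                         rider_power_w: float, dt_s: float):
    """Illustrative sketch of the Compute Bike Physics behavior:
    resistance = slope + rolling + aerodynamic drag; acceleration follows
    from net force over the combined inertia."""
    m = rider_mass_kg + bike_mass_kg
    g = 9.81
    f_gravity = m * g * math.sin(incline_rad)          # slope resistance
    f_rolling = 0.004 * m * g * math.cos(incline_rad)  # assumed Crr = 0.004
    f_drag = 0.5 * 1.2 * 0.3 * speed_mps ** 2          # assumed rho * CdA
    f_resist = f_gravity + f_rolling + f_drag
    f_applied = rider_power_w / max(speed_mps, 0.1)    # from P = F * v
    accel = (f_applied - f_resist) / m                 # simulated acceleration
    new_speed = max(speed_mps + accel * dt_s, 0.0)     # simulated bike speed
    return f_resist, accel, new_speed

# 200 W on the flat at 8 m/s should accelerate the (hypothetical) bike:
resist, accel, speed = compute_bike_physics(75.0, 10.0, 0.0, 8.0, 200.0, 0.1)
assert resist > 0.0 and accel > 0.0 and speed > 8.0
```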

Capture the use case scenarios

The interfaces can be produced directly from the activity model, but it is often easier to produce them from a set of sequence diagrams derived from the activity model. Event receptions and flows on activities don't indicate their source, but this is clearly shown in the sequence diagrams. If you do create a set of sequence diagrams, it is adequate to produce a set of scenarios such that all inputs, outputs, and internal flows are represented in at least one sequence diagram.

Figure 2.80 shows the first such scenario, which solely focuses on the delivery of power. It is also the only scenario shown that actually powers up and powers down the system. The power delivery is modeled as «continuous» flows to and inside the system:

Figure 2.80 – Electrical Power scenario

The next three diagrams show the functional behavior modeled to follow the same structure as the activity model. Figure 2.81 shows the high-level behavior. Note the use of «continuous» flows for the power the rider applies to the pedal (appliedPower), the resistance to movement supplied by the system (pedalResistance), and the position of the pedal (pedalPosition). The continuous interaction occurrence provides a scope for the continuous flows. The referenced interaction occurrence, Determine Base Pedal Resistance, references the sequence diagram shown in Figure 2.82:

Figure 2.81 – Control Resistance scenario

Throughout the entire scenario shown in Figure 2.82, the continuous flows are active, so no scoping continuous interaction occurrence is required:

Figure 2.82 – Determine Base Pedal Resistance scenario

The presence of these flows isn't strictly required, since they are active in the higher-level scenario, but they are included here as a reminder. This scenario also includes a nested scenario: the referenced Compute Bike Physics scenario, shown in Figure 2.83:

Figure 2.83 – Compute bike physics

Lastly, we must add the scenario for updating the Training App (Figure 2.84):

Figure 2.84 – Update Training App with ride data

Add message parameters

Rather than show all the stages of development of the scenarios, the previous step is shown already including the message parameters.

Add flows

Rather than show all the stages of development of the scenarios, the previous step is shown already including the continuous flows.

Create parameter and flow types

The details of how to create all the types is the subject of the next recipe, Creating the logical data schema. The reader is referred to that recipe for more information.

Create ports and interfaces

Now that we have the set of flows between the actors and the system and have characterized them, we can create the interfaces. In this example, we are using the SysML 1.3 standard approach of using proxy ports and interface blocks, rather than standard ports, flow ports, and standard interfaces. This is a bit more work than using the older approach, but is more modern and descriptive.

The IBD in Figure 2.85 shows the execution context of the use case analysis for the Control Resistance use case. The instances of the Uc_ControlResistance use case block and the aCR_PowerSource, aCR_Rider, and aCR_TrainingApp actor blocks expose their proxy ports and are connected via SysML connectors. Note that, by convention, the unconjugated interface is referenced at the use case block end of the connector and the conjugated form is used at the actor end, as indicated by the tilde (~) in front of the interface block name.

At the top of the diagram are the (currently empty) interface blocks that will be elaborated in this step. Later, during architecture development, these interface blocks will be added to the interfaces provided by the system and decomposed and allocated to the subsystems.

A note about naming conventions

The IBD shown here provides a sandbox for the purpose of analyzing the Control Resistance use case. To that end, a block representing the use case is created and given the name Uc_ControlResistance. For the actors, local blocks are created for the purpose of analysis and are named a (for actor) followed by the initials of the use case (CR) followed by the name of the actor (with white space removed). So, these sandbox actor blocks are named aCR_PowerSource, aCR_Rider, and aCR_TrainingApp. The interfaces are all named i<use case block name>_<actor block name>, as in iUc_ControlResistance_aCR_PowerSource. This makes it easy to enforce naming consistency at the expense of sometimes creating lengthy names.
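Such a convention is mechanical enough to automate. A minimal sketch of a name generator for it (assuming use case initials are simply the first letters of its words):

```python
def interface_block_name(use_case: str, actor: str) -> str:
    """Build the interface block name per the convention
    i<use case block name>_<actor block name>."""
    uc_initials = "".join(w[0] for w in use_case.split())   # "Control Resistance" -> "CR"
    uc_block = "Uc_" + use_case.replace(" ", "")            # use case block name
    actor_block = "a" + uc_initials + "_" + actor.replace(" ", "")
    return "i" + uc_block + "_" + actor_block

assert interface_block_name("Control Resistance", "Power Source") == \
    "iUc_ControlResistance_aCR_PowerSource"
```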

The creation of the elements is automated via the Harmony SE Toolkit, provided with the Rhapsody modeling tool:

Figure 2.85 – Control Resistance execution context

Since we have the flows all shown in the sequence diagrams, it is a simple matter to add these elements to the interface blocks:

  • For each message from the use case to an actor, add that event reception as required to the interface block defining that port.
  • For each message from an actor to the use case, add that event reception as provided to the interface block defining that port.
  • For each flow from the use case to an actor, add that flow as an output flow property to the interface block defining that port.
  • For each flow from an actor to the use case, add that flow as an input flow property to the interface block defining that port.
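The four rules above can be sketched as a small sorting procedure. This is a hypothetical Python data model, not a Rhapsody API; the event name evSetGear is invented for illustration, while appliedPower and pedalResistance come from the scenarios:

```python
from dataclasses import dataclass, field

@dataclass
class InterfaceBlock:
    """Sketch of an interface block as four element lists."""
    name: str
    provided: list = field(default_factory=list)   # events actor -> use case
    required: list = field(default_factory=list)   # events use case -> actor
    in_flows: list = field(default_factory=list)   # flows actor -> use case
    out_flows: list = field(default_factory=list)  # flows use case -> actor

def populate(block: InterfaceBlock, element: str, kind: str, direction: str):
    """Apply the four rules: kind is 'event' or 'flow';
    direction is 'to_actor' or 'from_actor' (use case perspective)."""
    if kind == "event":
        (block.required if direction == "to_actor" else block.provided).append(element)
    else:
        (block.out_flows if direction == "to_actor" else block.in_flows).append(element)

ib = InterfaceBlock("iUc_ControlResistance_aCR_Rider")
populate(ib, "evSetGear", "event", "from_actor")     # rider changes gear
populate(ib, "appliedPower", "flow", "from_actor")   # continuous input
populate(ib, "pedalResistance", "flow", "to_actor")  # continuous output
assert ib.provided == ["evSetGear"]
assert ib.in_flows == ["appliedPower"]
assert ib.out_flows == ["pedalResistance"]
```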

Rhapsody does provide some assistance here in the Harmony SE Toolkit, although it is not difficult to do manually:

  1. First, realize all the messages for the sequence diagrams; this creates event receptions on the target blocks.
  2. Then apply the Harmony SE Toolkit helper called Create Ports and Interfaces to populate the interfaces.

You will still need to add the flows manually as flow properties.

The result is shown in Figure 2.86. Note that the event receptions are either provided (prov) or required (reqd) while the flow properties are either in or out (from the use case block perspective):

Figure 2.86 – Created interface blocks

You should note that these are, of course, logical interfaces. As such, they reflect the intent and content of the messages, but not their physical realization. For example, bike data sent to the training app is modeled in the logical interface as an event, but the physical interface will actually be a Bluetooth message. Wall power is modeled as a flow (its content will be described in the next recipe), but the actual interface involves the flow of electrons over a wire. The creation of physical interfaces from logical ones is discussed in Chapter 4, Handoff to Downstream Engineering.

Creating the logical data schema

A big part of the specification of systems is specifying the inputs and outputs of the system as well as what information a system must retain and manage. The inputs and outputs are data or flows and may be direct flows or may be carried via service requests or responses. Early in the systems engineering process, the information captured about these elements is logical. The definition of a logical schema is provided here, along with a set of related definitions.

The definitions are as follows:

  • Data Schema: A data or type model of a specific problem domain that includes blocks, value properties, value types, dimensions, units, their relations, and other relevant aspects collectively known as metadata. This model includes a type model consisting of the set of value types, units, and dimensions, and a usage model showing the blocks and value properties that use the type model.
  • Logical Schema: A data schema expressed independently from its ultimate implementation, storage, or transmission means.
  • Value Property: A property model element that can hold values. Also known as a variable.
  • Value Type: Specifies the value sets applied to value properties, message arguments, or other parameters that may carry values. Examples include integer (int in C++ action language), real (double in C++), Boolean (bool in C++), character (char in C++), and String (often char* in C++). These base types may have additional properties or constraints, specified as metadata.
  • Metadata: Literally data about data, this term refers to ancillary properties or constraints on data, including the following:

    a. Extent – The set of values of an underlying base value type that are allowed. This can be specified as follows:

    - A subrange, as in 0 … 10

    - A low value and high value pair, as in low value = -1, high value = 1

    - An enumerated list of acceptable values

    - A specification of prohibited values that are excluded from the base type

    - The specification of a rule or constraint from which valid values can be determined

    b. Precision – The degree of exactness of specified values; this is often denoted as the number of significant digits.

    c. Accuracy – The degree of conformance to an actual value, often expressed as ±<value>, as in ± 0.25. Accuracy generally refers to an output or outcome.

    d. Fidelity – The degree of exactness of a value. Fidelity is generally applied to an input value.

    e. Latency – How long after a value change occurs before the value representation is updated.

    f. Availability – The percentage of the system life cycle during which the value is actually accessible.

    Note

    These properties are sometimes not properties of the value type but of the value property specified by that value type. In any case, in SysML, these properties are often expressed in tags and metadata added to describe model elements.

    Value types can have various kinds of representations in the underlying action language, such as an enumeration (enum in C++), a language-specified type (such as char* in C++), a structure (struct in C++), a typedef, or a union.

  • Dimension: Specifies the kind of value (its dimensionality). Examples include length, weight, pressure, and color. Also known as Quantity Kind in SysML 1.3 and later.
  • Unit: Specifies a standard against which values in a dimension may be directly compared. Examples include meters, kilograms, kilopascals, and RGB units. SysML provides a model library of SI Units that are directly available for use in models. However, it is not uncommon to define your own if needed.

    Other than the schema itself, SysML directly represents these concepts in its language definition. Note that a value property can be specified in terms of a unit, a dimension, or a value type at the engineer's discretion.

  • Recommendation: Each value property should be typed by a unit, unless it is unitless, in which case it should be typed by a defined value type.
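The relationships among these definitions can be sketched as a small type model. This is a hypothetical Python rendering of the metamodel (dataclass names and the extent check are illustrative, not SysML semantics), reusing the Meridian_Degrees unit from the example later in this recipe:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Dimension:
    """Kind of value (Quantity Kind), e.g. length or plane angle."""
    name: str

@dataclass
class Unit:
    """A standard of comparison within a dimension, e.g. meters."""
    name: str
    dimension: Dimension

@dataclass
class ValueType:
    """A value set with optional unit and extent (subrange) metadata."""
    name: str
    unit: Optional[Unit] = None
    extent: Optional[Tuple[float, float]] = None  # (low, high) pair

    def accepts(self, value: float) -> bool:
        """Check a candidate value against the extent metadata."""
        if self.extent is None:
            return True
        low, high = self.extent
        return low <= value <= high

angle = Dimension("plane angle")
meridian_degrees = Unit("Meridian_Degrees", angle)
longitude_t = ValueType("Longitude", unit=meridian_degrees, extent=(0.0, 360.0))
assert longitude_t.accepts(123.4)
assert not longitude_t.accepts(400.0)
```

Precision, accuracy, latency, and the other metadata items would be further fields on ValueType (or on the value property itself, as the note above observes).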

Schematically, these definitions are shown in Figure 2.87 in the data schema metamodel:

Figure 2.87 – Data schema metamodel

Note

Although this is called the data schema, it is really an information schema as it applies to elements that are not data per se, such as physical flows. In this book, we will use the common term data schema to apply to flows as well.

Beyond the underlying type model of the schema, described previously, the blocks and their value properties and the relationships between them constitute the remainder of the data schema. These relations are the standard SysML relations: association, aggregation, composition, generalization, and dependency.

A quick example

So, what does a diagram showing a logical data schema look like?

Typically, a data schema is visualized within a block definition diagram, and shows the data elements and relevant properties. Consider an aircraft navigation system that must account for the craft's own position, its velocity, acceleration, jerk, flight plans, attitude, and so on. See Figure 2.88:

Figure 2.88 – Data schema for the Flight Property Set

You can see in the figure that the Flight Property Set contains Airframe_Position, Airframe_Velocity, Airframe_Acceleration, and so on. These composed blocks contain value properties that detail their contents; in the case of Airframe_Position, these are altitude, latitude, and longitude. Altitude is expressed in Meters (defined in the Rhapsody SysML type library), while latitude and longitude are defined in terms of the unit Meridian_Degrees, which is not in the SysML model library (and so is defined in the model).

On the left of the diagram, you can see that the Flight Plan contains multiple Flight Property Sets identifying planned waypoints along the commanded flight path. These Flight Property Sets may hold actual current information (denoted by the measuredFlightPath role end) or commanded information (denoted by the commandedFlightPath role end). The latter forms a list of commanded flight property sets and so stores the set of commanded waypoints. The diagram also shows a superimposed image of the Rhapsody model browser, showing the units and dimensions created to support this data schema.

In the diagram, you see the «qualified» stereotype, which specifies a number of relevant metadata properties of the information, such as accuracy, bit_layout, and precision. Several value properties, along with their values for these metadata tags, are shown in the diagram. We see, for example, that the longitude value property has a range of 0 to 360 Meridian_Degrees, with an accuracy of 10^-6 degrees and a representation precision of 10^-7 degrees.

Purpose

The purpose of the logical data schema is to understand the information received, stored, and transmitted by a system. In the context of capturing the system specification, it is to understand and characterize the data and flows that cross the system boundary, to conceptually solidify the interfaces a system provides or requires.

Inputs and preconditions

The precondition is that a use case and a set of associated actors have been identified or that structural elements (blocks) have been identified in an architecture or design.

Outputs and postconditions

The output is a set of units, dimensions, types (the type model), and the value properties that they specify, along with the relationships between the value types and blocks that own them (the usage model).

How to do it…

The workflow for this recipe is shown in Figure 2.89:

Figure 2.89 – Creating the logical data schema


The Construct Type Model call behavior is shown in Figure 2.90:

Figure 2.90 – Construct Type Model


Create a collaboration

This task creates the collaboration between elements. This provides the context in which the types may be considered. In the case of system specification, this purpose is served by defining the use case and its related actors, or by the execution context of block stand-ins for those elements. In a design context, it is generally some set of design elements that relate to some larger-scale purpose, such as showing an architectural aspect or realizing a use case.

Define the structure

This step adds blocks and other elements to the collaboration, detailed in the following Identify the block, Add relations, and Identify value properties sections.

Identify the block

These are the basic structural elements of the collaboration, although value properties may be created without an owning block.

Add relations

These relations link the structural elements together, allowing them to send messages to support the necessary interactions.

Identify value properties

This step identifies the data and flow property features of the blocks.

Define the interaction

The interaction consists of a set of message exchanges among elements in the collaboration. This is most often shown as sequence diagrams.

Define the messages

Messages are the primitive elements of interaction. These may be synchronous (such as function calls) or asynchronous (as in asynchronous event receptions). A single interaction typically contains a set of ordered messages.

Add message parameters

Most messages, whether synchronous or asynchronous, carry information in the form of parameters (sometimes called arguments). The types of these data must be specified in the data model.

Construct a type model

Once a datum is identified, it must be typed. This call behavior is detailed in the following steps.

Define the units

Most data relies on units for proper functioning, and too often units are only implied rather than explicitly specified. This step references existing units or creates the underlying unit and then uses it to type the relevant value properties. SysML defines a non-normative extension to include a model library of SI units. Rhapsody, the tool used here, has an incomplete realization of these units, so many common units, such as radians, are missing and must be added if desired. Fortunately, it is easy to do so.

Define the dimensions

Most units rely on a quantity kind (or dimension). For example, the unit meter has the dimension length. Most dimensions have many different units available. Length, for example, can be expressed in units of cm, inches, feet, yards, meters, miles, kilometers, and so on.

Define value types

The underlying value type is expressed in the action language for the model. This might be C, C++, Java, Ada, or any common programming or data language. The Object Management Group (OMG) also defined an abstract action language called ALF (short for Action Language for Foundational UML), which may be used for this purpose. See https://www.omg.org/spec/ALF/About-ALF/ for more information. This book uses C++ as the action language, but there are equally valid alternatives.

Define the relevant value type properties

It is almost always inadequate to just specify the value type from the underlying action language. There are other properties of considerable interest. As described earlier in this section, they include extent, precision, latency, and availability. Other properties of interest may emerge that are domain-specific.

Example

This example uses the Measure Performance Metrics use case. The Model-based threat analysis recipe used this use case to discuss modeling cybersecurity; here, we will use it to model the logical data schema. For the most part, the data of interest is the performance data itself, although the threat model identified some additional security-relevant data that can be modeled as well.

Create collaboration

The use case diagram in Figure 2.67 provides the context for the data schema, but usually the corresponding IBD of the execution context is used. This diagram is shown in Figure 2.91:

Figure 2.91 – Measure Performance Metrics execution context


Define the structure

This task is mostly done by defining the execution context, shown in Figure 2.91. In this case, the structure is pretty simple.

Identify the blocks

As a part of defining the structure, we identified the primary functional blocks in the previous figure. But now we need to begin thinking about the data elements as blocks and value types. Figure 2.92 shows a first cut at the likely blocks. Note that we don't need to represent the data schema for the actors because we don't care. We are not designing the actors since they are, by definition, out of our scope of concern:

Figure 2.92 – Blocks for the Measure Performance Data schema


Add relations

The instances of the core functional blocks are shown in Figure 2.91. The relations of the data elements to the use case block are shown in Figure 2.93. This is the data that the use case block knows (owns) or uses:

Figure 2.93 – Data schema with relations


Identify the value properties

The blocks provide owners of the actual data of interest, which is held in the value properties. Figure 2.94 shows the blocks populated with value properties relevant to the use case:

Figure 2.94 – Data schema value properties


Define interactions, define messages, and add message parameters

Another way to find data elements to structure is to look at the messaging; this is particularly relevant for use case and functional analysis since the data on which we focus during this analysis is the data that is sent or received. These three steps – define interactions, define messages, and add message parameters – are all discussed together to save space.

The first interaction we'll look at is for uploading real-time ride metrics during a ride. This is shown in Figure 2.95:

Figure 2.95 – Real-time ride metrics


The second interaction is for uploading an entire stored ride to the app. This is in Figure 2.96:

Figure 2.96 – Upload a saved ride


Note that these are just two of many scenarios for the use case, as they do not consider concerns such as dropped messages, reconnecting sessions, and other rainy-day situations. However, this is adequate for our needs.

Construct the type model

Figure 2.94 goes a long way toward the definition of the type model. The blocks define the structured data elements, but at the value property level, there is still work to be done. The underlying value types must be identified, their units and dimensions specified, and constraints placed on their extent and precision.

Define units

It is common for engineers to just reference base types – int, real, and so on – to type value properties, but this can lead to avoidable design errors. This is because value types may not be directly comparable, such as when distanceA and distanceB are both typed as Real but one is captured in kilometers and the other in miles. Further, we cannot reason about the extent of a type (the permitted set of values) unless we also know the units. For this reason, we recommend – and will use here – unit definitions to disambiguate the values we're specifying.
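The kilometers-versus-miles hazard can be made concrete in the action language. This is an illustrative C++ sketch (the Kilometers and Miles types are assumptions for the example; the model itself simply tags value properties with units):

```cpp
// Strong unit types: a bare 'double' would let kilometers and miles
// mix silently; distinct types make the compiler catch the error.
struct Kilometers { double value; };
struct Miles      { double value; };

// Conversion must be explicit, so the unit assumption stays visible.
Kilometers toKilometers(Miles m) {
    return Kilometers{m.value * 1.609344};
}

// Addition compiles only when both operands carry the same unit.
Kilometers operator+(Kilometers a, Kilometers b) {
    return Kilometers{a.value + b.value};
}
```

With this style, an expression such as `distanceA + Miles{3.0}` simply fails to compile when distanceA holds kilometers, which is exactly the class of design error unit definitions are meant to prevent.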

The SI Units model library of the SysML specification is an optional compliance point for the standard. Rhapsody includes some SI units and dimensions, but the set is far from complete. In this model, we will reference those that exist and create those that do not.

Figure 2.94 uses a number of special units for value properties and operation arguments, including the following:

  • DegreesOfArc
  • Radian
  • Newton
  • DateTime
  • KmPerHour
  • KmPerHourSquared
  • Second
  • KiloCalorie
  • RPM
  • Kilometer
  • ResistanceMode
  • APP_INTERACTION_TYPE

Two of these (Newton and Second) already exist in the Rhapsody SysML Profile SI Types model library and so may just be referenced. The others must be defined, although two of them – ResistanceMode and APP_INTERACTION_TYPE – will be specified as value types rather than units.

DegreesOfArc is a measure of angular displacement and is used for the cycling incline, while Radian is a unit of angular displacement used for pedal position. RPM is a measure of rotational velocity used for pedaling cadence. DateTime is a measure of when data was measured. Kilometer is a measure of linear distance (length), while KmPerHour is a measure of speed and KmPerHourSquared is a measure of acceleration. KiloCalorie is a measure of energy used to represent the rider's energy output. In our model, we will define all these as units. They will be defined in terms of their dimensions in the next section.

Define dimensions

Dimension is also known as quantity kind and refers to the kind of information held by a unit. For example, kilometer, meter, and mile all have the dimension of distance (or length).

As with the SI units, some of the dimensions are already defined in the Rhapsody SysML SI Types model library (time, length, energy) while others (angular displacement and rotational velocity) are not. We will reference the dimensions already defined and specify in our model the ones that are not.

In keeping with the approach used by the Rhapsody SysML SI Types model library, the dimensions themselves are defined with a typedef kind to the SysML Real type (which is, in turn, a typedef of RhpReal). In models using the C++ action language, this will end up being a double. The advantage of this approach is the independence of the model from the underlying action language.
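In C++ terms, this typedef layering looks roughly like the following sketch (the unit names come from the schema; the exact generated code depends on the tool):

```cpp
// Each unit bottoms out in the action language's double, so changing
// the action language only changes the lowest-level typedef.
typedef double  RhpReal;          // Rhapsody base real type
typedef RhpReal Real;             // SysML Real
typedef Real    Kilometer;        // unit with dimension length
typedef Real    KmPerHour;        // unit with dimension speed
typedef Real    KmPerHourSquared; // unit with dimension acceleration
```

Note that plain typedefs document intent but do not prevent accidental unit mixing at compile time; they trade type safety for simplicity and language independence.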

Figure 2.97 shows the units and dimensions defined for this logical data schema. Dimensions used from the SysML model library are referenced by the units but not otherwise shown on the diagram:

Figure 2.97 – Units and dimensions


Define value types

Apart from the blocks, units, and dimensions described in the previous sections, there are also a few value types in the model. In this particular case, there are two of interest, both of which are enumerations. Figure 2.98 shows that APP_INTERACTION_TYPE may be either REAL_TIME_INTERACTION, used for loading performance data in real time during a cycling session, or UPLOAD_INTERACTION, used to upload a saved ride to the app:

Figure 2.98 – Measure performance data value types


Another value type, ResistanceMode, can be either ERG_MODE, in which the system maintains a constant power output from the rider regardless of cadence by dynamically adjusting the resistance, or RESISTANCE_MODE, in which the power varies as the rider modifies their cadence, incline, or gearing.
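In the C++ action language, these two value types are plain enumerations. A sketch consistent with the literals named above:

```cpp
// Distinguishes streaming live metrics from uploading a stored ride.
enum APP_INTERACTION_TYPE {
    REAL_TIME_INTERACTION,  // load performance data live during a session
    UPLOAD_INTERACTION      // upload a previously saved ride to the app
};

// How the trainer manages rider workload.
enum ResistanceMode {
    ERG_MODE,               // hold rider power constant by adjusting resistance
    RESISTANCE_MODE         // power varies with cadence, incline, or gearing
};
```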

Define relevant value type properties

The last thing we must do is specify relevant value type properties. In the logical data schema, this means specifying the extent and precision of the values. This can be done at the unit/value type level; in this case, the properties apply to all values of that unit or type. These properties can also be applied at the value level, in which case the scope of the specification is limited to the specific values but not to other values of the same unit or type.

The best way to specify these properties is to define them as SysML tags within a stereotype, apply the stereotype to the relevant model elements, and then elaborate the specific values. To that end, we will create a «tempered» stereotype. This stereotype applies to attributes (value properties), arguments, action blocks (actions), object nodes, pins, and types in the SysML metamodel, and so can apply to units as well.

The stereotype provides three ways to specify extent. The first is the extent tag, which is a string in which the engineer can specify a range or list of values, such as [0.00 .. 0.99] or 0.1, 0.2, 0.4, 0.8, 1.0. Alternatively, for a continuous range, the lowValue and highValue tags, both of type Real, can serve as well; in the previous example, you can set lowValue to 0.0 and highValue to 0.99. Lastly, you can provide a range or list of prohibited values in the prohibitedValues tag, such as -1, 0.

The stereotype also provides three means for specifying scale. The scaleOfPrecision tag, of type integer, allows you to define the number of significant digits for the value or type. You can further refine this by specifying scaleOfFidelity to indicate the significant digits when the value is used as an input and scaleOfAccuracy when the value is used as an output.

Another stereotype tag is maxLatencyInSeconds, a Real value that specifies the maximum age of a value. Other metadata can be added to the stereotype as needed for your system specification.
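To make the tag set concrete, here is a rough C++ sketch. The Tempered record mirrors the tags described above, while the acceptable() check is purely illustrative – an assumption about how a design might use the tags, not part of the stereotype itself:

```cpp
// Illustrative record mirroring the «tempered» stereotype tags.
struct Tempered {
    double lowValue;             // continuous extent: lower bound
    double highValue;            // continuous extent: upper bound
    int    scaleOfPrecision;     // significant digits required
    double maxLatencyInSeconds;  // maximum permitted age of a value
};

// A value is usable if it lies within the extent and is fresh enough.
bool acceptable(const Tempered& t, double value, double ageSeconds) {
    return value >= t.lowValue && value <= t.highValue
        && ageSeconds <= t.maxLatencyInSeconds;
}
```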

This level of detail in the specification of quantities is important for downstream design. Requiring two digits of scale is very different from requiring six, and drives the selection of hardware and algorithms. In this example, it makes the most sense to specify the necessary scale at the unit and type level, rather than at the level of specific value properties, for the units we are defining. Those units are shown in Figure 2.99:

Figure 2.99 – Measure Performance Metrics tempered units


Note

Precision technically refers to the number of significant digits in a number, while scale is the number of significant digits to the right of the decimal point. The number 123.45 has a precision of 5, but a scale of 2. People usually speak of precision while meaning scale.
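The distinction can be checked mechanically. A small illustrative sketch, assuming a plain decimal string with no sign, exponent, or leading zeros:

```cpp
#include <string>

// Scale: digits to the right of the decimal point in a textual number.
int scaleOf(const std::string& text) {
    std::string::size_type dot = text.find('.');
    return dot == std::string::npos
        ? 0
        : static_cast<int>(text.size() - dot - 1);
}

// Precision: total count of digits (assumes no leading zeros or sign).
int precisionOf(const std::string& text) {
    int digits = 0;
    for (char c : text)
        if (c >= '0' && c <= '9') ++digits;
    return digits;
}
```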

Lastly, we must specify the extent and scale for the values that are either unitless or use standard predefined units but are constrained within a subrange. Figure 2.100 and Figure 2.101 provide that detail:

Figure 2.100 – Value subranges and scale – 1


Note that the figures show the relevant value properties for the blocks grouped with a rectangle with a dotted border. This rectangle has no semantics and is only used for visual grouping:

Figure 2.101 – Value subranges and scale – 2


And there you have it: a logical data schema for the values and flows specified as a part of the Measure Performance Metrics use case. These, along with data schema from other use cases, will be merged together into the architecture in the architecture design work phase.


Key benefits

  • Learn how Agile and MBSE can work iteratively and collaborate to overcome system complexity
  • Develop essential systems engineering products and achieve crucial enterprise objectives with easy-to-follow recipes
  • Build efficient system engineering models using tried and trusted best practices

Description

Agile MBSE can help organizations manage constant change and uncertainty while continuously ensuring system correctness and meeting customers’ needs. But deploying it isn’t easy. Agile Model-Based Systems Engineering Cookbook is a little different from other MBSE books out there. This book focuses on workflows – or recipes, as the author calls them – that will help MBSE practitioners and team leaders address practical situations that are part of deploying MBSE as part of an agile development process across the enterprise. Written by Dr. Bruce Powel Douglass, a world-renowned expert in MBSE, this book will take you through important systems engineering workflows and show you how they can be performed effectively with an agile and model-based approach. You’ll start with the key concepts of agile methods for systems engineering, but we won’t linger on the theory for too long. The recipes will take you through initiating a project, defining stakeholder needs, defining and analyzing system requirements, designing system architecture, and performing model-based engineering trade studies, all the way to handing systems specifications off to downstream engineering. By the end of this MBSE book, you’ll have learned how to implement critical systems engineering workflows and create verifiably correct systems engineering models.

Who is this book for?

If you are a systems engineer who wants to pursue model-based systems engineering in an agile setting, this book will show you how you can do that without breaking a sweat. Fundamental knowledge of SysML is necessary; the book will teach you the rest.

What you will learn

  • Apply agile methods to develop systems engineering specifications
  • Perform functional analysis with SysML
  • Derive and model systems architectures from key requirements
  • Model crucial engineering data to clarify systems requirements
  • Communicate decisions with downstream subsystem implementation teams
  • Verify specifications with model reviews and simulations
  • Ensure the accuracy of systems models through model-based testing

Product Details

Publication date : Mar 31, 2021
Length: 646 pages
Edition : 1st
Language : English
ISBN-13 : 9781839218149




Table of Contents

Chapter 1: The Basics of Agile Systems Modeling
Chapter 2: System Specification
Chapter 3: Developing System Architectures
Chapter 4: Handoff to Downstream Engineering
Chapter 5: Demonstration of Meeting Needs: Verification and Validation
Other Books You May Enjoy

Customer reviews

Rating distribution
4.8 out of 5
(9 Ratings)
5 star 77.8%
4 star 22.2%
3 star 0%
2 star 0%
1 star 0%
Eldad Palachi Apr 23, 2021
5 out of 5
This book is a must-read for hands-on architects, development managers and engineers who want to apply UML/SysML in a methodical, contemporary and effective manner. It is well organized around workflows that cover the whole development lifecycle (from product planning to validation) and every chapter follows a consistent structure which makes it easier for readers to focus on what is relevant to them. I really liked the "How to do it..." sections with diagrams that outline the steps of the different workflows and I found the examples to be at just the right level of complexity to understand how to apply this MBSE approach. Another thing that sets this book apart is the coverage of hazards and safety aspects as well as verification and validation described in chapter 5. You can really see that the author is not someone who just speaks about MBSE, but also practices it in the field.
Amazon Verified review
Amazon Customer Apr 17, 2021
5 out of 5
This is an excellent addition to the literature for practical systems engineers who want to take advantage of agile and MBSE techniques. Bruce is a wonderful combination of being an expert in the field as well as teaching. I was a bit skeptical that another book was needed but I stand corrected. This book will be very helpful for people who have some knowledge on the subject but need a bit of assistance on how to apply them in their real world jobs. The first thing that stands out to me is the focus on critical activities systems engineers need to master. He shows how to develop requirements, perform trade studies or create architectures using agile and MBSE. You’re not going to have academic ramblings of a topic in isolation of what it is you’re trying to accomplish. The opposite of just modeling for modeling sake. Other things that stand out to me include having a nice balance between descriptive text and worked examples. This is a book you can quickly find material on the relevant topic and get the help you need to remove your roadblock. And the models used in the book are available to download so you can poke around with the specific syntax if that is desired. A major value I got from the book are seemingly endless tidbits of help that will assist you moving from playing with the techniques to mastering them and improving your work. Some of the points apply for small efforts such as the value of different SysML behavioral diagrams and some are helpful for working on large projects where the approach needs to support many people working concurrently. Finally, I appreciate how Bruce is not dogmatic in this book. The material is written in a clear, understandable manner, and he is still able to talk about pros and cons of different ways to solve the same problem. Well done! - Tom Wheeler
Amazon Verified review
W. V. D. Heiden Apr 06, 2021
5 out of 5
System Engineering taken out of the dry theoretical area into the world of fun, combined with a thorough understanding of systems engineering. Bruce is still the king of explaining tough subjects in an almost light way and makes you understand the topics almost easily. I’ve read all his books and although all books have an overlap, this one is still not boring or disappointing; on the contrary, there are always new topics and insights that, at least for me, help me in applying MBSE in my work. The online examples are very much recommended to download; if you do not own Rhapsody, you can download an evaluation version from IBM that helps you in better understanding the examples in the book. The only small drawback is that the examples are only for IBM, but I’m sure that is not difficult to convert to Sparx or NoMagic. Highly recommended for everyone working (or planning to work) with MBSE.
Amazon Verified review
J. D. Baker May 28, 2021
5 out of 5
A couple years ago I wrote a review of Agile Systems Engineering that started with the statement "This is the best book that Bruce has written in his prolific career - or at least it is the one that I like best." That statement is still true; however, the Agile MBSE Cookbook is a very close second. I like and appreciate that Bruce states from the very beginning that the output of a systems engineering effort is not an executable product but a specification. I also like the notion that the specification can have executable elements as part of the model. I spent a lot of time in chapter 2 because other methods focus too much on the object-oriented style and not enough on functional analysis. We get 4 different recipes for functional analysis in the Cookbook. The challenge will of course be to pick the one that is best for whatever we are working on. I have read the book but I can't claim to fully understand it yet. There's enough here to keep me studying for quite a while. If the examples aren't available in your favorite modeling tool, I suggest you take the time to do your own implementation, which is what I am doing. The book is quite tool agnostic.
Amazon Verified review
Moshe C. Apr 13, 2021
5 out of 5
Many books cover Model Based Systems Engineering (MBSE), but this one extends the scope of MBSE to other areas that most MBSE practitioners probably don't touch, such as (the elephant in many rooms) deploying MBSE as part of an Agile development process across the enterprise, or leveraging the system models to drive early verification and validation. More on this later. I found it easy to use the book for practical situations, primarily because it is focused on workflows, or as the author calls them - recipes. Each workflow starts with its purpose and then defines its inputs and outputs, how to do it and - where applicable - an actual example. I liked it because it allowed me to focus right away on the workflows where I needed clarity, where other workflows can wait. And in case you want to go deeper, you can even download the example models. I didn't have a need for this (yet) as the examples are well explained throughout the book, with plenty of screenshots. And as the author has a deep experience with the IBM MBSE tool, and the reviewer has a deep experience with the Dassault MBSE tool, the book is useful regardless of your favorite MBSE tool! I also liked the fact that the book addresses the needs of several types of engineers (or, as often is the case - engineers wearing multiple hats). It covers the application of Agile across the enterprise as an integral part of MBSE, from managing backlogs, to Agile planning, to prioritization, to release planning, product roadmaps and even effective reviews and walk-throughs. These are topics that many systems engineers, especially in industries such as A&D and Automotive, never learned back in Engineering school. I found the chapter on Functional analysis, or what some may call Logical models, very interesting. The workflows in the book lead you to modeling that is an elaboration on the requirements with scenarios, activities, state machines or even user stories, resulting in models that, while design independent, are also executable. This allows you to analyze the requirements for their correctness, completeness, safety and security. Other workflows address the more common aspects of Systems Engineering, such as Architectural Design going down to Detailed Design, leveraging patterns, abstraction layers, etc. However, the book does not assume that a System always ends up in software (like in a SysML to UML transition). It addresses the workflows for the creation of a deployment architecture and interdisciplinary interfaces that enable a handoff to downstream engineering in multiple domains, such as electrical, mechanical, etc. in addition to software. Another set of workflows describes "cool ideas" that are only now starting to be deployed as part of MBSE: leveraging MBSE for early Verification and Validation on the one hand, and on the other using the model as a reference and as a test driver for the actual product. The book takes you through all the steps, from identifying the system under test (aka SUT), to the test architecture, test cases, test coverage and even test-driven modeling. These too are subjects that many of us did not learn back at school, and if we did, it was nothing more than just being mentioned in class. The screenshots in the book make all of these concepts very concrete. And if you are a cyclist, you are already ahead of those who aren't, because the example used throughout the book is a Pegasus bike trainer... actually, a very cool example, easy to understand, not too complex, but not trivial either. I highly recommend this book, especially if you are an MBSE practitioner, or managing teams of MBSE practitioners.
Amazon Verified review

FAQs

How do I buy and download an eBook? Chevron down icon Chevron up icon

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website? Chevron down icon Chevron up icon

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, go to www.packtpub.com/support and view the page for the title you own.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.