
The Five Orders of Ignorance

December 30th, 2011

It’s not often you read a paper that has something unique and fresh to say about a topic, and expresses it in a clear and concise way.

Somehow, Phillip Armour’s The Five Orders of Ignorance had eluded me, until I found it referenced in another paper.

It really is an interesting point of view on software development.  You can read the paper here.

Armour’s central tenet is that software is a mechanism for capturing knowledge. That is, (correct) software is the result of having understood and formalised our knowledge about the problem we are trying to solve.

Clearly, at each stage of the development process we have different levels of knowledge (or ignorance, the complement of knowledge) about our problem.  As we move towards delivery our knowledge increases and our ignorance decreases – hopefully!

A stratified, or layered, model of ignorance gives a good measure of our progress through a development – in some ways a far superior model to the traditional time/artefact/activity-based approach.

Armour’s levels – or orders – of ignorance are as follows:

Zero Order Ignorance is knowledge: something we know and can articulate (in software).

First Order Ignorance is something we don’t know; a question we need an answer to.

Second Order Ignorance is not knowing what we don’t know.  That is, we don’t even know what questions to ask.

Third Order Ignorance is a lack of methodology: we don’t have techniques, tools or processes that can identify and illuminate our lack of knowledge.

Fourth Order Ignorance means we don’t even know there are orders of ignorance!

(In many ways Armour’s work is a far more cohesive version of Donald Rumsfeld’s infamous “Known Knowns” speech.)

Armour’s paper crystallised a couple of very important points to me:

Why requirements analysis is so vital.

For nearly the last decade I have been promoting the importance of requirements analysis as a key part of development.  If we understand the problem we are meant to solve – completely and with precision – developing a solution in software is relatively straightforward. 

It’s heartening that most engineers are actually pretty good at developing solutions. But they’re not really very good at understanding problems. When people call me in to help with ‘design issues’ it’s most commonly the case they don’t actually understand their problem properly. Usually, I help their ‘design’ skills by doing detailed requirements analysis with them!

I have found that the teams that spend the most time performing requirements analysis spend the least time designing and debugging, and have the most comprehensive and maintainable solutions.  This is because their software captures the system knowledge efficiently and their code isn’t riddled with what Armour calls ‘unknowledge’ – irrelevant or incorrect knowledge about the system captured as code (you know, the stuff that leads to ‘features’!).

What process is all about.

Processes are a technique to give you questions, not answers. I think this upsets many developers (and their managers).  Many people want handle-turning solutions: Feed in some customer requirements, crank the handle, and out comes lovely, pristine software. 

Unfortunately, the world doesn’t work like that.  If it did, we’d all have been replaced by machines (that’s been threatened since the Sixties and it hasn’t happened yet. I’m not holding my breath, either.)

Every software problem is unique and full of those delicious little subtleties that make our jobs as embedded developers so interesting (and yes, you can take ‘interesting’ in the sense of the old Chinese curse!) There is simply no way you can mechanise the behaviours needed to elicit, understand and formalise all the knowledge required to develop a typical embedded system.

Most approaches to software process description assume software development is a (linear) mechanical process, and that the (procedural) transformation of input artefact to output artefact will (somehow) produce working software.  Whilst this approach works for manufacturing processes, it cannot deal with the simple fact that software development is about knowledge capture and, well, we often don’t know what we don’t know!

The best processes are those that consist of a set of goals and a corresponding set of methodologies.  The goals effectively give you an appropriate set of questions that must be answered before you can continue; the answers to those questions will yield pertinent information about the system. 

One could argue that the artefacts are supposed to embody the appropriate design questions, but engineers are notorious for simply filling in the blanks with banal waffle just so they can move on to the interesting stuff – that is, hacking code (and learning about the system!).

The Baker’s Dozen of Use Cases

February 22nd, 2011

Use cases have become a core part of the requirements analyst’s arsenal.  Used well they can bring dramatic increases in customer satisfaction and a whole host of other subtle benefits to software development.

The use case itself is very simple in concept: describe the functionality of the system in terms of interactions between the system and its external interactors.  The focus of the use case is system usage,  from an external perspective.

Despite this apparent simplicity, requirements analysts frequently struggle to write coherent, consistent use cases that can be used to facilitate development.   Often, the use case analysis becomes an exercise in confusion, incomprehension and the dreaded ‘analysis paralysis’.

This article aims to aid use case writers by presenting a set of rules to follow when performing use case analysis.  The rules are designed to avoid common pitfalls in the analysis process and lead to a much more coherent set of requirements.

A brief introduction to use cases

The Baker’s Dozen ‘rules’:

Rule 1: Use cases aren’t silver bullets

Rule 2: Understand your stakeholders

Rule 3: Never mix your actors

Rule 4: The "Famous Five" of requirements modelling

Rule 5: Focus on goals, not behaviour

Rule 6: If it’s not on the Context or Domain models, you can’t talk about it

Rule 7: Describe ALL the transactions

Rule 8: Don’t describe the user interface

Rule 9: Build yourself a data dictionary

Rule 10: The magical number seven (plus or minus two)

Rule 11: Don’t abuse <<include>>

Rule 12: Avoid variations on a theme

Rule 13: Say it with more than words

 

These 13 guidelines are by no means exhaustive (for example, what about ‘Abstract’ actors, or System Integrity use cases?) and I’m sure there are dozens more I could add to the list (by all means, let me know your golden rules).  The aim of writing this article was to give beginners to use case modelling a simple framework for creating useful, effective use cases – something I think is sorely missing.

The Baker’s Dozen can be (neatly) summed up by the following principles:

  • Understand the difference between analysis and design.
  • Understand the value of a model – know why to create the model, not just how.
  • Understand the difference between precision and detail.
  • Keep it simple, but never simplistic.


The Baker’s Dozen of Use Cases

February 21st, 2011

Rule 13: Say it with more than words

Use case descriptions are most commonly written in text format (albeit often a stylised, semi-formal style of writing). Text is a very effective method of describing the transactional behaviour of use cases – it’s readily understandable without special training; most engineers can produce it (although the ability to write basic prose does seem beyond the capability of many!); and it is flexible enough to deal with complex behaviours – for example, variable numbers of iterations through a loop – without becoming cumbersome.

 


Use the most appropriate notation for the behaviour you are describing

 

However, this flexibility can come at the price of precision. Sometimes, for example in many control systems, you have to state precisely what the response of the system will be. Words, in this case, are not adequate (that said, I have seen requirements engineers attempt to describe the overshoot behaviour of a closed loop control system in English prose!)

This rule therefore should be: find the most appropriate notation to describe the transactions between the actors and the system; with a corollary that sometimes the right notation is multiple notations. Use tables, charts, diagrams, activity diagrams, flowcharts, mathematical models, state charts, or combinations of these, to describe the behaviour. Don’t get caught up in the dogma of use cases that says the use case description must be text. For most systems I have found that combinations of methods work best – some transactions are best described using semi-formal text; some by state machines; others by tables. The skill is recognising the type of transactions you are describing and picking the appropriate method.

 


The Baker’s Dozen of Use Cases

February 7th, 2011

Rule 11 – Don’t abuse «include»

A use case contains all the steps (transactions) needed to describe how the actor (our stakeholder) achieves their goal (or doesn’t, depending on the particular conditions of the scenario). Therefore a use case is a stand-alone entity – it encapsulates all the behaviour necessary to describe all the possible scenarios connected to achieving a particular end result. That’s what makes use cases such a powerful analysis tool – they give the system’s requirements context.  Use cases are also an extremely useful project management tool. By implementing a single use case you can deliver something complete, and of value, to the customer. The system may do nothing else, but at least the customer can solve one problem with it.

Occasionally, two use cases contain a sequence of transactions common to both sets of scenarios. This sequence may not necessarily occur at the same point in each use case (for example, the beginning) but will always be in the same order.


 

UML provides a mechanism for extracting this common information into its own use case. The mechanism is called the «include» relationship. The semantics of the «include» relationship mean that the base use cases are only complete if they fully, and completely, contain the contents of the included use case.


Included use cases (if you must use them)

This relationship can sometimes be useful, particularly if a sequence of transactions is repeated many times.

However, misunderstanding of «include» tends to lead to a very common abuse: functional decomposition of use cases.

Many use case modellers use the «include» relationship to split a complex use case into a number of smaller, simpler use cases. This can lead, in extreme cases, to an explosion of use cases, with leaf-node use cases capturing trivial functional requirements (for example “Capture button press”).  

These trivial use cases often have no meaning to stakeholders, who should be focussed on what they want to happen, rather than how it happens.  Also, there is a huge overhead in creating, reviewing and maintaining this vast number of use cases.

I call this approach “Design Cases” to differentiate it from a use case.

 


Design cases in action

Rather worryingly, several organisations actively promote Design Cases for requirements analysis.  The appeal is obvious: functional decomposition is a familiar concept to most developers (and if it isn’t they really shouldn’t be developing software!); and it allows the developer to settle back into the comfortable territory of solving problems (rather than defining them).

Remember, use cases are an analysis tool for understanding the system. A simple, coherent set of use cases, reflecting the usage of the system from the customer’s perspective, is far more effective than demonstrating, to the n-th level, how the system will operate. That’s what design verification is for.

My advice is to avoid «include» wherever possible.  Prefer repetition of text within the use case description, over trying to identify and extract commonality.  In this way each use case remains separate and complete; and there is no temptation to fall into the functional decomposition trap.  That way lies madness (or at least analysis paralysis).

 


The Baker’s Dozen of Use Cases

January 21st, 2011

Rule 10:  The magical number seven (plus or minus two)

Psychologist George Miller, in his seminal 1956 paper "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information", identified a limit on the capacity of human working memory. He found that adults have the capability to hold between five and nine ‘chunks’ of information at any one time. A ‘chunk’ may be a number, letter, word or some other cohesive set of data.

What has this to do with use cases? One of the primary functions of writing use cases is requirements validation – that is, are we actually building the correct system? The use case model is a technique for presenting the system requirements such that the customer can say either yes, we have a correct understanding of how the system should work; or no, we have misunderstood the system.

When presenting information to our (probably non-technical) customer it makes sense to keep the information content manageable. Try to keep the number of transactions in any one flow (that is, a sequence of use case steps between a trigger event and a conclusion) to between five and nine. This gives your customers the best chance of comprehending what you’re writing.

This means that the use case stops being a list of all the system requirements and becomes a précis of the system requirements. Each transaction may therefore equate to more than one system requirement. In practice this is less of a simplification than it sounds: requirements are often ‘clustered’ around elements of data and their production and manipulation – effectively what each use case transaction describes.

The skill in writing good use cases is the ability to précis requirements together to minimise the number of use case steps, without creating simplistic use cases.

The ‘seven, plus or minus two’ rule is not hard and fast; it is more a guideline. If your description has ten steps, this is not a problem; however, if it has thirty then there’s a chance you’ve over-complicated the description. The same is true at the other limit: three steps is fine; one step means you may have over-simplified and lost detail.

 


The Baker’s Dozen of Use Cases

January 14th, 2011

Rule 9 – Build yourself a Data Dictionary

Transactions between the actors and the system typically involve the transfer of data.  This data has to be defined somewhere.  If you’ve built a Domain Model (see Part 5) most of the data will be identified there; but even then the class diagram is not always the most practical place to capture the sort of information you need to record.

Another way to capture this information is in a Data Dictionary.  A data dictionary defines each piece of data in the system, and attributes about that data.  Typically, a data dictionary holds (but is not limited to) the following:

  • An Identifier.  This is how the data will be referred to.  Remember to use a term that has meaning in the problem domain.
  • Definition.  What does this data represent?  (And remember: a good definition consists of a genus – what type of thing I’m defining – and a description)
  • Units of measure; if applicable
  • Valid range; again, if applicable
  • Resolution.  That is, what’s the smallest difference I can measure?  This can become important if your implementation may use fixed point number schemes (for example).

This information is all vital to the developers who must design and code interfaces, etc., but it also has benefits when writing use case descriptions – it decouples the behaviour of the system from its data.
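
To make this concrete, here is a minimal sketch of what a data dictionary entry might look like if captured in code. This is purely illustrative and assumes a Python representation; the DataItem class, and the units, range and resolution values shown for Airfield Altitude, are invented for the example rather than taken from any real system.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DataItem:
    """One entry in a data dictionary (fields follow the list above)."""
    identifier: str                                     # problem-domain name
    definition: str                                     # genus + description
    units: Optional[str] = None                         # units of measure, if applicable
    valid_range: Optional[Tuple[float, float]] = None   # (min, max), if applicable
    resolution: Optional[float] = None                  # smallest measurable difference

    def is_valid(self, value: float) -> bool:
        """The dictionary, not the use case, decides what counts as valid data."""
        if self.valid_range is None:
            return True
        low, high = self.valid_range
        return low <= value <= high

# Invented example entry; the numbers are placeholders, not real requirements.
airfield_altitude = DataItem(
    identifier="Airfield Altitude",
    definition="Altitude: the height of the destination airfield above mean sea level",
    units="ft",
    valid_range=(-1500.0, 15000.0),
    resolution=1.0,
)

assert airfield_altitude.is_valid(350.0)
```

With entries like this, the use case text can simply refer to Airfield Altitude and leave the units, range and resolution to the dictionary.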

Just as we should decouple user interface details from the use case description (see Rule 8) so should we decouple the data transactions.  Describing the data requirements of a transaction in the use case description is cumbersome and leads to a potential maintenance problem (what if the resolution of a data item changes?  Or its units of measure?  Ask the team that developed the flight systems software of the Mars Climate Orbiter about getting units of measure confused!)

Data Dictionary

As you define your data dictionary, you can refer to data items (by their name) in the use case text.  Use some typographical convention (I use italics) to identify a data item from the data dictionary; or, you could hyperlink it. 

If the data requirements change you shouldn’t need to change the use case – unless it leads to a change in behaviour.  Similarly, you don’t need to elaborately detail exception flows for invalid data.  The data dictionary defines what is valid; the use case defines what should happen if we receive invalid data.  A simple, clear separation of responsibilities.
 
The Data Dictionary is not a new idea.  Twenty years ago, when I started developing software, a Data Dictionary was always created, and was considered a vital part of the design.  Unfortunately, it seems to have gone out of fashion, or been forgotten, in many software development circles.

 


The Baker’s Dozen of Use Cases

November 22nd, 2010

RULE 7: Describe ALL the transactions

A use case, as the name implies, describes the usage of the system from the point of view of some beneficiary (our Actor). It is NOT (as I seem to spend my whole life telling people) just the system functionality, organised as some sort of graphical ‘sub-routine’. This means the use case description must include the expected behaviour of the actors as well as the expected behaviour of the system.

This can sometimes appear counter-intuitive to use case newcomers. If the use cases define the functional behaviour of the system, then surely we can’t impose requirements on our users (the actors)?

In fact, this is not the case (and stop calling me Shirley).

The use case describes what we (or, more correctly, our customers) would like to happen when they use the system. It describes the typical interactions between stakeholder and system, such that the stakeholder achieves their goal. If the goal of the use case is beneficial to the stakeholder (see Rule 5) then the stakeholder has compelling reasons to behave as we describe in the use case. In other words, if the stakeholder wants the end result, they should comply with the behaviour described in the use case.

For example, if the system requests some information from the actor it is reasonable to expect them to respond with the information. In effect we are imposing requirements on the stakeholder (“when the system requests information, you shall respond”). Of course, there’s nothing to stop the obtuse user from not responding, walking away, or doing any number of other bizarre and inexplicable things; but these behaviours are a fantastic source of exception flows.  This approach moves the use case writer away from describing system functions and instead focuses them on the transactions between the stakeholder and the system.

The use case description defines the system behaviour as a sequence of atomic transactions between the actors and the system. Atomic is used in the sense of indivisible. This means that, in the context of the problem domain the transaction cannot be broken down into smaller transactions. (See also: Rule 9).  There are four basic transactions, and the use case description will be made up of sequences of these transactions:

  1. The actor supplies information to the system. Information could be an event (such as the trigger event) or data; or both.
  2. The system does some work. Remember to describe the results of the work, not how the result is achieved
  3. The system supplies information to the actor. Again, this could be requests for information, output signals, etc.
  4. The actor does some work. As mentioned above this is unenforceable, but reasonable behaviour on the part of the actor.

The fundamental transactions of a use case



Writing style

I suggest writing each transaction in a simple, semi-formal, declarative style. For transactions involving information exchange into or out of the system (that is, ‘Actor supplies information’; ‘The system supplies information to the actor’) I use the following style:

[Source] [Action] [Object] [Target]
For example: “The Navigator supplies the Airfield Altitude to the NAV/WAS system”

Source: The Navigator
Action: Supplies
Object: Airfield Altitude
Target: NAV/WAS System

For descriptions of work being done (either by the system or by the actor) I use the following form:

[Subject] [Action] [Object] [Constraint]

For example: “The ARS rewinds the hose at 5 ft/s +/- 0.5 ft/s”

Subject: The ARS
Action: Rewinds
Object: The Hose
Constraint: 5 ft/s +/-0.5 ft/s

If a transaction contains more than three punctuation marks it is probably too complicated, and should be restructured to make its meaning clearer.
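
Purely as an illustration of the two templates above (the article prescribes the written forms only, not any tooling), the bracketed fields could be captured as simple structures and rendered back into the semi-formal text. The class names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class InformationTransaction:
    """[Source] [Action] [Object] [Target]: information crossing the system boundary."""
    source: str
    action: str
    obj: str
    target: str

    def render(self) -> str:
        return f"{self.source} {self.action} the {self.obj} to the {self.target}."

@dataclass
class WorkTransaction:
    """[Subject] [Action] [Object] [Constraint]: work done by the system or the actor."""
    subject: str
    action: str
    obj: str
    constraint: str

    def render(self) -> str:
        return f"{self.subject} {self.action} the {self.obj} at {self.constraint}."

# The two examples from the text above:
print(InformationTransaction("The Navigator", "supplies", "Airfield Altitude", "NAV/WAS system").render())
# -> "The Navigator supplies the Airfield Altitude to the NAV/WAS system."
print(WorkTransaction("The ARS", "rewinds", "hose", "5 ft/s +/- 0.5 ft/s").render())
# -> "The ARS rewinds the hose at 5 ft/s +/- 0.5 ft/s."
```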

Weak phrases
Weak phrases make for comfortable reading of a use case but are apt to cause uncertainty and leave room for multiple interpretations.

Use of phrases such as “adequate” and “as appropriate” indicates that what is required is either defined elsewhere or, worse, that the requirement is open to subjective interpretation. Phrases such as “but not limited to” and “as a minimum” suggest that the requirement hasn’t yet been fully defined.

Typical weak phrases include:

  • As applicable
  • As appropriate
  • But not limited to…
  • Effective
  • Normal
  • Timely
  • Minimize
  • Maximize
  • Rapid
  • User-friendly
  • Easy
  • Sufficient
  • Adequate
  • Quick

The truth is that, in general, engineers have poor writing skills. By providing a framework for how use cases should be written, you limit the scope for ambiguity, woolliness and inconsistency.
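
One way to put such a framework into practice (my suggestion, not something proposed in the original article) is to check draft use case text against the weak-phrase list automatically. A rough, deliberately naive sketch in Python:

```python
# Naive weak-phrase checker for draft use case text (illustrative only;
# simple substring matching will also flag words such as "normally").
WEAK_PHRASES = [
    "as applicable", "as appropriate", "but not limited to", "effective",
    "normal", "timely", "minimize", "maximize", "rapid", "user-friendly",
    "easy", "sufficient", "adequate", "quick",
]

def flag_weak_phrases(use_case_text: str) -> list:
    """Return (line number, phrase) pairs for every weak phrase found."""
    findings = []
    for line_no, line in enumerate(use_case_text.splitlines(), start=1):
        lowered = line.lower()
        for phrase in WEAK_PHRASES:
            if phrase in lowered:
                findings.append((line_no, phrase))
    return findings

draft = "The system shall respond in a timely manner with adequate feedback."
for line_no, phrase in flag_weak_phrases(draft):
    print(f"Line {line_no}: weak phrase '{phrase}'")
```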




The Baker’s Dozen of Use Cases

November 8th, 2010

RULE 5: Focus on goals, not behaviour

There is a subtle distinction between the functional behaviour of the system and the goals of the actors.  This can cause confusion: after all, the functional behaviour of the system must surely be the goal of the actor?

It is very common, then, for engineers to write use cases that define, and then describe, all the functions of the system.  It is very tempting to simply re-organise the software requirements into functional areas, each one becoming a ‘use case’.  Paying lip-service to the ‘rules’ of use case modelling, the modeller then organises these functions under the actor that triggers the behaviour.

Use Cases based on functional requirements, rather than Actor goals.

I call these entities Design Cases to distinguish them from use cases, and they can be the first steps on the slippery slope of functional decomposition (see Rule 11 for more on this).

Identifying goals requires a change in mindset for the engineer.  Instead of asking “What functions must the system perform?” and listing the resulting functionality, ask: “If the system provides this result, will it help the actor fulfil one (or more) of their responsibilities?”  If the answer to this question is ‘yes’ you’ve probably got a viable use case; if the answer’s ‘no’, or you can’t answer the question, then you probably haven’t fully understood your stakeholders or their responsibilities.

In other words, the focus should be on the post-conditions of the use case – the state the system will be in after the use case has completed.  If the post-condition state of the system can provide measurable benefit to the Actor then it is a valid use case.

Let’s take a look at what I consider a better Use Case model:

Organising use cases by Actor Goal

The post-conditions of the use cases (above) relate to the goals of the Actor (in this case an Air-Traffic Control Officer).  We can validate whether the post-conditions will be of value to the Actor.

The (main success) post-condition of Land Aircraft is that one of the aircraft the ATCO is responsible for is on the ground (safely, we assume!).  At that point the aircraft is no longer the responsibility of the ATCO – one less thing for them to worry about.  I argue that this is a condition that is of benefit to the ATCO.

Similarly with Hand-off Aircraft.  As aircraft reach the limit of the local Air Traffic Control (ATC) centre they are ‘handed-off’ to another ATC centre; often a major routing centre.  The post-condition for the hand-off will be that the departing aircraft will be (safely!) under the control of the other ATC, and removed from the local ATCO’s set of aircraft he is responsible for.

Receive Aircraft is the opposite side of Hand-Off Aircraft.  That is, what happens when the ATCO has an aircraft handed over to them from another ATC region.  At the end of the use case, the ATCO must have complete details and control of the received aircraft.

When an aircraft takes off, the aircraft must be assigned to an ATCO, who is responsible for routing it safely out of the local ATC region.  The post-condition of Take-Off Aircraft must be that the aircraft is assigned to an ATCO and that ATCO has all required details of the aircraft’s journey.

In the last two use cases, the ATCO actually gains work to do (an extra aircraft to monitor).  The requirements of the system must ensure that when the new aircraft is received the transfer is performed as simply, consistently and straightforwardly as possible.  This is the benefit to the Actor.

While one could easily argue this is a simplistic model of Air Traffic Control, it demonstrates how to base use cases on goals rather than functional behaviours.  Each use case is validated by its post-conditions, rather than its pre-conditions and behaviour.





The Baker’s Dozen of Use Cases

October 4th, 2010

RULE 3: Never mix your Actors

The UML definition of an Actor is an external entity that interacts with the system under development. In other words: it’s a stakeholder.

Having analysed all your stakeholders (see Part 3), it’s tempting to stick them (no pun intended) as actors on a use case diagram and start defining use cases for each.

Each set of stakeholders (Users, Beneficiaries or Constrainers) has its own set of concerns, language and concepts:

Concerns.
Each stakeholder group has a different set of issues, problems, wants and desires. For example, Users are interested in functionality; Constrainers in compliance.

Concepts.
The way a system is perceived by the stakeholders depends on their viewpoint, their needs, their technical background, etc. Each group’s paradigm – their way of perceiving the system – will be different and will often involve subtly different concepts. For example, Users may have no concept of return-on-investment (RoI) for the system; whereas this may be a key concept to a Beneficiary.

Language.
Just as concepts are different; so is the language used to describe them. In many cases, the same word is used in different contexts to mean different things. For example: how many different concepts of ‘power’ can you think of? Mechanical, physical, electrical, political…

It is vital never to mix actors from different stakeholder groups on the same use case diagram. Trying to mix actors leads to ambiguity and confusion; both for the writer and reader! The differences in concept, viewpoint and language will make the use case almost impossible to decipher and understand.

Use a separate use case diagram for each set of stakeholders

By all means draw a separate use case diagram for each set of stakeholders. (Note: use case descriptions for non-User stakeholders are beyond the scope of this article.)





The Baker’s Dozen of Use Cases

September 6th, 2010

RULE 2: Understand your stakeholders

A Stakeholder is a person, or group of people, with a vested interest in your system. Vested means they want something out of it – a return on their investment. That may be money; it may be an easier life.


One of the keys to requirements analysis is understanding your stakeholders – who they are, what they are responsible for, why they want to use your system and how it will benefit them.

It’s important to understand (and difficult for many software engineers to accept!) that your stakeholders have responsibilities above and beyond just using your product. In fact, the only reason they are using your product is because it (should!) help them fulfil their larger responsibilities. If your product doesn’t help your stakeholder then why should they use it?


The first step in requirements analysis is to define your stakeholders. That definition must include:

  1. A named individual responsible for the stakeholder group
  2. The stakeholder’s responsibilities. That is, a description of the roles, jobs, and tasks the stakeholders have to perform every day. If you understand a stakeholder’s problems and needs you can define solutions that help them.
  3. Success criteria. That is: what is a good result for this stakeholder? The success criteria are a list of features and qualities that, if implemented, would bring maximum benefit to the stakeholder group.



Not all stakeholders are the same. For analysis, I divide stakeholders into three groups using my ‘stakeholder onion’ model:


Users.
Users are the most obvious group of stakeholders. Users directly interact with the system under development (sometimes I call them ‘Direct Interactors’).  A system’s users are concerned with how things work – what buttons to press, what order events happen, etc.  Their focus is therefore primarily functional behaviour and human-centred system qualities-of-service such as usability.

Beneficiaries.
The beneficiaries have some need that the system fulfils (or some pain that needs to be taken away!). The beneficiaries therefore benefit (often financially) by having the system in place. Typically, these stakeholders will be paying for the system. Beneficiaries are less interested in function and more interested in quality-of-service – reliability, maintainability, etc. since if these requirements are not fulfilled it will cost them money!

Constrainers.

The Users and Beneficiaries of a system are concerned with the problem they need to solve.   The Constrainers are focused not on the problem domain but on the solution domain.  The constrainers place negative requirements – or design constraints – on the system.  They place limits on how the system can work, how it will be developed, or what technologies or methodologies may be used. Constrainers come in many forms – Legislation, Standards, The Laws of physics, to name a few.

Most engineers intuitively understand they are, in some way, a stakeholder in the system they are designing, but they often do not know how to express this.  The development team itself is a Constrainer stakeholder, since it places limits on the technologies that can be implemented (lack of skills) or timescales (lack of resource).




Not only do the different stakeholders have different viewpoints, they also have different priorities on your project:

Beneficiaries’ concerns typically (but not always) outweigh user concerns.  For example, in the conflict between usability (a user concern) and low cost (a beneficiary concern) who will win? Remember: He who pays the piper calls the tune…

Constrainers should over-ride beneficiaries. Legal requirements, standards requirements, skills shortages, etc. will always supersede the desires of the other stakeholders.

The core difference between Beneficiaries and Constrainers is that Constrainers CANNOT be influenced – that is, you can negotiate on functional behaviours or qualities of service, but you cannot negotiate away legal requirements or the laws of physics!  A Constrainer either exists, in which case their criteria must be met; or they are not a Constrainer.   The skill, therefore, is to reduce the number of Constrainers on a project to open up as many different design options as possible.




Where does all this fit into Use Case analysis?  The answer is two-fold:

  • Actors on a use case diagram are all stakeholders
  • Stakeholder analysis gives context to the Use Case.

An Actor is an “external entity that interacts with the system under development”.  In the simplest case, that means they’re a User stakeholder.  The Beneficiaries and Constrainers also influence and affect the system behaviour, albeit in a very different way to the Users.  

The relationship between Actors, stakeholder types and Use Cases is a potential source of problems; we’ll discuss this in a later article.


The most important reason for performing stakeholder analysis is that it gives context to the Use Case model.  The Use Case System Boundary element defines the functional scope of the system – that is, what behaviours the system is required to provide – but it doesn’t give any reasons why the system has to provide those features.  There is nothing on the Use Case diagram that gives any help in validating the Use Case model.  We can add any behaviour we like to the Use Case diagram, provided we can link it to an actor.

By providing a stakeholder analysis we can validate the Use Cases by asking questions such as: Does the Use Case behaviour help the stakeholder (actor) fulfil their responsibilities?  If the behaviour does not help any stakeholder fulfil their responsibilities, should we be implementing it, or have we missed a stakeholder?  Are there other (additional) behaviours the stakeholder may need in order to fulfil their responsibilities?
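
As a sketch of how those validation questions could be asked systematically, each use case can be cross-checked against the stakeholder responsibilities it claims to support. This is my own illustration, reusing the ATC example from Rule 5; the article does not prescribe any particular recording format, and ‘Flash Cursor’ is an invented counter-example.

```python
# Illustrative cross-check, not a prescribed technique: every use case should
# trace to at least one stakeholder responsibility identified during analysis.
stakeholder_responsibilities = {
    "Air-Traffic Control Officer": [
        "Route aircraft safely through the local ATC region",
        "Hand aircraft over to neighbouring ATC centres",
    ],
}

# Which responsibility (if any) does each use case help fulfil?
use_case_traces = {
    "Land Aircraft": [("Air-Traffic Control Officer",
                       "Route aircraft safely through the local ATC region")],
    "Hand-off Aircraft": [("Air-Traffic Control Officer",
                           "Hand aircraft over to neighbouring ATC centres")],
    "Flash Cursor": [],  # invented example of a 'design case' with no stakeholder benefit
}

for use_case, traces in use_case_traces.items():
    if not traces:
        print(f"'{use_case}' helps no stakeholder fulfil a responsibility: "
              "drop it, or look for a missing stakeholder.")
    for stakeholder, responsibility in traces:
        known = stakeholder_responsibilities.get(stakeholder, [])
        if responsibility not in known:
            print(f"'{use_case}' claims an unrecognised responsibility for {stakeholder}.")
```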




