EmbeddedRelated.com

Is UML fit for embedded work?

Started by Lanarcam April 11, 2012
On Thu, 12 Apr 2012 10:30:58 +0200, Lanarcam <lanarcam1@yahoo.fr> wrote:
> On Apr 12, 9:53 am, "Boudewijn Dijkstra" <sp4mtr4p.boudew...@indes.com> wrote:
>> On Wed, 11 Apr 2012 18:13:04 +0200, Lanarcam <lanarc...@yahoo.fr> wrote:
>>> On Apr 11, 5:26 pm, hamilton <hamil...@nothere.com> wrote:
>>>> On 4/11/2012 9:05 AM, Lanarcam wrote:
>>>>> I had already asked the question many years ago and the
>>>>> responses were mixed.
>>>>>
>>>>> We are currently part designing, part reengineering a big
>>>>> software project for the control of power stations. Since
>>>>> the previous release was made without proper design
>>>>> notation, the company has decided to use UML and a
>>>>> tool called Visual Paradigm to do the preliminary design.
>>>>> We won't generate the code.
>>>>>
>>>>> One aspect that IMO is not satisfactory is that UML
>>>>> diagrams are not "naturally" fit for expressing requirements
>>>>> in a way that is consistent and complete enough to
>>>>> generate usable code and being able to reverse
>>>>
>>>> (I am not a UML person)
>>>>
>>>> If UML as a diagramming tool is not good enough, what would be?
>>>
>>> Short answer: SDL, SCADE.
>>>
>>> Long answer: take a sequence diagram; you know that at some point
>>> in time one module will call a function of another module, but you
>>> can't express the logical flow. Expressing "for", "switch", "if" is
>>> impossible or cumbersome. You won't be able to deal with complex
>>> data structures.
>>
>> As a sequence diagram describes a scenario, "switch" has no place there.
>> Also, dealing with complex data structures has no place on a sequence
>> diagram, as they only describe which interfaces are being used. UML 2.1
>> contains structures for describing repetitive and conditional behaviour
>> on sequence diagrams, which are IMHO not cumbersome.
>
> Then you can't translate such a diagram into executable code. It is
> only illustrative, which is not so bad from a documentation point
> of view, but it lacks the features of a complete tool.
A scenario (which an SD describes) is a way to investigate use cases, define interfaces and validate system behaviour. Although you can indicate object states, I don't think it was ever the intention that a set of SDs could be used to fully specify a state machine. During the development cycle, SDs might conflict, indicate impossibilities, show things that might better be done otherwise and make regression testing easier. So they are quite a bit more than just illustrative.
>> SDL has message sequence charts, which are analogous to UML sequence
>> diagrams. Is your criticism above not equally valid for SDL?
>
> SDL is a formal language, UML is semi-formal, whatever that means.
Some UML modeling tools annotate model elements so that it is perfectly clear which semantics apply. A language is never a complete solution, although a formal language may make certain things easier.
>>> The only diagrams that are complete are state diagrams and flow
>>> charts. Flow charts were given up some 30 years ago; for involved
>>> algorithms, pseudo-code and code are more appropriate.
>>>
>>> Take use cases: how do you specify complex protocols between actors
>>> and the system?
>>
>> If you're talking about data transfer protocols: you shouldn't. Use
>> cases are part of the analysis phase; you're not supposed to meddle
>> with implementation details here. Otherwise, you can define sequence
>> diagrams and statecharts that describe the protocol behaviour. You can
>> add statecharts to actors for high-level simulation.
>
> Data transfer protocols are part of what I was talking about. They are
> not IMO implementation details but parts of the specification.
Can you give an example where the specific protocol matters for use case modelling?
>> [...]
--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(Remove the obvious prefix to reply privately.)
On Apr 12, 12:53 pm, "Boudewijn Dijkstra"
<sp4mtr4p.boudew...@indes.com> wrote:
> [...]
>
> Can you give an example where the specific protocol matters for use case
> modelling?
I won't give an example about use cases particularly, but about
specifications. Given a list of incoming or outgoing messages between
the system and another system, you can map functions that will deal
with those messages. Without being able to dig into the (applicative)
messages, you won't be able to draw any sufficient detail to perform a
valid functional analysis. I have worked extensively with SCADA and
data acquisition systems, and the protocols were part of the
specification. How can you express that with use cases?
On Thu, 12 Apr 2012 13:10:42 +0200, Lanarcam <lanarcam1@yahoo.fr> wrote:
> On Apr 12, 12:53 pm, "Boudewijn Dijkstra" <sp4mtr4p.boudew...@indes.com> wrote:
>> [...]
>> Can you give an example where the specific protocol matters for use case
>> modelling?
>
> [...] I have worked extensively with SCADA and data acquisition
> systems and the protocols were part of the specification. How can you
> express that with use cases?
Why would you want to express that with use cases? It is of no concern
during the analysis phase which messages are coming in or going out.
Use case modelling deals with more abstract concepts like achieving
goals, so use cases are not suitable for specification modelling.
However, you can always use other diagrams to model the interface (and
even crude behaviour) of an actor as if it were a (sub)system. Then,
during the design phase, you can replace the actor by the driver
interface that sends and receives the messages.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(Remove the obvious prefix to reply privately.)
On Apr 12, 2:23 pm, "Boudewijn Dijkstra"
<sp4mtr4p.boudew...@indes.com> wrote:
> [...]
>
> Why would you want to express that with use cases? It is of no concern
> during the analysis phase which messages are coming in or going out.
> [...]
So, I suppose you disagree with this:

"Does a use case differ from a functional specification? You can
employ use cases to model business processes, a system's functional
requirements, or even the internal workings of a system. When used to
model functional requirements, a use case describes one function
required of your system or application. As such, your use cases
constitute a functional specification."

<http://www.oeng.com/pdf/UC-FAQ.pdf>
On 11-Apr-12 21:13, Lanarcam wrote:
> On 11/04/2012 20:42, Dombo wrote:
>> On 11-Apr-12 18:13, Lanarcam wrote:
>>> [...]
>>> Long answer: take a sequence diagram; you know that at some point
>>> in time one module will call a function of another module, but you
>>> can't express the logical flow. Expressing "for", "switch", "if" is
>>> impossible or cumbersome. You won't be able to deal with complex
>>> data structures.
>>
>> It is possible with UML (i.c.w. OCL), but cumbersome. Sequence
>> diagrams are IMO only useful to illustrate certain scenarios, not as a
>> full specification. I feel the same (to a greater or lesser degree)
>> about many of the other UML diagram types; OK to clarify things and to
>> give an overview, but not really practical as a full specification
>> language that covers every corner case.
>
> My feeling exactly.
>> My experience with UML is that for non-trivial stuff you either end up
>> with something that is easy to grasp but very incomplete, or, when you
>> strive for completeness, you end up with a big complex mess real quickly
>> that still isn't complete.
>
> That's the problem we face today. There is a large code base and some
> people want to reverse engineer it for the sake of documentation. They
> want to draw activity diagrams for each function, some of which take
> 10 pages; I fear it will become a useless indecipherable mess.
Before one decides to go down that road, one should not only ask
whether one is willing to spend the effort to write the initial
documentation, but also whether one is willing, can afford _and_ has
the discipline to spend the (even larger) effort to maintain it. The
more detailed documentation gets, the harder it is to keep up to date.
Detailed documentation that is not kept up to date is worse than
useless; it wastes both the time of the one who wrote it and of those
who read it.

One of the clients I have worked for had the ambition to document
their software with a very high level of detail. At a certain point
they had more than 1100 documents (no, not pages!) describing their
software, where each document typically consisted of somewhere between
20 and 80 pages (the standard document template alone accounted for 12
pages). Judging by the directory structure and templates, that was
only a small fraction of the documents they intended to write at the
start of the project. Though this was a large project (several MLOC),
this was way over the top and actually counterproductive. You never
knew if a document was up to date. Often engineers making changes to
the software weren't even aware that there were one or more documents
that should be updated as a consequence of the changes made to the
code. Most documents were write-only; if you needed to know the
details, it was both quicker and more reliable to look them up in the
actual code.

When it comes to documentation, I prefer to document the high-level
structure, interfaces, rules and concepts of the software, and the
rationale behind design choices (especially if they are not obvious).
The high-level stuff rarely changes and cannot be captured (well) by
reverse engineering and automatic documentation generation tools. For
documentation of low-level details I prefer to use tools like Doxygen
(i.c.w. Graphviz), which generate documentation from the code itself
and (tagged) comments embedded in the code. Though tools like Doxygen
have limitations and shortcomings, my experience is that the
documentation they generate is much more accurate than a manually
maintained document describing things like call graphs, dependencies,
function parameters, etc.
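To illustrate the tagged-comment style that Doxygen picks up, here is a small hypothetical C function; the function name and the limits are invented purely for illustration, not taken from any project mentioned in this thread:

```c
#include <stdint.h>

/**
 * @brief Clamp a raw ADC reading to the valid sensor range.
 *
 * Doxygen turns tagged comment blocks like this one into browsable
 * reference pages and, together with Graphviz, call graphs.
 *
 * @param raw  Raw 12-bit ADC value.
 * @return     The value clamped to [MIN_VALID, MAX_VALID].
 */
uint16_t clamp_reading(uint16_t raw) {
    enum { MIN_VALID = 100, MAX_VALID = 4000 };  /* invented limits */
    if (raw < MIN_VALID) return MIN_VALID;
    if (raw > MAX_VALID) return MAX_VALID;
    return raw;
}
```

Because the comment lives next to the code it describes, it tends to be updated together with the code, which is exactly the accuracy advantage over separately maintained documents.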
> IMO an ideal tool would allow one to describe high-level requirements
> and design ideas first, and let people dig deeper into low-level
> design incrementally without losing the big picture. What we would
> need would be hierarchical diagrams encompassing all steps from
> requirements to code generation, and navigation between those
> different steps. That's my letter to Santa Claus.
That is at the top of my wish list too. Several UML tools have
promised this for years; however, actually getting it to work this way
in real life is a whole other story. Buying requirements management
and/or modeling tooling is one thing; deploying and embedding it in
the organization is quite another (and much harder). I have seen too
many times potentially useful tooling fail to realize its potential,
simply because only one or two motivated people actively used the tool
while others continued doing their own thing. The best chance is at
the start of a project; it would be very hard to introduce tools like
this late in a project.
> There is a tool called SCADE that allows that kind of design, but
> it is rather hard to use and doesn't scale well with big projects.
That is a pity, but unfortunately quite common with modeling languages
and tools. Most are fine for trivial problems but cannot handle large
projects well, if at all. Ironically, those are the cases where the
need is greatest.
On Thu, 12 Apr 2012 15:28:33 +0200, Lanarcam <lanarcam1@yahoo.fr> wrote:
> On Apr 12, 2:23 pm, "Boudewijn Dijkstra" <sp4mtr4p.boudew...@indes.com> wrote:
>> [...]
>
> So, I suppose you disagree with this:
>
> "Does a use case differ from a functional specification? You can
> employ use cases to model business processes, a system's functional
> requirements, or even the internal workings of a system. When used to
> model functional requirements, a use case describes one function
> required of your system or application. As such, your use cases
> constitute a functional specification."
>
> <http://www.oeng.com/pdf/UC-FAQ.pdf>
No, but we could disagree on the interpretation. I believe that by
"model" they not only mean "draw things" but also to properly fill out
the textual description using appropriate fields. This way you can
give a place to every piece of the specification.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(Remove the obvious prefix to reply privately.)
With UML, just as with anything else, the real question is return on
investment (ROI). To be truly successful, the benefits of a method
must outweigh the learning curve, the tools, the maintenance costs,
the hidden costs of "fighting the tool" and so on.

As it turns out, the ROI of UML is lousy unless the models are used to
generate substantial portions of the production code. Without code
generation, the models inevitably fall behind and become more of a
liability than an asset. In this respect I tend to agree with the "UML
Modeling Maturity Index (UMMI)", invented by Bruce Douglass
(https://www.ibm.com/developerworks/mydeveloperworks/blogs/BruceDouglass/entry/bruce_s_top_ten_modeling_hints_9_all_models_are_abstractions_in_that_they_focus_on_some_properties_and_aspects_at_the_expense_of_others49?lang=en).
According to the UMMI, without code generation UML can reach at most
30% of its potential. This is just too low to outweigh all the costs.

Unfortunately, code generation capabilities have always been
associated with complex, expensive UML tools with a very steep
learning curve and a price tag to match. With such a big investment
side of the ROI equation, it's quite difficult to reach sufficient
return. Consequently, all too often big tools get abandoned and if
they continue to be used at all, they end up as overpriced drawing
packages.

So, to follow my purely economic argument, unless we make the
investment part of the ROI equation low enough, without reducing the
returns too much, UML has no chance. On the other hand, if we could
achieve positive ROI (something like 80% of benefits for 10% of the
cost), we would have a *game changer*.

To this end, when you look closer, the biggest "bang for the buck" in
UML with respect to embedded code generation comes from two things:
(1) an embedded real-time framework and (2) support for hierarchical
state machines (UML statecharts). Of course, these two ingredients
work best together and need each other. State machines can't operate
in a vacuum and need a framework to provide execution context,
thread-safe event passing, event queueing, etc. The framework, in
turn, benefits from state machines for structure and code generation
capabilities.
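The "execution context" a framework supplies can be sketched in a few lines of C. This is an invented, single-threaded sketch (not any real framework's API, and with none of the thread safety a real one needs): an event queue drained by a run-to-completion loop that feeds a pluggable dispatch function:

```c
/* Minimal sketch of an active-object shell: an event queue plus a
 * run-to-completion loop.  Invented for illustration only. */
enum { QUEUE_LEN = 8 };

struct active {
    int queue[QUEUE_LEN];       /* pending event signals (FIFO)      */
    int head, tail, count;
    void (*dispatch)(int sig);  /* the state machine "plugged in"    */
};

static int post(struct active *a, int sig) {   /* producer side */
    if (a->count == QUEUE_LEN) return 0;       /* queue full    */
    a->queue[a->tail] = sig;
    a->tail = (a->tail + 1) % QUEUE_LEN;
    ++a->count;
    return 1;
}

static int run_one(struct active *a) {         /* consumer side */
    if (a->count == 0) return 0;               /* nothing to do */
    int sig = a->queue[a->head];
    a->head = (a->head + 1) % QUEUE_LEN;
    --a->count;
    a->dispatch(sig);                          /* run to completion */
    return 1;
}

/* A trivial demo "state machine" that just counts dispatched events. */
static int handled;
static void count_dispatch(int sig) { (void)sig; ++handled; }
```

The point of the sketch is only the division of labor: the framework owns queueing and the dispatch loop, the state machine owns the behaviour.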

I'm not sure if many people realize the critical importance of a
framework, but a good framework is in many ways even more valuable
than the tool itself, because the framework is the big enabler of
architectural reuse, testability, traceability, and code generation to
name just a few. The second component is state machines, but again
I'm not sure if everybody realizes the importance of state nesting.
Without support for state hierarchy, traditional "flat" state machines
suffer from the phenomenon known as "state-transition explosion",
which renders them unusable for real-life problems.
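The "state-transition explosion" is easy to see in code. In this invented C sketch of a small media player (states and events are made up for illustration), a common event such as EV_POWER_OFF must be repeated in every state; with N states and M such common events, a flat machine carries on the order of N*M transitions, whereas a hierarchical machine would handle the event once in a common parent state:

```c
enum state { STOPPED, PLAYING, PAUSED, OFF };
enum event { EV_PLAY, EV_PAUSE, EV_STOP, EV_POWER_OFF };

/* Flat state machine: note the EV_POWER_OFF transition duplicated in
 * every state -- the seed of the state-transition explosion. */
enum state dispatch(enum state s, enum event e) {
    switch (s) {
    case STOPPED:
        if (e == EV_PLAY)      return PLAYING;
        if (e == EV_POWER_OFF) return OFF;   /* repeated...    */
        break;
    case PLAYING:
        if (e == EV_PAUSE)     return PAUSED;
        if (e == EV_STOP)      return STOPPED;
        if (e == EV_POWER_OFF) return OFF;   /* ...in every... */
        break;
    case PAUSED:
        if (e == EV_PLAY)      return PLAYING;
        if (e == EV_STOP)      return STOPPED;
        if (e == EV_POWER_OFF) return OFF;   /* ...state       */
        break;
    case OFF:
        break;                               /* ignore all     */
    }
    return s;                                /* event ignored  */
}
```

With state nesting, STOPPED, PLAYING and PAUSED would share a parent "on" state that owns the single EV_POWER_OFF transition.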

As it turns out, the two critical ingredients for code generation can
be had with much lower investment than traditionally thought. An event-
driven, real-time framework can be no more complex than a traditional
bare-bones RTOS (e.g., see the family of open source QP frameworks at
http://www.state-machine.com/qp). A UML modeling tool for creating
hierarchical state machines and production code generation can be free
and can be designed to minimize the problem of "fighting the
tool" (see http://www.state-machine.com/qm). Sure, you don't get all
the bells and whistles of IBM Rhapsody, but you get arguably the most
valuable ingredients. Most importantly, you have a chance to achieve a
positive ROI on your first project. As I said, this to me is game
changing.

Can a lightweight framework like QP and the QM modeling tool scale to
really big projects? Well, I've seen it used for tens of KLOC-size
projects by big, distributed teams and I haven't seen any signs of
over-stressing the architecture or the tool.
On 17/04/2012 19:05, Miro Samek wrote:
> [...]
Interesting thoughts. I'll have a look at http://www.state-machine.com/qp and http://www.state-machine.com/qm.
In article <e74d77a8-bbae-4dd1-afa3-5fd8ebf7dc52@l3g2000vbv.googlegroups.com>,
sales@quantum-leaps.com says...
>
[...]
> Can a lightweight framework like QP and the QM modeling tool scale to
> really big projects? Well, I've seen it used for tens of KLOC-size
> projects by big, distributed teams and I haven't seen any signs of
> over-stressing the architecture or the tool.
Miro, isn't the number of 'active objects' in QP a factor that limits
scaling (maximum number = 64, according to the website)?

I have in mind (real-world) projects with hundreds of objects,
belonging to dozens of different classes. Doesn't every object with a
state machine need to be an active object?

--
Saludos.
Ignacio G.T.
On Apr 18, 11:45 am, "Ignacio G.T." <igtorque.rem...@emover.yahoo.es>
wrote:

> Miro, isn't the number of 'active objects' in QP a factor that limits
> scaling (maximum number = 64, according to the website)?
>
> I have in mind (real-world) projects with hundreds of objects, belonging
> to dozens of different classes. Doesn't every object with a state
> machine need to be an active object?
I'm glad you asked, because it is important to distinguish between an
active object and just a state machine. An active object is an "object
running in its own thread of execution". In other words: active_object
= state_machine + thread + event_queue. So, while an active object is
a hierarchical state machine, it also has a thread and an event queue.

The QP framework limits the number of such active objects to 63. But
that does not mean that your system is limited to just 63 state
machines. In fact, each active object can manage an open-ended number
of lightweight hierarchical state machines as "Orthogonal Components"
(see http://www.state-machine.com/resources/Pattern_Orthogonal.pdf).
For instance, the "Fly 'n' Shoot" game example described in the PSiCC2
book as well as in the QP tutorials has a pool of 10 mines (5 small
mines and 5 big and nasty mines). The mines are "Orthogonal Component"
state machines managed by the Tunnel active object, but they are not
full-blown active objects.

The point is that in larger projects you very often need pools of
stateful components, such as transactions, client connections, etc.,
all of them being natural state machines with their own life cycle.
Implementing all these components as threads, as is often done in
traditional threaded applications, doesn't actually scale that well,
because threads are very expensive. Just a few hundred threads can
bring a most powerful machine to its knees. In contrast, lightweight
state machine components take orders of magnitude fewer resources (a
hierarchical state machine in QP takes only one function pointer in
RAM, plus a virtual pointer in C++), so you can easily manage hundreds
or thousands of those.

The bottom line is that the efficiency of the implementation in QP
actually scales better than traditional RTOS/OS-based approaches and
enables building bigger applications.
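The Orthogonal Component idea can be sketched in plain C. This is not the actual QP API; the names (mine, mine_armed, tunnel_dispatch) are invented, loosely echoing the "Fly 'n' Shoot" mines. Each component costs one function pointer of RAM, and one container object dispatches events to a whole pool of them with no extra threads:

```c
#include <stddef.h>

/* Sketch of the "Orthogonal Component" pattern (not the real QP API):
 * each component's current state is a single function pointer. */
struct mine;                                    /* lightweight component */
typedef void (*mine_state)(struct mine *m, int sig);
struct mine { mine_state state; };              /* 1 function pointer    */

enum { SIG_TICK, SIG_HIT };

static void mine_armed(struct mine *m, int sig);
static void mine_dead(struct mine *m, int sig) { (void)m; (void)sig; }

static void mine_armed(struct mine *m, int sig) {
    if (sig == SIG_HIT) m->state = mine_dead;   /* state transition */
}

/* The container object owns a pool of components and forwards events
 * to each of them; in QP this role is played by an active object. */
struct tunnel { struct mine mines[10]; };

static void tunnel_dispatch(struct tunnel *t, int sig) {
    for (size_t i = 0; i < 10; ++i)
        t->mines[i].state(&t->mines[i], sig);   /* run each component */
}
```

Since only the container needs a thread and an event queue, the per-component cost stays tiny, which is why pools of hundreds or thousands of such state machines remain practical.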
