Reply by Marco, June 3, 2012
On Saturday, April 21, 2012 10:04:17 AM UTC-7, Lanarcam wrote:
> On 21/04/2012 18:04, Marco wrote:
>> On Wednesday, April 11, 2012 8:05:19 AM UTC-7, Lanarcam wrote:
>>> I had already asked the question many years ago and the
>>> responses were mixed.
>>>
>>> We are currently part designing, part reengineering a big
>>> software project for the control of power stations. Since
>>> the previous release was made without proper design
>>> notation, the company has decided to use UML and a
>>> tool called Visual Paradigm to do the preliminary design.
>>> We won't generate the code.
>>
>> Ugh - the term "preliminary design" should be retired!
>
> Why is that?
For software-intensive systems, it smacks of the waterfall life-cycle,
which only works for the smallest of projects, such as those done by
students. It is much better to use "high-level design", "conceptual
design" and/or "architectural design", since those terms describe the
design at an abstraction level, not at a time phase.

Therefore, for a medium to large project, once the team has a clue of
what the system is about, they need to start building it (selecting an
appropriate life-cycle as needed).

The term "detailed design" is still useful because it adheres to an
abstraction level (unless, of course, you are model-based, in which
case that could be captured with code).

<http://en.wikipedia.org/wiki/Software_development_process>
Reply by Lanarcam, April 22, 2012
On 21/04/2012 18:04, Marco wrote:
> On Wednesday, April 11, 2012 8:05:19 AM UTC-7, Lanarcam wrote:
>> I had already asked the question many years ago and the
>> responses were mixed.
>>
>> We are currently part designing, part reengineering a big
>> software project for the control of power stations. Since
>> the previous release was made without proper design
>> notation, the company has decided to use UML and a
>> tool called Visual Paradigm to do the preliminary design.
>> We won't generate the code.
>
> Ugh - the term "preliminary design" should be retired!
>
> Any large-size project that needs to be maintained over many years
> needs some high-level design material (hopefully with some diagrams) to
> help newbies, system-level test folks and other stakeholders understand
> what the system is about. A small subset of the UML diagrams can help
> with this endeavor even if you are not generating code. Some of the
> diagrams created during design will be low-level, and these will end up
> in the dust-bin, which is probably OK. For those, a picture of the
> whiteboard is probably good enough.
>
> I am not keen on using proprietary tool notations for anything.
> Anyone still using the original "Software through Pictures" diagrams?
I am looking for tools that automate the design and generation of
embedded systems. So far I could find this one, which is freely
available:

"Concretely, and from the end-user point of view, TASTE consists of two
graphical editors that allow capturing the logical architecture of a
computer-based system and its deployment on hardware components, and a
set of model analyzers and code generators that process user input to
verify and glue all software components together. The TASTE process
encourages a combined use of modeling and coding languages depending on
the nature of the function; the toolset is then responsible for making
them talk together at runtime: this is the heart of the TASTE
technology. It is for example strongly encouraged to model embedded
systems as a combination of state machines and control laws, because
this is often what they are. Following this logic, take the best
language for state machines (SDL), the one your scientist knows best
for the mathematical models (SCADE or Simulink), and let TASTE fix
interfaces at code level so that the SDL-generated code can
transparently communicate with SCADE- or Simulink-generated code. Of
course, if you need to develop low-level functions to pilot a
peripheral (sensor or actuator), use C or Ada."

<http://taste.tuxfamily.org/wiki/index.php?title=File:ERTS2012-TASTE-OVERVIEW.pdf>
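If I read that right, the generated glue reduces to something like the
sketch below. All names here are invented for illustration; this is not
real TASTE, SDL or Simulink output, just the shape of the idea:

    /* Purely illustrative glue between two worlds of generated code --
     * invented names, NOT real TASTE/SDL/Simulink output. */
    #include <stdio.h>

    /* stand-in for a Simulink/SCADE-generated control law (one step) */
    static void ctrl_step(double setpoint, double measured, double *command)
    {
        *command = 0.5 * (setpoint - measured);  /* toy proportional law */
    }

    /* stand-in for an SDL-generated state machine's input signal */
    static void sdl_deliver_command(double command)
    {
        printf("SDL side received command %f\n", command);
    }

    /* the glue layer the toolset would generate: it lets the two pieces
     * of generated code talk without knowing about each other */
    static void glue_control_cycle(double setpoint, double measured)
    {
        double command;
        ctrl_step(setpoint, measured, &command);
        sdl_deliver_command(command);
    }

    int main(void)
    {
        glue_control_cycle(10.0, 9.2);  /* one control cycle */
        return 0;
    }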
Reply by Lanarcam, April 21, 2012
On 21/04/2012 18:04, Marco wrote:
> On Wednesday, April 11, 2012 8:05:19 AM UTC-7, Lanarcam wrote:
>> I had already asked the question many years ago and the
>> responses were mixed.
>>
>> We are currently part designing, part reengineering a big
>> software project for the control of power stations. Since
>> the previous release was made without proper design
>> notation, the company has decided to use UML and a
>> tool called Visual Paradigm to do the preliminary design.
>> We won't generate the code.
>
> Ugh - the term "preliminary design" should be retired!
Why is that?
> Any large-size project that needs to be maintained over many years
> needs some high-level design material (hopefully with some diagrams) to
> help newbies, system-level test folks and other stakeholders understand
> what the system is about. A small subset of the UML diagrams can help
> with this endeavor even if you are not generating code. Some of the
> diagrams created during design will be low-level, and these will end up
> in the dust-bin, which is probably OK. For those, a picture of the
> whiteboard is probably good enough.
For me, a tool should provide code generation capabilities; otherwise
you always face the problem of manually updating the diagrams when you
modify the code, which is usually not done. Statecharts can be used to
generate code and are a good way of structuring the program. With
hierarchical statecharts, you can design at a high level and progress
toward the code in a seamless way.
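To make that concrete, here is a minimal sketch of how a hierarchical
statechart can map to C. The state and event names are invented for
illustration; a real code generator produces something more elaborate:

    /* Sketch of a hierarchical state machine in plain C. An unhandled
     * event "bubbles up" to the parent state, so common behaviour such
     * as fault handling is written once, in the parent. */
    #include <stddef.h>

    typedef enum { EV_START, EV_STOP, EV_FAULT } Signal;

    typedef struct State State;
    typedef struct Machine Machine;

    /* a handler returns 1 if it consumed the event, 0 to defer it */
    typedef int (*Handler)(Machine *m, Signal sig);

    struct State   { Handler handle; const State *parent; };
    struct Machine { const State *current; };

    static const State operational, running, stopped;

    /* the parent handles EV_FAULT once, for all of its substates */
    static int operational_h(Machine *m, Signal sig) {
        if (sig == EV_FAULT) { m->current = &stopped; return 1; }
        return 1;                 /* root state: swallow anything else */
    }
    static int running_h(Machine *m, Signal sig) {
        if (sig == EV_STOP) { m->current = &stopped; return 1; }
        return 0;                 /* EV_FAULT etc. bubble to the parent */
    }
    static int stopped_h(Machine *m, Signal sig) {
        if (sig == EV_START) { m->current = &running; return 1; }
        return 0;
    }

    static const State operational = { operational_h, NULL };
    static const State running     = { running_h, &operational };
    static const State stopped     = { stopped_h, &operational };

    /* dispatch walks up the hierarchy until some state takes the event */
    static void dispatch(Machine *m, Signal sig) {
        for (const State *s = m->current; s != NULL; s = s->parent) {
            if (s->handle(m, sig)) { return; }
        }
    }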
> I am not keen on using proprietary tool notations for anything.
> Anyone still using the original "Software through Pictures" diagrams?
Reply by Marco, April 21, 2012
On Wednesday, April 11, 2012 8:05:19 AM UTC-7, Lanarcam wrote:
> I had already asked the question many years ago and the
> responses were mixed.
>
> We are currently part designing, part reengineering a big
> software project for the control of power stations. Since
> the previous release was made without proper design
> notation, the company has decided to use UML and a
> tool called Visual Paradigm to do the preliminary design.
> We won't generate the code.
Ugh - the term "preliminary design" should be retired!

Any large-size project that needs to be maintained over many years
needs some high-level design material (hopefully with some diagrams) to
help newbies, system-level test folks and other stakeholders understand
what the system is about. A small subset of the UML diagrams can help
with this endeavor even if you are not generating code. Some of the
diagrams created during design will be low-level, and these will end up
in the dust-bin, which is probably OK. For those, a picture of the
whiteboard is probably good enough.

I am not keen on using proprietary tool notations for anything.
Anyone still using the original "Software through Pictures" diagrams?
Reply by Miro Samek, April 20, 2012
On Apr 18, 11:45 am, "Ignacio G.T." <igtorque.rem...@emover.yahoo.es> wrote:

> Miro, isn't the number of 'active objects' in QP a factor that limits
> scaling (maximum number = 64, according to the website)?
>
> I have in mind (real-world) projects with hundreds of objects,
> belonging to dozens of different classes. Doesn't every object with a
> state machine need to be an active object?
>
> --
> Saludos.
> Ignacio G.T.
I'm glad you asked, because it is important to distinguish between an
active object and just a state machine. An active object is an "object
running in its own thread of execution". In other words:
active_object = state_machine + thread + event_queue. So, while an
active object is a hierarchical state machine, it also has a thread and
an event queue.

The QP framework limits the number of such active objects to 63. But
that does not mean that your system is limited to just 63 state
machines. In fact, each active object can manage an open-ended number
of lightweight hierarchical state machines as "Orthogonal Components"
(see http://www.state-machine.com/resources/Pattern_Orthogonal.pdf).
For instance, the "Fly 'n' Shoot" game example described in the PSiCC2
book as well as in the QP tutorials has a pool of 10 mines (5 small
mines and 5 big and nasty mines). The mines are "Orthogonal Component"
state machines managed by the Tunnel active object, but they are not
full-blown active objects.

The point is that in larger projects you very often need pools of
stateful components, such as transactions, client connections, etc.,
all of them natural state machines with their own life-cycle.
Implementing all these components as threads, as is often done in
traditional threaded applications, doesn't actually scale that well,
because threads are very expensive: just a few hundred threads can
bring the most powerful machine to its knees. In contrast, lightweight
state machine components take orders of magnitude fewer resources (a
hierarchical state machine in QP takes only 1 function pointer in RAM,
plus a virtual pointer in C++), so you can easily manage hundreds or
thousands of them.

The bottom line is that the QP implementation actually scales better
than traditional RTOS/OS-based approaches and enables building bigger
applications.
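In pseudo-C, the pattern looks roughly like this. It is a bare sketch
with made-up names, not the actual QP API: each component costs one
state-handler pointer of RAM, and the managing active object simply
dispatches events to its components:

    /* Sketch of the "Orthogonal Component" idea -- made-up names. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint8_t sig; } Event;

    typedef struct Component Component;
    typedef void (*StateHandler)(Component *me, Event const *e);

    struct Component { StateHandler state; };  /* one pointer of RAM */

    enum { ARM_SIG = 1, TRIP_SIG };

    static void Mine_armed(Component *me, Event const *e);

    /* one handler per state; a transition is a pointer assignment */
    static void Mine_idle(Component *me, Event const *e) {
        if (e->sig == ARM_SIG)  { me->state = &Mine_armed; }
    }
    static void Mine_armed(Component *me, Event const *e) {
        if (e->sig == TRIP_SIG) { me->state = &Mine_idle; }
    }

    #define N_MINES 10
    static Component mines[N_MINES];   /* the pool of components */

    static void Tunnel_init(void) {
        for (size_t i = 0; i < N_MINES; ++i) {
            mines[i].state = &Mine_idle;      /* all mines start idle */
        }
    }

    /* the managing active object (its thread and queue not shown)
     * dispatches each event to all of its orthogonal components */
    static void Tunnel_dispatch(Event const *e) {
        for (size_t i = 0; i < N_MINES; ++i) {
            (*mines[i].state)(&mines[i], e);
        }
    }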
Reply by Ignacio G.T., April 18, 2012
In article <e74d77a8-bbae-4dd1-afa3-5fd8ebf7dc52@l3g2000vbv.googlegroups.com>, sales@quantum-leaps.com says...
> [...]
>
> Can a lightweight framework like QP and the QM modeling tool scale to
> really big projects? Well, I've seen it used for tens of KLOC-size
> projects by big, distributed teams and I haven't seen any signs of
> over-stressing the architecture or the tool.
Miro, isn't the number of 'active objects' in QP a factor that limits
scaling (maximum number = 64, according to the website)?

I have in mind (real-world) projects with hundreds of objects,
belonging to dozens of different classes. Doesn't every object with a
state machine need to be an active object?

--
Saludos.
Ignacio G.T.
Reply by Lanarcam, April 17, 2012
On 17/04/2012 19:05, Miro Samek wrote:
> [...]
>
> Can a lightweight framework like QP and the QM modeling tool scale to
> really big projects? Well, I've seen it used for tens of KLOC-size
> projects by big, distributed teams and I haven't seen any signs of
> over-stressing the architecture or the tool.
Interesting thoughts. I'll have a look at http://www.state-machine.com/qp and http://www.state-machine.com/qm.
Reply by Miro Samek, April 17, 2012
With UML, just as with anything else, the real question is return on
investment (ROI). To be truly successful, the benefits of a method
must outweigh the learning curve, the tools, the maintenance costs,
the hidden costs of "fighting the tool" and so on.

As it turns out, the ROI of UML is lousy unless the models are used to
generate substantial portions of the production code. Without code
generation, the models inevitably fall behind and become more of a
liability than an asset. In this respect I tend to agree with the "UML
Modeling Maturity Index (UMMI)", invented by Bruce Douglass
(https://www.ibm.com/developerworks/mydeveloperworks/blogs/BruceDouglass/entry/bruce_s_top_ten_modeling_hints_9_all_models_are_abstractions_in_that_they_focus_on_some_properties_and_aspects_at_the_expense_of_others49?lang=en).
According to the UMMI, without code generation UML can reach at most
30% of its potential. This is just too low to outweigh all the costs.

Unfortunately, code generation capabilities have always been
associated with complex, expensive UML tools with a very steep
learning curve and a price tag to match. With such a big investment
side of the ROI equation, it's quite difficult to reach sufficient
return. Consequently, all too often big tools get abandoned and if
they continue to be used at all, they end up as overpriced drawing
packages.

So, to follow my purely economic argument, unless we make the
investment part of the ROI equation low enough, without reducing the
returns too much, UML has no chance. On the other hand, if we could
achieve positive ROI (something like 80% of benefits for 10% of the
cost), we would have a *game changer*.

To this end, when you look closer, the biggest "bang for the buck" in
UML with respect to embedded code generation comes from two
ingredients: (1) an embedded real-time framework and (2) support for
hierarchical state machines (UML statecharts). Of course, these two
ingredients work best together and need each other. State machines
can't operate in a vacuum and need a framework to provide execution
context, thread-safe event passing, event queueing, etc. The framework,
in turn, benefits from state machines for structure and code generation
capabilities.
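Stripped to its core, the execution context the framework provides is
just an event queue plus a run-to-completion loop per active object,
something like this sketch (invented names here, not the actual QP
API):

    /* Sketch of the event-passing core of such a framework. */
    #include <stdint.h>

    typedef struct { uint8_t sig; } Event;

    #define Q_LEN 8u
    typedef struct {
        Event   buf[Q_LEN];        /* ring buffer of pending events */
        uint8_t head, tail, count;
    } EventQueue;

    /* a real framework would briefly disable interrupts or lock here,
     * so ISRs and other threads can post safely (thread-safe passing) */
    static int post(EventQueue *q, Event e) {
        if (q->count == Q_LEN) { return 0; }   /* queue full */
        q->buf[q->head] = e;
        q->head = (uint8_t)((q->head + 1u) % Q_LEN);
        ++q->count;
        return 1;
    }

    typedef void (*Dispatch)(Event const *e);

    /* each active object's thread runs this run-to-completion loop,
     * feeding queued events one at a time to its state machine */
    static void event_loop(EventQueue *q, Dispatch dispatch) {
        for (;;) {
            while (q->count > 0u) {
                Event e = q->buf[q->tail];
                q->tail = (uint8_t)((q->tail + 1u) % Q_LEN);
                --q->count;
                dispatch(&e);      /* run-to-completion step */
            }
            /* block here until the next event arrives (OS-specific) */
        }
    }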

I'm not sure if many people realize the critical importance of a
framework, but a good framework is in many ways even more valuable
than the tool itself, because the framework is the big enabler of
architectural reuse, testability, traceability, and code generation to
name just a few. The second ingredient is state machines, but again
I'm not sure if everybody realizes the importance of state nesting.
Without support for state hierarchy, traditional "flat" state machines
suffer from the phenomenon known as "state-transition explosion",
which renders them unusable for real-life problems.

As it turns out, the two critical ingredients for code generation can
be had with a much lower investment than traditionally thought. An
event-driven, real-time framework need be no more complex than a
traditional bare-bones RTOS (e.g., see the family of open source QP
frameworks at
http://www.state-machine.com/qp). A UML modeling tool for creating
hierarchical state machines and production code generation can be free
and can be designed to minimize the problem of "fighting the
tool" (see http://www.state-machine.com/qm). Sure, you don't get all
the bells and whistles of IBM Rhapsody, but you get the arguably most
valuable ingredients. Most importantly, you have a chance to achieve a
positive ROI on your first project. As I said, this to me is game
changing.

Can a lightweight framework like QP and the QM modeling tool scale to
really big projects? Well, I've seen it used for tens of KLOC-size
projects by big, distributed teams and I haven't seen any signs of
over-stressing the architecture or the tool.
Reply by Boudewijn Dijkstra, April 16, 2012
On Thu, 12 Apr 2012 15:28:33 +0200, Lanarcam <lanarcam1@yahoo.fr> wrote:
> On Apr 12, 2:23 pm, "Boudewijn Dijkstra" <sp4mtr4p.boudew...@indes.com> wrote:
>> On Thu, 12 Apr 2012 13:10:42 +0200, Lanarcam <lanarc...@yahoo.fr> wrote:
>>> On Apr 12, 12:53 pm, "Boudewijn Dijkstra" <sp4mtr4p.boudew...@indes.com> wrote:
>>>> On Thu, 12 Apr 2012 10:30:58 +0200, Lanarcam <lanarc...@yahoo.fr> wrote:
>>>>>>> [...]
>>>>>>> Take Use cases, how do you specify complex protocols between
>>>>>>> actors and the system?
>>>>>>
>>>>>> If you're talking about data transfer protocols: you shouldn't. Use
>>>>>> cases are part of the analysis phase; you're not supposed to meddle
>>>>>> with implementation details here. Otherwise, you can define sequence
>>>>>> diagrams and statecharts that describe the protocol behaviour. You
>>>>>> can add statecharts to actors for high-level simulation.
>>>>>
>>>>> Data transfer protocols are part of what I was talking about. They
>>>>> are not IMO implementation details but parts of the specification.
>>>>
>>>> Can you give an example where the specific protocol matters for use
>>>> case modelling?
>>>
>>> I won't give an example about use cases particularly, but about
>>> specifications. Given a list of incoming or outgoing messages between
>>> the system and another system, you can map the functions that will
>>> deal with those messages. Without being able to dig into the
>>> (applicative) messages, you won't be able to draw any sufficient
>>> detail to perform a valid functional analysis. I have worked
>>> extensively with SCADA and data acquisition systems, and the
>>> protocols were part of the specification. How can you express that
>>> with use cases?
>>
>> Why would you want to express that with use cases? It is of no concern
>> during the analysis phase which messages are coming in or going out.
>> Use case modelling deals with more abstract concepts like achieving
>> goals, so use cases are not suitable for specification modelling.
>> However, you can always use other diagrams to model the interface (and
>> even crude behaviour) of an actor as if it were a (sub)system. Then,
>> during the design phase, you can replace the actor by the driver
>> interface that sends and receives the messages.
>
> So, I suppose you disagree with this:
>
> "Does a use case differ from a functional specification? You can
> employ use cases to model business processes, a system's
> functional requirements, or even the internal workings of a system.
> When used to model functional requirements, a use case describes
> one function required of your system or application. As such, your
> use cases constitute a functional specification."
>
> <http://www.oeng.com/pdf/UC-FAQ.pdf>
No, but we could disagree on the interpretation. I believe that by
"model" they not only mean "draw things" but also properly filling out
the textual description using the appropriate fields. This way you can
give a place to every piece of the specification.

--
Made with Opera's revolutionary e-mail client: http://www.opera.com/mail/
(Remove the obvious prefix to reply privately.)
Reply by Dombo, April 12, 2012
On 11-Apr-12 21:13, Lanarcam wrote:
> On 11/04/2012 20:42, Dombo wrote:
>> On 11-Apr-12 18:13, Lanarcam wrote:
>>> On Apr 11, 5:26 pm, hamilton <hamil...@nothere.com> wrote:
>>>> On 4/11/2012 9:05 AM, Lanarcam wrote:
>>>>> I had already asked the question many years ago and the
>>>>> responses were mixed.
>>>>>
>>>>> We are currently part designing, part reengineering a big
>>>>> software project for the control of power stations. Since
>>>>> the previous release was made without proper design
>>>>> notation, the company has decided to use UML and a
>>>>> tool called Visual Paradigm to do the preliminary design.
>>>>> We won't generate the code.
>>>>>
>>>>> One aspect that IMO is not satisfactory is that UML
>>>>> diagrams are not "naturally" fit for expressing requirements
>>>>> in a way that is consistent and complete enough to
>>>>> generate usable code and being able to reverse
>>>>
>>>> (I am not a UML person)
>>>>
>>>> If UML as a diagramming tool is not good enough, what would be?
>>>
>>> Short answer: SDL, SCADE.
>>>
>>> Long answer: take a sequence diagram; you know that at some point
>>> in time one module will call a function of another module, but you
>>> can't express the logical flow. Expressing "for", "switch" or "if"
>>> is impossible or cumbersome. You won't be able to deal with complex
>>> data structures.
>>
>> It is possible with UML (i.c.w. OCL), but cumbersome. Sequence
>> diagrams are IMO only useful to illustrate certain scenarios, not as
>> a full specification. I feel the same (to a greater or lesser degree)
>> about many of the other UML diagram types; OK to clarify things and
>> to give an overview, but not really practical as a full specification
>> language that covers every corner case.
>
> My feeling exactly.
>
>> My experience with UML is that for non-trivial stuff you either end
>> up with something that is easy to grasp but very incomplete, or, when
>> you strive for completeness, you end up with a big complex mess real
>> quickly that still isn't complete.
>
> That's the problem we face today. There is a large code base, and some
> people want to reverse engineer it for the sake of documentation. They
> want to draw activity diagrams for each function, some of which take
> 10 pages; I fear it will become a useless, indecipherable mess.
Before one decides to go down that road, one should not only ask
oneself whether one is willing to spend the effort to write the initial
documentation, but also whether one is willing, can afford _and_ has
the discipline to spend the (even larger) effort to maintain it. The
more detailed documentation gets, the harder it is to keep up to date.
Detailed documentation that is not kept up to date is worse than
useless; it wastes both the time of the one who wrote it and of the
one(s) who read it.

One of the clients I have worked for had the ambition to document their
software at a very high level of detail. At a certain point they had
more than 1100 documents (no, not pages!) describing their software,
where each document typically consisted of somewhere between 20 and 80
pages (the standard document template alone accounted for 12 pages).
Judging by the directory structure and templates, that was only a small
fraction of the documents they intended to write at the start of the
project. Though this was a large project (several MLOC), this was way
over the top and actually counterproductive. You never knew if a
document was up to date. Often engineers making changes to the software
weren't even aware that there were one or more documents that should be
updated as a consequence of the changes made to the code. Most
documents were write-only; if you needed to know the details, it was
both quicker and more reliable to look them up in the actual code.

When it comes to documentation, I prefer to document the high-level
structure, interfaces, rules and concepts of the software, and the
rationale behind design choices (especially if they are not too
obvious). The high-level stuff rarely changes and cannot be captured
(well) by reverse engineering and automatic documentation generation
tools. For documentation of low-level details I prefer to use tools
like Doxygen (i.c.w. Graphviz), which generate documentation from the
code itself and from (tagged) comments embedded in the code. Though
tools like Doxygen have limitations and shortcomings, my experience is
that the documentation they generate is much more accurate than a
manually maintained document describing things like call graphs,
dependencies, function parameters, etc.
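For example, a Doxygen-tagged comment on a (hypothetical) C function
looks like this; Doxygen extracts the tags, and combined with Graphviz
it can also draw the call graphs for you:

    #include <stdint.h>

    /**
     * @brief Converts a raw ADC reading to millivolts.
     *
     * @param raw      Raw ADC count (0..4095 for a 12-bit converter).
     * @param vref_mv  Reference voltage in millivolts.
     *
     * @return The measured voltage in millivolts.
     */
    uint32_t adc_to_millivolts(uint16_t raw, uint32_t vref_mv)
    {
        return ((uint32_t)raw * vref_mv) / 4095u;
    }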
> IMO an ideal tool would allow one to describe high-level requirements
> and design ideas first, and let people dig deeper into low-level
> design incrementally without losing the big picture. What we would
> need would be hierarchical diagrams encompassing all steps from
> requirements to code generation, and navigation between those
> different steps. That's my letter to Santa Claus.
That is at the top of my wish list too. Several UML tools have promised
this for years; however, actually getting it to work this way in real
life is a whole other story. Buying requirements management and/or
modeling tooling is one thing; deploying and embedding it in the
organization is quite another (and much harder). I have seen too many
times potentially useful tooling fail to realize its potential, simply
because only one or two motivated people actively used the tool while
the others continued doing their own thing. The best chance is at the
start of a project; it would be very hard to introduce tools like this
late in a project.
> There is a tool called SCADE that allows that kind of design, but
> it is rather hard to use and doesn't scale well with big projects.
That is a pity, but unfortunately quite common with modeling languages
and tools. Most are fine for trivial problems but cannot handle large
projects well, if at all. Ironically, those are the cases where the
need is greatest.