Hi everyone,

I'm not sure this is the best place to discuss this subject, and I'd be glad if someone could repost the message wherever is more appropriate.

I'm architecting a system that has a master unit (MU) and several slave units (SUs), all happily sitting on a shared bus (I'd like to keep the level of description abstract and not go into the details of the bus implementation).

The system has to perform several activities with different 'priorities' [1] within a perpetually repeating time cycle T. Each SU needs a certain amount of data in and out per cycle in order to perform its function, so the MU should provide the necessary input and retrieve the available output at the right pace in order for the function to perform as intended.

Besides the SUs I have a memory unit (ME), shared among all SUs and the MU, for configuration and data. The ME contains configuration parameters for all SUs, which can be updated from cycle to cycle; moreover it is the place where results from all SUs are stored.

How do I allocate bus access to all the SUs in order to get them working properly (with a certain margin to handle failures and recovery actions)?

Is there a 'formal' way to define such a problem and find a solution?

I realize the problem as stated is potentially incomplete and may lack information, but I'm open to questions so that I can add elements to my problem definition.

Any suggestion/hint/pointer is appreciated.

Al

[1] Priority may not be the best term here. The real intent is that each SU works on its own cycle at a frequency that is a multiple of 1/T, so that within one cycle we have N1 accesses for SU1, N2 accesses for SU2, etc.
bus allocation strategy
Started by ●November 6, 2014
Reply by ●November 6, 2014
On 11/6/2014 8:24 AM, alb wrote:
> How do I allocate the bus access to all SUs in order to get them working
> properly (with a certain margin to handle failures and recovery
> actions)?
>
> Is there a 'formal' way to define such problem and find a solution?

Yes, your problem is not adequately specified. Even to establish that there is a problem to be solved, you need the time duration of each memory access and the maximum latency each unit can tolerate without impacting its performance. In essence, the memory unit is a shared resource and there are multiple devices vying for access to it.

If I understand the problem as you have put it, this is exactly the same as a single processor with multiple interrupts. In your case the processing units are vying for access to the memory; in the case of interrupts, the interrupt routines are vying for access to processor time. I'm sure you can find a lot of material on interrupt priority assignments.

--
Rick
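Rick's framing can be turned into a first feasibility check: sum each SU's per-cycle bus demand (accesses times transaction time) and compare it against the cycle length T. A minimal sketch in Python; every figure below is an invented placeholder, not data from the thread:

```python
# Rough bus-utilization sanity check for the MU/SU scenario.
# All numbers are hypothetical placeholders.

T_ms = 100.0  # master cycle length T

# (name, accesses per cycle N_i, transaction time in ms incl. handshake)
slave_units = [
    ("SU1", 10, 0.8),
    ("SU2", 4, 2.0),
    ("SU3", 25, 0.5),
]

demand_ms = sum(n * t for _, n, t in slave_units)
utilization = demand_ms / T_ms
print(f"bus demand per cycle: {demand_ms:.1f} ms ({utilization:.0%} of T)")

# Keep utilization well below 100% to leave margin for retries/recovery.
assert utilization < 1.0, "bus is overcommitted"
```

This only checks aggregate capacity, not latency; it says nothing yet about whether each individual SU is serviced often enough within the cycle.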
Reply by ●November 6, 2014
On Thu, 06 Nov 2014 11:43:07 -0500, rickman wrote:
> I'm sure you can find a lot of material on interrupt priority
> assignments.

Google Rate Monotonic Analysis. It may answer your questions.

--
www.wescottdesign.com
Reply by ●November 6, 2014
On Thu, 06 Nov 2014 13:24:50 +0000, alb wrote:
> Is there a 'formal' way to define such problem and find a solution?

If you can do something as simple as establishing a master cycle and making a Gantt chart for all the message passing within that cycle, then you're done. Your solution will probably be somewhat fragile, but you'll be done for the moment.

CAN has the concept of prioritized messages, and you can use that to divide the network traffic into short messages that need low latency and long messages that can take a while. CAN does it in the context of breaking up the long messages into short ones (CAN has an 8-byte maximum message length).

Basically, your long slow messages either have to come at some scheduled "slow traffic time", or your physical (or maybe link) layer needs to specify a maximum message length, with higher layers having a way to send long messages as a collection of short ones.

--
www.wescottdesign.com
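The "master cycle + Gantt chart" idea can be sketched in a few lines: lay every per-cycle transaction out on one timeline and verify it fits within T. The traffic table, durations, and the naive first-fit packing below are invented for illustration:

```python
# Sketch of the "master cycle + Gantt chart" approach: place every
# per-cycle transaction on a timeline and check it fits in T.
# The traffic table and durations are hypothetical.

T_ms = 100.0
# (slave unit, accesses per cycle N_i, duration of one transaction, ms)
traffic = [("SU1", 10, 0.8), ("SU2", 4, 2.0), ("SU3", 25, 0.5)]

# Spread each SU's accesses evenly: the k-th access of an SU with N
# accesses per cycle ideally starts at k * T / N.
pending = sorted((T_ms / n * k, f"{su}#{k}", dur)
                 for su, n, dur in traffic for k in range(n))

schedule = []  # (start_ms, end_ms, label) -- the Gantt chart rows
cursor = 0.0
for ideal_start, label, dur in pending:
    start = max(cursor, ideal_start)   # naive first-fit packing
    schedule.append((start, start + dur, label))
    cursor = start + dur

assert cursor <= T_ms, "schedule overruns the master cycle"
print(f"{len(schedule)} transactions scheduled, "
      f"last one ends at {cursor:.2f} ms of {T_ms:.0f} ms")
```

The fragility Tim mentions shows up here directly: adding one message can force every start time after it to shift, so the whole table has to be rebuilt.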
Reply by ●November 6, 2014
Hi Rick,

rickman <gnuarm@gmail.com> wrote:
[]
> Yes, your problem is not adequately specified. In order to even
> consider that there is any problem to be solved requires a time duration
> of each memory access and a maximum latency that each unit is able to
> wait without impacting its performance.

While the memory access itself may be negligible, the handshake protocol on the bus, if any, adds overhead and can make the transaction longer. Let's then talk about 'transaction time' instead of memory access time.

Latency is indeed what I meant by 'priority'. If you prefer, it can be expressed as a data rate per unit of time T. The data rate (in and out) expresses what the unit needs in order to function properly; if for any reason the data rate is not sufficient, there'll be an impact on functionality.

> If I understand the problem as you have put it, this is exactly the same
> as a single processor with multiple interrupts.

Or multiple slaves to poll. Interrupts in this case, with a commonly shared resource, may cause priority inversion, which is usually undesired.

> I'm sure you can find a lot of material on interrupt priority assignments.

Yes, there's lots of material indeed, but I'd reserve interrupts for 'asynchronous' events which need to be served in a timely manner. In the use case presented I have not mentioned any need to react to asynchronous information; rather, I want to allocate resources with enough margin to allow recovery from failures, should they happen.

I had in mind a protocol similar to MIL-STD-1553, where the Bus Controller continuously grants bus access to each Remote Terminal within a framed structure (often divided into major and minor frames). Now, reading around in a not-so-well-organized way, I've found that this type of scheduler is called a cyclic executive, a form of cooperative multitasking.

Al
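The 1553-style cyclic executive can be sketched as a bus controller walking a fixed table of minor frames that make up one major frame. The table contents below are hypothetical:

```python
# Toy cyclic executive in the MIL-STD-1553 spirit: the bus controller
# walks a fixed table of minor frames inside one major frame.
# The frame contents are invented for illustration.

minor_frames = [          # each minor frame lists who gets a bus slot
    ["SU1", "SU3"],
    ["SU1", "SU2", "SU3"],
    ["SU1", "SU3"],
    ["SU1", "SU2"],
]

def run_major_frame(frames, do_transaction):
    """One pass through the major frame, granting the bus in table order."""
    for i, frame in enumerate(frames):
        for su in frame:
            do_transaction(i, su)

log = []
run_major_frame(minor_frames, lambda i, su: log.append((i, su)))

# Different SUs get different access rates out of the same static table:
for su in ("SU1", "SU2", "SU3"):
    slots = sum(1 for _, s in log if s == su)
    print(f"{su}: {slots} slot(s) per major frame")
```

This is where the "each SU runs at a multiple of 1/T" requirement from the original post maps naturally: an SU needing N_i accesses per cycle simply appears in N_i of the minor frames.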
Reply by ●November 6, 2014
Hi Tim,

Tim Wescott <tim@seemywebsite.com> wrote:
[]
> Google Rate Monotonic Analysis. It may answer your questions.

That's it! And the good thing is that this scheduling algorithm can be tested to verify that the tasks will meet their deadlines (Liu & Layland, 1973). I also found an interesting program that performs the analysis and confirms schedulability (http://c-programmingguide.blogspot.ch/2012/09/c-program-for-rate-monotonic-scheduling.html).

Adding the right amount of margin to 'react' to failure cases would then be sufficient to verify that all those transactions can fit on the bus.

Al
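The Liu & Layland (1973) test mentioned above is short enough to write out: n periodic tasks all meet their deadlines under rate-monotonic priorities if the total utilization stays below n(2^(1/n) - 1). The task set here is a made-up example, with C_i the bus time an SU consumes per period T_i:

```python
# Liu & Layland's sufficient schedulability test for rate-monotonic
# scheduling:  U = sum(C_i / T_i)  <=  n * (2**(1/n) - 1).
# Task figures are invented for illustration.

tasks = [   # (cost C_i in ms, period T_i in ms)
    (8.0, 100.0),
    (2.0, 25.0),
    (0.5, 4.0),
]

U = sum(c / t for c, t in tasks)
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"U = {U:.3f}, Liu-Layland bound for n={n}: {bound:.3f}")
if U <= bound:
    print("schedulable under rate-monotonic priorities (sufficient test)")
else:
    # The bound is only sufficient: a task set above it may still be
    # schedulable, but then an exact response-time analysis is needed.
    print("bound exceeded -- run an exact analysis")
```

Note the test is sufficient but not necessary; failing the bound is the cue for an exact analysis, not a proof of infeasibility.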
Reply by ●November 6, 2014
Hi Tim,

Tim Wescott <tim@seemywebsite.com> wrote:
[]
> If you can do something as simple as establishing a master cycle and
> making a Gantt chart for all the message passing within that cycle, then
> you're done. Your solution will probably be somewhat fragile, but you'll
> be done for the moment.

Why fragile? What would be the pitfall with this approach? One evident point is that changing the priority of one unit may upset the whole mechanism, since everything is linked to everything else.

> CAN has the concept of prioritized messages, and you can use that for
> the purposes of dividing the network traffic into short messages that
> need low latency, and long messages that can take a while.

On CAN, priority is granted at the bit level with recessive and dominant bits, and the need for it comes from its being a multi-master bus. Priorities are needed to resolve clashes; if the scheduler is such that there are no clashes, then no priority needs to be implemented.

> Basically, your long slow messages either have to come at some scheduled
> "slow traffic time", or your physical (or maybe link) layer needs to
> specify a maximum message length, with higher layers having a way to send
> long messages as a collection of short ones.

There are two aspects to consider, IMO:

1. long and slow messages
2. short and frequent messages

In the first case we need to provide a bit more transaction time so as not to stretch the message over a long span of time. OTOH, short messages may suffer from protocol overhead and be penalized. There must be an optimum transaction size, given the sizes of each transfer and their respective periods. And I'm also convinced there's a study out there that provides exactly that answer... :-)

Al
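The trade-off between protocol overhead and transaction length can be made concrete: with a fixed per-transaction overhead, bus efficiency grows with payload size, but so does the worst-case time one transaction blocks the bus. The overhead and bit rate below are invented for illustration:

```python
# Per-transaction overhead vs. payload size: bus efficiency against the
# worst-case time one transaction occupies the bus.  Numbers are made up.

def transaction_stats(payload_bytes, overhead_bits=48, bit_time_us=1.0):
    """Return (efficiency, bus-blocking time in us) for one transaction."""
    payload_bits = 8 * payload_bytes
    total_bits = payload_bits + overhead_bits
    return payload_bits / total_bits, total_bits * bit_time_us

for payload in (2, 8, 64, 256):
    eff, blocking = transaction_stats(payload)
    print(f"{payload:4d}-byte payload: {eff:6.1%} efficient, "
          f"blocks the bus for {blocking:5.0f} us")

# The "optimum" transaction size is then the largest payload whose
# blocking time still meets the latency need of the most demanding SU.
```

That last comment is the whole design rule in one line: grow transactions for throughput until the tightest latency requirement stops you.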
Reply by ●November 6, 2014
On Thu, 06 Nov 2014 22:09:21 +0000, alb wrote:
> Why fragile? What would be the pitfall with this approach? Surely one
> evident point is that changing priority to one unit may screw up the
> whole mechanism since everything is linked to each other.

Fragile in the sense that as you add messages the whole job may have to be done over again, rather than there being any structured way to do it.

> There must be an optimum size of the transaction, given the sizes of
> each transfer and their respective periods.

The optimum size differs with the environment. CAN is optimized such that the most important message never has to wait longer than 84 bit times or some such (I can't remember the exact number). It gives up considerably on overhead (and thus throughput) to do so.

Ethernet, IIRC, is the opposite -- it has a relatively small amount of overhead, because that overhead is relative to a HUGE packet size (1500 bytes). So, quite aside from the fact that there's no prioritization in "regular" Ethernet, you'd have to wait many bit times for a message to finish before another one starts.

--
www.wescottdesign.com
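Tim's CAN-versus-Ethernet contrast can be quantified by approximating the worst-case wait as the time to finish the longest frame already on the wire. The frame sizes and bit rates below are ballpark figures for illustration, not exact protocol maxima:

```python
# Worst-case wait before the highest-priority message can start, taken
# as the time to finish the longest frame already in flight.
# Frame sizes and bit rates are ballpark figures only.

def blocking_us(bitrate_bps, longest_frame_bits):
    """Time (us) a just-started longest frame occupies the medium."""
    return longest_frame_bits / bitrate_bps * 1e6

buses = [
    # (name, bit rate in bit/s, longest frame in bits)
    ("CAN at 1 Mbit/s (8-byte data)", 1_000_000, 130),
    ("100 Mbit/s Ethernet (1500-byte MTU)", 100_000_000, 1538 * 8),
]

for name, rate, bits in buses:
    print(f"{name}: worst-case blocking ~{blocking_us(rate, bits):.0f} us")
```

The point is the shape of the trade, not the absolute numbers: CAN buys its short blocking time with heavy per-frame overhead, while Ethernet amortizes a small overhead over a frame so large that anyone behind it waits far more bit times.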
Reply by ●November 7, 2014
On 11/6/2014 4:20 PM, alb wrote:
> Yes, there's lots of material indeed, but I'd reserve interrupts for
> 'asynchronous' events which need to be served in a timely manner. In the
> use case presented I have not mentioned any need to react to
> asynchronous information.

I believe you are splitting hairs that aren't even a part of your problem. Priority inversion can only occur under certain circumstances, which I don't believe exist in your context. The issue of "asynchronous" events implies something to be asynchronous to, which I don't believe exists in your context either.

I was drawing an analogy between the two types of problem, which is valid. You can simply leave out the parts of interrupt handling that do not apply.

> I had in mind a protocol similar to a 1553, where the Bus Controller is
> continuously granting access to the bus to each Remote Terminal within a
> framed structure (often divided in major and minor frames). Now reading
> around in a not so well organized way I've found that this type of
> scheduler is defined as cyclic executive, a form of cooperative
> multitasking.

Great.

--
Rick
Reply by ●November 10, 2014
Hi Rick,

rickman <gnuarm@gmail.com> wrote:
[]
> I believe you are splitting hairs that aren't even a part of your
> problem. Priority inversion can only occur under certain circumstances
> which I don't believe exists in your context. The issue of
> "asynchronous" events imply something to be asynchronous to which I
> don't believe exists in your context.

Priority inversion can occur whenever a resource shared among differently prioritized tasks is held by a low-priority task that gets preempted by a medium-priority one. At that point the high-priority task cannot run, because it needs the low-priority task to release the lock on the shared resource; hence the priority is effectively /inverted/. Here's a nice read about priority inversion (the Mars Pathfinder case): http://www.cs.duke.edu/~carla/mars.html

Does this mean I'm splitting hairs? Maybe, but this is the architecture phase, and if things like this get forgotten they're going to hit back later on... and be remembered for a long time!

> I was drawing an analogy between the two types of problem which is
> valid. You can simply not involve the parts of interrupts that do not
> apply.

I do appreciate the analogy; I'm just trying to be provocative and test my reasoning. So thanks for being my 'rubber duck' in this case ;-)

> Great.

Thanks!

Al
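The priority-inversion scenario Al describes (and the Pathfinder write-up documents) can be traced as a toy timeline: low-priority L holds a lock that high-priority H needs, while medium-priority M preempts L and keeps H waiting. All durations below are arbitrary:

```python
# Toy timeline of unbounded priority inversion: L holds a lock H needs,
# M preempts L, and H waits out M's entire run.  Durations are arbitrary.

events = []
t = 0

def run(task, ms, note):
    """Record that `task` starts at time t and advance the clock."""
    global t
    events.append((t, task, note))
    t += ms

run("L", 2, "takes the shared-resource lock")
run("H", 0, "arrives, blocks on the lock held by L")
run("M", 10, "preempts L; H is still blocked")   # the inversion window
run("L", 1, "resumes, releases the lock")
run("H", 3, "finally runs")

for when, task, note in events:
    print(f"t={when:2d} ms  {task}: {note}")

# With priority inheritance, L would run at H's priority while holding
# the lock, so M could not preempt it and the 10 ms window would vanish.
```

This is exactly the fix that was uploaded to Pathfinder: enabling priority inheritance on the offending mutex bounds the inversion to the lock-holding time itself.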