EmbeddedRelated.com
Forums

Two-wires RS485 and the driver enable problem

Started by pozzugno October 13, 2014
On 10/13/2014 3:02 PM, langwadt@fonz.dk wrote:
> On Monday, 13 October 2014 19.35.45 UTC+2, Les Cargill wrote:
>> 485 isn't a good protocol these days. The kids get Ethernet on RasPi
>> class machines for class projects; it's not unreasonable to
>> use a real comms stack.
>
> but Ethernet need hubs or switches, that gets messy if you have a lot of nodes
In the past, I've had good luck with 10Base2 implementations. But, you have to "fix" the cabling instead of relying on the flakey T's, etc. [Do they even make 10Base2 kit anymore?]
Don Y wrote:
> On 10/13/2014 3:02 PM, langwadt@fonz.dk wrote:
>> On Monday, 13 October 2014 19.35.45 UTC+2, Les Cargill wrote:
>>
>>> 485 isn't a good protocol these days. The kids get Ethernet on RasPi
>>> class machines for class projects; it's not unreasonable to
>>> use a real comms stack.
>>
>> but Ethernet need hubs or switches, that gets messy if you have a lot
>> of nodes
>
> In the past, I've had good luck with 10Base2 implementations.
> But, you have to "fix" the cabling instead of relying on the
> flakey T's, etc.
>
POR QUE??? :)
> [Do they even make 10Base2 kit anymore?]
I haven't seen any since the '90s. -- Les Cargill
On 10/13/2014 7:53 PM, Les Cargill wrote:
> Don Y wrote:
>>> but Ethernet need hubs or switches, that gets messy if you have a lot
>>> of nodes
>>
>> In the past, I've had good luck with 10Base2 implementations.
>> But, you have to "fix" the cabling instead of relying on the
>> flakey T's, etc.
>
> POR QUE??? :)
Device sat in a hard-to-access location and had pretty extreme environmental conditions (heat, vibration, etc.). The traditional T's (or F's if you preferred that orientation) just weren't very good at long term reliability. So, the physical connections were "adjusted" to more appropriately address those needs.
>> [Do they even make 10Base2 kit anymore?]
>
> I haven't seen any since the '90s.
I'd assume you could still hack together a suitable PHY. (?) Not sure how *economical* it would be, though...
On 13/10/2014 17:08, Don Y wrote:
> On 10/13/2014 7:59 AM, pozzugno wrote:
>> On 13/10/2014 15:28, Wouter van Ooijen wrote:
>>>> I usually use this approach for longer delays (milliseconds or
>>>> seconds),
>>>> so I can increment the counter in a ISR that trigger every millisecond.
>>>> I don't like to fire a trigger every 100us.
>>>
>>> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
>>> years, so you can use it for all delays. If you don't have a hardware 64
>>> bit counter you can use a 32 bit counter + rollover interrupt.
>>
>> On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)
>
> All you need is a wide enough counter to span the maximum delay you want
> to measure "comfortably". You need to design such that your "most
> sluggish"
> activity happens often enough to be captured in one counter rollover period
> (i.e., counter can't roll over more than once between observations)
>
Indeed my approach is to use a volatile uint16_t ticks variable incremented every 1 ms in a timer ISR. Of course the variable overflows "naturally" from 65535 to 0 in the ISR. Taking the wrap-around into account, I can manage delays up to 65536/2 ms (about 32 seconds), which is enough for many applications. When I need longer delays, I use uint32_t.

Consider that my ticks variable is a "software counter", not a hardware counter (the hardware counter is only used to generate the 1 ms timer interrupts).

On 8-bitters, I read the ticks variable after disabling interrupts, just to be sure the operation is atomic. Wouter's approach is new to me and very interesting; I'll try to use it in the future. I think it can be used to read two 8-bit registers or two 16-bit registers (if the architecture allows reading a 16-bit hardware counter atomically).

Wouter's original approach doesn't take wrap-around into account, because he uses a very wide 64-bit counter that will reasonably never reach its maximum value during the lifetime of the gadget (or of the developer). For 16-bit or 32-bit counters in 24h/24h, 7d/7d applications, the wrap-around *must* be considered, which reduces the maximum delay to a half. Anyway this isn't a big issue.

The only problem I see with using a hardware counter directly is that it is almost impossible to get a nice counting frequency, such as 1 ns, 1 us or 1 ms. Most hardware timer/counter peripherals can be fed directly by the main clock or through a prescaler, and the available prescaler values are usually 2, 4, 8, 256 or similar, which leads to an odd final frequency. With a "software" ticks counter incremented in a timer ISR, it's simpler to calibrate the hardware counter to trigger every 1 ms or another nice value.
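As a sketch of the tick-counter pattern described above (the function name `ticks_elapsed` is mine, not from the thread), unsigned subtraction handles the 65535 -> 0 wrap automatically, which is what makes a free-running uint16_t tick counter usable for delays:

```c
#include <stdint.h>

/* Elapsed milliseconds between two snapshots of a free-running 1 ms
 * uint16_t tick counter.  Because the subtraction is done in uint16_t,
 * it stays correct across the 65535 -> 0 wrap, as long as the real
 * interval is shorter than 65536 ms. */
static uint16_t ticks_elapsed(uint16_t now, uint16_t then)
{
    return (uint16_t)(now - then);
}
```

On an AVR the two snapshots would be taken with interrupts briefly disabled, exactly as described above; the arithmetic itself needs no special casing for wrap.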
On 10/13/2014 10:48 PM, pozz wrote:

[snip]

>>>> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
>>>> years, so you can use it for all delays. If you don't have a hardware 64
>>>> bit counter you can use a 32 bit counter + rollover interrupt.
>>>
>>> On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)
>>
>> All you need is a wide enough counter to span the maximum delay you want
>> to measure "comfortably". You need to design such that your "most
>> sluggish"
>> activity happens often enough to be captured in one counter rollover period
>> (i.e., counter can't roll over more than once between observations)
>
> Indeed my approach is to use a volatile uint16_t ticks variable incremented
> every 1ms in a timer ISR. Of course the variable overflow "naturally" from
> 65535 to 0 in the ISR. Taking into account the wrap-around, I can manage
> delays up to 65536/2=30 seconds that is enough for many applications. When I
> need longer delays, I use uint32_t.
> Consider that my ticks variable is a "software counter", not a hardware counter
> (that is used to generate 1ms timer interrupts).
You should be able to get ~60 second delays.

If you *know* you will always look at a value "more often" than the
wraparound period, you can always *deduce* wraparound trivially:

    unsigned now, then;

    if (now < then)
        now += counter_modulus;

(effectively)
> On 8-bitters, I read the ticks variable after disabling interrupts, just to be
> sure the operation is atomic. The Wouter's approach is new for me and very
> interested. I'll try to use it in the future.
Anything you can do to AVOID disabling interrupts (or, to allow you to re-enable them earlier) tends to be a win. Ideally, you don't ever want to unilaterally disable (and, later, re-enable!) interrupts. Instead, each time you explicitly disable interrupts you want to, first, make note of whether or not they were enabled at the time (assuming this isn't implied). Then, later, when you choose to re-enable them, you actually want to RESTORE them to the state that they were in when you decided they should be disabled.
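The save-then-restore discipline Don describes can be sketched like this. This is a host-side simulation using a fake flag; on a real AVR you would save SREG, execute cli(), and later write the saved SREG back (the names `irq_save`/`irq_restore` are mine):

```c
#include <stdbool.h>

/* Simulated global interrupt-enable flag, standing in for the real
 * hardware state (e.g. the I bit in the AVR status register). */
static bool irq_enabled = true;

/* Save the current interrupt state, then disable interrupts. */
static bool irq_save(void)
{
    bool was_enabled = irq_enabled;
    irq_enabled = false;        /* on real hardware: cli() */
    return was_enabled;
}

/* RESTORE the saved state rather than unconditionally re-enabling:
 * if interrupts were already off when we entered, they stay off. */
static void irq_restore(bool saved)
{
    irq_enabled = saved;        /* on real hardware: write SREG back */
}
```

The payoff is that critical sections nest safely: an inner section exiting does not re-enable interrupts while an outer section is still active.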
> I think It can be used to read two 8-bits registers or two 16-bits registers
> (if the architecture lets to read atomically a 16-bit hardware counter).
With careful consideration, you can read any width counter/timer (though
the granularity of your result will vary). What you probably *don't*
want to do is:

    high1 = read_high()
    low1 = read_low()
    while (high1 != (high2 = read_high())) {
        high1 = high2
        low1 = read_low()
    }

or similar.

[keeping in mind that IRQ's can come into this at any time -- including
REPEATEDLY!]
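The split-read hazard can be demonstrated on the desktop with a simulated 16-bit counter read as two 8-bit halves (the simulation and names are mine; to model ticks arriving mid-read, the fake counter advances on every low-byte read). The re-read loop keeps retrying until the high byte is stable around the low-byte read, so the returned halves always belong to one consistent counter value:

```c
#include <stdint.h>

/* Simulated 16-bit hardware counter, read one 8-bit half at a time. */
static uint16_t hw_counter;

static uint8_t read_high(void) { return (uint8_t)(hw_counter >> 8); }

static uint8_t read_low(void)
{
    uint8_t low = (uint8_t)hw_counter;
    hw_counter++;               /* counter keeps running underneath us */
    return low;
}

/* Re-read until the high byte is unchanged across the low-byte read,
 * so both halves are guaranteed to come from the same counter value. */
static uint16_t read_counter16(void)
{
    uint8_t high = read_high();
    uint8_t low  = read_low();
    uint8_t high2;

    while (high != (high2 = read_high())) {
        high = high2;
        low  = read_low();
    }
    return (uint16_t)(((uint16_t)high << 8) | low);
}
```

Starting the fake counter at 0x01FF exercises exactly the dangerous case: the carry from low into high happens between the two half-reads, and the loop recovers a value (0x0200) that the counter actually held.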
> The initial Wouter's approach doesn't take into account wrap-around, because he
> uses a very wide 64-bits counter that reasonably never reach its maximum value
> during the lifetime of the gadget (or the developer's life). For 16-bits or
> 32-bits counters and 7d/7d 24h/24h applications, the wrap-around *must* be
> considered, so reducing the maximum delay to a half. Anywat this is a big issue.
If you had a proper RTOS, you could ask the OS to schedule a task at some
specific interval after the "event of interest". It would then GUARANTEE
that at least N time units had elapsed (and not more than M).
> The only problem I see with using hardware counter is that it is quite
> impossible to have a nice counting frequency, such as 1ns, 1us or 1ms. Mostly
> hardware timer/counter peripherals can be feed directly by the main clock or
> after a prescaler. Usually prescaler values can be 2, 4, 8, 256 or similar,
> that brings to an odd final frequency.
Doesn't matter. Do the math ahead of time (e.g., at compile time) and figure out what (value) you want to wait for.
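Don's "do the math ahead of time" can be sketched as follows. The clock and prescaler values here are my own illustrative numbers (8 MHz core clock, /1024 prescaler, giving the deliberately "odd" counter rate of 7812.5 Hz), not anything from the thread:

```c
#include <stdint.h>

/* Hypothetical setup: 8 MHz clock, /1024 prescaler -> counter runs at
 * 7812.5 Hz, an awkward non-round frequency. */
#define F_CPU_HZ    8000000ULL
#define PRESCALER   1024ULL

/* Counter increments needed to wait AT LEAST `us` microseconds.
 * Rounds up, so the wait is never shorter than requested.  The 64-bit
 * intermediate avoids overflow; in practice this collapses to a
 * compile-time constant when `us` is a constant. */
static uint32_t us_to_counts(uint64_t us)
{
    return (uint32_t)((us * F_CPU_HZ + PRESCALER * 1000000ULL - 1)
                      / (PRESCALER * 1000000ULL));
}
```

With the odd frequency precomputed away like this, the awkwardness of the prescaler never appears in the timing logic itself: 1000 us becomes "wait 8 counts" before the code ever runs.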
> With a "software" ticks counter incremented in a timer ISR, it's simpler to
> calibrate the hardware counter to trigger every 1ms or similar nice values.
Timer IRQ's (esp the jiffy) are a notorious source of problems.
Too *often*, too *much* is done, there. (e.g., reschedule())

It's harder -- but not discouragingly so -- to move stuff out of the jiffy.
But, once you do so, you tend to get a lot more robust/responsive system.

E.g., the "beacon" scheme I mentioned (elsewhere) allows you to pre-determine
what your actions will be... then, lay them in place when the "event of
interest" occurs in a very timely manner -- without doing any "work" in
IRQ's, etc. You've already sorted out what *will* be done and are now just
waiting for your "cue" to do so!

For example, if you know the beacon message will be N time units (based on
number of characters and bit rate), you can concentrate on detecting the
beacon -- and nothing more -- PROMPTLY. Then, arrange for your code to
run N+epsilon time units after that event (instead of trying to watch each
byte from that beacon message in the hope of finding the end of the message).

This sort of scheme can easily allow every node (in a modest cluster size)
to indicate that it needs attention, by allowing each node to respond to
a "polling broadcast" in its individual timeslot with an indication of
whether or not it "has something to say". (The master node then takes
note of each of these and, later, issues directed queries to those nodes
that "need attention".)

If it hasn't been said (and, if your environment can accommodate it), you
might want to look at a different signalling/comms technology that allows
for a true party-line (resolving contention in hardware).
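The timeslot idea reduces to simple arithmetic. In this sketch (slot length, beacon duration, and names are mine, purely illustrative), each node derives its own transmit window from the tick at which the beacon was detected, its node ID, and a fixed slot length:

```c
#include <stdint.h>

/* Hypothetical slot parameters, in 1 ms ticks. */
#define SLOT_LEN_MS    2U   /* one response slot per node */
#define BEACON_LEN_MS  5U   /* known, fixed duration of the beacon */

/* Tick at which node `id` may start transmitting, given the tick at
 * which the start of the beacon was detected.  Slot 0 belongs to
 * node 0, slot 1 to node 1, and so on.  uint16_t arithmetic wraps
 * naturally along with the tick counter. */
static uint16_t slot_start(uint16_t beacon_start, uint8_t id)
{
    return (uint16_t)(beacon_start + BEACON_LEN_MS + id * SLOT_LEN_MS);
}
```

Because every node runs the same formula off the same observed event, no per-byte bus watching is needed: each node only has to detect the beacon promptly and then wait for its precomputed tick.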
On 13/10/2014 17:06, Don Y wrote:
> On 10/13/2014 5:56 AM, pozzugno wrote:
>> On 13/10/2014 14:47, David Brown wrote:
>>> or if you do, re-enable global interrupts first. Then it
>>> makes little difference if an interrupt function is running when the
>>> "transmission complete" triggers because the function will be complete
>>> in a few microseconds.
>>
>> Someone considers the practice to enable interrupts inside an ISR as
>> The Devil :-)
>>
>> http://betterembsw.blogspot.it/2014/01/do-not-re-enable-interrupts-in-isr.html
>>
>
> No, what you are most concerned with is ensuring every ISR manages to
> terminate
> before it can be reinvoked.
>
> So, ISR1 can be interrupted by ISR3 which can be interrupted by ISR7
> which can be interrupted by ISR2, etc. AS LONG AS ISR1 can't reassert
> itself while ANY instance of ISR1 is still "active".
>
> (Ditto for every ISR in the system)
>
> Even this "rule" can be bent -- if you know the worst case nesting of ISR's
> on themselves (and ensure you have adequate stack to cover that level of
> penetration)
IMHO it's a risky approach. In order to let the "transmit complete" (TXC) ISR run immediately, I have to re-enable interrupts inside all the other ISRs. So ISR A could be interrupted by ISR B even if I'm not interested in that, just because ISR A must be interruptible by the TXC ISR.
On 14/10/2014 08:28, Don Y wrote:
> On 10/13/2014 10:48 PM, pozz wrote:
>> Indeed my approach is to use a volatile uint16_t ticks variable
>> incremented
>> every 1ms in a timer ISR. Of course the variable overflow "naturally"
>> from
>> 65535 to 0 in the ISR. Taking into account the wrap-around, I can manage
>> delays up to 65536/2=30 seconds that is enough for many applications.
>> When I
>> need longer delays, I use uint32_t.
>> Consider that my ticks variable is a "software counter", not a
>> hardware counter
>> (that is used to generate 1ms timer interrupts).
>
> You should be able to get ~60 second delays.
>
> If you *know* you will always look at a value "more often" than the
> wraparound
> period, you can always *deduce* wraparound trivially:
>
>     unsigned now, then;
>
>     if (now < then)
>         now += counter_modulus;
>
> (effectively)
I don't think I have got your point. I usually use the following
comparison to understand whether a timer tmr has expired:

    ((uint32_t)(ticks - tmr) <= UINT32_MAX / 2)

In this way I lose a half of the total period, but it isn't usually a
big issue.
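pozz's expiry test can be wrapped up as a small predicate (the function name is mine). The unsigned subtraction wraps, so the test stays correct when `ticks` has rolled past 0 while the deadline `tmr` has not, at the cost of halving the usable range:

```c
#include <stdbool.h>
#include <stdint.h>

/* True once the deadline `tmr` has been reached, assuming `ticks` and
 * `tmr` never drift more than half the counter range (UINT32_MAX / 2)
 * apart.  Wrap-safe: the uint32_t subtraction wraps along with the
 * tick counter itself. */
static bool timer_expired(uint32_t ticks, uint32_t tmr)
{
    return (uint32_t)(ticks - tmr) <= UINT32_MAX / 2;
}
```

A typical use is `tmr = ticks + delay;` at arming time, then polling `timer_expired(ticks, tmr)`.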
>> On 8-bitters, I read the ticks variable after disabling interrupts,
>> just to be
>> sure the operation is atomic. The Wouter's approach is new for me and
>> very
>> interested. I'll try to use it in the future.
>
> Anything you can do to AVOID disabling interrupts (or, to allow you to
> re-enable them earlier) tends to be a win.
Oh yes, I know.
> Ideally, you don't ever want to unilaterally disable (and, later,
> re-enable!)
> interrupts. Instead, each time you explicitly disable interrupts you
> want to,
> first, make note of whether or not they were enabled at the time (assuming
> this isn't implied).
>
> Then, later, when you choose to re-enable them, you actually want to
> RESTORE
> them to the state that they were in when you decided they should be
> disabled.
In my applications, I disable interrupts only when managing timers, so I'm sure interrupts are enabled when I try to access the 16-bit or 32-bit counter. Of course, I never use timers inside ISRs.
>> The only problem I see with using hardware counter is that it is quite
>> impossible to have a nice counting frequency, such as 1ns, 1us or 1ms.
>> Mostly
>> hardware timer/counter peripherals can be feed directly by the main
>> clock or
>> after a prescaler. Usually prescaler values can be 2, 4, 8, 256 or
>> similar,
>> that brings to an odd final frequency.
>
> Doesn't matter. Do the math ahead of time (e.g., at compile time) and
> figure
> out what (value) you want to wait for.
>
>> With a "software" ticks counter incremented in a timer ISR, it's
>> simpler to
>> calibrate the hardware counter to trigger every 1ms or similar nice
>> values.
>
> Timer IRQ's (esp the jiffy) are a notorious source of problems.
What do you mean by "jiffy"? Are you calling my approach a "jiffy"? I didn't understand.
> Too *often*, too *much* is done, there. (e.g., reschedule())
>
> It's harder -- but not discouragingly so -- to move stuff out of the jiffy.
> But, once you do so, you tend to get a lot more robust/responsive system.
>
> E.g., the "beacon" scheme I mentioned (elsewhere) allows you to
> pre-determine
> what your actions will be... then, lay them in place when the "event of
> interest" occurs in a very timely manner -- without doing any "work" in
> IRQ's, etc. You've already sorted out what *will* be done and are now just
> waiting for your "cue" to do so!
>
> For example, if you know the beacon message will be N time units (based on
> number of characters and bit rate), you can concentrate on detecting the
> beacon -- and nothing more -- PROMPTLY. Then, arranging for your code to
> run N+epsilon time units after that event (instead of trying to watch
> each byte
> from that beacon message in the hope of finding the end of the message).
>
> This sort of scheme can easily allow every node (in a modest cluster size)
> to indicate that it needs attention (by allowing each node to respond to
> a "polling broadcast" in their individual timeslots with an indication of
> whether or not they "have something to say". (the master node then takes
> note of each of these and, later, issues directed queries to those nodes
> that
> "need attention")
I'm sorry, I think I completely failed to understand what you have written :-( I don't expect you to explain it all again in greater detail, but do you have a link where I can study this "beacon" approach?
> If it hasn't been said (and, if your environment can accommodate it), you
> might want to look at a different signalling/comms technology that allows
> for a true party-line (resolving contention in hardware).
Any suggestions?
On 13/10/2014 17:17, Don Y wrote:
> On 10/13/2014 4:58 AM, pozzugno wrote:
> First, you need to *know* that it is *you* that has been granted access to
> the bus/resource. If this requires you to perform some analysis of the
> ENTIRE MESSAGE (to verify that the "address field" is, in fact, intact!),
> then you probably DON'T want to try to ride the coat-tails of the message,
> directly.
>
> As the front end of a message (including yours) is more likely to be
> corrupted
> (by a collision on the bus -- someone jabbering too long or too soon), you
> might consider designing a packet format that has one checksum on the
> address field "early" in the packet (before the payload) and another that
> handles the balance of the message.
> This allows you to capture the address information and its closely
> following
> checksum, verify that it is *you* that are being addressed and prepare
> for your
> acquisition of the bus before the message has completely arrived.

I have just one checksum at the end of the message, but the address field is at the beginning. Anyway I look at the address field as it arrives to decide whether the frame is for me.

I know the address field could be corrupted, but IMHO it's not important. If the address field is corrupted so that it appears to be for me, while the master wanted to talk to another node, I store the message till the end, but the checksum will be wrong, so the frame will be discarded. If the address field is corrupted so that it doesn't appear to be for me, while the master really wanted to talk with me, I discard the message early.

IMHO, adding a second checksum at the beginning of the frame, only to protect the address field, doesn't add robustness to the final result.
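The filtering pozz describes can be sketched as a tiny frame acceptor. The frame layout, address value, and checksum (a plain 8-bit sum) are mine and purely illustrative, not the protocol from the thread: the address is checked as soon as it arrives, and a single trailing checksum covering the whole frame catches a corrupted address that falsely matched:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MY_ADDR 0x21   /* this node's hypothetical bus address */

/* Illustrative frame: [addr][payload...][checksum], where the checksum
 * is the 8-bit sum of all preceding bytes.  Returns true only if the
 * frame is addressed to us AND the trailing checksum verifies. */
static bool accept_frame(const uint8_t *frame, size_t len)
{
    if (len < 2)
        return false;

    /* Early address filter: bail out on the very first byte. */
    if (frame[0] != MY_ADDR)
        return false;

    /* The single end-of-frame checksum also covers the address field,
     * so a corrupted address that happened to match is still caught. */
    uint8_t sum = 0;
    for (size_t i = 0; i + 1 < len; i++)
        sum += frame[i];

    return sum == frame[len - 1];
}
```

This mirrors the trade-off discussed above: one checksum is enough for correctness; the second, early checksum Don suggests only buys earlier bus-turnaround preparation, not extra robustness.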
> You can also arrange to access the bus in "timeslots" referenced to some
> easily recognizable event (e.g., a beacon sent by the master). So,
> you do all of your timing off of that single event (see "some special
> point" in the beacon, start timer, wait to transmit until "your slot").
>
> Note this also works if you just assume the "next slot" after the master's
> message is the slot you should use (you just limit the length of the
> master's
> message so it fits in that fixed delay). Similarly, the master can "know"
> that your reply will never exceed a fixed duration so it won't issue
> another
> beacon/request until that has expired.
>
> Hopefully, this makes sense. Sorry, I'm off for a pro bono day so no time
> here... :-/
I think I'll have to think more deeply about this beacon approach. Any useful info on the Internet?
On 13/10/2014 23:37, Tim Wescott wrote:
> On Mon, 13 Oct 2014 13:58:42 +0200, pozzugno wrote:
>
>> I have a multi-drop two-wires RS485 bus. One node is the master and all
>> the others are slaves. The master is the only node that is authorized
>> to initiate a transmission, addressing one slave. The addressed slave
>> usually answers to the master.
>>
>> The bus is half-duplex, so every node disables the driver. Only THE
>> node that transmits data on the bus enables the driver and disables it
>> as soon as it can, just after the last byte. An interrupt (transmit
>> complete) usually triggers when the last byte is totally shifted out, so
>> the driver can be disabled immediately.
>>
>> Of course, other interrupts can be triggered. What happens when
>> interrupt X (whatever) triggers just before the "transmit complete"
>> interrupt? The result is the ISR X is called, postponing the execution
>> of "transmit complete" ISR. The RS485 driver will be disabled with a
>> certain amount of delay. In the worst case, the driver could be
>> disabled with a delay that is the sum of the duration of all ISRs that
>> could trigger.
>> [In this scenario, I think of ISRs that can't be interrupted by a higher
>> priority interrupt.]
>
> Unless your processor is very primitive, you should be able to make the
> serial interrupt the highest priority.
I'm using AVR8 controllers from Atmel. I can't change interrupt priorities (they are hard-wired in the device). Anyway, IMHO it's not a matter of priority, but of the lack of a *nested* interrupt controller: an ISR can never be interrupted by a higher priority interrupt (in this case, transmit complete).
> Or take David Brown's suggestion
> and disable all but the serial interrupt when you start to transmit the
> last byte.
This is a good suggestion, even if it isn't simple. I should save the status of all IRQs, disable them, and then reactivate the ones that were originally active.
>> If a node on the bus is very fast and starts transmitting (the master)
>> or answering (one slave) immediately after receving the last byte, but
>> when the previous transmitting node is executing other ISRs, the final
>> result is a corrupted transmission.
>>
>> What is the solution? I think the only solution is to define, at the
>> design time, a minimum interval between the receiving of the last byte
>> from one node and the transmission of the first byte. This interval
>> could be in the range of 100 microseconds and should be calibrated on
>> the sum of duration of *all* ISRs of *all* nodes on the bus. It isn't a
>> simple calculation.
>
> I believe you're going about the last half of this backwards. Do not
> calculate the worst-case interrupt latency -- specify it, and make it a
> requirement on the slave boards. This should be easy enough to do if you
> are in charge of all the software, and still quite doable if you're only
> in charge of the communications software (assuming a functional group).
>
>> Moreover, implementing a short "software" delay in the range of some
>> microseconds isn't a simple task. An empty loop on a decreasing
>> volatile variable is a solution, but the final delay isn't simple to
>> calculate at the design time, and it could depend on the compiler,
>> compiler settings, clock frequency and so on. Use an hw timer only for
>> this pause?
>>
>> How do you solve this problem?
>
> In a UART without a FIFO, an easy way to do this would be to send one or
> more bytes with the transmitter disabled, then turn on the transmitter at
> the appropriate time. Basically, use the UART as your timed event
> generator.
>
>> [I know there are some microcontrollers that automatically (at the
>> hw-level) toggle an output pin when the last byte is totally shifted
>> out, but I'm not using one of the them and they aren't so common.]
> In my experience, unless you're really using a high baud rate and a slow
> processor, or if your ISR's are just plain incorrectly written, your
> interrupt latency will be far lower than a bit interval.
On 14/10/14 07:48, pozz wrote:
> On 13/10/2014 17:08, Don Y wrote:
>> On 10/13/2014 7:59 AM, pozzugno wrote:
>>> On 13/10/2014 15:28, Wouter van Ooijen wrote:
>>>>> I usually use this approach for longer delays (milliseconds or
>>>>> seconds),
>>>>> so I can increment the counter in a ISR that trigger every
>>>>> millisecond.
>>>>> I don't like to fire a trigger every 100us.
>>>>
>>>> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
>>>> years, so you can use it for all delays. If you don't have a
>>>> hardware 64
>>>> bit counter you can use a 32 bit counter + rollover interrupt.
>>>
>>> On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)
>>
>> All you need is a wide enough counter to span the maximum delay you want
>> to measure "comfortably". You need to design such that your "most
>> sluggish"
>> activity happens often enough to be captured in one counter rollover
>> period
>> (i.e., counter can't roll over more than once between observations)
>>
>
> Indeed my approach is to use a volatile uint16_t ticks variable
> incremented every 1ms in a timer ISR. Of course the variable overflow
> "naturally" from 65535 to 0 in the ISR. Taking into account the
> wrap-around, I can manage delays up to 65536/2=30 seconds that is enough
> for many applications. When I need longer delays, I use uint32_t.
> Consider that my ticks variable is a "software counter", not a hardware
> counter (that is used to generate 1ms timer interrupts).
>
> On 8-bitters, I read the ticks variable after disabling interrupts, just
> to be sure the operation is atomic. The Wouter's approach is new for me
> and very interested. I'll try to use it in the future.
>
Note that "Wouter's algorithm" (giving him his two minutes of fame, until
someone points out that he didn't actually invent it...) is easily
extendible. On an 8-bit system with 64-bit counters, the "read_high"
should read the upper 56 bits, and the "read_low" reads the low 8 bits
(or use a 48-bit/16-bit split if you can do an atomic 16-bit read of the
counter hardware, which IIRC is possible on an AVR).

Another variation that might be easier if your counter is running
relatively slowly (say 10 kHz) is just:

    a = read_counter();
    while (true) {
        b = read_counter();
        if (a == b) return a;
        a = b;
    }

(Yes, the run-time here is theoretically unbounded -- but if your system
is so badly overloaded with ISR's that this loop runs more than a couple
of times, you've got big problems anyway.)
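David Brown's read-until-stable loop can be exercised on the desktop with a simulated slowly-ticking counter (the simulation and names are mine): the fake counter happens to advance during the first couple of reads, and the loop returns only once two consecutive reads agree:

```c
#include <stdint.h>

/* Simulated counter that advances for the first few reads and then
 * settles, modeling a slow tick that landed in the middle of a read. */
static uint32_t counter_value = 41;
static int unstable_reads = 2;   /* first reads see the counter moving */

static uint32_t read_counter(void)
{
    if (unstable_reads > 0) {
        unstable_reads--;
        return counter_value++;
    }
    return counter_value;
}

/* Read repeatedly until two consecutive reads match, so the returned
 * value is known not to have changed mid-read. */
static uint32_t read_counter_stable(void)
{
    uint32_t a = read_counter();
    for (;;) {
        uint32_t b = read_counter();
        if (a == b)
            return a;
        a = b;
    }
}
```

As David notes, the loop is theoretically unbounded; in practice it terminates after one extra iteration per tick that lands inside a read, which should almost never be more than one.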
> I think It can be used to read two 8-bits registers or two 16-bits
> registers (if the architecture lets to read atomically a 16-bit hardware
> counter).
>
> The initial Wouter's approach doesn't take into account wrap-around,
> because he uses a very wide 64-bits counter that reasonably never reach
> its maximum value during the lifetime of the gadget (or the developer's
> life). For 16-bits or 32-bits counters and 7d/7d 24h/24h applications,
> the wrap-around *must* be considered, so reducing the maximum delay to a
> half. Anywat this is a big issue.
Wrap must be considered, but it is not necessarily a problem. Just make sure you deal with differences in times rather than waiting for the timer to pass a certain absolute value.
> The only problem I see with using hardware counter is that it is quite
> impossible to have a nice counting frequency, such as 1ns, 1us or 1ms.
> Mostly hardware timer/counter peripherals can be feed directly by the
> main clock or after a prescaler. Usually prescaler values can be 2, 4,
> 8, 256 or similar, that brings to an odd final frequency.
> With a "software" ticks counter incremented in a timer ISR, it's simpler
> to calibrate the hardware counter to trigger every 1ms or similar nice
> values.