
I2C Single Master: peripheral or bit banging?

Started by pozz November 20, 2020
I hate I2C for several reasons. It's only a two-wire bus, but for this 
reason it is insidious.

I usually use hw peripherals when they are available, because it's much 
more efficient and elegant, and because in many cases it's the only 
possibility.
Nowadays we have MCUs with abundant UARTs, timers and so on, so there's 
no real debate: choose a suitable MCU and use that damn peripheral.
So I usually start with the I2C peripheral available in the MCU, but I 
have found many issues.

I have experience with the AVR8 and SAMC21 from Atmel/Microchip. In both 
cases the I2C peripheral is much more complex than a UART or similar 
serial interface. I2C Single Master, which is the most frequent 
situation, is very simple, but I2C Multi Master introduces many critical 
situations.
I2C peripherals usually promise to be multi-master compatible, so their 
internal state machine is somewhat complex... and often there's some 
bug, or some unexpected situation, that leaves the code stuck at some 
point.

I want to write reliable code that not only works most of the time, but 
that works ALL the time, in any situation (ok, 99%). So my first test 
with I2C is making a temporary short between SCL and SDA. In this case, 
the I2C in the SAMC21 (they named it SERCOM in I2C Master mode) hangs 
forever.
The manual says to write the ADDR register to start putting the address 
on the bus and to wait for an interrupt flag when that is done. This 
interrupt never fires. I see the lines go down (because the START 
condition pulls SDA low before SCL), but the INTFLAG bits stay cleared 
forever. Even the error bits in the STATUS register (bus error, 
arbitration lost, any sort of timeout...) stay cleared, and BUSSTATE is 
IDLE. As soon as the short is removed, the state machine goes on.

Maybe I'm wrong, so I studied the Atmel Software Framework[1] and the 
Arduino Wire library[2]. In both cases, a timeout is implemented at the 
driver level.

Even the datasheet says:

   "Note:  Violating the protocol may cause the I2C to hang. If this
   happens it is possible to recover from this state by a
   software reset (CTRLA.SWRST='1')."
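In practice that means the driver has to wrap the wait in a timeout and 
apply the reset itself. A minimal sketch of the address phase for the 
SAMC21 (register and bit names as in Atmel's CMSIS headers; verify them 
against your device pack, and note the loop count is an arbitrary 
placeholder to be tuned):

#include <stdbool.h>
#include <stdint.h>
#include "sam.h"                        /* vendor device header */

#define I2C_ADDR_TIMEOUT_LOOPS  10000u  /* tune for CPU clock & bit rate */

bool i2c_send_address(SercomI2cm *i2c, uint8_t addr7, bool read)
{
    /* Writing ADDR starts the START condition + address phase. */
    i2c->ADDR.reg = ((uint32_t)addr7 << 1) | (read ? 1u : 0u);

    /* Wait for Master-on-Bus (write) or Slave-on-Bus (read), but never
     * forever: with SCL/SDA shorted, no flag is ever set. */
    for (uint32_t i = 0; i < I2C_ADDR_TIMEOUT_LOOPS; i++) {
        if (i2c->INTFLAG.reg &
            (SERCOM_I2CM_INTFLAG_MB | SERCOM_I2CM_INTFLAG_SB))
            return true;
    }

    /* Timed out: apply the software reset the datasheet suggests.
     * The caller must then fully re-initialize the peripheral. */
    i2c->CTRLA.reg = SERCOM_I2CM_CTRLA_SWRST;
    while (i2c->SYNCBUSY.reg & SERCOM_I2CM_SYNCBUSY_SWRST) { }
    return false;
}

You would call it as i2c_send_address(&SERCOM0->I2CM, 0x50, false), or 
with whatever SERCOM instance your board wires to the bus.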

I think the driver code should trust the hw; there's a contract between 
them, otherwise it's impossible. For a UART driver, you write the DATA 
register and wait for an interrupt flag telling you when new data can be 
written to the register. If the interrupt never fires, the driver hangs 
forever.
But I have never seen a UART driver that uses a timeout to recover from 
hardware that could hang. And I have been using UARTs for many years now.


Considering all these big issues when you want to write reliable code, 
I'm considering dusting off the good old bit-banging technique.
For the I2C Single Master scenario, it IS very simple: put data low/high 
(three-state), put clock low/high. The only problem is calibrating the 
clock frequency, but if you have a free timer that will be simple too.
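To show how little is involved, here is a minimal single-master sketch 
(blocking, no clock stretching, no multi-master). The open-drain GPIO 
helpers sda_low()/sda_release()/scl_low()/scl_release()/sda_read() and 
the half-bit delay dly() are hypothetical names to map onto your MCU's 
pins ("release" means let the pull-up raise the line):

#include <stdbool.h>
#include <stdint.h>

extern void sda_low(void), sda_release(void);   /* SDA open-drain   */
extern void scl_low(void), scl_release(void);   /* SCL open-drain   */
extern bool sda_read(void);                     /* sample SDA level */
extern void dly(void);                          /* half-bit delay   */

static void i2c_start(void)
{
    sda_release(); scl_release(); dly();  /* bus idle: both high */
    sda_low(); dly();                     /* SDA falls, SCL high */
    scl_low(); dly();
}

static void i2c_stop(void)
{
    sda_low(); dly();
    scl_release(); dly();
    sda_release(); dly();                 /* SDA rises, SCL high */
}

/* Shift out 8 data bits plus one clock for the ACK: the 9 clock
 * pulses per byte.  Returns true if the slave ACKed. */
static bool i2c_write_byte(uint8_t b)
{
    for (int i = 7; i >= 0; i--) {
        if (b & (1u << i)) sda_release(); else sda_low();
        dly();
        scl_release(); dly();             /* bit is sampled here */
        scl_low();
    }
    sda_release(); dly();                 /* 9th clock: read ACK */
    scl_release(); dly();
    bool ack = !sda_read();               /* slave pulls SDA low */
    scl_low(); dly();
    return ack;
}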

What is the drawback of bit banging? Maybe you write a few additional 
lines of code (you have to generate the 9 clock pulses per byte in 
software, as in the sketch above), but I don't think it's much more than 
using a peripheral and protecting it with a timeout.
But you get code that is fully under your control: you know when the I2C 
transaction starts, and you can be sure it will end, even when there are 
some hw issues on the board.





[1] 
https://github.com/avrxml/asf/blob/68cddb46ae5ebc24ef8287a8d4c61a6efa5e2848/sam0/drivers/sercom/i2c/i2c_sam0/i2c_master.c#L406

[2] 
https://github.com/acicuc/ArduinoCore-samd/commit/64385453bb549b6d2f868658119259e605aca74d
On 20.11.2020 09:43, pozz wrote:
> [...]
1. The interrupt will only fire if a connected slave acknowledges the 
address. If you want to catch the situation of a non-acknowledged start 
& address byte, you have to set up a timer that times out.

2. I²C is asynchronous, you don't need to keep a fixed bit rate. Just 
pulse SCL as fast as you can/need (within the spec). Clients can adjust 
the speed by pulling SCL low when they can't keep up with the master's 
speed.

3. As you not only have to bit-bang SCL & SDA according to the protocol, 
but also monitor SCL before you go on, even implementing a s/w I²C 
master correctly is not trivial; additionally, the CPU load is 
considerable.

If you have difficulties using the I²C peripherals, just have a peek at 
the corresponding Linux driver sources. They are often very reliable (at 
least for chip families that are a bit mature), and if any issues exist, 
they are documented (cf. the Raspberry Pi's SPI driver bug in the first 
versions).

Regards
Bernd
On 20/11/2020 11:38, Bernd Linsel wrote:
 > On 20.11.2020 09:43, pozz wrote:
 >> [...]
 >
 > 1. The interrupt will only fire if a connected slave acknowledges the
 > address. If you want to catch the situation of a non-acknowledged start
 > & address byte, you have to set up a timer that times out.
False, at least for the SERCOM in I2C Master mode (but I suspect other 
MCUs behave the same).
Quoting from the C21 datasheet:

   "If there is no I2C slave device responding to the address packet,
   then the INTFLAG.MB interrupt flag and
   STATUS.RXNACK will be set. The clock hold is active at this point,
   preventing further activity on the bus."
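So in a single-master setup the NACK case needs no extra timer: the MB 
flag fires and RXNACK tells you what happened. A minimal sketch of the 
check (same hedged CMSIS-style names as in the earlier snippet; CMD=0x3 
issues a STOP):

#include <stdbool.h>
#include "sam.h"   /* vendor device header */

/* Call once INTFLAG.MB is set after the address phase. */
static bool i2c_addr_acked(SercomI2cm *i2c)
{
    if (i2c->STATUS.reg & SERCOM_I2CM_STATUS_RXNACK) {
        i2c->CTRLB.reg |= SERCOM_I2CM_CTRLB_CMD(3);  /* send STOP */
        return false;    /* no slave answered the address */
    }
    return true;
}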


 > 2. I²C is asynchronous, you don't need to keep a fixed bit rate. Just
 > pulse SCL as fast as you can/need (within the spec). Clients can
 > adjust the speed by pulling SCL low when they can't keep up with the
 > master's speed.
Yes, I know. But you need some pauses in bit banging, otherwise you will 
run at too high a speed. And you need a calibration if you use a dumb 
loop on a volatile counter.
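Such a "dumb loop" is trivial to write; the only work is calibrating the 
constant (a made-up placeholder here) against the CPU clock and the 
target SCL rate, e.g. with a scope:

#include <stdint.h>

#define LOOPS_PER_HALF_BIT  50u   /* placeholder: calibrate this! */

/* Half-bit delay for the bit-banged bus: the volatile counter keeps
 * the compiler from optimizing the loop away. */
static void dly(void)
{
    for (volatile uint32_t i = 0; i < LOOPS_PER_HALF_BIT; i++) { }
}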


 > 3. As you not only have to bit-bang SCL & SDA according to the
 > protocol, but also monitor SCL before you go on, even implementing a
 > s/w I²C master correctly is not trivial; additionally, the CPU load is
 > considerable.
If I'm not wrong, this happens only if you have some slaves that stretch 
the clock to slow down the transfer. Even in this case, I don't think 
monitoring the SCL line during the transfer is complex. Yes, you should 
have a timeout, but you need that even when you use the hw peripheral.
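Handling the stretch amounts to one extra check when raising the clock. 
A sketch, reusing the hypothetical GPIO helpers from the earlier 
bit-bang code plus an scl_read() sampler:

#include <stdbool.h>
#include <stdint.h>

extern void scl_release(void);   /* let the pull-up raise SCL  */
extern bool scl_read(void);      /* sample the real SCL level  */

/* Release SCL and wait until it actually goes high; a slave may hold
 * it low to stretch the clock.  Bounded, so a stuck bus can't hang us. */
static bool scl_release_wait(uint32_t max_loops)
{
    scl_release();
    while (!scl_read()) {
        if (max_loops-- == 0)
            return false;        /* stretch timeout: abort transfer */
    }
    return true;
}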

CPU load? Many times I2C is used in a blocking way, waiting for an 
interrupt flag. In that case, there's no difference between the CPU 
waiting for an interrupt flag and driving the SCL and SDA lines.

Even if you need a non-blocking driver, you could use a hw timer and 
bit-bang in the interrupt service routine of the timer.
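For instance, a periodic timer interrupt at twice the SCL rate can 
advance one half-bit per tick. A hypothetical outline for shifting out a 
single byte (no ACK handling shown, helper names as before):

#include <stdbool.h>
#include <stdint.h>

extern void sda_low(void), sda_release(void);
extern void scl_low(void), scl_release(void);

enum bb_phase { BB_IDLE, BB_LOW, BB_HIGH, BB_DONE };

static volatile enum bb_phase phase = BB_IDLE;
static volatile uint8_t tx_byte;
static volatile int bit_no;

void i2c_bb_start_byte(uint8_t b)   /* called from mainline code */
{
    tx_byte = b;
    bit_no = 7;
    phase = BB_LOW;
    /* ...enable the periodic timer interrupt here... */
}

void timer_isr(void)                /* hook into your MCU's timer IRQ */
{
    switch (phase) {
    case BB_LOW:                    /* SCL low: set up the data bit */
        scl_low();
        if (tx_byte & (1u << bit_no)) sda_release(); else sda_low();
        phase = BB_HIGH;
        break;
    case BB_HIGH:                   /* SCL high: slave samples the bit */
        scl_release();
        phase = (bit_no-- > 0) ? BB_LOW : BB_DONE;
        break;
    default:                        /* BB_IDLE / BB_DONE: nothing */
        break;
    }
}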


 > If you have difficulties using the I²C peripherals, just have a peek
 > at the corresponding Linux driver sources. They are often very
 > reliable (at least for chip families that are a bit mature), and if
 > any issues exist, they are documented (cf. the Raspberry Pi's SPI
 > driver bug in the first versions).
I'm talking about MCUs. Linux can't run on these devices.


I think the issue is that, because of what you did, when the controller 
started the cycle and issued a start bit to 'get' the bus, it sees that 
'someone' else did the same thing but got farther.

A Start bit is done by, with SCL and SDA both high, first pulling SDA 
low, and then SCL low a bit later. When the controller pulls SDA low, it 
then looks and sees SCL already low, so it decides that someone else 
beat it to the punch of getting the bus, so it backs off and waits. I 
suspect that at that point it releases the bus, SDA and SCL both go high 
at the same time (which is a protocol violation), and maybe the 
controller sees that as a stop bit and the bus now free, so it tries 
again, or it just thinks the bus is still busy.

This is NOT the I2C 'Arbitration Lost' condition, as that pertains to 
the case where you think you won the arbitration, but at the same time 
someone else also thought they won it, and while sending a bit you find 
that your 1 bit became a 0 bit, so you realize (late) that you lost the 
arbitration, and thus need to abort your cycle and resubmit it.

This is a case of arbitration never won, and most devices will require 
something external to the peripheral to supply any needed timeout 
mechanism.

Most bit-banged master code I have seen assumes single-master, as it 
can't reliably test for this sort of arbitration-lost condition, being a 
bit too slow.
On 20/11/2020 14:09, Richard Damon wrote:
 > I think the issue is that, because of what you did, when the
 > controller started the cycle and issued a start bit to 'get' the bus,
 > it sees that 'someone' else did the same thing but got farther.
 >
 > A Start bit is done by, with SCL and SDA both high, first pulling SDA low,
 > and then SCL low a bit later. When the controller pulls SDA low, it then
 > looks and sees SCL already low, so it decides that someone else beat it
 > to the punch of getting the bus, so it backs off and waits. I suspect
 > that at that point it releases the bus, SDA and SCL both go high at the
 > same time (which is a protocol violation) and maybe the controller sees
 > that as a stop bit and the bus now free, so it tries again, or it just
 > thinks the bus is still busy.
No, SCL and SDA stay low forever. Maybe it drives SDA low, then SCL, 
then tries to release one of SCL or SDA, failing at that.


 > This is NOT the I2C 'Arbitration Lost' condition, as that pertains to
 > the case where you think you won the arbitration, but at the same time
 > someone else also thought they won it, and while sending a bit, you
 > find that your 1 bit became a 0 bit, so you realize (late) that you had
 > lost the arbitration, and thus need to abort your cycle and resubmit it.
Ok, call it bus error, I2C violation, I don't know. The peripheral is 
full of low-level timeouts and flags signaling that something strange 
happened. But shorting SDA and SCL will not set any of these bits.


 > This is a case of arbitration never won, and most devices will require
 > something external to the peripheral to supply any needed timeout
 > mechanism.
At least the peripheral should be able to report the strange bus state, 
but its STATUS.BUSSTATE is always IDLE.


 > Most bit-banged master code I have seen, assumes single-master, as it
 > can't reliably test for this sort of arbitration lost condition, being a
 > bit too slow.
Of course, take a look at the subject of my post.
On Fri, 20 Nov 2020 09:43:32 +0100, pozz <pozzugno@gmail.com> wrote:

> [...]
The big advantage of bit banging is reliability. I2C is an 
edge-triggered protocol. In our experience, some I2C peripherals are 
very prone to error or lockup on fast noise pulses.

A client with a train control application carefully wrote an I2C 
peripheral driver. On test, it failed a few times a day. As a reference, 
the client replaced the driver with our old bit-bang driver. In two 
weeks, there were no failures.

Yes, a bit-bang driver needs to be carefully designed if CPU load is an 
issue. Choice of buffer chips can be useful in a high-noise environment, 
e.g. a hospital autoclave with switched heating elements.

Stephen

-- 
Stephen Pelc, stephen@vfxforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, +44 (0)78 0390 3612
web: http://www.mpeforth.com - free VFX Forth downloads
> What is the drawback of bit banging?
If the MCU provides all the necessary capability to bit bang, there is 
no downside for single-cycle operations. I've bit banged I2C in my 
career a handful of times, just because I was too lazy to learn about 
the I2C peripheral. My plan was always to replace the bit banging with 
the peripheral *if necessary*.

Usually data throughput requirements drive whether or not I will use the 
peripheral instead of bit banging. You can get great speed and 
efficiency improvements using the I2C peripheral with DMA.

There is no shame in implementing bit banging. You might encounter an 
I2C device that requires full or partial bit banging. For example, I 
encountered a device that issued a non-I2C pulse during some part of an 
I2C transaction series. The pulse represented the completion of an 
internal ADC voltage conversion and indicated it was time to collect the 
data value. I was on the fence on whether to bit bang or use the 
peripheral hardware.

JJS
On 20.11.2020 12:45, pozz wrote:

> [...]
> > If you have difficulties using the I²C peripherals, just have a peek
> > at the corresponding Linux driver sources. They are often very
> > reliable (at least for chip families that are a bit mature), and if
> > any issues exist, they are documented (cf. the Raspberry Pi's SPI
> > driver bug in the first versions).
> I'm talking about MCUs. Linux can't run on these devices.
I'm fully aware of that. But Linux drivers often disclose some h/w 
caveats and workarounds, or efficient strategies for dealing with the 
peripheral's peculiarities...

Regards
Bernd
On 20.11.20 16.15, pozz wrote:
> [...]
If you connect SCL and SDA together, you'll create a permanent protocol 
violation. The whole of I2C relies on the two lines being separate and 
open-collector/drain. Creating an unexpected short creates a hardware 
failure. If you're afraid of such a situation, you should test for it by 
bit-banging before initializing the hardware controller.

-- 
-TV
On 20/11/2020 16:25, Tauno Voipio wrote:
 > On 20.11.20 16.15, pozz wrote:
 >> [...]
 >
 >
 > If you connect SCL and SDA together, you'll create a permanent
 > protocol violation. The whole of I2C relies on the two lines being
 > separate and open-collector/drain. Creating an unexpected short
 > creates a hardware failure. If you're afraid of such a situation, you
 > should test for it by bit-banging before initializing the hardware
 > controller.
I know that, and I don't expect it to work in this situation, but my 
point is a different one.

If an I2C hw peripheral can hang for some reason (in my test I 
deliberately made the short, but I imagine the hang could happen in 
other circumstances that are not well documented in the datasheet), you 
should protect the driver code with a timeout.
You have to test your code in all cases, even when the timeout occurs. 
So you have to choose the timeout interval with great care, and you have 
to understand whether blocking for so long is acceptable (even in that 
rare situation).
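A back-of-the-envelope bound helps here (illustrative numbers only): at 
100 kHz each bit takes 10 us and each byte costs 9 clocks, so a padded 
timeout can be derived rather than guessed:

#define SCL_HZ             100000u                         /* bus speed     */
#define BIT_US             (1000000u / SCL_HZ)             /* 10 us per bit */
#define XFER_US(nbytes)    ((9u * (nbytes) + 2u) * BIT_US) /* + START/STOP  */
#define TIMEOUT_US(nbytes) (10u * XFER_US(nbytes))         /* 10x margin    */

A three-byte transfer then takes about 290 us worst case, so even with a 
10x margin the blocking time stays under 3 ms.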

Considering all of that, maybe bit-banging is much simpler and more 
reliable.

