EmbeddedRelated.com
Forums

I2C Single Master: peripheral or bit banging?

Started by pozz November 20, 2020
On 20.11.20 18.33, pozz wrote:
> On 20/11/2020 16:25, Tauno Voipio wrote:
>> On 20.11.20 16.15, pozz wrote:
>>> On 20/11/2020 14:09, Richard Damon wrote:
>>>> I think the issue is that by what you did when the controller started
>>>> the cycle and issued a start bit to 'get' the bus, it sees that
>>>> 'someone' else did the same thing but got farther.
>>>>
>>>> A Start bit is done by, with SCL and SDA both high, first pulling SDA
>>>> low, and then SCL low a bit later. When the controller pulls SDA low,
>>>> it then looks and sees SCL already low, so it decides that someone
>>>> else beat it to the punch of getting the bus, so it backs off and
>>>> waits. I suspect that at that point it releases the bus, SDA and SCL
>>>> both go high at the same time (which is a protocol violation) and
>>>> maybe the controller sees that as a stop bit and the bus now free, so
>>>> it tries again, or it just thinks the bus is still busy.
>>> No, SCL and SDA stay low forever. Maybe it drives SDA low, then SCL,
>>> then fails when releasing one of SCL or SDA.
>>>
>>>> This is NOT the I2C 'Arbitration Lost' condition, as that pertains to
>>>> the case where you think you won the arbitration, but at the same
>>>> time someone else also thought they won it, and while sending a bit,
>>>> you find that your 1 bit became a 0 bit, so you realize (late) that
>>>> you had lost the arbitration, and thus need to abort your cycle and
>>>> resubmit it.
>>> Ok, call it a bus error, an I2C violation, I don't know. The
>>> peripheral is full of low-level timeouts and flags signaling that
>>> something strange happened.
>>> But shorting SDA and SCL will not set any of these bits.
>>>
>>>> This is a case of arbitration never won, and most devices will
>>>> require something external to the peripheral to supply any needed
>>>> timeout mechanism.
>>> At least the peripheral should be able to report the strange bus
>>> state, but STATUS.BUSSTATE is always IDLE.
>>>
>>>> Most bit-banged master code I have seen assumes single-master, as it
>>>> can't reliably test for this sort of arbitration lost condition,
>>>> being a bit too slow.
>>> Of course, take a look at the subject of my post.
>>
>> If you connect SCL and SDA together, you'll create a permanent
>> protocol violation. The whole of I2C relies on both being separate
>> and open-collector/drain. Creating an unexpected short creates
>> a hardware failure. If you're afraid of such a situation, you should
>> test for it by bit-banging before initializing the hardware controller.
>
> I know that and I don't expect it to work in this situation, but my
> point is another one.
>
> If an I2C hw peripheral can hang for some reason (in my test I
> deliberately made the short, but I imagine the hang could happen in
> other circumstances that are not well described in the datasheet), you
> should protect the driver code with a timeout.
> You have to test your code in all cases, even when the timeout occurs.
> So you have to choose the timeout interval with great care, and you
> have to decide whether blocking for that long is acceptable (even in
> that rare situation).
>
> Considering all of that, maybe bit-banging is much more simple and
> reliable.
I have had thousands of industrial instruments in the field for decades, each running some internal units with I2C, some bit-banged and others on the hardware interfaces on the processors used, and not a single failure due to I2C hanging.

Please remember that the I2C bus is an Inter-IC bus, not to be used for connections to the outside of the device, preferably only on the same circuit board. There should be no external connectors where e.g. shorts between SCL and SDA could happen.

All the hardware I2C controllers have been able to be restored to a sensible state with a software reset after a time-out. This includes the Atmel chips.

--
-TV
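The timeout-plus-software-reset pattern pozz argues for, and that Tauno confirms works on the Atmel parts, can be sketched as below. The register layout here is a hypothetical stand-in modeled as a plain struct so the logic can be exercised off-target; it is not the real SERCOM register map:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a SERCOM-style I2C master: the driver starts a
 * transfer and polls a completion flag that the hardware may never set
 * if the bus is in an illegal state (e.g. SCL shorted to SDA). */
typedef struct {
    volatile uint8_t intflag;   /* set by "hardware" when the address phase completes */
    bool was_reset;             /* records that the driver applied a software reset */
} fake_i2c_t;

enum { I2C_OK = 0, I2C_ERR_TIMEOUT = -1 };

/* Poll the completion flag, but give up after max_polls iterations.
 * On timeout, apply the recovery the SAMC21 datasheet suggests
 * (a software reset, CTRLA.SWRST) instead of spinning forever. */
static int i2c_wait_done(fake_i2c_t *i2c, unsigned max_polls)
{
    for (unsigned i = 0; i < max_polls; i++) {
        if (i2c->intflag) {
            i2c->intflag = 0;
            return I2C_OK;
        }
    }
    i2c->was_reset = true;      /* stand-in for CTRLA.SWRST = 1 */
    return I2C_ERR_TIMEOUT;
}
```

In a real driver the poll count would be derived from the worst-case transfer time at the configured bus speed, which is exactly the "choose the timeout interval with great care" problem pozz raises.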
On 11/20/2020 19:39, Tauno Voipio wrote:
> On 20.11.20 18.33, pozz wrote:
>> On 20/11/2020 16:25, Tauno Voipio wrote:
>>> On 20.11.20 16.15, pozz wrote:
>>>> On 20/11/2020 14:09, Richard Damon wrote:
>> .....
>>
>> Considering all of that, maybe bit-banging is much more simple and
>> reliable.
>
> I have had thousands of industrial instruments in the field for decades,
> each running some internal units with I2C, some bit-banged and others
> on the hardware interfaces on the processors used, and not a single
> failure due to I2C hanging.
>
> Please remember that the I2C bus is an Inter-IC bus, not to be used for
> connections to the outside of the device, preferably only on the same
> circuit board. There should be no external connectors where e.g.
> shorts between SCL and SDA could happen.
>
> All the hardware I2C controllers have been able to be restored to a
> sensible state with a software reset after a time-out. This includes
> the Atmel chips.
I did manage once to upset (not hang) an I2C line. I had routed SCL or SDA (likely both, I don't remember) quite close to the switching MOSFET of an HV flyback which makes nice and steep 100V edges... :-).

I have dealt with I2C controllers on 2 parts I can think of now and both times it took me a lot longer to get them to work than it had taken me on two earlier occasions when I bit banged it though... They all did work of course, but the design was sort of twisted. I remember one of them took me two days, and I was counting minutes of my time for that project. It may even have been 3 days, it was 10 years ago.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
On 2020-11-20, Dimiter_Popoff <dp@tgi-sci.com> wrote:

> I have dealt with I2C controllers on 2 parts I can think of now
> and both times it took me a lot longer to get them to work than it
> had taken me on two earlier occasions when I bit banged it though...
That's my experience also. I've done bit-banged I2C a couple of times, and it took about a half day each time. Using HW I2C controllers has always taken longer. The worst one I remember was on a Samsung ARM7 part from 20 years ago. Between the mis-translations and errors in the documentation and the bugs in the HW, it took at least a week to get the I2C controller to reliably talk to anything.

--
Grant
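The single-master bit-bang loop that Grant and Dimiter found quicker to get working reduces to toggling two open-drain lines. A minimal sketch of one byte write plus ACK sampling, with the open-drain bus and a stub slave simulated in plain C so the logic is testable; real code would drive GPIO direction registers instead, and the helper names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Open-drain line model: the wire is high unless someone pulls it low. */
static bool m_sda_low, m_scl_low;   /* master pulls         */
static bool s_sda_low;              /* stub slave pulls SDA */
static int  s_bit_count;            /* clocks the slave has seen */

static bool SDA(void) { return !(m_sda_low || s_sda_low); }

/* Stub slave: after the 8th data bit it holds SDA low through the ACK
 * clock, then releases it. */
static void slave_on_scl_rising(void)  { s_bit_count++; }
static void slave_on_scl_falling(void)
{
    if (s_bit_count == 8)
        s_sda_low = true;           /* assert ACK after the 8th data bit */
    else if (s_bit_count >= 9)
        s_sda_low = false;          /* release SDA after the ACK clock */
}

static void scl_high(void) { m_scl_low = false; slave_on_scl_rising();  }
static void scl_low(void)  { m_scl_low = true;  slave_on_scl_falling(); }

/* START condition: SDA falls while SCL is high, then SCL follows. */
static void i2c_start(void)
{
    m_sda_low = false; m_scl_low = false;
    m_sda_low = true;               /* SDA low first...  */
    m_scl_low = true;               /* ...then SCL low   */
}

/* Clock out one byte MSB first, then sample the slave's ACK.
 * Returns true if the slave acknowledged. */
static bool i2c_write_byte(uint8_t b)
{
    for (int i = 7; i >= 0; i--) {
        m_sda_low = !((b >> i) & 1);  /* open drain: pull low for 0, release for 1 */
        scl_high();
        scl_low();
    }
    m_sda_low = false;                /* release SDA so the slave can ACK */
    scl_high();
    bool ack = (SDA() == false);      /* ACK = slave holds SDA low */
    scl_low();
    return ack;
}
```

The nine clock pulses per byte that pozz mentions are visible here: eight data bits plus the ACK clock. Bit timing (the delay between edges) is omitted; that is where the free timer he mentions comes in.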
On 20/11/2020 08:43, pozz wrote:
> I hate I2C for several reasons. It's only a two-wire bus, but for this
> reason it is insidious.
>
> I usually use hw peripherals when they are available, because it's much
> more efficient and smart and because it's the only possibility in many
> cases.
> Actually we have MCUs with abundant UARTs, timers and so on, so there's
> no real story: choose a suitable MCU and use that damn peripheral.
> So I usually start using the I2C peripherals available in MCUs, but I
> found many issues.
>
> I have experience with AVR8 and SAMC21 by Atmel/Microchip. In both cases
> the I2C peripheral is much more complex than a UART or similar serial
> lines. I2C Single Master, which is the most frequent situation, is very
> simple, but I2C Multi Master introduces many critical situations.
> I2C peripherals usually promise to be compatible with multi-master, so
> their internal state machine is somewhat complex... and often there's
> some bug or unexpected situation that leaves the code stuck at some
> point.
>
> I want to write reliable code that not only works most of the time, but
> that works ALL the time, in any situation (ok, 99%). So my first test
> with I2C is making a temporary short between SCL and SDA. In this case,
> the I2C in the SAMC21 (they named it SERCOM in I2C Master mode) hangs
> forever. The manual says to write the ADDR register to start putting the
> address on the bus and wait for an interrupt flag when it ends. This
> interrupt never fires. I see the lines go down (because the START
> condition pulls SDA low before SCL), but the INTFLAG bits stay cleared
> forever. Even the error bits in the STATUS register (bus error,
> arbitration lost, any sort of timeout...) stay cleared and BUSSTATE is
> IDLE. As soon as the short is removed, the state machine goes on.
>
> Maybe I'm wrong, so I studied the Atmel Software Framework[1] and the
> Arduino Wire library[2]. In both cases, a timeout is implemented at the
> driver level.
>
> Even the datasheet says:
>
>   "Note: Violating the protocol may cause the I2C to hang. If this
>   happens it is possible to recover from this state by a
>   software reset (CTRLA.SWRST='1')."
>
> I think the driver code should trust the hw; between them there's a
> contract, otherwise it's impossible. For a UART driver, you write the
> DATA register and wait for an interrupt flag when new data can be
> written to the register. If the interrupt never fires, the driver hangs
> forever.
> But I have never seen a UART driver that uses a timeout to recover from
> hardware that could hang. And I have used UARTs for many years now.
>
> Considering all these big issues when you want to write reliable code,
> I'm considering dusting off the good old bit-banging technique.
> For the I2C Single Master scenario, it IS very simple: put data low/high
> (three-state), put clock low/high. The only problem is calibrating the
> clock frequency, but if you have a free timer that will be simple too.
>
> What is the drawback of bit banging? Maybe you write a few additional
> lines of code (you have to spit out 9 clock pulses by code), but I
> don't think much more than using a peripheral and protecting it with a
> timeout.
> But you get code that is fully under your control, and you know when
> the I2C transaction starts and you can be sure it will end, even when
> there are some hw issues on the board.
>
> [1]
> https://github.com/avrxml/asf/blob/68cddb46ae5ebc24ef8287a8d4c61a6efa5e2848/sam0/drivers/sercom/i2c/i2c_sam0/i2c_master.c#L406
>
> [2]
> https://github.com/acicuc/ArduinoCore-samd/commit/64385453bb549b6d2f868658119259e605aca74d
If you do a bit banged interface do not forget to support clock stretching by the slave. Do not assume that the slave has no special timing requirements. To do it right you need a hardware timer (or a cast iron guarantee that the bit bang function won't be interrupted).

I've found hardware I2C controllers on micros to be 100% reliably a problem. The manufacturers' drivers are often part of that problem.

I'm currently trying to debug someone else's non-working implementation of an ST I2C peripheral controller. It uses ST's driver.

MK
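The clock-stretching support Michael mentions amounts to one rule: after releasing SCL, read the line back and wait until it has actually gone high before timing the next edge, because a slave may hold it low. A minimal sketch, with the stretching slave simulated as a countdown so the behaviour is testable:

```c
#include <stdbool.h>

/* Simulated stretching slave: keeps SCL pulled low for `stretch_polls`
 * reads after the master releases the line. */
static int stretch_polls;

static bool scl_read(void)
{
    if (stretch_polls > 0) {
        stretch_polls--;
        return false;           /* slave still holds SCL low */
    }
    return true;
}

/* Release SCL, then wait until the line really reads high. A bounded
 * wait doubles as the stuck-bus detector: if SCL never rises, the bus
 * (or a slave) is broken. Returns false on timeout. */
static bool scl_release_and_wait(unsigned max_polls)
{
    /* (real code would set the SCL GPIO to input/open-drain here) */
    for (unsigned i = 0; i < max_polls; i++)
        if (scl_read())
            return true;
    return false;
}
```

Every SCL rising edge in the bit-bang loop goes through this wait, which is also why bit-banged masters handle stretching for free as long as they read SCL back rather than assuming the edge happened.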
On 21.11.20 1.09, Grant Edwards wrote:
> On 2020-11-20, Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
>> I have dealt with I2C controllers on 2 parts I can think of now
>> and both times it took me a lot longer to get them to work than it
>> had taken me on two earlier occasions when I bit banged it though...
>
> That's my experience also. I've done bit-banged I2C a couple of times,
> and it took about a half day each time. Using HW I2C controllers has
> always taken longer. The worst one I remember was on a Samsung ARM7
> part from 20 years ago. Between the mis-translations and errors in the
> documentation and the bugs in the HW, it took at least a week to get
> the I2C controller to reliably talk to anything.
>
> --
> Grant
To add to that, the drivers by the hardware makers are also quite twisted and difficult to integrate into the surrounding software. With ARM Cortexes, I'm not very fascinated by the drivers provided in CMSIS. Every time I have ended up writing my own.

--
-TV
On 21.11.20 11.10, Michael Kellett wrote:
> On 20/11/2020 08:43, pozz wrote:
>> .....
>>
>> What is the drawback of bit banging? Maybe you write a few additional
>> lines of code (you have to spit out 9 clock pulses by code), but I
>> don't think much more than using a peripheral and protecting it with a
>> timeout.
>
> If you do a bit banged interface do not forget to support clock
> stretching by the slave.
> Do not assume that the slave has no special timing requirements.
> To do it right you need a hardware timer (or a cast iron guarantee that
> the bit bang function won't be interrupted).
>
> I've found hardware I2C controllers on micros to be 100% reliably a
> problem. The manufacturers' drivers are often part of that problem.
>
> I'm currently trying to debug someone else's non-working implementation
> of an ST I2C peripheral controller. It uses ST's driver.
>
> MK
I have ended up jettisoning both ST's and Atmel's drivers and writing my own. You might consider going that way.

--
-TV
On 20/11/2020 18:39, Tauno Voipio wrote:

> I have had thousands of industrial instruments in the field for decades,
> each running some internal units with I2C, some bit-banged and others
> on the hardware interfaces on the processors used, and not a single
> failure due to I2C hanging.
>
The only time I have seen I²C buses hanging is during development, when you might be re-starting the cpu in the middle of an operation without there being a power-on reset to the slave devices. That can easily leave the bus in an invalid state, or leave a slave state machine out of synchronisation with the bus. But I have not seen this kind of thing happen in a live system.
> Please remember that the I2C bus is an Inter-IC bus, not to be used for
> connections to the outside of the device, preferably only on the same
> circuit board. There should be no external connectors where e.g.
> shorts between SCL and SDA could happen.
>
> All the hardware I2C controllers have been able to be restored to a
> sensible state with a software reset after a time-out. This includes
> the Atmel chips.
On 21/11/2020 10:10, Michael Kellett wrote:
> On 20/11/2020 08:43, pozz wrote:
>> .....
>>
>> What is the drawback of bit banging? Maybe you write a few additional
>> lines of code (you have to spit out 9 clock pulses by code), but I
>> don't think much more than using a peripheral and protecting it with a
>> timeout.
>
> If you do a bit banged interface do not forget to support clock
> stretching by the slave.
If the slave uses clock stretching, I think its datasheet would say so clearly.
> Do not assume that the slave has no special timing requirements.
> To do it right you need a hardware timer (or a cast iron guarantee that
> the bit bang function won't be interrupted).
Please, explain. I2C is synchronous to the clock transmitted by the Master. Of course the Master should respect a range for the clock frequency (around 100 kHz or 400 kHz), but I don't think a jitter on the I2C clock, caused by an interrupt, could be a serious problem for the slave.
> I've found hardware I2C controllers on micros to be 100% reliably a
> problem. The manufacturers' drivers are often part of that problem.
>
> I'm currently trying to debug someone else's non-working implementation
> of an ST I2C peripheral controller. It uses ST's driver.
>
> MK
On 21/11/2020 12:06, David Brown wrote:
 > On 20/11/2020 18:39, Tauno Voipio wrote:
 >
 >> I have had thousands of industrial instruments in the field for decades,
 >> each running some internal units with I2C, some bit-banged and others
 >> on the hardware interfaces on the processors used, and not a single
 >> failure due to I2C hanging.
 >>
 >
 > The only time I have seen I²C buses hanging is during development, when
 > you might be re-starting the cpu in the middle of an operation without
 > there being a power-on reset to the slave devices.  That can easily
 > leave the bus in an invalid state, or leave a slave state machine out of
 > synchronisation with the bus.  But I have not seen this kind of thing
 > happen in a live system.
In the past I had a big problem with an I2C bus on a board. The 
ubiquitous EEPROM 24LC64 connected to a 16-bit MCU by Fujitsu. In that 
case, I2C was implemented with bit-bang code.

At startup the MCU read the EEPROM content and, if it was corrupted, 
factory defaults were used and written to the EEPROM. This mechanism was 
introduced to write a blank EEPROM at the very first power up of a 
fresh board.

Unfortunately it sometimes happened that the MCU reset in the middle of 
an I2C transaction with the EEPROM (the reset was caused by a glitch on 
the power supply that triggered the MCU voltage supervisor).
When the MCU restarted, it tried to communicate with the EEPROM, but 
the EEPROM was in an unsynchronized I2C state. This is well described 
in AN686[1] from Analog Devices.

The MCU thought it was a blank EEPROM, and factory settings were used, 
overwriting the user settings! What the user complained about was that 
the machine sometimes restarted with factory settings, losing their 
settings.

In that case the solution was adding an I2C reset procedure at startup 
(some clock pulses and a STOP condition as described in the Application 
Note).
I think this I2C bus reset procedure should always be added where 
there's an I2C bus, and most probably it must be implemented with 
bit-bang code.


[1] 
https://www.analog.com/media/en/technical-documentation/application-notes/54305147357414AN686_0.pdf

 >> Please remember that the I2C bus is an Inter-IC bus, not to be used for
 >> connections to the outside of the device, preferably only on the same
 >> circuit board. There should be no external connectors where e.g. the
 >> shorts between the SCL and SDA could happen.
 >>
 >> All the hardware I2C controllers have been able to be restored to a
 >> sensible state with a software reset after a time-out. This includes
 >> the Atmel chips.
 >>
 >
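The recovery procedure pozz describes from AN686 — clock SCL until the stuck slave has shifted out its remaining bits and releases SDA, then issue a STOP — can be sketched like this. The stuck slave is simulated as a bit counter so the logic is testable; real code would toggle the GPIO pins with proper timing:

```c
#include <stdbool.h>

/* Simulated stuck slave: it was reset mid-read and still has data bits
 * to shift out, so it holds SDA low until it has seen `bits_left`
 * more clock pulses. */
static int bits_left;

static bool sda_read(void)  { return bits_left <= 0; }  /* high once released */
static void pulse_scl(void) { if (bits_left > 0) bits_left--; }

/* Startup bus reset: clock SCL up to 9 times until the slave releases
 * SDA, then issue a STOP to bring every state machine back to idle.
 * Returns true if SDA was freed; false means something else (e.g. a
 * hardware short) is holding the line. */
static bool i2c_bus_reset(void)
{
    for (int i = 0; i < 9; i++) {
        if (sda_read())
            break;              /* slave has let go */
        pulse_scl();
    }
    if (!sda_read())
        return false;           /* still stuck: not a protocol hang */

    /* STOP condition: SDA rises while SCL is high
     * (the GPIO drive sequence is omitted in this model). */
    return true;
}
```

Nine pulses suffice because a slave can be at most one byte plus an ACK bit out of step with the master.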
On 22/11/2020 17:48, pozz wrote:
> On 21/11/2020 12:06, David Brown wrote:
>> .....
>
> In the past I had a big problem with an I2C bus on a board. The
> ubiquitous EEPROM 24LC64 connected to a 16-bit MCU by Fujitsu. In that
> case, I2C was implemented with bit-bang code.
>
> .....
>
> Unfortunately it sometimes happened that the MCU reset in the middle of
> an I2C transaction with the EEPROM (the reset was caused by a glitch on
> the power supply that triggered the MCU voltage supervisor).
>
> .....
>
> In that case the solution was adding an I2C reset procedure at startup
> (some clock pulses and a STOP condition as described in the Application
> Note).
> I think this I2C bus reset procedure should always be added where
> there's an I2C bus, and most probably it must be implemented with
> bit-bang code.
Sure, add that kind of reset at startup - it also helps if you are unlucky when restarting the chip during development.

Also make sure you write two copies of the user data to the EEPROM, so that you can survive a crash while writing to it.

But if your board is suffering power supply glitches that are enough to trigger the MCU brown-out, but not enough to cause a proper re-start of the rest of the board, then /that/ is a major problem that you should be trying to solve.
>>> Please remember that the I2C bus is an Inter-IC bus, not to be used for
>>> connections to the outside of the device, preferably only on the same
>>> circuit board. There should be no external connectors where e.g.
>>> shorts between SCL and SDA could happen.
>>>
>>> All the hardware I2C controllers have been able to be restored to a
>>> sensible state with a software reset after a time-out. This includes
>>> the Atmel chips.
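David's suggestion to keep two copies of the user data can be sketched as below. The record layout and the simple additive checksum are illustrative choices, not from the thread; the EEPROM is simulated as a RAM array. The point is that a reset during a write corrupts at most one copy, so the reader falls back to the survivor instead of reverting to factory defaults:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout: two fixed-size records, each ending in a
 * one-byte checksum of the payload. */
#define REC_SIZE 16
static uint8_t eeprom[2 * REC_SIZE];    /* simulated 24LC64 region */

static uint8_t checksum(const uint8_t *d, int n)
{
    uint8_t c = 0;
    while (n--)
        c += *d++;
    return c;
}

/* Write the copies one at a time: an interrupted write leaves at most
 * one of them invalid. */
static void settings_write(const uint8_t data[REC_SIZE - 1])
{
    for (int copy = 0; copy < 2; copy++) {
        uint8_t *rec = &eeprom[copy * REC_SIZE];
        memcpy(rec, data, REC_SIZE - 1);
        rec[REC_SIZE - 1] = checksum(data, REC_SIZE - 1);
    }
}

/* Returns true and fills `out` if at least one copy is valid;
 * false only if both are corrupt (genuinely blank/new board). */
static bool settings_read(uint8_t out[REC_SIZE - 1])
{
    for (int copy = 0; copy < 2; copy++) {
        const uint8_t *rec = &eeprom[copy * REC_SIZE];
        if (checksum(rec, REC_SIZE - 1) == rec[REC_SIZE - 1]) {
            memcpy(out, rec, REC_SIZE - 1);
            return true;
        }
    }
    return false;
}
```

With this scheme, the "blank EEPROM means load factory defaults" logic from pozz's story only triggers when both copies fail the check, not after a single interrupted write.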