Reply by Vincent vB November 29, 2016
On 11-11-2016 at 13:25, Chris wrote:
> On 11/11/16 08:37, Vincent vB wrote:
>
>> We don't use dynamic memory with nanopb. However, we do have some 'scratch space', which is used as a sort of stack for unknown-length data, but it is cleared after each time the data is sent.
>>
>> Vincent
>
> Hi,
>
> If you know that you will never need more than a certain buffer size, then it can be statically allocated at startup. Where that isn't known for sure, include instrumentation to check for a high water mark, then run the code worst case in a test harness to find out what it's actually using.
>
> Chris
Hi Chris,

We know exactly how much data is written at most. The scratch space is dimensioned for this case. Otherwise you'd need to test it.

Vincent
Reply by Tom Gardner November 11, 2016
On 11/11/16 23:51, Tim Wescott wrote:
> But my point is that if you can design the protocol, or if you have good specifications on it, it's _far better_ engineering practice to break out your pencil and paper and do the proof from first principles.
Oh, how old-fashioned. Surely you realise that it is possible and desirable to test quality into a product :(
Reply by Tim Wescott November 11, 2016
On Fri, 11 Nov 2016 22:00:32 +0000, Chris wrote:

> On 11/11/16 19:14, Tim Wescott wrote:
>
>> Yick. And then get blind-sided when reality hoses you.
>
> Sounds pretty meaningless to me :-).
>
>> You either need a protocol that, end to end, guarantees some maximum buffer size, or you need a system that's tolerant to communications occasionally breaking down.
>
> Buffer sizes for many embedded systems are known, but worst-case values are not always predictable, so you can end up allocating far more memory than is needed, which isn't helpful on limited-memory systems. If you are building thousands of an item, you don't choose a micro with more RAM than you need, since RAM size can be one of the major price differentiators. Fine if you are running on PC hardware, Linux, whatever, with Mbytes of RAM, but most embedded systems don't have such luxury.
>
> A typical recent example was a system that received commands via RS-423 at 9600 baud, with circular-buffer FIFOs in and out. Did some quick sums and allocated bigger queues than we needed, with watermarks at a couple of points, then drove the system hard to see what was actually being used. It turned out that we needed far less than we thought.
>
> Many embedded systems need that sort of fine tuning to optimise resource usage against cost. It's also good engineering practice from an efficiency point of view...
Yes, if you're going into things blind it's good engineering practice to do things by measurement. But my point is that if you can design the protocol, or if you have good specifications on it, it's _far better_ engineering practice to break out your pencil and paper and do the proof from first principles.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
I'm looking for work -- see my website!
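[As a hypothetical worked example of that sort of first-principles sizing - all numbers here are illustrative, not taken from any post in this thread - in C:]

    /* Worst-case receive-queue sizing, derived rather than measured.
     *
     * Illustrative assumptions: 9600 baud, 8N1 framing, so each byte
     * costs 10 bit times => 960 bytes/second arriving on the wire, and
     * the consumer task services the queue at least every 50 ms.
     *
     * Worst-case bytes arriving between two service points:
     *   960 bytes/s * 0.050 s = 48 bytes
     * Add one maximum-length frame (say 32 bytes) that may sit in the
     * queue unconsumed, giving 80 bytes; round up to a power of two.
     */
    #define BAUD_RATE        9600u
    #define BITS_PER_BYTE    10u    /* start + 8 data + stop */
    #define SERVICE_MS       50u
    #define MAX_FRAME_BYTES  32u

    #define BYTES_PER_SEC    (BAUD_RATE / BITS_PER_BYTE)    /* 960 */
    #define WORST_CASE_BYTES ((BYTES_PER_SEC * SERVICE_MS) / 1000u + MAX_FRAME_BYTES)

    #define RX_QUEUE_SIZE    128u   /* next power of two >= 80 */

    /* Fail the build, not the field unit, if the arithmetic changes. */
    _Static_assert(RX_QUEUE_SIZE >= WORST_CASE_BYTES,
                   "RX queue smaller than provable worst case");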
Reply by Chris November 11, 2016
On 11/11/16 19:14, Tim Wescott wrote:

> Yick. And then get blind-sided when reality hoses you.
Sounds pretty meaningless to me :-).
> You either need a protocol that, end to end, guarantees some maximum buffer size, or you need a system that's tolerant to communications occasionally breaking down.
Buffer sizes for many embedded systems are known, but worst-case values are not always predictable, so you can end up allocating far more memory than is needed, which isn't helpful on limited-memory systems. If you are building thousands of an item, you don't choose a micro with more RAM than you need, since RAM size can be one of the major price differentiators. Fine if you are running on PC hardware, Linux, whatever, with Mbytes of RAM, but most embedded systems don't have such luxury.

A typical recent example was a system that received commands via RS-423 at 9600 baud, with circular-buffer FIFOs in and out. Did some quick sums and allocated bigger queues than we needed, with watermarks at a couple of points, then drove the system hard to see what was actually being used. It turned out that we needed far less than we thought.

Many embedded systems need that sort of fine tuning to optimise resource usage against cost. It's also good engineering practice from an efficiency point of view...

Chris
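[A minimal sketch of the kind of instrumented FIFO described above - names and sizes are illustrative, not from Chris's actual system:]

    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_SIZE 128u   /* deliberately generous first guess */

    typedef struct {
        uint8_t  buf[QUEUE_SIZE];
        uint16_t head, tail, count;
        uint16_t high_water;  /* worst occupancy ever observed */
    } fifo_t;

    static bool fifo_put(fifo_t *q, uint8_t byte)
    {
        if (q->count == QUEUE_SIZE)
            return false;     /* overflow: caller decides the policy */
        q->buf[q->head] = byte;
        q->head = (uint16_t)((q->head + 1u) % QUEUE_SIZE);
        q->count++;
        if (q->count > q->high_water)
            q->high_water = q->count;   /* record the watermark */
        return true;
    }

    static bool fifo_get(fifo_t *q, uint8_t *byte)
    {
        if (q->count == 0u)
            return false;
        *byte = q->buf[q->tail];
        q->tail = (uint16_t)((q->tail + 1u) % QUEUE_SIZE);
        q->count--;
        return true;
    }

[After driving the system at its worst case in a test harness, read back high_water and shrink QUEUE_SIZE to the observed peak plus a safety margin.]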
Reply by Tim Wescott November 11, 2016
On Fri, 11 Nov 2016 12:25:14 +0000, Chris wrote:

> On 11/11/16 08:37, Vincent vB wrote:
>
>> We don't use dynamic memory with nanopb. However, we do have some 'scratch space', which is used as a sort of stack for unknown-length data, but it is cleared after each time the data is sent.
>>
>> Vincent
>
> Hi,
>
> If you know that you will never need more than a certain buffer size, then it can be statically allocated at startup. Where that isn't known for sure, include instrumentation to check for a high water mark, then run the code worst case in a test harness to find out what it's actually using.
>
> Chris
Yick. And then get blind-sided when reality hoses you.

You either need a protocol that, end to end, guarantees some maximum buffer size, or you need a system that's tolerant to communications occasionally breaking down.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
I'm looking for work -- see my website!
Reply by Chris November 11, 2016
On 11/11/16 08:37, Vincent vB wrote:

> We don't use dynamic memory with nanopb. However, we do have some 'scratch space', which is used as a sort of stack for unknown-length data, but it is cleared after each time the data is sent.
>
> Vincent
Hi,

If you know that you will never need more than a certain buffer size, then it can be statically allocated at startup. Where that isn't known for sure, include instrumentation to check for a high water mark, then run the code worst case in a test harness to find out what it's actually using.

Chris
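[One common way to implement that instrumentation, sketched here with illustrative names, is to pre-fill the statically allocated buffer with a known pattern at startup and later scan for how much of it was ever overwritten:]

    #include <stdint.h>
    #include <stddef.h>

    #define SCRATCH_SIZE 256u
    #define FILL_PATTERN 0xA5u

    static uint8_t scratch[SCRATCH_SIZE];

    /* Call once at startup, before the buffer is first used. */
    void scratch_paint(void)
    {
        for (size_t i = 0; i < SCRATCH_SIZE; i++)
            scratch[i] = FILL_PATTERN;
    }

    /* Call from the test harness after a worst-case run: returns the
     * high-water mark, assuming the buffer fills from index 0 upwards.
     * (A payload byte that happens to equal the pattern right at the
     * boundary can under-report by a byte or two - acceptable for
     * dimensioning work, with a margin added on top.) */
    size_t scratch_high_water(void)
    {
        size_t used = SCRATCH_SIZE;
        while (used > 0u && scratch[used - 1u] == FILL_PATTERN)
            used--;
        return used;
    }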
Reply by Vincent vB November 11, 2016
Hi Pozz,

On 20-10-2016 at 0:22, pozz wrote:

> I often have the need to exchange some data between two or more MCUs. I
> usually use I2C or UART as physical layers.
> So I'm thinking to use a "self-descriptive" serializer protocol format, such as Protobuf, MessagePack, BSON and so on.
We use protobuf for our communication (nanopb). We're very happy with it.
> Do you use one serialization format? Which one?
>
> Of course, it should be simple to implement (in transmission/encoding and reception/decoding) in a small embedded MCU in C language, without dynamic memory support.
We don't use dynamic memory with nanopb. However, we do have some 'scratch space', which is used as a sort of stack for unknown-length data, but it is cleared after each time the data is sent.

Vincent
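[For readers who haven't seen nanopb: encoding into a statically allocated buffer looks roughly like the sketch below. pb_ostream_from_buffer and pb_encode are real nanopb calls; the SensorReading message type and its fields are hypothetical stand-ins for whatever the .proto file defines.]

    #include <stdint.h>
    #include <stddef.h>
    #include <pb_encode.h>
    #include "sensor.pb.h"   /* hypothetical, generated from sensor.proto */

    /* Encode a message into a fixed buffer - no malloc anywhere.
     * Returns the encoded length, or 0 on failure. */
    size_t encode_reading(uint8_t *out, size_t out_size, int32_t millivolts)
    {
        SensorReading msg = SensorReading_init_zero;  /* hypothetical type */
        msg.voltage_mv = millivolts;

        pb_ostream_t stream = pb_ostream_from_buffer(out, out_size);
        if (!pb_encode(&stream, SensorReading_fields, &msg))
            return 0;         /* buffer too small or encoding error */
        return stream.bytes_written;
    }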
Reply by Chris November 6, 2016
On 11/06/16 14:49, Chris wrote:

>
> the data and its length. Pass that to the protocol layer, which
Sorry, typo - "protocol layer" above should have been "decode layer".
Reply by Chris November 6, 2016
On 10/19/16 22:22, pozz wrote:
> I often have the need to exchange some data between two or more MCUs. I usually use I2C or UART as physical layers.
>
> Normally I design a simple protocol between the MCUs: one framing mechanism (Start Of Frame, End Of Frame), one integrity-check mechanism (CRC), and so on.
>
> The payload is statically defined between the two MCUs:
> - first byte is the version
> - second byte is the voltage monitoring level
> - third and fourth bytes are some flags
> - ... and so on
>
> As you can understand, both MCUs *must* know and agree about that protocol format. However, during the lifetime of the product, I need to add some functionality or fix some bugs, and those activities can lead to a revision of the protocol format (maybe I need two bytes for the voltage level). Sometimes the two MCUs have different versions with different protocol format implementations. In order to avoid protocol incompatibility, they all know about the protocol formats used before, so they can adapt the parsing function to the actual current protocol format. As you can understand, this can become troublesome.
>
> So I'm thinking to use a "self-descriptive" serializer protocol format, such as Protobuf, MessagePack, BSON and so on.
>
> Do you use one serialization format? Which one?
>
> Of course, it should be simple to implement (in transmission/encoding and reception/decoding) in a small embedded MCU in C language, without dynamic memory support.
I'm not sure you need a complex format here, if you think of the problem in two layers, keeping the protocol layer transparent to data. A simple frame format could be:

Start-of-frame byte
Data length, N
Data, N bytes
Checksum or CRC
End-of-frame byte

You then write a simple state machine to verify the checksum and extract the data and its length. Pass that to the protocol layer, which knows where to look in the data for its revision level. To maintain compatibility, any new parameters are tagged on to the end of the existing data and the data length increased to suit. Either that, or negotiation between ends to agree capabilities, but that's much more complex and you should be able to avoid it...

Regards,

Chris
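[A minimal sketch of such a receive state machine - the byte values and the additive checksum are illustrative choices, not part of the proposal above:]

    #include <stdint.h>
    #include <stdbool.h>

    #define SOF_BYTE  0x7Eu
    #define EOF_BYTE  0x7Fu
    #define MAX_DATA  64u

    typedef enum { WAIT_SOF, GET_LEN, GET_DATA, GET_SUM, GET_EOF } rx_state_t;

    static rx_state_t state = WAIT_SOF;
    static uint8_t    data[MAX_DATA];
    static uint8_t    len, idx, sum;

    /* Feed one received byte; returns true when a valid frame is
     * complete, with the payload left in data[0..len-1]. */
    bool rx_byte(uint8_t b)
    {
        switch (state) {
        case WAIT_SOF:
            if (b == SOF_BYTE) state = GET_LEN;
            break;
        case GET_LEN:
            if (b == 0u || b > MAX_DATA) { state = WAIT_SOF; break; }
            len = b; idx = 0u; sum = 0u;
            state = GET_DATA;
            break;
        case GET_DATA:
            data[idx++] = b;
            sum = (uint8_t)(sum + b);   /* simple additive checksum */
            if (idx == len) state = GET_SUM;
            break;
        case GET_SUM:
            state = (b == sum) ? GET_EOF : WAIT_SOF;
            break;
        case GET_EOF:
            state = WAIT_SOF;
            return (b == EOF_BYTE);     /* frame valid only if EOF seen */
        }
        return false;
    }

[A production version would also need to escape any SOF/EOF values occurring inside the data, or resynchronise on SOF mid-frame; that is omitted here for brevity.]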
Reply by David Brown November 2, 2016
On 02/11/16 10:14, kalvin.news@gmail.com wrote:
> On Thursday, 20 October 2016 at 10:40:44 UTC+3, David Brown wrote:
>> <snip>
>> You can come a /long/ way with just a little more than the system you have. Keep the same framing mechanism, but make sure you have a field for "length of payload". In the payload, you have "type of telegram" and "version of telegram format". Then when you need to change the formats, you add new data to the old structure.
>>
>> So format version 1 might be:
>>
>> typedef struct {
>>     uint8_t programVersion;
>>     uint8_t voltageMonitor;
>>     uint16_t flags;
>> } format1payload;
>> static_assert(sizeof(format1payload) == 4);
>>
>> Format version 2, with voltage now in millivolts, will be:
>>
>> typedef struct {
>>     uint8_t programVersion;
>>     uint8_t voltageMonitor;
>>     uint16_t flags;
>>     // Start of version 2
>>     uint16_t voltageMonitorMillivolts;
>> } format2payload;
>> static_assert(sizeof(format2payload) == 6);
>>
>> A transmitter always sends with the latest version it knows, and will fill in both the voltageMonitor and voltageMonitorMillivolts fields. A receiver interprets as much as it can based on the latest version it knows and the version it receives - any excess data beyond its understanding can safely be ignored.
>>
>> Your encoder and decoders are now nothing more than casts between char* pointers and struct pointers.
>
> Typically a uint8_t is just unsigned char, but the char may be more than one octet, i.e. 8 bits. So the static_assert(sizeof(format1payload) == 4) will be valid, but depending on the target architecture the structure may be more than 4 octets. When you pass the payload structure to the transmit function, it will send 4 or more octets depending on how many octets the structure contains. I wouldn't call this method robust and portable at all.
It is entirely robust and safe - but not directly portable to the few devices around that have CHAR_BIT > 8. By using uint8_t in the code, if it is used on a device with CHAR_BIT > 8, then the compilation will fail because the type uint8_t does not exist.

Your aim in writing code should be to make it clear, efficient on the targets that matter to you, and make it fail with a compile-time error on targets that don't match your requirements. So if your realistic targets have an 8-bit char (and most do - the exceptions are almost exclusively DSPs), and your code can be better by relying on that feature, then you /should/ rely on 8-bit chars. And for robustness, the code should fail to compile if that feature or assumption does not hold.

If you want to use the same technique and make it portable to 16-bit char devices (TMS320 and so on), then you can't use uint8_t types - uint16_t is the smallest you should use. And the portable static_assert should be:

static_assert((CHAR_BIT * sizeof(format_payload)) == 8 * 4);
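[To make the "decoders are just casts" point concrete, a minimal decoding sketch - the structs mirror the quoted example, while the length-based fallback and its scaling are illustrative additions, and the whole thing assumes both ends agree on endianness and struct packing:]

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint8_t  programVersion;
        uint8_t  voltageMonitor;
        uint16_t flags;
    } format1payload;

    typedef struct {
        uint8_t  programVersion;
        uint8_t  voltageMonitor;
        uint16_t flags;
        /* Start of version 2 */
        uint16_t voltageMonitorMillivolts;
    } format2payload;

    /* Decode a received payload of 'len' bytes.  Old receivers simply
     * ignore trailing bytes they don't understand; new receivers fall
     * back to the version-1 fields when the sender is older. */
    void handle_payload(const uint8_t *payload, size_t len)
    {
        format2payload msg = {0};

        /* memcpy instead of a direct pointer cast sidesteps alignment
         * traps on stricter cores; compilers optimise it away. */
        memcpy(&msg, payload, len < sizeof msg ? len : sizeof msg);

        uint16_t mv;
        if (len >= sizeof(format2payload))
            mv = msg.voltageMonitorMillivolts;           /* sender knows v2 */
        else
            mv = (uint16_t)(msg.voltageMonitor * 100u);  /* illustrative v1 scaling */

        (void)mv;  /* ... act on the decoded values ... */
    }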
> A better way would be to create a transmit buffer, and add the structure fields one at a time into the buffer. There should be different functions for different data types (char, uint8, uint16, int, long int, etc.) which will take care of the proper size matching.
That is "better" if you want longer, slower, uglier and hard to maintain code.
> When all the items of the structure have been added into the buffer, the transmitter will send the buffer. I know, this is not for lazy people, but it is
What you call "lazy", I call clear, neat and efficient.
> a portable and more robust way of doing things. When you port the application to a new platform, you just need to tweak those functions which take care of the actual data-size matching (char, uint8, uint16, int, long int, etc.). I know, this method requires more initial work, but it is the way to do it in a portable manner.
It is /a/ way to do it in a portable manner - and sometimes you want extreme portability. But such portability is rarely useful, and rarely results in better code.

When you write the documentation for a project, do you assume anyone reading it is happy with English, or do you translate it into a few dozen other languages for "portability"? Do you avoid sentences with more than 10 words because some people reading it might be severely dyslexic? And do you expect your clients to pay for the extra time needed for such "robustness" and "portability"?

I am not saying that portability is a bad thing, or that your method is necessarily a poor choice. I am merely saying that "portability" is not free, and you should not pay more for it than you actually need.
> Br,
> Kalvin
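[For contrast, a minimal sketch of the field-at-a-time buffer building Kalvin describes - all function and field names here are illustrative:]

    #include <stdint.h>
    #include <stddef.h>

    /* Append primitives to a transmit buffer one field at a time,
     * fixing the wire size and byte order regardless of the host. */
    typedef struct {
        uint8_t *buf;
        size_t   cap, len;
    } txbuf_t;

    static int put_u8(txbuf_t *t, uint8_t v)
    {
        if (t->len + 1u > t->cap) return -1;
        t->buf[t->len++] = v;
        return 0;
    }

    static int put_u16_le(txbuf_t *t, uint16_t v)   /* little-endian wire order */
    {
        if (t->len + 2u > t->cap) return -1;
        t->buf[t->len++] = (uint8_t)(v & 0xFFu);
        t->buf[t->len++] = (uint8_t)(v >> 8);
        return 0;
    }

    /* Usage: build the version-2 payload from the earlier example
     * field by field, independent of host struct layout. */
    static size_t build_payload(uint8_t *out, size_t cap)
    {
        txbuf_t t = { out, cap, 0 };
        if (put_u8(&t, 2)            /* programVersion */
         || put_u8(&t, 33)           /* voltageMonitor */
         || put_u16_le(&t, 0x0001u)  /* flags */
         || put_u16_le(&t, 3300u))   /* voltageMonitorMillivolts */
            return 0;
        return t.len;
    }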