
Embedded RTOS - features poll

Started by Stargazer September 19, 2010
On Mon, 20 Sep 2010 14:09:33 -0700, D Yuniskis
<not.going.to.be@seen.com> wrote:

>Hi Paul,
>
>Paul Keinanen wrote:
>> On Mon, 20 Sep 2010 10:33:23 -0700, Jon Kirwan
>> <jonk@infinitefactors.org> wrote:
>>
>>> A minimally compiled system does NOT have a real-time clock,
>>
>> An interrupt caused by the RT clock is no different from any other
>> interrupt.
>
>Some systems rely on the jiffy being "special". <frown>
By jiffy I assume you are referring to Linux? Originally, Unix was just another time-sharing system. Fortunately, Linux also has purely priority-based scheduling, otherwise it would not be usable for any real work :-).
>
>>> quantums,
>>
>> What is that ? A feature of the 1960's time sharing systems ?
>
><grin> Actually, I typically allow each task to have a priority
>and/or a "timeslice" (which I will assume "quantum" is intended to
>reference). This can be a win for tuning on smaller systems.
>On larger systems, it provides a subdivision of "round robin"
>scheduling (i.e., allowing you to give a bigger piece of the
>"robin" to one task over another)
There is something wrong in an RT system design if you put several tasks on the same priority.
>
>>> semaphores, messages,
>>
>> These are syntactic sugar
>>
>>> thread priorities,
>>
>> In a simple system, the priorities are defined by the order, in which
>> you declare them.
>>
>> There are extremely rare cases (one in a million) in which you really
>> need to change the priority at run time.
>
>Ignoring, of course, protocols to handle priority inversion?
The first time I heard the expression "priority inversion" was when NASA had problems with it on a rover on Mars. Before that, I had made multitasking systems for decades without similar problems. Of course, maintaining clear data ownership, and atomic access for small data entities, helps a lot.
>I've been experimenting with dynamically altering priorities of
>tasks based on user activities. Mainly in an attempt to anticipate
>future needs so I can have the "results" ready before the user
>asks for them.
>
>Historically, timesharing systems would use priority aging as a
>hack to improve the performance of short-lived and interactive
>tasks (processes/programs).
>
>The problem with all these scheduling parameters is that they
>are often abused by folks who haven't properly designed their
>"system" and aren't getting the behavior/performance they want.
>Akin to throwing things in an ISR because your application
>architecture doesn't address the timeliness of that particular
>"thing", adequately.
>
>Once you start *playing* with these parameters, it's easy to
>find yourself in the role of <fill-in-favorite-derogatory-ethnicity>
>Carpenter trimming the legs on the tipsy dinner table:
>"Ooops! Too short!"
The first rule of RT system tuning is to find jobs that could be moved to a _lower_ priority. Unfortunately, most people seem to intuitively try to increase the priority of some "important" task :-(.
Hi Paul,

Paul Keinanen wrote:
> On Mon, 20 Sep 2010 14:09:33 -0700, D Yuniskis
> <not.going.to.be@seen.com> wrote:
>
>> Paul Keinanen wrote:
>>> On Mon, 20 Sep 2010 10:33:23 -0700, Jon Kirwan
>>> <jonk@infinitefactors.org> wrote:
>>>
>>>> A minimally compiled system does NOT have a real-time clock,
>>> An interrupt caused by the RT clock is no different from any other
>>> interrupt.
>> Some systems rely on the jiffy being "special". <frown>
>
> By jiffy I assume you are referring to Linux ?
The "system tick" is often referred to as the "jiffy". Since most MTOSes support only a single "clock", this tends to also drive the scheduler.
> Originally Unix was just an other time sharing system.
>
> Fortunately Linux also has a purely priority based scheduling,
> otherwise it would not be usable for any real work :-).
Do people actually *do* work on Linux? Seems most folks work on Linux *itself*! :>
>>>> quantums,
>>> What is that ? A feature of the 1960's time sharing systems ?
>> <grin> Actually, I typically allow each task to have a priority
>> and/or a "timeslice" (which I will assume "quantum" is intended to
>> reference). This can be a win for tuning on smaller systems.
>> On larger systems, it provides a subdivision of "round robin"
>> scheduling (i.e., allowing you to give a bigger piece of the
>> "robin" to one task over another)
>
> There is something wrong in an RT system design, if you put several
> tasks on the same priority.
If there is no inherent relationship between those tasks, then how do you prioritize them? If I have a box "running my home", what are the *relative* priorities of the HVAC controller and the irrigation controller and the burglar alarm?
>>>> semaphores, messages,
>>> These are syntactic sugar
>>>
>>>> thread priorities,
>>> In a simple system, the priorities are defined by the order, in which
>>> you declare them.
>>>
>>> There are extremely rare cases (one in a million) in which you really
>>> need to change the priority at run time.
>> Ignoring, of course, protocols to handle priority inversion?
>
> The first time I heard the expression "priority inversion" was when
> NASA had problems with it on a rover on Mars. Before that I have made
> multitasking systems for decades, without similar problems. Of course,
> maintaining a clear data ownership and for small data entities atomic
> access helps a lot.
Yes. I always make sure that I remember the role of an OS is to make my (developer) life easier by taking care of "things" for me (that are either too boring to attend to *or* too *tedious*). As systems get more complex -- especially systems that *grow* -- it gets hard to keep track of all the interactions between (shared) objects and entities. A good candidate for an OS service!
>> I've been experimenting with dynamically altering priorities of
>> tasks based on user activities. Mainly in an attempt to anticipate
>> future needs so I can have the "results" ready before the user
>> asks for them.
>>
>> Historically, timesharing systems would use priority aging as a
>> hack to improve the performance of short-lived and interactive
>> tasks (processes/programs).
>>
>> The problem with all these scheduling parameters is that they
>> are often abused by folks who haven't properly designed their
>> "system" and aren't getting the behavior/performance they want.
>> Akin to throwing things in an ISR because your application
>> architecture doesn't address the timeliness of that particular
>> "thing", adequately.
>>
>> Once you start *playing* with these parameters, it's easy to
>> find yourself in the role of <fill-in-favorite-derogatory-ethnicity>
>> Carpenter trimming the legs on the tipsy dinner table:
>> "Ooops! Too short!"
>
> The first rule of RT system tuning is to find jobs that could be moved
> to a _lower_ priority.
>
> Unfortunately, most people seem to intuitively try to increase the
> priority of some "important" task :-(.
*EXACTLY*! I recently posed a related question regarding how people tweak things and the psychology employed. E.g., if you are doing N things on your workstation ("PC") and are particularly interested in one of them, how do you "bias" the machine to expedite the "task" (activity) that you are focused on?

We tend to "kill" activities that we consider least important in the hope that this will help the *desired* activity proceed faster. But we don't even consider how many resources are being used by that "killed" activity when we make the decision to terminate it.

Wouldn't a more meaningful interface (in this example) be a facility that lets you *elevate* an activity's importance (nice -1000) instead of having to *de-emphasize* everything else?

[note that this is the exact opposite condition from what you are describing -- sort of. :> In your scenario, the developer should have been disciplined to run each task at "the lowest possible priority". In the workstation example I posed, the user doesn't think in terms of how he might want to refine his priorities ex post facto while he is "creating" those "activities"]
On Mon, 20 Sep 2010 22:06:58 -0700, D Yuniskis
<not.going.to.be@seen.com> wrote:

>Hi Paul,
>
>Paul Keinanen wrote:
>> On Mon, 20 Sep 2010 14:09:33 -0700, D Yuniskis
>> <not.going.to.be@seen.com> wrote:
>> There is something wrong in an RT system design, if you put several
>> tasks on the same priority.
>
>If there is no inherent relationship between those tasks, then
>how do you prioritize them? If I have a box "running my home",
>what are the *relative* priorities of the HVAC controller and
>the irrigation controller and the burglar alarm?
That should not be hard. You may have to divide each task into several subtasks with different priorities. In any case, each task should not need to run for more than a millisecond, or in the "running my home" example, for more than a minute.
>>>>> thread priorities,
>>>> In a simple system, the priorities are defined by the order, in which
>>>> you declare them.
>>>>
>>>> There are extremely rare cases (one in a million) in which you really
>>>> need to change the priority at run time.
>>> Ignoring, of course, protocols to handle priority inversion?
>>
>> The first time I heard the expression "priority inversion" was when
>> NASA had problems with it on a rover on Mars. Before that I have made
>> multitasking systems for decades, without similar problems. Of course,
>> maintaining a clear data ownership and for small data entities atomic
>> access helps a lot.
>
>Yes. I always make sure that I remember the role of an OS is to
>make my (developer) life easier by taking care of "things" for me
>(that are either too boring to attend to *or* too *tedious*). As
>systems get more complex -- especially systems that *grow* -- it
>gets hard to keep track of all the interactions between (shared)
>objects and entities. A good candidate for an OS service!
Some of these issues are really high-level architectural decisions. If done improperly, you end up with a lot of locking requirements etc.

In the 1980's I was running a department making control systems based on PDP-11/RSX-11 systems. These machines had a program addressable space of 64 KiB and a maximum physical memory of 256 KiB or 4 MiB.

Once I got the invitation to tender, I split the problem into tasks, decided what data should be owned by each task and how they were going to communicate with each other, assigned preliminary priorities, thought about which person in my team should do each task, and only after that started to estimate how long each task would take, before writing the tender.

We never had problems fitting each program into 64 KiB, and the projects stayed much better within budget than those of other departments working with the "unlimited" VAX addressing space.
Hi,

thanks for your response

On Sep 20, 7:35 pm, Warren <ve3...@gmail.com> wrote:

> Stargazer expounded in news:67fc4024-eeeb-4fe5-a1cf-
> a71f9d56b...@z34g2000pro.googlegroups.com:
> ..
>
> > I want to make something that is working well and useful,
>
> I think that one of your first steps will be to determine
> the size constraints of the MCU(s) you want to support.
I think it will depend on projects. I currently have support for ARM9, MIPS32 and x86; MIPS64 was barely seen to work on an emulator. MSP430 (16-bit) is likely to be supported; PPC 32/64-bit is possible, but I have to understand the requirements in the field where it is used.
> What I look for in an 8-bit RTOS (if at all) is going to
> be entirely different than a 32-bit platform.  The 8-bit
> resource limitations can be extreme, so you need to know
> up front what limitations you are prepared to design for.
Projects that are built around 8-bit MCUs have even more "custom" needs than those based on stronger CPUs. I do most of my work for 32/64-bit CPUs, so I know many questions in their application field that the current OS offering doesn't answer (or is not known to answer). I understand the 8-bit MCU field less (mostly evaluations, and stories where projects switched from an 8-bit MCU to a more capable one due to increasing requirements); it seems to me that projects based on them count every feature against cents of cost. I think that for an 8-bit MCU you may only be interested in memory (?) / task (?) management, a flash interface and a chip support library. All this is easily isolated from my OS, but I would have to do a real 8-bit project to verify what issues I may have there. In principle, there is nothing that prevents my OS from being ported to an 8-bit MCU without an MMU.
> Can you implement a file system with 1K of SRAM?  It's
> difficult, but I've done it for FAT16/32.  You have to
> play some shell games with sector buffers to make it
> possible. But this emphasizes the point that a file
> system implementation for 8-bit is going to be a lot
> different than a 32-bit platform. Resource limitations
> enforce some practical realities.
Probably I could do that, but I don't see how this implementation could be included in common OS code. The implementation would be grossly different between 1K, 2K and 4K of RAM, and it would probably not work on USB disks at all. It may be used as reference code to recall ideas, but I don't see how it could be reused in another project for anything other than the 1K requirement.
> If not 8-bit, then are you looking at small, medium or
> large footprints, or all of the above? The choices made
> here will affect your compromises and "usefulness" to
> a target audience.
I take Linux as a scalability example: it scales well from relatively resource-limited to multiprocessor high-end systems. I believe that the most natural application field for my OS is small- to medium-resourced 32-bit and 64-bit systems (higher-end systems usually have fewer performance/footprint problems with Linux, and as of now it would be hard for me to compete on features due to architecture differences; porting open-source code from Linux to my OS is not entirely smooth). Things may change, however.
> And, is this going to be open sourced?
Due to my own (in many cases painful) experience, a developer that uses an embedded OS must be provided with complete buildable source code. Such an OS must help the developer with its ready features, interfaces and frameworks, but not prevent him from understanding how it works in detail and changing it to suit his needs. I am not sure as of now whether it will be "free beer" itself or not, and what restrictions would be suitable if a customer wants to massively redistribute it (I don't want it to absorb too many third-party perceptions).
> > The OS will have to build feature list and..
>
> I cringe whenever I see "feature list" mentioned.
> People new to software engineering tend to throw
> things in with the naive belief that "more features"
> is somehow "better".
Well, I'm not very new to engineering (almost two decades). However, I do believe that "more features" is "better" (provided that what you don't need isn't there for you to stumble over). When I meet with a client and they say that they want to do this and that, I first of all evaluate what software offerings exist and what features we have ready. Usually, if enough features cannot be readily collected, the project never leaves the requirements stage.
> I prefer a design to start with the known
> bare essentials. Then analyze what is missing by
> putting it to work. What is missing becomes obvious
> when you actually try to apply something. And
> even then, resist the "because I can" temptation to
> add things. Everything that is "added" becomes a
> support burden down the road.
>
> An example of a kitchen sink item, is support for
> linked lists in the RTOS.  If required at all, this
> should be put in an application library. It doesn't
> belong in O/S level support.  But I've seen it in
> at least one RTOS, because someone thought this
> added to the "feature list". (To be fair however,
> the RTOS may have used this feature internally,
> and thus it cost little extra to export this
> service -- I didn't check on that).
Well, the features that I meant are something different. Examples:

* Priority-based task scheduler
* Standard C library
* POSIX I/O
* pthreads
* TCP/IP stack
* PCI host and enumeration support
* telnet server
* HTTP server
* CPU 'x' port
* Platform 'y' support (chip1, bus2, etc.)

etc. IMO, lists are not part of an OS feature list. E.g., I use sorted lists to implement installable timers and some other things, but if you don't use all that and don't use them yourself, then they are not in your build and don't affect your footprint. The same goes for what IMO are OS features.
> To summarize, "less" is "more".  Small software projects
> that are focused on essentials are easier to build/port
> and support.  For example, I like the avr-threads
> package over an RTOS (for 8-bitters) because it is so
> focused on what is needed- only threads, mutexes and
> events.  Not the extra baggage of features I have no
> need for.
In principle, there's nothing that prevents using my OS with a custom task manager. There are some things that need to be done, however: drivers, sockets, timers etc. use the native task manager API to put tasks to sleep and wake them, and all of that needs to be changed to work with a different task management system. Source code will be handy.
> I know that Mach kernels are an old idea these days, but
> perhaps a good implementation of Mach on a suitable
> range of hardware might be something better than a
> "YARTOS" (Yet Another RTOS). Just an idea.
I think that microkernels and their derivatives didn't prove themselves. They are possible designs, and they are better on some theoretical points, but they solve real needs worse than what is currently popular. Just my opinion, but industry seems to back it.
> You should also spend time to study what other people
> have done- this will quickly tell you what you didn't
> like about each, and what you did like about each. Learn
> from the other projects- it may save you from making
> similar mistakes.
Somebody here suggested looking at academia; of the academic offsprings, TRON got to the most developed stage, but it follows the "classic" (single address space, single binary, monolithic kernel) embedded OS design.

From what I know, most RTOSes, including the most well-known (VxWorks, pSOS, LynxOS), were developed in the same way that I'm going to follow. I have already put in some thought, but don't see anything better. I think that the other custom OSes that people write (individuals and companies) follow the same way, eventually stopping or settling at some point along the road.

It's hard to tell why the "well-known" guys got to the point of extremely low quality and selling non-working features that got the embedded OS field where it is today -- they won't tell me.

Thanks,
Daniel
Vladimir Vassilevsky expounded in 
news:oeidnTWnfd1fMgrRnZ2dnUVZ_sidnZ2d@giganews.com:

> Warren wrote:
>
>> Can you implement a file system with 1K of SRAM? Its
>> difficult, but I've done if for FAT16/32.
>
> One guy showed me his very minimal FAT implementation which took less
> than 512 bytes of RAM. There is no need to buffer a full sector. This,
> of course, implied tremendous overhead and inconvenience.
Indeed!
>> You have to
>> play some shell games with sector buffers to make it
>> possible. But this emphasizes the point that a file
>> system implementation for 8-bit is going to be a lot
>> different than a 32-bit platform. Resource limitations
>> enforce some practical realities.
>
> The real point is that it is plain stupid to mount things like FAT or
> USB host or TCP/IP on an 8-bitter with 1K of RAM. It is not worthy;
> those are crippled implementations with lots of limitations.
>
> VLV
Of course it is limited, but hardly stupid. You don't need a Linux kernel on a 32/64-bit platform to do data logging. You can accomplish this with a little 8-bitter, logging to a FAT FS on a stick. It simply takes a little planning.

Warren
Hi Paul,

Paul Keinanen wrote:
> On Mon, 20 Sep 2010 22:06:58 -0700, D Yuniskis
>>> There is something wrong in an RT system design, if you put several
>>> tasks on the same priority.
>> If there is no inherent relationship between those tasks, then
>> how do you prioritize them? If I have a box "running my home",
>> what are the *relative* priorities of the HVAC controller and
>> the irrigation controller and the burglar alarm?
>
> That should not be hard.
>
> Anyway, you may have to divide each tasks into several subtasks with
> different priorities. Anyway, each task should not need to run for
> more than a millisecond or in the "running my home" example for more
> than a minute.
Of course. I wasn't posing a real problem but, rather, illustrating that (sometimes) tasks don't have a clear relationship to each other in terms of priorities.
>>>>>> thread priorities,
>>>>> In a simple system, the priorities are defined by the order, in which
>>>>> you declare them.
>>>>>
>>>>> There are extremely rare cases (one in a million) in which you really
>>>>> need to change the priority at run time.
>>>> Ignoring, of course, protocols to handle priority inversion?
>>> The first time I heard the expression "priority inversion" was when
>>> NASA had problems with it on a rover on Mars. Before that I have made
>>> multitasking systems for decades, without similar problems. Of course,
>>> maintaining a clear data ownership and for small data entities atomic
>>> access helps a lot.
>> Yes. I always make sure that I remember the role of an OS is to
>> make my (developer) life easier by taking care of "things" for me
>> (that are either too boring to attend to *or* too *tedious*). As
>> systems get more complex -- especially systems that *grow* -- it
>> gets hard to keep track of all the interactions between (shared)
>> objects and entities. A good candidate for an OS service!
>
> Some of these issues are really high level architectural decisions. If
> done improperly, you end up with lot of locking requirements etc.
The trick in most (almost all??) multithreaded applications is data sharing (or hiding) and communications. This is the first thing that I look at when facing a new design or evaluating an existing design -- if either is poorly thought out, you end up with lots of extra "work" (i.e., code) being done to compensate. The real challenge is figuring out how to come up with architectures that support growth and revision without unduly penalizing them in their initial/current form. And doing this in a deterministic manner for real-time applications is doubly challenging!
> In the 1980's I was running a department making control systems based
> on PDP-11/RSX-11 systems. These machines had a program addressable
> space of 64 KiB and a maximum physical memory of 256 KiB or 4 MiB.
>
> Once I got the invitation to tender, I split the problem into tasks,
> decided about what data should be owned by each task and how they are
> going to communicate with each other, assign preliminary priorities,
> thinking about which person in my team should do each task and after
> that start to think, how long it takes to make each task before
> writing the tender.
>
> We never had problems fitting each program into 64 KiB and the
> projects were much better within budget than other departments working
> with the "unlimited" VAX addressing space.
Smaller, generally, *is* better. Make threads inexpensive so the user isn't discouraged from using them liberally. I think the same philosophy should apply to all OS mechanisms for similar reasons -- if something is "expensive" (architecturally), then developers will tend to avoid it... perhaps even when they *shouldn't*. A consequence of this (IMO) is that you should offer the minimum set of features required to provide the needed complement of services. I.e., don't do the same thing three different ways; pick one and use it exclusively (e.g., use *one* type of synchronization primitive) so that you can optimize *that* implementation without having to accommodate other variations.
On Sep 20, 10:53 am, Chris H <ch...@phaedsys.org> wrote:
> In message <4C963A7C.2040...@given.com>, FreeRTOS info
> <noem...@given.com> writes
>
> >On 19/09/2010 16:31, Stargazer wrote:
> >> Greetings,
>
> >> I am writing my own RTOS, which I intend to use in my projects and
> >> license for anyone who would like to use it. At first I will use it in
> >> my own projects, so that it achieves a certain level of real-life
> >> performance and stability, then release it.
>
> >Just what the world needs - another RTOS.  Line up.
> >Regards,
> >Richard.
>
> Pot, Kettle, Black?
>
> There is nothing worse than some damned engineer doing their own RTOS,
> unless it is doing two RTOS..... :-))))
>
> >+http://www.FreeRTOS.org
> >Designed for Microcontrollers.  More than 7000 downloads per month.
>
> >+http://www.SafeRTOS.com
> >Certified by TÜV as meeting the requirements for safety related systems.
>
> --
> \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> \/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
> \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
I wouldn't try to knock his work too much. FreeRTOS is very widely used and is looking a bit like a de facto standard. SafeRTOS is his paycheck, and I am sure it is a bit harder to use "safely", so it is much more likely that others will need his services to help with their application. I can't find any real fault with what he offers or how he offers it. I only wish I had something of this utility to offer the world (and make an income from).

Rick
In article <Xns9DFA61F529F79SnarkCharmedFerSure@188.40.43.230>, 
snark@cogeco.ca says...
> Vladimir Vassilevsky expounded in
> news:oeidnTWnfd1fMgrRnZ2dnUVZ_sidnZ2d@giganews.com:
>
> > Warren wrote:
> >
> >> Can you implement a file system with 1K of SRAM? Its
> >> difficult, but I've done if for FAT16/32.
> >
> > One guy showed me his very minimal FAT implementation which took less
> > then 512 bytes of RAM. There is no need to buffer a full sector. This,
> > of course, implied tremendous overhead and inconvenience.
>
> Indeed!
>
> >> You have to
> >> play some shell games with sector buffers to make it
> >> possible. But this emphasizes the point that a file
> >> system implementation for 8-bit is going to be a lot
> >> different than a 32-bit platform. Resource limitations
> >> enforce some practical realities.
> >
> > The real point is that it is plain stupid to mount things like FAT or
> > USB host or TCP/IP on 8-bitter with 1K of RAM. It is not wothy, those
> > are be crippled implementations with lots of limitations.
> >
> > VLV
>
> Of course it is limited, but hardly stupid. You don't need
> a linux kernel on a 32/64 bit platform doing data logging.
>
> You can of course accomplish this with a little 8-bitter,
> logging to a FATFS on a stick. It simply takes a little
> planning.
If you mean logging to a USB memory stick, that will require your 8-bitter to implement a USB Host interface.

My approach has been to use SD or SD micro cards with an in-house sequential file system. It doesn't need an RTOS, just an interrupt handler to collect and queue the data and a main loop to pull from the queue and write to SD. That works up to about 400 16-bit samples per second at an average current around 10 mA using an MSP430. I don't think you can run Linux effectively on 10 mA at 3.3 V.

Mark Borgerson

Mark Borgerson wrote:

> In article <Xns9DFA61F529F79SnarkCharmedFerSure@188.40.43.230>,
> snark@cogeco.ca says...
>
>>Vladimir Vassilevsky expounded in
>>news:oeidnTWnfd1fMgrRnZ2dnUVZ_sidnZ2d@giganews.com:
>>
>>>Warren wrote:
>>>
>>>>Can you implement a file system with 1K of SRAM? Its
>>>>difficult, but I've done if for FAT16/32.
>>>
>>>One guy showed me his very minimal FAT implementation which took less
>>>then 512 bytes of RAM. There is no need to buffer a full sector. This,
>>>of course, implied tremendous overhead and inconvenience.
>>
>>Indeed!
>>
>>>>You have to
>>>>play some shell games with sector buffers to make it
>>>>possible. But this emphasizes the point that a file
>>>>system implementation for 8-bit is going to be a lot
>>>>different than a 32-bit platform. Resource limitations
>>>>enforce some practical realities.
>>>
>>>The real point is that it is plain stupid to mount things like FAT or
>>>USB host or TCP/IP on 8-bitter with 1K of RAM. It is not wothy, those
>>>are be crippled implementations with lots of limitations.
>>
>>Of course it is limited, but hardly stupid. You don't need
>>a linux kernel on a 32/64 bit platform doing data logging.
>>
>>You can of course accomplish this with a little 8-bitter,
>>logging to a FATFS on a stick. It simply takes a little
>>planning.
>
> If you mean logging to a USB memory stick, that will require
> your 8-bitter to implement a USB Host interface.
At least a part of a USB host, to support the basic read/write operations on the mass storage device.
> My approach has been to use SD or SD micro cards with an
> in-house sequential file system. It doesn't need an
> RTOS, just an interrupt handler to collect and queue
> the data and a main loop to pull from the queue and
> write to SD. That works up to about 400 16-bit samples
> per second on an average current around 10mA using
> an MSP430. I don't think you can run linux effectively
> on 10mA at 3.3V.
We used a simple sequential file system also, despite the obvious limitations. Later we switched to FAT with a POSIX API, and it was so much better. There is no need for Linux; a standalone full-featured multithreaded FAT takes ~50 KB + buffers. The power consumption is determined by the number of transactions per second rather than the memory size.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Mark Borgerson expounded in
news:MPG.27028556ec82117d989c4d@news.eternal-september.org: 

> In article <Xns9DFA61F529F79SnarkCharmedFerSure@188.40.43.230>,
> snark@cogeco.ca says...
>> Vladimir Vassilevsky expounded in
>> news:oeidnTWnfd1fMgrRnZ2dnUVZ_sidnZ2d@giganews.com:
>>
>> > Warren wrote:
>> >
>> >> Can you implement a file system with 1K of SRAM? Its
>> >> difficult, but I've done if for FAT16/32.
>> >
>> > One guy showed me his very minimal FAT implementation which took
>> > less then 512 bytes of RAM. There is no need to buffer a full
>> > sector. This, of course, implied tremendous overhead and
>> > inconvenience.
>>
>> Indeed!
>>
>> >> You have to
>> >> play some shell games with sector buffers to make it
>> >> possible. But this emphasizes the point that a file
>> >> system implementation for 8-bit is going to be a lot
>> >> different than a 32-bit platform. Resource limitations
>> >> enforce some practical realities.
>> >
>> > The real point is that it is plain stupid to mount things like FAT
>> > or USB host or TCP/IP on 8-bitter with 1K of RAM. It is not wothy,
>> > those are be crippled implementations with lots of limitations.
>> >
>> > VLV
>>
>> Of course it is limited, but hardly stupid. You don't need
>> a linux kernel on a 32/64 bit platform doing data logging.
>>
>> You can of course accomplish this with a little 8-bitter,
>> logging to a FATFS on a stick. It simply takes a little
>> planning.
>
> If you mean logging to a USB memory stick, that will require
> your 8-bitter to implement a USB Host interface.
Nope. Why bring USB into it? Just use an SD memory card. The interface for it is trivial.
> My approach has been to use SD or SD micro cards with an
> in-house sequential file system. It doesn't need an
> RTOS, just an interrupt handler to collect and queue
> the data and a main loop to pull from the queue and
> write to SD. ...
>
> Mark Borgerson
Fine, but that is not friendly to the user receiving the SD data. Now he/she needs a special app to pull it off the card. OTOH, you can use minimal FAT FS software to create and write log file(s). Anyone can just write sectors to the SD without an FS, but it is terribly inelegant.

Warren
