Engineering degree for embedded systems
Started by ●July 27, 2017

Reply by ●August 6, 2017

On 06/08/17 17:51, rickman wrote:
> John Devereux wrote on 8/6/2017 9:40 AM:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>
>>> On 03/08/17 16:03, Phil Hobbs wrote:
>>>> On 08/01/2017 09:23 AM, Tom Gardner wrote:
>>>>> On 01/08/17 13:55, Phil Hobbs wrote:
>>>>>> On 07/30/2017 02:05 PM, Tom Gardner wrote:
>>>>>>> On 30/07/17 17:05, Phil Hobbs wrote:
>>>>>>>> Another thing is to concentrate the course work on stuff that's
>>>>>>>> hard to pick up on your own, i.e. math and the more mathematical
>>>>>>>> parts of engineering (especially signals & systems and
>>>>>>>> electrodynamics).
>>>>>>>
>>>>>>> Agreed.
>>>>>>>
>>>>>>>> Programming you can learn out of books without much difficulty,
>>>>>>>
>>>>>>> The evidence is that /isn't/ the case :( Read comp.risks
>>>>>>> (which has an impressively high signal-to-noise ratio), or
>>>>>>> watch the news (which doesn't).
>>>>>>
>>>>>> Dunno. Nobody taught me how to program, and I've been doing it
>>>>>> since I was a teenager. I picked up good habits from reading books
>>>>>> and other people's code.
>>>>>
>>>>> Yes, but it was easier back then: the tools, problems
>>>>> and solutions were, by and large, much simpler and more
>>>>> self-contained.
>>>>
>>>> I'm not so sure. Debuggers have improved out of all recognition,
>>>> with two exceptions (gdb and Arduino, I'm looking at you). Plus
>>>> there are a whole lot of libraries available (for Python especially)
>>>> so a determined beginner can get something cool working (after a
>>>> fashion) fairly fast.
>>>
>>> Yes, that's all true. The speed of getting something going
>>> is important for a beginner. But if the foundation is "sandy"
>>> then it can be necessary and difficult to get beginners
>>> (and managers) to appreciate the need to progress to tools
>>> with sounder foundations.
>>>
>>> The old-time "sandy" tool was Basic. While Python is much
>>> better than Basic, it is still "sandy" when it comes to
>>> embedded real-time applications.
>>>
>>>> Seems as though youngsters mostly start with Python and then start
>>>> in on either webdev or small SBCs using Arduino / AVR Studio /
>>>> Raspbian or (for the more ambitious) something like BeagleBone or
>>>> (a fave) LPCxpresso. Most of my embedded work is pretty light-duty,
>>>> so an M3 or M4 is good medicine. I'm much better at electro-optics
>>>> and analog/RF circuitry than at MCUs or HDL, so I do only enough
>>>> embedded things to get the whole instrument working. Fancy embedded
>>>> stuff I either leave to the experts, do in hardware, or hive off to
>>>> an outboard computer via USB serial, depending on the project.
>>>
>>> I wish more people took that attitude!
>>>
>>>> It's certainly true that things get complicated fast, but they did
>>>> in the old days too. Of course the reasons are different: nowadays
>>>> it's the sheer complexity of the silicon and the tools, whereas
>>>> back then it was burn-and-crash development, flaky in-system
>>>> emulators, and debuggers which (if they even existed) were almost
>>>> as bad as Arduino.
>>>
>>> Agreed. The key difference is that with simple-but-unreliable
>>> tools it is possible to conceive that mortals can /understand/
>>> the tools' limitations, and know when/where the tool is failing.
>>>
>>> That simply doesn't happen with modern tools; even the world
>>> experts don't understand their complexity! Seriously.
>>>
>>> Consider C++. The *design committee* refused to believe C++
>>> templates formed a Turing-complete language inside C++.
>>> They were forced to recant when shown a correct, valid C++
>>> program that never completed compilation - because, during
>>> compilation, the compiler was (slowly) emitting the sequence
>>> of prime numbers! What chance have mere mortal developers
>>> got in the face of that complexity?
>>
>> I don't think that particular criticism is really fair - it seems the
>> (rather simple) C preprocessor is also "Turing complete", or at least
>> close to it, e.g.
>>
>> https://stackoverflow.com/questions/3136686/is-the-c99-preprocessor-turing-complete
>>
>> Or a C prime number generator that mostly uses the preprocessor:
>>
>> https://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint
>>
>> At any rate "compile-time processing" is a big thing now in modern
>> C++, see e.g.
>>
>> Compile Time Maze Generator (and Solver)
>> https://www.youtube.com/watch?v=3SXML1-Ty5U
>
> Funny, compile-time program execution is something Forth has done for
> decades. Why is this important in other languages now?

It isn't important.

What is important is that the (world-expert) design committee
didn't understand (and then refused to believe) the
implications of their proposal.

That indicates the tool is so complex and baroque as to
be incomprehensible - and that is a very bad starting point.
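The flavour of that compile-time computation is easy to show. Below is
a minimal sketch - not the committee's prime-emitting program, and the
names are illustrative - of templates acting as a recursive language
that runs entirely during compilation:

    // Compile-time primality test via recursive template
    // instantiation (sketch, for N >= 2).
    template <int N, int D>
    struct IsPrimeHelper {
        static const bool value =
            (N % D != 0) && IsPrimeHelper<N, D - 1>::value;
    };

    template <int N>
    struct IsPrimeHelper<N, 1> {  // all candidate divisors exhausted
        static const bool value = true;
    };

    template <int N>
    struct IsPrime {
        static const bool value = IsPrimeHelper<N, N - 1>::value;
    };

    // The "program" runs inside the compiler; nothing executes at
    // run time.
    static_assert(IsPrime<13>::value, "13 is prime");
    static_assert(!IsPrime<15>::value, "15 is not prime");

The recursion terminates only because the D == 1 specialisation
exists; remove it and the compiler grinds away until it hits its
instantiation-depth limit - exactly the accidental computational power
at issue.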
Reply by ●August 6, 2017
On 06/08/17 15:15, upsidedown@downunder.com wrote:
> On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> Threads Cannot be Implemented as a Library
>> Hans-J. Boehm
>> HP Laboratories Palo Alto
>> November 12, 2004
>>
>> In many environments, multi-threaded code is written in a language
>> that was originally designed without thread support (e.g. C), to
>> which a library of threading primitives was subsequently added.
>> There appears to be a general understanding that this is not the
>> right approach. We provide specific arguments that a pure library
>> approach, in which the compiler is designed independently of
>> threading issues, cannot guarantee correctness of the resulting code.
>> We first review why the approach *almost* works, and then examine
>> some of the *surprising behavior* it may entail. We further
>> illustrate that there are very simple cases in which a pure
>> library-based approach seems *incapable of expressing* an efficient
>> parallel algorithm.
>> Our discussion takes place in the context of C with Pthreads, since
>> it is commonly used, reasonably well specified, and does not attempt
>> to ensure type-safety, which would entail even stronger constraints.
>> The issues we raise are not specific to that context.
>>
>> http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
>
> Now that there are a lot of multicore processors, this is a really
> serious issue.

There have been multicore processors for *decades*, and
problems have been surfacing - and being swept under the
carpet for decades.

The only change is that now you can get 32-core embedded
processors for $15.

13 years after Boehm's paper, there are signs that C/C++
might be getting a memory model sometime. The success of
that endeavour is yet to be proven.

Memory models are /difficult/. Even Java, starting from a
clean sheet, had to revise its memory model in the light
of experience.

> But again, should multitasking/multithreading be implemented in a
> multitasking OS or in a programming language is a very important
> question.

That question is moot, since the multitasking OS is implemented
in a programming language, usually C/C++.
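To make Boehm's argument concrete: a compiler that knows nothing about
threads is entitled to treat a shared flag as thread-private. A minimal
sketch of the failure mode and the C++11 repair (the names here are
illustrative, not from the paper):

    #include <atomic>

    // Pre-C11/C++11, with a plain 'bool done;' shared between
    // threads, the compiler may hoist the load out of the loop and
    // spin forever - nothing in the language says another thread can
    // change it:
    //
    //     while (!done) { /* wait */ }
    //
    // C++11 makes the sharing explicit, and the release/acquire pair
    // also orders the plain write to 'result' relative to the flag:

    std::atomic<bool> done{false};
    int result;

    void producer() {
        result = 42;                                  // plain write...
        done.store(true, std::memory_order_release);  // ...published
    }

    void consumer() {
        while (!done.load(std::memory_order_acquire)) {
            // spin until the producer publishes
        }
        // 'result' is guaranteed to read as 42 here
    }

No library call alone can express those ordering constraints to a
compiler that was designed without them - which is the paper's point.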
Reply by ●August 6, 2017
Tom Gardner wrote on 8/6/2017 3:13 PM:
> On 06/08/17 17:51, rickman wrote:
>> John Devereux wrote on 8/6/2017 9:40 AM:
>>> [snip - quoted in full in Tom's post above]
>>>
>>> At any rate "compile-time processing" is a big thing now in modern
>>> C++, see e.g.
>>>
>>> Compile Time Maze Generator (and Solver)
>>> https://www.youtube.com/watch?v=3SXML1-Ty5U
>>
>> Funny, compile-time program execution is something Forth has done
>> for decades. Why is this important in other languages now?
>
> It isn't important.
>
> What is important is that the (world-expert) design committee
> didn't understand (and then refused to believe) the
> implications of their proposal.
>
> That indicates the tool is so complex and baroque as to
> be incomprehensible - and that is a very bad starting point.

That's the point. Forth is one of the simplest development tools you
will ever find. It also has some of the least constraints. The only
people who think it is a bad idea are those who think RPN is a problem
and object to other trivial issues.

--
Rick C
Reply by ●August 6, 2017
tim... wrote on 8/6/2017 1:06 PM:
> I have just received a questionnaire from the manufacturers of my PVR
> asking about what upgraded features I would like it to include.
>
> Whilst they didn't ask it openly, reading between the lines they were
> asking:
>
> "would you like to control your home heating (and several other
> things) via your Smart TV (box)"
>
> To which I answered, of course I bloody well don't.
>
> Even if I did see a benefit in having an internet-connected heating
> controller, why would I want to control it from my sofa using
> anything other than the remote control that comes with it, in the box?

None of this makes sense to me because I have no idea what a PVR is.

--
Rick C
Reply by ●August 7, 2017
On Sun, 6 Aug 2017 20:13:08 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 06/08/17 17:51, rickman wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>> At any rate "compile-time processing" is a big thing now in modern
>>> C++, see e.g.
>>>
>>> Compile Time Maze Generator (and Solver)
>>> https://www.youtube.com/watch?v=3SXML1-Ty5U
>>
>> Funny, compile-time program execution is something Forth has done
>> for decades. Why is this important in other languages now?
>
> It isn't important.
>
> What is important is that the (world-expert) design committee
> didn't understand (and then refused to believe) the
> implications of their proposal.
>
> That indicates the tool is so complex and baroque as to
> be incomprehensible - and that is a very bad starting point.

Stupid compiler games aside, macro programming with the full power of
the programming language has been a tour de force in Lisp almost since
the beginning - the macro facility that (essentially with only small
modifications) is still in use today was introduced ~1965.

Any coding pattern that is used repeatedly potentially is fodder for a
code-generating macro. In the simple case, it can save you shitloads
of typing. In the extreme case, macros can create a whole DSL that
lets you mix in code to solve problems that are best thought about
using different syntax or semantics ... without needing yet another
compiler or figuring out how to link things together. These issues ARE
relevant to programmers not working exclusively on small devices.

Lisp's macro language is Lisp. You need to understand a bit about the
[parsed, pre-compilation] AST format ... but Lisp's AST format is
standardized, and once you know it you can write Lisp code to
manipulate it.

Similarly, Scheme's macro language is Scheme. Scheme doesn't expose
compiler internals like Lisp - instead, Scheme macros work in terms of
pattern recognition and code to be generated in response.

The problem with C++ is that its template language is not C++, but
rather a bastard hybrid of C++ and a denotational markup language. C++
is Turing complete. The markup language is not TC itself, but it is
recursive, and therefore Turing powerful ["powerful" is not quite the
same as "complete"]. The combination "template language" is, again,
Turing powerful [limited by the markup] ... and damn near
incomprehensible.

YMMV,
George
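Worth adding: C++11/C++14 constexpr moves much of this work out of the
template "markup" dialect and back into C++ proper - the compile-time
code can be an ordinary function. A minimal sketch (the function name
is mine):

    // C++14: the same compile-time primality test as the template
    // version, written as ordinary C++ (sketch, for n >= 2).
    constexpr bool is_prime(int n) {
        for (int d = 2; d < n; ++d)
            if (n % d == 0) return false;
        return true;
    }

    static_assert(is_prime(13), "evaluated entirely at compile time");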
Reply by ●August 7, 2017
On Sun, 6 Aug 2017 20:21:14 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 06/08/17 15:15, upsidedown@downunder.com wrote:
>> On Sun, 6 Aug 2017 10:35:03 +0100, Tom Gardner
>> <spamjunk@blueyonder.co.uk> wrote:
>
>>> [snip - Boehm, "Threads Cannot be Implemented as a Library",
>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf]
>>
>> Now that there are a lot of multicore processors, this is a really
>> serious issue.
>
> There have been multicore processors for *decades*, and
> problems have been surfacing - and being swept under the
> carpet for decades.

All the pre-1980s multiprocessors that I have seen have been
_asymmetric_ multiprocessors, i.e. one CPU running the OS, while the
other CPUs are running application programs. Thus, the OS handled
locking of data.

Of course, there have been cache coherence issues even with a single
processor, such as DMA and interrupts. These issues have been under
control for decades.

> The only change is that now you can get 32-core embedded
> processors for $15.

Those coherence issues should be addressed (sic) by the OS writer, not
the compiler. Why mess with these issues in each and every language,
when it should be done only once at the OS level?

> 13 years after Boehm's paper, there are signs that C/C++
> might be getting a memory model sometime. The success of
> that endeavour is yet to be proven.
>
> Memory models are /difficult/. Even Java, starting from a
> clean sheet, had to revise its memory model in the light
> of experience.
>
>> But again, should multitasking/multithreading be implemented in a
>> multitasking OS or in a programming language is a very important
>> question.
>
> That question is moot, since the multitasking OS is implemented
> in a programming language, usually C/C++.

Usually very low-level operations, such as invalidating cache and
interrupt preambles, are done in assembler anyway, especially with
very specialized kernel-mode instructions.
Reply by ●August 7, 2017
On Sun, 6 Aug 2017 09:53:55 -0500, Les Cargill
<lcargill99@comcast.com> wrote:

>> I have often wondered what this IoT hype is all about. It seems to be
>> very similar to the PLC (Programmable Logic Controller) used for
>> decades.
>
> Similar. But PLCs are pointed more at ladder logic for use in
> industrial settings. You generally cannot, for example, write a socket
> server that just does stuff on a PLC; you have to stay inside a dev
> framework that cushions it for you.

In IEC-1131 (now IEC 61131-3) you can enter the program in the format
you are most familiar with, such as ladder logic or structured text
(ST), which is similar to Modula (and somewhat resembles Pascal) with
normal control structures. IEC-1131 has been available for two decades.
Reply by ●August 7, 2017
"rickman" <gnuarm@gmail.com> wrote in message news:om8fmj$vc3$3@dont-email.me...> tim... wrote on 8/6/2017 1:06 PM: >> >> I have just received a questionnaire from the manufactures of my PVR >> asking >> about what upgraded features I would like it to include. >> >> Whilst they didn't ask it openly, reading between the lines there were >> asking: >> >> "would you like to control your home heating (and several other things) >> via >> your Smart TV (box)" >> >> To which I answered, of course I bloody well don't >> >> Even if I did seen a benefit in having an internet connected heating >> controller, why would I want to control it from my sofa using anything >> other >> than the remote control that comes with it, in the box? > > None of this makes sense to me because I have no idea what a PVR is.A Personal Video Recorded (a disk based video recorder)> > -- > > Rick C
Reply by ●August 7, 2017
On 06/08/17 21:21, Tom Gardner wrote:
> On 06/08/17 15:15, upsidedown@downunder.com wrote:
>>> [snip - Boehm, "Threads Cannot be Implemented as a Library",
>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf]
>>
>> Now that there are a lot of multicore processors, this is a really
>> serious issue.
>
> There have been multicore processors for *decades*, and
> problems have been surfacing - and being swept under the
> carpet for decades.
>
> The only change is that now you can get 32-core embedded
> processors for $15.
>
> 13 years after Boehm's paper, there are signs that C/C++
> might be getting a memory model sometime. The success of
> that endeavour is yet to be proven.

C++11 and C11 both have memory models, and explicit coverage of
threading, synchronisation and atomicity.

> Memory models are /difficult/. Even Java, starting from a
> clean sheet, had to revise its memory model in the light
> of experience.
>
>> But again, should multitasking/multithreading be implemented in a
>> multitasking OS or in a programming language is a very important
>> question.
>
> That question is moot, since the multitasking OS is implemented
> in a programming language, usually C/C++.
Reply by ●August 7, 2017
On 07/08/17 09:35, upsidedown@downunder.com wrote:
> On Sun, 6 Aug 2017 20:21:14 +0100, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>
>> On 06/08/17 15:15, upsidedown@downunder.com wrote:
>>>> [snip - Boehm, "Threads Cannot be Implemented as a Library",
>>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf]
>>>
>>> Now that there are a lot of multicore processors, this is a really
>>> serious issue.
>>
>> There have been multicore processors for *decades*, and
>> problems have been surfacing - and being swept under the
>> carpet for decades.
>
> All the pre-1980s multiprocessors that I have seen have been
> _asymmetric_ multiprocessors, i.e. one CPU running the OS, while the
> other CPUs are running application programs. Thus, the OS handled
> locking of data.
>
> Of course, there have been cache coherence issues even with a single
> processor, such as DMA and interrupts. These issues have been under
> control for decades.
>
>> The only change is that now you can get 32-core embedded
>> processors for $15.
>
> Those coherence issues should be addressed (sic) by the OS writer,
> not the compiler. Why mess with these issues in each and every
> language, when it should be done only once at the OS level?

That is one way to look at it.

The point of the article above is that coherence cannot be implemented
in C or C++ alone (at the time when it was written - before C11 and
C++11). You need help from the compiler. You have several options:

1. You can use C11/C++11 features such as fences and synchronisation
atomics.

2. You can use implementation-specific features, such as a memory
barrier like asm volatile("dmb" ::: "memory"), that will depend on the
compiler and possibly the target (sketched at the end of this post).

3. You can use an OS or threading library that includes these
implementation-specific features for you. This is often the easiest,
but you might do more locking than you need, or have other
inefficiencies.

4. You cheat, and assume that calling external functions defined in
different units, or using volatiles, etc., can give you what you want.
This usually works until you have more aggressive optimisation
enabled. Note that sometimes OSes use these techniques.

5. You write code that looks right, and works fine in testing, but is
subtly wrong.

6. You disable global interrupts around the awkward bits.
You are correct that this can be done with a compiler that assumes a
single-threaded, single-CPU view of the world (as C and C++ did before
2011). You just need the appropriate compiler- and target-specific
barriers and synchronisation instructions in the right places, and
often the OS calls are the best place to put them. But compiler
support can make it more efficient and more portable.

>> 13 years after Boehm's paper, there are signs that C/C++
>> might be getting a memory model sometime. The success of
>> that endeavour is yet to be proven.
>>
>> Memory models are /difficult/. Even Java, starting from a
>> clean sheet, had to revise its memory model in the light
>> of experience.
>>
>>> But again, should multitasking/multithreading be implemented in a
>>> multitasking OS or in a programming language is a very important
>>> question.
>>
>> That question is moot, since the multitasking OS is implemented
>> in a programming language, usually C/C++.
>
> Usually very low-level operations, such as invalidating cache and
> interrupt preambles, are done in assembler anyway, especially with
> very specialized kernel-mode instructions.

Interrupt preambles and postambles are usually generated by the
compiler, using implementation-specific features like #pragma or
__attribute__ to mark the interrupt function. Cache control and
similar specialised opcodes may often be done using inline assembly
rather than full assembler code, or using compiler-specific intrinsic
functions.
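For concreteness, the two implementation-specific mechanisms mentioned
above look roughly like this under gcc (a sketch: the attribute
spelling and the barrier instruction vary by compiler and target, and
the handler name is invented):

    /* Option 2 from the list above: a combined compiler + hardware
       memory barrier, gcc syntax, ARMv7 "dmb" instruction. */
    #define memory_barrier() __asm__ volatile ("dmb" ::: "memory")

    /* Compiler-generated interrupt preamble/postamble: several gcc
       targets accept an attribute like this; other toolchains use
       #pragma or vendor keywords instead. */
    void __attribute__((interrupt)) uart_isr(void)
    {
        /* handler body: the compiler saves/restores the registers it
           clobbers and emits the interrupt-return sequence */
    }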