My knee-jerk reaction is that 8-bit processors are typically more constrained in terms of memory and clock speed -- also, my understanding is that most of them don't have native floating-point support (in hardware), so I would think the experience they are looking for is knowing how to make the most of what little you have, using cunning coding tricks and suchlike.
Yes. In addition to the above question: if we were assigned to develop Verilog code for an FPGA, or assembly code for a DSP processor, how long would it take to get into it, and which processor should we prefer for optimum results?
I'll add that it implies "right-size" flexibility: competence in using the resources at that scale. A wide chip is overkill for a small task, while a narrow chip might be able to do the wide task, but it'll be inefficient at best.
I'm not sure that's wholly accurate. You can get a 32-bit Cortex-M0+ pretty cheap. Given a similar price point for an 8- or 16-bit MCU vs. an ARM M0+ I'd go ARM every time. 8- and 16-bit MCUs can have odd toolchains and take more instructions for 32-bit sized work. Look at the STM32G030 for an example of a capable, cheap MCU.
Hi there. If you program in C using factory-provided libraries, the same compiler and the same IDE/toolchain, then there's not a great deal of difference, bearing in mind the native word size of the CPU. In reality, though, the difference lies in the intended use: as the processor gets "bigger" it also tends to be much more complex, and is more often than not programmed on top of a hardware abstraction layer and some kind of task scheduler or OS.

I work almost exclusively in communications and deal with moving 7/8/9-bit bytes from A to B. For this, 8-bit processors shine. There are issues, though, with speed and memory: as the chip gets smaller it is less efficient at bleeding off heat and is generally slower. To counter this we program closer to the bone, or as you may have heard, "bare metal": no OS, no HAL, and typically direct register addressing, as well as spur functions written in assembly. This grass-roots programming is not something you really want (or need) to get wrapped up in with 32-bit processors.

These days 16-bit processors are more legacy than common, except for ARM Thumb instructions. It really boils down to the width of the data bus, but yes, programming 8- and 32-bit processors generally requires different sets of skills.
Apart from what everyone else said, they might be looking for experience in different fields. If you've worked with 8 bitters, then you probably keep data sizes in mind, know all sorts of tricks to speed things up, minimise operations to achieve a certain result. If you've worked with 16/32 systems like the m68k or Cortex-M0 and similar, then you probably are aware of concepts like supervisor/user mode, multiple stacks, multithreading, hardware abstraction and the like. If you worked with bigger 32-bit chips, then you are probably familiar with an RTOS and/or Linux running on the chip, doing complex communications, possibly GUI user interfaces, heavy-duty processing, high-end encryption and everything that you could expect, for example, from a vintage (say, 10-15 years old) smartphone.
Booting an 8 bitter is pretty much giving it clock and power. Booting a Cortex-M0 microcontroller (i.e. set up clocks, CPU core, interrupt controller) takes maybe a page of C code. Booting just the CPU on a Cortex-A9 based SoC is often many pages of C (with embedded assembly), because you need to deal with the core, modes, stacks, coprocessors, clocking, caches, buses, DDR controller and whatnot before your CPU is ready to do anything useful.
On the other hand, writing code for a very resource limited 8-bitter might actually be much harder than achieving the same thing on the tiniest Cortex-M0 and of course there are tasks where the M0/M3/M4 offering is simply not enough, you need to go higher.
Also consider the HW design aspects: 8-bitters come in small packages, some even through-hole, and usually have a few simple peripherals. M0/M3-category chips range from relatively small SMD packages up to larger and even low-density BGA; they have many peripherals that can be quite complex (e.g. USB, Ethernet, SD/MMC), but they are still fairly easy to design 4-layer PCBs for. The bigger beasts always come in dense BGAs with many supply rails, often needing impedance-matched and/or delay-equalised traces and so on, necessitating 6+ layer boards with thin tracks and tiny vias.
They want a large gamut of skills, preferably all or at least as much as possible. Plus you should have at least 30 years experience, but should not be older than 25 and willing to work 10-hour days for the wage of a part-time burger flipper... :-)
Hi everyone, thanks for the valuable information. I am getting a grip on this guidance and will try to put it into practice in the coming days. I am working on bare-metal coding now using MSP430 (16-bit), ATmega324PB (8-bit) and Tiva-C (32-bit) evaluation modules. I will work on it.
How about UML class diagrams and sequence diagrams? They're needed when building projects.