On Sunday, November 6, 2022 at 7:19:00 PM UTC-5, Richard Damon wrote:
> On 11/6/22 6:37 PM, Paul Rubin wrote:
> > Richard Damon <Ric...@Damon-Family.org> writes:
> >> And such handshaking is needed if you have need to handle noise in
> >> packets.
> >
> > Once you acknowledge that noise and errors are even possible, some kind
> > of checksums or FEC seem appropriate in addition to a retry protocol.
>
> Yes, the messages should have some form of checksum in them to identify
> bad packets. That should be part of the message definition.

Why? Does the processor checksum every value calculated and stored in memory? Not on my computer. This is not warranted because the data failure rate is very low.

Same with an RS-422 bus in an electrically quiet environment. I could probably get away with TTL level signals, but I'd like to have the ESD protection these RS-422 chips give. That additional noise immunity means there is an extremely small chance of bit errors. If we have problems, the error handling can be added.

--
Rick C.
+--- Get 1,000 miles of free Supercharging
+--- Tesla referral code - https://ts.la/richard11209
Shared Communications Bus - RS-422 or RS-485
Started by ●November 2, 2022
Reply by ●November 7, 2022
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Sunday, November 6, 2022 at 3:54:04 PM UTC-5, David Brown wrote:
>> On 06/11/2022 14:56, Rick C wrote:
>> > On Sunday, November 6, 2022 at 5:55:22 AM UTC-5, David Brown wrote:
>> >> On 05/11/2022 21:42, Rick C wrote:
>> >>> On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
>> >>>> On 05/11/2022 18:23, Rick C wrote:
>> >>>>> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
>> >>
>> >>>> The USB device is /not/ a processor - it is a converter between USB and
>> >>>> UART. And it is the USB device that controls the transmit enable signal
>> >>>> to the RS-485/RS-422 driver. There is no software on any processor
>> >>>> handling the transmit enable signal - the driver is enabled precisely
>> >>>> when the USB to UART device is sending data on the UART.
>> >>>
>> >>> Actually, the FTDI device is a processor. I expect it actually has no
>> >>> UART; rather, the entire thing is done in software. I recall there being
>> >>> code to download for various purposes, such as JTAG, but I forget the
>> >>> details. I'm pretty sure the TxEn is controlled by FTDI software.
>> >>>
>> >> No, I think you are mixing things up. FTDI make a fair number of
>> >> devices, including some that /are/ processors or contain processors.
>> >> (That would be their display controller devices, their USB host
>> >> controllers, amongst others.)
>> >>
>> >> The code for using chips like the FT232H as a JTAG interface runs on the
>> >> host PC, not the FTDI chip - it is a DLL or so file (or OpenOCD, or other
>> >> software). The chip has /hardware/ support for a few different serial
>> >> interfaces - SPI, I²C, JTAG and UART.
>> >
>> > They need code for the PC to run, but there is no reason to think they
>> > don't use a processor in the USB dongle.
>> >
>> There is no reason to think that they /do/ have a processor there. I
>> should imagine you would have no problem making the programmable logic
>> needed for controlling a UART/SPI/I²C/JTAG/GPIO port, and USB slave
>> devices are rarely made in software (even on the XMOS they prefer
>> hardware blocks for USB). Why would anyone use a /processor/ for some
>> simple digital hardware? I am not privy to the details of the FTDI
>> design beyond their published documents, but it seems pretty clear to me
>> that there is no processor in sight.
>
> I don't agree. These interfaces are not so simple when you consider the
> level of flexibility in implementing many different interfaces in one
> part. XMOS is nothing like this. A small processor running at high speed
> would easily implement any of these interfaces. The small processor can
> actually be a very small amount of chip area. Typical MCUs are dominated
> by the memory blocks. With a small memory an MCU could easily be smaller
> than dedicated logic. Even many of the I/O blocks, like UARTs, can be
> larger than an 8-bit CPU. A CPU takes advantage of the massive
> multiplexer in the memory, which is implemented in ways that use very
> little area. FPGAs use the multiplexers in tiny LUTs, while an MCU uses
> the multiplexer in a single, much larger LUT, the program store.

Why are you discussing this? Out of academic curiosity? Then please continue. But what does it matter for your system implementation? There is just a UART/SPI/I²C/JTAG/GPIO peripheral and your software won't care how this peripheral is implemented, as long as it behaves as expected.

--
Stef

"Microwave oven? Whaddya mean, it's a microwave oven? I've been watching
Channel 4 on the thing for two weeks."
Reply by ●November 7, 2022
On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > [snip - FTDI processor discussion, quoted in full in the previous post]
>
> Why are you discussing this? Out of academic curiosity? Then please
> continue. But what does it matter for your system implementation? There
> is just a UART/SPI/I²C/JTAG/GPIO peripheral and your software won't care
> how this peripheral is implemented, as long as it behaves as expected.

I care. Don't you?

I remember when I came to the realization of why an MCU was so cost effective compared to programmable or even dedicated logic. It's because the MCU program is an FSM, using the instructions stored in the memory. These instructions are essentially logic, connected through the CPU logic, creating a very low-cost solution to a wide variety of problems, because of the very low cost of memory compared to dedicated or programmable logic.

--
Rick C.
+--+ Get 1,000 miles of free Supercharging
+--+ Tesla referral code - https://ts.la/richard11209
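[To make the "program as FSM" point concrete, here is a minimal sketch of a UART receiver written as an explicit software state machine in C - the kind of thing a small, fast processor inside such a chip could run instead of dedicated logic. This is purely illustrative: read_rx_pin() is a hypothetical GPIO read, and the function is assumed to be called once per bit period after the start edge has been located; a real implementation would oversample to find that edge.]

#include <stdint.h>
#include <stdbool.h>

extern bool read_rx_pin(void);   /* hypothetical GPIO read, high = mark */

typedef enum { RX_IDLE, RX_DATA, RX_STOP } rx_state_t;

typedef struct {
    rx_state_t state;
    uint8_t    shift;   /* data bits assembled so far, LSB first */
    uint8_t    nbits;   /* number of data bits received */
} uart_rx_t;

/* Call once per bit time, at mid-bit. Returns true when *out holds a
 * complete 8,N,1 character. */
bool uart_rx_tick(uart_rx_t *rx, uint8_t *out)
{
    bool bit = read_rx_pin();

    switch (rx->state) {
    case RX_IDLE:                  /* hunting for the start bit (low) */
        if (!bit) {
            rx->shift = 0;
            rx->nbits = 0;
            rx->state = RX_DATA;
        }
        break;
    case RX_DATA:                  /* shift in 8 data bits */
        rx->shift >>= 1;
        if (bit)
            rx->shift |= 0x80;
        if (++rx->nbits == 8)
            rx->state = RX_STOP;
        break;
    case RX_STOP:                  /* stop bit must be high */
        rx->state = RX_IDLE;       /* resync on the next start edge */
        if (bit) {
            *out = rx->shift;
            return true;           /* good character */
        }
        break;                     /* framing error: character dropped */
    }
    return false;
}

[The whole "peripheral" is a few bytes of state plus code in program memory - exactly the memory-for-logic trade described in the post above.]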
Reply by ●November 7, 2022
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
>> On 11/6/22 8:56 AM, Rick C wrote:
>> > There's no point to inter-message delays. If there is an error that
>> > causes a loss of framing, the devices will see that and ignore the
>> > message. As I've said, the real issue is that the message will not be
>> > responded to, and the software will fail. At that point the user will
>> > exit the software on the PC and start over. That gives a nice long
>> > delay for resyncing.
>> If the only way to handle a missed message is to abort the whole
>> software system, that seems to be a pretty bad system.
>
> You would certainly think that if your error rate was more than once a
> hundred years. I expect to be long dead before an RS-422 bus only 10 feet
> long burps a bit error.

I would not dare to implement a serial protocol without any form of
error checking, on any length of cable.

You mention ESD somewhere. This can be a serious disturbance that can
easily corrupt a few bits.

Reminds me of a product where we got Windows blue screens during ESD
testing on a device connected via an FTDI USB to serial adapter. Cable
length less than 6 feet.

>> Note, if the master sends out a message, and waits for a response, with
>> a retry if the message is not replied to, that naturally puts a pause in
>> the communication bus for inter-message synchronization.
>
> The pause is already there by virtue of the protocol. Commands and
> replies are on different busses.
>
>> Based on your description, I can't imagine the master starting a message
>> for another slave until after the first one answers, or you will
>> interfere with the arbitration control of the reply bus.
>
> Exactly! Now you are starting to catch on.

So you do wait for a reply, and a reply is only expected on a valid
message? What if there is no reply, do you retry? If so, you have already
implemented some basic error checking. For more robustness you could (I
would) add some kind of CRC.

In the following, I think Richard is just considering a situation where
this problem might occur - not your situation, because he has already
'caught on', as you mention. But I should probably not speak for Richard ...

>> In a dedicated link, after the link is established, it might be possible
>> that one side just starts streaming data continuously to the other side,
>
> Except that there is no data to stream. Maybe you haven't been around for
> the full conversation. The protocol is command/reply, for reading and
> writing registers and selecting which unit's registers are being
> accessed. The "stream" is an 8 bit value.
>
>> but most protocols will have some sort of at least occasional
>> handshaking back, so a loss of sync can stop the flow to re-establish
>> the synchronization. And such handshaking is needed if you have need to
>> handle noise in packets.
>
> ??? Every command has a reply. How is that not a handshake???

--
Stef

I don't care for the Sugar Smacks commercial. I don't like the idea of
a frog jumping on my Breakfast.
		-- Lowell, Chicago Reader 10/15/82
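[Since Stef suggests "some kind of CRC" without naming one, here is a minimal sketch in C of what that could look like for short command/reply messages: a bitwise CRC-8 with the common 0x07 polynomial. The polynomial choice and the append/verify convention are illustrative assumptions, not anything specified in this thread.]

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8, polynomial x^8 + x^2 + x + 1 (0x07), init 0x00,
 * no final XOR, MSB first. */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    while (len--) {
        crc ^= *data++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

[The sender appends crc8(msg, n) as a trailing byte; because of the zero init and absent final XOR, the receiver can simply compute crc8(msg, n + 1) over the message including that byte and accept it only if the result is 0. On an 8-bit MCU this costs a few dozen bytes of code, and for messages of a dozen bytes or so it detects all one-, two- and three-bit errors plus any error burst of up to 8 bits.]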
Reply by ●November 7, 2022
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
>> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> > [snip - FTDI processor discussion, quoted in full in earlier posts]
>>
>> Why are you discussing this? Out of academic curiosity? Then please
>> continue. But what does it matter for your system implementation? There
>> is just a UART/SPI/I²C/JTAG/GPIO peripheral and your software won't care
>> how this peripheral is implemented, as long as it behaves as expected.
>
> I care. Don't you?

No, I don't. We do use FTDI chips in our designs to interface a serial
port to USB. And we also use ready-made FTDI cables. We use these chips
and cables based on their specifications in datasheets, user guides,
etc. I have never felt the need to investigate how the UART/USB
functionality was actually implemented inside the chip. What would I do
with this knowledge? In a design I must rely on the behaviour as
specified in the datasheet.

> I remember when I came to the realization of why an MCU was so cost
> effective compared to programmable or even dedicated logic. It's because
> the MCU program is an FSM, using the instructions stored in the memory.
> These instructions are essentially logic, connected through the CPU
> logic, creating a very low-cost solution to a wide variety of problems,
> because of the very low cost of memory compared to dedicated or
> programmable logic.

This is what I would call 'academic interest', and that is perfectly
fine. And this knowledge might help you think differently about solving
a problem in your own design. But it will make no difference in how you
will implement this chip (or cable) in your design.

--
Stef

So many men, so many opinions; every one his own way.
		-- Publius Terentius Afer (Terence)
Reply by ●November 7, 2022
On 07/11/2022 11:00, Stef wrote:
> On 2022-11-05 Rick C wrote in comp.arch.embedded:
>> On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
>>
>>> In UART communication, this is handled at the protocol level rather than
>>> the hardware (though some UART hardware may have "idle detect" signals
>>> when more than 11 bits of high level are seen in a row). Some
>>> UART-based protocols also use a "break" signal between frames - that is
>>> a string of at least 11 bits of low level.
>>>
>>> If you do not have such pauses, and a receiver is out of step,
>>
>> You have failed to explain how a receiver would get "out of step". The
>> receiver syncs to every character transmitted. If all characters are
>> received, what else do you need? How does it get "out of step"?
>
> I have seen this happen in long messages (a few kB) with no pauses between
> characters and transmitter and receiver set to 8,N,1. It seemed that the
> receiver needed the complete stop bit and then immediately saw the low
> of the next start bit - detecting the edge when it was ready to see it,
> not when it actually happened. When the receiver is slightly slower than
> the transmitter, this caused the detection of the start bit (and
> therefore the whole character) to shift a tiny bit. This added up over
> the character stream until it eventually failed.
>
> Lowering the baud rate did not solve the issue, but inserting pauses
> after a number of chars did. What also solved it was setting the
> transmitter to 2 stop bits and the receiver to one stop bit. This was a
> one-way stream, and this may not be possible on a bi-directional stream.

An extra stop bit will help for this particular kind of error (and is a
good idea if you get such errors often, as it will improve your
percentage timing margins). An occasional pause of at least 11 bit times
will help for all sorts of possible errors.

Basically, it is a good idea to assume that sometimes things go wrong.
There can be noise, interference, cosmic rays, power glitches - even in
a system that has bug-free software, quality hardware, and no fallible
human anywhere, there's always a risk of faults. That is why most serial
protocols have CRCs or other checksums, and at least a basic "if there
is no reply, repeat the telegram" handler.

> I would expect a sensible UART implementation to allow for a slightly
> shorter stop bit to compensate for issues like this. But apparently this
> UART did not do so in the 1 stop bit setting. I have not tested if
> setting both ends to 2 stop bits also solved the problem.
Reply by ●November 7, 2022
On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
>
> In more sophisticated tristate drivers, you would turn off (disconnect)
> the local terminator whenever the driver is enabled. This is done in
> some multi-lane systems as it can significantly reduce power and make
> slope control and pulse shaping easier. (It's not something you'd be
> likely to see on RS-485 buses.)

I'm not sure what bus arrangement you are referring to. The RS-485 bus is intended to be linear. The terminators are at the ends, to prevent reflections. There's no point in removing either of them no matter which driver is enabled.

All drivers see two loads. A terminal driver sees the bus and the terminator. A driver along the bus sees two bus segments driven in parallel. So everyone sees the same impedance: half the characteristic impedance of the bus.

--
Rick C.
+-+- Get 1,000 miles of free Supercharging
+-+- Tesla referral code - https://ts.la/richard11209
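[To spell out the load arithmetic in that last paragraph, a worked example assuming the usual 120 Ω twisted pair with matching 120 Ω terminators at both ends: an end driver sees its local terminator in parallel with the line, and a mid-bus driver sees the two terminated segments in parallel, so every driver sees the same load,]

$$
Z_{\text{load}} = Z_0 \parallel Z_0
= \frac{120\ \Omega \times 120\ \Omega}{120\ \Omega + 120\ \Omega}
= 60\ \Omega = \frac{Z_0}{2}.
$$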
Reply by ●November 7, 2022
On Monday, November 7, 2022 at 6:00:09 AM UTC-4, Stef wrote:
> On 2022-11-05 Rick C wrote in comp.arch.embedded:
> > On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
> >
> >> In UART communication, this is handled at the protocol level rather than
> >> the hardware (though some UART hardware may have "idle detect" signals
> >> when more than 11 bits of high level are seen in a row). Some
> >> UART-based protocols also use a "break" signal between frames - that is
> >> a string of at least 11 bits of low level.
> >>
> >> If you do not have such pauses, and a receiver is out of step,
> >
> > You have failed to explain how a receiver would get "out of step". The
> > receiver syncs to every character transmitted. If all characters are
> > received, what else do you need? How does it get "out of step"?
>
> I have seen this happen in long messages (a few kB) with no pauses between
> characters and transmitter and receiver set to 8,N,1. It seemed that the
> receiver needed the complete stop bit and then immediately saw the low
> of the next start bit - detecting the edge when it was ready to see it,
> not when it actually happened. When the receiver is slightly slower than
> the transmitter, this caused the detection of the start bit (and
> therefore the whole character) to shift a tiny bit. This added up over
> the character stream until it eventually failed.
>
> Lowering the baud rate did not solve the issue, but inserting pauses
> after a number of chars did. What also solved it was setting the
> transmitter to 2 stop bits and the receiver to one stop bit. This was a
> one-way stream, and this may not be possible on a bi-directional stream.
>
> I would expect a sensible UART implementation to allow for a slightly
> shorter stop bit to compensate for issues like this. But apparently this
> UART did not do so in the 1 stop bit setting. I have not tested if
> setting both ends to 2 stop bits also solved the problem.

If a UART receiver cannot properly receive a message like this, it is defective. The point of the start and stop bits is to provide the synchronization. The receiver simply needs to detect the stop bit state (by sampling where the receiver thinks is the middle of the bit) and then immediately start looking for the leading edge of the next start bit. The receiver is then synchronized to the new character's bit timing, and it will never slip. That gives up to ±5% combined timing error tolerance.

If the receiver waits until a later time, such as the expected end of the received stop bit, to start looking for a start bit leading edge, it will not be able to tolerate a timing error where the transmitter is faster than the receiver, making the timing tolerance unipolar, i.e. 5% rather than ±5%.

That's a receiver design flaw - or the transmitter is sending short stop bits, which you can easily see on the scope with a delayed trigger. You should be able to diagnose which end has the problem by connecting a different type of receiver to the stream. If a different receiver UART is able to receive the messages without fault, the problem is obviously in the failing receiver.

--
Rick C.
+-++ Get 1,000 miles of free Supercharging
+-++ Tesla referral code - https://ts.la/richard11209
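[A quick derivation of that ±5% figure - a sketch assuming 8,N,1 framing and mid-bit sampling: the receiver's last sample lands in the middle of the stop bit, 9.5 bit periods after the start edge it synchronized on, so the accumulated clock mismatch between transmitter and receiver must stay under half a bit period:]

$$
9.5\,\left|\frac{\Delta f}{f}\right| < 0.5
\quad\Longrightarrow\quad
\left|\frac{\Delta f}{f}\right| < \frac{0.5}{9.5} \approx 5.3\%.
$$

[That budget covers the combined error of both ends, hence the roughly ±5% usually quoted. A receiver that only begins hunting for the next start edge after the full expected stop bit spends the entire budget in one direction - exactly the unipolar tolerance described above.]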
Reply by ●November 7, 2022
On Monday, November 7, 2022 at 6:55:27 AM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
> > [snip - exchange quoted in full in an earlier post]
> >
> > You would certainly think that if your error rate was more than once a
> > hundred years. I expect to be long dead before an RS-422 bus only 10
> > feet long burps a bit error.
>
> I would not dare to implement a serial protocol without any form of
> error checking, on any length of cable.
>
> You mention ESD somewhere. This can be a serious disturbance that can
> easily corrupt a few bits.

Yes, I mentioned ESD somewhere. This is testing newly constructed circuit boards, so it is used in an ESD-controlled environment.

> Reminds me of a product where we got Windows blue screens during ESD
> testing on a device connected via an FTDI USB to serial adapter. Cable
> length less than 6 feet.

I assume you mean some other device was being ESD tested? This is not being used in an ESD testing lab. Was the FTDI serial cable RS-232 by any chance? Being single-ended, that is much less tolerant of noise.

> >> Note, if the master sends out a message, and waits for a response, with
> >> a retry if the message is not replied to, that naturally puts a pause in
> >> the communication bus for inter-message synchronization.
> >
> > The pause is already there by virtue of the protocol. Commands and
> > replies are on different busses.
> >
> >> Based on your description, I can't imagine the master starting a message
> >> for another slave until after the first one answers, or you will
> >> interfere with the arbitration control of the reply bus.
> >
> > Exactly! Now you are starting to catch on.
>
> So you do wait for a reply, and a reply is only expected on a valid
> message? What if there is no reply, do you retry? If so, you have already
> implemented some basic error checking. For more robustness you could (I
> would) add some kind of CRC.

There should not be any messages other than "valid" messages. I don't recall specifically what the slave does with messages that have bit errors, but I'm pretty sure it simply doesn't know they have bit errors. The message has no checksum or other bit error control. The format has one character to indicate the "command" type. If that character is corrupted, the command is not used, unless it is changed to another valid character (a 3 in 256 chance).

Again, there's no reason to "detect" errors, since I've implemented no error protocol. That is many times more complex than simply ignoring the errors, which works because errors don't happen often enough to have an impact on testing. On the Apollo moon missions, they took no precautions against damage from micrometeoroids, because the effort required was not commensurate with the likelihood of the event.

--
Rick C.
++-- Get 1,000 miles of free Supercharging
++-- Tesla referral code - https://ts.la/richard11209
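[For contrast, the reply-timeout handling Richard and Stef describe need not be complex. A minimal sketch in C of a master-side transaction with retries; send_command() and wait_reply() are hypothetical stand-ins for the host-side serial I/O, and the retry count and timeout are arbitrary:]

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical host-side helpers -- not from any real library. */
extern void send_command(const uint8_t *cmd, size_t len);
extern bool wait_reply(uint8_t *reply, size_t max, unsigned timeout_ms);

/* Send a command and wait for its reply, retrying on silence.
 * Returns true if any attempt produced a reply within its timeout. */
bool transact(const uint8_t *cmd, size_t len, uint8_t *reply, size_t max)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        send_command(cmd, len);
        if (wait_reply(reply, max, 100))   /* 100 ms per attempt */
            return true;
        /* No reply: the gap before the retry also gives a receiver
         * that lost character framing an idle line to resync on. */
    }
    return false;   /* report the fault upstream instead of exiting */
}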
Reply by ●November 7, 2022
On Monday, November 7, 2022 at 7:07:43 AM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> >
> > I care. Don't you?
>
> No, I don't. We do use FTDI chips in our designs to interface a serial
> port to USB. And we also use ready-made FTDI cables. We use these chips
> and cables based on their specifications in datasheets, user guides,
> etc. I have never felt the need to investigate how the UART/USB
> functionality was actually implemented inside the chip. What would I do
> with this knowledge? In a design I must rely on the behaviour as
> specified in the datasheet.

It's hard to imagine an engineer with no curiosity.

> > I remember when I came to the realization of why an MCU was so cost
> > effective compared to programmable or even dedicated logic.
> > [snip - quoted in full in an earlier post]
>
> This is what I would call 'academic interest', and that is perfectly
> fine. And this knowledge might help you think differently about solving
> a problem in your own design. But it will make no difference in how you
> will implement this chip (or cable) in your design.

It is very much of practical interest to me. As I design FPGAs, knowing that I can use fewer resources by constructing a peripheral as a CPU is important information. The FPGA design in the UUT was pushing the capacity of the chip it was in. I was on the cusp of changing to a CPU-centric design when it finally routed at 90% utilization. This time, I'm bumping the size of the FPGA significantly, about 3x. The Gowin FPGA devices are very cost effective. I'll be able to use the hard logic and the soft CPU, both. LOL

--
Rick C.
++-+ Get 1,000 miles of free Supercharging
++-+ Tesla referral code - https://ts.la/richard11209