EmbeddedRelated.com

Serial Bus Speed on PCs

Started by Rick C November 30, 2022
I am using laptops to control test fixtures via a USB serial port.  I'm looking at combining many test fixtures in one chassis, controlled over one serial port.  The problem I'm concerned about is not the speed of the bus, which can range up to 10 Mbps.  It's the interface to the serial port.  

The messages are all short, around 15 characters.  The master PC addresses a slave and the slave promptly replies.  It seems this message-level handshake creates a bottleneck in every interface I've looked at. 

FTDI has a high-speed USB cable that is likely limited by the 8 kHz polling rate.  So the message and response pair would be limited to 4 kHz.  Spread over 256 endpoints, that's only 16 message pairs a second to each target.  That might be workable if there were no other delays. 
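The polling arithmetic above can be sketched as a quick back-of-the-envelope calculation (the 8 kHz figure is the standard USB 2.0 high-speed microframe rate; the rest is simple division):

```python
# Back-of-the-envelope check of the USB high-speed polling arithmetic.
# 8 kHz is the USB 2.0 high-speed microframe rate (one poll every 125 us).
USB_HS_POLL_HZ = 8000
TARGETS = 256

# A command and its response each consume a polling slot, so command/response
# pairs are limited to half the polling rate; split that across all targets.
pairs_per_second = USB_HS_POLL_HZ / 2          # 4000.0
pairs_per_target = pairs_per_second / TARGETS  # 15.625, i.e. roughly 16

print(pairs_per_second, pairs_per_target)
```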

While investigating other options, I found some Ethernet-to-serial devices; some claim the serial port can run at up to 3.7 Mbps.  But when I contacted the vendor, they said each message has a 1 ms delay, so that's only 500 pairs per second, or maybe 2 pairs per second per channel.  That's slow! 

They have multi-port boxes, up to 16, so I've asked them if they will run with a larger aggregate rate, or if the delay on one port impacts all of them.  

I've also found another vendor with a similar product, and I've asked about that too. 

I'm surprised and disappointed the Ethernet devices have such delays.  I would have expected them to work better given their rather high prices.  

I could add a module to interface between the PC serial port and the 16 test fixtures.  It would allow the test application on the PC to send messages to all 16 test fixtures in a row.  The added module would receive the 16 responses on separate lines and stream them out to the PC's port as one continuous message.  This is a bit messier, since the 16 lines from this new module would need to be marked, as they have to plug into the right test fixture each day.  

Or, if I could devise a manner of assigning priority, the slaves could all manage the priority themselves and still share the receive bus to the serial port on the PC.  Again, this would look like one long message to the port and the PC.  The application program would see the individual messages and parse them separately.  Many of the commands from the PC could actually be shortened to a single, broadcast command since the same tests are done on all targets in parallel.  So using an RJ-45 connector, there would be the two pairs for the serial port, and two pairs for the priority daisy-chain. 

I guess I'm thinking out loud here.  

LOL, so now I'm leaning back toward the USB-based FTDI RS-422 cable and a priority scheme so every target gets many more commands per second.  I just ran the math: this would be almost 20,000 bits per command.  Try to run that 8,000 times per second and a 100 Mbps Ethernet port won't keep up.  
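A hypothetical reconstruction of the "almost 20,000 bits per command" figure (my interpretation, not spelled out in the post: one broadcast command answered by 128 replies of 15 characters, at 10 bits per character on the wire):

```python
# Hypothetical reconstruction of "almost 20,000 bits per command":
# one broadcast command answered by 128 replies of 15 characters each,
# at 10 bits per character on the wire (start + 8 data + stop).
REPLIES = 128
CHARS = 15
BITS_PER_CHAR = 10

bits_per_cycle = REPLIES * CHARS * BITS_PER_CHAR  # 19,200 bits
required_bps = bits_per_cycle * 8000              # at 8,000 cycles/s

# A 100 Mbps Ethernet port indeed can't keep up with this rate.
print(bits_per_cycle, required_bps)
```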

I've written to FTDI about the actual throughput I can expect with their cables.  We'll see what they come back with. 

-- 

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
On 11/30/22 2:33 AM, Rick C wrote:
> [Rick C's original post, quoted in full, snipped]
You can get much more than 8000 cps with an FTDI interface, because you can send/receive more than one character per "poll".

My first thought is: why are you trying to combine everything into one USB serial port? Why not give each test fixture its own serial port (or lump just a few onto a given port) and let the USB bus do the bulk of the multi-drop?

The Ethernet unit might be just a 10 Mbit device, or maybe 100 Mbit, and you need to send a whole message block, process it, then send the data in it, and then it can send back the answer when it figures the full answer has come back. It likely doesn't even TRY to transmit on a character basis; because of the much larger overhead of an Ethernet packet, it presumes network bandwidth is more important than delay.

Also, they may be quoting figures with typical routing delays, assuming a multi-hop route from computer to destination, which adds to the delay, since that is the sort of application you use those for. Ethernet is a "long haul" medium, not normally thought of as short haul, particularly when talking about lower-bandwidth applications.
On Wednesday, November 30, 2022 at 8:42:17 AM UTC-4, Richard Damon wrote:
> [earlier posts snipped]
>
> You can get much more than 8000 cps with an FTDI interface. This is
> because you can send/receive more than one character per "poll".
Yes, I'm aware of that. I suppose I didn't spell out everything in my post, but the 8,000 per second polling rate translates into 4,000 message pairs, one Tx, one Rx. With 256 end points to be controlled, this is just 16 message pairs per second per end point. The length of the messages is around 15 char, so this gives a bit over 1 Mbps. The RS-422 FTDI adapter can manage 3 Mbps, or the TTL, hi-speed adapter can be set for up to 12 Mbps, but I'm still waiting to hear from them about any internal or software overhead that would slow the message rate.
> My first thought is why are you trying to combine everything into one
> USB serial port. Why not give each test fixture its own serial port (or
> lump just a few onto a given port) and let the USB bus do the bulk of
> the multi-drop.
I don't know if that will work any better. I have questions in to the various vendors.
> The ethernet unit might be just a 10 MBit device, or maybe a 100MBit
10, 100 Mbps and 1 Gbps.
> and you need to send a whole message block, process it, then send the
> data in it, and then it can send back the answer when it figures the
> full answer has come back.
"It"??? What is "it" exactly? The message blocks are 15 characters. The bus runs with a single command from the master resulting in a single response from the slave, lather, rinse, repeat. The short message size results in a low bit rate, or, really, the message rate is the choke point, not the bit rate.
> It likely doesn't even TRY to transmit on a character basis, but
> because of the much larger overhead of an ethernet packet, presumes
> network bandwidth is more important than delay.
I don't know where you got the "character" idea. I don't know what the adapter decides is a block to send, but I assume there is a maximum size and short of that, there's a time out.
> Also, they may be quoting figures with typical routing delays assuming
> a multi-hop route from computer to destination, which adds to the
> delay, since that is the sort of application you use those for.
> Ethernet is a "long haul" medium, not normally thought of as short
> haul, particularly when talking about lower bandwidth applications.
No one said anything about Ethernet "routing" delays. I've explained to them what I'm doing, and one vendor said there is a 1 ms delay in handling each "message" as I described it.

I could go with something much fancier, where the same command is sent to all slaves and the slaves respond in turn, controlled by a separate signal granting priority to write the reply onto the shared bus. The message from the master can be a single broadcast message, with 128 replies.

So far, no one has indicated the specific baud rates they support. They only list the maximum rate. I have to design the slaves with a clock at the baud rate times X. It would be nice to share that with the rest of the design, which needs a clock around 33 MHz for comms to the UUTs.

It's kind of odd that FTDI has a hi-speed serial adapter with a TTL-level UART interface that runs up to 12 Mbps, while the RS-422/485 UART interfaces only run full-speed, at up to 3 Mbps. Still, 3 Mbps will work like a champ if the interface does not have message handling delays. Same concern with the 12 Mbps TTL-level interface.

-- 

Rick C.
On 30.11.2022 15:21, Rick C wrote:

> It's kind of odd that FTDI has a hi-speed serial adapter with a TTL
> level UART interface that runs up to 12 Mbps, while the RS-422/485
> UART interfaces only run full-speed, at up to 3 Mbps. Still, 3 Mbps
> will work a champ if the interface does not have message handling
> delays. Same concern with the 12 Mbps TTL level interface.
So what is stopping you from using such a 12 Mbps USB/serial device and attaching an RS-422/485 transceiver (e.g. https://www2.mouser.com/datasheet/2/256/MAX22025_MAX22028-1701782.pdf)? That should meet all your requirements mentioned so far.

Regards,
Bernd
On Wednesday, November 30, 2022 at 11:11:25 AM UTC-4, Bernd Linsel wrote:
> [quoted text snipped]
>
> So what is there against you using such a 12 Mbps USB/serial thing and
> attaching an RS-422/485 transceiver? That should meet all your
> requirements mentioned so far.
I heard back from FTDI, and they only support polling rates up to 1 kHz. So I guess I'm stuck with Ethernet. I might be stuck with changing the protocol. Someone suggested that the OS will interject delays as well. So I might have to either install 16 serial ports directly in the PC, or change the protocol so the master talks to all the slaves in a burst or a single broadcast command, and the replies are controlled by a priority scheme so they are back to back.

I didn't expect this to be the difficult part of the job.

I could also automate the test steps into the FPGA on each test fixture board. But that makes the whole thing much less flexible while developing.

-- 

Rick C.
On 30/11/2022 16:58, Rick C wrote:
> [earlier discussion snipped]
>
> I heard back from FTDI and they only support polling rates up to 1
> kHz. So I guess I'm stuck with Ethernet. I might be stuck with
> changing the protocol. Someone suggested that the OS will interject
> delays as well. So I might have to either install 16 serial ports
> directly in the PC, or change the protocol so the master talks to
> all the slaves in a burst or a single broadcast command, and the
> replies are controlled by a priority scheme so they are back to back.
>
> I didn't expect this to be the difficult part of the job.
>
> I could also automate the test steps into the FPGA on each test
> fixture board. But that makes the whole thing much less flexible
> while developing.
The general issue is that PCs are great at throughput, but poor at latency. USB in particular has a scheduler and polls the devices on the bus at regular intervals. (This can't really be avoided in a half-duplex master-slave system.) For Ethernet, a gigabit switch will usually have a latency of 50 - 125 us. Even with a direct connection with no switch, you'll be hard pushed to get latencies lower than 50 us, and thus a query-reply peak rate of 10,000 telegram pairs a second.

You can get higher throughput if you have multiple outstanding query-replies going to different USB devices or different IP connections. So while you are not going to get more than 4000 send/receive transactions a second to one USB 2.0 high-speed FTDI serial port device, you could probably do that simultaneously to several such devices on the same bus, as long as you don't need to wait for the reply from one target before sending a message to a different target. (The same principle goes for Ethernet.)

A communication hierarchy is likely the best way to handle this.

Alternatively, the messages from the PC can be large and broadcast, rather than divided up. You could even make an EtherCAT-style serial protocol (using the hybrid RS-422 bus you suggested earlier). The PC could send a single massive serial telegram consisting of multiple small ones:

  <header><padding><tele1><padding><tele2><padding>...<pause>

Each slave would reply after hearing its own telegram, fast enough to be complete in good time before the next slave starts. (Adjust padding as necessary to give this timing.)

Then from the PC side, you have one big telegram out, and one big telegram in - using 3 MBaud if you like.
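The combined-telegram framing described above can be sketched in a few lines. The framing details below (header byte value, fixed padding length, fixed-length commands) are entirely hypothetical, just to show the shape of the scheme:

```python
# Sketch of an EtherCAT-style combined telegram (hypothetical framing).
# The PC sends one long frame; each slave answers after hearing its own slot.

HEADER = b"\x7E"     # hypothetical start-of-frame marker
PAD = b"\x00" * 4    # inter-slot padding, sized to give each slave reply time

def build_telegram(commands):
    """Concatenate per-slave commands into one frame: <header><pad><cmd1><pad><cmd2>..."""
    frame = bytearray(HEADER)
    for cmd in commands:
        frame += PAD + cmd
    return bytes(frame)

def split_telegram(frame, cmd_len):
    """Recover the fixed-length per-slave slots from a received frame."""
    body = frame[len(HEADER):]
    slot = len(PAD) + cmd_len
    return [body[i + len(PAD): i + slot] for i in range(0, len(body), slot)]

# Three 15-byte commands, matching the message size discussed in the thread.
cmds = [b"CMD%02d__________" % i for i in range(3)]
frame = build_telegram(cmds)
assert split_telegram(frame, 15) == cmds
```

In a real design the padding would be tuned to the slaves' turnaround time, and the replies would come back as one equally contiguous frame on the shared receive pair.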
On 11/30/2022 19:14, David Brown wrote:
> [earlier discussion snipped]
>
> Alternatively, the messages from the PC can be large and broadcast,
> rather than divided up. You could even make an EtherCAT-style serial
> protocol (using the hybrid RS-422 bus you suggested earlier). [...]
>
> Then from the PC side, you have one big telegram out, and one big
> telegram in - using 3 MBaud if you like.
David, that kind of detailed problem solving should not go out free of charge you know :-). Of course this is the way to do it.
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> [Rick C's original post, quoted in full, snipped]
I am not sure if you get that there are two issues: throughput and latency. If you wait for the answer before sending the next request, you will be bounded by latency. OTOH, if you fire several requests without waiting, then you will be limited by throughput.

With relatively cheap converters on Linux, to handle 10000 round trips of 15-byte messages I need the following times:

CH340  2Mb/s, waiting,    6.890s
CH340  2Mb/s, overlapped, 1.058s
CP2104 2Mb/s, waiting,    2.514s
CP2104 2Mb/s, overlapped, 1.214s

The other end was an STM32F030, which was simply echoing back received characters. Note: these results are not fully comparable. Apparently the CH340 will silently drop excess characters, so for overlapped operation I simply sent more characters than I read. OTOH, the CP2104 seems to stall when its receive buffer overflows, so I limited overlap to avoid stalls. Of course a real application would need some way to ensure that receive buffers do not overflow.

So, you should easily be able to handle 10000 round trips per second provided there is enough overlap. For this you need to ensure that only one device is transmitting to the PC. If you have several FPGAs on a single board, coordinating them should be easy. Of course, you need some free pins and extra tracks. I would use a single transceiver per board, depending on coordination to ensure that only one FPGA controls the transceiver at a given time. Anyway, this would allow overlapped transmission to all devices on a single board. With multiple boards you would need some hardware or software protocol to decide which board can transmit. On the hardware side, a single pair of extra wires could carry the needed signals (that is your "priority daisy chain").

As others suggested, you could use multiple converters for better overlap. My converters are "full speed" USB, that is, they are half-duplex 12 Mb/s. USB has significant protocol overhead, so probably two 2 Mb/s duplex serial converters would saturate a single USB bus. 
In desktops it is normal to have several separate USB controllers (buses), but that depends on the specific motherboard. Theoretically, when using "high speed" USB converters, several could easily work from a single USB port (provided that you have enough places in hub(s)).

An extra thing: there are reasonably cheap PC-compatible boards; supposedly they are cheaper and easier to buy than a Raspberry Pi (but I did not try to buy them). If you need really large scale, you could have a single such board per batch of devices and run a copy of your program there, with a single laptop connecting to the satellite boards via Ethernet and collecting results.

-- 

Waldek Hebisch
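The waiting-vs-overlapped distinction in the benchmark above can be captured in a toy timing model. The latency value below is an illustrative assumption, not a measurement; it is chosen only to show why pipelining helps so much:

```python
# Toy model of latency-bound vs throughput-bound serial round trips.
# The per-transaction latency figure is an illustrative assumption.

def waiting_time(n_msgs, msg_bits, baud, latency_s):
    # Each round trip pays the full latency before the next request starts.
    wire = 2 * msg_bits / baud  # request + reply time on the wire
    return n_msgs * (wire + latency_s)

def overlapped_time(n_msgs, msg_bits, baud, latency_s):
    # Requests are pipelined: latency is paid once; the wire time dominates.
    wire = 2 * msg_bits / baud
    return latency_s + n_msgs * wire

# 10,000 round trips of 15-byte messages (10 bits/char) at 2 Mb/s,
# with an assumed 0.5 ms per-transaction latency.
N, BITS, BAUD, LAT = 10_000, 15 * 10, 2_000_000, 500e-6
print(waiting_time(N, BITS, BAUD, LAT))    # ~6.5 s, latency-bound
print(overlapped_time(N, BITS, BAUD, LAT)) # ~1.5 s, throughput-bound
```

The model reproduces the qualitative shape of the measurements (several seconds when waiting, roughly a second when overlapped), though the real converters add buffering effects it ignores.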
On Wednesday, November 30, 2022 at 9:08:25 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> [earlier discussion snipped]
>
> I am not sure if you get that there are two issues: throughput and
> latency.
Of course I'm aware of it. That's the entirety of the problem.
> If you wait for answer before sending next request you will be bounded
> by latency.
Until I contacted the various vendors, I had no reason to expect their hardware to have such excessive latencies. Especially with the Ethernet converters, I would have expected better hardware. Being an FPGA sort of guy, I didn't even realize they would not implement the data path in an FPGA. I found one company that does use an FPGA for a USB-to-serial adapter, but I expect the PC-side USB software may be problematic as well. It makes you wonder how they ever get audio to work over USB. I guess lots of buffering.
> OTOH if you fire several requests without waiting, then
> you will be limited by throughput.
Yes, but the current protocol using a single target works with one command at a time. In ignorance of the many problems with serial port converters, I was planning to use the same protocol. I have several new ideas, including various ways to combine the messages to multiple targets into one message.

Or... I could move the details of the various tests into the target FPGAs, so they receive a command to test function X, rather than the multiple commands to write and read the various registers that manipulate the details being tested. Concerns with this include the need to reload all the FPGAs any time they are updated with a new test feature or bug fix. That's probably 64 FPGAs. I could use one FPGA per test fixture, for a total of 16, but that makes the routing a bit more problematic. Even 16 is a PITA.

Also, I've relied on monitoring the command stream to spot bugs. That would require attaching a serial debugger of some sort to the interface to the UUT, and the internal test controller would be much harder to observe. Currently, that is controlled by commands as well.
> With relatively cheap convertors on Linux, to handle 10000
> round trips for 15-byte messages I need the following times:
>
> CH340 2 Mb/s, waiting, 6.890s
That's 11.3 per target, per second. (128 targets)
> CH340 2 Mb/s, overlapped, 1.058s
That's pretty close to 74 per target, per second. I used to use the CH340 devices, but we had intermittent lockups of the serial port when testing all day long. I switched to FTDI and that went away. I think you told me you have no such problems. Maybe it's the CH340 Windows serial drivers.
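The per-target arithmetic in these replies is easy to check with a short script; the 10,000 round trips, the measured times, and the 128-target count are all taken from the thread:

```python
# Per-target round-trip rates from the timings quoted in this thread.
ROUND_TRIPS = 10_000
TARGETS = 128  # target count used in the discussion above

timings_s = {
    "CH340 waiting":     6.890,
    "CH340 overlapped":  1.058,
    "CP2104 waiting":    2.514,
    "CP2104 overlapped": 1.214,
}

for name, seconds in timings_s.items():
    total = ROUND_TRIPS / seconds      # round trips per second overall
    per_target = total / TARGETS       # round-robin share per target
    print(f"{name:18s} {total:7.0f}/s total, {per_target:5.1f}/s per target")
```

The "waiting" CH340 case works out to about 11.3 round trips per target per second, and the overlapped case to about 73.8, matching the figures above.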
> CP2104 2 Mb/s, waiting, 2.514s
> CP2104 2 Mb/s, overlapped, 1.214s
I don't know what the CP2104 is. I'm not certain what "overlapped" means in this test. Did you just continue to send 15 byte messages with no delays 10,000 times? Since you are in the mood for testing, what happens if you run overlapped, with 128 messages of 15 characters and wait for the replies before sending the next batch? Also, if you don't mind, can you try 20 character messages?
> The other end was STM32F030, which was simply echoing back
> received characters.
>
> Note: these results are not fully comparable. Apparently CH340
> will silently drop excess characters, so for overlapped operation
> I simply sent more characters than I read. OTOH CP2104 seems to
> stall when its receive buffer overflows, so I limited overlap to
> avoid stalls. Of course a real application would need some way
> to ensure that receive buffers do not overflow.
Wait, what? How would overlapped operation work if you have to worry about lost characters? I'm not sure what "stall" means. Did it send XOFF or something? Any idea what size of aggregated message would prevent character loss? That's kind of important.
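One common way to keep the overlap without overrunning a converter's receive buffer is a simple credit window: cap the number of requests in flight and drain a reply before sending more. This is only an illustrative sketch, not any vendor's API; the `send`/`recv` callables and the window size are hypothetical stand-ins for the real serial I/O:

```python
def run_overlapped(send, recv, messages, window=32):
    """Send requests with at most `window` outstanding, collecting replies.

    send(msg): queue one request on the serial link (hypothetical).
    recv():    block until one reply is available and return it.
    """
    replies = []
    in_flight = 0
    for msg in messages:
        send(msg)
        in_flight += 1
        if in_flight >= window:        # window full: drain one reply first
            replies.append(recv())
            in_flight -= 1
    while in_flight:                   # drain the remaining replies
        replies.append(recv())
        in_flight -= 1
    return replies


# Quick self-check with an echo "device" backed by a queue.
from collections import deque

q = deque()
out = run_overlapped(q.append, q.popleft, [f"CMD{i}" for i in range(100)], window=8)
print(out[:3], len(out))  # ['CMD0', 'CMD1', 'CMD2'] 100
```

Picking the window is the open question Rick raises: it has to stay below whatever the adapter's receive buffer can absorb.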
> So, you should be easily able to handle 10000 round trips
> per second provided there is enough overlap. For this
> you need to ensure that only one device is transmitting to
> the PC. If you have several FPGAs on a single board, coordinating
> them should be easy. Of course, you need some free pins and
> extra tracks. I would use a single transceiver per board,
> depending on coordination to ensure that only one FPGA
> controls the transceiver at a given time. Anyway, this would
> allow overlapped transmission to all devices on a single
> board. With multiple boards you would need some hardware
> or software protocol to decide which board can transmit.
> On the hardware side a single pair of extra wires could
> carry the needed signals (that is your "priority daisy chain").
Yes, the test fixture boards have to be set up each day, and to make them easy to connect (there's no backplane), I was planning to have two RJ-45 connectors on the front panel. A short jumper would string the RS-422 ports together. My thinking, if the aggregated commands were needed, was to use the other pins for "handshake" lines to implement a priority chain for the replies. The master sets the flag when starting to transmit. The first board gives all the needed replies, then passes the flag on to the next board. When the last reply is received by the master, the flag is removed and the process is restarted.
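The flag-passing reply scheme above can be modelled in a few lines: each board drives the shared receive pair only while it holds the flag, so the master sees one contiguous stream it can slice back into fixed-length replies. A toy model, with the reply format and length made up purely for illustration:

```python
REPLY_LEN = 15  # illustrative fixed reply length, roughly as in the thread


def chain_replies(board_replies):
    """Boards transmit in chain order, one at a time, onto the shared pair.

    The master sees the concatenation as one continuous message."""
    stream = []
    for reply in board_replies:         # the flag passes down the chain
        assert len(reply) == REPLY_LEN  # only well-formed replies on the bus
        stream.append(reply)            # current flag holder transmits
    return "".join(stream)


def split_stream(stream):
    """Master side: slice the continuous stream into per-board replies."""
    return [stream[i:i + REPLY_LEN] for i in range(0, len(stream), REPLY_LEN)]


boards = [f"B{n:02d}:OK:{'0' * 8}" for n in range(16)]  # 16 fixtures
stream = chain_replies(boards)
print(len(stream), split_stream(stream)[:2])
```

The point of the model is the invariant: exactly one transmitter at a time, and the master needs no per-board framing beyond knowing the reply length (or a delimiter) to parse the stream.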
> As others suggested, you could use multiple convertors for
> better overlap. My convertors are "full speed" USB, that
> is they are half-duplex 12 Mb/s. USB has significant
> protocol overhead, so probably two 2 Mb/s duplex serial
> converters would saturate a single USB bus. In desktops
> it is normal to have several separate USB controllers
> (buses), but that depends on the specific motherboard.
> Theoretically, when using "high speed" USB converters,
> several could easily work from a single USB port (provided
> that you have enough places in hub(s)).
I've been shying away from USB because of the inherent speed issues with small messages. But with larger messages, hi-speed converters can work, I would hope. Maybe FTDI did not understand my question, but they said even on the hi-speed version, their devices use a polling rate of 1 ms. They call it "latency", but since it is adjustable, I think it is the same thing. I asked about the C232HD-EDHSP-0, which is a hi-speed device, but also mentioned the USB-RS422-WE-5000-BT, which is an RS-422, full-speed device. So maybe he got confused. They don't offer many hi-speed devices. But the Ethernet implementations also have speed issues, likely because they are actually software based.
> An extra thing: there are reasonably cheap PC compatible
> boards; supposedly they are cheaper and easier to buy
> than a Raspberry Pi (but I did not try to buy them). If you
> need really large scale you could have a single such board
> per batch of devices and run a copy of your program there. And
> a single laptop connecting to the satellite boards via Ethernet
> and collecting results.
Yeah, but more complexity. Maybe it doesn't need to run so fast. I've been working with the idea that it is not a hard thing to do, but I just keep finding more and more problems.

The one approach that seems to have the best chance at running very fast is a PCIe board with 4 or 8 ports. I'd have to use an embedded PC, or at least a mini-tower or something. Many of these seem to have rather low-end x86 CPUs. There's also the overhead of the PC OS, so maybe I need to do some testing before I worry about this further. I have one FTDI cable. I can use an embedded MCU board for the other end, I suppose. It will give me a chance to get back into Mecrisp Forth. I wonder how fast the MSP430 UART will run? I might have an ARM board that runs Mecrisp, I can't recall.

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
On 01/12/2022 11:48, Rick C wrote:
> On Wednesday, November 30, 2022 at 9:08:25 PM UTC-4,
> anti...@math.uni.wroc.pl wrote:
>> Rick C <gnuarm.del...@gmail.com> wrote:
<snip>
>>> LOL, so now I'm leaning back toward the USB based FTDI RS-422
>>> cable and a priority scheme so every target gets many more
>>> commands per second. I just ran the math, and this would be
>>> almost 20,000 bits per command. Try to run that at 8,000 times
>>> per second and a 100 Mbps Ethernet port won't keep up.
>>>
>>> I've written to FTDI about the actual throughput I can expect
>>> with their cables. We'll see what they come back with.
>>
>> I am not sure if you get that there are two issues: throughput and
>> latency.
>
> Of course I'm aware of it. That's the entirety of the problem.
>
I would be rather surprised if you were not aware of the difference - but your posts show you don't seem to be familiar with the level of the latencies inherent in USB and Ethernet. It seems you think it is just poor implementations of hardware or drivers. (Of course, limited implementations can make it worse.)
>
>> If you wait for answer before sending next request you will be
>> bounded by latency.
>
> Until I contacted the various vendors, I had no reason to expect
> their hardware to have such excessive latencies. Especially in the
> Ethernet converter, I would have expected better hardware. Being an
> FPGA sort of guy, I didn't even realize they would not implement the
> data path in an FPGA.
No one implements the data path of Ethernet in an FPGA. Sometimes a few bits (such as checksums) are accelerated in hardware, and there can even be filtering or re-direction done in hardware, but the data in Ethernet packets is always handled in software. Even if it were all handled instantly in perfect hardware, a minimum Ethernet frame is 72 bytes on the wire (preamble through FCS) plus a 12-byte inter-frame gap, and the 20-byte minimum IP header and 20-byte minimum TCP header ride inside the frame's payload. So a short message costs roughly 90 bytes on the wire before your 15 characters of content count for anything - on the order of 7 to 10 us per packet at 100 Mbps.
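To put numbers on the framing overhead, the sketch below adds up the wire-level cost of one minimal TCP segment carrying a 15-byte message; the header sizes are the standard minimums, and the 100 Mb/s rate and 15-byte payload are from the thread:

```python
# Wire time for one short TCP message on 100 Mb/s Ethernet.
PREAMBLE_SFD = 8    # preamble + start-of-frame delimiter
ETH_HEADER   = 14   # dst MAC, src MAC, EtherType
IPV4_HEADER  = 20   # minimum IPv4 header, no options
TCP_HEADER   = 20   # minimum TCP header, no options
FCS          = 4    # frame check sequence
IFG          = 12   # inter-frame gap
MIN_PAYLOAD  = 46   # Ethernet minimum payload (frame is padded if shorter)
PAYLOAD      = 15   # one short command, as in this thread

payload = max(IPV4_HEADER + TCP_HEADER + PAYLOAD, MIN_PAYLOAD)
wire_bytes = PREAMBLE_SFD + ETH_HEADER + payload + FCS + IFG
wire_us = wire_bytes * 8 / 100e6 * 1e6
print(f"{wire_bytes} bytes on the wire = {wire_us:.2f} us at 100 Mb/s")
```

That is only the serialization time; any software stack, driver, and interrupt latency sits on top of it, which is where the milliseconds come from.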
>
> I found one company that does use an FPGA for a USB to serial
> adapter, but I expect the PC side USB software may be problematic as
> well. It makes you wonder how they ever get audio to work over USB.
> I guess lots of buffering.
>
USB works by cyclic polling. There is inevitably a latency. USB 1 had 1 kHz polling, while USB 2 has 8 kHz. (I don't know off-hand what USB 3 has, but USB serial devices are invariably USB 1 or 2.)

Most serial port drivers poll at lower rates than the USB cycle times strictly allow, since polling very fast is difficult to do efficiently. I believe it is difficult on Windows to have periodic events at a resolution below 1 millisecond without busy-waiting, and drivers can't busy-wait - you can't have a driver that eats one of your CPU cores just because you've plugged in a USB to serial cable! If you write your own code that accesses the USB lower levels directly (such as using Linux libusb, or its Windows port) then you can, I believe, call USB transfer functions faster, up to the base USB cycle rate.

None of this should make you wonder about audio. You just need enough buffering to cover USB cycles (125 us for USB 2). Any application delay is typically /far/ longer, such as when collecting streaming audio from a dodgy internet connection.

I wonder if you are confusing the two related kinds of latency - one-way latency (the time between when an application starts to send something at one end and when the application at the other end has the data), and two-way latency for a query-reply exchange. You might also be mixing up jitter in this. I say this because there are critical differences between the needs of audio and the needs of your communication. In particular, audio does not care about two-way latencies, and can cope with significant one-way latency (up to perhaps 20 ms) even when there is video. Without video, latency is irrelevant for audio as long as the jitter is low.
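Those polling rates put a hard ceiling on a strict command/response protocol, independent of baud rate. A rough bound, assuming each leg of the exchange waits for one polling interval (the 128-target count is carried over from earlier in the thread):

```python
# Ceiling on query/reply pairs per second when each direction of the
# exchange waits for one USB polling interval.
def max_pairs_per_sec(poll_hz):
    return poll_hz / 2          # one interval out, one interval back

TARGETS = 128                    # target count from earlier in the thread

for name, hz in [("USB 1.x (1 kHz frames)", 1_000),
                 ("USB 2.0 (8 kHz microframes)", 8_000)]:
    pairs = max_pairs_per_sec(hz)
    print(f"{name}: <= {pairs:.0f} pairs/s, {pairs / TARGETS:.2f} per target")
```

Overlapping requests, or aggregating many targets' commands into one transfer, is what lifts this ceiling - which is exactly the direction the thread has been heading.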
