
Shared Communications Bus - RS-422 or RS-485

Started by Rick C November 2, 2022
On 11/7/22 11:05 AM, Rick C wrote:
> On Monday, November 7, 2022 at 6:00:09 AM UTC-4, Stef wrote:
>> On 2022-11-05 Rick C wrote in comp.arch.embedded:
>>> On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
>>>
>>>> In UART communication, this is handled at the protocol level rather than
>>>> the hardware (though some UART hardware may have "idle detect" signals
>>>> when more than 11 bits of high level are seen in a row). Some
>>>> UART-based protocols also use a "break" signal between frames - that is
>>>> a string of at least 11 bits of low level.
>>>>
>>>> If you do not have such pauses, and a receiver is out of step,
>>>
>>> You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?
>>
>> I have seen this happen in long messages (a few kB) with no pauses between
>> characters and transmitter and receiver set to 8,N,1. It seemed that the
>> receiver needed the complete stop bit and then immediately saw the low
>> of the next start bit. It detected the edge when it was ready to see it,
>> not when it actually happened. When the receiver is slightly slower than
>> the transmitter, this caused the detection of the start bit (and
>> therefore the whole character) to shift a tiny bit. This added up over
>> the character stream until it eventually failed.
>>
>> Lowering the baud rate did not solve the issue, but inserting pauses
>> after a number of chars did. What also solved it was setting the
>> transmitter to 2 stop bits and the receiver to one stop bit. This was a
>> one-way stream and this may not be possible on a bi-directional stream.
>>
>> I would expect a sensible UART implementation to allow for a slightly
>> shorter stop bit to compensate for issues like this. But apparently this
>> UART did not do so in the 1 stop bit setting. I have not tested if
>> setting both ends to 2 stop bits also solved the problem.
>
> If a UART receiver can not properly receive a message like this, it is defective. The point of the start and stop bits is to provide the synchronization. The receiver simply needs to detect the stop bit state (by sampling where the receiver thinks is the middle of the bit) and then immediately start looking for the leading edge of the next start bit. The receiver will then be synchronized to the new character bit timing and it will never slip. That gives up to ±5% combined timing error tolerance.
>
> If the receiver waits until a later time, such as the expected end of the received stop bit, to start looking for a start bit leading edge, it will not be able to tolerate a timing error where the transmitter is faster than the receiver, making the timing tolerance unipolar, i.e. 5% rather than ±5%.
>
> That's a receiver design flaw, or the transmitter is sending short stop bits, which you can easily see on the scope with a delayed trigger control.
>
> You should be able to diagnose which end has the problem by connecting a different type of receiver to the stream. If a different receiver UART is able to receive the messages without fault, the problem is obviously the failing receiver.
YOU may consider it a design flaw, but I have seen too many serial ports with this flaw to just totally ignore it. Yes, the "robust" design will allow for a short stop bit, but you can't count on all serial adaptors allowing for it. Part of the problem is that (at least as far as I know) the Asynchronous Serial Format isn't actually a "Published Standard", but just a de-facto protocol that is simple enough that it mostly just works, but still hides a few gotchas for corner cases.
On Monday, November 7, 2022 at 7:07:50 PM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> On Monday, November 7, 2022 at 4:30:37 PM UTC-4, Stef wrote:
> ...
>>> My not caring about the innards of a particular chip seems to let you
>>> think I don't care about anything. But we are not discussing my
>>> interests here, but your bus.
>>
>> Seems to me you wanted to talk about my interests when you said, "Why are you discussing this?" and then continued discussing that issue for some half dozen more posts.
>
> That was not my intention. It seemed to me that you cared about the
> internal implementation of the FTDI chip in relation to your bus
> problem. I just wanted to point out that is of no concern for your bus
> operation. And then I just got dragged in. ;-)
I'm always curious about how things are implemented. I thought I had heard somewhere that the FTDI chip was a fast, but small, processor. I design those for use in FPGA designs and they can be very effective. Often the code is very minimal.

--
Rick C.
---+- Get 1,000 miles of free Supercharging
---+- Tesla referral code - https://ts.la/richard11209
On Monday, November 7, 2022 at 7:14:56 PM UTC-4, Paul Rubin wrote:
> Rick C <gnuarm.del...@gmail.com> writes:
>> I could shove the details of tests into the FPGAs, so the commands are
>> more like, run test 1 on channel number 2. That would cut the number
>> of tests significantly, but require much more work in updating the
>> FPGA software.
That should have been, "cut back the number of commands".
> Are we circling back to the idea of putting a microprocessor on the test
> board? Ivan Sutherland famously called this a wheel of reincarnation:
>
> http://www.cap-lore.com/Hardware/Wheel.html
Zero need for a processor in the FPGA at this point. At least, not a conventional processor. The commands are things like: assert pin X, read pin Y. A test of some basic functionality that could be debugged separately from other tests would be a few of these instructions. Very easy to do in an FPGA by using memory blocks and stepping through the commands. But I'm open to a processor. It would be one of my own design, however.

--
Rick C.
---++ Get 1,000 miles of free Supercharging
---++ Tesla referral code - https://ts.la/richard11209
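A quick software model may make the command-memory idea concrete. This is only a sketch of the approach described above, not Rick's actual design; the opcodes, the encoding, and the I/O stubs are all invented for illustration:

    /* Minimal software model of a command sequencer of the kind described
     * above: a memory block holding simple test commands that a small FSM
     * steps through.  All names and encodings here are hypothetical. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum opcode { OP_END = 0, OP_ASSERT_PIN, OP_CLEAR_PIN, OP_READ_PIN };

    struct command {
        uint8_t op;   /* what to do          */
        uint8_t pin;  /* which pin to act on */
    };

    /* Stand-ins for the FPGA I/O; a real design would drive registers. */
    static void set_pin(uint8_t pin, int level) { printf("pin %u <= %d\n", pin, level); }
    static int  get_pin(uint8_t pin)            { (void)pin; return 1; /* pretend high */ }

    /* Step through the command memory until OP_END, like an FSM
     * incrementing a block-RAM address counter. */
    static void run_test(const struct command *mem)
    {
        for (size_t pc = 0; mem[pc].op != OP_END; pc++) {
            switch (mem[pc].op) {
            case OP_ASSERT_PIN: set_pin(mem[pc].pin, 1); break;
            case OP_CLEAR_PIN:  set_pin(mem[pc].pin, 0); break;
            case OP_READ_PIN:   printf("pin %u => %d\n", mem[pc].pin, get_pin(mem[pc].pin)); break;
            }
        }
    }

    int main(void)
    {
        const struct command test1[] = {
            { OP_ASSERT_PIN, 3 }, { OP_READ_PIN, 7 }, { OP_CLEAR_PIN, 3 }, { OP_END, 0 },
        };
        run_test(test1);
        return 0;
    }

In the FPGA version the table would live in a block RAM and the loop would be an address counter plus a small decoder; the C model just makes the control flow visible.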
On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
> On 11/7/22 11:05 AM, Rick C wrote:
>> [... exchange quoted in full earlier in the thread ...]
>
> YOU may consider it a design flaw, but I have seen too many serial ports
> with this flaw to just totally ignore it.
That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first detect the start bit, which lands the state machine in the middle of said start bit, and then to time to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need more hardware to time to the end of the stop bit. That hardware might be present for other purposes, but it should not be used to control looking for the start bit. This is the definition of the async protocol: the stop bit time is used to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit is defective by definition, and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter's clock.

I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of the asynchronous format. It would result in a larger offset in timing the middle of the bits, so the allowable timing error is less, but it would still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
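To make the disputed behavior concrete, here is a minimal sketch of a conventional 16x-oversampled receiver state machine. It is hypothetical, not taken from any particular UART; the point is the single place where the FSM returns to hunting for a start edge at the mid-stop-bit sample rather than half a bit later:

    /* Sketch of a 16x-oversampled UART receiver state machine, written to
     * illustrate the point above.  Hypothetical, not from any datasheet.
     * rx_tick() is called once per oversampling clock with the line level. */
    #include <stdint.h>

    enum rx_state { IDLE, START, DATA, STOP };

    static enum rx_state state = IDLE;
    static uint8_t phase;    /* 0..15 within the current bit     */
    static uint8_t bitnum;   /* which data bit we are collecting */
    static uint8_t shifter;  /* assembled character              */

    void rx_tick(int line, void (*deliver)(uint8_t ch))
    {
        switch (state) {
        case IDLE:
            if (line == 0) { state = START; phase = 0; }  /* falling edge */
            break;
        case START:
            if (++phase == 8) {            /* middle of the start bit */
                if (line != 0) { state = IDLE; break; }   /* glitch    */
                state = DATA; phase = 0; bitnum = 0; shifter = 0;
            }
            break;
        case DATA:
            if (++phase == 16) {           /* middle of a data bit    */
                shifter |= (uint8_t)((line & 1) << bitnum);
                phase = 0;
                if (++bitnum == 8) state = STOP;
            }
            break;
        case STOP:
            if (++phase == 16) {           /* middle of the stop bit  */
                if (line == 1) deliver(shifter);  /* else framing error */
                state = IDLE;  /* re-arm the edge hunt NOW, mid-stop-bit:
                                  this is what tolerates a fast transmitter.
                                  Waiting another 8 phases (to the END of
                                  the stop bit) is the flaw under discussion. */
            }
            break;
        }
    }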
> Yes, the "robust" design will allow for a short stop bit, but you can't > count on all serial adaptors allowing for it.
There are always garbage designs. I'm surprised I never ran into one. I guess with everything being crystal controlled, there was never enough error to add up to a bit.
> Part of the problem is that (at least as far as I know) the Asynchronous
> Serial Format isn't actually a "Published Standard", but just a de-facto
> protocol that is simple enough that it mostly just works, but
> still hides a few gotchas for corner cases.
True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. I learned that lesson in a class in school where I screwed up a detail in a program I wrote because I didn't understand the spec. I've asked questions ever since, and even if they seem like stupid questions, I don't read the specs wrong.

--
Rick C.
--+-- Get 1,000 miles of free Supercharging
--+-- Tesla referral code - https://ts.la/richard11209
On 08/11/2022 01:50, Rick C wrote:
> On Monday, November 7, 2022 at 7:07:50 PM UTC-4, Stef wrote:
>> [...]
>> That was not my intention. It seemed to me that you cared about
>> the internal implementation of the FTDI chip in relation to your
>> bus problem. I just wanted to point out that is of no concern for
>> your bus operation. And then I just got dragged in. ;-)
>
> I'm always curious about how things are implemented. I thought I had
> heard somewhere that the FTDI chip was a fast, but small, processor.
> I design those for use in FPGA designs and they can be very
> effective. Often the code is very minimal.
There's nothing wrong with curiosity. However, I have no doubt that you heard wrong, or heard about different FTDI devices, or that your source heard wrong. FTDI have been making these things for a couple of decades, since the earliest days of USB. You can be sure they are hardware peripherals, not software.

For /you/, and /your/ designs in FPGAs, adding a small processor can be a good solution. The balance is different for ASICs and for dedicated silicon, and it is different now than it was when FTDI made their MPSSE block for use in their devices. Really, we are not talking about a peripheral that is much more advanced than common serial communication blocks. It multiplexes a UART, an SPI and an I²C on the same pins. That's it. You don't bother with a processor and software for that.

FTDI /do/ make devices using embedded processors, with a few different types (I forget which - perhaps Tensilica cores). But those are other chips.
On 11/7/22 8:15 PM, Rick C wrote:
> On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
>> [... earlier quotes snipped; quoted in full above ...]
>> YOU may consider it a design flaw, but I have seen too many serial ports
>> with this flaw to just totally ignore it.
>
> That is exceedingly hard to imagine, since it would take extra logic to implement. [...] Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit is defective by definition, and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter's clock.
Depends on how you design it. IF you start a counter at the leading edge of the start bit and sample when the counter reaches its middle value, then the stop bit ends when the counter finally expires at the END of the stop bit.
> I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. [...] 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
IF you don't start looking for the start bit until the time has passed for the END of the stop bit, and the receiver is 0.1% slow, then every bit you lose 0.1% of a bit, or 1% per character. So after 50 consecutive characters you are half a bit late, and getting errors.
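The arithmetic generalizes. A throwaway calculation, assuming 10 bit times per 8N1 character and failure once the accumulated slip reaches half a bit, reproduces the 50- and 5000-character figures in this exchange and adds a 200 ppm mismatch for comparison:

    /* Back-of-envelope model of the drift described above: a receiver
     * that re-syncs only at the END of the stop bit accumulates the full
     * clock mismatch across every bit of an unbroken character stream. */
    #include <stdio.h>

    int main(void)
    {
        const double bits_per_char = 10.0;              /* start + 8 data + stop */
        const double mismatch[] = { 1e-3, 1e-5, 2e-4 }; /* 0.1%, 0.001%, 200 ppm */

        for (int i = 0; i < 3; i++) {
            /* slip per character, in bit times */
            double slip = mismatch[i] * bits_per_char;
            /* characters until accumulated slip reaches half a bit */
            printf("%g clock error -> fails after ~%.0f characters\n",
                   mismatch[i], 0.5 / slip);
        }
        return 0;
    }

This prints ~50 characters for 0.1%, ~5000 for 0.001%, and ~250 for a 200 ppm mismatch.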
> > >> Yes, the "robust" design will allow for a short stop bit, but you can't >> count on all serial adaptors allowing for it. > > There's always garbage designs. I'm surprised I never ran into one. I guess being crystal controlled, there was never enough error to add up to a bit.
As I pointed out, 0.1% means 50 characters, and 0.001% means 5000 characters; with a long enough string of characters you eventually hit the problem.

If you only use short messages, you never have a problem.
>> Part of the problem is that (at least as far as I know) the Asynchronous
>> Serial Format isn't actually a "Published Standard", but just a de-facto
>> protocol that is simple enough that it mostly just works, but
>> still hides a few gotchas for corner cases.
>
> True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. [...]
The problem is that if you describe the sampling as "Middle of bit", then going to the end of the stop bit makes sense. If you are adding functionality like RS-485 control that needs to know when the end of the bit is, it is easy to forget that the receiver has different needs than the transmitter.
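A sketch of the transmit side may show where the temptation comes from. Under RS-485 half duplex the driver enable must be held through the entire stop bit and then released, so a terminal count at the END of the stop bit legitimately exists in the transmitter; the mistake is reusing it to gate the receiver's start-bit search. The names and the 16x tick below are assumptions for illustration, not any particular part:

    /* Transmit-side sketch showing why an END-of-stop-bit event exists at
     * all: RS-485 half duplex needs the driver enabled for the WHOLE stop
     * bit, then released.  Hypothetical 16x-tick state machine. */
    #include <stdint.h>

    extern void line_out(int level);    /* TXD pin   */
    extern void driver_enable(int on);  /* RS-485 DE */

    void tx_char(uint8_t ch, void (*wait_tick)(void))
    {
        driver_enable(1);
        line_out(0);                               /* start bit */
        for (int t = 0; t < 16; t++) wait_tick();
        for (int bit = 0; bit < 8; bit++) {        /* data bits, LSB first */
            line_out((ch >> bit) & 1);
            for (int t = 0; t < 16; t++) wait_tick();
        }
        line_out(1);                               /* stop bit */
        for (int t = 0; t < 16; t++) wait_tick();  /* must run to the END */
        driver_enable(0);  /* the transmitter genuinely needs this terminal
                              count; the receiver must NOT reuse it to gate
                              its start-bit search */
    }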
On 11/7/22 7:50 PM, Rick C wrote:
> On Monday, November 7, 2022 at 7:07:50 PM UTC-4, Stef wrote:
>> [...]
>
> I'm always curious about how things are implemented. I thought I had heard somewhere that the FTDI chip was a fast, but small, processor. I design those for use in FPGA designs and they can be very effective. Often the code is very minimal.
The key is that if it is specified to have a quick disable-at-end-of-transmission capability, then you can count on that, and it is not left up to the speed of a program to turn off the transmitter. Sometimes we hit a blurry line between what is really a general purpose computer and what is an FSM doing an operation. Ultimately, we need to look at the specifications of performance to decide what we need to do.
On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
> On 11/7/22 8:15 PM, Rick C wrote:
>> On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
>>> YOU may consider it a design flaw, but I have seen too many serial ports
>>> with this flaw to just totally ignore it.
>>
>> That is exceedingly hard to imagine, since it would take extra logic to implement. [...]
>
> Depends on how you design it. IF you start a counter at the leading edge
> of the start bit and sample when the counter reaches its middle value, then
> the stop bit ends when the counter finally expires at the END of the
> stop bit.
There is still some extra logic needed to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything in the operation of the UART happens at the middle of a bit. You then need extra logic to distinguish the end of a bit.
>> I guess I'm not certain that would cause an error, actually. [...] 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
>
> IF you don't start looking for the start bit until the time has
> passed for the END of the stop bit, and the receiver is 0.1% slow, then
> every bit you lose 0.1% of a bit, or 1% per character. So after 50
> consecutive characters you are half a bit late, and getting errors.
There you go! You have just proven that no one would design a UART to work this way and have it survive in the marketplace. There would be too many applications where a data burst would cause it to fail. Programming around such a design flaw would be such a PITA, and would so expose the flaw, that the part would become a pariah. I recall the Intel USART was such a part for other technical flaws, so they finally came out with a new version that fixed the problems.
> >> Yes, the "robust" design will allow for a short stop bit, but you can't > >> count on all serial adaptors allowing for it. > > > > There's always garbage designs. I'm surprised I never ran into one. I guess being crystal controlled, there was never enough error to add up to a bit. > As I pointed out, 0.1% means 50 characters. 0.001% means 5000 > characters, long enough string of characters and eventually you hit the > problem. > > If you only use short messages, you never have a problem.
You mean if you have gaps with idle time.
>>> Part of the problem is that (at least as far as I know) the Asynchronous
>>> Serial Format isn't actually a "Published Standard", but just a de-facto
>>> protocol [...]
>>
>> True, but anyone designing chips should understand what they are designing. [...]
>
> The problem is that if you describe the sampling as "Middle of bit",
> then going to the end of the stop bit makes sense.
Sorry, you are not clear. This doesn't make sense to me. What is "going to the end of the stop bit"?
> If you are adding functionality like RS-485 control that needs to know
> when the end of the bit is, it is easy to forget that the receiver has
> different needs than the transmitter.
???

--
Rick C.
--+-+ Get 1,000 miles of free Supercharging
--+-+ Tesla referral code - https://ts.la/richard11209
On 9/11/22 00:50, Rick C wrote:
> On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
>> IF you don't start looking for the start bit until the time has
>> passed for the END of the stop bit, and the receiver is 0.1% slow, then
>> every bit you lose 0.1% of a bit, or 1% per character. So after 50
>> consecutive characters you are half a bit late, and getting errors.
>
> There you go! You have just proven that no one would design a UART to work this way and have it survive in the marketplace. [...]
Yeah, but you can still insist that the stop bit fills 99%, or 90%, of the required time, and not get that pathology. This is a branch of the principle "be rigorous in what you produce, permissive in what you accept".

I've personally moved away from that principle - I think being permissive too often just masks problems until they recur downstream, where they cannot be diagnosed. So I'm much more willing to reject bad input (or to complain but still accept it) early on.

CH
On Tuesday, November 8, 2022 at 7:45:14 PM UTC-4, Clifford Heath wrote:
> On 9/11/22 00:50, Rick C wrote:
>> On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
>>> [...]
>>
>> There you go! You have just proven that no one would design a UART to work this way and have it survive in the marketplace. [...]
>
> Yeah, but you can still insist that the stop bit fills 99%, or 90%, of
> the required time, and not get that pathology.
I'm not clear on what you are saying. The larger the clock difference, the earlier the receiver has to look for the start bit. It will work just fine with the start bit check delayed until the end of the stop bit, as long as the timing clocks aren't offset in one direction. Looking for the start bit in the middle of the stop bit gives a total of 5% tolerance, pretty much taking mistiming off the list of problems for async data transmission. Drop that to 0.05% (your 99% example) and you are in the realm of the crystal timing error on the two systems, ±250 ppm.

--
Rick C.
--++- Get 1,000 miles of free Supercharging
--++- Tesla referral code - https://ts.la/richard11209
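As a sanity check on the ±5% figure used throughout this thread: with a resync on every start edge and the last critical sample at the middle of the stop bit, the sample point for 8N1 sits 9.5 bit times after the edge and may drift at most half a bit. A two-line computation of that budget (a sketch under those standard assumptions):

    /* Rough async timing budget: after resyncing on the start edge, the
     * receiver's last critical sample is the middle of the stop bit,
     * 9.5 bit times later for 8N1.  The sample may drift at most half a
     * bit, so the combined clock mismatch e must satisfy e * 9.5 < 0.5. */
    #include <stdio.h>

    int main(void)
    {
        double last_sample = 9.5;               /* bit times from start edge */
        printf("max combined error: %.1f%%\n",  /* ~5%, as claimed above */
               100.0 * 0.5 / last_sample);
        return 0;
    }

0.5/9.5 comes to about 5.3%, which is the ~5% combined tolerance quoted above; delaying the resync to the end of the stop bit removes that margin in one direction, as discussed in the thread.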