EmbeddedRelated.com

Multicasting and Switches

Started by D Yuniskis November 8, 2010
D Yuniskis wrote:
> Hi Jim,
>
> Jim Stewart wrote:
>> Falling back to the educated guess disclaimer,
>> I'd say the maximum latency is indeterminate.
>>
>> It seems that by definition, that if the multicast
>> packet collides with another packet, the latency
>> will be indeterminate.
>
> That depends on the buffering in the switch. And,
> how the multicast packet is treated *by* the switch.
Since, to the best of my knowledge, in the event of an Ethernet collision both senders back off a random amount of time and then retransmit, I can't see how the switch buffering would make any difference. For that matter, does the sender even monitor for collisions and retransmit in a multicast environment? I guess I don't know...
Jim Stewart wrote:
> For that matter, does the sender even monitor for
> collisions and retransmit in a multicast environment?
Only when a reliable multicast protocol is layered on top. The outbound packets must be numbered in some way so the recipients know when they've missed one, and they NACK back to the source. If the packet got dropped at the source, rather than at some hub/switch, you get a problem of NACK flooding. This is the exact inverse of the problem of sending individual streams to each destination, except that it only occurs on packet loss.

Reliable multicast protocols deal with NACK flooding in various ways. The most obvious is to coalesce them at the switch/router, but that might require changes in router infrastructure.

When I implemented a P2P file distribution platform, I decided that multicast wasn't useful and went instead for broadcasts. The LDSS protocol (Local Download Sharing Service) that I devised (and got an IANA port number for) enables nodes on the same LAN to schedule and share file downloads from a limited WAN pipe, without any master.

Each node is responsible for knowing what it wants, how much WAN bandwidth is allowed at this time of day (to be shared across all peers), how much is currently being used, and what downloads have been scheduled by other peers. In return, it limits its download to a fair share of the allowed WAN bandwidth (distributed rate limiting), sends progress updates, and shares completed downloads using TCP transfers. Every I/O (whether disk, LAN or WAN) is scheduled to a rate limit, to maintain a low impact on normal system operations. It was an interesting project!

Clifford Heath.
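The numbering-plus-NACK scheme described here can be sketched in a few lines. This is only an illustration of the gap-detection side (class and method names are made up, not from any real protocol):

```python
# Sketch of the receiving side of a reliable multicast scheme: packets
# carry sequence numbers, and the receiver records a NACK for any it
# has missed. All names here are illustrative.

class MulticastReceiver:
    def __init__(self):
        self.next_expected = 0   # next sequence number we want
        self.nacks = []          # sequence numbers to request again

    def on_packet(self, seq, payload):
        """Accept a numbered packet; record a NACK for any gap."""
        if seq > self.next_expected:
            # Packets next_expected..seq-1 were lost somewhere upstream.
            self.nacks.extend(range(self.next_expected, seq))
        if seq >= self.next_expected:
            self.next_expected = seq + 1
        # (a real implementation would also buffer out-of-order payloads)

receiver = MulticastReceiver()
receiver.on_packet(0, b"a")
receiver.on_packet(1, b"b")
receiver.on_packet(4, b"e")   # packets 2 and 3 were dropped
print(receiver.nacks)         # -> [2, 3]
```

If every receiver that saw the same loss sends such a NACK at once, the source gets the NACK flood described above; coalescing or randomized NACK delays are the usual mitigations.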
Jim Stewart wrote:
> D Yuniskis wrote:
>> Hi Jim,
>>
>> Jim Stewart wrote:
>>> D Yuniskis wrote:
>
>> No, I'm wanting to know how *you* and your *neighbor*
>> and your *friends* are going to be deploying network fabric
>> in your house in the next 5-10 years to handle the shift
>> to the X-over-IP distribution systems that are in the
>> pipeline TODAY. Will you have to buy new "AV" switches?
>> Will those switches take pride in advertising themselves:
>> "New and improved! Now supports *3* multicast streams!!"
>> Or, will folks just be complaining that their video
>> spontaneously pixelates, etc.?
>
> Now we are getting somewhere. I happen to
> have ATT U-verse in my house, running 3 SD
> video streams and 3mbit internet. It is
> running quite happily over the existing 10/100
> ethernet installation using generic netgear
> switches. Of the 3 ATT "cable boxes", 2 are
> at least 2 switches downstream from the U-verse
> router.
Do you know if it is multicasting those streams or *unicasting* them? Also, all of your streams are sourced from a single "host" (port on the switch). I.e., you can never (theoretically) put more than 100Mb into the network because the wire connecting to your source has that inherent limitation. OTOH, if you multicast from different sources, you can exceed the bandwidth of the network.

I'm looking at the network fabric as a resource that can freely be used -- not just dedicated to a head-end source. E.g., imagine pushing audio+video from a PC in your bedroom to a display in your living room... while pushing audio+video from your broadband link to a display in your "guest bedroom"... while pushing audio from a media server to speakers in the garage... while viewing the security camera at the front door on a monitor "someplace", etc.

I think when there is a single source, the problem is easier to solve/self-limiting. OTOH, when you treat it as just *fabric*, you have to be much more pedantic about enumerating the limitations.

Replacing the switch with a router goes a long way to "solving" (i.e., minimizing the impact of) the problem. Integrating the media server *in* that router would also be a big win -- for locations that have a sole media spigot.
> Note that I do have 3 concurrent video streams
> running 24-7. Normally the ATT settop box
> times out and drops the stream if the channel
> doesn't change in 4 or so hours. Since we use
> Tivos downstream of the settop boxes, I had to
> program each tivo to switch channels periodically
> so that the stream wouldn't drop and the Tivo
> wouldn't record the "Press OK" screen the settop
> box puts up when the stream drops.
>
> BTW, U-verse also has the option of running its
> digital link over existing 75 ohm antenna coax.
> I haven't tried it.
>
> Haven't seen any spontaneous pixelations. There's
> other occasional digital weirdness, but not so
> much as I saw coming in on Comcast analog cable,
> presumably from their headend issues.
D Yuniskis wrote:
> Jim Stewart wrote:
>> D Yuniskis wrote:
>>> Hi Jim,
>>>
>>> Jim Stewart wrote:
>>>> D Yuniskis wrote:
>>
>>> No, I'm wanting to know how *you* and your *neighbor*
>>> and your *friends* are going to be deploying network fabric
>>> in your house in the next 5-10 years to handle the shift
>>> to the X-over-IP distribution systems that are in the
>>> pipeline TODAY. Will you have to buy new "AV" switches?
>>> Will those switches take pride in advertising themselves:
>>> "New and improved! Now supports *3* multicast streams!!"
>>> Or, will folks just be complaining that their video
>>> spontaneously pixelates, etc.?
>>
>> Now we are getting somewhere. I happen to
>> have ATT U-verse in my house, running 3 SD
>> video streams and 3mbit internet. It is
>> running quite happily over the existing 10/100
>> ethernet installation using generic netgear
>> switches. Of the 3 ATT "cable boxes", 2 are
>> at least 2 switches downstream from the U-verse
>> router.
>
> Do you know if it is multicasting those streams
> or *unicasting* them?
Don't know, but others do. Check this out:

http://www.uverserealtime.com/

If I didn't have so many things on my plate right now I'd run it and send you the data.
Hi Jim,

Jim Stewart wrote:
> D Yuniskis wrote:
>> Jim Stewart wrote:
>
>>> Falling back to the educated guess disclaimer,
>>> I'd say the maximum latency is indeterminate.
>>>
>>> It seems that by definition, that if the multicast
>>> packet collides with another packet, the latency
>>> will be indeterminate.
>>
>> That depends on the buffering in the switch. And,
>> how the multicast packet is treated *by* the switch.
>
> Since to the best of my knowledge, in the event of
> an ethernet collision, both senders back off a random
> amount of time then retransmit, I can't see how the
> switch buffering would make any difference.
The time a packet (*any* packet) spends buffered in the switch looks like an artificial transport delay (there's really nothing "artificial" about it :> ). Hence my comment re: "speed of light" delays.

When you have multicast traffic, the delay through the switch can vary depending on the historical traffic seen by each targeted port. I.e., if port A has a packet already buffered/queued while port B does not, then the multicast packet will get *to* the device on port B quicker than on port A.

If you have two or more streams and are hoping to impose a temporal relationship on them, you need to know how they will get to their respective consumers.
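The port A/port B example can be put in numbers. This toy calculation (not a switch model; the 100 Mb/s port speed is an assumption) shows how one queued full-size frame translates into a per-port skew for the same multicast packet:

```python
# Toy illustration of per-port skew: the same multicast packet leaves
# different switch ports at different times, because each port must
# first drain whatever is already queued on it. 100 Mb/s is assumed.

LINK_BPS = 100_000_000

def drain_delay_us(queued_bytes):
    """Microseconds before a newly queued packet starts transmitting."""
    return queued_bytes * 8 / LINK_BPS * 1_000_000

# Port A already holds one full-size Ethernet frame; port B is idle.
skew = drain_delay_us(1500) - drain_delay_us(0)
print(skew)  # -> 120.0 microseconds of relative delay
```

120 microseconds is tiny per packet, but it varies packet by packet with the competing traffic, which is exactly why a fixed temporal relationship between streams can't be assumed.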
> For that matter, does the sender even monitor for
> collisions and retransmit in a multicast environment.
> I guess I don't know...
Multicast is like "shouting from the rooftop -- WITH A DEAF EAR". If it gets heard, great. If not, <shrug>.

There are reliable multicast protocols that can be built on top of this. They allow "consumers" to request retransmission of portions of the "broadcast" that they may have lost (since the packet may have been dropped at their doorstep or anyplace along the way).

With AV use, this gets to be problematic because you want to reduce buffering in the consumers, minimize latency, etc. So, the time required to detect a missing packet, request a new copy of it and accept that replacement copy (there is no guarantee that you will receive this in a fixed time period!) conflicts with those other goals (assuming you want to avoid audio dropouts, video pixelation, etc.).

Remember that any protocol overhead you *add* contributes to the problem, to some extent (as it represents more network traffic and more processing requirements). The "ideal" is just to blast UDP packets down the pipe and *pray* they all get caught.
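The "blast UDP and pray" approach is only a few lines with the standard socket API. A minimal sketch (the group address and port are arbitrary choices for illustration, not from the thread):

```python
import socket
import struct

# Minimal "blast UDP and pray" multicast sender. The group address and
# port below are arbitrary illustrative values.
GROUP, PORT = "239.1.2.3", 5004

def blast(frames):
    """Send numbered datagrams to the group; no ACKs, no retries."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep the stream on the local segment only.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    # Route the multicast out via loopback so the sketch runs anywhere.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton("127.0.0.1"))
    sent = 0
    for seq, payload in enumerate(frames):
        # A 32-bit sequence number lets consumers *detect* loss, even
        # though plain UDP multicast will never repair it.
        sock.sendto(struct.pack("!I", seq) + payload, (GROUP, PORT))
        sent += 1
    sock.close()
    return sent

print(blast([b"frame0", b"frame1", b"frame2"]))
```

Note the sender never learns whether anyone caught a packet; any reliability (the NACK machinery discussed earlier) has to be layered on top of the sequence numbers.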
On Tue, 09 Nov 2010 12:41:58 -0700, D Yuniskis
<not.going.to.be@seen.com> wrote:

>Hi Paul,
>
>Paul Keinanen wrote:
>> On Mon, 08 Nov 2010 14:35:07 -0700, D Yuniskis
>> <not.going.to.be@seen.com> wrote:
>>
>>> On any *wired* network (save the wireless complications
>>> for later) using a star *physical* topology beyond
>>> 10BaseT, there is a potential for multicast packets
>>> to NOT arrive at all nodes coincidentally (i.e.,
>>> ignoring "speed of light" propagation down the wire).
>>>
>>> Presumably, switches enqueue incoming multicast packets
>>> on *all* outbound ports. Since there is no way of knowing
>>> what's already queued on a particular port, the storage
>>> time in the switch can vary from port to port.
>>
>> Use a dumb hub.
>
>Ha! I guess that's a solution! Do they make GB hubs?
>It seems like this problem will only be getting more
>significant in the coming years... especially in the
>SOHO market.
>
>Perhaps this suggests running *two* networks -- one
>that uses hubs ("bus" topology) that is friendly to
>multicast traffic and the other using switches for
>unicast traffic?
I do not know if this is really relevant, but check out what switches advertised with "IEC 61850 support" actually do differently compared to other switches. That protocol relies heavily on MAC-level broadcasts for real-time traffic, as well as on ordinary IP traffic for non-real-time traffic.
Hi Jim,

Jim Stewart wrote:

>>>> No, I'm wanting to know how *you* and your *neighbor*
>>>> and your *friends* are going to be deploying network fabric
>>>> in your house in the next 5-10 years to handle the shift
>>>> to the X-over-IP distribution systems that are in the
>>>> pipeline TODAY. Will you have to buy new "AV" switches?
>>>> Will those switches take pride in advertising themselves:
>>>> "New and improved! Now supports *3* multicast streams!!"
>>>> Or, will folks just be complaining that their video
>>>> spontaneously pixelates, etc.?
>>>
>>> Now we are getting somewhere. I happen to
>>> have ATT U-verse in my house, running 3 SD
>>> video streams and 3mbit internet. It is
>>> running quite happily over the existing 10/100
>>> ethernet installation using generic netgear
>>> switches. Of the 3 ATT "cable boxes", 2 are
>>> at least 2 switches downstream from the U-verse
>>> router.
>>
>> Do you know if it is multicasting those streams
>> or *unicasting* them?
>
> Don't know, but others do. Check this out:
>
> http://www.uverserealtime.com/
>
> If I didn't have so many things on my plate right now
> I'd run it and send you the data.
To clarify: I'm concerned at what goes on inside the home. E.g., I suspect AT&T (or other content provider) multicasts certain content to "all subscribers" and that content is then STORED on a local PVR for "delayed viewing".

Regardless of how the content gets to the "media server" within the home, I am interested in how that content is distributed to clients *within* the home. I suspect each "consumer" (assuming a consumer is NOT another PVR) ends up receiving a unicast stream.

Consider: if "you" decide to watch "a movie" (NOT a real-time broadcast!) and another person in your home independently opts to watch the same movie "5 minutes later", the second viewer doesn't "miss" the first five minutes of the movie (which would have been true of a live TV broadcast!). Unless each consumer endpoint has storage facilities, this requires a separate stream from the media server to each endpoint (while those *could* be multicast streams, it would seem foolish to do so UNLESS you knew two or more consumers wanted to watch the content synchronously).

As such, the load is reflected directly to the server and the switch is spared the burden/responsibility of packet amplification.

For "live TV", the issue gets muddied. If only a single endpoint is viewing the content, then multicasting needlessly floods the network with "undirected" traffic (unless the switch is smart). OTOH, unicasting "doubles" the load on the server (nits...).

I guess one shirt-cuff test would be to watch 5 different channels on 5 different displays and look at the load on the switch... (?) I.e., the weight of 5 multicast streams vs. 5 unicast streams should be pretty obvious without counting packets.

I guess I need to look to see which protocols are wrapped in UPnP to better *guess* at the capabilities that it *could* support...
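The shirt-cuff test can be pre-computed. Assuming a round 2 Mb/s per SD stream (an assumed figure, not from the thread), the load on the *server's* switch port differs sharply between the two cases, because the switch does the replication for multicast:

```python
# Shirt-cuff arithmetic: aggregate traffic the server's single switch
# port must carry for N viewers of the *same* live stream.
# 2 Mb/s per SD stream is an assumed round number.

STREAM_MBPS = 2

def server_port_load(viewers, multicast):
    """Mb/s leaving the server's port; the switch replicates multicast."""
    return STREAM_MBPS * (1 if multicast else viewers)

print(server_port_load(5, multicast=True))   # -> 2
print(server_port_load(5, multicast=False))  # -> 10
```

With 5 *different* channels on 5 displays, multicast loses its advantage (each stream has one viewer), which is the "needless flooding vs. doubled server load" trade-off described above.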
"D Yuniskis" <not.going.to.be@seen.com> wrote in message 
news:ibeo9j$979$1@speranza.aioe.org...
> Hi Jim,
>
> Jim Stewart wrote:
>
>>>>> No, I'm wanting to know how *you* and your *neighbor*
>>>>> and your *friends* are going to be deploying network fabric
>>>>> in your house in the next 5-10 years to handle the shift
>>>>> to the X-over-IP distribution systems that are in the
>>>>> pipeline TODAY. Will you have to buy new "AV" switches?
>>>>> Will those switches take pride in advertising themselves:
>>>>> "New and improved! Now supports *3* multicast streams!!"
>>>>> Or, will folks just be complaining that their video
>>>>> spontaneously pixelates, etc.?
>>>>
>>>> Now we are getting somewhere. I happen to
>>>> have ATT U-verse in my house, running 3 SD
>>>> video streams and 3mbit internet. It is
>>>> running quite happily over the existing 10/100
>>>> ethernet installation using generic netgear
>>>> switches. Of the 3 ATT "cable boxes", 2 are
>>>> at least 2 switches downstream from the U-verse
>>>> router.
>>>
>>> Do you know if it is multicasting those streams
>>> or *unicasting* them?
>>
>> Don't know, but others do. Check this out:
>>
>> http://www.uverserealtime.com/
>>
>> If I didn't have so many things on my plate right now
>> I'd run it and send you the data.
>
> To clarify: I'm concerned at what goes on inside the home.
> E.g., I suspect AT&T (or other content provider) multicasts
> certain content to "all subscribers" and that content is then
> STORED on a local PVR for "delayed viewing".
>
> Regardless of how the content gets to the "media server" within
> the home, I am interested in how that content is distributed
> to clients *within* the home. I suspect each "consumer" (assuming
> a consumer is NOT another PVR) ends up receiving a unicast stream.
>
> Consider: if "you" decide to watch "a movie" (NOT a real-time
> broadcast!) and another person in your home independently opts
> to watch the same movie "5 minutes later", the second viewer
> doesn't "miss" the first five minutes of the movie (which would
> have been true of a live TV broadcast!). Unless each consumer
> endpoint has storage facilities, this requires a separate
> stream from the media server to each endpoint (while those *could*
> be multicast streams, it would seem foolish to do so UNLESS you
> knew two or more consumers wanted to watch the content synchronously).
>
> As such, the load is reflected directly to the server and the
> switch is spared the burden/responsibility of packet amplification.
>
> For "live TV", the issue gets muddied. If only a single endpoint
> is viewing the content, then multicasting needlessly floods the
> network with "undirected" traffic (unless the switch is smart).
> OTOH, unicasting "doubles" the load on the server (nits...).
>
> I guess one shirt-cuff test would be to watch 5 different channels
> on 5 different displays and look at the load on the switch... (?)
> I.e., the weight of 5 multicast streams vs. 5 unicast streams
> should be pretty obvious without counting packets.
>
> I guess I need to look to see which protocols are wrapped
> in UPnP to better *guess* at the capabilities that it *could*
> support...
I think it would be easier to use a sniffer like Wireshark and check the packet types, etc., for movie vs. broadcast.
Hi Clifford,

Clifford Heath wrote:
> When I implemented a P2P file distribution platform, I
> decided that multicast wasn't useful and went instead for
Why? --------------------^^^^^^^^^^^^^
> broadcasts. The LDSS protocol (Local Download Sharing
I'll have to look at the RFC's...
> Service) that I devised (and got an IANA port number for)
> enables nodes on the same LAN to schedule and share file
> downloads from a limited WAN pipe, without any master.
> Each node is responsible for knowing what it wants, how
> much WAN bandwidth is allowed at this time of day (to be
> shared across all peers), how much is currently being
> used, and what downloads have been scheduled by other
> peers. In return, it limits its download to a fair share
> of the allowed WAN bandwidth (distributed rate limiting),
> sends progress updates, and shares completed downloads
> using TCP transfers. Every I/O (whether disk, LAN or WAN)
> is scheduled to a rate limit, to maintain a low impact on
> normal system operations. It was an interesting project!
This is geared towards asynchronous sharing, no doubt. How would you (re)consider your design choices in a *synchronous* environment? E.g., imagine all of those peers requesting the same "file" at the same "instant"? (within some small "epsilon" of "application time")

How would you (re)consider that same scenario in a wireless network (with nodes closely located -- "tight weave" instead of a "loose mesh")?

D Yuniskis wrote:
> Clifford Heath wrote:
>> When I implemented a P2P file distribution platform, I
>> decided that multicast wasn't useful and went instead for
>
> Why? --------------------^^^^^^^^^^^^^
For this app, the computers targeted are almost always on the same subnet, so broadcasts naturally get propagated to just the places they're needed (by default, broadcast traffic stops at subnet boundaries). LAN traffic is regarded as essentially free; it's WAN traffic that needs to be limited and shared.
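Sending such a subnet-limited announcement is straightforward with the socket API. A minimal sketch (the default port here is illustrative; check IANA for the actual ldss assignment):

```python
import socket

# Sketch of a LAN broadcast send: a datagram to the limited-broadcast
# address reaches every host on the local subnet and is dropped by the
# first router. The default port is an illustrative placeholder.

def announce(payload, addr="255.255.255.255", port=6087):
    """Broadcast one datagram; returns the number of bytes sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The kernel refuses broadcast destinations unless this is set.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sent = sock.sendto(payload, (addr, port))
    sock.close()
    return sent
```

A receiver simply binds the same UDP port; no group membership (IGMP) is involved, which is part of the "broadcasts are naturally scoped to the subnet" argument above.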
>> broadcasts. The LDSS protocol (Local Download Sharing
>
> I'll have to look at the RFC's...
It got to a Draft, which has expired, but I've attached it below. Very simple protocol; two messages only (since files are only identified by SHA-1 hash); just NeedFile and WillSend, having the same packet structure.
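Per the attached draft, both messages share one fixed 72-byte header. A sketch of packing it with Python's `struct`, following my reading of the draft's field layout (Message-ID, Digest-type, Compression, a pad byte, Cookie, 32-byte digest, Size, Offset, Target-Time, Unused, Port, Rate, Priority, Property-Count); the field widths are inferred from the diagram, so treat them as an interpretation:

```python
import hashlib
import struct

# Fixed header per my reading of the draft's diagram: three 1-byte
# codes plus a pad byte, a 4-byte cookie, a 32-byte digest, two 64-bit
# sizes, two 32-bit words, and four 16-bit fields = 72 bytes.
HEADER = struct.Struct("!BBBxI32sQQIIHHHH")

NEED_FILE, WILL_SEND = 0, 1   # Message-ID codes from the draft
SHA1 = 1                      # Digest-type code from the draft

def need_file(data, cookie=0x12345678, target_time=20):
    """Build a NeedFile message for `data` (digest padded to 32 bytes)."""
    digest = hashlib.sha1(data).digest().ljust(32, b"\0")
    return HEADER.pack(NEED_FILE, SHA1, 0, cookie, digest,
                       len(data), 0, target_time, 0, 0, 0, 0, 0)

msg = need_file(b"hello world")
print(len(msg))  # -> 72
```

Since NeedFile and WillSend share the structure, a WillSend builder would differ only in the Message-ID and in how Offset/Rate are filled in.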
>> normal system operations. It was an interesting project!
>
> This is geared towards asynchronous sharing, no doubt.
Software distribution. All machines fetch an individual policy file saying what software they should install, and in most cases there is overlap; more than one machine needs the same software. Rather than all downloading a separate copy, they announce their plans, progress, and ETA, so others know they can wait for a LAN transfer when it's done.
> How would you (re)consider your design choices in a
> *synchronous* environment?
In the same-subnet scenario, I'd probably still use broadcast, unless the utilization is likely to reach a significant percentage (say, >25%) of the media's capability, or there is a likelihood of multiple synchronized groups which won't necessarily have to pass traffic across the same link. The latter case is pretty rare, actually -- a home media subnet is likely all going through one switch and hence limited by its capability. Using an IGMP-aware router is unlikely to help.
> How would you (re)consider that same scenario in a wireless
> network (with nodes closely located -- "tight weave"
> instead of a "loose mesh")?
I'm not familiar with the implementation of broadcast/multicast IP in a wireless environment, but I can't imagine that it would change very much.

Clifford Heath.

[Attachment: draft-heath-ldss-00.txt]

                                                 ManageSoft  C. Heath
Draft                                                      ManageSoft
                                                        February 2006

                    Local Download Sharing Service.

Copyright Notice

   Copyright (C) ManageSoft 2006.

Abstract

   This protocol provides file sharing for LAN-based peers on small
   LANs (up to twenty nodes) where peers have received a cryptographic
   digest of each file they require. It also provides progress
   monitoring and download scheduling for files being received from
   outside the LAN, allowing single-download of such files.

Search Protocol

   The Local Download Sharing Service protocol consists of two
   messages: NeedFile and WillSend, broadcast on an agreed UDP port.
   Since broadcast messages are normally dropped at the first router,
   this protocol is only useful on switched LANs unless special router
   configuration is performed.

   Either message may be sent at any time, though there is an
   expectation that an application that sends a NeedFile message will
   see any matching WillSend messages from a capable peer within a
   defined search duration, such as twenty seconds.

Search Message Format

   Both messages have substantially similar structure:

        0          7 8        15 16       23 24       31
       +----------+----------+----------+----------+
     0 | Message- | Digest-  | Compres- |          |
       | ID       | type     | sion     |          |
       +----------+----------+----------+----------+
     4 | Cookie                                    |
       +----------+----------+----------+----------+
     8 | Digest (32)                               |
       |                                     ......|
       +----------+----------+----------+----------+
    40 | Size                                      |
       |                                           |
       +----------+----------+----------+----------+
    48 | Offset                                    |
       |                                           |
       +----------+----------+----------+----------+
    56 | Target-Time                               |
       +----------+----------+----------+----------+
    60 | Unused                                    |
       +----------+----------+----------+----------+
    64 | Port                | Rate                |
       +----------+----------+----------+----------+
    68 | Priority            | Property-Count      |
       +----------+----------+----------+----------+
    72 | Properties                                |
       +----------+----------+----------+----------+ .....

   Message-ID is zero (0) for NeedFile, one (1) for WillSend. Other
   values are not defined.

   Digest-type indicates what type of cryptographic digest is present
   in the digest field. It is set to zero (0) for MD5, one (1) for
   SHA-1, and two (2) for SHA-256.

   Compression indicates what type of compression is used or expected.
   It is set to zero (0) for no compression, one (1) for GZIP.

   Cookie is a four-byte random value that carries no meaning, but
   should be set to a random value with each transmitted message to
   help recognise a received message as having originated in the
   receiving context.

   Digest is a 32-octet field that contains the bytes of the
   cryptographic digest, padded with zero octets where the digest is
   smaller than 32 bytes.

   Size is a 64-bit integer indicating the size of the requested file
   in bytes. If the size is not known, this field may be set to zero.

   Offset is a 64-bit integer indicating the number of bytes of the
   requested file that are already available and either not needed
   (NeedFile) or that could be sent immediately (WillSend).

   Target-time is a 32-bit integer indicating a time in seconds from
   the time the message is transmitted. For a NeedFile request, it
   indicates a time by which the requester hopes the requested file
   transfer can be completed. For a WillSend message, it indicates a
   time by which the sender expects that the entire file will be
   available (Offset equals Size).
   In response to a NeedFile request, the intention is that the
   requester should delay further requests for that file until the
   specified time, unless another party can provide the requested
   file sooner.

   The Unused field should be set to zero.

   The Port field, if non-zero, indicates a TCP port number on which
   a file transfer can take place, as discussed in the section below
   on TCP File Transfer protocols.

   The Rate field is a 16-bit integer which may be set to a non-zero
   value in a WillSend message to indicate the approximate current
   rate at which the subject file is being received (the rate at
   which Avail is increasing). It's represented as the integer part
   of the logarithm to base 2 of the data rate estimate in bytes per
   second, multiplied by 2048. This defines the limits on the
   representation of data rate: a rate value of 1 corresponds to a
   data rate of just over 1 byte per second, and a rate value of
   65535 corresponds to over 4GB/second. This representation allows
   a resolution of under 1%.

   Finally, the Property-Count field contains a count of following
   name/value pairs that extend the information in the basic message.
   No property names are yet defined, so the Property-Count field
   must always be set to zero. When property names are defined in
   future, the structure of the following data will be defined.

TCP File Transfer protocols

   In the case of a NeedFile message, the Port number indicates that
   the sender is listening on that port for the requested TCP byte
   stream, which should start at the requested Offset and run for
   Size-Offset bytes, or until the file is complete and matches the
   digest. No structure is superimposed over such transfers. The
   first responder to connect to the specified port should transmit
   the requested stream. Other responders (or any response that comes
   too late, after the requester has stopped listening) will normally
   be rejected, or accepted and dropped without the acceptor reading
   any bytes.
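The Rate field's log encoding can be checked against the limits quoted in the draft. This sketch assumes the "integer part" is taken after multiplying by 2048 (the reading that matches the stated sub-1% resolution):

```python
import math

# Rate field encoding as I read the draft: the integer part of
# log2(bytes_per_second) * 2048, clamped to 16 bits.

def encode_rate(bytes_per_sec):
    if bytes_per_sec < 1:
        return 0
    return min(65535, int(math.log2(bytes_per_sec) * 2048))

def decode_rate(rate):
    return 2 ** (rate / 2048)

# The quoted limits: a rate of 1 is just over 1 byte/s, and a rate
# of 65535 is over 4 GB/s; step size is 2**(1/2048), i.e. ~0.034%.
print(decode_rate(1))       # just over 1.0
print(decode_rate(65535))   # a bit over 4e9
```

The round trip `decode_rate(encode_rate(x))` stays within the draft's claimed 1% of `x`, since each step multiplies the rate by only about 1.00034.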
   In the case of a WillSend message, the port number indicates that
   the sender is listening for TCP requests on the specified port.
   Such requests bear a similarity to simple HTTP requests. They
   consist of four lines separated by linefeed (ASCII 10) characters.
   Each line consists of a header keyword, a colon and space
   character, and a value. The four headers are "Digest", "Start",
   "Size" and "Compression".

   The Digest value is the concatenation of the digest name ("md5",
   "sha-1", "sha-256"), a dash character "-", and enough pairs of
   hexadecimal digits to form the digest contents. The Start and Size
   values are numbers represented as strings of decimal digits. Start
   is equivalent to the Offset field in the search protocol.
   Compression is a decimal number, 0 for none and 1 for gzip.

   When the request has been received, the requested file portion
   will normally be transmitted, assuming the offered file is still
   available and the sender is not too busy. If any error occurs,
   the connection is dropped and the requester should use an
   alternate source for the requested file or conduct a fresh search.
   Any TCP transfer is subject to early termination by either party,
   which is not to be considered an error by the other party.

Progress announcements

   Without regard for any NeedFile messages, any program receiving
   (say from a slow WAN connection) a file that is available for
   sharing on the LAN may send a WillSend message to indicate rate of
   progress and expected time to completion. If sent frequently
   enough in an environment where all local peers share a limited WAN
   bandwidth, such messages may be used to allow each peer to
   rate-limit its own WAN usage so as to keep the total usage under
   some configured limit. This relies on the rate estimation using a
   time constant greater than the interval between such
   announcements, but in most circumstances where limited WAN
   bandwidth is an issue, an interval of between ten and sixty
   seconds is appropriate.
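Building the four-line TCP request described above is mechanical. A sketch (the digest string is an arbitrary example, and the trailing linefeed is my assumption, not stated in the draft):

```python
# Sketch of the four-line TCP transfer request from the draft:
# "Header: value" lines joined by linefeeds. The trailing linefeed
# is an assumption; the draft only says the lines are LF-separated.

def transfer_request(digest_hex, start, size, compression=0):
    lines = [
        "Digest: sha-1-" + digest_hex,   # digest name, dash, hex pairs
        "Start: %d" % start,             # equivalent to Offset
        "Size: %d" % size,
        "Compression: %d" % compression, # 0 = none, 1 = gzip
    ]
    return "\n".join(lines) + "\n"

print(transfer_request("da39a3ee" * 5, 0, 4096))
```

The responding peer would then stream `size - start` bytes of the file over the same connection, with no further framing.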
Security Considerations

   This protocol provides no way to restrict access to certain files
   to a limited group of peers. Any file which is available for
   sharing may be requested by any peer on the LAN that knows the
   file's digest.

   Because of the characteristics of the cryptographic digests used,
   it's not feasible for a rogue agent to cause bogus files to be
   accepted in response to a request, though it may send such a file.
   The requester will normally have performed a search and received
   one or more offers. If a file turns out not to have the expected
   digest, that offer and the bad file are discarded and other offers
   tried. Should all offers be bad, the requester will normally
   resort to fetching the required file from the WAN, disregarding
   peers that may claim to have it. This behaviour limits the extent
   of a denial of service attack against the protocol.

   Finally, because a requester normally knows the expected size of
   the file, any transfer which would continue past that size and
   possibly fill up the available storage space can be avoided.

Author's Address:

   Clifford J. Heath
   ManageSoft, 56-60 Rutland Rd,
   Box Hill 3128, Melbourne, Australia.
