Reply by Tom January 4, 2005
"Hans-Bernhard Broeker" <broeker@physik.rwth-aachen.de> wrote in message news:31ov16F3du2iuU1@news.dfncis.de...

> 1) replacement of former "legacy" ports (keyboard, mouse, printer, modem, ...)
> 2) high-bandwidth devices (nowadays usually USB-2.0 hi-speed), like
> external harddisks and video stuff.
> 3) stupid stuff, including USB-to-USB "network cables".
>
> My point is that 1) and 2) don't have any business travelling over
> the same pair of wires, and for 2), defining a new protocol was
> completely superfluous already when USB was originally designed ---
> 1394 filled that niche nicely, at higher bandwidth, with less hassle.
I get the impression that USB was originally designed to work like ADB on Macs - i.e. for 1). The HID stuff is pretty neat: you can use a tool to generate the HID class descriptor and burn it into the ROM of a low-cost microcontroller with a low-speed USB transceiver. Sure, the driver on the PC is probably a nightmare internally, but that only needs to be done once per OS.
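For a sense of scale, here is roughly what such a tool produces for a three-button mouse -- essentially the sample report descriptor from the HID specification appendix, written out as a C table (a sketch; any real tool's output will differ in the details):

/* Report descriptor for a 3-button mouse with relative X/Y movement.
   Based on the example descriptor in the HID specification appendix. */
static const unsigned char mouse_report_desc[] = {
    0x05, 0x01,    /* Usage Page (Generic Desktop)            */
    0x09, 0x02,    /* Usage (Mouse)                           */
    0xA1, 0x01,    /* Collection (Application)                */
    0x09, 0x01,    /*   Usage (Pointer)                       */
    0xA1, 0x00,    /*   Collection (Physical)                 */
    0x05, 0x09,    /*     Usage Page (Buttons)                */
    0x19, 0x01,    /*     Usage Minimum (Button 1)            */
    0x29, 0x03,    /*     Usage Maximum (Button 3)            */
    0x15, 0x00,    /*     Logical Minimum (0)                 */
    0x25, 0x01,    /*     Logical Maximum (1)                 */
    0x95, 0x03,    /*     Report Count (3)                    */
    0x75, 0x01,    /*     Report Size (1)  -> 3 button bits   */
    0x81, 0x02,    /*     Input (Data, Variable, Absolute)    */
    0x95, 0x01,    /*     Report Count (1)                    */
    0x75, 0x05,    /*     Report Size (5)  -> padding bits    */
    0x81, 0x01,    /*     Input (Constant)                    */
    0x05, 0x01,    /*     Usage Page (Generic Desktop)        */
    0x09, 0x30,    /*     Usage (X)                           */
    0x09, 0x31,    /*     Usage (Y)                           */
    0x15, 0x81,    /*     Logical Minimum (-127)              */
    0x25, 0x7F,    /*     Logical Maximum (127)               */
    0x75, 0x08,    /*     Report Size (8)                     */
    0x95, 0x02,    /*     Report Count (2) -> X and Y bytes   */
    0x81, 0x06,    /*     Input (Data, Variable, Relative)    */
    0xC0,          /*   End Collection                        */
    0xC0           /* End Collection                          */
};

That's about 50 bytes of ROM; all of the parsing intelligence sits on the host side.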
> If they had designed a new bus for category 1), and 1) alone, that'd
> have been perfectly fine with me. But by forcing category 2) into the
> same channel, without compelling need of doing so, they crossed the
> line.
>
> One difference that makes me criticize USB is that its
> self-configuration mechanism is *way* over-complicated. It's this
> mechanism, mainly, that causes the need for a PC at the root of the
> tree. 1394 auto-configuration is so simple that a digital camcorder's
> firmware can usually get it right without breaking a sweat.
Yeah, but the complication is on the host side, not on the device. The device needs to be able to handle, IIRC:

Get Device Descriptor - fetch n bytes from ROM
Set Address - write a byte to a USB control register
Set Configuration - the device probably only has one config, so it's a NOP. You could light a LED or something though, since it means that you are recognised by the host.

plus read/write from/to whatever endpoints it needs for its internal stuff. Handling string descriptors would be nice, but you don't need it. If you have a microcontroller with built-in USB, all this is fairly painless - it's easy to get it into a small ROM.
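A sketch of just that minimum in C, assuming a hypothetical part with an EP0 send/stall interface and a device-address register (the usb_ep0_*() names and USB_ADDR_REG are invented for illustration, not any particular vendor's API):

/* Minimal EP0 standard-request handling: just enough to enumerate. */

#define REQ_GET_DESCRIPTOR    0x06
#define REQ_SET_ADDRESS       0x05
#define REQ_SET_CONFIGURATION 0x09

#define DESC_DEVICE           0x01
#define DESC_CONFIGURATION    0x02

struct setup_packet {
    unsigned char  bmRequestType;
    unsigned char  bRequest;
    unsigned short wValue;
    unsigned short wIndex;
    unsigned short wLength;
};

/* Fixed descriptor tables in ROM. */
extern const unsigned char  device_desc[18];
extern const unsigned char  config_desc[];
extern const unsigned short config_desc_len;

/* Hardware access stubs (hypothetical part-specific functions). */
extern void usb_ep0_send(const unsigned char *data,
                         unsigned short len, unsigned short requested);
extern void usb_ep0_send_status(void);        /* zero-length status stage */
extern void usb_ep0_stall(void);
extern volatile unsigned char USB_ADDR_REG;   /* device-address register  */

static unsigned char configured;

void handle_setup(const struct setup_packet *sp)
{
    switch (sp->bRequest) {
    case REQ_GET_DESCRIPTOR:
        if ((sp->wValue >> 8) == DESC_DEVICE)
            usb_ep0_send(device_desc, sizeof device_desc, sp->wLength);
        else if ((sp->wValue >> 8) == DESC_CONFIGURATION)
            usb_ep0_send(config_desc, config_desc_len, sp->wLength);
        else
            usb_ep0_stall();                  /* strings etc. not supported */
        break;

    case REQ_SET_ADDRESS:
        usb_ep0_send_status();                /* finish at address 0 ...    */
        USB_ADDR_REG = sp->wValue & 0x7F;     /* ... then latch the new one */
        break;

    case REQ_SET_CONFIGURATION:
        configured = (sp->wValue != 0);       /* single config: nothing else
                                                 to do; maybe light a LED   */
        usb_ep0_send_status();
        break;

    default:
        usb_ep0_stall();
        break;
    }
}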
> USB-2.0: yes. USB-1.1 was too slow by almost a factor of 10. Hard
> disks have been faster than 1.5 MB/s continuous transfer rate for a
> *long* time now.
USB key fobs are useful even with USB 1.0 actually, you can just leave stuff copying while you do something else. But you're right that USB 1.0 isn't exactly ideal for this sort of thing.
Reply by Antonio Pasini December 9, 2004
> I need to implement a USB host controller for an embedded system
> running VxWorks on a Xilinx Virtex II pro FPGA. Can anyone here
> recommend a good choice for the external host USB chip? (IP cores were
> too $$$) and software drivers that will run it? (I'll be connecting
> various devices--HIDs, mass storage, video, etc on the bus).
If you need HS operation and don't have a PCI local bus, check the Philips parts:

On-The-Go: http://www.semiconductors.com/markets/connectivity/wired/usb/products/otg/isp1761/
host only: http://www.semiconductors.com/markets/connectivity/wired/usb/products/host/isp1760/

Unfortunately, the FlexStack source costs about $50K. There's a PCI board with the 1761, with evaluation software for Linux; I do not know whether it comes with Linux source as well. Availability could be an issue for both, too (I suppose)...

They also have many full-speed hosts.
Reply by Ulf Samuelsson December 9, 2004
> I'm not. That would make even less sense than a keyboard attached to
> the same wire as a 320 GB external USB 2.0 hard disk.
>
> > And I'm not sure why you brought up 1394 at all.
>
> Because USB devices fall into several large categories:
>
> 1) replacement of former "legacy" ports (keyboard, mouse, printer, modem, ...)
> 2) high-bandwidth devices (nowadays usually USB-2.0 hi-speed), like
> external harddisks and video stuff.
> 3) stupid stuff, including USB-to-USB "network cables".
>
> My point is that 1) and 2) don't have any business travelling over
> the same pair of wires, and for 2), defining a new protocol was
> completely superfluous already when USB was originally designed ---
> 1394 filled that niche nicely, at higher bandwidth, with less hassle.
There are "political" issues here. P.1394 contains IP owned by Apple. Intel was not exactly interested in promoting that, so they defined USB. You may or may not like this, but I think Intel won. I dont recall ever using my P.1394 mainly because no home electronics seems to use it. If my DVD Video provided P.1394 output from the TV Tuner, I would use it. Don't own a DV camera which seems to be the only reason to use it at this time. I rather have multiple ports of of the same kind, than one or two of each, since this gives me more flexibility. Since I need USB ports for the Mouse, that means USB for everything. I could get a P.1394 hard disk, but that would limit my options, and I don't really care too much about the speed on that one. With USB I can connect to more of *my* home equipment -- Best Regards Ulf at atmel dot com These comments are intended to be my own opinion and they may, or may not be shared by my employer, Atmel Sweden.
Reply by Ulf Samuelsson December 9, 2004
> I need to implement a USB host controller for an embedded system
> running VxWorks on a Xilinx Virtex II pro FPGA. Can anyone here
> recommend a good choice for the external host USB chip? (IP cores were
> too $$$) and software drivers that will run it? (I'll be connecting
> various devices--HIDs, mass storage, video, etc on the bus).
>
> Regards,
>
> Bo
Does the Virtex contain a PowerPC? Then the AT43USB380 might be what you want. It will support Mass Storage and HID at least; I don't know about video.

The advantage of the AT43USB380 is that the host S/W is simplified a lot. The host stack runs on the chip, and the Virtex runs the profiles. Atmel delivers precompiled libraries for certain cores. If it is not a PowerPC, then further discussion is needed.

--
Best Regards
Ulf at atmel dot com
These comments are intended to be my own opinion and they may, or may not, be shared by my employer, Atmel Sweden.
Reply by Meindert Sprang December 9, 2004
"CBFalconer" <cbfalconer@yahoo.com> wrote in message
news:41B77778.9AB9A875@yahoo.com...
> Some years ago I built an extremely cheap peripheral. Its only
> duty was to read a memory dump from a (nuclear) pulse height
> analyzer, which in turn was in a peculiar ad-hoc format which
> sufficed to dump and reload the PHA. The physical interface
> consisted of the same miniature tape deck as was mounted in the
> PHA, and a single CMOS buffer chip, all hung off the parallel port.
> x86 timed loops did the clock/data separation, detected end of
> record gaps, etc., and allowed the memory dumps to be transferred
> to the PC and displayed and/or analyzed. It was profitable because
> it cost almost NIL to manufacture, and the design effort was in the
> software.
>
> I am sure I was not alone in doing these things. None of this
> would be possible with a USB interface.
Funny enough, the FTDI FT2232C chip is designed to do exactly this. Besides the standard asynchronous protocol, it can also do synchronous protocols, specifically targeted at ISP and JTAG stuff.

Meindert
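For the PC side, a rough sketch of driving the FT2232C's pins through the open-source libftdi library (an assumption -- the post doesn't name a host-side library, and FTDI's own D2XX drivers are the other obvious route):

/* Wiggle the FT2232C's channel-A pins from a PC using libftdi's
 * asynchronous bit-bang mode.  0x0403/0x6010 is the stock FT2232C
 * vendor/product ID; adjust if the EEPROM has been reprogrammed. */
#include <stdio.h>
#include <ftdi.h>

int main(void)
{
    struct ftdi_context *ftdi = ftdi_new();
    unsigned char pattern = 0x55;              /* something visible on a scope */

    if (ftdi == NULL)
        return 1;

    ftdi_set_interface(ftdi, INTERFACE_A);     /* FT2232 has two channels */
    if (ftdi_usb_open(ftdi, 0x0403, 0x6010) < 0) {
        fprintf(stderr, "open failed: %s\n", ftdi_get_error_string(ftdi));
        ftdi_free(ftdi);
        return 1;
    }

    ftdi_set_bitmode(ftdi, 0xFF, BITMODE_BITBANG);  /* all 8 lines as outputs */
    ftdi_write_data(ftdi, &pattern, 1);             /* pins now read 01010101 */

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}

The synchronous JTAG/SPI engine is driven the same way, just with the MPSSE bit mode and a command stream instead of raw pin values.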
Reply by CBFalconer December 8, 2004
Dave Hansen wrote:
>
... snip ...
>> If there's a single I/O interface of the legacy PC USB 1.1 was
>> in no way fit to replace, it's the ISA bus.
>
> In practice, it's not that bad. In fact, keyfob flash drives are
> kinda slick. To replicate that functionality with ISA, you need
> to install a flash card adapter ($ and compatibility issues), and
> 1394 would probably not be as cost-effective.
Some years ago I built an extremely cheap peripheral. Its only duty was to read a memory dump from a (nuclear) pulse height analyzer, which in turn was in a peculiar ad-hoc format which sufficed to dump and reload the PHA. The physical interface consisted of the same miniature tape deck as was mounted in the PHA, and a single CMOS buffer chip, all hung off the parallel port. x86 timed loops did the clock/data separation, detected end-of-record gaps, etc., and allowed the memory dumps to be transferred to the PC and displayed and/or analyzed. It was profitable because it cost almost NIL to manufacture, and the design effort was in the software.

I am sure I was not alone in doing these things. None of this would be possible with a USB interface.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
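For flavour, a rough sketch of what such a timed polling loop looks like (modern Linux inb()/ioperm() style rather than the DOS original, and the pin assignment is invented here -- the point is just a software clock/data separator on the printer-port status lines):

/* Poll the parallel port's status register, treating one input line as
 * a clock and another as serial data, and shift the bits into bytes.
 * 0x379 is the status port of LPT1; which pins the real device used is
 * not recorded, so the bit choices below are illustrative only. */
#include <stdio.h>
#include <sys/io.h>

#define LPT_STATUS 0x379
#define CLK_BIT    0x40            /* nACK pin used as clock (assumed) */
#define DATA_BIT   0x80            /* BUSY pin used as data (assumed)  */

static int read_byte(void)
{
    int byte = 0;

    for (int bit = 0; bit < 8; bit++) {
        while (!(inb(LPT_STATUS) & CLK_BIT))     /* wait for clock high */
            ;
        byte = (byte << 1) | ((inb(LPT_STATUS) & DATA_BIT) ? 1 : 0);
        while (inb(LPT_STATUS) & CLK_BIT)        /* wait for clock low  */
            ;
    }
    return byte;
}

int main(void)
{
    if (ioperm(0x378, 3, 1) < 0) {               /* raw port I/O needs root */
        perror("ioperm");
        return 1;
    }
    for (int i = 0; i < 16; i++)
        printf("%02X ", read_byte());
    putchar('\n');
    return 0;
}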
Reply by Rene December 8, 2004
"Hans-Bernhard Broeker" <broeker@physik.rwth-aachen.de> schreef in bericht
news:31ov16F3du2iuU1@news.dfncis.de...
> Dave Hansen <iddw@hotmail.com> wrote:
> > On 7 Dec 2004 16:32:48 GMT, Hans-Bernhard Broeker
> > <broeker@physik.rwth-aachen.de> wrote:
> [...]
>
> > >I'm faulting USB for overkill for some of its applications only:
> > >low-bandwidth, low-latency stuff like mouse and keyboard data has no
> > >business occupying the same wire as a 11 Mb/s data stream. It's
>
> > So if I understand your position, you're not suggesting 1394 for
> > keyboards.
>
> I'm not. That would make even less sense than a keyboard attached to
> the same wire as a 320 GB external USB 2.0 hard disk.
>
> > And I'm not sure why you brought up 1394 at all.
>
> Because USB devices fall into several large categories:
>
> 1) replacement of former "legacy" ports (keyboard, mouse, printer, modem, ...)
> 2) high-bandwidth devices (nowadays usually USB-2.0 hi-speed), like
> external harddisks and video stuff.
> 3) stupid stuff, including USB-to-USB "network cables".
>
> My point is that 1) and 2) don't have any business travelling over
> the same pair of wires, and for 2), defining a new protocol was
> completely superfluous already when USB was originally designed ---
> 1394 filled that niche nicely, at higher bandwidth, with less hassle.
Yes, but the big problem there was, if I recall correctly, that a licensing fee (English is not my mother tongue, so I may be using the wrong word; I mean that someone has patented something, and other people who wish to use it have to pay money to the patent holder) of about $15 had to be paid for every FireWire port on any device. I have always thought that this was the initial reason why USB was developed, by people who didn't feel like paying that money but who unfortunately turned out to be far less handy than the people who invented FireWire (it was Apple, wasn't it?).

For everybody who thinks that USB is a problem-solving thing and will make PnP work all the time, just look at www.usbman.org. I have had so many problems with external hard drives, and many other people have too. Even many chipsets, like the ones from ALi, just don't work OK. I will always choose FireWire if I have the possibility.

Greetings,
Rene
Reply by Hans-Bernhard Broeker December 8, 2004
Dave Hansen <iddw@hotmail.com> wrote:
> On 7 Dec 2004 16:32:48 GMT, Hans-Bernhard Broeker
> <broeker@physik.rwth-aachen.de> wrote:
[...]
> >I'm faulting USB for overkill for some of its applications only:
> >low-bandwidth, low-latency stuff like mouse and keyboard data has no
> >business occupying the same wire as a 11 Mb/s data stream. It's
> So if I understand your position, you're not suggesting 1394 for
> keyboards.
I'm not. That would make even less sense than a keyboard attached to the same wire as a 320 GB external USB 2.0 hard disk.
> And I'm not sure why you brought up 1394 at all.
Because USB devices fall into several large categories:

1) replacement of former "legacy" ports (keyboard, mouse, printer, modem, ...)
2) high-bandwidth devices (nowadays usually USB-2.0 hi-speed), like
   external harddisks and video stuff.
3) stupid stuff, including USB-to-USB "network cables".

My point is that 1) and 2) don't have any business travelling over the same pair of wires, and for 2), defining a new protocol was completely superfluous already when USB was originally designed --- 1394 filled that niche nicely, at higher bandwidth, with less hassle.

If they had designed a new bus for category 1), and 1) alone, that'd have been perfectly fine with me. But by forcing category 2) into the same channel, without compelling need of doing so, they crossed the line.
> Different bus for different applications. The only thing it really
> has in common with USB (AFAIK) is that they're both serial and both
> self-configuring.
One difference that makes me criticize USB is that its self-configuration mechanism is *way* over-complicated. It's this mechanism, mainly, that causes the need for a PC at the root of the tree. 1394 auto-configuration is so simple that a digital camcorder's firmware can usually get it right without breaking a sweat.
> >essentially impossible to avoid one type of communication getting into
> >the other's way.
> Keyboard and mouse traffic doesn't really seem to get in anyone's way.
Well, at the heart of it, HID data are sent in a USB transfer mode that has them get priority over all high-rate user data on the same bus. Add that HID devices are typically low-speed, and you have each byte of mouse data bulldozing 10 or more bytes of other data out of the way. With USB-2.0, make that ~300 bytes. This design is a bit like building a single-lane highway to be used jointly by cars and trucks, and then later declaring it good for simultaneous use by Formula-1, too.
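The ratios behind those numbers are easy to check -- raw signaling rates only, before you even count sync patterns, PIDs, CRCs and bus turnaround (a back-of-the-envelope sketch):

/* Rough bus-occupancy ratios: while one low-speed byte is on the wire,
 * how many bytes could the faster signaling rate have moved instead?
 * Pure signaling-rate arithmetic; protocol overhead is ignored. */
#include <stdio.h>

int main(void)
{
    const double low_speed  = 1.5e6;     /* low-speed USB, bits/s   */
    const double full_speed = 12e6;      /* USB 1.1 full speed      */
    const double high_speed = 480e6;     /* USB 2.0 high speed      */

    printf("full-speed bytes displaced per low-speed byte: %.0f\n",
           full_speed / low_speed);      /* = 8   */
    printf("high-speed bytes displaced per low-speed byte: %.0f\n",
           high_speed / low_speed);      /* = 320 */
    return 0;
}

Protocol overhead only pushes the full-speed figure higher, presumably to the "10 or more" quoted above.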
> The only real problems I've heard of is when you try to listen to
> music while burning a CD. USB 1.x can't keep up with it.
USB 1.x can no longer keep up with CD burning anyway, these days. Not since the last 16x burners stopped being produced.
> A USB controller is much cheaper than the standard complement of 2
> PS/2 ports, 1 or 2 serial ports, a parallel port, and an ISA bus,
Except that one USB controller was apparently insufficient --- the current motherboard trend appears to be to move up from 6 to 8 onboard USB ports --- yes, that's more than the overall number of ports on a typical legacy PC, which USB set out to replace with a single type of plug. Doesn't seem to have worked out all that well, does it?
> software has essentially zero manufacturing cost, and most users'
> CPU is idle 99.99% of the time. And USB hot PnP actually _works_
> (at least in my experience).
Let's say: it *can* work --- ISA PnP never stood a chance. Arguably they shouldn't even have tried that. The major difference now is that instead of hardware that doesn't have enough flexibility to work in somewhat strange PCs, you now get software that doesn't work in somewhat strange PCs. Hot PnP? You tell that to the software of my USB scanner, which a) complains loudly each time I boot without the scanner attached, b) doesn't work at all if I plug in the scanner later, and c) reliably brings down my Linux system if I so much as plug it in.
> Agreed. But the idea is to reduce the cost of the peripheral. How
> much do you want to pay for a basic keyboard?
Why should I have to pay *anything* extra for a basic keyboard, just so it can coexist on a shared cable with my video camera, printer and external harddisk? I think I shouldn't, because it shouldn't have to do that. That's where the fundamental misconception is. Reducing the number of different port types and plugs of a typical PC was a good first idea. It started to go downhill when somebody decided that it must be reduced all the way to _one_ type of port.
> >Now it's commonly accepted that dogmatic centralism is wasteful.
> >Distributed systems are often more efficient, and it's a lot easier to
> >specialize from a distributed general design to a one-node case than
> >to generalize from a centralistic design to a situation where a single
> >center simply can't cut it.
> I don't know where you get this. Distributed computing is _hard_.
I said it was *efficient*, not that it's easy. The telltale figure is those '99.99% of the time idle' you quoted yourself: that's an overgrown centralistic system, scaled to be powerful enough to do everything and then some all by itself, which doesn't actually get any work to do.
> You're upset because Intel and Microsoft didn't try to bootstrap an
> entirely new personal computing paradigm?
No. Because they (ab)used their effective control over the market to keep the general public from recognizing that less centralistic alternatives were even possible.
> Oh, I see. You're upset because the new paradigm couldn't compete.
No. Because it wasn't given a fair chance to compete. Microsoft decreed that a PC without USB was, effectively, unsellable. So peripherals that aren't USB became effectively unsellable some time after that.
> >If there's a single I/O interface of the legacy PC USB 1.1 was in no
> >way fit to replace, it's the ISA bus.
> In practice, it's not that bad. In fact, keyfob flash drives are
> kinda slick.
Yeah, but even they are bottlenecked by USB 1.1, so the good ones are 2.0 these days. But "replacing the ISA bus" would have meant taking over the interface to the main ISA bus device of a legacy PC: the internal harddisk. And that's been faster than USB 1.1's 11 Mbit/s for years before the first USB mainboard was sold. The ISA bus has never really been much of an external interface anyway, so I highly doubt replacing it was ever, actually, part of the design goals of USB. If it had been, USB 1.1 would have been rated a complete failure even by its authors.

USB keyfobs are slick mainly by comparison to what was lacking for such a long time: a random-access, exchangeable storage medium with significantly more capacity than a floppy disk, found in essentially every PC. People got used to waiting for their CD burners to finish working, instead.
> When bandwidth requirements exceeded ISA's capability, we got PCI. By
> the time USB came around, ISA bandwidth wasn't really an issue.
Oh, ISA _bandwidth_ was very much an issue --- such a huge issue, in fact, that the only viable solution to it was to get rid of the ISA bus completely. The ISA bus as such stopped being an issue at that point, but its bandwidth didn't.
> USB is perfectly capable of handling any single peripheral that
> would normally be attached to the ISA.
USB-2.0: yes. USB-1.1 was too slow by almost a factor of 10. Hard disks have been faster than 1.5 MB/s continuous transfer rate for a *long* time now.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply by Dave Hansen December 8, 2004
I hesitate to post this, because I don't want to seem to be carrying
USB's water.  As I've said before, I don't have any particular
interest in USB...

On 7 Dec 2004 16:32:48 GMT, Hans-Bernhard Broeker
<broeker@physik.rwth-aachen.de> wrote:

>Dave Hansen <iddw@hotmail.com> wrote:
>> On 7 Dec 2004 15:22:25 GMT, Hans-Bernhard Broeker
>> <broeker@physik.rwth-aachen.de> wrote:
>
>> >I've said this before, but I'll say this again: the only assumption
>> >about motives that can really explain why USB was designed the way it
>> >was designed, given existing technology at the time (particularly IEEE
>> >1394, a.k.a. Firewire), is that it was predominantly a marketing
>> >scheme to be able to keep selling powerful personal computers. It was
>
>> 1394 and USB came along at pretty much the same time (though Apple had
>> FireWire earlier). If you're going to fault USB with overkill, how
>> much worse is 1394?
>
>Not worse; but better, actually. 1394 is a well-done protocol,
>fulfilling a real need (high-rate data streaming, esp. video), and it
>does its job well.
>
>I'm faulting USB for overkill for some of its applications only:
>low-bandwidth, low-latency stuff like mouse and keyboard data has no
>business occupying the same wire as a 11 Mb/s data stream. It's
So if I understand your position, you're not suggesting 1394 for keyboards. And I'm not sure why you brought up 1394 at all. Different bus for different applications. The only thing it really has in common with USB (AFAIK) is that they're both serial and both self-configuring.
>essentially impossible to avoid one type of communication getting into
>the other's way. USB 1.1 is a complete hodgepodge. It's overkill for
Keyboard and mouse traffic doesn't really seem to get in anyone's way. The only real problems I've heard of is when you try to listen to music while burning a CD. USB 1.x can't keep up with it. Which is one of the reasons there's USB 2.0 (which I haven't tried yet -- the only USB peripheral on the new workstation the company bought for me is the mouse).
>some of its planned applications, and severely underpowered for most
>of the others.
>
>You'd have thought that the guys at Intel & friends would know that
>"one size fits all" doesn't ever really work. But they did it
>nevertheless. Which begs the question: why?
I think it's less a matter of "one size fits all" than it is of "adequate for projected uses." Capacity: is it big enough? Apparently the answer was no (isn't it usually?); thus USB 2.0.

As to why, I think they've stated it pretty clearly. They wanted to reduce hardware cost, and they wanted to simplify the end-user's job of installing and removing peripherals. A USB controller is much cheaper than the standard complement of 2 PS/2 ports, 1 or 2 serial ports, a parallel port, and an ISA bus; software has essentially zero manufacturing cost; and most users' CPU is idle 99.99% of the time. And USB hot PnP actually _works_ (at least in my experience). So they appear to have succeeded.
>> >a strategic move against the replacement of PCs by pervasive
>> >computing.
>
>> I don't understand. Could you expand on this?
>
>For a long time now, Microsoft and Intel have worked by the "single PC
>as a center of your digital world" dogma. That's what led to crazy
>stuff like the current typical supermarket PC: 3+ GHz CPU, a GiB of
>memory, thermal design problems that make working in outer space
>appear like a minor issue in comparison, PCs louder than your average
>car, and all that.
I think Intel and Microsoft build what they think will sell. Their game is to make money, not (necessarily) to innovate. If consumers demand something other than the "single PC yada yada yada," then they'll start building that.
>What we're looking at here is utter, total centralism. The same kind
>of centralism is designed into almost every aspect of the USB
>protocol. USB is highly asymmetric, offloading all the hard work to
>the central hub: by silent assumption, that's a PC (of some kind,
>i.e. Apple gets to play, too).
Agreed. But the idea is to reduce the cost of the peripheral. How much do you want to pay for a basic keyboard? How successful would USB have been if the cost adder for the interface was, say, $10 instead of <$1? That would be seen as a conspiracy -- Intel foisting expensive interface hardware on PC users, and attempting to extract a huge slice of the money pie in the peripherals market.

Consider the situation of one of the projects I'm working on. Very simple module, uses less than 2K program memory on a 14-bit PIC microcontroller. A potential customer is looking at putting this device on their vehicle network (CAN) to allow the body control computer to enable or disable the device and to allow us to transmit some diagnostic information. Suddenly we're looking at a 64K part just to get the manufacturer-approved networking libraries loaded. The customer is looking at simpler, more cost-effective solutions now...
>Now it's commonly accepted that dogmatic centralism is wasteful.
>Distributed systems are often more efficient, and it's a lot easier to
>specialize from a distributed general design to a one-node case than
>to generalize from a centralistic design to a situation where a single
>center simply can't cut it.
I don't know where you get this. Distributed computing is _hard_. Microsoft can't even get multithreading to work well, and the myriad application vendors are worse. You're upset because Intel and Microsoft didn't try to bootstrap an entirely new personal computing paradigm?
>IMHO it's at least plausible to assume that in a purely open market,
>without the Wintel monopoly, other players would have grabbed a
>significant share of the market in the move from the single PC to a
>world of computing power delivered at the exact point it's needed.
Oh, I see. You're upset because the new paradigm couldn't compete.
>> >To put it bluntly: there was *nothing* wrong with the PC keyboard
>> >interface that it would have taken something like USB to fix. There
>
>> This is mostly true, but incomplete. USB is designed to replace the
>> keyboard, mouse, serial, and parallel ports as well as the ISA bus.
>
>Wait a minute: a 11 Mbit/s star (it's not actually a bus), disturbed
Actually, I'd describe it as point-to-point. Hubs act as repeaters.
>by high-priority 1.5Mbit/s packets and several layers of protocol
>overhead is supposed to replace a 64 Mbit/s bus that was already
>being stretched to its limits by hardware of that day? You gotta be
>kidding.
>
>If there's a single I/O interface of the legacy PC USB 1.1 was in no
>way fit to replace, it's the ISA bus.
In practice, it's not that bad. In fact, keyfob flash drives are kinda slick. To replicate that functionality with ISA, you need to install a flash card adapter ($ and compatibility issues), and 1394 would probably not be as cost-effective.

When bandwidth requirements exceeded ISA's capability, we got PCI. By the time USB came around, ISA bandwidth wasn't really an issue. USB is perfectly capable of handling any single peripheral that would normally be attached to the ISA. It's only when you start running a couple (or more) high-bandwidth processes at the same time that you run into trouble.

When a USB device is enumerated, it allocates bandwidth. In theory, you should not be able to attach a device to the USB if its required bandwidth would exceed that available (remaining) on the bus. Except isochronous devices, which say "I'll take whatever is left over at any given time." Obviously, it's the isochronous services that suffer when bandwidth usage gets high -- usually sound devices.
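The bandwidth claim a device makes at enumeration time boils down to two fields in its endpoint descriptor. A sketch of the standard 7-byte layout, with values picked for a 48 kHz / 16-bit / stereo audio stream (192 bytes per 1 ms frame) -- the numbers are illustrative, not taken from any particular device:

/* Endpoint descriptor for an isochronous IN endpoint, as the device
 * reports it during enumeration.  The host's bandwidth bookkeeping is
 * based on wMaxPacketSize and bInterval. */
static const unsigned char iso_in_endpoint_desc[7] = {
    0x07,          /* bLength                                   */
    0x05,          /* bDescriptorType: ENDPOINT                 */
    0x81,          /* bEndpointAddress: EP1, IN                 */
    0x01,          /* bmAttributes: isochronous                 */
    0xC0, 0x00,    /* wMaxPacketSize: 192 bytes (little-endian) */
    0x01           /* bInterval: every frame (1 ms)             */
};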
>> ISA PnP never worked very well.
>
>That's what rightly got us PCI. Not an entirely sweet pill, either,
>but in a totally different league than USB.
PCI is another technology that has been decried as a conspiracy to prevent embedded systems designers from doing their job. In this case the complexity resides more heavily on the hardware side rather than the software side. How much harder is it to design a PCI card than an ISA card? How many PCI slots does your system have? Software is not that much more difficult on PCI than ISA...

Regards,

-=Dave
--
Change is inevitable, progress is not.
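To put a number on "not that much more difficult": the legacy configuration-space access on x86 is one port write plus one port read. A sketch (user-space Linux, assuming inl()/outl() and iopl(); a real driver would of course go through the OS's PCI layer instead):

/* Read a 32-bit PCI configuration register using the legacy x86
 * "configuration mechanism #1": write an address to port 0xCF8,
 * read the data from port 0xCFC.  An ISA peripheral, by comparison,
 * is just an inb()/inl() at a fixed port address. */
#include <stdio.h>
#include <sys/io.h>

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

static unsigned int pci_config_read(unsigned bus, unsigned dev,
                                    unsigned func, unsigned offset)
{
    unsigned int address = 0x80000000u          /* enable bit       */
                         | (bus    << 16)
                         | (dev    << 11)
                         | (func   <<  8)
                         | (offset & 0xFC);     /* dword-aligned    */

    outl(address, PCI_CONFIG_ADDRESS);
    return inl(PCI_CONFIG_DATA);
}

int main(void)
{
    if (iopl(3) < 0) {                          /* raw port access needs root */
        perror("iopl");
        return 1;
    }
    /* Vendor/device ID of bus 0, device 0, function 0 (the host bridge). */
    printf("PCI 0:0.0 vendor/device = %08X\n", pci_config_read(0, 0, 0, 0));
    return 0;
}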
Reply by bo December 8, 2004
A little off the topic of this thread---but since it's getting a lot of
attention...

I need to implement a USB host controller for an embedded system
running VxWorks on a Xilinx Virtex II pro FPGA. Can anyone here
recommend a good choice for the external host USB chip? (IP cores were
too $$$) and software drivers that will run it? (I'll be connecting
various devices--HIDs, mass storage, video, etc on the bus).
   I've been told to avoid Intel parts. Any experience or recommendations
would be more than appreciated! Currently I am thinking Cypress or
Transmedia.....

Regards,

Bo