On 27 Jun 2005 20:50:56 -0700, "Ali" <abdulrazaq@gmail.com> wrote:
>Hi Chris,
> Bravo that is what i was looking for! well you have given
>very nice opinions, let me forward these words to those guys and see
>how they come up.
>I guess it will work!
I can add a small amount to this. Since the days of the Pentium with
the "green" reflected-wave PCI bus, there have been a great many new
and significant changes to the PC. The PPro and the follow-on P II,
III, and IV chips all use a front side bus to communicate with up to
three other CPUs directly attached to that bus and with a main chipset
(which provides the PCI, AGP, and sideband channel pins, as well as
its own front side bus presence.) There is also a provision for a
TPA (third party adapter) on the front side bus. When I was learning
some of this, during the PPro days, the design limit was exactly six
loads -- four CPUs, one chipset, and one TPA. I suppose that may have
since changed.
A bus transaction from the CPU consisted of a transaction phase, an
error phase, a cache hit (snoop) phase, and up to four data transfer
phases -- a maximum of seven phases in total. The first three and one
of the data phases could all be overlapped, since they used separate
pins and could proceed in parallel. A CPU would put out a transaction,
like a memory read, and the other local (front side) bus chips would
have the next clock on which to signal an error, if any, during the
error phase. After that, the other chips on the front side bus could
signal a hit in their own caches, if one occurred. In that case, the
next four data transfer phases would transfer the data for the read,
without the need to wait while some DRAM cycle took place. If there
was no cache hit, however, the chipset would usually queue up the
transaction and start some action via its inbound and outbound queues
to its DRAM controller portion.
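If it helps to see that pipelining written down, here is a rough toy
model in C. The names and the structure are my own simplification, not
Intel's signal names -- it just walks one read through the overlapped
phases I described:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of one front side bus read.  The phase names and the
       structure are my own simplification, not real hardware.        */
    struct fsb_read {
        unsigned long addr;
        bool error;       /* another agent flags a problem next clock */
        bool snoop_hit;   /* some other CPU's cache holds the line    */
    };

    void run_read(const struct fsb_read *t)
    {
        /* Transaction phase: the CPU drives the address and command. */
        printf("read of 0x%lx issued\n", t->addr);

        /* Error phase: the other bus agents get the next clock to
           signal a problem with the transaction.                     */
        if (t->error) {
            printf("aborted in error phase\n");
            return;
        }

        /* Cache hit (snoop) phase: the other CPUs report whether
           their caches hold the line; on a hit they supply the data
           themselves and no DRAM cycle is needed.                    */
        if (t->snoop_hit)
            printf("snoop hit: data comes from another CPU's cache\n");
        else
            printf("miss: chipset queues the read to its DRAM controller\n");

        /* Up to four data transfer phases complete the read.         */
        for (int beat = 0; beat < 4; beat++)
            printf("data transfer phase %d\n", beat + 1);
    }

    int main(void)
    {
        struct fsb_read r = { 0x1000, false, true };
        run_read(&r);
        return 0;
    }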
On the "other side" of the chipset chips (north bridge) was the PCI
bus. Transactions were also queued to this bus. The chipset was the
primary PCI bus chip, but it was also possible to add PCI to PCI
bridge chips onto that primary PCI bus in order to allow for many PCI
boards. Normally, the total loading on a PCI bus was 100pF. Since
the connectors ate up about half of this, and since it was possible
that adding another PCI to PCI bridge chip would eat up another 10pF,
you were generally limited to four PCI slots that held boards which
where, themselves, limited to exposing a load of 10pF each. To add
more than that, one needed to add the PCI to PCI bridge chip which
would provide enough drive for four more bus slots. Etc.
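Putting rough numbers on that budget (the figures are from memory, so
take them as approximate):

      100 pF   total load allowed on one PCI bus segment
    -  50 pF   eaten by the connectors and traces
    -  10 pF   reserved for a possible PCI to PCI bridge chip
    --------
       40 pF   left over, at 10 pF per plug-in board  =>  about four slots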
Meanwhile, there was one and only one south bridge allowed to support
ISA. This chip provided the subtractive decoding on the primary PCI
bus needed to pass unrecognized PCI transactions on to the ISA bus
for resolution there. Also, there were/are sideband
channel lines going from the north bridge to the south bridge to
handle the case where ISA DMA and some other special signals were
required (ISA DMA timing violates PCI transaction burst rules and
therefore there needs to be some special signaling from the south
bridge to the north bridge when it "sees" an ISA DMA, so that the
north bridge will know how to properly understand and manage the PCI
bus during these DMA transactions.)
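Going back to the subtractive decode idea for a moment, it boils down
to something like this -- a sketch with made-up names, not anyone's
real chipset logic:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helpers, just for illustration.                    */
    bool some_positive_decoder_claims(uint32_t addr);  /* chipset/cards */
    void forward_to_isa(uint32_t addr);

    /* Subtractive decode: everyone with a positive decoder (the north
       bridge, the PCI cards) gets first claim on a transaction.  Only
       if nobody claims it does the south bridge forward the leftover
       transaction down to the ISA bus for resolution there.           */
    void south_bridge_sees_transaction(uint32_t addr)
    {
        if (!some_positive_decoder_claims(addr))
            forward_to_isa(addr);
    }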
Finally, the four sharable PCI interrupt source lines, A, B, C, and D,
have no ISA bus equivalents and aren't going to do anything at all in
terms of generating interrupt signals to the CPU, unless something
"sees" them and translates them to specific interrupt vectors. Most
chipsets (via the south bridge or super I/O chips) provide an APIC
controller (the "advanced" PIC) that allows a great many more
interrupt sources than the older AT PCs permitted with their two
discrete 8259 PIC controllers. Sometimes, 64 sources. Perhaps in
some cases up to the entire 256, I suppose. These have registers that
you can program via specialized I/O to associate the PCI interrupt
sources with any interrupt vector you want.
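For what it's worth, the way those registers are usually reached is
through a memory-mapped index/data pair rather than ordinary port I/O.
Roughly like this in C -- 0xFEC00000 is only the conventional default
base address, and this assumes you can actually touch that physical
address (no paging in the way), so treat it as a sketch, not a recipe:

    #include <stdint.h>

    /* Conventional I/O APIC access: write a register index to IOREGSEL,
       then read or write the 32-bit value through the IOWIN window.   */
    #define IOAPIC_BASE  0xFEC00000u            /* usual default base  */
    #define IOREGSEL (*(volatile uint32_t *)(IOAPIC_BASE + 0x00))
    #define IOWIN    (*(volatile uint32_t *)(IOAPIC_BASE + 0x10))

    void ioapic_write(uint8_t reg, uint32_t val)
    {
        IOREGSEL = reg;
        IOWIN    = val;
    }

    /* Point I/O APIC input 'pin' at interrupt 'vector' on the CPU
       whose APIC ID is 'dest'.  Each redirection entry is a pair of
       32-bit registers starting at index 0x10.  The other control
       bits (delivery mode, polarity, trigger) are left at zero here
       for brevity; a real PCI pin would also want the level-trigger
       and active-low polarity bits set.                               */
    void route_pin(unsigned pin, uint8_t vector, uint8_t dest)
    {
        ioapic_write(0x10 + 2 * pin, vector);                   /* low half  */
        ioapic_write(0x10 + 2 * pin + 1, (uint32_t)dest << 24); /* high half */
    }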
These interrupts are passed on to the CPU by yet another new scheme,
the APIC bus. Over this bus, CPUs can be put to sleep, wake each other
(remember, there were/are up to four permitted on the front side bus),
and accept interrupt notifications -- possibly even rotating from one
CPU to the next in handling them. The interrupt notifications no
longer take place the way they once did on CPUs that were tied more
directly to the old ISA bus. It wasn't practical to pass interrupt
notifications back through all the chips and onto the highly optimized
front side bus, so a back door channel to the CPUs was needed. That
back door is the APIC.
The APIC can observe interrupts from both the PCI and ISA sides,
translate them into any particular vector, and then pass them on to a
particular CPU or else rotate them among the CPUs.
Legacy devices, like the parallel port and serial ports, used to
reside on the super I/O chip. I don't know what the exact arrangement
is these days, but probably something functionally similar. I/O
transactions are passed from the CPU to the front side bus, from the
front side bus via the north bridge to the PCI, siphoned up by the
south bridge, and passed on to the ISA bus (or else picked up by the
super I/O chip and operated on.) In some cases, as with the keyboard
I/O ports, there were even more special considerations in the chipset
in order to guarantee certain legacy transaction ordering (modern
chipsets do an amazing amount of re-ordering.) I don't recall if the
parallel port qualifies in this regard, but I don't remember it being
included in that special list.
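To make that concrete with the device we started with: the classic
first parallel port sits at I/O ports 0x378-0x37A, and under DOS you
just banged those ports directly. Something like this -- the port
numbers are only the traditional LPT1 defaults, and outportb/inportb
are the old Borland spellings of the port instructions:

    #include <dos.h>   /* outportb/inportb on the old Borland compilers */

    #define LPT1_DATA    0x378   /* data latch                          */
    #define LPT1_STATUS  0x379   /* busy, ack, paper-out, etc.          */
    #define LPT1_CONTROL 0x37A   /* strobe, init, irq enable            */

    /* Crude polled byte write to LPT1.  Every one of these port
       accesses travels CPU -> front side bus -> north bridge -> PCI ->
       south bridge / super I/O, as described above.                    */
    void lpt1_put(unsigned char c)
    {
        while (!(inportb(LPT1_STATUS) & 0x80))  /* wait for not-busy    */
            ;
        outportb(LPT1_DATA, c);                 /* present the byte     */
        outportb(LPT1_CONTROL, 0x0D);           /* assert strobe        */
        outportb(LPT1_CONTROL, 0x0C);           /* back to idle         */
    }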
I admit to being a bit fuzzy on these details, today. But I hope that
gives a flavor of the process. I also seem to recall that the EISA
device IDs for plug and play parallel port hardware (ISA plug and play
is a cousin to the PCI P&P spec, as I recall) are PNP0400 and PNP0401
(the latter for ECP ports.) By now, I'm sure, there are more such IDs.
Well, that taps out my vague and poor memory of these things. But it
may give you a picture of what goes on when you talk to a parallel
port and perhaps a little flavor for what may be involved under
Windows. In the older Windows cases, there was also another manager
program called WinOldAp, which "owned" the DOS box processes. I/O to
the parallel port on these older operating systems was monitored and
timed, I think. If a DOS application accessed the I/O, and no other
application already owned those addresses for the parallel port, then
Windows would let WinOldAp allocate them and assign them to that DOS
application. The first use might be delayed while Windows processed
the ownership, but after that it was pretty fast. When the DOS
application stopped using the I/O addresses, WinOldAp would time them
out and return them to Windows for later allocation to some other
program. Or, perhaps, that was only tested if some other application
actually tried to use them. In the newer O/S branch, the NT-based
systems, the parallel port is exclusively virtualized by NT -- I
think. And the rules are different there.
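On the NT side, the sanctioned route is to open the port by name
through the Win32 API and let the parallel port driver do the work,
since direct port banging from user mode is blocked. A minimal sketch,
with the error handling mostly trimmed:

    #include <windows.h>

    int main(void)
    {
        /* Open the first parallel port by name; the parallel port
           driver, not this program, ends up touching the hardware.   */
        HANDLE h = CreateFile(TEXT("LPT1"), GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        const char msg[] = "hello, printer\r\n";
        DWORD written = 0;
        WriteFile(h, msg, sizeof msg - 1, &written, NULL);
        CloseHandle(h);
        return 0;
    }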
Hope there is something in there that helps or clicks for you and I
hope I didn't get anything too seriously wrong in writing it. Best of
luck.
Jon