Reply by Ali June 29, 2005
>For that, it makes no difference whether it's on the ISA or
>PCI bus. If it does for the WDM, I cannot help you there.

True, it makes no difference for NT device drivers or monolithic-type drivers, but if you want to write a fully PnP and power-managed driver then you have to know lots of other details too. Well, it is somehow decided now that parallel ports are connected to ISA internally, rather than the main bus or PCI. Cheers.
Reply by Ali June 29, 2005
Doh!!!

Reply by Ali June 29, 2005
>you have a number of device layers under NT. 64
Yeah, you are right. It is sometimes called the device stack. It is actually a layered structure, from higher-level to lower-level drivers, including filter drivers as well.
>Since Windows is _already_ going to have something hooked in at that
>level in order to facilitate the very communications initially
>required to participate in the PnP recognition of your external device.

I know, Jon, you are pointing towards the PnP manager, which cooperates with the I/O manager to provide hot plug-in [Plug and Play: device removal and insertion].

The hardware I have used in the project is the AT89S52 microcontroller from ATMEL's 8051 family. The device board incorporates basic inputs (a push-button, an ADC0804 analog-to-digital converter), an output (LED), an LM35 temperature sensor, and IDC connectors so that the user can easily connect his or her own 8-bit interface for input or output. The device is powered by an external supply. The PC communicates with the device through the parallel port, which is attached to port 1 of the microcontroller, for sending data back and forth.

How my device [the PowerCell device] communicates with the parallel port: as mentioned above, the data port (H378) of the D25 connector is attached to port 1 of the microcontroller for bidirectional communication. The interface lets the user perform either activity, I/O read or write. When the host [PC] wants to perform a write operation:

1) It puts the data on the Data register (base address + 0) of the parallel port, which is connected to port 1 of the microcontroller:
   NowWrite Val("&H378"), Val(xxx)    ' send data to mC port 1
2) It sends a high signal to microcontroller P3.3 (INT1) via the parallel port Control register (H37A), raising the interrupt pin on the microcontroller:
   NowWrite Val("&H37A"), Val(WRITEmC)
3) The microcontroller fetches the data from its port 1 and moves it to port 2, which is connected to the PowerCell relay board via a 10-pin IDC connector. Controller assembly:
   MOV A, P1
   MOV P2, A
4) Clear INT1:
   NowWrite Val("&H37A"), Val(CLEARmC)    ' clear C0, C1

When the host sends data to the device, the device must respond by sending a code that indicates whether it accepted the data or was too busy to handle it. In addition, when data is sent from the host to the device, the device must respond by turning the relay board LEDs on.
The question is: do I have to upgrade my hardware too in order to write a PnP driver? I mean, it is working fine with the NT [non-PnP] driver. 1) Do I have to upgrade my hardware in the sense of interrupt generation? I'm not sure how the [NT] kernel will discover that the PowerCell [my device] is now connected to the DB-25 port instead of an ordinary printer, and that it is time to load my WDM driver -- and vice versa when it is removed. I appreciate your time! Cheers. http://powercell.cjb.net/
Reply by Jonathan Kirwan June 28, 2005
On 28 Jun 2005 01:55:13 -0700, "Ali" <abdulrazaq@gmail.com> wrote:

>Humm.. Great text! Well Jon thanks for your contribution , i would
>appreciate if you can see us discussing the same issue from different
>prospective.
>[http://groups-beta.google.com/group/microsoft.public.development.device.drivers/browse_frm/thread/50869515be02a63f/9fe6f1d60ae47fa4#9fe6f1d60ae47fa4]
>See Tim's latest post for more understanding.
Thanks. I've looked. I believe I have a full IEEE 1284 specification lying around somewhere. I didn't remember about it until I saw some diagrams on the web on the subject, then it all came flooding back to me. You will definitely want that spec handy. The Microsoft doc is helpful _after_ you know what is in it, since that doc doesn't exactly tell you how these things are precisely transferred to the PnP under Windows.

However... my vague memory also recollects that you have a number of device layers under NT. 64? Something like that. Only one of them is the one that directly manages the hardware I/O, if memory serves. Since Windows is _already_ going to have something hooked in at that level in order to facilitate the very communications initially required to participate in the PnP recognition of your external device, it will be that driver that talks to the actual I/O ports on the machine in question. Your driver (or, at least, one part of your driver) will get loaded in at a higher level, much as the printer drivers are. I don't think they normally are allowed to completely replace the bottom-level I/O driver.

If you did replace that lowest level, wouldn't that mean your driver would also have to support the PnP signaling under IEEE 1284 needed to recognize the removal of your device and its replacement by yet another device? And if it didn't need to do so, under the possibility that this is handled at a slightly "higher" level driver, wouldn't that mean there is no real need for you to intercede at this then very low level?

Just thinking out loud...

Jon
Reply by Tauno Voipio June 28, 2005
chris wrote:
> Ali wrote:
>
>> Yeah this post belongs to windows drivers. Actually this was the
>> question raised by windows driver group people, they all want to know
>> which type of device i have , PCI or ISA?
>> If you want to see the device then please vist
>> [http://powercell.cjb.net] under hardware design link.
>>
>> Thanks.
>
> What you *connect* to the parallel port does not matter much - it's a
> "parallel port" bus if you like to call it that.
>
> What happens *internally to your PC* is what the people in the windows
> driver groups are asking.
>
> In the first years of PC design the parallel port interface was handled
> by an 8255 PIO chip connected to the ISA bus of the computer.
It was not an 8255, but a bunch of TTL registers and gates. I still have the original IBM/PC Technical Manual showing it.

--
Tauno Voipio
tauno voipio (at) iki fi
Reply by Ali June 28, 2005
Humm.. Great text! Well Jon, thanks for your contribution. I would
appreciate it if you could see us discussing the same issue from a
different perspective:
[http://groups-beta.google.com/group/microsoft.public.development.device.drivers/browse_frm/thread/50869515be02a63f/9fe6f1d60ae47fa4#9fe6f1d60ae47fa4]
See Tim's latest post for more understanding.
>In the newer O/S branch, the NT based systems, the parallel port is
>exclusively virtualized by NT -- I think. And the rules are different
>here.

Yeah, you are right, things are pretty different here in the WDM [Windows Driver Model]. I appreciate your time.
Reply by Ali June 28, 2005
Tim wrote on
[http://groups-beta.google.com/group/microsoft.public.development.device.drivers/browse_frm/thread/50869515be02a63f/9fe6f1d60ae47fa4#9fe6f1d60ae47fa4]:
Yes, parallel ports are simple ISA devices. The I/O port numbers are
all fixed.

I believe the information you want is here:
  http://www.microsoft.com/whdc/resources/respec/specs/pnplpt.mspx


This describes what a device has to do to be recognized by Windows and
trigger a plug-and-play device load.

I guess it's decided now that the PC parallel port is internally
connected to ISA.

Reply by Jonathan Kirwan June 28, 2005
On 27 Jun 2005 20:50:56 -0700, "Ali" <abdulrazaq@gmail.com> wrote:

>Hi Chris,
>Bravo that is what i was looking for! well you have given
>very nice opinions, let me forward these words to those guys and see
>how they come up.
>I guess it will work!
I can add a small amount to this. Since the days of the Pentium with the "green" reflection-wave PCI bus, there have been a great many new and significant changes to the PC.

The PPro and the follow-on P II, III, and IV chips all use a front side bus to communicate with up to three other CPUs directly attached to the front side bus, plus a main chipset (that provides the PCI, AGP, and sideband channel pins, as well as a front side bus presence). There is also a provision for a TPA (third party adapter) on the front side bus. When I was learning some of this, during the PPro days, the design limit was exactly six loads -- 4 CPUs, one chipset, one TPA. I suppose that may have since changed.

A bus transaction from the CPU consisted of a transaction phase, error phase, cache hit phase, and up to four data transfer phases -- for a total of a maximum of seven phases. The first three and one of the data phases could all be overlapped, as they used separate pins and could each proceed in parallel with each other. A CPU would put out a transaction, like a memory read, and the other local (front side) bus chips would have the next clock on which to possibly signal an error during the error phase. After that, the other chips on the front side bus could signal a cache hit on their cache, if it happened. In that case, the next four data transaction phases would transfer the data for the read, without the need to wait while some DRAM cycle took place. However, if there was no cache hit, the chipset would usually queue up the transaction and start some action via its inbound and outbound queues to the DRAM controller portion.

On the "other side" of the chipset chips (north bridge) was the PCI bus. Transactions were also queued to this bus. The chipset was the primary PCI bus chip, but it was also possible to add PCI-to-PCI bridge chips onto that primary PCI bus in order to allow for many PCI boards. Normally, the total loading on a PCI bus was 100 pF.
Since the connectors ate up about half of this, and since it was possible that adding another PCI-to-PCI bridge chip would eat up another 10 pF, you were generally limited to four PCI slots holding boards which were, themselves, limited to exposing a load of 10 pF each. To add more than that, one needed to add a PCI-to-PCI bridge chip, which would provide enough drive for four more bus slots. Etc.

Meanwhile, there was one and only one south bridge allowed, to support ISA. This chip provided the subtractive decoding on the primary PCI bus connect lines needed to pass unrecognized PCI transactions over onto the ISA for resolution there. Also, there were/are sideband channel lines going from the north bridge to the south bridge to handle the case where ISA DMA and some other special signals were required (ISA DMA timing violates PCI transaction burst rules, and therefore there needs to be some special signaling from the south bridge to the north bridge when it "sees" an ISA DMA, so that the north bridge will know how to properly understand and manage the PCI bus during these DMA transactions).

Finally, the four sharable PCI interrupt source lines, A, B, C, and D, have no ISA bus equivalents and aren't going to do anything at all in terms of generating interrupt signals to the CPU unless something "sees" them and translates them to specific interrupt vectors. Most chipsets (via the south bridge or super I/O chipsets) provide an APIC controller (the "advanced" PIC), which allows a great many more interrupt sources than the older AT PCs permitted with their two discrete 8259 PIC controllers. Sometimes 64 sources; perhaps in some cases up to the entire 256, I suppose. These have registers that you can program via specialized I/O to associate the PCI interrupt sources with any interrupt vector you want. These interrupts are passed on to the CPU by yet another new scheme, the APIC bus.
By this bus, CPUs can be put to sleep, wake each other (remember, there were/are up to four permitted on the front side bus), and can accept interrupt notifications -- even possibly rotating from one CPU to the next in handling them. The interrupt notifications don't take place anymore as they once did on the CPUs that were tied more directly to the old ISA bus. They couldn't support passing interrupt notifications back through all the chips and onto the highly optimized front side bus, so they needed a back door channel to the CPUs. This back door is the APIC. The APIC can observe interrupts from both the PCI and ISA side, translate them into any particular vector, and then pass them on to a particular CPU or else rotate them among the CPUs.

Legacy devices, like the parallel port and serial ports, used to reside on the super I/O chip. I don't know what the exact arrangement is these days, but probably something functionally similar. I/O transactions are passed from CPU to front side bus, from front side bus via the north bridge to the PCI, siphoned up by the south bridge and passed onto the ISA bus (or else picked up by the super I/O chip and operated on). In some cases, like the keyboard I/O ports, there were even more special considerations in the chipset in order to guarantee certain legacy transaction ordering (the modern chipsets do an amazing amount of re-ordering). I don't recall if the parallel port qualifies in this regard, but I don't remember it being included in that special list. I admit to being a bit fuzzy on these details today. But I hope that gives a flavor of the process.

I also seem to recall that the EISA device IDs for plug-and-play parallel port hardware (ISA plug and play is a cousin to the PCI P&P spec, as I recall) are PNP0400 and PNP0401 (for ECP ports). By now, I'm sure, there are more such IDs.

Well, that taps out my vague and poor memory of these things.
But it may give you a picture of what goes on when you talk to a parallel port, and perhaps a little flavor for what may be involved under Windows.

In the older Windows cases, there was also another manager program called WinOldAp, which "owned" the DOS box processes. I/O to the parallel port on these older operating systems was monitored and timed, I think. If a DOS application accessed the I/O, and no other application already owned those addresses for the parallel port, then Windows would let WinOldAp allocate them and assign them to that DOS application. The first-time use might be delayed as Windows processed the ownership, but after that it was pretty fast. When the DOS application ceased to use the I/O addresses, WinOldAp would time them out and return them to Windows for later allocation to some other program. Or, perhaps, that was only tested if some other application actually tried to use them.

In the newer O/S branch, the NT-based systems, the parallel port is exclusively virtualized by NT -- I think. And the rules are different here.

Hope there is something in there that helps or clicks for you, and I hope I didn't get anything too seriously wrong in writing it. Best of luck.

Jon
Reply by Paul Burke June 28, 2005
Meindert Sprang wrote:
>If it does for the WDM, I cannot help you there. I have no idea how
>that works.
It works by putting off the evil hour when you have to actually talk to the hardware for as long as possible.

Paul Burke
Reply by Meindert Sprang June 28, 2005
"Ali" <abdulrazaq@gmail.com> wrote in message
news:1119926452.789620.296730@z14g2000cwz.googlegroups.com...
> Hi Meindert Sprang and Tauno Voipio,
> So what is conclusion of our discussion?
Don't know, Ali. I was merely looking at the parallel port from a register point of view. For that, it makes no difference whether it's on the ISA or PCI bus. If it does for the WDM, I cannot help you there. I have no idea how that works.

Meindert