"Mark McDougall" <markm@vl.com.au> wrote in message
> This has nothing to do with caching. While the original PCI specification
> envisioned the ability to have cacheable memory accessed via the PCI bus
> (and includes SBO# and SDONE to implement a cache coherency protocol for
> PCI) I am unaware of any system or design that ever implemented this. In
> subsequent specifications (i.e. PCI 2.2) it was made clear that this
> functionality was being demoted and marked for removal in future
> specification revisions.
Note that these signals are related to cache within PCI bridges and
targets that support cache. They aren't involved when the cache is
within the CPU. So you still rely on software setting up the range
of memory on the PCI device as non-cached (ie, non-CPU-cached)
even as the above mechanism is deprecated.
I hadn't thought about caching within the bridges before you
mentioned it in this thread. Ouch, I can see why it's deprecated.
There's generally good OS support for a device driver to disable
the cache within the CPU for the range of memory corresponding
to the card, but it wouldn't be able to touch this in the bridge chip
very easily. Posted writes to memory mapped hardware registers
are bad enough, but cached would be a killer in cases like our
cards.
Steve
Reply by The Real Andy●December 9, 2005
On Sun, 04 Dec 2005 11:35:00 -0800, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
<snip>
>tragedies of our time. Think of how things would be if IBM had gone
>with the 68K and anybody but Bill.
>
Well, we would all be using Apple Macs and Bill Gates would be poor.
There would be no DOS, and no doubt all the Linux guys would then hate
the original Mac OS. Then again, we could be really unlucky and all be
stuck using OS/2 blah blah blah.
Reply by John Larkin●December 8, 2005
On Thu, 08 Dec 2005 04:02:16 GMT, "TC" <noone@nowhere.com> wrote:
>
>"Mark McDougall" <markm@vl.com.au> wrote in message
>news:4393bb75$0$23327$5a62ac22@per-qv1-newsreader-01.iinet.net.au...
>> Keith wrote:
>>
>>> That doesn't change the cacheability. Caches are a processor thing
>>> and *NOT* under control of any PCI device.
>>
>> Which begs the question, why would the PCI spec refer to something that
>> has nothing to do with PCI?
>>
>> If a PCI memory space is marked as 'pre-fetchable' then it guarantees,
>> among other things, that the act of pre-fetching memory has no
>> side-effects. This means nothing more than the fact that it may be a
>> suitable candidate for caching, if the platform supports it. In this case,
>> a master may issue MRL (& MRM) commands.
>>
>> OTOH, cache-coherency (which I assume you're hinting at) is a different
>> problem altogether - especially if you've got multiple bus masters
>> accessing PCI memory space with their own caches. However, this is a
>> *system* problem and (IMHO) not really any concern of the PCI bus spec
>> group to mandate that PCI memory is not 'cacheable' - whatever that means
>> in each context!
>>
>> In fact, there's little discussion whatsoever in the spec (that I can
>> see) about 'caches' - which is just what I would expect.
>>
>> BTW I'm quite happy to be shown the error in my reasoning!
>>
>> Regards,
>> Mark
>
>PCI devices (peripherals) identify via Base Address Registers (BARs) the
>size and type(s) of address space (IO, memory or prefetchable memory) that
>they need the BIOS and/or operating system (plug-and-play code) to map
>as physical addresses on the bus. These addresses are typically accessed by
>device drivers (but might also be accessed by other devices). The statement
>that...
>
>"If a PCI memory space is marked as 'pre-fetchable' then it guarantees,
>among other things, that the act of pre-fetching memory has no
>side-effects."
>
>... is the best summary statement of the significance of 'pre-fetchable'
>memory on PCI. It is used purely as a performance optimization that allows
>for use of burst read transactions (specifically MRL and MRM) when accessing
>that address region.
>
>This has nothing to do with caching. While the original PCI specification
>envisioned the ability to have cacheable memory accessed via the PCI bus
>(and includes SBO# and SDONE to implement a cache coherency protocol for
>PCI) I am unaware of any system or design that ever implemented this. In
>subsequent specifications (i.e. PCI 2.2) it was made clear that this
>functionality was being demoted and marked for removal in future
>specification revisions.
>
>Hope this helps.
>
>TC
>
Thanks. The consensus seems to be that the BIOS allocates requested
PCI bus memory resources but they are never cached. That seems to align
with my experience.
John
Reply by TC●December 8, 2005
"Mark McDougall" <markm@vl.com.au> wrote in message
news:4393bb75$0$23327$5a62ac22@per-qv1-newsreader-01.iinet.net.au...
> Keith wrote:
>
>> That doesn't change the cacheability. Caches are a processor thing
>> and *NOT* under control of any PCI device.
>
> Which begs the question, why would the PCI spec refer to something that
> has nothing to do with PCI?
>
> If a PCI memory space is marked as 'pre-fetchable' then it guarantees,
> among other things, that the act of pre-fetching memory has no
> side-effects. This means nothing more than the fact that it may be a
> suitable candidate for caching, if the platform supports it. In this case,
> a master may issue MRL (& MRM) commands.
>
> OTOH, cache-coherency (which I assume you're hinting at) is a different
> problem altogether - especially if you've got multiple bus masters
> accessing PCI memory space with their own caches. However, this is a
> *system* problem and (IMHO) not really any concern of the PCI bus spec
> group to mandate that PCI memory is not 'cacheable' - whatever that means
> in each context!
>
> In fact, there's little discussion whatsoever in the spec (that I can
> see) about 'caches' - which is just what I would expect.
>
> BTW I'm quite happy to be shown the error in my reasoning!
>
> Regards,
> Mark
PCI devices (peripherals) identify via Base Address Registers (BARs) the
size and type(s) of address space (IO, memory or prefetchable memory) that
they need the BIOS and/or operating system (plug-and-play code) to map
as physical addresses on the bus. These addresses are typically accessed by
device drivers (but might also be accessed by other devices). The statement
that...
"If a PCI memory space is marked as 'pre-fetchable' then it guarantees,
among other things, that the act of pre-fetching memory has no
side-effects."
... is the best summary statement of the significance of 'pre-fetchable'
memory on PCI. It is used purely as a performance optimization that allows
for use of burst read transactions (specifically MRL and MRM) when accessing
that address region.
This has nothing to do with caching. While the original PCI specification
envisioned the ability to have cacheable memory accessed via the PCI bus
(and includes SBO# and SDONE to implement a cache coherency protocol for
PCI) I am unaware of any system or design that ever implemented this. In
subsequent specifications (i.e. PCI 2.2) it was made clear that this
functionality was being demoted and marked for removal in future
specification revisions.
Hope this helps.
TC
Reply by Keith Williams●December 7, 2005
In article <43965eb9$0$22304$5a62ac22@per-qv1-newsreader-
01.iinet.net.au>, markm@vl.com.au says...
> Keith Williams wrote:
>
> > prefetching <> cacheing
>
> OK, but I believe it is right to say that *only* prefetchable memory can
> (also) be cacheable?
Ok, I can't think of a case where one would want data to be cached
but not prefetchable. I suppose I could come up with some weird
case where data is changed on read but it's needed several times.
Of course one would simply copy the data into memory (or register).
I suppose: cacheable => prefetchable, but prefetchable /=> cacheable
(where '/=>' means 'does not imply').
> > But it *is* part of the (pre 2.2) spec. The bus must guarantee
> > coherency in this case. The way it does it is with back-offs and
> > retries. Now think about this with multiple bridges and initiators.
> > It gets to be a mess.
>
> Which is exactly why I was surprised that they (PCISIG) would even
> attempt to handle it! :O Thinking about it, I suppose if the bus doesn't
> specify it, then what other scope to do so is there?
>
> > It's there in the older versions of the spec. As I've mentioned,
> > it's been deprecated in later versions.
>
> OK, I stand corrected. Although I have worked on pre 2.2 designs, this
> was never an issue so it was duly forgotten.
I've never seen it used either. The SBO# and SDONE signals are
present but tied off in the designs I've seen/done. I only
remembered it from a MindShare PCI class I took moons ago. The
instructor didn't much like it either. ;-)
> I've learnt something new today. Can I go home now? ;)
Got coffee in hand. It's just time to get started! ;-)
--
Keith
Reply by ●December 7, 2005
Andy Peters wrote:
> As the board booted, the monitor enumerated the PCI bus and assigned
> valid base addresses to all of the PCI devices. From its command line,
> you could do the equivalent of PEEK and POKE to the PLX peripheral
> control registers or to the board's custom registers. Standard
> memory-dump and memory-modify commands were available, too.
>
> So, what John wants to do -- peek and poke registers, etc -- is
> reasonable.
This was kind of a tangent to the caching discussion. But sure, no
issue with doing this. What you are doing is essentially putting
the knowledge of the hardware/registers into the peek/poke operator
instead of a device driver.
In order to get at the high addresses that the BIOS will put the PCI
device at, I suspect that some sort of extended memory add-on
will be required (haven't worked in DOS in ages). Whether it
defaults to caching the memory or not will be dependent on that
software. If I were writing it and had to pick just one, I'd disable
caching for that memory range to ensure compatibility, though at
the cost of performance.
> As for caching -- that really depends on how the system controller chip
> is set up. I don't know to what extent the BIOS firmware sets up a
> PC's system controller; presumably, it does enough to find the boot
> device for a higher-level OS, to which it passes control. The OS then
> may set up the system controller in ways that depend on its needs. One
> presumes that the system controller driver honors the cacheable bit and
> doesn't cache a memory space declared as non-cacheable!
Reasonable, although it depends on the OS. In the ones I've
worked with, you can't take a default but have to indicate the
characteristics (including cache/non-cache) that you need.
Also, assuming an OS/drivers that left the setup entirely to
the BIOS, the implication would be that the entire PCI range
would be non-cached since the BIOS has no way to know if
caching will cause grief for any particular card.
Steve
Reply by sleb...@yahoo.com●December 7, 2005
Mark McDougall wrote:
> Keith Williams wrote:
>
> > prefetching <> cacheing
>
> OK, but I believe it is right to say that *only* prefetchable memory can
> (also) be cacheable?
>
Yes, but I would word it as "cacheable memory must be prefetchable but
prefetchable memory may or may not be cacheable". Just to be clear.
Reply by Mark McDougall●December 7, 2005
Keith Williams wrote:
> prefetching <> cacheing
OK, but I believe it is right to say that *only* prefetchable memory can
(also) be cacheable?
> But it *is* part of the (pre 2.2) spec. The bus must guarantee
> coherency in this case. The way it does it is with back-offs and
> retries. Now think about this with multiple bridges and initiators.
> It gets to be a mess.
Which is exactly why I was surprised that they (PCISIG) would even
attempt to handle it! :O Thinking about it, I suppose if the bus doesn't
specify it, then what other scope to do so is there?
> It's there in the older versions of the spec. As I've mentioned,
> it's been deprecated in later versions.
OK, I stand corrected. Although I have worked on pre 2.2 designs, this
was never an issue so it was duly forgotten.
I've learnt something new today. Can I go home now? ;)
Regards,
Mark
Reply by Andy Peters●December 6, 2005
steve_schefter@hotmail.com wrote:
> John Larkin wrote:
> > Besides, a device isn't "useless without a device driver" as long as
> > an application can get at its registers somehow.
>
> Which registers? The only registers that are generic to all
> cards are the PCI configuration space registers and they are
> generally accessed via configuration space (different from
> memory and I/O space). In configuration space caching doesn't
> apply. The registers (or other parts of the card) which may be
> mapped into memory space are specific to that card and
> therefore raise the question of caching. They are different for
> every card design and therefore not useful without a device
> driver that understands what hardware is involved with the
> memory.
Steve,
I did a handful of PCI designs (for use on PMC sites in VME SBCs). The
SBC of course ran Linux or VxWorks or Integrity or whatever, but we
also had a low-level monitor/debug environment roughly equivalent (OK,
far superior!) to DOS BIOS.
Most of the designs used PLX chips. PLX puts the registers needed to
configure their chips' peripherals in BAR 0 and BAR 1. I'd generally
put my design's application-specific registers in BAR 2. On-board
memory goes in another BAR. And so forth.
As the board booted, the monitor enumerated the PCI bus and assigned
valid base addresses to all of the PCI devices. From its command line,
you could do the equivalent of PEEK and POKE to the PLX peripheral
control registers or to the board's custom registers. Standard
memory-dump and memory-modify commands were available, too.
So, what John wants to do -- peek and poke registers, etc -- is
reasonable.
As for caching -- that really depends on how the system controller chip
is set up. I don't know to what extent the BIOS firmware sets up a
PC's system controller; presumably, it does enough to find the boot
device for a higher-level OS, to which it passes control. The OS then
may set up the system controller in ways that depend on its needs. One
presumes that the system controller driver honors the cacheable bit and
doesn't cache a memory space declared as non-cacheable!
-a
Reply by ●December 6, 2005
John Larkin wrote:
> My question was actually an attempt to understand how the BIOS sets up
> caching and, in general, what devices on the PCI bus might get cached,
> and what controls whether they do. All the answers, so far, are that
> nobody knows.
As a PCI device driver writer, I know. You just don't seem to want to
believe me ;-)
> Since any PCI-compliant
> BIOS actually finds our interfaces and assigns memory resources, I was
> wondering what the caching situation is.
Assigning resources (done by the BIOS) has nothing to do with
caching. The device can be placed into I/O or memory according
to the resources it asks for, but this is unrelated to whether the
CPU will use its cache when accessing that memory range.
The latter is under control of the device driver when it maps the
range into virtual memory.
> Besides, a device isn't "useless without a device driver" as long as
> an application can get at its registers somehow.
Which registers? The only registers that are generic to all
cards are the PCI configuration space registers and they are
generally accessed via configuration space (different from
memory and I/O space). In configuration space caching doesn't
apply. The registers (or other parts of the card) which may be
mapped into memory space are specific to that card and
therefore raise the question of caching. They are different for
every card design and therefore not useful without a device
driver that understands what hardware is involved with the
memory.
You can't decide whether caching can lead to data corruption
(and therefore should be disabled for the range) unless you
understand the hardware (ie, you're the device driver). As I
pointed out earlier from the PCI spec, there are only 4 bits
that go with the memory request and none of them indicate
whether caching should be on or off. Since the device driver
does the virtual mapping and it has to know the hardware
well enough to know whether caching is appropriate or not,
there is no point in putting it in configuration space.
Steve