
PCIe or GMII for FPGA-CPU data transfer?

Started by embtel1200 August 22, 2009
Hi,

We are evaluating which interface to use for transferring data between an
Altera/Xilinx FPGA and a CPU. We are considering PCIe and GMII. This is going
to be a point-to-point link with bidirectional data transfer of up to
500 Mbps. The CPU we are using supports PCIe 1.1.

My preference, based on my research, is to go with PCIe for the following
reasons*.

- High throughput
  * PCIe 1.x at 2.5 Gbps per lane vs 1 Gbps for GMII

- Low latency 

- No protocol overhead
  * Ethernet MAC overhead
  * GMII would require us to define our own messages and run a TCP/IP or
similar stack on both ends

- Simplicity from an application perspective
  * No need for a socket API on the CPU

* One thing I am not sure about is the DMA capability of the PCIe 1.1
implementation within the FPGA. If the CPU has to move the data without
DMA, I am really not sure this will scale.

Any thoughts or comments on the suitability of PCIe vs. GMII?

Thanks.


embtel1200 wrote:

> We are evaluating which interface to use for transferring data between an
> Altera/Xilinx FPGA and a CPU. We are considering PCIe and GMII. This is
> going to be a point-to-point link with bidirectional data transfer of up
> to 500 Mbps. The CPU we are using supports PCIe 1.1.
I would use a memory-mapped interface: access the FPGA like an SRAM, if your
CPU has an SRAM interface. This is very easy on both the CPU side and the
FPGA side. Bigger CPUs can do DMA transfers to this interface, and you can
provide some virtual registers mapped into the memory area, e.g. for reading
and writing ring-buffer pointers. But it needs a few more wires to the FPGA
and careful routing of the high-speed signals. I know a system which works
without problems at 100 MHz with a Cyclone II and a 32-bit-wide data bus,
connected to a PXA CPU.
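[To make the "virtual registers" idea concrete, here is a minimal CPU-side
sketch, assuming a Linux host where the FPGA's chip-select window is visible
through /dev/mem; the base address, register offsets, and ring-buffer layout
are hypothetical placeholders, not any real memory map.]

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical placement of the FPGA in the CPU's memory map. */
#define FPGA_PHYS_BASE  0x20000000u   /* placeholder chip-select window   */
#define FPGA_MAP_SIZE   0x1000u

/* Hypothetical "virtual register" offsets inside that window. */
#define REG_WR_PTR      0x00          /* producer index of a ring buffer  */
#define REG_RD_PTR      0x04          /* consumer index                   */
#define RING_DATA       0x100         /* start of the ring buffer itself  */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void *p = mmap(NULL, FPGA_MAP_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, FPGA_PHYS_BASE);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    volatile uint32_t *fpga = (volatile uint32_t *)p;

    /* Read the ring-buffer pointers exported by the FPGA, write one word
     * of payload, then publish it by advancing the write pointer
     * (wrap-around handling omitted for brevity). */
    uint32_t wr = fpga[REG_WR_PTR / 4];
    uint32_t rd = fpga[REG_RD_PTR / 4];
    printf("wr=%u rd=%u\n", (unsigned)wr, (unsigned)rd);

    fpga[RING_DATA / 4 + wr] = 0xCAFEF00Du;
    fpga[REG_WR_PTR / 4] = wr + 1;

    munmap(p, FPGA_MAP_SIZE);
    close(fd);
    return 0;
}

[On the FPGA side the same offsets would typically be decoded into a few
simple registers and a block RAM.]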
> My preference, based on my research, is to go with PCIe for the following
> reasons*.
>
> - High throughput
>   * PCIe 1.x at 2.5 Gbps per lane vs 1 Gbps for GMII
>
> - Low latency
>
> - No protocol overhead
>   * Ethernet MAC overhead
>   * GMII would require us to define our own messages and run a TCP/IP or
> similar stack on both ends
>
> - Simplicity from an application perspective
>   * No need for a socket API on the CPU
The same reasons apply to a memory-mapped interface. I don't know the
physical side of PCIe, but I assume a memory-mapped interface would be much
easier to implement than PCIe.
> * One thing I am not sure about is the DMA capability of the PCIe 1.1
> implementation within the FPGA. If the CPU has to move the data without
> DMA, I am really not sure this will scale.
If you implement the PCIe core yourself, then you can implement DMA
capabilities, too. But I don't know whether it would be fast enough if there
is no hardwired PCIe interface already included in your FPGA.

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
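[To make the DMA point a little more concrete: a purely hypothetical sketch
of the kind of descriptor ring a CPU might hand to a custom FPGA DMA engine.
None of the structure fields, flag bits, or the doorbell register correspond
to a real vendor core, and a real design would also need DMA-coherent
buffers and bus addresses rather than plain pointers.]

#include <stdint.h>

/* Hypothetical descriptor format; a real core defines its own layout. */
struct dma_descriptor {
    uint64_t bus_addr;    /* bus/physical address of the data buffer      */
    uint32_t length;      /* transfer length in bytes                     */
    uint32_t flags;       /* ownership and interrupt control bits         */
};

#define DESC_OWN_FPGA   (1u << 0)   /* descriptor currently owned by FPGA */
#define DESC_IRQ_DONE   (1u << 1)   /* raise an interrupt when completed  */
#define RING_LEN        16

/* In a real system this ring lives in DMA-coherent memory and the FPGA is
 * told its bus address; a plain static array is only for illustration.   */
static struct dma_descriptor ring[RING_LEN];

/* The CPU fills a descriptor, hands ownership to the FPGA, then writes a
 * (hypothetical) doorbell register; the FPGA walks the ring, moves the
 * data and clears the OWN bit when each transfer completes.              */
void queue_transfer(volatile uint32_t *doorbell_reg,
                    unsigned idx, uint64_t buf_bus_addr, uint32_t len)
{
    ring[idx % RING_LEN].bus_addr = buf_bus_addr;
    ring[idx % RING_LEN].length   = len;
    ring[idx % RING_LEN].flags    = DESC_OWN_FPGA | DESC_IRQ_DONE;
    *doorbell_reg = idx % RING_LEN;   /* tell the FPGA new work is queued */
}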

embtel1200 wrote:

> Hi,
>
> We are evaluating which interface to use for transferring data between an
> Altera/Xilinx FPGA and a CPU. We are considering PCIe and GMII. This is
> going to be a point-to-point link with bidirectional data transfer of up
> to 500 Mbps. The CPU we are using supports PCIe 1.1.
A simple old-fashioned 8-bit parallel bus at ~100 MHz would do (8 bits ×
100 MHz is 800 Mbps raw, comfortably above your 500 Mbps requirement). At
that kind of speed, signal integrity is no problem and there is no need to
do anything special about it. Of course, you can make a 16- or 32-bit bus
with a correspondingly lower clock speed.
> My preference, based on my research, is to go with PCIe for the following
> reasons*.
>
> - High throughput
>   * PCIe 1.x at 2.5 Gbps per lane vs 1 Gbps for GMII
>
> - Low latency
>
> - No protocol overhead
>   * Ethernet MAC overhead
>   * GMII would require us to define our own messages and run a TCP/IP or
> similar stack on both ends
>
> - Simplicity from an application perspective
>   * No need for a socket API on the CPU
>
> * One thing I am not sure about is the DMA capability of the PCIe 1.1
> implementation within the FPGA. If the CPU has to move the data without
> DMA, I am really not sure this will scale.
>
> Any thoughts or comments on the suitability of PCIe vs. GMII?
Why do you need all that cost and complexity?

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Hi,

Thanks for the replies. 

The CPU we have has a 25 MHz low-speed bus typically intended for a boot
flash. There are also other devices on this CPLD, so using a direct bus is
definitely not an option.

The choice for us is really between PCIe and GMII, as the CPU supports
these in addition to its other peripheral buses.

My question is then really about choosing one of these from an FPGA and
system-software perspective.

Thanks. 

> embtel1200 wrote:
>
>> Hi,
>>
>> [ snip ]
>
> A simple old-fashioned 8-bit parallel bus at ~100 MHz would do (8 bits ×
> 100 MHz is 800 Mbps raw, comfortably above your 500 Mbps requirement). At
> that kind of speed, signal integrity is no problem and there is no need to
> do anything special about it. Of course, you can make a 16- or 32-bit bus
> with a correspondingly lower clock speed.
>
>> [ snip ]
>>
>> Any thoughts or comments on the suitability of PCIe vs. GMII?
>
> Why do you need all that cost and complexity?
>
> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com
In article <mfidnfnbe4lbRA_XnZ2dnUVZ_j2dnZ2d@giganews.com>,
embtel1200 <research1729@gmail.com> wrote:
> Hi,
>
> Thanks for the replies.
>
> The CPU we have has a 25 MHz low-speed bus typically intended for a boot
> flash. There are also other devices on this CPLD, so using a direct bus is
> definitely not an option.
>
> The choice for us is really between PCIe and GMII, as the CPU supports
> these in addition to its other peripheral buses.
>
> My question is then really about choosing one of these from an FPGA and
> system-software perspective.
[ snip ]

Depends some on the CPU in question, and some on the data model. If it has
good DMA capability on the GMII interface (and most do), that might be
simpler. You don't need all of the Ethernet/TCP overhead - just use it to
push data around, and set the GMII port to promiscuous mode. You don't even
have to use CRCs (though it's probably not a bad idea).

PCIe brings a lot of protocol overhead as well; pulling data requires extra
transactions. If your FPGA already has a hard PCIe core, just use it. I've
had some bad experiences with soft PCIe cores, but the market is probably
far enough along by now that those issues have been resolved.

As I said, it also depends on your data model. If the data comes in from a
more-or-less asynchronous interface, using PCIe means that either the FPGA
will have to master the transaction into the root complex, or it will have
to interrupt the root complex, which will then need to pull the data.

Working from the gate-count side on the FPGA, I suspect a minimal GMII
implementation takes quite a lot less space than a PCIe implementation
(again, unless you have a hard PCIe core, but if you did, you probably
wouldn't be asking the question).

--
Steve Watt KD6GGD  PP-ASEL-IA         ICBM: 121W 56' 57.5" / 37N 20' 15.3"
Internet: steve @ Watt.COM                               Whois: SW32-ARIN
Free time?  There's no such thing.  It just comes in varying prices...
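[To illustrate the "just push data around" approach on the CPU side, here is
a minimal sketch assuming a Linux host; the interface name "eth1", the MAC
addresses, and the EtherType 0x88B5 are placeholders, and raw packet sockets
need root or CAP_NET_RAW.]

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/ethernet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* One raw packet socket carries everything; no TCP/IP stack involved. */
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket (needs root/CAP_NET_RAW)"); return 1; }

    int ifindex = (int)if_nametoindex("eth1");    /* placeholder interface */
    if (ifindex == 0) { perror("if_nametoindex"); return 1; }

    /* Promiscuous mode, so frames addressed to the FPGA's arbitrary MAC
     * are still delivered to us on receive. */
    struct packet_mreq mr = { .mr_ifindex = ifindex,
                              .mr_type    = PACKET_MR_PROMISC };
    setsockopt(s, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mr, sizeof(mr));

    /* Build one frame by hand: dst MAC, src MAC, EtherType, raw payload. */
    unsigned char frame[ETH_FRAME_LEN];
    const unsigned char dst[ETH_ALEN] = {0x02,0x00,0x00,0x00,0x00,0x01};
    const unsigned char src[ETH_ALEN] = {0x02,0x00,0x00,0x00,0x00,0x02};
    struct ether_header *eh = (struct ether_header *)frame;
    memcpy(eh->ether_dhost, dst, ETH_ALEN);
    memcpy(eh->ether_shost, src, ETH_ALEN);
    eh->ether_type = htons(0x88B5);               /* placeholder EtherType */

    memset(frame + sizeof(*eh), 0xA5, 64);        /* dummy 64-byte payload */

    struct sockaddr_ll addr = { .sll_family   = AF_PACKET,
                                .sll_protocol = htons(0x88B5),
                                .sll_ifindex  = ifindex,
                                .sll_halen    = ETH_ALEN };
    memcpy(addr.sll_addr, dst, ETH_ALEN);

    if (sendto(s, frame, sizeof(*eh) + 64, 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");

    close(s);
    return 0;
}

[Receiving works the same way with recvfrom() on the same socket; with the
port in promiscuous mode, every frame the FPGA sends is delivered regardless
of its destination MAC.]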