
Infiniband's RDMA capability on Windows 8.1 with Intel MPI


We are deploying a cluster (Windows 8.1 is installed on the nodes) equipped with Mellanox InfiniBand adapters (Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]).

 

I am using the Intel MPI library and trying to use RDMA (via DAPL) to take advantage of the full bandwidth of InfiniBand.
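To be explicit about what I mean by "with DAPL enabled": I select the fabric through Intel MPI's environment variables, roughly as in the invocation below. The provider name is only a placeholder and would have to match an entry in the DAPL dat.conf on the nodes.

mpiexec -n 2 -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS dapl -genv I_MPI_DAPL_PROVIDER <provider-from-dat.conf> my_benchmark.exe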

 

I installed the InfiniBand driver from Mellanox's webpage successfully.

 

1. I cannot run my MPI benchmark application with DAPL enabled. When I fall back to TCP/IP (IPoIB), the benchmark numbers are not good (~300 MB/sec over MPI); a sketch of the kind of loop I am timing follows this list.

 

2. In a test with one machine as a server and another as a client, they can send/receive messages at up to 2~3 GB/sec (which is exactly what I want).
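To make the throughput numbers concrete, here is a minimal sketch of the kind of point-to-point loop I am timing (this is not my exact benchmark; the message size and iteration count are arbitrary, and it uses only standard MPI calls, so it runs the same whether the fabric is DAPL or TCP):

/* Minimal two-rank ping-pong bandwidth sketch; run one rank per node. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int MSG_BYTES = 4 * 1024 * 1024;   /* 4 MB messages */
    const int ITERS     = 100;
    int rank, size;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = (char *)malloc(MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration moves MSG_BYTES in each direction. */
        double mb = (double)MSG_BYTES * ITERS * 2 / (1024.0 * 1024.0);
        printf("bandwidth: %.1f MB/sec\n", mb / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}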

 

If you have any knowledge about this, please share it with me.

