Channel: Mellanox Interconnect Community: Message List

Re: Do you have a plan to support vSphere 6.5?

I tried PVRDMA, but there were some issues.

You can see them in the vSphere 6.5 release notes.

I find this very curious.

I had heard that Mellanox has been working with VMware for the last several years to support RDMA and vRDMA on the ESXi hypervisor.

But most of those strong features seem to have gone missing... :(

My small lab has switched to a dual-SX6036G Ethernet-based infrastructure, but without RDMA.

I can't trust Mellanox's driver support & PB anymore.

I'll move to an Ethernet fabric switch from another vendor, and then all of our SX6036G switches go to the trash bin.

Goodbye, Mellanox.








Re: NFS over RoCE Ubuntu 16.04 with latest OFED


Same thing here. It's those svcrdma and xprtrdma modules... I don't understand why this was overlooked. I suspect those modules, which I believe ship with Ubuntu, aren't being updated/replaced along with the rest of the modules from Mellanox.

 

On the server side (the client is pretty much the same):

 

root@igor:~# uname -a
Linux igor 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

 

root@igor:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:        16.04
Codename:       xenial

root@igor:~# modprobe svcrdma                        
modprobe: ERROR: could not insert 'rpcrdma': Invalid argument

root@igor:~# echo rdma 20049 > /proc/fs/nfsd/portlist
-su: echo: write error: Protocol not supported

root@igor:~# dmesg

[537309.544424] mlx4_en: enp1s0: Close port called
[537312.268446] Compat-mlnx-ofed backport release: 2ed8a21
[537312.268449] Backport based on mlnx_ofed/mlnx_rdma.git 2ed8a21
[537312.268450] compat.git: mlnx_ofed/mlnx_rdma.git
[537312.281715] mlx4_core: Mellanox ConnectX core driver v3.4-1.0.0 (25 Sep 2016)
[537312.281761] mlx4_core: Initializing 0000:01:00.0
[537314.046353] mlx4_core 0000:01:00.0: DMFS high rate mode not supported
[537314.046525] mlx4_core: device is working in RoCE mode: Roce V1                                                                                                    
[537314.046527] mlx4_core: gid_type 1 for UD QPs is not supported by the devicegid_type 0 was chosen instead
[537314.046528] mlx4_core: UD QP Gid type is: V1
[537314.945058] mlx4_core 0000:01:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
[537314.945061] mlx4_core 0000:01:00.0: PCIe link width is x8, device supports x8
[537314.970245] pps_core: LinuxPPS API ver. 1 registered
[537314.970248] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[537314.972474] PTP clock support registered
[537314.984817] mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.4-1.0.0 (25 Sep 2016)
[537314.984933] mlx4_en 0000:01:00.0: Activating port:1
[537314.985004] mlx4_en: 0000:01:00.0: Port 1: enabling only PFC DCB ops
[537314.985006] mlx4_en: 0000:01:00.0: Port 1: Failed to query disable_32_14_4_e field for QCN
[537314.988534] mlx4_en: 0000:01:00.0: Port 1: Using 64 TX rings
[537314.988537] mlx4_en: 0000:01:00.0: Port 1: Using 8 RX rings
[537314.988540] mlx4_en: 0000:01:00.0: Port 1:   frag:0 - size:1522 prefix:0 stride:1536
[537314.988921] mlx4_en: 0000:01:00.0: Port 1: Initializing port
[537314.998296] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v3.4-1.0.0 (25 Sep 2016)
[537314.998572] mlx4_core 0000:01:00.0: mlx4_ib_add: allocated counter index 1 for port 1
[537315.025711] mlx4_core 0000:01:00.0 enp1s0: renamed from eth0
[537315.337917] mlx4_en: enp1s0:   frag:0 - size:1536 prefix:0 stride:1536
[537315.338140] mlx4_en: enp1s0:   frag:1 - size:4096 prefix:1536 stride:4096
[537315.338335] mlx4_en: enp1s0:   frag:2 - size:3390 prefix:5632 stride:3392
[537315.381364] IPv6: ADDRCONF(NETDEV_UP): enp1s0: link is not ready
[537317.232116] mlx4_en: enp1s0: Link Up
[537317.232198] IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0: link becomes ready
[537317.287122] mlx4_en: enp1s0: Link Down
[537317.392025] mlx4_en: enp1s0: Link Up

[537447.634106] rpcrdma: Unknown symbol rdma_event_msg (err 0)
[537447.634210] rpcrdma: disagrees about version of symbol ib_create_cq
[537447.634214] rpcrdma: Unknown symbol ib_create_cq (err -22)
[537447.634228] rpcrdma: disagrees about version of symbol rdma_resolve_addr
[537447.634231] rpcrdma: Unknown symbol rdma_resolve_addr (err -22)
[537447.634406] rpcrdma: Unknown symbol ib_event_msg (err 0)
[537447.634450] rpcrdma: disagrees about version of symbol ib_dereg_mr
[537447.634452] rpcrdma: Unknown symbol ib_dereg_mr (err -22)
[537447.634466] rpcrdma: disagrees about version of symbol ib_query_qp
[537447.634469] rpcrdma: Unknown symbol ib_query_qp (err -22)
[537447.634484] rpcrdma: disagrees about version of symbol rdma_disconnect
[537447.634487] rpcrdma: Unknown symbol rdma_disconnect (err -22)
[537447.634497] rpcrdma: disagrees about version of symbol ib_alloc_fmr
[537447.634500] rpcrdma: Unknown symbol ib_alloc_fmr (err -22)
[537447.634565] rpcrdma: disagrees about version of symbol ib_dealloc_fmr
[537447.634567] rpcrdma: Unknown symbol ib_dealloc_fmr (err -22)
[537447.634576] rpcrdma: disagrees about version of symbol rdma_resolve_route
[537447.634578] rpcrdma: Unknown symbol rdma_resolve_route (err -22)
[537447.634621] rpcrdma: disagrees about version of symbol rdma_bind_addr
[537447.634624] rpcrdma: Unknown symbol rdma_bind_addr (err -22)
[537447.634663] rpcrdma: disagrees about version of symbol rdma_create_qp
[537447.634666] rpcrdma: Unknown symbol rdma_create_qp (err -22)
[537447.634756] rpcrdma: Unknown symbol ib_map_mr_sg (err 0)
[537447.634771] rpcrdma: disagrees about version of symbol ib_destroy_cq
[537447.634775] rpcrdma: Unknown symbol ib_destroy_cq (err -22)
[537447.634789] rpcrdma: disagrees about version of symbol rdma_create_id
[537447.634812] rpcrdma: Unknown symbol rdma_create_id (err -22)
[537447.634949] rpcrdma: disagrees about version of symbol rdma_listen
[537447.634953] rpcrdma: Unknown symbol rdma_listen (err -22)
[537447.634958] rpcrdma: disagrees about version of symbol rdma_destroy_qp
[537447.634963] rpcrdma: Unknown symbol rdma_destroy_qp (err -22)
[537447.634976] rpcrdma: disagrees about version of symbol ib_query_device
[537447.634978] rpcrdma: Unknown symbol ib_query_device (err -22)
[537447.634988] rpcrdma: disagrees about version of symbol ib_get_dma_mr
[537447.634991] rpcrdma: Unknown symbol ib_get_dma_mr (err -22)
[537447.635005] rpcrdma: disagrees about version of symbol ib_alloc_pd
[537447.635007] rpcrdma: Unknown symbol ib_alloc_pd (err -22)
[537447.635090] rpcrdma: Unknown symbol ib_alloc_mr (err 0)
[537447.635176] rpcrdma: disagrees about version of symbol rdma_connect
[537447.635179] rpcrdma: Unknown symbol rdma_connect (err -22)
[537447.635232] rpcrdma: Unknown symbol ib_wc_status_msg (err 0)
[537447.635336] rpcrdma: disagrees about version of symbol rdma_destroy_id
[537447.635339] rpcrdma: Unknown symbol rdma_destroy_id (err -22)
[537447.635379] rpcrdma: disagrees about version of symbol rdma_accept
[537447.635382] rpcrdma: Unknown symbol rdma_accept (err -22)
[537447.635393] rpcrdma: disagrees about version of symbol ib_destroy_qp
[537447.635396] rpcrdma: Unknown symbol ib_destroy_qp (err -22)
[537447.635512] rpcrdma: disagrees about version of symbol ib_dealloc_pd
[537447.635515] rpcrdma: Unknown symbol ib_dealloc_pd (err -22)
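
A quick way to confirm this is a module-provenance mismatch is to compare where modprobe resolves rpcrdma versus the Mellanox-built modules, and what kernel/ABI each was built against. This is only a diagnostic sketch; the exact paths under /lib/modules are assumptions and depend on how MLNX_OFED was installed:

# where does each module actually live?
modinfo -F filename rpcrdma
modinfo -F filename mlx4_core

# what were they built against?
modinfo -F vermagic rpcrdma mlx4_core

If rpcrdma resolves to the stock kernel tree (something like .../kernel/net/sunrpc/xprtrdma/) while mlx4_core resolves to an OFED/DKMS "updates" tree, the "disagrees about version of symbol" errors above are the expected outcome: rpcrdma was built against the in-tree RDMA core rather than the backported one shipped by MLNX_OFED.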


mlx4 RoCE mode without OFED


Hi,

I've been trying to get our ConnectX-3 card to run RoCE (Ethernet mode). The card itself is working, but I can't pass roce_mode to the mlx4_core module.

With the OFED stack I am able to change roce_mode, but I want to set roce_mode without using the OFED stack. Is it possible to get the driver code with the roce_mode flag?

Please provide input on how to achieve this. Can you point me to a source base that makes this work?

Thanks

Rama
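
For reference, and not as an official answer: on a sufficiently new inbox kernel the RoCE version is not an mlx4_core module parameter at all; RoCE v1 and v2 GIDs coexist in the GID table, and the default version used by the RDMA CM can be selected through configfs. A minimal sketch, assuming a kernel of roughly 4.5 or newer with rdma_cm configfs support and a device that shows up as mlx4_0:

mount -t configfs none /sys/kernel/config          # if not already mounted
mkdir /sys/kernel/config/rdma_cm/mlx4_0            # device name is an assumption
cat /sys/kernel/config/rdma_cm/mlx4_0/ports/1/default_roce_mode
echo "RoCE v2" > /sys/kernel/config/rdma_cm/mlx4_0/ports/1/default_roce_mode

With this mechanism there is no roce_mode flag to pass to the inbox driver; applications either pick a GID of the desired type directly or rely on the rdma_cm default set above.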

Re: NFS over RoCE Ubuntu 16.04 with latest OFED


Hi Ryan,

You might need to contact Mellanox Support for this issue.

 

~Rage

Patch needed to activate RoCE v2 for ConnectX-3 10G card


Hi,

 

Could you please provide a patch to activate RoCE v2 for the ConnectX-3 10G card? We are blocked on this. We can't install the MLNX_OFED stack; we want to use the inbox driver.

 

Thanks

Rama
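
One way to check whether the inbox driver on a given kernel already exposes RoCE v2 (rather than needing a patch) is to look at the GID table attributes in sysfs. A sketch only; the device name mlx4_0 and the index range are assumptions:

for i in 0 1 2 3; do
    echo "gid $i:  $(cat /sys/class/infiniband/mlx4_0/ports/1/gids/$i)"
    echo "type $i: $(cat /sys/class/infiniband/mlx4_0/ports/1/gid_attrs/types/$i 2>/dev/null)"
done

Entries reported as "RoCE v2" mean the inbox kernel already supports it on that port; if only "IB/RoCE v1" entries appear, a newer inbox kernel (or MLNX_OFED) is needed rather than a standalone patch.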

Re: Is there a way to download the latest OFED ISO via wget or curl?


Update - Given the difficulty downloading an ISO without a browser, we have implemented an effective solution. We removed all the Mellanox cards and replaced them with Intel XL710 cards. While we had no previous inclination to remove our Mellanox cards, this is a much better solution for our situation. Thanks for your help.

Anyone using ESXi 6.5?


I am currently running ConnectX-3 cards with 1.8.2.4 drivers back to an SRP target on ESXi 6.0. I know Mellanox has all but given up on SRP and VMware. I also have ConnectX-4 cards available.

 

Currently my ConnectX-3 cards provide connectivity to my datastore via SCST/SRP. What is the fastest option available for ConnectX-4 cards? What does the inbox 6.5 driver support? At this point it looks like the answer is iSCSI, but I'm curious what others have tried.
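
For anyone poking at the 6.5 inbox (native) drivers, the esxcli rdma namespace that was added in 6.5 is a quick way to see what they actually expose for a ConnectX-4 before committing to a storage protocol. A sketch, run in the ESXi host shell; the grep pattern is only illustrative:

esxcli software vib list | grep -i nmlx     # which native Mellanox driver is installed
esxcli network nic list                     # uplinks claimed by nmlx4/nmlx5
esxcli rdma device list                     # RDMA devices registered by the inbox driver

If nothing shows up under 'esxcli rdma device list', the inbox path for that setup is effectively Ethernet/iSCSI only.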


infiniband SR-IOV with neutron error


I want to configure Mellanox InfiniBand-mode SR-IOV with OpenStack Liberty using Neutron, following this link: SR-IOV-Passthrough-For-Networking - OpenStack

The compute host has two Mellanox InfiniBand cards. I configured them for SR-IOV with Neutron and installed the neutron-sriov-nic-agent, but when I completed the configuration and restarted the neutron-sriov-nic-agent, it reported the following error:

WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.pci_lib [-] Cannot find vfs [0, 1, 2, 3, 4, 5, 6, 7] in device ib0

However, using the lspci command I can see the VFs:

lspci | grep -i mella

21:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

21:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

21:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

When I set 'options mlx4_core num_vfs=8 port_type_array=2,2 probe_vf=0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1', ip link show reports:

eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000

    link/ether 24:be:05:ab:14:f2 brd ff:ff:ff:ff:ff:ff

    vf 0 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 1 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 2 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 3 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 4 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 5 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 6 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

    vf 7 MAC ff:ff:ff:ff:ff:ff, vlan 65535, spoof checking off, link-state auto

 

But my switch only supports InfiniBand mode, so that link is down and shows NO-CARRIER.

When I set 'options mlx4_core num_vfs=8 port_type_array=1,1 probe_vf=0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1', ip link show reports:

ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT qlen 1000

But no VFs can be seen.

 

Does anyone have any ideas? Thanks.
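
For readability, here is the same configuration laid out as files rather than single long lines; this only restates the module options quoted above plus a typical neutron-sriov-nic-agent mapping, and the file paths and the physnet name (physnet1) are assumptions for a Liberty setup:

/etc/modprobe.d/mlx4_core.conf:
    options mlx4_core num_vfs=8 port_type_array=1,1 probe_vf=0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1

/etc/neutron/plugins/ml2/sriov_agent.ini:
    [sriov_nic]
    physical_device_mappings = physnet1:ib0

Note that the pci_lib warning comes from the agent parsing 'ip link show ib0' for per-VF lines, and as the output above shows, those lines only appear when the port is in Ethernet mode, which is exactly the mode the InfiniBand-only switch cannot carry.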

Re: libvma on RHEL 7.3


Hi Rich,

 

Please open a ticket to Mellanox support for this issue.

You can open a ticket simply by sending an email to support@mellanox.com.

 

Thank you,

Viki

Re: Anyone using ESXi 6.5?


Hi!

The Mellanox vSphere OFED 2.4.0 namespace conflicts with the vSphere 6.5 native system drivers (nrdma, vrdma) for Ethernet RoCE.

So Mellanox can't support vSphere 6.5 right now... :(

I'll switch to the vSphere 6.5 inbox driver with FreeNAS, which works properly.
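
If anyone else hits the same conflict, the usual way to fall back cleanly to the inbox driver is to list and remove the OFED VIBs first. A hedged sketch; the exact VIB names vary by OFED bundle, so check the list output before removing anything:

esxcli software vib list | grep -i -E 'mlx|mft|ofed'   # see what the OFED bundle installed
esxcli software vib remove -n <vib-name>               # repeat per OFED VIB, then reboot

After the reboot the native nmlx4/nmlx5 drivers should claim the adapters again.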

Re: Firmware for Voltaire 4036


Hi Anton,

 

The 4036 is a managed switch, so it just requires a software upgrade (the software upgrade also upgrades the firmware).

You can access the 4036 software from:

 

Login | myMellanox

How to configure cards to run at 25Gbs


Hello,

I have two Mellanox ConnectX-4 Lx cards connected back-to-back between two servers. Unfortunately, they are running at 10Gbps. I've tried ethtool to configure/force them to 25000, but that didn't work. Both hosts run Ubuntu 14.04 with a 4.4 kernel and OFED-3.4-1.0.0.

The command I used was: 'sudo ethtool -s eth2 speed 25000 autoneg off' on both servers, but then I couldn't even ping.

 

The card is stated to support 25GbE but I couldn't find any documentation (other than the ethtool command to try and set the speed) about configuring the cards to run at 25GbE.

 

Here is the output of lspci:

01:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

01:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

 

And the output from: 'sudo mlxfwmanager --query':

Device #1:

----------

  Device Type:      ConnectX4LX

  Part Number:      MCX4121A-ACA_Ax

  Description:      ConnectX-4 Lx EN network interface card; 25GbE dual-port SFP28; PCIe3.0 x8; ROHS R6

  PSID:             MT_2420110034

  PCI Device Name:  /dev/mst/mt4117_pciconf0

  Base MAC:         0000248a07114f90

  Versions:         Current        Available

     FW             14.17.1010     N/A

     PXE            3.4.0903       N/A

  Status:           No matching image found

 

And finally 'sudo ethtool eth2' (Is 25000 not a supported link mode?):

Settings for eth2:

        Supported ports: [ FIBRE ]

        Supported link modes:   1000baseT/Full

                                1000baseKX/Full

                                10000baseKR/Full

        Supported pause frame use: Symmetric Receive-only

        Supports auto-negotiation: Yes

        Advertised link modes:  1000baseT/Full

                                10000baseKR/Full

        Advertised pause frame use: No

        Advertised auto-negotiation: Yes

        Link partner advertised link modes:  Not reported

        Link partner advertised pause frame use: No

        Link partner advertised auto-negotiation: Yes

        Speed: 10000Mb/s

        Duplex: Full

        Port: Direct Attach Copper

        PHYAD: 0

        Transceiver: internal

        Auto-negotiation: on

        Supports Wake-on: d

        Wake-on: d

        Current message level: 0x00000004 (4)

                               link

        Link detected: yes

 

 

Any help would be greatly appreciated!

 

- Curt
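
Two more data points worth collecting before forcing speeds again: whether the userspace ethtool on 14.04 is new enough to display 25G link modes at all (older ethtool/kernel combinations may simply omit them even when the hardware supports 25G), and what the attached DAC/optic reports about itself. A sketch, assuming the interface is eth2 as above:

ethtool --version                          # very old ethtool builds cannot show 25G modes
ethtool -m eth2                            # dump the SFP28/SFP+ module EEPROM: is it a 25G cable?
ethtool -s eth2 speed 25000 autoneg off    # then retry the forced speed on both ends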

Re: OFED/IBDUMP for ConnectX-4


Thanks for the response, Philip!

 

I can get the dumps; they just don't look too pretty, and tcpdump doesn't seem to understand the RoCE traffic. I was hoping to just get timestamps on each RDMA transfer for more latency insight. I might grab the unstable build of Wireshark, as you suggested, to see if that gives me what I'm looking for.


Thanks again,

Curt

 

[Update on Nov 28]

Indeed, the unstable version of Wireshark worked great to decode the dumps for me on a Mac. Thanks, promanov!
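
For anyone trying to reproduce this, the capture-then-decode flow is roughly the following; the device and port names are assumptions for a single-port ConnectX-4:

ibdump -d mlx5_0 -i 1 -w roce_capture.pcap    # sniff the RoCE traffic at the HCA
# then open roce_capture.pcap in a recent (unstable) Wireshark build, which decodes
# the RoCE/InfiniBand headers and gives per-packet timestamps for latency analysis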

Re: How to configure cards to run at 25Gbs


Change autoneg to off:

 

ethtool -s enp2s0f0 speed 25000 autoneg off

 

# ethtool enp2s0f0

Settings for enp2s0f0:

  Supported ports: [ FIBRE ]

  Supported link modes:   1000baseT/Full

                         1000baseKX/Full

                         10000baseKR/Full

  Supported pause frame use: Symmetric Receive-only

  Supports auto-negotiation: Yes

  Advertised link modes:  Not reported

  Advertised pause frame use: No

  Advertised auto-negotiation: No

  Speed: 25000Mb/s

  Duplex: Full

  Port: Other

  PHYAD: 0

  Transceiver: internal

  Auto-negotiation: off

  Supports Wake-on: d

  Wake-on: d

  Current message level: 0x00000004 (4)

        link

  Link detected: yes


Re: How to configure cards to run at 25Gbs


Hello Ophir -

 

Thanks for the response.  I had turned auto-neg off, but I posted the output of ethtool after I had reset the speed to 10000 w/ autoneg on.  Here is the output of ethtool when setting to 25000 and autoneg off on both servers:

 

$ sudo ethtool -s eth2 speed 25000 autoneg off

$ sudo ethtool eth2

Settings for eth2:

  Supported ports: [ FIBRE ]

  Supported link modes:   1000baseT/Full

                         1000baseKX/Full

                         10000baseKR/Full

  Supported pause frame use: Symmetric Receive-only

  Supports auto-negotiation: Yes

  Advertised link modes:  Not reported

  Advertised pause frame use: No

  Advertised auto-negotiation: No

  Speed: Unknown!

  Duplex: Unknown! (255)

  Port: FIBRE

  PHYAD: 0

  Transceiver: internal

  Auto-negotiation: off

  Supports Wake-on: d

  Wake-on: d

  Current message level: 0x00000004 (4)

        link

  Link detected: no

 

 

Again, I appreciate the help and any suggestions.

Re: How to configure cards to run at 25Gbs


Can you try updating to the latest driver/firmware? We have MLNX_OFED / MLNX_EN 3.4.2 posted on the web.

I think you know this, but just in case, see HowTo Install MLNX_OFED Driver, and do it for Ubuntu (the linked post is for CentOS).

 

Which cables are you using?

Try toggling autoneg on and off.

 

I'm not sure what the issue is. Do you have a support contract with Mellanox?

 

ethtool -s enp2s0f0 speed 25000 autoneg off

ethtool -s enp2s0f0 autoneg on

 

I'm sure you did, but just checking: did you try a reset?

 

Ophir.
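
For the Ubuntu case, the update Ophir suggests boils down to something like the following; the firmware step assumes the MFT tools are installed, the mst device name matches the mlxfwmanager output above, and the host can reach Mellanox's servers for --online:

# from inside the extracted MLNX_OFED / MLNX_EN 3.4.2 bundle directory
sudo ./mlnxofedinstall --force
sudo /etc/init.d/openibd restart

# refresh the NIC firmware, then re-check the advertised link modes with ethtool
sudo mlxfwmanager --query
sudo mlxfwmanager --online -u -d /dev/mst/mt4117_pciconf0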

Re: 40Gb/s IPoIB only gives 5Gb/s real throughput?!


So you still only get 10G performance?

Thanks
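
When chasing this kind of gap it usually helps to measure the raw verbs bandwidth and the IPoIB/TCP path separately, and to check the IPoIB mode and MTU. A sketch, assuming the IPoIB interface is ib0 and the perftest and iperf packages are installed; <server-ip> is a placeholder:

ib_send_bw                       # raw RDMA bandwidth: run on the server...
ib_send_bw <server-ip>           # ...then on the client

cat /sys/class/net/ib0/mode      # datagram vs connected (connected mode + 64K MTU usually helps TCP)
cat /sys/class/net/ib0/mtu

iperf -s                         # TCP over IPoIB: server
iperf -c <server-ip> -P 4        # client, 4 parallel streams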


Does anyone know what the Max Junction temperature is for the MT27508 IC on a ConnectX-3


Does anyone know what the Max Junction temperature is for the MT27508 IC on a ConnectX-3, or where I can find it?

Or, failing that, what is the maximum operating temperature for ConnectX-3 cards? We are actually using the Dell mezzanine card versions of the ConnectX-3.
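
While waiting for a datasheet figure, the current die temperature can at least be read back from the adapter with the MFT tools, which shows how much thermal headroom the chassis has. A sketch; the MST device name is an assumption and will differ on the Dell mezzanine variants:

sudo mst start
sudo mst status                               # find the ConnectX-3 MST device
sudo mget_temp -d /dev/mst/mt4099_pciconf0    # reports the ASIC temperature in degrees C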
