path: root/net/smc
Age         Commit message  [Author, files changed, lines -/+]
2017-10-12  net/smc: dev_put for netdev after usage of ib_query_gid()  [Ursula Braun, 1 file, -4/+8]
For RoCE, ib_query_gid() takes a reference count on the net_device. This reference count must be decreased by the caller.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reported-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Fixes: 0cfdd8f92cac ("smc: connection and link group creation")
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-10-12  net/smc: replace function pointer get_netdev()  [Ursula Braun, 1 file, -17/+9]
SMC should not call the IB device's get_netdev() function pointer directly. Replacing ib_query_gid(..., NULL) with ib_query_gid(..., gid_attr) allows access to the netdev instead.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Suggested-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

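A minimal sketch of the resulting pattern (function and variable names are illustrative, not the actual net/smc code): at the kernel level of these commits ib_query_gid() takes an optional struct ib_gid_attr and, for RoCE, returns the ndev with a reference held that the caller must drop, which also covers the dev_put fix above.

    #include <rdma/ib_verbs.h>
    #include <linux/netdevice.h>

    static int example_use_gid_ndev(struct ib_device *ibdev, u8 port)
    {
            struct ib_gid_attr gattr;
            union ib_gid gid;
            int rc;

            rc = ib_query_gid(ibdev, port, 0, &gid, &gattr);
            if (rc)
                    return rc;
            if (gattr.ndev) {
                    /* ... use gattr.ndev, e.g. read its MAC address ... */
                    dev_put(gattr.ndev);    /* drop the reference taken for RoCE */
            }
            return 0;
    }
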
2017-09-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  [David S. Miller, 11 files, -37/+58]

2017-09-21  net/smc: parameter cleanup in smc_cdc_get_free_slot()  [Ursula Braun, 3 files, -8/+8]
Use the smc_connection as the first parameter of smc_cdc_get_free_slot(). This is just a small code cleanup, no functional change.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-09-21  net/smc: no close wait in case of process shut down  [Ursula Braun, 1 file, -7/+8]
Usually socket closing is delayed if there is still data available in the send buffer to be transmitted. If a process is killed, the delay should be avoided.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-09-21  net/smc: introduce a delay  [Ursula Braun, 3 files, -10/+16]
The number of outstanding work requests is limited. If all work requests are in use, tx processing is postponed to another scheduling of the tx worker. Switch to a delayed worker to have a gap for tx completion queue events before the next retry.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

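A hedged sketch of the pattern (names and the 10 ms delay are illustrative assumptions): a plain work item becomes a delayed_work so a retry leaves a gap for completion queue events.

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    #define EXAMPLE_TX_WORK_DELAY   (HZ / 100)      /* illustrative 10 ms gap */

    static struct delayed_work example_tx_work;

    static void example_tx_worker(struct work_struct *work)
    {
            /* retry posting the pending tx work requests here */
    }

    static void example_tx_setup(void)
    {
            INIT_DELAYED_WORK(&example_tx_work, example_tx_worker);
    }

    static void example_tx_retry_later(void)
    {
            /* all send work request slots busy: leave a gap for CQ events */
            schedule_delayed_work(&example_tx_work, EXAMPLE_TX_WORK_DELAY);
    }
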
2017-09-21  net/smc: terminate link group if out-of-sync is received  [Ursula Braun, 3 files, -11/+8]
An out-of-sync condition can only be detected by the client. If the server receives a CLC DECLINE message indicating an out-of-sync condition for the link groups, the server must clean up the out-of-sync link group. There is no need for an extra third parameter in smc_clc_send_decline().

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-09-21  net/smc: longer delay for client link group removal  [Ursula Braun, 1 file, -4/+12]
Client link group creation always follows server link group creation. If the peer creates a new server link group, the client has to create a new client link group. If the peer reuses a server link group for a new connection, the client has to reuse its client link group as well. This patch introduces a longer delay for client link group removal to make sure this link group still exists once the peer decides to reuse a server link group. This avoids out-of-sync conditions for link groups. If the removal is already scheduled, modify the delay.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

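A sketch of the "if already scheduled, modify the delay" behaviour (names and the 10-second value are assumptions): mod_delayed_work() re-arms a pending delayed work item with the new timeout instead of leaving an older, shorter one in place.

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    #define EXAMPLE_FREE_DELAY      (10 * HZ)       /* illustrative longer client-side delay */

    static void example_schedule_lgr_free(struct delayed_work *free_work)
    {
            /* extends the delay if the work is already queued, schedules it otherwise */
            mod_delayed_work(system_wq, free_work, EXAMPLE_FREE_DELAY);
    }
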
2017-09-21  net/smc: adapt send request completion notification  [Ursula Braun, 1 file, -1/+1]
The solicited flag is meaningful for the receive completion queue. Ask for the next work completion of any type on the send queue.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

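A sketch of the notification request this refers to (variable names are illustrative): IB_CQ_SOLICITED only fires for solicited completions and therefore suits the receive CQ, while the send CQ asks for the next completion of any kind.

    #include <rdma/ib_verbs.h>

    static void example_rearm_cqs(struct ib_cq *send_cq, struct ib_cq *recv_cq)
    {
            /* send side: wake up for any next work completion */
            ib_req_notify_cq(send_cq, IB_CQ_NEXT_COMP);
            /* receive side: the solicited flag is meaningful here */
            ib_req_notify_cq(recv_cq, IB_CQ_SOLICITED);
    }
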
2017-09-21  net/smc: adjust net_device refcount  [Ursula Braun, 1 file, -1/+3]
smc_pnet_fill_entry() uses dev_get_by_name(), which adds a refcount to the ndev. The subsequent smc_pnet_enter() has to release that refcount if the entry to be added already exists in the pnet table.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

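A sketch of the refcount balance being fixed (example_pnet_entry_exists() is a hypothetical lookup helper, not a real function): dev_get_by_name() returns the device with a reference held, so every path that does not keep the pointer must dev_put() it.

    #include <linux/netdevice.h>
    #include <linux/errno.h>

    static bool example_pnet_entry_exists(struct net_device *ndev);  /* hypothetical */

    static int example_pnet_add(struct net *net, const char *ifname)
    {
            struct net_device *ndev;

            ndev = dev_get_by_name(net, ifname);    /* takes a reference */
            if (!ndev)
                    return -ENOENT;

            if (example_pnet_entry_exists(ndev)) {
                    dev_put(ndev);                  /* entry not kept: drop the reference */
                    return -EEXIST;
            }
            /* entry added to the table, the reference stays with the table */
            return 0;
    }
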
2017-09-21  net/smc: take RCU read lock for routing cache lookup  [Ursula Braun, 1 file, -3/+7]
smc_netinfo_by_tcpsk() looks up the routing cache. Such a lookup requires protection by an RCU read lock.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

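A sketch of the locking pattern (flow setup abbreviated, names illustrative; ifname is assumed to be an IFNAMSIZ-sized buffer): the output route lookup and any use of the route's device happen under rcu_read_lock().

    #include <net/route.h>
    #include <net/sock.h>
    #include <linux/netdevice.h>
    #include <linux/rcupdate.h>
    #include <linux/err.h>

    static int example_netdev_of_peer(struct sock *sk, __be32 daddr, char *ifname)
    {
            struct flowi4 fl4 = { .daddr = daddr };
            struct rtable *rt;
            int rc = -ENOENT;

            rcu_read_lock();
            rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
            if (!IS_ERR(rt)) {
                    if (rt->dst.dev)
                            strncpy(ifname, rt->dst.dev->name, IFNAMSIZ);
                    rc = 0;
                    ip_rt_put(rt);
            }
            rcu_read_unlock();
            return rc;
    }
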
2017-09-21  net/smc: add receive timeout check  [Hans Wippel, 1 file, -0/+2]
The SMC receive function currently lacks a timeout check under the condition that no data were received and no data are available. This patch adds such a check.

Signed-off-by: Hans Wippel <hwippel@linux.vnet.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

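A sketch of the kind of check described (not the actual smc_rx code; names assumed): with a zero receive timeout and nothing read or available, the receive path bails out instead of blocking.

    #include <net/sock.h>
    #include <linux/errno.h>

    static int example_rx_wait(struct sock *sk, int flags, int read_done, bool data_available)
    {
            long timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);

            if (!read_done && !data_available && !timeo)
                    return -EAGAIN;         /* non-blocking or timed out: do not wait */
            /* ... otherwise wait for data as before ... */
            return 0;
    }
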
2017-09-21  net/smc: add missing dev_put  [Hans Wippel, 1 file, -0/+1]
In the infiniband part, SMC currently uses get_netdev, which calls dev_hold on the returned net device. However, the SMC code never calls dev_put on that net device, resulting in a wrong reference count. This patch adds a dev_put after the usage of the net device to fix the issue.

Signed-off-by: Hans Wippel <hwippel@linux.vnet.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-07-29  net/smc: synchronize buffer usage with device  [Ursula Braun, 7 files, -0/+91]
Usage of the send buffer "sndbuf" is synced
 (a) before filling sndbuf, for cpu access
 (b) after filling sndbuf, for device access
Usage of the receive buffer "RMB" is synced
 (a) before reading RMB content, for cpu access
 (b) after reading RMB content, for device access

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

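A sketch of what "synced for cpu/device access" means in RDMA terms (names illustrative, not the actual net/smc helpers): walk the buffer's scatterlist and use the ib_dma_sync helpers around CPU accesses.

    #include <rdma/ib_verbs.h>
    #include <linux/scatterlist.h>
    #include <linux/dma-direction.h>

    static void example_sync_sg(struct ib_device *ibdev, struct scatterlist *sgl,
                                unsigned int nents, enum dma_data_direction dir,
                                bool for_cpu)
    {
            struct scatterlist *sg;
            unsigned int i;

            for_each_sg(sgl, sg, nents, i) {
                    if (for_cpu)    /* before the CPU reads or fills the buffer */
                            ib_dma_sync_single_for_cpu(ibdev, sg_dma_address(sg),
                                                       sg_dma_len(sg), dir);
                    else            /* before the device accesses it again */
                            ib_dma_sync_single_for_device(ibdev, sg_dma_address(sg),
                                                          sg_dma_len(sg), dir);
            }
    }
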
2017-07-29  net/smc: cleanup function __smc_buf_create()  [Ursula Braun, 1 file, -51/+63]
Split function __smc_buf_create() for better readability.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-07-29  net/smc: common functions for RMBs and send buffers  [Ursula Braun, 3 files, -219/+148]
Creation and deletion of SMC receive and send buffers share a large amount of common code. This patch introduces common functions to get rid of the duplicate code.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-07-29  net/smc: introduce sg-logic for send buffers  [Ursula Braun, 5 files, -51/+38]
SMC send buffers are processed the same way as RMBs. Since RMBs have been converted to sg-logic, do the same for send buffers.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-07-29  net/smc: remove Kconfig warning  [Ursula Braun, 1 file, -4/+0]
Now separate memory regions are created and registered for separate RMBs. The unsafe_global_rkey of the protection domain is no longer used. Thus the warning about exposing memory can be removed.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-07-29  net/smc: register RMB-related memory region  [Ursula Braun, 6 files, -2/+115]
A memory region created for a new RMB must be registered explicitly before the peer can make use of it for remote DMA transfer.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

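A sketch of an explicit registration (queue-pair setup and completion handling omitted; names and access flags are illustrative, not the SMC link code): map the MR onto the buffer's scatterlist and post an IB_WR_REG_MR work request before handing out the rkey.

    #include <rdma/ib_verbs.h>
    #include <linux/errno.h>

    static int example_register_rmb(struct ib_qp *qp, struct ib_mr *mr,
                                    struct scatterlist *sgl, int sg_nents)
    {
            struct ib_reg_wr reg_wr = {};
            struct ib_send_wr *bad_wr;
            int n;

            n = ib_map_mr_sg(mr, sgl, sg_nents, NULL, PAGE_SIZE);
            if (n != sg_nents)
                    return -EINVAL;

            reg_wr.wr.opcode = IB_WR_REG_MR;
            reg_wr.wr.send_flags = IB_SEND_SIGNALED;
            reg_wr.mr = mr;
            reg_wr.key = mr->rkey;
            reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE;

            /* the peer may use mr->rkey only after this work request has completed */
            return ib_post_send(qp, &reg_wr.wr, &bad_wr);
    }
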
2017-07-29  net/smc: use separate memory regions for RMBs  [Ursula Braun, 5 files, -7/+71]
SMC currently uses the unsafe_global_rkey of the protection domain, which exposes all memory for remote reads and writes once a connection is established. This patch introduces separate memory regions with separate rkeys for every RMB. Now the unsafe_global_rkey of the protection domain is no longer needed.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

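A sketch of the per-RMB allocation that replaces the global rkey (names illustrative): each RMB gets its own memory region, and only that region's rkey is handed to the peer instead of a key covering all memory.

    #include <rdma/ib_verbs.h>

    static struct ib_mr *example_alloc_rmb_mr(struct ib_pd *pd, int max_sg_entries)
    {
            /* one MR per RMB instead of pd->unsafe_global_rkey for everything;
             * mr->rkey is what gets communicated to the peer for this RMB only
             */
            return ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, max_sg_entries);
    }
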
2017-07-29  net/smc: introduce sg-logic for RMBs  [Ursula Braun, 5 files, -26/+76]
The follow-on patch makes use of ib_map_mr_sg() when introducing separate memory regions for RMBs. This function is based on scatterlists; thus this patch introduces scatterlists for RMBs.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

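A sketch of the sg conversion (the buffer is assumed to be one physically contiguous kernel allocation; names illustrative): describe the RMB with a scatterlist and DMA-map it so ib_map_mr_sg() can consume it later.

    #include <linux/scatterlist.h>
    #include <rdma/ib_verbs.h>
    #include <linux/errno.h>

    static int example_rmb_to_sg(struct ib_device *ibdev, void *rmb, size_t len,
                                 struct sg_table *sgt)
    {
            int rc, mapped;

            rc = sg_alloc_table(sgt, 1, GFP_KERNEL);        /* one entry is enough here */
            if (rc)
                    return rc;
            sg_set_buf(sgt->sgl, rmb, len);

            mapped = ib_dma_map_sg(ibdev, sgt->sgl, 1, DMA_BIDIRECTIONAL);
            if (!mapped) {
                    sg_free_table(sgt);
                    return -EIO;
            }
            return 0;
    }
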
2017-07-29  net/smc: shorten local bufsize variables  [Ursula Braun, 1 file, -29/+25]
Initiate the coming rework of SMC buffer handling with this small code cleanup. No functional changes here.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-07-29  net/smc: serialize connection creation in all cases  [Ursula Braun, 1 file, -7/+1]
If a link group for a new server connection already exists, the mutex serializing the determination of link groups is given up early. The upcoming registration of memory regions benefits from the serialization as well if the mutex is held until connection creation is finished. This patch postpones the unlocking of the link group creation mutex.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-05-16  net/smc: Add warning about remote memory exposure  [Christoph Hellwig, 1 file, -0/+4]
The driver explicitly bypasses APIs to register all memory once a connection is made, and thus allows remote access to memory.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-05-16  smc: switch to usage of IB_PD_UNSAFE_GLOBAL_RKEY  [Ursula Braun, 5 files, -37/+8]
Currently, once a user has successfully configured and established an SMC connection, SMC enables remote access to physical memory and keeps it enabled until ten minutes after the last SMC connection is closed. Because this is considered a security risk, drivers are supposed to use IB_PD_UNSAFE_GLOBAL_RKEY in such a case. This patch changes the current SMC code to use IB_PD_UNSAFE_GLOBAL_RKEY. This improves user awareness, but does not remove the security risk itself.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

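A sketch of what the switch looks like at the verbs level (function name and warning text are illustrative): the flag makes the exposure explicit and gives access to pd->unsafe_global_rkey instead of a driver-private all-memory key.

    #include <rdma/ib_verbs.h>
    #include <linux/err.h>
    #include <linux/printk.h>

    static struct ib_pd *example_alloc_pd(struct ib_device *ibdev)
    {
            /* explicitly request the all-physical-memory rkey; the name flags the risk */
            struct ib_pd *pd = ib_alloc_pd(ibdev, IB_PD_UNSAFE_GLOBAL_RKEY);

            if (!IS_ERR(pd))
                    pr_warn("using pd->unsafe_global_rkey 0x%x - remote memory exposure\n",
                            pd->unsafe_global_rkey);
            return pd;
    }
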
2017-05-10  Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds, 1 file, -1/+1]
Pull RCU updates from Ingo Molnar:
 "The main changes are:
  - Debloat RCU headers
  - Parallelize SRCU callback handling (plus overlapping patches)
  - Improve the performance of Tree SRCU on a CPU-hotplug stress test
  - Documentation updates
  - Miscellaneous fixes"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (74 commits)
  rcu: Open-code the rcu_cblist_n_lazy_cbs() function
  rcu: Open-code the rcu_cblist_n_cbs() function
  rcu: Open-code the rcu_cblist_empty() function
  rcu: Separately compile large rcu_segcblist functions
  srcu: Debloat the <linux/rcu_segcblist.h> header
  srcu: Adjust default auto-expediting holdoff
  srcu: Specify auto-expedite holdoff time
  srcu: Expedite first synchronize_srcu() when idle
  srcu: Expedited grace periods with reduced memory contention
  srcu: Make rcutorture writer stalls print SRCU GP state
  srcu: Exact tracking of srcu_data structures containing callbacks
  srcu: Make SRCU be built by default
  srcu: Fix Kconfig botch when SRCU not selected
  rcu: Make non-preemptive schedule be Tasks RCU quiescent state
  srcu: Expedite srcu_schedule_cbs_snp() callback invocation
  srcu: Parallelize callback handling
  kvm: Move srcu_struct fields to end of struct kvm
  rcu: Fix typo in PER_RCU_NODE_PERIOD header comment
  rcu: Use true/false in assignment to bool
  rcu: Use bool value directly
  ...

2017-05-01  IB/core: Define 'ib' and 'roce' rdma_ah_attr types  [Dasaratharaman Chandramouli, 1 file, -1/+2]
rdma_ah_attr can now be either ib or roce, allowing core components to use one type or the other and also to define attributes unique to a specific type. struct ib_ah is also initialized with the type when it is first created. This ensures that calls such as modify_ah don't modify the type of the address handle attribute.

Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Sean Hefty <sean.hefty@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>

2017-05-01  IB/core: Use rdma_ah_attr accessor functions  [Dasaratharaman Chandramouli, 1 file, -5/+3]
Modify core and driver components to use the accessor functions introduced to access individual fields of rdma_ah_attr.

Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Sean Hefty <sean.hefty@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>

2017-04-23  Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu  [Ingo Molnar, 1 file, -1/+1]
Pull RCU updates from Paul E. McKenney:
 - Documentation updates.
 - Miscellaneous fixes.
 - Parallelize SRCU callback handling (plus overlapping patches).

Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-04-18  mm: Rename SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU  [Paul E. McKenney, 1 file, -1/+1]
A group of Linux kernel hackers reported chasing a bug that resulted from their assumption that SLAB_DESTROY_BY_RCU provided an existence guarantee, that is, that no block from such a slab would be reallocated during an RCU read-side critical section. Of course, that is not the case. Instead, SLAB_DESTROY_BY_RCU only prevents freeing of an entire slab of blocks.

However, there is a phrase for this, namely "type safety". This commit therefore renames SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU in order to avoid future instances of this sort of confusion.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-mm@kvack.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
[ paulmck: Add comments mentioning the old name, as requested by Eric Dumazet, in order to help people familiar with the old name find the new one. ]
Acked-by: David Rientjes <rientjes@google.com>

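A sketch of the flag's intended use (cache name and struct are illustrative): memory from such a cache stays typed as the same struct across an RCU grace period, but an individual object may be freed and reused for a new instance while a reader still holds a pointer, so readers must revalidate what they found.

    #include <linux/slab.h>
    #include <linux/init.h>
    #include <linux/errno.h>

    struct example_conn {
            int id;
            /* ... */
    };

    static struct kmem_cache *example_cache;

    static int __init example_cache_init(void)
    {
            /* the memory remains a "struct example_conn" across an RCU grace
             * period, but an object may be recycled for a *new* example_conn
             * while a reader still holds a pointer - readers must revalidate
             */
            example_cache = kmem_cache_create("example_conn",
                                              sizeof(struct example_conn), 0,
                                              SLAB_TYPESAFE_BY_RCU, NULL);
            return example_cache ? 0 : -ENOMEM;
    }
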
2017-04-11  net/smc: do not use IB_SEND_INLINE together with mapped data  [Ursula Braun, 2 files, -2/+1]
smc specifies IB_SEND_INLINE for IB_WR_SEND ib_post_send calls, but provides a mapped buffer to be sent. This is inconsistent, since IB_SEND_INLINE works without a mapped buffer. The problem has not been detected in the past because tests had been limited to ConnectX-3 cards from Mellanox, whose mlx4 driver simply ignored the IB_SEND_INLINE flag. For now, the IB_SEND_INLINE flag is removed.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: destruct non-accepted sockets  [Ursula Braun, 2 files, -6/+9]
Make sure sockets that were never accepted are removed cleanly.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: remove duplicate unhash  [Ursula Braun, 1 file, -1/+0]
unhash is already called in sock_put_work. Remove the second call.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: guarantee ConnClosed send after shutdown SHUT_WR  [Ursula Braun, 1 file, -11/+25]
State SMC_CLOSED should be reached only if ConnClosed has been sent to the peer. If ConnClosed is received from the peer, a socket with shutdown SHUT_WR done erroneously switches to state SMC_CLOSED, which means the peer socket is dangling. The local SMC socket is supposed to switch to state APPFINCLOSEWAIT to make sure smc_close_final() is called during socket close.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: no socket state changes in tasklet context  [Ursula Braun, 6 files, -20/+41]
Several state changes occur during SMC socket closing. Currently, state changes triggered locally occur in process context with lock_sock() taken, while state changes triggered by the peer occur in tasklet context with bh_lock_sock() taken. bh_lock_sock() does not wait until a lock_sock() task in process context has finished. This may lead to races in socket state transitions resulting in dangling SMC sockets, or it may lead to duplicate SMC socket freeing. This patch introduces a closing worker to run all state changes under lock_sock().

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

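A sketch of the closing-worker idea (example_sk_from_work() is a hypothetical container_of-style helper; names illustrative): the tasklet only queues a work item, and the actual state transition runs in process context under lock_sock().

    #include <linux/workqueue.h>
    #include <net/sock.h>

    static struct sock *example_sk_from_work(struct work_struct *work);  /* hypothetical */

    static void example_close_worker(struct work_struct *work)
    {
            struct sock *sk = example_sk_from_work(work);

            lock_sock(sk);          /* serializes with locally triggered state changes */
            /* ... perform the peer-triggered socket state transition ... */
            release_sock(sk);
    }

    /* called from the receive tasklet instead of changing sk state directly */
    static void example_peer_close_received(struct work_struct *close_work)
    {
            schedule_work(close_work);
    }
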
2017-04-11  net/smc: always call the POLL_IN part of sk_wake_async  [Ursula Braun, 1 file, -2/+1]
Wake up reading file descriptors for a closing socket as well, otherwise some socket applications may stall.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: guarantee reset of write_blocked for heavy workload  [Ursula Braun, 1 file, -1/+5]
If the peer indicates write_blocked, the cursor state of the received data should be sent to the peer immediately (in smc_tx_consumer_update()). Afterwards the write_blocked indicator is cleared. If there is no free slot for another write request, sending is postponed to the worker smc_tx_work, and the write_blocked indicator is not cleared. Therefore another clearing check is needed in smc_tx_work().

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: return active RoCE port only  [Ursula Braun, 1 file, -2/+5]
SMC requires an active ib port on the RoCE device. smc_pnet_find_roce_resource() determines the matching RoCE device port according to the configured PNET table. Do not return the found RoCE device port if it is not flagged active.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: remove useless smc_ib_devices_list check  [Ursula Braun, 3 files, -4/+1]
The global event handler is created only if the ib_device has already been used by at least one link group. It is guaranteed that a corresponding entry exists in the smc_ib_devices list. Get rid of this superfluous check.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-04-11  net/smc: get rid of old comment  [Ursula Braun, 1 file, -2/+0]
This patch removes an outdated comment.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-03-30  drivers: add explicit interrupt.h includes  [Florian Westphal, 1 file, -0/+1]
These files all use functions declared in interrupt.h, but currently rely on implicit inclusion of this file (via netns/xfrm.h). That won't work anymore when the flow cache is removed, so include that header where needed.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-03-09  net: Work around lockdep limitation in sockets that use sockets  [David Howells, 1 file, -1/+1]
Lockdep issues a circular dependency warning when AFS issues an operation through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.

The theory lockdep comes up with is as follows:

 (1) If the pagefault handler decides it needs to read pages from AFS, it calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but creating a call requires the socket lock:

        mmap_sem must be taken before sk_lock-AF_RXRPC

 (2) afs_open_socket() opens an AF_RXRPC socket and binds it. rxrpc_bind() binds the underlying UDP socket whilst holding its socket lock. inet_bind() takes its own socket lock:

        sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET

 (3) Reading from a TCP socket into a userspace buffer might cause a fault and thus cause the kernel to take the mmap_sem, but the TCP socket is locked whilst doing this:

        sk_lock-AF_INET must be taken before mmap_sem

However, lockdep's theory is wrong in this instance because it deals only with lock classes and not individual locks. The AF_INET lock in (2) isn't really equivalent to the AF_INET lock in (3) as the former deals with a socket entirely internal to the kernel that never sees userspace. This is a limitation in the design of lockdep.

Fix the general case by:

 (1) Doubling up all the locking keys used in sockets so that one set is used if the socket is created by userspace and the other set is used if the socket is created by the kernel.

 (2) Storing the kern parameter passed to sk_alloc() in a variable in the sock struct (sk_kern_sock). This informs sock_lock_init(), sock_init_data() and sk_clone_lock() as to the lock keys to be used. Note that the child created by sk_clone_lock() inherits the parent's kern setting.

 (3) Adding a 'kern' parameter to ->accept() that is analogous to the one passed in to ->create() that distinguishes whether kernel_accept() or sys_accept4() was the caller and can be passed to sk_alloc(). Note that a lot of accept functions merely dequeue an already allocated socket. I haven't touched these as the new socket already exists before we get the parameter. Note also that there are a couple of places where I've made the accepted socket unconditionally kernel-based:

        irda_accept()
        rds_tcp_accept_one()
        tcp_accept_from_sock()

     because they follow a sock_create_kern() and accept off of that.

Whilst creating this, I noticed that lustre and ocfs don't create sockets through sock_create_kern() and thus they aren't marked as for-kernel, though they appear to be internal. I wonder if these should do that so that they use the new set of lock keys.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-03-03  sched/headers: Move task_struct::signal and task_struct::sighand types and accessors into <linux/sched/signal.h>  [Ingo Molnar, 5 files, -0/+10]
task_struct::signal and task_struct::sighand are pointers, which would normally make it straightforward to not define those types in sched.h. That is not so, because the types are accompanied by a myriad of APIs (macros and inline functions) that dereference them.

Split the types and the APIs out of sched.h and move them into a new header, <linux/sched/signal.h>. With this change sched.h does not know about 'struct signal' and 'struct sighand' anymore; trying to put accessors into sched.h as a test fails the following way:

  ./include/linux/sched.h: In function ‘test_signal_types’:
  ./include/linux/sched.h:2461:18: error: dereferencing pointer to incomplete type ‘struct signal_struct’
                    ^

This reduces the size and complexity of sched.h significantly.

Update all headers and .c code that relied on getting the signal handling functionality from <linux/sched.h> to include <linux/sched/signal.h>.

The list of affected files in the preparatory patch was partly generated by grepping for the APIs, and partly by doing coverage build testing, both all[yes|mod|def|no]config builds on 64-bit and 32-bit x86, and an array of cross-architecture builds.

Nevertheless some (trivial) build breakage is still expected related to rare Kconfig combinations and in-flight patches to various kernel code, but most of it should be handled by this patch.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-01-30  smc: some potential use after free bugs  [Dan Carpenter, 1 file, -0/+5]
Say we got really unlucky and these failed on the last iteration; then it could lead to a use-after-free bug.

Fixes: cd6851f30386 ("smc: remote memory buffers (RMBs)")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-01-12  smc: ETH_ALEN as memcpy length for mac addresses  [Ursula Braun, 2 files, -7/+7]
When creating an SMC connection, there is a CLC (connection layer control) handshake to prepare for RDMA traffic. The corresponding code is part of commit 0cfdd8f92cac ("smc: connection and link group creation"). Mac addresses to be exchanged in the handshake are copied with a wrong length of 12 instead of 6 bytes. The following code overwrites the wrongly copied bytes, but nevertheless the correct length should already be used for the preceding mac address copy. Use ETH_ALEN for the memcpy length with mac addresses.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Fixes: 0cfdd8f92cac ("smc: connection and link group creation")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

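The fix amounts to using the MAC address length constant rather than a hard-coded 12. A trivial illustration (function and field names assumed, not the CLC code itself):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    static void example_fill_clc_mac(u8 *clc_mac, const struct net_device *ndev)
    {
            /* MAC addresses are ETH_ALEN (6) bytes, not 12 */
            memcpy(clc_mac, ndev->dev_addr, ETH_ALEN);
    }
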
2017-01-09  smc: netlink interface for SMC sockets  [Ursula Braun, 6 files, -1/+270]
Add support for SMC socket monitoring via netlink sockets of protocol NETLINK_SOCK_DIAG.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-01-09  smc: socket closing and linkgroup cleanup  [Ursula Braun, 11 files, -36/+668]
Add smc_shutdown() and smc_release() handling, and delayed linkgroup cleanup for linkgroups without connections.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-01-09  smc: receive data from RMBE  [Ursula Braun, 9 files, -3/+304]
Move RMBE data into the user space buffer and update the managing cursors.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-01-09  smc: send data (through RDMA)  [Ursula Braun, 6 files, -4/+491]
Copy data to the kernel send buffer and trigger the RDMA write.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-01-09  smc: connection data control (CDC)  [Ursula Braun, 8 files, -0/+641]
Send and receive CDC messages (via IB message send and CQE).

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>