path: root/tools/perf/scripts/python/export-to-postgresql.py
2015-08-31  md/raid5: use bio_list for the list of bios to return.  (NeilBrown, 2 files, -27/+16)

This will make it easier to splice two lists together, which will be
needed in a future patch.

Signed-off-by: NeilBrown <neilb@suse.com>
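For illustration, a minimal sketch of the bio_list pattern this commit moves
to (not the raid5 code itself; the helpers are from include/linux/bio.h, and
note the 4.2-era bio_endio() also took an error argument):

    #include <linux/bio.h>

    /* Splice everything queued on 'pending' onto 'to_return', then
     * complete each bio in order.  With an open-coded linked list the
     * splice is fiddly; with a bio_list it is one bio_list_merge(). */
    static void return_pending_bios(struct bio_list *pending,
                                    struct bio_list *to_return)
    {
        struct bio *bio;

        bio_list_merge(to_return, pending);
        bio_list_init(pending);         /* 'pending' is now empty */

        while ((bio = bio_list_pop(to_return)))
            bio_endio(bio);
    }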
2015-08-31  md/raid10: ensure device failure recorded before write request returns.  (NeilBrown, 2 files, -1/+34)

When a write to one of the legs of a RAID10 fails, the failure is
recorded in the metadata of the other legs so that after a restart the
data on the failed drive won't be trusted even if that drive seems to
be working again (maybe a cable was unplugged).

Currently there is no interlock between the write request completing
and the metadata update.  So it is possible that the write will
complete, the app will confirm success in some way, and then the
machine will crash before the metadata update completes.

This is an extremely small hole for a race to fit in, but it is
theoretically possible and so should be closed.  So:

 - set MD_CHANGE_PENDING when requesting a metadata update for a
   failed device, so we can know with certainty when it completes
 - queue requests that experienced an error on a new queue which is
   only processed after the metadata update completes
 - call raid_end_bio_io() on bios in that queue when the time comes.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid1: ensure device failure recorded before write request returns.  (NeilBrown, 3 files, -1/+34)

When a write to one of the legs of a RAID1 fails, the failure is
recorded in the metadata of the other leg(s) so that after a restart
the data on the failed drive won't be trusted even if that drive seems
to be working again (maybe a cable was unplugged).

Similarly, when we record a bad-block in response to a write failure,
we must not let the write complete until the bad-block update is safe.

Currently there is no interlock between the write request completing
and the metadata update.  So it is possible that the write will
complete, the app will confirm success in some way, and then the
machine will crash before the metadata update completes.

This is an extremely small hole for a race to fit in, but it is
theoretically possible and so should be closed.  So:

 - set MD_CHANGE_PENDING when requesting a metadata update for a
   failed device, so we can know with certainty when it completes
 - queue requests that experienced an error on a new queue which is
   only processed after the metadata update completes
 - call raid_end_bio_io() on bios in that queue when the time comes.

Signed-off-by: NeilBrown <neilb@suse.com>
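The three-step interlock above, as a simplified sketch (names such as
bio_end_io_list and retry_list follow the commit's description, but this is
not the literal raid1 code):

    /* On write error: request a metadata update and park the bio. */
    static void handle_write_error(struct r1conf *conf, struct r1bio *r1_bio)
    {
        set_bit(MD_CHANGE_PENDING, &conf->mddev->flags);
        md_wakeup_thread(conf->mddev->thread);  /* writes the superblock */

        spin_lock_irq(&conf->device_lock);
        list_add(&r1_bio->retry_list, &conf->bio_end_io_list);
        spin_unlock_irq(&conf->device_lock);
    }

    /* Later, from the raid1d thread: complete parked bios only once the
     * metadata update is known to have reached stable storage. */
    static void flush_bio_end_io_list(struct r1conf *conf)
    {
        struct r1bio *r1_bio, *tmp;

        if (test_bit(MD_CHANGE_PENDING, &conf->mddev->flags))
            return;             /* superblock write still in flight */

        list_for_each_entry_safe(r1_bio, tmp, &conf->bio_end_io_list,
                                 retry_list) {
            list_del(&r1_bio->retry_list);
            raid_end_bio_io(r1_bio);
        }
    }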
2015-08-31  md-cluster: remove inappropriate try_module_get from join()  (NeilBrown, 1 file, -4/+0)

md_setup_cluster already calls try_module_get(), so this
try_module_get isn't needed.  Also, there is no matching module_put
(except in the error path), so this leaves an unbalanced module count.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md: extend spinlock protection in register_md_cluster_operations  (NeilBrown, 1 file, -6/+10)

This code looks racy.  The only possible race is if two modules try to
register at the same time, and that won't happen.  But make the code
look safe anyway.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: Read the disk bitmap sb and check if it needs recovery  (Guoqing Jiang, 1 file, -1/+15)

In gather_all_resync_info, we need to read the disk bitmap sb and
check if it needs recovery.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: only call complete(&cinfo->completion) when node join cluster  (Guoqing Jiang, 1 file, -1/+10)

Introduce the MD_CLUSTER_BEGIN_JOIN_CLUSTER flag to make sure
complete(&cinfo->completion) is only invoked when a node joins the
cluster.  Otherwise a node failure could also trigger the completion,
and it doesn't make sense to do it in that case.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: add missed lockres_free  (Guoqing Jiang, 1 file, -1/+3)

We also need to free the lock resource before goto out.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: remove the unused sb_lock  (Guoqing Jiang, 1 file, -9/+0)

The sb_lock is not used anywhere, so let's remove it.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: init suspend_list and suspend_lock early in join  (Guoqing Jiang, 1 file, -3/+2)

If a node has just joined the cluster and receives a message from
other nodes before suspend_list is initialized, the kernel will crash
due to a NULL pointer dereference, so move the initializations earlier
to fix the bug.

md-cluster: Joined cluster 3578507b-e0cb-6d4f-6322-696cd7b1b10c slot 3
BUG: unable to handle kernel NULL pointer dereference at (null)
...
Call Trace:
 [<ffffffffa0444924>] process_recvd_msg+0x2e4/0x330 [md_cluster]
 [<ffffffffa0444a06>] recv_daemon+0x96/0x170 [md_cluster]
 [<ffffffffa045189d>] md_thread+0x11d/0x170 [md_mod]
 [<ffffffff810768c4>] kthread+0xb4/0xc0
 [<ffffffff8151927c>] ret_from_fork+0x7c/0xb0
...
RIP [<ffffffffa0443581>] __remove_suspend_info+0x11/0xa0 [md_cluster]

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: add the error check if failed to get dlm lock  (Guoqing Jiang, 1 file, -6/+35)

In a complicated cluster environment, it is possible that a dlm lock
cannot be obtained or converted; the related error information is
added for better debugging of potential issues.

For lockres_free: if the lock is blocked by a lock request or a
conversion request, then dlm_unlock just puts it back on the grant
queue, so we need to ensure the lock is finally freed.

Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: init completion within lockres_init  (Guoqing Jiang, 1 file, -2/+1)

We should init the completion within lockres_init, otherwise the
completion could be initialized more than once during its life cycle.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
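A minimal sketch of the pattern (the struct name follows md-cluster's
dlm_lock_resource, but this is illustrative, not the actual patch):

    #include <linux/completion.h>
    #include <linux/slab.h>

    static struct dlm_lock_resource *lockres_init_sketch(gfp_t gfp)
    {
        struct dlm_lock_resource *res = kzalloc(sizeof(*res), gfp);

        if (!res)
            return NULL;
        /* init_completion() initializes the wait queue, so it must run
         * exactly once, at creation time ... */
        init_completion(&res->completion);
        return res;
    }

    static void lockres_wait_sketch(struct dlm_lock_resource *res)
    {
        /* ... while reuse only needs reinit_completion(), which merely
         * resets the done count */
        reinit_completion(&res->completion);
        /* issue the dlm request here, then: */
        wait_for_completion(&res->completion);
    }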
2015-08-31  md-cluster: fix deadlock issue on message lock  (Guoqing Jiang, 2 files, -9/+9)

There is a problem with the previous communication mechanism; we got
the deadlock scenario below with a cluster which has 3 nodes:

   Sender                     Receiver                Receiver

   token(EX)
   message(EX)
   writes message
   downconverts message(CR)
   requests ack(EX)
                              gets message(CR)        gets message(CR)
                              reads message           reads message
                              requests EX on message  requests EX on message

To fix this problem, we make the following changes:

1. The sender downconverts MESSAGE to CW rather than CR.
2. The receiver requests a PR lock, not an EX lock, on message.

And in case we fail to down-convert EX to CW on message, it is better
to unlock message rather than still holding the lock.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Lidong Zhong <ldzhong@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: transfer the resync ownership to another node  (Guoqing Jiang, 2 files, -3/+18)

When node A stops an array while the array is doing a resync, we need
to let another node B take over the resync task.

To achieve this goal, node A needs to send an explicit
BITMAP_NEEDS_SYNC message to the cluster.  The node B which receives
that message will invoke __recover_slot to do the resync.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md-cluster: split recover_slot for future code reuse  (Guoqing Jiang, 1 file, -7/+16)

Make recover_slot a wrapper around __recover_slot, since the logic of
__recover_slot can be reused when other nodes need to take over the
resync job.

Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
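The shape of the split, sketched (the slot-recovery callback signature comes
from <linux/dlm.h>; bodies elided):

    #include <linux/dlm.h>

    /* Reusable body: also callable when this node takes over a resync. */
    static void __recover_slot(struct mddev *mddev, int slot)
    {
        /* ... mark the slot's bitmap for recovery, kick the thread ... */
    }

    /* Thin wrapper keeping the signature the DLM expects. */
    static void recover_slot(void *arg, struct dlm_slot *slot)
    {
        struct mddev *mddev = arg;

        pr_info("md-cluster: recovering slot %d\n", slot->slot);
        __recover_slot(mddev, slot->slot);
    }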
2015-08-31  md-cluster: use %pU to print UUIDs  (Guoqing Jiang, 1 file, -14/+2)

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
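For illustration, the before/after shape of such a conversion (a sketch, not
the exact md-cluster lines): %pU is the printk extension that formats a
16-byte UUID.

    /* Before: a dozen lines of open-coded hex printing, e.g.
     *     for (i = 0; i < 16; i++)
     *         sprintf(buf + 2 * i, "%02x", uuid[i]);
     * After: */
    static void print_cluster_uuid(const u8 uuid[16])
    {
        pr_info("md-cluster: joined cluster %pU\n", uuid);
    }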
2015-08-31  md: setup safemode_timer before it's being used  (Sasha Levin, 1 file, -5/+4)

We used to set up the safemode_timer in md_run.  If md_run failed
before the timer was set up, we'd end up trying to modify a timer that
doesn't have a callback function when we access safe_delay_store,
which would trigger a BUG.

neilb: delete init_timer() call as setup_timer() does that.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: NeilBrown <neilb@suse.com>
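A sketch of the fix under the 4.2-era timer API (modern kernels would use
timer_setup() instead; the callback name follows md's code, the wrapper
function is hypothetical):

    #include <linux/timer.h>

    static void md_safemode_timeout(unsigned long data);  /* md's callback */

    static void mddev_setup_sketch(struct mddev *mddev)
    {
        /* setup_timer() = init_timer() + set callback/data, done at
         * allocation time, so a later md_run() failure leaves a valid,
         * merely unused, timer behind */
        setup_timer(&mddev->safemode_timer, md_safemode_timeout,
                    (unsigned long)mddev);
    }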
2015-08-31  md/raid5: handle possible race as reshape completes.  (NeilBrown, 1 file, -0/+5)

It is possible (though unlikely) for a reshape to be interrupted
between the time that end_reshape is called and the time when
raid5_finish_reshape is called.  This can leave conf->reshape_progress
set to MaxSector, but mddev->reshape_position not.

This combination confuses reshape_request() when ->reshape_backwards
is set.  As conf->reshape_progress is so high, it seems the reshape
hasn't really begun.  But assuming MaxSector is a valid address only
leads to sorrow.

So ensure reshape_position and reshape_progress both agree, and add an
extra check in reshape_request() just in case they don't.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md: sync sync_completed has correct value as recovery finishes.  (NeilBrown, 1 file, -0/+9)

There can be a small window between the moment that recovery actually
writes the last block and the time when various sysfs and /proc/mdstat
attributes report that it has finished.  During this time,
'sync_completed' can have the wrong value.  This can confuse
monitoring software.

So:
 - don't set curr_resync_completed beyond the end of the devices,
 - set it correctly when resync/recovery has completed.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md: be careful when testing resync_max against curr_resync_completed.  (NeilBrown, 2 files, -2/+4)

While it generally shouldn't happen, it is not impossible for
curr_resync_completed to exceed resync_max.  This can particularly
happen when reshaping RAID5 - the current status isn't copied to
curr_resync_completed promptly, so when it is, it can exceed
resync_max.  This happens when the reshape is 'frozen', resync_max is
set low, and reshape is re-enabled.

Taking a difference between two unsigned numbers is always dangerous
anyway, so add a test to behave correctly if
curr_resync_completed > resync_max.

Signed-off-by: NeilBrown <neilb@suse.com>
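The guard in sketch form (a hypothetical helper, not the actual md code):
with unsigned sector arithmetic, resync_max - curr_resync_completed silently
underflows to a huge value when the invariant breaks, so compare first.

    static sector_t sectors_left_to_max(sector_t curr_resync_completed,
                                        sector_t resync_max)
    {
        if (curr_resync_completed > resync_max)
            return 0;       /* already past the limit: don't underflow */
        return resync_max - curr_resync_completed;
    }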
2015-08-31  md: set MD_RECOVERY_RECOVER when starting a degraded array.  (NeilBrown, 1 file, -0/+6)

This ensures that 'sync_action' will show 'recover' as soon as the
array is started.  If there is no spare, the status will change to
'idle' once that is detected.  Clear MD_RECOVERY_RECOVER for a
read-only array to ensure this change happens.

This allows scripts which monitor status not to get confused -
particularly my test scripts.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid5: remove incorrect "min_t()" when calculating writepos.  (NeilBrown, 1 file, -1/+6)

This code is calculating:

 - writepos: the furthest-along address (in device-space) that we
   *will* be writing to
 - readpos: the earliest address that we *could* possibly read from
 - safepos: the earliest address in the 'old' section that we might
   read from after a crash, when the reshape position is recovered
   from metadata.

The first is a precise calculation, so clipping at zero doesn't make
sense.  As the reshape position is now guaranteed to always be a
multiple of reshape_sectors, and as we already BUG_ON when
reshape_progress is zero, there is no point in this min_t() call.

The readpos and safepos are worst case - the actual value depends on
the precise geometry.  That worst case could be negative, which is
only a problem because we are storing the value in an unsigned.  So
leave the min_t() for those, as sketched below.

Signed-off-by: NeilBrown <neilb@suse.com>
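The sketch referenced above, for the forward-reshape case (a simplified
rendering of the logic, not the full raid5 code; min_t() is from
<linux/kernel.h>):

    static void advance_reshape_window(sector_t *writepos, sector_t *readpos,
                                       sector_t *safepos,
                                       sector_t reshape_sectors)
    {
        *writepos += reshape_sectors;   /* exact value: no clamp needed */

        /* worst-case estimates stored in unsigned sector_t: clamp at 0 */
        *readpos -= min_t(sector_t, reshape_sectors, *readpos);
        *safepos -= min_t(sector_t, reshape_sectors, *safepos);
    }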
2015-08-31  md/raid5: strengthen check on reshape_position at run.  (NeilBrown, 1 file, -15/+14)

When reshaping, we work in units of the largest chunk size.  If
changing from a larger to a smaller chunk size, that means we reshape
more than one stripe at a time.  So the required alignment of
reshape_position needs to take into account both the old and new chunk
size.

This means that both 'here_new' and 'here_old' are calculated with
respect to the same (maximum) chunk size, so testing if they are the
same when delta_disks is zero becomes pointless.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid5: switch to use conf->chunk_sectors in place of mddev->chunk_sectors where possible  (NeilBrown, 1 file, -14/+14)

The chunk_sectors and new_chunk_sectors fields of mddev can be changed
any time (via sysfs) that the reconfig mutex can be taken.  So raid5
keeps internal copies in 'conf' which are stable except for a short
locked moment when reshape stops/starts.

So any access that does not hold reconfig_mutex should use the 'conf'
values, not the 'mddev' values.  Several don't.  This could result in
corruption if new values were written at awkward times.

Also use min() or max() rather than open-coding.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid5: always set conf->prev_chunk_sectors and ->prev_algo  (NeilBrown, 1 file, -0/+3)

These aren't really needed when no reshape is happening, but it is
safer to have them always set to a meaningful value.  The next patch
will use ->prev_chunk_sectors without checking if a reshape is
happening (because that makes the code simpler), and this patch makes
that safe.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid10: fix a few typos in comments  (NeilBrown, 1 file, -2/+2)

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid5: consider updating reshape_position at start of reshape.  (NeilBrown, 1 file, -2/+6)

md/raid5 only updates ->reshape_position (which is stored in metadata
and is authoritative) occasionally, but particularly when getting
close to ->resync_max, as it must be correct when ->resync_max is
reached.

When mdadm tries to stop an array which is reshaping, it will:
 - freeze the reshape,
 - set resync_max to where the reshape has reached,
 - unfreeze the reshape.

When this happens, the reshape is aborted and then restarted.  The
restart doesn't check that resync_max is close, and so doesn't update
->reshape_position like it should.  This results in the reshape
stopping, but ->reshape_position being incorrect.

So on that first call to reshape_request, make sure ->reshape_position
is updated if needed.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md: close some races between setting and checking sync_action.  (NeilBrown, 2 files, -0/+3)

When checking sync_action in a script, we want to be sure it is as
accurate as possible.  As resync/reshape etc. doesn't always start
immediately (a separate thread is scheduled to do it), it is best if
'action_show' checks whether MD_RECOVERY_NEEDED is set (which it does)
and in that case reports what is likely to start soon (which it only
sometimes does).

So:
 - report 'reshape' if reshape_position suggests one might start.
 - set MD_RECOVERY_RECOVER in raid1_reshape(), because that is very
   likely to happen next.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md: Keep /proc/mdstat reporting recovery until fully DONE.  (NeilBrown, 1 file, -14/+24)

Currently when a recovery completes, mdstat shows that it has finished
before the new device is marked as a full member.  Because of this it
can appear to a script that the recovery finished but the array isn't
in sync.

So while MD_RECOVERY_DONE is still set, keep mdstat reporting
"recovery".  Once md_reap_sync_thread() completes, the spare will be
active and then MD_RECOVERY_DONE will be cleared.

To ensure this is race-free, set MD_RECOVERY_DONE before clearing
curr_resync.

Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-31  md/raid6: delta syndrome for ARM NEON  (Ard Biesheuvel, 2 files, -1/+58)

This implements XOR syndrome calculation using NEON intrinsics.  As
before, the module can be built for ARM and arm64 from the same
source.

Relative performance on a Cortex-A57 based system:

  raid6: int64x1  gen()   905 MB/s
  raid6: int64x1  xor()   881 MB/s
  raid6: int64x2  gen()  1343 MB/s
  raid6: int64x2  xor()  1286 MB/s
  raid6: int64x4  gen()  1896 MB/s
  raid6: int64x4  xor()  1321 MB/s
  raid6: int64x8  gen()  1773 MB/s
  raid6: int64x8  xor()  1165 MB/s
  raid6: neonx1   gen()  1834 MB/s
  raid6: neonx1   xor()  1278 MB/s
  raid6: neonx2   gen()  2528 MB/s
  raid6: neonx2   xor()  1942 MB/s
  raid6: neonx4   gen()  2888 MB/s
  raid6: neonx4   xor()  2334 MB/s
  raid6: neonx8   gen()  2957 MB/s
  raid6: neonx8   xor()  2232 MB/s
  raid6: using algorithm neonx8 gen() 2957 MB/s
  raid6: .... xor() 2232 MB/s, rmw enabled

Cc: Markus Stockhausen <stockhausen@collogia.de>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-03  md/raid0: update queue parameter in a safer location.  (NeilBrown, 1 file, -36/+39)

When a RAID5 array (for example) is reshaped to RAID0, the updating of
queue parameters (e.g. max number of sectors per bio) is done in the
wrong place.  It should be part of ->run, but it is actually part of
->takeover.  This means it happens before level_store() calls:

	blk_set_stacking_limits(&mddev->queue->limits);

and so it is ineffective.  This can lead to errors from underlying
devices.

So move all the relevant settings out of create_stripe_zones() and
into raid0_run().

As this can lead to a bug-on, it is suitable for any -stable kernel
which supports reshape to RAID0, so 2.6.35 or later.  As the bug has
been present for five years there is no urgency, so no need to rush
into -stable.

Fixes: 9af204cf720c ("md: Add support for Raid5->Raid0 and Raid10->Raid0 takeover")
Cc: stable@vger.kernel.org (v2.6.35+ - please delay until after -final release).
Reported-by: Yi Zhang <yizhan@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-03  md: simplify get_bitmap_file now that "file" is zeroed.  (Benjamin Randazzo, 1 file, -10/+10)

There is no point assigning '\0' to file->pathname[0] as file is now
zeroed out, so remove that branch and simplify the code.

[Original patch combined this with the change to use kzalloc.  I split
the two so that the change to kzalloc is easier to backport. - neilb]

Signed-off-by: Benjamin Randazzo <benjamin@randazzo.fr>
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-03  md/raid5: don't let shrink_slab shrink too far.  (NeilBrown, 1 file, -2/+3)

I have a report of drop_one_stripe() called from raid5_cache_scan()
apparently finding ->max_nr_stripes == 0.  This should not be allowed.
So add a test to keep max_nr_stripes above min_nr_stripes.

Also use a 'mask' rather than a 'mod' in drop_one_stripe to ensure
'hash' is valid even if max_nr_stripes does reach zero.

Fixes: edbe83ab4c27 ("md/raid5: allow the stripe_cache to grow and shrink.")
Cc: stable@vger.kernel.org (4.1 - please release with 2d5b569b665)
Reported-by: Tomas Papan <tomas.papan@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.com>
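A hypothetical, simplified sketch of the clamping rule (the field names
min_nr_stripes/max_nr_stripes match the commit; the shrinker plumbing and
the hash-by-mask detail are omitted, and drop_one_stripe()'s real signature
may differ):

    static unsigned long raid5_cache_scan_sketch(struct r5conf *conf,
                                                 unsigned long nr_to_scan)
    {
        unsigned long freed = 0;

        /* never shrink the stripe cache below the configured minimum */
        while (nr_to_scan-- &&
               conf->max_nr_stripes > conf->min_nr_stripes) {
            if (!drop_one_stripe(conf))
                break;
            freed++;
        }
        return freed;
    }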
2015-08-03  md: use kzalloc() when bitmap is disabled  (Benjamin Randazzo, 1 file, -1/+1)

In drivers/md/md.c, get_bitmap_file() uses kmalloc() for creating a
mdu_bitmap_file_t called "file".

	file = kmalloc(sizeof(*file), GFP_NOIO);
	if (!file)
		return -ENOMEM;

This structure is copied to user space at the end of the function.

	if (err == 0 &&
	    copy_to_user(arg, file, sizeof(*file)))
		err = -EFAULT;

But if the bitmap is disabled, only the first byte of "file" is
initialized with zero, so it's possible to read some bytes (up to
4095) of kernel space memory from user space.  This is an information
leak.

	/* bitmap disabled, zero the first byte and copy out */
	if (!mddev->bitmap_info.file)
		file->pathname[0] = '\0';

Signed-off-by: Benjamin Randazzo <benjamin@randazzo.fr>
Signed-off-by: NeilBrown <neilb@suse.com>
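The fix itself is a one-word change; sketched in context (a simplified
rendering of get_bitmap_file(), not the verbatim md.c code):

    static int get_bitmap_file_sketch(struct mddev *mddev, void __user *arg)
    {
        mdu_bitmap_file_t *file;
        int err = 0;

        file = kzalloc(sizeof(*file), GFP_NOIO);   /* was: kmalloc() */
        if (!file)
            return -ENOMEM;

        /* with kzalloc, every byte copied out below is initialized even
         * when no bitmap file is configured and pathname stays empty */
        /* ... fill file->pathname when mddev->bitmap_info.file is set ... */

        if (copy_to_user(arg, file, sizeof(*file)))
            err = -EFAULT;
        kfree(file);
        return err;
    }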
2015-08-03  md/raid1: extend spinlock to protect raid1_end_read_request against inconsistencies  (NeilBrown, 1 file, -4/+6)

raid1_end_read_request() assumes that the In_sync bits are consistent
with the ->degraded count.  raid1_spare_active() updates the In_sync
bit before the ->degraded count and so exposes an inconsistency, as
does error().  So extend the spinlock in raid1_spare_active() and
error() to hide those inconsistencies.

This should probably be part of
commit 34cab6f42003 ("md/raid1: fix test for 'was read error from
last working device'.") as it addresses the same issue.  It fixes the
same bug and should go to -stable for the same reasons.

Fixes: 76073054c95b ("md/raid1: clean up read_balance.")
Cc: stable@vger.kernel.org (v3.0+)
Signed-off-by: NeilBrown <neilb@suse.com>
2015-08-02  Linux 4.2-rc5  (Linus Torvalds, 1 file, -1/+1)
2015-08-02  i915: temporary fix for DP MST docking station NULL pointer dereference  (Linus Torvalds, 1 file, -3/+5)

Ted Ts'o reports that his Lenovo T540p ThinkPad crashes at boot if
attached to the docking station.  This is a regression that he was
able to bisect to commit 8c7b5ccb7298 ("drm/i915: Use atomic helpers
for computing changed flags").

The reason seems to be the new call to
drm_atomic_helper_check_modeset() added to
intel_modeset_compute_config(), which in turn calls
update_connector_routing(), and somehow ends up picking a NULL crtc
for the connector state, causing the subsequent drm_crtc_index() to
OOPS.

Daniel Vetter says that the fundamental issue seems to be confusion in
the encoder selection, and this isn't the right fix, but while he
chases down the proper fix, this at least avoids the NULL pointer
dereference and makes Ted's docking station work again.

Reported-bisected-and-tested-by: Theodore Ts'o <tytso@mit.edu>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Dave Airlie <airlied@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-01  link_path_walk(): be careful when failing with ENOTDIR  (Al Viro, 1 file, -1/+6)

In RCU mode we might end up with the dentry evicted just as we check
that it's a directory.  In such a case we should return -ECHILD rather
than -ENOTDIR, so that the pathwalk will be retried in non-RCU mode.

Breakage had been introduced in commit b18825a - prior to that we were
looking at nd->inode, which had been fetched before verifying that
->d_seq was still valid.  That form of check would only be satisfied
if at some point the pathname prefix would indeed have resolved to a
non-directory.  The fix consists of checking ->d_seq after we'd run
into a non-directory dentry, and failing with -ECHILD in case of a
mismatch.

Note that all branches since 3.12 have that problem...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
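The shape of the fix, sketched as a hypothetical helper (simplified from the
namei logic described above, not Viro's literal patch):

    static int check_dir_rcu(struct nameidata *nd, struct dentry *dentry)
    {
        if (d_can_lookup(dentry))
            return 0;
        /* looked like a non-directory: re-validate the seqcount first */
        if ((nd->flags & LOOKUP_RCU) &&
            read_seqcount_retry(&dentry->d_seq, nd->seq))
            return -ECHILD;     /* evicted under us: retry in ref-walk */
        return -ENOTDIR;        /* genuinely a non-directory */
    }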
2015-07-31  stmmac: fix missing MODULE_LICENSE in stmmac_platform  (Joachim Eastwood, 1 file, -0/+4)

Commit 50649ab14982 ("stmmac: drop driver from stmmac platform code")
was a bit overzealous in removing code and dropped the MODULE_*
macros that are still needed, since stmmac_platform can be a module.
Fix this by putting the macros removed in 50649ab14982 back.

This fixes the following errors when used as a module:

stmmac_platform: module license 'unspecified' taints kernel.
Disabling lock debugging due to kernel taint
stmmac_platform: Unknown symbol devm_kmalloc (err 0)
stmmac_platform: Unknown symbol stmmac_suspend (err 0)
stmmac_platform: Unknown symbol platform_get_irq_byname (err 0)
stmmac_platform: Unknown symbol stmmac_dvr_remove (err 0)
stmmac_platform: Unknown symbol platform_get_resource (err 0)
stmmac_platform: Unknown symbol of_get_phy_mode (err 0)
stmmac_platform: Unknown symbol of_property_read_u32_array (err 0)
stmmac_platform: Unknown symbol of_alias_get_id (err 0)
stmmac_platform: Unknown symbol stmmac_resume (err 0)
stmmac_platform: Unknown symbol stmmac_dvr_probe (err 0)

Fixes: 50649ab14982 ("stmmac: drop driver from stmmac platform code")
Reported-by: Igor Gnatenko <i.gnatenko.brain@gmail.com>
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
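The restored boilerplate looks like the sketch below (the exact description
string is abbreviated here).  Without MODULE_LICENSE() the loader treats the
module as proprietary and taints the kernel, which leads to the "Unknown
symbol" failures quoted above:

    #include <linux/module.h>

    MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet platform support");
    MODULE_LICENSE("GPL");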
2015-07-31  gianfar: Enable device wakeup when appropriate  (Claudiu Manoil, 3 files, -13/+3)

The wol_en flag is 0 by default anyway, and we have the following
inconsistency: a MAGIC-packet wol-capable eth interface is registered
as a wake-up source but is unable to wake up the system, as wol_en is
0 (wake-on flag set to 'd').  Calling set_wakeup_enable() at netdev
open is just redundant because wol_en is 0 by default.  Let only
ethtool call set_wakeup_enable() for now.

The bflock is obviously obsolete; its utility has been eroded over
time.  The bitfield flags used today in gianfar are accessed only on
the init/config path, with no real possibility of concurrency -
nothing that would justify something like bflock.

Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-31  gianfar: Fix suspend/resume for wol magic packet  (Claudiu Manoil, 1 file, -68/+30)

If we disable NAPI in the first place, we can mask the device's
interrupts (and halt it) without fearing that imask may be
concurrently accessed from interrupt context, so there's no need to do
local_irq_save() around gfar_halt_nodisable().
lock_rx_qs()/unlock_tx_qs() are just obsolete and potentially buggy
routines.  The txlock is currently used in the driver only to manage
TX congestion; it has nothing to do with halting the device.

With these changes, the TX processing is stopped before gfar_halt().
The compact gfar_halt() is used instead of gfar_halt_nodisable(), as
it disables the Rx/TX DMA h/w blocks and the Rx/TX h/w queues.
gfar_start() re-enables all these blocks on resume.  Enabling the
magic-packet mode remains the same; note that the RX block is
re-enabled just before entering sleep mode.

Add the IRQF_NO_SUSPEND flag for the error interrupt line, to signal
that the interrupt line must remain active during sleep in order to
wake the system by magic packet (MAG) reception interrupt.  (On some
systems the MAG interrupt did trigger without this flag as well, but
on others it didn't.)

Without these fixes, when suspended during fair Tx traffic the
interface occasionally failed to be woken up by magic packet.

Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
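A sketch of the IRQF_NO_SUSPEND part (handler and names are illustrative,
not the exact gianfar code):

    #include <linux/interrupt.h>

    static irqreturn_t gfar_error_isr(int irq, void *dev_id); /* hypothetical */

    static int wol_request_error_irq(int irq, void *dev_id)
    {
        /* IRQF_NO_SUSPEND keeps this line armed during system sleep so
         * the magic-packet (MAG) reception interrupt can wake the machine */
        return request_irq(irq, gfar_error_isr, IRQF_NO_SUSPEND,
                           "gianfar-error", dev_id);
    }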
2015-07-31  gianfar: Fix warning when CONFIG_PM off  (Claudiu Manoil, 1 file, -0/+2)

  CC      drivers/net/ethernet/freescale/gianfar.o
drivers/net/ethernet/freescale/gianfar.c:568:13: warning: 'lock_tx_qs' defined but not used [-Wunused-function]
 static void lock_tx_qs(struct gfar_private *priv)
             ^
drivers/net/ethernet/freescale/gianfar.c:576:13: warning: 'unlock_tx_qs' defined but not used [-Wunused-function]
 static void unlock_tx_qs(struct gfar_private *priv)
             ^

Reported-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-31  act_pedit: check binding before calling tcf_hash_release()  (WANG Cong, 1 file, -3/+2)

When we share an action within a filter, the bind refcnt should
increase; therefore we should not call tcf_hash_release().

Fixes: 1a29321ed045 ("net_sched: act: Dont increment refcnt on replace")
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Cong Wang <cwang@twopensource.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
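The rule, sketched as a hypothetical helper for the existing-action path
(tcf_hash_release() took (action, bind) in this era; this is not the literal
act_pedit code):

    /* Called when pedit finds an existing action in the hash table. */
    static int reuse_existing_action(struct tc_action *a, int bind, int ovr)
    {
        if (bind)           /* binding into a filter: the bind refcnt was
                             * already bumped, so do NOT release */
            return 0;
        tcf_hash_release(a, bind);  /* release only when not binding */
        return ovr ? 0 : -EEXIST;   /* explicit replace may proceed */
    }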
2015-07-31  ARM: dts: keystone: fix dt bindings to use post div register for mainpll  (Murali Karicheri, 3 files, -9/+6)

All of the keystone devices have a separate register to hold the post
divider value for the main pll clock.  Currently the fixed-postdiv
value used for k2hk/l/e SoCs works by sheer luck, as u-boot happens to
use a value of 2 for this.  Now that we have fixed this in the pll
clock driver, change the dt bindings for the same.

Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
2015-07-31Revert "dmaengine: virt-dma: don't always free descriptor upon completion"Jun Nie2-25/+7
This reverts commit b9855f03d560d351e95301b9de0bc3cad3b31fe9. The patch break existing DMA usage case. For example, audio SOC dmaengine never release channel and cause virt-dma to cache too much memory in descriptor to exhaust system memory. Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31  dmaengine: mv_xor: fix big endian operation in register mode  (Thomas Petazzoni, 1 file, -4/+5)

Commit 6f166312c6ea2 ("dmaengine: mv_xor: add support for a38x command
in descriptor mode") introduced support for a feature that appeared in
Armada 38x: specifying the operation to be performed on a
per-descriptor basis rather than globally per channel.  However, when
doing so, it changed the function mv_chan_set_mode() to use:

	if (IS_ENABLED(__BIG_ENDIAN))

instead of:

	#if defined(__BIG_ENDIAN)

While IS_ENABLED() is perfectly fine for CONFIG_* symbols, it is not
for other symbols such as __BIG_ENDIAN that are provided directly by
the compiler.  Consequently, the commit broke support for big-endian,
as the XOR_DESCRIPTOR_SWAP flag was not set in the XOR channel
configuration register.

The primary visible effect was some nasty warnings and failures
appearing during the self-test of the XOR unit:

[    1.197368] mv_xor d0060900.xor: error on chan 0. intr cause 0x00000082
[    1.197393] mv_xor d0060900.xor: config 0x00008440
[    1.197410] mv_xor d0060900.xor: activation 0x00000000
[    1.197427] mv_xor d0060900.xor: intr cause 0x00000082
[    1.197443] mv_xor d0060900.xor: intr mask 0x000003f7
[    1.197460] mv_xor d0060900.xor: error cause 0x00000000
[    1.197477] mv_xor d0060900.xor: error addr 0x00000000
[    1.197491] ------------[ cut here ]------------
[    1.197513] WARNING: CPU: 0 PID: 1 at ../drivers/dma/mv_xor.c:664 mv_xor_interrupt_handler+0x14c/0x170()

See also:
http://storage.kernelci.org/next/next-20150617/arm-mvebu_v7_defconfig+CONFIG_CPU_BIG_ENDIAN=y/lab-khilman/boot-armada-xp-openblocks-ax3-4.txt

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Fixes: 6f166312c6ea2 ("dmaengine: mv_xor: add support for a38x command in descriptor mode")
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
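Why the two forms differ, in a sketch (the function name is illustrative;
XOR_DESCRIPTOR_SWAP is the driver's flag): IS_ENABLED() only recognizes
symbols defined to 1, as CONFIG_* options are, whereas the kernel's
byteorder headers define __BIG_ENDIAN to 4321, so it must be tested with
the preprocessor.

    #include <linux/kconfig.h>      /* IS_ENABLED() */

    static u32 chan_op_mode_sketch(u32 op_mode)
    {
        /* Broken form - always false, so XOR_DESCRIPTOR_SWAP was never
         * set on big-endian builds:
         *
         *     if (IS_ENABLED(__BIG_ENDIAN))
         *         op_mode |= XOR_DESCRIPTOR_SWAP;
         */
    #if defined(__BIG_ENDIAN)
        op_mode |= XOR_DESCRIPTOR_SWAP;     /* correct preprocessor test */
    #endif
        return op_mode;
    }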
2015-07-31  dmaengine: xgene-dma: Fix the resource map to handle overlapping  (Rameshwar Prasad Sahu, 3 files, -2/+5)

There is an overlap in the dma ring cmd csr region due to sharing of
the ethernet ring cmd csr region.  This patch fixes the resource
overlap by mapping the entire dma ring cmd csr region.

Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31  dmaengine: at_xdmac: fix transfer data width in at_xdmac_prep_slave_sg()  (Cyrille Pitchen, 1 file, -3/+4)

This patch adds the missing update of the transfer data width in
at_xdmac_prep_slave_sg().

Indeed, for each item in the scatter-gather list, we check whether the
transfer length is aligned with the data width provided by
dmaengine_slave_config().  If so, we directly use this data width for
the current part of the transfer we are preparing.  Otherwise, the
data width is reduced to 8 bits (1 byte).  Of course, the actual
number of register accesses must also be updated to match the new data
width.

So one chunk was missing in the original patch (see the Fixes tag
below): the number of register accesses was correctly set to
(len >> fixed_dwidth) in mbr_ubc, but the real data width was not
updated in mbr_cfg.  Since mbr_cfg may change for each part of the
scatter-gather transfer, this also explains why the original patch
used Descriptor View 2 instead of Descriptor View 1.

Let's take the example of a DMA transfer to write 8-bit data into an
Atmel USART with FIFOs.  When FIFOs are enabled in the USART, its
Transmit Holding Register (THR) works in multidata mode, that is to
say that up to four 8-bit data can be written into the THR in a single
32-bit access, and it is still possible to write only one data with an
8-bit access.  To take advantage of this new feature, the DMA driver
was modified to allow multiple dwidths when doing slave transfers.

For instance, when the total length is 22 bytes, the USART driver
splits the transfer into 2 parts:

First part:  20 bytes transferred through 5 32-bit writes into THR
Second part:  2 bytes transferred through 2 8-bit writes into THR

For the second part, the data width was first set to 4_BYTES by the
USART driver thanks to dmaengine_slave_config(), then
at_xdmac_prep_slave_sg() reduces this data width to 1_BYTE because the
2-byte length is not aligned with the original 4_BYTES data width.
Since the data width is modified, the actual number of writes into THR
must be set accordingly.

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Fixes: 6d3a7d9e3ada ("dmaengine: at_xdmac: allow muliple dwidths when doing slave transfers")
Cc: stable@vger.kernel.org #4.0 and later
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31  dmaengine: at_hdmac: fix residue computation  (Cyrille Pitchen, 2 files, -47/+88)

As claimed by the programmer datasheet and confirmed by the IP
designer, the Block Transfer Size (BTSIZE) bitfield of the Channel x
Control A Register (CTRLAx) always refers to a number of Source Width
(SRC_WIDTH) transfers.

Both the SRC_WIDTH and BTSIZE bitfields can be extracted from the
CTRLAx register to compute the DMA residue, so the 'tx_width' field is
useless and can be removed from the struct at_desc.

Before this patch, atc_prep_slave_sg() was not consistent: BTSIZE was
correctly initialized according to the SRC_WIDTH, but 'tx_width' was
always set to reg_width, which was incorrect for MEM_TO_DEV transfers.
It led to a bad DMA residue when 'tx_width' != SRC_WIDTH.

Also the 'tx_width' field was mostly set only in the first and last
descriptors.  Depending on the kind of DMA transfer, this field
remained uninitialized for intermediate descriptors.  The accurate DMA
residue was computed only when the currently processed descriptor was
the first or the last of the chain.  This algorithm was a little bit
odd.  An accurate DMA residue can always be computed using the
SRC_WIDTH and BTSIZE bitfields in the CTRLAx register.

Finally, the test to check whether the currently processed descriptor
is the last of the chain was wrong: for cyclic transfers,
last_desc->lli.dscr is NOT equal to zero, since set_desc_eol() is
never called, but is logically equal to first_desc->txd.phys.  This
bug had a side effect on the drivers/tty/serial/atmel_serial.c driver,
which uses cyclic DMA transfers to receive data.  Since the DMA
residue was wrong each time the DMA transfer reached the second (and
last) period of the transfer, no more data were received by the USART
driver till the cyclic DMA transfer looped back to the first period.

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Acked-by: Torsten Fleischer <torfl6749@gmail.com>
Tested-by: Jiří Prchal <jiri.prchal@aksignal.cz>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
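The residue rule stated above, in a sketch (the bit positions and mask are
illustrative, not the real at_hdmac_regs.h layout): BTSIZE counts transfers
of SRC_WIDTH each, so the remaining bytes of one descriptor are
BTSIZE << SRC_WIDTH, with SRC_WIDTH encoding the log2 of the width in bytes.

    static u32 atc_desc_residue_bytes(u32 ctrla)
    {
        u32 btsize    = ctrla & 0xffff;         /* block transfer size */
        u32 src_width = (ctrla >> 24) & 0x3;    /* log2(bytes/transfer) */

        return btsize << src_width;
    }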
2015-07-31  dmaengine: at_xdmac: fix bug about channel configuration  (Ludovic Desroches, 1 file, -9/+10)

When using descriptor view 2 or higher, we don't write the
configuration into the AT_XDMAC_CC register because this configuration
will be fetched from the descriptor.  Unfortunately, the PROT bit is
not updated with this method, so we have to do it manually before
enabling the channel.

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>