path: root/drivers/base/regmap/regcache-rbtree.c
2021-10-12  regmap: Fix possible double-free in regcache_rbtree_exit()  (Yang Yingliang, 1 file, -4/+3)

    In regcache_rbtree_insert_to_block(), when the 'present' realloc fails,
    the 'blk' that was meant to be assigned to 'rbnode->block' is freed, so
    'rbnode->block' points to freed memory. In the error handling path of
    regcache_rbtree_init(), 'rbnode->block' is then freed again in
    regcache_rbtree_exit(), and KASAN reports a double-free:

    BUG: KASAN: double-free or invalid-free in kfree+0xce/0x390

    Call Trace:
     slab_free_freelist_hook+0x10d/0x240
     kfree+0xce/0x390
     regcache_rbtree_exit+0x15d/0x1a0
     regcache_rbtree_init+0x224/0x2c0
     regcache_init+0x88d/0x1310
     __regmap_init+0x3151/0x4a80
     __devm_regmap_init+0x7d/0x100
     madera_spi_probe+0x10f/0x333 [madera_spi]
     spi_probe+0x183/0x210
     really_probe+0x285/0xc30

    To fix this, move the assignment to 'rbnode->block' up so that it
    happens immediately after the first reallocation succeeds; the data
    structure then stays valid even if the second reallocation fails.

    Reported-by: Hulk Robot <hulkci@huawei.com>
    Fixes: 3f4ff561bc88b ("regmap: rbtree: Make cache_present bitmap per node")
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Link: https://lore.kernel.org/r/20211012023735.1632786-1-yangyingliang@huawei.com
    Signed-off-by: Mark Brown <broonie@kernel.org>
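A minimal sketch of the fix pattern described above (a C fragment; the variable and field names are taken from the commit text, not the literal upstream diff):

```c
/* Sketch: assign the reallocated data block to the node before the
 * second (bitmap) reallocation is attempted. If the bitmap realloc
 * then fails, rbnode->block still points at valid memory, so the
 * error path via regcache_rbtree_exit() frees it exactly once. */
blk = krealloc(rbnode->block,
	       blklen * map->cache_word_size, GFP_KERNEL);
if (!blk)
	return -ENOMEM;

rbnode->block = blk;	/* moved up: node stays consistent */

if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
	present = krealloc(rbnode->cache_present,
			   BITS_TO_LONGS(blklen) * sizeof(*present),
			   GFP_KERNEL);
	if (!present)
		return -ENOMEM;	/* rbnode->block was not freed */
	/* ... clear the newly added bits, update rbnode ... */
}
```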
2019-04-25  regmap: add proper SPDX identifiers on files that did not have them.  (Greg Kroah-Hartman, 1 file, -11/+7)

    There were a few files in the regmap code that did not have SPDX
    identifiers on them, so fix that up. At the same time, remove the
    "free form" text that specified the license of the file, as that is
    impossible for any tool to properly parse.

    Also, as Mark loves // comment markers, convert all of the headers to
    be the same to make things look consistent :)

    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2019-01-29  regmap: Remove attribute packed from struct 'regcache_rbtree_node'  (Mathieu Malaterre, 1 file, -1/+1)

    On one hand, commit 28644c809f44 ("regmap: Add the rbtree cache
    support") declared 'regcache_rbtree_node' as a packed structure, while
    on the other hand commit e977145aeaad ("[RBTREE] Add explicit alignment
    to sizeof(long) for struct rb_node.") declared struct 'rb_node' as
    aligned. Resolve the ambiguity of placing an aligned structure inside a
    packed one by removing the packed attribute from the struct; this
    appears to be what gcc does anyway. This removes the following warning
    (W=1):

    drivers/base/regmap/regcache-rbtree.c:36:1: warning: alignment 1 of 'struct regcache_rbtree_node' is less than 4 [-Wpacked-not-aligned]

    Cc: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Signed-off-by: Mathieu Malaterre <malat@debian.org>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2018-12-17  regmap: rbtree: convert to DEFINE_SHOW_ATTRIBUTE  (Yangtao Li, 1 file, -11/+1)

    Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code.

    Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
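For context, DEFINE_SHOW_ATTRIBUTE(name) generates name_open() and name_fops from an existing name_show() seq_file callback, so the conversion plausibly reduces to something like this (a sketch assuming the file's existing rbtree_show() helper):

```c
/* generates rbtree_open() and rbtree_fops from rbtree_show(),
 * replacing the hand-rolled open function and file_operations */
DEFINE_SHOW_ATTRIBUTE(rbtree);

/* registration then points at the generated fops (illustrative) */
debugfs_create_file("rbtree", 0400, map->debugfs, map, &rbtree_fops);
```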
2016-12-19  regmap: use rb_entry()  (Geliang Tang, 1 file, -4/+3)

    To make the code clearer, use rb_entry() instead of container_of() to
    deal with the rbtree.

    Signed-off-by: Geliang Tang <geliangtang@gmail.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2016-08-04  regmap: rbtree: Avoid overlapping nodes  (Lars-Peter Clausen, 1 file, -10/+28)

    When searching for a suitable node to insert a new register into, one
    which does not fall within the range of any existing node, we not only
    look for nodes directly adjacent to the new register, but for nodes
    within a certain proximity. This is done to avoid creating lots of
    small nodes with only a few registers of spacing in between, which
    would increase memory usage as well as tree traversal time.

    This means there might be multiple candidate nodes which fall within
    the proximity range of the new register. If we choose the first node
    we encounter, under certain register insertion patterns it is possible
    to end up with overlapping ranges. This breaks the ordering in the
    rbtree and can cause cached register values to become corrupted.

    E.g. take the simplified example where the proximity range is 2 and
    the register insertion sequence is 1, 4, 2, 3, 5.
     * Insert of register 1 creates a new node; this is the root of the
       rbtree.
     * Insert of register 4 creates a new node, which is inserted to the
       right of the root.
     * Insert of register 2 gets inserted into the first node.
     * Insert of register 3 gets inserted into the first node.
     * Insert of register 5 also gets inserted into the first node, since
       this is the first node encountered and it is within the proximity
       range.

    Now there are two overlapping nodes.

    To avoid this, always choose the node that is closest to the new
    register; this ensures that nodes do not overlap. The tree traversal
    is still done as a binary search, we just don't stop at the first node
    found, so the complexity of the algorithm stays within the same order.

    Ideally, if a new register is in the range of two adjacent blocks,
    those blocks should be merged, but that is a much more invasive change
    and is left for later.

    The issue was initially introduced in commit 472fdec7380c ("regmap:
    rbtree: Reduce number of nodes, take 2"), but became much more exposed
    by commit 6399aea629b0 ("regmap: rbtree: When adding a reg do a bsearch
    for target node"), which changed the order in which nodes are looked
    up.

    Fixes: 6399aea629b0 ("regmap: rbtree: When adding a reg do a bsearch for target node")
    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@kernel.org>
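An illustrative sketch of the closest-node look-up (a hypothetical helper; the real code folds this into the rbtree cache's write path):

```c
/* Sketch: a binary search that does not stop at the first candidate
 * within the proximity window but remembers the node closest to 'reg'. */
static struct regcache_rbtree_node *
rbtree_find_closest(struct rb_root *root, unsigned int reg,
		    unsigned int prox)
{
	struct rb_node *n = root->rb_node;
	struct regcache_rbtree_node *node, *best = NULL;
	unsigned int best_dist = UINT_MAX, dist;

	while (n) {
		node = rb_entry(n, struct regcache_rbtree_node, node);
		if (reg < node->base_reg) {
			dist = node->base_reg - reg;
			n = n->rb_left;
		} else if (reg > node->base_reg + node->blklen - 1) {
			dist = reg - (node->base_reg + node->blklen - 1);
			n = n->rb_right;
		} else {
			return node;	/* reg is inside this node */
		}
		if (dist <= prox && dist < best_dist) {
			best = node;
			best_dist = dist;
		}
	}
	return best;	/* NULL: no candidate, allocate a new node */
}
```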
2016-01-05  Merge remote-tracking branches 'regmap/topic/mmio', 'regmap/topic/rbtree' and 'regmap/topic/seq' into regmap-next  (Mark Brown, 1 file, -2/+7)
2015-11-20  regmap: replace kmalloc with kmalloc_array  (lixiubo, 1 file, -2/+2)

    Replace kmalloc with the specialized function kmalloc_array when the
    size is a multiplication: number * size.

    Signed-off-by: lixiubo <lixiubo@cmss.chinamobile.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2015-11-20  regmap: replace kzalloc with kcalloc  (lixiubo, 1 file, -2/+3)

    Replace kzalloc with the specialized function kcalloc when the size is
    a multiplication: number * sizeof.

    Signed-off-by: lixiubo <lixiubo@cmss.chinamobile.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2015-11-16  regmap: rbtree: When adding a reg do a bsearch for target node  (Nikesh Oswal, 1 file, -2/+7)

    A binary search is much more efficient than iterating over the rbtree
    in ascending order, which is what the current code does. During
    initialisation the register defaults are written to the cache in one
    large chunk, and these are always sorted in ascending order, so for
    that situation it would ideally be best to iterate the rbtree in
    descending order. But at runtime drivers may write into the cache in
    any random order, so this patch selects a binary search to give
    optimal runtime performance; at initialisation time, when the register
    defaults are written, the performance of a binary search is also much
    better than the ascending iteration the current code does.

    Signed-off-by: Nikesh Oswal <Nikesh.Oswal@wolfsonmicro.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2015-07-29  regmap: regcache-rbtree: Clean new present bits on present bitmap resize  (Guenter Roeck, 1 file, -5/+14)

    When inserting a new register into a block, the present bitmap size is
    increased using krealloc. krealloc does not clear the additionally
    allocated memory, leaving it filled with random values. The result is
    that some registers are considered cached even though this is not the
    case.

    Fix the problem by clearing the additionally allocated memory. Also,
    if the bitmap size does not increase, do not reallocate the bitmap at
    all, to reduce overhead.

    Fixes: 3f4ff561bc88 ("regmap: rbtree: Make cache_present bitmap per node")
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Cc: stable@vger.kernel.org
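A sketch of the described fix (field names assumed from context): reallocate only when the bitmap actually grows, and zero the tail that krealloc() leaves uninitialized.

```c
if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
	present = krealloc(rbnode->cache_present,
			   BITS_TO_LONGS(blklen) * sizeof(*present),
			   GFP_KERNEL);
	if (!present)
		return -ENOMEM;
	/* krealloc() does not zero the grown region */
	memset(present + BITS_TO_LONGS(rbnode->blklen), 0,
	       (BITS_TO_LONGS(blklen) - BITS_TO_LONGS(rbnode->blklen))
	       * sizeof(*present));
} else {
	/* no growth: reuse the existing bitmap, no realloc needed */
	present = rbnode->cache_present;
}
```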
2015-03-07  regmap: regcache-rbtree: Fix present bitmap resize  (Lars-Peter Clausen, 1 file, -1/+1)

    When inserting a new register into a block at the lower end, the
    present bitmap is currently shifted in the wrong direction. The effect
    is that the bitmap becomes corrupted, and registers which are present
    might be reported as not present and vice versa.

    Fix this by shifting left rather than right.

    Fixes: 472fdec7380c ("regmap: rbtree: Reduce number of nodes, take 2")
    Reported-by: Daniel Baluta <daniel.baluta@gmail.com>
    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Cc: stable@vger.kernel.org
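In other words (a sketch with assumed variable names): when the node is extended downwards by 'offset' registers, bit i of the old bitmap must become bit (i + offset) of the new one, which is a left shift in bitmap terms.

```c
/* move the existing present bits towards higher bit positions */
bitmap_shift_left(rbnode->cache_present, rbnode->cache_present,
		  offset, new_blklen);
```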
2014-10-20  regmap: cache: Sort include headers alphabetically  (Xiubo Li, 1 file, -2/+2)

    If the include headers aren't sorted alphabetically, then the logical
    choice is to append new ones. However, that creates a lot of potential
    for conflicts or duplicates, because every change will then add new
    includes in the same location.

    Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
2014-08-26  regmap: Fix regcache debugfs initialization  (Lars-Peter Clausen, 1 file, -6/+3)

    Commit 6cfec04bcc05 ("regmap: Separate regmap dev initialization")
    moved the regmap debugfs initialization after the regcache
    initialization. This means that the regmap debugfs directory is not
    created yet when the cache initialization runs, and so any debugfs
    files registered by the regcache are created in the debugfs root
    directory rather than in the debugfs directory of the regmap instance.
    Fix this by adding a separate callback for the regcache debugfs
    initialization, which is called after the parent debugfs entry has
    been created.

    Fixes: 6cfec04bcc05 ("regmap: Separate regmap dev initialization")
    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@linaro.org>
    Cc: stable@vger.kernel.org
2014-04-14  regmap: rbtree: improve 64bits memory alignment  (Jean-Christophe PINCE, 1 file, -4/+4)

    Change the regcache_rbtree_node structure's field order to align the
    pointers on 64-bit architectures.

    Signed-off-by: Jean-Christophe PINCE <jean-christophe.pince@intel.com>
    Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
    Signed-off-by: Mark Brown <broonie@linaro.org>
2013-09-03  Merge tag 'regmap-v3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap  (Linus Torvalds, 1 file, -51/+130)

    Pull regmap updates from Mark Brown:
     "A quiet release for regmap, some cleanups, fixes and:
      - Improved node coalescing for rbtree, reducing memory usage and
        improving performance during syncs.
      - Support for registering multiple register patches.
      - A quirk for handling interrupts that need to be clear when masked
        in regmap-irq"

    * tag 'regmap-v3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
      regmap: rbtree: Make cache_present bitmap per node
      regmap: rbtree: Reduce number of nodes, take 2
      regmap: rbtree: Simplify adjacent node look-up
      regmap: debugfs: Fix continued read from registers file
      regcache-rbtree: Fix reg_stride != 1
      regmap: Allow multiple patches to be registered
      regmap: regcache: allow read-only regs to be cached
      regmap: fix regcache_reg_present() for empty cache
      regmap: core: allow a virtual range to cover its own data window
      regmap: irq: document mask/wake_invert flags
      regmap: irq: make flags bool and put them in a bitfield
      regmap: irq: Allow to acknowledge masked interrupts during initialization
      regmap: Provide __acquires/__releases annotations
2013-08-29  regmap: rbtree: Make cache_present bitmap per node  (Lars-Peter Clausen, 1 file, -14/+72)

    With devices which have a dense and small register map placed at a
    large offset, the global cache_present bitmap imposes a huge memory
    overhead. Making the cache_present bitmap per rbtree node avoids the
    issue and easily reduces the memory footprint by a factor of ten. For
    devices with a more sparse map, or without a large base register
    offset, the memory usage might increase slightly by a few bytes, but
    not significantly. E.g. for a device which has ~50 registers at offset
    0x4000, the memory footprint of the register cache goes down from 2496
    bytes to 175 bytes.

    Moving the bitmap to a per-node basis means that the handling of the
    bitmap is now cache implementation specific and can no longer be
    managed by the core. The regcache_sync_block() function is extended by
    an additional parameter so that the cache implementation can tell the
    core which registers in the block are set and which are not. The
    parameter is optional; if it is NULL, the core assumes that all
    registers are set. The rbtree cache also needs to implement its own
    drop callback instead of relying on the core to handle this.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@linaro.org>
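The extended helper plausibly has this shape (a sketch of the described signature change; a NULL bitmap tells the core that every register in the block is present):

```c
/* cache_present may be NULL: then all registers are assumed set */
int regcache_sync_block(struct regmap *map, void *block,
			unsigned long *cache_present,
			unsigned int block_base,
			unsigned int start, unsigned int end);
```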
2013-08-29  regmap: rbtree: Reduce number of nodes, take 2  (Lars-Peter Clausen, 1 file, -17/+36)

    Support for reducing the number of nodes and the memory consumption of
    the rbtree cache by allowing small unused holes in a node's register
    cache block was initially added in commit 0c7ed856 ("regmap: Cut down
    on the average # of nodes in the rbtree cache"). But that commit had
    problems, and its effect was reverted again in commit 4e67fb5
    ("regmap: rbtree: Fix overlapping rbnodes.").

    This patch brings back the feature of reducing the average number of
    nodes, which speeds up node look-up while at the same time also
    reducing the memory usage of the rbtree cache. It takes a slightly
    different approach than the original patch, though: it modifies the
    adjacent-node look-up to consider not only nodes that are exactly one
    register to the left or the right of the new register, but any node
    that falls within a certain range around it.

    The range is calculated from how much memory it would take to allocate
    a new node compared to how much memory it takes to add a set of unused
    registers to an existing node. E.g. if a node takes up 24 bytes and
    each register in a block uses 1 byte, the range will be from the
    register address - 24 to the register address + 24. If we find a node
    that falls within this range, it is cheaper, or at worst as expensive,
    to add the register to the existing node and accept a couple of unused
    registers in the node's cache than to allocate a new node.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@linaro.org>
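The break-even distance from the example above, as illustrative arithmetic (names assumed):

```c
/* padding one hole register costs cache_word_size bytes; a whole new
 * node costs sizeof(*rbnode) bytes of bookkeeping */
dist = sizeof(*rbnode) / map->cache_word_size;
/* existing nodes within [reg - dist, reg + dist] are candidates */
```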
2013-08-29  regmap: rbtree: Simplify adjacent node look-up  (Lars-Peter Clausen, 1 file, -20/+19)

    A register which is adjacent to a node will be either left of the
    node's first register or right of its last register. It will not be
    within the node's range, so there is no point in checking, for each
    register cached by the node, whether the new register is next to it.
    It is sufficient to check whether the register comes before the first
    register or after the last register of the node.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@linaro.org>
2013-08-27  regcache-rbtree: Fix reg_stride != 1  (Lars-Peter Clausen, 1 file, -11/+14)

    There are a couple of calculations in regcache_rbtree_sync() and
    regcache_rbtree_node_alloc(), converting between register addresses
    and block indices, which assume that reg_stride is 1. This breaks the
    rbtree cache for configurations which do not use a reg_stride of 1.

    Also rename 'base' in regcache_rbtree_sync() to 'start' to avoid
    confusion with 'base_reg'.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@linaro.org>
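The stride-aware conversions in question have this shape (illustrative; the broken code effectively assumed reg_stride == 1):

```c
/* register address -> index into the node's value block */
idx = (reg - rbnode->base_reg) / map->reg_stride;

/* index into the node's value block -> register address */
reg = rbnode->base_reg + idx * map->reg_stride;
```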
2013-08-21  regmap: rbtree: Fix overlapping rbnodes.  (David Jander, 1 file, -1/+1)

    Avoid overlapping register regions by making the initial blklen of a
    new node 1. If a register write occurs to a yet-uncached register that
    is lower than, but near, an existing node's base_reg, a new node is
    created and its blklen is set to an arbitrary value (sizeof(*rbnode)).
    That may cause this node to overlap with another node. Those nodes
    should be merged, but this merge doesn't happen yet, so this patch at
    least makes the initial blklen small enough to avoid hitting the wrong
    node, which may otherwise lead to severe breakage.

    Signed-off-by: David Jander <david@protonic.nl>
    Signed-off-by: Mark Brown <broonie@linaro.org>
    Cc: stable@vger.kernel.org
2013-06-30  Merge remote-tracking branch 'regmap/topic/cache' into regmap-next  (Mark Brown, 1 file, -14/+48)
2013-06-01  regmap: rbtree: Fixed node range check on sync  (Maarten ter Huurne, 1 file, -2/+0)

    A node starting before the minimum register is no reason to reject it,
    since its end could still be in range. The check for the end already
    exists two lines lower, so we can simply remove the incorrect check.

    Signed-off-by: Maarten ter Huurne <maarten@treewalker.org>
    Signed-off-by: Mark Brown <broonie@linaro.org>
2013-05-23  regmap: regcache: Fixup locking for custom lock callbacks  (Lars-Peter Clausen, 1 file, -2/+2)

    The parameter passed to the regmap lock/unlock callbacks needs to be
    map->lock_arg; regcache passes just map. This works fine in the case
    that no custom locking callbacks are used, since in this case
    map->lock_arg equals map, but it will break when custom locking
    callbacks are used. The issue was introduced in commit 0d4529c5
    ("regmap: make lock/unlock functions customizable") and is fixed by
    this patch.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
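The bug in miniature (a sketch): the callbacks must receive the argument they were registered with, which only coincides with 'map' for the default locking.

```c
map->lock(map->lock_arg);	/* correct: honours a custom lock_arg */
/* map->lock(map);		   latent bug: works only while
 *				   map->lock_arg == map		*/
map->unlock(map->lock_arg);
```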
2013-05-23  regmap: regcache: Fixup locking for custom lock callbacks  (Lars-Peter Clausen, 1 file, -2/+2)

    The parameter passed to the regmap lock/unlock callbacks needs to be
    map->lock_arg; regcache passes just map. This works fine in the case
    that no custom locking callbacks are used, since in this case
    map->lock_arg equals map, but it will break when custom locking
    callbacks are used. The issue was introduced in commit 0d4529c5
    ("regmap: make lock/unlock functions customizable") and is fixed by
    this patch.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-05-12  regmap: rbtree: Use range information to allocate nodes  (Mark Brown, 1 file, -2/+23)

    If range information has been provided, then when we allocate an
    rbnode within a range, allocate the entire range. The goal is to
    minimise the number of reallocations done when combining or extending
    blocks. At present only readability and yes_ranges are taken into
    account; this is expected to cover most cases efficiently.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-05-12  regmap: rbtree: Factor out node allocation  (Mark Brown, 1 file, -14/+27)

    In preparation for being slightly smarter about how we allocate
    memory, factor out the node allocation.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-04-16  Merge tag 'v3.9-rc7' into regmap-cache  (Mark Brown, 1 file, -1/+1)

    Linux 3.9-rc7
2013-03-30  regmap: cache: Factor out block sync  (Mark Brown, 1 file, -42/+6)

    The idea of holding blocks of registers in device format is shared
    between at least the rbtree and lzo cache formats, so split the loop
    that does the sync out of the rbtree code so that optimisations on it
    can be reused.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
    Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
2013-03-30  regmap: cache: Factor out reg_present support from rbtree cache  (Mark Brown, 1 file, -58/+2)

    The idea of maintaining a bitmap of present registers is something
    that can usefully be used by other cache types that maintain blocks of
    cached registers, so move the code out of the rbtree cache and into
    the generic regcache code.

    Refactor the interface slightly as we go to wrap the set-bit and
    enlarge-bitmap operations (since we never do one without the other)
    and make it more robust for reads of uncached registers by bounds
    checking before we look at the bitmap.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
    Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
2013-03-27  regmap: cache: Use raw I/O to sync rbtrees if we can  (Mark Brown, 1 file, -1/+18)

    This will bring no meaningful benefit by itself; it is done as a
    separate commit to aid bisection if there are problems with the
    following commits adding support for coalescing adjacent writes.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-03-26  regmap: Cut down on the average # of nodes in the rbtree cache  (Dimitris Papastamos, 1 file, -1/+69)

    This patch aims to bring down the average number of nodes in the
    rbtree cache and increase the average number of registers per node.
    This should improve general lookup and traversal times. It is achieved
    by setting the minimum size of a block within an rbnode to the size of
    the rbnode itself. This will essentially cache possibly non-existent
    registers, so to combat this scenario we keep a separate bitmap in
    memory which tracks which registers actually exist.

    The memory overhead of this change is likely in the order of ~5-10%,
    possibly less depending on the register file layout. On my test
    system, with a bitmap of ~4300 bits and a relatively sparse register
    layout, the memory requirements for the entire cache did not increase:
    the node count was cut to about 50% of the original, which compensated
    for the bitmap.

    A second patch built on top of this one could look at the ratio
    'sizeof(*rbnode) / map->cache_word_size' in order to suitably adjust
    the block length of each block.

    Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-03-13  regmap: cache Fix regcache-rbtree sync  (Lars-Peter Clausen, 1 file, -1/+1)

    The last register block which falls into the specified range is not
    handled correctly. The formula which calculates the number of
    registers that should be synced is inverted (and off by one): e.g. if
    all registers in that block should be synced, only one is synced, and
    if only one should be synced, all (but one) are synced. To calculate
    the number of registers that need to be synced, we need to subtract
    the number of the first register in the block from the max register
    number and add one. This patch updates the code accordingly.

    The issue was introduced in commit ac8d91c ("regmap: Supply ranges to
    the sync operations").

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
    Cc: stable@vger.kernel.org
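The corrected arithmetic, with assumed names:

```c
/* the inclusive range [first, max] holds max - first + 1 registers;
 * the broken formula was inverted and off by one */
num_regs = max_reg - first_reg_in_block + 1;
```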
2013-03-13  regmap: rbtree Expose total memory consumption in the rbtree debugfs entry  (Dimitris Papastamos, 1 file, -2/+7)

    Provide a feel for how much overhead the rbtree cache adds to the
    game.

    [Slightly reworded output in debugfs -- broonie]

    Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-03-04  regmap: cache: Pass the map rather than the word size when updating values  (Mark Brown, 1 file, -26/+25)

    It's more idiomatic to pass the map structure around, and this means
    we can use other bits of information from the map.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-03-04  regmap: rbtree: Don't bother checking for noop updates  (Mark Brown, 1 file, -5/+0)

    If we're updating a value in place, it's more work to read the value
    and compare it with what we're about to set than it is to just write
    the value into the cache; there are no further operations after
    writing in the code, even though there's an early return here.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2012-04-10  regmap: implement register striding  (Stephen Warren, 1 file, -17/+23)

    regmap_config.reg_stride is introduced. All extant register addresses
    are a multiple of this value. Users of serial-oriented regmap busses
    will typically set this to 1. Users of the MMIO regmap bus will
    typically set this based on the value size of their registers, in
    bytes, so 4 for a 32-bit register.

    Throughout the regmap code, actual register addresses are used.
    Wherever the register address is used to index some array of values,
    the address is divided by the stride to determine the index, or
    vice-versa. Error-checking is added to all entry points for register
    address data to ensure that register addresses actually satisfy the
    specified stride. The MMIO bus ensures that the specified stride is
    large enough for the register size.

    Signed-off-by: Stephen Warren <swarren@nvidia.com>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
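The entry-point validation described above amounts to (a sketch):

```c
/* every register address must be a multiple of the configured stride */
if (reg % map->reg_stride)
	return -EINVAL;
```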
2012-04-10  Merge branches 'regmap-core', 'regmap-mmio' and 'regmap-naming' into regmap-stride  (Mark Brown, 1 file, -2/+2)
2012-04-07  Merge tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap  (Linus Torvalds, 1 file, -1/+7)

    Pull two more small regmap fixes from Mark Brown:
     - Now we have users for it that aren't running Android it turns out
       that regcache_sync_region() is much more useful to drivers if it's
       exported for use by modules. Who knew?
     - Make sure we don't divide by zero when doing debugfs dumps of
       rbtrees, not visible up until now because everything was providing
       at least some cache on startup.

    * tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
      regmap: prevent division by zero in rbtree_show
      regmap: Export regcache_sync_region()
2012-04-06  regmap: introduce fast_io busses, and use a spinlock for them  (Stephen Warren, 1 file, -2/+2)

    Some bus types have very fast IO. For these, acquiring a mutex for
    every IO operation is a significant overhead. Allow busses to indicate
    their IO is fast, and enhance regmap to use a spinlock for those
    busses.

    [Currently limited to native endian registers -- broonie]

    Signed-off-by: Stephen Warren <swarren@nvidia.com>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2012-04-04  regmap: prevent division by zero in rbtree_show  (Stephen Warren, 1 file, -1/+7)

    If there are no nodes in the cache, 'nodes' will be 0, so calculating
    "registers / nodes" will cause a division by zero.

    Signed-off-by: Stephen Warren <swarren@nvidia.com>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
    Cc: stable@vger.kernel.org
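The guard is the obvious one (a sketch of the debugfs average computation; variable names assumed):

```c
/* an empty cache has zero nodes: avoid registers / 0 */
average = nodes ? registers / nodes : 0;
seq_printf(s, "%d nodes, %d registers, average %d registers per node\n",
	   nodes, registers, average);
```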
2012-04-01  regmap: rbtree: Fix register default look-up in sync  (Lars-Peter Clausen, 1 file, -1/+1)

    The code currently passes the register's offset within the current
    block to regcache_lookup_reg. This works fine as long as there is only
    one block, with a base register of 0, but in all other cases it will
    look up the default for the wrong register, which can cause
    unnecessary register writes. This patch fixes it by passing the actual
    register number to regcache_lookup_reg.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
    Cc: <stable@vger.kernel.org>
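The fix in miniature (assumed loop variables): look up defaults by absolute register number, not by block-relative offset.

```c
/* i walks the node's block; base_reg anchors it in the register map */
ret = regcache_lookup_reg(map, rbnode->base_reg + i);	/* correct */
/* ret = regcache_lookup_reg(map, i);	wrong once base_reg != 0 */
```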
2012-03-24  Merge tag 'device-for-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux  (Linus Torvalds, 1 file, -0/+1)

    Pull <linux/device.h> avoidance patches from Paul Gortmaker:
     "Nearly every subsystem has some kind of header with a proto like:

        void foo(struct device *dev);

      and yet there is no reason for most of these guys to care about the
      sub fields within the device struct. This allows us to significantly
      reduce the scope of headers including headers. For this instance, a
      reduction of about 40% is achieved by replacing the include with the
      simple fact that the device is some kind of a struct.

      Unlike the much larger module.h cleanup, this one is simply two
      commits. One to fix the implicit <linux/device.h> users, and then
      one to delete the device.h includes from the linux/include/ dir
      wherever possible."

    * tag 'device-for-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux:
      device.h: audit and cleanup users in main include dir
      device.h: cleanup users outside of linux/include (C files)
2012-03-14  Merge remote-tracking branches 'regmap/topic/patch' and 'regmap/topic/sync' into regmap-next  (Mark Brown, 1 file, -3/+22)
2012-03-11  device.h: cleanup users outside of linux/include (C files)  (Paul Gortmaker, 1 file, -0/+1)

    For files that are actively using linux/device.h, make sure that they
    call it out. This will allow us to clean up some of the implicit uses
    of linux/device.h within include/* without introducing build
    regressions.

    Yes, this was created by "cheating" -- i.e. the headers were cleaned
    up, and then the fallout was found and fixed, and then the two commits
    were reordered. This ensures we don't introduce build regressions into
    the git history.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2012-03-05  regmap: Fix rbtree block base in sync  (Mark Brown, 1 file, -1/+1)

    Otherwise we'll end up running with bogus register numbers.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2012-03-05  regcache: Make sure we sync register 0 in an rbtree cache  (Mark Brown, 1 file, -1/+1)

    Most of the current users have register 0 as a volatile register or
    don't have a register 0, so it's not been apparent that it's not
    getting synced.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2012-02-24  regmap: Supply ranges to the sync operations  (Mark Brown, 1 file, -3/+22)

    In order to allow us to support partial sync operations, add minimum
    and maximum register arguments to the sync operation and update the
    rbtree and lzo caches to use this new information. The LZO
    implementation is obviously not good; we could exit the iteration
    earlier, but there may be room for more wide-reaching optimisation
    there.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2011-11-22  regmap: Fix rbtreee build when not using debugfs  (Mark Brown, 1 file, -1/+10)

    The debugfs functions don't stub themselves out quite so well as might
    be desirable, so provide functions which do do this stubbing.

    Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
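The stubbing pattern (hypothetical function name): compile to a no-op when debugfs is disabled, so callers need no #ifdef of their own.

```c
#ifdef CONFIG_DEBUG_FS
static void rbtree_debugfs_init(struct regmap *map);
#else
static void rbtree_debugfs_init(struct regmap *map)
{
	/* debugfs disabled: nothing to register */
}
#endif
```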
2011-11-21  regmap: Provide debugfs dump of the rbtree cache data  (Mark Brown, 1 file, -0/+49)

    Show the register ranges we have in each rbtree node in debugfs, plus
    some statistics on how big each node is and the total number of nodes.
    It may also be worth collecting data on the ranges of dirty registers,
    to see if there's much mileage in trying to coalesce writes on sync.

    Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>