|
With btrfs-corrupt-block, one can drop one chunk item and mounting
will end up with a panic in btrfs_full_stripe_len().
This doesn't remove the BUG_ON, but instead checks for the condition a bit
earlier, when we find the block group item.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Signed-off-by: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Commit 56244ef151c3cd11 was almost but not quite enough to fix the
reservation math after btrfs_copy_from_user returned partial copies.
Some users are still seeing warnings in btrfs_destroy_inode, and with a
long enough test run I'm able to trigger them as well.
This patch fixes the accounting math again, bringing it much closer to
the way it was before the sectorsize conversion Chandan did. The
problem is accounting for the offset into the page/sector when we do a
partial copy. This one just uses the dirty_sectors variable which
should already be updated properly.
Signed-off-by: Chris Mason <clm@fb.com>
cc: stable@vger.kernel.org # v4.6+
|
|
The new enospc code makes it possible to deadlock if we don't use
FLUSH_LIMIT during reservations inside a transaction. This enforces
the correct flush type to avoid both deadlocks and assertions.
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
|
|
We used to allow you to set FLUSH_ALL and then just wouldn't do things like
commit transactions or wait on ordered extents if we noticed you were in a
transaction. However now that all the flushing for FLUSH_ALL is asynchronous
we've lost the ability to tell, and we could end up deadlocking. So instead use
FLUSH_LIMIT in reserve_metadata_bytes in relocation and then return -EAGAIN if
we error out to preserve the previous behavior. I've also added an ASSERT() to
catch anybody else who tries to do this. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Since we set the reloc control before we've reserved our space for relocation, we
could race with a root being dirtied and not actually have space to do our init
reloc root. So once we've allocated it and set it up, go ahead and make our
reservation before setting the reloc control; that way anybody who tries to
do the reloc root init has space to use. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
This is the case all the time anyway, except for relocation, which could be doing
a reloc root for a non-refcounted root, in which case we'd end up with some
random block rsv rather than the one we have our reservation in. If there isn't
enough space in the block rsv we are trying to steal from we'll BUG() because we
expect there to be space for the orphan to make its reservation. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Traditionally we've calculated the global block rsv by guessing how much of the
metadata used amount was the extent tree, and then taking the data size and
figuring out how large the csum tree would have to be to hold that much data.
This is imprecise and falls down on MIXED file systems as we can't trust the
data used amount. This resulted in failures for xfstests generic/333 because it
creates lots of clones, which explodes out the extent tree. Our global reserve
calculations were woefully inaccurate in this case which meant we got into a
situation where we did not have enough reserved to do our work.
We know we only use the global block rsv for the extent, csum, and root trees,
so just get the bytes used for these trees and use that as the basis of our
global reserve. Since these are not reference counted trees the bytes_used
value will be accurate. This fixed the transaction aborts seen with
generic/333. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Instead of doing fs_info->fs_root in need_async_flush, which may not be set
during recovery when mounting, just pass the root itself in, which makes more
sense as that's what btrfs_calc_reclaim_metadata_size takes.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reported-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We do this check when we start the async reclaimer thread, might as well check
before we kick it off to save us some cycles. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We were doing trace_btrfs_release_reserved_extent() in pin_down_extent which
isn't quite right because we will go through and free that extent later when we
unpin, so it messes up apps that are accounting for the reservation space. We
were also unconditionally doing it in __btrfs_free_reserved_extent(), when we
should only do it if we actually free the reservation instead of pinning the
extent. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When tracing enospc problems on a box with multiple file systems mounted I need
to be able to differentiate between the two file systems. Most of the important
trace points I'm looking at already have an fsid, but the reserved extent trace
points do not, so add that to make it possible to figure out which trace point
belongs to which file system. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We want to track when we're triggering flushing from our reservation code and
what flushing is being done when we start flushing. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We can sometimes drop the reservation we had for our inode, so we need to remove
that amount from to_reserve so that our tracepoint reports a valid amount of
space.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Pinned extents are an important metric to keep track of for enospc.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Our enospc flushing sucks. It is born from a time where we were early
enospc'ing constantly because multiple threads would race in for the same
reservation and randomly starve other ones out. So I came up with this solution
to block any other reservations from happening while one guy tried to flush
stuff to satisfy his reservation. This gives us pretty good correctness, but
completely crap latency.
The solution I've come up with is ticketed reservations. Basically we try to
make our reservation, and if we can't we put a ticket on a list in order and
kick off an async flusher thread. This async flusher thread does the same old
flushing we always did, just asynchronously. As space is freed and added back
to the space_info it checks and sees if we have any tickets that need
satisfying, and adds space to the tickets and wakes up anything we've satisfied.
Once the flusher thread stops making progress it wakes up all the current
tickets and tells them to take a hike.
There is a priority list for things that can't flush, since the async flusher
could do anything and we need to avoid deadlocks. These guys get priority for
having their reservation made, and will still do manual flushing themselves in
case the async flusher isn't running.
This patch gives us significantly better latencies. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
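For readers unfamiliar with the scheme, here is a minimal sketch of the ticketing
idea described above. Everything in it (space_pool, reserve_ticket, reserve_bytes)
is illustrative, not the actual btrfs code:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <linux/types.h>
#include <linux/errno.h>

struct space_pool {				/* hypothetical stand-in for a space_info */
	spinlock_t lock;
	u64 free;				/* bytes currently available */
	struct list_head tickets;		/* waiters, in arrival order */
	struct work_struct flush_work;		/* the async flusher */
};

struct reserve_ticket {
	u64 bytes;				/* bytes this waiter still needs */
	int error;				/* set when the flusher gives up */
	struct list_head list;
	wait_queue_head_t wait;
};

static int reserve_bytes(struct space_pool *pool, u64 bytes)
{
	struct reserve_ticket ticket;

	spin_lock(&pool->lock);
	if (pool->free >= bytes) {		/* fast path: space is there, take it */
		pool->free -= bytes;
		spin_unlock(&pool->lock);
		return 0;
	}

	/* slow path: queue a ticket in order and kick the async flusher */
	ticket.bytes = bytes;
	ticket.error = 0;
	init_waitqueue_head(&ticket.wait);
	list_add_tail(&ticket.list, &pool->tickets);
	queue_work(system_unbound_wq, &pool->flush_work);
	spin_unlock(&pool->lock);

	/*
	 * As the flusher frees space it hands it to the queued tickets in
	 * order and wakes them; once it stops making progress it sets
	 * ->error and wakes everyone up to fail with ENOSPC.
	 */
	wait_event(ticket.wait, ticket.bytes == 0 || ticket.error);
	return ticket.error ? -ENOSPC : 0;
}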
|
|
I'm writing a tool to visualize the enospc system inside btrfs, and I need this
tracepoint in order to keep track of the block groups in the system. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
These were hidden behind enospc_debug, which isn't helpful as they indicate
actual bugs, unlike the rest of the enospc_debug stuff which is really debug
information. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We reserve space for the inode update when we first reserve space for writing to
a file. However there are lots of ways that we can use this reservation and not
have it for subsequent ordered extents. Previously we'd fall through and try to
reserve metadata bytes for this, then we'd just steal the full reservation from
the delalloc_block_rsv, and if that didn't have enough space we'd steal the full
reservation from the global reserve. The problem with this is we can easily
just return ENOSPC and fallback to updating the inode item directly. In the
worst case (assuming 4k nodesize) we'd steal 64KiB from the global reserve if we
fall all the way through; however, if we just fall back and update the inode
directly we'd only steal 4k * BTRFS_PATH_MAX in the worst case, which is 32KiB.
We would have also just added the extent item for the inode, so we likely will
have already cow'ed down most of the way to the leaf containing the inode item,
so more often than not we only need one or two nodesize's worth of
reservations. Given the reservation for the extent itself is also a worst case
we will likely already have space to cover the inode update.
This change will make us behave better in the theoretical worst case, and much
better in the case that we don't have our reservation and cannot reserve more
metadata. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
There are a few races in the metadata reservation stuff. First we add the bytes
to the block_rsv well after we've set the bit on the inode saying that we have
space for it and after we've reserved the bytes. So use the normal
btrfs_block_rsv_add helper for this case. Secondly we can flush delalloc
extents when we try to reserve space for our write, which means that we could
have used up the space for the inode and we wouldn't know because we only check
before the reservation. So instead make sure we are always reserving space for
the inode update, and then if we don't need it release those bytes afterward.
Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
So btrfs_block_rsv_migrate just unconditionally calls block_rsv_migrate_bytes.
Not only this but it unconditionally changes the size of the block_rsv. This
isn't a bug strictly speaking, but it makes truncate block rsv's look funny
because every time we migrate bytes over its size grows, even though we only
want it to be a specific size. So collapse this into one function that takes an
update_size argument, and make truncate and evict not update the size, for
consistency's sake. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
For some reason we're adding bytes_readonly to the space info after we update
the space info with the block group info. This creates a tiny race where we
could over-reserve space because we haven't yet taken out the bytes_readonly
bit. Since we already know this information at the time we call
update_space_info, just pass it along so it can be updated all at once. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
overlay needs the underlying fs to support d_type. Recently I put in a
patch to detect this condition and started failing the mount if the
underlying fs did not support d_type.
But this breaks existing configurations over a kernel upgrade. Those who
are running docker (a partially broken configuration) with xfs not
supporting d_type are surprised that after a kernel upgrade docker does
not run anymore.
https://github.com/docker/docker/issues/22937#issuecomment-229881315
So instead of erroring out, detect broken configuration and warn
about it. This should allow existing docker setups to continue
working after kernel upgrade.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 45aebeaf4f67 ("ovl: Ensure upper filesystem supports d_type")
Cc: <stable@vger.kernel.org> 4.6
|
|
The following test case may result in page table entries with an invalid
CCA field being generated:
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Any page-aligned size will do for the test mapping. */
#define BINDSTACK_SIZE (16 * 4096)

static void *bindstack;
static int sysrqfd;

static void protect_low(int protect)
{
	mprotect(bindstack, BINDSTACK_SIZE, protect);
}

static void sigbus_handler(int signal, siginfo_t * info, void *context)
{
	void *addr = info->si_addr;

	write(sysrqfd, "x", 1);
	printf("sigbus, fault address %p (should not happen, but might)\n",
	       addr);
	abort();
}

static void run_bind_test(void)
{
	unsigned int *p = bindstack;

	p[0] = 0xf001f001;
	write(sysrqfd, "x", 1);

	/* Set trap on access to p[0] */
	protect_low(PROT_NONE);
	write(sysrqfd, "x", 1);

	/* Clear trap on access to p[0] */
	protect_low(PROT_READ | PROT_WRITE | PROT_EXEC);
	write(sysrqfd, "x", 1);

	/* Check the contents of p[0] */
	if (p[0] != 0xf001f001) {
		write(sysrqfd, "x", 1);

		/* Reached, but shouldn't be */
		printf("badness, shouldn't happen but does\n");
		abort();
	}
}

int main(void)
{
	struct sigaction sa;

	sysrqfd = open("/proc/sysrq-trigger", O_WRONLY);

	if (sigprocmask(SIG_BLOCK, NULL, &sa.sa_mask)) {
		perror("sigprocmask");
		return 0;
	}

	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO | SA_NODEFER | SA_RESTART;
	if (sigaction(SIGBUS, &sa, NULL)) {
		perror("sigaction");
		return 0;
	}

	bindstack = mmap(NULL,
			 BINDSTACK_SIZE,
			 PROT_READ | PROT_WRITE | PROT_EXEC,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (bindstack == MAP_FAILED) {
		perror("mmap bindstack");
		return 0;
	}

	printf("bindstack: %p\n", bindstack);

	run_bind_test();
	printf("done\n");
	return 0;
}
There are multiple ingredients for this:
1) PAGE_NONE is defined to _CACHE_CACHABLE_NONCOHERENT, which is CCA 3
on all platforms except SB1 where it's CCA 5.
2) _page_cachable_default must have bits set which are not set in
_CACHE_CACHABLE_NONCOHERENT.
3) Either the defective version of pte_modify for XPA or the standard
version must be in use. However pte_modify for the 36 bit address
space support is not affected.
In that case additional bits in the final CCA mode may generate an invalid
value for the CCA field. On the R10000 system where this was tracked
down, for example, a CCA of 7 has been observed, which is Uncached Accelerated.
Fixed by:
1) Using the proper CCA mode for PAGE_NONE just like for all the other
PAGE_* pte/pmd bits.
2) Fixing the two affected variants of pte_modify.
Further code inspection also shows the same issue to exist in pmd_modify
which would affect huge page systems.
Issue in pte_modify tracked down by Alastair Bridgewater, PAGE_NONE
and pmd_modify issue found by me.
The history of this goes back beyond Linus' git history. Chris Dearman's
commit 351336929ccf222ae38ff0cb7a8dd5fd5c6236a0 ("[MIPS] Allow setting of
the cache attribute at run time.") missed the opportunity to fix this
but it was originally introduced in lmo commit
d523832cf12007b3242e50bb77d0c9e63e0b6518 ("Missing from last commit.")
and 32cc38229ac7538f2346918a09e75413e8861f87 ("New configuration option
CONFIG_MIPS_UNCACHED.")
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Reported-by: Alastair Bridgewater <alastair.bridgewater@gmail.com>
|
|
(Another one for the f_path debacle.)
ltp fcntl33 testcase caused an Oops in selinux_file_send_sigiotask.
The reason is that generic_add_lease() used filp->f_path.dentry->inode
while all the others use file_inode(). This makes a difference for files
opened on overlayfs, since the former will point to the overlay inode and the
latter to the underlying inode.
So generic_add_lease() added the lease to the overlay inode and
generic_delete_lease() removed it from the underlying inode. When the file
was released the lease remained on the overlay inode's lock list, resulting
in use after free.
Reported-by: Eryu Guan <eguan@redhat.com>
Fixes: 4bacc9c9234c ("overlayfs: Make f_path always point to the overlay and f_inode to the underlay")
Cc: <stable@vger.kernel.org>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
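For illustration only (not the exact patch), the shape of the fix is to go through
the file_inode() accessor instead of chasing f_path, so that a file opened through
overlayfs resolves to the inode the rest of the lease code operates on:

#include <linux/fs.h>

/* Illustrative helper, not the real locks.c code. */
static struct inode *lease_inode(struct file *filp)
{
	/* was: filp->f_path.dentry->d_inode -- the overlay inode on overlayfs */
	return file_inode(filp);	/* the underlying inode */
}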
|
|
We're making all reset line users specify whether their lines are
shared with other IP or they operate them exclusively. In this case
the line is exclusively used only by this IP, so use the *_exclusive()
API accordingly.
Acked-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
We're making all reset line users specify whether their lines are
shared with other IP or they operate them exclusively. In this case
the line is exclusively used only by this IP, so use the *_exclusive()
API accordingly.
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
On the STiH410 B2120 development board the MiPHY28lp shares its reset
line with the Synopsys DWC3 SuperSpeed (SS) USB 3.0 Dual-Role-Device
(DRD). New functionality in the reset subsystems forces consumers to
be explicit when requesting shared/exclusive reset lines.
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
- m_start() in fs/namespace.c expects that ns->event is incremented each
time a mount is added to or removed from ns->list.
- umount_tree() removes items from the list but does not increment event
counter, expecting that it's done before the function is called.
- There are some codepaths that call umount_tree() without updating
"event" counter. e.g. from __detach_mounts().
- When this happens m_start may reuse a cached mount structure that no
longer belongs to ns->list (i.e. use after free which usually leads
to infinite loop).
This change fixes the above problem by incrementing the global event counter
before invoking umount_tree().
Change-Id: I622c8e84dcb9fb63542372c5dbf0178ee86bb589
Cc: stable@vger.kernel.org
Signed-off-by: Andrey Ulanov <andreyu@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
v9fs may be used as lower layer of overlayfs and accessing f_path.dentry
can lead to a crash. In this case it's a NULL pointer dereference in
p9_fid_create().
Fix by replacing direct access of file->f_path.dentry with the
file_dentry() accessor, which will always return a native object.
Reported-by: Alessio Igor Bogani <alessioigorbogani@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Tested-by: Alessio Igor Bogani <alessioigorbogani@gmail.com>
Fixes: 4bacc9c9234c ("overlayfs: Make f_path always point to the overlay and f_inode to the underlay")
Cc: <stable@vger.kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
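Sketch of the substitution (hedged; the helper name is made up): file_dentry()
hands back a dentry native to the filesystem that actually implements the file,
even when it was opened through overlayfs:

#include <linux/fs.h>

/* Illustrative helper, not the actual v9fs change. */
static struct dentry *v9fs_native_dentry(struct file *file)
{
	/* was: file->f_path.dentry -- may be an overlayfs dentry */
	return file_dentry(file);
}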
|
|
If the lockd service fails to start up then we need to be sure that the
notifier blocks are not registered, otherwise a subsequent start of the
service could cause the same notifier to be registered twice, leading to
soft lockups.
Signed-off-by: Scott Mayhew <smayhew@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 0751ddf77b6a ("lockd: Register callbacks on the inetaddr_chain...")
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
|
|
The omitted parenthesis prevents the addition operation when the
acpi_penalize_isa_irq() function is called.
Fixes: 103544d86976 ("ACPI,PCI,IRQ: reduce resource requirements")
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
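The precedence pitfall in generic form (this is not the kernel code): '+' binds
tighter than '?:', so without parentheses the sum becomes the condition and the
intended addition never happens:

#include <stdio.h>

int main(void)
{
	int penalty = 100, active = 1;

	/* intended: penalty + (active ? 10 : 1) == 110 */
	int wrong = penalty + active ? 10 : 1;   /* parsed as (penalty + active) ? 10 : 1 == 10 */
	int right = penalty + (active ? 10 : 1); /* 110 */

	printf("wrong=%d right=%d\n", wrong, right);
	return 0;
}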
|
|
Negotiate with userspace filesystems whether they support parallel readdir
and lookup. Disable parallelism by default for fear of breaking fuse
filesystems.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 9902af79c01a ("parallel lookups: actual switch to rwsem")
Fixes: d9b3dbdcfd62 ("fuse: switch to ->iterate_shared()")
|
|
Add the missing unlock before return from function i915_ppgtt_info()
in the error handling case.
Fixes: 1d2ac403ae3b ("drm: Protect dev->filelist with its own mutex")
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1465861320-26221-1-git-send-email-weiyj_lk@163.com
(cherry picked from commit b0212486909de4f239ca9f20d032de1b1f2dc52e)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
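The shape of the fix, as a hedged sketch (not the actual i915_ppgtt_info() code):
any error path taken after the mutex is acquired has to drop it before returning:

#include <linux/mutex.h>

static int do_work(void)
{
	return 0;	/* stand-in for the real work done under the lock */
}

static int show_info(struct mutex *filelist_mutex)
{
	int ret;

	mutex_lock(filelist_mutex);

	ret = do_work();
	if (ret)
		goto out_unlock;	/* previously a bare "return ret;" leaked the lock */

	/* ... more work under the lock ... */

out_unlock:
	mutex_unlock(filelist_mutex);
	return ret;
}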
|
|
Commit d6a9996e84ac ("powerpc/mm: vmalloc abstraction in preparation for
radix") turned kernel memory and IO addresses from #defined constants to
variables initialised at runtime.
On PA6T (pasemi) systems the setup_arch() machine call initialises the
onboard PCI-e root-ports, and uses pci_io_base to do this, which is now
before its value has been set, resulting in a panic early in boot before
console IO is initialised.
Move the pci_io_base initialisation to the same place as vmalloc ranges
are set (hash__early_init_mmu()/radix__early_init_mmu()) - this is the
earliest possible place we can initialise it.
Fixes: d6a9996e84ac ("powerpc/mm: vmalloc abstraction in preparation for radix")
Reported-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Darren Stevens <darren@stevens-zone.net>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Add #ifdef CONFIG_PCI, massage change log slightly]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Fix a compiler warning caused by an uninitialised variable inside the
da9052_group_write() function. Defaulting the value to zero covers
the trivial case.
Signed-off-by: Steve Twiss <stwiss.opensource@diasemi.com>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
When configuring FPS during probe, assuming a DT node is present for
FPS, the code can run into a problem with the switch statements in
max77620_config_fps() and max77620_get_fps_period_reg_value(). Namely,
in the case of chip->chip_id == MAX77620, it will set
fps_[min|max]_period but then fall through to the default switch case
and return -EINVAL. Returning this from max77620_config_fps() will
cause probe to fail.
Signed-off-by: Rhyland Klein <rklein@nvidia.com>
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
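A reduced illustration of the fall-through (hedged; the enum and the period values
here are made up, not the driver's): without the break the MAX77620 case drops
into default and the function returns -EINVAL even though it already did its work:

#include <linux/errno.h>

enum example_chip_id { EXAMPLE_MAX77620, EXAMPLE_MAX20024 };

static int example_get_fps_period_range(enum example_chip_id id,
					 int *min_us, int *max_us)
{
	switch (id) {
	case EXAMPLE_MAX77620:
		*min_us = 40;
		*max_us = 5120;
		break;		/* without this break we fall into default below */
	case EXAMPLE_MAX20024:
		*min_us = 20;
		*max_us = 2560;
		break;
	default:
		return -EINVAL;
	}
	return 0;
}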
|
|
On the STiH410 B2120 development board the ports on the Generic PHY
share their reset lines with each other. New functionality in the
reset subsystems forces consumers to be explicit when requesting
shared/exclusive reset lines.
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
On the STiH410 B2120 development board the MiPHY28lp shares its reset
line with the Synopsys DWC3 SuperSpeed (SS) USB 3.0 Dual-Role-Device
(DRD). New functionality in the reset subsystems forces consumers to
be explicit when requesting shared/exclusive reset lines.
Acked-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
On the STiH410 B2120 development board the ST EHCI IP shares its reset
line with the OHCI IP. New functionality in the reset subsystems forces
consumers to be explicit when requesting shared/exclusive reset lines.
Acked-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
On the STiH410 B2120 development board the ST EHCI IP shares its reset
line with the OHCI IP. New functionality in the reset subsystems forces
consumers to be explicit when requesting shared/exclusive reset lines.
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
Standardise the way inline functions:
devm_reset_control_get_shared_by_index
devm_reset_control_get_exclusive_by_index
... are formatted.
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
Consumers need to be able to specify whether they are requesting an
'exclusive' or 'shared' reset line no matter which API (of_*, devm_*,
etc) they are using. This change allows users of the optional_* API
in particular to specify that their request is for a 'shared' line.
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
Consumers need to be able to specify whether they are requesting an
'exclusive' or 'shared' reset line no matter which API (of_*, devm_*,
etc) they are using. This change allows users of the of_* API in
particular to specify that their request is for a 'shared' line.
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
Phasing out generic reset line requests enables us to make some better
decisions on when and how to (de)assert said lines. If an 'exclusive'
line is requested, we know a device *requires* a reset and that it's
preferable to act upon a request right away. However, if a 'shared'
reset line is requested, we can reasonably assume sure that placing a
device into reset isn't a hard requirement, but probably a measure to
save power and is thus able to cope with not being asserted if another
device is still in use.
In order allow gentle adoption and not to forcing all consumers to
move to the API immediately, causing administration headache between
subsystems, this patch adds some temporary stand-in shim-calls. This
will ease the burden at merge time and allow subsystems to migrate over
to the new API in a more realistic time-frame.
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
We're about to split the current API into two, where consumers will
be forced to be explicit when requesting reset lines. The choice
will be to call either the *_exclusive or the *_shared variant,
depending on whether they can actually tolerate not being asserted
when that request is made.
The new API will look like this once reordered and complete:
reset_control_get_exclusive()
reset_control_get_shared()
reset_control_get_optional_exclusive()
reset_control_get_optional_shared()
of_reset_control_get_exclusive()
of_reset_control_get_shared()
of_reset_control_get_exclusive_by_index()
of_reset_control_get_shared_by_index()
devm_reset_control_get_exclusive()
devm_reset_control_get_shared()
devm_reset_control_get_optional_exclusive()
devm_reset_control_get_optional_shared()
devm_reset_control_get_exclusive_by_index()
devm_reset_control_get_shared_by_index()
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
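A consumer-side sketch of the split (the calls are the ones listed above; the
probe callback and the "core" line name are illustrative):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int example_probe(struct device *dev)
{
	struct reset_control *rst;

	/*
	 * Exclusive: this device is the only user of the line, so it may
	 * be asserted/deasserted on its behalf right away.
	 */
	rst = devm_reset_control_get_exclusive(dev, "core");

	/*
	 * Shared: other IP blocks use the same line; an assert is only a
	 * request, honoured once every user has asked for it.
	 *
	 * rst = devm_reset_control_get_shared(dev, "core");
	 */

	if (IS_ERR(rst))
		return PTR_ERR(rst);

	return reset_control_deassert(rst);
}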
|
|
Per JEDEC Annex L Release 3 the SPD data is:
  Bits 9~5: 00 000 = Function Undefined
            00 001 = Byte addressable energy backed
            00 010 = Block addressed
            00 011 = Byte addressable, no energy backed
            All other codes reserved
  Bits 4~0: 0 0000 = Proprietary interface
            0 0001 = Standard interface 1
            All other codes reserved; see Definitions of Functions
...and per the ACPI 6.1 spec:
byte0: Bits 4~0 (0 or 1)
byte1: Bits 9~5 (1, 2, or 3)
...so a format interface code displayed as 0x301 should be stored in the
nfit as (0x1, 0x3), little-endian.
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Robert Moore <robert.moore@intel.com>
Cc: Robert Elliott <elliott@hpe.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=121161
Fixes: 30ec5fd464d5 ("nfit: fix format interface code byte order per ACPI6.1")
Fixes: 5ad9a7fde07a ("acpi/nfit: Update nfit driver to comply with ACPI 6.1")
Reported-by: Kristin Jacque <kristin.jacque@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
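A stand-alone check of the byte order spelled out above (illustrative; nothing
here is driver code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t ifc = 0x301;			/* format interface code as displayed */
	uint8_t byte0 = ifc & 0xff;		/* 0x01: bits 4~0, standard interface 1 */
	uint8_t byte1 = (ifc >> 8) & 0xff;	/* 0x03: bits 9~5, byte addressable, no energy backed */

	printf("stored in the nfit as (0x%x, 0x%x), little-endian\n", byte0, byte1);
	return 0;
}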
|
|
The SD4 regulator is not registered with the regulator core
framework in probe, as it is not supported by the MAX77620 PMIC.
Remove the SD4 entry from the MAX77620 regulator information list
and check for valid regulator information data before
configuring the FPS source and FPS power up/down period, to avoid a
NULL pointer exception if a regulator is not registered with the core.
Signed-off-by: Venkat Reddy Talla <vreddytalla@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Work around an issue where, when UVD DPM is disabled, the
UVD clock remains high on Polaris10. Manually turn
off the clocks.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|