|
The AES-CTR glue code avoids calling into the blkcipher API for the
tail portion of the walk, by comparing the remainder of walk.nbytes
modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight
into the tail processing block if they are equal. This tail processing
block checks whether nbytes != 0, and does nothing otherwise.
However, in case of an allocation failure in the blkcipher layer, we
may enter this code with walk.nbytes == 0, while nbytes > 0. In this
case, we should not dereference the source and destination pointers,
since they may be NULL. So instead of checking for nbytes != 0, check
for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
non-error conditions.
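A minimal before/after sketch of the check, simplified from the description
above (variable names follow the commit text; surrounding glue code omitted):

  /* before: enters the tail block even when the walk failed and
   * walk.nbytes == 0, so the virt addresses may be NULL */
  if (nbytes) {
          u8 *tsrc = walk.src.virt.addr;
          u8 *tdst = walk.dst.virt.addr;
          /* process the final partial block via tsrc/tdst */
  }

  /* after: a genuine partial block implies nbytes != 0 in the
   * non-error case, and is zero when the walk failed */
  if (walk.nbytes % AES_BLOCK_SIZE) {
          u8 *tsrc = walk.src.virt.addr;
          u8 *tdst = walk.dst.virt.addr;
          /* process the final partial block via tsrc/tdst */
  }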
Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
Cc: stable@vger.kernel.org
Reported-by: xiakaixu <xiakaixu@huawei.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The AES-CTR glue code avoids calling into the blkcipher API for the
tail portion of the walk, by comparing the remainder of walk.nbytes
modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight
into the tail processing block if they are equal. This tail processing
block checks whether nbytes != 0, and does nothing otherwise.
However, in case of an allocation failure in the blkcipher layer, we
may enter this code with walk.nbytes == 0, while nbytes > 0. In this
case, we should not dereference the source and destination pointers,
since they may be NULL. So instead of checking for nbytes != 0, check
for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
non-error conditions.
Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
Cc: stable@vger.kernel.org
Reported-by: xiakaixu <xiakaixu@huawei.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When we need to allocate a temporary blkcipher_walk_next and it
fails, the code is supposed to take the slow path of processing
the data block by block. However, due to an unrelated change
we instead end up dereferencing the NULL pointer.
This patch fixes it by moving the unrelated bsize setting out
of the way so that we enter the slow path as intended.
Fixes: 7607bd8ff03b ("[CRYPTO] blkcipher: Added blkcipher_walk_virt_block")
Cc: stable@vger.kernel.org
Reported-by: xiakaixu <xiakaixu@huawei.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
|
|
The current implementation uses a global per-cpu array to store
data that is used to derive the next IV. This is insecure as
the attacker may change the stored data.
This patch removes all traces of chaining and replaces it with
multiplication of the salt and the sequence number.
Fixes: a10f554fa7e0 ("crypto: echainiv - Add encrypted chain IV...")
Cc: stable@vger.kernel.org
Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When calling .import() on a cryptd ahash_request, the structure members
that describe the child transform in the shash_desc need to be initialized
like they are when calling .init().
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For algorithms that implement IV generators before the crypto ops,
the IV needed for decryption is initially located in req->src
scatterlist, not in req->iv.
Avoid copying the IV into req->iv by modifying the (givdecrypt)
descriptors to load it directly from req->src.
aead_givdecrypt() is no longer needed and goes away.
Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 479bcc7c5b9e ("crypto: caam - Convert authenc to new AEAD interface")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The AEAD code path incorrectly uses the child tfm to track the
cryptd refcnt, and then potentially frees the child tfm.
Fixes: 81760ea6a95a ("crypto: cryptd - Add helpers to check...")
Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
walk.iv is not assigned a value in blkcipher_walk_init, which leaves iv
uninitialized. It is possibly a NULL value (as shown below), which is then
used by aes_p8_encrypt.
This patch moves iv = walk.iv to after blkcipher_walk_virt, which is where
walk.iv is set.
[17856.268050] Unable to handle kernel paging request for data at address 0x00000000
[17856.268212] Faulting instruction address: 0xd000000002ff04bc
7:mon> t
[link register ] d000000002ff47b8 p8_aes_xts_crypt+0x168/0x2a0 [vmx_crypto] (938)
[c000000013b77960] d000000002ff4794 p8_aes_xts_crypt+0x144/0x2a0 [vmx_crypto] (unreliable)
[c000000013b77a70] c000000000544d64 skcipher_decrypt_blkcipher+0x64/0x80
[c000000013b77ac0] d000000003c0175c crypt_convert+0x53c/0x620 [dm_crypt]
[c000000013b77ba0] d000000003c043fc kcryptd_crypt+0x3cc/0x440 [dm_crypt]
[c000000013b77c50] c0000000000f3070 process_one_work+0x1e0/0x590
[c000000013b77ce0] c0000000000f34c8 worker_thread+0xa8/0x660
[c000000013b77d80] c0000000000fc0b0 kthread+0x110/0x130
[c000000013b77e30] c0000000000098f0 ret_from_kernel_thread+0x5c/0x6c
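A minimal sketch of the reordering described above, assuming the usual
blkcipher walk pattern (simplified):

  blkcipher_walk_init(&walk, dst, src, nbytes);
  ret = blkcipher_walk_virt(desc, &walk);  /* this call populates walk.iv */
  iv = walk.iv;                            /* moved here; was read before the call */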
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Increase the supported key sizes for qat_aes_xts: an AES-XTS key
consists of two keys of equal size concatenated.
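Illustratively, the key-size bounds double because an XTS key is two AES
keys back to back (a sketch using the generic AES macros, not necessarily
the driver's own constants):

  .min_keysize = 2 * AES_MIN_KEY_SIZE,  /* 32 bytes: two AES-128 keys */
  .max_keysize = 2 * AES_MAX_KEY_SIZE,  /* 64 bytes: two AES-256 keys */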
Fixes: def14bfaf30d ("crypto: qat - add support for ctr(aes) and xts(aes)")
Cc: stable@vger.kernel.org
Reported-by: Wenqian Yu <wenqian.yu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
We can depend directly on SOC_IMX31 since commit c9ee94965dce
("ARM: imx: deconstruct mxc_rnga initialization").
Since that commit, CONFIG_HW_RANDOM_MXC_RNGA cannot be switched on because
the symbol ARCH_HAS_RNGA is unknown, and mxc-rnga.o can't be built with
ARCH=arm make M=drivers/char/hw_random
Previously, HW_RANDOM_MXC_RNGA required ARCH_HAS_RNGA,
which was based on IMX_HAVE_PLATFORM_MXC_RNGA && ARCH_MXC;
IMX_HAVE_PLATFORM_MXC_RNGA was in turn based on SOC_IMX31.
Fixes: c9ee94965dce ("ARM: imx: deconstruct mxc_rnga initialization")
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
1. fix ctx pointer
Use req_ctx, which is the ctx for the next job that has
been completed in the lanes, instead of rctx, the first
completed job, whose completion could already have been
called and the ctx released.
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
1. fix ctx pointer
Use req_ctx, which is the ctx for the next job that has
been completed in the lanes, instead of rctx, the first
completed job, whose completion could already have been
called and the ctx released.
2. fix digest copy
Use an XMM register to copy the other 16 bytes of the SHA-256
digest, instead of a general-purpose register.
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since 6de62f15b581 ("crypto: algif_hash - Require setkey before
accept(2)"), the AF_ALG interface requires userspace to provide a key
to any algorithm that has a setkey method. However, the non-HMAC
algorithms are not keyed, so setting a key is unnecessary.
Fix this by removing the setkey method from the non-keyed hash
algorithms.
Fixes: 6de62f15b581 ("crypto: algif_hash - Require setkey before accept(2)")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The optimised crc32c implementation depends on VMX (aka. Altivec)
instructions, so the kernel must be built with Altivec support in order
for the crc32c code to build.
Fixes: 6dd7a82cc54e ("crypto: powerpc - Add POWER8 optimised crc32c")
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
To be able to generate shared descriptors for AEAD, the authentication size
needs to be known. However, there is no imposed order for calling the
.setkey and .setauthsize callbacks.
Thus, in case the authentication size is not known at .setkey time, defer
shared descriptor generation until .setauthsize is called.
The authsize != 0 check was incorrectly removed when converting the driver
to the new AEAD interface.
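A minimal sketch of the deferral, assuming a hypothetical context layout
and helper (not the driver's actual code):

  static int aead_setkey(struct crypto_aead *aead, const u8 *key,
                         unsigned int keylen)
  {
          struct my_aead_ctx *ctx = crypto_aead_ctx(aead); /* hypothetical ctx */

          /* split and store the authentication/encryption keys here */

          if (!ctx->authsize)
                  return 0;  /* defer: descriptors are built in .setauthsize */

          return build_shared_descriptors(ctx);  /* hypothetical helper */
  }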
Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 479bcc7c5b9e ("crypto: caam - Convert authenc to new AEAD interface")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There are a few things missed by the conversion to the
new AEAD interface:
1 - echainiv(authenc) encrypt shared descriptor
The shared descriptor is incorrect: due to the order of operations,
at some point in time MATH3 register is being overwritten.
2 - buffer used for echainiv(authenc) encrypt shared descriptor
Encrypt and givencrypt shared descriptors (for AEAD ops) are mutually
exclusive and thus use the same buffer in context state: sh_desc_enc.
However, there's one place missed by s/sh_desc_givenc/sh_desc_enc,
leading to errors when echainiv(authenc(...)) algorithms are used:
DECO: desc idx 14: Header Error. Invalid length or parity, or
certain other problems.
While here, also fix a typo: dma_mapping_error() is checking
for validity of sh_desc_givenc_dma instead of sh_desc_enc_dma.
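A sketch of the dma_mapping_error() part of the fix (field names as in the
commit text; the device pointer name is assumed):

  ctx->sh_desc_enc_dma = dma_map_single(jrdev, ctx->sh_desc_enc,
                                        desc_bytes(ctx->sh_desc_enc),
                                        DMA_TO_DEVICE);
  if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma))  /* was sh_desc_givenc_dma */
          dev_err(jrdev, "unable to map shared descriptor\n");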
Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 479bcc7c5b9e ("crypto: caam - Convert authenc to new AEAD interface")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
On 32-bit (e.g. with m68k-linux-gnu-gcc-4.1):
crypto/sha3_generic.c:27: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:28: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:29: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:29: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:31: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:31: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:33: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:33: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:34: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:34: warning: integer constant is too large for ‘long’ type
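The warnings point at the 64-bit Keccak round constants, which on 32-bit
need an explicit ULL suffix (or u64 cast) so they are not truncated to a
32-bit long. A sketch of that shape (array name assumed, only the first
constants shown):

  static const u64 keccakf_rndc[24] = {
          0x0000000000000001ULL, 0x0000000000008082ULL,
          0x800000000000808aULL, 0x8000000080008000ULL,
          /* remaining round constants likewise carry the ULL suffix */
  };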
Fixes: 53964b9ee63b7075 ("crypto: sha3 - Add SHA-3 hash algorithm")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since commit 63a4cc24867d, bio->bi_rw contains flags in the lower
portion and the op code in the higher portions. This means that
old code that relies on manually setting bi_rw is most likely
going to be broken. Instead of letting that brokenness linger,
rename the member, to force old and out-of-tree code to break
at compile time instead of at runtime.
No intended functional changes in this commit.
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
The original commit missed this function; it needs to be marked as a
write flush.
Cc: Mike Christie <mchristi@redhat.com>
Fixes: e742fc32fcb4 ("target: use bio op accessors")
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
Cleaner than manipulating bio->bi_rw flags directly.
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
Commit abf545484d31 changed it from an 'rw' flags type to the
newer ops based interface, but now we're effectively leaking
some bdev internals to the rest of the kernel. Since we only
care about whether it's a read or a write at that level, just
pass in a bool 'is_write' parameter instead.
Then we can also move op_is_write() and friends back under
CONFIG_BLOCK protection.
Reviewed-by: Mike Christie <mchristi@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
In most cases, EPERM is returned for an immutable inode, and there are only
a few places returning EACCES. I noticed this when running LTP on
overlayfs: setxattr03 failed due to an unexpected EACCES on an immutable
inode.
So convert all EACCES to EPERM for immutable inodes.
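The converted pattern, sketched:

  if (IS_IMMUTABLE(inode))
          return -EPERM;  /* previously -EACCES in a few call sites */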
Acked-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
persistent_ram_zone (prz) structures are allocated by persistent_ram_new(),
which involves vmap() or ioremap(). But they are currently freed by
kfree(). This uses persistent_ram_free() to correct this asymmetric usage.
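A minimal sketch of the symmetry (the zone pointer comes from an earlier
persistent_ram_new() call):

  /* persistent_ram_new() vmap()s or ioremap()s the backing memory,
   * so teardown must undo that mapping as well: */
  persistent_ram_free(prz);  /* was: kfree(prz) */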
Signed-off-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.kw@hitachi.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Cc: Seiji Aguchi <seiji.aguchi.tr@hitachi.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
|
|
Instead of a ramoops-specific node, use a child node of /reserved-memory.
This requires that of_platform_device_create() be explicitly called
for the node, though, since "/reserved-memory" does not have its own
"compatible" property.
Suggested-by: Rob Herring <robh@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Rob Herring <robh@kernel.org>
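A minimal sketch of the explicit instantiation described above (variable
names illustrative; the child node is matched by its "ramoops" compatible):

  struct device_node *np, *child;

  np = of_find_node_by_path("/reserved-memory");
  for_each_child_of_node(np, child)
          if (of_device_is_compatible(child, "ramoops"))
                  of_platform_device_create(child, NULL, NULL);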
|
|
Clean up duplicated expression by replacing it with the equivalent local
variable pdev.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
It will be useful to know the hardware configured BAR size to diagnose
issues with NTB memory windows.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
This script automates testing doorbells, scratchpads and memory windows
for an NTB device. It can be run locally, with the NTB looped
back to the same host, or it can use SSH to remotely control the second host.
In the single host case, the script just needs to be passed two
arguments: a PCI ID for each side of the link. In the two host case
the -r option must be used to specify the remote hostname (which must
be SSH accessible and should probably have ssh-keys exchanged).
A sample run looks like this:
$ sudo ./ntb_test.sh 0000:03:00.1 0000:83:00.1 -p 29
Starting ntb_tool tests...
Running link tests on: 0000:03:00.1 / 0000:83:00.1
Passed
Running link tests on: 0000:83:00.1 / 0000:03:00.1
Passed
Running db tests on: 0000:03:00.1 / 0000:83:00.1
Passed
Running db tests on: 0000:83:00.1 / 0000:03:00.1
Passed
Running spad tests on: 0000:03:00.1 / 0000:83:00.1
Passed
Running spad tests on: 0000:83:00.1 / 0000:03:00.1
Passed
Running mw0 tests on: 0000:03:00.1 / 0000:83:00.1
Passed
Running mw0 tests on: 0000:83:00.1 / 0000:03:00.1
Passed
Running mw1 tests on: 0000:03:00.1 / 0000:83:00.1
Passed
Running mw1 tests on: 0000:83:00.1 / 0000:03:00.1
Passed
Starting ntb_pingpong tests...
Running ping pong tests on: 0000:03:00.1 / 0000:83:00.1
Passed
Starting ntb_perf tests...
Running local perf test without DMA
0: copied 536870912 bytes in 164453 usecs, 3264 MBytes/s
Passed
Running remote perf test without DMA
0: copied 536870912 bytes in 164453 usecs, 3264 MBytes/s
Passed
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Shuah Khan <shuahkh@osg.samsung.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
When the link goes down, the link_is_up flag did not return to
false. This could have caused some subtle corner case bugs
when the link goes up and down quickly.
Once that was fixed, there was found to be a race if the link was
brought down then immediately up. The link_cleanup work would
occasionally be scheduled after the next link up event. This would
cancel the link_work that was supposed to occur and leave ntb_perf
in an unusable state.
To fix this we get rid of the link_cleanup work and put the actions
directly in the link_down event.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
This commit adds a debugfs 'count' file to ntb_pingpong. This is so
testing with ntb_pingpong can be automated beyond just checking the
logs for pong messages.
The count file returns a number which increments every pong. The
counter can be cleared by writing a zero.
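A minimal sketch of such a counter (variable and dentry names assumed):

  static atomic_t pp_count = ATOMIC_INIT(0);

  /* in the pong handler: */
  atomic_inc(&pp_count);

  /* at init time; writing 0 to the file clears the counter: */
  debugfs_create_atomic_t("count", 0600, pp_debugfs_dir, &pp_count);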
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
In order to more successfully script with ntb_tool it's useful to
have a link file to check the link status so that the script
doesn't use the other files until the link is up.
This commit adds a 'link' file to the debugfs directory which reads as a
boolean (Y or N) depending on the link status. Writing to the file
changes the link state using ntb_link_enable or ntb_link_disable.
A 'link_event' file is also provided so an application can block until
the link changes to the desired state. If the user writes a 1, it will
block until the link is up. If the user writes a 0, it will block until
the link is down.
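A minimal sketch of the blocking side of 'link_event' (struct, wait queue,
and helper names assumed; the wait queue is woken from the driver's
link-event callback):

  static int tool_wait_link(struct tool_ctx *tc, bool want_up)
  {
          return wait_event_interruptible(tc->link_wq,
                          !!ntb_link_is_up(tc->ntb, NULL, NULL) == want_up);
  }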
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
In order to make the interface closer to the raw NTB API, this commit
changes memory windows so they are not initialized on link up.
Instead, the 'peer_trans*' debugfs files are introduced. When read,
they return information provided by ntb_mw_get_range. When written,
they create a buffer and initialize the memory window. The
value written is taken as the requested size of the buffer (which
is then rounded for alignment). Writing a value of zero frees the buffer
and tears down the memory window translation. The 'peer_mw*' file is
only created once the memory window translation is set up by the user.
Additionally, it was noticed that the read and write functions for the
'peer_mw*' files should have checked for a NULL pointer.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
Instead of returning immediately with an error when the link is
down, wait for the link to come up (or the user sends a SIGINT).
This is to make scripting ntb_perf easier.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
Instead of having to watch logs, allow the results to be retrieved
by reading back the run file. This file will return "running" when
the test is running and nothing if no tests have been run yet.
It returns one line per thread, and will display an error message if the
corresponding thread returns an error.
With the above change, the pr_info calls that returned the results are
then changed to pr_debug calls.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
This commit accomplishes a few things:
1) Properly prevent multiple sets of threads from running at once using
a mutex. Lots of race issues existed with the thread_cleanup.
2) The mutex allows us to ensure that threads are finished before
tearing down the device or module.
3) Don't use kthread_stop when the threads can exit by themselves, as
this is counter-indicated by the kthread_create documentation. Threads
now wait for kthread_stop to occur.
4) Writing to the run file now blocks until the threads are complete.
The test can then be safely interrupted by a SIGINT.
Also, while I was at it:
5) debugfs_run_write shouldn't return 0 in the early check cases as this
could cause debugfs_run_write to loop undesirably.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
When debugging performance problems, if some issue causes the ntb
hardware to be significantly slower than expected, ntb_perf will
hang requiring a reboot because it only schedules once every 4GB.
Instead, schedule based on jiffies so it will not hang the CPU if
the transfer is slow.
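A minimal sketch of the time-based yield (the interval and variable name
are illustrative):

  unsigned long last_sched = jiffies;

  /* inside the copy loop: reschedule based on elapsed time, not bytes copied */
  if (time_after(jiffies, last_sched + msecs_to_jiffies(100))) {
          schedule();
          last_sched = jiffies;
  }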
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
I'm working on hardware that currently has a limited number of
scratchpad registers and ntb_ndev fails with no clue as to why. I
feel it is better to fail early and provide a reasonable error message
than to fail later on.
The same is done to ntb_perf, but it doesn't currently require enough
spads to actually fail. I've also removed the unused SPAD_MSG and
SPAD_ACK enums so that MAX_SPAD accurately reflects the number of
spads used.
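A minimal sketch of the early check (the message and use of MAX_SPAD are
illustrative):

  if (ntb_spad_count(ntb) < MAX_SPAD) {
          dev_err(&ntb->dev,
                  "not enough scratchpad registers for this driver\n");
          return -EINVAL;
  }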
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
We allocate some memory window buffers when the link comes up, then we
provide debugfs files to read/write each side of the link.
This is useful for debugging the mapping when writing new drivers.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
On my system, dma_alloc_coherent won't produce memory anywhere
near the size of the BAR. So I needed a way to limit this.
It's pretty much copied straight from ntb_transport.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
Currently we only allocate a fixed default number of descriptors for the tx
and rx side. We should dynamically resize it to the number of descriptors
that reside in the transport rings. We should know the number of transmit
descriptors at initialization. We will allocate the default number of
descriptors for receive side and allocate additional ones when we know the
actual max entries for receive.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <allen.hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
On hardware with 32 scratchpad registers the spad field in ntb tool
could chop off the end. The maximum buffer size is increased from
256 to 15 times the number of scratchpads.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
If you tried to write two spads in one line, as per the example:
root@peer# echo '0 0x01010101 1 0x7f7f7f7f' > $DBG_DIR/peer_spad
then the CPU would freeze in an infinite loop.
This wasn't immediately obvious, but 'pos' was not advancing through the
buffer, so after reading the second pair of values, 'pos' would once
again be 3 and it would re-read the second pair of values ad infinitum.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|