author    Linus Torvalds <torvalds@linux-foundation.org> 2019-09-21 10:40:37 -0700
committer Linus Torvalds <torvalds@linux-foundation.org> 2019-09-21 10:40:37 -0700
commit    3e414b5bd28f965fb39b9e9419d877df0cf3111a
tree      5780a87d8e1b436eedeff6a7e6585cda75ddceaa /Documentation
parent    Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
parent    dm: introduce DM_GET_TARGET_VERSION
Merge tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper updates from Mike Snitzer:

 - crypto and DM crypt advances that allow the crypto API to reclaim
   implementation details that do not belong in DM crypt. The wrapper
   template for ESSIV generation that was factored out will also be used
   by fscrypt in the future.

 - Add root hash pkcs#7 signature verification to the DM verity target.

 - Add a new "clone" DM target that allows for efficient remote
   replication of a device.

 - Enhance DM bufio's cache to be tailored to each client based on use.
   Clients that make heavy use of the cache get more of it, and those
   that use less have reduced cache usage.

 - Add a new DM_GET_TARGET_VERSION ioctl to allow userspace to query the
   version number of a DM target (even if the associated module isn't
   yet loaded).

 - Fix invalid memory access in DM zoned target.

 - Fix the max_discard_sectors limit advertised by the DM raid target;
   it was mistakenly storing the limit in bytes rather than sectors.

 - Small optimizations and cleanups in DM writecache target.

 - Various fixes and cleanups in DM core, DM raid1 and space map portion
   of DM persistent data library.

* tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (22 commits)
  dm: introduce DM_GET_TARGET_VERSION
  dm bufio: introduce a global cache replacement
  dm bufio: remove old-style buffer cleanup
  dm bufio: introduce a global queue
  dm bufio: refactor adjust_total_allocated
  dm bufio: call adjust_total_allocated from __link_buffer and __unlink_buffer
  dm: add clone target
  dm raid: fix updating of max_discard_sectors limit
  dm writecache: skip writecache_wait for pmem mode
  dm stats: use struct_size() helper
  dm crypt: omit parsing of the encapsulated cipher
  dm crypt: switch to ESSIV crypto API template
  crypto: essiv - create wrapper template for ESSIV generation
  dm space map common: remove check for impossible sm_find_free() return value
  dm raid1: use struct_size() with kzalloc()
  dm writecache: optimize performance by sorting the blocks for writeback_all
  dm writecache: add unlikely for getting two block with same LBA
  dm writecache: remove unused member pointer in writeback_struct
  dm zoned: fix invalid memory access
  dm verity: add root hash pkcs#7 signature verification
  ...
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/admin-guide/device-mapper/dm-clone.rst  333
-rw-r--r--  Documentation/admin-guide/device-mapper/verity.rst      7
2 files changed, 340 insertions, 0 deletions
diff --git a/Documentation/admin-guide/device-mapper/dm-clone.rst b/Documentation/admin-guide/device-mapper/dm-clone.rst
new file mode 100644
index 000000000000..b43a34c1430a
--- /dev/null
+++ b/Documentation/admin-guide/device-mapper/dm-clone.rst
@@ -0,0 +1,333 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+========
+dm-clone
+========
+
+Introduction
+============
+
+dm-clone is a device mapper target which produces a one-to-one copy of an
+existing, read-only source device into a writable destination device: it
+presents a virtual block device which makes all data appear available
+immediately, and redirects reads and writes accordingly.
+
+The main use case of dm-clone is to clone a potentially remote, high-latency,
+read-only, archival-type block device into a writable, fast, primary-type
+device for low-latency I/O. The cloned device is visible/mountable immediately
+and the copy of the source device to the destination device happens in the
+background, in parallel with user I/O.
+
+For example, one could restore an application backup from a read-only copy,
+accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
+etc.), into a local SSD or NVMe device, and start using the device immediately,
+without waiting for the restore to complete.
+
+When the cloning completes, the dm-clone table can be removed altogether and
+be replaced, e.g., by a linear table mapping directly to the destination
+device (see the Examples section below).
+
+The dm-clone target reuses the metadata library used by the thin-provisioning
+target.
+
+Glossary
+========
+
+ Hydration
+ The process of filling a region of the destination device with data from
+ the same region of the source device, i.e., copying the region from the
+ source to the destination device.
+
+Once a region gets hydrated, we redirect all I/O for it to the destination
+device.
+
+Design
+======
+
+Sub-devices
+-----------
+
+The target is constructed by passing three devices to it (along with other
+parameters detailed later):
+
+1. A source device - the read-only device that gets cloned and is the source
+   of the hydration.
+
+2. A destination device - the destination of the hydration, which will become a
+ clone of the source device.
+
+3. A small metadata device - it records which regions are already valid in the
+ destination device, i.e., which regions have already been hydrated, or have
+ been written to directly, via user I/O.
+
+The size of the destination device must be at least equal to the size of the
+source device.
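+
+For example (device paths illustrative), this requirement can be verified with
+blockdev, which reports device sizes in 512-byte sectors:
+
+ ::
+
+ blockdev --getsz /dev/vg/source
+ blockdev --getsz /dev/vg/dest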
+
+Regions
+-------
+
+dm-clone divides the source and destination devices into fixed-sized regions.
+Regions are the unit of hydration, i.e., the minimum amount of data copied from
+the source to the destination device.
+
+The region size is configurable when you first create the dm-clone device. The
+recommended region size is the same as the file system block size, which
+usually is 4KB. The region size must be a power of two between 8 sectors (4KB)
+and 2097152 sectors (1GB).
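+
+For example (assuming an ext4 file system on the source device, path
+illustrative), the file system block size can be queried to pick a matching
+region size:
+
+ ::
+
+ tune2fs -l /dev/vg/source | grep 'Block size'
+ # "Block size: 4096" corresponds to a region size of 8 sectors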
+
+Reads and writes from/to hydrated regions are serviced from the destination
+device.
+
+A read to a not yet hydrated region is serviced directly from the source device.
+
+A write to a not yet hydrated region is delayed until the corresponding region
+has been hydrated; the hydration of the region starts immediately.
+
+Note that a write request with size equal to region size will skip copying of
+the corresponding region from the source device and overwrite the region of the
+destination device directly.
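+
+For example (device name and offset illustrative), with a 4KB region size, a
+direct, region-aligned 4KB write goes straight to the destination device
+without a copy from the source:
+
+ ::
+
+ dd if=/dev/urandom of=/dev/mapper/clone oflag=direct bs=4k count=1 seek=100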
+
+Discards
+--------
+
+dm-clone interprets a discard request to a range that hasn't been hydrated yet
+as a hint to skip hydration of the regions covered by the request, i.e., it
+skips copying the regions' data from the source to the destination device, and
+only updates its metadata.
+
+If the destination device supports discards, then by default dm-clone will pass
+down discard requests to it.
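+
+For example (device name and range illustrative), the following discard acts
+as a hint to skip the hydration of the regions it covers:
+
+ ::
+
+ blkdiscard --offset 0 --length 1048576 /dev/mapper/clone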
+
+Background Hydration
+--------------------
+
+dm-clone copies continuously from the source to the destination device, until
+the whole device has been copied.
+
+Copying data from the source to the destination device uses bandwidth. The user
+can set a throttle to prevent more than a certain amount of copying from
+occurring at any one time. Moreover, dm-clone takes into account user I/O
+traffic going to the devices and pauses the background hydration when there is
+I/O in-flight.
+
+A message `hydration_threshold <#regions>` can be used to set the maximum
+number of regions being copied concurrently; the default is 1 region.
+
+dm-clone employs dm-kcopyd for copying portions of the source device to the
+destination device. By default, we issue copy requests of size equal to the
+region size. A message `hydration_batch_size <#regions>` can be used to tune the
+size of these copy requests. Increasing the hydration batch size results in
+dm-clone trying to batch together contiguous regions, so we copy the data in
+batches of this many regions.
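+
+For example (device name and values illustrative), the following messages
+allow up to 32 regions to be copied at any one time, in batches of 8
+contiguous regions:
+
+ ::
+
+ dmsetup message clone 0 hydration_threshold 32
+ dmsetup message clone 0 hydration_batch_size 8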
+
+When the hydration of the destination device finishes, a dm event will be sent
+to user space.
+
+Updating on-disk metadata
+-------------------------
+
+On-disk metadata is committed every time a FLUSH or FUA bio is written. If no
+such requests are made then commits will occur every second. This means the
+dm-clone device behaves like a physical disk that has a volatile write cache. If
+power is lost you may lose some recent writes. The metadata should always be
+consistent in spite of any crash.
+
+Target Interface
+================
+
+Constructor
+-----------
+
+ ::
+
+ clone <metadata dev> <destination dev> <source dev> <region size>
+ [<#feature args> [<feature arg>]* [<#core args> [<core arg>]*]]
+
+ ================ ==============================================================
+ metadata dev Fast device holding the persistent metadata
+ destination dev The destination device, where the source will be cloned
+ source dev Read-only device containing the data that gets cloned
+ region size The size of a region in sectors
+
+ #feature args Number of feature arguments passed
+ feature args no_hydration or no_discard_passdown
+
+ #core args An even number of arguments corresponding to key/value pairs
+ passed to dm-clone
+ core args Key/value pairs passed to dm-clone, e.g. `hydration_threshold
+ 256`
+ ================ ==============================================================
+
+Optional feature arguments are:
+
+ ==================== =========================================================
+ no_hydration Create a dm-clone instance with background hydration
+ disabled
+ no_discard_passdown Disable passing down discards to the destination device
+ ==================== =========================================================
+
+Optional core arguments are:
+
+ ================================ ==============================================
+ hydration_threshold <#regions> Maximum number of regions being copied from
+ the source to the destination device at any
+ one time, during background hydration.
+ hydration_batch_size <#regions> During background hydration, try to batch
+ together contiguous regions, so we copy data
+ from the source to the destination device in
+ batches of this many regions.
+ ================================ ==============================================
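+
+For example (device names and length illustrative), the following table uses a
+4KB (8 sector) region size, starts with background hydration disabled and sets
+a hydration threshold of 256 regions:
+
+ ::
+
+ dmsetup create clone --table "0 1048576000 clone $metadata_dev $dest_dev \
+ $source_dev 8 1 no_hydration 2 hydration_threshold 256"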
+
+Status
+------
+
+ ::
+
+ <metadata block size> <#used metadata blocks>/<#total metadata blocks>
+ <region size> <#hydrated regions>/<#total regions> <#hydrating regions>
+ <#feature args> <feature args>* <#core args> <core args>*
+ <clone metadata mode>
+
+ ======================= =======================================================
+ metadata block size Fixed block size for each metadata block in sectors
+ #used metadata blocks Number of metadata blocks used
+ #total metadata blocks Total number of metadata blocks
+ region size Configurable region size for the device in sectors
+ #hydrated regions Number of regions that have finished hydrating
+ #total regions Total number of regions to hydrate
+ #hydrating regions Number of regions currently hydrating
+ #feature args Number of feature arguments to follow
+ feature args Feature arguments, e.g. `no_hydration`
+ #core args Even number of core arguments to follow
+ core args Key/value pairs for tuning the core, e.g.
+ `hydration_threshold 256`
+ clone metadata mode ro if read-only, rw if read-write
+
+ In serious cases where even a read-only mode is deemed
+ unsafe, no further I/O will be permitted and the status
+ will just contain the string 'Fail'. If the metadata
+ mode changes, a dm event will be sent to user space.
+ ======================= =======================================================
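+
+For example (all values illustrative), querying the status of a dm-clone
+device named `clone` might give:
+
+ ::
+
+ dmsetup status clone
+ 0 1048576000 clone 8 72/1024 8 61440/131072000 256 0 2 hydration_threshold 256 rw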
+
+Messages
+--------
+
+ `disable_hydration`
+ Disable the background hydration of the destination device.
+
+ `enable_hydration`
+ Enable the background hydration of the destination device.
+
+ `hydration_threshold <#regions>`
+ Set background hydration threshold.
+
+ `hydration_batch_size <#regions>`
+ Set background hydration batch size.
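+
+For example (device name illustrative), to temporarily pause the background
+hydration of a dm-clone device named `clone`:
+
+ ::
+
+ dmsetup message clone 0 disable_hydration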
+
+Examples
+========
+
+Clone a device containing a file system
+---------------------------------------
+
+1. Create the dm-clone device.
+
+ ::
+
+ dmsetup create clone --table "0 1048576000 clone $metadata_dev $dest_dev \
+ $source_dev 8 1 no_hydration"
+
+2. Mount the device and trim the file system. dm-clone interprets the discards
+   sent by the file system and will not hydrate the unused space.
+
+ ::
+
+ mount /dev/mapper/clone /mnt/cloned-fs
+ fstrim /mnt/cloned-fs
+
+3. Enable background hydration of the destination device.
+
+ ::
+
+ dmsetup message clone 0 enable_hydration
+
+4. When the hydration finishes, we can replace the dm-clone table with a linear
+ table.
+
+ ::
+
+ dmsetup suspend clone
+ dmsetup load clone --table "0 1048576000 linear $dest_dev 0"
+ dmsetup resume clone
+
+ The metadata device is no longer needed and can be safely discarded or reused
+ for other purposes.
+
+Known issues
+============
+
+1. We redirect reads to not-yet-hydrated regions to the source device. If
+ reading the source device has high latency and the user repeatedly reads from
+ the same regions, this behaviour could degrade performance. We should use
+ these reads as hints to hydrate the relevant regions sooner. Currently, we
+ rely on the page cache to cache these regions, so we hopefully don't end up
+ reading them multiple times from the source device.
+
+2. In-core resources, i.e., the bitmaps tracking which regions are hydrated,
+   are not released after the hydration has finished.
+
+3. During background hydration, if we fail to read the source or write to the
+ destination device, we print an error message, but the hydration process
+ continues indefinitely, until it succeeds. We should stop the background
+ hydration after a number of failures and emit a dm event for user space to
+ notice.
+
+Why not...?
+===========
+
+We explored the following alternatives before implementing dm-clone:
+
+1. Use dm-cache with cache size equal to the source device and implement a new
+ cloning policy:
+
+ * The resulting cache device is not a one-to-one mirror of the source device
+ and thus we cannot remove the cache device once cloning completes.
+
+ * dm-cache writes to the source device, which violates our requirement that
+ the source device must be treated as read-only.
+
+ * Caching is semantically different from cloning.
+
+2. Use dm-snapshot with a COW device equal to the source device:
+
+ * dm-snapshot stores its metadata in the COW device, so the resulting device
+ is not a one-to-one mirror of the source device.
+
+ * No background copying mechanism.
+
+ * dm-snapshot needs to commit its metadata whenever a pending exception
+ completes, to ensure snapshot consistency. In the case of cloning, we don't
+ need to be so strict and can rely on committing metadata every time a FLUSH
+ or FUA bio is written, or periodically, like dm-thin and dm-cache do. This
+ improves the performance significantly.
+
+3. Use dm-mirror: The mirror target has a background copying/mirroring
+ mechanism, but it writes to all mirrors, thus violating our requirement that
+ the source device must be treated as read-only.
+
+4. Use dm-thin's external snapshot functionality. This approach is the most
+ promising among all alternatives, as the thinly-provisioned volume is a
+ one-to-one mirror of the source device and handles reads and writes to
+ un-provisioned/not-yet-cloned areas the same way as dm-clone does.
+
+ Still:
+
+ * There is no background copying mechanism, though one could be implemented.
+
+ * Most importantly, we want to support arbitrary block devices as the
+ destination of the cloning process and not restrict ourselves to
+ thinly-provisioned volumes. Thin-provisioning has an inherent metadata
+ overhead, for maintaining the thin volume mappings, which significantly
+ degrades performance.
+
+ Moreover, cloning a device shouldn't force the use of thin-provisioning. On
+ the other hand, if we wish to use thin provisioning, we can just use a thin
+ LV as dm-clone's destination device.
diff --git a/Documentation/admin-guide/device-mapper/verity.rst b/Documentation/admin-guide/device-mapper/verity.rst
index a4d1c1476d72..bb02caa45289 100644
--- a/Documentation/admin-guide/device-mapper/verity.rst
+++ b/Documentation/admin-guide/device-mapper/verity.rst
@@ -125,6 +125,13 @@ check_at_most_once
blocks, and a hash block will not be verified any more after all the data
blocks it covers have been verified anyway.
+root_hash_sig_key_desc <key_description>
+ This is the description of the USER_KEY that the kernel will look up to get
+ the pkcs7 signature of the root hash. The pkcs7 signature is used to validate
+ the root hash during the creation of the device mapper block device.
+ Verification of the root hash depends on the kernel config option
+ DM_VERITY_VERIFY_ROOTHASH_SIG being set.
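+
+ For example (key description, device paths, sizes, and hashes are
+ illustrative; whether the kernel's key search finds the key also depends on
+ the keyring setup), the signature might be loaded as a user key and
+ referenced from the verity table like this::
+
+ keyctl padd user verity_sig @u < roothash.p7s
+ dmsetup create vroot --table "0 2097152 verity 1 /dev/sda1 /dev/sda2 \
+ 4096 4096 262144 1 sha256 $root_hash $salt \
+ 2 root_hash_sig_key_desc verity_sig"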
+
Theory of operation
===================