Diffstat (limited to 'Documentation/media/v4l-drivers')
-rw-r--r--  Documentation/media/v4l-drivers/bttv.rst        |   4
-rw-r--r--  Documentation/media/v4l-drivers/imx.rst         | 107
-rw-r--r--  Documentation/media/v4l-drivers/imx7.rst        | 162
-rw-r--r--  Documentation/media/v4l-drivers/index.rst       |   1
-rw-r--r--  Documentation/media/v4l-drivers/ipu3.rst        | 151
-rw-r--r--  Documentation/media/v4l-drivers/pxa_camera.rst  |   2
-rw-r--r--  Documentation/media/v4l-drivers/qcom_camss.rst  |   2
7 files changed, 385 insertions, 44 deletions
diff --git a/Documentation/media/v4l-drivers/bttv.rst b/Documentation/media/v4l-drivers/bttv.rst
index d72a0f8fd267..f956ee264099 100644
--- a/Documentation/media/v4l-drivers/bttv.rst
+++ b/Documentation/media/v4l-drivers/bttv.rst
@@ -85,7 +85,7 @@ same card listens there is much higher...
For problems with sound: There are a lot of different systems used
for TV sound all over the world. And there are also different chips
which decode the audio signal. Reports about sound problems ("stereo
-does'nt work") are pretty useless unless you include some details
+doesn't work") are pretty useless unless you include some details
about your hardware and the TV sound scheme used in your country (or
at least the country you are living in).
@@ -771,7 +771,7 @@ Identifying:
- Lifeview.com.tw states (Feb. 2002):
"The FlyVideo2000 and FlyVideo2000s product name have renamed to FlyVideo98."
Their Bt8x8 cards are listed as discontinued.
- - Flyvideo 2000S was probably sold as Flyvideo 3000 in some contries(Europe?).
+ - Flyvideo 2000S was probably sold as Flyvideo 3000 in some countries (Europe?).
The new Flyvideo 2000/3000 are SAA7130/SAA7134 based.
"Flyvideo II" had been the name for the 848 cards, nowadays (in Germany)
diff --git a/Documentation/media/v4l-drivers/imx.rst b/Documentation/media/v4l-drivers/imx.rst
index 6922dde4a82b..1d7eb8c7bd5c 100644
--- a/Documentation/media/v4l-drivers/imx.rst
+++ b/Documentation/media/v4l-drivers/imx.rst
@@ -24,12 +24,12 @@ memory. Various dedicated DMA channels exist for both video capture and
display paths. During transfer, the IDMAC is also capable of vertical
image flip, 8x8 block transfer (see IRT description), pixel component
re-ordering (for example UYVY to YUYV) within the same colorspace, and
-even packed <--> planar conversion. It can also perform a simple
-de-interlacing by interleaving even and odd lines during transfer
+packed <--> planar conversion. The IDMAC can also perform a simple
+de-interlacing by interweaving even and odd lines during transfer
(without motion compensation which requires the VDIC).
The CSI is the backend capture unit that interfaces directly with
-camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 busses.
+camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 buses.
The IC handles color-space conversion, resizing (downscaling and
upscaling), horizontal flip, and 90/270 degree rotation operations.
@@ -175,15 +175,21 @@ via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
source pad is routed to a capture device node, with a node name of the
format "ipuX_csiY capture".
-Note that since the IDMAC source pad makes use of an IDMAC channel, it
-can do pixel reordering within the same colorspace. For example, the
-sink pad can take UYVY2X8, but the IDMAC source pad can output YUYV2X8.
-If the sink pad is receiving YUV, the output at the capture device can
-also be converted to a planar YUV format such as YUV420.
-
-It will also perform simple de-interlace without motion compensation,
-which is activated if the sink pad's field type is an interlaced type,
-and the IDMAC source pad field type is set to none.
+Note that since the IDMAC source pad makes use of an IDMAC channel,
+pixel reordering within the same colorspace can be carried out by the
+IDMAC channel. For example, if the CSI sink pad is receiving in UYVY
+order, the capture device linked to the IDMAC source pad can capture
+in YUYV order. Also, if the CSI sink pad is receiving a packed YUV
+format, the capture device can capture a planar YUV format such as
+YUV420.
+
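+As a sketch (assuming the "ipu1_csi0 capture" node is /dev/video4 and the
+CSI sink pad is receiving UYVY), the capture node could be switched to
+YUYV order with v4l2-ctl:
+
+.. code-block:: none
+
+   # swap the Y/U byte order relative to the UYVY input
+   v4l2-ctl -d4 --set-fmt-video=pixelformat=YUYV
+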
+The IDMAC channel at the IDMAC source pad also supports simple
+interweave without motion compensation, which is activated if the source
+pad's field type is sequential top-bottom or bottom-top, and the
+requested capture interface field type is set to interlaced (t-b, b-t,
+or unqualified interlaced). The capture interface will enforce the same
+field order as the source pad field order (interlaced-bt if source pad
+is seq-bt, interlaced-tb if source pad is seq-tb).
This subdev can generate the following event when enabling the second
IDMAC source pad:
@@ -201,7 +207,7 @@ The CSI supports cropping the incoming raw sensor frames. This is
implemented in the ipuX_csiY entities at the sink pad, using the
crop selection subdev API.
-The CSI also supports fixed divide-by-two downscaling indepently in
+The CSI also supports fixed divide-by-two downscaling independently in
width and height. This is implemented in the ipuX_csiY entities at
the sink pad, using the compose selection subdev API.
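+
+For instance, the full frame at the sink pad might be downscaled by two
+in each direction via the compose rectangle (a sketch, assuming an
+ipu1_csi0 pipeline and a media-ctl version that accepts compose
+rectangles in the pad format):
+
+.. code-block:: none
+
+   # 1280x960 at the sink pad, composed down to 640x480
+   media-ctl -V "'ipu1_csi0':0 [fmt:UYVY2X8/1280x960 field:none compose:(0,0)/640x480]"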
@@ -325,14 +331,14 @@ ipuX_vdic
The VDIC carries out motion compensated de-interlacing, with three
motion compensation modes: low, medium, and high motion. The mode is
-specified with the menu control V4L2_CID_DEINTERLACING_MODE. It has
-two sink pads and a single source pad.
+specified with the menu control V4L2_CID_DEINTERLACING_MODE. The VDIC
+has two sink pads and a single source pad.
The direct sink pad receives from an ipuX_csiY direct pad. With this
link the VDIC can only operate in high motion mode.
When the IDMAC sink pad is activated, it receives from an output
-or mem2mem device node. With this pipeline, it can also operate
+or mem2mem device node. With this pipeline, the VDIC can also operate
in low and medium modes, because these modes require receiving
frames from memory buffers. Note that an output or mem2mem device
is not implemented yet, so this sink pad currently has no links.
@@ -345,8 +351,8 @@ ipuX_ic_prp
This is the IC pre-processing entity. It acts as a router, routing
data from its sink pad to one or both of its source pads.
-It has a single sink pad. The sink pad can receive from the ipuX_csiY
-direct pad, or from ipuX_vdic.
+This entity has a single sink pad. The sink pad can receive from the
+ipuX_csiY direct pad, or from ipuX_vdic.
This entity has two source pads. One source pad routes to the
pre-process encode task entity (ipuX_ic_prpenc), the other to the
@@ -369,8 +375,8 @@ color-space conversion, resizing (downscaling and upscaling),
horizontal and vertical flip, and 90/270 degree rotation. Flip
and rotation are provided via standard V4L2 controls.
-Like the ipuX_csiY IDMAC source, it can also perform simple de-interlace
-without motion compensation, and pixel reordering.
+Like the ipuX_csiY IDMAC source, this entity also supports simple
+de-interlace without motion compensation, and pixel reordering.
ipuX_ic_prpvf
-------------
@@ -380,18 +386,18 @@ pad from ipuX_ic_prp, and a single source pad. The source pad is routed
to a capture device node, with a node name of the format
"ipuX_ic_prpvf capture".
-It is identical in operation to ipuX_ic_prpenc, with the same resizing
-and CSC operations and flip/rotation controls. It will receive and
-process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
+This entity is identical in operation to ipuX_ic_prpenc, with the same
+resizing and CSC operations and flip/rotation controls. It will receive
+and process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
receiving from ipuX_vdic.
-Like the ipuX_csiY IDMAC source, it can perform simple de-interlace
-without motion compensation. However, note that if the ipuX_vdic is
-included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic),
-it's not possible to use simple de-interlace in ipuX_ic_prpvf, since
-the ipuX_vdic has already carried out de-interlacing (with motion
-compensation) and therefore the field type output from ipuX_ic_prp can
-only be none.
+Like the ipuX_csiY IDMAC source, this entity supports simple
+interweaving without motion compensation. However, note that if the
+ipuX_vdic is included in the pipeline (ipuX_ic_prp is receiving from
+ipuX_vdic), it's not possible to use interweave in ipuX_ic_prpvf,
+since the ipuX_vdic has already carried out de-interlacing (with
+motion compensation) and therefore the field type output from
+ipuX_vdic can only be none (progressive).
Capture Pipelines
-----------------
@@ -516,10 +522,33 @@ On the SabreAuto, an on-board ADV7180 SD decoder is connected to the
parallel bus input on the internal video mux to IPU1 CSI0.
The following example configures a pipeline to capture from the ADV7180
-video decoder, assuming NTSC 720x480 input signals, with Motion
-Compensated de-interlacing. Pad field types assume the adv7180 outputs
-"interlaced". $outputfmt can be any format supported by the ipu1_ic_prpvf
-entity at its output pad:
+video decoder, assuming NTSC 720x480 input signals, using simple
+interweave (unconverted and without motion compensation). The adv7180
+must output sequential or alternating fields (field type 'seq-bt' for
+NTSC, or 'alternate'):
+
+.. code-block:: none
+
+   # Setup links
+   media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
+   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
+   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
+   # Configure pads
+   media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
+   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480]"
+   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]"
+   # Configure "ipu1_csi0 capture" interface (assumed at /dev/video4)
+   v4l2-ctl -d4 --set-fmt-video=field=interlaced_bt
+
+Streaming can then begin on /dev/video4. The v4l2-ctl tool can also be
+used to select any supported YUV pixelformat on /dev/video4.
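+
+For example, to select the planar YUV420 format mentioned above and
+stream a few frames (a sketch; 'YU12' is the fourcc of the planar
+YUV420 pixelformat):
+
+.. code-block:: none
+
+   v4l2-ctl -d4 --set-fmt-video=pixelformat=YU12
+   v4l2-ctl -d4 --stream-mmap --stream-count=300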
+
+This example configures a pipeline to capture from the ADV7180
+video decoder, assuming PAL 720x576 input signals, with Motion
+Compensated de-interlacing. The adv7180 must output sequential or
+alternating fields (field type 'seq-tb' for PAL, or 'alternate').
+$outputfmt can be any format supported by the ipu1_ic_prpvf entity
+at its output pad:
.. code-block:: none
@@ -531,11 +560,11 @@ entity at its output pad:
media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
# Configure pads
- media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480]"
- media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480 field:interlaced]"
- media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x480 field:interlaced]"
- media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x480 field:none]"
- media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x480 field:none]"
+ media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]"
+ media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x576]"
+ media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]"
+ media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]"
+ media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]"
media-ctl -V "'ipu1_ic_prpvf':1 [fmt:$outputfmt field:none]"
Streaming can then begin on the capture device node at
diff --git a/Documentation/media/v4l-drivers/imx7.rst b/Documentation/media/v4l-drivers/imx7.rst
new file mode 100644
index 000000000000..fe411f65c01c
--- /dev/null
+++ b/Documentation/media/v4l-drivers/imx7.rst
@@ -0,0 +1,162 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+i.MX7 Video Capture Driver
+==========================
+
+Introduction
+------------
+
+Unlike the i.MX5/6 family, the i.MX7 does not contain an Image Processing
+Unit (IPU); because of that, the capabilities to perform operations or
+manipulations on the captured frames are less feature rich.
+
+For image capture the i.MX7 has three units:
+
+- CMOS Sensor Interface (CSI)
+- Video Multiplexer
+- MIPI CSI-2 Receiver
+
+.. code-block:: none
+
+   MIPI Camera Input ---> MIPI CSI-2 ----> |\
+                                           | \
+                                           |  \
+                                           | M |
+                                           | U | ------> CSI ---> Capture
+                                           | X |
+                                           |  /
+   Parallel Camera Input ----------------> | /
+                                           |/
+
+For additional information, please refer to the latest versions of the i.MX7
+reference manual [#f1]_.
+
+Entities
+--------
+
+imx7-mipi-csi2
+--------------
+
+This is the MIPI CSI-2 receiver entity. It has one sink pad to receive the
+pixel data from a MIPI CSI-2 camera sensor. It has one source pad,
+corresponding to virtual channel 0. This module is compliant with a previous
+version of the Samsung D-PHY, and supports two D-PHY Rx data lanes.
+
+csi_mux
+-------
+
+This is the video multiplexer. It has two sink pads to select from either camera
+sensor with a parallel interface or from MIPI CSI-2 virtual channel 0. It has
+a single source pad that routes to the CSI.
+
+csi
+---
+
+The CSI enables the chip to connect directly to an external CMOS image
+sensor. The CSI can interface directly with Parallel and MIPI CSI-2 buses. It
+has a 256 x 64 FIFO to store received image pixel data and embedded DMA
+controllers to transfer data from the FIFO through the AHB bus.
+
+This entity has one sink pad that receives from the csi_mux entity and a single
+source pad that routes video frames directly to memory buffers. This pad is
+routed to a capture device node.
+
+Usage Notes
+-----------
+
+To aid in configuration and for backward compatibility with V4L2 applications
+that access controls only from video device nodes, the capture device interfaces
+inherit controls from the active entities in the current pipeline, so controls
+can be accessed either directly from the subdev or from the active capture
+device interface. For example, the sensor controls are available either from the
+sensor subdevs or from the active capture device.
+
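+As a quick check that this inheritance works, the same controls should
+appear on both nodes (a sketch, assuming the subdev and video device nodes
+shown in the topology listing below):
+
+.. code-block:: none
+
+   # controls exposed by the sensor subdev
+   v4l2-ctl -d /dev/v4l-subdev3 --list-ctrls
+   # the same controls, inherited by the capture device node
+   v4l2-ctl -d /dev/video0 --list-ctrls
+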
+Warp7 with OV2680
+-----------------
+
+On this platform an OV2680 MIPI CSI-2 module is connected to the internal MIPI
+CSI-2 receiver. The following example configures a video capture pipeline with
+an output of 800x600 in the 10-bit BGGR Bayer format:
+
+.. code-block:: none
+
+   # Setup links
+   media-ctl -l "'ov2680 1-0036':0 -> 'imx7-mipi-csis.0':0[1]"
+   media-ctl -l "'imx7-mipi-csis.0':1 -> 'csi_mux':1[1]"
+   media-ctl -l "'csi_mux':2 -> 'csi':0[1]"
+   media-ctl -l "'csi':1 -> 'csi capture':0[1]"
+
+   # Configure pads for pipeline
+   media-ctl -V "'ov2680 1-0036':0 [fmt:SBGGR10_1X10/800x600 field:none]"
+   media-ctl -V "'csi_mux':1 [fmt:SBGGR10_1X10/800x600 field:none]"
+   media-ctl -V "'csi_mux':2 [fmt:SBGGR10_1X10/800x600 field:none]"
+   media-ctl -V "'imx7-mipi-csis.0':0 [fmt:SBGGR10_1X10/800x600 field:none]"
+   media-ctl -V "'csi':0 [fmt:SBGGR10_1X10/800x600 field:none]"
+
+After this, streaming can start. The v4l2-ctl tool can be used to select any
+of the resolutions supported by the sensor.
+
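+For example, to pick the 800x600 mode configured above and capture ten
+frames to a file (a sketch; 'BG10' is the fourcc of the 10-bit BGGR
+Bayer pixelformat):
+
+.. code-block:: none
+
+   v4l2-ctl -d /dev/video0 --set-fmt-video=width=800,height=600,pixelformat=BG10
+   v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=10 --stream-to=frames.raw
+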
+.. code-block:: none
+
+   root@imx7s-warp:~# media-ctl -p
+   Media controller API version 4.17.0
+
+   Media device information
+   ------------------------
+   driver          imx-media
+   model           imx-media
+   serial
+   bus info
+   hw revision     0x0
+   driver version  4.17.0
+
+   Device topology
+   - entity 1: csi (2 pads, 2 links)
+                type V4L2 subdev subtype Unknown flags 0
+                device node name /dev/v4l-subdev0
+        pad0: Sink
+                [fmt:SBGGR10_1X10/800x600 field:none]
+                <- "csi_mux":2 [ENABLED]
+        pad1: Source
+                [fmt:SBGGR10_1X10/800x600 field:none]
+                -> "csi capture":0 [ENABLED]
+
+   - entity 4: csi capture (1 pad, 1 link)
+                type Node subtype V4L flags 0
+                device node name /dev/video0
+        pad0: Sink
+                <- "csi":1 [ENABLED]
+
+   - entity 10: csi_mux (3 pads, 2 links)
+                type V4L2 subdev subtype Unknown flags 0
+                device node name /dev/v4l-subdev1
+        pad0: Sink
+                [fmt:unknown/0x0]
+        pad1: Sink
+                [fmt:unknown/800x600 field:none]
+                <- "imx7-mipi-csis.0":1 [ENABLED]
+        pad2: Source
+                [fmt:unknown/800x600 field:none]
+                -> "csi":0 [ENABLED]
+
+   - entity 14: imx7-mipi-csis.0 (2 pads, 2 links)
+                type V4L2 subdev subtype Unknown flags 0
+                device node name /dev/v4l-subdev2
+        pad0: Sink
+                [fmt:SBGGR10_1X10/800x600 field:none]
+                <- "ov2680 1-0036":0 [ENABLED]
+        pad1: Source
+                [fmt:SBGGR10_1X10/800x600 field:none]
+                -> "csi_mux":1 [ENABLED]
+
+   - entity 17: ov2680 1-0036 (1 pad, 1 link)
+                type V4L2 subdev subtype Sensor flags 0
+                device node name /dev/v4l-subdev3
+        pad0: Source
+                [fmt:SBGGR10_1X10/800x600 field:none]
+                -> "imx7-mipi-csis.0":0 [ENABLED]
+
+
+References
+----------
+
+.. [#f1] https://www.nxp.com/docs/en/reference-manual/IMX7SRM.pdf
diff --git a/Documentation/media/v4l-drivers/index.rst b/Documentation/media/v4l-drivers/index.rst
index f28570ec9e42..dfd4b205937c 100644
--- a/Documentation/media/v4l-drivers/index.rst
+++ b/Documentation/media/v4l-drivers/index.rst
@@ -44,6 +44,7 @@ For more details see the file COPYING in the source distribution of Linux.
davinci-vpbe
fimc
imx
+ imx7
ipu3
ivtv
max2175
diff --git a/Documentation/media/v4l-drivers/ipu3.rst b/Documentation/media/v4l-drivers/ipu3.rst
index f89b51dafadd..c9f780404eee 100644
--- a/Documentation/media/v4l-drivers/ipu3.rst
+++ b/Documentation/media/v4l-drivers/ipu3.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
.. include:: <isonum.txt>
===============================================================
@@ -355,10 +357,157 @@ https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/
The source can be located under hal/intel directory.
+Overview of IPU3 pipeline
+=========================
+
+The IPU3 pipeline has a number of image processing stages, each of which takes
+a set of parameters as input. The major stages of the pipeline are shown here:
+
+.. kernel-render:: DOT
+   :alt: IPU3 ImgU Pipeline
+   :caption: IPU3 ImgU Pipeline Diagram
+
+   digraph "IPU3 ImgU" {
+       node [shape=box]
+       splines="ortho"
+       rankdir="LR"
+
+       a [label="Raw pixels"]
+       b [label="Bayer Downscaling"]
+       c [label="Optical Black Correction"]
+       d [label="Linearization"]
+       e [label="Lens Shading Correction"]
+       f [label="White Balance / Exposure / Focus Apply"]
+       g [label="Bayer Noise Reduction"]
+       h [label="ANR"]
+       i [label="Demosaicing"]
+       j [label="Color Correction Matrix"]
+       k [label="Gamma correction"]
+       l [label="Color Space Conversion"]
+       m [label="Chroma Down Scaling"]
+       n [label="Chromatic Noise Reduction"]
+       o [label="Total Color Correction"]
+       p [label="XNR3"]
+       q [label="TNR"]
+       r [label="DDR"]
+
+       { rank=same; a -> b -> c -> d -> e -> f }
+       { rank=same; g -> h -> i -> j -> k -> l }
+       { rank=same; m -> n -> o -> p -> q -> r }
+
+       a -> g -> m [style=invis, weight=10]
+
+       f -> g
+       l -> m
+   }
+
+The table below presents a description of the above algorithms.
+
+======================== =======================================================
+Name                     Description
+======================== =======================================================
+Optical Black Correction Optical Black Correction block subtracts a pre-defined
+                         value from the respective pixel values to obtain
+                         better image quality.
+                         Defined in :c:type:`ipu3_uapi_obgrid_param`.
+Linearization            This block uses linearization parameters to address
+                         non-linear sensor effects. The lookup table is defined
+                         in :c:type:`ipu3_uapi_isp_lin_vmem_params`.
+SHD                      Lens shading correction is used to correct spatial
+                         non-uniformity of the pixel response due to optical
+                         lens shading. This is done by applying a different
+                         gain for each pixel. The gain, black level etc. are
+                         configured in :c:type:`ipu3_uapi_shd_config_static`.
+BNR                      Bayer noise reduction block removes image noise by
+                         applying a bilateral filter.
+                         See :c:type:`ipu3_uapi_bnr_static_config` for details.
+ANR                      Advanced Noise Reduction is a block-based algorithm
+                         that performs noise reduction in the Bayer domain. The
+                         convolution matrix etc. can be found in
+                         :c:type:`ipu3_uapi_anr_config`.
+DM                       Demosaicing converts raw sensor data in Bayer format
+                         into RGB (Red, Green, Blue) presentation. It then
+                         outputs an estimate of the Y channel for subsequent
+                         stream processing by the firmware. The struct is
+                         defined as :c:type:`ipu3_uapi_dm_config`.
+Color Correction         The Color Correction algorithm transforms the sensor
+                         specific color space to the standard "sRGB" color
+                         space. This is done by applying a 3x3 matrix defined
+                         in :c:type:`ipu3_uapi_ccm_mat_config`.
+Gamma correction         Gamma correction :c:type:`ipu3_uapi_gamma_config` is a
+                         basic non-linear tone mapping correction that is
+                         applied per pixel for each pixel component.
+CSC                      Color space conversion transforms each pixel from the
+                         RGB primary presentation to YUV (Y: brightness,
+                         UV: chrominance) presentation. This is done by
+                         applying a 3x3 matrix defined in
+                         :c:type:`ipu3_uapi_csc_mat_config`.
+CDS                      Chroma down sampling.
+                         After the CSC is performed, chroma down sampling is
+                         applied to down sample the UV plane by a factor of 2
+                         in each direction for YUV 4:2:0, using a 4x2
+                         configurable filter :c:type:`ipu3_uapi_cds_params`.
+CHNR                     Chroma noise reduction.
+                         This block processes only the chrominance pixels and
+                         performs noise reduction by cleaning the high
+                         frequency noise.
+                         See struct :c:type:`ipu3_uapi_yuvp1_chnr_config`.
+TCC                      Total color correction, as defined in struct
+                         :c:type:`ipu3_uapi_yuvp2_tcc_static_config`.
+XNR3                     eXtreme Noise Reduction V3 is the third revision of
+                         the noise reduction algorithm used to improve image
+                         quality. It removes the low frequency noise in the
+                         captured image. Two related structs are defined:
+                         :c:type:`ipu3_uapi_isp_xnr3_params` for ISP data
+                         memory and :c:type:`ipu3_uapi_isp_xnr3_vmem_params`
+                         for vector memory.
+TNR                      The Temporal Noise Reduction block compares successive
+                         frames in time to remove anomalies / noise in pixel
+                         values. :c:type:`ipu3_uapi_isp_tnr3_vmem_params` and
+                         :c:type:`ipu3_uapi_isp_tnr3_params` are defined for
+                         ISP vector and data memory respectively.
+======================== =======================================================
+
+Other often-encountered acronyms not listed in the above table:
+
+   ACC
+      Accelerator cluster
+   AWB_FR
+      Auto white balance filter response statistics
+   BDS
+      Bayer downscaler parameters
+   CCM
+      Color correction matrix coefficients
+   IEFd
+      Image enhancement filter directed
+   Obgrid
+      Optical black level compensation
+   OSYS
+      Output system configuration
+   ROI
+      Region of interest
+   YDS
+      Y down sampling
+   YTM
+      Y-tone mapping
+
+A few stages of the pipeline are executed by firmware running on the ISP
+processor, while many others use a set of fixed hardware blocks, also
+called the accelerator cluster (ACC), to crunch pixel data and produce
+statistics.
+
+User space can choose which of the individual algorithms' ACC parameters,
+as defined by :c:type:`ipu3_uapi_acc_param`, are applied, through the
+struct :c:type:`ipu3_uapi_flags` embedded in the :c:type:`ipu3_uapi_params`
+structure. For parameters that user space leaves disabled, the
+corresponding structs are ignored by the driver, in which case the existing
+configuration of the algorithm is preserved.
+
References
==========
-.. [#f5] include/uapi/linux/intel-ipu3.h
+.. [#f5] drivers/staging/media/ipu3/include/intel-ipu3.h
.. [#f1] https://github.com/intel/nvt
diff --git a/Documentation/media/v4l-drivers/pxa_camera.rst b/Documentation/media/v4l-drivers/pxa_camera.rst
index e4fbca755e1a..ee1bd96b66dd 100644
--- a/Documentation/media/v4l-drivers/pxa_camera.rst
+++ b/Documentation/media/v4l-drivers/pxa_camera.rst
@@ -18,7 +18,7 @@ Global video workflow
---------------------
a) QCI stopped
- Initialy, the QCI interface is stopped.
+ Initially, the QCI interface is stopped.
When a buffer is queued (pxa_videobuf_ops->buf_queue), the QCI starts.
b) QCI started
diff --git a/Documentation/media/v4l-drivers/qcom_camss.rst b/Documentation/media/v4l-drivers/qcom_camss.rst
index 6b15385b12b3..a72e17d09cb7 100644
--- a/Documentation/media/v4l-drivers/qcom_camss.rst
+++ b/Documentation/media/v4l-drivers/qcom_camss.rst
@@ -123,7 +123,7 @@ The considerations to split the driver in this particular way are as follows:
- representing CSIPHY and CSID modules by a separate sub-device for each module
allows to model the hardware links between these modules;
- representing VFE by a separate sub-devices for each input interface allows
- to use the input interfaces concurently and independently as this is
+ the input interfaces to be used concurrently and independently, as this is
supported by the hardware;
- representing ISPIF by a number of sub-devices equal to the number of CSID
sub-devices allows to create linear media controller pipelines when using two