using System;
namespace MagicLeap.Android.NDK.Media
{
    public enum MediaFormat
    {
        /// <summary>
        /// 32 bits RGBA format, 8 bits for each of the four channels.
        ///
        /// Corresponding formats:
        /// - AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM
        /// - Vulkan: VK_FORMAT_R8G8B8A8_UNORM
        /// - OpenGL ES: GL_RGBA8
        /// </summary>
        Rgba8888 = 0x1,

        /// <summary>
        /// 32 bits RGBX format, 8 bits for each of the four channels. The
        /// values of the alpha channel bits are ignored (the image is
        /// assumed to be opaque).
        ///
        /// Corresponding formats:
        /// - AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM
        /// - Vulkan: VK_FORMAT_R8G8B8A8_UNORM
        /// - OpenGL ES: GL_RGB8
        /// </summary>
        Rgbx8888 = 0x2,

        /// <summary>
        /// 24 bits RGB format, 8 bits for each of the three channels.
        ///
        /// Corresponding formats:
        /// - AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM
        /// - Vulkan: VK_FORMAT_R8G8B8_UNORM
        /// - OpenGL ES: GL_RGB8
        /// </summary>
        Rgb888 = 0x3,

        /// <summary>
        /// 16 bits RGB format, 5 bits red, 6 bits green, 5 bits blue.
        ///
        /// Corresponding formats:
        /// - AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM
        /// - Vulkan: VK_FORMAT_R5G6B5_UNORM_PACK16
        /// - OpenGL ES: GL_RGB565
        /// </summary>
        Rgb565 = 0x4,

        /// <summary>
        /// 64 bits RGBA format, 16 bits for each of the four channels, as
        /// half-precision floating point values.
        ///
        /// Corresponding formats:
        /// - AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT
        /// - Vulkan: VK_FORMAT_R16G16B16A16_SFLOAT
        /// - OpenGL ES: GL_RGBA16F
        /// </summary>
        RgbaFp16 = 0x16,
        /// <summary>
        /// Multi-plane Android YUV 420 format.
        ///
        /// This format is a generic YCbCr format, capable of describing any
        /// 4:2:0 chroma-subsampled planar or semiplanar buffer (but not
        /// fully interleaved), with 8 bits per color sample.
        ///
        /// Images in this format are always represented by three separate
        /// buffers of data, one for each color plane. Additional information
        /// always accompanies the buffers, describing the row stride and the
        /// pixel stride for each plane.
        ///
        /// The order of planes is guaranteed such that plane #0 is always Y,
        /// plane #1 is always U (Cb), and plane #2 is always V (Cr).
        ///
        /// The Y-plane is guaranteed not to be interleaved with the U/V
        /// planes (in particular, pixel stride is always 1 in
        /// {@link AImage_getPlanePixelStride}).
        ///
        /// The U/V planes are guaranteed to have the same row stride and
        /// pixel stride; that is, the return values of
        /// {@link AImage_getPlaneRowStride} for the U and V planes are
        /// guaranteed to be the same, and the return values of
        /// {@link AImage_getPlanePixelStride} for the U and V planes are
        /// also guaranteed to be the same.
        ///
        /// For example, the {@link AImage} object can provide data in this
        /// format from an {@link ACameraDevice} through an
        /// {@link AImageReader} object.
        ///
        /// This format is always supported as an output format for the
        /// Android Camera2 NDK API.
        /// </summary>
        /// @see AImage
        /// @see AImageReader
        /// @see ACameraDevice
        Yuv420_888 = 0x23,

        /// <summary>
        /// Compressed JPEG format.
        ///
        /// This format is always supported as an output format for the
        /// Android Camera2 NDK API.
        /// </summary>
        Jpeg = 0x100,
        /// <summary>
        /// 16 bits per pixel raw camera sensor image format, usually
        /// representing a single-channel Bayer-pattern image.
        ///
        /// The layout of the color mosaic, the maximum and minimum encoding
        /// values of the raw pixel data, the color space of the image, and
        /// all other needed information to interpret a raw sensor image must
        /// be queried from the {@link ACameraDevice} which produced the
        /// image.
        /// </summary>
        Raw16 = 0x20,
        /// <summary>
        /// Private raw camera sensor image format, accessible as a buffer.
        ///
        /// AIMAGE_FORMAT_RAW_PRIVATE is a format for unprocessed raw image
        /// buffers coming from an image sensor. The actual structure of
        /// buffers of this format is implementation-dependent.
        /// </summary>
        RawPrivate = 0x24,
        /// <summary>
        /// Android 10-bit raw format.
        ///
        /// This is a single-plane, 10-bit per pixel, densely packed (in each
        /// row), unprocessed format, usually representing raw Bayer-pattern
        /// images coming from an image sensor.
        ///
        /// In an image buffer with this format, starting from the first
        /// pixel of each row, each 4 consecutive pixels are packed into 5
        /// bytes (40 bits). Each one of the first 4 bytes contains the top 8
        /// bits of each pixel. The fifth byte contains the 2 least
        /// significant bits of the 4 pixels. The exact layout for each 4
        /// consecutive pixels is illustrated below (Pi[j] stands for the jth
        /// bit of the ith pixel):
        ///
        /// |         | bit 7 | bit 6 | bit 5 | bit 4 | bit 3 | bit 2 | bit 1 | bit 0 |
        /// |---------|-------|-------|-------|-------|-------|-------|-------|-------|
        /// | Byte 0: | P0[9] | P0[8] | P0[7] | P0[6] | P0[5] | P0[4] | P0[3] | P0[2] |
        /// | Byte 1: | P1[9] | P1[8] | P1[7] | P1[6] | P1[5] | P1[4] | P1[3] | P1[2] |
        /// | Byte 2: | P2[9] | P2[8] | P2[7] | P2[6] | P2[5] | P2[4] | P2[3] | P2[2] |
        /// | Byte 3: | P3[9] | P3[8] | P3[7] | P3[6] | P3[5] | P3[4] | P3[3] | P3[2] |
        /// | Byte 4: | P3[1] | P3[0] | P2[1] | P2[0] | P1[1] | P1[0] | P0[1] | P0[0] |
        ///
        /// This format assumes
        ///
        /// size = row stride * height
        ///
        /// where the row stride is in bytes, not pixels.
        ///
        /// Since this is a densely packed format, the pixel stride is always
        /// 0. The application must use the pixel data layout defined in the
        /// table above to access each row's data. When the row stride is
        /// equal to width * (10 / 8), there will be no padding bytes at the
        /// end of each row, and the entire image data is densely packed.
        /// When the stride is larger than width * (10 / 8), padding bytes
        /// will be present at the end of each row.
        ///
        /// For example, the {@link AImage} object can provide data in this
        /// format from an {@link ACameraDevice} (if supported) through an
        /// {@link AImageReader} object. The number of planes returned by
        /// {@link AImage_getNumberOfPlanes} will always be 1. The pixel
        /// stride is undefined ({@link AImage_getPlanePixelStride} will
        /// return {@link AMEDIA_ERROR_UNSUPPORTED}), and
        /// {@link AImage_getPlaneRowStride} describes the vertical
        /// neighboring pixel distance (in bytes) between adjacent rows.
        /// </summary>
        /// @see AImage
        /// @see AImageReader
        /// @see ACameraDevice
        Raw10 = 0x25,
        /// <summary>
        /// Android 12-bit raw format.
        ///
        /// This is a single-plane, 12-bit per pixel, densely packed (in each
        /// row), unprocessed format, usually representing raw Bayer-pattern
        /// images coming from an image sensor.
        ///
        /// In an image buffer with this format, starting from the first
        /// pixel of each row, each two consecutive pixels are packed into 3
        /// bytes (24 bits). The first and second bytes contain the top 8
        /// bits of the first and second pixels. The third byte contains the
        /// 4 least significant bits of the two pixels. The exact layout for
        /// each two consecutive pixels is illustrated below (Pi[j] stands
        /// for the jth bit of the ith pixel):
        ///
        /// |         | bit 7  | bit 6  | bit 5  | bit 4  | bit 3  | bit 2  | bit 1  | bit 0  |
        /// |---------|--------|--------|--------|--------|--------|--------|--------|--------|
        /// | Byte 0: | P0[11] | P0[10] | P0[9]  | P0[8]  | P0[7]  | P0[6]  | P0[5]  | P0[4]  |
        /// | Byte 1: | P1[11] | P1[10] | P1[9]  | P1[8]  | P1[7]  | P1[6]  | P1[5]  | P1[4]  |
        /// | Byte 2: | P1[3]  | P1[2]  | P1[1]  | P1[0]  | P0[3]  | P0[2]  | P0[1]  | P0[0]  |
        ///
        /// This format assumes
        ///
        /// size = row stride * height
        ///
        /// where the row stride is in bytes, not pixels.
        ///
        /// Since this is a densely packed format, the pixel stride is always
        /// 0. The application must use the pixel data layout defined in the
        /// table above to access each row's data. When the row stride is
        /// equal to width * (12 / 8), there will be no padding bytes at the
        /// end of each row, and the entire image data is densely packed.
        /// When the stride is larger than width * (12 / 8), padding bytes
        /// will be present at the end of each row.
        ///
        /// For example, the {@link AImage} object can provide data in this
        /// format from an {@link ACameraDevice} (if supported) through an
        /// {@link AImageReader} object. The number of planes returned by
        /// {@link AImage_getNumberOfPlanes} will always be 1. The pixel
        /// stride is undefined ({@link AImage_getPlanePixelStride} will
        /// return {@link AMEDIA_ERROR_UNSUPPORTED}), and
        /// {@link AImage_getPlaneRowStride} describes the vertical
        /// neighboring pixel distance (in bytes) between adjacent rows.
        /// </summary>
        /// @see AImage
        /// @see AImageReader
        /// @see ACameraDevice
        Raw12 = 0x26,
        /// <summary>
        /// Android dense depth image format.
        ///
        /// Each pixel is 16 bits, representing a depth ranging measurement
        /// from a depth camera or similar sensor. The 16-bit sample consists
        /// of a confidence value and the actual ranging measurement.
        ///
        /// The confidence value is an estimate of correctness for this
        /// sample. It is encoded in the 3 most significant bits of the
        /// sample, with a value of 0 representing 100% confidence, a value
        /// of 1 representing 0% confidence, a value of 2 representing 1/7, a
        /// value of 3 representing 2/7, and so on.
        ///
        /// As an example, the following sample extracts the range and
        /// confidence from the first pixel of a DEPTH16-format
        /// {@link AImage}, and converts the confidence to a floating-point
        /// value between 0 and 1.f inclusive, with 1.f representing maximum
        /// confidence:
        ///
        /// <code>
        /// uint16_t* data;
        /// int dataLength;
        /// AImage_getPlaneData(image, 0, (uint8_t**)&data, &dataLength);
        /// uint16_t depthSample = data[0];
        /// uint16_t depthRange = (depthSample & 0x1FFF);
        /// uint16_t depthConfidence = ((depthSample >> 13) & 0x7);
        /// float depthPercentage = depthConfidence == 0 ? 1.f : (depthConfidence - 1) / 7.f;
        /// </code>
        ///
        /// This format assumes
        ///
        /// size = stride * height
        ///
        /// where the stride is in bytes, not pixels.
        ///
        /// When produced by a camera, the units for the range are
        /// millimeters.
        /// </summary>
        Depth16 = 0x44363159,
        /// <summary>
        /// Android sparse depth point cloud format.
        ///
        /// A variable-length list of 3D points plus a confidence value, with
        /// each point represented by four floats: first the X, Y, Z position
        /// coordinates, and then the confidence value.
        ///
        /// The number of points is (size of the buffer in bytes) / 16.
        ///
        /// The coordinate system and units of the position values depend on
        /// the source of the point cloud data. The confidence value is
        /// between 0.f and 1.f, inclusive, with 0 representing 0% confidence
        /// and 1.f representing 100% confidence in the measured position
        /// values.
        ///
        /// As an example, the following code extracts the first depth point
        /// in a DEPTH_POINT_CLOUD format {@link AImage}:
        ///
        /// <code>
        /// float* data;
        /// int dataLength;
        /// AImage_getPlaneData(image, 0, (uint8_t**)&data, &dataLength);
        /// float x = data[0];
        /// float y = data[1];
        /// float z = data[2];
        /// float confidence = data[3];
        /// </code>
        /// </summary>
        DepthPointCloud = 0x101,
        /// <summary>
        /// Android private opaque image format.
        ///
        /// The choices of the actual format and pixel data layout are
        /// entirely up to the device-specific and framework internal
        /// implementations, and may vary depending on use cases even for the
        /// same device. Also note that the contents of these buffers are not
        /// directly accessible to the application.
        ///
        /// When an {@link AImage} of this format is obtained from an
        /// {@link AImageReader}, the {@link AImage_getNumberOfPlanes} method
        /// will return zero.
        /// </summary>
        Private = 0x22,
        /// <summary>
        /// Android Y8 format.
        ///
        /// Y8 is a planar format comprised of a WxH Y plane only, with each
        /// pixel being represented by 8 bits.
        ///
        /// This format assumes
        ///
        /// size = stride * height
        ///
        /// For example, the {@link AImage} object can provide data in this
        /// format from an {@link ACameraDevice} (if supported) through an
        /// {@link AImageReader} object. The number of planes returned by
        /// {@link AImage_getNumberOfPlanes} will always be 1. The pixel
        /// stride returned by {@link AImage_getPlanePixelStride} will always
        /// be 1, and {@link AImage_getPlaneRowStride} describes the vertical
        /// neighboring pixel distance (in bytes) between adjacent rows.
        /// </summary>
        Y8 = 0x20203859,
        /// <summary>
        /// Compressed HEIC format.
        ///
        /// This format defines the HEIC brand of High Efficiency Image File
        /// Format as described in ISO/IEC 23008-12.
        /// </summary>
        Heic = 0x48454946,
        /// <summary>
        /// Depth augmented compressed JPEG format.
        ///
        /// JPEG compressed main image along with XMP-embedded depth metadata
        /// following ISO 16684-1:2011(E).
        /// </summary>
        DepthJpeg = 0x69656963,
    }
}