
    Namespace ManagedCuda.NPP

    Classes

    JPEGCompression

    The JPEG standard defines a flow of level shift, DCT and quantization for the forward JPEG transform, and inverse level shift, IDCT and de-quantization for the inverse JPEG transform. This group contains both the forward and inverse functions.

    NPPException

    Exception thrown in the NPP library if a native NPP function returns a negative error code.

    NPPImage_16sC1

    NPPImage_16sC2

    NPPImage_16sC3

    NPPImage_16sC4

    NPPImage_16scC1

    NPPImage_16scC2

    NPPImage_16scC3

    NPPImage_16scC4

    NPPImage_16uC1

    NPPImage_16uC2

    NPPImage_16uC3

    NPPImage_16uC4

    NPPImage_32fC1

    NPPImage_32fC3

    NPPImage_32fC4

    NPPImage_32fcC1

    NPPImage_32fcC2

    NPPImage_32fcC3

    NPPImage_32fcC4

    NPPImage_32sC1

    NPPImage_32sC3

    NPPImage_32sC4

    NPPImage_32scC1

    NPPImage_32scC2

    NPPImage_32scC3

    NPPImage_32scC4

    NPPImage_32uC1

    NPPImage_32uC4

    NPPImage_8sC1

    NPPImage_8sC2

    NPPImage_8sC3

    NPPImage_8sC4

    NPPImage_8uC1

    NPPImage_8uC2

    NPPImage_8uC3

    NPPImage_8uC4

    NPPImageBase

    Abstract base class for derived NPP typed images.

    NPPNativeMethods

    C# Wrapper-Methods for NPP functions defined in npp.h, nppversion.h, nppcore.h, nppi.h, npps.h, nppdefs.h

    NPPNativeMethods.NPPCore

    nppcore.h

    NPPNativeMethods.NPPi

    nppi.h

    NPPNativeMethods.NPPi.Abs

    Absolute value of each pixel value in an image.

    NPPNativeMethods.NPPi.AbsDiff

    Pixel by pixel absolute difference between two images.

    NPPNativeMethods.NPPi.AbsDiffConst

    Determines absolute difference between each pixel of an image and a constant value.

    NPPNativeMethods.NPPi.Add

    Pixel by pixel addition of two images.

    NPPNativeMethods.NPPi.AddConst

    Adds a constant value to each pixel of an image.

    NPPNativeMethods.NPPi.AddProduct

    Pixel by pixel addition of product of pixels from two source images to floating point pixel values of destination image.

    NPPNativeMethods.NPPi.AddSquare

    Pixel by pixel addition of squared pixels from source image to floating point pixel values of destination image.

    NPPNativeMethods.NPPi.AddWeighted

    Pixel by pixel addition of alpha weighted pixel values from a source image to floating point pixel values of destination image.

    NPPNativeMethods.NPPi.AffinTransforms

    Affine warping, affine transform calculation. Affine warping of an image is the transform of image pixel positions, defined by the following formulas:

    X_new = C00 * x + C01 * y + C02

    Y_new = C10 * x + C11 * y + C12

    C = [ C00 C01 C02 ; C10 C11 C12 ]

    That is, any pixel with coordinates (X_new, Y_new) in the transformed image is sourced from coordinates (x, y) in the original image. The mapping C is completely specified by the 6 values Cij, i = 0..1, j = 0..2. The transform maps parallel lines to parallel lines and preserves ratios of distances of points to lines. Implementation specific properties are discussed in each function's documentation.
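
    As an illustration of the mapping above, a minimal C sketch computing the source coordinates for one pixel position from a user supplied 2x3 coefficient array (the function and parameter names here are illustrative, not part of NPP or ManagedCuda):

    #include <stdio.h>

    /* Apply the affine mapping described above:
       (X_new, Y_new) = (C00*x + C01*y + C02, C10*x + C11*y + C12). */
    static void affine_map(const double C[2][3], double x, double y,
                           double *xNew, double *yNew)
    {
        *xNew = C[0][0] * x + C[0][1] * y + C[0][2];
        *yNew = C[1][0] * x + C[1][1] * y + C[1][2];
    }

    int main(void)
    {
        /* Example coefficients: scale by 2 and translate by (10, 20). */
        const double C[2][3] = { { 2.0, 0.0, 10.0 }, { 0.0, 2.0, 20.0 } };
        double xNew, yNew;
        affine_map(C, 3.0, 4.0, &xNew, &yNew);
        printf("(3, 4) -> (%.1f, %.1f)\n", xNew, yNew); /* prints (16.0, 28.0) */
        return 0;
    }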

    NPPNativeMethods.NPPi.AlphaComp

    Composite two images using alpha opacity values contained in each image.

    NPPNativeMethods.NPPi.AlphaCompConst

    Composite two images using constant alpha values.

    NPPNativeMethods.NPPi.AlphaPremul

    Premultiplies image pixels by image alpha opacity values.

    NPPNativeMethods.NPPi.AlphaPremulConst

    Premultiplies pixels of an image using a constant alpha value.

    NPPNativeMethods.NPPi.And

    Pixel by pixel logical and of images.

    NPPNativeMethods.NPPi.AndConst

    Pixel by pixel logical and of an image with a constant.

    NPPNativeMethods.NPPi.AverageError

    Primitives for computing the average error between two images.

    Given two images Src1 and Src2, both with width W and height H, the average error is the average of the absolute differences between corresponding pixels.

    If the image is in complex format, the absolute value is used for computation.

    NPPNativeMethods.NPPi.AverageRelativeError

    Primitives for computing the average relative error between two images.

    If the image is in complex format, the absolute value is used for computation.

    NPPNativeMethods.NPPi.BGRToCbYCr

    BGR To CbYCr Conversion

    NPPNativeMethods.NPPi.BGRToHLS

    BGR to HLS

    NPPNativeMethods.NPPi.BGRToLab

    BGR to LAB

    NPPNativeMethods.NPPi.BGRToYCbCr

    BGR to YCbCr Conversion

    NPPNativeMethods.NPPi.BGRToYCrCb

    BGR to YCrCb Conversion

    NPPNativeMethods.NPPi.BGRToYUV

    BGR to YUV color conversion.

    Here is how NPP converts gamma corrected RGB or BGR to YUV.

    For digital RGB values in the range [0..255], Y has the range [0..255], U varies in the range [-112..+112], and V in the range [-157..+157].

    To fit in the range of [0..255], a constant value of 128 is added to computed U and V values, and V is then saturated.

    Npp32f nY = 0.299F * R + 0.587F * G + 0.114F * B;

    Npp32f nU = (0.492F * ((Npp32f)nB - nY)) + 128.0F;

    Npp32f nV = (0.877F * ((Npp32f)nR - nY)) + 128.0F;

    if (nV > 255.0F) nV = 255.0F;
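
    Putting the formulas above together, a minimal C sketch of the per-pixel conversion might look as follows. Clamping U (and Y) in addition to V is an assumption made for this sketch; the snippet above only shows the clamp for V, and the typedefs are local stand-ins for the NPP types:

    typedef float Npp32f;        /* NPP's 32-bit float type (sketch-local typedef) */
    typedef unsigned char Npp8u; /* NPP's 8-bit unsigned type (sketch-local typedef) */

    static Npp8u clamp_255(Npp32f v)
    {
        if (v < 0.0F)   v = 0.0F;
        if (v > 255.0F) v = 255.0F;
        return (Npp8u)v;
    }

    /* Gamma corrected BGR -> YUV for one pixel, following the formulas above. */
    static void bgr_to_yuv_pixel(Npp8u B, Npp8u G, Npp8u R,
                                 Npp8u *Y, Npp8u *U, Npp8u *V)
    {
        Npp32f nY = 0.299F * R + 0.587F * G + 0.114F * B;
        Npp32f nU = (0.492F * ((Npp32f)B - nY)) + 128.0F;
        Npp32f nV = (0.877F * ((Npp32f)R - nY)) + 128.0F;
        *Y = clamp_255(nY);
        *U = clamp_255(nU); /* assumption: U saturated like V */
        *V = clamp_255(nV);
    }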

    NPPNativeMethods.NPPi.BGRToYUV420

    BGR To YUV420 Conversion

    NPPNativeMethods.NPPi.BitDepthConversion

    Convert bit-depth up and down.

    The integer conversion methods do not involve any scaling. Conversions that reduce bit-depth saturate values exceeding the reduced range to the range's maximum/minimum value. When converting from floating-point values to integer values, a rounding mode can be specified. After rounding to integer values the values get saturated to the destination data type's range.
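
    For example, a down-conversion from 16-bit signed to 8-bit unsigned data, as described above, involves no scaling and simply saturates values to the destination range; a rough C sketch (illustrative only, not the library code):

    #include <stdint.h>

    /* Convert a 16-bit signed sample to 8-bit unsigned without scaling:
       values outside [0, 255] are saturated to the destination range. */
    static uint8_t convert_16s_to_8u(int16_t src)
    {
        if (src < 0)   return 0;
        if (src > 255) return 255;
        return (uint8_t)src;
    }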

    NPPNativeMethods.NPPi.CbYCrToBGR

    CbYCrToBGR Conversion

    NPPNativeMethods.NPPi.CbYCrToRGB

    CbYCr to RGB Conversion

    NPPNativeMethods.NPPi.ColorDebayer

    Grayscale Color Filter Array to RGB Color Debayer conversion. Generates one RGB color pixel for every grayscale source pixel.

    Source and destination images must have even width and height. Missing pixel colors are generated using bilinear interpolation with chroma correlation of generated green values (eInterpolation MUST be set to 0). eGrid allows the user to specify the Bayer grid registration position at source image location oSrcROI.x, oSrcROI.y relative to pSrc. Possible registration positions are:

    BGGR    RGGB    GBRG    GRBG

    B G     R G     G B     G R
    G R     G B     R G     B G

    If it becomes necessary to access source pixels outside source image then the source image borders are mirrored.

    Here is how the algorithm works. R, G, and B base pixels from the source image are used unmodified. To generate R values for those G pixels, the average of R(x - 1, y) and R(x + 1, y) or R(x, y - 1) and R(x, y + 1) is used depending on whether the left and right or top and bottom pixels are R base pixels. To generate B values for those G pixels, the same algorithm is used using nearest B values. For an R base pixel, if there are no B values in the upper, lower, left, or right adjacent pixels then B is the average of B values in the 4 diagonal (G base) pixels. The same algorithm is used using R values to generate the R value of a B base pixel. Chroma correlation is applied to generated G values only, for a B base pixel G(x - 1, y) and G(x + 1, y) are averaged or G(x, y - 1) and G(x, y + 1) are averaged depending on whether the absolute difference between B(x, y) and the average of B(x - 2, y) and B(x + 2, y) is smaller than the absolute difference between B(x, y) and the average of B(x, y - 2) and B(x, y + 2). For an R base pixel the same algorithm is used testing against the surrounding R values at those offsets. If the horizontal and vertical differences are the same at one of those pixels then the average of the four left, right, upper and lower G values is used instead.
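
    The chroma correlation rule for green values can be read as the following C sketch for a B base pixel (an illustrative rendering of the description above, on a single channel Bayer image, ignoring the border mirroring; names are not NPP API):

    /* Interpolate the missing G value at a B base pixel (x, y). */
    static float green_at_blue(const unsigned char *bayer, int step, int x, int y)
    {
        float b     = (float)bayer[y * step + x];
        float avgBH = 0.5f * ((float)bayer[y * step + (x - 2)] + (float)bayer[y * step + (x + 2)]);
        float avgBV = 0.5f * ((float)bayer[(y - 2) * step + x] + (float)bayer[(y + 2) * step + x]);
        float gH    = 0.5f * ((float)bayer[y * step + (x - 1)] + (float)bayer[y * step + (x + 1)]);
        float gV    = 0.5f * ((float)bayer[(y - 1) * step + x] + (float)bayer[(y + 1) * step + x]);
        float diffH = (b > avgBH) ? b - avgBH : avgBH - b;  /* |B - horizontal B average| */
        float diffV = (b > avgBV) ? b - avgBV : avgBV - b;  /* |B - vertical B average|   */

        if (diffH < diffV) return gH;   /* left/right G neighbours correlate better */
        if (diffV < diffH) return gV;   /* up/down G neighbours correlate better    */
        return 0.5f * (gH + gV);        /* equal: average of all four G neighbours  */
    }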

    NPPNativeMethods.NPPi.ColorLUT

    Perform image color processing using members of various types of color look up tables.

    NPPNativeMethods.NPPi.ColorLUTCubic

    Perform image color processing using linear interpolation between members of various types of color look up tables.

    NPPNativeMethods.NPPi.ColorLUTLinear

    Perform image color processing using linear interpolation between members of various types of color look up tables.

    NPPNativeMethods.NPPi.ColorLUTPalette

    Perform image color processing using various types of bit range restricted palette color look up tables.

    NPPNativeMethods.NPPi.ColorLUTTrilinear

    Perform image color processing using 3D trilinear interpolation between members of various types of color look up tables.

    NPPNativeMethods.NPPi.ColorProcessing

    Color manipulation functions.

    NPPNativeMethods.NPPi.ColorToGray

    RGB Color to Gray conversion using user supplied conversion coefficients.

    Here is how NPP converts gamma corrected RGB Color to Gray using user supplied conversion coefficients.

    nGray =  aCoeffs[0] * R + aCoeffs[1] * G + aCoeffs[2] * B; 

    NPPNativeMethods.NPPi.ColorTwist

    Perform color twist pixel processing. Color twist consists of applying the following formula to each image pixel using coefficients from the user supplied color twist host matrix array as follows where dst[x] and src[x] represent destination pixel and source pixel channel or plane x.

    dst[0] = aTwist[0][0] * src[0] + aTwist[0][1] * src[1] + aTwist[0][2] * src[2] + aTwist[0][3]

    dst[1] = aTwist[1][0] * src[0] + aTwist[1][1] * src[1] + aTwist[1][2] * src[2] + aTwist[1][3]

    dst[2] = aTwist[2][0] * src[0] + aTwist[2][1] * src[1] + aTwist[2][2] * src[2] + aTwist[2][3]
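
    A direct C rendering of these three equations, applied to one 3-channel pixel with a user supplied 3x4 twist matrix (a sketch; the helper name is illustrative):

    /* Apply the 3x4 color twist matrix aTwist to one source pixel src[3],
       producing dst[3], exactly as in the formulas above. */
    static void color_twist_pixel(const float aTwist[3][4],
                                  const float src[3], float dst[3])
    {
        for (int c = 0; c < 3; ++c)
        {
            dst[c] = aTwist[c][0] * src[0]
                   + aTwist[c][1] * src[1]
                   + aTwist[c][2] * src[2]
                   + aTwist[c][3];
        }
    }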

    NPPNativeMethods.NPPi.ColorTwistBatch

    Perform color twist pixel batch processing. Color twist consists of applying the following formula to each image pixel using coefficients from one or more user supplied color twist device memory matrix arrays, where dst[x] and src[x] represent destination pixel and source pixel channel or plane x. The full sized coefficient matrix should be sent for all pixel channel sizes; the function will process the appropriate coefficients and channels for the corresponding pixel size. ColorTwistBatch generally takes the same parameter list as ColorTwist except that there is a list of N instances of those parameters (N > 1) and that list is passed in device memory; the matrix pointers referenced for each image in the batch also need to point to device memory matrix values. A convenient data structure is provided that allows for easy initialization of the parameter lists. The only restriction on these functions is that there is one single ROI which is applied respectively to each image in the batch. The primary purpose of this function is to provide improved performance for batches of smaller images as long as GPU resources are available. Therefore it is recommended that the function not be used for very large images as there may not be resources available for processing several large images simultaneously.

    NPPNativeMethods.NPPi.Compare

    Compare the pixels of two images and create a binary result image. In case of multi-channel image types, the condition must be fulfilled for all channels, otherwise the comparison is considered false.

    The "binary" result image is of type 8u_C1. False is represented by 0, true by NPP_MAX_8U.

    NPPNativeMethods.NPPi.CompColorKey

    Composition color key

    NPPNativeMethods.NPPi.ComplexImageMorphology

    Complex image morphological operations.

    NPPNativeMethods.NPPi.CompressionDCT

    Image compression primitives.

    NPPNativeMethods.NPPi.Convolution

    General purpose 2D convolution filters.

    NPPNativeMethods.NPPi.CopyConstBorder

    Methods for copying images and padding borders with a constant, user-specifiable color.

    NPPNativeMethods.NPPi.CopyReplicateBorder

    Methods for copying images and padding borders with replicates of the nearest source image pixel color.

    NPPNativeMethods.NPPi.CopySubpix

    Functions for copying linearly interpolated images using source image subpixel coordinates

    NPPNativeMethods.NPPi.CopyWrapBorder

    Methods for copying images and padding borders with wrapped replications of the source image pixel colors.

    NPPNativeMethods.NPPi.CountInRange

    Primitives for computing the number of pixels that fall into the specified intensity range. The lower bound and the upper bound are inclusive.

    NPPNativeMethods.NPPi.Dilate3x3Border

    Dilation using a 3x3 mask with the anchor at its center pixel with border control.

    If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    NPPNativeMethods.NPPi.DilationWithBorderControl

    Dilation with border control. Dilation computes the output pixel as the maximum pixel value of the pixels under the mask. Pixels whose corresponding mask values are zero do not participate in the maximum search.

    If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    NPPNativeMethods.NPPi.Div

    Pixel by pixel division of two images.

    NPPNativeMethods.NPPi.DivConst

    Divides each pixel of an image by a constant value.

    NPPNativeMethods.NPPi.DivRound

    Pixel by pixel division of two images using result rounding modes.

    NPPNativeMethods.NPPi.DotProd

    Primitives for computing the dot product of two images.

    NPPNativeMethods.NPPi.Dup

    Functions for duplicating a single channel image in a multiple channel image.

    NPPNativeMethods.NPPi.Erode3x3Border

    Erosion using a 3x3 mask with the anchor at its center pixel with border control.

    If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    NPPNativeMethods.NPPi.ErosionWithBorderControl

    Erosion computes the output pixel as the minimum pixel value of the pixels under the mask. Pixels whose corresponding mask values are zero do not participate in the minimum search.

    If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    NPPNativeMethods.NPPi.Exp

    Exponential value of each pixel in an image.

    NPPNativeMethods.NPPi.FilterBilateralGaussBorder

    Filters the image using a bilateral Gaussian filter kernel with border control:

    If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    For this filter the anchor point is always the central element of the kernel.

    Coefficients of the bilateral filter kernel depend on their position in the kernel and on the value of some source image pixels overlayed by the filter kernel.

    Only source image pixels with both coordinates divisible by nDistanceBetweenSrcPixels are used in calculations. The value of an output pixel d is

    d = ( sum over h = -nRadius..nRadius, w = -nRadius..nRadius of W1(h,w) * W2(h,w) * S(h,w) ) / ( sum over the same h, w range of W1(h,w) * W2(h,w) )

    where h and w are the corresponding kernel width and height indexes, S(h,w) is the value of the source image pixel overlayed by filter kernel position (h,w), W1(h,w) is func(nValSquareSigma, (S(h,w) - S(0,0))) where S(0,0) is the value of the source image pixel at the center of the kernel, W2(h,w) is func(nPosSquareSigma, sqrt(h*h + w*w)), and func is the following formula:

    func(S, I) = exp(-I^2 / (2.0F * S^2))

    Currently only the NPP_BORDER_REPLICATE border type operations are supported.
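
    The weighting scheme above can be written out directly; the following C sketch computes one output pixel for a single channel float image (illustrative only; border handling and nDistanceBetweenSrcPixels are omitted, and the helper names are not NPP API):

    #include <math.h>

    /* func(S, I) = exp(-I^2 / (2.0F * S^2)) from the formula above. */
    static float w_func(float S, float I)
    {
        return expf(-(I * I) / (2.0f * S * S));
    }

    /* One output pixel of the bilateral Gaussian filter described above. */
    static float bilateral_pixel(const float *src, int step, int x, int y,
                                 int nRadius,
                                 float nValSquareSigma, float nPosSquareSigma)
    {
        float center = src[y * step + x];   /* S(0,0) */
        float num = 0.0f, den = 0.0f;
        for (int h = -nRadius; h <= nRadius; ++h)
        {
            for (int w = -nRadius; w <= nRadius; ++w)
            {
                float s  = src[(y + h) * step + (x + w)];               /* S(h,w)  */
                float w1 = w_func(nValSquareSigma, s - center);         /* W1(h,w) */
                float w2 = w_func(nPosSquareSigma,
                                  sqrtf((float)(h * h + w * w)));       /* W2(h,w) */
                num += w1 * w2 * s;
                den += w1 * w2;
            }
        }
        return num / den;
    }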

    NPPNativeMethods.NPPi.FilterBorder

    General purpose 2D convolution filter with border control.

    Pixels under the mask are multiplied by the respective weights in the mask and the results are summed. Before writing the result pixel the sum is scaled back via division by nDivisor. If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    Currently only the Replicate and Mirror border type operations are supported.
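
    The mask-and-divisor scheme described above amounts to the following per-pixel computation (a C sketch for a single channel 8-bit image with an integer kernel; border handling is omitted and the anchor/orientation convention is simplified, so names and conventions here are assumptions, not the NPP implementation):

    /* One output pixel of a general 2D convolution: multiply the pixels under
       the mask by the kernel weights, sum, then scale back by nDivisor. */
    static unsigned char filter_pixel(const unsigned char *src, int srcStep,
                                      int x, int y,
                                      const int *kernel, int kw, int kh,
                                      int anchorX, int anchorY, int nDivisor)
    {
        int sum = 0;
        for (int j = 0; j < kh; ++j)
            for (int i = 0; i < kw; ++i)
            {
                int sx = x + i - anchorX;
                int sy = y + j - anchorY;
                sum += kernel[j * kw + i] * src[sy * srcStep + sx];
            }
        sum /= nDivisor;
        if (sum < 0)   sum = 0;     /* saturate to the 8-bit range */
        if (sum > 255) sum = 255;
        return (unsigned char)sum;
    }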

    NPPNativeMethods.NPPi.FilterBorder32f

    General purpose 2D convolution filter using floating-point weights with border control.

    Pixels under the mask are multiplied by the respective weights in the mask and the results are summed. Before writing the result pixel the sum is scaled back via division by nDivisor. If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    Currently only the Replicate and Mirror border type operations are supported.

    NPPNativeMethods.NPPi.FilterCannyBorder

    Performs Canny edge detection on a single channel 8-bit grayscale image and outputs a single channel 8-bit image consisting of 0x00 and 0xFF values with 0xFF representing edge pixels. The algorithm consists of three phases. The first phase generates two output images consisting of a single channel 16-bit signed image containing magnitude values and a single channel 32-bit floating point image containing the angular direction of those magnitude values. This phase is accomplished by calling the appropriate GradientVectorBorder filter function based on the filter type, filter mask size, and norm type requested. The next phase uses those magnitude and direction images to suppress non-maximum magnitude values which are lower than the values of either of its two nearest neighbors in the same direction as the test magnitude pixel in the 3x3 surrounding magnitude pixel neighborhood. This phase outputs a new magnitude image with non-maximum pixel values suppressed. Finally, in the third phase, the new magnitude image is passed through a hysteresis threshold filter that filters out any magnitude values that are not connected to another edge magnitude value. In this phase, any magnitude value above the high threshold value is automatically accepted, any magnitude value below the low threshold value is automatically rejected. For magnitude values that lie between the low and high threshold, values are only accepted if one of their two neighbors in the same direction in the 3x3 neighborhood around them lies above the low threshold value. In other words, if they are connected to an active edge. J. Canny recommends that the ratio of high to low threshold limit be in the range two or three to one, based on predicted signal-to-noise ratios. The final output of the third phase consists of a single channel 8-bit unsigned image of 0x00 and 0xFF values based on whether they are accepted or rejected during threshold testing.

    Currently only the NPP_BORDER_REPLICATE border type operation is supported. Borderless output can be accomplished by using a larger source image than the destination and adjusting oSrcSize and oSrcOffset parameters accordingly.

    NPPNativeMethods.NPPi.FilterGaussBorder

    If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image.

    Currently only the NPP_BORDER_REPLICATE and NPP_BORDER_MIRROR border type operations are supported.

    Filters the image using a Gaussian filter kernel:

    1/16 2/16 1/16

    2/16 4/16 2/16

    1/16 2/16 1/16

    or

    2/571 7/571 12/571 7/571 2/571

    7/571 31/571 52/571 31/571 7/571

    12/571 52/571 127/571 52/571 12/571

    7/571 31/571 52/571 31/571 7/571

    2/571 7/571 12/571 7/571 2/571

    NPPNativeMethods.NPPi.FilterGaussPyramid

    Filters the image using a separable Gaussian filter kernel with user supplied floating point coefficients, with downsampling or upsampling and border control. If the downsampling or upsampling rate is equivalent to an integer value then unnecessary source pixels are just skipped. If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image. Currently only the Replicate and Mirror border type operations are supported.

    NPPNativeMethods.NPPi.FilterHarrisCornersBorder

    Performs Harris Corner detection on a single channel 8-bit grayscale image and outputs a single channel 32-bit floating point image consisting of the corner response at each pixel of the image. The algorithm consists of two phases. The first phase generates the floating point product of XX, YY, and XY gradients at each pixel in the image. The type of gradient used is controlled by the eFilterType and eMaskSize parameters. The second phase averages those products over a window of either 3x3 or 5x5 pixels around the center pixel then generates the Harris corner response at that pixel which is output in the destination image. The Harris response value is determined as H = ((XX * YY - XY * XY) - (nK * ((XX + YY) * (XX + YY)))) * nScale.

    Currently only the NPP_BORDER_REPLICATE border type operation is supported. Borderless output can be accomplished by using a larger source image than the destination and adjusting oSrcSize and oSrcOffset parameters accordingly.
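
    The response formula itself is simple to evaluate once the window-averaged gradient products are available; a minimal C sketch (function name chosen for illustration):

    /* Harris corner response from the averaged gradient products XX, YY, XY,
       following H = ((XX*YY - XY*XY) - (nK*((XX+YY)*(XX+YY)))) * nScale. */
    static float harris_response(float XX, float YY, float XY,
                                 float nK, float nScale)
    {
        float det   = XX * YY - XY * XY;
        float trace = XX + YY;
        return (det - nK * (trace * trace)) * nScale;
    }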

    NPPNativeMethods.NPPi.FilterHoughLine

    Extracts Hough lines from a single channel 8-bit binarized (0, 255) source feature (canny edges, etc.) image and outputs a list of lines in point polar format representing the length (rho) and angle (theta) of each line from the origin of the normal to the line using the formula rho = x cos(theta) + y sin(theta). The level of discretization, nDelta, is specified as an input parameter. The performance and effectiveness of this function highly depends on this parameter, with higher performance for larger numbers and more detailed results for lower numbers. Also, lines are not guaranteed to be added to the pDeviceLines list in the same order from one call to the next. However, all of the same lines will still be generated as long as nMaxLineCount is set large enough so that they all can fit in the list. Lines in point polar format can be converted back to cartesian lines as sketched below.
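
    A small C sketch of the standard point polar to cartesian relationship implied by rho = x cos(theta) + y sin(theta): the point (rho cos(theta), rho sin(theta)) lies on the line and (-sin(theta), cos(theta)) is its direction. The half length and end point choice are arbitrary illustrations, not NPP's exact formula:

    #include <math.h>

    /* Convert a line in point polar form (rho, theta) into two cartesian end
       points by walking +/-halfLength along the line direction from the point
       closest to the origin. */
    static void polar_line_to_cartesian(float rho, float theta, float halfLength,
                                        float *x0, float *y0, float *x1, float *y1)
    {
        float c = cosf(theta), s = sinf(theta);
        float footX = rho * c, footY = rho * s;   /* closest point to the origin */
        *x0 = footX - halfLength * s;  *y0 = footY + halfLength * c;
        *x1 = footX + halfLength * s;  *y1 = footY - halfLength * c;
    }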

    NPPNativeMethods.NPPi.FilterScharrHorizBorder

    Filters the image using a horizontal Scharr filter kernel with border control:

    3 10 3

    0 0 0

    -3 -10 -3

    NPPNativeMethods.NPPi.FilterScharrVertBorder

    Filters the image using a vertical Scharr filter kernel with border control:

    3 10 3

    0 0 0

    -3 -10 -3

    NPPNativeMethods.NPPi.FilterSobelCrossBorder

    Filters the image using a second cross derivative Sobel filter kernel with border control.

    NPPNativeMethods.NPPi.FilterSobelHorizBorder

    Filters the image using a horizontal Sobel filter kernel with border control.

    NPPNativeMethods.NPPi.FilterSobelHorizSecondBorder

    Filters the image using a second derivative, horizontal Sobel filter kernel with border control.

    NPPNativeMethods.NPPi.FilterSobelVertBorder

    Filters the image using a vertical Sobel filter kernel with border control.

    NPPNativeMethods.NPPi.FilterSobelVertSecondBorder

    Filters the image using a second derivative, vertical Sobel filter kernel with border control.

    NPPNativeMethods.NPPi.FilterThresholdAdaptiveBoxBorder

    Computes the average pixel values of the pixels under a square mask with border control. If any portion of the mask overlaps the source image boundary the requested border type operation is applied to all mask pixels which fall outside of the source image. Once the neighborhood average around a source pixel is determined the source pixel is compared to the average - nDelta and if the source pixel is greater than that average the corresponding destination pixel is set to nValGT, otherwise nValLE. Currently only the NPP_BORDER_REPLICATE border type operation is supported.
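
    The decision rule reduces to a single comparison per pixel; this C sketch reads "that average" above as the box average minus nDelta, which is an assumption of this illustration:

    /* Adaptive box threshold decision for one pixel: compare the source pixel
       against (neighbourhood average - nDelta); output nValGT if greater,
       otherwise nValLE. */
    static unsigned char adaptive_threshold_pixel(unsigned char srcPixel,
                                                  float boxAverage, float nDelta,
                                                  unsigned char nValGT,
                                                  unsigned char nValLE)
    {
        return ((float)srcPixel > boxAverage - nDelta) ? nValGT : nValLE;
    }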

    NPPNativeMethods.NPPi.FilterWienerBorder

    Noise removal filtering of an image using an adaptive Wiener filter with border control.

    Pixels under the source mask are used to generate statistics about the local neighborhood which are then used to control the amount of adaptive noise filtering locally applied.

    Currently only the NPP_BORDER_REPLICATE border type operation is supported.

    NPPNativeMethods.NPPi.FixedFilters

    Fixed filters perform linear filtering operations (i.e. convolutions) with predefined kernels of fixed sizes.

    NPPNativeMethods.NPPi.Gamma

    Gamma correction

    NPPNativeMethods.NPPi.GeometricTransforms

    Routines manipulating an image's geometry.

    NPPNativeMethods.NPPi.GetResizeRect

    Returns NppiRect which represents the offset and size of the destination rectangle that would be generated by resizing the source NppiRect by the requested scale factors and shifts.

    NPPNativeMethods.NPPi.GradientColorToGray

    RGB Color to Gray Gradient conversion using user selected gradient distance method.

    NPPNativeMethods.NPPi.GradientVectorPrewittBorder

    RGB Color to Prewitt Gradient Vector conversion using user selected fixed mask size and gradient distance method. Functions support up to 4 optional single channel output gradient vectors, X (vertical), Y (horizontal), magnitude, and angle with user selectable distance methods. Output for a particular vector is disabled by supplying a NULL pointer for that vector. X and Y gradient vectors are in cartesian form in the destination data type.
    Magnitude vectors are in polar gradient form in the destination data type; angle is always in floating point polar gradient format. Only fixed mask sizes of 3x3 are supported. Only nppiNormL1 (sum) and nppiNormL2 (sqrt of sum of squares) distance methods are currently supported.

    Currently only the NPP_BORDER_REPLICATE border type operation is supported. Borderless output can be accomplished by using a larger source image than the destination and adjusting oSrcSize and oSrcOffset parameters accordingly.

    For the C1R versions of the function the pDstMag output image value for L1 normalization consists of the absolute value of the pDstX value plus the absolute value of the pDstY value at that particular image pixel location. For the C1R versions of the function the pDstMag output image value for L2 normalization consists of the square root of the pDstX value squared plus the pDstY value squared at that particular image pixel location. For the C1R versions of the function the pDstAngle output image value consists of the arctangent (atan2) of the pDstY value and the pDstX value at that particular image pixel location.

    For the C3C1R versions of the function, regardless of the selected normalization method, the L2 normalization value is first determined for each of the pDstX and pDstY values for each source channel, then the largest L2 normalization value (largest gradient) is used to select which of the 3 pDstX channel values are output to the pDstX image or pDstY channel values are output to the pDstY image. For the C3C1R versions of the function the pDstMag output image value for L1 normalization consists of the same technique used for the C1R version for each source image channel. Then the largest L2 normalization value is again used to select which of the 3 pDstMag channel values to output to the pDstMag image. For the C3C1R versions of the function the pDstMag output image value for L2 normalization consists of just outputting the largest per source channel L2 normalization value to the pDstMag image. For the C3C1R versions of the function the pDstAngle output image value consists of the same technique used for the C1R version calculated for each source image channel. Then the largest L2 normalization value is again used to select which of the 3 angle values to output to the pDstAngle image.
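
    For the C1R case, the magnitude and angle computations described above reduce to the following C sketch (helper names are illustrative):

    #include <math.h>

    /* L1 magnitude: |X| + |Y| */
    static float gradient_mag_l1(float pDstX, float pDstY)
    {
        return fabsf(pDstX) + fabsf(pDstY);
    }

    /* L2 magnitude: sqrt(X^2 + Y^2) */
    static float gradient_mag_l2(float pDstX, float pDstY)
    {
        return sqrtf(pDstX * pDstX + pDstY * pDstY);
    }

    /* Angle: arctangent (atan2) of the Y and X gradient values. */
    static float gradient_angle(float pDstX, float pDstY)
    {
        return atan2f(pDstY, pDstX);
    }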

    NPPNativeMethods.NPPi.GradientVectorScharrBorder

    RGB Color to Scharr Gradient Vector conversion using user selected fixed mask size and gradient distance method. Functions support up to 4 optional single channel output gradient vectors, X (vertical), Y (horizontal), magnitude, and angle with user selectable distance methods. Output for a particular vector is disabled by supplying a NULL pointer for that vector. X and Y gradient vectors are in cartesian form in the destination data type.
    Magnitude vectors are in polar gradient form in the destination data type; angle is always in floating point polar gradient format. Only fixed mask sizes of 3x3 are supported. Only nppiNormL1 (sum) and nppiNormL2 (sqrt of sum of squares) distance methods are currently supported.

    Currently only the NPP_BORDER_REPLICATE border type operation is supported. Borderless output can be accomplished by using a larger source image than the destination and adjusting oSrcSize and oSrcOffset parameters accordingly.

    For the C1R versions of the function the pDstMag output image value for L1 normalization consists of the absolute value of the pDstX value plus the absolute value of the pDstY value at that particular image pixel location. For the C1R versions of the function the pDstMag output image value for L2 normalization consists of the square root of the pDstX value squared plus the pDstY value squared at that particular image pixel location. For the C1R versions of the function the pDstAngle output image value consists of the arctangent (atan2) of the pDstY value and the pDstX value at that particular image pixel location.

    For the C3C1R versions of the function, regardless of the selected normalization method, the L2 normalization value is first determined for each of the pDstX and pDstY values for each source channel, then the largest L2 normalization value (largest gradient) is used to select which of the 3 pDstX channel values are output to the pDstX image or pDstY channel values are output to the pDstY image. For the C3C1R versions of the function the pDstMag output image value for L1 normalization consists of the same technique used for the C1R version for each source image channel. Then the largest L2 normalization value is again used to select which of the 3 pDstMag channel values to output to the pDstMag image. For the C3C1R versions of the function the pDstMag output image value for L2 normalization consists of just outputting the largest per source channel L2 normalization value to the pDstMag image. For the C3C1R versions of the function the pDstAngle output image value consists of the same technique used for the C1R version calculated for each source image channel. Then the largest L2 normalization value is again used to select which of the 3 angle values to output to the pDstAngle image.

    NPPNativeMethods.NPPi.GradientVectorSobelBorder

    RGB Color to Sobel Gradient Vector conversion using user selected fixed mask size and gradient distance method. Functions support up to 4 optional single channel output gradient vectors, X (vertical), Y (horizontal), magnitude, and angle with user selectable distance methods. Output for a particular vector is disabled by supplying a NULL pointer for that vector. X and Y gradient vectors are in cartesian form in the destination data type.
    Magnitude vectors are in polar gradient form in the destination data type; angle is always in floating point polar gradient format. Only fixed mask sizes of 3x3 and 5x5 are supported. Only nppiNormL1 (sum) and nppiNormL2 (sqrt of sum of squares) distance methods are currently supported.

    Currently only the NPP_BORDER_REPLICATE border type operation is supported. Borderless output can be accomplished by using a larger source image than the destination and adjusting oSrcSize and oSrcOffset parameters accordingly.

    For the C1R versions of the function the pDstMag output image value for L1 normalization consists of the absolute value of the pDstX value plus the absolute value of the pDstY value at that particular image pixel location. For the C1R versions of the function the pDstMag output image value for L2 normalization consists of the square root of the pDstX value squared plus the pDstY value squared at that particular image pixel location. For the C1R versions of the function the pDstAngle output image value consists of the arctangent (atan2) of the pDstY value and the pDstX value at that particular image pixel location.

    For the C3C1R versions of the function, regardless of the selected normalization method, the L2 normalization value is first determined for each of the pDstX and pDstY values for each source channel, then the largest L2 normalization value (largest gradient) is used to select which of the 3 pDstX channel values are output to the pDstX image or pDstY channel values are output to the pDstY image. For the C3C1R versions of the function the pDstMag output image value for L1 normalization consists of the same technique used for the C1R version for each source image channel. Then the largest L2 normalization value is again used to select which of the 3 pDstMag channel values to output to the pDstMag image. For the C3C1R versions of the function the pDstMag output image value for L2 normalization consists of just outputting the largest per source channel L2 normalization value to the pDstMag image. For the C3C1R versions of the function the pDstAngle output image value consists of the same technique used for the C1R version calculated for each source image channel. Then the largest L2 normalization value is again used to select which of the 3 angle values to output to the pDstAngle image.

    NPPNativeMethods.NPPi.Histogram

    Histogram.

    NPPNativeMethods.NPPi.HistogramOfOrientedGradientsBorder

    Performs Histogram Of Oriented Gradients operation on source image generating separate windows of Histogram Descriptors for each requested location.

    This function implements the simplest form of functionality described by N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, INRIA, 2005.

    It supports overlapped contrast normalized block histogram output with L2 normalization only, no threshold clipping, and no pre or post gaussian smoothing of input images or histogram output values. It supports both single channel grayscale source images and three channel color images. For color images, the color channel with the highest magnitude value is used as that pixel's magnitude. Output is row order only. Descriptors are output consecutively with no separation padding if multiple descriptor output is requested (one descriptor per source image location). For example, common HOG parameters are 9 histogram bins per 8 by 8 pixel cell, 2 by 2 cells per block, with a descriptor window size of 64 horizontal by 128 vertical pixels yielding 7 by 15 overlapping blocks (1 cell overlap in both horizontal and vertical directions). This results in 9 bins * 4 cells * 7 horizontal overlapping blocks * 15 vertical overlapping blocks or 3780 32-bit floating point output values (bins) per descriptor window.

    The number of horizontal overlapping block histogram bins per descriptor window width is determined by (((oHOGConfig.detectionWindowSize.width / oHOGConfig.histogramBlockSize) * 2) - 1) * oHOGConfig.nHistogramBins. The number of vertical overlapping block histograms per descriptor window height is determined by (((oHOGConfig.detectionWindowSize.height / oHOGConfig.histogramBlockSize) * 2) - 1). The offset of each descriptor window in the descriptors output buffer is therefore (horizontal histogram bins per descriptor window width) * (vertical histograms per descriptor window height) 32-bit floating point values relative to the previous descriptor window output.

    The algorithm uses a 1D centered derivative mask of[-1, 0, +1] when generating input magnitude and angle gradients. Magnitudes are added to the two nearest histogram bins of oriented gradients between 0 and 180 degrees using a weighted linear interpolation of each magnitude value across the 2 nearest angular bin orientations. 2D overlapping blocks of histogram bins consisting of the bins from 2D arrangements of cells are then contrast normalized using L2 normalization and output to the corresponding histogram descriptor window for that particular window location in the window locations list.
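
    Working through the example numbers given above (64x128 window, 8x8 cells, 2x2 cells per 16x16 block, 9 bins) as a small C program; the variable names are only for this illustration:

    #include <stdio.h>

    int main(void)
    {
        /* Example HOG configuration from the description above. */
        int detectionWindowW = 64, detectionWindowH = 128;
        int histogramBlockSize = 16;   /* 2x2 cells of 8x8 pixels */
        int cellsPerBlock = 4;
        int nHistogramBins = 9;

        /* Overlapping blocks per window (1 cell overlap in each direction). */
        int blocksH = (detectionWindowW / histogramBlockSize) * 2 - 1;   /* 7  */
        int blocksV = (detectionWindowH / histogramBlockSize) * 2 - 1;   /* 15 */

        int valuesPerDescriptor = nHistogramBins * cellsPerBlock * blocksH * blocksV;
        printf("%d x %d blocks, %d floats per descriptor window\n",
               blocksH, blocksV, valuesPerDescriptor);   /* 7 x 15, 3780 */
        return 0;
    }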

    NPPNativeMethods.NPPi.HLSToBGR

    HLS to BGR

    NPPNativeMethods.NPPi.HLSToRGB

    HLS to RGB

    NPPNativeMethods.NPPi.HSVToRGB

    HSV to RGB

    NPPNativeMethods.NPPi.ImageCompression

    Image compression primitives.

    The JPEG standard defines a flow of level shift, DCT and quantization for the forward JPEG transform, and inverse level shift, IDCT and de-quantization for the inverse JPEG transform. This group contains both the forward and inverse functions.

    NPPNativeMethods.NPPi.ImageMedianFilter

    Result pixel value is the median of pixel values under the rectangular mask region.

    NPPNativeMethods.NPPi.ImageProximity

    Primitives for computing the proximity measure between a source image and a template image.

    NPPNativeMethods.NPPi.Integral

    Primitives for computing the integral image of a given image.

    NPPNativeMethods.NPPi.IQA

    Primitives for computing the image quality between two images, such as MSE, PSNR, SSIM, and MS-SSIM.

    NPPNativeMethods.NPPi.LabelMarkers

    Generate image connected region label markers to be used for later image segmentation.

    NPPNativeMethods.NPPi.LabToBGR

    LAB to BGR

    NPPNativeMethods.NPPi.LeftShiftConst

    Pixel by pixel left shift of an image by a constant value.

    NPPNativeMethods.NPPi.LinearFilter1D

    1D mask Linear Convolution Filter, with rescaling, for 8 bit images.

    NPPNativeMethods.NPPi.LinearFixedFilters2D

    2D linear fixed filters for 8 bit images.

    NPPNativeMethods.NPPi.LinearTransforms

    Linear image transforms, like Fourier and DCT transformations.

    NPPNativeMethods.NPPi.Ln

    Pixel by pixel natural logarithm of each pixel in an image.

    NPPNativeMethods.NPPi.LUVToRGB

    LUV to RGB

    NPPNativeMethods.NPPi.Max

    Maximum

    NPPNativeMethods.NPPi.MaxIdx

    Maximum Index

    NPPNativeMethods.NPPi.MaximumError

    Primitives for computing the maximum error between two images.

    Given two images Src1 and Src2 both with width W and height H, the maximum error is defined as the largest absolute difference between pixels of two images.

    If the image is in complex format, the absolute value of the complex number is provided.

    NPPNativeMethods.NPPi.MaximumRelativeError

    Primitives for computing the maximum relative error between two images.

    If the image is in complex format, the absolute value is used for computation.

    For multiple channels, the maximum relative error of all the channels is returned.

    NPPNativeMethods.NPPi.MeanNew

    Mean (new in CUDA 5)

    NPPNativeMethods.NPPi.MeanStdDevNew

    Mean + Std deviation (new in CUDA 5)

    NPPNativeMethods.NPPi.MemAlloc

    Image-Memory Allocation

    ImageAllocator methods for 2D arrays of data. The allocators have width and height parameters to specify the size of the image data being allocated. They return a pointer to the newly created memory and the number of bytes between successive lines.

    If the memory allocation failed due to lack of free device memory or device memory fragmentation the routine returns 0.

    All allocators return memory with line strides that are beneficial for performance. It is not mandatory to use these allocators. Any valid CUDA device-memory pointers can be used by the NPP primitives and there are no restrictions on line strides.

    NPPNativeMethods.NPPi.MemCopy

    Copy methods for images of various types. Images are passed to these primitives via a pointer to the image data (first pixel in the ROI) and a step-width, i.e. the number of bytes between successive lines.

    The size of the area to be copied (region-of-interest, ROI) is specified via a Size struct.

    NPPNativeMethods.NPPi.MemSet

    Set methods for images of various types. Images are passed to these primitives via a pointer to the image data (first pixel in the ROI) and a step-width, i.e. the number of bytes between successive lines. The size of the area to be set (region-of-interest, ROI) is specified via a Size struct.

    In addition to the image data and ROI, all methods have a parameter to specify the value being set. In case of single channel images this is a single value, in case of multi-channel, an array of values is passed.

    NPPNativeMethods.NPPi.Min

    Minimum

    NPPNativeMethods.NPPi.MinIdx

    Minimum index

    NPPNativeMethods.NPPi.MinMaxEvery

    Primitives for computing the minimal/maximal value of the pixel pair from two images.

    NPPNativeMethods.NPPi.MinMaxIndxNew

    Min / Max Index (new in CUDA 5)

    NPPNativeMethods.NPPi.MinMaxNew

    Min / Max (new in CUDA 5)

    NPPNativeMethods.NPPi.MorphologyFilter2D

    Image dilate and erode operations.

    NPPNativeMethods.NPPi.Mul

    Pixel by pixel multiply of two images.

    NPPNativeMethods.NPPi.MulConst

    Multiplies each pixel of an image by a constant value.

    NPPNativeMethods.NPPi.MulConstScale

    Multiplies each pixel of an image by a constant value then scales the result by the maximum value for the data bit width.

    NPPNativeMethods.NPPi.MulScale

    Pixel by pixel multiplication of two images, then scaling of the result by the maximum value for the data bit width.

    NPPNativeMethods.NPPi.NormDiff

    Norm of pixel differences between two images.

    NPPNativeMethods.NPPi.NormInf

    Infinite Norm

    NPPNativeMethods.NPPi.NormL1

    L1 Norm

    NPPNativeMethods.NPPi.NormL2

    L2 Norm

    NPPNativeMethods.NPPi.NormRel

    Primitives for computing the relative error between two images.

    NPPNativeMethods.NPPi.Not

    Pixel by pixel logical not of image.

    NPPNativeMethods.NPPi.NV12ToYUV420

    NV12 to YUV420 color conversion.

    NPPNativeMethods.NPPi.NV21ToBGR

    NV21 to BGR color conversion

    NPPNativeMethods.NPPi.NV21ToRGB

    NV21 to RGB color conversion

    NPPNativeMethods.NPPi.Or

    Pixel by pixel logical or of images.

    NPPNativeMethods.NPPi.OrConst

    Pixel by pixel logical or of an image with a constant.

    NPPNativeMethods.NPPi.PerspectiveTransforms

    Perspective warping, perspective transform calculation. Perspective warping of an image is the transform of image pixel positions, defined by the following formulas:

    X_new = (C00 * x + C01 * y + C02) / (C20 * x + C21 * y + C22)

    Y_new = (C10 * x + C11 * y + C12) / (C20 * x + C21 * y + C22)

    C = [ C00 C01 C02 ; C10 C11 C12 ; C20 C21 C22 ]

    That is, any pixel of the transformed image with coordinates (X_new, Y_new) has a preimage with coordinates (x, y). The mapping C is fully defined by 8 values Cij, (i, j) = 0..2, except for C22, which is a normalizer. The transform has the property of mapping any convex quadrangle to a convex quadrangle, which is used in the group of functions nppiWarpPerspectiveQuad. The NPPI implementation of the perspective transform has some issues which are discussed in each function's documentation.
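
    Analogous to the affine case, the coordinate mapping can be sketched directly in C (illustrative only; names are not NPP API):

    /* Perspective mapping of a pixel position (x, y) using the 3x3 coefficient
       matrix C, as in the formulas above. The caller must ensure the
       denominator is non-zero. */
    static void perspective_map(const double C[3][3], double x, double y,
                                double *xNew, double *yNew)
    {
        double den = C[2][0] * x + C[2][1] * y + C[2][2];
        *xNew = (C[0][0] * x + C[0][1] * y + C[0][2]) / den;
        *yNew = (C[1][0] * x + C[1][1] * y + C[1][2]) / den;
    }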

    NPPNativeMethods.NPPi.QualityIndex

    Primitives for computing the image quality index of two images.

    NPPNativeMethods.NPPi.RankFilters

    Min, Median, and Max image filters.

    NPPNativeMethods.NPPi.Remap

    Remap chooses source pixels using pixel coordinates explicitly supplied in two 2D device memory image arrays pointed to by the pXMap and pYMap pointers.

    The pXMap array contains the X coordinates and the pYMap array contains the Y coordinates of the corresponding source image pixels to use as input. These coordinates are in floating point format so fractional pixel positions can be used. The coordinates of the source pixel to sample are determined as follows:

    nSrcX = pxMap[nDstX, nDstY]

    nSrcY = pyMap[nDstX, nDstY]

    In the Remap functions below source image clip checking is handled as follows:

    If the source pixel fractional x and y coordinates are greater than or equal to oSizeROI.x and less than oSizeROI.x + oSizeROI.width and greater than or equal to oSizeROI.y and less than oSizeROI.y + oSizeROI.height then the source pixel is considered to be within the source image clip rectangle and the source image is sampled. Otherwise the source image is not sampled and a destination pixel is not written to the destination image.
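
    The lookup and clip test described above can be sketched as follows in C. Nearest-neighbour sampling and the Rect struct are simplifications of this sketch (NPP offers several interpolation modes and uses NppiRect/NppiSize types):

    typedef struct { int x, y, width, height; } Rect;

    /* Remap one destination pixel: fetch its source coordinates from the X/Y
       maps, test them against the source clip rectangle, and only then sample
       the source image. */
    static void remap_pixel(const unsigned char *src, int srcStep, Rect oSizeROI,
                            const float *pXMap, const float *pYMap, int mapStep,
                            unsigned char *dst, int dstStep, int nDstX, int nDstY)
    {
        float nSrcX = pXMap[nDstY * mapStep + nDstX];
        float nSrcY = pYMap[nDstY * mapStep + nDstX];

        if (nSrcX >= (float)oSizeROI.x && nSrcX < (float)(oSizeROI.x + oSizeROI.width) &&
            nSrcY >= (float)oSizeROI.y && nSrcY < (float)(oSizeROI.y + oSizeROI.height))
        {
            dst[nDstY * dstStep + nDstX] = src[(int)nSrcY * srcStep + (int)nSrcX];
        }
        /* otherwise the destination pixel is left unwritten */
    }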

    NPPNativeMethods.NPPi.ResizeSqrPixel

    Resizes images.

    NPPNativeMethods.NPPi.RGBToCbYCr

    RGB To CbYCr Conversion

    NPPNativeMethods.NPPi.RGBToGray

    RGB to CCIR601 Gray conversion.

    Here is how NPP converts gamma corrected RGB to CCIR601 Gray.

    nGray =  0.299F * R + 0.587F * G + 0.114F * B;

    NPPNativeMethods.NPPi.RGBToHLS

    RGB to HLS

    NPPNativeMethods.NPPi.RGBToHSV

    RGB to HSV

    NPPNativeMethods.NPPi.RGBToLUV

    RGB to LUV

    NPPNativeMethods.NPPi.RGBToXYZ

    RGB to XYZ

    NPPNativeMethods.NPPi.RGBToYCbCr

    RGB to YCbCr color conversion.

    NPPNativeMethods.NPPi.RGBToYCbCr_JPEG

    JPEG RGB to YCbCr color conversion.

    NPPNativeMethods.NPPi.RGBToYCC

    RGB to YCC

    NPPNativeMethods.NPPi.RGBToYCrCb

    RGB To YCrCB Conversion

    NPPNativeMethods.NPPi.RGBToYUV

    RGB to YUV Conversion

    NPPNativeMethods.NPPi.RGBToYUV420

    RGB to YUV420 Conversion

    NPPNativeMethods.NPPi.RGBToYUV422

    RGB To YUV422 Conversion

    NPPNativeMethods.NPPi.RightShiftConst

    Pixel by pixel right shift of an image by a constant value.

    NPPNativeMethods.NPPi.SamplePatternConversion

    Sample Pattern Conversion.

    NPPNativeMethods.NPPi.Scale

    Scale bit-depth up and down.

    To map source pixel srcPixelValue to destination pixel dstPixelValue the following equation is used:

    dstPixelValue = dstMinRangeValue + scaleFactor * (srcPixelValue - srcMinRangeValue)

    where scaleFactor = (dstMaxRangeValue - dstMinRangeValue) / (srcMaxRangeValue - srcMinRangeValue).

    For conversions between integer data types, the entire integer numeric range of the input data type is mapped onto the entire integer numeric range of the output data type.

    For conversions to floating point data types the floating point data range is defined by the user supplied floating point values of nMax and nMin which are used as the dstMaxRangeValue and dstMinRangeValue respectively in the scaleFactor and dstPixelValue calculations and also as the saturation values to which output data is clamped.

    When converting from floating-point values to integer values, nMax and nMin are used as the srcMaxRangeValue and srcMinRangeValue respectively in the scaleFactor and dstPixelValue calculations. Output values are saturated and clamped to the full output integer pixel value range.
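
    The mapping above in C form, for one sample (a sketch; the function name is illustrative):

    /* Map srcPixelValue from [srcMinRangeValue, srcMaxRangeValue] to
       [dstMinRangeValue, dstMaxRangeValue] as described above. */
    static float scale_value(float srcPixelValue,
                             float srcMinRangeValue, float srcMaxRangeValue,
                             float dstMinRangeValue, float dstMaxRangeValue)
    {
        float scaleFactor = (dstMaxRangeValue - dstMinRangeValue) /
                            (srcMaxRangeValue - srcMinRangeValue);
        return dstMinRangeValue + scaleFactor * (srcPixelValue - srcMinRangeValue);
    }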

    NPPNativeMethods.NPPi.Sqr

    Square each pixel in an image.

    NPPNativeMethods.NPPi.Sqrt

    Pixel by pixel square root of each pixel in an image.

    NPPNativeMethods.NPPi.Sub

    Pixel by pixel subtraction of two images.

    NPPNativeMethods.NPPi.SubConst

    Subtracts a constant value from each pixel of an image.

    NPPNativeMethods.NPPi.Sum

    Sum of 8 bit images.

    NPPNativeMethods.NPPi.SwapChannel

    Methods for exchanging the color channels of an image.

    The methods support arbitrary permutations of the original channels, including replication.

    NPPNativeMethods.NPPi.Threshold

    Threshold pixels.

    NPPNativeMethods.NPPi.Transpose

    Methods for transposing images of various types. Like matrix transpose, image transpose is a mirror along the image's diagonal (upper-left to lower-right corner).

    NPPNativeMethods.NPPi.WindowSum1D

    1D mask Window Sum for 8 bit images.

    NPPNativeMethods.NPPi.Xor

    Pixel by pixel logical exclusive or of images.

    NPPNativeMethods.NPPi.XorConst

    Pixel by pixel logical exclusive or of an image with a constant.

    NPPNativeMethods.NPPi.XYZToRGB

    XYZ to RGB

    NPPNativeMethods.NPPi.YCbCrAndACrCbAndOther

    YCbCr ACrCb ...

    NPPNativeMethods.NPPi.YCbCrToBGR

    YCbCr To BGR Conversion

    NPPNativeMethods.NPPi.YCbCrToRGB

    YCbCr to RGB color conversion.

    NPPNativeMethods.NPPi.YCCToRGB

    YCC to RGB

    NPPNativeMethods.NPPi.YCrCbToRGB

    YCrCB to RGB Conversion

    NPPNativeMethods.NPPi.YUV420ToBGR

    YUV420 to BGR Conversion

    NPPNativeMethods.NPPi.YUV420ToRGB

    YUV420 to RGB Conversion

    NPPNativeMethods.NPPi.YUV422ToRGB

    YUV422 To RGB Conversion

    NPPNativeMethods.NPPi.YUVToBGR

    YUV to BGR color conversion.

    Here is how NPP converts YUV to gamma corrected RGB or BGR.

    Npp32f nY = (Npp32f)Y;

    Npp32f nU = (Npp32f)U - 128.0F;

    Npp32f nV = (Npp32f)V - 128.0F;

    Npp32f nR = nY + 1.140F * nV;

    if (nR < 0.0F) nR = 0.0F;

    if (nR > 255.0F) nR = 255.0F;

    Npp32f nG = nY - 0.394F * nU - 0.581F * nV;

    if (nG < 0.0F) nG = 0.0F;

    if (nG > 255.0F) nG = 255.0F;

    Npp32f nB = nY + 2.032F * nU;

    if (nB < 0.0F) nB = 0.0F;

    if (nB > 255.0F) nB = 255.0F;

    NPPNativeMethods.NPPi.YUVToRGB

    YUV to RGB Conversion

    NPPNativeMethods.NPPs

    npps.h

    NPPNativeMethods.NPPs.AbsoluteValueSignal

    Absolute value of each sample of a signal.

    NPPNativeMethods.NPPs.AddC

    Adds a constant value to each sample of a signal.

    NPPNativeMethods.NPPs.AddProductC

    Adds the product of a constant and each sample of a source signal to each sample of the destination signal.

    NPPNativeMethods.NPPs.AddProductSignal

    Adds sample by sample product of two signals to the destination signal.

    NPPNativeMethods.NPPs.AddSignal

    Sample by sample addition of two signals.

    NPPNativeMethods.NPPs.And

    Sample by sample bitwise AND of samples from two signals.

    NPPNativeMethods.NPPs.AndC

    Bitwise AND of a constant and each sample of a signal.

    NPPNativeMethods.NPPs.AverageError

    Primitives for computing the Average error between two signals.

    NPPNativeMethods.NPPs.AverageRelativeError

    Primitives for computing the AverageRelative error between two signals.

    NPPNativeMethods.NPPs.Cauchy

    Determine Cauchy robust error function and its first and second derivatives for each sample of a signal.

    NPPNativeMethods.NPPs.Convert

    Routines for converting the sample-data type of signals.

    NPPNativeMethods.NPPs.Copy

    Copy methods for various type signals. Copy methods operate on signal data given as a pointer to the underlying data-type (e.g. 8-bit vectors would be passed as pointers to Npp8u type) and length of the vectors, i.e. the number of items.

    NPPNativeMethods.NPPs.CountInRange

    Count In Range

    NPPNativeMethods.NPPs.CubeRootSignal

    Cube root of each sample of a signal.

    NPPNativeMethods.NPPs.DivC

    Divides each sample of a signal by a constant.

    NPPNativeMethods.NPPs.DivCRev

    Divides a constant by each sample of a signal.

    NPPNativeMethods.NPPs.DivRoundSignal

    Sample by sample division of the samples of two signals with rounding.

    NPPNativeMethods.NPPs.DivSignal

    Sample by sample division of the samples of two signals.

    NPPNativeMethods.NPPs.DotProduct

    Dot Product

    NPPNativeMethods.NPPs.ExponentSignal

    e raised to the power of each sample of a signal.

    NPPNativeMethods.NPPs.FilteringFunctions

    Functions that generate an output signal based on the input signal, such as the signal integral.

    NPPNativeMethods.NPPs.InverseTangentSignal

    Inverse tangent of each sample of a signal.

    NPPNativeMethods.NPPs.LShiftC

    Left shifts the bits of each sample of a signal by a constant amount.

    NPPNativeMethods.NPPs.Max

    Functions that provide global signal statistics like: average, standard deviation, minimum, etc.

    NPPNativeMethods.NPPs.MaximumError

    Primitives for computing the maximum error between two signals.

    NPPNativeMethods.NPPs.MaximumRelativeError

    Primitives for computing the MaximumRelative error between two signals.

    NPPNativeMethods.NPPs.MeanStdDev

    Mean and StdDev

    NPPNativeMethods.NPPs.MemAlloc

    Signal-allocator methods for allocating 1D arrays of data in device memory. All allocators have size parameters to specify the size of the signal (1D array) being allocated.

    The allocator methods return a pointer to the newly allocated memory of appropriate type. If device-memory allocation is not possible due to resource constraints, the allocators return 0 (i.e. a NULL pointer).

    All signal allocators allocate memory aligned such that it is beneficial to the performance of the majority of the signal-processing primitives. It is not mandatory, however, to use these allocators. Any valid CUDA device-memory pointers can be passed to NPP primitives.

    NPPNativeMethods.NPPs.Min

    Functions that provide global signal statistics like: average, standard deviation, minimum, etc.

    NPPNativeMethods.NPPs.MinMaxEvery

    Performs the min or max operation on the samples of a signal.

    NPPNativeMethods.NPPs.MinMaxIndex

    Minimum / Maximum values / indices

    NPPNativeMethods.NPPs.MulC

    Multiplies each sample of a signal by a constant value.

    NPPNativeMethods.NPPs.MulSignal

    Sample by sample multiplication of the samples of two signals.

    NPPNativeMethods.NPPs.NaturalLogarithmSignal

    Natural logarithm of each sample of a signal.

    NPPNativeMethods.NPPs.Norm

    Infinity Norm, L1 Norm, L2 Norm

    NPPNativeMethods.NPPs.NormalizeSignal

    Normalize each sample of a real or complex signal using offset and division operations.

    NPPNativeMethods.NPPs.NormDiff

    Infinity Norm Diff, L1 Norm Diff, L2 Norm Diff

    NPPNativeMethods.NPPs.Not

    Bitwise NOT of each sample of a signal.

    NPPNativeMethods.NPPs.Or

    Sample by sample bitwise OR of samples from two signals.

    NPPNativeMethods.NPPs.OrC

    Bitwise OR of a constant and each sample of a signal.

    NPPNativeMethods.NPPs.RShiftC

    Right shifts the bits of each sample of a signal by a constant amount.

    NPPNativeMethods.NPPs.Set

    Set methods for 1D vectors of various types. The set methods operate on vector data given as a pointer to the underlying data-type (e.g. 8-bit vectors would be passed as pointers to Npp8u type) and the length of the vectors, i.e. the number of items.

    NPPNativeMethods.NPPs.SquareRootSignal

    Square root of each sample of a signal.

    NPPNativeMethods.NPPs.SquareSignal

    Squares each sample of a signal.

    NPPNativeMethods.NPPs.SubC

    Subtracts a constant from each sample of a signal.

    NPPNativeMethods.NPPs.SubCRev

    Subtracts each sample of a signal from a constant.

    NPPNativeMethods.NPPs.SubSignal

    Sample by sample subtraction of the samples of two signals.

    NPPNativeMethods.NPPs.Sum

    Functions that provide global signal statistics like: average, standard deviation, minimum, etc.

    NPPNativeMethods.NPPs.SumLn

    Sums up the natural logarithm of each sample of a signal.

    NPPNativeMethods.NPPs.TenTimesBaseTenLogarithmSignal

    Ten times the decimal logarithm of each sample of a signal.

    NPPNativeMethods.NPPs.Threshold

    Performs the threshold operation on the samples of a signal by limiting the sample values by a specified constant value.

    NPPNativeMethods.NPPs.Xor

    Sample by sample bitwise XOR of samples from two signals.

    NPPNativeMethods.NPPs.XorC

    Bitwise XOR of a constant and each sample of a signal.

    NPPNativeMethods.NPPs.Zero

    Set signals to zero.

    NPPNativeMethods.NPPs.ZeroCrossing

    Count Zero Crossings

    NPPWarning

    WarningException thrown, if configured, when a native NPP function returns a positive error code.

    NPPWarningHandler

    Singleton NPPWarning handler. Use the OnNPPWarning event to get notified when an NPP function returns an NPP warning status code.

    NPPWarningHandler.NPPWarningEventArgs

    NPP warning event args

    Structs

    HaarBuffer

    HaarBuffer

    HaarClassifier

    HaarClassifier

    Npp16sc

    Complex Number.

    This struct represents a short complex number.

    Npp32fc

    Complex Number.

    This struct represents a single floating-point complex number.

    Npp32sc

    Complex Number.

    This struct represents a signed int complex number.

    Npp64fc

    Complex Number.

    This struct represents a double floating-point complex number.

    Npp64sc

    Complex Number.

    This struct represents a long long complex number.

    NppiColorTwistBatchCXR

    NppiDCTState

    DCT state structure

    NppiDecodeHuffmanSpec

    NppiDecodeHuffmanSpec

    NppiEncodeHuffmanSpec

    NppiEncodeHuffmanSpec

    NppiGraphcutState

    graph-cut state structure

    NppiHOGConfig

    The NppiHOGConfig structure defines the configuration parameters for the HOG descriptor

    NppiJpegDecodeJob

    JPEG decode job used by \ref nppiJpegDecodeJob (see that for more documentation). The job describes a piece of computation to be done.

    NppiJpegDecodeJobMemory

    NppiJpegFrameDescr

    JPEG frame descriptor. Can hold from 1 to 4 components.

    NppiJpegScanDescr

    JPEG scan descriptor

    NppiMirrorBatchCXR

    NppiPoint

    2D Point.

    NppiRect

    2D Rectangle

    This struct contains position and size information for a rectangle in 2D space.

    The rectangle's position is usually signified by the coordinate of its upper-left corner.

    NppiResizeBatchCXR

    NppiSize

    2D Size

    This struct typically represents the size of a rectangular region in 2D space.

    NppiWarpAffineBatchCXR

    NppLibraryVersion

    Npp Library Version.

    NppPointPolar

    2D Polar Point.

    Enums

    DifferentialKernel

    Differential Filter types

    GpuComputeCapability

    Gpu Compute Capabilities

    InterpolationMode

    Filtering methods

    MaskSize

    Fixed filter-kernel sizes.

    NppCmpOp

    Compare Operator

    NppHintAlgorithm

    HintAlgorithm

    NppiAlphaOp

    NppiAlphaOp

    NppiAxis

    Axis

    NppiBayerGridPosition

    Bayer Grid Position Registration.

    NppiBorderType

    BorderType

    NppiHuffmanTableType

    NppiHuffmanTableType

    NppiJpegDecodeJobKind

    Type of job to execute. Usually you will need just SIMPLE for each scan, one MEMZERO at the beginning and FINALIZE at the end. See the example in \ref nppiJpegDecodeJob. SIMPLE can be split into multiple jobs: PRE, CPU & GPU. Please note that if you don't use SIMPLE, you may need to add some memcopies and synchronizations as described in \ref nppiJpegDecodeJob.

    NppiNorm

    NppiNorm

    NppRoundMode

    Rounding Modes

    The enumerated rounding modes are used by a large number of NPP primitives to allow the user to specify the method by which fractional values are converted to integer values. Also see \ref rounding_modes.

    For NPP release 5.5 new names for the three rounding modes are introduced that are based on the naming conventions for rounding modes set forth in the IEEE-754 floating-point standard. Developers are encouraged to use the new, longer names to be future proof as the legacy names will be deprecated in subsequent NPP releases.

    NppStatus

    Error Status Codes

    Almost all NPP functions return error-status information using these return codes. Negative return codes indicate errors, positive return codes indicate warnings, and a return code of 0 indicates success.

    NppsZCType

    NppsZCType

    Delegates

    NPPWarningHandler.NPPWarningEventHandler
