
    Class LRNDescriptor

    Inheritance
    System.Object
    LRNDescriptor
    Implements
    System.IDisposable
    Inherited Members
    System.Object.Equals(System.Object)
    System.Object.Equals(System.Object, System.Object)
    System.Object.GetHashCode()
    System.Object.GetType()
    System.Object.MemberwiseClone()
    System.Object.ReferenceEquals(System.Object, System.Object)
    System.Object.ToString()
    Namespace: ManagedCuda.CudaDNN
    Assembly: CudaDNN.dll
    Syntax
    public class LRNDescriptor : IDisposable

    Constructors


    LRNDescriptor(CudaDNNContext)

    Declaration
    public LRNDescriptor(CudaDNNContext context)
    Parameters
    Type Name Description
CudaDNNContext context

The CudaDNNContext (cuDNN handle wrapper) on which this descriptor is created.
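A construction sketch (the `CudaContext` type and `using`-based disposal are ManagedCuda conventions; the values passed to `SetLRNDescriptor` are the cuDNN defaults):

```csharp
// Sketch: create a cuDNN handle and an LRN descriptor bound to it.
// The descriptor must be configured with SetLRNDescriptor before use.
using ManagedCuda;
using ManagedCuda.CudaDNN;

using (var cuda = new CudaContext(0))     // CUDA context on device 0
using (var dnn = new CudaDNNContext())    // wraps the native cudnnHandle_t
using (var lrn = new LRNDescriptor(dnn))  // wraps cudnnLRNDescriptor
{
    lrn.SetLRNDescriptor(5, 1e-4, 0.75, 2.0); // cuDNN default parameters
}
```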

    Properties


    Desc

Returns the inner native cudnnLRNDescriptor handle.

    Declaration
    public cudnnLRNDescriptor Desc { get; }
    Property Value
    Type Description
    cudnnLRNDescriptor

    Methods


    cudnnDivisiveNormalizationBackward(cudnnDivNormMode, Double, cudnnTensorDescriptor, CUdeviceptr, CUdeviceptr, CUdeviceptr, CUdeviceptr, CUdeviceptr, Double, cudnnTensorDescriptor, CUdeviceptr, CUdeviceptr)

    This function performs the backward DivisiveNormalization layer computation.

    Declaration
    public void cudnnDivisiveNormalizationBackward(cudnnDivNormMode mode, double alpha, cudnnTensorDescriptor xDesc, CUdeviceptr x, CUdeviceptr means, CUdeviceptr dy, CUdeviceptr temp, CUdeviceptr temp2, double beta, cudnnTensorDescriptor dXdMeansDesc, CUdeviceptr dx, CUdeviceptr dMeans)
    Parameters
    Type Name Description
    cudnnDivNormMode mode

    DivisiveNormalization layer mode of operation. Currently only CUDNN_DIVNORM_PRECOMPUTED_MEANS is implemented. Normalization is performed using the means input tensor that is expected to be precomputed by the user.

    System.Double alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor xDesc

    Tensor descriptor and pointers in device memory for the bottom layer's data and means. (Bottom layer is the earlier layer in the computation graph during inference). Note: the means tensor is expected to be precomputed by the user. It can also contain any valid values (not required to be actual means, and can be for instance a result of a convolution with a Gaussian kernel).

    CUdeviceptr x

    Tensor descriptor and pointers in device memory for the bottom layer's data and means. (Bottom layer is the earlier layer in the computation graph during inference). Note: the means tensor is expected to be precomputed by the user. It can also contain any valid values (not required to be actual means, and can be for instance a result of a convolution with a Gaussian kernel).

    CUdeviceptr means

    Tensor descriptor and pointers in device memory for the bottom layer's data and means. (Bottom layer is the earlier layer in the computation graph during inference). Note: the means tensor is expected to be precomputed by the user. It can also contain any valid values (not required to be actual means, and can be for instance a result of a convolution with a Gaussian kernel).

    CUdeviceptr dy

    Tensor pointer in device memory for the top layer's cumulative loss differential data (error backpropagation). (Top layer is the later layer in the computation graph during inference).

    CUdeviceptr temp

Temporary tensors in device memory, used to compute intermediate values during the backward pass. They do not have to be preserved from the forward to the backward pass. Both use xDesc as their descriptor.

    CUdeviceptr temp2

Temporary tensors in device memory, used to compute intermediate values during the backward pass. They do not have to be preserved from the forward to the backward pass. Both use xDesc as their descriptor.

    System.Double beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor dXdMeansDesc

Tensor descriptor shared by dx and dMeans.

    CUdeviceptr dx

    Tensor pointers (in device memory) for the bottom layer's resulting differentials (data and means). Both share the same descriptor.

    CUdeviceptr dMeans

    Tensor pointers (in device memory) for the bottom layer's resulting differentials (data and means). Both share the same descriptor.
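As a usage sketch: the call below assumes device buffers (`x`, `means`, `dy`, `temp`, `temp2`, `dx`, `dMeans` as `CudaDeviceVariable<float>`) and descriptors (`xDesc`, `dxDesc` as `TensorDescriptor`) left over from a matching forward call; the enum member spelling is an assumption to verify against the library version in use.

```csharp
// Sketch (identifier spellings unverified): backward divisive
// normalization matching a prior forward call with the same buffers.
lrn.cudnnDivisiveNormalizationBackward(
    cudnnDivNormMode.PrecomputedMeans,        // only implemented mode
    1.0,                                      // alpha: keep the full result
    xDesc.Desc,                               // shared by x, means, temp, temp2
    x.DevicePointer, means.DevicePointer,
    dy.DevicePointer,                         // incoming gradient
    temp.DevicePointer, temp2.DevicePointer,  // scratch buffers
    0.0,                                      // beta: overwrite prior contents
    dxDesc.Desc,                              // shared by dx and dMeans
    dx.DevicePointer, dMeans.DevicePointer);  // resulting differentials
```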


    cudnnDivisiveNormalizationBackward(cudnnDivNormMode, Single, cudnnTensorDescriptor, CUdeviceptr, CUdeviceptr, CUdeviceptr, CUdeviceptr, CUdeviceptr, Single, cudnnTensorDescriptor, CUdeviceptr, CUdeviceptr)

    This function performs the backward DivisiveNormalization layer computation.

    Declaration
    public void cudnnDivisiveNormalizationBackward(cudnnDivNormMode mode, float alpha, cudnnTensorDescriptor xDesc, CUdeviceptr x, CUdeviceptr means, CUdeviceptr dy, CUdeviceptr temp, CUdeviceptr temp2, float beta, cudnnTensorDescriptor dXdMeansDesc, CUdeviceptr dx, CUdeviceptr dMeans)
    Parameters
    Type Name Description
    cudnnDivNormMode mode

    DivisiveNormalization layer mode of operation. Currently only CUDNN_DIVNORM_PRECOMPUTED_MEANS is implemented. Normalization is performed using the means input tensor that is expected to be precomputed by the user.

    System.Single alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor xDesc

    Tensor descriptor and pointers in device memory for the bottom layer's data and means. (Bottom layer is the earlier layer in the computation graph during inference). Note: the means tensor is expected to be precomputed by the user. It can also contain any valid values (not required to be actual means, and can be for instance a result of a convolution with a Gaussian kernel).

    CUdeviceptr x

    Tensor descriptor and pointers in device memory for the bottom layer's data and means. (Bottom layer is the earlier layer in the computation graph during inference). Note: the means tensor is expected to be precomputed by the user. It can also contain any valid values (not required to be actual means, and can be for instance a result of a convolution with a Gaussian kernel).

    CUdeviceptr means

    Tensor descriptor and pointers in device memory for the bottom layer's data and means. (Bottom layer is the earlier layer in the computation graph during inference). Note: the means tensor is expected to be precomputed by the user. It can also contain any valid values (not required to be actual means, and can be for instance a result of a convolution with a Gaussian kernel).

    CUdeviceptr dy

    Tensor pointer in device memory for the top layer's cumulative loss differential data (error backpropagation). (Top layer is the later layer in the computation graph during inference).

    CUdeviceptr temp

Temporary tensors in device memory, used to compute intermediate values during the backward pass. They do not have to be preserved from the forward to the backward pass. Both use xDesc as their descriptor.

    CUdeviceptr temp2

Temporary tensors in device memory, used to compute intermediate values during the backward pass. They do not have to be preserved from the forward to the backward pass. Both use xDesc as their descriptor.

    System.Single beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor dXdMeansDesc

Tensor descriptor shared by dx and dMeans.

    CUdeviceptr dx

    Tensor pointers (in device memory) for the bottom layer's resulting differentials (data and means). Both share the same descriptor.

    CUdeviceptr dMeans

    Tensor pointers (in device memory) for the bottom layer's resulting differentials (data and means). Both share the same descriptor.


    cudnnDivisiveNormalizationForward(cudnnDivNormMode, Double, cudnnTensorDescriptor, CUdeviceptr, CUdeviceptr, CUdeviceptr, CUdeviceptr, Double, cudnnTensorDescriptor, CUdeviceptr)

    This function performs the forward DivisiveNormalization layer computation.

    Declaration
    public void cudnnDivisiveNormalizationForward(cudnnDivNormMode mode, double alpha, cudnnTensorDescriptor xDesc, CUdeviceptr x, CUdeviceptr means, CUdeviceptr temp, CUdeviceptr temp2, double beta, cudnnTensorDescriptor yDesc, CUdeviceptr y)
    Parameters
    Type Name Description
    cudnnDivNormMode mode

    DivisiveNormalization layer mode of operation. Currently only CUDNN_DIVNORM_PRECOMPUTED_MEANS is implemented. Normalization is performed using the means input tensor that is expected to be precomputed by the user.

    System.Double alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor xDesc

Tensor descriptor objects for the input and output tensors. Note that xDesc is shared by the x, means, temp, and temp2 tensors.

    CUdeviceptr x

    Input tensor data pointer in device memory.

    CUdeviceptr means

Input means tensor data pointer in device memory. This tensor can be NULL, in which case its values are assumed to be zero during the computation. It also does not have to contain actual means; any valid values work, and a frequently used variation is the result of convolution with a normalized positive kernel (such as a Gaussian).

    CUdeviceptr temp

Temporary tensors in device memory, used to compute intermediate values during the forward pass. They do not have to be preserved as inputs from the forward to the backward pass. Both use xDesc as their descriptor.

    CUdeviceptr temp2

Temporary tensors in device memory, used to compute intermediate values during the forward pass. They do not have to be preserved as inputs from the forward to the backward pass. Both use xDesc as their descriptor.

    System.Double beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor yDesc

Tensor descriptor objects for the input and output tensors. Note that xDesc is shared by the x, means, temp, and temp2 tensors.

    CUdeviceptr y

    Pointer in device memory to a tensor for the result of the forward DivisiveNormalization pass.
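A usage sketch, assuming `lrn` is a configured `LRNDescriptor` and `xDesc`/`yDesc` are `TensorDescriptor` objects describing the same shape; `CudaDeviceVariable<float>` and the enum spelling are ManagedCuda names to verify against the library version in use.

```csharp
// Sketch (identifier spellings unverified): y = DivNorm(x) with
// user-precomputed means; temp/temp2 are scratch and share xDesc.
int count = 1 * 8 * 16 * 16;  // N*C*H*W elements
using (var x     = new CudaDeviceVariable<float>(count))
using (var means = new CudaDeviceVariable<float>(count))
using (var temp  = new CudaDeviceVariable<float>(count))
using (var temp2 = new CudaDeviceVariable<float>(count))
using (var y     = new CudaDeviceVariable<float>(count))
{
    lrn.cudnnDivisiveNormalizationForward(
        cudnnDivNormMode.PrecomputedMeans,  // only implemented mode
        1.0, xDesc.Desc,
        x.DevicePointer, means.DevicePointer,
        temp.DevicePointer, temp2.DevicePointer,
        0.0, yDesc.Desc, y.DevicePointer);
}
```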


    cudnnDivisiveNormalizationForward(cudnnDivNormMode, Single, cudnnTensorDescriptor, CUdeviceptr, CUdeviceptr, CUdeviceptr, CUdeviceptr, Single, cudnnTensorDescriptor, CUdeviceptr)

    This function performs the forward DivisiveNormalization layer computation.

    Declaration
    public void cudnnDivisiveNormalizationForward(cudnnDivNormMode mode, float alpha, cudnnTensorDescriptor xDesc, CUdeviceptr x, CUdeviceptr means, CUdeviceptr temp, CUdeviceptr temp2, float beta, cudnnTensorDescriptor yDesc, CUdeviceptr y)
    Parameters
    Type Name Description
    cudnnDivNormMode mode

    DivisiveNormalization layer mode of operation. Currently only CUDNN_DIVNORM_PRECOMPUTED_MEANS is implemented. Normalization is performed using the means input tensor that is expected to be precomputed by the user.

    System.Single alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor xDesc

Tensor descriptor objects for the input and output tensors. Note that xDesc is shared by the x, means, temp, and temp2 tensors.

    CUdeviceptr x

    Input tensor data pointer in device memory.

    CUdeviceptr means

Input means tensor data pointer in device memory. This tensor can be NULL, in which case its values are assumed to be zero during the computation. It also does not have to contain actual means; any valid values work, and a frequently used variation is the result of convolution with a normalized positive kernel (such as a Gaussian).

    CUdeviceptr temp

Temporary tensors in device memory, used to compute intermediate values during the forward pass. They do not have to be preserved as inputs from the forward to the backward pass. Both use xDesc as their descriptor.

    CUdeviceptr temp2

Temporary tensors in device memory, used to compute intermediate values during the forward pass. They do not have to be preserved as inputs from the forward to the backward pass. Both use xDesc as their descriptor.

    System.Single beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor yDesc

Tensor descriptor objects for the input and output tensors. Note that xDesc is shared by the x, means, temp, and temp2 tensors.

    CUdeviceptr y

    Pointer in device memory to a tensor for the result of the forward DivisiveNormalization pass.


    cudnnLRNCrossChannelBackward(cudnnLRNMode, ref Double, cudnnTensorDescriptor, CUdeviceptr, cudnnTensorDescriptor, CUdeviceptr, cudnnTensorDescriptor, CUdeviceptr, ref Double, cudnnTensorDescriptor, CUdeviceptr)

    This function performs the backward LRN layer computation.

    Declaration
    public void cudnnLRNCrossChannelBackward(cudnnLRNMode lrnMode, ref double alpha, cudnnTensorDescriptor yDesc, CUdeviceptr y, cudnnTensorDescriptor dyDesc, CUdeviceptr dy, cudnnTensorDescriptor xDesc, CUdeviceptr x, ref double beta, cudnnTensorDescriptor dxDesc, CUdeviceptr dx)
    Parameters
    Type Name Description
    cudnnLRNMode lrnMode

    LRN layer mode of operation. Currently only CUDNN_LRN_CROSS_CHANNEL_DIM1 is implemented. Normalization is performed along the tensor's dimA[1].

    System.Double alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor yDesc

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference).

    CUdeviceptr y

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference).

    cudnnTensorDescriptor dyDesc

    Tensor descriptor and pointer in device memory for the top layer's cumulative loss differential data (error backpropagation). (Top layer is the later layer in the computation graph during inference).

    CUdeviceptr dy

    Tensor descriptor and pointer in device memory for the top layer's cumulative loss differential data (error backpropagation). (Top layer is the later layer in the computation graph during inference).

    cudnnTensorDescriptor xDesc

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference). Note that these values are not modified during backpropagation.

    CUdeviceptr x

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference). Note that these values are not modified during backpropagation.

    System.Double beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor dxDesc

    Tensor descriptor and pointer in device memory for the bottom layer's cumulative loss differential data (error backpropagation). (Bottom layer is the earlier layer in the computation graph during inference).

    CUdeviceptr dx

    Tensor descriptor and pointer in device memory for the bottom layer's cumulative loss differential data (error backpropagation). (Bottom layer is the earlier layer in the computation graph during inference).
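A call sketch, assuming `lrn` is a configured `LRNDescriptor` and the tensors and descriptors were kept from the forward pass (`TensorDescriptor` and the enum spelling are ManagedCuda names to verify):

```csharp
// Sketch (identifier spellings unverified): propagate gradients
// through the LRN layer. alpha and beta are passed by ref here.
double alpha = 1.0, beta = 0.0;
lrn.cudnnLRNCrossChannelBackward(
    cudnnLRNMode.CrossChannelDim1,   // only implemented mode
    ref alpha,
    yDesc.Desc, y.DevicePointer,     // forward output
    dyDesc.Desc, dy.DevicePointer,   // gradient from the layer above
    xDesc.Desc, x.DevicePointer,     // forward input (not modified)
    ref beta,
    dxDesc.Desc, dx.DevicePointer);  // receives the input gradient
```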


    cudnnLRNCrossChannelBackward(cudnnLRNMode, ref Single, cudnnTensorDescriptor, CUdeviceptr, cudnnTensorDescriptor, CUdeviceptr, cudnnTensorDescriptor, CUdeviceptr, ref Single, cudnnTensorDescriptor, CUdeviceptr)

    This function performs the backward LRN layer computation.

    Declaration
    public void cudnnLRNCrossChannelBackward(cudnnLRNMode lrnMode, ref float alpha, cudnnTensorDescriptor yDesc, CUdeviceptr y, cudnnTensorDescriptor dyDesc, CUdeviceptr dy, cudnnTensorDescriptor xDesc, CUdeviceptr x, ref float beta, cudnnTensorDescriptor dxDesc, CUdeviceptr dx)
    Parameters
    Type Name Description
    cudnnLRNMode lrnMode

    LRN layer mode of operation. Currently only CUDNN_LRN_CROSS_CHANNEL_DIM1 is implemented. Normalization is performed along the tensor's dimA[1].

    System.Single alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor yDesc

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference).

    CUdeviceptr y

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference).

    cudnnTensorDescriptor dyDesc

    Tensor descriptor and pointer in device memory for the top layer's cumulative loss differential data (error backpropagation). (Top layer is the later layer in the computation graph during inference).

    CUdeviceptr dy

    Tensor descriptor and pointer in device memory for the top layer's cumulative loss differential data (error backpropagation). (Top layer is the later layer in the computation graph during inference).

    cudnnTensorDescriptor xDesc

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference). Note that these values are not modified during backpropagation.

    CUdeviceptr x

    Tensor descriptor and pointer in device memory for the bottom layer's data. (Bottom layer is the earlier layer in the computation graph during inference). Note that these values are not modified during backpropagation.

    System.Single beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor dxDesc

    Tensor descriptor and pointer in device memory for the bottom layer's cumulative loss differential data (error backpropagation). (Bottom layer is the earlier layer in the computation graph during inference).

    CUdeviceptr dx

    Tensor descriptor and pointer in device memory for the bottom layer's cumulative loss differential data (error backpropagation). (Bottom layer is the earlier layer in the computation graph during inference).


    cudnnLRNCrossChannelForward(cudnnLRNMode, Double, cudnnTensorDescriptor, CUdeviceptr, Double, cudnnTensorDescriptor, CUdeviceptr)

    This function performs the forward LRN layer computation.

    Declaration
    public void cudnnLRNCrossChannelForward(cudnnLRNMode lrnMode, double alpha, cudnnTensorDescriptor xDesc, CUdeviceptr x, double beta, cudnnTensorDescriptor yDesc, CUdeviceptr y)
    Parameters
    Type Name Description
    cudnnLRNMode lrnMode

    LRN layer mode of operation. Currently only CUDNN_LRN_CROSS_CHANNEL_DIM1 is implemented. Normalization is performed along the tensor's dimA[1].

    System.Double alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor xDesc

    Tensor descriptor objects for the input and output tensors.

    CUdeviceptr x

    Input tensor data pointer in device memory.

    System.Double beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor yDesc

    Tensor descriptor objects for the input and output tensors.

    CUdeviceptr y

    Output tensor data pointer in device memory.
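As a usage sketch, the call below runs cross-channel LRN over a small NCHW tensor. `lrn` is a configured `LRNDescriptor`; the `TensorDescriptor`, `CudaDeviceVariable<float>`, and enum member spellings are taken from ManagedCuda and should be verified against the library version in use.

```csharp
// Sketch (enum/helper spellings unverified): y = LRN(x) over channels.
int n = 1, c = 8, h = 16, w = 16;
using (var xDesc = new TensorDescriptor())
using (var yDesc = new TensorDescriptor())
using (var x = new CudaDeviceVariable<float>(n * c * h * w))
using (var y = new CudaDeviceVariable<float>(n * c * h * w))
{
    xDesc.SetTensor4dDescriptor(cudnnTensorFormat.NCHW, cudnnDataType.Float, n, c, h, w);
    yDesc.SetTensor4dDescriptor(cudnnTensorFormat.NCHW, cudnnDataType.Float, n, c, h, w);

    lrn.cudnnLRNCrossChannelForward(
        cudnnLRNMode.CrossChannelDim1, // only implemented mode
        1.0,                           // alpha: keep the full result
        xDesc.Desc, x.DevicePointer,
        0.0,                           // beta: overwrite prior y contents
        yDesc.Desc, y.DevicePointer);
}
```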


    cudnnLRNCrossChannelForward(cudnnLRNMode, Single, cudnnTensorDescriptor, CUdeviceptr, Single, cudnnTensorDescriptor, CUdeviceptr)

    This function performs the forward LRN layer computation.

    Declaration
    public void cudnnLRNCrossChannelForward(cudnnLRNMode lrnMode, float alpha, cudnnTensorDescriptor xDesc, CUdeviceptr x, float beta, cudnnTensorDescriptor yDesc, CUdeviceptr y)
    Parameters
    Type Name Description
    cudnnLRNMode lrnMode

    LRN layer mode of operation. Currently only CUDNN_LRN_CROSS_CHANNEL_DIM1 is implemented. Normalization is performed along the tensor's dimA[1].

    System.Single alpha

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor xDesc

    Tensor descriptor objects for the input and output tensors.

    CUdeviceptr x

    Input tensor data pointer in device memory.

    System.Single beta

Scaling factor used to blend the layer output with any prior value in the destination tensor: dstValue = alpha*resultValue + beta*priorDstValue. See the cuDNN documentation on scaling parameters for additional details.

    cudnnTensorDescriptor yDesc

    Tensor descriptor objects for the input and output tensors.

    CUdeviceptr y

    Output tensor data pointer in device memory.


    Dispose()

Releases all resources held by this descriptor.

    Declaration
    public void Dispose()

    Dispose(Boolean)

Protected implementation of the IDisposable pattern.

    Declaration
    protected virtual void Dispose(bool fDisposing)
    Parameters
    Type Name Description
    System.Boolean fDisposing

    Finalize()

Finalizer; releases native resources if Dispose() was not called.

    Declaration
    protected void Finalize()

    GetLRNDescriptor(ref UInt32, ref Double, ref Double, ref Double)

    This function retrieves values stored in the previously initialized LRN descriptor object.

    Declaration
    public void GetLRNDescriptor(ref uint lrnN, ref double lrnAlpha, ref double lrnBeta, ref double lrnK)
    Parameters
    Type Name Description
    System.UInt32 lrnN

Receives the normalization window width (lrnN) stored in the descriptor. See SetLRNDescriptor for details.

    System.Double lrnAlpha

Receives the alpha variance scaling parameter stored in the descriptor. See SetLRNDescriptor for details.

    System.Double lrnBeta

Receives the beta power parameter stored in the descriptor. See SetLRNDescriptor for details.

    System.Double lrnK

Receives the k parameter stored in the descriptor. See SetLRNDescriptor for details.
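For example, reading the values back after configuring the descriptor (a sketch; `lrn` is an initialized `LRNDescriptor`):

```csharp
// Sketch: round-trip the descriptor parameters.
uint lrnN = 0;
double lrnAlpha = 0, lrnBeta = 0, lrnK = 0;
lrn.GetLRNDescriptor(ref lrnN, ref lrnAlpha, ref lrnBeta, ref lrnK);
// After lrn.SetLRNDescriptor(5, 1e-4, 0.75, 2.0), this reads back
// lrnN == 5, lrnAlpha == 1e-4, lrnBeta == 0.75, lrnK == 2.0.
```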


    SetLRNDescriptor(UInt32, Double, Double, Double)

    This function initializes a previously created LRN descriptor object.

    Declaration
    public void SetLRNDescriptor(uint lrnN, double lrnAlpha, double lrnBeta, double lrnK)
    Parameters
    Type Name Description
    System.UInt32 lrnN

Normalization window width in elements. The LRN layer uses a window [center-lookBehind, center+lookAhead], where lookBehind = floor((lrnN-1)/2) and lookAhead = lrnN-lookBehind-1, so for lrnN = 10 the window is [k-4...k...k+5], a total of 10 samples. For the DivisiveNormalization layer the window has the same extents as above in all 'spatial' dimensions (dimA[2], dimA[3], dimA[4]). By default lrnN is set to 5 in cudnnCreateLRNDescriptor.

    System.Double lrnAlpha

    Value of the alpha variance scaling parameter in the normalization formula. Inside the library code this value is divided by the window width for LRN and by (window width)^#spatialDimensions for DivisiveNormalization. By default this value is set to 1e-4 in cudnnCreateLRNDescriptor.

    System.Double lrnBeta

    Value of the beta power parameter in the normalization formula. By default this value is set to 0.75 in cudnnCreateLRNDescriptor.

    System.Double lrnK

Value of the k parameter in the normalization formula. By default this value is set to 2.0.
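The window arithmetic above can be checked by hand: for lrnN = 10, lookBehind = floor((10-1)/2) = 4 and lookAhead = 10-4-1 = 5, giving the window [k-4 ... k+5] of 10 samples. A configuration sketch using the documented parameter names (`lrn` is an `LRNDescriptor`):

```csharp
// Sketch: configure the descriptor with the cuDNN default values.
lrn.SetLRNDescriptor(
    lrnN: 5,         // window width
    lrnAlpha: 1e-4,  // variance scaling (divided by window size internally)
    lrnBeta: 0.75,   // power
    lrnK: 2.0);      // additive constant
```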

    Implements

    System.IDisposable