Denoising Using a Self-Adaptive Radial Basis Function

Received Jan 5, 2019 Revised May 1, 2019 Accepted Nov 24, 2019

This paper presents an adaptive form of the radial basis function (RBF) neural network that corrects a noisy image in a unified way, without estimating the noise model present in the image. The proposed method needs only a single noisy image to train the adaptive RBF network to learn the correction. A Gaussian kernel function is applied to reconstruct the local disturbances that appear because of the noise. The proposed adaptiveness in the RBF network is compared against fixed values of the spread and center of the kernel function. The proposed solution can correct an image suffering from different varieties of noise, such as speckle noise, Gaussian noise, and salt & pepper noise, either separately or in combination. Various standard test images with different levels of noise density are considered for testing, and the performance of the proposed algorithm is compared with the adaptive Wiener filter.


INTRODUCTION
Designing a noise model for underwater images has become a challenging area of research. In the literature, several filters have been designed to remove noise from underwater images, but a filter that works well for one type of noise may perform poorly for another. Practically, in most applications the environment contains a mixture of noises, so there is a need for a unified noise-correction model that has 'knowledge' of how to remove the noise, rather than one that corrects pixels according to the statistical change that noise produces in a local region. For example, in SAR images, speckle noise coexists with Gaussian noise as well as salt & pepper noise. In this regard, the artificial neural network can be considered one of the best choices. Among the various possibilities under artificial neural networks, the radial basis function model is considered in this paper because of its universal approximation capability combined with a simple architecture. An adaptive approach is applied in defining both the center and the spread of the kernel function to make learning better and faster.
The approach adopted in [1] introduces nonlinear network models, such as feedforward networks, together with the concepts of generalization and interpolation. [2] proposed a distinguished learning method based on the orthogonal least squares method; the algorithm constructs an adequate network by choosing RBF centers one by one and has been applied to two different signal processing applications. [3] proposed CRBFN for low-order digital channel equalization with an extra parameter to reduce nonlinear distortion. [4] presented an adaptive radial basis function; the results indicate improved performance for a smaller number of centers. [5] proposed supervised learning based on gradient descent training. [6] presented a radial basis function approach for nonlinear mapping from R^n to R using conditional or fuzzy clustering. [7] proposed a reformulated radial basis function using supervised learning based on gradient descent training. [8] introduced a new learning strategy, called the interactive gradient learning method, that involves not only local optimization of the variances of the activation function but also global optimization. [9] developed a stochastic search learning algorithm that proved better than back-propagation error learning for recurrent artificial neural networks. [10] proposed a conditional c-means learning algorithm that performs better than the unconditional c-means algorithm. [11] improved the conjugate learning algorithm in terms of speed of convergence. [12] proposed a clustering concept and gave a comparative analysis of different kernel functions implemented on different c-means algorithms, such as rough c-means, fuzzy c-means, and intuitionistic fuzzy c-means. [13] proposed a method for removing speckle noise from SAR images with better performance by modifying the window size of the Lee filter and then compressing using the wavelet transform.
[14] proposed a new speckle reduction method for SAR imagery based on a CNN. [15] proposed an algorithm to restore images corrupted by impulsive salt & pepper noise by estimating the intensity values of noisy pixels from their non-noisy neighbors; the reduced window size removes impulsive salt & pepper noise while also preserving edges, but the algorithm cannot be applied to speckle or Gaussian noise. [16] proposed a non-local switching filter-convolutional neural network denoising algorithm (NLSF-CNN) to remove salt and pepper noise. NLSF-CNN consists of two steps, an NLSF processing step and a CNN training step: noisy images are pre-processed by the NLSF using non-local information, then divided into patches that are given to the CNN for training, and the resulting CNN denoising model is used for future noisy images. [17] introduced a speckle noise reduction method for a digital holographic imaging system that uses a multi-scale CNN model and handles noisy images containing additive and multiplicative noise of various levels, leading to good quantitative and qualitative performance in speckle noise reduction.

Aim of the paper
In this paper, an adaptive RBF is used to remove not only speckle and Gaussian noise but also mixtures of noise. An intelligent approach is applied to reduce the noise level without estimating the noise model present in the image, through the use of a radial basis function. A unified solution is presented to reduce the level of different types of noise as well as their mixtures. The proposed intelligent unified noise reduction (IUNR) algorithm applies an adaptive form of learning in the RBF to correct the noisy information by taking the noisy pixels as input and keeping the normal pixels as the target. The proposed adaptive form of learning requires fewer iterations to train the RBF and delivers consistent performance over different trials. The proposed method is applied to correct different forms of noise and yields superior results compared with the adaptive Wiener filter.

RESEARCH METHOD: RADIAL BASIS FUNCTION (RBF) NEURAL NETWORK
The RBF network performs a nonlinear transformation over an input vector before the input is used for classification. Through such a nonlinear transformation, a linearly non-separable problem can be converted into a linearly separable one. The concept of RBF networks is similar to that of k-nearest neighbor models: the basic idea is that the predicted target value of an item is likely to be about the same as that of other items with close values of the predictor variables. An RBF network positions one or more RBF neurons in the space described by the predictor variables, which has the same dimensionality as the input. The Euclidean distance from the evaluated point to the center of each neuron is computed, and the RBF (kernel function) is applied to this distance to determine the weight (influence) of each neuron: the farther a neuron is from the point being evaluated, the less influence it has. The radial function is so named because the radial distance is the argument to the function. In detail, an RBF neural network has an input layer, a hidden layer, and an output layer. The input layer carries the input and has as many neurons as there are inputs. The hidden layer has one or more neurons, each with a kernel function, for example the Gaussian function, and the output of this hidden layer becomes the weighted input to the output-layer neurons. The final output appears as the sum of the inputs to the output-layer neurons. The Gaussian function is characterized by two parameters, the center 'c' and the spread 'r'. So, in the proposed architecture, three sets of parameters change during the learning process: the weights 'w' between the hidden and output nodes, and the center 'c' and spread 'r' of the kernel function.
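As a minimal sketch of this forward pass (the centers, spreads, and weights below are made-up illustrative values, not trained parameters), the distance computation, Gaussian kernel, and weighted sum can be written as:

```python
import numpy as np

def rbf_forward(x, centers, spreads, weights):
    """Forward pass of an RBF network: Euclidean distance to each
    center, Gaussian kernel, then a weighted sum at the output."""
    # Distance from the input vector to each hidden-unit center.
    d = np.linalg.norm(centers - x, axis=1)
    # Gaussian kernel applied to the distances.
    phi = np.exp(-(d ** 2) / (2.0 * spreads ** 2))
    # Output is the weighted sum of hidden-layer activations.
    return weights @ phi

# Hypothetical parameters: 2 hidden units in a 3-D input space.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
spreads = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

y = rbf_forward(np.zeros(3), centers, spreads, weights)
```

The input coincides with the first center, so that unit contributes its full weight while the second unit's contribution is attenuated by its distance.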

2.1. Training and the Need for Adaptiveness
In practice, the neural network is trained with a supervised method. Through the learning algorithm, the network weights are adjusted so that the error between the actual and desired responses is minimized with respect to some optimization criterion. After training, the network performs interpolation in the output vector space; hence the network achieves a nonlinear mapping between the input and output vector spaces. The architecture of the RBF NN consists of three layers: an input layer, a hidden layer, and an output layer. The output of the RBF NN is calculated according to Eq. (1):

y_k(x) = Σ_{i=1}^{N} w_ik φ(||x − c_i||)   (1)

where ||·|| indicates the Euclidean norm, w_ik are the weights in the output layer of the radial basis function network, N is the number of neurons in the hidden layer, and c_i are the RBF centers in the input space. In the hidden layer, for each neuron, the Euclidean distance between its associated center and the input to the network is computed. Each hidden-layer neuron outputs a nonlinear function of this distance, and the weighted sums of these hidden-layer outputs form the output of the network. The functional form of φ(·) is assumed to be given, and is mostly the Gaussian function of Eq. (2):

φ(r) = exp(−r² / (2σ²))   (2)

where the parameter σ controls the "width" of the RBF and is commonly referred to as the spread parameter. The centers are a subset of the input data which samples the input vector space.
In the Gaussian RBF, the spread parameter is commonly set according to the following heuristic relationship:

σ = d_max / √(2K)   (3)

where d_max is the maximum Euclidean distance between the selected centers and K is the number of centers. Using Eq. (3), the RBF of a neuron in the hidden layer is given by

φ(||x − c_i||) = exp(−(K / d_max²) ||x − c_i||²)   (4)

Conventionally, the center values are randomly sampled from the data set and the standard deviation is measured using the available Euclidean distances. This approach is appropriate only when a highly concentrated dataset is available, as very little variation exists. Optimal values of the centers and the corresponding standard deviations should instead be provided to improve performance. The important task is therefore to train these parameters, which is done by updating each parameter depending on the error in the output. A gradient-based approach is applied for the update during each iteration.
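The heuristic spread σ = d_max / √(2K) can be sketched as follows; the two centers are hypothetical values chosen only to make the arithmetic easy to follow:

```python
import numpy as np

def heuristic_spread(centers):
    """Common heuristic: sigma = d_max / sqrt(2K), where d_max is the
    maximum pairwise Euclidean distance between the K selected centers."""
    K = len(centers)
    # Maximum distance over all distinct center pairs.
    d_max = max(np.linalg.norm(ci - cj)
                for i, ci in enumerate(centers)
                for cj in centers[i + 1:])
    return d_max / np.sqrt(2.0 * K)

# Two hypothetical centers a unit distance apart: sigma = 1 / sqrt(4) = 0.5
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
sigma = heuristic_spread(centers)
```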

2.2.Adaptive RBF NN
In the fixed-centers RBF NN, the only adjustable parameters of the network are the weights of the output layer. This approach is simple; however, adequate sampling of the input requires a large number of centers selected from the input data set, which produces a relatively large network.
In the proposed method, all three sets of network parameters can be adjusted: the weights, the positions of the RBF centers, and the widths of the RBFs. Hence supervised training adapts not only the weights in the output layer but also the positions of the centers and the spread parameters in the hidden layer for every processing unit. The primary step is therefore to define the error cost function, as given in Eq. (5):

E = (1/2) Σ_k (d_k − y_k)²   (5)

where d_k is the desired output and y_k is the actual network output.
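A sketch of one such gradient step for a single-output network is given below. The chain-rule update formulas are the standard ones for a Gaussian RBF with a squared-error cost; the learning rate and the training pair are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def adaptive_step(x, target, centers, spreads, weights, lr=0.1):
    """One gradient-descent step on E = 0.5 * (target - y)^2 that
    updates all three parameter sets: weights, centers, and spreads."""
    diff = x - centers                        # (K, n): x minus each center
    sq = np.sum(diff ** 2, axis=1)            # squared distances
    phi = np.exp(-sq / (2.0 * spreads ** 2))  # Gaussian activations
    y = weights @ phi                         # network output
    e = target - y                            # output error
    # Chain-rule gradients of E w.r.t. each parameter set.
    w_new = weights + lr * e * phi
    c_new = centers + lr * e * (weights * phi / spreads ** 2)[:, None] * diff
    s_new = spreads + lr * e * weights * phi * sq / spreads ** 3
    return c_new, s_new, w_new, e

# Hypothetical single training pair; the error should shrink over steps.
rng = np.random.default_rng(0)
x, target = np.array([0.2, 0.8]), 1.0
c = rng.uniform(-0.5, 0.5, (2, 2))
s = np.ones(2)
w = rng.uniform(-0.5, 0.5, 2)
errors = []
for _ in range(20):
    c, s, w, e = adaptive_step(x, target, c, s, w)
    errors.append(abs(e))
```

Because the centers and spreads move along with the weights, each Gaussian unit can reposition and reshape itself around the data instead of relying on a lucky initial sampling.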

Functional Block Diagram
The functional block diagram of the proposed method is shown in Fig. 1.a and Fig. 1.b. The proposed method is carried out in two phases, a training phase and a test phase. In the training phase (Fig. 1.a), a noise-free image, 'Lena', is taken to train the network; it is made noisy by adding salt & pepper noise for training purposes. The noisy image is then pre-processed: the image pixel matrix is first normalized to the range [0, 1] by dividing each pixel value by the maximum pixel value, and the normalized matrix is then transformed into a number of blocks. Each block matrix is transformed into a vector which serves as the input to the RBF. The target for an input vector is the vector formed from the corresponding spatial information in the noise-free image. An adaptive form of learning is applied, as discussed in section 3, to acquire the knowledge needed to correct the noise. Once learning is completed, the RBF parameters (weights, centers, and spreads) are stored and used to denoise the noisy image at test time. During the test phase (Fig. 1.b), any normal or noisy image is pre-processed and then applied as the input to the RBF carrying the trained parameters. Because this trained network has the knowledge to correct its input, the disturbance present at the input side is corrected without estimating the type of disturbance. Salt & pepper noise is used for training so that there is a maximum level of variation in the pixel information and the RBF can learn under this extreme condition. The performance characteristics of the static RBF and the adaptive RBF are compared with respect to mean square error; the adaptive RBF performs well, and hence denoising is done using the adaptive RBF. This self-adaptive method also performs well in comparison to the adaptive Wiener filter.
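The normalization and blocking steps can be sketched as below, assuming non-overlapping 3*3 blocks (the block size used in the experiments) and a hypothetical small test image:

```python
import numpy as np

def image_to_block_vectors(img, block=3):
    """Pre-processing: normalize pixels to [0, 1] and split the image
    into non-overlapping block x block patches, each flattened into a
    vector for the RBF input layer."""
    img = img.astype(float) / img.max()   # normalize to [0, 1]
    h, w = img.shape
    h, w = h - h % block, w - w % block   # drop ragged edges, if any
    vecs = (img[:h, :w]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2)               # group pixels by patch
            .reshape(-1, block * block))  # one row vector per patch
    return vecs

# Hypothetical 6x6 test image with pixel values 0..35 -> four 9-vectors.
img = np.arange(36, dtype=np.uint8).reshape(6, 6)
vecs = image_to_block_vectors(img)
```

Each row of `vecs` is one 9-element input vector; at training time the matching patch of the noise-free image, processed the same way, supplies the target vector.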

RESULTS AND ANALYSIS
To evaluate the developed method, the noisy Lena image shown in Fig. 2 was taken for training, and both the fixed RBF and the adaptive RBF were applied independently for 5 iterations. To assess consistency of performance, the experiment was repeated for 5 independent trials. Performance analysis is given in terms of the obtained mean square error. The experiment was performed on different grayscale images of size 512*512 pixels. Pre-processing was applied to each image in terms of normalization, and each image was divided into a set of 3*3 pixel blocks. An RBF neural network architecture containing 9 input nodes, 2 hidden nodes, and 9 output nodes was created. Weights were initialized as random numbers drawn from a uniform distribution in the range [-0.5, 0.5]. The mean error in learning of the SRBF and ARBF is shown in Fig. 3, while their convergence characteristics for all trials are shown in Fig. 4. It is clear that a much higher learning error is observed with the SRBF than with the ARBF.

Simulation results of SRBF vs ARBF
To understand the benefits of the proposed adaptiveness in the RBF in comparison to the static RBF, the learning behavior of both networks with a noisy image was considered for 5 different trials. The Lena image, without and with noise, was used for training: the noisy image, after pre-processing, was applied as the input, and the corresponding spatial information in the normal image was taken as the target. The mean square error (MSE) observed under the different trials is shown in Table 1. Learning with the SRBF was inadequate because only the output weights were changing to acquire the knowledge, so it was difficult to minimize the error as the spatial information changed from one location to another. Moreover, the variation of the final MSE over trials with the same number of iterations was also very large, because the selection of center and spread parameter values changed from one trial to another. Because of the high error (mean value 2.1016) and high variation across trials (standard deviation: 1.5098), the SRBF cannot be considered an appropriate architecture for the required purpose. When the proposed form of adaptiveness is applied in the RBF, the performance obtained under the different trials, as shown in Table 1, is appealing (mean error: 0.3560, std. dev.: 0.0089). The learning convergence characteristics for the SRBF and ARBF under all trials are also shown in Fig. 4. It can be observed that there was very little change in the error for the SRBF even after many iterations, while with the ARBF a sharp decline in the error value was observed over the iterations because of the changes in the centers and spreads.

Figure 3. Comparison of static RBF and adaptive RBF with respect to mean square error

The comparison of the static and adaptive RBF in Fig. 4 indicates that the convergence rate is faster for the adaptive RBF than for the static RBF.
The comparative performance of the proposed ARBF noise reduction model against the adaptive Wiener filter is shown in Table 2. It can be observed that, under various noise reduction measures, the proposed method delivers better performance. For the direct quality measure, PSNR, an improvement of more than 4 dB was observed with the proposed method, which is a very significant value. To give a clearer picture of the comparison, Fig. 5 shows a bar plot of the PSNR for the noisy image and the images corrected by the AWF and by IUNR. To give a visual impression of the quality of noise reduction achieved by the proposed IUNR, Fig. 6 also shows the corrected image.
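PSNR here follows the usual definition, 10·log10(peak² / MSE). A minimal sketch with an illustrative uniform-error image (not the paper's data):

```python
import numpy as np

def psnr(reference, corrected, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - corrected.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative check: a uniform error of 5 gray levels gives MSE = 25.
ref = np.zeros((8, 8))
degraded = ref + 5.0
value = psnr(ref, degraded)
```

Because PSNR is logarithmic, a 4 dB gain corresponds to reducing the mean squared error by a factor of about 2.5.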

Simulation results of a mixed noise model (speckle & Gaussian noise)
Practically, no environment contains a single kind of noise; rather, it contains a mixture of different types of noise, which makes appropriate modeling of the existing noise very difficult. In such cases, applying a filter dedicated to a particular noise model degrades the performance further. The proposed IUNR method has the capability to reconstruct the image even when it suffers from different types of noise. In Fig. 7, the boat image is shown with a mixture of Gaussian and speckle noise; it can be observed that the noise has destroyed valuable information of the original image. The performance obtained when the noisy image is corrected with the AWF and with IUNR is shown in Table 3. As in the single-noise case, the IUNR performance with the mixed noise model is superior to that of the AWF. Fig. 8 shows the learning characteristics of the ARBF over 5 iterations. The corrected image is shown in Fig. 9, where information that was nearly impossible to visualize in the noisy image is easy to find after correction.
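A sketch of how such a Gaussian-plus-speckle mixture can be synthesized on a [0, 1] image is given below; the variance values and test image are illustrative assumptions, not the paper's noise settings:

```python
import numpy as np

def add_mixed_noise(img, gauss_var=0.01, speckle_var=0.04, seed=0):
    """Corrupt a [0, 1] image with both noise models at once:
    multiplicative speckle, img * (1 + n), followed by additive
    zero-mean Gaussian noise, with the result clipped back to [0, 1]."""
    rng = np.random.default_rng(seed)
    # Speckle: multiplicative zero-mean Gaussian perturbation.
    speckled = img * (1.0 + rng.normal(0.0, np.sqrt(speckle_var), img.shape))
    # Additive Gaussian noise on top of the speckled image.
    noisy = speckled + rng.normal(0.0, np.sqrt(gauss_var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

# Hypothetical flat gray test image.
img = np.full((16, 16), 0.5)
noisy = add_mixed_noise(img)
```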

Comparative analysis of proposed work with earlier works
The neural network's universal approximation property is used here to reconstruct the noisy image. Any kind of noise causes a local disturbance in the correlation of pixels with their neighbors, and the knowledge of how to correct the local disturbance, acquired at learning time, is used to filter the noisy image. Such a model has the advantage of generalized noise reduction capability instead of being a noise-specific model. Its limitation lies in deciding the architecture size, which can affect the quality of the universal approximation: too many hidden nodes will pass the local disturbance through, while very few will block the original local spatial variation. The novelty lies in defining a generalized model of noise reduction, whereas almost all existing methods assume a certain model of noise distribution and perform well for that specific noise model but not for others. The self-adaptive form of the proposed learning model delivers faster and more precise learning (Table 4). An MLP (multilayer perceptron) architecture has a high computation cost compared to the radial basis function architecture. The radial basis function network uses an adaptive form of the Gaussian basis function, which supports the local correlation of pixels during learning, whereas the MLP uses a sigmoid function, which is less sensitive to local correlation.

CONCLUSION
In this paper, a unified approach is used to reduce the level of different types of noise. To reduce the noise level, an adaptive form of the radial basis function neural network has been applied, with adaptiveness given to the spread and center position of the kernel function through the gradient method. It is observed that the proposed adaptiveness greatly reduces the error within very few iterations of the learning process. The adaptive RBF was trained to reconstruct the noisy image, and the learned approximation knowledge was applied to correct other varieties of images suffering from homogeneous and heterogeneous types of noise. The performance of the proposed solution was compared with the adaptive Wiener filter and delivered a better reduction in the level of noise.