
A Variable Order Fractional Differential-Based Texture Enhancement Algorithm with Application in Medical Imaging

  • Qiang Yu,

    qiang.yu@cai.uq.edu.au

    Current address: Centre for Advanced Imaging, University of Queensland, Brisbane, Queensland, Australia

    Affiliation The School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia

  • Viktor Vegh,

    Affiliation Centre for Advanced Imaging, University of Queensland, Brisbane, Queensland, Australia

  • Fawang Liu,

    Contributed equally to this work with: Fawang Liu, Ian Turner

    Affiliation The School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia

  • Ian Turner

    Contributed equally to this work with: Fawang Liu, Ian Turner

    Affiliation The School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia

Abstract

Texture enhancement is one of the most important techniques in digital image processing and plays an essential role in medical imaging, since textures carry discriminative information. Most image texture enhancement techniques use classical integral order differential mask operators or fractional differential mask operators with a fixed fractional order. These masks can produce excessive enhancement of low spatial frequency content, insufficient enhancement of high spatial frequency content, and retention of high spatial frequency noise. To improve upon existing approaches to texture enhancement, we derive an improved Variable Order Fractional Centered Difference (VOFCD) scheme which dynamically adjusts the fractional differential order instead of fixing it. The new VOFCD technique is based on the second order Riesz fractional differential operator using a Lagrange 3-point interpolation formula, for both grey scale and colour image enhancement. We then use this method to enhance photographs and a set of medical images related to patients with stroke and Parkinson’s disease. The experiments show that our improved fractional differential mask has a higher signal to noise ratio value than the other fractional differential mask operators. Based on the corresponding quantitative analysis we conclude that the new method offers a superior texture enhancement over existing methods.

Introduction

Texture plays an important role in the identification of regions of interest in an image, hence texture enhancement is an essential component of digital image processing [1]. The aim of texture enhancement is to improve the overall visual effect of the image by purposefully emphasizing local or global characteristic features while suppressing characteristics of little interest. The quality of the image, especially the texture, is critical in the clinical diagnosis and identification of pathology based on medical images.

Integral order differential mask operators, such as the Sobel, Roberts, Prewitt and Laplacian techniques [1, 2], are widely used in image enhancement algorithms. However, there are several disadvantages to using an integral order differential operator. For example, first order masks produce wide edges after image processing; second order masks generate double responses when the grey scale changes, and are therefore sensitive to noise [2, 3].

A growing number of research projects in science and engineering utilise fractional calculus to improve the understanding and description of physical phenomena [4–14]. These benefits provide an excellent motivation for researchers to apply fractional derivatives to digital image processing [3, 8, 15–27]. Chen et al. proposed a digital fractional order Savitzky–Golay differentiator, and experiments showed that it can estimate the fractional order derivative of the contaminated signal [15, 16]. Based on the Riemann–Liouville definition, Zhang et al. developed an algorithm to enhance the texture and edges of digital images [3]. Furthermore, the mask used had an improved visual effect with richer texture information than that obtained by integral order masks. Sejdić et al. showed that the fractional Fourier transform is potentially a very powerful tool in signal processing [17]. A stochastic fractal model was also developed for image processing, which is basically an analysis and synthesis tool for images [18]. Mathieu et al. applied fractional differential masks to detect edges, which improved the criterion of thin detection, or detection selectivity for parabolic luminance transitions [19]. Generally, robustness to noise was improved using these masks. Based on the Grünwald–Letnikov definition, Gao et al. applied a quaternion fractional differential to a colour image [23]. When applied to every channel of the image the experiments showed that their method, when compared to Sobel and mixed edge fractional techniques, has fewer false negatives in the textured regions and is better at detecting edges that are partially defined by texture. The use of an improved fractional differential operator based on a piecewise quaternion resulted in excellent textural detail enhancement of rich-grained digital images [21]. The authors subsequently proposed four new fractional directional differentiation masks and corresponding numerical simulation rules [22].
Experiments showed that their method can enhance the texture details of rich–grained digital images. Garg and Singh proposed a Grünwald–Letnikov fractional differential mask using a Lagrange 3–point interpolation formula for image texture enhancement [27]. The outcome was enhancement of both the image texture and lightness, and the information entropy was found to improve by 7%. Based on the Grünwald–Letnikov definition, Pu et al. derived some algorithms that perform well when applied to grey scale images, but produce distorted colour images [25]. They showed that their two algorithms YiFeiPU-1 and YiFeiPU-2 are widely applicable; YiFeiPU-2, which uses a Lagrange 3-point interpolation formula, performs best based on a relative error analysis. Based on this work, we extended the method to a second order Riesz fractional differential operator FCD-1 [8, 26]. The use of a fractional centered difference scheme enables our method to provide higher signal-to-noise ratios and superior image quality compared with the classical integral order differential mask operators and other first order fractional differential operators, such as YiFeiPU-1 [25]. These existing fractional order differentiation methods employ a fixed fractional order. Overall, fixed order techniques tend to excessively enhance the low spatial frequency content of an image, whilst insufficiently enhancing high spatial frequency content and amplifying image noise [3, 8, 24–26].

An optimization algorithm for choosing the fractional order parameter has been proposed to overcome limitations of fixed order methods [16]. Gilboa et al. adjusted the nonlinear diffusion coefficient locally according to image features, such as edges, textures and moments [28]. They illustrated that this approach works well, and sharpening and denoising can be combined together in the enhancement of grey–level and colour images. Huang et al. outlined an adaptive image enhancement algorithm that dynamically adjusts the fractional differential order according to the image local statistics and structural features [29]. Results were provided for the Lena image only, and a rigorous quantitative analysis of their findings was not performed. They did show that their method had a higher signal-to-noise ratio and superior image quality than the traditional fractional differential operator and classical integral order differential mask operators. In view of these findings, it appears that variable fractional order calculus provides benefits over fixed order methods. Hence, our work aims to further develop variable fractional order methods in the context of image enhancements.

In our previous work we showed that the use of the Riesz fractional differential operator instead of the Grünwald–Letnikov definition results in higher accuracy due to the positive benefits of using a symmetric second order instead of a one sided first order fractional operator [8, 26]. In view of these findings, we changed the YiFeiPU-2 method [25] by replacing the Grünwald–Letnikov definition with the Riesz fractional differential operator. Moreover, we incorporated variable fractional order [29] into the algorithm to estimate the fractal dimension of a local region instead of using fixed fractional order differentiation. The outcome is an image enhancement method applicable across a range of image types that adapts the fractional order of the differential to local features. As such, we achieve greater flexibility by being able to change the weights used in the reconstruction algorithm. The key differences of our method from previous work [8, 26] are that we apply a Lagrange 3-point interpolation formula to the Riesz fractional differential operator and incorporate variable fractional order into the algorithm. Key differences from the method of Pu et al. [25] are that we replace the Grünwald–Letnikov definition by the Riesz fractional differential operator and incorporate variable fractional order into the algorithm. Key differences from the method of Huang et al. [29] are that we replace the Grünwald–Letnikov definition by the Riesz fractional differential operator and apply a Lagrange 3-point interpolation formula to the Riesz fractional differential operator to achieve different effects based on the choice of the weights used in the reconstruction.

This paper is organized as follows. Firstly, the mathematical preliminaries used throughout the paper are introduced. Secondly, we give the theoretical analysis for our improved fractional differential mask, termed VOFCD, based on the second order Riesz fractional differential operator using a Lagrange 3-point interpolation formula. Then, the VOFCD method is presented. Furthermore, we present an overview of the data acquisition process and give an outline of the algorithm for enhancing the quality of a grey–level image. Finally, we compare our algorithm with Pu et al.’s best algorithm (YiFeiPU-2) [25] and Yu et al.’s algorithm (FCD-1) [8, 26] when applied to medical images of the human brain.

Preliminary Knowledge

Here we outline the important definitions used throughout the paper.

Definition 1. The v-order Grünwald–Letnikov fractional derivative with respect to x for the finite interval x ∈ [a, X] can be expressed as [30, 31]:
(1) \(\dfrac{\partial^v s(x,y)}{\partial x^v} = \lim_{h \to 0} \dfrac{1}{h^v} \sum_{k=0}^{[(x-a)/h]} (-1)^k \binom{v}{k}\, s(x-kh,\, y),\)
where v is any real number. The v-order Grünwald–Letnikov fractional derivative with respect to y can be defined in a similar manner.
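
As a numerical aside, the alternating binomial weights (−1)^k C(v, k) in Eq (1) satisfy a simple recurrence, which avoids gamma-function evaluations. A minimal sketch (function names are illustrative, not from the paper):

```python
def gl_coefficients(v, n):
    """First n+1 weights (-1)**k * C(v, k) of the Grunwald-Letnikov sum in
    Eq (1), generated with the recurrence c_k = c_{k-1} * (1 - (v + 1) / k)."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1.0 - (v + 1.0) / k))
    return c

def gl_derivative(samples, v, h):
    """Truncated GL fractional derivative; `samples` lists s(x - k*h, y)
    for k = 0, 1, 2, ... (newest sample first)."""
    c = gl_coefficients(v, len(samples) - 1)
    return sum(ck * sk for ck, sk in zip(c, samples)) / h ** v
```

For v = 1 the weights reduce to [1, −1, 0, …], so the sum collapses to the ordinary backward difference, which is a quick sanity check on the recurrence.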

Definition 2. The v-order (0 < v ≤ 2) Riesz fractional derivative with respect to x for the infinite interval −∞ < x < +∞ is defined as [8]:
(2) \(\dfrac{\partial^v s(x,y)}{\partial |x|^v} = -c_v \left( {}_{-\infty}D_x^v + {}_{x}D_{+\infty}^v \right) s(x,y),\)
where cv = [2 cos(πv/2)]−1 (v ≠ 1), n − 1 < v ≤ n, and
(3) \({}_{-\infty}D_x^v\, s(x,y) = \dfrac{1}{\Gamma(n-v)} \dfrac{\partial^n}{\partial x^n} \int_{-\infty}^{x} \dfrac{s(\xi,y)}{(x-\xi)^{v-n+1}} \, d\xi,\)
(4) \({}_{x}D_{+\infty}^v\, s(x,y) = \dfrac{(-1)^n}{\Gamma(n-v)} \dfrac{\partial^n}{\partial x^n} \int_{x}^{+\infty} \dfrac{s(\xi,y)}{(\xi-x)^{v-n+1}} \, d\xi.\)
The Riesz fractional derivative of order v (0 < v ≤ 2) with respect to y can be defined in a similar manner.

The Fractional Differential Mask

Utilizing the second order fractional centered difference scheme [8, 26, 32], we discretize the Riesz fractional derivative ∂^v s(x, y)/∂|x|^v (0 < v ≤ 2) with respect to x with step h as:
(5) \(\dfrac{\partial^v s(x,y)}{\partial |x|^v} \approx -\dfrac{1}{h^v} \sum_{k=-\infty}^{\infty} \dfrac{(-1)^k\, \Gamma(v+1)}{\Gamma(v/2 - k + 1)\, \Gamma(v/2 + k + 1)}\, s(x-kh,\, y).\)
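
The gamma-function weights of the fractional centered difference can be tabulated directly. A minimal sketch (the dictionary keys are the offsets k; a quick check is that v = 2 must recover the classical three-point Laplacian stencil):

```python
from math import gamma

def centered_coeffs(v, m):
    """Weights g_k, k = -m..m, of the fractional centered difference in
    Eq (5): g_k = (-1)**k * Gamma(v+1) / (Gamma(v/2-k+1) * Gamma(v/2+k+1)).
    The Riesz derivative is then approximated by -h**(-v) * sum_k g_k * s(x-kh)."""
    return {k: (-1) ** k * gamma(v + 1)
               / (gamma(v / 2 - k + 1) * gamma(v / 2 + k + 1))
            for k in range(-m, m + 1)}
```

The weights are symmetric in k, which is the source of the scheme's second order accuracy compared with the one-sided Grünwald–Letnikov weights.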

Assume that r = x + (vh/2) − kh, then r ∈ [x − kh, x + h − kh]. Using a Lagrange 3–point interpolation formula for the three neighboring nodes s(x + h − kh, y), s(x − kh, y), and s(x − h − kh, y), we have (6)

Noting that r = x + (vh/2) − kh, we then obtain (7)
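
Writing t = v/2 for the offset of r from the central node x − kh (in units of h), the Lagrange 3-point weights implied by Eqs (6)–(7) can be sketched as follows (derived here from the standard Lagrange basis; the function name is illustrative):

```python
def lagrange3_weights(v):
    """Weights expressing s(x + v*h/2 - k*h, y) through the three nodes
    s(x+h-k*h, y), s(x-k*h, y), s(x-h-k*h, y); t = v/2 is the offset in
    units of h, and the weights are t(t+1)/2, 1 - t**2, t(t-1)/2."""
    t = v / 2.0
    return t * (t + 1.0) / 2.0, 1.0 - t * t, t * (t - 1.0) / 2.0
```

The three weights always sum to one, and the interpolation is exact for quadratics, which is consistent with the second order accuracy of the surrounding scheme.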

Compared with s(x − kh, y) in Eq (5), s(x + (vh/2) − kh, y) in Eq (7) is a linear combination of the neighboring nodes, which implies that s(x + (vh/2) − kh, y) contains more information from its neighborhood and leads to richer texture details. Thus, replacing s(x − kh, y) in Eq (5) with s(x + (vh/2) − kh, y), and substituting Eq (7) into Eq (5), we obtain a new Riesz fractional differential with respect to x as (8)

By first noting that for 0 < v < 1 (0 < t < 1) the identity in Eq (9) holds, we can rewrite Eq (8) in the form (10), where the coefficients are given by Eq (11) for k = ±1, ±2, ⋯. It can be seen from Eq (10) that the Riesz fractional derivative can be discretized into two parts, one located on the positive x–axis and the other on the negative x–axis. From this formulation the Riesz fractional mask can be obtained.

In the context of medical images, the authors in [8, 25, 26] note that the maximum change in grey level is bounded, and that the shortest distance over which the grey level can change is one pixel. Therefore, the pixel is used as the unit of duration of a two–dimensional digital image s(x, y) with respect to the two variables x and y. Here, the duration is the dimension of the image matrix, assuming that the durations of x and y are [0, X] and [0, Y], respectively. The uniform spacings for the x and y–coordinates are hx = hy = 1, and the numbers of divisions are Nx = [X/hx] = [X] and Ny = [Y/hy] = [Y].

For a two–dimensional digital image s(x, y) at pixel signal (x1, y1) on the positive x–axis with region [0, x1 + h], there are N + 2 pixels. After truncation, the anterior n + 2 approximate fractional centered difference of the Riesz fractional differential with order 0 < v < 1 on the positive x–axis is: (12)

Similarly, the anterior n + 2 approximate fractional centered difference of the Riesz fractional differential with order 0 < v < 1 on the negative x–axis is: (13)

To obtain the fractional differential masks for the eight symmetric directions and make them rotationally invariant, we implement eight fractional differential masks positioned respectively on the negative x–axis, positive x–axis, negative y–axis, positive y–axis, left downward diagonal, right upward diagonal, left upward diagonal, and right downward diagonal. They are correspondingly denoted by Wl (l = 1, 2, …, 8) (see Fig 1).

Fig 1. Fractional differential operator VOFCD for the eight directions.

(a) W1(negative x–axis), (b) W2(positive x–axis), (c) W3(negative y–axis), (d) W4 (positive y–axis), (e) W5(left downward diagonal), (f) W6 (right upward diagonal), (g) W7(left upward diagonal), (h) W8 (right downward diagonal).

https://doi.org/10.1371/journal.pone.0132952.g001

In Fig 1, Cs0 is the mask coefficient associated with the pixel of interest. When n = 2m − 1, a (2m + 1) × (2m + 1) fractional differential mask is implemented. To ensure that the fractional differential mask remains symmetric and the centre of the mask aligns with a pixel, n is generally taken as an odd positive integer.

Digital image processing operates directly on discrete pixels, and the algorithm normally adopts a spatial-domain filtering scheme whose principle is to move the mask pixel by pixel [25]. Therefore, separate algorithms are required for the grey scale and colour image fractional differential masks.

Next, we deduce the following fractional differential algorithm, VOFCD, based on the Riesz fractional differential operator Eq (2). To treat the Nx × Ny digital grey–level image s(x, y), we perform a convolution filter on the above eight directions in the (2m + 1) × (2m + 1) masks, and propose that the eight fractional differential masks are computed by what we refer to as the VOFCD operator: (14) (15) where l1 = 1, 2, 3, 4 and l2 = 5, 6, 7, 8. Thus, we have the digital grey–level image sI(x, y) as (16) where Csk is the mask coefficient given in Eq (17). For a digital colour image, the algorithm is similar to that for a grey–level image, except that the fractional differential is applied to each of the RGB components separately.
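
The filtering step above can be sketched as a plain convolution of the image with each directional mask, summing the eight responses. The actual VOFCD coefficients come from Eq (17) and are not reproduced here, so `masks` stands for any list of odd-sized 2-D arrays (a sketch of the mechanics, not the paper's exact masks):

```python
import numpy as np

def apply_directional_masks(image, masks):
    """Convolve a grey-level image with each directional mask W_l and sum
    the responses; reflect padding keeps the output the same size as the
    input.  `masks` is a list of (2m+1) x (2m+1) arrays."""
    out = np.zeros(image.shape, dtype=float)
    for w in masks:
        m = w.shape[0] // 2
        padded = np.pad(image.astype(float), m, mode="reflect")
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] += np.sum(padded[i:i + 2 * m + 1,
                                           j:j + 2 * m + 1] * w)
    return out
```

For colour images the same routine would simply be called once per channel, matching the per-component treatment described above.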

When 0 < v < 1, we implement the fractional differential mask respectively on the eight symmetric directions using what we call the VOFCD operator, having the same structure as YiFeiPU-2 in [25] but with different coefficients. The mask coefficients of the VOFCD operator are given by (17) which ensures that the fractional differential operator VOFCD produces a sparse matrix having dimension n + 2. Moreover, all the coefficients depend on the fractional differential order v. It can also be proven that the sum of the coefficients is nonzero, which is the main difference between the fractional differential mask and the integral version.

Variable Fractional Differential Order

The average gradient and information entropy are widely used for quantitative analysis of images [23, 25, 27]. The average gradient reflects the clarity of the image and expresses contrast due to small details. It can be used to measure the spatial resolution of the image, i.e., a larger average gradient means a higher spatial resolution [33–35]. Generally speaking, a large gradient value is likely to correspond to regions of edges, margins and textures within images, hence a larger fractional order is required to enhance textural details [8, 25, 26]. Conversely, a smaller value of the average gradient is likely to correspond to smooth image regions, hence a smaller fractional order is needed to enhance image textures. However, the gradient is quite large for both boundaries and noise. Therefore, a suitable method is needed to distinguish margins from noise.

Note that the margin of the object is continuously smooth, and it is easy to find another margin point that has a similar magnitude of gradient as the margin of interest. However, the noise is random, and it is difficult to find any noise around the region of interest having a similar magnitude of gradient. Hence, margin or noise can be identified through the following process: (18) where ∇sij is the gradient magnitude associated with the pixel of interest s(xi, yj), ∇N(sij) is the gradient magnitude associated with the pixel around the region of interest s(xi, yj), and T is the threshold value.
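The margin-versus-noise test of Eq (18) can be sketched as follows: a large-gradient pixel is kept as a margin if some neighbour has a similar gradient magnitude (within threshold T), otherwise it is treated as noise. The neighbourhood choice (the 8-connected pixels) is an assumption for illustration:

```python
import numpy as np

def is_margin(grad, i, j, T):
    """Eq (18) sketch: return True if the pixel (i, j) of the gradient
    magnitude map `grad` has an 8-connected neighbour whose gradient
    magnitude differs by less than the threshold T (margin), and False
    otherwise (isolated large gradient, i.e. likely noise)."""
    gij = grad[i, j]
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < grad.shape[0] and 0 <= nj < grad.shape[1]:
                if abs(grad[ni, nj] - gij) < T:
                    return True
    return False
```

This reflects the observation above: object margins are spatially coherent, while noise gradients are isolated.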

The information entropy evaluates the average information included in the image and reflects the detail information of the image [23, 25, 27, 36–38]; it is small in smooth areas and large in richly textured areas of the image. Image entropy is calculated as: (19) where P(xi, yj) is the probability mass function of s(xi, yj).
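
Eq (19) is the Shannon entropy of the grey-level histogram; a minimal sketch (8-bit range and log base 2 are assumptions, since the extract does not state them):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Eq (19) sketch: H = -sum_i p_i * log2(p_i) over the normalized
    grey-level histogram; zero-probability bins are excluded."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0, and an image split evenly between two grey levels has entropy 1 bit, matching the smooth-versus-textured behaviour described above.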

The local image roughness is the relative offset measurement of the image pixel grey scale value [36, 39, 40]; it is likewise small in smooth areas and large in richly textured areas of the image. The local roughness of an image is calculated as (20) where σ is the local variance of the pixel signals of the image.
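
Since Eq (20) itself is not reproduced in this extract, the sketch below assumes one common "relative offset" form, the ratio of the local standard deviation to the local mean; the exact expression in the paper (which uses the local variance σ) may differ:

```python
import numpy as np

def local_roughness(window):
    """Eq (20) sketch under an assumed form Q = std / mean of the grey
    values in a local window; returns 0 for an all-zero window to avoid
    division by zero."""
    mu = float(np.mean(window))
    return float(np.std(window) / mu) if mu else 0.0
```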

The order of the fractional differential is not only related to the magnitude of the gradient, but is also influenced by the image local statistics, namely the variance and entropy [23, 25, 27]. Thus, using a weighted summation of three parameters to reflect local image information, we have: (21) where 0 ≤ k1, k2, k3 ≤ 1 and k1 + k2 + k3 = 1.

As suggested by Huang et al. [29], utilizing the exponential function property, we can obtain the variable fractional differential order function as: (22) where α and β are regularization parameters.
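
Eqs (21)–(22) can be sketched as below. The exponential form v = exp(α·R) − β is an assumption (the exact expression of Eq (22) is not reproduced in this extract), and the default weights are merely illustrative; what matters is that v grows monotonically with the local measure R:

```python
from math import exp

def variable_order(grad_norm, H, Q, k=(0.45, 0.01, 0.54),
                   alpha=1.0, beta=1.7):
    """Eq (21)-(22) sketch: R is the weighted sum of the normalized
    gradient magnitude, entropy H and roughness Q (k1 + k2 + k3 = 1),
    mapped to a fractional order via an assumed exponential form
    v = exp(alpha * R) - beta."""
    k1, k2, k3 = k
    R = k1 * grad_norm + k2 * H + k3 * Q   # Eq (21)
    return exp(alpha * R) - beta           # Eq (22), assumed form
```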

Data Acquisition and Algorithm Development

In this section, we outline the implementation of our algorithm to enhance textures in grey level and colour images. We first present the details of the data acquisition. Then, we demonstrate the algorithm development.

Data Acquisition

The research project was approved by the Human Research Ethics Committee of Queensland Health, Brisbane, Australia. Participants provided written consent prior to participating in the study, the process of which was approved by the ethics committee. Furthermore, all data sets were de-identified for the project.

The first data set of the stroke patients was acquired at the Royal Brisbane and Women’s Hospital using the 3T Siemens MRI scanner. Patients admitted with a clinical diagnosis of acute ischemic stroke were recruited between May 2011 and April 2012. Patients underwent an MRI examination at admission, from which two MRI datasets were randomly selected for this study. Susceptibility weighted magnetic resonance images were acquired on a 3T Siemens Trio human scanner running the Syngo proprietary software housed at the hospital. Data acquisition was performed using the Syngo SWI sequence with the following parameters: matrix size = 224 × 256, repetition time (TR) = 200 ms, echo time (TE) = 20 ms, flip angle = 15°, bandwidth = 120 Hz per pixel, in-plane resolution = 1 mm × 1 mm, slice thickness and separation = 2 mm, and number of slices = 72. Magnitude, phase and susceptibility images were saved separately, and only the magnitude images were used in this study. The original image slices 32 to 47 from the first stroke patient show the stroke in the left hemisphere of the brain, as can be seen, for example, in Figs 2(a) and 3(a). For the second patient, the stroke was visible in slices 42 to 61 to various extents.

Fig 2. Comparison of texture details between original image slice 35 from the first stroke patient and its fractional differential using YiFeiPU-2, FCD-1 and VOFCD and (0.45, 0.01, 0.54) for the weights.

(a) Original image slice 35, (b) 0.5–order YiFeiPU-2 with mask 5 × 5, (c) 0.5–order FCD-1 with mask 5 × 5, (d) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.g002

Fig 3. Comparison of textures in the region of interest between original image slice 35 from the first stroke patient and its fractional differential using YiFeiPU-2, FCD-1 and VOFCD and (0.45, 0.01, 0.54) for the weights.

(a) Original full image slice 35, (b) original region of interest, (c) 0.5–order YiFeiPU-2 with mask 5 × 5, (d) 0.5–order FCD-1 with mask 5 × 5, (e) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.g003

The second data set consists of 3T in vivo MRI data for a patient with Parkinson’s disease, obtained from St Andrew’s War Memorial Hospital, Brisbane, Australia. The pre-surgery data we use here is from a patient diagnosed with Parkinson’s disease. The brain diffusion tensor MR images were acquired using a GE Medical System (SIGNA 3T) scanner. All image matrix sizes are 256 × 256. Pre-surgery data was acquired using an echo time of 93.7 ms and a repetition time of 7 s. Twenty-four interleaved, 5 mm thick slices were acquired in the horizontal plane perpendicular to the coronal and sagittal planes, using the multislice mode. Diffusion sensitization was performed along 35 different diffusion gradient orientations using a diffusion weighting of b = 1000 sec/mm² (b is the degree of diffusion sensitization defined by the amplitude and the time course of the magnetic field gradient pulses used to encode molecular diffusion displacements [41]). A reference image without diffusion weighting (b = 0 sec/mm²), not illustrated here, was also acquired.

Image Evaluation

The regularization parameters are set to α = 1 and β = 1.7, which gives the variable fractional differential order function as: (23) After normalization of the magnitude of the gradient ‖∇s‖, the entropy of the image H and the roughness of the image Q, we have 0 ≤ ‖∇s‖, H, Q ≤ 1. Hence, if R = 0, we have v ∈ [0, e − 1.7], which effectively enhances textures in images, where e is the base of the natural logarithm, approximately equal to 2.71828; if R = 1, we have v ∈ [1.7 − e, 0], which can remove noise to some extent.

We define the signal to noise ratio (SNR) [26] (24) where A is the root mean square amplitude. For MRI data, the image background was used to compute Anoise.
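
Eq (24) can be sketched in decibel form from the root mean square (RMS) amplitudes of a signal region and a noise region; SNR = 20 log10(A_signal/A_noise) is the standard definition assumed here:

```python
import numpy as np

def snr_db(signal_region, noise_region):
    """Eq (24) sketch: SNR in dB from RMS amplitudes,
    SNR = 20 * log10(A_signal / A_noise).  For MRI data the noise region
    is taken from the image background, as described above."""
    a_sig = np.sqrt(np.mean(np.asarray(signal_region, dtype=float) ** 2))
    a_noi = np.sqrt(np.mean(np.asarray(noise_region, dtype=float) ** 2))
    return 20.0 * np.log10(a_sig / a_noi)
```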

Images are evaluated based on the intensity histogram Entropy (Ent), Standard Deviation (STD), and Mean Absolute Difference Coefficient (MADC) [42]. The higher the value of Ent, the more visual information the image contains; an increase in Ent means an increase in information and therefore an improvement in image quality. The STD demonstrates how much the image intensities deviate from the expected value, and a variation of STD measures how much corresponding points vary across the source and enhanced images. The MADC is used to measure image clarity, or activity, and is defined as the mean of the sum of difference values over rows and columns of the image pixel intensities [42, 43]: (25) where Ix = [s(x, y) − s(x − 1, y)]² and Iy = [s(x, y) − s(x, y − 1)]².
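
Eq (25) can be sketched as the mean of the squared row differences Ix plus the mean of the squared column differences Iy; whether the two means are combined exactly this way in the paper is an assumption:

```python
import numpy as np

def madc(img):
    """Eq (25) sketch: Ix = (s(x,y) - s(x-1,y))**2 over rows and
    Iy = (s(x,y) - s(x,y-1))**2 over columns, combined as the sum of
    their means (assumed combination)."""
    s = np.asarray(img, dtype=float)
    ix = (s[1:, :] - s[:-1, :]) ** 2
    iy = (s[:, 1:] - s[:, :-1]) ** 2
    return float(ix.mean() + iy.mean())
```

A constant image scores 0, and sharper intensity transitions raise the score, consistent with MADC acting as a clarity/activity measure.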

Simulation Study

The algorithm given in Table 1 provides the process for enhancing the quality of a grey-level image using the symmetric Riesz variable fractional order substitution.

Table 1. Algorithm for enhancing the quality of a grey–level image.

https://doi.org/10.1371/journal.pone.0132952.t001

Results and Discussion

In this section, we compare our algorithm to YiFeiPU-2 [25] and FCD-1 [8, 26], and apply the method to a set of medical images.

Evaluation of Weights

After testing weights k1, k2 and k3 ranging from 0 to 1 with step 0.01 subject to the condition k1 + k2 + k3 = 1, Figs 4–6 show the convergence curves of SNR, Ent, STD and MADC with different mask sizes in image slice 35 for the first stroke patient (see Fig 2(a)). Using a 5 × 5 mask, Tables 2 and 3 show the different combinations of k1, k2 and k3 for obtaining the largest and lowest SNR, Ent, STD and MADC.
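
The exhaustive search over the weight simplex can be sketched as follows; `score` stands for whichever metric (SNR, Ent, STD or MADC) is computed on the image enhanced with those weights, and the function name is illustrative:

```python
def best_weights(score, step=0.01):
    """Grid search over k1, k2, k3 in [0, 1] with k1 + k2 + k3 = 1 and
    step 0.01, as described above; `score(k1, k2, k3)` returns the metric
    for the image enhanced with those weights.  Returns the maximizing
    triple and its score."""
    best_val, best_k = float("-inf"), None
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k1, k2 = i * step, j * step
            k3 = 1.0 - k1 - k2
            val = score(k1, k2, k3)
            if val > best_val:
                best_val, best_k = val, (k1, k2, k3)
    return best_k, best_val
```

Minimizing a metric (for Table 3) is the same loop with the comparison reversed, or equivalently by passing the negated score.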

Fig 4. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 35 for the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.g004

Fig 5. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 35 for the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.g005

Fig 6. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 35 for the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.g006

Table 2. Weights k1, k2 and k3 to obtain the largest SNR, Ent, STD and MADC using various mask sizes.

Data taken from image slice 35 for the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.t002

Table 3. Weights k1, k2 and k3 to obtain the lowest SNR, Ent, STD and MADC using various mask sizes.

Data taken from image slice 35 for the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.t003

According to the different combinations of k1, k2 and k3 from Tables 2 and 3, Fig 7 shows the corresponding image slices.

Fig 7. Comparison of images across different combinations of (k1, k2, k3) reconstructed using a 5 × 5 mask.

Data is for slice 35 from the first stroke patient, and the weights are (a) (0.45, 0.01, 0.54), (b) (0.46, 0.01, 0.53), (c) (0.5, 0.01, 0.49), (d) (0.01, 0.01, 0.98).

https://doi.org/10.1371/journal.pone.0132952.g007

In a similar manner to the data exhibited for the first stroke patient, Figs 8–10 show the convergence curves for SNR, Ent, STD and MADC with different mask sizes for image slice 51 from the second stroke patient. Figs 11–13 show the corresponding results from clinical data for a patient diagnosed with Parkinson’s disease prior to surgery.

Fig 8. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 51 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.g008

Fig 9. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 51 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.g009

Fig 10. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 51 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.g010

Fig 11. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for an original fractional anisotropy weighted orientation map from a patient diagnosed with Parkinson’s disease prior to surgery.

https://doi.org/10.1371/journal.pone.0132952.g011

Fig 12. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for an original fractional anisotropy weighted orientation map from a patient diagnosed with Parkinson’s disease prior to surgery.

https://doi.org/10.1371/journal.pone.0132952.g012

Fig 13. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for an original fractional anisotropy weighted orientation map from a patient diagnosed with Parkinson’s disease prior to surgery.

https://doi.org/10.1371/journal.pone.0132952.g013

Additional experiments were performed on different image slices of stroke patients, the results of which are provided as Supporting Information.

We have shown that the outlined approach can be applied to different data, and the weights can be manipulated to achieve the highest or lowest SNR, Ent, STD or MADC.

Qualitative Analysis

Without loss of generality, we only show results using the 5 × 5 mask applied to the first stroke patient image using two particular combinations of k1, k2 and k3. The results for the second stroke patient are provided as Supporting Information. The parameter set (0.45, 0.01, 0.54) produced the largest SNR in slice 35, depicted in Fig 7, whilst (0.01, 0.01, 0.98) produced the largest MADC. These two choices of parameters allow us to evaluate image SNR versus contrast.

We now present clinical data used to test our methods on human brain images from the first patient diagnosed with stroke. Figs 2, 3, 14 and 15 provide the corresponding results obtained with (0.45, 0.01, 0.54). In Fig 2, we compare VOFCD with YiFeiPU-2 and FCD-1. Fig 3 shows the comparison of texture details in the region of interest between the original image slice 35 and its differential using the YiFeiPU-2, FCD-1 and VOFCD methods. It can be seen from Figs 2 and 3 that the image resulting from our method (VOFCD) is qualitatively better than the images constructed using the other methods. We note in particular that the visual effect offered by VOFCD is better than that of YiFeiPU-2 and FCD-1 with fractional order v = 0.5 when using the same mask dimensions. This choice of v allows us to make consistent comparisons across the different methods. Additional comparisons using different values of v are provided as Supporting Information.

Fig 14. Comparison of texture details between original image slices 40, 44, 45, and 46 of the first stroke patient and their fractional differentials using VOFCD with a 5 × 5 mask and weights (0.45, 0.01, 0.54).

(a) Original image slice 40 with histogram, (b) image slice 40 using VOFCD with histogram, (c) original image slice 44 with histogram, (d) image slice 44 using VOFCD with histogram, (e) original image slice 45 with histogram, (f) image slice 45 using VOFCD with histogram, (g) original image slice 46 with histogram, (h) image slice 46 using VOFCD with histogram.

https://doi.org/10.1371/journal.pone.0132952.g014

Fig 15. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask, for VOFCD using weights (0.45, 0.01, 0.54) and for YiFeiPU-2 and FCD-1 with v = 0.5.

https://doi.org/10.1371/journal.pone.0132952.g015

Through the use of histograms, Fig 14 shows the comparison of texture details between the original image slices 40, 44, 45, and 46 of the first stroke patient, and their fractional differentials using VOFCD with a 5 × 5 mask. Again, we can see that VOFCD has enhanced the textural details in these images.

Fig 15 provides a comparison of SNR between the original image slices of the first stroke patient and the fractional differentials with a 5 × 5 mask for VOFCD, YiFeiPU-2 and FCD-1 with v = 0.5. From Fig 15, it can be seen that VOFCD produces the largest values of SNR, which implies a superior texture enhancement over the other methods.

We now present clinical data to test our methods on a human brain image from a patient diagnosed with Parkinson’s disease prior to surgery.

Applying the fractional differential to the three elements in the HSI colour space, respectively, and then reverting to RGB colour space, one can obtain a colour image without distortion. Fig 12 shows that the parameter set (0.25, 0.74, 0.01) obtained the largest SNR for an original fractional anisotropy weighted orientation map from a patient diagnosed with Parkinson’s disease prior to surgery with mask 5 × 5, and the parameter set (0.01, 0.01, 0.98) produced the largest MADC. Fig 16 shows the comparison of texture details between an original fractional anisotropy weighted orientation map and its fractional differential using YiFeiPU-2, FCD-1 and VOFCD using weights (0.25, 0.74, 0.01). Table 4 provides the comparison of SNR for a region of interest of a fractional anisotropy weighted colour orientation map between VOFCD using (0.25, 0.74, 0.01) and YiFeiPU-2 and FCD-1 with v = 0.5. We can conclude from Table 4 that VOFCD has the largest SNR in comparison to the other fractional order differential methods. The value of this measure is consistent with the VOFCD image shown in Fig 16, wherein features appear to be smoother than in the other images.

Fig 16. Comparison of texture details between an original fractional anisotropy weighted orientation map and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.25, 0.74, 0.01).

(a) Original image, (b) 0.5–order YiFeiPU-2 with mask 5 × 5, (c) 0.5–order FCD-1 with mask 5 × 5, (d) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.g016

Table 4. Comparison of SNR of a region of interest of the fractional anisotropy weighted orientation map in colour between VOFCD using weights (0.25, 0.74, 0.01) and YiFeiPU-2 and FCD-1 with v = 0.5 and a 5 × 5 mask.

https://doi.org/10.1371/journal.pone.0132952.t004

In a similar manner to the first set of weights, Figs 17–21 and Table 5 show the results using weights (0.01, 0.01, 0.98). With this choice of weights, the SNR is reduced but the MADC increases. This finding is consistent with a likely increase in image contrast. Hence, the choice of weights can be used to trade off SNR against contrast. It is unlikely that a single fixed choice of weights can be applied to every image, since images not only vary in modality but also contain different types of textures and contrast.

Fig 17. Comparison of texture details between original image slice 35 from the first stroke patient and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.01, 0.01, 0.98).

(a) Original image slice 35, (b) 0.5–order YiFeiPU-2 with mask 5 × 5, (c) 0.5–order FCD-1 with mask 5 × 5, (d) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.g017

Fig 18. Comparison of texture details in the region of interest between original image slice 35 from the first stroke patient and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.01, 0.01, 0.98).

(a) Original full image slice 35, (b) original region of interest, (c) 0.5–order YiFeiPU-2 with mask 5 × 5, (d) 0.5–order FCD-1 with mask 5 × 5, (e) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.g018

Fig 19. Comparison of texture details between original image slices 40, 44, 45, and 46 of the first stroke patient and their fractional differentials using VOFCD with a 5 × 5 mask and weights (0.01, 0.01, 0.98).

(a) Original image slice 40 with histogram, (b) image slice 40 using VOFCD with histogram, (c) original image slice 44 with histogram, (d) image slice 44 using VOFCD with histogram, (e) original image slice 45 with histogram, (f) image slice 45 using VOFCD with histogram, (g) original image slice 46 with histogram, (h) image slice 46 using VOFCD with histogram.

https://doi.org/10.1371/journal.pone.0132952.g019

Fig 20. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.01, 0.01, 0.98) versus YiFeiPU-2 and FCD-1 with v = 0.5.

https://doi.org/10.1371/journal.pone.0132952.g020

Fig 21. Comparison of texture details between an original fractional anisotropy weighted orientation map and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.01, 0.01, 0.98).

(a) Original image, (b) 0.5–order YiFeiPU-2 with mask 5 × 5, (c) 0.5–order FCD-1 with mask 5 × 5, (d) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.g021

Table 5. Comparison of SNR of a region of interest of the fractional anisotropy weighted orientation map in colour between VOFCD using weights (0.01, 0.01, 0.98) and YiFeiPU-2 and FCD-1 with v = 0.5 and a 5 × 5 mask.

https://doi.org/10.1371/journal.pone.0132952.t005

Quantitative Analysis

Measures of Ent, STD and MADC were used for the quantitative analysis. Fig 15 compares these measures between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask for VOFCD using weights (0.45, 0.01, 0.54), YiFeiPU-2 and FCD-1 with v = 0.5. It can be seen from Fig 15 that VOFCD has larger Ent values than the other fractional differential methods, and together with Fig 2 we can see that VOFCD effectively enhances image quality. The results in Fig 15 also show that the STD values for VOFCD are larger than for the other fractional differential methods; the increase in STD is due to increased signal intensities across the entire image. Furthermore, VOFCD has a smaller MADC value than the other fractional differential methods, which implies a reduction in noise in VOFCD reconstructed images.
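For reference, common forms of these measures can be computed as follows. The SNR and entropy definitions below are conventional textbook forms and may differ in detail from the exact formulas used in this work; MADC, a contrast-related deviation measure, is omitted because its precise definition is not reproduced here.

```python
import numpy as np

def snr_db(original, enhanced):
    """SNR in dB, treating (enhanced - original) as the noise term.
    One conventional definition; the paper's may differ in detail."""
    noise_power = np.sum((enhanced - original).astype(float) ** 2)
    signal_power = np.sum(original.astype(float) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

def entropy_bits(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def std_intensity(img):
    """Standard deviation of the pixel intensities."""
    return float(np.std(img))
```

Under these definitions a larger Ent indicates richer grey-level content, a larger STD indicates stronger overall intensity variation, and a larger SNR indicates that the enhancement departs less from the original signal relative to the signal power.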

Based on a human brain image from a patient diagnosed with Parkinson’s disease prior to surgery, Table 6 provides a comparison of the relevant quantitative measures for a region of interest of a fractional anisotropy weighted colour orientation map between VOFCD using weights (0.25, 0.74, 0.01) and YiFeiPU-2 and FCD-1 with v = 0.5. We conclude from Table 6 that VOFCD has the largest Ent value, and the smallest STD and MADC values, in comparison to the other fractional order differential methods. Again, the values of these measures are consistent with the VOFCD image shown in Fig 16, wherein features appear to be smoother than in the other images.

Table 6. Comparison of quantitative analysis of a region of interest of the fractional anisotropy weighted orientation map in colour between VOFCD using weights (0.25, 0.74, 0.01) and YiFeiPU-2 and FCD-1 with v = 0.5.

A 5 × 5 mask was used to generate the results.

https://doi.org/10.1371/journal.pone.0132952.t006

In a similar manner to the first set of weights, Fig 20 and Table 7 show the results using weights (0.01, 0.01, 0.98).

Table 7. Comparison of quantitative analysis of region of interest of fractional anisotropy weighted orientation map in colour between VOFCD using weights (0.01, 0.01, 0.98) and YiFeiPU-2 and FCD-1 with v = 0.5.

A 5 × 5 mask was used to generate the results.

https://doi.org/10.1371/journal.pone.0132952.t007

We have established that the fractional differential not only nonlinearly preserves contour features in smooth areas, but also maintains high frequency edge features in areas where the grey scale changes markedly, and preserves the high frequency characteristics of texture detail in areas where the grey scale does not change considerably. Furthermore, VOFCD has a higher SNR than the other fractional differential mask operators, and our quantitative analysis verified that VOFCD leads to superior texture enhancement.

Using our method it is possible to achieve different effects based on the choice of the weights used in the reconstruction (i.e. k1, k2 and k3). For example, we showed that the SNR can be maximised through an appropriate choice of the weights. Similarly, we found weights that maximised the MADC. Therefore, the choice of weights can be optimised for the application at hand. Nonetheless, it is difficult to establish a set of weights that applies robustly across all applications, since some applications rely on the suppression of noise (i.e. SNR improvements) and others on the differentiation of structures within images (i.e. MADC improvements).
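As a hypothetical illustration of this trade-off, one could score candidate weight triples by a normalised linear combination of their measured SNR and MADC, with a preference parameter selecting between noise suppression and contrast. The scoring rule, the alpha parameter and the example measurements below are our own illustrative constructions, not taken from the paper.

```python
# Hypothetical weight-selection sketch: given measured (SNR, MADC) pairs for
# candidate weight triples (k1, k2, k3), rank them by a preference parameter
# alpha in [0, 1]: alpha = 1 favours SNR, alpha = 0 favours MADC (contrast).

def normalise(values):
    """Min-max normalise a list of measurements to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def rank_weights(candidates, alpha):
    """candidates: list of ((k1, k2, k3), snr, madc) tuples.
    Returns the weight triple with the best combined score."""
    snrs = normalise([c[1] for c in candidates])
    madcs = normalise([c[2] for c in candidates])
    scores = [alpha * s + (1 - alpha) * m for s, m in zip(snrs, madcs)]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best][0]

# Illustrative (made-up) measurements for the two weight triples in the text:
cands = [((0.45, 0.01, 0.54), 21.3, 0.12),   # higher SNR
         ((0.01, 0.01, 0.98), 18.7, 0.19)]   # higher MADC
```

With alpha = 1 this sketch selects the SNR-maximising triple and with alpha = 0 the MADC-maximising one, mirroring the application-dependent choice discussed above.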

Conclusion

For grey scale and colour image enhancement, we derived an improved fractional differential algorithm, VOFCD, based on the second order Riesz fractional differential operator using a Lagrange 3–point interpolation formula. Instead of using a fixed fractional differentiation order, we estimated the fractal dimension of a local region to set the order adaptively. The experiments showed that VOFCD yields a higher SNR than the existing YiFeiPU-2 and FCD-1 methods, which implies superior texture enhancement of medical images. In addition, VOFCD, with its variable fractional order, produces qualitatively better results than YiFeiPU-2 and FCD-1 at the same mask dimensions. A quantitative analysis verified that VOFCD produces superior texture enhancement in comparison to the other methods considered. This may be helpful in clinical diagnosis and monitoring, and as future work, further analysis of this data will be carried out in conjunction with medical specialists. The optimal choice of parameters for determining the variable fractional differential order remains challenging. In future research, we plan to evaluate how the weighting and regularisation parameters affect the performance of the variable order algorithm.
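The local fractal dimension estimation mentioned above can be sketched with a standard differential box-counting estimator for grey-scale surfaces. This is a generic estimator under our own assumptions (the scale set and box-height convention are ours), shown for illustration; it is not necessarily the exact local estimator used inside VOFCD.

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of the fractal dimension of a
    grey-scale patch (Sarkar-Chaudhuri style).  A flat surface gives
    dimension 2; rougher texture pushes the estimate towards 3."""
    M = min(img.shape)
    G = img.max() - img.min() + 1e-12     # grey-level range (avoid /0)
    log_n, log_inv_r = [], []
    for s in sizes:
        if s >= M:
            break
        h = G * s / M                     # box height at this scale
        count = 0
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to cover the intensity span of this block
                count += int(np.ceil((block.max() - block.min()) / h)) + 1
        log_n.append(np.log(count))
        log_inv_r.append(np.log(M / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)   # D is the log-log slope
    return slope
```

In a variable-order scheme, an estimate like this computed over a sliding window could be mapped to the fractional order at each pixel, so that rougher regions receive a different degree of differentiation than smooth ones.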

Supporting Information

S1 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 34 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s001

(EPS)

S2 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 34 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s002

(EPS)

S3 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 34 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s003

(EPS)

S4 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 36 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s004

(EPS)

S5 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 36 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s005

(EPS)

S6 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 36 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s006

(EPS)

S7 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 40 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s007

(EPS)

S8 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 40 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s008

(EPS)

S9 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 40 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s009

(EPS)

S10 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 44 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s010

(EPS)

S11 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 44 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s011

(EPS)

S12 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 44 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s012

(EPS)

S13 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 45 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s013

(EPS)

S14 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 45 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s014

(EPS)

S15 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 45 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s015

(EPS)

S16 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 46 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s016

(EPS)

S17 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 46 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s017

(EPS)

S18 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 46 from the first stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s018

(EPS)

S19 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 44 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s019

(EPS)

S20 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 44 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s020

(EPS)

S21 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 44 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s021

(EPS)

S22 Fig. Convergence curves for SNR, Ent, STD and MADC using a 3 × 3 mask for image slice 60 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s022

(EPS)

S23 Fig. Convergence curves for SNR, Ent, STD and MADC using a 5 × 5 mask for image slice 60 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s023

(EPS)

S24 Fig. Convergence curves for SNR, Ent, STD and MADC using a 7 × 7 mask for image slice 60 from the second stroke patient.

https://doi.org/10.1371/journal.pone.0132952.s024

(EPS)

S25 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.45, 0.01, 0.54) versus YiFeiPU-2 and FCD-1 with v = 0.1.

https://doi.org/10.1371/journal.pone.0132952.s025

(EPS)

S26 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.45, 0.01, 0.54) versus YiFeiPU-2 and FCD-1 with v = 0.3.

https://doi.org/10.1371/journal.pone.0132952.s026

(EPS)

S27 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.45, 0.01, 0.54) versus YiFeiPU-2 and FCD-1 with v = 0.7.

https://doi.org/10.1371/journal.pone.0132952.s027

(EPS)

S28 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.45, 0.01, 0.54) versus YiFeiPU-2 and FCD-1 with v = 0.9.

https://doi.org/10.1371/journal.pone.0132952.s028

(EPS)

S29 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.01, 0.01, 0.98) versus YiFeiPU-2 and FCD-1 with v = 0.1.

https://doi.org/10.1371/journal.pone.0132952.s029

(EPS)

S30 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.01, 0.01, 0.98) versus YiFeiPU-2 and FCD-1 with v = 0.3.

https://doi.org/10.1371/journal.pone.0132952.s030

(EPS)

S31 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.01, 0.01, 0.98) versus YiFeiPU-2 and FCD-1 with v = 0.7.

https://doi.org/10.1371/journal.pone.0132952.s031

(EPS)

S32 Fig. Comparison of SNR, Ent, STD and MADC between the original image slices of the first stroke patient and their fractional differentials with a 5 × 5 mask: VOFCD using weights (0.01, 0.01, 0.98) versus YiFeiPU-2 and FCD-1 with v = 0.9.

https://doi.org/10.1371/journal.pone.0132952.s032

(EPS)

S33 Fig. Comparison of texture details between original image slice 51 from the second stroke patient and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.35, 0.06, 0.59).

(a) Original image slice 51, (b) 0.5–order YiFeiPU-2 with mask 5 × 5, (c) 0.5–order FCD-1 with mask 5 × 5, (d) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.s033

(EPS)

S34 Fig. Comparison of textures in the region of interest between original image slice 51 from the second stroke patient and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.35, 0.06, 0.59).

(a) Original full image slice 51, (b) original region of interest, (c) 0.5–order YiFeiPU-2 with mask 5 × 5, (d) 0.5–order FCD-1 with mask 5 × 5, (e) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.s034

(EPS)

S35 Fig. Comparison of texture details between original image slice 51 from the second stroke patient and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.01, 0.01, 0.98).

(a) Original image slice 51, (b) 0.5–order YiFeiPU-2 with mask 5 × 5, (c) 0.5–order FCD-1 with mask 5 × 5, (d) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.s035

(EPS)

S36 Fig. Comparison of textures in the region of interest between original image slice 51 from the second stroke patient and its fractional differentials using YiFeiPU-2, FCD-1 and VOFCD with weights (0.01, 0.01, 0.98).

(a) Original full image slice 51, (b) original region of interest, (c) 0.5–order YiFeiPU-2 with mask 5 × 5, (d) 0.5–order FCD-1 with mask 5 × 5, (e) VOFCD with mask 5 × 5.

https://doi.org/10.1371/journal.pone.0132952.s036

(EPS)

Acknowledgments

We gratefully acknowledge the help and interest in our work of Professor Kerrie Mengersen from the Queensland University of Technology, Brisbane, Australia. We thank the Royal Brisbane and Women’s Hospital, Brisbane, Australia, and St Andrew’s War Memorial Hospital, Brisbane, Australia, for their assistance with data acquisition. We also gratefully acknowledge the help of Mark Barry from the HPC and Research Support Group of the Queensland University of Technology. This research formed part of Qiang Yu’s PhD work. We also thank the reviewers for their constructive comments and suggestions.

Author Contributions

Conceived and designed the experiments: QY FL IT. Performed the experiments: QY. Analyzed the data: QY VV FL IT. Contributed reagents/materials/analysis tools: QY VV FL IT. Wrote the paper: QY VV FL IT.
