
In signal processing and mathematics, it is often advantageous to decompose a complex signal into simpler signals without leaving the time domain. The Empirical Mode Decomposition (EMD) process decomposes a signal into intrinsic mode functions and, combined with Hilbert spectral analysis, is known as the Hilbert–Huang Transform (HHT). Multidimensional EMD is the extension of the 1-D EMD algorithm to signals of multiple dimensions. This decomposition can be applied to image processing, audio signal processing, and various other multidimensional signals.

Motivation
Multidimensional Empirical Mode Decomposition is a popular method because of its applications in many fields, such as texture analysis, financial applications, image processing, ocean engineering, and seismic research. Several variants of Empirical Mode Decomposition have recently been used to analyze and characterize multidimensional signals. This article introduces the basics of Multidimensional Empirical Mode Decomposition and then looks into approaches for enhancing the traditional method.

Introduction to Empirical Mode Decomposition (EMD)
When the Hilbert transform is applied to compute instantaneous frequency, the derived instantaneous frequency can lose its physical meaning if the signal is not an AM/FM separable oscillatory function. The EMD method was developed to overcome this drawback, so that the data can be examined in a physically meaningful, adaptive time–frequency–amplitude space for nonlinear and non-stationary signals. Thanks to its stable and adaptive properties, the Empirical Mode Decomposition method can extract global structure and deal well with fractal-like signals.

The EMD method decomposes the input signal into a few intrinsic mode functions (IMFs) and a residue. The multi-component signal is then expressed as:

$$I(n)=\sum_{m=1}^M IMF_{m}(n)+Res_{M}(n)$$

where $$I(n)$$ is the multi-component signal, $$IMF_{m}(n)$$ is the $$m$$th intrinsic mode function, and $$Res_{M}(n)$$ is the residue remaining after $$M$$ intrinsic modes have been extracted.
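As an illustration, the sifting-based decomposition behind this equation can be sketched in Python. This is a minimal, simplified EMD (fixed sifting count, crude endpoint handling that pins the envelopes to the record's end values), not a production implementation; by construction the extracted IMFs and the residue sum back exactly to the input.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the upper/lower spline envelopes."""
    n = np.arange(x.size)
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if maxima.size < 2 or minima.size < 2:
        return None                      # too few extrema: treat x as a residue
    # crude boundary handling: pin the envelopes to the record's end values
    upper = CubicSpline(np.r_[0, maxima, x.size - 1],
                        np.r_[x[0], x[maxima], x[-1]])(n)
    lower = CubicSpline(np.r_[0, minima, x.size - 1],
                        np.r_[x[0], x[minima], x[-1]])(n)
    return x - 0.5 * (upper + lower)

def emd(x, max_imfs=4, n_sifts=10):
    """Return (imfs, residue) with sum(imfs) + residue == x exactly."""
    imfs, residue = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(n_sifts):
            h_new = sift_once(h)
            if h_new is None:
                return imfs, residue     # residue too smooth to sift further
            h = h_new
        imfs.append(h)
        residue = residue - h
    return imfs, residue

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 40 * t) + 2 * t
imfs, res = emd(signal)
```

The first IMF picks up the fastest oscillation, and each successive IMF is slower, with the trend left in the residue.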

Ensemble Empirical Mode Decomposition
The EEMD consists of the following steps:

(1) Adding a white noise series to the original data.

(2) Decomposing the data with added white noise into oscillatory components.

(3) Repeating steps 1 and 2, but with a different white noise series added each time.

(4) Obtaining the (ensemble) means of the corresponding intrinsic mode functions of the decomposition as the final result.

In these steps, EEMD uses two properties of white noise:

(I) The added white noise leads to a relatively even distribution of extrema on all timescales.

(II) The dyadic filter bank property provides control over the periods of oscillations contained in an oscillatory component, significantly reducing the chance of scale mixing in a component. Through ensemble averaging, the added noise is averaged out.
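The ensemble loop can be sketched in a few lines of Python. To keep the sketch self-contained, a simple two-band moving-average split stands in for the EMD of step 2 (an assumption for illustration only); the point is the averaging of steps 3 and 4, which cancels the added noise.

```python
import numpy as np

def decompose(x, w=11):
    """Stand-in for EMD: split x into a high-frequency and a low-frequency part."""
    low = np.convolve(x, np.ones(w) / w, mode="same")
    return np.stack([x - low, low])

def eemd(x, n_ensemble=200, noise_std=0.2, seed=0):
    """Average the decompositions of many noise-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((2, x.size))
    for _ in range(n_ensemble):
        acc += decompose(x + rng.normal(0.0, noise_std, x.size))
    return acc / n_ensemble              # the added noise averages out

x = np.sin(np.linspace(0, 8 * np.pi, 512))
components = eemd(x)
```

With a few hundred ensemble members, the averaged components are close to the decomposition of the clean signal even though every individual trial was noisy.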

Technique 1
The key step in designing an MDEEMD algorithm is to translate the 1-D EMD algorithm into a Bidimensional Empirical Mode Decomposition (BEMD) and then extend it to three or more dimensions by applying the same procedure on successive dimensions. Mathematically, a 2D signal with a finite number of elements can be represented as an i × j matrix:



$$X(i,j) = \begin{bmatrix}x(1,1) & x(1,2) & \cdots & x(1,j) \\ x(2,1) & x(2,2) & \cdots & x(2,j) \\ \vdots & \vdots & \ddots & \vdots \\ x(i,1) & x(i,2) & \cdots & x(i,j) \end{bmatrix}$$

First, EMD is performed in one direction of X(i, j), row-wise for instance, to decompose the data of each row into m components; the components of the same level m from each row decomposition are then collected to form a 2D decomposed signal at that level. Therefore, m sets of 2D spatial data are obtained:



$$RX(1,i,j) = \begin{bmatrix}x(1,1,1) & x(1,1,2) & \cdots & x(1,1,j) \\ x(1,2,1) & x(1,2,2) & \cdots & x(1,2,j) \\ \vdots & \vdots & \ddots & \vdots \\ x(1,i,1) & x(1,i,2) & \cdots & x(1,i,j) \end{bmatrix}$$

$$RX(2,i,j) = \begin{bmatrix}x(2,1,1) & x(2,1,2) & \cdots & x(2,1,j) \\ x(2,2,1) & x(2,2,2) & \cdots & x(2,2,j) \\ \vdots & \vdots & \ddots & \vdots \\ x(2,i,1) & x(2,i,2) & \cdots & x(2,i,j) \end{bmatrix}$$

…

$$RX(m,i,j) = \begin{bmatrix}x(m,1,1) & x(m,1,2) & \cdots & x(m,1,j) \\ x(m,2,1) & x(m,2,2) & \cdots & x(m,2,j) \\ \vdots & \vdots & \ddots & \vdots \\ x(m,i,1) & x(m,i,2) & \cdots & x(m,i,j) \end{bmatrix}$$

where RX(1, i, j), RX(2, i, j), …, RX(m, i, j) are the m sets of decomposed signals as stated (R indicates row decomposition). The relation between these m 2D decomposed signals and the original signal is given as $$X(i,j)=\sum_{ k \mathop =1}^mRX(k,i,j)$$

The first row of the matrix RX (m, i, j) is the mth EMD component decomposed from the first row of the matrix X (i, j). The second row of the matrix RX (m, i, j) is the mth EMD component decomposed from the second row of the matrix X (i, j), and so on.

Supposing that the previous decomposition is along the horizontal direction, the next step is to decompose each of the row-decomposed components RX(m, i, j) in the vertical direction into n components. This step generates n components from each RX component.

For example, the components

1. RX(1,i,j) will be decomposed into CRX(1,1,i,j), CRX(1,2,i,j), …, CRX(1,n,i,j)

2. RX(2,i,j) will be decomposed into CRX(2,1,i,j), CRX(2,2,i,j), …, CRX(2,n,i,j)

⋮

m. RX(m,i,j) will be decomposed into CRX(m,1,i,j), CRX(m,2,i,j), …, CRX(m,n,i,j)

where C means column decomposition. Finally, the 2D decomposition results in m × n matrices, which are the 2D EEMD components of the original data X(i,j). The matrix expression for the result of the 2D decomposition is

$$ CRX(m,n,i,j) = \begin{bmatrix}crx(1,1,i,j) & crx(2,1,i,j) & ... & crx(m,1,i,j) \\crx(1,2,i,j) & crx(2,2,i,j) & ... & crx(m,2,i,j)  \\. & . & . & . \\. & . & . & .  \\crx(1,n,i,j) & crx(2,n,i,j) & ... & crx(m,n,i,j) \end{bmatrix} $$

where each element in the matrix CRX is an i × j sub-matrix representing a 2D EEMD decomposed component. The arguments (or suffixes) m and n denote the component numbers of the row decomposition and the column decomposition, respectively, rather than the row and column indices of a matrix; that is, m and n are the numbers of components resulting from the row (horizontal) decomposition and the subsequent column (vertical) decomposition.
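The row-then-column bookkeeping can be sketched in Python. As an assumption for illustration, a simple two-band moving-average split (so m = n = 2) stands in for the 1-D EMD applied to each row and column; the reconstruction identities X = Σₖ RX(k) and X = Σₖ,ₗ CRX(k, l) then hold by construction.

```python
import numpy as np

def split(x, w=5):
    """Stand-in for a 1-D EMD: two components (high/low) that sum back to x."""
    low = np.convolve(x, np.ones(w) / w, mode="same")
    return np.stack([x - low, low])

def decompose_2d(X, m=2, n=2):
    """Row-wise decomposition into RX[k], then column-wise into CRX[k, l]."""
    rows, cols = X.shape
    # RX[k][i, :] is the kth component of row i
    RX = np.stack([np.stack([split(X[i])[k] for i in range(rows)])
                   for k in range(m)])
    # CRX[k][l][:, j] is the lth column-wise component of column j of RX[k]
    CRX = np.stack([np.stack([np.stack([split(RX[k][:, j])[l]
                                        for j in range(cols)], axis=1)
                              for l in range(n)])
                    for k in range(m)])
    return RX, CRX

rng = np.random.default_rng(42)
X = rng.normal(size=(24, 32))
RX, CRX = decompose_2d(X)
```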

Combining the components of the same scale, or of comparable scales with minimal difference, yields a 2D feature with the best physical significance. The components of the first row and the first column are of approximately the same or comparable scales, although their scales increase gradually along the row or column. Therefore, combining the components of the first row and the first column yields the first complete 2D component (C2D1). The same combination technique is then applied to the rest of the components: $$C2D_l=\sum_{ k \mathop =l}^mcrx_{k,l}+\sum_{ k \mathop =l+1}^ncrx_{l,k}$$

Following the convention of 1D EMD, the last of the complete 2D components is called the residue. The decomposition strategy can be extended without difficulty to data of any higher dimension. For a 3D data cube of i × j × k elements, the multidimensional EMD decomposition will yield detailed 3D components of m × n × q, where m, n and q are the numbers of IMFs decomposed from each dimension having i, j, and k elements, respectively.

The MDEEMD method has several advantages. For instance, its sifting procedure is a combination of one-dimensional sifting: it employs 1D curve fitting in the sifting process of each dimension, and so avoids the difficulty encountered in 2D EMD algorithms that use surface fitting, namely deciding whether a saddle point is a local maximum or a local minimum. Sifting is the process that separates out an IMF and repeats until the residue is obtained; its first step is to determine the upper and lower envelopes encompassing all the data using the spline method. The sifting scheme for MDEMD is like 1D sifting, except that the local mean of the standard EMD is replaced by the mean of the multivariate envelope curves.

Technique 2
Fast and efficient data analysis is very important for large sequences, hence the MDEEMD focuses on two important things:

(1) Data compression, which involves decomposing the data into simpler forms.

(2) EEMD on the compressed data. This is the most challenging part, since decomposing compressed data carries a high risk of losing key information; the EEMD must therefore be both highly efficient and fast to process large amounts of data. A data compression method that uses principal component analysis (PCA), also known as empirical orthogonal function (EOF) analysis or principal oscillation pattern analysis, is used to compress the data.

Assume we have spatio-temporal data T(s, t), where s denotes spatial locations (not necessarily one-dimensional originally, but rearranged into a single spatial dimension) from 1 to N, and t denotes temporal locations from 1 to M.

Using PCA/EOF, one can express T(s, t) as $$T(s,t)=\sum_{ i \mathop =1}^mY_i(t)V_i(s)$$

where Yi(t) is the ith principal component (PC) and Vi(s) the ith empirical orthogonal function (EOF). PCs and EOFs are often obtained by solving the eigenvalue/eigenvector problem of either the temporal covariance matrix or the spatial covariance matrix, based on which dimension is smaller. The variance explained by one pair of PC/EOF is its corresponding eigenvalue divided by the sum of all eigenvalues of the covariance matrix.
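Concretely, the PC/EOF pairs can be obtained with a singular value decomposition, which is equivalent to the eigen-decomposition of the covariance matrices. A small synthetic example (one dominant oscillating spatial pattern plus weak noise, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 100, 40                                   # spatial and temporal sizes
s = np.linspace(0.0, 1.0, N)
t = np.arange(M)
# one spatial structure oscillating in time, plus weak white noise
T = np.outer(np.sin(2 * np.pi * s), np.cos(2 * np.pi * t / 10.0))
T += 0.01 * rng.normal(size=(N, M))

# SVD of the (space x time) matrix gives EOFs (spatial) and PCs (temporal)
U, sv, Vt = np.linalg.svd(T, full_matrices=False)
EOF = U                                          # columns are V_i(s)
PC = sv[:, None] * Vt                            # rows are Y_i(t)

variance_explained = sv**2 / np.sum(sv**2)
# number of pairs needed to explain 99% of the variance
k = int(np.searchsorted(np.cumsum(variance_explained), 0.99)) + 1
```

Because the field contains a single dominant pattern, one PC/EOF pair explains nearly all the variance, illustrating the compressibility argument below.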

If the data subjected to PCA/EOF analysis are pure white noise, all eigenvalues are theoretically equal and there is no preferred vector direction (principal component) in the PCA/EOF space; to retain most of the information of the data, one then needs to retain almost all the PCs and EOFs, making the size of the PCA/EOF expression even larger than that of the original. If, at the other extreme, the original data contain only one spatial structure oscillating with time, then the data can be expressed as the product of one PC and one EOF, implying that data of large size can be expressed by data of small size without losing information, i.e. they are highly compressible.

The variability of a smaller region tends to be more spatio-temporally coherent than that of a bigger region containing it, and therefore fewer PC/EOF components are expected to be required to account for a threshold level of variance. Hence, one way to improve the efficiency of the PC/EOF representation of the data is to divide the global spatial domain into a set of sub-regions. If we divide the original global spatial domain into n sub-regions containing N1, N2, …, Nn spatial grids, respectively, with every Ni (i = 1, …, n) greater than M, we anticipate that the numbers of retained PC/EOF pairs for the sub-regions, K1, K2, …, Kn, are all smaller than K. The compression rate of the spatial domain is then

$$\mathrm{Compression\ Rate}=\frac{N M}{\sum_{i \mathop =1}^{n} N_i K_i + M \sum_{i \mathop =1}^{n} K_i}$$

Note that an optimized division and an optimized selection of PC/EOF pairs for each region would lead to a higher rate of compression.
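As a worked example with hypothetical sizes (assuming each sub-region stores its Kᵢ EOFs of length Nᵢ plus Kᵢ PCs of length M, which is the storage model behind the rate above):

```python
# Hypothetical sizes: a domain of N spatial grids observed at M times,
# split into three sub-regions; K_i PC/EOF pairs retained per sub-region.
N, M = 3000, 400
N_i = [1000, 1200, 800]                 # spatial grids per sub-region
K_i = [8, 10, 6]                        # retained pairs per sub-region

# compressed storage: K_i EOFs of length N_i plus K_i PCs of length M per region
stored = sum(n * k for n, k in zip(N_i, K_i)) + M * sum(K_i)
compression_rate = (N * M) / stored
```

For these illustrative numbers the original 1,200,000 values compress to 34,400, a rate of roughly 35:1.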

Technique 3 (Division of a Spatio-temporal Signal into Grids)
For a temporal signal of length M, the complexity of cubic spline sifting through its local extrema is about order M, and similarly for the EEMD, as it only repeats the spline fitting a number of times that does not depend on M. However, since the sifting number (often selected as 10) and the ensemble number (often a few hundred) multiply the spline sifting operations, the EEMD is time-consuming compared with many other time series analysis methods such as Fourier transforms and wavelet transforms. The MDEEMD employs EEMD decomposition of the time series at each grid of the spatial division of the initial signal, so the EEMD operation is repeated as many times as there are grid points in the domain. The idea of the fast MDEEMD is very simple: since PCA/EOF-based compression expresses the original data in terms of pairs of PCs and EOFs, decomposing the PCs, instead of the time series of each grid, and using the corresponding spatial structures depicted by the EOFs significantly reduces the computational burden.

The fast MDEEMD includes the following steps:

(i) All pairs of EOFs, Vi, and their corresponding PCs, Yi, of the spatio-temporal data over a compressed sub-domain are computed.

(ii) The number of PC/EOF pairs retained in the compressed data is determined by calculating the accumulated total variance of the leading EOF/PC pairs.

(iii) Each PC Yi is decomposed using EEMD, i.e. $$Y_i=\sum_{ j \mathop =1}^nc_{j,i}+r_{n,i}$$ where $$c_{j,i}$$ represents simple oscillatory modes of certain frequencies and $$r_{n,i}$$ is the residual of the data Yi. The jth MEEMD component $$C_j$$ is then obtained as $$C_j=\sum_{ i \mathop =1}^{K}c_{j,i}V_i$$ where K is the number of retained PC/EOF pairs. This compressed computation makes use of the approximate dyadic filter bank property of EMD/EEMD.
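These steps can be sketched in Python. As assumptions for illustration, a two-band moving-average split stands in for the EEMD of step (iii), and the field is noise-free and exactly rank 2 so that K = 2 pairs suffice; decomposing the K PCs (K short series) replaces decomposing all N grid time series.

```python
import numpy as np

def split(y, w=7):
    """Stand-in for EEMD: one oscillatory part and one slow part, summing to y."""
    low = np.convolve(y, np.ones(w) / w, mode="same")
    return [y - low, low]

N, M, K = 60, 80, 2
s = np.linspace(0.0, 1.0, N)
t = np.arange(M)
# a rank-2 field: an oscillating pattern plus a trending pattern
T = (np.outer(np.sin(2 * np.pi * s), np.sin(2 * np.pi * t / 8.0))
     + np.outer(np.cos(np.pi * s), t / M))

U, sv, Vt = np.linalg.svd(T, full_matrices=False)
V = U[:, :K]                       # retained EOFs V_i(s)
Y = sv[:K, None] * Vt[:K]          # retained PCs  Y_i(t)

# decompose the K PCs (K series of length M) instead of all N grid series
pc_components = [split(Y[i]) for i in range(K)]
# C_j(s, t) = sum_i c_{j,i}(t) V_i(s)
C = [sum(np.outer(V[:, i], pc_components[i][j]) for i in range(K))
     for j in range(2)]
```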

Advantages
A detailed knowledge of the intrinsic mode functions of a noise-corrupted signal helps in estimating the significance of each mode. It is usually assumed that the first IMF captures most of the noise; hence from this IMF the noise level can be estimated and the signal can be recovered with the effects of noise approximately eliminated. This method is known as denoising and detrending. Another advantage of the MDEEMD is that mode mixing is reduced significantly thanks to the EEMD. The denoising and detrending strategy can be used in image processing to enhance an image, and it can similarly be applied to audio signals to remove corrupted data in speech. The MDEEMD can break images and audio signals down into IMFs and, based on knowledge of the IMFs, perform the necessary operations. The decomposition of an image is very advantageous for radar-based applications; for example, it could reveal land mines.

Parallel Implementation of MEEMD
In MEEMD, although ample parallelism potentially exists in the ensemble dimension and/or the non-operating dimensions, several challenges still face a high-performance MEEMD implementation.

(1) Dynamic data variations: In EEMD, the added white noise changes the number of extrema, causing irregularity and load imbalance, and thus slowing down the parallel execution.

(2) Strided memory accesses of high-dimensional data: High-dimensional data are stored in non-contiguous memory locations. Accesses along high dimensions are thus strided and uncoalesced, wasting available memory bandwidth.

(3) Limited resources to harness parallelism: While the independent EMDs and/or EEMDs comprising an MEEMD provide high parallelism, the computational capacities of multi-core and many-core processors may not be sufficient to exploit it fully. Moreover, increased parallelism may increase memory requirements beyond the memory capacities of these processors.

In MEEMD, when a high degree of parallelism is given by the ensemble dimension and/or the non-operating dimensions, the benefits of using a thread-level parallel algorithm are threefold.

(I) It can exploit more parallelism than a block-level parallel algorithm.

(II) It does not incur any communication or synchronization between the threads until the results are merged since the execution of each EMD or EEMD is independent.

(III) Its implementation is like the sequential one, which makes it more straightforward.
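The thread-level scheme can be sketched in Python (with, as an assumption, a moving-average split standing in for EMD; note that in CPython true concurrency for such numeric loops would require processes or a GIL-free runtime, so this only illustrates the work decomposition and the merge):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def split(x, w=9):
    """Stand-in for EMD: two components that sum back to x."""
    low = np.convolve(x, np.ones(w) / w, mode="same")
    return np.stack([x - low, low])

def ensemble_member(args):
    """One independent EEMD trial: seeded noise, then decomposition."""
    x, seed = args
    rng = np.random.default_rng(seed)
    return split(x + rng.normal(0.0, 0.1, x.size))

x = np.sin(np.linspace(0.0, 6.0 * np.pi, 256))
jobs = [(x, seed) for seed in range(64)]

# each trial runs in its own thread; no synchronization until the final merge
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(ensemble_member, jobs))
ensemble_mean = np.mean(results, axis=0)
```

Because each member is seeded independently, the parallel result is bit-identical to a sequential run over the same jobs, which is what makes the merge-only synchronization of point (II) possible.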

OpenMP Implementation
The EEMDs comprising MEEMD are assigned to independent threads for parallel execution, relying on the OpenMP runtime to resolve any load imbalance issues. Stride memory accesses of high-dimensional data are eliminated by transposing these data to lower dimensions, resulting in better utilization of cache lines. The partial results of each EEMD are made thread-private for correct functionality. The required memory depends on the number of OpenMP threads and is managed by OpenMP runtime.

CUDA Implementation
In the GPU CUDA implementation, each EMD, is mapped to a thread. The memory layout, especially of high-dimensional data, is rearranged to meet memory coalescing requirements and fit into the 128-byte cache lines. The data is first loaded along the lowest dimension and then consumed along a higher dimension. This step is performed when the Gaussian noise is added to form the ensemble data. In the new memory layout, the ensemble dimension is added to the lowest dimension to reduce possible branch divergence. The impact of the unavoidable branch divergence from data irregularity, caused by the noise, is minimized via a regularization technique using the on-chip memory. Moreover, the cache memory is utilized to amortize unavoidable uncoalesced memory accesses.

Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD)
The Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD) is an improved version of traditional BEMD. FABEMD can be used in many areas, including medical image analysis and texture analysis. Given the difficulties faced in BEMD, order-statistics filters help solve the problems of efficiency and of the restriction on input size in BEMD.

The implementation of FABEMD is very similar to that of BEMD, but FABEMD changes the interpolation step into a direct envelope estimation method and restricts the number of iterations for every BIMF to one. As a result, two order statistics, MAX and MIN, are used to approximate the upper and lower envelopes. In addition, the size of the filter depends on the maxima and minima maps obtained from the input. The steps of the FABEMD algorithm are shown below.

FABEMD Algorithm
Step 1: Determine and detect local maximum and minimum

As in the traditional BEMD approach, the jth ITS-BIMF $$F_{Tj}$$ of any input $$S_{i}$$ can be found by a neighboring-window method; the FABEMD approach, however, implements this step differently.

From the input data, we can obtain a 2D matrix representation

$$A = \left( {\begin{array}{*{20}{c}} {a_{11}} & \cdots & {a_{1N}} \\ \vdots & \ddots & \vdots \\ {a_{M1}} & \cdots & {a_{MN}} \end{array}} \right)$$

where $$a_{mn}$$ is the element at location (m, n) of the matrix A, and the window size is defined to be $$w_{ex}\times w_{ex}$$. The local maxima and minima in the matrix are then determined as follows:

$${a_{mn}} \triangleq \begin{cases} \text{Local Max} & \text{if } a_{mn} > a_{kl} \\ \text{Local Min} & \text{if } a_{mn} < a_{kl} \end{cases}$$

where

$$k = m - \frac{w_{ex}-1}{2}\,:\,m + \frac{w_{ex}-1}{2},\quad (k \ne m)$$

$$l = n - \frac{w_{ex}-1}{2}\,:\,n + \frac{w_{ex}-1}{2},\quad (l \ne n)$$
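This step can be sketched with order-statistics filters from SciPy: a point is flagged wherever it attains the extreme value of its w_ex × w_ex window. For simplicity the sketch accepts ties, whereas the definition above uses strict inequalities.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def extrema_maps(A, w_ex=3):
    """Boolean maps of local maxima/minima over a w_ex x w_ex window (ties kept)."""
    max_map = A == maximum_filter(A, size=w_ex)
    min_map = A == minimum_filter(A, size=w_ex)
    return max_map, min_map

rng = np.random.default_rng(7)
A = rng.normal(size=(16, 16))
max_map, min_map = extrema_maps(A)
```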

Step 2: Obtain the size of window for order-statistic filter

First, we define $$d_{adj-max}$$ and $$d_{adj-min}$$ to be the arrays of distances calculated from each local maximum or minimum point, respectively, to the nearest nonzero element. These arrays are sorted in descending order for convenient selection, and only square windows are considered. The gross window width is then selected by one of the following equations:

$$w_{\max\text{-}en\text{-}g} = \operatorname{minimum} \{ d_{adj\text{-}\max} \} \quad\text{or}\quad w_{\max\text{-}en\text{-}g} = \operatorname{maximum} \{ d_{adj\text{-}\max} \}$$

$$w_{\min\text{-}en\text{-}g} = \operatorname{minimum} \{ d_{adj\text{-}\min} \} \quad\text{or}\quad w_{\min\text{-}en\text{-}g} = \operatorname{maximum} \{ d_{adj\text{-}\min} \}$$

Step 3: Apply order statistics and smoothing filters to obtain the MAX and MIN filter output

To obtain the upper and lower envelopes, two functions $$U_{Ej}(x,y)$$ and $$L_{Ej}(x,y)$$ are defined, each given by an order-statistics filter followed by a smoothing (averaging) filter:

$$U_{Ej}(x,y) = \max_{(s,t) \in Z_{xy}} F_{Tj}(s,t),\qquad U_{Ej}^{sm}(x,y) = \frac{1}{w_{sm} \times w_{sm}} \sum_{(s,t) \in Z_{xy}} U_{Ej}(s,t)$$

$$L_{Ej}(x,y) = \min_{(s,t) \in Z_{xy}} F_{Tj}(s,t),\qquad L_{Ej}^{sm}(x,y) = \frac{1}{w_{sm} \times w_{sm}} \sum_{(s,t) \in Z_{xy}} L_{Ej}(s,t)$$

where $$Z_{xy}$$ is the square window region centered at (x, y), and $$w_{sm}$$ is the window width of the smoothing filter, with $$w_{sm}$$ equal to $$w_{en}$$. The MAX and MIN filters thus form new 2-D matrices for the envelope surfaces, which do not change the original 2-D input data.
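The two filters of this step can be sketched as a moving maximum (or minimum) followed by an equal-width moving average, here using SciPy's `maximum_filter`, `minimum_filter`, and `uniform_filter` as stand-ins for the order-statistics and smoothing filters:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def envelopes(F, w_en=7):
    """Upper/lower envelope estimates: order-statistics filter, then smoothing
    with a window of the same width (w_sm = w_en)."""
    upper = uniform_filter(maximum_filter(F, size=w_en), size=w_en)
    lower = uniform_filter(minimum_filter(F, size=w_en), size=w_en)
    return upper, lower

rng = np.random.default_rng(3)
F = rng.normal(size=(32, 32))
upper, lower = envelopes(F)
mean_env = 0.5 * (upper + lower)   # subtracted from F in the single iteration
```

Averaging preserves the pointwise ordering of the max- and min-filtered surfaces, so the smoothed upper envelope never drops below the smoothed lower envelope.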

Step 4: Set up an estimation from upper and lower envelopes

This step verifies that the envelope estimation in FABEMD is close to the result obtained in BEMD by interpolation. For comparison, corresponding matrices for the upper, lower, and mean envelopes are formed by applying thin-plate spline surface interpolation to the max and min maps.

Advantages
This method (FABEMD) requires less computation to obtain the result rapidly, and it allows a more accurate estimation of the BIMFs. Moreover, FABEMD is better suited to handling large inputs than traditional BEMD. In addition, FABEMD is an efficient method in which boundary effects and overshoot/undershoot problems need not be considered.

Limitations
One particular problem can arise with this method: sometimes there is only one local maximum or minimum element in the input data, which causes the distance array to be empty.

Partial Differential Equation-Based Multidimensional Empirical Mode Decomposition (PDE-based MEMD)
The Partial Differential Equation-Based Multidimensional Empirical Mode Decomposition (PDE-based MEMD) approach is a way to improve on and overcome the difficulties of mean-envelope estimation in traditional EMD. PDE-based MEMD focuses on modifying the original MEMD algorithm; the result provides an analytical formulation that lends itself to theoretical analysis and performance observation. To perform multidimensional EMD, the 1-D PDE-based sifting process is extended to 2-D space, following the steps shown below.

Here we take the 2-D PDE-based EMD as an example. The PDE-based BEMD consists of the following steps.

PDE-based BEMD Algorithm
Step 1: Extend the super diffusion model from 1-D to 2-D

Consider a super diffusion matrix function

$${G_q}(x) = \left(\begin{array}{*{20}{c}} {{g_{q,1}}(x)}&0\\ 0&{{g_{q,2}}(x)} \end{array} \right)$$

where $${g_{q,i}}(x)$$ represents the qth-order stopping function in direction i.

Then, the diffusion equation will be

$${u_t}(x,t) = div(\alpha {G_1}\nabla u(x,t) - (1 - \alpha ){G_2}\nabla \Delta u(x,t))$$

where $$\alpha$$ is the tension parameter, and it is assumed that $$q=2$$.

Step 2: Relate the diffusion model to PDEs on an implicit surface

To make the connection to PDEs on a surface, the equation is written as

$${u_t}(x,t) = - {( - 1)^q}\nabla _S^{2q}u(x,t)$$

where $$\nabla _S^{2q}$$ is the 2qth-order differential operator on u intrinsic to surface S, and the initial condition for the equation will be $$u(y,t)=f$$ for any y on surface S.

Step 3: Consider the numerical resolution schemes

To obtain theoretical and analytical results from the previous equation, an assumption is needed.

Assumption:

Assume the numerical resolution scheme to be a 4th-order PDE with no tension; the 4th-order PDE is then

$${u_t} = - \sum\limits_{i,j = 1}^2 {\partial_i^1({g_i}\,\partial_i^1 \partial_j^2 u)} $$

First, the PDE-based sifting process is approximated by an explicit scheme:

$${U^{k + 1}} = (I - \Delta t\sum\limits_{i,j = 1}^2 {L_{ij}} ){U^k}$$

where $$U$$ is a vector consisting of the value of each pixel, $$L_{ij}$$ is a matrix that is a difference approximation to the operator, and $$\Delta t$$ is a small time step.
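The explicit scheme can be illustrated in one dimension with g ≡ 1 and periodic boundaries (simplifying assumptions for the sketch). Fourth-order diffusion damps fine scales far faster than coarse ones, which is the effect the stopping functions modulate during sifting.

```python
import numpy as np

def biharmonic_step(u, dt):
    """One explicit Euler step of u_t = -u_xxxx (unit grid spacing, periodic)."""
    d2 = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)     # second difference
    d4 = np.roll(d2, -1) - 2.0 * d2 + np.roll(d2, 1)  # fourth difference
    return u - dt * d4

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u0 = np.sin(3 * x) + 0.2 * np.sin(20 * x)             # coarse + fine oscillation
u = u0.copy()
for _ in range(200):
    u = biharmonic_step(u, dt=1e-2)                   # dt below stability limit
```

After 200 steps the fine (wavenumber 20) mode has lost most of its amplitude while the coarse (wavenumber 3) mode is nearly untouched, illustrating why the explicit scheme demands a small time step and motivating the AOS and ADI schemes discussed next.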

Second, an additive operator splitting (AOS) scheme can be used to improve stability, since the explicit scheme becomes unstable for large time steps $$\Delta t$$.

Finally, an alternating direction implicit (ADI) scheme can be applied. With ADI-type schemes, it is suggested to mix the derivative terms to overcome the restriction that ADI-type schemes can otherwise only be used for second-order diffusion equations. The numerically solved equation is then

$${U^{k + 1}} = {\left(\prod\limits_{n = 1}^2 {(I - \Delta t{A_{nn}})} \right)^{ - 1}}\left(I + \Delta t\sum\limits_{i = 1}^2 {\sum\limits_{j \ne i} {A_{ij}} } \right){U^k}$$

where $$A_{ij}$$ is a matrix that is the central difference approximation to the operator $${a_{ij}}\,\partial_i^1 \partial_j^2$$.

Advantages
Based directly on the Navier–Stokes equations, this approach provides a good way to obtain and develop theoretical and numerical results. In particular, PDE-based BEMD works well for image decomposition. The approach can be applied to extract transient signals and to avoid the indeterminacy characterization of some signals.

Boundary Processing in Bidimensional Empirical Mode Decomposition (BPBEMD)
There are several problems with BEMD and with the boundary-extending implementation in the iterative sifting process, including time consumption, the shape and continuity of the edges, and the comparison of decomposition results; all of these can render the decomposition useless. To fix these problems, the Boundary Processing in Bidimensional Empirical Mode Decomposition using Texture Synthesis (BPBEMD) method was created. The main points of the algorithm are as follows.

BPBEMD Algorithm
The core steps of the BPBEMD algorithm are:

Step 1:

Supposing the sizes of the original input data and the resultant data to be $$N\times N$$ and $$(N+2M)\times(N+2M)$$, respectively, the original input data matrix is placed in the middle of the resultant data matrix.

Step 2:

Divide both the original input data matrix and the resultant data matrix into blocks of $$M\times M$$ size.

Step 3:

Find the block that is most similar to its neighboring block in the original input data matrix, and put it into the corresponding position in the resultant data matrix.

Step 4:

Form a distance matrix whose entries are weighted by the distances of the blocks from the boundaries, reflecting their different importance.

Step 5:

Implement iterative extension when a large boundary extension is needed; the blocks in the original input data matrix can then be treated as blocks in the resultant data matrix.

There are also two points to take into consideration when implementing this method in image processing:

(1) Owing to the different properties of natural images and textures, the local average intensity changes easily between different images.

(2) In texture analysis, the local contrast is mostly the same when analyzing the same texture, but it may differ when analyzing a natural image.

Advantages
This method can process a larger number of elements than the traditional BEMD method, and it can shorten the processing time. By relying on nonparametric-sampling-based texture synthesis, BPBEMD obtains better results after decomposing and extracting.

Limitations
Because most image inputs are non-stationary and do not exhibit boundary problems, the BPBEMD method still lacks sufficient evidence that it is adaptive to all kinds of input data. Also, the method is narrowly restricted to use in texture analysis and image processing.

Applications
First, these MEEMD techniques can be used on geophysical data sets, such as climate, magnetic, and seismic data, taking advantage of the fast MEEMD algorithm. MEEMD is often used for nonlinear geophysical data filtering due to its fast algorithms and its ability to handle large data sets with the use of compression without losing key information. The IMFs can also be used for signal enhancement of ground-penetrating radar in nonlinear data processing; this is very effective for detecting geological boundaries from the analysis of field anomalies.

Second, the PDE-based MEMD and FAMEMD can be implemented in audio processing, image processing, and texture analysis. Because of its properties, including stability and low time consumption, the PDE-based MEMD method works well for adaptive decomposition, texture analysis, and data denoising. The FAMEMD is a good method for reducing computation time and obtaining a more precise estimation, so it is also suitable for audio processing, image processing, and further analyses. Finally, the BPBEMD method performs well in image processing and texture analysis owing to its ability to solve the boundary extension problems of earlier techniques.