b is the center of the membership functions, and the parameters γ, a, and c measure the dispersion of the three example membership functions. Could these functions also be used to construct an admissible Mercer kernel? Note that they are translation-invariant functions, so a multidimensional function built from them with the product t‐norm operator is also translation invariant. Furthermore, if we regard such multidimensional functions as translation‐invariant kernels, the following theorem can be used to check whether these kernels are admissible Mercer kernels:
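The product-t-norm construction can be checked empirically. The sketch below is illustrative, not from the text: the one-dimensional triangle kernel, the choice γ = 1, and the random point set are assumptions. It builds a multidimensional kernel as a product of one-dimensional triangle memberships of the coordinate differences and verifies that the resulting Gram matrix is positive semidefinite, the finite-sample signature of an admissible Mercer kernel.

```python
import numpy as np

def tri_kernel_1d(u, gamma=1.0):
    # 1-D translation-invariant triangle kernel: k(u) = max(0, 1 - |u|/gamma)
    return np.maximum(0.0, 1.0 - np.abs(u) / gamma)

def product_tri_kernel(x, z, gamma=1.0):
    # Product t-norm over dimensions; each factor depends only on x - z,
    # so the multidimensional kernel is translation invariant as well.
    return float(np.prod(tri_kernel_1d(np.asarray(x) - np.asarray(z), gamma)))

# Empirical Mercer check: the Gram matrix on any finite point set must be
# positive semidefinite (eigenvalues >= 0 up to round-off).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
G = np.array([[product_tri_kernel(a, b) for b in X] for a in X])
min_eig = float(np.linalg.eigvalsh(G).min())
print(min_eig > -1e-8)  # True: no significant negative eigenvalue
```

A PSD Gram matrix on one point set is only a necessary condition, but a negative eigenvalue here would immediately rule the kernel out.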
A translation‐invariant kernel k(x, xi) = k(x − xi) is an admissible Mercer kernel if and only if the Fourier transform

F[k](ω) = (2π)^(−d/2) ∫_{ℝ^d} exp(−i⟨ω, x⟩) k(x) dx
is non‐negative [100]. For the triangle and generalized bell‐shaped membership functions, the resulting Fourier transforms are non‐negative everywhere (the transform of the symmetric triangle function, for instance, is a squared‐sinc expression and so never drops below zero). Since both of them are non‐negative, we can construct Mercer kernels with triangle and generalized bell‐shaped membership functions. But the Fourier transform of the trapezoidal‐shaped membership function is an oscillating product of two sine factors over ω²,
which is not always non‐negative. In conclusion, an admissible kernel can also be constructed as a product‐type multidimensional triangle or generalized bell‐shaped membership function, but not as a trapezoidal‐shaped one.
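The sign behavior of these transforms can be verified numerically. The following sketch is illustrative (the unit triangle, the trapezoid with top [−1, 1] and base [−2, 2], the integration window, and the frequency grid are all assumptions, not values from the text): it approximates F(ω) = ∫ k(x) cos(ωx) dx for both shapes and checks where each transform dips below zero.

```python
import numpy as np

def fourier_cos(f, omegas, half_width=5.0, n=20001):
    # Real Fourier transform of an even function: F(w) = integral of
    # f(x)*cos(w*x) dx, approximated by a Riemann sum on [-half_width, half_width].
    x = np.linspace(-half_width, half_width, n)
    dx = x[1] - x[0]
    vals = f(x)
    return np.array([np.sum(vals * np.cos(w * x)) * dx for w in omegas])

triangle = lambda x: np.maximum(0.0, 1.0 - np.abs(x))     # base [-1, 1]
trapezoid = lambda x: np.clip(2.0 - np.abs(x), 0.0, 1.0)  # top [-1, 1], base [-2, 2]

omegas = np.linspace(0.1, 20.0, 400)
tri_min = fourier_cos(triangle, omegas).min()
trap_min = fourier_cos(trapezoid, omegas).min()
print(tri_min > -1e-4)   # True: triangle transform stays non-negative
print(trap_min < -1e-2)  # True: trapezoid transform dips clearly below zero
```

The triangle's transform touches zero at isolated frequencies but never goes negative, while the trapezoid's transform changes sign, which is exactly why only the former yields an admissible Mercer kernel.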
Experience‐oriented FM via reduced‐set vectors: Given n training data pairs {(xi, yi)}, i = 1, …, n:
Given the good performance of SVR, it is reasonable to carry its successful experience over to FM. SVR with Mercer kernels is therefore employed to generate the initial fuzzy model and the available experience on the training data. It is also expected that reducing the number of rules will make the resulting rule base more interpretable and transparent. Thus, a simplification algorithm is introduced to generate reduced‐set vectors that simplify the structure of the initial fuzzy model, while the parameters of the derived simplified model are adjusted by a hybrid learning algorithm combining linear ridge regression with the gradient descent method, based on a new performance measure. As a start, let us reformulate Eq. (4.60) through a simple equivalent algebraic transform to obtain
where c is the number of support vectors,
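The hybrid learning loop described above can be sketched in a toy setting. This is a minimal illustration, not the text's algorithm: the synthetic data, the Gaussian memberships, the stand-in reduced-set centers, the ridge parameter, and the learning rate are all assumptions; only the alternation (a ridge solve for the linear consequent parameters, then a gradient step on the membership widths) mirrors the hybrid scheme described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-3.0, 3.0, 80)                       # toy training inputs
y = np.sinc(X) + 0.05 * rng.standard_normal(X.size)  # toy targets

# Stand-ins for reduced-set vectors kept after simplifying the initial model
# (the text derives them from the SVR solution; here they are just a subset).
centers = X[::10].copy()
sigma = np.full(centers.size, 0.8)  # membership widths, tuned by gradient descent

def firing(X, centers, sigma):
    # Gaussian membership degree of each input with respect to each rule center.
    return np.exp(-((X[:, None] - centers[None, :]) ** 2)
                  / (2.0 * sigma[None, :] ** 2))

def ridge_consequents(Phi, y, lam=1e-3):
    # Linear ridge regression for the rule consequent weights.
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

baseline = np.mean((y - y.mean()) ** 2)
for _ in range(50):
    Phi = firing(X, centers, sigma)
    w = ridge_consequents(Phi, y)                 # linear (ridge) step
    err = Phi @ w - y
    # d(mse)/d(sigma_j) through the Gaussian memberships (chain rule)
    d = (X[:, None] - centers[None, :]) ** 2 / sigma[None, :] ** 3
    grad = 2.0 * np.mean(err[:, None] * w[None, :] * Phi * d, axis=0)
    sigma = np.clip(sigma - 0.05 * grad, 0.1, None)  # gradient step on widths

Phi = firing(X, centers, sigma)
mse = np.mean((Phi @ ridge_consequents(Phi, y) - y) ** 2)
print(mse < baseline)  # True: the tuned model beats the constant predictor
```

Solving the linear parameters exactly at every iteration keeps the gradient updates focused on the only genuinely nonlinear quantities, the membership widths, which is the usual motivation for this kind of hybrid scheme.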