This MATLAB code provides a preliminary implementation of the training of Multi-Layer Perceptron (MLP) neural networks for binary classification, using an estimate of the Bayes risk as the cost function. The Bayes risk estimate is obtained with the Parzen windows method applied to the one-dimensional output of the neural network.
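The repository code is MATLAB; purely as an illustration of the underlying idea, the following Python sketch estimates the Bayes risk from the one-dimensional network outputs with Gaussian Parzen windows. The function name, bandwidth `h`, decision threshold, and per-class costs here are illustrative assumptions, not the signatures used by the MATLAB functions. Each Gaussian kernel centred at an output contributes its tail mass on the wrong side of the threshold, which makes the error-probability estimate a sum of Gaussian CDFs and hence differentiable in the outputs.

```python
import numpy as np
from math import erf

def phi(z):
    """Standard normal CDF, applied elementwise."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

def bayes_risk_estimate(y_pos, y_neg, h=0.1, threshold=0.0,
                        c_fn=1.0, c_fp=1.0):
    """Smooth Parzen-window estimate of the Bayes risk from the
    one-dimensional network outputs of the two classes.

    Illustrative sketch: names and defaults are assumptions, not the
    MATLAB implementation's interface.
    """
    y_pos = np.asarray(y_pos, float)   # outputs for the positive class
    y_neg = np.asarray(y_neg, float)   # outputs for the negative class
    n = len(y_pos) + len(y_neg)
    p_pos = len(y_pos) / n             # empirical class priors
    p_neg = len(y_neg) / n
    # P(output < threshold | positive class): false negatives
    p_fn = np.mean(phi((threshold - y_pos) / h))
    # P(output > threshold | negative class): false positives
    p_fp = np.mean(phi((y_neg - threshold) / h))
    return c_fn * p_pos * p_fn + c_fp * p_neg * p_fp
```

Because the estimate is smooth, its gradient with respect to the outputs (and, via backpropagation, the network weights) is available for gradient-based training.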

This preliminary implementation has some limitations:

  - Only batch training is considered
  - The only optimization methods are adaptive step size gradient descent and gradient descent with momentum

The software implements the method presented in the following papers:

  -  M. Lázaro, M. H. Hayes, A. R. Figueiras-Vidal, ``Training neural network classifiers through Bayes risk minimization applying unidimensional Parzen windows''. Pattern Recognition 77 (2018), pages 204–215.
    > Available at: https://doi.org/10.1016/j.patcog.2017.12.018

  - M. Lázaro and A. R. Figueiras-Vidal, ``A Bayes Risk Minimization Machine for Example-Dependent Cost Classification''. IEEE Transactions on Cybernetics (2019), available through IEEE Early Access.
    > Available at: https://doi.org/10.1109/TCYB.2019.2913572


The main MATLAB functions are the following:

  - mlpMg_BayesMirror

    This function implements the method using adaptive step size gradient descent optimization and mirror windows for the Parzen estimators of both classes. Costs of erroneous decisions are specified per class.
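One common reading of "mirror windows" is reflection-based boundary correction of the Parzen estimate: since the network output is bounded (e.g., in [-1, 1] for a tanh output unit), each kernel is mirrored about the interval ends so that no probability mass leaks outside the output range. The sketch below illustrates that generic reflection technique in Python; the exact construction used by the MATLAB code follows the referenced paper and may differ, and the interval bounds and bandwidth here are assumptions.

```python
import numpy as np

def gauss_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def parzen_mirror(y_eval, samples, h=0.1, lo=-1.0, hi=1.0):
    """Parzen density estimate on the bounded interval [lo, hi] with
    kernels mirrored (reflected) about both ends, so the estimate
    integrates to one over the output range.

    Illustrative sketch, assuming a tanh-bounded output in [-1, 1].
    """
    samples = np.asarray(samples, float)
    y = np.atleast_1d(np.asarray(y_eval, float))[:, None]
    # original kernel centres plus their reflections about lo and hi
    centers = np.concatenate([samples,
                              2.0 * lo - samples,
                              2.0 * hi - samples])
    dens = gauss_kernel((y - centers[None, :]) / h).sum(axis=1)
    return dens / (len(samples) * h)
```

Without the mirrored terms, a kernel centred near a boundary would place part of its mass outside the feasible output range, biasing the density (and hence the risk estimate) downward near the interval ends.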

  - mlpMg_CostesMirrorDO

    This function extends the previous method to support costs of erroneous decisions per example and to include dropout in the training procedure.
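Moving from per-class to per-example costs amounts to weighting each training example's contribution to the risk estimate by its own cost. The following Python sketch extends the uniform-cost estimate in that direction; as before, the function name and parameters are illustrative assumptions rather than the MATLAB interface, and dropout is omitted for brevity.

```python
import numpy as np
from math import erf

def _phi(z):
    """Standard normal CDF, applied elementwise."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

def risk_example_costs(y_pos, y_neg, c_pos, c_neg, h=0.1, threshold=0.0):
    """Parzen-window risk estimate with example-dependent costs: each
    kernel's tail mass on the wrong side of the threshold is weighted
    by that example's misclassification cost.

    Illustrative sketch; with all costs equal to one it reduces to the
    uniform-cost estimate.
    """
    y_pos, c_pos = np.asarray(y_pos, float), np.asarray(c_pos, float)
    y_neg, c_neg = np.asarray(y_neg, float), np.asarray(c_neg, float)
    n = len(y_pos) + len(y_neg)
    # cost-weighted false-negative and false-positive contributions
    fn = np.sum(c_pos * _phi((threshold - y_pos) / h))
    fp = np.sum(c_neg * _phi((y_neg - threshold) / h))
    return (fn + fp) / n
```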

  - mlpMm_CostesMirrorDO

    This function implements the same method, but uses gradient descent with momentum instead of adaptive step size gradient descent.
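For reference, the standard momentum update replaces the plain gradient step with a velocity term that accumulates past gradients. A minimal Python sketch of one batch update (hyperparameter names and defaults are illustrative, not taken from the MATLAB code):

```python
import numpy as np

def momentum_step(params, grads, velocity, lr=0.01, mu=0.9):
    """One batch update of gradient descent with momentum:
    v <- mu * v - lr * g ;  w <- w + v.

    params, grads, velocity are parallel lists of weight arrays,
    their gradients, and the accumulated velocities.
    """
    new_v = [mu * v - lr * g for v, g in zip(velocity, grads)]
    new_p = [p + v for p, v in zip(params, new_v)]
    return new_p, new_v
```

The momentum coefficient `mu` damps oscillations across steep directions of the cost surface while accelerating progress along consistent ones, at the price of one extra state array per weight matrix.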

Update: May 2019.
