Virtual Computational Chemistry Laboratory


The figure is updated from Tetko et al., 1996. It shows that after some number of iterations the neural network's prediction error for new, unseen data (here, the validation set) starts to increase, even though the RMSE for the learning set continues to decrease. This phenomenon, known as overtraining (overfitting) of neural networks, is analyzed in Tetko et al., 1995. It is therefore preferable to terminate neural network learning before it converges to a local minimum for the learning set (point S3). This can be done by stopping training at the point where the error for the validation set is minimal (early stopping point S1), or at the minimum for the joint set, i.e. the initial training set (early stopping point S2).
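The early-stopping rule (point S1) can be sketched as follows. This is a minimal illustration, not the applet's actual code: the synthetic error curves, the function name, and the patience parameter are all assumptions made for the example.

```python
# Minimal sketch of early stopping (point S1): monitor the validation-set
# RMSE each epoch and keep the state from the epoch where it is lowest.
# The error curves below are synthetic stand-ins for illustration only.

def train_with_early_stopping(max_epochs=100, patience=10):
    """Stop once validation RMSE has not improved for `patience` epochs."""
    best_rmse = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        # Synthetic curves: learning RMSE keeps falling, while validation
        # RMSE falls and then rises again (overtraining past epoch 30).
        learning_rmse = 1.0 / (1 + epoch)
        validation_rmse = 0.5 + 0.001 * (epoch - 30) ** 2
        if validation_rmse < best_rmse:
            best_rmse = validation_rmse
            best_epoch = epoch      # a real trainer would snapshot weights here
        elif epoch - best_epoch >= patience:
            break                   # early stopping point S1 reached
    return best_epoch, best_rmse

stop_epoch, stop_rmse = train_with_early_stopping()
```

With these synthetic curves the validation minimum occurs at epoch 30, so training stops `patience` epochs later and the epoch-30 state is retained.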


Copyright 2001–2016. All rights reserved.