Feed-forward artificial neural networks (ANNs) are being used increasingly to model water resources variables. In this technical note, six methods for optimizing the connection weights of feed-forward ANNs are investigated in terms of generalization ability, parsimony, and training speed. These include the generalized delta (GD) rule, the normalized cumulative delta (NCD) rule, the delta-bar-delta (DBD) algorithm, the extended-delta-bar-delta (EDBD) algorithm, the QuickProp (QP) algorithm, and the MaxProp (MP) algorithm. Each of these algorithms is applied to a particular case study, the forecasting of salinity in the River Murray at Murray Bridge, South Australia. Thirty models are developed for each algorithm, starting from different positions in weight space. The results obtained indicate that the generalization ability of the first-order methods investigated (i.e., GD, NCD, DBD, and EDBD) is better than that of the second-order algorithms (i.e., QP and MP). When the prediction errors are averaged over the 30 trials carried out, the performance of the first-order methods in which the size of the steps taken in weight space is automatically adjusted in response to changes in the error surface (i.e., DBD and EDBD) is better than that obtained when predetermined step sizes are used (i.e., GD and NCD). However, the reverse applies when the best forecasts of the 30 trials are considered. The results obtained indicate that the EDBD algorithm is the most parsimonious and the MP algorithm is the least parsimonious. It was found that any impact different learning rules have on training speed is masked by the effect of epoch size and the number of hidden nodes required for optimal model performance. © 1999 American Geophysical Union
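The distinction drawn above between first-order methods with adaptive step sizes (DBD, EDBD) and those with predetermined step sizes (GD, NCD) rests on maintaining a separate learning rate for each connection weight and adjusting it according to the behaviour of the local error surface. The following Python/NumPy sketch illustrates the delta-bar-delta idea only in generic form; the function name, the parameter values (kappa, phi, theta), and the toy quadratic error surface are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def dbd_update(w, grad, lr, bar_grad, kappa=0.01, phi=0.5, theta=0.7):
    """One delta-bar-delta step (generic sketch, not the study's exact settings).

    Each weight keeps its own learning rate: it grows additively (by kappa)
    while successive gradients agree in sign, and shrinks multiplicatively
    (by a factor 1 - phi) when they disagree."""
    agree = grad * bar_grad                    # > 0 where current and averaged gradients share a sign
    lr = np.where(agree > 0, lr + kappa,
         np.where(agree < 0, lr * (1.0 - phi), lr))
    w = w - lr * grad                          # gradient-descent step with per-weight rates
    bar_grad = (1.0 - theta) * grad + theta * bar_grad  # exponential trace of past gradients
    return w, lr, bar_grad

# Toy usage on a quadratic error surface E(w) = 0.5 * ||w||^2, whose gradient is w
# (a stand-in for the backpropagated error gradient of a feed-forward ANN).
w = np.array([1.0, -2.0])
lr = np.full_like(w, 0.05)                     # assumed initial per-weight learning rates
bar_grad = np.zeros_like(w)
for _ in range(50):
    grad = w
    w, lr, bar_grad = dbd_update(w, grad, lr, bar_grad)
```

By contrast, the GD and NCD rules discussed in the note keep the step size fixed throughout training, which is the property the averaged-error comparison over the 30 trials is probing.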