
In mapping studies, a classical procedure for evaluating model performance and associated errors is to randomly select a fraction (e.g., 10%) of test observations (here, 1-km pixels of “observed” AGB) that are set aside at the model calibration stage and used only to quantify model prediction error (validation step). This procedure, used in pantropical carbon mapping studies2,3, can be iterated K times with different test and training sets for model cross-validation (hereafter random K-fold CV).
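The random K-fold CV procedure described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline: the predictors, the random-forest model, and the fold count are assumptions chosen for the example.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for 1-km pixels: predictor matrix X and "observed" AGB y
X = rng.uniform(size=(500, 4))
y = X @ np.array([50.0, 30.0, 10.0, 5.0]) + rng.normal(0.0, 5.0, 500)

# Random K-fold CV: in each of K iterations, the test pixels are withheld
# from calibration and used only to quantify prediction error
kf = KFold(n_splits=10, shuffle=True, random_state=0)
rmses = []
for train_idx, test_idx in kf.split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])          # calibration step
    pred = model.predict(X[test_idx])              # validation step
    rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))

print(f"mean RMSE over {kf.get_n_splits()} folds: {np.mean(rmses):.2f}")
```

Because the test pixels are drawn at random, each fold's error estimate reflects performance under the same spatial distribution as the training data, which is the property spatial CV methods are designed to challenge.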
