Talk:Bootstrapping (machine learning)

Does this really deserve its own Machine-Learning centric page? If so, it might be of interest to say how this is different from the bootstrapping used in Statistics, or maybe this would be better as a sub-heading in the Bootstrapping_(statistics) page? --Mikealandewar (talk) 20:49, 16 August 2010 (UTC)

Page rather incomplete
Currently we have a description of how the error is estimated, but nothing about what is done with that estimate. How does bootstrapping then go on to improve the classifier? It would be great if someone could provide some general text for that.--mcld (talk) 11:09, 9 August 2010 (UTC)

The Bootstrap Error Estimation section is not relevant to the article. Bootstrapping is a general strategy that iteratively improves a classifier by training it on an increasingly informative set. The need for bootstrapping arises in situations when a labeled training set is too large to fit in memory (for instance). Bootstrapping iterates over: (i) evaluation of the classifier on a testing set, (ii) identification of its errors, and (iii) augmentation of the training set with these errors. The 'Bootstrap Error Estimate' does not relate to this technique directly. I suggest removing that paragraph, extending the description of the technique as described above, and adding references to papers where it was used in this sense (face detection). —Preceding unsigned comment added by Zk00006 (talk • contribs) 18:36, 26 August 2010 (UTC)
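To make the three-step loop above concrete, here is a minimal sketch in Python. All names and the toy 1-D threshold classifier are illustrative assumptions, not from any specific paper; the point is only to show the evaluate/collect-errors/augment cycle:

```python
def train(examples):
    # Toy classifier on 1-D features: threshold at the midpoint
    # of the two class means, predicting 1 above the threshold.
    xs0 = [x for x, y in examples if y == 0]
    xs1 = [x for x, y in examples if y == 1]
    t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 1 if x > t else 0

def bootstrap(train_set, test_set, rounds=5):
    clf = train(train_set)
    for _ in range(rounds):
        # (i) evaluate the current classifier on the testing set
        # (ii) identify its errors (misclassified examples)
        errors = [(x, y) for x, y in test_set if clf(x) != y]
        if not errors:
            break
        # (iii) augment the training set with those errors, retrain
        train_set = train_set + errors
        test_set = [ex for ex in test_set if ex not in errors]
        clf = train(train_set)
    return train_set, clf

# Hypothetical data: seed set plus a larger pool that the seed
# classifier partly misclassifies.
seed = [(0, 0), (1, 0), (9, 1), (10, 1)]
rest = [(2, 0), (3, 0), (8, 0), (7, 1), (8, 1), (2, 1)]
final_train, clf = bootstrap(seed, rest)
```

In face detection this is typically run with the "pool" being a huge set of background windows, so only the hard negatives the detector actually gets wrong are pulled into memory for retraining.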