Abstract:
Bayes' rule is introduced as a coherent averaging strategy for multiclassifier system (MCS) output, and as a strategy for eliminating the uncertainty associated with any particular choice of classifier-model parameters. We use a Markov chain Monte Carlo (MCMC) method for efficient selection of classifiers to approximate the computationally intractable elements of the Bayesian approach; the set of classifiers so selected constitutes our MCS. Furthermore, we exploit the massive sampling (thousands of classifiers) within the Bayesian framework to derive an estimate of the confidence to be placed in any classification result, thus providing a sound basis for rejecting some MCS classification results. We present uncertainty envelopes as one way to derive these confidence estimates from the population of classifiers that constitutes the MCS, and we show that as the diversity among component classifiers increases, so does the accuracy of confident classification estimates. Diversity, however, is not a panacea: if it is increased by elaborating the data models, then care must be taken to match model sampling to model complexity; otherwise diversity can have the negative effect of producing excessive numbers of low-confidence classifications.
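The confidence mechanism described above can be sketched concretely: with thousands of MCMC-sampled classifiers voting on each input, the fraction of agreeing votes serves as a confidence estimate, and inputs whose agreement falls below a threshold are rejected rather than classified. The sketch below, in Python, is a minimal illustration under that reading; the function name `envelope_decisions` and the threshold `tau` are assumptions for illustration, not the paper's notation or its formal definition of an uncertainty envelope.

```python
import numpy as np

def envelope_decisions(votes: np.ndarray, tau: float = 0.9):
    """Derive confident/rejected decisions from an MCS vote matrix.

    votes : (n_classifiers, n_samples) array of hard class labels,
            one row per classifier drawn by the MCMC sampler.
    tau   : agreement fraction above which a classification is
            treated as confident (illustrative threshold).
    Returns (labels, confident): labels[i] is the majority class
    for sample i; confident[i] flags whether the fraction of
    classifiers agreeing on that class reaches tau.
    """
    n_classifiers, n_samples = votes.shape
    labels = np.empty(n_samples, dtype=votes.dtype)
    agreement = np.empty(n_samples)
    for i in range(n_samples):
        classes, counts = np.unique(votes[:, i], return_counts=True)
        k = counts.argmax()
        labels[i] = classes[k]
        agreement[i] = counts[k] / n_classifiers
    return labels, agreement >= tau

# Toy usage: 1000 sampled classifiers vote on three test points
# (near-unanimous, split, near-unanimous).
rng = np.random.default_rng(0)
probs = np.array([[0.98, 0.02], [0.55, 0.45], [0.03, 0.97]])
votes = np.stack([rng.choice(2, size=1000, p=p) for p in probs], axis=1)
labels, confident = envelope_decisions(votes, tau=0.9)
print(labels, confident)  # the split point should be rejected as low-confidence
```

In this toy run the middle point, on which the classifier population disagrees, falls below the agreement threshold and is rejected; this is the behavior the abstract attributes to excessive model diversity, where too many inputs end up as low-confidence classifications.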