Statistical Learning Theory

Algorithmic Luckiness

Over the last few decades, several frameworks for studying the generalisation performance of learning algorithms have emerged. Among them, the most notable are the Vapnik-Chervonenkis (VC) framework (empirical risk minimisation algorithms), the compression framework (on-line algorithms and compression schemes) and the luckiness framework (structural risk minimisation algorithms). However, apart from the compression framework, none of these frameworks considers the generalisation error of the single hypothesis learned by a given learning algorithm; instead, they resort to the more stringent requirement of uniform convergence. The algorithmic luckiness framework is an extension of the powerful luckiness framework which studies the generalisation error of particular learning algorithms relative to some prior knowledge about the target concept, encoded via a luckiness function.
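
Schematically (an illustrative sketch with constants, logarithmic factors and measurability conditions omitted, not a bound from the papers below), the contrast is between a uniform convergence guarantee over a whole hypothesis space H and a guarantee for the single hypothesis A(z) returned by a fixed algorithm A on the sample z:

    \Pr\left( \sup_{h \in H} \bigl| R(h) - \hat{R}_z(h) \bigr| > \epsilon \right) \le \delta
    \qquad \text{versus} \qquad
    \Pr\left( R(A(z)) - \hat{R}_z(A(z)) > \epsilon \right) \le \delta ,

where R denotes the expected risk and \hat{R}_z the empirical risk on z. In the algorithmic luckiness framework the complexity term of the second guarantee is measured by covers of the set of functions the fixed algorithm could have learned, rather than by covers of all of H.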

  • R. Herbrich and R. C. Williamson, "Algorithmic Luckiness," Journal of Machine Learning Research, vol. 3, pp. 175–212, 2002.

    Classical statistical learning theory studies the generalisation performance of machine learning algorithms rather indirectly. One of the main detours is that algorithms are studied in terms of the hypothesis class that they draw their hypotheses from. In this paper, motivated by the luckiness framework of Shawe-Taylor et al. (1998), we study learning algorithms more directly and in a way that allows us to exploit the serendipity of the training sample. The main difference to previous approaches lies in the complexity measure; rather than covering all hypotheses in a given hypothesis space it is only necessary to cover the functions which could have been learned using the fixed learning algorithm. We show how the resulting framework relates to the VC, luckiness and compression frameworks. Finally, we present an application of this framework to the maximum margin algorithm for linear classifiers which results in a bound that exploits the margin, the sparsity of the resultant weight vector, and the degree of clustering of the training data in feature space.

    @article{herbrich2002algorithmicluckiness,
    abstract = {Classical statistical learning theory studies the generalisation performance of machine learning algorithms rather indirectly. One of the main detours is that algorithms are studied in terms of the hypothesis class that they draw their hypotheses from. In this paper, motivated by the luckiness framework of Shawe-Taylor et al. (1998), we study learning algorithms more directly and in a way that allows us to exploit the serendipity of the training sample. The main difference to previous approaches lies in the complexity measure; rather than covering all hypotheses in a given hypothesis space it is only necessary to cover the functions which could have been learned using the fixed learning algorithm. We show how the resulting framework relates to the VC, luckiness and compression frameworks. Finally, we present an application of this framework to the maximum margin algorithm for linear classifiers which results in a bound that exploits the margin, the sparsity of the resultant weight vector, and the degree of clustering of the training data in feature space.},
    author = {Herbrich, Ralf and Williamson, Robert C},
    journal = {Journal of Machine Learning Research},
    pages = {175--212},
    title = {Algorithmic Luckiness},
    url = {https://www.herbrich.me/papers/algoluck.pdf},
    volume = {3},
    year = {2002}
    }

  • R. Herbrich and R. C. Williamson, "Algorithmic Luckiness," in Advances in Neural Information Processing Systems 14, 2001, pp. 391–397.

    In contrast to standard statistical learning theory which studies uniform bounds on the expected error we present a framework that exploits the specific learning algorithm used. Motivated by the luckiness framework [Taylor et al., 1998] we are also able to exploit the serendipity of the training sample. The main difference to previous approaches lies in the complexity measure; rather than covering all hypotheses in a given hypothesis space it is only necessary to cover the functions which could have been learned using the fixed learning algorithm. We show how the resulting framework relates to the VC, luckiness and compression frameworks. Finally, we present an application of this framework to the maximum margin algorithm for linear classifiers which results in a bound that exploits both the margin and the distribution of the data in feature space.

    @inproceedings{herbrich2001algoluck,
    abstract = {In contrast to standard statistical learning theory which studies uniform bounds on the expected error we present a framework that exploits the specific learning algorithm used. Motivated by the luckiness framework [Taylor et al., 1998] we are also able to exploit the serendipity of the training sample. The main difference to previous approaches lies in the complexity measure; rather than covering all hypotheses in a given hypothesis space it is only necessary to cover the functions which could have been learned using the fixed learning algorithm. We show how the resulting framework relates to the VC, luckiness and compression frameworks. Finally, we present an application of this framework to the maximum margin algorithm for linear classifiers which results in a bound that exploits both the margin and the distribution of the data in feature space.},
    author = {Herbrich, Ralf and Williamson, Robert C},
    booktitle = {Advances in Neural Information Processing Systems 14},
    pages = {391--397},
    title = {Algorithmic Luckiness},
    url = {https://www.herbrich.me/papers/nips01_algoluck.pdf},
    year = {2001}
    }

PAC Bayesian Framework

In the Bayesian framework, learning is viewed as updating, in light of the data, a prior belief about the target concept that governs the data-generating process. Once a learning algorithm is expressed as an update of a probability distribution such that the Bayes classifier is equivalent to the classifier at hand, the whole (and powerful) machinery of PAC-Bayesian theory can be applied. We are particularly interested in the study of linear classifiers. A geometrical picture reveals that the margin is only an approximation to the real quantity controlling the generalisation error: the ratio of the volume of consistent classifiers to the whole volume of parameter space. Hence we are able to remove awkward constants as well as permanent complexity terms from known margin bounds. The resulting bound can be considered tight and is practically useful for bound-based model selection.
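
As a minimal illustration of this geometrical picture (our sketch, not code from the papers below), the fraction of parameter space consistent with a separable training sample can be estimated by sampling weight vectors uniformly from the unit sphere:

    # Monte Carlo sketch: estimate the fraction of the unit sphere of
    # weight vectors consistent with a linearly separable sample -- the
    # volume ratio identified above as the quantity controlling the
    # generalisation error of linear classifiers.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy separable data: labels generated by a hidden unit weight vector.
    d, m = 5, 10
    w_true = rng.standard_normal(d)
    w_true /= np.linalg.norm(w_true)
    X = rng.standard_normal((m, d))
    y = np.sign(X @ w_true)

    # Sample classifiers uniformly from the unit sphere and count those
    # that classify every training example correctly.
    W = rng.standard_normal((100_000, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    consistent = np.all(np.sign(W @ X.T) == y, axis=1)
    print(f"estimated version-space fraction: {consistent.mean():.4%}")

The larger this fraction, the smaller the complexity term; the margin enters the analysis only as a way of lower-bounding it.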

  • T. Graepel, R. Herbrich, and J. Shawe-Taylor, "PAC-Bayesian Compression Bounds on the Prediction Error of Learning Algorithms for Classification," Machine Learning, vol. 59, pp. 55–76, 2005.

    We consider bounds on the prediction error of classification algorithms based on sample compression. We refine the notion of a compression scheme to distinguish permutation and repetition invariant and non-permutation and repetition invariant compression schemes leading to different prediction error bounds. Also, we extend known results on compression to the case of non-zero empirical risk. We provide bounds on the prediction error of classifiers returned by mistake-driven online learning algorithms by interpreting mistake bounds as bounds on the size of the respective compression scheme of the algorithm. This leads to a bound on the prediction error of perceptron solutions that depends on the margin a support vector machine would achieve on the same training sample. Furthermore, using the property of compression we derive bounds on the average prediction error of kernel classifiers in the PAC-Bayesian framework. These bounds assume a prior measure over the expansion coefficients in the data-dependent kernel expansion and bound the average prediction error uniformly over subsets of the space of expansion coefficients.

    @article{graepel2005pacbayesian,
    abstract = {We consider bounds on the prediction error of classification algorithms based on sample compression. We refine the notion of a compression scheme to distinguish permutation and repetition invariant and non-permutation and repetition invariant compression schemes leading to different prediction error bounds. Also, we extend known results on compression to the case of non-zero empirical risk. We provide bounds on the prediction error of classifiers returned by mistake-driven online learning algorithms by interpreting mistake bounds as bounds on the size of the respective compression scheme of the algorithm. This leads to a bound on the prediction error of perceptron solutions that depends on the margin a support vector machine would achieve on the same training sample. Furthermore, using the property of compression we derive bounds on the average prediction error of kernel classifiers in the PAC-Bayesian framework. These bounds assume a prior measure over the expansion coefficients in the data-dependent kernel expansion and bound the average prediction error uniformly over subsets of the space of expansion coefficients.},
    author = {Graepel, Thore and Herbrich, Ralf and Shawe-Taylor, John},
    journal = {Machine Learning},
    pages = {55--76},
    title = {{PAC-Bayesian} Compression Bounds on the Prediction Error of Learning Algorithms for Classification},
    url = {https://www.herbrich.me/papers/graehertay05.pdf},
    volume = {59},
    year = {2005}
    }

  • R. Herbrich, T. Graepel, and R. C. Williamson, "The Structure of Version Space," in Innovations in Machine Learning: Theory and Applications, Springer, 2005, pp. 257–274.

    We investigate the generalisation performance of consistent classifiers, i.e. classifiers that are contained in the so-called version space, both from a theoretical and experimental angle. In contrast to classical VC analysis – where no single classifier within version space is singled out on grounds of a generalisation error bound – the data dependent structural risk minimisation framework suggests that there exists one particular classifier that is to be preferred because it minimises the generalisation error bound. This is usually taken to provide a theoretical justification for learning algorithms such as the well known support vector machine. A reinterpretation of a recent PAC-Bayesian result, however, reveals that given a suitably chosen hypothesis space there exists a large fraction of classifiers with small generalisation error although we cannot readily identify them for a specific learning task. In the particular case of linear classifiers we show that classifiers found by the classical perceptron algorithm have guarantees bounded by the size of version space. These results are complemented with an empirical study for kernel classifiers on the task of handwritten digit recognition which demonstrates that even classifiers with a small margin may exhibit excellent generalisation. In order to perform this analysis we introduce the kernel Gibbs sampler – an algorithm which can be used to sample consistent kernel classifiers.

    @incollection{herbrich2005versionspace,
    abstract = {We investigate the generalisation performance of consistent classifiers, i.e. classifiers that are contained in the so-called version space, both from a theoretical and experimental angle. In contrast to classical VC analysis - where no single classifier within version space is singled out on grounds of a generalisation error bound - the data dependent structural risk minimisation framework suggests that there exists one particular classifier that is to be preferred because it minimises the generalisation error bound. This is usually taken to provide a theoretical justification for learning algorithms such as the well known support vector machine. A reinterpretation of a recent PAC-Bayesian result, however, reveals that given a suitably chosen hypothesis space there exists a large fraction of classifiers with small generalisation error although we cannot readily identify them for a specific learning task. In the particular case of linear classifiers we show that classifiers found by the classical perceptron algorithm have guarantees bounded by the size of version space. These results are complemented with an empirical study for kernel classifiers on the task of handwritten digit recognition which demonstrates that even classifiers with a small margin may exhibit excellent generalisation. In order to perform this analysis we introduce the kernel Gibbs sampler - an algorithm which can be used to sample consistent kernel classifiers.},
    author = {Herbrich, Ralf and Graepel, Thore and Williamson, Robert C},
    booktitle = {Innovations in Machine Learning: Theory and Applications},
    chapter = {9},
    pages = {257--274},
    publisher = {Springer},
    title = {The Structure of Version Space},
    url = {https://www.herbrich.me/papers/holmes_mod2.pdf},
    year = {2005}
    }

  • R. Herbrich and T. Graepel, "A PAC-Bayesian Margin Bound for Linear Classifiers," IEEE Transactions on Information Theory, vol. 48, iss. 12, pp. 3140–3150, 2002.

    We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training sample. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound, which was developed in the luckiness framework, and scales logarithmically in the inverse margin. Even in the case of less training examples than input dimensions sufficiently large margins lead to non-trivial bound values and — for maximum margins — to a vanishing complexity term. In contrast to previous results, however, the new bound does depend on the dimensionality of feature space. The analysis shows that the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the fraction of hypothesis space consistent with the training sample. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal with respect to the new bound only if the feature vectors in the training sample are all of the same length. As a consequence we recommend to use SVMs on normalised feature vectors only. Numerical simulations support this recommendation and demonstrate that the new error bound can be used for the purpose of model selection.

    @article{herbrich2002marginbound,
    abstract = {We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training sample. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound, which was developed in the luckiness framework, and scales logarithmically in the inverse margin. Even in the case of less training examples than input dimensions sufficiently large margins lead to non-trivial bound values and — for maximum margins — to a vanishing complexity term. In contrast to previous results, however, the new bound does depend on the dimensionality of feature space. The analysis shows that the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the fraction of hypothesis space consistent with the training sample. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal with respect to the new bound only if the feature vectors in the training sample are all of the same length. As a consequence we recommend to use SVMs on normalised feature vectors only. Numerical simulations support this recommendation and demonstrate that the new error bound can be used for the purpose of model selection.},
    author = {Herbrich, Ralf and Graepel, Thore},
    journal = {IEEE Transactions on Information Theory},
    number = {12},
    pages = {3140--3150},
    title = {A {PAC-Bayesian} Margin Bound for Linear Classifiers},
    url = {https://www.herbrich.me/papers/ieee-pacbayes.pdf},
    volume = {48},
    year = {2002}
    }

  • R. Herbrich and T. Graepel, "A PAC-Bayesian Margin Bound for Linear Classifiers: Why SVMs work," in Advances in Neural Information Processing Systems 13, 2000, pp. 224–230.

    We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training set. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound by Shawe-Taylor et al. and scales logarithmically in the inverse margin. Even in the case of less training examples than input dimensions sufficiently large margins lead to non-trivial bound values and – for maximum margins – to a vanishing complexity term. Furthermore, the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the volume ratio between the whole hypothesis space and the subset of consistent hypotheses. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal w.r.t. the new bound only if the feature vectors are all of the same length. As a consequence we recommend to use SVMs on normalised feature vectors only – a recommendation that is well supported by our numerical experiments on two benchmark data sets.

    @inproceedings{herbrich2000pacbayesiansvm,
    abstract = {We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training set. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound by Shawe-Taylor et al. and scales logarithmically in the inverse margin. Even in the case of less training examples than input dimensions sufficiently large margins lead to non-trivial bound values and - for maximum margins - to a vanishing complexity term. Furthermore, the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the volume ratio between the whole hypothesis space and the subset of consistent hypotheses. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal w.r.t. the new bound only if the feature vectors are all of the same length. As a consequence we recommend to use SVMs on normalised feature vectors only - a recommendation that is well supported by our numerical experiments on two benchmark data sets.},
    author = {Herbrich, Ralf and Graepel, Thore},
    booktitle = {Advances in Neural Information Processing Systems 13},
    pages = {224--230},
    title = {A {PAC-Bayesian} Margin Bound for Linear Classifiers: Why SVMs work},
    url = {https://www.herbrich.me/papers/pacbayes.pdf},
    year = {2000}
    }

Sparsity and Generalisation

It is generally accepted that inferring a function from only a finite amount of data is possible only if one restricts either the model of the data (descriptive approach) or the model of the dependencies (predictive approach). Over the years, sparse models have become very popular in the field of machine learning. Sparse models are additive models f(x) = ∑i αi k(x, xi) – also referred to as kernel models – where, at the solution for a finite amount of data, only a few αi are non-zero. Surprisingly, Bayesian schemes (like Gaussian processes and ridge regression) which do not enforce such sparsity nevertheless show good generalisation behaviour. We studied an explanation of this fact and, more generally, the usefulness of sparsity in machine learning.
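
A minimal sketch of such a model (illustrative only; the kernel, data and coefficients are made up):

    # A sparse kernel expansion f(x) = sum_i alpha_i * k(x, x_i): only a
    # few coefficients are non-zero, so the prediction depends on a small
    # subset of the training points (e.g. the support vectors of an SVM).
    # Kernel ridge regression, by contrast, solves
    # alpha = (K + lambda * I)^{-1} y, which is generally dense in alpha,
    # yet such non-sparse schemes also generalise well.
    import numpy as np

    def rbf_kernel(a, b, gamma=1.0):
        """k(a, b) = exp(-gamma * ||a - b||^2)."""
        return np.exp(-gamma * np.sum((a - b) ** 2))

    rng = np.random.default_rng(1)
    X_train = rng.standard_normal((10, 2))

    alpha = np.zeros(10)                   # 10 expansion coefficients ...
    alpha[[0, 4, 7]] = [0.8, -1.2, 0.5]    # ... only 3 of which are non-zero

    def f(x):
        return sum(a * rbf_kernel(x, xi)
                   for a, xi in zip(alpha, X_train) if a != 0.0)

    print(f(np.array([0.3, -0.1])))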

  • T. Graepel, R. Herbrich, and J. Shawe-Taylor, "Generalisation Error Bounds for Sparse Linear Classifiers," in Proceedings of the 13th Annual Conference on Computational Learning Theory, 2000, pp. 298–303.

    We provide small sample size bounds on the generalisation error of linear classifiers that are sparse in their dual representation given by the expansion coefficients of the weight vector in terms of the training data. These results theoretically justify algorithms like the Support Vector Machine, the Relevance Vector Machine and K-nearest-neighbour. The bounds are a-posteriori bounds to be evaluated after learning when the attained level of sparsity is known. In a PAC-Bayesian style prior knowledge about the expected sparsity is incorporated into the bounds. The proofs avoid the use of double sample arguments by taking into account the sparsity that leaves unused training points for the evaluation of classifiers. We furthermore give a PAC-Bayesian bound on the average generalisation error over subsets of parameter space that may pave the way combining sparsity in the expansion coefficients and margin in a single bound. Finally, reinterpreting a mistake bound for the classical perceptron algorithm due to Novikoff we demonstrate that our new results put classifiers found by this algorithm on a firm theoretical basis.

    @inproceedings{graepel2000sparse,
    abstract = {We provide small sample size bounds on the generalisation error of linear classifiers that are sparse in their dual representation given by the expansion coefficients of the weight vector in terms of the training data. These results theoretically justify algorithms like the Support Vector Machine, the Relevance Vector Machine and K-nearest-neighbour. The bounds are a-posteriori bounds to be evaluated after learning when the attained level of sparsity is known. In a PAC-Bayesian style prior knowledge about the expected sparsity is incorporated into the bounds. The proofs avoid the use of double sample arguments by taking into account the sparsity that leaves unused training points for the evaluation of classifiers. We furthermore give a PAC-Bayesian bound on the average generalisation error over subsets of parameter space that may pave the way combining sparsity in the expansion coefficients and margin in a single bound. Finally, reinterpreting a mistake bound for the classical perceptron algorithm due to Novikoff we demonstrate that our new results put classifiers found by this algorithm on a firm theoretical basis.},
    author = {Graepel, Thore and Herbrich, Ralf and Shawe-Taylor, John},
    booktitle = {Proceedings of the 13th Annual Conference on Computational Learning Theory},
    pages = {298--303},
    title = {Generalisation Error Bounds for Sparse Linear Classifiers},
    url = {https://www.herbrich.me/papers/colt00_sparse.pdf},
    year = {2000}
    }

  • R. Herbrich, T. Graepel, and J. Shawe-Taylor, "Sparsity vs. Large Margins for Linear Classifiers," in Proceedings of the 13th Annual Conference on Computational Learning Theory, 2000, pp. 304–308.

    We provide small sample size bounds on the generalisation error of linear classifiers that take advantage of large observed margins on the training set and sparsity in the data dependent expansion coefficients. It is already known from results in the luckiness framework that both criteria independently have a large impact on the generalisation error. Our new results show that they can be combined which theoretically justifies learning algorithms like the Support Vector Machine or the Relevance Vector Machine. In contrast to previous studies we avoid using the classical technique of symmetrisation by a ghost sample but directly using the sparsity for the estimation of the generalisation error. We demonstrate that our result leads to practically useful results even in case of small sample size if the training set witnesses our prior belief in sparsity and large margins.

    @inproceedings{herbrich2000sparsitymarginlinear,
    abstract = {We provide small sample size bounds on the generalisation error of linear classifiers that take advantage of large observed margins on the training set and sparsity in the data dependent expansion coefficients. It is already known from results in the luckiness framework that both criteria independently have a large impact on the generalisation error. Our new results show that they can be combined which theoretically justifies learning algorithms like the Support Vector Machine or the Relevance Vector Machine. In contrast to previous studies we avoid using the classical technique of symmetrisation by a ghost sample but directly using the sparsity for the estimation of the generalisation error. We demonstrate that our result leads to practically useful results even in case of small sample size if the training set witnesses our prior belief in sparsity and large margins.},
    author = {Herbrich, Ralf and Graepel, Thore and Shawe-Taylor, John},
    booktitle = {Proceedings of the 13th Annual Conference on Computational Learning Theory},
    pages = {304--308},
    title = {Sparsity vs. Large Margins for Linear Classifiers},
    url = {https://www.herbrich.me/papers/colt00_sparsemargin.pdf},
    year = {2000}
    }

Generalized Representer Theorem

Wahba’s classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space.
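
The theorem can be checked numerically in a small case (a sketch under standard assumptions, using regularised least squares with the degree-2 polynomial kernel k(x, x') = (1 + x x')^2, whose explicit 1-D feature map is phi(x) = (1, sqrt(2) x, x^2)):

    # Representer theorem check: the minimiser of
    #   sum_i (<w, phi(x_i)> - y_i)^2 + lam * ||w||^2
    # lies in the span of the mapped training points,
    #   w = sum_i alpha_i * phi(x_i)  with  alpha = (K + lam*I)^{-1} y.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(8)
    y = rng.standard_normal(8)
    lam = 0.5

    Phi = np.stack([np.ones_like(x), np.sqrt(2) * x, x ** 2], axis=1)  # m x 3
    K = Phi @ Phi.T                              # equals (1 + x_i * x_j)^2

    # Primal solution in feature space vs. dual solution via the kernel.
    w_primal = np.linalg.solve(Phi.T @ Phi + lam * np.eye(3), Phi.T @ y)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    w_dual = Phi.T @ alpha                       # w = sum_i alpha_i phi(x_i)

    print(np.allclose(w_primal, w_dual))         # True

Only the kernel matrix K is needed to compute alpha, which is what makes the dual route viable when the feature space is infinite-dimensional.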

  • B. Schölkopf, R. Herbrich, and A. Smola, "A Generalized Representer Theorem," in Proceedings of the Fourteenth Annual Conference on Computational Learning Theory, 2001, pp. 416–426.

    Wahba’s classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space.

    @inproceedings{scholkopf2001representer,
    abstract = {Wahba's classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space.},
    author = {Sch\"{o}lkopf, Bernhard and Herbrich, Ralf and Smola, Alexander},
    booktitle = {Proceedings of the Fourteenth Annual Conference on Computational Learning Theory},
    pages = {416--426},
    title = {A Generalized Representer Theorem},
    url = {https://www.herbrich.me/papers/colt2001.pdf},
    year = {2001}
    }

Unbiased Assessment of Learning Algorithms

In order to rank the performance of machine learning algorithms, researchers mostly conduct experiments on benchmark data sets. Most learning algorithms have domain-specific parameters, and it is a popular custom to adjust these parameters with respect to minimal error on a holdout set. The error on the same holdout set of samples is then used to rank the algorithm, which causes an optimistic bias. We quantify this bias and show why, when, and to what extent this inappropriate experimental setup distorts the results.
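
The effect is easy to reproduce in simulation (our illustration, not the paper's experiment): out of k classifiers that are all, in truth, random guessers, select the one with the lowest holdout error and report that same holdout error.

    # Optimistic selection bias: the reported (selected) holdout error is
    # well below the true error of 0.5, while an independent test set is not.
    import numpy as np

    rng = np.random.default_rng(3)
    k, n, trials = 20, 100, 2000       # classifiers, holdout size, repetitions

    reported, honest = [], []
    for _ in range(trials):
        holdout_errors = rng.binomial(n, 0.5, size=k) / n  # k random guessers
        reported.append(holdout_errors.min())              # error used for ranking
        honest.append(rng.binomial(n, 0.5) / n)            # fresh test set
    print(f"reported error: {np.mean(reported):.3f}")        # noticeably below 0.5
    print(f"independent test error: {np.mean(honest):.3f}")  # close to 0.5

The bias grows with the number of parameter settings tried, which is exactly why an algorithm with more tunable parameters will tend to be ranked higher under this flawed protocol.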

  • T. Scheffer and R. Herbrich, "Unbiased Assessment of Learning Algorithms," in Proceedings of the International Joint Conference on Artificial Intelligence, 1997, pp. 798–803.

    In order to rank the performance of machine learning algorithms, many researchers conduct experiments on benchmark datasets. Since most learning algorithms have domain-specific parameters, it is a popular custom to adapt these parameters to obtain a minimal error rate on the test set. The same rate is used to rank the algorithm which causes an optimistic bias. We quantify this bias, showing in particular that an algorithm with more parameters will probably be ranked higher than an equally good algorithm with fewer parameters. We demonstrate this result, showing the number of parameters and trials required in order to pretend to outperform C4.5 or FOIL, respectively, for various benchmark problems. We then describe how unbiased ranking experiments should be conducted.

    @inproceedings{scheffer1997assessment,
    abstract = {In order to rank the performance of machine learning algorithms, many researchers conduct experiments on benchmark datasets. Since most learning algorithms have domain-specific parameters, it is a popular custom to adapt these parameters to obtain a minimal error rate on the test set. The same rate is used to rank the algorithm which causes an optimistic bias. We quantify this bias, showing in particular that an algorithm with more parameters will probably be ranked higher than an equally good algorithm with fewer parameters. We demonstrate this result, showing the number of parameters and trials required in order to pretend to outperform C4.5 or FOIL, respectively, for various benchmark problems. We then describe how unbiased ranking experiments should be conducted.},
    author = {Scheffer, Tobias and Herbrich, Ralf},
    booktitle = {Proceedings of the International Joint Conference on Artificial Intelligence},
    pages = {798--803},
    title = {Unbiased Assessment of Learning Algorithms},
    url = {https://www.herbrich.me/papers/scheffer97.pdf},
    year = {1997}
    }

Large Deviation Bounds for Ranking

We study generalization properties of the area under the ROC curve (AUC), a quantity that has been advocated as an evaluation criterion for the bipartite ranking problem. The AUC is a different term than the error rate used for evaluation in classification problems; consequently, existing generalization bounds for the classification error rate cannot be used to draw conclusions about the AUC. We define the expected accuracy of a ranking function (analogous to the expected error rate of a classification function), and derive distribution-free probabilistic bounds on the deviation of the empirical AUC of a ranking function (observed on a finite data sequence) from its expected accuracy. We derive both a large deviation bound, which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on a test sequence, and a uniform convergence bound, which serves to bound the expected accuracy of a learned ranking function in terms of its empirical AUC on a training sequence. The uniform convergence bound is expressed in terms of a new set of combinatorial parameters that we term the bipartite rank-shatter coefficients; these play the same role in our result as do the standard VC-dimension related shatter coefficients (also known as the growth function) in uniform convergence results for the classification error rate. A comparison of our result with a uniform convergence result derived by Freund et al. (2003) for a quantity closely related to the AUC shows that the bound provided by our result can be considerably tighter.
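
For concreteness, the empirical AUC referred to above is the Wilcoxon-Mann-Whitney statistic: the fraction of positive-negative pairs that the ranking function orders correctly, with ties counted as one half (a sketch of the standard definition, not code from the papers below):

    import numpy as np

    def empirical_auc(scores, labels):
        """Fraction of positive-negative pairs ranked correctly (ties = 1/2)."""
        pos = scores[labels == 1]
        neg = scores[labels == 0]
        diff = pos[:, None] - neg[None, :]        # all positive-negative pairs
        correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
        return correct / (len(pos) * len(neg))

    rng = np.random.default_rng(4)
    labels = rng.integers(0, 2, size=200)
    scores = labels + 0.8 * rng.standard_normal(200)  # informative, noisy scores
    print(f"empirical AUC: {empirical_auc(scores, labels):.3f}")

The large deviation bound then controls the probability that this pair-based statistic deviates from the expected ranking accuracy by more than a given amount.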

  • S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth, "Generalization Bounds for the Area Under the ROC Curve," Journal of Machine Learning Research, vol. 6, pp. 393–425, 2005.

    We study generalization properties of the area under the ROC curve (AUC), a quantity that has been advocated as an evaluation criterion for the bipartite ranking problem. The AUC is a different term than the error rate used for evaluation in classification problems; consequently, existing generalization bounds for the classification error rate cannot be used to draw conclusions about the AUC. In this paper, we define the expected accuracy of a ranking function (analogous to the expected error rate of a classification function), and derive distribution-free probabilistic bounds on the deviation of the empirical AUC of a ranking function (observed on a finite data sequence) from its expected accuracy. We derive both a large deviation bound, which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on a test sequence, and a uniform convergence bound, which serves to bound the expected accuracy of a learned ranking function in terms of its empirical AUC on a training sequence. Our uniform convergence bound is expressed in terms of a new set of combinatorial parameters that we term the bipartite rank-shatter coefficients; these play the same role in our result as do the standard VC-dimension related shatter coefficients (also known as the growth function) in uniform convergence results for the classification error rate. A comparison of our result with a recent uniform convergence result derived by Freund et al. (2003) for a quantity closely related to the AUC shows that the bound provided by our result can be considerably tighter.

    @article{agarwal2005roc,
    abstract = {We study generalization properties of the area under the ROC curve (AUC), a quantity that has been advocated as an evaluation criterion for the bipartite ranking problem. The AUC is a different term than the error rate used for evaluation in classification problems; consequently, existing generalization bounds for the classification error rate cannot be used to draw conclusions about the AUC. In this paper, we define the expected accuracy of a ranking function (analogous to the expected error rate of a classification function), and derive distribution-free probabilistic bounds on the deviation of the empirical AUC of a ranking function (observed on a finite data sequence) from its expected accuracy. We derive both a large deviation bound, which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on a test sequence, and a uniform convergence bound, which serves to bound the expected accuracy of a learned ranking function in terms of its empirical AUC on a training sequence. Our uniform convergence bound is expressed in terms of a new set of combinatorial parameters that we term the bipartite rank-shatter coefficients; these play the same role in our result as do the standard VC-dimension related shatter coefficients (also known as the growth function) in uniform convergence results for the classification error rate. A comparison of our result with a recent uniform convergence result derived by Freund et al. (2003) for a quantity closely related to the AUC shows that the bound provided by our result can be considerably tighter.},
    author = {Agarwal, Shivani and Graepel, Thore and Herbrich, Ralf and Har-Peled, Sariel and Roth, Dan},
    journal = {Journal of Machine Learning Research},
    pages = {393--425},
    title = {Generalization Bounds for the Area Under the {ROC} Curve},
    url = {https://www.herbrich.me/papers/auc.pdf},
    volume = {6},
    year = {2005}
    }

  • S. Agarwal, T. Graepel, R. Herbrich, and D. Roth, "A Large Deviation Bound for the Area Under the ROC Curve," in Advances in Neural Information Processing Systems 17, 2004, pp. 9–16.

    The area under the ROC curve (AUC) has been advocated as an evaluation criterion for the bipartite ranking problem. We study large deviation properties of the AUC; in particular, we derive a distribution-free large deviation bound for the AUC which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on an independent test sequence. A comparison of our result with a corresponding large deviation result for the classification error rate suggests that the test sample size required to obtain an epsilon-accurate estimate of the expected accuracy of a ranking function with delta-confidence is larger than that required to obtain an epsilon-accurate estimate of the expected error rate of a classification function with the same confidence. A simple application of the union bound allows the large deviation bound to be extended to learned ranking functions chosen from finite function classes.

    @inproceedings{agarwal2004roc,
    abstract = {The area under the ROC curve (AUC) has been advocated as an evaluation criterion for the bipartite ranking problem. We study large deviation properties of the AUC; in particular, we derive a distribution-free large deviation bound for the AUC which serves to bound the expected accuracy of a ranking function in terms of its empirical AUC on an independent test sequence. A comparison of our result with a corresponding large deviation result for the classification error rate suggests that the test sample size required to obtain an epsilon-accurate estimate of the expected accuracy of a ranking function with delta-confidence is larger than that required to obtain an epsilon-accurate estimate of the expected error rate of a classification function with the same confidence. A simple application of the union bound allows the large deviation bound to be extended to learned ranking functions chosen from finite function classes.},
    author = {Agarwal, Shivani and Graepel, Thore and Herbrich, Ralf and Roth, Dan},
    booktitle = {Advances in Neural Information Processing Systems 17},
    pages = {9--16},
    title = {A Large Deviation Bound for the Area Under the {ROC} Curve},
    url = {https://www.herbrich.me/papers/nips04-auc.pdf},
    year = {2004}
    }

  • S. Hill, H. Zaragoza, R. Herbrich, and P. Rayner, "Average Precision and the Problem of Generalisation," in Proceedings of the ACM SIGIR Workshop on Mathematical and Formal Methods in Information Retrieval, 2002.

    In this paper we study the problem of generalisation in information retrieval. In particular we study precision-recall curves and the average precision value. We provide two types of bounds: large-deviation bounds of the average precision and maximum deviation bounds with respect to a given point of the precision recall curve. The first type of bounds are useful to answer the question: how far can true average precision be from the value observed on a test collection? The second is useful for obtaining bounds on average precision when tight bounds on a particular point of the curve can be established, as is the case when training SVMs or Perceptrons for document categorisation.

    @inproceedings{hill2002ap,
    abstract = {In this paper we study the problem of generalisation in information retrieval. In particular we study precision-recall curves and the average precision value. We provide two types of bounds: large-deviation bounds of the average precision and maximum deviation bounds with respect to a given point of the precision recall curve. The first type of bounds are useful to answer the question: how far can true average precision be from the value observed on a test collection? The second is useful for obtaining bounds on average precision when tight bounds on a particular point of the curve can be established, as is the case when training SVMs or Perceptrons for document categorisation.},
    author = {Hill, Simon and Zaragoza, Hugo and Herbrich, Ralf and Rayner, Peter},
    booktitle = {Proceedings of the ACM SIGIR Workshop on Mathematical and Formal Methods in Information Retrieval},
    title = {Average Precision and the Problem of Generalisation},
    url = {https://www.herbrich.me/papers/hill.pdf},
    year = {2002}
    }

  • R. Herbrich, T. Graepel, and K. Obermayer, "Large Margin Rank Boundaries for Ordinal Regression," in Advances in Large Margin Classifiers, The MIT Press, 1999, pp. 115–132.

    In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable we present a distribution independent risk formulation of ordinal regression which allows us to derive a uniform convergence bound. Applying this bound we present a large margin algorithm that is based on a mapping from objects to scalar utility values thus classifying pairs of objects. We give experimental results for an information retrieval task which show that our algorithm outperforms more naive approaches to ordinal regression such as Support Vector Classification and Support Vector Regression in the case of more than two ranks.

    @incollection{herbrich1999largemarginrank,
    abstract = {In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable we present a distribution independent risk formulation of ordinal regression which allows us to derive a uniform convergence bound. Applying this bound we present a large margin algorithm that is based on a mapping from objects to scalar utility values thus classifying pairs of objects. We give experimental results for an information retrieval task which show that our algorithm outperforms more naive approaches to ordinal regression such as Support Vector Classification and Support Vector Regression in the case of more than two ranks.},
    author = {Herbrich, Ralf and Graepel, Thore and Obermayer, Klaus},
    booktitle = {Advances in Large Margin Classifiers},
    chapter = {7},
    pages = {115--132},
    publisher = {The MIT Press},
    title = {Large Margin Rank Boundaries for Ordinal Regression},
    url = {https://www.herbrich.me/papers/nips98_ordinal.pdf},
    year = {1999}
    }

  • R. Herbrich, T. Graepel, P. Bollmann-Sdorra, and K. Obermayer, "Learning Preference Relations for Information Retrieval," in Proceedings of the International Conference on Machine Learning Workshop: Text Categorization and Machine Learning, 1998, pp. 80–84.

    In this paper we investigate the problem of learning a preference relation from a given set of ranked documents. We show that the Bayes's optimal decision function, when applied to learning a preference relation, may violate transitivity. This is undesirable for information retrieval, because it is in conflict with a document ranking based on the user's preferences. To overcome this problem we present a vector space based method that performs a linear mapping from documents to scalar utility values and thus guarantees transitivity. The learning of the relation between documents is formulated as a classification problem on pairs of documents and is solved using the principle of structural risk minimization for good generalization. The approach is extended to polynomial utility functions by using the potential function method (the so called "kernel trick"), which allows to incorporate higher order correlations of features into the utility function at minimal computational costs. The resulting algorithm is tested on an example with artificial data. The algorithm successfully learns the utility function underlying the training examples and shows good classification performance.

    @inproceedings{herbrich1998preference,
    abstract = {In this paper we investigate the problem of learning a preference relation from a given set of ranked documents. We show that the Bayes's optimal decision function, when applied to learning a preference relation, may violate transitivity. This is undesirable for information retrieval, because it is in conflict with a document ranking based on the user's preferences. To overcome this problem we present a vector space based method that performs a linear mapping from documents to scalar utility values and thus guarantees transitivity. The learning of the relation between documents is formulated as a classification problem on pairs of documents and is solved using the principle of structural risk minimization for good generalization. The approach is extended to polynomial utility functions by using the potential function method (the so called "kernel trick"), which allows to incorporate higher order correlations of features into the utility function at minimal computational costs. The resulting algorithm is tested on an example with artificial data. The algorithm successfully learns the utility function underlying the training examples and shows good classification performance.},
    author = {Herbrich, Ralf and Graepel, Thore and Bollmann-Sdorra, Peter and Obermayer, Klaus},
    booktitle = {Proceedings of the International Conference on Machine Learning Workshop: Text Categorization and Machine Learning},
    pages = {80--84},
    title = {Learning Preference Relations for Information Retrieval},
    url = {https://www.herbrich.me/papers/hergraebollober98.pdf},
    year = {1998}
    }