Approximate Bayesian Inference
Bayesian inference provides a powerful mechanism for data analysis and learning. However, in real-world situations it is rarely possible to perform exact inference; in fact, exact Bayesian inference is in general NP-hard. One of the most successful approaches to this problem is to exploit the factorised structure of both the sampling distribution and the prior; a variety of methods then exploit this factorisation for efficient approximate inference. The Expectation Propagation (EP) algorithm is a powerful tool for inference in such factor graphs. For discrete distributions, EP is also known as belief propagation.
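As a minimal illustration of the sum-product idea behind belief propagation (the chain and its potentials are invented for this sketch), the following computes a marginal on a three-variable binary chain by message passing and checks it against brute-force enumeration:

```python
import numpy as np

# Pairwise potentials on a chain x1 - x2 - x3, each variable binary.
# Numbers are illustrative only.
psi12 = np.array([[1.0, 0.5], [0.5, 2.0]])   # psi12[x1, x2]
psi23 = np.array([[1.5, 0.2], [0.7, 1.0]])   # psi23[x2, x3]

# Sum-product messages into x2 from both sides of the chain.
m12 = psi12.sum(axis=0)   # sum over x1 -> message to x2
m32 = psi23.sum(axis=1)   # sum over x3 -> message to x2

p2 = m12 * m32
p2 /= p2.sum()            # marginal of x2 by belief propagation

# Brute-force marginal for comparison.
joint = psi12[:, :, None] * psi23[None, :, :]
p2_brute = joint.sum(axis=(0, 2))
p2_brute /= p2_brute.sum()

print(np.allclose(p2, p2_brute))  # BP is exact on trees
```

On a tree-structured factor graph the two results agree exactly; on graphs with cycles the same message updates only yield an approximation (loopy BP).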
1.
Kurle, Richard; Herbrich, Ralf; Januschowski, Tim; Wang, Yuyang; Gasthaus, Jan
On the detrimental effect of invariances in the likelihood for variational inference Proceedings Article
In: Advances in Neural Information Processing Systems 36, 2022.
@inproceedings{kurle2022vbnn,
title = {On the detrimental effect of invariances in the likelihood for variational inference},
author = {Richard Kurle and Ralf Herbrich and Tim Januschowski and Yuyang Wang and Jan Gasthaus},
url = {https://arxiv.org/pdf/2209.07157.pdf},
year = {2022},
date = {2022-01-01},
booktitle = {Advances in Neural Information Processing Systems 36},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2.
Herbrich, Ralf
Distributed, Real-time Bayesian Learning in Online Services Proceedings Article
In: Proceedings of the 6th ACM Conference on Recommender Systems, pp. 203–204, ACM 2012.
@inproceedings{herbrich2012distributed,
title = {Distributed, Real-time Bayesian Learning in Online Services},
author = {Ralf Herbrich},
url = {https://www.herbrich.me/papers/recsys2012.pdf},
year = {2012},
date = {2012-01-01},
booktitle = {Proceedings of the 6th ACM Conference on Recommender Systems},
pages = {203--204},
organization = {ACM},
abstract = {The last ten years have seen a tremendous growth in Internet-based online services such as search, advertising, gaming and social networking. Today, it is important to analyze large collections of user interaction data as a first step in building predictive models for these services as well as learn these models in real-time. One of the biggest challenges in this setting is scale: not only does the sheer scale of data necessitate parallel processing but it also necessitates distributed models; with over 900 million active users at Facebook, any user-specific sets of features in a linear or non-linear model yields models of a size bigger than can be stored in a single system. In this talk, I will give a hands-on introduction to one of the most versatile tools for handling large collections of data with distributed probabilistic models: the sum-product algorithm for approximate message passing in factor graphs. I will discuss the application of this algorithm for the specific case of generalized linear models and outline the challenges of both approximate and distributed message passing including an in-depth discussion of expectation propagation and Map-Reduce.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
3.
Herbrich, Ralf
On Gaussian Expectation Propagation Technical Report
Microsoft Research 2005.
@techreport{herbrich2005gaussianep,
title = {On Gaussian Expectation Propagation},
author = {Ralf Herbrich},
url = {https://www.herbrich.me/papers/EP.pdf},
year = {2005},
date = {2005-01-01},
institution = {Microsoft Research},
abstract = {In this short note we will re-derive the Gaussian expectation propagation (Gaussian EP) algorithm as presented in Minka (2001) and demonstrate an application of Gaussian EP to approximate multi-dimensional truncated Gaussians.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
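The moment-matching step at the heart of Gaussian EP can be sketched for a one-dimensional Gaussian truncated at zero; the closed forms below use the standard correction functions v(t) and w(t), with all numbers illustrative, and are checked against brute-force numerical integration:

```python
import math

def phi(t):   # standard normal density
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi(t):   # standard normal cumulative
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def truncated_moments(mu, sigma):
    """Mean and variance of N(mu, sigma^2) truncated to x > 0 --
    the moment-matching (projection) step used in Gaussian EP."""
    t = mu / sigma
    v = phi(t) / Phi(t)          # v(t) correction function
    w = v * (v + t)              # w(t) correction function
    return mu + sigma * v, sigma**2 * (1.0 - w)

# Verify against numerical integration (illustrative parameters).
mu, sigma = 0.3, 1.2
m, var = truncated_moments(mu, sigma)

dx = 1e-3
xs = [i * dx for i in range(12001)]                 # grid on [0, 12]
ps = [phi((x - mu) / sigma) / sigma for x in xs]    # untruncated density
Z = sum(ps) * dx
m_num = sum(x * p for x, p in zip(xs, ps)) * dx / Z
v_num = sum(x * x * p for x, p in zip(xs, ps)) * dx / Z - m_num**2
print(abs(m - m_num) < 1e-3, abs(var - v_num) < 1e-3)
```

Replacing a truncated Gaussian by the Gaussian with these matched moments is exactly the projection that Gaussian EP performs at each factor.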
4.
Herbrich, Ralf
Minimising the Kullback-Leibler Divergence Technical Report
Microsoft Research 2005.
@techreport{herbrich2005minimizingkl,
title = {Minimising the Kullback-Leibler Divergence},
author = {Ralf Herbrich},
url = {https://www.herbrich.me/papers/KL.pdf},
year = {2005},
date = {2005-01-01},
institution = {Microsoft Research},
abstract = {In this note we show that minimising the Kullback--Leibler divergence over a family in the class of exponential distributions is achieved by matching the expected natural statistic. We will also give an explicit update formula for distributions with only one likelihood term.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
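A small numerical sketch of the note's claim, with an invented two-component Gaussian mixture as the target: the Gaussian whose mean and variance match the target's expected sufficient statistics attains a lower KL divergence than perturbed candidates.

```python
import math

def n(x, m, s):   # Gaussian density
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

# Target p: mixture 0.5 N(-1, 0.5^2) + 0.5 N(2, 1^2), numbers invented.
def p(x):
    return 0.5 * n(x, -1.0, 0.5) + 0.5 * n(x, 2.0, 1.0)

# Moment matching: expected natural statistics of the mixture.
mu = 0.5 * (-1.0) + 0.5 * 2.0                       # E[x] = 0.5
second = 0.5 * (0.5**2 + 1.0) + 0.5 * (1.0 + 4.0)   # E[x^2]
var = second - mu**2                                # = 2.875

def kl(mu_q, var_q, lo=-8.0, hi=10.0, dx=1e-3):
    """Numerical KL(p || N(mu_q, var_q)) on a grid."""
    total, x = 0.0, lo
    while x < hi:
        px = p(x)
        if px > 1e-300:
            total += px * math.log(px / n(x, mu_q, math.sqrt(var_q))) * dx
        x += dx
    return total

best = kl(mu, var)
worse = [kl(mu + 0.3, var), kl(mu - 0.3, var), kl(mu, var + 0.5), kl(mu, var - 0.5)]
print(all(w > best for w in worse))
```

This is the exponential-family result in miniature: for a Gaussian approximating family, minimising KL(p‖q) reduces to matching mean and variance.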
Poisson Networks
Modelling structured multivariate point process data has wide ranging applications like understanding neural activity, developing faster file access systems and learning dependencies among servers in large networks. We developed the Poisson network model for representing multivariate structured Poisson processes. In our model each node of the network represents a Poisson process. The novelty of our work is that waiting times of a process are modelled by an exponential distribution with a piecewise constant rate function that depends on the event counts of its parents in the network in a generalised linear way. Our choice of model allows exact sampling from arbitrary network structures. We adopt a Bayesian approach for learning the network structure. We also develop fixed-point and sampling-based approximations for performing inference of rate functions in Poisson networks.
1.
Rajaram, Shyamsundar; Graepel, Thore; Herbrich, Ralf
Poisson-Networks: A Model for Structured Poisson Processes Proceedings Article
In: Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, pp. 277–284, 2004.
@inproceedings{rajaram2004poissonnetworks,
title = {Poisson-Networks: A Model for Structured Poisson Processes},
author = {Shyamsundar Rajaram and Thore Graepel and Ralf Herbrich},
url = {https://www.herbrich.me/papers/spikenets.pdf},
year = {2004},
date = {2004-01-01},
booktitle = {Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics},
pages = {277--284},
abstract = {Modelling structured multivariate point process data has wide ranging applications like understanding neural activity, developing faster file access systems and learning dependencies among servers in large networks. In this paper, we develop the Poisson network model for representing multivariate structured Poisson processes. In our model each node of the network represents a Poisson process. The novelty of our work is that waiting times of a process are modelled by an exponential distribution with a piecewise constant rate function that depends on the event counts of its parents in the network in a generalised linear way. Our choice of model allows to perform exact sampling from arbitrary structures. We adopt a Bayesian approach for learning the network structure. Further, we discuss fixed point and sampling based approximations for performing inference of rate functions in Poisson networks.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
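A toy two-node sketch of this kind of model (all rates, weights and the window length are invented for illustration). The child's rate is generalised-linear in the parent's recent event count and hence piecewise constant between parent events; thinning against an upper bound is used here as one standard way to sample such a process exactly:

```python
import math
import random

random.seed(0)
T = 10.0

# Parent node: homogeneous Poisson process with rate 1.0 on [0, T].
parent, t = [], 0.0
while True:
    t += random.expovariate(1.0)
    if t > T:
        break
    parent.append(t)

# Child rate: log-link generalised-linear in the parent's event count
# over a sliding one-second window (hypothetical weights w0, w1).
def child_rate(t, window=1.0, w0=-1.0, w1=0.5):
    k = sum(1 for s in parent if t - window <= s < t)
    return math.exp(w0 + w1 * k)   # log-link keeps the rate positive

# The rate only changes when parent events enter or leave the window,
# so it is piecewise constant; thinning yields exact samples.
lam_max = math.exp(-1.0 + 0.5 * len(parent))   # crude global upper bound
child, t = [], 0.0
while True:
    t += random.expovariate(lam_max)
    if t > T:
        break
    if random.random() < child_rate(t) / lam_max:
        child.append(t)

print(len(parent), len(child))
```

Because the rate is constant between change points, the waiting time within each piece is exponential, which is exactly the property the model exploits for exact sampling.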
Informative Vector Machine
We have developed a framework for sparse Gaussian process methods which uses forward selection with criteria based on information-theoretical principles, previously suggested for active learning. In contrast to most previous work on sparse GPs, our goal is not only to learn sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(nd²), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet it requires only a fraction of the training time. In contrast to the SVM, our approximation produces estimates of predictive probabilities (‘error bars’), allows for Bayesian model selection and is less complex in implementation.
1.
Lawrence, Neil; Seeger, Matthias; Herbrich, Ralf
Fast Sparse Gaussian Process Methods: The Informative Vector Machine Proceedings Article
In: Advances in Neural Information Processing Systems 15, pp. 609–616, 2002.
@inproceedings{lawrence2002ivm,
title = {Fast Sparse Gaussian Process Methods: The Informative Vector Machine},
author = {Neil Lawrence and Matthias Seeger and Ralf Herbrich},
url = {https://www.herbrich.me/papers/ivm.pdf},
year = {2002},
date = {2002-01-01},
booktitle = {Advances in Neural Information Processing Systems 15},
pages = {609--616},
abstract = {We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretical principles, previously suggested for active learning. In contrast to most previous work on sparse GPs, our goal is not only to learn sparse predictors (which can be evaluated in $O(d)$ rather than $O(n)$, $d \ll n$, $n$ the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most $O(n \cdot d^2)$, and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet it requires only a fraction of the training time. In contrast to the SVM, our approximation produces estimates of predictive probabilities ("error bars"), allows for Bayesian model selection and is less complex in implementation.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
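A stripped-down sketch of the greedy forward-selection idea for GP regression (the kernel, noise level and data are invented, and the score is simplified to a differential-entropy reduction rather than the paper's exact criterion): each step includes the candidate point whose inclusion removes the most posterior uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))   # toy inputs
noise = 0.1                                 # observation-noise variance

def rbf(A, B):
    d = A[:, None, 0] - B[None, :, 0]
    return np.exp(-0.5 * d * d)

K = rbf(X, X)

# Greedy forward selection: repeatedly include the point whose inclusion
# buys the largest entropy reduction 0.5 * log(1 + s_i / noise), where
# s_i is the current posterior variance at candidate i.
active = []
s = np.diag(K).copy()
for _ in range(8):
    scores = 0.5 * np.log(1.0 + s / noise)
    scores[active] = -np.inf               # never re-select a point
    active.append(int(np.argmax(scores)))
    # Recompute posterior variances given the active set.
    Ka = K[:, active]
    A = K[np.ix_(active, active)] + noise * np.eye(len(active))
    s = np.diag(K) - np.sum(Ka * np.linalg.solve(A, Ka.T).T, axis=1)

print(active, float(s.max()))
```

Because predictions only involve the d active points, evaluation is O(d) per test point, which is the sparsity the informative vector machine is after.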
Learning with Social Priors
Slow convergence and poor initial accuracy are two problems that plague efforts to use very large feature sets in online learning. We show how these problems can be mitigated if a graph of relationships between features is known. We study this problem in a fully Bayesian setting, focusing on the problem of using Facebook user-IDs as features, with the social network giving the relationship structure. Our analysis uncovers significant problems with the obvious regularizations, and motivates a two-component mixture-model social prior that is provably better.
1.
Chakrabarti, Deepayan; Herbrich, Ralf
Speeding Up Large-Scale Learning with a Social Prior Proceedings Article
In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 650–658, ACM 2013.
@inproceedings{chakrabarti2013speeding,
title = {Speeding Up Large-Scale Learning with a Social Prior},
author = {Deepayan Chakrabarti and Ralf Herbrich},
url = {https://www.herbrich.me/papers/kdd13-socialprior.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {650--658},
organization = {ACM},
abstract = {Slow convergence and poor initial accuracy are two problems that plague efforts to use very large feature sets in online learning. This is especially true when only a few features are active in any training example, and the frequency of activations of different features is skewed. We show how these problems can be mitigated if a graph of relationships between features is known. We study this problem in a fully Bayesian setting, focusing on the problem of using Facebook user-IDs as features, with the social network giving the relationship structure. Our analysis uncovers significant problems with the obvious regularizations, and motivates a two-component mixture-model social prior that is provably better. Empirical results on large-scale click prediction problems show that our algorithm can learn as well as the baseline with 12M fewer training examples, and continuously outperforms it for over 60M examples. On a second problem using binned features, our model outperforms the baseline even after the latter sees 5x as much data.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
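A hypothetical one-dimensional sketch of the two-component social prior (all weights, variances and the mixing proportion are invented): a new user's weight gets a mixture prior with one component centred on the neighbours' average weight and one broad background component for users unlike their friends.

```python
import math

# Learned weights of the user's neighbours (invented numbers).
neighbour_weights = [0.8, 1.1, 0.9]
m_soc = sum(neighbour_weights) / len(neighbour_weights)

def n(x, m, v):   # Gaussian density with variance v
    return math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)

def prior(w, rho=0.7, v_soc=0.1, v_bg=4.0):
    """Two-component mixture: social component + broad background."""
    return rho * n(w, m_soc, v_soc) + (1.0 - rho) * n(w, 0.0, v_bg)

# Posterior mean after one noisy observation y of w (grid approximation).
y, v_noise = 1.0, 0.5
grid = [i * 1e-3 - 5.0 for i in range(10001)]
post = [prior(w) * n(y, w, v_noise) for w in grid]
Z = sum(post)
w_hat = sum(w * q for w, q in zip(grid, post)) / Z

# The social prior pulls the estimate towards the neighbours' mean
# rather than towards the uninformed zero-centred background.
print(round(w_hat, 3))
```

With only a single observation, the estimate already sits near the neighbours' mean; a plain zero-centred prior would need far more data to get there, which is the convergence speed-up the paper quantifies.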
Kernel Topic Models
Latent Dirichlet Allocation models discrete document data as a mixture of discrete distributions, using Dirichlet beliefs over the mixture weights. We study a variation in which the documents’ mixture weight beliefs are replaced with squashed Gaussian distributions. This allows documents to be associated with elements of a Hilbert space, admitting kernel topic models (KTM), modelling temporal, spatial, hierarchical, social and other structure between documents. The main challenge is efficient approximate inference on the latent Gaussian. We present an approximate algorithm cast around a Laplace approximation in a transformed basis. The KTM can also be interpreted as a type of Gaussian process latent variable model, or as a topic model conditional on document features, uncovering links between earlier work in these areas.
1.
Hennig, Philipp; Stern, David; Herbrich, Ralf; Graepel, Thore
Kernel Topic Models Proceedings Article
In: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 511–519, 2012.
@inproceedings{henning2012kerneltopic,
title = {Kernel Topic Models},
author = {Philipp Hennig and David Stern and Ralf Herbrich and Thore Graepel},
url = {https://www.herbrich.me/papers/ktm.pdf},
year = {2012},
date = {2012-01-01},
booktitle = {Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS)},
pages = {511--519},
abstract = {Latent Dirichlet Allocation models discrete data as a mixture of discrete distributions, using Dirichlet beliefs over the mixture weights. We study a variation of this concept, in which the documents' mixture weight beliefs are replaced with squashed Gaussian distributions. This allows documents to be associated with elements of a Hilbert space, admitting kernel topic models (KTM), modelling temporal, spatial, hierarchical, social and other structure between documents. The main challenge is efficient approximate inference on the latent Gaussian. We present an approximate algorithm cast around a Laplace approximation in a transformed basis. The KTM can also be interpreted as a type of Gaussian process latent variable model, or as a topic model conditional on document features, uncovering links between earlier work in these areas.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
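The squashing step can be sketched as follows, with toy topics and a plain i.i.d. Gaussian draw standing in for the GP draw over document features: a latent Gaussian vector is mapped through the softmax onto the simplex and then used as LDA-style mixture weights.

```python
import math
import random

random.seed(1)

# Hypothetical toy topics over a 4-word vocabulary.
topics = [
    [0.7, 0.1, 0.1, 0.1],   # topic 0
    [0.1, 0.1, 0.4, 0.4],   # topic 1
]

# Latent Gaussian vector for one document (in a KTM this would come from
# a GP whose kernel encodes structure between documents).
eta = [random.gauss(0.0, 1.0) for _ in topics]

# Softmax "squashing" onto the simplex -> topic mixture weights.
z = [math.exp(e) for e in eta]
theta = [v / sum(z) for v in z]

# Generate words from the resulting admixture, as in LDA.
def sample_word():
    k = random.choices(range(len(topics)), weights=theta)[0]
    return random.choices(range(4), weights=topics[k])[0]

doc = [sample_word() for _ in range(20)]
print(theta, doc[:5])
```

Because the document-level latent variable is now Gaussian rather than Dirichlet, any kernel between documents (temporal, spatial, social) can correlate their topic weights.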
Bayesian Online Learning for Multilabel and Multivariate Performance Measures
Many real world applications employ multi-variate performance measures and each example can belong to multiple classes. We propose a Bayesian online multi-label classification framework which learns a probabilistic linear classifier. The likelihood is modeled by a graphical model similar to TrueSkill, and inference is based on Gaussian density filtering with expectation propagation. Using samples from the posterior, we label the testing data by maximizing the expected F1-score. Our experiments on the Reuters1-v2 dataset show that the proposed algorithm compares favorably to the state-of-the-art online learners in macro-averaged F1-score and training time.
1.
Zhang, Xinhua; Graepel, Thore; Herbrich, Ralf
Bayesian Online Learning for Multi-label and Multi-variate Performance Measures Proceedings Article
In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 956–963, 2010.
@inproceedings{zhang2010bayesian,
title = {Bayesian Online Learning for Multi-label and Multi-variate Performance Measures},
author = {Xinhua Zhang and Thore Graepel and Ralf Herbrich},
url = {https://www.herbrich.me/papers/zhang10b.pdf},
year = {2010},
date = {2010-01-01},
booktitle = {Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS)},
pages = {956--963},
abstract = {Many real world applications employ multi-variate performance measures and each example can belong to multiple classes. The currently most popular approaches train an SVM for each class, followed by ad-hoc thresholding. Probabilistic models using Bayesian decision theory are also commonly adopted. In this paper, we propose a Bayesian online multi-label classification framework (BOMC) which learns a probabilistic linear classifier. The likelihood is modeled by a graphical model similar to TrueSkill, and inference is based on Gaussian density filtering with expectation propagation. Using samples from the posterior, we label the testing data by maximizing the expected F1-score. Our experiments on Reuters1-v2 dataset show BOMC compares favorably to the state-of-the-art online learners in macro-averaged F1-score and training time.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
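The expected-F1 decision step can be sketched with Monte-Carlo samples from hypothetical independent per-label marginals (in the paper the samples come from the model posterior; exhaustive search over label vectors is only feasible for a tiny label set like this one):

```python
import itertools
import random

random.seed(0)

# Hypothetical posterior marginals P(label_j = 1) for one test example.
p = [0.9, 0.6, 0.4, 0.1]

def f1(y, yhat):
    """F1-score of prediction yhat against label vector y."""
    tp = sum(a and b for a, b in zip(y, yhat))
    if tp == 0:
        return 0.0
    prec, rec = tp / sum(yhat), tp / sum(y)
    return 2 * prec * rec / (prec + rec)

# Draw label vectors from the marginals, then pick the prediction with
# the highest Monte-Carlo estimate of the expected F1-score.
samples = [[random.random() < q for q in p] for _ in range(2000)]

def expected_f1(yhat):
    return sum(f1(y, yhat) for y in samples) / len(samples)

best = max((yhat for yhat in itertools.product([0, 1], repeat=len(p))
            if any(yhat)), key=expected_f1)
print(best)
```

Note how the decision is not simply thresholding each marginal at 0.5: because F1 trades off precision and recall jointly, moderately probable labels can be worth including.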
Bayesian Transduction
We consider the case of binary classification by linear discriminant functions. The simplification of the transduction problem results from the fact that the infinite number of linear discriminants is boiled down to a finite number of equivalence classes on the working set. The number of equivalence classes is bounded from above by the growth function. Each equivalence class corresponds to a polyhedron in parameter space. From a Bayesian point of view, we suggest measuring the prior probability of a labelling of the working set as the volume of the corresponding polyhedron w.r.t. the a priori distribution in parameter space. The maximum a posteriori (MAP) scheme then recommends choosing the labelling of maximum volume.
1.
Graepel, Thore; Herbrich, Ralf
The Kernel Gibbs Sampler Proceedings Article
In: Advances in Neural Information Processing Systems 13, pp. 514–520, The MIT Press, 2000.
@inproceedings{herbrich2000kernelgibbs,
title = {The Kernel Gibbs Sampler},
author = {Thore Graepel and Ralf Herbrich},
url = {https://www.herbrich.me/papers/kernel-gibbs-sampler.pdf},
year = {2000},
date = {2000-01-01},
booktitle = {Advances in Neural Information Processing Systems 13},
pages = {514--520},
publisher = {The MIT Press},
abstract = {We present an algorithm that samples the hypothesis space of kernel classifiers. Given a uniform prior over normalised weight vectors and a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2.
Graepel, Thore; Herbrich, Ralf; Obermayer, Klaus
Bayesian Transduction Proceedings Article
In: Advances in Neural Information Processing Systems 12, pp. 456–462, The MIT Press, 1999.
@inproceedings{graepel1999bayesiantransduction,
title = {Bayesian Transduction},
author = {Thore Graepel and Ralf Herbrich and Klaus Obermayer},
url = {https://www.herbrich.me/papers/trans.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Advances in Neural Information Processing Systems 12},
pages = {456--462},
publisher = {The MIT Press},
abstract = {Transduction is an inference principle that takes a training sample and aims at estimating the values of a function at given points contained in the so-called working sample. Hence, transduction is a less ambitious task than induction which aims at inferring a functional dependency on the whole of input space. As a consequence, however, transduction provides a confidence measure on single predictions rather than classifiers, a feature particularly important for risk-sensitive applications. We consider the case of binary classification by linear discriminant functions (perceptrons) in kernel space. From the transductive point of view, the infinite number of perceptrons is boiled down to a finite number of equivalence classes on the working sample each of which corresponds to a polyhedron in parameter space. In the Bayesian spirit the posteriori probability of a labelling of the working sample is determined as the ratio between the volume of the corresponding polyhedron and the volume of version space. Then the maximum posteriori scheme recommends to choose the labelling of maximum volume. We suggest to sample version space by an ergodic billiard in kernel space. Experimental results on real world data indicate that Bayesian Transduction compares favourably to the well-known Support Vector Machine, in particular if the posteriori probability of labellings is used as a confidence measure to exclude test points of low confidence.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Bayes Point Machines
From a Bayesian perspective, Support Vector Machines choose the hypothesis corresponding to the largest possible hypersphere that can be inscribed in version space, i.e. in the space of all consistent hypotheses given a training set. Those boundaries of version space which are tangent to the hypersphere define the support vectors. An alternative and potentially better approach is to construct the hypothesis using the whole of version space. This is achieved by using a Bayes Point Machine which finds the midpoint of the region of intersection of all hyperplanes bisecting version space into two halves of equal volume (the Bayes point). It is known that the centre of mass of version space approximates the Bayes point. We investigate estimating the centre of mass by averaging over the trajectory of a billiard ball bouncing in version space. Experimental results indicate that Bayes Point Machines consistently outperform Support Vector Machines.
1.
Harrington, Edward; Herbrich, Ralf; Kivinen, Jyrki; Platt, John C; Williamson, Robert C
Online Bayes Point Machines Proceedings Article
In: Proceedings of Advances in Knowledge Discovery and Data Mining, pp. 241–252, 2003.
@inproceedings{harrington2003onlinebpm,
title = {Online Bayes Point Machines},
author = {Edward Harrington and Ralf Herbrich and Jyrki Kivinen and John C Platt and Robert C Williamson},
url = {https://www.herbrich.me/papers/OBPM.pdf},
year = {2003},
date = {2003-01-01},
booktitle = {Proceedings of Advances in Knowledge Discovery and Data Mining},
pages = {241--252},
abstract = {We present a new and simple algorithm for learning large margin classifiers that works in a truly online manner. The algorithm generates a linear classifier by averaging the weights associated with several perceptron-like algorithms run in parallel in order to approximate the Bayes point. A random subsample of the incoming data stream is used to ensure diversity in the perceptron solutions. We experimentally study the algorithm's performance on online and batch learning settings. The online experiments showed that our algorithm produces a low prediction error on the training sequence and tracks the presence of concept drift. On the batch problems its performance is comparable to the maximum margin algorithm which explicitly maximises the margin.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
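A minimal sketch of the scheme on synthetic data (the stream distribution, subsampling rate and number of perceptrons are invented for illustration): several perceptrons train in parallel on random subsamples of the stream, and their averaged weight vector approximates the Bayes point.

```python
import random

random.seed(0)

# Linearly separable toy stream: label = sign of <w_true, x>.
w_true = [1.0, -2.0]
def example():
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] * w_true[0] + x[1] * w_true[1] > 0 else -1
    return x, y

K = 5                                    # perceptrons run in parallel
ws = [[0.0, 0.0] for _ in range(K)]

for _ in range(2000):
    x, y = example()
    for w in ws:
        # Each perceptron sees a random 70% subsample of the stream,
        # which keeps the individual solutions diverse.
        if random.random() < 0.7 and y * (w[0] * x[0] + w[1] * x[1]) <= 0:
            w[0] += y * x[0]
            w[1] += y * x[1]

# Average the weight vectors to approximate the Bayes point.
w_avg = [sum(w[i] for w in ws) / K for i in range(2)]

test_set = [example() for _ in range(500)]
err = sum(y * (w_avg[0] * x[0] + w_avg[1] * x[1]) <= 0
          for x, y in test_set) / len(test_set)
print(round(err, 3))
```

Each pass over an incoming example is O(K) in the number of perceptrons, so the whole scheme remains truly online with constant memory.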
2.
Herbrich, Ralf; Graepel, Thore; Campbell, Colin
Bayes Point Machines Journal Article
In: Journal of Machine Learning Research, vol. 1, pp. 245–279, 2001.
@article{herbrich2001bpm,
title = {Bayes Point Machines},
author = {Ralf Herbrich and Thore Graepel and Colin Campbell},
url = {https://www.herbrich.me/papers/bpm.pdf},
year = {2001},
date = {2001-01-01},
journal = {Journal of Machine Learning Research},
volume = {1},
pages = {245--279},
abstract = {Kernel-classifiers comprise a powerful class of non-linear decision functions for binary classification. The support vector machine is an example of a learning algorithm for kernel classifiers that singles out the consistent classifier with the largest margin, i.e. minimal real-valued output on the training sample, within the set of consistent hypotheses, the so-called version space. We suggest the Bayes point machine as a well-founded improvement which approximates the Bayes-optimal decision by the centre of mass of version space. We present two algorithms to stochastically approximate the centre of mass of version space: a billiard sampling algorithm and a sampling algorithm based on the well known perceptron algorithm. It is shown how both algorithms can be extended to allow for soft-boundaries in order to admit training errors. Experimentally, we find that --- for the zero training error case --- Bayes point machines consistently outperform support vector machines on both surrogate data and real-world benchmark data sets. In the soft-boundary/soft-margin case, the improvement over support vector machines is shown to be reduced. Finally, we demonstrate that the real-valued output of single Bayes points on novel test points is a valid confidence measure and leads to a steady decrease in generalisation error when used as a rejection criterion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
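The core idea of the paper, approximating the Bayes-optimal decision by the centre of mass of version space, can be illustrated without the billiard or perceptron samplers: in very low dimension, naive rejection sampling of unit weight vectors already recovers the centre of mass. The sketch below assumes a linearly separable, zero-training-error setting and is only an illustration of the concept, not one of the paper's algorithms.

```python
import numpy as np

def bayes_point_rejection(X, y, n_samples=20000, rng=None):
    """Estimate the Bayes point as the centre of mass of version space,
    via naive rejection sampling of unit weight vectors (only viable in
    low dimension; billiard/perceptron samplers scale far better)."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(n_samples, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # uniform on the unit sphere
    consistent = np.all((W @ X.T) * y > 0, axis=1)   # inside version space?
    centre = W[consistent].mean(axis=0)              # centre of mass of the samples
    return centre / np.linalg.norm(centre)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sign(X @ np.array([1.0, 1.0]))
w_bp = bayes_point_rejection(X, y)
```

Because version space is the intersection of half-spaces (a convex cone), the mean of any set of consistent weight vectors is itself consistent, so the estimate always classifies the training set correctly.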
3.
Graepel, Thore; Herbrich, Ralf
The Kernel Gibbs Sampler Proceedings Article
In: Advances in Neural Information Processing Systems 13, pp. 514–520, The MIT Press, 2000.
@inproceedings{herbrich2000kernelgibbs,
title = {The Kernel Gibbs Sampler},
author = {Thore Graepel and Ralf Herbrich},
url = {https://www.herbrich.me/papers/kernel-gibbs-sampler.pdf},
year = {2000},
date = {2000-01-01},
booktitle = {Advances in Neural Information Processing Systems 13},
pages = {514--520},
publisher = {The MIT Press},
abstract = {We present an algorithm that samples the hypothesis space of kernel classifiers. A uniform prior over normalised weight vectors together with a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
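A toy version of the sampling step can be written down for the linear-kernel case: pick a random direction, find the angles at which training points change side (where the piecewise constant posterior jumps), and sample an angle from the resulting segment-weighted distribution. This is an illustrative sketch under a label-noise likelihood proportional to exp(-beta * training errors), not the paper's kernelised implementation; beta and the chain length are arbitrary choices here.

```python
import numpy as np

def kgs_step(w, X, y, beta, rng):
    """One Gibbs-sampler-style move along a random great circle:
    the posterior along the circle is piecewise constant, so it can
    be sampled exactly segment by segment."""
    v = rng.normal(size=w.shape)
    v -= (v @ w) * w                       # random direction orthogonal to w
    v /= np.linalg.norm(v)
    # angles where some training point changes side (posterior jumps)
    theta = np.arctan2(-(X @ w), X @ v) % np.pi
    knots = np.sort(np.concatenate([theta, theta + np.pi, [0.0, 2 * np.pi]]))
    mids = (knots[:-1] + knots[1:]) / 2
    lengths = np.diff(knots)
    # evaluate the constant error count on each segment at its midpoint
    W = np.cos(mids)[:, None] * w + np.sin(mids)[:, None] * v
    errors = np.sum((W @ X.T) * y <= 0, axis=1)
    p = lengths * np.exp(-beta * errors)   # segment mass: length x likelihood
    seg = rng.choice(len(p), p=p / p.sum())
    t = rng.uniform(knots[seg], knots[seg + 1])
    return np.cos(t) * w + np.sin(t) * v

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sign(X @ np.array([1.0, 0.0, 0.0]))
w = np.array([0.0, 1.0, 0.0])              # start away from the separator
samples = []
for _ in range(200):
    w = kgs_step(w, X, y, beta=5.0, rng=rng)
    samples.append(w)
w_mean = np.mean(samples[50:], axis=0)     # Bayes point estimate from the chain
```

Averaging the post-burn-in samples of the chain gives a Bayes point estimate that accounts for label noise through beta.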
4.
Herbrich, Ralf; Graepel, Thore
Large Scale Bayes Point Machines Proceedings Article
In: Advances in Neural Information Processing Systems 13, pp. 528–534, The MIT Press, 2000.
@inproceedings{herbrich2000largebpm,
title = {Large Scale Bayes Point Machines},
author = {Ralf Herbrich and Thore Graepel},
url = {https://www.herbrich.me/papers/mnist.pdf},
year = {2000},
date = {2000-01-01},
booktitle = {Advances in Neural Information Processing Systems 13},
pages = {528--534},
publisher = {The MIT Press},
abstract = {The concept of averaging over classifiers is fundamental to the Bayesian analysis of learning. Based on this viewpoint, it has recently been demonstrated for linear classifiers that the centre of mass of version space (the set of all classifiers consistent with the training set) - also known as the Bayes point - exhibits excellent generalisation abilities. However, the billiard algorithm as presented in [Herbrich et al., 2000] is restricted to small sample size because it requires O(m^2) memory and O(N m^2) computational steps where m is the number of training patterns and N is the number of random draws from the posterior distribution. In this paper we present a method based on the simple perceptron learning algorithm which allows one to overcome this algorithmic drawback. The method is algorithmically simple and is easily extended to the multi-class case. We present experimental results on the MNIST data set of handwritten digits which show that Bayes Point Machines are competitive with the current world champion, the support vector machine. In addition, the computational complexity of BPMs can be tuned by varying the number of samples from the posterior. Finally, rejecting test points on the basis of their (approximative) posterior probability leads to a rapid decrease in generalisation error, e.g. 0.1% generalisation error for a given rejection rate of 10%.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
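The two ingredients of this paper, perceptron runs as cheap posterior samples and rejection of test points with low posterior agreement, can be sketched as follows. This is a linear toy illustration, not the authors' MNIST pipeline; the number of posterior samples and the agreement threshold are hypothetical choices.

```python
import numpy as np

def perceptron(X, y, epochs=5, rng=None):
    """Plain perceptron trained on random permutations of the data;
    each run yields one (approximate) sample from the posterior."""
    rng = np.random.default_rng(rng)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (w @ X[i]) <= 0:
                w += y[i] * X[i]
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
true_w = np.array([1.0, -1.0, 0.5, 0.0])
y = np.sign(X @ true_w)

# N perceptron runs, averaged into a Bayes point estimate
W = np.stack([perceptron(X, y, rng=k) for k in range(10)])
w_bp = W.mean(axis=0)

# reject test points on which the posterior samples disagree
Xt = rng.normal(size=(1000, 4))
yt = np.sign(Xt @ true_w)
votes = np.mean(np.sign(Xt @ W.T) > 0, axis=1)   # fraction voting +1
confident = np.abs(votes - 0.5) > 0.3            # keep high-agreement points only
acc_all = np.mean(np.sign(Xt @ w_bp) == yt)
acc_conf = np.mean(np.sign(Xt[confident] @ w_bp) == yt[confident])
```

Disagreement among the samples concentrates near the decision boundary, which is exactly where errors occur, so rejecting low-agreement points tends to reduce the error on the points that are kept.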
5.
Herbrich, Ralf; Graepel, Thore; Campbell, Colin
Robust Bayes Point Machines Proceedings Article
In: Proceedings of European Symposium on Artificial Neural Networks, pp. 49–54, 2000.
@inproceedings{herbrich2000robustbpm,
title = {Robust Bayes Point Machines},
author = {Ralf Herbrich and Thore Graepel and Colin Campbell},
url = {https://www.herbrich.me/papers/esann00.pdf},
year = {2000},
date = {2000-01-01},
booktitle = {Proceedings of European Symposium on Artificial Neural Networks},
pages = {49--54},
abstract = {Support Vector Machines choose the hypothesis corresponding to the centre of the largest hypersphere that can be inscribed in version space. If version space is elongated or irregularly shaped a potentially superior approach is to take into account the whole of version space. We propose to construct the Bayes point which is approximated by the centre of mass. Our implementation of a Bayes Point Machine (BPM) uses an ergodic billiard to estimate this point in the kernel space. We show that BPMs outperform hard margin Support Vector Machines (SVMs) on real world datasets. We introduce a technique that allows the BPM to construct hypotheses with non-zero training error similar to soft margin SVMs with quadratic penalisation of the margin slacks. An experimental study reveals that with decreasing penalisation of training error the improvement of BPMs over SVMs decays, a finding that is explained by geometrical considerations.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
6.
Herbrich, Ralf; Graepel, Thore; Campbell, Colin
Bayes Point Machines: Estimating the Bayes Point in Kernel Space Proceedings Article
In: Proceedings of the IJCAI Workshop on Support Vector Machines, pp. 23–27, 1999.
@inproceedings{herbrich1999bpm,
title = {Bayes Point Machines: Estimating the Bayes Point in Kernel Space},
author = {Ralf Herbrich and Thore Graepel and Colin Campbell},
url = {https://www.herbrich.me/papers/ijcai99.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Proceedings of the IJCAI Workshop on Support Vector Machines},
pages = {23--27},
abstract = {From a Bayesian perspective Support Vector Machines choose the hypothesis corresponding to the largest possible hypersphere that can be inscribed in version space, i.e. in the space of all consistent hypotheses given a training set. Those boundaries of version space which are tangent to the hypersphere define the support vectors. An alternative and potentially better approach is to construct the hypothesis using the whole of version space. This is achieved by using a Bayes Point Machine which finds the midpoint of the region of intersection of all hyperplanes bisecting version space into two halves of equal volume (the Bayes point). It is known that the center of mass of version space approximates the Bayes point. We suggest estimating the center of mass by averaging over the trajectory of a billiard ball bouncing in version space. Experimental results are presented indicating that Bayes Point Machines consistently outperform Support Vector Machines.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}