Semidefinite Programming for Classification Learning
Knowledge about local invariances with respect to given pattern transformations can greatly improve the accuracy of classification. Previous approaches are either based on regularisation or on the generation of virtual (transformed) examples. We developed a new framework for learning linear classifiers under known transformations, based on semidefinite programming, which finds a maximum-margin hyperplane when the training examples are polynomial trajectories instead of single points. The solution is sparse in the dual variables and allows us to identify the points on each trajectory with minimal real-valued output as virtual support vectors.
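To illustrate the construction, the following Python sketch (illustrative only, not the code from the paper) shows how a margin constraint that must hold along an entire degree-two trajectory x_i(theta) = x_i + theta*t_i + theta^2*s_i can be written as a 2x2 positive semidefinite constraint, so that the maximum-margin problem becomes a semidefinite program. The tangent and curvature vectors t_i, s_i and the use of the cvxpy modelling package are assumptions made for this example.

import numpy as np
import cvxpy as cp

def sdpm_sketch(X, T, S, y):
    # Hedged sketch of a Semidefinite Programming Machine for degree-two
    # trajectories x_i(theta) = x_i + theta * t_i + theta^2 * s_i.
    # The requirement y_i * <w, x_i(theta)> >= 1 for all theta is the
    # non-negativity of a univariate quadratic a*theta^2 + b*theta + c,
    # which holds iff the 2x2 matrix [[c, b/2], [b/2, a]] is PSD.
    n, d = X.shape
    w = cp.Variable(d)
    constraints = []
    for i in range(n):
        c = y[i] * (X[i] @ w) - 1.0   # constant coefficient
        b = y[i] * (T[i] @ w)         # linear coefficient
        a = y[i] * (S[i] @ w)         # quadratic coefficient
        M = cp.Variable((2, 2), PSD=True)
        constraints += [M[0, 0] == c, M[1, 1] == a, M[0, 1] == b / 2]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)
    problem.solve()
    return w.value

With t_i = s_i = 0 the semidefinite constraints reduce to the familiar constraints y_i <w, x_i> >= 1, so the sketch contains the ordinary maximum-margin classifier as a special case.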
We also developed a modified version of the perceptron learning algorithm which solves such semidefinite programs in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints (i.e., all the virtually transformed examples); (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; and (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates.
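A minimal NumPy sketch of that idea is given below (not the paper's exact algorithm): it treats the constraint X being positive semidefinite as the infinite family of linear constraints <v v^T, X> >= 0 and performs perceptron-style additive updates, using the eigenvector of the most negative eigenvalue as the most violated constraint. The feasibility form of the SDP, the step sizes, and the omission of the probabilistic rescaling step are simplifying assumptions.

import numpy as np

def perceptron_sdp_feasibility(As, bs, dim, max_iter=10000, tol=1e-8):
    # Hedged sketch: look for a symmetric X with <A_i, X> >= b_i for all i
    # and X positive semidefinite, using perceptron-style updates.  The PSD
    # condition is viewed as infinitely many linear constraints <v v^T, X> >= 0.
    X = np.zeros((dim, dim))
    for _ in range(max_iter):
        violated = False
        for A, b in zip(As, bs):                  # explicit linear constraints
            if np.sum(A * X) < b - tol:
                X += A / np.linalg.norm(A)        # perceptron update
                violated = True
        evals, evecs = np.linalg.eigh(X)          # check the PSD constraint
        if evals[0] < -tol:
            v = evecs[:, 0]                       # most violated direction
            X += np.outer(v, v)                   # update on <v v^T, X> >= 0
            violated = True
        if not violated:
            return X                              # feasible up to tolerance
    return X  # the paper adds a probabilistic rescaling step to reach polynomial time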
1.
Graepel, Thore; Herbrich, Ralf
Invariant Pattern Recognition by Semidefinite Programming Machines Proceedings Article
In: Advances in Neural Information Processing Systems 16, pp. 33–40, 2003.
@inproceedings{graepel2003sdm,
title = {Invariant Pattern Recognition by Semidefinite Programming Machines},
author = {Thore Graepel and Ralf Herbrich},
url = {https://www.herbrich.me/papers/sdpm.pdf},
year = {2003},
date = {2003-01-01},
booktitle = {Advances in Neural Information Processing Systems 16},
pages = {33–40},
abstract = {Knowledge about local invariances with respect to given pattern transformations can greatly improve the accuracy of classification. Previous approaches are either based on regularisation or on the generation of virtual (transformed) examples. We develop a new framework for learning linear classifiers under known transformations based on semidefinite programming. We present a new learning algorithm - the Semidefinite Programming Machine (SDPM) - which is able to find a maximum margin hyperplane when the training examples are polynomial trajectories instead of single points. The solution is found to be sparse in dual variables and allows to identify those points on the trajectory with minimal real-valued output as virtual support vectors. Extensions to segments of trajectories, to more than one transformation parameter, and to learning with kernels are discussed. In experiments we use a Taylor expansion to locally approximate rotational invariance in pixel images from USPS and find improvements over known methods.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Knowledge about local invariances with respect to given pattern transformations can greatly improve the accuracy of classification. Previous approaches are either based on regularisation or on the generation of virtual (transformed) examples. We develop a new framework for learning linear classifiers under known transformations based on semidefinite programming. We present a new learning algorithm - the Semidefinite Programming Machine (SDPM) - which is able to find a maximum margin hyperplane when the training examples are polynomial trajectories instead of single points. The solution is found to be sparse in dual variables and allows to identify those points on the trajectory with minimal real-valued output as virtual support vectors. Extensions to segments of trajectories, to more than one transformation parameter, and to learning with kernels are discussed. In experiments we use a Taylor expansion to locally approximate rotational invariance in pixel images from USPS and find improvements over known methods.
2.
Graepel, Thore; Herbrich, Ralf; Kharechko, Andriy; Shawe-Taylor, John
Semi-Definite Programming by Perceptron Learning Proceedings Article
In: Advances in Neural Information Processing Systems 16, pp. 457–464, The MIT Press, 2003.
@inproceedings{graepel2003sdpwithperceptron,
title = {Semi-Definite Programming by Perceptron Learning},
author = {Thore Graepel and Ralf Herbrich and Andriy Kharechko and John Shawe-Taylor},
url = {https://www.herbrich.me/papers/sdpm-pla.pdf},
year = {2003},
date = {2003-01-01},
booktitle = {Advances in Neural Information Processing Systems 16},
pages = {457–464},
publisher = {The MIT Press},
abstract = {We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods.
Adaptive Margin Machines
Previous approaches for learning kernel classifiers, such as Quadratic Programming Machines (SVMs) and Linear Programming Machines, were based on the minimization of a regularized margin loss in which the margin is treated equivalently for each training pattern. We propose a reformulation of the minimization problem such that adaptive margins for each training pattern are utilized, yielding Adaptive Margin Machines (AMMs). Furthermore, we give bounds on the generalization error of AMMs which justify their robustness against outliers. We show experimentally that the generalization error of AMMs is comparable to that of QP- and LP-Machines on benchmark datasets from the UCI repository.
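One way to read the adaptive-margin idea is sketched below in Python with the cvxpy package; the exact AMM formulation is given in the papers that follow, and the constraint form and trade-off parameter lam used here are assumptions made for illustration. Each pattern has to clear a margin that is adapted by its own dual coefficient, and the resulting problem is a linear program.

import numpy as np
import cvxpy as cp

def adaptive_margin_sketch(K, y, lam=1.0):
    # Hedged sketch of an adaptive-margin kernel classifier.  Assumed
    # constraint (see the papers below for the exact formulation):
    #   y_i * sum_j alpha_j y_j K[i, j] >= 1 - xi_i + lam * alpha_i * K[i, i]
    # so patterns with large alpha_i are granted a smaller effective margin
    # instead of dragging the decision surface towards outliers.
    n = K.shape[0]
    alpha = cp.Variable(n, nonneg=True)
    xi = cp.Variable(n, nonneg=True)
    outputs = K @ cp.multiply(y, alpha)                      # f(x_i)
    margins = 1 - xi + lam * cp.multiply(np.diag(K), alpha)
    problem = cp.Problem(cp.Minimize(cp.sum(xi)),
                         [cp.multiply(y, outputs) >= margins])
    problem.solve()
    return alpha.value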
1.
Herbrich, Ralf; Weston, Jason
Adaptive Margin Support Vector Machines for Classification Proceedings Article
In: Proceedings of the 9th International Conference on Artificial Neural Networks, pp. 97–102, 1999.
@inproceedings{herbrich1999ann,
title = {Adaptive Margin Support Vector Machines for Classification},
author = {Ralf Herbrich and Jason Weston},
url = {https://www.herbrich.me/papers/icann99_ann.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Proceedings of the 9th International Conference on Artificial Neural Networks},
pages = {97–102},
abstract = {In this paper we propose a new learning algorithm for classification learning based on the Support Vector Machine (SVM) approach. Existing approaches for constructing SVMs are based on minimization of a regularized margin loss where the margin is treated equivalently for each training pattern. We propose a reformulation of the minimization problem such that adaptive margins for each training pattern are utilized, which we call the Adaptive Margin (AM)-SVM. We give bounds on the generalization error of AM-SVMs which justify their robustness against outliers, and show experimentally that the generalization error of AM-SVMs is comparable to classical SVMs on benchmark datasets from the UCI repository.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
In this paper we propose a new learning algorithm for classification learning based on the Support Vector Machine (SVM) approach. Existing approaches for constructing SVMs are based on minimization of a regularized margin loss where the margin is treated equivalently for each training pattern. We propose a reformulation of the minimization problem such that adaptive margins for each training pattern are utilized, which we call the Adaptive Margin (AM)-SVM. We give bounds on the generalization error of AM-SVMs which justify their robustness against outliers, and show experimentally that the generalization error of AM-SVMs is comparable to classical SVMs on benchmark datasets from the UCI repository.
2.
Weston, Jason; Herbrich, Ralf
Adaptive Margin Support Vector Machines Book Section
In: Advances in Large Margin Classifiers, pp. 281–296, The MIT Press, 1999.
@incollection{weston1999ann,
title = {Adaptive Margin Support Vector Machines},
author = {Jason Weston and Ralf Herbrich},
url = {https://www.herbrich.me/papers/nips98_ann.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Advances in Large Margin Classifiers},
pages = {281–296},
publisher = {The MIT Press},
chapter = {15},
abstract = {In this chapter we present a new learning algorithm, Leave-One-Out (LOO -) SVMs and its generalization Adaptive Margin (AM-) SVMs, inspired by a recent upper bound on the leave-one-out error proved for kernel classifiers by Jaakkola and Haussler. The new approach minimizes the expression given by the bound in an attempt to minimize the leave-one-out error. This gives a convex optimization problem which constructs a sparse linear classifier in feature space using the kernel technique. As such the algorithm possesses many of the same properties as SVMs and Linear Programming (LP-) SVMs. These former techniques are based on the minimization of a regularized margin loss, where the margin is treated equivalently for each training pattern. We propose a minimization problem such that adaptive margins for each training pattern are utilized. Furthermore, we give bounds on the generalization error of the approach which justifies its robustness against outliers. We show experimentally that the generalization error of AM-SVMs is comparable to SVMs and LP-SVMs on benchmark datasets from the UCI repository.},
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
In this chapter we present a new learning algorithm, Leave-One-Out (LOO -) SVMs and its generalization Adaptive Margin (AM-) SVMs, inspired by a recent upper bound on the leave-one-out error proved for kernel classifiers by Jaakkola and Haussler. The new approach minimizes the expression given by the bound in an attempt to minimize the leave-one-out error. This gives a convex optimization problem which constructs a sparse linear classifier in feature space using the kernel technique. As such the algorithm possesses many of the same properties as SVMs and Linear Programming (LP-) SVMs. These former techniques are based on the minimization of a regularized margin loss, where the margin is treated equivalently for each training pattern. We propose a minimization problem such that adaptive margins for each training pattern are utilized. Furthermore, we give bounds on the generalization error of the approach which justifies its robustness against outliers. We show experimentally that the generalization error of AM-SVMs is comparable to SVMs and LP-SVMs on benchmark datasets from the UCI repository.
Ordinal Regression
In contrast to the standard machine learning tasks of classification and metric regression, we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval, where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable, we present a large margin algorithm that is based on a mapping from objects to scalar utility values, thus classifying pairs of objects.
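The pairwise reduction can be sketched in a few lines of Python (illustrative only; using scikit-learn's LinearSVC as the underlying large-margin solver is an assumption): a utility u(x) = <w, x> is learned by classifying difference vectors of object pairs whose relative order is known.

import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def utility_from_pairs(X, ranks):
    # Hedged sketch of large-margin ordinal regression via pairwise
    # classification: higher-ranked objects should obtain larger utility.
    diffs, labels = [], []
    for i, j in combinations(range(len(X)), 2):
        if ranks[i] == ranks[j]:
            continue                              # only pairs with a preference
        sign = 1.0 if ranks[i] > ranks[j] else -1.0
        diffs.append(X[i] - X[j]); labels.append(sign)
        diffs.append(X[j] - X[i]); labels.append(-sign)   # mirrored pair
    clf = LinearSVC(fit_intercept=False)          # utility differences need no bias
    clf.fit(np.array(diffs), np.array(labels))
    return clf.coef_.ravel()                      # weight vector of u(x) = <w, x>

Rank boundaries (thresholds on the utility) can then be placed between the utilities of adjacent ranks on the training sample.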
1.
Herbrich, Ralf; Graepel, Thore; Obermayer, Klaus
Support Vector Learning for Ordinal Regression Proceedings Article
In: Proceedings of the 9th International Conference on Artificial Neural Networks, pp. 97–102, 1999.
@inproceedings{herbrich1999ordinalregression,
title = {Support Vector Learning for Ordinal Regression},
author = {Ralf Herbrich and Thore Graepel and Klaus Obermayer},
url = {https://www.herbrich.me/papers/icann99_ordinal.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Proceedings of the 9th International Conference on Artificial Neural Networks},
pages = {97–102},
abstract = {We investigate the problem of predicting variables of ordinal scale. This task is referred to as ordinal regression and is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models we present a distribution independent formulation of the problem together with uniform bounds of the risk functional. The approach presented is based on a mapping from objects to scalar utility values. Similar to Support Vector methods we derive a new learning algorithm for the task of ordinal regression based on large margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents w.r.t. an initial query. Experimental results indicate that the presented algorithm outperforms more naive approaches to ordinal regression such as Support Vector classification and Support Vector regression in the case of more than two ranks.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We investigate the problem of predicting variables of ordinal scale. This task is referred to as ordinal regression and is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models we present a distribution independent formulation of the problem together with uniform bounds of the risk functional. The approach presented is based on a mapping from objects to scalar utility values. Similar to Support Vector methods we derive a new learning algorithm for the task of ordinal regression based on large margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents w.r.t. an initial query. Experimental results indicate that the presented algorithm outperforms more naive approaches to ordinal regression such as Support Vector classification and Support Vector regression in the case of more than two ranks.
2.
Herbrich, Ralf; Graepel, Thore; Obermayer, Klaus
Large Margin Rank Boundaries for Ordinal Regression Book Section
In: Advances in Large Margin Classifiers, pp. 115–132, The MIT Press, 1999.
@incollection{herbrich1999largemarginrank,
title = {Large Margin Rank Boundaries for Ordinal Regression},
author = {Ralf Herbrich and Thore Graepel and Klaus Obermayer},
url = {https://www.herbrich.me/papers/nips98_ordinal.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Advances in Large Margin Classifiers},
pages = {115–132},
publisher = {The MIT Press},
chapter = {7},
abstract = {In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable we present a distribution independent risk formulation of ordinal regression which allows us to derive a uniform convergence bound. Applying this bound we present a large margin algorithm that is based on a mapping from objects to scalar utility values thus classifying pairs of objects. We give experimental results for an information retrieval task which show that our algorithm outperforms more naive approaches to ordinal regression such as Support Vector Classification and Support Vector Regression in the case of more than two ranks.},
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable we present a distribution independent risk formulation of ordinal regression which allows us to derive a uniform convergence bound. Applying this bound we present a large margin algorithm that is based on a mapping from objects to scalar utility values thus classifying pairs of objects. We give experimental results for an information retrieval task which show that our algorithm outperforms more naive approaches to ordinal regression such as Support Vector Classification and Support Vector Regression in the case of more than two ranks.
Kernel Methods for Measuring Independence
We introduce two new functionals, the constrained covariance and the kernel mutual information, to measure the degree of independence of random variables. These quantities are both based on the covariance between functions of the random variables in reproducing kernel Hilbert spaces (RKHSs). We prove that when the RKHSs are universal, both functionals are zero if and only if the random variables are pairwise independent. We also show that the kernel mutual information is an upper bound near independence on the Parzen window estimate of the mutual information. Analogous results apply for two correlation-based dependence functionals introduced earlier: we show the kernel canonical correlation and the kernel generalised variance to be independence measures for universal kernels, and prove the latter to be an upper bound on the mutual information near independence. The performance of the kernel dependence functionals in measuring independence is verified in the context of independent component analysis.
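As a concrete illustration, the NumPy sketch below computes an empirical constrained covariance from centred Gram matrices using the commonly quoted form COCO ~ (1/n) * sqrt(lambda_max(Kc Lc)); the Gaussian kernel, its width, and this normalisation are assumptions for the example and should be checked against the definitions in the papers.

import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gaussian RBF Gram matrix (an example of a universal kernel)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def empirical_coco(X, Y, sigma=1.0):
    # Hedged sketch of the empirical constrained covariance (COCO):
    #   COCO ~ (1/n) * sqrt(lambda_max(Kc @ Lc))
    # with Kc, Lc the centred Gram matrices of the two samples.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centring matrix
    Kc = H @ rbf_gram(X, sigma) @ H
    Lc = H @ rbf_gram(Y, sigma) @ H
    lam_max = np.max(np.real(np.linalg.eigvals(Kc @ Lc)))
    return np.sqrt(max(lam_max, 0.0)) / n

Values close to zero are consistent with independence (for universal kernels and sufficiently many samples), while clearly positive values indicate dependence.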
1.
Gretton, Arthur; Herbrich, Ralf; Smola, Alexander J; Bousquet, Olivier; Schölkopf, Bernhard
Kernel Methods for Measuring Independence Journal Article
In: Journal of Machine Learning Research, vol. 6, pp. 2075–2129, 2005.
@article{gretton2005kernelindependence,
title = {Kernel Methods for Measuring Independence},
author = {Arthur Gretton and Ralf Herbrich and Alexander J Smola and Olivier Bousquet and Bernhard Schölkopf},
url = {https://www.herbrich.me/papers/gretton05a.pdf},
year = {2005},
date = {2005-01-01},
journal = {Journal of Machine Learning Research},
volume = {6},
pages = {2075–2129},
abstract = {We introduce two new functionals, the constrained covariance and the kernel mutual information, to measure the degree of independence of random variables. These quantities are both based on the covariance between functions of the random variables in reproducing kernel Hilbert spaces (RKHSs). We prove that when the RKHSs are universal, both functionals are zero if and only if the random variables are pairwise independent. We also show that the kernel mutual information is an upper bound near independence on the Parzen window estimate of the mutual information. Analogous results apply for two correlation-based dependence functionals introduced earlier: we show the kernel canonical correlation and the kernel generalised variance to be independence measures for universal kernels, and prove the latter to be an upper bound on the mutual information near independence. The performance of the kernel dependence functionals in measuring independence is verified in the context of independent component analysis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
We introduce two new functionals, the constrained covariance and the kernel mutual information, to measure the degree of independence of random variables. These quantities are both based on the covariance between functions of the random variables in reproducing kernel Hilbert spaces (RKHSs). We prove that when the RKHSs are universal, both functionals are zero if and only if the random variables are pairwise independent. We also show that the kernel mutual information is an upper bound near independence on the Parzen window estimate of the mutual information. Analogous results apply for two correlation-based dependence functionals introduced earlier: we show the kernel canonical correlation and the kernel generalised variance to be independence measures for universal kernels, and prove the latter to be an upper bound on the mutual information near independence. The performance of the kernel dependence functionals in measuring independence is verified in the context of independent component analysis.
2.
Gretton, Arthur; Smola, Alexander; Bousquet, Olivier; Herbrich, Ralf; Belitski, Andrei; Augath, Mark; Murayama, Yusuke; Pauls, Jon; Schölkopf, Bernhard; Logothetis, Nikos
Kernel Constrained Covariance for Dependence Measurement Proceedings Article
In: Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AISTATS), pp. 112–119, 2005.
@inproceedings{gretton2005kernelcoco,
title = {Kernel Constrained Covariance for Dependence Measurement},
author = {Arthur Gretton and Alexander Smola and Olivier Bousquet and Ralf Herbrich and Andrei Belitski and Mark Augath and Yusuke Murayama and Jon Pauls and Bernhard Schölkopf and Nikos Logothetis},
url = {https://www.herbrich.me/papers/kcc.pdf},
year = {2005},
date = {2005-01-01},
booktitle = {Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AISTATS)},
pages = {112–119},
abstract = {We discuss reproducing kernel Hilbert space (RKHS)-based measures of statistical dependence, with emphasis on constrained covariance (COCO), a novel criterion to test dependence of random variables. We show that COCO is a test for independence if and only if the associated RKHSs are universal. That said, no independence test exists that can distinguish dependent and independent random variables in all circumstances. Dependent random variables can result in a COCO which is arbitrarily close to zero when the source densities are highly non-smooth. All current kernel-based independence tests share this behaviour. We demonstrate exponential convergence between the population and empirical COCO. Finally, we use COCO as a measure of joint neural activity between voxels in MRI recordings of the macaque monkey, and compare the results to the mutual information and the correlation. We also show the effect of removing breathing artefacts from the MRI recording.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We discuss reproducing kernel Hilbert space (RKHS)-based measures of statistical dependence, with emphasis on constrained covariance (COCO), a novel criterion to test dependence of random variables. We show that COCO is a test for independence if and only if the associated RKHSs are universal. That said, no independence test exists that can distinguish dependent and independent random variables in all circumstances. Dependent random variables can result in a COCO which is arbitrarily close to zero when the source densities are highly non-smooth. All current kernel-based independence tests share this behaviour. We demonstrate exponential convergence between the population and empirical COCO. Finally, we use COCO as a measure of joint neural activity between voxels in MRI recordings of the macaque monkey, and compare the results to the mutual information and the correlation. We also show the effect of removing breathing artefacts from the MRI recording.
Kernel Gibbs Sampler
We present an algorithm that samples the hypothesis space of kernel classifiers. A uniform prior over normalised weight vectors, combined with a likelihood based on a model of label noise, leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the chosen line. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking label noise into account.
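The sampling step can be sketched as follows in NumPy (a simplified primal-space variant with a linear kernel and a unit-norm weight vector; the paper works with kernel expansions): pick a random direction, compute the angles at which training points change sign along the resulting great circle, and sample the new weight vector from the piecewise constant posterior over those arcs.

import numpy as np

def kernel_gibbs_step(w, X, y, noise=0.1, rng=np.random):
    # Hedged sketch of one KGS-style step for a linear kernel; w is assumed
    # to have unit norm.  Along the great circle w(theta) = cos(theta)*w +
    # sin(theta)*v the likelihood (label noise rate `noise`) is piecewise
    # constant in theta, so theta can be sampled exactly.
    v = rng.standard_normal(w.shape)
    v -= (v @ w) * w                              # direction orthogonal to w
    v /= np.linalg.norm(v)
    a, b = X @ w, X @ v                           # outputs: a*cos(theta) + b*sin(theta)
    base = np.arctan2(-a, b) % np.pi              # angles where an output crosses zero
    knots = np.sort(np.concatenate([[0.0, 2 * np.pi], base, base + np.pi]))
    mids, lengths = 0.5 * (knots[:-1] + knots[1:]), np.diff(knots)
    errors = np.array([np.sum(y * (a * np.cos(t) + b * np.sin(t)) < 0) for t in mids])
    mass = lengths * noise ** errors * (1 - noise) ** (len(y) - errors)
    k = rng.choice(len(mass), p=mass / mass.sum())    # pick an arc by posterior mass
    theta = rng.uniform(knots[k], knots[k + 1])       # uniform within the arc
    return np.cos(theta) * w + np.sin(theta) * v

Iterating this step yields posterior samples whose average approximates the Bayes point; the sketch illustrates the idea only and is not the authors' implementation.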
1.
Graepel, Thore; Herbrich, Ralf
The Kernel Gibbs Sampler Proceedings Article
In: Advances in Neural Information Processing Systems 13, pp. 514–520, The MIT Press, 2000.
@inproceedings{herbrich2000kernelgibbs,
title = {The Kernel Gibbs Sampler},
author = {Thore Graepel and Ralf Herbrich},
url = {https://www.herbrich.me/papers/kernel-gibbs-sampler.pdf},
year = {2000},
date = {2000-01-01},
booktitle = {Advances in Neural Information Processing Systems 13},
pages = {514–520},
publisher = {The MIT Press},
abstract = {We present an algorithm that samples the hypothesis space of kernel classifiers. Given a uniform prior over normalised weight vectors and a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We present an algorithm that samples the hypothesis space of kernel classifiers. Given a uniform prior over normalised weight vectors and a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise.
Classification Learning on Proximity Data
We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our approaches are based on a linear threshold model in the proximity values themselves. We show that prior knowledge about the problem can be incorporated by the choice of distance measures and examine different metrics w.r.t. their generalization.
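A minimal sketch of the linear threshold model in the proximities, written with the cvxpy package (the papers use a nu-parameterised linear program; the soft-margin constant C here is an assumption), treats the i-th row of the proximity matrix as the representation of item i and trains a sparse linear classifier on it:

import numpy as np
import cvxpy as cp

def lp_machine_on_proximities(D, y, C=1.0):
    # Hedged sketch of a linear threshold model in pairwise proximities:
    #   f(x_i) = sum_j alpha_j * D[i, j] + b,
    # trained as an L1-regularised linear program so that alpha is sparse
    # and only a few reference items are needed at prediction time.
    n = D.shape[0]
    alpha = cp.Variable(n)
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    outputs = D @ alpha + b
    problem = cp.Problem(cp.Minimize(cp.norm1(alpha) + C * cp.sum(xi)),
                         [cp.multiply(y, outputs) >= 1 - xi])
    problem.solve()
    return alpha.value, b.value

Because the model lives directly in the proximity values, no (possibly indefinite) embedding of the data into a feature space is required.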
1.
Graepel, Thore; Herbrich, Ralf; Schölkopf, Bernhard; Smola, Alex; Bartlett, Peter; Müller, Klaus Robert; Obermayer, Klaus; Williamson, Robert C
Classification on Proximity Data with LP-Machines Proceedings Article
In: Proceedings of the 9th International Conference on Artificial Neural Networks, pp. 304–309, 1999.
@inproceedings{graepel1999proximity,
title = {Classification on Proximity Data with LP-Machines},
author = {Thore Graepel and Ralf Herbrich and Bernhard Schölkopf and Alex Smola and Peter Bartlett and Klaus Robert Müller and Klaus Obermayer and Robert C Williamson},
url = {https://www.herbrich.me/papers/icann99_proxy.pdf},
year = {1999},
date = {1999-01-01},
booktitle = {Proceedings of the 9th International Conference on Artificial Neural Networks},
pages = {304–309},
abstract = {We provide a new linear program to deal with classification of data in the case of data given in terms of pairwise proximities. This allows to avoid the problems inherent in using feature spaces with indefinite metric in Support Vector Machines, since the notion of a margin is purely needed in input space where the classification actually occurs. Moreover in our approach we can enforce sparsity in the proximity representation by sacrificing training error. This turns out to be favorable for proximity data. Similar to nu-SV methods, the only parameter needed in the algorithm is the (asymptotical) number of data points being classified with a margin. Finally, the algorithm is successfully compared with nu-SV learning in proximity space and K-nearest-neighbors on real world data from Neuroscience and molecular biology.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We provide a new linear program to deal with classification of data in the case of data given in terms of pairwise proximities. This allows to avoid the problems inherent in using feature spaces with indefinite metric in Support Vector Machines, since the notion of a margin is purely needed in input space where the classification actually occurs. Moreover in our approach we can enforce sparsity in the proximity representation by sacrificing training error. This turns out to be favorable for proximity data. Similar to nu-SV methods, the only parameter needed in the algorithm is the (asymptotical) number of data points being classified with a margin. Finally, the algorithm is successfully compared with nu-SV learning in proximity space and K-nearest-neighbors on real world data from Neuroscience and molecular biology.
2.
Graepel, Thore; Herbrich, Ralf; Bollmann-Sdorra, Peter; Obermayer, Klaus
Classification on Pairwise Proximity Data Proceedings Article
In: Advances in Neural Information Processing Systems 11, pp. 438–444, The MIT Press, 1998.
@inproceedings{graepel1998classificationpairwise,
title = {Classification on Pairwise Proximity Data},
author = {Thore Graepel and Ralf Herbrich and Peter Bollmann-Sdorra and Klaus Obermayer},
url = {https://www.herbrich.me/papers/graeherbollober99.pdf},
year = {1998},
date = {1998-01-01},
booktitle = {Advances in Neural Information Processing Systems 11},
pages = {438–444},
publisher = {The MIT Press},
abstract = {We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combined linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean data. As an alternative we present another approach based on a linear threshold model in the proximity values themselves, which is optimized using Structural Risk Minimization. We show that prior knowledge about the problem can be incorporated by the choice of distance measures and examine different metrics w.r.t. their generalization. Finally, the algorithms are successfully applied to protein structure data and to data from the cat's cerebral cortex. They show better performance than K-nearest-neighbor classification.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combined linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean data. As an alternative we present another approach based on a linear threshold model in the proximity values themselves, which is optimized using Structural Risk Minimization. We show that prior knowledge about the problem can be incorporated by the choice of distance measures and examine different metrics w.r.t. their generalization. Finally, the algorithms are successfully applied to protein structure data and to data from the cat's cerebral cortex. They show better performance than K-nearest-neighbor classification.