Ranking and Matchmaking
The TrueSkill ranking system is a skill-based ranking system for Xbox Live, developed at Microsoft Research. The purpose of a ranking system is to identify and track the skills of gamers in a game (mode) so that they can be matched into competitive matches. The TrueSkill ranking system uses only the final standings of all teams in a game to update the skill estimates (ranks) of all gamers who played in that game. Ranking systems have been proposed for many sports, but possibly the most prominent ranking system in use today is Elo. For more details, please see the TrueSkill project page.
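The skill update can be illustrated with a minimal sketch of the two-player, no-draw case, assuming the commonly cited default parameters (initial skill belief mu = 25, sigma = 25/3, performance noise beta = 25/6); team games and draws require the full factor-graph treatment described on the project page.

```python
from math import erf, exp, pi, sqrt

BETA = 25.0 / 6.0  # assumed performance-noise parameter

def _pdf(x):
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def _cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def update(winner, loser):
    """Return updated (mu, sigma) beliefs after winner beats loser."""
    (mu_w, sig_w), (mu_l, sig_l) = winner, loser
    c = sqrt(2.0 * BETA ** 2 + sig_w ** 2 + sig_l ** 2)
    t = (mu_w - mu_l) / c
    v = _pdf(t) / _cdf(t)   # additive correction to the means
    w = v * (v + t)         # multiplicative shrinkage of the variances
    mu_w += sig_w ** 2 / c * v
    mu_l -= sig_l ** 2 / c * v
    sig_w *= sqrt(1.0 - sig_w ** 2 / c ** 2 * w)
    sig_l *= sqrt(1.0 - sig_l ** 2 / c ** 2 * w)
    return (mu_w, sig_w), (mu_l, sig_l)
```

An upset (a low-mu player beating a high-mu player) produces a larger mean shift than an expected win, and every observed outcome shrinks both players' uncertainties.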
The game of Go is an ancient Chinese game of strategy for two players. By most measures of complexity it is more complex than Chess. While Deep Blue (and more recently Deep Fritz) plays Chess at the world champion's level, no Go-playing program has yet reached even the level of an average amateur Go player. The reason for the failure to reproduce the impressive results of computer Chess in the game of Go appears to lie in Go's greater complexity, both in the number of different positions and in the difficulty of defining an appropriate evaluation function for Go positions.
We take the view that only an automated way of acquiring Go knowledge – machine learning – can radically improve on the current state of computer Go. Numerous Go servers on the internet offer thousands of game records played by players who are very competent compared to today's computer Go programs. The great challenge is to build machine learning algorithms that extract knowledge from these databases so that it can be used to play Go well.
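One simple form of such knowledge extraction is supervised move prediction: harvest (position, expert move) pairs from the game records and learn which move strong players favour in each situation. The sketch below uses illustrative string keys and a plain frequency table standing in for a real position representation and learned model; all names and data are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical game records harvested from server archives:
# each game is a list of (position_key, expert_move) pairs.
records = [
    [("corner-3-3", "d4"), ("approach", "c6")],
    [("corner-3-3", "d4"), ("approach", "f3")],
    [("corner-3-3", "q16"), ("approach", "c6")],
]

# Count how often experts played each move from each position.
move_counts = defaultdict(Counter)
for game in records:
    for position_key, move in game:
        move_counts[position_key][move] += 1

def predict(position_key):
    """Suggest the move experts played most often from this position."""
    return move_counts[position_key].most_common(1)[0][0]
```

A real system would replace the string keys with generalising features of the board (local patterns, liberties, distances to edges) so that predictions transfer to positions never seen in the records.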
Learning to Fight
We apply reinforcement learning to the problem of finding good policies for a fighting agent in a commercial computer game. The learning agent is trained using the SARSA algorithm for on-policy learning of an action-value function represented by linear and neural network function approximators. Important aspects include the selection and construction of features, actions, and rewards, as well as other design choices necessary to integrate the learning process into the game. The learning agent is trained against the built-in AI of the game with different rewards encouraging aggressive or defensive behaviour.
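The SARSA update with a linear function approximator can be sketched on a toy problem; the five-state chain environment, one-hot features, and hyperparameters below are illustrative stand-ins, not the game integration described above.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)  # actions: 0 = left, 1 = right; state 4 is the goal

def phi(s, a):
    """One-hot feature vector for a (state, action) pair."""
    v = [0.0] * (N_STATES * len(ACTIONS))
    v[s * len(ACTIONS) + a] = 1.0
    return v

def q(w, s, a):
    """Linear action-value estimate Q(s, a) = w . phi(s, a)."""
    return sum(wi * xi for wi, xi in zip(w, phi(s, a)))

def eps_greedy(w, s, eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(w, s, a))

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

w = [0.0] * (N_STATES * len(ACTIONS))
alpha, gamma = 0.5, 0.9
for _ in range(200):
    s = 0
    a = eps_greedy(w, s)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = eps_greedy(w, s2)
        # On-policy TD target uses the action actually selected next.
        target = r if done else r + gamma * q(w, s2, a2)
        delta = target - q(w, s, a)
        for i, xi in enumerate(phi(s, a)):
            w[i] += alpha * delta * xi
        s, a = s2, a2
```

Because the next action a2 is drawn from the agent's own exploratory policy, the update is on-policy; swapping the target for the max over actions would give Q-learning instead.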
Learning to Race
Drivatars are a novel form of learning artificial intelligence (AI) developed for Forza Motorsport. The technology behind the Drivatar concept is exploited within the game in two ways:
- as an innovative learning game feature: create your own AI driver!
- as the underlying model for all the AI competitors in the Arcade and Career modes.
Our original goal in developing Drivatars was to create human-like computer opponents to race against. For more details please visit the Drivatar project page.