Jeff Riley
PhD (Comp.Sci.), MAppSc (IT), GradDipKBS
Adjunct Principal Research Fellow of RMIT University

Research Page


Updated September 8, 2013

Research Interests

My research interests are in the fields of Artificial Intelligence and Knowledge Management, with a special interest in Machine Learning and Evolutionary Computation.

I hold a PhD in Computer Science, a Master of Applied Science in Information Technology, and a Graduate Diploma in Knowledge Based Systems, all from RMIT University (formerly The Royal Melbourne Institute of Technology).  I am an Adjunct Principal Research Fellow of RMIT University in the School of Computer Science and Information Technology, where I am a member of the Evolutionary Computation and Machine Learning Group.



PhD (2005): Evolving Fuzzy Rules for Goal-Scoring Behaviour in a Robot Soccer Environment


The ability to construct autonomous robots that learn from the environment in which they operate in order to achieve their objectives remains largely unrealised, especially in dynamic environments that change quickly and are noisy and uncertain. This thesis investigates a method of developing controllers for simple robots that learn, via artificial evolution, how to react in the noisy, uncertain and dynamic environment of simulated robot soccer in order to achieve goal-scoring behaviour.

A rules-based architecture that uses a fuzzy-logic inferencing system is proposed for the simulated soccer player. The set of rules that controls the behaviour of the player is developed by evolving a population of simulated soccer-playing robots that are evaluated in the robot soccer environment. The evolutionary algorithm implemented to evolve the rules is a messy-coded genetic algorithm.
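The evolutionary loop described above might be sketched roughly as follows. The encoding, fitness function and operator details here are illustrative stand-ins, not the thesis implementation; what the sketch does show is the messy-coded idea that chromosomes are variable-length lists of rule genes, so the number of rules a player carries is itself under evolutionary control:

```python
import random

random.seed(0)

# Hypothetical gene: a (feature_index, fuzzy_set, action) triple; a chromosome
# is a variable-length list of such genes, echoing a messy-coded encoding.
FEATURES, FUZZY_SETS, ACTIONS = 4, 3, 2

def random_gene():
    return (random.randrange(FEATURES),
            random.randrange(FUZZY_SETS),
            random.randrange(ACTIONS))

def random_chromosome(max_rules=6):
    return [random_gene() for _ in range(random.randint(1, max_rules))]

def fitness(chrom):
    # Stand-in objective: reward covering distinct input features
    # while penalising bloated rule sets.
    coverage = len({g[0] for g in chrom})
    return coverage - 0.1 * len(chrom)

def cut_and_splice(a, b):
    # Messy-GA style recombination: cut each parent at an arbitrary point
    # and splice, so offspring length can differ from both parents.
    ca, cb = random.randint(0, len(a)), random.randint(0, len(b))
    return (a[:ca] + b[cb:]) or [random_gene()]

def mutate(chrom, rate=0.2):
    return [random_gene() if random.random() < rate else g for g in chrom]

def evolve(pop_size=30, generations=40):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        pop = elite + [mutate(cut_and_splice(random.choice(elite),
                                             random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

In the thesis the evaluation step runs each chromosome as a player in the soccer simulator; the toy `fitness` here simply makes the loop self-contained.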

The soccer simulation environment chosen for this work is the RoboCup Soccer Simulation League, a dynamic, noisy, real-time environment developed specifically for artificial intelligence research. However, because the RoboCup simulator runs in real time, all training and testing must also take place in real time, which severely limits the amount of learning the method can achieve. The client-server architecture of the RoboCup simulator further complicates the implementation of the learning process. To overcome these impediments a less complex model of the RoboCup simulator was created.

The new simulator, named SimpleSoccer, is a multi-player capable, dynamic environment that is not noisy, does not operate in real-time, and does not implement a client-server architecture. The simplified environment of SimpleSoccer allows the evolutionary process to run much faster than in the RoboCup environment, so real learning can take place in more reasonable timeframes. Tests are performed to ensure that the SimpleSoccer environment is a sufficiently good model of the RoboCup environment and that rules learned in the simpler environment are transferable to the RoboCup environment. A method of accelerating the evolutionary search in the RoboCup environment by seeding the population with rules learned in the SimpleSoccer environment is demonstrated.
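The seeding step described above can be illustrated with a small helper. The 25% seeding fraction and the helper's signature are my own assumptions for the sketch, not the thesis design:

```python
def seeded_population(seed_rules, pop_size, random_chromosome, mutate):
    """Build an initial RoboCup population partly from a rule set evolved
    in the simpler SimpleSoccer environment (seeding fraction illustrative)."""
    n_seeded = pop_size // 4
    # Mutated copies of the seed keep the learned structure while restoring
    # some diversity; the remainder is random, as in a normal initial population.
    pop = [mutate(list(seed_rules)) for _ in range(n_seeded)]
    pop += [random_chromosome() for _ in range(pop_size - n_seeded)]
    return pop

# Toy usage with stand-in rules and operators:
pop = seeded_population(
    seed_rules=[("ball_near", "kick")],
    pop_size=20,
    random_chromosome=lambda: [("random", "dash")],
    mutate=lambda c: c,
)
```

Starting the search from already-competent individuals gives the evolutionary algorithm a head start, while the random remainder guards against premature convergence on the seeded behaviour.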

This thesis also examines the question of how human expertise and expert knowledge affects the evolutionary search. Developing good soccer-playing skills for the robot soccer environment is known to be a difficult problem for evolutionary algorithms, and the problem is often solved by giving players some innate, hand-coded skills to increase the probability that the players will achieve the overall objective set. A well designed fitness function for the evolutionary algorithm can artificially guide the evolutionary process by rewarding incremental and intermediate solutions. Tests are conducted to determine how varying the amount of human help given to the evolutionary algorithm affects the result of the evolutionary process.
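A shaped fitness function of the kind described might look like the following. The specific terms and weights are hypothetical, chosen only to show how intermediate behaviour (approaching the ball, kicking it) earns reward before any goal is scored:

```python
def shaped_fitness(dist_to_ball, kicks, goals,
                   w_dist=1.0, w_kick=10.0, w_goal=100.0):
    # Goals dominate the score, but players that merely approach or kick
    # the ball still receive a reward, giving early populations (which
    # almost never score) a gradient to climb.
    return w_goal * goals + w_kick * kicks + w_dist / (1.0 + dist_to_ball)
```

The amount of shaping embodied in such weights is exactly the kind of human guidance whose effect on the evolutionary result the thesis sets out to measure.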

Finally, the thesis investigates the underlying cause of the difficulty of the robot soccer problem for evolutionary algorithms. A systematic study of the problem search spaces and fitness landscapes is presented which provides a good understanding of why the problem is difficult, and how injecting human expertise and expert knowledge in various ways can change the relative difficulty of the problem. The study also leads to the conjecture that there is an inherent limit to the amount of learning possible by evolutionary algorithms.



Master's (1997): An Evolutionary Approach to Training Feed-Forward and Recurrent Neural Networks


Artificial neural networks exhibit many useful and unique attributes, chief amongst them their ability to approximate and generalise. Feed-forward neural networks are very good at recognising patterns in noisy data and at determining relationships and mappings between data. Recurrent neural networks are powerful tools for solving problems with a temporal component, such as time series prediction, speech recognition and real-time control systems. Variants of gradient descent for training both feed-forward and recurrent neural networks are computationally expensive and time-consuming, and are not always able to find a good solution. In short, neural networks, particularly recurrent neural networks, can be difficult and expensive to train.

Evolutionary methods such as genetic algorithms are global search techniques and as such are less likely to be fooled by local variations in the error landscape.  This thesis investigates the possibility of using an evolutionary approach to train feed-forward and recurrent neural networks.

The technique investigated by this thesis is the use of a genetic algorithm to evolve changes to the weights and biases of the neural network, rather than evolve the weights and biases directly. The structure of the gene used by this technique obviates the need for real values to be encoded on the chromosome.
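As a rough sketch of the idea (the gene layout and search loop below are my own simplification, not the thesis design): each gene encodes an integer-coded change to one weight, so the chromosome never carries a real value directly, and a simple accept-if-better loop stands in for the full genetic algorithm:

```python
import random

random.seed(1)

# Hypothetical gene: (weight_index, sign_bit, magnitude_code).
# The decoded change is +/- base**-magnitude_code, so only small
# integers ever appear on the chromosome.

def decode_delta(gene, base=2.0):
    _, sign, mag = gene
    return (1.0 if sign else -1.0) * base ** -mag

def apply_chromosome(weights, chrom):
    w = list(weights)
    for gene in chrom:
        w[gene[0] % len(w)] += decode_delta(gene)
    return w

def random_gene(n_weights, max_mag=4):
    return (random.randrange(n_weights),
            random.randrange(2),
            random.randrange(max_mag + 1))

def error(w, target):
    # Stand-in for the network's error on a training set.
    return sum((a - b) ** 2 for a, b in zip(w, target))

def evolve_deltas(weights, target, pop=40, gens=60, genes=3):
    for _ in range(gens):
        cands = [[random_gene(len(weights)) for _ in range(genes)]
                 for _ in range(pop)]
        best = min(cands,
                   key=lambda c: error(apply_chromosome(weights, c), target))
        trial = apply_chromosome(weights, best)
        if error(trial, target) < error(weights, target):
            weights = trial  # accept the deltas only if the network improves
    return weights

target = [0.5, -0.25, 1.0]
w = evolve_deltas([0.0, 0.0, 0.0], target)
```

The point of the encoding is that mutation and crossover operate on small integers while the network still receives graded, real-valued weight updates.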

This thesis tests the technique on a number of problems for both feed-forward and recurrent neural networks. Standard parity, encoder and converter problems are tested for feed-forward networks, and sequence generation and time series prediction problems are tested for recurrent networks. Several different recurrent architectures are tested, including simple recurrent networks and real-time recurrent networks. A method of stopping training to prevent overtraining is also investigated.
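The stopping method mentioned above can be sketched generically. The interface below (a training-step callback and a validation-error callback, with a patience threshold) is an assumption for illustration, not the thesis procedure:

```python
def train_with_early_stopping(step, val_error, max_iters=1000, patience=10):
    """Stop training once validation error has failed to improve for
    `patience` consecutive iterations; return the best error seen."""
    best, since_best = float("inf"), 0
    for _ in range(max_iters):
        step()  # advance training (e.g. one generation of the search)
        err = val_error()
        if err < best:
            best, since_best = err, 0
        else:
            since_best += 1
            if since_best >= patience:
                break  # validation error has stalled or is rising
    return best

# Toy demonstration: validation error falls, then rises (overtraining).
errs = [5.0, 4.0, 3.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
i = [0]
best = train_with_early_stopping(
    step=lambda: i.__setitem__(0, i[0] + 1),
    val_error=lambda: errs[min(i[0], len(errs) - 1)],
    patience=2,
)
```

Monitoring a held-out validation set rather than training error is what lets the loop halt at the point of best generalisation instead of best fit.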

Results of these tests indicate that the technique developed in this work is capable of training both feed-forward and recurrent neural networks, and that in some cases it compares favourably with, or is superior to, gradient descent techniques.

The technique developed for this work has been shown to be promising for both feed-forward and recurrent neural networks, and when used in conjunction with early-stopping techniques it can overcome some of the problems associated with gradient descent methods. Further investigation into the determination of optimal control parameters is necessary to improve its performance.



Publications etc.


Book Chapters

Magazine and Journal Articles

Conference Papers

Professional Activities


