Selected publications

BayesianRLequationsFixed.pdf

This is my most cited paper: it triggered a field of research known as Posterior Sampling for Reinforcement Learning (PSRL). The idea is to split RL into two parts: (i) Bayesian estimation of the environment model, expressing uncertainty; (ii) to determine a decision/control policy, occasionally resample a possible world from the environment model and solve it for an optimal strategy, then follow that strategy to gather new observations and reduce uncertainty. Recent theoretical results show that PSRL can provide a near-optimal exploration/exploitation trade-off. A sketch of the loop follows the reference below.

Strens M J A, 2000. A Bayesian framework for reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning.

Use the version linked above for corrected equation formatting.

https://www.ece.uvic.ca/~bctill/papers/learning/Strens_2000.pdf
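
A minimal sketch of the two-part loop described above, for a small tabular MDP. The environment interface (`env.reset()`, `env.step()`), the known reward table `R`, and the Dirichlet transition priors are illustrative assumptions, not the paper's exact formulation:

```python
# PSRL sketch: (i) maintain a posterior over the environment;
# (ii) occasionally sample one "possible world", solve it, and act greedily.
import numpy as np

def psrl(env, n_states, n_actions, R, gamma=0.95, episodes=100, horizon=50):
    # Dirichlet(1,...,1) prior over the next-state distribution of each (s, a).
    counts = np.ones((n_states, n_actions, n_states))
    for _ in range(episodes):
        # (i) Resample a possible world: one transition model from the posterior.
        P = np.array([[np.random.dirichlet(counts[s, a])
                       for a in range(n_actions)] for s in range(n_states)])
        # (ii) Solve the sampled MDP by value iteration, then follow its policy.
        Q = np.zeros((n_states, n_actions))
        for _ in range(200):
            Q = R + gamma * P @ Q.max(axis=1)
        s = env.reset()                      # hypothetical environment interface
        for _ in range(horizon):
            a = int(Q[s].argmax())
            s2, done = env.step(a)
            counts[s, a, s2] += 1            # new observation reduces uncertainty
            s = s2
            if done:
                break
    return Q
```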


Differential evolution operators can be used in a population MCMC framework to build a very effective vector-space sampler or optimizer (sketched after the reference):

Strens M J A, Bernhardt M, Everett N, 2002. Markov chain Monte Carlo sampling using direct search optimization. In Proceedings of the Nineteenth International Conference on Machine Learning.

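
A hedged sketch of the combination, assuming only a log-density `log_pi` on a real vector space; the step scale `gamma` and the small jitter are illustrative tuning choices, not values from the paper:

```python
# Differential-evolution proposals inside population MCMC: each chain steps
# along the difference of two other population members, and a Metropolis
# test (the proposal is symmetric given the rest of the population) keeps
# the sampler exact.
import numpy as np

def de_mcmc(log_pi, pop, n_iters=1000, gamma=0.8, jitter=1e-4, rng=None):
    rng = rng or np.random.default_rng()
    pop = np.array(pop, dtype=float)            # shape (n_chains, dim)
    logp = np.array([log_pi(x) for x in pop])
    n, d = pop.shape
    for _ in range(n_iters):
        for i in range(n):
            # Two distinct other members set the direction and scale of the move.
            j, k = rng.choice([m for m in range(n) if m != i], 2, replace=False)
            prop = (pop[i] + gamma * (pop[j] - pop[k])
                    + jitter * rng.standard_normal(d))
            lp = log_pi(prop)
            if np.log(rng.random()) < lp - logp[i]:
                pop[i], logp[i] = prop, lp
    return pop
```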


A principled version of genetic algorithms, usable for discrete sampling or optimization (sketched after the reference):

Strens M J A, 2003. Evolutionary MCMC sampling and optimization in discrete spaces. In Proceedings of the Twentieth International Conference on Machine Learning.

https://cdn.aaai.org/ICML/2003/ICML03-096.pdf
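
A hedged sketch for binary strings: exchanging bits under a random mask is its own inverse, so a uniform-crossover move is a symmetric proposal on a pair of population members, and a Metropolis test on the product target keeps the sampler correct. The target `log_pi` and the move mix are illustrative assumptions:

```python
# Genetic-style crossover as a valid MCMC move on binary strings.
import numpy as np

def crossover_mcmc(log_pi, pop, n_iters=1000, rng=None):
    rng = rng or np.random.default_rng()
    pop = np.array(pop, dtype=np.int8)          # shape (n_chains, n_bits)
    logp = np.array([log_pi(x) for x in pop])
    n, d = pop.shape
    for _ in range(n_iters):
        i, j = rng.choice(n, 2, replace=False)
        mask = rng.random(d) < 0.5              # uniform crossover mask
        xi = np.where(mask, pop[j], pop[i])     # children swap the masked bits
        xj = np.where(mask, pop[i], pop[j])
        li, lj = log_pi(xi), log_pi(xj)
        # Accept/reject the pair against the product target pi(x) * pi(y).
        if np.log(rng.random()) < (li + lj) - (logp[i] + logp[j]):
            pop[i], pop[j] = xi, xj
            logp[i], logp[j] = li, lj
        # In practice one would mix in single-bit mutation moves as well.
    return pop
```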

 

Learning strategies where evaluating performance (across a large set of scenarios) is expensive; the paired-comparison idea is sketched after the reference:

Strens M J A, Moore A W, 2003. Policy search using paired comparisons. Journal of Machine Learning Research.

https://dl.acm.org/doi/10.5555/944919.944959
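
A sketch of the paired-comparison idea under illustrative assumptions: evaluate two candidate policies on the same sampled scenarios (common random numbers), so scenario-to-scenario noise cancels in the paired differences, and far fewer evaluations are needed to tell which policy is better. The `evaluate(policy, seed)` interface and the simple t-statistic stopping rule are assumptions, not the paper's exact test:

```python
# Decide which of two policies is better from paired evaluations on shared
# scenarios, stopping as soon as the evidence is strong.
import numpy as np

def paired_compare(evaluate, policy_a, policy_b, max_scenarios=200,
                   threshold=3.0, rng=None):
    rng = rng or np.random.default_rng()
    diffs = []
    for _ in range(max_scenarios):
        seed = int(rng.integers(2**31))     # same scenario for both policies
        diffs.append(evaluate(policy_a, seed) - evaluate(policy_b, seed))
        if len(diffs) >= 10:
            d = np.array(diffs)
            t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)) + 1e-12)
            if abs(t) > threshold:          # stop early once confident
                return ('a' if t > 0 else 'b'), len(d)
    return ('a' if np.mean(diffs) > 0 else 'b'), len(diffs)
```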

  

The competitive attentional tracker: effective track-before-detect at low signal-to-noise ratios and in clutter. The paper describes a dynamic competitive system that tracks all motions in a scene at the same time ("track everything"):

Strens M J A, Gregory I N, 2003. Tracking in cluttered images. Image and Vision Computing.

https://www.sciencedirect.com/science/article/abs/pii/S0262885603000751

 

Hierarchical Importance with Nested Training Samples (the HINTS algorithm): sampling from a function that is the exponential of a sum, while avoiding evaluation of the full sum at every step. This is efficient for Bayesian inference from large datasets. It is a form of rejection sampling with highly directed proposals constructed using subsets of the data (sketched after the reference):

Strens M J A, 2004. Efficient hierarchical MCMC for policy search. In Proceedings of the Twenty-First International Conference on Machine Learning.

https://icml.cc/Conferences/2004/proceedings/papers/177.pdf
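
A two-level sketch of the idea under illustrative assumptions: the target is pi(theta) proportional to exp(sum_i f_i(theta)); a short inner chain on a cheap subset of the terms produces a highly directed proposal, which is then tested once against the full sum. Because the inner chain is reversible with respect to the subset target, the correction in the acceptance ratio needs only the subset density at the two endpoints. `log_f(i, theta)` (one term of the sum), the step size, and the subset size are assumptions for the sketch:

```python
# One composite HINTS-style move: explore cheaply on a data subset, pay for
# the full sum only once per composite move.
import numpy as np

def hints_step(theta, log_f, n_terms, subset_size=64, inner_steps=20,
               step=0.1, rng=None):
    rng = rng or np.random.default_rng()
    idx = rng.choice(n_terms, subset_size, replace=False)
    scale = n_terms / subset_size            # rescale subset sum to full size
    sub = lambda t: scale * sum(log_f(i, t) for i in idx)
    full = lambda t: sum(log_f(i, t) for i in range(n_terms))

    # Inner chain: cheap random-walk Metropolis on the subset target.
    start_sub = sub(theta)
    cur, cur_sub = theta.copy(), start_sub   # theta: a NumPy parameter vector
    for _ in range(inner_steps):
        prop = cur + step * rng.standard_normal(cur.shape)
        lp = sub(prop)
        if np.log(rng.random()) < lp - cur_sub:
            cur, cur_sub = prop, lp

    # Outer test: reversibility of the inner chain w.r.t. the subset target
    # means the proposal ratio reduces to the subset densities at the ends.
    log_a = (full(cur) - full(theta)) - (cur_sub - start_sub)
    if np.log(rng.random()) < log_a:
        return cur
    return theta
```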


The use of stochastic task models for dynamic replanning in multi-robot task allocation:

Strens M J A, Windelinckx N, 2005. Combining Planning with Reinforcement Learning for Multi-robot Task Allocation. Springer Lecture Notes in Computer Science, Volume 3394.

https://link.springer.com/chapter/10.1007/978-3-540-32274-0_17

 

Some POMDP problems have special structure, allowing efficient solution:

Strens M J A, 2005. Learning multi-agent search strategies. Springer Lecture Notes in Computer Science, Volume 3394, pp. 245-259.

https://link.springer.com/chapter/10.1007/978-3-540-32274-0_16

 

My PhD thesis. Some of it looks quite naive (e.g. the treatment of partial observability), but the section on reinforcement learning for visual attention does foresee today's hot topics of attentional processing and working memory. The vision system presented in chapters 11-14 used attentional mechanisms (both bottom-up and learnt top-down) to find and recognise objects in images. Notably, it was also capable of one-shot learning of new visual categories.

Strens M J A, 1999. Learning, cooperation and feedback in pattern recognition. PhD thesis, King's College London.

There's a local copy with corrected typesetting in Uploaded files.

  

Google Scholar Profile with a full list of publications.