PhD Alumni, graduated 2012
Email: adam * a t * adampocock.com
I left Manchester in September 2012, after 8 years, and I'm now a researcher at Oracle Labs. I currently work in the Information Retrieval and Machine Learning group on the Burlington, MA campus. This website hosts my PhD thesis (Feature Selection via Joint Likelihood, PDF), software from my PhD, and my academic papers. I contribute to the IRML blog, where my new papers and research will be posted (probably before I update this website).
My PhD thesis was awarded the BCS Distinguished Dissertation award for 2013. This means you can buy a printed copy of it on Amazon or the BCS website, though it's identical to the PDF version bar the fancy cover. The University did a writeup here, complete with my graduation photo.
Before that I was a PhD student at the School of Computer Science at the University of Manchester, UK. I was a member of both the Advanced Processor Technologies (APT) group and the Machine Learning & Optimisation (MLO) group in Manchester. I used to TA, running labs and examples classes for undergraduates.
I started the PhD in October 2008, supervised by Dr. Gavin Brown and Dr. Mikel Luján. Before that I graduated from the MSc in Advanced Computer Science in 2008, and a BSc in Computer Science and Mathematics in 2007, both at the University of Manchester. My MSc thesis developed new feature selection techniques using information theory, and my BSc project developed a 3D animation system.
I was a member of the Intelligent Thread-Level Speculation (iTLS) project at Manchester, in collaboration with a team at the University of Edinburgh, both funded by an EPSRC grant. The iTLS project worked on using Machine Learning to drive Thread-Level Speculation, automatically parallelising serial programs. In Manchester our work focused on parallelising Object-Oriented programs in Java and building a TLS system in the Jikes RVM.
My PhD research interests focused on online ensemble learning, feature selection using information theory, and Markov Blanket discovery algorithms. This work resulted in a unification of 20 years' worth of different information theoretic feature selection criteria, showing how each criterion maximises a conditional likelihood under specific factorisation assumptions. Further work showed how to derive new feature selection algorithms by using a different likelihood, or by taking a Maximum a Posteriori (MAP) approach rather than a Maximum Likelihood one.
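To give a flavour of what an information theoretic feature selection criterion looks like, here is a minimal Python sketch of the simplest one in that family: scoring each discrete feature independently by its empirical mutual information with the class label (often called MIM). This is purely illustrative, not the thesis code or the FEAST/MIToolbox software, and the function names are my own.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y), in bits, between two
    discrete sequences of equal length."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        # log2( p(x,y) / (p(x) p(y)) ), with counts cancelling the 1/n factors
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

def rank_features(feature_columns, labels):
    """Rank features by I(X_k; Y) alone, ignoring feature interactions.
    Returns (feature_index, score) pairs, best first."""
    scores = [(k, mutual_information(col, labels))
              for k, col in enumerate(feature_columns)]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

For example, `rank_features([[0, 0, 1, 1], [0, 1, 0, 1]], [0, 0, 1, 1])` ranks the first feature top with a score of 1 bit (it determines the label exactly) and the second bottom with 0 bits (it is independent of the label). The richer criteria unified in the thesis differ in how they additionally account for redundancy and interaction between already-selected features.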