About

Bio

Joseph is a doctoral student at the University of Southampton and the Alan Turing Institute. He completed an integrated MEng degree in Computer Science at the University of Southampton, specialising in machine learning. Joseph continued to pursue his interest in machine learning and is now in the final year of his PhD.

His PhD research focuses on interpretable machine learning – the process of unpacking the “black box” of artificial intelligence and understanding why models exhibit certain behaviours. Beyond his academic work, Joseph has been involved in industry through a six-month Applied Science Internship with Amazon Prime Video in London, where he led a research project on interpretable time series classification.

Joseph is motivated by collaboration across academia and industry, with a strong interest in helping others apply and understand machine learning in their domains. He believes that a drive towards openness, transparency, and explainability in artificial intelligence will allow recent advances in machine learning to spread to and benefit other fields. He has previously collaborated on projects involving medicine, climate change, and law, which have led to publications in leading journals and conferences.

Joseph also has experience in the start-up industry: drawing on his specialist machine learning knowledge, he contributed to a start-up within the University of Southampton during his master’s degree, and he co-founded the Entrepreneurship Interest Group at the Alan Turing Institute. During his PhD he has been heavily involved in undergraduate teaching, running lab sessions for modules such as Intelligent Agents and Foundations of Machine Learning.

Research Interests

Joseph’s doctoral research primarily focuses on interpretable machine learning. With the recent rise and successes of deep learning, increasing concern has been raised over the “black box” nature of modern machine learning models and the lack of understanding of how they make predictions. Interpretable machine learning represents a drive towards open, transparent, and fair models that are easily understandable. Through analysis of models and data, interpretable machine learning can be used to expose bias and shine a light on the inner workings of machine learning systems.

Within the broad scope of interpretable machine learning, Joseph’s specific focus is on Multiple Instance Learning (MIL), where data is organised into bags of instances and each bag is given a single label. This is in contrast to standard supervised learning, in which every instance is given its own label. MIL decision-making often depends on only a few key instances in a bag, so interpreting the model means identifying these key instances and understanding why they drive the decisions, as the sketch below illustrates.
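To make the MIL setup concrete, here is a minimal, illustrative sketch of attention-based MIL pooling (in the spirit of attention MIL models such as Ilse et al.), written with PyTorch. The class name, dimensions, and data are hypothetical and not taken from Joseph’s work; the point is that the learned attention weights give a per-instance importance score, which is one common way of identifying the key instances in a bag.

import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    # Pools a variable-sized bag of instance embeddings into one bag embedding.
    # The attention weights indicate which instances the model relies on.
    def __init__(self, embed_dim: int, attn_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, bag: torch.Tensor):
        # bag: (num_instances, embed_dim) -- one bag with a single label
        weights = torch.softmax(self.score(bag), dim=0)  # (num_instances, 1)
        bag_embedding = (weights * bag).sum(dim=0)       # (embed_dim,)
        return bag_embedding, weights.squeeze(-1)

# Hypothetical bag of 5 instances, each a 32-dimensional embedding.
pool = AttentionMILPooling(embed_dim=32)
bag = torch.randn(5, 32)
bag_embedding, instance_weights = pool(bag)
print(instance_weights)  # per-instance importance scores, summing to 1

A bag-level classifier would then predict the single bag label from bag_embedding, while instance_weights highlights which instances contributed most to that prediction.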

As part of his PhD, Joseph collaborated with the Cancer Science Research Group at the University of Southampton. The collaboration focused on predicting patient response to cancer treatment, and utilised interpretable machine learning to understand the link between histopathology data and patient response. He has also worked on interpretable computer vision for high-resolution satellite imagery, with applications in climate change monitoring.