About

Bio

Joseph is currently an AI Research Engineer at Helsing, whose mission is AI to serve our democracies. He completed a PhD with the University of Southampton and the Alan Turing Institute, and an Integrated MEng degree in Computer Science at the University of Southampton, specialising in machine learning. He was awarded a first-class degree, won the Winton Capital Management Prize for the top student in Computer Science, and received the prize for best Master's Project.

His PhD research focused on interpretable machine learning – the process of unpacking the “black box” of artificial intelligence and understanding why models exhibit certain behaviours. Alongside his academic work, Joseph gained industry experience through a six-month Applied Science Internship with Amazon Prime Video in London, where he led a research project on interpretable time series classification.

Joseph is motivated by collaboration across academia and industry, with a strong interest in helping others apply and understand machine learning in their own domains. He believes that a drive towards openness, transparency, and explainability in artificial intelligence will allow recent advances in machine learning to spread into other fields. He has previously collaborated on projects in medicine, climate change, and law, which led to publications in leading journals and conferences.

Joseph also has start-up experience through his current work at Helsing and his previous work at Boon, where he was a Machine Learning Engineer during his master’s degree. He co-founded the Entrepreneurship Interest Group at the Alan Turing Institute. During his PhD he was heavily involved in undergraduate teaching, running lab sessions for modules such as Intelligent Agents and Foundations of Machine Learning.

Research Interests

Joseph’s doctoral research primarily focused on interpretable machine learning. With the recent rise and successes of deep learning, increasing concern has been raised over the “black box” nature of modern machine learning models and the lack of understanding of how they make predictions. Interpretable machine learning is a drive towards open, transparent, and fair models that are easily understandable. Through analysis of models and data, it can be used to expose bias and shine a light on the inner workings of machine learning.

Within the broad scope of interpretable machine learning, Joseph’s specific focus was on Multiple Instance Learning (MIL), where data is organised into bags of instances and each bag is given a single label. This contrasts with standard supervised learning, in which every instance receives its own label. MIL decision-making often depends on only a few key instances in a bag, so interpreting a model means identifying these key instances and understanding why they are used to make decisions.
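To make the setting concrete, the sketch below shows one common flavour of the idea: attention-based MIL pooling in PyTorch, where each bag of instance feature vectors is scored, the attention weights are normalised over the bag, and the weighted average is classified. The per-instance weights then offer one way to surface the key instances. This is a minimal illustration of the general technique, not a description of Joseph's specific methods; the model names and dimensions are invented for the example.

```python
# Minimal sketch of attention-based multiple instance learning (MIL).
# A bag is a variable-length set of instance feature vectors with a single
# bag-level label; the attention weights indicate which instances drive the
# bag-level prediction, which is one route to interpretability in MIL.
import torch
import torch.nn as nn


class AttentionMIL(nn.Module):
    def __init__(self, in_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        # Scores each instance; a softmax over the bag turns the scores
        # into interpretable per-instance weights.
        self.attention = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(in_dim, 1)  # bag-level binary logit

    def forward(self, bag: torch.Tensor):
        # bag: (num_instances, in_dim) -- one bag at a time for simplicity
        scores = self.attention(bag)            # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)  # attention over the bag
        bag_repr = (weights * bag).sum(dim=0)   # weighted pooling -> (in_dim,)
        logit = self.classifier(bag_repr)       # bag-level prediction
        return logit, weights.squeeze(-1)


# Usage: a toy bag of 5 random instances; the printed weights show which
# instances contributed most to the bag-level prediction.
model = AttentionMIL()
bag = torch.randn(5, 16)
logit, weights = model(bag)
print(torch.sigmoid(logit).item(), weights.tolist())
```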

As part of his PhD, Joseph collaborated with the Cancer Science Research Group at the University of Southampton. The collaboration focused on predicting patient response to cancer treatment, using interpretable machine learning to understand the link between histopathology data and patient response. He has also worked on interpretable computer vision for high-resolution satellite imagery in climate change monitoring, and on interpretable time series analysis in collaboration with Amazon.