Publications

Environmental Data Science

December 2023

Extending Scene-to-Patch Models: Multi-resolution Multiple Instance Learning for Earth Observation

Joseph Early, Ying-Jung Deweese, Christine Evers, Sarvapali Ramchurn

Land cover classification (LCC) and natural disaster response (NDR) are important issues in climate change mitigation and adaptation. Existing approaches that use machine learning with Earth observation (EO) imaging data for LCC and NDR often rely on fully annotated and segmented datasets. In this study, we extend our prior work on Scene-to-Patch models: an alternative machine learning approach for EO that utilizes Multiple Instance Learning (MIL). As our approach only requires high-level scene labels, it enables much faster development of new datasets while still providing segmentation through patch-level predictions, ultimately increasing the accessibility of using machine learning for EO. We propose new multi-resolution MIL architectures that outperform single-resolution MIL models and non-MIL baselines on the DeepGlobe LCC and FloodNet NDR datasets.
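
As a rough illustration of the scene-to-patch idea (a sketch of my own, not the paper's code; the patch sizes, tiny encoder, and mean pooling are all assumptions chosen for simplicity): the scene is cut into grids of patches at two resolutions, every patch is classified, and the patch logits are pooled into a single scene-level prediction, so training needs only scene labels while the patch logits double as a coarse segmentation.

# Hypothetical sketch of a multi-resolution scene-to-patch MIL model (illustration only).
import torch
import torch.nn as nn

class SceneToPatchMIL(nn.Module):
    """Classify every patch, then mean-pool patch logits into a scene-level prediction."""

    def __init__(self, n_classes: int, patch_sizes=(28, 56)):
        super().__init__()
        self.patch_sizes = patch_sizes  # two grid resolutions (assumed values)
        self.encoder = nn.Sequential(   # tiny patch encoder, shared across resolutions
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, scene):  # scene: (B, 3, H, W)
        scene_logits, patch_logits = [], []
        for p in self.patch_sizes:
            # cut the scene into non-overlapping p x p patches
            patches = scene.unfold(2, p, p).unfold(3, p, p)            # (B, 3, gh, gw, p, p)
            b, c, gh, gw, _, _ = patches.shape
            patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b * gh * gw, c, p, p)
            logits = self.encoder(patches).reshape(b, gh * gw, -1)     # per-patch predictions
            patch_logits.append(logits)                                # usable as segmentation maps
            scene_logits.append(logits.mean(dim=1))                    # MIL mean pooling over patches
        return torch.stack(scene_logits).mean(dim=0), patch_logits

model = SceneToPatchMIL(n_classes=7)
scene_pred, patch_preds = model(torch.randn(2, 3, 224, 224))  # train only against scene labels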

arXiv Preprint

November 2023

Inherently Interpretable Time Series Classification via Multiple Instance Learning

Joseph Early, Gavin KC Cheung, Kurt Cutajar, Hanting Xie, Jas Kandola, Niall Twomey

Conventional Time Series Classification (TSC) methods are often black boxes that obscure inherent interpretation of their decision-making processes. In this work, we leverage Multiple Instance Learning (MIL) to overcome this issue, and propose a new framework called MILLET: Multiple Instance Learning for Locally Explainable Time series classification. We apply MILLET to existing deep learning TSC models and show how they become inherently interpretable without compromising (and in some cases, even improving) predictive performance.
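
A toy sketch of the MIL view of time series classification (my own illustration with an assumed one-layer convolutional backbone, not the MILLET code): each timestep receives its own class logits, and pooling those logits over time gives both the series-level prediction and a built-in per-timestep explanation.

# Hypothetical illustration of MIL-style time series classification (not the MILLET implementation).
import torch
import torch.nn as nn

class MILTimeSeriesClassifier(nn.Module):
    def __init__(self, in_channels: int, n_classes: int):
        super().__init__()
        self.backbone = nn.Conv1d(in_channels, 32, kernel_size=7, padding=3)
        self.head = nn.Conv1d(32, n_classes, kernel_size=1)  # per-timestep class logits

    def forward(self, x):  # x: (batch, channels, time)
        timestep_logits = self.head(torch.relu(self.backbone(x)))  # (batch, classes, time)
        series_logits = timestep_logits.mean(dim=-1)               # MIL pooling over time
        return series_logits, timestep_logits                      # timestep logits act as the explanation

model = MILTimeSeriesClassifier(in_channels=1, n_classes=3)
series_pred, per_timestep = model(torch.randn(4, 1, 128))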

SCRIPTed: A Journal of Law, Technology, and Society

February 2023

A Risk-based Approach to AI Regulation: System Categorisation and Explainable AI Practices

Keri Grieman, Joseph Early

The regulation of artificial intelligence (AI) presents a challenging new legal frontier that is only just beginning to be addressed around the world. This article provides an examination of why regulation of AI is difficult, with a particular focus on understanding the reasoning behind automated decisions. We go on to propose a flexible, risk-based categorisation for AI based on system inputs and outputs, and incorporate explainable AI (XAI) into our novel categorisation to provide the beginnings of a functional and scalable AI regulatory framework.

AAMAS 2023

February 2023

Inferring Player Location in Sports Matches: Multi-Agent Spatial Imputation from Limited Observations

Gregory Everett, Ryan Beal, Tim Matthews, Joseph Early, Timothy Norman, Sarvapali Ramchurn

Understanding agent behaviour in Multi-Agent Systems (MAS) is an important problem in domains such as autonomous driving, disaster response, and sports analytics. Existing MAS formulations typically assume uniform timesteps and observations for all agents. In this work, we analyse the problem of agent location imputation, posed specifically in environments with non-uniform timesteps and limited agent observability (~95% missing values).
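
For intuition only, a simple baseline sketch of the imputation setup (not the paper's method): an agent is observed at a handful of non-uniformly spaced timestamps, and its position is linearly interpolated at arbitrary query times.

# Illustrative baseline only: linear interpolation of an agent's position between
# sparse observations recorded at non-uniform timestamps.
import numpy as np

def impute_positions(obs_times, obs_xy, query_times):
    """obs_times: (k,) sorted timestamps; obs_xy: (k, 2) positions; query_times: (m,)."""
    x = np.interp(query_times, obs_times, obs_xy[:, 0])
    y = np.interp(query_times, obs_times, obs_xy[:, 1])
    return np.stack([x, y], axis=1)

obs_times = np.array([0.0, 3.7, 9.2])                       # non-uniform observation times
obs_xy = np.array([[0.0, 0.0], [5.0, 2.0], [8.0, 6.0]])     # observed (x, y) positions
print(impute_positions(obs_times, obs_xy, np.linspace(0.0, 9.2, 5)))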

Tackling Climate Change with Machine Learning Workshop at NeurIPS 2022

November 2022

Scene-to-Patch Earth Observation: Multiple Instance Learning for Land Cover Classification

Joseph Early, Ying-Jung Deweese, Christine Evers, Sarvapali Ramchurn

Land cover classification (LCC), and monitoring how land use changes over time, is an important process in climate change mitigation and adaptation. Existing approaches that use machine learning with Earth observation data for LCC rely on fully annotated and segmented datasets. Creating these datasets requires a large amount of effort, and a lack of suitable datasets has become an obstacle in scaling the use of LCC. In this study, we propose Scene-to-Patch models...

BMVC 2022

November 2022

Revisiting Deep Fisher Vectors: Using Fisher Information to Improve Object Classification

Sarah Ahmed, Tayyaba Azim, Joseph Early, Sarvapali Ramchurn

Although deep learning models have become the gold standard for achieving outstanding results on a large variety of computer vision and machine learning tasks, kernel methods have not gone out of fashion because of their potential to beat deep learning performance on a number of occasions. Given the potential of kernel techniques, prior works have also proposed hybrid approaches that combine deep learning with kernel learning to complement their respective strengths and weaknesses. This work develops this idea further by introducing an improved version of Fisher kernels...

NeurIPS 2022

October 2022

Non-Markovian Reward Modelling from Trajectory Labels via Interpretable Multiple Instance Learning

Joseph Early, Tom Bewley, Christine Evers, Sarvapali Ramchurn

We generalise the problem of reward modelling (RM) for reinforcement learning (RL) to handle non-Markovian rewards. Existing work assumes that human evaluators observe each step in a trajectory independently when providing feedback on agent behaviour. In this work, we remove this assumption, extending RM to include hidden state information that captures temporal dependencies in human assessment of trajectories. We then show how RM can be approached as a multiple instance learning (MIL) problem...
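
A minimal sketch of this framing (my own illustration, with an assumed LSTM reward model and summed per-step rewards, not the paper's implementation): the trajectory is the bag, timesteps are the instances, and the model is supervised only by a trajectory-level return label while its hidden state carries the temporal context that makes the reward non-Markovian.

# Hypothetical sketch of a non-Markovian reward model trained from trajectory-level labels.
import torch
import torch.nn as nn

class LSTMRewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)  # hidden state = temporal context
        self.reward_head = nn.Linear(hidden, 1)

    def forward(self, trajectory):  # trajectory: (batch, T, obs_dim)
        hidden_states, _ = self.lstm(trajectory)
        step_rewards = self.reward_head(hidden_states).squeeze(-1)  # instance-level rewards
        return step_rewards.sum(dim=1), step_rewards                # bag label = summed return

model = LSTMRewardModel(obs_dim=8)
pred_return, step_rewards = model(torch.randn(2, 50, 8))
loss = nn.functional.mse_loss(pred_return, torch.tensor([1.0, -0.5]))  # trajectory-level labels only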

ICLR 2022

January 2022

Model Agnostic Interpretability for Multiple Instance Learning

Joseph Early, Christine Evers, Sarvapali Ramchurn

In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often only determined by a handful of key instances within a bag, making it difficult to interpret what information a classifier is using to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then go on to develop several model-agnostic approaches that meet these requirements...
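
A toy example of the MIL setup together with a generic occlusion-style attribution (an assumption for illustration, not necessarily one of the approaches proposed in the paper): each instance is scored by how much removing it from the bag changes the bag-level prediction.

# Toy MIL classifier plus a generic occlusion-style instance attribution (illustration only).
import numpy as np

def bag_predict(bag):
    """Toy MIL classifier: a bag scores highly if any instance value exceeds a threshold."""
    instance_scores = 1.0 / (1.0 + np.exp(-(bag - 5.0)))  # per-instance sigmoid scores
    return instance_scores.max()                           # max pooling over the bag

def instance_importance(bag):
    """Score each instance by how much removing it changes the bag prediction."""
    full = bag_predict(bag)
    return np.array([full - bag_predict(np.delete(bag, i)) for i in range(len(bag))])

bag = np.array([1.2, 0.4, 7.9, 2.1])  # one bag of unlabelled instances
print(bag_predict(bag), instance_importance(bag))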

British Journal of Surgery

December 2021

Predicting survival and response to therapy using diagnostic biopsies: A machine learning approach to facilitate treatment decisions for oesophageal adenocarcinoma

Saqib Rahman, Joseph Early, Ben Sharpe, et al.

Standard of care for locally advanced oesophageal adenocarcinoma is neoadjuvant chemotherapy or chemoradiotherapy followed by surgery. Only a minority of patients (<25%) derive significant survival benefit from neoadjuvant treatment, and there are no reliable means of establishing prior to treatment in whom this benefit will occur. Moreover, accurate prediction of survival prior to treatment is not possible...

Nordic Yearbook of Law and Informatics

November 2021

Non-Asimov Explanations Regulating AI Through Transparency

Chris Reed, Keri Grieman, Joseph Early

An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: What happened (or might happen) to cause this failure? And why did (or might) it happen? These are disguised normative questions – they really ask what ought to have happened, and how the humans involved ought to have behaved. If we ask the same questions about AI systems, we run into two difficulties. The first is what might be described as the ‘black box’ problem...

European Journal of Surgical Oncology

November 2020

Predicting response to neoadjuvant therapy using image capture from diagnostic biopsies of oesophageal adenocarcinoma

Saqib Rahman, Joseph Early, Matt De Vries, et al.

In locally advanced oesophageal adenocarcinoma, only a minority of patients (<25%) derive significant survival benefit from neoadjuvant treatment and there are no reliable means of establishing prior to treatment in whom this benefit will occur. In this study, we assessed the utility of features extracted from high-resolution digital microscopy of pre-treatment biopsies in predicting response to neoadjuvant therapy in a machine learning-based modelling framework.

arXiv Preprint

April 2019

Reducing Catastrophic Forgetting when Evolving Neural Networks

Joseph Early

A key stepping stone in the development of an artificial general intelligence is the production of agents that can perform multiple tasks at once instead of just one. Unfortunately, canonical methods are very prone to catastrophic forgetting (CF) - the act of overwriting previous knowledge about a task when learning a new task. Recent efforts have developed techniques for overcoming CF in learning systems, but no attempt has been made to apply these new techniques to evolutionary systems...