Multi-armed bandits

On the finite-sample and asymptotic error control of a randomization-probability test for response-adaptive clinical trials

It is now well known that optimal response-adaptive designs for data collection offer great potential for optimizing expected outcomes, but pose multiple challenges for inferential goals. In many settings, such as phase-II or …

Thompson Sampling for Zero-Inflated Count Outcomes With an Application to the Drink Less Mobile Health Study

Mobile health (mHealth) interventions often aim to improve distal outcomes, such as clinical conditions, by optimizing proximal outcomes through just-in-time adaptive interventions. Contextual bandits provide a suitable framework for customizing …

Online sequential-decision making via bandit algorithms, modeling considerations for better decisions (Invited Talk @ BMS-ANed)

The multi-armed bandit (MAB) framework holds great promise for optimizing sequential decisions online as new data arise. For example, it could be used to design adaptive experiments that can result in better participant outcomes and improved …

Online sequential-decision making via bandit algorithms, modeling considerations for better decisions (Seminar @ Department of Statistics, Padua University)

The multi-armed bandit (MAB) framework holds great promise for optimizing sequential decisions online as new data arise. For example, it could be used to design adaptive experiments that can result in better participant outcomes and improved …

Online sequential-decision making via bandit algorithms, modeling considerations for better decisions (Keynote Talk @ ALBECS-2024, 19th International Conference on Persuasive Technology 2024)

The multi-armed bandit (MAB) framework holds great promise for optimizing sequential decisions online as new data arise. For example, it could be used to design adaptive experiments that can result in better participant outcomes and improved …

Using Adaptive Bandit Experiments to Increase and Investigate Engagement in Mental Health

Digital mental health (DMH) interventions, such as text-message-based lessons and activities, offer immense potential for accessible mental health support. While these interventions can be effective, real-world experimental testing can further …

Multinomial Thompson sampling for rating scales and prior considerations for calibrating uncertainty

Bandit algorithms such as Thompson sampling (TS) have been put forth for decades as useful tools for conducting adaptively-randomised experiments. By skewing the allocation toward superior arms, they can substantially improve particular outcomes of …
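The allocation-skewing behaviour of Thompson sampling described in this abstract can be illustrated with a minimal sketch. Note this uses a simple Beta-Bernoulli model for binary rewards, not the multinomial rating-scale model the paper itself develops, and the arm means and helper names here are illustrative assumptions:

```python
import random

def thompson_step(successes, failures):
    """Draw one Beta posterior sample per arm; play the arm with the largest draw."""
    samples = [random.betavariate(s + 1, f + 1)  # Beta(1, 1) uniform prior
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda a: samples[a])

# Simulate a two-armed Bernoulli bandit with (hypothetical) true means 0.3 and 0.6.
random.seed(0)
true_means = [0.3, 0.6]
successes, failures = [0, 0], [0, 0]
for _ in range(2000):
    arm = thompson_step(successes, failures)
    if random.random() < true_means[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
```

Because posterior draws for the better arm are stochastically larger, allocation concentrates on it over time, which is exactly the outcome-improving skew (and the inferential difficulty) the abstract refers to.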

Modeling considerations when optimizing adaptive experiments under the reinforcement learning framework (Invited Talk @ ICSDS2023)

Artificial intelligence tools powered by machine learning have shown considerable improvements in a variety of experimental domains, from education to healthcare. In particular, the reinforcement learning (RL) and the multi-armed bandit (MAB) …

On the finite-sample and asymptotic validity of an allocation-probability test for adaptively-collected data (Invited Talk @ StaTalk2023)

Response-adaptive designs, either based on simple rules, urn models, or bandit problems, are of increasing interest among both theoretical and practical communities. In particular, regret-optimising bandit algorithms like Thompson sampling hold the …

Efficient Inference Without Trading-off Regret in Bandits. An Allocation Probability Test for Thompson Sampling (Invited Talk @ JSM2023)

Using bandit algorithms to conduct adaptive randomised experiments can minimise regret, but it poses major challenges for statistical inference. Recent attempts to address these challenges typically impose restrictions on the exploitative nature of …