Research
My long-term research goal is to understand, and mimic, biological adaptation in order to design sentient agents from first principles. To this end, I investigate which Bayesian computations support these adaptive mechanisms, using both human and in-silico models.
Noor Sajid, Andrea Gajardo-Vidal, Justyna Ekert, Diego Lorca-Puls, PLORAS Team, Thomas Hope, David Green, Karl Friston, Cathy Price
Preprint, 2022 ➡️ arXiv / talk
I am investigating biological adaptation under two distinct but complementary contexts:
Functional recovery mechanisms post-brain damage
Most patients who suffer functional impairments after brain damage, i.e., a particular internal perturbation, improve over time. We have established a theoretical account of how resistance to functional loss could be supported by particular recovery mechanisms.
Bayesian approaches to modelling adaptive behaviour
Biological learning encompasses both the capacity to acquire complex skills to solve particular tasks and the ability to adapt quickly when those skills have to be deployed within a different context, e.g., we can easily modulate applied force in the presence of an unexpected perturbation.
Deep Inference
We developed neural architectures for building deep active inference models that operate in complex state spaces:
Deep active inference agents using Monte-Carlo methods.
Zafeirios Fountas, Noor Sajid, Pedro Mediano, Karl Friston
Advances in Neural Information Processing Systems, 2020 ➡️ arXiv / code / poster
Exploration and preference satisfaction trade-off in reward-free learning
Noor Sajid, Panagiotis Tigas, Alexey Zakharov, Zafeirios Fountas, Karl Friston
URL Workshop, ICML, 2021 ➡️ arXiv / page / poster
These models can learn environmental dynamics efficiently while maintaining task performance, or develop their own preferences in the absence of external rewards.
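As a rough sketch of how such agents weigh exploration against preference satisfaction, a policy's expected free energy can be decomposed into risk (divergence of predicted outcomes from preferred outcomes) and ambiguity (expected observation uncertainty). The toy numbers below are purely illustrative and are not taken from our published models:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def efe(q_s, A, log_c):
    """Expected free energy of a single policy step.

    q_s   : predicted hidden-state distribution under the policy (n_states,)
    A     : likelihood matrix p(o|s), shape (n_obs, n_states)
    log_c : log preferences over outcomes (n_obs,)

    G = risk + ambiguity, where
      risk      = KL[ q(o) || p_pref(o) ]
      ambiguity = E_q(s)[ H[p(o|s)] ]
    """
    q_o = A @ q_s                                          # predicted outcomes
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - log_c))
    H_o_given_s = -np.sum(A * np.log(A + 1e-16), axis=0)   # entropy per state
    ambiguity = q_s @ H_o_given_s
    return risk + ambiguity

# Toy world: two states, two outcomes; outcome 0 is preferred.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])              # p(o|s)
log_c = np.log(np.array([0.8, 0.2]))   # log preferences over outcomes

# Policy 1 leads mostly to state 0, policy 2 mostly to state 1.
G1 = efe(np.array([0.9, 0.1]), A, log_c)
G2 = efe(np.array([0.1, 0.9]), A, log_c)

# A softmax over negative expected free energy gives the policy distribution.
policy_probs = softmax(-np.array([G1, G2]))
```

Here the agent favours policy 1, whose predicted outcomes both match its preferences and are less ambiguous; a purely reward-free agent would instead score policies by expected information gain alone.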
🥡 Our extensions provide a preliminary framework to develop biologically-inspired intelligent agents with applications in both artificial intelligence and neuroscience
Discrete Inference
We investigated how biological or artificial agents can make inferences about their environment and determine Bayes-optimal behaviour despite volatility, i.e., a particular external perturbation:
Active inference: demystified and compared
Noor Sajid, Philip Ball, Thomas Parr, Karl Friston
Neural Computation, 2021 ➡️ paper / code / video
Active inference, Bayesian optimal design, and expected utility
Noor Sajid, Lancelot Da Costa, Thomas Parr, Karl Friston
In Cogliati Dezza, I., Schulz, E., & Wu, C. (Eds.), The Drive for Knowledge: The Science of Human Information-Seeking. Cambridge University Press, 2022 ➡️ paper
🥡 We have demonstrated that an active inference agent can continuously adapt to changing contexts by updating its beliefs. This provides a robust framework to design agents that evince Bayes-optimal behaviour.
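A minimal sketch of this kind of continual belief updating, assuming a toy two-context world with a hypothetical per-step switch probability (none of the numbers come from our models):

```python
import numpy as np

def update_belief(prior, likelihood_obs, transition):
    """One predict-then-update step of discrete Bayesian filtering.

    prior          : current belief over contexts (n,)
    likelihood_obs : p(observation | context) for the observation received (n,)
    transition     : p(context_t | context_{t-1}), shape (n, n); off-diagonal
                     mass encodes how volatile the context is believed to be
    """
    predicted = transition @ prior           # allow for a context switch
    posterior = likelihood_obs * predicted   # weight by the evidence
    return posterior / posterior.sum()

# Two contexts; observations favour the true context with probability 0.8.
vol = 0.1  # believed switch probability per step (illustrative)
T = np.array([[1 - vol, vol],
              [vol, 1 - vol]])
like = {0: np.array([0.8, 0.2]),   # observation typical of context 0
        1: np.array([0.2, 0.8])}   # observation typical of context 1

belief = np.array([0.5, 0.5])
for obs in [0, 0, 0, 1, 1, 1]:     # the hidden context switches midway
    belief = update_belief(belief, like[obs], T)
```

Because the transition model always leaks some probability to the other context, the agent never becomes dogmatic and re-adapts within a few observations once the context switches.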
Generalised Inference
We offer an alternative account of behavioural variability using Rényi divergences and their associated variational bounds. Rényi divergences are a general class of divergences, indexed by an α parameter:
Bayesian brains and the Rényi Divergence
Noor Sajid*, Francesco Faccio*, Lancelot Da Costa, Thomas Parr, Jürgen Schmidhuber, Karl Friston
Neural Computation, in press ➡️ arXiv / poster / slides
Under the Rényi bound, optimising with α → 0+ yields an approximate posterior that covers the joint distribution, i.e., low-probability regions of the joint distribution may be over-estimated (mass-covering). Conversely, α → +∞ favours posterior distributions that best fit the mode with the most mass, i.e., parts of the joint distribution may be excluded (mode-seeking).
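A toy numerical sketch of this α-dependence, using the discrete form of the Rényi divergence (the distributions below are invented for illustration, not taken from the paper):

```python
import numpy as np

def renyi_divergence(q, p, alpha):
    """D_α(q || p) = 1/(α-1) · log Σ_i q_i^α · p_i^(1-α), for α > 0, α ≠ 1."""
    return np.log(np.sum(q**alpha * p**(1 - alpha))) / (alpha - 1)

# Bimodal target: a dominant mode at state 0 and a smaller mode at state 2.
p = np.array([0.50, 0.10, 0.35, 0.05])

q_cover = np.array([0.40, 0.10, 0.40, 0.10])   # spreads mass over both modes
q_mode  = np.array([0.85, 0.05, 0.05, 0.05])   # concentrates on the big mode

# Small α: the mass-covering candidate attains the lower divergence.
d_cover_small = renyi_divergence(q_cover, p, alpha=0.1)
d_mode_small  = renyi_divergence(q_mode, p, alpha=0.1)

# Large α: the mode-seeking candidate attains the lower divergence.
d_cover_large = renyi_divergence(q_cover, p, alpha=50.0)
d_mode_large  = renyi_divergence(q_mode, p, alpha=50.0)
```

At α = 0.1 the broad posterior is preferred, whereas at α = 50 the divergence is dominated by the largest ratio q/p, so the concentrated posterior wins, mirroring the mass-covering versus mode-seeking behaviour described above.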
🥡 Given these differences in posterior estimates, distinct action-selection strategies can manifest.