
This project is a collaboration between 1) researchers at MILA, the Quebec Artificial Intelligence Institute, headed by Yoshua Bengio; 2) researchers at FHI, the Future of Humanity Institute at the University of Oxford; and 3) Jonathan Simon, a philosopher of mind at the University of Montréal.

Project Postdoctoral Researchers:

George Deane
Axel Constant

We ask what it would take for machines to be conscious and why it might matter if they are. The goal of artificial general intelligence may seem always to recede toward the horizon, but in the animal kingdom, at least, consciousness may well arrive before general intelligence. Some suggest that fish or even bees are conscious (e.g., here). Artificial consciousness is therefore a concern for the very near future, even if artificial general intelligence is further down the road.

Bengio’s research program explores how the functions associated with consciousness can improve the capacities of machine learning systems. This work involves elaborating on and testing the leading theories of those functions, such as global workspace theory, predictive coding theory and attention schema theory (to name just a few), by implementing them in neural networks. This promises advances not only in the functionality of AI, but also in our understanding of the relevant theories, and perhaps the development of new ones. See for example here and here.
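To give a concrete flavor of this kind of implementation work, the sketch below shows a global-workspace-style attention bottleneck, assuming PyTorch; the class and parameter names are illustrative, not drawn from any of the project's actual code. Specialist modules compete, via attention, to write into a small shared workspace, whose contents are then broadcast back to all modules:

```python
# Minimal global-workspace-style bottleneck (illustrative sketch, assumes PyTorch).
import torch
import torch.nn as nn


class WorkspaceBottleneck(nn.Module):
    """Specialist modules compete via attention to write into a small
    shared workspace; the workspace is then broadcast back to all
    specialists, loosely following global workspace theory."""

    def __init__(self, dim: int, workspace_slots: int = 4, heads: int = 4):
        super().__init__()
        # Learned queries serve as the limited-capacity workspace slots.
        self.slots = nn.Parameter(torch.randn(workspace_slots, dim))
        self.write = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, specialists: torch.Tensor) -> torch.Tensor:
        # specialists: (batch, n_specialists, dim)
        slots = self.slots.unsqueeze(0).expand(specialists.size(0), -1, -1)
        # Write phase: slots attend over specialists (competition for access).
        workspace, _ = self.write(slots, specialists, specialists)
        # Broadcast phase: every specialist reads the workspace back.
        broadcast, _ = self.read(specialists, workspace, workspace)
        return specialists + broadcast  # residual update


if __name__ == "__main__":
    x = torch.randn(2, 8, 64)  # 2 examples, 8 specialist states, 64-dim each
    print(WorkspaceBottleneck(dim=64)(x).shape)  # torch.Size([2, 8, 64])
```

The small number of slots is the point: the workspace forces a selection among competing specialists, and it is this bottleneck that global workspace theory identifies with conscious access.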

FHI has a research group dedicated to advancing our understanding of when and whether AI systems might be conscious, and of the moral and strategic ramifications (e.g. moral patienthood) of various kinds of potential machine consciousness. FHI sees concrete work on existing and near-term systems, of the sort that Bengio is pursuing, as one of the most viable paths for gaining a better understanding of machine consciousness. See for example here.

Jonathan Simon adds the perspective of a philosopher of mind, concerned with connecting developments in neuroscience and machine learning engineering to first-personal, phenomenological, metaphysical and normative considerations about the nature of consciousness. A link to a sample of Simon’s current work on the subject is here.

Specific questions that Simon proposes to address include:

  • How soon might artificial systems be conscious?
  • Can we identify meaningful tests for machine consciousness, or are such efforts misguided?
  • How simple or sparse can the neural network associated with a conscious entity be?
  • Is there a threshold between unconscious and conscious artificial systems, or will it be a matter of degree?
  • To what extent is consciousness a question of scale (the number of parameters in one’s neural network) and to what extent is it a question of specific architecture?
  • Can consciousness, or specific aspects of consciousness, be understood as specific inductive biases or Bayesian priors?
  • Do anti-connectionist and anti-computationalist critiques amount to no-go theorems, or to guidelines for neural network design?
  • Is neuromorphic hardware necessary for consciousness?
  • How can insights about the embodiment or goal-directedness of consciousness be accommodated in the design of artificial neural networks?
  • What do adversarial examples teach us about the (non)consciousness of image classifiers?
  • How do the value functions learned by reinforcement learning agents differ from genuine goals or desires?
  • How do transitions between semantic vector embeddings in systems like GPT-3 differ from genuine associative thinking?
  • How might artificial conscious agents (living in the cloud) be individuated from one another, or counted?
  • How might the structure of their consciousness differ from the structure of ours?
  • How might their existence alter our normative landscape?
  • In what ways should precautionary principles guide our progress in these areas?

Inquiries to: digitalmindsmontreal@gmail.com
