
This project is a collaboration between 1) researchers at MILA, the Quebec Artificial Intelligence Institute, headed by Yoshua Bengio, 2) researchers at FHI, the Future of Humanity Institute, headed by Nick Bostrom, and 3) Jonathan Simon, a philosopher of mind at the University of Montréal.

We ask what it would take for machines to be conscious and why it might matter if they are. The goal of artificial general intelligence may seem always to recede toward the horizon, but in the animal kingdom, at least, consciousness may arrive well before general intelligence. Some suggest that fish or even bees are conscious (e.g., here). Artificial consciousness may therefore be a concern for the very near future, even if artificial general intelligence is further down the road.

Bengio’s research program explores how the functions associated with consciousness can improve the capacities of machine learning systems. This work involves elaborating on and testing the best theories of those functions, such as global workspace theory, predictive coding theory and attention schema theory (to name just a few), by implementing them in neural networks. This promises advances not only in the functionality of AI but also in our understanding of the relevant theories, and perhaps the development of new ones. See for example here and here.

FHI has a research group dedicated to advancing our understanding of when and whether AI systems might be conscious, and of the moral and strategic ramifications (e.g. moral patienthood) of various kinds of potential machine consciousness. FHI sees concrete work on existing and near-term systems, of the sort that Bengio is pursuing, as one of the most viable paths for gaining a better understanding of machine consciousness. See for example here.

Jonathan Simon adds the perspective of a philosopher of mind, concerned with bridging developments in neuroscience and machine learning engineering with first-personal, phenomenological, metaphysical and normative considerations about the nature of consciousness. A link to a sample of Simon’s current work on the subject is here.

Specific questions that Simon proposes to address include:

  • How soon might artificial systems be conscious?
  • Can we identify meaningful tests for machine consciousness, or are such efforts misguided?
  • How simple or sparse can the neural network associated with a conscious entity be?
  • Is there a threshold between unconscious and conscious artificial systems, or will it be a matter of degree?
  • To what extent is consciousness a question of scale (the number of parameters in one’s neural network) and to what extent is it a question of specific architecture?
  • Can consciousness, or specific aspects of consciousness, be understood as specific inductive biases or Bayesian priors?
  • Do anti-connectionist and anti-computationalist critiques amount to no-go theorems, or to guidelines for neural network design?
  • Is neuromorphic hardware necessary for consciousness?
  • How can insights about the embodiedness or goal-directedness of consciousness be accommodated in the design of artificial neural networks?
  • What do adversarial examples teach us about the (non) consciousness of image classifiers?
  • How do the value functions learned by reinforcement learning agents differ from genuine goals or desires?
  • How do transitions between semantic vector embeddings in systems like GPT-3 differ from genuine associative thinking?
  • How might artificial conscious agents (who live in a cloud) be individuated from one another, or counted?
  • How might the structure of their consciousness differ from the structure of ours?
  • How might their existence alter our normative landscape?
  • In what ways should precautionary principles guide our progress in these areas?

Hiring: Jonathan Simon is currently hiring for one two-year postdoctoral position, to be based in the philosophy department at the University of Montréal, to begin in the fall of 2022. AOS: philosophy of mind, the brain sciences, or artificial intelligence. AOC: same or complementary. Preference given to those with some familiarity with state-of-the-art machine learning architectures, in order to capitalize on opportunities for collaboration with the MILA group led by Bengio, and also to those with familiarity with the associated moral, policy and decision-theoretic questions, in order to capitalize on opportunities for collaboration with the FHI group led by Bostrom. Doctorate must be conferred by start date in fall of 2022 (and less than five years before that date). Health and other social benefits included. The University of Montréal is a francophone institution. Knowledge of French is not required for the project, but candidates preferring to work in French are welcome to do so.

To apply:
I) Please submit, all in a single PDF, via email:

1) a cover letter,
2) a CV,
3) a research statement, and
4) a writing sample.

Please title your email: “DM2022 Application Lastname, Firstname”
(with your actual last and first names in place of “Lastname” and “Firstname”)

Please title your PDF: Lastname_Firstname_DM2022.pdf

Send to:

II) Please also arrange for 2–4 letter writers to email confidential letters of recommendation to the same address. Please ask letter writers to title things as follows:

Letter PDF file: CandidateLastname_CandidateFirstname_AuthorLastname_DM2022.pdf
Email title: “DM2022 Application Letter for CandidateLastname, CandidateFirstname”

Please also specify, in the body of the original email you write containing the application PDF, the names and email addresses of your letter writers.

Deadline: February 15, 2022

Inquiries to:
