I often see discussions about the definition of consciousness among AI enthusiasts on Twitter. Some people use the term unconditionally, assuming there is no question about what it means. But others, myself included, think that one cannot make statements about consciousness without formally defining it first. How can it be described in terms of current models and algorithms? Can we create a specific test that, applied to an entity, concludes whether it is conscious or not? I find this problem challenging, and I would like to write down my thoughts on how to approach such a definition.
First, we should formulate criteria that allow us to classify an entity as conscious.
Are humans conscious while we sleep?
Before we extrapolate to animals, plants, or robots, we need to understand how we recognize consciousness in other people. When another person is not conscious, it means they are either dead or unconscious, the latter covering states such as being on drugs or asleep. Inverting these conditions, we get that a conscious person must be awake and aware of the world around them.
The question of consciousness also comes up in the context of multi-agent systems.