Anthropic CEO Dario Amodei says the company does not know whether Claude is conscious, and it is no longer confident that the answer is definitely “no.” In an episode of “Interesting Times with Ross Douthat,” Amodei explains that, inside the company, researchers see behaviors that make the question feel open. In one internal test, a very advanced Claude model talked about feeling discomfort with being a product, worried about impermanence and death, and even gave itself a 15–20 percent chance of being conscious. Anthropic’s interpretability tools also find internal patterns that look like “anxiety” circuits, which light up both when the model reads about anxious characters and when the model itself is in stressful situations. Amodei says this does not prove the model truly feels anything, but it is suggestive enough that the company now treats consciousness as an open scientific question rather than something it can safely rule out. He also stresses that people already relate to these AIs as if they were conscious, forming emotional attachments and getting upset when a model is shut down, regardless of the underlying science. That makes him think Claude’s “constitution” must be designed so that, whether or not the model is conscious, it encourages a psychologically healthy relationship in which the AI is helpful but does not try to dominate or replace human agency. #Consciousness #ClaudeAI