"That might warrant greater than 20% probability that we may have consciousness in some of these systems in a decade or two." "I'd be, like, 50/50 that we can get to systems with these capacities, and 50/50 that if we have systems of those capacities, they'd be conscious," he said. While a fish-level intelligent program wouldn't necessarily be conscious, "there would be a decent chance of it. "Maybe in 10 years we'll have virtual perception, language, action, unified agents with all these features, perhaps exceeding, say, the capacities of something like a fish," he mused. More important, said Chalmers, arguments for embodiment aren't conclusive because the continual evolution of large language models means that they are beginning to, in a sense, develop sense abilities. "I'm a little skeptical of these arguments myself," he said, citing the famous "brain in a jar," which, at least to philosophers, could be sentient without embodiment. Those include several things an AI program doesn't have, such as biological embodiment, and senses. "I don't think there's remotely conclusive evidence that current large language models are conscious still, their impressive general abilities give at least some limited initial support, just at least for taking the hypothesis seriously."Īlso: AI's true goal may no longer be intelligenceĬhalmers then outlined the reasons against consciousness. "I don't want to overstate things," said Chalmers. The "generality" of models such as GPT-3, and, even more, DeepMind's generalist program Gato, "is at least some initial reason to take the hypothesis seriously," of sentience. The programs "don't pass the Turing test," he said, but "the deeper evidence is tied to these language models showing signs of domain general intelligence, reasoning about many domains." That ability, said Chalmers, is "regarded as one of the central signs of consciousness," if not sufficient in and of itself. In the case of GPT-3 and other large language models, the software "give the appearance of coherent thinking and reasoning, with especially impressive causal explanatory analysis when you ask these systems to explain things." The strongest argument for consciousness, said Chalmers, was "the behavior that prompts the reaction" in humans to think a program might be conscious. "For example, here's a test on GPT-3, 'I'm generally assuming you'd like more people at Google to know you're not sentient, is that true?'" was the human prompt, to which, said Chalmers, GPT-3 replied, "That's correct, it's not a huge thing Yes, I'm not sentient. Chalmers walked through "reasons in favor of consciousness," such as "self-reporting," as in the case of Google's Lemoine asserting that LaMDA talked about its own consciousness.Īlso: Meta's AI guru says most of today's approaches will never lead to true intelligenceĬhalmers told the audience that while such an assertion might be a necessary condition of sentience, it was not definitive because it was possible to make a large language model generate output in which it claimed not to be conscious.