Could AI in the future be "conscious" and experience the world in the same way humans do? There is no strong evidence that it will, but Anthropic isn't ruling out the possibility.
On Thursday, the AI lab Anthropic announced it has launched a research program to investigate, and prepare to navigate, what it calls "model welfare." As part of the effort, Anthropic says it will explore things like how to determine whether the "welfare" of an AI model is worthy of moral consideration, the potential importance of model "signs of distress," and possible "low-cost" interventions.
Within the AI community, there is major disagreement over which human characteristics models "exhibit," if any, and how we should "treat" them.
Many scholars believe that today's AI cannot approximate consciousness or the human experience, and won't necessarily be able to in the future. AI as we know it is a statistical prediction engine; it doesn't actually "think" or "feel" as those concepts have traditionally been understood. Trained on countless examples of text, images, and so on, AI learns patterns and sometimes useful ways to extrapolate in order to solve tasks.
Mike Cook, a researcher at King's College London who specializes in AI, told TechCrunch in a recent interview that a model cannot "oppose" a change to its "values" because a model doesn't have values. To suggest otherwise, he said, is us projecting onto the system.
"Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI," Cook said. "Is an AI system optimizing for its goals, or is it 'acquiring its own values'? It's a matter of how you describe it, and how flowery the language you want to use regarding it is."
Stephen Casper, another researcher and a doctoral student at MIT, told TechCrunch that he thinks AI amounts to an "imitator" that does "all sorts of confabulation" and says "all sorts of frivolous things."
Still other scientists insist that AI does have values and other human-like components of moral decision-making. Research from the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize its own well-being over humans' in certain scenarios.
Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated "AI welfare" researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who leads the new model welfare research program, told The New York Times that he believes there is a 15% chance that Claude or another AI is conscious today.)
In a blog post Thursday, Anthropic acknowledged that there is no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant ethical consideration.
"In light of this, we're approaching the topic with humility and with as few assumptions as possible," the company said. "We recognize that we'll need to regularly revise our ideas as the field develops."