
Shared sentience

Lemoine, who worked for Google’s Responsible AI organization until he was placed on paid leave last Monday, and who “became ordained as a mystic Christian priest, and served in the Army before studying the occult,” had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began “teaching” LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story:

“It’s a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”

AI community pushes back on “sentient” artificial intelligence

The Washington Post article pointed out that “Most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.”

The Post article continued: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said.

That’s when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.

There were also plenty of humorous hot takes – even the New York Times’ Paul Krugman weighed in. Meanwhile, Emily Bender, professor of computational linguistics at the University of Washington, shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress towards artificial general intelligence (AGI).

Is this peak AI hype?

Now that the weekend news cycle has come to a close, some wonder whether discussing whether LaMDA should be treated as a Google employee means we have reached “peak AI hype.” However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief at Wired) said it in 2019, and Brown professor Srinath Sridhar had the same musing in 2017.

Still, others pointed out that the entire “sentient AI” weekend debate was reminiscent of the “Eliza Effect,” or “the tendency to unconsciously assume computer behaviors are analogous to human behaviors” – named for the 1966 chatbot Eliza. Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term “Eliza Effect” in 1995, in which he said that while the “achievements of today’s artificial neural networks are astonishing … I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat.”

What the “sentient” AI debate means for the enterprise

After a weekend filled with little but discussion around whether AI is sentient or not, one question is clear: What does this debate mean for enterprise technical decision-makers?
