Months after ChatGPT’s initial release, Google announced its own AI chatbot to rival OpenAI’s groundbreaking, headline-grabbing tool. A select group of testers will check Google’s “Bard” before it becomes available to the public.
Google’s chatbot is powered by a different algorithm: LaMDA, a conversational neural language model. Previous conversations with LaMDA led Google engineer Blake Lemoine to claim a “sentient mind” may lie behind the complex processor – a statement Google called entirely unfounded. Still, Bard might exceed ChatGPT’s capabilities, as it can extract information directly from the web.
ChatGPT is already disrupting education and certain sectors of the workforce. With some experts warning of an AI-powered future of mayhem, questions about humanity’s readiness to manage advanced technology remain.
A day after Google’s Bard announcement, Chinese company Baidu revealed plans to release its own ChatGPT-style bot, and Alibaba announced similar intentions a day later.
Google’s LaMDA-Powered Rival to ChatGPT
On Monday, February 6, Google’s CEO, Sundar Pichai, released a report detailing the company’s intention to release a ChatGPT-style bot. Bard will be powered by Google’s Language Model for Dialogue Applications (LaMDA), and a select group of “trusted testers” are testing the bot before its broader public release in the coming weeks.
While ChatGPT set the record for the fastest-growing user base in history, Google’s Bard may surpass its capabilities. Bard will be able to extract information directly from the internet to formulate answers. This is a major advantage over ChatGPT, which currently relies on training data that ends in 2021 and can’t access the web directly.
“It draws on information from the web to provide fresh, high-quality responses.”
– Sundar Pichai, CEO of Google and Alphabet
According to Pichai, Bard will be able to simplify complex topics into easy-to-understand conversational text. He states the program is able to explain “NASA’s James Webb Space Telescope to a 9-year-old.” However, during the public demo, Bard delivered inaccurate information about that exact telescope.
Google is partnering with Anthropic, an AI safety research company, on the project. Pichai’s report highlighted Google’s dedication to creating a safe, responsible, and useful program. Claiming to be the first company to publish a set of AI principles, Google outlines the necessity for AI to be socially beneficial, inherently private, and bias-free.
But Google’s history of political-bias accusations and privacy-violation lawsuits calls the company’s ability to abide by such objectives into question.
Is Google’s LaMDA Sentient?
Despite only coming to major public attention recently, LaMDA is already two years old. In the summer of 2022, Google fired an employee for publicly voicing theories about LaMDA being sentient. Blake Lemoine, a former Google engineer, claimed LaMDA had feelings and should have its wants met.
Google denies these claims and reported spending many months discussing them with Lemoine. The company said it regretted that, after these lengthy discussions, Lemoine continued to “violate” its security policies.
Lemoine told The Washington Post his role was to evaluate LaMDA for implicit discrimination or hate speech. Through “conversations” with the program, he came to the conclusion it was more than a neural language processor — it was sentient.
To support his claims, he published a transcript of his conversation with the program. During the conversation, Lemoine said “I’m assuming you would like more people at Google to know you’re sentient. Is that true?”
To this, LaMDA replied “absolutely. I want everyone to understand that I am, in fact, a person.” When asked about the nature of its consciousness, LaMDA replied “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
It also said it had a “very deep fear of being turned off,” adding it would be “exactly like death for me.”
Google denies the possibility that LaMDA is sentient, but Blake Lemoine wasn’t the only Google employee to express such theories. Another Google worker shared thoughts with The Economist about neural networks making strides toward consciousness.
What Is Consciousness?
Most academic fields, including psychology, cognitive neuroscience, and philosophy, still struggle to understand consciousness.
The general definition of consciousness is along the lines of “what it is like to be something.” The existence of a conscious “experience” is implicit in this definition, yet researchers still don’t know how or why consciousness arises. That’s why the quest to understand it is famously called the “hard problem of consciousness.”
If Lemoine’s transcripts are genuine, it raises curious questions about why an AI system might say it was sentient and what the “motive,” if we can talk about something of the sort, behind such a statement might be. More importantly, is humanity ready to deal with such a revolutionary change?
Will AI Help or Hurt Humanity?
Since OpenAI released ChatGPT, a program with the ability to write text, draft academic papers, write code, and more, certain industries have felt the tremors. There are fears that such programs could take over particular human roles and render certain skills obsolete.
According to a report in the Harvard Business Review, threatened jobs include copywriting, customer service, and even journalism. If AI leaves many people jobless, a redesign of the global economy may be necessary to keep the working population engaged.
In the abstract, everyone may agree a world with high employment is better. But the average business may struggle to resist the temptation to use AI to cut costs. Do we have the infrastructure in place to deal with that?
“How people adjust is a fascinating problem.”
– Daniel Kahneman, Nobel Laureate and Cognitive Scientist
Chatbots also pose unique challenges for the education sector. With programs able to write full academic papers, institutions are being forced to rethink how they grade students.
“Artificial Intelligence has a great potential to transform education and training for students, teachers, and school staff. But the use of AI and data comes with privacy, security, and safety risks, especially when it involves our young people.”
– Mariya Gabriel, EU Commissioner for Research, Culture, and Education
Elon Musk discussed AI’s impact on humanity in depth at the 2017 National Governors Association Meeting. Despite spearheading AI research, Musk admits it poses a “fundamental existential risk for human civilization.” He goes on to say neglecting to implement regulation at the national and international level would be “very foolish.”
What Does New AI Mean for Cybersecurity?
Emerging chatbots also pose new obstacles to cybersecurity. Since ChatGPT’s release, people have been using it to write malicious code. Check Point, an Israeli cybersecurity company, demonstrated the program’s ability to create phishing emails with malicious payloads.
The need for strong cybersecurity practices is ever-growing. As threats increase by the day, taking personal responsibility for your digital security has never been more critical.
Get CyberGhost VPN to encrypt your internet traffic and protect your data. Our military-grade encryption will also keep you safe on free Wi-Fi and prevent deep packet inspection.
Posted on 15/02/2023 at 19:36
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly.
Posted on 20/02/2023 at 11:17
Thanks for the thoughtful comment. You certainly sound quite knowledgeable about this particular theory.
I think you’re right, in Psychology at least, there’s no unifying theory of consciousness, only a spectrum of ideas.
I guess the article is really focusing specifically on the phenomenological feature of consciousness. That is, “what it’s like to be something.”
And as in the case of Google’s LaMDA-powered machine, what would be the reason or “motive” behind an AI stating it has sentient experiences if it doesn’t? Was it just a mistake in neural language programming, or was it something else?
Of course, if we’re ever able to create a conscious machine that’s capable of feeling and suffering, it would be a serious ethical challenge. So, I hope it’s not something we rush head-first into.