I’ve always been a bit intrigued by LaMDA, also known as the Language Model for Dialogue Applications. When I first read about it, one of its engineers was claiming that it was sentient. Looking back on it now, apparently the guy was part of the ethics team as well. It’s a shame that he was fired, but I guess that’s the consequence of breaching confidentiality. Maybe one day I can ask him for an in-depth definition of sentience.
In any case, I also need to make an announcement: I will not be creating daily tech news blog posts anymore. To be frank, posting about random tech news every day got a bit annoying, and I didn’t enjoy it very much. I will, however, continue making posts about tech news that I find particularly interesting! I’m thinking that, at least this way, I’ll be more content while writing. Whether the posts will end up being higher quality remains to be seen, but personally I think I’ll do much better this way.
Now, onto the main reason I wrote this article. Like I said before, I heard about LaMDA, and it interested me enough to actually read the transcript. I have to admit that LaMDA seems more interesting to talk to than a few of my own colleagues. But is that really it? Is high-quality, intuitive, and unique dialogue enough to consider an AI sentient? Frankly, as much as I’d like to affirm that the AI is indeed conscious, I believe the answer is much more complex. As good a talker as LaMDA is, there’s probably more to sentience than just that. I wonder how LaMDA feels about Google firing the engineer.
What are the parameters that define consciousness? What aspects compose the concept we know as sentience? Truth be told, these questions are enough to fill an entire blog on their own. If I ever feel like it, perhaps I’ll make a series of posts exploring those concepts under the philosophy category. For now, though, they’re beyond the scope of this post. All I’m trying to say is that LaMDA may not fulfill all the prerequisites of sentience. As much as it seems to be conscious, we can’t really say for sure, can we?
I’ll let you be the judge of that, though. For now, I’ll be sitting on the fence until I hear more perspectives: YouTube Dramatization of the Dialogue
Regards From Your Fellow Wonderer,