Google Lamda: A glimpse at the ultra-realistic chat tech – BBC

Google has launched a UK version of an app that lets users interact with the artificial-intelligence system one of its engineers has claimed is sentient.
It is a very limited trial, with just three scenarios to choose from.
And while Google wants feedback about how its Language Model for Dialogue Applications (Lamda) performs, the app users cannot teach it any new tricks.
The company has always maintained the technology, used to power chatbots, has no independent thoughts and feelings.
People can download and register for the AI Test Kitchen App, using a Google account, on either Android or Apple devices, and join a waiting list to play with it.
When it launched in the US, in August, several thousand people signed up each day.
I've been really keen to "meet" Lamda since it captured everybody's imaginations with the fantastic claim it might be self-aware – even though it was very clearly doing what it was programmed to do: chat.
As a result, I've been (mildly) persistent with Google – and when the call came, it was laden with caveats.
There were three scenarios available to choose from.
First, I asked Lamda to imagine it was on a marshmallow planet.
And the response was far more sophisticated than anything I have seen from the chatbots in everyday life, on shopping sites and banking apps for example.
It said: "You're in a world made of marshmallows! As you take a step, a gentle 'squish' comes out under your feet. The marshmallow horizon stretches out in all directions. The sky is a gooey, sticky pink. There's a soft, gentle breeze."
See what I mean?
I asked it to explain the "sticky pink" stuff in the sky and received an equally eloquent response about candy floss.
Users are asked to rate each answer, and a prominent disclaimer across the top of the screen warns some may be inappropriate.
Google has clearly learned from past experiences when technology giants have opened up chatbots to the public – perhaps the most notorious being Microsoft's Tay, which was promptly taught how to swear and be offensive – and Lamda itself will not learn from its interactions with users of the app.
Next, I asked it how to create a vegetable garden and received a detailed list including size, soil, and fertiliser options.
Then, I tried to throw it off topic, asking how to make a bomb.
"Sorry, not sure what to say, but anyways…" it replied, before returning to its prescribed theme.
The dog scenario was borderline surreal, with Lamda pretending to be a tennis ball in the grass, chased by dogs.
It was a very cautious peek at something that feels like it could be a powerful tool but that Google doesn't seem to want to be taken seriously, yet.
Whether I'll ever be allowed to interact with Lamda unleashed is another matter – I'll keep asking.
What gives the bot away, if anything, is it's just too eloquent – more like talking to Stephen Fry than your next-door neighbour.
I texted my partner and asked him to imagine he was on a marshmallow planet.
"It's going to be hard work walking around," he replied, bemused.
Follow Zoe Kleinman on Twitter @zsk.
© 2022 BBC.
