I’ve been interacting with OpenSim bots — or NPCs — for practically as long as I’ve been covering OpenSim. Which is about 15 years. (Oh my God, has it really been that long?)
I’d been hoping that writing about OpenSim would become my day job, but, sadly, OpenSim never really took off. Instead, I covered cybersecurity and, more recently, generative AI.
But then I saw some reporting about a new study about AI, and immediately thought — this could really be something in OpenSim.
The study was published this past April in the journal Neuroscience of Consciousness, and it showed that a majority of people – 67%, to be precise – attribute some degree of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to see them as conscious entities.
Then, in May, another study showed that 54% of people, after a conversation with ChatGPT, thought it was a real person.
Now, I’m not saying that OpenSim grid owners should run out and install a bunch of bots on their grids that pretend to be real people in order to lure in more users. That would be dumb, expensive, a waste of resources, possibly illegal, and definitely unethical.
But if users knew that these bots were powered by AI and understood that they’re not real people, they might still enjoy interacting with them and develop attachments to them — just as we get attached to brands, or cartoon animals, or characters in a novel. Or, yes, virtual girlfriends or boyfriends.
In the video below, you can see OpenAI’s recent GPT-4o presentation. Yup, the one where ChatGPT sounds suspiciously like Scarlett Johansson in “Her.” I’ve set it to start at the point in the video where they’re talking to her.
I can see why ScarJo got upset — and why that particular voice is no longer available as an option.
Now, as I write this, the voice chatbot they’re demonstrating isn’t widely available yet. But the text version is — and it’s the text interface that’s most common in OpenSim anyway.
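If you’re curious what that text interface looks like from the developer’s side, here is a minimal sketch of a single chat round-trip, assuming the official openai Python package (v1 or later) and an API key in your environment. How you relay the reply to an in-world NPC will depend on your grid setup, and the shopkeeper persona is just an example.

```python
# Minimal sketch: one text round-trip with GPT-4o via the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment; the NPC persona is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a friendly shopkeeper NPC in a virtual world. Keep replies short."},
        {"role": "user", "content": "Hi! What do you sell here?"},
    ],
)

print(response.choices[0].message.content)  # the bot's reply, ready to hand to your NPC
```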
GPT-4o does cost money. It costs money to send it a question and to get a response. A million tokens’ worth of questions — or about 750,000 words — costs $5, and a million tokens’ worth of responses costs $15.
A page of text is roughly 250 words, so a million tokens is about 3,000 pages. So, for $20, you can get a lot of back-and-forth. But there are also cheaper platforms.
Anthropic’s Claude, for example, which has tested better than ChatGPT on some benchmarks, costs a bit less — $3 for a million input tokens, and $15 for a million output tokens.
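To put those per-token prices in perspective, here is a quick back-of-the-envelope estimate using the rates quoted above. The monthly traffic numbers are made up purely for illustration.

```python
# Rough monthly cost estimate for a chatty NPC, using the per-million-token prices above.
# The message volume and per-message token counts are illustrative assumptions.
PRICES_PER_MILLION_TOKENS = {
    "gpt-4o": {"input": 5.00, "output": 15.00},
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
}

def monthly_cost(model, messages, input_tokens_per_msg, output_tokens_per_msg):
    rates = PRICES_PER_MILLION_TOKENS[model]
    cost_in = messages * input_tokens_per_msg / 1_000_000 * rates["input"]
    cost_out = messages * output_tokens_per_msg / 1_000_000 * rates["output"]
    return cost_in + cost_out

# Example: 10,000 visitor messages a month, roughly 200 tokens in and 150 tokens out each.
for model in PRICES_PER_MILLION_TOKENS:
    print(f"{model}: ${monthly_cost(model, 10_000, 200, 150):.2f} per month")
```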
But there are also free, open-source platforms that you can run on your own servers with comparable performance levels. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o is in first place with a score of 1287, Claude 3.5 Sonnet is close behind with 1272, and the (mostly) open source Llama 3 from Meta isn’t too far behind, with a score of 1207 — and there are several other open source AI platforms at the top of the charts, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.
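One practical note, and this is my own assumption rather than anything from the leaderboard: many self-hosted inference servers (Ollama and vLLM, for example) expose an OpenAI-compatible endpoint, so the same client code from the sketch above can be pointed at a local Llama 3 just by changing the base URL and model name.

```python
# Sketch only: pointing the same OpenAI-style client at a self-hosted model.
# The base_url, port, and model name assume an Ollama-style local server; adjust for your setup.
from openai import OpenAI

local_client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

reply = local_client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Greet a visitor arriving at my OpenSim welcome region."}],
)
print(reply.choices[0].message.content)
```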
I can easily see an OpenSim hosting provider adding an AI service to their package deals.
Imagine the potential for creating truly immersive experiences in OpenSim and other virtual environments. If users are predisposed to see AI entities as conscious, we could create non-player characters that feel incredibly real and responsive.
This could revolutionize storytelling, education, and social interactions in virtual spaces.
We could have bots that users can form meaningful relationships with, AI-driven characters that adapt to individual user preferences, and virtual environments that feel alive and dynamic.
And then there’s the potential for interactive storytelling and games, with quests and narratives that are more engaging than ever before. We could create virtual assistants that feel like true companions, and even build communities that blur the lines between AI and human participants.
For those using OpenSim for work, there are also applications here for business and education, in the form of AI tutors, AI executive assistants, AI sales agents, and more.
However, as much as I’m thrilled by these possibilities, I can’t help but feel a twinge of concern.
As the study authors point out, there are some risks to AIs that feel real.
First, there’s the risk of emotional attachment. If users start to view AI entities as conscious beings, they might form deep, potentially unhealthy bonds with these virtual characters. This could lead to a range of issues, from social isolation in the real world to emotional distress if these AI entities are altered or removed.
We’re already seeing that, with people feeling real distress when their virtual girlfriends are turned off.
Then there’s the question of blurred reality. As the line between AI and human interactions becomes less clear, users might struggle to distinguish between the two.
Personally, I’m not too worried about this one. We’ve had people complaining that other people couldn’t tell fantasy from reality since the days of Don Quixote. Probably even earlier. There were probably cave people sitting around, saying, “Look at the young people with all their cave paintings. They could be out actually hunting, and instead they sit around the cave looking at the paintings.”
And even earlier, when language was invented. “Look at these young people, sitting around talking about hunting, instead of going out there into the jungle and catching something.”
When movies were first invented, when people started getting “addicted” to television, or video games… we’ve always had moral panics about new media.
The thing is, those moral panics were also, to some extent, justified. Maybe the pulp novels that the printing press gave us didn’t rot our brains. But Mao’s Little Red Book, the Communist Manifesto, that thing that Hitler wrote that I don’t even want to name — the damage those men did was aided and abetted by the books they wrote.
So that’s what I’m most worried about — the potential for exploitation. Bad actors could misuse our tendency to anthropomorphize AI, creating deceptive or manipulative experiences that take advantage of users’ emotional connections and lead them to be more tolerant of evil.
But I don’t think that’s something that we, in OpenSim, have to worry about. Our platform doesn’t have the kind of reach it would take to create a new dictator!
I think the worst that could happen is that people might get so engaged that they spend a few dollars more than they planned to.