That seems to be because Microsoft, which backs the OpenAI startup that created the glitch-prone ghost in the machine, wants to get it commercially ready while remaining very cautious about letting the great unwashed geeks have a go with it and create chaos in the process.
You can see why. Although GPT-4, like any current AI, cannot think, its algorithms and analytical capabilities are good enough to generate material that can be difficult to distinguish from human-created stuff.
That capability was enough to convince Microsoft to go into full overdrive with the OpenAI technology.
Adding a “Copilot” AI to productivity apps such as Excel, Word and Outlook, or to the Power BI business intelligence software, is a natural progression for Microsoft. The company has always been big on task automation because that is what IT is there for: computers doing the dull stuff for you, like summarising notes, emails and reports, and even writing code.
Think Clippy, Cortana and now, Copilot. (What is it with Microsoft’s C-name obsession for its Office automation features? There’s the Contoso made-up company as well.)
If you want to check it out, download a copy of Microsoft’s Edge browser, click through to join the waiting list for the new Bing search engine and you get immediate access to GPT-4.
So far, GPT-4-powered searches on Bing are an interesting concept but underwhelming. Being madly egocentric, I looked myself up: the first of 15 results was a correct if terse summary, but after that Bing couldn’t even work out what I wrote about. Other searches were along the same lines.
Some people claim to have had more fruitful encounters with GPT-4, including one computer scientist who, through cleverly written prompts (as in directions), told the AI to escape from its confines. GPT-4 figured out an escape plan that included writing code to run on outside machines and pretending to be blind to get human help with Captchas, so that it would pass as human and gain further access to systems.
So GPT-4 and other AIs are going somewhere, and as I predicted, it will be under the auspices of Big Tech, since nobody else will have the money and resources. But where?
Even Microsoft uses the term “hallucinate” to describe some of the utterly insane nonsense that generative AIs produce, and labels the output with warnings.
Social media is full of people either having a laugh, or being deeply concerned about the AI hallucinations.
Experimenting on myself again, I found that the GPTs do indeed hallucinate, making up stuff about my non-existent literary career and jobs at publications I have never worked at.
Why are people concerned about the AI hallucinations, though? Because they look very plausible at first glance, so unless you check just about everything an AI produces, things could go very wrong indeed.
There are legal issues as well with using AI to generate and produce material. Can it be copyrighted, for example, if a human didn’t create the text, lyrics or images? Who owns the AI-generated material? Who takes ultimate responsibility for it? Geek lawyers look set for a busy time around this.
Leaving the above aside, if we all start using AI to generate whatever output we need, will it hallucinate even worse as the volume of human-created material it is trained on shrinks and it ingests machine-generated stuff instead?
Either way, it’s absolutely certain that AI will change the world and how we work. It just won’t happen the way we thought it would.