Microsoft introduces Copilot, frames AI hallucinations as 'usefully wrong'
Microsoft has unveiled "Microsoft 365 Copilot," a set of AI tools that will eventually appear across its apps, including the widely used Word and Excel.
Although the development generated excitement, the tech giant is framing "AI hallucinations" as "usefully wrong," per an introductory blog post on Copilot published Thursday.
"Copilot gives you a first draft to edit and iterate on — saving hours in writing, sourcing, and editing time," reads the blog.
"Sometimes Copilot will be right, other times usefully wrong — but it will always put you further ahead," said the blog, playing on AI's propensity to occasionally make factual errors.
Microsoft 365 Copilot, "your copilot for work," turns words into a potent productivity tool by fusing the strength of large language models (LLMs) with a user's data in the Microsoft Graph and the Microsoft 365 apps, the company claims.
"Today marks the next major step in the evolution of how we interact with computing, which will fundamentally change the way we work and unlock a new wave of productivity growth," said Satya Nadella, Chairman and CEO of Microsoft.
"With our new copilot for work, we're giving people more agency and making technology more accessible through the most universal interface — natural language."
Microsoft and AI hallucinations
Microsoft-backed OpenAI's GPT technology was used to build the company's Bing AI conversation bot, which was unveiled in February.
During its introductory presentation, the bot gave several incorrect responses.
Like other AI language tools, including comparable Google software, the Bing chat feature occasionally presents false information that users might mistake for fact, a phenomenon known as "hallucination."
Google announced on Tuesday that it was bringing AI-powered chat technology to Gmail and Google Docs, allowing it to assist with email and document composition.
Microsoft announced Thursday that its popular business apps, Word and Excel, would soon be bundled with Copilot, a ChatGPT-like technology.
This time, however, Microsoft is not shying away from the "unhinged" chatbot controversies and is instead pitching the technology's errors as "usefully wrong."
Industry specialists, however, disagree.
Technology experts Noah Giansiracusa and Gary Marcus recently expressed worry that individuals would place too much faith in contemporary AI by heeding the advice of ChatGPT and similar technologies.
"ChatGPT's toxicity guardrails are easily evaded by those bent on using it for evil, and as we saw earlier this week, all the new search engines continue to hallucinate," they wrote in a recent opinion piece.
"But once we get past the opening day jitters, what will really count is whether any of the big players can build artificial intelligence that we can genuinely trust."
Although Copilot's practical dependability remains uncertain, Microsoft chief scientist and technical fellow Jaime Teevan said the company has "mitigations in place" for when Copilot "gets things wrong, has biases or is misused."
"We're going to make mistakes, but when we do, we'll address them quickly," Teevan said.