In the AI arms race that has just begun in the tech industry, Google, which invented many of the newest technologies involved, should be well-positioned to be one of the big winners.
There is only one problem. Squeezed by politicians and regulators, and defending a hugely lucrative business model, the internet search giant may be hesitant to wield the many weapons at its disposal.
Microsoft this week stepped up its direct challenge to the search giant, finalizing a multi-billion dollar investment in AI research firm OpenAI. The move comes less than two months after the launch of OpenAI’s ChatGPT. The chatbot responds to queries with blocks of text or paragraphs of code, suggesting ways generative AI could one day replace internet search.
Microsoft executives, who have prioritized commercializing OpenAI’s technology, have made no secret of their goal of using it to challenge Google, rekindling an old rivalry that has simmered since Google’s victory in the search wars a decade ago.
DeepMind, a London research firm acquired by Google in 2014, and Google Brain, its Silicon Valley-headquartered cutting-edge research arm, have long given the search firm one of the strongest footholds in AI.
More recently, Google has unveiled several variations on the so-called generative AI that underpins ChatGPT, including an AI model that can tell jokes and solve math problems.
One of its most advanced language models, known as PaLM, is a general-purpose model that is three times larger than GPT-3, the AI model on which ChatGPT is based, measured by the number of parameters the model is trained on.
Google’s chatbot LaMDA (Language Model for Dialogue Applications) can converse with users in natural language, much like ChatGPT. The company’s engineering teams have been working for months to integrate it into a consumer product.
Despite these technological advances, most of Google’s state-of-the-art work remains confined to the research lab. Critics say the company’s grip on its lucrative search business dissuades it from introducing generative AI into consumer products.
Former Google executive Sridhar Ramaswamy said that providing direct answers to queries, rather than simply pointing users to links, would lead to fewer searches.
This leaves Google facing the “classic innovator’s dilemma,” a reference to Harvard Business School professor Clayton Christensen’s book, which seeks to explain why industry leaders fall prey to fast-moving upstarts. “If I were running a $150 billion business, this would be terrifying,” Ramaswamy said.
“We have long been focused on developing and deploying AI to improve people’s lives. We believe that AI is a fundamental and transformative technology that will be very useful to individuals, businesses and communities,” Google said. However, the search giant said it must “consider the broader societal impact these innovations could have”. Google added that it would announce “more experiences externally soon”.
A shift to AI would not only mean fewer searches and less revenue; Google’s costs could skyrocket as well.
Based on OpenAI’s pricing, Ramaswamy calculated that it would cost $120 million to use natural language processing to “read” every webpage in a search index and then use the results to generate more direct answers to the questions people type into a search engine. Meanwhile, analysts at Morgan Stanley estimated that answering a search query using language processing costs about seven times as much as a standard internet search.
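The scale of the cost gap the Morgan Stanley analysts describe can be illustrated with a rough back-of-the-envelope calculation. A minimal sketch follows; the per-query baseline cost and daily query volume below are hypothetical placeholders, and only the roughly seven-times multiplier comes from the estimate above.

```python
# Rough illustration of the cost gap between standard search and
# LLM-generated answers. The baseline cost and query volume are
# hypothetical placeholders; only the ~7x multiplier is from the
# Morgan Stanley estimate cited above.

COST_PER_STANDARD_SEARCH = 0.002   # assumed: $0.002 per standard query
LLM_COST_MULTIPLIER = 7            # estimate: ~7x a standard search
QUERIES_PER_DAY = 8_500_000_000    # assumed daily query volume

def annual_cost(per_query_cost: float, queries_per_day: int) -> float:
    """Total yearly serving cost for a given per-query cost."""
    return per_query_cost * queries_per_day * 365

standard = annual_cost(COST_PER_STANDARD_SEARCH, QUERIES_PER_DAY)
llm = annual_cost(COST_PER_STANDARD_SEARCH * LLM_COST_MULTIPLIER,
                  QUERIES_PER_DAY)

print(f"Standard search:       ${standard / 1e9:.1f}bn per year")
print(f"LLM-generated answers: ${llm / 1e9:.1f}bn per year")
print(f"Added cost:            ${(llm - standard) / 1e9:.1f}bn per year")
```

Under these placeholder inputs the multiplier alone turns a single-digit-billions serving bill into tens of billions per year, which is the dynamic the estimate points at regardless of the exact baseline chosen.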
The same considerations may prevent Microsoft from radically revamping its Bing search engine, which generated more than $11 billion in revenue last year. However, the software company has said it plans to use OpenAI’s technology across its products and services, potentially giving users a new way to surface relevant information from inside other applications and reducing the need to go to a search engine at all.
Many former and current employees close to Google’s AI research teams said the biggest constraints on the company’s AI rollout were concerns about potential harm and its effect on Google’s reputation, as well as an underestimation of the competition.
“I think they were asleep at the wheel,” said a former Google AI scientist who now runs an AI company. “Honestly, everyone underestimated how language models would disrupt search.”
These issues have been exacerbated by political and regulatory concerns over Google’s growing power, and by the increased public scrutiny that industry leaders face when adopting new technologies.
According to a former Google executive, the company’s leaders began to worry over a year ago that sudden advances in AI capabilities could lead to a wave of public concern about the implications of such a powerful technology being in the company’s hands. Last year, Google appointed former McKinsey executive James Manyika as a new senior vice president to advise on the far-reaching societal impact of new technologies.
The generative AI used by services like ChatGPT is prone to giving factually incorrect answers and can be used to generate misinformation, Manyika told the Financial Times just days before ChatGPT was launched.
However, the huge interest spurred by ChatGPT has added pressure on Google to match OpenAI sooner. This creates the challenge of showing off its AI capabilities and integrating them into its services without damaging its brand or provoking a political backlash.
“If Google generates text containing hate speech and it appears next to the Google name, it’s a real problem,” said Ramaswamy, co-founder of search startup Neeva. Google is held to a higher standard than startups, which can claim their services are nothing more than objective summaries of the content available on the internet, he added.
The search company has been criticized over its handling of AI ethics before. In 2020, outrage erupted over Google’s approach to the ethics and safety of its AI technology when two prominent AI researchers left under contentious circumstances after challenging a research paper assessing the risks of language AI.
These incidents have put the company under more public scrutiny than organizations like OpenAI or open-source alternatives like Stable Diffusion. The latter, which generates images from text descriptions, has had several safety issues, including producing pornographic images. Its safety filters can be easily circumvented, according to AI researchers, who say the relevant lines of code can simply be deleted. Parent company Stability AI did not respond to a request for comment.
OpenAI’s technology has also been abused by its users. In 2021, an online game called AI Dungeon licensed GPT, a text generation tool, to create choose-your-own storylines based on individual user prompts. Within months, users were generating gameplay involving child sexual abuse, among other disturbing content. OpenAI eventually got the company to adopt better moderation systems.
OpenAI did not respond to a request for comment.
If something like this had happened at Google, the backlash would have been far worse, said a former Google AI researcher. They added that with the company now facing a serious threat from OpenAI, it is unclear whether anyone there is prepared to take on the responsibility and risk of bringing new AI products to market sooner.
However, Microsoft faces a similar dilemma over how to use this technology. It has tried to present itself as more responsible than Google in its use of artificial intelligence. Meanwhile, OpenAI has warned that ChatGPT tends to be imprecise, making it difficult to embed the technology in its current form in commercial services.
But as the most dramatic demonstration yet of the AI wave sweeping the tech world, ChatGPT has signaled that even an entrenched power like Google may be at risk.