GPT-4 Launch – The Next-Gen Language Model

OpenAI, the creator of ChatGPT and DALL-E, is reportedly set to release its next-generation language model, GPT-4, next week. So far ChatGPT has been restricted to answering questions with text, but the next generation of the AI language model from the Microsoft-backed startup may be able to generate videos and other types of content.

When Was GPT-4 Officially Announced?

Excitement is building in the AI industry after Andreas Braun, Chief Technology Officer of Microsoft Germany, said at an AI event covered by the German news site Heise:

“We will introduce GPT-4 next week, with multimodal models that will offer unique possibilities of content creation.” 

GPT-4 is a fourth-generation large language model (LLM), a class of model that allows machines to understand natural language, an ability that was previously unique to humans. The announcement was made during an AI event in Germany, Digital Kickoff, which emphasized the positive potential of AI and framed the technology as a turning point.

Microsoft has been integrating AI into its products, such as Microsoft Teams, from early on, and GPT-4 is now expected to be woven further into its lineup. According to the report, however, the model most probably won't be available through OpenAI's API at first and might instead arrive as an upgrade to Bing Chat.

Will GPT-4 be a Multimodal AI?

On September 13, 2022, OpenAI CEO Sam Altman discussed the future of AI technology in a podcast interview, AI for the Next Era. He said that multimodal models are the future. Multimodal means the ability to work across different modes, such as images, text, and sound.
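
To make the term concrete, here is a purely hypothetical sketch in Python of what a multimodal request could look like. The field names are invented for illustration only and do not come from any published GPT-4 API:

```python
# Hypothetical illustration of "multimodal": one request mixing several
# input modes. These field names are invented for this sketch and are
# not taken from any published GPT-4 API.
multimodal_prompt = {
    "text": "Describe what is happening in this scene.",
    "image_bytes": b"<PNG bytes would go here>",  # a second mode: vision
    "audio_bytes": b"<WAV bytes would go here>",  # a third mode: sound
}

# A text-only model such as the original ChatGPT works with one mode:
text_only_prompt = {"text": "Describe what is happening in this scene."}

print(list(multimodal_prompt), "vs.", list(text_only_prompt))
```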

Today, OpenAI's models interact with users through chat or text input: whether you are using DALL-E or ChatGPT, you type a prompt and the AI responds to it. GPT-4, by contrast, might be able to interact with you through speech, listening to your command and answering accordingly, much like Alexa, Siri, or Google Assistant.
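
For comparison, this is roughly what that text-in, text-out loop looks like in code. It is a minimal sketch using the openai Python package's chat completions interface as it existed in early 2023 (openai 0.27.x); the model name is ChatGPT's API model of the time, since GPT-4's own identifier had not been published:

```python
# Minimal sketch of the text-only interaction the article describes,
# assuming the `openai` Python package circa early 2023 (openai==0.27.x).
# Requires the OPENAI_API_KEY environment variable to be set.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # ChatGPT's API model at the time of writing
    messages=[{"role": "user", "content": "What is a multimodal model?"}],
)

# The model answers in the same mode the prompt arrived in: text.
print(response["choices"][0]["message"]["content"])
```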

What to Expect From GPT-4?

Altman also shared some details about what to expect. He said he thinks multimodal models are not far off and could be used effectively within a few years. He suggested that GPT-4 will be able to do the kinds of things you would previously have asked a human agent to do, and that you will be able to iterate on and refine your instructions. Though Altman did not say outright that GPT-4 would be multimodal, the hint became unmistakable with the recent news of the launch. He added that the language model will be built into the company's interface, and that these powerful models will rank among the greatest technological advances in recent history.
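
As an illustration of that iterate-and-refine loop with today's text models, the sketch below re-sends the conversation with a follow-up correction appended. It makes the same openai 0.27.x assumption as above, and the prompts are invented for the example:

```python
# Sketch of "iterating and refining" an instruction: the chat API is
# stateless, so refinement means re-sending the conversation so far
# with a follow-up message appended. Assumes openai==0.27.x.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [{"role": "user", "content": "Draft a two-line product slogan."}]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
draft = first["choices"][0]["message"]["content"]

# Refine: keep the draft in context and add a corrective instruction.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Shorter, and avoid the word 'innovative'."},
]
revised = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(revised["choices"][0]["message"]["content"])
```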

Marianne Janik, CEO of Microsoft Germany, said that current AI development and GPT-4 represent an "iPhone moment." She also emphasized that introducing AI does not have to mean job losses; companies can instead focus on training employees to use AI effectively.

Microsoft has also reportedly paid $10 billion for a 49% stake in OpenAI and is locked in an AI arms race with Google, whose PaLM model has already demonstrated the advantage of attaching image-interpretation capabilities to LLMs, which suggests GPT-4 might perform beyond expectations.

The news of GPT-4 has generated a lot of interest among tech enthusiasts and creators. For some, however, it raises the threat of AI replacing human employees. What's your take on it?