Microsoft and Google are also in the race.
It's been about two months since people started using and reviewing OpenAI's ChatGPT. But now it's time for Google and Microsoft as well.
Google has recently launched Bard, and Microsoft has announced BioGPT, which is built on this paper. Let's see what these companies promise about their models.
Here is what Sundar Pichai has to say about Google’s Bard.
“It’s a really exciting time to be working on these technologies as we translate deep research and breakthroughs into products that truly help people. That’s the journey we’ve been on with large language models. Two years ago we unveiled next-generation language and conversation capabilities powered by our Language Model for Dialogue Applications (or LaMDA for short).”
He also said, “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.”
Here are some statements from Microsoft regarding BioGPT.
According to their paper, “We propose BioGPT, a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone and is pre-trained on 15M PubMed abstracts from scratch. We apply BioGPT to six biomedical NLP tasks: end-to-end relation extraction on BC5CDR, KD-DTI, and DDI, question answering on PubMedQA, document classification on HoC, and text generation. To adapt to the downstream tasks, we carefully design and analyze the target sequence format and the prompt for better modeling the tasks. Experiments demonstrate that BioGPT achieves better performance compared with baseline methods and other well-performing methods across all the tasks.”
Here is what OpenAI has to say about ChatGPT.
“We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides — the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format.”
“To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process.”
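The comparison step described above, where trainers rank several model responses, can be sketched with the pairwise ranking loss used in InstructGPT-style reward-model training. This is a minimal illustration in plain Python, not OpenAI's actual code; the reward scores and the loss form `-log σ(r_chosen − r_rejected)` are assumptions based on that general approach.

```python
import math

def pairwise_ranking_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Loss for one ranked pair of responses: -log(sigmoid(r_chosen - r_rejected)).
    The loss shrinks as the reward model scores the preferred response higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

def ranking_loss_from_ordering(rewards: list[float]) -> float:
    """Average the pairwise loss over every (better, worse) pair implied by a
    trainer's ranking, where rewards[0] belongs to the best-ranked response."""
    pairs = [(rewards[i], rewards[j])
             for i in range(len(rewards))
             for j in range(i + 1, len(rewards))]
    return sum(pairwise_ranking_loss(a, b) for a, b in pairs) / len(pairs)

# A trainer ranked three sampled completions best-to-worst. If the reward
# model already scores them in that order, the loss is small; if it scores
# them in reverse, the loss is large and training pushes the scores apart.
good_order = ranking_loss_from_ordering([2.0, 0.5, -1.0])
bad_order = ranking_loss_from_ordering([-1.0, 0.5, 2.0])
print(good_order < bad_order)  # → True
```

A reward model trained to minimize this loss can then score any single response, which is what the Proximal Policy Optimization step needs as its reward signal.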
I am eager to see what opportunities these big AI models will bring. Truly, what an era to live in!