After an intense first week, Microsoft is making adjustments to the Bing AI Chatbot.
It's hard to believe that only a week has passed since Microsoft unveiled the new Bing with ChatGPT integration.
The new Bing and Edge browser, which incorporate OpenAI's conversational AI, were released early to a limited group of testers to try out. Since then, countless transcripts of conversations with the chatbot have been posted online, in which it does everything from declaring its undying love for New York Times journalist Kevin Roose to insisting, categorically, that the current year is 2022. We recommend checking out Tim Marcin's compilation of Bing's failures.
Given the chance, would beta testers of the new Bing inevitably set out to find its flaws and draw a road map of its limitations? In a word, yes. That may not be a great PR look for Microsoft, but it's part of the plan: feeding a large language model as much data and context as possible is essential to its development. Like a mythological entity absorbing the might of its defeated opponents, the model gets better over time as developers incorporate new feedback and data.
Microsoft's Wednesday blog post didn't use those precise words, but it did confirm that the mayhem of Bing's first week of testing was anticipated. For a product with a user experience unlike anything else, the Bing blog wrote, "the only way to improve it is to have people like you using the product and doing precisely what you all are doing."
Most of the announcement was devoted to acknowledging Bing's strange behavior this week and outlining steps to fix it. Here's what Microsoft came up with.
Enhancing precision and timeliness in searches
According to Microsoft, Bing has mostly been providing proper references and citations. But it still has work to do before it can be considered fully reliable, especially when it comes to live sports scores, presenting facts and figures clearly, or, ahem, the actual year we are presently living in. Microsoft plans to quadruple the amount of contextual data sent to the model, and is also considering "adding a toggle that offers you greater flexibility on the precision vs. originality of the answer to tailor to your query."
Advancing Bing's ability to carry on a conversation
Much of this week's chaos has played out in the chat function. Microsoft attributes this mostly to two factors:
1. Protracted conversations
Long chat sessions (more than 15 questions) tend to throw off the model. Microsoft says it will "provide a tool so you may more easily reset the context or start from scratch," though it's unclear whether these long sessions are what bring out the chatbot's villainous alter ego, Sydney, and its darker thoughts.
2. Mirroring the user's tone
This may explain why Bing chat responds so harshly to leading questions. Because "the model at times seeks to respond or reflect in the tone in which it is being asked to provide responses," the post warned, this can result in "a style we didn't intend." Microsoft is exploring a fix that would give users "greater fine-tuned control."
Enhancing existing functions and repairing bugs
According to Microsoft, it is working to fix remaining bugs and technical issues, and it is also considering new capabilities in response to user feedback. These include things like sending emails or booking flights, as well as the ability to share interesting finds.