With the public release of Google Bard (powered by the spanking new Palm 2 model), I thought it would be a good time to test how the chatbot compares to Bing Chat (and also to OpenAI's GPT-4 model, using the Beta Browsing mode in ChatGPT). Here's what I found:
Test 1 – Comparing the Models, using the Models
Here’s how Google Bard did – it was incredibly quick and generated a comparison table as asked. Some of the summary comments feel subjective, perhaps based on the sources it drew content from, but overall pretty good.
Next, it was time to test Bing Chat (powered by OpenAI's GPT-4 model). The output here was a bit slower and followed the typical GPT-4 style of typing out the response. Bing Chat's examples and categorization of capabilities were quite intriguing though…so it definitely gets top marks here.
Finally, I tested the Beta Web Browsing mode now available with ChatGPT 4. In this test, ChatGPT took upwards of 4-5 minutes as it browsed the web and tried to summarize its findings…and it completely missed the tabular format I'd requested. Overall, the results were decent, but it feels like most people would be better off just using Bing Chat (both because it's faster and because it's free, whereas GPT-4 requires paid access to ChatGPT Plus).
Winner: Bing Chat
Test 2 – Summarizing Company Performance
In this test, I scrapped ChatGPT altogether as it kept getting stuck when trying to browse the web. Sticking with Google Bard and Bing Chat, here’s what I got:
Google Bard. Again, a super fast response, and it seemed to provide a pretty accurate, tabular summary of performance from the company's most recent earnings release.
Where the system seemed to get thrown off was the "any other key MAR stock or performance related news" I asked for. Here it produced some nonsense that wasn't true (see below).
Bing Chat. A little slower, but the overall quality of the summary was far better:
Winner: Bing Chat
Test 3 – How’s The Economy Doing?
For this third and last test, I decided to get a little more ambitious and see if the chatbots could offer up a summary of market sentiment in the US, based on what's in the news at the moment.
Google Bard. Super fast…decent summary and balanced sentiment, including notes on the key factors.
Bing Chat. Slower, but interesting results…not quite as on-point as the Google Bard answer in this case, but a few handy bits of information.
Winner: Tie (Bard had a better, on-point summary…Bing had more interesting data points).
It’s really interesting to see how these models are evolving. This is clearly an arms race between the giants, but fast-follows and brute-force scaling may not be the only way to produce better models in the future. See this article for a more detailed comparison of Palm 2 and GPT-4.
Based on the web browsing tests above, I think I'll be giving Bing Chat more love in the future…and perhaps rely on Google Bard for coding and other specialist tasks it is good at. ChatGPT's beta web browsing mode seems pretty unusable at the moment given its speed and other glitches, so Bing Chat takes the top spot for now. Of course, ChatGPT 4 is still pretty great at summarizing and generating text output, so it retains a bit of an edge there. Different tools for different use cases…it'll be interesting to see how long that lasts, though!