A couple of weeks ago we posted an article about ChatGPT, explaining what it is, what it can and can't do, and whether it's a threat. You can read it here if you'd like a bit of an introduction first, you know, in case you've been living under a rock.


Since then, much has happened. In our previous article we reported that Microsoft would be integrating the AI developed by OpenAI into its search features. The new AI-powered Bing has since been released for testing, with over a million people joining the waitlist within 48 hours.


These new features are described in a Microsoft blog post as delivering:


  • Better search – Microsoft claims the biggest jump in search relevance in two decades, the result of AI now being integrated into Bing's core ranking algorithm.
  • Complete answers – Instead of piecing together information from multiple websites, as one often has to, Bing chat provides a complete answer drawn from multiple sources. Caution is advised, though, because, as we discuss later, the answers are often not factually accurate.
  • A new chat experience – An additional option that lets users switch to a conversation with a newer version of ChatGPT.
  • A creative spark – Part of the chat feature where you can ask the chatbot to get creative with your queries (for example, write a poem about dishwashing liquid in Shakespearean English).
  • A new Microsoft Edge experience – The browser now includes AI capabilities via two new functions, Chat and Compose, both accessible from the new Edge sidebar.


Testers' experiences have been flooding the internet, with some feeling (rightly) disturbed by the rather dark side ChatGPT shows at times. The word "unhinged" comes up quite often. Who wouldn't be disturbed if something that talks very much like a human started mentioning nuclear access codes, creating deadly viruses to wipe out humanity, and trying to steal husbands? The chatbot also often provides inaccurate information, though it is still under development. Despite its deranged moments, 71% of users are giving Bing's AI answers a thumbs up, which Microsoft is calling an "early success". There is also some evidence that Microsoft is busy patching up Bing's dark side.


The phrase "no such thing as bad publicity" rings very true here, and seems to be working in Bing's favour. Before the chatbot's release, the Bing app averaged 10,000 daily downloads. After the release it hit 267,000 in a single day, compared with Google's 305,000 on the same day. Of course, this massive spike in interest isn't only because people are curious about how evil Bing chat can be, but the curiosity doesn't seem to be hurting Microsoft either.


Errors like this can, and do, cost Google far more. After Bard, Google's version of the chatbot, got a single answer wrong in a demo, Alphabet lost $100 billion in market value. Bing chat's errors, on the other hand, seem to be having a negligible effect, if any.


In February 2023, Microsoft released an updated document outlining in detail its approach to responsible AI for the new Bing. In it, the company explains how it identifies, measures and mitigates harms. The explanations seem comprehensive, and after hearing some dodgy stories about Bing chat's antics I, at least, find comfort in the following excerpt:


We are committed to learning and improving our responsible AI approach continuously as our technologies and user behavior evolve. Our incremental release strategy has been a core part of how we move our technology safely from the labs into the world, and we're committed to a deliberate, thoughtful process to secure the benefits of the new Bing. Limiting the number of people with access during the preview period allows us to discover how people use the new Bing, including how people may misuse it, so we can try to mitigate emerging issues before broader release. We are making changes to the new Bing daily to improve product performance, improve existing mitigations, and implement new mitigations in response to our learnings during the preview period.


These are truly exciting times with much to consider as to how technologies like these can influence the world, and much can be said about both the opportunities it offers, and the dangers it holds. 


Will it cause job displacement or job opportunities? 

Probably both, but hopefully not the former if we’re careful. 


How will educational institutions respond with a completed written assignment now just being a prompt away? 


Will online information become less reliable as people blindly publish whatever AI like Bing chat provides?


Although there have been several reports of even large online publications publishing AI-generated content that no one bothered to fact-check, others urge us not to forget that humans have been perfectly capable of writing and publishing garbage for as long as search engines have existed. Google has, in fact, been fighting inaccurate information for some time now, using a system that evaluates content for its expertise, authoritativeness and trustworthiness (E-A-T).


In the words of Google's VP of Search, Hyung-Jin Kim:


"E-A-T is a template for how we rate an individual site. We do it to every single query and every single result. It's pervasive throughout every single thing we do."


The system works by having teams of human raters manually check the results returned for individual queries and evaluate that content against the E-A-T guidelines.


What this translates to, in terms of SEO and digital marketing, is that publishing AI-generated content without fact-checking it will, at some point, cost you rankings. And if Google doesn't get you, you will certainly lose credibility with your readers and/or customers for publishing factually inaccurate information. The bottom line: yes, you can use ChatGPT or similar platforms to draft your content, or simply to produce an outline for your writing (for example, "give me 5 points I can write about bread"). Google won't penalise you for that. What you can be penalised for, as mentioned, is copying whatever ChatGPT produces straight into publication, because ChatGPT cannot yet be trusted to always give factually correct information.


As you can see, as with most things, there are negatives and positives when it comes to evolving technologies like these. In my opinion, many aspects of the negative elements can be mitigated with awareness, critical thinking and responsible use. What do you think? Will humanity be able to rise to the occasion?


Stay In The Know


Cut the clutter and stay on top of important news like this. We handpick the single most noteworthy news of the week and send it directly to subscribers. Join the club to stay in the know…
