Chatbot Delivers Answers, But Usable Responses Are a Different Matter
The launch of the free research version of OpenAI’s ChatGPT generated lots of reactions, with some journalists predicting that using AI in this manner could mark the end of Google search. According to Chris Johns, an economist whose podcast I subscribe to, the chatbot is capable of producing answers that meet the standard of first-year university exams. Closer to home, MVP Doug Finke (author of the ImportExcel PowerShell module) thought the results generated for PowerShell questions were impressive (here’s his YouTube video).
Given the opinions voiced, I decided to sign up and test ChatGPT. My conclusion is that the chatbot is an idiot savant when it comes to technology. The answers generated by ChatGPT are plausible and cogent in some areas, but once it strays outside its comfort zone, the answers become progressively weaker.
The Need for Good Source Material
By its very nature, AI depends on the source material used to train its models. Inside Microsoft 365, a trainable classifier doesn’t work in scenarios like auto-label policies unless the set of source documents used to create the model underpinning the classifier is good enough. In the case of ChatGPT, OpenAI admits that the material used to build the model dates from 2021 or earlier. Given the nature of technology, especially cloud services, out-of-date information leads to bad answers.
A problem also arises when source material is wrong, or contains information that is accurate at a point in time but is later superseded by developments. This happens all the time in blog posts. For example, if you search for something like “How to update Azure AD accounts with PowerShell,” you’ll get a bunch of responses describing how to perform the task using cmdlets from the Azure AD or Microsoft Online Services (MSOL) modules. Posts published as recently as last week still reference these cmdlets, but people working in this space know that Microsoft plans to deprecate both modules in June 2023. The upshot is an answer that is right and works today but is flawed because the code will stop working in six months. This lack of contextual awareness is a flaw of AI that shows through in its answers.
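To make the point concrete, here’s a minimal sketch (my own, not ChatGPT’s output) contrasting the approach many blog posts still recommend with the Microsoft Graph PowerShell SDK equivalent. The account and department values are illustrative.

```powershell
# The answer found in many blog posts: the MSOL module,
# which Microsoft plans to deprecate in June 2023
# Set-MsolUser -UserPrincipalName Jane.Doe@contoso.com -Department "Marketing"

# The answer that will keep working: the Microsoft Graph PowerShell SDK
Connect-MgGraph -Scopes "User.ReadWrite.All"
Update-MgUser -UserId Jane.Doe@contoso.com -Department "Marketing"
```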
Asking About Azure AD Accounts
Take the example shown in Figure 1. The chatbot’s response is inaccurate for two reasons: I asked about finding Azure AD accounts with the Microsoft Graph, yet the answer uses the soon-to-be-deprecated Azure AD module, and there’s no trace of a Graph API request or the Microsoft Graph PowerShell SDK cmdlets.

I have no idea why my question might have violated OpenAI’s content policy; that’s just a glitch. The important thing is that the code generated by ChatGPT works. Even though I wouldn’t use the Azure AD module now, the code runs perfectly and is a valid answer to the question.
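For comparison, here’s a minimal sketch of the kind of answer I expected: Microsoft Graph PowerShell SDK cmdlets to find accounts (the filter on enabled accounts is my own illustrative choice, not something ChatGPT produced).

```powershell
# Connect with the permission needed to read user accounts
Connect-MgGraph -Scopes "User.Read.All"

# Find enabled Azure AD accounts and report their names
Get-MgUser -Filter "accountEnabled eq true" -All |
    Select-Object DisplayName, UserPrincipalName
```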
The Microsoft Graph PowerShell SDK existed in 2021, so I decided to check what the chatbot knew about it. Figure 2 shows the result. It’s a good example of ChatGPT’s ability to generate a reasonably cogent (if wordy) answer. The text reads rather like the response you’d get from a Microsoft marketing person, but that’s another story.

Testing a Real-Life Question
As a test of a real-life question, I took one about mailbox archiving from Practical365.com and fed it to ChatGPT. The answer (Figure 3) is just plain wrong. First, only Exchange Online mailbox retention policies operate against archive mailboxes. Second, neither Microsoft 365 retention policies nor Exchange Online mailbox retention policies (there is no such thing as an “online archiving policy”) operate on the basis of mailbox size. Retention, including moving items to the archive, is driven by item age. Like a confident assertion from a consultant, the response might well be accepted by someone who doesn’t know the technology. The text seems influenced by the way that Exchange Online expandable archives work, but the context is all wrong and the answer isn’t at all helpful.
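To underline the point, here’s a minimal sketch of how age-based movement to the archive is actually configured with Exchange Online PowerShell (the tag and policy names are illustrative):

```powershell
# Connect to Exchange Online
Connect-ExchangeOnline

# Create a retention tag that moves items to the archive when they
# reach 365 days old. Retention works on item age, not mailbox size
New-RetentionPolicyTag -Name "Move to Archive After 1 Year" -Type All `
    -RetentionEnabled $true -AgeLimitForRetention 365 `
    -RetentionAction MoveToArchive

# Include the tag in a mailbox retention policy that can then be
# assigned to mailboxes with Set-Mailbox
New-RetentionPolicy -Name "Archive After One Year" `
    -RetentionPolicyTagLinks "Move to Archive After 1 Year"
```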

Finally, I asked about the world’s best Office 365 book. I was amused that ChatGPT recommended Office 365 for IT Pros but got the authors wrong. I have never met Ben Curry and he’s never been involved with the book, but hey, it’s still a highly plausible answer.

Interesting but Flawed
The bottom line is captured in OpenAI’s admission that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” This, allied to another admitted flaw (“The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI”), means that you can’t trust the chatbot’s responses to any question about technology that evolves quickly. Answering some basic PowerShell questions is fine; seeking help to administer Office 365 is quite another matter.
ChatGPT is interesting and worthwhile technology that points to the way we might seek information in the future. Microsoft and OpenAI have been working together since 2019, backed by a $1 billion investment, and OpenAI trained the ChatGPT model on Azure. With that kind of backing, I’m sure that OpenAI will improve the model and increase the accuracy of the answers it generates. But for now, I think I shall stick with querying Google and sorting the wheat from whatever chaff Google replies with.
Stay updated with developments across the Microsoft 365 ecosystem by subscribing to the Office 365 for IT Pros eBook. We do the research to make sure that our readers understand the technology.