Website Quality Drives LLM Quality
I’ve had good luck using the Copilot Researcher agent with Graph-based content. Using Researcher with web content is a different matter, but we shouldn’t be surprised that this is the case.
The problem for any LLM trained on content fetched from web sites is that the internet is full of lies, incomplete thoughts, and false information. Seeking page views in an attempt to prove relevance lies at the heart of the problem. People rush to publish without doing the necessary due diligence, testing, or analysis, with the result that the content that appears is often useless. However, the processes used to populate LLMs don’t recognize that the information they find is poor quality and treat the data in the same way as any other text they encounter. Even a high domain authority rating, a much-beloved tool within the SEO community, is no indicator of quality.
Some of my MVP colleagues fall into this trap by recycling Microsoft content as soon as an announcement appears. Their articles contain no insight, add no value, and are poor duplications of what Microsoft generates. Commentators like Vasil Michev, Martin Heusser, and Jan Bakker set the standard for solid technical analysis and reporting that anyone seeking to cover Microsoft 365 technology should aspire to. Regretfully, too few do.
Auditing Anthropic Claude Use by Copilot
Being able to use the Anthropic models with Copilot is a fairly recent capability, and I wanted to see if using Researcher with the Claude LLM produced better results. I also wanted to check what information Purview audit captures when Copilot uses Claude instead of ChatGPT-5 (from early November 2025, the default for Copilot – see MC1176368 for details). After all, when he spoke to market analysts about Microsoft’s FY26 Q1 results, Satya Nadella said that Purview has audited “16 billion Copilot interactions,” so surely interactions with Claude would be captured.
Tenants must enable third-party access for Anthropic before Claude can be used with Researcher. The settings are configured in the Copilot section of the Microsoft 365 admin center (Figure 1).

Once the tenant is configured, users with Microsoft 365 Copilot licenses can use Claude with Researcher (Figure 2). Using Claude is exactly the same as using ChatGPT with Researcher: submit the best possible prompt, refine the prompt if queried by the agent, and let the agent go ahead and do its stuff.

Checking for Researcher Agent Audit Records
To create a baseline and help me know what to look for, I prompted the Researcher agent with several questions using the default ChatGPT-5 LLM. I could then search for the audit records captured by Purview and check the details of the audit data payload, which is where Researcher reports the resources it used during its research.
As with all Purview audit records, some manipulation is necessary to extract usable data. Here’s the code that I used.
# Find Copilot interaction audit records captured over the last 90 days
[array]$Records = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-90) -EndDate (Get-Date) -Formatted -Operations "CopilotInteraction" -ResultSize 5000 -SessionCommand ReturnLargeSet
If ($Records) {
    # Remove the duplicates returned by ReturnLargeSet and sort by creation date
    $Records = $Records | Sort-Object Identity -Unique
    Write-Host ("{0} audit records found" -f $Records.Count)
    $Records = $Records | Sort-Object { $_.CreationDate -as [datetime] } -Descending
} Else {
    Write-Host "No Copilot interaction audit records found"
    Break
}

$ResearcherReport = [System.Collections.Generic.List[Object]]::new()
ForEach ($Rec in $Records) {
    $AuditData = $Rec.AuditData | ConvertFrom-Json
    # Only interested in interactions processed by the Researcher agent
    If ($AuditData.AgentName -ne "Researcher") {
        Continue
    }
    [array]$CopilotResources = $AuditData.CopilotEventData.AccessedResources
    If ($CopilotResources) {
        # Extract the resources (sites and files) accessed by the agent
        $Resources = [System.Collections.Generic.List[Object]]::new()
        ForEach ($Resource in $CopilotResources) {
            If ($Resource.Action) {
                If ($Resource.SiteURL) {
                    $ResourceName = $Resource.SiteURL
                    $ResourceId   = $null
                } Else {
                    $ResourceName = $Resource.Name
                    $ResourceId   = $Resource.Id.Split("&")[0]
                }
                $ResourcesAccessed = [PSCustomObject][Ordered]@{
                    Action = $Resource.Action
                    Name   = $ResourceName
                    Id     = $ResourceId
                }
                $Resources.Add($ResourcesAccessed)
            }
        }
        $ReportLine = [PSCustomObject][Ordered]@{
            TimeStamp = Get-Date ($AuditData.CreationTime) -Format 'dd-MMM-yyyy HH:mm'
            User      = $AuditData.UserId
            Action    = $AuditData.Operation
            Resources = $Resources.Name -join "; "
        }
        $ResearcherReport.Add($ReportLine)
    } Else {
        # No accessed resources in the payload, so report the message count instead
        If ($null -ne $AuditData.CopilotEventData.Messages) {
            [int]$MessageCount = $AuditData.CopilotEventData.Messages.Count
            If ($MessageCount -le 3) {
                $MessageInfo = $MessageCount
            } Else {
                $MessageInfo = $MessageCount.ToString() + " (likely use of Claude LLM)"
            }
            $ReportLine = [PSCustomObject][Ordered]@{
                TimeStamp = Get-Date ($AuditData.CreationTime) -Format 'dd-MMM-yyyy HH:mm'
                User      = $AuditData.UserId
                Action    = $AuditData.Operation
                Resources = $MessageInfo
            }
            $ResearcherReport.Add($ReportLine)
        }
    }
}
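
After the loop finishes, the report is a normal PowerShell collection, so it can be reviewed or saved like any other data. Here’s a minimal sketch (the grid view title and CSV file name are arbitrary examples rather than part of the original script):

# Review the report interactively and keep a copy for later analysis
$ResearcherReport | Out-GridView -Title "Researcher agent interactions"
$ResearcherReport | Export-Csv -Path .\ResearcherAgentReport.csv -NoTypeInformation -Encoding UTF8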
Figure 3 shows the data generated by the code. Although audit records report the name of the Copilot agent used in an interaction, they capture no indication of the LLM used to process the prompt. However, observation of many uses of Claude indicates that the number of messages reported is higher than for ChatGPT-5 (used for the baseline tests). That’s why some of the interactions reported are tagged as “likely use of Claude LLM.”

I couldn’t find any other signal that might indicate the use of Claude rather than ChatGPT. Originally, I thought that when Researcher used Claude, the audit records didn’t capture the sources (external and internal files) scanned by the agent as it constructed its response, but then the audit record for a Claude run (third record from the top in Figure 3) reported sources.
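To apply the message-count heuristic by itself, something like the following sketch works against the $Records array fetched earlier. The three-message threshold is only the pattern observed in my tests, not a value documented by Microsoft:

# Summarize Researcher interactions by message count; higher counts suggest Claude
$Records | ForEach-Object {
    $Data = $_.AuditData | ConvertFrom-Json
    If ($Data.AgentName -eq "Researcher") {
        [PSCustomObject]@{
            TimeStamp    = Get-Date ($Data.CreationTime) -Format 'dd-MMM-yyyy HH:mm'
            User         = $Data.UserId
            MessageCount = ([array]$Data.CopilotEventData.Messages).Count
            LikelyClaude = (([array]$Data.CopilotEventData.Messages).Count -gt 3)
        }
    }
} | Sort-Object MessageCount -Descending | Format-Table -AutoSize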
Imperfect Intuition
These results are imperfect and are largely formed through intuition, not by extracting immutable data from audit records. It would be so much easier if Microsoft included the LLM used for agent processing along with the agent name in audit records for Copilot interactions. Maybe that will come with time. We can but hope.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365. Only humans contribute to our work!
Do you know if a similar log is captured when using Claude in GitHub?
I do not know. But I don’t think GitHub captures this kind of audit information…