Why Do Power BI Copilot AI Instructions Seem To Work Better in Desktop Than In The Service?

I’m spending a lot of time with organisations testing Power BI Copilot at the moment, and something I hear a lot is that Copilot works well in Desktop but that when you publish your model to the Power BI Service the results are a lot more inconsistent. One reason for this is how updates to AI Instructions are applied after you publish your semantic model.

Let’s see an example of this. Consider the following semantic model consisting of a single table with two measures, Sales Amount and Profit Amount:

The semantic model has the following AI Instructions applied:

//Instructions V1
xyz means the measure Sales Amount

The instructions here don’t make much sense, but using a meaningless term like “xyz” makes it easier to test whether Copilot is using an instruction or not.

In Power BI Desktop, the following Copilot prompt returns the results you’d expect with xyz understood as Sales Amount:

show xyz

If you publish this model to an empty workspace in the Power BI Service then this prompt returns the same correct result.

[By the way, the message “Copilot is currently syncing with the data model. Results may be inconsistent until the sync is finished” will be the subject of a future blog post. It’s not connected to what I’m describing in this post: it relates to how Copilot needs to index the text values in your semantic model, which is a separate process]

So far so good. Going back to Power BI Desktop, I changed the AI Instructions like so:

//Instructions V2
xyz means the measure Sales Amount
kqb means the measure Profit Amount

…then closing and reopening the Copilot pane in Desktop and entering the prompt:

show kqb

…also returns the result you would expect, with kqb understood as Profit Amount.

However, if you publish the same model up to the same workspace as before – so you are overwriting the previous version of the model in the Service – and then use the same prompt immediately after publishing:

…Copilot returns an incorrect result: it does not understand what “kqb” means. Why?

After you publish changes to a Power BI semantic model it can take a few minutes, sometimes up to an hour, for updates to the AI Instructions to be applied. This means if you’re testing Power BI Copilot in the Service you may need to be patient if you want to see the impact of any changes to AI Instructions, or do your testing in Power BI Desktop.

How can you know whether the latest version of your AI Instructions is being used in the Service when you do your testing? In the Copilot pane in both Desktop and the Service there is an option to download diagnostics from the “…” menu in the top right-hand corner. This downloads a text file with diagnostic data in JSON format that contains a lot of useful information; most importantly, it contains the AI Instructions used for the current Copilot session. The file contents aren’t documented anywhere, I guess because the structure could change at any time and it’s primarily intended for use by support, but there’s no reason why you as a developer shouldn’t look at it and use it.

For the second example in the Service above, where Copilot returned the wrong result, here’s what I found at the end of the diagnostics file:

As you can see the changes I made to the AI Instructions before publishing the second time had not been applied when I ran the prompt asking about kqb.

After waiting a while, and without making any other changes to the model, the same prompt eventually returned the correct results in the Service:

Looking at the diagnostics file for this Copilot session it shows that the new version of the AI Instructions was now being used:

Since looking in the diagnostics file is the only way (at least that I know of right now) to tell what AI Instructions are being used at any given time, it makes sense to do what I’ve done here and put a version number at the top of the instructions so you can tell easily whether your most recent changes are in effect.
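If you follow the version-comment convention above, you don’t even need to read the whole diagnostics file by eye: because the structure of the file is undocumented and could change, a simple search of the raw text for the version comment is safer than relying on any particular JSON key. Here’s a minimal Python sketch of that idea; the `sample` string is a hypothetical fragment for illustration, not the real diagnostics format:

```python
import re

def find_instruction_version(diagnostics_text):
    """Return the number from a '//Instructions Vn' version comment
    in the raw diagnostics text, or None if no such comment is found."""
    match = re.search(r"//Instructions V(\d+)", diagnostics_text)
    return int(match.group(1)) if match else None

# Hypothetical fragment standing in for the downloaded diagnostics file;
# the actual file structure is undocumented and may differ.
sample = '{"aiInstructions": "//Instructions V2\\nxyz means the measure Sales Amount"}'
print(find_instruction_version(sample))  # prints 2
```

Because it treats the file as plain text rather than parsing it as JSON, this check should keep working even if the diagnostics structure changes, as long as your instructions start with a version comment.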

One last point to mention is that if you’re deploying semantic models using Deployment Pipelines or Git, the docs state that you need to refresh your model after a deployment for changes to AI Instructions to take effect and that for DirectQuery or Direct Lake (but not Import) mode models this only works once per day.
