You Can't Automate Power Platform Admin Center Reports. Here's What To Do Instead


There's a report buried in the Power Platform Admin Center called Most Active Users Performing Operations. You find it under Manage, Dataverse, then the Analytics area. It shows you which users are hitting your environment hardest with Create, Update, Delete, and Read operations over a rolling 30 day window.


It's useful. It's also entirely manual. Someone has to log in to PPAC every month, click through to the right environment, hit Download, drop the Excel file into SharePoint or paste it into a Power BI source, and email a summary to whoever cares. If that someone is you, you've probably already wondered whether it can be automated.


This is the post I wish I'd found when I went down that rabbit hole. Short version: the report itself cannot be automated through any supported Microsoft API. The Download button in PPAC is a UI feature with no public endpoint behind it. The Power Platform community has confirmed this in the one and only thread that asks the question, and the conclusion is "manual only".


But the report is just a pre-baked aggregation over the Dataverse audit table. Once you understand that, the automation problem becomes "reproduce the aggregation and email it on a schedule", which is entirely doable with native tools. Here's how I built it.


What the report actually contains


Before deciding how to rebuild something, it pays to understand exactly what you're rebuilding. The PPAC report shows, per user:


  • The count of operations they performed in the selected window

  • Broken down into Create, Update, Delete, and Read


Microsoft defines a Read as either a Retrieve or a RetrieveMultiple SDK call. So when you see Adele Vance at the top of the list with 50,000 reads, that's not 50,000 separate record opens. It's mostly grid loads, where a single view rendering can trigger one RetrieveMultiple plus a swarm of related-record Retrieve calls.


This matters because your custom report will produce the same kind of numbers, and you want to be ready when someone asks "why does this user have 80,000 reads in a month, are they hacking us?" The answer is "no, they're using the system".


Three ways to skin this cat

I considered three approaches:

  • Power Automate Desktop / Selenium against the PPAC UI. Production-grade? No. It clicks the literal Download button, breaks every time Microsoft tweaks the UI, needs an interactive session, and is awful for unattended scenarios.

  • Cloud flow against the Dataverse audit table. Production-grade? Yes. Native, supported, scheduled, and emails an Excel file. The right answer for most cases.

  • Azure Synapse Link for Dataverse plus a Power BI subscription. Production-grade? Yes, and the best option at scale. It continuously exports audit data to Azure Data Lake, which feeds a Power BI report with an email subscription. The right answer when you want trending beyond 30 days, or when audit volume is too high to query directly.

For a single monthly summary, the middle option is the sweet spot. It's also the only one I'm going to cover here in detail. The Synapse Link approach is a different post; if you need that level of robustness, the Microsoft Learn docs on Azure Synapse Link for Dataverse audit data are the place to start.


Prerequisite: confirm auditing is actually on


Three settings need checking in PPAC before any of this works:


  1. In the environment, open Settings, then Audit and logs, then Audit settings. Confirm Start Auditing is enabled.

  2. Confirm Read logs is enabled if you want Retrieve and RetrieveMultiple to count. PPAC's report includes Reads, so you'll need this on to match the figures.

  3. At table level, confirm the tables you care about have Auditing enabled in their solution definition.


Note that Read auditing has a storage cost. For most environments it's fine, but flag it if the read volume is genuinely high. If auditing was only switched on recently, your historical data will be thin and the report will look misleadingly empty until you've accumulated a few weeks.


Solution architecture choices


If you build this in production directly, the flow has to live in an unmanaged solution. Managed solutions are read-only and you can't author flows inside them. That isn't necessarily wrong for a self-contained admin reporting flow, but the cleaner ALM pattern is:


  1. Build the flow in an unmanaged solution in a dev environment

  2. Export as managed

  3. Import to production


Either way, give the solution a name that makes its purpose obvious. Dataverse Usage Reporting or Admin Governance Tools are better than Solution1. Set the flow's owner to a dedicated service account, not your personal user. Personal-owned flows become someone else's problem the moment you change jobs.


Use environment variables for things that change between environments:

  • Recipient distribution list: who gets the email

  • Lookback window in days: usually 30, sometimes 7

  • SharePoint archive location: where to drop historical copies

Use connection references so the connections travel with the solution rather than being baked in.


Step 1: Find the audit table

The first surprise is that the Dataverse connector hides the Audit table from the table picker dropdown in List rows. It's there, just not in the UI list. To get to it:

  1. Add a List rows action

  2. Click the small dropdown arrow on the right of the Table name field

  3. Choose Enter custom value

  4. Type audits (the plural entity set name, all lowercase)


That's it. The rest of the action works normally, including Select Columns, Filter Rows, and pagination settings.


You'll also need the connection identity to have the right privileges: prvReadAuditPartitions and prvReadAudit at organisation scope. System Administrator and System Customizer both include these by default, but if you're using a least-privilege service account (which you should be), give it a custom security role with just those two privileges plus whatever else the flow needs.


A quick way to test the connection has the right access: log in as the service account and see if Audit appears in Advanced Find. If it doesn't, the role is wrong and the flow will silently return zero rows.
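
Another way to sanity-check it, if you're comfortable with the Dataverse Web API, is to sign in as the service account and hit the audits entity set directly in a browser (swap in your own environment URL; this is a smoke test, not part of the flow):

https://yourorg.crm.dynamics.com/api/data/v9.2/audits?$top=1

A row, or an empty value array, coming back means the read privileges are in place. A permissions error means the role still needs work.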


Step 2: Build the flow


The shape of the flow is:

  1. Recurrence trigger (monthly is a sensible default)

  2. List rows against audits with a 30-day filter

  3. Select action to project the rows into something tidy

  4. Length check to handle empty-result runs

  5. Run script against an Excel workbook in SharePoint

  6. Get file content of the now-populated workbook

  7. Send an email with the file attached


List rows configuration

  • Table name: audits (entered as a custom value)

  • Select columns: useridvalue,operation,objecttypecode,createdon,action

  • Filter rows: createdon ge @{addDays(utcNow(), -30)}

  • Row count: 5000, with pagination enabled in the action settings to whatever ceiling matches your volume

Two things to call out here. First, Select columns is doing a lot of work. The audit table has a changedata field that contains the full JSON of every attribute changed on every row. For a usage report you don't need any of that, and excluding it makes the flow run dramatically faster. On a busy environment, the difference can be the flow timing out versus completing in 30 seconds.

Second, the formatted display values come back for free as OData annotations on lookups and option sets. You don't need a second List rows against systemusers to resolve user GUIDs to names. Every audit row carries _userid_value@OData.Community.Display.V1.FormattedValue with the friendly name already populated.
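
To make that concrete, a single raw row from List rows against audits looks roughly like this (the GUID and values are made up for illustration):

{
  "_userid_value": "00000000-0000-0000-0000-000000000001",
  "_userid_value@OData.Community.Display.V1.FormattedValue": "Adele Vance",
  "operation": 1,
  "operation@OData.Community.Display.V1.FormattedValue": "Create",
  "objecttypecode": "account",
  "objecttypecode@OData.Community.Display.V1.FormattedValue": "Account",
  "createdon": "2025-01-15T09:32:11Z",
  "action": 1,
  "action@OData.Community.Display.V1.FormattedValue": "Create"
}

The annotated keys are the ones the Select projection below picks out.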


Select projection


Add a Select action right after List rows. Map to this:

{
  "UserId": "@{item()?['_userid_value']}",
  "UserName": "@{item()?['_userid_value@OData.Community.Display.V1.FormattedValue']}",
  "Operation": "@{item()?['operation@OData.Community.Display.V1.FormattedValue']}",
  "Table": "@{item()?['objecttypecode@OData.Community.Display.V1.FormattedValue']}",
  "Timestamp": "@{item()?['createdon']}"
}

This gives us a clean, typed-feeling array of objects with just the fields we care about. Everything else falls away.
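
The flow outline above also listed a length check between Select and Run script. A minimal version, assuming the Select action keeps its default name, is a Condition whose left side is the expression

length(body('Select'))

with the operator set to "is greater than" and the right side set to 0. When nothing comes back for the window, skip the Excel and email steps (or send a short "no activity" note instead). The script's own empty-array handling means it wouldn't fail anyway, but skipping saves a pointless run.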


Step 3: The Excel workbook


Create a workbook in SharePoint called MostActiveUsers_Current.xlsx. Inside:


  1. Rename the default sheet to Audit

  2. In A1:E1 type the headers: FULL NAME, CREATE COUNT, UPDATE COUNT, DELETE COUNT, READ COUNT

  3. Select A1:E1, Insert tab, Table, tick "My table has headers", OK

  4. With the table selected, Table Design tab, set Table Name to tblAudit


This last step is the one that's easy to miss. Excel auto-names new tables Table1, Table2, etc. The rename to tblAudit is a separate action, and skipping it gives you a script error later that's annoying to debug.


I'd also keep a second file alongside called MostActiveUsers_Template.xlsx with the same structure. The flow doesn't use it directly, but it serves as a known-good recovery artefact if the Current file ever gets into a weird state.


Step 4: The Office Script


This is where the actual aggregation happens. We're taking the long-format audit data (one row per operation) and pivoting it into wide format (one row per user, with operation counts as columns).


Save this as an Office Script (Automate tab in Excel Online > New Script) called AggregateAuditByUserOperation:

interface AuditRow {
  UserId: string;
  UserName: string;
  Operation: string;
  Table: string;
  Timestamp: string;
}

interface UserSummary {
  userName: string;
  createCount: number;
  updateCount: number;
  deleteCount: number;
  readCount: number;
}

function main(workbook: ExcelScript.Workbook, auditJson: string): void {
  // 1. Parse the JSON payload coming from Power Automate
  const rows: AuditRow[] = JSON.parse(auditJson);

  // 2. Exclusion list for system/service accounts that would
  //    otherwise dominate the report
  const excludedUserNames: string[] = ["system", "SYSTEM", "INTEGRATION"];

  // 3. Group by user, pivoting operations into columns
  const users: { [userId: string]: UserSummary } = {};

  for (const row of rows) {
    const userId = row.UserId ?? "(unknown)";
    const userName = row.UserName ?? "(unknown)";
    const operation = (row.Operation ?? "").toLowerCase();

    // Skip excluded accounts (case-insensitive match)
    if (excludedUserNames.some(n => n.toLowerCase() === userName.toLowerCase())) {
      continue;
    }

    if (!users[userId]) {
      users[userId] = {
        userName,
        createCount: 0,
        updateCount: 0,
        deleteCount: 0,
        readCount: 0
      };
    }

    switch (operation) {
      case "create":
        users[userId].createCount++;
        break;
      case "update":
        users[userId].updateCount++;
        break;
      case "delete":
        users[userId].deleteCount++;
        break;
      case "read":
      case "retrieve":
      case "retrievemultiple":
        users[userId].readCount++;
        break;
      // Anything else (Assign, Share, etc.) is ignored
    }
  }

  // 4. Sort descending by total activity
  const sorted: UserSummary[] = Object.values(users)
    .sort((a, b) => {
      const totalB = b.createCount + b.updateCount + b.deleteCount + b.readCount;
      const totalA = a.createCount + a.updateCount + a.deleteCount + a.readCount;
      return totalB - totalA;
    });

  // 5. Get the table and wipe any existing data rows
  const table = workbook.getTable("tblAudit");
  if (!table) {
    throw new Error("Table 'tblAudit' not found. Check the template.");
  }

  const rowCount = table.getRowCount();
  if (rowCount > 0) {
    table.deleteRowsAt(0, rowCount);
  }

  // 6. Write the pivoted rows in one batched call
  if (sorted.length > 0) {
    const data: (string | number)[][] = sorted.map(u => [
      u.userName,
      u.createCount,
      u.updateCount,
      u.deleteCount,
      u.readCount
    ]);
    table.addRows(-1, data);
  }
}

A few notes on what's in there:

JSON.parse(auditJson). We pass the data as a string and parse it inside the script. This is more robust than relying on the connector to marshal a typed array. Avoids "expected object, got null" headaches when the array is empty.


{ [userId: string]: UserSummary }. TypeScript-friendly dictionary. Faster than nested loops when grouping over thousands of rows.


The Read mapping covers three operation names. PPAC's formatted value for a read operation is usually Read, but the raw action field can come through as Retrieve or RetrieveMultiple. Catching all three avoids missing rows.


table.deleteRowsAt(0, rowCount). The first version of this script used getRangeBetweenHeaderAndTotal() to find the existing data. On an empty table, that method returns a non-null range with a row count of 1 even when no rows exist. The getRowCount() approach returns 0 cleanly when the table is empty.


table.addRows(-1, data). Single batched write of all rows. Resist the temptation to loop addRow per record. Per-row writes are an order of magnitude slower because each one is a round trip to the Excel service.


Wiring the script into the flow


After the Select action, add a Run script action (from the Excel Online Business connector):


  • Location: the SharePoint site

  • Document library: the library where the Current file lives

  • File: MostActiveUsers_Current.xlsx

  • Script: AggregateAuditByUserOperation

  • auditJson parameter: the expression string(body('Select'))

The string() expression serialises the Select output array into a JSON string, which is then parsed back into objects inside the script. The round-trip looks redundant but it's the most reliable contract between Power Automate and Office Scripts.


Step 5: Email it out



Add a Get file content for MostActiveUsers_Current.xlsx, then a Send an email (V2) action:

  • To: pulled from the environment variable

  • Subject: Most Active Users - {environment name} - {month}

  • Attachment Name: MostActiveUsers.xlsx

  • Attachment Content: the File Content output from the Get file content action

Optionally, also drop the file into a SharePoint archive folder with a date-stamped filename. That way you build a historical record that survives Dataverse's 30 day audit retention window. Useful when someone asks "what did this look like in February".
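
For the date-stamped copy, an expression along these lines in the archive Create file action's file name field does the job (a sketch; adjust the format string to taste):

concat('MostActiveUsers_', formatDateTime(utcNow(), 'yyyy-MM'), '.xlsx')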


Errors you'll hit, and how to fix them


I hit several when building this. Sharing them so you don't have to.


"The destination path cannot be the same as the source object itself"

This shows up if you use Copy file to clone the template into a Current file in the same folder. The SharePoint Copy file action doesn't let you rename, so trying to copy MostActiveUsers_Template.xlsx into the same folder fails because the destination would have the same name as the source.


Fix: Don't use Copy file. Either use Get file content followed by Create file (with a different name), or skip the reset step entirely and let the script's row-deletion logic handle the clean state. For a workbook only the service account writes to, the latter is fine.


"A file with the name X already exists"


If you use Create file to write over an existing file, modern SharePoint libraries reject it. The Create file action doesn't overwrite, despite what some docs imply.

Fix: Use Update file instead. Or, again, skip the reset step entirely.


"Table 'tblAudit' not found"

The most common cause is that the cells were typed in but never converted to a table. Headers alone don't make a table. You have to select the range and Insert > Table.

The second most common cause is that the table exists but is still called Table1. Excel doesn't auto-rename based on the first column header. You have to rename it manually in the Table Design tab.



Fix: Either of the above. A quick diagnostic Office Script to list every table in the workbook helps when you're not sure:

function main(workbook: ExcelScript.Workbook): string {
  const tables = workbook.getTables();
  if (tables.length === 0) {
    return "No tables found in workbook.";
  }
  return tables.map(t => `${t.getName()} on sheet ${t.getWorksheet().getName()}`).join("; ");
}

Run it standalone from the Code Editor. The return value tells you what the workbook actually contains.


"Table deleteRowsAt: The argument is invalid"


The getRangeBetweenHeaderAndTotal() method has odd behaviour on tables with zero data rows. It returns a non-null range with row count 1, then deleteRowsAt(0, 1) fails because there's nothing at index 0.


Fix: Use table.getRowCount() instead. It returns 0 cleanly on an empty table.


Things to watch for in the output


A few realities of Dataverse audit data that will affect your numbers:


Service accounts dominate. If you've got any inbound integration (server-side sync, an Azure Function writing data, a Power Automate flow connecting as a fixed user), that account will sit at the top of the report with 10x or 100x the volume of any real user. The exclusion list in the script handles the obvious cases (system, INTEGRATION, etc.) but you'll need to extend it based on what shows up in your environment. Or split the report in two: interactive users versus service accounts. The second view is genuinely useful for spotting unexpected spikes.
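
If you'd rather split than exclude, a small helper inside the script can tag each row before grouping. This is only a sketch, and the prefixes are placeholders for whatever your own integration accounts are actually called:

// Hypothetical naming conventions - replace with your own service account names or prefixes
const serviceAccountPrefixes: string[] = ["svc-", "integration", "system"];

function isServiceAccount(userName: string): boolean {
  const lower = userName.toLowerCase();
  return serviceAccountPrefixes.some(prefix => lower.startsWith(prefix));
}

// Inside the grouping loop, route each row into one of two dictionaries
// (interactive vs service) instead of skipping it, then write each
// dictionary to its own table or sheet.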


Read counts look enormous. Because RetrieveMultiple plus per-row Retrieve calls all count, a user who spends an afternoon scrolling through views can rack up tens of thousands of reads without doing anything unusual. This is exactly how PPAC reports it.


System-context operations show as user activity. PPAC counts operations performed in the security context of the user, including ones triggered by background processes running on their behalf. So a user with 5000 updates might have done 100 manual edits and 4900 plugin-triggered cascades. The audit data doesn't easily distinguish these. If that matters, the action field (separate from operation) gives you SDK-message-level granularity.
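
The action column is already in the Select columns list, so surfacing it is one extra line in the Select mapping, along these lines:

  "Action": "@{item()?['action@OData.Community.Display.V1.FormattedValue']}"

That puts the message-level label on each row, ready for any follow-up pivot.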


30 days is the limit. Both PPAC's report and your custom version are bounded by Dataverse's default 30 day audit retention. If your environment has a longer retention window, you can extend the filter. If you want longer than retention allows, you need Synapse Link.


Where to take it from here


This flow gives you a monthly emailed report. That's a solid baseline, but there are obvious next steps:

  • Add a per-table breakdown. The objecttypecode field is already in your Select. Pivot on it for a second sheet showing operation count per table per user. Useful for understanding adoption of specific apps. There's a sketch of the grouping after this list.

  • Add a trending view. Once you've got a few months of historical files in SharePoint, build a Power BI report that reads them all and shows user activity over time. Or skip the SharePoint history and go straight to Synapse Link if you've already got Azure infrastructure.

  • Add anomaly alerting. If a service account suddenly does 10x its normal volume, that's worth an email to security. The pattern is the same flow with a comparison against last month's archived file.

  • Externalise the exclusion list. Instead of hardcoding service accounts in the script, read them from a SharePoint list or a Dataverse table. Easier for non-developers to maintain.
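
For the per-table breakdown mentioned in the first bullet, the grouping is the same shape as the main script, just keyed on user plus table. A minimal sketch, assuming a second Excel table called tblByTable with FULL NAME, TABLE, and OPERATION COUNT columns (that table and its name are my assumption, not something the flow above creates), and that this runs inside main() where rows and workbook already exist:

// Group by user + table instead of just user
const byUserTable: { [key: string]: { userName: string; table: string; count: number } } = {};

for (const row of rows) {
  const key = `${row.UserId}|${row.Table}`;
  if (!byUserTable[key]) {
    byUserTable[key] = { userName: row.UserName, table: row.Table, count: 0 };
  }
  byUserTable[key].count++;
}

// Sort by volume and write to the second table in one batched call
const byTableData: (string | number)[][] = Object.values(byUserTable)
  .sort((a, b) => b.count - a.count)
  .map(e => [e.userName, e.table, e.count]);

const byTableTarget = workbook.getTable("tblByTable");
if (byTableTarget && byTableData.length > 0) {
  byTableTarget.addRows(-1, byTableData);
}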


The whole point of automating something like this isn't just to save the 20 minutes a month of manual work. It's that automated reports get looked at, and manual reports get skipped. The moment the data lands in someone's inbox without anyone having to think about it, you start having actual conversations about user behaviour and system adoption. That's the real win.


The short version


  1. PPAC reports cannot be automated. There's no API behind the Download button.

  2. The data they show comes from the Dataverse audit table, which is queryable via the standard Dataverse connector once you know to use a custom value to find it.

  3. A scheduled Power Automate flow plus an Office Script for grouping gives you a monthly emailed report that matches PPAC's view.

  4. Watch for service-account noise, treat Read counts with care, and remember the 30 day retention boundary.

  5. For real trending and longer history, look at Azure Synapse Link for Dataverse instead.


If you build this and hit something I didn't cover, drop me a line. Always interested in how other people's environments behave.
