Enhance Grafana Dashboard With AI Quality & Audit Panels
Hey guys! Let's dive into how to spice up your Grafana dashboard by adding a new "AI Quality & Audit" row with seven shiny new panels. This update, based on the NEM-2768 epic, will give you some seriously cool insights into your AI's performance, using data from those AI audit API endpoints. Get ready to level up your monitoring game!
1. Setting the Stage: The Dashboard Variable
First things first, we need to add a dashboard variable called `$audit_days`. This little gem lets you control the time period for your AI audit statistics: you can choose between 7, 14, 30, or 90 days. Here's how you create it:
```json
{
  "current": { "selected": true, "text": "7", "value": "7" },
  "description": "Time period for AI audit statistics",
  "hide": 0,
  "includeAll": false,
  "label": "Audit Period (days)",
  "multi": false,
  "name": "audit_days",
  "options": [
    { "selected": true, "text": "7", "value": "7" },
    { "selected": false, "text": "14", "value": "14" },
    { "selected": false, "text": "30", "value": "30" },
    { "selected": false, "text": "90", "value": "90" }
  ],
  "query": "7,14,30,90",
  "queryValue": "",
  "skipUrlSync": false,
  "type": "custom"
}
```
You'll need to insert this into the `templating.list` array in your `consolidated.json` file. The list is usually empty (around line 11550), so you should find it easily. This variable is super handy because it lets you quickly adjust the data range displayed in your panels.
2. Placing the New Row: Where Does It Go?
Next, we're going to insert a new row into your Grafana dashboard. This row, titled "AI Quality & Audit," will house all the new panels. Place it after the "AI Inference" row (which ends around y:81) and before the "Detection Analytics" row (id 50, which also starts at y:81). The best approach is to make it a collapsed row initially to reduce clutter.
Here's the structure of the row in JSON:
```json
{
  "collapsed": true,
  "gridPos": { "h": 1, "w": 24, "x": 0, "y": 82 },
  "id": 270,
  "panels": [ /* 7 panels below */ ],
  "title": "AI Quality & Audit",
  "type": "row"
}
```
Make sure to start the new panel IDs at 270 to avoid any conflicts with existing panels. This way, you can add all seven panels without affecting the existing elements.
3. Building the Panels: The Heart of the Matter
Now comes the fun part: adding the seven panels to the "AI Quality & Audit" row. Each panel will visualize different aspects of your AI's performance. Here's a breakdown of each one:
Panel 1: Quality Score Trend
- Type: `timeseries`
- Size: `{ "h": 8, "w": 8, "x": 0, "y": 83 }`
- Datasource: `Backend-API` (using the `marcusolsson-json-datasource` plugin)
- Query: the URL is `/api/ai-audit/stats?days=${audit_days}`, with JSONPath `$.audits_by_day[*]`. It will display `date` (time) and `avg_quality_score` (number).
- Y-axis: set the minimum to 1 and the maximum to 5.
- Thresholds: red for the base, yellow at 3, and green at 4.
- Title: "Quality Score Trend"
This panel gives you a time-series view of your AI's quality score, helping you spot trends and potential issues.
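Putting that together, here's a minimal sketch of what the Panel 1 JSON could look like. The target shape (`urlPath`, `fields`, `jsonPath`) follows the marcusolsson-json-datasource query format, and the `backend-api` uid is an assumption (see the datasource section later); adjust both to match your Grafana version and provisioning:

```json
{
  "id": 271,
  "type": "timeseries",
  "title": "Quality Score Trend",
  "gridPos": { "h": 8, "w": 8, "x": 0, "y": 83 },
  "datasource": { "type": "marcusolsson-json-datasource", "uid": "backend-api" },
  "targets": [
    {
      "refId": "A",
      "urlPath": "/api/ai-audit/stats?days=${audit_days}",
      "fields": [
        { "jsonPath": "$.audits_by_day[*].date", "type": "time" },
        { "jsonPath": "$.audits_by_day[*].avg_quality_score", "type": "number" }
      ]
    }
  ],
  "fieldConfig": {
    "defaults": {
      "min": 1,
      "max": 5,
      "thresholds": {
        "mode": "absolute",
        "steps": [
          { "color": "red", "value": null },
          { "color": "yellow", "value": 3 },
          { "color": "green", "value": 4 }
        ]
      }
    },
    "overrides": []
  }
}
```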
Panel 2: Enrichment Utilization
- Type:
gauge - Size:
{ "h": 8, "w": 4, "x": 8, "y": 83 } - Query: The URL is
/api/ai-audit/stats?days=${audit_days}, with JSONPath$.avg_enrichment_utilization. - Unit: Set to
percentunit(0-1, which is converted to 0%-100%). - Thresholds: Use red for the base, yellow at 0.5, and green at 0.75.
- Title: "Enrichment Utilization"
This gauge shows how effectively your AI is using enrichment features, like adding more context to the data.
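A hedged sketch of the gauge panel JSON follows (the datasource uid and target shape are assumptions, as before). Panel 3 below is identical apart from its JSONPath, thresholds, position, and title:

```json
{
  "id": 272,
  "type": "gauge",
  "title": "Enrichment Utilization",
  "gridPos": { "h": 8, "w": 4, "x": 8, "y": 83 },
  "datasource": { "type": "marcusolsson-json-datasource", "uid": "backend-api" },
  "targets": [
    {
      "refId": "A",
      "urlPath": "/api/ai-audit/stats?days=${audit_days}",
      "fields": [{ "jsonPath": "$.avg_enrichment_utilization", "type": "number" }]
    }
  ],
  "fieldConfig": {
    "defaults": {
      "unit": "percentunit",
      "min": 0,
      "max": 1,
      "thresholds": {
        "mode": "absolute",
        "steps": [
          { "color": "red", "value": null },
          { "color": "yellow", "value": 0.5 },
          { "color": "green", "value": 0.75 }
        ]
      }
    },
    "overrides": []
  }
}
```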
Panel 3: Consistency Rate
- Type:
gauge - Size:
{ "h": 8, "w": 4, "x": 12, "y": 83 } - Query: The URL is
/api/ai-audit/stats?days=${audit_days}, with JSONPath$.avg_consistency_rate. - Unit: Set to
percentunit. - Thresholds: Use red for the base, yellow at 0.8, and green at 0.9.
- Title: "Consistency Rate"
This gauge measures how consistent your AI's results are, which is crucial for reliability.
Panel 4: Audit Coverage
- Type:
stat - Size:
{ "h": 8, "w": 4, "x": 16, "y": 83 } - Query: The URL is
/api/ai-audit/stats?days=${audit_days}, with JSONPath$. Use a transformation to calculatefully_evaluated_events / total_events. - Unit: Set to
percentunit. - Thresholds: Use red for the base, yellow at 0.7, and green at 0.9.
- Title: "Audit Coverage"
- Description: "% of events fully evaluated"
This panel shows what percentage of your AI's events are being fully evaluated by the audit process, which is essential to track efficiency.
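The division can be done with Grafana's `calculateField` transformation in binary mode. A sketch, again assuming the `backend-api` uid and the JSON API plugin's target format:

```json
{
  "id": 274,
  "type": "stat",
  "title": "Audit Coverage",
  "description": "% of events fully evaluated",
  "gridPos": { "h": 8, "w": 4, "x": 16, "y": 83 },
  "datasource": { "type": "marcusolsson-json-datasource", "uid": "backend-api" },
  "targets": [
    {
      "refId": "A",
      "urlPath": "/api/ai-audit/stats?days=${audit_days}",
      "fields": [
        { "jsonPath": "$.fully_evaluated_events", "type": "number" },
        { "jsonPath": "$.total_events", "type": "number" }
      ]
    }
  ],
  "transformations": [
    {
      "id": "calculateField",
      "options": {
        "mode": "binary",
        "binary": { "left": "fully_evaluated_events", "operator": "/", "right": "total_events" },
        "alias": "coverage",
        "replaceFields": true
      }
    }
  ],
  "fieldConfig": {
    "defaults": {
      "unit": "percentunit",
      "thresholds": {
        "mode": "absolute",
        "steps": [
          { "color": "red", "value": null },
          { "color": "yellow", "value": 0.7 },
          { "color": "green", "value": 0.9 }
        ]
      }
    },
    "overrides": []
  }
}
```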
Panel 5: Model Contribution Heatmap
- Type:
table(a heatmap using colored cells) - Size:
{ "h": 10, "w": 12, "x": 0, "y": 91 } - Query: The URL is
/api/ai-audit/stats?days=14, with JSONPath$.audits_by_day[*]. Include fields likedate,model_contributions.rtdetr, and others. - Column styling: Apply gradient colors based on values.
- Title: "Model Contributions by Day"
This heatmap lets you see the contribution of different models over time, revealing their impact on your AI's overall performance. It uses a table with colored cells.
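For the colored-cell effect, recent Grafana versions support a gradient color-background cell option under `custom.cellOptions`. A sketch with the field list truncated to two models (the exact `cellOptions` keys vary by Grafana version, and the datasource uid is an assumption):

```json
{
  "id": 275,
  "type": "table",
  "title": "Model Contributions by Day",
  "gridPos": { "h": 10, "w": 12, "x": 0, "y": 91 },
  "datasource": { "type": "marcusolsson-json-datasource", "uid": "backend-api" },
  "targets": [
    {
      "refId": "A",
      "urlPath": "/api/ai-audit/stats?days=14",
      "fields": [
        { "jsonPath": "$.audits_by_day[*].date", "type": "string" },
        { "jsonPath": "$.audits_by_day[*].model_contributions.rtdetr", "type": "number" },
        { "jsonPath": "$.audits_by_day[*].model_contributions.florence", "type": "number" }
      ]
    }
  ],
  "fieldConfig": {
    "defaults": {
      "custom": { "cellOptions": { "type": "color-background", "mode": "gradient" } }
    },
    "overrides": [
      {
        "matcher": { "id": "byName", "options": "date" },
        "properties": [{ "id": "custom.cellOptions", "value": { "type": "auto" } }]
      }
    ]
  }
}
```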
Panel 6: Quality Correlation Leaderboard
- Type:
table - Size:
{ "h": 10, "w": 6, "x": 12, "y": 91 } - Query: The URL is
/api/ai-audit/leaderboard?days=${audit_days}, with JSONPath$.entries[*]. Include fields:model_name,contribution_rate,quality_correlation, andevent_count. - Column overrides: For
contribution_rateandquality_correlation, set the unit topercentunitand the cell display togauge. - Sort: Sort by
quality_correlationin descending order. - Title: "Model Quality Correlation"
This table ranks models based on their quality correlation, showing which models contribute most to the overall quality.
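A sketch of the leaderboard table, using `byName` field overrides for the gauge cells and the table panel's `sortBy` option (datasource uid and target shape assumed, as in the other panels):

```json
{
  "id": 276,
  "type": "table",
  "title": "Model Quality Correlation",
  "gridPos": { "h": 10, "w": 6, "x": 12, "y": 91 },
  "datasource": { "type": "marcusolsson-json-datasource", "uid": "backend-api" },
  "targets": [
    {
      "refId": "A",
      "urlPath": "/api/ai-audit/leaderboard?days=${audit_days}",
      "fields": [
        { "jsonPath": "$.entries[*].model_name", "type": "string" },
        { "jsonPath": "$.entries[*].contribution_rate", "type": "number" },
        { "jsonPath": "$.entries[*].quality_correlation", "type": "number" },
        { "jsonPath": "$.entries[*].event_count", "type": "number" }
      ]
    }
  ],
  "options": {
    "sortBy": [{ "displayName": "quality_correlation", "desc": true }]
  },
  "fieldConfig": {
    "defaults": {},
    "overrides": [
      {
        "matcher": { "id": "byName", "options": "contribution_rate" },
        "properties": [
          { "id": "unit", "value": "percentunit" },
          { "id": "custom.cellOptions", "value": { "type": "gauge" } }
        ]
      },
      {
        "matcher": { "id": "byName", "options": "quality_correlation" },
        "properties": [
          { "id": "unit", "value": "percentunit" },
          { "id": "custom.cellOptions", "value": { "type": "gauge" } }
        ]
      }
    ]
  }
}
```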
Panel 7: Recommendation Frequency
- Type:
barchart - Size:
{ "h": 10, "w": 6, "x": 18, "y": 91 } - Query: The URL is
/api/ai-audit/recommendations?days=${audit_days}, with JSONPath$.recommendations[*]. Include fields:category,frequency, andpriority. - Orientation: Set to horizontal.
- Color by field: Color based on
priority(high=red, medium=yellow, low=green). - Title: "Recommendation Frequency"
This bar chart visualizes the frequency of different recommendations, helping you identify and address the most critical issues. The priority levels are color-coded for easy understanding.
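A sketch of the bar chart panel. The `colorByField` option and the value-mapping colors are the parts most likely to need tweaking for your Grafana version, and the datasource uid is an assumption:

```json
{
  "id": 277,
  "type": "barchart",
  "title": "Recommendation Frequency",
  "gridPos": { "h": 10, "w": 6, "x": 18, "y": 91 },
  "datasource": { "type": "marcusolsson-json-datasource", "uid": "backend-api" },
  "targets": [
    {
      "refId": "A",
      "urlPath": "/api/ai-audit/recommendations?days=${audit_days}",
      "fields": [
        { "jsonPath": "$.recommendations[*].category", "type": "string" },
        { "jsonPath": "$.recommendations[*].frequency", "type": "number" },
        { "jsonPath": "$.recommendations[*].priority", "type": "string" }
      ]
    }
  ],
  "options": { "orientation": "horizontal", "colorByField": "priority" },
  "fieldConfig": {
    "defaults": {
      "mappings": [
        {
          "type": "value",
          "options": {
            "high": { "color": "red", "index": 0 },
            "medium": { "color": "yellow", "index": 1 },
            "low": { "color": "green", "index": 2 }
          }
        }
      ]
    },
    "overrides": []
  }
}
```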
4. Configuring the JSON Datasource
Make sure your Grafana is properly configured to use the `Backend-API` datasource, which relies on the `marcusolsson-json-datasource` plugin. It's provisioned, but double-check that the UID is set correctly. Here's what you need to check:
- Name: `Backend-API`
- Type: `marcusolsson-json-datasource`
- URL: `http://backend:8000`
If the UID is missing, you might need to add `uid: backend-api` to the datasource entry in your `prometheus.yml` datasource configuration file, or look up the assigned UID via the Grafana API.
5. Adjusting Panel Positions: Keeping Things Tidy
After inserting the new row and its panels, you may need to adjust the y-positions of subsequent rows to avoid overlaps. Since the collapsed row has a height of 1, every following panel shifts its y position down by one unit. Inserting the row collapsed keeps this disruption to a minimum.
6. API Response Schemas (for Reference)
Here are some sample API responses to help you understand the data structure:
GET /api/ai-audit/stats
```json
{
  "total_events": 1250,
  "audited_events": 1100,
  "fully_evaluated_events": 950,
  "avg_quality_score": 4.1,
  "avg_consistency_rate": 0.92,
  "avg_enrichment_utilization": 0.78,
  "model_contribution_rates": {
    "rtdetr": 0.98,
    "florence": 0.85,
    "clip": 0.72,
    "violence": 0.15,
    "clothing": 0.72,
    "vehicle": 0.45,
    "pet": 0.12,
    "weather": 0.95,
    "image_quality": 0.88,
    "zones": 0.65,
    "baseline": 0.42,
    "cross_camera": 0.18
  },
  "audits_by_day": [
    {
      "date": "2026-01-01",
      "day_of_week": "Wednesday",
      "count": 45,
      "avg_quality_score": 4.2,
      "avg_enrichment_utilization": 0.78,
      "model_contributions": { "rtdetr": 45, "florence": 38, ... }
    }
  ]
}
```
GET /api/ai-audit/leaderboard
```json
{
  "entries": [
    {
      "model_name": "rtdetr",
      "contribution_rate": 0.98,
      "quality_correlation": 0.85,
      "event_count": 1200
    }
  ],
  "period_days": 7
}
```
GET /api/ai-audit/recommendations
```json
{
  "recommendations": [
    {
      "category": "missing_context",
      "suggestion": "Add time since last motion event",
      "frequency": 25,
      "priority": "high"
    }
  ],
  "total_events_analyzed": 500
}
```
7. Testing and Validation
After implementing these changes, make sure everything works correctly:
- Start your stack: use `podman-compose -f docker-compose.prod.yml up -d`.
- Open Grafana: go to `http://localhost:3002`.
- Navigate to the dashboard: find the consolidated dashboard.
- Verify the variable: check that the `$audit_days` dropdown appears.
- Expand the row: open the "AI Quality & Audit" row.
- Check the panels: confirm that all seven panels are rendering data (or show "No data" gracefully if no audit data is available).
- Test the variable: change the `$audit_days` variable and make sure the panels update accordingly.
- Validate JSON: make sure your JSON configuration is valid using a validator, e.g. `jq . consolidated.json`.
8. Acceptance Criteria: Checklist for Success
- ✅ `$audit_days` variable added with the correct options.
- ✅ "AI Quality & Audit" row added after "AI Inference".
- ✅ All seven panels are configured with the right queries.
- ✅ Panels use the correct visualizations and thresholds.
- ✅ JSON validates without errors.
- ✅ Grafana loads the dashboard without issues.
- ✅ Existing panels remain unaffected.
9. Final Thoughts: Tips and Tricks
- JSON is key: be very careful when editing the `consolidated.json` file. It's a large file (~310KB), so small mistakes can be a pain.
- Formatting is your friend: use a JSON formatter and validator before you commit your changes to avoid headaches.
- Panel IDs are unique: double-check that all panel IDs are unique and start at 270 or higher.
- Datasource matters: make sure the `marcusolsson-json-datasource` plugin is correctly installed and configured.
That's it, guys! You should now have a powerful new section in your Grafana dashboard that provides detailed insights into your AI's quality and performance. Happy monitoring!