FAQ - Queue Depth Trend Visualization
Question
Can I visualize queue depth and consumer lag trends over time?
Answer
Yes. Queue depth and consumer metrics are stored in Nodinite Resources with timestamps, so you can build dashboards showing queue depth trends (hourly/daily/weekly), consumer lag heatmaps, and peak depth periods. You can also export metrics via the Web API to Power BI for advanced analytics and capacity planning.
Visualization Capabilities
Built-in Nodinite Dashboards
Create custom dashboards in the Web Client:
- Queue depth over time—line chart showing message count trends
- Consumer lag heatmap—identify queues with slowest consumers
- Peak depth analysis—detect when queues hit maximum depth (daily/weekly patterns)
- Memory/disk usage correlation—correlate queue depth with node resource consumption
Dashboard Components
| Widget Type | Metrics | Use Case |
|---|---|---|
| Line chart | Queue depth, message rates, consumer count | Identify trends and patterns over hours/days/weeks |
| Bar chart | Queue depth comparison across multiple queues | Find outliers and bottlenecks |
| Heatmap | Consumer lag by queue and time | Detect slow consumers or processing degradation |
| Gauge | Current vs. threshold (e.g., 2,500 / 5,000 max) | Real-time status at-a-glance |
| Table | Queue name, depth, consumer count, lag | Sortable list for detailed investigation |
External Analytics Integration
Export metrics via Web API to external tools:
- Power BI—advanced analytics, custom calculations, executive reporting
- Grafana—real-time dashboards with Prometheus-style queries
- Excel/CSV—capacity planning, historical analysis, SLA reporting
Example Use Cases
Capacity Planning
Question: How many consumers are needed to handle peak load?
Analysis:
- Query queue depth metrics for past 90 days
- Identify peak periods (e.g., Monday 9-11 AM: avg 3,200 messages, Friday 2-4 PM: avg 2,800 messages)
- Calculate required consumer capacity: 3,200 messages ÷ 10 msg/sec (2 consumers × 5 msg/sec each) = 320 seconds ≈ 5.3 minutes to clear with the current 2 consumers
- Decision: Scale to 4 consumers (clears the peak in ≈ 2.7 minutes, within the acceptable SLA)
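The drain-time arithmetic above can be sketched as a small calculation. The per-consumer rate of 5 msg/sec is an assumption chosen to reproduce the 5.3-minute figure; substitute your own measured throughput:

```python
# Hedged sketch of the capacity-planning arithmetic above.
# RATE_PER_CONSUMER is an assumed throughput, not a Nodinite-provided value.
PEAK_DEPTH = 3200          # messages at the Monday 9-11 AM peak
RATE_PER_CONSUMER = 5.0    # msg/sec per consumer (assumed)

def minutes_to_clear(depth: int, consumers: int,
                     rate: float = RATE_PER_CONSUMER) -> float:
    """Time in minutes to drain a backlog with N consumers at a fixed rate."""
    return depth / (consumers * rate) / 60

print(f"2 consumers: {minutes_to_clear(PEAK_DEPTH, 2):.1f} min")  # 5.3 min
print(f"4 consumers: {minutes_to_clear(PEAK_DEPTH, 4):.1f} min")  # 2.7 min
```

The same helper lets you test other scaling decisions (e.g., whether 3 consumers would meet the SLA) before changing anything in production.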
Performance Degradation Detection
Scenario: Consumer lag increases gradually over weeks—indicates memory leak or resource exhaustion.
Dashboard shows:
- Week 1: Average consumer lag 30 seconds
- Week 2: Average consumer lag 45 seconds
- Week 3: Average consumer lag 75 seconds
- Week 4: Average consumer lag 140 seconds (4.7× the week-1 baseline)
Action: Investigation reveals memory leak in consumer application—fixed before complete failure.
Seasonal Pattern Analysis
Dashboard reveals:
- Daily pattern: Queue depth peaks at 9 AM (batch jobs start) and 5 PM (end-of-day processing)
- Weekly pattern: Monday has 30% higher depth than other days (weekend backlog)
- Monthly pattern: Month-end has 3× normal depth (financial close processes)
Optimization: Pre-scale consumers on Mondays and month-end days before backlog occurs.
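The weekly pattern above can be surfaced by grouping exported depth samples by weekday. A minimal sketch with synthetic sample data (real values would come from the Nodinite Web API export described below in Configuration):

```python
# Hedged sketch: average queue depth per weekday from (timestamp, depth) samples.
# The sample values are synthetic, for illustration only.
from collections import defaultdict
from datetime import datetime

samples = [
    ("2025-10-13T09:00:00", 3100),  # Monday
    ("2025-10-14T09:00:00", 2300),  # Tuesday
    ("2025-10-20T09:00:00", 3300),  # Monday
    ("2025-10-21T09:00:00", 2200),  # Tuesday
]

by_weekday = defaultdict(list)
for ts, depth in samples:
    by_weekday[datetime.fromisoformat(ts).strftime("%A")].append(depth)

averages = {day: sum(v) / len(v) for day, v in by_weekday.items()}
print(averages)  # Monday's average stands out against the rest of the week
```

Grouping by `%H` (hour) or by day-of-month instead reveals the daily and month-end patterns the same way.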
Configuration
1. Enable Metric Collection
Ensure the RabbitMQ resource is configured to collect historical metrics (enabled by default in Nodinite).
2. Build Dashboard
From Web Client:
- Navigate to Dashboards
- Create New Dashboard
- Add Widgets:
  - Line chart: Queue depth for `orders.processing` queue (last 7 days)
  - Heatmap: Consumer lag across all queues (last 24 hours)
  - Table: Top 10 queues by depth (current)
- Save and share with team
3. Export via API (Optional)
Use the Web API to export metrics:

```
GET /api/resources/{resourceId}/metrics?from=2025-10-01&to=2025-10-17
```
Returns JSON with queue depth, message rates, consumer count at configured intervals (e.g., every 60 seconds).
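The export call can be scripted with the standard library alone. The endpoint path matches the GET example above; the base URL, resource id, and `Accept` header in this sketch are illustrative assumptions (your instance will also require its own authentication):

```python
# Hedged sketch: query historical metrics from the Nodinite Web API.
# Base URL and resource id below are hypothetical placeholders.
import json
import urllib.request

def build_metrics_url(base: str, resource_id: str, start: str, end: str) -> str:
    """Build the metrics query URL for a resource and a date range."""
    return f"{base}/api/resources/{resource_id}/metrics?from={start}&to={end}"

def fetch_metrics(base: str, resource_id: str, start: str, end: str):
    """Fetch metric samples (queue depth, rates, consumer count) as JSON."""
    req = urllib.request.Request(
        build_metrics_url(base, resource_id, start, end),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example call (hypothetical instance and resource id):
# metrics = fetch_metrics("https://nodinite.example.com",
#                         "rabbitmq-prod-01", "2025-10-01", "2025-10-17")
```

The returned JSON can then be written to CSV for Excel, or pushed into Power BI or Grafana as described under External Analytics Integration.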
Next Step
Monitoring RabbitMQ Features
Web API Overview
Related Topics
RabbitMQ Agent Overview
Troubleshooting Overview
Web Client Overview