FAQ - Performance & Overhead
Common questions about the performance impact of Nodinite monitoring on Boomi Atoms and optimization best practices.
What's the performance overhead of Nodinite monitoring on Boomi Atoms?
API polling overhead: Minimal - Nodinite polls the Boomi AtomSphere REST API (cloud-hosted by Boomi, not the Atom itself), so there is zero CPU/memory impact on the Atom. API rate limit: 5 requests/second per account; Nodinite automatically throttles polling to stay within the limit (example: 50 Atoms polled every 60 seconds = 0.83 requests/second, well under the limit).
JMX monitoring overhead: Very low - JMX Monitoring Agent polls JMX MBeans (read-only operations, no writes, no GC triggered). Performance impact: <2% CPU during active polling (60-second intervals), <50 MB RAM for JMX server thread, no impact on Atom process execution performance. Configurable intervals reduce overhead: every 30 seconds for critical Production Atoms (real-time monitoring), every 5 minutes for Development Atoms (low priority).
Network overhead: Negligible - API polling generates ~5 KB per Atom per poll (JSON response with Atom status + process list), JMX polling generates ~10 KB per Atom per poll (heap usage + GC metrics). Total bandwidth: 50 Atoms × 15 KB × 60 polls/hour = 45 MB/hour (0.1 Mbps average).
Best practice: The default 60-second polling interval balances real-time alerting (detect failures within 1 minute) with minimal overhead. Increase the interval to 5 minutes for non-critical environments (Development, Test) to reduce overhead further. Disable JMX monitoring entirely for Development Atoms if capacity planning is not needed (API monitoring is sufficient for availability checks).
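Nodinite handles this throttling internally; purely to illustrate the pacing principle, here is a minimal Python sketch of a poller that stays well under a 5 requests/second account limit. The base URL, endpoint path, and Atom IDs are placeholders, not the actual AtomSphere API:

```python
import time

import requests  # third-party HTTP client (pip install requests)

RATE_LIMIT = 5                    # Boomi API limit: requests/second per account
MIN_INTERVAL = 1.0 / RATE_LIMIT   # minimum spacing between requests (0.2 s)

# Placeholder values -- not real endpoints, IDs, or credentials.
BASE_URL = "https://api.example.com/boomi"
ATOM_IDS = [f"atom-{i}" for i in range(50)]   # 50 monitored Atoms


def poll_atoms_once(session: requests.Session) -> None:
    """Poll every Atom once, pacing requests so the rate limit is never exceeded."""
    last_request = 0.0
    for atom_id in ATOM_IDS:
        # Sleep just enough to keep at least MIN_INTERVAL between requests.
        wait = MIN_INTERVAL - (time.monotonic() - last_request)
        if wait > 0:
            time.sleep(wait)
        last_request = time.monotonic()
        resp = session.get(f"{BASE_URL}/atoms/{atom_id}/status", timeout=10)
        resp.raise_for_status()


if __name__ == "__main__":
    with requests.Session() as session:
        while True:
            start = time.monotonic()
            poll_atoms_once(session)
            # 50 Atoms per 60 s cycle ~= 0.83 req/s, far below the 5 req/s limit.
            time.sleep(max(0.0, 60 - (time.monotonic() - start)))
```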
How often does Nodinite poll the Boomi API?
Default polling interval: Every 60 seconds (configurable 30s-5min range)
What gets polled each interval:
- Account metadata - Boomi account details, configured environments (Production, Dev, Test, UAT)
- Atom list - All Atoms in account (availability status, last heartbeat, connected environment, version)
- Process list per Atom - All deployed processes (state, last execution time, execution count)
- Execution history per process - Last 50 executions (status, timestamp, duration, error messages)
API requests per polling cycle:
- 1 request: GET account metadata
- 1 request: GET all Atoms
- 1 request per Atom: GET process list (for 10 Atoms = 10 requests)
- 1 request per batch of processes: GET execution history (50 processes retrieved in a single batched request instead of 50 individual calls)
Total requests: ~15-20 requests per 60-second cycle (depends on number of Atoms + processes monitored)
Rate limit compliance: 15-20 requests / 60 seconds = 0.25-0.33 requests/second (well under 5 req/sec Boomi API limit)
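To sanity-check the request budget for a different landscape, the arithmetic is simple enough to script. The estimate below assumes execution history is batched 50 processes per request (see the batching note further down); it is an approximation, not the agent's exact request pattern:

```python
import math


def requests_per_cycle(atoms: int, processes: int, batch_size: int = 50) -> int:
    """Estimate AtomSphere API requests for one polling cycle (simplified model)."""
    account_metadata = 1                                   # GET account metadata
    atom_list = 1                                          # GET all Atoms
    process_lists = atoms                                  # GET process list per Atom
    execution_history = math.ceil(processes / batch_size)  # batched history calls
    return account_metadata + atom_list + process_lists + execution_history


if __name__ == "__main__":
    total = requests_per_cycle(atoms=10, processes=50)
    print(f"{total} requests per 60 s cycle = {total / 60:.2f} req/s (limit: 5 req/s)")
    # -> 13 requests per 60 s cycle = 0.22 req/s (limit: 5 req/s)
```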
Can I reduce monitoring overhead for non-critical environments?
Yes, configure polling intervals per environment priority:
Production (highest priority):
- Polling interval: 30 seconds (fastest detection, 2× API requests)
- JMX monitoring: Enabled (heap/GC metrics every 30s)
- Alert routing: PagerDuty 24/7 (on-call coverage)
- Justification: Production failures cost $$$, fast detection critical
UAT/Staging (medium priority):
- Polling interval: 60 seconds (standard detection, baseline API requests)
- JMX monitoring: Enabled (heap/GC metrics every 60s)
- Alert routing: Email only (business hours)
- Justification: Pre-production testing, monitoring important but not urgent
Development/Test (low priority):
- Polling interval: 5 minutes (slower detection, 5× fewer API requests than the 60-second baseline)
- JMX monitoring: Disabled (no capacity planning needed for Dev)
- Alert routing: Email only (optional, some teams disable Dev alerts entirely)
- Justification: Development work, failures expected and acceptable
Configuration: Web Client → Settings → Monitor Agents → Boomi → Edit Account → Per-Environment Settings → Set polling interval per environment name
Overhead savings example:
- Before: 50 Atoms (10 Production, 20 UAT, 20 Dev) all polled every 60s = 50 API requests/min
- After: 10 Production (30s) = 20 req/min + 20 UAT (60s) = 20 req/min + 20 Dev (5 min) = 4 req/min → 44 API requests/min total, a 12% overall reduction despite faster Production polling, with Dev API calls cut from 20 to 4 req/min (80% reduction)
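To reproduce these figures, or to model your own environment mix, the per-minute load is simply Atoms × polls per minute for each environment. A short sketch using the example values above:

```python
# Environment mix and polling intervals from the example above -- adjust as needed.
environments = {
    "Production":  {"atoms": 10, "interval_s": 30},
    "UAT":         {"atoms": 20, "interval_s": 60},
    "Development": {"atoms": 20, "interval_s": 300},
}

before = sum(env["atoms"] for env in environments.values())  # all at 60 s -> 1 req/min per Atom

after = 0.0
for name, env in environments.items():
    load = env["atoms"] * (60 / env["interval_s"])   # requests per minute
    after += load
    print(f"{name:12s}: {load:5.1f} req/min")

print(f"Before: {before} req/min  After: {after:.0f} req/min "
      f"({1 - after / before:.0%} overall reduction)")
# Production  :  20.0 req/min
# UAT         :  20.0 req/min
# Development :   4.0 req/min
# Before: 50 req/min  After: 44 req/min (12% overall reduction)
```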
Does Nodinite monitoring impact Boomi Atom performance?
No impact on Atom process execution performance. Monitoring is read-only polling against the cloud-hosted AtomSphere API, not an agent installed on the Atom server.
Optional JMX monitoring considerations:
JMX enabled on Atom (for performance metrics):
- CPU overhead: <2% CPU during JMX poll (60-second snapshots, not continuous)
- Memory overhead: +50 MB RAM for JMX server thread (one-time allocation, not growing)
- GC impact: Negligible (JMX read-only operations, minimal object allocation, no meaningful GC pressure)
- Process execution impact: Zero (JMX polling async from Atom process execution)
When JMX overhead matters:
- Constrained VMs - Atom running on minimal VM (2 CPU cores, 4 GB RAM), every 2% CPU matters → Disable JMX for constrained environments
- High-throughput Atoms - Processing >1,000 messages/minute, CPU-bound → Monitor via API only (no JMX), evaluate JMX overhead in lower environments first
Best practice: Enable JMX monitoring for Production Atoms (capacity planning value outweighs <2% overhead), disable JMX for Development/Test Atoms (no capacity planning needed, save overhead).
How do I optimize Nodinite for monitoring 100+ Boomi processes?
Optimization strategies for large-scale Boomi monitoring:
1. Prioritize Critical Processes
- Create tiered monitoring:
- Tier 1 (Critical): 20 business-critical processes (payment processing, order fulfillment, regulatory reporting) → 30-second polling, PagerDuty alerts, JMX monitoring enabled
- Tier 2 (Standard): 50 standard processes (inventory sync, reporting, internal workflows) → 60-second polling, Email alerts, JMX monitoring optional
- Tier 3 (Low Priority): 30 development/batch processes → 5-minute polling, Email alerts optional, JMX monitoring disabled
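One way to keep tier assignments explicit and reviewable is to hold them as plain configuration data. The sketch below is illustrative only; the field names and process names are examples, not a Nodinite configuration format:

```python
from dataclasses import dataclass


@dataclass
class MonitoringTier:
    """Illustrative tier definition -- not a Nodinite configuration schema."""
    name: str
    polling_interval_s: int
    alert_channel: str
    jmx_enabled: bool


TIERS = {
    "tier1_critical": MonitoringTier("Critical",     30,  "PagerDuty", True),
    "tier2_standard": MonitoringTier("Standard",     60,  "Email",     False),  # JMX optional
    "tier3_low":      MonitoringTier("Low Priority", 300, "Email",     False),
}

# Map each process to a tier; anything unlisted falls back to the standard tier.
PROCESS_TIERS = {
    "payment-processing": "tier1_critical",
    "order-fulfillment":  "tier1_critical",
    "inventory-sync":     "tier2_standard",
    "nightly-batch":      "tier3_low",
}


def tier_for(process_name: str) -> MonitoringTier:
    return TIERS[PROCESS_TIERS.get(process_name, "tier2_standard")]


print(tier_for("payment-processing").polling_interval_s)  # -> 30
```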
2. Use Monitor Views for Dashboard Performance
- Problem: Single dashboard showing 100+ processes = slow page load (render 100 rows + execution history charts)
- Solution: Create focused Monitor Views per team (Billing team sees 12 billing processes only, not all 100), page load time reduced 8× (12 rows vs 100 rows)
3. Configure Expected Execution Frequency Selectively
- Problem: Tracking expected execution frequency for all 100 processes = complex configuration + false positives
- Solution: Configure expected frequency only for critical real-time processes (payment processing every 15 min, order sync every 5 min), ignore nightly batch processes (expected once daily but timing varies)
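The underlying check is simple: a process with an expected frequency is late when its most recent execution is older than the expected interval plus a small grace period. Nodinite evaluates this once expected frequency is configured; the sketch below only illustrates the logic, with hypothetical process names and thresholds:

```python
from datetime import datetime, timedelta, timezone

# Expected intervals for the few processes worth tracking (hypothetical names);
# nightly batch jobs with variable timing are deliberately left out.
EXPECTED_INTERVALS = {
    "payment-processing": timedelta(minutes=15),
    "order-sync":         timedelta(minutes=5),
}
GRACE = timedelta(minutes=2)  # tolerate small scheduling jitter


def is_late(process: str, last_execution: datetime) -> bool:
    """True if the process has missed its expected execution window."""
    expected = EXPECTED_INTERVALS.get(process)
    if expected is None:
        return False  # no expected frequency configured -> never flagged late
    return datetime.now(timezone.utc) - last_execution > expected + GRACE


# Example: order-sync last ran 9 minutes ago against a 5-minute expectation.
last_run = datetime.now(timezone.utc) - timedelta(minutes=9)
print(is_late("order-sync", last_run))  # -> True (9 min > 5 min + 2 min grace)
```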
4. Batch API Requests
- Nodinite automatically batches process execution history requests (retrieve 50 processes per API call instead of 1 process per call)
- Verify batching enabled: Web Client → Settings → Monitor Agents → Boomi → Advanced Settings → "Batch Process Execution Requests" = Enabled
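Batching here simply means grouping process IDs so one API call covers up to 50 processes instead of one call per process. A minimal illustration with a placeholder request shape (not the real AtomSphere API):

```python
from typing import Iterator


def batched(items: list[str], size: int = 50) -> Iterator[list[str]]:
    """Yield successive chunks of up to `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


process_ids = [f"process-{n}" for n in range(120)]  # 120 monitored processes

# 120 processes / 50 per batch = 3 requests instead of 120.
for batch in batched(process_ids):
    payload = {"processIds": batch}  # placeholder body, not the actual API shape
    # session.post(f"{BASE_URL}/executions/query", json=payload, timeout=10)
    print(f"would request execution history for {len(batch)} processes")
```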
5. Increase Log Database Retention Selectively
- Problem: 12 months of execution history for 100 processes = large Log Database (storage costs + query performance)
- Solution: Configure retention per criticality: Tier 1 Critical processes = 24 months retention (compliance/audit requirements), Tier 2 Standard = 12 months, Tier 3 Low Priority = 3 months (reduce storage 75%)
Result: 100+ Boomi processes monitored efficiently, <1% Boomi API rate limit consumed, fast dashboard performance, manageable storage costs.
Related FAQs
- Permissions & API Access → - API roles, service account setup, security best practices
- Monitoring Scope & Capabilities → - Cloud Atoms, failure detection, comparison table
- Integration & Advanced → - Power BI, Docker/Kubernetes, Boomi Flow