Nodinite Pickup LogEvents Service Logging Agent
Enables end-to-end tracking by asynchronously fetching custom-coded Nodinite Log Events.
The Nodinite Pickup LogEvents Service Logging Agent asynchronously fetches your custom-coded Nodinite Log Events from many different sources (e.g. intermediate storage), which means less code and more reliable logging in your custom-built solutions. This logging pattern is typically used with custom code in system integration solutions and enables true cross-platform logging, end to end.
(Diagram: your generic logging logic produces a Nodinite JSON Log Event and writes it to intermediate storage, while other formats go to other log destinations; inside the Nodinite instance, the Pickup Service reads from the intermediate storage and forwards the events to the Log API.)
When built-in tracking does not exist in your message broker or solution, your mission is to produce a JSON-formatted Log Event and place it on highly available intermediate storage (the source).
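As a minimal sketch of that mission, the snippet below builds a JSON-formatted event. Note that the field names used here are illustrative assumptions, not the actual Nodinite Log Event schema; consult the Log Event documentation for the real contract:

```python
import datetime
import json


def build_log_event(message: str, payload: dict) -> str:
    """Build a JSON-formatted log event string.

    NOTE: these field names are illustrative assumptions only --
    the real Nodinite Log Event JSON schema is defined in the
    Nodinite documentation.
    """
    event = {
        "LogDateTime": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "Message": message,              # human-readable description
        "Body": json.dumps(payload),     # the tracked message body
    }
    return json.dumps(event)
```

Your custom code would then write the returned string to the intermediate storage of your choice (queue, folder, database table).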
Some real-life examples are below:
- Mule - Custom connector
- IBM Sterling - custom code
- IBM Cloud - custom code, logging for example to PostgreSQL database instance(s)
- Java-based solutions
- Azure Functions using Serilog
- C#/.NET platform
The Nodinite Pickup LogEvents Service Logging Agent does not contain logging logic like the other Logging Agents; it reads Log Events from a source, such as a disk or a queue, and then sends them to the Nodinite Log API (RESTful).
Internally, the logging is then performed using an HTTP/HTTPS POST of each Log Event to api/LogEvent/LogEvent. For high-performance on-premise solutions, there is also an option to bypass the Log API and write the Log Events directly to the active online Log Database using many threads.
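The POST described above can be sketched as follows. The base URL and the event fields are assumptions; only the endpoint path api/LogEvent/LogEvent comes from the text:

```python
import json
import urllib.request


def build_request(base_url: str, event: dict) -> urllib.request.Request:
    """Build the HTTP POST request for one Log Event.

    `base_url` points at your Log API instance (an assumption here);
    the api/LogEvent/LogEvent path is the endpoint named in the docs.
    """
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/api/LogEvent/LogEvent",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def post_log_event(base_url: str, event: dict) -> int:
    """POST a single Log Event and return the HTTP status code."""
    with urllib.request.urlopen(build_request(base_url, event)) as resp:
        return resp.status
```

In practice the Pickup Service performs this call for you; the sketch only shows what "logging via the Log API" amounts to on the wire.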
The Pickup Service fetches Log Events from the following sources:
|Source|Description|Recommended Monitoring Agent|External Link|Configuration|
|---|---|---|---|---|
|ActiveMQ|Fetch Log Events from ActiveMQ/ActiveMQ Artemis queues|Message Queuing Agent|Apache NMS ActiveMQ|Configuration|
|AnypointMQ|Fetch Log Events from the MuleSoft CloudHub AnypointMQ platform|Message Queuing Agent|AnypointMQ|Configuration|
|Azure Event Hub|Fetch Log Events from Event Hub|Azure Monitoring Agent|EventHub|Configuration|
|Azure Service Bus|Fetch Log Events from Azure Service Bus|Message Queuing Agent|Azure Service Bus|Configuration|
|Disk / Folder|Fetch Log Events from file folders and SMB-enabled shares|File Monitoring Agent||Configuration|
|Microsoft MSMQ|Fetch Log Events from Microsoft MSMQ|Message Queuing Agent||Configuration|
|Microsoft SQL Server|Fetch Log Events from Microsoft SQL Server|Database Monitoring Agent||Configuration|
|PostgreSQL|Fetch Log Events from PostgreSQL database instances|Database Monitoring Agent|PostgreSQL|Configuration|
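For the Disk / Folder source, a common pattern is to write the event file atomically so the Pickup Service never reads a half-written file. A sketch, where the folder path and the `.json` naming convention are assumptions rather than documented requirements:

```python
import json
import os
import tempfile
import uuid


def drop_log_event(folder: str, event: dict) -> str:
    """Write a Log Event JSON file atomically into the pickup folder.

    Writing to a temp file first and then renaming it prevents a
    reader from seeing a partially written event. The folder path
    and .json naming convention are assumptions for illustration.
    """
    os.makedirs(folder, exist_ok=True)
    final_path = os.path.join(folder, f"{uuid.uuid4()}.json")
    fd, tmp_path = tempfile.mkstemp(dir=folder, suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        json.dump(event, f)
    os.replace(tmp_path, final_path)  # atomic on the same filesystem
    return final_path
```

The rename (`os.replace`) is atomic because the temp file is created in the same folder, i.e. on the same filesystem, as the final file.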
Missing a source? Please contact our support at email@example.com, and we will build it for you.
Additional solutions to common problems and the FAQ for the Nodinite Pickup LogEvents Service Logging Agent exist in the Troubleshooting user guide.
Without this asynchronous pattern, your solution would have to implement its own error handling for the occasions when Nodinite is unavailable for various reasons, such as:
- Network errors
- Windows Servers being unavailable pending or during a reboot (restart / maintenance window / security patches)
- Nodinite itself being updated
- Full database disks, typically the ones used for the Log Databases
- Security changes that stop services from working (accidental or unintended changes)
Also, from a load perspective, there is less stress on the Nodinite instance, since the fetch process runs at a controlled pace and only when the Nodinite instance is online, available, and healthy.
Read more about the differences between Synchronous and Asynchronous messaging in this Wikipedia article.