SQL Server Configuration Example
Easily integrate your enterprise systems by configuring the Nodinite Pickup Logging Service to fetch JSON Log Events directly from your SQL Server database. This guide empowers you to set up secure, scalable, and reliable log event collection for your organization, with robust validation and table design for compliance and analytics.
✅ Seamless integration with SQL Server for real-time log event collection
✅ Secure and compliant long-term storage of log data
✅ Scalable configuration for enterprise environments
✅ Automated validation and error handling for reliable operations
The diagram above illustrates how the Pickup Service interacts with SQL Server tables, processes messages, and stores them in the Nodinite Log Database.
Pros and Cons of SQL Server as Intermediate Storage
Understanding the benefits and trade-offs of using SQL Server as an intermediate storage layer helps you make informed architectural decisions for your integration landscape.
Advantages (Pros)
Performance and Scalability:
- Binary protocol excellence - SQL Server's native TDS (Tabular Data Stream) protocol delivers significantly better performance compared to HTTP-based Log API calls, especially for high-volume scenarios with thousands of log events per second.
- Handles very large messages - The `nvarchar(max)` column type supports log events up to 2GB, making it ideal for scenarios with large payloads (file transfers, complex XML/JSON documents, binary attachments encoded as Base64).
- Batch processing efficiency - The Pickup Service can fetch hundreds or thousands of rows in a single query (`MaxRowsLimit`), reducing network round-trips and improving throughput compared to individual HTTP POST operations.
- Database-level indexing and compression - SQL Server's built-in features (clustered/nonclustered indexes, page/row compression) optimize storage and retrieval performance automatically.
Reliability and Resilience:
- Transactional integrity - SQL Server's ACID guarantees ensure log events are never lost during writes, even in the event of application crashes or network failures.
- Automatic retry and recovery - If the Pickup Service or Nodinite instance is temporarily unavailable, log events remain safely stored in SQL Server until processing resumes—no data loss.
- Built-in durability - SQL Server's transaction log and backup/restore capabilities provide enterprise-grade data protection for compliance and audit requirements.
- Decoupled architecture - Your logging applications and the Nodinite platform operate independently; SQL Server acts as a reliable buffer that absorbs spikes in log volume without overwhelming downstream systems.
Operational and Security Benefits:
- Familiar tooling - DBAs and operations teams can use standard SQL Server tools (SSMS, Azure Data Studio, T-SQL scripts) to monitor, troubleshoot, and manage log events without learning new APIs or protocols.
- Fine-grained access control - SQL Server's robust security model (database roles, row-level security, encryption at rest/in transit) allows precise control over who can read, write, or delete log events.
- Monitoring and alerting - Leverage SQL Server Agent jobs, monitoring queries, and integration with Nodinite Database Monitoring Agent to detect processing failures, backlogs, or invalid log events in real time.
- Purge and retention automation - Use built-in SQL Server scheduling to automate cleanup of processed or invalid log events, keeping table sizes manageable and performance optimal.
Integration and Flexibility:
- Universal compatibility - Any application, service, or integration platform that can execute SQL INSERT statements can send log events to Nodinite—no special libraries, SDKs, or HTTP clients required.
- On-premise and hybrid scenarios - SQL Server works seamlessly in air-gapped, on-premise, or hybrid cloud environments where direct HTTP connectivity to Nodinite may be restricted or unavailable.
- Replay and reprocessing - Failed or invalid log events remain in the table for investigation, correction, and manual reprocessing—providing an audit trail and recovery mechanism that HTTP APIs cannot match.
Disadvantages (Cons)
Infrastructure and Maintenance:
- Additional database dependency - Introduces SQL Server as a required component in your architecture, increasing infrastructure complexity, licensing costs (if not using SQL Express or open-source alternatives), and operational overhead.
- Database administration required - Requires DBA skills for setup, tuning, monitoring, backup/restore, and capacity planning. Misconfigured databases or poorly designed indexes can negate performance benefits.
- Storage growth management - Without proper purge policies, the LogEvents table can grow unbounded, consuming disk space and degrading query performance over time. Regular maintenance windows and automation are essential.
Latency and Processing Delays:
- Polling-based architecture - The Pickup Service operates on a polling interval (typically seconds to minutes), introducing inherent latency between when a log event is inserted into SQL Server and when it appears in Nodinite. Not suitable for real-time scenarios requiring sub-second visibility.
- Batch processing overhead - While batching improves throughput, it also means individual log events may wait in the queue until the next pickup cycle completes. For latency-sensitive use cases, direct HTTP Log API calls may be more appropriate.
Complexity and Error Handling:
- Validation failures require manual intervention - Invalid JSON or schema violations result in rows marked with `ValidationFailed = True`. Unlike HTTP APIs that return immediate error responses, these failures must be detected and resolved manually or via monitoring scripts.
- Intermediate data transformation - Applications must format log events as valid JSON strings before insertion. Debugging serialization issues or schema mismatches requires SQL-level inspection and cannot be caught at compile time.
Network and Connectivity:
- Point-to-point dependency - The Pickup Service must maintain persistent connectivity to SQL Server. Network interruptions, firewall misconfigurations, or DNS issues can halt log collection entirely until connectivity is restored.
- Not cloud-native by default - While SQL Server works in cloud environments (Azure SQL, AWS RDS), it lacks the auto-scaling, serverless, and managed-service simplicity of native cloud queuing solutions (e.g., Azure Service Bus, AWS SQS) when deployed on traditional VMs or on-premise.
When to Use SQL Server as Intermediate Storage
Ideal scenarios:
- High-volume logging with thousands of events per second where HTTP overhead becomes a bottleneck.
- Log events with very large payloads (>10MB) that exceed typical HTTP request size limits.
- On-premise or hybrid environments where direct HTTP connectivity to Nodinite is restricted or unreliable.
- Organizations with existing SQL Server infrastructure, DBA expertise, and mature database operations.
- Compliance requirements demanding transactional durability, audit trails, and long-term retention of raw log events.
Consider alternatives when:
- Real-time log visibility (sub-second latency) is critical for operational alerting or dashboards.
- Your team lacks SQL Server administration skills or infrastructure budget.
- Cloud-native architectures benefit more from managed queuing services (Azure Service Bus, EventHub, AMQP brokers).
- Logging volume is low (<100 events/min) where HTTP Log API overhead is negligible.
SQL Servers
The SQL Servers section lets you manage all SQL Server database sources for your log events.
You configure these sources in the `Settings.json` file, which uses JSON format. The default path is:
C:\Program Files\Nodinite\Logging Agent - Pickup Service\Settings.json
{
...
"SQLServers": [
{
"Enabled": true,
"LogApi": "https://localhost/Nodinite/Dev/LogAPI/",
"UseConfigurationDatabase": false,
"ConfigurationDatabaseConnectionString": null,
"ConnectionString": "Server=myServerAddress;Database=myDataBase;Integrated Security=True;Connection Timeout=60;TrustServerCertificate=true",
"MaxRowsLimit": 500,
"ReplaceRules": [
{
"Name": "Fix Endpoint real customer id to {customerId}",
"ReplaceName": false,
"ReplaceUri": true,
"Pattern": "/([0-9]{4,})$",
"Group1Value": "{customerId}"
}
]
}
]
...
}
SQL Servers is an array of SQL Server configurations. Each entry defines how the Log Agent connects to and processes messages from a specific SQL Server database. This setup ensures your Nodinite JSON Log Events are reliably collected and managed.
Property | Description | Value Example | Comment |
---|---|---|---|
ConnectionString | SQL Server Database Connection string | `Server=myServerAddress;Database=myDataBase;Integrated Security=True;Connection Timeout=60;TrustServerCertificate=true` | |
MaxRowsLimit | Maximum number of rows to fetch on each loop | 500 (default) | Larger values may impact memory usage |
Enabled | See the Shared Configuration section for more info | | |
LogAPI | See the Shared Configuration section for more info | | |
UseConfigurationDatabase | See the Shared Configuration section for more info | | |
ConfigurationDatabaseConnectionString | See the Shared Configuration section for more info | | |
ReplaceRules | See the shared Replace Rules section for more info | | |
Important
You must restart the Nodinite Pickup Logging Service for configuration changes to take effect.
SQL Server table
If you use a Microsoft SQL Server database to store Log Events, your custom logging solution must define the following LogEvents table:
Column | Datatype | Purpose |
---|---|---|
LogEvent | nvarchar(max) | Your JSON encoded Log Event |
Id | bigint | Automatically created identity; the Pickup Service must know which row to flag as invalid if the value in LogEvent is not valid |
ValidationFailed | bit | Automatically set to 0 (false) when inserted; may be set to 1 (true) during processing if errors are detected |
ValidationFailedText | nvarchar(max) | Updated if errors occur validating the LogEvent during processing |
Created | datetimeoffset(7) | Automatically created during insert; useful for troubleshooting |
Important
Invalid Log Events will have `ValidationFailed` set to `True`. You must remove these entries manually.
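For example, a query along these lines lists the failed entries so you can inspect the validation error text before cleaning them up (see the purge example later in this guide):

```sql
-- List Log Events that failed validation, newest first, with the reported error text.
SELECT Id, Created, ValidationFailedText
FROM dbo.LogEvents
WHERE ValidationFailed = 1
ORDER BY Created DESC;
```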
-- Table: [LogEvents], used by :Nodinite: Logging
-- DROP TABLE [dbo].[LogEvents];
CREATE TABLE [dbo].[LogEvents](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[LogEvent] [nvarchar](max) NOT NULL,
[ValidationFailed] [bit] NOT NULL,
[ValidationFailedText] [nvarchar](max) NULL,
[Created] [datetimeoffset](7) NOT NULL,
CONSTRAINT [PK_LogEvents] PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF, DATA_COMPRESSION = PAGE) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
-- Drop existing nonclustered primary key if needed and recreate as clustered
-- Or keep it and add this covering index
CREATE NONCLUSTERED INDEX [IX_LogEvents_ValidationFailed_Created_Id]
ON [dbo].[LogEvents] ([ValidationFailed], [Created], [Id])
INCLUDE ([LogEvent])
WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE, ONLINE = ON);
ALTER TABLE [dbo].[LogEvents] ADD CONSTRAINT [DF_LogEvents_ValidationFailed] DEFAULT ((0)) FOR [ValidationFailed]
GO
ALTER TABLE [dbo].[LogEvents] ADD CONSTRAINT [DF_LogEvents_Created] DEFAULT (sysutcdatetime()) FOR [Created]
GO
-- Do not forget to assign user access rights
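With the table in place, any application can log by inserting a row. The sketch below shows a minimal single-row insert; the JSON payload is a trimmed-down illustration using fields from the test-data example further down, not a complete Nodinite JSON Log Event, so consult the JSON Log Event documentation for the full schema.

```sql
-- Minimal example: a custom logging application inserts one JSON Log Event.
-- ValidationFailed and Created are filled in by their default constraints.
INSERT INTO dbo.LogEvents (LogEvent)
VALUES (N'{
  "LogAgentValueId": 42,
  "EndPointName": "INT101: Receive Hello World Log Events",
  "EndPointUri": "C:\\temp\\in",
  "EndPointDirection": 0,
  "EndPointTypeId": 60,
  "OriginalMessageTypeName": "Hello.World.File/1.0",
  "LogDateTime": "2025-10-14T12:00:00Z",
  "LogText": "File OK"
}');
```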
Important
Remember to grant the service account for the Pickup Service the following database rights:
- db_datareader
- db_datawriter
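A sketch of the corresponding grants follows; `DOMAIN\svc-nodinite-pickup` is a placeholder account name, so substitute the actual service account used by your Pickup Service:

```sql
-- Grant the Pickup Service account read/write access in the database holding LogEvents.
-- 'DOMAIN\svc-nodinite-pickup' is a placeholder; replace it with your actual service account.
CREATE USER [DOMAIN\svc-nodinite-pickup] FOR LOGIN [DOMAIN\svc-nodinite-pickup];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\svc-nodinite-pickup];
ALTER ROLE db_datawriter ADD MEMBER [DOMAIN\svc-nodinite-pickup];
```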
Monitor failed LogEvents (T-SQL example)
💡Detect failed LogEvents early to prevent processing backlogs, data loss, and compliance gaps.
You can use the Nodinite Database Monitoring Agent to monitor the LogEvents table for failed entries. The Database Monitoring Agent provides automated, continuous monitoring with instant alerts through Nodinite Alarm Plugins, eliminating the need for manual SQL Server Agent job configuration.
Why use the Database Monitoring Agent?
✅ Automated monitoring - Continuously evaluates your LogEvents table without manual SQL Agent setup
✅ Instant alerts - Trigger notifications via email, Slack, Teams, or custom webhooks when thresholds are exceeded
✅ Centralized visibility - View LogEvents monitoring status alongside all other integration resources in Nodinite Monitor Views
✅ Self-service management - Empower teams to execute troubleshooting scripts on-demand via Remote Actions
How to configure monitoring
The stored procedure below follows the Nodinite Database Monitoring Agent conventions for SQL Statements monitoring. Deploy this procedure to your SQL Server database, then configure it in Nodinite as described in the SQL Statements documentation.
Key concepts:
- Return codes: `-10` = ERROR, `10` = WARNING, `0` = OK
- SELECT output: The first column of the first row becomes the Log Text displayed in Nodinite resources
- Thresholds: Adjust `@errorLimit` and `@warningLimit` to match your operational tolerances
- Lookback window: Set `@lookbackHours` to `0` for all-time counts, or limit to recent failures (e.g., 24 hours)
Run the procedure below on a schedule via the Database Monitoring Agent, or call it from a SQL Server Agent job. Tune the thresholds and the lookback window to match your operational needs.
-- Monitor failed LogEvents and return status codes:
-- -10 = ERROR, 10 = WARNING, 0 = OK
CREATE PROCEDURE dbo.MonitorFailedLogEvents
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@failedCount INT,
@errorLimit INT = 50, -- Adjust to your environment
@warningLimit INT = 10, -- Adjust to your environment
@lookbackHours INT = 24, -- 0 = count all; otherwise limit to recent window
@ret INT = 0,
@log NVARCHAR(400);
SELECT @failedCount = COUNT(1)
FROM dbo.LogEvents
WHERE ValidationFailed = 1
AND (
@lookbackHours = 0
OR Created >= DATEADD(hour, -@lookbackHours, SYSUTCDATETIME())
);
IF (@failedCount > @errorLimit)
BEGIN
SET @log = N'ERROR: ' + CAST(@failedCount AS NVARCHAR(12))
+ N' failed LogEvents in last ' + CAST(@lookbackHours AS NVARCHAR(6)) + N'h (threshold '
+ CAST(@errorLimit AS NVARCHAR(12)) + N')';
SELECT @log;
SET @ret = -10;
END
ELSE IF (@failedCount > @warningLimit)
BEGIN
SET @log = N'WARNING: ' + CAST(@failedCount AS NVARCHAR(12))
+ N' failed LogEvents in last ' + CAST(@lookbackHours AS NVARCHAR(6)) + N'h (threshold '
+ CAST(@warningLimit AS NVARCHAR(12)) + N')';
SELECT @log;
SET @ret = 10;
END
ELSE
BEGIN
SELECT N'OK';
END
RETURN @ret;
END;
GO
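Once deployed, you can exercise the procedure manually (for example from SSMS) to verify the return code and Log Text before wiring it up in Nodinite; a quick sketch:

```sql
-- Manually execute the monitoring procedure and inspect its return code.
DECLARE @rc INT;
EXEC @rc = dbo.MonitorFailedLogEvents;
SELECT @rc AS ReturnCode;   -- -10 = ERROR, 10 = WARNING, 0 = OK
```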
Configure in Database Monitoring Agent
Once the stored procedure is deployed, configure it in Nodinite to enable automated monitoring:
1. Open Database Monitoring Agent configuration
   - Navigate to Administration → Monitor Management → Monitor Agents → Select your Database Monitoring Agent
2. Add SQL Statement
   - Go to the SQL Databases tab and select your connection
   - Click the SQL Statements tab
   - Click Add to create a new SQL Statement configuration
3. Configure General settings
   - Enabled: ✅ Checked
   - Display Name: `LogEvents - Failed Validation Monitor`
   - Description: `Monitors the LogEvents table for validation failures and triggers alerts when thresholds are exceeded`
4. Configure Monitoring Script
   - Command Type: `Stored Procedure`
   - Execute Command: `[dbo].[MonitorFailedLogEvents]`
5. Save and synchronize
   - Click Save and close
   - The agent will automatically pick up the new configuration on the next sync cycle
   - The stored procedure will appear as a new Resource in your Monitor Views
6. Assign to Monitor View (optional)
   Add the new resource to a Monitor View to enable alerting:
   - Navigate to Administration → Monitor Management → Monitor Views
   - Select or create a Monitor View
   - Add the `LogEvents - Failed Validation Monitor` resource
   - Configure Alarm Plugins to receive notifications (email, Slack, Teams, webhooks)
Tip
Testing your configuration:
After setup, verify monitoring is working by intentionally inserting an invalid LogEvent into the table. The Database Monitoring Agent should detect the failure within the configured polling interval (typically 60-300 seconds) and trigger an alert if thresholds are exceeded.
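A minimal way to produce such a failure in a test environment is to insert a row that is not valid JSON; assuming the Pickup Service is running, it should flag the row on its next cycle:

```sql
-- Test environments only: insert a deliberately malformed LogEvent.
-- The Pickup Service should set ValidationFailed = 1 and populate ValidationFailedText,
-- after which the monitoring procedure reports it.
INSERT INTO dbo.LogEvents (LogEvent)
VALUES (N'this is not valid JSON');
```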
Note
The Database Monitoring Agent evaluates SQL Statements based on:
- Return code: `-10` = Error state, `10` = Warning state, `0` = OK state
- SELECT output: First column of first row becomes the Log Text displayed in the resource
See the complete SQL Statements documentation for advanced configuration options, including User Action Scripts for on-demand troubleshooting.
Purge failed LogEvents (T-SQL example)
💡Remove invalid LogEvents safely to avoid unbounded table growth and to keep your Pickup Service processing healthy. Run this procedure on a schedule or manually after you investigate validation failures. Use the dry-run parameter first to preview what will be removed.
The procedure below deletes rows where `ValidationFailed = 1` based on a configurable age (`@olderThanHours`) and a maximum number of rows to remove per invocation (`@maxRows`). It performs deletes in batches to reduce locking and transaction size. Use `@dryRun = 1` to list candidate rows without deleting them.
-- Purge failed LogEvents safely in batches
CREATE PROCEDURE dbo.PurgeFailedLogEvents
@olderThanHours INT = 168, -- default: 7 days
@maxRows INT = 1000, -- maximum rows to remove in total
@batchSize INT = 200, -- rows to delete per batch (smaller is safer)
@dryRun BIT = 0 -- 1 = only list candidates, 0 = actually delete
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
DECLARE @deletedTotal INT = 0;
DECLARE @deleted INT;
IF (@dryRun = 1)
BEGIN
SELECT TOP (@maxRows)
Id, Created, ValidationFailedText
FROM dbo.LogEvents
WHERE ValidationFailed = 1
AND Created <= DATEADD(hour, -@olderThanHours, SYSUTCDATETIME())
ORDER BY Created ASC;
RETURN 0; -- dry-run, no deletion
END
WHILE (@deletedTotal < @maxRows)
BEGIN
BEGIN TRANSACTION;
DELETE TOP (@batchSize)
FROM dbo.LogEvents
WHERE ValidationFailed = 1
AND Created <= DATEADD(hour, -@olderThanHours, SYSUTCDATETIME())
;
SET @deleted = @@ROWCOUNT;
IF (@deleted > 0)
BEGIN
SET @deletedTotal = @deletedTotal + @deleted;
COMMIT TRANSACTION;
END
ELSE
BEGIN
ROLLBACK TRANSACTION;
BREAK; -- nothing left to delete
END
-- stop if we've reached the requested maximum
IF (@deletedTotal >= @maxRows)
BREAK;
END
SELECT N'Deleted rows: ' + CAST(@deletedTotal AS NVARCHAR(12)) AS Result;
RETURN @deletedTotal;
END;
GO
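A typical invocation pattern might look like this; the parameter values are examples, so tune them to your retention policy:

```sql
-- 1) Dry run: preview failed LogEvents older than 7 days that would be removed.
EXEC dbo.PurgeFailedLogEvents @olderThanHours = 168, @maxRows = 1000, @batchSize = 200, @dryRun = 1;

-- 2) Actual purge with the same parameters.
EXEC dbo.PurgeFailedLogEvents @olderThanHours = 168, @maxRows = 1000, @batchSize = 200, @dryRun = 0;
```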
Tip
Notes and recommendations:
- Always run with `@dryRun = 1` first to review candidates before deleting.
- Schedule purges during low-traffic windows and monitor Pickup Service activity; avoid deleting rows that the service might still be processing.
- Ensure you have appropriate backups or exports before bulk deletion.
- Add an index on `(ValidationFailed, Created)` to keep retrieval and deletion fast on large tables (the `IX_LogEvents_ValidationFailed_Created_Id` index from the table script above already covers this).
- Adjust `@olderThanHours`, `@maxRows`, and `@batchSize` to match your environment and maintenance windows.
Test Data Example
You can use the following T-SQL script to insert test data into your LogEvents table:
-- SQL Server 2022: insert rows where Body is base64(UTF-8 inner JSON)
SET NOCOUNT ON;
DECLARE @RowCount INT = 10000; -- test small, increase as needed (number of log rows)
DECLARE @BodySizeKB INT = 4096; -- target decoded inner JSON size in KB (UTF-8)
DECLARE @MaxOrdersPerBatch INT = 1000; -- safety cap to avoid runaway loops
DECLARE @NowUtc DATETIME2 = SYSUTCDATETIME();
DECLARE @StartUtc DATETIME2 = DATEADD(SECOND, -@RowCount, @NowUtc);
DECLARE @TargetBytes INT = @BodySizeKB * 1024;
DECLARE @i INT = 1;
WHILE @i <= @RowCount
BEGIN
DECLARE @rn INT = @i;
DECLARE @batchSuffix NVARCHAR(20) = RIGHT('00000' + CAST(@rn AS VARCHAR(10)), 5);
DECLARE @batchId NVARCHAR(100) = N'batch-' + FORMAT(@StartUtc, 'yyyyMMdd') + N'-' + @batchSuffix;
DECLARE @generatedAt NVARCHAR(50) = FORMAT(DATEADD(SECOND, @rn - 1, @StartUtc), 'yyyy-MM-ddTHH:mm:ssZ');
-- Build orders array by appending order objects until UTF-8 size > @TargetBytes
DECLARE @Orders NVARCHAR(MAX) = N'[';
DECLARE @orderIndex INT = 1;
DECLARE @Inner NVARCHAR(MAX);
DECLARE @CurrentBytes INT = 0;
WHILE @CurrentBytes <= @TargetBytes AND @orderIndex <= @MaxOrdersPerBatch
BEGIN
DECLARE @orderId NVARCHAR(50) = N'ORD-' + FORMAT(@rn, '000000') + N'-' + FORMAT(@orderIndex, '0000');
DECLARE @custId NVARCHAR(50) = N'CUST-' + FORMAT(@rn, '000000') + N'-' + FORMAT(@orderIndex, '0000');
DECLARE @OrderFragment NVARCHAR(MAX) =
N'{"orderId":"' + @orderId + N'","customer":{"customerId":"' + @custId + N'","name":"Customer ' + CAST(@orderIndex AS NVARCHAR(10)) + N'","email":"' + @custId + N'@example.com"},"orderDate":"2025-10-14T00:00:00Z","orderRows":[{"rowId":1,"sku":"SKU-ABC-01","description":"Item","quantity":1,"unitPrice":19.95,"lineTotal":19.95}],"orderTotal":19.95,"currency":"USD"}';
IF @orderIndex > 1
SET @Orders = @Orders + N',' + @OrderFragment;
ELSE
SET @Orders = @Orders + @OrderFragment;
-- compose inner JSON and measure its UTF-8 byte length
SET @Inner = N'{"batchId":"' + @batchId + N'","generatedAt":"' + @generatedAt + N'","orders":' + @Orders + N']}';
SET @CurrentBytes = DATALENGTH(CAST(@Inner COLLATE Latin1_General_100_CI_AS_SC_UTF8 AS VARCHAR(MAX)));
SET @orderIndex = @orderIndex + 1;
END
-- At this point @Inner already contains the orders array closed with ']}' and CurrentBytes > TargetBytes (or reached cap)
-- Convert inner JSON NVARCHAR -> VARCHAR (UTF-8 collation) -> VARBINARY to get UTF-8 bytes, then base64
DECLARE @bin VARBINARY(MAX) = CAST(CAST(@Inner COLLATE Latin1_General_100_CI_AS_SC_UTF8 AS VARCHAR(MAX)) AS VARBINARY(MAX));
DECLARE @Base64 NVARCHAR(MAX);
SELECT @Base64 = CAST('' AS XML).value('xs:base64Binary(sql:variable("@bin"))','nvarchar(max)');
-- Outer LogDateTime (ISO-8601 Z)
DECLARE @OuterLogDateTime NVARCHAR(50) = FORMAT(DATEADD(SECOND, @rn - 1, @StartUtc), 'yyyy-MM-ddTHH:mm:ssZ');
DECLARE @OuterJson NVARCHAR(MAX) =
N'{"LogAgentValueId":42,"EndPointName":"INT101: Receive Hello World Log Events","EndPointUri":"C:\\temp\\in","EndPointDirection":0,"EndPointTypeId":60,"OriginalMessageTypeName":"Hello.World.File/1.0","EventDirection":17,"LogDateTime":"'
+ @OuterLogDateTime + N'","ProcessingUser":"DOMAIN\\user","SequenceNo":0,"EventNumber":0,"LogText":"File OK","ApplicationInterchangeId":"","LocalInterchangeId":"00000000-0000-0000-0000-000000000000","LogStatus":0,"ProcessName":"My Process","ProcessingMachineName":"localhost","ProcessingModuleName":"INT101-HelloWorld-Application","ProcessingModuleType":"FilePickup","ServiceInstanceActivityId":"00000000-0000-0000-0000-000000000000","ProcessingTime":80,"Body":"'
+ @Base64 + N'","Bodies":null,"Context":{"CorrelationId":"064205E2-F7CF-43A6-B514-4B55536C2B67","ExtendedProperties/1.0#Filename":"HelloWorld.json"}}';
INSERT INTO dbo.LogEvents (LogEvent, ValidationFailed) VALUES (@OuterJson, 0);
SET @i = @i + 1;
END;
The example above generates 10,000 log entries, each containing a batch of orders. The inner JSON grows until it exceeds the `@BodySizeKB` target (4 MB with the defaults) or hits the `@MaxOrdersPerBatch` safety cap, whichever comes first; with the default cap of 1,000 orders per batch, the payload stays well below the 4 MB target, so raise the cap if you need larger bodies. Adjust `@RowCount` and `@BodySizeKB` as needed to test different volumes and sizes.