
PostgreSQL Configuration Example

Effortlessly integrate your enterprise systems by configuring the Nodinite Pickup Logging Service to fetch JSON Log Events directly from your PostgreSQL database. This guide empowers you to set up secure, scalable, and reliable log event collection for your organization, with robust validation and table design for compliance and analytics.

✅ Seamless integration with PostgreSQL for real-time log event collection
✅ Secure and compliant long-term storage of log data
✅ Scalable configuration for enterprise environments
✅ Automated validation and error handling for reliable operations

graph LR
    subgraph "PostgreSQL"
        roS(fal:fa-database Database Table)
    end
    subgraph "Nodinite instance"
        roPS(fal:fa-truck-pickup Pickup Service)
        roS --> roPS
        roPS --> |Long term storage|roDB(fal:fa-database Log Database)
    end

The diagram above illustrates how the Pickup Service interacts with PostgreSQL tables, processes messages, and stores them in the Nodinite Log Database.

PostgreSQLs

The PostgreSQLs section lets you manage all PostgreSQL database sources for your log events.

In the Remote Configuration GUI, navigate to the PostgreSQLs tab. Here, you can add, edit, or remove PostgreSQL database configurations. Each configuration defines how the Nodinite Pickup Log Events Logging Service connects to and processes messages from a specific PostgreSQL database table.

PostgreSQLs Configuration Tab
Example of the PostgreSQLs configuration tab in the Remote Configuration GUI.

Click the Add button to create a new PostgreSQL configuration. Fill in the required fields, such as connection string, database name, and row limit settings. You can also configure Replace Rules for advanced message processing.

PostgreSQL Configuration Accordion
Example of a PostgreSQL configuration accordion in the Remote Configuration GUI.

Expand the accordion for each PostgreSQL configuration to access advanced settings, including Replace Rules for message processing. These rules allow you to modify message content before it's stored in Nodinite, ensuring consistency and compliance with your logging standards.

Important

Ensure that your PostgreSQL database user has read access to the LogEvents table and update permissions to set the ValidationFailed flag. Test connectivity before enabling a configuration.
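A minimal grant for that user could look like the sketch below, assuming the table lives in the public schema and that pickup_user is a placeholder for the login role from your connection string; the full example script later in this guide uses GRANT ALL instead, so adjust to your own security policy.

-- Placeholder role: replace pickup_user with the user from your connection string
GRANT SELECT, UPDATE ON TABLE public."LogEvents" TO pickup_user;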

General tab

The General tab provides the fundamental settings for your PostgreSQL configuration within the Remote Configuration GUI.
General Tab
The General tab allows you to enable/disable the configuration and provide a display name and description.

The General tab includes the following key fields:

| Field | Description | Required | Notes |
| --- | --- | --- | --- |
| Enabled | Checkbox to activate or deactivate this PostgreSQL configuration | No | When unchecked, the Pickup Service will skip this configuration |
| Display Name | User-friendly name for this PostgreSQL source configuration | Yes | Used in the GUI and logs for easy identification |
| Description | Optional detailed description of the PostgreSQL configuration | No | Helps document the purpose and details of this specific database setup |

Follow these guidelines when configuring the General tab:

  • Use descriptive names that clearly indicate the database and purpose (e.g., "Production DB - Order Events")
  • In the Description field, note important details such as the database's business purpose, owner contact, or any special processing notes
  • The Display Name appears throughout the Nodinite interface, so keep it concise yet meaningful
  • Leave the configuration disabled during initial setup if you're not ready to start collecting log events

Source tab

The Source tab contains the database connection configuration and query settings.
Source Tab
The Source tab contains settings for connecting to the PostgreSQL database.

The following key fields configure database connectivity:

| Field | Description | Required | Notes |
| --- | --- | --- | --- |
| Connection String | PostgreSQL connection string with authentication | Yes | Example: Server=host;Port=5432;Database=dbname;User Id=user;Password=pass;SSLMode=Prefer |
| Max Rows Limit | Maximum number of rows to fetch per query execution | No | Default: 500; higher values may impact memory usage; adjust based on your environment |

Follow these guidelines when setting up the Source tab:

  • Ensure your database user has SELECT permissions on the LogEvents table and UPDATE permissions to flag invalid records
  • Use connection strings with SSL/TLS encryption for production environments
  • Test the connection before enabling to verify credentials and network connectivity
  • Set Max Rows Limit based on available server memory and network bandwidth
  • For large datasets, start with 500 rows and adjust upward after testing
  • Include proper timeout and retry settings in your connection string
  • Use strong, unique passwords and store them securely
  • Never embed credentials directly in documentation or version control
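As a rough illustration of how the Max Rows Limit bounds each polling loop, the query below fetches up to 500 pending rows. This is only a sketch of the behaviour, not the service's actual query, and it assumes the LogEvents table described later in this guide.

-- Illustration only: each loop reads at most Max Rows Limit rows (500 here)
SELECT "Id", "LogEvent"
FROM public."LogEvents"
WHERE "ValidationFailed" = false
ORDER BY "Id"
LIMIT 500;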

Destination tab

The Destination tab configures where processed log events are stored. By default, events are sent to the Nodinite Log API where they are written to the Log Database for long-term storage and analysis.
Destination Tab
The Destination tab contains settings for connecting to the Log API and authentication options.

The following key fields configure the destination:

| Field | Description | Required | Notes |
| --- | --- | --- | --- |
| Use Log API | Checkbox to enable sending events to the Log API | Yes | When checked, processed events are written to the Log Database through the Log API |
| Log API Base URL | Base URL for your Log API instance | Yes | Example: http://{host}:{port}/LogAPI/ |
| Protected | Checkbox to enable OAuth 2.0 authentication | No | Check this if you are using an authenticated Log API |
| Client ID | Public identifier for your application (issued by IDP) | Conditional | Required when Protected is checked |
| Client Secret | Confidential key for application authentication | Conditional | Required when Protected is checked; keep this secure |
| Scope | Space-separated list of access permissions | Conditional | Required when Protected is checked (e.g., read write) |
| IDP Token Endpoint | URL where authentication requests are sent | Conditional | Required when Protected is checked; this is where the client obtains access tokens |

Follow these guidelines for the Destination tab:

  • Always ensure Use Log API is checked to maintain proper log event storage
  • For local/development environments, use unprotected Log API URLs (e.g., http://localhost:40002/LogAPI/)
  • For production environments with authentication, check the Protected checkbox and provide valid OAuth 2.0 credentials
  • When using Protected authentication, ensure your Client ID, Client Secret, and Scope are correctly configured with your identity provider (IDP)
  • The IDP Token Endpoint must be accessible from the Pickup Service instance
  • Keep Client Secret values secure and never commit them to version control

Replace Rules

Read about Replace Rules in the shared configuration section. Replace Rules allow you to modify message content before it's stored in Nodinite, ensuring consistency and compliance with your logging standards.

Replace Rules Tab
The Replace Rules tab allows you to configure rules for modifying message content.

Configuration file

Note

This section applies to older versions (<7.1.x) of the Nodinite Pickup Log Events Logging Service.

C:\Program Files\Nodinite\Logging Agent - Pickup Service\Settings.json
{
  ...
  "PostgreSQLs": [
    {
      "Enabled": true,
      "LogApi": "https://localhost/Nodinite/Dev/LogAPI/",
      "UseConfigurationDatabase": false,
      "ConfigurationDatabaseConnectionString": null,
      "ConnectionString": "Server=name.databases.appdomain.cloud;Port=31902;Database=ibmclouddb;User Id=ibm_cloud_user;Password=yoursecretpassword;SSLMode=Prefer;Trust Server Certificate=true",
      "MaxRowsLimit": 500,
      "ReplaceRules": [
        {
          "Name": "Fix Endpoint real customer id to {customerId}",
          "ReplaceName": false,
          "ReplaceUri": true,
          "Pattern": "/([0-9]{4,})$",
          "Group1Value": "{customerId}"
        }
      ]
    }
  ]
  ...                      
}

PostgreSQLs is an array of PostgreSQL configurations. Each entry defines how the Log Agent connects to and processes messages from a specific PostgreSQL database. This setup ensures your Nodinite JSON Log Events are reliably collected and managed.

| Property | Description | Value Example | Comment |
| --- | --- | --- | --- |
| ConnectionString | PostgreSQL connection string | Server=replaceme.databases.appdomain.cloud;Port=31902;Database=ibmclouddb;User Id=ibm_cloud_id;Password=yoursecretpassword;SSLMode=Prefer;Trust Server Certificate=true | |
| MaxRowsLimit | Maximum number of rows to fetch on each loop | 500 (default) | Larger values may impact memory usage |
| Enabled | See the Shared Configuration section for more info | | |
| LogApi | See the Shared Configuration section for more info | | |
| UseConfigurationDatabase | See the Shared Configuration section for more info | | |
| ConfigurationDatabaseConnectionString | See the Shared Configuration section for more info | | |
| ReplaceRules | See the shared Replace Rules section for more info | | |

Important

When you are done editing the configuration, you must restart the Nodinite Pickup Logging Service for configuration changes to take effect.

PostgreSQL table

If you use a PostgreSQL database to store Log Events, your custom logging solution must define the following LogEvents table:

| Column | Datatype | Purpose |
| --- | --- | --- |
| LogEvent | json | Your JSON encoded Log Event goes here |
| Id | bigint | Automatically created identity; the Pickup Service must know which row to flag as invalid if the value in LogEvent is not valid |
| ValidationFailed | boolean | Automatically set to false when inserted; may be set to true during processing if errors are detected |
| ValidationFailedText | text | Updated if errors occur validating the LogEvent during processing |
| Created | timestamp | Automatically created during insert; useful for troubleshooting |

Info

Invalid Log Events will have ValidationFailed set to True. You must remove these entries manually.
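One way to handle that cleanup, shown here as a hedged example you can run manually or from your own maintenance job:

-- Inspect rows that failed validation, together with the reason
SELECT "Id", "ValidationFailedText", "Created"
FROM public."LogEvents"
WHERE "ValidationFailed" = true;

-- Remove the failed rows once they have been investigated
DELETE FROM public."LogEvents"
WHERE "ValidationFailed" = true;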

-- Table: public."LogEvents"

-- DROP TABLE public."LogEvents";

CREATE TABLE public."LogEvents"
(
    "LogEvent" json NOT NULL,
    "Id" bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
    "ValidationFailed" boolean NOT NULL DEFAULT false,
    "ValidationFailedText" text NULL,   
    "Created" timestamp with time zone NOT NULL DEFAULT now(),
    CONSTRAINT "LogEvents_pkey" PRIMARY KEY ("Id")
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

ALTER TABLE public."LogEvents"
    OWNER to admin;

GRANT ALL ON TABLE public."LogEvents" TO usergoeshere;

-- Repeat the grants as needed

Info

Change the value for 'usergoeshere' to your actual user.
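To show how a custom logging solution might feed the table, here is a hypothetical insert. Only the LogEvent column is supplied, the remaining columns are filled by their defaults, and the JSON body is a trimmed, illustrative payload rather than a complete Nodinite JSON Log Event.

-- Hypothetical example insert; the JSON must be a valid Nodinite JSON Log Event
INSERT INTO public."LogEvents" ("LogEvent")
VALUES ('{
    "LogAgentValueId": 42,
    "EndPointName": "Order Service",
    "EndPointUri": "https://example.com/orders/{customerId}",
    "LogDateTime": "2024-01-01T12:00:00Z",
    "LogText": "Order received"
}');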

Next Step