
Microsoft SQL Integration Document

Prerequisites:

Below are the details shared by the client.

DB_USERNAME
DB_PASSWORD
DB_PORT_NO
DB_IP
DB_NAME
TABLE_NAME
TABLE_STRUCTURE
UNIQUE_FIELD (a unique field from the table structure)
UNIQUE_FIELD_DATA_TYPE
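For illustration, a filled-in set of details might look like the following. Every value here is hypothetical and is only used to make the later examples concrete:

DB_USERNAME            = svc_logreader
DB_PASSWORD            = <shared securely by the client>
DB_PORT_NO             = 1433
DB_IP                  = 10.10.20.15
DB_NAME                = AuditDB
TABLE_NAME             = dbo.SecurityEvents
TABLE_STRUCTURE        = EventID (int), EventTime (datetime), EventType (varchar), Message (varchar)
UNIQUE_FIELD           = EventID
UNIQUE_FIELD_DATA_TYPE = numeric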

Once we have all the above details ready, we have to download the MSSQL JDBC driver.

Downloading JDBC drivers

Download the driver from the link below to your local system:

https://go.microsoft.com/fwlink/?linkid=2247860
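Alternatively, if the log collector itself has internet access, the archive can be pulled down directly on the collector. A sketch (the file name follows the driver version used in this guide and may differ for newer releases):

#wget "https://go.microsoft.com/fwlink/?linkid=2247860" -O sqljdbc_12.4.2.0_enu.tar.gz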

Move the driver package to the log collector and extract it using the commands below.

Input Commands:

#scp -r sqljdbc_12.4.2.0_enu.tar.gz blusapphire@172.31.252.1:/home/blusapphire/DB_drivers/

#cd /home/blusapphire/DB_drivers

#gzip -d sqljdbc_12.4.2.0_enu.tar.gz

#tar -xf sqljdbc_12.4.2.0_enu.tar

#mv *.jar /opt/collector/logstash-core/lib/jars
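To confirm the driver jar is now visible to Logstash, a quick check like the following can be used (the exact jar file name depends on the downloaded driver version):

#ls -l /opt/collector/logstash-core/lib/jars | grep -iE "mssql|sqljdbc"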

Configuring the input file in the Logstash pipeline of the MSSQL log source.

Input Commands:

#cd /opt/lc/scripts/<DB_PIPELINE>/

#vim 01-input.conf

Replace the existing content of 01-input.conf with the following content.

Note: Replace the placeholder values in angle brackets with the details provided by the client.

input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://<DB_IP>:<DB_PORT_NO>;databaseName=<DB_NAME>;encrypt=true;trustServerCertificate=true;"
    jdbc_user => "<DB_USERNAME>"
    jdbc_password => "<DB_PASSWORD>"
    schedule => "* * * * *"
    statement => "SELECT * FROM <TABLE_NAME> WHERE <UNIQUE_FIELD> > :sql_last_value"
    use_column_value => true
    tracking_column => "<UNIQUE_FIELD>"
    tracking_column_type => "<UNIQUE_FIELD_DATA_TYPE>"
    last_run_metadata_path => "/opt/lc/data/sql_last_value1.yml"
    record_last_run => true
    #clean_run => false
  }
}
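For reference, a fully filled-in input block using the hypothetical values from the prerequisites example above might look like this sketch (your values will differ):

input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://10.10.20.15:1433;databaseName=AuditDB;encrypt=true;trustServerCertificate=true;"
    jdbc_user => "svc_logreader"
    jdbc_password => "<DB_PASSWORD>"
    # run the query every minute
    schedule => "* * * * *"
    # only pull rows newer than the last stored tracking value
    statement => "SELECT * FROM dbo.SecurityEvents WHERE EventID > :sql_last_value"
    use_column_value => true
    tracking_column => "eventid"
    tracking_column_type => "numeric"
    last_run_metadata_path => "/opt/lc/data/sql_last_value1.yml"
    record_last_run => true
  }
}

Note that tracking_column_type accepts only numeric (the default) or timestamp, and because the JDBC input lowercases column names by default, the tracking column is normally given in lowercase.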

Save and exit after making the necessary changes.

Now start and enable the service using the following commands.

Input Commands:

#sudo systemctl start <service_name>

#sudo systemctl enable <service_name>

#sudo systemctl status <service_name>

Verify the receipt of logs in OpenSearch.
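One way to verify delivery is to list the indices on OpenSearch and confirm that the index for this log source exists and is growing. A sketch, where the host, credentials, and index name are placeholders for your environment:

#curl -sk -u <OS_USERNAME>:<OS_PASSWORD> "https://<OPENSEARCH_HOST>:9200/_cat/indices?v" | grep <INDEX_NAME>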

Troubleshooting

One of the most commonly faced issues is duplication of logs in OpenSearch. To fix this, make sure the following lines are present with correct values and uncommented in the JDBC input file.

use_column_value => true
tracking_column => "<UNIQUE_FIELD>"
tracking_column_type => "<UNIQUE_FIELD_DATA_TYPE>"
last_run_metadata_path => "/opt/lc/data/sql_last_value1.yml"
record_last_run => true
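A quick way to confirm these settings are present and not commented out is to grep the input file:

#grep -nE "use_column_value|tracking_column|last_run_metadata_path|record_last_run" /opt/lc/scripts/<DB_PIPELINE>/01-input.conf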


After making the necessary changes, restart the service using the following commands.

#sudo systemctl restart <service_name>

#sudo systemctl status <service_name>

The service status should show running without any errors.
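If the service fails to start or logs still do not arrive, the service journal usually contains the underlying JDBC error (for example authentication or connectivity failures):

#sudo journalctl -u <service_name> -n 100 --no-pager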

Then check whether the tracking value is being stored in sql_last_value1.yml using the following commands.

#cd /opt/lc/data

#cat sql_last_value1.yml

The expected output is the latest value of the tracking column that the pipeline has stored.
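For example, with a numeric tracking column the file typically holds a single YAML-serialized value similar to the following (the actual number will reflect your data); a timestamp tracking column is stored in a YAML timestamp format instead:

--- 1048576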