# Oracle DB Integration

Oracle Integration Document

**Prerequisites:**

The following details should be provided by the client:

- DB_USERNAME
- DB_PASSWORD
- DB_PORT_NO
- DB_IP
- DB_NAME
- TABLE_NAME
- TABLE_STRUCTURE
- UNIQUE_FIELD (a unique field from the table structure)
- UNIQUE_FIELD_DATA_TYPE

Once all of the above details are ready, download the Oracle JDBC driver.

**Downloading JDBC drivers**

Download the driver archive from the link below to your local system. Depending on the Java version installed on the log collector, download either the ojdbc8 or the ojdbc11 tar.gz package.

ojdbc11:

<https://download.oracle.com/otn-pub/otn_software/jdbc/233/ojdbc11-full.tar.gz>
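As a rough guide (this helper is a sketch, not part of the client-provided procedure): ojdbc11 targets Java 11 and later, while ojdbc8 targets Java 8. Run `java -version` on the log collector and pick the package accordingly; a minimal shell helper might look like:

```shell
# Hypothetical helper: choose an ojdbc package from the Java major
# version reported by `java -version` on the log collector.
pick_ojdbc() {
  if [ "$1" -ge 11 ]; then
    echo "ojdbc11"
  else
    echo "ojdbc8"
  fi
}

pick_ojdbc 11   # -> ojdbc11
pick_ojdbc 8    # -> ojdbc8
```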

Move the driver package to the log collector and install the jars using the commands below.

Input Commands:

```shell
scp -r <package> blusapphire@172.31.252.1:/home/blusapphire/DB_drivers/
cd /home/blusapphire/DB_drivers
gzip -d <package_name.tar.gz>
tar -xf <package_name.tar>
mv *.jar /opt/collector/logstash-core/lib/jars
```
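As a side note, the separate `gzip -d` and `tar -xf` steps are equivalent to a single `tar -xzf` invocation. The self-contained demo below (using a throwaway scratch archive, not the real driver package) shows the one-step form:

```shell
# Demo on a scratch archive: tar -xzf == gzip -d + tar -xf.
cd "$(mktemp -d)"
echo "demo" > driver.jar
tar -czf package.tar.gz driver.jar   # build a sample .tar.gz
rm driver.jar
tar -xzf package.tar.gz              # extract in one step
ls driver.jar                        # the jar is restored
```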

**Configuring the input file in the logstash pipeline of the Oracle log source**

Input Commands:

```shell
cd /opt/lc/scripts/<DB_PIPELINE>/
vim 01-input.conf
```

**Replace the existing content of 01-input.conf with the following.**

**Note: replace the placeholder values in angle brackets with the details provided by the client.**

```
input {
  jdbc {
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@<DB_IP>:<DB_PORT_NO>:<DB_NAME>"
    jdbc_user => "<DB_USERNAME>"
    jdbc_password => "<DB_PASSWORD>"
    schedule => "* * * * *"
    statement => "SELECT * FROM <TABLE_NAME> WHERE <UNIQUE_FIELD> > :sql_last_value"
    use_column_value => true
    tracking_column => "<UNIQUE_FIELD>"
    tracking_column_type => "<UNIQUE_FIELD_DATA_TYPE>"
    last_run_metadata_path => "/opt/lc/data/sql_last_value1.yml"
    record_last_run => true
    # clean_run => false
  }
}
```
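For illustration only, here is the same input with hypothetical values filled in. The IP, port, SID, credentials, table, and column names below are invented placeholders, not client data; the example assumes a numeric `EVENT_ID` column as the unique field:

```
input {
  jdbc {
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    # Hypothetical values for illustration only:
    jdbc_connection_string => "jdbc:oracle:thin:@10.0.0.5:1521:ORCL"
    jdbc_user => "logreader"
    jdbc_password => "changeme"
    schedule => "* * * * *"            # run the query every minute
    statement => "SELECT * FROM AUDIT_LOG WHERE EVENT_ID > :sql_last_value"
    use_column_value => true
    tracking_column => "event_id"      # Logstash lowercases column names by default
    tracking_column_type => "numeric"
    last_run_metadata_path => "/opt/lc/data/sql_last_value1.yml"
    record_last_run => true
  }
}
```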

Save and exit after making the necessary changes.

**Now start and enable the service using the following command**

Input Commands:

```shell
sudo systemctl start <service_name>
sudo systemctl enable <service_name>
sudo systemctl status <service_name>
```

Verify that logs are being received in OpenSearch.

**Troubleshooting**

One of the most commonly faced issues is duplication of logs in OpenSearch. To fix it, make sure the following lines are present in the JDBC input file, uncommented, and set to the correct values:

```
use_column_value => true
tracking_column => "<UNIQUE_FIELD>"
tracking_column_type => "<UNIQUE_FIELD_DATA_TYPE>"
last_run_metadata_path => "/opt/lc/data/sql_last_value1.yml"
record_last_run => true
```
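To see why these settings prevent duplicates: on each scheduled run, Logstash substitutes the value stored at `last_run_metadata_path` for `:sql_last_value`, so only rows newer than the last one fetched are selected. With a hypothetical numeric column `EVENT_ID` and a stored value of `18234`, the executed query would render roughly as:

```sql
-- Hypothetical rendering of the statement after substitution:
SELECT * FROM AUDIT_LOG WHERE EVENT_ID > 18234
```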

Please refer to the screenshot below:

![](https://2078222076-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MMRHZBPHlLDUc8519fX%2Fuploads%2FdEN6r0qXxy5AC5OnGWim%2F99463a63%206a50%20487a%20b0f5%20898e3f21c48e.png?alt=media)

After making the necessary changes restart the service using the following command.

```shell
sudo systemctl restart <service_name>
sudo systemctl status <service_name>
```

The service status should be running without any error.

Then check whether the value is being stored in sql_last_value1.yml using the following commands:

```shell
cd /opt/lc/data
cat sql_last_value1.yml
```

The expected output is the most recent value of the tracking column stored by the pipeline.
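As an illustration (the exact value depends on the tracking column; the number below is hypothetical), for a numeric tracking column the file typically holds a single YAML scalar:

```yaml
--- 18234
```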
