Oracle Functions - Connecting To ATP With Node.JS

In my last post we looked at connecting to ATP with a Java-based serverless function. That worked out great, but I wanted to show you how to accomplish the same task with a Node.JS-based function. In this post we'll do just that, but with a slight twist on how we store the data.

To get started, create a new Fn application via the Fn CLI. This time we’ll pass our config values in directly with the create app call:

$ fn create app --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1.phx..."]' --config DB_PASSWORD='[DB PASS]' --config DB_USER='[DB USER]' --config CONNECT_STRING='[CONNECT STRING FROM tnsnames.ora]' fn-atp-node-json

**Note:** You should always encrypt any configuration variables that contain sensitive information. Check my guide to using Key Management in OCI to learn how!

Now create our first function, the one that will read records. Instead of using an init-image like we did last time, we'll use the node runtime and modify things ourselves:

$ fn init --runtime node fn-atp-node-json-read

Open the generated func.yaml file and change the value of runtime to docker, then add the format, memory, timeout and idle_timeout values shown below:

schema_version: 20180708
name: fn-atp-node-json-read
version: 0.0.91
runtime: docker
format: http-stream
memory: 256
timeout: 120
idle_timeout: 1800

Now create a Dockerfile in the root of the function. We'll need our wallet as before (download it and put it in the root of the function), but this time we'll also need the Oracle Instant Client so that Node can make the proper connection; we'll grab that dependency and install it in the Dockerfile. The entire Dockerfile looks like this:

FROM oraclelinux:7-slim

RUN yum -y install oracle-release-el7 oracle-nodejs-release-el7 && \
    yum-config-manager --disable ol7_developer_EPEL && \
    yum -y install oracle-instantclient19.3-basiclite nodejs && \
    rm -rf /var/cache/yum && \
    groupadd --gid 1000 --system fn && \
    useradd --uid 1000 --system --gid fn fn

COPY wallet/* /usr/lib/oracle/19.3/client64/lib/network/admin/

WORKDIR /function
ADD package.json package-lock.json func.js func.yaml /function/
RUN npm install
ENTRYPOINT ["node", "func.js"]

In this Dockerfile we're using Oracle Linux as a base image and installing the Instant Client dependencies via the Oracle yum repository instead of installing them manually (which makes life much easier!). We're also copying in our wallet files so they can be used to make the connection. Finally, we set node func.js as our ENTRYPOINT.

Next, run npm install oracledb and npm install dateformat from the function root. This installs the Oracle DB driver for Node and a date formatting library that we'll use in our queries. We also need to create the table that we'll read from and write to in this demo. This time we'll take advantage of the JSON column support in ATP, so our table is pretty simple:

CREATE TABLE JSON_DEMO 
(
      ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY INCREMENT BY 1 MAXVALUE 9999999999999999999999999999 MINVALUE 1 CACHE 20 NOT NULL 
    , CAPTURED_AT TIMESTAMP(6) 
    , DATA CLOB 
    , CONSTRAINT JSON_DEMO_PK PRIMARY KEY ( ID )
)
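Since the JSON lands in a plain CLOB, the database won't reject malformed documents on its own. If you want that guarantee, you can optionally add an IS JSON check constraint (a sketch; the demo works fine without it):

```sql
-- Optional: have the database reject anything that isn't valid JSON.
ALTER TABLE JSON_DEMO ADD CONSTRAINT JSON_DEMO_DATA_IS_JSON CHECK (DATA IS JSON);
```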

Now on to our function. Open func.js and populate it like this:

const fdk = require('@fnproject/fdk');
const oracledb = require('oracledb');
oracledb.outFormat = oracledb.OBJECT;
oracledb.fetchAsString = [oracledb.CLOB];

let pool;

fdk.handle( async function(input){
    // Create the connection pool lazily on the first invocation;
    // it is reused while the container stays warm.
    if( !pool ) {
        pool = await oracledb.createPool({
            user: process.env.DB_USER,
            password: process.env.DB_PASSWORD,
            connectString: process.env.CONNECT_STRING,
        });
    }
    const connection = await pool.getConnection();
    const records = await connection.execute("select * from json_demo");
    const result = records.rows.map((row) => {
        return {
            id: row.ID,
            capturedAt: row.CAPTURED_AT,
            data: JSON.parse(row.DATA),
        }
    });
    await connection.close();
    return result;
}, {});
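The `let pool` guard above takes advantage of the fact that Fn keeps the container around between warm invocations, so module-level state survives across calls. Here's the pattern in isolation, with a stand-in for `oracledb.createPool` so it can run anywhere:

```javascript
// Stand-in for oracledb.createPool so the caching pattern can be seen
// in isolation; in the real function this is the actual pool creation.
let creations = 0;
async function createPool() {
  creations += 1;
  return { tag: 'pool-' + creations };
}

let pool;

// Mirrors the handler above: the pool is created on the first (cold)
// invocation and reused while the container stays warm. This assumes
// requests are handled one at a time; concurrent cold-start calls
// could otherwise race past the `if (!pool)` check.
async function handle() {
  if (!pool) {
    pool = await createPool();
  }
  return pool;
}
```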

In this function we pull in our required libraries (the FDK for Node and the Oracle DB package) and register a single handler that is invoked when our function is called. Within that handler we grab a connection from the pool, query for all of the records, and return them as an array of objects. We can deploy and invoke it to test things out, but at this point our result will obviously be an empty array, so let's also create a function that handles inserts. Copy the entire project above, change the function name within func.yaml, and modify func.js to perform an insert instead:

const fdk = require('@fnproject/fdk');
const oracledb = require('oracledb');
const dateFormat = require('dateformat');

oracledb.outFormat = oracledb.OBJECT;
oracledb.fetchAsString = [oracledb.CLOB];

let pool;

fdk.handle( async function(input){
    if( !pool ) {
        pool = await oracledb.createPool({
            user: process.env.DB_USER,
            password: process.env.DB_PASSWORD,
            connectString: process.env.CONNECT_STRING,
        });
    }
    const connection = await pool.getConnection();
    const insert = await connection.execute("insert into json_demo (data, captured_at) values (:data, to_timestamp(:capturedAt, 'yyyy-mm-dd HH24:mi:ss'))",
      {
        data: JSON.stringify(input),
        capturedAt:  dateFormat(new Date(), 'yyyy-mm-dd HH:MM:ss')
      },
      { autoCommit: true }
    );
    await connection.close();
    return {insert: insert, complete: true};
}, {});
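One note on the mask passed to dateFormat: in that library mm is the month and MM is the minutes, so 'yyyy-mm-dd HH:MM:ss' lines up with the Oracle mask 'yyyy-mm-dd HH24:mi:ss' in the insert statement. If you'd rather skip the extra dependency, a minimal stand-in using only built-in Date methods looks like this:

```javascript
// Produces the same 'yyyy-mm-dd HH24:mi:ss'-shaped string as the
// dateFormat call above, using only built-in Date methods.
function formatTimestamp(d) {
  const pad = (n) => String(n).padStart(2, '0');
  return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate()) +
    ' ' + pad(d.getHours()) + ':' + pad(d.getMinutes()) + ':' + pad(d.getSeconds());
}

console.log(formatTimestamp(new Date(2019, 5, 11, 21, 33, 4))); // 2019-06-11 21:33:04
```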

Deploy this function and now we can invoke it, passing a JSON object containing the data that we want to persist. The JSON object can be anything, as long as it is valid JSON:

echo '{"isDemo": true, "key": 5, "isAwesome": true, "isEasy": true}' | fn invoke fn-atp-node-json fn-atp-node-json-insert
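If you're hand-writing the payload, a quick local check can save a failed invocation; JSON.parse does the validation for us (a throwaway helper, not part of the function):

```javascript
// Returns true if the string is parseable JSON; JSON.parse throws on
// anything malformed (unquoted keys, trailing commas, etc.).
function isValidJson(text) {
  try {
    JSON.parse(text);
    return true;
  } catch (e) {
    return false;
  }
}

console.log(isValidJson('{"isDemo": true, "key": 5}')); // true
console.log(isValidJson('{isDemo: true}'));             // false
```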

At this point we can invoke our read function and see the results of our previous insert(s):

$ fn invoke fn-atp-node-json fn-atp-node-json-read
[{"id":66,"capturedAt":"2019-06-11T21:33:04.000Z","data":{"isDemo":true,"key":5,"isAwesome":true,"isEasy":true}}]

As an alternative to the examples above, your function could use Simple Oracle Document Access (SODA) to handle your data in a style familiar from other NoSQL implementations. SODA does not require a predefined table or schema; instead, it works with document collections and a simplified API rather than traditional SQL (though you are not limited to the API; you can still query your collection with traditional SQL if you need to). An example of a function that uses SODA is shown below:

const fdk = require('@fnproject/fdk');
const oracledb = require('oracledb');

oracledb.outFormat = oracledb.OBJECT;
oracledb.fetchAsString = [oracledb.CLOB];
oracledb.autoCommit = true;

fdk.handle( async function(input){

    let connection;
    const result = [];

    try {
        connection = await oracledb.getConnection({
            user: process.env.DB_USER,
            password: process.env.DB_PASSWORD,
            connectString: process.env.CONNECT_STRING,
        });

        const soda = connection.getSodaDatabase();
        const collectionName = 'soda_collection';
        const collection = await soda.createCollection(collectionName);

        const document = input;
        await collection.insertOne(document);

        const documents = await collection.find().getDocuments();

        documents.forEach(function(element) {
            result.push( {
                id: element.key,
                createdOn: element.createdOn,
                lastModified: element.lastModified,
                document: element.getContent(),
            } );
        });
    }
    catch(err) {
        console.error(err);
    }
    finally {
        if (connection) {
            try {
                await connection.close();
            } catch(err) {
                console.error(err);
            }
        }
    }

  return result;
}, {});

If we were to invoke the SODA function as follows:

$ echo '{"id": 1, "temp": 58}' | fn invoke fn-atp-node-json fn-atp-node-json-soda
$ echo '{"id": 2, "temp": 60}' | fn invoke fn-atp-node-json fn-atp-node-json-soda

We would end up with two documents in the collection, backed by a table called soda_collection. We could query it using the SODA API in a "query by example" fashion (see the SODA Node docs for more info), but we can also query the table with SQL like so (note that the JSON document must be cast to varchar2, since it is stored as a BLOB):

select id, utl_raw.cast_to_varchar2(json_document) as doc
from soda_collection sc
where sc.json_document.temp < 60;

/*
ID                                  DOC
----------------------------------  ------------------
E05734C093684FBEBFDEC3404099F959    {"id":1,"temp":58}
*/
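For comparison, the same filter expressed through SODA's query-by-example syntax is just a JSON document using the $lt operator. Sketched here without a live connection (the collection.find() line in the comment assumes the SodaCollection from the function above):

```javascript
// QBE filter equivalent to: where sc.json_document.temp < 60
const filter = { temp: { $lt: 60 } };

// With a live SodaCollection you would apply it like this:
//   const docs = await collection.find().filter(filter).getDocuments();

console.log(JSON.stringify(filter)); // {"temp":{"$lt":60}}
```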

Another option is to transform the documents into a traditional table format that can be used in views or joined to relational tables using json_table:

select id, jt.temp, jt.docId
from soda_collection sc,
json_table(json_document, '$' COLUMNS (temp number PATH '$.temp', docId varchar2(50) PATH '$.id')) as jt;

/*
ID                                  TEMP  DOCID
----------------------------------  ----  -----
E05734C093684FBEBFDEC3404099F959      58  1
B675F6056E374F9ABF76C6E35A280ED5      60  2
*/

In my next post, we’ll look at invoking Oracle Functions with the OCI Java SDK.

Photo by Paul Esch-Laurent on Unsplash