# Postgres Client

## Installation

```sh
$ yarn add @cookielab.io/postgres-client pg
```

The library declares pg as a peer dependency, so pg has to be installed alongside it. This ensures that both the root project and the library use the same pg version.

## Usage

### Recommendation

Every model function working with the database (put bluntly, everything in your model/ directory) should take a database connection as a parameter, passed in from a higher level of the application (the routes). This ensures that transactions run correctly when they are nested.
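As a sketch of this pattern (the `findUserByEmail` function, the `users` table, and the minimal `Conn` interface below are illustrative stand-ins, not part of the library):

```typescript
// Minimal stand-in for the library's Connection type (illustration only).
interface Conn {
    findOne<T>(query: string, values?: readonly any[]): Promise<T | null>;
}

// A model function takes the connection as a parameter instead of importing
// it, so a route can pass in either the shared client or a transaction.
function findUserByEmail(connection: Conn, email: string): Promise<{id: number} | null> {
    // With the real library this would be connection.findOne(SQL`…`).
    return connection.findOne('SELECT id FROM users WHERE email = $1', [email]);
}

// A fake connection standing in for the real client in this sketch:
const fake: Conn = {
    findOne: async <T>(_query: string, _values?: readonly any[]) => ({id: 1} as unknown as T),
};

findUserByEmail(fake, 'jon@snow.com').then((row) => console.log(row)); // {id: 1}
```

Because the model function only depends on the connection it is given, the same code runs inside or outside a transaction without change.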

### Connecting

```ts
import {Client, SQL} from '@cookielab.io/postgres-client';
import config from '/config';
import {Pool} from 'pg';

const pool = new Pool({
    host: config.database.host,
    database: config.database.database,
    user: config.database.user,
    password: config.database.password,
    port: config.database.port,
});

const client = new Client(pool);

export default client;
export {SQL, isUniqueViolation} from '@cookielab.io/postgres-client';
```

The pool is the Pool exported by pg and can be configured as such.

### Querying

```ts
import database, {SQL} from './connection';

const email = 'jon@snow.com';
database.query(SQL`SELECT * FROM table WHERE email = ${email}`);
```

Be aware! Each call on the connection exported from connection.js may execute on a different connection from the pool. To ensure that queries run serially on a single connection, use a transaction.

### Transactions

Transactions can be nested as deeply as needed; savepoints are used automatically. If a transaction fails (the callback throws), it is rolled back and the original error is rethrown. The connection has to be passed around, otherwise the queries would not run inside the transaction. The value returned from the transaction callback is returned by the transaction function.

```ts
import database, {SQL} from './connection';
import type {Connection} from '@cookielab.io/postgres-client';

const result = await database.transaction(async (transaction: Connection): Promise<number> => {
    await transaction.query(SQL`…`);

    await transaction.transaction(async (nestedTransaction: Connection): Promise<void> => {
        await nestedTransaction.query(SQL`…`);
    });

    return 42;
});

// result === 42
```
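The rollback-and-rethrow contract can be sketched without a database; `fakeTransaction` below is a stand-in for illustration, not part of the library:

```typescript
// Sketch of the transaction() contract, with no real database involved:
// the callback's return value is passed through, and a thrown error is
// rethrown to the caller after the (simulated) rollback.
async function fakeTransaction<T>(callback: () => Promise<T>): Promise<T> {
    try {
        return await callback(); // real client: BEGIN … COMMIT
    } catch (error) {
        // real client: ROLLBACK (or ROLLBACK TO SAVEPOINT when nested)
        throw error;
    }
}

fakeTransaction(async () => 42).then((value) => console.log(value)); // 42

fakeTransaction(async () => {
    throw new Error('boom');
}).catch((error: Error) => console.log(error.message)); // boom
```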

## SQL modifiers

The SQL`…` tagged-template syntax comes from the pg-async package and is re-exported by this library. The available modifiers are:

For an identifier name (table/column name):

- id
- ident
- identifier
- name

```ts
SQL`SELECT * FROM $name${tableName}`
```

For a list of identifiers separated by a comma:

- columnNames

```ts
SQL`SELECT $columnNames${columns} FROM table`
```

For a literal:

- (empty)
- literal

```ts
SQL`SELECT * FROM table WHERE email = ${email}`;
SQL`SELECT * FROM table WHERE email = $literal${email}`;
```

For a raw value (the value is not escaped):

- !

```ts
SQL`SELECT * FROM table WHERE email = $!${thisWillNotBeEscaped}`;
```

For an object:

- insert_object

```ts
const object = {
    column: 'value',
};

SQL`INSERT INTO table $insert_object${object}`; // INSERT INTO table (column) VALUES ('value')
```

For an assignment:

- assign

```ts
const object = {
    column: 'value',
};

SQL`UPDATE table SET $assign${object}`; // UPDATE table SET column = 'value'
```

For a list of values:

- values

```ts
const values = [
    'value',
    1234,
];

SQL`INSERT INTO table (string, number) VALUES ($values${values})`; // INSERT INTO table (string, number) VALUES ('value', 1234)
```

For a multi insert:

- multiInsert

```ts
const values = [
    {
        string: 'value',
        number: 1234,
    },
    {
        string: 'value',
        number: 1234,
    },
];

SQL`INSERT INTO table $multiInsert${values}`; // INSERT INTO table (string, number) VALUES ('value', 1234), ('value', 1234)
```

## Types mapping

The library automatically casts types in both directions (insert and select).

### Insert & Update

When inserting, values are cast by Postgres, meaning you can insert a string into an INTEGER column and an INTEGER will be stored. For a date, you can insert a Date object. You can also use any object that has a toSQL function (e.g. DateTime from luxon); the function is called before the value is sent to the database server.
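As a sketch of the toSQL hook (the `Money` class below is purely illustrative, not part of the library):

```typescript
// Any object exposing a toSQL() method is serialized via that method before
// the value is sent to the server. Money here is an illustrative example.
class Money {
    public constructor(private readonly cents: number) {}

    public toSQL(): string {
        return (this.cents / 100).toFixed(2);
    }
}

const price = new Money(1999);
console.log(price.toSQL()); // '19.99'
// database.query(SQL`INSERT INTO orders (price) VALUES (${price})`) would send '19.99'
```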

### Select & Where conditions

When selecting values from Postgres, values are cast as follows:

| PG column type | JS type | PG column value | JS value |
| --- | --- | --- | --- |
| ANY | object | NULL | NULL |
| BOOLEAN | boolean | true | true |
| DATETIME | Date | 2019-09-30T08:49:52.157Z | Mon Sep 30 2019 08:48:59 GMT+0000 (GMT) |
| TIMESTAMP | Date | 2019-09-30T08:49:52.157Z | Mon Sep 30 2019 08:48:59 GMT+0000 (GMT) |
| TIMESTAMP WITH TIMEZONE | Date | 2019-09-30T08:49:52.157Z | Mon Sep 30 2019 08:48:59 GMT+0000 (GMT) |
| SMALLINT | number | 42 | 42 |
| INTEGER | number | 42 | 42 |
| DOUBLE PRECISION | number | 42 | 42 |
| REAL | number | 42 | 42 |
| DECIMAL | string | 42 | '42' |
| NUMBER | string | 42 | '42' |
| BIGINT | string | 42 | '42' |
| NUMERIC | string | 42 | '42' |
| NUMERIC(PRECISION) | string | 42 | '42' |
| NUMERIC(PRECISION,SCALE) | string | 42 | '42.00' (for scale = 2) |
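Since NUMERIC, DECIMAL, and BIGINT values arrive as strings, converting them is up to the caller; a sketch of the usual options:

```typescript
// NUMERIC/DECIMAL/BIGINT columns arrive as strings so that no precision is
// silently lost. Convert explicitly only when a JS number is known to be safe.
const total: string = '42.50'; // what a NUMERIC(10,2) column yields

const asNumber = Number.parseFloat(total);
console.log(asNumber); // 42.5

// For true BIGINT values beyond Number.MAX_SAFE_INTEGER, keep full precision:
const big: string = '9007199254740993';
console.log(BigInt(big).toString()); // 9007199254740993
```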

## Custom types

The library allows registering transformers in both directions:

- database value -> JavaScript value
- JavaScript value -> database value

### Database value -> JavaScript value

The following code causes every datetime value to be converted to a Date instance (or null):

```ts
import database from './connection';

await database.registerDatabaseTypes([
    {
        name: 'datetime', // the database type name; its oid is found automatically
        parser: (value: string | undefined): Date | null => {
            if (value == null) {
                return null;
            }

            return new Date(value);
        },
    },
]);
```

### JavaScript value -> database value

The following code causes every TuringMachine value to be converted into a string via its encode method:

```ts
import database from './connection';

database.registerJavascriptTypes([ // does not return a promise
    {
        match: (value: any) => value instanceof TuringMachine,
        convert: (value: TuringMachine) => value.encode(),
    },
]);
```

## API

### Queries

#### Transactions

```ts
transaction<T>(callback: (connection: Connection) => Promise<T> | T): Promise<T>
```

Starts a transaction. The callback receives a connection as its first parameter; this connection carries the transaction and should be propagated down to any function that requires a database connection.

#### Any query

```ts
query<T>(input: QueryConfig | string, values?: readonly any[]): Promise<QueryResult<T>>
```

Runs any query on the first available client in the pool, or on the active connection during a transaction.

Example:

```ts
await client.query<void>(SQL`DELETE FROM table WHERE 1=1`);
```

#### Insert

```ts
insert<T>(table: string, values: T): Promise<void>
```

Inserts values into the given table. Values are translated as literals. Column names are converted to snake_case.

Example:

```ts
const object = {
    column: 'value',
};

await client.insert('table', object); // INSERT INTO table (column) VALUES ('value')
```

#### Select

```ts
findOne<T>(input: QueryConfig | string, values?: readonly any[]): Promise<T | null>
```

Returns the row if exactly one row is found. Returns null if no row is found. Throws OneRowExpectedError if more than one row is found.

Example:

```ts
const row = await client.findOne<{id: number, name: string}>(SQL`SELECT id, name FROM table LIMIT 1`);
```

```ts
findOneColumn<T>(input: QueryConfig | string, values?: readonly any[], columnIndex?: number): Promise<T | null>
```

Returns the value of the specified column index if exactly one row is found. Returns null if no row is found. Columns are indexed from 0; if no column index is specified, 0 is used as the default. Throws OneRowExpectedError if more than one row is found. Throws NonExistentColumnIndexError if a non-existent column index is requested.

Example:

```ts
const name = await client.findOneColumn<string>(SQL`SELECT id, name FROM table LIMIT 1`, [], 1);
```

```ts
getOne<T>(input: QueryConfig, error: {new(...parameters: readonly any[]): Error}): Promise<T>
```

Returns the row if exactly one row is found. Throws the given Error if no row is found. Throws OneRowExpectedError if more than one row is found.

Example:

```ts
class NotFoundError extends Error {} // any Error subclass works here

const oneRow = await client.getOne<{id: number, name: string}>(SQL`SELECT id, name FROM table LIMIT 1`, NotFoundError);
```

```ts
getColumn<T>(input: QueryConfig | string, values?: readonly any[], columnIndex?: number): Promise<readonly T[]>
```

Returns an array of the requested column's values. Columns are indexed from 0; if no column index is specified, 0 is used as the default. Throws NonExistentColumnIndexError if a non-existent column index is requested.

Example:

```ts
const ids = await client.getColumn<number>(SQL`SELECT id, name FROM table`, [], 0);
```

```ts
getOneColumn<T>(input: QueryConfig | string, values?: readonly any[], columnIndex?: number): Promise<T>
```

Returns the value of the specified column index if exactly one row is found. Columns are indexed from 0; if no column index is specified, 0 is used as the default. Throws OneRowExpectedError if either no row or more than one row is found. Throws NonExistentColumnIndexError if a non-existent column index is requested.

Example:

```ts
const oneId = await client.getOneColumn<number>(SQL`SELECT id, name FROM table LIMIT 1`);
```

```ts
getRow<T>(input: QueryConfig | string, values?: readonly any[]): Promise<T>
```

Returns the row if exactly one row is found. Throws OneRowExpectedError if either no row or more than one row is found.

Example:

```ts
const oneRow = await client.getRow<{id: number, name: string}>(SQL`SELECT id, name FROM table LIMIT 1`);
```

```ts
getRows<T>(input: QueryConfig | string, values?: readonly any[]): Promise<readonly T[]>
```

Returns an array of the found rows.

Example:

```ts
const rows = await client.getRows<{id: number, name: string}>(SQL`SELECT id, name FROM table`);
```

### Collectors

#### BatchInsertCollector

Collects rows and inserts them in batches.

The default maximum number of rows per batch is 1000. Rows are added with the add() method and inserted via the multiInsert SQL modifier when flush() is called, or automatically whenever add() fills the current batch. The number of rows per batch can be set with the batchSize option; a query suffix can be set with the querySuffix option.

ALWAYS call await flush() when you are done adding rows.

```ts
import database from './connection';
import {BatchInsertCollector} from '@cookielab.io/postgres-client';

await database.transaction(async (transaction) => {
    const ids = [{id: 1}, {id: 2}, {id: 3}];

    const insertCollector = new BatchInsertCollector(transaction, 'table', {
        batchSize: 500, // default is 1000
        querySuffix: 'ON CONFLICT (id) DO NOTHING', // optional; any multi-insert suffix works, including SELECT ... FROM
    });

    for (const id of ids) {
        insertCollector.add(id);
    }

    await insertCollector.flush(); // insert the remaining rows

    const insertedRowsCount = insertCollector.getInsertedRowCount(); // 3
});
```

#### BatchDeleteCollector

Collects key values and deletes the matching rows in batches.

The default maximum number of keys per batch is 1000. Key values to delete are added with the add() method. The key column name is set with the keyName option; if no value is supplied, id is used as the default. The number of keys per batch can be set with the batchSize option.

Rows are deleted with the following statement (after all SQL modifiers are applied):

```sql
DELETE FROM table WHERE keyName IN ($1, $2, $3);
```

ALWAYS call await flush() when you are done adding rows.

```ts
import database from './connection';
import {BatchDeleteCollector} from '@cookielab.io/postgres-client';

await database.transaction(async (transaction) => {
    const names = ['name1', 'name2', 'name3'];

    const deleteCollector = new BatchDeleteCollector(transaction, 'table', {
        keyName: 'name', // default is id
        batchSize: 500, // default is 1000
    });

    for (const name of names) {
        deleteCollector.add(name);
    }

    await deleteCollector.flush(); // delete the remaining rows

    const deletedRowsCount = deleteCollector.getDeletedRowCount(); // 3
});
```

If you find any bugs or have a feature request, please open an issue on GitHub!
