Author Archives: Raghavendar T S

Database Field/Column Naming Convention – Camel Case vs Snake Case

Many people would argue that the naming convention for database field names is the developers' choice, as long as it is applied consistently. That is true. But my personal recommendation is to use snake case, considering multiple factors. The main theme of this post revolves around polyglot persistence.

If you develop any service, the data will be persisted in a primary database (SQL/NoSQL). Based on other requirements, we might also have to store the data in other data stores such as Redis, Elasticsearch, or similar systems for low-latency queries, analytics, etc.

Each data store follows different naming standards, and snake case is supported universally. Assume you use camel case: what if one of the data stores you use does not recommend camel case? You will have to store the camel case field lowercased (e.g. readOnly becomes readonly) or write a mapping layer in between. This becomes confusing as the application grows.

Virtually every database supports snake case, and using it avoids a lot of confusion if you are going to use polyglot persistence.
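
A minimal sketch of the idea (not from the original post; the pg/redis/es clients in the comments are hypothetical placeholders): with snake_case field names, the same record shape can be written to every data store as-is, with no per-store field-name mapping layer.

interface UserSettings {
    user_id: string;
    read_only: boolean;   // a camelCase "readOnly" would be folded to "readonly" by some stores
    created_at: string;
}

const settings: UserSettings = {
    user_id: "u-1",
    read_only: true,
    created_at: new Date().toISOString(),
};

// The persistence calls below are illustrative comments only:
// await pg.query("INSERT INTO user_settings (user_id, read_only, created_at) VALUES ($1, $2, $3)",
//                [settings.user_id, settings.read_only, settings.created_at]);
// await redis.set(`user_settings:${settings.user_id}`, JSON.stringify(settings));
// await es.index({ index: "user_settings", document: settings });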

Logback Configuration in Apache Flink

When a Flink job is started in local mode, the logging configuration file from the classpath is used. When the Flink job is started in standalone mode, the logging configuration is read from /opt/flink/conf. Refer to the following steps if you want to use the Logback implementation for Flink jobs running in standalone mode.

Tested Version: Apache Flink 1.9.1

Steps

  1. Remove the following files
    • /opt/flink/conf/log4j*.properties
    • /opt/flink/lib/log4j-1.2.17.jar
    • /opt/flink/lib/slf4j-log4j12-1.7.15.jar
  2. Copy the following files to /opt/flink/lib
    • logback-core-1.2.3.jar
    • slf4j-api-1.7.15.jar
    • logback-classic-1.2.3.jar
  3. Update the logback.xml in the path /opt/flink/conf (a minimal example is shown after these steps)
  4. Update the file /opt/flink/bin/flink-console.sh
    Existing Command:
    #log_setting=("-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j-console.properties" "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback-console.xml")
    
    Updated Command:
    log_setting=("-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback.xml")
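
For reference, a minimal logback.xml along the following lines should work; this is a generic Logback console configuration (not taken from the Flink distribution), so adjust the pattern and log level to your needs.

<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
    </root>
</configuration>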
    
Apollo GraphQL – Private (Authentication)/Public API using Schema Directives/Annotation

This is one of the most common use cases: we need to disable authentication for certain APIs, such as a Login API (Generate Access Token). Basically, all our APIs are hosted in a single instance of Apollo GraphQL server (we did not use any middleware such as Express). There are a number of ways to solve this problem. The main idea behind the solution is that we should not throw an error from the context.

Context: Do not throw any error from the context

Note: Make sure you build the context only if the Authorization header is present in the HTTP request. Do not assume that the header will always be available; otherwise you may run into null reference errors.

const apolloServer: ApolloServer = new ApolloServer({
    context: async ({ req }) => {
        let context = null;
        try {
            // Build the context only if the Authorization header is present,
            // e.g. check req.headers.authorization before validating it.
            // context = ... // Build Context
            // If unauthorized, set an error on the context instead of throwing:
            // context = {
            //    error: "Unauthorized"
            // };
        }
        catch (e) {
            context = {
                error: e // or anything you like
            };
        }
        return context;
    }
});

Approach 1: Throw Error from Resolver (Typical Approach)

Resolver Example

export default {
    Query: {
        async testAPI(parent: any, args: any, context: any, info: any): Promise<any> {
            if(context.error) {
                throw context.error;
            }
            //Business Logic
        }
    }
}

You will have to add the above code in each of the resolvers to throw the required error back to the client.

Approach 2: Schema Directives

Authentication Directive

import { defaultFieldResolver, GraphQLField } from "graphql";
import { SchemaDirectiveVisitor } from "apollo-server";

export class AuthenticationDirective extends SchemaDirectiveVisitor {
   visitFieldDefinition(field: GraphQLField<any, any>) {
      // Fall back to the default resolver for fields without an explicit resolver
      const { resolve = defaultFieldResolver } = field;
      field.resolve = async function (source, args, context, info) {
         // Reject the request if the context was built with an error
         if (context.error) {
            throw context.error;
         }
         return resolve.apply(field, [source, args, context, info]);
      };
   }
}

Schema

import { gql } from "apollo-server";

export default gql`

   directive @authenticate on FIELD_DEFINITION

    extend type Query {
        #AuthenticationDirective will be executed
        persons: PersonInfo! @authenticate

        #AuthenticationDirective will not be executed since the annotation 
        #@authenticate is not added
        login: LoginInfo!
    }
`

Apollo Server

Add schemaDirectives while initializing the Apollo Server instance.

const apolloServer: ApolloServer = new ApolloServer({
    schemaDirectives: {
        authenticate: AuthenticationDirective
    },
    context: async ({ req }) => {
        let context = null;
        try {
            // Build the context only if the Authorization header is present.
            // context = ... // Build Context
            // If unauthorized, set an error on the context instead of throwing:
            // context = {
            //    error: "Unauthorized"
            // };
        }
        catch (e) {
            context = {
                error: e // or anything you like
            };
        }
        return context;
    }
});

Note: We should not move the authentication logic (API/DB calls) into the directive, since the directive will be called for each query/mutation in the request. There might be better solutions as well. Kindly comment below if any.

Google Sheets – QUERY with WHERE condition from another Sheet/Tab

Assume there are 2 tabs/sheets (Sheet 1 and Sheet 2) in the workbook and the requirement is to query all or a specific set of columns from Sheet 2. Select a cell in Sheet 1 and add the following formula.

Formula

=query('Sheet 2 - Test Sheet'!A1:E10, "SELECT B, C, D, E WHERE A Matches '" & D34 & "' ", 0)

Details

1. Sheet 2 - Test Sheet is the name of the second tab/sheet.
2. A1:E10 is the range of the dataset on which we want to run the query.
3. We are selecting columns B, C, D and E from Sheet 2.
4. We are adding a condition where the cell value in D34 of Sheet 1 matches column A in Sheet 2.
5. The value 0 indicates that the data has no header row, so no header is copied from Sheet 2 to Sheet 1.

Solved – mount: wrong fs type, bad option, bad superblock on Linux (AWS EBS EC2)

The problem is that the block device does not yet have a file system. We need to create one, after which we can mount the block device on the required directory.

Error

mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Solution

1. Execute the following command to get the list of all block devices
lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,UUID,LABEL

2. Create a directory to mount the block device 
mkdir -p /test/directory

3. Create a file system (note that mkfs formats the device and erases any existing data on it)
mkfs -t ext4 /dev/xvdf

4. Mount the block device
mount /dev/xvdf /test/directory

5. Unmount the block device (for testing)
umount /dev/xvdf