Working with DocuSign: Authorization and Sending a Document for Signature

DocuSign is a well-known platform for sending documents to be signed electronically, either via email or from within your own app. In this post I will show how DocuSign authorizes a user and how we can programmatically send a document to users for signing.

To use DocuSign we first need a free developer account. Go to the DocuSign developer site, select the Developer Account button in the top left, then Create Account, and log in. After logging in you need to create an app for your integration. Go to My Apps and Keys, where you will see your integrations, and click the ADD APP & INTEGRATION KEY button. Give the app a name and select Authorization Code Grant for User Application. Under secret keys, generate one and save it somewhere, because you won't be able to see it again. Under Redirect URLs, add one; it will be needed when we build the authorization URL. An integration key will be generated for your app, and we will need it later.

We will use the Authorization Code Grant to authorize users. The authorization process has two steps: first we obtain an authorization code, and second we exchange that authorization code for an access token via a REST API call.

To get the authorization code we need to generate a URL and redirect our users to it. There the user will log in to their DocuSign account and grant access to our app. After the user authorizes our app, they will be redirected to our whitelisted redirect URL with an authorization code. Below is the URL format.

https://account-d.docusign.com/oauth/auth?response_type=code&scope=signature&client_id=7c2b8d7e-xxxx-xxxx-xxxx-cda8a50dd73f&state=a39fh23hnf23&redirect_uri=http://example.com/callback/

client_id is the integration key and redirect_uri is the URL where DocuSign will redirect users after authorization. The redirected URL will contain a query parameter called code, and we need this code to generate the access token.
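For illustration, the redirect after authorization will look roughly like the following (the code value here is just a placeholder):

http://example.com/callback/?code=AUTHORIZATION_CODE_FROM_DOCUSIGN&state=a39fh23hnf23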

To get the access token we need to call a REST API with an authorization header. The Authorization header contains the integration key and secret key separated by a colon, base64-encoded and prefixed with the word Basic. We can generate this base64 string easily from the browser console. In the console, type the following and press Enter:

 btoa('INTEGRATION_KEY:SECRET_KEY')

It will print the base64 encoding of the string; set this string in the Authorization header. The token endpoint expects a POST request, so I am using curl to call it.

curl --header "Authorization: Basic BASE64_OF_YOUR_INTEGRATION_AND_SECRET_KEY" \
  --data "grant_type=authorization_code&code=AUTHORIZATION_CODE_FROM_DOCUSIGN" \
  --request POST https://account-d.docusign.com/oauth/token

The response of this request will contain access_token, refresh_token and expires_in. We need this access token for every DocuSign API call.
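If you would rather do the token exchange from C# instead of curl, a minimal sketch could look like the following. The class and method names are made up for illustration; only the endpoint and the header format come from the steps above.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Hypothetical helper class; not part of the DocuSign SDK.
public static class DocuSignAuth
{
    public static async Task<string> ExchangeCodeForTokenAsync(
        string integrationKey, string secretKey, string authorizationCode)
    {
        using var client = new HttpClient();

        // Basic auth header: base64 of "INTEGRATION_KEY:SECRET_KEY", as described above
        var basic = Convert.ToBase64String(
            Encoding.UTF8.GetBytes($"{integrationKey}:{secretKey}"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", basic);

        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "authorization_code",
            ["code"] = authorizationCode
        });

        var response = await client.PostAsync(
            "https://account-d.docusign.com/oauth/token", body);
        response.EnsureSuccessStatusCode();

        // The JSON body contains access_token, refresh_token and expires_in
        return await response.Content.ReadAsStringAsync();
    }
}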

I am using C# as an example of how to send a document for signing. We need to add a C# library to make DocuSign API calls: search for and install DocuSign's eSignature API package (DocuSign.eSign) via the NuGet Package Manager. First we need to make an envelope; here is the code example to build one.

private EnvelopeDefinition MakeEnvelope(string signerEmail, string signerName, string ccEmail, string ccName)
{
    string doc2DocxBytes = Convert.ToBase64String(System.IO.File.ReadAllBytes(Config.docDocx));
    string doc3PdfBytes = Convert.ToBase64String(System.IO.File.ReadAllBytes(Config.docPdf)); 
    // Create the envelope definition
    EnvelopeDefinition env = new EnvelopeDefinition();
    env.EmailSubject = "Please sign this document set";
    Document doc1 = new Document();
    string b64 = Convert.ToBase64String(document1(signerEmail, signerName, ccEmail, ccName));
    doc1.DocumentBase64 = b64;
    doc1.Name = "Order acknowledgement"; // can be different from actual file name
    doc1.FileExtension = "html"; // Source data format. Signed docs are always pdf.
    doc1.DocumentId = "1"; // a label used to reference the doc
    Document doc2 = new Document {
        DocumentBase64 = doc2DocxBytes,
        Name = "Battle Plan", // can be different from actual file name
        FileExtension = "docx",
        DocumentId = "2"
    };

    Document doc3 = new Document
    {
        DocumentBase64 = doc3PdfBytes,
        Name = "Lorem Ipsum", // can be different from actual file name
        FileExtension = "pdf",
        DocumentId = "3"
    };


    // The order in the docs array determines the order in the envelope
    env.Documents =  new List<Document> { doc1, doc2, doc3};

    // create a signer recipient to sign the document, identified by name and email
    // We're setting the parameters via the object creation
    Signer signer1 = new Signer {
        Email = signerEmail,
        Name = signerName,
        RecipientId = "1",
        RoutingOrder = "1"
    };

    // routingOrder (lower means earlier) determines the order of deliveries
    // to the recipients. Parallel routing order is supported by using the
    // same integer as the order for two or more recipients.

    // create a cc recipient to receive a copy of the documents, identified by name and email
    // We're setting the parameters via setters
    CarbonCopy cc1 = new CarbonCopy
    {
        Email = ccEmail,
        Name = ccName,
        RecipientId = "2",
        RoutingOrder = "2"
    };

    // Create signHere fields (also known as tabs) on the documents,
    // We're using anchor (autoPlace) positioning
    //
    // The DocuSign platform searches throughout your envelope's
    // documents for matching anchor strings. So the
    // signHere2 tab will be used in both document 2 and 3 since they
    // use the same anchor string for their "signer 1" tabs.
    SignHere signHere1 = new SignHere
    {
        AnchorString = "**signature_1**",
        AnchorUnits = "pixels",
        AnchorYOffset = "10",
        AnchorXOffset = "20"
    };

    SignHere signHere2 = new SignHere
    {
        AnchorString = "/sn1/",
        AnchorUnits = "pixels",
        AnchorYOffset = "10",
        AnchorXOffset = "20"
    };
    

    // Tabs are set per recipient / signer
    Tabs signer1Tabs = new Tabs {
        SignHereTabs = new List<SignHere> { signHere1, signHere2}
    };
    
    signer1.Tabs = signer1Tabs;

    // Add the recipients to the envelope object
    Recipients recipients = new Recipients
    {
        Signers = new List<Signer> { signer1 },
        CarbonCopies = new List<CarbonCopy> { cc1 }
    };
    
    env.Recipients = recipients;

    // Request that the envelope be sent by setting |status| to "sent".
    // To request that the envelope be created as a draft, set to "created"
    env.Status = RequestItemsService.Status;

    return env;
}

// The HTML of the first document in the envelope used by our example is defined here
private byte[] document1(string signerEmail, string signerName, string ccEmail, string ccName)
{
    return Encoding.UTF8.GetBytes(
    " <!DOCTYPE html>\n" +
        "    <html>\n" +
        "        <head>\n" +
        "          <meta charset=\"UTF-8\">\n" +
        "        </head>\n" +
        "        <body style=\"font-family:sans-serif;margin-left:2em;\">\n" +
        "        <h1 style=\"font-family: 'Trebuchet MS', Helvetica, sans-serif;\n" +
        "            color: darkblue;margin-bottom: 0;\">World Wide Corp</h1>\n" +
        "        <h2 style=\"font-family: 'Trebuchet MS', Helvetica, sans-serif;\n" +
        "          margin-top: 0px;margin-bottom: 3.5em;font-size: 1em;\n" +
        "          color: darkblue;\">Order Processing Division</h2>\n" +
        "        <h4>Ordered by " + signerName + "</h4>\n" +
        "        <p style=\"margin-top:0em; margin-bottom:0em;\">Email: " + signerEmail + "</p>\n" +
        "        <p style=\"margin-top:0em; margin-bottom:0em;\">Copy to: " + ccName + ", " + ccEmail + "</p>\n" +
        "        <p style=\"margin-top:3em;\">\n" +
        "  Candy bonbon pastry jujubes lollipop wafer biscuit biscuit. Topping brownie sesame snaps sweet roll pie. Croissant danish biscuit soufflé caramels jujubes jelly. Dragée danish caramels lemon drops dragée. Gummi bears cupcake biscuit tiramisu sugar plum pastry. Dragée gummies applicake pudding liquorice. Donut jujubes oat cake jelly-o. Dessert bear claw chocolate cake gummies lollipop sugar plum ice cream gummies cheesecake.\n" +
        "        </p>\n" +
        "        <!-- Note the anchor tag for the signature field is in white. -->\n" +
        "        <h3 style=\"margin-top:3em;\">Agreed: <span style=\"color:white;\">**signature_1**/</span></h3>\n" +
        "        </body>\n" +
        "    </html>"
        );
}

And this is the code for sending this envelope to DocuSign.

public EnvelopeSummary SendEnvelope(string signerEmail, string signerName, string ccEmail, string ccName)
{
    var accessToken = ACCESS_TOKEN;
    var basePath = BASE_PATH + "/restapi";
    var accountId = ACCOUNT_ID;

    EnvelopeDefinition env = MakeEnvelope(signerEmail, signerName, ccEmail, ccName);
    var apiClient = new ApiClient(basePath);
    apiClient.Configuration.DefaultHeader.Add("Authorization", "Bearer " + accessToken);
    var envelopesApi = new EnvelopesApi(apiClient);
    EnvelopeSummary results = envelopesApi.CreateEnvelope(accountId, env);
    RequestItemsService.EnvelopeId = results.EnvelopeId;
    return results;
}

ACCESS_TOKEN is the token we got from the authorization step. BASE_PATH will be https://demo.docusign.net for development purposes; you will find it on the admin dashboard (My Apps and Keys) page. ACCOUNT_ID is the API Account Id, which is also on the dashboard. If our SendEnvelope method completes successfully, the signers will be notified via email that they have a document to sign. Go through the official documentation if you want to dive deeper into the other features.
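A quick usage sketch, assuming ACCESS_TOKEN, BASE_PATH and ACCOUNT_ID are already set; the addresses and names below are placeholders:

// Hypothetical call for illustration only
EnvelopeSummary results = SendEnvelope(
    "signer@example.com", "Signer Name",
    "cc@example.com", "CC Name");
Console.WriteLine($"Envelope {results.EnvelopeId} created with status {results.Status}");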

 

How to use Event Aggregator in Aurelia

In a frontend application we sometimes need to send a message or notify other components to update the UI based on some data. In Aurelia we can achieve this with the Event Aggregator, and I will show you how it works. We also need dependency injection here: a way to receive a singleton instance of a class (service methods or utils) in the constructor and use it inside that class.

To demonstrate the Event Aggregator simply, I am creating two components (custom elements): Message and Form. Message just displays a message property, and Form has a text input and a button. We want to pass whatever the user enters in the input to Message when the button is clicked. Here is what the code for these two components looks like.

Message.js

import { inject } from "aurelia-framework";
import { EventAggregator } from "aurelia-event-aggregator";

@inject(EventAggregator)
export class Message {
  message = "Default Text";

  constructor(eventAggregator) {
    this.eventAggregator = eventAggregator;
    this.eventAggregator.subscribe("UpdateMessage", (payload) => {
      this.message = payload;
    });
  }
}

Message.html

<template>
  <div>
    ${message}
  </div>
</template>

To use dependency injection we need the inject decorator from aurelia-framework. In the inject call we list the classes we want instances of, and those instances are passed to the constructor. Here we are using EventAggregator from aurelia-event-aggregator. The eventAggregator object has a subscribe method that takes two parameters: the first is the channel name as a string (I used “UpdateMessage”) and the second is a callback function. Whenever a message is published on the same channel, this function will be called.

Form.js

import { inject } from "aurelia-framework";
import { EventAggregator } from "aurelia-event-aggregator";

@inject(EventAggregator)
export class Form {
  message = "";

  constructor(eventAggregator) {
    this.eventAggregator = eventAggregator;
  }
  send = () => {
    this.eventAggregator.publish("UpdateMessage", this.message);
  };
}

Form.html

<template>
  <div>
    <input value.bind="message"/>
    <button click.delegate="send()">Send</button>
  </div>
</template>

Here we publish the message property of the Form class to the UpdateMessage channel. The publish method takes the channel name as its first parameter and the payload as its second; the payload is passed to the callback given to subscribe.

Sometimes we need to refresh a component's data based on user actions, and this is how components can communicate with each other.

Benefits of Event Sourcing in Financial Applications


What is Event Sourcing

Event sourcing is the practice of keeping a record of every change during the lifecycle of an application. Traditional approaches usually preserve only the current state of the application, so the information about how we reached that state over time is lost. We lose information every time we update our data models.

Event sourcing ensures that all changes are captured as event objects and stored sequentially. Events are stored in append-only mode and are never modified.
Event sourcing is not a new concept; there are real-world examples of the practice:
An accountant keeps track of all transactions. They never erase or modify previous records; they always create a new entry, even to correct a previous mistake.
A doctor keeps a record of a patient's medical history by adding a new entry at the end of the journal.


Why Event Sourcing is beneficial in financial applications

Never lose information

There is no risk of losing important business information, as every change in application state is stored as an event. This is the biggest advantage of an event-sourced application. It is vitally important to keep track of every bit of financial data, and storing information is not that costly nowadays. We never know in advance which information will be useful in the future.

Auditing is simple

Auditing is a vital part of any financial system. Since all immutable events are appended sequentially to the event store, the store itself is a readily available audit log of all past changes to the system and its financial data.

Event replay and Temporal queries

Travelling to a past state of the application is possible: starting from the initial application state and replaying all events up to a particular time, we can determine the application state at any point in time. In case of failure, the application state can also be reconstructed by replaying the events.
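To make the replay idea concrete, here is a minimal C# sketch, not tied to any particular framework; the event names are invented for illustration. It rebuilds an account balance at a given point in time by folding over the stored events:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical account events for illustration only.
public abstract record AccountEvent(DateTime OccurredAt);
public record Deposited(DateTime OccurredAt, decimal Amount) : AccountEvent(OccurredAt);
public record Withdrawn(DateTime OccurredAt, decimal Amount) : AccountEvent(OccurredAt);

public static class AccountProjection
{
    // Replay the append-only event stream up to a point in time
    // to determine the balance at that moment (a temporal query).
    public static decimal BalanceAt(IEnumerable<AccountEvent> events, DateTime asOf) =>
        events.Where(e => e.OccurredAt <= asOf)
              .Aggregate(0m, (balance, e) => e switch
              {
                  Deposited d => balance + d.Amount,
                  Withdrawn w => balance - w.Amount,
                  _ => balance
              });
}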

Debuggability

By replaying, rewinding or stopping the actual events in a test environment, it is possible to reproduce what went wrong in a given situation. This form of debuggability is especially useful before deploying to production.

Scalability

Event sourcing is often coupled with CQRS, an approach that separates read (query) functionality from write (command) functionality. With reads and writes separated, each side can be scaled independently and optimized separately.

Security

As events are never modified and are added to the event store in append-only mode, data tampering is hard and traceable. We can also take advantage of WORM (Write Once Read Many) storage for events to ensure better security.

Domain Driven Design

Unlike CRUD operations, events relate more closely to the business domain, so the application architecture can be designed in a more domain-driven way. It is less likely that technical details will pull the design away from the business domain.

Analytics

Keeping all business information allows us to see how things correlate over time. This data lets us draw conclusions from past events, project estimates, analyse customer behaviour, and so on.

YML or YAML for DevOps

As software engineers, we are always learning new tech stacks as we progress in our careers. Everyone who works at any sort of software firm has come across the term DevOps.

As the name suggests, it consists of two terms: Dev for Development and Ops for Operations, so DevOps means Development Operations. Although it is a separate career choice/position at some software firms, as developers it still helps a lot to know some DevOps terms and tools, so we can help others when needed or speed up our own development and testing.

In this post, we will not cover the complicated tech stacks our DevOps colleagues may use, such as Docker, Kubernetes, Ansible, Prometheus, etc. Instead, we will learn a common kind of language called a data-serialization language. These languages are human-readable, so anyone can understand what is going on in a particular DevOps tool the first time they see its configuration in a project.

Prerequisites: None

What is YAML?

YAML (a recursive acronym for "YAML Ain’t Markup Language") is a human-readable data-serialization language.

It is quite popular among the DevOps tools we will talk about in later posts. One of the main reasons YAML's popularity has increased so much over the past few years is that it is very human-readable and intuitive, which makes it a great choice for writing configuration files for all the recent DevOps tools mentioned above.

As we learn YAML, we come across its competitors in this field: XML and JSON. Below is a comparison of the three so you can understand why YAML is the most popular of them.

[Side-by-side comparison of the same data written in YAML, XML and JSON syntax]

Note: YAML is a superset of JSON, so any valid JSON is also valid YAML.
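For example, the flow-style (JSON-like) line below is also valid YAML; the keys are made up purely for illustration:

# JSON written as-is is accepted by a YAML parser
service: {"name": "consul", "ports": [8500, 9500], "enabled": true}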

Because YAML uses line separation and indentation with spaces instead of the angle-bracket tags of XML or the curly braces of JSON, it is a lot easier for others to understand.

In this post, we will learn just enough YAML syntax so that whenever you see a configuration file from now on, you can easily understand what it means.

Now let's talk about the basic use cases of YAML in DevOps tools. YAML (YML) is used in docker-compose, Kubernetes, Prometheus, Ansible, and other configuration files.

Basic Syntax

YAML uses simple key-value pairs formatted with proper spacing and indentation, as shown in the comparison above.

Writing a comment

# this is a comment in yaml(yml)

Placing a # sign in front of a line makes it a comment in YAML.

Strings

"valid string 1"
'valid string 2'
valid string 3

In YAML, strings can be written with double quotes or single quotes or without any quotes.

Note: Escape sequences such as \n or \t are only interpreted inside double quotes; in single quotes or unquoted strings they are kept as literal characters. So a string that should contain them must be double-quoted.

"This string must be inside \n double quotes"

Key-Value pair

image: consul:latest
container_name: consul_dev
ports: 8500

Everything in YAML, apart from comments, is written as key-value pairs like the ones above.

Objects

consul:
    image: consul:latest
    container_name: consul_dev
    ports: 8500

To write objects in YAML, we just indent keys under another key, as in the example above: image, container_name and ports become an object whose key is consul.

Note: Indentation must be consistent; without proper indentation YAML cannot understand what is meant and the file will fail to parse. It is best to use a YAML validator when writing YAML.

Online tools such as Validate YAML can be used to check your files.

Lists

ports: 
    - 8500
    - 9500
# OR
ports: [8500, 9500]

If we want to create a list of ports, we just add a dash (-) in front of each port value, which makes it a list item. The second, inline way of writing the list can be more readable than the first; use whichever suits you better.

Booleans

app:
    auth: true # false
# OR
app:
    auth: yes # no
# OR
app:
    auth: on # off

All three of the ways shown above can be used to express boolean values.

This is the basic syntax anyone needs to understand any configuration file written in YAML.

Let's look at a practical example: a simple docker-compose file written in YAML.

version: '3.7'

services:
  db:
    image: mysql:8.0.21
    container_name: mysql
    ports:
      - 3309:3306
    volumes:
      - ./db:/var/lib/mysql:rw
    environment:
      - MYSQL_USER=d2d_user
      - MYSQL_PASSWORD=12345678
      - MYSQL_DATABASE=d2d_db
      - MYSQL_ROOT_PASSWORD=12345678
    tty: true

When we start to write a docker-compose file we begin with the version: '3.7' key-value pair, which tells Docker Compose which version of the Compose file format we are using so that it can parse the file correctly.

Then we create a services key under which we define the services that docker-compose up should start for the application.

Within the services key we create a db service with a YAML object as its value. We use a MySQL image and set mysql as the Docker container name; the : in the image value separates the image name from its tag.

Other entries, like volumes to persist data in a local folder, environment to define environment variables, and tty: true (the equivalent of docker run -t), are specific to Docker and tell it how to start the db container.


In conclusion, understanding YAML helps a lot when starting a career as a DevOps engineer, and it helps software engineers understand the configuration files used by the DevOps team.