Monday, February 8, 2021

Create Node.js and csvtojson Lambda Layer for AWS Lambda Custom Runtime

This post covers how to use Node.js and csvtojson in the AWS Lambda Custom Runtime, which is useful when CSV-to-JSON conversion is required as part of a Lambda function in a serverless architecture. 

Have a quick look at Using JQ in AWS Lambda Custom Runtime via AWS Lambda Layer for a quick reference on how the AWS Lambda Custom Runtime is bootstrapped and how AWS Lambda Layers work.

Creating Node.js Lambda Layer

Setting up Node.js for the AWS Lambda Custom Runtime is actually quite easy; all that is needed is aws-lambda-custom-node-runtime (v1.0.2). Install NPM if not already done, then install the package and run it:

npm install -g aws-lambda-custom-node-runtime
aws-lambda-custom-node-runtime 11.3.0

The above command will generate a directory named node-v11.3.0-linux-x64 (at the path where the command is run) with all the necessary files required to run Node.js.

Now zip the entire node-v11.3.0-linux-x64 directory using the following command

zip -r node-v11.3.0-linux-x64.zip node-v11.3.0-linux-x64

The zip archive, i.e. node-v11.3.0-linux-x64.zip, is our Node.js Lambda layer.

Creating CSVTOJSON Lambda Layer

To create a CSVTOJSON bundle, run the following command

npm install csvtojson --save

This will create a node_modules directory at the path at which the command is executed. Now, similar to what was done for building the Node.js layer, run the following command to build the CSVTOJSON Lambda Layer

zip -r node_modules.zip node_modules

The node_modules.zip archive is now our CSVTOJSON Lambda layer. 

Separating the Lambda layers into a Node.js layer and a node-dependencies layer helps because it allows the layers to be reused across multiple Lambda functions: the Node.js layer can be shared, while each Lambda function keeps its own set of node dependencies. That said, it all boils down to how Lambdas are used in your serverless architecture, and best practices may differ. 

Using Node.js and CSVTOJSON Lambda Layers

Now the built Lambda layers can be uploaded via the AWS Lambda Layers section, and a test run can be done in the following way to verify that they are working fine:

(Visit Using JQ in AWS Lambda Custom Runtime via AWS Lambda Layer for more details on how the function handler is wired up.)

function handler () {
    cd /opt
    ./node-v11.3.0-linux-x64/bin/node --version
    ./node-v11.3.0-linux-x64/bin/node node_modules/csvtojson/bin/csvtojson version
}

When the above handler is triggered, the Node.js and csvtojson versions can be expected in the success output.

Sunday, January 31, 2021

Using JQ in AWS Lambda Custom Runtime via AWS Lambda Layer

For situations wherein there's a need to use JQ in the AWS Lambda custom runtime, an AWS Lambda layer can be created and used by your AWS Lambda function. This blog post explains how that can be achieved, and the instructions are not limited to JQ: they can similarly be used to build an AWS Lambda layer for any Linux binary.

For quick reference

AWS Lambda: a serverless compute service that allows running code without provisioning or managing servers. It's a powerful building block for implementing a serverless architecture.

JQ: it is like sed for JSON data; a fast and flexible CLI JSON processor written in portable C.

Steps to follow

The entire process of building an AWS Lambda layer can be broken down as follows:

  • Get the required distribution files
  • Build a zip archive of the required files 
  • Create a layer on AWS Lambda
  • Use created AWS Lambda layer for your Lambda function

Get required distribution files

Since AWS Lambda custom runtime is based on Amazon Linux AMI, let's first get the JQ files specific to Amazon Linux AMI. For this, create a new Amazon Linux EC2 instance (or use an Amazon Linux Docker image). 

Install JQ and locate the required files (installed on Amazon Linux EC2 instance) once the installation is completed. 

Installing JQ

sudo yum install jq

At the time of writing this post, for JQ version 1.5, the required files can be found at the following locations: 
# executable

# dependencies

Build a zip archive with required files

Build a zip archive containing all these required files so that JQ is functional when used inside the AWS Lambda function. The jq executable should be at the root of the zip file and its dependencies inside the lib directory of the zip file. The reason is that when AWS unpacks Lambda layers, the custom runtime resolves executables from /opt and dependencies from /opt/lib. To simplify: /opt is where the executables should go, and /opt/lib is where the required dependencies should go.
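As a sketch of that layout (the file names below are placeholders standing in for the real jq executable and its shared libraries):

```shell
# Illustrative layout only: placeholder files stand in for the real
# jq executable and its shared-library dependencies.
mkdir -p jq-layer/lib
touch jq-layer/jq              # executable at the zip root -> unpacked to /opt/jq
touch jq-layer/lib/libjq.so.1  # dependencies under lib/   -> unpacked to /opt/lib

find jq-layer -type f
# then, from inside jq-layer: zip -r ../jq-layer.zip .
```

Zipping from inside the directory (rather than zipping the directory itself) keeps jq at the root of the archive, which is what the /opt layout requires.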

Create a layer on AWS Lambda

Now let's create a JQ layer on AWS Lambda so that it can be used by the AWS Lambda function. It can be created directly via the AWS Console, or with the following AWS CLI command:

aws lambda publish-layer-version --layer-name jq --zip-file fileb:///PATH_TO_FILE/

Use created AWS Lambda layer for your Lambda function

Once the jq layer is ready from the above step, create an AWS Lambda function with custom runtime using the AWS Console. After the function is created, AWS automatically generates bootstrap and handler script files whose content (at the time of writing this post) is as follows:


#!/bin/sh
set -euo pipefail

echo "##  Environment variables:"
env

# Handler format: <script_name>.<bash_function_name>
# The script file <script_name>.sh  must be located at the root of your
# function's deployment package, alongside this bootstrap executable.
source $(dirname "$0")/"$(echo $_HANDLER | cut -d. -f1).sh"

while true
do
    HEADERS="$(mktemp)"
    # Request the next event from the Lambda runtime
    EVENT_DATA=$(curl -v -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    INVOCATION_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

    # Execute the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")

    # Send the response to Lambda runtime
    curl -v -sS -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
done

function handler () {
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello from Lambda!\"}"
    echo $RESPONSE
}

Update the function so that it prints the jq version to test whether the jq layer is working properly.

function handler () {
    cd /opt
    ./jq --version
}

Now run a quick test on the Lambda function; it should print the jq version provided by the AWS Lambda layer.


Follow a similar process to create an AWS Lambda layer for any distribution.

Sunday, January 17, 2021

Tagless Final in Scala for Beginners

This post is an effort to explain what tagless-final is in Scala. If you have bumped into the strange-looking F[_] notation and wondered what it is and where and how it is used, this post will also try to answer those questions. There's nothing new here for a seasoned functional programmer, but it should give a head start to anyone beginning the journey. Let's walk through it step by step.

What's a type constructor?

Simply put, something that constructs types can be considered a type constructor. For example, List can be considered a type constructor because it constructs types based on the type argument passed to it. 
In type-theory terms, a concrete type such as List[Int] has kind * while List itself is a type constructor of kind * -> *.

val list: List.type = List
val intList: List[Int] = list[Int]()

In the above code snippet, Int is passed to the variable list (which refers to List) as a type argument, i.e. Int is the type argument applied to the type constructor List.

What are higher-kinded types?

A higher-kinded type can be considered as something that abstracts over the type constructor. As a continuation of the above example of List, abstraction over the type constructor can be done with the help of using the F[_] notation, for example:

trait Test[F[_]]
val test1 = new Test[List] {}
val test2 = new Test[Option] {}
val test3 = new Test[Vector] {}

In the above code snippet, trait Test abstracts over the type constructor with the help of the F[_] notation, and hence it is possible to pass the List type constructor to it. For that matter, other type constructors such as Option and Vector can be passed as well. Test here is a higher-kinded type.

What's an Effect in functional programming?

Do not confuse Effect with a side-effect! A side-effect can be one kind of Effect, but it is not the only kind. An Effect can be considered as something that happens within a wrapper type. For example, consider the wrapper to be Either: anything that happens inside an Either can be considered an Effect, as shown below:

def work[A, B](value: A)(fn: A => B): Either[Throwable, B] =
  try Right(fn(value))
  catch {
    case ex: Exception => Left(ex)
  }

work("1")(_.toInt) // Right(1)
work("T")(_.toInt) // Left(NumberFormatException)

The method work is considered an Effectful method: instead of returning just a value of type B, it returns Either[Throwable, B], so the caller knows what to expect from the method's return type. In general, a method can be considered Effectful if it returns F[B] instead of B; F in the above example is Either.
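The same idea holds with Option as the wrapper F; a minimal sketch (parseInt is an illustrative name, not a standard library method):

```scala
// Option as the Effect wrapper: returning F[B] = Option[B] instead of B
// makes the possibility of failure explicit in the method's type.
def parseInt(value: String): Option[Int] =
  try Some(value.toInt)
  catch { case _: NumberFormatException => None }

println(parseInt("1")) // Some(1)
println(parseInt("T")) // None
```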

What's a type class?

A type class can be considered as a class that defines a common behavior that can be associated with some types (data types). A common behavior could be anything, for example, an Adder type class that defines behaviors for the addition of data types like Int, Double, Float, etc.

trait Adder[A] {
  def add(value1: A, value2: A): A
}

Additionally, a type class should also be lawful in the sense that it follows certain algebraic laws. In the case of the Adder type class above, it should follow the law of associativity, which is tantamount to it being a Semigroup. 
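For instance, associativity for an Adder[Int] instance can be spot-checked directly (a minimal sketch reusing the Adder trait above):

```scala
trait Adder[A] {
  def add(value1: A, value2: A): A
}

val intAdder: Adder[Int] = (a, b) => a + b

// Associativity law: add(add(a, b), c) == add(a, add(b, c))
val left  = intAdder.add(intAdder.add(1, 2), 3)
val right = intAdder.add(1, intAdder.add(2, 3))
assert(left == right) // both evaluate to 6
```

A property-based testing library would check this over many generated inputs, but the shape of the law is the same.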

What's ad-hoc polymorphism with respect to a type class?

The behavior of adding two numeric values specific to a data type can be achieved by method overloading in a typical OOP setting, for example:

case class Age(v: Int)

def add(value1: Int, value2: Int): Int = value1 + value2
def add(value1: Double, value2: Double): Double = value1 + value2
def add(value1: Age, value2: Age): Age = Age(value1.v + value2.v)

But this would result in some code duplication; a more generic way to write it would be to use the method definition seen in the Adder type class example above:

def add(value1: A, value2: A): A

But how would add know how to add two values? The values could be Int, or they could very well be of the Age data type. A way to do it is by using type classes: their instances are tied to specific data types, and the data-type-specific instance is injected into the method using implicits in Scala, giving ad-hoc polymorphism capabilities.

case class Age(v: Int)

trait Adder[A] {
  def add(value1: A, value2: A): A
}

object Adder {
  def apply[A: Adder]: Adder[A]              = implicitly[Adder[A]]
  def add[A: Adder](value1: A, value2: A): A = Adder[A].add(value1, value2)
}

object AdderInstances {
  implicit val intAdder: Adder[Int] = (value1, value2) => value1 + value2
  implicit val ageAdder: Adder[Age] = (value1, value2) => Age(Adder[Int].add(value1.v, value2.v))
}

import AdderInstances._
Adder.add(25, 25) // 50
Adder.add(Age(25), Age(25)) // Age(50)

What's tagless-final?

Let's look at a dumbed-down representation of a Stock Market with capabilities to buy and sell financial instruments.

import cats.Monad
import cats.effect.IO
import cats.implicits._
import scala.language.higherKinds

sealed trait OrderType
case object Buy  extends OrderType
case object Sell extends OrderType

sealed trait FinancialInstrument {
  val id: String
  val quantity: Int
  val price: Float
  val orderType: OrderType
}

case class FutureDerivativeInstrument(id: String, quantity: Int, price: Float, orderType: OrderType) extends FinancialInstrument
case class OptionDerivativeInstrument(id: String, quantity: Int, price: Float, orderType: OrderType) extends FinancialInstrument
case class EquityCashInstrument(id: String, quantity: Int, price: Float, orderType: OrderType)       extends FinancialInstrument

trait StockMarket[F[_]] {
  def buyInstrument(instrument: FinancialInstrument): F[FinancialInstrument]
  def sellInstrument(instrument: FinancialInstrument): F[FinancialInstrument]
}

object StockMarketInstances {
  implicit val instrument: StockMarket[IO] = new StockMarket[IO] {
    override def buyInstrument(instrument: FinancialInstrument): IO[FinancialInstrument]  = IO(instrument)
    override def sellInstrument(instrument: FinancialInstrument): IO[FinancialInstrument] = IO(instrument)
  }
}

object StockMarket {
  def apply[F[_]: StockMarket]: StockMarket[F] = implicitly[StockMarket[F]]

  def executeBuyOrder[F[_]: StockMarket](instrument: FinancialInstrument): F[FinancialInstrument]  = StockMarket[F].buyInstrument(instrument)
  def executeSellOrder[F[_]: StockMarket](instrument: FinancialInstrument): F[FinancialInstrument] = StockMarket[F].sellInstrument(instrument)
}

def placeOrder[F[_]: StockMarket: Monad](orders: Vector[FinancialInstrument]): F[Vector[FinancialInstrument]] =
  orders.pure[F].flatMap { instruments =>
    instruments.traverse { instrument =>
      instrument.orderType match {
        case Buy  => StockMarket[F].buyInstrument(instrument)
        case Sell => StockMarket[F].sellInstrument(instrument)
      }
    }
  }

import StockMarketInstances._
placeOrder[IO](Vector(OptionDerivativeInstrument("SA-121", 50, 100, Buy), FutureDerivativeInstrument("DS-991", 50, 100, Sell))).unsafeRunSync()

Here, StockMarket is a tagless-final type class that describes the capabilities of a generic type F[_]. A core concept of tagless-final is declaring dependencies, for which Scala's implicits are used.

Tagless-final pattern enables:

  • Capability to abstract over the higher-kinded type, providing the means to use any available Effect type in place of F, such as Cats Effect IO or Monix Task. That is, it enables Effect-type indirection.
  • Ability to reason about the implementation of a polymorphic function (in the example above, the placeOrder function) by looking at the implicits it requires. From placeOrder's signature, it can be seen that the function needs an instance of StockMarket and Monad to implement its functionality. That is, it enables Effect parametric reasoning.
  • The inclination to use the principle of least power by declaring only those type classes as implicit parameters that are needed for the placeOrder function implementation.
As an end note, tagless-final is only useful when such a disciplined approach is followed; the benefits do not come automatically just by using the pattern.
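To make the Effect-type-indirection point concrete, the same kind of algebra can be interpreted into a plain synchronous type alias for tests, with no IO involved. The names below (Market, the Id alias) are simplified, made-up stand-ins for the StockMarket example, not part of it:

```scala
// A trivial "effect": Id[A] is just A, evaluated synchronously.
type Id[A] = A

trait Market[F[_]] {
  def buy(id: String): F[String]
}

// Test interpreter: the same polymorphic program runs with no asynchrony.
// A production interpreter could instead wrap results in an IO-like type.
val testMarket: Market[Id] = new Market[Id] {
  def buy(id: String): Id[String] = s"bought $id"
}

def program[F[_]](market: Market[F]): F[String] = market.buy("SA-121")

println(program(testMarket)) // bought SA-121
```

Swapping the interpreter changes the Effect without touching program, which is exactly the indirection the bullet points describe.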

Saturday, January 16, 2021

Background processing with Scala Cats Effect

Running any tasks with Scala Future in the background can be done using the following:

Snippet 1

  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration._
  import scala.concurrent.{Await, Future}

  val future1 = Future { Thread.sleep(4000); println("Work 1 Completed") }
  val future2 = Future { Thread.sleep(1000); println("Work 2 Completed") }

  val future3 =
    for {
      _ <- future1
      _ <- future2
    } yield ()

  Await.result(future3, Duration.Inf)

And if one has been working with Scala Future, it'd be obvious that as soon as the lines defining future1 and future2 are executed, the body inside each Future starts executing immediately, i.e. the body of a Future is eagerly evaluated by submitting it for execution on a different JVM thread using the implicit ExecutionContext available in scope via the import. Then, with the help of the for-comprehension, which is nothing but a combination of flatMap and map, the Futures are composed together to yield a Unit. Finally, awaiting on future3 returns the result once both future1 and future2 have completed successfully (here Await is just used for demonstration purposes).

But if the above code snippet is changed a bit to the following:

Snippet 2

  val future =
    for {
      _ <- Future { Thread.sleep(4000); println("Work 1 Completed") }
      _ <- Future { Thread.sleep(1000); println("Work 2 Completed") }
    } yield ()

  Await.result(future, Duration.Inf)

both Futures will be executed sequentially, because the second Future is not created/executed until the first Future has completed, i.e. the above for-comprehension is the same as the following:

Snippet 3

  Future {
    Thread.sleep(4000); println("Work 1 Completed")
  }.flatMap { _ =>
    Future {
      Thread.sleep(1000); println("Work 2 Completed")
    }
  }

Similar results can be achieved using Cats Effect IO with the added benefit of referential transparency (note that the Scala Future is not referentially transparent).
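The referential-transparency difference can be illustrated with a toy IO-like wrapper (MyIO below is a made-up name for illustration, not the Cats Effect API): a value of the wrapper merely describes an effect, and nothing runs until it is explicitly executed, unlike a Future, which starts eagerly.

```scala
// MyIO[A] is only a description of an effect; nothing executes
// until .run() is invoked, unlike a Scala Future which starts eagerly.
final case class MyIO[A](run: () => A)

var counter = 0
val step: MyIO[Int] = MyIO { () => counter += 1; counter }

// Merely defining `step` executed nothing:
assert(counter == 0)

// Each run re-executes the described effect:
step.run()
step.run()
assert(counter == 2)
```

Because `step` can be substituted by its definition anywhere without changing behavior, programs built from such values are referentially transparent; Cats Effect IO works on this principle.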

Cats Effect version similar to Snippet 2 will look something like the following:

Snippet 4

import cats.effect.{ExitCode, IO, IOApp}
import scala.concurrent.duration._

object Test extends IOApp {

  def work[A](work: A, time: FiniteDuration): IO[Unit] =
    IO.sleep(time) *> IO(work)
      .flatMap(completedWork => IO(println(s"Done work: $completedWork")))

  val program: IO[Unit] =
    for {
      _ <- work("work 1", 4.second)
      _ <- work("work 2", 1.second)
    } yield ()

  override def run(args: List[String]): IO[ExitCode] =
    program *> IO.never
}

In the above case, the second IO (inside for-comprehension) is not evaluated until the first IO is completed, i.e the IOs are run sequentially and the order in which the print line statements will be executed is "Done work: work 1" and then "Done work: work 2".

The Cats Effect version similar to Snippet 1 will look something like the following:

Snippet 5

val program1 = work("work 1", 4.second).start
val program2 = work("work 2", 1.second).start

val program: IO[Unit] =
  for {
    _ <- program1
    _ <- program2
    _ <- IO(println("for-comprehension done!"))
  } yield ()

wherein the order in which the print line statements will be executed is "for-comprehension done!" then "Done work: work 2" and then "Done work: work 1". Here, the start method uses ContextShift instead of Scala's ExecutionContext directly.

start returns a Fiber which can be canceled, and the following code snippet will cancel the Fiber returned by program1 as soon as the program2 is evaluated. 

Snippet 6

  val program1 = work("work 1", 4.second).start
  val program2 = work("work 2", 1.second).start

  val program: IO[Unit] =
    for {
      fiber1 <- program1
      _      <- program2
      _      <- fiber1.cancel
      _      <- IO(println("for-comprehension done!"))
    } yield ()

and in this case, the probable output on the console will be: "for-comprehension done!" then "Done work: work 2", while "Done work: work 1" won't be printed. "Probable" because by the time program1 gets a chance to complete, the fiber1.cancel line will have been executed, cancelling the execution of program1.

Sunday, March 17, 2019

Can Crypto Replace National Currencies?

Bitcoin has gained incredible momentum and adoption recently as the most popular and largest cryptocurrency. While many people think of Bitcoin as a speculative investment with crazy returns, the real value driver behind Bitcoin is its status as a digital currency and store of value.

There are many people that believe Bitcoin as a digital currency represents a viable alternative currency to national fiat currencies. Futurists claim that cryptocurrency is going to disrupt 25% of national currencies by 2030 and threaten the central banking system.

But how realistic are these claims? Do cryptocurrencies have the potential to disrupt our current currency and banking system?


Cryptocurrencies are built on a decentralized technology called blockchain. The blockchain is a permanent secure ledger of records, transactions, and other data. In the traditional use case, the blockchain’s credibility is constantly maintained and verified by the network of users on the blockchain. This peer-to-peer network allows the system to function and thrive outside of a central point of control.

So when this technology is applied to create a token, it can represent a unit of value and thus a currency.

National currencies have gone through trends over the decades. Back in the 1900s, currencies like the U.S. Dollar were backed by actual gold and had real value behind them. Today, national currencies are primarily fiat paper currencies backed by the trustworthiness of the government.

Banking systems are set up to maintain national currencies by providing a safe place to store money, a reliable place to borrow money, and a liquidity provider for the currency markets. However, banking systems have become so centralized that many people are desperate for an alternative.

The cryptocurrency market has gained massive adoption due to the decentralized nature and market equality it brings. There is still a lot of support for fiat currencies, but cryptocurrencies provide a secure digital way for people to store and trade money.


Proponents of cryptocurrencies note that they won’t be like traditional currencies. Where the US Dollar is tied to the economic activities of the United States, Bitcoin is independent of any government or jurisdiction (and thus, there is little cryptocurrency regulation). The cryptocurrency price is not tied to traditional economic activity. They also have the ability to limit the token or money supply to prevent governments from tampering with the currency by creating inflation.

Cryptocurrency influencers and those trading cryptocurrencies also think the values and prices will continue to be volatile. Especially after the recent large downturn in January and February, long-time cryptocurrency traders pointed to the consistent trend of massive dips along the way over the past few years. Traditional investments like stocks and bonds go through cycles; cryptocurrencies are doing the same and will likely see more volatility.

Cryptocurrency also has the potential to change commerce as more and more retail is done online. Doing e-commerce transactions using digital currency makes sense for most people using Bitcoin and other cryptocurrencies to buy and sell online.


One key issue with cryptocurrencies at the moment is that government agencies, especially in the U.S., can’t come to a consensus and agree on what cryptocurrency is and how to define it. Each agency is defining cryptocurrencies to fit under their jurisdiction so they have the authority to regulate it.

The U.S. Securities and Exchange Commission (SEC), which regulates the nation’s securities and stocks, classifies cryptocurrencies as a security. They view each coin as a security representing ownership interest in the blockchain company. It’s true that some tokens are ownership shares, but most cryptocurrencies are not intended to replace stocks or shares of a company.

The Internal Revenue Service (IRS), which is the federal tax authority in the US, defines cryptocurrency as property rather than an actual currency. This means that in the eyes of the IRS, any time a cryptocurrency is traded is a taxable event. Traders are obligated to pay taxes on any gains from any cryptocurrency trades. This raised eyebrows among people who were expecting to use cryptocurrencies to pay for their cup of coffee. No one pays taxes on their currency gains in the US Dollar when they make a purchase, so why would it apply to a digital currency?

The IRS recently responded by saying that cryptocurrency transactions under $600 are not taxable. However, investors and traders are still up in arms over having to track each transaction for tax purposes. Many are purporting that cryptocurrencies are then similar to real estate property when traded and should be exempt for like-kind purchases under a 1031 exchange.

The US Commodity Futures Trading Commission (CFTC) sees it a little differently and classifies cryptocurrency as a commodity, placing it under their authority. In response, two large platforms allowing futures trading have created and marketed Bitcoin futures. Cboe and CME both released Bitcoin futures trading in December 2017 when Bitcoin was spiking. People started trading Bitcoin like a commodity and betting on future prices of the cryptocurrency, pushing the coin’s price up towards an all-time high of $20,000 at its peak.



One huge advantage of using digital currencies like Bitcoin as a store of value and medium of exchange is that they cannot be manipulated the same way fiat currency can. Central banks are notorious for printing paper money and diluting and inflating the nation’s money supply. This has happened in the US, where the dollar has lost 96% of its value since 1913 when the Federal Reserve took over the banking system.

The US isn’t the only nation that falls into this predicament. Venezuela is famous for inflating away their nation’s currency and hurting its own citizens who believe in the monetary system. Just Google Venezuelan currency and you get headlines like the following, “Death Spiral: 4000% Inflation in Venezuela.” These are the primary problems when you have a central point of authority, and they are the key issues that cryptocurrency like Bitcoin is trying to solve.

To continue with the Venezuela use case, the president is issuing a national cryptocurrency called the Petro that will be maintained on the blockchain and backed by the country’s chief export, oil. The government expects to leverage the cryptocurrency as a way to get around US sanctions and access international financing. This approach is interesting because it represents a move back toward physically-backed currency. Where paper money used to be commonly backed by gold, cryptocurrency is proving that you can back the currency with certain items of value.


While there are some great potential benefits, there are also areas of concern when considering cryptocurrencies as actual currency. To start, as cryptocurrencies take market share, so to speak, traditional currencies will naturally lose value, and people holding them would essentially have worthless paper in their hands.

There is also an infrastructure gap for widespread use of cryptocurrency. Existing financial institutions are scrambling to get their arms around the idea of cryptocurrencies and build their own networks and exchanges in order to keep up with the pace. E-commerce retailers were starting to line up to accept crypto payments like Bitcoin. But with the recent volatility, many have realized the possible dangers in accepting crypto payments at the moment.


Regardless of how we individually feel about the aspect of Bitcoin and other cryptocurrencies replacing the national currency and banking system, the trend is in place and we have to learn how to take advantage. Governments and financial institutions quickly realized the potential impact to their established business practices and activities, and have been trying to get ahead of the game ever since.

Whether the cryptocurrency wallet takes over in the next 10 years is unknown. But it's important, especially for investors and traders, to keep abreast of the changing, dynamic landscape. Governments, businesses, and individuals all have their own opinions about the cryptocurrency market, and it should continue to be a rollercoaster ride from here on out… so strap in!

This article was first published on mintdice.

Monday, May 14, 2018

Why Blockchain Technology is the Answer to the World’s Banking System Woes

The banking system is inefficient in its current state as it requires the use of multiple third-party verifications and transferring services in order to complete a transaction.  Blockchain can alleviate the need for these organizations and provide the world with a viable solution to the inherent problems facing the banking community.  Blockchain is transforming the way in which we conduct business globally by offering us the ability to perform transactions securely in a peer-to-peer manner without the need of any middleman.

The current state of the global banking system is shabby at best.  We’ve already witnessed multiple government bailouts to date and it is exactly this type of pompous behavior that spawned the birth of cryptocurrencies nine years prior.  Satoshi, the unknown creator of Bitcoin, was even kind enough to let us know that this was his motivation by leaving a reference to the bailout headlines from the London Times embedded in BTC’s genesis block – The Times 03/Jan/2009 Chancellor on brink of the second bailout for banks.

A Brief History of How Banks Came About
To understand the advantages that blockchain has over the current banking system, you need to understand the history of banking.  Banks evolved out of the need to securely store gold.  The first “banker” was nothing more than a gold depository for wealthy people.

Individuals would drop off their gold and the banker would then issue them a receipt that could be used to purchase items around town.   Eventually, the banker realized that the people never all came for their gold at the same time and so he decided to start lending out other people’s gold at a slight mark up or interest rate.

The people eventually became suspicious of the banker’s quickly expanding wealth and one night the banker was cornered by a furious crowd who accused him of spending their gold.   They forced the banker to take them to his vault and show them that he had everyone’s gold.  He gladly obliged as he not only had all of their gold but he now had the interest he made in profit as well.

After realizing what the banker had done, the wealthy individuals demanded to be in on the action.  This is why banks now have to pay you a small interest on your holdings as well.  Not much has changed since then in regard to the purpose of banks; it's still a third-party organization looking to make a profit off of holding your funds.

A New Day
Today, none of these steps are necessary thanks to blockchain technology.  Technology has advanced and it is time for our banking systems to do the same.  The world cannot continue to bail out fraudulent bankers or run on a fiat-based financial system.  Let’s take a moment to see how blockchain technology could eliminate much of these banking problems in the coming years.

The phrase peer-to-peer has no place in the current central banking system, which relies on a combination of verification and monitoring platforms to ensure your funds are sent.  Every time you swipe your debit card, around thirty separate third-party organizations must coordinate to complete your transaction.

Your bank must first check your balance, verify that it is you actually spending the funds, notify the account where the funds are going to ensure it is correct, interact with the merchant processing firms involved, interact with VISA or MC, and the list goes on.  This is why you can spend your debit card funds quickly but any refunds can take a week or longer to be processed.

Cryptocurrencies eliminate this need for third-party verification.  The transparent nature of blockchain technology makes it perfectly suited for financial services.  An individual can check any wallet on the blockchain to ensure funds are present and once they are sent, they go directly to the other individual involved in the transaction.

Send Any Amount Instantly
Have you ever tried to send a large amount of money internationally?  It is a nightmare that can take days to complete depending on the amount you decide to send.  During this layover, your funds are inaccessible to both you and the other party involved in the transaction.  It is common to wait over a week for a large transaction to complete.

Blockchain technology eliminates the need to wait and regardless of the amount of crypto you are sending, the transfer time is the same.  This is a huge advantage that crypto has over the current banking system.

Imagine that you are a large international company that must send millions of dollars in capital internationally.  Blockchain technology would allow you to accomplish this task with unmatched simplicity.  There would be no additional delays and the funds would be more secure than any other form of money transference currently available.  You also avoid any losses from converting your funds between countries.

Record Keeping
Records stored on a blockchain are immutable and easily traceable. Nobody can hack this data and it is easy to search for a single transaction. Integrating a blockchain-based system would allow banks to quickly handle common issues such as identity theft or disputed transactions. Some banks are starting to explore this option and it can be expected that many more will follow suit in the coming months as the technology continues to see adoption.

Bye-Bye Credit Bureaus
The transparent nature of the blockchain eliminates the need for credit bureaus.  A blockchain-based system would allow for anyone to quickly see their financial history and thereby prove their ability to repay a debt.

Currently, credit companies are only obligated to provide you with a single credit report yearly for free.  These are billion dollar organizations that thrive off of a technology that is no longer needed.  Significant time and capital could be realized by eliminating these wasteful institutions from the banking system.

Increased Security
Blockchain technology is light years ahead of the current banking system in terms of security.  The redundant nature of the protocol ensures that your funds are safe and allows you the ability to store your funds personally.

A Blockchain Future
The advantages of blockchain technology are unmistakable, and for the first time in history a global decentralized economy is a reality. This is exactly why the central banking system has been so vocal in its opposition to cryptocurrencies such as Bitcoin. You can expect this aggressive rhetoric to continue as the banking system goes through its identity crisis, but in the end, it's just math, and efficiency will always win out.

This article by David Hamilton was originally published at

Friday, May 4, 2018

In-depth Look at Verge Cryptocurrency & Platform

Verge is a privacy-focused cryptocurrency that aims to keep transactions anonymous and untraceable while allowing for high throughput and fast confirmation times.

The project is entirely open source and community led. There is no company or foundation behind Verge. In fact, the core team signed Verge’s black paper with only their usernames. The community is committed to privacy, anonymity, and decentralization.

The coin originally began as DogeCoinDark in 2014. In February 2016, wanting to distance themselves from both the Doge meme and the “dark” connotation, DogeCoinDark rebranded to Verge. Over the past two years, the project has set a trajectory toward legitimacy for mass market adoption.

Verge is entering the crowded race to be the top privacy coin. In this article, we'll take a look at what privacy measures Verge implements. We'll also do a deep dive on the technology behind Verge and whether this is a project with potential to rise to the top.

Making User Connections Anonymous
Verge attacks the issue of privacy from the vantage of how a user connects to the network.

The internet we all recognize is fairly straightforward. To send information between computers, you use an Internet Service Provider (ISP) or other middleman to facilitate the message. When you send a message, your ISP can see your unique identifier on the internet – your IP address. Your ISP also needs to know the IP address of the destination computer, so it can route the message.

This is okay for normal internet traffic, but it’s not anonymous. Over time, an ISP learns a lot about the IP addresses you’re contacting. They also know where you’re sending messages from. In many cases, signing up with your ISP associates your identity with your IP address, causing multiple anonymity and privacy issues. Verge uses two approaches – Tor and I2P – to address connection anonymization.

Enter Tor
Tor is a well-known anonymization scheme for IP addresses. The name is an acronym that stands for The Onion Router, because the Tor network wraps your message in multiple layers of encryption. Instead of routing your internet connection through one ISP, Tor bounces the connection between many relay computers on the Tor peer-to-peer network.

This changes the message's IP address many times, making it difficult to trace back to the original sender. With Tor, no one node knows the whole route a message will take. The message quickly becomes anonymous and untraceable. A directory service identifies the path for connections.

Tor is a peer-to-peer network. As you use Tor, you’re also acting as a relay node for other messages getting bounced around the Tor network.
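To make the layering idea concrete, here is a toy sketch. Plain string wrapping stands in for real encryption (nothing like Tor's actual cryptography, and the relay names are made up); the point is only that each relay peels exactly one layer and never sees the full route:

```scala
object OnionRoutingToy {
  // Toy "encryption": in real Tor each layer is actual cryptography
  private def encrypt(relay: String, inner: String): String = s"[$relay|$inner]"

  private def decrypt(relay: String, wrapped: String): String = {
    val prefix = s"[$relay|"
    require(wrapped.startsWith(prefix) && wrapped.endsWith("]"))
    wrapped.substring(prefix.length, wrapped.length - 1)
  }

  // The sender wraps the payload once per relay, innermost layer last
  def wrap(relays: List[String], payload: String): String =
    relays.foldRight(payload)((relay, inner) => encrypt(relay, inner))

  // Each relay peels exactly one layer as the message travels the circuit
  def route(relays: List[String], onion: String): String =
    relays.foldLeft(onion)((inner, relay) => decrypt(relay, inner))
}
```

The first relay can read only its own layer, so it learns the next hop but not the payload or the destination.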

Verge implements Tor as standard for its transactions to anonymize user connections to the blockchain, making interactions more difficult to link to an IP address.

The next generation solution to connection anonymization is I2P. While Tor provides directory-based circuit routing, I2P allows for dynamic routing of information packets. There’s no directory on I2P, so the responsive routing of the network can avoid congestion and interruptions.

I2P also divides the routing into two separate tunnels, one outgoing and another incoming. That means that the messages you send to another computer or website follow a different path from the messages you receive in response. Anyone listening in would only see half of the message history, like listening in on only half of a phone call where you don’t know who is speaking or who they’re speaking to.

Tor was intended as a portal for anonymously accessing the ordinary internet. I2P provides a much more robust experience, leading to the creation of a private network within the internet. I2P is a true darknet, with applications written specifically for I2P.

Verge leverages I2P technology for its network as well. You have the option to route your transactions through Tor or I2P but IP anonymization is standard on Verge. Since the entire Verge blockchain is anonymous, the entire community becomes much more difficult to track.

Wraith Protocol
The Wraith Protocol allows users to choose between public and private blockchain transactions. Public transactions would provide transparency and speed. Private blockchain transactions wouldn’t be publicly reviewable at all.

They plan to accomplish these private transactions using stealth addresses routed through Tor. Stealth addresses send funds to one-time use addresses. Only the recipient can identify and redeem funds sent to a stealth address. Stealth addresses are an important component of how Monero, a leading privacy coin, operates. However, Monero also provides more complex cryptography and other features that guarantee its privacy more effectively.
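As a vastly simplified illustration of the one-time-address idea (real stealth addresses use elliptic-curve key exchange, not a bare hash; the names and scheme here are hypothetical):

```scala
import java.security.MessageDigest

object StealthToy {
  def sha256Hex(s: String): String =
    MessageDigest.getInstance("SHA-256")
      .digest(s.getBytes("UTF-8"))

  // A fresh address per payment: derived from something the recipient
  // published plus a nonce the sender picks for this transaction only.
  // Observers see a new, unlinkable address each time.
  def oneTimeAddress(recipientViewKey: String, txNonce: String): String =
    sha256Hex(recipientViewKey + ":" + txNonce)
}
```

Because a new nonce is chosen per payment, two payments to the same recipient land on two unrelated-looking addresses.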

Verge's cryptography is based on elliptic curves. Elliptic curve cryptography is well-established and very cool. It's a key part of Bitcoin, and Verge uses a slight variant of Bitcoin's scheme known as Elliptic-Curve Diffie-Hellman (ECDH). It allows parties to share and agree on transaction keys and signatures without an observer learning anything.
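As a rough sketch of the key-agreement idea, here is ECDH using the JDK's built-in EC provider. Note this uses the NIST P-256 curve for convenience, not the secp256k1 curve Bitcoin-derived coins actually use:

```scala
import java.security.{KeyPair, KeyPairGenerator, PublicKey}
import javax.crypto.KeyAgreement

object EcdhSketch {
  // Each party generates an elliptic-curve key pair
  private val gen = KeyPairGenerator.getInstance("EC")
  gen.initialize(256) // NIST P-256 here; Bitcoin/Verge use secp256k1

  val alice: KeyPair = gen.generateKeyPair()
  val bob: KeyPair   = gen.generateKeyPair()

  // Combine our private key with the other party's public key
  def sharedSecret(own: KeyPair, otherPublic: PublicKey): Array[Byte] = {
    val ka = KeyAgreement.getInstance("ECDH")
    ka.init(own.getPrivate)
    ka.doPhase(otherPublic, true)
    ka.generateSecret()
  }

  // Both sides compute the same secret; an observer who sees only the
  // two public keys cannot
  def secretsMatch: Boolean =
    java.util.Arrays.equals(
      sharedSecret(alice, bob.getPublic),
      sharedSecret(bob, alice.getPublic))
}
```

The shared secret never crosses the wire, which is exactly the property that lets two parties agree on transaction keys in public.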

Verge utilizes the Electrum wallet, originally designed for Bitcoin. Electrum supports Tor and I2P integration. It also allows for secure offline storage of tokens. When you need to send XVG, you can sign the transaction with your private key offline. Once signed, you can broadcast the transaction from an online computer that doesn’t have access to your private keys.

Electrum also supports passphrase key recovery and multisignature, meaning you could require multiple confirmations to send a transaction, increasing security. Finally, the Electrum wallet connects to decentralized servers that index the blockchain. There’s no need to operate a full node or download the entire blockchain transaction history.

Android Wallets
Verge will also support two Android wallets: one for Tor and another for I2P. These mobile wallets include security measures like PIN codes and biometric locking. They also support QR codes to pull balances from paper wallets.

Verge has implemented options for messaging transactions, as well. You can send XVG via Telegram, Discord, Twitter, or IRC. It’s simple to send tokens using only a person’s username. A bot will process the transaction and place the funds in a holding address. It’ll then send a message to the recipient with instructions on how to claim the funds. Verge is not the only cryptocurrency to implement messaging payments, but it represents a big leap forward in user experience from an ease of use standpoint.

Messaging payments on Slack and Steem are coming to XVG later this year.

Verge is among a small handful of projects that are testing out multi-algorithm consensus. This means miners can mine XVG in five different ways. All of the algorithms are proof of work based. However, some favor ASIC hardware while others are GPU compatible or lighter.

The five algorithms are Scrypt, X17, Lyra2rev2, myr-groestl and blake2s. Digibyte pioneered this multialgorithm approach. The benefit is greater decentralization as multiple algorithms mean many different types of mining rigs can participate in XVG mining.

Verge has a target 30-second block time, split between the five algorithms. In total, there will be 16.5 billion XVG, with 9 billion mined in the first year (2014) and 1 billion every year thereafter.
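Taking the stated schedule at face value, the cap is reached about seven and a half years after the first year, and a 30-second overall target split five ways means each algorithm finds a block roughly every 150 seconds. A quick sketch of that arithmetic:

```scala
object XvgEmission {
  // Stated supply schedule: 16.5 billion XVG total, 9 billion mined in
  // the first year, 1 billion every year thereafter
  val totalSupply: Double  = 16.5e9
  val firstYear: Double    = 9.0e9
  val perYearAfter: Double = 1.0e9

  // Years needed after the first year to reach the cap
  val yearsToCap: Double = (totalSupply - firstYear) / perYearAfter

  // A 30-second target split across 5 algorithms: each algorithm finds
  // a block roughly every 150 seconds
  val secondsPerAlgorithmBlock: Int = 30 * 5
}
```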

XVG Coin
XVG, originally DogeCoinDark, launched without an ICO or premine. The developers bought Verge coins just like anyone else.

Verge is currently in the top 30 cryptocurrencies worldwide. It is listed on many major exchanges including Binance and Bittrex.

Future Plans
Verge has several future plans that could make the project more compelling as a complete privacy solution.

Atomic Swaps
Starting in 2018, Verge hopes to implement support for atomic swaps with most major cryptocurrencies. Atomic swaps use hash-locks and time-locks to freeze tokens on one blockchain in exchange for the release of tokens on another chain. Verge hopes interoperability with other chains will make it a go-to privacy provider.
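The hash-lock/time-lock mechanics can be sketched as a toy model (this is only the general HTLC idea, not any chain's actual script format):

```scala
import java.security.MessageDigest

object AtomicSwapToy {
  def sha256(bytes: Array[Byte]): Seq[Byte] =
    MessageDigest.getInstance("SHA-256").digest(bytes).toSeq

  // Funds are frozen under the hash of a secret plus a refund deadline
  case class Htlc(hashLock: Seq[Byte], timeoutAt: Long)

  // The counterparty redeems by revealing the preimage before the deadline...
  def canRedeem(h: Htlc, preimage: Array[Byte], now: Long): Boolean =
    now < h.timeoutAt && sha256(preimage) == h.hashLock

  // ...otherwise the original owner reclaims the funds after it passes
  def canRefund(h: Htlc, now: Long): Boolean = now >= h.timeoutAt
}
```

Two such locks on two chains, sharing one secret, are what make the swap atomic: redeeming on one chain reveals the preimage needed to redeem on the other.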

Smart contracts
The Rootstock project plans to add a sidechain to Verge that processes smart contracts. It will be Turing complete and comparable to Ethereum. It hasn’t yet launched, so those claims are unverified as yet.

RSK tokens on Rootstock can be pegged to Verge tokens so they’ll have the same value. You can deposit XVG on Verge and spend corresponding RSK on the Rootstock side chain.

Rootstock claims they’ve made a breakthrough in smart contract scalability. Their goal is 2,000 tx/s using off-chain settlement solutions similar to Lightning Network.

Verge is interesting insofar as it’s a decentralized, open source project. However, its lack of formal structure could also be a drawback. Most serious crypto projects these days have a foundation behind them leading development and setting a roadmap.

The project also needs outside review. While many of the technologies they’re implementing have been tested elsewhere, Verge could use a dose of legitimacy from an independent source.

They also don't have the same kind of resources as their competitors in the privacy space. Monero, Zcash, and Dash have hundreds of collaborators on their GitHubs. Verge only has 12.

That said, hiding IP addresses is an important frontier for blockchain anonymity. If they can solve anonymous smart contracts, that would be a unique breakthrough for the space.

This article by Bennett Garner was originally published at

Stream a file to AWS S3 using Akka Streams (via Alpakka) in Play Framework

In this blog post we'll see how a file can be streamed from a client (e.g. a browser) to Amazon S3 (AWS S3) using Alpakka's AWS S3 connector. Alpakka provides various Akka Stream connectors, integration patterns and data transformations for integration use cases.
The example in this blog post uses Play Framework to provide a user interface to submit a file from a web page directly to AWS S3 without creating any temporary files (on the storage space) during the process. The file will be streamed to AWS S3 using S3’s multipart upload API.
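S3's multipart upload API works on discrete parts rather than one monolithic body, and every part except the last must be at least 5 MB. The Alpakka sink handles this chunking internally; the following toy sketch is only for intuition about what "multipart" means:

```scala
object MultipartSketch {
  // S3 requires every part except the last to be at least 5 MB
  val MinPartSize: Int = 5 * 1024 * 1024

  // Split a byte buffer into fixed-size parts; a streaming sink does the
  // same thing incrementally, without ever holding the whole file
  def toParts(data: Array[Byte], partSize: Int = MinPartSize): List[Array[Byte]] =
    data.grouped(partSize).toList
}
```

Because parts are bounded in size, the uploader never needs the whole file in memory or on disk, which is what makes a temp-file-free upload possible.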

(To understand this blog post basic knowledge of Play Framework and Akka Streams is required. Also, check out What can Reactive Streams offer EE4J by James Roper and also check its Servlet IO section to fully understand the extent to which the example mentioned in this blog post can be helpful)
Let’s begin by looking at the artifacts used for achieving the task at hand
  1. Scala 2.11.11
  2. Play Framework 2.6.10
  3. Alpakka S3 0.18
Now moving on to the fun part, let’s see what the code base will look like. We’ll first create a class for interacting with AWS S3 using the Alpakka S3 connector, let’s name the class as AwsS3Client.
@Singleton
class AwsS3Client @Inject()(system: ActorSystem, materializer: Materializer) {

  private val awsCredentials = new BasicAWSCredentials("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
  private val awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials)
  private val regionProvider =
    new AwsRegionProvider {
      def getRegion: String = "us-west-2"
    }

  private val settings = new S3Settings(MemoryBufferType, None, awsCredentialsProvider, regionProvider, false, None, ListBucketVersion2)
  private val s3Client = new S3Client(settings)(system, materializer)

  def s3Sink(bucketName: String, bucketKey: String): Sink[ByteString, Future[MultipartUploadResult]] =
    s3Client.multipartUpload(bucketName, bucketKey)
}
From the first line it can be seen that the class is marked as a Singleton; this is because we do not want multiple instances of this class to be created. From the next line it can be seen that an ActorSystem and a Materializer are injected, which are required for configuring Alpakka's AWS S3 client. The next few lines configure an instance of Alpakka's AWS S3 client which will be used for interfacing with your AWS S3 bucket. Also, in the last section of the class there's a behavior which returns an Akka Streams Sink, of type Sink[ByteString, Future[MultipartUploadResult]]; this Sink does the job of sending the file stream to the AWS S3 bucket using the AWS multipart upload API.
In order to make this class workable replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your AWS S3 access key and secret key respectively. And replace us-west-2 with your respective AWS region.
Next, let's look at how the s3Sink behavior of this class can be used to connect our Play Framework controller with the AWS S3 multipart upload API. But before doing that, and slightly digressing from the example [bear with me, it's going to build up the example further :)], if you followed my previous blog post — Streaming data from PostgreSQL using Akka Streams and Slick in Play Framework [containing the Customer Management example] — you might have seen how a CustomerController was used to build functionality wherein a Play Framework route was available to stream customer data directly from PostgreSQL into a downloadable CSV file (without buffering the data as a file on storage space). This blog post builds on top of the Customer Management example highlighted in the previous blog post. So, we're going to use the same CustomerController but modify it a bit by adding a new Play Framework Action for accepting the file from the web page.
For simplicity, let's name the controller Action upload; this Action is used for accepting a file from a web page via one of the reverse routes. Let's first look at the controller code base and then we'll discuss the reverse route.
@Singleton
class CustomerController @Inject()(cc: ControllerComponents, awsS3Client: AwsS3Client)
                                  (implicit ec: ExecutionContext) extends AbstractController(cc) {

  def upload: Action[MultipartFormData[MultipartUploadResult]] =
    Action(parse.multipartFormData(handleFilePartAwsUploadResult)) { request =>
      val maybeUploadResult =
        request.body.file("customers").map {
          case FilePart(key, filename, contentType, multipartUploadResult) =>
            multipartUploadResult
        }

      maybeUploadResult.fold(
        InternalServerError("Something went wrong!")
      )(uploadResult =>
        Ok(s"File ${uploadResult.key} upload to bucket ${uploadResult.bucket}")
      )
    }

  private def handleFilePartAwsUploadResult: Multipart.FilePartHandler[MultipartUploadResult] = {
    case FileInfo(partName, filename, contentType) =>
      val accumulator = Accumulator(awsS3Client.s3Sink("test-ocr", filename))

      accumulator map { multipartUploadResult =>
        FilePart(partName, filename, contentType, multipartUploadResult)
      }
  }

Dissecting the controller code base, it can be seen that the controller is a singleton and the AwsS3Client class that was created earlier is injected in the controller along with the Play ControllerComponents and ExecutionContext.
Let's look at the private behavior of the CustomerController first, i.e. handleFilePartAwsUploadResult. It can be seen that the return type of this behavior is FilePartHandler[MultipartUploadResult], which is nothing but a Scala type defined inside Play's Multipart object:
type FilePartHandler[A] = FileInfo => Accumulator[ByteString, FilePart[A]]
It should be noted here that the example uses multipart/form-data encoding for the file upload, so the default multipartFormData parser is used by providing a FilePartHandler of type FilePartHandler[MultipartUploadResult]. The type parameter is MultipartUploadResult because the Alpakka AWS S3 Sink is of type Sink[ByteString, Future[MultipartUploadResult]], to which the file will finally be sent.
Looking at this private behavior and understanding what it does, it accepts a case class of type FileInfo, creates an Accumulator from s3Sink and then finally maps the result of the Accumulator to a result of type FilePart.
NOTE: Accumulator is essentially a lightweight wrapper around Akka Sink that gets materialized to a Future. It provides convenient methods for working directly with Future as well as transforming the inputs.
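To see that shape in isolation, here is a toy analogue (hypothetical names, not Play's actual Accumulator): the "sink" is just a function that consumes all input and materializes a Future, and map transforms that materialized value:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Toy analogue of Play's Accumulator: wraps a "sink" whose materialized
// value is a Future, and lets you map over that value
final case class ToyAccumulator[E, A](run: List[E] => Future[A]) {
  def map[B](f: A => B)(implicit ec: ExecutionContext): ToyAccumulator[E, B] =
    ToyAccumulator(es => run(es).map(f))
}
```

This mirrors what handleFilePartAwsUploadResult does: build an accumulator from the S3 sink, then map its materialized MultipartUploadResult into a FilePart.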
Moving ahead and understanding the upload Action, it looks like any other normal Play Framework Action with the only difference that the request body is being parsed to MultipartFormData and then handled via our custom FilePartHandler, i.e handleFilePartAwsUploadResult, which was discussed earlier.
For connecting everything together, we need to enable an endpoint to facilitate this file upload and a view to be able to submit a file. Let’s add a new reverse route to the Play’s route file:
POST /upload controllers.CustomerController.upload
and a view to enable file upload from the user interface
@import helper._

@()(implicit request: RequestHeader)

@main("Customer Management Portal") {
  <h1><b>Upload Customers to AWS S3</b></h1>
  @helper.form(CSRF(routes.CustomerController.upload()), 'enctype -> "multipart/form-data") {
    <input type="file" name="customers">
    <input type="submit">
  }
}
Note the CSRF which is required for the form as it is enabled by default in Play Framework.
The entire code base is available at the following repository playakkastreams.
Hope this helps, shout out your queries in the comment section :)
This article was first published on the Knoldus blog.

Tuesday, May 1, 2018

Using Microsoft SQL Server with Scala Slick

This blog post shows simple CRUD operations on Microsoft SQL Server using Scala Slick version 3.2.3. You might be thinking, what's really great about that? Duh! But until Scala Slick 3.2.x was released, using commercial databases required an additional closed source package known as Slick Extensions, which provided Slick drivers for the following databases
  1. Oracle
  2. IBM DB2
  3. Microsoft SQL Server

Library dependency used for Slick Extensions
libraryDependencies += "com.typesafe.slick" %% "slick-extensions" % "3.0.0"
But with the newer version of Slick, i.e. 3.2.x, these drivers are now available within the Slick core package as an open source release, which can also be seen from the change log.
If you find yourself struggling with a setup to make Microsoft SQL Server work with Scala Slick in your project, maybe because of the lack of resources available on the web, then read up further :)



SBT project setup

For the example used in this blog post following dependencies and versions of respective artifacts are used
  1. Scala 2.11.11
  2. SBT 0.13.17
  3. Slick 3.2.3
  4. HikariCP 3.2.3
  5. Mssql JDBC 6.2.1.jre8
which inside our build.sbt file will look like the following set of instructions
name := "mssql-example"

version := "1.0"

scalaVersion := "2.11.11"

libraryDependencies ++= Seq(
 "com.typesafe.slick" %% "slick" % "3.2.3",
 "com.typesafe.slick" %% "slick-hikaricp" % "3.2.3",
 "" % "mssql-jdbc" % "6.2.1.jre8"
)
and the instructions of the file will be
sbt.version = 0.13.17
The settings required to configure Microsoft SQL Server should go inside the application.conf file, specifying the details of our database

sqlserver = {
 driver = "slick.jdbc.SQLServerProfile$"
 db {
  host = ${?SQLSERVER_HOST}
  port = ${?SQLSERVER_PORT}
  databaseName = ${?SQLSERVER_DB_NAME}

  url = "jdbc:sqlserver://"${}":"${sqlserver.db.port}";databaseName="${sqlserver.db.databaseName}
  user = ${?SQLSERVER_USERNAME}
  password = ${?SQLSERVER_PASSWORD}
 }
}
where it can be seen that SQLSERVER_HOST, SQLSERVER_PORT, SQLSERVER_DB_NAME, SQLSERVER_USERNAME and SQLSERVER_PASSWORD are to be provided as environment variables.
Now moving onto our FRM (Functional Relational Mapping) and repository setup, the following import will be used for MS SQL Server Slick driver’s API
import slick.jdbc.SQLServerProfile.api._
And thereafter the FRM will look same as the rest of the FRM’s delineated on the official Slick documentation. For the example on this blog let’s use the following table structure
CREATE TABLE user_profiles (
 id         INT IDENTITY (1, 1) PRIMARY KEY,
 first_name VARCHAR(100) NOT NULL,
 last_name  VARCHAR(100) NOT NULL
);
whose functional relational mapping will look like this:
case class UserProfile(id: Int, firstName: String, lastName: String)

class UserProfiles(tag: Tag) extends Table[UserProfile](tag, "user_profiles") {

 def id: Rep[Int] = column[Int]("id", O.PrimaryKey, O.AutoInc)

 def firstName: Rep[String] = column[String]("first_name")

 def lastName: Rep[String] = column[String]("last_name")

 def * : ProvenShape[UserProfile] = (id, firstName, lastName) <> (UserProfile.tupled, UserProfile.unapply) // scalastyle:ignore
}

Moving on to the CRUD operations, they are fairly straightforward as per the integrated query model provided by Slick, which can be seen from the following UserProfileRepository class
class UserProfileRepository {

 val userProfileQuery: TableQuery[UserProfiles] = TableQuery[UserProfiles]

 def insert(user: UserProfile): Future[Int] = += user)

 def get(id: Int): Future[Option[UserProfile]] = === id).result.headOption)

 def update(id: Int, firstName: String): Future[Int] = === id).map(_.firstName).update(firstName))

 def delete(id: Int): Future[Int] = === id).delete)
}
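If you want to unit-test code that depends on such a repository without a running SQL Server, a hypothetical in-memory stand-in with the same API shape (re-declaring UserProfile so the snippet is self-contained) could look like this:

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.Future

// Re-declared here so the snippet stands alone
case class UserProfile(id: Int, firstName: String, lastName: String)

// Hypothetical in-memory stand-in mirroring UserProfileRepository's API
class InMemoryUserProfileRepository {

  private val store = TrieMap.empty[Int, UserProfile]

  def insert(user: UserProfile): Future[Int] =
    Future.successful { store.put(, user); 1 }

  def get(id: Int): Future[Option[UserProfile]] =

  def update(id: Int, firstName: String): Future[Int] =
    Future.successful( { u =>
      store.put(id, u.copy(firstName = firstName)); 1
    }.getOrElse(0))

  def delete(id: Int): Future[Int] =
    Future.successful(store.remove(id).map(_ => 1).getOrElse(0))
}
```

Callers written against the Future-based API can then be exercised in tests with this class substituted for the Slick-backed one.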
Lastly, in order to get the database instance using the configurations provided in application.conf file, the following code snippet can be used
val dbConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig("sqlserver")
val db: JdbcProfile#Backend#Database = dbConfig.db
Working codebase of this example is available at the following repository: scala-slick-mssql.
Also, if you’re interested in knowing how data can be directly streamed from PostgreSQL to a client using Akka Stream and Scala Slick then you might find the following article useful: Streaming data from PostgreSQL using Akka Streams and Slick in Play Framework
This blog post has been inspired by an endeavor to make Microsoft SQL Server work with Slick and by an answer on Stack Overflow, which is the reference for the configurations.
Hope this helps :)
This article was first published on the Knoldus blog.