Hyperledger Indy, Not Your Grandmother’s Blockchain

Hyperledger Indy is a blockchain-based platform for managing identity and facilitating the management and exchange of verifiable personal information.  It enables Self-Sovereign Identity: individuals and organizations manage and distribute their electronic information as they see fit.  Instead of organizations like Facebook and Google collecting and managing information, individuals and organizations will be able to self-manage.  Instead of having to rely on paperwork or issuing organizations to verify information, individuals will be able to present verifiable, cryptographically signed credentials, independent of the issuing organizations.

So what makes Indy different from a “traditional” Blockchain platform?

A Blockchain is a permanent, immutable ledger containing information shared by a group of individuals or organizations.  Bitcoin, the original blockchain platform, stores transactions on the ledger that record transfers of Bitcoin from one wallet to another.  Because the Blockchain cannot be modified, users can be assured that once a transaction is written it cannot be altered, so they can rely on the Bitcoin record of transactions as a basis for conducting business.  The Blockchain is shared, so everyone has the same view of the information.

Bitcoin is an example of a Public Blockchain – anyone can install the required software and connect to the blockchain and participate in the update and management of the network.  The Blockchain implements “consensus” mechanisms to ensure that users follow certain rules when proposing updates to the blockchain, and the large number of participants ensures that the rules are followed.

Other Blockchains, geared towards business and enterprise users, are “Private”, and use traditional security mechanisms to ensure that only authorized participants can join the network.  Hyperledger Fabric, originally developed by IBM, is an example of such a blockchain – it includes a Certificate Authority to issue traditional digital certificates to participants, which grant them certain rights on the network.  It is being employed for business processes where many organizations need to collaborate and share information, for example supply chain management and shipping.  The Blockchain facilitates this information sharing, assuring the participants that common rules are being followed, and information – once written – cannot be altered.

Hyperledger Indy is another project within the Hyperledger Foundation.  Unlike other Blockchains, Indy does not store information on the Blockchain directly; rather, it stores information that participants can use to identify themselves, and that can be used to define and verify information that is published or exchanged between participants.  The information itself is held Off-chain, in the users’ wallets.

There are three main things that Hyperledger Indy stores on the Blockchain – Decentralized Identifiers, Schemas, and Credential Definitions.

A Decentralized Identifier (or DID) represents the identity of an individual or organization.  DIDs can be published on the blockchain, if the identity is to be made public, or exchanged privately between participants, if the DID is to be used to represent a private connection.  DIDs contain cryptographic material that allows participants to sign and encrypt data, and allows other participants to verify this data.  DIDs can also contain metadata describing the participant, how to connect with their services, etc.

A Schema defines a specific set of information that will be issued or published as a Verifiable Credential.  It contains the list of attributes that each published credential will contain.

A Credential Definition links the Schema to the issuer’s DID, essentially announcing the fact that the issuer intends to publish credentials with the specific Schema referenced.

When a document is issued according to a specific Credential Definition (and Schema), it is referred to as a Verifiable Credential.  It is Verifiable because it is signed by the issuer, and can be verified via the Credential Definition and the linked Schema and DID.  It is verified based on information that is publicly available on the blockchain – the issuer does not need to be involved.  Individual attributes within the Credential are called “Claims”.  When a credential is presented, individual attributes can be selected for presentation; the entire credential does not need to be revealed.

When information is presented in such a way it is referred to as a “Proof”, or a “Presentation”.  The Proof presents the claims and the cryptographic evidence that can be used to verify that the data was in fact issued by the identified issuer.  Proofs can reveal the claim values, or they can be “Zero Knowledge Proofs” (ZKPs), which reveal characteristics of the data without revealing the values themselves.

So that’s Indy in a nutshell!  Traditional Blockchains store information on shared, immutable ledgers.  Indy uses a shared, immutable ledger to facilitate the Off-chain sharing of information, which is held in an individual or organization’s private wallet, yet can be shared in a Verifiable manner.

Go Go Hyperledger Fabric!

If you read my last post, you saw how you can get applications up and running quickly with Hyperledger Composer.  Composer allows you to specify your business network (data objects, transactions and access control) using a simple markup language, and to quickly generate a RESTful interface and Angular web application.

What Composer does *not* give you is deep access and control over the internals of Hyperledger Fabric or direct access to the deployed chaincode functions.

If you do a deep dive into Composer you will find a fairly complex piece of chaincode that includes a fairly hefty piece of JavaScript embedded within a Go wrapper.  And – if your Fabric network is using a CouchDB back-end – you can point your browser at CouchDB’s admin console and explore some of the data objects that are maintained by Composer.

However, let’s look at an example of how to achieve similar functionality in plain old vanilla Golang.  The code for the following example can be downloaded from a couple of GitHub repositories – the Fabric network (https://github.com/ianco/fabric-tools), chaincode (https://github.com/ianco/chaincode_1) and a RESTful service wrapper (https://github.com/ianco/jsonapi).

Writing Chaincode in Golang

There are lots of examples of Go chaincode available, so let’s focus on a couple of aspects – reading and writing Json structures to the persistent store, and defining the corresponding Go objects in a shared library.

The object that we will read and write to our persistent store is defined in the “jsonapi” project above, in the “model” package:

package model

import (
    "encoding/json"
)

type StartupData struct {
    AiNationCount uint8 `json:"ai_nation_count"`
}

type ConfigData struct {
    DifficultyRating uint8       `json:"difficulty_rating"`
    Startup          StartupData `json:"startup"`
}

// Config2Json converts a ConfigData object to Json encoding
func Config2Json(c ConfigData) (string, error) { ... }

// Json2Config parses a Json string and returns a ConfigData object
func Json2Config(s string) (ConfigData, error) { ... }

We define a simple structure with Json mappings, and define a couple of helper methods to marshal and unmarshal between Go and Json representations of our object.
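For reference, here is a minimal sketch of how those helper methods might be implemented with the standard encoding/json package (the struct definitions repeat the ones above; the real model package may carry more fields and differ in detail):

```go
package main

import (
    "encoding/json"
    "fmt"
)

// These structs mirror the model package shown above.
type StartupData struct {
    AiNationCount uint8 `json:"ai_nation_count"`
}

type ConfigData struct {
    DifficultyRating uint8       `json:"difficulty_rating"`
    Startup          StartupData `json:"startup"`
}

// Config2Json converts a ConfigData object to its Json encoding.
func Config2Json(c ConfigData) (string, error) {
    b, err := json.Marshal(c)
    if err != nil {
        return "", err
    }
    return string(b), nil
}

// Json2Config parses a Json string back into a ConfigData object.
func Json2Config(s string) (ConfigData, error) {
    var c ConfigData
    err := json.Unmarshal([]byte(s), &c)
    return c, err
}

func main() {
    s, _ := Config2Json(ConfigData{DifficultyRating: 3})
    fmt.Println(s) // prints {"difficulty_rating":3,"startup":{"ai_nation_count":0}}
}
```

The Json field names come from the struct tags, so the Go names and the wire format can evolve independently.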

In our chaincode (in the “chaincode_1” project above) we reference the ConfigData structure, and use the helper methods to convert between Go and Json object representations.

    // Get the state from the ledger
    Avalbytes, err := stub.GetState("Configuration")

    Aval, err = model.Json2Config(string(Avalbytes))

    // Perform the execution
    Aval.DifficultyRating = ConfigVal.DifficultyRating

    // Write the state back to the ledger
    Avalstr, err = model.Config2Json(Aval)
    err = stub.PutState("Configuration", []byte(Avalstr))

For brevity, the error handling is not shown above, but can be seen in the code on GitHub.
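To give a feel for what that error handling looks like, here is a sketch of the same read-modify-write flow with every error checked.  The stateStore interface and mapStore mock are stand-ins I have introduced for illustration – the real chaincode works against the shim’s ChaincodeStubInterface:

```go
package main

import (
    "encoding/json"
    "errors"
    "fmt"
)

type ConfigData struct {
    DifficultyRating uint8 `json:"difficulty_rating"`
}

// stateStore stands in for the subset of the shim's
// ChaincodeStubInterface that this flow actually uses.
type stateStore interface {
    GetState(key string) ([]byte, error)
    PutState(key string, value []byte) error
}

// updateDifficulty reads the Config object, updates one field,
// and writes it back, checking every error along the way.
func updateDifficulty(stub stateStore, rating uint8) error {
    avalbytes, err := stub.GetState("Configuration")
    if err != nil {
        return fmt.Errorf("failed to get state: %v", err)
    }
    if avalbytes == nil {
        return errors.New("no Configuration found")
    }

    var aval ConfigData
    if err := json.Unmarshal(avalbytes, &aval); err != nil {
        return fmt.Errorf("failed to decode Json: %v", err)
    }

    // Perform the execution
    aval.DifficultyRating = rating

    avalbytes, err = json.Marshal(aval)
    if err != nil {
        return fmt.Errorf("failed to encode Json: %v", err)
    }
    if err := stub.PutState("Configuration", avalbytes); err != nil {
        return fmt.Errorf("failed to put state: %v", err)
    }
    return nil
}

// mapStore is an in-memory mock used to exercise the flow.
type mapStore map[string][]byte

func (m mapStore) GetState(key string) ([]byte, error) { return m[key], nil }
func (m mapStore) PutState(key string, v []byte) error { m[key] = v; return nil }

func main() {
    store := mapStore{"Configuration": []byte(`{"difficulty_rating":1}`)}
    if err := updateDifficulty(store, 5); err != nil {
        panic(err)
    }
    fmt.Println(string(store["Configuration"])) // prints {"difficulty_rating":5}
}
```

Keeping the state access behind a small interface also makes the logic testable without a running Fabric network.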

Note that the “model” package is “vendored” under the chaincode directory (i.e. under “src/cc”) – this is so that the chaincode has visibility to the shared model library when the chaincode is deployed to the Fabric.  Note also that *only* the model package is vendored – we should *only* include the specific objects required by the chaincode, and no extraneous dependencies.

To test this code, first start up the Fabric network:

// checkout the Fabric tools project
git clone https://github.com/ianco/fabric-tools
cd fabric-tools

// run the Fabric network

This should start up 4 docker containers – a CA, an orderer, a peer and a CouchDB server.

You can test the code from the chaincode test folder:

// "test" is in the "chaincode_1" project
cd test
go test

This will deploy and test the chaincode.  If you check now you will see an additional docker container running our chaincode.  If you open the CouchDB admin console in a browser, you can explore the “mychannel” database, which contains our Config object.  Note that each time you run the test it spins up another chaincode container, since we are deploying each instance with a unique id.

Writing a RESTful Wrapper

With the Fabric network running, you can checkout and run the RESTful service wrapper:

git clone https://github.com/ianco/jsonapi
cd jsonapi
go run *.go

This code uses Go’s built-in HTTP server, and sets up a simple RESTful services handler, supporting GET, POST and PUT methods for our Config object.  The Fabric Go SDK is used to connect to our running Fabric network.

The relevant code is in “repo.go” and “fabric_api.go”.

We first check that our chaincode is installed, and if not, we install it:

    isitinstalled, err := hlfSetup.IsInstalledChaincode(hlfSetup.ChainCodeID)
    if err != nil {
        return err
    }
    if !isitinstalled {
        if err := hlfSetup.InstallAndInstantiateCC(); err != nil {
            return err
        }
    }

Our “GET” handler simply executes a query on our chaincode to fetch the stored value:

    transactionProposalResponses, _, err := fcutil.CreateAndSendTransactionProposal(setup.Chain, chainCodeID, chainID, args, []fabricClient.Peer{setup.Chain.GetPrimaryPeer()}, nil)
    return string(transactionProposalResponses[0].GetResponsePayload()), nil

In the above, the “transactionProposalResponses[0].GetResponsePayload()” returns the Json representation of the Config object.

To “POST” an update, we create and sign a transaction proposal, and if successful then post the transaction:

    transactionProposalResponse, txID, err := fcutil.CreateAndSendTransactionProposal( ... )
    if err != nil {
        return err
    }

    // Register for commit event
    done, fail := fcutil.RegisterTxEvent(txID, setup.EventHub)

    txResponse, err := fcutil.CreateAndSendTransaction(setup.Chain, transactionProposalResponse)

So that’s basically it!

With the RESTful services running, you can test the GET and POST methods using curl (there is a handy script “add-config.sh” to post an updated Json), and this shows how you can implement similar functionality to Composer’s RESTful back-end using plain old Golang.  For a similar set of RESTful interfaces, Composer’s generated Angular code can be adapted to call our own services.

This is a little bit more work than using Composer!  But it gives us access to all the low-level Fabric APIs, and the flexibility to implement exactly the level of detailed functionality we need.  And with some clever templating, Composer could be adapted to generate our Golang chaincode and RESTful interfaces for us.  But that’s a topic for another blog!

Hyperledger Composer – Making Fabric Easy(er)

I’ve written a few posts describing how to get up and running quickly with Hyperledger Fabric, and unfortunately with the pace of development, these old blogs have quickly become obsolete 🙁  Fortunately the Hyperledger team has released some new tools to help get up and running quickly and easily (or at least more easily), and that’s what I’m going to describe today.

The new tool is called Hyperledger Composer, and it can generate a business network definition (describing the resources, functions and access controls available on a Fabric network), deploy the business network to a running Fabric network, generate a RESTful api to the business functions, and even generate an application framework using Angular.

There is a developer tutorial describing how to build an application using Composer, including a description of how to set up your Fabric network.  I recommend that you go through the tutorial if you’re not familiar with Composer (or Fabric), but if you just want to jump to the final answer, you can check out the ‘final’ application like so:

(Note that the tutorials above cover all the required dependencies, which I am not going to repeat here.)

# checkout the completed Composer application
cd ... some directory
git clone https://github.com/ianco/fabric-app.git
cd fabric-app

You will see three directories in this project:

ls -l
hlfv1        # contains the Fabric network
my-network   # contains the "business network" definition
my-app       # contains our generated Angular application

To get the Fabric network up and running, you need to run the following commands:

cd hlfv1

# download the Fabric docker images (it's important to match the correct Fabric version for your Composer installation)

# setup the Fabric network configuration for Composer (so it knows how to connect to your network) - this goes in ~/.composer-connection-profiles

# now startup your Fabric network

If you run “docker ps” you can see the running images – the orderer, peers and CA.

The business network represents your data, transactions, and access permissions.  These are coded in the Domain Model, Transaction Processor and Access Control Rules, respectively.  These are described in the tutorial and other Composer documentation.

To build and install your business network, run the following commands:

cd my-network

# create your business network archive, or "bna" file (this is colloquially called a "banana" file)
npm install

# deploy your banana to your Fabric network

# ping your network to make sure it is responding

# now startup your REST services

Now your REST services are running, and you can open a browser and navigate to http://localhost:3000/explorer and explore your services.

To run the Angular application, do the following:

cd my-app

# run the application (assumes you are already running the REST services)
ng serve

Note that “ng serve” above just runs the app; you need to have the REST services already running as per the above.  You can alternatively run “npm start” (which is what the tutorial says), which runs both the REST services and the Angular application.  However, if you are developing add-on functionality for the application, it’s useful to keep the two processes separated.

You can navigate to your application at http://localhost:4200.

Note that the application you are now running includes functionality for adding Traders and Commodities, executing Trades, and viewing Transaction history.  The generated Angular application (if you followed the tutorial) includes only the Commodity screen.  A very brief overview of the application components:

  • There are three Components, corresponding to the three main menu options (Trader, Commodity and Transaction)
  • Trader and Commodity support basic CRUD operations, and within the Commodity screen you can also Trade the selected Commodity
  • The Transaction screen is just a list of Trades
  • Each component wraps the underlying RESTful services, which are encapsulated in the “data.service.ts” class
  • There are additional “system” REST services available through the REST web api; you can incorporate them into the application if you’re feeling adventurous

Composer is a tool to help you get a Fabric application up and running quickly.  This is a valuable addition to the Hyperledger portfolio, because setting up a Fabric network from scratch can be pretty daunting.  However, the more complex attributes of Fabric that Composer wraps are also some of its more useful features.  For example, the default application connects to the web services (and therefore the Fabric network) in an unsecured manner, bypassing a lot of Fabric’s valuable security features.

Adding authentication to our Fabric application is a topic for another post!

Blockchain 103 – It’s all about Collaboration

I recently attended the Consensus conference in New York, and although I was disappointed in the “technical” level of most of the presentations (more focussed on business applications than blockchain tech) I came away with a strong feeling of “collaboration” and growing maturity of the industry.  (Minus all the ICO frenzy and crypto-speculation of course.)

For example, I learned a little bit about the state of banking in the Caribbean, and a couple of ways that blockchain-based solutions are providing real-world value today.

Due to the amount of money-laundering and lax regulation, banking in the Caribbean is under heavy scrutiny.  In fact, even when trading between the islands, payments are denominated in US dollars and have to flow through New York or Toronto.  That means that inter-island trading gets hit with currency exchange twice, and settlement takes longer.  In addition, the banks are always at risk of being “de-risked”, which means that Canadian and US banks can cut them off if they feel the risk is not commensurate with the reward.

Use of a crypto-currency is an obvious remedy for the former problem – Caribbean islands can trade with each other directly and settle with Bitcoin or any other crypto-currency of choice.  However, crypto-currencies are currently very volatile, which makes them poorly suited to trading.  Recently a solution has been developed in partnership between one bank and a vendor, whereby they use a blockchain-based secure ledger to authenticate and record their KYC (“Know Your Customer”) compliance for all their customers.  This allows them to share this information in a secure and non-repudiable way, satisfy compliance obligations, and manage and reduce the “de-risk” risk.

These are two examples where blockchain can have an immediate impact, and in a way that improves the democratization of large-scale de-centralized computing.

Blockchain technology is exploding, with hundreds if not thousands of coins and chains emerging.  However, a large portion of the industry is settling around either Ethereum or the Enterprise Ethereum Alliance as a development platform (according to my non-scientific informal survey), and there are many large organizations exploring solutions using Hyperledger Fabric.  There are countless other platforms available, most of which have specific differentiators, and many of which were originally forked from Ethereum or Bitcoin.

Are you considering blockchain for your business?

Collaboration is the key.  A common theme from the Consensus 2017 presenters was: focus on your key use cases, and make sure you are developing something of value to the business.  The benefits of a blockchain-based solution are pretty specific (sharing of a secure, de-centralized ledger) so it’s important for the technical stakeholders and the business to work together.  It’s also key, since a blockchain solution is all about “sharing”, that you work with partners from the beginning, and establish a “minimum viable ecosystem” of stakeholders when developing the initial solution.  Establishing the ecosystem of participants early will validate the business case as well as ensure that the blockchain is used for what it does best, which is consensus-based sharing of data.

A few things to consider:

  • What is the data being shared and who are the participants? What are the other possible solutions, and what additional benefits does a blockchain-based solution promise?  If the solution is not a good fit, then the blockchain might wind up acting as a poorly performing database.
  • Is the solution “open” or “closed”? I.e. is the solution open for anyone to join, or will it be restricted to a closed set of participants?  What is the process to “on board” new participants in the network and what roles can they play?  (BitCoin and Ethereum are both open, in that anyone can generate a set of keys for themselves and join the network, whereas Hyperledger Fabric depends on a CA to issue certificates that nodes must present when transacting.  The former is self-identifying but the latter requires a strong identity management infrastructure.)
  • What are the performance requirements? Is the solution required to support high transaction volumes and fast settlement cycles?  Or is the solution targeted at lower volumes and slower, perhaps unpredictable, settlement times?  (Bitcoin generates a block every 10 minutes “on average”, dependent on the random nature of its proof-of-work.  In contrast, Ripple has a very fast, dependable settlement cycle.  Bitcoin’s scaling issues are well known and should be a study for anyone implementing blockchain-based solutions.)
  • What are the various security aspects of the solution? Can the solution deal with a node that becomes compromised and “goes bad”?  Will the solution stay secure if it grows very big (or conversely stays very small)?  (Bitcoin’s consensus algorithm ensures that “bad” nodes are filtered out, and the proof-of-work becomes more secure as the network scales, since a “51% attack” becomes less feasible.)
  • What are the incentive mechanisms in the solution? Is there more incentive to “play by the rules” than there is to “go rogue”?  (With the escalating values of Bitcoin and other crypto-currencies, there is a strong incentive to be a “good” miner and earn tokens.  For non-currency based solutions, the incentives are in line with traditional enterprise systems – support the business (“good”) or disrupt/attack the business (“bad”) – and all traditional security models apply.)

If you’re building a blockchain solution (or planning to build one), do an analysis of your solution to determine the key requirements and how they map to a blockchain implementation, and review the differentiating characteristics of the available technology.  If there’s a clear fit, then you’re in the clear; if not, stick with one of the major players.  Also check your local market to see what your peers are using.  In Canada there are major initiatives using Hyperledger Fabric, although the majority of the startups are focussing on Ethereum.

There is a lot happening in this space, and academic researchers are taking more notice (academic research and industry is another important area of collaboration), so expect the security and architecture models to become more formalized in the near future.

And of course expect more tokens, and more and even crazier ICOs.

If you have any questions about blockchain, or are interested in how blockchain can work for you, please feel free to drop us a line.


Blockchain 102

BitCoin is currently running at the threshold of its ability to keep up with users’ transactions, and there are two competing proposals for how this can be fixed.  (If you don’t know the background, read my previous post.)

One option has been proposed by the BitCoin core development team.  It involves stripping some information out of each BitCoin transaction and storing it “off chain” – this will result in a smaller transaction footprint in each block, and therefore more capacity for transactions within the current block size.  The proposal also allows for larger blocks, and the potential to add “secondary chains”.  This proposal is called “Segregated Witness”, or SegWit.

The second option is supported mainly by the BitCoin miners, and involves increasing the size of the blocks (to allow more transactions), along with the capability for miners to increase the block size and transaction fees in the future.  This option is called “BitCoin Unlimited”, or BU.  It is not supported by many stakeholders because they see it as granting too much control to the miners, and reducing the “democratic” and decentralized nature of BitCoin.

One wrinkle is that a number of the large miners have developed a (patented) optimization in the block hash calculation (the so-called “proof of work”) that gives them about a 20% advantage in computing new blocks for the Blockchain.  BU will entrench this competitive advantage, but SegWit includes some provisions to neutralize it.

(As an aside, other crypto-currencies and blockchain-based technologies, such as Ethereum, are selecting alternate algorithms for “proof of work” to try to avoid some of the centralization that has occurred in the BitCoin network.  But more about this in a future post.)

This drama is exploding all over the Internet, sub-Reddits and discussion boards everywhere.  But it is illustrative of the nature of the BitCoin network.

Both options have been implemented and made available to BitCoin miners and nodes, and the stakeholders can implement either option and “vote” as to their preference for the future of the BitCoin network.  Some of the larger miners have threatened to unilaterally implement BU and force the rest of the network to get in line.  However, the BitCoin network works based on the concept of “consensus”, and if there is no consensus, then the network won’t operate.  If the nodes don’t accept blocks created by the miners, the new blocks won’t get distributed and they won’t be part of the common Blockchain ledger.  At worst the network will “split” and there will be 2 separate BitCoins.

What does the future hold for BitCoin?


Blockchain 101

There’s a drama unfolding in the Bitcoin community right now!  It’s interesting and instructive, and I’ll blog about it in my next post.  Today I’ll go over some Blockchain 101 (actually BitCoin 101) to set some background for the drama of the next post.

The terms BitCoin and Blockchain are often used synonymously these days, but:

A Blockchain is a secure, unalterable, shared ledger.  A Blockchain consists of a series of blocks that are each “signed” with a secure stamp (for the technically minded, this is a Hash, which can mathematically demonstrate that the contents of the block have not been altered).  The signature of each block includes the signature of the previous block, hence the blocks are linked together in a chain, and one block can’t be altered without having to update all subsequent blocks.

BitCoin is a digital currency that uses a Blockchain as the underlying structure to record transactions.  Each block in BitCoin’s Blockchain contains a set of transactions that represent a transfer of Bitcoin from one party to another.  BitCoin transactions are secured with strong digital signatures, and the blocks within BitCoin’s blockchain are secured by placing constraints around the block signature (or Hash) that make it extremely difficult to calculate.  In fact this difficulty is adjusted based on the size of the BitCoin network, so the larger the network (i.e. the more computing power the network can bring to bear) the more difficult it is to construct a block on the BitCoin Blockchain.  Computing the Hash successfully requires a large amount of computing resources, and this is known as the Proof of Work.

The BitCoin network consists of a series of Nodes and Miners.  (Miners compute the new blocks in the Blockchain, and Nodes transmit the new blocks to all parties on the network.)  No one “owns” or “controls” BitCoin – everyone who participates in the network keeps their own copy of the Blockchain.  The Miners and Nodes (and other parties, such as BitCoin Exchanges and BitCoin users, who use Wallet applications to connect to the network) agree on the set of rules that constitutes a valid Blockchain, and will only share transactions and blocks that meet these criteria.  (The difficulty level of the block’s Hash is one such rule.)

This is called Consensus, and it is one of the most powerful aspects of BitCoin.  Consensus means that no one party can take over and control the BitCoin network, because the rest of the parties on the network won’t cooperate, and the cooperation of all parties is required for the BitCoin network to operate.  (Remember this for the BitCoin drama coming up in the next post.)

To summarize:

  • Blockchains consist of a “chain” of blocks that are “signed” by cryptographic hashes
  • Each block contains BitCoin transactions that are protected by strong cryptography
  • BitCoin defines rules that determine the “consensus” of what constitutes a valid blockchain, including a strong “proof of work” for creating each block
  • Each participant in the BitCoin network maintains their own copy of the Blockchain, and new transactions and blocks are shared amongst the participants on a peer-to-peer network
  • The participants in the network will only share new transactions and blocks that follow these rules

With the rise in popularity of BitCoin, some weaknesses in the architecture are starting to become apparent.  The first is the size of the Blockchain, which has reached 100G and is growing at about 4G per month.  Each participant in the network has to maintain this Blockchain, as well as support the network bandwidth to communicate the new blocks and transactions.  The second issue is the transaction throughput that the BitCoin network can sustain – due to limitations on the size of each block (one of the rules of “Consensus”), a block can hold a maximum of about 1000 transactions.  Since the network is constrained to produce a new block about every 10 minutes (another of the “consensus” rules, controlled by the “difficulty” of computing the block’s signature) this places a ceiling on the maximum transaction rate that BitCoin can handle.

There are a couple of alternatives on how to address these limitations, and the BitCoin community is divided on the path to take!

In my next post I’ll talk about the “drama” surrounding this, and I’ll also talk about some other technologies and applications that are being built today around Blockchains and Blockchain technology.


Getting Started with Hyperledger Part 2 – Fabric Development

I wrote a post describing how to get a sample Hyperledger Fabric demo up and running, but I generally like to work from first principles, so in this post I am going to describe how to check out the source for the Hyperledger Fabric core components and build the docker images from scratch.

Setup Development Environment

First of all there are a couple of pre-requisites:

The source code is managed on Gerrit, and you can check out the code from the Git repo, but if you want to be a “committer” and be able to contribute to Fabric development, you will need to set up a Linux Foundation ID and check out the code under this ID.  The process to do this is described here.

You will also need to install all the pre-requisites on your local machine (or on your developer VM); this process is described here.  (There is also a good blog post here on IBM’s Community Blog; however, it’s a bit dated and the pre-req setup references old versions.  The blog is still a good read.)

Checkout and Compile Code

Now, assuming you have a Linux Foundation ID, you can check out the code as follows:

First of all make sure you are in your “Go” source tree:

# create directory to checkout Hyperledger code
cd ~
mkdir go
cd go
export GOPATH=$PWD
echo $GOPATH

mkdir -p src/github.com/hyperledger
cd src/github.com/hyperledger

Now checkout the following repositories (replace “gerritid” with your own id):

# check out the fabric repo
cd $GOPATH/src/github.com/hyperledger
git clone ssh://gerritid@gerrit.hyperledger.org:29418/fabric && scp -p -P 29418 gerritid@gerrit.hyperledger.org:hooks/commit-msg fabric/.git/hooks/

# check out the fabric CA
git clone ssh://gerritid@gerrit.hyperledger.org:29418/fabric-ca

(If you take a look at the Gerrit site you can see a list of the other Hyperledger projects, including the SDKs, base images, etc.)

You can now build the base Fabric Docker images.  You could just run “make dist-clean all” and it would do everything, but let’s take it step by step.

First do a “dist-clean” to make sure your workspace is clean and the prerequisites are all there:

# clean the local repo
cd $GOPATH/src/github.com/hyperledger/fabric
make dist-clean

You shouldn’t see any errors.

There are several steps to build and test the docker images – first compile the code for the peer and orderer processes:

# compile the peer and orderer processes
make native

If you get any errors you may have missed a dependency.  Just google the error message and the stack trace will tell you what to do!

Since this takes a while (the first time around it has to download some base images) you can take a look at the source code while you wait – the top-level code is in the peer and orderer packages, and shared code is in common, core, etc.  I’ll talk about this some more in a later blog post.

Next build the docker images (peer and orderer):

# build the docker images
make docker
docker images
docker ps

You should have a few Docker images, but nothing running yet.

At this point you can run the unit tests.  This takes a while, so make yourself a coffee and put your feet up:

# run the unit tests
make linter
make unit-test

While the unit tests are running, open another command window and run “docker ps” – you will see Docker containers running as the unit tests execute.  (For me the tests run for a while and then start to fail; it seems like a stability issue more than anything else.)

Next run Behave (a behaviour-driven development framework in Python):

# run behave
make behave

(This will be the subject of a more detailed future blog.)

You can also build the CA image as follows:

# clean the local repo
cd $GOPATH/src/github.com/hyperledger/fabric-ca
make docker

You should now have Docker images for the peer, orderer and ca containers (as well as a whole collection of other images created by the build and unit-test processes).

What’s Next?  Contribute to the Hyperledger Project!  You can read up on how to contribute here.  And I’ll talk about it in more detail in a future blog post.


Getting Started with Hyperledger Part 1 – WTF is Hyperledger Fabric?

Hyperledger Fabric is IBM’s entry into the growing blockchain market.  There is a lot of documentation available, and it’s hard to know where to start.  As a developer I like to find some code and dive in.

IBM has published a demo that runs a local test network and posts some transactions, so it’s a good place to start:


I’m not going to repeat everything in the above post, but I’ll try to explain what’s going on.

Setup your Environment

First, go ahead and download and install the dependencies:

Go – Go is a programming language developed by Google to address the application problems they face on a day-to-day basis – large-scale applications, massive concurrency, etc.  There are a lot of videos and tutorials here; if you’ve never programmed in Go, take a short detour and run through a couple of the tutorials.

Docker and Docker Compose – Docker is a standards-based platform for packaging and running applications.  If you haven’t used it before install the software and run through the initial tutorial here.  Docker Compose allows you to manage and deploy a collection of Docker containers.

Node.js and npm – Node.js is a JavaScript runtime for building scalable network applications.  Hyperledger supports SDKs in many languages, including Java, Python and Node.js – this example uses the Node.js API.

Download the sample code for the demo.  (I keep everything under ~/Projects, but you can choose your own location.)

cd ~/Projects
mkdir hackfest
cd hackfest
curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null;  tar -xvf sfhackfest.tar.gz

Note that this demo uses some Go code (specifically the chaincode that will be invoked in our transaction), so you need to set your $GOPATH environment variable to the directory where you downloaded the above code:

cd ~/Projects/hackfest
export GOPATH=$PWD

(Remember where your previous $GOPATH pointed so you can restore it later.)

Setup the Hyperledger Network

Once you have the dependencies installed, the next step is to download the Docker images and run the network.  There are a number of components involved, so let’s first build the network and then see what we’ve got:

# download and build the Docker images
docker-compose -f docker-compose-gettingstarted.yml build
# run the docker images and then see what we've got
docker-compose -f docker-compose-gettingstarted.yml up -d
docker ps

The yml file above runs each of the Docker containers and sets up the network.

You should have 6 containers running at this point:

ca – This is the local Certificate Authority.  In Hyperledger Fabric all members are identified with certificates from a trusted authority.  In a “real” implementation this could be the enterprise CA, but for a development install Fabric provides its own CA.

cli – This is the client process that applications use to create and interact with channels (i.e. blockchains).  In the demo we will create and join a channel, submit a transaction (with chaincode) and then query the status.

peer – There are 3 peers in the demo.  In a “real” application peers could represent individual companies or stakeholders who are participating in networked business processes.  Transactions are submitted to peers for “approval” before the approved transactions are submitted to the orderer.  The transaction’s chaincode determines the approval protocol (e.g. a threshold number of peers must approve the transaction, or all peers must approve).

orderer – Once the transaction is approved the client submits it to the orderer.  The orderer transmits the transaction to all peers (and any other orderers who are participating in the network).  Each peer maintains its own copy of the channel (i.e. the blockchain), and the orderer makes sure that each peer gets the same transactions in the same order.

Note that each container is configured using a directory under ./tmp (e.g. ./tmp/peer0, ./tmp/orderer, etc.) – these directories contain the certificates used to authenticate connections between processes.  ./tmp/orderer/orderer.yaml contains the configuration parameters for the blockchain we will create below.

You can connect to the client and verify it has connected to each of the other processes:

docker exec -it cli bash
more results.txt

Demo of Asset Transfer using the Node.js API

The demo uses the Node.js API, so we need to download and install the API code:

cd ~/Projects/hackfest
curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
npm install

There are three programs used to deploy the chaincode and then run an asset transfer – run the following three commands from the command shell:

# deploy the transaction (this deploys chaincode "example_cc.go")
node deploy.js
# invoke the "move" method to transfer assets
node invoke.js
# query the transaction to see the new balance
node query.js

You can run the query and invoke functions multiple times and you should see the balance update each time.

So what is happening?  If you inspect the code you can see the following in action:

config.json – This file specifies the configuration information for the transaction.  Specifically, “chaincodePath” identifies the chaincode (example_cc.go), and “invokeRequest” and “queryRequest” identify the methods to invoke.
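To make that concrete, the shape of the file is roughly as follows – this is an illustrative sketch, not the actual contents (the field names come from the description above; check the downloaded config.json for the real values):

```json
{
  "chaincodePath": "github.com/example_cc",
  "invokeRequest": {
    "functionName": "move",
    "args": ["a", "b", "10"]
  },
  "queryRequest": {
    "functionName": "query",
    "args": ["a"]
  }
}
```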

deploy.js – This program creates the channel (i.e. the blockchain), sets the orderer and the peers, and loads the transaction (and the chaincode).  It first validates the transaction with each of the 3 peers, and then, if successful, posts the transaction to the orderer.  In the background, the orderer forwards the transaction to each of the peers, and each of the peers adds the transaction to its local blockchain.

invoke.js – This program calls the “move” method to transfer assets.

query.js – This program calls the “query” method to report on the asset balance.

example_cc.go – This is the “chaincode”, deployed by deploy.js and invoked by invoke.js and query.js.  If you inspect the “Go” code you can see the actual implementation of the “move” and “query” functions (the functions invoked by invoke.js and query.js).  (TBD – I’m not sure where the “consensus” is programmed, i.e. whether a single peer or all peers need to approve the transaction …)

Demo of Asset Transfer by invoking Client Directly

The next section of the demo runs essentially the same operations, but by connecting to the CLI container and executing the commands manually:

# connect to the CLI container
docker exec -it cli bash
# create a new channel and join 2 peers
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer channel create -c myc2
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 CORE_PEER_ADDRESS=peer0:7051 peer channel join -b myc2.block
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 CORE_PEER_ADDRESS=peer1:7051 peer channel join -b myc2.block

# create a transaction and run some transfers
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer chaincode deploy -C myc2 -n mycc -p github.com/hyperledger/fabric/examples -c '{"Args":["init","a","100","b","200"]}'
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer chaincode invoke -C myc2 -n mycc -c '{"function":"invoke","Args":["move","a","b","10"]}'
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer chaincode query -C myc2 -n mycc -c '{"function":"invoke","Args":["query","a"]}'

You can run the “move” and “query” commands multiple times, and you should see the balance update.  Note that due to timing (the blockchain has to replicate between the various nodes) you may not see the balance update immediately after a “move” – just wait a few seconds and run the query again, and you should see the updated balance.
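Since replication isn’t instantaneous, a small retry loop saves some manual re-running.  This is a generic sketch – check_balance below is a stub standing in for the real “peer chaincode query” command above:

```shell
# poll until the query result is visible; check_balance is a placeholder
# for the real `peer chaincode query` invocation
attempt=0
check_balance() {
  attempt=$((attempt+1))
  [ "$attempt" -ge 3 ]   # stub: pretend the balance shows up on the 3rd try
}
until check_balance; do
  sleep 1                # give the blockchain a moment to replicate
done
echo "balance visible after $attempt attempts"
```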

What’s Next?

Note that there is an online tutorial here that runs through some Hyperledger basics on IBM’s cloud platform (Bluemix) – it provides a good high level Hyperledger overview without any complex environment setup.

Speaking of complex environment setup, the next blog post will describe setting up a Hyperledger Fabric development environment and building the Docker containers from scratch.

PS – Once you’re done you can shut down your environment using:

docker-compose -f docker-compose-gettingstarted.yml down



Setting up a local BitMessage Network using Docker

BitMessage is a blockchain-inspired P2P messaging platform that allows users to anonymously exchange or broadcast encrypted messages.  The platform is distributed, decentralized and trustless.

This blog post will show how you can setup a local BitMessage network so you can explore and experiment with the code.

You can check out the code for PyBitMessage (the Python reference client) here, and you can download my scripts to set up your own local BitMessage environment here.

There are instructions to build and run the Docker image in the README file in GitHub; I won’t repeat everything here, but I’ll explain some of the high-level concepts.

Unlike Bitcoin, BitMessage doesn’t come with a “development” network out of the box, so you have to take a few steps to roll your own.  After checking out the PyBitMessage source code, you need to edit the following file:


This script loads some “well-known” nodes on the BitMessage network – you have to comment these out so that you don’t connect to the production network!  I comment out all the real “well-known” nodes and add one of my own – the IP and port number of my local BitMessage instance.
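The edit itself is mechanical – comment out every bootstrap entry and add your own.  A sketch of the idea (the file name and node addresses below are invented for illustration; apply the same edit to the real well-known-nodes file in the PyBitMessage source):

```shell
# stand-in for the well-known-nodes file (addresses are invented)
cat > knownnodes.sample <<'EOF'
5.45.99.75:8444
75.167.159.54:8444
EOF

# comment out every production node ...
sed 's/^/# /' knownnodes.sample > knownnodes.local
# ... and append the local BitMessage instance instead
echo '127.0.0.1:8444' >> knownnodes.local

cat knownnodes.local
```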

Second, you have to update some of the configuration files.  The updates are checked into my GitHub project above, and are deployed for each Docker instance that you run.

First, keys.dat contains a setting that specifies the type of automatic “DNS lookup” BitMessage will use to find nodes to connect to – you need to disable this for a local network.

Second, keys.dat contains a port number that each instance will listen on – you need to configure a unique port number for each instance (so they don’t conflict).
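Putting those two changes together, the relevant keys.dat section looks something like this – a sketch only, since the exact option names should be checked against the keys.dat that PyBitMessage generates (dontconnect in particular is an assumption on my part):

```ini
[bitmessagesettings]
# unique listening port for this instance (pick a different one per container)
port = 8445
# keep the client from bootstrapping onto the production network (assumed option name)
dontconnect = true
```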

Lastly, since PyBitMessage is a GUI application, you need to ensure that each Docker container can connect to the host’s X server – I disable all X security using “xhost +”, but since this is a security hole you may want to set up finer-grained security.

(Note that I run this environment inside a VM, and when testing the local BitMessage network I disable all network connections to the VM.  This compensates for the X security issue above, and also ensures that my local BitMessage network can absolutely NOT connect to the production network.)

Once you have built the base Docker image and started up the instances, you can generate Identities and send messages between each of the clients.  See the README file in GitHub for detailed instructions.

It’s pretty simple!  If you have any questions please feel free to email me at ian@anon-solutions.ca.

Setting up a Bitcoin Development Environment and Test Network using Docker

Blockchain is a hot technology these days, and Bitcoin is the original implementation.  As a blockchain professional it is handy to be able to set up and run a Bitcoin network from scratch – checking out the code from GitHub, compiling and installing it into a Docker container, and then running instances of the container to set up a small, local test network.

There are a number of projects you may want to explore (Bitcoin, BitMessage, BigchainDB, Ethereum, Hyperledger Fabric, etc.) and Docker allows you to set up multiple development environments without filling up your host with all the dependencies.

You can check out the Bitcoin code from GitHub here, and I have set up a small project to build and run the Docker container here.  (Shoutout to Gerald and this blog post, which this project is based on.)

There are instructions to build and run the Bitcoin Docker image and test network in the README file in GitHub, so I won’t repeat everything here.  I’ll just explain a few things that aren’t covered in the README.

The first step is to build a base Docker image that contains all the dependencies to compile and run Bitcoin – the Dockerfile is here.  It uses Ubuntu as a base image and adds all Bitcoin dependencies.

The second step is to run this base image and use it to compile the Bitcoin source.  The Bitcoin source is located on the host, and the directory is mounted into the Docker container using:

docker run ... -v $(BITCOIN_SRC):$(BITCOIN_SRC) -w $(BITCOIN_SRC) ...

(The location of BITCOIN_SRC is configured in the Makefile; you can edit this if you check out your code to a different location.)

In the above, “-v” mounts the host directory into the container, and “-w” sets the working directory for running “make”.

Once the Bitcoin code is compiled and installed into the container, a second snapshot is saved using:

docker commit ...

… to save a new image.

Two instances of this image (“bob” and “alice”) are now started to run the local Bitcoin network:

docker run -d -it -p 18444:18444 -p 18332:18332 --name=alice --hostname=alice
docker run -d -it -p 19444:18444 -p 19332:18332 --name=bob --hostname=bob

Each image runs an instance of bitcoind, and the daemon is set up to run a local network (i.e. not connected to the Bitcoin production or test networks):

bitcoind -regtest -daemon -printtoconsole
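The same flags can equivalently be set in a bitcoin.conf inside the container.  A sketch (these are standard bitcoind options; the RPC credentials are placeholders I made up):

```ini
# bitcoin.conf equivalent of the command-line flags above
# (standard bitcoind options; RPC credentials are placeholders)
regtest=1
daemon=1
# accept RPC commands from bitcoin-cli
server=1
rpcuser=dev
rpcpassword=devpass
```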

Commands can be invoked in each bitcoind process by executing bitcoin-cli in each Docker container:

# the following command is setup within the Makefile
docker exec alice bitcoin-cli -regtest $(bccmd)

# ... so to send a command to "alice" you simply run (for example to generate blocks):
make alice_cmd bccmd="generate 10"

It’s pretty simple!  If you have any questions please feel free to email me at ian@anon-solutions.ca.