Getting Started with Hyperledger Part 2 – Fabric Development

I wrote a post describing how to get a sample Hyperledger Fabric demo up and running, but I generally like to work from first principles, so in this post I am going to describe how to check out the source for the Hyperledger Fabric core components and build the Docker images from scratch.

Setup Development Environment

First of all, there are a couple of prerequisites:

The source code is managed on “Gerrit“. You can check out the code from the Git repo, but if you want to be a “committer” and contribute to Fabric development, you will need to set up a Linux Foundation ID and check out the code under this ID.  The process to do this is described here.

You will also need to install all the prerequisites on your local machine (or on your developer VM); this process is described here.  (There is also a good blog post here on IBM’s Community Blog; however, it’s a bit dated and the prerequisite setup references old versions.  It is still a good read.)

Checkout and Compile Code

Now, assuming you have a Linux Foundation ID, you can check out the code as follows:

First of all make sure you are in your “Go” source tree:

# create directory to checkout Hyperledger code
cd ~
mkdir go
cd go
export GOPATH=$PWD
echo $GOPATH

cd $GOPATH
mkdir -p src/github.com/hyperledger
cd src/github.com/hyperledger

Now check out the following repositories (replace “gerritid” with your own ID):

# check out the fabric repo
cd $GOPATH/src/github.com/hyperledger
git clone ssh://gerritid@gerrit.hyperledger.org:29418/fabric && scp -p -P 29418 gerritid@gerrit.hyperledger.org:hooks/commit-msg fabric/.git/hooks/

# check out the fabric CA
git clone ssh://gerritid@gerrit.hyperledger.org:29418/fabric-ca

(If you take a look at the Gerrit site you can see a list of the other Hyperledger projects, including the SDKs, base images, etc.)

You can now build the base Fabric Docker images.  You could just run “make dist-clean all” and it would do everything, but let’s take it step by step.

First do a “dist-clean” to make sure your workspace is clean and the prerequisites are all there:

# clean the local repo
cd $GOPATH/src/github.com/hyperledger/fabric
make dist-clean

You shouldn’t see any errors.

There are several steps to build and test the docker images – first compile the code for the peer and orderer processes:

# compile the peer and orderer processes
make native

If you get any errors you may have missed a dependency.  Just google the error message and stack trace – it will usually tell you what to do!
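
If you suspect a missing prerequisite, a quick sanity check of the main build tools is a good first step (the version numbers on your machine will differ; the point is just that each command resolves):

# verify the main build tools are on the path
go version
docker --version
docker-compose --version
make --version
gcc --version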

Since this takes a while (the first time around it has to download some base images) you can take a look at the source code while you wait – the top-level code is in the peer and orderer packages, and shared code is in common, core, etc.  I’ll talk about this some more in a later blog post.
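
If you want to start browsing now, a couple of commands will show you the lay of the land (the directory names are the packages mentioned above):

# have a look at the main source packages
cd $GOPATH/src/github.com/hyperledger/fabric
ls -d peer orderer common core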

Next build the docker images (peer and orderer):

# build the docker images
make docker
docker images
docker ps

You should have a few Docker images, but nothing running yet.

At this point you can run the unit tests.  This takes a while, so make yourself a coffee and put your feet up:

# run the unit tests
make linter
make unit-test

While the unit tests are running, open another command window and run “docker ps”; you will see Docker containers running as the unit tests execute.  (For me the tests run for a while and then start to fail – it seems like a stability issue more than anything else.)
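
One way to keep an eye on them from the second window is to poll “docker ps” every few seconds (the format string is optional – it just trims the output down to names and status):

# poll the running containers every 5 seconds
watch -n 5 'docker ps --format "table {{.Names}}\t{{.Status}}"'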

Next run Behave (a behaviour-driven development framework written in Python):

# run behave
make behave

(This will be the subject of a more detailed future blog.)

You can also build the CA image as follows:

# build the fabric-ca docker image
cd $GOPATH/src/github.com/hyperledger/fabric-ca
make docker

You should now have Docker images for the peer, orderer and CA (as well as a whole collection of other images created by the build and unit test processes).
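
A quick way to confirm the main images are in place (the image names follow the hyperledger/fabric-* naming used by the build):

# list the peer, orderer and CA images produced by the builds
docker images | grep -E 'hyperledger/fabric-(peer|orderer|ca)'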

What’s Next?  Contribute to the Hyperledger Project

You can read up on how to contribute here, and I’ll talk about it in more detail in a future blog post.


Getting Started with Hyperledger Part 1 – WTF is Hyperledger Fabric?

Hyperledger Fabric is IBM’s entry into the growing blockchain market, contributed to the Linux Foundation’s Hyperledger project.  There is a lot of documentation available, and it’s hard to know where to start.  As a developer I like to find some code and dive in.

IBM has published a demo that runs a local test network and posts some transactions, so it’s a good place to start:

http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html

I’m not going to repeat everything in the above post, but I’ll try to explain what’s going on.

Setup your Environment

First, go ahead and download and install the dependencies:

Go – Go is a programming language developed by Google to deal with the kinds of application problems they face on a day-to-day basis – large-scale applications, massive concurrency, etc.  There are a lot of videos and tutorials here; if you’ve never programmed in Go, take a short detour and run through a couple of them.

Docker and Docker Compose – Docker is a standards-based platform for packaging and running applications.  If you haven’t used it before, install the software and run through the initial tutorial here.  Docker Compose allows you to manage and deploy a collection of Docker containers.

Node.js and npm – Node.js is a JavaScript runtime for building scalable network applications.  Hyperledger supports SDKs in many languages, including Java, Python and Node.js – this example uses the Node.js API.
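
Once everything is installed, a quick version check confirms the tools are on your path (the specific versions don’t matter for this walkthrough, as long as each command resolves):

# confirm the prerequisites are installed
go version
docker --version
docker-compose --version
node --version
npm --version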

Download the sample code for the demo.  (I keep everything under ~/Projects, but you can choose your own location.)

cd ~/Projects
mkdir hackfest
cd hackfest
curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null;  tar -xvf sfhackfest.tar.gz

Note that this demo uses some Go code (specifically the chaincode that will be invoked in our transaction), so you need to set your $GOPATH environment variable to the directory where you extracted the sample code above:

cd ~/Projects/hackfest
export GOPATH=$PWD

(Remember where your previous $GOPATH pointed so you can restore it later.)
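
A simple way to do that is to stash the old value in another environment variable before overriding it – for example:

# save the old GOPATH so it can be restored after the demo
export OLD_GOPATH=$GOPATH
export GOPATH=~/Projects/hackfest
# ... and when you are done with the demo:
export GOPATH=$OLD_GOPATH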

Setup the Hyperledger Network

Once you have the dependencies installed, the next step is to download the Docker images and run the network.  There are a number of components involved, so let’s first build the network and then see what we’ve got:

# download and build the Docker images
docker-compose -f docker-compose-gettingstarted.yml build
# run the docker images and then see what we've got
docker-compose -f docker-compose-gettingstarted.yml up -d
docker ps

The YAML file above starts each of the Docker containers and sets up the network.
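
You can also ask Docker Compose what the file defines and what it is currently running:

# list the services defined in the compose file
docker-compose -f docker-compose-gettingstarted.yml config --services
# show the status of the containers managed by this compose file
docker-compose -f docker-compose-gettingstarted.yml ps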

You should have 6 containers running at this point:

ca – This is the local Certificate Authority.  In Hyperledger Fabric all members are identified with certificates from a trusted authority.  In a “real” implementation this could be the enterprise CA, but for a development install there is a Fabric CA.

cli – This is the client process that applications use to create and interact with channels (i.e. blockchains).  In the demo we will create and join a channel, submit a transaction (with chaincode) and then query the status.

peer – There are 3 peers in the demo.  In a “real” application peers could represent individual companies or stakeholders who are participating in networked business processes.  Transactions are submitted to peers for “approval” before the approved transactions are submitted to the orderer.  The transaction’s chaincode determines the protocol (e.g. a threshold number of peers must approve the transaction, or all peers must approve).

orderer – Once the transaction is approved, the client submits it to the orderer.  The orderer transmits the transaction to all peers (and any other orderers that are participating in the network).  Each peer maintains its own copy of the channel (i.e. the blockchain), and the orderer makes sure that each peer gets the same transactions in the same order.
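
If you want to see what each of these processes is doing, the container logs are the quickest window into the network (the container names are the ones shown by “docker ps” above):

# tail the logs of the orderer and one of the peers
docker logs orderer
docker logs peer0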

Note that each container is configured using a directory under ./tmp (e.g. ./tmp/peer0, ./tmp/orderer etc.) – these directories contain the certificates used to authenticate connections between processes.  ./tmp/orderer/orderer.yaml contains the configuration parameters for the blockchain we will create below.
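
For example (the exact files under each directory will vary, but the layout mirrors the containers above):

# inspect the per-container configuration directories
ls ./tmp
ls ./tmp/peer0
head ./tmp/orderer/orderer.yaml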

You can connect to the client and verify it has connected to each of the other processes:

docker exec -it cli bash
more results.txt

Demo of Asset Transfer using the Node.js API

The demo uses the Node.js API, so we need to download and install the API code:

cd ~/Projects/hackfest
curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
npm install

There are three programs used to deploy the transaction and then run an asset transfer – run the following 3 commands from the command shell:

# deploy the transaction (this deploys chaincode "example_cc.go")
node deploy.js
# invoke the "move" method to transfer assets
node invoke.js
# query the transaction to see the new balance
node query.js

You can run the query and invoke functions multiple times and you should see the balance update each time.
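
For example, a quick loop that runs a few transfers in a row (purely illustrative – the scripts themselves are unchanged, and the sleep just gives the ledger a moment to update):

# run three transfers, checking the balance after each one
for i in 1 2 3; do
  node invoke.js
  sleep 5
  node query.js
done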

So what is happening?  If you inspect the code you can see the following in action:

config.json – This file specifies the configuration information for the transaction.  Specifically, “chaincodePath” identifies the chaincode (example_cc.go), and “invokeRequest” and “queryRequest” identify the methods to invoke.

deploy.js – This program creates the channel (i.e. the blockchain), sets the orderer and the peers, and loads the transaction (and the chaincode).  It first validates the transaction with each of the 3 peers, and then, if successful, posts the transaction to the orderer.  In the background, the orderer forwards the transaction to each of the peers, and each of the peers adds the transaction to its local blockchain.

invoke.js – This program calls the “move” method to transfer assets.

query.js – This program calls the “query” method to report on the asset balance.

example_cc.go – This is the “chaincode”, deployed by deploy.js and invoked by invoke.js and query.js.  If you inspect the “Go” code you can see the actual code for the “move” and “query” functions (the functions invoked by invoke.js and query.js).  (TBD I’m not sure where the “consensus” is programmed, whether a single peer or all peers need to approve the transaction …)
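
If you want to read the chaincode itself, it is just a Go source file under the demo’s $GOPATH – something like the following will locate it and list its top-level functions (the grep pattern is only a rough filter):

# locate the chaincode and list its functions
CC=$(find $GOPATH/src -name 'example_cc.go')
echo $CC
grep -n '^func' "$CC"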

Demo of Asset Transfer by invoking Client Directly

The next section of the demo runs basically the same operations, but this time by connecting to the cli container and executing the commands manually:

# connect to the CLI container
docker exec -it cli bash
# create a new channel and join 2 peers
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer channel create -c myc2
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 CORE_PEER_ADDRESS=peer0:7051 peer channel join -b myc2.block
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 CORE_PEER_ADDRESS=peer1:7051 peer channel join -b myc2.block

# create a transaction and run some transfers
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer chaincode deploy -C myc2 -n mycc -p github.com/hyperledger/fabric/examples -c '{"Args":["init","a","100","b","200"]}'
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer chaincode invoke -C myc2 -n mycc -c '{"function":"invoke","Args":["move","a","b","10"]}'
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer chaincode query -C myc2 -n mycc -c '{"function":"invoke","Args":["query","a"]}'

You can run the “move” and “query” commands multiple times, and you should see the balance update.  Note that due to timing (the blockchain has to replicate between the various nodes) you may not see the balance update immediately after a “move” – just wait a few seconds and run the query again.
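
If the balance hasn’t changed yet, you can simply poll the query for a few seconds (run inside the cli container, with the same environment variables as above):

# poll the balance of "a" a few times, a couple of seconds apart
for i in 1 2 3 4 5; do
  CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 \
    peer chaincode query -C myc2 -n mycc -c '{"function":"invoke","Args":["query","a"]}'
  sleep 2
done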

What’s Next?

Note that there is an online tutorial here that runs through some Hyperledger basics on IBM’s cloud platform (Bluemix) – it provides a good high level Hyperledger overview without any complex environment setup.

Speaking of complex environment setup, the next blog post will describe setting up a Hyperledger Fabric development environment and building the Docker containers from scratch.

PS: Once you’re done, you can shut down your environment using:

docker-compose -f docker-compose-gettingstarted.yml down


Setting up a local BitMessage Network using Docker

BitMessage is a blockchain-inspired P2P messaging platform that allows users to anonymously exchange or broadcast encrypted messages.  The platform is distributed, decentralized and trustless.

This blog post will show how you can set up a local BitMessage network so you can explore and experiment with the code.

You can check out the code for PyBitMessage (the Python reference client) here, and you can download my scripts to set up your own local BitMessage environment here.

There are instructions to build and run the Docker image in the README file in GitHub; I won’t repeat everything here, but I’ll explain some of the high-level concepts.

Unlike Bitcoin, BitMessage doesn’t come with a “development” network right out of the box, so you have to take a few steps to roll your own.  After checking out the PyBitMessage source code, you need to edit the following file:

PyBitmessage/src/defaultknownnodes.py

This script loads some “well known” nodes on the BitMessage network – you have to comment these out so that you don’t connect to the production network!  I comment out all the real “well-known” nodes and add one of my own – the IP and port number of my local BitMessage instance.

The second step is to update some of the configuration files.  The updates are checked into my GitHub project above and are deployed for each Docker instance that you run.

First, keys.dat contains a setting that specifies the automatic “DNS lookup” BitMessage will use to find nodes to connect to – you need to disable this for a local network.

Second, keys.dat contains a port number that each instance will listen on – you need to configure a unique port number for each instance (so they don’t conflict).
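
As an illustration only – the authoritative values are the configuration files checked into the GitHub project above, and the exact key names can vary between PyBitMessage versions – checking and changing the port for one instance looks something like this:

# show the port this instance is configured to listen on
grep -n '^port' keys.dat
# give this instance its own port (8445 is an arbitrary choice for a local network)
sed -i 's/^port *=.*/port = 8445/' keys.dat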

Lastly, since PyBitMessage is a GUI application, you need to ensure that each Docker container can connect to the host’s X server – I disable all X security using “xhost +”, but since this is a security hole you may want to set up finer-grained security.
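
For reference, the “xhost +” approach boils down to something like the following when starting a container (a common pattern for running GUI applications under Docker; the image name here is just a placeholder):

# allow any local client to connect to the X server (insecure - local testing only)
xhost +
# pass the host display into the container so the PyBitMessage GUI can render
docker run -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  my-bitmessage-image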

(Note that I run this environment inside a VM, and when testing the local BitMessage network I disable all network connections to the VM.  This compensates for the X security issue above, and also ensures that my local BitMessage network can absolutely NOT connect to the production network.)

Once you have built the base Docker image and started up the instances, you can generate Identities and send messages between each of the clients.  See the README file in GitHub for detailed instructions.

It’s pretty simple!  If you have any questions please feel free to email me at ian@anon-solutions.ca.

Setting up a Bitcoin Development Environment and Test Network using Docker

Blockchain is a hot technology these days, and Bitcoin is the original implementation.  As a blockchain professional it is handy to be able to set up and run a Bitcoin network from scratch – checking out the code from GitHub, compiling and installing it into a Docker container, and then running instances of the container to set up a small, local test network.

There are a number of projects you may want to explore (Bitcoin, BitMessage, BigchainDB, Ethereum, Hyperledger Fabric, etc.), and Docker allows you to set up multiple development environments without filling up your host with all the dependencies.

You can check out the Bitcoin code from GitHub here, and I have set up a small project to build and run the Docker container here.  (Shout-out to Gerald and this blog post, which this project is based on.)

There are instructions to build and run the Bitcoin Docker image and test network in the README file in GitHub, so I won’t repeat everything here.  I’ll just explain a few things that aren’t covered in the README.

The first step is to build a base Docker image that contains all the dependencies to compile and run Bitcoin – the Dockerfile is here.  It uses Ubuntu as a base image and adds all Bitcoin dependencies.

The second step is to run this base image and use it to compile the Bitcoin source.  The Bitcoin source is located on the host, and the directory is mounted in the Docker container using:

docker run ... -v $(BITCOIN_SRC):$(BITCOIN_SRC) -w $(BITCOIN_SRC) ...

(The location of BITCOIN_SRC is configured in the Makefile, and you can edit this if you check out your code in a different location.)

In the above, “-v” mounts the source directory into the container, and “-w” sets it as the working directory for running “make”.

Once the Bitcoin code is compiled and installed into the container, a second snapshot is saved using:

docker commit ...

… to save a new image.

Two instances of this image (“bob” and “alice”) are now started to run the local Bitcoin network:

docker run -d -it -p 18444:18444 -p 18332:18332 --name=alice --hostname=alice
docker run -d -it -p 19444:18444 -p 19332:18332 --name=bob --hostname=bob

Each container runs an instance of bitcoind, and the daemon is set up to run a local regtest network (i.e. not connected to the Bitcoin production or test network):

bitcoind -regtest -daemon -printtoconsole

Commands can be sent to each bitcoind process by executing bitcoin-cli in the corresponding Docker container:

# the following command is set up within the Makefile
docker exec alice bitcoin-cli -regtest $(bccmd)

# ... so to send a command to "alice" you simply run (for example to generate blocks):
make alice_cmd bccmd="generate 10"
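
From there, a minimal end-to-end test of the local network looks something like the following, using “docker exec” directly rather than the Makefile wrapper.  This assumes the two containers are already connected as peers (the README covers that setup), and <bobs-address> is whatever getnewaddress prints:

# mine some blocks on alice so she has spendable coins (coinbase outputs need 100 confirmations)
docker exec alice bitcoin-cli -regtest generate 101
# create a receiving address on bob
docker exec bob bitcoin-cli -regtest getnewaddress
# send 10 BTC from alice to bob, then mine a block to confirm the transaction
docker exec alice bitcoin-cli -regtest sendtoaddress <bobs-address> 10
docker exec alice bitcoin-cli -regtest generate 1
# check bob's balance
docker exec bob bitcoin-cli -regtest getbalance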

It’s pretty simple!  If you have any questions please feel free to email me at ian@anon-solutions.ca.