Industrial-IOT : Introduction to MODBUS protocol

Hey Folks,

It's been a while since we talked about the Internet of Things! (About a year, I guess, since it's 2017 already 😉)

So here we are again, and this time the topic is an introduction to the MODBUS protocol!
As usual, we have three questions: What? Why? And How?

So let’s get started.

What is MODBUS ?

Modbus is a serial communications protocol originally published by Modicon (now Schneider Electric) in 1979 for use with its programmable logic controllers (PLCs). Simple and robust, it has since become a de facto standard communication protocol, and it is now a commonly available means of connecting industrial electronic devices.

Since it first appeared in 1979, Modbus has evolved into a broad set of protocols over a variety of physical links (for example, RS-485). At its core, Modbus is a serial communications protocol that follows a master–slave model. A master sends a request to a slave device, and the slave returns a response. In a standard Modbus network, there is one master and up to 247 slaves (although 2 byte addressing can significantly expand this limit).

Why Modbus ?

When it comes to choosing a network for your device, Modbus TCP/IP offers several significant advantages:

  • Simplicity: Modbus TCP/IP simply takes the Modbus instruction set and wraps TCP/IP around it. If you already have a Modbus driver and you understand Ethernet and TCP/IP sockets, you can have a driver up and running and talking to a PC in a few hours. Development costs are exceptionally low. Minimum hardware is required, and development is easy under any operating system.
  • Standard Ethernet: There are no exotic chipsets required and you can use standard PC Ethernet cards to talk to your newly implemented device. As the cost of Ethernet falls, you benefit from the price reduction of the hardware, and as the performance improves from 10 to 100 Mb and soon to 1 Gb, your technology moves with it, protecting your investment. You are no longer tied to one vendor for support, but benefit from the thousands of developers out there who are making Ethernet and the Internet the networking tools of the future. This effort has been complemented opportunely with the assignment of the well-known Ethernet port 502 for the Modbus TCP/IP protocol.
  • Open: The Modbus protocol was transferred from Schneider Electric to the Modbus Organization in April 2004, signaling a commitment to openness. The specification is available free of charge for download, and there are no subsequent licensing fees required for using Modbus or Modbus TCP/IP protocols. Additional sample code, implementation examples, and diagnostics are available in the Modbus TCP toolkit, a free benefit to Modbus Organization members and available for purchase by nonmembers.
  • Availability of many devices: Interoperability among different vendors’ devices and compatibility with a large installed base of Modbus-compatible devices makes Modbus an excellent choice.

How does it work?

Modbus Architecture :

There are several variants of the MODBUS protocol, such as MODBUS RTU, MODBUS TCP, and MODBUS ASCII, but we are using MODBUS TCP for our system.

The MODBUS architecture has two main entities:

  1. Modbus Slave: in general terms we call it the server, the entity that provides the data. Currently we are using a simulator for this; eventually our data collector will act as the Modbus slave.
  2. Modbus Master: in general terms we call it the client, the entity that consumes the data, so our service will be the master.

This is what the Modbus architecture looks like:


MODBUS may seem complicated and confusing at first, but it is a very simple protocol once you understand how it works. MODBUS is a request–response protocol: a master initiates a request, and a slave responds with either the requested data or an error. That is the whole concept of MODBUS.
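To make the request/response idea concrete, here is a minimal Python sketch (not from the original post) that builds a Modbus TCP request frame for function code 3, Read Holding Registers. The MBAP header layout is standard Modbus TCP; the transaction ID, unit ID, and register range are arbitrary example values:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus TCP 'Read Holding Registers' (function code 3) request.

    MBAP header: transaction id (2 bytes), protocol id (2 bytes, always 0),
    length (2 bytes = bytes remaining after this field), unit id (1 byte).
    PDU: function code (1 byte) + start address (2 bytes) + quantity (2 bytes).
    """
    pdu = struct.pack(">BHH", 3, start_addr, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(transaction_id=1, unit_id=1, start_addr=0, quantity=2)
print(frame.hex())  # 000100000006010300000002
```

A real master would send this 12-byte frame over a TCP socket to port 502 and read back a response frame carrying the same transaction ID.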

Modbus Message Structure:

The MODBUS message structure looks something like this:


For the other variants it looks something like this:


Modbus addressing

The first field in each Modbus message is the address of the receiver. This parameter contains one byte of information. In Modbus/ASCII it is coded with two hexadecimal characters; in Modbus/RTU one byte is used. Valid addresses are in the range 0..247. The values 1..247 are assigned to individual Modbus devices, and 0 is used as a broadcast address; messages sent to the broadcast address are accepted by all slaves, and no response is returned for them. When a slave responds to a normal request, it uses the same address that the master used in the request, so the master can see which device is actually responding.

Within a Modbus device, the holding registers, inputs and outputs are assigned a number between 1 and 10000. One would expect the same addresses to be used in the Modbus messages that read or set values. Unfortunately, this is not the case. The addresses used in Modbus messages lie between 0 and 9999. If you want to read the value of output (coil) 18, for example, you have to specify the value 17 in the Modbus query message. Even more confusing, for inputs and holding registers an offset must be subtracted from the device address to get the proper address to put in the Modbus message structure. This leads to common mistakes and should be taken care of when designing applications with Modbus. The following table shows the address ranges for coils, inputs and holding registers, and the way the address in the Modbus message is calculated from the actual address of the item in the slave device.
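The offset rule can be sketched as a small Python helper (an illustrative function, not from the original post; the ranges follow the common Modbus numbering convention, which your table or device documentation may label slightly differently):

```python
def message_address(device_address):
    """Convert a device data address to the address used in a Modbus message.

    Ranges follow the common numbering convention:
      coils (outputs):   1..10000      -> subtract 1
      discrete inputs:   10001..20000  -> subtract 10001
      input registers:   30001..40000  -> subtract 30001
      holding registers: 40001..50000  -> subtract 40001
    """
    if 1 <= device_address <= 10000:        # coils
        return device_address - 1
    if 10001 <= device_address <= 20000:    # discrete inputs
        return device_address - 10001
    if 30001 <= device_address <= 40000:    # input registers
        return device_address - 30001
    if 40001 <= device_address <= 50000:    # holding registers
        return device_address - 40001
    raise ValueError("address out of range")

print(message_address(18))     # coil 18 -> 17
print(message_address(40001))  # first holding register -> 0
```

Centralizing this conversion in one place is a good way to avoid the off-by-one and offset mistakes described above.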


Modbus function codes

The second field in each Modbus message is the function code. This defines the message type and the type of action required of the slave. The parameter contains one byte of information. In Modbus/ASCII this is coded with two hexadecimal characters; in Modbus/RTU one byte is used. Valid function codes are in the range 1..255. Not all Modbus devices recognize the same set of function codes. The most common codes are discussed here.
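As a quick reference, the most common codes can be kept in a small lookup table. This is a sketch for illustration; the entries are the standard public Modbus function codes most often seen in practice:

```python
# Standard public Modbus function codes most often used in the field.
COMMON_FUNCTION_CODES = {
    1:  "Read Coils",
    2:  "Read Discrete Inputs",
    3:  "Read Holding Registers",
    4:  "Read Input Registers",
    5:  "Write Single Coil",
    6:  "Write Single Register",
    15: "Write Multiple Coils",
    16: "Write Multiple Registers",
}

print(COMMON_FUNCTION_CODES[3])  # Read Holding Registers
```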

Normally, when a Modbus slave returns a response, it uses the same function code as in the request. However, when an error is detected, the highest bit of the function code is turned on. In that way the master can tell the difference between success and failure responses.
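In code, checking for an exception response is a one-line bit test. This little helper is an illustrative sketch, not part of the original post:

```python
def parse_function_code(fc):
    """Split a response function code byte into (original code, is_error).

    If the high bit (0x80) is set, the slave is reporting an exception and
    the low 7 bits carry the function code of the failed request.
    """
    return fc & 0x7F, bool(fc & 0x80)

print(parse_function_code(0x03))  # (3, False) -> normal response
print(parse_function_code(0x83))  # (3, True)  -> exception response
```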


There is still a lot to learn about MODBUS; you can read more by following the references for this post 🙂

This series will consist of three posts, this being the first one:

  1. Industrial-IOT : Introduction to MODBUS protocol
  2. Industrial-IOT : A basic Scala implementation for MODBUS Master.
  3. Industrial-IOT  : MODBUS Spark Custom Receiver.

I will add the links to these posts as they are published.

And yeah, I will be writing the Spark-IoT series soon; sorry for the delay 😉
So be patient and stay connected and tuned 🙂

The next blog will be out soon.

If you want to know anything about me, please visit the link below. You can get in touch with me anytime; you are always welcome 🙂



About the Author :

Shivansh Srivastava, Sr. Software Engineer @ Chirpanywhere Inc. (Scala, Spark, IoT specialist)
Know more :

Cassandra Global Snapshot: Taking dump of a keyspace for whole cluster.

Snapshots are taken per node using the nodetool snapshot command. To take a global snapshot, run the nodetool snapshot command using a parallel ssh utility, such as pssh.

A snapshot first flushes all in-memory writes to disk, then makes a hard link of the SSTable files for each keyspace. You must have enough free disk space on the node to accommodate making snapshots of your data files. A single snapshot requires little disk space. However, snapshots can cause your disk usage to grow more quickly over time because a snapshot prevents old obsolete data files from being deleted. After the snapshot is complete, you can move the backup files to another location if needed, or you can leave them in place.

Note: Cassandra can only restore data from a snapshot when the table schema exists. It is recommended that you also backup the schema.


Run the nodetool snapshot command, specifying the hostname, JMX port, and keyspace. For example:

$ nodetool -h localhost -p 7199 snapshot mykeyspace


The snapshot is created in the data_directory_location/keyspace_name/table_name-UUID/snapshots/snapshot_name directory. Each snapshot directory contains numerous .db files holding the data at the time of the snapshot.

For example:

Package installations:


Tarball installations:


Taking a Global Snapshot:

As stated earlier, a global snapshot can be taken using the pssh tool, so let us configure this tool first.

Steps for configuring the pssh are:

  1. Install the pssh tool using the following commands:
    sudo apt-get install python-pip
    sudo pip install pssh
  2. Create a hosts file that contains the IPs of all the nodes present in the cluster, and name it something like pssh-hosts (the name used in the command in step 3).

    It should look something like this :
  3. Now run the following command so that a snapshot gets created on each and every node:
     pssh -h pssh-hosts -P "/root/cassandra/bin/nodetool -h localhost -p 7199 snapshot "

Now that you have taken a dump of the data present on each node, you can download it using secure copy and then restore it accordingly.
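The download step can be sketched as a short scp loop. Note that the data directory path and backup destination below are placeholders that depend on your installation and keyspace name; only the pssh-hosts file is taken from the steps above:

```shell
# Pull each node's snapshot directories into a per-host local folder.
# Paths and keyspace name are examples; adjust them for your install.
while read host; do
  mkdir -p "backup/$host"
  scp -r "$host:/var/lib/cassandra/data/mykeyspace/*/snapshots" "backup/$host/"
done < pssh-hosts
```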

I am still working on automating the process of downloading the dump! I will update you all as soon as it is done!

I hope you've enjoyed the blog!

If you have any queries, ping me here or on Twitter: shiv4nsh!

I will be happy to help you out!

Till then, enjoy someone else's blog! 😉


  1. DataStax Documentation!
  2. Some hack! 😀

Shivansh Srivastava

Neo4j With Scala: Neo4j vs ElasticSearch


Hello Graphistas,

Are you missing this series 😉 ?

Welcome back again to the series of Neo4j with Scala 🙂. Let's start our journey again. Till now we have talked about and learnt the use of Neo4j with Scala, and how easily we can integrate these two amazing technologies.

Before starting the blog here is recap :

  1. Getting Started Neo4j with Scala : An Introduction
  2. Neo4j with Scala: Defining User Defined Procedures and APOC
  3. Neo4j with Scala: Migrate Data From Other Database to Neo4j
  4. Neo4j with Scala: Awesome Experience with Spark

ElasticSearch is a modern search and analytics engine based on Apache Lucene. ElasticSearch is a full-text search engine and is highly scalable. It provides a RESTful web interface and schema-free documents. ElasticSearch is able to achieve fast search responses because it searches an index instead of searching the text directly. ElasticSearch also provides the capability to store data…

View original post 446 more words

Are we really eliminating central authorities with blockchain?


There is a lot of promise around blockchains. While we at DeepChains subscribe to the philosophy and are eager to provide business solutions that meet industry needs, there seems to have been a lot of double talk around blockchains.

The premise of blockchain is the following:

  • No central registration – No big papa.
  • Decentralized – there is no single point of failure.
  • Safe – Encrypted and secure.
  • Private – My data as an individual is not held by a central authority. I choose what to share
  • Secure – end-to-end encrypted communication routed over Tor.
  • Open – Open source code

However, for all the so-called currency exchanges, this does not seem to be the case. Let us understand the premise of Bitcoin's philosophy first. If you look at the image below, we are trying to get rid of any central agencies.


We are trying to get rid…

View original post 488 more words

Spark – LDA : A Complete example of clustering algorithm for topic discovery.


In this blog we will demonstrate the functionality of applying a full ML pipeline over a set of documents, in this case 10 books from the internet.

So let's start with first things first…

What is Clustering ?

Clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.

When clustering is applied to textual data, it is known as document clustering.

It has applications in automatic…

View original post 650 more words

Four Myths of In-Memory Computing

GridGain - In-Memory Computing

As with any fast-growing technology, In-Memory Computing has attracted a lot of interest and writing in the last couple of years. It's bound to happen that some of the information gets stale pretty quickly, while other information is simply not very accurate to begin with. And thus myths start to grow and take hold.

I want to talk about some of the misconceptions that we hear almost on a daily basis here at GridGain and provide the necessary clarification (at least from our point of view). Being one of the oldest companies working in the in-memory computing space for the last 7 years, we've heard and seen all of it by now, and earned a certain amount of perspective on what in-memory computing is and, most importantly, what it isn't.

In-Memory Computing

Let’s start at… the beginning. What is in-memory computing? Kirill Sheynkman from RTP Ventures gave…

View original post 1,561 more words

Scala – IOT : First basic IOT application using Scala on RaspberryPi


Let’s start our journey of making our first IoT application to make the world a better place 😉
(I would never miss a chance to mock Hooli! 😉)

In this blog the two technologies, Scala and IoT, will finally meet, and we will cover the following things:

  1. Setting up the scala sbt environment on RaspberryPi
  2. Developing your first IOT application using Scala
  3. Deploying the developed application on RaspberryPi.

And finally we are going to achieve this:


View original post 831 more words