Wait! Don’t write your microservice … yet – http://wp.me/p1D2XH-7oH
Introduction to Gherkin! A new way of testing!
Hello everyone,
In this blog we will discuss the Gherkin language, which is used in BDD for writing test cases. We will take a look at the topics below.
Gherkin’s grammar is defined as a parsing expression grammar. It is a business-readable DSL created specifically for describing behaviour without explaining how that behaviour is implemented. Gherkin is a plain-English text language.
Gherkin serves two purposes — documentation and automated tests. It is a whitespace-oriented language that uses indentation to define structure.
Gherkin supports over 60 different spoken languages, so we can easily write in our own language. The parser divides the input into features, scenarios and steps.
Here is a simple example of Gherkin:
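As a minimal illustrative sketch (not necessarily the post's original example), a feature file might look like this:

```gherkin
Feature: Login
  As a registered user, I want to log in so that I can see my dashboard

  Scenario: Successful login
    Given the user is on the login page
    When the user enters valid credentials
    Then the dashboard is displayed
```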
When we run this feature, it gives us step definitions. In Gherkin, each line starts with a Gherkin keyword, followed by any text you like.
The main keywords are:
- Feature
- Background
- Scenario
- Scenario Outline / Examples
- Given, When, Then, And, But
View original post 198 more words
Why Apache Cassandra?
Apache Cassandra is a free, open source, distributed data storage system that differs sharply from relational database management systems.
Cassandra has become so popular because of its outstanding technical features. It is durable, seamlessly scalable, and tunably consistent.
It performs blazingly fast writes, can store hundreds of terabytes of data, and is decentralized and symmetrical so there’s no single point of failure.
It is highly available and offers a schema-free data model.
Cassandra is available for download from the Apache Cassandra website. Just click the link on the home page to download the latest release version, and unzip the downloaded Cassandra to a local directory.
Starting the Server:
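To give a rough idea (a sketch only; the directory name below is an example assumption, not from the post), starting the server from the unzipped directory looks like this:

```
# Move into the unzipped Cassandra directory (example path)
cd apache-cassandra-3.0.9

# Start the Cassandra server in the foreground; drop -f to run it in the background
bin/cassandra -f
```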
View original post 221 more words
As we know, Neo4j rescues developers from the trouble of the black-and-white screens of the old databases. It not only gives freedom from those databases but also provides excellent support through its predefined procedures.
As we know, in relational databases stored procedures provide better performance, scalability, productivity, ease of use and security, and Neo4j also provides an amazing tool that delivers the same advantages.
Yes, I am talking about APOC, and using APOC with Neo4j is a blessing for developers. It provides many predefined procedures and user-defined functions/views, so that we can easily use them and improve our productivity in a very simple manner.
APOC stands for ‘Awesome Procedures On Cypher’. APOC is a library of procedures for various areas. It was introduced with Neo4j 3.0.
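For example, once the APOC plugin is installed, a procedure can be called directly from Cypher (a tiny illustrative call):

```cypher
// Browse the APOC procedures and functions related to text handling
CALL apoc.help('text');
```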
There are many areas where we use…
View original post 898 more words
It's been a while since we talked about the Internet of Things! (One year, I guess, since it's already 2017 😉)
So here we are again, and this time the topic is an introduction to the MODBUS protocol!
Again we have three questions: What? Why? And How?
So let's get started.
What is MODBUS?
Modbus is a serial communications protocol originally published by Modicon (now Schneider Electric) in 1979 for use with its programmable logic controllers (PLCs). Simple and robust, it has since become a de facto standard communication protocol, and it is now a commonly available means of connecting industrial electronic devices.
Since it first appeared in 1979, Modbus has evolved into a broad set of protocols over a variety of physical links (for example, RS-485). At its core, Modbus is a serial communications protocol that follows a master–slave model. A master sends a request to a slave device, and the slave returns a response. In a standard Modbus network, there is one master and up to 247 slaves (although 2-byte addressing can significantly expand this limit).
Why Modbus?
When it comes to choosing a network for your device, Modbus TCP/IP offers several significant advantages:
- Simplicity: Modbus TCP/IP simply takes the Modbus instruction set and wraps TCP/IP around it. If you already have a Modbus driver and you understand Ethernet and TCP/IP sockets, you can have a driver up and running and talking to a PC in a few hours. Development costs are exceptionally low. Minimum hardware is required, and development is easy under any operating system.
- Standard Ethernet: There are no exotic chipsets required and you can use standard PC Ethernet cards to talk to your newly implemented device. As the cost of Ethernet falls, you benefit from the price reduction of the hardware, and as the performance improves from 10 to 100 Mb and soon to 1 Gb, your technology moves with it, protecting your investment. You are no longer tied to one vendor for support, but benefit from the thousands of developers out there who are making Ethernet and the Internet the networking tools of the future. This effort has been complemented opportunely with the assignment of the well-known Ethernet port 502 for the Modbus TCP/IP protocol.
- Open: The Modbus protocol was transferred from Schneider Electric to the Modbus Organization in April 2004, signaling a commitment to openness. The specification is available free of charge for download, and there are no subsequent licensing fees required for using Modbus or Modbus TCP/IP protocols. Additional sample code, implementation examples, and diagnostics are available in the Modbus TCP toolkit, a free benefit to Modbus Organization members and available for purchase by nonmembers.
- Availability of many devices: Interoperability among different vendors’ devices and compatibility with a large installed base of Modbus-compatible devices makes Modbus an excellent choice.
How does it work?
Modbus Architecture:
There are many types of MODBUS protocol, like MODBUS RTU, MODBUS TCP, MODBUS ASCII and many more, but we are using MODBUS TCP for our system.
In the MODBUS architecture there are mainly two entities:
- Modbus Slave: In general terms we call it the server, the entity that provides the data. Currently we are using a simulator for this; eventually, our data collector would work as a Modbus slave.
- Modbus Master: In general terms we call it the client, the entity that consumes the data; hence our service will be a client.
This is how the Modbus architecture looks:
MODBUS may seem complicated and confusing to some, but it is a very simple protocol when you understand how it works. MODBUS is a request and response protocol. A MODBUS master will initiate a request and a slave will respond with either an error or the data requested. This is the simple concept of MODBUS.
Modbus Message Structure :
The MODBUS message structure looks something like this:
For the other framing types it looks something like this:
The first information in each Modbus message is the address of the receiver. This parameter contains one byte of information. In Modbus/ASCII it is coded with two hexadecimal characters, in Modbus/RTU one byte is used. Valid addresses are in the range 0..247. The values 1..247 are assigned to individual Modbus devices and 0 is used as a broadcast address. Messages sent to the latter address will be accepted by all slaves. A slave always responds to a Modbus message. When responding it uses the same address as the master in the request. In this way the master can see that the device is actually responding to the request.
Within a Modbus device, the holding registers, inputs and outputs are assigned a number between 1 and 10000. One would expect that the same addresses are used in the Modbus messages to read or set values. Unfortunately, this is not the case. In the Modbus messages, addresses between 0 and 9999 are used. If you want to read the value of output (coil) 18, for example, you have to specify the value 17 in the Modbus query message. Even more confusing, for input and holding registers an offset must be subtracted from the device address to get the proper address to put in the Modbus message structure. This leads to common mistakes and should be taken care of when designing applications with Modbus. The following table shows the address ranges for coils, inputs and holding registers and the way the address in the Modbus message is calculated from the actual address of the item in the slave device.
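The offset rules described above can be sketched in a few lines (Python used here for illustration; the helper names are made up, not part of any Modbus library):

```python
def coil_message_address(coil_number):
    """Coils are numbered 1..9999 in the device; messages use 0-based addresses."""
    return coil_number - 1

def input_message_address(input_number):
    """Discrete inputs are numbered 10001..19999; subtract the 10001 offset."""
    return input_number - 10001

def input_register_message_address(register_number):
    """Input registers are numbered 30001..39999; subtract the 30001 offset."""
    return register_number - 30001

def holding_register_message_address(register_number):
    """Holding registers are numbered 40001..49999; subtract the 40001 offset."""
    return register_number - 40001

# Reading coil 18 therefore puts address 17 in the query message:
print(coil_message_address(18))  # 17
```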
Modbus function codes
The second parameter in each Modbus message is the function code. This defines the message type and the type of action required by the slave. The parameter contains one byte of information. In Modbus/ASCII this is coded with two hexadecimal characters, in Modbus/RTU one byte is used. Valid function codes are in the range 1..255. Not all Modbus devices recognize the same set of function codes. The most common codes are discussed here.
Normally, when a Modbus slave sends a response, it uses the same function code as in the request. However, when an error is detected, the highest bit of the function code is turned on. That way the master can tell the difference between success and failure responses.
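Putting the pieces together, here is a hedged sketch (hypothetical helper functions, Python for illustration) of a Modbus/TCP "read holding registers" request and the error-bit check described above:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_address, count):
    """Build a Modbus/TCP request frame: MBAP header followed by the PDU."""
    # PDU: function code 0x03 (read holding registers), start address, register count
    pdu = struct.pack(">BHH", 0x03, start_address, count)
    # MBAP header: transaction id, protocol id (always 0), remaining length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def is_exception_response(function_code):
    """In an error response the slave sets the highest bit of the function code."""
    return bool(function_code & 0x80)

# A failed 0x03 request comes back with function code 0x83:
print(is_exception_response(0x83), is_exception_response(0x03))  # True False
```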
There is still a lot to know about MODBUS; you can read more in the references for this post 🙂
There will be three blogs in this series, this being the first one:
- Industrial-IOT : Introduction to MODBUS protocol
- Industrial-IOT : A basic Scala implementation for MODBUS Master.
- Industrial-IOT : MODBUS Spark Custom Receiver.
I will add the links accordingly to these blogs.
And yeah, I will be writing the Spark-IoT series soon; sorry for the delay 😉
So be patient, stay connected and stay tuned 🙂
The next blog will be out soon.
If you want to know anything about me, please visit the link below. You can get in touch with me anytime; you are always welcome 🙂
About the Author :
Shivansh Srivastava, Sr. Software Engineer @ Chirpanywhere Inc. (Scala, Spark, IoT specialist)
Know more : about.me/shiv4nsh
Snapshots are taken per node using the nodetool snapshot command. To take a global snapshot, run the nodetool snapshot command using a parallel ssh utility, such as pssh.
A snapshot first flushes all in-memory writes to disk, then makes a hard link of the SSTable files for each keyspace. You must have enough free disk space on the node to accommodate making snapshots of your data files. A single snapshot requires little disk space. However, snapshots can cause your disk usage to grow more quickly over time because a snapshot prevents old obsolete data files from being deleted. After the snapshot is complete, you can move the backup files to another location if needed, or you can leave them in place.
Run the nodetool snapshot command, specifying the hostname, JMX port, and keyspace. For example:
$ nodetool -h localhost -p 7199 snapshot mykeyspace
The snapshot is created in the data_directory_location/keyspace_name/table_name-UUID/snapshots/snapshot_name directory. Each snapshot directory contains numerous .db files that hold the data as it was at the time of the snapshot.
Taking a Global Snapshot:
As stated earlier, a global snapshot can be taken using the pssh tool, so let us configure this tool first.
Steps for configuring pssh:
- Install the pssh tool using the following command
sudo apt-get install python-pip
sudo pip install pssh
- Create a hosts file that contains all the IPs of the nodes present in the cluster and name it something like pssh-hosts.
It should look something like this:
192.168.2.123
192.168.2.125
192.168.2.120
- Now run the following command so that the snapshots get created on each and every node :
pssh -h pssh-hosts -P "/root/cassandra/bin/nodetool -h localhost -p 7199 snapshot "
Now that you have taken a dump of the data present on each node, you can download it using secure copy (scp) and then restore it accordingly.
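The download step can be sketched with scp (hypothetical host, user and data-directory paths; adjust them to your cluster):

```
# Copy the snapshot directories for one keyspace from a node (example paths)
mkdir -p backups/192.168.2.123
scp -r user@192.168.2.123:/var/lib/cassandra/data/mykeyspace/*/snapshots \
    backups/192.168.2.123/
```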
I am still working on automating the process of downloading the dump ! Will update you all as soon as it is done !
I hope you've enjoyed the blog!
If you've any query, ping me here or on Twitter: shiv4nsh!
I will be happy to help you out!
Till then, enjoy someone else's blog! 😉
- DataStax Documentation !
- Some hack ! 😀
Are you missing this series 😉 ?
Welcome back again to the series of Neo4j with Scala 🙂. Let's start our journey again. Till now we have talked about and learnt the use of Neo4j with Scala and how easily we can integrate these two amazing technologies.
Before starting the blog here is recap :
- Getting Started Neo4j with Scala : An Introduction
- Neo4j with Scala: Defining User Defined Procedures and APOC
- Neo4j with Scala: Migrate Data From Other Database to Neo4j
- Neo4j with Scala: Awesome Experience with Spark
ElasticSearch is a modern search and analytics engine based on Apache Lucene. ElasticSearch also provides the capability to store data…
View original post 446 more words
In this blog, we are going to embark on the journey of how to set up a Hadoop multi-node cluster in a distributed environment. So let's not waste any time and get started. Here are step…
There is a lot of promise around blockchains. While we at DeepChains do subscribe to the philosophy and would be eager to provide business solutions to meet industry needs, there has been a lot of double talk, it seems, around blockchains.
The premise of blockchain is the following:
- No central registration – No big papa.
- Decentralized – there is no single point of failure.
- Safe – Encrypted and secure.
- Private – My data as an individual is not held by a central authority. I choose what to share
- Secure – end-to-end encrypted communication routed over Tor.
- Open – Open source code
However, for all the so-called currency exchanges, this does not seem to be the case. Let us understand the premise of Bitcoin's philosophy first. If you look at the image below, we are trying to get rid of any central agencies.
We are trying to get rid…
View original post 488 more words
In this tutorial, we will be demonstrating how to make a REST service in Spark using Akka-http as a sidekick 😉 and Cassandra as the data store. We have seen the power of Spark earlier and when…