Snapshots are taken per node using the nodetool snapshot command. To take a global snapshot, run the nodetool snapshot command using a parallel ssh utility, such as pssh.

A snapshot first flushes all in-memory writes to disk, then makes a hard link of the SSTable files for each keyspace. You must have enough free disk space on the node to accommodate making snapshots of your data files. A single snapshot requires little disk space. However, snapshots can cause your disk usage to grow more quickly over time because a snapshot prevents old obsolete data files from being deleted. After the snapshot is complete, you can move the backup files to another location if needed, or you can leave them in place.
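The hard-link behaviour is easy to verify on any filesystem, with no Cassandra involved. This sketch (file names are made up for illustration) shows why a fresh snapshot costs almost no space, and why it keeps obsolete files alive:

```shell
# Simulate an SSTable and "snapshot" it with a hard link, as nodetool does.
mkdir -p demo/snapshots
echo "sstable contents" > demo/table-Data.db
ln demo/table-Data.db demo/snapshots/table-Data.db   # hard link: no data copied

# Both names point at the same inode, so the snapshot is essentially free:
ls -i demo/table-Data.db demo/snapshots/table-Data.db

# Deleting the live file (as compaction would) does not free the space,
# because the snapshot still references the inode:
rm demo/table-Data.db
cat demo/snapshots/table-Data.db   # → sstable contents
```

This is exactly why disk usage grows over time while snapshots exist: the space is only reclaimed once the snapshot's links are removed too.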

Note: Cassandra can only restore data from a snapshot when the table schema exists. It is recommended that you also back up the schema (for example, with cqlsh's DESCRIBE KEYSPACE).


Run the nodetool snapshot command, specifying the hostname, JMX port, and keyspace. For example:

$ nodetool -h localhost -p 7199 snapshot mykeyspace


The snapshot is created in the data_directory_location/keyspace_name/table_name-UUID/snapshots/snapshot_name directory. Each snapshot directory contains numerous .db files holding the data as of the time of the snapshot.

For example, with the default data directory locations:

Package installations:

    /var/lib/cassandra/data/mykeyspace/mytable-<UUID>/snapshots/<snapshot_name>

Tarball installations:

    install_location/data/data/mykeyspace/mytable-<UUID>/snapshots/<snapshot_name>


Taking a Global Snapshot:

As stated earlier, a global snapshot can be taken using the pssh tool, so let us configure that tool first.

Steps for configuring pssh:

  1. Install the pssh tool using the following commands:
    sudo apt-get install python-pip
    sudo pip install pssh
  2. Create a hosts file that contains the IP addresses of all the nodes in the cluster, one per line, and name it something like pssh-hosts.
  3. Now run the following command so that a snapshot gets created on each and every node:
     pssh -h pssh-hosts -P "/root/cassandra/bin/nodetool -h localhost -p 7199 snapshot"
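The steps above can be sketched as follows. The IP addresses are placeholders for your cluster's nodes, and the pssh line is echoed rather than executed, since it needs a live cluster:

```shell
# Hypothetical node IPs -- replace with your cluster's addresses.
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.3 > pssh-hosts
cat pssh-hosts

# The actual global-snapshot command (shown here; run it for real on the cluster).
# -P makes pssh print each node's output as it arrives.
echo 'pssh -h pssh-hosts -P "nodetool -h localhost -p 7199 snapshot mykeyspace"'
```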

Now that you have taken a dump of the data present on each node, you can download it using secure copy (scp) and then restore it accordingly.
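The download can be scripted with scp. The loop below is a dry run: it only prints the command it would run for each node, since real hosts are needed to execute it. The remote data path and the root user are assumptions; adjust them to your installation:

```shell
# (Re)create the hosts file so this sketch is self-contained.
printf '%s\n' 10.0.0.1 10.0.0.2 > pssh-hosts

# Dry run: print the scp command we would run for each node in pssh-hosts.
# Assumed remote snapshot path; adjust to your data directory and snapshot name.
while read -r host; do
  echo scp -r "root@${host}:/var/lib/cassandra/data/mykeyspace/*/snapshots" "backups/${host}/"
done < pssh-hosts
```

Keeping the downloaded snapshots in per-host directories (backups/10.0.0.1/, backups/10.0.0.2/, ...) preserves which node each dump came from, which matters when restoring.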

I am still working on automating the process of downloading the dump! Will update you all as soon as it is done!

I hope you've enjoyed the blog!

If you've any query, ping me here or on Twitter: shiv4nsh!

Will be happy to help you out!

Till then, enjoy someone else's blog! 😉



Shivansh Srivastava



