Vijay Bellur

Life, Technology and a little more…

GlusterFS 3.6.0 – Looking Back and Looking Ahead..

GlusterFS 3.6.0 was released over the last weekend. As I mulled over the release cycle during the weekend, several thoughts on both the technical and project fronts crossed my mind. This post is an attempt to capture some of those thoughts.

On the technical front, several new features and a significant number of bug fixes are part of GlusterFS 3.6.0. The release notes contain a comprehensive list of what’s new in this release. My noteworthy list includes:

  • Volume Snapshots – Distributed, LVM thin-pool based snapshots for backing up volumes in a Gluster Trusted Storage Pool. Apart from providing cluster-wide co-ordination to trigger a consistent snapshot, several improvements have been made throughout the GlusterFS stack to make translators more crash-consistent. Snapshotting of volumes is tightly coupled with LVM today, but the same framework could also be enhanced to integrate with a backend storage technology like btrfs that can perform snapshots.
  • Erasure Coding – Xavier Hernandez from Datalab added support for erasure coding of data in a GlusterFS volume across nodes in a Trusted Storage Pool. Erasure coding requires fewer nodes to provide better redundancy than an n-way replicated volume and can help in reducing the overall deployment cost. We look forward to building on this foundation and delivering more enhancements in upcoming releases.
  • Meta translator – This translator provides a /proc-like view for examining the internal state of translators on the client stack of a GlusterFS volume, and certainly looks like an interface that I will be consuming heavily for introspection of GlusterFS.
  • Automatic File Replication (AFR) v2 – A significant refactor of the synchronous replication translator that provides granular entry self-healing and reduced resource consumption during entry self-heals.
  • NetBSD, OSX and FreeBSD ports – Lots of fixes on the portability front. The NetBSD port passes most regression tests as of 3.6.0. At this point, none of these ports are ready to be deployed in production. However, with dedicated maintainers for each of these ports, we expect to reach production readiness on these platforms in a future release.
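As a quick illustration, the snapshot and erasure coding features above surface through the gluster CLI. The volume name, server hostnames and brick paths below are placeholders for this sketch, and the exact commands require a configured 3.6 trusted storage pool:

```shell
# Create a dispersed (erasure-coded) volume: 3 bricks, able to tolerate
# the loss of any 1 brick (hosts and paths are placeholders).
gluster volume create demo disperse 3 redundancy 1 \
    server1:/bricks/demo server2:/bricks/demo server3:/bricks/demo
gluster volume start demo

# Take an LVM thin-pool based snapshot of the running volume.
gluster snapshot create demo-snap demo
gluster snapshot list demo
```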

On the project front, there have been several things of note happening as far as this release cycle goes. Here are my personal highlights from 3.6.0:

  • 3.6.0 marks the first time two major GlusterFS releases have happened in the same calendar year.
  • More patches were merged in the 6-month cycle leading to 3.6.0 than in any other major release. The addition of more maintainers for GlusterFS and the revamp of our Jenkins-based CI infrastructure seem to be helping here.

Looking ahead, there are already signs of what is coming up in GlusterFS 3.7. Projects like bitrot detection, volume tiering, the Trash translator, and Export/Netgroup style authentication for Gluster NFS are already in progress and in all probability will be part of 3.7.0 along with other features. We have also started early discussions on what we would want to build for future releases, and more details will be available in the community soon. If you have thoughts on how the addition of a new feature could address your problem(s), do let us know; that will help us in planning out the future of GlusterFS.

Overall, it has been a gratifying time for me to be in the Gluster Community. Personally, I look forward to building on this feeling of gratification as we build the next versions of GlusterFS to play a significant role in the distributed storage segment.


November 4, 2014 Posted by | Uncategorized | 6 Comments

GlusterFS as Distributed Storage in OpenStack

The slides I used for my talk on “Distributed Storage in OpenStack” at the recent OpenStack India Day 2013 event can be found here. The presentation uses GlusterFS as an example of distributed storage and discusses the integration between GlusterFS and various OpenStack components.

September 25, 2013 Posted by | Uncategorized | Leave a comment

Upgrading to GlusterFS 3.4.0

Now that GlusterFS 3.4.0 is out, here are some mechanisms to upgrade from earlier installed versions of GlusterFS.

Upgrade from GlusterFS 3.3.x:

GlusterFS 3.4.0 is compatible with 3.3.x (yes, you read it right!). You can upgrade your deployment by following one of the two procedures below.

a) Scheduling a downtime (Recommended)

For this approach, schedule a downtime and prevent all your clients from accessing the servers.

  • Stop all glusterd, glusterfsd and glusterfs processes on your server.
  • Install GlusterFS 3.4.0.
  • Start glusterd.
  • Ensure that all started volumes have processes online in “gluster volume status”.

You would need to repeat these 4 steps on all servers that form your trusted storage pool.

After upgrading the servers, it is recommended to upgrade all client installations to 3.4.0.
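On each server, the downtime procedure above amounts to something like the following. The package names and the yum/service commands are assumptions for an RPM-based distribution; adjust them for your environment:

```shell
# Repeat on every server in the trusted storage pool.
killall glusterfs glusterfsd glusterd                    # stop all Gluster processes
yum -y update glusterfs glusterfs-server glusterfs-fuse  # install 3.4.0
service glusterd start
gluster volume status                                    # confirm all brick processes are online
```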

b) Rolling upgrades with no downtime

If you have replicated or distributed-replicated volumes with bricks placed correctly for redundancy, have no data waiting to be self-healed, and feel adventurous, you can perform a rolling upgrade through the following procedure:

  • Stop all glusterd, glusterfs and glusterfsd processes on your server.
  • Install GlusterFS 3.4.0.
  • Start glusterd.
  • Run “gluster volume heal <volname> info” on all volumes and ensure that there is nothing left to be self-healed on every volume. If you have pending data for self-heal, run “gluster volume heal <volname>” and wait for self-heal to complete.
  • Ensure that all started volumes have processes online in “gluster volume status”.

Repeat the above steps on all servers that are part of your trusted storage pool.

Again, after upgrading the servers, it is recommended to upgrade all client installations to 3.4.0.
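Before moving on to the next server during a rolling upgrade, the heal check above can be scripted. This sketch assumes `gluster volume list` is available to enumerate the volumes:

```shell
# Verify that no volume has entries pending self-heal before
# taking down the next server.
for vol in $(gluster volume list); do
    echo "== $vol =="
    gluster volume heal "$vol" info
done
```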

Upgrade from GlusterFS 3.1.x and 3.2.x:

You can follow the procedure listed here and replace 3.3.0 with 3.4.0 in the procedure.

Do report your findings on 3.4.0 in gluster-users, #gluster on Freenode and bugzilla.

Please note that this may not work for all installations & upgrades. If you notice anything amiss and would like to see it covered here, please leave a comment and I will try to incorporate your findings.

July 15, 2013 Posted by | Uncategorized | 15 Comments

What’s new in GlusterFS 3.4

Slides from my talk on GlusterFS 3.4 in the recently concluded Gluster Community Summit can be found here.

I intend to add more details about some of the new features in the days to come.

March 18, 2013 Posted by | Uncategorized | 1 Comment

Upgrading to GlusterFS 3.3.0

Now that GlusterFS 3.3.0 is out, here is a quick primer on upgrading from earlier installed versions of GlusterFS. This howto covers upgrades from 3.1.x and 3.2.x versions of GlusterFS.

1) GlusterFS 3.3.0 is not compatible with any earlier released versions. Please make sure that you schedule a downtime before you upgrade.

2) Stop all glusterd, glusterfs and glusterfsd processes running in all your servers and clients.

3) Take a backup of glusterd’s working directory on all servers – glusterd’s working directory is usually /etc/glusterd.

4) Install GlusterFS 3.3.0 on all your servers and clients. With 3.3.0, the default working directory has changed to /var/lib/glusterd. RPM and source installations move all content under /etc/glusterd to /var/lib/glusterd. On all servers, ensure that the directory structure remains consistent with the backup taken in step 3. New files are created as part of the upgrade process, so you may see some additional files; however, ensure that all files backed up in step 3 are present in /var/lib/glusterd.
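Steps 3 and 4 can be sanity-checked with a small script. The paths follow the defaults mentioned above, and the backup location is an assumption:

```shell
# Step 3: back up glusterd's working directory (pre-3.3.0 default path).
cp -a /etc/glusterd /root/glusterd-backup

# Step 4, after installing 3.3.0: every backed-up file should still be
# present under the new working directory; extra new files are expected.
cd /root/glusterd-backup
find . -type f | while read -r f; do
    [ -e "/var/lib/glusterd/$f" ] || echo "missing after upgrade: $f"
done
```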

5) If you have installed from RPM, go to step 6. Otherwise, start glusterd in upgrade mode. glusterd terminates after it performs the necessary steps for the upgrade; restart glusterd normally after this termination. Essentially this process boils down to:

a) killall glusterd

b) glusterd --xlator-option '*.upgrade=on' -N

This regenerates the volume files with the new ‘index’ translator, which is needed for features like proactive self-heal in 3.3.0.

c) Start glusterd

Ensure that you repeat a), b) and c) on all servers.

6) All your gluster services on the servers should be back online now. Mount your native clients with GlusterFS 3.3.0 to access your volumes.

Enjoy your shiny new 3.3.0 and report your findings in #gluster on Freenode, bugzilla and c.g.o

Please note that this may not work for all installations & upgrades. If you notice anything amiss and would like to see it covered here, please leave a comment and I will try to incorporate your findings.

May 31, 2012 Posted by | Uncategorized | 15 Comments

Finding Gluster volumes from a client machine

One of the questions that I often come across on IRC and elsewhere is how to obtain a list of Gluster volumes that can be mounted from a client machine. NFS provides showmount, which helps in figuring out the list of exports from a server, amongst other things. GlusterFS currently does not have an equivalent of showmount; that is one of the nifty utilities we intend to develop in the near term.

Right now, there is a way of figuring out the list of volumes from a client machine. Gluster 3.1 exports volumes through NFS by default. Owing to this, showmount itself can be leveraged to figure out the volumes exported by Gluster servers. The output of ‘showmount -e <server>’ will be the list of volumes that can be mounted from the client. For example:

# showmount -e deepthought
Export list for deepthought:
/dr    *
/music *

If you just need the list of volume names, you can strip the extra output by making use of your favorite filters. Something like:

# showmount -e <server> | grep / | tr -d '/' | cut -d ' ' -f1

would yield you the list.
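To illustrate, running that filter over the sample showmount output shown earlier yields just the volume names:

```shell
# Feed the sample `showmount -e deepthought` output through the filter.
printf 'Export list for deepthought:\n/dr    *\n/music *\n' \
    | grep / | tr -d '/' | cut -d ' ' -f1
```

This prints two lines, dr and music, one volume name per line.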

Once you have the list on your client, you can mount any of the volumes over NFS or the native GlusterFS protocol, based on your preferred access method.
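For instance, the music volume on the server deepthought from the example above could be mounted either way. The mount point is a placeholder, and note that Gluster’s built-in NFS server speaks NFSv3:

```shell
mount -t glusterfs deepthought:/music /mnt/music       # native GlusterFS client
mount -t nfs -o vers=3 deepthought:/music /mnt/music   # NFS access
```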

March 27, 2011 Posted by | Uncategorized | Leave a comment

Hello world!

Been wanting to get back to blogging for a while. Over the past year or so, I have frequently been inhibited by what can be expressed in 140 characters. Hence this new start. Let me explore how far this attempt goes!

January 21, 2011 Posted by | Uncategorized | Leave a comment