
Upgrading to GlusterFS 3.3.0

Now that GlusterFS 3.3.0 is out, here is a quick primer on upgrading from earlier versions. This howto covers upgrades from GlusterFS 3.1.x and 3.2.x.

1) GlusterFS 3.3.0 is not compatible with any earlier released version. Please make sure that you schedule downtime before you upgrade.

2) Stop all glusterd, glusterfs and glusterfsd processes running on all your servers and clients.

3) Take a backup of glusterd's working directory on all servers; glusterd's working directory is usually /etc/glusterd.
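
A minimal sketch of this step, assuming the default working directory and keeping the copy under /root (adjust the destination path to suit your setup):

    # cp -a /etc/glusterd /root/glusterd-backup-$(date +%Y%m%d)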

4) Install GlusterFS 3.3.0 on all your servers and clients. With 3.3.0, the default working directory has changed to /var/lib/glusterd. RPM and source installations move all content under /etc/glusterd to /var/lib/glusterd. On all servers, ensure that the directory structure remains consistent with the backup taken in step 3). New files are created as part of the upgrade process, so you may see some additional files; however, ensure that all files backed up in step 3) are present in /var/lib/glusterd.
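
One way to verify this, assuming the backup location from step 3), is to diff the backup against the new working directory while ignoring files that exist only on the new side:

    # diff -r /root/glusterd-backup-20120531 /var/lib/glusterd | grep -v 'Only in /var/lib/glusterd'

Any remaining output points at backed-up files that are missing or modified, which is worth investigating before proceeding.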

5) If you have installed from RPM, go to step 6). Else, start glusterd in upgrade mode; glusterd terminates after it performs the necessary upgrade steps. Restart glusterd normally after this termination. Essentially this process boils down to:

a) killall glusterd

b) glusterd --xlator-option *.upgrade=on -N

This will re-generate the volume files with the new 'index' translator, which is needed for features like proactive self-heal in 3.3.0.

c) Start glusterd

Ensure that you repeat a), b) and c) on all servers.
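
On a larger cluster this can be scripted; here is a rough sketch, assuming root ssh access, an init-script based distribution, and placeholder hostnames server1 through server4. The quotes around *.upgrade=on keep the shell from expanding the asterisk:

    for host in server1 server2 server3 server4; do
        ssh root@$host 'killall glusterd; glusterd --xlator-option "*.upgrade=on" -N; service glusterd start'
    done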

6) All your gluster services on the servers should be back online now. Mount your volumes with the GlusterFS 3.3.0 native client to access them.
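
For example, a native mount would look like this, with server1 and myvolume as placeholders for one of your servers and your volume name:

    # mkdir -p /mnt/myvolume
    # mount -t glusterfs server1:/myvolume /mnt/myvolume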

Enjoy your shiny new 3.3.0 and report your findings in #gluster on Freenode, bugzilla and c.g.o.

Please note that this may not work for all installations and upgrades. If you notice anything amiss and would like to see it covered here, please leave a comment and I will try to incorporate your findings.


May 31, 2012 | Posted by Vijay Bellur | Uncategorized

15 Comments

  1. Will there be a "hot" upgrade path / protocol compatibility in the future? We are using gluster for HA, and bringing down the entire infrastructure for an upgrade defeats the whole purpose of it.

    Comment by Previous | June 3, 2012 | Reply

    • Yes, protocol compatibility is something we have tried to be conscious about with this release. Going forward, we will definitely try to retain backward compatibility so that an online rolling upgrade is possible.

      Comment by Vijay Bellur | June 3, 2012 | Reply

  2. Vijay wrote, "RPM and source installations move all content under /etc/glusterd to /var/lib/glusterd." Do you mean to say that configuration files, which belong in /etc (unlike some files glusterd 3.2.x stores in /etc/glusterd, e.g., PID files created at runtime), will instead be stored in a subdirectory of /var/lib?

    Comment by pmocek | June 5, 2012 | Reply

    • No. /etc/glusterfs has (somewhat) user-configurable files (though I don't know of anybody who's altered them since 3.0).

      /etc/glusterd had files that were created and managed by glusterd, the management interface. These are now correctly placed under the runtime state directory, /var/lib/glusterd. They are not intended to be edited directly by the administrator.
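
      For illustration, on an upgraded 3.3 server the runtime state directory might contain entries along these lines (the exact contents vary by setup and the features in use):

      # ls /var/lib/glusterd
      glusterd.info  nfs  peers  vols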

      Comment by Joe Julian | June 22, 2012 | Reply

  3. In item 5 b) above, the xlator-option flag should use two hyphens (--) and not one (-).

    Comment by rseverorsevero | June 5, 2012 | Reply

  4. Trying again: use two adjacent hyphens, like - -, but without the space between them.

    Comment by rseverorsevero | June 5, 2012 | Reply

  5. Hi, I tried to upgrade from 3.2.1 to 3.3.0. It looked fine at first sight. But I found that adding new 3.3.0 nodes to the upgraded cluster fails. It’s probably because 3.2.1’s volume config lacks “username/password” authentication entries in “info” and “…vol”. My workaround is:

    # gluster vol stop <volname>
    # service glusterd stop
    Modify the above files by hand (adding username/password auth entries; a sketch follows below).
    # service glusterd start
    # gluster vol reset <volname>
    # gluster vol start <volname>
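
    (As a sketch of what those entries look like, with <uuid-1> and <uuid-2> standing in for the generated values: the "info" file gains lines like

    username=<uuid-1>
    password=<uuid-2>

    and each brick volfile gains matching options in its protocol/server section, like

    option auth.login.<brick-path>.allow <uuid-1>
    option auth.login.<uuid-1>.password <uuid-2>

    Your own values will differ.)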

    Now, I can add new 3.3.0 nodes with:
    # gluster peer probe

    Do you think this works well for all cases?

    Comment by Etsuji Nakai | June 23, 2012 | Reply

  6. Hi, I followed the whole process as stated above, from 3.2.6 to 3.3.1, on 16 nodes (CentOS 6.2 64-bit, distributed with 2 replicas, single volume).
    I do not know why, but I had to restart some of the servers for the whole system to work properly. The client could not connect to some servers, even though the servers (peers) saw each other according to peer status. The existing migrated volume had been created in two different steps in the past (i.e. half of the servers joined 3 months later), and I had to restart the servers that joined in the second step. After restarting, everything worked. The downtime was less than an hour. It was useful to watch the client log of the affected volume for warnings.
    I hope this helps others.

    Comment by DoubleB | November 12, 2012 | Reply

  7. The upgrade step isn't working for me (going from 3.2.5 to 3.3.1); this command just seems to be wrong:

    # glusterd –xlator-option *.upgrade=on -N
    Usage: glusterd [OPTION...] --volfile-server=SERVER [MOUNT-POINT]
    or: glusterd [OPTION...] --volfile=VOLFILE [MOUNT-POINT]
    Try `glusterd --help' or `glusterd --usage' for more information.

    I tried adding a '--volfile=/etc/glusterd/vols/shared/shared-fuse.vol' option but it didn't make any difference. It's not clear which volfile I'm supposed to use, as there are several with similar names, and I only have one 4G volume with 2 nodes!

    I have an /etc/glusterfs folder as well as /etc/glusterd, and Gluster 3.2.5 seems to be updating things in both. Is that expected?

    Comment by Marcus Bointon | March 1, 2013 | Reply

    • > # glusterd –xlator-option *.upgrade=on -N

      I am sorry for the inconvenience caused. This was a result of WordPress automatically converting double hyphens to a single dash. I have updated the post to work around this auto-conversion.

      Comment by Vijay Bellur | March 18, 2013 | Reply

  8. […] can follow the procedure listed here and replace 3.3.0 with 3.4.0 in the […]

    Pingback by Upgrading to GlusterFS 3.4 « Vijay Bellur | July 15, 2013 | Reply

  9. This didn’t go well for me.

    I tried this on a 4-node OpenSUSE 11.3 cluster (coming from 3.2.7) and I had two issues:

    1/ the install hung on the RPMs (using the Enterprise 11 SP3 RPMs).
    2/ the command:
    $ glusterd --xlator-option *.upgrade=on -N

    ...just hung too (and didn't provide any progress, etc.).

    At this point I decided to revert to 3.2.7 on the first server and halt the upgrade.

    Questions:
    1/ How long should the xlator process take? (I assume super fast if it is only sed-ing a config file.)
    2/ What exact changes need to be put into the new index files? Can these changes be inserted by hand?

    Many thanks, Paul

    Comment by Paul Simpson | February 12, 2014 | Reply

    • Did you try to figure out why the RPM installation hung? That seems unusual.

      1. The glusterd command just re-generates the volume files and gets out of the way. It should not hang or take several minutes. Running strace -f on glusterd or looking at the log files might provide a clue.

      2. For introducing the index translator, you can try creating a new volume using GlusterFS 3.3 binaries and check the brick volume files in /var/lib/glusterd/vols/<volname>/<volname>.<brick>.vol (the placeholders stand for your volume name and brick). These will contain the index translator, and you could edit the brick volume files of your existing volumes in a similar fashion. I do not have ready access to a 3.3 cluster right now; here is how the index translator definition looks with 3.5:

      volume gr1-index
          type features/index
          option index-base <brick-path>/.glusterfs/indices
          subvolumes gr1-io-threads
      end-volume

      In my case, the parent of the index translator happens to be the marker translator. In 3.3, IIRC, the parent of the index translator would be the io-stats translator.

      HTH

      Comment by Vijay Bellur | February 23, 2014 | Reply

      • Thanks Vijay for your answer.

        This is a live system with some very important data, and I only have a very small window in which I can upgrade. In light of that, I am disinclined to touch my file system given your answer (which I appreciate), as it involves debugging, reverse-engineering newer configurations and some guesswork. I just can't afford to put mission-critical data through that kind of process.

        I am now looking for an alternative file system in which I have more faith.

        Best Wishes,

        Paul

        Comment by paul | February 23, 2014

      • Appreciate your response.

        This is the first comment I have received, in close to 20 months since I wrote this post, that mentions a hang during an RPM upgrade. Hence I was interested in understanding your problem better. I don't think there is any reverse engineering or guesswork that would put your data at risk.

        Good luck in your search for an alternative file system and thanks for your inputs so far.

        Cheers,
        Vijay

        Comment by Vijay Bellur | February 26, 2014

