GlusterFS
Original author(s) | Gluster
---|---
Developer(s) | Red Hat, Inc.
Stable release | 3.6.3[1] / April 27, 2015
Preview release | 3.7 beta1[2] / May 1, 2015
Operating system | Linux, Mac OS X, NetBSD, OpenSolaris
Type | Distributed file system
License | GNU General Public License v3[3]
Website | www.gluster.org
GlusterFS is a scale-out network-attached storage file system. It has found applications in cloud computing, streaming media services, and content delivery networks. GlusterFS was originally developed by Gluster, Inc., and then by Red Hat, Inc., following its purchase of Gluster in 2011.
In June 2012, Red Hat Storage Server was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux.[4] After acquiring Inktank Storage (the company behind the Ceph distributed file system) in April 2014, Red Hat rebranded the GlusterFS-based Red Hat Storage Server as "Red Hat Gluster Storage".[5]
Design
GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 while others are dual-licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.
GlusterFS has client and server components. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a volume. The glusterfs client process, which connects to servers with a custom protocol over TCP/IP, InfiniBand or Sockets Direct Protocol, creates composite virtual volumes from multiple remote servers using stackable translators. By default, files are stored whole, but striping of files across multiple remote volumes is also supported. The final volume may then be mounted by the client host using its own native protocol via the FUSE mechanism, mounted using the NFS v3 protocol via a built-in server translator, or accessed through the gfapi client library. Native-protocol mounts may then be re-exported, e.g. via the kernel NFSv4 server, Samba, or the object-based OpenStack Object Storage (Swift) protocol using the "UFO" (Unified File and Object) translator.
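As a concrete sketch of that workflow (the host names, volume name, and brick paths here are hypothetical), a two-server mirrored volume could be created with the gluster CLI and then mounted natively over FUSE, or via the built-in NFS v3 translator:

```sh
# Run on server1: add server2 to the trusted storage pool
gluster peer probe server2

# Create a volume that mirrors one brick (a local directory) per server
gluster volume create demo-vol replica 2 \
    server1:/data/brick1 server2:/data/brick1

# Start the volume so each server's glusterfsd begins exporting its brick
gluster volume start demo-vol

# Run on a client: mount the composite volume natively via FUSE
mount -t glusterfs server1:/demo-vol /mnt/demo

# Or mount through the built-in NFS v3 server translator instead
mount -t nfs -o vers=3 server1:/demo-vol /mnt/demo
```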
Most of the functionality of GlusterFS is implemented as translators (a sample translator stack is sketched after this list), including:
- File-based mirroring and replication
- File-based striping
- File-based load balancing
- Volume failover
- Scheduling and disk caching
- Storage quotas
- Volume snapshots with user serviceability (since GlusterFS version 3.6)
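To illustrate how translators stack, the following is a minimal hand-written client volfile of the style used before the 3.1 management CLI, which now generates such files automatically; all host names and paths are hypothetical:

```
# Hypothetical client volfile: two remote bricks joined by a mirror
volume remote1
  type protocol/client
  option remote-host server1            # storage server exporting a brick
  option remote-subvolume /data/brick1
end-volume

volume remote2
  type protocol/client
  option remote-host server2
  option remote-subvolume /data/brick1
end-volume

# File-based mirroring/replication (the AFR translator)
volume mirror
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
```

Each volume block defines a translator instance, and the subvolumes line stacks one translator on top of others; this stacking is the mechanism behind the features listed above.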
The GlusterFS server is intentionally kept simple: it exports an existing directory as-is, leaving it up to client-side translators to structure the store. The clients themselves are stateless, do not communicate with each other, and are expected to have translator configurations consistent with each other. GlusterFS relies on an elastic hashing algorithm, rather than using either a centralized or distributed metadata model. With version 3.1 and later of GlusterFS, volumes can be added, deleted, or migrated dynamically, helping to avoid configuration coherency problems, and allowing GlusterFS to scale up to several petabytes on commodity hardware by avoiding bottlenecks that normally affect more tightly-coupled distributed file systems.
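For example (reusing the hypothetical names from the earlier sketch), capacity can be grown online by adding bricks and then rebalancing, which redistributes existing files according to the elastic hash:

```sh
# Add two more bricks (one replica pair) to the running volume
gluster volume add-brick demo-vol server3:/data/brick1 server4:/data/brick1

# Redistribute existing files across the enlarged layout
gluster volume rebalance demo-vol start
gluster volume rebalance demo-vol status
```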
GlusterFS has been used as the foundation for academic research[6][7] and a survey article.[8]
Red Hat markets the software for three use cases: on-premises, public cloud, and private cloud.[9]
See also
- Global Network Block Device (GNBD)
References
- ^ "glusterfs-3.6.3 released". www.gluster.org. 27 Apr 2015. Retrieved 5 May 2015.
- ^ "GlusterFS 3.7 beta1 RPMs are available". blog.gluster.org. 1 May 2015. Retrieved 5 May 2015.
- ^ "Gluster 3.1: Understanding the GlusterFS License". Gluster Documentation. Gluster.org. Retrieved 2014-04-30.
- ^ Timothy Prickett Morgan (June 27, 2012). "Red Hat Storage Server NAS takes on Lustre, NetApp". The Register. Retrieved May 30, 2013.
- ^ "Red Hat Storage. New product names. Same great features". redhat.com. Mar 20, 2015. Retrieved 2015-03-20.
- ^ Noronha, Ranjit; Panda, Dhabaleswar K (9–12 September 2008). IMCa: A High Performance Caching Front-End for GlusterFS on InfiniBand (PDF). 37th International Conference on Parallel Processing (ICPP '08). IEEE. doi:10.1109/ICPP.2008.84. Retrieved 14 June 2011.
- ^ Kwidama, Sevickson (2007–2008). Streaming and storing CineGrid data: A study on optimization methods (PDF). University of Amsterdam System and Network Engineering. Retrieved 10 June 2011.
- ^ Klaver, Jeroen; van der Jagt, Roel (14 July 2010). Distributed file system on the SURFnet network Report (PDF). University of Amsterdam System and Network Engineering. Retrieved 9 June 2012.
- ^ "Red Hat Storage Server". Red Hat. Retrieved 30 May 2013.
External links
- Official website
- Linux Magazine article
- Networkworld article
- DevOpsAngle article (about use for NASA's Curiosity website)