Commit graph

40 commits

Author SHA1 Message Date
manu
71f3221eb6 Update filesystems/glusterfs to 8.0 2020-07-07 01:01:27 +00:00
manu
187866a34d Updated filesystems/glusterfs to 3.12.8
This is a maintenance release
2018-04-19 02:49:04 +00:00
manu
970768df39 Update to glusterfs 3.12.3
There is an important performance bug fix specific to NetBSD here,
which disables gfid2path by default. This feature causes a huge
number of distinct extended attributes to be created, and the
NetBSD implementation does not scale well with it.

In order to recover a server after the feature is disabled, stop
the glusterfs daemons, disable extended attributes using extattrctl,
remove ${BRICK_ROOT}/.attribute/system/trusted.gfid2path.*,
then re-enable extended attributes and restart glusterfs.
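For illustration only, the recovery might look roughly like this on
a brick server, assuming the brick sits at the root of its own UFS
file system and reusing ${BRICK_ROOT} from the note above (how the
gluster daemons are stopped and restarted depends on the local setup):

  # stop glusterd and the brick (glusterfsd) processes first,
  # then turn off extended attribute support on the brick file system
  extattrctl stop ${BRICK_ROOT}
  # remove the backing files for the gfid2path extended attributes
  rm -f ${BRICK_ROOT}/.attribute/system/trusted.gfid2path.*
  # turn extended attributes back on and restart the gluster daemons
  extattrctl start ${BRICK_ROOT}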
2017-11-15 03:31:56 +00:00
manu
f67e90743f Update to glusterfs 3.12.2
There have been a lot of changes since the previous package version (3.8.9)
See http://docs.gluster.org/en/latest/release-notes/ for an overview
2017-11-04 03:46:56 +00:00
manu
de4056f369 Update glusterfs to 3.8.2
This is a maintenance release
2016-08-11 03:43:48 +00:00
manu
5ab65b7331 Update to glusterfs 3.8.0
From http://blog.gluster.org/2016/06/glusterfs-3-8-released/

Gluster.org announces the release of 3.8 on June 14, 2016, marking
a decade of active development.

The 3.8 release focuses on:
- containers with inclusion of Heketi
- hyperconvergence
- ecosystem integration
- protocol improvements with NFS Ganesha

Contributed features are marked with the supporting organizations.


  Automatic conflict resolution, self-healing improvements (Facebook)

    Synchronous replication receives a major boost with features
    contributed by Facebook. Multi-threaded self-healing makes
    self-heal perform at a faster rate than before. Automatic
    conflict resolution ensures that conflicts due to network
    partitions are handled without the need for administrative
    intervention.



  NFSv4.1 (Ganesha) - protocol

    Gluster's native NFSv3 server is disabled by default with this
    release. Gluster's integration with NFS Ganesha provides NFSv3,
    v4, and v4.1 access to data stored in Gluster volumes.



  BareOS - backup / data protection

    Gluster 3.8 is ready for integration with BareOS 16.2. BareOS
    16.2 leverages glusterfind for intelligently backing up objects
    stored in a Gluster volume.



  "Next generation" tiering and sharding - VM images

    Sharding is now stable for VM image storage. Geo-replication
    has been enhanced to integrate with sharding for offsite
    backup/disaster recovery of VM images. Self-healing and data
    tiering with sharding make it an excellent candidate for
    hyperconverged virtual machine image storage.



  block device & iSCSI with LIO - containers

    File-backed block devices are usable from Gluster through iSCSI.
    This release of Gluster integrates with tcmu-runner
    [https://github.com/agrover/tcmu-runner] to access block devices
    natively through libgfapi.



  Heketi - containers, dynamic provisioning

    Heketi provides the ability to dynamically provision Gluster
    volumes without administrative intervention. Heketi can manage
    multiple Gluster clusters and will be the cornerstone for
    integration with Container and Storage as a Service management
    ecosystems.



  glusterfs-coreutils (Facebook) - containers

    Native coreutils for Gluster, developed by Facebook, that use
    libgfapi to interact with gluster volumes. Useful for systems
    and containers that do not have FUSE.



For more details, our release notes are included:
https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md

The release of 3.8 also marks the end of life for GlusterFS 3.5;
there will be no further updates for that version.
2016-06-16 04:01:02 +00:00
manu
44d0ca0820 Update filesystems/glusterfs to 3.7.11
This is a maintenance release
2016-04-19 03:12:42 +00:00
manu
e9764b0957 Update glusterfs to 3.7.10
This is a maintenance release
2016-04-02 03:33:51 +00:00
manu
477c0d3d9e Update glusterfs to 3.7.9
This is a maintenance release
2016-03-22 08:14:00 +00:00
manu
e13e422411 Update to glusterfs 3.7.8
This is a maintenance update fixing a regression introduced
in 3.7.7 that broke the self-heal feature.
2016-02-09 11:04:51 +00:00
manu
f5622994b2 Upgrade to 3.7.7
This is a maintenance release.
2016-02-01 14:27:02 +00:00
manu
cb6b03c901 Maintenance update of glusterfs to 3.7.6 2015-11-09 04:13:56 +00:00
manu
4f197d4763 Update filesystems/glusterfs to 3.7.5
This is a bugfix release
2015-10-10 13:25:09 +00:00
manu
8680d52027 Maintenance upgrade to 3.7.4 2015-09-01 16:02:54 +00:00
manu
1632a28ab7 Upgrade glusterfs to 3.7.3
This is a maintenance upgrade; the complete bugfix list is available
in the distribution ChangeLog.
2015-08-02 02:48:34 +00:00
manu
da950b8481 Upgrade to glusterfs 3.7.2
Complete list of changes since 3.7.1:
- doc: add 1233044, 1232179 in 3.7.2 release-notes
- features/bitrot: fix fd leak in truncate (stub)
- doc: add release notes for 3.7.2
- libgfchangelog: Fix crash in gf_changelog_process
- glusterd: Fix snapshot of a volume with geo-rep
- cluster/ec: Avoid parallel executions of the same state machine
- quota: fix double accounting with rename operation
- cluster/dht: Prevent use after free bug
- cluster/ec: Wind unlock fops at all cost
- glusterd: Buffer overflow causing crash for glusterd
- NFS-Ganesha: Automatically export vol that was exported before vol restart
- common-ha: cluster HA setup sometimes fails
- cluster/ec: Prevent double unwind
- quota/glusterd: porting to new logging framework.
- bitrot/glusterd: gluster volume set command for bitrot should not be supported
- tests: fix spurious failure in bug-857330/xml.t
- features/bitrot: tunable object signing waiting time value for bitrot
- quota: don't log error when disk quota exceeded
- protocol/client : porting log messages to new framework
- cluster/afr: Do not attempt entry self-heal if the last lookup on entry
  failed on src
- changetimerecorder : port log messages to a new framework
- tier/volume set: Validate volume set option for tier
- glusterd/tier: glusterd crashed with detach-tier commit force
- rebalance,store,glusterd/glusterd: porting to new logging framework.
- libglusterfs: Enabling the fini()  in cleanup_and_exit()
- sm/glusterd: Porting messages to new logging framework
- nfs: Authentication performance improvements
- common-ha: cluster HA setup sometimes fails
- glusterd: subvol_count value for replicate volume should be calculated
  correctly
- common-ha : Clean up cib state completely
- NFS-Ganesha : Return correct return value
- glusterd: Porting messages to new logging framework.
- glusterd: Stop tcp/ip listeners during  glusterd exit
- storage/posix: Handle MAKE_INODE_HANDLE failures
- cluster/ec: Prevent Null dereference in dht-rename
- doc: fix markdown formatting
- upcall: prevent busy loop in reaper thread
- protocol/server : port log messages to a new framework
- nfs.c nfs3.c: port log messages to a new framework
- logging: log "Stale filehandle" on the client as Debug
- snapshot/scheduler: Modified main() function to take arguments.
- tools/glusterfind: print message for good cases
- geo-rep: ignore symlink and hardlink errors in geo-rep
- tools/glusterfind: ignoring deleted files
- spec/geo-rep: Add rsync as dependency for georeplication rpm
- features/changelog: Do htime setxattr without XATTR_REPLACE flag
- tools/glusterfind: Cleanup glusterfind dir after a volume delete
- tools/glusterfind: Cleanup session dir after delete
- geo-rep: Validate use_meta_volume option
- spec: correct the vendor string in spec file
- tools/glusterfind: Fix GFID to Path conversion for dir
- libglusterfs: update glfs-message header for reserved segments
- features/qemu-block: Don't unref root inode
- features/changelog: Avoid setattr fop logging during rename
- common-ha: handle long node names and node names with '-' and '.' in them
- features/marker : Pass along xdata to lower translator
- tools/glusterfind: verifying volume is online
- build: fix compiling on older distributions
- snapshot/scheduler: Handle OSError in os. callbacks
- snapshot/scheduler: Check if GCRON_TASKS exists before
- features/quota: Fix ref-leak
- tools/glusterfind: verifying volume presence
- stripe: fix use-after-free
- Upcall/cache-invalidation: Ignore fops with frame->root->client not set
- rpm: correct date and order of entries in the %changelog
- nfs: allocate and return the hashkey for the auth_cache_entry
- doc: add release notes for 3.7.1
- snapshot: Fix finding brick mount path logic
- glusterd/snapshot: Return correct errno in events of failure - PATCH 2
- rpc: call transport_unref only on non-NULL transport
- heal : Do not invoke glfs_fini for glfs-heal commands
- Changing log level from Warning to Debug
- features/shard: Handle symlinks appropriately in fops
- cluster/ec: EC_XATTR_DIRTY doesn't come in response
- worm: Let lock, zero xattrop calls succeed
- bitrot/glusterd: scrub option should be disabled once bitrot option is
  reset
- glusterd/shared_storage: Provide a volume set option to create and mount
  the shared storage
- dht: Add lookup-optimize configuration option for DHT
- glusterfs.spec.in: move libgf{db,changelog}.pc from -api-devel to -devel
- fuse: squash 64-bit inodes in readdirp when enable-ino32 is set
- glusterd: do not show pid of brick in volume status if brick is down.
- cluster/dht: fix incorrect dst subvol info in inode_ctx
- common-ha: fix race between setting grace and virt IP fail-over
- heal: Do not call glfs_fini in final builds
- dht/rebalance : Fixed rebalance failure
- cluster/dht: Fix dht_setxattr to follow files under migration
- meta: implement fsync(dir)
- socket: throttle only connected transport
- contrib/timer-wheel: fix deadlock in del_timer()
- snapshot/scheduler: Return proper error code in case of failure
- quota: retry connecting to quotad on ENOTCONN error
- features/quota: prevent statfs frame loss when an error happens during
  ancestry
- features/quota : Make "quota-deem-statfs" option "on" by default, when
  quota is  enabled
- cluster/dht: pass a destination subvol to fop2 variants to avoid races.
- cli: Fix incorrect parse logic for volume heal commands
- glusterd: Bump op version and max op version for 3.7.2
- cluster/dht: Don't rely on linkto xattr to find destination subvol
- afr: honour selfheal enable/disable volume set options
- features/shard: Fix incorrect parameter to get_lowest_block()
- libglusterfs: Copy d_len and dict as well into dst dirent
- features/quota : Do unwind if postbuf is NULL
- cluster/ec: Fix incorrect check for iatt differences
- features/shard: Fix issue with readdir(p) fop
- glusterfs.spec.in: python-gluster should be 'noarch'
- glusterd: Bump op version and max op version for 3.7.1
- glusterd: fix repeated connection to nfssvc failed msgs
2015-06-20 03:43:04 +00:00
manu
9253d5e6f4 * Bitrot Detection
Bitrot detection is a technique used to identify an "insidious"
type of disk error where data is silently corrupted with no indication
from the disk to the storage software layer that an error has
occurred. When bitrot detection is enabled on a volume, gluster
performs signing of all files/objects in the volume and scrubs data
periodically for signature verification. All anomalies observed
will be noted in log files.


* Multi threaded epoll for performance improvements

Gluster 3.7 introduces multiple threads to dequeue and process more
requests from epoll queues. This improves performance by processing
more I/O requests. Workloads that involve read/write operations on
a lot of small files can benefit from this enhancement.


* Volume Tiering [Experimental]

Policy based tiering for placement of files. This feature will serve
as a foundational piece for building support for data classification.

Volume Tiering is marked as an experimental feature for this release.
It is expected to be fully supported in a 3.7.x minor release.

* Trashcan

This feature will enable administrators to temporarily store deleted
files from Gluster volumes for a specified time period.


* Efficient Object Count and Inode Quota Support

This improvement enables an easy mechanism to retrieve the number
of objects per directory or volume. Count of objects/files within
a directory hierarchy is stored as an extended attribute of a
directory. The extended attribute can be queried to retrieve the
count.

This feature has been utilized to add support for inode quotas.


* Pro-active Self healing for Erasure Coding

Gluster 3.7 adds pro-active self healing support for erasure coded
volumes.


* Exports and Netgroups Authentication for NFS

This feature adds Linux-style exports & netgroups authentication
to the native NFS server. This enables administrators to restrict
access to specific clients & netgroups for volume/sub-directory
NFSv3 exports.


* GlusterFind

GlusterFind is a new tool that provides a mechanism to monitor data
events within a volume. Detection of events like modified files is
made easier without having to traverse the entire volume.


* Rebalance Performance Improvements

Rebalance and remove-brick operations in Gluster get a performance
boost from faster identification of files needing movement and a
multi-threaded mechanism to move all such files.


* NFSv4 and pNFS support

Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and
pNFS. This support is enabled via NFS Ganesha. Infrastructure changes
done in Gluster 3.7 to support this feature include:

  - Addition of upcall infrastructure for cache invalidation.
  - Support for lease locks and delegations.
  - Support for enabling Ganesha through Gluster CLI.
  - Corosync and pacemaker based implementation providing resource
    monitoring and failover to accomplish NFS HA.

pNFS support for Gluster volumes and NFSv4 delegations are in beta
for this release. Infrastructure changes to support Lease locks and
NFSv4 delegations are targeted for a 3.7.x minor release.


* Snapshot Scheduling

With this enhancement, administrators can schedule volume snapshots.


* Snapshot Cloning

Volume snapshots can now be cloned to create a new writeable volume.
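For instance, cloning could look something like this (volume,
snapshot and clone names are made up, and the exact CLI syntax
should be checked against the 3.7 documentation):

  gluster snapshot create mysnap myvol
  gluster snapshot clone myclone mysnap
  gluster volume start myclone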


* Sharding [Experimental]

Sharding addresses the problem of fragmentation of space within a
volume. This feature adds support for files that are larger than
the size of an individual brick. Sharding works by chunking files
into blobs of a configurable size.

Sharding is an experimental feature for this release. It is expected
to be fully supported in a 3.7.x minor release.


* RCU in glusterd

Thread synchronization and critical-section access have been improved
by introducing userspace RCU in glusterd.


* Arbiter Volumes

Arbiter volumes are 3-way replicated volumes where the 3rd brick
of the replica is automatically configured as an arbiter. The 3rd
brick contains only metadata, which provides network partition
tolerance and prevents split-brains from happening.
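As a hypothetical example, such a volume could be created with
something like the following (hostnames and brick paths are made up,
and the exact syntax should be checked against the 3.7 CLI help):

  gluster volume create myvol replica 3 arbiter 1 \
      host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3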

Update to GlusterFS 3.7.1

* Better split-brain resolution

Split-brain resolution can now also be driven by users without
administrative intervention.


* Geo-replication improvements

There have been several improvements in geo-replication for stability
and performance.


* Minor Improvements

  - Message ID based logging has been added for several translators.
  - Quorum support for reads.
  - Snapshot names contain timestamps by default. Subsequent access
    to the snapshots should be done by the name listed in gluster
    snapshot list.
  - Support for gluster volume get <volname> added.
  - libgfapi has added handle based functions to get/set POSIX ACLs
    based on common libacl structures.
2015-06-02 03:44:16 +00:00
joerg
1e31591398 ec.la is installed when MMX is present and usable. This is the default
for amd64, so include it in the PLIST & bump revision.
2015-05-01 12:19:07 +00:00
manu
c1eb49606d Update glusterfs to 3.6.2
This is a maintenance release; the complete changelog can be found here:
http://blog.gluster.org/2015/01/glusterfs-3-6-2-ga-released/
2015-04-09 15:20:47 +00:00
joerg
922233cd83 On systems with MMX support, additional files are installed. 2015-01-11 23:07:01 +00:00
manu
dee70acf6e Upgrade to glusterfs 3.6.0
New features:

- Volume Snapshots
Distributed lvm thin-pool based snapshots for backing up volumes
in a Gluster Trusted Storage Pool. Apart from providing cluster-wide
co-ordination to trigger a consistent snapshot, several improvements
have been made throughout the GlusterFS stack to make translators
more crash consistent. Snapshotting of volumes is tightly coupled
with lvm today, but one could also enhance the same framework to
integrate with a backend storage technology like btrfs that can
perform snapshots.

- Erasure Coding
Xavier Hernandez from Datalab added support to perform erasure
coding of data in a GlusterFS volume across nodes in a Trusted
Storage Pool. Erasure Coding requires fewer nodes to provide better
redundancy than an n-way replicated volume and can help in reducing
the overall deployment cost. We look forward to building on this
foundation and delivering more enhancements in upcoming releases.

- Better SSL support
Multiple improvements to SSL support in GlusterFS. The GlusterFS
driver in OpenStack Manila that provides certificate based access
to tenants relies on these improvements.

- Meta translator
This translator provides a /proc-like view for examining the internal
state of translators on the client stack of a GlusterFS volume and
certainly looks like an interface that I would be heavily consuming
for introspection of GlusterFS.

- Automatic File Replication (AFR) v2
A significant refactor of the synchronous replication translator
provides granular entry self-healing and reduced resource consumption
with entry self-heals.

- NetBSD, OSX and FreeBSD ports
Lots of fixes on the portability front. The NetBSD port passes most
regression tests as of 3.6.0. At this point, none of these ports
are ready to be deployed in production. However, with dedicated
maintainers for each of these ports, we expect to reach production
readiness on these platforms in a future release.

Complete releases notes are available at
https://github.com/gluster/glusterfs/blob/release-3.6/doc/release-notes/3.6.0.md
2014-11-18 14:38:15 +00:00
joerg
e7448197c5 Use external argp. No longer installs glusterd rc script. Bump revision. 2014-08-13 22:37:37 +00:00
manu
dd871167e4 Upgrade to glusterfs 3.5.0
New features include:
 - File snapshots
 - On-wire compression/decompression
 - Quota scalability
 - Disk encryption
 - Brick failure detection
2014-04-18 08:31:20 +00:00
jperkin
45bc40abb4 Remove example rc.d scripts from PLISTs.
These are now handled dynamically if INIT_SYSTEM is set to "rc.d", or
ignored otherwise.
2014-03-11 14:04:57 +00:00
manu
159a4b0850 Update glusterfs to 3.4.1
Disable eager locks, which seem broken on NetBSD for glusterfs-3.4.x
2013-10-01 00:30:26 +00:00
manu
8a4df2778f Update glusterfs to 3.4.0. Here are the changes since 3.3.x
* Improvements for Virtual Machine Image Storage
A number of improvements have been made to let Gluster volumes provide
storage for Virtual Machine Images. Some of them include:
  - qemu / libgfapi integration.
  - Causal ordering in write-behind translator.
  - Tunables for a gluster volume in group-virt.example.

The above results in significant improvements in performance for VM image
hosting.


* Synchronous Replication Improvements
GlusterFS 3.4 features significant improvements in performance for
the replication (AFR) translator. This is in addition to bug fixes
for volumes that used replica 3.


* Open Cluster Framework compliant Resource Agents
Resource Agents (RA) plug glusterd into Open Cluster Framework
(OCF) compliant cluster resource managers, like Pacemaker.

The glusterd RA manages the glusterd daemon like any upstart or
systemd job would, except that Pacemaker can do it in a cluster-aware
fashion.

The volume RA starts a volume and monitors individual brick's
daemons in a cluster-aware fashion, recovering bricks when their
processes fail.


* POSIX ACL support over NFSv3
The setfacl and getfacl commands can now be used on an NFS mount
that exports a gluster volume to set or read POSIX ACLs.
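For example, on a Linux NFS client something along these lines
should work (server, volume, path and user names are made up):

  mount -t nfs -o vers=3 server:/myvol /mnt/myvol
  setfacl -m u:alice:rw /mnt/myvol/shared/report.txt
  getfacl /mnt/myvol/shared/report.txt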


* 3.3.x compatibility
The new op-version infrastructure provides compatibility with the
3.3.x release of GlusterFS. 3.3.x clients can talk to 3.4.x servers
and vice versa.

If a volume option that corresponds to 3.4 is enabled, then 3.3
clients cannot mount the volume.

* Packaging changes
New RPMs for libgfapi and OCF RA are present with 3.4.0.


* Experimental Features
  - RDMA-connection manager (RDMA-CM)
  - New Block Device translator
  - Support for NUFA

As experimental features, we don't expect them to work perfectly
for this release, but you can expect them to improve dramatically
as we make successive 3.4.x releases.


* Minor Improvements:
  - The Ext4 file system change which affected readdir workloads for
    Gluster volumes has been addressed.
  - More options for selecting read-child with afr available now.
  - Custom layouts possible with distribute translator.
  - No 32-aux-gid limit
  - SSL support for socket connections.
  - Known issues with replica count greater than 2 addressed.
  - quick-read and md-cache translators have been refactored.
  - open-behind translator introduced.
  - Ability to avoid glusterfs bind to reserved ports.
  - statedumps are now created in /var/run/gluster instead of /tmp by default.
2013-09-08 03:38:52 +00:00
rodent
94fbe74e04 '"@comment $NetBSD$" expected.' 2013-04-06 04:03:36 +00:00
manu
a3dacef61e Bump to glusterfs-3.3.1, which brings
- unified file and object storage
- storage for Hadoop (not tested here)
- proactive self-healing
- much better performance
2012-10-19 04:15:21 +00:00
manu
0158295504 Update glusterfs to 3.2.7, a maintenance release 2012-06-16 01:47:33 +00:00
manu
2031f5d692 Update glusterfs to 3.2.6, which is a maintenance release fixing various bugs 2012-03-28 14:24:59 +00:00
manu
f6e7eff08c - Add experimental support for SSL
- Ignore .attribute again (part of a patch that was lost in the last upgrade)
2011-12-09 16:57:44 +00:00
manu
44a0c8f696 Update to glusterfs 3.2.5. This is a bug-fix release 2011-11-28 08:42:38 +00:00
manu
e7368bdeb8 Missing commit for 3.2.3 update 2011-09-27 12:45:02 +00:00
manu
9b673348f6 Update to glusterfs 3.2.2 (maintenance release for bug fixes) 2011-07-23 01:14:43 +00:00
manu
69dbe9d48d Enable georeplication 2011-07-19 07:54:30 +00:00
manu
df25a8e3ea Upgrade to glusterfs-3.2.1
This release is mostly about bug fixes, and we also fix bugs in the NetBSD
port.
2011-07-08 08:02:56 +00:00
manu
0f18d7b3a9 Support the glusterd daemon (volume management tool). 2011-06-06 15:53:13 +00:00
manu
8cc50c73f8 Update glusterfs to 3.2. According to http://www.gluster.org, the new features are:
* Geo-Replication
* Easily Accessible Usage Quotas
* Advanced Monitoring Tools
2011-05-19 14:54:22 +00:00
manu
d084bf3f1c Update glusterfs to 3.1.4.
Major new features according to http://www.gluster.org/
- Elastic Volume Management
- New Gluster Console Manager
- Dynamic Storage for the data center and cloud
2011-04-18 16:19:47 +00:00
manu
90b5313922 This is an experimental port of glusterfs on NetBSD. Don't do this at home! 2010-08-26 14:26:18 +00:00