pkgsrc/filesystems/glusterfs/Makefile

# $NetBSD: Makefile,v 1.63 2016/03/22 08:14:00 manu Exp $
DISTNAME= glusterfs-3.7.9
#PKGREVISION= 1
CATEGORIES= filesystems
MASTER_SITES= http://bits.gluster.org/pub/gluster/glusterfs/src/
MAINTAINER= pkgsrc-users@NetBSD.org
HOMEPAGE= http://www.gluster.org/
COMMENT= Cluster filesystem
LICENSE= gnu-gpl-v3
GNU_CONFIGURE= yes
USE_LIBTOOL= yes
USE_TOOLS+= flex bison pkg-config bash
CONFIGURE_ARGS+= --disable-fusermount
CONFIGURE_ARGS+= --localstatedir=${VARBASE}
# Make sure we do not attempt to link with -lfl
# Only libfl.a is available, and libtool wants libfl.so
MAKE_FLAGS+= LEXLIB=""
PYTHON_VERSIONS_INCOMPATIBLE= 33 34 # only 2.x supported as of 3.6.0
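# Fix interpreter paths in the Python scripts installed by the package.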
REPLACE_PYTHON+= contrib/ipaddr-py/ipaddr.py
REPLACE_PYTHON+= extras/geo-rep/schedule_georep.py
REPLACE_PYTHON+= extras/snap_scheduler/gcron.py
REPLACE_PYTHON+= extras/snap_scheduler/snap_scheduler.py
REPLACE_PYTHON+= geo-replication/src/peer_mountbroker.in
REPLACE_PYTHON+= geo-replication/syncdaemon/changelogagent.py
REPLACE_PYTHON+= geo-replication/syncdaemon/gsyncd.py
REPLACE_PYTHON+= geo-replication/syncdaemon/gsyncdstatus.py
REPLACE_PYTHON+= tools/gfind_missing_files/gfid_to_path.py
REPLACE_PYTHON+= tools/glusterfind/S57glusterfind-delete-post.py
REPLACE_PYTHON+= tools/glusterfind/glusterfind.in
REPLACE_PYTHON+= tools/glusterfind/src/__init__.py
REPLACE_PYTHON+= tools/glusterfind/src/brickfind.py
REPLACE_PYTHON+= tools/glusterfind/src/changelog.py
REPLACE_PYTHON+= tools/glusterfind/src/changelogdata.py
REPLACE_PYTHON+= tools/glusterfind/src/conf.py
REPLACE_PYTHON+= tools/glusterfind/src/libgfchangelog.py
REPLACE_PYTHON+= tools/glusterfind/src/main.py
REPLACE_PYTHON+= tools/glusterfind/src/nodeagent.py
REPLACE_PYTHON+= tools/glusterfind/src/utils.py
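# Fix interpreter paths in the installed bash scripts.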
REPLACE_BASH+= extras/ganesha/ocf/ganesha_grace
REPLACE_BASH+= extras/ganesha/ocf/ganesha_mon
REPLACE_BASH+= extras/ganesha/ocf/ganesha_nfsd
REPLACE_BASH+= extras/ganesha/scripts/ganesha-ha.sh
REPLACE_BASH+= extras/geo-rep/generate-gfid-file.sh
REPLACE_BASH+= extras/geo-rep/get-gfid.sh
REPLACE_BASH+= extras/geo-rep/gsync-upgrade.sh
REPLACE_BASH+= extras/geo-rep/slave-upgrade.sh
REPLACE_BASH+= extras/peer_add_secret_pub.in
REPLACE_BASH+= extras/post-upgrade-script-for-quota.sh
REPLACE_BASH+= extras/pre-upgrade-script-for-quota.sh
REPLACE_BASH+= geo-replication/src/gverify.sh
REPLACE_BASH+= geo-replication/src/peer_gsec_create.in
REPLACE_BASH+= geo-replication/src/set_geo_rep_pem_keys.sh
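# GlusterFS hard-codes /etc/mtab; rewrite those references to /proc/mounts.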
SUBST_CLASSES+= mtab
SUBST_STAGE.mtab= post-build
SUBST_FILES.mtab= doc/mount.glusterfs.8
SUBST_FILES.mtab+= libglusterfs/src/compat.h
SUBST_FILES.mtab+= xlators/mount/fuse/utils/mount.glusterfs.in
SUBST_SED.mtab= -e "s,/etc/mtab,/proc/mounts,g"
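# Point hard-coded /etc/gluster paths at the pkgsrc configuration directory.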
SUBST_CLASSES+= etc
SUBST_STAGE.etc= pre-build
SUBST_FILES.etc+= libglusterfs/src/logging.c
SUBST_FILES.etc+= extras/ocf/volume
SUBST_FILES.etc+= doc/glusterfsd.8
SUBST_SED.etc= -e "s,/etc/gluster,${PREFIX}/etc/gluster,g"
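# Install glusterd.vol as an example file only; CONF_FILES copies it into place.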
SUBST_CLASSES+= vol
SUBST_STAGE.vol= post-build
SUBST_FILES.vol= extras/Makefile
SUBST_SED.vol= -e "/vol_DATA/s/glusterd.vol/glusterd.vol.sample/g"
EGDIR= ${PREFIX}/etc/glusterfs
CONF_FILES+= ${EGDIR}/glusterd.vol.sample ${EGDIR}/glusterd.vol
OWN_DIRS+= ${VARBASE}/log/glusterfs
BUILD_DEFS+= VARBASE
RCD_SCRIPTS= glusterd
PLIST_SRC= ${PLIST_SRC_DFLT}
PLIST_SUBST+= VARBASE=${VARBASE}
PLIST_SUBST+= PKG_SYSCONFDIR=${PKG_SYSCONFDIR}
PLIST_SUBST+= PYSITELIB=${PYSITELIB:Q}
MESSAGE_SRC= ${PKGDIR}/MESSAGE.${OPSYS}
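# Append a rule to extras/Makefile that generates glusterd.vol.sample,
# matching the vol_DATA substitution above.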
pre-build:
	cd ${WRKSRC}/extras && \
${ECHO} "glusterd.vol.sample: glusterd.vol" >> Makefile && \
${ECHO} " cp glusterd.vol glusterd.vol.sample" >> Makefile
post-install:
	${INSTALL_SCRIPT} ${DESTDIR}/sbin/mount_glusterfs \
		${DESTDIR}/${PREFIX}/sbin/mount_glusterfs
# Debug
CFLAGS+= -g
INSTALL_UNSTRIPPED= yes
CONFIGURE_ARGS+= --enable-debug
#.include "../../devel/boehm-gc/buildlink3.mk"
#CFLAGS+=-DGC_DEBUG
#CFLAGS+=-include gc.h
#LIBS+=-lgc
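# Pull in bsd.prefs.mk early so that variables such as OPSYS are
# defined before they are tested in the conditionals below.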
.include "../../mk/bsd.prefs.mk"
.include "../../security/openssl/buildlink3.mk"
.include "../../textproc/libxml2/buildlink3.mk"
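# Library and tool dependencies.  The roles noted here are assumptions
# based on upstream release notes: sqlite3 is believed to back the
# tiering metadata database, userspace-rcu is used by glusterd for
# thread synchronization, and Python is required by management tools
# such as glusterfind and the geo-replication helpers.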
.include "../../databases/sqlite3/buildlink3.mk"
.include "../../devel/userspace-rcu/buildlink3.mk"
.include "../../lang/python/application.mk"
.include "../../lang/python/extension.mk"
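# On NetBSD and FreeBSD, backtrace(3) and related functions live in
# libexecinfo rather than libc, so link it in when the header exists.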
.if (${OPSYS} == "NetBSD" || ${OPSYS} == "FreeBSD") && exists(/usr/include/execinfo.h)
LIBS+= -lexecinfo
.endif
.include "../../mk/bsd.pkg.mk"