Releases: rcbops/rpc-ceph
1.1.12
1.1.11
Release Notes
1.1.11
Upgrade Notes
- Bump ansible-alertmanager version from 0.13.7 to 0.13.8
- Bump rpc-maas version from 1.7.10 to 1.8.0
1.1.10
Release Notes
1.1.10
New Features
- Set openstack_config: True as the default so OpenStack pools and auth are created by default. For standalone deployments, openstack_config will need to be overridden to False to avoid unnecessary keys and pools.
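For a standalone deployment, a minimal override sketch might look like the following, assuming overrides are supplied through a user variables file (the path shown is illustrative, not a path rpc-ceph mandates):

    # e.g. group_vars/all/overrides.yml (path is an assumption)
    openstack_config: False   # skip creating OpenStack pools and auth keys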
Upgrade Notes
- Bump ceph-ansible version from v3.1.9 to v3.1.10
- Bump ansible-alertmanager version from 0.13.5 to 0.13.6
- Bump ceph-ansible version from v3.1.10 to v3.1.12
- Bump ansible-alertmanager version from 0.13.6 to 0.13.7
1.1.9
Release Notes
1.1.9
New Features
- Enabled repository pinning by default and pinned to luminous version 12.2.8.
1.1.8
Release Notes
1.1.8
Upgrade Notes
- Bump ceph-ansible version from v3.0.46 to v3.1.9
- Bump ceph-ansible version from v3.0.45 to v3.0.46
- Bump grafana version from 2.14.2 to 2.24.4
- Bump alertmanager version from 0.13.4 to 0.13.5
- Bump rpc-maas version from 1.7.7 to 1.7.9
1.1.7
Release Notes
1.1.7
Upgrade Notes
- Update the version of ansible-alertmanager from 0.13.3 to 0.13.4, and the version of rpc-maas from 1.7.5 to 1.7.6.
- Update defaults based on current ceph-ansible deployments:
  - Set vm.min_free_kbytes to 2GB (per RH guidance) for systems with 128GB RAM
  - Reduce rbd_cache_size from 128MB to 64MB
  - Reduce rbd_cache_max_dirty_age from 15s to 2s
  - Increase osd_heartbeat_min_size to 9000 to match the MTU
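The Ceph options above are plain ceph.conf settings, so a deployment wanting different values could carry them through ceph-ansible's ceph_conf_overrides; the sketch below simply restates the new defaults under that assumption, and the sysctl line is shown as a comment because the exact mechanism rpc-ceph uses to apply it is not spelled out here.

    # sketch only -- values are the 1.1.7 defaults described above
    ceph_conf_overrides:
      client:
        rbd_cache_size: 67108864        # 64MB, down from 128MB
        rbd_cache_max_dirty_age: 2      # seconds, down from 15
      osd:
        osd_heartbeat_min_size: 9000    # match a 9000 MTU
    # sysctl counterpart: vm.min_free_kbytes = 2097152 (2GB) on 128GB-RAM systems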
1.1.6
Release Notes
1.1.6
New Features
- Add new playbook to mirror a specific version of ceph and create local repo servers to pin ceph deployments to a specific version.
Upgrade Notes
- Version of ceph-ansible was incremented from v3.0.39 to v3.0.40. The group name was changed from iscsi-gws to iscsigws, the playbooks/rolling_update.yml value jewel_minor_update was changed to False, and new tasks were added in playbooks/rolling_update.yml: get osd versions, set_fact ceph_versions_osd and osd set sortbitwise.
- Increment the version of ceph-ansible from v3.0.40 to v3.0.42, and the version of ansible-alertmanager from 0.13.1 to 0.13.2.
- Update dependencies: ceph-ansible from v3.0.42 to v3.0.43 and ansible-alertmanager from 0.13.2 to 0.13.3. Tasks changed in playbook playbooks/purge-cluster.yml: remove data, get osd data and lockbox mount points. Tasks changed in playbook playbooks/rolling_update.yml: get osd numbers - non container and restart containerized ceph osd.
- Update the current ceph-ansible version from v3.0.43 to v3.0.45.
1.1.5
RPC-Ceph version 1.1.5
Release Notes
1.1.5
Upgrade Notes
- Bump prometheus_server-role version from 1.3.1 to 1.4.0
- Bump prometheus_node_exporter-role from 1.2.6 to 2.0.0
- Bump ansible-alertmanager from 0.11.6 to 0.13.1
- Pin openstack-ansible-lxc_hosts from stable/queens to 17.0.4
- Pin openstack-ansible-lxc_container_create from stable/queens to 17.0.4
- Pin openstack-ansible-apt_package_pinning from stable/queens to 17.0.4
- Pin openstack-ansible-pip_install from stable/queens to 17.0.4
- Pin openstack-ansible-os_keystone from stable/queens to 17.0.4
- Pin openstack-ansible-galera_client from stable/queens to 17.0.4
- Pin openstack-ansible-galera_server from stable/queens to 17.0.4
- Pin openstack-ansible-rabbitmq_server from stable/queens to 17.0.4
- Pin openstack-ansible-openstack_openrc from stable/queens to 17.0.4
- Pin openstack-ansible-openstack_hosts from stable/queens to 17.0.4
- Pin openstack-ansible-memcached_server from stable/queens to 17.0.4
- Changed the rpc-maas default flag to false
1.1.4
Release Notes
1.1.4
New Features
- Add alertmanager playbook to install alertmanager based on an upstream alertmanager role.
- Add basic prometheus alerting rules for ceph
- Add playbooks/check-for-config-changes.yml to check for changes to ceph configuration, sysctl tuning or transparent_hugepage enablement before running ceph-ansible. The playbook will output a list of differences from the current configuration so adjustments can be made to avoid unwanted changes and restarts.
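A typical invocation might look like the following; the inventory path is an assumption, and per the note above the playbook only reports differences rather than applying them.

    # run the drift check ahead of a ceph-ansible run (inventory path is an assumption)
    ansible-playbook -i inventory/hosts playbooks/check-for-config-changes.yml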
Upgrade Notes
- Adjust the defaults for Rados Gateway and Keystone integration. The RGW endpoint URLs now include the tenant_id, and the Rados Gateway settings include the rgw_swift_account_in_url and rgw_keystone_implicit_tenants options, which are now both set to true when integrating with OpenStack and Keystone.
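Both options named above are ordinary ceph.conf RGW settings, so a deployment needing to revert or adjust them could do so via ceph-ansible's ceph_conf_overrides; the section placement below is an assumption and the values shown are simply the new defaults.

    # sketch only -- values are the new 1.1.4 defaults
    ceph_conf_overrides:
      global:                               # or a per-RGW client section, depending on layout
        rgw_swift_account_in_url: true
        rgw_keystone_implicit_tenants: true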
1.1.3
Release Notes
1.1.3
New Features
- Add playbooks/add-mgr-modules.yml optional playbook to configure and enable Ceph mgr modules. This is a temporary playbook backported from ceph-ansible v3.1, and will be removed once ceph-ansible v3.1.0 is released and rpc-ceph moves to consume it. Specify the modules to utilise with the ceph_mgr_modules variable, which by default contains restful, status, balancer, and dashboard (see the override sketch after this list).
- Add playbooks/grafana.yml optional playbook to configure grafana against hosts specified in the grafana group in the inventory.
- Add playbooks/add_hosts.yml optional playbook to populate /etc/hosts on all hosts in the current play.
- Add playbooks/distribute-root-key.yml optional playbook to create and distribute root ssh keys as a non-root user.
- Add an haproxy VIP for the Ceph mgr dashboard.
- Add playbooks/prometheus.yml optional playbook to configure the prometheus server on hosts in the prometheus hosts group.
- Add playbooks/node_exporter.yml optional playbook to configure the Prometheus node exporter on all Ceph hosts.
- Add sample group_vars overrides files inside group_vars/<group_name>/overrides.yml.sample. These can be used as examples when considering how to override existing variables.
- Add playbooks/ssacli-install.yml optional playbook to install the HP ssacli tool against osd hosts.
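Referring back to the add-mgr-modules.yml item above, a minimal sketch of overriding the ceph_mgr_modules list follows; the file path is illustrative and the values shown are the documented defaults.

    # e.g. group_vars/mgrs/overrides.yml (path is an assumption)
    ceph_mgr_modules:
      - restful
      - status
      - balancer
      - dashboard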
Upgrade Notes
- Restructure the benchmark directory to contain a sub-directory for each benchmark type, currently fio_bench and rgw_bench, which each contain the playbooks for that type of benchmark.
- Moved to use ceph-ansible version 3.0.33. For more info please see the v3.0.33 ChangeLog.
- Moved to use ceph-ansible version 3.0.34. For more info please see the v3.0.34 ChangeLog.
- Add several osd settings to match current deployment defaults (see the sketch after this list):
  - osd_snap_trim_priority 1
  - osd_snap_trim_sleep 0.1
  - osd_pg_max_concurrent_snap_trims 1
  - osd_scrub_priority 1
  - osd_scrub_chunk_min 1
  - osd_scrub_chunk_max 5
- Update the max open files Ceph setting and the fs.file-max sysctl setting to be in line with current deployment defaults, changing from 262144 to 26234859.
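Referring back to the osd settings item above, the sketch below shows one way those defaults could be expressed through ceph-ansible's ceph_conf_overrides; the variable file rpc-ceph actually uses may differ, and the values are simply the defaults listed in this release.

    # sketch only -- values match the deployment defaults above
    ceph_conf_overrides:
      osd:
        osd_snap_trim_priority: 1
        osd_snap_trim_sleep: 0.1
        osd_pg_max_concurrent_snap_trims: 1
        osd_scrub_priority: 1
        osd_scrub_chunk_min: 1
        osd_scrub_chunk_max: 5
      global:
        max_open_files: 26234859    # paired with an fs.file-max sysctl of the same value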