RBD block device cannot be mapped
Problem scenario
rbd map test_image
rbd: sysfs write failed
rbd: map failed: (5) Input/output error
Running dmesg | tail shows:

mon1 xxxxxxx:6789 feature set mismatch, my XXXXXX < server's XXXXXX, missing 4000000000000
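The missing value in the kernel log is a hexadecimal bitmask of feature flags the client kernel lacks. A minimal sketch (the helper decode_missing_features is hypothetical, not part of any Ceph tooling) that lists which bit positions are set in such a mask:

```python
def decode_missing_features(mask_hex: str) -> list[int]:
    """Return the bit positions that are set in a hex feature mask."""
    mask = int(mask_hex, 16)
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

# The mask reported by dmesg above contains a single missing feature bit.
print(decode_missing_features("4000000000000"))  # -> [50]
```

Each set bit corresponds to one CEPH_FEATURE_* flag in the Ceph source tree; which named feature a given bit denotes depends on the release.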
Analysis:

The message tells us the feature sets of client and server do not match. Comparing the crush tunables of ceph 12.2.0 and ceph 12.1.3 with

ceph osd crush show-tunables

we can see:
[root@sqh0 ~]# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 1,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "jewel",
    "optimal_tunables": 1,
    "legacy_tunables": 0,
    "minimum_required_version": "jewel",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 1,
    "require_feature_tunables5": 1,
    "has_v5_rules": 0
}
Notice that "require_feature_tunables5" has become 1. The hypothesis was that this setting is the cause of the mismatch, so the goal is to change it back to 0.
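The same check can be done programmatically instead of by eye. A sketch, assuming the JSON shown above (trimmed here to the relevant fields) came from ceph osd crush show-tunables:

```python
import json

# Trimmed sample of the `ceph osd crush show-tunables` output from above.
sample = '{"profile": "jewel", "require_feature_tunables5": 1}'

tunables = json.loads(sample)
if tunables["require_feature_tunables5"] == 1:
    print("tunables5 required; older kernel clients cannot map RBD images")
```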
Solution

Run the following command:

 ceph osd crush tunables hammer

Run ceph osd crush show-tunables again and confirm that "require_feature_tunables5" is now 0.
After that, rbd map test_image succeeds. Note that switching the tunables profile changes CRUSH mappings, so on a cluster with existing data some rebalancing is to be expected.

Defining the pool application type

This is a new feature in the luminous release. When no application type is defined on a pool, you get the following warning:

[root@ceph01 mnt]# ceph -s
  cluster:
    id:     a1d6b1f2-b28f-44d0-bcba-0c4840935cbf
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph03(active), standbys: ceph02
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 128 pgs
    objects: 478 objects, 1882 MB
    usage:   11056 MB used, 49777 MB / 60833 MB avail
    pgs:     128 active+clean
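In machine-readable form, this warning appears as a health check in the JSON output of ceph -s --format json. A sketch over a hand-written excerpt of that JSON (field names as in luminous; the sample is illustrative, not captured from the cluster above):

```python
import json

# Hand-written excerpt resembling `ceph -s --format json` under HEALTH_WARN.
status = json.loads('''
{
  "health": {
    "checks": {
      "POOL_APP_NOT_ENABLED": {
        "severity": "HEALTH_WARN",
        "summary": {"message": "application not enabled on 1 pool(s)"}
      }
    }
  }
}
''')

# Print every active health check and its summary line.
for name, check in status["health"]["checks"].items():
    print(name, "->", check["summary"]["message"])
```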

How to set the pool application type

[root@controller1 ceph]# ceph osd pool application enable volumes rbd
enabled application 'rbd' on pool 'volumes'
Document last updated: 2020-10-31 00:31   Author: 月影鹏鹏