ceph command
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--id CLIENT_ID] [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [--admin-socket ADMIN_SOCKET_NOPE]
            [-s] [-w] [--watch-debug] [--watch-info] [--watch-sec]
            [--watch-warn] [--watch-error] [--version] [--verbose] [--concise]
            [-f {json,json-pretty,xml,xml-pretty,plain}]
            [--connect-timeout CLUSTER_TIMEOUT]
Ceph administration tool
optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help)
  --admin-socket ADMIN_SOCKET_NOPE
                        you probably mean --admin-daemon
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster
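
For example, a few common one-off invocations (illustrative only; they assume the
default cluster name "ceph", a readable /etc/ceph/ceph.conf, and an admin keyring):

    ceph -s                      # one-shot cluster status summary
    ceph -w                      # watch live cluster changes until interrupted
    ceph -f json-pretty df       # free-space stats as pretty-printed JSON
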
Monitor commands:
=================
auth add <entity> {<caps> [<caps>...]}  add auth info for <entity> from input
                                        file, or random key if no input given,
                                        and/or any caps specified in the
                                        command
auth caps <entity> <caps> [<caps>...]   update caps for <entity> from caps
                                        specified in the command
auth del <entity>                       delete all caps for <entity>
auth export {<entity>}                  write keyring for requested entity, or
                                        master keyring if none given
auth get <entity>                       write keyring file with requested key
auth get-key <entity>                   display requested key
auth get-or-create <entity> {<caps>     add auth info for <entity> from input
 [<caps>...]}                           file, or random key if no input given,
                                        and/or any caps specified in the
                                        command
auth get-or-create-key <entity> {<caps> get, or add, key for <entity> from
 [<caps>...]}                           system/caps pairs specified in the
                                        command. If key already exists, any
                                        given caps must match the existing
                                        caps for that key.
auth import                             auth import: read keyring file from -i
auth list                               list authentication state
auth print-key <entity>                 display requested key
auth print_key <entity>                 display requested key
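
For example, creating credentials for a hypothetical client named client.foo (the
entity name and capability strings below are illustrative, not defaults):

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=foo-pool'
    ceph auth list                          # confirm the new entity and its caps
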
compact                                 cause compaction of monitor's leveldb
                                        storage
config-key del <key>                    delete <key>
config-key exists <key>                 check for <key>'s existence
config-key get <key>                    get <key>
config-key list                         list keys
config-key put <key> {<val>}            put <key>, value <val>
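
For instance (the key name and value here are arbitrary examples):

    ceph config-key put example/greeting "hello"
    ceph config-key get example/greeting
    ceph config-key exists example/greeting
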
df {detail}                             show cluster free space stats
fsid                                    show cluster FSID/UUID
health {detail}                         show cluster health
heap dump|start_profiler|stop_profiler| show heap usage info (available only
 release|stats                          if compiled with tcmalloc)
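
For example:

    ceph health detail                      # itemized breakdown of cluster health
    ceph df detail                          # per-pool usage statistics
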
injectargs <injected_args>              inject config arguments into monitor
 [<injected_args>...]
log <logtext> [<logtext>...]            log supplied text to the monitor log
mds add_data_pool <pool>                add data pool <pool>
mds cluster_down                        take MDS cluster down
mds cluster_up                          bring MDS cluster up
mds compat rm_compat <feature>          remove compatible feature
mds compat rm_incompat <feature>        remove incompatible feature
mds compat show                         show mds compatibility settings
mds deactivate <who>                    stop mds
mds dump {<epoch>}                      dump info, optionally from epoch
mds fail <who>                          force mds to status failed
mds getmap {<epoch>}                    get MDS map, optionally from epoch
mds newfs <metadata> <data> {--yes-i-   make new filesystem using pools
 really-mean-it}                        <metadata> and <data>
mds remove_data_pool <pool>             remove data pool <pool>
mds rm <gid> <who>                      remove nonactive mds
mds rmfailed <who>                      remove failed mds
mds set max_mds|max_file_size|allow_new_ set mds parameter <var> to <val>
 snaps|inline_data <val> {<confirm>}
mds set_max_mds <maxmds>                set max MDS index
mds set_state <gid> <state>             set mds state of <gid> to <state>
mds setmap <epoch>                      set mds map; must supply correct epoch
                                        number
mds stat                                show MDS status
mds stop <who>                          stop mds
mds tell <who> <args> [<args>...]       send command to particular mds
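
For example, to raise the number of active MDS daemons and verify the result (the
value 2 is only an illustration):

    ceph mds set_max_mds 2
    ceph mds stat
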
mon add <name> <IPaddr[:port]>          add new monitor named <name> at <addr>
mon dump {<epoch>}                      dump formatted monmap (optionally from
                                        epoch)
mon getmap {<epoch>}                    get monmap
mon remove <name>                       remove monitor named <name>
mon stat                                summarize monitor status
mon_status                              report status of monitors
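
For example, adding a third monitor and reviewing the monmap (the monitor name and
address below are placeholders):

    ceph mon add c 192.168.0.12:6789
    ceph mon dump
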
osd blacklist add|rm <EntityAddr>       add (optionally until <expire> seconds
 {<expire>}                             from now) or remove <addr> from
                                        blacklist
osd blacklist ls                        show blacklisted clients
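
For instance, to blacklist a client address for one hour and then review the list
(the address is an example; the expiry is given in seconds):

    ceph osd blacklist add 192.168.0.50:0/1234567 3600
    ceph osd blacklist ls
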
osd create {<uuid>}                     create new osd (with optional UUID)
osd crush add <osdname (id|osd.id)>     add or update crushmap position and
 <weight> <args> [<args>...]            weight for <name> with <weight> and
                                        location <args>
osd crush add-bucket <name> <type>      add no-parent (probably root) crush
                                        bucket <name> of type <type>
osd crush create-or-move <osdname       create entry or move existing entry
 (id|osd.id)> <weight> <args>           for <name> <weight> at/to location
 [<args>...]                            <args>
osd crush dump                          dump crush map
osd crush link <name> <args> [<args>...] link existing entry for <name> under
                                        location <args>
osd crush move <name> <args> [<args>...] move existing entry for <name> to
                                        location <args>
osd crush remove <name> {<ancestor>}    remove <name> from crush map
                                        (everywhere, or just at <ancestor>)
osd crush reweight <name> <weight>      change <name>'s weight to <weight> in
                                        crush map
osd crush rm <name> {<ancestor>}        remove <name> from crush map
                                        (everywhere, or just at <ancestor>)
osd crush rule create-erasure <name>    create crush rule <name> for erasure
 {<profile>}                            coded pool created with <profile>
                                        (default default)
osd crush rule create-simple <name>     create crush rule <name> to start from
 <root> <type> {firstn|indep}           <root>, replicate across buckets of
                                        type <type>, using a choose mode of
                                        <firstn|indep> (default firstn; indep
                                        best for erasure pools)
osd crush rule dump {<name>}            dump crush rule <name> (default all)
osd crush rule list                     list crush rules
osd crush rule ls                       list crush rules
osd crush rule rm <name>                remove crush rule <name>
osd crush set                           set crush map from input file
osd crush set <osdname (id|osd.id)>     update crushmap position and weight
 <weight> <args> [<args>...]            for <name> to <weight> with location
                                        <args>
osd crush show-tunables                 show current crush tunables
osd crush tunables legacy|argonaut|     set crush tunables values to <profile>
 bobtail|firefly|optimal|default
osd crush unlink <name> {<ancestor>}    unlink <name> from crush map
                                        (everywhere, or just at <ancestor>)
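
As a sketch, placing a newly created OSD into the CRUSH hierarchy (the id, weight,
and location arguments below are illustrative):

    ceph osd crush add osd.12 1.0 root=default host=node3
    ceph osd crush show-tunables
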
osd deep-scrub <who>                    initiate deep scrub on osd <who>
osd down <ids> [<ids>...]               set osd(s) <id> [<id>...] down
osd dump {<epoch>}                      print summary of OSD map
osd erasure-code-profile get <name>     get erasure code profile <name>
osd erasure-code-profile ls             list all erasure code profiles
osd erasure-code-profile rm <name>      remove erasure code profile <name>
osd erasure-code-profile set <name>     create erasure code profile <name>
 {<profile> [<profile>...]}             with [<key[=value]> ...] pairs. Add a
                                        --force at the end to override an
                                        existing profile (VERY DANGEROUS)
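
For example, defining and inspecting a profile (the profile name and k/m values are
illustrative; the supported keys depend on the erasure code plugin in use):

    ceph osd erasure-code-profile set myprofile k=3 m=2
    ceph osd erasure-code-profile get myprofile
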
osd find <id>                           find osd <id> in the CRUSH map and
                                        show its location
osd getcrushmap {<epoch>}               get CRUSH map
osd getmap {<epoch>}                    get OSD map
osd getmaxosd                           show largest OSD id
osd in <ids> [<ids>...]                 set osd(s) <id> [<id>...] in
osd lost <id> {--yes-i-really-mean-it}  mark osd as permanently lost. THIS
                                        DESTROYS DATA IF NO MORE REPLICAS
                                        EXIST, BE CAREFUL
osd ls {<epoch>}                        show all OSD ids
osd lspools {<auid>}                    list pools
osd map <pool> <objectname>             find pg for <object> in <pool>
osd metadata <id>                       fetch metadata for osd <id>
osd out <ids> [<ids>...]                set osd(s) <id> [<id>...] out
osd pause                               pause osd
osd perf                                print dump of OSD perf summary stats
osd pg-temp <pgid> {<id> [<id>...]}     set pg_temp mapping pgid:[<id>
                                        [<id>...]] (developers only)
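
A couple of these in context (the osd id below is a placeholder):

    ceph osd out 3                          # mark osd.3 out before maintenance
    ceph osd in 3                           # bring it back in afterwards
    ceph osd find 3                         # locate osd.3 in the CRUSH map
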
osd pool create <poolname> <pg_num>     create pool
 {<pgp_num>} {replicated|erasure}
 {<erasure_code_profile>} {<ruleset>}
osd pool delete <poolname> {<poolname>} delete pool
 {--yes-i-really-really-mean-it}
osd pool get <poolname> size|min_size|  get pool parameter <var>
 crash_replay_interval|pg_num|pgp_num|
 crush_ruleset|hit_set_type|hit_set_
 period|hit_set_count|hit_set_fpp|auid|
 target_max_objects|target_max_bytes|
 cache_target_dirty_ratio|cache_target_
 full_ratio|cache_min_flush_age|cache_
 min_evict_age|erasure_code_profile
osd pool get-quota <poolname>           obtain object or byte limits for pool
osd pool mksnap <poolname> <snap>       make snapshot <snap> in <pool>
osd pool rename <srcpool> <destpool>    rename <srcpool> to <destpool>
osd pool rmsnap <poolname> <snap>       remove snapshot <snap> from <pool>
osd pool set <poolname> size|min_size|  set pool parameter <var> to <val>
 crash_replay_interval|pg_num|pgp_num|
 crush_ruleset|hashpspool|hit_set_type|
 hit_set_period|hit_set_count|hit_set_
 fpp|debug_fake_ec_pool|target_max_
 bytes|target_max_objects|cache_target_
 dirty_ratio|cache_target_full_ratio|
 cache_min_flush_age|cache_min_evict_
 age|auid <val> {--yes-i-really-mean-it}
osd pool set-quota <poolname>           set object or byte limit on pool
 max_objects|max_bytes <val>
osd pool stats {<poolname>}             obtain stats from all pools, or from
                                        specified pool
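
A typical replicated-pool workflow, for illustration (the pool name and placement
group counts are examples; choose pg_num to suit the cluster):

    ceph osd pool create mypool 128 128 replicated
    ceph osd pool set mypool size 3
    ceph osd pool get mypool pg_num
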
osd primary-affinity <osdname           adjust osd primary-affinity from 0.0
 (id|osd.id)> <weight [0.0-1.0]>        <= <weight> <= 1.0
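
For instance, to make one OSD less likely to be chosen as a primary (the id and
value are examples; depending on the release, the monitors may need to be
configured to allow primary-affinity changes before this is accepted):

    ceph osd primary-affinity osd.7 0.5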