Saturday, February 2, 2019

rsync over internet using ssh (with bandwidth limit)



rsync -v -a --bwlimit=2000 -e "ssh -p13022" user@125.17.86.111:/home/trident/test Desktop/.
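
A quick breakdown of the flags: -a is archive mode (recursive, preserves permissions, ownership and timestamps), -v is verbose, --bwlimit=2000 caps the transfer at about 2000 KBytes/s so it doesn't saturate the link, and -e "ssh -p13022" runs rsync over ssh on a non-standard port. For flaky WAN links I'd also add --partial --progress (both standard rsync flags) so interrupted transfers can resume. The IP, port, and paths above are from my setup; substitute your own.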

Sunday, October 7, 2018

ceph links

http://blog.ruanbekker.com/blog/2018/06/13/setup-a-3-node-ceph-storage-cluster-on-ubuntu-16/

https://linoxide.com/ubuntu-how-to/create-ceph-cluster-ubuntu-16-04/

https://www.twoptr.com/2018/05/installing-ceph-luminous.html

https://gist.github.com/vanduc95/a17f80d0636badd9aa002f2b493b777b

https://www.howtoforge.com/tutorial/how-to-install-a-ceph-cluster-on-ubuntu-16-04/

my ceph command history

  496  sudo apt install ceph-deploy
  497  ls
  498  ceph-deploy new cephstor1 cephstor2 cephstor3
  499  ls
  500  cat ceph.conf
  501  vi  ceph.conf
  502  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  503  vi /etc/apt/sources.list.d/ceph.list
  504  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  505  apt-get update | grep ceph
  506  sudo apt-get update | grep ceph
  507  ls
  508  cat release.asc
  509  vi /etc/apt/sources.list.d/ceph.list
  510  sudo vi /etc/apt/sources.list.d/ceph.list
  511  sudo apt clean
  512  ceph-deploy install --release luminous cephstor1
  513  sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ {codename} main'
  514  sudo cat /etc/apt/sources.list.d/ceph.list
  515  apt clean
  516  sudo apt clean
  517  ceph-deploy install --release luminous cephstor1
  518  lsb_release -sc
  519  sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ xenial  main'
  520  sudo cat /etc/apt/sources.list.d/ceph.list
  521  sudo apt clean
  522  ceph-deploy install --release luminous cephstor1
  523  sudo vi  /etc/apt/sources.list.d/ceph.list
  524  sudo apt clean
  525  ceph-deploy install --release luminous cephstor1
  526  sudo vi  /etc/apt/sources.list.d/ceph.list
  527  sudo apt clean
  528  ceph-deploy install --release luminous cephstor1
  529  sudo apt-get autoclean
  530  ceph-deploy install --release luminous cephstor1
  531  sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ {codename} main'
  532  sudo apt clean
  533  sudo apt-get autoclean
  534  sudo vi  /etc/apt/sources.list.d/ceph.list
  535  history |grep wget
  536  ls
  537  cat release.asc
  538  sudo apt-add-repository 'deb https://download.ceph.com/debian-{release-name}/ {codename} main'
  539  ceph-deploy install --release luminous cephstor1
  540  sudo su -
  541  ls /etc/apt/sources.list -l
  542  scp cephstor2:/etc/apt/sources.list .
  543  cat sources.list
  544  ls
  545  sudo cat sources.list > /etc/apt/sources.list
  546  sudo mv sources.list /etc/apt/sources.list
  547  cat /etc/apt/sources.list
  548  sudo apt autoclean
  549  sudo apt update
  550  apt list --upgradable
  551  ceph-deploy install --release luminous cephstor1
  552  apt-get -y install ceph-deploy ceph-common ceph-mds
  553  sudo apt-get -y install ceph-deploy ceph-common ceph-mds
  554  sudo apt-get -y install ceph-deploy ceph-common ceph-mds ; ssh cephstor2 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds" ssh cephstor3 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds"
  555  ssh cephstor2 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds"
  556  sudo apt-get -y install ceph-deploy ceph-common ceph-mds ; ssh cephstor2 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds" ; ssh cephstor3 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds"
  557  sudo vi  /etc/apt/sources.list.d/ceph.list
  558  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  559  sudo apt install ceph-osd
  560  sudo apt install  python-ceph
  561  sudo vi  /etc/apt/sources.list.d/ceph.list
  562  sudo apt install ceph-osd
  563  sudo apt autoclean
  564  sudo apt update
  565  sudo apt install ceph-osd
  566  sudo vi  /etc/apt/sources.list.d/ceph.list
  567  sudo apt autoclean
  568  sudo apt install ceph-osd
  569  sudo apt install *osd
  570  sudo vi  /etc/apt/sources.list.d/ceph.list
  571  vi /etc/apt/sources.list.d/ceph.list
  572  sudo vi /etc/apt/sources.list.d/ceph.list
  573  ssh cephstor2
  574  sudo su -
  575  cat /etc/sudoers.d/cephuser
  576  sudo cat /etc/sudoers.d/cephuser
  577  sudo cat /etc/sudoers.d/ceph
  578  sudo vi /etc/apt/sources.list.d/ceph.list
  579  sudo apt install ceph-osd
  580  sudo apt autoclean
  581  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  582  sudo vi /etc/apt/sources.list.d/ceph.list
  583  sudo apt autoclean
  584  ssh cephstor2 \ cat /etc/apt/sources.list.d/ceph.list
  585  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  586  history |grep release
  587  sudo apt-add-repository 'deb https://download.ceph.com/debian-{release-name}/ {codename} main'
  588  history |grep repo
  589  ssh cephstor2 \ cat /etc/apt/sources.list.d/ceph.list
  590  vi /etc/apt/sources.list.d/ceph.list
  591  sudo vi /etc/apt/sources.list.d/ceph.list
  592  sudo apt autoclean
  593  sudo vi /etc/apt/sources.list.d/ceph.list
  594  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  595  sudo vi /etc/apt/sources.list.d/ceph.list
  596  ceph-deploy install --release kraken cephstor1 cephstor2 cephstor3
  597  ceph-deploy purge cephstor1 cephstor2 cephstor3
  598  ceph-deploy purgedata cephstor1 cephstor2 cephstor3
  599  ceph-deploy forgetkeys
  600  ssh cephstor2
  601  ssh cephstor3
  602  ssh cephstor2
  603  ssh cephstor1
  604  screen -ls
  605  screen
  606  exit
  607  ssh cephstor2
  608  ssh cephstor3
  609  history |grep ceph-deploy
  610  ssh cephstor3
  611  history |grep ceph-deploy
  612  screen
  613  ceph-deploy purgedata cephstor1 cephstor2 cephstor3
  614  ceph-deploy purge cephstor1 cephstor2 cephstor3
  615  ceph-deploy purgedata cephstor1 cephstor2 cephstor3
  616  ceph-deploy forgetkeys
  617  ceph-deploy purgedata cephstor1 cephstor2 cephstor3
  618  sudo apt-get -y install python-ceph
  619  vi ~/.ssh/config
  620  cat /etc/sudoers.d/cephuser
  621  sudo cat /etc/sudoers.d/cephuser
  622  sudo apt-get -y install ceph-deploy ceph-common ceph-mds
  623  sudo apt-get -y install ceph-deploy ceph-common ceph-mds python-minimal ; ssh cephstor2 \ "sudo apt-get -y install ceph-common ceph-mds python-minimal" ; ssh cephstor3 \ "sudo apt-get -y install ceph-common ceph-mds python-minimal"
  624  mkdir ceph
  625  ls
  626  cd ceph/
  627  ls
  628  ceph-deploy new cephstor1 cephstor2 cephstor3
  629  cat /etc/apt/sources.list.d/ceph.list
  630  sudo apt-get upgrade
  631  vi ceph.conf
  632  ceph-deploy install --release luminous cephstor1 cephstor2 cephstor3
  633  ceph-deploy install --release luminous cephstor1
  634  ceph-deploy install --release luminous cephstor2
  635  ssh cephstor2 \ "wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -"
  636  ssh cephstor3 \ "wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -"
  637  ssh cephstor3 \ "echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list"
  638  ssh cephstor2 \ "echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list"
  639  ssh cephstor2 \ "cat /etc/apt/sources.list.d/ceph.list"
  640  ssh cephstor3 \ "cat /etc/apt/sources.list.d/ceph.list"
  641  ssh cephstor2
  642  ssh cephstor3
  643  ceph-deploy install --release luminous cephstor2 cephstor3
  644  ssh cephstor2
  645  ceph-deploy install --release luminous cephstor3
  646  ssh cephstor2
  647  sudo apt list install |grep ceph-common
  648  sudo apt list  |grep ceph-common
  649  ssh cephstor2
  650  ceph-deploy install --release luminous cephstor2
  651  ceph-deploy mon create-initial
  652  df -kh
  653  sudo lsblk
  654  ssh cephstor2 \ sudo lsblk
  655  ssh cephstor3 \ sudo lsblk
  656  sudo mkfs.xfs /dev/sda4
  657  mkdir /ceph
  658  sudo mkdir /ceph
  659  sudo mkdir /ceph/cephstor1disk01
  660  sudo mkdir /storage01
  661  ssh cephstor2 \ sudo mkdir /storage02
  662  ssh cephstor3 \ sudo mkdir /storage03
  663  sudo mount /dev/sda4 /storage01
  664  df
  665  ssh cephstor2 \ sudo mount /dev/sda4 /storage02
  666  ssh cephstor2
  667  ssh cephstor2 \ sudo mount /dev/sdb4 /storage02
  668  ssh cephstor3 \ sudo mount /dev/sda4 /storage03
  669  cat /etc/fstab
  670  echo /dev/sda4 /storage01 ext4    defaults        0       1
  671  sudo echo /dev/sda4 /storage01 ext4    defaults        0       1 >> /etc/fstab
  672  sudo vi /etc/fstab
  673  ssh cephstor2
  674  exit
  675  ssh cephstor2
  676  screen
  677  screen -ls
  678  screen -x 2673.pts-0.cephstor1
  679  screen -ls
  680  screen -x 8351.pts-0.cephstor1
  681  history
  682  ceph-deploy disk list cephstor1
  683  cd ceph/
  684  ceph-deploy osd disk list cephstor1
  685  ceph-deploy disk list cephstor1
  686  ceph-deploy disk create --help
  687  ceph-deploy disk zap --help
  688  ceph-deploy disk zap cephstor1 /dev/sda4
  689  ceph-deploy disk zap cephstor1 /dev/sdb1
  690  ceph-deploy osd create --data /dev/sda4 cephstor1
  691  df
  692  ceph-deploy osd create --data /cephstor1disk01  cephstor1
  693  umount /cephstor1disk01
  694  sudo  umount /cephstor1disk01
  695  ceph-deploy osd create --data /cephstor1disk01  cephstor1
  696  ceph-deploy osd create --data /dev/sda4 cephstor1
  697  ceph-deploy osd create --data /dev/sdb1 cephstor1
  698  /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda4
  699  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda4
  700  fdisk /dev/sda4
  701  sudo fdisk /dev/sda4
  702  sudo pvcreate /dev/sda4p1
  703  sudo fdisk -l
  704  ceph-deploy disk zap /dev/sdb1
  705  ceph-deploy disk zap /dev/sdb1 cephstor1
  706  ceph-deploy disk zap cephstor1 /dev/sdb1
  707  ceph-deploy disk zap cephstor1:/dev/sdb1
  708  ceph-deploy disk zap cephstor1:sdb1
  709  ceph-deploy disk zap cephstor1 sdb1
  710  ceph-deploy disk zap cephstor1 sdba4
  711  ceph-deploy disk zap cephstor1 sda4
  712  ceph-deploy disk list cephstor1
  713  ceph-deploy disk zap /dev/mapper/ceph--a509ce31--e9cb--4491--a4d0--bea156feff5e-osd--block--ea4eccf4--6c9b--4747--8f02--4b375841036e cephstor1
  714  ceph-deploy disk zap cephstor1 /dev/mapper/ceph--a509ce31--e9cb--4491--a4d0--bea156feff5e-osd--block--ea4eccf4--6c9b--4747--8f02--4b375841036e
  715  ceph-deploy disk list cephstor1
  716  ceph-deploy disk zap cephstor1 ceph--a509ce31--e9cb--4491--a4d0--bea156feff5e-osd--block--ea4eccf4--6c9b--4747--8f02--4b375841036e
  717  ceph-deploy disk list cephstor1
  718  vgs
  719  sudo cgs
  720  sudo vgs
  721  sudo vgremove ceph-a509ce31-e9cb-4491-a4d0-bea156feff5e
  722  ceph-deploy disk list cephstor1
  723  ceph-deploy disk zap cephstor1 /dev/mapper/ceph--49560c94--8381--4466--b3bc--36bf1a8961a5-osd--block--7e3c3059--6784--469c--9564--4b949dd6166f
  724  ceph-deploy disk list cephstor1
  725  ceph-deploy disk zap  cephstor1 /dev/mapper/ceph--a509ce31--e9cb--4491--a4d0--bea156feff5e-osd--block--ea4eccf4--6c9b--4747--8f02--4b375841036e
  726  sudo su -
  727  ceph-deploy disk --help
  728  ceph-deploy osd --help
  729  ceph-deploy --help
  730  sudo su -
  731  ceph-deploy disk list cephstor1
  732  ceph mgr module enable dashboard
  733  ceph-deploy disk list cephstor2
  734  lsblk
  735  sudo fdisk -l /dev/sda4
  736  ceph-deploy osd create /dev/sda4p1 cephstor1
  737  ceph-deploy osd create cephstor1 /dev/sda4p1
  738  ceph-deploy osd prepare cephstor1 /dev/sda4p1
  739  ceph-deploy osd list cephstor1 /dev/sda4p1
  740  ceph-deploy osd list cephstor1 /dev/sda4
  741  ceph-deploy osd list cephstor1 /dev/sdb1
  742  history |grep osd
  743  sudo fdisk /dev/sda4
  744  sudo fdisk -l /dev/sda4
  745  sudo partprobe
  746  sudo fdisk -l /dev/sda4
  747  sudo pvcreate /dev/sda4
  748  history |grep osd
  749  ceph-deploy osd create --data /dev/sda4 cephstor1
  750  /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda4
  751  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda4
  752  df
  753  cat /etc/fstab
  754  sudo vi /etc/fstab
  755  sudo pvcreate /dev/sda4
  756  df
  757  sudo pvremove /dev/sda4
  758  sudo lsof  /dev/sda4
  759  sudo su -
  760  ceph-deploy osd create --data sda4 cephstor1
  761  ceph-deploy osd create --data /dev/sda4 cephstor1
  762  vgs
  763  sudo su -
  764  ceph-deploy osd create --data /dev/sda4 cephstor1
  765  sudo dd if=/dev/zero of=/dev/sda4 bs=4096k count=100
  766  sudo sgdisk --zap-all --clear --mbrtogpt -g -- /dev/sda4
  767  ceph-deploy osd create --data /dev/sda4 cephstor1
  768  sudo su -
  769  history |grep zap
  770  history |grep list
  771  eph-deploy disk list cephstor2
  772  ceph-deploy disk list cephstor2
  773  vgs
  774  sudo vgs
  775  fdisk -l
  776  sudo fdisk -l
  777  ceph-deploy disk list cephstor1
  778  vgs
  779  sudo vgs
  780  sudo lvs
  781  history
  782  sudo sgdisk --zap-all --clear --mbrtogpt -g -- /dev/sda4
  783  sudo dd if=/dev/zero of=/dev/sda4 bs=4096k count=100
  784  ceph-deploy osd create cephstor1:/dev/sda4
  785  ceph-deploy osd create cephstor1 /dev/sda4
  786  ceph-deploy osd create /dev/sda4 cephstor1
  787  ceph-deploy osd create cephstor1:sda4
  788  ceph-deploy osd create cephstor1:/dev/sda4
  789  ceph-deploy osd create --help
  790  vgs
  791  cat /etc/fstab
  792  sudo vi  /etc/fstab
  793  sudo mkfs.xfs /dev/sda4
  794  mount -a
  795  sudo mount -a
  796  df
  797  ceph-deploy osd create --help
  798  ceph-deploy osd create --filestore cephstor1:/cephstor1disk01
  799  ceph-deploy osd create --filestore /cephstor1disk01 cephstor1
  800  ceph-deploy osd create --filestore cephstor1 /cephstor1disk01
  801  ceph-deploy osd create cephstor1 --filestore  /cephstor1disk01
  802  sudo su -
  803  history |grep zap
  804  ceph-deploy disk zap /dev/ceph-a509ce31-e9cb-4491-a4d0-bea156feff5e/osd-block-a5a85f38-8527-4382-86e4-1f709f70fc2e cephstor1
  805  ceph-deploy disk zap cephstor1 /dev/ceph-a509ce31-e9cb-4491-a4d0-bea156feff5e/osd-block-a5a85f38-8527-4382-86e4-1f709f70fc2e
  806  vgs
  807  sudo vgs
  808  sudo vgremove ceph-a509ce31-e9cb-4491-a4d0-bea156feff5e
  809  sudo dd if=/dev/zero of=/dev/sda4
  810  sudo vgs
  811  sudo reboot
  812  ls
  813  ssh cephstor3
  814  df
  815  sudo umount  /cephstor1disk01
  816  sudo mkfs.xfs /dev/sda4
  817  sudo mkfs.xfs -f /dev/sda4
  818  sudo mount -a
  819  df
  820  chown ceph. /cephstor1disk01
  821  sudo chown ceph. /cephstor1disk01
  822  ll /
  823  df
  824  cd ceph/
  825  ls
  826  cat ceph.conf
  827  ceph-deploy osd prepare cephstor1:/cephstor1disk01  cephstor2:/cephstor2disk01  cephstor3:/cephstor3disk01
  828  ssh cephstor2 \ df -kh |grep cephstor
  829  ssh cephstor3 \ df -kh |grep cephstor
  830  ceph-deploy osd prepare cephstor1:/cephstor1disk01  cephstor2:/cephstor2disk01  cephstor3:/cephstor3disk01
  831  ceph-deploy osd create  cephstor1:/cephstor1disk01  cephstor2:/cephstor2disk01  cephstor3:/cephstor3disk01
  832  ceph-deploy osd list cephstor1:/cephstor1disk01  cephstor2:/cephstor2disk01  cephstor3:/cephstor3disk01
  833  ping cephstor1
  834  ceph-deploy osd list cephstor1:/cephstor1disk01  cephstor2:/cephstor2disk01  cephstor3:/cephstor3disk01
  835  ip r
  836  ceph-deploy osd list 10.1.12.10:/cephstor1disk0 10.1.12.11:/cephstor2disk01 10.1.12.12:/cephstor3disk01
  837  ceph-deploy admin  cephstor1 cephstor2 cephstor3
  838  sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  839  ll /etc/ceph/ceph.client.admin.keyring
  840  ssh cephstor2
  841  ssh cephstor2 \ ls -l /etc/ceph/ceph.client.admin.keyring
  842  ssh cephstor2 \ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  843  ssh cephstor3 \ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  844  ceph-deploy mgr create cephstor1 cephstor2 cephstor3
  845  sudo vi /etc/fstab
  846  umount /cephstor1disk01
  847  sudo umount /cephstor1disk01
  848  df
  849  ssh cephstor2
  850  ssh cephstor3
  851  ceph-deploy osd create cephstor1:sda4 cephstor2:sdb4 cephstor3:sda4
  852  ceph-deploy osd list  cephstor1:sda4 cephstor2:sdb4 cephstor3:sda4
  853  ceph-deploy osd list  cephstor1:sda4
  854  ceph-deploy osd list  cephstor1:/dev/sda4
  855  ceph-deploy osd list  10.1.12.10:sda4
  856  ceph-deploy osd list  10.1.12.10
  857  /usr/sbin/ceph-volume lvm list
  858  sudo /usr/sbin/ceph-volume lvm list
  859  man ceph-volume
  860  ls
  861  ceph health
  862  man ceph-deploy
  863  ceph-deploy --help
  864  ceph-deploy osd --help
  865  ceph-deploy osd create --list
  866  ceph-deploy osd create
  867  ceph-deploy osd create --?
  868  ceph-deploy disk list
  869  ceph-deploy disk list cephstor1
  870  ceph-deploy disk zap cephstor1:sda4
  871  ceph-deploy disk zap cephstor1:sdb
  872  ceph-deploy disk zap --help
  873  ceph-deploy disk zap ephstor1 sda4
  874  ceph-deploy disk zap cephstor1 sda4
  875  sudo vi /etc/fstab
  876  sudo mount -a
  877  history
  878  ceph-deploy osd list  cephstor1
  879  ceph-deploydisk  list  cephstor1
  880  ceph-deploy disk  list  cephstor1
  881  cd ceph/
  882  ceph-deploy disk  list  cephstor1
  883  ls
  884  ceph-deploy disk create  cephstor1 /dev/sda4
  885  ceph-deploy disk zap  cephstor1 /dev/sda4
  886  history |grep osd
  887  ceph-deploy osd create cephstor1:sda4
  888  ceph-deploy osd create cephstor1 sda4
  889  ceph-deploy osd create sda4 cephstor1
  890  ceph-deploy osd create cephstor1:sda4
  891  ceph-deploy osd create cephstor1 /dev/sda4
  892  ceph-deploy osd create cephstor1:/dev/sda4
  893  ceph-deploy osd create cephstor1 /dev/sda4
  894  sudo su -
  895  ceph-deploy osd create
  896  ceph-deploy osd create --help
  897  ceph-deploy osd create --data /dev/mapper/cephstor1_vg-cephstor1_lv --fs-type xfs cephstor1
  898  vgs
  899  sudo vgs
  900  ceph-deploy osd create  --file store --data /dev/mapper/cephstor1_vg-cephstor1_lv --fs-type xfs cephstor1
  901  ceph-deploy osd create  --filestore --data /dev/mapper/cephstor1_vg-cephstor1_lv --fs-type xfs cephstor1
  902  ceph-deploy osd create --data /dev/mapper/cephstor1_vg-cephstor1_lv --fs-type xfs cephstor1
  903  /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/mapper/cephstor1_vg-cephstor1_lv
  904  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/mapper/cephstor1_vg-cephstor1_lv
  905  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data cephstor1_vg/cephstor1_lv
  906  vgs
  907  sudo vgs
  908  sudo lvs
  909  ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/cephstor1_vg/cephstor1_lv --path /var/lib/ceph/osd/ceph-0
  910  ceph osd purge osd.0 --yes-i-really-mean-it
  911  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data cephstor1_vg/cephstor1_lv
  912  ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 93c9f2b7-a016-4e69-a0e5-d2c8ca96d486
  913  sudo ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 93c9f2b7-a016-4e69-a0e5-d2c8ca96d486
  914  sudo chmod +r /var/lib/ceph/bootstrap-osd/ceph.keyring
  915  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data cephstor1_vg/cephstor1_lv
  916  ceph-deploy osd create  --file store --data /dev/mapper/cephstor1_vg-cephstor1_lv --fs-type xfs cephstor1
  917  ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1b72edf2-44da-414f-a421-64a854e5b4df
  918  sudo ls -ltr /var/lib/ceph/bootstrap-osd/ceph.keyring
  919  chown ceph. /var/lib/ceph/bootstrap-osd/ceph.keyring
  920  sudo chown ceph. /var/lib/ceph/bootstrap-osd/ceph.keyring
  921  ceph-deploy osd create  --file store --data /dev/mapper/cephstor1_vg-cephstor1_lv --fs-type xfs cephstor1
  922  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data cephstor1_vg/cephstor1_lv
  923  ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6cecb411-81dd-4c8b-afc9-68d0c4accb16
  924  ll /var/lib/ceph/bootstrap-osd/ceph.keyring
  925  sudo ls -ltr /var/lib/ceph/bootstrap-osd/ceph.keyring
  926  sudo chmod 777 /var/lib/ceph/bootstrap-osd/ceph.keyring
  927  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data cephstor1_vg/cephstor1_lv
  928  ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new eccaf7fb-7e75-4c31-bcf0-b191e7f6dad0
  929  sudo ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring
  930  sudo chown cephuser.  /var/lib/ceph/bootstrap-osd/ceph.keyring
  931  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data cephstor1_vg/cephstor1_lv
  932  ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bfb0e9b8-29f0-476b-acf6-2f894e598e7
  933  cat /var/lib/ceph/bootstrap-osd/ceph.keyring
  934  sudo ls -ltr /var/lib/ceph/bootstrap-osd/ceph.keyring
  935  sudo ls -ltr /var/lib/ceph/
  936  sudo ls -ltr /var/lib/ceph/bootstrap-osd/
  937  sudo chown ceph.  /var/lib/ceph/bootstrap-osd/ceph.keyring
  938  sudo ls -ltr /var/lib/ceph/bootstrap-osd/
  939  ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bfb0e9b8-29f0-476b-acf6-2f894e598e7
  940  cat /var/lib/ceph/bootstrap-osd/ceph.keyring
  941  ceph-volume lvm prepare
  942  sudo ceph-volume lvm prepare
  943  sudo su -
  944  history
  945  history  |grep crea
  946  ceph-deploy osd create  --file store --data
  947  ceph-deploy osd create  --filestore --data cephstor1_vg/cephstor1_lv --journal cephstor1_vg/jn_lv
  948  ceph-deploy osd create  --filestore --data cephstor1_vg/cephstor1_lv --journal cephstor1_vg/jn_lv cephstor1
  949  sudo /usr/sbin/ceph-volume --cluster ceph lvm create --filestore --data cephstor1_vg/cephstor1_lv --journal cephstor1_vg/jn_lv
  950  ceph osd list
  951  ceph osd  ls
  952  ceph osd  ls -l
  953  ceph osd  ls --detail
  954  ceph osd  ls 0
  955  ceph osd  ls 1
  956  ceph osd  ls
  957  ceph osd  ls all
  958  ceph osd  ls 1
  959  blkid
  960  sudo su-
  961  sudo su -
  962  cd ceph/
  963  ls
  964  ceph-deploy disk list cephstor1
  965  sudo fdisk -l
  966  ceph osd ls
  967  ceph osd
  968  sudo ceph osd
  969  sudo ceph osd  -h
  970  sudo ceph osd  stat
  971  ceph-volume lvm list
  972  sudo ceph-volume lvm list
  973  history |grep create
  974  history |grep create|grep lv
  975  sudo su -
  976  ssh cephstor2
  977  ssh cephstor3
  978  sudo su -
  979  cd ceph/
  980  ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rdb_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring
  981  sudo ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rdb_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring
  982  ls -ltr /etc/ceph/ceph.client.images.keyring
  983  sudo chown ceph. /etc/ceph/ceph.client.images.keyring
  984  ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.volumes.keyring
  985  sudo ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.volumes.keyring
  986  sudo chown ceph. /etc/ceph/ceph.client.volumes.keyring
  987  sudo scp /etc/ceph/ceph.client.volumes.keyring cephstor1:/etc/ceph/ceph.client.volumes.keyring
  988  scp /etc/ceph/ceph.client.volumes.keyring root@cephstor1:/etc/ceph/ceph.client.volumes.keyring
  989  scp /etc/ceph/ceph.client.volumes.keyring root@cephstor2:/etc/ceph/ceph.client.volumes.keyring
  990  scp /etc/ceph/ceph.client.images.keyring root@cephstor2:/etc/ceph/ceph.client.images.keyring
  991  scp /etc/ceph/ceph.client.images.keyring root@cephstor3:/etc/ceph/ceph.client.images.keyring
  992  scp /etc/ceph/ceph.client.volumes.keyring root@cephstor3:/etc/ceph/ceph.client.volumes.keyring
  993  ssh cephstor3 \ sudo chown ceph. /etc/ceph/ceph.client.volumes.keyring
  994  ssh cephstor3 \ sudo chown ceph. /etc/ceph/ceph.client.images.keyring
  995  ssh cephstor2 \ sudo chown ceph. /etc/ceph/ceph.client.images.keyring
  996  ssh cephstor2 \ sudo chown ceph. /etc/ceph/ceph.client.volumes.keyring
  997  ll /etc/ceph/
  998  history
  999  cat /etc/ceph/ceph.conf
 1000  scp /etc/ceph/ceph.conf root@10.1.13.13:/etc/ceph/ceph.conf
 1001  sudo scp /etc/ceph/ceph.client.images.keyring stackadmin@10.1.13.20:
 1002  sudo scp /etc/ceph/ceph.client.volumes.keyring  stackadmin@10.1.13.20:
 1003  sudo ceph auth get-key client.volumes |ssh osp9.lab tee client.volumes.key
 1004  sudo ceph auth get-key client.volumes |ssh 10.1.13.20  tee client.volumes.key
 1005  sudo ceph auth get-key client.volumes |ssh stackadmin@10.1.13.20  tee client.volumes.key
 1006  sudo ceph auth get-key client.images |ssh stackadmin@10.1.13.20  tee client.volumes.key
 1007  sudo ceph auth get-key client.images |ssh stackadmin@10.1.13.13  tee client.volumes.key
 1008  sudo ceph auth get-key client.volumes |ssh stackadmin@10.1.13.13  tee client.volumes.key
 1009  sudo ceph auth get-key client.images |ssh stackadmin@10.1.13.13  tee client.images.key
 1010  sudo ceph auth get-key client.images |ssh stackadmin@10.1.13.20  tee client.images.key
 1011  sudo ceph auth get-key client.volumes |ssh stackadmin@10.1.13.20  tee client.volumes.key
 1012  sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
 1013  cat admin.key
 1014  scp admin.key root@10.1.13.13:/etc/ceph/.
 1015  scp admin.key root@10.1.13.20:/etc/ceph/.
 1016  scp admin.key stackadmin@10.1.13.20:/etc/ceph/.
 1017  scp admin.key stackadmin@10.1.13.20:
 1018  ls
 1019  scp ceph.client.admin.keyring root@10.1.13.20:/etc/ceph/.
 1020  scp ceph.client.admin.keyring root@10.1.13.13:/etc/ceph/.
 1021  sudo su -
 1022  cd ceph/
 1023  ceph-deploy mds create cephstor1 cephstor2 cephstor3
 1024  sudo ceph osd pool create cephfs_data 64
 1025  sudo ceph osd pool create cephfs_metadata 64
 1026  sudo ceph osd pool create cephfs_metadata 32
 1027  sudo ceph fs new cephfs cephfs_metadata cephfs_data
 1028  sudo ceph fs ls
 1029  sudo ceph mds stat
 1030  sudo su -
 1031  exit
 1032  cd ceph/
 1033  ls
 1034  ceph-deploy osd
 1035  ceph-deploy osd create
 1036  ceph-deploy osd create -h
 1037  vgs
 1038  sudo vgs
 1039  sudo lvs
 1040  sudo ceph-volume lvm list
 1041  ceph-deploy disk list
 1042  ceph-deploy disk list cephstor1
 1043  ceph-deploy zap
 1044  ceph-deploy osd zap
 1045  ceph-deploy osd create -h
 1046  ceph-deploy osd create  --filestore cephstor1_vg2/cephstor1_lv2 --fs-type xfs --journal cephstor1_vg/jn1_lv
 1047  ceph-deploy disk list cephstor1
 1048  ceph-deploy osd create  --filestore /dev/mapper/cephstor1_vg2-cephstor1_lv2 --fs-type xfs --journal /dev/mapper/cephstor1_vg-jn1_lv
 1049  exit
 1050  cd ceph/
 1051  ls
 1052  ceph-deploy disk list
 1053  ceph-deploy disk list ceph
 1054  ceph-deploy disk zap /dev/mapper/cephstor1_vg2-jn1_lv2
 1055  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-jn1_lv2
 1056  sudo lvremove /dev/mapper/cephstor1_vg2-jn1_lv2
 1057  ceph-deploy disk list
 1058  exit
 1059  cd ceph/
 1060  ls
 1061  ceph-deploy disk list cephstor1
 1062  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-cephstor1_lv2
 1063  /usr/sbin/ceph-volume lvm zap /dev/mapper/cephstor1_vg2-cephstor1_lv2
 1064  sudo /usr/sbin/ceph-volume lvm zap /dev/mapper/cephstor1_vg2-cephstor1_lv2
 1065  reboo
 1066  sudo reboot
 1067  cd ceph/
 1068  ls
 1069  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-cephstor1_lv2
 1070  sudo lvremove /dev/mapper/cephstor1_vg2-cephstor1_lv2
 1071  exit
 1072  cd ceph/
 1073  ls
 1074  ceph-deploy disk list cephstor1
 1075  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg-cephstor1_lv
 1076  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-cephstor1disk2
 1077  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg-jn1_lv
 1078  sudo lvs
 1079  sudo lvcreate -L 8G -n jndisk2 cephstor1_vg2
 1080  reboo
 1081  sudo reboot
 1082  ls
 1083  rm -rf cluster/
 1084  cd ceph/
 1085  ls
 1086  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-cephstor1disk2
 1087  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg-cephstor1_lv
 1088  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg-jn1_lv
 1089  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-cephstor1disk2
 1090  ceph-deploy disk zap cephstor1 /dev/mapper/cephstor1_vg2-jndisk2
 1091  sudo su -
 1092  cd ceph/
 1093  ceph-deploy disk list cephstor3
 1094  ssh cephstor3
 1095  ls
 1096  cd ceph/
 1097  history |grep zap
 1098  ceph status || ceph -w
 1099  ceph osd tree
 1100  ceph-deploy disk list cephstor1
 1101  ceph-deploy disk list cephstor3
 1102  ssh cephstor3
 1103  ls
 1104  ceph-deploy disk list cephstor1
 1105  history |grep zap
 1106  ceph-deploy disk zap cephstor3 /dev/mapper/cephstor1_vg-cephstor1_lv
 1107  ceph-deploy disk list cephstor1
 1108  ceph-deploy disk list cephstor3
 1109  ceph-deploy disk zap cephstor3 /dev/mapper/cephstor3_vg-cephstor3_lv
 1110  ceph-deploy disk zap cephstor3 /dev/mapper/cephstor3_vg-jn3_lv
 1111  ceph-deploy disk zap cephstor3 /dev/mapper/cephstor3_vg-cephstor3_lv
 1112  ssh cephstor3
 1113  ceph-deploy osd list cephstor3
 1114  ceph-deploy disk zap cephstor3 /dev/cephstor3_vg/jn3_lv
 1115  ceph-deploy purge cephstor3
 1116  ceph-deploy purgedata cephstor3
 1117  sudo ceph health
 1118  sudo su -
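
For future reference, distilled from the slog above: the combination that finally let ceph-volume create a bluestore OSD on the reused partition was to scrub the old partition/LVM signatures first, then zap and create (host and device names are from this lab):

# wipe stale GPT/LVM metadata that keeps tripping up ceph-volume
sudo sgdisk --zap-all --clear --mbrtogpt -g -- /dev/sda4
sudo dd if=/dev/zero of=/dev/sda4 bs=4096k count=100
# then zap via ceph-deploy and create the OSD on the raw device
ceph-deploy disk zap cephstor1 /dev/sda4
ceph-deploy osd create --data /dev/sda4 cephstor1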

ceph install notes

##For all open-source software, use only the main repos####

---On ALL Nodes---
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
lsb_release -a
apt-add-repository 'deb https://download.ceph.com/debian-mimic/ bionic  main'
apt update
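
The same key import and repo line can be pushed to the other nodes from the deploy node, much like I did in the history above (hostnames are from my lab; adjust the release/codename to match yours):

for h in cephstor2 cephstor3; do
  ssh $h "wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -"
  ssh $h "echo deb https://download.ceph.com/debian-mimic/ bionic main | sudo tee /etc/apt/sources.list.d/ceph.list"
  ssh $h "sudo apt update"
done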

###### Ubuntu Main Repos
deb http://in.archive.ubuntu.com/ubuntu/ bionic main restricted universe
deb-src http://in.archive.ubuntu.com/ubuntu/ bionic main restricted universe



root@cephstor3:~# cat /etc/hosts
127.0.0.1       localhost.localdomain   localhost
192.168.239.11 cephstor1
192.168.239.12 cephstor2
192.168.239.13 cephstor3


root@cephstor3:~# cat /etc/network/interfaces
auto ens33
iface ens33 inet static
                address 192.168.85.13
                netmask 255.255.255.0
                gateway 192.168.85.2
                dns-nameservers 8.8.8.8

auto ens38
iface ens38 inet static
                address 192.168.239.13
                netmask 255.255.255.0



echo -e 'Defaults:cephuser !requiretty\ncephuser ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/cephuser
root@ctrl01:~# cat /etc/sudoers.d/ceph
Defaults:fortunath !requiretty
fortunath ALL = (root) NOPASSWD:ALL
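
To lay the same sudoers drop-in down on every node in one pass, a sketch like this works (assumes root ssh access to each host; 0440 is the conventional mode for files under /etc/sudoers.d):

for h in cephstor1 cephstor2 cephstor3; do
  # the ! stays inside single quotes so the shell leaves it alone
  printf 'Defaults:cephuser !requiretty\ncephuser ALL = (root) NOPASSWD:ALL\n' | ssh root@$h "tee /etc/sudoers.d/cephuser && chmod 0440 /etc/sudoers.d/cephuser"
done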


---On the deploy-server node---
fortunath@cephstor01:~$ ssh-keygen
fortunath@cephstor01:~$ vi ~/.ssh/config
fortunath@cephstor01:~$ cat  ~/.ssh/config
# create new ( define all nodes and users )
Host cephstor01
    Hostname cephstor01
    User fortunath
Host cephstor02
    Hostname cephstor02
    User fortunath
Host cephstor03
    Hostname cephstor03
    User fortunath

---
ssh-copy-id cephstor03
ssh-copy-id cephstor02

-----On ALL Nodes---
fortunath@cephstor01:~$ cat /etc/hosts
127.0.0.1       localhost.localdomain   localhost
192.168.239.130 cephstor03
192.168.239.129 cephstor02
192.168.239.128 cephstor01
----
Ensure preserve_hostname is set to true:

fortunath@ctrl01:~$ grep preserve_hostname  /etc/cloud/cloud.cfg
preserve_hostname: true

--

curl https://repogen.simplylinux.ch/txt/bionic/sources_fad62524984e5e894141cfb0002fbf836014d01d.txt | sudo tee -a /etc/apt/sources.list

--

https://www.server-world.info/en/note?os=Ubuntu_16.04&p=ceph&f=1
https://gist.github.com/vanduc95/a17f80d0636badd9aa002f2b493b777b

---On ALL Nodes---

 apt-get update && apt-get -y upgrade ; apt-get install python-minimal



--
 apt-get -y install chrony

Insert the following into /etc/chrony/chrony.conf
--
pool 0.asia.pool.ntp.org   iburst maxsources 4
pool 1.asia.pool.ntp.org  iburst maxsources 1
pool 2.asia.pool.ntp.org  iburst maxsources 1
pool 3.asia.pool.ntp.org  iburst maxsources 2
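
After restarting chrony, each node's sync state can be checked with the standard chronyc commands:

chronyc sources -v
chronyc tracking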

----


   64  vi /etc/chrony/chrony.conf
   65  scp /etc/chrony/chrony.conf cephstor2:/etc/chrony/chrony.conf
   66  scp /etc/chrony/chrony.conf cephstor3:/etc/chrony/chrony.conf
   67  service chrony restart
   68  ssh cephstor2 \ service chrony restart
   69  ssh cephstor3 \ service chrony restart
   70  for i in stop disable; do ssh cephstor2 \ systemctl $i ufw; done
   71  for i in stop disable; do ssh cephstor3 \ systemctl $i ufw; done
   72  for i in stop disable; do ssh cephstor1 \ systemctl $i ufw; done

--
fortunath@cephstor01:~$ cat /etc/rc.local
echo nameserver 8.8.8.8 > /etc/resolv.conf
fortunath@cephstor01:~$ ll /etc/rc.local
-rwxr-xr-x 1 root root 43 Oct  4 18:00 /etc/rc.local*

---

apt-get install ceph-deploy

 mkdir cluster
 cd cluster
 ceph-deploy new ceph1 ceph2 ceph3
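
Note: ceph-deploy new only writes the initial ceph.conf and a monitor keyring into the current directory; nothing is installed on the nodes yet. The conf below is what I ended up with after editing.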

--

fortunath@cephstor01:~/cluster$ cat ceph.conf
#[global]
#fsid = c3289b18-9d58-432e-8cf1-a107691edcb0
#mon_initial_members = cephstor01, cephstor02, cephstor03
#mon_host = 192.168.239.128,192.168.239.129,192.168.239.130
#auth_cluster_required = cephx
#auth_service_required = cephx
#auth_client_required = cephx
[global]
fsid = 36c5aa41-dc4c-424e-9adc-1c052929971b
mon_initial_members = cephstor01, cephstor02, cephstor03
mon_host = 192.168.239.128,192.168.239.129,192.168.239.130
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
cluster network = 192.168.85.0/24
public network = 192.168.239.0/24
osd pool default size = 2
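
For context on the conf above: 'public network' is the network that clients and monitors talk on, 'cluster network' carries OSD replication and heartbeat traffic, and 'osd pool default size = 2' makes new pools keep two copies of each object, which trades some safety for capacity on a small three-node cluster.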


---
  149  ceph-deploy new cephstor01 cephstor02 cephstor03
  150  ca ceph.conf
  151  cat ceph.conf
  152  cat ~/ceph/ceph.conf
  153  vi ceph.conf
  154  cat ceph.conf
  155  ip r
  156  vi ceph.conf
  157  ceph-deploy install --release  luminous cephstor1 cephstor2 cephstor3


ceph-deploy mon create-initial
which will:

1. run 'mon create' on each monitor listed in mon_initial_members in ceph.conf
2. wait for them to form a quorum
3. gatherkeys (a quick quorum check follows below)
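
Once create-initial returns, it is worth confirming the monitors actually formed a quorum before moving on; both of these are stock ceph commands:

sudo ceph -s
sudo ceph quorum_status --format json-pretty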

  219  ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  220  sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
  221  sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
  222  sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
  223  sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
  224  ll /etc/ceph/ceph.client.admin.keyring
  225  ll /etc/ceph/
  226  sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  227  scp /etc/ceph/ceph.client.admin.keyring cephstor01:
  228  scp /etc/ceph/ceph.client.admin.keyring cephstor02:
  229  scp /etc/ceph/ceph.client.admin.keyring cephstor03:
  230  ssh cephstor02 \ sudo cp ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
  231  ssh cephstor03 \ sudo cp ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring

  158  source /etc/rc.local
  159  sudo source /etc/rc.local
  160  bash /etc/rc.local
  161  sudo bash /etc/rc.local
  162  ssh cephstor02
  163  ssh cephstor03
  164  ceph-deploy install --release  mimic  cephstor01 cephstor02 cephstor03
  165  ceph-deploy mon create-initial
  172  ceph-deploy --overwrite-conf mon create-initial
  173  sudo ceph health
  174  ceph-deploy admin cephstor01 cephstor02 cephstor03
  175  ls /etc/ceph/ceph.client.admin.keyring
  176  ll /etc/ceph/ceph.client.admin.keyring
  177  +sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  178  sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  179  ssh cephstor02 \ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  180  ssh cephstor03 \ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  181  sudo ceph health
  182  ceph-deploy mgr cephstor01 cephstor02 cephstor03
  183  ceph-deploy mgr create cephstor01 cephstor02 cephstor03
  184  sudo ceph hea+65lth
  185  sudo ceph health
  186  history

#######################
sudo apt-get -y install ceph-deploy ceph-common ceph-mds python-minimal ; ssh cephstor2 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds python-minimal" ;  ssh cephstor3 \ "sudo apt-get -y install ceph-deploy ceph-common ceph-mds python-minimal"
########################








Monday, September 17, 2018

How to remove a host from a Ceph cluster?



I’m still studying Ceph, and recently faced a scenario in which one of my Ceph nodes went down due to hardware failure. Even though my data was safe thanks to the replication factor, I was not able to remove the node from the cluster.
I could remove the OSDs on the node, but I couldn’t find a way to remove the node from the listing in ‘ceph osd tree’. I ended up editing the CRUSH map by hand to remove the host, and uploading it back. This worked as expected. Following are the steps I took.
a) This was the state just after the node went down:
# ceph osd tree
 
# id   weight     type name          up/down   reweight
-1     0.08997    root default
-2     0.01999        host hp-m300-5
0      0.009995           osd.0      up        1
4      0.009995           osd.4      up        1
-3     0.009995       host hp-m300-9
1      0.009995           osd.1      down      0
-4     0.05998        host hp-m300-4
2      0.04999            osd.2      up        1
3      0.009995           osd.3      up        1
# ceph -w
 
    cluster 62a6a880-fb65-490c-bc98-d689b4d1a3cb
     health HEALTH_WARN 64 pgs degraded; 64 pgs stuck unclean; recovery 261/785 objects degraded (33.248%)
     monmap e1: 1 mons at {hp-m300-4=10.65.200.88:6789/0}, election epoch 1, quorum 0 hp-m300-4
     osdmap e130: 5 osds: 4 up, 4 in
     pgmap v8465: 196 pgs, 4 pools, 1001 MB data, 262 objects
         7672 MB used, 74192 MB / 81865 MB avail
         261/785 objects degraded (33.248%)
         64 active+degraded
         132 active+clean
I started by marking the OSDs on the node out and removing them. Note that I didn’t need to stop the OSD (osd.1), since the node carrying osd.1 was down and not accessible.
b) Otherwise, you would have to stop the OSD first:
# sudo service ceph stop osd.1
c) Mark the OSD out; strictly speaking this isn’t needed here, since the node is already out.
# ceph osd out osd.1
d) Remove the OSD from the CRUSH map, so that it does not receive any data. You can also get the crushmap, de-compile it, remove the OSD, re-compile, and upload it back.
Remove item id 1 with the name ‘osd.1’ from the CRUSH map.
# ceph osd crush remove osd.1
e) Remove the OSD authentication key
# ceph auth del osd.1
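(For completeness: once the key is deleted, the OSD id itself can also be dropped from the osdmap with the standard ‘# ceph osd rm osd.1’.)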
f) At this stage, I had to remove the OSD host from the listing but was not able to find a way to do so. ‘ceph-deploy’ doesn’t have any tools for this other than ‘purge’ and ‘uninstall’, and since the node was not accessible, these wouldn’t work anyway. A ‘ceph-deploy purge’ failed with the following errors, which is expected since the node is not reachable.
# ceph-deploy purge hp-m300-9
 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
 [ceph_deploy.cli][INFO  ] Invoked (1.5.22-rc1): /usr/bin/ceph-deploy purge hp-m300-9
 [ceph_deploy.install][INFO  ] note that some dependencies *will not* be removed because they can cause issues with qemu-kvm
 [ceph_deploy.install][INFO  ] like: librbd1 and librados2
 [ceph_deploy.install][DEBUG ] Purging from cluster ceph hosts hp-m300-9
 [ceph_deploy.install][DEBUG ] Detecting platform for host hp-m300-9 ...
 ssh: connect to host hp-m300-9 port 22: No route to host
 [ceph_deploy][ERROR ] RuntimeError: connecting to host: hp-m300-9 resulted in errors: HostNotFound hp-m300-9
I ended up fetching the CRUSH map, removing the OSD host from it, and uploading it back.
g) Get the CRUSH map
# ceph osd getcrushmap -o /tmp/crushmap
h) De-compile the CRUSH map
# crushtool -d /tmp/crushmap -o crush_map
i) I had to remove the entries pertaining to the host-to-be-removed from the following sections (a rough sketch of these entries follows below):
a) the ‘devices’ section
b) the host’s bucket declaration
c) and its ‘item’ line under the ‘root default’ bucket
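For illustration, the deleted entries looked roughly like this in the de-compiled map (ids and weights reconstructed from the tree above; yours will differ):
# under devices:
device 1 osd.1
# the whole host bucket:
host hp-m300-9 {
        id -3
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 0.010
}
# and inside the 'root default' bucket:
item hp-m300-9 weight 0.010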
j) With those entries removed, I compiled the map and uploaded it back.
# crushtool -c crush_map -o /tmp/crushmap
# ceph osd setcrushmap -i /tmp/crushmap
k) A ‘ceph osd tree’ looks much cleaner now 🙂
# ceph osd tree
 
# id   weight     type name          up/down   reweight
-1     0.07999    root default
-2     0.01999        host hp-m300-5
0      0.009995           osd.0      down      0
4      0.009995           osd.4      down      0
-4     0.06           host hp-m300-4
2      0.04999            osd.2      up        1
3      0.009995           osd.3      up        1
There may be a more direct method to remove the OSD host from the listing. I’m not aware of anything relevant, based on my limited knowledge. Perhaps I’ll come across something as I progress with Ceph. Comments welcome.

Source: 
https://arvimal.blog/2015/05/07/how-to-remove-a-host-from-a-ceph-cluster/