Join the cluster with a new management server
lushangshiyi
8 Posts
September 5, 2019, 8:49 am
I created another server to join the cluster as a new management server.
Last edited on September 11, 2019, 5:19 am by lushangshiyi · #1
admin
2,930 Posts
September 5, 2019, 10:53 am
When you deploy the node via the wizard, select "Replace Management Node".
lushangshiyi
8 Posts
September 6, 2019, 12:59 am
Quote from admin on September 5, 2019, 10:53 am:
When you deploy the node via the wizard, select "Replace Management Node".
Hi, thanks for your reply.
What does the default of 300 PGs mean when I deploy?
Last edited on September 11, 2019, 5:23 am by lushangshiyi · #3
admin
2,930 Posts
September 6, 2019, 11:46 am
When you deploy, the deployment wizard creates a default pool. It does not ask you for the number of PGs, only for the expected range of disk counts (3-5 disks, 5-15 disks, and so on), so you do not need to calculate the PG count yourself. Is this not what you see?
Later, once the cluster is deployed and running, if you need to create custom pool(s) via the Add Pool page, then yes, you will need to do the calculation yourself. There is a balloon tooltip on that page to help you with this.
As for roles, for simple use just leave all roles enabled on all nodes. For more advanced setups it depends on many factors, such as your workload and the type and size of your hardware, now and in the future. If you want such an advanced setup, you can always buy support from us and we will look at this in more detail. For simple setups, allow all roles.
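As a reference for that calculation, the common Ceph rule of thumb is to target roughly 100 PGs per OSD, divided by the pool's replica size and rounded up to a power of two. A minimal sketch in Python (the helper name and the 100-per-OSD target are illustrative assumptions, not PetaSAN's actual code; the tooltip on the Add Pool page is the authoritative guide):

```python
import math

def suggested_pg_count(num_osds, replica_size=3, target_pgs_per_osd=100):
    """Common Ceph rule of thumb: ~100 PGs per OSD, divided by the
    pool's replica size, rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replica_size
    return 2 ** math.ceil(math.log2(raw))

# Example: 9 OSDs with 3x replication -> 300 raw, rounded up to 512.
print(suggested_pg_count(9, replica_size=3))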
lushangshiyi
8 Posts
September 10, 2019, 8:16 am
Quote from admin on September 6, 2019, 11:46 am:
When you deploy, the deployment wizard creates a default pool. It does not ask you for the number of PGs, only for the expected range of disk counts (3-5 disks, 5-15 disks, and so on), so you do not need to calculate the PG count yourself. Is this not what you see?
Later, once the cluster is deployed and running, if you need to create custom pool(s) via the Add Pool page, then yes, you will need to do the calculation yourself. There is a balloon tooltip on that page to help you with this.
As for roles, for simple use just leave all roles enabled on all nodes. For more advanced setups it depends on many factors, such as your workload and the type and size of your hardware, now and in the future. If you want such an advanced setup, you can always buy support from us and we will look at this in more detail. For simple setups, allow all roles.
Does this support iSCSI disk replication or cloning?
Last edited on September 11, 2019, 5:21 am by lushangshiyi · #5
admin
2,930 Posts
September 10, 2019, 9:08 am
We support remote disk replication via the web UI in 2.3.1; it uses snapshots to send diffs according to a schedule you set up. We are also adding backups (to local and remote repositories) in 2.4.0.
If you need to do anything custom, you can always use the Ceph CLI commands to do anything you want.
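If you do go the custom route, below is a minimal sketch of snapshot-based cloning using the python-rbd bindings (the pool and image names are hypothetical; the CLI equivalents are rbd snap create, rbd snap protect and rbd clone). Note that cloning requires the layering feature on the parent image:

```python
import rados
import rbd

# Connect to the cluster using the standard config file.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # hypothetical pool name
    try:
        with rbd.Image(ioctx, 'disk-00001') as img:  # hypothetical image name
            img.create_snap('base')    # point-in-time snapshot
            img.protect_snap('base')   # clones require a protected snapshot
        # Create a copy-on-write clone of the snapshot as a new image.
        rbd.RBD().clone(ioctx, 'disk-00001', 'base', ioctx, 'disk-00001-clone')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```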
Last edited on September 10, 2019, 9:10 am by admin · #6
lushangshiyi
8 Posts
September 23, 2019, 7:48 am
Last edited on September 24, 2019, 1:41 pm by lushangshiyi · #7