PetaSAN V3 upgrade hard failures
podilarius
44 Posts
December 16, 2021, 8:47 pm
Quote from podilarius on December 16, 2021, 8:47 pm
This is a continuation from the General discussion on upgrade failures.
I had to re-create the MDS and MGR folders in /var/lib/(mgr/mds)/ceph-<nodename>.
Once these were re-created, the ceph status is HEALTH_WARN:
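Roughly, the recreation looked like the following (a sketch only, assuming the standard Ceph layout under /var/lib/ceph; the exact paths and keyring steps may differ on your cluster):
NODE=$(hostname -s)
# re-create the per-node mgr and mds directories
mkdir -p /var/lib/ceph/mgr/ceph-$NODE /var/lib/ceph/mds/ceph-$NODE
# the daemon keyrings must also be present, e.g. re-exported from the cluster
ceph auth get mgr.$NODE -o /var/lib/ceph/mgr/ceph-$NODE/keyring
ceph auth get mds.$NODE -o /var/lib/ceph/mds/ceph-$NODE/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/ceph-$NODE /var/lib/ceph/mds/ceph-$NODE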
root@cph-svr1:~# ceph -s
cluster:
id: ---
health: HEALTH_WARN
1/3 mons down, quorum cph-svr1,cph-svr3
services:
mon: 3 daemons, quorum cph-svr1,cph-svr3 (age 58m), out of quorum: cph-svr4
mgr: cph-svr3(active, since 58m), standbys: cph-svr1
mds: labfs:1 {0=cph-svr2=up:active} 2 up:standby
osd: 14 osds: 10 up (since 58m), 10 in (since 30h)
data:
pools: 14 pools, 968 pgs
objects: 1.14M objects, 1.3 TiB
usage: 2.0 TiB used, 3.4 TiB / 5.5 TiB avail
pgs: 968 active+clean
Now when trying to access the dashboard, I get a 504 Gateway Timeout.
So I cannot web manage the system.
S3, iSCSI, NFS, and CIFS service will not start.
The alias IPs for S3 and iSCSI (and for NFS and CIFS) do not show up on the server.
It seems like the upgrade removed some data or is not loading it.
podilarius
44 Posts
December 17, 2021, 2:37 pm
Quote from podilarius on December 17, 2021, 2:37 pm
Got my cluster back online, and all services appear to be running.
There seems to have been an issue with python after the upgrade.
I had to re-link python to python3.8 in the /usr/bin directory:
cd /usr/bin
ln -s python3.8 /usr/bin/python
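For reference, a slightly safer version of the same fix (a sketch, assuming python3.8 is the interpreter shipped with the 3.0 release) that only creates the link if it is missing and then verifies it:
# re-create /usr/bin/python only if it is missing, then confirm where it points
if [ ! -e /usr/bin/python ]; then ln -s /usr/bin/python3.8 /usr/bin/python; fi
readlink -f /usr/bin/python   # should print /usr/bin/python3.8
python --version              # should report Python 3.8.x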
I haven't completed a reboot test, as I have some testing to complete before I do that.
Everything seems to be working except for node 4, which has a hardware issue.
I am going to try to re-install that one with the option to replace the management server.
ceph -s
cluster:
id: ----
health: HEALTH_WARN
1/3 mons down, quorum cph-svr1,cph-svr3
services:
mon: 3 daemons, quorum cph-svr1,cph-svr3 (age 19h), out of quorum: cph-svr4
mgr: cph-svr3(active, since 19h), standbys: cph-svr1
mds: labfs:1 {0=cph-svr2=up:active} 2 up:standby
osd: 10 osds: 10 up (since 19h), 10 in (since 2d)
rgw: 3 daemons active (cph-svr1, cph-svr2, cph-svr3)
task status:
data:
pools: 14 pools, 968 pgs
objects: 717.14k objects, 1.1 TiB
usage: 1.8 TiB used, 3.7 TiB / 5.5 TiB avail
pgs: 968 active+clean
io:
client: 1.1 KiB/s rd, 38 KiB/s wr, 1 op/s rd, 3 op/s wr
admin
2,930 Posts
December 17, 2021, 4:33 pm
Quote from admin on December 17, 2021, 4:33 pm
- Can you describe your hardware: RAM, CPU, disk type (SSD/HDD), and configuration (journals/cache)?
- The 14 pools: are they all 3x replication? Any 2x replication? Any EC, and if so, with what profiles?
- How did you determine earlier that the backend network was not up? How did you manually bring it up so the Ceph services would work?
- Did you do any manual customizations prior to the upgrade?
- What was the reason for rebooting after the upgrade?
- It is strange that you had to manually run ln -s python3.8 /usr/bin/python, as this is part of the upgrade script. Did you run the upgrade script, and did it complete successfully? Note that if this link is not defined, many things will go wrong.
- What was the status of the cluster before running the upgrade: warning or error?
- Have you changed the recovery speed from the default? As per the upgrade guide, we do not recommend running the upgrade while the cluster is recovering, as rebalance traffic can put a lot of load on the cluster; in your case you are moving 1/4 of your data.
Last edited on December 17, 2021, 5:29 pm by admin · #3
podilarius
44 Posts
December 20, 2021, 4:51 pm
Quote from podilarius on December 20, 2021, 4:51 pm
1. Memory: 32GB; CPU: Intel Xeon E5-2630; 5x 146GB 10K HDDs and 6x 1TB SSDs (the lab is mixed, as we are just testing low amounts of data) ... no journals or cache.
2. The metadata pools are 3x replica (10 of them), and there are 4 EC (2,1) pools for data on S3 and iSCSI.
3. I ran "ip a" and saw that the backend interfaces didn't have IPs on them. I manually brought them up by adding config for the backend network to the /etc/network/interfaces file (a sketch of such a stanza follows this list).
4. No, I have upgraded this lab system from 2.6.x to 2.7 and then to 2.8 without issues.
5. Since this was a new version with presumably a new kernel, a reboot should be done to load the new kernel.
6. Strange, I agree. The upgrade script ran successfully, going by the screen output. Noted; many things did go very wrong.
7. Cluster status at upgrade time was Warning; Node 4 was offline due to a hardware failure.
8. Yes, I did; I set it to medium. Node 4 has been out of the cluster, and the rebalance had completed weeks before the upgrade.
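For context, the stanza I added was along these lines (a generic sketch only; the interface name and addresses below are placeholders, not this cluster's actual backend subnet):
# example /etc/network/interfaces stanza for one backend interface (placeholder values)
auto eth1
iface eth1 inet static
    address 10.0.2.11
    netmask 255.255.255.0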
I also have to note that I have replaced the OS disk in node 4, and the new 3.0 install is not obeying /etc/udev/rules.d/70-persistent-net.rules.
I have to re-order which cards are eth0 and eth1, and only on node 4.
This seems to be an issue in 20.04, and there are workarounds I am going to try.
The cables are in the PCI card, and I am 50 miles away from the server.
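For anyone hitting the same renaming issue, the rules file just maps MAC addresses to interface names, along these lines (a generic sketch with placeholder MACs, not the ones from this node):
# /etc/udev/rules.d/70-persistent-net.rules (placeholder MAC addresses)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth1"
Other workarounds commonly mentioned for 20.04 are systemd .link files under /etc/systemd/network/ or booting with net.ifnames=0, which are the kinds of things I plan to try.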
admin
2,930 Posts
December 20, 2021, 10:17 pm
Quote from admin on December 20, 2021, 10:17 pm
Thanks for the info.
I think the culprit is point 6. I do not know why, in your case, the upgrade script failed to create the python symlink; a successful run of the script should have done it. This error led to some python scripts not running correctly, and hence the IPs not being configured correctly on boot. We will keep an eye out in case this error happens to other users; if you find any more info at your end, it would be great to know. Just curious: did you have any other manually installed software that may have conflicting python version requirements?
The udev interface naming script does work in 20.04. Note that changing hardware and/or re-installing (especially with a different version) can lead to interface names being re-arranged, so it is best to always check after installation whether you need to rename them. You can do so by manually editing the udev file or from the blue screen node console.
podilarius
44 Posts
December 21, 2021, 2:48 pm
Quote from podilarius on December 21, 2021, 2:48 pm
No, I have not installed anything other than PetaSAN on these boxes.
admin
2,930 Posts
December 23, 2021, 11:43 pm
Quote from admin on December 23, 2021, 11:43 pm
Could it be that one of the nodes was forcibly shut down or power cycled quickly after the upgrade script ran? This could lead to written data not being flushed to disk.