Not able to log in to PetaSAN GUI: 500 Internal Server Error
Bala
13 Posts
October 12, 2024, 1:42 am
Quote from Bala on October 12, 2024, 1:42 am
I am not able to log in to the PetaSAN GUI.
Below is the error I am getting:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
When I logged in through PuTTY and checked the status of the PetaSAN admin service, I found this error:
root@uat-ps-node1:~# systemctl status petasan-admin
● petasan-admin.service - PetaSAN Web Management and Administration
Loaded: loaded (/lib/systemd/system/petasan-admin.service; static; vendor preset: enabled)
Active: active (running) since Sat 2024-10-12 07:02:30 IST; 1min 3s ago
Main PID: 38143 (admin.py)
Tasks: 1 (limit: 154504)
Memory: 40.3M
CGroup: /system.slice/petasan-admin.service
└─38143 /usr/bin/python3 /opt/petasan/services/web/admin.py
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: 1/ 5 mclock
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: -2/-2 (syslog threshold)
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: 99/99 (stderr threshold)
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: --- pthread ID / name mapping for recent threads ---
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: 7f9bdc6ca080 / crushtool
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: max_recent 500
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: max_new 500
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: log_file /var/lib/ceph/crash/2024-10-12T01:33:26.443869Z_318a5679-bcd6-4789-a9be-bf8bdfa381b5/log
Oct 12 07:03:26 uat-ps-node1 admin.py[38418]: --- end dump of recent events ---
Oct 12 07:03:26 uat-ps-node1 admin.py[38417]: Aborted
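For reference, the crash referenced in the log_file path above can be inspected with Ceph's crash module and the service journal; this is only a sketch of the standard commands, reusing the crash ID from that path:
# list crash reports recorded on the cluster
ceph crash ls
# show the full report for the crash seen in the log above
ceph crash info 2024-10-12T01:33:26.443869Z_318a5679-bcd6-4789-a9be-bf8bdfa381b5
# follow the admin service journal for the matching traceback
journalctl -u petasan-admin -n 200 --no-pager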
My cluster health status:
root@uat-ps-node1:~# ceph -s
cluster:
id: 9fb3eca7-2aac-4365-a463-0bc41d80ed8e
health: HEALTH_OK
services:
mon: 3 daemons, quorum uat-ps-node3,uat-ps-node1,uat-ps-node2 (age 31m)
mgr: uat-ps-node3(active, since 54m), standbys: uat-ps-node1, uat-ps-node2
osd: 15 osds: 15 up (since 31m), 15 in (since 31m); 7 remapped pgs
data:
pools: 3 pools, 531 pgs
objects: 1.32M objects, 4.9 TiB
usage: 9.2 TiB used, 31 TiB / 41 TiB avail
pgs: 30146/2635900 objects misplaced (1.144%)
524 active+clean
5 active+remapped+backfill_wait
2 active+remapped+backfilling
io:
recovery: 7.9 MiB/s, 2 objects/s
OSD Status:
root@uat-ps-node1:~# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 2.18320 1.00000 2.2 TiB 1.4 GiB 1.0 GiB 9 KiB 393 MiB 2.2 TiB 0.06 0.00 47 up
1 hdd 2.18320 1.00000 2.2 TiB 1.1 GiB 783 MiB 9 KiB 377 MiB 2.2 TiB 0.05 0.00 34 up
6 ssd 3.59998 1.00000 3.5 TiB 1.1 TiB 1.1 TiB 18 KiB 2.3 GiB 2.4 TiB 31.93 1.41 96 up
9 ssd 1.81926 0.89999 1.8 TiB 664 GiB 663 GiB 21 KiB 1.4 GiB 1.2 TiB 35.64 1.57 55 up
13 ssd 1.86298 0.84999 1.9 TiB 872 GiB 870 GiB 23 KiB 1.8 GiB 1.0 TiB 45.72 2.02 75 up
14 ssd 0.90958 0.90002 931 GiB 247 GiB 246 GiB 12 KiB 779 MiB 684 GiB 26.53 1.17 22 up
2 hdd 2.18320 1.00000 2.2 TiB 1.2 GiB 871 MiB 11 KiB 401 MiB 2.2 TiB 0.06 0.00 43 up
3 hdd 2.18320 1.00000 2.2 TiB 1.4 GiB 1022 MiB 20 KiB 397 MiB 2.2 TiB 0.06 0.00 46 up
8 ssd 1.81926 1.00000 1.8 TiB 708 GiB 706 GiB 18 KiB 1.6 GiB 1.1 TiB 37.99 1.68 62 up
11 ssd 6.98630 1.00000 7.0 TiB 2.3 TiB 2.3 TiB 21 KiB 4.5 GiB 4.7 TiB 33.25 1.47 206 up
4 hdd 2.18320 1.00000 2.2 TiB 1.3 GiB 923 MiB 11 KiB 396 MiB 2.2 TiB 0.06 0.00 44 up
5 hdd 2.18320 1.00000 2.2 TiB 1.3 GiB 903 MiB 22 KiB 402 MiB 2.2 TiB 0.06 0.00 42 up
7 ssd 6.98630 1.00000 7.0 TiB 2.3 TiB 2.3 TiB 21 KiB 4.5 GiB 4.7 TiB 33.41 1.47 206 up
10 ssd 1.81926 1.00000 1.8 TiB 457 GiB 456 GiB 19 KiB 1.2 GiB 1.4 TiB 24.55 1.08 38 up
15 ssd 1.81927 0.90002 1.8 TiB 555 GiB 554 GiB 8 KiB 1.2 GiB 1.3 TiB 29.78 1.31 47 up
TOTAL 41 TiB 9.2 TiB 9.2 TiB 249 KiB 22 GiB 31 TiB 22.66
MIN/MAX VAR: 0.00/2.02 STDDEV: 17.15
Last edited on October 13, 2024, 5:02 pm by Bala · #1
Bala
13 Posts
October 12, 2024, 5:40 pm
Quote from Bala on October 12, 2024, 5:40 pm
I changed the Ceph balancer mode from crush-compat to upmap; after that I am getting the error below.
root@uat-ps-node1:/opt/petasan/log# tail -fn 100 PetaSAN.log
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/PetaSAN/backend/iscsi_service.py", line 366, in __read_resources_consul
reassignments = MangePathAssignment().get_forced_paths()
File "/usr/lib/python3/dist-packages/PetaSAN/backend/mange_path_assignment.py", line 505, in get_forced_paths
assignments = self._filter_assignments_stats(set_session=True)
File "/usr/lib/python3/dist-packages/PetaSAN/backend/mange_path_assignment.py", line 92, in _filter_assignments_stats
images = ceph_api.get_disks_meta()
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/api.py", line 979, in get_disks_meta
raise Exception("Error getting disk metas, cannot open connection.")
Exception: Error getting disk metas, cannot open connection.
12/10/2024 23:07:40 ERROR iSCSIServer : Error during process.
12/10/2024 23:07:40 ERROR Error getting disk metas, cannot open connection.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/api.py", line 967, in get_disks_meta
active_rbd_pools = self.get_active_app_pools_info("rbd")
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/api.py", line 1803, in get_active_app_pools_info
pool_info = self.get_pools_info()
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/api.py", line 1702, in get_pools_info
rules_info = self.get_rules()
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/api.py", line 1499, in get_rules
crush.read()
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/crush_map.py", line 372, in read
self._read_file_lines(backup)
File "/usr/lib/python3/dist-packages/PetaSAN/core/ceph/crush_map.py", line 254, in _read_file_lines
raise CrushException(CrushException.DECOMPILE,'Crush Decompile Error')
PetaSAN.core.common.CustomException.CrushException: Crush Decompile Error
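For context, the balancer mode switch mentioned above is normally done with the standard Ceph balancer commands; a minimal sketch, assuming the built-in balancer module and that all clients speak at least Luminous (which upmap requires):
# upmap mode requires all clients to be at least Luminous
ceph osd set-require-min-compat-client luminous
# check the current balancer state
ceph balancer status
# switch the balancer from crush-compat to upmap and enable it
ceph balancer mode upmap
ceph balancer on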
admin
2,929 Posts
October 14, 2024, 6:59 am
Quote from admin on October 14, 2024, 6:59 am
Can you please post your crush map?
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
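Should the decompiled map later need to be edited and re-injected, the usual round trip is as follows (a sketch only; crushmap-new.bin is just an illustrative file name):
# recompile the edited text map back to binary
crushtool -c crushmap.txt -o crushmap-new.bin
# sanity-check the compiled map before injecting it
crushtool -i crushmap-new.bin --test --show-statistics
# inject the corrected crush map into the cluster
ceph osd setcrushmap -i crushmap-new.bin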