Error: Unable to convert string [Mon, 28 Nov 2022 14:16:13 GMT] with date format
Ingo Hellmer
19 Posts
November 2, 2022, 7:12 am
I have installed PetaSAN 3.1 and configured everything according to the published documentation. We use Veeam Backup & Replication 11 in the latest version with all patches. Most of the backups are moved to PetaSAN as expected, but some we are unable to upload.
So I opened a support case at Veeam and they answered the following:
I can see that the device is in the unofficial support list:
Ceph (14.2.22 or later, 15.2.14 or later, 16.2.5 or later) COMMUNITY ENTRY Currently ISSUES with the date format that prevents correct processing
The sidenote points to a GitHub pull request that is intended to fix the issue (other Veeam customers have confirmed this, by all appearances):
https://github.com/ceph/ceph/pull/43656
I can see that this particular fix was merged on March 1st 2022, and is available for backports and upstream:
https://github.com/ceph/ceph/blob/main/src/rgw/rgw_rest_s3.cc#L520-L521
The best course of action now would be to backport the specific fix, or upgrade to a more recent version of Ceph.
admin
2,930 Posts
November 2, 2022, 11:56 am
Thanks for the detailed info.
The quickest way will be to backport the fix into the current release so we do not need to go through regression tests. I will post here in the next few days.
Ingo Hellmer
19 Posts
November 3, 2022, 6:53 am
Thank you for the quick response.
admin
2,930 Posts
November 3, 2022, 2:04 pm
Can you test the following libradosgw.so.2.0.0 file:
https://drive.google.com/file/d/1D1rL_r0ylw_kzH6kit7YM7uVM_6jNOA7/view?usp=sharing
This should have the patch you refer to. Place it in /usr/lib/ (overriding the existing file), then restart the rgw service.
Note that we have not tested it; I recommend you back up the original file first.
Please let us know if it fixed the issue.
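In other words: back up the original, copy the patched file over it, then restart the gateway. A rough sketch of the steps in Python (untested; the download path and the systemd unit name are assumptions and may differ on your nodes):

import shutil
import subprocess

patched = "/root/libradosgw.so.2.0.0"           # assumed download location
target = "/usr/lib/libradosgw.so.2.0.0"         # path from the post above

# Keep a copy of the original so it can be restored if needed.
shutil.copy2(target, target + ".orig")

# Overwrite the live library with the patched build.
shutil.copy2(patched, target)

# Restart the rados gateway; on stock Ceph the instances are part of
# ceph-radosgw.target, but the unit layout may differ on PetaSAN nodes.
subprocess.run(["systemctl", "restart", "ceph-radosgw.target"], check=True)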
Ingo Hellmer
19 Posts
November 4, 2022, 11:26 am
I have exchanged the file and restarted the nodes, but the problem is still there.
04.11.2022 12:24:26 :: Unable to convert string [Mon, 28 Nov 2022 14:16:13 GMT] with date format [%Y-%m-%dT%H:%M:%S] to DateTime.
Cannot convert date from string in S3VersioningUtils
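For reference, the mismatch is reproducible with just the two strings from that log line: the gateway returns an HTTP-date (RFC 1123), while the format Veeam expects is ISO 8601-style. A minimal Python check:

from datetime import datetime

returned = "Mon, 28 Nov 2022 14:16:13 GMT"   # string sent by the gateway
veeam_format = "%Y-%m-%dT%H:%M:%S"           # format Veeam tries to apply

try:
    datetime.strptime(returned, veeam_format)
except ValueError as err:
    print("parse fails, as in the Veeam log:", err)

# The string is an RFC 1123 / HTTP-date and parses with this format instead:
print(datetime.strptime(returned, "%a, %d %b %Y %H:%M:%S %Z"))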
admin
2,930 Posts
November 4, 2022, 6:39 pm
Too bad, the file was built with the following patch applied:
https://github.com/ceph/ceph/commit/1db1caf3426e0ce2e58cfaa755a658a1e2e17180
Wait a few days; we will provide a complete build of Octopus 15.2.17, which was released in August and should include the fix.
admin
2,930 Posts
November 9, 2022, 2:20 pm
This is a complete build of 15.2.17 which has the fix backported. There is a setup script included. Again, this is not something we have tested, so if you have a lab environment I recommend you try it there first.
https://drive.google.com/file/d/17aBkVRHBYJcfOQpLHhdEjm66F_g7CHOI/view?usp=share_link
admin
2,930 Posts
November 9, 2022, 3:31 pm
Also make sure you restart the rgw gateway after the upgrade.
Ingo Hellmer
19 Posts
November 13, 2022, 9:47 am
I have updated the nodes and now Veeam uploads all files without errors. Thank you.
Now I have another question. Veeam creates a lot of small objects, so we have BlueFS spillover on all OSDs. We can exchange the SSDs with larger SSDs (960 GB SSD per 14 TB HDD). But I see under Configuration an entry bluestore_block_db_size=64424509440 - do I have to change (or remove) this entry to use a larger size (I think 600 GB is the next useful size after 60 GB) when setting up this test cluster again?
Is there a way to change the time at which deep-scrubbing runs? The storage is heavily used after the backups, and there is plenty of time for deep-scrubbing during normal daytime.
admin
2,930 Posts
November 14, 2022, 8:43 pm
Good it got fixed 🙂, not sure why the first patch did not work.
Strange that you see a lot of small files; I am not sure we observed the same when we tested S3 with Veeam, and I will check this with our testers. It seems odd for Veeam to do this, as large blocks give better performance and S3 is geared for large blocks (the most common S3 target is Amazon, and writing many small objects over the internet does not give good performance).
bluestore_block_db_size is the correct param to change for creating new OSDs. If this is a test cluster and you will build a new one, you can build the new cluster with the Deployment Wizard without adding OSDs, change the param in the Ceph Configuration page, then add the OSDs. Alternatively, you can define this within the Wizard under Cluster Settings under Advanced; this will let you add the OSDs during deployment. If this were not a test cluster, we have 1 or 2 scripts in /opt/petasan/scripts/util for migrating the db to another partition so you could fix it without rebuilding the cluster or OSDs, but I recommend you understand the scripts first and test them before using them on production OSDs.
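For reference, bluestore_block_db_size is given in bytes, so the change described in the question works out as follows (the 600 GB figure is the one from the question, not a recommendation):

GiB = 1024 ** 3
print(60 * GiB)    # 64424509440  -> the current entry (60 GiB per OSD)
print(600 * GiB)   # 644245094400 -> value for a 600 GiB db partition per OSD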
For scrub speed: the most convenient way is to adjust the speed (slow, fast, etc.) from the Maintenance menu -> Scrub Speed. Typically you should monitor your node stats (% disk util, etc.) to make sure you do not overly stress the system. You can also adjust scrub parameters from the Ceph Configuration page, and you can define other parameters like osd_scrub_begin_hour and osd_scrub_end_hour if needed, for example as shown below.
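For example, entries like the following in the Ceph Configuration page (same key=value form as the bluestore_block_db_size entry above) would confine scheduled scrubs to a daytime window; the exact hours are only an illustration:

osd_scrub_begin_hour=8
osd_scrub_end_hour=18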