Reduced data availability: 256 pgs inactive
melefante
4 Posts
December 17, 2019, 4:39 pm
Hi there,
I just installed 4 PetaSAN 2.4.0 nodes on a VMware ESXi 6.7.0 test system on a
DELL Precision 7930
2 x Xeon 6146 CPUs
256 GB RAM
5 TB SSD drives in RAID 5
The installation went without problems; the first 3 nodes are also management nodes, while the 4th is storage only.
I followed this video
https://www.youtube.com/watch?v=CiBWQdX7jlU&t=1087s
When I open the web interface it shows a Ceph health warning with the detail "Reduced data availability: 256 pgs inactive".
I tried to understand what was wrong; I rebuilt the pool several times and changed the size from 4 to 2, but nothing changes.
Can you help me please?
Regards
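For context, the reason PGs are inactive can be inspected from any management node with the standard Ceph CLI; a minimal example, assuming shell access to a node:

ceph health detail           # explains which PGs are inactive and why
ceph pg dump_stuck inactive  # lists the stuck PGs

In this thread the PGs had nowhere to be placed because no OSDs had been created yet, as the reply below explains.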
admin
2,930 Posts
December 17, 2019, 8:25 pm
The video is for an older version where all disks are used for storage. In newer versions you need to add/select the disks to be used for storage, either during deployment of a node (the page that lists the available disks) or, after building the cluster, from the Node List => Physical Disk List. You need to add OSDs on enough nodes: the number of nodes with OSDs must be at least equal to the min size of the pool for it to become active.
Generally it is better not to use RAID and instead expose multiple disks as individual OSDs per node.
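To illustrate, once OSDs exist the pool settings and OSD layout can be checked from any node with the standard Ceph CLI; the pool name "rbd" below is only an example, substitute your own pool:

ceph osd tree                    # OSDs grouped under each host, with their up/down state
ceph osd pool get rbd size       # replica count of the pool
ceph osd pool get rbd min_size   # minimum replicas required for PGs to be active

As noted above, PGs only go active once at least min_size replicas can be placed on distinct nodes.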
melefante
4 Posts
December 18, 2019, 8:49 am
Hi,
First of all, thanks for the quick response.
I would like to ask a favor. I looked at all the PDF manuals on your site, but as far as I can tell they all refer to an older version. If you could create a PDF with the basic steps common to all installations, for easily building a test system, I think it would be useful not only to me but also to everyone attempting a first installation of this system, which I find very interesting.
As for the RAID: the test system I'm using has hardware RAID, but it's also true that I'm creating all the nodes as virtual machines in ESXi, so inside the virtual machines the disks are seen as physical disks. I created 3 disks for each node, the first for the OS and the other two for storage. When creating the nodes I specified that hdb and hdc should be used as storage; if I remember correctly that was the last step before creating the node.
admin
2,930 Posts
December 18, 2019, 10:06 am
Hi,
I am not sure from your post whether this solved your issue; I hope it did 🙂
Thanks for the comments on the manuals. They are lagging a bit, but not too much: the admin guide is for v2.3.0, and the quick start is earlier but does show the updated OSD selection pages in the wizard. The wizard has not changed much up to 2.4.0, where we added prev/next/reset buttons.
Regarding RAID: performance increases as you add more OSDs. The system likes concurrency, so the more OSDs you have the faster it gets, and it also handles redundancy itself. Of course, in a virtual environment where several virtual disks can map to one real disk you will not get any benefit; on the contrary, you may actually get lower performance.
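A quick way to see how data and load spread across the individual OSDs (and hence why more OSDs help) is the per-OSD utilization view; these are standard Ceph CLI commands and work on any node:

ceph osd df tree    # size, usage and PG count per OSD, grouped by host
ceph -s             # overall cluster state, including active PG counts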
melefante
4 Posts
December 18, 2019, 3:37 pm
Hi,
I solved the problem and assigned the right OSDs to create the volume. Thanks, your explanation was really important.
I then created the iSCSI network.
I created the iSCSI volume.
What I would like now is to have several systems (Windows, Mac, Linux) all connect to the same volume and access the same files at the same time, concurrently.
I tried connecting a Windows client (which connects perfectly) and the volume is visible. The problem is that the volume is not formatted, and if I format it from Windows the other systems can't connect at the same time and access the same files.
Could you please explain how I should do this?
I've always worked on StorNext and HyperFS SANs, but in those cases the storage is formatted directly with SNFS or HyperFS, and every machine accesses the SAN through a specific client that lets it read that filesystem.
In this case I don't understand how I can do it.
Sorry for my ignorance, but I'm trying to understand how to use Ceph.
admin
2,930 Posts
December 18, 2019, 6:27 pm
A SAN works at the block/disk level; you need to format it just as you would when installing a real disk.
In the future we will be adding filesystem / shares NAS support, but currently we provide a SAN.
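For illustration, from a Linux client the usual open-iscsi workflow to attach and format a LUN looks roughly like this; the portal IP, IQN and device name are placeholders, and a single-host filesystem such as XFS still allows only one client to use the volume at a time:

iscsiadm -m discovery -t sendtargets -p 192.168.65.100          # discover targets on the iSCSI subnet IP
iscsiadm -m node -T iqn.2016-05.com.petasan:00001 -p 192.168.65.100 --login
mkfs.xfs /dev/sdX                                                # format the new LUN like a local disk
mount /dev/sdX /mnt/petasan-lun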
Last edited on December 18, 2019, 6:28 pm by admin · #6
melefante
4 Posts
December 18, 2019, 9:49 pm
Yes, I know, but if I format the volume with a filesystem like NTFS or ext4 the access is exclusive to one client, because those filesystems do not support multiple clients accessing them concurrently (two iSCSI clients can't use the same target because the filesystem doesn't support it).
Usually I work with SANs like Quantum, Scale Logic and Elements; all those systems work with paid filesystems like StorNext or HyperFS.
Those are all clustered filesystems, developed for high-performance multi-client access, because they are often used for movies stored as frame sequences.
I want to try to replicate that configuration on your system, so my final goal is to use a clustered filesystem to allow multiple-client access on PetaSAN. For my purposes NAS support is not useful, because it is file-based access and the latency is usually much higher than at the block/disk level.
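For what it's worth, on Linux the common way to get concurrent access to one shared iSCSI LUN is a cluster filesystem such as OCFS2 or GFS2 on the clients themselves (the SAN only provides the block device). A very rough OCFS2 sketch, assuming the o2cb cluster stack is already configured on every client and /dev/sdX is the shared LUN; note that OCFS2/GFS2 are Linux-only, so a mixed Windows/Mac/Linux setup would still need a cross-platform product like StorNext with its own client software:

mkfs.ocfs2 -L shared-vol /dev/sdX        # format once, from one client only
mount -t ocfs2 /dev/sdX /mnt/shared      # mount on every client in the o2cb cluster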