
Difficulty recovering one of three servers after power outage


To confirm my hardware situation: I have three Dell PowerEdge 1950s, each with a pair of drives. I also have a PowerEdge 2950 in the rack with six drives of varied capacities.

Will this hardware prevent me from creating the SAN infrastructure I want?

If I reinstall the application, should it be done one node at a time rather than en masse, so that the reconfiguration can recapture the prior settings?

See attached for review and comment.

Thank you.

Attachment: Node 2 view

In your case you can use one disk for the OS and one as an OSD. The cluster will have 3 OSDs in total, so performance will be low. To get better performance, try to add more disks to each node; the more the better.

If you want to re-install: deploy the first node with "Create a New Cluster", then the other two with "Join Existing Cluster", pointing them at the first node.
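Once the first node is created and the other two have joined, one way to confirm the expected layout is to query Ceph directly from a shell on any cluster node, since PetaSAN runs Ceph underneath. A minimal sketch, assuming the ceph CLI is available on the node:

    import subprocess

    # Cluster-wide health summary; with one OSD disk per node we expect
    # 3 OSDs reported once all three nodes have joined.
    subprocess.run(["ceph", "-s"], check=True)

    # OSD layout per host: each node should show one OSD, all "up" and "in".
    subprocess.run(["ceph", "osd", "tree"], check=True)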

Good luck.

Thanks for the advice on this. Will PetaSAN know to make one disk the OS disk and the other an OSD? I cannot recall whether, or where, the setup asked me this. When I look in the UI at the node role settings, all three roles are already checked, and in the disk list no options are offered.

 

No, it will not. In the installer you choose the OS disk. During deployment, check the Local Storage role and you will be able to add the second disk as an OSD. You can also add the second disk as an OSD later, in the Physical Disk List after deployment, as described earlier.
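As a sanity check that the node really exposes two separate disks to choose from (one for the OS, one left free for the OSD role), you can list the block devices from a shell on the node. A small sketch, assuming standard Linux tooling:

    import subprocess

    # List whole disks only (no partitions): expect two devices, the OS disk
    # plus a second, unused disk that can be assigned the OSD role.
    subprocess.run(["lsblk", "-d", "-o", "NAME,SIZE,TYPE,MOUNTPOINT"], check=True)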

FYI: the screenshot you posted is not a valid link, can you please repost it?

OK, thanks for the pointer on the disk role selection. I will keep an eye out for the add-OSD option and let you know how I make out. The screenshot I sent was a PNG image from my local computer; I will attach it again.

Attachment: Disk list screen shot

Hello admin, good news: the cluster with all three nodes is now up, and from the UI I can see all disks and OSDs, activate them, and add iSCSI disks. A great improvement.

I want to take this time to thank you for your patience and perseverance through this process of getting the PetaSAN application running. I realize it was certainly frustrating for you to know what I am supposed to see while I was not seeing what this configuration is intended to show.

I think the source of the problem was that on the Dell servers I had RAID-1 active, with the two spindles part of a single virtual disk. Before re-installation, I deleted the RAID-1 virtual disk and created two RAID-0 virtual disks, each with one spindle underneath. Once the installation started on each server, the application found the two disks and I was then able to assign one disk the system role and the other the OSD role. The entities I should see were then, and are now, visible in the UI from any of the three servers in the cluster.

Now the business of actually accessing the cluster as a storage device is next, and I have a couple of questions, please. I have vSphere 6.7 systems as well as Windows 7 and 10 PCs that will access the cluster. I started the iSCSI initiator on my Windows 10 PC, but I am not certain which host address to connect to. On my PowerEdge servers I have two NICs, with eth0 in the 192.168.0.0/24 network and eth1 in the 10.250.253.0/24 network. I created a single iSCSI disk in the iSCSI Disk List, and the disk shows two paths:

IP            Assigned Node
10.0.2.100    peta-san-03
10.0.3.100    peta-san-02

So, if 192.168.0.0/24 is the management network and 10.0.2.0/24 and 10.0.3.0/24 are destinations not present on my network, am I supposed to route to them through my eth1 host addresses as the next hop?

Thanks again for your efforts thus far in getting this deployment working.
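For reference, the two path IPs listed above (10.0.2.100 and 10.0.3.100) do not fall inside either of the subnets described in this thread (192.168.0.0/24 and 10.250.253.0/24), which is presumably why the initiator cannot reach them as things stand. A minimal check using only the values quoted above:

    import ipaddress

    # Subnets configured on the PowerEdge NICs (eth0 and eth1, per the post above).
    local_subnets = [ipaddress.ip_network("192.168.0.0/24"),
                     ipaddress.ip_network("10.250.253.0/24")]

    # Path IPs that were auto-assigned to the iSCSI disk.
    path_ips = [ipaddress.ip_address("10.0.2.100"),
                ipaddress.ip_address("10.0.3.100")]

    for ip in path_ips:
        on_local_net = any(ip in net for net in local_subnets)
        print(ip, "is on a configured subnet" if on_local_net
                  else "is NOT on any configured subnet")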

There is a page in the UI to assign the iSCSI subnets and the auto IP range: edit those to match the IPs you are using, and new iSCSI disks will get the correct IPs. The current disk will still be running with its old IPs; either delete it or, if you need it, do a stop/detach/attach from the actions button to re-assign it the new IPs.

Thanks. To confirm, I should set the iSCSI disk addresses to match the interface addresses I configured?

Is the stop/detach/attach a per-node action, or is it performed on one node and propagated to the rest of the nodes in the cluster?

The iSCSI Settings define the IPs that will be assigned to iSCSI disks. Of course, if you want your iSCSI clients to connect to these disks, they need to be on the same IP subnet.

The stop/detach/attach is for making already-running disks restart with the new iSCSI settings you saved; otherwise they will keep their old IPs. In your case, since you are testing, it may be easier to just delete the old disks.

iSCSI disk actions are cluster-wide.
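Once the iSCSI subnets are corrected and the disk has been recreated (or stopped/detached/attached) so its paths land on your real network, a quick check from the client before touching the Windows initiator is to confirm the path IPs answer on TCP port 3260, the standard iSCSI target port. A sketch with placeholder addresses; substitute the path IPs your disk actually shows:

    import socket

    # Placeholder path IPs; replace with the addresses listed for your iSCSI
    # disk after it is recreated on the correct subnet.
    portal_ips = ["10.250.253.201", "10.250.253.202"]
    ISCSI_PORT = 3260  # standard iSCSI target port

    for ip in portal_ips:
        try:
            with socket.create_connection((ip, ISCSI_PORT), timeout=3):
                print(ip, "is accepting connections on the iSCSI port")
        except OSError as err:
            print(ip, "is not reachable:", err)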
