
iSCSI Issue and no access to console

Hi,

First of all, thanks for sharing your ambitious project with us.
Your project seems to fit our needs, which is why I set up a (small) test environment with unused systems we had on hand.

We started with four physical systems (dual processors, 8 bays). Each system was installed with 2 disks in RAID 1 (for the OS and management) and 2 other disks in JBOD (2x500 GB for storage).

Management, iSCSI 1 and Backend 1 are on eth0, and iSCSI 2 and Backend 2 on eth1 (all are Gigabit ports).

We were using your latest distribution (1.3).

First note: I had some issues when installing your system. These were caused by faulty disks (read/write errors) while building the cluster, which forced us to reinstall (that is also one reason why we began with only 2 storage disks per server).

We finally managed to get a functional PetaSAN cluster with 4 servers (3 monitor, 3 iSCSI, 4 OSD).

Creating an iSCSI disk went smoothly.

The next issue we faced (and are still facing) is connecting to the iSCSI target.
We try to connect with the Windows 10 built-in iSCSI client, but each time we try to define the portal we receive the same error: "Initiator instance does not exist".

The iSCSI disk is exposed via 2 paths on 2 different subnets, both of which can be pinged from the client.

I would like to have a look at the LIO logs on the servers, but unfortunately I cannot log in through SSH and the console mode is no longer offered.
I saw in an older post that I should be able to connect through SSH using root and the cluster password, but it doesn't work.

What can I do to get SSH or console access back on the servers?

Another question: I saw on the Ceph site that they are releasing the Luminous RC, announced as the future long-term support version of Ceph, with some nice features (new or stabilized) like BlueStore.
As I understand it, BlueStore should improve PetaSAN performance, is that right? Do you plan to release a new version with it soon? (And if yes, approximately when?)

Many thanks in advance for your support,

Best regards,

Benoît

We do a lot of testing on Windows, mostly Server 2012/2016, but we also test Win 10/8/7. We have not encountered "Initiator instance does not exist". I did a quick search and it looks to me like a client-side issue: either a configuration problem or the initiator was not installed correctly.

I would recommend trying from a different Windows client first if you can. Otherwise, some things to check:

In the iSCSI Initiator --> Configuration tab, do you see an Initiator Name such as

iqn.1991-05.com.microsoft:mycomputer-name

In Services, is the Microsoft iSCSI Initiator Service started?
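If you want to check this from an elevated command prompt rather than the Services console, something along these lines should work (a quick sketch; MSiSCSI is the service name behind the Microsoft iSCSI Initiator, and iscsicli ships with Windows):

rem Show the current state of the Microsoft iSCSI Initiator Service
sc query MSiSCSI

rem Set it to start automatically and start it now
sc config MSiSCSI start= auto
sc start MSiSCSI

rem If the service is running, this should list the initiator instances it knows about
iscsicli ListInitiators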

Some suggest going into regedit, deleting

HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\iSCSI

then rebooting.
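If you prefer the command line over regedit, one possible way to do the same thing from an elevated prompt is sketched below (back up the key first so you can restore it; the iscsi-backup.reg file name is just an example):

rem Export the key so it can be restored if needed
reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\iSCSI" iscsi-backup.reg

rem Delete the key, then reboot
reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\iSCSI" /f
shutdown /r /t 0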

Another suggestion is to reinstall the driver via Device Manager:

Device Manager -> Storage controllers -> Microsoft iSCSI Initiator -> Update Driver Software -> Browse my computer... -> Let me pick -> Microsoft iSCSI Initiator, install, then reboot.

Again it is much easier if you have a different Windows client to try from.

Apart from the above, most of the issues we see revolve around setting up the network correctly, but since you can ping I would say this is working. Just make sure your 2 iSCSI subnets do not overlap.
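To double-check that, you can dump the interface configuration on a node and compare it with the output of ipconfig /all on the Windows client (eth0/eth1 are the interface names from your description; your addresses will differ):

# On a PetaSAN node: show IPv4 addresses and prefix lengths on the two NICs
ip -4 addr show eth0
ip -4 addr show eth1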

From PetaSAN you can SSH to the nodes using root as the user; the password is your cluster password (not the admin web password). You can use PuTTY + WinSCP.

Once logged in, test that you can ping the other way, from your nodes to both client IPs.
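For example, from each node (the two addresses below are only placeholders for your client's IPs on the two iSCSI subnets):

# Ping the client's address on each iSCSI subnet, 3 packets each
ping -c 3 192.168.101.50
ping -c 3 192.168.102.50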

After your attempt from the client to connect or discover the portal, also check the output of:

dmesg

dmesg | grep -i error
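You can also narrow the kernel log down to iSCSI/LIO-related messages and, assuming targetcli is available on the node, list what the LIO target is currently exposing:

# Only show kernel messages mentioning iSCSI or the target subsystem
dmesg | grep -iE 'iscsi|target'

# Show the configured LIO backstores, targets and portals (if targetcli is installed)
targetcli ls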


Our plan for BlueStore is version 1.5, which will also support SSD caching; it is planned for late September.

Let us know how you progress on your issue. Good luck.

Hi,

Thanks a lot for your fast and clear answer 😉

Concerning my problem with the iSCSI client on Windows 10, I confirm the procedure described above solved my issue (removing the registry key and rebooting the client).
Funnily enough, I had already tried to access the iSCSI disk from another client with the same result... so this issue is not isolated.

The SSH connection is also fixed; I don't know what I messed up before...

I am now able to stress test the new toy and see how it reacts 😉

As this will be used for tests (no need to keep the platform running over the weekend), I'm wondering if there is a specific procedure to follow in order to shut down the whole system.

I'll try to get some numbers concerning the performance.

Again many thanks for your support.


Best regards,


Benoît


Great, it worked 🙂

When I googled this, it did not appear to be common, but someone suggested it could be a virus scanner, or maybe something else installed that corrupted the iSCSI settings.

For performance, the more physical disks you add, the faster your iSCSI disk/LUN will become.


That's what we plan to test.

I also read it could make sense to put the OS & Ceph journal on SSD for more performance.

I'll keep you posted on our feelings with your nice tool.

One thing: we do our performance tests with Windows Server 2012 R2/2016. We did some performance tests on Win 7 and Win 8 and they were significantly slower; I also recall they did not support MPIO, even though the UI appeared to show they did. We did test Win 10 to make sure it works, but not under performance loads; if it does not "truly" support MPIO then it will not be as fast.
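On the Windows Server side, one rough way to check whether MPIO is really claiming the paths (assuming the Multipath I/O feature is installed) is mpclaim:

rem List the disks currently claimed by MPIO and their load-balance policy
mpclaim -s -d

rem Show the individual paths for a given MPIO disk (disk number 0 used as an example)
mpclaim -s -d 0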

Good luck.

Good to know.

Anyway, the Windows 10 client was only used to verify that the setup was working.

Thanks for the clarification.

Best regards,