
Add support for RocketRaid RAID cards


Specifically the RocketRaid 750

Looking into this further, I grabbed a generic Linux kernel and popped it into PetaSAN. The driver happily compiled and ran, but only after it was able to locate the Linux headers for the kernel. So my question is: if it's too much to ask for this to be merged directly, could a package with the Linux headers be made available for folks like me who need to build our own card drivers? Thanks!
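
For reference, once the matching kernel headers are in place, the out-of-tree build is roughly the following (a rough sketch; the source directory name is just an example, and the vendor Makefile may wrap some of these steps itself):

cd r750-source

make -C /lib/modules/$(uname -r)/build M=$(pwd) modules

cp r750.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/

depmod -a

modprobe r750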

The link below is a test build for you that includes the RocketRaid 750 driver, built from the sources mentioned in your earlier post.

https://drive.google.com/file/d/0B7VNYCjYBY2yMHBUNUQyTDdsbFU/view?usp=sharing

We do not have a RocketRaid 750 card, so I would appreciate it if you get a chance to let me know how it worked out. This build has an initial per-node password set to 'password' in case you need to log in. If things go well we can include/build it in future releases.

One thing worth mentioning: we only included the r750 module driver we built; we did not include the vendor-supplied startup/systemd scripts that seem to load/modprobe the driver before udev starts. We rely on the standard udev mechanism to load this driver, but do let me know if there is an issue.
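
If the disks do not show up after a reboot, a quick sanity check is to see whether the module actually got loaded, for example:

lsmod | grep r750

dmesg | grep -i r750

and if it is not listed, modprobe r750 should load it manually.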

OMG you guys are the best ever!!!!!! I'll test this today!

Success! (mostly). I was able to build out a cluster, it found the disks perfectly, and now I have a nicely humming-along PetaSAN 3-node cluster on 45Drives boxes!

 

The only thing that didn't work out of the box is the Intel 82599ES network cards, which were working before, I think, so I'm not sure what happened there.

Another oddity while testing: I have 44 drives per node, and while the first node added them all successfully, the other two nodes did not add any of their drives to the cluster. If I go to Manage -> Nodes -> Hard Disks and add a disk manually, it says 'adding' but then resets and never adds. So at the moment I have one node that contains all of my data and the other two are just kind of hanging out. It says there are currently 47 OSDs online (out of 3 nodes with 44 drives each, plus system drives).

Hi,

1) Can you try to add a disk to a node manually via Manage -> Nodes -> Hard Disks from the Management UI while logged in to that node itself? Maybe the job that formats the disks runs fine locally but for some reason cannot be run remotely in your case.

2) Can you please gather the following info, zip it and email it to admin @ petasan.org:

On all 3 nodes, gather the following files and directory (using WinSCP, for example):

/opt/petasan/log/PetaSAN.log

/opt/petasan/config/cluster_info.json

/opt/petasan/jobs (directory)

 

The output of the following commands from any node

Replace CLUSTER_NAME with the name of your cluster:

ceph status --cluster CLUSTER_NAME

ceph osd tree --cluster CLUSTER_NAME

ceph osd dump --cluster CLUSTER_NAME

 

The output of the following command from a node that cannot add OSDs:

ceph-disk list
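
If it is easier, running something along these lines on each node should collect everything into one archive per node to attach (a rough sketch; substitute your cluster name for CLUSTER_NAME):

mkdir /tmp/petasan-info

cp /opt/petasan/log/PetaSAN.log /opt/petasan/config/cluster_info.json /tmp/petasan-info/

cp -r /opt/petasan/jobs /tmp/petasan-info/

ceph status --cluster CLUSTER_NAME > /tmp/petasan-info/ceph-status.txt

ceph osd tree --cluster CLUSTER_NAME > /tmp/petasan-info/ceph-osd-tree.txt

ceph osd dump --cluster CLUSTER_NAME > /tmp/petasan-info/ceph-osd-dump.txt

ceph-disk list > /tmp/petasan-info/ceph-disk-list.txt

tar czf /tmp/petasan-info-$(hostname).tar.gz -C /tmp petasan-info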

 

3) You had mentioned earlier that you do not get the blue console screen... is this still the case?

I tried using a different node's management IP address and then tried to add a disk... same thing. It shows 'adding', then after about 10-20 seconds it goes back to the Add Disk page with nothing added. In the logs it shows 'add disk command' -> 'scrubbing ceph disk', then nothing after that related to adding the drive. If I SSH into the node and run cfdisk, it shows the disk as partitioned for Ceph, so that portion seems to run OK.
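
For reference, I was just watching the log and the partition table while retrying (the device name here is only an example):

tail -f /opt/petasan/log/PetaSAN.log

cfdisk /dev/sdb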

I was also able to mount iSCSI disks and write/read files just fine... as long as the node with all the drives was online and available.

I'm going to try presenting the nodes with a single ZFS block device instead of the raw individual HDDs to see if that clears things up a bit. If that doesn't work, I'll see about getting you those log files.

The blue screen is hit or miss. On some boots it shows up right away, on others after a few minutes, and on still others not at all.

1) It could be a CPU resource issue with this number of drives. The recommended hardware is 1 core and 1.5 to 2 GB of RAM per OSD. You can combine several disks into RAID 0 or RAID 10 to lower the number of OSDs per node. For an 8-core node, try to have 8 OSDs in total.
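
As a rough worked example of that guideline: with 44 OSDs per node you would want on the order of 44 cores and roughly 66 to 88 GB of RAM per node (44 x 1.5 GB up to 44 x 2 GB), which is why combining disks into fewer RAID sets helps.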

Another way that gives you more control is deploying the 3 nodes without selecting the "Local Storage Service" role (step 5 in the wizard; you need to uncheck the selection). This will build the cluster without adding any OSDs. Then from Node List -> Actions -> Node Role you can check the storage role, and add OSDs manually from the Physical Disk List page on the different nodes.

 

2) The blue node console not showing is strange; it is best to disable it:

systemctl disable petasan-console

systemctl stop petasan-console

then reboot

 

3) In case you do not solve this, the files requested earlier will help us a lot.

Hello,

Do you use this driver for the HighPoint Rocket 750 HBA?

http://www.45drives.com/support/downloads/downloads.php

Please note: these are special drivers for our particular setup with two Rocket 750 HBAs.
