SYBASE Cluster on RedHat Pacemaker Cluster

I am trying to set up a SYBASE cluster on RedHat 7.5 using Pacemaker. I want Active/Passive mode, where SYBASE runs on only a single node at a time. The configuration works fine initially, but when the standby node reboots, the SYBASE resource tries to start on node 2, which should not happen while it is up and running on node 1.
I have configured Pacemaker as:
- lvm-sybasedev-res and lvm-databasedev-res provide shared-volume (iSCSI) access on whichever node SYBASE is running at the time.
- The sybase-res resource was created using the command below:
> pcs resource create sybase-res ocf:heartbeat:sybaseASE server_name="SYBASE" db_user="sa" \
db_passwd="password" sybase_home="/global/sdp/sybase" sybase_ase="ASE-15_0" \
sybase_ocs="OCS-15_0" interfaces_file="/global/sdp/sybase/interfaces" \
sybase_user="sybase" --group sybase-rg --disable
The resulting resource group looks like this:
Resource Group: sybase-rg
lvm-sybasedev-res (ocf::heartbeat:LVM): Started sdp-1
lvm-databasedev-res (ocf::heartbeat:LVM): Started sdp-1
sybase-IP (ocf::heartbeat:IPaddr2): Started sdp-1
sybase-res (ocf::heartbeat:sybaseASE): Started sdp-1
I have colocation constraints set up to keep all resources in the sybase-rg resource group on the same node.
I was expecting that if sybase-rg is up and running on node 1 (sdp-1), a reboot of node 2 (sdp-2) should not affect sybase-res, because it is the inactive node that is rebooting.
Am I missing something? Any help is welcome.
Regards,
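One thing worth checking (a guess, since it is not shown in the configuration above): Pacemaker's default resource-stickiness is 0, so the policy engine is free to relocate resources whenever it re-evaluates placement, for example when a node rejoins the cluster after a reboot. A non-zero stickiness tells it to prefer keeping resources where they already run:
> pcs resource defaults resource-stickiness=100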

Related

How to solve error CRS-0223: resource has placement error in Oracle RAC?

I have an error in an Oracle Database 11g Real Application Clusters setup. I have 2 nodes, node1 and node2.
When I checked the services, I found the instance on node2 is not running:
> srvctl status database -d db
instance ins1 is running on node node1
instance inst2 is not running on node2
When I checked the services, some were offline:
>crs_stat -t
ora.node2.gsd target=offline state =offline
ora.node2.ASM2.asm state=offline
ora.node2.inst2 state=offline
I tried to start the services using the following command:
>crs_start ora.node2.gsd
but I always get this error:
crs-0223 : resource has placement error
How do I solve this error and start up the instance on node 2?
I restarted the servers and everything is working fine without errors.
I think the cluster connection between the nodes was lost and the nodes needed to be restarted.
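If a full server reboot is not an option, a lighter-weight sketch of the same recovery (assuming 11g Clusterware, and taking the database name "db" and instance name "inst2" from the output above) is to restart only Clusterware on the affected node and then start the instance:
> crsctl stop crs (run as root on node2)
> crsctl start crs
> srvctl start instance -d db -i inst2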

Solr - lost configuration after recovering Zookeeper

I just inherited a Hadoop cluster (I had never worked with Hadoop before) consisting of 7 servers and administered through Ambari.
Today Ambari lost its heartbeat with all services on server3; the ZooKeeper services (hosted on servers 1, 2, and 3), ZKFailover (hosted on servers 1 and 2), and the ZooKeeper clients (hosted on servers 4, 5, 6, and 7) stopped and all refused to start. This also caused the Solr services to stop working.
After some investigating I found that ZooKeeper on server3 was erroring on a recent snapshot due to a CRC problem. After some more reading I removed the old snapshot files in .../zookeeper/version-2/ and ran 'zk -formatZK' (on server1). The ZooKeeper services are now able to start, and heartbeats from server3 are being received.
The problem I see now is that all the Solr services are no longer configured properly: "...ZooKeeperException: Could not find configName for collection xxxx found:null". I haven't had much success figuring out how to get the previous Solr configurations back into ZooKeeper. I'm trying to use the 'zkcli.sh' found in the Solr directory at '/opt/solr/xxxx/scripts/cloud-scripts/', but it doesn't seem to work like the zkCli described in the Hadoop documentation.
My question is: how do I set up the Solr servers using the existing config files? If I can't, how can I go about reconstructing the following configuration?
core --+-- shard1 --+-- server5
       |            `-- server7
       `-- shard2 --+-- server4
                    `-- server6
Thanks.
So after trial and error I found that zkcli.sh should be used in the following manner:
./zkcli.sh -zkhost server1:2181,server2:2181,server3:2181 -cmd upconfig -confdir .../solr/<corename>/conf -confname <configfilename>
This should upload any existing config to all ZK nodes.
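If a collection still reports "Could not find configName" after the upload, zkcli.sh also has a linkconfig command that ties an uploaded config set to a collection (the collection and config names below are the same placeholders as in the command above):
./zkcli.sh -zkhost server1:2181,server2:2181,server3:2181 -cmd linkconfig -collection <corename> -confname <configfilename>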

Which node is running Cloudera Manager out of N hadoop nodes?

I have a large Hadoop cluster (24 nodes) and CLI access to all of them. The first few I checked are not running Cloudera Manager (cloudera-scm-server).
How can I find out which node is running Cloudera Manager?
Any help is appreciated.
Cloudera Manager has two kinds of services: one Server and many Agents.
As you said, you have CLI access to all the nodes, so run the command below on each node to find out which one is the Server (it will be running on only one machine):
sudo service cloudera-scm-server status
Another simple method to find the CDH Server address:
SSH to any node and look in /etc/cloudera-scm-agent. The config.ini file there contains the server_host address.
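To avoid logging in to 24 machines by hand, a small loop can do the check in one pass (the node1..node24 hostnames are an assumption; substitute your own):
for h in node{1..24}; do
  echo "== $h =="
  ssh "$h" 'grep ^server_host /etc/cloudera-scm-agent/config.ini'
done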

Redis on Windows cluster setup

I have downloaded MSOpenTech Redis version 3.x, which includes the long-awaited clustering feature. My Redis database is working and I can start my cluster on the minimum 3 nodes required (in cluster mode). Does anyone know how to configure the cluster (it seems no one does)?
Installing Linux and running the native Linux version is sadly not an option for me.
Any help would be greatly appreciated.
You can follow the Redis Cluster Tutorial and to create the cluster you can use the redis-trib.rb ruby script, for which you need to install Ruby for Windows.
For example:
> C:\Ruby22\Bin\ruby.exe redis-trib.rb create --replicas 1 192.168.1.1:7000 192.168.1.1:7001 192.168.1.1:7002 192.168.1.1:7003 192.168.1.1:7004 192.168.1.1:7005
I did not have the option to install Ruby on Windows, but found that the manual steps below worked for me. The Ruby script does a lot of checking that things are set up correctly and is the preferred setup route, so beware: here be dragons.
Set each node to run in Cluster mode. Edit the redis.windows-service.conf file and uncomment
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
then restart the service.
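For example, from an elevated PowerShell prompt (the service name Redis is an assumption based on the default install; check yours with Get-Service *redis*):
Restart-Service -Name Redis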
Open a PowerShell window, change to the Redis installation folder, and start redis-cli, e.g.:
cd "C:\Program Files\Redis"
.\redis-cli.exe
Now you can join the other nodes. Run CLUSTER MEET IPADDRESS PORT for each of the nodes other than the instance you happen to be on, e.g.:
CLUSTER MEET 10.10.0.2 6379
After a few seconds, running
CLUSTER NODES
should list all the connected nodes, but all of them will be marked as MASTER.
On each of the other nodes, run CLUSTER REPLICATE MASTERNODEID, where MASTERNODEID is the hash-looking value next to the node declared "myself" on your master when running CLUSTER NODES, e.g.:
CLUSTER REPLICATE b7c767ab3ab7c4a926ac2fed937cf140b96764a7
Now allocate slots to the master. My setup has three instances with only one master:
for ($slot=0;$slot -le 16383;$slot++) {
.\redis-cli.exe -h REDMST CLUSTER ADDSLOTS $slot
}
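Calling redis-cli once per slot spawns 16,384 processes and can take a while. CLUSTER ADDSLOTS accepts multiple slot arguments, so a batched variant (a sketch; REDMST is the same placeholder master hostname as above) is much faster:
# send slots in batches of 1024 instead of one call per slot
for ($start = 0; $start -le 16383; $start += 1024) {
    $end = [Math]::Min($start + 1023, 16383)
    .\redis-cli.exe -h REDMST CLUSTER ADDSLOTS ($start..$end)
}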
Reconnect with redis-cli and try to save data, e.g.:
SET foo bar
OK
GET foo
"bar"
Phew! I got most of this from reading https://www.javacodegeeks.com/2015/09/redis-clustering.html#InstallingRedis, which is not Windows-specific.
For the Windows version:
Open a command window and type the command below:
C:\Program Files\Redis>FOR /L %i IN (0,1,16383) DO ( redis-cli.exe -p 6380 CLUSTER ADDSLOTS %i )
where 6380 is the port of the master node.

Percona Xtradb Cluster nodes won't start

I set up percona-xtradb-cluster-56 with three nodes in the cluster. To bootstrap the first node, I use the following command and it starts just fine:
#/etc/init.d/mysql bootstrap-pxc
The other two nodes, however, fail to start when I start them normally using the command:
#/etc/init.d/mysql start
The error I am getting is "The server quit without updating the PID file". The error log contains this message:
Error in my_thread_global_end(): 1 threads didn't exit 150605 22:10:29
mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended.
The cluster nodes are all running Ubuntu 14.04. When I use percona-xtradb-cluster-5.5, the cluster and all the nodes run just fine as expected, but I need version 5.6 because I am also using GTID, which is only available in 5.6 and not supported in earlier versions.
I followed these two Percona documents to set up the cluster:
https://www.percona.com/doc/percona-xtradb-cluster/5.6/installation.html#installation
https://www.percona.com/doc/percona-xtradb-cluster/5.6/howtos/ubuntu_howto.html
Any insight or suggestions on how to resolve this issue would be highly appreciated.
The problem is related to memory, as "The Georgia" writes. There should be at least 500MB of free memory for the default setup and bootstrapping. See http://sysadm.pp.ua/linux/px-cluster.html
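As a quick sanity check before starting a node, verify there is enough free memory (a standard Linux command, nothing Percona-specific):
# free -m
On Ubuntu 14.04, the free column in the -/+ buffers/cache row should show comfortably more than 500MB.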
