Install Greenplum DB 4.2.5.2 and 4.2.6.2 on the same host (Single Node) - greenplum

Can I install two versions of Greenplum DB, 4.2.5.2 and 4.2.6.2, on the same host?
Can they run simultaneously?
Thanks.

Yes, it can be done.
I have already tested it; just install each version under a different path, with a different owning user and different ports. A sketch of switching between the two environments follows the example below.
Example:
GPDB 4.2.5.2
/usr/local/greenplum-db-4.2.5.2
owner is the "gpadm4252" user
port base starts from 40000 (segments)
master host port is 5432
GPDB 4.2.6.2
/opt/greenplum-db-4.2.6.2
owner is the "gpadm4262" user
port base starts from 45000 (segments)
master host port is 6432
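A minimal sketch of switching between the two environments at runtime, assuming each instance was initialized under its own user with its own master data directory (the /data paths below are illustrative, not from the original setup):
# as gpadm4252, manage the 4.2.5.2 instance
source /usr/local/greenplum-db-4.2.5.2/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/gpdb4252/gpseg-1   # illustrative path
export PGPORT=5432
gpstart -a
# as gpadm4262, manage the 4.2.6.2 instance
source /opt/greenplum-db-4.2.6.2/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/gpdb4262/gpseg-1   # illustrative path
export PGPORT=6432
gpstart -a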

Related

run multiple instances of Rethinkdb on the same machine

I would like to run multiple instances of RethinkDB on the same machine.
Is that possible? If so, what is the setup?
I have found the answer:
https://rethinkdb.com/docs/start-a-server/
Multiple RethinkDB instances on a single machine
Adding a node to a RethinkDB cluster is as easy as starting a new RethinkDB process and pointing it to an existing node in the cluster. Everything else is handled by the system without any additional effort required from the user.
Now start the second RethinkDB instance on the same machine:
$ rethinkdb --port-offset 1 --directory rethinkdb_data2 --join localhost:29015
info: Creating directory /home/user/rethinkdb_data2
info: Listening for intracluster connections on port 29016
info: Attempting connection to 1 peer...
info: Connected to server "Chaosknight" e6bfec5c-861e-4a8c-8eed-604cc124b714
info: Listening for client driver connections on port 28016
info: Listening for administrative HTTP connections on port 8081
info: Server ready
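To extend this to a third instance on the same machine, the same pattern applies; a quick sketch reusing the flags from the docs above (the data directory name is arbitrary):
$ rethinkdb --port-offset 2 --directory rethinkdb_data3 --join localhost:29015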

monetdb cluster management can't be set up

I'm trying to follow the MonetDB docs on Cluster Management to set up a 3-node cluster on 3 CentOS machines.
I created the three dbfarms using monetdbd create /path/to/mydbfarm. From the first node, I run monetdb discover and it returns nothing, where it should discover the other nodes. When I try to run monetdb -h [second node IP] -P mypassphrase status it returns the following error:
status: cannot connect: Connection refused
PS: I have passwordless SSH between these 3 nodes; ssh [any node IP] works just fine.
Thank you
By default MonetDB listens only for local connections. This is for security reasons.
To listen also for remote connections, run
monetdbd set listenaddr=0.0.0.0 .../path/to/dbfarm
on each of the nodes and restart monetdbd.
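A sketch of the full sequence on each node, using the dbfarm path from the question (monetdbd needs a restart for the new setting to take effect):
monetdbd set listenaddr=0.0.0.0 /path/to/mydbfarm
monetdbd stop /path/to/mydbfarm
monetdbd start /path/to/mydbfarm
# afterwards, from the first node, the other nodes should show up:
monetdb discover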

Change IP address of a Hadoop HDFS data node server and avoid Block pool errors

I'm using the Cloudera distribution of Hadoop and recently had to change the IP addresses of a few nodes in the cluster. After the change, on one of the nodes (old IP: 10.88.76.223, new IP: 10.88.69.31), the following error comes up when I try to start the datanode service.
Initialization failed for block pool Block pool BP-77624948-10.88.65.174-13492342342 (storage id DS-820323624-10.88.76.223-50010-142302323234) service to hadoop-name-node-01/10.88.65.174:6666
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode: DatanodeRegistration(10.88.69.31, storageID=DS-820323624-10.88.76.223-50010-142302323234, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster25;nsid=1486084428;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:656)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3593)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:899)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:91)
Because of this error, I was unable to start the datanode service.
Has anyone had success with changing the IP address of a Hadoop data node and joining it back to the cluster without data loss?
CHANGE HOST IP IN CLOUDERA MANAGER
Change the host IP on all nodes:
sudo nano /etc/hosts
Edit the Cloudera agent config.ini on all nodes if the master node IP changed:
sudo nano /etc/cloudera-scm-agent/config.ini
Change the IP in the PostgreSQL database.
To find the PostgreSQL password, open the Cloudera Manager database properties:
cat /etc/cloudera-scm-server/db.properties
Find the password line, for example:
com.cloudera.cmf.db.password=gUHHwvJdoE
Open PostgreSQL:
psql -h localhost -p 7432 -U scm
Select from the hosts table:
select name,host_id,ip_address from hosts;
Update the IP:
update hosts set ip_address = 'xxx.xxx.xxx.xxx' where host_id=x;
Exit the tool:
\q
Restart the agent on all nodes:
service cloudera-scm-agent restart
Restart the server on the master node:
service cloudera-scm-server restart
It turns out it's better to:
Decommission the server from the cluster to ensure that all blocks are replicated to other nodes in the cluster.
Remove the server from the cluster.
Connect to the server, change the IP address, then restart the Cloudera agent.
Notice that Cloudera Manager now shows two entries for this server. Delete the entry with the old IP and the longest heartbeat time.
Add the server back to the required cluster and add the required roles back to it (e.g. HDFS datanode, HBase RegionServer, YARN).
HDFS will read all data disks, recognize the block pool and cluster IDs, and then register the datanode.
All data will be available and the process will be transparent to any client.
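Once the host is back in the cluster, one way to confirm the datanode re-registered with its existing block pool is an HDFS report (a sketch, run as the HDFS superuser; not part of the original steps):
sudo -u hdfs hdfs dfsadmin -report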
NOTE: If you run into name resolution errors from HDFS clients, the client application has most likely cached the old IP and will need to be restarted. In particular, Java clients that previously referenced this server (e.g. HBase clients) must be restarted, because the JVM caches resolved IPs indefinitely and such clients will keep throwing connectivity errors against the old address until they are restarted.
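As an illustration of that caching behaviour, the JVM-wide DNS cache TTL can be capped so long-running clients eventually pick up a new address; the client class name below is hypothetical and the 60-second value is only an example:
# per-process: pass the TTL (in seconds) when starting the client JVM
java -Dsun.net.inetaddr.ttl=60 -cp myclient.jar com.example.MyHBaseClient
# JVM-wide alternative: set networkaddress.cache.ttl=60 in the JRE's java.security file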

cassandra 2.0.11 multiple nodes on single windows machine

I want to set up 3 nodes on a Windows machine for testing purposes. I already have the Community version installed. I followed some tutorials on YouTube, as well as the docs, to set up 3 nodes on 1 machine. All 3 nodes are up, but they are not connected; I can only see 1 node serving 100% of the load in "nodetool status".
Here is what I wanted: 3 instances connected as below
127.0.0.1 (seed)
127.0.0.2
127.0.0.3
Here is what I did:
Installed Datastax community edition 2.0.11
Copied apache-cassandra/conf -> conf2 & conf3
modified cassandra.yaml for
cluster_name
seed_address (127.0.0.1)
listen_address (seed ip)
rpc_address 0.0.0.0
endpoint_snitch: SimpleSnitch
The above settings were documented, but I had to change the ports below since it is a single machine:
rpc_port: [if default is 9160 then node1 will be 9161]
native_transport_port:
storage_port:
Changed "JMX_PORT" in cassandra.bat file (created 2 copies of main file)
started all
I tried ccm but its not picking already installed cassandra it tries to build from source and fails.
Am i missing something, it been 2 days (4-5 hours) i am trying to set this up.
Thanks,
Ninad
From my own tests on Windows 7, 127.0.0.1 and 127.0.0.2 point to the same interface, so you can't bind two nodes to the same port. Yet even when using different ports for each node, I had the same issue as you (nodes not communicating with each other). In the end I would recommend using Linux for this kind of test, even in a simple virtual machine, because on Linux 127.0.0.1 and 127.0.0.2 are not treated as the same address.
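On a Linux box or VM, ccm makes this kind of loopback multi-node test much simpler; a minimal sketch, assuming ccm is installed and allowed to download the requested Cassandra version:
ccm create test_cluster -v 2.0.11 -n 3 -s    # create a 3-node cluster on 127.0.0.1-3 and start it
ccm node1 nodetool status                    # all three nodes should report UN (Up/Normal)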

unable to start regionservers in HBase

I have a problem starting regionservers on the slave PCs. When I list only the master PC in conf/regionservers, everything works fine, but when I add the two slaves to it, HBase does not start.
If I delete all the HBase folders in the tmp folder on every PC and then start the regionservers (with all 3 listed), HBase starts, but when I try to create a table it fails again (gets stuck).
Please, can anyone help?
I am using Hadoop 0.20.0, which is working fine, and HBase 0.92.0.
I have 3 PCs in the cluster: one master and two slaves.
Also, is DNS (with both forward and reverse lookup working) necessary for HBase in my case?
Is there any way to replicate an HBase table to all region servers, i.e. can I have a copy of the table on each PC and access it locally (when I execute a map task, it should use its local copy of the HBase table)?
Thanks in advance
Make your hosts file as follows:
127.0.0.1 localhost
For Hadoop
192.168.56.1 master
192.168.56.101 slave
and in your HBase conf (hbase-site.xml) put the following entries:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase</value>
</property>
<property>
  <name>hbase.master</name>
  <value>master:60000</value>
  <description>The host and port that the HBase master runs at.</description>
</property>
<property>
  <name>hbase.regionserver.port</name>
  <value>60020</value>
  <description>The port the HBase RegionServer binds to.</description>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/cluster/Hadoop/hbase-0.90.4/temp</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
  <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
</property>
If you are using localhost anywhere, remove it and replace it with "master", which is the name for the namenode in your hosts file.
One more thing you can do:
sudo gedit /etc/hostname
This opens the hostname file; by default it will contain "ubuntu", so change it to master and restart your system.
For HBase, put these entries in the "regionservers" file inside the conf dir:
master
slave
and restart everything.
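A sketch of the restart sequence after these changes, assuming the stock scripts shipped with Hadoop 0.20 and HBase, run from their respective install directories on the master:
bin/stop-hbase.sh      # from the HBase directory
bin/stop-all.sh        # from the Hadoop directory
bin/start-all.sh
bin/start-hbase.sh     # starts the regionservers listed in conf/regionservers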
