Getting an error while adding a new node to an existing single-node Vertica cluster

I am adding a new node to an existing single-node Vertica cluster and I'm getting this error:
Error: scp failed. Tried to retrieve '/opt/vertica/log/verify-latest.xml' from '103.++.++.++'
System prerequisites failed. Threshold = WARN
Hint: Fix above failures or use --failure-threshold
Installation FAILED with errors.
What do I have to do? Need help :(

I found the answer myself: the installer's scp call was not using my custom SSH port.
Edit the /opt/vertica/share/eggs/vertica/network/scp.py file so that the scp command is invoked with -P <port>, i.e. mention your SSH port there.
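For illustration only, here is the shape of that change as a hypothetical Python sketch; the real scp.py ships with Vertica and looks different, and the function and variable names below are made up:

# Hypothetical sketch, not Vertica's actual code: the point is that wherever
# scp.py builds the scp command line, a non-default SSH port has to be passed
# via scp's -P flag.
SSH_PORT = 2222  # replace with the SSH port your hosts actually listen on

def build_scp_command(source, destination):
    # before: ["scp", source, destination]
    # after:  pass the custom port explicitly
    return ["scp", "-P", str(SSH_PORT), source, destination]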

Related

Can't use put() to add data to HBase with happybase

My Python version is 3.7. After running pip3 install happybase, I started the Thrift server with hbase thrift start and wrote a brief .py file as follows:
import happybase
connection = happybase.Connection('master')
table = connection.table('jmlr')  # 'jmlr' is a table in HBase
for i in table.scan():
    print(i)
table.put('001', {'title': 'dasds'})  # error here
connection.close()
When it reaches table.put(), it reports this error:
thriftpy2.transport.base.TTransportException: TTransportException(type=4, message='TSocket read 0 bytes')
At the same time, the Thrift server reported this error:
ERROR [thrift-worker-1] thrift.TBoundedThreadPoolServer: Error occurred during processing of message. java.lang.IllegalArgumentException: Invalid famAndQf provided.
But when I ran this Python file again just now, Thrift gave me a different error:
thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Bad version in readMessageBegin
I have tried adding parameters like protocol='compact' and transport='framed', but that didn't work; even table.scan() failed.
Everything works in the HBase shell, so I can't figure out what went wrong.
I ran into the same issue and found this solution. You need to include a column qualifier, even an empty one, in the column name you pass to put() (the ':' symbol is the delimiter between column family and column qualifier):
table.put('001', {'title:': 'dasds'})
Also, you get a different error message on the second run of the script because the Thrift server has already failed by then.
I hope this helps.
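For reference, a minimal corrected version of the script from the question; it assumes 'title' is a column family of the 'jmlr' table, so 'title:' addresses that family with an empty qualifier (adjust the column name if your schema uses a real qualifier, e.g. 'info:title'):

import happybase

# connect to the Thrift server started with `hbase thrift start`
connection = happybase.Connection('master')
table = connection.table('jmlr')

# read existing rows back
for row_key, data in table.scan():
    print(row_key, data)

# column names must be 'family:qualifier'; the qualifier may be empty
table.put('001', {'title:': 'dasds'})

connection.close()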

Failed to install Greenplum Command Center when running gpccinstall

I downloaded greenplum-cc-web-4.6.1-LINUX-x86_64.zip for my Greenplum DB 5.18 and followed this guide (https://gpcc.docs.pivotal.io/460/topics/setup-collection-agents.html) to install Command Center. Everything went fine until gpccinstall failed. It showed the following errors:
RunCommandOnEachHost fail on host: client-gp03.bj
Error when unzip remote binary on sdw3 bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp00.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp01.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp02.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
RunCommandOnEachHost failure happened
Has anyone encountered this issue before? I searched Google and the Pivotal community but couldn't find a solution. Any help is appreciated.
BTW, when I ignored the above errors and continued, the GPCC web server started successfully. After logging in, only the "Query Monitor" section of the UI shows a warning: "GPCC is no longer receiving updates. Check your network status or gpcc status and refresh this page." The rest of the UI seems OK.
From here:
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
You have conflicting SSH host key entries in your /home/gpadmin/.ssh/known_hosts file. I recommend removing both lines 10 and 17 from that file, then running ssh-keyscan client-gp03.bj >> /home/gpadmin/.ssh/known_hosts
After this is complete, try ssh-ing to the host to confirm the fingerprint error is cleared up, and if so, try the GPCC installation again.
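If you would rather script those steps, here is a rough sketch of the same cleanup. It substitutes ssh-keygen -R for deleting lines 10 and 17 by hand, and it assumes the default known_hosts path for the gpadmin user:

import subprocess
from pathlib import Path

host = "client-gp03.bj"
ip = "10.136.173.8"
known_hosts = Path.home() / ".ssh" / "known_hosts"  # /home/gpadmin/.ssh/known_hosts when run as gpadmin

# drop the stored keys for both the hostname and the IP
# (equivalent to removing the offending known_hosts lines by hand)
for name in (host, ip):
    subprocess.run(["ssh-keygen", "-f", str(known_hosts), "-R", name], check=True)

# re-record the host's current key, then verify the login no longer warns
with known_hosts.open("a") as fh:
    subprocess.run(["ssh-keyscan", host], stdout=fh, check=True)
subprocess.run(["ssh", host, "true"], check=True)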

LogStash::ConfigurationError: com.mysql.jdbc.Driver not loaded

When I use the logstash-input-jdbc plugin to sync MySQL with my local Elasticsearch, the errors below appear. I have searched for a long time but have not found a fix yet.
./logstash -f ./logstash_jdbc_test/jdbc.conf
Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: com.mysql.jdbc.Driver not loaded>,
:backtrace=>["/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/plugin_mixins/jdbc.rb:156:in `prepare_jdbc_connection'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/inputs/jdbc.rb:167:in `register'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:330:in `start_inputs'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:329:in `start_inputs'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:180:in `start_workers'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:136:in `run'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/agent.rb:465:in `start_pipeline'"], :level=>:error}
Yesterday I found the reason.
In my install path /elasticsearch-jdbc-2.3.2.0/lib, the size of mysql-connector-java-5.1.38.jar was zero.
So I downloaded a fresh mysql-connector-java-5.1.38.jar and copied it into /elasticsearch-jdbc-2.3.2.0/lib.
That resolved the problem, and now I can sync data between MySQL and Elasticsearch quickly.
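As a quick sanity check for this particular failure mode, a small sketch; the jar path below is the one from this answer, so point it at whichever jar your jdbc_driver_library setting (or plugin lib directory) actually references:

import os

# path taken from the answer above; adjust to the jar your setup really uses
jar = "/elasticsearch-jdbc-2.3.2.0/lib/mysql-connector-java-5.1.38.jar"

size = os.path.getsize(jar)  # raises OSError if the jar is missing entirely
print(f"{jar}: {size} bytes")
if size == 0:
    print("zero-byte jar: re-download mysql-connector-java and replace this file")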

Installing Meteor on Koding

I'm trying to install Meteor on Koding and I get an error at the last step, meteor -p port. This is what I get:
app/packages/mongo-livedata/mongo_driver.js:33
throw err;
^
Error: failed to connect to [127.0.0.1:1994]
at Server.connect.connectionPool.on.server._serverState (/Users/chlebta/meteor/dev_bundle/lib/node_modules/mongodb/lib/mongodb/connection/server.js:482:73)
at EventEmitter.emit (events.js:126:20)
at connection.on._self._poolState (/Users/chlebta/meteor/dev_bundle/lib/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:96:15)
at EventEmitter.emit (events.js:99:17)
at Socket.errorHandler (/Users/chlebta/meteor/dev_bundle/lib/node_modules/mongodb/lib/mongodb/connection/connection.js:411:10)
at Socket.EventEmitter.emit (events.js:96:17)
at Socket._destroy.self.errorEmitted (net.js:329:14)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
Exited with code: 1
Your application is crashing. Waiting for file change.
There is a section about Meteor in the Koding wiki.
Also, please note that you should select a port within the range 1024 to 10000. Some ports may already be in use, so you might have to try a few different ones; see the sketch below.
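As a rough way to probe for a free port in that range, a sketch (a failed connect is only a hint that nothing is listening locally, not a guarantee the port is usable):

import socket

def looks_free(port, host="127.0.0.1"):
    # connect_ex returns 0 when something is already listening on the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) != 0

for candidate in range(3000, 3011):
    if looks_free(candidate):
        print("try: meteor -p", candidate)
        break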
Not sure if you've gotten past this, but I had a similar issue. I ended up having to create an environment variable named MONGO_URL:
export MONGO_URL=mongodb://user:pass@host:port/dbname
Of course, replace user, pass, host, port and dbname with what Koding assigned to you. Not the most secure, so I'll find a more elegant solution to this, but for the moment, it works.

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC Systems that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
but it gives me this error:
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command:
ant compile-c++-libhdfs -Dlibhdfs=1
and it gives this error:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location and <arch> is the machine's architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
good luck.
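If it helps, that check can be scripted roughly like this; the fallback path and the arch glob are taken from this thread, and reading HADOOP_LOCATION from the environment is just an illustrative choice:

import glob
import os

# Hadoop install location as described above; the fallback is the path from the question
hadoop_location = os.environ.get("HADOOP_LOCATION", "/home/hadoop/hadoop-0.20.203.0")

# look under <hadoop>/c++/Linux-<arch>/lib for libhdfs (arch is i386-32 or amd64-64)
pattern = os.path.join(hadoop_location, "c++", "Linux-*", "lib", "libhdfs.so*")
matches = glob.glob(pattern)

if matches:
    print("found:", *matches)
    print("set HADOOP_LOCATION in /opt/HPCCSystems/hdfsconnector.conf to", hadoop_location)
else:
    print("no libhdfs found under", pattern)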
