Cannot debug.traceTransaction in geth: "missing trie node" - go-ethereum

I can only trace transactions that were executed in the last 2-3 hours with my geth node; for transactions executed 5 or more hours ago, I get the following errors:
> debug.traceTransaction('0x5c504ed432cb51138bcf09aa5e8a410dd4a1e204ef84bfed1be16dfba1b22060');
Error: missing trie node 691fc4f4d21d10787902e8f3266711f1d640e75fedbeb406dc0b8d3096128436 (path )
at web3.js:3143:20
at web3.js:6347:15
at web3.js:5081:36
at <anonymous>:1:1
> debug.traceTransaction('0x19f1df2c7ee6b464720ad28e903aeda1a5ad8780afc22f0b960827bd4fcf656d');
Error: missing trie node 5412c03b1c22d01fe37fc92d721ab617f94699f9d59a6e68479145412af3edae (path )
at web3.js:3143:20
at web3.js:6347:15
at web3.js:5081:36
at <anonymous>:1:1
The geth node is fully synced:
> eth.syncing
false
I run it with the following command:
geth --port XXX --datadir XXX --rpcport XXX --rpc --rpcapi admin,debug,miner,shh,txpool,personal,eth,net,web3 console
I have tried both geth versions 1.7.0 and 1.7.2. Deleting the blockchain database and resyncing does not help.
How can I solve this problem?

Geth uses fast sync by default, which does not keep the historical state needed to trace older transactions. Use --syncmode=full for a full sync. The full database size is over 200 GB for now.
The answer was found here: https://github.com/ethereum/go-ethereum/issues/15088
The reason is described here: https://blog.ethereum.org/2015/06/26/state-tree-pruning/
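Applied to the start command from the question, the fix amounts to adding the sync-mode flag (the XXX placeholders are the original ones):
geth --syncmode=full --port XXX --datadir XXX --rpcport XXX --rpc --rpcapi admin,debug,miner,shh,txpool,personal,eth,net,web3 console
Note that a full sync re-executes every block from genesis, so it takes considerably longer than the default fast sync.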

Related

Getting an error while adding a new node to an existing single node in Vertica

I am adding a new node to an existing single node in Vertica and I'm getting this error:
Error: scp failed. Tried to retrieve '/opt/vertica/log/verify-latest.xml' from '103.++.++.++'
System prerequisites failed. Threshold = WARN
Hint: Fix above failures or use --failure-threshold
Installation FAILED with errors.
What do I have to do? Need help :(
Yeah, I got the answer: edit the /opt/vertica/share/eggs/vertica/network/scp.py file and add -P <Port> where scp is invoked; you have to mention your port there.
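For context, -P <Port> is the standard scp flag for a non-default SSH port. A hypothetical invocation (port, user and destination path are placeholders; the host is the one from the error) looks like this:
# 2222 stands in for whatever SSH port your cluster uses
scp -P 2222 dbadmin@103.++.++.++:/opt/vertica/log/verify-latest.xml /tmp/
The installer's scp.py needs that same flag added wherever it builds its scp command.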

Error on installing Titan DB on Windows

Following the official Titan DB guide here and trying to run the command:
graph = TitanFactory.open('conf/titan-cassandra-es.properties')
I got this error:
Backend shorthand unknown: conf/titan-cassandra-es.properties
Obviously, the reason is the incorrect path to the titan-cassandra-es.properties file. So I changed it to:
graph = TitanFactory.open('../conf/titan-cassandra-es.properties')
and got this error:
Encountered unregistered class ID: 141.
The error happens in the following version:
titan-0.5.4-hadoop2
On titan-1.0.0-hadoop2, instead of this error message, I get this one:
Invalid import definition: 'com.thinkaurelius.titan.hadoop.MapReduceIndexManagement'; reason: startup failed: script14747941661821834264593.groovy: 1: unable to resolve class com.thinkaurelius.titan.hadoop.MapReduceIndexManagement # line 1, column 1. import com.thinkaurelius.titan.hadoop.MapReduceIndexManagement ^
1 error
And on titan-1.0.0-hadoop2 I get this one:
The input line is too long.
The syntax of the command is incorrect.
Does anyone know how to handle this issue?
It seems like you have not even managed to get Titan 1 to start up yet.
I do not believe Titan 1 supports Windows out of the box, i.e. the downloadable package will not just work on Windows.
That said, I have managed to get Titan DB 1 to work on Windows. To do so, all you have to do is install Cassandra 2.x on Windows; this guide may help you out. Start Cassandra and enable Thrift connections.
With that done, you should be able to get Titan doing basic operations on Windows. From there you may find dealing with your current errors easier.
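For reference, the Cassandra and Titan settings involved are roughly the following; the option names are the standard ones, but treat the values as placeholders for your own setup:
# cassandra.yaml (Cassandra 2.x): make sure the Thrift RPC server is enabled
start_rpc: true
# Titan graph properties for a local Cassandra-over-Thrift backend
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
With Cassandra running and Thrift enabled, TitanFactory.open(...) pointed at such a properties file should at least get you a working graph instance on Windows.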
Side note: Windows support for Titan 0.5.x may be more substantial, so you could look into that as well.

LogStash::ConfigurationError: com.mysql.jdbc.Driver not loaded

When I use the logstash-input-jdbc plugin to sync MySQL with my local Elasticsearch, the errors below appear. I have searched for a long time, but I have not found a solution yet.
./logstash -f ./logstash_jdbc_test/jdbc.conf
Pipeline aborted due to error {:exception=>#,
:backtrace=>["/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/plugin_mixins/jdbc.rb:156:in `prepare_jdbc_connection'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/inputs/jdbc.rb:167:in `register'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:330:in `start_inputs'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:329:in `start_inputs'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:180:in `start_workers'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:136:in `run'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/agent.rb:465:in `start_pipeline'"], :level=>:error}
Yesterday I found the reason: in my install path /elasticsearch-jdbc-2.3.2.0/lib, the size of mysql-connector-java-5.1.38.jar was zero.
So I downloaded a fresh mysql-connector-java-5.1.38.jar and copied it to /elasticsearch-jdbc-2.3.2.0/lib.
That resolved my problem, and now I can sync data between MySQL and Elasticsearch quickly.
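For anyone hitting the same "com.mysql.jdbc.Driver not loaded" error with the logstash-input-jdbc plugin itself: the driver jar and class are referenced from the jdbc input block. A minimal sketch (paths, credentials and the query are placeholders) would look like this:
input {
  jdbc {
    # point this at a non-corrupted connector jar
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.38.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * FROM mytable"
  }
}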

Spring-xd strange too many open files error

I upgraded from Spring XD 1.2.1 to 1.3.0 and have both under /opt on my system. After starting XD in single-node mode (but configured to use Zookeeper), I tried to create another stream (e.g. "time | log"), and Spring XD throws the following exception:
java.io.FileNotFoundException: /opt/spring-xd-1.2.1.RELEASE/xd/config/modules/modules.yml (Too many open files)
I changed ulimit -n to 60000, but it didn't solve the problem. The strange thing is: why does it still point to spring-xd-1.2.1.RELEASE? I have started both xd-singlenode and xd-shell under /opt/spring-xd-1.3.1.RELEASE.
EDIT: adding the xd-singlenode process output just to show it's pointing to 1.3.1:
/usr/java/default/bin/java -Dspring.application.name=admin
-Dlogging.config=file:/opt/spring-xd-1.3.0.RELEASE/xd/config//
/xd-singlenode-logback.groovy -Dxd.home=/opt/spring-xd-1.3.0.RELEASE/xd
-Dspring.config.location=file:/opt/spring-xd-1.3.0.RELEASE/xd/config//
-Dxd.config.home=file:/opt
/spring-xd-1.3.0.RELEASE/xd/config//
-Dspring.config.name=servers,application
-Dxd.module.config.location=file:/opt/spring-xd-1.3.0.RELEASE/xd/config//modules/
-Dxd.module.config.name=modules -classpath
/opt/spring-xd-1.3.0.RELEASE/xd/modules/processor/scripts:/opt/spring-xd
-1.3.0.RELEASE/xd/config:/opt/spring-xd-1.3.0.RELEASE/xd/lib/activation-
...
Have you updated your environment variables, specifically XD_CONFIG_LOCATION, based on the error shown above?
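For example, mirroring the file: URIs shown in the process output above, that could be something like the following before starting xd-singlenode (assuming the 1.3.0 install path; adjust to your actual layout):
export XD_CONFIG_LOCATION=file:/opt/spring-xd-1.3.0.RELEASE/xd/config/
A stale variable still pointing at the old 1.2.1 directory would explain why the exception references /opt/spring-xd-1.2.1.RELEASE.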

Installing RHadoop on a Hadoop Cluster

I am trying to install RHadoop on top of my Hadoop cluster. While installing some of the required packages, I am facing the following errors:
> install.packages("Megh/rmr2_3.3.1.tar.gz")
Installing package into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
inferring 'repos = NULL' from 'pkgs'
Error in rawToChar(block[seq_len(ns)]) :
embedded nul in string: 'rmr2/man/fromdfstodfs.Rd\0\0\0\0erties\n i-_". '
Warning message:
In install.packages("Megh/rmr2_3.3.1.tar.gz") :
installation of package ‘Megh/rmr2_3.3.1.tar.gz’ had non-zero exit status
>
> install.packages("Megh/plyrmr_0.6.0.tar.gz")
Installing package into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
inferring 'repos = NULL' from 'pkgs'
Warning in untar2(tarfile, files, list, exdir, restore_times) :
checksum error for entry 'plyrmr/man/as.data.framed'
Warning in readBin(con, "raw", n = 512L) :
invalid or incomplete compressed data
Error in untar2(tarfile, files, list, exdir, restore_times) :
incomplete block on file
Warning message:
In install.packages("Megh/plyrmr_0.6.0.tar.gz") :
installation of package ‘Megh/plyrmr_0.6.0.tar.gz’ had non-zero exit status
I have also installed RHive on the cluster. I'm able to execute relatively small queries through RHive, but larger queries fail:
> rhive.query("SELECT COUNT(*) FROM tradehistory")
Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
> rhive.query("SELECT tradeno FROM tradehistory LIMIT 10")
tradeno
1 34232193
2 34232198
3 34232199
4 34232200
5 34232201
6 34232202
7 34232203
8 34232204
9 34232205
10 34232206
If anybody has any idea please help me out with this! Thanks a lot in advance!
For the installation error I was facing, I figured out that it was an issue with the tar file itself.
I had downloaded the tar file on a Windows system and was transferring it to my cluster using WinSCP.
For zip/archive files, binary transfer mode should ideally be used; otherwise some bytes of the tar file can be lost in transit, which in turn results in the error.
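A quick way to confirm this kind of corruption after transferring the archive again (file names as in the question):
# listing the archive contents usually fails if bytes were lost in transfer
tar -tzf Megh/rmr2_3.3.1.tar.gz
# or compare checksums against the copy on the Windows side
md5sum Megh/rmr2_3.3.1.tar.gz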
In the case of Tez, a query that has to invoke multiple MapReduce tasks cannot execute without proper authorization.
So when I tried the same RHive query while supplying the username and password, I was able to achieve the desired results.
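For illustration only, and assuming an RHive version whose rhive.connect accepts user and password arguments (check the signature of your installed version), the connection would be set up along these lines before re-running the query:
# host, port and credentials are placeholders; hiveServer2 = TRUE for a HiveServer2 endpoint
rhive.connect(host = "hive-server", port = 10000, hiveServer2 = TRUE,
              user = "hiveuser", password = "hivepassword")
rhive.query("SELECT COUNT(*) FROM tradehistory")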
