Geth node logs Killed after uploading smart contract - go-ethereum

I'm experiencing an issue with geth.
After I upload a smart contract through RPC, the node logs Killed and dies.
Does anyone have an idea what the issue could be? Or how I could debug it?
geth
--networkid=$NETWORK_ID
--bootnodes=enode://$(BOOTNODE_ID)@$(BOOTNODE_SERVICE_HOST):30301
--rpc
--rpcaddr=0.0.0.0
--rpccorsdomain="*"
--datadir=/ethereum
--debug
--verbosity=4
--identity=$HOSTNAME
--gasprice '1'
--syncmode 'full'
--rpcport 8545
--rpcapi 'personal,db,eth,net,web3,txpool,miner'
--unlock $ACCOUNT
--password /password.txt
--mine
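
A bare "Killed" in the logs means the process received SIGKILL from outside geth, and on Linux that is most often the kernel OOM killer reaping the node when the machine runs out of memory while processing the contract transactions. As a first debugging step (a minimal sketch, assuming a Linux host), check the kernel log right after the node dies:

# look for OOM-killer activity around the time geth died
dmesg -T | grep -i -E 'out of memory|killed process'
# on systemd hosts the kernel log is also available via:
journalctl -k | grep -i -E 'out of memory|killed process'

If the OOM killer shows up there, give the machine or container more memory, or reduce geth's memory footprint with the --cache flag.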

Related

Some problems with the QUIC-GO example server

The situation: I want to establish a QUIC connection based on quic-go from my local machine to an ECS server. The localhost tests have been run on both the local and the remote machine:
#local: .$QUIC-GO-PATH/example/client/main -insecure -keylog ssl.log -qlog trial.log -v https://127.0.0.1:6121/demo/tile
#local: .$QUIC-GO-PATH/example/main -qlog -tcp -v
These tests complete successfully.
Now the problem: when I start the local-to-remote connection, an error occurs:
#remote: .$QUIC-GO-PATH/example/main -qlog -tcp -v
#local: .$QUIC-GO-PATH/example/client/main -insecure -keylog ssl.log -qlog trial.log -v https://$REMOTE_IPADDR:6121/demo/tile
timeout: no recent network activity
A Wireshark capture suggests the CRYPTO handshake never finishes:
[Wireshark screenshot]
The client's qlog file is also attached here:
[qlog file]
The code is unchanged from https://github.com/lucas-clemente/quic-go.
Help!
This problem has been solved.
The example server in $QUIC-GO-PATH/example/main.go binds to 127.0.0.1:6121 by default, so it cannot be reached by a client from outside. Start the server with:
-bind 0.0.0.0:6121
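
Since QUIC runs over UDP, it is also worth confirming that the server socket really listens on all interfaces and that the ECS security group allows inbound UDP on 6121. A quick check, assuming a Linux server with iproute2 installed:

# start the example server on all interfaces
.$QUIC-GO-PATH/example/main -bind 0.0.0.0:6121 -qlog -v
# verify the UDP listener is bound to 0.0.0.0, not 127.0.0.1
ss -ulnp | grep 6121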

Getting code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } 'Uncaught Exception thrown' in my Apple Terminal while attempting to run a script

Sorry in advance: I'm a giant noob at this and probably won't be able to explain it too well. I am attempting to run a script through my Apple Terminal that connects via an API to a Google Sheets database to update an app, but while the script is running I get 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } 'Uncaught Exception thrown'
and then it logs out. I would prefer not to use process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
as that poses too much of a security risk.
This is my first time using bash, so I have not tried anything yet; I have zero idea what I could do other than the TLS_REJECT workaround, which I would rather avoid.
What I get when it runs:
mobenode.js v8.0 1/2/18
errorlogfile=/Users/dgibson/Desktop/GoogleSheets/Import.log
googlespreadsheet=11tKwNspYirKIV7JK7FzoJ9VnWL0CVEx9OeHtSpTHZ
google login
HOST:https://sales.mobe.com
sesstoken found in sheet
{ Error: unable to verify the first certificate
at TLSSocket.<anonymous> (_tls_wrap.js:1116:38)
at emitNone (events.js:106:13)
at TLSSocket.emit (events.js:208:7)
at TLSSocket._finishInit (_tls_wrap.js:643:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:473:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } 'Uncaught Exception thrown'
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
What I expect is for the app to update.
Here's my script:
DIRNAME=$(dirname "$0")
echo "$DIRNAME"
"$DIRNAME/framework/node" "$DIRNAME/framework/googlesheeter/googlesheeter.js" /errorlogfile="$DIRNAME/Import.log" /googlespreadsheet=11tKwNspYirKIV7JK7FzoJ9VnWL0CVEx9OeHtSpTHZ
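
For reference, 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' ("unable to verify the first certificate") means Node.js could not build a complete trust chain for the server's certificate, usually because the server does not send its intermediate CA certificate. Instead of disabling verification, you can hand Node the missing certificate through the NODE_EXTRA_CA_CERTS environment variable (available since Node 7.3). A sketch, where intermediate-ca.pem is a hypothetical file containing the server's actual intermediate certificate:

# inspect the chain the server actually sends
openssl s_client -connect sales.mobe.com:443 -showcerts </dev/null
# point Node at the missing intermediate CA, then run the script as before
export NODE_EXTRA_CA_CERTS="$HOME/intermediate-ca.pem"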

Cannot debug.traceTransaction in geth: "missing trie node"

With my geth I can only trace transactions executed in the last 2-3 hours; for transactions executed 5 or more hours ago I get the following errors:
> debug.traceTransaction('0x5c504ed432cb51138bcf09aa5e8a410dd4a1e204ef84bfed1be16dfba1b22060');
Error: missing trie node 691fc4f4d21d10787902e8f3266711f1d640e75fedbeb406dc0b8d3096128436 (path )
at web3.js:3143:20
at web3.js:6347:15
at web3.js:5081:36
at <anonymous>:1:1
> debug.traceTransaction('0x19f1df2c7ee6b464720ad28e903aeda1a5ad8780afc22f0b960827bd4fcf656d');
Error: missing trie node 5412c03b1c22d01fe37fc92d721ab617f94699f9d59a6e68479145412af3edae (path )
at web3.js:3143:20
at web3.js:6347:15
at web3.js:5081:36
at <anonymous>:1:1
The geth node is fully synced:
> eth.syncing
false
I run it with the following command:
geth --port XXX --datadir XXX --rpcport XXX --rpc --rpcapi admin,debug,miner,shh,txpool,personal,eth,net,web3 console
I have tried geth versions 1.7.0 and 1.7.2. Deleting the blockchain database and resyncing does not help.
How can I fix this?
Geth uses fast sync by default, and a fast-synced node does not keep the historical state needed to trace older transactions. Use --syncmode=full for a full sync. Note that the full database size is over 200 GB at the moment.
The answer was found here: https://github.com/ethereum/go-ethereum/issues/15088
The reason is described here: https://blog.ethereum.org/2015/06/26/state-tree-pruning/
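In practice that means wiping the pruned database and resyncing in full mode. A sketch using the same placeholder paths as the command above (geth removedb deletes the chain database under the data directory):

geth --datadir XXX removedb
geth --port XXX --datadir XXX --rpcport XXX --rpc --rpcapi admin,debug,miner,shh,txpool,personal,eth,net,web3 --syncmode=full console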

LoopBack throws errors on slc run after installing socket.io

After I install socket.io, running the LoopBack API produces the following errors. I have tried uninstalling socket.io and re-installing LoopBack, but the errors persist. I can still access the API without any problems, but the console keeps printing these errors. How should I solve them?
INFO strong-agent v2.0.2 profiling app 'Xamarin_Node_API' pid '4076'
INFO strong-agent[4076] started profiling agent
INFO supervisor reporting metrics to `internal:`
supervisor running without clustering (unsupervised)
Web server listening at: http://localhost:3000
Browse your REST API at http://localhost:3000/explorer
Error: Cannot GET /socket.io/?EIO=3&transport=polling&t=635806057373451120-193&b64=1
at raiseUrlNotFoundError (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\server\middleware\url-not-found.js:15:17)
at Layer.handle [as handle_request] (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\layer.js:95:5)
at trim_prefix (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:312:13)
at G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:280:7
at Function.process_params (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:330:12)
at next (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:271:10)
at G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:618:15
at next (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:256:14)
at Function.handle (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:176:3)
at router (G:\Users\MM\Xamarin_Node_API\node_modules\loopback\node_modules\express\lib\router\index.js:46:12)
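
The stack trace explains what is happening: the socket.io client keeps polling /socket.io/?EIO=3&transport=polling over HTTP, and LoopBack's url-not-found middleware answers with a 404, because installing the socket.io package does not by itself attach a socket.io server to LoopBack's HTTP server. The usual fix is to attach socket.io to the server object returned by app.listen() in server.js. You can confirm it is a plain 404 rather than a crash (a quick check, assuming the API runs locally on port 3000):

# reproduce the failing polling request; expect LoopBack's 404 response
curl -i "http://localhost:3000/socket.io/?EIO=3&transport=polling"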

my hadoop job died after 252 hours (tasks then killed)

I had 81,068 tasks complete, but then 11,799 failed and only 12 were killed. They all SEEM to have failed with:
2013-09-10 03:07:36,316 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201308301539_0002_m_083001_0: Error initializing attempt_201308301539_0002_m_083001_0:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_201308301539_0002/work in any of the configured local directories
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.localizeTask(TaskTracker.java:1817)
at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.launchTask(TaskTracker.java:1933)
at org.apache.hadoop.mapred.TaskTracker.launchTaskForJob(TaskTracker.java:830)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:824)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)
At this point, I am just looking for guidance on how I can debug this before I re-run the job. For some reason out in the cluster, it looks like all the files were deleted, though I thought Hadoop M/R only deleted the logs of successful tasks?
Does anyone have advice/ideas on how to debug this further?
It looks like all the default map/reduce directories are in use: /tmp/hadoop-hduser for my hduser account.
I have seen suggestions about /etc/hosts, but then I don't understand why 81,000 tasks succeeded before things finally failed.
I am getting some of this information from the web interface, of course, and some from the logs under the Hadoop install directory's logs/.
thanks,
Dean
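
One thing worth ruling out on a job that runs for 252 hours: the default local directories live under /tmp, and many distros run a tmp cleaner (tmpwatch, tmpreaper, or a cron job) that deletes files untouched for a few days, which would wipe the jobcache out from under long-running TaskTrackers and produce exactly this DiskErrorException. A debugging sketch for each TaskTracker node, assuming the default /tmp/hadoop-hduser layout mentioned above:

# which local dirs are the TaskTrackers actually configured with?
grep -A1 'mapred.local.dir' "$HADOOP_HOME"/conf/mapred-site.xml
# does the jobcache for the failed job still exist, and is the disk full?
ls -ld /tmp/hadoop-hduser/mapred/local/taskTracker/jobcache/job_201308301539_0002
df -h /tmp
# is a tmp cleaner installed?
ls /etc/cron.daily/ | grep -i tmp

If a cleaner is the culprit, point mapred.local.dir somewhere outside /tmp or exclude it from the cleanup.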
