NEAR "ExecutionError":"Exceeded the prepaid gas." - nearprotocol

I'm going through the tutorial in https://near.academy/near101/chapter-6
One of the steps is to run this command (but with my account):
near call museum.testnet add_meme \
'{"meme" : "bob", "title" : "god", "data" : "https://9gag.com/gag/ad8K0vj", "category" : 4}' \
--accountId YOUR_ACCOUNT_NAME.testnet --amount 3
I keep getting errors like:
Log [museum.testnet]: attempting to create meme
Failure [museum.testnet]: Error: {"index":0,"kind":{"ExecutionError":"Exceeded the prepaid gas."}}
Transaction 9F9VUps6nN4myC8wzBUb1W1GTR4xV5WE had 30000000000000 of attached gas but used 2428115526258 of gas
It's a confusing error message, because 30,000,000,000,000 > 2,428,115,526,258.
I have also tried running the command with --amount 4 instead, but I got the same error.
What am I doing wrong?

Benji at https://discord.com/channels/490367152054992913/542945453533036544/912840246524260355 suggested that instead of using --amount 3 I use --amount 3 --gas=75000000000000, which worked.
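For reference, that makes the full command from the tutorial look something like this (YOUR_ACCOUNT_NAME is still a placeholder; presumably the default 30 Tgas simply isn't enough for everything the contract does, which is why raising the prepaid gas helps):
near call museum.testnet add_meme \
'{"meme" : "bob", "title" : "god", "data" : "https://9gag.com/gag/ad8K0vj", "category" : 4}' \
--accountId YOUR_ACCOUNT_NAME.testnet --amount 3 --gas=75000000000000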

If you are using Windows, you should put a backslash before every " in the JSON argument of the command below.
near call museum.testnet add_meme '{"meme" : "Bob", "title" : "Jokes", "data" : "https://9gag.com/gag/aAGQ97L", "category" : 4}' --accountId NAME.testnet --amount 3 --gas=75000000000000
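A sketch of what the escaped form might look like on Windows (assuming cmd.exe quoting, where the whole JSON argument is wrapped in double quotes and each inner quote gets a backslash):
near call museum.testnet add_meme "{\"meme\" : \"Bob\", \"title\" : \"Jokes\", \"data\" : \"https://9gag.com/gag/aAGQ97L\", \"category\" : 4}" --accountId NAME.testnet --amount 3 --gas=75000000000000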

Related

Meme Museum tutorial - 'exceed prepaid gas'

I'm getting this error. It seems to happen no matter how much gas I give it. Can someone help or advise what I am doing wrong? Thanks.
Scheduling a call: museum.testnet.add_meme({"meme" : "andrew", "title" : "sweats", "data" : "https://9gag.com/gag/aVxM0B2", "category" : 4}) with attached 6 NEAR
Doing account.functionCall()
Receipts: BuNWjPmKBojGbjUs86EEpX1xrvC6AG5GacxX5xmTzSkk, 5PAK416Wn1F5AzHHz28fNa2dbHzWHFzKg2UoKCStfZ3U
Log [museum.testnet]: attempting to create meme
Failure [museum.testnet]: Error: {"index":0,"kind":{"ExecutionError":"Exceeded the prepaid gas."}}
Transaction 87djyBrCwgubEfsta68xPeRTy6VKShXMWEroSkvjDSY3 had 30000000000000 of attached gas but used 2428128941862 of gas
View this transaction in explorer: https://explorer.testnet.near.org/transactions/87djyBrCwgubEfsta68xPeRTy6VKShXMWEroSkvjDSY3
In order to execute this properly you need to have both the --amount 3 and the --gas 300000000000000 parameters.
near call museum.testnet add_meme '{ "meme": "coe", \
"title": "roncoe", "data": "https://9gag.com/gag/ad8K0vj", \
"category": 4 }' --accountId youraccountid.testnet \
--gas 300000000000000 --amount 3
Try attaching gas instead of deposit. Here's an example using the CLI:
near call museum.testnet add_meme --gas 300000000000000 --accountId=youraccount.testnet '{
"meme": "bob",
"title": "blabla",
"data": "https://9gag.com/gag/ad8K0vj",
"category": 4
}'

AWS DataPipeline, EmrActivity step to run a hive script fails immediately with 'No such file or directory'

I've got a simple DataPipeline job which has only a single EmrActivity, with a single step attempting to execute a hive script from my S3 bucket.
The config for the EmrActivity looks like this:
{
  "name" : "Extract and Transform",
  "id" : "HiveActivity",
  "type" : "EmrActivity",
  "runsOn" : { "ref" : "EmrCluster" },
  "step" : ["command-runner.jar,/usr/share/aws/emr/scripts/hive-script --run-hive-script --args -f s3://[bucket-name-removed]/s1-tracer-hql.q -d INPUT=s3://[bucket-name-removed] -d OUTPUT=s3://[bucket-name-removed]"],
  "runsOn" : { "ref": "EmrCluster" }
}
And the config for the corresponding EmrCluster resource it's running on:
{
  "id" : "EmrCluster",
  "type" : "EmrCluster",
  "name" : "Hive Cluster",
  "keyPair" : "[removed]",
  "masterInstanceType" : "m3.xlarge",
  "coreInstanceType" : "m3.xlarge",
  "coreInstanceCount" : "2",
  "coreInstanceBidPrice": "0.10",
  "releaseLabel": "emr-4.1.0",
  "applications": ["hive"],
  "enableDebugging" : "true",
  "terminateAfter": "45 Minutes"
}
The error message I'm getting is always the following:
java.io.IOException: Cannot run program "/usr/share/aws/emr/scripts/hive-script --run-hive-script --args -f s3://[bucket-name-removed]/s1-tracer-hql.q -d INPUT=s3://[bucket-name-removed] -d OUTPUT=s3://[bucket-name-removed]" (in directory "."): error=2, No such file or directory
at com.amazonaws.emr.command.runner.ProcessRunner.exec(ProcessRunner.java:139)
at com.amazonaws.emr.command.runner.CommandRunner.main(CommandRunner.java:13)
...
The main error msg being "... (in directory "."): error=2, No such file or directory".
I've logged into the master node and verified the existence of /usr/share/aws/emr/scripts/hive-script. I've also tried specifying an s3-based location for the hive-script, among a few other places; always the same error result.
I can manually create a cluster directly in EMR that looks exactly like what I'm specifying in this DataPipeline, with a Step that uses the identical "command-runner.jar,/usr/share/aws/emr/scripts/hive-script ..." command string, and it works without error.
Has anyone experienced this, and can advise me on what I'm missing and/or doing wrong? I've been at this one for a while now.
I'm able to answer my own question, after some long research and trial and error.
There were 3 things, maybe 4, wrong with my Step script:
needed 'script-runner.jar' rather than 'command-runner.jar', as we're running a script (which I ended up just pulling from EMR's libs dir on S3)
needed to get the 'hive-script' from elsewhere - so I also went to the public EMR libs dir on S3 for this
a fun one (thanks, AWS): the Step args (everything after the 'hive-script' specification) need to be comma-separated in DataPipeline, as opposed to space-separated as when you specify args in a Step directly in EMR (see the comparison after the working config below)
And then the "maybe 4th":
included the base folder in S3 and the specific Hive release we're working with for the hive-script (I added this after seeing something similar in an AWS blog, but haven't yet tested whether it makes a difference in my case; too drained with everything else)
So, in the end, my working EmrActivity ended up looking like this:
{
  "name" : "Extract and Transform",
  "id" : "HiveActivity",
  "type" : "EmrActivity",
  "runsOn" : { "ref" : "EmrCluster" },
  "step" : ["s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar,s3://us-east-1.elasticmapreduce/libs/hive/hive-script,--base-path,s3://us-east-1.elasticmapreduce/libs/hive/,--hive-versions,latest,--run-hive-script,--args,-f,s3://[bucket-name-removed]/s1-tracer-hql.q,-d,INPUT=s3://[bucket-name-removed],-d,OUTPUT=s3://[bucket-name-removed],-d,LIBS=s3://[bucket-name-removed]"],
  "runsOn" : { "ref": "EmrCluster" }
}
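To make the comma-vs-space point concrete, the same step reads roughly like this in the two places (the EMR-console form is my reconstruction from the DataPipeline string above, trimmed with an ellipsis, not something from the original post):
In the EMR console, args are space-separated after the script-runner JAR: s3://us-east-1.elasticmapreduce/libs/hive/hive-script --base-path s3://us-east-1.elasticmapreduce/libs/hive/ --hive-versions latest --run-hive-script --args -f s3://[bucket-name-removed]/s1-tracer-hql.q ...
In the DataPipeline "step" string, every value is comma-separated: s3://us-east-1.elasticmapreduce/libs/hive/hive-script,--base-path,s3://us-east-1.elasticmapreduce/libs/hive/,--hive-versions,latest,--run-hive-script,--args,-f,s3://[bucket-name-removed]/s1-tracer-hql.q,...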
Hope this helps save someone else from the same time-sink I invested. Happy coding!

Write error in Prolog

My program (in SWI-Prolog):
has_ram('one-v',512-mb-ram).
has_ram('one-s',1-gb-ram).
has_ram('m8',2-gb-ram).
has_ram('one-sv',1-gb-ram).
has_processor('one-v',1-ghz).
has_processor('one-s',1.5-ghz).
has_processor('one-m8',2.3-ghz).
has_processor('one-sv',1.2-ghz).
has_brand('one-v',htc).
has_brand('one-s',htc).
has_brand('m8',htc).
has_brand('one-sv',htc).
get_phone_details(X) :-
    has_brand(X, Y),
    has_ram(X, Z),
    has_processor(X, P),
    write("Name :", X), nl,
    write("Brand :", Y), nl,
    write("Ram :", Z), nl,
    write("Processor :", P).
The error which I got:
ERROR: write/2: Domain error: `stream_or_alias' expected, found `[78,97,109,101,32,32,32,58]'
I would like to get the details of the phone as output.
write/1 doesn't work like that; you can write:
write('Name ':X),nl,
write('Brand ':Y),nl,
write('Ram ':Z),nl,
write('Processor ':P).
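Not part of the original answer, but the usual SWI-Prolog idiom for this kind of output is format/2, which avoids these quoting games entirely; a minimal sketch:
% format/2: ~w writes a term, ~n emits a newline.
get_phone_details(X) :-
    has_brand(X, Brand),
    has_ram(X, Ram),
    has_processor(X, Proc),
    format("Name      : ~w~n", [X]),
    format("Brand     : ~w~n", [Brand]),
    format("Ram       : ~w~n", [Ram]),
    format("Processor : ~w~n", [Proc]).
Calling ?- get_phone_details('one-s'). then prints the four labelled lines for each matching solution.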

AT command to delete "[SMS delivery] status reports"?

I'm trying to delete 'status reports' in the device using the following command sequence:
AT
: OK
AT+CMGF=1
: OK
AT+CPMS="SR"
: +CPMS: 4,100,0,45,4,100
AT+CMGD=50
: ERROR
Note: there is a 'status report' available at index 50.
Could you tell me what causes this error?
Thanks.
According to "ETSI TS 100 585",
0: "REC UNREAD"
1: "REC READ"
2: "STO UNSENT"
3: "STO SENT"
4: "ALL"
so maybe you can try "AT+CMGL=4" to see if it works.
If it works, you can use "AT+CMGD=<index>" to delete the SMS you want.
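Not from the original answer, but a rough sketch of the whole sequence, assuming a modem that follows the standard text-mode syntax (the numeric 0-4 values above are the PDU-mode form of AT+CMGL; in text mode, AT+CMGF=1, it takes the quoted strings):
AT+CMGF=1
AT+CPMS="SR"
AT+CMGL="ALL"
AT+CMGD=50
If AT+CMGL="ALL" lists the status report at index 50, AT+CMGD=50 should then be able to remove it.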

CouchDB view queries

Could you please help me in creating a view? Below is the requirement:
select * from personaccount where name="srini" and user="pup" order by lastloggedin
I have to send name and user as input to the view and the data should be sorted by lastloggedin.
Below is the view I have created, but it is not working:
{
  "language": "javascript",
  "views": {
    "sortdatetimefunc": {
      "map": "function(doc) {
        emit({
          lastloggedin: doc.lastloggedin,
          name: doc.name,
          user: doc.user
        }, doc);
      }"
    }
  }
}
And this is the curl command I am using:
http://uta:password@localhost:5984/personaccount/_design/checkdatesorting/_view/sortdatetimefunc?key={\"name:srini\",\"user:pup\"}
My questions are:
Since sorting will be done on the key and I want it on lastloggedin, I have included that in the emit function as well.
But I am passing only name and user as parameters. Do we need to pass all the parameters that we put in the key?
First of all, thank you for the reply. I have done the same and I am getting errors. Please help.
Could you please try this on your PC? I am posting all the commands:
curl -X PUT http://uta:password@localhost:5984/person-data
curl -X PUT http://uta:password@localhost:5984/person-data/srini -d '{"Name":"SRINI", "Idnum":"383896", "Format":"NTSC", "Studio":"Disney", "Year":"2009", "Rating":"PG", "lastTimeOfCall": "2012-02-08T19:44:37+0100"}'
curl -X PUT http://uta:password@localhost:5984/person-data/raju -d '{"Name":"RAJU", "Idnum":"456787", "Format":"FAT", "Studio":"VFX", "Year":"2010", "Rating":"PG", "lastTimeOfCall": "2012-02-08T19:50:37+0100"}'
curl -X PUT http://uta:password@localhost:5984/person-data/vihar -d '{"Name":"BALA", "Idnum":"567876", "Format":"FAT32", "Studio":"YELL", "Year":"2011", "Rating":"PG", "lastTimeOfCall": "2012-02-08T19:55:37+0100"}'
Here's the view I created as you said:
{
  "_id": "_design/persondestwo",
  "_rev": "1-0d3b4857b8e6c9e47cc9af771c433571",
  "language": "javascript",
  "views": {
    "personviewtwo": {
      "map": "function (doc) {\u000a emit([ doc.Name, doc.Idnum, doc.lastTimeOfCall ], null);\u000a}"
    }
  }
}
I have fired this command from curl:
curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=["SRINI","383896"]&endkey=["SRINI","383896",{}]descending=true&include_docs=true
I got this error :
[4] 3000
curl: (3) [globbing] error: bad range specification after pos 99
[5] 1776
[6] 2736
[3] Done descending=true
[4] Done(3) curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=["SRINI","383896"]
[5] Done endkey=["SRINI","383896"]
I do not know what this error is.
I have also tried passing the parameters the way below, and it is not helping:
curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?key={\"Name\":\"SRINI\",\"Idnum\": \"383896\"}&descending=true
But I get different errors about escape sequences.
Overall, I just want this query to be satisfied through the view:
select * from person-data where Name="SRINI" and Idnum="383896" order by lastTimeOfCall
My concern is how to pass multiple parameters from the curl command, as I get a lot of errors when I do it the way above.
First off, you need to use an array as your key. I would use:
function (doc) {
emit([ doc.name, doc.user, doc.lastLoggedIn ], null);
}
This basically outputs all the documents in order by name, then user, then lastLoggedIn. You can use the following URL to query.
/_design/checkdatesorting/_view/sortdatetimefunc?startkey=["srini","pup"]&endkey=["srini","pup",{}]&include_docs=true
Second, notice I did not output doc as the value of your query. It takes up much more disk space, especially if your documents are fairly large. Just use include_docs=true.
Lastly, refer to the CouchDB Wiki; it's pretty helpful.
I just stumbled upon this question. The errors you are getting are caused by not escaping this command:
curl -X GET http://uta:password@localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=["SRINI","383896"]&endkey=["SRINI","383896",{}]descending=true&include_docs=true
The & character has a special meaning on the command-line and should be escaped when part of an actual parameter.
So you should put quotes around the big URL, and escape the quotes inside it:
curl -X GET "http://uta:password#localhost:5984/person-data/_design/persondestwo/_view/personviewtwo?startkey=[\"SRINI\",\"383896\"]&endkey=[\"SRINI\",\"383896\",{}]descending=true&include_docs=true"
