How do "geth", "EstimateGas", and "Suggest (Gas) Price" work? - go-ethereum

A friend asked me how geth estimates gas limits and gas prices. How does it actually do this?

If you send a transaction without a gas limit or gas price via the RPC API, geth fills them in using EstimateGas() and SuggestPrice(). Remix uses these, too. This behavior is from geth v1.8.23; other versions may work differently.
EstimateGas
input: a block number (default: "pending") and a 'gas limit' for the transaction (default: the gas limit of the block at the given number)
EstimateGas tries to find the minimal gas needed to run the transaction on the given block. It does a binary search between 21000 and 'gas limit'. For example, if 'gas limit' is 79000, it first runs the transaction with a gas limit of 50000 = (21000 + 79000) / 2. If that fails, it tries 64500 = (50000 + 79000) / 2; if it succeeds, it tries a smaller limit; and so on. If the transaction fails even with the full 'gas limit', it returns 0 with the error message "gas required exceeds allowance or always failing transaction".
NOTE: Even if a transaction fails for reasons other than gas, EstimateGas treats the failure as insufficient gas, so it still ends up returning 0 with that error message.
source: geth /internal/ethapi/api.go
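The search described above can be sketched in Python (a simplified illustration of the logic, not geth's actual Go code; `run_tx` is a stand-in for executing the transaction with a given gas limit):

```python
def estimate_gas(run_tx, lo=21000, hi=79000):
    """Binary-search the minimal gas limit that lets the transaction
    succeed, mirroring EstimateGas. run_tx(gas) -> bool stands in for
    actually executing the transaction with that gas limit."""
    if not run_tx(hi):
        # Even the full gas limit is not enough (or the tx always fails).
        raise ValueError("gas required exceeds allowance or always failing transaction")
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if run_tx(mid):
            hi = mid   # succeeded: the answer is at most mid
        else:
            lo = mid   # failed: the answer is above mid
    return hi

# A transaction that needs at least 53000 gas:
estimate_gas(lambda gas: gas >= 53000)  # -> 53000
```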
Suggest(Gas)Price
input: number of blocks to search (default: 20, --gpoblocks), price percentile (default: 60, --gpopercentile), fallback result (default: 1 GWei, --gasprice)
SuggestPrice queries, in parallel, the gas prices of 'number of blocks' recent blocks, starting from the "latest" block. If it cannot get answers for more than half of 'number of blocks' for any reason, it queries more blocks, up to five times 'number of blocks'.
The gas price of a block is the minimum gas price among the transactions in that block, excluding transactions sent by the block's miner.
SuggestPrice sorts the blocks' gas prices, then picks the given percentile among them (0 for the smallest price, 100 for the largest). It caches this result and immediately returns the cached result for the same "latest" (mined) block.
If all tries fail, it returns the last result. If there is no last result, it returns the 'fallback result'. Also, SuggestPrice never returns more than 500 GWei.
source: geth /eth/gasprice/gasprice.go
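The selection step can be sketched like this (a simplified illustration; the real oracle also handles per-head-block caching and retrying missing blocks):

```python
def suggest_price(block_min_prices, percentile=60,
                  fallback=1_000_000_000, cap=500_000_000_000):
    """Pick the given percentile among recent blocks' minimum gas prices
    (all values in wei). A simplified sketch of geth's gas price oracle."""
    if not block_min_prices:
        return fallback  # no usable blocks and no previous result
    prices = sorted(block_min_prices)
    price = prices[(len(prices) - 1) * percentile // 100]
    return min(price, cap)  # never suggest more than 500 GWei

suggest_price([3, 1, 5, 2, 4])  # -> 3 (the 60th percentile of five prices)
```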

Related

Neo4j cypher query for updating nodes taking a long time

I have a graph with the following:
(:Customer)-[:MATCHES]->(:Customer)
This is not limited to two customers but the depth does not exceed 5. The following also exists:
(:Customer)-[:MATCHES]->(:AuditCustomer)
There is a label :Master that needs to be added to one customer node in a matching set depending on some criteria.
My query therefore needs to find all sets of matching customers where none has the :Master label, and add that label to one node in each set. There are a large number of customer nodes, and doing it all in one go causes the database to become very slow.
I have attempted to do it using apoc.periodic.commit:
CALL apoc.periodic.commit("
  MATCH (c:Customer)
  WHERE NOT c:Master AND NOT (c)-[:MATCHES*0..5]-(:Master)
  WITH c LIMIT {limit}
  CALL apoc.path.expand(c, 'MATCHES', '+Customer', 0, -1) YIELD path
  UNWIND nodes(path) AS nodes
  WITH c, nodes
  ORDER BY nodes:Searchable DESC, nodes.createdTimestamp ASC
  WITH c, head(collect(DISTINCT nodes)) AS collectedNodes
  SET collectedNodes:Master
  RETURN count(collectedNodes)", {limit:100})
However, this still causes the database to become very slow, even if the limit parameter is set to 1. I read that apoc.periodic.commit is a blocking process, so this may be causing the problem. Is there a way to do this that is less resource-intensive, so the DB can continue to process other transactions while this is running?
The slowest part of the query is the initial match:
MATCH (c:Customer)
WHERE NOT c:Master AND NOT (c)-[:MATCHES*0..5]-(:Master) WITH c limit {limit}
This takes around 3.5s, and the entire query takes about 4s. There is very little difference between limit 1 and limit 20. If there is a way to rewrite this so it is faster, that might be a better approach.
Also, if it is of any use, the following returns ~70K:
MATCH (c:Customer)
WHERE NOT c:Master AND NOT (c)-[:MATCHES*0..5]-(:Master)
RETURN COUNT(c)

How to get all the videos of a YouTube channel with the Yt gem?

I want to use the Yt gem to get all the videos of a channel. I configured the gem with my YouTube Data API key.
Unfortunately, when I use it, it returns a maximum of ~1000 videos, even for channels having more than 1000 videos. Yt::Channel#video_count returns the correct number of videos.
channel = Yt::Channel.new id: "UCGwuxdEeCf0TIA2RbPOj-8g"
channel.video_count # => 1845
channel.videos.map(&:id).size # => 949
The YouTube API can't be made to return more than 50 items per request, so I guess Yt automatically performs several requests, going through each page of results, to be able to return more than 50.
For some reason, though, it does not go through all the result pages. I don't see a way in Yt to control how it goes through the pages of results. In particular, I could not find a way to force it to fetch a single page of results and read the returned nextPageToken, in order to perform a new request with that value.
Any idea?
Looking into the gem's /spec folder, you can see a test for exactly this case.
describe 'when the channel has more than 500 videos' do
  let(:id) { 'UC0v-tlzsn0QZwJnkiaUSJVQ' }
  specify 'the estimated and actual number of videos can be retrieved' do
    # NOTE: in principle, the following three counters should match, but
    # in reality +video_count+ and +size+ are only approximations.
    expect(channel.video_count).to be > 500
    expect(channel.videos.size).to be > 500
  end
end
I did some tests, and what I noticed is that video_count is the number displayed on YouTube next to the channel's name. This value is not accurate, and I'm not really sure what it represents.
If you do channel.videos.size, the number is not accurate either, because the videos collection can contain some empty(?) records.
If you do channel.videos.map(&:id).size, the returned value should be correct. By correct I mean it should equal the number of videos listed at:
https://www.youtube.com/channel/:channel_id/videos
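The manual pagination the question asks about can be sketched independently of the gem (an illustration, not Yt's actual code; `fetch_page` is a stand-in for one YouTube Data API request and must return a dict with "items" and, while more pages remain, "nextPageToken"):

```python
def fetch_all_items(fetch_page):
    """Collect results from every page by following nextPageToken,
    the pagination scheme the YouTube Data API uses (50 items max
    per request). fetch_page(token) stands in for one HTTP request."""
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page.get("items", []))
        token = page.get("nextPageToken")
        if not token:
            return items
```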

es_rejected_execution_exception rejected execution

I'm getting the following error while indexing.
es_rejected_execution_exception rejected execution of org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1@16248886
on EsThreadPoolExecutor[bulk, queue capacity = 50,
org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@739e3764[Running,
pool size = 16, active threads = 16, queued tasks = 51, completed
tasks = 407667]]
My current setup:
Two nodes. One is the master (data: true, master: true) while the other is data-only (data: true, master: false). Both are EC2 i2.4xlarge instances (16 cores, 122GB RAM, 320GB instance storage), with 2 shards and 1 replica.
Those two nodes are being fed by our aggregation server which has 20 separate workers. Each worker makes bulk indexing request to our ES cluster with 50 items to index. Each item is between 1000-4000 characters.
Current server setup: 4x client facing servers -> aggregation server -> ElasticSearch.
Now, the issue is that this error only started occurring when we introduced the second node. Before, when we had one machine, we got a consistent indexing throughput of 20k requests per second. Now, with two machines, once it hits the 10k mark (~20% CPU usage) we start getting some of the errors outlined above.
But here is the interesting thing I have noticed. We have a mock item generator which produces random documents to be indexed. These documents are generally of the same size but have random parameters. We use this for stress testing and stability checks. The mock item generator sends requests to the aggregation server, which in turn passes them to Elasticsearch. The interesting thing is that with it we are able to index around 40-45k items per second (~80% CPU usage) without getting this error. So it is really puzzling why we get this error. Has anyone seen this error or know what could be causing it?
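A common mitigation for these rejections is to back off and retry the bulk request instead of treating it as fatal, since the error just means the node's bulk queue (capacity 50 here) was full at that moment. A minimal sketch (`send_bulk` and `RejectedExecutionError` are illustrative stand-ins for your client's call and its overload exception):

```python
import time

class RejectedExecutionError(Exception):
    """Stand-in for the client-side error raised when ES rejects
    a request because the bulk thread pool queue is full."""

def bulk_with_backoff(send_bulk, actions, max_retries=5, base_delay=0.05):
    """Retry a rejected bulk request with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return send_bulk(actions)
        except RejectedExecutionError:
            time.sleep(base_delay * (2 ** attempt))  # 0.05s, 0.1s, 0.2s, ...
    raise RuntimeError("bulk request kept being rejected; "
                       "reduce concurrency or batch size")
```

Reducing the number of concurrent workers or the per-request batch size attacks the same problem from the other side.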

How to get recent events recorded in the event logs (e.g. logged within about the last 10 seconds) in Windows using C++?

I need to collect Windows event log entries that were logged within the last 10 seconds. Using a pull subscription, I could collect logs already saved before the program ran as well as logs saved while the program is running. I tried with the code available on MSDN:
Subscribing to Events
"I need to start collecting events logged 10 seconds ago." Here I think I need to set a value for LPWSTR pwsQuery to achieve that.
L"*[System/Level= 2]" gives the events with level equal to 2.
L"*[System/EventID= 4624]" gives events whose EventID is 4624.
L"*[System/Level < 2]" gives events with a level less than 2.
In the same way, I need to set the value of pwsQuery to get events logged within roughly the last 10 seconds. Can I do it in the same way as above? If so, how? If not, what are the other ways to do it?
EvtSubscribe() gives you new events as they happen. You need to use EvtQuery() to get existing events that have already been logged.
The Consuming Events documentation shows a sample query that retrieves events beginning at a specific time:
// The following query selects all events from the channel or log file where the severity level is
// less than or equal to 3 and the event occurred in the last 24 hour period.
XPath Query: *[System[(Level <= 3) and TimeCreated[timediff(@SystemTime) <= 86400000]]]
So, you can use TimeCreated[timediff(@SystemTime) <= 10000] to get events in the last 10 seconds.
The TimeCreated element is documented here:
TimeCreated (SystemPropertiesType) Element
The timediff() function is described in the Consuming Events documentation:
The timediff function is supported. The function computes the difference between the second argument and the first argument. One of the arguments must be a literal number. The arguments must use FILETIME representation. The result is the number of milliseconds between the two times. The result is positive if the second argument represents a later time; otherwise, it is negative. When the second argument is not provided, the current system time is used.
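Building the query string for an arbitrary window is then just string formatting (sketched in Python for brevity; the resulting string is what goes into the LPWSTR pwsQuery in the C++ sample):

```python
def recent_events_query(seconds):
    """XPath selecting events created within the last `seconds` seconds;
    timediff() works in milliseconds against the current system time."""
    return f"*[System[TimeCreated[timediff(@SystemTime) <= {seconds * 1000}]]]"

recent_events_query(10)
# -> '*[System[TimeCreated[timediff(@SystemTime) <= 10000]]]'
```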

eBay API error: You have exceeded your maximum call limit

I have a table of eBay item IDs, and for each ID I want to make a ReviseItem call, but starting from the second call I get the following error:
You have exceeded your maximum call limit of 3000 for 5 seconds. Try back after 5 seconds.
NB: I have made just 4 calls.
How can I fix this problem?
eBay counts the calls per second per unique IP. So please make sure that all calls from your application total fewer than 3000 per 5 seconds. Hope this helps.
I have just finished an eBay project, and this error can be misleading. eBay allows a certain number of calls per day, and if you exceed that amount in one 24-hour period you can get this error. You can get this amount increased by completing an Application Check form: http://go.developer.ebay.com/developers/ebay/forums-support/certification
The eBay Trading API, to which your ReviseItem call belongs, allows up to 5000 calls per 24 hour period for all applications, and up to 1.5M calls / 24hrs for "Compatible Applications", i.e. applications that have undergone a vetting process called "Compatible Application Check". More details here: https://go.developer.ebay.com/developers/ebay/ebay-api-call-limits
However, that's just the generic, "Aggregate" call limit. There are different limits for specific calls, some of which are more liberal (AddItem: 100,000 / day) and others more strict (SetApplication: 50 / day). Additionally, there are hourly and periodic limits.
You can find out any application's applicable limits by executing the GetApiAccessRules call:
<GetApiAccessRulesResponse xmlns="urn:ebay:apis:eBLBaseComponents">
  <Timestamp>2014-12-02T13:25:43.235Z</Timestamp>
  <Ack>Success</Ack>
  <Version>889</Version>
  <Build>E889_CORE_API6_17053919_R1</Build>
  <ApiAccessRule>
    <CallName>ApplicationAggregate</CallName>
    <CountsTowardAggregate>true</CountsTowardAggregate>
    <DailyHardLimit>5000</DailyHardLimit>
    <DailySoftLimit>5000</DailySoftLimit>
    <DailyUsage>10</DailyUsage>
    <HourlyHardLimit>6000</HourlyHardLimit>
    <HourlySoftLimit>6000</HourlySoftLimit>
    <HourlyUsage>0</HourlyUsage>
    <Period>-1</Period>
    <PeriodicHardLimit>10000</PeriodicHardLimit>
    <PeriodicSoftLimit>10000</PeriodicSoftLimit>
    <PeriodicUsage>0</PeriodicUsage>
    <PeriodicStartDate>2006-02-14T07:00:00.000Z</PeriodicStartDate>
    <ModTime>2014-01-20T11:20:44.000Z</ModTime>
    <RuleCurrentStatus>NotSet</RuleCurrentStatus>
    <RuleStatus>RuleOn</RuleStatus>
  </ApiAccessRule>
  <ApiAccessRule>
    <CallName>AddItem</CallName>
    <CountsTowardAggregate>false</CountsTowardAggregate>
    <DailyHardLimit>100000</DailyHardLimit>
    <DailySoftLimit>100000</DailySoftLimit>
    <DailyUsage>0</DailyUsage>
    <HourlyHardLimit>100000</HourlyHardLimit>
    <HourlySoftLimit>100000</HourlySoftLimit>
    <HourlyUsage>0</HourlyUsage>
    <Period>-1</Period>
    <PeriodicHardLimit>0</PeriodicHardLimit>
    <PeriodicSoftLimit>0</PeriodicSoftLimit>
    <PeriodicUsage>0</PeriodicUsage>
    <ModTime>2014-01-20T11:20:44.000Z</ModTime>
    <RuleCurrentStatus>NotSet</RuleCurrentStatus>
    <RuleStatus>RuleOn</RuleStatus>
  </ApiAccessRule>
</GetApiAccessRulesResponse>
You can try that out for your own application by pasting an AuthToken for your application into the form at https://ebay-sdk.intradesys.com/s/9a1158154dfa42caddbd0694a4e9bdc8 and then pressing "Execute call".
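A quick way to act on that response is to parse out the limits per call and throttle yourself before eBay does. A sketch using Python's standard library (`remaining_daily_calls` is an illustrative helper, not part of any eBay SDK):

```python
import xml.etree.ElementTree as ET

NS = "{urn:ebay:apis:eBLBaseComponents}"

def remaining_daily_calls(xml_text, call_name):
    """Return DailyHardLimit - DailyUsage for one call name from a
    GetApiAccessRules response, or None if the call is not listed."""
    root = ET.fromstring(xml_text)
    for rule in root.iter(NS + "ApiAccessRule"):
        if rule.findtext(NS + "CallName") == call_name:
            limit = int(rule.findtext(NS + "DailyHardLimit"))
            used = int(rule.findtext(NS + "DailyUsage"))
            return limit - used
    return None
```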
HTH.
