Near Explorer block query starts from 9820221 - nearprotocol

BlockExplorer - https://explorer.mainnet.near.org/blocks/2RPJGA17MQ9GAtwSVuVbasuosgkWqDgXHKWLuX4VyYv4
I am able to query starting from block 9820221.
Can anyone help me understand why this is the case, and whether there are other explorers where I can query the block details?

mainnet started from block height 9820210 (see mainnet genesis config), so there are no blocks before that one. The first 3 blocks are missing due to validators being offline or something like that, so the first produced block is 9820214, and you can query it: https://explorer.mainnet.near.org/blocks/CFAAJTVsw5y4GmMKNmuTNybxFJtapKcrarsTh5TPUyQf
Blocks before 9820210 were produced in the mainnet that ran before July 22nd, 2020, but NEAR needed to restart the network from genesis, so the state as of block 9820210 was dumped and declared a new genesis, and that was the start. You have no access to the history before that moment; you can only inspect the state as of genesis, where certain accounts exist with certain balances, contract code, and state.
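Beyond the explorer UI, block details can also be queried programmatically over NEAR's JSON-RPC interface. Below is a minimal sketch, assuming the public rpc.mainnet.near.org endpoint and using only the Python standard library; heights below the genesis height (or the skipped blocks) come back as an error rather than block data:

```python
import json
import urllib.request

RPC_URL = "https://rpc.mainnet.near.org"  # public mainnet RPC endpoint

def block_request(block_id):
    """Build a JSON-RPC payload asking for a block by height or hash."""
    return {
        "jsonrpc": "2.0",
        "id": "dontcare",
        "method": "block",
        "params": {"block_id": block_id},
    }

def fetch_block(block_id):
    """POST the request to the RPC node and return the decoded response."""
    data = json.dumps(block_request(block_id)).encode()
    req = urllib.request.Request(
        RPC_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# fetch_block(9820214) returns the first produced block;
# fetch_block(9820211) returns an error in the response body,
# since that block was never produced.
```

Archival RPC nodes expose the same method, so any endpoint that retains full history will answer for old heights, but nothing before the new genesis exists anywhere.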

Related

Does a Substrate Private Network require the initial node to be alive?

I worked through the "Start a Private Network with Substrate" tutorial and successfully created four nodes and connected them (all just different ports on localhost, just to see it work). In the Polkadot JS App I can see the three connected peer nodes. There is no sign of failure that I can tell.
The main node (the one I initiated the network on) shows much more verbose output, such as:
2021-03-27 23:43:06 🙌 Starting consensus session on top of parent 0x454cabefd4df941f0dcee9302a12974fc60150fd75b753b9858517e767997e72
2021-03-27 23:43:06 🎁 Prepared block for proposing at 83 [hash: 0x9db6e2d0ef6da5b42a68652d17cac38f9c26415e747d43f69e3c26b87b1d0d1e; parent_hash: 0x454c…7e72; extrinsics (1): [0x4e86…067c]]
2021-03-27 23:43:06 🔖 Pre-sealed block for proposal at 83. Hash now 0x646e5c26e4c18f0e1e91603b8d47fe9fdd15b5bc88c69346f32c8e7078cc7532, previously 0x9db6e2d0ef6da5b42a68652d17cac38f9c26415e747d43f69e3c26b87b1d0d1e.
whereas the other nodes show only many repetitions of the like (as well as idle messages):
2021-03-27 23:44:18 ✨ Imported #86 (0x2247…ffac)
My hope was that, after killing the "primary" node, the other three would continue to run and process blocks, but when I kill the main node, everything stops.
Most likely this is a failure of my understanding of what I'm doing here, but figured I'd ask and see if knowledge might rain down upon me.
Thanks in advance for taking the time to assist.

Why does ZeroMQ not receive a string when it becomes too large on a PUSH/PULL MT4 - Python setup?

I have an EA set in place that loops history trades and builds one large string with trade information. I then send this string every second from MT4 to the python backend using a plain PUSH/PULL pattern.
For whatever reason, the data isn't received on the pull side when the string transferred becomes too long. The backend PULL-socket slices each string and further processes it.
Any chance that the PULL-side is too slow to grab and process all the data which then causes an overflow (so that a delay arises due to the processing part)?
Talking about file sizes we are well below 5kb per second.
This is the PULL-socket, which manipulates the data after receiving it:
import zmq
from time import sleep

# socket setup (endpoint and timeout were not shown in the original post)
context = zmq.Context()
zmq_socket = context.socket(zmq.PULL)
zmq_socket.setsockopt(zmq.RCVTIMEO, 1000)  # recv_string() raises zmq.error.Again on timeout
zmq_socket.bind("tcp://*:5555")

while True:
    # check 24/7 for available data in the pull socket
    try:
        msg = zmq_socket.recv_string()
        data = msg.split("|")
        print(data)
        # if data is available and msg is account info, handle as follows
        if data[0] == "account_info":
            [...]
    except zmq.error.Again:
        print("\nResource timeout.. please try again.")
        sleep(0.000001)
I am a bit curious now, since the PULL socket seems unable to process even a string containing 40 trades with their accompanying information on a single MT4 client - Python connection. I actually planned to set it up to handle more than 5,000 MT4 client - Python backend connections at once.
Q : Any chance that the pull side is too slow to grab and process all the data which then causes an overflow (so that a delay arises due to the processing part)?
Zero chance.
Sending 640 B each second is definitely no showstopper (5 kB per second is nowhere near a performance ceiling...)
The posted problem formulation is otherwise undecidable.
Step 1) prove (POSACK/NACK) whether the PUSH side accepts the payload for sending error-free.
Step 2) prove the PULL side is not to be blamed: run [PUSH.send(640*chr(64+i)) for i in range( 10 )] via a python-to-python tcp://-transport-class solo channel crossing a host-to-host hop, over at least your local physical network (no VMCI/emulated vLAN, no other localhost colocation).
Step 3) if both steps above got POSACK-ed, your next candidates are the ZeroMQ configuration space and/or an MT4-side PUSH incompatibility, most probably "hidden" inside a (not mentioned) third-party ZeroMQ wrapper, or first-party issues with string handling/processing (which have been observed and mentioned many times in past posts about trouble with "hidden" MQL4 internal ecosystem changes).
Anyway, stay tuned. ZeroMQ is a sure bet and a true workhorse for professional, low-latency designs in the distributed-systems domain.
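The PULL-side check from Step 2 can be sketched as a self-contained python-to-python PUSH/PULL round trip. This is a minimal sketch run over tcp:// on localhost for convenience (the random bound port and the sample trade string are illustrative assumptions); to match the prescribed test it should cross a real host-to-host hop:

```python
import zmq

def push_pull_roundtrip(payload: str) -> str:
    """Send one large string over a PUSH/PULL pair and return what arrives."""
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    port = pull.bind_to_random_port("tcp://127.0.0.1")  # PULL binds first
    push = ctx.socket(zmq.PUSH)
    push.connect("tcp://127.0.0.1:%d" % port)
    push.send_string(payload)          # ZeroMQ delivers the message whole
    received = pull.recv_string()      # ...or not at all; never truncated
    push.close()
    pull.close()
    return received

# A string the size of ~40 pipe-delimited trade records arrives intact:
# msg = "|".join("trade%d;EURUSD;0.01;1.2345" % i for i in range(40))
# push_pull_roundtrip(msg) == msg
```

If this round trip survives with payloads far larger than the failing MT4 string, the core transport is exonerated and the suspicion shifts to the MQL4-side wrapper.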

In Substrate what does code: 1012 "Transaction is temporarily banned" mean?

The full text of the message is :
{code: 1012, message: "Transaction is temporarily banned"}
This would indicate that the transaction is held somewhere in Substrate Runtime mempool or something of that nature, but it is not entirely clear what possible causes can trigger this, and what the eventual outcome might be.
For example,
1) is it that too many transactions have been sent from a given account, IP address or other? Has some threshold been reached?
2) is the transaction actually invalid, or not?
3) The use of the word "temporary" suggests a delay in processing, not an outright rejection of the transaction. Therefore does this suggest that the transaction is valid, but delayed? If so, for how long?
The comments in the Substrate runtime core/rpc/src/author/errors.rs and core/transaction-pool/graph/src/errors.rs are no clearer about what the outcome is.
In front of the mempool there is a transaction blacklist, which can trigger this error. Specifically, this error means that a transaction with the same hash was either:
Part of a recently mined block
Detected as invalid during block production and removed from the pool.
Additionally, this error can occur when:
The transaction reaches its longevity, i.e. is not mined for TransactionValidation::longevity blocks after being imported to the pool.
By default longevity is set to u64::max so this normally should not be the problem.
In any case -ltxpool=log should reveal more details around this error.
A transaction is only temporarily banned because it will be removed from the blacklist when either:
30 minutes pass
There are more than 4,000 transactions on the blacklist
Check out core/transaction-pool/graph/src/rotator.rs.
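The rotator's ban-expiry behaviour described above can be modelled in a few lines. This is a hedged Python sketch, not the actual Rust implementation; the class and method names are invented for illustration, and only the 30-minute window and 4,000-entry cap come from the text:

```python
import time
from collections import OrderedDict

BAN_DURATION = 30 * 60   # seconds; the 30-minute ban window
MAX_BANNED = 4_000       # cap on blacklist size

class PoolRotator:
    """Toy model of the blacklist in core/transaction-pool/graph/src/rotator.rs:
    bans tx hashes temporarily, expiring them by age or by list size."""

    def __init__(self, now=time.time):
        self.now = now                  # injectable clock, for testing
        self.banned = OrderedDict()     # tx_hash -> timestamp of the ban

    def ban(self, tx_hash):
        self.banned[tx_hash] = self.now()
        # evict oldest entries once the blacklist grows past the cap
        while len(self.banned) > MAX_BANNED:
            self.banned.popitem(last=False)

    def is_banned(self, tx_hash):
        ts = self.banned.get(tx_hash)
        if ts is None:
            return False
        if self.now() - ts > BAN_DURATION:
            del self.banned[tx_hash]    # ban expired after 30 minutes
            return False
        return True
```

So a resubmitted transaction with an unchanged hash keeps hitting error 1012 until its entry ages out or is pushed off the end of the list.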

Nodes loading, but Elasticsearch has 0 shards

I was testing out a 20-node cluster with default replicas and default sharding, and recently wanted to rename the cluster from the default "elasticsearch". So I updated the cluster name in the config, and additionally renamed the data directory from
mylocation/data/OldName
to
mylocation/data/NewName
Which of course contain:
nodes/0
nodes/1
etc...
About a month later, I'm loading up my cluster again, and I see that while all 20 nodes come back online, it says 0 active shards, 0 primary shards, etc., where this should be several thousand. Status is green, nothing is initializing, nothing looks amiss, except I have no data. I look in nodes/0 and I see nodes/0/indices/ is well populated with my index names: the data is actually on the disk. But it seems there's nothing I can do to get it to actually load the shards. The config is using the correct -Des.path.data=mylocation/data/.
What could be wrong and how can I debug it? I'm fairly confident I ran this for a week after loading it, but it was some time ago and perhaps other things have changed. It just oddly seems not to recognize any of the data it's pointing at, and it isn't giving me any kind of "I don't see your data" or "cannot read or write that data" error message.
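One thing worth checking is active_primary_shards in GET /_cluster/health rather than the colour alone: a cluster that knows about zero indices is also "green", which matches the symptom here. A small Python sketch (the helper name is mine; the JSON fields are the ones the health endpoint returns):

```python
import json

def recovered(health_json: str) -> bool:
    """True if the cluster reports at least one active primary shard.

    'status: green' alone is misleading: a cluster with zero indices
    is also green, exactly the symptom described above.
    """
    health = json.loads(health_json)
    return health.get("active_primary_shards", 0) > 0

# Example response shape from GET /_cluster/health:
sample = '{"status": "green", "active_primary_shards": 0, "active_shards": 0}'
```

If active_primary_shards is 0 while the index directories on disk are populated, the master simply never recovered the index metadata into the cluster state.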
Update
After it gets started it says:
Recovered [0] indices into cluster_state.
I googled this and it sounded like version compatibility. Checked my binaries and this did not appear to be an issue. I'm using 1.3.2 on all.
Update 2
One of the 20 nodes repeatedly fails with
ElasticsearchIllegalStateException[failed to obtain node lock, is the following location writable?]
It lists the correct data dir, which is writable. Should I delete the node lock? Some node.locks are 664 and some are 640 when the cluster is off. Is this normal or possibly the relic of an unclean shutdown?
Are some of these replicas? I have 40 node.lock files; 20 are 640 and 20 are 664.
Update 3
There are write locks in place at the lucene level. So
data/NewName/nodes/1/indices/indexname/4/index/write.lock
exists. Is this why moving shards fails? Can I safely delete each of these write locks, or is there shared state in the _state file that would lead to inconsistency?

How can I log the termination/any end of an SSIS package?

I've got an SSIS package (targeting SQL Server 2012) and I'm currently debugging it. What I'm after is how to log that the SSIS package has finished or stopped by any methods.
The closest ones look to be 'OnExecStatusChanged', 'OnPostExecute', and 'OnPostValidate' however NONE of these provide any log messages when I break execution in Visual Studio.
Here's a screenshot:
I suspect the answer may be "you can't", but I want to see if there are perhaps more exotic solutions before I give up.
You do have two options available that I can think of.
One has been highlighted above: using the pre- and post-execute functions. If you were to use this solution, I would recommend using a table (Dim_Package_Log?) and inserting into this table via a stored procedure on pre- and post-execute.
Clarification: this won't catch package breaks, just start, end and errors.
As you rightly identified, though, this would not record package breaks. To capture those I have implemented a view that utilises two tables.
SSISDB.catalog.event_messages
SSISDB.catalog.executions
If you do some "exotic" joins you can utilise the execution_status from executions and the messages from event_messages to find the information you want.
I can't remember which MSDN page I found it on, but this is what execution_status means in catalog.executions:
The possible values are created (1), running (2), canceled (3), failed (4), pending (5), ended unexpectedly (6), succeeded (7), stopping (8), and completed (9).
Clarification:
Below is a sample line of what SSISDB.catalog.executions outputs for each package execution from a Job:
43198 FolderName ProjectName PackageName.dtsx NULL NULL NULL NULL 10405 GUID SERVICEACCOUNTNAME 0 200 2015-02-16 00:00:03.4156856 +11:00 20 18 7 2015-02-16 00:00:05.4409834 +11:00 2015-02-16 00:00:58.4567400 +11:00 GUID SERVICEACCOUNTNAME 10324 NULL NULL ID SERVER SERVER 16776756 3791028 20971060 8131948 2
In this example there is a column with a value of 7. As detailed above this status changes based upon the end state of the execution of the package. In this case, successful. If the package breaks midway it will be captured in this status.
Further information regarding this SSISDB capability is located at this MSDN page.
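For convenience, the status codes above can be decoded with a small lookup table. This is an illustrative Python sketch (the helper name is an assumption); the code-to-name mapping is taken verbatim from the list quoted above:

```python
# Mapping of catalog.executions.status codes, per the documentation quoted above
EXECUTION_STATUS = {
    1: "created",
    2: "running",
    3: "canceled",
    4: "failed",
    5: "pending",
    6: "ended unexpectedly",
    7: "succeeded",
    8: "stopping",
    9: "completed",
}

def describe_status(code: int) -> str:
    """Translate a numeric execution_status into its documented name."""
    return EXECUTION_STATUS.get(code, "unknown (%d)" % code)
```

In the sample row above, the status column holds 7, which decodes to "succeeded"; a package that breaks midway would surface as 4 ("failed") or 6 ("ended unexpectedly").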
I know this is only a partial answer. What is covered here is detecting that a package has finished with an error or success, which you can do by calling the package from another parent package.
But if the package is forced to stop, then this won't have any effect.
