Substrate genesis blocks not matching - substrate

I am currently doing this tutorial. When I follow it as described and run the Alice and Bob nodes on the same machine, it works as expected: the nodes connect and create and finalize blocks. Now I want to get the same working over the internet on different machines, so I run the bootnode on my PC and the other node on my laptop. I compiled both from the same code and forwarded the ports on my router, so I expect the same behavior as when running both on my local machine. When I start them I see network traffic printed in both consoles, but the Bob node prints a warning: Bootnode with peer id '12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp' is on a different chain (our genesis: 0xbfbd…3144 theirs: 0x8859…14c4), and the nodes stay unconnected (Idle 0 peers).
From the warning I conclude that the two nodes do not have the same genesis block, which is obviously necessary for them to run as one blockchain. But my understanding was that the joining node should copy the current state of the chain from the bootnode. How can I make Bob's node use Alice's chain?
Both machines are running Rust version 1.50.0.
Thanks for your help!

Rust compilation is not deterministic, so the exact same code for the exact same blockchain, compiled on two computers, will unfortunately not produce the same genesis hash. (Specifically, part of the genesis of that chain is the Wasm runtime blob, which Rust compiles non-deterministically.)
You need to create a chainspec file and use it for all nodes. Note that you want to generate it on one computer and pass the file to the other nodes (don't regenerate it there, as the genesis would differ, which is exactly what you ran into) before starting those nodes with the correct chainspec and manually specifying the boot nodes.
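A minimal sketch of that flow, assuming the node-template binary from the tutorial (binary name, port, IP, and peer id are placeholders for your own setup):
# on Alice's machine: generate the spec once and convert it to raw form
./target/release/node-template build-spec --chain local > customSpec.json
./target/release/node-template build-spec --chain customSpec.json --raw > customSpecRaw.json
# copy customSpecRaw.json to Bob's machine, then start both nodes from it
./target/release/node-template --chain customSpecRaw.json --alice --port 30333
./target/release/node-template --chain customSpecRaw.json --bob --bootnodes /ip4/<alice-public-ip>/tcp/30333/p2p/<alice-peer-id>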

Related

Make command in go-ethereum installation?

I am trying to change the source code of Geth and I have some doubts.
I need to modify the following line in order to admit transactions with more data in them: txMaxSize = 4 * txSlotSize // 128KB
in the tx_pool.go file: https://github.com/ethereum/go-ethereum/blob/master/core/tx_pool.go
I have run
git clone https://github.com/ethereum/go-ethereum.git
My doubt is whether I have to modify the file before or after running make geth.
Thanks
From the perspective of building the application, you can simply:
git clone $repo
Make modifications to the file. Then
make geth
or
cd cmd/geth/
go build
to build your application. Just running go build will generally be faster for development / testing, but make geth gives more reproducible builds.
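To make the ordering concrete, the typical workflow is clone, edit, then build (the new value below is purely illustrative, not a recommendation):
git clone https://github.com/ethereum/go-ethereum.git
cd go-ethereum
# edit core/tx_pool.go before building, e.g. change
#   txMaxSize = 4 * txSlotSize // 128KB
# to something like
#   txMaxSize = 8 * txSlotSize // 256KB
make geth
# after any further edits, just run make geth (or go build) again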
I would warn you, though, that unless your plan is to run your own network with a separate client that accepts larger transactions, building Geth with a larger limit just means you can create transactions that won't propagate across the network very successfully, because other nodes will reject them. Most miners use Geth (or Geth forks) to assemble new blocks, and changing the parameter on your own node won't change it for the other nodes on the network.

How to use distcc to preprocess and compile everything remotely only?

Background:
I have a 128-core server which I would like to use as a build server.
I have a bunch of client machines, each a not-so-new and not-so-powerful PC. (Can't upgrade! Not in my control.)
What I did:
I followed the distcc documentation.
And installed a virtual machine on the server with exactly the same compiler version, the same distcc version -- basically the same distribution, as on the client-machines.
After configuring the clients and the server, I can build remotely. I can verify this using the distccmon-text tool: on the server I can see 8 threads started by the distcc daemon, waiting for build jobs to come in. This was good as a first step. You can see the output below to be sure.
Second Step: Since the client machines are dual-core machines while the server offers 128-cores, and not all clients will be compiling at the same time, I wanted to offload as much of the build as possible to the build-server.
Problems:
First problem: no matter how I try to configure it, distcc always tries to distribute the build jobs equally between the client and the server, even though my configuration file looks like this:
# --- /etc/distcc/hosts -----------------------
# See the "Hosts Specification" section of
# "man distcc" for the format of this file.
#
# By default, just test that it works in loopback mode.
# 127.0.0.1
172.24.26.208/8,cpp,lzo
localhost/0
According to the distcc documentation, this should give higher priority to the build server and lower priority to localhost, since it comes later in the list. It should also allow 8 jobs on the build server and 0 jobs on localhost. But that is not what happens: upon typing make -j8, it starts 4 threads on localhost and 4 on the remote. Not good. You can see this in the image below.
Second problem: you will also notice that the preprocessing is being done on the local machine. For this there is a tool that comes with distcc, called "distcc-pump" or pump mode, which can be used like this:
time pump make CC="distcc gcc" CXX="distcc g++" -j8
Unfortunately, pump mode or not, the preprocessing still happens entirely on localhost, as you can see from the above image. Sad.
Note: at no point does distcc, with the configuration listed here, throw any errors or warnings, neither on the server nor on the clients.
Versions:
gcc 4.4.5
distcc 3.2rc1.2
(Before someone suggests "upgrade the software!": newer versions are most likely not possible for me. In any case, this version of distcc offers the features I need. I could upgrade the server virtual machine, but then there would be a compiler version mismatch between the clients and the server, and the clients I cannot upgrade.)
Any suggestions or feedback on how to improve this setup or fix these problems are most welcome.
EDIT: these solutions do not work; I am leaving the answer up so that nobody else proposes them again.
Try:
removing the line concerning localhost in /etc/distcc/hosts, cf. https://superuser.com/questions/568133/force-most-compilation-to-a-remote-host-with-distcc
or perhaps specifying 127.0.0.1 rather than localhost in /etc/distcc/hosts, cf. another problem solved with that substitution in https://distcc.github.io/faq.html
distcc actually differentiates between remote and local CPUs. But contrary to your interpretation, in the hosts file the IP address 127.0.0.1 is considered a remote CPU, and a distccd server is expected to be running there. Any number of jobs you define in the hosts file applies only to these server nodes.
According to the man page, "localhost" is interpreted specially. This is what seems to not work for you. An alternative syntax is --localslots=<int>. Have you tested this?
Additionally, distcc runs jobs on the local host (the one where you start the driver program). First, all linking is done there. Second, when you specify a certain parallelism with make -jN, all jobs exceeding the available number of remote slots are also run on your local host, in addition to the workload-distribution part of distcc. The option --localslots limits these; the man page does not mention localhost explicitly here. And then there are the jobs which fail on the server and are repeated locally.
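For example, a hosts file along these lines might cap the work done locally (the option names are from the "Hosts Specification" section of man distcc; the numbers are only illustrative, and whether this fully suppresses local compiles in your distcc version is something to test):
--localslots=1 --localslots_cpp=2 172.24.26.208/8,cpp,lzo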
For the given 128-core server I would use the number of cores in the hosts file and start only that number of compile jobs:
$ cat ~/.distcc/hosts
172.24.26.208/128,cpp,lzo
$ make -j 128
...
Then I would expect to see no compile jobs on the local machine.
The man page has some more words regarding recommended job numbers. Search for the section(s) starting with distcc spreads the jobs across both local and remote CPUs.

Deleting ChainCode from peer

I made a mistake in my chaincode and installed it on the peers in my network. When I tried to instantiate the chaincode in the channels, I got an error.
Is there a way to debug chaincode before installing it on peers ? It seems to only get flagged when you instantiate it.
Is there a way to delete the chaincode from the peers without having to restart the network?
It depends on what you mean by mistake / debug. You should make sure it compiles first; that eliminates typos, syntax errors, missing libraries, etc. But there is no way to debug functionality except to install and instantiate.
Technically, no. You can delete all the storage (/var/hyperledger/production/peer, /var/hyperledger/production/orderer, the Kafka/ZooKeeper files, and CouchDB). Not a real big deal, but you do have to restart the network and recreate the channel, join it, install and instantiate the chaincode, etc. But you can install it under a different name; just change the name in your app's connection definition to match. You can also upgrade by changing the version number while keeping the same name.
I just change the name until I get to a fairly settled spot and then do the deletes and restart everything to clean up. A full cleanup (4 peers, 3 orderers, 4 Kafka, 3 ZooKeeper) takes me maybe 30 minutes. Normally I keep a CLI open with install ccname1 and instantiate ccname1 in the buffer, so I can easily increment to ccname2, 3, 4, 5. It only takes a few seconds that way.
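A rough sketch of that flow with the Fabric 1.x peer CLI (the chaincode name, path, channel, and orderer address are placeholders for your own values):
peer chaincode install -n ccname2 -v 1.0 -p github.com/chaincode/mycc
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n ccname2 -v 1.0 -c '{"Args":["init"]}'
# later, to keep the same name, bump the version and upgrade instead
peer chaincode install -n ccname2 -v 1.1 -p github.com/chaincode/mycc
peer chaincode upgrade -o orderer.example.com:7050 -C mychannel -n ccname2 -v 1.1 -c '{"Args":["init"]}'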
If the error is that the chaincode is already present on the peers:
You can try installing the chaincode with a different version number or a different chaincode name.
You can instantiate chaincode in a channel only once; after that you have to follow the chaincode upgrade procedure.
Note: before installing chaincode you can check for syntax errors on your own machine by installing Go and compiling the chaincode.
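For a Go chaincode that compile check is simply (the path is a hypothetical example):
cd $GOPATH/src/github.com/chaincode/mycc
go build ./...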

Erlang Nodes See Each Other Only After Ping

I am running some Erlang code on Mac OS X, and I have this weird issue. My application is a multi-node app where I have a single instance of a server that is shared between nodes (global).
The code works perfectly, except for one annoying thing: the different Erlang nodes (I am running each node in a different terminal window) can only communicate with each other after a ping!
So if on terminalA I start the server, and on terminalB I run
erl>global:registered_names().
terminalB returns an empty list, unless, before starting the server on terminalA, I have run a ping (from either one of the terminals).
For example, if I do this on either terminals before starting the server:
erl>net_adm:ping("terminalB").
then I start the server and from the second terminal I list the processes:
erl>global:registered_names().
This time I WILL see the registered process from the second terminal.
Is it possible that the mere net_adm:ping call does some kind of work (like DNS resolution or something similar) that enables the communication?
The nodes in a distributed Erlang system are loosely connected. The first time the name of another node is used, for example if spawn(Node,M,F,A) or net_adm:ping(Node) is called, a connection attempt to that node will be made.
I found this at http://www.erlang.org/doc/reference_manual/distributed.html#id85336
I think you should read this article.
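In other words, the connection is established lazily the first time the other node's name is used, so explicitly connecting before you query global names is enough. A small illustration (the node name and registered process name are made up; adjust to your own -sname/-name values):
erl> net_kernel:connect_node('nodeA@myhost'). % or net_adm:ping('nodeA@myhost')
true
erl> global:registered_names().
[my_server]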

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the reports server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server&report='||p_report_name||'&userid='||p_connstring||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist||'&cc='||p_cc_recipientlist||'&bcc='||p_bcc_recipientlist||'&subject=%22' || REPLACE(p_subject,' ','%20') || '%22&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take one of the servers down without any real warning, so I can't just hardcode a report server name (the ?server= parameter above): it would work for a while, then stop working when that server goes down.
My server team asked me to look for a way to pull the server name from the formsweb.cfg file or from a default.env value within the job (there are parameters in there that hold the server name). The idea is that the "http://server" piece directs the report to be run on the appropriate server, and the first part of the job could read the report server name from the config file on the server the report runs on. I'm not sure whether this is possible from the database level, or how to do it. Any ideas?
Is there a better way that this can be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
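For illustration, the call in the job would then reference the VIP host rather than a specific machine (reports.yourdomain.com is just the example name from above, and the rest of the parameter string stays as in your original call):
UTL_HTTP.REQUEST('http://reports.yourdomain.com/reports/rwservlet?server=specific_report_server&report='||p_report_name|| ... );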