How do I generate an extrinsic proof that encodes the actual extrinsic for which the proof was generated - substrate

I am trying to generate an extrinsic proof that can in turn be used to build a partial trie, from which the actual extrinsic the proof was generated for can be extracted, but I've not been able to figure it out yet.
Currently I'm using trie_db::proof::generate_proof to generate the proof.
This is what I've done so far https://github.com/Wizdave97/extrinsic-from-proof/blob/master/src/main.rs
The issue is that the second line of this snippet returns an 'InvalidStateRoot' error:
let db = sp_trie::storage_proof::StorageProof::new(proof.to_vec()).into_memory_db::<BlakeTwo256>();
let trie = sp_trie::TrieDB::<sp_trie::LayoutV0<BlakeTwo256>>::new(&db, root).unwrap();

Related

Matlab - Genetic algorithm for mixed integer optimization

The problem that I am trying to solve is based on the following code:
https://www.mathworks.com/help/gads/examples/solving-a-mixed-integer-engineering-design-problem-using-the-genetic-algorithm.html
My function has a lot more variables, but it is basically the same. I have a set of variables that needs to be optimized under given constraints. Some of the variables have to be discrete. However, they can only take the values 0 and 1, so I don't have to specify them as shown in the example. (I have tried both methods, though.)
First I create the lower and upper bounds, each a vector of size 1x193:
[lb,ub] = GWO_LUBGA(n_var,n_comp,C,n_comp);
Afterwards I set up the constraints. As I have discrete values, I cannot use equality constraints, so I am using the workaround proposed here:
http://www.mathworks.com/help/gads/mixed-integer-optimization.html
ObjCon = @(x) funconGA(x,C,ub,n_comp);
Same for the objective function:
ObjFcn = @(x) CostFcnGA(x,C);
Afterwards I pass it over to the genetic algorithm:
[Pos,Best,~,GWO_cg_curve] = ga(ObjFcn,n_var,[],[],[],[],lb,ub,ObjCon,C.T*6+2:C.T*8+1,opts);
with n_var = 193 and C.T=24
When I run it I receive the following error:
Error using ga (line 366)
Dimensions of matrices being concatenated are not consistent.
Line 366 contains the following code. Unfortunately gaminlp cannot be opened.
% Call appropriate single objective optimization solver
if ~isempty(intcon)
[x,fval,exitFlag,output,population,scores] = gaminlp(FitnessFcn,nvars, ...
Aineq,bineq,Aeq,beq,lb,ub,NonconFcn,intcon,options,output,Iterate);
Both anonymous functions work when random values are entered. What could be the reason for this error?

Linear fit with Math.NET: error in data and error in fit parameters?

I am trying to use Math.NET to perform a simple linear fit through a small set of datapoints. Using Fit.Line I am very easily able to perform the linear fit and obtain the slope and intercept:
Tuple<double, double> result = Fit.Line(xdata, ydata);
var intercept = result.Item1;
var slope = result.Item2;
This is very simple, but what about errors?
Errors in y-data
My y-data might contain error bars; can Math.NET take these errors into account? There are no errors in the x-data, only in the y-data.
Errors in fit parameters
What about the errors in the resulting fit parameters? The slope and intercept should each come with an error, or at least some way for me to tell how well these parameters fit. Typically I think you'd use the covariance matrix, whose diagonal elements give the variances of the parameters (and their square roots the standard errors). I don't see any option to use that. Is Math.NET able to give me the fit parameter errors?
I suppose you can use this line to measure the fit error:
GoodnessOfFit.RSquared(xdata.Select(x => a+b*x), ydata); // == 1.0
where 1 means PERFECT (exactly on the line) and 0 means POOR.
It is described in the Math.NET documentation on this page:
Math.net - Curve Fitting: Linear Regression
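For the parameter errors themselves: for an unweighted least-squares line they follow from the residual variance and the diagonal of the covariance matrix, independent of any particular library. As a rough illustration of that calculation only, not of the Math.NET API, here is a sketch in Python/NumPy (the function name is my own):

import numpy as np

def line_fit_with_errors(x, y):
    # Ordinary least-squares fit of y = intercept + slope * x,
    # returning the parameters and their standard errors.
    # Needs at least 3 points (n - 2 degrees of freedom).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    A = np.column_stack([np.ones(n), x])          # design matrix: [1, x]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    intercept, slope = coeffs
    residuals = y - A @ coeffs
    sigma2 = residuals @ residuals / (n - 2)      # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)         # parameter covariance matrix
    intercept_err, slope_err = np.sqrt(np.diag(cov))
    return intercept, slope, intercept_err, slope_err

If the y-errors are known, the same construction extends to weighted least squares by dividing each row of A and the corresponding y value by that point's error before solving.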

Algorithm and code in SCILAB for row reduced echelon form

I am a novice learner of SCILAB, and I know that there is a pre-defined function rref to produce the row reduced echelon form. I am looking for an algorithm for transforming a m x n matrix into row reduced echelon form and normal form and hence find the rank of a matrix.
Can you please help? Also, since rref is a pre-defined function in Scilab, how can we get the Scilab code for it? How does one find the code/algorithm behind any function in Scilab?
Thanks for your help.
Help about functions
The help pages of Scilab always provide some information and short examples. You can also look at the help online (rref help).
The examples are without output but demonstrate the various uses. A good first approach is to copy-paste the complete example code into a new SciNotes window, save it, and press F5 to see what it does. Then modify or extend the code to suit the behavior you want.
rref & rank
Aren't you looking for the rank function instead? Here is an example of using both; without a trailing semicolon, Scilab displays each result.
A = [1,2,3;4,5,6;1,2,3]
rref(A)   // row reduced echelon form of A
rank(A)   // 2, since rows 1 and 3 are identical
B = [1,2,3;7,5,6;0,8,7]
rref(B)
rank(B)   // 3
Source code
Since Scilab is open source, you can find the source code in their git repository; for instance, the rref implementation is here.
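If you also want the algorithm behind rref, it is essentially Gauss-Jordan elimination: pick a pivot in each column, scale the pivot row so the pivot becomes 1, eliminate that column from every other row, and move on; the number of pivots found is the rank. As a language-neutral sketch (written in Python here rather than Scilab, with a function name of my own choosing):

def rref(M, tol=1e-12):
    # Gauss-Jordan elimination with partial pivoting.
    # Returns the row reduced echelon form and the rank.
    A = [list(row) for row in M]       # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # choose the largest entry in this column as pivot (numerical stability)
        pivot = max(range(pivot_row, rows), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < tol:
            continue                   # no pivot in this column
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        p = A[pivot_row][col]
        A[pivot_row] = [v / p for v in A[pivot_row]]   # normalise the pivot row
        for r in range(rows):          # eliminate the column everywhere else
            if r != pivot_row and abs(A[r][col]) > tol:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
    return A, pivot_row                # number of pivots == rank

For the matrix A above it returns rank 2, in agreement with Scilab's rank(A).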

Why is it common practice to use a bijective function and an incrementing numerical sequence for URL shortening?

I've read the question and answer How to code a URL shortener? and all the math makes perfect sense. My question is, since you have to go back to the database/datastore anyway for the lookup, why not just generate a random short string in your alphabet and store it with the full URL in your datastore, rather than converting it back to a numerical ID?
It seems to me that this saves doing any math on the server, reduces complexity, and eliminates the 'walkability' of the short URL space (for my use-case, this is critical; URLs must not be guessed). If using a NoSQL store designed for key->value lookup, it doesn't seem that there is any potential performance issue of looking up the full URL value from a string as opposed to a numerical ID.
I'd like to know if I'm missing something.
The random short string approach violates the bijectivity of the shortening function.
Given two URLs a and b and your shortening function f, it should be guaranteed that if a = b then f(a) = f(b). However, since f generates a random value, the same URL can be shortened to different codes, so f is not even a well-defined function, let alone a bijection.
If, however, you are just looking to shorten any particular URL and do not mind that subsequent shortenings of the same URL generate different values, then the approach you outline above would be more efficient.
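For concreteness, here is a small sketch (in Python, with names of my own choosing) of the two approaches being contrasted: deterministic base-62 encoding of an incrementing numeric ID versus a random, unguessable code that has to be checked against the datastore for collisions:

import secrets
import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 symbols

def encode_id(n):
    # Bijective approach: deterministically encode an incrementing numeric ID.
    if n == 0:
        return ALPHABET[0]
    out = []
    while n > 0:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def random_code(length=7):
    # Random approach: generate an unguessable token; the caller must check the
    # datastore and retry in the unlikely case the code is already taken.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

The first is a bijection between IDs and codes, so collisions are impossible but the URL space is walkable; the second trades an occasional collision check for unguessability.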

I need help optimizing this compression algorithm I came up with on my own

I tried coming up with a compression algorithm. I know a little bit about compression theory, so I am aware that this scheme I have come up with could very well never achieve compression at all.
Currently it works only for a string with no consecutive repeating letters/digits/symbols. Once it is properly established I hope to extend it to binary data etc. But first, the algorithm:
Assuming there are only 4 letters, a, b, c, d, we create a matrix/array with one entry per letter. Whenever a letter is encountered, its entry is updated so that the entry of the last letter encountered is always the largest: if the entry is currently zero, it is set to the current largest entry plus 2; otherwise, the current largest entry plus 2 is added to its old value. An example to clarify:
Array = [a,b,c,d]
Initial state = [0,0,0,0]
Letter = a
New state = [2,0,0,0]
Letter = b
New state = [2,4,0,0]
Letter = c
New state = [2,4,6,0]
Letter = d
New state = [2,4,6,8]
Letter = a
New state = [12,4,6,8]
//Explanation for the above state: a's old value (2) + current largest (8) + 2 = 12; equivalently, old value = new largest - second largest - 2
Letter = d
New state = [12,4,6,22]
and so on...
Decompression is just this logic in reverse (a sketch is included after the compression code below).
A rudimentary implementation of compression (in python):
(This function is very rudimentary so not the best kind of code...I know. I can optimize it once I get the core algorithm correct.)
import copy

def compress(text):
    matrix = [0]*95  # we are concerned with the 95 printable chars for now
    for i in text:
        temp = copy.deepcopy(matrix)
        temp.sort()
        largest = temp[-1]             # current largest entry
        if matrix[ord(i)-32] == 0:
            matrix[ord(i)-32] = largest + 2
        else:
            matrix[ord(i)-32] = largest + matrix[ord(i)-32] + 2
    return matrix
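As noted above, decompression just runs the same logic in reverse: the largest entry always belongs to the most recently processed character, and its previous value is the current largest minus the second largest minus 2. A rough sketch of that reverse pass (assuming, as the algorithm requires, no consecutive repeated characters):

def decompress(matrix):
    # Reverse of compress(): repeatedly peel off the most recent character.
    out = []
    m = matrix[:]                      # don't mutate the caller's matrix
    while any(m):
        idx = m.index(max(m))          # largest entry = last character processed
        second = max(v for j, v in enumerate(m) if j != idx)
        out.append(chr(idx + 32))
        m[idx] = m[idx] - second - 2   # restore the entry's previous value
    return "".join(reversed(out))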
The returned matrix is then used for decompression. Now comes the tricky part:
I can't really call this compression at all because each number in the matrix generated by the function is of the order of 10**200 for a string of length 50000. So storing the matrix actually takes more space than storing the original string. I know...totally useless. But I had hoped, prior to doing all this, that I could use the mathematical properties of a matrix to represent it effectively in some kind of mathematical shorthand. I have tried many possibilities and failed. Some things that I tried:
Rank of the matrix. Failed because not unique.
Denote using the mod function. Failed because either the quotient or the remainder
Store each integer as a generator using pickle.
Store the matrix as a bitmap file but then the integers are too large to be able to store as color codes.
Let me reiterate that the algorithm could be optimized, e.g. instead of adding 2 we could add 1 and proceed, but that doesn't really result in any compression. Same for the code. Minor optimizations later; first I want to improve the main algorithm.
Furthermore, it is very likely that this product of a mediocre and idle mind like mine will never achieve compression after all. In that case, I would like your help and ideas on what it could be useful for.
TL;DR: Check the code above, which implements the compression algorithm. The compressed result is longer than the original string. Can this be fixed? If yes, how?
PS: I have the entire code on my PC. Will create a repo on github and upload in some time.
Compression is essentially a predictive process. Look for patterns in the input and use them to encode the more likely next character(s) more efficiently than the less likely. I can't see anything in your algorithm that tries to build a predictive model.
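To make that concrete in its simplest possible form, here is a sketch (not part of the algorithm above) of an order-0 model: count character frequencies and build a Huffman prefix code, so that more frequent characters get shorter codewords. Real compressors use far richer predictive models, but the principle is the same:

import heapq
from collections import Counter

def huffman_code(text):
    # Build a prefix code in which frequent characters get shorter codewords.
    freq = Counter(text)
    if not freq:
        return {}
    # Heap entries: (frequency, unique tiebreaker, {char: codeword so far})
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {ch: "0" for _, _, table in heap for ch in table}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in t1.items()}
        merged.update({ch: "1" + code for ch, code in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

For 'abracadabra', for example, the letter a ends up with a one-bit codeword while the rare letters get longer ones.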
