Evaluate a BitVec expression in Z3Py

I am learning Z3 and perhaps my question does not apply, so please be patient.
Suppose I have the following:
c1, c2 = z3.BitVec('c1', 32), z3.BitVec('c2', 32)
c1 = c1 + c1
c2 = c2 + c2
c2 = c2 + c1
c1 = c1 + c2
e1 = z3.simplify(c1)
e2 = z3.simplify(c2)
When I print their sexpr():
print "e1=", e1.sexpr()
print "e2=", e2.sexpr()
Output:
e1= (bvadd (bvmul #x00000004 c1) (bvmul #x00000002 c2))
e2= (bvadd (bvmul #x00000002 c2) (bvmul #x00000002 c1))
My question is: how can I evaluate the numerical value of 'e1' and 'e2' for user-supplied values of c1 and c2?
For example, e1(c1=1, c2=1) == 6, e2(c1=1, c2=1) == 4
Thanks!

I figured it out. I had to introduce two separate variables to hold the expressions. Then I had to introduce two result variables whose values I can query from the model:
e1, e2, c1, c2, r1, r2 = (z3.BitVec('e1', 32), z3.BitVec('e2', 32), z3.BitVec('c1', 32),
                          z3.BitVec('c2', 32), z3.BitVec('r1', 32), z3.BitVec('r2', 32))
e1 = c1
e2 = c2
e1 = e1 + e1
e2 = e2 + e2
e2 = e2 + e1
e1 = e1 + e2
e1 = z3.simplify(e1)
e2 = z3.simplify(e2)
print "e1=", e1
print "e2=", e2
s = z3.Solver()
s.add(c1 == 5, c2 == 1, e1 == r1, e2 == r2)
if s.check() == z3.sat:
    m = s.model()
    print 'r1=', m[r1].as_long()
    print 'r2=', m[r2].as_long()
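A note for later readers (my addition, not part of the original answer): you can also evaluate the simplified expressions directly, either by asking the model to evaluate them or by substituting concrete bit-vector values, without introducing the extra r1/r2 variables. A minimal sketch, assuming the same expressions as above:
import z3

c1, c2 = z3.BitVec('c1', 32), z3.BitVec('c2', 32)
e1 = z3.simplify(4 * c1 + 2 * c2)  # same simplified form as e1 above
e2 = z3.simplify(2 * c2 + 2 * c1)  # same simplified form as e2 above

# Option 1: pin the inputs in a solver and evaluate the expressions in the model.
s = z3.Solver()
s.add(c1 == 1, c2 == 1)
if s.check() == z3.sat:
    m = s.model()
    print(m.evaluate(e1).as_long())  # 6
    print(m.evaluate(e2).as_long())  # 4

# Option 2: substitute concrete bit-vector values and simplify to a constant.
v1 = z3.substitute(e1, (c1, z3.BitVecVal(1, 32)), (c2, z3.BitVecVal(1, 32)))
print(z3.simplify(v1).as_long())  # 6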


Algorithm to mix colours on 7 individual pieces of toy

I'm a woodworker seeking some math and algorithm help from the experts here.
I'm trying to make 28 sets of Tangram for gifting relatives, like this:
DanielHolth + RobotE at nl:Wp [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], from Wikimedia Commons
Each toy consists of 7 pieces of wood board that should be painted in individual colours. To make the painting easier, I think the best way is to paint the pieces in batches - 4 sets' worth of pieces in the same colour - and then mix them.
I have labeled the pieces 1-7 to make discussion easier:
What is the most efficient way to mix the pieces so I do not get identical colour combination per set? I would like the gift to be as individual as possible and colour combination is a good way to achieve this goal.
Edit: Each set of puzzle is made of seven pieces each of a different colour.
Order your colors in some fashion (say R -> G -> B -> Y -> P -> O -> W), and then order your pieces similarly (which you've already done in your picture, 1-7). Lay them out in a matrix, with each color on a separate row (repeating the columns/pieces, since there are going to be 4 duplicates of each). Let B3 denote a blue 3 piece, O7 an orange 7, etc.
1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7
(R) R1 R2 R3 R4 R5 R6 R7 R1 R2 R3 R4 R5 R6 R7 R1 R2 R3 R4 R5 R6 R7 R1 R2 R3 R4 R5 R6 R7
(G) G1 G2 G3 G4 G5 G6 G7 G1 G2 G3 G4 G5 G6 G7 G1 G2 G3 G4 G5 G6 G7 G1 G2 G3 G4 G5 G6 G7
(B) B1 B2 B3 B4 B5 B6 B7 B1 B2 B3 B4 B5 B6 B7 B1 B2 B3 B4 B5 B6 B7 B1 B2 B3 B4 B5 B6 B7
(Y) Y1 Y2 Y3 Y4 Y5 Y6 Y7 Y1 Y2 Y3 Y4 Y5 Y6 Y7 Y1 Y2 Y3 Y4 Y5 Y6 Y7 Y1 Y2 Y3 Y4 Y5 Y6 Y7
(P) P1 P2 P3 P4 P5 P6 P7 P1 P2 P3 P4 P5 P6 P7 P1 P2 P3 P4 P5 P6 P7 P1 P2 P3 P4 P5 P6 P7
(O) O1 O2 O3 O4 O5 O6 O7 O1 O2 O3 O4 O5 O6 O7 O1 O2 O3 O4 O5 O6 O7 O1 O2 O3 O4 O5 O6 O7
(W) W1 W2 W3 W4 W5 W6 W7 W1 W2 W3 W4 W5 W6 W7 W1 W2 W3 W4 W5 W6 W7 W1 W2 W3 W4 W5 W6 W7
Now, take out the lower left "triangle" of pieces. That is- remove 0 pieces from the beginning of the first row, 1 from the second, 2 from the third...
1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7
(R) R1 R2 R3 R4 R5 R6 R7 R1 R2 R3 R4 R5 R6 R7 R1 R2 R3 R4 R5 R6 R7 R1 R2 R3 R4 R5 R6 R7
(G) G2 G3 G4 G5 G6 G7 G1 G2 G3 G4 G5 G6 G7 G1 G2 G3 G4 G5 G6 G7 G1 G2 G3 G4 G5 G6 G7
(B) B3 B4 B5 B6 B7 B1 B2 B3 B4 B5 B6 B7 B1 B2 B3 B4 B5 B6 B7 B1 B2 B3 B4 B5 B6 B7
(Y) Y4 Y5 Y6 Y7 Y1 Y2 Y3 Y4 Y5 Y6 Y7 Y1 Y2 Y3 Y4 Y5 Y6 Y7 Y1 Y2 Y3 Y4 Y5 Y6 Y7
(P) P5 P6 P7 P1 P2 P3 P4 P5 P6 P7 P1 P2 P3 P4 P5 P6 P7 P1 P2 P3 P4 P5 P6 P7
(O) O6 O7 O1 O2 O3 O4 O5 O6 O7 O1 O2 O3 O4 O5 O6 O7 O1 O2 O3 O4 O5 O6 O7
(W) W7 W1 W2 W3 W4 W5 W6 W7 W1 W2 W3 W4 W5 W6 W7 W1 W2 W3 W4 W5 W6 W7
Then place these extras at the ends of their corresponding rows. Now, just take the first piece from each row, and make a new set. You can do this 7 times before it will repeat itself. For clarity, in the above example your first set would be R1 G2 B3 Y4 P5 O6 W7, the second set R2 G3 B4 Y5 P6 O7 W1.
After that, repeat the process again- remove 0 from the first row, 1 from the second, etc. Again, move the extras to the ends of their rows, and draw 7 new sets from the first elements of each row. Repeat this process twice more for your final two batches of 7 sets each. Each set will be unique.
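For readers who prefer code over tables, here is a small Python sketch (my own illustration of the scheme above, not part of the original answer): in batch b the row for colour index c has been shifted b*c positions in total, so set s of that batch gives colour c the piece numbered ((b*c + s) mod 7) + 1.
colors = "RGBYPOW"              # same colour order as in the answer
batches = 4                     # 4 copies of every colour/piece combination
sets = []
for b in range(1, batches + 1): # each batch shifts row c by another c positions
    for s in range(7):          # 7 sets per batch
        sets.append([col + str((b * c + s) % 7 + 1) for c, col in enumerate(colors)])

# All 28 sets are distinct, and every colour uses each piece number exactly 4 times.
assert len(set(map(tuple, sets))) == 28
for x in sets:
    print(" ".join(x))
The first two printed sets are R1 G2 B3 Y4 P5 O6 W7 and R2 G3 B4 Y5 P6 O7 W1, matching the walk-through above.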
I've already posted an answer that attempts to solve the problem as simply as possible, but I felt it was appropriate to provide a solution that also attempts to maximize uniqueness. Another answer already covers the basics of this, but does not account for color pairs created from identical puzzle pieces, so I've attempted to do so here.
This solver isn't the fastest, but it guarantees that there will be no more than two identically colored pairs of pieces between any two sets. When run without shuffling, there is a large bias towards certain colors taking particular pieces, so I provide an argument to shuffle the intermediate arrays and eliminate this bias, at the cost of fewer generated sets (potentially fewer than 28 - if so, run it again). The program will spit out all sets found that satisfy the above criteria, so you can manually pick whichever 28 seem the most "random" or "uniform" to the human eye.
from itertools import combinations, permutations
from random import shuffle

def get_subsets(color_set):
    # Record every 3-piece colour combination, also with piece 1 mapped to 5 and
    # piece 4 mapped to 6, so the pairs of look-alike pieces count as duplicates.
    subsets = []
    for d in ({}, {'1':'5'}, {'4':'6'}, {'1':'5', '4':'6'}):
        tr = lambda s: str.translate(s, str.maketrans(d))
        subsets.extend(set(tr(y) for y in x) for x in combinations(color_set, 3))
    return subsets

def make_sets(do_random=True):
    color_sets = [set(c+str(i) for i, c in enumerate(perm)) for perm in permutations("RGBYPOW")]
    results, pairs = [], []
    while color_sets:
        results.append(color_sets[0])
        pairs.extend(get_subsets(color_sets[0]))
        # Keep only the candidates that do not contain any triple already used.
        color_sets = [x for x in color_sets if all(y - x for y in pairs)]
        if do_random: shuffle(color_sets)
    results = sorted(sorted(perm, key=lambda x:x[1]) for perm in results)
    print("\n".join(map(str, results)))
    print(len(results))

if __name__ == "__main__":
    make_sets()
Example output:
['B0', 'G1', 'O2', 'W3', 'P4', 'R5', 'Y6']
['B0', 'P1', 'W2', 'Y3', 'O4', 'G5', 'R6']
['B0', 'R1', 'W2', 'O3', 'G4', 'P5', 'Y6']
['B0', 'R1', 'Y2', 'P3', 'W4', 'O5', 'G6']
['B0', 'W1', 'R2', 'G3', 'O4', 'Y5', 'P6']
['G0', 'B1', 'O2', 'P3', 'R4', 'W5', 'Y6']
['G0', 'B1', 'R2', 'W3', 'Y4', 'O5', 'P6']
['G0', 'O1', 'P2', 'B3', 'W4', 'R5', 'Y6']
['G0', 'O1', 'Y2', 'R3', 'B4', 'W5', 'P6']
['G0', 'P1', 'O2', 'Y3', 'B4', 'R5', 'W6']
['G0', 'W1', 'P2', 'O3', 'R4', 'Y5', 'B6']
['O0', 'B1', 'Y2', 'W3', 'R4', 'P5', 'G6']
['O0', 'G1', 'R2', 'Y3', 'W4', 'P5', 'B6']
['O0', 'P1', 'G2', 'R3', 'Y4', 'B5', 'W6']
['O0', 'R1', 'B2', 'G3', 'P4', 'W5', 'Y6']
['P0', 'B1', 'R2', 'O3', 'W4', 'Y5', 'G6']
['P0', 'R1', 'G2', 'W3', 'B4', 'Y5', 'O6']
['P0', 'W1', 'B2', 'Y3', 'O4', 'R5', 'G6']
['P0', 'W1', 'G2', 'B3', 'Y4', 'O5', 'R6']
['R0', 'G1', 'B2', 'Y3', 'P4', 'O5', 'W6']
['R0', 'O1', 'P2', 'Y3', 'G4', 'W5', 'B6']
['R0', 'Y1', 'W2', 'P3', 'G4', 'B5', 'O6']
['W0', 'G1', 'B2', 'P3', 'R4', 'Y5', 'O6']
['W0', 'O1', 'P2', 'G3', 'Y4', 'B5', 'R6']
['W0', 'R1', 'Y2', 'G3', 'O4', 'P5', 'B6']
['W0', 'Y1', 'G2', 'O3', 'B4', 'P5', 'R6']
['W0', 'Y1', 'O2', 'R3', 'P4', 'G5', 'B6']
['Y0', 'B1', 'P2', 'R3', 'W4', 'G5', 'O6']
['Y0', 'G1', 'W2', 'O3', 'B4', 'R5', 'P6']
['Y0', 'O1', 'B2', 'G3', 'R4', 'P5', 'W6']
['Y0', 'P1', 'R2', 'B3', 'G4', 'W5', 'O6']
31
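As a quick sanity check (my addition, not part of the answer), you can verify the claimed property on the printed sets: any two of them share at most two identical colour/piece assignments.
# Paste the full printed output into this list; it is abbreviated here.
tangram_sets = [
    ['B0', 'G1', 'O2', 'W3', 'P4', 'R5', 'Y6'],
    ['B0', 'P1', 'W2', 'Y3', 'O4', 'G5', 'R6'],
    # ... the remaining sets from the output above ...
]

for i in range(len(tangram_sets)):
    for j in range(i + 1, len(tangram_sets)):
        shared = set(tangram_sets[i]) & set(tangram_sets[j])
        assert len(shared) <= 2, (tangram_sets[i], tangram_sets[j], shared)
print("OK: no two sets share more than two colour/piece assignments")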
For fun, here's some code that attempts to minimise, without an exhaustive search, the number of sequential colour pairs that can coexist in the same position, in an attempt to respond to your request for "as individual as possible." It has an element of randomness. Sometimes it can produce a triple-sequence duplicate, but at its best we get only pair duplicates. (Maybe the chance similarities recipients would find between their gifts would be part of the beauty.)
(Dillon Davis commented that it can produce identical pairs for positions 1 & 5 or 4 & 6, which appear to be prominent similar triangles in the design. I may make a change to it a little later to prioritise avoiding those duplicates.)
let cs = ['R', 'G', 'B', 'Y', 'P', 'O', 'W'];
let pairs = [];
for (let i=0; i<6; i++)
  for (let j=i+1; j<7; j++)
    pairs.push(cs[i] + cs[j], cs[j] + cs[i]);
let positionMatches = [];
const results = pairs.slice(0, 28);
// Build combinations
for (let i=0; i<5; i++){
  // Avoid repeating pairs
  // in the same position
  let set = new Set();
  for (let j=0; j<28; j++){
    const last = results[j].substr(-1);
    let found = false;
    for (let c of cs){
      const candidate = last + c;
      // Match found
      if (!set.has(candidate) && !results[j].includes(c)){
        results[j] += c;
        set.add(candidate);
        found = true;
        break;
      }
    }
    // Match not found
    // Lower the restriction
    // and insert random match
    if (!found){
      const cs_ = cs.filter(
        c => !results[j].includes(c));
      const c = cs_[
        ~~(Math.random()*cs_.length)];
      results[j] += c;
      positionMatches.push((i+2) + ':' + last + c);
    }
  }
}
console.log(results.join('\n'));
console.log('');
for (let p of positionMatches){
  const [pos, pair] = p.split(':');
  console.log(pair + ' duplicate at position ' + pos)
}
UPDATE
Here's a solver with much more random assignment than the one above, which is more sequential and therefore predictable. We can set pairs we'd like to "unmatch" in the unmatch map, and control how much more we'd like to try random candidates when examining the specially chosen unmatched pairs or other pairs (we might want to give more weight to the former to let them try more random candidates). One result that seemed pretty good as I was playing around is listed below (it was achieved with the same 50/50 random setting). Click "Run snippet" for a different result each time!
const unmatch = {
  // Try to avoid duplicate pairs
  // at indexes (0, 4) and (3, 5)
  4: 0,
  5: 3
};
const unmatchTrials = 50;
const regularTrials = 50;
let cs = ['R', 'G', 'B', 'Y', 'P', 'O', 'W'];
let pairs = [];
for (let i=0; i<6; i++)
  for (let j=i+1; j<7; j++)
    pairs.push(cs[i] + cs[j], cs[j] + cs[i]);
let positionMatches = [];
const results = pairs.slice(0, 28);
// Build combinations
for (let i=0; i<5; i++){
  // Avoid repeating pairs in the same position,
  // as well as in custom positions
  let set = new Set();
  let unmatchS = new Set();
  for (let j=0; j<28; j++){
    const last = results[j].substr(-1);
    let found = false;
    const ri = i + 2;
    let count = unmatch.hasOwnProperty(ri) ? unmatchTrials : regularTrials;
    while (!found && --count > 0){
      const ii = ~~(Math.random() * cs.length);
      const c = cs[ii];
      const candidate = last + c;
      let u = unmatch.hasOwnProperty(ri)
        ? unmatchS.has(results[j][unmatch[ri]] + c)
        : false;
      // Match found
      if (!set.has(candidate) && !results[j].includes(c) && !u){
        results[j] += c;
        set.add(candidate);
        if (unmatch.hasOwnProperty(ri))
          unmatchS.add(results[j][unmatch[ri]] + c)
        found = true;
      }
    }
    // Match not found
    // Lower the restriction
    // and insert random match
    if (!found){
      const cs_ = cs.filter(
        c => !results[j].includes(c));
      const c = cs_[
        ~~(Math.random()*cs_.length)];
      results[j] += c;
      positionMatches.push((i+2) + ':' + last + c);
    }
  }
}
console.log(results.join('\n'));
console.log('');
for (let p of positionMatches){
  const [pos, pair] = p.split(':');
  console.log(pair + ' duplicate at position ' + pos)
}
let m04 = new Set();
let m35 = new Set();
for (let r of results){
  const c04 = r[0] + r[4];
  const c35 = r[3] + r[5];
  if (m04.has(c04))
    console.log('15 match: ' + c04);
  m04.add(c04);
  if (m35.has(c35))
    console.log('46 match: ' + c35);
  m35.add(c35);
}
The output below seemed pretty good. Dillon Davis noticed a pair of tangrams there that share the sequence "POW." Those could possibly be for two people who may or may not yet know that they share a special connection. (We could also, you know, just tweak one of them manually.)
RGWBYOP
GROBPYW
RBPWOYG
BRWYOGP
RYWPGOB
YRPBGWO
RPBYWOG
PRYGWBO
ROBWPGY
ORGYPBW
RWGOBYP
WRBOPGY
GBOWYRP
BGOYRWP
GYRWBPO
YGROWPB
GPWORBY
PGYBRWO
GOYPWRB
OGPYBRW
GWPROBY
WGBRYPO
BYGPOWR
YBRPOWG
BPGRWYO
PBYWGOR
BORGPWY
OBWGRPY
PO duplicate at position 4
PG duplicate at position 5
RW duplicate at position 5
OW duplicate at position 5
GO duplicate at position 5
GY duplicate at position 6
WO duplicate at position 6
BY duplicate at position 6
PO duplicate at position 6
46 match: BW
15 match: BO
46 match: PW

Convert int array to char array using Golang?

I have this error:
# command-line-arguments
.\cheking.go:14: cannot use strconv.Itoa(i + 64) + strconv.Itoa(j + 48) (type string) as type [8]int in assignment
code:
package main

import (
    "fmt"
    "strconv"
)

func main() {
    var board [8][8]int
    for i := 1; i <= 8; i++ { // initialize array
        for j := 1; j <= 8; j++ {
            board[(j-1)+8*(i-1)] = (strconv.Itoa(i+64) + "" + strconv.Itoa(j+48)) // int to char
            fmt.Printf("%s \n", board[i][j])
        }
    }
}
strconv.Itoa is shorthand for FormatInt(int64(i), 10):
FormatInt returns the string representation of i in the given base,
for 2 <= base <= 36. The result uses the lower-case letters 'a' to 'z'
for digit values >= 10.
so the result of strconv.Itoa(i+64) is a string, and board is not an array of strings (this is the error).
I think you are trying to do something like this working sample code (let me know if not):
package main

import "fmt"

func main() {
    board := [8][8]string{}
    for i := 0; i < 8; i++ { // initialize array
        for j := 0; j < 8; j++ {
            board[i][j] = string(i+65) + string(j+49) // int to char
            fmt.Printf("%s ", board[i][j])
        }
        fmt.Println()
    }
}
output:
A1 A2 A3 A4 A5 A6 A7 A8
B1 B2 B3 B4 B5 B6 B7 B8
C1 C2 C3 C4 C5 C6 C7 C8
D1 D2 D3 D4 D5 D6 D7 D8
E1 E2 E3 E4 E5 E6 E7 E8
F1 F2 F3 F4 F5 F6 F7 F8
G1 G2 G3 G4 G5 G6 G7 G8
H1 H2 H3 H4 H5 H6 H7 H8
If my guess is right, you may do it this way too:
package main

import "fmt"

func main() {
    board := [8][8]string{
        {"A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8"},
        {"B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8"},
        {"C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8"},
        {"D1", "D2", "D3", "D4", "D5", "D6", "D7", "D8"},
        {"E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"},
        {"F1", "F2", "F3", "F4", "F5", "F6", "F7", "F8"},
        {"G1", "G2", "G3", "G4", "G5", "G6", "G7", "G8"},
        {"H1", "H2", "H3", "H4", "H5", "H6", "H7", "H8"},
    }
    // print the board:
    for i := 0; i < 8; i++ {
        fmt.Println(board[i])
    }
}
output:
[A1 A2 A3 A4 A5 A6 A7 A8]
[B1 B2 B3 B4 B5 B6 B7 B8]
[C1 C2 C3 C4 C5 C6 C7 C8]
[D1 D2 D3 D4 D5 D6 D7 D8]
[E1 E2 E3 E4 E5 E6 E7 E8]
[F1 F2 F3 F4 F5 F6 F7 F8]
[G1 G2 G3 G4 G5 G6 G7 G8]
[H1 H2 H3 H4 H5 H6 H7 H8]
First of all, if you want to initialize each board position with a string, you need to change the declaration of your board array:
var board [8][8]string
since strconv.Itoa returns a string.
Then, if you simply need to access each board location sequentially, you can simply update your inner loop:
// You don't need to iterate through the array like C using index arithmetic
for i := 0; i < 8; i++ { // initialize array
    for j := 0; j < 8; j++ {
        // use whatever logic you are using to init each value
        board[i][j] = (strconv.Itoa(i+64) + "" + strconv.Itoa(j+48)) // int to char
        fmt.Printf("%s \n", board[i][j])
    }
}

NMinimize is very slow

You are my last hope.
At my university there is no one able to answer my question.
I've got a rather complex function depending on 6 parameters a0, a1, a2, b0, b1, b2 that minimizes the deltas of pressure, liquid volume and vapor volume calculated by a rather new equation of state.
NMinimize is very slow, and I have not been able to draw any conclusions about this equation because the run time is so long.
In the code there are some explanations and some problems concerning my code.
On my knees I pray you to help me.
I'm sorry, but after 4 months spent constructing this equation I still could not test it. And the frustration is increasing day after day!
Clear["Global`*"];
data = {{100., 34.376, 0.036554, 23.782}, {105., 56.377, 0.037143,
15.116}, {110., 88.13, 0.037768, 10.038}, {115., 132.21, 0.038431,
6.9171}, {120., 191.43, 0.039138, 4.9183}, {125., 268.76,
0.039896, 3.5915}, {130., 367.32, 0.040714, 2.6825}, {135.,
490.35, 0.0416, 2.0424}, {140., 641.18, 0.042569, 1.5803}, {145.,
823.22, 0.043636, 1.2393}, {150., 1040., 0.044825,
0.98256}, {155., 1295., 0.046165, 0.78568}, {160., 1592.1,
0.047702, 0.63206}, {165., 1935.1, 0.0495, 0.51014}, {170.,
2328.3, 0.051667, 0.41163}, {175., 2776.5, 0.054394,
0.33038}, {180., 3285.2, 0.058078, 0.26139}, {185., 3861.7,
0.063825, 0.19945}, {190., 4518.6, 0.079902, 0.12816}};
tvector = data[[All, 1]];(*K*)
pvector =
data[[All, 2]];(*KPa*)
vlvector = data[[All, 3]];(*L/mol*)
vvvector =
data[[All, 4]];
(*L/mol.*)
r = 8.314472;
tc = 190.56;
avvicinamento = Length[tvector] - 3;
trexp = Take[tvector, avvicinamento]/tc;
vlexp = Take[vlvector, avvicinamento];
vvexp = Take[vvvector, avvicinamento];
zeri = Table[i*0., {i, avvicinamento}];
pexp = Take[pvector, avvicinamento];
(*Function for calculation of Fugacity of CSD Equation*)
fug[v_, p_, t_, a_, b_] :=
Module[{y, z, vbv, vb, f1, f2, f3, f4, f}, y = b/(4 v);
z = (p v)/(r t);
vbv = Log[(v + b)/v];
vb = v + b;
f1 = (4*y - 3*y^2)/(1 - y)^2;
f2 = (4*y - 2*y^2)/(1 - y)^3;
f3 = (2*vbv)/(r t*b)*a;
f4 = (vbv/b - 1/vb)/(r t)*a;
f = f1 + f2 - f3 + f4 - Log[z];
Exp[f]]
(*g Minimize the equality of fugacity*)
g[p_?NumericQ, t_?NumericQ, a0_?NumericQ, a1_?NumericQ, a2_?NumericQ,
b0_?NumericQ, b1_?NumericQ, b2_?NumericQ] := Module[{},
a = a0*Exp[a1*t + a2*t^2];
b = b0 + b1*t + b2*t^2;
csd = a/(r*t*(b + v)) - (-(b^3/(64*v^3)) + b^2/(16*v^2) +
b/(4*v) + 1)/(1 - b/(4*v))^3 + (p*v)/(r*t);
vol = NSolve[csd == 0 && v > 0, v, Reals];
sol = v /. vol;
(*If[Length[sol]==1,Interrupt[];Print["Sol==1"]];*)
vliquid = Min[sol];
vvapor = Max[sol];
fl = fug[vliquid, p, t, a, b];
fv = fug[vvapor, p, t, a, b];
(*Print[{t,p,vol,Abs[fl-fv]}];*)
Abs[fl - fv]];
(*This function minimize the pcalc-pexp and vcalc-vexp *)
hope[a0_?NumericQ, a1_?NumericQ, a2_?NumericQ, b0_?NumericQ,
b1_?NumericQ, b2_?NumericQ] :=
Module[{},
pp[a0, a1, a2, b0, b1, b2] :=
Table[FindRoot[{g[p, tvector[[i]], a0, a1, a2, b0, b1, b2]},
{p,pvector[[i]]}],{i,avvicinamento}];
pressioni1 = pp[a0, a1, a2, b0, b1, b2];
pcalc = p /. pressioni1;
differenza = ((pcalc - pexp)/pexp)^2;
If[MemberQ[differenza, 0.],
differenza = zeri + RandomReal[{100000, 500000}];(*
First problem:
FindRoot sometimes returns a solution equal to the starting point.
I don't want that kind of solution, and with this method -
adding RandomReal[{100000,500000}] -
I keep those solutions away. Is that right? *)
deltap = Total[differenza],
differenzanonzero = Select[differenza, # > 0 &];
csd1[a_, b_, p_, t_] :=
a/(r*t*(b + v)) - (-(b^3/(64*v^3)) + b^2/(16*v^2) + b/(4*v) +
1)/(1 - b/(4*v))^3 + (p*v)/(r*t);(*Funzione CSD*)
volumi =
Table[NSolve[csd1[a, b, pcalc[[i]], tvector[[i]]], v, Reals], {i,
avvicinamento}];
soluzioni = v /. volumi;
vvcalc = Table[Max[soluzioni[[i]]], {i, avvicinamento}];
vlcalc = Table[Min[soluzioni[[i]]], {i, avvicinamento}];
deltavl = Total[((vlexp - vlcalc)/vlcalc)^2];
deltavv = Total[((vvexp - vvcalc)/vvcalc)^2];
deltap = Total[differenza];
Print[a0, " ", b0, " ", delta];
delta = 0.1*deltavl + 0.1*deltavv + deltap]];
NMinimize[{hope[a0, a1, a2, b0, b1, b2],
500 < a0 < 700 && -0.01 < a1 < -1.0*10^-5 && -10^-5 < a2 < -10^-7 &&
0.0010 < b0 < 0.1 && -0.0010 < b1 < -1.0*10^-5 &&
10^-9 < b2 < 10^-7}, {a0, a1, a2, b0, b1, b2}]
Thanks in advance!
Mariano Pierantozzi
PhD Student in chemical Engineering

Haskell matrix implementation performance

I heard a lot about the amazing performance of programs written in Haskell, and wanted to run some tests. So, I wrote a 'library' for matrix operations just to compare its performance with the same stuff written in pure C.
First of all I tested the performance of 500000 matrix multiplications, and noticed that it was... never-ending (i.e. ending with an out-of-memory exception after 10 minutes or so)! After studying Haskell a bit more I managed to get rid of the laziness, and the best result I managed to get is ~20 times slower than its equivalent in C.
So, the question: could you review the code below and tell me if its performance can be improved a bit more? 20 times slower is still disappointing me a bit.
import Prelude hiding (foldr, foldl, product)
import Data.Monoid
import Data.Foldable
import Text.Printf
import System.CPUTime
import System.Environment
data Vector a = Vec3 a a a
              | Vec4 a a a a
              deriving Show

instance Foldable Vector where
    foldMap f (Vec3 a b c)   = f a `mappend` f b `mappend` f c
    foldMap f (Vec4 a b c d) = f a `mappend` f b `mappend` f c `mappend` f d

data Matr a = Matr !a !a !a !a
                   !a !a !a !a
                   !a !a !a !a
                   !a !a !a !a

instance Show a => Show (Matr a) where
    show m = foldr f [] $ matrRows m
      where f a b = show a ++ "\n" ++ b

matrCols (Matr a0 b0 c0 d0 a1 b1 c1 d1 a2 b2 c2 d2 a3 b3 c3 d3)
    = [Vec4 a0 a1 a2 a3, Vec4 b0 b1 b2 b3, Vec4 c0 c1 c2 c3, Vec4 d0 d1 d2 d3]

matrRows (Matr a0 b0 c0 d0 a1 b1 c1 d1 a2 b2 c2 d2 a3 b3 c3 d3)
    = [Vec4 a0 b0 c0 d0, Vec4 a1 b1 c1 d1, Vec4 a2 b2 c2 d2, Vec4 a3 b3 c3 d3]

matrFromList [a0, b0, c0, d0, a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3]
    = Matr a0 b0 c0 d0
           a1 b1 c1 d1
           a2 b2 c2 d2
           a3 b3 c3 d3

matrId :: Matr Double
matrId = Matr 1 0 0 0
              0 1 0 0
              0 0 1 0
              0 0 0 1

normalise (Vec4 x y z w) = Vec4 (x/w) (y/w) (z/w) 1

mult a b = matrFromList [f r c | r <- matrRows a, c <- matrCols b] where
    f a b = foldr (+) 0 $ zipWith (*) (toList a) (toList b)
First, I doubt that you'll ever get stellar performance with this implementation. There are too many conversions between different representations. You'd be better off basing your code on something like the vector package. Also you don't provide all your testing code, so there are probably other issues that we can't see here. This is because the pipeline of production to consumption has a big impact on Haskell performance, and you haven't provided either end.
Now, two specific problems:
1) Your vector is defined as either a 3 or 4 element vector. This means that for every vector there's an extra check to see how many elements are in use. In C, I imagine your implementation is probably closer to
struct vec {
double *vec;
int length;
}
You should do something similar in Haskell; this is how vector and bytestring are implemented for example.
Even if you don't change the Vector definition, make the fields strict. You should also either add UNPACK pragmas (to Vector and Matrix) or compile with -funbox-strict-fields.
2) Change mult to
mult a b = matrFromList [f r c | r <- matrRows a, c <- matrCols b] where
    f a b = Data.List.foldl' (+) 0 $ zipWith (*) (toList a) (toList b)
The extra strictness of foldl' will give much better performance in this case than foldr.
This change alone might make a big difference, but without seeing the rest of your code it's difficult to say.
Answering my own question just to share new results I got yesterday:
I upgraded GHC to the most recent version and the performance became indeed not that bad (only ~7 times slower).
Also I tried implementing the matrix in a stupid and simple way (see the listing below) and got really acceptable performance - only about 2 times slower than the C equivalent.
data Matr a = Matr ( a, a, a, a
                   , a, a, a, a
                   , a, a, a, a
                   , a, a, a, a)

mult (Matr (!a0, !b0, !c0, !d0,
            !a1, !b1, !c1, !d1,
            !a2, !b2, !c2, !d2,
            !a3, !b3, !c3, !d3))
     (Matr (!a0', !b0', !c0', !d0',
            !a1', !b1', !c1', !d1',
            !a2', !b2', !c2', !d2',
            !a3', !b3', !c3', !d3'))
    = Matr ( a0'', b0'', c0'', d0''
           , a1'', b1'', c1'', d1''
           , a2'', b2'', c2'', d2''
           , a3'', b3'', c3'', d3'')
    where a0'' = a0 * a0' + b0 * a1' + c0 * a2' + d0 * a3'
          b0'' = a0 * b0' + b0 * b1' + c0 * b2' + d0 * b3'
          c0'' = a0 * c0' + b0 * c1' + c0 * c2' + d0 * c3'
          d0'' = a0 * d0' + b0 * d1' + c0 * d2' + d0 * d3'
          a1'' = a1 * a0' + b1 * a1' + c1 * a2' + d1 * a3'
          b1'' = a1 * b0' + b1 * b1' + c1 * b2' + d1 * b3'
          c1'' = a1 * c0' + b1 * c1' + c1 * c2' + d1 * c3'
          d1'' = a1 * d0' + b1 * d1' + c1 * d2' + d1 * d3'
          a2'' = a2 * a0' + b2 * a1' + c2 * a2' + d2 * a3'
          b2'' = a2 * b0' + b2 * b1' + c2 * b2' + d2 * b3'
          c2'' = a2 * c0' + b2 * c1' + c2 * c2' + d2 * c3'
          d2'' = a2 * d0' + b2 * d1' + c2 * d2' + d2 * d3'
          a3'' = a3 * a0' + b3 * a1' + c3 * a2' + d3 * a3'
          b3'' = a3 * b0' + b3 * b1' + c3 * b2' + d3 * b3'
          c3'' = a3 * c0' + b3 * c1' + c3 * c2' + d3 * c3'
          d3'' = a3 * d0' + b3 * d1' + c3 * d2' + d3 * d3'

How to rewrite a "sapply" command to increase performance?

I have a data.frame named "d" of ~1,300,000 lines and 4 columns and another data.frame named "gc" of ~12,000 lines and 2 columns (but see the smaller example below).
d <- data.frame( gene=rep(c("a","b","c"),4), val=rnorm(12), ind=c( rep(rep("i1",3),2), rep(rep("i2",3),2) ), exp=c( rep("e1",3), rep("e2",3), rep("e1",3), rep("e2",3) ) )
gc <- data.frame( gene=c("a","b","c"), chr=c("c1","c2","c3") )
Here is what "d" looks like:
gene val ind exp
1 a 1.38711902 i1 e1
2 b -0.25578496 i1 e1
3 c 0.49331256 i1 e1
4 a -1.38015272 i1 e2
5 b 1.46779219 i1 e2
6 c -0.84946320 i1 e2
7 a 0.01188061 i2 e1
8 b -0.13225808 i2 e1
9 c 0.16508404 i2 e1
10 a 0.70949804 i2 e2
11 b -0.64950167 i2 e2
12 c 0.12472479 i2 e2
And here is "gc":
gene chr
1 a c1
2 b c2
3 c c3
I want to add a 5th column to "d" by incorporating data from "gc" that matches the 1st column of "d". For the moment I am using sapply.
d$chr <- sapply( 1:nrow(d), function(x) gc[ gc$gene==d[x,1], ]$chr )
But on the real data it takes a very long time (I have been running the command wrapped in "system.time()" for more than 30 minutes and it still has not finished).
Do you have any idea of how I could rewrite this in a clever way? Or should I consider using plyr, maybe with the "parallel" option (I have four cores on my computer)? In such a case, what would be the best syntax?
Thanks in advance.
I think you can just use the factor as an index. A factor is stored internally as integer codes (1 for the first level, 2 for the second, and so on), so d[,1] used as a row index picks out the matching row of gc for every row of d, provided gc has one row per gene level in level order:
gc[ d[,1], 2]
[1] c1 c2 c3 c1 c2 c3 c1 c2 c3 c1 c2 c3
Levels: c1 c2 c3
does the same as:
sapply( 1:nrow(d), function(x) gc[ gc$gene==d[x,1], ]$chr )
[1] c1 c2 c3 c1 c2 c3 c1 c2 c3 c1 c2 c3
Levels: c1 c2 c3
But is much faster:
> system.time(replicate(1000,sapply( 1:nrow(d), function(x) gc[ gc$gene==d[x,1], ]$chr )))
user system elapsed
5.03 0.00 5.02
>
> system.time(replicate(1000,gc[ d[,1], 2]))
user system elapsed
0.12 0.00 0.13
Edit:
To expand a bit on my comment. The gc dataframe requires one row for each level of gene in the order of the levels for this to work:
d <- data.frame( gene=rep(c("a","b","c"),4), val=rnorm(12), ind=c( rep(rep("i1",3),2), rep(rep("i2",3),2) ), exp=c( rep("e1",3), rep("e2",3), rep("e1",3), rep("e2",3) ) )
gc <- data.frame( gene=c("c","a","b"), chr=c("c1","c2","c3") )
gc[ d[,1], 2]
[1] c1 c2 c3 c1 c2 c3 c1 c2 c3 c1 c2 c3
Levels: c1 c2 c3
sapply( 1:nrow(d), function(x) gc[ gc$gene==d[x,1], ]$chr )
[1] c2 c3 c1 c2 c3 c1 c2 c3 c1 c2 c3 c1
Levels: c1 c2 c3
But it is not hard to fix that:
levels(gc$gene) <- levels(d$gene) # Seems redundant as this is done right quite often automatically
gc <- gc[order(gc$gene),]
gc[ d[,1], 2]
[1] c2 c3 c1 c2 c3 c1 c2 c3 c1 c2 c3 c1
Levels: c1 c2 c3
sapply( 1:nrow(d), function(x) gc[ gc$gene==d[x,1], ]$chr )
[1] c2 c3 c1 c2 c3 c1 c2 c3 c1 c2 c3 c1
Levels: c1 c2 c3
An alternative solution that does not beat Sasha's approach timing-wise, but is more generalizable and readable, is to simply merge the two data frames:
d <- merge(d, gc)
I have a slower system, so here are my timings:
> system.time(replicate(1000,sapply( 1:nrow(d), function(x) gc[ gc$gene==d[x,1], ]$chr )))
user system elapsed
11.22 0.12 11.86
> system.time(replicate(1000,gc[ d[,1], 2]))
user system elapsed
0.34 0.00 0.35
> system.time(replicate(1000, merge(d, gc, by="gene")))
user system elapsed
3.35 0.02 3.40
The benefit is that you could have multiple keys, fine control over non-matching items, etc.
