I was reading a blog post on how to calculate the cumulative return of a stock for each day.
The formula described in the blog for the cumulative return was
(1 + TodayReturn) * (1 + Cumulative_Return_Of_Previous_Day) - 1, but I am still not able to reproduce the cumulative return it provides.
Can someone please clarify how the cumulative return has been calculated in the table given below? That would be a big help.
Thanks in advance.
| Days | Stock Price | Return | Cumulative Return|
---------------------------------------------------
| Day 1 | 150 | | |
| Day 2 | 153 | 2.00 % | 2.00 % |
| Day 3 | 160 | 4.58 % | 6.67 % |
| Day 4 | 163 | 1.88 % | 8.67 % |
| Day 5 | 165 | 1.23 % | 10.00 % |
---------------------------------------------------
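For reference, a minimal Python sketch of that formula (assuming the daily return is simply today's price divided by yesterday's price, minus 1) reproduces every value in the table:
prices = [150, 153, 160, 163, 165]          # Day 1 .. Day 5 from the table

cumulative = 0.0
for yesterday, today in zip(prices, prices[1:]):
    daily = today / yesterday - 1                      # e.g. 153 / 150 - 1 = 2.00 %
    cumulative = (1 + daily) * (1 + cumulative) - 1    # the blog's formula
    print(f"return {daily:6.2%}   cumulative {cumulative:6.2%}")

# Prints 2.00% / 2.00%, 4.58% / 6.67%, 1.88% / 8.67%, 1.23% / 10.00% -- the same
# values as the table (the Day 5 cumulative return is just 165 / 150 - 1 = 10%).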
I saw a video on how to move a cube the way it moves in a Snake game.
Hi,
In this video ( https://www.youtube.com/watch?v=aT2zNLSFQEk&list=PLLH3mUGkfFCVNs51eK8ftCAlI3hZQ95tC&index=11 ) he declares a float named **lastMove** with no value (zero by default), uses it in a condition, subtracts it from Time.time, and then assigns Time.time to **lastMove**.
My question is: what is the effect of lastMove in the condition when it has no value?
If I remove it from the if statement the game runs fast, but if it stays in the if statement everything moves much more slowly.
What he does is continuously check whether time - lastMove is bigger than a given predefined interval (timeInBetweenMoves). Time keeps increasing each frame while lastMove stays fixed, so at some point the condition becomes true. When it does, he updates lastMove with the current value of time to "reset the loop", i.e. to make the difference smaller than the interval again. The point of doing this is to move only at a fixed interval (0.25 s) instead of every frame. Like this:
interval = 0.25 (timeInBetweenMoves)
time (secs) | lastMove | time - lastMove
-----------------------------------------
0.00 | 0 | 0
0.05 | 0 | 0.05
0.10 | 0 | 0.10
0.15 | 0 | 0.15
0.20 | 0 | 0.20
0.25 | 0 | 0.25
0.30 | 0 | 0.30 ---> bigger than interval: MOVE and set lastMove to this (0.30)
0.35 | 0.30 | 0.05
0.40 | 0.30 | 0.10
0.45 | 0.30 | 0.15
0.50 | 0.30 | 0.20
0.55 | 0.30 | 0.25
0.60 | 0.30 | 0.30 ---> bigger than interval: MOVE and set lastMove to time (0.60)
0.65 | 0.60 | 0.05
0.70 | 0.60 | 0.10
...
This is a kind of throttling.
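For what it's worth, here is a rough, engine-agnostic sketch of the same throttling pattern in plain Python (not the C# from the video); time.monotonic() stands in for Time.time and the names are made up:
import time

TIME_BETWEEN_MOVES = 0.25      # like timeInBetweenMoves in the video
last_move = 0.0                # "no value" effectively means 0 at the start

start = time.monotonic()
while True:
    now = time.monotonic() - start            # seconds since start, like Time.time
    if now - last_move > TIME_BETWEEN_MOVES:
        print(f"move at t = {now:.2f} s")     # the snake would advance one cell here
        last_move = now                       # "reset the loop"
    if now > 2.0:                             # stop the demo after two seconds
        break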
I need to design a Programmable Logic Array (PLA) circuit for given functions. How can I design the circuit when the functions are in POS form? Can someone explain that?
Below is the design process completed for F; the other two outputs, G and H, can be done in a similar fashion.
F Truth Table (# = minterm index; only the F output is shown):
A B C D # F
0 0 0 0 0 0
0 0 0 1 1 1
0 0 1 0 2 0
0 0 1 1 3 1
0 1 0 0 4 1
0 1 0 1 5 1
0 1 1 0 6 0
0 1 1 1 7 0
1 0 0 0 8 0
1 0 0 1 9 0
1 0 1 0 10 1
1 0 1 1 11 1
1 1 0 0 12 0
1 1 0 1 13 0
1 1 1 0 14 1
1 1 1 1 15 1
F Karnaugh Map:
AB\CD 00 01 11 10
\
----------
00 0 | 1 x 1 | 0
----------
--------
01 | 1 y 1 | 0 0
--------
----------
11 0 0 | 1 1 |
| z |
| |
10 0 0 | 1 1 |
----------
F Sum-of-Products Boolean expression:
F = (A'B'D) + (A'BC') + (AC)
where (for reference):
x = (A'B'D)
y = (A'BC')
z = (AC)
F Logic Circuit:
------- ------
A---NOT---| | ------- | |
| AND |---| | | |
B---NOT---| | | AND | x | |
------- | |--------------| |
D---------------------------| | | |
------- | |
| | F
------- | OR |---
A---NOT---| | ------- ------ | |
| AND |---| | y | | | |
C---NOT---| | | AND |---| | | |
------- | | | | y+z | |
B---------------------------| | | |-----| |
------- | OR | | |
------- | | | |
A---NOT---| | z | | | |
| AND |---------------------| | ------
C---NOT---| | | |
------- | |
------
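As a quick sanity check (not part of the derivation above), this small Python snippet evaluates the SOP expression for all 16 input combinations, so the result can be compared against the K-map cell by cell:
# Evaluate F = (A'B'D) + (A'BC') + (AC) for every input combination / minterm 0..15.
for m in range(16):
    A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    F = ((not A) and (not B) and D) or ((not A) and B and (not C)) or (A and C)
    print(f"{A} {B} {C} {D}  #{m:2d}  F={int(F)}")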
I encountered a problem while trying to match images using their correlation coefficients.
Say we have 5 thumbnails (a, b, c, d, e) and we need to find the best corresponding thumbnail for each of them in another set of thumbnails (f, g, h, i, j). (One item cannot be re-used.)
For each possible pair, we compute the correlation coefficient (a measure of similarity).
f g h i j
|-----|-----|-----|-----|-----|
a | 0.5 | 0.7 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
b | 0.7 | 0.8 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
c | 0 | 0 | 0 | 0 | 0.8 |
|-----|-----|-----|-----|-----|
d | 0 | 0 | 0.5 | 0.6 | 0.7 |
|-----|-----|-----|-----|-----|
e | 0 | 0.6 | 0.7 | 0.5 | 0 |
|-----|-----|-----|-----|-----|
What I do:
Find the max of each row
f g h i j
|-----|-----|-----|-----|-----|
a | 0 | 0.7 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
b | 0 | 0.8 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
c | 0 | 0 | 0 | 0 | 0.8 |
|-----|-----|-----|-----|-----|
d | 0 | 0 | 0 | 0 | 0.7 |
|-----|-----|-----|-----|-----|
e | 0 | 0 | 0.7 | 0 | 0 |
|-----|-----|-----|-----|-----|
Find the max of each column
f g h i j
|-----|-----|-----|-----|-----|
a | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
b | 0 | 0.8 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
c | 0 | 0 | 0 | 0 | 0.8 |
|-----|-----|-----|-----|-----|
d | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
e | 0 | 0 | 0.7 | 0 | 0 |
|-----|-----|-----|-----|-----|
Save those pairs in a table
Create a mask that is zero along the row and the column of each pair selected in this last table (and one everywhere else)
f g h i j
|-----|-----|-----|-----|-----|
a | 1 | 0 | 0 | 1 | 0 |
|-----|-----|-----|-----|-----|
b | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
c | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
d | 1 | 0 | 0 | 1 | 0 |
|-----|-----|-----|-----|-----|
e | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
Multiply the mask element-wise with the first table
f g h i j
|-----|-----|-----|-----|-----|
a | 0.5 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
b | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
c | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
d | 0 | 0 | 0 | 0.6 | 0 |
|-----|-----|-----|-----|-----|
e | 0 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
Repeat the process until the matrix obtained at the second step is all zeros
So at the end, the matching table looks like this:
f g h i j
|-----|-----|-----|-----|-----|
a | 1 | 0 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
b | 0 | 1 | 0 | 0 | 0 |
|-----|-----|-----|-----|-----|
c | 0 | 0 | 0 | 0 | 1 |
|-----|-----|-----|-----|-----|
d | 0 | 0 | 0 | 1 | 0 |
|-----|-----|-----|-----|-----|
e | 0 | 0 | 1 | 0 | 0 |
|-----|-----|-----|-----|-----|
According to this method, the best possible pairs are:
(a,f), (b,g), (c,j), (d,i) and (e,h)
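For illustration, here is a rough NumPy sketch of the procedure described above (row max, then column max, then masking and repeating); ties are not handled specially, and the row/column labels are only used for printing:
import numpy as np

corr = np.array([[0.5, 0.7, 0.0, 0.0, 0.0],
                 [0.7, 0.8, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0, 0.8],
                 [0.0, 0.0, 0.5, 0.6, 0.7],
                 [0.0, 0.0, 0.7, 0.5, 0.0]])
rows, cols = "abcde", "fghij"

remaining = corr.copy()
pairs = []
while remaining.any():
    # keep only the max of each row, then only the max of each column of that
    row_max = np.where(remaining == remaining.max(axis=1, keepdims=True), remaining, 0)
    col_max = np.where(row_max == row_max.max(axis=0, keepdims=True), row_max, 0)
    for r, c in zip(*np.nonzero(col_max)):
        pairs.append((rows[r], cols[c]))
        remaining[r, :] = 0      # zero out the row and the column: no re-use
        remaining[:, c] = 0

print(sorted(pairs))   # [('a', 'f'), ('b', 'g'), ('c', 'j'), ('d', 'i'), ('e', 'h')]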
Now the question is :
Is there a better method?
For example, for (a, b) and (f, g), wouldn't it be better to add up their scores to find the best match?
Ex:
(a,f) (b,g)
0.5 + 0.8 = 1.3
(a,g) (b,f)
0.7 + 0.7 = 1.4
1.4 > 1.3 => best pairs are (a,g) and (b,f)
(As opposed to (a,f), (b,g) with the first method.)
If so, how can it be generalized?
I hope that I've been clear enough to make you understand the problem.
Thanks in advance for your help.
EDIT :
I found out that the Hungarian algorithm is much faster than the ILP solution provided by AirSquid.
I compared the Hungarian implementation of Scipy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html) with the ILP based solution.
After 1000 one-to-one matching iterations on random 20x20 matrices I got:
Method              | Avg. time per iteration (s)
--------------------|----------------------------
ILP solution        | 4.06e-2
Hungarian algorithm | 1.808e-5
Apart from the speed, I haven't seen any difference between the results of those two methods in my tests.
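For reference, a minimal sketch of that scipy call on the 5x5 matrix from the question (maximize=True because the values are similarities rather than costs):
import numpy as np
from scipy.optimize import linear_sum_assignment

corr = np.array([[0.5, 0.7, 0.0, 0.0, 0.0],
                 [0.7, 0.8, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0, 0.8],
                 [0.0, 0.0, 0.5, 0.6, 0.7],
                 [0.0, 0.0, 0.7, 0.5, 0.0]])
rows, cols = "abcde", "fghij"

row_ind, col_ind = linear_sum_assignment(corr, maximize=True)
print([(rows[r], cols[c]) for r, c in zip(row_ind, col_ind)])
print(corr[row_ind, col_ind].sum())
# The optimal assignment keeps (c,j), (d,i), (e,h) but swaps to (a,g) and (b,f),
# for a total of 3.5 instead of the 3.4 found by the row/column-max method above.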
This is a trivial pairing model for most any math solver and can be formulated as an ILP. If you wish to go this route in python, you have several choices (after learning a bit about LP/ILP formulation :) ). I'm partial to pyomo but pulp and or-tools are also viable. You will also need a solver engine. There are several freebies out there, some are easier to install than others. I believe pulp has a built-in solver, which is nice.
There is probably a dynamic programming solution to consider as well, but this is fast & easy. For the examples I note in the problem below (a replica of the counter-example above and a random 20x20 matrix), optimal solutions are almost instantaneous.
# pairing
import pyomo.environ as pyo
import numpy as np

data = [[.99, .98, .97, .96, .95],
        [.98, .97, .96, .95, 0],
        [.97, .96, .95, 0, 0],
        [.96, .95, 0, 0, 0],
        [.95, 0, 0, 0, 0]]
#data = np.random.rand(20, 20)  # alternate random data for testing...

model = pyo.ConcreteModel('r-c pairings')

# re-label the data and push into a dictionary
labels = list('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')
data = {(labels[r], labels[len(data) + c]): data[r][c]
        for r in range(len(data)) for c in range(len(data[0]))}

# pyomo components
model.R = pyo.Set(initialize=[k[0] for k in data.keys()])
model.C = pyo.Set(initialize=[k[1] for k in data.keys()])
model.corr = pyo.Param(model.R, model.C, initialize=data)
model.X = pyo.Var(model.R, model.C, within=pyo.Binary)  # select pairing (r, c)

# objective: maximize overall value
model.obj = pyo.Objective(expr=pyo.summation(model.corr, model.X), sense=pyo.maximize)  # shortcut to ∑cX

# constraint: only use each column value once
def single_use(m, c):
    return sum(model.X[r, c] for r in model.R) <= 1
model.C1 = pyo.Constraint(model.C, rule=single_use)

# constraint: only use each row value once
def single_use_row(m, r):
    return sum(model.X[r, c] for c in model.C) <= 1
model.C2 = pyo.Constraint(model.R, rule=single_use_row)

# solve it...
solver = pyo.SolverFactory('glpk')  # <-- need to have this solver installed
result = solver.solve(model)
print(result)
pyo.display(model)
Output (from the smaller data run):
Problem:
- Name: unknown
  Lower bound: 4.75
  Upper bound: 4.75
  Number of objectives: 1
  Number of constraints: 11
  Number of variables: 26
  Number of nonzeros: 51
  Sense: maximize
Solver:
- Status: ok
  Termination condition: optimal
  Statistics:
    Branch and bound:
      Number of bounded subproblems: 1
      Number of created subproblems: 1
  Error rc: 0
  Time: 0.010313272476196289
Solution:
- number of solutions: 0
  number of solutions displayed: 0
Model r-c pairings
Variables:
X : Size=25, Index=X_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
('a', 'f') : 0 : 0.0 : 1 : False : False : Binary
('a', 'g') : 0 : 0.0 : 1 : False : False : Binary
('a', 'h') : 0 : 0.0 : 1 : False : False : Binary
('a', 'i') : 0 : 0.0 : 1 : False : False : Binary
('a', 'j') : 0 : 1.0 : 1 : False : False : Binary
('b', 'f') : 0 : 0.0 : 1 : False : False : Binary
('b', 'g') : 0 : 0.0 : 1 : False : False : Binary
('b', 'h') : 0 : 0.0 : 1 : False : False : Binary
('b', 'i') : 0 : 1.0 : 1 : False : False : Binary
('b', 'j') : 0 : 0.0 : 1 : False : False : Binary
('c', 'f') : 0 : 0.0 : 1 : False : False : Binary
('c', 'g') : 0 : 0.0 : 1 : False : False : Binary
('c', 'h') : 0 : 1.0 : 1 : False : False : Binary
('c', 'i') : 0 : 0.0 : 1 : False : False : Binary
('c', 'j') : 0 : 0.0 : 1 : False : False : Binary
('d', 'f') : 0 : 0.0 : 1 : False : False : Binary
('d', 'g') : 0 : 1.0 : 1 : False : False : Binary
('d', 'h') : 0 : 0.0 : 1 : False : False : Binary
('d', 'i') : 0 : 0.0 : 1 : False : False : Binary
('d', 'j') : 0 : 0.0 : 1 : False : False : Binary
('e', 'f') : 0 : 1.0 : 1 : False : False : Binary
('e', 'g') : 0 : 0.0 : 1 : False : False : Binary
('e', 'h') : 0 : 0.0 : 1 : False : False : Binary
('e', 'i') : 0 : 0.0 : 1 : False : False : Binary
('e', 'j') : 0 : 0.0 : 1 : False : False : Binary
Objectives:
obj : Size=1, Index=None, Active=True
Key : Active : Value
None : True : 4.75
Constraints:
C1 : Size=5
Key : Lower : Body : Upper
f : None : 1.0 : 1.0
g : None : 1.0 : 1.0
h : None : 1.0 : 1.0
i : None : 1.0 : 1.0
j : None : 1.0 : 1.0
C2 : Size=5
Key : Lower : Body : Upper
a : None : 1.0 : 1.0
b : None : 1.0 : 1.0
c : None : 1.0 : 1.0
d : None : 1.0 : 1.0
e : None : 1.0 : 1.0
I think your method is broken for some cases.
For an example consider:
f g
|-----|-----|
a | 0.9 | 0.8 |
|-----|-----|
b | 0.8 | 0 |
|-----|-----|
For this case, the best solution is ag and bf, where the total score is "0.8 + 0.8 = 1.6". If you choose the maximum score first (af) you're forced to use bg as the second pair (as there's no other choice left), and that gives you a total score of "0.9 + 0 = 0.9", which is much worse.
Note that the same problem exists (and can be much worse) for 5 pairs. E.g. for an extreme case:
f g h i j
|------|------|------|------|------|
a | 0.99 | 0.98 | 0.97 | 0.96 | 0.95 |
|------|------|------|------|------|
b | 0.98 | 0.97 | 0.96 | 0.95 | 0 |
|------|------|------|------|------|
c | 0.97 | 0.96 | 0.95 | 0 | 0 |
|------|------|------|------|------|
d | 0.96 | 0.95 | 0 | 0 | 0 |
|------|------|------|------|------|
e | 0.95 | 0 | 0 | 0 | 0 |
|------|------|------|------|------|
Here, "maximum first" leads to af, bg, ch, di, ej with a total score of 2.91; but the best solution is ef, dg, ch, bi, aj with a total score of 4.75.
To find the best pairings, you want to calculate the total for each possibility and then pick the highest total. The simplest way to do that is a brute-force approach (literally calculating a total for every possibility, as in the sketch below), but that has relatively high overhead.
Assuming a "nested loops" approach (e.g. an outer loop iterating through the possibilities for a, an inner loop iterating through the possibilities for b, and so on, where each loop level builds a "partial total" so that the inner-most loop can use it instead of recalculating the full total itself), I don't think there's a practical way to improve performance without creating a risk of failing to find the best solution.
I have seen a different style of Karnaugh Map for logic design. This is the style they used:
Does anyone know how this K-map is done? How do I read this kind of map, or how is the equation derived from it? The map is quite different from the common map, like this:
The maps relate to each other as follows; the only difference is the cells' (terms') indexes corresponding to the variables, i.e. the order of the variables.
The exclamation mark is just an alternative notation for the negation of a variable: !A is the same as ¬A, also sometimes written A'.
!A A A !A ↓CD\AB → 00 01 11 10
+----+----+----+----+ +----+----+----+----+
!B | 1 | 0 | 1 | 0 | !D 00 | 1 | 1 | 1 | 0 |
+----+----+----+----+ +----+----+----+----+
B | 1 | 1 | 1 | 1 | !D ~ 01 | 1 | x | x | 1 |
+----+----+----+----+ +----+----+----+----+
B | x | x | x | x | D 11 | x | x | x | x |
+----+----+----+----+ +----+----+----+----+
!B | 1 | 1 | x | x | D 10 | 0 | 1 | 1 | 1 |
+----+----+----+----+ +----+----+----+----+
!C !C C C
If you are unsure of the indexes in the given K-map, you can always check them by writing out the corresponding truth table.
For example, the output value of the first cell in the "strange" K-map is equal to 1 when !A·!B·!C·!D (all variables negated); that corresponds to the first line of the truth table, so the index is 0. And so on.
index | A B C D | y
=======+=========+===
0 | 0 0 0 0 | 1
1 | 0 0 0 1 | 1
2 | 0 0 1 0 | 0
3 | 0 0 1 1 | x ~ 'do not care' state/output
-------+---------+---
4 | 0 1 0 0 | 1
5 | 0 1 0 1 | x
6 | 0 1 1 0 | 1
7 | 0 1 1 1 | x
-------+---------+---
8 | 1 0 0 0 | 0
9 | 1 0 0 1 | 1
10 | 1 0 1 0 | 1
11 | 1 0 1 1 | x
-------+---------+---
12 | 1 1 0 0 | 1
13 | 1 1 0 1 | x
14 | 1 1 1 0 | 1
15 | 1 1 1 1 | x
You can use the map the same way you would use the "normal" K-map to find the implicants (groups), because the indexing of every K-map has to follow Gray code.
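To make the Gray-code indexing concrete, here is a tiny Python sketch (assuming the usual minterm weighting index = 8·A + 4·B + 2·C + D) that prints which minterm sits in each cell of the "normal" CD\AB map:
# Print the minterm index sitting in each cell of the "normal" CD\AB map,
# with rows and columns in Gray-code order 00, 01, 11, 10.
gray = [0b00, 0b01, 0b11, 0b10]

print("CD\\AB | " + "  ".join(f"{ab:02b}" for ab in gray))
for cd in gray:
    c, d = cd >> 1, cd & 1
    cells = ["{:2d}".format(8 * (ab >> 1) + 4 * (ab & 1) + 2 * c + d) for ab in gray]
    print(f"  {cd:02b}  | " + "  ".join(cells))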
You can see the simplified boolean expression is the same in both styles of these K-maps:
f(A,B,C,D) = !A·!C + A·C + B + D = ¬A·¬C + A·C + B + D
!A·!C is marked red,
A·C blue,
B orange
and D green.
The K-maps were generated with LaTeX's \karnaughmap command and the TikZ library.
It's the same in principle; just the rows and columns (or the variables) are in a different order.
The red labels are for when the variable is true, the blue ones for when it's false.
It's actually the same map, but with C in place of A, A in place of B, D in place of C, and B in place of D.