Hadamard gate measurement always returns the same result, which is different from what I expected

I measured some states after applying a Hadamard gate.
I assumed that, because of the properties of the Hadamard gate,
the measurement results would always differ.
But when I checked, the result is always the same:
=> after applying a Hadamard gate to q_0 in a 2-qubit circuit
=> after applying a Hadamard gate to q_2 in a 3-qubit circuit
=> after applying a Hadamard gate to q_1 in a 3-qubit circuit
=> after applying a Hadamard gate to q_0 in a 3-qubit circuit
I learned that the Hadamard gate yields equal probabilities for each state,
but the measurement always returns the same result.

You have to understand what we really mean by probability first. Let me give you an example: the probability of getting heads on a coin toss is 1/2. What this means is that if we flip the coin, say, 1000 times, we will get heads roughly half of the time. It does not mean that if we flip it once, half the coin will be heads and the other half tails.
So in the long run the distribution of heads and tails might be something like 509 heads and 491 tails, but there will be stretches where the coin comes up heads 10 times in a row.
Now to the Hadamard. A Hadamard gate creates equal probabilities for 0 and 1 when applied to ket 0. This means that when you measure, there is a fifty percent chance of getting 0 and a fifty percent chance of getting 1. So you can get 10 straight ket 0s when measuring, but if you do it 1000 times, the distribution evens out. This is why, when we use real quantum computers or even simulators, we set something called 'shots' to a large number: that number is how many times the circuit is run and measured, and the counts give us an idea of what the quantum state was.
Go to 'Set up and run' on the Composer and you'll understand the rest.
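If you prefer to check this in code instead, here is a minimal sketch using Qiskit (assuming qiskit and qiskit-aer are installed; the exact simulator API varies a little between versions):

import random  # not needed; randomness comes from the simulator
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# One qubit: apply a Hadamard, then measure it into one classical bit.
qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)

# 'shots' is how many times the circuit is run and measured.
counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # roughly {'0': ~500, '1': ~500}

With shots=1 you would just see a single 0 or 1, which is exactly the behaviour you observed.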

Related

Probability and events

Using only a coin, a regular deck of playing cards and a 6-sided die, invent a game of chance where you have a 1 in 20 chance of winning in any given turn or attempt. You do not have to use all the items listed above, but you cannot use anything more. You may use them multiple times and/or combine them in each attempt. For example: "a game is played by picking a card from the deck and tossing a coin twice; you win if you get 2 tails and a spade. The probability of winning is 1/16." Describe your event, how you play and win the game, and show with calculations that the probability of winning is 1/20.
I think you'd get a better answer on a mathematics site rather than a programming one, but here's my attempt:
It depends on what sort of rules you allow.
If you allow rules like "throw the die; if it's 1, 2, 3 or 4 you lose; if it's 5 you win; if it's 6, repeat", then a game with a 1/20 chance of winning would be: toss a coin twice; if you don't get two heads you lose; if you do get two heads, throw the die as above. (The die sub-game wins with probability (1/6)/(1 - 1/6) = 1/5, so the overall chance of winning is 1/4 * 1/5 = 1/20.)
If you don't allow such recursive rules then there is no such game. All the elementary events (tossing a particular side of the coin, picking a particular card, throwing a particular value of the die) have probabilities that can be expressed as N/D, where N and D are integers and D can be factorised as a product of powers of 2, 3 and 13. All events built by ands, ors and selections will have probabilities of this form, since they are finite sums of finite products of probabilities of elementary events. But 1/20 isn't of this form, because 20 = 2*2*5.
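Here's a quick Monte Carlo check of the recursive game above (just a sketch; the function and variable names are my own):

import random

def play_once():
    # Toss a coin twice: anything other than two heads loses immediately.
    if not (random.random() < 0.5 and random.random() < 0.5):
        return False
    # Die sub-game: 1-4 lose, 5 wins, 6 means throw again.
    while True:
        roll = random.randint(1, 6)
        if roll <= 4:
            return False
        if roll == 5:
            return True
        # roll == 6: repeat

trials = 1_000_000
wins = sum(play_once() for _ in range(trials))
print(wins / trials)  # hovers around 1/20 = 0.05

The estimate converges to 0.05, matching the 1/4 * 1/5 calculation.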

Is there a way to calculate the correlation coefficient between two binary variables a and b?

So there are two variables:
a -- whether the person is older than 40 (binary, 0 or 1)
b -- whether they have a luxury car (binary, 0 or 1)
These are the summary counts:
Total sample size -- 500
Number of people above 40 -- 60
Number of people with a luxury car -- 40
Number of people with a luxury car and above 40 -- 10
NOTE: Draw a Venn diagram if that helps.
Compute the correlation coefficient between a and b.
A correlation function can handle binary values. Even for categorical or enumerated items, under the hood the computer assigns arbitrary numerical codes and tests for correlation on those. In your case you simply want to know how often the two are the same versus how often they are opposite. If they are always opposite each other you would see -1; if they are always the same, +1. Zero would mean no correlation.
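For the counts in the question you can rebuild the 0/1 data and hand it to an ordinary Pearson correlation; a small numpy sketch (the variable names are mine):

import numpy as np

n, n_a, n_b, n_ab = 500, 60, 40, 10   # totals given in the question

# Reconstruct the 2x2 table as raw 0/1 observations:
# 10 both, 50 only above 40, 30 only luxury car, 410 neither.
a = np.array([1]*n_ab + [1]*(n_a - n_ab) + [0]*(n_b - n_ab) + [0]*(n - n_a - n_b + n_ab))
b = np.array([1]*n_ab + [0]*(n_a - n_ab) + [1]*(n_b - n_ab) + [0]*(n - n_a - n_b + n_ab))

print(np.corrcoef(a, b)[0, 1])   # about 0.118

For two binary variables this is the phi coefficient, (n*n_ab - n_a*n_b) / sqrt(n_a*(n-n_a)*n_b*(n-n_b)), which gives the same value of roughly 0.118: a weak positive correlation.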

Find an algorithm to balance success rate and cost

Assume I have a task that consists of 2 steps. Each step has a success rate and a cost, and each step can be assigned to one or more people in order to increase its success rate. Can anyone help me find an algorithm to balance them?
For example:
A task with 2 steps. Step 1 has a success rate of 50% and costs $1. Step 2 has a success rate of 75% and costs $2. If I assign each step to only one person, the overall success rate will be 50% * 75% = 37.5%. The threshold I want to reach is 80%.
In this case, I should assign step 1 to 3 people to get an 87.5% success rate, and assign step 2 to 2 people to get 93.75%. Then the overall success rate is 82.03%.
But I don't know how to implement it with an algorithm.
Update:
Assigning a step to more people means that multiple people execute the step at the same time. If 3 people execute a task with a 50% success rate, the probability that at least one of them succeeds is 1 - 0.5^3 = 87.5%.
This is actually very easy once you make the following observation: in order to reach an overall success rate above a threshold T, no individual step's success rate can be below T. Otherwise, since the overall success rate is the product of the individual rates, you would need at least one rate above 1 (>100%) to balance it out.
In your example, you need each individual success rate to be above 80%, and the minimum number of people needed for each task is, as you worked out, 3 for step 1 and 2 for step 2.
An individual success rate S is calculated from the base success rate B and the number of people N using the formula S = 1 - (1-B)^N. What you want to find is N: solving gives N = ln(1-S)/ln(1-B), and since you need S >= T, take N = ceil[ln(1-T)/ln(1-B)].
Calculate this N for each step; in your example the resulting overall success rate is above the threshold. Moreover, no N can be smaller, otherwise that step's success rate, and so the overall success rate, would fall below the threshold. (Note that with many steps a product of rates each just above T can still dip below T, so these Ns are a guaranteed minimum rather than always sufficient; in your two-step example they are enough.)
You mentioned that each step has a cost, but it does not really play a role here, since the threshold puts a hard lower limit on the number of people for each individual step.
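A minimal Python sketch of that calculation (the function names are mine; it just applies ceil(ln(1-T)/ln(1-B)) per step and reports the resulting rates and cost):

import math

def people_needed(base_rate, threshold):
    # Smallest N with 1 - (1 - base_rate)**N >= threshold.
    return math.ceil(math.log(1 - threshold) / math.log(1 - base_rate))

def plan(steps, threshold):
    total_cost, overall = 0, 1.0
    for base_rate, cost in steps:
        n = people_needed(base_rate, threshold)
        rate = 1 - (1 - base_rate) ** n
        total_cost += n * cost
        overall *= rate
        print(f"base={base_rate:.2f}: {n} people, step rate={rate:.4f}")
    print(f"overall rate={overall:.4f}, total cost=${total_cost}")

plan([(0.50, 1), (0.75, 2)], threshold=0.80)
# base=0.50: 3 people, step rate=0.8750
# base=0.75: 2 people, step rate=0.9375
# overall rate=0.8203, total cost=$7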

Markov entropy when probabilities are uneven

I've been thinking about information entropy in terms of the Markov equation:
H = -SUM(p(i) lg(p(i))), where lg is the base-2 logarithm.
This assumes that all selections i have equal probability. But what if the probabilities in the given set of choices are unequal? For example, let's say that StackExchange has 20 sites, and that the probability of a user visiting any StackExchange site except StackOverflow is p(i), but the probability of a user visiting StackOverflow is 5 times p(i).
Would the Markov equation not apply in this case? Or is there an advanced Markov variation that I'm unaware of?
I believe you are mixing up two concepts: entropy and the Markov property. Entropy measures the "disorder" of a distribution over states, using the equation you gave: H = -SUM(p(i) lg(p(i))), where p(i) is the probability of observing each state i.
The Markov property does not imply that every state has the same probability. Roughly, a system is said to have the Markov property if the probability of observing a state depends only on a few previous states: beyond a certain limit, the extra states you observe add no information for predicting the next state.
The prototypical Markov model is the Markov chain. It says that from each state i you can move to any state j with some probability, represented as a transition matrix (row = current state, column = next state):

      0    1    2
0   0.2  0.5  0.3
1   0.8  0.1  0.1
2   0.3  0.3  0.4
In this example, the probability of moving from state 0 to 1 is 0.5, and depends only on being in state 0 (knowing more about the previous states would not change that probability).
As long as every state can be reached from every other state, then no matter what the initial distribution is, the probability of being in each state converges to a stable long-term distribution: over a long run you will observe each state with a stable probability, which is not necessarily the same for every state.
In our example we would end up with probabilities p(0), p(1) and p(2), and you could then compute the entropy of the chain using your formula.
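A small numpy sketch of that last step (finding the long-run distribution of the example chain by iterating it, then plugging the result into the entropy formula):

import numpy as np

# Transition matrix from the example above (row = current state).
P = np.array([[0.2, 0.5, 0.3],
              [0.8, 0.1, 0.1],
              [0.3, 0.3, 0.4]])

# Start from any distribution and iterate until it settles on the
# stationary distribution pi, which satisfies pi = pi @ P.
pi = np.full(3, 1/3)
for _ in range(1000):
    pi = pi @ P

entropy = -np.sum(pi * np.log2(pi))
print(pi, entropy)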
From your example, are you thinking of Markov chains?

Algorithm for modeling expanding gases on a 2D grid

I have a simple program; at its heart is a two-dimensional array of floats representing gas concentrations. I have been trying to come up with a simple algorithm that will model the gas expanding outwards, like a cloud, eventually ending up with the same concentration of gas everywhere across the grid.
For example a given state progression could be:
(using ints for simplicity)
starting state
00000
00000
00900
00000
00000
state after 1 pass of algorithm
00000
01110
01110
01110
00000
One more pass should give a 5x5 grid with every cell containing the value 0.36 (9/25).
I've tried it out on paper, but no matter what I try, I can't get my head around an algorithm to do this.
So my question is: how should I go about coding this algorithm? I've tried a few things, such as applying a convolution, and taking each grid cell in turn and distributing its contents to its neighbours, but they all end up having undesirable effects, such as eventually having less gas than I originally started with, or all of the gas moving in one direction instead of expanding outwards from the centre. I really can't get my head around it at all and would appreciate any help.
It's either a diffusion problem if you ignore convection or a fluid dynamics/mass transfer problem if you don't. You would start with equations for conservation of mass and momentum for an Eulerian (fixed control volume) viewpoint if you were solving from scratch.
It's a transient problem, so you need to perform an integration to advance the state from time t(n) to t(n+1). You show a grid, but nothing about how you're solving in time. What integration scheme have you tried? Explicit? Implicit? Crank-Nicolson? If you don't know, you're not approaching the problem correctly.
One book that I really liked on this subject is S. V. Patankar's "Numerical Heat Transfer and Fluid Flow". It's a little dated now, but I liked the treatment; it's still good after 29 years, though there may be better texts published since I was reading on the subject. I think it's approachable for somebody looking into the topic for the first time.
In the example you give, your second stage has a core of 1's. Diffusion usually requires a concentration gradient, so most diffusion-related techniques won't change the 1 in the middle on the next iteration (nor would they have produced that state after the first pass, but it's a bit easier to see once you've got a block of equal values). But as the commenters on your post say, that's not likely to be the cause of a net movement. Losing gas may be an edge effect, but it can also be a matter of rounding errors: set the CPU to round half to even, then total the gas and apply a correction now and again.
It looks like you're trying to implement a finite-difference solver for the heat equation with Neumann boundary conditions (insulation at the edges). There's a lot of literature on this kind of thing. The Wikipedia page on the finite difference method describes a simple but stable scheme, though for Dirichlet boundary conditions (constant density at the edges); modifying the handling of the boundary conditions shouldn't be too difficult.
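As an illustration, here is a minimal numpy sketch of one explicit finite-difference step with insulating (Neumann) edges. The function name and the value of alpha are my own choices; alpha must stay at or below 0.25 for this explicit scheme to be stable:

import numpy as np

def diffuse_step(u, alpha=0.2):
    # Pad with edge values so no gas crosses the boundary; this keeps
    # the total amount of gas constant.
    p = np.pad(u, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4 * u)
    return u + alpha * lap

u = np.zeros((5, 5))
u[2, 2] = 9.0
for _ in range(500):
    u = diffuse_step(u)
print(u.sum())  # stays at 9.0 (up to rounding); u tends toward 0.36 everywhere

Because the update only moves gas between neighbouring cells and the padded edges reflect boundary values back in, the total is conserved and the grid relaxes toward the uniform 9/25 state.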
It looks like what you want is something like a smoothing algorithm, often used in programs like Photoshop or in old-school demo effects, like this simple Flame Effect.
Whatever algorithm you use, it will probably help you to double-buffer your array.
A typical smoothing effect will be something like:
begin loop forever
    for every x and y
    {
        b2[x,y] = (b1[x,y] + (b1[x+1,y] + b1[x-1,y] + b1[x,y+1] + b1[x,y-1])/8) / 2
    }
    swap b1 and b2
end loop forever
See Tom Forsyth's Game Programming Gems article. Looks like it fulfils your requirements, but if not then it should at least give you some ideas.
Here's a solution in 1D for simplicity:
The initial setup has a concentration of 9 at the origin (shown in parentheses below), and 0 at all other positive and negative coordinates.
initial state:
0 0 0 0 (9) 0 0 0 0
The algorithm for finding the next iteration's values is to start at the origin and average the current concentrations with the adjacent neighbours. The origin is a boundary case: its average is taken over the origin value and its two neighbours simultaneously, i.e. an average of 3 values. All other values are effectively averaged over 2 values.
after iteration 1:
0 0 0 3 (3) 3 0 0 0
after iteration 2:
0 0 1.5 1.5 (3) 1.5 1.5 0 0
after iteration 3:
0 .75 .75 2 (2) 2 .75 .75 0
after iteration 4:
.375 .375 1.375 1.375 (2) 1.375 1.375 .375 .375
You do these iterations in a loop, outputting the state every n iterations. You may introduce a time constant to control how many iterations represent one second of wall-clock time; this is also a function of what length the unit of the integer coordinates represents, and for a given hardware system you can tune this value empirically. You may also introduce a steady-state tolerance to decide when the program can say "all neighbour values are within this tolerance" or "no value changed between iterations by more than this tolerance", and hence that the algorithm has reached a steady-state solution.
The concentration for each iteration given a starting concentration can be obtained by the equation:
concentration = startingConcentration/(2*iter + 1)**2
where iter is the time iteration. So, for your example:
startingConcentration = 9
iter = 0
concentration = 9/(2*0 + 1)**2 = 9
iter = 1
concentration = 9/(2*1 + 1)**2 = 1
iter = 2
concentration = 9/(2*2 + 1)**2 = 9/25 = 0.36
You can set the values of the array after each "time step".
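A tiny sketch of that closed form (the function name is mine; note the formula only applies while the expanding square still fits inside the grid, after which the concentration settles at startingConcentration divided by the number of cells):

import numpy as np

def gas_grid(starting_concentration, iteration, size=5):
    # After `iteration` passes, the gas fills a uniform square of
    # width 2*iteration + 1 centred on the starting cell.
    width = 2 * iteration + 1
    conc = starting_concentration / width**2
    grid = np.zeros((size, size))
    c = size // 2
    lo, hi = max(0, c - iteration), min(size, c + iteration + 1)
    grid[lo:hi, lo:hi] = conc
    return grid

for it in range(3):
    print(gas_grid(9, it))   # centre value goes 9, 1, 0.36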

Resources