Fixed-Effects Regression with Panel Data - Dummy variable is excluded in Output - rstudio

I'm rather new to statistics and R. I'm currently working on a paper and I'm really stuck with a coding problem right now. I suspect that the problem lies with my dummy variable.
I did a fixed-effects regression with my panel data which worked out fine. All my variables (the Y and the Xs) were numeric. I decided to add another variable that is a dummy variable with two levels (yes/no).
I set the variable as a factor variable, but whenever I try to run the regression it does not show up in the output. As soon as I remove one specific numeric variable, it does show up in the output.
I obviously don't want to exclude that one numeric variable to include my dummy variable - there must be another way or something I did wrong...
library(readxl)
library(plm)
regdata <- read_excel("LinRegData.xlsx")
regdata$eu <- as.factor(regdata$eu)
attach(regdata)
pdata <- pdata.frame(regdata, index=c("country","year"))
pdata$eu <-as.factor((pdata$eu))
plmwithin <- plm(subaus ~ employ + pref + iEaeM + eu, data = pdata, model="within", family = poisson, effect = 'twoways', index = c('country', 'year'))
The output always excludes the eu-variable:
Call:
plm(formula = subaus ~ employ + pref + eu + iEaeM, data = pdata,
effect = "twoways", model = "within", index = c("country",
"year"), family = poisson)
Unbalanced Panel: n = 32, T = 8-11, N = 288
Residuals:
Min. 1st Qu. Median 3rd Qu. Max.
-0.4828507 -0.0867719 -0.0021724 0.0857117 0.7668712
Coefficients:
Estimate Std. Error t-value Pr(>|t|)
employ 0.03496136 0.00590717 5.9185 1.1e-08 ***
pref -0.02081850 0.00908030 -2.2927 0.02272 *
iEaeM -0.00010028 0.00070435 -0.1424 0.88690
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Total Sum of Squares: 9.4599
Residual Sum of Squares: 8.183
R-Squared: 0.13498
Adj. R-Squared: -0.021654
F-statistic: 12.639 on 3 and 243 DF, p-value: 1.0498e-07
>
If I exclude the iEaeM variable it shows:
Call:
plm(formula = subaus ~ employ + pref + eu, data = pdata, effect = "twoways",
model = "within", index = c("country", "year"), family = poisson)
Unbalanced Panel: n = 32, T = 8-23, N = 659
Residuals:
Min. 1st Qu. Median 3rd Qu. Max.
-1.04463 -0.18859 -0.01558 0.14696 1.46631
Coefficients:
Estimate Std. Error t-value Pr(>|t|)
employ 0.01346798 0.00525102 2.5648 0.01056 *
pref -0.00014268 0.00927955 -0.0154 0.98774
eu1 -0.27793656 0.06324072 -4.3949 1.31e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Total Sum of Squares: 64.478
Residual Sum of Squares: 61.197
R-Squared: 0.050889
Adj. R-Squared: -0.037401
F-statistic: 10.7592 on 3 and 602 DF, p-value: 6.7744e-07
>

Related

Stargazer blank fields for ergm fields

I get blank fields in my stargazer summary from my ERG model. I tried setting the summary option to both TRUE and FALSE; neither worked.
# HERE IS MY ERGM SUMMARY OUTPUT
> summary(ergm68)
Call:
ergm(formula = net_68 ~ edges + gwb1degree(fixed = T, decay = 0.4) +
b1factor("org", base = 1, levels = "L") + b1factor("org",
levels = "P"), bipartite = T)
Monte Carlo Maximum Likelihood Results:
Estimate Std. Error MCMC % z value Pr(>|z|)
edges -1.2388 1.2296 0 -1.007 0.314
gwb1deg.fixed.0.4 1.0210 1.3756 0 0.742 0.458
b1factor.org.L 1.6768 1.0500 0 1.597 0.110
b1factor.org.P 0.3234 1.0186 0 0.317 0.751
Null Deviance: 95.65 on 69 degrees of freedom
Residual Deviance: 88.26 on 65 degrees of freedom
AIC: 96.26 BIC: 105.2 (Smaller is better. MC Std. Err. = 0.03973)
> stargazer(ergm68, type="text", summary=T, digits=1)
# Stargazer summary set to TRUE
> stargazer(ergm79, type="text", summary=T, digits=1)
===============================================
Dependent variable:
---------------------------
net_79
-----------------------------------------------
edges
gwb1deg.fixed.0.4
b1factor.org.L
b1factor.org.P
-----------------------------------------------
Akaike Inf. Crit. 399.0
Bayesian Inf. Crit. 413.8
===============================================
Note: *p<0.1; **p<0.05; ***p<0.01
>
Stargazer summary set to FALSE
> stargazer(ergm79, type="text", summary=F, digits=1)
===============================================
Dependent variable:
---------------------------
net_79
-----------------------------------------------
edges
gwb1deg.fixed.0.4
b1factor.org.L
b1factor.org.P
-----------------------------------------------
Akaike Inf. Crit. 399.0
Bayesian Inf. Crit. 413.8
===============================================
Note: *p<0.1; **p<0.05; ***p<0.01
>
No luck getting stargazer to generate a complete table from my ERGM model output. I'm not sure if I have missed a setting. It seems to have worked before without any trouble.

Modelling an input that can be used a finite number of times along a time horizon and is also binary

Happy holidays everyone! I finally have some time off to work on my project, and of course I'm stuck as usual lol.
I'm looking for guidance/examples that would let me be able to model the following:
I have an input (let's call it a 'jump') that is binary (0 or 1), and I want it to be usable only once (or possibly n times, where n < the number of time steps) over the entire time horizon. The effect this input has on the system is that it instantaneously increases the velocity of the system by some predetermined amount.
The second effect this input has is that it changes the set of dynamics that progress the system forward in time. (In this case the 'jump' changes the dynamics from a driving system to a flying system.) In the future there will also be a 'double_jump' that does not change the dynamics but still provides an instantaneous change in velocity. Currently I'm trying to get the first part down, then I'll attempt to implement that. I just want to keep my bigger vision clear to anyone reading this.
Another part that is for the future of the model: I'd like the system to interact with a ball object by, say, using if2/if3, so that if the system's position is within some radius of the ball's position, an impulse is imparted on the ball, dependent on things like the velocities of the ball and the system. To do this properly I imagine I need a way to define a time step that happens at the interaction point, which I believe means I'll need some sort of variable time vector. Any examples for these would be much appreciated.
Okay, so points 2 and 3 are just here for context, not really the main point of this question. I think I'll be able to figure them out once I can wrap my head around implementing this weird 'jump' input.
My current plan is to have an MV called 'u_jump' that is a non-integer. Then have a Var called 'jump_hist' that is essentially the 'integral' of 'u_jump', and I give jump_hist an upper bound of 1. What I do right now is just pretend this u_jump is an acceleration on the system by adding to the velocity.dt() equation. This works in theory but doesn't really represent the system I'm trying to control perfectly.
What would be the best example for me to learn some lessons from for implementing this? And another question: is there a way to make the IPOPT solver work for integer solutions by giving the integers a tolerance, somewhat like the MINLP solver option 'minlp_integer_tol 0.05'? That way I could still get the speed of IPOPT but with the ability to incorporate integer-style variables/equations like if3(), etc. If not, are there ways I can approach the integer solution with a non-integer solution such that, when I implement the control on a real system, the difference between the non-integer and the integer solution is within some acceptable tolerance to consider it a disturbance that a feedback controller could mitigate?
Kind of a mouthful I know, my questions always are haha. Hopefully this is helpful for others in the future! Here's my code currently. Let me know if the code gives you issues or anything I could clear up in the question.
Oh and one final note, this is currently setup as a 2D flying system. I've removed the driving dynamics (the c splines commented out) for simplicity of implementing this 'jump' input.
Happy Holidays again everyone!
import numpy as np
import matplotlib.pyplot as plt
import math
import gekko
from gekko import GEKKO
import csv
from mpl_toolkits.mplot3d import Axes3D
class Optimizer():
def __init__(self):
#################GROUND DRIVING OPTIMIZER SETTINGS##############
self.d = GEKKO(remote=False) # Driving on ground optimizer
ntd = 21
self.d.time = np.linspace(0, 1, ntd) # Time vector normalized 0-1
# options
self.d.options.NODES = 3
self.d.options.SOLVER = 3
self.d.options.IMODE = 6# MPC mode
self.d.options.MAX_ITER = 800
self.d.options.MV_TYPE = 0
self.d.options.DIAGLEVEL = 0
# self.d.options.OTOL = 1
# final time for driving optimizer
self.tf = self.d.FV(value=1.0,lb=0.1,ub=10.0, name='tf')
# allow gekko to change the tf value
self.tf.STATUS = 1
# time variable
self.t = self.d.Var(value=0)
self.d.Equation(self.t.dt()/self.tf == 1)
# Acceleration variable
self.a = self.d.MV(fixed_initial=False, lb = 0, ub = 1, name='a')
self.a.STATUS = 1
# Jumping integer variables and equations
self.u_jump = self.d.MV(fixed_initial=False, lb=0, ub=1, integer=True)
self.u_jump.STATUS = 1
self.jump_hist = self.d.Var(value=0, name='jump_hist', lb=0, ub=1)
self.d.Equation(self.jump_hist.dt() == self.u_jump*(ntd-1))
# self.d.Equation(1.0 >= self.jump_hist)
# pitch input throttle (rotation of system)
self.u_p = self.d.MV(fixed_initial=False, lb = -1, ub=1)
self.u_p.STATUS = 1
# Final variable that allows you to set an objective function considering only final state
self.p_d = np.zeros(ntd)
self.p_d[-1] = 1.0
self.final = self.d.Param(value = self.p_d, name='final')
# Model constants and parameters
self.Dp = self.d.Const(value = 2.7982, name='D_pitch')
self.Tp = self.d.Const(value = 12.146, name='T_pitch')
self.pi = self.d.Const(value = 3.14159, name='pi')
self.g = self.d.Const(value = 0, name='Fg')
self.jump_magnitude = self.d.Param(value = 3000, name = 'jump_mag')
def optimize2D(self, si, sf, vi, vf, ri, omegai): #these are 1x2 vectors s or v [x, z]
# variables and intial conditions
# Position in 2d
self.sx = self.d.Var(value=si[0], lb=-4096, ub=4096, name='x') #x position
# self.sy = self.d.Var(value=si[1], lb=-5120, ub=5120, name='y') #y position
self.sz = self.d.Var(value = si[1])
# Pitch rotation and angular velocity
self.pitch = self.d.Var(value = ri, name='pitch', lb=-1*self.pi, ub=self.pi)
self.pitch_dot = self.d.Var(fixed_initial=False, name='pitch_dot')
# Velocity in 2D
self.v_mag = self.d.Var(value=(vi), name='v_mag')
self.vx = self.d.Var(value=np.cos(ri), name='vx') #x velocity
# self.vy = self.d.Var(value=(np.sin(ri) * vi), name='vy') #y velocity
self.vz = self.d.Var(value = (np.sin(ri) * vi), name='vz')
## Non-linear state dependent dynamics described as csplines.
#curvature vs vel as a cubic spline for driving state
cur = np.array([0.0069, 0.00398, 0.00235, 0.001375, 0.0011, 0.00088])
v_cur = np.array([0,500,1000,1500,1750,2300])
v_cur_fine = np.linspace(0,2300,100)
cur_fine = np.interp(v_cur_fine, v_cur, cur)
self.curvature = self.d.Var(name='curvature')
self.d.cspline(self.v_mag, self.curvature, v_cur_fine, cur_fine)
# throttle vs vel as cubic spline for driving state
ba=991.666 #Boost acceleration magnitude
kv = np.array([0, 1410, 2300]) #velocity input
ka = np.array([1600+ba, 0+ba, 0+ba]) #acceleration output
kv_fine = np.linspace(0, 2300, 100) # Higher resolution
ka_fine = np.interp(kv_fine, kv, ka) # Piecewise linear high resolution of ka
self.throttle_acceleration = self.d.Var(fixed_initial=False, name='throttle_accel')
self.d.cspline(self.v_mag, self.throttle_acceleration, kv_fine, ka_fine)
# Differental equations
# Velocity diff eqs
self.d.Equation(self.vx.dt()/self.tf == (self.a*ba * self.d.cos(self.pitch)*self.jump_hist) + (self.a * self.throttle_acceleration * (1-self.jump_hist)) + (self.u_jump * self.jump_magnitude * self.d.cos(self.pitch + np.pi/2)))
self.d.Equation(self.vz.dt()/self.tf == (self.a*ba * self.d.sin(self.pitch)*self.jump_hist) - (self.g * (1-self.jump_hist)) + (self.u_jump * self.jump_magnitude * self.d.sin(self.pitch + np.pi/2)))
self.d.Equation(self.v_mag == self.d.sqrt((self.vx*self.vx) + (self.vz*self.vz)))
self.d.Equation(2300 >= self.v_mag)
# Position diff eqs
self.d.Equation(self.sx.dt()/self.tf == self.vx)
# self.d.Equation(self.sy.dt()/self.tf == self.vy)
self.d.Equation(self.sz.dt()/self.tf == self.vz)
# Orientation diff eqs
self.d.Equation(self.pitch_dot.dt()/self.tf == ((self.Tp * self.u_p) + (self.Dp * self.pitch_dot * (1 - self.d.abs2(self.u_p)))) * self.jump_hist)
self.d.Equation(self.pitch.dt()/self.tf == self.pitch_dot)
# Objective functions
# Final Position Objectives
self.d.Minimize(self.final*1e2*((self.sz-sf[1])**2)) # z final position objective
self.d.Minimize(self.final*1e2*((self.sx-sf[0])**2)) # x final position objective
# Final Velocity Objectives
# self.d.Obj(self.final*1e3*(self.vz-vf[1])**2)
# self.d.Obj(self.final*1e3*(self.vx-vf[0])**2)
# Minimum Time Objective
self.d.Minimize(1e4*self.tf)
#solve
# self.d.solve('http://127.0.0.1') # Solve with local apmonitor server
self.d.open_folder()
self.d.solve(disp=True)
self.ts = np.multiply(self.d.time, self.tf.value[0])
return self.a, self.u_p, self.ts
def getTrajectoryData(self):
return [self.ts, self.sx, self.sz, self.vx, self.vz, self.pitch, self.pitch_dot]
def getInputData(self):
return [self.ts, self.a]
# Main Code
opt = Optimizer()
s_ti = [0,0]
v_ti = 0
s_tf = [1000,500]
v_tf = [00.00, 00.0]
r_ti = 0 # initial orientation of the car
omega_ti = 0.0 # initial angular velocity of car
acceleration, turning, t_star = opt.optimize2D(s_ti, s_tf, v_ti, v_tf, r_ti, omega_ti)
# Printing stuff
# print('u', acceleration.value)
# print('tf', opt.tf.value)
# print('tf', opt.tf.value[0])
# print('u jump', opt.jump)
# for i in opt.u_jump: print(i.value)
print('u_jump', opt.u_jump.value)
print('jump his', opt.jump_hist.value)
print('v_mag', opt.v_mag.value)
print('a', opt.a.value)
# Plotting stuff
ts = opt.d.time * opt.tf.value[0]
t_max = opt.tf.value[0]
x_max = np.max(opt.sx.value)
vx_max = np.max(opt.vx.value)
z_max = np.max(opt.sz.value)
vz_max = np.max(opt.vz.value)
# plot results
fig = plt.figure(2)
ax = fig.add_subplot(111, projection='3d')
# plt.subplot(2, 1, 1)
Axes3D.plot(ax, opt.sx.value, ts, opt.sz.value, c='r', marker ='o')
plt.ylim(0, t_max)
plt.xlim(0, x_max)
plt.ylabel('time')
plt.xlabel('Position x')
ax.set_zlabel('position z')
n=5 #num plots
fig = plt.figure(3)
ax = fig.add_subplot(111, projection='3d')
# plt.subplot(2, 1, 1)
Axes3D.plot(ax, opt.vx.value, ts, opt.vz.value, c='r', marker ='o')
plt.ylim(0, t_max)
plt.xlim(-1*vx_max, vx_max)
# plt.zlim(0, 2000)
plt.ylabel('time')
plt.xlabel('Velocity x')
ax.set_zlabel('vz')
plt.figure(1)
plt.subplot(n,1,1)
plt.plot(ts, opt.a, 'r-')
plt.ylabel('acceleration')
plt.subplot(n,1,2)
plt.plot(ts, np.multiply(opt.pitch, 1/math.pi), 'r-')
plt.ylabel('pitch orientation')
plt.subplot(n, 1, 3)
plt.plot(ts, opt.v_mag, 'b-')
plt.ylabel('vmag')
plt.subplot(n, 1, 4)
plt.plot(ts, opt.u_p, 'b-')
plt.ylabel('u_p')
plt.subplot(n, 1, 5)
plt.plot(ts, opt.u_jump, 'b-')
plt.plot(ts, opt.jump_hist, 'r-')
plt.ylabel('jump(b), jump hist(r)')
plt.show()
print('asdf')
One thing to try is to solve with IPOPT for initialization and then with APOPT to get the integer solution. Another thing to try is to use an MPCC for a switching condition that does not rely on a binary variable. I've found the MPCC form to be much less reliable than a binary-variable switching condition because the solver often gets stuck at the saddle point. However, integer solutions often take much longer to solve.
self.d.options.SOLVER=3
self.d.solve(disp=True)
self.d.options.TIME_SHIFT=0
self.d.options.SOLVER=1
self.d.solve(disp=True)
Here is the solution with IPOPT:
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is 506284.8987787149
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 7.4613000000000005 sec
Objective : 506284.8987787149
Successful solution
---------------------------------------------------
The integer solution is obtained with APOPT.
--------- APM Model Size ------------
Variable time shift OFF
Number of state variables: 1286
Number of total equations: - 1180
Number of slack variables: - 40
---------------------------------------
Degrees of freedom : 66
----------------------------------------------
Dynamic Control with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 2.72 NLPi: 92 Dpth: 0 Lvs: 3 Obj: 5.07E+05 Gap: NaN
Iter: 2 I: -1 Tm: 0.53 NLPi: 17 Dpth: 1 Lvs: 2 Obj: 5.07E+05 Gap: NaN
Iter: 3 I: -9 Tm: 47.59 NLPi: 801 Dpth: 1 Lvs: 1 Obj: 5.07E+05 Gap: NaN
Iter: 4 I: 0 Tm: 2.26 NLPi: 35 Dpth: 1 Lvs: 3 Obj: 5.08E+05 Gap: NaN
--Integer Solution: 2.54E+07 Lowest Leaf: 5.08E+05 Gap: 1.92E+00
Iter: 5 I: 0 Tm: 3.56 NLPi: 32 Dpth: 2 Lvs: 2 Obj: 2.54E+07 Gap: 1.92E+00
Iter: 6 I: -9 Tm: 54.65 NLPi: 801 Dpth: 2 Lvs: 1 Obj: 5.08E+05 Gap: 1.92E+00
Iter: 7 I: -1 Tm: 2.18 NLPi: 83 Dpth: 2 Lvs: 0 Obj: 5.08E+05 Gap: 1.92E+00
No additional trial points, returning the best integer solution
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 113.5842 sec
Objective : 2.5419931399165962E+7
Successful solution
---------------------------------------------------
APOPT chooses not to jump to minimize the final objective. You may need to add a hard constraint that the vsum() of u_jump is 1. There are additional tutorials on MPCC and integer / binary forms of switching conditions in the Optimization course.
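A minimal, self-contained sketch of that kind of hard constraint (this is not the poster's model; it assumes GEKKO's vsum() totals a variable over the collocation/time points, and uses toy dynamics and a toy objective purely for illustration):
import numpy as np
from gekko import GEKKO

# Toy illustration only: limit a binary input to fire at most once over the horizon.
m = GEKKO(remote=False)
m.time = np.linspace(0, 1, 11)
u = m.MV(value=0, lb=0, ub=1, integer=True)  # binary "jump" input
u.STATUS = 1
v = m.Var(value=0)
m.Equation(v.dt() == 10*u)       # toy dynamics: u bumps the state v
uses = m.vsum(u)                 # assumed: vsum() sums u over all time points
m.Equation(uses <= 1)            # hard limit: at most one jump (use == 1 to force exactly one)
m.Minimize((v - 1)**2)           # toy objective
m.options.IMODE = 6
m.options.SOLVER = 1             # APOPT handles the integer decision
m.solve(disp=False)
The same idea applied to the question's model would constrain self.u_jump rather than the toy u above.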
Thanks for sharing your application and keep us updated!

How to determine probability of 0 to N events occurring given probability of each of those N events?

First time posting here, so if I make a mistake with something let me know and I'd be more than happy to fix it!
Given N events, each of which have an individual probability (from 0 to 100%) of occurring, I'd like to determine the probability of 0 to N of those events occurring together.
For example, if I have events E1, E2, E3, ..., EN, where the individual probability of a specific event occurring is as follows:
E1 = 30% probability of occurring
E2 = 40% probability of occurring
E3 = 50% probability of occurring
...
EN = x% probability of occurring
I'd like to know the probability of having:
none of these events occurring
any 1 of these events occurring
any 2 of these events occurring
any 3 of these events occurring
...
all N of these events occurring
I understand that having 0 events occurring is (1-E1)(1-E2)...(1-EN) and that having all N events occurring is E1*E2*...*EN (e.g., for the three events above, P(none) = 0.7*0.6*0.5 = 0.21 and P(all three) = 0.3*0.4*0.5 = 0.06). However, I do not know how to calculate the other possibilities (1 to N-1 events occurring).
I have been looking for some recursive algorithm (binomial compound distribution) that could solve this but I have not found any explicit formula that does this. Wondering if any of you guys could help!
Thanks in advance!
EDIT: The events are indeed independent.
Sounds like the Poisson binomial distribution (see the Wikipedia article).
There's an explicit recursive formula, but beware of numerical stability:
$$\Pr(K = k) = \frac{1}{k} \sum_{i=1}^{k} (-1)^{i-1} \Pr(K = k-i)\, T(i), \qquad \text{where } T(i) = \sum_{j=1}^{N} \left(\frac{p_j}{1 - p_j}\right)^{i},$$
with $\Pr(K = 0) = \prod_{j=1}^{N} (1 - p_j)$.
Something like the following recursive program should work.
function ans = probability_vector(probabilities)
    if length(probabilities) == 0
        % No events can happen.
        ans = [1];
    elseif length(probabilities) == 1
        % 0 or 1 events can happen.
        ans = [1 - probabilities(1), probabilities(1)];
    else
        half = ceil(length(probabilities)/2);
        ans_half1 = probability_vector(probabilities(1:half));
        ans_half2 = probability_vector(probabilities(half+1:end));
        ans = conv(ans_half1, ans_half2);
    end
end
And if p is a probability vector, then p(i+1) is the probability of exactly i of the events happening.
See http://matlabtricks.com/post-3/the-basics-of-convolution for an explanation of the magic conv operator that does the meat of the work.
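For anyone who prefers Python, here is a minimal NumPy sketch of the same convolution idea (illustrative, not the answerer's code; the function name just mirrors the MATLAB one above): convolving the two-point distributions [1-p, p] together yields the whole 0..N distribution.
import numpy as np

def probability_vector(probabilities):
    ans = np.array([1.0])
    for p in probabilities:
        ans = np.convolve(ans, [1.0 - p, p])
    return ans  # ans[i] is the probability of exactly i events occurring

print(probability_vector([0.3, 0.4, 0.5]))  # [0.21 0.44 0.29 0.06]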
You need to compute your own version of Pascal's triangle, with probabilities (instead of counts) in each location. Row 0 will be the single figure 1.00; row 1 consists of two values, P(E1) and 1-P(E1). Below that, in row k, each position is P(Ek)[above-right entry] + (1-P(Ek))[above-left entry]. I recommend a lower-triangular matrix for this, something like:
1.00
0.30 0.70
0.12 0.46 0.42 # These are 0.3*0.4 | 0.3*0.6 + 0.7*0.4 | 0.7*0.6
0.06 0.29 0.44 0.21 # 0.12*0.5 | 0.12*0.5 + 0.46*0.5 | ...
See how it works? In array / matrix notation for a matrix M, given event probabilities in vector P, this looks something like
M[k, i] = P[k] * M[k-1, i] + (1-P[k]) * M[k-1, i-1]
(entries of the previous row that fall outside it are treated as 0)
The above is a nice recursive definition. Note that my earlier "above-right" reference in the lower-matrix notation is simply a row above; above-left is exactly that: row k-1, column i-1.
When you're done, the bottom row of the matrix will be the probabilities of getting N, N-1, N-2, ... 0 of the events. If you want these probabilities in the opposite order, then simply switch the coefficients P[k] and 1-P[k]
Does that get you moving toward a solution?
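A minimal Python sketch of that triangle-of-probabilities idea (the function name is illustrative, not from the answer above); it reproduces the worked rows shown earlier:
def exactly_k_probabilities(p):
    # Returns [P(N events), P(N-1 events), ..., P(0 events)] for independent
    # events with probabilities p[0..N-1], built row by row as described above.
    row = [1.0]
    for pk in p:
        new_row = [0.0] * (len(row) + 1)
        for i, val in enumerate(row):
            new_row[i] += pk * val            # event occurs: stays in the same column
            new_row[i + 1] += (1 - pk) * val  # event does not occur: shifts one column
        row = new_row
    return row

print(exactly_k_probabilities([0.3, 0.4, 0.5]))  # [0.06, 0.29, 0.44, 0.21]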
After tons of research and some help from the answers here, I've come up with the following code:
function [ prob_numSites ] = probability_activationSite( prob_distribution_site )
N = length(prob_distribution_site); % number of events
notProb = 1 - prob_distribution_site; % find probability of no occurrence
syms x; % create symbolic variable
prob_number = 1; % initializing prob_number to 1
for i = 1:N
prob_number = prob_number*(prob_distribution_site(i)*x + notProb(i));
end
prob_number_polynomial = expand(prob_number); % expands the function into a polynomial
revProb_numSites = coeffs(prob_number_polynomial); % returns the coefficients of the above polynomial (ie probability of 0 to N events, where first coefficient is N events occurring, last coefficient is 0 events occurring)
prob_numSites = fliplr(revProb_numSites); % reverses order of coefficients
This takes in the individual probabilities of the N events occurring and returns an array of the probabilities of 0 to N events occurring.
(This answer helped a lot).
None of these answers worked for me or was understandable to me, so I computed it and implemented it myself in Python:
import itertools
import numpy as np

def combin(n, k):
    # binomial coefficient "n choose k" using integer arithmetic
    if k > n//2:
        k = n-k
    x = 1
    y = 1
    i = n-k+1
    while i <= n:
        x = (x*i)//y
        y += 1
        i += 1
    return x

# proba is the list of the N individual event probabilities
# (each may differ from the others); define it before running.
N = len(proba)
sums = [0.0]*(N+1)
# raw sums over all i-subsets of the product of their probabilities
for i in range(N, 0, -1):
    for j in itertools.combinations(proba, i):
        sums[i] += np.prod(j)
# inclusion-exclusion: turn the raw sums into P(exactly i events)
for i in range(N, 0, -1):
    for j in range(i+1, N+1):
        icomb = combin(j, i)
        sums[i] -= icomb*sums[j]
The math is not super simple. Write $S_i$ for the sum, over all unordered $i$-subsets $\{j_1,\dots,j_i\}$ of the $n$ event indices, of the products $p_{j_1} \cdots p_{j_i}$ (these are the raw sums computed in the first loop above). Then the probability $Co(i, P)$ of exactly $i$ events occurring, given the Bernoulli probabilities $P = \{p_1, \dots, p_n\}$ of the individual events, satisfies
$$Co(i, P) = S_i - \sum_{u=i+1}^{n} \binom{u}{i}\, Co(u, P),$$
which is exactly the correction applied in the second loop.

Pseudo-Random Variable

I have a variable, between 0 and 1, which should dictate the likelihood that a second variable, a random number between 0 and 1, is greater than 0.5. In other words, if I were to generate the second variable 1000 times, the average should be approximately equal to the first variable's value. How do I write code that does this?
Oh, and the second variable should always be capable of producing either 0 or 1 in any condition, just more or less likely depending on the value of the first variable. Here is a link to a graph which models approximately how I would like the program to behave. Each equation represents a separate value for the first variable.
You have a variable p and you are looking for a mapping function f(x) that maps random rolls between x in [0, 1] to the same interval [0, 1] such that the expected value, i.e. the average of all rolls, is p.
You have chosen the function prototype
f(x) = pow(x, c)
where c must be chosen appropriately. If x is uniformly distributed in [0, 1], the average value is:
int(f(x) dx, [0, 1]) == p
With the integral:
int(pow(x, c) dx) == pow(x, c + 1) / (c + 1) + K
Evaluating the integral from 0 to 1 gives 1/(c + 1) == p, and hence:
c = 1/p - 1
A different approach is to make p the median value of the distribution, such that half of the rolls fall below p, the other half above p. This yields a different distribution. (I am aware that you didn't ask for that.) Now, we have to satisfy the condition:
f(0.5) == pow(0.5, c) == p
which yields:
c = log(p) / log(0.5)
With the current function prototype, you cannot satisfy both requirements. Your function is also asymmetric (f(x, p) != f(1-x, 1-p)).
Python functions below:
import math
import random

def medianrand(p):
    """Random number between 0 and 1 whose median is p"""
    c = math.log(p) / math.log(0.5)
    return math.pow(random.random(), c)

def averagerand(p):
    """Random number between 0 and 1 whose expected value is p"""
    c = 1/p - 1
    return math.pow(random.random(), c)
You can do this by using a dummy. First set the first variable to a value between 0 and 1. Then create a random number in the dummy between 0 and 1. If this dummy is bigger than the first variable, you generate a random number between 0 and 0.5, and otherwise you generate a number between 0.5 and 1.
In pseudocode:
real a = 0.7
real total = 0.0
for i between 0 and 1000 begin
real dummy = rand(0,1)
real b
if dummy > a then
b = rand(0,0.5)
else
b = rand(0.5,1)
end if
total = total + b
end for
real avg = total / 1000
Please note that this algorithm will generate average values between 0.25 and 0.75. For a = 1 it will only generate random values between 0.5 and 1, which should average to 0.75. For a=0 it will generate only random numbers between 0 and 0.5, which should average to 0.25.
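A quick Python simulation of that pseudocode (illustrative only; the helper name draw is mine) confirms the 0.25-0.75 range: with a = 0 every draw comes from [0, 0.5), with a = 1 every draw comes from [0.5, 1).
import random

def draw(a):
    dummy = random.random()
    if dummy > a:
        return random.uniform(0.0, 0.5)
    else:
        return random.uniform(0.5, 1.0)

for a in (0.0, 0.5, 1.0):
    avg = sum(draw(a) for _ in range(100000)) / 100000.0
    print(a, round(avg, 3))  # roughly 0.25, 0.5, 0.75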
I've made a sort of pseudo-solution to this problem, which I think is acceptable.
Here is the algorithm I made:
a = 0.2 # variable one
b = 0 # variable two
b = random.random()
b = b^(1/(2^(4*a-1)))
It doesn't actually produce the average results that I wanted, but it's close enough for my purposes.
Edit: Here's a graph I made that consists of a large amount of datapoints I generated with a python script using this algorithm;
import random

mod = 6
div = 100
for z in range(div):
    s = 0
    for i in range(100000):
        a = (z+1)/float(div)  # variable one
        b = random.random()   # variable two
        c = b**(1/(2**((mod*a*2)-mod)))
        s += c
    print(str((z+1)/float(div)) + "\t" + str(round(s/100000.0, 3)))
Each point in the table is the result of 100000 randomly generated points from the algorithm; their x positions being the a value given, and their y positions being their average. Ideally they would fit to a straight line of y = x, but as you can see they fit closer to an arctan equation. I'm trying to mess around with the algorithm so that the averages fit the line, but I haven't had much luck as of yet.

Find the smallest regular number that is not less than N

Regular numbers are numbers that evenly divide powers of 60. As an example, 60^2 = 3600 = 48 × 75, so both 48 and 75 are divisors of a power of 60. Thus, they are also regular numbers.
This is an extension of rounding up to the next power of two.
I have an integer value N which may contain large prime factors and I want to round it up to a number composed of only small prime factors (2, 3 and 5)
Examples:
f(18) == 18 == 2^1 * 3^2
f(19) == 20 == 2^2 * 5^1
f(257) == 270 == 2^1 * 3^3 * 5^1
What would be an efficient way to find the smallest number satisfying this requirement?
The values involved may be large, so I would like to avoid enumerating all regular numbers starting from 1 or maintaining an array of all possible values.
One can produce arbitrarily thin a slice of the Hamming sequence around the n-th member in time ~ n^(2/3) by direct enumeration of triples (i,j,k) such that N = 2^i * 3^j * 5^k.
The algorithm works from log2(N) = i+j*log2(3)+k*log2(5); enumerates all possible ks and for each, all possible js, finds the top i and thus the triple (k,j,i) and keeps it in a "band" if inside the given "width" below the given high logarithmic top value (when width < 1 there can be at most one such i) then sorts them by their logarithms.
WP says that n ~ (log N)^3, i.e. run time ~ (log N)^2. Here we don't care for the exact position of the found triple in the sequence, so all the count calculations from the original code can be thrown away:
slice hi w = sortBy (compare `on` fst) b where   -- hi>log2(N) is a top value
  lb5=logBase 2 5 ; lb3=logBase 2 3              -- w<1 (NB!) is log2(width)
  b = concat                                     -- the slice
        [ [ (r,(i,j,k)) | frac < w ]             -- store it, if inside width
          | k <- [ 0 .. floor ( hi /lb5) ], let p = fromIntegral k*lb5,
            j <- [ 0 .. floor ((hi-p)/lb3) ], let q = fromIntegral j*lb3 + p,
            let (i,frac)=properFraction(hi-q) ; r = hi - frac ]   -- r = i + q
  -- properFraction 12.7 == (12, 0.7)
-- update: in pseudocode:
def slice(hi, w):
    lb5, lb3 = logBase(2, 5), logBase(2, 3)   -- logs base 2 of 5 and 3
    for k from 0 step 1 to floor(hi/lb5) inclusive:
        p = k*lb5
        for j from 0 step 1 to floor((hi-p)/lb3) inclusive:
            q = j*lb3 + p
            i = floor(hi-q)
            frac = hi-q-i       -- frac < 1 , always
            r = hi - frac       -- r == i + q
            if frac < w:
                place (r,(i,j,k)) into the output array
    sort the output array's entries by their "r" component
    in ascending order, and return thus sorted array
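Here is a direct Python rendering of that pseudocode, as a minimal sketch (the function name slice_band is mine; hi is the log2 top value and w the log2 width of the band, as described above):
import math

def slice_band(hi, w):
    lb5, lb3 = math.log2(5), math.log2(3)
    band = []
    for k in range(int(math.floor(hi / lb5)) + 1):
        p = k * lb5
        for j in range(int(math.floor((hi - p) / lb3)) + 1):
            q = j * lb3 + p
            i = int(math.floor(hi - q))
            frac = hi - q - i          # always < 1
            r = hi - frac              # r == i + q
            if frac < w:
                band.append((r, (i, j, k)))
    band.sort()                        # ascending by the log2 value r
    return band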
Having enumerated the triples in the slice, it is a simple matter of sorting and searching, taking practically O(1) time (for arbitrarily thin a slice) to find the first triple above N. Well, actually, for constant width (logarithmic), the amount of numbers in the slice (members of the "upper crust" in the (i,j,k)-space below the log(N) plane) is again m ~ n^2/3 ~ (log N)^2 and sorting takes m log m time (so that searching, even linear, takes ~ m run time then). But the width can be made smaller for bigger Ns, following some empirical observations; and constant factors for the enumeration of triples are much higher than for the subsequent sorting anyway.
Even with constant (logarithmic) width it runs very fast, calculating the 1,000,000th value in the Hamming sequence instantly and the billionth in 0.05s.
The original idea of "top band of triples" is due to Louis Klauder, as cited in my post on a DDJ blogs discussion back in 2008.
update: as noted by GordonBGood in the comments, there's no need for the whole band but rather just about one or two values above and below the target. The algorithm is easily amended to that effect. The input should also be tested for being a Hamming number itself before proceeding with the algorithm, to avoid round-off issues with double precision. There are no round-off issues comparing the logarithms of the Hamming numbers known in advance to be different (though going up to a trillionth entry in the sequence uses about 14 significant digits in logarithm values, leaving only 1-2 digits to spare, so the situation may in fact be turning iffy there; but for 1-billionth we only need 11 significant digits).
update2: turns out the Double precision for logarithms limits this to numbers below about 20,000 to 40,000 decimal digits (i.e. 10 trillionth to 100 trillionth Hamming number). If there's a real need for this for such big numbers, the algorithm can be switched back to working with the Integer values themselves instead of their logarithms, which will be slower.
Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.
Calculate the next-largest power of 2 after p, call this R. N = p.
Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.
After any of 3a-c, go to step 4.
a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.
b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.
c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.
Go to step 2, with p = m.
I've omitted the bookkeeping to do regarding keeping track of N but that's fairly straightforward I take it.
Edit: Forgot 6, thanks ypercube.
Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.
Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.
Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.
Here's a solution in Python, based on Will Ness's answer but taking some shortcuts and using pure integer math to avoid running into log-space numerical accuracy errors:
import math
def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    # Check if it's already a power of 2 (or a non-integer)
    try:
        if not (target & (target-1)):
            return target
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))

    if target <= 6:
        return target

    match = float('inf')  # Anything found will be smaller
    p5 = 1
    while p5 < target:
        p35 = p5
        while p35 < target:
            # Ceiling integer division, avoiding conversion to float
            # (quotient = ceil(target / p35))
            # From https://stackoverflow.com/a/17511341/125507
            quotient = -(-target // p35)
            # Quickly find next power of 2 >= quotient
            # See https://stackoverflow.com/a/19164783/125507
            try:
                p2 = 2**((quotient - 1).bit_length())
            except AttributeError:
                # Fallback for Python <2.7
                p2 = 2**(len(bin(quotient - 1)) - 2)

            N = p2 * p35
            if N == target:
                return N
            elif N < match:
                match = N
            p35 *= 3
            if p35 == target:
                return p35
        if p35 < match:
            match = p35
        p5 *= 5
        if p5 == target:
            return p5
    if p5 < match:
        match = p5
    return match
In English: iterate through every combination of 5s and 3s, quickly finding the next power of 2 >= target for each pair and keeping the smallest result. (It's a waste of time to iterate through every possible multiple of 2 if only one of them can be correct). It also returns early if it ever finds that the target is already a regular number, though this is not strictly necessary.
I've tested it pretty thoroughly, testing every integer from 0 to 51200000 and comparing to the list on OEIS http://oeis.org/A051037, as well as many large numbers that are ±1 from regular numbers, etc. It's now available in SciPy as fftpack.helper.next_fast_len, to find optimal sizes for FFTs (source code).
I'm not sure if the log method is faster because I couldn't get it to work reliably enough to test it. I think it has a similar number of operations, though? I'm not sure, but this is reasonably fast. Takes < 3 seconds (or 0.7 seconds with gmpy) to calculate that 2^142 × 3^80 × 5^444 is the next regular number above 2^2 × 3^454 × 5^249 + 1 (the 100,000,000th regular number, which has 392 digits).
You want to find the smallest number m that is m >= N and m = 2^i * 3^j * 5^k where all i,j,k >= 0.
Taking logarithms the equations can be rewritten as:
log m >= log N
log m = i*log2 + j*log3 + k*log5
You can calculate log2, log3, log5 and logN to sufficiently high accuracy (depending on the size of N). Then this problem looks like an Integer Linear Programming problem, and you could try to solve it using one of the known algorithms for this NP-hard problem.
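For concreteness, a sketch of what that integer program would look like (this is just the problem statement, not a solver; minimizing m is the same as minimizing log m):
$$\min_{i,\,j,\,k \,\in\, \mathbb{Z}_{\ge 0}} \; i\log 2 + j\log 3 + k\log 5 \quad \text{subject to} \quad i\log 2 + j\log 3 + k\log 5 \;\ge\; \log N.$$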
EDITED/CORRECTED: Corrected the codes to pass the scipy tests:
Here's an answer based on endolith's answer, but it almost eliminates long multi-precision integer calculations by using float64 logarithm representations to do a base comparison to find triple values that pass the criteria. It only resorts to full-precision comparisons when there is a chance that the logarithm value may not be accurate enough, which occurs only when the target is very close to either the previous or the next regular number:
import math
def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    if target < 2: return ( 0, 0, 0 )
    log2hi = 0
    mant = 0
    # Check if it's already a power of 2 (or a non-integer)
    try:
        mant = target & (target - 1)
        target = int(target)  # take care of case where not int/float/decimal
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))
        mant = target & (target - 1)
    # Quickly find next power of 2 >= target
    # See https://stackoverflow.com/a/19164783/125507
    try:
        log2hi = target.bit_length()
    except AttributeError:
        # Fallback for Python <2.7
        log2hi = len(bin(target)) - 2
    # exit if this is a power of two already...
    if not mant: return ( log2hi - 1, 0, 0 )
    # take care of trivial cases...
    if target < 9:
        if target < 4: return ( 0, 1, 0 )
        elif target < 6: return ( 0, 0, 1 )
        elif target < 7: return ( 1, 1, 0 )
        else: return ( 3, 0, 0 )
    # find log of target, which may exceed the float64 limit...
    if log2hi < 53: mant = target << (53 - log2hi)
    else: mant = target >> (log2hi - 53)
    log2target = log2hi + math.log2(float(mant) / (1 << 53))
    # log2 constants
    log2of2 = 1.0; log2of3 = math.log2(3); log2of5 = math.log2(5)
    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target**3 + 2 * fctr)**(1/3)  # for up to 2 numbers higher
    btm = 2 * log2target - top  # or up to 2 numbers lower
    match = log2hi  # Anything found will be smaller
    result = ( log2hi, 0, 0 )  # placeholder for eventual matches
    count = 0  # only used for debugging counting band
    fives = 0; fiveslmt = int(math.ceil(top / log2of5))
    while fives < fiveslmt:
        log2p = top - fives * log2of5
        threes = 0; threeslmt = int(math.ceil(log2p / log2of3))
        while threes < threeslmt:
            log2q = log2p - threes * log2of3
            twos = int(math.floor(log2q)); log2this = top - log2q + twos
            if log2this >= btm: count += 1  # only used for counting band
            if log2this >= btm and log2this < match:
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (2**twos * 3**threes * 5**fives) >= target:
                    match = log2this; result = ( twos, threes, fives )
            threes += 1
        fives += 1
    return result

print(next_regular(2**2 * 3**454 * 5**249 + 1))  # prints (142, 80, 444)
Since most long multi-precision calculations have been eliminated, gmpy isn't needed, and on IDEOne the above code takes 0.11 seconds instead of 0.48 seconds for endolith's solution to find the next regular number greater than the 100 millionth one as shown; it takes 0.49 seconds instead of 5.48 seconds to find the next regular number past the billionth (next one is (761,572,489) past (1334,335,404) + 1), and the difference will get even larger as the range goes up as the multi-precision calculations get increasingly longer for the endolith version compared to almost none here. Thus, this version could calculate the next regular number from the trillionth in the sequence in about 50 seconds on IDEOne, where it would likely take over an hour with the endolith version.
The English description of the algorithm is almost the same as for the endolith version, differing as follows:
1) calculates the float log estimation of the argument target value (we can't use the built-in log function directly as the range may be much too large for representation as a 64-bit float),
2) compares the log representation values in determining qualifying values inside an estimated range above and below the target value of only about two or three numbers (depending on round-off),
3) compare multi-precision values only if within the above defined narrow band,
4) outputs the triple indices rather than the full long multi-precision integer (would be about 840 decimal digits for the one past the billionth, ten times that for the trillionth), which can then easily be converted to the long multi-precision value if required.
This algorithm uses almost no memory other than for the potentially very large multi-precision integer target value, the intermediate evaluation comparison values of about the same size, and the output expansion of the triples if required. This algorithm is an improvement over the endolith version in that it successfully uses the logarithm values for most comparisons in spite of their lack of precision, and that it narrows the band of compared numbers to just a few.
This algorithm will work for argument ranges somewhat above ten trillion (a few minutes' calculation time at IDEOne rates), beyond which it will no longer be correct due to lack of precision in the log representation values, as per @WillNess's discussion; in order to fix this, we can change the log representation to a "roll-your-own" logarithm representation consisting of a fixed-length integer (124 bits for about double the exponent range, good for targets of over a hundred thousand digits if one is willing to wait). This will be a little slower, since the smallish multi-precision integer operations are slower than float64 operations, but not that much slower since the size is limited (maybe a factor of three or so).
Now, none of these Python implementations (without using C or Cython or PyPy or the like) is particularly fast, as they are about a hundred times slower than implementations in a compiled language. For reference's sake, here is a Haskell version:
{-# OPTIONS_GHC -O3 #-}
import Data.Word
import Data.Bits
nextRegular :: Integer -> ( Word32, Word32, Word32 )
nextRegular target
| target < 2 = ( 0, 0, 0 )
| target .&. (target - 1) == 0 = ( fromIntegral lg2hi - 1, 0, 0 )
| target < 9 = case target of
3 -> ( 0, 1, 0 )
5 -> ( 0, 0, 1 )
6 -> ( 1, 1, 0 )
_ -> ( 3, 0, 0 )
| otherwise = match
where
lg3 = logBase 2 3 :: Double; lg5 = logBase 2 5 :: Double
lg2hi = let cntplcs v cnt =
let nv = v `shiftR` 31 in
if nv <= 0 then
let cntbts x c =
if x <= 0 then c else
case c + 1 of
nc -> nc `seq` cntbts (x `shiftR` 1) nc in
cntbts (fromIntegral v :: Word32) cnt
else case cnt + 31 of ncnt -> ncnt `seq` cntplcs nv ncnt
in cntplcs target 0
lg2tgt = let mant = if lg2hi <= 53 then target `shiftL` (53 - lg2hi)
else target `shiftR` (lg2hi - 53)
in fromIntegral lg2hi +
logBase 2 (fromIntegral mant / 2^53 :: Double)
lg2top = (lg2tgt^3 + 2 * 6 * lg3 * lg5)**(1/3) -- for 2 numbers or so higher
lg2btm = 2* lg2tgt - lg2top -- or two numbers or so lower
match =
let klmt = floor (lg2top / lg5)
loopk k mtchlgk mtchtplk =
if k > klmt then mtchtplk else
let p = lg2top - fromIntegral k * lg5
jlmt = fromIntegral $ floor (p / lg3)
loopj j mtchlgj mtchtplj =
if j > jlmt then loopk (k + 1) mtchlgj mtchtplj else
let q = p - fromIntegral j * lg3
( i, frac ) = properFraction q; r = lg2top - frac
( nmtchlg, nmtchtpl ) =
if r < lg2btm || r >= mtchlgj then
( mtchlgj, mtchtplj ) else
if 2^i * 3^j * 5^k >= target then
( r, ( i, j, k ) ) else ( mtchlgj, mtchtplj )
in nmtchlg `seq` nmtchtpl `seq` loopj (j + 1) nmtchlg nmtchtpl
in loopj 0 mtchlgk mtchtplk
in loopk 0 (fromIntegral lg2hi) ( fromIntegral lg2hi, 0, 0 )
trival :: ( Word32, Word32, Word32 ) -> Integer
trival (i,j,k) = 2^i * 3^j * 5^k
main = putStrLn $ show $ nextRegular $ (trival (1334,335,404)) + 1 -- (1126,16930,40)
This code calculates the next regular number following the billionth in too small a time to be measured and following the trillionth in 0.69 seconds on IDEOne (and potentially could run even faster except that IDEOne doesn't support LLVM). Even Julia will run at something like this Haskell speed after the "warm-up" for JIT compilation.
EDIT_ADD: The Julia code is as per the following:
function nextregular(target :: BigInt) :: Tuple{ UInt32, UInt32, UInt32 }
# trivial case of first value or anything less...
target < 2 && return ( 0, 0, 0 )
# Check if it's already a power of 2 (or a non-integer)
mant = target & (target - 1)
# Quickly find next power of 2 >= target
log2hi :: UInt32 = 0
test = target
while true
next = test & 0x7FFFFFFF
test >>>= 31; log2hi += 31
test <= 0 && (log2hi -= leading_zeros(UInt32(next)) - 1; break)
end
# exit if this is a power of two already...
mant == 0 && return ( log2hi - 1, 0, 0 )
# take care of trivial cases...
if target < 9
target < 4 && return ( 0, 1, 0 )
target < 6 && return ( 0, 0, 1 )
target < 7 && return ( 1, 1, 0 )
return ( 3, 0, 0 )
end
# find log of target, which may exceed the Float64 limit...
if log2hi < 53 mant = target << (53 - log2hi)
else mant = target >>> (log2hi - 53) end
log2target = log2hi + log(2, Float64(mant) / (1 << 53))
# log2 constants
log2of2 = 1.0; log2of3 = log(2, 3); log2of5 = log(2, 5)
# calculate range of log2 values close to target;
# desired number has a logarithm of log2target <= x <= top...
fctr = 6 * log2of3 * log2of5
top = (log2target^3 + 2 * fctr)^(1/3) # for 2 numbers or so higher
btm = 2 * log2target - top # or 2 numbers or so lower
# scan for values in the given narrow range that satisfy the criteria...
match = log2hi # Anything found will be smaller
result :: Tuple{UInt32,UInt32,UInt32} = ( log2hi, 0, 0 ) # placeholder for eventual matches
fives :: UInt32 = 0; fiveslmt = UInt32(ceil(top / log2of5))
while fives < fiveslmt
log2p = top - fives * log2of5
threes :: UInt32 = 0; threeslmt = UInt32(ceil(log2p / log2of3))
while threes < threeslmt
log2q = log2p - threes * log2of3
twos = UInt32(floor(log2q)); log2this = top - log2q + twos
if log2this >= btm && log2this < match
# logarithm precision may not be enough to differentiate between
# the next lower regular number and the target, so do
# a full resolution comparison to eliminate this case...
if (big(2)^twos * big(3)^threes * big(5)^fives) >= target
match = log2this; result = ( twos, threes, fives )
end
end
threes += 1
end
fives += 1
end
result
end
Here's another possibility I just thought of:
If N is X bits long, then the smallest regular number R ≥ N will be in the range
[2^(X-1), 2^X]
e.g. if N = 257 (binary 100000001) then we know R is 1xxxxxxxx unless R is exactly equal to the next power of 2 (512)
To generate all the regular numbers in this range, we can generate the odd regular numbers (i.e. multiples of powers of 3 and 5) first, then take each value and multiply by 2 (by bit-shifting) as many times as necessary to bring it into this range.
In Python:
from itertools import takewhile
from queue import PriorityQueue

def nextPowerOf2(n):
    p = max(1, n)
    while p != (p & -p):
        p += p & -p
    return p

# Generate multiples of powers of 3, 5
def oddRegulars():
    q = PriorityQueue()
    q.put(1)
    prev = None
    while not q.empty():
        n = q.get()
        if n != prev:
            prev = n
            yield n
            if n % 3 == 0:
                q.put(n // 3 * 5)
            q.put(n * 3)

# Generate regular numbers with the same number of bits as n
def regularsCloseTo(n):
    p = nextPowerOf2(n)
    numBits = len(bin(n))
    for i in takewhile(lambda x: x <= p, oddRegulars()):
        yield i << max(0, numBits - len(bin(i)))

def nextRegular(n):
    bigEnough = filter(lambda x: x >= n, regularsCloseTo(n))
    return min(bigEnough)
You know what? I'll put money on the proposition that actually, the 'dumb' algorithm is fastest. This is based on the observation that the next regular number does not, in general, seem to be much larger than the given input. So simply start counting up, and after each increment, refactor and see if you've found a regular number. But create one processing thread for each available core you have, and for N cores have each thread examine every Nth number. When each thread has found a number or crossed the power-of-2 threshold, compare the results (keep a running best number) and there you are.
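For reference, a minimal single-threaded Python sketch of that count-up-and-check idea (the function names are mine, and the multi-core splitting described above is omitted):
def is_regular(n):
    # repeatedly strip out factors of 2, 3 and 5; regular iff nothing is left
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def next_regular_naive(n):
    while not is_regular(n):
        n += 1
    return n

print(next_regular_naive(257))  # 270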
I wrote a small C# program to solve this problem. It's not very optimised, but it's a start.
This solution is pretty fast for numbers as big as 11 digits.
private long GetRegularNumber(long n)
{
    long result = n - 1;
    long quotient = result;
    while (quotient > 1)
    {
        result++;
        quotient = result;
        quotient = RemoveFactor(quotient, 2);
        quotient = RemoveFactor(quotient, 3);
        quotient = RemoveFactor(quotient, 5);
    }
    return result;
}

private static long RemoveFactor(long dividend, long divisor)
{
    long remainder = 0;
    long quotient = dividend;
    while (remainder == 0)
    {
        dividend = quotient;
        quotient = Math.DivRem(dividend, divisor, out remainder);
    }
    return dividend;
}
