I've been working on my own Julia set plot implementation. I don't want to use JuliaSetPlot (however, I'm eager to use JuliaSetIterationPoints and JuliaSetCount, I just don't really know how).
I've come up with something like this, but I have a problem: I have no idea what is wrong and why it won't work.
Can anyone help?
'''mathematica
firstFun = Function[{Typed[pixel0, "ComplexReal64"]},
  Module[{i = 1, maksi = 100, pixel = pixel0},
   While[i < maksi && (Abs[pixel])^2 < 2,
    temp = (Re[pixel])^2 - (Im[pixel])^2
    Re[pixel] = 2*Re[pixel]*Im[pixel] - 0.8\[Iota]*Im[pixel0]
    Im[pixel] = temp - 0.8\[Iota]*Re[pixel0];
    i++];
   i]];
'''
This
firstFun = Function[{Typed[pixel0, "ComplexReal64"]},
  Module[{i = 1, maksi = 100, pixel = pixel0},
   While[i < maksi && Abs[pixel]^2 < 2,
    pixel = 2*Re[pixel]*Im[pixel] - 0.8*I*Im[pixel0] +
      I*(Re[pixel]^2 - Im[pixel]^2 - 0.8*I*Re[pixel0]);
    i++];
   i]];
compFun[c_]=FunctionCompile[firstFun]
compiles without any compile-time error messages.
If I haven't made a mistake then I think your pixel calculation can be simplified to
pixel=I*Conjugate[pixel]^2+0.8*Conjugate[pixel0]
Please test all this very carefully to make certain that it is correct.
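As a quick check of that simplification: writing pixel $= x + i y$ and pixel0 $= x_0 + i y_0$, the update in the compiled code is

$$2xy - 0.8\,i\,y_0 + i\,(x^2 - y^2 - 0.8\,i\,x_0) \;=\; 2xy + 0.8\,x_0 + i\,(x^2 - y^2) - 0.8\,i\,y_0,$$

while

$$i\,\overline{\text{pixel}}^{\,2} + 0.8\,\overline{\text{pixel0}} \;=\; i\,(x - i y)^2 + 0.8\,(x_0 - i y_0) \;=\; 2xy + 0.8\,x_0 + i\,(x^2 - y^2) - 0.8\,i\,y_0,$$

so the two forms do agree.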
I am getting the error mentioned in the title and haven't found a solution yet.
import numpy as np
import matplotlib.pyplot as plt
import scikitplot as skplt
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_absolute_error

X = train[feats].values
y = train['Target'].values

cv = StratifiedKFold(n_splits=3, random_state=2021, shuffle=True)
model = LogisticRegression(solver='liblinear')
scores = []
for train_idx, test_idx in cv.split(X, y):
    model.fit(X[train_idx], y[train_idx])
    y_pred = model.predict(X[test_idx])
    score = mean_absolute_error(y[test_idx], y_pred)
    scores.append(score)
print(np.mean(scores), np.std(scores))

fig = plt.figure(figsize=(15, 6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
skplt.metrics.plot_confusion_matrix(y, y_pred, ax=ax1)  # error line
skplt.metrics.plot_roc(y, y_pred, ax=ax2)
ValueError: Found input variables with inconsistent numbers of samples: [32561, 10853]
I checked the code and read many threads on this error. Somebody suggested putting the cross-validation in a loop as a solution, but I don't know how to manage this in code (which part of the operation to put in the loop, and how to write the condition that should end it). Please help me with a specific answer that will let me fix the problem at my current level of advancement.
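For what it's worth, the two sample counts in the error are the full dataset (32561) and a single fold's test set (roughly a third of it, 10853): y_pred only holds the last fold's predictions, while y holds every sample. A minimal sketch of one common fix, reusing the names from the snippet above, is to collect the out-of-fold predictions so they line up with the full y before plotting:

import numpy as np

# one prediction per sample, filled in fold by fold
oof_pred = np.zeros_like(y)
for train_idx, test_idx in cv.split(X, y):
    model.fit(X[train_idx], y[train_idx])
    oof_pred[test_idx] = model.predict(X[test_idx])

# y and oof_pred now have the same number of samples
skplt.metrics.plot_confusion_matrix(y, oof_pred, ax=ax1)

Note that plot_roc expects class probabilities rather than hard labels, so the ROC plot would need the output of predict_proba collected in the same way.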
This is my first Mathematica code.
I defined the functions:
\[Beta] := v/c
\[Gamma] := 1/Sqrt[1 - \[Beta]^2]
TotalE[\[Gamma][\[Beta]]] := \[Gamma]mc^2
KE := TotalE[\[Gamma][\[Beta]]] - mc^2
Now I want to make a series expansion of KE at β → 0 up to order 2.
I tried:
Series[KE, {\[Beta], 1, 2}]
But I got the error message:
General::ivar: v/c is not a valid variable.
I also wanted to define Ekin as a function of β,
so I used the Solve function to get the inverse function, β[Ekin]:
Solve[KE, \[Beta]]
The same error arises again:
Solve::ivar: v/c is not a valid variable.
Try this
Clear[\[Gamma],\[Beta],mc,KE,s,v,c]
\[Gamma] = 1/Sqrt[1 - \[Beta]^2];
TotalE[\[Gamma]*\[Beta]] = \[Gamma]*mc^2;
KE = TotalE[\[Gamma]*\[Beta]] - mc^2;
s=Normal[Series[KE, {\[Beta], 1, 2}]]/.\[Beta]->v/c
Reduce[KE==0, \[Beta]]/.\[Beta]->v/c
which returns
-mc^2 + mc^2/(Sqrt[2]*Sqrt[1 - v/c]) -
 (mc^2*(-1 + v/c))/(4*Sqrt[2]*Sqrt[1 - v/c]) +
 (3*mc^2*(-1 + v/c)^2)/(32*Sqrt[2]*Sqrt[1 - v/c])
and
(mc != 0 && v/c == 0)||(-1+v^2/c^2 !=0 && mc == 0)
What that is trying to do is carry out your calculations with the simple variable β first, and only after the calculations replace β with v/c.
But there are still things about the way you have written this that worry me. You are writing TotalE as if it were a function, but that is not the way to define a Mathematica function, and I am concerned this may get you into trouble.
Please let me know if I have misunderstood some of what you are trying to do, explain what I've gotten wrong, and I will try to find a way to fix it.
I am reviewing Hugging Face's version of ALBERT.
However, I cannot find any code or comment about SOP (Sentence Order Prediction).
I can find the NSP (Next Sentence Prediction) implementation in src/transformers/modeling_bert.py:
if masked_lm_labels is not None and next_sentence_label is not None:
    loss_fct = CrossEntropyLoss()
    masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
    next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
    total_loss = masked_lm_loss + next_sentence_loss
    outputs = (total_loss,) + outputs
Is SOP inherited from here with SOP-style labeling, or is there anything I am missing?
The sentence order loss is here:
sentence_order_loss = loss_fn(y_true=sentence_order_label, y_pred=sentence_order_reduced_logits)
It's just a cross entropy loss.
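To make the parallel with the BERT snippet above concrete, here is a minimal PyTorch sketch (my own illustration with made-up tensors, not the Hugging Face or original ALBERT source) showing that a sentence-order head is just two-class cross entropy, exactly like NSP but with swapped-segment labels:

import torch
from torch.nn import CrossEntropyLoss

batch_size = 4
# stand-in for the pooled-output classifier scores (seq_relationship_score in the BERT code)
sop_logits = torch.randn(batch_size, 2)
# 0 = segments in original order, 1 = segments swapped
sentence_order_label = torch.randint(0, 2, (batch_size,))

loss_fct = CrossEntropyLoss()
sentence_order_loss = loss_fct(sop_logits.view(-1, 2), sentence_order_label.view(-1))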
My code is the following. I was told fminsearch would solve this faster. I checked the docs and tutorials but I'm still in the dark. How would you implement fminsearch here? Thanks in advance.
MIN=1e10;
up_vec= u_min1+ ku*lambda;
vp_vec= v_min1+ kv*lambda;
wp_vec= w_min1+ kw*lambda;
%% the loop
for i_up = 1:length(up_vec)
    for i_vp = 1:length(vp_vec)
        for i_wp = 1:length(wp_vec)
            Jp(i_up,i_vp,i_wp) = norm(p - (A\[up_vec(i_up);vp_vec(i_vp);wp_vec(i_wp)]).* ...
                [exp(-1i*2*pi/lambda*up_vec(i_up));...
                 exp(-1i*2*pi/lambda*vp_vec(i_vp));...
                 exp(-1i*2*pi/lambda*wp_vec(i_wp))]);
            if Jp(i_up,i_vp,i_wp) < MIN
                MIN = Jp(i_up,i_vp,i_wp);
                ind_umin = i_up;
                ind_vmin = i_vp;
                ind_wmin = i_wp;
                up_vec_min = up_vec;
                vp_vec_min = vp_vec;
                wp_vec_min = wp_vec;
                pp_min = pp;
            end
        end
    end
end
You need to define your objective function and then use fminsearch. For instance:
funJp = @(x) norm(p - (A\[x(1);x(2);x(3)]).* ...
    [exp(-1i*2*pi/lambda*x(1));...
     exp(-1i*2*pi/lambda*x(2));...
     exp(-1i*2*pi/lambda*x(3))]);
x = fminsearch(funJp, [u_min1, v_min1, w_min1]);
Note that fminsearch passes a single parameter vector to the objective, so the three unknowns are packed into x(1), x(2), x(3), and the starting point is the same [u_min1, v_min1, w_min1] you already use to build the search grid.
I am currently running training in MATLAB on a matrix of log-spectrum samples, and I am constantly dealing with underflow problems. I understood that I need to work with logs in order to deal with underflow.
I am still struggling with underflow, though: when I calculate the mean (mue), because it is negative I can't work with logs, so I need the real values, which underflow.
These are the equations I am working with:
In the MATLAB code I calculate log_tau in order to avoid underflow, but when calculating mue I need exp(log(tau)), which goes to zero.
I am attaching the relevant MATLAB code.
** In the code, the variable called alpha is tau ...
for i = 1:50
    log_c = Logsum(log_alpha,1) - log(N);
    c = exp(log_c);
    mue = DataMat*alpha./(repmat(exp(Logsum(log_alpha,1)),FrameSize,1));
    log_abs_mue = log(abs(mue));
    log_SigmaSqr = log((DataMat.^2)*alpha) - repmat(Logsum(log_alpha,1),FrameSize,1) - 2*log_abs_mue;
    SigmaSqr = exp(log_SigmaSqr);
    for j = 1:N
        rep_DataMat(:,:,j) = repmat(DataMat(:,j),1,M);
        log_gamma(j,:) = log_c - 0.5*(FrameSize*log(2*pi)+sum(log_SigmaSqr)) + sum((rep_DataMat(:,:,j) - mue).^2./(2*SigmaSqr));
    end
    log_alpha = log_gamma - repmat(Logsum(log_gamma,2),1,M);
    alpha = exp(log_alpha);
end
c = exp(log_c);
SigmaSqr = exp(log_SigmaSqr);
Does anyone see how I can avoid this, or what needs to be fixed in the code?
What I did was add this line to the MATLAB code:
mue(isnan(mue))=0; %fix 0/0 problem
and this one:
SigmaSqr(SigmaSqr==0)=1;%fix if mue_k = x_k
Not sure if this is the best solution, but it seems to work...
Does anyone have a better idea?
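A possible alternative to the clamping above, sketched under the assumption that Logsum(x,1) computes $\log\sum_j e^{x_j}$ with the usual max-shift trick: normalize the responsibilities entirely in log space and only then exponentiate, so the explicit division by exp(Logsum(log_alpha,1)) (the source of the 0/0 cases) disappears,

$$w_{jk} \;=\; \exp\!\big(\log\alpha_{jk} - \mathrm{Logsum}_j(\log\alpha_{jk})\big), \qquad \mu_k \;=\; \sum_{j} x_j\, w_{jk}.$$

Since the $w_{jk}$ for each component $k$ sum to one, the largest of them is at least $1/N$, so they cannot all underflow at once.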