I'm trying to replicate the values of Pine Script's cci() function in Go. I've found this lib https://github.com/markcheno/go-talib/blob/master/talib.go#L1821
but it gives completely different values than the cci function does.
Pseudocode of how I use the lib:
cci := talib.Cci(latest14CandlesHighArray, latest14CandlesLowArray, latest14CandlesCloseArray, 14)
The lib gives me the following data
Timestamp: 2021-05-22 18:59:27.675, Symbol: BTCUSDT, Interval: 5m, Open: 38193.78000000, Close: 38122.16000000, High: 38283.55000000, Low: 38067.92000000, StartTime: 2021-05-22 18:55:00.000, EndTime: 2021-05-22 18:59:59.999, Sma: 38091.41020000, Cci0: -16.63898084, Cci1: -53.92565811,
While the current CCI values on TradingView are: cci0 = -136, cci1 = -49.
Could anyone point out what I'm missing?
Thank you
P.S. cci0 is the current candle's CCI, cci1 is the previous candle's CCI.
Pine Script has a really great reference when looking for functions, usually even supplying the Pine code to recreate them.
https://www.tradingview.com/pine-script-reference/v4/#fun_cci
The code wasn't provided for cci, but a step-by-step explanation was.
Here is how I managed to recreate the cci function in Pine, following the steps in the reference:
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © bajaco
//@version=4
study("CCI Breakdown", overlay=false, precision=16)
cci_breakdown(src, p) =>
    // The CCI (commodity channel index) is calculated as the
    // 1. difference between the typical price of a commodity and its simple moving average,
    //    divided by the
    // 2. mean absolute deviation of the typical price.
    // 3. The index is scaled by an inverse factor of 0.015
    //    to provide more readable numbers.
    // 1. diff
    ma = sma(src, p)
    diff = src - ma
    // 2. mad
    s = 0.0
    for i = 0 to p - 1
        s := s + abs(src[i] - ma)
    mad = s / p
    // 3. scaling
    mcci = diff / mad / 0.015
    mcci
plot(cci(close, 100))
plot(cci_breakdown(close, 100))
I didn't know what mean absolute deviation meant, but at least in their implementation it appears to take the absolute difference from the mean for each value in the window, while NOT recomputing the mean as you go back.
I don't know Go, but that's the logic.
sort_values() got multiple values for argument 'axis'
I am trying to sort this series using sort_values
Item Per Capita GSDP (Rs.)
Andhra_avg 102803
Arunachal_avg 100745
Assam_avg 55650.6
Bihar_avg 30030.6
Chattisgarh_avg 82046.7
Gujrat_avg 128755
Himachal_avg 126650
Jharkhand_avg 56385
Jammu_avg 73422.8
Karnataka_avg 128959
Kerala_avg 139140
MP_avg 61122.2
Maharashtra_avg 133512
my code:
dfGDP_percapita.sort_values("Item", axis = 0, ascending = True, inplace = True, na_position ='first')
The expected result should list "Per Capita GSDP (Rs.)" in decreasing order with NaN on top.
Try changing your code to
dfGDP_percapita.sort_values("Item", inplace=True, na_position ='first')
Some of the arguments are there by default. I'm not certain, but the error most likely occurs because dfGDP_percapita is a Series, whose sort_values takes axis as its first positional parameter, so the "Item" you pass positionally and the explicit axis=0 keyword both try to fill axis. My code works like this.
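A minimal reproduction of that clash, assuming the data is a pandas Series (the toy Series below stands in for dfGDP_percapita; the exact TypeError message varies by pandas version, but every version raises one here):

```python
import pandas as pd

# Toy stand-in for dfGDP_percapita, indexed by state
s = pd.Series({"Kerala_avg": 139140.0, "Bihar_avg": 30030.6, "MP_avg": None},
              name="Per Capita GSDP (Rs.)")

# Series.sort_values has `axis` as its first parameter, so a positional
# label plus the axis=0 keyword supplies `axis` twice:
try:
    s.sort_values("Item", axis=0, ascending=True, na_position="first")
except TypeError as e:
    print(e)

# A Series has no column to name -- sort by value directly:
result = s.sort_values(ascending=False, na_position="first")
print(list(result.index))
```

With `ascending=False` and `na_position="first"` this yields the order the question asks for: NaN rows first, then values descending.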
I basically want to know the best way to convert my BigDecimal numbers to a readable format (float, perhaps) for the purpose of displaying them to the client.
In order to figure out liabilities, I compute contributions - distributions for each address.
For example, if a person contributes 2 units to an address and that same address distributes 1 unit back to the person, then there is still a 1-unit liability. That's what's going on below. These numbers are all units of cryptocurrency.
Here's another example:
Say address1 and address2 contribute 2 coins each to address3, and address3 distributes 1.0 coins to address1 and 0.5 coins to address2, then address3 has a 1.0 coin liability to address1 and a 1.5 coin liability to address2.
So the actual data using BigDecimal below:
contributions =
{"1444"=>#<BigDecimal:7f915c08f030,'0.502569E2',18(36)>,
"alice"=>#<BigDecimal:7f915c084018,'0.211E1',18(27)>,
"address1"=>#<BigDecimal:7f915c0a4430,'0.87161E1',18(36)>,
"address2"=>#<BigDecimal:7f915c0943f0,'0.84811E1',18(36)>,
"address3"=>#<BigDecimal:7f915c0a43e0,'0.385E0',9(18)>,
"address6"=>#<BigDecimal:7f915c09ebe8,'0.1E1',9(18)>,
"address7"=>#<BigDecimal:7f915c09eb98,'0.1E1',9(18)>,
"address8"=>#<BigDecimal:7f915c09d428,'0.15E1',18(18)>,
"address9"=>#<BigDecimal:7f915c09d3d8,'0.15E1',18(18)>,
"address10"=>#<BigDecimal:7f915c0a7540,'0.132E1',18(36)>,
"address11"=>#<BigDecimal:7f915c0af8a8,'0.392E1',18(36)>,
"address12"=>#<BigDecimal:7f915c0a4980,'0.14E1',18(36)>,
"address13"=>#<BigDecimal:7f915c0af858,'0.2133333333 3333333333 33333334E1',36(54)>,
"address14"=>#<BigDecimal:7f915c0a54c0,'0.3533333333 3333333333 33333334E1',36(45)>,
"address15"=>#<BigDecimal:7f915c0a66e0,'0.1533333333 3333333333 33333334E1',36(36)>,
"sdfds"=>#<BigDecimal:7f915c0a6118,'0.1E0',9(27)>,
"sf"=>#<BigDecimal:7f915c0a6028,'0.1E0',9(27)>,
"address20"=>#<BigDecimal:7f915c0ae688,'0.3E0',9(18)>,
"address21"=>#<BigDecimal:7f915c0ae638,'0.3E0',9(18)>,
"address23"=>#<BigDecimal:7f915c0ae070,'0.1E0',9(27)>,
"address22"=>#<BigDecimal:7f915c0adf80,'0.1E0',9(27)>,
"add1"=>#<BigDecimal:7f915c0ad328,'0.1E0',9(18)>,
"add2"=>#<BigDecimal:7f915c0ad2d8,'0.1E0',9(18)>,
"addx"=>#<BigDecimal:7f915c0acd10,'0.1E0',9(27)>,
"addy"=>#<BigDecimal:7f915c0acc20,'0.1E0',9(27)>}
and the distributions:
distributions =
{"1444"=>#<BigDecimal:7f915a9068f0,'0.502569E2',18(63)>,
"alice"=>#<BigDecimal:7f915a8f44e8,'0.211E1',18(27)>,
"address1"=>#<BigDecimal:7f915a906800,'0.87161E1',18(54)>,
"address2"=>#<BigDecimal:7f915a906710,'0.84811E1',18(54)>,
"address3"=>#<BigDecimal:7f915a906620,'0.385E0',9(36)>,
"address6"=>#<BigDecimal:7f915a8fdea8,'0.1E1',9(27)>,
"address7"=>#<BigDecimal:7f915a8fddb8,'0.1E1',9(27)>,
"address8"=>#<BigDecimal:7f915a8fd5e8,'0.15E1',18(18)>,
"address9"=>#<BigDecimal:7f915a8fd4f8,'0.15E1',18(18)>,
"address10"=>#<BigDecimal:7f915a8fc9b8,'0.132E1',18(36)>,
"address11"=>#<BigDecimal:7f915a9071b0,'0.3920000000 0000003E1',27(45)>,
"address12"=>#<BigDecimal:7f915a907660,'0.1400000000 0000001E1',27(36)>,
"address13"=>#<BigDecimal:7f915a9070c0,'0.2133333333 3333337E1',27(45)>,
"address14"=>#<BigDecimal:7f915a906530,'0.3533333333 3333333333 33333334E1',36(54)>,
"address15"=>#<BigDecimal:7f915a8fc148,'0.1533333333 3333334E1',27(27)>,
"sdfds"=>#<BigDecimal:7f915a907f98,'0.1E0',9(27)>,
"sf"=>#<BigDecimal:7f915a907e08,'0.1E0',9(27)>,
"address20"=>#<BigDecimal:7f915a906ad0,'0.3000000000 0000003E0',18(27)>,
"address21"=>#<BigDecimal:7f915a9069e0,'0.3000000000 0000003E0',18(27)>,
"address23"=>#<BigDecimal:7f915a9063c8,'0.1E0',9(27)>,
"address22"=>#<BigDecimal:7f915a906238,'0.1E0',9(27)>,
"add1"=>#<BigDecimal:7f915a9060a8,'0.5E-1',9(27)>,
"add2"=>#<BigDecimal:7f915a905f18,'0.1E0',9(27)>}
Ideally, I want my liabilities to look like this:
{"add1"=>0.05,
 "addx"=>0.1,
 "addy"=>0.1}
But they look like this:
{"1444"=>0.0,
"alice"=>0.0,
"address1"=>0.0,
"address2"=>0.0,
"address3"=>0.0,
"address6"=>0.0,
"address7"=>0.0,
"address8"=>0.0,
"address9"=>0.0,
"address10"=>0.0,
"address11"=>-3.0e-16,
"address12"=>-1.0e-16,
"address13"=>-3.66666666666e-16,
"address14"=>0.0,
"address15"=>-6.6666666666e-17,
"sdfds"=>0.0,
"sf"=>0.0,
"address20"=>-3.0e-17,
"address21"=>-3.0e-17,
"address23"=>0.0,
"address22"=>0.0,
"add1"=>0.05,
"add2"=>0.0,
"addx"=>#<BigDecimal:7f915c0acd10,'0.1E0',9(27)>,
"addy"=>#<BigDecimal:7f915c0acc20,'0.1E0',9(27)>}
I don't want to include -3.66666666666e-16 because that's essentially 0, and even Ruby accounts for it this way: run -3.66666666666e-16 > 0 and it returns false.
This is what I have... is there a better way? The code below calculates the liabilities by subtracting dis from con and only selects the liabilities that are greater than 0.0. That makes sense to me, and it excludes one-time grants of coins (there must be a matching contribution for there to be a liability). Then I convert everything to floats so it's readable. Does this look right?
liab = @contributions.merge(@distributions) do |key, con, dis|
  con - dis
end.select { |addr, amount| amount > 0.0 && @contributions.keys.include?(addr) }
liab.merge(liab) do |k, old, new|
  new.to_f
end
I want the amounts returned as floats, not BigDecimal objects. Is what I'm doing okay? Will I keep accuracy if I convert to floats at the end?
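One possible cleanup of the steps above: filter with a small tolerance while everything is still BigDecimal (so artifacts like -3.0e-16 never survive), and call to_f only at the display edge. The sample hashes and the EPSILON value here are illustrative stand-ins, not the question's real data:

```ruby
require "bigdecimal"
require "bigdecimal/util"

# Illustrative stand-ins for @contributions / @distributions
contributions = { "add1" => "0.1".to_d, "addx" => "0.1".to_d, "addr11" => "3.92".to_d }
distributions = { "add1" => "0.05".to_d, "addr11" => "3.9200000000000003".to_d }

# Anything below this is treated as rounding dust, not a real liability.
EPSILON = "1e-9".to_d

liabilities = contributions.each_with_object({}) do |(addr, con), acc|
  dis = distributions.fetch(addr, BigDecimal("0"))
  owed = con - dis                          # exact BigDecimal arithmetic
  acc[addr] = owed.to_f if owed > EPSILON   # convert only for display
end

p liabilities
```

Converting to Float only after the exact arithmetic and the comparison means no accuracy is lost in the calculation itself; the float is purely a presentation format.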
I'm trying to train a dataset with 357 features using the Isolation Forest sklearn implementation. I can successfully train and get results when max_features is set to 1.0 (the default value).
However, when max_features is set to 2, it gives the following error:
ValueError: Number of features of the model must match the input.
Model n_features is 2 and input n_features is 357
It also gives the same error when the feature count is 1 (int) rather than 1.0 (float).
My understanding was that when the feature count is 2 (int), two features should be considered in creating each tree. Is this wrong? How can I change the max_features parameter?
The code is as follows:
from sklearn.ensemble.iforest import IsolationForest

def isolation_forest_imp(dataset):
    estimators = 10
    samples = 100
    features = 2
    contamination = 0.1
    bootstrap = False
    random_state = None
    verbosity = 0
    estimator = IsolationForest(n_estimators=estimators, max_samples=samples,
                                contamination=contamination,
                                max_features=features, bootstrap=bootstrap,
                                random_state=random_state, verbose=verbosity)
    model = estimator.fit(dataset)
In the documentation it states:
max_features : int or float, optional (default=1.0)
The number of features to draw from X to train each base estimator.
- If int, then draw `max_features` features.
- If float, then draw `max_features * X.shape[1]` features.
So, from what I understand, 2 should mean take two features, 1.0 should mean take all of the features, 0.5 take half, and so on.
I think this could be a bug, since, taking a look at IsolationForest's fit:
# Isolation Forest inherits from BaseBagging
# and when _fit is called, BaseBagging takes care of the features correctly
super(IsolationForest, self)._fit(X, y, max_samples,
                                  max_depth=max_depth,
                                  sample_weight=sample_weight)
# however, after _fit the decision_function is called using X - the whole
# sample - not taking max_features into account
self.threshold_ = -sp.stats.scoreatpercentile(
    -self.decision_function(X), 100. * (1. - self.contamination))
then:
# when the decision function _validate_X_predict is called, with X unmodified,
# it calls the base estimator's (dt) _validate_X_predict with the whole X
X = self.estimators_[0]._validate_X_predict(X, check_input=True)
...
# from tree.py:
def _validate_X_predict(self, X, check_input):
    """Validate X whenever one tries to predict, apply, predict_proba"""
    if self.tree_ is None:
        raise NotFittedError("Estimator not fitted, "
                             "call `fit` before exploiting the model.")
    if check_input:
        X = check_array(X, dtype=DTYPE, accept_sparse="csr")
        if issparse(X) and (X.indices.dtype != np.intc or
                            X.indptr.dtype != np.intc):
            raise ValueError("No support for np.int64 index based "
                             "sparse matrices")
    # so, this check fails because X is the original X, not with the
    # max_features applied
    n_features = X.shape[1]
    if self.n_features_ != n_features:
        raise ValueError("Number of features of the model must "
                         "match the input. Model n_features is %s and "
                         "input n_features is %s "
                         % (self.n_features_, n_features))
    return X
So I am not sure how you can handle this. Maybe figure out the fraction that leads to just the two features you need, though I'm not sure it would work as expected.
Note: I am using scikit-learn v0.18.
Edit: as @Vivek Kumar commented, this is a known issue, and upgrading to 0.20 should do the trick.
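To illustrate the fix: on scikit-learn 0.20 and later, an integer max_features works as documented. A minimal sketch with random toy data standing in for the real 357-feature dataset (note the public import path is sklearn.ensemble, not the private sklearn.ensemble.iforest, in current releases):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.randn(200, 357)  # toy stand-in for the 357-feature dataset

# max_features=2 (int): each tree is built on 2 randomly drawn features
model = IsolationForest(n_estimators=10, max_samples=100,
                        contamination=0.1, max_features=2,
                        random_state=0)
model.fit(X)

# decision_function now accepts the full-width X without the
# "Number of features of the model must match the input" error
scores = model.decision_function(X)
print(scores.shape)
```

On the buggy 0.18, the equivalent workaround would be passing the fraction 2 / X.shape[1] as a float, as suggested above.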
I have an A* (star) algorithm in Prolog, and I need it to receive a list of destinations instead of a single destination; it should pass through every element of the list and then return to the start.
NOTE: it can pass through the same place twice.
I tried, but SWI-Prolog returns false every time. Can I get any help?
The code below receives a single destination, and it works, but as I said, I need it to take a list of destinations and pass through them all.
Example: astar(1,[2,2,4,6,8,9,10,13,15],C,P).
/*area(Number,PosX,PosY).*/
area(1,20,80).
area(2,50,30).
area(3,100,100).
area(4,90,70).
area(5,70,50).
area(6,110,50).
area(7,150,90).
area(8,200,90).
area(9,140,60).
area(10,160,20).
area(11,180,60).
area(12,190,20).
area(13,230,70).
area(14,240,30).
area(15,240,20).
/*street(CityA,CityB,Distance).*/
street(1,2,50).
street(1,3,100).
street(1,4,75).
street(1,5,65).
street(2,5,30).
street(3,4,40).
street(3,7,50).
street(4,5,25).
street(4,6,30).
street(4,9,65).
street(4,7,70).
street(5,6,30).
street(6,9,35).
street(6,10,60).
street(7,8,35).
street(7,9,30).
street(8,9,70).
street(8,11,40).
street(8,13,50).
street(9,10,50).
street(9,11,30).
street(10,11,50).
street(10,12,40).
street(11,12,40).
street(11,13,50).
street(11,14,60).
street(12,14,50).
street(13,14,50).
astar(Start, Final, _, Tp) :-
    estimation(Start, Final, E),
    astar1([(E, E, 0, [Start])], Final, _, Tp).

astar1([(_, _, Tp, [Final|R])|_], Final, [Final|R], Tp) :-
    reverse([Final|R], L3),
    write('Path = '), write(L3).
astar1([(_, _, P, [X|R1])|R2], Final, C, Tp) :-
    findall((NewSum, E1, NP, [Z,X|R1]),
            ( street(X, Z, V),
              not(member(Z, R1)),
              NP is P + V,
              estimation(Z, Final, E1),
              NewSum is E1 + NP ),
            L),
    append(R2, L, R3),
    sort(R3, R4),
    astar1(R4, Final, C, Tp).

estimation(C1, C2, Est) :-
    area(C1, X1, Y1), area(C2, X2, Y2),
    DX is X1 - X2, DY is Y1 - Y2,
    Est is sqrt(DX*DX + DY*DY).
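One untested way to handle a list of destinations is to keep astar/4 as-is and fold over the legs, appending the start at the end so the tour returns home. astar_multi/3 here is a hypothetical helper, not part of the original code:

```prolog
% Untested sketch: run the existing astar/4 once per leg and sum the costs.
% astar_multi(+Start, +Destinations, -TotalCost)
astar_multi(Start, Dests, TotalCost) :-
    append(Dests, [Start], Goals),   % come back to Start at the end
    legs(Start, Goals, 0, TotalCost).

legs(_, [], Cost, Cost).
legs(From, [Next|Rest], Acc, Cost) :-
    astar(From, Next, _, LegCost),   % existing single-destination A*
    Acc1 is Acc + LegCost,
    legs(Next, Rest, Acc1, Cost).
```

Each leg still prints its own path via the write/1 inside astar1/4. Also note that the street/3 facts are one-directional as written, so legs that need to travel "backwards" (e.g. returning toward area 1) will fail unless the search also matches street(Z, X, V) or reverse facts are added; that alone can make queries like the example return false.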