Does using the same name for dataframes impact performance?

Would there be any difference between the two pieces of code, performance-wise:
df = //read the file here
df = df.select(//some columns here)
df = df.groupBy().agg() // some operation
df = df.filter() //some filtering
df.show()
AND
df = //read the file here
df1 = df.select(//some columns here)
df2 = df1.groupBy().agg() // some operation
df3 = df2.filter() //some filtering
df3.show()
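One way to check this empirically is to compare the physical plans Spark produces for both styles. Below is a minimal PySpark sketch; the file path, column names, and aggregation are made up for illustration. Because transformations are lazy and the variable names live only in the driver program, both chains describe the same plan and explain() prints identical output.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Style 1: reuse the same name for every step
df = spark.read.csv("data.csv", header=True, inferSchema=True)   # hypothetical file
df = df.select("col_a", "col_b")
df = df.groupBy("col_a").agg(F.sum("col_b").alias("total"))
df = df.filter(F.col("total") > 0)
df.explain()

# Style 2: a new name per step
df0 = spark.read.csv("data.csv", header=True, inferSchema=True)
df1 = df0.select("col_a", "col_b")
df2 = df1.groupBy("col_a").agg(F.sum("col_b").alias("total"))
df3 = df2.filter(F.col("total") > 0)
df3.explain()

# Both explain() calls print the same physical plan: the names are only Python
# references to immutable DataFrame objects, so Spark sees one identical DAG.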

Related

How to speed up resample with for loop?

I want to get the H1 RSI value for each M15 candle; this is how I do it. However, with more than 500,000 rows this is very time-consuming. Is there a better way? Note that it is mandatory to resample at each row to get the correct result.
import talib
import pandas as pd
import numpy as np

def Data(df):
    df['RSI1'] = talib.RSI(df['close'], timeperiod=13)
    df['RSI2'] = talib.RSI(df['close'], timeperiod=21)
    return df

# len(df) > 555555
df = pd.read_csv('m15_candle.csv')
for i in range(0, len(df)):
    t = df.at[i, 'time']
    if t.hour == 0 and t.minute == 0:
        df = df[i:]
        break
df = df.set_index('time')
ohlc = {
    'open': 'first',
    'high': 'max',
    'low': 'min',
    'close': 'last'
}
rsi1 = [0] * len(df)
rsi2 = [0] * len(df)
for i in range(100000, len(df)):
    h1 = Data(df[:i].resample("1h", offset=0).apply(ohlc).dropna())
    rsi1[i] = h1.iloc[-1]['RSI1']
    rsi2[i] = h1.iloc[-1]['RSI2']
df['RSI1_h1'] = rsi1
df['RSI2_h1'] = rsi2
df = df.reset_index()
df.to_csv("data.csv", index=False)

Is "insample" in mlr3tuning resampling can be used when we want to do hyperparameter tuning with the full dataset?

I've been trying to tune hyperparameters for a survival SVM model using the AutoTuner function from the mlr3tuning package. I want to tune on the whole dataset (no train/test split). I found the resampling class "insample"; the mlr3 dictionary says it "Uses all observations as training and as test set."
My question is: can "insample" in mlr3tuning be used for hyperparameter tuning with the full dataset, and if so, why does using the tuned hyperparameters with the survivalsvm function from the survivalsvm package give a different concordance index?
This is the code I used for hyperparameter tuning
veteran <- veteran
set.seed(1)
task = as_task_surv(x = veteran, time = 'time', event = 'status')
learner = lrn("surv.svm", type = "hybrid", diff.meth = "makediff3",
              gamma.mu = c(0.1, 0.1), kernel = 'rbf_kernel')
search_space = ps(gamma = p_dbl(2^-5, 2^5), mu = p_dbl(2^-5, 2^5))
search_space$trafo = function(x, param_set) {
  x$gamma.mu = c(x$gamma, x$mu)
  x$gamma = x$mu = NULL
  x
}
ssvm_at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("insample"),
  search_space = search_space,
  measure = msr('surv.cindex'),
  terminator = trm('evals', n_evals = 5),
  tuner = tnr('grid_search'))
ssvm_at$train(task)
And this is the code I've been trying with the survivalsvm function from the survivalsvm package:
survsvm.reg <- survivalsvm(Surv(veteran$time, veteran$status) ~ .,
                           data = veteran,
                           type = "hybrid", gamma.mu = c(32, 32), diff.meth = "makediff3",
                           opt.meth = "quadprog", kernel = "rbf_kernel")
pred.survsvm.reg <- predict(survsvm.reg, veteran)
conindex(pred.survsvm.reg, veteran$time)

How to save mlr3 lightgbm model correctly?

I have the following code. I get an error when saving the trained model. The error only occurs when I use lightgbm.
library(mlr3)
library(mlr3pipelines)
library(mlr3extralearners)
data = tsk("german_credit")$data()
data = data[, c("credit_risk", "amount", "purpose", "age")]
task = TaskClassif$new("boston", backend = data, target = "credit_risk")
g = po("imputemedian") %>>%
po("imputeoor") %>>%
po("fixfactors") %>>%
po("encodeimpact") %>>%
lrn("classif.lightgbm")
gl = GraphLearner$new(g)
gl$train(task)
# predict
newdata <- data[1,]
gl$predict_newdata(newdata)
saveRDS(gl, "gl.rds")
# read model from disk ----------------
gl <- readRDS("gl.rds")
newdata <- data[1,]
# error when predict ------------------
gl$predict_newdata(newdata)
lightgbm uses special functions to save and read models. You have to extract the model before saving and add it back to the graph learner after loading. However, this might not be practical for benchmarks. We will look into it.
library(mlr3)
library(mlr3pipelines)
library(mlr3extralearners)
library(lightgbm)
data = tsk("german_credit")$data()
data = data[, c("credit_risk", "amount", "purpose", "age")]
task = TaskClassif$new("boston", backend = data, target = "credit_risk")
g = po("imputemedian") %>>%
po("imputeoor") %>>%
po("fixfactors") %>>%
po("encodeimpact") %>>%
lrn("classif.lightgbm")
gl = GraphLearner$new(g)
gl$train(task)
# save model
saveRDS.lgb.Booster(gl$model$classif.lightgbm$model, "model.rda")
# save graph learner
saveRDS(gl, "gl.rda")
# load model
model = readRDS.lgb.Booster("model.rda")
# load graph learner
gl = readRDS("gl.rda")
# add model to graph learner
gl$state$model$classif.lightgbm$model = model
# predict
newdata <- data[1,]
gl$predict_newdata(newdata)

Number of partitions scanned(=32767) exceeds limit

I'm trying to use Eel-sdk to stream data into Hive.
val sink = HiveSink(testDBName, testTableName)
  .withPartitionStrategy(new DynamicPartitionStrategy)
val hiveOps: HiveOps = ...
val schema = new StructType(Vector(Field("name", StringType), Field("pk", StringType), Field("pk1", StringType)))
hiveOps.createTable(
  testDBName,
  testTableName,
  schema,
  partitionKeys = Seq("pk", "pk1"),
  dialect = ParquetHiveDialect(),
  tableType = TableType.EXTERNAL_TABLE,
  overwrite = true
)
val items = Seq.tabulate(100)(i => TestData(i.toString, "42", "apple"))
val ds = DataStream(items)
ds.to(sink)
I'm getting the error: Number of partitions scanned(=32767) exceeds limit(=10000).
32767 is 2^15 - 1 (the largest signed 16-bit value), but I still can't figure out what is wrong. Any ideas?
Spark + Hive : Number of partitions scanned exceeds limit (=4000)
--conf "spark.sql.hive.convertMetastoreOrc=false"
--conf "spark.sql.hive.metastorePartitionPruning=false"

Python: Use Pool to create multiple processes, but the results are never produced

All the functions are placed in one class, including the function that creates the processes and the worker function it runs; that class is called from another file.
from multiprocessing import Pool
import pandas as pd

def initData(self, type):
    # create six processes to deal with the data
    if type == 'train':
        data = pd.read_csv('./data/train_merged_8.csv')
    elif type == 'test':
        data = pd.read_csv('./data/test_merged_2.csv')
    modelvec = allWord2Vec('no').getModel()
    modelvec_all = allWord2Vec('all').getModel()
    modelvec_stop = allWord2Vec('stop').getModel()
    p = Pool(6)
    count = 0
    for i in data.index:
        count += 1
        p.apply_async(self.valueCal, args=(i, data, modelvec, modelvec_all, modelvec_stop))
        if count % 1000 == 0:
            print(str(count // 100) + 'h rows of data has been dealed')
    p.close()
    p.join

def valueCal(self, i, data, modelvec, modelvec_all, modelvec_stop):
    # the function run in each process
    list_con = []
    q1 = str(data.get_value(i, 'question1')).split()
    q2 = str(data.get_value(i, 'question2')).split()
    f1 = self.getF1_union(q1, q2)
    f2 = self.getF2_inter(q1, q2)
    f3 = self.getF3_sum(q1, q2)
    f4_q1 = len(q1)
    f4_q2 = len(q2)
    f4_rate = f4_q1 / f4_q2
    q1 = [','.join(str(ve)) for ve in q1]
    q2 = [','.join(str(ve)) for ve in q2]
    list_con.append('|'.join(q1))
    list_con.append('|'.join(q2))
    list_con.append(f1)
    list_con.append(f2)
    list_con.append(f3)
    list_con.append(f4_q1)
    list_con.append(f4_q2)
    list_con.append(f4_rate)
    f = open('./data/test.txt', 'a')
    f.write('\t'.join(list_con) + '\n')
    f.close()
Output like the one below appears very quickly, but I never see the file being created. When I check the task manager, six processes have indeed been created and are consuming a lot of CPU. Yet when the program finishes, the file still has not been created.
How can I solve this problem?
10h rows of data have been dealed
20h rows of data have been dealed
30h rows of data have been dealed
40h rows of data have been dealed
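A hedged diagnostic sketch (not the asker's code; the worker and data are made up): apply_async silently discards worker exceptions unless the returned AsyncResult objects are kept and .get() is called on them, and p.join without parentheses only references the method instead of calling it, so the main process never waits for the pool. Collecting the results makes any error raised inside the workers visible.

from multiprocessing import Pool

def worker(i):
    # hypothetical stand-in for valueCal; raises once to show how errors surface
    if i == 3:
        raise ValueError(f"row {i} failed")
    return i * i

if __name__ == "__main__":
    with Pool(6) as p:
        results = [p.apply_async(worker, args=(i,)) for i in range(6)]
        for r in results:
            try:
                print(r.get(timeout=30))      # re-raises any exception from the worker
            except Exception as exc:
                print("worker error:", exc)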
