Convert string to date in SparkR

I have this data.frame in SparkR:
df <- data.frame(user_id=c(1,1,2,2),
time=c("2015-7-10","2015-8-04","2015-8-8","2015-7-10"))
I convert this to a DataFrame:
dft <- createDataFrame(sqlContext, df)
I want to convert the date (which is now a string) to a 'date' type.
I use the cast function:
dft$time <- cast(dft$time, 'date')
But now when I use head(dft) I can see that 'time' only contains NA.
Maybe something needs to be added to the cast call, or maybe a package needs to be loaded before using it?
Alternatively, one could use as.Date on the data.frame, but that takes time for large data.
I have the exact same problem with 'integer'. If I type
cast(dft$time, 'int')
then 'time' again contains only NA.
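For reference, a minimal sketch (not a confirmed answer) to test whether the NAs come from the month/day parts not being zero-padded ("2015-7-10" vs "2015-07-10"): normalize the strings to yyyy-MM-dd locally before creating the DataFrame, then cast.
# Sketch: zero-pad the date strings in R first (assumption: the unpadded
# strings are what makes cast return NA), then cast in SparkR as before.
df <- data.frame(user_id = c(1, 1, 2, 2),
                 time = c("2015-7-10", "2015-8-04", "2015-8-8", "2015-7-10"))
df$time <- format(as.Date(df$time, format = "%Y-%m-%d"), "%Y-%m-%d")  # zero-pads month/day
dft <- createDataFrame(sqlContext, df)
dft$time <- cast(dft$time, "date")
head(dft)
This still touches the local data.frame once, so for data already living in Spark a different route (fixing the strings with Spark-side string functions before casting) would be needed.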

Related

How to convert mm:ss.000 to s.ms using POSIXct and strptime

One of my first posts, so I'll do my best. I've tried searching this, which is how I got this far, but I could use some help converting some time data in the form mm:ss.000 (that's milliseconds at the end) to seconds with a fraction at the end. For example, 2:15.45 should come out to 135.45.
This works:
t <- "02:15.45" (as.numeric(as.POSIXct(strptime(t, format = "%M:%OS"))) - as.numeric(as.POSIXct(strptime("0", format = "%S"))))
But this one, where I'm trying to use a column of my dataframe (originally in character form) does not work:
starttimesFPsnapjumps <- FPsnapjumps$start
(as.numeric(as.POSIXct(strptime(starttimesFPsnapjumps, format = "%M:%OS"))) - as.numeric(as.POSIXct(strptime("0", format = "%S"))))
Perhaps it's because my numbers in the column have an extra 0 - they are mm:ss.000. Any thoughts? Thanks in advance.
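One hedged alternative sketch that sidesteps POSIXct entirely: split on the colon and do the arithmetic directly. It assumes the values are plain "mm:ss.fff" strings (factors are converted first), so treat it as a starting point rather than the definitive fix.
to_seconds <- function(x) {
  x <- as.character(x)                   # in case the column is a factor
  parts <- strsplit(x, ":", fixed = TRUE)
  sapply(parts, function(p) 60 * as.numeric(p[1]) + as.numeric(p[2]))
}
to_seconds("02:15.45")                   # 135.45
# starttimesFPsnapjumps <- to_seconds(FPsnapjumps$start)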

Performance difference map() vs withColumn()

I have a table with over 100 columns. I need to remove double quotes from certain columns. I found 2 ways to do it, using withColumn() and map()
Using withColumn()
cols_to_fix = ["col1", ..., "col20"]
for col in cols_to_fix:
    df = df.withColumn(col, regexp_replace(df[col], "\"", ""))
Using map()
def remove_quotes(row: Row) -> Row:
    row_as_dict = row.asDict()
    cols_to_fix = ["col1", ..., "col20"]
    for column in cols_to_fix:
        if row_as_dict[column]:
            row_as_dict[column] = re.sub("\"", "", str(row_as_dict[column]))
    return Row(**row_as_dict)

df = df.rdd.map(remove_quotes).toDF(df.schema)
Here is my question. I found that using map() takes about 4 times longer than withColumn() on a table with ~25M records. I would really appreciate it if a fellow Stack Overflow user could explain the reason for the performance difference, so that I can avoid similar pitfalls in the future.
Firstly, one piece of advice: do not convert the DataFrame to an RDD; just do df.map(your function here). This may save a lot of time.
The following page,
https://dzone.com/articles/apache-spark-3-reasons-why-you-should-not-use-rdds
can save us a lot of time. Its main conclusion is that RDDs are remarkably slower than DataFrames/Datasets, not to mention the time spent on the conversion from DataFrame to RDD.
Now let's talk about map and withColumn without any conversion between DataFrame and RDD.
Conclusion first: map is usually about 5x slower than withColumn. The reason is that a map operation always involves deserialization and serialization, while withColumn can operate directly on the column of interest.
To be specific, a map operation has to deserialize each Row into the individual values the operation will work on. Here is an example.
Assume we have a DataFrame that looks like this:
+--------+-----------+
|language|users_count|
+--------+-----------+
| Java| 20000|
| Python| 100000|
| Scala| 3000|
+--------+-----------+
Say we want to increment all the values in the users_count column by 1; we can do it like this:
df.map(row => {
  val usersCount = row.getInt(1) + 1
  (row.getString(0), usersCount)
}).toDF("language", "users_count_incremented_by_1")
In the code above, we first need to deserialize every row to extract the value in the 2nd column; after that we output the modified values and save them as a DataFrame (this step requires serializing (a, b) into Row(a, b), since a DataFrame is nothing but a Dataset of Rows).
For a more detailed explanation, check the following excellent article:
https://medium.com/@fqaiser94/udfs-vs-map-vs-custom-spark-native-functions-91ab2c154b44
map cannot operate on the column itself; it has to operate on the values of the column. Getting those values requires deserialization, and saving the result as a DataFrame requires serialization.
But map is still of great use: with the map method you can implement very sophisticated operations, whereas with withColumn you are limited to the built-in column operations.
To sum up, map is slower but more flexible; withColumn is the most efficient, but its functionality is limited.

Quantstrat applyStrategy incorrect dimensions when trying to work with manual mktdata OHLCV data vs getSymbols

I apologize for not having a working example at the moment.
All I really need is a sample format for how to load multiple symbols from a CSV.
The documentation for the function says:
https://www.rdocumentation.org/packages/quantstrat/versions/0.16.7/topics/applyStrategy
mktdata
"an xts object containing market data. depending on indicators, may need to be in OHLCV or BBO formats, default NULL"
The reason I don't wish to use getSymbols is because I do some preprocessing and load the data from csv's because my internet is shoddy. I do download data, but about once a week. My preprocess produces different symbols from a subset of 400 symbols based on the time periods I scan. I'm trying to frontload all my download processing, and no matter what I try, I can't get it to load from either a dataframe or an xts object. Right now I'm converting from csv to dataframe to xts and attempting to load.
I have noticed that my xts objects differ from the getSymbols ones (hence the error about incorrect dimensions). Specifically, if I call colnames, mine returns nothing, whereas the getSymbols sub-elements list 6 columns.
Anyway, what I would like is to see a minimal example of loading custom OHLCV data from a CSV into an xts object that can be supplied to the mktdata = argument in the applyStrategy call. That way I can format my code to match.
I have the code to load and create the xts object from a dataframe.
#loads from a dataframe which includes Symbol, Date, Open, High, Low, Close, Volume, Adjusted
tempData <- symbol_data_set[symbol_data_set$Symbol %in% symbolstring & symbol_data_set$Date >= startDate & symbol_data_set$Date<=endDate,]
#creates a list of xts
vectorXTS <- mclapply(symbolstring, function(x)
{
  df <- symbol_data_set[symbol_data_set$Symbol==x & symbol_data_set$Date >= startDate & symbol_data_set$Date<=endDate,]
  #temp <- as.xts(
  temp <- cbind(as.data.frame(df[,2]),as.data.frame(df[,-1:-2]))
  rownames(df) <- df$Date
  #,order.by=as.POSIXct(df$Date),)
  z <- read.zoo(temp, index = 1, col.names=TRUE, header = TRUE)
  #sets names to Symbol.Open ...
  colnames(z) <- c(paste0(symbolstring[x],".Open"),paste0(symbolstring[x],".High"),paste0(symbolstring[x],".Low"),paste0(symbolstring[x],".Close"),paste0(symbolstring[x],".Volume"),paste0(symbolstring[x],".Adjusted"))
  return(as.xts(z, match.to=AAPL))
  #colnames(as.xts(z))
})
names(symbolstring) <- symbolstring
names(vectorXTS) <- symbolstring
for(i in symbolstring) assign(symbolstring[i],vectorXTS[i])
colnames(tempData) <- c(paste0(x,".Symbol"),paste0(x,".Date"),paste0(x,".Open"),paste0(x,".High"),paste0(x,".Low"),paste0(x,".Close"),paste0(x,".Volume"),paste0(x,".Adjusted"))
head(tempData)
rownames(tempData) <- tempData$Date
#attempts to use this xts object I created
results <- applyStrategy(strategy= strategyName, portfolios = portfolioName,symbols=symbolstring,mktdata)
error
Error in mktdata[, keep] : incorrect number of dimensions
This is how you store an xts getSymbols object in a file and reload it for use with quantstrat's applyStrategy (two methods shown; the write.zoo/read.zoo route is the ideal one, as you can see how the CSVs are stored):
getSymbols("AAPL",from=startDate,to=endDate,adjust=TRUE,src='yahoo',auto.assign = TRUE)
saveRDS(AAPL, file= 'stuff.Rdata')
AAPL <- readRDS(file= 'stuff.Rdata')
write.zoo(AAPL,file="zoo.csv", index.name = "Date", row.names=FALSE)
rm(AAPL)
AAPL <- as.xts(read.zoo(file="zoo.csv",header = TRUE))
If you want to work with multiple symbols, I got this to work.
Note that initially I had a reference to the 1st element, i.e. vectorXTS[[1]], and that worked.
Note: at least setting it up like this got it to run...
vectorXTS <- mclapply(symbolstring, function(x)
{
  df <- symbol_data_set[symbol_data_set$Symbol==x & symbol_data_set$Date >= startDate & symbol_data_set$Date<=endDate,]
  temp <- cbind(as.data.frame(df[,2]),as.data.frame(df[,-1:-2]))
  rownames(df) <- df$Date
  z <- read.zoo(temp, index = 1, col.names=TRUE, header = TRUE)
  colnames(z) <- c(paste0(x,".Open"),paste0(x,".High"),paste0(x,".Low"),paste0(x,".Close"),paste0(x,".Volume"),paste0(x,".Adjusted"))
  write.zoo(z,file=paste0(x,"zoo.csv"), index.name = "Date", row.names=FALSE)
  return(as.xts(read.zoo(file=paste0(x,"zoo.csv"),header = TRUE)))
})
names(vectorXTS) <- symbolstring
#this will assign to memory vs vectorXTS if one wishes to avoid using mktdata = vectorXTS[[]]
for(i in symbolstring) assign(i,vectorXTS[[i]])
results <- applyStrategy(strategy= strategyName, portfolios = portfolioName,symbols=symbolstring, mktdata = vectorXTS[[]])
#alternatively
#results <- applyStrategy(strategy= strategyName, portfolios = portfolioName,symbols=symbolstring)

How to write dates (MM/DD/YY) into a matrix (SAS)

I have the following problem:
I need to write a begin date and an end date into a matrix, where the matrix contains the yearly quarters (1-4) in the columns and the years in the rows.
E.g.
Matrix:
Q1 Q2 Q3 Q4
2010
2011
Now the date 01.01.2010 should be put in the first element and the date 09.20.2011 in the sixth element.
Thanks in advance.
You first have to consider that SAS does not actually have date/time/datetime variables. It just uses numeric variables formatted as date/time/datetime. The actual value being:
days since 1/1/1960 for dates
seconds since 00:00 for times
seconds since 1/1/1960 00:00 for datetimes
SAS does not even distinguish between integer and float numeric types. So a date value can contain a fractional part.
What you do or can do with a SAS numeric variable is completely up to you, and mostly depends on the format you apply. You could mistakenly format a variable containing a date value with a datetime format... or even with a currency format... SAS won't notice or complain.
You also have to consider that SAS does not actually have matrices or arrays either. It does provide a way to simulate their use for reading and writing dataset variables.
That said, SAS does provide a whole lot of formats and informats that allow you to implement date and time manipulation.
Assuming you are coding within a data step, and assuming the "dates" are in numeric dataset variables, the PUT function can extract the date part you need to calculate the row and column of the matrix element to write to, like so:
DATA table;
  ARRAY dm{2,4} dm_r1c1-dm_r1c4 dm_r2c1-dm_r2c4;
  beg_row = PUT(beg_date, YEAR4.)-2009;
  end_row = PUT(end_date, YEAR4.)-2009;
  beg_col = PUT(beg_date, QTR1.);
  end_col = PUT(end_date, QTR1.);
  dm{beg_row,beg_col} = beg_date;
  dm{end_row,end_col} = end_date;
RUN;
... or if you are using a one-dimensional array:
DATA table;
  ARRAY da{8} da_1-da_8;
  beg_index = 4 * (PUT(beg_date, YEAR4.)-2010) + PUT(beg_date, QTR1.);
  end_index = 4 * (PUT(end_date, YEAR4.)-2010) + PUT(end_date, QTR1.);
  da{beg_index} = beg_date;
  da{end_index} = end_date;
RUN;

add columns to data frame using foreach and %dopar%

In Revolution R 2.12.2 on Windows 7 and Ubuntu 64-bit 11.04 I have a data frame with over 100K rows and over 100 columns, and I derive ~5 columns (sqrt, log, log10, etc) for each of the original columns and add them to the same data frame. Without parallelism using foreach and %do%, this works fine, but it's slow. When I try to parallelize it with foreach and %dopar%, it will not access the global environment (to prevent race conditions or something like that), so I cannot modify the data frame because the data frame object is 'not found.'
My question is how can I make this faster? In other words, how to parallelize either the columns or the transformations?
Simplified example:
require(foreach)
require(doSMP)
w <- startWorkers()
registerDoSMP(w)
transform_features <- function()
{
  cols <- c(1,2,3,4) # in my real code I select certain columns (not all)
  foreach(thiscol=cols, mydata) %dopar% {
    name <- names(mydata)[thiscol]
    print(paste('transforming variable ', name))
    mydata[,paste(name, 'sqrt', sep='_')] <<- sqrt(mydata[,thiscol])
    mydata[,paste(name, 'log', sep='_')] <<- log(mydata[,thiscol])
  }
}
n<-10 # I often have 100K-1M rows
mydata <- data.frame(
  a=runif(n,1,100),
  b=runif(n,1,100),
  c=runif(n,1,100),
  d=runif(n,1,100)
)
ncol(mydata) # 4 columns
transform_features()
ncol(mydata) # if it works, there should be 8
Notice that if you change %dopar% to %do% it works fine.
Try the := operator in data.table to add the columns by reference. You'll need with=FALSE so you can put the call to paste on the LHS of :=.
See When should I use the := operator in data.table?
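For illustration, a short sketch of that data.table route (not from the original answer; it uses the current parenthesized-LHS syntax rather than the older with=FALSE form, and assumes the columns of interest are a, b, c, d):
require(data.table)
dt <- as.data.table(mydata)
cols <- c("a", "b", "c", "d")
for (name in cols) {
  sqrt_col <- paste(name, 'sqrt', sep='_')
  log_col  <- paste(name, 'log',  sep='_')
  dt[, (sqrt_col) := sqrt(get(name))]   # adds the column by reference, no copy of dt
  dt[, (log_col)  := log(get(name))]
}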
Might it be easier if you did something like
n<-10
mydata <- data.frame(
  a=runif(n,1,100),
  b=runif(n,1,100),
  c=runif(n,1,100),
  d=runif(n,1,100)
)
mydata_sqrt <- sqrt(mydata)
colnames(mydata_sqrt) <- paste(colnames(mydata), 'sqrt', sep='_')
mydata <- cbind(mydata, mydata_sqrt)
producing something like
> mydata
a b c d a_sqrt b_sqrt c_sqrt d_sqrt
1 29.344088 47.232144 57.218271 58.11698 5.417018 6.872565 7.564276 7.623449
2 5.037735 12.282458 3.767464 40.50163 2.244490 3.504634 1.940996 6.364089
3 80.452595 76.756839 62.128892 43.84214 8.969537 8.761098 7.882188 6.621340
4 39.250277 11.488680 38.625132 23.52483 6.265004 3.389496 6.214912 4.850240
5 11.459075 8.126104 29.048527 76.17067 3.385126 2.850632 5.389669 8.727581
6 26.729365 50.140679 49.705432 57.69455 5.170045 7.081008 7.050208 7.595693
7 42.533937 7.481240 59.977556 11.80717 6.521805 2.735186 7.744518 3.436157
8 41.673752 89.043099 68.839051 96.15577 6.455521 9.436265 8.296930 9.805905
9 59.122106 74.308573 69.883037 61.85404 7.689090 8.620242 8.359607 7.864734
10 24.191878 94.059012 46.804937 89.07993 4.918524 9.698403 6.841413 9.438217
There are two ways you can handle this:
Loop over each column (or, better yet, a subset of the columns), apply the transformations to create a temporary data frame, return that, and then cbind the list of data frames, as @Henry suggested (see the sketch after this list).
Loop over the transformations, apply each to the data frame, then return the transformed data frames, cbind, and proceed.
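A sketch of the first approach (my own illustration, not code from the answers): it assumes a doParallel backend, since the doSMP package used in the question is no longer maintained. Each task returns a small data frame of derived columns, and the results are cbind-ed back onto mydata afterwards, so no worker ever has to modify the global data frame.
require(foreach)
require(doParallel)
cl <- makeCluster(2)
registerDoParallel(cl)

cols <- c(1,2,3,4)
derived <- foreach(thiscol=cols, .combine=cbind) %dopar% {
  name <- names(mydata)[thiscol]
  out <- data.frame(sqrt(mydata[,thiscol]), log(mydata[,thiscol]))
  names(out) <- paste(name, c('sqrt', 'log'), sep='_')
  out    # returned to the master instead of assigning with <<-
}
mydata <- cbind(mydata, derived)
stopCluster(cl)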
Personally, the way I tend to do things like this is to create a big.matrix object (either in memory or on disk, using the bigmemory package), so that all of the columns can be accessed in shared memory. Just pre-allocate the columns you will fill in, and you won't need to do a post hoc cbind. I tend to do it on disk. Just be sure to run flush() to make sure everything is written to disk.
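A rough sketch of that big.matrix idea (again my own illustration, not the answerer's code): it assumes all columns are numeric, since a big.matrix holds a single type, and pre-allocates the derived columns up front.
require(bigmemory)
orig_cols <- c("a", "b", "c", "d")
all_cols  <- c(orig_cols, paste(orig_cols, 'sqrt', sep='_'), paste(orig_cols, 'log', sep='_'))

bm <- filebacked.big.matrix(nrow = nrow(mydata), ncol = length(all_cols),
                            dimnames = list(NULL, all_cols),
                            backingfile = "mydata.bin",
                            descriptorfile = "mydata.desc")
bm[, orig_cols] <- as.matrix(mydata[, orig_cols])   # copy the original columns in
for (name in orig_cols) {
  bm[, paste(name, 'sqrt', sep='_')] <- sqrt(bm[, name])
  bm[, paste(name, 'log',  sep='_')] <- log(bm[, name])
}
flush(bm)   # make sure the file-backed matrix is written to disk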
