StringIndexer and one-hot encoding in SparkR - azure-databricks

I am trying to convert a string variable in SparkR to numeric, using the one-hot encoding concept and a string indexer, with the code below:
df<-ft_string_indexer(spark_df,input_col=cluster_group,output_col=new)
However, I am getting the error below:
no applicable method for 'ft_string_indexer' applied to an object of class "SparkDataFrame"
Any idea on the correct code for a string indexer and one-hot encoding in SparkR?

First, ft_string_indexer() is for sparklyr, not SparkR. For the differences between the two, see here. In general, sparklyr is better for ML, so I'd recommend moving to sparklyr if you can.
Second, it is worth noting that SparkR uses a one-hot encoder in the background for all of its ML. This is noted here. So you may be able to get away without doing it yourself, depending on your model.
I couldn't find a SparkR function that does exactly the same thing as ft_string_indexer(), but you could use encode. This should hold up in whatever ML you're doing, though without an example I can't be sure. The following is taken from the SparkR documentation on encode:
df <- createDataFrame(as.data.frame(Titanic, stringsAsFactors = FALSE))
tmp <- mutate(df, s1 = encode(df$Class, "UTF-8"))
tmp2 <- mutate(tmp, s2 = base64(tmp$s1),
               s3 = decode(tmp$s1, "UTF-8"),
               s4 = soundex(tmp$Sex))
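As an aside: if switching languages is an option on Databricks, PySpark has both steps built in. A minimal sketch, assuming a DataFrame spark_df with the question's cluster_group column (the output column names are made up here, and on Spark 2.x the encoder class is OneHotEncoderEstimator rather than OneHotEncoder):
from pyspark.ml.feature import StringIndexer, OneHotEncoder

# index the string column, then one-hot encode the resulting indices
indexer = StringIndexer(inputCol="cluster_group", outputCol="cluster_index")
indexed = indexer.fit(spark_df).transform(spark_df)

encoder = OneHotEncoder(inputCols=["cluster_index"], outputCols=["cluster_vec"])
encoded = encoder.fit(indexed).transform(indexed)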

Related

How to make forecast using the fpp2 package?

This is the first time I am using the fpp2 package to make a linear forecast. I have successfully installed the package; however, I am getting an error when using the commands.
I have already converted the data to a time series using the ts command.
library(SPEI)
library(fpp2)
m<- read.delim("D:/PHD_UOM/PHD_Dissertation/PhD/PhD_R/mydata/mruspi.txt")
head(m)
y<- spi(ts(m$mru,freq=12,start=c(1971,1)), end=c(2019,12),scale =12)
y
forecast(y,12)
naive(y,12)
forecast(y,12)
Error in is.constant(y) :
'list' object cannot be coerced to type 'double'
naive(y,12)
Error in x[, (1 + cs[i]):cs[i + 1]] <- xx :
incorrect number of subscripts on matrix
The problem seems to be with the output of spi: according to the manual, it returns an object of class spi, which is probably not a suitable input for the forecast function. You might need to use the fitted component of the spi object instead: y$fitted
For details, see the documentation of the SPEI package (specifically pp. 6-7): SPEI Documentation
There is a newer version of fpp, called fpp3. I recommend installing fpp3 for starters:
install.packages("fpp3")
There is an excellent book that demonstrates how to use fpp3, called Forecasting: Principles and Practice. It can be purchased via Amazon, or viewed for free from the author online: https://otexts.com/fpp3/. I am working my way through the book on my own (not in a class); it is very clear and extremely well written, and I strongly recommend using it to learn forecasting.
I am unable to load the spei library; R returns this error:
"package ‘spei’ is not available for this version of R"
(Note that R package names are case-sensitive, so it must be loaded with library(SPEI), not library(spei).)
If you are able to update R and fpp, then an example of making a linear forecast would be:
library(fpp3)
library(tidyverse)
us_change %>%
  model(TSLM(Unemployment ~ Consumption + Production + Savings + season() + trend())) %>%
  report()
You can learn more about linear regression using fpp3 here: https://otexts.com/fpp3/regression-intro.html

ROC on multiple test sets in h2o (python)

I had a use-case that I thought was really simple but couldn't find a way to do it with h2o. I thought you might know.
I want to train my model once, and then evaluate its ROC on a few different test sets (e.g. a validation set and a test set, though in reality I have more than 2) without having to retrain the model. The way I know to do it now requires retraining the model each time:
train, valid, test = fr.split_frame([0.2, 0.25], seed=1234)
rf_v1 = H2ORandomForestEstimator( ... )
rf_v1.train(features, var_y, training_frame=train, validation_frame=valid)
roc = rf_v1.roc(valid=1)
rf_v1.train(features, var_y, training_frame=train, validation_frame=test) # training again with the same training set - can I avoid this?
roc2 = rf_v1.roc(valid=1)
I can also use model_performance(), which gives me some metrics on an arbitrary test set without retraining, but not the ROC. Is there a way to get the ROC out of the H2OModelMetrics object?
Thanks!
You can use H2O Flow to inspect model performance. Simply go to http://localhost:54321/flow/index.html (if you changed the default port, adjust it in the link), type getModel "rf_v1" in a cell, and it will show you all the measurements of the model in multiple cells in the flow. It's quite handy.
If you are using Python, you can find the performance in your IDE like this:
rf_perf1 = rf_v1.model_performance(test)
and then print the ROC like this:
print(rf_perf1.auc())
Yes, indirectly. Get the TPRs and FPRs from the H2OModelMetrics object:
out = rf_v1.model_performance(test)
fprs = out.fprs
tprs = out.tprs
roc = zip(fprs, tprs)
(By the way, my H2ORandomForestEstimator object does not seem to have an roc() method at all, so I'm not 100% sure that this output is in the exact same format. I'm using h2o version 3.10.4.7.)
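If you want to see the actual curves rather than the raw (FPR, TPR) pairs, a minimal matplotlib sketch along these lines should work, continuing from the code above (assuming fprs and tprs come back as plain lists of floats, which may vary by h2o version):
import matplotlib.pyplot as plt

perf = rf_v1.model_performance(test)
plt.plot(perf.fprs, perf.tprs, label="test (AUC = %.3f)" % perf.auc())
plt.plot([0, 1], [0, 1], linestyle="--")  # chance diagonal for reference
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
Repeat the model_performance() call for the validation frame (or any other frame) to overlay several curves on one plot, still without retraining.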

Spark RDD into Matrix

I have an RDD like:
(A,AA,1)
(A,BB,0)
(A,CC,0)
(B,AA,2)
(B,BB,1)
(B,CC,4)
and I want to convert it into the following RDD:
([1,0,0],[2,1,4])
The order is important for me, since the main purpose is to use RowMatrix to convert the second RDD into a matrix.
You need to be careful with the wording: when you ask for a matrix, do you mean something like spark.mllib's Matrix? If so, you will need to follow very specific instructions to create one. However, it seems to me that your problem can be solved in a much easier way, just by using zipWithIndex with groupBy:
// Here is how I see it
import org.apache.spark.mllib.linalg.Vectors

val test = sc.parallelize(Array(("A","AA",1),("A","BB",0),("A","CC",0),("B","AA",2),("B","BB",1),("B","CC",4))).zipWithIndex
val grouptest = test.groupBy(_._1._1)                     // group by the first field ("A", "B", ...)
  .map(x => Vectors.dense(x._2.map(y => (y._2, y._1._3)) // keep (original index, value) pairs
    .toArray.sortBy(_._1)                                 // restore the original order
    .map(_._2.toDouble)))                                 // Vectors.dense requires Doubles
In your example, you seem to want the result as a vector, so I used Spark's Vector (which, by the way, only allows Doubles).
Result looks like:
[1.0,0.0,0.0]
[2.0,1.0,4.0]

Is it easy to modify this Python code to use pandas, and would it help if I did?

I have written a Python 2.7 script that reads a CSV file and then does some standard deviation calculations. It works absolutely fine, but it is very, very slow: a CSV I tried with 100 million lines took around 28 hours to complete. I did some googling, and it appears that using the pandas module might make this quicker.
I have posted part of the code below. Since I am a novice when it comes to Python, I am unsure whether using pandas would actually help at all, and if it did, whether the function would need to be completely rewritten.
Just some context for the CSV file: it has 3 columns; the first column is an IP address, the second is a URL, and the third is a timestamp.
def parseCsvToDict(filepath):
    with open(filepath) as f:
        ip_dict = dict()
        csv_data = csv.reader(f)
        f.next()  # skip header line
        for row in csv_data:
            # Some lines in the csv have more/fewer than the 3 fields they should
            # have, so this is a cheat to get the script working, ignoring bad data
            if len(row) == 3:
                current_ip, URI, current_timestamp = row
                epoch_time = convert_time(current_timestamp)  # convert each time to epoch
                if current_ip not in ip_dict.keys():
                    ip_dict[current_ip] = dict()
                if URI not in ip_dict[current_ip].keys():
                    ip_dict[current_ip][URI] = list()
                ip_dict[current_ip][URI].append(epoch_time)
    return(ip_dict)
Once the above function has finished, the data is passed to another function that calculates the standard deviation for each IP/URL pair (using numpy.std).
Do you think that using pandas may increase the speed, and would it require a complete rewrite, or is it easy to modify the above code?
The following should work:
import pandas as pd
colnames = ["current_IP", "URI", "current_timestamp", "dummy"]
df = pd.read_csv(filepath, names=colnames)
# Remove incomplete and redundant rows:
df = df[~df.current_timestamp.isnull() & df.dummy.isnull()]
Notice this assumes you have enough RAM. In your code, you are already assuming you have enough memory for the dictionary, but the dictionary may be significantly smaller than the memory used by the above, for two reasons.
If it is because most lines are dropped, then just parse the csv in chunks: the arguments skiprows and nrows are your friends, and then pd.concat.
If it is because IPs/URLs are repeated, then you will want to transform IPs and URLs from normal columns to indices: parse by chunks as above, and on each chunk do
indexed = df.set_index(["current_IP", "URI"]).sort_index()
I expect this will indeed give you a performance boost.
EDIT: ... including a performance boost to the calculation of the standard deviation (hint: df.groupby()). A sketch combining the chunked parsing with the groupby follows.
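A minimal sketch of the combination, assuming the hypothetical file name and chunk size below, and that pd.to_datetime can parse your timestamps (none of this is tested against your data):
import pandas as pd

colnames = ["current_IP", "URI", "current_timestamp", "dummy"]
chunks = []
for chunk in pd.read_csv("access_log.csv", names=colnames, chunksize=10**6):
    # keep only well-formed rows, as above
    chunk = chunk[~chunk.current_timestamp.isnull() & chunk.dummy.isnull()]
    chunks.append(chunk)
df = pd.concat(chunks)

# convert the timestamps to epoch seconds in one vectorized pass
df["epoch"] = pd.to_datetime(df.current_timestamp).astype("int64") // 10**9

# one standard deviation per IP/URL pair, replacing the per-pair numpy.std loop
stds = df.groupby(["current_IP", "URI"]).epoch.std()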
I will not be able to give you an exact solution, but here are a couple of ideas.
Based on your data, you read 100000000 / 28 / 60 / 60 ≈ 1000 lines per second. Not really slow, but I believe that just reading such a big file can cause a problem.
So take a look at this performance comparison of how to read a huge file. Basically, a guy suggests that doing this:
f = open("sample.txt")
while 1:
    lines = f.readlines(100000)  # read a batch of lines at a time
    if not lines:
        break
    for line in lines:
        pass  # do something
can give you something like a 3x read boost. I also suggest you try defaultdict instead of your "if key in dict, create [], otherwise append" pattern; see the sketch below.
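For instance, a minimal self-contained sketch of the nested-dictionary part of your function with defaultdict (the sample rows are made up):
from collections import defaultdict

ip_dict = defaultdict(lambda: defaultdict(list))
for current_ip, URI, epoch_time in [("1.2.3.4", "/a", 100), ("1.2.3.4", "/a", 160)]:
    # no membership checks needed: missing keys are created on first access
    ip_dict[current_ip][URI].append(epoch_time)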
And last, not related to Python: working in data analysis, I have found an amazing tool for working with csv/json. It is csvkit, which allows you to manipulate csv data with ease.
In addition to what Salvador Dali said in his answer: if you want to keep as much of the current code of your script as possible, you may find that PyPy can speed up your program:
“If you want your code to run faster, you should probably just use PyPy.” — Guido van Rossum (creator of Python)

MATLAB ConnectedComponentLabeler does not work in for loop

I am trying to get a set of binary images' eccentricity and solidity values using the regionprops function. I obtain the label matrix using the vision.ConnectedComponentLabeler function.
This is the code I have so far:
files = getFiles('images');
ecc = zeros(length(files)); % eccentricity values
sol = zeros(length(files)); % solidity values
ccl = vision.ConnectedComponentLabeler;
for i = 1:length(files)
    I = imread(files{i});
    [L, NUM] = step(ccl, I);
    for j = 1:NUM
        L = changem(L==j, 1, j); %*
    end
    stats = regionprops(L, 'all');
    ecc(i) = stats.Eccentricity;
    sol(i) = stats.Solidity;
end
However, when I run this, I get an error indicating the line marked with *:
Error using ConnectedComponentLabeler/step
Variable-size input signals are not supported when the OutputDataType property is set to 'Automatic'.
I do not understand what MATLAB is talking about, and I have no idea how to get rid of it.
Edit
I have gone back to the bwlabel function and have no problems now.
The error is a bit hard to understand, but I can explain what exactly it means. When you use the CVST Connected Component Labeller, it assumes that all of the images you're going to use with the function are the same size. That error happens because it looks like they aren't... hence the mention of "variable-size input signals".
The "Automatic" property means that the output data type of the images are automatic, meaning that you don't have to worry about whether the data type of the output is uint8, uint16, etc. If you want to remove this error, you need to manually set the output data type of the images produced by this labeller, or the OutputDataType property to be static. Hopefully, the images in the directory you're reading are all the same data type, so override this field to be a data type that this function accepts. The available types are uint8, uint16 and uint32. Therefore, assuming your images were uint8 for example, do this before you run your loop:
ccl = vision.ConnectedComponentLabeler;
ccl.OutputDataType = 'uint8';
Now run your code, and it should work. Bear in mind that the input needs to be logical for this to have any meaningful output.
Minor comment
Why are you using the CVST Connected Component Labeller when the Image Processing Toolbox's bwlabel function works in exactly the same way? As you are using regionprops, you have access to the Image Processing Toolbox, so bwlabel should be available to you. It's much simpler to use and requires no setup: http://www.mathworks.com/help/images/ref/bwlabel.html
