Using Jupyter notebook with SparkR

I want to use a Jupyter notebook with SparkR, which means installing the IRkernel on the Jupyter that is installed on my Spark cluster.
I could find help on using Jupyter with PySpark, but not with SparkR.
I created my Spark cluster on AWS EMR.
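For reference, registering the plain IRkernel with Jupyter is normally done from R; a minimal sketch, assuming R and Jupyter are already present on the EMR master node:

    # Install the IRkernel package and register its kernelspec with Jupyter
    Rscript -e "install.packages('IRkernel', repos = 'https://cloud.r-project.org')"
    Rscript -e "IRkernel::installspec()"

What I cannot find is how to wire such a kernel to SparkR on the cluster.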

If it's not essential to use IRkernel, then to use Jupyter with Spark you should consider installing the Apache Toree kernel: https://toree.incubator.apache.org/
This kernel lets you connect a Jupyter notebook to Spark using any of the Spark APIs. It also supports magics (e.g. %pyspark or %sparkr) to switch between languages in different cells of a single notebook.
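For example, a typical Toree installation that enables several interpreters looks roughly like this; a sketch, assuming pip and Jupyter are available, and where the SPARK_HOME path (here the EMR default /usr/lib/spark) is an assumption that depends on your cluster:

    # Install Toree and register its kernels with Jupyter
    pip install toree
    jupyter toree install --spark_home=/usr/lib/spark \
        --interpreters=Scala,PySpark,SparkR,SQL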

After you create a kernel with Toree, your kernel.json should include a SPARK_HOME entry in its env section, something like:

    "env": {
      "SPARK_HOME": "/opt/cloudera/parcels/SPARK2/lib/spark2"
    }

but due to a bug it sometimes comes out with a wrong value, e.g.:

    "SPARK_HOME": "spark-home"

After fixing SPARK_HOME manually I got the Scala kernel working, but I am still not able to get the SparkR kernel working; if you are using Toree, this bug in the generated kernel.json is the first thing you should check.
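To find and inspect the generated kernel.json files, something like this works (jupyter kernelspec list is a standard Jupyter command; the kernel directory name shown is illustrative):

    # List installed kernelspecs and their locations
    jupyter kernelspec list
    # Check the SPARK_HOME entry of the SparkR kernel (path is illustrative)
    grep SPARK_HOME /usr/local/share/jupyter/kernels/apache_toree_sparkr/kernel.json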

Related

Installing Hadoop in full distributed mode using VM VirtualBox on Windows Machines

I have installed Hadoop in pseudo-distributed mode using Oracle VM VirtualBox (https://github.com/AmanpreetSingh-GitHub/Hadoop) on a Windows 10 machine; it is working perfectly fine and running my MapReduce, Pig, Hive and Sqoop programs.
Now I want to install Hadoop in fully distributed mode using Oracle VM VirtualBox on four Windows 10 machines. Could you please let me know how to proceed? Any links to resources that briefly describe this would be really helpful.

How to install pyspark & spark for learning purposes on a laptop with limited resources?

I have a Windows 7 laptop with 6 GB RAM. What is the most RAM/resource-efficient way to install PySpark and Spark on this laptop purely for learning purposes? I don't want to work on actual big data; a small dataset is ideal, since this is just for learning PySpark and Spark in general. I would prefer the latest version of Spark.
FYI: I don't have Hadoop installed.
Thanks
You've basically got three options:
Build everything from source
Install Virtualbox and use a pre-built VM like Cloudera Quickstart
Install Docker and find a suitable container
Getting everything up and running when you choose to build from source can be a pain. You've got to install the JDK, build Hadoop and Spark (both of which require you to install additional software to build them), set up a bunch of environment variables, and then pray that nothing got messed up.
VMs are nice, particularly the one from Cloudera, but you'll often be stuck with an older version of Spark and it might be tight with the resources you described.
I'd go with Docker.
Once you've got docker installed, it becomes very easy to try Spark (and lots of other technologies). My favorite containers for playing around use ipython or jupyter notebooks.
Install Docker:
https://docs.docker.com/installation/windows/
Jupyter Notebook Python, Spark, Mesos Stack
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
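Starting that container is a one-liner; a minimal sketch, assuming Docker is already running (the memory cap is optional, but sensible on a 6 GB laptop):

    # Run the pyspark-notebook image, capped at 4 GB of RAM;
    # the notebook server is then reachable at http://localhost:8888
    docker run -it --rm -p 8888:8888 --memory=4g jupyter/pyspark-notebook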
One thing to keep in mind is that you are going to have to allocate a certain amount of memory for the VM and the remaining memory still has to operate Windows. Windows 7 requires a minimum of 1 GB for a 32-bit OS or 2 GB for a 64-bit OS. So likely you are only going to wind up with around 4 GB of RAM for running the VM, which is not much.
Assuming you are 64-bit, note that Cloudera requires a minimum of 4 GB RAM to run CDH 5, but if you want to run Cloudera Express, you need 8 GB.
Running Docker from Windows will require you to use boot2docker, which keeps the entire VM in memory. It uses minimal memory (like around 27 MB) to run, so you should be fine there. A MUCH better solution than running VirtualBox!
Another option to consider would be to spin up a free machine on something like Amazon Web Services (http://aws.amazon.com) or Google Cloud (http://cloud.google.com). Particularly with the latter, you can get a free trial amount of credits, which you could use to spin up a machine with more RAM than you would typically get with AWS.

Hadoop features when installed on windows using virtual box

Do I get fewer features or functions of the Hadoop environment when it is installed on a Windows machine using VirtualBox? Is it good to have this sort of Hadoop installation for beginner practice? What is the difference between Hadoop installed on a Linux machine and an installation in VirtualBox on a Windows machine?
You can have a fully distributed cluster on your Windows machine using multiple nodes in VirtualBox. However, for beginners I recommend setting up a single-node cluster and practicing on that. There is no such thing as getting fewer features: you will be running Hadoop in pseudo-distributed mode, and all the daemons will be running. The only limitation is that, since you have a single Windows machine with limited storage/RAM, you can't test the cluster with huge amounts of data. Hope this helps.
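As a quick sanity check that all the daemons really are running in pseudo-distributed mode, you can use jps; a sketch, assuming a standard Hadoop 2.x installation with its sbin scripts on the PATH:

    # Start the HDFS and YARN daemons on the single node
    start-dfs.sh
    start-yarn.sh
    # List the running Java processes: expect NameNode, DataNode,
    # SecondaryNameNode, ResourceManager and NodeManager
    jps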

Hadoop cluster with Linux as master and windows 7 as slave

I want to set up a Hadoop environment with Linux Fedora as master and a Windows 7 machine as slave. Is this combination possible, and if so, do I need to install Cygwin on Windows 7?
Good practice says do not run Hadoop on Windows (simple as that).
Why do you want to do that?
In case you want to test something, use pseudo-distributed mode (run all Hadoop services on one machine).
As an additional point, I would recommend using a distribution of Hadoop, for instance Cloudera's.
This link explains step by step how to set it up:
https://ccp.cloudera.com/display/CDH4DOC/CDH4+Installation+Guide
It is pretty simple and, what is important, very concisely documented.

Hadoop cluster with ubuntu and Windows

I have three laptops (with Ubuntu) that I am networking to act as a cluster for Hadoop. I also have a Windows-only machine; is it possible to add that to the cluster and make it act as a node? Is it feasible? Has anyone come across such an issue?
If you have a Windows environment, I would suggest that you use VirtualBox and any Linux as the guest OS.
You can build your Hadoop cluster on that. There are numerous installation procedures available for Linux, and you can't go wrong with that.
We are using it exactly this way for development purposes. Performance of the Hadoop cluster is not the concern here; functionality is.
It also allows you to fine-tune your dev ops, since you can tear a VM apart and start afresh with a new one.
The easiest approach to build it this way is (a sketch of the first steps follows the list):
Install VirtualBox
Install Vagrant
Use a community provided box from: http://www.vagrantbox.es/
Bootstrap your VM for yum packages
Move from NAT interface to Bridged Ethernet interface
Install Hadoop using SCM: http://www.cloudera.com/products-services/tools/
Bring up your cluster
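A minimal sketch of those first steps, assuming VirtualBox and Vagrant are installed; the box name and URL are illustrative placeholders for whichever community box you pick from the site above:

    # Create a Vagrantfile pointing at a community box and boot the VM
    vagrant init centos65 http://example.com/centos65.box
    vagrant up
    vagrant ssh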
Yes, it is possible. On the Ubuntu machines, Hadoop installation should be straightforward; you just need to follow the regular steps. Since Hadoop runs in a Linux environment, you need to install Cygwin on your Windows machine, which is a Linux-like environment for Windows and will enable you to install and run Linux-based applications (like Hadoop) on a Windows machine.
Here is the link for Cygwin Installation: http://www.cygwin.com/install.html
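Note that Hadoop's start-up scripts also expect passphraseless SSH to localhost, so inside Cygwin you will additionally want something like this minimal sketch:

    # Generate a passphraseless key and authorize it for localhost logins
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    # Verify that ssh localhost now works without a password prompt
    ssh localhost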
