Too little RAM for Kaa Server installation

I want to run a test with Kaa, so I tried to install the sandbox on my laptop, but it only has 4 GB of RAM; when I set up the virtual machine, the system won't let me allocate more than 1.6 GB, and the VM won't start.
So I tried another old laptop instead: I installed Ubuntu 16.04 and followed the step-by-step instructions on the Kaa project's website. The installation succeeded, but the server won't start. I checked the error log, and it says the problem is in the Java virtual machine: it can't start because the machine only has 2 GB of RAM. I only need to test a little application, so is it possible to change this requirement in the JVM and start the system?
PS: I can't buy more RAM.

I recommend using Amazon AWS. A basic instance that can run Kaa is free for one year under the free tier, and it runs quite well there.
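If you'd still like to try it locally first, the generic knob is the JVM heap flags (-Xms/-Xmx) on the java command line that starts the server. Where the Kaa node reads those options from is an assumption on my part (check its startup script or service defaults); the sketch below only demonstrates the flags themselves, with `-version` standing in for the real main class.

```python
# Hedged sketch: capping a Java process's heap with -Xms/-Xmx.
# "-version" is a stand-in for the actual kaa-node jar/main class,
# whose location varies by install and is not assumed here.
import subprocess

subprocess.run([
    "java",
    "-Xms256m",   # initial heap
    "-Xmx1024m",  # cap the heap well under the laptop's 2 GB
    "-version",
], check=True)
```

Whether the server actually runs usefully in 1 GB is another question; it may still fail under load.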

Related

Hadoop installation using Cloudera VMware

Can anyone please let me know the minimum RAM required (on the host machine) for running Cloudera's Hadoop on VMware Workstation?
I have 6 GB of RAM. The documentation says the VM requires 4 GB.
Still, when I run it, CentOS loads and then the VM crashes. I have no other applications running at the time.
Are there any other options apart from installing Hadoop manually?
Your host machine may be running out of memory, or some other issue may be preventing the VM from booting completely. There are a couple of other options if you don't want to deal with a manual install:
If you have access to a Docker environment, try the Docker image they provide (see the sketch after this list).
Run it in the cloud with AWS, GCE, or Azure; they usually have a small allotment of personal/student credits available.
For AWS, EMR also makes it easy for you to run something repeatedly.
For really short durations, you could try the demo from Bitnami (https://bitnami.com/stack/hadoop) and just run whatever you need to there.
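For the Docker option, a minimal sketch follows. It assumes the `cloudera/quickstart` image and its bundled `/usr/bin/docker-quickstart` init script, which is what Cloudera documented at the time; adjust if the image has moved. The docker CLI is driven from Python here only to keep the examples in one language.

```python
# Hedged sketch: launch the Cloudera QuickStart container.
# Assumes Docker is installed and the cloudera/quickstart image is
# still published under that name.
import subprocess

subprocess.run([
    "docker", "run",
    "--hostname=quickstart.cloudera",  # the image expects this hostname
    "--privileged=true",               # some CDH services need it
    "-t", "-i",                        # interactive terminal
    "-p", "8888:8888",                 # expose Hue's web UI
    "cloudera/quickstart",
    "/usr/bin/docker-quickstart",      # init script shipped in the image
], check=True)
```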

How to run MapR?

I am trying to run the MapR sandbox on a Windows PC with 8 GB of RAM. When I try to import the OVF, it always says the OVF is corrupt, even though I have downloaded it from multiple sources; the same OVF that runs on another machine does not run on mine. I have tried playing with the configuration, and I also tried extracting the OVF and running the VMDK directly, but then no configuration gets set up, so that doesn't work either. In VMware Player it first said the OVF format is unsupported; on retry it skipped the OVF specification check and imported the file successfully, but now it says the VMX file is incompatible. I cannot find any way out.
I did the following to install it on Ubuntu 14.04 (since a virtual machine is the final destination, there shouldn't be major differences elsewhere):
In VirtualBox
Don't use the OVF file.
Create a virtual machine (Machine -> New...)
For the operating system, choose Red Hat (64-bit)
For memory, assign 8 GB to the VM (or less, if you have an old computer like me :D)
Don't add virtual drives at this step; the wizard won't let you add both disks here. Use the option "Do not add a Virtual Hard Drive"
After creating the VM
Add both disks to the virtual machine, from Settings
Configure the machine's network as follows
Attached to: "Bridged Adapter"
Name: eth0
Adapter Type: Intel PRO/1000 MT Desktop
Promiscuous Mode: Deny
Cable Connected: Yes
After these small steps, you should be able to right-click -> Start and begin using MapR. Basically, we import the machine in a roundabout way, because the OVF file that is supposed to handle the import doesn't work!
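If you'd rather script those clicks, the same setup can be driven through VirtualBox's `VBoxManage` CLI; a hedged sketch is below. The VM name and the .vmdk file names are placeholders for whatever your extracted sandbox archive actually contains.

```python
# Hedged sketch: recreate the manual VirtualBox setup above with VBoxManage.
# "MapR-Sandbox" and the .vmdk names are placeholders; substitute the disk
# files you extracted from the sandbox archive.
import subprocess

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", "MapR-Sandbox", "--ostype", "RedHat_64", "--register")
vbox("modifyvm", "MapR-Sandbox",
     "--memory", "8192",            # 8 GB, as in the GUI steps above
     "--nic1", "bridged",
     "--bridgeadapter1", "eth0",    # host interface to bridge to
     "--nictype1", "82540EM",       # Intel PRO/1000 MT Desktop
     "--nicpromisc1", "deny",
     "--cableconnected1", "on")
vbox("storagectl", "MapR-Sandbox", "--name", "SATA", "--add", "sata")
for port, disk in enumerate(["disk1.vmdk", "disk2.vmdk"]):  # both sandbox disks
    vbox("storageattach", "MapR-Sandbox", "--storagectl", "SATA",
         "--port", str(port), "--device", "0",
         "--type", "hdd", "--medium", disk)
vbox("startvm", "MapR-Sandbox")
```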
I was facing the same issue on my Windows 7 machine. Here is what I did:
Downloaded the MapR sandbox for VMware for Windows again.
Uninstalled the previous version of VMware, which was causing the issue, and installed VMware Workstation Player for Windows 64-bit.
This time it worked.
As I had the chance to experiment with MapR recently: MapR needs at least 6 GB of RAM for the VirtualBox VM (or whichever virtual machine software you are using on Windows). If you don't grant MapR those 6 GB, it simply won't start, with a strange error that says nothing about the actual issue. You have 8 GB of RAM on your Windows machine, so I recommend giving at least 6.2 GB to the process.
P.S. I later had other problems with MapR, as you can see, with no support. (Previously I found one more bug that they say will be fixed in MapR 6.)
I am currently using MapR 5.2.

How to install PySpark & Spark for learning purposes on a laptop with limited resources?

I have a Windows 7 laptop with 6 GB of RAM. What is the most RAM/resource-efficient way to install PySpark and Spark on this laptop just for learning purposes? I don't want to work with actual big data; a small dataset is ideal, since this is just about learning PySpark and Spark in general. I would prefer the latest version of Spark.
FYI: I don't have Hadoop installed.
Thanks
You've basically got three options:
Build everything from source
Install VirtualBox and use a pre-built VM like the Cloudera QuickStart
Install Docker and find a suitable container
Getting everything up and running when you choose to build from source can be a pain. You've got to install the JDK, build Hadoop and Spark (both of which require you to install additional software to build them), set up a bunch of environment variables, and then pray that none of it messed anything up.
VMs are nice, particularly the one from Cloudera, but you'll often be stuck with an older version of Spark and it might be tight with the resources you described.
I'd go with Docker.
Once you've got Docker installed, it becomes very easy to try Spark (and lots of other technologies). My favorite containers for playing around use IPython or Jupyter notebooks.
Install Docker:
https://docs.docker.com/installation/windows/
Jupyter Notebook Python, Spark, Mesos Stack
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
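Once the container is up (the repo's run command at the time was along the lines of `docker run -p 8888:8888 jupyter/pyspark-notebook`; check the link above for the current form), a minimal session for small datasets might look like the following. The 1 GB driver memory is an assumed value chosen to fit a 6 GB machine, not a documented requirement.

```python
# Minimal PySpark session sketch for a resource-limited laptop.
# Works in the jupyter/pyspark-notebook container or any local PySpark
# install; the 1g driver memory is a deliberately small, assumed value.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[2]")                   # two local threads, no cluster
    .config("spark.driver.memory", "1g")  # set before the JVM starts, as here
    .appName("learning")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "spark"), (2, "pyspark")], ["id", "name"])
df.show()  # a small in-memory dataset is plenty for learning the API
spark.stop()
```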
One thing to keep in mind is that you are going to have to allocate a certain amount of memory to the VM, and the remaining memory still has to run Windows. Windows 7 requires a minimum of 1 GB for a 32-bit OS or 2 GB for a 64-bit OS. So you are likely only going to wind up with around 4 GB of RAM for running the VM, which is not much.
Assuming you are 64-bit, note that Cloudera requires a minimum of 4 GB RAM to run CDH 5, but if you want to run Cloudera Express, you need 8 GB.
Running Docker from Windows will require you to use boot2docker, which keeps the entire VM in memory. It uses minimal memory (like around 27 MB) to run, so you should be fine there. A MUCH better solution than running VirtualBox!
Another option to consider would be to spin up a free machine on something like Amazon Web Services (http://aws.amazon.com) or Google Cloud (http://cloud.google.com). Particularly with the latter, you can get a free trial amount of credits, which you could use to spin up a machine with more RAM than you would typically get with AWS.

Setup multinode Hadoop cluster using virtual machines on my laptop

I have a Windows 7 laptop and I need to set up a multinode Hadoop cluster on it.
I have the following things ready:
Virtualization software, i.e. VirtualBox and VMware Player.
Two virtual machines, i.e.
Ubuntu - for the Hadoop master, and
Ubuntu - for (1x) Hadoop slave
Has anyone set up such a cluster using virtual machines on a laptop?
If so, please help me install it.
I've searched on Google, but I still don't understand how to configure this multi-node Hadoop cluster using VMs.
How do I run two Ubuntu OSes on Windows 7 using VMware or VirtualBox?
Should I use VM images with the same Ubuntu version, or images with different versions of Ubuntu Linux?
Yes, you can use two Ubuntu nodes. I am using five nodes (1 master, 4 datanodes).
If you want to install a multi-node cluster in VMware:
Download Ubuntu from this link: http://www.ubuntu.com/download/desktop
Install two machines, then install Java and OpenSSH on each.
Download the multinode shell script from this link:
https://github.com/tonyreddy/Apache-MultiNode-Insatallation-Shellscript
And try it.
All the best.
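For orientation, the heart of what such a script sets up is just a few config files on each node. Below is a hedged sketch of the key entries, written from Python only to keep the examples in one language; the hostnames `master`/`slave1` and the Hadoop 2.x layout (`$HADOOP_HOME/etc/hadoop`, a `slaves` file) are assumptions to adapt to your VMs.

```python
# Hedged sketch of the key multi-node Hadoop settings. Hostnames
# "master"/"slave1" and the Hadoop 2.x config layout are assumptions;
# requires HADOOP_HOME to be set in the environment.
import os

conf_dir = os.path.join(os.environ["HADOOP_HOME"], "etc", "hadoop")

core_site = """<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>  <!-- all nodes point at the master -->
  </property>
</configuration>
"""

hdfs_site = """<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- one datanode, so one replica -->
  </property>
</configuration>
"""

with open(os.path.join(conf_dir, "core-site.xml"), "w") as f:
    f.write(core_site)
with open(os.path.join(conf_dir, "hdfs-site.xml"), "w") as f:
    f.write(hdfs_site)
with open(os.path.join(conf_dir, "slaves"), "w") as f:
    f.write("slave1\n")  # one line per worker node
```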
Since you're running Hadoop on your laptop, you're obviously doing it for learning purposes, building a POC, or functional debugging.
Instead of going through the hassle of installing and setting up Hadoop and related Big Data software, you can simply install a pre-configured pseudo-distributed VM.
Some good options are:
Cloudera QuickStart VM
Hortonworks Sandbox
I've been using Cloudera's VM on my laptop for quite some time now, and it's been working great.
Cloudera and Hortonworks are the fastest way to get it up and running.
Make sure you have enough RAM installed on your laptop for the operating system that's already running; otherwise your laptop will often restart abruptly while you use the virtual machines.
Let me give you an example:
If you are using Windows 10, it needs 3-5 GB of RAM to work smoothly.
This means that if you load a 5 GB virtual machine into RAM, Windows may crash when it can't find enough RAM to operate.
You would need to upgrade the RAM from 8 GB to 12 GB, or ideally 16 GB, for smooth operation of your laptop.
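To make that arithmetic concrete, here is a throwaway check (the figures are the rough estimates from the example above, not measurements):

```python
# Rough headroom check for the example above; all figures are estimates.
host_ram_gb = 8
windows_needs_gb = 4   # midpoint of the 3-5 GB estimate
vm_size_gb = 5
headroom_gb = host_ram_gb - windows_needs_gb - vm_size_gb
print(headroom_gb)     # -1: the VM plus Windows oversubscribe 8 GB
```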
Hope it helps

Recover Windows 7

I've just started with Ubuntu and have hit my first considerable error. I'm looking for help.
I have an HP Pavilion dv6 (i7). I had Windows 7 installed, and I decided to also install Ubuntu from a USB drive.
My first attempt was to install Ubuntu 11.10 following the instructions on the official Ubuntu website. When booting from the pen drive, my PC got stuck at the Ubuntu main menu; after searching, I found it could be due to a problem with my AMD Radeon graphics card (or not), but I decided to switch versions.
Then I used Ubuntu 10.04. This time I could get from the start menu into the Ubuntu live session. There I decided to install it, because I liked it and I need to develop for Google TV (which isn't possible on Windows).
Then I failed in the partitioning section. I tried to follow the instructions on this page:
http://hadesbego.blogspot.com/2010/08/instalando-linux-en-hp-pavilion-dv6.html
but some things had changed a bit, so I improvised. I shrank the Windows partition from 700,000 MB to 600,000 MB, leaving 100 GB free to install Linux in. The mistake was setting it to ext3 (it was NTFS): I thought the new 100 GB partition would be formatted as ext3 while the Windows partition stayed NTFS, but no.
The upshot: I can no longer boot Windows, and on top of that I can't install Ubuntu on the free 100 GB.
Does anyone think they can help? Is there any easy way to convert the Windows partition back to NTFS without losing data?
Thank you very much.
You should be able to hit F11 when the machine is booting up and go to the HP recovery application. This should let you reset to factory default.
You should definitely be able to install Ubuntu on the new 100GB partition as well. Just make sure you choose the right partition to install it on.
You will need to recover using recovery CDs/DVDs. You must have been using the installer's GParted utility in Linux to "re-partition" your drive, and you scrubbed some boot files.
If you successfully recover using the recovery media, you can use Disk Management in Windows 7 to shrink or extend your volume. In your case you would shrink it by 100 GB, and then when installing Linux, GParted will see that available 100 GB and install there, while Windows will still run.
Also, you should probably be running the ext4 filesystem, not ext3; you would only want ext3 for compatibility reasons.
