Bitnami LAPP installation error

I am trying to install the Bitnami LAPP stack (Linux, Apache, PostgreSQL, PHP) on a CentOS 6.4 64-bit operating system. In the README file, the following system configuration is expected:
REQUIREMENTS
To install Bitnami LAPP stack you will need:
- Intel x86 or compatible processor
- Minimum of 256 MB RAM
- Minimum of 150 MB hard drive space
- An x86 Linux operating system
- TCP/IP protocol support
During the installation, I receive a warning which says that I need at least 2000 MB of memory available (the README says the minimum requirement is 256 MB).
After choosing the installation parameters, I receive a storage error, even though I have more disk space than the README requires.
What am I doing wrong?

The problem was solved by the Bitnami team. I think it was a definition problem in the setup environment.
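If you hit similar warnings, it can help to check what resources the installer actually sees before contacting support. A minimal sketch, assuming a Linux host with Python available; the /opt path below is only an example install target, not anything from the Bitnami README:

```python
# Print the free disk space and memory the installer will be working with.
# A sketch: "/opt" is only an example install target, adjust to your own.
import os

st = os.statvfs("/opt")
free_mb = st.f_bavail * st.f_frsize // (1024 * 1024)
print("free disk space: %d MB" % free_mb)

# Memory as reported by the kernel (on CentOS 6 only MemTotal/MemFree appear)
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith(("MemTotal", "MemFree", "MemAvailable")):
            print(line.strip())
```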

Related

Regarding hardware requirements for FreeSWITCH

I have Ubuntu 16.04 desktop edition installed on a 64-bit machine with 4 GB RAM and an Intel Core i3 processor at 2.13 GHz.
I need to install FreeSWITCH for a small project. It will take only one call at a time. I tried looking up the hardware requirements for FreeSWITCH on their wiki, but I am not able to find them.
Will FreeSWITCH run fine on my laptop? Is there a page giving details about its minimum hardware requirements? Thanks.
Update: I got some more info on another website, in the section "Hardware and Software Requirements" of a FreeSWITCH versus Asterisk comparison:
Min Requirement
System Tuning
Minimum/Recommended System Requirements:
- 32-bit OS (64-bit recommended)
- 512 MB RAM (1 GB recommended)
- 50 MB of disk space
System requirements depend on your deployment needs.
If you just want the group video calling feature, go for the 1.6.x version; otherwise, 1.4.x is enough.

How to run MapR?

I am trying to run the MapR sandbox on a Windows PC with 8 GB RAM, but when I try to import the OVF it always says the OVF is corrupt, even though I have used multiple sources; the same OVF that runs on another machine does not run on mine. I have tried playing with the configuration, and I also tried extracting the OVF and running the VMDK directly, but then no configuration is set up, so that doesn't work either. I then tried VMware Player: it installed, said the OVF format is unsupported, and on retrying it ignored the OVF specification concern and imported the file successfully, but now it says the VMX file is incompatible. I cannot find any way out.
I did the following to install it on Ubuntu 14.04 (since virtual machines are the final destination, there shouldn't be major problems):
In VirtualBox
Don't use the OVF file.
Create a virtual machine (Machine -> New...)
For the operating system, choose Red Hat (64-bit)
For memory, assign 8 GB to the VM (or less, if you have an old computer like me :D)
Don't add virtual drives here; the wizard won't let you add both disks. Use the option "Do not add a virtual hard drive"
After creation of the VM
Add both disks to the virtual machine from Settings
Configure the machine's network as follows:
Attached to: "Bridged Adapter"
Name: eth0
Adapter Type: Intel PRO/1000 MT Desktop
Promiscuous mode: Deny
Cable connected: yes
After these small steps, you should be able to right-click -> Start and begin using MapR. Basically, we import the machine in a rather roundabout way, because the OVF file that is supposed to be used for importing doesn't work!
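For reference, the same steps can be scripted with VBoxManage instead of clicking through the GUI. This is only a sketch: the VM name, the memory size, the disk file names, and the "eth0" host interface are assumptions you would need to adapt to your own download and host.

```python
# Scripted version of the manual VirtualBox steps above, via VBoxManage.
# A sketch only: VM name, memory size, disk file names and the host
# interface "eth0" are assumptions; adjust them to your setup.
import subprocess

VM = "MapR-Sandbox"
DISKS = ["MapR-Sandbox-disk1.vmdk", "MapR-Sandbox-disk2.vmdk"]  # both sandbox disks

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", VM, "--ostype", "RedHat_64", "--register")
vbox("modifyvm", VM, "--memory", "8192")            # or less on an older machine
vbox("storagectl", VM, "--name", "SATA", "--add", "sata")
for port, disk in enumerate(DISKS):                 # attach both disks
    vbox("storageattach", VM, "--storagectl", "SATA",
         "--port", str(port), "--device", "0", "--type", "hdd", "--medium", disk)
vbox("modifyvm", VM,                                # bridged networking, as above
     "--nic1", "bridged", "--bridgeadapter1", "eth0",
     "--nictype1", "82540EM",                       # Intel PRO/1000 MT Desktop
     "--nicpromisc1", "deny", "--cableconnected1", "on")
vbox("startvm", VM, "--type", "headless")
```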
I was facing the same issue on my Windows machine. Here is what I did:
Downloaded the MapR sandbox for VMware for Windows again.
Uninstalled the previous version of VMware that was giving this issue and downloaded VMware Workstation Player for Windows 64-bit.
This time it worked.
As I had the chance to experiment with MapR recently:
MapR needs at least 6 GB of RAM for the VirtualBox VM (or whichever virtual machine you are using on Windows).
If you don't grant MapR those 6 GB, it simply does not start, with a strange error that says nothing about the real issue. You have 8 GB of RAM on your Windows machine, so I recommend giving the process at least 6.2 GB.
P.S. I later ran into other problems with MapR, as you can see, with no support. (Earlier I also found one more bug that they say will be fixed in MapR 6.)
I am currently using MapR 5.2

How to install pyspark & Spark for learning purposes on a laptop with limited resources?

I have a Windows 7 laptop with 6 GB RAM. What is the most RAM/resource-efficient way to install pyspark and Spark on this laptop just for learning purposes? I don't want to work on actual big data; a small dataset is ideal, since this is just for learning pyspark and Spark in general. I would prefer the latest version of Spark.
FYI: I don't have Hadoop installed.
Thanks
You've basically got three options:
Build everything from source
Install Virtualbox and use a pre-built VM like Cloudera Quickstart
Install Docker and find a suitable container
Getting everything up and running when you choose to build from source can be a pain. You've got to install the JDK, build Hadoop and Spark (both of which require you to install additional software to build them), set up a bunch of environment variables, and then pray that didn't mess anything up.
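If you do go that route, the Python-side wiring usually looks something like the sketch below. It assumes a Spark build unpacked at /opt/spark and the third-party findspark helper package; both the paths and the package choice are my assumptions, not part of this answer.

```python
# Point Python at a locally built/unpacked Spark and run a tiny smoke test.
# A sketch: the JAVA_HOME and /opt/spark paths are examples, not real paths.
import os
import findspark  # pip install findspark

os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
findspark.init("/opt/spark")   # adds Spark's python/ directory to sys.path

import pyspark

sc = pyspark.SparkContext(master="local[2]", appName="smoke-test")
print(sc.parallelize(range(100)).sum())   # should print 4950
sc.stop()
```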
VMs are nice, particularly the one from Cloudera, but you'll often be stuck with an older version of Spark and it might be tight with the resources you described.
I'd go with Docker.
Once you've got Docker installed, it becomes very easy to try Spark (and lots of other technologies). My favorite containers for playing around use IPython or Jupyter notebooks.
Install Docker:
https://docs.docker.com/installation/windows/
Jupyter Notebook Python, Spark, Mesos Stack
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
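Once the pyspark-notebook container is running, a notebook cell like the following is enough to confirm Spark works within the laptop's resources. This is only a sketch; the local[*] master and the toy DataFrame are my assumptions, not something prescribed by the image.

```python
# Minimal local Spark session inside the notebook container (a sketch).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")            # run on the laptop's cores, no cluster
         .appName("learning-spark")
         .getOrCreate())

df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "letter"])
df.show()
print(df.count())                       # 3

spark.stop()
```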
One thing to keep in mind is that you are going to have to allocate a certain amount of memory for the VM and the remaining memory still has to operate Windows. Windows 7 requires a minimum of 1 GB for a 32-bit OS or 2 GB for a 64-bit OS. So likely you are only going to wind up with around 4 GB of RAM for running the VM, which is not much.
Assuming you are 64-bit, note that Cloudera requires a minimum of 4 GB RAM to run CDH 5, but if you want to run Cloudera Express, you need 8 GB.
Running Docker from Windows will require you to use boot2docker, which keeps the entire VM in memory. It uses minimal memory (like around 27 MB) to run, so you should be fine there. A MUCH better solution than running VirtualBox!
Another option to consider would be to spin up a free machine on something like Amazon Web Services (http://aws.amazon.com) or Google Cloud (http://cloud.google.com). Particularly with the latter, you can get a free trial amount of credits, which you could use to spin up a machine with more RAM than you would typically get with AWS.

Apache Hadoop - node machine disparity?

I have an old desktop with an Intel dual-core processor (32-bit) and Ubuntu 12.04 Desktop edition (again, 32-bit) running on it. I wish to set up at least a 4-node Apache Hadoop cluster. For that, I'm planning to buy some used desktops, which may come cheap. However, I'm confused about the following queries:
Can Apache Hadoop work with disparate nodes in a cluster - one node running 32-bit Ubuntu 12.04 while another runs the 64-bit version?
I think the OS version has to be the same across the cluster nodes - am I correct?
As per the official site, 1.0.3 is the latest stable version - will it work with 32-bit machines, or do all the nodes need to be 64-bit?
The answers to the above queries will help me determine what kind of processors etc. I should purchase to build the cluster (suggestions are welcome!).
Can Apache Hadoop work with disparate nodes in a cluster - one node running 32-bit Ubuntu 12.04 while another runs the 64-bit version?
As per the official site, 1.0.3 is the latest stable version - will it work with 32-bit machines, or do all the nodes need to be 64-bit?
Everything runs on top of Java, so if you can install a 32-bit Java, you can run Hadoop. There are, however, some native parts; I believe they are cross-compiled to work on both x86 and x64.
Since the communication between nodes takes place via RPC (pure Java code), mixing 32-bit and 64-bit nodes should work, although I haven't tried it out yet.
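If you want to check which kind of JVM a given node actually has before adding it to the cluster, something like the following works; treat it as a sketch, since the exact properties printed depend on the JVM installed (it assumes java is on PATH and supports -XshowSettings, as OpenJDK/Oracle 7+ do).

```python
# Print the JVM's architecture properties (32- vs 64-bit data model).
import subprocess

result = subprocess.run(
    ["java", "-XshowSettings:properties", "-version"],
    capture_output=True, text=True
)
for line in result.stderr.splitlines():   # the JVM prints these settings to stderr
    if "sun.arch.data.model" in line or "os.arch" in line:
        print(line.strip())               # e.g. "sun.arch.data.model = 64"
```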
I think the OS version has to be the same across the cluster nodes - am I correct?
Not necessarily, but for ease of debugging problems and to keep the cluster homogeneous in case of updates, I wouldn't mix OS versions.

Recover Windows 7

I am just starting out with Ubuntu and have hit my first serious error. I'm looking for help.
I have an HP Pavilion dv6 with a Core i7. I had Windows 7 installed and decided to also install Ubuntu using a USB stick.
My first attempt was to install Ubuntu 11.10, following the instructions on the official Ubuntu website. When booting from the pendrive, my PC got stuck at the Ubuntu main menu; after searching, I found it could be due to a problem with my AMD Radeon graphics card (or not), but I decided to try something else.
Then I used Ubuntu 10.04. This time I got past the start menu into the Ubuntu live session. There I decided to install it, because I liked it and I need to develop for Google TV (which is not possible on Windows).
And I failed at the partitioning step. I tried to follow the instructions on this page:
http://hadesbego.blogspot.com/2010/08/instalando-linux-en-hp-pavilion-dv6.html
but some things were a bit different, so I improvised. I shrank the Windows partition from about 700 GB to 600 GB, leaving 100 GB free to install Linux there. The mistake was setting it to ext3 (it was NTFS). I thought only the new 100 GB partition would be formatted as ext3 and the Windows partition would stay NTFS, but that was not the case.
The result is that I can no longer boot Windows, and on top of that I cannot install Ubuntu on the 100 GB of free space.
Can someone help me? Is there any easy way to convert the Windows partition back to NTFS without losing data?
Thank you very much.
You should be able to hit F11 when the machine is booting up and go to the HP recovery application. This should let you reset to factory default.
You should definitely be able to install Ubuntu on the new 100GB partition as well. Just make sure you choose the right partition to install it on.
You will need to recover using recovery CDs/DVDs. You must have been using the GParted utility in the Linux installer to re-partition your drive, and in doing so you scrubbed some boot files.
If you successfully recover using the recovery media, you can use Disk Management in Windows 7 to shrink or extend your volume. In your case you would shrink it by 100 GB; then, when installing Linux, GParted will see the available 100 GB and install there, while Windows will still run.
Also, you should probably be using the ext4 filesystem, not ext3; you would only want ext3 for compatibility reasons.
