I wanted to know if it is possible to keep an eye on PostgreSQL using Supervisor on a Debian server. I haven't been able to find any examples, so I am wondering if it is worth it, or perhaps there is a more straightforward way of ensuring it is always running.
See Using PostgreSQL with Supervisor on Ubuntu 10.10 and the clarifications at Running PostgreSQL with Supervisord. There are some useful examples in a pastebin too. Given the common name of the program it can be hard to find these; the trick to finding examples is to search for supervisord instead.
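For reference, a minimal program entry is sketched below. Treat it as a sketch rather than a drop-in config: the 9.6 paths are an assumption, so adjust them to whatever PostgreSQL version Debian installed for you, and disable the stock init script/systemd unit first (e.g. sudo systemctl disable postgresql) so supervisord is the only thing managing the server.
# rough sketch of a supervisord entry for PostgreSQL -- the 9.6 paths below are assumptions
sudo tee /etc/supervisor/conf.d/postgresql.conf <<'EOF'
[program:postgresql]
command=/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf
user=postgres
autostart=true
autorestart=true
stopsignal=INT
EOF
sudo supervisorctl reread     # pick up the new file
sudo supervisorctl update     # start managing (and monitoring) postgresql
sudo supervisorctl status postgresql
The key detail is running the postgres binary directly (not pg_ctl or the init script), since supervisord needs a foreground process it can watch and restart.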
I'm new to big data solution development.
To have a complete environment, I decided to install the HDP 3.1 Docker image.
After installation, I typed these two commands as mentioned in this installation guide: https://www.cloudera.com/tutorials/sandbox-deployment-and-install-guide/3/.html
docker start sandbox-hdf
docker start sandbox-proxy
Now, after doing this, I don't know what to do next. I can't find any indication of a URL for starting my real work, and I can't find what settings I must configure before starting. Please, can anyone with HDP 3.1 experience help me? Thank you a lot.
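As a possible first step, the addresses the sandbox listens on can be read straight out of Docker, so something like the following may help locate the URL to open (container names taken from the commands above):
docker ps                           # confirm the containers are running and see the published ports
docker port sandbox-proxy           # list every host port the proxy forwards into the sandbox
docker logs sandbox-hdf | tail -50  # check the startup log for errors or a printed welcome URL
The ports that sandbox-proxy publishes are the ones to try in a browser on the Docker host (http://localhost:<port>); the tutorial linked above should say which service sits behind which port.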
I am learning to work with CouchDB and I usually make Ajax calls in order to communicate with my database. I started getting 'Cross-Origin Request Blocked' errors, with Access-Control-Allow-Origin given as the reason, so I decided to work with CouchDB over HTTPS instead of the standard HTTP. For that, I have followed the instructions given in the manual (Link to manual).
Problem on Linux:
I first tried to set it up on my laptop, where I use Linux, but I couldn't find the local.ini file where I was supposed to set the paths to the certificates.
After unsuccessfully trying to find a solution for it, I gave up and started from the beginning on my computer, where I use Windows.
Problem on Windows:
So I installed the newest version of CouchDB on Windows, created the certificates, found the local.ini file, and did everything as explained in the manual. The problem was that I couldn't restart CouchDB so that the changes would take effect. After googling the problem, I found a possible solution: stop CouchDB through Task Manager -> Services -> Stop Apache CouchDB. But when I tried to start it again, I got this error:
Windows could not start the Apache CouchDB on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 3.
I would be very happy if someone could help me with my problem(s). I would prefer a solution for the Linux problem, since I work mostly on my laptop, but I will be satisfied if I can get it going even on Windows.
Thanks in advance
On Linux, you can add CORS to CouchDB with this package:
https://github.com/pouchdb/add-cors-to-couchdb
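Roughly, using the package looks like the lines below; under the hood it writes the CORS settings through CouchDB's HTTP configuration API, so you can also do it by hand with curl. The curl paths shown are the CouchDB 1.x form; on 2.x the prefix is /_node/_local/_config/... instead, so treat the exact paths as an assumption to check against your version.
npm install -g add-cors-to-couchdb
add-cors-to-couchdb                                                  # local CouchDB at http://127.0.0.1:5984
add-cors-to-couchdb http://example.com -u myusername -p mypassword   # remote or password-protected instance
# the manual equivalent (CouchDB 1.x paths):
curl -X PUT http://localhost:5984/_config/httpd/enable_cors -d '"true"'
curl -X PUT http://localhost:5984/_config/cors/origins -d '"*"'
curl -X PUT http://localhost:5984/_config/cors/credentials -d '"true"'
curl -X PUT http://localhost:5984/_config/cors/methods -d '"GET, PUT, POST, HEAD, DELETE"'
curl -X PUT http://localhost:5984/_config/cors/headers -d '"accept, authorization, content-type, origin, referer"'
Whatever you set this way is persisted into local.ini, which on Linux usually lives at /etc/couchdb/local.ini (1.x packages) or /opt/couchdb/etc/local.ini (newer installs). origins = * is fine for development, but restrict it to your app's origin in production.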
Due to my own idiocy I have managed to run two versions of Apache on my Mac. Since a recent update to Sierra my webserver has fallen into decay. I feel it is only a matter of time until the whole environment breaks down and the OS is doomed for reincarnation.
For now I have managed to get my system-side Apache running. However, I have found that my "apachectl" command has been replaced with the brew version of Apache. Since I am not strong with the command line, I want to ask how I can revert this. For now I am starting the system-side Apache with "/usr/sbin/apachectl start".
If anyone could give me some advice on how to keep the two versions from colliding, I would be more than grateful: keep the brew Apache from autoloading, check which httpd processes are running and where they are rooted, put the brew Apache in a dumpster in the middle of the night, etc.
I also have brew versions of PHP installed, but I dare not uninstall them due to dependencies... any advice here would be appreciated as well.
IF ANY NEWCOMER READS THIS THREAD: Since I updated my Mac to the new Sierra, my whole Apache configuration has gone mad. Unfortunately I followed a very, very bad tutorial a few updates ago (https://www.getgrav.org/blog/macos-sierra-apache-multiple-php-versions) to configure my web development environment. I RECOMMEND EVERYONE TO AVOID THIS TUTORIAL! The blogger writes that his tutorial is only for advanced developers, but the tutorial itself is a total mess: there are no hints about backing up files and the configuration is all in bad style... I would advise anyone to double-check custom configurations in Apache and always back up every file you change! For me it is too late and I feel only a hard reset of the system will suffice. Dark days on the horizon...
When you upgraded to the new OS, it changed your default Apache config. The good thing is that it saves a copy, renamed to httpd.conf~previous, and also creates a folder under /etc/apache2/original with the previous default version. Just copy them back over and you're good to go.
Also, you can throw Homebrew in the dumpster using their own script:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"
Hope that helps.
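For the other points in the question (which httpd is which, and keeping the brew one from autoloading), something along these lines should work; httpd24 is only a guess at the formula name based on the linked tutorial, so substitute whatever brew list actually shows:
which -a apachectl httpd            # every apachectl/httpd on the PATH, in lookup order
ps aux | grep [h]ttpd               # which httpd processes are actually running right now
/usr/sbin/httpd -V                  # version and config file of the system Apache
brew services list                  # what Homebrew has registered to start automatically
sudo brew services stop httpd24     # stop the brew Apache and remove its launchd autoload entry (formula name is a guess)
# brew uninstall httpd24            # the dumpster option, once nothing depends on it
With the brew service stopped, /usr/sbin/apachectl should be the only Apache left running, and you can call it by its full path (as you already do) to avoid the name clash.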
I've already installed Cygwin on Windows 7. Now I plan to add Sqoop to Cygwin for Hadoop, but I'm not getting it right...
Can anybody please suggest the correct way of doing so, or a link detailing it?
I think you should reconsider installing Hadoop on Windows; it is not very easy to do and it is probably more trouble than it is worth, although I believe others have done it.
Anyway, there are several other options you could consider with regard to Hadoop. First, there are two companies I know of that provide free VMs, and one of them has worked with Microsoft to try to integrate Hadoop into Windows. These are the links:
http://www.cloudera.com/content/www/en-us/downloads/quickstart_vms/5-4.html
http://hortonworks.com/products/hortonworks-sandbox/#install
Otherwise you can try your luck with the default Apache installation, though I warn you: if you're new to Linux or don't like spending a lot of time changing configuration files, this is not the best way to go. I did my installation this way, and you have to modify a lot of files; plus anything extra like Hive, Sqoop, HBase, etc. needs to be installed separately and configured as well.
Please don't make things complicated for yourself.
I can only recommend running Sqoop on Hadoop in a Linux virtual machine or on native Linux. Although I successfully ran Hadoop 0.20.0 on Windows XP + Cygwin and Windows 7 + Cygwin, I once tried setting up a newer version of Hadoop on Windows 7 and failed miserably due to errors in Hadoop.
I have wasted days and weeks on this.
So my advice: run Hadoop on Linux if you can; you'll avoid a serious number of problems.
At my house I have about 10 computers, all with different processors and speeds (all x86 compatible). I would like to cluster these. I have looked at openMosix, but since development on it has stopped, I have decided against using it. I would prefer to use the latest or next-to-latest version of a mainstream Linux distribution (SUSE 11, SUSE 10.3, Fedora 9, etc.).
Does anyone know any good sites (or books) that explain how to get a cluster up and running using free open source applications that are common on most mainstream distributions?
I would like a load-balancing cluster for custom software I would be writing. I cannot use something like Folding@home because I need constant contact with every part of the application. For example, if I were running a simulation, one computer might control where rain was falling while another controls what my herbivores are doing in the simulation.
I recently set up an OpenMPI cluster using Ubuntu. An existing write-up is at https://wiki.ubuntu.com/MpichCluster.
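In case it helps, the rough shape of such a setup, assuming Ubuntu on every node, passwordless SSH between them, and placeholder hostnames node1/node2:
sudo apt-get install openmpi-bin libopenmpi-dev    # on every node
printf 'node1 slots=2\nnode2 slots=4\n' > hostfile # one line per machine, slots = cores to use there
mpirun --hostfile hostfile -np 6 hostname          # smoke test: runs 'hostname' once per slot across the nodes
Once the smoke test prints each node's name, the cluster is ready for real MPI programs built with mpicc, which fits the simulation example in the question (one rank handling rain, another the herbivores, exchanging messages).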
Your question is too vague. What cluster application do you want to use?
By far the easiest way to set up a "cluster" is to install Folding@home on each of your machines. But I doubt that's really what you're asking for.
I have set up clusters for music/video transcoding using simple bash scripts and ssh shared keys before.
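That approach is about as simple as it sounds; a bare-bones sketch, where the node names and file paths are made up:
ssh-keygen -t rsa            # once, on the machine handing out work; an empty passphrase keeps it non-interactive
ssh-copy-id user@node1       # repeat for every worker so ssh stops asking for a password
for host in node1 node2 node3; do
    ssh "$host" "ffmpeg -i /shared/in/$host.avi /shared/out/$host.mp4" &   # one transcode per node, in parallel
done
wait                         # block until every remote job has finished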
I manage mail server clusters at work.
You only need a cluster if you know what you want to do. Come back with an actual requirement, and someone will suggest a solution.
Take a look at Rocks. It's a full-blown cluster "distribution" based on CentOS 5.1. It installs all you need (libraries, applications and tools) to run a cluster and is dead simple to install and use. You do all the tweaking and configuration on the master node, and it helps you with kickstarting all your other nodes. I recently installed a 1200+ node (over 10,000 cores!) cluster with it, and I would not hesitate to install it on a 4-node cluster, since installing the master is almost no work!
You can either run applications written for cluster libraries such as MPI or PVM, or you can use the queue system (Sun Grid Engine) to distribute any type of job. Or use distcc to compile the code of your choice on all nodes!
And it's open source, GPL, free: everything you like!
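To give a feel for it, submitting work through the queue once Rocks has set up Sun Grid Engine looks roughly like this (script name and distcc host list are just examples):
echo 'hostname; sleep 30' > job.sh         # a trivial batch job
qsub -cwd -o job.out -e job.err job.sh     # submit it to the grid engine
qstat                                      # watch it go from queued (qw) to running (r)
# and distcc, to spread a build over the nodes:
DISTCC_HOSTS="node1 node2 node3" make -j12 CC="distcc gcc"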
I think he's looking for something similar to openMosix, some kind of general cluster on top of which any application can run distributed among the nodes. AFAIK there's nothing like that available. MPI-based clusters are the closest thing you can get, but I think you can only run MPI applications on them.
Linux Virtual Server
http://www.linuxvirtualserver.org/
I use PVM and it works. But even with just a nice SSH setup, allowing login without entering a password on each machine, you can easily launch commands remotely on your different computing nodes.
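For the PVM side of that, the console session looks roughly like this (assuming PVM is installed and PVM_ROOT is set on every node; host names are placeholders):
pvm                # start the console, which also starts the pvmd daemon on this host
pvm> add node2     # pull more hosts into the virtual machine
pvm> add node3
pvm> conf          # list the hosts currently in the virtual machine
pvm> quit          # leave the console; the daemons keep running ('halt' would shut everything down)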