Nagios Multi website hosting - amazon-ec2

We have an AWS instance where we have installed Nagios Core, and we want to host multiple monitoring sites for different sets of hosts and services. For example, when we type:
https://nagiosdev1.com/nagios_test1/ should direct to one set of host groups, and
https://nagiosdev1.com/nagios_test2/ should direct to another set of host groups.
Test2 is cloned from Test1, and both web pages are working, but they do not show their independent host groups. We modified all the host groups in Test2 so that they are different from Test1's.
At the server level, we have specifically stated which host groups each should contain, but the GUI for Test2 is showing the hosts of Test1.
If we make any change on Test1, it gets replicated on Test2, but changes on Test2 don't show up.
[aghosh@nagiosdev apt]$ cd /usr/local/nagios
test1/ test2/
[aghosh@nagiosdev apt]$ pwd
/usr/local/test2/etc/apt
[aghosh@nagiosdev apt]$ ls | wc -l
1
[aghosh@nagiosdev apt]$ pwd
/usr/local/test1/etc/apt
[aghosh@nagiosdev apt]$ ls | wc -l
25
As you can see, Test2 has only 1 host group, while Test1 has 25.
The whole point is that one site will hold the production host groups and the other site will host the non-prod host groups.
Anyone got ideas?

Your description of the problem, i.e. the steps you have performed versus what you are trying to achieve, doesn't add up, and this setup doesn't follow any Nagios best-practice methodology. With so little information on why you are creating two different host groups on the same server for two different sets of monitored hosts (prod and non-prod) and trying to reach them via different URIs, your reasoning isn't clear to me. In my several Nagios deployments on AWS and GCP I have never used such a crude method. Nagios XI has a sophisticated method of host separation, and a multi-site Nagios architecture would require you to consider a lot more than just creating two host groups. For two different sets of hosts in the same network, you can register them on the same server with host agents reporting to the master; if the hosts are in different networks, you would use two different Nagios servers, i.e. two different EC2 instances with two different URLs.
Regarding why nagios_test2 is resembling nagios_test1 in your case, I suspect you are incorrectly calling the host groups, which is why it returns the same set of data every time. You should know that Nagios XI provides an API which can be used to show host group and service group members; the vanilla free version of Nagios doesn't provide the same. That could be a factor here. It can also happen when the template data for hosts and services is copied incorrectly: the copies may pass the Nagios syntax check but still point at the wrong host/service information.
Supporting Nagios Doc: https://assets.nagios.com/downloads/nagiosxi/docs/Accessing_The_XI_Backend_API.pdf
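One quick thing worth checking in Nagios Core itself is whether both URLs are actually served from the same CGI configuration. A minimal diagnostic sketch, assuming the directory layout from your transcript and an Apache front end (the paths are assumptions; adjust to your web server config):

# which physical cgi-bin/web root each URL is aliased to
grep -Ri "nagios_test" /etc/httpd/conf.d/ /etc/apache2/ 2>/dev/null
# which main config each CGI tree reads; if both point at the same
# nagios.cfg, both UIs will show the same hosts
grep main_config_file /usr/local/test1/etc/cgi.cfg
grep main_config_file /usr/local/test2/etc/cgi.cfg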
Hope you have got your answer.

Try not to create two different directories like test1 and test2. Instead, create a soft link from test1 to test2, so that physically it appears to be two different sites while logically both point to the test1 directory:
ln -s nagios_test1 nagios_test2
https://nagiosdev1.com/nagios_test1/
https://nagiosdev1.com/nagios_test2/
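A minimal sketch of that, assuming the sites are served out of /usr/local (a hypothetical path; run it wherever your web server aliases actually point):

cd /usr/local
ln -s nagios_test1 nagios_test2
ls -l nagios_test2    # should print: nagios_test2 -> nagios_test1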

Related

Hosts File for Greenplum Installation

I am setting up a Greenplum 3-node cluster for a POC. While checking the installation steps, I found that the hostfile_exkeys file has to be on the master node.
Can anyone tell me where I should create this file (location, node, etc.)?
And most importantly, what should I put in it?
You create hostfile_exkeys on the Master. It isn't needed on the other hosts. You can put it in /home/gpadmin or anywhere that is convenient for you.
You put the three hostnames for your POC in this file. Example:
mdw
sdw1
sdw2
This is documented pretty well here: https://gpdb.docs.pivotal.io/5120/install_guide/prep_os_install_gpdb.html
You can also run a POC in the cloud. Greenplum is available in AWS, Azure, and GCP. It does all of the configuration for you. You can even use the BYOL product listings for 90 days for free to evaluate the product or you can use the Hourly billed products to get support while you evaluate the product.
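For instance, a minimal sketch of creating the file on the master (the path and hostnames are just the examples from this answer):

# run as gpadmin on the master
cat > /home/gpadmin/hostfile_exkeys <<'EOF'
mdw
sdw1
sdw2
EOF
gpssh-exkeys -f /home/gpadmin/hostfile_exkeys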
There are examples in the utility reference for the gpssh-exkeys documentation but, in general, you should put in all the hostnames in your cluster. If there are multiple network interfaces, those can go in as well.
I generally put this file either in /home/gpadmin or /home/gpadmin/gpconfigs (a good place to keep all files for initial setup and initialization).
Your file will look something like (one name per line):
mdw
sdw1
sdw2
If there are 2 network interfaces, it might look something like:
mdw
mdw-1
mdw-2
sdw1
sdw1-1
sdw1-2
sdw2
sdw2-1
sdw2-2
Your /etc/hosts file (on all servers) should include the IP addresses and names of all the interfaces, so this file should match the names listed in /etc/hosts.
This is primarily to allow the master to exchange ssh keys with all hosts, so that login to the hosts is always password-less. After you have this file set up, you will run (for example):
gpssh-exkeys -f /home/gpadmin/gpconfigs/yourhostfilename
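As a quick sanity check afterwards (gpssh reads the same hostfile format), something like the following should run on every host without a password prompt:

gpssh -f /home/gpadmin/gpconfigs/yourhostfilename -e hostname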
I hope this helps.

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in their builds and the databases they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the installation to various paths and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080 for example) for production
And
/opt/idempiere-server-test/ (on port 8081 for example) for test
The way you described is not possible, because the iDempiere web app is always known as
http://hostname:port/webui
Running multiple instances of idempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files, so you are just fine making a full installation for each instance.
Make sure each instance uses its own database, and use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you can use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; see the sketch after this list.)
There is another benefit to using subdomains for browser access: if all your server instances use the same host name, the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session, as discussed in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances, you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh will delete and recreate the DB user, which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before installing another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course, you will need to do some bookkeeping with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment, you need to assign a different telnet port number to each of the instances: in the editor of your choice, open the file /etc/init.d/idempiere, find the line export TELNET_PORT=12612, and change the port number to something else.
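For illustration, here is a minimal sketch of the mod_proxy setup mentioned above, assuming Debian/Ubuntu paths and a hypothetical subdomain test.example.com in front of an instance on port 8081 (both are placeholders):

# Apache vhost proxying one subdomain to one iDempiere instance
# (requires mod_proxy/mod_proxy_http: a2enmod proxy proxy_http)
cat > /etc/apache2/sites-available/idempiere-test.conf <<'EOF'
<VirtualHost *:80>
    ServerName test.example.com
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>
EOF
a2ensite idempiere-test
systemctl reload apache2

And a sketch of the service rename plus telnet-port change from the last two points (the instance name and new port are placeholders):

sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-test
sudo sed -i 's/^export TELNET_PORT=12612$/export TELNET_PORT=12613/' /etc/init.d/idempiere-test
sudo update-rc.d idempiere-test defaults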
Please Note:
The OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS, you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community (by giving your own answer to this question or by posting to the iDempiere Google group). Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web servers' configuration if they use some default ports.

Beowulf Cluster - Identical users on slave nodes

In relation to building a Beowulf cluster, why is it necessary to create identical users on the slave nodes? If one were to create the users on the slave nodes in a different order to the order in which they were created on the master node, what problems would occur and how would one fix them?
I have been trying to find a concrete answer to this for a few hours but with no luck. Any help would be appreciated.
Probably because of SSH access/file permissions.
If one computer needs to access another, it must have some sort of remote-login technology, and SSH uses user names. Also, if you have a file share between them, you may run into problems with file permissions when one PC writes files as one user and another tries to read them as a different user.
Regarding user creation: by default, if you don't specify a user ID, your user gets the next available one. In Ubuntu's case, normal accounts start at UID 1000, so if you create 3 users you will get the following:
USERNAME  UID
user1     1000
user2     1001
user3     1002
If you change the order on a different machine, the users will get different user IDs. Of course, you can avoid that by providing the desired UID when you create the accounts.
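For example, a minimal sketch pinning the UIDs explicitly so they match on every node (user names and UIDs taken from the table above):

sudo useradd -m -u 1000 user1
sudo useradd -m -u 1001 user2
sudo useradd -m -u 1002 user3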
I believe it is because they most likely share some sort of file system, such as /home. Any shared software will need certain permissions, and those permissions correspond to a UID or group ID. If there is a user "user" on one machine with a different UID than "user" on another machine, parts of the shared filesystem won't be accessible.
To fix it, you would need to add the user on each machine with the specific matching UID.
When an MPI program runs on several nodes, it needs to log in to those nodes, write files, etc. If the users are not in sync between the head node and the compute nodes, you can't even find the executable, because of the user permissions on the NFS share.

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the report server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server&report='||p_report_name||'&userid='||p_connstring||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist||'&cc='||p_cc_recipientlist||'&bcc='||p_bcc_recipientlist||'&subject=%22' || REPLACE(p_subject,' ','%20') || '%22&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take down one of the servers without any real warning, so I can't just hardcode a report server name in the ?server= parameter above: it would work for a while, then stop working when that server goes down.
My server team asked me to look for a way to pull the server name, within the job, from the formsweb.cfg file or from a default.env value (there are parameters in there that hold the server name). The idea is that the http://server piece will direct the report to run on the appropriate server, and the first part of the job could get the report server name from the config file of the server the report runs on. I'm not sure if this is possible from the database level, or how to do it. Any ideas?
Is there a better way this could be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
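As a quick hedged illustration: once the VIP is in place, the job's URL just swaps in the VIP host name, and you can smoke-test reachability from the database host with something like this (reports.yourdomain.com is the hypothetical VIP name from this answer):

curl -sI "http://reports.yourdomain.com/reports/rwservlet" | head -1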

How to consolidate Health Check Script Output of different servers

I have developed a shell script which we use for health checks of servers; it then sends the results by email every 8 hours.
It's working fine on 8 servers. Now the requirement is: how can I consolidate the output of these eight servers?
Any recommendations?
For example, FTP all the output into one folder and then send those files as attachments, or is there some other approach?
Regards,
Split the health check and the emailing into 2 different scripts.
Run the emailing on one of the servers (after the health checks are complete on all servers).
To consolidate:
The easiest way would be to establish a shared/NFS mount across all servers.
Alternatively, configure SSH keys to passwordlessly grab the output files from the other servers via scp (a minimal sketch follows).
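For the scp route, a minimal sketch of the consolidation script (the hostnames, paths, and recipient address are placeholders):

# collect each server's latest report, then mail one digest
OUT=/tmp/healthcheck-$(date +%Y%m%d%H)
mkdir -p "$OUT"
for h in server1 server2 server3; do
    scp "$h:/var/log/healthcheck/latest.txt" "$OUT/$h.txt"
done
cat "$OUT"/*.txt | mailx -s "Consolidated health check" ops@example.com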
