How to consolidate Health Check Script Output of different servers - shell

I have developed a shell script that we use for health checks of our servers and then send by email every 8 hours.
It's working fine on 8 servers. Now the requirement is to consolidate the output of these eight servers - how can I do that?
Any recommendations?
For example, FTP all the output files into one folder and then send those files as attachments, or is there another approach?
Regards,

Split the health-check and the emailing into 2 different scripts.
Run the emailing on one of the servers (after the health checks are complete on all servers).
To consolidate:
The easiest way would be to establish a shared/NFS mount across all servers.
Alternatively, configure ssh keys so you can passwordlessly grab the output files from the other servers via scp (see the sketch below).
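For example, a minimal sketch of the scp approach (the hostnames, paths, and recipient address are made up, and it assumes ssh keys and mailx are already set up on the collecting server):
SERVERS="server1 server2 server3 server4 server5 server6 server7 server8"
OUTDIR=/tmp/healthcheck_$(date +%Y%m%d%H)
mkdir -p "$OUTDIR"
for host in $SERVERS; do
    # each server is assumed to write its latest report to this path
    scp "$host:/var/log/healthcheck/report.txt" "$OUTDIR/${host}_report.txt"
done
# one consolidated mail instead of eight separate ones
cat "$OUTDIR"/*_report.txt | mailx -s "Consolidated health check $(date)" ops-team@example.com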

Related

Hosts File for Greenplum Installation

I am setting up a Greenplum 3-node cluster for a POC. While checking the installation steps, I found that the hostfile_exkeys file has to be on the master node.
Can anyone tell me where I should create this file (which location, which node, etc.)?
And most importantly, what should I put in it?
You create hostfile_exkeys on the Master. It isn't needed on the other hosts. You can put it in /home/gpadmin or anywhere that is convenient for you.
You put the three hostnames for your POC in this file. Example:
mdw
sdw1
sdw2
This is documented pretty well here: https://gpdb.docs.pivotal.io/5120/install_guide/prep_os_install_gpdb.html
You can also run a POC in the cloud. Greenplum is available in AWS, Azure, and GCP. It does all of the configuration for you. You can even use the BYOL product listings for 90 days for free to evaluate the product or you can use the Hourly billed products to get support while you evaluate the product.
There are examples in the utility reference for the gpssh-exkeys documentation but, in general, you should put all the hostnames in your cluster in this file. If there are multiple network interfaces, those can go in instead.
I generally put this file either in /home/gpadmin or /home/gpadmin/gpconfigs (a good place to keep all the files for the initial setup and initialization).
Your file will look something like (one name per line):
mdw
sdw1
sdw2
If there are 2 network interfaces, it might look something like:
mdw
mdw-1
mdw-2
sdw1
sdw1-1
sdw1-2
sdw2
sdw2-1
sdw2-2
Your /etc/hosts file (on all servers) should include the IP addresses for all the interfaces and their names, so this file should match the names listed in /etc/hosts.
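For instance, to match the two-interface hostfile above, the /etc/hosts entries might look something like this (the IP addresses here are made up for illustration):
192.0.2.10   mdw
192.0.2.11   mdw-1
192.0.2.12   mdw-2
192.0.2.20   sdw1
192.0.2.21   sdw1-1
192.0.2.22   sdw1-2
192.0.2.30   sdw2
192.0.2.31   sdw2-1
192.0.2.32   sdw2-2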
This is primarily to allow the master to exchange ssh keys with all hosts so that login to the hosts is always password-less. After you have this file set up, you will run (example):
gpssh-exkeys -f /home/gpadmin/gpconfigs/yourhostfilename
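Once the key exchange completes, you could sanity-check password-less access from the master with gpssh (this step is optional and uses the same hypothetical hostfile path as above):
gpssh -f /home/gpadmin/gpconfigs/yourhostfilename -e 'hostname'
Each host should report its name back without prompting for a password.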
I hope this helps.

JMeter Remote testing - 2 slaves

I'm executing a JMeter load test on my system. We have 1 client server with the JMeter GUI and 2 slave servers.
e.g.
client: 192.168.1.1
slave1: 192.168.1.2
slave2: 192.168.1.3
We are testing an application where I need to log in, do something, and log out.
Is it possible to test such an application with 2+ slaves? I cannot log in with the same user more than once on the server in the same session; I get a license error: "User is connected from another machine".
I know that JMeter multiplies the number of threads by the number of slaves, but how do I handle this situation?
Thanks
JMeter uses local CSV files in distributed mode. So you just place different files on each slave and it works.
For distributed testing, the CSV file must be stored on the server host system in the correct relative directory to where the JMeter server is started.
According to the Apache JMeter documentation,
By default, the file is only opened once, and each thread will use a different line from the file. However, the order in which lines are passed to threads depends on the order in which they execute, which may vary between iterations.
If you want each thread to have its own set of values, then you will need to create a set of files, one for each thread. For example test1.csv, test2.csv, …, testn.csv. Use the filename test${__threadNum}.csv and set the "Sharing mode" to "Current thread".
So just put your different credentials in different CSV files.
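For example, a minimal sketch of that idea (the file names and JMeter install path are hypothetical; the CSV Data Set Config in the test plan would simply point at users.csv, which holds different users on each slave):
# each slave gets different credentials under the same local filename
scp users_slave1.csv user@192.168.1.2:/opt/apache-jmeter/bin/users.csv
scp users_slave2.csv user@192.168.1.3:/opt/apache-jmeter/bin/users.csv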
Either of the solutions below will address your issue. I use Redis - it is super cool.
Redis:
http://www.testautomationguru.com/jmeter-make-data-sharing-easy-in-distributed-mode-using-redis/
HTTP Simple Table Server:
https://jmeter-plugins.org/wiki/HttpSimpleTableServer

Talend - Read files from several FTP servers

I have several FTP servers (4 servers) where files are generated by an application.
This application generates the same type of file, with the same structure, on all 4 servers.
With Talend, whenever a file changes on one of the servers, I need to retrieve its data and put it in ActiveMQ.
What would you suggest? With tFTP there is no tWaitForFile component.
Staying within that architectural approach, you could poll the FTP servers to detect a change in a file's modification timestamp or size.
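Staying outside Talend for the detection step, a crude shell sketch of that polling idea could look like the following (server names, credentials, directory, and the 60-second interval are all hypothetical; curl returns an FTP directory listing, including sizes and dates, when the URL ends in a slash):
SERVERS="ftp1.example.com ftp2.example.com ftp3.example.com ftp4.example.com"
STATE_DIR=/tmp/ftp_poll_state
mkdir -p "$STATE_DIR"
while true; do
    for s in $SERVERS; do
        # grab the current directory listing and compare it with the previous one
        curl -s --user "appuser:apppass" "ftp://$s/outgoing/" > "$STATE_DIR/$s.new"
        if ! cmp -s "$STATE_DIR/$s.new" "$STATE_DIR/$s.old" 2>/dev/null; then
            echo "Change detected on $s - fetch the file and publish it to ActiveMQ here"
        fi
        mv "$STATE_DIR/$s.new" "$STATE_DIR/$s.old"
    done
    sleep 60
done
The actual fetch and the publish to ActiveMQ would still be done by your Talend job; the loop above only stands in for the missing tWaitForFile-style trigger.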

Trigger a mainframe job from Windows machine

I am converting my Windows script that uses FTP so that it uses SFTP instead.
To trigger the mainframe job we had the following commands:
quote site filetype=jes
put C:\Test\test.dat
bye
sftp.exe uname#servername
But site filetype=jes does not work in SFTP. What will be the equivalent command for SFTP to trigger the mainframe job by sending a trigger file?
There are several options:
1. You can use a different FTP server (such as the Co:Z product mentioned in another answer).
2. You can wrap a conventional FTP session in a secure network session (VPN, SSH, etc.) in a way that keeps the connection secure but doesn't require SFTP. This gives you the security of SFTP while letting you continue to use your existing FTP scripting unchanged.
3. You can swap FTP for more of a shell approach (SSH) to log in to the mainframe and submit your JCL. Once you have any sort of shell session, there are many ways to submit JCL - see http://www-01.ibm.com/support/knowledgecenter/SSLTBW_1.13.0/com.ibm.zos.r13.bpxa500/submit.htm%23submit for an example.
A slight variant on #3 (above) is that you can have a "submit JCL" transaction in something like a web server, if you're running one on z/OS. This gives you a way to submit JCL using an HTTP request, say through CURL or WGET (if you go this way, be sure someone carefully reviews the security around this transaction... you probably don't want it open to the outside world!).
4. If this is something you do over and over, and if your site uses job scheduling software (CA-7, Control-M, OPC, Zeke, etc. - most sites have one of these), almost all of these products can monitor for file activity and launch batch jobs when a file is created. You'd simply create a file with an SFTP "PUT", and the job scheduling software would do its thing (see the sketch below).
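A rough sketch of that trigger-file idea, assuming OpenSSH's sftp client on Windows, key-based authentication, and a job scheduler watching a hypothetical landing directory on the mainframe: the command file, here called sftp_put.txt, would contain only
put C:\Test\test.dat /u/prod/landing/test.dat
bye
and the Windows script would run it in batch mode with
sftp.exe -b sftp_put.txt uname@servername
(the user@host form here assumes standard OpenSSH syntax; the scheduler then notices the new file in the landing directory and launches the batch job).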
Good luck!
If you're using the Co:Z SFTP server on z/OS, you can submit mainframe batch jobs directly.
Strictly speaking this isn't a trigger file, but it does appear to be the equivalent of what you describe as your current FTP process.

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the report server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server&report='||p_report_name||'&userid='||p_connstring||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist||'&cc='||p_cc_recipientlist||'&bcc='||p_bcc_recipientlist||'&subject=%22' || REPLACE(p_subject,' ','%20') || '%22&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take one of the servers down without any real warning, so I can't just hardcode a report server name in the ?server= parameter above: it would work for a while and then stop working when that server goes down.
My server team asked me to look for a way to pull the server name from the formsweb.cfg file or from a default.env value within the job (there are parameters in there that hold the server name). The idea is that the "http://server" piece will direct the report to be run on the appropriate server, and the first part of the job could get the report server name from the config file on the server the report runs on. I'm not sure whether this is possible from the database level, or how to do it. Any ideas?
Is there a better way that this can be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
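As a quick illustration (reports.yourdomain.com is the hypothetical VIP name from above), the job would point the http://server portion of the URL at the VIP rather than at reportA or reportB, and you could sanity-check reachability from the database host with something like:
curl -s "http://reports.yourdomain.com/reports/rwservlet"
The same VIP hostname then replaces the hard-coded server in the UTL_HTTP.REQUEST call shown in the question.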
