Greenplum: gpinitsystem backout script not working

I'm trying to install the Greenplum database, and gpinitsystem failed.
I want to run the backout script so that I can go back and change the gpinitsystem_config file port numbers.
I run:
sh backout_gpinitsystem_gpadmin_20160614_152833
This is the error I get:
[FATAL]:-Not on original master host gp_master, backout script exiting!
I'm calling this from the master. Why is this message showing up, and what can I do about it?
Unfortunately, I didn't create backup images before trying to initialize the database with gpinitsystem.

This is caused by a hostname mismatch: the master hostname recorded in your backout file doesn't match what the system now reports as its hostname.
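For a quick check, something like the following (a sketch only; the backout file name is the one from the question):
hostname                                                            # what the system reports now
grep -n gp_master backout_gpinitsystem_gpadmin_20160614_152833     # what the script has recorded
If the hostname reported by the first command differs from the one recorded in the backout script (for example because the host was renamed), that is the mismatch the script is complaining about; once the names agree, the backout should run.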

MQ command dspmqver does not work

Any help or hint would be greatly appreciated.
I am running IBM MQ on Solaris.
I tried to use the IBM MQ command that reports the MQ version, but it gives me the following error:
code:
# pwd
/opt/mqm/bin
# dspmqver
AMQ8594: WebSphere MQ commands are no longer available in /usr/bin.
In order to run MQ commands you must manage your path configuration as
described in the WebSphere MQ product documentation. In particular review the
topic on "Choosing a primary installation".
# sudo dspmqver
Sorry, user root is not allowed to execute '/usr/bin/dspmqver' as root on ud1981esb31.
You have several issues with what you were attempting to do.
First, /opt/mqm/bin/ is not in your $PATH environment variable, so when you attempted to run the dspmqver command, the shell picked up something (a shell script, I think) that the installer put in the /usr/bin/ directory.
Second, if you want to run a program from the current directory, you need to prefix it with './' (it's a Unix/Linux thing).
i.e.
./dspmqver
Sorry, user root is not allowed to execute '/usr/bin/dspmqver' as root
on ud1981esb31.
Third, you should never ever start a queue manager or run other MQ commands as 'root'. It should be done under the 'mqm' account.
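For example, a minimal sketch of running it the way described above (assuming a single MQ installation under /opt/mqm; setmqenv ships with MQ v7.1 and later):
su - mqm                      # run MQ commands as the mqm account, not root
. /opt/mqm/bin/setmqenv -s    # set PATH and the other MQ variables for the installation under /opt/mqm
dspmqver                      # now resolves to the real command, not the /usr/bin stub
The setmqenv step takes care of the "Choosing a primary installation" concern mentioned in the error text by putting the real installation's bin directory ahead of /usr/bin.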

FileNotFoundException when using startNode.bat command

I am trying to restart my node by running startNode.bat in cmd, but it gives me the following error:
ADMU0111E: Program exiting with error: java.io.FileNotFoundException: C:\Users\{userid}\workspaces\was_profiles\{profilename}\config\cells\localcell\nodes\localnode\servers\nodeagent\server.xml (The system cannot find the path specified.)
When I checked, server.xml is actually present at C:\Users\{userid}\workspaces\was_profiles\{profilename}\config\cells\localcell\nodes\localnode\servers\server1\server.xml. I am not sure how to restart my node. I am restarting it because I get an ORA invalid username/password error while trying to test my datasource connection in IBM WebSphere.
Most posts suggest restarting the WebSphere server and the node agent as well. However, when I start the node agent, I get the above error. I tried to use the syncNode command, but could not find the deployment manager host name and port details for my application.
It looks like you have the single-server version installed, not Network Deployment. In that case you don't have a node agent or deployment manager, just a single server.
You should use commands like:
startServer.bat server1 - to start the server
stopServer.bat server1 - to stop the server
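For example (a sketch only; the profile path below is the one implied by the error message and may differ on your system):
rem Run both scripts from the profile's bin directory
cd C:\Users\{userid}\workspaces\was_profiles\{profilename}\bin
stopServer.bat server1
startServer.bat server1
server1 is the default server name for a single-server (base) profile, which matches the servers\server1 directory you found.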

Windows Server 2008 R2 backup does not start as scheduled with no error messages

I have been trying to configure regular automated backups to a shared network drive using the Windows Server Backup console. When I back up manually, it works; however, it does not run on its own as scheduled. I have made sure to enable running the backup while not logged in. There are no error messages when it fails to run as scheduled; the log simply shows the next scheduled backup as the next day.
I have also tried using the wbadmin command line. My script is similar to the following:
wbadmin enable backup -addtarget:\\backupshare\myshare -include:c:\ -user:DOMAIN\mylogin -password:mypassword -schedule:19:00 -systemState -quiet -allowDeleteOldBackups
I have not received any errors from the script, and the Windows command line acknowledges that there is a scheduled backup to run. However, the backup does not run, and when I check wbadmin get status at the scheduled time, it tells me there is no backup running at the moment, with no error codes.
I am not sure why my backups will not run as scheduled when they run fine manually.
Any help would be greatly appreciated,
Thanks!
I'm going to assume that you're running it with elevated privileges and that the scheduled task runs under a Domain Admin account. I would suggest running the command manually from the command line and capturing the status output, e.g. by appending "get status > \task.log".
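A rough sketch of what that could look like (the task path is an assumption based on where Windows Server Backup normally registers its job, and C:\backup-status.log is just an example location):
rem Check that the backup job is actually registered and enabled in Task Scheduler
schtasks /query /v /tn "\Microsoft\Windows\Backup\Microsoft-Windows-WindowsBackup"
rem Capture the backup status around the scheduled time so there is something to review later
wbadmin get status > C:\backup-status.log 2>&1
wbadmin get versions >> C:\backup-status.log 2>&1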

Accessing Riak node from a remote machine (riak-admin backup)

While trying to run riak-admin backup riak@ec2-xxx.compute-1.amazonaws.com riak /home/user/backup.dat all against a remote machine (EC2 instance), I encounter the following error message:
{"init terminating in do_boot",{{nocatch,{could_not_reach_node,'riak#ec2-xxx.compute-1.amazonaws.com'}},[{riak_kv_backup,ensure_connected,1,[{file,"src/riak_kv_backup.erl"},{line,171}]},{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,40}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
I assume there's a connection / permission error, since the same backup command works if run locally on the instance (with a local node IP, of course). I should note that the server (Node.js) can connect to that IP remotely, so the port (8098) is open and accessible. Any advice on how to make the backup work remotely?
It would appear that the riak-admin backup command doesn't work remotely - and certainly it's not something I've ever tried to do. I'd recommend setting up a periodic backup (via cron or similar) and then using rsync to pull the backup file down to your local machine.
Alternatively, you could try the following hacky untested idea for a single script.
#!/bin/bash
ssh ec2-xxx.compute-1.amazonaws.com "riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all"
rsync -avP ec2-xxx.compute-1.amazonaws.com:/home/user/backup.dat .
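If you go the cron-plus-rsync route instead, a rough sketch (the schedule, node name, and paths are only examples):
# crontab entry on the EC2 instance: take a backup every night at 02:00
0 2 * * * riak-admin backup riak@ip-local-ec2 riak /home/user/backup.dat all
# then, from the local machine, pull the file down:
rsync -avP ec2-xxx.compute-1.amazonaws.com:/home/user/backup.dat .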

Ending and deleting a queue manager by force

Currently I have a queue manager that, no matter what I do, just fails to go away. I am trying to end it and delete it. This is on one of our development servers. I'm not sure what happened; the server went through hostname changes. Currently, when I run dspmq, I get:
QMNAME(QM_MIT) STATUS(Status not available)
endmqm says:
AMQ8146: WebSphere MQ queue manager not available.
dltmqm says:
AMQ8041: The queue manager cannot be restarted or deleted because processes,
that were previously connected, are still running.
AMQ7018: The queue manager operation cannot be completed.
I googled and found that the listener needs to be killed, which I did. I am running WebSphere MQ v7.1 on Linux.
What else can I do?
Do a ps -ef | grep qmgrname to find any remaining processes that were running as part of the QMgr or that were attached to the QMgr.
Next, run /opt/mqm/bin/amqiclen -x -F -m qmgrname to get rid of any shared memory segments. The command will fail if you try to run it from your $PATH or with a relative path instead of providing the fully-qualified path name.
See WebSphere MQ utility amqiclen usage and description for more details.
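Putting those steps together for the queue manager in the question, something like this (run as the mqm user; replace <pid> with the process IDs found by ps):
ps -ef | grep QM_MIT                    # list processes still attached to the queue manager
kill <pid>                              # stop any leftover amq*/runmqlsr processes found above
/opt/mqm/bin/amqiclen -x -F -m QM_MIT   # release the shared memory segments (fully-qualified path required)
dltmqm QM_MIT                           # the delete should now go through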
