How do I install Cloud Manager on my Mac? - bash

I've cloned the repository on GitHub and I'm looking for installation instructions.
So far, I've done these steps (in Mac Terminal):
cd /opt
git clone https://github.com/cldmgr/cloud-manager.git
cd cloud-manager
Now what?

What you've done so far is perfect!
Before running the installer, add /opt/cloud-manager to your PATH variable. For Bash, you would add the following line to the bottom of the .bash_profile file in your home directory:
export PATH=${PATH}:/opt/cloud-manager
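If you want the change to take effect in your current session without opening a new Terminal window, you can append the line and re-source the file (this assumes Bash is your login shell and that ~/.bash_profile is the file it reads):

```shell
# add the export line to ~/.bash_profile and reload it in the current shell
echo 'export PATH=${PATH}:/opt/cloud-manager' >> ~/.bash_profile
source ~/.bash_profile
```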
The next step is to run the installer, which will ask you a bunch of questions about your infrastructure in order to configure Cloud Manager (CM).
NOTE: you should be logged in as the user account you intend to run CM under prior to performing the installation - including the 'git clone' command.
Here's how to start the interview:
$ ./methods/cm-install
It'll look like this:
## Welcome to Cloud Manager (CM) Installer
This installer was launched because CM could not find a configuration file
and/or Ansible inventory file where expected. The following interview will
prompt you for all information needed to configure CM.
After the interview, you'll be prompted to save your answers to a response file. This is convenient in the event you want to re-configure CM without answering all the (annoying) questions again.
To do so, run "./methods/cm-init -r templates/<your_responseFile>"
## Configuring CM Core Server
Enter hostname/ip for the CM control node [localhost]:
--> accepted: localhost
Master Private Key (managed server access) [keys/master/cm-master]:
--> accepted: keys/master/cm-master
Master Public Key (managed server access) [keys/master/cm-master.pub]:
--> accepted: keys/master/cm-master.pub
Master User Account [cmadmin]:
--> accepted: cmadmin
Master Password - Cleartext [null-disallowed]:
--> accepted: ********
...
At the end of the interview it will ask you to save your response file. I recommend that you do, as it will save you time later -- in the event you want to reset everything back to defaults as you're learning to use CM.
To re-run the installer using the response file, run:
$ ./methods/cm-init -c -r templates/your-response-file.resp
Now that CM is configured, just type 'cm' to see the usage statement:
$ cm
Usage: cm [<options>] <method> [args]
[methods]
Configuration:
group <add|remove|addAttr|removeAttr|addRule|removeRule> [args]
ipam <subnet|range|checkout|checkin> [args]
Infrastructure:
create [-s][-f] <hostname> <group> [args]
createN [-s|-r] <clusterName> <N> <hostnameBase> <group> [args]
configure [-s] <hostname> [args]
deploy [-s] <hostname> <playbookName> [args]
power [-s] <on|off|cycle> [args]
decommission [-s] [-h <hostname>|all]
[-g <groupName>]
[-c <clusterName>]
reprovision [-s] <hostname>
Continuous Integration:
dso [-s] <name> <ansible|chef|puppet|cm> [args]
pipeline <add|remove|addAttr|removeAttr> [args]
System:
show <server|group|job|subnet|subnetMap|cluster> [args]
connect <hostname> [args]
system <vboxCli|encrypt> [args]
runScript <scriptName>
runCmd <hostname> <command>
[options]
-s : show standard output from ansible playbook
-x : show extended help

Can't create external initiators from chainlink CLI

We're trying to set up external initiators for our Chainlink containers deployed in a GKE cluster, following the docs: https://docs.chain.link/docs/external-initiators-in-nodes/
I log into the pod:
kubectl exec -it -n chainlink chainlink-75dd5b6bdf-b4kwr -- /bin/bash
And there I attempt to create external initiators:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink initiators create xxx xxx
No help topic for 'initiators'
I don't even see initiators in the Chainlink CLI options:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink
NAME:
chainlink - CLI for Chainlink
USAGE:
chainlink [global options] command [command options] [arguments...]
VERSION:
0.9.10@7cd042c1a94c57219ed826a6eab46752d63fa67a
COMMANDS:
admin Commands for remotely taking admin related actions
attempts, txas Commands for managing Ethereum Transaction Attempts
bridges Commands for Bridges communicating with External Adapters
config Commands for the node's configuration
job_specs Commands for managing Job Specs (jobs V1)
jobs Commands for managing Jobs (V2)
keys Commands for managing various types of keys used by the Chainlink node
node, local Commands for admin actions that must be run locally
runs Commands for managing Runs
txs Commands for handling Ethereum transactions
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--json, -j json output as opposed to table
--help, -h show help
--version, -v print the version
Chainlink version 0.9.10.
Could you please clarify what I am doing wrong?
You need to make sure you have the FEATURE_EXTERNAL_INITIATORS environment variable set to true in your .env file, like so:
FEATURE_EXTERNAL_INITIATORS=true
This will open up access to the initiators command in the Chainlink CLI and you can resume the instructions from there.
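Since the node here runs in a GKE cluster rather than from a local .env file, the equivalent is to set the variable in the container spec of your Deployment. A sketch of the relevant fragment (the surrounding manifest structure is an assumption, not from the docs):

```yaml
# hypothetical fragment of the chainlink container spec in the Deployment
env:
  - name: FEATURE_EXTERNAL_INITIATORS
    value: "true"
```

Applying the updated manifest rolls the pods, and the initiators command should then appear in the CLI.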

OpenConnect "must be running as root" in Gitlab CI/CD

I'm trying to get my Continuous Delivery working and subsequently uploading binaries to a company server, which is only accessible through VPN connection.
The problem is, every single time I try it, I'm getting the following error:
Connected as 158.196.194.120 + 2001:718:1001:111::7/64, using SSL
DTLS handshake timed out
DTLS handshake failed: Resource temporarily unavailable, try again.
Failed to bind local tun device (TUNSETIFF): Operation not permitted
To configure local networking, openconnect must be running as root
See http://www.infradead.org/openconnect/nonroot.html for more information
Set up tun device failed
Unknown error; exiting.
The strange thing is that my script uses sudo explicitly in .gitlab-ci.yml, so I'd expect it to have all the rights.
deploy_spline:
stage: deploy
image: martinbeseda/lib4neuro-ubuntu-system-deps:latest
dependencies:
- test_spline
before_script:
- echo "DEPLOY!"
- apt-get -y install lftp openconnect sudo
script:
- mkfifo mypipe
- export USER=${USER}
- echo "openconnect -v --authgroup VSB -u ${USER} --passwd-on-stdin vpn.vsb.cz < mypipe &" > vpn.sh
- chmod +x vpn.sh
- sudo ./vpn.sh
- echo "${PASS}">mypipe
- lftp -u ${USER},${PASS} sftp://moldyn.vsb.cz:/moldyn.vsb.cz/www/releases -e "put build/SSR1D_spline.out; exit"
So, do you know what's wrong with my code? Or is it some GitLab CD-specific problem?
The GitLab CI runner needs to run in privileged mode to bind the tunnel interface. Check your /etc/gitlab-runner/config.toml file and make sure that your runner has privileged set to true.
[[runners]]
name = "privileged runner"
...
[runners.docker]
privileged = true
Without that setting, the build container doesn't have the ability to bind the interface, even as root.

How do I get a custom Nagios plugin to work with NRPE?

I have a system with no internet access where I want to install some Nagios monitoring services/plugins. I installed NRPE (Nagios Remote Plugin Executor), and I can see commands defined in it, like check_users, check_load, check_zombie_procs, etc.
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
...
I am able to run the commands like so:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_load
This produces an output like:
OK - load average: 0.01, 0.13, 0.12|load1=0.010;15.000;30.000;0; load5=0.130;10.000;25.000;0; load15=0.120;5.000;20.000;0;
or
WARNING - load average per CPU: 0.06, 0.07, 0.07|load1=0.059;0.150;0.300;0; load5=0.069;0.100;0.250;0; load15=0.073;0.050;0.200;0;
Now, I want to define/configure/install some more services to monitor. I found a collection of services here. So, say, I want to use the service defined here called check_hadoop_namenode.pl. How do I get it to work with NRPE?
I tried copying the file check_hadoop_namenode.pl into the same directory where other NRPE services are stored, i.e., /usr/lib/nagios/plugins. But it doesn't work:
$ /usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_hadoop_namenode.pl
I figured this might be because all the other services in that directory are binaries, so I need a binary for the check_hadoop_namenode.pl file as well. How do I make the binary for it?
I tried installing the plugins according to the description in the link, but it just tries to install some package dependencies and throws errors because it cannot access the internet (my system has no internet access, as I stated before). This error persists even when I install these dependencies manually on another system and copy them to the target system.
$ <In another system with internet access>
mkdir ~/repos
git clone https://github.com/harisekhon/nagios-plugins
cd nagios-plugins
sudo nano Makefile
# replace 'yum install' with 'yumdownloader --resolv --destdir ~/repos/'
# replace 'pip install' with 'pip download -d ~/repos/'
This downloaded 43 dependencies (and dependencies of dependencies, and so on) required to install the plugins.
How do I get it to work?
check_users, check_load, and check_zombie_procs are defined on the client side in the nrpe.cfg file. The default locations are /usr/local/nagios/etc/nrpe.cfg or /etc/nagios/nrpe.cfg. As I read it, you have already found that file, so you can move on to the next step.
Put something like this to your nrpe.cfg:
command[check_hadoop_namenode]=/path/to/your/custom/script/check_hadoop_namenode.pl -optional -arguments
Then you need to restart the NRPE daemon on the client, with something like service nrpe restart.
Just for your information, these custom scripts don't have to be binaries; you can even use a simple bash script.
And finally, after that, you can call the check_hadoop_namenode command from the Nagios server or via the local NRPE daemon:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_hadoop_namenode
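To illustrate that last point: a plugin only needs to print one status line (optionally followed by |perfdata) and exit with the Nagios code 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN). Here is a minimal hypothetical plugin written as a bash function (the name check_proc_count and the thresholds are made up); in a real plugin file you would exit with the code instead of returning it:

```shell
#!/bin/bash
# check_proc_count - hypothetical minimal Nagios plugin sketch.
# Counts running processes by listing the numeric (PID) entries in /proc.
check_proc_count() {
    local warn=300 crit=500 count
    count=$(ls /proc | grep -c '^[0-9]')
    if [ "$count" -ge "$crit" ]; then
        echo "CRITICAL - $count processes|procs=$count;$warn;$crit;0;"
        return 2
    elif [ "$count" -ge "$warn" ]; then
        echo "WARNING - $count processes|procs=$count;$warn;$crit;0;"
        return 1
    fi
    echo "OK - $count processes|procs=$count;$warn;$crit;0;"
    return 0
}
check_proc_count
```

Drop the script into /usr/lib/nagios/plugins, make it executable with chmod +x, add a command[...] line for it in nrpe.cfg, and restart the NRPE daemon as described above.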

Launch EC2 cluster with Whirr

I am working through Jeffrey Breen's tutorial at the moment. I ran into trouble when trying to launch an EC2 cluster with Whirr. I am using a Cloudera demo VM (CDH3u4).
I downloaded version 0.8.1 of Whirr.
Here are all commands I ran:
$ wget http://mirror.switch.ch/mirror/apache/dist/whirr/whirr-0.8.1/whirr-0.8.1.tar.gz
$ tar zxf whirr-0.8.1.tar.gz
$ export PATH="~/whirr-0.8.1/bin:$PATH"
$ export AWS_ACCESS_KEY_ID=MY ACCESS KEY
$ export AWS_SECRET_ACCESS_KEY=MY SECRET ACCESS KEY
$ ssh-keygen -t rsa -P hadoop-ec2
Then I was asked in which file the key should be saved, and I typed: hadoop-ec2
$ whirr launch-cluster --config hadoop-ec2.properties
...and here is the problem: There were no instances launched! I got the following message:
Exception in thread "main" org.apache.commons.configuration.ConfigurationException: Cannot locate configuration source hadoop-ec2.properties
at org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:249)
at org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:229)
at org.apache.commons.configuration.AbstractFileConfiguration.<init>(AbstractFileConfiguration.java:149)
at org.apache.commons.configuration.PropertiesConfiguration.<init>(PropertiesConfiguration.java:252)
at org.apache.whirr.command.AbstractClusterCommand.getClusterSpec(AbstractClusterCommand.java:122)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:57)
at org.apache.whirr.cli.Main.run(Main.java:69)
at org.apache.whirr.cli.Main.main(Main.java:102)
What I did next was point the --config option directly at the properties file that Jeffrey Breen published in his tutorial, and then I got the following:
[cloudera@localhost ~]$ whirr launch-cluster --config /home/cloudera/TutorialBreen/config/whirr-ec2/hadoop-ec2.properties
Running on provider aws-ec2 using identity ${env:AKIAIXPYW6EBNWSZWMTQ}
Bootstrapping cluster
Configuring template for bootstrap-hadoop-datanode_hadoop-tasktracker
Unable to start the cluster. Terminating all nodes.
org.jclouds.rest.AuthorizationException: POST https://ec2.us-east-1.amazonaws.com/ HTTP/1.1 -> HTTP/1.1 401 Unauthorized
at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.refineException(ParseAWSErrorFromXmlContent.java:123)
at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:92)
at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:69)
at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:197)
at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:167) . . .
Was this a step in the right direction, and if so, what do I have to do to make it work?
I am a complete beginner, so I'd really appreciate your help - as "clear" as possible, if you can, since I am - as I said - a beginner.
The next step would be to run this command:
$ whirr run-script --script install-r+packages.sh --config hadoop-ec2.properties
I really hope to find some help here so that I can continue with the tutorial.
Whirr-config-File:
whirr.cluster-name=hadoop-ec2
# Change the number of machines in the cluster here
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,5 hadoop-datanode+hadoop-tasktracker
# whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
# Uncomment out these lines to run CDH
# You need cdh3 because of the streaming combiner backport
whirr.hadoop.install-function=install_cdh_hadoop
whirr.hadoop.configure-function=configure_cdh_hadoop
# make sure java is set up correctly, requires Whirr >= 0.7.1
whirr.java.install-function=install_oab_java
# For EC2 set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
whirr.provider=aws-ec2
whirr.identity=${env:DFD...(mycode)..DFDSDF}
whirr.credential=${env:df342.(mycode)..3434324}
# The size of the instance to use. See http://aws.amazon.com/ec2/instance-types/
whirr.hardware-id=m1.large
# whirr.hardware-id=c1.xlarge
# select recent, 64-bit CentOS 5.6 AMI from RightScale
whirr.image-id=us-east-1/ami-49e32320
# here's what Cloudera recommends:
# whirr.image-id=us-east-1/ami-ccb35ea5
# If you choose a different location, make sure whirr.image-id is updated too
whirr.location-id=us-east-1
# You can also specify the spot instance price
# http://aws.amazon.com/ec2/spot-instances/
# whirr.aws-ec2-spot-price=0.109
# By default use the user system SSH keys. Override them here.
# whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
# whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
# Expert: override Hadoop properties by setting properties with the prefix
# hadoop-common, hadoop-hdfs, hadoop-mapreduce to set Common, HDFS, MapReduce
# site properties, respectively. The prefix is removed by Whirr, so that for
# example, setting
# hadoop-common.fs.trash.interval=1440
# will result in fs.trash.interval being set to 1440 in core-site.xml.
# Expert: specify the version of Hadoop to install.
#whirr.hadoop.version=0.20.2
#whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
Yes, remove the # to uncomment those key-file lines.
Sample here:
whirr.private-key-file=${sys:user.home}/.ssh/whirr_id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/whirr_id_rsa.pub
Replace whirr_id_rsa with your file name. More details here:
http://www.xmsxmx.com/apache-whirr-create-hadoop-cluster-automatically/
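For completeness, a sketch of generating that dedicated key pair (whirr_id_rsa is just the example filename from above; -N '' gives it an empty passphrase, which Whirr needs for non-interactive SSH):

```shell
# create ~/.ssh if missing, then generate the key pair unless it already exists
mkdir -p ~/.ssh
[ -f ~/.ssh/whirr_id_rsa ] || ssh-keygen -q -t rsa -N '' -f ~/.ssh/whirr_id_rsa
```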

How to start and debug opennms using Eclipse

I'm new to OpenNMS, and I'm working in EMS development.
My team plans to move from our current EMS to OpenNMS.
I was able to configure it using Eclipse, but I don't know how to start OpenNMS and debug it from Eclipse.
Actually, I have successfully compiled and assembled it using the compile.sh and assemble.sh scripts.
But I need to know how to debug, compile, and start OpenNMS using Eclipse.
Thanks,
Alya
To start OpenNMS you have to use the "opennms" script, which is located in ${opennms.home}/bin.
With the script you can then tell OpenNMS to run in debug mode, like so:
sudo ./opennms -t start
OpenNMS then tells you what the remote debugger port is (the default is 8001).
In Eclipse you can then attach a "remote debug" session to OpenNMS.
For how to do this, you can for example follow this guide: http://javarevisited.blogspot.de/2011/02/how-to-setup-remote-debugging-in.html
I usually start OpenNMS in verbose and debug mode: sudo ./opennms -vt start
opennms usage
Usage: ./opennms [-n] [-t] [-p] [-o] [-c timeout] [-v] [-Q] <command> [<service>]
command options: start|stop|restart|status|check|pause|resume|kill
service options: all|<a service id from the etc/service-configuration.xml>
defaults to all
The following options are available:
-n "No execute" mode. Don't call Java to do anything.
-t Test mode. Enable JPDA on port 8001.
-p Enable TIJMP profiling
-o Enable OProfile profiling
-c Controller HTTP connection timeout in seconds.
-v Verbose mode. When used with the "status" command, gives the
results for all OpenNMS services. When used with "start", enables
some verbose debugging, such as details on garbage collection.
-Q Quick mode. Don't wait for OpenNMS to startup. Useful if you
want to watch the logs while OpenNMS starts up without wanting to
open another terminal window.
"opennnms script" examples
To start opennms: sudo ./opennms start
To start opennms in verbose mode: sudo ./opennms -v start
To start opennms in verbose and debug mode: sudo ./opennms -vt start
To stop opennms: sudo ./opennms stop
These examples assume you are in the ${opennms.home}/bin folder.
