Storing a file's value from another VM as a variable in Bash - bash

I have a setup that needs to be bootstrapped off the values of some files in another VM.
Here is the command I am using to invoke the run command:
BOOT_VM="${VM_NAME}1"
BOOT_ENODE=$(az vm run-command invoke --name ${BOOT_VM} \
--command-id RunShellScript \
--resource-group ${RSC_GRP_NAME} \
--query "value[].message" \
--output tsv \
--scripts "cat /etc/parity/enode.pub")
echo ${BOOT_ENODE}
The result I get is:
Enable succeeded: [stdout] [stderr]
As far as I know, this could mean one of two things:
There is no file there.
I am handling the response wrongly.
Really hoping it isn't the first, and I would like advice on how to approach this.

For your issue, another possible reason is that the agent in the VM is not running, or something has gone wrong with it. The Azure VM agent manages interactions between an Azure VM and the Azure fabric controller, so you should check that it is working properly.
Update
You can check the agent's status in the portal.
Also, you can check the agent from inside the VM.
For example, when I get the config of vim in a VM running Red Hat 7.2, the result of the command az vm run-command invoke shows the file contents wrapped between the [stdout] and [stderr] markers.
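As a concrete sketch of that in-VM check (the service names are my assumption for common distros, not from the original answer):
sudo systemctl status waagent        # RHEL/CentOS
sudo systemctl status walinuxagent   # Ubuntu/Debian
If the agent is healthy, the seemingly empty result may just be the wrapping: run-command returns one message string with the script's output embedded between the [stdout] and [stderr] markers. A minimal sketch for extracting the stdout portion (the awk filter is my assumption):
BOOT_ENODE=$(az vm run-command invoke --name ${BOOT_VM} \
--command-id RunShellScript \
--resource-group ${RSC_GRP_NAME} \
--query "value[0].message" \
--output tsv \
--scripts "cat /etc/parity/enode.pub" \
| awk '/\[stdout\]/{flag=1;next} /\[stderr\]/{flag=0} flag')
echo "${BOOT_ENODE}"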

Related

mc: <error> while trying to run bitnami/minio-client, the container exits within a few seconds

docker run -it --name mc3 dockerhub:5000/bitnami/minio-client
08:05:31.13
08:05:31.14 Welcome to the Bitnami minio-client container
08:05:31.14 Subscribe to project updates by watching https://github.com/bitnami/containers
08:05:31.14 Submit issues and feature requests at https://github.com/bitnami/containers/issues
08:05:31.15
08:05:31.15 INFO  ==> ** Starting MinIO Client setup **
08:05:31.16 INFO  ==> ** MinIO Client setup finished! ** mc: Configuration written to /.mc/config.json. Please update your access credentials.
mc: Successfully created /.mc/share.
mc: Initialized share uploads /.mc/share/uploads.json file.
mc: Initialized share downloads /.mc/share/downloads.json file.
mc: /opt/bitnami/scripts/minio-client/run.sh is not a recognized command. Get help using --help flag.
dockerhub:5000/bitnami/minio-client - name of the image
It would be great if someone could reach out and help me solve this issue, as I've been stuck here for more than 2 days.
MinIO has two components:
Server
Client
The Server runs continuously, as it should, so it can serve the data.
The client, on the other hand, which you are trying to run, is used to perform operations on a running server. So it's expected for it to run and then immediately exit, as it's not a daemon and isn't meant to run forever.
What you want to do is first launch the server container in the background (using the -d flag):
$ docker run -d --name minio-server \
--env MINIO_ROOT_USER="minio-root-user" \
--env MINIO_ROOT_PASSWORD="minio-root-password" \
minio/minio:latest
Then launch the client container to perform some operation, for example making a bucket, which it will perform on the server and exit immediately, after which the client container is cleaned up (thanks to the --rm flag).
$ docker run --rm --name minio-client \
--env MINIO_SERVER_HOST="minio-server" \
--env MINIO_SERVER_ACCESS_KEY="minio-root-user" \
--env MINIO_SERVER_SECRET_KEY="minio-root-password" \
minio/mc \
mb minio/my-bucket
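One detail worth noting (my addition, not part of the original answer): for the client to reach the server by the hostname minio-server, both containers generally need to share a user-defined Docker network, for example:
$ docker network create minio-net
and then add --network minio-net to both docker run commands above. On the default bridge network, container-name DNS resolution does not work.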
For more information, please check out the docs:
Server: https://min.io/docs/minio/container/operations/installation.html
Client: https://min.io/docs/minio/linux/reference/minio-mc.html

How to run components in AWS Greengrass?

The AWS Greengrass documentation says you can test components like this:
sudo /greengrass/v2/bin/greengrass-cli deployment create \
--recipeDir ~/greengrassv2/recipes \
--artifactDir ~/greengrassv2/artifacts \
--merge "com.example.HelloWorld=1.0.0"
But if I want to run a component from another script, should I use the same command? For example, I have a component that publishes some data to MQTT, and right now I am using os.system like this:
os.system('sudo /greengrass/v2/bin/greengrass-cli deployment create \
--recipeDir ~/greengrassv2/recipes \
--artifactDir ~/greengrassv2/artifacts \
--merge "com.example.HelloWorld=1.0.0"')
But I am not sure if it's the right approach; it does not seem like a clean solution.
I wouldn't recommend using the greengrass-cli deployment create command to run the component:
It's for local development only.
The command runs through all the lifecycle steps defined in the component recipe file before running the component, which can be a big overhead.
If "another script" is also a Greengrass component, you can use the AWS IoT Device SDK for interprocess communication (IPC).
If "another script" is not a Greengrass component, you can use the restart command to trigger a run for the component. It has less overhead than the create command:
sudo /greengrass/v2/bin/greengrass-cli component restart --names "HelloWorld"
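If you're not sure of the exact component name to pass to --names, the local CLI can also list the components deployed on the core device (a hedged aside, not from the original answer):
sudo /greengrass/v2/bin/greengrass-cli component list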

minishift - where to get the ansible playbooks for metrics

I am running minishift locally and want to install metrics on it. I learned that metrics were removed from minishift and that there is a way to enable them through Ansible playbooks.
I am not very familiar with Ansible playbooks and would like to understand how to run the following:
ansible-playbook [-i </path/to/inventory>] <OPENSHIFT_ANSIBLE_DIR>/playbooks/openshift-metrics/config.yml \
-e openshift_metrics_install_metrics=True \
-e openshift_metrics_hawkular_hostname=hawkular-metrics.example.com \
-e openshift_metrics_cassandra_storage_type=pv
My questions are:
From where do I run this? Should I run it inside the OpenShift VM or from my host machine?
Where is OPENSHIFT_ANSIBLE_DIR located?
Is the path to inventory a mandatory option, and if so, what should be passed?
I have managed to install Ansible, but despite searching a lot, I couldn't find anything related to the playbooks to be run.
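For reference (an assumption based on the openshift/openshift-ansible project): OPENSHIFT_ANSIBLE_DIR conventionally refers to a local checkout of the openshift-ansible repository, typically run from a host machine that can reach the cluster:
git clone https://github.com/openshift/openshift-ansible
# OPENSHIFT_ANSIBLE_DIR is then the path to this checkout, e.g. ./openshift-ansible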

Build to deploy guest on KVM hangs

I'm using Jenkins to automate the deployment of a virtual appliance. The first step is to build a standard CentOS 7 minimal VM in KVM. I wrote a short bash script to do this task, which works when run locally on the KVM machine:
#!/bin/bash
#Variables
diskpath="/var/lib/libvirt/images/"
buildname=$(date +"%m-%d-%y-%H-%M")
vmextension=".dsk"
#Change to images directory
cd /var/lib/libvirt/images/
#Deploy VM with with kickstart file
sudo virt-install \
--name=$buildname \
--nographics \
--hvm \
--virt-type=kvm \
--file=$diskpath$buildname$vmextension \
--file-size=20 \
--nonsparse \
--vcpu=2 \
--ram=2048 \
--network bridge=br0 \
--os-type=linux \
--os-variant=generic \
--location=http://0.0.0.0/iso/ \
--initrd-inject /var/lib/libvirt/images/autobuild-ks.cfg \
--extra-args="ks=http://0.0.0.0/ks/autobuild-ks.cfg console=ttyS0"
(I have changed the IP address for security purposes.)
The ISO and the kickstart file are stored on another server, and they can both be accessed via HTTP for the purposes of making this script work. To be clear, the script does work.
The problem I have is, when I put this script into Jenkins as a build step, the script works; however, it hangs at the end, after the OS has been installed and the KVM guest begins the shutdown process.
Here is the kickstart file:
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use Network installation media
url --url=http://0.0.0.0/iso
# Use graphical install
#graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=gb --xlayouts='gb'
# System language
lang en_GB.UTF-8
# Network information
network --bootproto=dhcp --device=ens160 --ipv6=auto --activate
network --hostname=hostname.domain.com
# Root password
rootpw --iscrypted
taken_encryption_output_out_for_the_purposes_of_security
#Shutdown after installation
shutdown
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/London --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
autopart --type=lvm
# Partition clearing information
clearpart --none --initlabel
%packages
@^minimal
@core
chrony
kexec-tools
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty
%end
I suspect it's something to do with the shutdown option in the kickstart file, but I'm unsure. When I SSH to the KVM server, I can see my newly created VM, so the script does work, but Jenkins hangs.
[root@sut-kvm01 ~]# virsh list --all
 Id    Name             State
----------------------------------------------------
 -     09-22-17-16-21   shut off
So far I have tried shutdown, reboot, and halt (the default) in the kickstart file, and none of them have worked for me.
Any ideas how I can get the build to complete successfully? If it hangs, I can't move on to what will be build step number 2.
Help please :-)
OK, so I managed to figure out what the issue was. It had nothing to do with Jenkins or the script, but rather with the kickstart file. In a nutshell, I was editing the wrong kickstart file. The file I was editing was the default kickstart file in the /root/ directory, but that is not the same file that was being injected into memory by the script, so the changes I made were having no effect.
Note to self: just because the script works does not mean the answer to the problem isn't written in the script.
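A quick sanity check along those lines (a sketch of the lesson, not a command from the thread): confirm that the power-off directive you expect is in the file the script actually injects, rather than in a copy elsewhere:
grep -E '^(shutdown|reboot|halt|poweroff)' /var/lib/libvirt/images/autobuild-ks.cfg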

How do I run the Hetionet v1.0 docker container?

I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
1. I've set up a DigitalOcean droplet with Docker.
2. I ran docker pull dhimmel/hetionet and it worked.
3. Now I run docker run dhimmel/hetionet and the following happens (and never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant @Path("/") annotation, as described in this SO post's comment, buried in the Docker container, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output in step 3 is expected: the Docker container successfully launched, downloaded the Hetionet database, and launched the Neo4j server. I'll look into fixing the warnings, but they're not errors, as Neo4j is still launching.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development commands map ports. This makes the Neo4j server running inside your Docker container available at http://localhost:7474/. This is most likely what you want. If you're doing this on DigitalOcean, you would replace http://localhost with the IP address of your droplet.
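A quick way to confirm the mapped port is serving (my addition; replace localhost with the droplet IP when testing remotely):
curl -i http://localhost:7474/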
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server -- it just lets you explore the image.
Does this clear things up?
