I am trying to clear a hands-on exercise on HackerRank, where the task is to stop and start the service named ssh using the service module. I have used the code below.
- name: "Stop ssh"
  service:
    name: ssh
    state: stopped

- name: "start ssh"
  service:
    name: ssh
    state: started
Can you please guide me on how to clear this hands-on?
The service handling SSH on Linux is called sshd, like a lot of other services where the d stands for daemon.
So your correct tasks would be:
- name: Stop ssh
  service:
    name: sshd
    state: stopped

- name: Start ssh
  service:
    name: sshd
    state: started
That said, since Ansible connects through SSH, I am unsure how this will really behave when the service is stopped.
Most Linux services also have a restart instruction, which you can use in Ansible with the state value restarted.
- name: Restart ssh
  service:
    name: sshd
    state: restarted
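If you are worried about the play hanging while sshd is bounced, one pattern sometimes used (my assumption here, not something the exercise requires) is to run the task asynchronously so Ansible fires it off without waiting on the connection it is about to disturb:
- name: Restart ssh without blocking on the connection
  service:
    name: sshd
    state: restarted
  async: 45   # allow the restart up to 45 seconds in the background
  poll: 0     # do not wait for the result; the play simply moves on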
I have some Python files inside a VM that run every week to scrape info from a website. This is automated through Cloud Scheduler and a Cloud Function, and it is confirmed to work. I wanted to use Cloud Build and Cloud Run to update the code inside the VM each time I update the code on GitHub. I read somewhere that in order to deploy a container image to a VM, the VM has to run a Container-Optimized OS, so I manually made a new VM matching that criterion through Compute Engine. The Container-Optimized OS VM is already made; I just need its container image updated with the new image built from the updated GitHub code.
I'm trying to build a container image that I will later use to update the code inside the virtual machine. A Cloud Build run is triggered every time I push to a folder in my GitHub repository.
I checked Container Registry and the images are being created, but I keep getting this error when I check the virtual machine:
"Error: Failed to start container: Error response from daemon:
{
"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE:latest/manifests/latest\"."
}"
Why is the request being made for the latest tag when I wanted the tag with the commit hash and how can I fix it?
This is the virtual machine log (sudo journalctl -u konlet-startup):
Started Containers on GCE Setup.
Starting Konlet container startup agent
Downloading credentials for default VM service account from metadata server
Updating IPtables firewall rules - allowing tcp traffic on all ports
Updating IPtables firewall rules - allowing udp traffic on all ports
Updating IPtables firewall rules - allowing icmp traffic on all ports
Launching user container $CONTAINER
Configured container 'preemptive-public-email-vm' will be started with name 'klt-$IMAGE-xqgm'.
Pulling image: 'gcr.io/$PROJECT_ID/$IMAGE'
Error: Failed to start container: Error response from daemon: {"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE/manifests/latest\"."}
Saving welcome script to profile.d
Main process exited, code=exited, status=1/FAILURE
Failed with result 'exit-code'.
Consumed 96ms CPU time
This is the cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'run-public-email'
      - '--image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
      - '--region'
      - 'us-central1'
images:
  - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
This is the Dockerfile:
FROM python:3.9.7-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "hello.py" ]
This is hello.py:
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello world"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
The VM asks for the latest tag because the container declaration on the instance references the image without any tag (you can see this in the konlet log line "Pulling image: 'gcr.io/$PROJECT_ID/$IMAGE'"), and an untagged reference defaults to :latest, which your build never pushes. You can update the Container-Optimized OS VM with the newly built image by using a gcloud command in the cloudbuild.yaml file; this command updates Compute Engine VM instances running container images.
You may see a VM restart whenever the container image on the VM is updated. When this happens, the VM can be allocated a new external IP; you can use a static IP to avoid that, if required.
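If you do want a stable address, a minimal sketch (the address name and region below are placeholders of mine) is to reserve a static external IP and then attach it to the instance's network interface from the console or with the corresponding gcloud access-config commands:
# reserve a regional static external IP (name and region are placeholders)
gcloud compute addresses create vm-static-ip --region us-central1

# confirm it was reserved
gcloud compute addresses list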
Example cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'compute'
      - 'instances'
      - 'update-container'
      - 'Instance Name'
      - '--container-image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
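Note that gcloud needs to know which zone the instance lives in, so either pass a '--zone' arg in the Cloud Build step above or set a default zone for the builder. Run by hand, the equivalent command would look roughly like this (the instance name and zone are placeholders):
gcloud compute instances update-container my-container-vm \
  --zone us-central1-a \
  --container-image gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA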
When I use a docker pipeline, the build succeeds. But when I use an exec pipeline, it always gets stuck in pending, and I don't know what is going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that exec pipelines can only be used when Drone is self-hosted. As the official docs state:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, be sure that:
The exec runner is installed
It is configured properly in its config file so it can connect to the Drone server (see the sketch after this list)
And the drone-runner-exec service is running
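As a rough sketch of that config (placeholder values, and the exact path can vary with how the runner was installed; on Linux it is commonly /etc/drone-runner-exec/config):
# /etc/drone-runner-exec/config -- placeholder values
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
DRONE_RPC_SECRET=your-runner-shared-secret
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt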
After the service is started, look at its log; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.
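Assuming the runner was installed as a systemd unit named drone-runner-exec (the usual Linux setup), a quick way to check it and watch for those messages is:
# start/enable the runner and follow its journal
sudo systemctl enable --now drone-runner-exec
sudo journalctl -u drone-runner-exec -f

# or tail the runner's own log file, if DRONE_LOG_FILE is configured
sudo tail -f /var/log/drone-runner-exec/log.txt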
I request your help with the following issue.
I am building a highly available LAMP app on Ubuntu 14.04 with Ansible (on my home lab). All the tasks execute fine up to the GlusterFS installation; however, creating the GlusterFS volume has been a challenge for me for a week. If I use the command module, the GlusterFS volume gets created:
- name: Creating the Gluster Volume
  command: sudo gluster volume create var-www replica 2 transport tcp server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/data/glusterfs/var-www/brick02/brick
But if I use the gluster_volume module, I get an error:
- name: Creating the Gluster Volume
  gluster_volume:
    state: present
    name: var-www
    bricks: /server01-private:/data/glusterfs/var-www/brick01/brick,/server02-private:/data/glusterfs/var-www/brick02/brick
    replicas: 2
    transport: tcp
    cluster:
      - server01-private
      - server02-private
    force: yes
  run_once: true
The error is
"msg": "error running gluster (/usr/sbin/gluster --mode=script volume add-brick var-www replica 2 server01-private:/server01-private:/data/glusterfs/var-www/brick01/brick server01-private:/server02-private:/data/glusterfs/var-www/brick02/brick server02-private:/server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/server02-private:/data/glusterfs/var-www/brick02/brick force) command (rc=1): internet address 'server01-private:/server01-private' does not conform to standards\ninternet address 'server01-private:/server02-private' does not conform to standards\ninternet address 'server02-private:/server01-private' does not conform to standards\ninternet address 'server02-private:/server02-private' does not conform to standards\nvolume add-brick: failed: Host server01-private:/server01-private is not in 'Peer in Cluster' state\n"
}
May I know the mistake I am making?
The bricks: declaration of the Ansible gluster_volume module requires only the path of the brick. The nodes participating in the volume are identified by cluster:.
The <hostname>:<brickpath> format is required by the gluster command line, but it is not required when you use the Ansible module.
So your task should be something like:
- name: Creating the Gluster Volume
  gluster_volume:
    name: 'var-www'
    bricks: '/data/glusterfs/var-www/brick01/brick,/data/glusterfs/var-www/brick02/brick'
    replicas: '2'
    cluster:
      - 'server01-private'
      - 'server02-private'
    transport: 'tcp'
    state: 'present'
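Once the play has run, you can sanity-check the result from either node with the standard gluster CLI:
# confirm the volume exists with the expected replica count and that both bricks are online
sudo gluster volume info var-www
sudo gluster volume status var-www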
I am writing a cookbook for installing a LAMP stack, with Docker as the driver for Test Kitchen. When I use the kitchen create command, it gets stuck in an SSH loop.
---
driver:
  name: docker
  privileged: true
  use_sudo: false

provisioner:
  name: chef_zero
  # You may wish to disable always updating cookbooks in CI or other testing environments.
  # For example:
  #   always_update_cookbooks: <%= !ENV['CI'] %>
  always_update_cookbooks: true

verifier:
  name: inspec

platforms:
  - name: centos-7.3
    driver_config:
      run_command: /usr/lib/systemd/systemd

suites:
  - name: default
    run_list:
      - recipe[lamp::default]
    verifier:
      inspec_tests:
        - test/smoke/default
    attributes:
The looping output is shown below.
---> b052138ee79a
Successfully built b052138ee79a
97a2030283b85c3c915fd7bd318cdd12541cc4cfa9517a293821854b35d6cd2e
0.0.0.0:32770
Waiting for SSH service on localhost:32770, retrying in 3 seconds
Waiting for SSH service on localhost:32770, retrying in 3 seconds
The SSH loop continues until I force-close it.
Can anybody help?
Most likely sshd isn't actually starting correctly. I don't recommend the systemd hack you have in there; it's hard to get right.
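As a rough sketch, assuming your recipes do not strictly need a running systemd inside the container, a plainer driver section that lets kitchen-docker build its default sshd-enabled image would be:
---
driver:
  name: docker
  use_sudo: false
  # no privileged: true and no custom run_command here; kitchen-docker's
  # default image starts sshd itself, which is what `kitchen create` waits on

platforms:
  - name: centos-7.3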
I installed elasticsearch using this role.
I do the following steps in the playbook:
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - ansible-elasticsearch
Then I run vagrant up es-test-cluster, where es-test-cluster is the name of the VM I define in the Vagrantfile. I have given it a private IP of 192.162.12.14. The VM boots perfectly, and after running sudo service elasticsearch status I see that the service is running on 192.162.12.14:9200, which is correct. But if I run vagrant halt es-test-cluster and then vagrant up es-test-cluster, I see that the elasticsearch service is not running any more.
I thought of doing this:
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - ansible-elasticsearch
  tasks:
    - name: Starting elasticsearch service if not running
      service: name=elasticsearch state=started
but even this does not help. This only runs when I boot for the first time.
How can I start the service whenever I run vagrant up?
This is for Ubuntu 14.04.
You need to "enable" the service. This can be done with a single flag in your Ansible.
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - ansible-elasticsearch
  tasks:
    - name: Starting elasticsearch service if not running
      service:
        name: elasticsearch
        state: started
        enabled: true
Also, depending on which ansible-elasticsearch role you are using, it may already have a flag that you can pass to enable the service without the extra task. I know the role I used did.
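For illustration only, with a hypothetical role variable (the name below is made up; check the role's defaults/main.yml for the real one), that could look like:
---
- hosts: master
  sudo: true
  tags: elasticsearch
  roles:
    - role: ansible-elasticsearch
      # hypothetical variable name; consult the role's defaults for the real flag
      es_enable_service: true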
The solutions are for Debian packages.
You can include the following in the VagrantFile:
config.vm.provision "shell", path: "restart_es.sh"
where restart_es.sh contains:
#!/bin/bash
SERVICE=elasticsearch

sudo update-rc.d $SERVICE defaults 95 10

if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    echo "$SERVICE is not running"
    sudo /etc/init.d/$SERVICE start
fi
OR
config.vm.provision "shell", path: "restart_es.sh", run: "always"
where restart_es.sh contains:
#!/bin/bash
SERVICE=elasticsearch

if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    echo "$SERVICE is not running"
    sudo service $SERVICE start
fi
OR
config.vm.provision "shell", inline: "sudo service elasticsearch start", run: "always"
All three start the elasticsearch service when you restart the vagrant box.