Connect from host to Vagrant VM by Ansible - ansible

How should I define a VM made with Vagrant in my Ansible inventory on the host?
I have just one Vagrant machine with default config:
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "55.55.55.55"
and in the vagrant ssh-config output I have port 2222.
When I try from the host ssh dvory@55.55.55.55:2222 I cannot log in (the user is created on both); I am not even prompted for a password. I also have the same situation in Ansible with:
55.55.55.55 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/.ssh/id_rsa
Is it possible to do it this way? I don't want to create two Vagrant VMs and have a server/client setup; I have no idea where I should put my Ansible code to have it persistent per server.

The 2222 SSH port in Vagrant is the value on the host (your localhost) of the forwarded port 22 of the guest (the VM).
==> ansible: Forwarding ports...
ansible: 22 (guest) => 2222 (host) (adapter 1)
So you should connect by using either:
localhost on port 2222
55.55.55.55 on port 22
I don't know whether you put your ~/.ssh/id_rsa.pub in the authorized_keys on the guest, but by default it is not there, so you should use the private key .vagrant/machines/<MACHINE_NAME>/virtualbox/private_key.
Also, you can SSH into the VM with vagrant ssh <MACHINE_NAME>.
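For example, assuming the single machine is named default and uses the VirtualBox provider (a rough sketch; adjust paths to your setup):
# through the forwarded port on the host
ssh -p 2222 -i .vagrant/machines/default/virtualbox/private_key vagrant@localhost
# or through the private network address on the guest's own port 22
ssh -i .vagrant/machines/default/virtualbox/private_key vagrant@55.55.55.55
# quick Ansible connectivity check with an inline inventory entry
ansible all -i '55.55.55.55,' -u vagrant --private-key .vagrant/machines/default/virtualbox/private_key -m ping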
Usually I use an inventory script with my vagrant hosts (to put in the same dir as Vagrantfile):
#!/bin/bash
INVENTORY_DIR=$(cd "$(dirname "$0")" && pwd)

list() {
  cat <<EOF
{
  "all": {
    "hosts": [
      "$(vagrant status --machine-readable | cut -d ',' -f 2 | sort -u | sed '/^$/d' | paste -sd ',' - | sed 's/,/","/g')"
    ]
  }
}
EOF
}

host() {
  local hostname=$1
  local port="$(VAGRANT_CWD=${INVENTORY_DIR} vagrant port --guest-port 22 ${hostname})"
  # fall back to 0 if vagrant port did not return a number
  [[ ! ${port} =~ ^[0-9]+$ ]] && port=0
  cat <<EOF
{
  "ansible_host": "localhost",
  "ansible_port": ${port},
  "ansible_user": "vagrant",
  "ansible_ssh_private_key_file": "${INVENTORY_DIR}/.vagrant/machines/${hostname}/virtualbox/private_key"
}
EOF
}

case $1 in
  --list) list;;
  --host) host "$2";;
  *) exit 1;;
esac
Then you can use ansible with --inventory <INVENTORY_SCRIPT>
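For example (a sketch assuming the script is saved as vagrant_inventory.sh next to the Vagrantfile and made executable):
chmod +x vagrant_inventory.sh
# list the hosts the script reports
./vagrant_inventory.sh --list
# ad-hoc ping of all Vagrant machines through the dynamic inventory
ansible all --inventory ./vagrant_inventory.sh -m ping
# or run a playbook (site.yml is just a placeholder name)
ansible-playbook --inventory ./vagrant_inventory.sh site.yml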

Related

Connection from DB client tool (Datagrip) to Vagrant

I am having trouble connecting to Vagrant from the DB client tool (Datagrip).
If you know anything about this, I would appreciate it if you could let me know.
connection information
→ vagrant ssh-config
Host default
HostName 127.0.0.1
User bargee
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/〇〇/.vagrant/machines/default/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL
SSH Connection with Terminal
$ ssh bargee@127.0.0.1 -p 2222 -i /Users/〇〇/.vagrant/machines/default/virtualbox/private_key
Welcome to Barge 2.15.0, Docker version 20.10.8, build 75249d8
[bargee@barge app]$
I was able to do it.
ssh connection on Datagrip
↓
Unable to connect.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "ailispaw/barge"
  config.vm.box_version = "2.15.0"
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.synced_folder "./", "/app"
  config.vm.provider :virtualbox do |vb|
    vb.gui = false
    vb.cpus = 4
    vb.memory = 4096
    vb.customize ['modifyvm', :id, '--natdnsproxy1', 'off']
    vb.customize ['modifyvm', :id, '--natdnshostresolver1', 'off']
  end
  config.vm.provision "shell", inline: <<-SHELL
    /etc/init.d/docker restart v20.10.8
    wget "https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-$(uname -s)-$(uname -m)" -O /opt/bin/docker-compose
    chmod +x /opt/bin/docker-compose
    if ! grep -q "cd /app" /home/bargee/.bashrc ; then
      echo "cd /app" >> /home/bargee/.bashrc
    fi
  SHELL
end

Vagrant machine unable to authenticate with my newly created user over ssh

My vagrantfile looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
vagrant_home = "/home/vagrant/"
local_share = "#{ENV['HOME']}"

unless Vagrant.has_plugin?("vagrant-vbguest")
  puts "Vagrant plugin 'vagrant-vbguest' is not installed!"
  puts "Execute: vagrant plugin install vagrant-vbguest"
end

unless Vagrant.has_plugin?("vagrant-sshfs")
  puts "Vagrant plugin 'vagrant-sshfs' is not installed!"
  puts "Execute: vagrant plugin install vagrant-sshfs"
end

Vagrant.configure("2") do |stage|
  stage.vm.box = "centos/7"
  stage.vm.hostname = "HSS-IAAS-VB"
  stage.vm.box_check_update = true
  stage.vm.network "private_network", :type => 'dhcp'
  stage.vm.provider "virtualbox" do |vb|
    vb.name = "centos7-dev"
    vb.gui = false
    vb.memory = "1024"
    stage.ssh.keys_only = false
    stage.ssh.username = "#{ENV['USER']}"
    stage.ssh.forward_agent = true
    stage.ssh.insert_key = true
    stage.ssh.private_key_path = "#{ENV['HOME']}/.ssh/id_rsa", "/home/#{ENV['USER']}/.ssh/id_rsa"
    stage.vm.provision :shell, privileged: false do |s|
      ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
      s.inline = <<-SHELL
        echo #{ssh_pub_key} >> #{ENV['home']}.ssh/authorized_keys
        sudo bash -c \"echo #{ssh_pub_key} >> #{ENV['home']}/.ssh/authorized_keys\"
      SHELL
    end
  end
end
My issue is that when I run this Vagrantfile, I receive an error that states: default: Warning: Authentication failure. Retrying... and if I run in debug mode I just see a bunch of timeouts.
All I am trying to do, rather than create a "vagrant" user, is to create a user that is the same as the user on the host machine by using #{ENV['USER']}, and have that user immediately be able to run vagrant ssh; so if the host user is test.user, then the guest user will be test.user.
vagrant ssh-config was:
Host default
HostName 127.0.0.1
User aaron.west
Port 2200
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/aaron.west/.ssh/id_rsa
IdentityFile /Users/aaron.west/.ssh/id_rsa
LogLevel FATAL
all help is appreciated :)
I believe you'll have to create a new user on your Vagrant machine. As per the docs for the ssh.username setting, it doesn't sound like that setting actually creates a user; it only tells Vagrant which user to connect as, in case the box was built with a username other than vagrant.
You probably need to shell out to useradd during provisioning.
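A rough sketch of what that provisioning shell could do (the username is hard-coded here as a placeholder; in your Vagrantfile it would come from ENV['USER'], and the public key line stands in for the contents of ~/.ssh/id_rsa.pub):
HOST_USER="test.user"   # placeholder; in the Vagrantfile this would come from ENV['USER']
# create the user on the guest if it does not exist yet
id -u "${HOST_USER}" >/dev/null 2>&1 || sudo useradd -m -s /bin/bash "${HOST_USER}"
# install the host's public key so ssh can authenticate as that user
sudo mkdir -p "/home/${HOST_USER}/.ssh"
echo "<contents of ~/.ssh/id_rsa.pub>" | sudo tee -a "/home/${HOST_USER}/.ssh/authorized_keys" >/dev/null
sudo chown -R "${HOST_USER}:${HOST_USER}" "/home/${HOST_USER}/.ssh"
sudo chmod 700 "/home/${HOST_USER}/.ssh"
sudo chmod 600 "/home/${HOST_USER}/.ssh/authorized_keys"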

not sure if shell script that opens ports/protocols in the event any are blocked is correct

I'm writing a script that will check and open ports/protocols in the event any are blocked. What I have so far is below. The port/protocol names look strange to me; I would have expected IP addresses, but I've never done this before. Would the host be the IP address of the DSLAM? Also, can I run nc without specifying a host if it's the current machine? Otherwise, does this script do what is needed?
#!/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin

echo -e "############################\n\n\nPresent ports opened on this machine are
$(iptables -nL INPUT | grep ACCEPT | grep dpt)
\nCompleted listing...\n\n\n#########################"

#these look funny to me
PORTS=( 123 161 69 "UDP" 80 443 22 8443 8080 23 25 3307 "TCP" "HTTPS" "SNMP" "SFTP" "TFTP")
#modified ip's for public sharing
HOSTS=( "10.x.x.x" "10.x.x.x" "10.x.x.x" "10.x.x.x" "10.x.x.x")

for HOST in "${HOSTS[@]}"
do
  for PORT in "${PORTS[@]}"
  do
    #see which ones need opening...0 is pass (open), 1 fail, 5 timeout; need host still
    #alternatively try nmap
    nc -z -v -w5 ${HOST} ${PORT}
    #if it's not open, then open it
    if [ "$?" ne 0 ]; then #shellcheck err this line: Couldn't parse this test expression.
      iptables -A INPUT -m tcp -p tcp --dport "$PORT" -j ACCEPT &&
        { service iptables save;
          service iptables restart;
          echo -e "Ports opened through iptables are \n$(iptables -nL INPUT | grep ACCEPT | grep dpt)"; }
    else
      echo "Port $PORT already open"
    fi
  done
done
I've been referring to test if port is open, and also open port.
These lines seem odd, OP edit #6 adds an outer for loop which assigns the same value to $HOST on each go-round:
HOSTS=( "10.x.x.x" "10.x.x.x" "10.x.x.x" "10.x.x.x" "10.x.x.x")
for HOST in "${HOSTS[@]}"
do
< stuff ... >
done
Assuming that the four redundant runs of < stuff ... > are not intended, then
the seven lines above, as written, would be equivalent to:
HOST="10.x.x.x"
< stuff ... >
(Fixed.) Remove the commas from this line:
PORTS=( 123, 161, 69, UDP, 80, 443, 22, 8443, 8080, 23, 25,
3307, TCP, HTTPS, SNMP, SFTP, TFTP)
bash does not use commas to define arrays; if commas are used
they become characters in the array data. For example, given the array
exactly as it is above:
echo ${PORTS[0]}
Outputs:
123,
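For comparison, with the commas removed each element comes out clean (a quick check you can run in any bash shell):
PORTS=( 123 161 69 UDP 80 443 22 8443 8080 23 25 3307 TCP HTTPS SNMP SFTP TFTP )
echo "${PORTS[0]}"     # prints: 123
echo "${#PORTS[@]}"    # prints: 17 (number of elements)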

how to build docker containers for different modules

I am building Docker containers for different modules through a Vagrant machine, as shown below.
DOCKER_HOST_NAME = "docker-host"

# Require 'yaml' module
require 'yaml'

# Read details of containers to be created from YAML file
# Be sure to edit 'containers.yml' to provide container details
containers = YAML.load_file('environment/containers.yaml')

# Create and configure the Docker container(s)
Vagrant.configure("2") do |config|
  config.vm.define "#{DOCKER_HOST_NAME}", autostart: false do |config|
    # Always use Vagrant's default insecure key
    config.ssh.insert_key = false
    config.vm.box = "ubuntu/trusty64"
    config.vm.network "forwarded_port", guest: 8080, host: 1234
    config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", auto_correct: true
    config.vm.network "forwarded_port", guest: 15672, host: 15672
    config.vm.network "private_network", ip: "192.168.50.50"
    config.vm.network "forwarded_port", guest: 5672, host: 5672
    config.vm.provider :virtualbox do |vb|
      vb.name = "docker-host"
    end
    # provision docker environment
    config.vm.provision "docker"
    # The following line terminates all ssh connections. Therefore
    # Vagrant will be forced to reconnect.
    # That's a workaround to have the docker command in the PATH
    config.vm.provision "shell", inline:
      "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"
    # create required jars if not present.
    config.vm.provision "shell", inline: <<-SHELL
      if [ "$SS_BUILD" == "NO" ] && [ -f "/vagrant/src/sonarCharts/notificationEmulator/target/notificationEmulator-1.0-SNAPSHOT.jar" ] && [ -f "/vagrant/src/sonarCharts/dataInputHandler/target/dataInputHandler-1.0-SNAPSHOT.jar" ] && [ -f "/vagrant/src/sonarCharts/controller/target/controller-1.0-SNAPSHOT.jar" ] && [ -f "/vagrant/src/sonarCharts/modelingWorker/target/modelingWorker-1.0-SNAPSHOT.jar" ] && [ -f "/vagrant/src/sonarCharts/contouringWorker/target/contouringWorker-1.0-SNAPSHOT.jar" ]
      then
        echo "ALL JAR FILES ARE PRESENT"
      else
        echo "Building the Jar Files...."
        sudo apt-get -y install maven
        sudo apt-get -y install default-jdk
        mvn -f /vagrant/src/sonarCharts/pom.xml clean install
      fi
    SHELL
    config.vm.synced_folder ENV['SS_INPUT'], "/vagrant/data/input"
    config.vm.synced_folder ENV['SS_OUTPUT'], "/vagrant/data/output"
    config.vm.provision "docker" do |docker|
      docker.build_image "/vagrant/environment/base", args: "-t local/base"
      docker.build_image "/vagrant/environment/rabbitmq", args: "-t local/rabbitmq"
      docker.build_image "/vagrant/environment/notificationEmulator", args: "-t local/notificationEmulator"
      docker.build_image "/vagrant/environment/dataInputHandler", args: "-t local/dataInputHandler"
      docker.build_image "/vagrant/environment/controller", args: "-t local/controller"
      docker.build_image "/vagrant/environment/modelingWorker", args: "-t local/modelingWorker"
      docker.build_image "/vagrant/environment/contouringWorker", args: "-t local/contouringWorker"
    end
    config.vm.provider "virtualbox" do |virtualbox|
      virtualbox.memory = 2048
    end
  end

  # Perform one-time configuration of Docker provider to specify
  # location of Vagrantfile for host VM; comment out this section
  # to use default boot2docker box
  config.vm.provider "docker" do |docker|
    docker.force_host_vm = true
    docker.vagrant_machine = "#{DOCKER_HOST_NAME}"
    docker.vagrant_vagrantfile = __FILE__
  end

  # Iterate through the entries in the YAML file
  containers.each do |container|
    config.vm.define container["name"] do |cntnr|
      # Disable synced folders for the Docker container
      # (prevents an NFS error on "vagrant up")
      # cntnr.vm.synced_folder ".", "/vagrant", disabled: true
      # Configure the Docker provider for Vagrant
      cntnr.vm.provider "docker" do |docker|
        # Specify the Docker image to use, pull value from YAML file
        docker.image = container["image"]
        docker.build_dir = container["build_dir"]
        docker.build_args = container["build_args"] || []
        docker.create_args = container["create_args"] || []
        #docker.has_ssh = true
        # Specify port mappings, pull value from YAML file
        # If omitted, no ports are mapped!
        docker.ports = container["ports"] || []
        # Mount volumes that are available in the Docker host
        docker.volumes = container["volumes"] || []
        docker.remains_running = container["remains_running"] || true
        # Specify a friendly name for the Docker container, pull from YAML file
        docker.name = container["name"]
      end
    end
  end

  # Port for logging into the host VM
  config.ssh.port = 2222
end
I don't want to use Vagrant any more and want to build the Docker containers on my machine itself. How can I do that?
You can create a Dockerfile and use docker build to create an image:
http://docs.docker.com/engine/userguide/dockerimages/
Once you have an image, you can use docker run to create and start the container.
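For instance, a minimal sketch mirroring one of the build_image lines from your Vagrantfile (the container name and port mapping are just placeholders):
# build an image from the Dockerfile in ./environment/base and tag it
docker build -t local/base ./environment/base
# create and start a container from that image, mapping a port
docker run -d --name base-container -p 8080:8080 local/base
# check that it is running
docker ps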

how to create a security group with dynamic IP address on amazon web service

I need to run an instance and access it with my IP address, but the problem is that my ISP changes my IP address every day. Please help me: how do I create a security group so that my instance remains accessible even if my IP changes?
Thanks in advance.
Here's a way to restrict AWS security group to your dynamic IP address for SSH.
You can write a cronjob to regularly repeat the steps below:
Fetch external IP address (e.g. http://checkip.amazonaws.com/)
Fetch security group details using AWS SDK
Loop through all security rules, check for port 22
If the rule's IP address does not match the one from step 1, update it (a minimal sketch of these steps follows below).
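A minimal sketch of those steps using the AWS CLI instead of an SDK (the security group ID is a placeholder, and the JMESPath query assumes a single port-22 rule in the group):
#!/bin/bash
SG_ID="sg-0123456789abcdef0"   # placeholder security group ID
MY_IP="$(curl -s https://checkip.amazonaws.com)/32"
# current CIDR allowed on port 22 in that group
OLD_IP=$(aws ec2 describe-security-groups --group-ids "$SG_ID" \
  --query "SecurityGroups[0].IpPermissions[?ToPort==\`22\`].IpRanges[0].CidrIp" --output text)
# only touch the group if the address actually changed
if [[ "$OLD_IP" != "$MY_IP" ]]; then
  if [[ -n "$OLD_IP" && "$OLD_IP" != "None" ]]; then
    aws ec2 revoke-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$OLD_IP"
  fi
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP"
fi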
Create a security group with port 22 ingress access from your current ip.
Add a Tag to the security group with the key ssh-from-my-ip and the value true
Whenever your IP changes run this script (or run it periodically via cron)
#! /bin/bash
# This script makes it easier to maintain security groups that allow SSH access
# from a computer with a dynamic IP, such as a computer on a home network or ISP.
#
# Using the script will allow you to SSH to an EC2 without having to allow
# access to the whole world (0.0.0.0/0). If you run this script whenever your IP
# changes then the security groups in your account specified by your AWS profile
# will be updated.
#
# The script will find any security groups for your current profile that are
# tagged with a Tag with a Key of "ssh-from-my-ip" and a case insensitive value
# of "true" or "yes".
#
# For each security group found it will revoke any existing tcp ingress on
# port 22 and authorize ingress on port 22 for your current IP.
#
# Dependencies - AWS CLI and jq
# need my current ip
MY_IP=$(curl --silent https://checkip.amazonaws.com)
echo "Your IP is ${MY_IP}"
# need security group id(s) and existing CIDR for the SG
pairs=$(aws ec2 describe-security-groups | jq -c '.SecurityGroups[]? | select( (.Tags[]? | select(.Key == "ssh-from-my-ip") | .Value | test("true|yes"; "i"))) | if .IpPermissions | length == 0 then {sg: .GroupId, cidr: null } else {sg: .GroupId, cidr: .IpPermissions[].IpRanges[].CidrIp} end')

for p in $pairs
do
  SG=$(echo "$p" | jq -r '.sg')
  OLD_CIDR=$(echo "$p" | jq -r '.cidr')
  echo "Updating security group ${SG}"
  if [[ $OLD_CIDR != 'null' ]]
  then
    echo "Revoking ingress permission for ${OLD_CIDR} in security group ${SG}"
    # remove the existing ingress permission
    aws ec2 revoke-security-group-ingress \
      --group-id "${SG}" \
      --protocol tcp \
      --port 22 \
      --cidr "${OLD_CIDR}"
  fi
  # authorize my new IP CIDR
  NEW_CIDR="${MY_IP}"/32
  echo "Authorizing ingress permission for ${NEW_CIDR} in security group ${SG}"
  aws ec2 authorize-security-group-ingress --group-id "${SG}" --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "'"${NEW_CIDR}"'", "Description": "Rule0"}]}]'
done
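One way to run the script above automatically is from cron, assuming it is saved as, say, update-ssh-sg.sh and is executable:
# crontab entry: check every 15 minutes whether the public IP has changed
*/15 * * * * /path/to/update-ssh-sg.sh >> /tmp/update-ssh-sg.log 2>&1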
#!/bin/bash
# User specific data:
SECURITY_GROUP_NAME="" # Setup here your group name
REGION=${1:-"us-east-1"} # Change default region if needed
USER=`aws iam get-user --query "User.UserName" | tr -d '"'`
RULE_DESCRIPTION='DynamicIP'$USER
echo 'User: '$RULE_DESCRIPTION', Region: '$REGION', Security group: '$SECURITY_GROUP_NAME

checkip () {
  OLD_CIDR_IP=`aws ec2 describe-security-groups --region $REGION --query "SecurityGroups[?GroupName=='$SECURITY_GROUP_NAME'].IpPermissions[*].IpRanges[?Description=='$RULE_DESCRIPTION'].CidrIp" --output text`
  NEW_IP=`curl -s http://checkip.amazonaws.com`
  NEW_CIDR_IP=$NEW_IP'/32'
  if [[ $OLD_CIDR_IP != "" ]] && [[ $OLD_CIDR_IP != $NEW_CIDR_IP ]]; then
    echo "Revoking $OLD_CIDR_IP"
    aws ec2 revoke-security-group-ingress --region $REGION --group-name $SECURITY_GROUP_NAME --protocol tcp --port 22 --cidr $OLD_CIDR_IP --output text >> /dev/null
  fi
  if [[ $NEW_IP != "" ]] && [[ $OLD_CIDR_IP != $NEW_CIDR_IP ]]; then
    echo "Setting up new ip $NEW_IP"
    aws ec2 authorize-security-group-ingress --region $REGION --group-name $SECURITY_GROUP_NAME --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "'$NEW_CIDR_IP'", "Description": "'$RULE_DESCRIPTION'"}]}]'
  fi
  sleep 30
  checkip
}
checkip
You can use a Source CIDR of 0.0.0.0/0 to allow universal access.
You could restrict it to your ISP's address space by looking up their allocations or just monitoring what IP addresses you end up with.
To do it properly and restrict access to a single dynamic IP address, you could write an application that monitors your public IP address and, when it changes, calls the EC2 API AuthorizeSecurityGroupIngress method and deletes the old address with RevokeSecurityGroupIngress.
