OpenWhisk cluster setup and load balancing

I am able to run OpenWhisk on my local dev machine. I would like to extend this to a production environment. Is there a concept of an OpenWhisk cluster? I am not able to find good documentation on this. How is auto load balancing achieved, etc.?

OpenWhisk is deployed via Ansible and as such can be distributed across multiple VMs in a straightforward way.
Check the README on distributed deployments for further information and guidance.

OpenWhisk uses Ansible for its deployment. I followed the approach below for my distributed setup:
1. First ensure password-less SSH connectivity to all the servers.
2. git clone https://github.com/apache/incubator-openwhisk.git
3. Add the remote_user and private_key_file values to the defaults section of the ansible.cfg file. The remote_user value sets the default SSH user. The private_key_file is required when using a private key that is not in the default ~/.ssh folder:
[defaults]
remote_user = ubuntu
private_key_file=/path/to/file.pem
4. Go to tools/ubuntu-setup and run all.sh to install all the required software.
5. Now modify the inventory file (hosts) for your first node; this can become your bootstrapper VM.
6. Check whether you are able to ping the hosts: ansible all -i environments/distributed/hosts -m ping
7. If the ping is fine, run the next command to generate the config files: ansible-playbook -i environments/distributed/hosts setup.yml
8. Install the prerequisites: ansible-playbook -i environments/distributed prereq_build.yml
9. Deploy the registry: ansible-playbook -i environments/distributed registry.yml
10. Go to the OpenWhisk home directory and run the following command to build OpenWhisk: ./gradlew distDocker -PdockerHost=:4243 -PdockerRegistry=:5000
11. Once the build is successful, run the following commands from the ansible folder to set up the database:
ansible-playbook -i environments/distributed/hosts couchdb.yml
ansible-playbook -i environments/distributed/hosts initdb.yml
ansible-playbook -i environments/distributed/hosts wipe.yml
12. Deploy OpenWhisk:
ansible-playbook -i environments/distributed/hosts openwhisk.yml
ansible-playbook -i environments/distributed/hosts postdeploy.yml
13. Now edit the hosts file for the other hosts and repeat steps 7-8 and 12.
This will create the setup on all the nodes. Once done, you can put a load balancer in front of them. For sync between the DB instances I am using CouchDB continuous replication (a sketch of one way to set that up with Ansible follows below).
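That replication can be driven from Ansible itself. The following is only a minimal sketch; the hostnames, the example database name whisk_activations, and the password variable are placeholders, not part of the setup above:

- name: Configure continuous CouchDB replication (sketch)
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Create a replication document in the _replicator database
      uri:
        url: "http://couchdb-a.example.com:5984/_replicator"
        method: POST
        user: admin
        password: "{{ couch_admin_password }}"
        force_basic_auth: yes
        body_format: json
        body:
          # replicate continuously from node A to node B; repeat per database and direction
          source: "http://admin:{{ couch_admin_password }}@couchdb-a.example.com:5984/whisk_activations"
          target: "http://admin:{{ couch_admin_password }}@couchdb-b.example.com:5984/whisk_activations"
          continuous: true
        status_code: [201, 202]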

Related

Run Ansible playbook from Cloud-Init

I have been learning Cloud-Init for several days to do an automatic deployment. To achieve this, and to apply certain configurations, I am using Ansible playbooks. The problem I have found is that I am not able to make the playbook run directly on the operating system that is being installed.
Below is the user-data file that I am using.
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: hostname
    password: "$6$cOciYeIErEet80Rv$YX8qt6vizXgcUkgIPSKD1qNZNxe77tSWOY3k/0.i8D8EpApaGNuyucxJvONmZiRj4rVM3L6EE4sLKcnzYVcMj/ "
    username: ubuntu
  storage:
    layout:
      name: direct
  locale: es_ES
  timezone: "Europe/Madrid"
  keyboard:
    layout: es
  packages:
    - sshpass
    - ansible
    - git
  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -u ubuntu -e "ansible_password=ubuntu" -e "ansible_become_pass=ubuntu"
PS: I am using Ubuntu Server 22.04, the Ansible command is temporary and only for testing and I know that I have to change the identity fields.
If you want to configure localhost, it's better to use local transport (which is -c local in command line).
Basically, change ansible call to:
ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local
This will bypass all SSH things and run locally.
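Applied to the user-data from the question, the late-commands section would then look something like this (the repository and the inventory path are kept exactly as in the question and are only illustrative):

  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    # -c local bypasses SSH entirely and runs the playbook against the machine executing it
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local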

Running ansible on ec2 instance from gitlab runner without SSH key?

I have a GitLab runner whose job it is to:
a) Create IaC using Terraform (7 EC2 instances with a defined key pair).
b) Run an Ansible playbook that will need to SSH to all 7 instances and configure Kafka.
At the moment, I have automated part a. I then SSH to one of the instances using a private key, copy the Ansible code and the private key to the instance, and then execute the following command to run Ansible:
ansible-playbook --private-key=/home/ec2-user/keyname.pem hosts.yml all.yml
This all works fine, but obviously I want to automate the running of Ansible in a GitLab runner without having to store my private key on the Docker container or in the git repo.
I have briefly investigated SSM but don't really understand how that all works.
Note: I need the key for two purposes:
1. to SSH into the first instance
2. it is referenced in the hosts.yml so that the Ansible playbook can connect to all the other instances
Thanks in advance everyone.
Cheers
Adam
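One common pattern, sketched here under assumptions rather than taken from this thread, is to keep the key out of the repo and the image by storing it in a GitLab CI/CD variable and loading it into ssh-agent inside the job; the job name, image, and variable name below are hypothetical:

configure_kafka:
  image: registry.example.com/ansible-runner:latest   # hypothetical image containing ansible and an ssh client
  script:
    - eval "$(ssh-agent -s)"
    # ANSIBLE_SSH_KEY is a CI/CD variable holding the private key contents
    - echo "$ANSIBLE_SSH_KEY" | tr -d '\r' | ssh-add -
    # skip host key prompts for the freshly created instances
    - export ANSIBLE_HOST_KEY_CHECKING=False
    - ansible-playbook -i hosts.yml all.yml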

/etc/ansible folder is not available on macOS

I used pip to install Ansible on macOS, but I cannot find the /etc/ansible folder, nor the inventory file.
I want to run my playbook in a minikube environment, but the playbook returns:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.99.105
How to solve this issue?
I looked into this matter and using Ansible for managing minikube is not an easy topic. Let me elaborate on that:
The main issue is cited below:
Most Ansible modules that execute under a POSIX environment require a Python interpreter on the target host. Unless configured otherwise, Ansible will attempt to discover a suitable Python interpreter on each target host the first time a Python module is executed for that host.
-- Ansible Docs
What that means is that most of the modules will be unusable, even ping.
Steps to reproduce:
Install Ansible
Install Virtualbox
Install minikube
Start minikube
SSH into minikube
Configure Ansible
Test
Install Ansible
As the original poster said it can be installed through pip.
For example:
$ pip3 install ansible
Install VirtualBox
Please download and install appropriate version for your system.
Install minikube
Please follow this site: Kubernetes.io
Start minikube
You can start minikube by invoking command:
$ minikube start --vm-driver=virtualbox
Parameter --vm-driver=virtualbox is important because it will be useful later for connecting to the minikube.
Please wait for minikube to successfully deploy on the Virtualbox.
SSH into minikube
It is necessary to know the IP address of minikube inside the Virtualbox.
One way of getting this IP is:
Open Virtualbox
Click on the minikube virtual machine for it to show
Enter root for account name. It should not ask for password
Execute the command $ ip a | less and find the address of the network interface. It should be in the format 192.168.99.XX
From the terminal that was used to start minikube, run the command below:
$ minikube ssh
The command above will SSH into the newly created minikube environment. The private key is stored at:
HOME_DIRECTORY/.minikube/machines/minikube/id_rsa
This id_rsa will be needed to connect to minikube.
Try to log in to minikube by invoking the command:
ssh -i PATH_TO/id_rsa docker@IP_ADDRESS
If the login succeeds, there should be no issues with Ansible.
Configure Ansible
For using ansible-playbook, 2 files will be needed:
A hosts file with information about the hosts
A playbook file describing what you want Ansible to do
Example hosts file:
[minikube_env]
minikube ansible_host=IP_ADDRESS ansible_ssh_private_key_file=./id_rsa
[minikube_env:vars]
ansible_user=docker
ansible_port=22
The ansible_ssh_private_key_file=./id_rsa setting tells Ansible to use the SSH key from that file, which is the correct key for this minikube instance.
Note that this declaration requires the id_rsa file to be in the same location as the rest of the files.
Example playbook:
- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
You can test the connection by invoking command:
$ ansible-playbook -i hosts_file ping.yaml
The above command should fail because there is no Python interpreter installed on the target.
fatal: [minikube]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to 192.168.99.101 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
There is a successful connection between Ansible and minikube, but there is no Python interpreter to back it up.
There is, however, a way to use Ansible without a Python interpreter: the Ansible documentation explains the use of the raw module, sketched below.
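For completeness, a playbook that relies only on the raw module might look like this minimal sketch (the task and command are arbitrary examples):

- name: Check the minikube host without a Python interpreter
  hosts: all
  gather_facts: no        # fact gathering would itself require Python on the target
  tasks:
    - name: Run a plain shell command over SSH via the raw module
      raw: uname -a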

Unable to connect to AWS instance even after manually adding in public key to authorized_keys

I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can SSH into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server, with a task like service: name=apache2 state=started. In my security group in the AWS console, I allowed all incoming SSH traffic on port 22, and Ansible tries to SSH through port 22, so that should not be the problem. Is there some crucial idea behind SSHing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
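For reference, the kind of playbook being described might look like this minimal sketch; the host group name and the use of the apt module are assumptions, not taken from the question:

- name: Install and start Apache
  hosts: instance
  become: yes
  tasks:
    - name: Install apache2
      apt:
        name: apache2
        state: present
        update_cache: yes
    - name: Ensure apache2 is running
      service:
        name: apache2
        state: started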
Make sure that inside ansible.cfg you have:
private_key_file = path of the private key (server-private-key)
On the host machine, don't change the default authorized_keys file. A better way is to create a user, create a .ssh directory for that user, and then inside it create a file called authorized_keys and paste your server public key there:
~/.ssh/authorized_keys
Try: ansible-playbook yourplaybookname.yml --connection=local
Ansible defaults to SSH.

How to launch 100 and more servers in Chef

I am new to Chef. I have successfully configured the Chef workstation and server.
Using the command below, I am able to launch only one instance:
knife ec2 server create --image ami-cc5af9a5 -i ram.pem --flavor m1.small -x root --groups chef-client -Z us-east-1a -r "role[webserver]"
With this command I can bootstrap only one node:
knife ec2 server create --image ami-a4827dc9 -i NVirginia.pem --flavor t2.micro -x root --groups RC-Corporation -Z us-east-1a -r "role[learn_chef_httpd]"
I want to launch and bootstrap 100+ instances, so how can I customize these commands?
knife-ec2 is aimed at relatively small scale interactive usage and is not intended for this. Look at tools like CloudFormation, SparkleFormation, and Terraform.
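For illustration, a minimal CloudFormation sketch that launches a fleet of identical instances through an Auto Scaling group might look like the following; the AMI, availability zone, and security group are taken from the question, the key pair name is a guess, and bootstrapping Chef on each node would still have to happen separately (for example via user data running chef-client):

Resources:
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-cc5af9a5
      InstanceType: m1.small
      KeyName: ram              # assumed key pair name, based on ram.pem in the question
      SecurityGroups:
        - chef-client
  WebFleet:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones:
        - us-east-1a
      LaunchConfigurationName: !Ref WebLaunchConfig
      MinSize: "100"
      MaxSize: "100"
      DesiredCapacity: "100"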
