Installing Arango Cluster on Gcloud

I am trying to install an ArangoDB cluster on Google Cloud. I run the install script, and at every attempt I run up against an issue with SSH onto the server. In the script there is an SSH user "core", and the script is looking for a password for this user so it can SSH onto the server and set it up. I have no idea what that password could be and have searched everywhere.
I am running the script from my local machine using gcloud SDK.
Any thoughts welcome!

Related

debug azure webapp using ssh

I am working on a new webapp in the Azure cloud.
The challenge is that I am working with a new Python module that I don't know that well, PySpice. PySpice is an interface to the program Ngspice.
On my Windows PC it works fine, but not in the cloud. So I would like to be able to debug without pushing and then waiting 25 minutes for each build.
Right now I am using SSH to connect to the webapp. Then I can create a simple Python script to see if I can get the connection between PySpice and Ngspice to work. The challenge I have is that when I run Python in the SSH session, it uses a different environment than the webapp, i.e. all the modules in requirements.txt are not available. So how can I change the environment to be able to debug?
I have created an Azure App Service with Python version 3.8, but when I check the version in Azure SSH it shows me a different version.
To install the latest version in the Azure SSH session, run the commands below:
apt update
apt install python3.9
python3.9 --version
Run the command below in Azure Cloud Shell (Bash) to change the Python version of the Azure App Service:
az webapp config set --resource-group MyRGName --name WebAppName --linux-fx-version "PYTHON|3.9"
To check the updated version, run the command below:
az webapp config show --resource-group MyRGName --name Python4Nov --query linuxFxVersion
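If the update succeeded, that query should print the runtime string set above:
"PYTHON|3.9"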
To remote debug an Azure Linux App Service, we need to open a TCP tunnel from the development machine to the Azure App Service.
Configure for SSH and Remote Debugging
In the Azure CLI, run the command below:
az webapp create-remote-connection --resource-group MyRG -n WebAppName
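The command opens the tunnel and prints a local port. You can then SSH through that tunnel; a minimal sketch, assuming the command reported port 50022 (use whichever port it prints; per the App Service docs, the user is root and the password is Docker!):
ssh root@127.0.0.1 -p 50022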
References taken from MSDoc

How to configure docker swarm using jenkins?

I have got an assignment. The assignment is: "Write a shell script to install and configure Docker Swarm (one master/leader and one node) and automate the process using Jenkins." I am new to this technology and finding it difficult to proceed. Can anyone explain the step-by-step process of how to proceed?
@Rajnish Kumar Singh, have you tried to check resources online? I understand you are very new to this technology, but googling some keywords like
what is docker swarm
what is jenkins
etc. would definitely help.
Having said that, you basically need to do the below set of steps to complete your assignment.
Pre-requisites
2 or more Ubuntu 20.04 servers
(You can use any Linux distro, like Ubuntu, Red Hat, etc., but make sure to change your install and execute commands accordingly. Here we need two nodes, mainly to configure the master and worker node cluster.)
E.g.:
manager --- 132.92.41.4
worker --- 132.92.41.5
You can create these nodes with any public cloud provider, like AWS EC2 instances or GCP VMs.
Next, you need to do the below set of steps (a sketch of the swarm initialization follows the reference link below):
Configure Hosts
Install Docker-ce
Docker Swarm Initialization
You can refer to this article for more info: https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/
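As a rough sketch of the initialization step, using the example IPs above and assuming Docker is already installed on both nodes:
# on the manager (132.92.41.4)
sudo docker swarm init --advertise-addr 132.92.41.4
# the output prints a ready-made 'docker swarm join --token ...' command
# on the worker (132.92.41.5), paste that join command, e.g.
sudo docker swarm join --token <token-from-init-output> 132.92.41.4:2377
# back on the manager, verify that both nodes are listed
sudo docker node ls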
This completes the first part of your assignment.
Next, you can create one small shell script and include all those install and configuration commands in it. A shell script is basically a collection of Linux commands; instead of running each command separately, you run the script alone and the whole setup is done for you.
You can create a small script using the touch command:
touch docker-swarm-install.sh
Give the script the proper privileges to make it executable:
chmod +x docker-swarm-install.sh
Next, include in the script all the install + configure commands you used earlier for the Docker Swarm setup (you can refer to the link shared above).
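For illustration, a minimal sketch of such a script, assuming Ubuntu and Docker's convenience installer (adapt the package commands and the advertise address to your own setup):
#!/bin/bash
# docker-swarm-install.sh - install Docker, then make this node the swarm manager
set -e
curl -fsSL https://get.docker.com | sudo sh
sudo docker swarm init --advertise-addr 132.92.41.4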
Now, when your script is ready, you can configure it in a Jenkins job; whenever the Jenkins job runs, the script will be executed and the Docker Swarm cluster will be created.
You need a Jenkins server. Jenkins is open-source software; you can install it on any public cloud instance (e.g. AWS EC2).
Reference : https://devopsarticle.com/how-to-install-jenkins-on-aws-ec2-ubuntu-20-04/
Next, once the installation is complete, you need to configure a job in Jenkins.
Reference : https://www.toolsqa.com/jenkins/jenkins-build-jobs/
Add your 'docker-swarm-install.sh' as a build step in the created job (see the sketch after the reference below).
Reference : https://faun.pub/jenkins-jobs-hands-on-for-the-different-use-cases-devops-b153efb483c7
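In a freestyle job this is typically an "Execute shell" build step; a minimal sketch, assuming the script sits in the job's workspace:
chmod +x docker-swarm-install.sh
./docker-swarm-install.sh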
If all the setup is successful, then when you run your Jenkins job your Docker Swarm cluster should get created.

Can I develop with VS Code in containers on a remote host running Windows/WSL2?

Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container-based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation and am able to start the docker daemon without logging directly into Windows by creating a Task to start the daemon automatically, as described here: https://stackoverflow.com/a/59467740/10692741
However when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via a SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host and am unable to connect to Docker as well.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the docker containers in the VS Code extension. I am able to start them, stop them, and enter them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have affected this over the past couple of days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine
Also I want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine, but from the Windows local machine I am able to SSH into the remote Windows machine yet cannot open a remote connection in VS Code, for some reason.
OK, I was able to get this working using the port/socket forwarding technique. For the sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
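For reference, a sketch of that key step (PowerShell on the workstation, using the OpenSSH defaults):
ssh-keygen -t ecdsa
# then append the contents of ~\.ssh\id_ecdsa.pub to ~/.ssh/authorized_keys on the remote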
On the workstation, I started the OpenSSH Authentication Agent service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to ssh from PowerShell on the local to Ubuntu/WSL on the remote with the port forwarding. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (mainly with a -o "ProxyCommand ..." option), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
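For reference, the basic shape of that tunnel command from the linked doc, minus my ProxyCommand (user and host are placeholders):
ssh -NL localhost:23750:/var/run/docker.sock user@remote-host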
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode; they were just part of my process to make sure everything was up and running (since, as you might expect, there were several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker Desktop daemon automatically at startup via Task Scheduler, but you don't mention anything about the WSL2 instance itself. However, since you are able to ssh into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates its instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem.

Using Jenkins to SSH into EC2 Ubuntu instance and run shell scripts

I have installed Jenkins on my local machine, I have created my own EC2 instance, and I can ssh into the instance and run some shell scripts to shut down the Wildfly server installed on it.
This is what I do when I run it manually on my Mac:
I open my Mac terminal and type
ssh -i /Users/xxx/tools/xxxx.pem ubuntu@10.206.xxx.xx
This logs in to my instance, and then I type:
cd /srv/wildfly-10.1.0.Final/bin
sudo -s
source /etc/profile
./jboss-cli.sh --connect command=:shutdown
The screen will output
{"outcome" => "success"}
Now I want to use Jenkins: when I click the build button, it should ssh into that instance and run these shell scripts for me. The expected output is the same as when I run them after sshing into the instance myself.
My question is: what steps should I follow after I log in to my local Jenkins environment at localhost:8080?
Create a New Item, but which one? Is there some plugin I can use? Where do I put my shell scripts, and will they run successfully?
A guide would be helpful, thanks a lot!
Addition:
When I try to log in using my ssh command, I get this error:
Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed.
Too many questions to answer in one post, but this should get you started.
SSH from Jenkins to your EC2 instance should be passwordless, so you need to set up the keys in Jenkins: use the credentials manager and create a credential by pasting in the private key.
https://www.cloudbees.com/blog/using-ssh-jenkins
Refer to remote command execution over SSH for the rest of the task. You will find tons of guides on how to do this, but this should give you an idea: https://www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/
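Putting it together, a sketch of what the job's "Execute shell" build step might look like (paths and host are taken from your question; -o StrictHostKeyChecking=no sidesteps the "Host key verification failed" error on a first connect, at the cost of skipping host-key verification; the interactive sudo -s session is collapsed into a single command):
ssh -o StrictHostKeyChecking=no -i /Users/xxx/tools/xxxx.pem ubuntu@10.206.xxx.xx "cd /srv/wildfly-10.1.0.Final/bin && sudo bash -c 'source /etc/profile && ./jboss-cli.sh --connect command=:shutdown'"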
For the question on job type, at this point just go with a freestyle job. Later, you may plan for fancier stuff.
You need to paste the PEM file contents into the field where it asks for the Private Key.

FreeNX(nomachine) unable to connect after cloning of a working ubuntu EC2 instance

I previously set up an EC2 instance on Ubuntu 10.04 with the necessary binaries to allow SSH and, more importantly, FreeNX (NoMachine) to work from my macOS 10.6 machine.
As this was done on a micro instance, I was keen to try a small instance today, so I created an AMI image from the AWS Management Console (browser) and launched a new small instance from that image with the exact same keypair and security settings.
Expecting the instance to work exactly the same (except much faster), I tried to connect to it using SSH and FreeNX again.
Result:
SSH is working fine and my environment looks exactly the same.
NX is unable to connect; it complains that the username/password is incorrect.
I wonder why this happens, since I made an exact clone of the EC2 instance and I can connect fine using NX to the previous instance.
I had the same issue, and after a lot of searching I fixed it. It seems FreeNX lost the usernames and passwords. I fixed it by doing the following:
Log in with PuTTY as the ubuntu user, then:
cd /etc/nxserver
sudo vim node.conf
set ENABLE_PASSDB_AUTHENTICATION="1" and save the file
then
sudo nxserver --adduser xxxxxx
sudo nxserver --passwd yyyyyy
sudo nxserver --restart
After that I was able to log in using NoMachine with the username and password I had just set.
