Azure App Service Docker Linux Deployment - Where are my files?

I have an Azure CI pipeline that deploys a .NET Core API to a Linux Docker image and pushes it to our Azure Container Registry. The files are deployed to /var/lib/mycompany/app using docker-compose and a Dockerfile. The image is then used by an App Service which provides our API. The app starts fine and works, but if I go to Advanced Tools in the App Service and open a bash session, I can see all the log files generated by Docker, yet none of the files I deployed in the locations I deployed them to. Why is this, and where can I find them? Is it an additional volume somewhere, a symbolic link, a Docker layer I need to access by some mechanism, a host of some sort, or black magic?
Apologies for my ignorance.
All the best,
Stu.

Opening a bash session using the Advanced Tools opens the session on the underlying VM running your container, not inside the container itself. If you want to reach your container, you need to install an SSH server in it and use the SSH tab in the Advanced Tools, or the Azure CLI:
az webapp create-remote-connection --subscription <subscription-id> --resource-group <resource-group-name> -n <app-name> &
How to configure your container
How to open an SSH session
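For reference, the container-side pieces that "configure your container" asks for boil down to a few Dockerfile lines. This is only a sketch, assuming a Debian-based image; the app DLL name is a placeholder, while the root password "Docker!" and port 2222 are the values the App Service SSH bridge expects (see the linked doc for the exact sshd_config):

# Install an SSH server and set the password the App Service SSH bridge expects
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "root:Docker!" | chpasswd

# sshd must listen on port 2222; copy in a sshd_config like the one from the linked doc
COPY sshd_config /etc/ssh/
EXPOSE 80 2222

# Start sshd alongside the API (MyApi.dll is a placeholder for your entry assembly)
ENTRYPOINT ["/bin/sh", "-c", "service ssh start && dotnet MyApi.dll"]

Once the image exposes SSH on 2222, the SSH tab in Advanced Tools (or the tunnel opened by the az command above) drops you inside the container itself, where /var/lib/mycompany/app and the rest of your deployed files are visible.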

Related

Access local process from local cluster

I have a local Kubernetes Cluster running under Docker Desktop on Mac. I am running another docker-related process locally on my machine (a local insecure registry). I am interested in getting a process inside the local cluster to push/pull images from the local docker registry.
How can I expose the local registry to be reachable from a pod inside the local Kubernetes cluster?
A way to do this would be to have both the Docker Desktop Cluster and the docker registry use the same docker network. Adding the registry to an existing network is easy.
How does one add the Docker Desktop Cluster to the network?
As I mentioned in the comments, I think what you're looking for is covered in the documentation here. You would have to add your local insecure registry under the insecure-registries value in Docker Desktop. Then, after a restart, you should be able to use it.
Deploy a plain HTTP registry
This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.
Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. If you use Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences (Mac) or Settings (Windows), and choose Docker Engine.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
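After saving the change and restarting Docker, a quick way to check that the setting was picked up (the output layout varies a little between Docker versions, but the registry should appear under "Insecure Registries" in the Server section):

docker info | grep -A 3 "Insecure Registries"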
I also found a tutorial for this on Medium using macOS. Take a look here.
Is running the registry inside the Kubernetes cluster an option?
That way you can use a NodePort service and push images to an address like "localhost:9000/myrepo".
This is significant because Docker allows insecure (non-SSL) connections to localhost without any extra configuration.
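Either way, once an insecure registry is reachable on localhost, using it is just a matter of tagging. A minimal sketch (the registry port and image name here are placeholders):

# Run a plain-HTTP registry locally; 5000 is the registry image's default port
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag an existing image with the registry address and push it
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage

Pods in the cluster would then pull the image using whatever address the registry is reachable at from the node, which is the part the insecure-registries / NodePort discussion above is about.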

Docker DNS for Service Discovery to resolve a Windows Container's address by name does not work consistently

Working with Docker Windows Containers, I want to go beyond running just one Docker container for an app. As described in the Microsoft docs under the headline "Docker Compose and Service Discovery":
Built in to Docker is Service Discovery, which handles service registration and name to IP (DNS) mapping for containers and services; with service discovery, it is possible for all container endpoints to discover each other by name (either container name, or service name).
And because docker-compose lets you define services in its YAML files, these should be discoverable (e.g. pingable) by their names (keep in mind the difference between services and containers in docker-compose). This blog post by Microsoft provides a complete example with the services web and db, including full source with the needed docker-compose.yml in the GitHub repo.
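For illustration, such a compose file looks roughly like this (image names here are placeholders; Windows containers typically attach to the default nat network):

version: '3.2'
services:
  web:
    image: myorg/web   # placeholder
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: myorg/db    # placeholder
networks:
  default:
    external:
      name: nat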
My problem is: the Docker Windows containers only sometimes "find" each other, and sometimes not at all. I checked them with docker inspect <container-id> and the aliases db and web are present there. But when I PowerShell into one container (e.g. into one web container via docker exec -it myapps_web_1 powershell) and try a ping db, it works only occasionally.
And let me be clear here (because IMHO the docs are not): the problem is the same in non-docker-compose scenarios. Building an example app without compose, the problem also appears with just plain old container names instead of docker-compose services!
Any ideas on that strange behavior? For me this scenario gets worse as more apps come into play. For more details, have a look at https://github.com/jonashackt/spring-cloud-netflix-docker, where I have an example project with Spring Boot & Spring Cloud Eureka/Zuul and 4 docker-compose services, where weatherbackend and weatherbackend-second are easily scalable - e.g. via docker-compose scale weatherbackend=3.
My Windows Vagrant box is built via packer.io and is based on the latest Windows Server 2016 Evaluation ISO. The necessary Windows Features and the Docker/docker-compose installation are set up with Ansible.
Without a fix for this problem, Docker Windows Containers are mostly unusable for us at the customer's site.
After a week or two of trying to solve this problem, I finally found the solution. Starting by reading docker/for-win/issues/500, I found a link to this multi-container example application's source, where one of the authors documented the solution as a side note, calling it:
Temporary workaround for Windows DNS client weirdness
Putting the following into your Dockerfile(s) will fix the DNS problems:
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
RUN set-itemproperty -path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord
(to learn how the execution of PowerShell commands inside Dockerfiles works, have a look at the Dockerfile reference)
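If you don't want to rebuild the image straight away, the same registry tweak can presumably also be applied from the host to an already running container (container name taken from the example above; you may need to flush the DNS cache or restart the container for it to take effect):

docker exec myapps_web_1 powershell -Command "Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord; Clear-DnsClientCache"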
The problem is also discussed here, and the solution will hopefully find its way into an official Docker image (or at least into the docs).
I found that I needed to open TCP port 1888 to make the DNS work immediately. Without this port open, I had to connect to the (Windows, in my case) container and run Clear-DnsClientCache in PowerShell each time the DNS changed (also during the initial swarm setup).

Kitematic or other GUI based options to connect to a remote docker host

I have installed CoreOS on a laptop to use it as a Docker host. I really like Kitematic on my Mac for creating and managing containers. I don't see an option to connect to the remote Docker engine on CoreOS using Kitematic. Are there other tools I can use to connect to a remote Docker host and manage it with a GUI rather than the command line?
I also like Kitematic a lot! As an alternative on CoreOS, you can try docker-ui and its evolution, Portainer.
They are both Docker containers that help you find and run Docker images and inspect Docker volumes, networks, and container stats.
You can also launch new containers directly through the web UI. There is more information in this good review of Portainer's capabilities.
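Getting Portainer up on the CoreOS host is essentially a one-liner (a sketch; 9000 is the UI port the image documents, and newer releases ship as portainer/portainer-ce):

# Run the Portainer UI and give it access to the local Docker engine via the socket
docker run -d -p 9000:9000 --name portainer \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer

Then browse to http://<coreos-host>:9000 from the Mac.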
Rancher UI from Rancher Labs may also be worth looking at. It is designed more as a Docker orchestration tool (for when you operate a Docker swarm cluster, for instance).

Can docker containers be run on live web servers?

I read that Heroku uses what they call Cedar containers in their infrastructure, which allows developers to use containerisation in their apps hosted on Heroku. If I'm not mistaken, that is; I'm new to all this.
Is it possible to run Docker containers on web servers and integrate them as part of your website? Or at least come up with a method of converting Docker containers into Cedar containers, or something similar that is compatible with the web server?
On your own private server I see no reason why you couldn't do this, but when it comes to commercial web hosting services, where does this stand?
You are not running "Docker on a web server", but rather "Docker with a web server".
That is, you are supposed to package your app into a Docker image together with some kind of web server.
After that, the app in this container can be reached like a regular web site. You can also host this container on some Docker host (for example, Docker Cloud, sloppy.io, ...).
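"Docker with a web server" can be as small as this sketch of a Dockerfile (nginx serving a static site; the site/ directory is a placeholder for your build output):

FROM nginx:alpine
# Copy the site's files into nginx's document root ("site/" is a placeholder)
COPY site/ /usr/share/nginx/html/
EXPOSE 80

Build and run it like any other image (docker build -t mysite . followed by docker run -d -p 80:80 mysite) and the container answers HTTP on port 80 like a regular web site.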
As for Heroku, maybe you'll find this helpful.

Continuous Integration workflow with docker swarm

Here's my setup; the output below was taken from docker-machine ls. I'm using docker-machine to provision the swarm.
NAME             ACTIVE      DRIVER         STATE     URL                   SWARM                     DOCKER    ERRORS
cluster-master   * (swarm)   digitalocean   Running   tcp://REDACTED:2376   cluster-master (master)   v1.11.1
kv-store         -           digitalocean   Running   tcp://REDACTED:2376                             v1.11.1
node-1           -           digitalocean   Running   tcp://REDACTED:2376   cluster-master            v1.11.1
node-2           -           digitalocean   Running   tcp://REDACTED:2376   cluster-master            v1.11.1
Right now I'm searching for a way to set up my CI/CD workflow. Here is my initial idea:
Create an automated build on Docker Hub (linked to Bitbucket)
Once changes are pushed, trigger a build on Docker Hub
Testing will be done on Docker Hub (npm test)
Create a webhook on Docker Hub once the build succeeds
The webhook will point to my own application, which will then push the changes to the swarm (a sketch of that step follows below)
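That last step might look roughly like this. A sketch only: it assumes the standalone swarm provisioned via docker-machine shown above, and the image and compose file names are placeholders:

# Point the local Docker client at the swarm master
eval $(docker-machine env --swarm cluster-master)

# Pull the freshly built image and redeploy the stack
docker pull myrepo/myapp:latest
docker-compose -f docker-compose.prod.yml up -d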
Questions:
Is it okay to run your testing on docker hub or should I rely on another service?
If I do rely on another service, which one would you recommend?
My main problem is pushing the changes to the docker swarm. Should I set up my docker swarm on a remote machine and host the application there?
The first part of the process all looks fine. Where it gets complicated is managing the deployed production containers.
Is it okay to run your testing on docker hub or should I rely on another service?
Yes, it should be fine to run tests on Docker Hub, assuming you don't need further integration tests.
I need to integrate my containers with Amazon services and have a fairly non-standard deployment, so this part of the testing has to be done on an Amazon instance.
My main problem is pushing the changes to the docker swarm. Should I set up my docker swarm on a remote machine and host the application there?
If you're just using one machine, you don't need the added overhead of using swarm. If you're planning to scale to a larger multi-node deployment, then yes, deploy to a remote machine, because you'll discover the gotchas around using swarm sooner.
You need to think about how you retire old versions and bring the latest version of your containers into the swarm, which is often called scheduling.
One simple approach that can be used is:
Remove traffic from old running container
Stop old running container
Pull latest container
Start latest container
Rinse and repeat for all running containers.
In Docker swarm mode this is done by declaring a service and then updating its image; the update runs as a set of tasks that you can watch. For more detail on this process, see Apply rolling updates to a service in the swarm docs and, for how to do this on Amazon, Updating Docker containers in ECS.
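In swarm mode that whole remove/stop/pull/start cycle collapses into a couple of commands. A sketch (service and image names are placeholders):

# Declare the service once
docker service create --name api --replicas 3 -p 80:80 myrepo/api:1.0

# Roll out a new image; swarm replaces tasks one at a time with a pause in between
docker service update --image myrepo/api:1.1 --update-parallelism 1 --update-delay 10s api

# Watch the tasks being replaced
docker service ps api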
