Deploy a Docker-contained backend on Windows 11 with a single file click

The task: deploy a Django REST backend via Docker on a clean Windows 11 machine. It must happen when the client clicks a single application / bat file; everything should be installed and set up automatically.
The Docker-related files are set up correctly and the test backend works well on an Ubuntu machine. But how do I automate the deployment on Windows?
Using a virtual machine with Linux is not acceptable either.
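One possible shape for the single-click file is a .bat that installs Docker Desktop (e.g. via winget) when it is missing, and otherwise brings the stack up with Compose. This is only a sketch under assumptions: winget is available on the client's Windows 11, a docker-compose.yml sits next to the script, and the very first run still needs a manual Docker Desktop start to accept its service agreement.

```bat
@echo off
REM One-click deploy sketch (assumes docker-compose.yml next to this file).

where docker >nul 2>&1
if errorlevel 1 (
    echo Docker not found - installing Docker Desktop via winget...
    winget install --id Docker.DockerDesktop -e --accept-package-agreements --accept-source-agreements
    echo Start Docker Desktop once, accept its terms, then run this file again.
    pause
    exit /b 0
)

REM Run Compose from the script's own folder, detached, rebuilding images.
cd /d "%~dp0"
docker compose up -d --build
pause
```

A fully unattended first run is hard to guarantee, since Docker Desktop itself wants an interactive first launch; the sketch handles that by asking the user to re-run the file once.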

Related

How are Docker Desktop proxy settings on Windows propagated to Docker?

I am on a corporate Windows laptop and I want to start experimenting with Docker. Being a corporate machine, everything needs to go through the corporate proxy.
I installed Debian on WSL and then Docker Desktop, which installed its components in the Debian WSL VM. My first priority, however, was to test Docker in WSL directly rather than through Docker Desktop. So I set out to read the Docker docs and pull the docker/getting-started image from the Debian terminal. That, however, failed because it did not use the network proxy.
Desktop Docker docs state that setting the proxy settings on Docker Desktop will propagate the proxy settings to Docker itself. Indeed, I set the proxy settings on Docker Desktop, and I was now able to properly download my image from inside Debian.
Since I want full control of Docker through the Debian terminal rather than Docker Desktop, I want to understand how the proxy settings propagate to Docker inside WSL. I imagined that Docker Desktop altered some configuration file inside Debian, but grepping the whole system for the proxy IP turned up nothing. So my question is: how does Docker Desktop tell Docker which proxy to use?
As far as I know (and I'm not 100% sure, as I haven't worked with Docker in a while):
When you start the Docker service in WSL, it runs the /etc/init.d/docker script. When you set the company proxy manually in Docker Desktop, the reload sequence is:
Stopping the Docker service
Updating the configuration script at /etc/init.d/docker
Starting the service again, now with the new script
To verify this, you can check the contents of the /etc/init.d/docker script.
As an alternative to having the scripts edited for you, you can export the proxy configuration in WSL yourself and check whether it works without adding the proxy settings in Docker Desktop.
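For example, the WSL-side alternative could look like this. The proxy address here is a placeholder; your corporate proxy host and port will differ:

```shell
# Hypothetical corporate proxy address; replace host/port with your own.
export HTTP_PROXY="http://proxy.corp.example:8080"
export HTTPS_PROXY="$HTTP_PROXY"
export NO_PROXY="localhost,127.0.0.1"

# Note: the daemon, not just the client, needs to see these variables for
# pulls to work, so put the same exports somewhere the init script picks
# them up (e.g. /etc/default/docker) and then restart the service:
#   sudo service docker restart
echo "proxy set to $HTTP_PROXY"
```

Whether the plain environment variables are enough depends on how the daemon is started in your WSL distro, which is exactly what checking /etc/init.d/docker will tell you.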

Docker Windows version container compatibility

There is one question that doesn't show up in the forums but which deserves to be discussed, IMHO:
Why isn't it possible to pull or build Windows Docker images (e.g. nanoserver 2019) on an older host system? The official site does document that such images are not compatible to run, yes:
Version compatibility
But, as I said, "to run". I don't need to run that newer Windows container image on the older host system; I just want to pull and build it, to distribute it to a compatible system later on.
So, is there a way to handle this issue that shouldn't be one?
You missed one important thing:
Even a plain docker build uses a container: the build runs inside a container, not directly on your host machine. This is what happens during docker build:
Docker creates a temporary build container from the base image you name in the Dockerfile's FROM line.
All the instructions in the Dockerfile run inside that temporary build container.
The temporary build container is saved as an image.
So, since you have already seen Microsoft's version compatibility rules for containers, you can now see why build is subject to them too: build also creates a container (this temporary container is simply removed after the build).
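To make that concrete, here is a minimal sketch (the image tag and file names are placeholders, not from the question): even this trivial build has to start a container from the nanoserver base image, which is exactly the step where an older host fails.

```dockerfile
# Hypothetical minimal Windows Dockerfile.
FROM mcr.microsoft.com/windows/nanoserver:1809
# Each of the following instructions runs inside a temporary build container
# created from the base image above -- not directly on the host.
COPY app/ C:/app/
CMD ["C:\\app\\run.exe"]
```

Pulling the image alone may succeed, but any instruction after FROM forces a container to start, so the host/base-image compatibility rule kicks in at build time too.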
UPDATE:
The whole story is:
YES, on Linux there is no problem with an old host OS building or running a newer OS image/container, because host and container share the same kernel and the rootfs is provided by the container itself.
BUT you are talking about Windows, and the official Windows documentation says the following:
Windows Server 2016 and Windows 10 Anniversary Update (both version 14393) were the first Windows releases that could build and run Windows Server containers. Containers built using these versions can run on newer releases such as Windows Server version 1709, but there are a few things you need to know before you start.
As we've been improving the Windows container features, we've had to make some changes that can affect compatibility. Older containers will run the same on newer hosts with Hyper-V isolation, and will use the same (older) kernel version. However, if you want to run a container based on a newer Windows build, it can only run on the newer host build.
The above is the reason an old Windows OS cannot run a new Windows container.
Furthermore, what I want to say is that docker build fails for exactly the same reason as docker run:
docker run $theImageName needs to start a container based on the image theImageName, and, as Microsoft says, a container based on a newer OS has to use new kernel features, so it cannot run on an old Windows host. Remember: container and host share the same kernel.
And docker build -t xxx . reads the Dockerfile, finds FROM $baseImageName, and starts a temporary container based on $baseImageName. All the instructions in the Dockerfile execute in that temporary container, not on the Docker host. Finally, the temporary build container is deleted, which is why you never see it.
So, as you see, both docker run and docker build start a container that needs the newer Windows host's features and cannot use the old Windows kernel. This is Microsoft's limitation: if you already understand it for docker run on Windows, the same reason applies to docker build.

Docker container URL not accessible through localhost or IP on Windows 10 / Docker CE / .NET Core

This is the simplest use case for using Docker on Windows to deploy a .NET Core app.
I used Visual Studio 2017 to create a .NET Core API with Docker support enabled, and the image was created successfully by Docker.
I also successfully started a new container from this image, but when I try to access the API at localhost or the IP, the API does not respond.
For more detailed steps, see this URL:
https://github.com/docker/for-win/issues/2230
Windows Version:Windows 10 Enterprise
Docker for Windows 18.03.1-ce-win65 (17513)/Channel:Stable
Steps:
Created Dotnet Core API app in Visual Studio 2017 community.
Selected "Enable Docker Support" checkbox.
Dotnet Core project Created.
Image got created by Docker through dockerfile.
Started a new container with the command
docker run -it -p 8085:80 coreapidemo:dev
API NOT accessible through either localhost or IP:
http://172.22.236.61:8085/api/values
http://172.22.236.61/api/values
http://localhost:8085/api/values
Update 1:
Thanks @edwin for your help. Using
docker exec -it mycontainer powershell
I can see that the directory c:\app does not contain the necessary code (aspnetapp.dll) to run the app.
I then built the image from https://github.com/dotnet/dotnet-docker/blob/master/samples/aspnetapp/Dockerfile and started a container from it.
With that image I was able to successfully access the app URL at http://localhost:8000.
This means that the Visual Studio tooling for Docker is not building the image properly. Can anyone help?
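For reference, the linked sample works because it uses a multi-stage Dockerfile along these lines (the image tags and the aspnetapp.dll name here are illustrative placeholders, not the exact sample contents), so the published output always lands inside the runtime image rather than depending on what Visual Studio copied from the host:

```dockerfile
# Sketch of a multi-stage build in the style of the dotnet-docker sample.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
# Publish inside the build container, so the output never depends on
# whatever Visual Studio produced on the host machine.
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
```

By contrast, the Dockerfile that Visual Studio generates for Debug builds expects the compiled output to be mounted or copied in from the host, which is consistent with the empty c:\app you observed.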

ASP.NET 5 deploy to Windows 7 server

I'm developing a web API in .NET 4.5.1. It is built on a TeamCity CI server, but I would like to deploy it to a Windows 7 machine on the local network after every successful build.
I wanted to use the dnu publish command, but I have no idea how to use it in this case, nor how to prepare the Windows 7 machine to receive each newly built application.
This is really poorly documented for the new ASP.NET.
You need to run:
dnu publish --runtime <name of runtime or "active">
Optionally, you can also pass --no-source.
Once you do that, the bin/output folder will contain the application, its dependencies, and the runtime. Then all you have to do is copy that folder to your Win 7 machine.
Here's a script that does something similar for the MusicStore sample. We use it to deploy MusicStore on Nano Server.
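A TeamCity build step based on the answer above could then be as small as this sketch. The output folder and the target machine/share names are made up for the example; substitute your own:

```bat
REM Publish with the active runtime bundled, then mirror to the Win 7 box.
dnu publish --runtime active --no-source --out .\bin\output
robocopy .\bin\output \\WIN7-BOX\deploy /MIR
```

robocopy /MIR keeps the remote folder an exact mirror of the publish output, so stale files from previous builds are removed as well.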

Jenkins: Selenium GUI tests are not visible on Windows

When I run my Selenium tests (mvn test) from Jenkins (Windows), I see only the console output; I don't see the real browsers being opened. How can I configure Jenkins so that I can see the browsers running the tests?
I had the same problem; I found the solution after many attempts.
This solution works ONLY on Windows XP.
If you are running Jenkins as a Windows service, you need to do the following:
1) In the Windows services list, select the Jenkins service
2) Open the Properties window of the service -> Log On -> enable the checkbox "Allow service to interact with desktop"
After that, you should restart the Jenkins service.
Hope this helps you :)
UPDATE:
Actually, I'm working on an automation tool using Selenium on Windows 10. I've installed Jenkins ver. 2.207 as a Windows application (EXE file); it runs as a Windows service, and ALL drivers (Chrome, Firefox, IE) are visible during test execution WITHOUT any extra configuration of the system or Jenkins.
I found the solution. I ran Jenkins from the command prompt with "java -jar jenkins.war" instead of using the Windows installer version. Now I can see my browser-based tests being executed.
If you are already doing what @Sachin suggests in a comment (i.e. looking at the machine where Jenkins actually runs) and still do not see the browsers, then your problem may be the following:
If you run Jenkins as a service in the background, it won't open apps in the foreground. You can either run it in the foreground rather than as a service, or run it under the Local System account and check the "Allow the service to interact with desktop" option. In the latter case you may run into permission problems, though.
Update: To make sure this answer is understood properly by others: Jenkins' Windows "native" installation is not really native. It's a wrapper around Java that runs it as a service.
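If you do go the service route, the desktop-interaction flag can also be set from an elevated command prompt rather than the Services GUI. The service name jenkins is an assumption here; check yours with sc query:

```bat
REM Run the service under LocalSystem and allow desktop interaction
REM (interact requires the "own" process type), then restart it.
sc config jenkins obj= LocalSystem
sc config jenkins type= own type= interact
net stop jenkins
net start jenkins
```

Note that on Vista and later, service desktop interaction is heavily restricted by session isolation, which is why the foreground java -jar jenkins.war approach tends to be the more reliable fix.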
To interact with the desktop GUI, you should launch the slave agent via JNLP:
https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-LaunchslaveagentviaJavaWebStart
After adding the node in Jenkins (configured for Java Web Start launch), just create a startup batch script on the node machine:
java -jar slave.jar -jnlpUrl http://{Your Jenkins Server}:8080/computer/{Your Jenkins Node}/slave-agent.jnlp
(slave.jar can be downloaded from http://{Your Jenkins Server}:8080/jnlpJars/slave.jar)
See more answers here:
How to run GUI tests on a jenkins windows slave without remote desktop connection?
In the case of Windows 7, you should not install Jenkins as a Windows application, because in this more recent version Microsoft decided to give services their own hidden desktop, even if you enable the "interact with desktop" option on the Jenkins service. Instead, deploy it from a WAR file as follows:
1) Download jenkins.war from the official Jenkins site
2) Launch it from the command prompt: java -jar {directoryOfJenkinsFile}/jenkins.war
3) Now you can access the Jenkins administration UI at http://localhost:8080
Hope that helps!
This is an issue with Jenkins. On Windows it is possible to access the logged-on user's session (screen) from the system account. To make the UI tests visible, Jenkins needs to bypass UAC (User Access Control) in the background. This approach works for me with my own service running under the system account.
I also faced the same issue earlier on my local machine (Windows 10).
My test ran perfectly from NetBeans, but when I moved it to Jenkins it ran only in console mode and I was unable to view the UI.
The fix: make your local machine a Jenkins slave by creating a new slave node in Jenkins, and select that node to execute the Jenkins job.
If Jenkins was installed via the Windows installer, it shows only console output. To see the browsers, download the jenkins.war file and run java -jar jenkins.war from the command line.
Go through this site:
http://learnseleniumtesting.com/jenkins-and-continuous-test-execution/
If you have the following situation:
You are able to log in to the remote machine
You don't see the Jenkins agent window
The slave machine is accessed by many users
then try the following:
Log in to the slave machine
Go to Task Manager -> Users
Log out all the users
Then log in again
This worked for me.
