I'm new to Docker, and I work in a corporate environment that is very locked down. In short, I have Docker installed on Windows Server 2016, but I have no access to Docker Hub at all and will not be getting access in the near future, so I cannot rely on "FROM microsoft/windowsservercore" in my Dockerfile.
Nonetheless, I need to package a .NET server application in a Docker image for evaluation purposes. Is it possible to build a windowsservercore image from scratch? I looked for the Dockerfile on GitHub for reference, and while I found the one for dotnet-framework at https://github.com/Microsoft/dotnet-framework-docker, I could not locate a windowsservercore Docker project there.
Images from Docker Hub are often maintained by a wide community and are trustworthy. If you're in a closed network that doesn't allow direct access to Docker Hub and you want to run this image on the production server anyway, pull the image on your laptop, then use docker save and docker load to export the image to a tar file and import it on the production machine.
References:
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
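The round trip might look like this (the image tag and file names are illustrative; run the commands on a machine with Docker installed):

```shell
# On a machine with Docker Hub access: pull the base image
docker pull mcr.microsoft.com/windows/servercore:ltsc2016

# Export the image, with all its layers, to a tar archive
docker save -o servercore.tar mcr.microsoft.com/windows/servercore:ltsc2016

# Copy servercore.tar to the locked-down server (USB stick, file share, etc.),
# then import it into the local image cache there
docker load -i servercore.tar
```

After the load, the image shows up in `docker images` on the offline machine and can be used in a `FROM` line as usual.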
Application background: I am trying to deploy an automation application in which the user selects an Access file and visualizes graphs computed by a Python Flask backend.
Locally, I downloaded the "Access driver" (https://www.microsoft.com/en-us/download/details.aspx?id=54920), and the application ran fine. But I am unable to deploy it on Azure.
Things that I have tried:
I tried to run this application using GitHub CI/CD, but with GitHub Actions, Azure only gives the option to run on a Linux OS, which gives me the same error (pyodbc connection).
I tried building a Docker image to eliminate this error; however, with FROM python:slim-buster in the Dockerfile, it generated a Linux-based image, which gives the same error.
I also tried a Windows OS in the Dockerfile using FROM microsoft/nanoserver, but still received an error while building the image.
I am new to all of this and think I might be making mistakes. Any help will be appreciated.
After a lot of trial and error, I was able to deploy to a Windows server on Azure.
What worked:
Deploy the application on a Windows server with the 32-bit ODBC driver (AccessDatabaseEngine.exe), not the 64-bit one (AccessDatabaseEngine_X64.exe).
You can also deploy using a Docker image, but FROM microsoft/nanoserver was not able to build any image. Instead, try FROM mcr.microsoft.com/windows/servercore:ltsc2019.
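A minimal Dockerfile along those lines might look like this (the installer file name and the /quiet silent-install switch are assumptions — verify them for your version of AccessDatabaseEngine.exe):

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Copy the 32-bit Access Database Engine installer into the image
COPY AccessDatabaseEngine.exe C:\setup\

# Silent install; the /quiet switch is an assumption -- check your installer's switches
RUN C:\setup\AccessDatabaseEngine.exe /quiet

# Installing Python and the Flask application would follow here
```

The `# escape=`` ` directive switches the Dockerfile escape character to a backtick so that Windows paths with backslashes are not treated as line continuations.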
I'm a junior web developer working in a small web agency. We work on Windows 10 with WampServer, mainly on PrestaShop, WordPress and Symfony websites. For every task I am given a "ticket", for which I must develop on a new branch (if needed). When the work is done, I merge my branch into the develop branch, which is hosted on a preproduction server; if it is considered OK, it is then merged into the master branch, which is the website in production, hosted on another server.
I was given the task to do some research on Docker and to find how it could improve our workflow.
But from what I understand so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
But given that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
Also, could Docker be of use for sharing projects between team members (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could use instantly, without any configuration, to work on his part?
But from what I understand so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
No, Docker containers are not only useful for testing.
When you build a correct workflow with Docker, you can achieve 100% parity between development, staging and production if all of them use the same Docker images.
But given that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
This preproduction server, aka a staging server, should also use Docker to run the code.
Also, could Docker be of use for sharing projects between team members (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could use instantly, without any configuration, to work on his part?
Yes. You create the base Docker images with only what is necessary to run in production, and from them you can build other Docker images that add the necessary developer tools.
Production image company/app-name:
FROM php:alpine
# install your production dependencies here
Assuming you build that Docker image with the name company/app-name, you can then base the development image on it.
Development image company/app-name-dev:
FROM company/app-name
# add only the developer tools here
Now the developer uses both images, company/app-name and company/app-name-dev, during development, while on the staging server only the company/app-name Docker image is used to run the code.
After some months of working with this flow, you may even feel confident enough to start using company/app-name to deploy the app to production, and then you have 100% parity between development, staging and production.
Take a look at the PHP Docker Stack for some inspiration; I built it with this goal in mind at my latest job, but I ended up leaving the company while we were in the process of adopting it in development.
But don't put all the services you need in one single Docker image, because that is bad practice in Docker. Instead, use one service per image: one for PHP, another for the database, another for the Nginx server, and so on. See here how several services can be composed together with Docker Compose.
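As a sketch of that one-service-per-image idea, a docker-compose.yml could look something like this (the service names, ports and versions are illustrative, and company/app-name is the production image discussed above):

```yaml
version: "3"
services:
  app:
    image: company/app-name      # the production PHP image
    volumes:
      - ./src:/var/www/html
  web:
    image: nginx:alpine          # the web server in its own container
    ports:
      - "8080:80"
    depends_on:
      - app
  db:
    image: mysql:5.7             # the database in a third container
    environment:
      MYSQL_ROOT_PASSWORD: secret
```

Running `docker-compose up` then starts the three isolated services together, each from its own image.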
I am trying to follow these instructions to use the Jenkins plugin to create Jenkins agents with Azure virtual machines (via an Azure ARM template).
Azure VM Agents plugin
Under Supported Features, it says:
Windows Agents on Azure Cloud using SSH and JNLP
For Windows images to launch via SSH, the image needs to be preconfigured with SSH.
I am a bit confused by this and I'm not sure what it means.
Does it mean that an SSH client or server should be installed on the Windows image?
There doesn't seem to be an option for setting up a Windows Azure VM with SSH access, as there is for a Linux VM.
Can anyone clarify what the setup process is?
(By the way, I have tried an unattended installation of Cygwin on the Windows VM to run an SSH server, but I am running into a separate problem I am trying to solve. I'd like to know whether this is even required.)
Answering my own question now that I have got a bit deeper in. In the configuration section of the plugin, under Image Configuration, clicking the help for the launch method clarifies what's required.
It looks like a custom image needs to be prepared with an SSH server pre-installed. However, it also looks like it is possible to launch an image with JNLP instead, so I will try that.
Update
I couldn't get JNLP to work (not sure why), but I did get SSH to work. Ticking the 'Pre-Install SSH in Windows Slave (Check when using Windows and SSH)' box does the trick; there's no need to pre-install anything on the custom image.
I couldn't find any Windows image with test agents in Microsoft's public Docker repo. How can I create a Windows Docker image with the Visual Studio test agent to run CodedUI/MSTest?
On a more general note, how do I create a Windows Docker image with any GUI-based software pre-installed and pre-configured?
Note: this may look like a low-research question, but I had to post it here because Docker on Windows is relatively new and there isn't much information available on the net either.
You could have a VM template with the Microsoft agent software already installed on the machine. All the GUI setup does is modify an XML file, so you could effectively:
Have a VM container with the AGENT already installed.
Stop the TestAgent service
Modify the Agent Configuration XML
Restart the TestAgent service
This could probably be achieved with a PowerShell script or a custom console application.
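The steps above could be sketched in PowerShell roughly like this (the service name, configuration path and XML element are assumptions and will differ per agent version — check your installation):

```powershell
# Stop the test agent service (service name is an assumption)
Stop-Service -Name "TestAgent"

# Load and modify the agent configuration XML (path and element names are assumptions)
$configPath = "C:\Program Files (x86)\Microsoft Visual Studio\TestAgent\config.xml"
[xml]$config = Get-Content $configPath
$config.Configuration.ControllerName = "my-controller:6901"
$config.Save($configPath)

# Restart the service so it picks up the new configuration
Start-Service -Name "TestAgent"
```

The same sequence could be baked into a Dockerfile RUN step or run at container start, so the agent configures itself without any GUI interaction.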
If you need more help, we could figure this out together. Please feel free to contact me on LinkedIn.
https://www.linkedin.com/in/david-o-neill-8a1aa498/
I'm a little bit confused about the concept of Docker for Windows.
Can I create a Docker container for Windows (on a Windows host like Server 2016) and install a normal Windows application in that container (simple: notepad.exe; advanced: some more complex application written in Delphi)?
And can I run this container on every Docker-enabled (Windows) host? Does the container automatically start the application inside? Or can a Windows Docker container only provide services or web-based applications, like an IIS website?
If you have Windows Server 2016, you will be able to launch Windows containers (and you will need a Linux server to launch Linux containers).
See those links
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/manage_docker
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/container_setup
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/containers_welcome
On Windows, your Dockerfile will start with
FROM windowsservercore
instead of the more usual
FROM debian
or
FROM ubuntu
See some examples of IIS in (Windows) docker
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/manage_docker
or a SQL Server in docker
http://26thcentury.com/2016/01/03/dockerfile-to-create-sql-server-express-windows-container-image/
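Along the lines of those examples, a minimal Windows Dockerfile for IIS might look like this (a sketch: the base tag, feature name and site path should be verified against your host version and Microsoft's documentation):

```dockerfile
FROM microsoft/windowsservercore

# Enable the IIS Web Server role inside the container
RUN powershell -Command Add-WindowsFeature Web-Server

# Copy a static site into the default IIS web root
COPY site/ C:/inetpub/wwwroot/

EXPOSE 80
```

Built with `docker build -t my-iis .` and run with `docker run -d -p 80:80 my-iis`, this serves the copied site from the container.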
The types of applications that are candidates for Docker are applications that do not have a UI.
Unlike a VM, Docker images are very slim, having only enough of a codebase to serve a particular use case. You can, however, create a Docker image and use VNC to view a desktop-like environment, but you have to jump through hoops to configure it. It is far easier to use a VM if you need a GUI surface.
The strength of Docker is easily creating containers for servers and DB back ends. You can even run email servers, or a stack of RESTful services.
On my laptop I had installed MySQL, IIS and PHP. With Docker I migrated all of these into an image. I spin it up when I need it, and in less than 10 seconds I have a working DB backend and an IIS server with a PHP interface. I can maintain different versions of MySQL, IIS and PHP for different iterations; they are all isolated from each other and run in their own containers. When I upgrade my laptop I will not need to reinstall any of these; the image will just work.
I know the topic is a bit old, but since I just tried this I thought I'd add my two cents.
No, you cannot start a Windows application inside a container and expect its windows to appear on your desktop.
While starting such an application is possible, in fact, it's of little use because you won't be able to see or interact with the UI.
For example, you can start notepad.exe in your Windows Server Core container and verify that the process is running (using tasklist instead of Task Manager, whose window you likewise cannot see).
But you cannot type anything into this notepad instance or access its menu.
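That check might look like this from the host (the container name is illustrative):

```shell
# Launch notepad.exe inside a running Windows container
docker exec mycontainer cmd /c start notepad.exe

# Confirm the process exists -- there is no visible window to interact with
docker exec mycontainer tasklist /FI "IMAGENAME eq notepad.exe"
```

tasklist lists notepad.exe among the running processes, even though no window ever appears anywhere.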
Hth,
mav
No. Historically, Docker was essentially a Linux technology. Yes, you could run Docker on Windows, but the older tooling did so by installing VirtualBox and running a Linux VM inside it, and Docker servers generally run on Linux VMs in the cloud, so the programs you could put in a Docker container were Linux programs. Note, however, that Windows Server 2016 and later also support native Windows containers, as described in the other answers.