I created a small .NET 5.0 application in Visual Studio 2019 and added docker-compose container orchestration support to it using Linux containers. I set the docker-compose project as the startup project in Visual Studio, and everything was going great for two days, until I decided to alter my configuration so my container would get a stable port number instead of a randomly assigned one.
I'm not sure what I changed, but Visual Studio no longer attaches to the container, and the browser window that normally opens when the container starts no longer does. The container is running, but the app doesn't respond on the new port number I'm trying, or on the old one either.
The code builds fine in VS, the Dockerfile builds successfully, and the docker-compose up command runs in Visual Studio just fine.
I have Portainer running in my docker-compose file, and I can see that the container is running, but it is not producing any logs. I logged into the running container using Portainer and installed the procps package (apt-get update; apt-get -y install procps). When I run ps, I can see that my app is not actually running:
root@46c62bf07df3:/app# ps l
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 2195 0 20 0 3868 3280 do_wai Ss pts/2 0:00 bash
0 0 2216 2195 20 0 7556 1212 - R+ pts/2 0:00 ps l
Update: This continued to happen every time a cow farted, so I had to keep plugging away at the problem. I finally found this report, which says it's a known internal issue when Docker Compose V2 is enabled.
The fix is to disable V2 using docker-compose disable-v2.
You can verify it's disabled with docker-compose --version.
They claim a real fix is in the works (as of mid-October 2021) and will be released soon. They also claim the Visual Studio 2022 preview already contains the fix.
https://developercommunity2.visualstudio.com/t/Debugging-docker-compose-VS-cant-attac/1551330?entry=problem&ref=native&refTime=1636020527854&refUserId=09da1758-2dd4-4352-bba5-ea1f5e163268
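For reference, this is the full sequence (assuming the classic docker-compose v1.29+ CLI, which still ships the V2 toggle):

```shell
# Turn off the Compose V2 integration that triggers the attach bug
docker-compose disable-v2

# Verify: the reported version should now be a 1.x build, not v2.x
docker-compose --version
```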
I also wound up writing this PowerShell script to clean out all the garbage VS was caching, so nothing stuck around.
$ErrorActionPreference = 'Stop'

# delete various folders from the root folder
if (Test-Path -Path .vs) {
    Remove-Item .vs -Recurse -Force -Verbose
}
if (Test-Path -Path obj) {
    Remove-Item obj -Recurse -Verbose
}
if (Test-Path -Path bin) {
    Remove-Item bin -Recurse -Verbose
}

# delete various folders recursively
$currentfolder = Get-Location
Get-ChildItem -Path $currentfolder -Directory -Include bin, obj -Recurse | Remove-Item -Force -Recurse -Verbose
Get-ChildItem -Path $currentfolder -File -Include docker-compose.dcproj.user -Recurse | Remove-Item -Force -Verbose
Get-ChildItem -Path $currentfolder -Directory -Recurse -Attributes H | Where-Object { $_.Name.Contains('.vs') } | Remove-Item -Force -Recurse -Verbose

# delete all volumes except those with the keep label
#docker volume prune --filter "label!=keep" -f
Original answer
I figured out how to get it working again, but I still don't know why it happens, and it appears to randomly fubar itself now and then.
I wound up creating another temporary starter application and adding container orchestration to it, so I could see whether it would run and compare its files with my real application's. The new temp app ran successfully in docker-compose, so I started looking at differences in the various files between the two apps. All of them were minor diffs, and even after I updated my real application to match the temporary app's files, it still wouldn't run.
I noticed that Visual Studio 2019 can actually show you details about your running Docker containers, and that it was listing both the temp app and the real app I cared about. So I went through all the container details for both and narrowed in on a label called com.microsoft.visualstudio.debuggee.arguments, which was clearly missing something in my real app's container.
The temp application had that label set to:
--additionalProbingPath /root/.nuget/packages "/app/bin/Debug/net5.0/Test.dll"
but my real app instead had this, which is missing the path to the assembly needed to start/debug the application:
--additionalProbingPath /root/.nuget/packages ""
Visual Studio generates a file, obj\Docker\docker-compose.vs.debug.g.yml, and passes it to docker-compose when you debug, and in my case that label was missing the magic sauce needed to get my app up and running.
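For reference, the relevant part of a healthy generated override file looks roughly like this (a sketch from memory; the service name and the assembly path are from my test project, not something Visual Studio guarantees):

```yaml
services:
  test:
    labels:
      com.microsoft.visualstudio.debuggee.arguments: '--additionalProbingPath /root/.nuget/packages "/app/bin/Debug/net5.0/Test.dll"'
```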
I don't know why Visual Studio is fubaring this file but it continues to randomly happen.
These are the steps that finally got it working.
Close all instances of Visual Studio
Manually delete the following folders
.vs
bin
obj
Manually delete all .user files in the project.
Open Docker Desktop and remove the docker-compose stack created by your project.
Remove the docker images for your project(s)
Load your project and try to run it again.
Visual Studio should regenerate the file mentioned above with the correct bits this time and should attach to the container.
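Steps 4 and 5 can also be done from the command line instead of Docker Desktop (a sketch; substitute your own compose project and image names):

```shell
# Tear down the compose stack Visual Studio created; run from the folder
# containing docker-compose.yml. VS generates its own project name
# ("dockercompose" plus a hash), so you may need -p <name> to match it.
docker-compose down --remove-orphans

# Remove the dev image VS built for the project
docker rmi myapp:dev
```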
Related
I'm trying to install Visual Studio through PowerShell. It works fine on my local computer, but I keep getting errors when I run it on our AWS Windows Server 2012 R2 instance. I've attached my code and error below. Thank you.
PowerShell script:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$here = pwd
$software = "Microsoft Visual Studio Installer";
$installed = (Get-ItemProperty HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* | Where { $_.DisplayName -eq $software }) -ne $null
#If Visual Studio was not installed before, it will download the required files and install it.
If(-Not $installed)
{
Write-Host "'$software' is NOT installed.";
wget https://aka.ms/vs/17/release/vs_community.exe -outfile "vs.exe"
.\vs.exe install --quiet --norestart
}
#If Visual Studio was installed before, it will try to update it to a newer version, if available.
#If no updates are available, it will do nothing.
else
{
Write-Host "'$software' is installed."
if ( Test-Path -Path $here\vs.exe )
{
.\vs.exe update --quiet --norestart
}
else {
wget https://aka.ms/vs/17/release/vs_community.exe -outfile "vs.exe"
.\vs.exe update --quiet --norestart
}
}
You have used the option -outfile vs.exe, but this is not how you tell wget to rename the downloaded file. What it actually does is the following:
downloads the file to vs_community.exe (because that's the filename in the original URL)
writes a log file to utfile
ignores the vs.exe parameter
The wget option to direct the downloaded content to a named file is actually -O, not -outfile (which wget parses as -o utfile, i.e. write the log to utfile).
To specify the correct output filename, use:
-O vs.exe
Or simply execute wget https://aka.ms/vs/17/release/vs_community.exe and then execute the downloaded file, which is named vs_community.exe.
I want to use Plastic SCM in Docker Windows containers.
I searched and found Plastic's Dockerfile for Linux containers.
So I want to write my own Dockerfile for Windows; for that I need to know how to download the Plastic SCM installer using PowerShell and install it without the GUI (i.e. which arguments to pass for an unattended installation).
This is the script I'm currently using for my Windows image. I will warn you that it takes ~5-10 minutes to install during the build process, but everything works great besides that. It works pretty simply: it creates a temp folder, downloads the installer there from the download URL, runs the installer, and finally deletes the temp folder.
# This installs Plastic SCM
$tempFolder = "C:\Temp"
$plasticURL = "https://www.plasticscm.com/download/downloadinstaller/10.0.16.5882/plasticscm/windows/client"
$installerName = "plasticinstalling.exe"
New-Item $tempFolder -ItemType Directory -Force -ErrorAction Stop | Out-Null
$installerLocation = (Join-Path -Path $tempFolder -ChildPath $installerName -ErrorAction Stop)
Invoke-WebRequest -UseBasicParsing -Uri $plasticURL -OutFile $installerLocation -ErrorAction Stop
Start-Process -FilePath $installerLocation -ArgumentList "--mode","unattended" -NoNewWindow -Wait -PassThru
Remove-Item -Recurse $tempFolder -Force -ErrorAction Ignore
Then, in my docker file I just call the script:
RUN powershell -Command C:\Scripts\installPlastic.ps1
Hope this helped and feel free to reach out with more questions.
I am trying to create a base Docker image for a Windows application. I know that Windows images have their drawbacks and pitfalls, but the application won't run in a Linux environment.
It is necessary to have some data on the G: drive and that is where I can't seem to get it to work. I don't need to map the G: volume to my hard drive, I just need to install some stuff there. Here is my Dockerfile:
FROM mcr.microsoft.com/windows/servercore:20H2
VOLUME "G:"
RUN powershell wget http://some.url/file.zip -OutFile G:\file.zip
RUN tar -xf G:\file.zip -C G:
But the docker build fails on the last line because the file does not show up on the G: drive. I tried downloading the file to the C: drive and then extracting it to G:, and I also tried extracting it to C: and then copying it to G:, but neither worked. The G: drive is always empty.
When I exec into the container and run the commands from the Dockerfile, it works as expected. Only when I run docker build is the G: drive completely ignored.
What could I be doing wrong?
OK, I figured it out. The virtual G: drive is created in the intermediate container and is reset with every RUN stage.
What I did instead is create a directory C:\g_drive within the container and map it to G: via a DOS device registry entry:
RUN mkdir C:\g_drive
RUN powershell Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices' -Name 'G:' -Value "\??\c:\g_drive"
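Putting it together, the original Dockerfile could be reworked along these lines (a sketch; the download URL and file names are the placeholder ones from the question):

```dockerfile
FROM mcr.microsoft.com/windows/servercore:20H2

# Back the G: drive with a real directory so it survives across RUN stages
RUN mkdir C:\g_drive
RUN powershell Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices' -Name 'G:' -Value "\??\c:\g_drive"

# Subsequent RUN stages now see a persistent G: drive
RUN powershell wget http://some.url/file.zip -OutFile G:\file.zip
RUN tar -xf G:\file.zip -C G:
```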
My title explains most of it, but I want to understand why I can access https://localhost:32770/ and hit my API endpoints while I am debugging in Visual Studio, but when I end debugging it becomes unavailable.
I'm currently in the thick of spending a few days wrapping my head around Docker and Kubernetes and this is stumping me a bit, and I'd really love to fill this gap in my knowledge.
The container remains running after the debug session ends, so what has changed?
I noticed this is run at the start of the build:
docker exec -i 0f855d9b4c801bf8c52da48e6dd02ffdf0fe7242fde22fb9a221616e4b2900f9 /bin/sh \
-c "if PID=$(pidof dotnet); then kill $PID; fi"
but I don't see how that changes what happens after debugging ends, since this runs before the Dockerfile and everything else. I don't fully understand the -c in the command, but I do understand that the script in the quotation marks after it is run in the container, following the docker exec syntax docker exec [OPTIONS] CONTAINER COMMAND [ARG...]. It seems this script kills the existing dotnet process before the new build is started.
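The -c flag belongs to /bin/sh, not to docker exec: sh -c 'script' runs the quoted string as a one-line shell script. The kill-if-running pattern can be tried outside Docker (a sketch using sleep as a stand-in for the dotnet process; requires pidof from procps):

```shell
# Start a throwaway background process to play the role of "dotnet"
sleep 30 &

# Same pattern VS uses: look up the PID, and kill only if one was found
/bin/sh -c 'if PID=$(pidof sleep); then kill $PID; fi'

# Give the signal a moment to land, then confirm the process is gone
wait 2> /dev/null
pidof sleep || echo "no sleep process left"
```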
This is run before the Dockerfile steps:
docker build -f "F:\Dev\API_files\API_name\Dockerfile"
--force-rm
-t API_name:dev
--target base
--label "com.microsoft.created-by=visual-studio"
--label "com.microsoft.visual-studio.project-name=API_name" "F:\Dev\API_name"
I don't see anything here that would change how the container runs; the rm in this instance 'removes intermediate containers after a build (default true)', according to docker build --help.
The Dockerfile runs next, and it is pretty much the default one for ASP.NET Core applications; it has
EXPOSE 80
EXPOSE 443
and the rest are simple build steps.
After all this, I can't seem to find much indication of what is going on. My guess is that it has to do with IIS Express, but really I don't know much about what it does while Visual Studio is debugging. What's going on behind the scenes, while I was debugging, that opened the localhost port for the Docker container?
Edit: I found a docker run command that may have something to do with it. It has the -P flag to 'Publish all exposed ports to random ports', but the container never stops running, so shouldn't I be able to find these ports and connect to the API?
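With -P in play, the host ports chosen for the EXPOSEd container ports can be inspected at any time (using a container name of myapi purely for illustration):

```shell
# Show the host ports mapped to this container's exposed ports
docker port myapi

# Or list the port mappings of all running containers
docker ps --format "table {{.Names}}\t{{.Ports}}"
```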
During debugging, if you run this command:
docker exec -it containerName bash -c 'pidof dotnet'
you will notice that the dotnet process is running; when you stop the debugging and run the command again, you will see that the process has been terminated.
If you want to start your application in the container without running the debugger again, you just need to start the dotnet process inside the container.
You could do that by running a script like this:
# Set these 3 variables
$containerName = "MyContainer"
$appDll = "myApp.dll"
$appDirectory = "/app/bin/debug/netcoreapp3.1"
# $args is an automatic variable in PowerShell, so use a different name;
# also pass -Command rather than cmd.exe's /c switch
$cmdArgs = "-Command docker exec -it $containerName bash -c 'cd $appDirectory;dotnet $appDll'"
Start-Process -FilePath "powershell" -ArgumentList $cmdArgs -NoNewWindow
You can check whether it worked by running the same check again:
docker exec -it containerName bash -c 'pidof dotnet'
I'm using Docker for Windows and building the docker image with a Dockerfile like this:
FROM mydockerhublogin/win2k16-ruby:1.0
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
RUN powershell -Command \
$ErrorActionPreference = 'Stop'; \
New-Item "HKLM:\Software\WOW6432Node\ExampleCom" -Force ; \
New-ItemProperty "HKLM:\Software\WOW6432Node\ExampleCom" -Name MenuLastUpdate -Value "test" -Force
RUN powershell -Command \
$ErrorActionPreference = 'Stop'; \
New-Item "HKLM:\Software\ExampleCom" -Force ; \
New-ItemProperty "HKLM:\Software\ExampleCom" -Name MenuLastUpdate -Value "test" -Force
# Run ruby script when the container launches
CMD ["C:/Ruby23-x64/bin/ruby.exe", "docker_ruby_test.rb"]
Note that I am adding some registry entries to the Windows registry, which the code inside the container will access. While this method of adding registry entries is fine for a few of them, my requirement is to add dozens of entries required by my Windows application. Is there a way to do this more concisely?
Try creating a file for your registry entries and copying that into the container.
Then run: Invoke-Command -ScriptBlock {regedit /i /s C:\shared\settings.reg}
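For example, settings.reg could look like this (a sketch using the ExampleCom keys from the question; a single .reg file can carry dozens of keys and values):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\ExampleCom]
"MenuLastUpdate"="test"

[HKEY_LOCAL_MACHINE\Software\WOW6432Node\ExampleCom]
"MenuLastUpdate"="test"
```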
The following is the only way I could get it working using the 4.8-windowsservercore-ltsc2019 image:
COPY Registry/ChromeConfiguration.reg .
RUN reg import ChromeConfiguration.reg