Best Method for Debugging CluedIn Integration Locally

I've been working on multiple integrations with CluedIn. They are installed in a local Docker instance, but I'm having a hard time understanding how to set up the debugging process in Visual Studio Code.
I've loaded the package .dll and .pdb files into the /app/ServerComponents folder on the cluedin_default_server_1 container.
How do I attach a debugger to the CluedIn integrations for debugging?

This may require extra steps to debug with Visual Studio Code, but with Visual Studio this should be simple.
Assuming that you are using the default environment:
Install procps into the cluedin_default_server_1 container.
You have to attach as root. To do so, either edit docker-compose.server.yaml and add user: root after server:, like this:
server:
user: root
Or you can just attach from the command line as root:
docker exec -it --user root cluedin_default_server_1 /bin/sh
When you are in the container's shell, run: apk add procps.
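procps is presumably what lets the debugger enumerate processes in the container; to verify the target process is visible, you can run from the container's shell:
ps aux | grep ServerComponent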
Copy the DLLs and PDBs
Build your solution and copy the DLLs and PDBs under \Home\env\default\components\ServerComponent (where default is the name of your environment).
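For example, from PowerShell on the host (the MyIntegration project name and build path are hypothetical):
Copy-Item .\MyIntegration\bin\Debug\MyIntegration.dll, .\MyIntegration\bin\Debug\MyIntegration.pdb -Destination \Home\env\default\components\ServerComponent\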
Restart or recreate the container
docker container restart cluedin_default_server_1
If that does not work, try deleting the container and running .\cluedin.ps1 up - it will create a new container for you.
Attach to the process from Visual Studio
In Visual Studio, hit Ctrl+Alt+P or go to Debug -> Attach to Process...
Select the Docker connection type and the container you want to debug, and then the dotnet process. The process must be dotnet exec ... --Name ServerComponent.
Click Attach and check the Managed checkbox.
Now you should be able to debug your code.
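For Visual Studio Code, a rough sketch of the equivalent setup (the vsdbg install path and configuration names below are assumptions, not CluedIn specifics): install the .NET Core debugger inside the container, then attach over docker exec using a coreclr pipe transport. Since the image uses apk (Alpine), busybox wget is used here instead of curl:
docker exec -it --user root cluedin_default_server_1 /bin/sh -c "wget -qO- https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg"
Then add a configuration like this to .vscode/launch.json and pick the dotnet exec ... --Name ServerComponent process when prompted:
{
    "name": "Attach to CluedIn ServerComponent (Docker)",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickRemoteProcess}",
    "pipeTransport": {
        "pipeProgram": "docker",
        "pipeArgs": ["exec", "-i", "--user", "root", "cluedin_default_server_1"],
        "debuggerPath": "/vsdbg/vsdbg",
        "quoteArgs": false
    },
    "justMyCode": false
}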

Related

Azure Webapp Startup Command is reset after deploy (Visual Studio -> Publish)

I have an Azure Linux Web App with a custom Startup Command, "sh /home/custom_startup.sh". Whenever I re-deploy the app via Visual Studio, the Startup Command is changed to something like "dotnet App.dll". I don't want this behaviour, since at the moment I need to change the Startup Command manually and restart the application after every deploy.
Any ideas?
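One workaround, sketched with the Azure CLI (the resource group and app name are placeholders), is to re-apply the startup command after each deploy:
az webapp config set --resource-group MyResourceGroup --name my-linux-app --startup-file "sh /home/custom_startup.sh"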

Docker in Win (Linux containers): docker-compose command fails somehow outside of Visual Studio

I am testing Docker containers, and I managed to create an image, using Visual Studio and Docker Compose, that seems to be OK.
The image is a .NET Core console application, very small and simple. It connects to a Redis server, starts producing some stuff, and logs this to the console, which is what I see when I start the project directly from Visual Studio (no Docker involved).
Now, if I run it via Visual Studio, by setting the "docker-compose" project as Startup and pressing the play button on top that says "Docker Compose", I can see that this command is executed:
docker-compose -f "C:\Git\alfa\AlfaModulesPoc\docker-compose.yml" -f "C:\Git\alfa\AlfaModulesPoc\docker-compose.override.yml" -f "C:\Git\alfa\AlfaModulesPoc\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose5247976386926556655 --no-ansi up -d
I then see the Output window in VS display the console logs correctly, and Docker Desktop shows them as well.
But if I run the same command from the command line, Docker reports the container as running, yet Docker Desktop no longer captures any logs.
It seems there is no activity at all from the container: normally I would see data being created in the Redis database, but when the container is started manually from the command prompt, no data is touched on Redis.
Why isn't the container running properly?
Can I see any errors, exceptions or something to determine what is going on?
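To at least surface errors from the manually started container, you can locate it and inspect its logs and state directly (a sketch; substitute the real container ID from docker ps):
docker ps --filter "name=dockercompose5247976386926556655"
docker logs -f <container-id>
docker inspect --format "{{.State.Status}} {{.State.ExitCode}}" <container-id>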

Can a Service Fabric Application be deployed from within a Windows Docker Container to a cluster?

When building a Service Fabric project in Visual Studio (*.sfproj), a Deploy-FabricApplication.ps1 script is created as part of the template to deploy this application to Azure (or Service Fabric running wherever, for that matter). I'm looking for a way to containerize that mechanism as part of a Windows Docker image since our build and deployment process is containerized. Is there a way to run this script from within a Windows Docker container, and if so, what prerequisites would the image need to have?
Update:
Service Fabric SDK 3.3.617, released as part of Service Fabric 6.4, can now be installed in containers to build and deploy Service Fabric projects. This can be done in a Dockerfile using the following:
ADD https://download.microsoft.com/download/D/D/D/DDD408E4-6802-47FB-B0A1-ECF657BEB35F/MicrosoftServiceFabric.6.4.617.9590.exe C:\TEMP\MicrosoftServiceFabricRuntime.exe
ADD https://download.microsoft.com/download/D/D/D/DDD408E4-6802-47FB-B0A1-ECF657BEB35F/MicrosoftServiceFabricSDK.3.3.617.msi C:\TEMP\MicrosoftServiceFabricSDK.msi
RUN C:\TEMP\MicrosoftServiceFabricRuntime.exe /accepteula /sdkcontainerclient /quiet
RUN msiexec.exe /i "C:\TEMP\MicrosoftServiceFabricSDK.msi" /qn
Here is an example Dockerfile
Original Answer:
Turns out, this is no small feat. This script requires the Windows Service Fabric SDK to be installed. The recommended (and only supported) way to install the Service Fabric SDK is through WebPI, which is available here. It's possible to Dockerize the WebPI; however, there's a problem. The WebPI installer consists of three components: the Service Fabric SDK, the Service Fabric Runtime, and the Service Fabric Tools for Visual Studio. The WebPI installer will install all of them. Unfortunately, the Service Fabric Runtime (as of this writing) cannot run under a Docker container since it wants to install a kernel-level driver. This bug is being tracked here, but has been open for nearly a year with no real progress. This means that one could not run a Service Fabric cluster within a Docker container, but surely the SDK and tools should still be able to run, correct? Unfortunately, there is no way to tell the installer to install only the SDK and tools, but not the runtime.
So, perhaps there is an unsupported way to install just the SDK and tools. Turns out, the release notes have references to various MSIs for the individual components.
SDK Available Here
Tools for Visual Studio Available Here
It's fairly trivial to run msiexec.exe from a Dockerfile, which means we should be able to install the SDK that way. Nope. Unfortunately, msiexec will fail with a generic 1603 code. If you run msiexec in verbose mode and output a log file, you can dig into this error and see the root cause:
MSI (s) (78:34) [19:07:56:049]: Product: Microsoft Azure Service Fabric SDK -- This product requires Service Fabric Runtime to be installed.
This product requires Service Fabric Runtime to be installed. Action ended 19:07:56: LaunchConditions. Return value 3.
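For reference, a verbose invocation that produces such a log looks something like this (the log file path is arbitrary):
msiexec /i C:\TEMP\MicrosoftServiceFabricSDK.msi /qn /l*v C:\TEMP\sdk-install.log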
So, we're once again shot down. I've found no other packaged version of the Service Fabric SDK (Chocolatey has one, but it just launches the WebPI installer), which leaves one final solution: we install the SDK manually without the help of an installer. This requires reverse engineering exactly what the installer does, and integrating this into our Dockerfile.
The SDK installer does a few things. It copies a bunch of files into c:\program files\microsoft sdks\service fabric\ and a bunch of files into c:\program files\microsoft service fabric\. It also GACs a bunch of stuff (such as System.Fabric.dll), adds some stuff to the registry, and installs a PowerShell module. We need to do all those things for the script to run.
What I ended up doing is mounting the key folders as Docker volumes so I can use them within my container:
docker run `
-v 'c:\program files\microsoft sdks\service fabric\tools\psmodule\servicefabricsdk:C:\ServiceFabricModules' `
-v 'c:\program files\microsoft service fabric\bin\fabric\fabric.code:C:\ServiceFabricCode' `
-v 'c:\program files\microsoft service fabric\bin\servicefabric:C:\ServiceFabricBin' `
-e ModuleFolderPath=C:\ServiceFabricModules `
-it build-agent powershell
First, I need to share out the c:\program files\microsoft sdks\service fabric\tools\psmodule\servicefabricsdk directory, which contains the PowerShell module that the Deploy-FabricApplication.ps1 script loads:
Import-Module "$ModuleFolderPath\ServiceFabricSDK.psm1"
Next, we need to share out c:\program files\microsoft service fabric\bin\fabric\fabric.code because it has a bunch of DLLs that the installer GACs.
Lastly, we share out c:\program files\microsoft service fabric\bin\servicefabric because that directory contains the PowerShell module installed by the SDK.
When the container starts, we need to do the following:
First, register the module with PowerShell:
Copy-Item C:\ServiceFabricBin C:\windows\system32\WindowsPowerShell\v1.0\modules\ServiceFabric -Recurse
After you do this, Get-Module -ListAvailable will show the ServiceFabric module. However, no exports will be loaded because it's missing a bunch of DLLs. The installer puts those DLLs in the GAC, but the GAC is dumb so let's just put those DLLs in the same directory so the module finds them:
Copy-Item C:\ServiceFabricCode\System.Fabric*.dll C:\windows\system32\WindowsPowerShell\v1.0\modules\ServiceFabric -Recurse
After this, you should be able to run Get-Module -ListAvailable and see the ServiceFabric module fully loaded.
There's one final thing to do. The Deploy-FabricApplication.ps1 script imports the ServiceFabricSDK.psm1 module (see above). But what is $ModuleFolderPath? Well, the script by default looks in the registry for this value, which of course the installer sets for you. We don't want to muck with the registry for our Docker image, so let's just change the script to look at an environment variable instead:
$ModuleFolderPath = $ENV:ModuleFolderPath
Import-Module "$ModuleFolderPath\ServiceFabricSDK.psm1"
Now we can set that environment variable when we run our Docker container (or from our Dockerfile). Obviously, if you didn't want to modify the Deploy-FabricApplication.ps1 file, you could set this at HKLM:\SOFTWARE\Microsoft\Service Fabric SDK\FabricSDKPSModulePath as well. I'm fairly anti-registry, so an environment variable (or just hard-coding it if you really don't care) makes more sense to me.
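In a Dockerfile, that would look like this (the path matches the volume mount above):
ENV ModuleFolderPath=C:\ServiceFabricModules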
Also note you'll need to import your certificate (which you can download from the Key Vault in the form of a PFX file) before the script will deploy:
Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\My\ -FilePath C:\Certs\MyCert.pfx
I believe a more production-quality version of this would be to copy the required files into the image within your Dockerfile rather than mount them as volumes, so the image would be more self-contained, but that should be fairly straightforward. Also, I believe the DLLs that were GAC'ed are also available on NuGet, so it could be possible to download all those files through NuGet during the Docker build process.
Also, here's my full Dockerfile, with which I've successfully deployed an app to Service Fabric:
# escape=`
FROM microsoft/dotnet-framework:4.7.1
SHELL ["cmd", "/S", "/C"]
# Install Visual Studio Build Tools
ADD https://aka.ms/vs/15/release/vs_buildtools.exe C:\SETUP\vs_buildtools.exe
RUN C:\SETUP\vs_buildtools.exe --quiet --wait --norestart --nocache `
--add Microsoft.VisualStudio.Workload.AzureBuildTools `
|| IF "%ERRORLEVEL%"=="3010" EXIT 0
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Our Deploy Certs
ADD ./Certs/ C:\Certs\
# Update Path (I forget if this was needed for something)
RUN SETX /M PATH $($Env:PATH + ';C:\ServiceFabricCode')
I'm hoping this helps someone, but more so I'm hoping Microsoft fixes their installer to remove the runtime requirement.
The best way to install the Azure Service Fabric SDK is by creating a PowerShell file and calling it from the Dockerfile.
PowerShell file (InstallServiceFabric.ps1, as referenced in the Dockerfile below):
Start-Process "msiexec" -ArgumentList '/i', 'C:/app/WebPlatformInstaller_amd64_en-US.msi', '/passive', '/quiet', '/norestart', '/qn' -NoNewWindow -Wait;
& "C:\Program Files\Microsoft\Web Platform Installer\WebPICMD.exe" /Install /Products:MicrosoftAzure-ServiceFabric-CoreSDK /AcceptEULA
Dockerfile:
RUN powershell -noexit "& ""./InstallServiceFabric.ps1"""

Docker Container url not accessible through localhost or ip on windows 10/docker CE/.net Core

This is the most simple use case for using Docker on Windows to deploy a .NET Core app.
I used Visual Studio 2017 to create a .NET Core API with Docker support enabled, and the image was created successfully by Docker.
I also successfully started a new container with this image, but when trying to access the API at localhost or the container IP, the API is not responding.
For more detailed steps, see this URL:
https://github.com/docker/for-win/issues/2230
Windows Version:Windows 10 Enterprise
Docker for Windows 18.03.1-ce-win65 (17513)/Channel:Stable
Steps:
Created Dotnet Core API app in Visual Studio 2017 community.
Selected "Enable Docker Support" checkbox.
Dotnet Core project Created.
Image got created by Docker through dockerfile.
Started a new container with the command:
docker run -it -p 8085:80 coreapidemo:dev
API NOT accessible through either localhost or IP:
http://172.22.236.61:8085/api/values
http://172.22.236.61/api/values
http://localhost:8085/api/values
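A quick sanity check at this point (a sketch; take the container ID from docker ps) is to confirm the port mapping and what actually ended up in the image:
docker ps
docker port <container-id>
docker exec -it <container-id> powershell -Command "Get-ChildItem C:\app"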
Update 1:
Thanks @edwin for your help. Using
docker exec -it mycontainer powershell
I can see that the directory c:\app does not contain the necessary code (aspnetapp.dll) to run the app.
Then I built the image from https://github.com/dotnet/dotnet-docker/blob/master/samples/aspnetapp/Dockerfile and started a container with it.
Then I was able to successfully access the app URL at http://localhost:8000.
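For reference, the sample's documented build-and-run commands are roughly the following (the 8000:80 mapping matches the URL above):
docker build -t aspnetapp .
docker run -it --rm -p 8000:80 --name aspnetcore_sample aspnetapp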
This means that the Visual Studio tooling for Docker is not building the image properly. Can anyone help?

Visual Studio 2015 Docker Integration won't attach for debugging

I created a default .NET Core RC2 MVC app using VS 2015. I added Docker Support so I could run and debug it in Docker.
When I run the project it builds the docker container and starts it. Running the command "docker ps" shows the container running with the correct ports mapped. However I get the following error:
The target process exited without raising a CoreCLR started event. Ensure that the target process is configured to use NETStandard [version ...] or newer. This might be expected if the target process did not run.
Also trying to access the web page returns the following error:
[Fiddler] The connection to '10.0.75.2' failed.
Error: ConnectionRefused (0x274d).
System.Net.Sockets.SocketException No connection could be made because the target machine actively refused it 10.0.75.2:80
Turns out the problem is related to the Docker for Windows beta I am running. By default it does not let you map volumes.
To enable volume mapping, open the Docker for Windows settings and select Shared Drives.
Share the C drive, or whichever drive the .NET code is stored on, then rebuild and deploy the project.
