I am using ASP.NET Core 2.2 and Visual Studio 2019. The containers my application runs in are Debian-based (one of the official aspnet:2.2 Docker images).
So my situation is this: I have an application that consists of 4 microservices running in Docker containers, and I'm seeing very high CPU usage on the container nodes when the application is under load. What I would like to do is profile the executing code to get an idea of where this resource use is happening.
As a starting point I thought I would simply get some profiling running on my local development environment, just to get an idea of what the execution looks like in general. Although in production this runs in Kubernetes, I do have a development environment that uses Docker Compose, and I find the Visual Studio Docker tools to be fairly good.
I was hoping to use some of the Visual Studio profiling tools. I was able to install VSDBG on one of my locally running containers and connect to it with VS, but in the Diagnostic Tools pane I see "the diagnostic tools window does not support the current debugging configuration". I've also tried just running the project from VS using Docker Compose, but I see the same message when I hit a breakpoint. I'm not finding much out there about how to do this.
I also tried getting profiling going using perfcollect, but after I generated the trace and opened it with PerfView I got a parsing error when trying to view the CPU stacks. Still not sure what's going on there. I did find an old closed issue on their GitHub describing what I am seeing, but there was a fairly recent comment from someone saying they were seeing it with the latest version, so maybe it's a regression.
So, after all this, my question is: are either of the above approaches viable? Is there a better way to achieve this? I'm interested in any way someone has had success viewing some code profiling of a .NET Core 2.2 application running in a Linux Docker container. All I really want to do is see where in my code the execution time is going and what resources are being consumed. As I've mentioned, I'm not finding much out there when I Google for this and I seem to keep hitting walls. If anyone has any advice or direction on an approach here I would really appreciate it. Thanks much!
Are you open to upgrading to .NET Core 3.0? (.NET Core 2.2 is going out of support in a few days, on 12/23/2019.)
If you're open to that, you can take advantage of the new dotnet-trace tool, which supports running in a Linux container and can be used with the tools in Visual Studio.
Here are the steps I used to add it to my project:
Change your base image to use the SDK image (needed to install the tool).
Add installing the tool to the image (a full Dockerfile sketch follows these steps):
RUN dotnet tool install --global dotnet-trace
ENV PATH $PATH:/root/.dotnet/tools
Alternatively, if you don't want to add it to your image, you can run the following commands in a running container (as long as it is based on the SDK image):
dotnet tool install --global dotnet-trace
export PATH="$PATH:/root/.dotnet/tools"
Start the project without debugging (Ctrl+F5)
Use the Containers Tool Window to open a terminal window
Run the command:
dotnet-trace collect --process-id $(pidof dotnet) --providers Microsoft-DotNETCore-SampleProfiler
When you are done collecting, press Enter or Ctrl+C to end the collection.
This will create a file called "trace.nettrace".
By default the /app folder that the file is created in is volume-mapped to your project folder, so you can open the file from there in VS.
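For reference, here's a minimal sketch of what the Dockerfile could look like with those changes in place (the image tags and the MyService project name are placeholders, not from the original project):

# Use the SDK image as the base so dotnet-trace can be installed (placeholder tag)
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS base
WORKDIR /app
EXPOSE 80
# Install the tool and put it on the PATH
RUN dotnet tool install --global dotnet-trace
ENV PATH $PATH:/root/.dotnet/tools

FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "MyService.csproj" -c Release -o /app/publish

# Final stage is based on the SDK stage above instead of the runtime image
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyService.dll"]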
I have a cross-platform desktop application written in Xamarin.Forms that runs on both Windows and macOS. I want to do some UI automation on top of that application.
After some research, it seems like the most cross-platform-friendly option is to use something like Sikuli. As our team's default stack is centered on .NET, we want to use SikuliSharp or Sikuli4Net to perform the automated UI tests.
However, despite the fact that we've been able to run Sikuli4Net successfully on Windows, automating several flows so far, we have a dire situation on macOS. Our team (myself included) doesn't have a lot of (or maybe any) knowledge of Java applications.
I've installed JDK 8, but was unable to run the tests the same way we did on Windows. The code builds, but it seems like something in the environment is lacking.
With Sikuli4Net, when starting the APILauncher like this:
launch = new APILauncher(true);
launch.Start();
I get the following error:
With SikuliSharp, when trying to run a simple demo against our software, I get this error:
I have tried to set up the SIKULI_HOME environment variable using this answer as a reference, but still the same problem (and I did restart the console and IDE, even my machine).
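For what it's worth, this is roughly what I added to my shell profile (the path is just an example of where my Sikuli jars live):

# in ~/.bash_profile (example path; adjust to wherever the Sikuli .jar files are)
export SIKULI_HOME="/Applications/SikuliX/"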
When I run echo $SIKULI_HOME in the terminal, I do get the directory that contains the .jar files:
So I'm kind of lost about where to go from here. These problems have made me unsure whether it's even possible to run Sikuli4Net or SikuliSharp in macOS environments. Is this the case? If not, what am I doing wrong?
As mentioned in the error message, sikuli-script.jar is missing.
You have to check which version of Sikuli/SikuliX your SikuliSharp or Sikuli4Net depends on.
If in doubt, you'll have to dive into the sources of those .NET packages.
In my line of work, I’m working with MS Visual Studio, Docker for Windows and VS’s Docker tooling.
I'm working on a multi-container solution that has dependencies on containers with long startup times (such as Elasticsearch). I've experimented with the docker-compose 2.4 format and successfully used the depends_on-with-healthchecks combination to ensure the startup order of my projects. This, however, negatively impacts my development experience, as it takes a long time to start my solution (using the VS / .dcproj / docker-compose integration) or to attach the debugger to it.
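To give an idea of the kind of setup I mean, here is a trimmed-down sketch of my compose file (the Elasticsearch tag, the healthcheck command and the service names are just examples, not my real configuration):

version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0   # example tag
    healthcheck:
      # example probe: consider the container healthy once the cluster answers
      test: ["CMD-SHELL", "curl -fs http://localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 30
  myservice:
    build: .
    depends_on:
      elasticsearch:
        condition: service_healthy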
I'm looking to optimize my development experience while retaining the guarantee that my dependencies are up and running before the containers under development are launched.
I was wondering if it would be a good solution to have dependencies such as Elasticsearch/Consul etc. started as a separate docker-compose solution and have it run side by side?
Is it possible to bridge two docker-compose solutions using a Docker network and preferably retain the Docker network DNS?
Is there a better conceptual solution than this?
Best regards
I'm trying to get Docker support running with Visual Studio 2017 for a .NET Core 2.0 web app running on Linux containers. I'm working on a machine with Windows 7, so I must use Docker Toolbox with VirtualBox. I've already checked this question: How to get docker toolbox to work with .net core 2.0 project, but I got stuck on the following problem when trying to run it with VS:
Volume sharing is not enabled. Enable volume sharing in the docker ce
for windows settings
As far as I know, there is a default volume mounted under C:\Users, so my project files should be copied somewhere under this folder if I don't want to mount any other volume. So I copied them there.
When I check the settings of my VirtualBox, the folder seems to be shared:
I can even cd into this folder from the command line, but still can't get past this problem. Any ideas?
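For context, here is roughly how I verified the share from the Docker Toolbox side (the machine name "default" and the paths are just what my setup uses):

# check that the Users share is visible inside the Toolbox VM
docker-machine ssh default "ls /c/Users"
# quick test that a container can actually see files under the share
docker run --rm -v /c/Users/me/source/MyWebApp:/app alpine ls /app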
Finally I got this running. The error message coming from VS is very misleading and it has nothing to do with volume sharing. Eventually I realized that the problem is in running the debugger, because when I ran the solution with Ctrl+F5 everything was OK and the container started correctly. The problem occurred only when running with F5 and trying to attach the debugger.
Then I found some clues in the console output. VS tries to download some tooling for debugging containers with a PowerShell script named GetVsDbg.ps1. When running this script I could observe errors like:
Add-Type : Cannot add type. The assembly
'System.IO.Compression.FileSystem' could not be found.
Finally I fixed this issue by updating my PowerShell version, which was somehow in conflict with the .NET Framework installed on my machine.
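In case someone hits the same thing, this is a quick diagnostic sketch for checking which PowerShell and CLR versions are actually in play and whether the assembly loads:

# show the PowerShell engine version and the CLR it runs on
$PSVersionTable.PSVersion
$PSVersionTable.CLRVersion
# try loading the assembly that GetVsDbg.ps1 fails on
Add-Type -AssemblyName System.IO.Compression.FileSystem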
Well, in my case it turned out that I had changed my Windows password and Docker wasn't able to get access.
So it was just:
Uncheck shared drives
Apply
Check them again and enter the new password
Restart Docker
The setting below helped me get rid of this error. Check the drive you want to share and click Apply. This might ask for your network credentials; just enter them if the prompt pops up.
Docker settings
Thanks,
Rakesh
I fixed it by running the following command in PowerShell:
docker network create nat
I got the same issue when attempting to publish an Azure Function App to a container registry.
The newer version of Docker Desktop for Windows (2.3) has a new interface. I had to go to Resources | File Sharing and add a new folder. This resolved the issue.
I am trying to build and deploy a Gaia build from the git repo on Windows. I am trying to deploy it to a Flame.
I am trying to do it on Windows 7 with Cygwin installed. After installing everything, this is the error I am getting:
This works just fine on a Linux machine, but I need to do this on Windows since right now it's the only machine I have access to.
Any pointers to what I am doing wrong here?
I'm afraid it's not going to work without significant effort, for several reasons. It's much better to use a VM with Linux on it, as even if the build did work it would be really slow: Windows is slow at handling lots of file access, and Cygwin slows it down even more.
For example, after making a simple change to config.sh (the full stack build) so it works on Cygwin, I found it took hours to run (on a decent PC). And then I had a couple of corrupt git repos I had to fix by hand.
I also looked at getting Gaia's make to work, but stopped after the problem just got bigger.
Here's what I found, for future reference:
The build is not really portable; it expects a Linux-like environment.
While Cygwin gives good Linux emulation, most of the tools being run are Win32-native, and handling path conversion for them requires non-trivial changes because of built-in assumptions. For example, you can switch to the Win32 XPCShell and hack the command-line paths to use cygpath, but environment variables are an extra source of dependency in the JS scripts and are all Unix paths. (I did manage this part.)
These path and environment dependencies get magnified with the C build chain and other tools.
You need to change the mount to use noacl, or else Cygwin attaches ACLs to simulate file properties, which breaks things. It might even be a little faster without ACLs.
I also tried MinGW, which provides native versions without the emulation and so should be faster. However, it falls short of the requirements, and its automatic path conversion heuristics get in the way.
You need to turn off any antivirus program, as they slow it down. In fact, the very first time I used the old Firefox Windows build it would crash after a long time; it turned out to be a memory leak in the AV :(
So, all in all, it's too much hassle in terms of dev time to convert and probably maintain. A true Windows build would be better, but it's so easy these days to run a VM. You can even share directories between guest and host, so you could flash from Windows.
I also tried with Cygwin, but was unable to build the Gaia source code on Windows.
It's not straightforward to build the Gaia source code on Windows. Please follow these steps (a condensed command sequence is shown after the list):
Download MozillaBuild from the MozillaBuild - Mozilla Wiki page and install the tools in c:/mozilla-build (preferred). It includes everything (make, wget, python, etc.) you need to build the Gaia source code.
Run start-shell.bat. If the build process fails with this batch file, then run start-shell-msvc2013.bat if you have Visual Studio 2013, or start-shell-msvc2015.bat if you have Visual Studio 2015. (You need Visual Studio for the second step.)
Browse to the Gaia source code directory using the command cd Mozilla/gaia.
Run the DEVICE_DEBUG=1 make command. Don't run DEVICE=1 make or plain make (you won't be able to debug the apps; I was able to connect to Firefox OS 2.2 but not to debug the apps when I ran those commands).
If you are running this command for the first time, it will download the b2g_sdk; otherwise it will create a profile folder with your custom profile.
Open the WebIDE using Firefox (Nightly preferred) and point it to the profile folder you just created.
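Putting the steps above together, a typical session looks roughly like this (the gaia path is an example; adjust to wherever you cloned the repo):

# from the MozillaBuild shell started via start-shell.bat (or the msvc variant)
cd /c/Mozilla/gaia          # example path to the cloned gaia repo
DEVICE_DEBUG=1 make         # first run downloads b2g_sdk; later runs build the profile folder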
Links for your reference:
https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS/Developing_Gaia
https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia/Different_ways_to_run_Gaia
https://developer.mozilla.org/en-US/docs/Tools/WebIDE/Troubleshooting
https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia/Making_Gaia_code_changes
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Windows_Prerequisites
I'm really losing it here. Not being able to attach a debugger to a process is kind of a big deal for me. As such, I'm having a very hard time pinpointing the source of problems with an Azure-hosted application.
What's worse is that the app works fine in the Development Fabric, even when using online Storage Tables, but can go quite haywire when uploaded and running online.
I know IntelliTrace is one way to do it, but unfortunately I've got an x86 machine and the application uses RIA Services. As such, publishing it from my machine results in an error caused by RIA Services, and if I build the application specifying x64 the very same bug strikes again. (So far the only way that I know of to deploy a RIA Services Azure application is to set it to Any CPU and build/publish it from an x64 machine.)
So IntelliTrace is not available. Online, Azure doesn't have anything resembling the nice console log window of the Development Fabric, and as such, I'm at a loss. Thus far I've just been trying to get things to work and not crash by commenting out sections of code, but given the time it takes to upload and start an instance, this is hardly optimal.
Any suggestions would be appreciated at this point.
The Azure SDK has a logging / diagnostics mechanism built in:
http://msdn.microsoft.com/en-us/library/gg433120.aspx.
One route would be to deploy a version with some Azure-specific instrumentation built in.
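As a rough sketch of what that instrumentation can look like with the classic SDK 1.x diagnostics API (the connection-string setting name is the usual default; adjust it to your role configuration), the role's OnStart turns on scheduled transfer of trace logs and the rest of the code writes through System.Diagnostics.Trace:

// In WebRole.OnStart / WorkerRole.OnStart
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

// Anywhere in the role afterwards; these entries end up in the WADLogsTable table
Trace.WriteLine("About to call table storage", "Information");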
You could try to RDP into an instance of the role and see if there's anything in any of the logs (event or files) that helps you identify where the failure is.
Barring that, I think Amasuriel has it right in that you REALLY need to architect instrumentation into your solutions. It's something that's on my "must" list when building a Windows Azure application.
If you have access to another workstation with an x64 version of Visual Studio, you can configure Azure diagnostics to collect and copy the crash dumps to Blob Storage:
// Must be called after diagnostic monitor starts.
CrashDumps.EnableCollection(false);
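A minimal sketch of where that call typically goes, assuming a standard web/worker role where diagnostics is started in OnStart (setting name as usual; false keeps the default mini dumps, as in the snippet above):

public override bool OnStart()
{
    // start the diagnostic monitor first, then opt in to crash dump collection
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    CrashDumps.EnableCollection(false); // false = mini dumps, true = full dumps
    return base.OnStart();
}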
You can then download them (using a tool like Azure Storage Explorer) and debug them locally.
If you absolutely need to see what's going on on the console, Rob Blackwell has embedded a neat little trick in his AzureRunMe solution.
It pushes the console output of Azure instance(s) out over the Service Bus. You can therefore consume that data locally and, in effect, monitor the console of the instances running on Azure right on your desktop.
AzureRunMe is available here, and it's open source, so you can take a look at how they've fed the console output to the Service Bus.
https://github.com/RobBlackwell/AzureRunMe