In my day-to-day work I use MS Visual Studio, Docker for Windows, and VS's Docker tooling.
I'm working on a multi-container solution that has dependencies on containers with long startup times (such as elasticsearch). I've experimented with the docker-compose 2.4 format and successfully used the depends_on with healthchecks combination to ensure the startup order of my projects. This, however, negatively impacts my development experience, as it takes a long time to start my solution (using the VS / .dcproj / docker-compose integration) or to attach the debugger to it.
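For reference, the ordering setup I have today looks roughly like this (image tags and the healthcheck command are simplified):

version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    environment:
      - discovery.type=single-node
    healthcheck:
      test: ["CMD-SHELL", "curl -fs http://localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      retries: 30
  myservice:
    build: .
    depends_on:
      elasticsearch:
        condition: service_healthy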
I'm looking to optimize my development experience while retaining the guarantee that my dependencies are up and running before the containers I'm developing are launched.
Would it be a good solution to have the dependencies (elasticsearch, consul, etc.) started as a separate docker-compose solution and have it run side by side?
Is it possible to bridge two docker-compose solutions using a Docker network and preferably retain the Docker network DNS?
Is there a better conceptual solution than this?
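To make the bridging idea concrete, what I have in mind is roughly this (the network name is just a placeholder): create a shared network once with

docker network create shared-dev

and then point both docker-compose files at it:

networks:
  default:
    external:
      name: shared-dev

with the expectation that containers attached to the same network can still resolve each other by service name through Docker's embedded DNS.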
Best regards
I am using ASP.NET Core 2.2 and Visual Studio 2019. The containers my application runs in are Debian-based (one of the official aspnet:2.2 Docker images).
So my situation is this: I have an application that consists of 4 microservices running in Docker containers, and I'm seeing very high CPU usage on the container nodes when this is under load. What I would like to do is profile the executing code to get an idea of where this resource use is happening.
As a starting point I thought I would simply get some profiling running on my local development environment, just to get an idea of what the execution looks like in general. Although in production this runs in Kubernetes, I do have a development environment that uses docker-compose, and I find the Visual Studio Docker tools to be fairly good.
I was hoping to use some of the Visual Studio profiling tools. I was able to install VSDBG on one of my locally running containers and connect to it with VS, but in the diagnostic pane I see "the diagnostic tools window does not support the current debugging configuration". I've also tried just running the project from VS using docker-compose, but I see the same message when I hit a breakpoint. I'm not finding much out there about how to do this.
I also tried getting profiling going using perfcollect, but after I generated the trace and opened it with PerfView I got a parsing error when trying to view the CPU stacks. Still not sure what's going on there. I did find an old closed issue on their GitHub describing what I am seeing, but there was a fairly recent comment from someone saying they were seeing it with the latest version, so maybe it's a regression.
So, after all this, my question is: are either of the above approaches viable? Is there a better way to achieve this? I'm interested in any way someone has had success viewing some code profiling of a .NET Core 2.2 application running in a Linux Docker container. All I really want to do is see where in my code the execution time is going and what resources are being consumed. As I've mentioned, I'm not finding much out there when I Google for this, and I seem to keep hitting walls. If anyone has any advice or direction on an approach here I would really appreciate it. Thanks much!
Are you open to upgrading to .NET Core 3.0? (.NET Core 2.2 goes out of support in a few days, on 12/23/2019.)
If you're open to that, you can take advantage of the new tool dotnet-trace, which supports running in a Linux container and can be used with the tools in Visual Studio.
Here are the steps I used to add it to my project:
Change your base image to use the SDK image (needed to install the tool).
Add installing the tool to the image:
RUN dotnet tool install --global dotnet-trace
ENV PATH $PATH:/root/.dotnet/tools
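Putting those two steps together, the relevant part of the Dockerfile might look roughly like this (the 3.0 SDK tag is an assumption; match it to your project):

FROM mcr.microsoft.com/dotnet/core/sdk:3.0

# install dotnet-trace as a global tool and make it reachable on PATH
RUN dotnet tool install --global dotnet-trace
ENV PATH $PATH:/root/.dotnet/tools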
Alternatively, if you don't want to add it to your image, you can run the following commands in a running container (as long as it is based on the SDK image):
dotnet tool install --global dotnet-trace
export PATH="$PATH:/root/.dotnet/tools"
Start the project without debugging (Ctrl+F5)
Use the Containers Tool Window to open a terminal window
Run the command:
dotnet-trace collect --process-id $(pidof dotnet) --providers Microsoft-DotNETCore-SampleProfiler
When you are done collecting, press Enter or Ctrl+C to end the collection
This will create a file called “trace.nettrace”
By default, the /app folder the file is created in is volume-mapped to your project folder. You can open the file from there in VS.
I'm a junior web developer working in a small web agency. We work on Windows 10, with WampServer, and mainly with PrestaShop, WordPress and Symfony websites. For every task I am given a "ticket" for which I must develop on a new branch (if needed). When the work is done I merge my branch into the develop branch, which is hosted on a preproduction server, and if it is considered OK, it is then merged into the master branch, which is the website in production hosted on another server.
I was given the task to do some research on Docker and to find how it could improve our workflow.
But from what I've understood so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
But given the fact that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
Also, could Docker be of use for sharing projects between team members (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could use instantly, as-is and without configuration, to work on his part of it?
But from what I've understood so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
No, Docker containers are not only useful for testing.
When you build a proper workflow with Docker, you can achieve 100% parity between development, staging and production if they all use the same Docker images.
But given the fact that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
This preproduction server, normally called a staging server, should also use Docker to run the code.
Also, could Docker be of use for sharing projects between team members (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could use instantly, as-is and without configuration, to work on his part of it?
Yes... You create base Docker images containing only what is needed to run in production, and from them you can build other Docker images that add the necessary developer tools.
Production image company/app-name:
FROM php:alpine
# add your other production dependencies here
Assuming one builds that Docker image with the name company/app-name, the image for development can then be built on top of it.
Development image company/app-name-dev:
FROM company/app-name
# add only development tools here
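A minimal sketch of building the two images, assuming the production and development Dockerfiles above are saved as Dockerfile and Dockerfile.dev:

docker build -t company/app-name -f Dockerfile .
docker build -t company/app-name-dev -f Dockerfile.dev .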
Now the developer uses both images, company/app-name and company/app-name-dev, during development, while on the staging server only the company/app-name Docker image is used to run the code.
After some months of working with this flow you may even feel confident enough to start using company/app-name to deploy the app to production, and then you have 100% parity between development, staging and production.
Take a look at the PHP Docker Stack for some inspiration; I built it with this goal in mind at my latest job, but I ended up leaving the company while we were in the process of adopting it in development...
But don't put all the services you need into one single Docker image, because that is a bad practice in Docker. Instead, use one service per Docker image: one service for PHP, another for the database, another for the Nginx server, and so on. See here how several services can be composed together with docker-compose.
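A minimal docker-compose sketch of that idea (service names, images and ports are illustrative):

version: "3"
services:
  app:
    image: company/app-name-dev
    volumes:
      - ./src:/var/www/html   # mount the code so changes show up immediately
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - app
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example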
I have two cloud solutions open in two instances of VS 2012. Each one has a Web Role Project. Both run fine, but I am unable to run them both at the same time. If I attempt to bring up one solution while the other is running, my browser just tells me the page is unavailable.
Anyone have any thoughts as to why this might be? Could one solution be locking the Azure emulators for itself?
I would like to run TeamCity (with a build agent) in a Linux VM to handle our non-.NET projects. At the same time, I'd like to have a build agent set up on a Windows server to handle all of the .NET projects.
I can't think of any reason why this wouldn't work, but does anyone have any experience or ideas about the problems I might encounter before I spend too much real time on this?
Ta
It's fully supported. TeamCity also knows which agents to route builds to.
This is a very normal scenario, and many projects I know of do this without any problems. Just make sure that for the builds' Agent Requirements, you properly direct the appropriate job to the appropriate agent. One criterion can be that agent.os.name should contain Windows or Linux, etc.
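For example, an Agent Requirement on the .NET build configurations could be sketched roughly like this in the TeamCity UI (the exact parameter name can vary between TeamCity versions):

Parameter name: agent.os.name
Condition:      contains
Value:          Windows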
I was wondering if anybody out there is doing their software builds using something like Amazon's EC2. I was thinking about trying to move our builds into that environment. Right now our builds are serial, but only because we don't have enough computers to run all the components in parallel. Using EC2 we could create 50 or so machines, run them in parallel for a few minutes, and then send the build results back to our site. Once we're done we could shut down or destroy the machines. This would save us a bunch of time, since the bottleneck is really the builds and not the size of the results.
Is anybody else doing this? Can you offer any advice?
My company runs our build system on EC2; we have a much smaller setup than the one you're talking about, but we have a build controller instance running Hudson, which kicks off builds on a separate, clean instance and then distributes the build artifacts to our repository server (which also happens to be on EC2).
Using a cloud solution is ideal for what you're describing, since you can spin up the build servers only when you need them and be confident of building from a fixed baseline each time. The only downside I can think of is the build time; an EC2 instance can take up to 10 minutes to start up, so you either have to add that on to your total build time or keep the build servers running continuously.