Run two cloud solutions locally and simultaneously - visual-studio

I have two cloud solutions open in two instances of VS 2012. Each one has a Web Role Project. Both run fine, but I am unable to run them both at the same time. If I attempt to bring up one solution while the other is running, my browser just tells me the page is unavailable.
Does anyone have any thoughts as to why this might be? Could one solution be locking the Azure emulators for itself?

Related

Run two projects with different data sources in Visual Studio

I am trying to run two different projects in two instances of Visual Studio. The idea is that in one, based on a virtual machine, I want to test some long-running processes, but so that I can continue with other work, I want to run another project on my local machine, connecting to a local instance of SQL Server.
Both projects run fine by themselves; the problem occurs when one is run before the other: the second then gets an ECONN error. Both are running on localhost; one is set to run on port 2060, the other on port 2070.
Anyone know a way around this?
Thanks
Found the problem. The issue was with Java debugging: with it enabled, the two would not run simultaneously. With each on a different port and Java debugging disabled, it runs perfectly.
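If you hit something like this again, one quick sanity check is to try to bind the ports yourself and see which one is still held. A minimal C# sketch (the port numbers are the ones from the question; everything else is illustrative):

using System;
using System.Net;
using System.Net.Sockets;

class PortCheck
{
    static void Main()
    {
        foreach (int port in new[] { 2060, 2070 })
        {
            try
            {
                // Binding succeeds only if nothing else is listening on the port.
                var listener = new TcpListener(IPAddress.Loopback, port);
                listener.Start();
                listener.Stop();
                Console.WriteLine("Port {0} is free.", port);
            }
            catch (SocketException)
            {
                Console.WriteLine("Port {0} is already in use.", port);
            }
        }
    }
}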

"No endpoint listening" when starting an Azure web role under Compute Emulator

I have two cloud solutions (.ccproj files). Each has a single, distinct web role. One project runs under the Compute Emulator without any problems, but when I try to run the other one (with the first one not running), Visual Studio packages it and then displays:
Windows Azure Tools: There was no endpoint listening at net.pipe://localhost/dfagent/2/host that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
Windows Azure Tools: The Windows Azure compute emulator is not running or responding. Stopping the debugging session.
I'm using SDK version 1.4. I Googled for a while but couldn't find anything that could help me. Force-starting the Compute Emulator (csrun /devfabric:start) doesn't seem to help.
How do I resolve this problem?
Although this is an old question, I ran into this issue recently, and the reason for it was that the service or website in Azure had been removed or stopped while I was trying to publish to it. If this happens, check the publish profile to make sure you are pointing at the correct service/site, including the storage account etc., and correct it. Hope it helps someone.
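If the emulator itself seems wedged rather than the deployment, a full stop/start cycle sometimes brings the dfagent pipe back. With the Azure SDK tools on the PATH, the cycle looks like this (these are the documented csrun switches):

csrun /devfabric:shutdown
csrun /devfabric:start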

How do I know why my Azure instance doesn't start?

I deployed my service package to Windows Azure. The Management Portal has been showing "waiting for the role instance to start" for 30 minutes already, so I assume something is wrong.
I know that there's Azure Diagnostics, but is there some easier way to find what's going on in my instance - like some console displaying some detailed output or something?
In these cases, the most expedient option is probably to simply RDP into the box and see what is going on. Event logs, hitting the site, etc., from inside the machine usually give you a pretty good idea. If you have IntelliTrace (Visual Studio Ultimate), you can also enable that and pull down the logs to see what is happening. That works very well too.
@dunnry The problem is that you can't open an RDP session to the server if your Azure role is not running, so you have no way of knowing what is going on.
Most of the time there is something wrong in your Azure configuration files. Try removing parts and redeploying afterwards. Pay triple attention to your connection strings. Make sure that every ConfigurationSettings entry in the ServiceDefinition file is also defined in the ServiceConfiguration file.
What we basically do is deploy on a nightly-build basis; when an instance doesn't reach the running state, we can check the previous day's changesets.
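To make the ServiceDefinition/ServiceConfiguration point concrete: every setting declared in the .csdef needs a matching entry in the .cscfg, and a mismatch between the two is a classic cause of deployment trouble. A minimal sketch, with made-up service, role, and setting names:

ServiceDefinition.csdef declares the setting:

<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <ConfigurationSettings>
      <Setting name="StorageConnectionString" />
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>

ServiceConfiguration.cscfg supplies the value:

<ServiceConfiguration serviceName="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>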
If Azure Diagnostics doesn't tell you anything, then I don't think so - no. Somewhat annoyingly, one thing that frequently causes problems is Azure Diagnostics initialization itself - e.g. if the diagnostics connection string is wrong.
If the role instances start but the app has problems, then remote desktop might help.
If all else fails, try Azure support - it's still free right now.

Azure: just HOW do I debug this?

I'm really losing it here. Not being able to attach a debugger to a process is kind of a big deal for me. As such, I'm having a very hard time pinpointing the source of problems with an Azure-hosted application.
What's worse is that the app works fine in the Development Fabric, even when using online Storage Tables, but can go quite haywire when uploaded and running online.
I know IntelliTrace is one way to do it, but unfortunately I've got an x86 machine, and the application uses RIA Services. As such, publishing it from my machine results in an error caused by RIA Services, and if I build the application specifying x64, the very same bug strikes again. (So far the only way I know of to deploy a RIA Services Azure application is to set it to Any CPU and build/publish it from an x64 machine.)
So IntelliTrace is not available. Online Azure doesn't have anything resembling the nice console log window of the Development Fabric, so I'm at a loss. Thus far I've just been trying to get things to work without crashing by commenting out sections of code, but given the time it takes to upload and start an instance, this is hardly optimal.
Any suggestions would be appreciated at this point.
The Azure SDK has a logging / diagnostics mechanism built in:
http://msdn.microsoft.com/en-us/library/gg433120.aspx
One route would be to deploy a version with some Azure specific instrumentation built in.
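As a concrete starting point, a minimal SDK 1.x-style bootstrap in a role's entry point might look like the sketch below. The connection-string setting name is the conventional Diagnostics plugin default; the transfer interval and everything else here are illustrative, not prescribed by the answer above.

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Ship trace logs to storage every minute so they survive instance recycles.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        Trace.WriteLine("WebRole starting", "Information");
        return base.OnStart();
    }
}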
You could try to RDP into an instance of the role and see if there's anything in any of the logs (event or files) that helps you identify where the failure is.
Barring that, I think Amasuriel has it right in that you REALLY need to architect instrumentation into your solutions. It's something that's on my "must" list when building a Windows Azure application.
If you have access to another workstation with an x64 version of Visual Studio, you can configure Azure diagnostics to collect and copy the crash dumps to Blob Storage:
// Requires a reference to Microsoft.WindowsAzure.Diagnostics.
// Must be called after the diagnostic monitor starts.
// Passing false collects mini dumps; pass true for full dumps.
CrashDumps.EnableCollection(false);
You can then download them (using a tool like Azure Storage Explorer) and debug them locally.
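Tied together with a diagnostics bootstrap, the ordering the comment above describes would be roughly this (same SDK 1.x assumptions; by default the dumps end up in the wad-crash-dumps blob container):

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
CrashDumps.EnableCollection(false); // per the note above: only after the monitor is running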
If you absolutely need to see what's going on on the console, Rob Blackwell has embedded a neat little trick in his AzureRunMe solution.
It pushes the console output of the Azure instance(s) out over the Service Bus. You can therefore consume that data locally and, in effect, monitor the console of the instances running on Azure right on your desktop.
AzureRunMe is available here, and it's open source, so you can take a look at how they've fed the console output to the Service Bus.
https://github.com/RobBlackwell/AzureRunMe

Do you have performance problems when you work on Visual Studio projects via a network share?

We have tremendous problems with Visual Studio (2008, if that matters) locking up and slowing down when accessing projects over a network drive. It can take several minutes to open a large Web site project through a mapped drive, and saving even a single file can take a minute or more.
I fired up Wireshark and watched the traffic. VS, it seems, requests massive numbers of files from the network -- there's an enormous amount of SMB traffic. I've done some research, and this traffic seems to stem from two situations:
VS has to have everything in its own process to provide IntelliSense.
VS needs to have all the source in order to compile the project.
All the advice I've read seems to boil down to the same thing: work locally, not on a remote machine, then push your code to an integration server via source control.
This would sure solve our problems (VS is quite fast working locally), but what if you can't work locally? What if the project and the infrastructure required to run it are too large and complicated to be replicated on everyone's individual machines?
We've gone 'round this problem a couple times, and the only way we can figure to work on these projects is direct access via a mapped drive. However, the VS slowness and lockups are really becoming a problem.
One solution: we installed VS on the server and work on the projects directly on the servers via RDP. Seriously.
So, I ask:
What does everyone else do? Do you work via the network, or do you replicate projects locally? If remotely, do you suffer from VS performance issues?
We work locally and use SVN to keep all our code on the server.
I find VS 2008 quite slow working locally sometimes so I wouldn't fancy working on a network share.
Trying to compile over a network share is horribly slow in Visual Studio. Your start times will be bad as the IntelliSense database is regenerated. Each compilation has to go over the network multiple times. Linking takes forever.
If you need the output of your compilation on the network, I'd recommend doing your compile locally and defining a post-build command to copy the results to your share.
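For instance, a post-build event along these lines (Project Properties > Build Events) copies the local build output to the share; the server path here is hypothetical:

xcopy "$(TargetDir)*" "\\buildserver\drop\$(ProjectName)\" /E /Y /I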
If, as you say, you cannot pull everything locally, then I'd suggest your project is too big and needs to be broken up into more manageable chunks. For a multi-tier application, break it up by tier and invest in some form of continuous integration (e.g. CruiseControl) to automatically build individual pieces. That way you can work locally on a particular piece and pull the pre-built portions from CI for the other pieces of the application.
I'm not terribly surprised that using VS to load projects over a network share has performance issues. VS (in any language) is constantly getting information from files in the project. Once you start loading this over a network you're at the mercy of the underlying network connection. All lags and access issues will directly translate into VS having an issue loading file contents.
I would advise copying the solution locally and using some form of source code control to sync the project on the share.
If the code is too complicated to install on everyone's machine, then don't put it on everyone's machine. Does everyone need to have everything in order to do productive work?
I have 79 projects in the solution I work with - several hundred thousand lines of code. I pull my source down every day from TFS and build it; it's a lot of code, but it's a far better solution than trying to work over a network share.
A more legitimate situation for having the source code on a share is when one has a non-Windows host on which a number of virtual Windows machines are running.
I have this exact situation: my desktop machine (the host) runs Debian, and I use VMware to run various virtual Windows machines (the guests), including one that has Visual Studio installed so that I can target Windows OSes. Having the source code on a Samba share on the host machine has the following pros:
The source is not duplicated, so there is no way to confuse different copies while working on several virtual machines at the same time.
I have full control over the source from my preferred OS.
I can turn any of the virtual machines on and off, or roll back to a snapshot, without the risk of losing changes.
I can build (etc.) from the same source on several machines without having to commit changes before the source is fully tested (reason: I have to use Subversion <1.5).
The only problem with this setup is that Visual Studio (6,7,8,9) is painfully slow.
I have mounted the partition (on which the share lies) with "relatime", and this works insofar as the disk activity on the share is moderate, but Visual Studio keeps the (virtual) network card occupied all the time.
Any solutions to this would be very appreciated.
I encountered similar problems every time I worked (work = anything other than just copying/pasting files) over a network drive. The problem occurred with ZendStudio and Eclipse as well.
Why not use any kind of source control?
When working on Windows based projects I've always worked locally.
Once, at a Unix shop (AIX, IIRC), developers would work via an NFS mount and check in/check out via RCS...
I'm using VS2005 over a network share and not having any performance issues. However, it is a new server (Windows Server 2008). I don't have any other data points for VS, since using it at work is relatively new for me.
However, here are some data points from using NetBeans for previous projects on a network share: local build time for my project was 2 minutes on Vista, on a fast dual-core AMD 64-bit machine. For a network-share project on a Server 2003 box, it was 20 minutes. Building that same project locally on an ancient Tablet PC (1 GHz, single core) running XP took around 5 minutes. Interestingly enough, the Tablet PC could build on the Server 2003 box in the same 5 minutes.
For those asking "why" about the network share: the network share is automatically backed up, archived, etc. Also, that way I can very easily look at the same projects from multiple machines without having to worry about pushing back to the repository. Once you've put your dev stuff on a device you can get to from anywhere/anything, you'll never want to use local storage again!
I have performance problems via network anything, they just aren't good enough yet.
I thought it was common knowledge that disk speed is one of the major "slowness" factors when it comes to using VS on Windows. Most dev machines I've built have had projects located on 10k RPM RAID 0 drives, or at least a single 10k RPM drive. And even then it seems slow sometimes. Just the way it is, I suppose, until VS2009/VS2010 fixes it? :)
In my experience, this lag when working on a network share is 99% due to IntelliSense. Disable it and you'll see.
Disabling IntelliSense indeed speeds up saving and opening files through a UNC share dramatically:
http://blogs.msdn.com/saraford/archive/2007/12/03/did-you-know-how-to-turn-off-intellisense-by-default.aspx
But then again, as stated in other comments, you might as well use a good text editor.
I've also experienced the problems with performance mentioned above. It seems to vary from project to project, but I did find one way of speeding up performance significantly for some project types.
Following the advice in this article made a previously unusable project on a network location (it would take minutes to open one file) perform almost like a local project. The basic gist is that you need to grant FULL TRUST to the network location:
To grant permission to all your projects in your Visual Studio Projects folder located on the network, follow these eight steps (a command-line equivalent is sketched after the list):
1. Open Microsoft .NET Framework 1.1 (or 2.0) Configuration, which you'll find under Administrative Tools in the Control Panel.
2. Expand Runtime Security Policy | Machine | Code Groups | All_Code | LocalIntranet_Zone.
3. In the right-hand pane, click Add a Child Code Group.
4. In the dialog that follows, choose Create a new code group and fill in a Name like Visual Studio Projects.
5. Optionally, provide a Description for the Code Group. (You'll see the description when you click a Code Group in the left tree, helping you identify the various Code Groups you may have.)
6. In the Condition Type drop-down, choose URL.
7. For the URL field, type something like this: file://YourServer/My Documents/Visual Studio Projects/*
8. Under Use existing permission set, choose FullTrust (that is, if you trust your own applications; if you don't, choose a different permission set or create a new one).
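If you'd rather script the same change than click through the configuration tool, the caspol utility that ships with the .NET Framework can add the code group. Check the parent group's label first, since LocalIntranet_Zone is usually, but not always, 1.2:

caspol -m -listgroups
caspol -m -addgroup 1.2 -url "file://YourServer/My Documents/Visual Studio Projects/*" FullTrust -n "Visual Studio Projects"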
Not sure why this works, but it made a previously unusable .NET 2.0 project perform significantly better.
Original article: http://imar.spaanjaars.com/364/how-do-i-allow-my-visual-studio-net-projects-to-run-from-a-network-location
I was having the same problem. I have a local copy of our build system, which expects certain drive letters, and was also experiencing slowness.
I have solved the problem by adding the following registry keys:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices]
"R:"="\\DosDevices\\D:\\devel\\build"
"S:"="\\DosDevices\\D:\\devel\\src"
Note that the double backslashes above are part of the .reg file format. When editing in regedit directly, use single backslashes throughout.
My build times were divided by 3. :)
I found the info in the Wikipedia article on the SUBST command.
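For a per-session alternative that doesn't touch the registry (the mappings disappear at reboot), SUBST itself does the same thing:

subst R: D:\devel\build
subst S: D:\devel\src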
