I work for a small dotcom which will soon be launching a reasonably complicated Windows program. As the program has been passed around to the various non-technical types here, a number of "WTF?"-type scenarios have turned up that we've been unable to replicate.
One of the biggest problems we're facing is testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running more-or-less fresh installs of Windows XP and Vista, all hosted on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming.
I am looking for resources that would let us amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them, but they are still largely self-service, and I was wondering if there were better ways. How have you, or other companies in a similar situation, handled this sort of thing?
EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2?
Tom
The first question I would ask is whether you have any automated testing in place.
By this I mean mainly unit and integration testing. If not, then I think you need to look into unit testing immediately, first as part of your build process, and second via automated runs on servers. Even with a UI-based application, it should be possible to find software that can automate the actions of a user and tell you when a test has failed.
Apart from the tests you as developers can think of, every time a user finds a bug, you should be able to create a test for that bug, reproduce it with the test, fix it, and then add the test to the automated tests. This way if that bug is ever re-introduced your automated tests will find it before the users do. Plus you have the confidence that your application has been tested for every known issue before the user sees it and without someone having to sit there for days or weeks manually trying to do it.
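As a minimal sketch of what such a regression test might look like - assuming NUnit and a hypothetical InvoiceParser class that once crashed on empty input (both names are made up for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class InvoiceParserRegressionTests
{
    // Regression test for a previously reported bug: the parser crashed on an empty file.
    // Once this is in the automated suite, the bug cannot silently come back.
    [Test]
    public void Parse_EmptyInput_ReturnsNoInvoicesInsteadOfThrowing()
    {
        var parser = new InvoiceParser();

        var result = parser.Parse(string.Empty);

        Assert.IsNotNull(result);
        Assert.IsEmpty(result.Invoices);
    }
}
```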
I believe logging application activity and error/exception details is the most useful strategy for communicating technical details about problems on the customer side. You can add a feature to automatically mail you the logs, or let the customer send them manually.
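A rough sketch of that idea, using the standard System.Diagnostics tracing and System.Net.Mail classes; the log path, SMTP host and addresses are placeholders you would replace:

```csharp
using System;
using System.Diagnostics;
using System.Net.Mail;

public static class DiagnosticLog
{
    private const string LogPath = @"C:\ProgramData\MyApp\diagnostic.log"; // placeholder path

    // Call once at startup: everything written via Trace.* goes to the log file.
    public static void Initialize()
    {
        Trace.Listeners.Add(new TextWriterTraceListener(LogPath));
        Trace.AutoFlush = true;
        Trace.TraceInformation("Application started: {0}", DateTime.Now);
    }

    // "Send logs to support" feature: mails the log file as an attachment.
    public static void MailToSupport()
    {
        using (var message = new MailMessage("customer@example.com", "support@example.com"))
        {
            message.Subject = "MyApp diagnostic log";
            message.Attachments.Add(new Attachment(LogPath));
            new SmtpClient("smtp.example.com").Send(message); // placeholder SMTP host
        }
    }
}
```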
The question is, what exactly do you mean to test? Are you only interested in error-free operation, or are you also concerned with how the software is received on the customer side (usability)?
For technical errors, write a log and test manually in different scenarios on different OS installations. Adding unit tests could also help, but I suppose the issue is that it works on your machine and doesn't work somewhere else.
You could also debug remotely by using IDE features like "Attach to remote process". I'm not sure how to do that if you're not in the same office; you would likely need to set up a VPN.
If it's about usability, organize workshops. Have new people work with your application while you record them on video and audio, then analyze the problems they encountered in team "after-flight" sessions. Talk to the users, ask what they didn't like, and act on it.
Theoretically, you could also build this activity logging into the application. You'll need a clear idea, though, of what to log and how to interpret the data.
I have used InstallAware and InstallShield before; they are pretty difficult to work with, and when something goes wrong it is very difficult to find and resolve the issue.
My question is: why can't we use a Windows application written in C# to do this?
I understand that the .NET Framework may not be installed on the destination computer, so I wonder why no one has ever used this architecture:
I will create a simple installer using InstallShield (or any other similar tool) that just installs the .NET Framework and then extracts and runs my own Windows application, which I have written in C#, in elevated mode. My application will run a wizard with Back and Next buttons, and I will take care of everything in it (copying files, creating and starting Windows services, adding registry values, creating firewall exceptions, etc.).
Has anyone ever done this, and is there anything that prevents people from doing this?
In essence: don't try to re-invent the wheel. Use an existing deployment tool and stay with your day job :-). There are many such tools available. See links below.
And below, prolonged, repetitive musing:
Redux: IMHO, and with all due respect, making your own installer software is reinventing the wheel for absolutely no gain whatsoever, I am afraid. I believe you will "re-discover" the complexities already found by others who have walked the deployment path as you create your own installer software, and find that such software can be quick to make but very hard to perfect. In the process you will expend lots of effort trying to wrap things up - and "the last meter is very long" as you curse yourself dealing with trifles that take up your time at the expense of what would otherwise pay the bills. Sorting out the bugs in any toolkit, for whatever technical feature, can take years or even decades. And no, I am not making that up. It is what all deployment software vendors deal with.
Many Existing Tools: there are many existing tools that implement such deployment functionality already - which are not based on Windows Installer (Inno Setup, NSIS, DeployMaster and heaps of other less known efforts):
There is a list of non-MSI installer software here.
There is another list of MSI-capable software here.
My 2 cents - if you do not like MSI, choose one of the free, non-MSI deployment tools. How to create windows installer.
Corporate Deployment: The really important point (for me) is that corporate deployment relies on standardized packaging formats - such as MSI - to allow reliable, remote management of your software's deployment. Making your own installer will not impress any system administrators or corporate deployment specialists (at least until you sort out years of bugs and deficiencies). They want a standardized format that they know how to handle (which does not imply that they are that impressed with existing deployment technology). Doing your deployment with standardized deployment formats can get you corporate approval for your software. If you make a weird deployment format that does unusual things on install and can't be easily captured and deployed on a large scale, your software is head-first out of any large corporation. No mercy - for real. These are busy environments and you will face little understanding for your unusual solution.
"File-Pushers": Those of us who push files around for a living know that the field of deployment is riddled with silly problems that quickly kill your productiveness in other endeavors - the ones that make you stand out in your field - your day job. Deployment is a high profile, low status endeavor - and we are not complaining. It is just what it is: a necessity that is harder to deal with than you might think. Just spend your time more wisely is what I would conclude.
Complexity: Maybe skim the section "The Complexity of Deployment" here: Windows Installer and the creation of WiX. It is astonishing to deal with all the silly bugs that happen in deployment. It is not just a file copy, though it might be easy to think it is. And if it happens to be just a file copy, then there are existing tools that do the job. Free ones too. See links above. And if you think deployment is only file-copy in general, then please skim this list of tasks a deployment task should be capable of supporting: What is the benefit and real purpose of program installation?
Will your home-grown package handle the following? (just some random thoughts)
A malware-infected terminal server PC in Korea with Unicode characters in the path?
Symbolic links and NTFS junction point paths?
A laptop which shuts itself off in the middle of your file copy because it is out of battery?
Out of disk space situations? What about disk errors? And copy timeouts?
What about reboot requirements? For in-use files or some other reason. How are they to be handled? What if the system is in a reboot-pending state and you need to detect it before kicking off your install? (A couple of these checks are sketched in code just after this list.)
How will you reliably install, configure and start and stop services?
How will you support uninstall and cleanup for your application?
Security software which flags your unknown, unrecognized, non-standard package as a security threat and quarantines it? How would you begin to deal with this? Who do you contact to get into the good graces of a "recognized binary" for elevation?
Non-standard NTFS permissioning (ACLs) and NT Privileges? How do you detect it and degrade gracefully when you get permission denied? (for whatever reason).
Deployment of necessary runtimes for your application to work? (This has been done by many others before.) Download of the latest runtimes if your embedded ones are out of date? Etc...
Provide a standardized way to extract files from your installation binary?
Provide help and support for your setup binaries for users who try to use them?
Etc... This was just a random list of whatever came to mind quickly. There are obviously many issues.
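To give a feel for how much hand-rolled plumbing even two of the points above imply, here is a rough C# sketch of a pending-reboot check and a free-disk-space check. The registry locations are the commonly cited indicators and the helper names are my own; treat all of it as an illustrative assumption to verify, not a complete or authoritative implementation:

```csharp
using System.IO;
using Microsoft.Win32;

public static class PreInstallChecks
{
    // Commonly checked indicators of a pending reboot (assumed list - not exhaustive).
    public static bool IsRebootPending()
    {
        if (KeyExists(@"SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending"))
            return true;
        if (KeyExists(@"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired"))
            return true;

        using (var key = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet\Control\Session Manager"))
        {
            return key != null && key.GetValue("PendingFileRenameOperations") != null;
        }
    }

    // Is there enough free space on the drive that hosts the install target?
    public static bool HasEnoughDiskSpace(string targetDir, long requiredBytes)
    {
        var drive = new DriveInfo(Path.GetPathRoot(targetDir));
        return drive.AvailableFreeSpace >= requiredBytes;
    }

    private static bool KeyExists(string subKey)
    {
        using (var key = Registry.LocalMachine.OpenSubKey(subKey))
        {
            return key != null;
        }
    }
}
```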
This was a bit over the top for what you asked, but don't be fooled into thinking deployment is something you can sort out a solution for in a few hours. And definitely don't take the job promising to do so - if that is what you are being asked. Just my two cents.
The above issues, and many others, are what people discover they have to handle when creating deployment software - for all but the most trivial deployments. Don't waste your time - use some established tool.
Transaction: If you are working in a corporation and just need to get your files to your testers, you can deploy using batch files for that matter - if you would like to. But you have to support it, and I guarantee you it will take a lot of your time. What do you do when the batch file fails half-way through due to a network error, and your testers end up testing files that are inconsistent? Future deployment technologies may be better for such light-weight tasks. Perhaps the biggest feature of a deployment tool is to report whether the deployment completed successfully or not, to log the errors, and to roll the machine back to a stable state if something failed. Windows Installer does a lot of this work for you.
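To make the rollback point concrete, here is a rough sketch (in C#, with made-up paths and names) of the kind of copy-then-commit-or-roll-back logic you end up hand-rolling if you go the batch-file route - and note how much it still does not cover: locked files, permissions, reboots, or a network failure during the rollback itself:

```csharp
using System;
using System.IO;

public static class NaiveTransactionalDeploy
{
    // Copy new files next to the target, then swap. Roll back if anything fails.
    public static void Deploy(string sourceDir, string targetDir)
    {
        string staging = targetDir + ".new";
        string backup  = targetDir + ".old";

        // Clear leftovers from a previous failed run (sketch only - no locking, no logging).
        if (Directory.Exists(staging)) Directory.Delete(staging, true);
        if (Directory.Exists(backup))  Directory.Delete(backup, true);

        try
        {
            CopyDirectory(sourceDir, staging);           // may die half-way on a network error
            if (Directory.Exists(targetDir))
                Directory.Move(targetDir, backup);       // keep the old version for rollback
            Directory.Move(staging, targetDir);          // "commit"
            if (Directory.Exists(backup))
                Directory.Delete(backup, true);
        }
        catch (Exception ex)
        {
            // Rollback: put the old version back if we got far enough to move it aside.
            if (!Directory.Exists(targetDir) && Directory.Exists(backup))
                Directory.Move(backup, targetDir);
            if (Directory.Exists(staging))
                Directory.Delete(staging, true);
            Console.Error.WriteLine("Deployment failed and was rolled back: " + ex.Message);
            throw;
        }
    }

    private static void CopyDirectory(string from, string to)
    {
        Directory.CreateDirectory(to);
        foreach (string file in Directory.GetFiles(from))
            File.Copy(file, Path.Combine(to, Path.GetFileName(file)), true);
        foreach (string dir in Directory.GetDirectories(from))
            CopyDirectory(dir, Path.Combine(to, Path.GetFileName(dir)));
    }
}
```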
Distribution: A lot of people feel they can "just replicate my build folder to the user's computers". The complexities involved here are many. There is a network involved, and networks can never be assumed to be reliable; you need lots of error handling here. Then there is the issue of transactions: how do you know when the computer is in a stable state and when to stop replicating? How often do you replicate - only on demand? How do you deal with the few computers that failed to replicate? How do you tell the users? These are distribution issues. Corporations have huge tools such as SCCM to deal with all these error conditions. Trying to re-implement all these checks, logging and features will take a long time. In the end you will have re-created an existing distribution system. Full circle. And how do you do inventory of your computers when there is no product registered as installed, since only a batch file or script ran? And if you start replicating a lot of packages, how many times do you scan each file to determine whether it is up to date? How much network traffic do you want to create? Where does it end? The answer: I guess transactions must be implemented with full logging, error tracking and rollback. Then you are full circle back to a distribution system like I mentioned above, and a supported package format as well.
This "just replicate my build folder to my users" idea somehow reminds me of this list: https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing. Not a 100% match, but the issues are reminiscent. When networking is involved, things start to become very unpredictable and you need logging, error control, transactions, rollback, network communication, etc... We have re-discovered large-scale deployment - the beast that it is.
Network: and let's say you want to replicate your build folder to 10,000 desktop machines in your enterprise. How do you kick off the replication? Do you start all replications at once and take down the trading floor of the bank as file replication takes over the whole network like a DDOS attack? Sorry - it is getting out of hand - please pardon the lunacy - but it really is upsetting that this replication approach is seen as viable for large-scale deployment with current technology. Built-in Windows features could help, but they still need to be tested properly. You need scheduling, queuing, caching, regional distribution shares, logging, reporting/inventory, and God knows what else that a packaging/deployment system already gives you. And re-implementing it will be a pain train of brand-new bugs to deal with.
Maybe one day we will see automatic output-folder replication based on automatic package generation that really works, via an intelligent and transacted distribution system. Many corporate teams are trying, and by using existing tools and standard package formats they get closer. I guess current cloud deployment systems are moving in this direction with online repositories and easy, interactive installation, but we still need to package our software intelligently. It will be interesting to see what the future holds and what new problems result for packaging and distribution in the age of the cloud.
As we pull files directly from online repositories on demand, will we see a bunch of new problems? Malware, spoofing and injection? (Already problematic, but it could get worse.) Remote files deleted without warning (to get rid of vulnerable releases that should no longer be used - leaving users stranded)? Certificate and signature problems? Firewall and proxy issues? Auto-magic updates with unfortunate bugs hitting everyone immediately and unexpectedly? And the fallacies of the network and other factors as linked to above. Beats me. We will see.
OK, this became a rant as usual - and that last paragraph goes overboard with speculation (and some of the issues already apply to current deployment). Sorry about that. But do try to get management approval to use an existing packaging and deployment solution - that is my only advice.
Links:
Stefan Kruger's Installsite.org twitter feed: https://twitter.com/installsite
Choosing a deployment tool:
How to create windows installer
What installation product to use? InstallShield, WiX, Wise, Advanced Installer, etc
Windows Installer and the creation of WiX
WiX quick start tips
More on dark.exe (a bit down the page)
I have just started working with a new company in a very small IT department, and an even smaller dev team (just 2 developers including me). We mainly develop in-house web applications for the company.
Now my background is in desktop applications, so this job comes with a slight learning curve for me, having never developed ASP.NET web applications before. Currently we do not use TFS; however, I have made the suggestion and it is something we are going to be adopting soon.
I am also considering recommending we move our SQL databases into database projects, as currently we do nightly backups to protect the data but all the updates are manual: we connect to the database and execute queries, etc.
I'm not a DBA, but in my last job we were in the process of migrating our databases to database projects and the DBAs seemed to love the idea. What would the benefits and potential downfalls of this be? Would it aid us in updating databases in our live environment after development has been done? Obviously we don't want to lose any data, but just update tables / stored procs etc.
As a side question, I have very limited knowledge of TFS, and although we are going to be using it to handle our version control, is it possible to use TFS to update our live websites automatically once development has finished?
Sorry if this is quite a broad question, I am attempting to research this myself but I would like to hear from people who actually use the products and do these things.
Thanks
About database projects: I, and several DBAs I know, have had mixed experiences with them. I'm not sure they are exactly where they should be at this time, but that may simply be a function of how I work. The deployment model is... difficult, and can result in some unexpected behavior. If you go this route, test, test, test to make sure you understand exactly what's happening.
If you are just trying to get version control for the database you might consider SQL Source Control from Red Gate. It looks pretty nice and hooks into TFS. I used one of the early versions (beta and 1.0) for a while and was very happy with it. Now that I think about it, I'm not sure why I don't have it here... ;)
As far as deploying out of TFS, you can absolutely do this. We have a build server set up so that whenever code is checked in, the build server automatically spins up to compile and deploy it out to one of our testing sites. Look here for a primer to get you started. This does require some configuration on your web server to support it properly, and the documentation is spotty at best.
Once you are happy with the test area, you can hook something into changes to the Build Quality so that it pushes the code to a staging or even production server... or simply have a build set up to recompile and deploy out there. Although I don't recommend doing production pushes this way - not because of a technical issue, but rather a timing one. It's usually much faster to just copy/paste from one location to production when necessary, thereby limiting downtime.
We know this is good to have, but I find myself justifying it to my employer. Please pitch in on why a development team needs a build server.
There are multiple reasons to use build servers. In no particular order and off the top of my head:
You simplify the developers' workflow and reduce the chance of mistakes. Your build server can take care of multiple steps such as checking out latest code, having required software installed, etc. There's no chance of a developer having some stray DLLs on their machine that can cause the build to pass or fail seemingly at random.
Your build server can replicate your target environment (operating system, etc.) and there's less of a chance of something working on developers' desktops and breaking in production.
While it's a good practice for developers to test everything they check in, sometimes they just don't. Then it's good to have the build server there to catch test errors and let the team know the product is broken.
Centralized builds provide easy access to code metrics -- which tests passed, which failed, how often, how well is your code covered by your tests, etc. Having a solid understanding of the quality state of the codebase reduces maintenance and testing costs by providing timely feedback that allows errors to be fixed quickly and easily.
Product deployment is simplified -- the developer or QA doesn't have to remember multiple manual steps. It can be easily automated.
The link between developers and QA is simplified. QA personnel can go to a known location to grab the latest, properly versioned builds.
It's easy to set up builds for release branches, providing an extra safety net for products in their release stage, when making code changes must be done with extra care.
To avoid the "but it works on my box" issue.
Have a consistent, known environment where the software is built to avoid dependencies on local dev boxes.
You can use a virtual server to avoid (much) extra cost if you need to.
ASAP knowledge of which unit tests are currently passing and which are not; furthermore, you'll also know if a once-passing unit test starts to fail.
This should sum up why it is critical to have a build server:
http://www.codinghorror.com/blog/2006/10/the-build-server-your-projects-heart-monitor.html
It's a continuous quality test dashboard; it shows you statistics about how the quality of your software is doing, and it shows them to you now. (JUnit, Cobertura)
It makes sure developers aren't hamstrung by other developers breaking the build, and encourages developers to write better code. (FindBugs, PMD)
It saves you time and money throughout the year by getting better code from developers the first time - less money on testing and retesting - and by getting more code from the same developers, because they're less likely to trip each other up.
Two main reasons that non technical people can relate to:
It improves the productivity of the dev team because problems are identified earlier.
It makes the state of the project very obvious. I've shown my management the build status dashboard and now they look at it all the time.
One more thing. Something like Hudson is very simple to set up - you might want to simply run it somewhere in a corner for a while and then show it later.
This is my principal argument:
all official releases must be built in a controlled environment. No exception.
simply because you never know how the developers create their personal releases.
You also don't need to talk about a build server as in "a blade that costs an arm and a leg". The first build server I set up was a desktop machine that sat unplugged in a corner. It served us very well for more than 3 years.
Once you have your build machine, you can start adding some features (Hudson is great) and implement everything that the other posters mentioned.
Once your build machine becomes indispensable to your organization (and everyone sees its benefits), you will be able to ask for a shiny new blade if you wish :-)
The simplest thing you can do to convince your employer to get a build server is to tell them that they will be able to release faster and with better quality.
Faster releases come from the immediate feedback about the quality of the build. If someone breaks the build, he or she can fix it immediately, thus avoiding a delay in the build and release schedule. Without a build server, the team has to spend time trying to find out what happened, when, and how to fix it.
Better quality is achieved by the build server running bug-detection tools automatically every time someone checks changes into the version control system. You don't mention the main development language in your organization, but such tools - advanced but commercial, and simple but free - exist for practically all languages. Lint, FxCop, FindBugs and PMD come to mind.
You may also check this presentation on benefits of continuous integration for a more extensive discussion.
It happens a lot of the time that when you report a bug to a developer, he comes back saying "it works on my system", even though it's a browser app. How do you go about sorting that out?
From a training/process point of view:
Train your team to know that "works on my machine" is not a get-out-of-jail-free response.
Have an automated build server.
Have an automated test deployment.
Your developers must know that "works" is defined as "works on the test server", not just their machine.
From a testing/debugging point of view:
The developer needs to be shown the sequence of actions that result in the bug happening.
You might want to capture screenshots showing the bug, or possibly a video capture (using tools such as Camtasia). People can be quite bad at describing the sequence of actions they performed on a system that led to a bug showing itself, so the more information you can capture about the bug and how to replicate it, the better.
From a development/environment point of view:
If there is genuinely a bug that exhibits itself in one environment but not the developer's, then find out whether it fails to appear in all development environments, or just in that one developer's.
From thereon in it is a case of trying to reduce the differences between the two environments so that your developer can see the issue on his machine.
Or you can go the other way and attempt to debug the issue on the production (non-development) environment.
Implementation details of these vary by platform.
You need to give as much information to the developer as possible. Even stuff that you don't think is relevant.
I can't count the number of times I've had a problem reported and couldn't repeat it, only to find out later a piece of information that the user hadn't originally included but was the key to unlocking the puzzle.
You also need to not accept that answer, and say "well, something must be different between your setup and mine; what can we do to sort it out?"
We deal with that problem by having a development environment, on top of the local development machines, that is as close to the production system as possible in terms of setup, hardware, etc. As a result, almost all problems that occur in the production environment are reproducible on that development system, even if they can't be reproduced on local developer machines.
This is a common escapist retort that I encounter from teams. My response usually is: "You know, your system isn't the production server and that's where it needs to work". In other words, that excuse simply isn't acceptable.
I also indicate to them the possibilities:
a. There is a configuration difference between the local system and the server.
b. Certain dependencies of the functionality are not updated on the server.
c. They haven't cleared their browser cache.
d. I replicate the problem on the Staging server and demonstrate it to them.
e. ... and so on, depending on the case.
Try to recreate the system of the user who found the bug as closely as possible: from server config to machine config, including browser, OS and such. You should probably have several different setups on which to test your app before releasing.
IE Tester is a good tool for this kind of troubleshooting. If you need to test lots of browsers, then virtual machines like Virtual PC are your best bet, so you can have many client setups on your test server.
ahh yes... the oldest excuse in the book.
Assuming that both the developer and the tester are testing on the same server, I would try to isolate the bug by identifying the differences between the developer's machine and the tester's machine. It could be something minor like the Flash version, browser differences, or forgetting to clear the browser cache.
I would also recommend using an automated testing framework and test apps on a dedicated test server.
Not much you can do as an end user, but as a developer you can avoid a lot of these issues by including a lot of logging in the system. The differences the user will think of will just be the simple things you have tested already, but good logging lets you see exactly what was happening when the system failed. I've found quite a few bugs that "couldn't possibly happen" that way.
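As a minimal sketch of that kind of logging for a browser app - assuming an ASP.NET application, with a placeholder log path - hook the application-level error event once and record enough context (URL, browser, exception) to reconstruct what the user was doing:

```csharp
// Global.asax.cs - assumes an ASP.NET application; log path and format are placeholders.
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();

        // Record what the user was doing, not just the exception itself.
        string entry = string.Format(
            "[{0}] {1} {2}{5}User-Agent: {3}{5}{4}{5}{5}",
            DateTime.Now,
            Request.HttpMethod,
            Request.RawUrl,
            Request.UserAgent,
            ex,
            Environment.NewLine);

        File.AppendAllText(@"C:\Logs\myapp-errors.log", entry); // assumes the folder exists
    }
}
```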
As developers, we believe that not having local administrative access is going to severely handicap our productivity. We will be restricted from running IIS (we’re a web development shop), installing applications, running Microsoft power tools, etc. If you’re going through the FDCC process now, it would be great to hear how you are coping with these changes.
Having actively worked as a contract developer at a base that uses the AF Standard Desktop, I can tell you a few things.
1: and most important. Don't fight it and don't do what the first person suggested "and let them choke on it". That is absolutely the wrong attitude. The military/government is fighting lack of funding, overstretched resources and a blossoming technology footprint that they don't understand. The steps they are taking may not be perfect, but they are under attack and we need to be helping, not hindering.
OK, that off my chest.
2: You need to look at creating (and I know this is hard with funding the way it is) a local development lab. Every base that I have worked at has an isolated network segment that you can get on that has external access and is isolated from the main gov network. You basically have your work PC for e-mail, reports, etc., that is on the protected network. But you develop in your small lab. I've had a lab be 2 PCs tucked under my desk that were going to be returned during a tech refresh. In other words, be creative with making yourself a development machine + servers that are NOT restricted. Those machines are just not allowed to be connected to the main LAN segment.
3: Get the distributions of the desktop configurations. Part of your testing needs to be deploying/running on these configurations. Again, these configurations are not meant for development boxes. They are meant to be the machines the people use for day to day gov work.
4: If you are working on web solutions, be very aware of the restrictions on adding trusted sites, ActiveX components, certs, certain types of script execution that the configuration won't allow. Especially if you are trying to embed widgets/portlets/utils that require communications outside the deployed application domain.
5: Above all, remember that very few of the people you work for understand the technology they are asking you to implement. They know they want function X, but they want you to follow draconian security rule Y while achieving it. What that usually means is that the "grab some open source lib or plugin and go" approach is not an option. But that is exactly what your managers think you are going to do, because of the buzz around rapid development.
In summary, it's a mess out there. Try to help solve the problem.
While I've never been through the FDCC process, I once worked for a U.S. defense contractor whose policy was that no one had local administrative access to their machines. In addition, flash drives and CD-ROMs were disabled (if you wanted to listen to music on CDs, you had to have a personal CD player with headphones).
If you needed software installed, you had to put in a work order. Someone would show up at your desk with the install media, log in to a local admin account, and let you install the software (the reasoning being that you knew what to install better than they did). Surprisingly, the turnaround was pretty quick, usually around half an hour.
While an inconvenience, this policy didn't really cripple us. We were doing a combination of Java, C++ (MS Visual C++ and GNU/C++), VB 6.0 and some web development. For what little web development we did, we had a remote dev box we would RDP into for testing. Again, a bit of an inconvenience, but it didn't stop us from getting our jobs done.
Without ever having had the problem, today I'd probably try a virtualising solution to run these tools.
Or, as a friend of mine once opined: "Follow the process until They choke on it." In this case that would probably mean calling the helpdesk each time you needed a modification to your local IIS config or needed one of the power tools started.
From what I can tell, FDCC is only intended to be a recommended security baseline. I'd give some push-back on the privileges you require and see what they can come up with to accommodate your request. Instead of saying "I need to be a local administrator", I'd list the things that you need to be able to do and let them come up with a solution that works (which will likely be to let you administer your machine or a VM). You need to be able to run the debugger in Visual Studio, run a local web server (Cassini), install patches/updates to your dev tools on your own schedule, ...
I recently moved to a "semi-managed" environment with SCCM that gets patches installed on a regular basis from a local update repository. I was doing this myself, but this is marginally more efficient for the enterprise and it makes the security office happy. I did get them to put me, and the other developers, in a special collection so that we could block breaking changes if needed (how could IE7 be a security update?). Not much broke except that now I need to update Windows Defender manually since I updated it more frequently than they do in the managed collection! It wasn't as extreme as your case, obviously, but I think that is, in part, due to the fact that I was able to present the case for things that I needed to do for my job that required more local control.
From the NIST FAQ on Securing WinXP.
Should I make changes to the baseline settings? Given the wide variation in operational and technical considerations for operating any major enterprise, it is appropriate that some local changes will need to be made to the baseline and the associated settings (with hundreds of settings, a myriad of applications, and the variety of business functions supported by Windows XP systems, this should be expected). Of course, use caution and good judgment in making changes to the security settings. Always test the settings on a carefully selected test machine first and document the implemented settings.
This is quite common within financial institutions. I personally treat this as a game to see how much software I can run on my PC without any admin rights or sending requests to the support group.
So far I have done pretty well: I have only sent one software install request, which was for "Rational Software Architect" ('cos I need the plugins from the "official" release). Apart from that I have Perl, PHP, Python and Apache all up and running. In addition I have a Jetty server, Maven, WinSCP, PuTTY, Vim and several other tools running quite happily on my desktop.
So it shouldn't really bother you that much, and, even though I am one of the worst offenders when it comes to installing unofficial software, I would recommend "no admin rights" to any shop remotely interested in securing its applications and networks.
One common practice is to give developers an "official" locked-down PC on which they can run the official applications and do their e-mail, admin, etc., and a bare-bones development workstation to which they have admin rights.
Not having local administrative access to your workstation is a pain in the rear for sure. I had to deal with that while I was working for my university as a web developer in one of the academic departments. Every time I needed something installed such as Visual Studio or Dreamweaver I had to make a request to Computing Services.