I would like some kind of delete/copy/move/etc. Windows command that completely ignores whether a file is "in use" and does its job anyway.
My specific case:
At the company I'm working at, we have GUI test scripts. The GUI program we're testing is supposed to protect other "testprograms" (as we call them) by modifying them in certain ways. Setup/teardown for these tests therefore involves making a copy of the archived, un-tampered testprograms so that the GUI program can perform destructive operations while the un-tampered copies still exist.
However, numerous times there's been some glitch and some process is still using the copied testprograms, thereby preventing teardown from overwriting the testprogram with another un-tampered one for the next round of testing. Thus, every single test "fails" because teardown fails.
Unfortunately I cannot provide any specific code.
Use the command-line version of Unlocker.
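For illustration, a teardown step using Unlocker's command line might look like the sketch below. The paths are placeholders, and the switches are from memory of Unlocker 1.9, so verify them against the help output (Unlocker.exe -H) of the version you install:

    rem Silently unlock and delete the stale, in-use copy...
    Unlocker.exe "C:\tests\work\testprogram.bin" -S -D
    rem ...then restore a fresh un-tampered copy for the next round.
    copy /Y "C:\archive\testprogram.bin" "C:\tests\work\testprogram.bin"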
I am writing e2e tests for a command-line app where I have to do file manipulation (such as cp, mv, rm, touch, and mkdir). The tests execute just fine in my local environment. The problem occurs when they run on the server across platforms, where the file manipulation operations interfere with each other. My questions are:
It seems wrong to have shell commands in test code to begin with; should I just implement the operations programmatically?
If the answer to the above is yes, is there something that would work as a "temporary file system" visible only to the process, so that the files would not get messed up when the tests run on other platforms?
It seems like a mutex lock could work as well, but it would slow down the entire build.
Sorry this is more of a general and specific question at the same time. Doubt there will be a perfect answer but would love to hear some suggestions and opinions as I am new in both Go and testing. Appreciate the help!
There is nothing inherently wrong with using OS commands in your code; they are there to be used. However, as you are seeing now, they may be incompatible with some target environments or restricted on them.
One tool that can work as an abstraction layer over file operations is Afero, which can even simulate in-memory filesystems and S3 resources.
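As a minimal sketch, a test could swap the real disk for Afero's in-memory backend, so parallel runs on shared build servers cannot interfere with each other (this assumes your code accepts an afero.Fs instead of calling the os package directly):

    package main

    import (
        "fmt"

        "github.com/spf13/afero"
    )

    func main() {
        // In-memory filesystem: nothing here touches the real disk.
        fs := afero.NewMemMapFs()

        if err := fs.MkdirAll("work/dir", 0o755); err != nil {
            panic(err)
        }
        if err := afero.WriteFile(fs, "work/dir/input.txt", []byte("hello"), 0o644); err != nil {
            panic(err)
        }

        data, err := afero.ReadFile(fs, "work/dir/input.txt")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(data)) // prints: hello
    }

In production code you would pass afero.NewOsFs() instead, so the same logic runs against the real filesystem.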
This is more of a general theory question as I'm stuck on how to proceed since this is my first time developing an application...
I'm developing a reporting application in VS 2015 that requires two types of functionality. It needs to have a GUI so that users can interact with and create reports and those reports need to be scheduled via Windows Task Scheduler. I'm planning on using a Console Application for the scheduling portion. My question is, what would be the best way to implement this? As of right now I have two separate Projects in a single Solution. Is this the best route to take considering my needs or is there a better option that I'm not aware of? I've done some searching online but have not been able to find a valid solution. It's especially difficult since the scheduling portion needs to pull the application settings from the Windows Form Application.
Any help or guidance would be greatly appreciated. Thank you in advance!
The only reason you would need a console application would be if you actually needed a console interface. It doesn't sound like that's the case—the interface will be written in WinForms. Therefore, you don't actually need two separate applications. You can combine all the necessary functionality in a single executable.
The way to do this is by checking for command-line parameters that indicate whether the app should run interactively or headless. Probably, what you'll want to do is make the app run interactively when no command-line parameters are passed. This would be the normal case, the situation the user gets into when they double-click your app to launch it from Explorer.
When it comes time to schedule your app to run a task in the background (with Task Scheduler or anything else), you signal this by passing a special command-line parameter to your app. You can decide what this is, and you may need several of them if your app can do multiple things in the background. If configuration information/parameters need to (or can) be passed to the app to configure how it should perform the background task, you can pass these on the command line, too. Or, as you mention in the question, you could pull these settings from whatever was set/saved by the user the last time they ran the interactive version of the app.
The trick is just checking for these command-line parameters in your application's Main method. In the design I proposed, if there are no command-line parameters specified, then you just go ahead and create the main form like you normally would. If there are command-line parameters, then you need to parse them to see what action is being requested. Once you've parsed them and determined which background task should be run, you just run that background task—without ever creating/showing a form.
There are lots of different solutions for parsing command-line parameters. Using a library would probably be the easiest way, and also give you the most features. But if you just needed something really simple, like a /background mode, then you could easily write the code for this yourself, without taking a dependency on a library or needing to learn how to use it.
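Here is a minimal sketch of that Main-method check (MainForm, RunBackgroundTask, and the /background switch are placeholder names, not from the original question):

    using System;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            if (args.Length > 0 &&
                args[0].Equals("/background", StringComparison.OrdinalIgnoreCase))
            {
                // Headless mode for Task Scheduler: no form is created.
                RunBackgroundTask();
                return;
            }

            // No parameters: launch the normal interactive GUI.
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new MainForm());
        }

        static void RunBackgroundTask()
        {
            // Read the settings the user saved in the GUI
            // (e.g., Properties.Settings.Default) and run the report.
        }
    }

    class MainForm : Form { }

Task Scheduler would then point at the same executable with the extra argument, e.g. MyReportApp.exe /background.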
So you could do all of this with a single project in a single solution if you wanted to. Or, you could split different aspects of the functionality out into different projects that compile to libraries (e.g., DLLs), but still have only a single executable for simplicity.
My development system uses different clients for development and testing, which I assume is common practice. Unfortunately this introduces a rather annoying inconvenience when it comes to debugging. While breakpoints placed on the development system stick to their code and move as lines are inserted or deleted, this is obviously not the case for breakpoints placed on the same code in another client.
Since the system has no knowledge of exactly how rows were changed between two versions, breakpoints placed in the testing client remain at a particular line in the program. Any change to the code will therefore break the breakpoints. To resolve this I have to open another program or screen, then return to the program to refresh the code (where's the refresh button, SAP?), find where the breakpoints have moved to and remove them one by one (where's the batch-remove-breakpoints button, SAP?), and then set new breakpoints, usually at exactly the same locations.
This problem has become so frequent in my work that I sometimes spend more time moving breakpoints than on the actual development. In some cases I have given up and started coding in user breakpoints, since those at least remain in place. However, these come with their own drawbacks: they can't be removed in the debugger, making them useless when you are forced to stop at every breakpoint in a thousand-record loop.
My actual question is now whether there's a better approach or best practice when it comes to debugging in this scenario. I'm relatively new to ABAP programming so I hope that more experienced developers have alternatives or tricks that they use to speed this process up. Is there some better way to go about debugging and breaking code in a secondary client?
You could try creating a checkpoint group in transaction SAAB, and code the break-points to the checkpoint group.
Syntax
BREAK-POINT ID zyour_new_checkpoint_group.
This has the advantage that you can activate it for a set time, or for a set of users, etc. However, I'm not sure that, if you get stuck in a thousand-record loop, you will be able to just deactivate it and skip over the breakpoint.
It may be worthwhile to check first if you can deactivate the checkpoint group on the fly while the program is running before using this in anger.
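A minimal sketch of what this answer describes (the checkpoint group name zmy_debug_grp and the data declarations are placeholders; the group must first be created in transaction SAAB):

    DATA lt_records TYPE STANDARD TABLE OF i.
    DATA ls_record  TYPE i.

    LOOP AT lt_records INTO ls_record.
      " Stops only when the checkpoint group is activated in SAAB,
      " e.g. for your user only or for a limited validity period.
      BREAK-POINT ID zmy_debug_grp.
    ENDLOOP.

Because activation lives in SAAB rather than in the code, the same program can run clean in the test client until you explicitly switch the group on.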
The practice of having a development client and a test client makes sense for client-dependent objects, e.g. customizing; it ensures a reasonably stable environment for development testing. But it makes no sense for programs and other development objects, since those are client-independent. It is still important that all your client-dependent development objects (e.g. standard texts and SAPscripts) originate from the development client, so it is best to create all your objects there. But once you have done that and are on to testing and debugging, there is no technical reason not to just change your program in the test client.
It might take some effort to convince the people responsible for development procedures of this practice, since there is always a chance that objects get created in the wrong client, which could lead to a mess when you want to release them. But with the scenario you describe in your question, you should be able to plead your case.
The design of my application is that standard user operations run first (and produce useful information even if the user cannot proceed), and then it optionally offers to make some system changes accordingly, which requires elevation. If the user chooses to proceed, the program reruns itself with elevation, passing a command-line switch that tells it where in the workflow to resume. The new process then picks up where the old one left off and makes the changes the user requested.
My problem is I don't know how to write unit tests against the library methods that necessarily make privileged calls without running all of Visual Studio as administrator. I'd really like to avoid doing that so I'm fine with the system prompting me for credentials to run some or all of my unit tests. But currently as a standard user, the calls simply fail with the "System.Management.ManagementException: Access denied" exception.
Any ideas or experiences with handling this beyond elevating the whole of Visual Studio for the session? Since I'm using the built-in unit tests, ideally the solution would still display per-test results in the test results window but that's not a requirement.
I'm not sure what you are doing that requires administrator privileges, but I would suggest that in a unit test you shouldn't actually be calling those methods, but mocking out the classes that those methods are called on.
In this way you can make sure that the right calls are being made with the right parameters, but you aren't changing the state of the system.
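As a sketch of that idea, you can hide the privileged calls behind an interface and hand the tests a fake. The interface and class names below are hypothetical, not from the original code:

    using System.Collections.Generic;

    // The production implementation would wrap the actual
    // System.Management (WMI) calls that require elevation.
    public interface ISystemConfigurator
    {
        void ApplyChange(string settingName, string value);
    }

    // Test double: records calls instead of touching the system,
    // so the test runs fine as a standard user.
    public class FakeSystemConfigurator : ISystemConfigurator
    {
        public readonly List<string> Calls = new List<string>();

        public void ApplyChange(string settingName, string value)
        {
            Calls.Add(settingName + "=" + value);
        }
    }

A test then injects FakeSystemConfigurator, runs the workflow, and asserts that Calls contains the expected entries, verifying the logic without ever needing admin rights.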
You could impersonate an Admin account using LogonUser().
Take a look at this blog that’s trying to solve your problem.
I liked this CodeProject implementation for calling LogonUser better. There are actually many CodeProject examples of LogonUser() if you search around a little.
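The P/Invoke core of that approach looks roughly like the sketch below (credentials are placeholders). One caveat worth testing first: under UAC, impersonating an administrator account does not by itself give the thread an elevated token, so verify this actually satisfies your specific System.Management calls:

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;
    using System.Security.Principal;

    static class Impersonation
    {
        const int LOGON32_LOGON_INTERACTIVE = 2;
        const int LOGON32_PROVIDER_DEFAULT = 0;

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out IntPtr token);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr handle);

        public static void RunAs(string user, string domain, string password, Action action)
        {
            IntPtr token;
            if (!LogonUser(user, domain, password,
                           LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                throw new Win32Exception(Marshal.GetLastWin32Error());

            try
            {
                // Everything inside runs under the supplied identity.
                using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                {
                    action();
                }
            }
            finally
            {
                CloseHandle(token);
            }
        }
    }

A test could then wrap its privileged arrange/act steps in Impersonation.RunAs(...), while the assertions stay outside it, so results still show per test in the test results window.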
My objective is to write a program which will call another executable on a separate computer (all running Windows XP) with parameters determined at run-time, repeat this for several more computers, and then collect the results. In short, I'm working on a grid-computing project. The algorithm itself is already coded in FORTRAN, but we are looking for an efficient way to run it on many computers at once.
I suppose one way to accomplish this would be to upload a script to each computer and then run said script on each computer, all automatically and dependent on my own parameters. But how can I write a program which will write to, upload, and run a script on a separate computer?
I had considered GridGain, but the algorithm is already coded and in a different language, so that is ruled out.
My current guess at accomplishing this task is using Expect (wiki/Expect), but I have no knowledge of the tool.
Any advice appreciated.
You can use PsExec for this:
http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx
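A typical invocation (computer name, credentials, and paths are placeholders) might look like:

    psexec \\remotepc -u MYDOMAIN\griduser -p secret c:\grid\solver.exe param1 param2

PsExec can also take a list of target machines via its @file syntax, which fits the "repeat for several more computers" part of the question.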
You could also look at the open source alternative RemCom:
http://rce.sourceforge.net/
It's actually pretty simple to write your own as well, and RemCom will show you how to do it if you want. Using PsExec may well suffice for your needs, though.
Have a look at PVM; it was made for the type of situation you're describing, but you may have to annotate your existing codebase and/or implement a wrapper application.