I'm using git-bash on Windows and I find it annoying to open two terminal windows (and navigate to the right path in both) to:
- start an http-server to serve static files (a Node tool)
- start Grunt (the default Grunt task is grunt-watch, which watches the file system and runs tasks when things change)
What I want is to be able to execute a bash script or something to:
- start the http-server
- start other things if relevant
- run the grunt command to start it watching
My questions are:
- Is it possible?
- Is it practical? Console feedback might not be possible, or might be confusing if output from multiple processes is interwoven (if that's even possible).
- Is there a better way, other than multiple terminals? :o)
If you're using Grunt already, you should be able to use Grunt's task queues to run multiple tasks in one go. Typically, each project has some default task that orchestrates a running development environment, like so:
grunt.registerTask(
  'default',
  'Starts the server in development mode and watches for changes',
  ['build', 'server', 'watch']
);
Sometimes, though, merely queuing tasks isn't enough. You can drop down to writing ad-hoc tasks and use Grunt's extensive API, such as grunt.task.run and the rich context available inside tasks.
I won't bombard you with examples, but the things you can do here range from fetching data from remote sources to spawning child processes, piping their output through the Grunt process, and starting arbitrary tasks using grunt.task.run.
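As one hedged sketch (the 'server' task name and the http-server invocation are assumptions, not something from your project), an ad-hoc task that spawns the static file server as a child process before the queued watch task runs might look like:

```javascript
// Gruntfile.js sketch -- adapt the task names and server command to your setup.
module.exports = function (grunt) {
  grunt.registerTask('server', 'Start a static file server', function () {
    // Spawn http-server as a child process; its output is shown in the
    // same console as Grunt's. shell: true helps resolve the .cmd shim
    // on Windows/git-bash.
    var child = require('child_process').spawn(
      'http-server', ['.', '-p', '8080'],
      { stdio: 'inherit', shell: true }
    );
    // Kill the server when the Grunt process exits (e.g. Ctrl+C on watch).
    process.on('exit', function () { child.kill(); });
  });

  // One command -- `grunt` -- now starts the server and then watches.
  grunt.registerTask('default', ['server', 'watch']);
};
```

Since `watch` blocks the Grunt process, the spawned server keeps running alongside it, and both streams of console output are interleaved in the one terminal.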
Thanks for stopping by, I've searched the corners of the internet but haven't gotten anywhere.
To provision devices for my organization, we must manually run PowerShell commands using SHIFT + F10 in the Windows 11 OOBE, as we have multiple methods, one of which is legacy. I'm sure there are better methods, but I'm unfortunately working within these limitations. So far, to automate the imaging process, I've created an autounattend.xml which makes WinPE completely silent, as well as some pages of the OOBE.
Recently, I combined all the PowerShell commands we had been running prior into a script that, after running repeated checks for a network connection, prompts users with a GUI and effectively automates everything we had been doing manually before:
[Screenshot: message box with radio buttons]
I need to make this run when the OOBE Sysprep starts, but I really need some help.
The script contains a GUI, so it cannot run silently and the user needs to interact with it.
The script must start with the OOBE Windows Welcome Screen (i.e. the select-region screen). This is a limitation of the modules used, so I can't include it as a synchronous command in FirstLogonCommands or in SetupComplete.cmd, as both of those execute after the OOBE is completed.
I've tried configuring the answer file to boot into audit mode and have the script run there, but the script requires several reboots and I get an "installation failed" message after any reboot (despite later making the script enable the Administrator account and call "sysprep /audit /reboot"). Additionally, the Audit Administrator account takes ~15 minutes to log in, so it defeats the whole purpose of saving time.
I've tried using Task Scheduler, running both on system start-up and user log-on, as defaultuser0, BUILTIN\Administrators and SYSTEM. Task Scheduler seems to either queue the tasks or not run them at all in the OOBE.
I've tried placing the script, and then a shortcut to the script, in the common start-up folder, but that didn't work either.
To reiterate, I need a way to automatically run a script when the OOBE Sysprep starts. Furthermore, I need it to run every time the OOBE is launched: sometimes we have to manually reboot if something glitches or goes wrong, so the script will need to run again when the OOBE is resumed.
I know this is a tough one due to the limitations, but this will make the device rollout significantly easier.
Thanks,
Jake
I know that `ps` will show me all the currently running processes, but that won't show me anything that started and then stopped during some time span. Is there any other way to see all the processes that ran during some event?
I'm trying to set up a way of auditing all the processes that run during a build compilation. I can use `ps` to check the running processes at the start of the build, and run it again at the end. I can even set up a separate thread that runs `ps` over and over during the build to catch processes that might have run in the middle. But is there some better way of accomplishing this that I'm not aware of?
This build is being run on a mac, so it uses the mac version of bash.
After your processes have run, you can open Console (in the Applications/Utilities folder) and check the system logs for the time period of interest. Many messages are written, so the narrower the time window, the better.
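The polling approach from the question can also be sketched in bash. This is a minimal, self-contained illustration only (the "build" here is just `sleep 1`, a stand-in for your real build command), not a substitute for checking the system logs:

```shell
# Snapshot `ps` repeatedly while a command runs, then report every
# process name observed at any point during the run.
seen=$(mktemp)
sleep 1 &                      # stand-in for the real build process
build_pid=$!
while kill -0 "$build_pid" 2>/dev/null; do
  ps -eo comm= >> "$seen"      # record the names of all running processes
  sleep 0.1
done
observed=$(sort -u "$seen")    # unique list of everything seen running
rm -f "$seen"
echo "$observed"
```

Short-lived processes can still slip between two snapshots, which is exactly the weakness the question points out; the poll interval is a trade-off between coverage and overhead.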
I have a scenario in Windoze where I'd like to monitor for program A running before launching program B. I've googled related topics, and all I've come across are posts about using batch files to kick off multiple apps in sequence (with or without delays). The use case I'm facing is that the user can download a file which causes program A to launch, so I don't always know when A will be started (i.e. I can't just make a batch file with a shortcut on the desktop for the user).
I'd like to find a somewhat native solution that would routinely scan for program A running (could simply set up a Task that repeats) and then kicks off program B.
If a more complex method needs to be employed, so be it, but I'd prefer a minimally-invasive solution... simple language, simple development environment, etc.
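A minimal PowerShell sketch of that idea, using hypothetical names ("ProgramA" and the path to ProgramB are placeholders for your actual programs), could be registered as a scheduled task that runs at logon:

```powershell
# Poll every 5 seconds for ProgramA; when it appears, launch ProgramB once.
while ($true) {
    if (Get-Process -Name "ProgramA" -ErrorAction SilentlyContinue) {
        Start-Process "C:\Path\To\ProgramB.exe"
        break
    }
    Start-Sleep -Seconds 5
}
```

This stays within built-in Windows tooling (PowerShell plus Task Scheduler), which fits the "minimally invasive" requirement; the loop exits after launching B once, so wrap it differently if B should be relaunched on every new instance of A.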
I'm currently using Windows 7 with Apache, PHP, and MySQL. I understand that on Windows, Task Scheduler is the equivalent of cron jobs on Linux/Unix systems. I'm wondering what the easiest way is to run a PHP file on my localhost server through Task Scheduler. I want it to open Chrome (I know how to do that), but how do I set it to go to a certain page and close once it's finished the script?
A couple things come to mind...
One would be to use AutoIt, which is a very powerful, free automation tool. You could easily start Chrome, navigate, verify on-screen values (e.g. when your script finishes), and then close Chrome.
Another alternative would be to take a more guerrilla-warfare approach and terminate the Chrome process after some preset amount of time has elapsed (assuming you know how long the script takes to run). See Windows' `taskkill` command-line options, e.g. `taskkill /IM chrome.exe /F`.
I have a requirement to run a script on all available slave machines. Primarily this is so they get relevant windows hotfixes and new 3rd party tools before building.
The script I have can be run multiple times without undesirable side effects and is quite lightweight, so I'm happy for this to be brute force if necessary.
Can anybody give suggestions as to how to ensure that a slave is 'up-to-date' before it works on a job?
I'm happy with solutions that are driven by a job on the master, or ones which can inject the task (automatically) before normal slave job processing.
My shop does this as part of the slave launch process. We have the slaves configured to launch via execution of a command on the master; this command runs a shell script that rsyncs the latest tool files to the slave and then launches the slave process. When there is a tool update, all we need to do is restart the slaves or the master.
However - we use Linux whereas it looks like you are on Windows, so I'm not sure what the equivalent solution would be for you.
To your title: either use the Parameter Plugin, or use a matrix configuration and list your nodes in it.
To your question about ensuring a slave is reliable: we mark it with a 'testbox' label and try out a variety of jobs on it. You could also have a job that is deployed to all of them and have it take the machine offline if it fails, I imagine.
Using Windows for slaves is very obnoxious for us too :(