I have an awk script that runs on specific log files. Initially we ran it on the machine that generates the log files, so all was good; at the end of the script I just pointed it at the local directory and file it needs to run on, for example: /logs/logfile1
But now, I've added several other machines to help load balance our application, so each time a particular machine is accessed (in round robin fashion) that machine writes its own log file local to that machine.
How do I get the script to run on one machine but access the log files from all of the other machines as well? (I could copy the script, run it locally on each of the machines, and append the outputs to one file, since there are only 5 machines right now, but I figure there is an easier solution.)
Also, I run CentOS 6.x on these servers, if that is helpful.
EDIT: I suppose I could create soft links to the other machines, on the machine that is running the script. Just wondering if there is something easier?
Mount the other machines' file systems (via sshfs, NFS, etc.) on the machine with the script.
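For example (the hostname and paths are placeholders, and this assumes sshfs or an NFS export is already set up on the other machines):

    # Mount one of the other machines' /logs directory over ssh
    mkdir -p /mnt/web02-logs
    sshfs user@web02:/logs /mnt/web02-logs

    # Or over NFS, if /logs is exported on that machine
    mount -t nfs web02:/logs /mnt/web02-logs

After that, your awk script can simply point at /mnt/web02-logs/logfile1 as if it were local.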
Mounting the required directories of all machines on one machine is probably the best solution. However, if you expect the number of machines to increase to a larger number in future, you should try for a scalable solution.
You could have a solution as below:
Maintain a file in one machine with a list of all other machines and their respective directories.
Have a script to telnet/ssh to each of those remote machines and execute your awk script.
Retrieve all the output files (via ftp/scp) to one machine and merge them.
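A rough sketch of that approach, run from the collecting machine (the host list, script path, and awk script name are placeholders, and it assumes key-based ssh is already set up); capturing ssh's stdout also lets you skip the separate ftp step:

    #!/bin/bash
    # machines.txt lists one hostname per line; parse_logs.awk stands in for your existing script
    : > /tmp/combined.out
    while read -r host; do
        # -n keeps ssh from swallowing the rest of machines.txt on stdin;
        # the remote awk output is appended straight into the merged file
        ssh -n "$host" "awk -f /opt/scripts/parse_logs.awk /logs/logfile1" >> /tmp/combined.out
    done < machines.txt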
Our team has ~80 Windows development machines, and activities of each developer are logged as text files on the local storage of those machines.
To analyze the logged activities, I want to gather all the log files from those machines. Additionally, the log files are updated constantly, so it is desirable to gather the files from my machine via the command line.
I’ve searched and found some solutions, but none of them are suitable for our situation:
We cannot use PsExec, because tcp/135 and tcp/445 are both closed (countermeasure for WannaCry).
Administrative share is disabled.
The Telnet service is not running, and it is banned for security reasons.
WinRM is disabled on those machines by default.
It is difficult to install new software like OpenSSH on those machines (because of the rules of this project).
RDP is the only way to connect to those machines (I have an account on all of them).
How can I copy files from remote Windows machines with command-line through RDP?
Or, at least, is there any way to execute a command on remote Windows machines with command-line through RDP?
I think you can do this, though it is very hacky :)
For a basic setup, which just copies files once, what you would need to do is
Run a script in the remote session when it logs in. I can think of three ways to do this:
Use the "Alternate Shell" RDP file property. This runs a specified program in place of explorer.exe on login; you can use it to run "cmd.exe /c [your script]" for instance.
If that doesn't work (e.g. the remote machine doesn't respect it), you might be able to use a scheduled task that runs the script on login, but perhaps only for a specified user, or maybe the script could check the WinStation type to make sure this is actually an RDP connection before doing anything.
It's also possible to do this by connecting in RemoteApp mode and using the script as your "application", but that only works for Server and Enterprise editions of Windows.
Enable either drive redirection or clipboard redirection on the RDP connection, to give you a way to get data out.
Drive redirection is much simpler to script; you just have the remote script copy files to e.g. "\\tsclient\C\logs".
Clipboard redirection is theoretically possible - you have the remote script copy, then a local script paste - but would probably be a pain to get working in practice. I'm only mentioning it in case drive redirection isn't available for some reason.
You would probably want the script to then log the session off afterward.
You could then launch that from the command line by running "mstsc.exe [your RDP file]". The RDP files could be generated programmatically if needed (given you're working with 80 machines).
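As a concrete sketch of that basic setup (the hostname, script path, and log locations below are placeholders, not details from the question), the .rdp file would enable drive redirection and point the alternate shell at a collection script:

    full address:s:devpc01.example.local
    username:s:DOMAIN\devuser
    drivestoredirect:s:*
    alternate shell:s:cmd.exe /c C:\scripts\collect-logs.cmd

and the hypothetical collect-logs.cmd would copy the logs back through the redirected drive, then end the session:

    @echo off
    rem Copy this machine's activity logs to the connecting PC via drive redirection
    xcopy "C:\DevLogs\*.log" "\\tsclient\C\logs\%COMPUTERNAME%\" /Y /I
    rem Log the session off once the copy is done
    logoff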
If you want a persistent connection you can execute commands over, that's more complicated, but still technically possible. Two ways I can think of:
Use the previous method to run a program on logon, but this time create a custom application that receives commands using a transport that isn't blocked and executes them in the session. I've done this with WCF over HTTP, for instance; it's not secure, of course.
Develop and install a service on the remote machine that opens an RDP virtual channel, and a corresponding RDP client plugin that communicates with it. You can then do whatever you want across the connection. While this solution would be the most likely to work, it's also the most heavyweight and time-consuming to implement so it's probably a last resort.
The environment
Master PC has access to shared drive X
Master PC has Jenkins as a Windows service
Slave PC is a Windows PC in the same network as the master
Slave PC most likely will not have access to drive X (there will be many slave PCs running this in the future)
The scenario
I need to copy some files from drive X to the slave machine, but this is a conditional step based on a parameter of the job, so it should be a pipeline step, as we don't want to copy the files if they're not needed. The files to copy might be large, so stash/unstash is not an option.
So basically my question is: is there a simple way to solve this scenario without having to give the slave PC(s) access to drive X?
I think you should copy the files to a neutral location, like a binary repo, and copy them from there.
So ultimately I found that stash has no hard limit; for now I'm using stash/unstash even on large files (e.g. 1.5 GB) with no errors, until we start using a different method, like the one in Holleoman's answer.
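For reference, a minimal declarative-pipeline sketch of the stash/unstash approach (the parameter name, node labels, and paths are hypothetical; the stashing stage has to run on a node that can see drive X):

    pipeline {
        agent none
        parameters {
            booleanParam(name: 'COPY_FILES', defaultValue: false, description: 'Copy files from X to the slave?')
        }
        stages {
            stage('Stash from X') {
                when { expression { params.COPY_FILES } }
                agent { label 'master' }
                steps {
                    // stash only sees the workspace, so pull the files in first
                    bat 'xcopy X:\\payload .\\payload /E /I /Y'
                    stash name: 'payload', includes: 'payload/**'
                }
            }
            stage('Unstash on slave') {
                when { expression { params.COPY_FILES } }
                agent { label 'windows-slave' }
                steps {
                    unstash 'payload'
                }
            }
        }
    }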
I am trying to write a program (not bash scripting) that accesses another machine remotely on the same network to run commands on its terminal. How can I go about doing that? Any help would be appreciated.
You can do this on a case-by-case basis using ssh, and if you are trying to execute commands on a number of machines simultaneously, look at an orchestration tool such as Ansible.
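For example (the user, host, and inventory file names are placeholders, and both assume ssh access is already set up):

    # Run a single command on one remote machine over ssh
    ssh user@remote-host 'uptime'

    # Run the same command across many machines with an Ansible ad-hoc command,
    # where "hosts" is an inventory file listing them
    ansible all -i hosts -m command -a "uptime"

From a program, the simplest route is usually to shell out to ssh in the same way, or to use an ssh client library for your language.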
On a Windows Server 2012 R2 standard machine (AWS EC2 instance) I am using PsExec to start a process on a similar remote computer, supplying user credentials. The process fails in various ways that make me suspect permissions: two AWS CLI commands fail with codes 255 (ses sendemail) or 2 (s3 cp) and Excel refuses to save a file, complaining that there is no disk space.
If I log on to the second machine using the same credentials and run the same .bat file to start the process, it all runs as expected. The process is a WSH JScript and runs invisibly under cscript.exe with its output redirected to a file.
I ran a SET command via both methods to see whether the environments were different. There were four differences, none of which seem relevant:
local run has CLIENTNAME=COMPAQ, remote does not have that variable
local has SESSIONNAME variable (from running via mstsc), remote does not
TEMP and TMP have extra subdirectory \3 appended on local but not remote. Both versions end with directories which show as read-only in explorer.
local PATH includes C:\USERS\username\.dnx\bin but remote has %USERPROFILE% instead of username. There is no such directory in either case.
Today I tried Process Monitor (thanks @GamerJ5 for the suggestion) and saved all the Excel events from a successful local run and a failed remote-start run. Filtering out SUCCESS still left a few thousand results in each case, with no obvious clue as to which of the many failures might be important.
Can anyone suggest what types of request / result might be worth further investigation, or anything else I can look at?
In case it helps anyone else: it turns out that there was a red herring in the question; the failures of the two AWS commands were indirect consequences of the Excel failure. That was a consequence of ignoring Microsoft's "Don't run Office on a server" advice (https://support.microsoft.com/en-us/kb/257757), but a suggested work-around that I found on another post (Microsoft Excel cannot access the file "...". There are several possible reasons Windows Server 2008 R2 with Microsoft Office 2010) worked in this case: create the directory that Excel expects to find at Windows\System32\config\systemprofile\Desktop.
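For anyone applying that workaround, it amounts to creating the Desktop folder under the system profile (the SysWOW64 path is only needed for 32-bit Office on 64-bit Windows, so adjust to your setup):

    mkdir C:\Windows\System32\config\systemprofile\Desktop
    mkdir C:\Windows\SysWOW64\config\systemprofile\Desktop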
I have some source code on my Mac, and in order to test I'm interested in synchronizing it with a VM containing a similar web server setup to the production environment. Therefore I need to be able to automatically copy files over to the VM every time there are changes.
I know I can use rsync to do this manually whenever a script is run but I need some way of getting it to run in the background every single time a file in a particular directory or one of its sub-directories is modified.
I know inotifywait exists on Linux machines and could solve this problem. I've also read about the FSEvents API and kqueue. However, none of these seem to be accessible from the command line and I really don't want to spend a long time making something to do this...
I guess I could use a cronjob but a minute is a pretty long time to wait to see changes on a website...
Any ideas?
I do this all the time, developing on a Windows/Linux/Mac workstation and saving changes to a remote Linux server, where they're immediately served back to my workstation's browser for testing. You've got a few options:
You could mount the remote files locally (like via sshfs) and make changes directly to them. I.e., your Mac thinks the files are local, so you can edit them with your GUI editor, but when you File->Save, it actually saves the file remotely. The main downside to this is that you can't work when disconnected from the server.
Mount the local files remotely. This would allow you to work locally while disconnected but won't allow the test site to work when disconnected -- which may not be a big deal. This option might not be doable if you don't have the right tools/access on the remote server.
(My preference.) Use NetBeans IDE, which has a very nice "copy to remote" feature. You maintain a full copy of all files locally, and edit them directly. When you hit File->Save on a file, NetBeans will save it locally and transparently scp/ftp it to your remote server.
How about using a DVCS like git or mercurial, and having the local repo run post-commit hooks to run the rsync and then the test itself?
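A minimal sketch of such a hook (the host and paths are placeholders), saved as .git/hooks/post-commit and marked executable:

    #!/bin/sh
    # Push the working tree to the test VM after every commit, then kick off the tests there
    rsync -az --delete --exclude '.git' ./ user@testvm:/var/www/mysite/
    ssh user@testvm 'cd /var/www/mysite && ./run_tests.sh'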
I'm a bit confused about why you can't just run rsync from the same script that runs the test. If you run rsync -e ssh you can set up automatic public key authentication between the VM and the Mac. There won't be anything manual about the rsync in that case.
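For example (user and host are placeholders):

    # One-time setup: create a key pair and install the public key on the VM
    ssh-keygen -t rsa
    cat ~/.ssh/id_rsa.pub | ssh user@testvm 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

    # After that, the sync runs with no password prompt at all
    rsync -az -e ssh ./src/ user@testvm:/var/www/mysite/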
You might be able to set up a launchd agent to do what you want for a simple setup. See this question and the man page for launchd.plist for more information about the launchd WatchPaths key. But it looks like WatchPaths may not work for changes within sub-directories.
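A minimal sketch of such an agent (the label, watched path, and rsync destination are placeholders), saved as e.g. ~/Library/LaunchAgents/com.example.rsync-on-change.plist and activated with launchctl load, keeping in mind the sub-directory caveat above:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>com.example.rsync-on-change</string>
        <!-- Run rsync whenever something under the watched path changes -->
        <key>ProgramArguments</key>
        <array>
            <string>/usr/bin/rsync</string>
            <string>-az</string>
            <string>--delete</string>
            <string>/Users/me/src/</string>
            <string>user@testvm:/var/www/mysite/</string>
        </array>
        <key>WatchPaths</key>
        <array>
            <string>/Users/me/src</string>
        </array>
    </dict>
    </plist>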