I am using a utility called Wireless Network Watcher (https://www.nirsoft.net/utils/wireless_network_watcher.html) on a Windows 10 machine that captures the devices connected to my network and exports these items to a CSV file periodically.
The app also offers some command-line options to start it in the background, scan the network, and export the items to a file.
But when I get disconnected from my network (and that happens a lot), the app's scanning process stops and I need to fix this after each disconnect.
I chose to do this by creating a scheduled task that kills the app when a disconnection happens and another one that restarts it when I get connected again.
But when I do this, I lose the items already recorded by the running instance of the app (like devices that were connected earlier and now are not), so I want to use the command-line option to export the items to a file before killing the app:
C:\WNetWatcher.exe /scomma C:\log.csv
So my question is: is it doable to pass some parameters (/scomma in my example) to an already running instance of an application, rather than starting a new one?
Here are the command-line options available within the app:
I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not contain functionality that allows running a bash script on a specific instance (unlike the Azure Cloud SDK, for example).
Options I've found:
1. The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
2. Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client (a sketch of this approach follows the problems below).
Problems:
1. Obviously, a startup script requires an instance reboot, which is not possible in my use case.
2. SSH might also be problematic: the instance may not be running an SSH daemon, or the SSH port may not be open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be disabled), so the script might run as a non-privileged user, making this option unsuitable as well.
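For reference, here is roughly what the second option would look like, as a minimal Go sketch using the golang.org/x/crypto/ssh package (the address, user name, key path, and script path below are placeholders, not values from my setup):

package main

import (
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Placeholder key path; the instance address would come from the
    // instance info fetched through the Compute SDK.
    key, err := os.ReadFile("/path/to/private_key")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        log.Fatal(err)
    }
    config := &ssh.ClientConfig{
        User:            "someuser",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a sketch, not in production
    }
    client, err := ssh.Dial("tcp", "10.0.0.2:22", config)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    sess, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer sess.Close()

    // Run the script and capture stdout and stderr together.
    out, err := sess.CombinedOutput("bash /tmp/myscript.sh")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("%s", out)
}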
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risks.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from the outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, started with a startup script, that listens for a custom metadata value; let's say it checks it every second.
When the metadata is updated, the daemon can perform actions. You can imagine that the metadata contains the script to run, with its parameters. At the end of the run, the metadata is cleaned by the daemon.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you get something similar to what you expected.
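A minimal sketch of such a daemon, assuming a custom instance attribute named run-script (the attribute name is an arbitrary choice for this example); it reads the attribute through the internal metadata server:

package main

import (
    "io"
    "log"
    "net/http"
    "os/exec"
    "time"
)

// Hypothetical attribute name; any custom metadata key works.
const attrURL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-script"

// fetchScript reads the custom metadata attribute from the metadata
// server; it returns false when the attribute is not set (HTTP 404).
func fetchScript() (string, bool) {
    req, _ := http.NewRequest("GET", attrURL, nil)
    req.Header.Set("Metadata-Flavor", "Google") // header required by the metadata server
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", false
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return "", false
    }
    body, err := io.ReadAll(resp.Body)
    if err != nil || len(body) == 0 {
        return "", false
    }
    return string(body), true
}

func main() {
    for {
        if script, ok := fetchScript(); ok {
            // Run the metadata content as a bash script.
            out, err := exec.Command("bash", "-c", script).CombinedOutput()
            if err != nil {
                log.Printf("script failed: %v", err)
            }
            log.Printf("script output:\n%s", out)
            // Cleaning the attribute afterwards needs the Compute Engine
            // instances.setMetadata API (the metadata server is read-only
            // from inside the VM); that call is omitted in this sketch,
            // so as written the script would rerun on every poll.
        }
        time.Sleep(time.Second)
    }
}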
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside the governance of GCP, which only affects the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.
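To illustrate the daemon idea, here is a minimal sketch of such a web server in Go (the /run path and port 8080 are arbitrary choices for this example, and there is no authentication, which a real deployment would need):

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "os/exec"
)

// runHandler executes the POSTed request body as a bash script and
// returns its combined output.
func runHandler(w http.ResponseWriter, r *http.Request) {
    script, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    out, err := exec.Command("bash", "-c", string(script)).CombinedOutput()
    if err != nil {
        http.Error(w, fmt.Sprintf("%v\n%s", err, out), http.StatusInternalServerError)
        return
    }
    w.Write(out)
}

func main() {
    http.HandleFunc("/run", runHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}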
I am running a Jenkins server as a service on a Windows 10 computer. In one of the Jenkins jobs I have to perform tests using a COM application. The same computer is also used by the developers in their daily work over RDP, and the Jenkins job in question runs at night when no regular developer is using it. But if no user is logged in on the computer or using RDP, the script in the job fails to start the COM application with the following message:
The server process could not be started because the configured identity is incorrect. Check the username and password.
I found that the issue seems to be that the identity for the COM application is taken from the current interactive user, and if there is none, it fails; see:
https://support.microsoft.com/en-my/help/305761/com-server-application-that-uses-interactive-user-identity-fails-to-lo
I can't seem to solve my issue. I see two options:
Make sure that a user is logged in when the job is executed
Figure out how to run the COM application without an interactive user
For 1, I see the following solutions and why they do not work:
Auto-login on Windows start, leaving the user logged in: will not work, since we use the computer in our daily work through RDP, which means that the locally logged-in user will be kicked out, as we are only allowed one session at a time.
Log in using RDP and then exit using the command tscon.exe 0 /dest:console, which will leave the session open. Will not work, since we are 15 people in the team using that machine over RDP, and people will forget to use this command when they log off at the end of the day.
For 2, I am unable to find a way to do this.
Can I schedule a user in Windows to automatically be logged in before the job starts? Can I use a second computer and schedule an RDP session to the first computer before the job is executed?
Since nobody was able to provide a good solution, I will post my workaround as an answer and possible solution. What I ended up doing was using a second computer (running Windows) and scheduling a task on that computer that opens an RDP session to the computer running Jenkins every night (before the Jenkins job starts). This way the Jenkins job, and the COM application, have an active user session to work with.
This is how I achieved this:
1. Log in to the second computer (i.e. the one not running Jenkins), open the Remote Desktop Connection dialog and click Show Options.
2. Enter the details for the first computer (i.e. the one running Jenkins). Make sure to uncheck Always ask for credentials (you will need to save the credentials to be able to automate this).
3. Save the configuration to an .rdp file, using Save As...
4. IMPORTANT: Press Connect to connect to the first computer, enter the password and make sure to save it. Also accept any certificates etc. to prevent future warnings/dialogs.
5. Create a bat file containing the following:
mstsc C:\Path\To\saved_rdp_file.rdp
6. Create a task in Windows Task Scheduler that calls the bat file created in step 5 every night.
7. Optional: If you want to close the RDP session when Jenkins is done, create a second bat script and schedule it as well, containing:
tasklist /FI "imagename eq mstsc.exe" | find "mstsc.exe" && taskkill /f /im mstsc.exe || echo process "mstsc.exe" is not running
I am running some Erlang code on Mac OS X, and I have this weird issue. My application is a multi-node app where I have a single instance of a server that is shared between nodes (global).
The code works perfectly, except for one annoying thing: the different Erlang nodes (I am running each node in a different terminal window) can only communicate with each other after a ping!
So if on terminalA I am starting the server, and on terminalB I am running
erl>global:registered_names().
terminalB will return an empty list, unless, before starting the server on terminalA, I have run a ping (from either one of the terminals).
For example, if I do this on either terminal before starting the server:
erl>net_adm:ping('node@terminalB'). % the argument is a node name atom of the form name@host
then I start the server, and from the second terminal I list the registered processes:
erl>global:registered_names().
This time I WILL see the registered process from the second terminal.
Is it possible that the mere net_adm:ping call does some kind of work (like DNS resolution or something like that) that enables the communication?
The nodes in a distributed Erlang system are loosely connected. The first time the name of another node is used, for example if spawn(Node,M,F,A) or net_adm:ping(Node) is called, a connection attempt to that node will be made.
I found this in this link: http://www.erlang.org/doc/reference_manual/distributed.html#id85336
I think you should read this article.
I am a Windows user and wanted to use a spot instance based on my own EBS-backed Windows AMI. For this I have followed these steps:
1. I had my own on-demand instance with specific settings.
2. Using the AWS console I used the option "Create Image (EBS)" to create an EBS-based Windows AMI. It worked, and the AMI was created successfully.
3. Then, using this new AMI, I launched a spot medium instance, which was created well and is now running with status checks passed.
4. After waiting an hour or more I tried to connect to it using the Windows 7 RDC client, but it is not reachable, with the client tool's standard error that either the computer is not reachable or not powered on.
I have tried to achieve this goal and created/deleted many volumes, instances, and snapshots, but I am still unsuccessful. Does anybody have a solution to this problem?
Thanks
Basically what's happening is that the existing administrator password (and other user authentication information) for Windows is only valid in the original instance, and can't be used on the new "hardware" that you're launching the AMI on (even though it's all virtualized).
This is why RDP connections will fail to newly launched instances, as will any attempts to retrieve the administrator password. Unfortunately you have no choice but to shut down the new instances you've been trying to connect to because you won't be able to do anything with them.
For various reasons the Windows administrator password cannot be preserved when running a snapshot of the operating system on different hardware (even virtualized hardware) - this is a big part of the reason why technologies like Active Directory exist, so that user authentication information is portable between machines and networks.
It sounds like you did all the steps necessary to get this working except for one - you never took any steps to cause a new password to be assigned to your newly-launched instances based on the original AMI you created.
To fix this, BEFORE turning your instance into a custom AMI that can be used to launch new instances, you need to (in the original instance) run the Ec2ConfigService Settings tool (found in the start menu when remoted into the original instance using RDP), and enable the option to generate a new password on next reboot. Save this setting change.
Then when you do create an AMI from the original instance, and use that AMI to launch new instances, they will each boot up to your custom Windows image but will choose their own random administrator password.
At this point you can go to your ec2 dashboard and retrieve the newly-generated password for the new instance based on the old AMI, and you'll also be able to download the RDP file used to connect to it.
One additional note is that Amazon warns that it can take upwards of 30 minutes for the retrieval of an administrator password after launching a new instance, however in my previous experience I've never had to wait more than a few minutes to be able to get it.
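For what it's worth, the password retrieval can also be done programmatically. Here is a minimal sketch using the AWS SDK for Go (the region, instance ID, and key-pair path are placeholders); it calls ec2.GetPasswordData and decrypts the result with the key pair's private key:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "encoding/base64"
    "encoding/pem"
    "fmt"
    "log"
    "os"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := ec2.New(sess)

    // Placeholder instance ID.
    out, err := svc.GetPasswordData(&ec2.GetPasswordDataInput{
        InstanceId: aws.String("i-0123456789abcdef0"),
    })
    if err != nil {
        log.Fatal(err)
    }
    // PasswordData stays empty until the new password has been generated.
    encoded := strings.TrimSpace(aws.StringValue(out.PasswordData))
    if encoded == "" {
        log.Fatal("password not available yet; try again later")
    }

    // The password is base64-encoded and encrypted with the key pair's
    // public key; decrypt it with the matching private key (.pem file).
    cipherText, err := base64.StdEncoding.DecodeString(encoded)
    if err != nil {
        log.Fatal(err)
    }
    keyPEM, err := os.ReadFile("/path/to/keypair.pem")
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(keyPEM)
    if block == nil {
        log.Fatal("no PEM data found in key file")
    }
    priv, err := x509.ParsePKCS1PrivateKey(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    password, err := rsa.DecryptPKCS1v15(rand.Reader, priv, cipherText)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(password))
}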
Your problem is most likely that the security group you used to launch the AMI does not have RDP (TCP port #3389) enabled.
When you launch the Windows AMI for the first time, AWS will populate the quicklaunch with this port enabled. However, when you launch instances from the subsequent custom AMI, you will have to ensure that this port is open in your security group.
I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem with a specific user, instead of qusrwrk/quser as now (the default).
I think I might be able to clone the qusrwrk subsystem to make it start with a specific user, but what I cannot figure out is the mechanism to open the connection in that specific subsystem.
I guess there should be a property at the connection level to say subsystem=MySubsystem.
But unfortunately I haven't found that property.
Any hint would be appreciated.
Flavio
Let the system take care of the subsystem the database host server job is started in.
You should just focus on the application (which is what IBM i excels in).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs as described in the FAQ: When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc., a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine which profile is being used for any given host server job, you can do one of three things:
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.