We have a standard config (tunnel.conf) for WireGuard that we want to push to clients (via JAMF Pro).
We do not want the end user to have to open the WireGuard UI to import the config; we want to do this via scripting.
Given I can place the tunnel.conf file anywhere on the end user's system, where do I have to place it, and what command do I need to run to import it?
And conversely, how can I delete a tunnel config from WireGuard via scripting?
So, as it turns out, WireGuard has a unique key pair per tunnel, which means each user has their own keys.
Managing that via JAMF sounds like a nightmare, and it will be easier to point users at their accounts on the VPN to pull down their own config than to manage it for them. Documentation and handholding time!
But it does seem to be possible to apply a profile via automation. The kind support people at my VPN provider pointed me to this article on the JAMF community board:
https://community.jamf.com/t5/jamf-pro/wireguard-configuration-file-distribution/m-p/264747
There's a related page in the wireguard-apple repository:
https://github.com/WireGuard/wireguard-apple/blob/master/MOBILECONFIG.md
If we do end up trying to manage the users' configs, I'll update here.
I've read multiple posts on running scripts on GCP VMs, but unfortunately could not find an answer that satisfies my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide a way to run a bash script on a specific instance (unlike the Azure Cloud SDK, for example).
Options I've found:
The Google Cloud Compute SDK has an option to set a startup script that will run every time the instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
Obviously the startup script will require an instance reboot, and this is not possible in my use case.
SSH might also be problematic: the instance may not be running an SSH daemon, or the SSH port may not be open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risk.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from outside. The best solution is to deploy a web server (a REST API) on the VM that calls the script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, started by a startup script, that listens to a custom metadata key; let's say it checks it every second.
When the metadata is updated, the daemon can perform actions. You can imagine that the metadata contains the script to run along with its parameters. At the end of the run, the daemon cleans the metadata.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you can get something close to what you expected.
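As a sketch of that cheat, assuming a custom metadata key named run-script and gcloud available on the instance (the key name, zone and script body below are placeholders; from Go, the trigger is the Compute API's instances.setMetadata call, which is what gcloud wraps):

```
#!/usr/bin/env bash
# Minimal daemon sketch: poll a custom metadata key every second and run its contents.
KEY="run-script"
MD_URL="http://metadata.google.internal/computeMetadata/v1/instance/attributes/${KEY}"

while true; do
  # -f makes curl return nothing (non-zero exit) while the key is unset (HTTP 404)
  SCRIPT=$(curl -sf -H "Metadata-Flavor: Google" "$MD_URL")
  if [ -n "$SCRIPT" ]; then
    # Run whatever the key contains ...
    echo "$SCRIPT" | bash
    # ... then clean the key so it does not run again
    # (assumes the short hostname equals the instance name, as on default GCE images)
    gcloud compute instances remove-metadata "$(hostname -s)" \
      --zone="YOUR_ZONE" --keys="$KEY"
  fi
  sleep 1
done
```

To trigger a run from outside, set the key (or make the equivalent setMetadata call from your Go code):

```
gcloud compute instances add-metadata INSTANCE_NAME --zone="YOUR_ZONE" \
  --metadata=run-script='echo "hello from outside" > /tmp/proof'
```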
Think of GCP as providing the virtual machine infrastructure, such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside the governance of GCP: GCP only affects the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine instance. I'm going to say that there is no such API.
I'm working on a tool that generates .rdp files and then invokes them using the Microsoft RDP client. This tool runs on macOS.
Everything works well; the only problem is that I can't figure out how to generate the 'password 51:b' field properly. On Windows this can be done easily by using the CryptProtectData method from the Crypt32.dll library. How can I do the same on macOS?
Another option could be to use the "rdp://" URL scheme, but it doesn't seem to allow passing a password this way.
So the question is: how can I implement auto-login on macOS if I use a third-party RDP client?
As far as I know, you can't. You can, however, create a "User Account" and a server configuration and add both to the client. The connection will then be visible in the main window and you just need to double-click it.
To do so, you need to add the password to the Keychain; use /usr/bin/security to do so from a script. It needs to be a generic password saved under com.microsoft.rdc.macos. Also be sure to generate an ID according to the RDP client's scheme, like BFF77777-7777-7777-7777-777777777777.
You may also have to set the permissions to read that key using /usr/bin/security with set-generic-password-partition-list, specifying the right team ID (UBF8T346G9) and, again, com.microsoft.rdc.macos. You need the admin password to do this step.
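As a rough sketch, the two security(1) calls could look like this. The password and GUID are placeholders, and exactly which keychain attribute the client expects the generated ID in is an assumption here, so compare against an item created by the client itself (security find-generic-password -s com.microsoft.rdc.macos -g):

```
ID="BFF77777-7777-7777-7777-777777777777"

# 1. Create the generic password item the RDP client will look up
security add-generic-password \
  -a "$ID" \
  -l "$ID" \
  -s "com.microsoft.rdc.macos" \
  -w "SuperSecretPassword" \
  -T "/Applications/Microsoft Remote Desktop.app"

# 2. Allow the client (signed with team ID UBF8T346G9) to read the item without prompting.
#    This is the step that needs your keychain/admin password (-k).
security set-generic-password-partition-list \
  -a "$ID" \
  -s "com.microsoft.rdc.macos" \
  -S "teamid:UBF8T346G9" \
  -k "your-login-keychain-password"
```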
Then you can alter the RDP client's config file, which is a SQLite file located at /Users/$(whoami)/Library/Containers/com.microsoft.rdc.macos/Data/Library/Application Support/com.microsoft.rdc.macos/com.microsoft.rdc.application-data.sqlite. Add the user configuration to the ZCREDENTIALENTITY table and make sure the ZID matches the one added to the Keychain.
To add a server configuration you need to alter the ZBOOKMARKENTITY table. Just add a configuration by hand using the UI and look at the table to get a feeling for how it needs to be set up. Basically, you link your user configuration to the server configuration by making sure that ZCREDENTIAL in ZBOOKMARKENTITY matches the Z_PK in ZCREDENTIALENTITY of your user configuration.
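A hedged sqlite3 sketch of that linking step (the database path comes from above; the row IDs are placeholders, and the remaining columns of each table still have to be filled in the same way the UI-created rows are):

```
DB="$HOME/Library/Containers/com.microsoft.rdc.macos/Data/Library/Application Support/com.microsoft.rdc.macos/com.microsoft.rdc.application-data.sqlite"

# Find the Z_PK of the credential row whose ZID matches the Keychain item
sqlite3 "$DB" "SELECT Z_PK, ZID FROM ZCREDENTIALENTITY;"

# Point an existing bookmark (created once by hand in the UI) at that credential
sqlite3 "$DB" "UPDATE ZBOOKMARKENTITY SET ZCREDENTIAL = 1 WHERE Z_PK = 1;"
```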
I know this answer is a bit late, but it may give you a starting point. It will, however, not fully automate the process; you will still need to go to the UI and double-click the connection you want to use.
I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the installation to different paths and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test.
The way you described is not possible, because the iDempiere web UI is always served at
http://hostname:port/webui
Running multiple instances of iDempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files, so you are just fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you could use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job.)
There is another benefit to using subdomains for browser access: if all your server instances use the same host name, the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session, as discussed in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances, you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh deletes and recreates the DB user, which is not possible when the user owns more than one database.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment, you need to assign a different telnet port number to each of the instances: in the editor of your choice, open the renamed service script, find the line export TELNET_PORT=12612 and change the port number to something else (see the sketch below).
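A rough sketch of those service-related steps on Ubuntu/Debian (the instance name idempiere-test and the port 12613 are just examples):

```
# Rename the service script before installing the next instance
sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-test

# Give this instance its own OSGi telnet port (the default is 12612)
sudo sed -i 's/^export TELNET_PORT=12612$/export TELNET_PORT=12613/' /etc/init.d/idempiere-test

# Register the renamed script with the service controller
sudo update-rc.d idempiere-test defaults
```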
Please note: the OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community (by giving your own answer to this question or by posting to the iDempiere Google group). Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web server's configuration if it uses some default ports.
At work I have to connect to our server every day. After becoming annoyed with having to use the GUI's Connect to Server dialog every day, I wrote a quick script (using mount) that does the same thing.
When I use Connect to Server, however, a link to the mounted server appears in the side panel of the File Manager, which I use all the time. How do I add this link from a terminal/shell script?
(Or even better, where can I find the code for the Connect to Server program?)
Thanks in advance.
You want to use gvfs-mount rather than mount.
See the discussion here: http://www.g-loaded.eu/2008/12/08/access-gvfs-mounts-from-the-command-line/
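For example, assuming an SMB share (the server and share names are placeholders; mounting through GVFS is what makes the share show up in the file manager's side panel the way Connect to Server does):

```
gvfs-mount smb://fileserver.example.com/projects

# On newer distributions gvfs-mount has been replaced by gio:
gio mount smb://fileserver.example.com/projects
```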
I'm looking for some guidance on how to automate applying a set of user rights within the local security policy to multiple users on multiple servers.
For example, via a script, I want to apply "act as part of the operating system" and "adjust memory quotas for a process" to users TEST1 and TEST2.
Any feedback on how to get started would be appreciated. Thanks!
From a command line, the Microsoft-provided solution is secedit. AppDeploy is a great resource for packaging in general, and they have a good page on secedit here: http://www.osdeploy.com/tips/detail.asp?id=23
In short, change your policies using the Local Security Settings MMC snap-in, then export with secedit as in this page (http://www.webservertalk.com/message534715.html; this also assumes the computer isn't a member of a domain), then import as usual.
Is this machine domain joined? If so, you'll need to make sure no domain policies are applied. Otherwise the domain policies will be exported along with the local ones.
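A minimal sketch of that export/import round trip, run from an elevated command prompt (the file names and the restriction to the USER_RIGHTS area are just examples):

```
secedit /export /cfg C:\temp\user-rights.inf /areas USER_RIGHTS

:: edit the [Privilege Rights] section of user-rights.inf, then on each target server:
secedit /configure /db C:\temp\user-rights.sdb /cfg C:\temp\user-rights.inf /areas USER_RIGHTS
```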
Simpler answer here:
Scripting Local Security Policy
Use ntrights.exe from the Windows 2003 Resource Kit.
However, this doesn't seem to help with the "adjust memory quotas for a process" right.
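For the rights it does cover, a hypothetical invocation would look like this (SERVER1 and TEST1 are placeholders; SeTcbPrivilege is the internal name of the "act as part of the operating system" right):

```
ntrights.exe -m \\SERVER1 -u TEST1 +r SeTcbPrivilege
```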