I'm trying to create a VM within a vApp and use SDRS in Go using the govmomi package, but haven't been able to figure out a way to do it in one step.
Based on the tasks in vCenter (when I do it through the vCenter web client), it seems to be possible. Using the API, if I use SDRS recommendations, I get an error about the resource pool not being valid (because it's actually a vApp), but if I create the VM through the vApp, I don't get the SDRS benefits (and can't use a datastore cluster at all, since a plain datastore is required).
To work around it, I could either get an SDRS recommendation and just pull the datastore info out of it instead of applying it, then create the VM through the vApp; or I could create the VM outside the vApp using SDRS and move it into the vApp afterward. Both of these seem hacky.
I'm looking to see if there is a way to do what appears to be happening in vCenter, and avoid one of the hacky workarounds.
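For concreteness, here's roughly what that first workaround would look like with govmomi (a sketch only: specs, lookups, and error handling are simplified, and depending on your environment the placement spec may also need Folder and/or ResourcePool populated):

package main

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/property"
	"github.com/vmware/govmomi/vim25"
	"github.com/vmware/govmomi/vim25/mo"
	"github.com/vmware/govmomi/vim25/types"
)

// createVMInVApp asks SDRS for a placement recommendation on a datastore
// cluster (pod), extracts the recommended datastore instead of applying
// the recommendation, and then creates the VM as a child of the vApp.
func createVMInVApp(ctx context.Context, c *vim25.Client, vapp *object.VirtualApp,
	pod types.ManagedObjectReference, spec types.VirtualMachineConfigSpec) (*object.Task, error) {

	srm := object.NewStorageResourceManager(c)

	// Ask SDRS where it would place a new VM on the datastore cluster.
	result, err := srm.RecommendDatastores(ctx, types.StoragePlacementSpec{
		Type:             string(types.StoragePlacementSpecPlacementTypeCreate),
		PodSelectionSpec: types.StorageDrsPodSelectionSpec{StoragePod: &pod},
		ConfigSpec:       &spec,
	})
	if err != nil {
		return nil, err
	}

	// Pull the datastore out of the first recommendation instead of applying it.
	action := result.Recommendations[0].Action[0].(*types.StoragePlacementAction)

	// Resolve the datastore MoRef to its name for the VM file path.
	var ds mo.Datastore
	if err := property.DefaultCollector(c).RetrieveOne(ctx, action.Destination, []string{"name"}, &ds); err != nil {
		return nil, err
	}
	spec.Files = &types.VirtualMachineFileInfo{VmPathName: "[" + ds.Name + "]"}

	// Create the VM through the vApp, bypassing the SDRS apply step.
	return vapp.CreateChildVM(ctx, spec, nil)
}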
I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Go SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide a way to run a bash script on a specific instance (unlike the Azure SDK, for example).
Options I've found:
Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
Obviously, a startup script requires an instance reboot, which is not possible in my use case.
SSH might also be problematic: the instance may not be running an SSH daemon, or the SSH port may not be open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can just use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instances to additional risk.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from outside. The best solution is to deploy a web server (a REST API) on the VM that calls the script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, started by a startup script, that listens on a custom metadata key; say it checks the key every second.
When the metadata is updated, the daemon performs the action. You can imagine the metadata value containing the script to run along with its parameters. At the end of the run, the daemon cleans up the metadata.
So now, to run your bash script, just call the setMetadata API. It's not out of the box, but it gets you something close to what you expected.
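A sketch of the caller side with the Compute Engine API's instances.setMetadata (the "run-script" key name is an assumption to match the daemon above):

package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

// triggerScript writes a script into a custom metadata key ("run-script"
// is an assumed name) that a daemon inside the VM watches.
func triggerScript(ctx context.Context, project, zone, instance, script string) error {
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		return err
	}

	// setMetadata requires the current fingerprint, so fetch the instance first.
	inst, err := svc.Instances.Get(project, zone, instance).Context(ctx).Do()
	if err != nil {
		return err
	}

	md := inst.Metadata
	// For brevity this appends blindly; a real caller should replace an
	// existing "run-script" item instead of duplicating the key.
	md.Items = append(md.Items, &compute.MetadataItems{Key: "run-script", Value: &script})

	op, err := svc.Instances.SetMetadata(project, zone, instance, md).Context(ctx).Do()
	if err != nil {
		return err
	}
	log.Printf("setMetadata operation: %s", op.Name)
	return nil
}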
Think of GCP as providing the virtual machine infrastructure: compute, memory, disk, and networking. What runs when the machine boots is between you and the machine image. I hear you saying that you want to run a bash script within the VM; that is outside the governance of GCP, which only affects the operation and existence of the environment. If you want to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.
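For illustration, a minimal sketch of such a daemon in Go, long-polling the metadata server for an assumed "run-script" key (a real daemon would track etags to avoid re-running, validate the payload, and clear the key after running):

package main

import (
	"io"
	"log"
	"net/http"
	"os/exec"
	"time"
)

// The metadata server's documented long-poll endpoint for a custom
// instance attribute; "run-script" is an assumed key name.
const metadataURL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-script?wait_for_change=true"

func main() {
	for {
		req, _ := http.NewRequest("GET", metadataURL, nil)
		req.Header.Set("Metadata-Flavor", "Google") // required by the metadata server

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()

		// 404 means the key is not set yet; retry after a pause.
		if resp.StatusCode != http.StatusOK || len(body) == 0 {
			time.Sleep(time.Second)
			continue
		}

		// Run the metadata value as a script, as described in the answers above.
		out, err := exec.Command("bash", "-c", string(body)).CombinedOutput()
		log.Printf("script finished (err=%v): %s", err, out)
	}
}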
How do I get a vSphere VM's creation time using the Ruby SDK? I can get the VM instances, but there doesn't seem to be a property that shows when the VM was created.
api_client = VSphereAutomation::ApiClient.new(configuration)
VSphereAutomation::CIS::SessionApi.new(api_client).create('')
vm_api = VSphereAutomation::VCenter::VMApi.new(api_client)
vms = vm_api.list({filter_power_states: ["POWERED_ON"]})
...
# this gets a specific VM's information, but nothing about creation time.
vm_api.get('vm-34122')
The value comes from the vSphere Web Services API, via a relatively new property on the VirtualMachine managed object. It is nested in the ConfigInfo and is referenced as createDate; the full property path is vm.config.createDate.
Worth noting: this information isn't available in the REST (e.g. vSphere Automation) API for vSphere, so you'll need to use rbvmomi to access it. Also, if you happen to pull nonsensical dates, that's likely because the VM was created before this property was introduced into the API in vSphere 6.7.
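For example, a minimal rbvmomi sketch (host, credentials, and inventory names are placeholders):

require 'rbvmomi'

# Connect over the SOAP (Web Services) API rather than the REST API.
vim = RbVmomi::VIM.connect(host: 'vcenter.example.com',
                           user: 'administrator@vsphere.local',
                           password: 'secret',
                           insecure: true)

dc = vim.serviceInstance.find_datacenter('DC1') or raise 'datacenter not found'
vm = dc.find_vm('my-vm') or raise 'VM not found'

# Will be nil for VMs created before vSphere 6.7 introduced the property.
puts vm.config.createDate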
I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the installation to different paths and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test
The URL scheme you describe is not possible, because the iDempiere web UI is always served at
http://hostname:port/webui
Running multiple instances of idempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you can use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; see the example vhost after this list.)
There is another benefit to using subdomains for browser access: If all your server instances use the same host name the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session as discussed here in the idempiere google group.
Use different DB user names. The docs advise not to change the default user name Adempiere and this is ok for a single instance installation. Still if you use a single DB user for all of your instances you will run into trouble once you need to restore a database from a backup file. The RUN_DBRestore.sh will delete and recreate the DB user which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGI console. For this to work without problems in a multi instance environment you need to assign a different telnet port number to each of the instances: in the editor of your choice open the file /etc/init.d/iDempiere. Find the line export TELNET_PORT=12612 and change the port number to something else.
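As an illustration of the mod_proxy approach mentioned in the list above, a minimal Apache vhost (subdomain and port are examples; requires mod_proxy and mod_proxy_http to be enabled):

# Example only: route one subdomain to one iDempiere instance.
<VirtualHost *:80>
    ServerName prod.idempiere.example.com
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>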
Please Note:
OS specific descriptions in this guide are for Ubuntu 16/18 or Debian, if on another OS you need to do some research.
I have been using the described approach to host idempiere versions 5 and 6 for some time now and did not have any problems so far. Still make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them) please report back to the community. (by giving your own answer to this question or by posting to the idempiere google group) Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web servers' configuration if they use some default ports.
Is there a way to add a (parametrized) Startup task to a Windows Azure Virtual Machine through the API? I need to execute a cmdlet after the machine has been started and the code depends on two parameters that will be different for each machine. I know this could be easily achieved for a Web/Worker role, but could it be done for Virtual Machines, as well?
For first-time runs of a VM, you can inject a startup task via CustomData. This works in both Linux and Windows VMs. You'll just need to properly base-64-encode your file (whether it's text or binary) based on the REST API docs.
CustomData is dropped into a file in a specific location, and you can have code that looks for this file, taking some type of startup action as appropriate:
Windows: %SYSTEMDRIVE%\AzureData\CustomData.bin
Linux: /var/lib/waagent/CustomData
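For illustration, a minimal Go sketch of the "code that looks for this file" idea on a Windows VM (treating the custom data as a PowerShell script is an assumption about your use case):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Documented drop location on Windows; the agent writes the decoded
	// custom data here.
	path := os.Getenv("SYSTEMDRIVE") + `\AzureData\CustomData.bin`

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("no custom data found: %v", err)
	}

	// Assumption: the custom data is a PowerShell script; run it once.
	out, err := exec.Command("powershell", "-NoProfile", "-Command", string(data)).CombinedOutput()
	log.Printf("custom data script finished (err=%v): %s", err, out)
}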
Note: this will be added to the CLI as well (the pull request is already available - not sure if it's in the latest build).
EDIT: Yes, custom data is now part of the Azure CLI, as a parameter to azure vm create, so there's no need to mess with base-64 encoding on your own.
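Usage would look something like this (hypothetical invocation; check azure vm create --help for the exact flag name in your CLI version):

azure vm create my-vm <image> <username> <password> --custom-data ./startup.ps1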
No, currently there is no such feature provided out of the box.
However, given that you will be dealing with the VM anyway, you can create an image of your own: register a "Startup Task" in the RunOnce registry key and sysprep the OS with these settings.
This way you basically have a startup task that is executed when your machine boots for the first time and is not executed on subsequent VM restarts.
Getting parameters into code on a VM is not as easy as for a Web/Worker Role. For anything you want, you have to query the Azure Management API directly. The only properties you can get from code running on an Azure VM are basically the normal OS properties, i.e. host name and host IP address. You don't even know your cloud service name, nor your virtual IP address (the latter can be discovered via services such as whatismyip.net or similar). So my approach would be to put the parameters into Azure Table Storage, using the machine name as RowKey. That way I can store any VM-specific values keyed by VM name, and my "startup" task queries Table Storage with the host name as RowKey (and some common pattern for PartitionKey) to get all required settings.
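For what it's worth, here's what that lookup looks like with the current Azure Tables SDK for Go (table name, partition key, and connection string are assumptions; the original answer predates this SDK):

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
)

func main() {
	// Assumed connection string (in an env var) and table name.
	svc, err := aztables.NewServiceClientFromConnectionString(os.Getenv("TABLES_CONN"), nil)
	if err != nil {
		panic(err)
	}
	table := svc.NewClient("VMSettings")

	host, err := os.Hostname()
	if err != nil {
		panic(err)
	}

	// Common partition key, machine name as row key, as described above.
	resp, err := table.GetEntity(context.Background(), "vmconfig", host, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("settings for %s: %s\n", host, resp.Value) // raw JSON entity
}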
With IaaS Management Studio you can set a startup script that will run when your VM boots.
In summary, it activates remote PowerShell and runs your script remotely when it detects that the PowerShell port is open.
I am the developer of this tool, but I don't really get what you mean by "parametrized" - in other words, do you want your script to have access to the VM info?
I messed this up.
I installed ZoneMinder and now I cannot connect to my VPS via Remote Desktop; it must have blocked connections. I didn't know it would start blocking right away rather than letting me configure it first.
How can I solve this?
Note: My answer is under the assumption this is a Windows instance due to the use of 'Remote Desktop', even though ZoneMinder is primarily Linux-based.
Short answer is you probably can't and will likely be forced to terminate the instance.
But at the very least you can take a snapshot of the hard drive (EBS volume) attached to the machine, so you don't lose any data or configuration settings.
Without network connectivity your server can't be accessed at all, and unless you've installed other services on the machine that are still accessible (e.g. ssh, telnet) that could be used to reverse the firewall settings, you can't make any changes.
I would attempt the following, in this order (although they're long shots):
Restart your instance using the AWS Console (maybe the firewall won't be enabled by default on reboot and you'll be able to connect).
If this doesn't work (which it shouldn't), you're going to need to stop your crippled instance, detach the volume, spin up another ec2 instance running Windows, and attach the old volume to the new instance.
Here's the procedure with screenshots of the exact steps, except your specific steps to disable the new firewall will be different.
After this is done, you need to find instructions on manually uninstalling your new firewall:
Take a snapshot of the EBS volume attached to the instance to preserve your data (essentially the C: drive); this appears on the EC2 console page under the 'Volumes' menu item. This way you don't lose any data, at least.
Start another Windows EC2 instance, and attach the EBS volume from the old one to this one. RDP into the new instance and attempt to manually uninstall the firewall.
At a minimum at this point you should be able to recover your files and service settings very easily into the new instance, which is the approach I would expect you to have more success with.
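For reference, the snapshot / stop / detach / attach steps map roughly onto these AWS CLI calls (volume ID, instance IDs, and device name are placeholders):

aws ec2 create-snapshot --volume-id vol-0aaa111 --description "backup before firewall removal"
aws ec2 stop-instances --instance-ids i-0bbb222
aws ec2 detach-volume --volume-id vol-0aaa111
aws ec2 attach-volume --volume-id vol-0aaa111 --instance-id i-0ccc333 --device xvdf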