I was looking to find the details of a virtual machine using govc.
I was able to fetch the details of the instance using govc vm.info, but the result had details of CPU, memory, IP address and so on, not the disk storage that I can see on the vSphere console or by logging into the system.
Is there any way, using govc, to find the disks attached to the instance?
I asked this on govc's GitHub repo.
To get <path/to/vm>, use:
govc vm.info vm-name-here
To get all the details of the VM, you can use:
govc ls -l -json <path/to/vm>
You can also get information about devices using
$ govc device.info -vm VirtualMachineName
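If you only care about the disks, device.info also accepts a device-name pattern, so you can (as far as I can tell) limit the output to the virtual disks; the VM name below is a placeholder:
$ govc device.info -vm VirtualMachineName 'disk-*'
Each matching disk entry should include its label, capacity and backing file, which is the storage information missing from vm.info.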
I have added the Windows VM's IP address in /etc/hosts, and similarly placed the hostname and IP address in C:\Windows\System32\drivers\etc\hosts, but on pinging, the packets are not received.
Hi, I can't comment yet, which is why I'm writing here. Do you use a cloud provider or another virtualization tool (VirtualBox, for example)?
If you don't have curl, install it:
sudo yum install curl
Try a curl command and then look at the log file:
curl google.com
Then look at the log file (you can read about the log files here).
After that, please check for mistakes.
I needed to disable Windows Defender on the Windows VM to enable communication.
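For what it's worth, instead of turning Defender off completely, it may be enough to allow inbound ICMP echo through the Windows firewall; this is an assumption about what was blocking the pings, not something from the original thread. From an elevated prompt on the Windows VM:
netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
After adding the rule, retry the ping from the other machine.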
Using Terraform, I'm trying to create a vsphere_virtual_machine resource. As part of it, I'm trying to find out how to mount the virtual disks to specific folders on the created virtual machine. For example (Terraform):
disk {
  label = "disk0"
  size  = "100"
}

disk {
  label = "disk1"
  size  = "50"
}
How do I mount disk0 at the volume path D:\mysql\conf and disk1 at the volume path D:\mysql\databases on a Windows VM created using the Terraform vsphere_virtual_machine resource? Could someone please share your insights here? Thanks in advance!
There's nothing in the vsphere_virtual_machine resource that will handle guest-internal operations like that against the virtual machine, and I'm not aware of any other provider that can do it either.
A couple of workarounds:
Check out the remote-exec provisioner; this will let you run some PowerShell or other CLI commands to perform the task you need (see the sketch after this list).
If you're doing this on a regular basis, check out Packer. It can be used to build out a virtual machine, OS and all. You could establish the disk configuration there, then use Terraform to deploy it.
Lastly, look into configuration management utilities: Ansible, PowerShell DSC, Puppet, Chef, etc. These tools will let you make modifications to the VM after it's been deployed.
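For the remote-exec route, here is a rough, untested sketch of what the provisioner could look like inside the vsphere_virtual_machine resource. The WinRM connection settings, credential variables, disk numbers and folder paths are all assumptions you would need to adapt, and it assumes a D: volume already exists to hold the mount-point folders; the idea is just to bring the extra disks online, format them, and mount them into empty NTFS folders with Add-PartitionAccessPath instead of drive letters:

  # Sketch only: connection details, disk numbers and paths are assumptions.
  provisioner "remote-exec" {
    connection {
      type     = "winrm"
      host     = self.default_ip_address
      user     = var.admin_user      # hypothetical variable
      password = var.admin_password  # hypothetical variable
    }

    inline = [
      # Create the mount-point folders on the existing D: volume.
      "powershell -Command \"New-Item -ItemType Directory -Force -Path 'D:\\mysql\\conf','D:\\mysql\\databases'\"",
      # Bring the new raw disks online and initialize them.
      "powershell -Command \"Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT\"",
      # Partition/format the first extra disk and mount it at D:\mysql\conf.
      "powershell -Command \"$p = New-Partition -DiskNumber 1 -UseMaximumSize; $p | Format-Volume -FileSystem NTFS; $p | Add-PartitionAccessPath -AccessPath 'D:\\mysql\\conf'\"",
      # Partition/format the second extra disk and mount it at D:\mysql\databases.
      "powershell -Command \"$p = New-Partition -DiskNumber 2 -UseMaximumSize; $p | Format-Volume -FileSystem NTFS; $p | Add-PartitionAccessPath -AccessPath 'D:\\mysql\\databases'\"",
    ]
  }

Note that mounting a volume into a folder requires the folder to be empty and to live on an NTFS volume.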
How do I spin up a local version of Spinnaker? This has been answered and addressed in detail here:
https://github.com/spinnaker/spinnaker/issues/1729
OK, so I got it to work, but not without your valuable help, #lwander!
So I'll leave the steps here for posterity.
Each line is a separate command on the command line. I installed this on a virtual machine with a freshly installed Ubuntu 14.04 copy with nothing else than SSH. Then SSH in as root; you will need to configure sshd on the VM to allow root access:
https://askubuntu.com/questions/469143/how-to-enable-ssh-root-access-on-ubuntu-14-04
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/stable/InstallHalyard.sh
I created a user account that is a member of the adm and sudo groups (is this necessary?).
Then install Halyard:
bash InstallHalyard.sh
Verify that hal is installed and check its version:
hal -v
Tell hal that the deployment type will be a local install (this will publish all the services on localhost, which makes them tricky to access later, but I have a workaround, so keep reading):
hal config deploy edit --type localdebian
Hal will complain that a version has not been selected; just tell it which version:
hal config version edit --version 1.0.0
Then tell hal which storage you are going to use; in my case, since it is local, I want to use Redis:
hal config storage edit --type redis
So now we need to add a cloud provider to hal. We use AWS, so we add it like this:
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
I created a user on AWS and added access keys to the user inside IAM, on the user's Security Credentials tab. Obviously my access key ID is not XXXXXXXXXXXXXXXXXXXX; I edited it. You do not need to enter the secret-access-key because the command will prompt for it.
Then you need to create an account name that only concerns your Spinnaker installation; however, it will get tied to your AWS account ID. In MY local Spinnaker installation I chose the name spinnakermaster; you should choose your own! And my AWS account ID is not YYYYYYYYYYYY; I've edited that too.
All the configurations and steps that you'll need to do inside AWS for this to work are really well documented here:
https://www.spinnaker.io/setup/providers/aws/
And to tell hal all of the above, here's the command:
hal config provider aws account add spinnakermaster --account-id YYYYYYYYYYYY --assume-role role/spinnakerManaged
And after all that, if everything went according to plan, we can ask hal to deploy our brand new Spinnaker installation:
hal deploy apply
It will begin a long installation, downloading and configuring all the services.
Once it has finished you may do whatever you like, but in my case I created a monitoring script like the one described here:
https://github.com/spinnaker/spinnaker/issues/854
It can be launched in a loop like this (it will keep running until you Ctrl+C it!):
watch -n1 spinnaker-status.sh
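In case it helps, here is a minimal sketch of what such a status script could look like; it is not the script from issue #854, it simply checks whether the usual Spinnaker service ports answer on localhost:
#!/bin/bash
# spinnaker-status.sh -- minimal sketch, not the script from issue #854.
# Reports whether the usual Spinnaker service ports answer on localhost.
for port in 9000 8084 8083 7002 8087 8080 8088 8089; do
    if nc -z 127.0.0.1 "$port" 2>/dev/null; then
        echo "port $port: up"
    else
        echo "port $port: down"
    fi
done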
Then, to be able to access the Spinnaker copy on your local VM, you can either set up a reverse proxy with the proxy server of your choice to forward all the requests to localhost, or you can simply ssh the SH** out of it by redirecting the ports:
ssh root@ZZZ.ZZZ.ZZZ.ZZZ -L 9000:127.0.0.1:9000 -L 8084:127.0.0.1:8084 -L 8083:127.0.0.1:8083 -L 7002:127.0.0.1:7002 -L 8087:127.0.0.1:8087 -L 8080:127.0.0.1:8080 -L 8088:127.0.0.1:8088 -L 8089:127.0.0.1:8089
Where, obviously, ZZZ.ZZZ.ZZZ.ZZZ is not an actual IP address; use your VM's address.
And finally, to begin having fun with this cutie, you have to go to your browser of choice and type into the address bar:
http://127.0.0.1:9000
Hope this helps and saves everybody some time!
Cheers.
I would like to create an EBS-backed Windows image in Eucalyptus 3.4.
https://www.eucalyptus.com/docs/eucalyptus/3.4/index.html#image-guide/img_task_create_bfebs_image.html
I have reached the last step, registering the snapshot:
euca-register --name <image-name> --snapshot <snapshot-id> --root-device-name <device>
What should the value of <device> be?
All the examples I have found via searching have been Unix examples, e.g.
/dev/sda
--root-device-name should be just /dev/sda, but remember to give the following option as well:
--kernel windows
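Putting it together, the registration command would look something like this (the image name and snapshot ID below are placeholders):
euca-register --name windows-image --snapshot snap-XXXXXXXX --root-device-name /dev/sda --kernel windows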
I'm using a free 'micro' instance from Amazon to fire up a quick demo of MarkLogic. The RPM installs fine with no errors.
Some information that may be helpful:
[user@aws ~]$ rpm -qa | grep release
redhat-release-server-6Server-6.4.0.4.el6.x86_64
[user@aws ~]$ rpm -qa | grep MarkLogic
MarkLogic-7.0-1.x86_64
Starting the MarkLogic server for the very first time shows this:
[user@aws ~]$ sudo /etc/init.d/MarkLogic start
Initialize Configuration
Region: us-west-2 ML_NAME:
Set configuration: MARKLOGIC_ZONE="us-west-2c"
Instance is not managed
Waiting for device mounted to come online : /dev/xvdj
And here it sits, with no other messages anywhere, including /var/opt/MarkLogic/Logs, which doesn't exist yet.
Even though micro instances aren't officially supported, you can usually start one up. But reports are that you will quickly wish you hadn't.
That said, see the precise instructions at http://developer.marklogic.com/products/aws and, in particular, the part about mounting a disk at /dev/sdf; the server init script will wait forever to come up if you don't do that.
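For illustration, attaching an extra EBS data volume might look like the following; the volume size, availability zone, and volume/instance IDs are placeholders, and inside the instance the device typically shows up under a different name (e.g. /dev/xvdf or, as in your log, /dev/xvdj):
aws ec2 create-volume --size 10 --availability-zone us-west-2c
aws ec2 attach-volume --volume-id vol-XXXXXXXX --instance-id i-XXXXXXXX --device /dev/sdf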
If the above didn't help, I've dug into the RPM enough to discover some issues on AWS.
For one, they use some sysconfig scripts to detect if they're on AWS. If you're running MarkLogic 6, these sysconfig scripts have a hardcoded drive and will wait indefinitely since it probably won't exist. Yours is 7, which still has some issues on AWS. To bypass this, you can create a /usr/bin/is-ec2.sh that contains:
#!/bin/bash
exit 1
That will prevent it from doing any EC2 detection. More details can be found in my write-up in this GitHub post.
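One small addition from me (an assumption on my part, since the init script appears to execute the stub directly): make sure the stub is executable:
chmod +x /usr/bin/is-ec2.sh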