I have a running VM in a VMware environment and need to migrate it to an OpenStack environment (qcow2).
I exported the virtual machine, and the export produced 3 VMDK files. I have converted these 3 VMDK files to qcow2 format, but how do I handle 3 qcow2 images to bring the VM up in OpenStack?
Could someone guide me, please?
You could try creating the server with --block-device-mapping; see "Block Device Mapping in Nova".
Like this:
openstack server create --flavor FLAVOR.NAME --image qcow2.first.vmdk \
--block-device-mapping vdb=qcow2.second_1.vmdk:image \
--block-device-mapping vdc=qcow2.third_2.vmdk:image \
--nic port-id=the_port_with_same_ip_as_in_vmware \
qcow2_image_server_name
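Note that each converted qcow2 file has to be uploaded to Glance first so it can be referenced by name in the command above. A minimal sketch, assuming the placeholder names used here; repeat for the second and third disks:
openstack image create --disk-format qcow2 --container-format bare \
    --file first.qcow2 qcow2.first.vmdk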
In my situation we use Ceph for shared storage, so I used volume instead of image in the command line above, and it worked for me. I think it behaves similarly with local disk storage, as in your scenario.
One more thing: I have tested that Ceph shared storage can be transformed to local disk storage (and vice versa) by replacing the disk's file with one produced by qemu-img convert.
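For reference, the per-disk conversion from VMDK to qcow2 is typically done with qemu-img like this (file names are placeholders):
qemu-img convert -f vmdk -O qcow2 first.vmdk first.qcow2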
How can I export a GCE image to use it in a local VirtualBox?
I get the error:
error: No such device
with each of these files:
gce-image-export.vmdk
gce-image-export.qcow2
gce-image-export.vdi
I use the command:
qemu-img convert -O vdi gce-image-export.qcow2 gce-image-export.vdi
I get the same error with *.vmdk, *.qcow2, and *.vdi.
Do you have any input for me?
Thanks
kivitendo
You can export the image using the gcloud command. The documentation describes the full use of the command and its flags.
gcloud compute images export \
--destination-uri <destination-uri> \
--image <image-name> \
--export-format <format>
The --export-format flag exports the image to a format supported by QEMU using qemu-img. Valid formats include 'vmdk', 'vhdx', 'vpc', 'vdi', and 'qcow2'.
You can send it to a Cloud Storage bucket and download it from there later.
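For example, exporting to qcow2 into a bucket might look like this (the bucket and image names are hypothetical):
gcloud compute images export \
    --destination-uri gs://my-bucket/gce-image-export.qcow2 \
    --image my-image \
    --export-format qcow2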
Thanks, I can only export from Google Cloud Platform (GCE) to
*.vmdk
*.vhdx
*.vpc
*.qcow2
formats.
I had to switch the VirtualBox 6.1 machine to EFI support.
If I then use rEFInd 0.12 as a boot helper, I can start my GCE *.vmdk machine.
I get many error messages, and I cannot log in to my GCE *.vmdk machine to repair the errors and install grub-efi on it.
My installed Nextcloud server does start.
How can I log in to my machine?
root doesn't work.
I can't find any tutorial.
kivitendo
I am using Docker on Windows. Using Kitematic, I have created an Ubuntu container. This Ubuntu image has PostgreSQL installed on it.
Is there any way to access the Postgres configuration files in the container from the host (the Windows machine)?
Where exactly does the container store its file system on the host machine?
I assume it is part of an image file in VMDK format.
Please correct me if I'm wrong.
Is there any way to access the Postgres configuration files in the container from the host (the Windows machine)?
That is not how Docker would allow you to modify a file in a container.
For that, you should mount a host (Windows) folder when starting (docker run -v) your container.
See "Mount a host directory as a data volume"
docker run -d -P --name web -v /c/Users/<myAccount>/src/webapp:/opt/webapp training/webapp python app.py
Issue 247 mentions ~/Library/Application Support/Kitematic for App data, and ~/Kitematic "for easy access to volume data".
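If you just need to read or copy a config file out of a running container, docker cp also works. The path below assumes a stock Ubuntu PostgreSQL package layout and a container named my-ubuntu, both of which may differ in your setup:
docker cp my-ubuntu:/etc/postgresql/9.5/main/postgresql.conf .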
I have an issue running a Postgres container with a volume set up for the data directory on my Mac OS machine.
I tried to run it like this:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v $PG_LOCAL_DATA:/var/lib/postgresql/data \
-d postgres:9.5.1
Every time I get the following result in the logs:
* Starting PostgreSQL
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: could not create directory "/var/lib/postgresql/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
The versions of Docker, docker-machine, VirtualBox, and boot2docker are:
docker-machine version 0.6.0, build e27fb87
Docker version 1.10.2, build c3959b1
VirtualBox Version 5.0.16 r105871
boot2docker 1.10.3
I have seen many posts about this topic, but most of them are outdated. I tried a solution similar to the one for MySQL, but it did not help.
Maybe somebody can update me: does a solution exist to run a Postgres container with data volumes through docker-machine?
Thanks!
If you are running docker-machine on a Mac, at this time you cannot mount a directory that is not part of your local user space (/Users/<user>/) without extra configuration.
This is because on the Mac, Docker automatically creates a bind mount of the home (~) directory. Remember that since Docker is hosted in a VM that is not your local Mac OS, any volume mounts are relative to the host VM, not your machine. That means that by default Docker cannot see your Mac's directories, since it is hosted on a separate VM from your Mac OS.
Mac OS => Linux Virtual Machine => Docker
          ^------------------------^
            Docker can see the VM
^----------------X-----------------^
       Docker can't see here
If you open VirtualBox, you can create additional mounts (i.e. shared folders) from your local machine to the host VM and then access them that way.
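If you prefer the command line, the same thing can be done with VBoxManage. This is a sketch that assumes the docker-machine VM is named default and uses an example share name and host path:
VBoxManage sharedfolder add default --name pgdata --hostpath /path/on/mac/pgdata --automount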
See this issue for specifics on the topic: https://github.com/docker/machine/issues/1826
I believe the Docker team is adding these capabilities in upcoming releases (especially since a native Mac version is in the works).
You should use Docker named volumes instead of folders on your local file system.
Try creating a volume:
docker volume create my_vol
Then mount the data directory in your above command:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v my_vol:/var/lib/postgresql/data \
-d postgres:9.5.1
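If you want to see where the named volume's data actually lives on the host, you can inspect it (the Mountpoint path in the output is host-specific):
docker volume inspect my_vol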
Check out my blog post for a complete Postgres and Node (TypeScript) Docker setup for both dev and prod: https://yzia2000.github.io/blog/2021/01/15/docker-postgres-node.html
For more on docker volumes: https://docs.docker.com/storage/volumes/
I am trying to set up a Windows 7 instance on OpenStack. This instance requires at least 50 GB of free disk space to run an application. When I create my Windows 7 image and upload it, everything works fine except for one problem: the disk space is already 80% used when I start.
For example, a Windows 7 instance with a 100 GB hard drive has only 18.3 GB of free space.
What I tried:
I have been trying to create a Windows 7 image with a 100 GB hard drive. I created a qcow2 file and installed Windows 7 from its ISO together with the VirtIO drivers ISO using the commands below:
Create the empty qcow2 file:
qemu-img create -f qcow2 win_64bit_SP1_100GB.qcow2 100G
Install from the two ISOs:
sudo virt-install --connect qemu:///system \
    --name PS4Agent_win7_64bit_SP1_100GB \
    --ram 2048 --vcpus 2 \
    --network network=default,model=virtio \
    --disk path=win_64bit_SP1_100GB.qcow2,format=qcow2,device=disk,bus=virtio \
    --cdrom /home/khennessy/win7_win8_iso_creation/SW_DVD5_Win_Pro_7w_SP1_64BIT_English_-2_MLF_X17-59279.ISO \
    --disk path=/home/khennessy/win7_win8_iso_creation/virtio-win-0.1-100.iso,device=cdrom \
    --vnc --os-type windows --os-variant win2k8 --force
I then uploaded the image to OpenStack with a 'minimum disk space' of 90 GB, which makes the minimum flavor xl. (I am currently trying a lower value; the images are so large that it takes a long time to test anything.)
I then create an instance from this image and log into it using the 'console' view. It all works fine, but when I go into 'My Computer' it tells me I have only 18 GB of free space. I have tried 'resizing' the instances, but that just seems to put them into an error state.
Can anyone help me? Thanks very much.
Before you upload your Windows 7 image to Glance, you need to download CloudbaseInitSetup_x64.msi or CloudbaseInitSetup_x86.msi and install it on your Windows 7 image first. Also, don't forget to add "plugins=cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin" to the Cloudbase-Init configuration file.
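For reference, the plugin line belongs in the [DEFAULT] section of cloudbase-init.conf; the file typically lives under C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf, though the path may differ on your image:
[DEFAULT]
plugins=cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin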
You can visit http://cloud-ninja.org/2014/05/14/running-windows-7-guests-on-openstack-icehouse/ for more information.
I am creating an EC2 instance through knife. I gave the following command to create it:
knife ec2 server create -r "role[webserver]" -I ami-b84e04ea --flavor t1.micro --region ap-southeast-1 -G default -x ubuntu -N server01 -S ec2keypair
but I am getting the error Fog::Compute::AWS::Error: InvalidBlockDeviceMapping => iops must be specified with the volumeType of device '/dev/sda1'. I am unable to solve this issue; any help will be appreciated.
It's possible that the AMI you are trying to launch requires an EBS volume. With EBS you can set the IOPS value, which seems like it is not set and is giving you the issue.
Having a look at the documentation, it seems you might need to add
--ebs-size SIZE
as an option, e.g. --ebs-size 10.
I got that from the knife documentation
http://docs.opscode.com/plugin_knife_ec2.html
Also, taking a look at the source code for the knife-ec2 plugin, it looks like you can add
--ebs-optimized
to enable optimized EBS I/O.
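Putting that together, the original command with the EBS size specified would look something like this (a sketch; the size value is only an example):
knife ec2 server create -r "role[webserver]" -I ami-b84e04ea --flavor t1.micro --region ap-southeast-1 -G default -x ubuntu -N server01 -S ec2keypair --ebs-size 10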