I have been testing Docker with no issues, but suddenly my connection(?) seems to have dropped.
Has anyone experienced this?
What is the fix?
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
exit status 255
docker is configured to use the default machine with IP
For help getting started, check out the docs at https://docs.docker.com
Unexpected error getting machine url: exit status 255
%USER%s-MacBook-Pro:~ %USER%$ docker run hello-world
Post http:///var/run/docker.sock/v1.20/containers/create: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
%USER%s-MacBook-Pro:~ %USER%$
I have been launching Docker from the Docker Quickstart Terminal.
On macOS, I use this command to resolve the issue:
eval "$(docker-machine env default)"
Killing some processes that had things like "docker" in the name and then restarting the terminal solved the issue for me. I'm unsure how I created the issue in the first place, so it's tough to duplicate.
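A minimal sketch of that cleanup, assuming the stray processes show "docker" somewhere in their command lines:
pgrep -fl docker          # list candidate processes first
pkill -f docker-machine   # then kill the stragglers (or kill <pid> individually)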
Assuming you are using a 64-bit Windows OS: for me it was because virtualization was not enabled in my BIOS. I enabled it and the issue was resolved.
I could see the "this kernel requires an x86-64 CPU, but only detects an i686 CPU, unable to boot" error on the VirtualBox screen.
Restart the system and go into the BIOS: Security > Virtualization > Enable > Save and Exit.
In VirtualBox, under Settings > General > Basic, check that a 64-bit Linux OS type is selected.
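The same check and fix can also be done from the command line with VBoxManage ("default" is the usual docker-machine VM name; adjust to yours, and power the VM off before changing it):
VBoxManage list vms                                 # find the VM's exact name
VBoxManage showvminfo default | grep -i "guest os"  # confirm a 64-bit guest OS type
VBoxManage modifyvm default --ostype Linux26_64     # CLI equivalent of Settings > General > Basic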
Good luck
I am trying to test Ephemeral Containers on v1.23.5 & containerd://1.4.9 in minikube v1.23.0.
After entering kubectl debug -it ephemeral-demo --image=busybox:1.35 --target=ephemeral-demo, I see two issues:
the prompt is displayed with improper formatting; each line has different indentation, tab spacing, etc.
when I type ps aux or any other command, I can't see what I'm typing, but it becomes visible after I press Enter.
The same issue occurs in a Vagrant 2-node cluster (Ubuntu 21.10, Kubernetes v1.23.5).
Has anyone experienced this issue? Any suggestions/workarounds appreciated.
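One workaround worth trying (an assumption on my part, not a confirmed fix): have kubectl debug run an explicit shell, then set a terminal type busybox understands, since garbled prompts in minimal shells often trace back to TERM negotiation:
kubectl debug -it ephemeral-demo --image=busybox:1.35 --target=ephemeral-demo -- sh
# inside the debug shell:
export TERM=xterm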
I am trying to install a RHEL 7.9 KVM guest on an Ubuntu 18.04 Azure VM. The anaconda installer fails due to some error, but the virt-viewer screen closes so fast that I am unable to read what the exact error is.
I know all anaconda logs get stored in the /tmp/anaconda.log file on the KVM guest disk image, but I am unable to figure out a way to check the contents of that file. I tried mounting the KVM guest disk image using "mount -o loop .img ", but it fails with "NTFS signature is missing"; that's probably because the installation fails before the KVM guest's disk is partitioned properly.
I am looking for ways to check the contents of that file. Is there any way to redirect the anaconda logs of the guest machine to the Ubuntu host machine? Pasting the virt-install script and kickstart file used below. The RHEL 7.9 installation media was downloaded from https://developers.redhat.com/products/rhel/download.
virt-install.sh
virt-install --location /datadrive/iso_images/rhel7.9-dvd.iso \
--disk /datadrive/rhel79-oracle-$1.img,size=40,format=raw \
--os-variant rhel7.0 \
--initrd-inject ./ks-rhel79-oracle.cfg \
--extra-args="ks=file:/ks-rhel79-oracle.cfg" \
--vcpus 2 \
--memory 2048 \
--noreboot \
--name rhel79-oracle-$1
ks-rhel79-oracle.cfg
#version=DEVEL
# System authorization information
auth --passalgo=sha512 --useshadow
text
firstboot --disable
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Installation logging level
logging --level=debug
# Network information
network --bootproto=dhcp --device=link --activate
network --bootproto=dhcp --hostname=Azure-image
# Shutdown after installation
shutdown
# Root password
rootpw --plaintext password
# SELinux configuration
selinux --disabled
# System services
services --enabled="sshd,chronyd"
# Do not configure the X Window System
skipx
# System timezone
timezone US/Eastern
# System bootloader configuration
bootloader --append="rootdelay=60 mpath crashkernel=2048M intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce modprobe.blacklist=kvm_intel,kvm,iTCO_wdt,iTCO_vendor_support,sb_edac,edac_core" --location=mbr
# Partition scheme
zerombr
clearpart --all
# Disk partitioning information
part swap --fstype="swap" --size=32768
part / --fstype="xfs" --grow --size=6144
%post --logfile=/root/anaconda-composer.log --erroronfail
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
%end
%packages
@base
%end
%addon com_redhat_kdump --enable --reserve-mb=2048
%end
The solution below worked for me. I used the commands below to start rsyslog on port 6080 on the host machine (a RHEL 7.8 Azure VM) and modified the virt-install script as shown to direct anaconda logging to the host machine. (Installing the anaconda package on the host provides the analog helper used here; as I understand it, analog generates an rsyslog configuration for receiving the installer's logs remotely.)
yum install -y anaconda
mkdir -p /home/shaaga/remote_inst
eval `analog -p 6080 -o rsyslogd.conf -s /home/shaaga/remote_inst`
virt-install --location /datadrive/iso_images/rhel7.9-dvd.iso \
--disk /datadrive/rhel79-oracle-$1.img,size=40,format=raw \
--os-variant rhel7.0 \
--initrd-inject ./ks-rhel79-oracle.cfg \
--extra-args="ks=file:/ks-rhel79-oracle.cfg" \
--vcpus 2 \
--memory 2048 \
--noreboot \
--name rhel79-oracle-$1 \
--channel tcp,host=127.0.0.1:6080,mode=connect,target_type=virtio,name=org.fedoraproject.anaconda.log.0
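For anyone piecing together why this works: the --channel option gives the guest a virtio-serial port named org.fedoraproject.anaconda.log.0, which anaconda recognizes and logs to, and the host side of that channel connects out to the rsyslog listener on 127.0.0.1:6080. A quick sanity check before launching the install (a sketch; the exact log file layout under the -s directory is an assumption, so check the generated rsyslogd.conf):
ss -tln | grep 6080                            # confirm rsyslogd is listening
tail -f /home/shaaga/remote_inst/*/anaconda.log   # watch logs arrive (path layout is a guess)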
So I've got Yocto on a local build server, coz who wants that massive build chewing up their workspace amirite?
Host and Server are Arch Linux 4.19.44-1-lts
Anyway, I am just running the example from the quick build page found here, and when I try
$ runqemu qemux86
from ssh (with X11 forwarding enabled), all I get is this lousy output:
runqemu - INFO - Running MACHINE=qemux86 bitbake -e...
runqemu - INFO - Continuing with the following parameters:
KERNEL: [/home/bob/poky/build/tmp/deploy/images/qemux86/bzImage--5.0.3+git0+f0b575cda6_3df4aae607-r0-qemux86-20190520164453.bin]
MACHINE: [qemux86]
FSTYPE: [ext4]
ROOTFS: [/home/bob/poky/build/tmp/deploy/images/qemux86/core-image-sato-qemux86-20190520164453.rootfs.ext4]
CONFFILE: [/home/bob/poky/build/tmp/deploy/images/qemux86/core-image-sato-qemux86-20190520164453.qemuboot.conf]
runqemu - INFO - Setting up tap interface under sudo
[sudo] password for bob:
runqemu - INFO - Network configuration: 192.168.7.2::192.168.7.1:255.255.255.0
runqemu - INFO - Running /home/bob/poky/build/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-i386 -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -drive file=/home/bob/poky/build/tmp/deploy/images/qemux86/core-image-sato-qemux86-20190520164453.rootfs.ext4,if=virtio,format=raw -vga vmware -show-cursor -usb -device usb-tablet -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -cpu pentium2 -m 256 -serial mon:vc -serial null -kernel /home/bob/poky/build/tmp/deploy/images/qemux86/bzImage--5.0.3+git0+f0b575cda6_3df4aae607-r0-qemux86-20190520164453.bin -append 'root=/dev/vda rw highres=off mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 vga=0 uvesafb.mode_option=640x480-32 oprofile.timer=1 uvesafb.task_timeout=-1 '
runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not available) - exiting
runqemu - INFO - Cleaning up
Set 'tap0' nonpersistent
It's this part that is clearly a concern:
runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not available) - exiting
Which is weird, because X is actually running on the machine and I can run qemu just fine. Running
$ qemu-system-x86_64
opens up a QEMU VM on my local machine.
Is there something I'm missing here? Does SDL need to be re-compiled with X support or something? What about these options: "-vga vmware", "uvesafb.mode_option=640x480-32"? Maybe it's an ssh thing? Or a build config option for SDL that I haven't yet come across...
To clarify: it works fine from the console of the server, and from the tty using the 'nographic' option. Just not over ssh with a graphical option; I'm wondering if that's even possible.
Thanks.
I had the same problem with a build for a minimal image on an Ubuntu 18.04 server.
Try: runqemu qemux86 nographic
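If you do want graphics from an ssh session, recent runqemu versions also accept a publicvnc option, which boots the VM with a VNC server instead of a local SDL window (a sketch; check that your Poky release supports the option):
$ runqemu qemux86 publicvnc
(then point a VNC viewer at the build server, display :0 / port 5900)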
I'm using Jenkins to automate the deployment of a virtual appliance. The first step is to build a standard CentOS 7 minimal VM in KVM. I wrote a short bash script to do this task, which works when run locally on the KVM machine:
#!/bin/bash
#Variables
diskpath="/var/lib/libvirt/images/"
buildname=$(date +"%m-%d-%y-%H-%M")
vmextension=".dsk"
#Change to images directory
cd /var/lib/libvirt/images/
#Deploy VM with with kickstart file
sudo virt-install \
--name=$buildname \
--nographics \
--hvm \
--virt-type=kvm \
--file=$diskpath$buildname$vmextension \
--file-size=20 \
--nonsparse \
--vcpu=2 \
--ram=2048 \
--network bridge=br0 \
--os-type=linux \
--os-variant=generic \
--location=http://0.0.0.0/iso/ \
--initrd-inject /var/lib/libvirt/images/autobuild-ks.cfg \
--extra-args="ks=http://0.0.0.0/ks/autobuild-ks.cfg console=ttyS0"
(I have changed the IP addresses for security purposes.)
The ISO and the kickstart file are stored on another server, and both can be accessed via http for the purposes of making this script work. To be clear, the script does work.
The problem I have is: when I put this script into Jenkins as a build step, the script works; however, it hangs at the end, after the OS has been installed and the KVM guest begins the shutdown process.
Here is the kickstart file:
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use Network installation media
url --url=http://0.0.0.0/iso
# Use graphical install
#graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=gb --xlayouts='gb'
# System language
lang en_GB.UTF-8
# Network information
network --bootproto=dhcp --device=ens160 --ipv6=auto --activate
network --hostname=hostname.domain.com
# Root password
rootpw --iscrypted taken_encryption_output_out_for_the_purposes_of_security
#Shutdown after installation
shutdown
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/London --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
autopart --type=lvm
# Partition clearing information
clearpart --none --initlabel
%packages
@^minimal
@core
chrony
kexec-tools
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty
%end
I suspect it's something to do with the shutdown option in the kickstart file, but I'm unsure. When I ssh to the KVM server, I can see my newly created VM, so the script does work, but Jenkins hangs.
[root@sut-kvm01 ~]# virsh list --all
Id Name State
----------------------------------------------------
- 09-22-17-16-21 shut off
So far I have tried shutdown and reboot in the kickstart file (and halt, which is obviously the default), and they have not worked for me either.
Any ideas how I can get the build to complete successfully? If it hangs, I can't move on to what will be build step number 2.
Help please :-)
OK, so I managed to figure out what the issue was. It was nothing to do with Jenkins or the script, but rather with the kickstart file. In a nutshell, I was editing the wrong kickstart file. The file I was editing was the default kickstart file in the /root/ directory, but that is not the same file that was being injected into memory by the script, so the changes I made were having no effect.
Note to self - just because the script works does not mean the answer to the problem isn't written in the script.
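A quick way to catch this kind of mix-up is to compare checksums of the candidate files; a minimal sketch (the /root/ filename is my guess for the stale copy mentioned above):
curl -s http://0.0.0.0/ks/autobuild-ks.cfg | sha256sum   # the kickstart the installer actually fetches
sha256sum /var/lib/libvirt/images/autobuild-ks.cfg       # the copy injected into the initrd
sha256sum /root/autobuild-ks.cfg                         # the file being edited (hypothetical path)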
I'm new to Docker. I can't restart a virtual machine in Docker, and I don't know what 'exit status 255' means. Running docker-machine restart vdocker shows:
$docker-machine restart vdocker
Restarting "vdocker"...
Starting "vdocker"...
<vdocker> Check network to re-create if needed...
<vdocker> Waiting for an IP...
Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
Running docker-machine ls shows:
$docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.11.2
vdocker - virtualbox Running Unknown Something went wrong running an SSH command!
command : ip addr show
err : exit status 255
output :
But the default machine is working well.
Please let me know if you need any more info or clarity on the problem.
If you don't have any images in it, try deleting vdocker and then re-creating it (with proxy settings if you are behind a proxy).
Then make sure to assign it a fixed IP with dmvbf:
dmvbf vdocker 99 101
docker-machine regenerate-certs -f vdocker
After that, your VM should start every time.
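(dmvbf appears to be the answerer's own wrapper for pinning the VM's IP via VBoxManage; it is not a standard docker-machine command. With stock tooling only, the delete/re-create sequence might look like this sketch, assuming the VirtualBox driver and nothing in vdocker worth keeping:)
docker-machine rm -f vdocker                 # discard the broken machine
docker-machine create -d virtualbox vdocker  # re-create it with the VirtualBox driver
docker-machine regenerate-certs -f vdocker   # refresh the TLS certs
eval "$(docker-machine env vdocker)"         # point the docker CLI at it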