Hello, I have generated a VM on http://vmg.slynett.com/.
The VM works fine, but nothing is installed on it.
I don't know why vagrant provision fails.
vagrant provision
[default] Running provisioner: Vagrant::Provisioners::Shell...
stdin: is not a tty
Europe/Paris
Current default time zone: 'Europe/Paris'
Local time is now: Fri Jun 28 13:15:42 CEST 2013.
Universal Time is now: Fri Jun 28 11:15:42 UTC 2013.
[default] Running provisioner: Vagrant::Provisioners::Puppet...
[default] Running Puppet with /tmp/vagrant-puppet/manifests/base.pp...
stdin: is not a tty
Warning: Could not retrieve fact fqdn
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type concat at /tmp/vagrant-puppet/modules-0/apache/manifests/init.pp:130 on node dev
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type concat at /tmp/vagrant-puppet/modules-0/apache/manifests/init.pp:130 on node dev
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
cd /tmp/vagrant-puppet/manifests && puppet apply --modulepath '/tmp/vagrant-puppet/modules-0' /tmp/vagrant-puppet/manifests/base.pp --detailed-exitcodes || [ $? -eq 2 ]
I am on Mac OS X 10.8.3, VirtualBox 4.2.6, Vagrant 1.2.2.
It looks like you're referring to the concat module from ripienaar/concat; an error like that is usually thrown when a resource type isn't present (i.e. not installed or not in your module path).
I solved this problem by adding the concat and file_concat Puppet modules to the Puppet module path. I usually keep all needed Puppet modules as git submodules in puppet/modules of the Vagrant project, so to add the concat and file_concat modules, I do:
git submodule add https://github.com/puppetlabs/puppetlabs-concat.git puppet/modules/concat
git submodule add https://github.com/electrical/puppet-lib-file_concat.git puppet/modules/file_concat
Note that the concat module is the official one from Puppet Labs, and the file_concat module is the one used by concat (it should be installed automatically, but this didn't seem to work for me and may be your problem as well).
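If you don't use submodules, a minimal alternative sketch is to install both modules with the puppet module tool into the same layout (the Forge names below are assumptions based on the repositories above):
# Sketch: install both modules from the Forge instead of adding submodules
puppet module install puppetlabs-concat --target-dir puppet/modules
puppet module install electrical-file_concat --target-dir puppet/modules
Either way, make sure the Puppet provisioner in your Vagrantfile points module_path at puppet/modules so the modules end up in /tmp/vagrant-puppet/modules-0 on the guest.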
I tried to set up the Vagrant 'debian/buster64' box manually on Windows 10, but it failed. My steps:
downloaded the box file from https://vagrantcloud.com/debian/boxes/buster64/versions/10.4.0/providers/virtualbox.box
tried to add it:
$ vagrant box add --name 'debian/buster64' '4d7865da-6242-4853-bc6c-807a63c734e6'
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'debian/buster64' (v0) for provider:
box: Unpacking necessary files from: file://D:/Downloads/4d7865da-6242-4853-bc6c-807a63c734e6
box:
The box failed to unpackage properly. Please verify that the box
file you're trying to add is not corrupted and that enough disk space
is available and then try again.
The output from attempting to unpackage (if any):
x ./metadata.json: Cannot extract through symlink \\\\?\\C:\\Users\\mat\\.vagrant.d
x ./box.ovf: Cannot extract through symlink \\\\?\\C:\\Users\\mat\\.vagrant.d
x ./buster.vmdk: Cannot extract through symlink \\\\?\\C:\\Users\\mat\\.vagrant.d
x ./Vagrantfile: Cannot extract through symlink \\\\?\\C:\\Users\\mat\\.vagrant.d
bsdtar.EXE: Error exit delayed from previous errors.
os: windows 10
version: 1909
build: 18363.900
vagrant 2.2.9
Does anyone know what is wrong?
I assume you have created a Windows junction to redirect your user profile to another drive (as I did). Well, Vagrant does not like that at all.
Luckily, you can explicitly set the base config directory (.vagrant.d) for your Vagrant installation via the environment variable VAGRANT_HOME. Assuming you moved your profile to drive D, you should set the environment variable as follows:
VAGRANT_HOME=D:\Users\mat\.vagrant.d
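To make this persist across sessions, you can set it once from a command prompt with setx (a sketch; adjust the path to wherever your profile actually lives):
rem one-time setup; the path below is an assumption based on the junction target
setx VAGRANT_HOME "D:\Users\mat\.vagrant.d"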
I tried to install an OpenWhisk VM on a Windows 10 machine.
I cloned the git repo, cd'd to openwhisk/tools/vagrant and ran ./hello.
Many, many minutes later, I get the following error.
==> default: :index
==> default: :goPrepare
==> default: FAILED
==> default: FAILURE:
==> default: Build failed with an exception.
==> default:
==> default: * What went wrong:
==> default: Execution failed for task ':goPrepare'.
==> default: > Create symbolic link at /home/vagrant/openwhisk/bin/openwhisk-cli/.gogradle/project_gopath/src/github.com/apache/incubator-openwhisk-cli
failed
Though I can find the index task in build.gradle, I cannot find the goPrepare task under openwhisk-cli or its parent directories.
I presume this command was run on the newly created VM, as I get:
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
But running vagrant ssh does log me on to the VM.
Looking at the Vagrantfile, there is:
# Clone and Build CLI
echo "`date`: build-cli-start" >> /tmp/vagrant-times.txt
cd ${OPENWHISK_HOME}/bin
rm -rf openwhisk-cli
su vagrant -c 'git clone --depth=1 https://github.com/apache/incubator-openwhisk-cli.git openwhisk-cli'
cd openwhisk-cli
su vagrant -c './gradlew releaseBinaries'
echo "`date`: build-cli-end" >> /tmp/vagrant-times.txt
The log in /tmp shows build-cli-start but not build-cli-end.
The releaseBinaries task is in build.gradle, but there is nothing about symbolic links there.
Has anyone else come across this error? Does anyone know where the goPrepare task is?
Regards
I raised a defect at https://github.com/apache/incubator-openwhisk/issues/3649.
It was fixed in https://github.com/apache/incubator-openwhisk/pull/3651.
I updated the repo, ran hello again, and it works. Congrats to the OpenWhisk team for responding so quickly.
I'm trying to launch vagrant-lxc on Ubuntu 14.04. I'm using the latest Vagrant download (rather than the ancient version in the Debian repos).
vagrant plugin install vagrant-lxc
runs successfully, as does:
vagrant init fgrehm/precise64-lxc
I ran
sudo vagrant lxc sudoers
to handle the sudo issues mentioned here.
But when I run
vagrant up --provider=lxc
(both with and without sudo) the container doesn't load, spitting out this:
Bringing machine 'default' up with 'lxc' provider...
==> default: Checking if box 'fgrehm/precise64-lxc' is up to date...
==> default: Setting up mount entries for shared folders...
default: /vagrant => /home/ubuntu
==> default: Starting container...
There was an error executing ["sudo", "/usr/local/bin/vagrant-lxc-wrapper", "lxc-start", "-d", "--name", "ubuntu_default_1456156125505_47833"]
For more information on the failure, enable detailed logging by
setting the environment variable VAGRANT_LOG to DEBUG
Here's the log output I'm getting (from /var/log/lxc/ubuntu_default_1456156125505_47833.log):
lxc-start 1456158555.539 ERROR lxc_start - start.c:lxc_spawn:884 - failed initializing cgroup support
lxc-start 1456158555.568 ERROR lxc_start - start.c:__lxc_start:1121 - failed to spawn 'ubuntu_default_1456156125505_47833'
lxc-start 1456158555.568 ERROR lxc_start_ui - lxc_start.c:main:341 - The container failed to start.
lxc-start 1456158555.568 ERROR lxc_start_ui - lxc_start.c:main:343 - To get more details, run the container in foreground mode.
lxc-start 1456158555.568 ERROR lxc_start_ui - lxc_start.c:main:345 - Additional information can be obtained by setting the --logfile and --logpriority options.
Any ideas what I'm doing wrong?
Thanks!
Go into the /home/<USERNAME>/.vagrant.d/boxes/fgrehm/precise64-lxc/.../lxc/lxc-config file and comment out:
lxc.pivotdir = lxc_putold
then do vagrant up again; it should work!
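A quick sketch of the same edit from the shell (the version directory in the box path is elided above, so the glob below is an assumption; adjust it to your machine):
# comment out the lxc.pivotdir line in place; the glob stands in for the box version directory
sed -i 's/^lxc.pivotdir/# lxc.pivotdir/' ~/.vagrant.d/boxes/fgrehm/precise64-lxc/*/lxc/lxc-config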
I'm trying to set up firewalld through Ansible on my Fedora 23 server from my Fedora client (yes, I like Fedora :D).
However, each time I try to execute a playbook with some tasks using the firewalld module (for example: firewalld: service=https permanent=true state=enabled), the playbook execution fails with the following message:
failed: [w.x.y.z] => {"failed": true, "parsed": false}
failed=True msg='firewalld required for this module'
I have firewalld up and running on the remote server:
# firewall-cmd --version
0.3.14.2
On my computer:
$ ansible --version
ansible 1.9.4
configured module search path = None
Does anyone know where this could come from?
Thank you!
--
EDIT: At this line in the Ansible source code, the firewall library fails to import (which is what triggers the error saying firewalld is missing). However, this library exists for Python 3, not for the Python 2 that Ansible uses.
$ locate firewall
[...]
/usr/lib/python3.4/site-packages/firewall
[...]
I will continue searching, but if someone has an idea...
I found the explanation and solution:
Following my edit, I installed python-firewall, the Python 2 bindings for firewalld. But execution was still failing because cockpit was absent, so I had to install cockpit too.
Long story short, this is what I did on the remote machine:
# dnf install python-firewall cockpit -y
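If you manage the server with Ansible anyway, here is a minimal sketch of a task to install both packages before any firewalld tasks run (the task name and placement are illustrative):
# hypothetical pre-task; run it before any firewalld tasks
- name: Install firewalld Python bindings and cockpit
  dnf: name={{ item }} state=present
  with_items:
    - python-firewall
    - cockpit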
I'm trying to install a custom Hadoop implementation (>2.0) on Google Compute Engine using the command line option. The modified parameters of my bdutil_env.sh file are as follows:
GCE_IMAGE='ubuntu-14-04'
GCE_MACHINE_TYPE='n1-standard-1'
GCE_ZONE='us-central1-a'
DEFAULT_FS='hdfs'
HADOOP_TARBALL_URI='gs://<mybucket>/<my_hadoop_tar.gz>'
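For reference, a sketch of the deploy invocation with this env file (plain ./bdutil deploy should pick up bdutil_env.sh by default; passing it explicitly with -e is per bdutil's usage text, so adjust if your copy differs):
# deploy using the modified environment file
./bdutil -e bdutil_env.sh deploy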
The ./bdutil deploy fails with an exit code of 1. I find the following errors in the resulting debug.info file:
ssh: connect to host 130.211.161.181 port 22: Connection refused
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 104.197.63.39 port 22: Connection refused
ssh: connect to host 104.197.7.106 port 22: Connection refused
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
.....
.....
Connection to 104.197.7.106 closed.
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [123].
Connection to 104.197.63.39 closed.
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [123].
Connection to 130.211.161.181 closed.
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [123].
...
...
hadoop-w-1: ==> deploy-core-setup_deploy.stderr <==
....
....
hadoop-w-1: dpkg-query: package 'libsnappy1' is not installed and no information is available
hadoop-w-1: Use dpkg --info (= dpkg-deb --info) to examine archive files,
hadoop-w-1: and dpkg --contents (= dpkg-deb --contents) to list their contents.
hadoop-w-1: dpkg-preconfigure: unable to re-open stdin: No such file or directory
hadoop-w-1: dpkg-query: package 'libsnappy-dev' is not installed and no information is available
hadoop-w-1: Use dpkg --info (= dpkg-deb --info) to examine archive files,
hadoop-w-1: and dpkg --contents (= dpkg-deb --contents) to list their contents.
hadoop-w-1: dpkg-preconfigure: unable to re-open stdin: No such file or directory
hadoop-w-1: ./hadoop-env-setup.sh: line 612: Package:: command not found
....
....
hadoop-w-1: find: `/home/hadoop/hadoop-install/lib': No such file or directory
I don't understand why the initial ssh errors appear; I can see the VMs and log in to them properly from the UI, and my tar.gz is also copied to the proper places.
I also do not understand why libsnappy wasn't installed; is there anything in particular I need to do? The shell scripts seem to have commands to install it, but they're failing somehow.
I checked all the VMs; Hadoop is not up.
EDIT: To solve the ssh problem, I ran the following command:
gcutil --project= addfirewall --allowed=tcp:22 default-ssh
It made no difference.
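(For what it's worth, the equivalent rule with the newer gcloud CLI would be something like the following; the rule name is arbitrary:)
# hypothetical gcloud equivalent of the gcutil command above
gcloud compute firewall-rules create default-ssh --allow tcp:22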
In this case, the ssh and libsnappy errors are red herrings; when the VMs weren't immediately SSH-able, bdutil polled for a while until it should have printed out something like:
...Thu May 14 16:52:23 PDT 2015: Waiting on async 'wait_for_ssh' jobs to finish. Might take a while...
...
Thu May 14 16:52:33 PDT 2015: Instances all ssh-able
Likewise, the libsnappy error you saw was a red herring: it comes from a call to dpkg -s that checks whether a package is already installed and, if not, apt-get installs it: https://github.com/GoogleCloudPlatform/bdutil/blob/master/libexec/bdutil_helpers.sh#L163
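For clarity, a simplified sketch of that check-then-install pattern (the real helper in bdutil_helpers.sh differs in detail):
# simplified sketch, not the exact bdutil code
if ! dpkg -s libsnappy1 >/dev/null 2>&1; then
  apt-get install -y libsnappy1
fi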
We'll work on cleaning up these error messages since they can be misleading. In the meantime, the main issue here is that Ubuntu hasn't historically been one of the supported images for bdutil; we thoroughly validate CentOS and Debian images, but not Ubuntu images, since they were only added as GCE options in November 2014. Your deployment should work fine with your custom tarball for any debian-7 or centos-6 image. We've filed an issue on GitHub to track Ubuntu support for bdutil: https://github.com/GoogleCloudPlatform/bdutil/issues/29
EDIT: The issue has been resolved, and Ubuntu is now supported at head in the master repository; you can download the most recent commit here.
Looking at your error output, it seems you need the snappy libraries on your classpath. If you are using Java, you can get the library from https://github.com/xerial/snappy-java, or try https://code.google.com/p/snappy/.
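If it helps, a sketch of grabbing the jar directly from Maven Central (the version and the main class are placeholders; check the project's releases for a current version):
# fetch snappy-java and put it on the classpath; version and YourMainClass are assumptions
wget https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.1.6/snappy-java-1.1.1.6.jar
java -cp snappy-java-1.1.1.6.jar:. YourMainClass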