Packer shell provisioner fails on custom AMI - amazon-ec2

I've been running Packer builds on a standard AWS Linux AMI from Azure DevOps. I've just updated the packer.json to use an in-house hardened image instead, and it has started failing on the shell provisioner with this error:
==> amazon-ebs: Provisioning with shell script: /home/vsts/work/1/s/packer/elasticsearch/scripts/setup.sh
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' errored: Error uploading script: Process exited with status 1
==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Error uploading script: Process exited with status 1
==> Builds finished but no artifacts were created.
##[error]Error: /opt/hostedtoolcache/packer/1.4.3/x64/packer failed with return code: 1
Is this a misleading error message, with the real failure elsewhere? I don't see how the upload could start failing when it worked before.
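One frequent cause on hardened images (an assumption here, since the template isn't shown): Packer's shell provisioner stages its script under /tmp by default, and hardened builds often mount /tmp noexec or ship a different default SSH user, either of which can surface as this upload error. The two settings worth checking are the builder's ssh_username and the provisioner's remote_folder (both real Packer options); the ec2-user values below are placeholders for whatever your image actually uses:
"builders": [
    {
        "type": "amazon-ebs",
        "ssh_username": "ec2-user"
    }
],
"provisioners": [
    {
        "type": "shell",
        "script": "packer/elasticsearch/scripts/setup.sh",
        "remote_folder": "/home/ec2-user"
    }
]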

Related

Laravel Vapor {"message":"The security token included in the request is invalid."}

I successfully deployed an app on AWS with Laravel Vapor.
==> Ensuring Storage Exists
==> Ensuring Cache Table Is Configured
==> Updating Function Configurations
==> Updating Function Code
==> Running Deployment Hooks
==> Ensuring Rest API Is Configured
==> Ensuring Custom Domains Exist
==> Ensuring DNS Records Exist
==> Ensuring Mail Is Configured
==> Ensuring Scheduled Tasks Are Configured
==> Ensuring Queues Are Configured
==> Updating Function Aliases To New Version
Project deployed successfully. (1m26s)
=============== ===================================================
Deployment ID Environment URL (Copied To Clipboard)
=============== ===================================================
38261 https://clean-kyiv-40wzwvz1sutk.vapor-farm-a1.com
=============== ===================================================
However, when I go to the given link https://clean-kyiv-40wzwvz1sutk.vapor-farm-a1.com I get a 502 error.
After requesting the link I can see the following message on Vapor, in the Log tab of the staging environment.
What else should I check to resolve this issue?
Try increasing your timeout. If for some reason your code runs for longer than the default 10 seconds, you will get this error from AWS Lambda. In vapor.yml:
environments:
    production:
        memory: 1024
        cli-memory: 512
        timeout: 120
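Since your logs come from the staging environment, make sure the timeout is set under the matching environment key; a sketch with the same assumed value:
environments:
    staging:
        timeout: 120
Then redeploy with vapor deploy staging so the new function configuration is applied.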

OpenWhisk on-premise install error

I tried to install an OpenWhisk VM on a Windows 10 machine.
I cloned the git repo, cd'd to openwhisk/tools/vagrant, and ran ./hello.
Many, many minutes later, I get the following error.
==> default: :index
==> default: :goPrepare
==> default: FAILED
==> default: FAILURE:
==> default: Build failed with an exception.
==> default:
==> default: * What went wrong:
==> default: Execution failed for task ':goPrepare'.
==> default: > Create symbolic link at /home/vagrant/openwhisk/bin/openwhisk-cli/.gogradle/project_gopath/src/github.com/apache/incubator-openwhisk-cli failed
Though I can find the index task in build.gradle, I cannot find the goPrepare task under openwhisk-cli or the parent directories.
I presume this command was run on the newly created VM, as I get:
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
But running vagrant ssh does log me on to the VM.
Looking at the Vagrantfile, there is:
# Clone and Build CLI
echo "`date`: build-cli-start" >> /tmp/vagrant-times.txt
cd ${OPENWHISK_HOME}/bin
rm -rf openwhisk-cli
su vagrant -c 'git clone --depth=1 https://github.com/apache/incubator-openwhisk-cli.git openwhisk-cli'
cd openwhisk-cli
su vagrant -c './gradlew releaseBinaries'
echo "`date`: build-cli-end" >> /tmp/vagrant-times.txt
The log in /tmp shows build-cli-start but not build-cli-end.
The releaseBinaries task is in build.gradle, but there is nothing about symbolic links there.
Has anyone else come across this error? Does anyone know where the goPrepare task is?
Regards
I raised a defect at https://github.com/apache/incubator-openwhisk/issues/3649.
It was fixed in https://github.com/apache/incubator-openwhisk/pull/3651.
I updated my checkout, ran hello again, and it works. Congrats to the OpenWhisk team for responding so quickly.
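For anyone retrying after the fix, the update-and-retry sequence amounts to something like this (a sketch of my reading of "updated git, ran hello again"; the vagrant destroy step is an assumption, only needed if a half-built VM is still around):
cd openwhisk
git pull                  # pick up the fixed build scripts
cd tools/vagrant
vagrant destroy -f        # discard the previous half-built VM, if any
./hello                   # rebuild from scratch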

Running chef-solo gives HTTPServerException 404 "Not Found"

I am trying to provision a VM with Chef. I have written a default.rb recipe and placed the files that I want to copy to the VM.
cookbook_file "/etc/hosts" do
  source "etc_hosts"    # looked up in the cookbook's files/ directory
  mode 0644
  owner "root"
  group "root"
end
This fails with the error below:
Error executing action `create` on resource 'cookbook_file[/etc/hosts]'
==> default: ================================================================================
==> default:
==> default: Net::HTTPServerException
==> default: ------------------------
==> default: 404 "Not Found"
The file is correctly placed.
When I run the Chef script again, it succeeds; it fails on alternate runs.
Is there anything I might be missing?
Assuming you didn't actually have that commented out, make sure you have the file content at files/etc_hosts in the cookbook.
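For reference, a minimal cookbook layout that satisfies that lookup (the cookbook name hosts is a placeholder):
hosts/
    recipes/
        default.rb        # holds the cookbook_file resource above
    files/
        default/
            etc_hosts     # the content that ends up in /etc/hosts
Older Chef versions only search platform subdirectories such as files/default/, while newer ones also accept files/etc_hosts directly, so files/default/ is the safe location.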

Unable to launch vagrant-lxc

I'm trying to launch vagrant-lxc on Ubuntu 14.04. I'm using the latest Vagrant download (rather than the ancient version in the Debian repos).
vagrant plugin install vagrant-lxc
runs successfully, as does:
vagrant init fgrehm/precise64-lxc
I ran
sudo vagrant lxc sudoers
to handle the sudo issues mentioned here.
But when I run
vagrant up --provider=lxc
(both with and without sudo) the container doesn't load, spitting out this:
Bringing machine 'default' up with 'lxc' provider...
==> default: Checking if box 'fgrehm/precise64-lxc' is up to date...
==> default: Setting up mount entries for shared folders...
default: /vagrant => /home/ubuntu
==> default: Starting container...
There was an error executing ["sudo", "/usr/local/bin/vagrant-lxc-wrapper", "lxc-start", "-d", "--name", "ubuntu_default_1456156125505_47833"]
For more information on the failure, enable detailed logging by
setting the environment variable VAGRANT_LOG to DEBUG
Here's the log output I'm getting (from /var/log/lxc/ubuntu_default_1456156125505_47833.log):
lxc-start 1456158555.539 ERROR lxc_start - start.c:lxc_spawn:884 - failed initializing cgroup support
lxc-start 1456158555.568 ERROR lxc_start - start.c:__lxc_start:1121 - failed to spawn 'ubuntu_default_1456156125505_47833'
lxc-start 1456158555.568 ERROR lxc_start_ui - lxc_start.c:main:341 - The container failed to start.
lxc-start 1456158555.568 ERROR lxc_start_ui - lxc_start.c:main:343 - To get more details, run the container in foreground mode.
lxc-start 1456158555.568 ERROR lxc_start_ui - lxc_start.c:main:345 - Additional information can be obtained by setting the --logfile and --logpriority options.
Any ideas what I'm doing wrong?
Thanks,
Go into the /home/<USERNAME>/.vagrant.d/boxes/fgrehm/precise64-lxc/.../lxc/lxc-config file and comment out:
lxc.pivotdir = lxc_putold
Then do vagrant up again and it should work!
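If you prefer to make that edit from the shell, something like this should work (a sketch: the glob stands in for the box-version directory elided as ... above):
sed -i 's/^lxc.pivotdir/# lxc.pivotdir/' ~/.vagrant.d/boxes/fgrehm/precise64-lxc/*/lxc/lxc-config
sed -i edits the file in place; re-run vagrant up --provider=lxc afterwards.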

Vagrant provision fail with puppet

Hello, I have generated a VM on http://vmg.slynett.com/.
The VM works fine, but nothing is installed on it.
I don't know why vagrant provision fails.
vagrant provision
[default] Running provisioner: Vagrant::Provisioners::Shell...
stdin: is not a tty
Europe/Paris
Current default time zone: 'Europe/Paris'
Local time is now: Fri Jun 28 13:15:42 CEST 2013.
Universal Time is now: Fri Jun 28 11:15:42 UTC 2013.
[default] Running provisioner: Vagrant::Provisioners::Puppet...
[default] Running Puppet with /tmp/vagrant-puppet/manifests/base.pp...
stdin: is not a tty
Warning: Could not retrieve fact fqdn
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type concat at /tmp/vagrant-puppet/modules-0/apache/manifests/init.pp:130 on node dev
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type concat at /tmp/vagrant-puppet/modules-0/apache/manifests/init.pp:130 on node dev
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
cd /tmp/vagrant-puppet/manifests && puppet apply --modulepath '/tmp/vagrant-puppet/modules-0' /tmp/vagrant-puppet/manifests/base.pp --detailed-exitcodes || [ $? -eq 2 ]
I am on Mac OS X 10.8.3, VirtualBox 4.2.6, Vagrant 1.2.2.
It looks like you're referring to the concat module from ripienaar/concat, and an error like that is usually thrown when a resource type isn't present (i.e. not installed or not in your module path).
I solved this problem by adding the concat and file_concat Puppet modules to the Puppet module path. I usually have all the needed Puppet modules as git submodules in puppet/modules of the Vagrant project, so to add the concat and file_concat modules, I do:
git submodule add https://github.com/puppetlabs/puppetlabs-concat.git puppet/modules/concat
git submodule add https://github.com/electrical/puppet-lib-file_concat.git puppet/modules/file_concat
Note that the concat module is the official one from puppetlabs, and the file_concat module is a dependency used by concat (it should be installed automatically, but that didn't seem to work for me and may be your problem as well).
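For the module path to be picked up, the Vagrant Puppet provisioner has to point at that directory. A minimal sketch, assuming the puppet/modules layout above and the base.pp manifest from the failing command (the manifests path is an assumption):
Vagrant.configure("2") do |config|
  config.vm.provision :puppet do |puppet|
    puppet.module_path    = "puppet/modules"    # where the submodules were added
    puppet.manifests_path = "puppet/manifests"  # assumed manifests location
    puppet.manifest_file  = "base.pp"
  end
end
With this in place, vagrant provision passes --modulepath to puppet apply exactly as in the failing command above.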
