Delete a Docker image from inside an RSpec script

TL;DR: I can't delete my Docker image from inside my RSpec script, because RSpec still has a dependent container
I'm using RSpec to test my Dockerfiles. That is, to test whether the Dockerfile results in a correct image. My test script does this by building an image from the Dockerfile, and then letting RSpec test whether a container (based on that image) has the correct contents.
The execution flow is as follows:
1. My RSpec script builds the Docker image (in its before(:all) block).
2. Serverspec creates a Docker container (based off the Docker image).
3. RSpec / Serverspec executes the tests inside the Docker container.
4. RSpec / Serverspec calls my RSpec script's after(:all) block.
5. RSpec / Serverspec deletes the Docker container.
I tried to delete the image from inside the after(:all) block (or the after(:context) block, which has the same issue), as that is the last hook RSpec offers me. But I can't, because the Docker container has not been deleted yet.
Force-deletion is no solution, as Docker then keeps an unnamed, untagged version of the image around.
Is there an (idiomatic) way to accomplish this?
Example RSpec script:
require "serverspec"
require "docker-api"
describe "Dockerfile" do
image_name = 'my_image'
image_tag = 'my_tag'
image_name_tag = image_name + ':' + image_tag
before(:all) do
set :os, family: :debian
set :backend, :docker
# Build image
#image = Docker::Image.build_from_dir("../dockerfile/")
#image.tag(repo: image_name, tag: image_tag)
# Provide Serverspec with our image's id
set :docker_image, #image.id
end
it "is a Debian-based container" do
expect(os_version).to include("Debian")
end
def os_version
command("cat /etc/*release").stdout
end
after(:context) do
# Clean up
#image.delete(name: image_name_tag) # <-- this fails
end
end
(It's not entirely clear to me where RSpec's responsibilities end and Serverspec's begin; hence the RSpec / Serverspec above.)
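The Specinfra reset trick from the first related answer below suggests one hacky, unverified direction: make Specinfra drop its container reference before deleting the image. Note that the container is only removed once the old backend instance is garbage collected, so this sketch can still race with the deletion:

after(:context) do
  # Sketch only: clear is an internal Specinfra method, not part of
  # serverspec's public API; it drops the cached backend (and with it
  # the container reference) so the container can be cleaned up.
  Specinfra.backend.class.clear
  @image.delete(name: image_name_tag)
end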

Related

serverspec using wrong container

I have 2 spec files that use different Docker images and therefore are supposed to start separate, different Docker containers to run the examples.
In the snippets below I'm using the serverspec gem to test my containers.
spec/dockerfile/ember_spec.rb
require 'spec_helper'
require 'shared_examples/release'

describe 'ember' do
  before(:all) do
    @image = Docker::Image.build_from_dir(image_path('ember'))

    set :os, family: :alpine
    set :backend, :docker
    set :docker_image, @image.id
    set :docker_container_create_options, { 'Entrypoint' => ['/bin/sh'] }
  end

  describe command('ember version') do
    its(:stdout) { should contain 'ember-cli: 3.3.0' }
    its(:stdout) { should contain 'node: 10.10.0' }
  end

  include_examples 'os release', 'Alpine Linux'
end
spec/dockerfile/gerbv_spec.rb
require 'spec_helper'
require 'shared_examples/release'

describe 'gerbv' do
  before(:all) do
    @image = Docker::Image.build_from_dir(image_path('gerbv'))

    set :os, family: :debian
    set :backend, :docker
    set :docker_image, @image.id
    set :docker_container_create_options, { 'Entrypoint' => ['/bin/sh'] }
  end

  describe package('gerbv') do
    it { should be_installed }
  end

  include_examples 'os release', 'Ubuntu 18.04'
end
However, when running bundle exec rspec, it is quite clear that the same container is being used to run each spec file. I have confirmed this by printing out the running containers before each of the examples. This is of course causing the specs to fail for one of the files (whichever runs second).
When the files are run independently using bundle exec rspec path/to/file, all the specs pass.
Is there any way to force a container to be spun down after the examples in one file have run and a new container created for the other set of examples?
I found a way to solve the problem, albeit a pretty hacky one. The key to this problem lies in how the container is finally released: when there are no longer any references pointing to the Docker backend instance, it is garbage collected and the container is killed and deleted. However, the object instance is held in a class-level variable as a singleton in the base class. It would seem the only way to "reset" Specinfra is to call the clear method inherited by the Docker backend class.
In the end the following solved the problem and the correct class is being used to run each spec.
after(:all) {
  Specinfra.backend.class.clear
}
It would be great to know if there is a better way to access these methods without having to rely on a method not exposed through the serverspec gem.
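If several spec files need this, the hook could also live in spec_helper.rb so it applies to every file (a sketch, with the same caveat about relying on a non-public Specinfra method):

# spec/spec_helper.rb
RSpec.configure do |config|
  # After each top-level example group, drop Specinfra's cached Docker
  # backend so the next spec file builds and starts its own container.
  config.after(:all) do
    Specinfra.backend.class.clear
  end
end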

Capistrano Deploy, access ActiveRecord

Okay, I'm sorry if the title is not descriptive enough, but allow me to explain what I want to achieve:
I have a Rails 3 application
During my deploy, it needs to call pg_dump with the correct parameters to restore a backup
The task needs to be run after the deploy is done but before the migrations.
The problem I have, however, is that this task needs access to Rails-specific code, which is not working: Capistrano keeps throwing errors at me, like gems not being available or modules not being defined.
This is my Rake task:
namespace :deploy do
  namespace :rm do
    desc 'Backup the database'
    task :backup do
      # Generates the command to invoke the Rails runner.
      # Used by the cfg method to read the ActiveRecord configuration
      # from the Rails config.
      def runner
        dir = "#{fetch(:deploy_to)}/current"
        bundler = "#{SSHKit.config.command_map.prefix[:bundle].first} bundle exec"
        env = fetch(:rails_env)
        "cd #{dir}; #{bundler} rails r -e #{env}"
      end

      def cfg(name)
        env = fetch(:rails_env)
        command = "\"puts ActiveRecord::Base.configurations['#{env}']['#{name}']\""
        "#{runner} #{command}"
      end

      on roles(:db) do
        timestamp = Time.now.strftime('%Y%m%d%H%M%S')
        backups = File.expand_path(File.join(fetch(:deploy_to), '..', 'backups'))
        execute :mkdir, '-p', backups
        dump = "PGPASSWORD=`#{cfg('password')}` pg_dump -h `#{cfg('host')}` -U `#{cfg('username')}` `#{cfg('database')}`"
        fn = "#{timestamp}_#{fetch(:stage)}.sql.gz"
        path = File.join(backups, fn)
        execute "#{dump} | gzip > #{path}"
      end
    end
  end
end
In its current form, it simply generates a string with the runner method and embeds that inside the cfg method.
I tried rewriting the runner method, but for some reason I keep getting the runner --help output from the remote server, even though the command shown in the output is correct and works just fine locally.
We are using Ruby 2.2.2 and RVM on the remote server.
Is it even possible to do what we are trying to construct together?
I'd suggest writing a rake task inside your Rails app and invoking that from your Capistrano task. This is how the Capistrano rails tasks work.
within release_path do
  with rails_env: fetch(:rails_env) do
    execute :rake, "db:migrate"
  end
end
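For this question's backup task, that could look something like the sketch below. The db:backup task name and the BACKUP_PATH variable are illustrative, not part of the original answer.

# lib/tasks/backup.rake -- lives inside the Rails app, so ActiveRecord
# and the Rails environment are available (hypothetical task name)
namespace :db do
  desc 'Dump the database to a gzipped file given by BACKUP_PATH'
  task backup: :environment do
    config = ActiveRecord::Base.configurations[Rails.env]
    dump = "PGPASSWORD='#{config['password']}' pg_dump " \
           "-h #{config['host']} -U #{config['username']} #{config['database']}"
    sh "#{dump} | gzip > #{ENV.fetch('BACKUP_PATH')}"
  end
end

# config/deploy.rb -- hook the backup in before the migrations, and
# replace the runner/cfg plumbing with a plain rake invocation
# (path is built exactly as in the question's task):
before 'deploy:migrate', 'deploy:rm:backup'

on roles(:db) do
  within release_path do
    with rails_env: fetch(:rails_env) do
      execute :rake, 'db:backup', "BACKUP_PATH=#{path}"
    end
  end
end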

Testing multiple hosts with the same test using serverspec

The Advanced Tips section of the Serverspec site shows an example of testing multiple hosts with the same test set. I've built an example of my own (https://gist.github.com/neilhwatson/81249ad393800a76a8ad), but there are problems.
The first problem is that the tests stop at the first failure rather than proceeding through the lot and keeping a tally. The second is that the failure output does not indicate on which host the failure occurred. What can I do to fix these problems and produce a final report for all hosts?
For the first issue, ServerSpec by default will run all your tests. However, since you have a loop that executes a Rake task for each environment, the first environment to have a failure causes the task to fail, so an exception is raised and the rest of your tasks don't run.
I've forked your gist and updated the Rake task to surround it with a begin/rescue.
...
begin
  desc "Run serverspec to #{host}"
  RSpec::Core::RakeTask.new(host) do |t|
    ENV['TARGET_HOST'] = host
    t.pattern = "spec/base,cfengine3/*_spec.rb"
  end
rescue
end
...
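An alternative to the empty rescue, since RSpec::Core::RakeTask has a fail_on_error flag (a standard option of the rake task, not something from the gist):

desc "Run serverspec to #{host}"
RSpec::Core::RakeTask.new(host) do |t|
  ENV['TARGET_HOST'] = host
  t.pattern = "spec/base,cfengine3/*_spec.rb"
  t.fail_on_error = false # record failures but let the remaining hosts run
end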
For the second problem, it doesn't look like ServerSpec will output which environment the tests are running in. But since the updated gist shows that the host gets set in spec_helper.rb, we can use that to add an RSpec configuration that sets up an after(:each) hook and outputs the host only on errors. The relevant code changes are in a fork of the gist, but basically you'll just need the snippet below in your spec_helper.rb:
RSpec.configure do |c|
  c.after(:each) do |example|
    if example.exception
      puts "Failed on #{host_run_on}"
    end
  end
end

Buildr, RSpec—pass build parameter to test

I have a »buildr« »buildfile« which triggers some »rspec« tests. I would like to pass some path parameters to the tests, so that it won't cause trouble loading test-resource files. In the »buildfile« I have got this code to trigger the tests:
RSpec.configure do |config|
  config.add_setting :spec_resources_dir, :default => _(:src, 'spec', 'ruby', 'resources')
end

RSpec::Core::RakeTask.new(:run_rspec) do |t|
  t.pattern = 'src/spec/**/*_spec.rb'
end

task test => [:run_rspec]
But if I try to retrieve the value in the specfile like this:
RSpec.configuration.spec_resources_dir
I get this error
undefined method `spec_resources_dir' […] (NoMethodError)
Any ideas?
RSpec's rake task runs the specs in a separate process, so configuration you do with RSpec.configure in the buildfile will not be visible to the running specs.
Two suggestions for passing info from the buildfile to your spec task:
Generate a spec_helper and require it from your specs (or via rspec's -r command line option and the rspec_opts config parameter on RSpec::Core::RakeTask). You could use buildr's filtering to substitute values from the buildfile into the helper.
Set values in ENV and then read them out from your specs. Environment variables are shared from parent to child processes.
By request, an example for #1:
RSpec::Core::RakeTask.new do |t|
  t.rspec_opts = "-r '#{_(:target, 'spec_helper.rb')}'"
end
This assumes that you (probably in another task) generate the spec helper into _(:target, 'spec_helper.rb').
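And a minimal sketch of #2, passing the path through the environment; the variable name SPEC_RESOURCES_DIR is illustrative:

# buildfile: child processes inherit the environment
RSpec::Core::RakeTask.new(:run_rspec) do |t|
  ENV['SPEC_RESOURCES_DIR'] = _(:src, 'spec', 'ruby', 'resources')
  t.pattern = 'src/spec/**/*_spec.rb'
end

# in a spec file:
SPEC_RESOURCES_DIR = ENV.fetch('SPEC_RESOURCES_DIR')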

Sauce labs tests with meaningful names

I am using rspec and cucumber to run watir tests at Sauce Labs.
I would like the test name (at Sauce Labs) to be the same as the name of the rspec describe block or cucumber feature.
So, if I have rspec file:
describe "something" do
# teh codez
end
or cucumber file:
Feature: something
  # teh codez
I would like the test at Sauce Labs to also be named something. I know how to tell Sauce Labs what the test should be named, but I do not know how to get the rspec describe block name or cucumber feature name while the tests are running.
A bit more context: I have several rspec files, all running in parallel; I am using the parallel_tests gem for that. It provides a TEST_ENV_NUMBER variable, so I am using it to name tests:
caps[:name] = "job #{ENV['TEST_ENV_NUMBER']}"
So the jobs are named job , job 1, job 2... (parallel_tests leaves TEST_ENV_NUMBER empty for the first process). But it would be better if they were named user, search, login...
You can get the names in the before hooks:
# rspec: the current example is yielded to the hook
before do |example|
  p [example.description, example.full_description]
end

# cucumber:
Before do |scenario|
  p [scenario.feature.name, scenario.name]
end
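Tying that back to the question's capability hash, a sketch (assuming caps is set up before the Sauce Labs session starts):

# rspec: name the Sauce Labs job after the current example
before do |example|
  caps[:name] = example.full_description
end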
