Rake task selectively ignoring new code - ruby

In a rake task I'm writing, some puts statements show changes while others don't. For instance, changing
puts model+" | "+id
into
puts model+" * "+id
doesn't change the output of the script. However, in some places changing
puts "Connecting to "+site
into
puts "Connecting to ----"+site
shows the changes that were made.
In the places where changes to a line don't show up in the output, adding a new puts statement before or after that line doesn't show up either when the task is run. Commenting out the surrounding lines that do the actual work causes the script to skip them, just as it should, but changing or adding puts statements there does not change the output of the script.
Removing all other tasks and Emacs backup files from the lib/tasks folder doesn't help. I've been bitten before by having a backup copy of a task with the same namespace and task name running instead of the one I was working on.
This is being run with Ruby 2.4.3 on OpenBSD 6.3-stable on an FX-8350. I would post the whole script, but the company I'm working for won't allow it.

How about
puts "#{model} +/*/whatever #{site}"
It shouldn't matter for what sounds like a filesystem update issue (try a reboot), but it's probably better form to put the variables in the string like that instead of + ""-ing them together.
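A side benefit of interpolation is that it calls to_s on each value for you, so the line keeps working even if id (or model) isn't already a String, whereas + raises a TypeError. A quick sketch, reusing the names from the question:

model = "Widget"
id = 42
# puts model + " | " + id      # TypeError: no implicit conversion of Integer into String
puts model + " | " + id.to_s   # works, but noisy
puts "#{model} | #{id}"        # interpolation converts id automatically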

Related

puts statements in repl.rb are not printed to logs/log.txt

While going through the DragonRuby documentation at http://docs.dragonruby.org, it says:
All puts statements will also be saved to logs/log.txt. So if you want
to stay in your editor and not look at the terminal, or the DragonRuby
Console, you can tail this file.
However, when I write puts statements, this is all that shows up in logs/log.txt:
INFO: Marked app/repl.rb for reload
The puts output is written to the logs/puts.txt file instead.
Looks like the documentation was not updated. Raised a PR for the same: https://github.com/DragonRuby/dragonruby-game-toolkit-contrib/pull/83

Ruby Project - Prevent a Ruby file from being called directly from the OS command line

I am doing a demo command line project in Ruby. The structure is like this:
/ROOT_DIR
  init.rb
  /SCRIPT_DIR
    (other scripts and files)
I want users to only go into the application using init.rb, but as it stands, anyone can go into the sub-folder and call the other Ruby scripts directly.
Questions:
What ways can the above scenario be prevented?
If I were to use directory permissions, would they get reset when moving the code from a Windows machine to a Linux machine?
Is there anything that can be included in the Ruby files themselves to prevent them from being called directly from the OS command line?
You can't do this with file permissions, since the user needs to read the files; removing the read permission means you can't include them either. Removing the execute permission is useful to signal that these files aren't intended to be executed, but it won't prevent people from typing ruby incl.rb.
The easiest way is probably to set a global variable in the init.rb script:
#!/usr/bin/env ruby
FROM_INIT = true
require './incl.rb'
puts 'This is init!'
And then check if this variable is defined in the included incl.rb file:
unless defined? FROM_INIT
  puts 'Must be called from init.rb'
  exit 0
end
puts 'This is incl!'
A second method might be checking the value of $PROGRAM_NAME in incl.rb; this stores the current program name (like argv[0] in many other languages):
unless $PROGRAM_NAME.end_with? 'init.rb'
  puts 'Must be called from init.rb'
  exit 0
end
I don't recommend this though, as it's not very future-proof; what if you want to rename init.rb or make a second script?
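A more future-proof variant, not from the answer above but a common Ruby idiom, flips the check around: instead of naming init.rb, each script checks whether it is itself the program being run:

# incl.rb -- sketch: refuse to run when this file is the main program,
# no matter what the real entry point is called.
if __FILE__ == $PROGRAM_NAME
  puts 'This file is not meant to be run directly; use the main entry script.'
  exit 1
end

puts 'This is incl!'

That way you can rename init.rb or add a second entry script without touching the guard.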

How to prevent Capistrano from replacing newlines?

I want to run some shell scripts remotely as part of my capistrano setup. To test that functionality, I use this code:
execute <<SHELL
cat <<TEST
something
TEST
SHELL
However, that is actually running /usr/bin/env cat <<TEST; something; TEST which is obviously not going to work. How do I tell capistrano to execute the heredoc as I have written it, without converting the newlines into semicolons?
I have Capistrano Version: 3.2.1 (Rake Version: 10.3.2) and do not know ruby particularly well, so there might be something obvious I missed.
I think it might work to just specify the arguments to cat as a second, er, argument to execute:
cat_args = <<SHELL
<<TEST
something
TEST
SHELL
execute "cat", cat_args
From the code @DavidGrayson posted, it looks like only the command (the first argument to execute) is sanitized.
I agree with David, though, that the simpler way might be to put the data in a file, which is what the SSHKit documentation suggests:
Upload a file from a stream
on hosts do |host|
file = File.open('/config/database.yml')
io = StringIO.new(....)
upload! file, '/opt/my_project/shared/database.yml'
upload! io, '/opt/my_project/shared/io.io.io'
end
The IO streaming is useful for uploading something rather than "cat"ing it, for example
on hosts do |host|
contents = StringIO.new('ALL ALL = (ALL) NOPASSWD: ALL')
upload! contents, '/etc/sudoers.d/yolo'
end
This spares one from having to figure out the correct escaping sequences for something like "echo(:cat, '...?...', '> /etc/sudoers.d/yolo')".
This seems like it would work perfectly for your use case.
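Applied to the heredoc from the question, that might look roughly like the following sketch; the remote path and the :sh invocation are my own choices for illustration:

require 'stringio' # usually already loaded in a Capistrano environment

on hosts do |host|
  # Build the script body locally and upload it, instead of escaping it for the shell.
  script = StringIO.new(<<-SHELL)
cat <<TEST
something
TEST
  SHELL
  upload! script, '/tmp/heredoc_test.sh'
  execute :sh, '/tmp/heredoc_test.sh'
end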
The code responsible for this sanitization can be found in SSHKit::Command#sanitize_command!, which is called by that class's initialize method. You can see the source code here:
https://github.com/capistrano/sshkit/blob/9ac8298c6a62582455b1b55b5e742fd9e948cefe/lib/sshkit/command.rb#L216-226
You might consider monkeypatching it to do nothing by adding something like this to the top of your Rakefile:
SSHKit::Command # force the class to load so we can re-open it

class SSHKit::Command
  def sanitize_command!
    # deliberately a no-op: leave the command string exactly as written
  end
end
This is risky and could introduce problems in other places; for example there might be parts of Capistrano that assume that the command has no newlines.
You are probably better off making a shell script that contains the heredoc or putting the heredoc in a file somewhere.
Ok, so this is the solution I figured out myself, in case it's useful for someone else:
str = %x(
base64 <<TEST
some
thing
TEST
).delete("\n")
execute "echo #{str} | base64 -d | cat -"
As you can see, I'm base64 encoding my command, sending it through, then decoding it on the server side where it can be evaluated intact. This works, but it's a real ugly hack - I hope someone can come up with a better solution.

Passing variables between Chef resources

I would like to show you my use case and then discuss possible solutions:
Problem A:
I have 2 recipes, "a" and "b". Recipe "a" installs some program on my file system (say at "/usr/local/bin/stuff.sh") and recipe "b" needs to run it and do something with the output.
So recipe "a" looks something like:
execute "echo 'echo stuff' > /usr/local/bin/stuff.sh"
(the script just echoes "stuff" to stdout)
And recipe "b" looks something like:
include_recipe "a"
var=`/usr/local/bin/stuff.sh`
(note the backquotes; var should contain "stuff")
Now I need to do something with it, for instance create a user with this username, so in recipe "b" I add:
user "#{node[:var]}"
As it happens, this doesn't work. Apparently Chef runs everything that is not a resource (the compile phase) and only then runs the resources (the converge phase), so as soon as I run the script Chef complains that it cannot compile: it first tries to run the "var=..." line in recipe "b" and fails, because the "execute ..." in recipe "a" has not run yet and so the "stuff.sh" script does not exist yet.
Needless to say, this is extremely annoying, as it breaks the "Chef runs everything in order from top to bottom" that I was promised when I started using it.
However, I am not very picky, so I started looking for alternative solutions to this problem:
Problem B: I've run across the idea of "ruby_block". Apparently this is a resource, so it will be evaluated along with the other resources. I said OK, then I'd like to create the script, get the output in a "ruby_block" and then pass it to "user", so recipe "b" now looks something like:
include_recipe "a"
ruby_block "a_block" do
block do
node.default[:var] = `/usr/local/bin/stuff.sh`
end
end
user "#{node[:var]}"
However, as it turns out, the variable (var) was not passed from "ruby_block" to "user" and it remains empty. No matter what juggling I've tried to do with it, I failed (or maybe I just didn't find the correct juggling method).
To the Chef/Ruby masters around: how do I solve Problem A? How do I solve Problem B?
You have already solved problem A with the Ruby block.
Now you have to solve problem B with a similar approach:
ruby_block "create user" do
block do
user = Chef::Resource::User.new(node[:var], run_context)
user.shell '/bin/bash' # Set parameters using this syntax
user.run_action :create
user.run_action :manage # Run multiple actions (if needed) by declaring them sequentially
end
end
You could also solve problem A by creating the file during the compile phase:
execute "echo 'echo stuff' > /usr/local/bin/stuff.sh" do
action :nothing
end.run_action(:run)
If following this course of action, make sure that:
/usr/local/bin exists during Chef's compile phase;
either stuff.sh is executable, or you execute it through a shell (e.g. var=`sh /usr/local/bin/stuff.sh`), as in the sketch below.
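Putting those points together, recipe "b" under this compile-phase approach might look roughly like this sketch (the directory resource and the .strip are my additions, not part of the answer above):

# Everything here runs during the compile phase: run_action forces the
# resources to converge immediately, and the backtick call is plain Ruby.
directory '/usr/local/bin' do
  action :nothing
end.run_action(:create)

execute "echo 'echo stuff' > /usr/local/bin/stuff.sh" do
  action :nothing
end.run_action(:run)

node.default[:var] = `sh /usr/local/bin/stuff.sh`.strip

user node[:var]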
The modern way to do this is to use a custom resource:
in cookbooks/create_script/resources/create_script.rb
provides :create_script
unified_mode true
property :script_name, name_property: true

action :run do
  execute "creating #{script_name}" do
    command "echo 'echo stuff' > #{script_name}"
    not_if { File.exist?(script_name) }
  end
end
Then in recipe code:
create_script "/usr/local/bin/stuff.sh"
For the second case as written I'd avoid the use of a node variable entirely:
script_location = "/usr/local/bin/stuff.sh"
create_script script_location
# note: the user resources takes a username not a file path so the example is a bit
# strange, but that is the way the question was asked.
user script_location
If you need to move it into an attribute and call it from different recipes, there's still no need for ruby_block or lazy:
In some cookbook's attributes/default.rb file (or a policyfile, etc.):
default['script_location'] = "/usr/local/bin/stuff.sh"
in recipe code or other custom resources:
create_script node['script_location']
user node['script_location']
There's no need to lazy things or use ruby_block using this approach.
There are actually a few ways to solve the issue that you're having.
The first way is to avoid the scope issues you're having in the passed blocks and do something like this:
include_recipe "a"
this = self
ruby_block "a_block" do
block do
this.user `/usr/local/bin/stuff.sh`
end
end
Assuming that you plan on only using this once, that would work great. But if you legitimately need to store a variable on the node for other uses, you can rely on lazy evaluation to work around the issue.
include_recipe "a"
ruby_block "a_block" do
block do
node.default[:var] = `/usr/local/bin/stuff.sh`.strip
end
end
user do
username lazy { "#{node[:var]}" }
end
You'll quickly notice with Chef that it has an override for all default assumptions for cases just like this.

Redirect Output of Capistrano

I have a Capistrano deploy file (Capfile) that is rather large, contains a few namespaces and generally has a lot of information already in it. My ultimate goal is, using the Tinder gem, paste the output of the entire deployment into Campfire. I have Tinder setup properly already.
I looked into using the Capistrano capture method, but that only works for the first host. Additionally that would be a lot of work to go through and add something like:
output << capture 'foocommand'
Specifically, I am looking to capture the output of any deployment from that file into a variable (in addition to putting it to STDOUT so I can see it), then pass that output in the variable into a function called notify_campfire. Since the notify_campfire function is getting called at the end of a task (every task regardless of the namespace), it should have the task name available to it and the output (which is stored in that output variable). Any thoughts on how to accomplish this would be greatly appreciated.
I recommend not messing with the Capistrano logger. Instead, use what Unix gives you and use pipes:
cap deploy | my_logger.rb
Where your logger reads STDIN, records everything, and pipes it back out to the appropriate stream.
As an alternative, the Engineyard cap recipes have a logger; this might be a useful reference if you do need to edit the code, but I recommend not doing so.
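A my_logger.rb along those lines can be a small tee-style script, something like this sketch (the log file path is just an example):

#!/usr/bin/env ruby
# Read everything piped in, append it to a log file, and echo it back to
# STDOUT so the deploy output stays visible on the terminal.
File.open('deploy.log', 'a') do |log|
  STDIN.each_line do |line|
    log.write(line)
    STDOUT.write(line)
    STDOUT.flush
  end
end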
It's sort of a hackish means of solving your problem, but you could try running the deploy task in a Rake task and capturing the output using %x.
# ...in your Rakefile...
task :deploy_and_notify do
  output = %x[ cap deploy ] # Run your deploy task here.
  notify_campfire(output)
  puts output               # Echo the output.
end
