Encrypt data bag from inside of ruby without relying on knife - ruby

At the moment, to encrypt a data bag I have to do:
system "knife data bag from file TemporaryEncrypting \"#{enc_file_path}\" --secret-file #{Secret_Key_Path}"
and that doesn't work because knife can't find a config file, and I can't seem to get it to read the one in C:\chef.
How do I do this from within Ruby?

I worked out how to encrypt inside of Ruby; just use this code:
require 'chef/knife'
#require 'chef/encrypted_data_bag_item' #you need to do this in Chef 12, they've moved it out of knife and into its own class
require 'json'
secret = Chef::EncryptedDataBagItem.load_secret Secret_Key_Path
to_encrypt = JSON.parse(json_to_encrypt)
encrypted_data = Chef::EncryptedDataBagItem.encrypt_data_bag_item to_encrypt, secret
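For what it's worth, the same class also handles the reverse direction, so you can sanity-check the result. A minimal sketch (the output filename is a placeholder, not from the original):
# Write the encrypted item out, then decrypt it again to verify the round trip.
IO.write 'encrypted_item.json', JSON.pretty_generate(encrypted_data)
decrypted = Chef::EncryptedDataBagItem.new(encrypted_data, secret).to_hash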
Answer achieved with information from another answer; here is the code in question:
namespace 'databag' do
  desc 'Edit encrypted databag item.'
  task :edit, [:databag, :item, :secret_file] do |t, args|
    args.with_defaults :secret_file => "#{ENV['HOME']}/.chef/encrypted_data_bag_secret"
    secret = Chef::EncryptedDataBagItem.load_secret args.secret_file
    item_file = "data_bags/#{args.databag}/#{args.item}.json"
    tmp_item_file = "/tmp/#{args.databag}_#{args.item}.json"
    begin
      # decrypt data bag into tmp file
      raw_hash = Chef::JSONCompat.from_json IO.read item_file
      databag_item = Chef::EncryptedDataBagItem.new raw_hash, secret
      IO.write tmp_item_file, Chef::JSONCompat.to_json_pretty(databag_item.to_hash)
      # edit tmp file
      sh "#{ENV['EDITOR']} #{tmp_item_file}"
      # encrypt tmp file data bag into original file
      raw_hash = Chef::JSONCompat.from_json IO.read tmp_item_file
      databag_item = Chef::EncryptedDataBagItem.encrypt_data_bag_item raw_hash, secret
      IO.write item_file, Chef::JSONCompat.to_json_pretty(databag_item)
    ensure
      ::File.delete tmp_item_file # ensure tmp file is deleted
    end
  end
end
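With this task in your Rakefile, invocation would look something like the following (data bag and item names here are placeholders):
rake databag:edit[my_databag,my_item]
rake databag:edit[my_databag,my_item,/path/to/encrypted_data_bag_secret]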

Related

How to retrieve CSV headers only from S3 [duplicate]

Below is the code I'm using to parse the CSV from within the app, but I want to parse a file located in an Amazon S3 bucket. It needs to work when pushed to Heroku as well.
namespace :csvimport do
  desc "Import CSV Data to Inventory."
  task :wiwt => :environment do
    require 'csv'
    csv_file_path = Rails.root.join('public', 'wiwt.csv.txt')
    CSV.foreach(csv_file_path) do |row|
      p = Wiwt.create!({
        :user_id      => row[0],
        :date_worn    => row[1],
        :inventory_id => row[2],
      })
    end
  end
end
There are cases with S3 when permissions on an S3 object disallow public access. Ruby's built-in functions assume a path is publicly accessible and don't account for AWS S3 specifics.
s3 = Aws::S3::Resource.new
bucket = s3.bucket("bucket_name_here")
str = bucket.object("file_path_here").get.body.string
content = CSV.parse(str, col_sep: "\t", headers: true).map(&:to_h)
Per-line explanation using the AWS SDK (this assumes require 'aws-sdk-s3', or require 'aws-sdk' on SDK v2, plus require 'csv'):
Line 1. Initialize the resource.
Line 2. Choose a bucket.
Line 3. Choose an object and get its body as a String.
Line 4. Effectively CSV.parse('the string'), but with col_sep and headers options added, and a map over the rows, in case that helps.
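Since the question title asks for the headers only, one way to avoid downloading the whole object is a ranged GET for just the first chunk. A minimal sketch (the byte range, bucket/object names, and tab separator are assumptions):
require 'aws-sdk-s3'
require 'csv'

s3 = Aws::S3::Resource.new
obj = s3.bucket("bucket_name_here").object("file_path_here")

# Fetch only the first 4 KB; usually enough for a header row (an assumption).
chunk = obj.get(range: "bytes=0-4095").body.string
headers = CSV.parse_line(chunk.lines.first, col_sep: "\t")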
You can do it like this (open here comes from open-uri):
require 'open-uri'

CSV.new(open(path_to_s3)).each do |row|
  ...
end
This worked for me:
open(s3_file_path) do |file|
  CSV.foreach(file, headers: true, header_converters: :symbol) do |row|
    Model.create(row.to_hash)
  end
end
You can get the CSV file from S3 like this (Net::HTTP.get wants a URI, not a string):
require 'csv'
require 'net/http'

CSV.parse(Net::HTTP.get(URI(s3_file_url)), headers: true).each do |row|
  # code for processing row here
end

What is the proper way to input options from a file? | Ruby Scripts

I'm trying to write a script that will take IP addresses from a hosts file, and username info from a config file. I'm obviously not holding the file name as a proper hash value.
What should my File.new(options[:config_file], 'r').each { |params| puts params } be calling? I've tried what it is currently set to, and
File.new(config_file, 'r').each { |params| puts params }, as well as File.new(:config_file, 'r').each { |params| puts params }, with no luck.
Should I be doing something different altogether, like load(filename = nil)?
options = {}
opt_parser = OptionParser.new do |opt|
  opt.banner = 'Usage: opt_parser COMMAND [OPTIONS]'
  opt.on('--host_file', 'I need hosts, put them here') do |host_file|
    options[:host_file] = host_file
  end
  opt.on('--config_file', 'I need config info, put it here') do |config_file|
    options[:config_file] = config_file
  end
  opt.on('-h', '--help', 'What your looking at') do |help|
    options[:help] = help
    puts opt
  end
end
opt_parser.parse!

if options[:config_file]
  File.new(options[:config_file], 'r').each { |params| puts params }
end
if options[:host_file]
  File.new(options[:host_file], 'r').each { |host| puts host }
end
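As an aside (not part of the original answer): declared as above, --host_file and --config_file take no argument, so each block receives true rather than a file name. Adding an argument placeholder to the switch definition makes OptionParser capture the value. A sketch:
require 'optparse'

options = {}
OptionParser.new do |opt|
  # 'FILE' makes the switch take a required argument; the block then
  # receives the actual path instead of just `true`.
  opt.on('--host_file FILE', 'I need hosts, put them here') do |host_file|
    options[:host_file] = host_file
  end
  opt.on('--config_file FILE', 'I need config info, put it here') do |config_file|
    options[:config_file] = config_file
  end
end.parse!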
Parsing the hosts file
You can write your own parser or use a gem already implementing one.
Example using the "hosts" gem: (you need to install it)
require 'hosts'
hosts = Hosts::File.read('/etc/hosts')
entries = hosts.elements.select{ |element| element.is_a? Hosts::Entry }
addresses = Hash[entries.map{ |entry| [entry.name, entry.address] }]
# You should get a hash of entry names and addresses
# {"localhost"=>"127.0.0.1", "ip6-localhost"=>"::1"}
Parsing the config file
A common way to store configuration is to use YAML files.
Considering the following YAML file (in '/tmp/config.yml'):
username: foo
password: bar
You can parse this config file using the YAML module:
require 'yaml'
config = YAML.load_file('/tmp/config.yml')
# You should get a hash of config values
# {"username"=>"foo", "password"=>"bar"}
If you don't want your password stored in plain text in a config file, you can:
ask for the password at runtime, if your context allows that
use an environment variable to store the password and retrieve it at runtime (see the sketch below)
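A minimal sketch of the environment-variable approach (APP_PASSWORD is a hypothetical variable name):
require 'yaml'

config = YAML.load_file('/tmp/config.yml')
# Prefer the environment variable; fall back to the config file value if unset.
password = ENV.fetch('APP_PASSWORD') { config['password'] }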
Edit:
If you only need to extract your hostnames from a text file, considering one hostname per line, you can use something like hostnames = IO.readlines("hosts.txt").map{ |line| line.chomp } to get an array of hostnames. You can then iterate through this array to do your operations.
www.ruby-doc.org/core-2.1.0/IO.html#method-i-readline

getting BadDigest error while trying to upload compressed file to s3 on ruby 1.9.3

As stated, I am trying to upload a file to S3:
require 'digest/md5'
require 'base64'
require 'aws-sdk'

def digest f
  f.rewind
  Digest::MD5.new.tap do |dig|
    f.each_chunk{ |ch| dig << ch } # each_chunk is not core Ruby; presumably from the zip wrapper used
  end.base64digest
ensure
  f.rewind
end
file = File.new(compress file) # file zipped with zip/zip
total = file.size
digest = digest(file)
s3 = AWS::S3.new(:access_key_id => @access_key_id, :secret_access_key => @secret_access_key)
bucket = s3.buckets['mybucket']
bucket.objects["myfile"].write :content_md5 => digest, :content_length => total do |buf, len|
  buf.write(file.read len)
end
But I constantly get an AWS::S3::Errors::BadDigest exception.
If I try to upload the file without passing :content_md5, everything goes well; the archive downloads and opens correctly.
Also, as I just found out, this fails on Ruby 1.9.3 but works well on 1.9.2.
Fixed by changing the digest function to:
def digest f
  Digest::MD5.file(f.path).base64digest
end
I think the issue was that the file passed to it was open.
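For completeness, a sketch of the upload with the fixed digest, reusing the same aws-sdk v1 objects as above (bucket and key names remain placeholders):
digest = Digest::MD5.file(file.path).base64digest
# Let the SDK stream the file itself rather than feeding it chunks manually.
bucket.objects['myfile'].write(:file => file.path, :content_md5 => digest)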

Writing to a file then trying to open it again for parsing

I'm trying to save the XML feed of a Twitter user to a file and then read it again for parsing to the screen.
This is what I see when I try to run it:
Wrote to file #<File:0x000001019257c8>
Now parsing user info..
twitter_stats.rb:20:in `<main>': undefined method `read' for "keva161.txt":String (NoMethodError)
Here's my code...
require "open-uri"
require "rubygems"
require "crack"
twitter_url = "http://api.twitter.com/1/statuses/user_timeline.xml?cout=100&screen_name="
username = "keva161"
full_page = twitter_url + username
local_file = username + ".txt"
tweets = open(full_page).read
my_local_file = open(local_file, "w")
my_local_file.write(tweets)
puts "Wrote to file " + my_local_file.to_s
sleep(1)
puts "Now parsing user info.."
sleep(1)
parsed_xml = Crack::XML.parse(local_file.read)
tweets = parsed_xml["statuses"]
first_tweet = tweets[0]
user = first_tweets["user"]
puts user["screen_name"]
puts user ["name"]
puts users ["created_at"]
puts users ["statuses_count"]
You are calling read on local_file, which is the string containing the filename. You meant to type my_local_file.read, I guess, to use the IO object you got from open. (...or File.read local_file.)
Not that this is the best form: why are you writing to a temporary file anyhow? You have the data in memory, so just pass it directly.
If you do want to write to a local file, I commend the block form of open:
open(local_file, 'w') do |fh|
fh.print ...
end
That way Ruby will take care of closing the file for you and all that.
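And if you skip the temp file entirely, as suggested above, the parse works straight from memory. A sketch using the question's own variables:
tweets = open(full_page).read         # the XML is already in memory here
parsed_xml = Crack::XML.parse(tweets) # parse the string directly, no file round trip
user = parsed_xml["statuses"][0]["user"]
puts user["screen_name"]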

How to do the equivalent of 's3cmd ls s3://some_bucket/foo/bar' in Ruby?

How do I do the equivalent of 's3cmd ls s3://some_bucket/foo/bar' in Ruby?
I found the Amazon S3 gem for Ruby and also the Right AWS S3 library, but somehow it's not immediately obvious how to do a simple 'ls'-like command on an S3 'folder'-like location.
Using the aws gem this should do the trick:
s3 = Aws::S3.new(YOUR_ID, YOUR_SECRET_KEY)
bucket = s3.bucket('some_bucket')
bucket.keys('prefix' => 'foo/bar')
I found a similar question here: Listing directories at a given level in Amazon S3
Based on that I created a method that behaves as much as possible as 's3cmd ls <path>':
require 'right_aws'

module RightAws
  class S3
    class Bucket
      def list(prefix, delimiter = '/')
        list = []
        @s3.interface.incrementally_list_bucket(@name, {'prefix' => prefix, 'delimiter' => delimiter}) do |item|
          if item[:contents].empty?
            list << item[:common_prefixes]
          else
            list << item[:contents].map{ |n| n[:key] }
          end
        end
        list.flatten
      end
    end
  end
end
s3 = RightAws::S3.new(ID, SECRET_KEY)
bucket = s3.bucket('some_bucket')
puts bucket.list('foo/bar/').inspect
In case someone is looking for the answer to this question for aws-sdk version 2, you can very easily do it this way:
creds = Aws::SharedCredentials.new(profile_name: 'my_credentials')
s3_client = Aws::S3::Client.new(region: 'us-east-1', credentials: creds)
response = s3_client.list_objects(bucket: "mybucket", delimiter: "/")
Now, if you do
response.common_prefixes
it will give you the "folders" of that particular subdirectory, and if you do
response.contents
it will list the files of that particular directory.
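To mirror 's3cmd ls s3://some_bucket/foo/bar' more closely, pass a prefix along with the delimiter. A sketch (bucket and prefix are placeholders):
response = s3_client.list_objects(bucket: "mybucket", prefix: "foo/bar/", delimiter: "/")

response.common_prefixes.map(&:prefix) # the "folders" under foo/bar/
response.contents.map(&:key)           # the objects directly under foo/bar/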
The official Ruby AWS SDK now supports this: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/Tree.html
You can also add the following convenience method:
class AWS::S3::Bucket
def ls(path)
as_tree(:prefix => path).children.select(&:branch?).map(&:prefix)
end
end
Then use it like this:
mybucket.ls 'foo/bar' # => ["/foo/bar/dir1/", "/foo/bar/dir2/"]
A quick and simple method to list files in a bucket folder using the Ruby aws-sdk:
require 'aws-sdk'
s3 = AWS::S3.new
your_bucket = s3.buckets['bucket_o_files']
your_bucket.objects.with_prefix('lots/of/files/in/2014/09/03/').each do |file|
puts file.key
end
Notice the '/' at the end of the prefix; it is important.
I like the idea of opening the Bucket class and adding an 'ls' method.
I would have done it like this:
class AWS::S3::Bucket
  def ls(path)
    objects.with_prefix("#{path}").as_tree.children.select(&:leaf?).collect(&:member).collect(&:key)
  end
end

s3 = AWS::S3.new
your_bucket = s3.buckets['bucket_o_files']
your_bucket.ls('lots/of/files/in/2014/09/03/')
