I'm trying to remove an array of directories in a git repo and make one commit for each directory removed. I'm using Rugged and Gitlab_git (which is more or less just a wrapper around Rugged), and so far I've managed to do everything I need except the actual deletion and commit.
I don't see anything in the Rugged README that explains how to remove an entire tree/directory. I tried using their commit example for a blob, replacing the single file with a directory, but it didn't work.
I also tried editing the code they had for the tree builder, but it added a commit to my history that showed every file ever in the repo as having been added, and then left staging showing the same thing. Nothing was deleted.
oid = repo.write("Removing folder", :blob)
builder = Rugged::Tree::Builder.new(repo)
builder << { :type => :blob, :name => "_delete", :oid => oid, :filemode => 0100644 }
options = {}
options[:tree] = builder.write
options[:author] = { :email => "testuser@github.com", :name => 'Test Author', :time => Time.now }
options[:committer] = { :email => "testuser@github.com", :name => 'Test Author', :time => Time.now }
options[:message] ||= "Making a commit via Rugged!"
options[:parents] = repo.empty? ? [] : [ repo.head.target ].compact
options[:update_ref] = 'HEAD'
Rugged::Commit.create(repo, options)
Any suggestions? I'm still a bit fuzzy on git internals, so maybe that's my issue.
The git index doesn't track directories explicitly, only their contents. To remove a directory, stage the removal of all of its contents.
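For example, here is a minimal sketch of that index-based route (the "docs/" path is a stand-in for the directory you want to remove; repo is your Rugged::Repository):
# Load HEAD's tree into the in-memory index, then stage the removal
# of every entry under the (hypothetical) "docs/" directory.
index = repo.index
index.read_tree(repo.head.target.tree)
index.map { |entry| entry[:path] }
     .select { |path| path.start_with?("docs/") }
     .each { |path| index.remove(path) }
# Write the resulting tree; use its OID as the :tree of the new commit.
tree_oid = index.write_tree(repo)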
Alternatively, you can make a Tree::Builder that is based on an existing tree in the repository, which you can then manipulate as you want.
If you already have the Commit object you want to have as your parent commit, then you can do this:
parent_commit = ... # e.g. this might be repo.head.target
# Create a Tree::Builder containing the current commit tree.
tree_builder = Rugged::Tree::Builder.new(repo, parent_commit.tree)
# Next remove the directory you want from the Tree::Builder.
tree_builder.remove('path/to/directory/to/remove')
# Now create a commit using the contents of the modified tree
# builder. (You might want to include the :update_ref option here
# depending on what you are trying to do - something like
# :update_ref => 'HEAD'.)
commit_data = {:message => "Remove directory with Rugged",
               :parents => [parent_commit],
               :tree => tree_builder.write}
Rugged::Commit.create(repo, commit_data)
This will create the commit in the repo with the directory removed, but it won't update any branch pointers unless you pass :update_ref.
It also won't update your current working directory or index. If you want to update them you could reset to the new HEAD, but be careful about losing any uncommitted work. Alternatively you could remove the directory from the working tree yourself (note that Dir.rmdir only removes empty directories, so something like FileUtils.rm_rf is needed for a populated one), mimicking what you would do when removing the directory by hand.
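For instance, a minimal sketch of the reset route (assuming new_commit holds the OID returned by Rugged::Commit.create above):
# A hard reset moves the current branch, the index, and the working
# directory to the new commit; any uncommitted changes are discarded.
new_commit = Rugged::Commit.create(repo, commit_data)
repo.reset(new_commit, :hard)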
Check out the docs for more info, especially Tree::Builder and Commit.create.
Related
Similar to this question, but instead of creating a new file, I'm trying to merge from origin. After creating a new index using Rugged::Repository's merge_commits, and a new merge commit, git reports the new file (coming from origin) as deleted.
Create a merge index,
> origin_target = repo.references['refs/remotes/origin/master'].target
> merge_index = repo.merge_commits(repo.head.target, origin_target)
and a new merge commit,
> options = {
update_ref: 'refs/heads/master',
committer: {name: 'user', email: 'user@foo.com', time: Time.now},
author: {name: 'user', email: 'user@foo.com', time: Time.now},
parents: [repo.head.target, origin_target],
message: "merge `origin/master` into `master`"}
and make sure to use the tree from the merge index.
> options[:tree] = merge_index.write_tree(repo)
Create the commit
> merge_commit = Rugged::Commit.create(repo, options)
Check that our HEAD has been updated:
> repo.head.target.tree
=> #<Rugged::Tree:16816500 {oid: 16c147f358a095bdca52a462376d7b5730e1978e}>
<"first_file.txt" 9d096847743f97ba44edf00a910f24bac13f36e2>
<"second_file.txt" 8178c76d627cade75005b40711b92f4177bc6cfc>
<"newfile.txt" e69de29bb2d1d6434b8b29ae775ad8c2e48c5391>
Looks good. I see the new file in the index. Write it to disk.
> repo.index.write
=> nil
...but git reports the new file as deleted:
$ git st
## master...origin/master [ahead 2]
D newfile.txt
How can I properly update my index and working tree?
There is an important distinction between the Git repository and the working directory. While most common command-line git commands operate on the working directory as well as the repository, the lower-level commands of libgit2 / librugged mostly operate on only the repository. This includes writing the index as in your example.
To update the working directory to match the index, the following command should work (after writing the index):
options = { strategy: :force }
repo.checkout_head(options)
Docs for checkout_head: http://www.rubydoc.info/github/libgit2/rugged/Rugged/Repository#checkout_head-instance_method
Note: I tested with update_ref: 'HEAD' for the commit. I'm not sure if update_ref: 'refs/heads/master' will have the same effect.
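Putting it together, a minimal sketch (reusing the options hash and merge_index built in the question, and assuming any uncommitted work may be discarded):
# Commit with the merge index's tree, then force the index and
# working directory to match the new HEAD.
options[:tree] = merge_index.write_tree(repo)
merge_commit = Rugged::Commit.create(repo, options)
repo.checkout_head(strategy: :force)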
I'm trying to create a commit with rugged using the following test script:
require "rugged"
r = Rugged::Repository.new(".")
index = r.index
index.read_tree(r.references["refs/heads/master"].target.tree)
blob = r.write("My test", :blob)
index.add(:oid => blob, :path => "test.md", :mode => 0100644)
tree = index.write_tree
parents = [r.references["refs/heads/master"].target].compact
actor = {:name => "Actor", :email => "actor@bla"}
options = {
:tree => tree,
:parents => parents,
:committer => actor,
:message => "message",
:update_ref => "HEAD"
}
puts Rugged::Commit.create(r, options)
The commit is created, and the script outputs 773d97f453a6df6e8bb5099dc0b3fc8aba5ebaa7 (the SHA of the new commit). The generated commit and tree look like they're supposed to:
ludwig$ git cat-file commit 773d97f453a6df6e8bb5099dc0b3fc8aba5ebaa7
tree 253d0a2b8419e1eb89fd462ef6e0b478c4388ca3
parent bb1593b0534c8a5b506c5c7f2952e245f1fe75f1
author Actor <actor@bla> 1417735899 +0100
committer Actor <actor@bla> 1417735899 +0100
message
ludwig$ git ls-tree 253d0a2b8419e1eb89fd462ef6e0b478c4388ca3
100644 blob a7f8d9e5dcf3a68fdd2bfb727cde12029875260b Initial file
100644 blob 7a76116e416ef56a6335b1cde531f34c9947f6b2 test.md
However, the working directory is not updated:
ludwig$ ls
Initial file rugged_test.rb
ludwig$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: test.md
I have to do a git reset --hard HEAD to get the missing file test.md to show up in the working directory. I thought creating a Rugged commit, and setting :update_ref => "HEAD", was supposed to update the working directory automatically, but something must be going wrong, because doing r.checkout_head also has no effect. However, I think I'm following the rugged examples correctly. What am I missing here?
EDIT:
ludwig$ gem list rugged
*** LOCAL GEMS ***
rugged (0.21.2)
The steps you're taking are those for when you do not want to affect the workdir or current branch. You are not creating the file and you are not writing the modified index to disk.
If you want to put a file on the filesystem and then track it in a new commit, start by creating the file
# Create the file and give it some content
f = open("test.md", "w")
f << "My test"
f.close
# Add the file from the workdir to the index
# and write the changes out to disk
index = repo.index
index.add("test.md")
index.write
# Get the tree for the commit
tree = index.write_tree
...
and then commit as you are doing now.
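For completeness, here is a sketch of that final commit step (reusing tree from index.write_tree and an actor hash like the one in the question; the commit message is a placeholder):
# Commit the staged change; :update_ref moves HEAD's branch to it.
Rugged::Commit.create(repo,
  :tree       => tree,
  :parents    => [repo.head.target].compact,
  :author     => actor,
  :committer  => actor,
  :message    => "Add test.md",
  :update_ref => "HEAD")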
I am trying to provision a Vagrant VM to allow users to supply their own bash_profile.local, but I don't want this file tracked in the VM's VCS repo. I have a tracked bash_profile.local.dist file that they can rename. How can I tell Puppet to only create a file if the source file exists? It is currently working correctly, but it logs an error during provisioning, and that is what I'm trying to avoid.
This is the manifest:
class local
{
file { '.bash_profile.local':
source => 'puppet:///modules/local/bash_profile.local',
path => '/home/vagrant/.bash_profile.local',
replace => false,
mode => '0644',
owner => 'vagrant',
group => 'vagrant',
}
}
You could abuse the file() function in this way:
$a = file('/etc/puppet/modules/local/files/bash_profile.local','/dev/null')
if($a != '') {
file { '.bash_profile.local':
content => $a,
...
}
}
This is not exactly what you asked, but you can supply multiple paths in source, so you can fall back to a default empty file if the user didn't supply their own.
class local
{
file { '.bash_profile.local':
source => [
'puppet:///modules/local/bash_profile.local',
'puppet:///modules/local/bash_profile.local.default'
],
path => '/home/vagrant/.bash_profile.local',
replace => false,
mode => '0644',
owner => 'vagrant',
group => 'vagrant',
}
}
You can try something like this:
file { 'bash_profile.local':
ensure => present,
source => ['puppet:///modules/local/bash_profile.local', '/dev/null'],
path => '/home/vagrant/.bash_profile.local',
before => Exec['clean-useless-file'],
}
exec { 'clean-useless-file':
  command => 'rm .bash_profile.local',
  unless  => 'test -s .bash_profile.local',
  cwd     => '/home/vagrant',
  path    => '/bin:/usr/bin',
}
If the admin doesn't make a copy of ".bash_profile.local" available in "modules/local/bash_profile.local", the file resource will use the second source and create a blank file. The "test -s" check then fails because the file is empty, so the exec runs and removes the useless blank file.
Used this way the code can be a little cumbersome, but it's better than a provisioning failure. You may want to evaluate whether retaining a blank .bash_profile.local is acceptable in your case. I normally use a variation of this, with wget instead of rm, to fetch a fresh copy of the file from the internet if it was not already made available as a source.
If you're using a puppetmaster, be aware you can use it to provision the server itself, presenting two versions of the catalog according to whether .bash_profile.local is present or not.
EC2 gives instances a new IP address when they're stopped and restarted, so I need to be able to automatically manage a Route53 record set in order to access things consistently. Sadly, the documentation for the Route53 portion of the SDK is not nearly as robust as it is for EC2 (understandably), and so I'm a bit stuck. From what I've seen so far, it seems like change_resource_record_sets (link) is the way to go, but I'm confused as to what goes into :changes, since the docs mention a Change object but fail to provide a link to a description of said object.
Here's what my code currently looks like for creating a record:
r53.client.change_resource_record_sets(:hosted_zone_id => 'MY_ID', :change_batch => {
:changes => 'I DONT KNOW WHAT GOES HERE',
:action => 'CREATE',
:resource_record_set => {
:name => @instance.instance_name,
:type => 'CNAME',
:ttl => 330,
:value => @instance.ip_address
}})
EDIT: Okay, since I haven't had any help either here or on the official forums, I've been messing around with it myself. It turns out that the documentation is just plain awful. All of the values are stored in a Change object rather than supplied inline. So it actually looks more like this:
some_change = AWS::Route53::CreateRequest.new(@instance.instance_name,
'CNAME',
:ttl => 330,
:resource_records => [
{:value => @instance.ip_address}
])
r53.client.change_resource_record_sets(:hosted_zone_id => 'MY_ZONE', :change_batch => {
:changes => [some_change],
})
The documentation is pretty poor, though it does contain a few examples. I initially had the impression that it was necessary to create requests (and use the client object) per your solution, but there are alternatives.
An example of creating a record can be found in the ResourceRecordSetCollection reference, but it's only a little more concise than your answer:
rrsets = AWS::Route53::HostedZone.new(hosted_zone_id).rrsets
rrset = rrsets.create('foo.example.com.', 'A', :ttl => 300, :resource_records => [{:value => '127.0.0.1'}])
I wanted to update an existing record, and didn't have the hosted_zone_id to hand. It took far too long to figure out how best to do this, so I'm offering the following example in the hope it saves someone else some time:
r53 = AWS::Route53.new
domain = "example.com."
fqdn = "host." + domain
zone = r53.hosted_zones.select { |z| z.name == domain }.first
rrset = zone.rrsets[fqdn, 'A']
rrset.resource_records = [ { :value => "1.2.3.4" } ]
rrset.update
Note that that assumes you only have a single zone with that name in Route53.
The method that zts posted seems to be a much better way of updating records. However, if you're updating an alias record then you have to use DeleteRequest/CreateRequest, as the alias_target instance attribute on ResourceRecordSet seems to be read-only, even though the docs don't list it as such.
Here's one way to do it. Note that the hosted zone ID for the alias target (region in the code below) should not be your own zone ID; it is actually an encrypted region ID. This doesn't seem to be documented anywhere, and the only reference for these IDs that I could find was in the source of the fog gem.
Edit: This has now moved to a separate module called fog-aws and is more up to date.
{
"ap-northeast-1" => "Z2YN17T5R711GT",
"ap-southeast-1" => "Z1WI8VXHPB1R38",
"ap-southeast-2" => "Z2999QAZ9SRTIC",
"eu-west-1" => "Z3NF1Z3NOM5OY2",
"eu-central-1" => "Z215JYRZR1TBD5",
"sa-east-1" => "Z2ES78Y61JGQKS",
"us-east-1" => "Z3DZXE0Q79N41H",
"us-west-1" => "Z1M58G0W56PQJA",
"us-west-2" => "Z33MTJ483KN6FU",
}
And the code:
change_request = {
hosted_zone_id: zone.id,
change_batch: { changes: [] }
}
alias_target = {
hosted_zone_id: region,
evaluate_target_health: false
}
# Delete the record if it already exists
if rrset.exists?
alias_target[:dns_name] = rrset.alias_target[:dns_name]
delete_request = AWS::Route53::DeleteRequest.new(fqdn, 'A', alias_target: alias_target)
change_request[:change_batch][:changes][0] = delete_request
r53.client.change_resource_record_sets(change_request)
end
# Create the new record
alias_target[:dns_name] = new_alias
create_request = AWS::Route53::CreateRequest.new(fqdn, 'A', alias_target: alias_target)
change_request[:change_batch][:changes][0] = create_request
r53.client.change_resource_record_sets(change_request)
I hacked it until it worked, and here are my results:
Don't look at the Ruby Route53 documentation for anything but method/object/attribute names. It is misleading, if not outright wrong. Instead, check out the REST documentation, since the client just builds up a standard XML request anyway. My example of creating a simple record is as follows:
some_change = AWS::Route53::CreateRequest.new("foo.bar.com",
'CNAME', # the type of the resource record set
:ttl => 330, # The cache time to live for the current resource record set
:resource_records => [
{:value => "0.0.0.0"} # dependent on type
])
r53.client.change_resource_record_sets(:hosted_zone_id => 'MY_ZONE', :change_batch => {
:changes => [some_change],
})
I initially worked with what @slippery John did, but that proved problematic for spot instances that reclaim certain DNS names often.
I found a solution I think is better, almost identical to his:
some_change = AWS::Route53::ChangeRequest.new("UPSERT","foo.bar.com",
'CNAME', # the type of the resource record set
:ttl => 330, # The cache time to live for the current resource record set
:resource_records => [
{:value => "0.0.0.0"} # dependent on type
])
r53.client.change_resource_record_sets(:hosted_zone_id => 'MY_ZONE', :change_batch => {
:changes => [some_change],
})
It is intentionally copied from his solution with a slight modification.
For aws-sdk v2 it is like this:
[72] pry(main)> r53 = Aws::Route53::Client.new
[73] pry(main)> change
=> {:action=>"UPSERT",
:resource_record_set=>
{:name=>"myhost.example.com",
:resource_records=>[{:value=>"192.0.2.44"}],
:ttl=>60,
:type=>"A"}}
[44] pry(main)> res = r53.change_resource_record_sets(hosted_zone_id: my_zone_id, change_batch: {changes: [change]})
=> #<struct Aws::Route53::Types::ChangeResourceRecordSetsResponse
change_info=
#<struct Aws::Route53::Types::ChangeInfo
id="/change/C02195391TWJO1GT9KVRV",
status="PENDING",
submitted_at=2020-11-03 21:39:09.41 UTC,
comment=nil>>
For more info see https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/Route53/Client.html#change_resource_record_sets-instance_method
I am using the Ruby gem 'octokit', which implements the GitHub API v3. It mostly works great, but I cannot seem to filter by date. I believe I have the syntax and time format correct, but it appears my option is ignored and the API returns the past 35 entries regardless of the since or until dates.
Here's a minimal reproducible example (after installing the octokit gem).
require 'octokit'
require 'time'
#day = "2012-09-27"
#until = DateTime.parse(#date).iso8601
#since = (DateTime.parse(#day) - 60*60*48).iso8601
a = Octokit.commits({:username => "cboettig", :repo => "labnotebook", :since => #since, :until => #until})
See the date of the last entry in the output:
a.last.commit.author.date
An explicit day doesn't work either:
b = Octokit.commits({:username => "cboettig", :repo => "labnotebook", :since => "2012-09-27T00:00:00+00:00"})
b.last.commit.author.date
The date I get in both examples is from August, outside the specified range. What did I miss?
Background: I'm trying to write a little Jekyll plugin that uses the API to return commits made to a specified repo on the day of the post.
joeyw gives a great answer to this question here.
The second argument should be the sha or branch, and options should be the third argument, e.g.
Octokit.commits("cboettig/labnotebook", "master", :since => "2012-09-28T00:00:00+00:00").length
or
Octokit.commits("cboettig/labnotebook", nil, :since => "2012-09-28T00:00:00+00:00").length
works just fine. Here's my corresponding Jekyll plugin.