Checking if an object is in a repo in gitpython

I'm working on a program that will be adding and updating files in a git repo. Since I can't be sure if a file that I am working with is currently in the repo, I need to check its existence - an action that seems to be harder than I thought it would be.
The 'in' comparison doesn't seem to work on non-root levels of trees in gitpython. For example:
>>> repo = Repo(path)
>>> hct = repo.head.commit.tree
>>> 'A' in hct['documents']
False
>>> hct['documents']['A']
<git.Tree "8c74cba527a814a3700a96d8b168715684013857">
So I'm left to wonder: how do people check that a given file is in a git tree before trying to work on it? Trying to access an object for a file that is not in the tree throws a KeyError, so I could wrap everything in try/except. But that feels like a poor use of exception handling for a routine existence check.
Have I missed something really obvious? How does one check for the existence of a file in a commit tree using gitpython (or really any Python library/method)?
Self Answer
OK, I dug around in the Tree class to see what __contains__ does. It turns out that when searching in subfolders, one has to check for a file's existence using the full relative path from the repo's root. So a working version of the check I did above is:
>>> 'documents/A' in hct['documents']
True

EricP's answer has a bug. Here's a fixed version:
import os

def fileInRepo(repo, filePath):
    '''
    repo is a gitPython Repo object
    filePath is the full path to the file from the repository root
    returns True if the file is found in the repo at the specified path, False otherwise
    '''
    pathdir = os.path.dirname(filePath)
    # Build up a reference to the desired repo path
    rsub = repo.head.commit.tree
    if pathdir:  # files at the repo root have no directory component to walk
        for path_element in pathdir.split(os.path.sep):
            # If a directory on the file's path is not in the repo, neither is the file.
            try:
                rsub = rsub[path_element]
            except KeyError:
                return False
    return filePath in rsub
Usage:
file_found = fileInRepo(repo, 'documents/A')
This is very similar to EricP's code, but handles the case where the folder containing the file is not in the repo. EricP's function raises a KeyError in that case. This function returns False.
(I offered to edit EricP's code but was rejected.)

Expanding on Bill's solution, here is a function that determines whether a file is in a repo:
import os

def fileInRepo(repo, path_to_file):
    '''
    repo is a gitPython Repo object
    path_to_file is the full path to the file from the repository root
    returns True if the file is found in the repo at the specified path, False otherwise
    '''
    pathdir = os.path.dirname(path_to_file)
    # Build up reference to desired repo path
    rsub = repo.head.commit.tree
    for path_element in pathdir.split(os.path.sep):
        rsub = rsub[path_element]
    return path_to_file in rsub
Example usage:
file_found = fileInRepo(repo, 'documents/A')

If you want to avoid try/except, you can check whether an object is in the repo with:
import os

def fileInRepo(repo, path_to_file):
    dir_path = os.path.dirname(path_to_file)
    rsub = repo.head.commit.tree
    path_elements = dir_path.split(os.path.sep)
    for el_id, element in enumerate(path_elements):
        sub_path = os.path.join(*path_elements[:el_id + 1])
        if sub_path in rsub:
            rsub = rsub[element]
        else:
            return False
    return path_to_file in rsub
Alternatively, you can iterate through all items in the repo, but that will certainly be slower:
def isFileInRepo(repo, path_to_file):
    rsub = repo.head.commit.tree
    for element in rsub.traverse():
        if element.path == path_to_file:
            return True
    return False

There already exists a method of Tree that does what fileInRepo re-implements in Lucidity's answer.
The method is Tree.join:
https://gitpython.readthedocs.io/en/3.1.29/reference.html#git.objects.tree.Tree.join
A less redundant implementation of fileInRepo is:
def fileInRepo(repo, filePath):
    try:
        repo.head.commit.tree.join(filePath)
        return True
    except KeyError:
        return False
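Usage is the same as for the earlier versions:
file_found = fileInRepo(repo, 'documents/A')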

Related

MyST-Parser: Auto linking / linkifying references to bug tracker issues

I use sphinx w/ MyST-Parser for markdown, and
I want GitHub or GitLab-style auto linking (linkifying) for references.
Is there a way to have MyST render the reference:
#346
In docutils-speak, this is a Text node (example)
and have it behave as if it were:
[#346](https://github.com/vcs-python/libvcs/pull/346)
So when rendered it'd be like:
#346
Not the custom role:
{issue}`1` <- Not this
Another example: linkifying the reference #user to a GitHub, GitLab, or StackOverflow user.
What I'm currently doing (and why it doesn't work)
Right now I'm using the canonical solution docutils offers: custom roles.
I use sphinx-issues (PyPI), which does just that. It uses a Sphinx setting variable, issues_github_path, to parse the URL:
e.g. in Sphinx configuration conf.py:
issues_github_path = 'vcs-python/libvcs'
reStructuredText:
:issue:`346`
MyST-Parser:
{issue}`346`
Why custom roles don't work
Sadly, those aren't bi-directional with GitHub/GitLab or other tools. If you copy/paste MyST-Parser markdown to GitHub/GitLab or preview it directly, it looks very bad:
Example of CHANGES:
Example issue: https://github.com/vcs-python/libvcs/issues/363
What we want is to just be able to copy markdown including #347 back and forth.
Does a solution already exist?
Are there any docutils or sphinx plugins out there that turn #username or #issue references into links?
sphinx (at least) can demonstrably do so for custom roles - as seen in sphinx-issues' usage of issues_github_path - by using project configuration context.
MyST-Parser has a linkify extension, which uses linkify-it-py.
This can turn a bare https://www.google.com into a clickable link, without needing to write <https://www.google.com>.
Therefore, there may already be a tool out there.
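For reference, enabling linkify is a one-line Sphinx configuration change. A minimal sketch (this only linkifies URL-like text, not #123-style references, so it doesn't solve this problem by itself):

# conf.py - enable MyST's linkify extension (needs linkify-it-py installed)
extensions = ["myst_parser"]
myst_enable_extensions = ["linkify"]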
Can it be done through the API?
The toolchain for myst, sphinx and docutils is robust. This is a special case.
This needs to be done at the Text node level. A custom role won't work - as stated above - since it creates markdown that can't be copied between GitLab and GitHub issues trivially.
The stack:
MyST-Parser API (Markdown-it-py API) > Sphinx APIs (MySTParser + Sphinx) > Docutils API
At the time of writing, I'm using Sphinx 4.3.2, MyST-Parser 0.17.2, and docutils 0.17.1 on python 3.10.2.
Notes
For the sake of an example, I'm using an open source project of mine that is facing this issue.
This is only about autolinking issues or usernames - things that'd easily be mappable to URLs. autodoc code-linking is out of scope.
There is a (defunct) project that does this: sphinxcontrib-issuetracker.
I've rebooted it:
conf.py:
import sys
from pathlib import Path
cwd = Path(__file__).parent
project_root = cwd.parent
sys.path.insert(0, str(project_root))
sys.path.insert(0, str(cwd / "_ext"))
extensions = [
"link_issues",
]
# issuetracker
issuetracker = "github"
issuetracker_project = "cihai/unihan-etl" # e.g. for https://github.com/cihai/unihan-etl
_ext/link_issues.py:
"""Issue linking w/ plain-text autolinking, e.g. #42
Credit: https://github.com/ignatenkobrain/sphinxcontrib-issuetracker
License: BSD
Changes by Tony Narlock (2022-08-21):
- Type annotations
mypy --strict, requires types-requests, types-docutils
Python < 3.10 requires typing-extensions
- TrackerConfig: Use dataclasses instead of typing.NamedTuple and hacking __new__
- app.warn (removed in 5.0) -> Use Sphinx Logging API
https://www.sphinx-doc.org/en/master/extdev/logging.html#logging-api
- Add PendingIssueXRef
Typing for tracker_config and precision
- Add IssueTrackerBuildEnvironment
Subclassed / typed BuildEnvironment with .tracker_config
- Just GitHub (for demonstration)
"""
import dataclasses
import re
import sys
import time
import typing as t
import requests
from docutils import nodes
from sphinx.addnodes import pending_xref
from sphinx.application import Sphinx
from sphinx.config import Config
from sphinx.environment import BuildEnvironment
from sphinx.transforms import SphinxTransform
from sphinx.util import logging
if t.TYPE_CHECKING:
    if sys.version_info >= (3, 10):
        from typing import TypeGuard
    else:
        from typing_extensions import TypeGuard

logger = logging.getLogger(__name__)

GITHUB_API_URL = "https://api.github.com/repos/{0.project}/issues/{1}"


class IssueTrackerBuildEnvironment(BuildEnvironment):
    tracker_config: "TrackerConfig"
    issuetracker_cache: "IssueTrackerCache"
    github_rate_limit: t.Tuple[float, bool]


class Issue(t.NamedTuple):
    id: str
    title: str
    url: str
    closed: bool


IssueTrackerCache = t.Dict[str, Issue]


@dataclasses.dataclass
class TrackerConfig:
    """
    Issue tracker configuration.

    This class provides configuration for trackers, and is passed as
    ``tracker_config`` arguments to callbacks of
    :event:`issuetracker-lookup-issue`.
    """

    project: str
    url: str

    def __post_init__(self) -> None:
        if self.url is not None:
            self.url = self.url.rstrip("/")

    @classmethod
    def from_sphinx_config(cls, config: Config) -> "TrackerConfig":
        """
        Get tracker configuration from ``config``.
        """
        project = config.issuetracker_project or config.project
        url = config.issuetracker_url
        return cls(project=project, url=url)


class PendingIssueXRef(pending_xref):
    tracker_config: TrackerConfig


class IssueReferences(SphinxTransform):
    default_priority = 999

    def apply(self) -> None:
        config = self.document.settings.env.config
        tracker_config = TrackerConfig.from_sphinx_config(config)
        issue_pattern = config.issuetracker_issue_pattern
        title_template = None
        if isinstance(issue_pattern, str):
            issue_pattern = re.compile(issue_pattern)
        for node in self.document.traverse(nodes.Text):
            parent = node.parent
            if isinstance(parent, (nodes.literal, nodes.FixedTextElement)):
                # ignore inline and block literal text
                continue
            if isinstance(parent, nodes.reference):
                continue
            text = str(node)
            new_nodes = []
            last_issue_ref_end = 0
            for match in issue_pattern.finditer(text):
                # catch invalid pattern with too many groups
                if len(match.groups()) != 1:
                    raise ValueError(
                        "issuetracker_issue_pattern must have "
                        "exactly one group: {0!r}".format(match.groups())
                    )
                # extract the text between the last issue reference and the
                # current issue reference and put it into a new text node
                head = text[last_issue_ref_end : match.start()]
                if head:
                    new_nodes.append(nodes.Text(head))
                # adjust the position of the last issue reference in the
                # text
                last_issue_ref_end = match.end()
                # extract the issue text (including the leading dash)
                issuetext = match.group(0)
                # extract the issue number (excluding the leading dash)
                issue_id = match.group(1)
                # turn the issue reference into a reference node
                refnode = PendingIssueXRef()
                refnode["refdomain"] = None
                refnode["reftarget"] = issue_id
                refnode["reftype"] = "issue"
                refnode["trackerconfig"] = tracker_config
                reftitle = title_template or issuetext
                refnode.append(
                    nodes.inline(issuetext, reftitle, classes=["xref", "issue"])
                )
                new_nodes.append(refnode)
            if not new_nodes:
                # no issue references were found, move on to the next node
                continue
            # extract the remaining text after the last issue reference, and
            # put it into a text node
            tail = text[last_issue_ref_end:]
            if tail:
                new_nodes.append(nodes.Text(tail))
            # find and remove the original node, and insert all new nodes
            # instead
            parent.replace(node, new_nodes)


def is_issuetracker_env(
    env: t.Any,
) -> "TypeGuard['IssueTrackerBuildEnvironment']":
    return hasattr(env, "issuetracker_cache") and env.issuetracker_cache is not None


def lookup_issue(
    app: Sphinx, tracker_config: TrackerConfig, issue_id: str
) -> t.Optional[Issue]:
    """
    Lookup the given issue.

    The issue is first looked up in an internal cache. If it is not found, the
    event ``issuetracker-lookup-issue`` is emitted. The result of this
    invocation is then cached and returned.

    ``app`` is the sphinx application object. ``tracker_config`` is the
    :class:`TrackerConfig` object representing the issue tracker configuration.
    ``issue_id`` is a string containing the issue id.

    Return a :class:`Issue` object for the issue with the given ``issue_id``,
    or ``None`` if the issue wasn't found.
    """
    env = app.env
    if is_issuetracker_env(env):
        cache: IssueTrackerCache = env.issuetracker_cache
        if issue_id not in cache:
            issue = app.emit_firstresult(
                "issuetracker-lookup-issue", tracker_config, issue_id
            )
            cache[issue_id] = issue
        return cache[issue_id]
    return None


def lookup_issues(app: Sphinx, doctree: nodes.document) -> None:
    """
    Lookup issues found in the given ``doctree``.

    Each issue reference in the given ``doctree`` is looked up. Each lookup
    result is cached by mapping the referenced issue id to the looked up
    :class:`Issue` object (an existing issue) or ``None`` (a missing issue).

    The cache is available at ``app.env.issuetracker_cache`` and is pickled
    along with the environment.
    """
    for node in doctree.traverse(PendingIssueXRef):
        if node["reftype"] == "issue":
            lookup_issue(app, node["trackerconfig"], node["reftarget"])


def make_issue_reference(issue: Issue, content_node: nodes.inline) -> nodes.reference:
    """
    Create a reference node for the given issue.

    ``content_node`` is a docutils node which is supposed to be added as
    content of the created reference. ``issue`` is the :class:`Issue` which
    the reference shall point to.

    Return a :class:`docutils.nodes.reference` for the issue.
    """
    reference = nodes.reference()
    reference["refuri"] = issue.url
    if issue.title:
        reference["reftitle"] = issue.title
    if issue.closed:
        content_node["classes"].append("closed")
    reference.append(content_node)
    return reference


def resolve_issue_reference(
    app: Sphinx, env: BuildEnvironment, node: PendingIssueXRef, contnode: nodes.inline
) -> t.Optional[nodes.reference]:
    """
    Resolve an issue reference and turn it into a real reference to the
    corresponding issue.

    ``app`` and ``env`` are the Sphinx application and environment
    respectively. ``node`` is a ``pending_xref`` node representing the missing
    reference. It is expected to have the following attributes:

    - ``reftype``: The reference type
    - ``trackerconfig``: The :class:`TrackerConfig` to use for this node
    - ``reftarget``: The issue id
    - ``classes``: The node classes

    References with a ``reftype`` other than ``'issue'`` are skipped by
    returning ``None``. Otherwise the new node is returned.

    If the referenced issue was found, a real reference to this issue is
    returned. The text of this reference is formatted with the :class:`Issue`
    object available in the ``issue`` key. The reference title is set to the
    issue title. If the issue is closed, the class ``closed`` is added to the
    new content node.

    Otherwise, if the issue was not found, the content node is returned.
    """
    if node["reftype"] != "issue":
        return None
    issue = lookup_issue(app, node["trackerconfig"], node["reftarget"])
    if issue is None:
        return contnode
    classes = contnode["classes"]
    conttext = str(contnode[0])
    formatted_conttext = nodes.Text(conttext.format(issue=issue))
    formatted_contnode = nodes.inline(conttext, formatted_conttext, classes=classes)
    return make_issue_reference(issue, formatted_contnode)


def init_cache(app: Sphinx) -> None:
    if not hasattr(app.env, "issuetracker_cache"):
        app.env.issuetracker_cache: "IssueTrackerCache" = {}  # type: ignore
    return None


def check_project_with_username(tracker_config: TrackerConfig) -> None:
    if "/" not in tracker_config.project:
        raise ValueError(
            "username missing in project name: {0.project}".format(tracker_config)
        )


HEADERS = {"User-Agent": "sphinxcontrib-issuetracker v{0}".format("1.0")}


def get(app: Sphinx, url: str) -> t.Optional[requests.Response]:
    """
    Get a response from the given ``url``.

    ``url`` is a string containing the URL to request via GET. ``app`` is the
    Sphinx application object.

    Return the :class:`~requests.Response` object on status code 200, or
    ``None`` otherwise. If the status code is not 200 or 404, a warning is
    emitted via ``app``.
    """
    response = requests.get(url, headers=HEADERS)
    if response.status_code == requests.codes.ok:
        return response
    elif response.status_code != requests.codes.not_found:
        msg = "GET {0.url} failed with code {0.status_code}"
        logger.warning(msg.format(response))
    return None


def lookup_github_issue(
    app: Sphinx, tracker_config: TrackerConfig, issue_id: str
) -> t.Optional[Issue]:
    check_project_with_username(tracker_config)

    env = app.env
    if is_issuetracker_env(env):
        # Get rate limit information from the environment
        timestamp, limit_hit = getattr(env, "github_rate_limit", (0, False))

        if limit_hit and time.time() - timestamp > 3600:
            # Github limits applications hourly
            limit_hit = False

        if not limit_hit:
            url = GITHUB_API_URL.format(tracker_config, issue_id)
            response = get(app, url)
            if response:
                rate_remaining = response.headers.get("X-RateLimit-Remaining")
                assert rate_remaining is not None
                if rate_remaining.isdigit() and int(rate_remaining) == 0:
                    logger.warning("Github rate limit hit")
                    env.github_rate_limit = (time.time(), True)
                issue = response.json()
                closed = issue["state"] == "closed"
                return Issue(
                    id=issue_id,
                    title=issue["title"],
                    closed=closed,
                    url=issue["html_url"],
                )
        else:
            logger.warning(
                "Github rate limit exceeded, not resolving issue {0}".format(issue_id)
            )
    return None


BUILTIN_ISSUE_TRACKERS: t.Dict[str, t.Any] = {
    "github": lookup_github_issue,
}


def init_transformer(app: Sphinx) -> None:
    if app.config.issuetracker_plaintext_issues:
        app.add_transform(IssueReferences)


def connect_builtin_tracker(app: Sphinx) -> None:
    if app.config.issuetracker:
        tracker = BUILTIN_ISSUE_TRACKERS[app.config.issuetracker.lower()]
        app.connect(str("issuetracker-lookup-issue"), tracker)


def setup(app: Sphinx) -> t.Dict[str, t.Any]:
    app.add_config_value("mybase", "https://github.com/cihai/unihan-etl", "env")
    app.add_event(str("issuetracker-lookup-issue"))
    app.connect(str("builder-inited"), connect_builtin_tracker)
    app.add_config_value("issuetracker", None, "env")
    app.add_config_value("issuetracker_project", None, "env")
    app.add_config_value("issuetracker_url", None, "env")
    # configuration specific to plaintext issue references
    app.add_config_value("issuetracker_plaintext_issues", True, "env")
    app.add_config_value(
        "issuetracker_issue_pattern",
        re.compile(r"#(\d+)"),
        "env",
    )
    app.add_config_value("issuetracker_title_template", None, "env")
    app.connect(str("builder-inited"), init_cache)
    app.connect(str("builder-inited"), init_transformer)
    app.connect(str("doctree-read"), lookup_issues)
    app.connect(str("missing-reference"), resolve_issue_reference)
    return {
        "version": "1.0",
        "parallel_read_safe": True,
        "parallel_write_safe": True,
    }
Mirrors
https://gist.github.com/tony/05a3043d97d37c158763fb2f6a2d5392
https://github.com/ignatenkobrain/sphinxcontrib-issuetracker/issues/25
Mypy users
mypy --strict docs/_ext/link_issues.py works as of mypy 0.971
If you use mypy: pip install types-docutils types-requests
Install:
https://pypi.org/project/types-docutils/
https://pypi.org/project/types-requests/
https://pypi.org/project/typing-extensions/ (Python <3.10)
Example
via unihan-etl#261 / v0.17.2 (source, view, but page may be outdated)

how to "include" another file as part of a Jenkins Pipeline definition

We have a large project that has multiple separate declarative pipeline file definitions. This is used to build different apps and installers from the single code base.
Right now, all of these files contain a large block of "code" used to generate the email body and JIRA update messages. Examples:
// Get JIRAs to add comments to
// Return map of JIRA id to comment text from all commits for that JIRA
@NonCPS
def getJiraMap() {
    // ... a bunch of stuff ...
    return jiraset
}

// Get the body text for the emails
def getMailBody1() {
    return "See: ${BUILD_URL}\n\nChanges:\n" + getChangeString() + "\n" + testStatuses()
}
etc...
What I would like to do is have all these common methods in a separate file that all the other pipeline files can include. This seems like it SHOULD be easy, but all examples I've found appear to be rather complex involving a separate SCM - which is NOT what I want.
Updates:
Going through the various suggestions given in that link, I made the following file - BuildTools.groovy. Note that this file is in the same directory as the Jenkins pipeline file that uses it.
import hudson.tasks.test.AbstractTestResultAction
import hudson.model.Actionable

class BuildTools {
    // Get JIRAs to add comments to
    // Return map of JIRA id to comment text from all commits for that JIRA
    @NonCPS
    def getJiraMap() {
        def jiraset = [:]
        // .. whole bunch of stuff ..
Here are the various things I've tried, and the results.
File sourceFile = new File("./AutomatedBuild/BuildTools.groovy");
Class gcl = new GroovyClassLoader(getClass().getClassLoader()).parseClass(sourceFile);
GroovyObject bt = (GroovyObject) gcl.newInstance();
Fails with:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method java.lang.Class getClassLoader
evaluate(new File("./AutomatedBuild/BuildTools.groovy"))
def bt = new BuildTools()
Fails with:
15:29:07 WorkflowScript: 8: unable to resolve class BuildTools
15:29:07 @ line 8, column 10.
15:29:07 def bt = new BuildTools()
15:29:07 ^
import BuildTools
def bt = new BuildTools()
Fails with:
15:35:58 WorkflowScript: 16: unable to resolve class BuildTools (note that BuildTools.groovy is in the same folder as this script)
15:35:58 @ line 16, column 1.
15:35:58 import BuildTools
15:35:58 ^
GroovyShell shell = new GroovyShell()
def bt = shell.parse(new File("./AutomatedBuild/BuildTools.groovy"))
Fails with:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new groovy.lang.GroovyShell
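For what it's worth, the sandbox-friendly route I'm aware of (short of a shared library in a separate SCM) is the built-in load step. A rough sketch, assuming BuildTools.groovy is checked out into the workspace and ends with return this:

// Hypothetical sketch using the built-in 'load' pipeline step, which avoids
// the sandbox-blocked GroovyClassLoader/GroovyShell approaches above.
def bt
node {
    checkout scm  // make sure BuildTools.groovy is present in the workspace
    bt = load 'AutomatedBuild/BuildTools.groovy'
}
// BuildTools.groovy must end with 'return this'; then: bt.getJiraMap(), etc.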

How can I roll back to a previous version of a deleted file on S3 via the Ruby aws-sdk?

Is there any example of how to roll back to a previous version of a deleted file on S3 via the Ruby aws-sdk?
It looks like the aws-sdk gem does not show deleted files in the list of objects:
s3 = Aws::S3::Resource.new
bucket = s3.bucket('aws-sdk')
bucket.objects.each do |obj|
  if obj.key.start_with?("images/file_name.jpg")
    puts obj.to_yaml
  end
end
You can list previous versions like this:
aws_versions = s3.client.list_object_versions(
  bucket: 'bucket_name',
  prefix: 'images/12345/50x50.jpg'
).versions
https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/S3/Client.html#list_object_versions-instance_method
Download a necessary version like this:
cache = s3.client.get_object(
  bucket: 'bucket_name',
  key: 'images/12345/50x50.jpg',
  version_id: 'your_version_id',
  response_target: Rails.root.join('tmp/images/12345/50x50.jpg')
)
https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/S3/Client.html#get_object-instance_method
And finally, save it to your model:
model.attachment = Rails.root.join('tmp/images/12345/50x50.jpg').open
model.save
PS: It's good to have paper_trail installed, so you can find the previous version's key (the images/12345/50x50.jpg part) in the model's change history.
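A note on the "deleted" part: in a versioned bucket, deleting an object only writes a delete marker on top of the version stack, so another way to roll back is to remove that marker. A hedged sketch (assuming versioning is enabled; bucket and key are placeholders):

# Find the delete marker for the key and remove it; this re-exposes
# the previous version, effectively un-deleting the object.
marker = s3.client.list_object_versions(
  bucket: 'bucket_name',
  prefix: 'images/12345/50x50.jpg'
).delete_markers.first

s3.client.delete_object(
  bucket: 'bucket_name',
  key: marker.key,
  version_id: marker.version_id
) if marker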

Rugged merge commit from origin does not update working tree

Similar to this question, but instead of creating a new file, I'm trying to merge from origin. After creating a new index using Rugged::Repository's merge_commits, and a new merge commit, git reports the new file (coming from origin) as deleted.
Create a merge index,
> origin_target = repo.references['refs/remotes/origin/master'].target
> merge_index = repo.merge_commits(repo.head.target, origin_target)
and a new merge commit,
> options = {
    update_ref: 'refs/heads/master',
    committer: {name: 'user', email: 'user@foo.com', time: Time.now},
    author: {name: 'user', email: 'user@foo.com', time: Time.now},
    parents: [repo.head.target, origin_target],
    message: "merge `origin/master` into `master`"}
and make sure to use the tree from the merge index.
> options[:tree] = merge_index.write_tree(repo)
Create the commit
> merge_commit = Rugged::Commit.create(repo, options)
Check that our HEAD has been updated:
> repo.head.target.tree
=> #<Rugged::Tree:16816500 {oid: 16c147f358a095bdca52a462376d7b5730e1978e}>
<"first_file.txt" 9d096847743f97ba44edf00a910f24bac13f36e2>
<"second_file.txt" 8178c76d627cade75005b40711b92f4177bc6cfc>
<"newfile.txt" e69de29bb2d1d6434b8b29ae775ad8c2e48c5391>
Looks good. I see the new file in the index. Write it to disk.
> repo.index.write
=> nil
...but git reports the new file as deleted:
$ git st
## master...origin/master [ahead 2]
D newfile.txt
How can I properly update my index and working tree?
There is an important distinction between the Git repository and the working directory. While most common command-line git commands operate on the working directory as well as the repository, the lower-level commands of libgit2 / librugged mostly operate on only the repository. This includes writing the index as in your example.
To update the working directory to match the index, the following command should work (after writing the index):
options = { strategy: :force }
repo.checkout_head(options)
Docs for checkout_head: http://www.rubydoc.info/github/libgit2/rugged/Rugged/Repository#checkout_head-instance_method
Note: I tested with update_ref: 'HEAD' for the commit. I'm not sure if update_ref: 'refs/heads/master' will have the same effect.
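Putting the answer together, the tail end of the merge flow would look something like this (a sketch; note the checkout strategy must be the symbol :force):

> repo.index.write
> repo.checkout_head(strategy: :force)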

how can I improve my Rakefile (deployment)

I'm writing my first Rakefile. The first things that I see in the doc is "there is no special format for a Rakefile" and "there is no special syntax in a Rakefile".
Ok, so I had to come up with something on my own, but I can see at least two problems with my creature:
1) I need to create a number of folders, five of them. The sequence of directory tasks looks a bit weird. The list of 5 dependencies in the deploy task looks even more weird. Can I shrink it down to one line somehow?
2) I need to repeat my directory name literals two times - when I define their deployment paths and when I copy the contents. Can I avoid that without introducing 5 more variables?
In Java Ant I would create a properties file with all name literals - can I do that with Rake?
This is what I've got:
WEBAPPSDIR = '/var/webapps/'
WEBAPPNAME = 'foo.local'
WEBAPPDIR = File.join(WEBAPPSDIR, WEBAPPNAME)
VIEWSDIR = File.join(WEBAPPDIR, 'views')
PUBLICDIR = File.join(WEBAPPDIR, 'public')
CSSDIR = File.join(PUBLICDIR, 'css')
IMAGESDIR = File.join(PUBLICDIR, 'images')
TMPDIR = File.join(WEBAPPDIR, 'tmp')
HTMLDIR = File.join(PUBLICDIR, 'html')
directory VIEWSDIR
directory CSSDIR
directory HTMLDIR
directory IMAGESDIR
directory TMPDIR
desc 'Deploy to webapps dir'
task :deploy => [VIEWSDIR, CSSDIR, IMAGESDIR, TMPDIR, HTMLDIR] do
  cp 'config.ru', WEBAPPDIR
  Dir.glob('*.rb') { |f| cp f, WEBAPPDIR }
  Dir.glob('views/*.{mab,str}') { |f| cp f, VIEWSDIR }
  Dir.glob('css/*.css') { |f| cp f, CSSDIR }
  Dir.glob('images/*.{png,jpg,gif}') { |f| cp f, IMAGESDIR }
  Dir.glob('html/*.html') { |f| cp f, VIEWSDIR }
end

desc 'Cleans webapp dir'
task :clean do
  rm_r WEBAPPDIR, force: true
end
Other thoughts/links/examples are welcome too.
This does not really answer your question, but why don't you use Capistrano? If you don't know it already, it's a Ruby tool frequently used to handle deployments smoothly.
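That said, to the original questions: both the directory tasks and the copy rules can be driven from a single hash, so each directory literal appears only once. A rough sketch using the same constants as above (note it assumes the html files belong in HTMLDIR, whereas the Rakefile above copies them to VIEWSDIR):

# One hash maps each target dir to its source glob; the directory
# tasks and the :deploy prerequisites both come from its keys.
COPY_MAP = {
  VIEWSDIR  => 'views/*.{mab,str}',
  CSSDIR    => 'css/*.css',
  IMAGESDIR => 'images/*.{png,jpg,gif}',
  HTMLDIR   => 'html/*.html',
}
COPY_MAP.each_key { |d| directory d }
directory TMPDIR

desc 'Deploy to webapps dir'
task :deploy => COPY_MAP.keys + [TMPDIR] do
  cp 'config.ru', WEBAPPDIR
  Dir.glob('*.rb') { |f| cp f, WEBAPPDIR }
  COPY_MAP.each { |dir, glob| Dir.glob(glob) { |f| cp f, dir } }
end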
