Here's the scenario: I'm writing GemB, and I plan to use it in ProjectA, but I want to write GemB in a project-agnostic fashion (as I should). I want GemB to look for a "config" file at a "conventional" path, project_a/config/gem_b.yml.
If that file doesn't exist, GemB can fall back to its own gemb/config/default.yml.
ProjectA (could be a gem, a Rails engine, a Rails app, etc.)
GemB (definitely a gem)
Is there a dynamic way for GemB to get the root of ProjectA (the project it's being used in)? If I could predict the name, I could likely call ProjectA.root. You might suggest calling Rails.root, but I can't guarantee GemB will be used in a Rails project (and in this case that's unlikely anyway, given GemB's target functionality).
Trusting that's a sufficient explanation, let me know if I need to clarify!
Most modern gems don't look for a particular config file. Instead, they expect a config file of any name to be executed, which sets configuration via a .config attribute on the gem's top-level module. For example, I might have this config file:
# config/initializers/ams.rb
ActiveModelSerializers.config.adapter = :json_api
ActiveModelSerializers.config.key_transform = :camel_lower
Now it doesn't matter where the file is located or what it's called. So long as the code is executed, ActiveModelSerializers can look at its own ActiveModelSerializers.config variable and it is good to go.
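For completeness, here is a minimal sketch of how a gem might expose such a config object (the Configuration class and its attributes are illustrative, not taken from any real gem):
module GemB
  class Configuration
    attr_accessor :path, :adapter

    def initialize
      @adapter = :default # built-in defaults live here
    end
  end

  # Memoized, so every caller sees the same configuration object.
  def self.config
    @config ||= Configuration.new
  end
end
An initializer can then run GemB.config.adapter = :json_api before the gem does any work.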
If you wanted a more extensive configuration (by means of a .yml file, for example), you might add a .config.path setting, instruct users of your gem to set the path themselves, and have them create a file in the spot they choose:
# config/initializers/gem_b.rb
GemB.config.path = '/config/gem_b.yml'
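Inside GemB you could then resolve the file lazily, falling back to the default YAML shipped with the gem when no path has been set. A sketch, assuming the file locations from the question:
require "yaml"

module GemB
  # Default config bundled with the gem itself (gemb/config/default.yml).
  DEFAULT_CONFIG = File.expand_path("../config/default.yml", __dir__)

  def self.settings
    file = config.path && File.exist?(config.path) ? config.path : DEFAULT_CONFIG
    @settings ||= YAML.load_file(file)
  end
end
This sidesteps the original problem entirely: GemB never needs to discover ProjectA's root, because ProjectA tells GemB where the file is.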
I am new to the Go programming language. I am hoping to integrate Go code, if possible, into an existing codebase that contains code in several languages. My present organization of code is:
<reverse-TLD>/<component-path>/<code><extension>
where:
<reverse-TLD> is the domain with parts reversed. For example, com.mydomain.mysubdomain.
<component-path> is 1 or more subdirectories under which code lives. For example, image/jpeg.
<code> is the part of a code filename before the extension. For example, jpeg2000.
<extension> is the extension, for example .sh, .py, etc. Taken together with the other elements above, this would give a path such as: com.mydomain.mysubdomain/image/jpeg/jpeg2000.go.
Note that code files other than Go files are in the same directory as Go files.
My issues are:
My existing structure above doesn't include src, pkg, or bin directories. Are there environment or Go env variables that allow me to specify these directories?
The directory <reverse-TLD> and all files under it are read-only. I need the output of the compilation to be based under another directory, given as $BUILD_DIR. That directory can have whatever directories are needed under it.
I am thinking that as a convention, I could use lowercase filenames for Go code that will become an executable command and leading-uppercase filenames for Go code that will become package objects. Is there a best practice naming convention for making this distinction in the Go community?
Is there any problem with my using reverse TLDs? For example, com.mydomain.mysubdomain vs. mysubdomain.mydomain.com.
If the src, pkg, and bin directories are hard requirements, then I think I'll have to write a script that finds Go files, copies them to a temporary directory that meets the requirements, compiles them, and then moves the built artifacts to $BUILD_DIR. But I'm hoping that Go is flexible enough to allow me to do this.
If it is possible, could you show me the commands or environment variables that are needed to compile given the constraints above? And, comments on items 1-4 above are appreciated. Thank you!
That is against Go's conventions and is not recommended practice. Go import paths conventionally begin with the domain in its normal order (for example mydomain.com/image/jpeg, not com.mydomain/image/jpeg), and the go tool's expectations are built around that layout.
I need to reference patients.json from patients.go; here's the folder structure:
If I do:
filepath.Abs("../../conf/patients.json")
it works for go test ./... but fails for revel run
If I do:
filepath.Abs("conf/patients.json")
the exact opposite happens (revel is fine but tests fail).
Is there a way to correctly reference the file so that it works both for tests and normal program run?
Relative paths are always interpreted and resolved against a base path: the current (working) directory. This approach will therefore always have its limitations.
If you can live with always taking care of the proper working directory, you may keep using relative paths.
What I would suggest is to not rely on the working directory, but on an explicitly specified base path. This may have a default value hard-coded in your application (which may be the working directory as well), and you should provide several ways to override its value.
Recommended ways to override the base path against which your "relative" paths are resolved:
Command line flag (see flag package)
Environment variable (see os.Getenv())
Config file with a fixed name in the user's home directory (see user.Current() and User.HomeDir in package os/user)
Once you have the base path, you can get the full path by joining the base path and the relative path. You may use path.Join() or filepath.Join(), e.g.:
// Get base path, from any or from the combination of the above mentioned solutions
base := "/var/myapp"
// Relative path, resource to read/write from:
relf := "conf/patients.json"
// Full path that identifies the resource:
full := filepath.Join(base, relf) // full will be "/var/myapp/conf/patients.json"
I've never used Revel myself but the following looks helpful to me:
http://revel.github.io/docs/godoc/revel.html
revel.BasePath
revel.AppPath
This is not a problem with the path, but with your design.
You should design your code more carefully.
As far as I can tell, you use the same path in your test file and in revel run. I suspect you hard-code your JSON path in your model package, which is not recommended.
A better way is:
have the model package get the JSON path from a global config, or initialize the model with the path, e.g. model := NewModel(configPath), so revel run can initialize the model with any JSON file you want;
hard-code "../../conf/patients.json" only in your xxxx_test.go.
Can you include expressions in the "Output Files" section of a build rule in Xcode? E.g.:
$(DERIVED_FILE_DIR)$(echo "/dynamic/dir")/$(INPUT_FILE_BASE).m
Specifically, when translating Java files with j2objc, the resulting files are saved in subfolders, based on the java packages (eg. $(DERIVED_FILE_DIR)/com/google/Class.[hm]). This is without using --no-package-directories, which I can't use because of duplicate file names in different packages.
The issue is in Output Files, because Xcode doesn't know how to search for the output file at the correct location. The default location is $(DERIVED_FILE_DIR)/$(INPUT_FILE_BASE).m, but I need to perform a string substitution to insert the correct path. However, any expression added as $(expression) gets ignored, as if it were never there.
I also tried to export a variable from the custom script and use it in Output Files, but that doesn't work either, because the Output Files are transformed into SCRIPT_OUTPUT_FILE_X variables before the custom script is run.
Unfortunately, Xcode's build support is pretty primitive (compared to, say, make, which is thirty-odd years older :-). One option to try is splitting the Java source, so that the two classes with the same names are in different sub-projects. If you then use different prefixes for each sub-project, the names will be disambiguated.
A more fragile, but maybe simpler, approach is to define a separate rule for one of the two classes, so that it can have a unique prefix assigned. Then add an early build phase to translate it before any other Java classes, so the rules don't overlap.
For me, the second alternative does work (Xcode 7.3.x) - to a point.
My rule is not for Java, but rather for Google Protobuf, and I tried to maintain the same hierarchy (like your Java package hierarchy) in the generated code as in the source .proto files. Indeed files (.pb.cc and .pb.h) were created as expected, with their hierarchies, inside the Build/Intermediates/myProject.build/Debug/DerivedSources directory.
However, Xcode usually knows to continue and compile the generated output into the current target; here that breaks, as it only looks for files in the actual ${DERIVED_FILE_DIR}, not within sub-directories underneath it.
Could you explain "Output Files are transformed into SCRIPT_OUTPUT_FILE_X" in more detail? I do not understand.
I am writing a small tool in c++. It is actually more of a framework that is open to customization. It has the following directory structure (simplified example).
src/
  main/myexec          # linked to libapple.so
  apple/
    coder/libapple.so
    john/libapple.so
    ...
    james/libapple.so
Here, the directory "coder" is a generic dummy, with some example code to generate libapple.so. Different users can check out this tool, create directories of their own, copy the template code from "coder", and customize it as they wish. Depending on the configure option (indicating the user), the respective libapple.so needs to be generated.
As I mentioned, this is a simplified example. It is not a matter of generic programming, inheritance, etc. In fact, similar to the "apple" folder there are others like "scripts", "docs", "configs", etc., each having similar user-specific folders. Also, the tool will be maintained in a single repository location, to allow me to support and maintain all the code that is not user-specific. As a policy, users are expected to modify and check in only the contents of their own folders.
The problem I am facing is with "configure.ac". I do not want to use the "AC_ARG_WITH" option, as it would require each new user to edit configure.ac. Also, for each user the AC_CONFIG_FILES entries would be exactly the same except for his folder name. I tried using "--enable-user=User" and then AC_SUBST(USERDIR), which also helps in setting "SUBDIRS = @USERDIR@" in Makefile.am. Everything looks good except for the fact that "Makefile.in" is not getting created under the user folder when I specify "AC_CONFIG_FILES([apple/${USERDIR}/Makefile])".
Please advise how to overcome this issue. In the worst case I may end up creating softlinks :(
After one full day of scratching my head, following is the solution that I have come up with.
Create a file "project_makefiles.m4.in" like this:
AC_CONFIG_FILES([apple/USERDIR/Makefile])
Add the below to configure.ac
m4_include([project_makefiles.m4])
Create a wrapper script like "build.sh" which will create "project_makefiles.m4" from "project_makefiles.m4.in" by replacing "USERDIR". This is done before running automake.
I'm writing some code that at run time may create or delete directories within the project path. I haven't really used Ruby for file processing, so I'm really uneasy about having code that, with a few mistypes weeks down the line, could wipe out directories outside of my project path.
Is there any way to make it impossible for the program to delete files outside of its own path, regardless of what's typed in destructive calls?
Pathname is a wrapper class for almost all file operations.
require "pathname"
path= Pathname.new("/home/johannes")
path.directory? # => true
path.children # => [#<Pathname:.bash_history>, #<Pathname:Documents>, #<Pathname:Desktop>]
path.children.each do |p|
p.delete if p.file?
end
Pathname#children does not contain . or .., so you don't accidentally walk up the tree instead of down. If you still don't trust the code, you can even check whether one path is contained in another:
Pathname.new("test") <=> Pathname.new("test/123") # => -1
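Building on that, here is a hedged sketch of a guard that refuses to delete anything outside a sandbox directory (SANDBOX and safe_delete are made-up names for illustration):
require "pathname"

SANDBOX = Pathname.new(File.expand_path("my_project"))

def safe_delete(path)
  target = Pathname.new(File.expand_path(path))
  # Pathname#ascend walks from the target up to the filesystem root, so the
  # check passes only if SANDBOX is the target itself or one of its ancestors.
  inside = target.ascend.any? { |p| p == SANDBOX }
  raise ArgumentError, "#{target} is outside #{SANDBOX}" unless inside
  target.delete
end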
You might want to create a wrapper method around your favourite delete method (or perhaps around the whole class, since deleting files is not the only potentially destructive file operation), which would expand all submitted paths and check whether they begin with your "sandbox" path. You can also try to redefine the delete method itself, if you are willing to cripple it throughout the whole application.
And maybe the cleanest solution of them all would be to create a new user on your system and run your program as that user.
On a POSIX system, you can use Dir.chroot to change the root that your application sees. Then ALL actions, not just delete ones, will be limited to the project directory. This does mean that external commands will be unavailable unless you make them part of your project directory as well.
This is the standard 'sandboxing' method used in Unix-based systems. It can be difficult to set up (eliminating all external dependencies is sometimes hard), but it affords significant protection when configured properly.
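For example (POSIX only; Dir.chroot requires superuser privileges, and the path is illustrative):
# Confine the process: after this, "/" is the project directory and
# nothing outside it is reachable by any file operation.
Dir.chroot("/home/me/my_project")
Dir.chdir("/")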
You could generate an Array of filenames in your project directory using
my_files = Dir["/bla/bla/your/directory/**/*"]
and then simply check if the filename passed to your "delete" function exists in your my_files array.
I'm sure there is a more elegant solution, but this could work ^_^
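A minimal sketch of that check (the glob is the placeholder path from above; delete_if_listed is a made-up name):
my_files = Dir["/bla/bla/your/directory/**/*"]

def delete_if_listed(path, whitelist)
  full = File.expand_path(path)
  File.delete(full) if whitelist.include?(full) # silently skips anything not listed
end

delete_if_listed("some/file.txt", my_files)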
You could use File.expand_path and File.dirname on the input, and check that against __FILE__. So something like this might work:
base = File.dirname(File.expand_path(__FILE__))
# The trailing separator avoids false matches like "/my_project-old".
File.delete(path) if File.expand_path(path).start_with?(base + File::SEPARATOR)
I've got automated tests that routinely create and wipe out directories. I've taken two approaches:
Use /tmp as much as possible. The 'tmpdir' standard library module (Dir.mktmpdir) will create temporary directories and, when used with a block, remove them when the block finishes. Or,
When the code creates a directory that it will later be deleting, it drops a marker file into the directory. When it comes time to delete the directory, if the marker file is not found, the code refuses to delete the directory. A marker file might be called ".ok_to_delete", for example.
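A hedged sketch of that marker-file convention (the marker name comes from the answer; method names are illustrative):
require "fileutils"

MARKER = ".ok_to_delete"

def create_workdir(dir)
  FileUtils.mkdir_p(dir)
  FileUtils.touch(File.join(dir, MARKER)) # drop the marker at creation time
end

def remove_workdir(dir)
  marker = File.join(dir, MARKER)
  raise "refusing to delete #{dir}: no #{MARKER} marker found" unless File.exist?(marker)
  FileUtils.rm_rf(dir)
end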