Join multiple CoffeeScript files into one file? (Multiple subdirectories) - terminal

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a Rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one, but it only seems to work with one directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using Terminal on OS X.
They all have to be joined into one file because otherwise each separate file gets compiled and wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.

From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 through fileN are the paths to the CoffeeScript files you want to compile.
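If you don't want to list every file by hand, you can let the shell build that list for you. For example (a sketch, assuming the src layout from the question; note that find's output order is not guaranteed, and the join order matters, and this assumes no spaces in file names):
coffee -cj app/all.js $(find src -name '*.coffee')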

You could write a shell script or Rake task to combine them first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee.
Adjust the paths to your needs. Also make sure that output.coffee is not inside the path you're searching with find, or the output will end up concatenated into itself.
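If output.coffee really does have to live under the searched tree, one option (a sketch) is to exclude it by name:
find . -type f -name '*.coffee' ! -name 'output.coffee' -print0 | xargs -0 cat > output.coffee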
http://man.cx/find
http://www.rubyrake.org/tutorial/index.html
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory

I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster

The easiest way is to use the coffee command line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the output.js file will be placed. This process is easy to automate if you're writing your app in Node.js.

This helped me (-o output directory, -j join into project.js, -cw compile and watch the coffeescript directory to full depth):
coffee -o web/js -j project.js -cw coffeescript

Use cake to compile them all into one (or more) resulting .js file(s). The Cakefile serves as configuration and controls the order in which your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and set up. Invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see the results.
As I'm also busy learning the best way of structuring files and using CoffeeScript in combination with Backbone and cake, I have created a small project on GitHub to keep as a reference for myself; maybe it will help you too with cake and some basics. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file which is then included in the HTML.
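For reference, a minimal Cakefile sketch of this setup might look like the following (the paths, file order, and output name are hypothetical; it assumes the coffee binary is on your PATH):
{exec} = require 'child_process'

# list the files explicitly so the compile order stays under your control
files = [
  'src/model/log'
  'src/controller/log_controller'
  'src/views/logs/new'
]

task 'build', 'Join all .coffee files and compile to www/js/app.js', ->
  srcs = ("#{f}.coffee" for f in files).join ' '
  exec "coffee --join www/js/app.js --compile #{srcs}", (err, stdout, stderr) ->
    throw err if err
    console.log 'Compiled www/js/app.js'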

Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
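For illustration, the single-file version compiles with --bare to roughly this (CoffeeScript 1.x-style output), with one shared i:
var foo, i;
i = 0;
foo = function() {
  return i++;
};
foo();
Split foo into its own file and compile the files separately, and each compiled file declares its own var i, which is exactly the scope change described above.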
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.

Related

Adding templated arguments to a command in bash

I am setting up a new project with multiple extensions. My goal is to track the code coverage for all extensions. Extensions live in subdirectories of the extensions directory and have multiple source folders. The number of extensions in my project is not final, so I will almost certainly add one or more. Consider a structure like this:
extensions
  extension A
    src
    testsrc
    web
      src
      testsrc
  extension B
    ...
All extensions follow the same structure. I am using the coverage-jdk11 job as described here:
https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html#java-and-kotlin-examples
Now instead of
python /opt/cover2cover.py target/site/jacoco/jacoco.xml src/main/java > target/site/cobertura.xml
I need to add multiple src directories, which is supported. So my current version looks like this:
python /opt/cover2cover.py jacoco.xml extensions/extensionA/src extensions/extensionA/testsrc extensions/extensionA/web/src extensions/extensionA/web/testsrc > cobertura.xml
But this one obviously only supports extensionA. My idea is to iterate through the subdirectories of the extensions directory and create multiple arguments for each subdirectory. But I have no idea how to do this in shell.
In the end I got it working with the following command:
find extensions -maxdepth 3 -type d -regex "^extensions/.*/\(testsrc\|src\)$" | xargs -t python /opt/cover2cover.py hybris/log/junit/jacoco.xml > hybris/log/junit/cobertura.xml
(-maxdepth is placed before -type here, since GNU find warns when it comes after other tests.)
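If you prefer a plain shell loop over find/xargs, a sketch of the same idea with a bash array (paths as in the question) would be:
args=()
for dir in extensions/*/src extensions/*/testsrc extensions/*/web/src extensions/*/web/testsrc; do
  [ -d "$dir" ] && args+=("$dir")
done
python /opt/cover2cover.py hybris/log/junit/jacoco.xml "${args[@]}" > hybris/log/junit/cobertura.xml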

Shell script to verify data packages

I need to make a shell script to check my algorithms against loads of data (test packages saved in .in files; every package contains one folder with the .in files and another with the .out files that hold the expected results).
Sometimes there are about 1000 files in one package, so there's no point in doing it manually. I need some kind of loop which opens each .in file and redirects it to the input of my C++ program, and also redirects the output of the program (saving the results to .out files). But the point is I can't pick this language up as quickly as I need to.
I would also like the script to compare the results of my algorithm to the .out files from the packages.
for f in ExternalIn/*.in; do  # part of the code which runs my algorithm and compares its output to the .out file from the package
Skipping checks for missing files, whitespace-safety, etc., you probably need something like:
for f in ExternalIn/*.in; do
# diff the result of my_cpp_app eating file.in with file.out
# and store the comparison result in file.diff
diff ${f/%.in/.out} <(my_cpp_app <$f 2>/dev/null) > ${f/%.in/.diff}
done
That said, I would probably do it with a find/xargs pipeline, which is not only safer but also allows parallel execution.
Or even write a Makefile for this and use make, which after all is a tool for exactly this kind of work.
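A minimal sketch of that Makefile approach (my_cpp_app stands in for your binary, and recipe lines must start with a tab); make -j then gives you parallel execution for free:
IN_FILES := $(wildcard ExternalIn/*.in)
DIFFS    := $(IN_FILES:.in=.diff)

all: $(DIFFS)

# run the program on each .in file and capture its actual output
%.actual: %.in
	./my_cpp_app < $< > $@

# compare against the expected .out; || true keeps make going on mismatches
%.diff: %.actual %.out
	diff $*.out $*.actual > $@ || true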

Organizing asset files in a Go project

I have a project that contains a folder to manage file templates, but it doesn't look like Go provides any support for non-Go-code project files. The project itself compiles to an executable, but it needs to know where this template folder is in order to operate correctly. Right now I do a search for $GOPATH/src/<templates>/templates, but this feels like kind of a hack to me because it would break if I decided to rename the package or host it somewhere else.
I've done some searching and it looks like a number of people are interested in being able to "compile" the asset files by embedding them in the final binary, but I'm not sure how I feel about this approach.
Any ideas?
Either pick a path (or a list of paths) that users are expected to put the supporting data in (/usr/local/share/myapp, ...) or just compile it into the binary.
It depends on how you are planning to distribute the program. As a package? With an installer?
Most of my programs I enjoy just having a single file to deploy and I just have a few templates to include, so I do that.
I have an example using go-bindata where I build the html template with a Makefile, but if I build with the 'devel' flag it will read the file at runtime instead to make development easier.
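For what it's worth, that 'devel' flag pattern is usually done with Go build tags; a hypothetical sketch (the file name, tag, and template path are all made up):
// assets_devel.go: compiled only when built with `go build -tags devel`

// +build devel

package main

import "io/ioutil"

// loadTemplate reads the template from disk at runtime during development;
// a second file without this tag would return the embedded go-bindata bytes.
func loadTemplate() ([]byte, error) {
	return ioutil.ReadFile("templates/index.tmpl")
}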
I can think of two options: use a -cwd flag, or infer from the cwd and arg 0.
-cwd path/to/assets
path/to/exe -cwd=path/to/exe/assets
Internally, the executable would chdir to wherever cwd points, and then it can use relative paths throughout the application. This has the added benefit that the user can change the assets without having to recompile the program.
I do this for config files. Basically the order goes:
process cmd arguments, looking for a -cwd variable (it defaults to empty)
chdir to -cwd
parse config file
reparse cmd arguments, overwriting the settings in the config file
I'm not sure how many arguments your app has, but I've found this to be very useful, especially since Go doesn't have a standard packaging tool that will compile these assets in.
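A minimal sketch of that -cwd handling (the flag name comes from above; everything else here is hypothetical):
package main

import (
	"flag"
	"log"
	"os"
)

func main() {
	// -cwd defaults to empty, meaning "don't chdir anywhere"
	cwd := flag.String("cwd", "", "directory containing the application's assets")
	flag.Parse()
	if *cwd != "" {
		if err := os.Chdir(*cwd); err != nil {
			log.Fatal(err)
		}
	}
	// from here on, relative paths like "templates/base.html" resolve
	// against the chosen assets directory
}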
infer from arg 0
Another option is to use the first argument and get the path to the executable. Something like this:
here := path.Dir(os.Args[0])
if !path.IsAbs(here) {
	// note: os.Getwd returns (dir, error), so grab the value first
	wd, _ := os.Getwd()
	here = path.Join(wd, here)
}
This will get you the path to where the executable is. If you're guaranteed the user won't move this without moving the rest of your assets, you can use this, but I find it much more flexible to use the above -cwd idea, because then the user can place the executable anywhere on their system and just point it to the assets.
The best option would probably be a mixture of the two. If the user doesn't supply a -cwd flag, they probably haven't moved anything, so infer from arg 0 and the cwd. The cwd flag overrides this.

Using CMake, how can I concat files and install them

I'm new to CMake and I have a problem that I cannot figure out a solution to. I'm using CMake to compile a project with a bunch of optional sub-dirs and it builds shared library files as expected. That part seems to be working fine. Each of these sub-dirs contains a sql file. I need to concat all the selected sql files into one sql header file and install the result. So, one file like:
sql_header.sql
sub_dir_A.sql
sub_dir_C.sql
sub_dir_D.sql
If I did this directly in a makefile, I might do something like the following (only smarter, to deal with just the selected sub-dirs):
cat sql_header.sql > "${INSTALL_PATH}/somefile.sql"
cat sub_dir_A.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_C.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_D.sql >> "${INSTALL_PATH}/somefile.sql"
I have sort of figured out pieces of this, like I can use:
LIST(APPEND PACKAGE_SQL_FILES "some_file.sql")
which I assume I can place in each of the sub-dirs CMakeLists.txt files to collect the file names. And I can create a macro like:
CAT(IN "${PACKAGE_SQL_FILES}" OUT "${INSTALL_PATH}/somefile.sql")
But I am lost about what happens between when CMake initially runs and when make install runs. Maybe there is a better way to do this. I need this to work on both Windows and Linux.
I would be happy with some hints to point me in the right direction.
You can create the concatenated file mainly using CMake's file and function commands.
First, create a cat function:
function(cat IN_FILE OUT_FILE)
  file(READ ${IN_FILE} CONTENTS)
  file(APPEND ${OUT_FILE} "${CONTENTS}")
endfunction()
Assuming you have the list of input files in the variable PACKAGE_SQL_FILES, you can use the function like this:
# Prepare a temporary file to "cat" to:
file(WRITE somefile.sql.in "")
# Call the "cat" function for each input file
foreach(PACKAGE_SQL_FILE ${PACKAGE_SQL_FILES})
  cat(${PACKAGE_SQL_FILE} somefile.sql.in)
endforeach()
# Copy the temporary file to the final location
configure_file(somefile.sql.in somefile.sql COPYONLY)
The reason for writing to a temporary file is so the real target file only gets updated if its content has changed. See this answer for why this is a good thing.
You should note that if you're including the subdirectories via the add_subdirectory command, the subdirs all have their own scope as far as CMake variables are concerned. In the subdirs, using list will only affect variables in the scope of that subdir.
If you want to create a list available in the parent scope, you'll need to use set(... PARENT_SCOPE), e.g.
set(PACKAGE_SQL_FILES
    ${PACKAGE_SQL_FILES}
    ${CMAKE_CURRENT_SOURCE_DIR}/some_file.sql
    PARENT_SCOPE)
All this so far has simply created the concatenated file in the root of your build tree. To install it, you probably want to use the install(FILES ...) command:
install(FILES ${CMAKE_BINARY_DIR}/somefile.sql
        DESTINATION ${INSTALL_PATH})
So, whenever CMake runs (either because you manually invoke it or because it detects changes when you do "make"), it will update the concatenated file in the build tree. Only once you run "make install" will the file finally be copied from the build root to the install location.
As of CMake 3.18, the CMake command line tool can concatenate files using cat. So, assuming a variable PACKAGE_SQL_FILES containing the list of files, you can run the cat command using execute_process:
# Concatenate the sql files into a variable 'FINAL_FILE'.
execute_process(COMMAND ${CMAKE_COMMAND} -E cat ${PACKAGE_SQL_FILES}
                OUTPUT_VARIABLE FINAL_FILE
                WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR})
# Write out the concatenated contents to 'final.sql.in'.
file(WRITE final.sql.in "${FINAL_FILE}")
The rest of the solution is similar to Fraser's response. You can use configure_file so the resultant file is only updated when necessary.
configure_file(final.sql.in final.sql COPYONLY)
You can still use install in the same way to install the file:
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/final.sql
        DESTINATION ${INSTALL_PATH})

Shell scripting: Iteration over files and modifying filenames

Scenario: I use a simple function to minify and compress JS files during deployment, like this:
for i in public/js/*.js; do uglifyjs --overwrite --no-copyright "$i"; done
The problem with this approach is that it minifies and overwrites the original files. I would like to somehow introduce versioning of the minified JS and CSS files.
Let's say I have a variable with the version: "123". How do I modify my script to write files with this version in the name? It should work with CSS and JS files like this:
style.css -> style.123.css
script.js -> script.123.js
Something like this?
VERSION=123; for i in public/js/*.js; do REV=${i/%.js/.${VERSION}.js}; cp "${i}" "${REV}"; uglifyjs --overwrite --no-copyright "${REV}"; done
REV=${i/%.js/.${VERSION}.js} replaces the trailing .js with .123.js (the /% form anchors the match to the end of the string).
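For example:
i=public/js/script.js
echo "${i/%.js/.123.js}"    # prints public/js/script.123.js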
This should work in plain sh too (the sed expression is anchored on the .js suffix so names with more than one dot, like jquery.min.js, stay intact):
VERSION=321; for i in public/js/*.js; do NEW=$(echo "$i" | sed "s/\.js$/.$VERSION.js/"); cp "$i" "$NEW"; uglifyjs --overwrite --no-copyright "$NEW"; done
This is a silly approach. Use a version control tool for versioning your sources.
But of course, you're right not to want to modify the files in place.
Call your source files foo.src.js (or whatever). (Keep the last suffix .js so your editor recognizes the language). Then have a build step which produces foo.js. When you're developing you can have a "null compile" step which just copies foo.src.js to foo.js. When you make a release, then the copy step is changed to do the uglifyjs instead of a copy. Either way, you never edit the generated foo.js by hand, of course, even when it is just a copy. You might want to, instead of just copying, add a comment on top "generated file, do not edit".
The exact details are up to you. You could have foo.js be the name of the original source file and foo.u.js be the uglified one as long as you refer to the right names when loading.
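A sketch of that build step in shell (the .src.js naming is from the description above, and this assumes the uglifyjs v1 CLI from the question, which writes to stdout when --overwrite is omitted):
MODE=${1:-dev}
for src in public/js/*.src.js; do
  out=${src%.src.js}.js
  if [ "$MODE" = release ]; then
    uglifyjs --no-copyright "$src" > "$out"
  else
    # the "null compile": just copy the source file into place
    cp "$src" "$out"
  fi
done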
