How to preserve version-related metadata in an archive while shipping - bash

As the title says, I would like to know the best practice for preserving the version of the software when shipping it as a tarball.
I already include the version in the archive file name, but that is lost once the user unpacks it. After a while, I won't even know which version of the software a user is running.
I'm looking for something like the MANIFEST.MF file that Java uses, but I haven't found any standard practice for this.
I could extend the build script to generate such a file and write the version into it on every build. But is there an existing practice I can use instead?
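There doesn't appear to be a tarball-wide standard comparable to Java's MANIFEST.MF; the common convention is exactly what you describe: have the build generate a small VERSION file and pack it into the archive, so the version survives unpacking. A minimal sketch of such a build step (the name "myapp", the build/ staging directory, and the version string are all illustrative assumptions):

    #!/usr/bin/env bash
    set -euo pipefail

    NAME="myapp"           # hypothetical package name
    VERSION="1.4.2"        # hypothetical; often derived from git describe or set by CI
    STAGE="$(mktemp -d)"

    # Stage the files to ship, then drop the version metadata alongside them.
    mkdir -p "$STAGE/$NAME-$VERSION"
    cp -r build/. "$STAGE/$NAME-$VERSION/"    # assumes build output lives in build/
    printf 'version=%s\nbuilt=%s\n' "$VERSION" "$(date -u +%FT%TZ)" \
        > "$STAGE/$NAME-$VERSION/VERSION"

    # The version now survives unpacking twice over: in the top-level
    # directory name and in the VERSION file itself.
    tar -czf "$NAME-$VERSION.tar.gz" -C "$STAGE" "$NAME-$VERSION"
    rm -rf "$STAGE"

Deriving VERSION from the VCS (e.g. git describe --tags) keeps the archive name and the file contents from ever disagreeing.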

Related

Deploy a runtime-loaded plugin with a dependency on a DLL – where to put that DLL?

We are building an audio plugin that can be loaded by a wide range of audio production applications. To make it as compatible as possible with all common hosts, we actually build three versions of it (Steinberg VST2 format, Steinberg VST3 format, Avid AAX format), which is achieved by wrapping our core plugin code with wrappers for those three APIs. All three versions are installed in the standard location specified for each format.
Our plugin now depends on Microsoft's onnxruntime, which we want to link against dynamically. What is the right way of deploying and handling this dependency? Since the plugin is loaded at runtime by the user's host software of choice, placing the DLL next to the executable is not an option: we don't know which host the user will run, nor which of the three plugin formats that host will choose.
As a macOS developer, I'm unfamiliar with Windows best practices here.
Ideally we would like to install the DLL into a custom location. But that would require modifying the system's PATH variable to ensure the DLL is found for all users when a host loads one of our plugins, right? I'm not sure that's a clean solution.
Another option could be to install the DLL into C:\Windows\System32, but my research revealed that there is no versioning information on the DLLs located there. So if some other application has installed onnxruntime there as well (or if it's a newer Windows installation that already ships with onnxruntime), how could we ensure that its version is equal to or greater than the version our plugin needs (in which case we wouldn't overwrite it) or below our minimum required version (in which case we would replace it)? This generally seems like bad practice to me as well.
So what's common best practice on Windows for such scenarios? Am I overlooking a proper solution?

How to use Subversion with HelpNDoc

I am writing documentation for a project that involves multiple developers. We use Subversion (SVN) to work on our code base.
I wrote the first draft of the documentation using HelpNDoc, which I like for its nice tree view and ease of use. The problem is that everything is kept in a single file, so I don't know how to use SVN to let other developers contribute to the documentation and update it.
Do you know if this is possible? If not, can you recommend a nice, easy-to-use tool with a tree view of the documentation that can be used with SVN, or that otherwise makes it possible for multiple users to update it? We use Windows.
HelpNDoc projects are binary files based on the SQLite open-source database engine. The advantage is that the whole documentation is stored in a single file, so it can easily be copied, moved, shared, backed up...
However, one drawback is that it has to be checked in as binary content in any version control system, including Subversion: diff and merge are not possible on those files.
One possible solution would be to use external documents in HelpNDoc's library: each user works on their own document (which can be a Word document, an HTML web page...), and a master HelpNDoc project is created to include those documents at generation time. See "Include a file at generation time" in the following step-by-step guide: How to add an item to the library
The number of files doesn't matter; the actual format (text/* or binary) does. If SVN (or any VCS) can merge two HelpNDoc files with diverged history (just try it by hand), you'll be happy.
I once used Helpinator for software documentation; it's pretty close to HelpNDoc, but its storage format is more suitable for version control.

SonarQube rule to disallow 'forbidden' files

Is there a way to get SonarQube to raise a violation if certain files/folders are found in source?
For example, specifically-named configuration files which contain sensitive data (e.g. passwords) should not be included in version control, and neither should IDE-specific configuration directories like IntelliJ's ".idea" and Eclipse's ".settings" folders.
(Side-note: I'm aware these can/should be part of a global ignore in version control - but that's not what I'm asking about)
I'd like SonarQube to raise a violation during analysis if any of a set of files/folders exist, preferably using a regex-or-similar pattern to do the checking.
I've read up on the fact that SonarQube plugins can be written in Java, but this seems such a simple concept (and one I'm sure isn't unique) that I'm a little surprised I haven't been able to find any existing rules or plugins. The closest I've found is sonar-text-plugin, though that focuses on file contents rather than whether files exist at all.
Before I go reinventing the wheel, is there something pre-existing which could offer this?
SonarQube version 4.5.7 - upgrading is an option if there's no other route.
I can confirm that there is no such built-in feature in SonarQube.
You may want to write a custom rule for the Java plugin.
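If writing a Java rule is more machinery than you want, one lightweight workaround is to fail the build with a plain shell check before analysis runs. This is not a SonarQube rule, just a sketch of the same idea; the forbidden patterns are illustrative:

    #!/usr/bin/env bash
    # Fail the build if any 'forbidden' files or folders exist in the source tree.
    # The patterns below are examples; substitute your own blacklist.
    set -u

    forbidden=0
    while IFS= read -r hit; do
        echo "forbidden file or folder found: $hit" >&2
        forbidden=1
    done < <(find . \( -name '.idea' -o -name '.settings' -o -name '*password*' \) -print)

    exit "$forbidden"

Run it as a CI step before sonar-runner; a non-zero exit stops the build before analysis ever sees the offending files.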

How to reference projects that are not in the same root

Like most people, we use third-party libraries. Many come with source, which we keep in our VCS.
Currently, when these libraries are updated, we have to pull the source manually and rebuild the binaries.
I am trying to find a way to instead reference them from the various solutions that use them, so that they are automatically pulled from source control when you pull the dependent project, and automatically built if they are out of date. It would also be nice to be able to step into them with the provided source while debugging.
The first problem I am having is that the libraries are not in the same solution root as the dependent projects, e.g.:
\Libraries
    \External
        \Lib1
            Lib1.sln
\Products
    \Product1
        Product1.sln
Attempting to add Lib1.csproj to my Product1 solution gives me the warning:
The project that you are attempting to add to source control may cause
other source control users to have difficulty opening this solution or
getting newer versions of it. To avoid this problem, add the project
from a location below the binding root (C:\depot\Products\Products1)
of the other source controlled projects in the solution.
If I ignore this then I can set up build dependencies properly, but it still doesn't allow pulling the entire source tree in one go.
I was wondering how other people set up third-party libraries, particularly when there is source code. (We are using Perforce, but I guess the question is relevant for any VCS.)
One way to solve this in Perforce is to put all modules / third-party software that are meant to be reused into a separate location (depot), for example //shared or similar.
Products (trees in your SCM / Perforce) can "link" the required modules by mapping them into the workspace. In Perforce you can do that via client views.
If you have many people working on many products, you'll need an easy mechanism for setting up a personal workspace for a product properly (without requiring developers to set up their client views by hand).
One possibility to achieve that is a small self-written tool/script that sets up a workspace and prepares the personal client view from a template stored in the product root, which defines which modules from the //shared depot are mapped to which locations in the client workspace. See "Include a file at generation time" is not needed here; a template can be as simple as the sketch below.
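For illustration, such a template might contain little more than the mapping lines themselves, with a placeholder that the script substitutes with the developer's actual client name (all paths here are made up):

    //depot/Products/Product1/...   //__CLIENT__/Product1/...
    //shared/Lib1/...               //__CLIENT__/Libraries/Lib1/...
    //shared/Lib2/...               //__CLIENT__/Libraries/Lib2/...

The script copies these lines into the View field of the generated client spec, so a single sync pulls the product and all of its shared modules in one go.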
We have been using this practice for years and it works fine. The danger is that the client views can get very complex.

How to mimic Dropbox functionality with a Ruby script?

I would like to upload documents to Google Docs every time the OS notices that a file was added/dragged/saved in a designated folder, just the way Dropbox uploads a file when you save it in the Dropbox folder.
What would this take in Ruby, what are the parts?
How do you listen for when a file is saved?
How do you listen for when a file is added to a folder?
I understand how to use the GoogleDocs API and upload things once I get these events, but I'm not sure how this would work.
Update
While I still don't know how to detect when a file is added to a directory, listening for when a file is saved is now dirt simple, thanks to Guard for Ruby.
If I were faced with this, I would use something like git or bzr to handle the version checking: just call add and then commit from your script, and monitor which files have changed and therefore need to be uploaded (a sketch follows below).
This adds the benefit of full version control over the files, and it's mostly cross-platform (if you include binaries for each platform).
Note this doesn't handle your listening problem, just what you do when you know something has changed. You could schedule the task (via various routes) but I still like the idea of a proper VCS under the hood.
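A rough shell sketch of that idea, assuming the watched folder is already a git repository and that upload_to_google_docs is a hypothetical wrapper around the GoogleDocs API mentioned in the question:

    #!/usr/bin/env bash
    set -euo pipefail
    cd "$HOME/watched-folder"    # hypothetical synced directory

    git add -A                   # stage everything, including brand-new files
    # Added/copied/modified/renamed files relative to the last commit are
    # exactly the ones that need uploading.
    changed=$(git diff --cached --name-only --diff-filter=ACMR)

    if [ -n "$changed" ]; then
        git commit -m "snapshot $(date -u +%FT%TZ)"
        while IFS= read -r f; do
            upload_to_google_docs "$f"    # hypothetical uploader
        done <<< "$changed"
    fi

Scheduling this (cron, a Guard hook, or similar) gives you the "what changed" bookkeeping for free while keeping full history.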
I just found this: http://www.codeforpeople.com/lib/ruby/dirwatch/
You'd need to read over it as I can't vouch for its efficiency or reliability. It appears to use SQLite, so it might be better just to manually check once every 10 seconds (or something along those lines).
Ruby doesn't include a built-in way to "listen" for updates to files. If you want to stick to pure Ruby, your best bet would be to perform the upload on a fixed schedule (say every 5 minutes) regardless of when the file is saved.
If this isn't an acceptable alternative, you could try writing the app (or at least certain parts of it) in Java, which does support this type of thing. Take a look at JRuby for integrating the Ruby and Java portions of your app.
Here is a pure Ruby gem:
http://github.com/TwP/directory_watcher
I don't know the correct way of doing this, but a simple hack would be a script running in the background that checks the contents of a set of folders every n minutes and uses the associated timestamps to determine whether a file was modified in that span; something like the shell sketch below.
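In shell terms, that hack could look something like this (the folder and interval are placeholders):

    #!/usr/bin/env bash
    # Background poller: every INTERVAL seconds, print files modified since the last pass.
    DIR="$HOME/watched-folder"    # hypothetical folder to watch
    STAMP=/tmp/watch.stamp
    INTERVAL=600                  # "every n minutes"; here n = 10

    touch "$STAMP"
    while true; do
        sleep "$INTERVAL"
        touch "$STAMP.new"                           # mark the new scan time first,
        find "$DIR" -type f -newer "$STAMP" -print   # so nothing slips between passes
        mv "$STAMP.new" "$STAMP"
    done

Each file printed would then be handed to the uploader; the stamp file plays the role of the saved "latest timestamp".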
You would definitely need some native OS code here to write the monitoring service/client. I'd choose C++ if you want it to be cross-platform. If you decide to go with .NET, for example, you can use the FileSystemWatcher class to achieve what you need (see its documentation and related articles).
Kind of an old thread, but I am faced with doing something similar and wanted to throw in my thoughts. The route I'm going is to have a ruby script that watches a given directory and checks the timestamps. Once all files have been uploaded, the script saves the latest timestamp and then polls the directory again, checking if any files/folders have been added. If files are found, then the script uploads them and updates the global timestamp, etc...
The downside is that setting up a Ruby script to run continuously (or as a service) is somewhat painful. But it's not an overwhelming task; it just needs to be thought out properly.
It also depends on whether your users are comfortable having Ruby installed, or whether you have to package everything up into a one-click installer as well. That, to me, is the hardest part to figure out.
