Validation test failure for build of jq-1.5 on FreeBSD 9.x... Now what? - installation

A note in the README file said to ask questions here, so I am doing so.
The RIPEstat service has just shut off their port 43 plain-text service and is now forcing everyone to access their data using jq. I have zero experience with or knowledge of jq, but I am forced to give it a try. I have just built the thing from sources (jq-1.5) on my crusty old FreeBSD 9.x system and the build completed OK, but one of the post-build verification tests (tests/onigtest) failed. I am looking at the test-suite.log file, but none of what's in there means anything to me. (Unfortunately, I am also new to Stack Overflow, and thus have no idea how to even upload a copy of that here so that the maintainer can peruse it.)
So, my questions:
1) Should I even worry about the failure of tests/onigtest?
2) If I should, then what should I do about this failure?
3) What is the best and/or most proper way for me to get a copy of the test-suite.log file to the maintainer(s)?

Should I even worry about the failure of tests/onigtest?
If the only failures are related to onigtest, then most likely only the regex filters will be affected.
what should I do about this failure?
According to the jq download page, there is a pre-built binary for FreeBSD, so you might try that.
From your brief description, it's not clear to me what exactly you did, but if you haven't already done so, you might also consider building an executable from a git clone of "master" as per the guidelines on the download page; see also https://github.com/stedolan/jq/wiki/Installation#or-build-from-source
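If it helps, a rough sketch of that build-from-master route might look like the following; this is only a sketch based on the general build guidelines, and the use of gmake (rather than BSD make) and the --with-oniguruma=builtin flag are assumptions you may need to adjust for your FreeBSD setup:
git clone https://github.com/stedolan/jq.git
cd jq
git submodule update --init          # pulls in the bundled oniguruma sources
autoreconf -fi
./configure --with-oniguruma=builtin
gmake                                # GNU make is usually the safer choice on FreeBSD
gmake check                          # re-runs the test suite, including tests/onigtest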
What is the best and/or most proper way for me to get a copy of the test-suite.log file to the maintainer(s)?
You could create a ticket at https://github.com/stedolan/jq/issues

Related

Is there a safe way to modify another package's entry in the RPM database?

I've run into the problem described in this question, where an old package was Obsoleted, and its %preun script is run with $1 = 0, resulting in undesirable behavior. I know this could be worked around by using -e + -i, as suggested in that answer, or the --nopreun flag, but it's difficult to get that information out to users who are accustomed to simply using -U.
I can't modify the existing %preun scripts in the wild. I don't see any way to run additional code from the new package after the old one's preun. I can't find any way to have my new package programmatically prevent the old %preun script from executing.
Is there any safe way to reach into the RPM database and remove a scriptlet for an existing package?
Jeff Johnson is absolutely correct that it should not be done. However it certainly can be done.
I have done this in an RPM at work, for distribution, but note, this was a contained semi-structured environment with no remote hands to all systems.
If you have remote hands access, take the "remove, install" path, and script that.
If you really feel you should be doing this, then these are the pointers. I'm not going to show you exactly how I did it because it was "work" and not mine to share. The concepts are mine :-)
First, back up the /var/lib/rpm/Packages file (cp /var/lib/rpm/Packages /var/tmp/Packages.bkp). Put it somewhere safe. Update your backup if anyone else changes the system whilst you are working on your solution. Make regular checks on the count of RPMs and test every which way from Sunday, after each change or step.
You will need to use the db_unload and db_load commands. For speed, you will need to use "s2p" to convert any shell sed patterns to perl. Then build a pipe which looks like this:
db_unload /var/tmp/Packages.bkp | perl -e "<s2p-converted script>" | db_load /var/tmp/Packages.new
You can then attempt to test Packages.new by copying it over the original. Always run rpm --rebuilddb after manual changes. If you see any errors, restore the backup and rebuild the db again.
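For illustration only, the whole manual procedure might be scripted roughly like this, keeping the db_unload/db_load pair described above; the edit expression is a placeholder you would replace with your own s2p-converted script:
cp /var/lib/rpm/Packages /var/tmp/Packages.bkp        # back up the header database first
rpm -qa | wc -l                                       # note the package count for comparison later
db_unload /var/tmp/Packages.bkp | perl -e "<s2p-converted script>" | db_load /var/tmp/Packages.new
cp /var/tmp/Packages.new /var/lib/rpm/Packages        # only once you are happy with Packages.new
rpm --rebuilddb                                       # always rebuild after manual changes
rpm -qa | wc -l                                       # the count should match the earlier number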
If you need to put it in an RPM, then convert it to Lua, and put it in the pretrans or posttrans scriptlet (%pretrans -p <lua>). The selection depends on the ordering you are trying to achieve. The Lua interpreter is built in to rpm, and so it will run OK during a new system install even if your RPM gets called somehow. I wrapped my "pipe" in a lua long string, and made it only execute if the system already exists. It does nothing otherwise. If you are thinking "that will never happen" then check out "Never say Never".
BTW you can completely stiff your RPM base and thus future administration of the system if you mess this up. If you do that, and have no backup or way out, it would be a hard way to learn that you are responsible for your own actions. Just saying that you have been warned!
No, you cannot edit an rpmdb: the headers are protected from change by a SHA1 digest or a digital signature. Instead, upgrade to a fixed version of the package using --nopreun to prevent running the buggy scriptlet.
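In other words, the upgrade would look something like the following (the package file name is of course hypothetical):
rpm -Uvh --nopreun fixed-package-2.0-1.x86_64.rpm     # upgrade without running the old package's %preun scriptlet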

Find files created or modified by an installer

To track changes in the OS X filesystem while an installer runs, I'm trying to use fs_usage.
Can somebody guide me with a simple example of how to interpret the result? There are a lot of terms I don't understand when I examine it.
fs_usage's output tends to be full of irrelevant chatter, and hard to interpret. I'd recommend using fseventer, which just shows changed files without all the nonsense. If you're an Apple developer, you can also use PackageMaker's snapshot package feature (which records everything that happens, and offers to make an installer package that does the same things).
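If you do stick with fs_usage, one way to cut down the chatter is to restrict it to filesystem events for the installer process; a rough sketch, where the process names (Installer and its installd helper) are assumptions you may need to adjust:
sudo fs_usage -w -f filesys Installer installd    # -w: wider output, -f filesys: only filesystem-related calls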

Can Xcode run straight from source control without install

I am managing a build lab and have several products/branches to provide service to and I would like my build machines not to be specialized to any one product/branch.
The scenario I would like to have is that the source and all tools needed to build it are checked into source control, so I can just sync and build, with some prep/env setup beforehand via script.
This is very doable with Visual Studio and many other tools. Is it possible with Xcode? Has anyone gotten a scenario like this to work?
Some system components may need to be shared. Since this is such an atypical scenario, documentation will not be readily available. I would suggest asking on the Xcode-users mailing list that Apple maintains, as you may get a more certain answer.
I doubt if this is possible. There are 2 possible ways I know of.
First, which we also follow in our project:
Source code for all projects in checked in the common repository.
A remote server is configured to point to this repository.
The remote server has Xcode pre-installed. A pre-written script with steps including workspace cleanup, checking out fresh code, building the code, and packaging the output is already fed into the remote server. The Xcode-related steps use xcodebuild (see the sketch below).
The remote server can be configured in 3 ways: a) build the source code on every check-in, b) build the source code when triggered by a user, c) build the source code on a schedule.
Build results are emailed to the configured email addresses.
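A rough sketch of such a build script follows; the repository URL, project and target names are placeholders, and the exact xcodebuild flags depend on your Xcode version:
#!/bin/sh
rm -rf build-ws && mkdir build-ws && cd build-ws              # workspace cleanup
svn checkout http://svn.example.com/myproduct/trunk source    # or a git clone, as appropriate
cd source
xcodebuild -project MyProduct.xcodeproj -target MyProduct -configuration Release clean build
# packaging and emailing of results would follow here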
The second way is continuous integration with Mac OS X Server.
Just in case you find the exact system config you are looking for, please post an answer here to enlighten us as well.

What do you do with redundant code?

I have a class, which is part of a code library project, that was written for a particular purpose that is no longer required. So the question is: what do you do with code like this? Do you simply delete it? Do you leave it in, bearing in mind that future developers may come across it and not realise that they can disregard it? Or do you have some sort of archive system? Is there a recognised "pattern" that is in use?
Delete it. You can always get it back from the version control system later. You do have version control, don't you?
As Neil said, delete it. If I'm hired to maintain your project years after you are done with it and it's still full of dead code.. I'm gonna haunt you. And not the ooooohhhhh nice kinda haunting.. but the ARRRRWWWGGGGGRRRR!!!!! annoying kind of haunting.
It depends.
If it is unused because it is obsolete, I would clean it from the current code base by deleting it. If it turns out that it is in fact needed, you can always retrieve it from source control.
If it is unused at the moment, but may be used in the near future, I would keep it in the current code base as I wouldn't expect fellow developers to browse the source control for features just in case. In other words: if you delete something that has a high chance of being used, chances are that someone will re-implement it.
If it is not used anywhere and no longer required, you should delete it to avoid confusion.
You didn't say what language you are using, but in C#/Visual Studio you can use the Obsolete attribute to tell other developers not to use the code; if you set the error argument to true, this will break the build anywhere the code is being used.
I would start off by tagging the out-dated code elements with the Obsolete attribute. That way you will be able to locate any code that refers to the out-dated elements, giving you a way to update those parts. When you no longer get any compiler warnings that you use obsoleted code, go ahead and delete it.
Update: OK, now I was thinking .NET and C#, but I am sure many other languages have similar features...
I try to keep my application code as small as possible. Library code should remain compatible for a number of releases; after that, remove it or just mark it as deprecated.
I totally agree with Neil. Use SVN or any other version control system to keep track of your code and delete anything that is redundant. Too much commented-out code only makes your code hard to read, and in some cases makes debugging impossible.
The best option is to remove the code so you have a cleaner repository. Most of the time it is just a short-term feeling that you are deleting something of potentially enormous value.
Counting on SVN in case a fellow programmer needs the code later won't really work, because you have to know the code existed in the first place, and then someone has to scan through the SVN history.
If I really think I want to keep that code, then I usually make an archive out of the files, add it with a description to our wiki, and then delete the code. Through the wiki search someone can find the code, scan it using the archive, and, since the description contains the repository and revision number, even resurrect the parts they need easily.
If it's a lot of code, reusable, and/or difficult to reproduce, I usually put it into a file called <projectname>_rubbish.<ext>. Not very elegant, but I can easily ignore it and also find it again easily when I do need it.
Install GIT then:
cd <code repo>
git init .
git add .
git commit -m 'initial import for my old code'
... Refactor the code ...
git add <path/to/file/with/changes/>
git commit -m 'that feels much better... :)'
... Create an account on GitHub or set up a Git server
git remote add origin <remote git repo>
git push origin master
And you're done... :)
Simply delete it. If it is no longer required, there is no point in keeping it.

Lightweight version control for small projects (prototypes, demos, and one-offs) [closed]

Background
I work on a lot of small projects (prototypes, demos, one-offs, etc.). They are mostly coded in Visual Studio (WPF or ASP.NET with code written in C#). Usually, I am the only coder. Occasionally, I work with one other person. The projects come and go, usually in a matter of months, but I have a constantly evolving set of common code libraries that I reuse.
The problem
I've tried to use source control software before (SourceGear Vault), but it seemed like a lot of overhead when working on a small project, especially when I was the only programmer. Still, I would like some of the features that version control offers.
Here's a list of features I'd like to have:
1. Let me look at any file in an older version of my project instantly. Please don't force me through the rigmarole of (1) checking in my current work, (2) reverting my local copy to the old version, and (3) checking the current version back out so I can once again work on it.
2. In fact, if I'm the only one on the project, I don't ever want to check out. The only thing I want to be able to do is say, "Please save what I have now as version 2.5."
3. Store my data efficiently. If I have 100 Mb of media in my project, I don't want that to get copied with every new version I release. Only copy what changes.
4. Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library. I don't want to have to keep copying my library to other projects every time I make a change.
5. However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
6. Please don't make me store a special database server on my machine that makes my computer take longer to start up and/or uses resources when I'm not even programming.
Does this exist?
If not, how close can I get?
Edit 1: TortoiseSVN impressions
I did some experimenting with Subversion. A couple observations:
Once you check something in to a repository, it does stuff to your files. It puts these hidden .svn folders inside your project folders. It messes with folder icons. I have yet to get my project back to "normal". The "Unversion a working copy" approach got me part of the way there, but I still have folders with blue question mark icons. This makes me grumpy :-/ Update: finally got rid of the folder icons by manually creating new folders and copying the folders over. (Not good.)
I installed the open source plugin for Visual Studio (AnkhSVN). After creating a fresh repository on my hard drive, I attempted to check in a solution from Visual Studio. It did exactly what I was afraid it would do. It checked in only the folders and files that are physically (from the POV of the file system) inside my solution folder. In order to accomplish item #5 above, I need all source code used by the solution to be checked in. I attempted to do this by hand, but it wasn't a user-friendly process (for one thing, when I selected multiple library projects at once and attempted to check them in, it only appeared to check in the first one). Then, I started getting error dialogs when I tried to check in subsequent projects.
So, I'm a little frustrated with SVN (and its supporting software) at this point.
Edit 2: TortoiseHG impressions
I'm trying out Mercurial now (TortoiseHG). It was a little bit difficult to figure out at first, no better or worse than TortoiseSVN I'd say. I noticed an RPC Server on startup (relates to item 6). I figure it should be possible to turn this off if I'm not sharing anything with anyone, but it wasn't something I could figure out just by looking at the options (will check out the help later).
I do appreciate having my local repository as just a single .hg folder. And, simply throwing the folder in the Recycle Bin seemed to be all I needed to do to return everything back to normal (i.e., unversion my project). When I check in (commit), it seems to offer a simple comment window only. I thought maybe there would be a place to put version numbers.
My (probably not very clever) attempt to add a Windows shortcut (a folder aliasing my library projects) failed, not that I really thought it would work :) I thought maybe this would be a sneaky way to get my library projects (currently located elsewhere) included in the repository. But no. Maybe I'll try out "subrepos", but that feature is under construction. So, iffy that I'll be able to do items 4 and 5 without some manual syncing.
Any of the distributed source control solutions seem to match your requirements. Take a look at bazaar, git or mercurial (already mentioned above). Personally I have been using bazaar since v0.92 and have no complaints.
Edit: Heck, after looking at it again, I'm pretty sure any of those 3 solutions handles all 6 of your requested features.
Distributed Version Control Systems (Mercurial, Bazaar, Git) are nice in that they can be completely self-contained in a single directory (.hg, .bzr, .git) in the top of the working copy, where Subversion uses a separate repository directory, in addition to .svn directories in every directory of your working copy.
Mercurial and Subversion are probably the easiest to use on Windows, with TortoiseHG and TortoiseSVN; the Bazaar GUIs have also been improving. Apparently there is also TortoiseGit, though I haven't tried it. If you like the command line, Easy Git seems to be a bit nicer to use than the standard git commands.
I'd like to address point 4, common libraries, in more detail. Unfortunately I don't think any of them will be too easy to use, since I don't think they're directly supported by GUIs (I could be wrong). The only one of these I've actually used in practice is Subversion Externals.
Subversion is reasonably good at this job; you can use Externals (see the chapter in the SVN book), but to associate versions of a project with versions of a library you need to "pin" the library revision in the externals definition (which is itself versioned, as a property of the directory).
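For example, pinning a library as an external on the project's top-level directory might look like this (the URL and revision number are placeholders, and the property format shown is the SVN 1.5+ style):
svn propset svn:externals '-r 1234 http://svn.example.com/repo/common-lib/trunk lib' .
svn commit -m "Pin common-lib at r1234 as an external"
svn update        # pulls the external into ./lib at the pinned revision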
Mercurial supports something similar, but both of its options seem a bit immature: the subrepository support built into the latest version, and the "Forest Extension".
Git has "submodule" support.
I haven't seen anything like sub-repositories or sub-modules for Bazaar, unfortunately.
I think Fog Creek's new product, Kiln, will get you pretty close. In response to your specific points:
1. This is easily done through the web interface -- you don't need to touch your local copy or update. Just find the file you want, click the revision you want to see, and your code will be in front of you.
2. I'm not sure you can do things exactly like "Please save this as version 2.5", but you can add unique tags to changesets that allow you to identify a special revision (where "special" can mean whatever you want it to).
3. Mercurial does a great job of this already (which Kiln uses in the back end), so there shouldn't be any problems in this regard.
4. By creating different repositories, you can easily have one central 'core' section which is consistent across various projects (though I'm not entirely sure if this is what you're talking about).
5. I think most version control systems allow you to do this...
6. Kiln is hosted, so there's no hit on performance to your local machine. The code you commit to the system is kept safe and secure.
Best of all, Kiln is free for up to two licenses by way of their Student and Startup Edition (which also gets you a free copy of FogBugz).
Kiln is in public beta right now -- you can request your account at my first link -- and users are being let in as more and more problems are resolved. (For some idea of what current beta users are saying, take a look at the Kiln Knowledge Exchange site that's dedicated to feedback.)
(Full Disclosure: I am an intern currently working at Fog Creek)
For your requirements I would recommend subversion.
Let me look at any file in an older version of my project instantly. Please don't force me through the rigmarole of (1) checking in my current work, (2) reverting my local copy to the old version, and (3) checking the current version back out so I can once again work on it.
You can use the repository browser of Tortoise Svn to navigate to every existing version easily.
In fact, if I'm the only one on the project, I don't ever want to check out. The only thing I want to be able to do is say, "Please save what I have now as version 2.5."
This is done by svn copy . svn://localhost/tags/2.5 -m "Tag version 2.5".
Store my data efficiently. If I have 100 Mb of media in my project, I don't want that to get copied with every new version I release. Only copy what changes.
Subversion gives you this for free: it stores deltas, so only the changes take up space.
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library. I don't want to have to keep copying my library to other projects every time I make a change.
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
Put your libraries into the same svn repository as your remaining code and you'll have global revision numbers to switch back all to a common state.
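So, for instance, to see the whole tree (projects plus libraries) as it was at an earlier global revision, something like this would do, where the revision number is hypothetical:
svn update -r 1500    # winds the entire working copy, library code included, back to revision 1500
svn update            # returns to HEAD when you are done looking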
Please don't make me store a special database server on my machine that makes my computer take longer to start up and/or uses resources when I'm not even programming.
You only have to start svnserve to start a local server. If you only work on one machine you can even do without this and use your repository directly.
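As a sketch of that no-server setup on a single machine (paths are placeholders), you can create the repository locally and access it through file:// URLs, with no svnserve process running at all:
svnadmin create C:\Repos\dev                                       # one-time repository creation
svn import MyProject file:///C:/Repos/dev/MyProject/trunk -m "Initial import"
svn checkout file:///C:/Repos/dev/MyProject/trunk MyProject        # working copy, no server needed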
I'd say that Mercurial along with TortoiseHg will do what you want. Of course, since you don't seem to be requiring much, subversion with TortoiseSvn should serve equally well, if you only ever work alone, though I think mercurial is nicer for collaboration.
Mercurial:
1. hg cat --rev 2.5 filename (or "Annotate Files" in TortoiseHg)
2. hg commit ; hg tag 2.5
3. Mercurial stores (compressed) diffs (and "keyframes" to avoid having to apply ten thousand diffs in a row to find a version of a file). It's very efficient unless you're working with large binary files.
4. Symlink the library into all the projects?
5. OK, now that I read this point I'm thinking Mercurial's Subrepos are closer to what you want. Make your library a repository, then add it as a subrepository in each of your projects. When your library updates you'll need to hg pull in the subrepos to update it, unfortunately. But then when you commit in a project Mercurial will record the state of the library repo, so that when you check out this version later to see what it looked like you'll get the correct version of the library code. (See the sketch after this list.)
6. Mercurial doesn't do that, it stores data in files.
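Relating to point 5, a minimal subrepository setup might look like this; the paths are placeholders, and the .hgsub file simply maps a directory inside the project to the library's own repository:
cd MyProject
hg clone C:/Repos/common-lib lib                      # nest a clone of the library inside the project
echo lib = C:/Repos/common-lib > .hgsub               # tell Mercurial that ./lib is a subrepo
hg add .hgsub
hg commit -m "Track common-lib as a subrepository"    # records the library's exact revision in .hgsubstate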
Take a look at Fossil; it's a single exe file.
http://www.fossil-scm.org
As people have pointed out, nearly any DVCS will probably serve you quite well for this. I thought I would mention Monotone since it hasn't been mentioned already in the thread. It uses a single binary (mtn.exe), and stores everything as a SQLite database file, with nothing at all in your actual workspace except a _MTN directory at the top level (and .mtn-ignore, if you want to ignore files). To give you a quick taste, here are the mtn commands showing how one carries out your wishlist:
Let me look at any file in an older version of my project instantly.
mtn cat -r t:1.8.0 readme.txt
Please save what I have now as version 2.5
mtn tag $(mtn automate heads) 2.5
Store my data efficiently.
Monotone uses xdelta to only save the diffs, and zlib to compress the deltas (and the first version of each file, for which of course there is no delta).
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library.
Monotone has explicit support for this; quoting the manual: "The purpose of merge_into_dir is to permit a project to contain another project in such a way that propagate can be used to keep the contained project up-to-date. It is meant to replace the use of nested checkouts in many circumstances."
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
mtn up -r t:1.8.0
Please don't make me store a special database server on my machine
SQLite can be, as far as you're concerned, a single file on your disk that Monotone stores things in. There is no extra process or startup craziness (SQLite is embedded, and runs directly in the same process as the rest of Monotone), and you can feel free to ignore the fact that you can query and manipulate your Monotone repository using standard tools like the sqlite command line program or via Python or Ruby scripts.
Try Git. There are lots of positive comments about it on the web.
