Tcl source vs Tcl package - performance

Let's say that I don't mind giving the full path in each source command: when would I choose a Tcl package over a plain source? Is package require faster than source?
I know that packages protect us from sourcing the same code twice; is that actually a problem? I'm only sourcing function definitions, so I don't mind them being sourced twice, but is there a performance issue with this?

Of course there is a performance issue.
Think about what happens when you run a source command:
1. The path supplied to source must be opened. The operating system checks the path for permissions and resolves symbolic links. This is minor for your particular case, but it can be a major hit for applications that check file paths over and over (e.g. web servers).
2. The file is read in. Disk I/O: always slow.
3. The file is parsed and interpreted. Parsing is always slow, though Tcl has a simple rule set, so its parsing is probably faster than some.
4. Since your functions are replaced, the byte-code compiler forgets any optimizations that were in place, and each function will run slower than usual the first time it is used after re-sourcing.
Always be aware of what resources (CPU, disk, memory, network) your program is using, and try to minimize that usage.
Opinion: you will find people who just say, "get better hardware". These people are fools, and this is the reason most of the web is so slow: they waste resources needlessly.
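If you want to see the difference for yourself, Tcl's built-in time command makes a quick comparison easy. A minimal sketch, assuming a hypothetical script mylib.tcl and a hypothetical package mylib wrapping the same code:

    # re-sourcing pays the open/read/compile cost on every call
    puts "source x100:          [time { source /path/to/mylib.tcl } 100]"
    # the first require pays the index search; later calls are a cheap lookup
    package require mylib
    puts "package require x100: [time { package require mylib } 100]"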

Looking at your core question:
Is package require faster than source?
They're not directly comparable.
Packages are a higher-level concept than script files you can source, and are often implemented by sourcing a file or two. There is also a caching mechanism, so the costs differ by call: the first time you do a package require, it will definitely be substantially slower than a plain source, as the package management subsystem needs to search through the packages you've got if it doesn't recognise the one you ask for (which actually involves sourcing quite a few pkgIndex.tcl files). Subsequent calls to package require are probably faster, as packages are not loaded twice. Once the internal index is built (normally a once-per-interpreter cost), a package require of a known but not-yet-loaded package is not much slower than directly sourcing its implementation files.
Except there's that "higher level" thing going on: the package may not be implemented by things you can source at all, and might instead use load on a DLL, or a mix of the two. That's the package's business: all you usually need to know is that the functionality has a name and a version. That contrasts with a direct source, where you need to know exactly where the code is (OK, easy if it is in a known fixed location or located relative to the current script) and also that that file is exactly what you need. In general, it's better to split policy (e.g., package require foobar 1.2.3) from implementation (e.g., source -encoding utf-8 /usr/local/lib/tcl/packages/foobar_1.2.3/foobar.tcl).
A consequence of packages being a one-time thing is that they're not intended for making instances of objects and object-like things (except for those that are effectively documented singletons in the API). You package require to get the construction commands (which might be classes) and then you use those commands to make the instances that you need when you need them.
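For reference, the machinery behind package require is small. A minimal sketch, with hypothetical names and paths: the implementation file declares the package with package provide, and a pkgIndex.tcl beside it registers a load script via package ifneeded:

    # foobar.tcl -- the implementation
    namespace eval foobar {
        proc greet {who} { return "hello, $who" }
    }
    package provide foobar 1.2.3

    # pkgIndex.tcl -- read during index scanning; only registers, never loads
    package ifneeded foobar 1.2.3 [list source [file join $dir foobar.tcl]]

With that directory (or its parent) on auto_path, package require foobar 1.2.3 anywhere in the interpreter runs the registered script at most once.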

Related

Combine a set of shell scripts with internal dependencies into one?

I'm developing a set of shell scripts, and for ease of development the functions are often split across various files.
So the final scripts, which I expect the end user to run, require all the relevant "library" scripts to be installed in the right location.
I am trying to find a way that lets me develop the scripts with the same logical split across files, but then merge them all into a single distributable script.
In the naive case, the tool would recursively go through all the sourced files and inline them into the one file (similar to the preprocessing step in C compilers). A more involved version would also identify which functions are unused and trim them out.
Does anything like this exist? If not, I might consider writing it, but I would be happy to hear about potential pitfalls that I should account for.
I have seen this before, in Arch Linux's devtools repo: they use m4 to process .in files.
It's just templating, though, and you might not need anything more.
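As a hedged sketch of that templating approach (file names hypothetical): with m4's -P flag the builtins are prefixed with m4_, so a build step can splice a library into the final script:

    # lib/common.sh -- shared helper functions
    say() { printf '%s\n' "$*"; }

    #!/bin/sh
    # myscript.in -- the "logical" script; m4_include is expanded at build time
    m4_include(lib/common.sh)
    say "hello from the bundled script"

    # build step: produce the single distributable script
    m4 -P myscript.in > myscript && chmod +x myscript

Note this naive form inlines whole files only; trimming unused functions would still need a real parser.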

Associate Windows directory with program (or treat directory as file)

This is likely not a simple topic - I have researched this to the best of my abilities and realize that it is not supported in any typical fashion.
My goal is to enable something similar to .app files from OSX, where the application, as well as its user data, can exist in the same file. I imagine it would require writing a tool to manage this behaviour, but this question is more about how to achieve this in the Windows OS. I am quite flexible regarding the implementation details, but the more straightforward the behaviour, the better (i.e. avoiding copying or compressing/decompressing entire directories/archives at runtime would be ideal).
Approaches I have considered:
Find a way to get Explorer to treat a directory as a file, so that it can be associated. I have found a way to get Explorer to treat a directory as a control panel item, but I have thus far been unable to find a way to use this to associate a custom program. See the infamous "GodMode" hack for Windows (name a directory something to the effect of "GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"). This one seems the most hopeful, but I'm at wits' end trying to find information about creating a new association of this type.
Come up with some kind of archive format which can extract executable information to a temporary directory, then launch that executable, passing the archive as a command-line parameter. This seems like the ugliest solution from a performance perspective, and I would prefer a different one if at all possible: one which doesn't involve making duplicates of the program or its data in order to run.
Find a way to associate a directory directly, though I have found no trace of this being supported in Windows, and I assume it is a dead end.
Find a way to get an executable to include writable embedded files. I have been unable to make any headway with this; I even tried a resource-hacker approach, but obviously you cannot modify the assembly while it's in use.
I tried to make a self-modifying JAR file with Java, but the route I took would add the JDK as a runtime requirement, which seems a bit overkill. Even then, it would be limited to Java, and I'm pretty sure it's not actually supposed to allow that in the first place.
Modify Windows Explorer. I shudder at the amount of work this would take, not to mention the at-best grey legal area it falls under. Perhaps there's a way to extend Explorer to achieve this; I'm not sure.
A custom archive file. This seems like the most straightforward way to do it, but it would ideally need to be an archive format with very little overhead for file I/O. It could even be some kind of virtual disk that gets mounted, but I imagine that would be pretty heavy.
I would appreciate any insight that anyone has on this topic. I won't go into reasons as they are irrelevant to the question itself- I'm aware it is likely not the most practical solution to anything in particular. Consider it a novel pursuit.
It can be done with application virtualization.
Read these Wikipedia pages for the theory:
https://en.wikipedia.org/wiki/Portable_application
https://en.wikipedia.org/wiki/Application_virtualization
And two pages about concrete implementations:
https://en.wikipedia.org/wiki/VMware_ThinApp
https://en.wikipedia.org/wiki/Turbo_(software)
Windows 7 added the ability for a Desktop.ini file to add/change the folder verbs on a per-folder basis. Using that trick it is possible to create a "folders as applications" style setup.
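A rough sketch of how I understand that trick; the key names here are an assumption to verify against current Microsoft documentation, and the folder itself must carry the read-only (or system) attribute for its Desktop.ini to be honored:

    ; Desktop.ini inside the target folder
    ; DirectoryClass is assumed to point at a registered ProgID whose
    ; shell verbs then apply to this folder
    [.ShellClassInfo]
    DirectoryClass=MyApp.Bundle

    ; MyApp.reg -- hypothetical ProgID supplying the "open" verb
    Windows Registry Editor Version 5.00
    [HKEY_CLASSES_ROOT\MyApp.Bundle\shell\open\command]
    @="\"C:\\Path\\To\\MyApp.exe\" \"%1\""

Mark the folder with attrib +r and refresh Explorer to test.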

gentoo: how to delete all config files when unmerging a package (from its ebuild)

I am making my own personal package to have a collection of useful programs and configs. The main idea is to emerge this package and have the system prepared to my preferences. Mostly it works (it simply depends on all my favourite programs), but I have two problems here:
how do I install USE flags, unmasks and such before the affected programs are installed?
how do I uninstall it? (emerge --unmerge does NOT delete files in /etc, so even after uninstalling the package, the USE flags (and the rest) are still kept; my intent is to REMOVE them, so the next rebuild of world would NOT use them anymore. Yes, that means a lot of programs would lose some functionality, like support for some languages or for some other programs; that is the desired result.)
My solutions so far are:
The package has some files in /etc/portage/package.*:
1.1. I emerge that package with --nodeps (so the config files are installed).
1.2. I emerge it again without that flag (so the dependencies are installed with the right configuration).
I create (and install) a script that parses /var/db/packages for my package's CONTENTS and deletes all the /etc/portage/* files "manually"; I have to run this script before unmerging the package (a sketch follows below).
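(For reference, a sketch of such a cleanup script, assuming the VDB layout where each installed package has a CONTENTS file with lines like "obj /path/to/file <md5> <mtime>"; the package atom is hypothetical:)

    #!/bin/sh
    # delete this package's /etc/portage entries before unmerging it
    CATPKG="app-misc/my-personal-1.0"    # hypothetical category/package-version
    awk '$1 == "obj" && $2 ~ "^/etc/portage/" { print $2 }' \
        "/var/db/packages/${CATPKG}/CONTENTS" |
    while read -r f; do
        rm -v -- "$f"
    done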
Is there a better way to do it?
You're just doing/understanding it wrong! (sorry :)
First of all, instead of a metapackage (an empty ebuild that has only runtime dependencies), there are other ways:
use sets to describe your preferred packages, and manage your USE flags in the usual way (including per-package USE if needed);
a medium-complexity solution is to write a metapackage ebuild (your current case); but you can't mask/unmask USE flags that way anyway…
if you already have your own overlay (obviously), defining your own profile would solve everything! There you can manage everything just the way you want: mask/unmask any USE flags, define what the predefined system set means to you, etc…
Unfortunately, I don't use Gentoo's portage (and emerge) and have no idea whether it's possible to have multiple additive profiles. I have my own profiles here, and that works perfectly with Paludis.
Second, never remove any (config-protected) configuration files on uninstall! No package does that, and there is a bunch of reasons for it… The main one is that the user may have modified them and doesn't want to lose his changes. Moreover, I personally prefer to keep every config I've ever touched in a dedicated VCS repo; it wouldn't be nice if someone other than me removed something…
Imagine a real-life example: a user wants to reinstall some package, and he has a bunch of configuration files he spent some time carefully editing. The trivial way is to uninstall and then install again. Oops! He has lost his configs!
Moreover, from an ebuild's POV you have the pkg_prerm and pkg_postrm functions, but both of them are also called at upgrade time (i.e. when an unmerge is immediately followed by a merge phase). You have to be really careful to distinguish those use cases… And, scarier, with "hardcoded" (and unique) rules in a package, the user has no influence over them…
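(A hedged sketch, assuming EAPI 4 or later, where REPLACED_BY_VERSION is non-empty in pkg_prerm/pkg_postrm exactly when an upgrade is replacing the package:)

    # hypothetical ebuild fragment
    pkg_prerm() {
        if [[ -z ${REPLACED_BY_VERSION} ]]; then
            # a true unmerge, not an upgrade: removal-only cleanup goes here
            einfo "final removal of ${PN}"
        fi
    }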
So, please, never remove any config-protected files; let the user take care of them (he is the boss, not the package manager)…
Update: if you really want to be able to remove some config-protected files, setting up your own profile looks like an even better idea. You can set CONFIG_PROTECT_MASK to forcibly unprotect files and/or directories. That way you don't need to modify any ebuilds or write ugly cleanup code.
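(A minimal sketch with a hypothetical profile layout: a make.defaults entry that strips config protection from your own drop-in paths, so unmerging removes them cleanly:)

    # profiles/my-profile/make.defaults
    # files under these paths are no longer config-protected
    CONFIG_PROTECT_MASK="/etc/portage/package.use/mypkg /etc/portage/package.unmask/mypkg"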

Effective way of distributing go executable

I have a Go app which relies heavily on static resources like images and jars. I want to install that Go executable on different platforms, like Linux, Mac and Windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since the files (~100 of them, ~20 MB or so) are fairly large, it takes a really long time to build the executable. I thought a single executable would be an easy thing for people to download and run, but it seems that is not an effective approach here.
I then thought of writing an installation package for each platform, e.g. .rpm or .deb packages. Each package would contain all the resources and put them into some platform-specific predefined location that the Go executable can reference. The catch is that I would have to handle that in the Go code: if it is Windows, load the files from, say, c:\go-installs; if it is Linux, load them from, say, /usr/local/share/go-installs. I want the Go code to be as platform-agnostic as it can be.
Or is there some other strategy for this?
Thanks
Possibly does not qualify as a real answer, but still…
As to your point №2, one way to handle this is to exploit Go's support for conditional compilation: create a set of files like res_linux.go, res_windows.go, etc., and declare the same variable in each, pointing to a different location, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go and so on. Then in the rest of the program just reference the res.InstallsPath variable and use the path/filepath package to construct full pathnames of actual resources.
Of course, another way to go is to do a runtime switch on the runtime.GOOS variable, possibly in an init() function in one of the source files.
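A minimal sketch of that runtime variant (the res package name and the two paths match the assumptions above):

    package res

    import "runtime"

    // InstallsPath points at the platform's resource directory,
    // resolved once at program start.
    var InstallsPath string

    func init() {
        switch runtime.GOOS {
        case "windows":
            InstallsPath = `C:\go-installs`
        default: // "linux", "darwin", the BSDs, ...
            InstallsPath = `/usr/share/myapp`
        }
    }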
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files; almost "xcopy deployment".
Note that while on Windows you could just have your executable extract the directory from its own pathname (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux, *BSD, etc.) the resource file should still be located under /usr/share/myapp or a similar place dictated by the FHS (or a particular distro's rules), so some logic to locate that file will still be required.
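A hedged sketch of that second variant; the archive and member names are hypothetical, and os.Executable is used as a more robust stand-in for the os.Args[0] idea above:

    package main

    import (
        "archive/zip"
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    func main() {
        // resources.zip is assumed to sit next to the executable
        exe, err := os.Executable()
        if err != nil {
            panic(err)
        }
        r, err := zip.OpenReader(filepath.Join(filepath.Dir(exe), "resources.zip"))
        if err != nil {
            panic(err)
        }
        defer r.Close()
        for _, f := range r.File {
            if f.Name != "images/logo.png" { // hypothetical member
                continue
            }
            rc, err := f.Open()
            if err != nil {
                panic(err)
            }
            data, err := io.ReadAll(rc)
            rc.Close()
            if err != nil {
                panic(err)
            }
            fmt.Printf("read %d bytes of %s\n", len(data), f.Name)
        }
    }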
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant, to let the downstream packagers tweak the pathnames. If this is proprietary (or just niche) software, the second idea appears rather OK, as you'll play the role of downstream packager yourself.

Make a ruby file unreadable to a user

Can I make a Ruby file (e.g. script.rb) unreadable to a user?
The file is on an Ubuntu (offline) machine. The user will use a local Sinatra app that will use some ruby files. I don't want the user to see the code in some of those files.
How can I do it?
EDIT:
Can I set up the project in such a way that the user will be able to start the app but won't have access to specific files in it?
Thanks
Does that correspond to what you are searching for?

    chmod 711 yourfile.rb
As I said in my comment, it is literally almost impossible to hide the content of your Ruby source file; many people have tried, in many different ways, but it is almost always trivial to reverse engineer. There are some "suggestions" for making your code hidden, but they never really work. Here are a few:
Obfuscation - the process of making your code executable but unreadable. A tool like ProGuard for Java (there are ones for most major languages) will try to make your code a mess and as unreadable as possible while still maintaining execution speed. Normally this consists of renaming variables, using strange characters, and generally hiding, moving or wrapping functions in complicated structures.
Package the interpreter - you can use a tool like Ocra to package the script inside an executable together with the interpreter and the standard library, but anyone with even a tiny bit of experience in reverse engineering will be able to tear the source code out, given a small amount of time.
Write a custom interpreter - now we are getting somewhere with making it harder. Writing a custom interpreter lets you compile your script to a "bytecode" that can then be executed. This is, of course, a very time-consuming, expensive and incompatible solution when it comes to working with other code bases.
Write most of your code in C and call out to it via extensions - again, this mostly moves the problem, but it's still there. It will take more time, but anyone can pull apart the machine code of the C library you load in, and Bob's your uncle: they have the source code.
Many more alternatives - this isn't a comprehensive list; I am probably missing a few ideas or suggestions.
As far as it goes, making code unreadable is hard; a better solution might just be to provide a licence agreement with your code. That way, if someone reads or modifies the source files, you can take them to court for a legal settlement.
Extract your code and its functionality to an external API, and then provide it as a service. That way you don't have to expose your source code to your 'users' at all.
