Make a ruby file unreadable to a user - ruby

Can I make a ruby file (e.g script.rb) unreadable to a user?
The file is on an Ubuntu (offline) machine. The user will use a local Sinatra app that will use some ruby files. I don't want the user to see the code in some of those files.
How can I do it?
EDIT:
Can I set up the project in a way that the user will be able to start the app but won't have access to specific files in it?
Thanks

Does this correspond to what you are searching for?
chmod 711 yourfile.rb
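For reference, the same permission change can be made from Ruby; 711 gives the owner full access and everyone else execute-only (assuming the file is in the current directory):

File.chmod(0o711, "script.rb")   # owner: rwx, group/other: execute only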

As I said in my comment, it is almost impossible to hide the content of your Ruby source file; many people try this in many different ways, but it is almost always trivial to reverse engineer. There are some "suggestions" for making your code hidden, but they never really work. Here are a few:
Obfuscation - The process of making your code executable but unreadable. A tool like ProGuard for Java (there are equivalents for most major languages) will try to make your code a mess and as unreadable as possible while still maintaining execution speed. Normally this consists of renaming variables, using strange characters and generally hiding, moving or wrapping functions in complicated structures. (A minimal Ruby sketch of this idea follows the list.)
Package the interpreter - You can use a tool like ocra to package the script up inside an executable together with the interpreter and standard library, but anyone with even a tiny bit of experience in reverse engineering will be able to tear out the source code given a small amount of time.
Write a custom interpreter - Now we are getting somewhere with making it harder. Writing a custom interpreter will allow you to compile your script to a "bytecode" that can then be executed. This is of course a very time consuming, expensive and incompatible solution when it comes to working with other code bases.
Write most of your code in C and then call out to it via extensions - Again, this mostly moves the problem, but it's still there. It will take more time, but anyone can pull apart the machine code of the C library you load in and, Bob's your uncle, they have the source code.
Many more alternatives - This isn't a comprehensive list, I am probably missing a few ideas or suggestions.
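To make the first point concrete, here is a deliberately naive Ruby sketch of the obfuscation idea (the file names are made up); a single Base64 decode undoes it, which is exactly why this kind of thing only deters casual readers:

require "base64"

# Wrap an existing script in a trivial "obfuscated" loader.
source  = File.read("secret_logic.rb")            # hypothetical file
encoded = Base64.strict_encode64(source)

File.write("secret_logic_obfuscated.rb", <<~RUBY)
  require "base64"
  eval(Base64.strict_decode64(#{encoded.dump}))
RUBY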
Making code unreadable is hard; a better solution might just be to provide a licence agreement with your code. That way, if someone reads or modifies the source files, you can take them to court for a legal settlement.

Extract your code and its functionality to an external API. And then provide it as a service. This way you don't have to expose your source code to your 'users'.
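A minimal sketch of that split, assuming a hypothetical endpoint: the Sinatra app that ships to the user is only a thin shim, and the sensitive logic runs on the server behind the API:

require "sinatra"
require "net/http"

get "/report" do
  # Hypothetical service URL; the real work happens server-side.
  Net::HTTP.get(URI("https://api.example.com/report"))
end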

Related

Tcl source vs Tcl package

Let's say that I don't mind giving the full path in each source command: when would I choose a Tcl package over Tcl source? Is package require faster than source?
I know that packages protect us from sourcing the code twice; is that a problem? I'm only sourcing functions, so I don't mind the functions being sourced twice, but is there a performance issue with this?
Of course there is a performance issue.
Think about what happens when you run a source command:
The path supplied to the source command must be opened. The operating system checks the path for permissions and resolves symbolic links. This is minor for your particular case, but it can be a major hit for applications that check file paths over and over (e.g. web servers).
The file is read in. Disk I/O is always slow.
The file is parsed and interpreted. Parsing is always slow; Tcl has a simple rule set, so its parsing is probably faster than some.
Since your functions are replaced, the byte code compiler forgets any optimizations that were in place, and the function will run slower than usual the first time it is used.
Always be aware of what resources (cpu, disk, memory, network) your program is using, and try to minimize the usage.
Opinion: You will find people that just say, "get better hardware". These people are fools and this is the reason why most of the web is so slow. They waste resources needlessly.
Looking at your core question:
Is package require faster than source?
They're not directly comparable.
Packages are a higher-level concept than script files you can source, and are often implemented by sourcing a file or few. There is also a caching mechanism: the first time you do a package require it will definitely be substantially slower than source, since the package management subsystem needs to search through the packages you've got if it doesn't recognise the one you ask for (which actually involves sourcing quite a few pkgIndex.tcl files), but subsequent calls to package require are probably faster, as packages are not loaded twice. Once the internal index is built (normally a once-per-interpreter cost), a package require of a known but not-yet-loaded package is not much slower than directly sourcing its implementation files.
Except there's that “higher level” thing going on: the package may not be implemented by things you can source at all, and might instead be using load of a DLL, or it could do a mix. That's the package's business: all you usually need to know is that the functionality has a name and a version. That contrasts with a direct source, where you need to know exactly where the code is (OK, easy if it is in a known fixed location or is located relative to the current script) and also that that file is exactly what you need. In general, it's better to split policy (e.g., package require foobar 1.2.3) from implementation (e.g., source -encoding utf8 /usr/local/lib/tcl/packages/foobar_1.2.3/foobar.tcl).
A consequence of packages being a one-time thing is that they're not intended for making instances of objects and object-like things (except for those that are effectively documented singletons in the API). You package require to get the construction commands (which might be classes) and then you use those commands to make the instances that you need when you need them.
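In Tcl terms, the practical difference looks something like this (the path and package name are only illustrative):

source /path/to/utils.tcl    ;# re-read, re-parsed and re-compiled on every call
package require http         ;# scans pkgIndex.tcl files once, then answered from the in-memory index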

Recommended tool to automate complicated build procedure

I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I am able to compile Java applications to bytecode(ish), flash that onto, for instance, an Atmega1284P, and run it.
Now I've added support for C applications: I compile and process them using several tools, and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
All steps, roughly, for a C application are as follows and consist of consecutive manual steps:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) convert this c.class file to a jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) manually edit this jasmin file in a text editor by replacing/adjusting some strings
(4) convert the modified jasmin file to a .class file again using jasmin
(5) put this .class file in a folder where the rest of my makefiles (the ones that already make and deploy the OS and class files from Java apps) can take over.
The current option seems to be to just keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles and this would further extend that chain). I've also read a bit about SCons. In essence, I'm wondering which tools are recommended, or what a good approach is, for complicated builds.
Hopefully this may help a bit, but the question as such could probably be the subject of a heated discussion without many helpful results.
As pointed out in the comments by others, you really need to automate the steps, starting with your .c file, to the point where you can integrate it with the rest of your system.
There is generally nothing wrong with make, and you would not win much by switching to SCons. You'd get more ways to express what you want to do; among other things, if you wanted to write that automation directly inside the build system and its rules, you could also use Python and not only shell (though should that be a concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there, along with the need to write the necessary automation for those .c-to-integration steps.
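To make the target/prerequisite/recipe idea concrete for the steps in the question, here is a hedged Makefile sketch; the container image, the ClassFileAnalyzer invocation and fixups.sed are placeholders to adapt rather than verified commands, and recipe lines must be indented with tabs:

BUILD := build

$(BUILD)/%.class.orig: %.c
	docker run --rm -v "$(PWD):/work" lljvm-image /work/$<    # (1) placeholder image and entry point

$(BUILD)/%.j: $(BUILD)/%.class.orig
	java -jar ClassFileAnalyzer.jar $< > $@                    # (2) placeholder invocation

$(BUILD)/%.fixed.j: $(BUILD)/%.j
	sed -f fixups.sed $< > $@                                  # (3) the manual edits, captured once as a sed script

$(BUILD)/%.class: $(BUILD)/%.fixed.j
	java -jar jasmin.jar -d $(BUILD) $<                        # (4)

deploy: $(BUILD)/app.class
	cp $< ../os/classes/                                       # (5) hand over to the existing makefiles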
If you really wanted to look into alternative options, bazel might be of interest to you. The downside is that the initial effort to write the necessary rules to fit your needs could be costly, and depending on the size of your project it might just be too much. On the other hand, once that is done it'd be very easy to use (apply those rules to a growing code base), and you could also ditch the container and rely on bazel's more lightweight sandboxing and external rules to get the tools and bits you need for your build... all with a single system for build description.

Associate Windows directory with program (or treat directory as file)

This is likely not a simple topic - I have researched this to the best of my abilities and realize that it is not supported in any typical fashion.
My goal is to enable something similar to .app files from OSX, where the application, as well as its user data, can exist in the same file. I imagine it would require writing a tool to manage this behaviour, but this question is more about how to achieve this in the Windows OS. I am quite flexible regarding the implementation details, but the more straightforward the behaviour, the better (i.e. avoiding copying or compressing/decompressing entire directories/archives at runtime would be ideal).
Approaches I have considered:
Find a way to get Explorer to treat a directory as a file, so that it can be associated. I have found a way to get Explorer to treat a directory as a control panel item, but I have thus far been unable to find a way to use this to associate a custom program. See the infamous "godmode hack" for Windows (name a directory something to the effect of "GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"). This one seems the most hopeful, but I'm at my wits' end trying to find information about creating a new association of this type.
Come up with some kind of archive format which can extract executable information to a temporary directory, then launch this executable, passing the archive as a command-line parameter. This seems like the ugliest solution from a performance perspective. I would prefer a different solution if at all possible, one which doesn't involve making duplicates of the program or its data to run.
Find a way to associate a directory directly, though I have found no trace of this being supported in Windows, and I assume this is a dead-end.
Find a way to get an executable to include writeable embedded files. I have been unable to make any headway with this - I even tried a resource hacker approach, but obviously you cannot modify the assembly while it's in use.
Tried to make a self-modifying JAR file with Java, but the route I took would add the JDK as a runtime requirement, which seems a bit overkill. Even then, it would be limited to Java, and I'm pretty sure it's not actually supposed to allow that in the first place.
Modify Windows Explorer. I shudder at the amount of work this would take, not to mention the at-best gray area it falls under legally. Perhaps there's a way to extend explorer to achieve this, I'm not sure.
A custom archive file. This seems like the most straightforward way to do it. But it would ideally need to be an archive format that has very little overhead for file I/O. Could even be some kind of virtual disk that gets mounted, but I am imagining that would be pretty heavy.
I would appreciate any insight that anyone has on this topic. I won't go into reasons as they are irrelevant to the question itself- I'm aware it is likely not the most practical solution to anything in particular. Consider it a novel pursuit.
It can be done with application virtualization.
Read these Wikipedia pages for the theory:
https://en.wikipedia.org/wiki/Portable_application
https://en.wikipedia.org/wiki/Application_virtualization
And two pages about software:
https://en.wikipedia.org/wiki/VMware_ThinApp
https://en.wikipedia.org/wiki/Turbo_(software)
Windows 7 added the ability for a Desktop.ini file to add/change the folder verbs on a per-folder basis. Using that trick it is possible to create a "folders as applications" style setup.

Any suggestions of a VB6 Source Code library?

I'm looking for a good VB6 source code library (to extend the language) for things like parsing a path into root, directory, filename and extension, checking whether a file exists, etc.
I'm happy to pay for such a resource if it's a good one (ideally with some sort of reviews/feedback).
I've found one already:
SourcePlus from AxTools
The only downside is that all of their source code is in classes, and I have 12 or so different VB6 apps that all share a lot of common code (.bas modules). So if I use one of the classes in one of my Common.Bas routines, I then need to add the class to 12 different programs that all use that Common.Bas module.
I'd really prefer to have the code in either one big class or a .bas module so I can add it one time to each VB6 app and be done with it.
FYI, I've also used with good success http://www.planet-source-code.com/vb/ and of course StackOverflow but I'd be happy to buy something comprehensive and well done.
I'm in favour of buying libraries in general, but you really can do a lot of file related tasks with free code. Karl E Peterson's excellent VB6 website has some great objects written entirely in VB6 - I think they're as reliable as most things I've ever bought. You could just put them in a COM DLL if you dislike managing the classes.
Convert a file path to drive and directory only.
Check whether a directory exists.
Check whether a file exists.
...I could go on
Can you build their source as a COM dll and just refer to it from your own projects?

How to mimic DropBox functionality with Ruby script?

I would like to upload documents to GoogleDocs every time the OS hears that a file was added/dragged/saved in a designated folder, just the way DropBox uploads a file when you save it in the DropBox folder.
What would this take in Ruby, what are the parts?
How do you listen for when a File is Saved?
How do you listen for when a File is added to a Folder?
I understand how to use the GoogleDocs API and upload things once I get these events, but I'm not sure how this would work.
Update
While I still don't know how to check if a file is added to a directory, listening for when a file is saved is now dirt simple, thanks to Guard for ruby.
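For the watching half, here is a minimal sketch using the listen gem (which Guard builds on); the directory and the upload helper are placeholders:

require "listen"

# Watch a designated folder and react to filesystem events.
listener = Listen.to("/home/me/upload_box") do |modified, added, removed|
  (modified + added).each do |path|
    upload_to_google_docs(path)   # hypothetical method wrapping the GoogleDocs API
  end
end

listener.start
sleep   # keep the process alive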
If I were faced with this, I would use something like git or bzr to handle the version checking and just call add then commit from your script and monitor which files have changed (and therefore need to be uploaded).
This adds the benefit of full version control over files and it's mostly cross platform (if you include binaries for each platform).
Note this doesn't handle your listening problem, just what you do when you know something has changed. You could schedule the task (via various routes) but I still like the idea of a proper VCS under the hood.
I just found this: http://www.codeforpeople.com/lib/ruby/dirwatch/
You'd need to read over it as I can't vouch for its efficiency or reliability. It appears to use SQLite, so it might be better just to manually check once every 10 seconds (or something along those lines).
Ruby doesn't include a built-in way to "listen" for updates to files. If you want to stick to pure Ruby, your best bet would be to perform the upload on a fixed schedule (say every 5 minutes) regardless of when the file is saved.
If this isn't an acceptable alternative, you could try writing the app (or at least certain parts of it) in Java, which does support this type of thing. Take a look at JRuby for integrating the Ruby and Java portions of your app.
Here is a pure ruby gem:
http://github.com/TwP/directory_watcher
I don't know the correct way of doing this, but a simple hack would be to have a script running in the background which checks the contents of a bunch of folders every n minutes and uses the associated timestamps to determine whether a file was modified in that span of time.
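A rough sketch of that background-polling idea (the folder and interval are arbitrary, and the upload call is just a placeholder):

WATCH_DIR = "/home/me/upload_box"
INTERVAL  = 60   # seconds between scans

last_scan = Time.now
loop do
  changed = Dir.glob(File.join(WATCH_DIR, "**", "*")).select do |path|
    File.file?(path) && File.mtime(path) > last_scan
  end
  changed.each { |path| puts "would upload #{path}" }   # hook the GoogleDocs upload in here
  last_scan = Time.now
  sleep INTERVAL
end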
You would definitely need some native OS code here, to write the monitoring service/client. I'd select C++ if you want it to be cross-platform. If you decide to go with .NET, for example, you can use the FileSystemWatcher class to achieve what you need (see its documentation and related articles).
Kind of an old thread, but I am faced with doing something similar and wanted to throw in my thoughts. The route I'm going is to have a ruby script that watches a given directory and checks the timestamps. Once all files have been uploaded, the script saves the latest timestamp and then polls the directory again, checking if any files/folders have been added. If files are found, then the script uploads them and updates the global timestamp, etc...
The downside is that setting up a ruby script to run continually (or as a service) is somewhat painful. But it's not an overwhelming task, just needs to be thought out properly.
Also depends on if your users are competent enough to have ruby installed or if you have to package everything up into a one-click installer as well. That, to me, is the hardest part to figure out.
