Cannot `source` shc-compiled scripts - bash

Is there any way to source (include) a compiled script?
I use shc to compile all of my scripts, and when I run them from the command line they work fine on their own. But when a script has to include two other scripts (variables.sh.x and functions.sh.x), it crashes and returns an error saying that binary files cannot be sourced.
Is there any way to accomplish this?
The include code:
source "$(dirname "$0")/variables.sh.x"
source "$(dirname "$0")/functions.sh.x"

shc does not actually compile scripts. It merely obfuscates them by encrypting the script and embedding it in a generated C program; at runtime the real shell still interprets and executes the code, so a working shell is still required and there is no performance gain.
If you absolutely must use this tool to obfuscate your code, you will have to combine everything into a single file before compiling, for example as sketched below.
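A minimal sketch of that build step, assuming the original (pre-shc) sources are variables.sh, functions.sh, and a main.sh with the two source lines removed (the combined file name is illustrative):
# merge the pieces into one script, then obfuscate the result
cat variables.sh functions.sh main.sh > combined.sh
shc -f combined.sh   # produces combined.sh.x (binary) and combined.sh.x.c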

Related

PVS-Studio: No compilation units were found

I'm using PVS-Studio in a Docker image based on ubuntu:18.04 for cross-compiling a couple of files with arm-none-eabi-gcc. After running pvs-studio-analyzer trace -- .test/compile_with_gcc.sh, the strace_out file is successfully created; it's not empty and contains calls to arm-none-eabi-gcc.
However, pvs-studio-analyzer analyze complains that "No compilation units were found". I tried the --compiler arm-none-eabi-gcc flag with no success.
Any ideas?
The problem was in my approach to compilation. Instead of using a proper build system, I used a wacky shell script (surely, I thought, a build system for 3 files is overkill; a shell script won't hurt anybody). And in that script I used grep to redefine one constant in the source, roughly like this:
grep -v -i "#define[[:blank:]]\+${define_name}[[:blank:]]" ${project}/src/main/main.c | ~/opt/gcc-arm-none-eabi-8-2018-q4-major/bin/arm-none-eabi-gcc -o main.o -xc
So the compiler didn't actually compile the proper file; it compiled the output of grep. Naturally, PVS-Studio wasn't able to analyze it.
TL;DR: Don't use shell scripts as a build system.
We have reviewed the strace_out file. It can be handled correctly by the analyzer if the source files and compilers are referenced by absolute paths in the strace_out file. We have a suggestion that might help: "wrap" the build commands in calls to pvs-studio-analyzer trace and pvs-studio-analyzer analyze, placed inside your script (compile_with_gcc.sh). Thus, the script should start with the command:
pvs-studio-analyzer trace --
and end with the command:
pvs-studio-analyzer analyze
This way we make sure that the build and the analysis are run in the same container session. If the proposed method does not help, please describe the project build and analyzer invocation in more detail, command by command. Also tell us whether the container is restarted between the build (which produces strace_out) and the analysis itself.
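Concretely, a restructured compile_with_gcc.sh might look like this sketch (the compiler invocation and paths are illustrative, standing in for whatever the script actually builds):
#!/bin/sh
# trace the whole build, then analyze it, all in the same container run
pvs-studio-analyzer trace -- arm-none-eabi-gcc -c /abs/path/src/main/main.c -o main.o
pvs-studio-analyzer analyze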
It would also help us a lot if you ran the pvs-studio-analyzer command with the optional --dump-log flag and sent us the resulting log. An example of a command that does this:
pvs-studio-analyzer analyze --dump-log ex.log
Also, since it does not seem possible to solve the problem quickly here, it is probably more convenient to continue the conversation via the feedback form on the product website.

Instead of giving command for batch mode, give .scm file path?

It is possible to supply batch commands directly with the -b flag, but if the commands become very long, this is no longer an option. Is there a way to give the path to an .scm script that was written to a file, without having to move the file into the scripts directory?
Not as far as I know. What you give in the -b flag is a Scheme statement, which implies your function has already been loaded by the script-executor process. You can, of course, add more directories that are searched for scripts using Edit>Preferences>Folders>Scripts.
If you write your script in Python, the problem is a bit different, since you can alter the Python path before loading the script code, but the command line remains a bit long.
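For reference, a typical batch invocation looks like this once the function's .scm file sits in one of the registered script folders (the function name is illustrative):
gimp -i -b '(my-batch-function "input.png")' -b '(gimp-quit 0)'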

Shell script sh executable - edit to see script

Is there a way to see the original code of an executable sh script? (I am very new to Linux and trying to understand what things do and such.)
If you know how, I need a very clear step-by-step process so I can just type in the commands and run them.
Thanks for your help. Trying to learn (Windows man for 25 years here).
A shell script specifically can be seen in its original text form by simply printing the contents of the file:
cat disk-space.sh.x
Several caveats:
If you mean an executable rather than a script, the situation is different. Scripts are read by an interpreter at runtime, which then executes them line by line. Executables may be either scripts or ELF binaries; the latter have been transformed from the original source code into a machine-readable form that is much harder for humans to read.
The extension of the file (.sh.x or .x) does not control whether the file contents are executed as a binary or as a script.
If the file really is a script, it may have been obfuscated, meaning that the source code on your system has deliberately been changed to make the resulting file hard to read.
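A quick way to tell the cases apart is the file utility, which inspects the contents rather than the extension (file name illustrative):
file disk-space.sh.x
# a readable script reports something like "Bourne-Again shell script, ASCII text executable"
# an shc-produced binary reports something like "ELF 64-bit LSB executable, x86-64, ..."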

Coding a relative path to file in OS X [duplicate]

I have a Haskell script that runs via a shebang line making use of the runhaskell utility. E.g...
#! /usr/bin/env runhaskell
module Main where
main = do { ... }
Now, I'd like to be able to determine the directory in which that script resides from within the script itself. So, if the script lives in /home/me/my-haskell-app/script.hs, I should be able to run it from anywhere, using a relative or absolute path, and it should know it's located in the /home/me/my-haskell-app/ directory.
I thought the functionality available in the System.Environment module might help, but it fell a little short. getProgName did not seem to provide useful file-path information. I found that the environment variable _ (that's an underscore) would sometimes contain the path to the script as it was invoked; however, as soon as the script is invoked via some other program or parent script, that environment variable seems to lose its value (and I need to invoke my Haskell script from another, parent application).
It would also be useful to know whether I can determine the directory in which a pre-compiled Haskell executable lives, using the same technique or otherwise.
As I understand it, this is historically tricky in *nix. There are libraries for some languages to provide this behavior, including FindBin for Haskell:
http://hackage.haskell.org/package/FindBin
I'm not sure what this will report with a script though. Probably the location of the binary that runhaskell compiled just prior to executing it.
Also, for compiled Haskell projects, the Cabal build system provides data-dir and data-files and the corresponding generated Paths_<yourproject>.hs for locating installed files for your project at runtime.
http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#paths-module
There is a FindBin package which seems to suit your needs and it also works for compiled programs.
For compiled executables, in GHC 7.6 or later you can use System.Environment.getExecutablePath:
getExecutablePath :: IO FilePath
Returns the absolute pathname of the current executable.
Note that for scripts and interactive sessions, this is the path to the interpreter (e.g. ghci).
There is executable-path, which worked with my runghc script. FindBin didn't work for me, as it returned my current directory instead of the script's directory.
I could not find a way to determine the script path from Haskell (which is a real pity, IMHO). However, as a workaround, you can wrap your Haskell script inside a shell script:
#!/bin/sh
SCRIPT_DIR=$(dirname "$0")
runhaskell <<EOF
main = putStrLn "My script is in \"$SCRIPT_DIR\""
EOF
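Invoked by absolute path from anywhere, the wrapper then reports its own directory (the wrapper's file name is illustrative):
prompt$ /home/me/my-haskell-app/wrapper.sh
My script is in "/home/me/my-haskell-app"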

Is it possible to override hashbang/shebang path behavior

I have a bunch of scripts (which can't be modified) written on Windows. Windows allows relative paths in its #! commands. We are trying to run these scripts on Unix, but Bash only seems to respect absolute paths in its #! directives. I've looked around but haven't been able to locate an option in Bash or a program designed to replace an interpreter name. Is it possible to override that functionality, perhaps even by using a different shell?
Typically you can just specify the binary to execute the script, which will cause the #! to be ignored. So, if you have a Python script that looks like:
#!..\bin\python2.6
# code would be here.
On Unix/Linux you can just say:
prompt$ python2.6 <scriptfile>
And it'll execute using the command-line binary. I view the hashbang line as a request to the operating system to use the binary specified on that line, but you can override it by not executing the script as a normal executable.
Worst case, you could write wrapper scripts that explicitly tell the interpreter to execute the code in the script file, for all the platforms you'd be using; for example:
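A minimal sketch of such a wrapper, assuming the Python script from the example is saved as scriptfile.py next to the wrapper (both names are illustrative):
#!/bin/sh
# invoke the interpreter explicitly, so the Windows-style relative
# shebang inside the script is never consulted
exec python2.6 "$(dirname "$0")/scriptfile.py" "$@"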
