`cmse_check_address_range` changes behaviour with compiler upgrade - gcc

I'm using a Cortex-M33 with Arm TrustZone. I have a secure API inside my secure firmware that I can call from my non-secure firmware. Everything worked as expected, at least until I upgraded my compiler from gcc-arm-none-eabi-7-2018-q2-update to gcc-arm-none-eabi-10-2020-q4-major.
The function in question looks like this:
bool __attribute__((cmse_nonsecure_call)) (*Callback_Handler)();

__unused __attribute__((cmse_nonsecure_entry))
bool Secure_SetSomeCallbackHandler(bool (*handler)()) {
    // this cmse-check fails with the compiler version gcc-arm-none-eabi-10-2020-q4-major
    // it works with gcc-arm-none-eabi-7-2018-q2-update though
    handler = cmse_check_address_range(handler, 4, CMSE_NONSECURE);
    if (handler == NULL) {
        return false;
    }
    Callback_Handler = handler;
    return true;
}
I make sure the supplied pointer really is in non-secure space by using cmse_check_address_range. That works with version 7, but if I compile the code with version 10, NULL is returned. I did not change anything in the source or any other part, just the compiler.
I checked for any changes in that function, but even https://github.com/gcc-mirror/gcc/commits/master/libgcc/config/arm/cmse.c does not show any changes whatsoever.
Did anything change? Maybe I'm not using the function as intended (do I need different compiler flags?). But then again, it works with version 7.
Update:
I also posted this in arm embedded toolchain forum:
https://answers.launchpad.net/gcc-arm-embedded/+question/695596
@HsuHau (https://stackoverflow.com/a/66273629/1358283) posted a bug report: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99157

It seems to be a GCC bug in how libgcc checks for CMSE support.
It checks $? for the return value of a gcc command, but inside a Makefile it should be $$? instead.
diff --git a/libgcc/config/arm/t-arm b/libgcc/config/arm/t-arm
index 364f40ebe7f9..3625a2590bee 100644
--- a/libgcc/config/arm/t-arm
+++ b/libgcc/config/arm/t-arm
@@ -4,7 +4,7 @@ LIB1ASMFUNCS = _thumb1_case_sqi _thumb1_case_uqi _thumb1_case_shi \
 HAVE_CMSE:=$(findstring __ARM_FEATURE_CMSE,$(shell $(gcc_compile_bare) -dM -E - </dev/null))
 HAVE_V81M:=$(findstring armv8.1-m.main,$(gcc_compile_bare))
-ifeq ($(shell $(gcc_compile_bare) -E -mcmse - </dev/null >/dev/null 2>/dev/null; echo $?),0)
+ifeq ($(shell $(gcc_compile_bare) -E -mcmse - </dev/null >/dev/null 2>/dev/null; echo $$?),0)
 CMSE_OPTS:=-mcmse
 endif
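To see why the unescaped $? matters: inside a Makefile, $? is make's own automatic variable and is expanded (typically to an empty string) before the $(shell ...) command ever runs, so the comparison against 0 fails and libgcc is silently built without -mcmse. Outside of make, the same probe behaves as intended; the compiler name and CPU flag below are only illustrative for a CMSE-capable target:
$ arm-none-eabi-gcc -mcpu=cortex-m33 -mcmse -E - </dev/null >/dev/null 2>&1; echo $?
0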
I have reported the bug:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99157


stack build error: attribute ‘ghc822’ missing, at (string):1:53

I am attempting to build my haskell project on NixOS.
Running $ stack build gives the following error.
$ stack build
error: attribute ‘ghc822’ missing, at (string):1:53
(use ‘--show-trace’ to show detailed location information)
What does this error mean and how could I proceed? When I run $ stack build --show-trace as suggested, I get the following output, which I do not understand either.
$ stack build --show-trace
Invalid option `--show-trace'
Usage: stack build [TARGET] [--dry-run] [--pedantic] [--fast]
[--ghc-options OPTIONS] [--flag PACKAGE:[-]FLAG]
([--dependencies-only] | [--only-snapshot] |
[--only-dependencies]) ([--file-watch] | [--file-watch-poll])
[--exec CMD [ARGS]] [--only-configure] [--trace] [--profile]
[--no-strip] [--[no-]library-profiling]
[--[no-]executable-profiling] [--[no-]library-stripping]
[--[no-]executable-stripping] [--[no-]haddock]
[--haddock-arguments HADDOCK_ARGS] [--[no-]open]
[--[no-]haddock-deps] [--[no-]haddock-internal]
[--[no-]haddock-hyperlink-source] [--[no-]copy-bins]
[--[no-]copy-compiler-tool] [--[no-]prefetch]
[--[no-]keep-going] [--[no-]force-dirty] [--[no-]test]
[--[no-]rerun-tests] [--ta|--test-arguments TEST_ARGS]
[--coverage] [--no-run-tests] [--[no-]bench]
[--ba|--benchmark-arguments BENCH_ARGS] [--no-run-benchmarks]
[--[no-]reconfigure] [--[no-]cabal-verbose]
[--[no-]split-objs] [--skip ARG] [--help]
Build the package(s) in this directory/configuration
I tried changing my channel to nixos-17.09 instead of nixos-unstable (and running nix-channel --update), but still get the same error.
Output of $ nix-channel --list is shown below.
$ nix-channel --list
stack https://nixos.org/channels/nixos-17.09
nixos https://nixos.org/channels/nixos-17.09
The output of $ nix-env -qaPA 'nixos.haskell.compiler' shows ghc822 to be found.
$ nix-env -qaPA 'nixos.haskell.compiler'
warning: name collision in input Nix expressions, skipping ‘/home/matthew/.nix-defexpr/channels_root/nixos’
nixos.haskell.compiler.ghc6102Binary ghc-6.10.2-binary
nixos.haskell.compiler.ghc704 ghc-7.0.4
nixos.haskell.compiler.ghc704Binary ghc-7.0.4-binary
nixos.haskell.compiler.ghc7102 ghc-7.10.2
nixos.haskell.compiler.integer-simple.ghc7102 ghc-7.10.2
nixos.haskell.compiler.ghc7103 ghc-7.10.3
nixos.haskell.compiler.integer-simple.ghc7103 ghc-7.10.3
nixos.haskell.compiler.integer-simple.ghc742 ghc-7.4.2
nixos.haskell.compiler.ghc742 ghc-7.4.2
nixos.haskell.compiler.ghc742Binary ghc-7.4.2-binary
nixos.haskell.compiler.ghc763 ghc-7.6.3
nixos.haskell.compiler.ghc783 ghc-7.8.3
nixos.haskell.compiler.integer-simple.ghc783 ghc-7.8.3
nixos.haskell.compiler.ghc784 ghc-7.8.4
nixos.haskell.compiler.integer-simple.ghc784 ghc-7.8.4
nixos.haskell.compiler.ghc801 ghc-8.0.1
nixos.haskell.compiler.integer-simple.ghc801 ghc-8.0.1
nixos.haskell.compiler.ghc802 ghc-8.0.2
nixos.haskell.compiler.integer-simple.ghc802 ghc-8.0.2
nixos.haskell.compiler.integer-simple.ghc821 ghc-8.2.1
nixos.haskell.compiler.ghc821 ghc-8.2.1
nixos.haskell.compiler.integer-simple.ghc822 ghc-8.2.2
nixos.haskell.compiler.ghc822 ghc-8.2.2
nixos.haskell.compiler.integer-simple.ghcHEAD ghc-8.3.20170808
nixos.haskell.compiler.ghcHEAD ghc-8.3.20170808
nixos.haskell.compiler.ghcjs ghcjs-0.2.0
nixos.haskell.compiler.ghcjsHEAD ghcjs-0.2.020170323
nixos.haskell.compiler.jhc jhc-0.8.2
nixos.haskell.compiler.uhc uhc-1.1.9.4
I installed ghc8.2.2 via $ nix-env -iA nixos.haskell.compiler.ghc822, and $ ghc --version now returns
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 8.2.2
However, I still get the error error: attribute ‘ghc822’ missing, at (string):1:54 when attempting to run $ stack build.
Also, I attempted to see what ghc version my stack is using after this install, and this led to the same attribute ‘ghc822’ missing error.
$ stack ghc -- --version
error: attribute ‘ghc822’ missing, at (string):1:54
(use ‘--show-trace’ to show detailed location information)
It seems like your stack wants to retrieve the haskell.packages.ghc822 attribute or perhaps haskell.compiler.ghc822, which is not present in your version of <nixpkgs>.
Please check your channel configuration using sudo nix-channel --list (NixOS) or nix-channel --list. Releases 17.03 and older do not have this attribute. 17.09 and unstable should be fine. To switch your default <nixpkgs> to 17.09, note the name of the channel and run
nix-channel --add https://nixos.org/channels/nixos-17.09 <NAME>
Also run nix-channel --update to make sure you have a recent version. GHC 8.2.2 was added on Oct 31st.
If you don't want to change your channel configuration, I suppose you can set the NIX_PATH environment variable
NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/nixos-unstable.tar.gz stack build
Another option is to use a shell.nix. nixos-18.03 comes with ghc 8.2.2, so you can create a shell.nix like:
with import (builtins.fetchGit {
  url = https://github.com/NixOS/nixpkgs-channels;
  ref = "nixos-18.03";
  rev = "cb0e20d6db96fe09a501076c7a2c265359982814";
}) {};

haskell.lib.buildStackProject {
  name = "my-project";
  buildInputs = [ ghc <otherlibs-here> ];
}
And add the following to your stack.yaml:
nix:
  shell-file: shell.nix
Then stack build as usual.
You can provide an old GHC version using a shell.nix file place in the root of your project:
with import (fetchTarball https://github.com/NixOS/nixpkgs/archive/83b35508c6491103cd16a796758e07417a28698b.tar.gz) {};

let ghc = haskell.compiler.ghc802;
in haskell.lib.buildStackProject {
  inherit ghc;
  name = "myEnv";
  buildInputs = [ pcre ];
}
Use a tar url from https://github.com/NixOS/nixpkgs/releases for a version of nixpkgs that contains the GHC version you need.
Then run nix-shell in the root of the project. This will put you into a shell in which you can perform stack build successfully since it would have access to the correct GHC version.
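For example, a minimal session sketch (assuming the shell.nix above sits in the project root; the directory name is hypothetical):
$ cd ~/my-project
$ nix-shell
[nix-shell]$ stack build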
As palik commented, changing the resolver version -- in my case changing
resolver: lts-11.3
to
resolver: lts-9.1
in stack.yaml is a work-around. I do not know what the deeper issue is but would be interested to know.
Update: this post provides a thorough explanation with excellent tips on how to use stackage and nix in concert, including how to reach agreement between package versions of the stack resolver and nix channel.
How to know which resolver to specify in stack.yaml?
Go to this url, which shows stackage snapshots containing ghc:
https://www.stackage.org/package/ghc/snapshots
This will tell you the resolver corresponding to the ghc version you have. For example, I have ghc 8.10.7,
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 8.10.7
and doing a find '8.10.7' on that page shows me that corresponds to LTS Haskell 18.28 (ghc-8.10.7). So I would specify resolver: lts-18.28 in my stack.yaml.
(How to find resolvers for a given package was found in an answer here.)
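As a convenience, newer versions of stack can also write the setting for you; a one-liner sketch (check stack config set --help on your version):
$ stack config set resolver lts-18.28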
Based on @steve-chávez's answer:
stack.yaml
resolver: lts-13.19
system-ghc: true
install-ghc: false
nix:
  enable: true
  path: [nixpkgs=./nix/nixpkgs/default.nix]
  shell-file: shell.nix
nix/nixpkgs/default.nix
let
  spec = builtins.fromJSON (builtins.readFile ./revision.json);
  src = import <nix/fetchurl.nix> {
    url = "https://github.com/${spec.owner}/${spec.repo}/archive/${spec.rev}.tar.gz";
    inherit (spec) sha256;
  };
  nixcfg = import <nix/config.nix>;
  nixpkgs = builtins.derivation {
    system = builtins.currentSystem;
    name = "${src.name}-unpacked";
    builder = builtins.storePath nixcfg.shell;
    inherit src;
    args = [
      (builtins.toFile "builder" ''
        $coreutils/mkdir $out
        cd $out
        $gzip -d < $src | $tar -x --strip-components=1
      '')
    ];
    coreutils = builtins.storePath nixcfg.coreutils;
    tar = builtins.storePath nixcfg.tar;
    gzip = builtins.storePath nixcfg.gzip;
  };
in
  import nixpkgs
nix/nixpkgs/update.sh
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p nix curl jq
SCRIPT_DIR=$(dirname "$(readlink -f "$BASH_SOURCE")")
owner="nixos"
repo="nixpkgs-channels"
rev="nixos-unstable"
full_rev=$(curl --silent https://api.github.com/repos/$owner/$repo/git/refs/heads/$rev | jq -r .object.sha)
echo "full_rev=$full_rev"
expected_sha=$(nix-prefetch-url https://github.com/$owner/$repo/archive/$full_rev.tar.gz)
cat >"$SCRIPT_DIR/revision.json" <<EOL
{
"owner": "$owner",
"repo": "$repo",
"rev": "$full_rev",
"sha256": "$expected_sha"
}
EOL
shell.nix
{
  #
  # There are 2 ways of using stack with nix:
  # - define custom packages in the `stack.yaml` `packages` option (https://docs.haskellstack.org/en/stable/nix_integration/#additions-to-your-stackyaml)
  # - define custom packages in `shell.nix` AND set `shell-file: ...` in `stack.yaml` (https://docs.haskellstack.org/en/stable/nix_integration/#additions-to-your-stackyaml)
  #
  # We are using the second option.
  ghc # stack expects this file to define a function of exactly one argument that should be called ghc
}:
let
  # pkgs = import ./nix/nixpkgs/default.nix {}
  pkgs = import <nixpkgs> {};
in
with pkgs;
haskell.lib.buildStackProject {
  inherit ghc;
  name = "myEnv";
  buildInputs = [ cabal-install ];
}
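A rough usage sketch for the files above, assuming the layout named in this answer:
$ chmod +x nix/nixpkgs/update.sh
$ ./nix/nixpkgs/update.sh    # writes nix/nixpkgs/revision.json with a pinned nixpkgs revision
$ stack build                # stack enters the nix shell defined in shell.nix and builds there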

make - remove intermediate files silently

When building through a chain of rules, make automatically invokes rm to remove any intermediate files at the end of the build process. Since I have about 400 intermediate files to delete that way, this floods the console output badly.
Is there a way to silently rm those intermediate files, so that either nothing is echoed after the build is finished or a message like "Removing intermediate files" is echoed instead?
You could run make -s or build your very own version of make with this patch applied:
diff --git file.c file.c
index ae1c285..de3c426 100644
--- file.c
+++ file.c
@@ -410,18 +410,6 @@ remove_intermediates (int sig)
{
if (! doneany)
DB (DB_BASIC, (_("Removing intermediate files...\n")));
- if (!silent_flag)
- {
- if (! doneany)
- {
- fputs ("rm ", stdout);
- doneany = 1;
- }
- else
- putchar (' ');
- fputs (f->name, stdout);
- fflush (stdout);
- }
}
if (status < 0)
perror_with_name ("unlink: ", f->name);
Expanding on the accepted answer, you can modify Make's flags from within the Makefile itself (as demonstrated here). So, for your situation, you can include this at the top of your Makefile:
MAKEFLAGS += --silent
The only thing to be aware of is that the --silent flag silences all of Make's output, including the "Nothing to be done" notices.
Edit:
You can also add your target as a dependency to .SILENT, as described at https://www.gnu.org/software/make/manual/html_node/Special-Targets.html.
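For reference, a minimal sketch of that .SILENT form (target and file names here are hypothetical); per the GNU make manual, listing targets as prerequisites of .SILENT suppresses echoing of their recipes:
$ cat Makefile
.SILENT: prog
prog: main.o util.o ; $(CC) -o $@ $^
$ make prog        # the link command itself is not echoed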

Package xkbcommon was not found in the pkg-config search path. when building Yocto image

On Ubuntu 14.04
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
Building a Yocto Poky image using the fido branch
inherit core-image
IMAGE_FEATURES += "x11-base x11-sato package-management ssh-server-dropbear"
IMAGE_INSTALL += "chromium \
    lsb \
    kernel-modules \
    alsa-utils \
... and I am getting this sort of message
It looks like it is related to the Chromium recipe /meta-browser/recipes-browser/chromium/chromium_45.0.2454.85.bb, which starts as such:
include chromium.inc
DESCRIPTION = "Chromium browser"
DEPENDS += "libgnome-keyring"
and I get this message
ERROR: Logfile of failure stored in: /home/joel/yocto/build-fido/tmp/work/cortexa7hf-vfp-vfpv4-neon-poky-linux-gnueabi/chromium/45.0.2454.85-r0/temp/log.do_configure.28622
Log data follows:
| DEBUG: Executing python function sysroot_cleansstate
| DEBUG: Python function sysroot_cleansstate finished
| DEBUG: Executing shell function do_configure
| Updating projects from gyp files...
| Package xkbcommon was not found in the pkg-config search path.
| Perhaps you should add the directory containing `xkbcommon.pc'
| to the PKG_CONFIG_PATH environment variable
| No package 'xkbcommon' found
| gyp: Call to 'pkg-config --cflags xkbcommon' returned exit status 1.
| WARNING: exit code 1 from a shell command.
What I have tried
Installed the library
$ sudo apt-get install libxkbcommon-x11-dev
Search for xkbcommon.pc
$ apt-file search xkbcommon.pc
libxkbcommon-dev: /usr/lib/x86_64-linux-gnu/pkgconfig/xkbcommon.pc
pkg-config
joel@linux-Lenovo-G50-70:~/yocto/build-fido$ pkg-config --cflags xkbcommon
<=== Return is EMPTY (?)
joel@linux-Lenovo-G50-70:~/yocto/build-fido$ pkg-config --libs xkbcommon
-lxkbcommon <=== Looks correct
Added PKG_CONFIG_PATH
$ PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/x86_64-linux-gnu/pkgconfig/
$ export PKG_CONFIG_PATH
$ env | grep PKG
PKG_CONFIG_PATH=:/usr/lib/x86_64-linux-gnu/pkgconfig/
but I am still getting the same message when running bitbake
Any suggestions?
Find xkbcommon
$ find /usr/lib/ -name *xkbcommon*
/usr/lib/x86_64-linux-gnu/libxkbcommon.so
/usr/lib/x86_64-linux-gnu/libxkbcommon.so.0.0.0
/usr/lib/x86_64-linux-gnu/libxkbcommon-x11.so.0.0.0
/usr/lib/x86_64-linux-gnu/libxkbcommon-x11.a
/usr/lib/x86_64-linux-gnu/libxkbcommon.a
/usr/lib/x86_64-linux-gnu/libxkbcommon-x11.so.0
/usr/lib/x86_64-linux-gnu/libxkbcommon-x11.so
/usr/lib/x86_64-linux-gnu/pkgconfig/xkbcommon.pc
/usr/lib/x86_64-linux-gnu/pkgconfig/xkbcommon-x11.pc
/usr/lib/x86_64-linux-gnu/libxkbcommon.so.0
In this case, it was the chromium recipe that failed to find libxkbcommon. As the error occurred when building a recipe for the target system, we need to tell the build system that the chromium recipe has a dependency on libxkbcommon.
This can be done by adding
DEPENDS += "libxkbcommon"
to the chromium recipe.
It's worth noting, that libxkbcommon quite likely is an optional dependency, and in that case, it should be handled by a suitable PACKAGECONFIG. (See PACKAGECONFIG in ref.manual).
Note: I've never built chromium myself, thus I'd prefer to not suggest any suitable PACKAGECONFIG.
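If you'd rather not edit the recipe in meta-browser directly, the same line can go into a bbappend in your own layer (the layer name below is hypothetical; the % wildcard matches any recipe version):
$ mkdir -p meta-mylayer/recipes-browser/chromium
$ echo 'DEPENDS += "libxkbcommon"' > meta-mylayer/recipes-browser/chromium/chromium_%.bbappend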
I think the Chromium_45 recipe has been taken down since the last time I saw it (I don't see it anymore).
Anyway, this is what I did for Chromium_40.
I have disabled Wayland (ozone-wayland in Chromium) so that it will only use X11.
In local.conf, I added
CHROMIUM_ENABLE_WAYLAND = "0"
By doing this, I bypass CHROMIUM_WAYLAND_DEPENDS = "wayland libxkbcommon". The relevant part of the recipe is:
CHROMIUM_X11_DEPENDS = "xextproto gtk+ libxi libxss"
CHROMIUM_X11_GYP_DEFINES = ""
CHROMIUM_WAYLAND_DEPENDS = "wayland libxkbcommon"
CHROMIUM_WAYLAND_GYP_DEFINES = "use_ash=1 use_aura=1 chromeos=0 use_ozone=1"
python() {
    if d.getVar('CHROMIUM_ENABLE_WAYLAND', True) == '1':
        d.appendVar('DEPENDS', ' %s ' % d.getVar('CHROMIUM_WAYLAND_DEPENDS', True))
        d.appendVar('GYP_DEFINES', ' %s ' % d.getVar('CHROMIUM_WAYLAND_GYP_DEFINES', True))
    else:
        d.appendVar('DEPENDS', ' %s ' % d.getVar('CHROMIUM_X11_DEPENDS', True))
        d.appendVar('GYP_DEFINES', ' %s ' % d.getVar('CHROMIUM_X11_GYP_DEFINES', True))
}
P.S.: One more thing I found weird is use-egl.
PACKAGECONFIG[use-egl] = ",,virtual/egl virtual/libgles2" is overridden with PACKAGECONFIG[use-egl] = "", so I have removed PACKAGECONFIG[use-egl] = "" from chromium.inc:
PACKAGECONFIG ??= "use-egl"
# this makes sure the dependencies for the EGL mode are present; otherwise, the configure scripts
# automatically and silently fall back to GLX
PACKAGECONFIG[use-egl] = ",,virtual/egl virtual/libgles2"
# Additional PACKAGECONFIG options - listed here to avoid warnings
PACKAGECONFIG[component-build] = ""
PACKAGECONFIG[disable-api-keys-info-bar] = ""
PACKAGECONFIG[ignore-lost-context] = ""
PACKAGECONFIG[impl-side-painting] = ""
PACKAGECONFIG[use-egl] = ""
PACKAGECONFIG[kiosk-mode] = ""

Is the behavior behind the Shellshock vulnerability in Bash documented or at all intentional?

A recent vulnerability, CVE-2014-6271, in how Bash interprets environment variables was disclosed. The exploit relies on Bash parsing some environment variable declarations as function definitions, but then continuing to execute code following the definition:
$ x='() { echo i do nothing; }; echo vulnerable' bash -c ':'
vulnerable
But I don't get it. There's nothing I've been able to find in the Bash manual about interpreting environment variables as functions at all (except for inheriting functions, which is different). Indeed, a proper named function definition is just treated as a value:
$ x='y() { :; }' bash -c 'echo $x'
y() { :; }
But a corrupt one prints nothing:
$ x='() { :; }' bash -c 'echo $x'
$ # Nothing but newline
The corrupt function is unnamed, and so I can't just call it. Is this vulnerability a pure implementation bug, or is there an intended feature here, that I just can't see?
Update
Per Barmar's comment, I hypothesized the name of the function was the parameter name:
$ n='() { echo wat; }' bash -c 'n'
wat
Which I could swear I tried before, but I guess I didn't try hard enough. It's repeatable now. Here's a little more testing:
$ env n='() { echo wat; }; echo vuln' bash -c 'n'
vuln
wat
$ env n='() { echo wat; }; echo $1' bash -c 'n 2' 3 -- 4
wat
…so apparently the args are not set at the time the exploit executes.
Anyway, the basic answer to my question is, yes, this is how Bash implements inherited functions.
This seems like an implementation bug.
Apparently, the way exported functions work in bash is that they use specially-formatted environment variables. If you export a function:
f() { ... }
it defines an environment variable like:
f='() { ... }'
What's probably happening is that when the new shell sees an environment variable whose value begins with (), it prepends the variable name and executes the resulting string. The bug is that this includes executing anything after the function definition as well.
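A rough way to reproduce what that amounts to in plain shell terms (this is only an illustration, not Bash's actual import code):
$ name=x
$ value='() { echo i do nothing; }; echo vulnerable'
$ eval "${name}${value}"    # defines x() and then runs the trailing echo
vulnerable
$ x
i do nothing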
The fix described is apparently to parse the result to see if it's a valid function definition. If not, it prints the warning about the invalid function definition attempt.
This article confirms my explanation of the cause of the bug. It also goes into a little more detail about how the fix resolves it: not only do they parse the values more carefully, but variables that are used to pass exported functions follow a special naming convention. This naming convention is different from that used for the environment variables created for CGI scripts, so an HTTP client should never be able to get its foot into this door.
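On a Bash that carries the fix, you can see the special naming convention in the environment; the exact encoding may vary between versions, but it looks roughly like this:
$ foo() { echo 'hello world'; }
$ export -f foo
$ env | grep -A1 'BASH_FUNC_foo'
BASH_FUNC_foo%%=() {  echo 'hello world'
}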
The following:
x='() { echo I do nothing; }; echo vulnerable' bash -c 'typeset -f'
prints
vulnerable
x ()
{
echo I do nothing
}
declare -fx x
It seems that Bash, after having parsed the x=..., recognized it as a function, exported it (note the declare -fx x), and allowed the execution of the command after the declaration:
echo vulnerable
x='() { x; }; echo vulnerable' bash -c 'typeset -f'
prints:
vulnerable
x ()
{
echo I do nothing
}
and running the x
x='() { x; }; echo Vulnerable' bash -c 'x'
prints
Vulnerable
Segmentation fault: 11
segfaults due to infinite recursive calls.
It doesn't override an already defined function:
$ x() { echo Something; }
$ declare -fx x
$ x='() { x; }; echo Vulnerable' bash -c 'typeset -f'
prints:
x ()
{
echo Something
}
declare -fx x
i.e., x remains the previously (correctly) defined function.
For the Bash 4.3.25(1)-release the vulnerability is closed, so
x='() { echo I do nothing; }; echo Vulnerable' bash -c ':'
prints
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
but, and this is strange (at least to me),
x='() { x; };' bash -c 'typeset -f'
STILL PRINTS
x ()
{
x
}
declare -fx x
and the
x='() { x; };' bash -c 'x'
segmentation faults too, so it STILL accepts the strange function definition...
I think it's worth looking at the Bash code itself. The patch gives a bit of insight as to the problem. In particular,
*** ../bash-4.3-patched/variables.c 2014-05-15 08:26:50.000000000 -0400
--- variables.c 2014-09-14 14:23:35.000000000 -0400
***************
*** 359,369 ****
strcpy (temp_string + char_index + 1, string);
! if (posixly_correct == 0 || legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);
!
! /* Ancient backwards compatibility. Old versions of bash exported
! functions like name()=() {...} */
! if (name[char_index - 1] == ')' && name[char_index - 2] == '(')
! name[char_index - 2] = '\0';
if (temp_var = find_function (name))
--- 364,372 ----
strcpy (temp_string + char_index + 1, string);
! /* Don't import function names that are invalid identifiers from the
! environment, though we still allow them to be defined as shell
! variables. */
! if (legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST|SEVAL_FUNCDEF|SEVAL_ONECMD);
if (temp_var = find_function (name))
When Bash exports a function, it shows up as an environment variable, for example:
$ foo() { echo 'hello world'; }
$ export -f foo
$ cat /proc/self/environ | tr '\0' '\n' | grep -A1 foo
foo=() { echo 'hello world'
}
When a new Bash process finds a function defined this way in its environment, it evaluates the code in the variable using parse_and_execute(). For normal, non-malicious code, executing it simply defines the function in Bash and moves on. However, because it's passed to a generic execution function, Bash will correctly parse and execute additional code defined in that variable after the function definition.
You can see that in the new code, a flag called SEVAL_ONECMD has been added that tells Bash to only evaluate the first command (that is, the function definition) and SEVAL_FUNCDEF to only allow function definitions.
In regard to your question about documentation, note from the command-line documentation for the env command (shown below) that a study of the syntax shows env is working as documented.
There are, optionally, 4 possible parts:
An optional hyphen as a synonym for -i (for backward compatibility I assume)
Zero or more NAME=VALUE pairs. These are the variable assignment(s) which could include function definitions.
Note that no semicolon (;) is required between or following the assignments.
The last argument(s) can be a single command followed by its argument(s). It will run with whatever permissions have been granted to the login being used. Security is controlled by restricting permissions on the login user and setting permissions on user-accessible executables such that users other than the executable's owner can only read and execute the program, not alter it.
[ spot@LX03:~ ] env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.
-i, --ignore-environment start with an empty environment
-u, --unset=NAME remove variable from the environment
--help display this help and exit
--version output version information and exit
A mere - implies -i. If no COMMAND, print the resulting environment.
Report env bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
Report env translation bugs to <http://translationproject.org/team/>

GDB: Question about relative and absolute paths to files in backtraces

I have a question about gdb or gcc (but not Firefox).
I see only absolute paths in gdb when I'm debugging Firefox. Example:
#5  0x01bb0c52 in nsAppShell::ProcessNextNativeEvent (this=0xb7232ba0, mayWait=1)
    at /media/25b7639d-9a70-42ca-aaa7-28f4d1f417fd/firefox-dev/mozilla-central/widget/src/gtk2/nsAppShell.cpp:144
It's uncomfortable to read such backtraces.
If I try to compile and debug a tiny test program, I see this kind of backtrace (with relative paths to files):
#0  main () at prog.c:5
How can I see only relative paths in backtraces when debugging Firefox?
P.S. gcc 4.4.1; gdb 7.0.
GDB will show absolute or relative path depending on how the program was compiled. Consider:
$ cd /tmp
$ cat t.c
int main() { return 0; }
$ gcc -g t.c && gdb -q -ex start -ex quit ./a.out
Reading symbols from /tmp/a.out...done.
Temporary breakpoint 1 at 0x4004c8: file t.c, line 1.
Temporary breakpoint 1, main () at t.c:1
1 int main() { return 0; }
Now the same, but compile source via absolute path:
$ gcc -g /tmp/t.c && gdb -q -ex start -ex quit ./a.out
Reading symbols from /tmp/a.out...done.
Temporary breakpoint 1 at 0x4004c8: file /tmp/t.c, line 1.
Temporary breakpoint 1, main () at /tmp/t.c:1
1 int main() { return 0; }
And again, this time with relative path that includes directory prefix:
$ cd /
$ gcc -g tmp/t.c -o tmp/a.out && gdb -q -ex start -ex quit tmp/a.out
Reading symbols from /tmp/a.out...done.
Temporary breakpoint 1 at 0x4004c8: file tmp/t.c, line 1.
Temporary breakpoint 1, main () at tmp/t.c:1
1 int main() { return 0; }
So, you can get gdb to show relative path if you change the way firefox is built. That may prove to be a very non-trivial proposition.
