I was following some instructions on the OSSEC site on how to install it on CentOS.
# wget -q -O – https://www.atomicorp.com/installers/atomic | sh
# yum install ossec-hids ossec-hids-server (or ossec-hids-client for the agent)
After I ran the first command, I noticed a file named – appear in my folder. The second command doesn't work, as yum says it can't find the package. And now this strange file – can't be removed. The dash was actually supposed to mean "write to stdout".
Can anyone help please get rid of it? Thanks
This is happening because the dash (–) you used is not the regular - that indicates STDOUT:
% printf '–' | hexdump -C
00000000 e2 80 93 |...|
00000003
% printf '\xe2\x80\x93\n'
–
Regular -:
% printf '-' | hexdump -C
00000000 2d |-|
00000001
% printf '\x2d\n'
-
So you need to use the regular - to indicate STDOUT when saving the content.
To remove the created file, refer to it by its hex value:
rm -- $'\xe2\x80\x93'
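The whole situation can be reproduced and cleaned up locally, no wget needed (the scratch directory is just to keep the listing clean):

```shell
cd "$(mktemp -d)"            # scratch directory, so ls shows only our file
touch -- $'\xe2\x80\x93'     # recreate the stray en-dash file, as the bad wget did
ls | hexdump -C | head -n 1  # the name's bytes start e2 80 93, not the single 2d of "-"
rm -- $'\xe2\x80\x93'        # remove it by spelling out those exact bytes
```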
I have the following script that checks for installed/uninstalled packages:
#!/bin/bash
DEPENDENCIES="build-essential pkg-config qt4-qmake libqt4-dev libavformat-dev libavcodec-dev"
for dep in $DEPENDENCIES; do
dpkg -l $dep | grep "$dep"
done | sort
the result is:
dpkg-query: no packages found matching libavformat-dev
dpkg-query: no packages found matching libavcodec-dev
ii build-essential 12.1ubuntu2 amd64 Informational list of build-essential packages
ii pkg-config 0.29.1-0ubuntu1 amd64 manage compile and link flags for libraries
un libqt4-dev (no description available)
un qt4-qmake (no description available)
which is what I expect. I would then like to redirect both stdout and stderr to a file depend.out, so I modify the last line of the script to: done | sort &> depend.out. But the contents of depend.out are:
ii build-essential 12.1ubuntu2 amd64 Informational list of build-essential packages
ii pkg-config 0.29.1-0ubuntu1 amd64 manage compile and link flags for libraries
un libqt4-dev <none> <none> (no description available)
un qt4-qmake <none> <none> (no description available)
Why are the dpkg-query error lines (the packages that were not found) missing even though I use the redirection operator &>?
Most probably those lines were written to stderr, so they weren't redirected into the pipe (they went to the tty instead).
If you want stderr to be processed by the pipe as well, you need to redirect it to stdout before piping, since a pipe only acts on stdout.
try this one:
#!/bin/bash
DEPENDENCIES="build-essential pkg-config qt4-qmake libqt4-dev libavformat-dev libavcodec-dev"
for dep in $DEPENDENCIES; do
dpkg -l $dep 2>&1 | grep "$dep"
done | sort
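The effect of placing 2>&1 before the pipe can be sketched without dpkg at all; fake_dpkg below is a hypothetical stand-in that, like dpkg -l, writes matches to stdout and "not found" complaints to stderr:

```shell
#!/bin/bash
# Hypothetical stand-in for dpkg -l: one line to stdout, one to stderr.
fake_dpkg() {
    echo "ii  $1  1.0"
    echo "no packages found matching $1" >&2
}

for dep in foo bar; do
    fake_dpkg "$dep" 2>&1 | grep "$dep"   # 2>&1 before | so stderr enters the pipe
done | sort                               # all four lines now reach sort
```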
To redirect stderr to stdout, use:
command 2>&1
Demonstration:
ls unexisting-path 2>&1 | cat > /dev/null
Here, ls will produce an error output. This output is redirected to stdout, so it gets caught by the pipe | and sent to cat, which outputs it to stdout too. To prove it, > /dev/null is added, and as expected, nothing is displayed.
It works OK as a standalone command:
curl "someURL"
curl -o - "someURL"
but it doesn't work in a pipeline:
curl "someURL" | tr -d '\n'
curl -o - "someURL" | tr -d '\n'
it returns:
(23) Failed writing body
What is the problem with piping the cURL output? How to buffer the whole cURL output and then handle it?
This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page.
In curl "url" | grep -qs foo, as soon as grep has what it wants it will close the read stream from curl. cURL doesn't expect this and emits the "Failed writing body" error.
A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.
E.g.
curl "url" | tac | tac | grep -qs foo
tac is a simple Unix program that reads the entire input page and reverses the line order (hence we run it twice). Because it has to read the whole input to find the last line, it will not output anything to grep until cURL is finished. Grep will still close the read stream when it has what it's looking for, but it will only affect tac, which doesn't emit an error.
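The same early-close failure can be reproduced locally without any network (bash-specific, since it reads PIPESTATUS):

```shell
# head exits after one line and closes the pipe; seq is still writing,
# receives SIGPIPE, and dies with status 141 (128 + signal 13) -- the
# same situation that makes cURL print "Failed writing body".
seq 1 1000000 | head -n 1
echo "seq exit status: ${PIPESTATUS[0]}"
```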
For completeness and future searches:
It's a matter of how cURL buffers its output: the buffering can be disabled with the -N option.
Example:
curl -s -N "URL" | grep -q Welcome
Another possibility, if you are using the -o (output file) option: the destination directory does not exist.
E.g. if you have -o /tmp/download/abc.txt and /tmp/download does not exist.
So ensure any required directories exist beforehand, or use the --create-dirs option together with -o.
The server ran out of disk space, in my case.
Check for it with df -k .
I was alerted to the lack of disk space when I tried piping through tac twice, as described in one of the other answers: https://stackoverflow.com/a/28879552/336694. It showed me the error message write error: No space left on device.
You can do this instead of using -o option:
curl [url] > [file]
In my case it was an encoding problem; iconv solves it:
curl 'http://www.multitran.ru/c/m.exe?CL=1&s=hello&l1=1' | iconv -f windows-1251 | tr -dc '[:print:]' | ...
If you are trying something similar like source <( curl -sS $url ) and getting the (23) Failed writing body error, it is because sourcing a process substitution doesn't work in bash 3.2 (the default for macOS).
Instead, you can use this workaround.
source /dev/stdin <<<"$( curl -sS $url )"
Running the command with sudo worked for me. For example:
sudo curl -O -k 'https url here'
Note: -O (capital o, not zero) saves the file under its remote name, and -k allows insecure HTTPS connections.
I had the same error but for a different reason. In my case I had a tmpfs partition with only 1 GB of space, and I was downloading a big file which eventually filled all the memory on that partition, giving me the same error.
I encountered the same problem when doing:
curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | apt-key add -
The above query needs to be executed using root privileges.
Writing it in following way solved the issue for me:
curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | sudo apt-key add -
If you put sudo before curl instead, you will still get the Failed writing body error.
For me, it was a permission issue. docker run is called with a user profile, but root is the user inside the container. The solution was to make curl write to /tmp, since that has write permission for all users, not just root.
I used the -o option.
-o /tmp/file_to_download
In my case, I was doing:
curl <blabla> | jq | grep <blibli>
With jq . it worked: curl <blabla> | jq . | grep <blibli>
I encountered this error message while trying to install Varnish Cache on Ubuntu. A Google search for the error (23) Failed writing body landed me here, hence posting the solution that worked for me.
The bug is encountered while running the command as root: curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
The solution is to run apt-key add as a non-root user:
curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
The explanation here by @Kaworu is great: https://stackoverflow.com/a/28879552/198219
This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page. cURL doesn't expect this and emits the "Failed writing body" error.
A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.
I believe the more correct implementation would be to use sponge, as already suggested by @nisetama in the comments:
curl "url" | sponge | grep -qs foo
I got this error trying to use jq when I didn't have jq installed. So... make sure jq is installed if you're trying to use it.
In Bash and zsh (and perhaps other shells), you can use process substitution to create a file on the fly, and then use that as input to the next process in the pipeline chain.
For example, I was trying to parse JSON output from cURL using jq and less, but was getting the Failed writing body error.
# Note: this does NOT work
curl https://gitlab.com/api/v4/projects/ | jq | less
When I rewrote it using process substitution, it worked!
# this works!
jq "" <(curl https://gitlab.com/api/v4/projects/) | less
Note: jq uses its 2nd argument to specify an input file
Bonus: If you're using jq like me and want to keep the colorized output in less, use the following command line instead:
jq -C "" <(curl https://gitlab.com/api/v4/projects/) | less -r
(Thanks to Kaworu for their explanation of why Failed writing body was occurring. However, their solution of using tac twice didn't work for me. I also wanted a solution that scales better for large files and avoids the other issues noted in the comments on that answer.)
I was getting curl: (23) Failed writing body. Later I noticed that I did not have sufficient space to download an rpm package via curl, which was the cause. I freed up some space and the issue was resolved.
I had the same error because of my own typo:
# fails because of reasons mentioned above
curl -I -fail https://www.google.com | echo $?
curl: (23) Failed writing body
# success
curl -I -fail https://www.google.com || echo $?
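The distinction the typo hid can be sketched with true and false: | always runs the right-hand side, while || runs it only when the left-hand side fails.

```shell
false | echo "after pipe"   # printed: a pipe runs both sides unconditionally
false || echo "after or"    # printed: false failed, so || triggers
true  || echo "skipped"     # not printed: true succeeded
```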
I added the -s flag and it did the job. E.g.: curl -o- -s https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
I use PuTTY to connect to a cubietruck board which runs Armbian Debian Jessie. I want to see a coloured live log of an app, so I followed an example using watch, tail and ccze together.
When I use the command :
tail -f app.log | ccze
It worked great. Also when I use the command :
watch `tail -f app.log`
It also worked great. However when I gave :
watch --color 'tail -f app.log | ccze'
or
watch -c 'tail -f app.log | ccze'
I get a lot of
(B
characters, and in most cases no newlines are recognized in the text, so it looks like one seamless run. I assume the colour-related ANSI escape sequences are not decoded correctly.
I also changed the PuTTY keyboard mode from ESC to VT400 and Linux, but the same problem occurred.
Does anyone have an idea what I am doing wrong?
watch -c -n5 'tail app.log | ccze -A'
Leaving out the -f parameter for tail, to stop tail watching for changes in the log file (because watch should do that)
Adding the -A parameter to ccze to enable raw ANSI colors
I want to generate System.map from vmlinuz, because most machines don't have the System.map file. In fact, vmlinux is compressed into vmlinuz or bzImage.
Is there any tool or script that can do this?
I tried:
dd if=/boot/vmlinuz skip=`grep -a -b -o -m 1 -e $'\x1f\x8b\x08\x00' /boot/vmlinuz | cut -d: -f 1` bs=1 | zcat > /tmp/vmlinux
It failed:
zcat: stdin: not in gzip format
32769+0 records in
32768+0 records out
To extract the uncompressed kernel from the kernel image, you can use the extract-vmlinux script from the scripts directory in the kernel tree (available since at least kernel version 3.5). The command is /path/to/kernel/tree/scripts/extract-vmlinux <kernel image> >vmlinux. If you get an error like
mktemp: Cannot create temp file /tmp/vmlinux-XXX: Invalid argument
replace $(mktemp /tmp/vmlinux-XXX) with $(mktemp /tmp/vmlinux-XXXXXX) in the script.
If the extracted kernel binary contains symbol information, you should¹ be able to create the System.map file using the mksysmap script from the same subdirectory. The command here is NM=nm /path/to/kernel/tree/scripts/mksysmap vmlinux System.map.
¹ The kernel images shipped with my distribution seem to be stripped, so the script was not able to get the symbols.
As Abrixas2 wrote, you will need a kernel image with symbol information in order to create System.map files and a packed vmlinuz image is not likely to have symbols in it. I can, however, verify that the script in your original post works with '-e' replaced with '-P' and '$' dropped, i.e.,
$ dd if=vmlinuz-3.8.0-19-generic skip=`grep -a -b -o -m 1 -P '\x1f\x8b\x08\x00' vmlinuz-3.8.0-19-generic | cut -d: -f 1` bs=1 | zcat > /tmp/vmlinux
gzip: stdin: decompression OK, trailing garbage ignored
I'm on ubuntu linux.
In plain sh (which lacks $'…' quoting), you can change $'\037\213\010\000' to "$(echo '\037\213\010\000')"
bash$ N=$(grep -abo -m1 $'\037\213\010\000' vmlinuz-4.13.0-37-generic | awk -F: '{print $1+1}') &&
tail -c +$N vmlinuz-4.13.0-37-generic | gzip -d > /tmp/vmlinuz
try this :
dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux
with
24584 = 24576 + 8
when
od -A d -t x1 vmlinuz | grep '1f 8b 08 00'
gives
....... 0 1 2 3 . . . . 8
0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45
enjoy !
I would like to view the contents of a file in the current directory, but in binary from the command line. How can I achieve this?
xxd does both binary and hexadecimal.
bin:
xxd -b file
hex:
xxd file
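A quick check of both modes on a two-byte file (the path /tmp/xxd_demo.bin is arbitrary):

```shell
printf 'AB' > /tmp/xxd_demo.bin
xxd -b /tmp/xxd_demo.bin   # bits: 01000001 01000010 (the ASCII codes of A and B)
xxd /tmp/xxd_demo.bin      # hex:  4142
```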
hexdump -C yourfile.bin
unless you want to edit it, of course. Most Linux distros have hexdump by default (but obviously not all).
vi your_filename
hit Esc
Type :%!xxd to view the hex strings, then :%!xxd -r to return to normal editing.
As a fallback there's always od -xc filename
sudo apt-get install bless
Bless is a GUI tool which can view, edit, search and a lot more.
It's very lightweight.
If you want to open binary files (in CentOS 7):
strings <binary_filename>
$ echo -n 'Hello world!' | hd
00000000 48 65 6c 6c 6f 20 77 6f 72 6c 64 21 |Hello world!|
0000000c
Hexyl formats nicely: sudo apt install hexyl
See Improved Hex editing in the Vim Tips Wiki.
You can open emacs (in terminal mode, using emacs -nw for instance), and then use Hexl mode: M-x hexl-mode.
https://www.gnu.org/software/emacs/manual/html_node/emacs/Editing-Binary-Files.html
To get the output all in a single line in Hexadecimal:
xxd -p yourfile.bin | tr -d '\n'
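For example (the file name is arbitrary):

```shell
printf 'hello' > /tmp/xxd_plain.bin
xxd -p /tmp/xxd_plain.bin | tr -d '\n'   # prints 68656c6c6f with no line breaks
```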
To view a file's contents in hexadecimal representation, we say:
xxd filename
e.g.:
xxd hello.c
To see all the contents and codes in a binary file, we could use commands like readelf, objdump, hexdump, etc.
For example, if we want to dump all the contents of a binary file (executable, shared library, object file), we say:
hexdump binaryfilename
e.g.
hexdump /bin/bash
But readelf is the best utility for analyzing ELF (Executable and Linkable Format) files. So if we say:
readelf -a /bin/bash
all the contents of the binary file bash will be shown to us. We can also pass different flags to readelf to see the sections and headers of an ELF file separately; for example, if we want to see only the ELF header, we say:
readelf -h /bin/bash
for reading all the segments of the file:
readelf -l /bin/bash
for reading all the sections of the file:
readelf -S /bin/sh
But again, in summary: to read a normal file like hello.c, or a binary file like bash at /bin/bash on Linux, we say:
xxd hello.c
readelf -a /bin/bash
You can use hexdump to view a binary file:
sudo apt-get install hexdump
hexdump -C yourfile.bin