Searching Linux files for a string (e.g. root credentials) - bash

As part of our audit policy, I need to search all files on a Linux machine for any file that contains the root credentials.
This command will be run by a non-root account, so the result will include many "Permission denied" messages.
Any suggestion of the proper syntax to search all files and filter the output to show only useful results?
I tried:
grep - "root" / | grep "password"
However, as this command is run using non root accounts, the big part of the result is "permission denied"
Thanks

The permission errors are output to stderr, so you can simply redirect that to /dev/null. E.g.:
grep -R "root" . 2> /dev/null

You would go:
grep -lir "root" /
The -l switch outputs only the names of files in which the text occurs (instead of each line containing the text), the -i switch ignores the case, and the -r descends into subdirectories.
EDIT 1:
Running it as non-root will be fine, as long as you're not trying to read other users' files.
EDIT 2:
To keep only useful results, filter out the permission errors:
grep -lir "root" / 2>&1 | grep -v "Permission denied"
The -v switch inverts the sense of matching, selecting non-matching lines (here, dropping the "Permission denied" messages).

However, as this command is run using non-root accounts, most of the output is "Permission denied" messages.
Use sudo to run this recursive grep:
cd /home
sudo grep -ir 'root' *

You can suppress the warnings with a redirection to /dev/null.
This solution uses find to walk the whole (accessible) filesystem:
find / -readable -exec grep -H root '{}' \; 2>/dev/null | grep password

Related

hash method to verify integrity of dir vs dir.tar.gz

I'm working on a Python script that verifies the integrity of some downloaded projects.
On my NAS, I have all my compressed folders: folder1.tar.gz, folder2.tar.gz, …
On my Linux computer, the equivalent uncompressed folders: folder1, folder2, …
So I want to compare the integrity of my files without untarring or downloading anything!
I think I can do it on the NAS with something like this (using md5sum):
sshpass -p 'password' ssh login@my.nas.ip tar -xvf /path/to/my/folder.tar.gz | md5sum | awk '{ print $1 }'
This gives me a hash, but I don't know how to get an equivalent hash to compare with the normal folder on my computer. Maybe the way I am doing it is wrong.
I need one command for the NAS, and one for the Linux computer, that output the same hash (if the folders are the same, of course).
If you did that, tar xf would actually extract the files. md5sum would only see the file listing, and not the file content.
However, if you have GNU tar on the server and the standard utility paste, you could create checksums this way:
mksums:
#!/bin/bash
data=/path/to/data.tar.gz
sums=/path/to/data.md5
paste \
<(tar xzf "$data" --to-command=md5sum) \
<(tar tzf "$data" | grep -v '/$') \
| sed 's/-\t//' > "$sums"
Run mksums above on the machine with the tar file.
Copy the sums file it creates to the computer with the folders and run:
cd /top/level/matching/tar/contents
md5sums -c "$sums"
paste joins lines of files given as arguments
<( ...) runs a command, making its output appear in a fifo
--to-command is a GNU tar extension which allows running commands which will receive their data from stdin
grep filters out directories from the tar listing
sed removes the extraneous -\t so the checksum file can be understood by md5sum
The above assumes you don't have any very-oddly named files (for example, the names can't contain newlines)
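If you would rather diff two listings than run md5sum -c, a rough sketch for the computer with the uncompressed folders (GNU find assumed; the output path is a placeholder):
cd /top/level/matching/tar/contents
# hash every file, with paths relative to here (no leading ./) so they match the tar listing
find . -type f -printf '%P\n' | while IFS= read -r f; do md5sum "$f"; done > /tmp/local.md5
# compare with the checksum file copied from the NAS, ignoring line order
diff <(sort /tmp/local.md5) <(sort /path/to/data.md5)
The same caveat about oddly named files (no newlines) applies here.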

curl in a pipeline: (23) Failed writing body [duplicate]

It works ok as a single tool:
curl "someURL"
curl -o - "someURL"
but it doesn't work in a pipeline:
curl "someURL" | tr -d '\n'
curl -o - "someURL" | tr -d '\n'
it returns:
(23) Failed writing body
What is the problem with piping the cURL output? How do I buffer the whole cURL output and then handle it?
This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page.
In curl "url" | grep -qs foo, as soon as grep has what it wants it will close the read stream from curl. cURL doesn't expect this and emits the "Failed writing body" error.
A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.
E.g.
curl "url" | tac | tac | grep -qs foo
tac is a simple Unix program that reads the entire input page and reverses the line order (hence we run it twice). Because it has to read the whole input to find the last line, it will not output anything to grep until cURL is finished. Grep will still close the read stream when it has what it's looking for, but it will only affect tac, which doesn't emit an error.
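If you want to reproduce the behaviour, any reader that exits early will usually do it (the URL is a placeholder; small responses may finish before the pipe closes):
# head exits after the first line, the pipe closes, and curl may report
# curl: (23) Failed writing body
curl -sS "https://example.com/some/large/page" | head -n 1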
For completeness and future searches:
It's a matter of how cURL manages its output buffer: you can disable output buffering with the -N option.
Example:
curl -s -N "URL" | grep -q Welcome
Another possibility, if using the -o (output file) option: the destination directory does not exist.
E.g. if you have -o /tmp/download/abc.txt and /tmp/download does not exist.
Hence, ensure any required directories exist beforehand, or use the --create-dirs option together with -o if necessary.
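For example, a minimal sketch (URL and path are placeholders):
# create /tmp/download/ if it is missing, then save the response there
curl --create-dirs -o /tmp/download/abc.txt "https://example.com/abc.txt"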
The server ran out of disk space, in my case.
Check for it with df -k .
I was alerted to the lack of disk space when I tried piping through tac twice, as described in one of the other answers: https://stackoverflow.com/a/28879552/336694. It showed me the error message write error: No space left on device.
You can do this instead of using -o option:
curl [url] > [file]
So in my case it was a problem of encoding; iconv solves the problem:
curl 'http://www.multitran.ru/c/m.exe?CL=1&s=hello&l1=1' | iconv -f windows-1251 | tr -dc '[:print:]' | ...
If you are trying something similar like source <( curl -sS $url ) and getting the (23) Failed writing body error, it is because sourcing a process substitution doesn't work in bash 3.2 (the default for macOS).
Instead, you can use this workaround.
source /dev/stdin <<<"$( curl -sS $url )"
Trying the command with sudo worked for me. For example:
sudo curl -O -k 'https url here'
Note: -O (that's a capital O, not zero) and -k for an https URL.
I had the same error but for a different reason. In my case I had a (tmpfs) partition with only 1 GB of space, and I was downloading a big file which eventually filled all the space on that partition, so I got the same error as you.
I encountered the same problem when doing:
curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | apt-key add -
The above command needs to be executed using root privileges.
Writing it in following way solved the issue for me:
curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | sudo apt-key add -
If you write sudo before curl, you will get the Failed writing body error.
For me, it was a permission issue. docker run is called with a user profile, but root is the user inside the container. The solution was to make curl write to /tmp, since that has write permission for all users, not just root.
I used the -o option.
-o /tmp/file_to_download
In my case, I was doing:
curl <blabla> | jq | grep <blibli>
With jq . it worked: curl <blabla> | jq . | grep <blibli>
I encountered this error message while trying to install Varnish Cache on Ubuntu. The Google search landed me here for the error (23) Failed writing body, hence posting a solution that worked for me.
The bug is encountered while running the command as root:
curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
The solution is to run apt-key add as a non-root user:
curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
The explanation here by @Kaworu is great: https://stackoverflow.com/a/28879552/198219
This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page. cURL doesn't expect this and emits the "Failed writing body" error.
A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.
I believe the more correct implementation would be to use sponge, as already suggested by @nisetama in the comments:
curl "url" | sponge | grep -qs foo
I got this error trying to use jq when I didn't have jq installed. So... make sure jq is installed if you're trying to use it.
In Bash and zsh (and perhaps other shells), you can use process substitution (Bash/zsh) to create a file on the fly, and then use that as input to the next process in the pipeline chain.
For example, I was trying to parse JSON output from cURL using jq and less, but was getting the Failed writing body error.
# Note: this does NOT work
curl https://gitlab.com/api/v4/projects/ | jq | less
When I rewrote it using process substitution, it worked!
# this works!
jq "" <(curl https://gitlab.com/api/v4/projects/) | less
Note: jq uses its 2nd argument to specify an input file
Bonus: If you're using jq like me and want to keep the colorized output in less, use the following command line instead:
jq -C "" <(curl https://gitlab.com/api/v4/projects/) | less -r
(Thanks to Kaworu for their explanation of why Failed writing body was occurring. However, their solution of using tac twice didn't work for me. I also wanted to find a solution that would scale better for large files and avoid the other issues noted in the comments on that answer.)
I was getting curl: (23) Failed writing body. Later I noticed that I did not have sufficient space for downloading an rpm package via curl, and that was the reason for the issue. I freed up some space and the issue was resolved.
I had the same error because of my own typo:
# fails because of reasons mentioned above
curl -I -fail https://www.google.com | echo $?
curl: (23) Failed writing body
# success
curl -I -fail https://www.google.com || echo $?
I added the -s flag and it did the job. E.g.: curl -o- -s https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

Curl wildcard delete

I'm trying to use curl to delete files before I upload a new set, and I'm having trouble trying to wildcard the files.
The below code works to delete one specific file
curl -v -u usr:"pass" ftp://11.11.11.11/outgoing/ -Q "DELE /outgoing/configuration-1.zip"
But when I try to wildcard the files with the below
curl -v -u usr:"pass" ftp://11.11.11.11/outgoing/ -Q "DELE /outgoing/configuration-*.zip"
I get the error below
errorconfiguration-*: No such file or directory
QUOT command failed with 550
Can I use wildcards in a curl delete?
Thanks
Curl does not support wildcards in any commands on an FTP server. In order to perform the required delete, you'll have to first list the files in the directory on the server, filter down to the files you want, and then issue delete commands for those.
Assuming your files are in the path ftp://11.11.11.11/outgoing, you could do something like:
curl -u usr:"pass" -l ftp://11.11.11.11/outgoing \
| grep '^configuration[-][[:digit:]]\+[.]zip$' \
| xargs -I{} -- curl -v -u usr:"pass" ftp://11.11.11.11/outgoing -Q "DELE {}"
That command (untested, since I don't have access to your server) does the following:
Outputs a directory listing of the outgoing directory on the server.
Filters that directory listing for file names that start with configuration-, then have one or more digits, and then end with .zip. You may need to adjust this regex for different patterns.
Supplies the matching names to xargs, which, using {} to interpolate each matched name, runs the curl command to DELETE each file on the server.
You could instead use one curl command to delete all of the files by concatenating the matched names into a single invocation, but that would be less legible for use as an example.
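For reference, a rough, untested sketch of that single-command variant (same placeholder credentials and paths as above):
# collect the matching names, add one -Q "DELE ..." per file, then issue all
# of the deletes over a single connection
files=$(curl -s -u usr:"pass" -l ftp://11.11.11.11/outgoing \
        | grep '^configuration[-][[:digit:]]\+[.]zip$')
quote_args=()
for f in $files; do
    quote_args+=(-Q "DELE /outgoing/$f")
done
[ ${#quote_args[@]} -gt 0 ] && curl -v -u usr:"pass" "${quote_args[@]}" ftp://11.11.11.11/outgoing/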

SFTP: return number of files in remote directory?

I sent a batch of files to a remote server via SFTP. If it were a local directory, I could do something like ls -l | wc -l to get the total number of files. However, with SFTP, I get the error Can't ls: "/|" not found.
echo ls -l | sftp server | grep -v '^sftp' | wc -l
If you want to count the files in a particular directory, the directory path should be put after the ls -l command, like:
echo ls -l /my/directory/ | sftp server | grep -v '^sftp' | wc -l
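If you only want regular files (and not directories), a sketch along the same lines counts just the lines that start with "-" in the long listing:
echo "ls -l /my/directory/" | sftp server | grep -v '^sftp' | grep -c '^-'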
Use a batch file to run commands remotely and get the data back to work with in bash:
Make your batch file called mybatch.txt with these sftp commands:
cd your_directory/your_sub_directory
ls -l
Save it out and give it 777 permissions.
chmod 777 mybatch.txt
Then run it like this:
sftp your_username@your_server.com < mybatch.txt
It will prompt you for the password, enter it.
Then you get the output dumped to bash terminal. So you can pipe that to wc -l like this:
sftp your_user@your_server.com < mybatch.txt | wc -l
Connecting to your_server.com...
your_user@your_server.com's password:
8842
The 8842 is the number of lines returned by ls -l in that directory.
Instead of piping it to wc, you could dump it to a file for parsing to determine how many files/folders.
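For example, a small sketch of that dump-and-parse approach (host, batch file and listing file names are the placeholders used above):
sftp your_user@your_server.com < mybatch.txt > listing.txt
grep -c '^-' listing.txt    # number of regular files
grep -c '^d' listing.txt    # number of directories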
I would use an sftp batch file.
Create a file called batchfile and enter "ls -l" in it.
Then run
sftp -b batchfile user@sftpHost | wc -l
The easiest way I have found is to use the lftp client which supports a shell-like syntax to transfer the output of remote ftp commands to local processes.
For example using the pipe character:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l | wc -l'
This will make lftp spawn a local wc -l and give it the output of the remote ls -l ftp command on its stdin.
Shell redirection syntax is also supported and will write directly to local files:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l >list.txt'
Thus a file named list.txt containing the remote file listing will be created in the current folder on the local machine. Use >> to append instead.
Works perfectly for me.

cron chmod script does not change owner

I need to run a cron job that changes the owner and group of selected files.
I have a script for this:
#!/bin/bash
filez=`ls -la /tmp | grep -v zend | grep -v textfile | awk '$3 == "www-data" {print $8}'`
for ff in $filez; do
/bin/chown -R tm:tm /tmp/$ff
done
If I run it manually, it works perfectly. If I add this to root's cron
* * * * * /home/scripts/do_script
it does not change the owner/group. The file has permissions "-rwsr-xr-x".
Any idea how this might be solved?
On my system, field $8 is the hour/year, not the filename. Maybe that's the case for your root user as well. This is why you should never try to parse ls. Even if you fix this issue, half a dozen more will remain to break the system in the future.
Use find instead:
find /tmp ! -name '*zend*' ! -name '*textfile*' -user www-data \
-exec chown -R tm:tm {} \;
If you are adding to root's cron (/etc/crontab), be aware that the syntax is different from a normal user's crontab.
# m h dom mon dow user command
* * 1 * * root /usr/bin/selfdestruct --immediately
Also give the full path to your command: cron does not really have a rich environment.
Make sure that the commands in your script also have the full path and don't use environment variables.
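A minimal sketch of a root crontab entry that follows this advice (the log path is an assumption):
# in root's own crontab (crontab -e), so no user field; give cron an explicit
# PATH and capture the script's output so failures are visible
PATH=/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /home/scripts/do_script >> /var/log/do_script.log 2>&1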
