Curl wildcard delete - bash

I'm trying to use curl to delete files before I upload a new set, but I'm having trouble wildcarding the files.
The command below works to delete one specific file:
curl -v -u usr:"pass" ftp://11.11.11.11/outgoing/ -Q "DELE /outgoing/configuration-1.zip"
But when I try to wildcard the file with the command below:
curl -v -u usr:"pass" ftp://11.11.11.11/outgoing/ -Q "DELE /outgoing/configuration-*.zip"
I get the error below:
errorconfiguration-*: No such file or directory
QUOT command failed with 550
Can I use wildcards in curl delete commands?
Thanks

curl does not support wildcards in quote commands sent to an FTP server. To perform the delete you want, you'll have to first list the files in the directory on the server, filter down to the ones you want to remove, and then issue a DELE command for each.
Assuming your files are in the path ftp://11.11.11.11/outgoing, you could do something like:
curl -u usr:"pass" -l ftp://11.11.11.11/outgoing \
| grep '^configuration[-][[:digit:]]\+[.]zip$' \
| xargs -I{} -- curl -v -u usr:"pass" ftp://11.11.11.11/outgoing/ -Q "DELE /outgoing/{}"
That command (untested, since I don't have access to your server) does the following:
Outputs a name-only directory listing for the /outgoing directory on the server.
Filters that directory listing for file names that start with configuration-, then have one or more digits, and then end with .zip. You may need to adjust this regex for different patterns.
Supplies the matching names to xargs, which substitutes each name for the {} placeholder (the -I option) and runs a curl command that issues a DELE for each file on the server.
You could use one curl command to delete all of the files by concatenating the matched names into a single invocation, but that would be less legible for use as an example.
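For reference, here is an untested sketch of that single-invocation variant, using the same host, credentials, and regex as above: each matched name becomes its own -Q "DELE ..." argument, so one curl session deletes them all.
# Untested sketch: build one curl invocation with a "DELE ..." quote command per match.
files=$(curl -s -u usr:"pass" -l ftp://11.11.11.11/outgoing/ \
  | grep '^configuration[-][[:digit:]]\+[.]zip$')

# Word splitting on $files is safe here: the regex admits no whitespace in names.
quote_args=()
for f in $files; do
  quote_args+=(-Q "DELE /outgoing/$f")
done

# Only contact the server again if something actually matched.
if [ "${#quote_args[@]}" -gt 0 ]; then
  curl -v -u usr:"pass" "${quote_args[@]}" ftp://11.11.11.11/outgoing/
fi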

Related

hash method to verify integrity of dir vs dir.tar.gz

I'm working on a Python script that verifies the integrity of some downloaded projects.
On my NAS, I have all my compressed folders: folder1.tar.gz, folder2.tar.gz, …
On my Linux computer, the equivalent uncompressed folders: folder1, folder2, …
So, I want to compare the integrity of my files without untarring or downloading anything!
I think I can do it on the NAS with something like (with md5sum):
sshpass -p 'password' ssh login@my.nas.ip tar -xvf /path/to/my/folder.tar.gz | md5sum | awk '{ print $1 }'
This gives me a hash, but I don't know how to get an equivalent hash to compare against the normal folder on my computer. Maybe the way I am doing it is wrong.
I need one command for the NAS, and one for the Linux computer, that output the same hash (if the folders are the same, of course).
If you did that, tar -xvf would actually extract the files, and md5sum would only see the verbose file listing, not the file contents.
However, if you have GNU tar on the server and the standard utility paste, you could create checksums this way:
mksums:
#!/bin/bash
data=/path/to/data.tar.gz
sums=/path/to/data.md5
paste \
  <(tar xzf "$data" --to-command=md5sum) \
  <(tar tzf "$data" | grep -v '/$') \
  | sed 's/-\t//' > "$sums"
Run mksums above on the machine with the tar file.
Copy the sums file it creates to the computer with the folders and run:
cd /top/level/matching/tar/contents
md5sum -c "$sums"
paste joins lines of files given as arguments
<( ... ) is process substitution: it runs a command and makes its output available through a fifo, as if it were a file
--to-command is a GNU tar extension that pipes each extracted member to the given command on its stdin instead of writing it to disk
grep filters out directories from the tar listing
sed removes the extraneous -\t so the checksum file can be understood by md5sum
The above assumes you don't have any very oddly named files (for example, the names can't contain newlines).
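If you would rather compare two checksum lists than run md5sum -c, here is a rough, untested sketch of the computer-side equivalent. It assumes GNU find and xargs, that the archive stores relative paths such as folder1/file, and that the directory you run it in contains only the folders from the archive; local.md5 and nas.md5 are just placeholder names.
# Untested sketch: build checksums for the uncompressed folders in the same
# "hash  relative/path" format that mksums produces, then compare the two lists.
cd /top/level/matching/tar/contents
find . -type f -printf '%P\n' | xargs -d '\n' md5sum | sort -k2 > local.md5
sort -k2 /path/to/data.md5 > nas.md5   # the file produced by mksums, copied over
diff nas.md5 local.md5 && echo "folders match"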

Ruby output as input for system command

I am trying to download a ton of files via gsutil (Google Cloud). You can pass it a list of URLs to download:
You can pass a list of URLs (one per line) to copy on stdin instead of as command line arguments by using the -I option. This allows you to use gsutil in a pipeline to upload or download files / objects as generated by a program, such as:
some_program | gsutil -m cp -I gs://my-bucket
How can I do this from Ruby, from within the program I mean? I tried to output them but that doesn't seem to work.
urls = ["url1", "url2", "url3"]
`echo #{puts urls} | gsutil -m cp -I gs://my-bucket`
Any idea?
A potential workaround would be to save the URLs in a file and use cat file | gsutil -m cp -I gs://my-bucket but that feels like overkill.
Can you try echo '#{urls.join("\n")}' inside the backticks instead?
If you use puts, it returns nil rather than the string you want, so the interpolation inserts nothing; that is why your version fails.
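A sturdier route, if you want one (a rough, untested sketch): skip the shell string entirely and write the URLs straight to gsutil's stdin with IO.popen, which is exactly the pipeline the -I option expects. It assumes gsutil is on the PATH and uses the gs://my-bucket name from the question.
# Rough sketch: stream the URLs to gsutil's stdin instead of building a shell string.
urls = ["url1", "url2", "url3"]

IO.popen(["gsutil", "-m", "cp", "-I", "gs://my-bucket"], "w") do |io|
  io.puts(urls)   # puts writes each array element on its own line
end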

iterate through specific files using webHDFS in a bash script

I want to download specific files from an HDFS directory, with names starting with "total_conn_data_". Since I've got many files, I want to write a bash script.
Here's what I do:
myPatternFile="total_conn_data_*.csv"
for filename in `curl -i -X GET "https://knox.blabla/webhdfs/v1/path/to/the/directory/?OP=LISTSTATUS" -u username`; do
curl -i -X GET "https://knox.blabla/webhdfs/v1/path/to/the/directory/$filename?OP=OPEN" -u username -L -o "./data/$filename" -k;
done
But it does not work, since curl -i -X GET "https://knox.blabla/webhdfs/v1/path/to/the/directory/?OP=LISTSTATUS" -u username sends back JSON text, not file names.
How should I do this? Thanks
curl just returns the raw response, which for LISTSTATUS is JSON. You will have to use a JSON processor such as jq (together with grep or sed if you like) to parse that output and get the list of files.
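For example, here is a rough, untested sketch with jq: the LISTSTATUS response nests the entries under FileStatuses.FileStatus, and each entry's pathSuffix field holds the file name, so you can filter on the prefix there before opening each file. Host, path and user are copied from the question; jq is assumed to be installed.
# Untested sketch: list once, extract matching names with jq, then fetch each file.
baseUrl="https://knox.blabla/webhdfs/v1/path/to/the/directory"

# Note: no -i on the listing call, the response headers would confuse jq.
curl -s -X GET "$baseUrl/?OP=LISTSTATUS" -u username -k \
  | jq -r '.FileStatuses.FileStatus[].pathSuffix | select(startswith("total_conn_data_"))' \
  | while read -r filename; do
      curl -X GET "$baseUrl/$filename?OP=OPEN" -u username -L -o "./data/$filename" -k
    done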

Searching Linux Files for a string (i.e. root credentials)

As part of our audit policy, I need to search all files on a Linux machine for any file that contains the root credentials.
This command will be run by a non-root account, so the result will include many "Permission denied" messages.
Any suggestion for the proper syntax to search all files and filter the result down to useful hits only?
I tried:
grep - "root" / | grep "password"
However, as this command is run using a non-root account, most of the result is "Permission denied".
Thanks
The permission errors are written to stderr, so you can simply redirect that to /dev/null. E.g.:
grep -R "root" . 2> /dev/null
You would go:
grep -lir "root" /
The -l switch outputs only the names of files in which the text occurs (instead of each line containing the text), the -i switch ignores the case, and the -r descends into subdirectories.
EDIT 1:
Running it as non-root will be fine, as long as you're not trying to read other users' files.
EDIT 2:
To keep only useful results, go with:
grep -lir "root" / 2>&1 | grep -v "Permission denied"
The -v switch is for inverting the sense of matching, to select non-matching lines.
However, as this command is run using a non-root account, most of the result is "Permission denied"
Use sudo to run this recursive grep:
cd /home
sudo grep -ir 'root' *
You can suppress the warnings with a redirection to /dev/null.
This solution uses find to walk the whole (accessible) filesystem:
find / -readable -exec grep -H root '{}' \; 2>/dev/null | grep password

Save file to specific folder with curl command

In a shell script, I want to download a file from some URL and save it to a specific folder. What is the specific CLI flag I should use to download files to a specific folder with the curl command, or how else do I get that result?
I don't think you can give a target directory to curl, but you can cd to the location, download, and cd back.
cd target/path && { curl -O URL ; cd -; }
Or using a subshell:
(cd target/path && curl -O URL)
Both ways will only download if the path exists. -O keeps the remote file name. After the download you are back in the original directory: the first form runs cd -, and the second never changes the parent shell's directory because the cd happens in a subshell.
If you need to set the filename explicitly, you can use the lowercase -o option:
curl -o target/path/filename URL
The --output-dir option is available since curl 7.73.0:
curl --create-dirs -O --output-dir /tmp/receipes https://example.com/pancakes.jpg
curl doesn't have an option for that (without also specifying the filename), but wget does. The directory can be relative or absolute. Also, the directory will automatically be created if it doesn't exist.
wget -P relative/dir "$url"
wget -P /absolute/dir "$url"
It works for me:
curl http://centos.mirror.constant.com/8-stream/isos/aarch64/CentOS-Stream-8-aarch64-20210916-boot.iso --output ~/Downloads/centos.iso
where:
--output lets me set the path, the file name, and the extension of the file I want to save.
Use redirection:
This works to drop a curl downloaded file into a specified path:
curl https://download.test.com/test.zip > /tmp/test.zip
Obviously "test.zip" is whatever arbitrary name you want to label the redirected file- could be the same name or a different name.
I actually prefer #oderibas solution, but this will get you around the issue until your distro supports curl version 7.73.0 or later-
For PowerShell on Windows, you can add a relative path plus filename to the --output flag:
curl -L http://github.com/GorvGoyl/Notion-Boost-browser-extension/archive/master.zip --output build_firefox/master-repo.zip
Here build_firefox is a relative folder.
Use wget
wget -P /your/absolute/path "https://jdbc.postgresql.org/download/postgresql-42.3.3.jar"
For Windows, in PowerShell, curl is an alias of the cmdlet Invoke-WebRequest and this syntax works:
curl "url" -OutFile file_name.ext
For instance:
curl "https://airflow.apache.org/docs/apache-airflow/2.2.5/docker-compose.yaml" -OutFile docker-compose.yaml
Source: https://krypted.com/windows-server/its-not-wget-or-curl-its-iwr-in-windows/
Here is an example using Batch to create a safe filename from a URL and save it to a folder named tmp/. I do think it's strange that this isn't an option on the Windows or Linux Curl versions.
@echo off
set url=%1
for /r %%f in (%url%) do (
  set url=%%~nxf.txt
  curl --create-dirs -L -v -o tmp/%%~nxf.txt %url%
)
The above Batch file takes a single input, a URL, and creates a filename from the URL. If no filename is specified, it will be saved as tmp/.txt. So it's not all done for you, but it gets the job done on Windows.
