date no such file or directory - bash

I'm trying to script something that isn't outputting quite correctly with the date command. Here's the contents of what I have thus far:
#!/bin/bash
# Get RPM manifest
# Output written to /tmp
NOW=$(date +%D)
rpm -qa --qf="%{NAME}.%{ARCH}\n" | sort > /tmp/$HOSTNAME.RPM_Manifest.$NOW.txt
When I run this script, I get this message:
[root@linmachine1 ~]# sh /usr/local/bin/rpm_manifest.sh
/usr/local/bin/rpm_manifest.sh: line 7: /tmp/linmachine1.RPM_Manifest.03/01/17.txt: No such file or directory
I suspect the date formatting in the NOW variable I'm defining is the culprit. I've tried with and without quotes and get the same result. Looking at the man pages, I didn't see a way to change the default behavior so that the forward slashes would be replaced by dots, as I believe this is where the problem lies.
EDIT: Thanks for all of your responses. I'm not really sure why this was downvoted, though. I asked a legitimate question. What gives?

Yes, you shouldn't have slashes in a file name: the shell treats / as a directory separator, so the redirect tries to write into nonexistent subdirectories under /tmp.
Use:
now=$(date "+%d.%m.%Y")
rpm -qa --qf="%{NAME}.%{ARCH}\n" | sort > "/tmp/$HOSTNAME.RPM_Manifest.$now.txt"
instead, or replace the . with whatever separator you prefer.
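Another option is date's %F format (equivalent to %Y-%m-%d), which never contains slashes. A minimal sketch of the original script with that change (the rpm call is guarded so the sketch also runs on systems without rpm):

```shell
#!/bin/bash
# Get RPM manifest; %F expands to YYYY-MM-DD, which contains no slashes
now=$(date +%F)                                   # e.g. 2017-03-01
out="/tmp/$HOSTNAME.RPM_Manifest.$now.txt"
# Guarded so the sketch is runnable even where rpm is not installed
command -v rpm >/dev/null &&
    rpm -qa --qf="%{NAME}.%{ARCH}\n" | sort > "$out"
```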

Related

Formatting timestamp in bash for use in a file operation

I'm getting some strangeness from bash when I try to use a timestamp as part of a filename.
#!/bin/bash
DATE=`date -d "today" +"%Y%m%d-%H:%M"`
dtl=$DATE.log
for drive in $( ls /dev/disk/by-id | grep 'scsi-35' ); do
mkdir -p /home/tt/drivelog/${drive}
cp /home/tt/drivelog/currentset/$drive.log "/home/tt/drivelog/$drive/$dtl"
done
The above results in a file named 20171122-12/15.log, so my colon has turned into a forward slash, which is not what I want.
I tried (to no avail) escaping out the colon by using:
DATE=`date -d "today" +"%Y%m%d-%H\:%M"`
which results in a file named 20171122-12\/15.log
I used double quotes to ensure there was no ambiguity in the reference, which can happen with colons in filenames. That didn't fix it either.
When I try some debugging and just echo the source and destination portions of the cp command, it looks right. But that normality disappears when I join them together in the cp command. Echo output:
/home/tt/drivelog/currentset/scsi-35000c50094vv123z.log
/home/tt/drivelog/scsi-35000c50094vv123z/20171122-11:55.log
Lastly, substituting .../${drive}/${dtl}" doesn't fix it...
Many thanks!
for John1024:
I made sure date was working, output from date cmd:
20171122-12:47
and as reported in bash:
+ dtl=20171122-12:50.log
Using bash to run the script highlighted the issue:
1. The command is working properly...
+ cp /home/tt/drivelog/currentset/scsi-35000c50094aa123z.log /home/tt/drivelog/scsi-35000c50094aa123z/20171122-12:50.log
The issue is that the Mac on which I am looking at the folder is not showing the output properly.
ls in the output directory shows:
20171122-11:58.log
20171122-12\:00.log
20171122-12\:27.log
20171122-12\:48.log
20171122-12:50.log
Yet the view of this from my Mac drops the colon
I'm going to mark this as closed, as the underlying issue is a Mac AFP display incongruity, not a bash issue. Mac OS used colons as path separators when I first started using it in 1984. With the move to OS X, now eons ago, that changed. AFP and third-party implementations of AFP come with "YMMV" caveats, and this is apparently one I missed.
Many thanks to John1024
The issue here is that the underlying colon is not showing properly over AFP.
The code above does, actually, generate colons as intended. See here for more on the idiosyncrasies of OS X (and prior versions).
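If portability across AFP/SMB clients matters, the simplest fix is to keep the colon out of the name entirely. A sketch of the original timestamp line with a colon-free format (date -d is GNU-specific):

```shell
#!/bin/bash
# Dropping the colon from the format sidesteps the AFP display quirk entirely
DATE=$(date -d "today" +"%Y%m%d-%H%M")   # e.g. 20171122-1215, no colon
dtl="$DATE.log"
echo "$dtl"
```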

How do I get the 'head' of all files in a specified directory?

I am a beginner to UNIX. I'm trying to create a bash script that lists the 'head' of every file in a specified directory, but I've tried everything and it doesn't seem to work. How would I do it? Below is the code I currently have in my script. I intend to add more to the script later on, but need this to work first.
numberOfLines=$1
directoryName=$2
head $numberOfLines $directoryName
Try this:
head $directoryName/* -n $numberOfLines
You are calling the head command the wrong way.
Compare your code to the manual page.
I would use the find command:
find "$directory" -maxdepth 1 -type f -exec head -n "$numberOfLines" {} \;
This ensures that head will be executed only on files and not directories.
Head works on a file (or group of files), not a directory, so you need to adjust your directoryName variable so that you're telling the shell interpreter you mean "every file in this directory" and not a directory.
The easiest way would be to add "/*" to the directoryName, changing your third line to this:
head $numberOfLines ${directoryName}/*
Example:
myshell:tmp gdalton$ ./script.sh -2 hello
==> hello/file1 <==
file 1
==> hello/file2 <==
file 2
file 2
Note that you will need to invoke your first parameter with the dash, as I did in the example, because of the syntax for the head command. You could easily fix this in your code, using the change I made as a jumping-off point. I'd strongly advise you to check the man pages for head so you can figure out how to structure your shell commands; they often contain a wealth of options for these commands.
man head
Good luck.
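Putting the suggestions above together, a minimal version of the script using the find approach with proper quoting (the defaults are just so the sketch runs standalone):

```shell
#!/bin/bash
# Print the first N lines of every regular file directly inside a directory
numberOfLines=${1:-10}    # default 10 lines if not given
directoryName=${2:-.}     # default to current directory

# -maxdepth 1 avoids recursing into subdirectories; -type f skips directories
find "$directoryName" -maxdepth 1 -type f -exec head -n "$numberOfLines" {} +
```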

Edit conf file in Ubuntu through one line command

I want to change my default web root folder of apache2 web server, but through command line from a script I am making.
I know to do it through nano/vim and then go to the line and change it manually, but I want to make it by a command line.
I thought about something like (the syntax is wrong, I know, just to make my point):
vim /etc/apache2/sites-enabled/000-default.conf | find 'DocumentRoot /var/www' | replace 'DocumentRoot /var/www/myFolder'
maybe not with vim but some other tool?
Any idea?
Thanks
Use sed with argument -i.
sed -i 's-/var/www-&/MyFolder-' /etc/apache2/sites-enabled/000-default.conf
Argument -i enables in-place editing.
You should use sed with the substitute command for that kind of operation.
http://www.grymoire.com/Unix/Sed.html#uh-0
I don't have a Unix machine at hand, but something like this should work (using # rather than the usual / as the separator; without -i the result is printed to stdout rather than written back to the file):
sed 's#/var/www#/var/www/MyFolder#' /etc/apache2/sites-enabled/000-default.conf
Even if it is not your question, since your initial question mentioned Vim, you can also use substitute from inside Vim
Like
:%s#/var/www#/var/www/MyFolder#g
% means operate on every line of the file
g means globally: replace every occurrence on each line, not just the first
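Before touching the real conf, the substitution can be rehearsed on a throwaway file. A sketch assuming GNU sed (on BSD/macOS, -i needs an explicit suffix argument, e.g. -i ''):

```shell
#!/bin/bash
# Rehearse the edit on a temp file (the real target would be the Apache conf)
conf=$(mktemp)
echo 'DocumentRoot /var/www' > "$conf"
sed -i 's-/var/www-&/MyFolder-' "$conf"   # & re-inserts the matched text
cat "$conf"                               # DocumentRoot /var/www/MyFolder
rm -f "$conf"
```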

AIX text formatting

AIX Version 6.1
I'm trying to write a script to pull times out of a program to send to Zabbix, and I want to modify the formatting of the times returned.
At present, when I pull the time it returns like so: [15:48:30]
My goal is to remove the brackets ([]) to then be able to pull the time apart with awk to do calculations on the time to render it down into seconds and draw relevant information from that.
AIX is continually giving me errors with every form of text formatting I can think of/find.
Ex: echo $unformattedtime | awk '{print substr($0,1,8)}'
gives me permission errors even though I've already run chmod 777 on the script. I've seen fixes that require going in and making root changes, and while that's possible for me to do, the script needs to run as a non-root user for what it's being designed to do.
Failing manipulating the unformattedtime variable, I tried putting it into a text file and manipulating it with tr.
Ex: tr -s [] '' < timet.txt > timetformat.txt
Where timet.txt simply had the '[15:48:30]' put in using vi. This simply returned a "tr:Error 0"
Is there some sort of AIX specific method of doing modifications that I'm missing? Or just anything that would accomplish the goal here?
Thanks!
Answer from my comments:
tr -s is going to leave single square brackets unmolested, this isn't what you want. (Though I don't understand that error unless it is a quoting issue on [] in the shell. Try '[]' instead.)
tr -d '[]' is probably more what you want there.
It might also be possible, depending on the shell, to avoid the temporary files by using a HEREDOC:
tr -d '[]' <<EOF
$unformattedtime
EOF
There can be no leading spaces before the final EOF there for the record.
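End to end, the bracket stripping plus the awk seconds calculation the question is aiming for would look something like this (the time value is hardcoded for illustration):

```shell
#!/bin/bash
# Strip the brackets, then split HH:MM:SS on ':' and convert to seconds
unformattedtime='[15:48:30]'              # sample value for illustration
cleantime=$(printf '%s\n' "$unformattedtime" | tr -d '[]')
seconds=$(printf '%s\n' "$cleantime" | awk -F: '{ print $1*3600 + $2*60 + $3 }')
echo "$cleantime -> $seconds"             # 15:48:30 -> 56910
```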

Loop through a directory with Grep (newbie)

I'm trying to loop through the current directory that the script resides in, which has a bunch of files ending with _list.txt. I would like to grep for each file name and assign it to a variable, then execute some additional commands and move on to the next file until there are no more _list.txt files to be processed.
I assume I want something like:
while file_name=`grep "*_list.txt" *`
do
Some more code
done
But this doesn't work as expected. Any suggestions of how to accomplish this newbie task?
Thanks in advance.
If I understand your problem correctly, you don't need grep. You can just do:
for file in *_list.txt
do
# use $file, like echo $file
done
grep is one of the most useful commands in Unix, and it's worth learning well; see some useful examples here. As for your current requirement, though, I think the following code will be useful:
for file in *.*
do
echo "Happy Programming"
done
In place of *.* you can use other shell glob patterns (note these are globs, not regular expressions). For more such useful examples, see First Time Linux, or read all grep options at your terminal using man grep.
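Tying this back to the original _list.txt requirement, a sketch of the glob loop; with bash's nullglob the loop body is skipped entirely when nothing matches, instead of iterating once over the literal pattern:

```shell
#!/bin/bash
shopt -s nullglob            # pattern expands to nothing if no files match
for file in *_list.txt; do
    echo "processing $file"
    # ... additional commands using "$file" ...
done
```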
