Is there a way to see what files another team member has recently edited in Cloud9?

For example, if Joe logs in to Cloud9 and changes code in six files, then logs out, how would I know what files he has changed?

Right now, the only way I can think of is finding which files were modified last. From there on, you can use the File History viewer to see what modifications were made to each file by your team member:
Within Terminal:
find "$1" -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | sed '/\.\/\.c9\//d' | head
I took this script from the answer below and modified it to ignore files within the .c9 folder:
How to recursively find and list the latest modified files in a directory with subdirectories and times?
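As a convenience, here is the same idea wrapped in a small script; the script name, the default path, the result count, and the use of find's -not -path in place of the sed filter are my assumptions, so treat it as a sketch:
#!/bin/bash
# recent-files.sh (hypothetical name): list the most recently modified files
# under a directory, ignoring Cloud9's internal .c9 folder.
dir="${1:-.}"                                   # default to the current directory
find "$dir" -type f -not -path '*/.c9/*' -print0 \
  | xargs -0 stat --format '%Y :%y %n' \
  | sort -nr \
  | cut -d: -f2- \
  | head -n 20                                  # show the 20 newest files (assumed count)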
Hope this helps!

Related

bash script: find recent file in subdirectories [duplicate]

I'm trying to find the path to the latest edited file in a subdirectory matching a certain pattern. For that I want to use a bash script to get the file path and then use it for a bunch of actions.
So for example I would have a tree like this:
Directory/
    dir 1.0/
        fileIwant.txt
        fileIdont.txt
    dir 2.0/
        fileIwant.txt
        fileIdont.txt
    dir 2.1/
        fileIwant.txt
        fileIdont.txt
So in this case, I would want to find the latest "fileIwant.txt" (probably the one in "dir 2.1" but maybe not). Then I want to store the path of this file to copy it, rename it or whatever.
I found some scripts to find the latest file in the current directory, but I didn't really understand them... So I hoped someone could give me a hand.
Thanks!
Edit:
Thanks to the different answers I ended up with something like this, which works perfectly:
filename="fileIwant"
regex=".*/"$filename"-([0-9]+(\.[0-9]+)*).*/"$filename"\.txt"
path="$(find . -regextype awk -regex $regex -type f -printf '%T# %p\n' | sort -n | cut -f2- -d" " | tail -1)"
(it doesn't exactly match my previous example obviously)
This will help you,
find /yourpathtosearch -type f -printf "%T@ %p\n" | sort -nr | head -n 1 | awk '{print $2}'
man <command name> will give you more details on each of these commands.
E.g.: man find
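One caveat: awk '{print $2}' keeps only the second whitespace-delimited field, so a path containing spaces gets truncated. A sketch of a space-tolerant variant (assuming GNU find), using cut to keep everything after the timestamp:
find /yourpathtosearch -type f -printf "%T@ %p\n" | sort -nr | head -n 1 | cut -d' ' -f2-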
This can easily be done with only two commands:
ls -t Directory/dir*/fileIwant.txt | head -n 1
The command ls with the option -t sorts its results by modification time, newest first, according to the man page.
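To actually store that path for the follow-up actions (copying, renaming, and so on), a minimal sketch; the destination path is hypothetical:
newest="$(ls -t Directory/dir*/fileIwant.txt | head -n 1)"   # path of the newest match
cp -- "$newest" /some/destination/                           # hypothetical destination; use mv or rename as needed
As usual when parsing ls output, this assumes the paths contain no newlines.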

Automator - find and move most recent file in filtered list

I'm trying to use Automator to:
1. get a filtered list of files in a folder that are more than 30 days old (ok)
2. move the most recent of those (if there are any) to an existing subfolder
3. trash the others.
Step 1 is easy enough, but I haven't been able to find a way to do step 2 -- select and move the most recent of the filtered set.
The outcome would be: if the workflow is scheduled once a month, then the subfolder will contain one file per month, and the parent folder will contain only files less than 31 days old.
Is there a way to do that?
UPDATE
I tried adding a shell script to the automator workflow
fn=$(ls -t | head -n1)
mv -f -- "$fn" ./<subdirectory>/
But I am having trouble with the path in the second line.
ls -t | head -n1 is good, but be careful: if the subdir is the most recently modified entry, it will take the first slot, not only resulting in an attempt to move that dir into itself (not allowed, and that may be your problem "with the path in the second line"), but potentially deleting the rest of the files, including the one you want to keep. There are many ways to filter out any directories; off the top of my head you could use ls -tp | grep -v '/$' | head -n1. Note that adding a file to a directory updates that directory's mtime (last modified time) on POSIX systems.
Removing all files is easy: once you move out the file you want to keep, just rm *. Note that this will not remove directories (as long as you do not pass -r), which I think is what you want, because it appears you're moving the file you want to keep to a sub-directory of where it was.
You may want to add some error trapping too, so that if a step fails, later steps don't delete files you want to keep. I do not use Automator, but this should work so long as you're using real bash (including under other schedulers, like cron, as long as you get into the correct working directory first):
mv -- "$(ls -tp | grep -v '/$' | head -n1)" subdirectory/ && rm *
&& means do what follows only if what preceeds succeeds. Adding ./ to the beginning of the destination file does nothing, though keeping the / at the end prevents creating a new file named "subdirectory" if it does not already exist. Also, I'm pretty sure the "<>" in the code snippet you sent is to mark it as being different from your actual code, but just in case: Note that the subdirectory, whatever it's called, may need special handling if it does actually contain those characters.
Edit:
I just noticed the constraint in the question: "get a filtered list of files in a folder that are more than 30 days old". So, a slight change (use find to compare the time):
mv -- "$(find -maxdepth 1 -type f -mtime +30 -printf '%T# %f\n' | sort -rn | head -n1 | cut -d\ -f2-)" subdirectory/ && find -maxdepth 1 -type f -mtime +30 -delete
Explanation: find, in the current directory only (-maxdepth 1), files (not directories, -type f) whose mtime is at least 30 days in the past (-mtime +30), and print the modification time and the name (%T@ %f); sort as if it were a number (-n) in reverse order (-r); take only the first (head -n1); extract the filename (the second and following space-delimited fields) and move it to subdirectory/. If that succeeds, delete anything that fits the same find criteria as before.
I would not put the files in an environment variable unless the disk is /very/ slow and uncached. The time spent filtering out the filename you moved probably takes more effort than re-querying the disk, unless you have an insane number of files, in which case they might not fit in the environment section.
Edit 2: KamilCuk is right. Use null-terminated output, since the null byte is the only character not allowed in filenames:
find -maxdepth 1 -type f -mtime +30 -printf '%T@ %f\0' | sort -z -t' ' -r -n -s -k1 | head -z -n1 | cut -z -d' ' -f2- | xargs -0 -I{} mv {} subdirectory/ && find -maxdepth 1 -type f -mtime +30 -delete

Can we store the creation date of a folder (not file) using bash script?

Actually I'm a newbie at Bash and I'm learning with some hands-on practice. I used the following stat command:
find "$DIRECTORY"/ -exec stat \{} --printf="%w\n" \; | sort -n -r | head -n 1 > timestamp.txt
where DIRECTORY is any path, say c:/some/path. It contains a lot of folders. I need to extract the creation date of the latest created folder and store it in a variable for further use. Here I started by storing it in a txt file, but the script never completes; it stays stuck as soon as it reaches this command line. Please help. I'm using Cygwin. I had used --printf="%y\n" to extract the last modified date of the latest folder and it had worked fine.
The command is okay (save for the escaped \{}, which I believe is a mistake in the post). It only seems as if it never finishes; given enough time, it will finish.
Direct approach - getting the path
The main bottleneck lies in executing stat for each file. Spawning a process under Cygwin is extremely slow, and executing one for each of possibly thousands of files is totally infeasible. The only way to circumvent this is to not spawn processes like this.
That said, I see a few areas for improvement:
If you need only directories like the title of your post suggests, you can pass -type d to your find command to filter out any files.
If you need only the modification time (see what directory modification time means on Linux here; I guess this may be similar in Cygwin), you can use find's built-in facilities rather than stat's, like this:
find "$DIRECTORY"/ -type d -printf '%TY-%Tm-%Td %TH:%TM:%TS %Tz %p\n' \
| sort -nr \
| head -n1 \
| cut -f4- -d' '
Example line before we extract the path with cut (most of the stuff in -printf is used to format the date):
2014-09-25 09:41:50.3907590000 +0200 ./software/sqldeveloper/dataminer/demos/obe
After cut:
./software/sqldeveloper/dataminer/demos/obe
It took 0.7s to scan 560 directories and 2300 files.
The original command from your post took 28s without the -type d trick, and 6s with the -type d trick, when run on the same directory.
Last but not least, if $DIRECTORY is empty, your command will scan the whole filesystem tree starting from /, which will take a massive amount of time.
Another approach - getting just the date
If you only need the creation date of a subdirectory within a directory (i.e. not the path to that subdirectory), you can probably just use stat on the parent directory:
stat --printf '%Y' "$DIRECTORY"/
I'm not sure whether this includes file creations as well, though.
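If that is enough, capturing it in a variable (the original goal) is a one-liner; a sketch, where %Y gives seconds since the epoch and GNU date (as shipped with Cygwin) turns it back into a readable date:
last_changed="$(stat --printf '%Y' "$DIRECTORY"/)"   # directory mtime as seconds since the epoch
date -d "@$last_changed"                             # print it in human-readable form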
Alternative approaches
Since getting the last created folder is clearly expensive, you could also either:
Save the directory name somewhere when creating said directory, or
Use a naming convention such as yyyymmdd-name-of-directory, which doesn't require any extra syscalls - just find -type d|... (see the sketch below).
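For instance, with such a date-prefixed convention (the name pattern below is an assumption), the newest directory falls out of a plain lexical sort:
newest_dir="$(find "$DIRECTORY" -maxdepth 1 -type d -name '[0-9]*' | sort | tail -n 1)"
echo "$newest_dir"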
You could add a -type d option to include only the directories from the current folder and, as discussed in the comments section, if you need the output from stat in just yyyy-mm-dd format, use awk as below.
find "$DIRECTORY"/ -type d -exec stat \{} --printf="%w\n" \; | sort -n -r | head -n 1 | awk '{print $1}'
To store the value in a bash variable:
$ myvar=$(find "$DIRECTORY"/ -type d -exec stat \{} --printf="%w\n" \; | sort -n -r | head -n 1 | awk '{print $1}')
$ echo $myvar
2016-05-20

bash recursive concatenation over multiple folders, per folder

I have a system that creates a folder per day, with a txt file generated every 10 minutes.
I need to write a bash script that runs from the start folder over each day's folder, merges all txt files into one file per day, and writes this file into a destination folder.
The last solution I had was something like this:
for i in $dirm;
do
ls -1U | find . -name "*.txt" | xargs cat *.txt > all
cut -c 1-80 $i/all > $i/${i##*/}
.....
done
For some reason I can't get the loop right to go through each folder. This finds all .txt files, but not per folder. The cut is there because I only need the first 80 characters.
It's probably a really easy problem, but I can't get my head around it.
I assume $dirm is the directory list; then you should find from $i and not from the current directory (.):
for i in $dirm;
do
find "$i" -name "*.txt" | xargs cat > "$i"/all
cut -c 1-80 "$i"/all > "$i"/"${i##*/}"
.....
done
I think you're trying to combine the output of ls and find. To do that, piping one command into the other does not work. Instead, run them together in a subshell:
(ls; find) | xargs...
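Putting the pieces together, here is a sketch of the whole per-day merge; the destination path is hypothetical, and it assumes $dirm holds one folder per day:
dest=/path/to/destination                       # hypothetical destination folder
for i in $dirm; do
    # merge every .txt in this day's folder, keeping only the first 80 characters of each line
    cat "$i"/*.txt | cut -c 1-80 > "$dest/${i##*/}"
done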

Shell script to list out the larger files in a dir recursively

I need a shell script to list the largest files in a huge directory tree, recursively. I am using:
find <path> -mtime +20 -exec ls -ls {} \; | sort -n -r | head -100 | awk '{print $10}'
Issues:
Slow execution
I do not have read permissions inside a few sub-directories
Is there any better way to achieve this? I have tried:
du <path> | sort -n -r | head -n 100
Much faster, but not as effective.
Depending on the size distribution of the files your find is finding, you might consider using the -size predicate to weed out a lot of the smaller fish before the list gets dumped onto sort. If this is something you run regularly, make a note when you start getting fewer than 100 lines out of head and use that as an indication that it's time to lower the size limit you're giving find.
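For instance, a sketch of that idea with an assumed 10 MB cut-off (adjust the threshold to your data; -type f keeps directories out of the ls -ls listing):
find <path> -type f -mtime +20 -size +10M -exec ls -ls {} \; | sort -n -r | head -100 | awk '{print $10}'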
Lack of permissions is not a problem you're going to be able to overcome without getting the permissions on the directories in question changed or escalating your privileges so you can read them.
du is almost there, try
du -aS | sort -n -r | head -n 100
which returns only the large files, excluding any directories.
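If your du and sort are the GNU versions (an assumption), you can get the same list with human-readable sizes:
du -aSh | sort -rh | head -n 100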
find has a handy -printf directive:
find . -type f -printf "%s\t%p\n" | sort -nr | head -n 100
find -size +<size>k -atime +<days> -printf "%s,\t%a,\t%p\n" | sort -nr
will give you the desired output.
Here the size is in kB and the age is the last access time.
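For instance, with assumed thresholds of 10240 kB and 30 days, and -type f added so directories are skipped:
find . -type f -size +10240k -atime +30 -printf "%s,\t%a,\t%p\n" | sort -nr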
