Find last added file in directory - bash

I'm trying to find a simple command in bash that opens the last added file in a directory. I came across this and a few other questions, but they were always about the last modified file, not the last added one. Is there a simple way to do this?

I think this answers why what you need is not possible. Linux filesystems either simply don't record creation time, or, if they do, there is no way to access it through standard command-line tools. However, there are ways to get at it, as described here for example.
For systems that do record creation time, such as Mac OS X as mentioned in the other question, you can adapt the code you found by using:
ls -tU | head -n1
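If you want to open that newest file directly rather than just list it, a minimal sketch (assuming macOS, where ls -tU sorts by creation time and open launches the default application, and assuming the file name contains no newline) is:
open "$(ls -tU | head -n1)"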

Related

Monitor A File For Additions And Get Last Added Line

I'm having trouble monitoring a file for changes. I need to be able to know when a file changes, and when it does, I need the new line that was added. I intend to parse each line and find ones that match certain criteria, and act on information in those lines. I know the expected number of matching lines ahead of time, but I do not know how many lines in total will be added to the file, or where the matching lines will be.
I've tried two packages so far, to no avail.
fsnotify/fsnotify
As far as I can tell, fsnotify can only tell me when a file was modified, not what the modification actually was. Since I need to know exactly what was added to the file, this is no good for me.
(As a side question, can this be run in a loop? The example that I tried exited after just one modification; I need to monitor for multiple modifications.)
hpcloud/tail
This package tries to mimic the Unix tail command, but it seems to have its own issues. The output I get includes timestamps and other data; I just want the added line, nothing else. It also seems to think a file has been modified several times even when there was just one edit. The deal-breaker, though, is that it does not output the last line if that line is not followed by a newline character.
Delegating to tail
I came across this answer, which suggests delegating this work to the tail command itself, but I need this to work cross-platform (specifically macOS, Linux, and Windows). I don't believe an equivalent command exists on Windows.
How do I go about tackling this?
@user2515526,
Diffs of the changed content are usually out of scope for file watchers, because, you know, you could change an image, and the watcher would then have to keep several MB of diff in memory; and what if there are thousands of files?
However, as bad as it sounds, this may be exactly the way you want to implement it (it depends on your app, of course; it could be fine for text files), i.e. keeping a map of diffs (one diff per file) since the last modification. I can't say I like it, but it sounds like fsnotify has no support for the changes/diffs that you need.
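To make that per-file bookkeeping concrete for append-only text files, here is a rough shell sketch of the idea (the watched file name is a placeholder; in Go you would keep the last-seen size in a map keyed by path and, on each notification, read only the newly appended bytes):
watched=app.log                          # placeholder file name
offset_file=/tmp/app.log.offset          # remembers how much of the file was already read
prev=$(cat "$offset_file" 2>/dev/null || echo 0)
now=$(wc -c < "$watched" | tr -d ' ')
if [ "$now" -gt "$prev" ]; then
    tail -c +"$((prev + 1))" "$watched"  # print only the bytes added since the last check
fi
echo "$now" > "$offset_file"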
Also, regarding your question about running in a loop, maybe you can get some hints here: https://github.com/kataras/iris/blob/8370d76910cdd8de043753ed81ae080eae8dc798/utils/file.go
It's a framework that builds a server which watches for TypeScript file changes, so it sounds similar to your case/question.
Cheers,
-D

gVim startup message & info suppression

I got a partial answer to my problem before, and want to solve this problem fully now. The final line of my /Program Files/GNU/vim/_vimrc is
source /homedir/vimsession_file
The filenames that I edit do not change; only their content changes. But once in a while I create a new session file before exiting vim, using
:mks! /homedir/vimsession_file
Every time I start gVim, I get a message box listing all the files (which I load into the multiple tabs that I have) with a line number and character count for each. More detail can be found in my original post here.
Currently, I am not using the solution proposed in the link given above. The solution I got there was to replace the final line of /Program Files/GNU/vim/_vimrc with the following line:
autocmd VimEnter * source /homedir/vimsession_file
The reason I stopped using that solution is that my buffers were all getting wiped out (as described in the original post). So I was forced to rebuild my buffers every once in a while when I restarted gVim.
I did search and read in order to solve this on my own, but the closest solution I found was here on Stack Overflow, and it did not work for me either, despite playing with the shortmess variable as suggested there. How can I stop this annoying message box, with its OK button, that pops up before gVim starts? I want to suppress it because the only info it gives me is the line and character count for each file. (NOTE: I looked into /homedir/vimsession_file and it is about 3500 lines long. I noticed that the file names occur with badd followed by an edit command. For example, I have line 96 and line 164 as below:
Line 96 : badd +16 \Program\ Files\GNU\vim\_vimrc
........
Line 164: edit \Program\ Files\GNU\vim\_vimrc
This pattern repeats for all the other files that get loaded into multiple windows/tabs.)
I wanted to post the answer here because there seem to be very few vim experts who regularly look at Stack Overflow, and I didn't have the patience to wait many days for an answer. I found the answer to my own question after reading the vim help file called "starting.txt", which explains vim's startup/init process and the different initialization files used. The following steps removed the annoying pop-up message box for me, and also made vim start much faster than before thanks to the simplification.
As suggested in the starting.txt help file, I separated my numerous tabs/windows/files into different sessions. I recommend reading through this help file (or at least browsing it) if you are a regular user of vim.
Previously I was lumping numerous files (from different projects) into one single vim session. This is messy and is not the recommended way to use a vim session file. Before creating the different session files (one per project), I first saved my old vim session file so that I could reuse it while creating the new ones. You will understand the process clearly if you look at the help file.
I cleaned up my startup procedure further by setting up a new viminfo file, adding the "-i" parameter to the vim.exe shortcut used for starting. This was pointed to a new directory and hence gave a fresh start.
The main init files used by vim are the viminfo file, the vimrc, and the session file; each is meant for a different purpose. My problem was caused by having source /homedir/vimsession_file as the final line of my vimrc. So I removed that line, and instead I now issue that command manually after vim starts up (with an empty window). I can source different vimsession_files in order to load different sets of files that belong together. On my machine this command takes about a second to load the many tabs/windows/files belonging to a single project/sub-project.
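As an illustration of that setup (the paths and file names below are placeholders, not the actual files), the per-project viminfo and session can also be passed straight on the command line, since gVim accepts -i to select the viminfo file and -S to source a session file after startup:
gvim -i /homedir/viminfo_projectA -S /homedir/vimsession_projectA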
As pointed out in the original post linked in my question, there may be another way to resolve this by creating and examining a vim log file, but I didn't want to bother with that tedious process. The way I am set up now makes more sense to me, because I have various subprojects which properly belong in different vim sessions.

OSX: How to force multiple-file-copy operation to plough through errors

This is so wrong.
I want to perform a large copy operation; moving 250 GB from my laptop hard drive to an external drive.
OS X Lion claims this will take about five hours.
After a couple of hours of chugging, it reports that one particular file could not be copied (for whatever reason; I cannot remember and I don't have the patience to repeat the experiment at the moment).
And on that note it bails.
I am frankly left aghast.
That this problem persists in this day and age is to me scarcely believable. I remember hitting up against the same scenario 20 years back with Windows 3.1.
How hard would it be for the folks at Apple (or Microsoft, for that matter) to implement file copying in such a way that it skips over failures, writing a list of failed operations on the fly to stderr? And how much more useful would that implementation be? (Both these questions are rhetorical, by the way; they are simply an expression of my utter bewilderment. Please don't answer them except by means of comments or supplements to an answer to the actual question, which follows.)
More to the point (and this is my actual question), how can I implement this myself in OS X?
PS I'm open to all solutions here: programmatic / scripting / third-party software
I hear and understand your rant, but this is bordering on being a SuperUser-type question rather than a programming question (saved only by the fact that you said you would like to implement this yourself).
From the description, it sounds like the Finder bailed when it couldn't copy one particular file (my guess is that it was looking for admin and/or root permission for some privileged folder).
For massive copies like this, you can use the Terminal command line:
e.g.
cp
or
sudo cp
with options like "-R" (which continues copying even if errors are detected, unless you're using "legacy" mode) or "-n" (don't copy if the file already exists at the destination). You can see all the possible options by typing "man cp" at the Terminal command line.
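For example, here is a sketch along those lines (the source and destination paths are placeholders): copy recursively, preserve attributes, show each file as it is copied, and send any errors to a log you can review afterwards.
cp -Rpv /Users/me/BigFolder /Volumes/ExternalDrive/ 2> ~/copy_errors.log
The 2> redirection gives you the on-the-fly list of failed operations on stderr that you asked for.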
If you really wanted to do this programmatically, there are options in NSWorkspace: the performFileOperation:source:destination:files:tag: method (documentation linked for you; look at the NSWorkspaceCopyOperation constant). You can also do more low-level stuff via NSFileManager and its copyItemAtPath:toPath:error: method, but that's really getting into brute-force approaches.

Process Management w/ bash/terminal

Quick bash/terminal question -
I work a lot on the command line, but have never really had a good idea of how to manage running processes with it. I am aware of ps, but it always gives me an exceedingly long and esoteric list of junk, including something like 30 Google Chrome workers, and I always end up going back to Activity Monitor to get a clean look at what's actually going on.
Can anyone offer a bit of advice on how to manage running processes from the command line? Is there a way to get a clean list of what you've got running? I often use killall on process names that I know as a quick way to get rid of something that's freezing up; can I get those names to display via the terminal rather than the strange long names and numbers that ps shows by default? And can I search for a specific process, or a quick regex of a process name, like '*ome'?
If anyone has the answers to these three questions, that would be amazingly helpful to many people, I'm sure : )
Thanks!!
Yes, grep is good.
I don't know exactly what you want to achieve, but do you know the top command? It gives you a dynamic view of what's going on.
On Linux you have plenty of commands that should help you get what you want in a script, and piping commands together is one of the basics taught when studying IT.
You can also take a look at the man page for jobs, and I would advise you to read some articles about process management basics. :)
Good luck.
ps -o command
will give you a list of just the process names (more exactly, the command that invoked each process). Use grep to search, like this:
ps -o command | grep ".*ome"
There may be scripts out there, but, for example, if you're seeing a lot of Chrome processes that you're not interested in, something as simple as the following would help:
ps aux | grep -v chrome
Other variations could help by showing each image only once, so you get one chrome, one vim, etc. (google "show unique rows" with perl or python or sed, for example).
You could also use ps to specify one username, so you filter out system processes, or other users' processes if more than one user is logged in to the machine, etc.
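For instance, a couple of sketches along those lines (assuming the BSD ps that ships with macOS; the flags differ slightly on Linux):
ps -axo comm | sort | uniq -c | sort -rn | head -20   # each command name once, noisiest first
ps -axo user,comm | grep "^$USER" | sort -u           # only your own processes, one line per name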
ps is quite versatile with its command-line arguments; a little digging helps in finding a lot of nice tweaks and flags in combination with other tools such as perl and sed.

command line wisdom for 2 panel file manager user

I want to upgrade my file-management productivity by replacing my 2-panel file manager with the command line (bash or cygwin). Can the command line give the same speed? Please advise on a guru's way of doing things, e.g. copying some file in directory A to directory B. Is it heavy use of pushd/popd? Or the creation of links to the most often used directories? What are the best practices and day-to-day routine for managing files as a command-line master?
Can the command line give the same speed?
My experience is that command-line copying is significantly faster (especially in the Windows environment). Of course the basic laws of physics still apply: a file that is 1000 times bigger than a file that copies in 1 second will still take 1000 seconds to copy.
..(how to) copy some file in directory A to directory B.
Because I often have 5-10 projects that use similar directory structures, I set up variables for each subdir using a naming convention:
project=NewMatch
NM_scripts=${project}/scripts
NM_data=${project}/data
NM_logs=${project}/logs
NM_cfg=${project}/cfg
proj2=AlternateMatch
altM_scripts=${proj2}/scripts
altM_data=${proj2}/data
altM_logs=${proj2}/logs
altM_cfg=${proj2}/cfg
You can make this sort of thing as spartan or baroque as needed to match your theory of living/programming.
Then you can easily copy the cfg files from one project to another:
cp -p $NM_cfg/*.cfg ${altM_cfg}
Is it heavy use of pushd/popd?
Some people seem to really like that. You can try it and see what you think.
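For example, a quick sketch using the variables defined above (match.cfg is a made-up file name):
pushd $NM_cfg                  # jump to the config dir and remember where you came from
cp -p match.cfg ${altM_cfg}    # copy the hypothetical match.cfg to the other project
popd                           # jump straight back to where you were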
Or creation of links to most often used directories?
Links to dirs are, in my experience, used more for software development, where source code expects a certain set of dir names and your installation has different names; making links to supply the expected dir paths is helpful there. For production data, a link is just one more thing that can get messed up or blow up. That's not always true, and maybe you'll have a really good reason to have links, but I wouldn't start out that way just because it is possible to do.
What are the best practices and a day-to-day routine to manage files of a command line master?
(Per the above, use a standardized directory structure for all projects.
Have scripts save any small files to a directory your dept keeps under /tmp,
i.e. /tmp/MyDeptsTmpFile (named to fit your local conventions).)
It depends. If you're talking about data and logfiles, dated filenames can save you a lot of time. I recommend date formats like YYYYMMDD, with _HHMMSS appended if you need the extra resolution.
Dated logfiles are very handy: when a current process seems like it is taking a long time, you can look at the log file from a week, a month, or six months ago (up to however much space you can afford) and quantify exactly how long the process took then. Logfiles should also capture all STDERR messages, so you never have to re-run a bombed program just to see what the error message was.
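A minimal sketch of that pattern, reusing the variables defined above (loadData.sh is a made-up script name):
logfile=${NM_logs}/loadData_$(date +%Y%m%d_%H%M%S).log
${NM_scripts}/loadData.sh > "$logfile" 2>&1   # stdout and stderr both land in the dated log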
This is Linux/Unix you're using, right? Read the man page for the cp command installed on your machine. I recommend using an alias like alias CP='/bin/cp -pi' so you always copy a file with the same permissions and the original file's time stamp. Then it is easy to use /bin/ls -ltr to see a sorted list of files with the most recent ones at the bottom of the list (no need to scroll back to the top when you sort by time, reversed). Also, the '-i' option will warn you when you are about to overwrite a file, and this has saved me more than a couple of times.
I hope this helps.
P.S. As you appear to be a new user, if you get an answer that helps you, please remember to mark it as accepted and/or give it a + (or -) as a useful answer.
