This question already has answers here:
How to get file creation date/time in Bash/Debian?
(13 answers)
Closed 7 years ago.
I know you can't for files, but is there by any chance a way to check this for folders?
It is not possible to get the creation time of any file on Linux, since:
Each file has three distinct associated timestamps: the time of last
data access, the time of last data modification, and the time the file
status last changed.
Also, as you wrote in your question, "I know you can't for files, but is there by any chance a way to check this for folders?": on Linux, directories are themselves files, so your question already contains its own answer.
Source
This question already has answers here:
What is the best way to ensure only one instance of a Bash script is running? [duplicate]
(14 answers)
Closed 3 years ago.
I need to remove the contents of a directory D based on the conditions below:
1. Get the used space of directory D.
2. If the used space is above the threshold, remove the contents of the directory based on last-modified time (using find -mtime).
I have already written a shell script for it (clearSpace.sh), but the problem is that the script can be called by multiple processes simultaneously.
I want steps 1 and 2 to be atomic so that I get consistent results.
Is there a way to first take a "lock" on directory D, execute clearSpace.sh, and then release the lock? Permission-based locking is not an option.
We suggest refraining from locking storage or any other OS resources. The side effects could be devastating, and unrelated to what you intended.
The responsibility of the OS is to sync, distribute and manage resources.
Resource locking is kernel-level programming; better not to go there. There is a reason the OS scripting tools do not offer such locking features.
You can implement script access locking instead: use a lock.id file to signal that the script is running, refuse to run the script if lock.id exists, and remove the lock.id file when the script's work is completed (see the sketch below).
We suggest you change your script thresholds to be more flexible and tolerant.
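For illustration only, here is a minimal sketch of that lock-file pattern in Go (the lock path is a placeholder of mine; the lock.id idea is the one described above, and a shell script would typically get the same effect with mkdir or flock). Creating the file with an exclusive-create flag makes the check-and-create step atomic, which also speaks to the atomicity concern in the question.

// Minimal sketch of the lock.id pattern described above, shown in Go purely
// for illustration. The lock path and the cleanup placeholder are assumptions.
package main

import (
	"fmt"
	"log"
	"os"
)

const lockPath = "/tmp/clearSpace.lock.id" // hypothetical location

func main() {
	// O_CREATE|O_EXCL fails if the file already exists, so the
	// "check and create" happens in a single atomic step.
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
	if err != nil {
		if os.IsExist(err) {
			log.Fatal("another instance appears to be running; exiting")
		}
		log.Fatal(err)
	}
	f.Close()
	// Remove the lock file when the work is done. Note that a crash would
	// leave a stale lock behind; that is the known downside of this pattern.
	defer os.Remove(lockPath)

	// ... do the actual cleanup work here (the equivalent of clearSpace.sh) ...
	fmt.Println("doing protected work")
}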
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I need to read a block of n lines from a zip file as quickly as possible.
I'm a beginner in Go. For bash lovers, I want to do the same as this (getting a block of 500 lines between lines 199500 and 200000):
time query=$(zcat fake_contacts_200k.zip | sed '199500,200000!d')
real 0m0.106s
user 0m0.119s
sys 0m0.013s
Any idea is welcome.
1. Import archive/zip.
2. Open and read the archive file as shown in the example right there in the docs.
Note that in order to mimic the behaviour of zcat you have to first check the length of the File field of the zip.ReadCloser instance returned by a call to zip.OpenReader, and fail if it is not equal to 1, that is, if there are no files in the archive or there are two or more files in it¹.
Note also that you have to check whether the error value returned by the call to zip.OpenReader equals zip.ErrFormat; if it does, there is no zip.ReadCloser to close, and you should instead try to reinterpret the file as being gzip-formatted (the fallback described below).
3. Take the first (and sole) File member and call Open on it.
4. You can then read the file's contents from the returned io.ReadCloser.
5. After reading, you need to call Close() on that instance and then close the zip file as well. That's all. ∎
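For concreteness, here is a minimal sketch of that zip path (steps 1 to 5). The archive name is the one from the question; reducing every error other than zip.ErrFormat to log.Fatal, and simply copying the decompressed data to stdout, are my simplifications.

// Sketch of the zip path described above.
package main

import (
	"archive/zip"
	"errors"
	"io"
	"log"
	"os"
)

func main() {
	rc, err := zip.OpenReader("fake_contacts_200k.zip")
	if errors.Is(err, zip.ErrFormat) {
		log.Fatal("not a zip file; fall back to the gzip path shown below")
	} else if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	// Mimic zcat: insist on exactly one file in the archive.
	if len(rc.File) != 1 {
		log.Fatalf("want exactly 1 file in the archive, got %d", len(rc.File))
	}

	f, err := rc.File[0].Open()
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The decompressed contents are now readable from f (an io.ReadCloser);
	// here they are just copied to stdout.
	if _, err := io.Copy(os.Stdout, f); err != nil {
		log.Fatal(err)
	}
}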
If step (2) failed because the file did not have the zip format,
you'd test whether it's gzip-formatted.
In order to do this, you do basically the same steps using the
compress/gzip package.
Note that contrary to the zip format, gzip does not provide file archival — it's merely a compressor, so there's no meta information on any files in the gzip stream, just the compressed data.
(This fact is underlined by the difference in the names of the packages.)
If an attempt to open the same file as a gzip stream returns
the gzip.ErrHeader error, you bail out; otherwise you read the data,
after which you close the reader. That's all. ∎
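And a matching sketch of the gzip fallback, reopening the same file from the question and trying it as a gzip stream; again, collapsing all other error handling into log.Fatal is my simplification.

// Sketch of the gzip fallback described above. Only the gzip.ErrHeader case
// is singled out, as in the text.
package main

import (
	"compress/gzip"
	"errors"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("fake_contacts_200k.zip") // the same file, reinterpreted
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	zr, err := gzip.NewReader(f)
	if errors.Is(err, gzip.ErrHeader) {
		log.Fatal("not gzip-formatted either; bail out")
	} else if err != nil {
		log.Fatal(err)
	}
	defer zr.Close()

	// zr streams the decompressed data; here it is just copied to stdout.
	if _, err := io.Copy(os.Stdout, zr); err != nil {
		log.Fatal(err)
	}
}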
To process just the specific lines from the decompressed file,
you'd need to:
1. Skip the lines before the first one to process.
2. Process the lines up to, and including, the last one to process.
3. Stop processing.
To interpret the data read from an io.Reader or io.ReadCloser,
it's best to use bufio.Scanner —
see the "Example (Lines)" there.
P.S.
Please read this essay thoroughly
to try to make your next question better than this one.
¹ You might as well read all the files and interpret their contents
as a contiguous stream — that would deviate from the behaviour of zcat
but that might be better. It really depends on your data.
This question already has an answer here:
Spring Batch. How to get the number of the element being processed
(1 answer)
Closed 7 years ago.
Is there any way to get the current row number of the file being processed inside an item processor, without implementing listeners?
Thanks
If your item implements the ItemCountAware interface, Spring Batch will fill in the current row number for you.
PS: I am almost 100% sure that a similar question and answer already exist here on Stack Overflow, but I did not find it.
This question already has an answer here:
How copy data from one database to another on different server?
(1 answer)
Closed 8 years ago.
Actually I want to copy a 50 GB database from one server to another server; I just want to know which one of the three options is best.
Thanks
Assuming you don't have to transform the data, I can't see any gain from using Hadoop (or parallel processing, for that matter) in this scenario.
See this question on copying Oracle data between servers
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm looking for a shell script idea to work on for practice with shell scripting. Can you please suggest intermediate ideas to work on?
I'm a developer and I prefer working on an idea that deals with files.
For shell scripting, think of a task that you do frequently - and think how you would automate that task.
You can start off with a basic script that just about does what you need. Then you realize that there are small variations on the task, and you start to allow the script to handle those. And it gently becomes more complex.
Almost all of the scripts I have (some hundreds of them) started off as "I've done that before; how can I avoid having to do it again?".
Can you give an example?
No - because I don't know what tasks you do sufficiently often to be (minor) irritants that could be salved by writing a script.
Yes - because I've got scripts that I wrote incrementally, in an attempt to work around some issue or other in my environment.
One task that I'm working on - still a work in progress - is:
Identify duplicate files
1. Starting at some nominated directory (default, $HOME), find all the files, and for each file establish a checksum (MD5, SHA1, SHA256 - it is not critical which); record the file name and checksum (and maybe device number and inode number).
2. Establish which checksums are repeated - hence identifying identical files.
3. Eliminate the unique checksums.
4. Group the duplicate files together with appropriate identifying information.
This much is fairly easy - it requires some medium-grade shell scripting and you might have to find a command to generate the checksum (but you might be OK with sum or cksum, though neither of those reaches even the level of MD5). I've done this in both shell and Perl.
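Purely to illustrate that first part, here is a rough sketch in Go (not shell or Perl; the choice of SHA-256 and all the names are illustrative only, and error handling is deliberately forgiving).

// Rough sketch of the "find duplicate files by checksum" idea above:
// walk a tree, hash every regular file, and group names by hash.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

func main() {
	root := os.Getenv("HOME") // the default starting directory from the answer
	byHash := make(map[[sha256.Size]byte][]string)

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || !d.Type().IsRegular() {
			return nil // skip unreadable entries and non-regular files
		}
		f, err := os.Open(path)
		if err != nil {
			return nil
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return nil
		}
		var sum [sha256.Size]byte
		copy(sum[:], h.Sum(nil))
		byHash[sum] = append(byHash[sum], path)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// Keep only the repeated checksums and group their file names.
	for sum, files := range byHash {
		if len(files) > 1 {
			fmt.Printf("%x:\n", sum)
			for _, name := range files {
				fmt.Println("  ", name)
			}
		}
	}
}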
The hard part - where I've not yet gotten a good solution - is then dealing with the duplicates. I have some 8,500 duplicated hashes, with about 27,000 file names in total. Some of the duplicates are images like smileys used in chat transcripts - there are a lot of that particular image. Others are duplicate PDF files collected from various machines at various times; I need to organize them so I have one copy of the file on disk, with perhaps links in the other locations. But some of the other locations should go - they were convenient ways to get the material from retired machines onto my current machine.
I have not yet got a good solution to the second part.
Here are two scripts from my personal library. They are simple enough not to require a full-blown programming language, but aren't trivial, particularly if you aim to get all the details right (support all flags, return the same exit code, etc.).
cvsadd
Write a script to perform a recursive cvs add so you don't have to manually add each sub-directory and its files. Make it so it detects the file types and adds the -kb flag for binary files as needed.
For bonus points: Allow the user to optionally specify a list of directories or files to restrict the search to. Handle file names with spaces correctly. If you can't figure out if a file is text or binary, ask the user.
#!/bin/bash
#
# Usage: cvsadd [FILE]...
#
# Mass `cvs add' script. Adds files and directories recursively, automatically
# figuring out if they are text or binary. If no file names are specified, looks
# for unversioned files and directories in the current directory.
svnfind
Write a wrapper around find which performs the same job, recursively finding files matching arbitrary criteria, but ignores .svn directories.
For bonus points: Allow other actions besides the default -print. Support the -H, -L, and -P options. Don't erroneously filter out files which simply happen to contain the substring .svn. Make usage identical to the regular find command.
#!/bin/bash
#
# Usage: svnfind [-H] [-L] [-P] [path...] [expression]
#
# Attempts to behave identically to a plain `find' command while ignoring .svn
# directories. Usage is identical to `find'.
You could try some simple CGI scripting. It can be done in shell and involves a lot of here documents, parsing and extracting of form values, a bit of escaping and whatever you want to do as payload. (I do not recommend exposing such a script to the hostile internet, though.)