I saved a .txt file in 2017 with some names and numbers. However, when I went back to open the file now, it only contains empty spaces. The file has not been modified since its creation date, so I am wondering whether I can do anything to retrieve whatever data may still be hidden within it. As much as I would like to know what could have happened, I am more interested in retrieving the data itself.
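A minimal sketch (in Ruby) of one way to check what bytes the file actually contains, so runs of spaces or NUL bytes stand out from any surviving characters; names.txt is a placeholder for the real file name:

# Tally each byte value in the file. Lots of 0x20 (space) or 0x00 (NUL)
# with nothing else means there is likely nothing left to recover from
# the file contents themselves. "names.txt" is a placeholder path.
bytes = File.binread("names.txt")

counts = Hash.new(0)
bytes.each_byte { |b| counts[b] += 1 }

counts.sort.each do |byte, count|
  printf("0x%02X %-10s %d\n", byte, byte.chr.inspect, count)
end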
I know a few console redirect parameters, such as >, < and |.
But I can't figure out a way to do it. As an example, let's say I wanted to copy a file using copy. The normal operation would be copy sourceFile destinationFile.
What if I wanted the source file to be input from the console? Something like copy < "File contents here" destinationFile. Unfortunately that doesn't work.
My application is more complex, but I think an example using the copy utility will be easier to understand.
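(For illustration only: the concept itself, taking the "source" from the console rather than from a file, sketched in Ruby; dest.txt is a placeholder destination name.)

# Read the "source" from standard input and write it to the destination.
# Input ends at EOF (Ctrl+Z then Enter on Windows, Ctrl+D on *nix).
contents = $stdin.read
File.write("dest.txt", contents)
puts "Wrote #{contents.bytesize} bytes to dest.txt"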
I need a script to count the occurrences of note-ref_ and #ref_ in all HTML files.
My folder structure will be
D:\Count_Test
which will contain many folders and sub-folders. Each sub-folder will have a ref.html and a text.html file containing the note-ref_ and #ref_ text (apart from these files, the sub-folders will also contain other files such as XML, TXT and image files, plus a css sub-folder).
I need to count, for every single file, how many times note-ref_ and #ref_ appear, and the results need to be captured in a .csv file.
Can anybody help me by providing a solution to extract these counts into a CSV file?
Suggestions:
Use the Scripting.FileSystemObject (FSO) to walk through files and subfolders to identify the scope of your actions. Alternatively, you could capture the output of DIR /s /b D:\Count_Test\*.html.
Once you know the list of files you'll need to open, you should read each of them using the OpenTextFile function of the FSO and loop through each row. When you find what you're looking for, increase some sort of counter - perhaps in an array.
Finally once you've finished collecting the data, you can output your results by once again doing OpenTextFile, but this time opening your CSV file location and writing the data you've collected in the appropriate format.
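To make those steps concrete, here is a rough sketch of the same walk/count/write approach, written in Ruby rather than VBScript, and assuming the two strings to count are the literal texts note-ref_ and #ref_ and that counts.csv is an acceptable output name:

require "csv"

root     = "D:/Count_Test"           # top-level folder to scan
patterns = ["note-ref_", "#ref_"]    # literal strings to count

CSV.open(File.join(root, "counts.csv"), "w") do |csv|
  csv << ["file", *patterns]

  # Walk every sub-folder and pick up all .html files.
  Dir.glob(File.join(root, "**", "*.html")).sort.each do |path|
    text = File.read(path)           # may need an explicit encoding for unusual files
    # Count non-overlapping literal occurrences of each pattern in this file.
    csv << [path, *patterns.map { |p| text.scan(p).size }]
  end
end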
We're creating an app that is going to generate some text files on *nix systems with hashed filenames to avoid too-long filenames.
However, it would be nice to tag the files with some metadata that gives a better clue as to what their content is.
Hence my question. Does anyone have any experience with creating files with custom metadata in Ruby?
I've done some searching and there seem to be some (very old) gems that read metadata:
https://github.com/kig/metadata
http://oai.rubyforge.org/
I also found: read, write or create custom metadata or extended attributes on system files, which seems to suggest that what I need may be at the system level, but dropping down there feels dirty and scary.
Anyone know of libraries that could achieve this? How would one create custom metadata for files generated by Ruby?
A very old but interesting question with no answers!
In order for a file to contain metadata, it has to have a format that has some way (implicitly or explicitly) to describe where and how the metadata is stored.
This can be done by the format, such as having a header that says where the "main" data is stored and where the "metadata" is stored, or perhaps implicitly, such as having a length to the "main" data, and storing metadata as anything beyond the "main" data.
This can also be done by the OS/filesystem by storing information along with the files, such as permission info, modtime, user, and more comprehensive file information like "icon" as you would find with iOS/Windows.
(Note that I am using "quotes" around "main" and "metadata" because the reality is that it's all data, and needs to be stored in some way that tools can retrieve it)
A true text file does not contain any headers or any such file format, and is essentially just a continuous block of characters (disregarding how the OS may store it). This also means that it can be generally opened by any text editor, which will merely read and display all the characters it finds.
So the answer in some sense is that you can't, at least not on a true text file that is truly portable to multiple OS.
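To make the header/length idea concrete before the workarounds below, here is a toy sketch in Ruby of a made-up layout (not any real format): a 4-byte length prefix, the "main" text, and JSON metadata stored after it.

require "json"

# Toy illustration only: [4-byte big-endian length][main text][JSON metadata].
# A plain text editor would no longer see just the text, which is exactly
# the trade-off described above.
def write_with_meta(path, main_text, metadata)
  File.open(path, "wb") do |f|
    f.write([main_text.bytesize].pack("N"))
    f.write(main_text)
    f.write(JSON.generate(metadata))
  end
end

def read_with_meta(path)
  data = File.binread(path)
  len  = data[0, 4].unpack1("N")
  [data[4, len], JSON.parse(data[(4 + len)..-1])]
end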
A few thoughts on how to get around this:
Append binary data at the end of the text file, hoping (or requiring) that the text editor will ignore the non-ASCII bytes.
Store it in the OS metadata for the file and make it OS-specific (such as storing it in the "comments" section that an OS may offer for a file).
Store it in a separate file that goes "along with" the file (i.e., file.txt and file.meta) and hope that they keep the files together (see the sketch below).
Store it in a separate file and zip the text and the meta file together and have your tool be zip aware.
Come up with a new file format that is not just text but has a text section (though then it can no longer be edited with a text editor).
Store the metadata at the end of the text file in a text format, perhaps with comments or some indicator to leave the metadata alone. This is similar to the modeline technique the vi/vim text editor uses to embed editor commands in a file: it just puts them as specially formatted lines near the beginning or end of the file.
I'm not sure there are many other ways to accomplish what you want, but perhaps one of those will work.
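As an illustration, the "separate file that goes along with it" option could look like this in Ruby, with a JSON sidecar named <file>.meta; the file names, keys and values here are purely illustrative.

require "json"

# Write the text file plus a JSON sidecar holding the metadata.
def write_with_sidecar(path, text, metadata)
  File.write(path, text)
  File.write("#{path}.meta", JSON.pretty_generate(metadata))
end

# Read the metadata back, if the sidecar is still alongside the file.
def read_sidecar(path)
  meta_path = "#{path}.meta"
  File.exist?(meta_path) ? JSON.parse(File.read(meta_path)) : {}
end

# e.g. a hashed filename plus a sidecar describing what it actually is
write_with_sidecar("d41d8cd98f00b204.txt", "some generated text",
                   { "original_name" => "quarterly report.txt",
                     "generated_at"  => Time.now.utc.to_s })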
I have a system with which users can upload a CSV file via an FTP server or via an HTML form. On my end, a script polls the uploads directory and processes new files it finds. Some users will create the CSV by exporting it from Excel, while others will generate it programmatically with scripts of their own.
My concern at the moment is: How can I be 100% certain that the file my processing script acts on is complete - in other words that it isn't a partial file (in progress, failed upload, etc)?
If the file format were something more structured, like XML, I'd be 100% confident that the file is complete by checking that the XML structure is valid (i.e. closing tags).
Is there a good way to ensure that the uploaded CSV file is complete, without burdening and confusing less technical users who are simply uploading a file exported from a spreadsheet program (i.e. providing an md5 of the file contents would be beyond them)?
When designing CSV file formats in the past, I've always added a header and footer line as follows:
id,one,two,three,four,five,six
10,1,2,3,4,5,6
11,1,2,3,4,5,6
12,1,2,3,4,5,6
13,1,2,3,4,5,6
14,1,2,3,4,5,6
FOOTER,5
Most CSV file formats have a header to label the columns; the purpose of the footer is to indicate that the file is complete. The footer contains a simple line count, which is easy to audit when looping through the file's contents. Not too complicated for users.
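A rough sketch of how the polling script could verify that footer before trusting a file (in Ruby; the FOOTER convention is the one shown above, and the /uploads path is a placeholder):

require "csv"

# True only if the last row is a FOOTER row whose count matches the number
# of data rows (everything between the header and the footer).
def csv_complete?(path)
  rows = CSV.read(path)
  return false if rows.size < 2

  footer = rows.last
  footer.first == "FOOTER" && footer[1].to_i == rows.size - 2
end

# Only pick up files that pass the check; partial or still-uploading
# files are skipped until they are complete.
Dir.glob("/uploads/*.csv").select { |f| csv_complete?(f) }.each do |f|
  # ... process f ...
end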
You could cross-check whether the filesize of the uploaded file matches the filesize of the original file.
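That only helps if the expected size is known on the receiving side. Purely as an illustration, a sketch in Ruby assuming the uploader also drops a companion <name>.size file containing the byte count of the original (the companion-file convention is an assumption, not part of the original setup):

# Compare the uploaded file's size against the byte count the uploader
# reported in a hypothetical "<name>.size" companion file.
def size_matches?(csv_path)
  size_file = "#{csv_path}.size"
  File.exist?(size_file) && File.size(csv_path) == File.read(size_file).strip.to_i
end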