Maximum number of lines that can be copied/pasted into the command prompt - Windows

I'm using the AWS CLI tool to download hundreds of thousands of files. I have almost a million of these one-liners, generated from a SQL query with a different file path in each, that I need to run:
aws s3 cp s3://[myS3FilePath]/17802c9-6d3b-4eef-855a-a6ae0039c7ff/ C:\[MyLocalFilePath]\17802c9-6d3b-4eef-855a-a6ae0039c7ff\ --recursive
I've been taking ~1000 lines at a time, pasting them into the command prompt, and waiting for them to be iterated through. Works great!
It's quite a waste of time doing it in 1000-record batches, though. What's the maximum number of lines I could paste into CMD without losing any of my download commands?
Could I paste 1,000,000 lines into the command prompt, for example, and trust that it will iterate through all of them?

The easy answer is to put all of the one-liners into a .bat script and run that script.
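Pasting goes through the console's input buffer, which can be unreliable for very large pastes; a script file sidesteps that entirely, since the interpreter reads the commands from disk. A minimal sketch of the idea (shell syntax here for illustration; on Windows the same generated one-liners would go into a .bat file run from CMD, and the bucket/path names below are made up):

```shell
#!/bin/sh
# Write every generated one-liner into a script file instead of pasting them.
# The echo prefix is only so this sketch runs without AWS credentials.
printf '%s\n' \
  'echo aws s3 cp s3://bucket/a/ /tmp/a/ --recursive' \
  'echo aws s3 cp s3://bucket/b/ /tmp/b/ --recursive' \
  > downloads.sh

# Run the whole list in order; no paste-size limit is involved.
sh downloads.sh
```

In practice you would have the SQL query write its output straight to the script file, so the million lines never pass through the clipboard at all.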

Related

concatenate fastq files in a directory

I have a file uploader, resumable.js, which takes a file, breaks it into 1 MB 'chunks', and then sends them over 1 MB at a time. So after an upload I have a directory with thousands, sometimes millions, of individual fastq files. I can concatenate all of these 'chunks' back into the file's original state with this line of code:
cat file_name.* > merged.fastq
How would I go about concatenating the files back into their original state without manually running this command on the command line? Should I set up a bash script to handle this, maybe as a cronjob? Any ideas to solve this issue are greatly appreciated.
ANSWER: For what it's worth, I used this npm module and it works great.
https://www.npmjs.com/package/joiner
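One caveat with the `cat file_name.* > merged.fastq` approach: the shell expands the glob lexicographically, so a chunk named file_name.10 sorts before file_name.2 when the suffixes are plain numbers. A sketch that concatenates in numeric chunk order (the file names and contents here are made up for illustration):

```shell
#!/bin/bash
# Create three fake chunks, deliberately out of lexicographic order.
printf 'chunk2\n'  > file_name.2
printf 'chunk10\n' > file_name.10
printf 'chunk1\n'  > file_name.1

# Sort the chunk names by their numeric suffix before concatenating,
# so the merged file comes out as chunk1, chunk2, chunk10.
ls file_name.* | sort -t. -k2,2n | xargs cat > merged.fastq
```

If your chunk names contain no dots other than the suffix separator, `sort -t. -k2,2n` keys on the number after the dot; `ls -v` (GNU) is a shorter alternative where available.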

When was a file used by another program

I have a folder with a series of 750 csv files. I wrote a Stata program that runs through each of these files and performs a single task. The files are quite big, and my program has been running for more than 12 hours.
Is there a way to know which was the last of the csv files used by Stata? I would like to know if the code is anywhere near finishing. I thought that sorting the 750 files by "last used" would do the trick, but it does not.
Next time I should be more careful about signalling how the process is going...
Thank you
From the OS X terminal, cd to the directory containing the CSV files and run the command
ls -lut | head
which shows your files sorted by most recent access time, limited to the first 10 lines of output (on BSD/OS X ls, -u combined with -t sorts by access time rather than modification time).
On the most basic level you can use display and log your session:
clear
set more off
local myfiles file1 file2 file3
foreach f of local myfiles {
    display "processing `f'"
    // <do-something-with-`f'>
}
See also help log.

How to Copy/Paste large number of lines of code in a terminal

I am working on the MySQL source. I need to copy 12,000 lines of code at a time from a file and paste them into another terminal/text document. How can I perform this task?
Use the cat command to print the file's contents, so that you can select all the text and paste it into the other terminal where you require it.
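If both terminals are on the same machine, you can skip the select-and-paste step entirely and let the shell copy the contents for you; a minimal sketch (the file names here are made up):

```shell
#!/bin/sh
# Create a small stand-in for the 12,000-line source file.
printf 'line1\nline2\nline3\n' > big_source.sql

# Print it for manual selection in the terminal...
cat big_source.sql

# ...or redirect it straight into the target file, no pasting needed.
cat big_source.sql > copied.sql
```

Redirection avoids any terminal scrollback or paste-buffer limits, which matter once the file is thousands of lines long.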

saving entire file in VIM

I have a very large CSV file, over 2.5 GB, that, when imported into SQL Server 2005, gives the error message "Column delimiter not found" on a specific line (82,449).
The issue is with double quotes within the text for that column, in this instance, it's a note field that someone wrote "Transferred money to ""MIKE"", Thnks".
Because the file is so large, I can't open it up in Notepad++ and make the change, which brought me to find VIM.
I am very new to VIM. I reviewed the tutorial document, which taught me how to find the line with 82449G, move over to the spot with l, and delete the double quotes with x.
When I save the file using :saveas c:\Test VIM\Test.csv, only a portion of the file is saved. The original file is 2.6 GB and the newly saved one is 1.1 GB. The original file has 9,389,222 rows and the new one has 3,751,878. I tried using the G command to jump to the bottom of the file before saving, which increased the size quite a bit but still didn't save the whole file; before using G, the saved file was only 230 MB.
Any ideas as to why I'm not saving the entire file?
You really need a "stream editor", something similar to sed on Linux, that lets you pipe your text through it without trying to keep the entire file in memory. With sed I'd do something like:
sed 's/""MIKE""/"MIKE"/' < source_file_to_read > cleaned_file_to_write
There is a sed for Windows.
As a second choice, you could use a programming language like Perl, Python or Ruby, to process the text line by line from a file, writing as it searches for the doubled-quotes, then changing the line in question, and continuing to write until the file has been completely processed.
VIM might be able to load the file if your machine has enough free RAM, but it'll be a slow process. If it does load, you can search from normal mode using:
/""MIKE""
and manually remove a doubled-quote, or have VIM make the change automatically using:
:%s/""MIKE""/"MIKE"/g
In either case, write, then close, the file using:
:wq
In VIM, normal mode is the default state of the editor, and you can return to it using your ESC key.
You can also split the file into smaller, more manageable chunks and then combine them back afterwards. Here's a bash script that splits the file into equal parts:
#!/bin/bash
fspec=the_big_file.csv
num_files=10  # how many mini-files you want
total_lines=$(wc -l < "${fspec}")
(( lines_per_file = (total_lines + num_files - 1) / num_files ))
split --lines="${lines_per_file}" "${fspec}" part.
echo "Total lines = ${total_lines}"
echo "Lines per file = ${lines_per_file}"
wc -l part.*
I just tested it on a 1 GB file with 61,151,570 lines; each resulting file was almost 100 MB.
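The script only does the splitting; after fixing the offending line in the relevant chunk, the parts can be stitched back together. split names its pieces part.aa, part.ab, ... which sort lexicographically in creation order, so a plain glob preserves the original line order. A small self-contained sketch of the round trip (file names are illustrative):

```shell
#!/bin/bash
# Make a small stand-in for the big CSV and split it into 3-line pieces.
printf 'r1\nr2\nr3\nr4\nr5\n' > the_big_file.csv
split -l 3 the_big_file.csv part.

# part.aa, part.ab, ... expand in order, so this rebuilds the file exactly.
cat part.* > rebuilt.csv
```

After editing one of the part.* files in a normal editor (each is small enough to open anywhere), the same `cat part.* > rebuilt.csv` reassembles the corrected file.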
Edit:
I just realized you are on Windows, so the above may not apply. You can use a utility like Simple Text Splitter, a Windows program which does the same thing.
When you're able to open the file without errors like E342: Out of memory!, you should be able to save the complete file, too. At the very least there should be an error on :w; a partial save without any error is a severe loss of data and should be reported as a bug, either on the vim_dev mailing list or at http://code.google.com/p/vim/issues/list
Which exact version of Vim are you using? Using GVIM 7.3.600 (32-bit) on Windows 7/x64, I wasn't able to open a 1.9 GB file without running out of memory. I was able to successfully open, edit, and save (fully!) a 3.9 GB file with the 64-bit version 7.3.000 from here. If you're not using that native 64-bit version yet, give it a try.

PVRTexTool, is there a way to run it on multiple files at once?

I am using PVRTexTool to convert png files to pvr files, but the tool seems to only be able to run on one file at a time (it won't accept *.png as the file name).
Does anyone know how to run it on a group of files at once?
It's really a hassle to run it on each of my textures individually.
In a shell, run
for file in *.png ; do
    PVRTexTool "$file"
done
(I don't know how to call PVRTexTool from the command line, so please substitute the second line with the correct invocation.)
This is a general way to feed each file to a command which only accepts one file at a time. See any introduction on shell scripting, e.g. this discussion of the for loop.
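If the directory might contain no PNGs at all, or file names with spaces, a slightly more defensive version of the same loop helps. The echo below stands in for the real PVRTexTool invocation, whose exact flags I haven't verified, and the sample file names are made up:

```shell
#!/bin/bash
# Create two sample inputs, one with a space in the name.
touch texture1.png 'texture 2.png'

shopt -s nullglob            # with no matches, the loop body simply never runs
for file in *.png; do
    # Quote "$file" so names containing spaces stay a single argument.
    echo "converting: $file"     # stand-in for the real PVRTexTool command
done
```

Without nullglob, an empty directory would leave the literal string *.png in $file and hand the tool a nonexistent file name.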
