Upload text to transfer.sh without first saving to disk - terminal

My drive is full and I can't open any apps or remove files (or transfer to an external HD). My only option is to use VS Code, which is still open, to upload the unsaved file somewhere online.
Is it possible to upload to transfer.sh without first saving the text to the disk?
echo xyz | curl -F file=@- https://transfer.sh does the trick as long as the string isn't multi-line. Unfortunately it's got 11,877 lines 😬
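One approach that handles multi-line input is to stream stdin as the request body with --upload-file instead of a form field. A sketch (the notes.txt name in the URL is just an assumption; any name works):

```shell
# Stream multi-line text straight to transfer.sh without touching the
# disk; --upload-file - makes curl read the upload body from stdin.
curl --upload-file - https://transfer.sh/notes.txt <<'EOF'
first line
second line
EOF
```

Pasting the buffer between the heredoc markers keeps everything in memory; if a clipboard tool such as pbpaste or xclip is available, piping its output into the same curl command avoids even the paste.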

Related

Improve my cron script for website change notification

I've been using a script through cron to check whether a web page has changed.
Everything is running smoothly, but this is my first attempt and I know it could be better. I'm a noob, so take it easy.
I've cobbled this together from many sources. The standard services that check whether a webpage has changed didn't work on my page of interest because each visit created a new shopping cart ID. Instead, I look for a line containing something like article:modified_time (my TextOfInterest), grab that whole line, and compare it to the same line in the prior file.
Things to maybe do better:
I'd like to define each file (.txt, .html, websites) at the beginning.
I've also run into some other code suggesting it might be possible to skip saving to a file and do the comparison in memory?
Currently I'm saving to a flash drive (OpenMediaVault); I'd like to point the file writes at a directory on a different drive.
Any other improvements are welcome.
Here is working code:
#!/bin/bash
cp Current.html Prior.html
wget -O Current.html 'https://websiteofinterest.com/'
awk '/TextOfInterest/' Prior.html > Prior.txt
awk '/TextOfInterest/' Current.html > Current.txt
diff -q Current.txt Prior.txt || diff Current.txt Prior.txt | mail -s "Website Changed" "Emailtosendto@email.com"

Tail -f doesn't show last lines

I'm writing a script that reads from a log and sends each line through netcat:
tail -f /tmp/archivo.txt | grep --line-buffered status=bounced | while read LINE0
do
echo "${LINE0}"
echo "${LINE0}" > /tmp/mail-line.log
netcat localhost 5699 < /tmp/mail-line.log
sleep 1s
done
When I first launch this script it properly sends the data, but when a new line is introduced it doesn't work unless I relaunch the script. Any ideas?
Thank you.
Edit:
As @Kamil Cuk asked, I tried doing only the echos; it didn't work.
What's happening?:
Well, I was introducing the new data using gedit, and this didn't work with the -f flag, but it did with the -F flag, because tail reports that archivo.txt has been replaced. I tried doing echo "New line with status=bounced" >> archivo.txt and it worked. So I'm assuming that gedit somehow replaces the file, and that's why tail -f stops showing anything.
Under the hood, files in the filesystem are identified by numbers called "inodes". A text label, the "file name", points to an inode. When you run tail -f filename you start tailing the file behind the inode that filename currently points to; tail "opens a file handle", i.e. it keeps an open reference to that inode.
What happened in your case is that you started tailing one file. Then your logging application (gedit :-) ) created a new file on a different inode and changed which inode filename refers to. The file at the inode you are tailing still exists because tail still has an open file handle; it won't be removed until all programs close their handles. But any new program that opens the file by name gets the new inode. That's also why tail -F worked for you: it re-opens the file by name whenever the name starts pointing somewhere new.
To get around this, the writer needs to open the file in "append mode" (there are other modes as well). "Append mode" means pushing data onto the end of the file without creating a new inode. All real logging applications do this, since it is also much faster than rewriting the entire file every time.
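The difference is easy to see on any Linux box. A sketch (assuming GNU coreutils stat): appending keeps the inode, while the save-by-rename that editors like gedit do creates a new one, which is exactly the case tail -F (follow the name, not the handle) exists for:

```shell
f=$(mktemp)
ino_before=$(stat -c %i "$f")

echo "appended line" >> "$f"      # append mode: same inode
ino_after_append=$(stat -c %i "$f")

printf 'rewritten\n' > "$f.new"   # editor-style save: write a new
mv "$f.new" "$f"                  # file, then rename it over the old
ino_after_rename=$(stat -c %i "$f")

echo "$ino_before $ino_after_append $ino_after_rename"
```

The first two numbers printed match; the third differs, because the rename swapped a new inode in under the old name while tail would still be holding the old one.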

How to transfer files on different computers through clipboard

I have often seen a situation where I have to work on a VDI desktop/RDP session through my laptop where only the copy-to-clipboard option is available.
So if I want to copy a zip file, I can't.
I think one option could be to convert the file to a text encoding and copy that to the clipboard, but how do I paste it on my laptop's side and recover the file?
If you use Windows, you can use the following procedure:
To get your file prepared for the transfer, run:
certutil.exe -encode file.zip filezip.txt
Next, open filezip.txt with Notepad, select all, and copy it to the clipboard.
Open Notepad on the server with the same file name:
notepad.exe filezip.txt
and paste the content from the clipboard. Save the file.
Last step is to convert the file back to binary using certutil again:
certutil.exe -decode filezip.txt file.zip
Now you can extract your files from the file.zip on the server.
As far as I know, if you are using Microsoft's Remote Desktop, you can copy and paste files between the local and remote machines.
It's very difficult to implement copy and paste between two machines yourself. If you can use a web browser on the remote machine, you can convert the binary to base64 content, then convert it back to binary using an online base64 converter.
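The certutil procedure above is the same round trip as base64 on Linux/macOS; a sketch (file names are just examples), in case either end of the session is not Windows:

```shell
# Encode the binary into clipboard-safe text, then reverse it
base64 file.zip > filezip.txt         # paste-able text form
base64 --decode filezip.txt > copy.zip  # reconstruct on the other side
cmp file.zip copy.zip && echo "round trip intact"
```

The intermediate filezip.txt contains only printable characters, so it survives any clipboard that can carry text.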

How does Windows know if a file was downloaded from the Internet?

If I open a file in Windows that was downloaded by Chrome or another browser, Windows pops up a warning that the file was downloaded from the internet. The same goes for documents you open in Microsoft Word.
But how does Windows know that this file originated from the internet? I'd think it's the same as every other file on my hard drive. Does it have something to do with the file properties?
Harry Johnston got it!
It had nothing to do with a temporary folder or media cache. It's an NTFS stream.
For further reading: MSDN File Streams
This blocking information is written with the following two-line command on the CLI (cmd.exe shows More? as its continuation prompt for the second line):
(echo [ZoneTransfer]
echo ZoneId=3) > test.docx:Zone.Identifier
This creates an alternate data stream.
When you download a file from the internet, it is first downloaded to the Media Cache instead of the temp folder. Only after that does it move to the actual location where you chose to save it.
If you copy and paste a file, it moves through the Temp folder only. Before opening any file, Windows checks the location, and if it is the Media Cache folder you get "File is downloading" or other related errors.

saving entire file in VIM

I have a very large CSV file, over 2.5GB, that, when importing into SQL Server 2005, gives an error message "Column delimiter not found" on a specific line (82,449).
The issue is with double quotes within the text for that column, in this instance, it's a note field that someone wrote "Transferred money to ""MIKE"", Thnks".
Because the file is so large, I can't open it up in Notepad++ and make the change, which brought me to find VIM.
I am very new to Vim, and I reviewed the tutorial document, which taught me how to change the file using 82449G to jump to the line, l to move over to the spot, and x to delete the double quotes.
When I save the file using :saveas c:\Test VIM\Test.csv, it saves only a portion of the file. The original file is 2.6 GB and the newly saved one is 1.1 GB. The original file has 9,389,222 rows and the new one has 3,751,878. I tried using the G command to get to the bottom of the file before saving, which increased the size quite a bit but still didn't save the whole file; before using G, the file was only 230 MB.
Any ideas as to why I'm not saving the entire file?
You really need to use a "stream editor", something like sed on Linux, that lets you pipe your text through it without trying to keep the entire file in memory. In sed I'd do something like:
sed 's/""MIKE""/"MIKE"/' < source_file_to_read > cleaned_file_to_write
There is a sed for Windows.
As a second choice, you could use a programming language like Perl, Python or Ruby, to process the text line by line from a file, writing as it searches for the doubled-quotes, then changing the line in question, and continuing to write until the file has been completely processed.
Vim might be able to load the file if your machine has enough free RAM, but it'll be a slow process. If it does load, you can search from normal mode using:
/""MIKE""
and manually remove a doubled quote, or have Vim make the change everywhere automatically using:
:%s/""MIKE""/"MIKE"/g
In either case, write, then close, the file using:
:wq
In Vim, normal mode is the default state of the editor, and you can get back to it with the ESC key.
You can also split the file into smaller more manageable chunks, and then combine it back. Here's a script in bash that can split the file into equal parts:
#!/bin/bash
fspec=the_big_file.csv
num_files=10 # how many mini-files you want
total_lines=$(wc -l < "${fspec}")
((lines_per_file = (total_lines+num_files-1) / num_files))
split --lines=${lines_per_file} ${fspec} part.
echo "Total Lines = ${total_lines}"
echo "Lines per file = ${lines_per_file}"
wc -l part.*
I just tested it on a 1GB file with 61151570 lines, and each resulting file was almost 100 MB
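For completeness, recombining is the easy half. A sketch (file names follow the script above; split names its output part.aa, part.ab, … in sorted order, so a plain cat restores the original byte-for-byte):

```shell
# Glue the pieces back together and verify nothing was lost
cat part.* > reassembled.csv
cmp the_big_file.csv reassembled.csv && echo "files match"
```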
Edit:
I just realized you are on Windows, so the above may not apply. You can use a utility like Simple Text Splitter, a Windows program which does the same thing.
When you're able to open the file without errors like E342: Out of memory!, you should be able to save the complete file, too. There should at least be an error on :w; a partial save without an error is a severe loss of data and should be reported as a bug, either on the vim_dev mailing list or at http://code.google.com/p/vim/issues/list
Which exact version of Vim are you using? Using GVIM 7.3.600 (32-bit) on Windows 7 x64, I wasn't able to open a 1.9 GB file without running out of memory, but I was able to successfully open, edit, and save (fully!) a 3.9 GB file with the 64-bit version 7.3.000 from here. If you're not using the native 64-bit version yet, give it a try.