Cygwin/Git Bash on Windows hangs after a while when a large(?) number of cURL POST requests are pasted into the terminal at once

I need to consecutively execute 300 or 600 or more cURL POST requests. I have them in plain text, and I simply copy and paste them into the terminal.
But regardless of which one I use, Cygwin or Git Bash on Windows, the following occurs:
If I paste 300 into the terminal, it will consecutively execute around 104 of them, and then it starts hanging. I can't stop it or type anything; it's a complete freeze that lasts forever.
But if I paste 200 into the terminal, all of them finish consecutively and successfully.
I'm not sure which details I should provide that could point to the bottleneck here, so for a start I'll only say that ONE command contains ~1270 characters.
Please be so kind as to provide a solution that makes it possible to consecutively execute even 2000 such cURL POST requests with "one paste" into the terminal.

I would do:
one paste in an editor, to save those calls as a script
one call to that script in the shell.
That way, you can accommodate any number of calls in your Bash (Git or Cygwin) session by executing one script that was filled with "one paste", as sketched below.
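For illustration, a minimal sketch of that workflow; the URL and payloads are placeholders, not the OP's actual requests:

#!/bin/bash
# requests.sh -- paste all of the cURL POST commands into this file with an editor
curl -X POST 'https://example.com/api' -d 'payload=1'
curl -X POST 'https://example.com/api' -d 'payload=2'
# ... up to 2000 such lines ...

Then everything runs with a single command in the Cygwin or Git Bash shell:

bash requests.sh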
The OP confirms, however, that the issue persists, which could be linked to curl itself, as in curl/curl issue 5784, "curl stops execution/'hangs' after random time" (Aug. 2020).
However, that issue just got closed (Nov. 2020), with Daniel Stenberg commenting:
I don't see anything for us to act on in this issue.
It's not even clear that this is curl's fault. And nothing has been added to the case in months.
Because of all this, I'm closing.

Related

Occasionally Expect send command gets truncated

I have an expect script that logs in to an SBC and runs a command for a particular interface.
I call this script from a shell script to perform the same command on multiple SBCs and multiple interfaces. I run the script 6 times on each SBC grabbing details for a single interface each time and the output gets saved to a different file on a per SBC/interface combination.
Trouble is, when I run it on, for example, SBC A, in two of the files the command is truncated and nothing happens. Say interfaces 2 and 3.
If I run the script again, 5 interfaces work this time and now a different interface, interface 4, fails with a truncated command.
I don’t understand what would cause the command to fail randomly. Any thoughts would be appreciated.
OK, I think I have cracked it. Occasionally the command I am entering is matching the expected prompt. In reality the command should always match the prompt, so it's strange that it doesn't fail every time.
I have tweaked the expected prompt and am re-running the script.
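For what it's worth, a minimal sketch of the kind of tightening described above; the host, command, and prompt are hypothetical, not the OP's actual SBC setup:

expect -c '
    # hypothetical host, command, and prompt, for illustration only
    spawn ssh admin@sbc-a
    # a loose pattern like ">" can match text echoed back from the command itself,
    # so the next send may fire before the device is ready and get truncated
    expect ">"
    send "show interface eth2\r"
    # anchoring on the full, distinctive prompt avoids the accidental match
    expect -re {admin@sbc-a> $}
'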

Check status of a forked process?

I'm running a process that will take, optimistically, several hours, and in the worst case, probably a couple of days.
I've tried a couple of times to run it and it just never seems to complete (I should add, I didn't write the program; it's just a big dataset). I know my syntax for the command is correct, as I use it all the time for smaller data and it works properly (I'll spare you the details, as they are obscure for SO and I don't think they're relevant to the question).
Consequently, I'd like to leave the program running unattended as a fork with &.
Now, I'm not totally sure whether the process is just grinding to a halt or is running but taking much longer than expected.
Is there any way to check the progress of the process other than ps and top + 1 (to check CPU use)?
My only other thought was to get the process to output a logfile and periodically check to see if the logfile has grown in size/content.
As a sidebar, is it necessary to also use nohup with a forked command?
I would use screen for this purpose; see the man page for more details.
Brief summary of how to use it:
screen -S some_session_name - starts a new screen session named some_session_name
Ctrl + a, then d - detaches the session
screen -r some_session_name - reattaches you to the session
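To tie this back to checking progress: a small sketch of that kind of workflow, with placeholder program and file names:

screen -S bigjob                                        # start (and attach to) a named session
./long_running_program big_dataset.in > run.log 2>&1    # inside the session, run the job and log everything
tail -f run.log                                         # watch progress; a growing log means it is still working
# detach with Ctrl + a, then d; reattach later with: screen -r bigjob

As a side note, when the job runs inside a screen session you generally don't need nohup; the detached session keeps the process alive after you log out.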

Adding commands in shell scripts to history?

I notice that the commands I have in my shell scripts never get added to the history list. Understandably, most people wouldn't want it to be, but for someone who does, is there a way to do it?
Thanks.
Edit:
Sorry about the extremely late reply.
I have a script that conveniently combines some statements that ultimately result in lynx opening a document. That document is in a dir several directories below the current one.
Now, I usually end up closing lynx to open another document in the current dir and need to keep switching back and forth between the two. I could do this by having another window open, but since I'm mostly on telnet, and the switches aren't too frequent, I don't want to do that.
So, in order to get back to lynx from the other document, I end up having to re-type the lynx command, with the (long) path/filename. In this case, of course, lynx isn't stored in the command history.
This is what I want added to the history, so that I can get back to it easily.
Call it laziness, but hey, if it teaches me a new command....
Cheers.
As @tripleee pointed out, which problem are you actually trying to solve? It's perfectly possible to include any shell code in the history, but above some level of complexity it's much better to keep it in separate shell scripts.
If you want to keep multi-line commands in the history as they are, you might want to try out shopt -s lithist, but that means searching through history will only return one line at a time.
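And as a sketch of the other direction, getting a long command from a script-like wrapper into the interactive history: assuming a function defined in ~/.bashrc (so it runs in the interactive shell rather than a child process) and a placeholder path, something like this would work:

# in ~/.bashrc
opendoc() {
    local cmd='lynx /some/deep/dir/tree/document.html'   # placeholder path
    history -s "$cmd"   # store the full command as an entry in this shell's history
    eval "$cmd"         # then run it
}

After running opendoc, the full lynx command line is the most recent history entry, so the up arrow (or a history search) brings it straight back.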

Bash piping output and input of a program

I'm running a Minecraft server on my Linux box in a detached screen session. I'm not very fond of screen and would like to be able to constantly pipe the output of the server to a file (like a pipe) and pipe some input from a file to the server (so that I can send input to and read output from the server with remote programs, like a Python script). I'm not very experienced in bash, so could somebody tell me how to do this?
Thanks, NikitaUtiu.
It's not clear if you need screen at all. I don't know the minecraft server, but generally for server software, you can run it from a crontab entry and redirect output to log files.
Assuming your server kills itself at midnight Sunday night (we can discuss changing this if restarting once per week is too little or too much, or if you require ad-hoc restarts), here, for a basic idea of what to do, is a crontab entry that starts the server each Monday at 1 minute after midnight.
01 00 * * 1 dtTm=`/bin/date +\%Y\%m\%d.\%H\%M\%S`; export dtTm; { /usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... ; } > /tmp/mineserver_trace_log.${dtTm} 2>&1
Consult your man page for crontab to confirm that day-of-week ranges are 0-6 (0 = Sunday), and change the day-of-week value if 0 != Sunday.
Normally I would break the code up so it is easier to read, but crontab entries have to be all on one line (with some weird exceptions), and there is usually a limit of 1024 bytes to 8 KB on how long the line can be. Note that the ';' just before the closing '}' is super-critical: if it is left out, you'll get undecipherable error messages, or no error messages at all.
Basically, you're redirecting any output into a file (including std-err output). Now you can do a lot of stuff with the output, use more or less to look at the file, grep ERR ${logFile}, write scripts that grep for error messages and then send you emails that errors have been found, etc, etc.
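For example, a short sketch of that kind of post-processing; the mail address is a placeholder and the alert line is only illustrative:

logFile=$(ls -t /tmp/mineserver_trace_log.* | head -1)   # most recent dated log
less "$logFile"                                          # page through the captured output
grep ERR "$logFile"                                      # pull out error lines
grep -q ERR "$logFile" && mail -s "mineserver errors" admin@example.com < "$logFile"   # hypothetical alert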
You may have some sys-admin work on your hands to get the mineserver user so it can run crontab entries. Also if you're not comfortable using the vi or emacs editors, creating a crontab file may require help from others. Post to superuser.com to get answers for problems you have with linux admin issues.
Finally, there are two points I'd like to make about dated logfiles.
Good:
a. If your app dies, you never have to rerun it to capture output and figure out why something stopped working. For long-running programs this can save you a lot of time.
b. Keeping dated files gives you the ability to prove to yourself, your boss, or others that it used to work just fine: see, here are the log files.
c. Keeping the log files, assuming there is useful information in them, gives you the opportunity to mine those files for facts, e.g. the program used to take 1 sec for processing and now it is taking 1 hr, etc.
Bad:
a. You'll need to set up a mechanism to sweep old log files, otherwise at some point everything will have stopped, AND when you finally figure out what the problem was, you discover that /tmp (or whatever dir you chose to use) is completely full. (A sketch of such a sweep follows.)
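A minimal sketch, assuming the /tmp location used in the crontab entry above and an arbitrary two-week retention; it could run from its own crontab entry:

# delete dated mineserver logs older than 14 days
find /tmp -name 'mineserver_trace_log.*' -type f -mtime +14 -delete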
There is a self-maintaining solution to using dates on the logfiles I can tell you about if you find this approach useful. It will take a little explaining, so I don't want to spend the time writing it up if you don't find the crontab solution useful.
I hope this helps!

How do you stop wget from commingling download data when running multiple concurrent instances?

I am running a script that in turn calls another script multiple times in the background with different sets of parameters.
The secondary script first does a wget on an ftp url to get a listing of files at that url. It outputs that to a unique filename.
Simplified example:
Each of these is being called by a separate instance of the secondary script running in the background.
wget --no-verbose 'ftp://foo.com/' -O '/downloads/foo/foo_listing.html' >foo.log
wget --no-verbose 'ftp://bar.com/' -O '/downloads/bar/bar_listing.html' >bar.log
When I run the secondary script once at a time, everything behaves as expected. I get an html file with a list of files, links to them, and information about the files the same way I would when viewing an ftp url through a browser.
Continued simplified one at a time (and expected) example results:
foo_listing.html:
...
foo1.xml ...
foo2.xml ...
...
bar_listing.html:
...
bar3.xml ...
bar4.xml ...
...
When I run the secondary script many times in the background, some of the resulting files, although they have the correct base URL (the one that was passed in), contain listings from a different run of wget.
Continued simplified multiprocessing (and actual) example results:
foo_listing.html:
...
bar3.xml ...
bar4.xml ...
...
bar_listing.html
correct, as above
Oddly enough, all other files I download seem to work just fine. It's only these listing files that get jumbled up.
The current workaround is to put in a 5 second delay between backgrounded processes. With only that one change everything works perfectly.
Does anybody know how to fix this?
Please don't recommend using some other method of getting the listing files or not running concurrently. I'd like to actually know how to fix this when using wget in many backgrounded processes if possible.
EDIT:
Note:
I am not referring to the status output that wget spews to the screen. I don't care at all about that (that is actually also being stored in separate log files and is working correctly). I'm referring to the data wget is downloading from the web.
Also, I cannot show the exact code that I am using as it is proprietary for my company. There is nothing "wrong" with my code as it works perfectly when putting in a 5 second delay between backgrounded instances.
Log a bug with GNU, use something else for now whenever possible, and put in time delays between concurrent runs. Possibly create a wrapper for getting FTP directory listings that only allows one to be retrieved at a time (a sketch of such a wrapper follows below).
:-/
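A sketch of that serializing wrapper, assuming the util-linux flock utility is available; the script name, lock file, and paths are placeholders:

#!/bin/bash
# get_listing.sh <ftp-url> <output-file>
# serializes FTP directory-listing fetches so concurrent callers take turns
url=$1
out=$2
(
    flock -x 9                          # block until we hold the exclusive lock
    wget --no-verbose "$url" -O "$out"  # only one listing fetch runs at a time
) 9>/tmp/ftp_listing.lock

The backgrounded instances would then call get_listing.sh instead of invoking wget directly for the listing step, while everything else stays concurrent.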

Resources