How to produce a delay in Ruby?

I used sleep, but it didn't give me what I want:
puts "amit"
sleep(10)
puts "scj"
I want it to first print amit, then a delay of 10 seconds, then print scj.
But in the above case, what happens is it pauses for 10 seconds and then prints amit and scj together. I don't want that.
I hope you got what I want to say.

I can't reproduce this. From a console, this does exactly what you'd expect:
puts "amit"
sleep 10
puts "scj"
(Ruby 1.8.6 on Linux)
Can you provide a similar short but complete example which doesn't do what you want - or explain your context more?
If you're writing a web application, then the browser may well only see any data once the whole response has been written - that would explain what you're seeing. If that's the case, you'll need a different approach which would allow the initial response to be written first, and then make the browser make another request. The delay could be at the server or the client, depending on the scenario.

Call $stdout.flush before the call to sleep. The output is probably buffered (although usually output is only line-buffered so puts, which produces a newline, should work without flushing, but apparently that's not true for your terminal).
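The buffering effect is language-agnostic, so here is a minimal Python sketch of the same fix (flushing before sleeping); the 10-second pause is shortened to 1 second for the demo:

```python
import sys
import time

start = time.monotonic()
print("amit")
sys.stdout.flush()   # force the text out before the pause begins
time.sleep(1)        # shortened from 10s for the demo
print("scj")
sys.stdout.flush()
elapsed = time.monotonic() - start
```

With the flush in place, "amit" appears on the terminal before the pause rather than being held in the buffer until the program exits.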

Related

Can a python async HTTP transfer be easily added to an existing code base?

I'm writing a Fusion 360 python add-in, which is an event-driven way to extend their product (their code calls my functions that hooked in to their events).
Inside my code, I would like to send a single HTTP GET (or POST) request to a remote server without making the user wait (e.g. if they're offline, I want no delay - it just needs to fail silently).
There are many dozens of async examples around, but all of them appear to require that you're running a "normal" program, and that every part of the program is async to start with (i.e. I can't find any examples of a regular program, with an async bit added).
I'm new to python, and the async docs are drowning me :-(
That said - I do kinda know what I'm doing in other languages, and I understand how processes work (not so much threads though).
I did manage to partly "solve" my own question with this:
subprocess.Popen([get_exec(),os.path.join(prog_folder,"send_data.py"),str(VERSION)])
and a second script - except that pops open an ugly black "DOS" box which hangs around until the transfer completes and looks highly unprofessional. All attempts at avoiding the black box failed (I do not get the luxury of specifying my user's environment, and there is no "windows UI build" python version shipped that works.)
So basically - two questions
a) is it even possible for an event-driven python function to even "spawn" a thread at all? Perhaps imagine it this way: you've written a python module, and any caller can call a function in your module, which returns immediately, but your function then continues to do work for another minute in parallel - but crucially - the caller does not need to do anything special.
b) assuming it more-or-less is possible - can anyone give me a hint or a pointer to an example or something that might give me a clue where to start?
Python 3.7.6+ is my minimum environment.
My main problem (pardon the pun) is that all examples I can find do this:
loop.run_until_complete(asyncio.wait(print_http_headers(url)))
or this:
asyncio.run(main())
both of which block. Even the asyncio doc's "hello world" example is non-async as well (if only they had printed "world" first (after a 1s delay) and then printed "hello" second with no delay - that would have solved everything!!!)
All other suggestions gratefully received (there's bound to be an "outside the box" alternative I've not realized yet I expect - so long as the box isn't black and in-your-face that is :-)
Thanks! @user4815162342 - that totally did the trick!!
import _thread, time, socket

def YoBlably(stuff):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))
    sent = s.send(b'GET /my_path/check_update.asp?u=u1.20200505&v=1.20200503&a=_aft&p=my_prog HTTP/1.1\x0d\x0aHost: example.com\x0d\x0aAccept-Encoding: identity\x0d\x0aUser-Agent: Python\x0d\x0aConnection: close\x0d\x0a\x0d\x0a')
    if sent == 0:
        print("s Problem")
    chunk = s.recv(1024000)
    if chunk == b'':
        print("r Problem")
    print('got {}.'.format(chunk))
    s.close()

print("Starting in 1s...")
time.sleep(1)
_thread.start_new_thread(YoBlably, ('foo',))
print("Started...")
for i in range(0, 6):
    time.sleep(1)
    print('{}...'.format(i))
print("The end...")
outputs:
$ python pythreadsock.py
Starting in 1s...
Started...
got b'HTTP/1.1 200 OK\r\nDate: Wed, 20 May 2020 01:47:28 GMT\r\nServer: Apache/2.0.52\r\nExpires: Sun, 17 May 2020 23:58:28 GMT\r\nPragma: no-cache\r\nCache-Control: no-cache\r\nContent-Length: 35\r\nConnection: close\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n{"current_version_xyz":1.20200502}\n'.
0...
1...
2...
3...
4...
5...
The end...
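For reference, the _thread call above can be wrapped in a small helper using the higher-level threading module. The helper name below is my own invention; the daemon flag means the background thread won't keep the process alive, and the caller gets control back immediately:

```python
import threading
import time

def fire_and_forget(fn, *args):
    # Hypothetical helper: run fn in a background daemon thread and
    # return to the caller immediately (the caller does nothing special).
    t = threading.Thread(target=fn, args=args, daemon=True)
    t.start()
    return t

results = []

def slow_task(msg):
    time.sleep(0.2)      # stand-in for the HTTP request
    results.append(msg)

start = time.monotonic()
worker = fire_and_forget(slow_task, "done")
spawn_time = time.monotonic() - start   # fractions of a millisecond, not 0.2s
worker.join()  # only joined here so the demo can inspect the result
```

In the real add-in you would not join at all; the thread finishes (or fails silently) on its own while the event handler has long since returned.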

app is not stopping at "gets" - is my online interpreter forcing a timeout?

While starting an assignment (Towers of Hanoi), I leave my code in a very basic state while I ponder the logic of how to continue.
while arr3.count < 6
  puts "Move ring FROM which tower?"
  from = gets.chomp
  puts "Move ring TO which tower?"
  to = gets.chomp
end
Before I can start building the rest of the app, however, gets seems to fall through without any input from me, and the second puts displays on the screen. This continues looping every 30 seconds or so. Should I assume this is a feature of online interpreters (like Codecademy Labs)?
Now I'm distracted from continuing the assignment and have to find a better place to do my code.
I'm installing Aptana (based on some advice on this forum) to see if I can get a better environment to do my assignments. Or do most people use a text editor then run their .rb file through the windows console window?
Thx

Printing on screen the percentage completed while my tcl script is running?

I have a tcl script which takes a few minutes to run (the execution time varies based on different configurations).
I want the users to have some kind of an idea of whether it's still executing and how long it would take to complete while the script executes.
Some of the ideas I've had so far:
1) Indicate it using dots (...) which keep increasing with each internal command run. But this doesn't really give a first-time user a sense of how much more there is to go.
2) Use the revolving slash which I've seen used in many places.
3) Have an actual percentage completed output on screen. No idea if this is viable or how to go about it.
Does anyone have any ideas on what could be done so that users of the script understand what's going on and how to do this?
Also, if I'm implementing it using dots, how do I get each . to print on the same line? If I use puts in the tcl script, the . just gets printed on the next line.
And for the revolving slash, I would need to replace something which was already printed on screen. How can I do this with tcl?
First off, the reason you were having problems printing dots was that Tcl was buffering its output, waiting for a new line. That's often a useful behavior (often enough that it's the default) but it isn't wanted in this case so you turn it off with:
fconfigure stdout -buffering none
(The other buffering options are line and full, which offer progressively higher levels of buffering for improved performance but reduced responsiveness.)
Alternatively, do flush stdout after printing a dot. (Or print the dots to stderr, which is unbuffered by default due to mainly being for error messages.)
Doing a spinner isn't much harder than printing dots. The key trick is to use a carriage return (a non-printable character sometimes visualized as ^M) to move the cursor position back to the start of the line. It's nice to factor the spinner code out into a little procedure:
proc spinner {} {
    global spinnerIdx
    if {[incr spinnerIdx] > 3} {
        set spinnerIdx 0
    }
    set spinnerChars {/ - \\ |}
    puts -nonewline "\r[lindex $spinnerChars $spinnerIdx]"
    flush stdout
}
Then all you need to do is call spinner regularly. Easy! (Also, print something over the spinner once you've finished; just do puts "\r$theOrdinaryMessage".)
Going all the way to an actual progress meter is nice, and it builds on these techniques, but it requires that you work out how much processing there is to do and so on. A spinner is much easier to implement! (Especially if you've not yet nailed down how much work there is to do.)
The standard output stream is initially line buffered, so you won't see new output until you write a newline character, call flush or close it (which is automatically done when your script exits). You could turn this buffering off with...
fconfigure stdout -buffering none
...but diagnostics, errors, messages, progress etc should really be written to the stderr stream instead. It has buffering set to none by default so you won't need fconfigure.

expect: sequence of expects

I'm trying to automate the interaction with a remote device over telnet using expect.
At some point device generates output like this:
;
...
COMPLETED
...
;
What I need is to make my script exit after the "COMPLETED" keyword and the second ";" are found. However, all my attempts fail. The script either exits after the first ";" or hangs without exiting. Please help.
Expect works.
I make a point of that, because facha has already written "That [presumably the updated script, rather than Expect itself] didn't work" once. Expect has very few faults--but it's so unfamiliar to most programmers and administrators that it can be hard to discern exactly how to talk to it. Glenn's advice to
expect -re {COMPLETE.+;}
and
exp_internal 1
(or -d on the command line, or so on) is perfectly on-target: from everything I know, those are exactly the first two steps to take in this situation.
I'll speculate a bit: from the evidence provided so far, I wonder whether the expect matches truly even get to the COMPLETE segment. Also, be aware that, if the device to which one is telnetting is sufficiently squirrelly, even something as innocent-looking as "COMPLETE" might actually embed control characters. Your only hopes in such cases are to resort to debugging techniques like exp_internal, or autoexpect.
How about: expect -re {COMPLETED.+;}

Can I capture stdout/stderr separately and maintain original order?

I've written a Windows application using the native win32 API. My app will launch other processes and capture the output and highlight stderr output in red.
In order to accomplish this I create a separate pipe for stdout and stderr and use them in the STARTUPINFO structure when calling CreateProcess. I then launch a separate thread for each stdout/stderr handle that reads from the pipe and logs the output to a window.
This works fine in most cases. The problem I am having is that if the child process logs to stderr and stdout in quick succession, my app will sometimes display the output in the incorrect order. I'm assuming this is due to using two threads to read from each handle.
Is it possible to capture stdout and stderr in the original order they were written to, while being able to distinguish between the two?
I'm pretty sure it can't be done, short of writing the spawned program to write in packets and add a time-stamp to each. Without that, you can normally plan on buffering happening in the standard library of the child process, so by the time they're even being transmitted through the pipe to the parent, there's a good chance that they're already out of order.
In most implementations of stdout and stderr that I've seen, stdout is buffered and stderr is not. Basically what this means is that you aren't guaranteed they're going to be in order even when running the program on straight command line.
http://en.wikipedia.org/wiki/Stderr#Standard_error_.28stderr.29
The short answer: You cannot ensure that you read the lines in the same order that they appear on cmd.exe because the order they appear on cmd.exe is not guaranteed.
Not really. You would think so, but stdout is under the control of the system designers - exactly how and when stdout gets written is subject to the system scheduler, which in my testing is subordinated to issues that are not well documented.
I was writing some stuff one day and did some work on one of the devices on the system while I had the code open in the editor, and discovered that the system was giving real-time priority to the driver, leaving my carefully crafted C code somewhere about one tenth as important as the proprietary code.
Re-inverting that so that you get sequential ordering of the writes is going to be challenging, to say the least.
You can redirect stderr to stdout:
command_name 2>&1
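If merging is acceptable, the 2>&1 approach is also easy to do from code. Here is a hedged Python sketch (the child script is a made-up stand-in) showing that a single merged pipe preserves arrival order, at the cost of losing the stream identity:

```python
import subprocess
import sys

# Illustrative child that writes interleaved lines to stdout and stderr.
child = (
    "import sys\n"
    "for i in range(3):\n"
    "    print('out', i, flush=True)\n"
    "    print('err', i, file=sys.stderr, flush=True)\n"
)

# stderr=subprocess.STDOUT is the programmatic equivalent of 2>&1:
# both streams share one pipe, so writes arrive in the order made,
# but the reader can no longer tell which stream a line came from.
result = subprocess.run(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
lines = result.stdout.splitlines()
```

Note this only preserves order because the child flushes each line; with default buffering the child's stdout would still arrive late, as the answers above explain.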
This is possible in C using pipes, as I recall.
UPDATE: Oh, sorry - missed the part about being able to distinguish between the two. I know TextMate did it somehow with user-visible code... Haven't looked for a while, but I'll give it a peek. But after some further thought, could you use something like Open3 in Ruby? You'd have to watch both STDOUT and STDERR at the same time, but really no one should expect a certain ordering of output regarding these two.
UPDATE 2: Example of what I meant in Ruby:
require 'open3'

Open3.popen3('ruby print3.rb') do |stdin, stdout, stderr|
  loop do
    puts stdout.gets
    puts stderr.gets
  end
end
...where print3.rb is just:
loop do
  $stdout.puts 'hello from stdout'
  $stderr.puts 'hello from stderr'
end
Instead of throwing the output straight to puts, you could send a message to an observer which would print it out in your program. Sorry, I don't have Windows on this machine (or any immediately available), but I hope this illustrates the concept.
I'm pretty sure that even if you don't separate them at all, you're still not guaranteed that they'll interleave in the correct order.
Since the intent is to annotate the output of an existing program, any possible interleaving of the two streams must be correct. The original developer will have placed appropriate flush() calls to ensure any mandatory ordering is honoured.
As previously explained, record each fragment that is written with a time stamp, and use this to recover the sequence actually seen by the output devices.
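Sketching that idea in Python (the fragments below are hand-made stand-ins for what two reader threads would record; each tuple is (timestamp, stream, text)):

```python
import heapq

# Each reader thread tags fragments with the time it read them.
stdout_frags = [(0.001, "out", "step 1"), (0.030, "out", "step 3")]
stderr_frags = [(0.015, "err", "warning A")]

# heapq.merge interleaves the two already-sorted lists by timestamp,
# recovering (approximately) the order the streams were produced in,
# while the "out"/"err" tag keeps the two distinguishable.
merged = list(heapq.merge(stdout_frags, stderr_frags))
```

The ordering is only approximate, since the timestamps reflect when the parent read each fragment, not when the child wrote it, which is exactly the caveat the answers above raise.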
