How to reuse an AWS socket? - websocket

I am using an AWS (Ada Web Server) webserver, which is polled by another script. The problem is that when I start the server twice within a few seconds (with client requests in between), the server fails to start again, saying:
raised AWS.NET.SOCKET_ERROR : Bind : [98] Address already in use
There is an old thread suggesting there may be a reuse_address
option, either in an ini file or as a direct parameter, but it reports that this does not work either.
Perhaps there is some way to force the OS to abandon the socket?

You need to call AWS.Config.Set.Reuse_Address (Config, True); or set it in the AWS ini file.
For example:
with AWS.Config;
with AWS.Config.Set;
with AWS.Server;
(...)
declare
   HTTP_Server : AWS.Server.HTTP;
   AWS_Config  : AWS.Config.Object := AWS.Config.Default_Config;
begin
   AWS.Config.Set.Reuse_Address (AWS_Config, True);
   AWS.Config.Set.Server_Port (AWS_Config, 80);
   AWS.Server.Start
     (HTTP_Server,
      Callback => Respond'Unrestricted_Access,
      Config   => AWS_Config);
   (...)
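Alternatively, the same option can be set in the server's ini file. A minimal sketch; the parameter names below mirror the AWS.Config.Set routine names, but check the AWS.Config documentation for your version:

```ini
reuse_address true
server_port 80
```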


How to see print() results in Tarantool Docker container

I am using the tarantool/tarantool:2.6.0 Docker image (the latest at the moment) and writing Lua scripts for the project. I am trying to find out how to see the results of calling the print() function; it's quite difficult to debug my code without print() working.
print() has no effect in the Tarantool console either.
Using simple print()
The docs say that print() writes to stdout, but I don't see any results when I watch the container's logs with docker logs -f <CONTAINER_NAME>.
I also tried setting the container's log driver to local. Then one print reached the container's logs, but only once...
The container's /var/log directory is always empty.
Using box.session.push()
Using box.session.push() works fine in console, but when I use it in lua script:
-- app.lua
function log(s)
    box.session.push(s)
end

-- No effect
log('hello')

function say_something(s)
    log(s)
end

box.schema.func.create('say_something')
box.schema.user.grant('guest', 'execute', 'function', 'say_something')
And then call say_something() from the Node.js connector like this:
const TarantoolConnection = require('tarantool-driver');
const conn = new TarantoolConnection(connectionData);
const res = await conn.call('say_something', 'hello');
I get an error.
Any suggestions?
Thanks!
I suppose you're missing io.flush() after the print call.
After I added io.flush() after each print call, my messages started showing up in the logs (docker logs -f <CONTAINER_NAME>).
I'd also recommend the log module for this purpose; it writes to stderr without buffering.
Regarding the error in the connector: I think the Node.js connector simply doesn't support pushes.
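A minimal sketch of both approaches, assuming the script runs under tarantool (where the log module is built in):

```lua
-- Flushed print: visible in `docker logs` after the flush
print('hello from app.lua')
io.flush()

-- Or the built-in log module, which writes to stderr without buffering
local log = require('log')
log.info('hello from app.lua')
```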

RUBY script: Wait for ANY output when connected to Telnet session

I am using a Ruby script to send commands to the UT. I have successfully established a telnet session to the remote UT. The commands sent perform a series of operations and give me statistics.
Initially, I designed the UT to send an OK after each command, which the script receives successfully. How do I receive the statistics information? The script doesn't know the output of a command in advance, and each command has its own output string.
Using Ruby, how can I tell Telnet's waitfor() to wait for a long duration but break out as soon as the UT sends something?
To read the OK, I used:
response = @newSession.waitfor("String" => "OK\n", "Timeout" => time_out)
where newSession holds the telnet session connection:
@newSession = Net::Telnet::new("Session" => @session,
                               "Host"    => @ut_ip,
                               "Port"    => @port_num,
                               "Timeout" => 10,
                               "Prompt"  => /[$%#>] \z/n)
I can't use "Match" or "Prompt" since I don't know what I am going to get. Help me out, guys. Thanks.
Let’s read the documentation on Net::Telnet:
For some protocols, it will be possible to specify the Prompt option once when you create the Telnet object and use cmd() calls; for others, you will have to specify the response sequence to look for as the Match option to every cmd() call, or call puts() and waitfor() directly; for yet others, you will have to use sysread() instead of waitfor() and parse server responses yourself.
So, you are going to have to handle the IO yourself, reading with sysread() and running the raw data through #preprocess yourself.

.Net FTPS Connection times out after sending 'CCC' command

I've been struggling a lot these last few days with an FTPS server that requires the 'CCC' command, which I'm trying to access via .NET.
I'm using the AlexFTPS library. I'm able to connect and negotiate AUTH TLS, and I'm able to change directories, but when I try to list a directory or download files, the server asks for the 'CCC' command. When I send 'CCC', I get a '200 CCC Context Enabled' reply, but then I cannot send anything else; every time I get a server timeout exception.
I've done further tests:
WS_FTP: works if I check the 'Use unencrypted command channel after SSL authentication' option
Filezilla: does not work even if I add 'CCC' as a post-login command
http://www.componentpro.com/ftp.net/: works but is not open source
Any help would be much appreciated... Sorry, I am not fluent in FTP...
Here's my code :
Using Client As New AlexPilotti.FTPS.Client.FTPSClient
    AddHandler Client.LogCommand,
        Sub(sender As Object, args As AlexPilotti.FTPS.Common.LogCommandEventArgs)
            Console.WriteLine(args.CommandText)
        End Sub
    AddHandler Client.LogServerReply,
        Sub(sender As Object, args As AlexPilotti.FTPS.Common.LogServerReplyEventArgs)
            Console.WriteLine(args.ServerReply)
        End Sub

    Dim cred = New Net.NetworkCredential("login", "password")
    Client.Connect("ftps.server.com", cred, AlexPilotti.FTPS.Client.ESSLSupportMode.CredentialsRequired)

    Client.SendCustomCommand("SYST")
    Client.SendCustomCommand("PBSZ 0")
    Client.SendCustomCommand("PROT P")
    Client.SendCustomCommand("FEAT")
    Client.SendCustomCommand("PWD")
    Client.SendCustomCommand("TYPE A")
    Client.SendCustomCommand("PASV")
    Client.SendCustomCommand("CCC")
    Client.SendCustomCommand("LIST")

    Console.ReadKey()
End Using
Thanks !
CCC ("Clear Command Channel") is a special command which downgrades the connection from TLS (started with AUTH TLS) back to unencrypted. So it's not enough to send it as a custom command on the established control connection; it has to be handled like AUTH TLS by the FTPS library itself, so that once the command completes the library performs the TLS shutdown and continues talking plaintext on the control channel.

node.js: Run external command in new window

I'm writing a node.js script that permanently runs in the background and initiates file downloads from time to time. I want to use WGET for downloading the files, since it seems more robust until I really know what I'm doing with node.js.
The problem is that I want to see the progress of the downloads in question. But I can't find a way to start WGET through node.js in a way that a new shell window is opened and displayed for WGET.
Node's exec and spawn functions run external commands in the background and would let me access the output stream. But since my script runs in the background (hidden), there's not much I could do with the output stream.
I've tried opening a new shell window by running "cmd /c wget..." but that didn't make any difference.
Is there a way to run external command-line processes in a new window through node.js?
You can use node-progress and Node's http client (or the request module) for this; no need for wget:
https://github.com/visionmedia/node-progress
Example:
var ProgressBar = require('progress'),
    https = require('https');

var req = https.request({
  host: 'download.github.com',
  port: 443,
  path: '/visionmedia-node-jscoverage-0d4608a.zip'
});

req.on('response', function (res) {
  var len = parseInt(res.headers['content-length'], 10);

  console.log();
  var bar = new ProgressBar(' downloading [:bar] :percent :etas', {
    complete: '=',
    incomplete: ' ',
    width: 20,
    total: len
  });

  res.on('data', function (chunk) {
    bar.tick(chunk.length);
  });

  res.on('end', function () {
    console.log('\n');
  });
});

req.end();
UPDATE:
Since you want to do this in a background process and listen for download progress (from a separate process or what have you), you can achieve that with pub-sub functionality. Either:
use a message queue like Redis, RabbitMQ or ZeroMQ
start a TCP server on a known port (or a UNIX domain socket) and have listeners connect to it
Resources:
http://nodejs.org/api/net.html

Perl Script to Monitor URL Using proxy credentials?

Please help with the following code; it is not working in our environment.
use LWP;
use strict;

my $url      = 'http://google.com';
my $username = 'user';
my $password = 'mypassword';

my $browser = LWP::UserAgent->new('Mozilla');
$browser->credentials("172.18.124.11:80", "something.co.in", $username => $password);
$browser->timeout(10);

my $response = $browser->get($url);
print $response->content;
OUTPUT:
Can't connect to google.com:80 (timeout)
LWP::Protocol::http::Socket: connect: timeout at C:/Perl/lib/LWP/Protocol/http.pm line 51.
OS: windows XP
Regards, Gaurav
Do you have an HTTP proxy at 172.18.124.11? It looks like LWP is not using the proxy. You might want to pass env_proxy => 1 to the new() call.
You also have a mod-perl2 tag on this question. If this code runs inside mod_perl2, it's possible that the http_proxy environment variable is not visible to the code. You can check this e.g. by printing $browser->proxy('http').
Or just set the proxy explicitly with $browser->proxy('http', 'http://172.18.124.11/');
Also, I assume you don't have use warnings enabled, because new() takes a hash, not just a string. It's a good idea to always enable warnings; that will save you a lot of trouble.
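Putting those points together, a sketch of the corrected script (the proxy address and realm are taken from the question; adjust to your environment):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $url = 'http://google.com';

# new() takes key/value pairs, not a bare string
my $browser = LWP::UserAgent->new(agent => 'Mozilla', timeout => 10);

# Route plain-HTTP requests through the proxy explicitly
$browser->proxy('http', 'http://172.18.124.11:80/');

# Proxy credentials, if the proxy challenges (realm from the question)
$browser->credentials('172.18.124.11:80', 'something.co.in', 'user' => 'mypassword');

my $response = $browser->get($url);
print $response->is_success ? $response->content : $response->status_line;
```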
