Lua https timeout is not working

I am using the following versions of Lua and its packages in an OpenWrt environment:
luasocket-2.0.2
luasec-0.4
lua-5.1.4
I am trying to use a timeout for an https.request call. I tried setting https.TIMEOUT, where local https = require("ssl.https"), and it never times out. I tried a very small timeout (knowing I would not get an answer within that time and that the internet connection was OK), and I also tried disconnecting the network connection once https.request was called. Is this a known issue, or should I try something else? My guess is that either send or receive blocks indefinitely.
-Swapnel

Setting the timeout on ssl.https does not work. You have to set it on socket.http.
For instance, if your code looks like this:
local https = require "ssl.https"
https.TIMEOUT = 0.01
b, c, h = https.request("https://www.google.fr/")
change it to this:
local http = require "socket.http"
local https = require "ssl.https"
http.TIMEOUT = 0.01
b, c, h = https.request("https://www.google.fr/")
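With the timeout set on socket.http, a call that exceeds it should fail fast instead of hanging. Here is a minimal sketch of checking for that, assuming luasec 0.4 keeps luasocket's convention of returning nil plus an error string (typically "timeout") on failure:
local http  = require "socket.http"
local https = require "ssl.https"

http.TIMEOUT = 5  -- seconds, applied to each low-level socket operation

local body, code = https.request("https://www.google.fr/")
if body then
  print("status: " .. tostring(code))          -- HTTP status code on success
else
  print("request failed: " .. tostring(code))  -- error string, e.g. "timeout"
end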

Related

Locust response time for the first request

I am using Locust, and my code looks like this:
import time
from locust import FastHttpUser, TaskSet, task, constant

class RecommenderTasks(TaskSet):
    @task
    def test_recommender_multiple_platforms(self):
        start = round(time.time() * 1000)
        self.client.get('recommendations', name='Test')
        end = round(time.time() * 1000)
        print(end - start)

class RecommenderUser(FastHttpUser):
    tasks = [RecommenderTasks]
    wait_time = constant(1)
    host = "https://my-host.com/"
When I test with this code, I get the following response times (in milliseconds):
374
62
65
68
64
I am not sure why the very first task takes 300+ ms while the rest are as expected. Because of this, my overall average time also increases. Could you please help me here?
Locust response times are measured from the time the request is sent to the server to the time a response is received. By default, Locust reuses socket connections when available, but creates new ones if an existing one isn't available. When connecting via HTTPS, a number of things need to happen to set up the connection initially (DNS lookup, TCP handshake, TLS handshake), so the first request on a fresh connection pays that cost. Generally, the performance of that connection setup depends on what the server is doing. You could look into ways of reducing your connection setup time. How to do that will vary widely depending on your stack, but you can find general principles in SO answers like this one:
how to reduce ssl time of website
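If that first-request handshake cost is skewing your averages, one workaround is to pay it before the requests you care about, for example with a warm-up call in on_start. This is only a sketch of that idea (the "warm-up" name and the extra call are not part of the original code); the warm-up request is still recorded in the stats, but under its own name instead of inflating "Test":
import time
from locust import FastHttpUser, TaskSet, task, constant

class RecommenderTasks(TaskSet):
    def on_start(self):
        # One throwaway request per simulated user to establish the TCP/TLS
        # connection up front; it is reported separately as "warm-up".
        self.client.get('recommendations', name='warm-up')

    @task
    def test_recommender_multiple_platforms(self):
        start = round(time.time() * 1000)
        self.client.get('recommendations', name='Test')
        end = round(time.time() * 1000)
        print(end - start)

class RecommenderUser(FastHttpUser):
    tasks = [RecommenderTasks]
    wait_time = constant(1)
    host = "https://my-host.com/"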

Sequel + ADO + Puma is not threading queries

We have a website running on Windows Server 2008 + SQL Server 2008 + Ruby + Sinatra + Sequel + Puma.
We've developed an API for our website.
When the access points are requested by many clients at the same time, the clients start getting RequestTimeout exceptions.
I investigated a bit and noted that Puma is handling multithreading fine.
But Sequel (or some layer below Sequel) is processing one query at a time, even when the queries come from different clients.
In fact, the RequestTimeout exceptions don't occur if I launch many web servers, each one listening on a different port, and assign a different port to each client.
I don't know yet whether the problem is Sequel, ADO, ODBC, Windows, SQL Server or something else.
The truth is that I cannot switch to any other technology (like TinyTDS).
Below is a little piece of code, with screenshots, that you can use to replicate the bug:
require 'sinatra'
require 'sequel'

CONNECTION_STRING =
  "Driver={SQL Server};Server=.\\SQLEXPRESS;" +
  "Trusted_Connection=no;" +
  "Database=pulqui;Uid=;Pwd=;"

DB = Sequel.ado(:conn_string=>CONNECTION_STRING)

enable :sessions
configure { set :server, :puma }
set :public_folder, './public/'
set :bind, '0.0.0.0'

get '/delaybyquery.json' do
  tid = params[:tid].to_s
  begin
    puts "(track-id=#{tid}).starting access point"
    q = "select p1.* from liprofile p1, liprofile p2, liprofile p3, liprofile p4, liprofile p5"
    DB[q].each { |row| # this query should take a lot of time
      puts row[:id]
    }
    puts "(track-id=#{tid}).done!"
  rescue => e
    puts "(track-id=#{tid}).error:#{e.to_s}"
  end
end

get '/delaybycode.json' do
  tid = params[:tid].to_s
  begin
    puts "(track-id=#{tid}).starting access point"
    sleep(30)
    puts "(track-id=#{tid}).done!"
  rescue => e
    puts "(track-id=#{tid}).error:#{e.to_s}"
  end
end
There are 2 access points in the code above:
delaybyquery.json, which generates a delay by joining the same table 5 times. Note that the table must have about 1000 rows for the query to run really slowly; and
delaybycode.json, which generates a delay by just calling the Ruby sleep function.
Both access points receive a tid (tracking-id) parameter, and both write their output to the CMD window, so you can follow the activity of both processes in the same window and check which access point is blocking incoming requests from other browsers.
For testing, I'm opening 2 tabs in the same Chrome browser.
Below are the 2 testings that I'm performing.
Step #1: Run the webserver
c:\source\pulqui>ruby example.app.rb -p 81
I get the output below
Step #2: Testing Delay by Code
I called this URL:
127.0.0.1:81/delaybycode.json?tid=123
and 5 seconds later I called this other URL
127.0.0.1:81/delaybycode.json?tid=456
Below is the output, where you can see that both calls are working in parallel
click here to see the screenshot
Step #3: Testing Delay by Query
I called this URL:
127.0.0.1:81/delaybyquery.json?tid=123
and 5 seconds later I called this other URL
127.0.0.1:81/delaybyquery.json?tid=456
Below is the output, where you can see that the calls are processed one at a time.
Each call to the access point finishes with a query timeout exception.
click here to see the screenshot
This is almost assuredly due to win32ole (the driver that Sequel's ado adapter uses). It probably doesn't release the GVL during queries, which would cause the issues you are seeing.
If you cannot switch to TinyTDS or switch to JRuby, then your only option if you want concurrent queries is to run separate webserver processes, and have a reverse proxy server dispatch requests to them.

What's the default time_out limitation of ruby httpclient

Recently, I ran into trouble when trying to upload a very large file with the Ruby httpclient gem. I got this error message:
HTTPClient::ConnectTimeoutError: execution expired
I know I can set the values of receive_timeout, send_timeout, and connect_timeout like this:
client = HTTPClient.new
client.receive_timeout = 50000
However, I am really curious about the default timeout values. Could anyone tell me what they are? Thanks!
The default value is 60 seconds, defined in httpclient/session.rb. This is also the place where the default values of the other parameters are set. The client forwards these settings to HTTPClient::SessionManager.
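So for a very large upload, the practical fix is to raise those limits explicitly rather than rely on the defaults. A rough sketch (the endpoint URL and file path are placeholders):
require 'httpclient'

client = HTTPClient.new
client.connect_timeout = 60     # seconds allowed to establish the connection
client.send_timeout    = 600    # seconds allowed to send the request body (the relevant one for big uploads)
client.receive_timeout = 600    # seconds allowed to read the response

# Placeholder endpoint and file; a Hash body containing an IO is sent as multipart/form-data.
client.post('https://example.com/upload', { 'file' => File.open('/tmp/big.bin') })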

Message/logging from Thin

How can I stop Rack's Thin handler from printing initial messages of the following type?
>> Thin web server (v1.3.1 codename Triple Espresso)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
I am using it like this:
Rack::Handler::Thin.run(Rack::Builder.new do
  map("/resource/"){ run(Rack::File.new("/")) }
  map("/") do
    run(->env{
      h = Rack::Utils.parse_nested_query(env["QUERY_STRING"])
      [200, {}, [routine_to_generate_dynamic_content(h)]]
    })
  end
end, Port: 3000)
It looks like the initial messages are coming from Thin. As per their GitHub issue #31, "Disabling all logging", you can add Thin::Logging.silent = true before the rest of the code to silence the initial messages.
However, this will also silence all other messages from the Thin adapter. A glance at the source shows that these messages will be silenced as well:
Waiting for n connection(s) to finish, can take up to n sec, CTRL+C to stop now
Stopping ...
!! Ruby 1.8.5 is not secure please install cgi_multipart_eof_fix:    gem install cgi_multipart_eof_fix
Hope this helps!
Those messages do not come from Rack; they come from Thin: https://github.com/macournoyer/thin/blob/master/lib/thin/server.rb#L150 You can set the logging preferences according to this: https://github.com/macournoyer/thin/blob/master/lib/thin/logging.rb Thin::Logging.silent = true will do it, but do you really want to silence everything? Maybe direct it to a log file instead of stdout?
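Whichever answer you follow, the call has to run before the handler boots and prints its banner. A minimal sketch based on the question's setup (the "ok" body is just a stand-in for the original lambda):
require 'thin'
require 'rack'

Thin::Logging.silent = true   # must be set before Rack::Handler::Thin.run prints its startup messages

Rack::Handler::Thin.run(Rack::Builder.new do
  map("/") { run ->(env) { [200, {}, ["ok"]] } }
end, Port: 3000)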

Django1.3 multiple gunicorn workers caching problems

I have weird caching problems with Django 1.3. I probably have something configured wrong, but I am not sure what.
A good example is django-avatar, which uses caching and which many people use. Even if I don't have a cache backend defined, the avatar seems to be cached; that by itself would be OK, but it keeps switching back and forth between the last values cached. Example: I upload a new avatar, and now roughly 50% of requests show me the new one and 50% the old one. If I delete the old one, I still get it on the site 50% of the time. The only way to fix it is to disable avatar caching by setting its timeout to one second.
First I thought it was because I used django.core.cache.backends.locmem.LocMemCache, which I had never used before, but it even happens when I don't configure a cache backend at all.
I found one similar bug:
Django caching bug .. even if caching is disabled
but my pages render just fine; it's the templatetags (for now) that cause the problems in my setup.
I use django 1.3, postgres, nginx, gunicorn 0.12.0, greenlet==0.3.1, eventlet==0.9.16
I just did some more testing and realized that it only happens when I start gunicorn using the config file. If I start it with ./manage.py run_gunicorn, everything is fine. Running "gunicorn_django -c deploy/gunicorn.conf.py" causes the problems.
The only explanation I can think of is that each worker gets its own cache (I wonder why, since I did not define a cache).
Update: running ./manage.py run_gunicorn -w 4 also causes the same problems. Therefore I am almost certain that the multiple workers are causing the problems and each worker caches values separately.
My configuration:
import os
import socket
import sys
PORT = 8000
PROC_NAME = 'myapp_gunicorn'
LOGFILE_NAME = 'gunicorn.log'
TIMEOUT = 3600
IP = '127.0.0.1'
DEPLOYMENT_ROOT = os.path.dirname(os.path.abspath(__file__))
SITE_ROOT = os.path.abspath(os.path.sep.join([DEPLOYMENT_ROOT, '..']))
CPU_CORES = os.sysconf("SC_NPROCESSORS_ONLN")
sys.path.insert(0, os.path.join(SITE_ROOT, "apps"))
bind = '%s:%s' % (IP, PORT)
logfile = os.path.sep.join([DEPLOYMENT_ROOT, 'logs', LOGFILE_NAME])
proc_name = PROC_NAME
timeout = TIMEOUT
worker_class = 'eventlet'
workers = 2 * CPU_CORES + 1
I also tried it without using 'eventlet', but got the same errors.
Thanks for any help.
It is most likely defaulting to the local in-memory cache, which means each worker has its own version of the cache in its own memory space. If you hit worker 1 you get a different cache than on worker 3. Nginx is spreading the load between the workers, most likely via round-robin distribution, so you hit a different worker on each request, which explains your wacky results.
When you do manage.py run_gunicorn, it is most likely running a single process, and thus there is only one cache, which is why you don't see the same problem.
Using memcached or something similar is the way to go.
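For example, pointing every worker at one shared memcached instance in settings.py means they all read and write the same cache (the host/port are assumptions, and this backend needs the python-memcached package installed):
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}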
