Lighttpd closes connection when system time is changed - websocket

These are some of the parameters of my lighttpd config file.
server.modules += ( "mod_wstunnel", "mod_auth")
wstunnel.debug = 4
wstunnel.server.max-read-idle = 86400
#wstunnel.ping-interval = 5
#wstunnel.timeout = 30
When I open my web application, the connection is created properly over the WebSocket and connects to my C++ server.
All functionality works except one thing.
One requirement of my application is to change the system time of the machine, but when the system time is changed, the connection is closed and the log file shows:
`2019-02-12 14:04:10: (gw_backend.c.308) released proc: pid: 0 socket: tcp:127.0.0.1:10002 load: 0`
I want to maintain the connection even when the system time is changed.
What other parameters can be used, or what modifications are required to these parameters?
System OS : Fedora 26
Lighttpd version : 1.4.49

wstunnel.server.max-read-idle does not exist. Did you test the lighttpd config before running it and look at the error trace? It should have noted wstunnel.server.max-read-idle as an unrecognized directive.
The directives you seek are:
server.max-read-idle
server.max-write-idle
server.max-keep-alive-idle
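For example, a minimal sketch of raising them in lighttpd.conf (the 86400-second value simply mirrors the one in your config and is an assumption, not a recommendation):
server.max-read-idle = 86400
server.max-write-idle = 86400
server.max-keep-alive-idle = 86400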
However, if the time on your server (running lighttpd) is jumping more than a few seconds, then I suggest that is your primary problem.
Also, Fedora 26 reached end-of-life on May 29, 2018. Supported Fedora releases ship a newer version of lighttpd. The current version of lighttpd is 1.4.53.


VLC: which file to use to update the VLC Lua youtube script on Mac OS X?

I am trying to play a YouTube video through VLC on my Mac:
/Applications/VLC.app/Contents/MacOS/VLC -v https://www.youtube.com/watch?v=afzmwAKUppU&app=desktop
This gives the following errors:
VLC media player 3.0.8 Vetinari (revision 3.0.8-0-gf350b6b5a7)
[00007faf5b5e9140] lua generic warning: Error while running script /Applications/VLC.app/Contents/MacOS/share/lua/extensions/youtube.lua, function descriptor() not found
[00007faf5b4589c0] macosx interface warning: Failed to enable media key support, likely app needs to be whitelisted in Security Settings.
[00007faf5b784950] securetransport tls client warning: Ignoring ALPN request due to lack of support in the backend. Proxy behavior potentially undefined.
[00007faf5b770200] lua stream warning: Couldn't extract video URL, falling back to alternate youtube API
[00007faf5b6b5b60] securetransport tls client warning: Ignoring ALPN request due to lack of support in the backend. Proxy behavior potentially undefined.
[00007faf5f97ce70] securetransport tls client warning: Ignoring ALPN request due to lack of support in the backend. Proxy behavior potentially undefined.
2020-10-15 13:45:28.281 VLC[65658:198319] Can't find app with identifier com.spotify.client
[00007faf5b5d8580] lua stream error: Couldn't extract youtube video URL, please check for updates to this script
[00007faf5b44b570] main playlist: playlist is empty
I got youtube.lua by downloading the file from the internet:
curl "http://git.videolan.org/?p=vlc.git;a=blob_plain;f=share/lua/playlist/youtube.lua;hb=HEAD" -o /Applications/VLC.app/Contents/MacOS/share/lua/extensions/youtube.lua
This works on my Ubuntu machine, but not on my Mac, so I am wondering whether this is the correct version for Mac OS. Which file should be put there?
If I look in the VLC Lua directory, I find:
/Applications/VLC.app/Contents/MacOS/share/lua/extensions$ ls -l
total 192
-rw-r--r--# 1 romain admin 72K Aug 14 2019 VLSub.luac
-rw-r--r-- 1 root admin 22K Oct 15 13:35 youtube.lua
The youtube.lua is the new script I added, but maybe a different file should be put there?
Probably a bit late here now, but in any case:
You need to take the youtube.lua from here: https://github.com/videolan/vlc/blob/master/share/lua/playlist/youtube.lua
Then rename it to youtube.luac and place it in the directory (for MacOS) /Applications/VLC.app/Contents/MacOS/share/lua/playlist
I would recommend renaming the old one and keeping it in case something goes awry.
At least on my computer this worked and I can now open YouTube videos in VLC again.
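For reference, a sketch of those steps in a terminal, reusing the blob_plain URL from the question; it assumes the stock /Applications install path and that an old youtube.luac is already present in the playlist directory:
# keep the old script in case something goes awry
mv /Applications/VLC.app/Contents/MacOS/share/lua/playlist/youtube.luac /Applications/VLC.app/Contents/MacOS/share/lua/playlist/youtube.luac.bak
# fetch the current playlist script and install it under the .luac name
curl "http://git.videolan.org/?p=vlc.git;a=blob_plain;f=share/lua/playlist/youtube.lua;hb=HEAD" -o /Applications/VLC.app/Contents/MacOS/share/lua/playlist/youtube.luac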
Have you tried the 3.0.11.1 release? https://get.videolan.org/vlc/3.0.11.1/macosx/vlc-3.0.11.1.dmg
You're correct that youtube.lua would be the correct file, though the issue might come from other parts of the code depending on it. FYI, since you are running a stable VLC build, the code you should look at is https://code.videolan.org/videolan/vlc-3.0/-/blob/master/share/lua/playlist/youtube.lua.
https://code.videolan.org/videolan/vlc/-/blob/master/share/lua/playlist/youtube.lua is for the nightly v4 unstable builds (though for the Lua script, they are identical).
Additionally, look out for a new release with an updated script soon: https://mailman.videolan.org/pipermail/vlc-devel/2020-October/139076.html

Chilkat ftp.SyncLocalDir with open files?

I’m having an issue with ftp.SyncLocalDir when I have an open file in the local directory.
I’m using the example from http://www.example-code.com/vbnet/ftp_syncLocalTree.asp with a few minor changes. It has been working fine for a few days and then has stopped working.
I’ve found that one of the files is open on the local directory. Looking through the http://chilkatforum.com/ forum I see that one of the answers stated that
“Chilkat will detect errors that are likely permission/access errors and will continue with the remainder of the download.”
This is not happening for me. Looking at the last error text, it states that the file is being used by another process. No other files get synchronized.
Is there something else I need to add to the code to force it to continue after the error?
Below is the last error text.
Thanks,
Steve
ChilkatLog:
SyncLocalDir:
DllDate: Dec 5 2014
ChilkatVersion: 9.5.0.46
UnlockPrefix: *********
Username: *********
Architecture: Little Endian; 32-bit
Language: .NET 4.0
VerboseLogging: 0
commandCharset: ansi
dirListingCharset: ansi
localDirPath: Q:\TEST
mode: 2
ProgressMonitoring:
enabled: yes
heartbeatMs: 0
sendBufferSize: 65536
--ProgressMonitoring
downloadDir:
getFile2:
localFilename: Q:\TEST/LINE_6 _13.csv
Replacing existing local file
openForReadWriteWin32:
Failed to open file (2)
localFilePath: Q:\TEST\LINE_6 _13.csv
currentWorkingDirectory: H:\Code In Progress\LLS\Gen 3 Test And Crimp
w-network\VB Code\trunk\FTP Syncronize\bin\Debug
osErrorInfo: The process cannot access the file because it is being us
ed by another process.
localWindowsFilePath: Q:\TEST\Line 6\LINE_6 _13.csv
--openForReadWriteWin32
--getFile2
Failed to download file
failedFilename: /LINE_6 _13.csv
--downloadDir
Failed.
--SyncLocalDir
--ChilkatLog
Please try this new build for the .NET 4.0 Framework:
32-bit Download: http://www.chilkatsoft.com/download/preRelease/ChilkatDotNet4-9.5.0-win32.zip
64-bit Download: http://www.chilkatsoft.com/download/preRelease/ChilkatDotNet4-9.5.0-x64.zip
The feature for continuing past permission/access issues had to do with issues on the remote server as opposed to the local filesystem. This new build should now also do the same for local permission errors. It will be noted in the release notes for Chilkat version 9.5.0.47 when released (soon).
If you have trouble, please post the LastErrorText using this new build.

Memcached-based beaker sessions not initializing on Zope start

We're using memcached-based beaker sessions for a distributed Plone/Zope setup. The same setup works fine in our pre-prod environments but when we move to prod, we can't seem to get it to connect to memcached on 11211.
I've run tcpdump on the machine while starting Zope and it isn't even attempting a connection.
I've telnetted from the Zope server to the memcached server (the same machine as the ZEO server) to test the connection, and that works fine.
I've tried two different prod Zope servers and the result is the same.
We have our old setup (4.2.5) running on yet another prod server and when we start it, we get the expected behavior and it connects to memcached just fine. Unfortunately, we also have a 4.3.2 setup (same as prod) running in pre-prod and it ALSO works just fine. I can't identify any substantial differences in the setups.
All systems are running isolated (not system/apt controlled) Python 2.7s.
Relevant piece of zope.conf:
# Beaker Configuration
zope-conf-additional =
<product-config beaker>
cache.type ext:memcached
cache.url ${ips:memcached}
cache.data_dir ${buildout:directory}/var/cache/data
cache.lock_dir ${buildout:directory}/var/cache/lock
cache.regions short, long
cache.short.expire 60
cache.long.expire 3600
session.type ext:memcached
session.url ${ips:memcached}
session.data_dir ${buildout:directory}/var/sessions/data
session.lock_dir ${buildout:directory}/var/sessions/lock
session.key beaker.session
session.secret magicalsecretcornedbeefhash
</product-config>
zcml =
collective.beaker
Relevant versions from versions.cfg:
# Beaker
Beaker = 1.6.4
Products.BeakerSessionDataManager = 1.1
collective.beaker = 1.0b3
python-memcached = 1.47
I've tested that ${ips:memcached} is producing the correct value on startup. No errors are logged to the Zope log, but no beaker sessions are created and no connection attempts are made to memcached. Memcached is happily running but produces no logs (it seems it never has).
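For what it's worth, here is a quick sanity check that can be run from the buildout's Python to rule out the client library itself (a sketch using python-memcached; the host:port below is a placeholder for the value of ${ips:memcached}):
# verify that python-memcached can reach the production memcached instance
import memcache
mc = memcache.Client(['127.0.0.1:11211'])  # placeholder; substitute ${ips:memcached}
mc.set('beaker-smoke-test', 'ok')
print(mc.get('beaker-smoke-test'))  # prints 'ok' if the connection works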
Any ideas or advice on what could possibly be causing this would be helpful. More background:
Ubuntu 12.04.4 LTS
Linux 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
bin/buildout:
#!/opt/zope/pythons/python-2.7/bin/python
import sys
sys.path[0:0] = [
'/opt/zope/buildouts/FOO/eggs/distribute-0.6.28-py2.7.egg',
'/opt/zope/buildouts/FOO/eggs/zc.buildout-1.7.1-py2.7.egg',
]
When moving to production, remember the last part in the Products.BeakerSessionDataManager README:
3. In the ZMI, delete the ``session_data_manager`` object and add a
``Beaker Session Data Manager``.

LWP connect issues despite fresh install

Update
Working on a theory, I edited LWP/Protocol/http.pm to include a sleep statement in the subroutine request:
if (!$has_content || $write_wait || $has_content > 8*1024) {
WRITE:
{
# Since this just writes out the header block it should almost
# always succeed to send the whole buffer in a single write call.
my $n = $socket->syswrite($req_buf, length($req_buf));
sleep 2; ## <----- NEW
unless (defined $n) {
...
And the get statement worked, returning a 200 OK. Many thanks to Alan Curry for his help with debugging and for finding this particular place in the code.
I'm not sure this completely answers the question, or whether the solution works long term. I will have to do some more checking.
Summary:
The LWP::UserAgent get method fails for some URLs, reporting a 500 read timeout.
Only some URLs fail. E.g. www.google.com fails, but www.google.se succeeds.
I have no other connection issues; all URLs are reachable in a browser and through command-line programs such as ping.
Because of this problem, I cannot install Perl modules with CPAN or ActivePerl's ppm.
The problem persisted after installing another Perl distribution.
Weirdly enough, using the debugger and stepping through the code makes the failing URLs succeed.
I am using a firewall, and Perl is allowed to make connections. (Not relevant, since some URLs succeed.)
The firewall log shows Perl being allowed to connect for both failing and non-failing URLs (see below). The log also shows sockets opening to listen, but the timestamps are mismatched for failing connections.
Goal
I'm primarily looking for any solution to be able to install modules.
I'm interested in all suggestions on how to debug the problem; complete solutions are not required. Any hints or tips are welcome.
Elaboration
I have been using ActivePerl v5.14 for some time. Installing modules with its Perl Package Manager (the ppm command and GUI) worked very well, but at some point it stopped working, reporting a 500 timeout. The cpan shell reported the very same thing.
I have googled this problem extensively, but found nothing that relates to my problem, or helps in any way.
ActivePerl support claims it may be a proxy setting, which is ludicrous. I have lots of programs that connect to the internet without needing proxy settings, and as far as I know, I do not need any. I have tried to find out what my proxy settings are, if any, but the only thing I have found is vague references such as "use the system settings", "no proxy required" and "proxy is the same as your IP".
So last night I had enough and installed Strawberry Perl instead, but it suffers from the same problem. I uninstalled ActivePerl afterwards.
Anyway, I have experimented with the LWP modules and found that I can reproduce the errors there. It seems to be limited to certain websites, and CPAN is one of them (?). I created this script for testing:
use strict;
use warnings;
use LWP::UserAgent;
use URI;
my $ua = LWP::UserAgent->new;
my $url = shift;
my $u = URI->new($url);
$ua->no_proxy('cpan.strawberryperl.com','cpan.com',$u->host);
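# shorten LWP's default 180-second timeout so failures show up quickly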
$ua->timeout(30);
my $r = $ua->get($url);
if ($r->is_success) {
print $r->decoded_content;
} else {
die $r->status_line;
}
And then did some testing:
tx.pl http://cpan.strawberryperl.com/authors/01mailrc.txt.gz
500 read timeout at tx.pl line 23.
tx.pl http://stackoverflow.com
500 read timeout at tx.pl line 23.
tx.pl http://www.google.se
<!doctype html><html itemscope itemtype="http://schema.org/WebPage"><head><meta
http-equiv="content-type" content="text/html; charset=ISO-8859-1"><meta ...
So, google works, and www.youtube.com also works, but www.yahoo.com and search.cpan.com fail. The default timeout of 180 seconds makes this an incredibly annoying thing to debug, which is why I reduced it in my script. Needless to say, all of these URLs are reachable if I try them with Firefox or ping.
ETA:
Strangely enough, running the script through the debugger, turning on trace and skipping to the end makes the previously failing connections succeed.
This would seem to imply that there is some kind of hiccup or missed timing that is "fixed" when the script runs more slowly due to printing thousands of lines of trace code.
I could understand this issue being the result of some ActivePerl module getting corrupted, but Strawberry Perl uses a completely different set of files, so it must be my system.
Why some sites work and some don't is baffling. I could understand some sites like stackoverflow.com protecting themselves against potential bots, but why CPAN would thwart its own package manager makes no sense.
I am using a firewall, and Perl has been allowed to make connections. My system is a rather old installation of Windows XP (~5 years). While dual-booting with Ubuntu I've never encountered this problem, which is another clue that it is not something to do with proxies.
I am well and truly stumped. If anyone could help me debug this, I would be very grateful.
The CPAN shell error messages are below. The funny thing is, it says it tries to use ftp as a last resort, but I just discovered that the ftp command has not been allowed through my firewall, and if it had been used, the firewall should have asked me for permission.
Fetching with LWP:
http://cpan.strawberryperl.com/authors/01mailrc.txt.gz
LWP failed with code[500] message[read timeout]
Warning: no success downloading 'D:\strawberry\cpan\sources\authors\01mailrc.txt
.gz.tmp1252'. Giving up on it.
Fetching with LWP:
http://www.cpan.org/authors/01mailrc.txt.gz
LWP failed with code[500] message[read timeout]
Warning: no success downloading 'D:\strawberry\cpan\sources\authors\01mailrc.txt
.gz.tmp1252'. Giving up on it.
Warning: no success downloading 'D:\strawberry\cpan\sources\authors\01mailrc.txt
.gz.tmp1252'. Giving up on it.
As a last resort we now switch to the external ftp command 'C:\WINDOWS\system32\
ftp.EXE'
to get 'D:\strawberry\cpan\sources\authors\01mailrc.txt.gz.tmp1252'.
Doing so often leads to problems that are hard to diagnose.
If you're the victim of such problems, please consider unsetting the
ftp config variable with
o conf ftp ""
o conf commit
Please check, if the URLs I found in your configuration file
(http://cpan.strawberryperl.com/, http://www.cpan.org/) are valid. The
urllist can be edited. E.g. with 'o conf urllist push ftp://myurl/'
Could not fetch authors/01mailrc.txt.gz
Firewall log from trying to fetch a non-failing URL (www.google.se) and a failing one (stackoverflow.com):
2012-06-27T18:34:04+01:00,info,appl control,C:\WINDOWS\system32\svchost.exe,allow,listen,17,0.0.0.0,56564
2012-06-27T18:34:04+01:00,info,appl control,C:\WINDOWS\system32\svchost.exe,allow,send,17,195.54.122.198,53
2012-06-27T18:34:13+01:00,info,appl control,D:\strawberry\perl\bin\perl.exe,allow,connect out,6,64.34.119.12,80
2012-06-27T18:34:13+01:00,info,appl control,D:\strawberry\perl\bin\perl.exe,allow,connect out,6,64.34.119.12,80
2012-06-27T18:34:21+01:00,info,appl control,C:\Program\Mozilla Firefox\firefox.exe,allow,connect out,6,74.86.70.106,80
2012-06-27T18:34:28+01:00,info,appl control,C:\Program\Mozilla Firefox\firefox.exe,allow,connect out,6,64.34.119.12,80
2012-06-27T18:34:30+01:00,info,appl control,C:\WINDOWS\system32\svchost.exe,allow,listen,17,0.0.0.0,56664
2012-06-27T18:34:30+01:00,info,appl control,C:\WINDOWS\system32\svchost.exe,allow,send,17,195.54.122.198,53
2012-06-27T18:34:30+01:00,info,appl control,D:\strawberry\perl\bin\perl.exe,allow,connect out,6,74.125.143.94,80
2012-06-27T18:34:30+01:00,info,appl control,D:\strawberry\perl\bin\perl.exe,allow,connect out,6,74.125.143.94,80
2012-06-27T18:35:14+01:00,info,appl control,C:\Program\Mozilla Firefox\firefox.exe,allow,connect out,6,64.34.119.12,80
2012-06-27T18:35:21+01:00,info,appl control,C:\Program\Mozilla Firefox\firefox.exe,allow,connect out,6,74.86.70.106,80
2012-06-27T18:36:21+01:00,info,appl control,C:\Program\Mozilla Firefox\firefox.exe,allow,connect out,6,74.86.70.106,80
2012-06-27T18:37:04+01:00,info,appl control,C:\WINDOWS\system32\svchost.exe,allow,listen,17,0.0.0.0,61215
2012-06-27T18:37:04+01:00,info,appl control,C:\WINDOWS\system32\svchost.exe,allow,send,17,195.54.122.198,53
2012-06-27T18:37:07+01:00,info,appl control,D:\strawberry\perl\bin\perl.exe,allow,connect out,6,64.34.119.12,80
2012-06-27T18:37:07+01:00,info,appl control,D:\strawberry\perl\bin\perl.exe,allow,connect out,6,64.34.119.12,80
This might not be a complete solution to your problem. But here it is anyway:
From your "detailed" problem description it looks like a problem with your desktop/laptop. Even though your firewall allows connections to websites, as you mentioned, FTP might not be allowed by the built-in Windows firewall.
Usually, ports 21 (the FTP command port) and 20 (the FTP data port) should have been added to the firewall exceptions (in Windows: Start → Settings → Control Panel → Security Center → Firewall → Exceptions tab → Add Port). You can try adding ports 20 and 21 to the exceptions.
If you are behind a router, you may also have to forward ports 20 and 21, though these ports are often forwarded by default. If you are on a corporate VPN, it's a whole different story: corporate VPNs mostly restrict port 21 explicitly but allow port 22 (SSH, over which SFTP runs). Under such circumstances you may want to use an ftp_proxy.
Alternatively (if you don't want to add ports 20 and 21 to the exceptions), you can go to the cpan prompt and set an ftp_proxy:
cpan> o conf ftp_proxy http://your.ftpproxy.com
and then issue the install <module> command. Or you can update your CPAN/Config.pm file to make the ftp_proxy setting permanent.
Well, these may be the traditional solutions which you have probably already tried. The next step would be to try setting FTP_PASSIVE mode to 1. By default the libnetcfg configuration for this is set to 0. To change this, find the libnetcfg.bat file (it should be somewhere like C:\Perl\bin). Open the file in an editor and replace
ftp_int_passive 0
with
ftp_int_passive 1
This is the Windows batch file that runs when CPAN is invoked, to set the environment variables. On a UNIX/Linux-like system, the setting is found in libnet.cfg or in the FTP_PASSIVE environment variable, like so:
$ set | grep FTP_PASSIVE
FTP_PASSIVE=0
so to set it, just export FTP_PASSIVE=1.
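On Windows the equivalent, before launching the cpan shell, would be the following (this relies on Net::FTP honoring the FTP_PASSIVE environment variable):
set FTP_PASSIVE=1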
These are a few of the very many ways of debugging this. Honestly, there is no point fiddling with the library code, as it works fine on every other machine; usually 95% of these 01mailrc.txt.gz.tmp1252 download issues are due to network/OS/firewall issues, but if you want to expand your Perl knowledge of LWP, you can. In fact, you should also be looking at CPAN::FTP::netrc. Best of luck...

Any way to debug PHP 4.3 or PHP 4.4 using Apache 2 on Windows?

I believe I've tried every possible approach to setting up a debugging environment on Windows using Apache 2 and PHP 4, but failed with all of them.
I have a couple of Apache instances, one for PHP 4.3 and one for PHP 4.4. I got Xdebug installed on the 4.4 version (I couldn't find a working module for PHP 4.3), but it seems that there is no IDE/client that works with the older Xdebug versions. Then I switched over to the DBG approach. It simply won't load on either version. I tried the last two DBG Windows binary releases, but neither of them would load with PHP. phpinfo()'s output shows no sign of the DBG module, and the command-line "php -m" doesn't list it either.
I also can't seem to find a way to run PHP 4 as FastCGI on Apache 2 under Windows. There's only one executable (php.exe) included with the PHP 4 binaries for Windows (no php-cgi.exe). My Apache/PHP setup is:
Apache/2.0.64 (Win32) PHP/4.4.1 (phpinfo doesn't mention anything about thread safety; "php -i" shows thread safety as enabled)
Server API: Apache 2.0 Handler
php.ini DBG section:
I tried all of the following for loading the extension (none of them works):
zend_extension_ts="C:\PHP\php4.4.1\extensions\php_dbg.dll"
or
zend_extension_ts=php_dbg.dll
or
extension="C:\PHP\php4.4.1\extensions\php_dbg.dll"
or
extension=php_dbg.dll
[debugger]
debugger.enabled = true
debugger.profiler_enabled = true
debugger.JIT_host = clienthost
debugger.JIT_port = 7869
I tried these PHP DBG packages:
dbg-2.13.1-win32.zip
dbg-2.15.5-win32.zip
using php_dbg.dll-4.4.x (I just renamed it to php_dbg.dll), then copied it over to the PHP extensions directory.
Using Windows 7 x64.
