Apache2/Laravel responding 301 on some agents - laravel

I have wifidog installed on a TP-Link router (OpenWrt 18.06.2).
I have wifidog-auth-laravel installed on an OVH Debian server (github.com/wifidog/wifidog-auth-laravel).
If I use curl, Chrome, or wget, I get the pong response for the authentication URL.
But if wifidog attempts to get the pong response, I get a 301 Moved Permanently response.
How can that be?
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:302) Level 1: Connecting to auth server example.com:80
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:331) Level 1: Successfully connected to auth server example.com:80
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:141) Unlocking config
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:141) Config unlocked
[7][Mon May 20 13:44:54 2019][5977](centralserver.c:147) Connected to auth server
[6][Mon May 20 13:44:54 2019][5977](wd_util.c:116) AUTH_ONLINE status became ON
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:77) Sending HTTP request to auth server: [GET /ping/?gw_id=EC086B35444C&sys_uptime=1820&sys_memfree=6096&sys_load=0.70&wifidog_uptime=3 HTTP/1.0
User-Agent: WiFiDog 1.2.1
Host: example.com
]
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:87) Reading response
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:111) Read 725 bytes
[7][Mon May 20 13:44:54 2019][5977](simple_http.c:124) HTTP Response from Server: [HTTP/1.1 301 Moved Permanently
Date: Mon, 20 May 2019 13:44:54 GMT
Server: Apache/2.4.25 (Debian)
Location: http://example.com/ping?gw_id=EC086B35444C&sys_uptime=1820&sys_memfree=6096&sys_load=0.70&wifidog_uptime=3
Content-Length: 415
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at example.com Port 80</address>
</body></html>
]
[4][Mon May 20 13:44:54 2019][5977](ping_thread.c:191) Auth server did NOT say Pong!
[7][Mon May 20 13:44:54 2019][5977](firewall.c:140) Marking auth server down

I found the solution:
The WiFiDog user agent forms the URL differently from the other user agents: it requests /ping/?gw_id=... with a trailing slash, and Apache answers with a 301 to /ping?gw_id=... without it, which WiFiDog does not follow (see the curl comparison after the config below).
A workaround is in wifidog.conf:
AuthServer {
Hostname example.com
SSLAvailable yes
Path /
PingScriptPathFragment ping?
LoginScriptPathFragment login?
PortalScriptPathFragment portal?
MsgScriptPathFragment gw_message.php?
AuthScriptPathFragment auth?
}
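To see the difference in isolation (a sketch only; the gw_id value is copied from the log above, and example.com stands in for the real auth server), the two URL forms can be compared with curl:
# Trailing slash: reproduces the 301 Moved Permanently seen in the WiFiDog log
curl -i 'http://example.com/ping/?gw_id=EC086B35444C'
# No trailing slash: should return the plain Pong body that WiFiDog expects
curl -i 'http://example.com/ping?gw_id=EC086B35444C'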

Related

wget gives 403 on accessible files

First-time poster with a bizarre issue. I usually install software through conda, but from one moment to the next I stopped being able to use conda install because of a 403 I get when conda tries to access some configuration files. When trying to download those files with wget --spider --debug https://conda.anaconda.org/anaconda/noarch/current_repodata.json, I get the same 403 error.
DEBUG output created by Wget 1.19.4 on linux-gnu.
Reading HSTS entries from /home/jsequeira/.wget-hsts
URI encoding = ‘UTF-8’
Converted file name 'current_repodata.json' (UTF-8) -> 'current_repodata.json' (UTF-8)
Spider mode enabled. Check if remote file exists.
--2020-07-30 11:25:59-- https://conda.anaconda.org/anaconda/noarch/current_repodata.json
Resolving conda.anaconda.org (conda.anaconda.org)... 104.17.92.24, 104.17.93.24, 2606:4700::6811:5d18, ...
Caching conda.anaconda.org => 104.17.92.24 104.17.93.24 2606:4700::6811:5d18 2606:4700::6811:5c18
Connecting to conda.anaconda.org (conda.anaconda.org)|104.17.92.24|:443... connected.
Created socket 5.
Releasing 0x000056545deb1850 (new refcount 1).
Initiating SSL handshake.
Handshake successful; connected socket 5 to SSL handle 0x000056545deb2700
certificate:
subject: CN=anaconda.org,O=Cloudflare\\, Inc.,L=San Francisco,ST=CA,C=US
issuer: CN=Cloudflare Inc ECC CA-3,O=Cloudflare\\, Inc.,C=US
X509 certificate successfully verified and matches host conda.anaconda.org
---request begin---
HEAD /anaconda/noarch/current_repodata.json HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: conda.anaconda.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 403 Forbidden
Date: Thu, 30 Jul 2020 11:25:59 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
CF-Chl-Bypass: 1
Set-Cookie: __cfduid=d3cd3a67d3926551371d8ffe5a840b04f1596108359; expires=Sat, 29-Aug-20 11:25:59 GMT; path=/; domain=.anaconda.org; HttpOnly; SameSite=Lax
Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Thu, 01 Jan 1970 00:00:01 GMT
X-Frame-Options: SAMEORIGIN
cf-request-id: 044111dd9600005d4732b73200000001
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Vary: Accept-Encoding
Server: cloudflare
CF-RAY: 5baeb8dc2ba65d47-LIS
---response end---
403 Forbidden
cdm: 1
Stored cookie anaconda.org -1 (ANY) / <permanent> <insecure> [expiry 2020-08-29 11:25:59] __cfduid d3cd3a67d3926551371d8ffe5a840b04f1596108359
URI content encoding = ‘UTF-8’
Closed 5/SSL 0x000056545deb2700
Remote file does not exist -- broken link!!!
These files are accessible through the browser, and were always accessible with wget and conda until yesterday, when I was installing some tools not related to these network accesses. How can wget fail to download them?
So this was fixed by reinstalling apt-get. Some configuration file there must have been messed up.
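For the record, the CF-Chl-Bypass: 1 header in the debug output points to a Cloudflare challenge rather than an ordinary server-side 403, so a common diagnostic (a sketch only, not a fix; the User-Agent string is an arbitrary browser-like value) is to repeat the request with a browser User-Agent and see whether the response changes:
# Diagnostic only: same URL as above, but sent with a browser-like User-Agent
wget --spider --debug \
     --user-agent='Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' \
     https://conda.anaconda.org/anaconda/noarch/current_repodata.json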

HTTP/1.1 401 Access to the HttpFS server is restricted

bivm:/home/biadmin/Desktop # curl -i "http://bivm.ibm.com:14000/webhdfs/v1/tmp/newfile?op=OPEN"
HTTP/1.1 401 Access to the HttpFS server is restricted. Please obtain proper credential from the BigInsights Console first.
Content-Type: text/html; charset=iso-8859-1
Cache-Control: must-revalidate,no-cache,no-store
Content-Length: 1589
Server: Jetty(6.1.x)
I am getting the above error when I try to access a file in my cluster using the WebHDFS REST API through the HttpFS server (running on port 14000 by default). Please advise.
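The error text itself asks for credentials from the BigInsights Console, so a reasonable first step (a sketch under the assumption that the HttpFS gateway accepts those console credentials over HTTP basic auth; biadmin and PASSWORD are placeholders) is to repeat the call with credentials attached:
# Sketch only: retry the same request with the console user's credentials
curl -i -u biadmin:PASSWORD "http://bivm.ibm.com:14000/webhdfs/v1/tmp/newfile?op=OPEN"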

WebClient gets 401 when requesting a directory without a slash

I have a web server using nginx, configured with HTTPS and Basic Authentication.
I'm attempting to query it with System.Net.WebClient from PowerShell:
$wc = new-object System.Net.WebClient
$wc.Credentials = Get-Credential
return $wc.DownloadString($url)
This works fine with the following $urls
https://server.com
https://server.com/
https://server.com/directory/
https://server.com/page.php
https://server.com/directory/index.php
But for the following $urls, I get The remote server returned an error: (401) Unauthorized.
https://server.com/directory
https://server.com/otherdirectory
https://server.com/directory/directory
I thought at first it was due to redirection, but that wouldn't make sense given some of the working examples. Perhaps it's my nginx configuration?
I currently believe this is a bug in the WebClient class. Here is a summary of my interactions with the server:
------------------------------------------------------------------------
GET /directory HTTP/1.1
Host: example.com
------------------------------------------------------------------------
HTTP/1.1 401 Unauthorized
Server: nginx/1.4.6 (Ubuntu)
Date: Fri, 27 Jun 2014 02:37:45 GMT
Content-Type: text/html
Content-Length: 203
Connection: keep-alive
WWW-Authenticate: Basic realm="sup"
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
------------------------------------------------------------------------
GET /directory HTTP/1.1
Authorization: Basic bmFjaHQ6aGVsbG8=
Host: example.com
------------------------------------------------------------------------
HTTP/1.1 301 Moved Permanently
Server: nginx/1.4.6 (Ubuntu)
Date: Fri, 27 Jun 2014 02:37:46 GMT
Content-Type: text/html
Content-Length: 193
Location: http://example.com/directory/
Connection: keep-alive
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
------------------------------------------------------------------------
GET /directory/ HTTP/1.1
Host: example.com
------------------------------------------------------------------------
HTTP/1.1 401 Unauthorized
Server: nginx/1.4.6 (Ubuntu)
Date: Fri, 27 Jun 2014 02:37:47 GMT
Content-Type: text/html
Content-Length: 203
Connection: keep-alive
WWW-Authenticate: Basic realm="sup"
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
------------------------------------------------------------------------
At this point, the WebClient throws an exception stating that the server returned the error quoted in the question.
It should probably have provided the auth token with the third request, or at the very least responded to the final 401 with another request carrying the auth token.
I would like to have this confirmed by someone else (preferably with .NET 4.5.2) so I can accept this as the answer.
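The same exchange can be reproduced outside of .NET for comparison. curl, given -u and -L, re-sends the Basic credentials after following the same-host 301, which is exactly the step WebClient appears to skip (a sketch only; user:password and server.com are placeholders):
# curl follows the 301 to /directory/ and re-authenticates, so this returns the page
curl -iL -u user:password https://server.com/directory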

Gradle download timeout/retry

I am on a flaky network (or there is some kind of proxy or virus checker in the way), so my Gradle dependency downloads (external module dependencies from mavenCentral()) sometimes hang.
A local repo would help, but are there any settings for timeouts and retries?
The download starts, then hangs, and times out after the default socket timeout.
I can emulate this with wget:
wget -d http://repo1.maven.org/maven2/org/apache/santuario/xmlsec/1.5.2/xmlsec-1.5.2-sources.jar
DEBUG output created by Wget 1.11.4 on Windows-MSVC.
--2013-01-23 13:52:01--  http://repo1.maven.org/maven2/org/apache/santuario/xmlsec/1.5.2/xmlsec-1.5.2-sources.jar
Resolving repo1.maven.org... seconds 0.00, 68.232.34.223
Caching repo1.maven.org => 68.232.34.223
Connecting to repo1.maven.org|68.232.34.223|:80... seconds 0.00, connected.
Created socket 352.
Releasing 0x003311d0 (new refcount 1).
---request begin---
GET /maven2/org/apache/santuario/xmlsec/1.5.2/xmlsec-1.5.2-sources.jar HTTP/1.0
User-Agent: Wget/1.11.4
Accept: */*
Host: repo1.maven.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.0 200 OK
Accept-Ranges: bytes
Content-Type: application/java-archive
Date: Wed, 23 Jan 2013 12:52:01 GMT
Last-Modified: Mon, 14 May 2012 08:47:03 GMT
Server: ECAcc (lhr/4ABA)
X-Cache: HIT
Content-Length: 577534
Connection: keep-alive
---response end---
200 OK
Registered socket 352 for persistent reuse.
Length: 577534 (564K) [application/java-archive]
Saving to: `xmlsec-1.5.2-sources.jar.1'
 5% [=>                                    ] 33,328      --.-K/s   eta 17m 52s
I would like it to time out faster and retry the download.
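One option, assuming a much newer Gradle than the version shown above (the internal property names below are an assumption, not something verified against this setup), is to pass shorter HTTP timeouts on the command line so a stalled download fails fast instead of waiting for the default socket timeout:
# Sketch, assuming a Gradle version that honours these internal properties (values in milliseconds)
gradle build \
  -Dorg.gradle.internal.http.connectionTimeout=30000 \
  -Dorg.gradle.internal.http.socketTimeout=30000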

Magento Soap API working locally but not remotely

I have created a custom API for Magento Enterprise 1.11. Calling the API through SOAP v1 works fine on my local dev environment; however, I am unable to make calls from my local environment to the remote environment.
Using the PHP interactive shell on my local dev machine:
php > $client = new SoapClient(WSDL_URI,array('trace'=>1));
php > $client->login(API_USER,API_KEY);
php > var_dump($client->__getLastResponse());
string(538) "<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="urn:Magento" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><ns1:loginResponse><loginReturn xsi:type="xsd:string">f0eec73e49665aaf9cc4a6644fba5dc6</loginReturn></ns1:loginResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>
I have been able to do this successfully from the localhost, as well as between two local VMs running on my dev machine. I can also access the methods of my custom API without issue.
However, when I try to create a SOAP client against my remote test environment, I am able to create the client, but the call to $client->login(), or any subsequent call, results in the following:
php > $client = new SoapClient(REMOTE_WSDL_URI,array('trace'=>1));
php > $client->login(API_USER,API_KEY);
PHP Warning: Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://REMOTE_HOST/index.php/api/index/index/wsdl/1/' : failed to load external entity "http://REMOTE_HOST/index.php/api/index/index/wsdl/1/" in php shell code:1
Stack trace:
#0 php shell code(1): SoapClient->__call('login', Array)
#1 php shell code(1): SoapClient->login(API_USER, API_KEY)
#2 {main}
php > var_dump($client->__getLastRequestHeaders());
string(255) "POST /index.php/api/index/index/ HTTP/1.1
Host: REMOTE_HOST
Connection: Keep-Alive
User-Agent: PHP-SOAP/5.3.18-1~dotdeb.0
Content-Type: text/xml; charset=utf-8
SOAPAction: "urn:Mage_Api_Model_Server_HandlerAction"
Content-Length: 550
php > var_dump($client->__getLastResponseHeaders());
string(840) "HTTP/1.1 500 Internal Service Error
Date: Mon, 11 Feb 2013 19:06:56 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.19-1~dotdeb.0
Set-Cookie: PHPSESSID=7uqrcmiv96hroubnb1uu7c7cm6; expires=Wed, 13-Feb-2013 01:06:56 GMT; path=/; domain=.REMOTE_HOST; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: CUSTOMER=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.REMOTE_HOST; httponly
Set-Cookie: CUSTOMER_INFO=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.REMOTE_HOST; httponly
Set-Cookie: CUSTOMER_AUTH=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.REMOTE_HOST; httponly
Content-Length: 468
Vary: Accept-Encoding
Connection: close
Content-Type: text/xml; charset=utf-8
php > var_dump($client->__getLastResponse());
string(468) "<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Body><SOAP-ENV:Fault><faultcode>WSDL</faultcode><faultstring>SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://REMOTE_HOST/index.php/api/index/index/wsdl/1/' : failed to load external entity "http://REMOTE_HOST/index.php/api/index/index/wsdl/1/"
</faultstring></SOAP-ENV:Fault></SOAP-ENV:Body></SOAP-ENV:Envelope>
When I hit http://REMOTE_HOST/index.php/api/?wsdl I get the standard Magento WSDL.
The two environments are 99.99% identical:
Server version: Apache/2.2.16 (Debian) (both local dev and remote)
PHP 5.3.18 (local dev) 5.3.19 (remote host)
Apache/PHP configurations are the same.
Code base is identical
I have scoured the interwebs for clues, including:
http://www.magentocommerce.com/boards/viewthread/56528/
http://www.magentocommerce.com/wiki/5_-_modules_and_development/web_services/overriding_an_existing_api_class_with_additional_functionality#wsdl
Unable to connect to Magento SOAP API v2 due to "failed to load external entity"
Magento API SOAP-ERROR: Parsing WSDL: Couldn't load from '[url]/index.php/api/index/index/?wsdl=1' : Couldn't find end of Start Tag part line 56
http://www.magentocommerce.com/api/soap/introduction.html
I've tried the "Content-Length" header fix mentioned in the second-to-last link, and just about everything else I could think of... Stumped.
While you can load the WSDL URL (http://REMOTE_HOST/index.php/api/index/index/wsdl/1/) from your computer, your remote server can't contact itself via its REMOTE_HOST.
PHP's SoapServer object (used by Magento's implementation) needs to contact the WSDL to know which methods are exposed.
For reasons I've never been able to figure out, it's a common network configuration for a server not to have access to its own DNS entries. Connect to your server via SSH and try running the following:
curl http://REMOTE_HOST/index.php/api/index/index/wsdl/1/
My guess is you'll get a network timeout or a REMOTE_HOST unknown error. Fix your configuration so your server can access itself, and everything should start working.
You could also try changing the host's DNS nameservers: edit /etc/resolv.conf (e.g. with vim) to add Google's 8.8.8.8 and 8.8.4.4.
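If changing resolvers is not possible, a common workaround (a sketch, assuming the same vhost also answers on the loopback interface; REMOTE_HOST is a placeholder) is to pin the name in /etc/hosts so the server can fetch its own WSDL:
# Map the public hostname to loopback (run as root), then confirm the server can reach its own WSDL URL
echo "127.0.0.1 REMOTE_HOST" >> /etc/hosts
curl -I http://REMOTE_HOST/index.php/api/index/index/wsdl/1/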
