CodeIgniter/HTTPS issues on OpenShift

I can't even get CI's welcome page to come up on OpenShift - likely because of HTTPS. I checked my config file, and it detects HTTPS and produces a correct base_url:
$config['base_url'] = "https://mysite.rhcloud.com/CI_base/";
But that seems to be as far as it goes - nothing comes back in response to CI's GET call. (I don't have a .htaccess set up.)
GET /CI_base/index.php
When I compare this to the GET request on my localhost system (over HTTP), I notice that the headers are a little different, but I can't tell which ones point to the culprit. The issue is probably jumping out at you - at least I hope so! Could you please point me in the right direction?
Thanks!
Mmiz
LOCALHOST HEADER:
**Connection** Keep-Alive
**Content-Length** 1925
**Content-Type** text/html
**Date** Mon, 23 Sep 2013 17:16:17 GMT
**Keep-Alive** timeout=5, max=100
**Server** Apache/2.2.22 (Unix) DAV/2 PHP/5.3.15 with Suhosin-Patch mod_ssl/2.2.22 OpenSSL/0.9.8x
**Set-Cookie** TW_COOK=Vj <more>; expires=Mon, 23-Sep-2013 19:16:18 GMT; path=/
**X-Powered-By** PHP/5.3.15
OPENSHIFT HEADER:
**Connection** Keep-Alive
**Content-Encoding** gzip
**Content-Length** 121
**Content-Type** text/html
**Date** Mon, 23 Sep 2013 17:16:55 GMT
**Keep-Alive** timeout=15, max=100
**Server** Apache/2.2.15 (Red Hat)
**Vary** Accept-Encoding

By default, your application on OpenShift responds to both HTTP and HTTPS. For instance, if you use the CodeIgniter quickstart at https://github.com/openshift/CodeIgniterQuickStart, you can go to both http://ci-$yournamespace.rhcloud.com and https://ci-$yournamespace.rhcloud.com. To redirect to HTTPS by default, take a look at this article: https://help.openshift.com/hc/en-us/articles/202398810-How-to-redirect-traffic-to-HTTPS-
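For reference, the linked article boils down to a small mod_rewrite block in the application's .htaccess. A minimal sketch, assuming the OpenShift router terminates SSL and sets the X-Forwarded-Proto header as described there (check the article for the exact variant for your cartridge):
RewriteEngine On
# SSL is terminated at the router, so test the forwarded protocol rather than HTTPS itself
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]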

Related

New cluster creation using Cloudera Director

Getting the following error while trying to create a new cluster using Cloudera Director. Any advice?
[ec2-user@ip-10-0-2-227 cloudera-director-1.0.0]$ ./bin/cloudera-director bootstrap-remote aws.reference.conf --lp.remote.hostAndPort=127.0.0.1:7189
Process logs can be found at /home/ec2-user/cloudera/cloudera-director-1.0.0/logs/application.log
Cloudera Director 1.0.0 initializing ...
Configuration file passes all validation checks.
Creating a new environment ...
>> POST http://127.0.0.1:7189/api/v1/environments
<< 401 Unauthorized
Unexpected internal error (see logs): HTTP/1.1 401 Unauthorized [X-Content-Type-Options: nosniff, X-XSS-Protection: 1; mode=block, Pragma: no-cache, X-Frame-Options: DENY, Set-Cookie: JSESSIONID=j0ii441ungs61o1ivobib7zn2;Path=/, Content-Type: application/json;charset=UTF-8, Transfer-Encoding: chunked, Server: Jetty(8.1.15.v20140411)]
You are using the Cloudera Director Server (which currently has known issues). In the meantime, you can still get the cluster running with Cloudera Director without the server part.
The command is
./bin/cloudera-director bootstrap aws.simple.conf (simple config)
-OR-
./bin/cloudera-director bootstrap aws.reference.conf (advanced config)
You need to supply the username and password for the Director server when using the bootstrap-remote command, for example:
... --lp.remote.username=admin --lp.remote.password=admin ...
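Putting that together with the command from the question, the full invocation would look something like this (assuming the server is still using the default admin/admin credentials):
./bin/cloudera-director bootstrap-remote aws.reference.conf --lp.remote.hostAndPort=127.0.0.1:7189 --lp.remote.username=admin --lp.remote.password=admin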
This should have been included in our docs; we're working on that. (I work for Cloudera.)
Feel free to also post questions to community.cloudera.com.

How do I subscribe to GitHub's PubSubHubbub hub?

I'm currently attempting to write a basic client that listens to events from GitHub (Enterprise) and makes API calls accordingly.
The problem is that I can't get the PubSubHubbub client configured. I thought it was the client/authentication I'm using, but now I can't even get the basic call from the docs working!
In an attempt to work out what I'm doing wrong, I'm making a curl request against my normal GitHub account:
curl -u "joepym" -i \
https://api.github.com/hub \
-F "hub.mode=subscribe" \
-F "hub.topic=http://github.com/JoePym/faraday/events/push" \
-F "hub.callback=*callbackurl*"
and I'm getting back
HTTP/1.1 100 Continue
HTTP/1.1 422 Unprocessable Entity
Server: GitHub.com
Date: Wed, 08 May 2013 18:13:24 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Status: 422 Unprocessable Entity
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 4989
X-GitHub-Media-Type: github.beta
X-Content-Type-Options: nosniff
Content-Length: 38
{
"message": "Invalid event: nil"
}
This 'Invalid event' message is also what my main client gets when I attempt to call my GitHub Enterprise account with Enterprise credentials.
Has anyone encountered this before?
Try using https://github.com/JoePym/faraday/events/push as your hub.topic. Note that we are now using 'https'.
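In other words, the same request as in the question with only the topic scheme changed (the callback placeholder left as-is):
curl -u "joepym" -i \
https://api.github.com/hub \
-F "hub.mode=subscribe" \
-F "hub.topic=https://github.com/JoePym/faraday/events/push" \
-F "hub.callback=*callbackurl*"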

My HTTPS website can't be downloaded with wget

I can browse the page in a browser, but I can't download the HTML page with wget.
https://money.benck.tw
When I use wget, it can't even connect to the website:
--2011-10-12 05:30:24-- https://money.benck.tw/
Resolving money.benck.tw... 97.107.135.68
Connecting to money.benck.tw|97.107.135.68|:443... failed: Connection timed out.
Retrying.
--2011-10-12 05:33:35-- (try: 2) https://money.benck.tw/
Connecting to money.benck.tw|97.107.135.68|:443...
However, I can download other HTTPS sites just fine, for example: https://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js
It's very weird.
For this website you have to use the --no-check-certificate option:
wget --no-check-certificate https://money.benck.tw
I'm experiencing the same issue. I'm trying to download files from an external site, e.g. https://downloads.wordpress.org/plugin/easy-wp-smtp.zip, and wget with --no-check-certificate still doesn't work. It freezes on this line:
Connecting to downloads.wordpress.org (downloads.wordpress.org)|198.143.164.250|:443...
Anyone have the same issue?
There are no iptables rules configured. When I do this from another server on the same network it works fine; it only happens on this particular server.
Regards,
Francisco Yu
This is probably because the page gets scraped by wget too often and the site blocks it. You need to modify the request headers, especially the User-Agent (see the wget sketch after the header dump below).
Here is an example from another website, where --no-check-certificate does not help:
wget --no-check-certificate "https://www.money.pl/pieniadze/depozyty/walutowearch/1921-02-05,2021-02-05,LIBORCHF3M,strona,1.html"
--2021-02-05 17:05:34-- https://www.money.pl/pieniadze/depozyty/walutowearch/1921-02-05,2021-02-05,LIBORCHF3M,strona,1.html
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving www.money.pl (www.money.pl)... 212.77.101.20
Connecting to www.money.pl (www.money.pl)|212.77.101.20|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-02-05 17:05:34 ERROR 403: Forbidden.
but another download tool that sends different headers works:
http -h "https://www.money.pl/pieniadze/depozyty/walutowearch/1921-02-05,2021-02-05,LIBORCHF3M,strona,1.html"
HTTP/1.1 200 OK
Cache-control: max-age=60, public,stale-while-revalidate=5
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 20756
Content-Security-Policy: upgrade-insecure-requests;
Content-Type: text/html; charset=iso-8859-2
Date: Fri, 05 Feb 2021 16:04:16 GMT
Link: <https://money.wp.pl/dGxwOTV0SyYZFTlneUtGM1pNbSY9EkhlJ1V1dglvOxgnKBALCW87GCcoEAsJbzsYJygQCwlvOxgnKBALCW87GCcoEAsJbzsYJygQCwlvOxgnKBALCW87GCcoEAsJbzsYJygQCwlvOxgnKBALCW87GCcoEAsJbzsYJygQCwlvOxgnKBALCW87GCcobXh0RUZ9WlgoNTAeDjRHBTlpZxYWIhMeKydrAld1TER2ciZYECoUSjgjIR4JKBYSNnomXEF1TUUJJD9VCi4ZEzUxcwJRdT4TKiQ5Sh0zAVJ9YWR2EyYUAjs7IVUFNRsfamZjAiJ2QUV-eWYCSXdNUn1hZHNWd0pGYmRkHVRyXUV6ZhV8LQU3JQwcEAMpYkpCfRclRBYoFhZqZmMCJ3ZWHzs5OhY0EDkoLjA0VFl1XgQ_PTgNKRMbQgIuB0lCIRQEOzUiWQB6XhYrIgVcCzMLSn9lZhYHJBkDKjM5Qh16DxYjISJJRjo=>;rel="preload";as="script";
Server: nginx
Set-Cookie: mny_ver2=v8c;Domain=.money.pl;Path=/;Max-Age=2592000;
Vary: Accept-Encoding
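A sketch of the header tweak meant above - overriding wget's default User-Agent with a browser-like string (the exact string is just an example; it is not specific to this site):
wget --no-check-certificate \
--user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0" \
"https://www.money.pl/pieniadze/depozyty/walutowearch/1921-02-05,2021-02-05,LIBORCHF3M,strona,1.html"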

Directory slash redirects? Does this still happen?

I was reading an article referenced by Jeff Atwood about Yahoo's "Best Practices" for speeding up a website, and I noticed this little gem:
One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you're using Apache handlers.
Does this still happen? The article is pretty old, as the web goes. I think I've been doing this for years. I don't think I've noticed this happening lately, but then again I've never really looked. Is this an Apache thing? Does IIS 7 do this?
I'm scared. Hold me.
Try it!
Here are some truncated responses from requests run in the terminal.
curl -I http://astrology.yahoo.com/astrology
HTTP/1.0 301 Moved Permanently
Date: Tue, 21 Jun 2011 13:24:24 GMT
Location: http://shine.yahoo.com/astrology/
curl -I http://wordpress.org/extend
HTTP/1.0 301 Moved Permanently
Server: nginx
Date: Tue, 21 Jun 2011 13:26:17 GMT
Location: http://wordpress.org/extend/
Though it seems that IIS does it the other way:
curl -I http://www.iis.net/overview
HTTP/1.0 200 OK
Server: Microsoft-IIS/7.0
curl -I http://www.iis.net/overview/
HTTP/1.0 301 Moved Permanently
Location: http://www.iis.net/overview
I guess it depends on how you have it configured, but it's definitely something worth optimising.
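On the Apache side, the redirect the article describes comes from mod_dir. A minimal sketch of the directive involved, for reference only (assuming a stock Apache 2.x; this is not a suggestion to change it blindly):
# mod_dir issues the 301 that appends the missing trailing slash.
# This is the default. Turning it off suppresses the redirect, but then
# a slash-less directory URL will no longer serve the directory index.
DirectorySlash On
The cheaper fix the article is really after is to write your links with the trailing slash in the first place, so the redirect never fires.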

HTTP server with Ruby

I am trying to make a small HTTP server in Ruby. It's just meant to help me learn how stuff works, nothing big. What I did is send the server an AJAX request. The server is listening on port 2000, so the AJAX request also goes to port 2000.
The problem I am facing is that the response to the AJAX request contains only the headers; the content is missing. I tried everything I could find, but it still fails...
I have attached the code for you to take a look at:
require 'socket'               # Get sockets from stdlib
server = TCPServer.new(2000)   # Socket to listen on port 2000
loop {                         # Servers run forever
  client = server.accept       # Wait for a client to connect
  headers = "HTTP/1.1 200 OK\r\nDate: Tue, 14 Dec 2010 10:48:45 GMT\r\nServer: Ruby\r\nContent-Type: text/html; charset=iso-8859-1\r\n\r\n"
  client.puts headers          # Send the response headers to the client
  client.puts "<html>amit</html>"
  client.close                 # Disconnect from the client
}
The AJAX request works fine when pointed at a PHP script running on Apache; the only problem occurs when using this server.
Any help is as always, deeply appreciated :)
Regards,
Amit
Your code works fine.
$ telnet localhost 2000
HTTP/1.1 200 OK
Date: Tue, 14 Dec 2010 10:48:45 GMT
Server: Ruby
Content-Type: text/html; charset=iso-8859-1
<html>amit</html>
Connection to host lost.
Now you'll have to find out what's wrong with your AJAX request...
I also included the Content-Length header and that seemed to clear up the errors I was getting with curl:
require 'socket'               # Get sockets from stdlib
server = TCPServer.new(2000)   # Socket to listen on port 2000
loop {                         # Servers run forever
  client = server.accept       # Wait for a client to connect
  resp = "<html>amit</html>"
  headers = ["HTTP/1.1 200 OK",
             "Date: Tue, 14 Dec 2010 10:48:45 GMT",
             "Server: Ruby",
             "Content-Type: text/html; charset=iso-8859-1",
             "Content-Length: #{resp.bytesize}\r\n\r\n"].join("\r\n")
  client.print headers         # Send the response headers
  client.print resp            # Send the body (print, so the byte count matches Content-Length)
  client.close                 # Disconnect from the client
}
You're missing the Access-Control-Allow-Origin HTTP header in your response headers.
Since you're making an AJAX request, which is basically an XMLHttpRequest, you need to include this header in the response so that the browser's Cross-Origin Resource Sharing (CORS) checks pass.
Just add in your HTTP server:
headers = "HTTP/1.1 200 OK\r\n"
headers += "Access-Control-Allow-Origin: *\r\n"
headers += "Date: Tue, 14 Dec 2010 10:48:45 GMT\r\nServer: Ruby\r\nContent-Type: text/html; charset=iso-8859-1\r\n\r\n"
and then you can try again to see if it works.
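To sanity-check the response outside the browser, a quick curl against the running server (assuming it is still listening locally on port 2000) will show whether the Content-Length and Access-Control-Allow-Origin headers are actually being sent:
curl -i http://localhost:2000/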

Resources