[Update at the bottom]
The problem is that after each reload of the page, regardless of what I put in the session, the session gets 'reset'.
For example:
session[:test] = "test"
puts session.inspect # => {"test"=>"test"}
(page reload)
puts session.inspect # => {"csrf"=>"b400efd6.....2362bd", "tracking"=>{"HTTP_USER_AGENT"=>"12a007....b", "HTTP_ACCEPT_LANGUAGE"=>"...."}}
In my nginx configuration I have:
proxy_pass http://localhost:9292;
In my main.rb file I have:
configure do
  (...)
  set :sessions, key: '1234567',
                 path: '/',
                 expire_after: 86400,
                 secret: '7654321'
  (...)
end
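(For reference: passing a hash to set :sessions just forwards those options to Rack::Session::Cookie, so the explicit middleware form, with the conventional cookie name 'rack.session' substituted for my real values, would be:)
use Rack::Session::Cookie, key: 'rack.session',          # cookie name (not a secret)
                           path: '/',
                           expire_after: 86400,          # one day, in seconds
                           secret: 'a-long-random-secret' # placeholder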
[Update]
It turns out that the response does not contain a Set-Cookie header with the session id. Currently I am using CF -> NGINX -> Thin/Sinatra. Everything works fine without nginx.
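(For comparison: nginx does not drop the Set-Cookie header by default, so a bare passthrough block like the sketch below should let it through; a proxy_hide_header directive or a caching layer in front, such as CF, is the usual place it gets lost. The upstream port matches the proxy_pass above.)
location / {
    proxy_pass http://localhost:9292;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}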
I have a service named alpha (created using python-django) that runs on http://127.0.0.1:9000 and has these two endpoints:
/health returns {"health": "OK"} status 200
/codes/<str:code> returns {"code": code} status 200
I also have a Kong API gateway in DB-less declarative mode that runs on localhost port 80.
In kong.yaml I have two services:
services:
- name: local-alpha-health
  url: http://host.docker.internal:9000/health
  routes:
  - name: local-alpha-health
    methods:
    - GET
    paths:
    - /alpha/health
    strip_path: true
- name: local-alpha-code
  url: http://host.docker.internal:9000/code/ # HOW TO WRITE THIS PART???
  routes:
  - name: local-alpha-code
    methods:
    - GET
    paths:
    - /alpha/code/(?<appcode>\d+) # Is this right???
    strip_path: true
If I send a GET request to http://127.0.0.1/alpha/health, it returns {"health": "OK"} status 200, which shows Kong is working.
I want to send a request such as http://127.0.0.1/alpha/code/123 and receive {"code": 123} status 200, but I don't know how to set up the kong.yaml file to do this. If I send a request to http://127.0.0.1/alpha/code/123, I get a 404 (from the alpha Django application), which means Kong is routing the request to the alpha service; but if I send a request to http://127.0.0.1/alpha/code/abc, I get {"message": "no Route matched with those values"}, which shows the regex is working.
I could do this:
services:
- name: local-alpha-health
  url: http://host.docker.internal:9000/
  routes:
  - name: local-alpha-health
    methods:
    - GET
    paths:
    - /alpha
    strip_path: true
Then a request sent to http://127.0.0.1/alpha/code/123 would go to http://127.0.0.1:9000/code/123, but I cannot control the path with a regex.
Any idea how to route requests to a dynamic endpoint on the Kong API gateway?
This content seems related, but I cannot figure out how to set it up:
https://docs.konghq.com/gateway-oss/2.5.x/proxy/
Note that a request like http://127.0.0.1/alpha/code/abc will indeed not match the rule you have added, because of the \d+ (which matches one or more digits). Also, http://127.0.0.1/alpha/code/123 will reach the upstream as a request to /, since you have strip_path set to true.
I have tested your example with some minor tweaks to proxy to a local httpbin service, which has a similar endpoint (/status/<code>).
Start a local httpbin service:
$ docker run --rm -d -p "8080:80" kennethreitz/httpbin
Start Kong with the following config:
_format_version: "2.1"
services:
- name: local-alpha-code
  url: http://localhost:8080
  routes:
  - name: local-mockbin-status
    methods:
    - GET
    paths:
    - /status/(?<appcode>\d+)
    strip_path: false
Note that strip_path is set to false, so the entire matching path is proxied to the upstream.
Test it out with:
$ http :8000/status/200
HTTP/1.1 200 OK
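Transferring this back to the alpha service from the question, a sketch along the same lines (assuming the upstream really serves /codes/<code>, as listed at the top of the question) would be:
_format_version: "2.1"
services:
- name: local-alpha-code
  url: http://host.docker.internal:9000
  routes:
  - name: local-alpha-code
    methods:
    - GET
    paths:
    - /codes/(?<appcode>\d+)
    strip_path: false
With strip_path: false the gateway path has to mirror the upstream path, so the /alpha prefix is dropped here; keeping the prefix while proxying to a different upstream path would need additional path rewriting on top of plain route matching.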
I have tried all the gems I can find on Google and Stack Overflow, but they all seem to be outdated and unmaintained, so what is the simplest way to invalidate a CloudFront distribution from Ruby?
Here's the little script we ended up using to invalidate the entire cache:
require 'aws-sdk-cloudfront'
require 'date' # for DateTime

cf = Aws::CloudFront::Client.new(
  access_key_id: ENV['FOG_AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['FOG_AWS_SECRET_ACCESS_KEY'],
  region: ENV['FOG_REGION']
)

resp = cf.create_invalidation({
  distribution_id: ENV['FOG_DISTRIBUTION_ID'], # required
  invalidation_batch: { # required
    paths: { # required
      quantity: 1, # required
      items: ["/*"], # invalidate everything
    },
    caller_reference: DateTime.now.to_s, # required, must be unique per request
  },
})

if resp.is_a?(Seahorse::Client::Response)
  puts "Invalidation #{resp.invalidation.id} has been created. Please wait about 60 seconds for it to finish."
else
  puts "ERROR"
end
https://rubygems.org/gems/aws-sdk
Specifically, the CloudFront module:
https://docs.aws.amazon.com/sdkforruby/api/Aws/CloudFront.html
This should give you full control of your CloudFront resources from Ruby, provided you have the correct IAM roles etc. set up.
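On the IAM side, a minimal policy sketch covering creating and checking invalidations could look like this (account and distribution IDs are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation",
        "cloudfront:GetInvalidation",
        "cloudfront:ListInvalidations"
      ],
      "Resource": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID"
    }
  ]
}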
I have installed Elasticsearch 2.1.0 and Kibana 4.3.0 on a single machine.
kibana.yml configuration:
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601
# The host to bind the server to.
server.host: "IP"
# A value to use as a XSRF token. This token is sent back to the server on each request
# and required if you want to execute requests from other clients (like curl).
# server.xsrf.token: ""
# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://IP:9200/"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"
# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
elasticsearch.ssl.verify: true
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000
# Set the path to where you would like the process id file to be created.
pid.file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
logging.dest: /var/log/kibana/kibana.log
# Set this to true to suppress all logging output.
# logging.silent: false
# Set this to true to suppress all logging output except for error messages.
# logging.quiet: true
# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: true
When I run curl -i IP:5601, I get this output:
HTTP/1.1 200 OK
x-app-name: kibana
x-app-version: 4.3.0
cache-control: no-cache
content-type: text/html
content-length: 217
accept-ranges: bytes
Date: Wed, 20 Jan 2016 15:28:35 GMT
Connection: keep-alive
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}
</script>
Elasticsearch and Kibana are both up and running, but I am still not able to access the Kibana GUI from the browser; the page does not display.
I checked the configuration in elasticsearch.yml too; the host and IP are correct there. curl gives this output for Elasticsearch [command: curl http://IP:9200/]:
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
Could anybody tell me what the issue could be?
Did you install Elasticsearch and Kibana on your local machine, I mean the laptop or computer that you are working on? Or is it running on a separate server?
If you are running it on the same machine that you are browsing from, then you can just access it as localhost:port.
As your error includes the status
"Elasticsearch is still initializing the kibana index", I would recommend you try the steps mentioned on this page:
Elasticsearch is still initializing the kibana index
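Before that, it is worth checking from the Kibana host that Elasticsearch is reachable and what state the .kibana index is in, for example (IP is the same placeholder as in the question):
$ curl http://IP:9200/_cluster/health?pretty
$ curl http://IP:9200/_cat/indices/.kibana?v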
I am having a struggle getting this to work, so I've created a hello-world Rails app to try to get it working.
Here's the repo with the code that is not working: https://github.com/pitosalas/shibtry
Here's what I've done starting from an empty Rails application:
I've added two gems to my Gemfile:
gem 'omniauth-shibboleth'
gem 'rack-saml'
I got the Shibboleth metadata from my university's web site and converted it using shib_conv.rb into the corresponding YAML: ./config.yml
I've updated routes, adding get '/auth/:provider/callback', to: 'sessions#create'
I've put a breakpoint at SessionController#create
I've added the initializer omniauth.rb:
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :shibboleth, {
    :shib_session_id_field => "Shib-Session-ID",
    :shib_application_id_field => "Shib-Application-ID",
    :debug => true,
    :extra_fields => [
      :"unscoped-affiliation",
      :entitlement
    ]
  }
end
I've added the rack_saml.rb initializer:
Rails.application.config.middleware.insert_after Rack::ETag, Rack::Saml,
  { :metadata => "#{Rails.root}/config/metadata.yml" }
Now, when I run the server and go to http://0.0.0.0:3000/auth/shibboleth, I get an error:
undefined method `[]' for nil:NilClass
which traces back to line 13 of rack-saml/misc/onelogin_setting.rb:
settings.idp_sso_target_url = @metadata['saml2_http_redirect']
in other words, a lookup of that key in the metadata hash. The key is present in my metadata.yml file, but by the time execution reaches line 13, @metadata is nil (it should contain the contents of the file), so the lookup fails.
And that's where, for now, the trail dries up.
I bypassed Shibboleth totally. My goal was to allow login to my university's authentication system, specifically to allow students to log in with their student login, which is fronted by Google Apps. So this was much easier: https://developers.google.com/identity/sign-in/web/
Looks like you forgot to add your config file to the initializer:
Rails.application.config.middleware.insert_after Rack::ETag, Rack::Saml,
  {
    :metadata => "#{Rails.root}/config/metadata.yml",
    :config => "#{Rails.root}/config/rack-saml.yml"
  }
And the saml_idp setting in the rack-saml.yml must match the key for the idp_lists entry in your metadata.yml
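To illustrate the pairing (my_idp is a made-up key; the rest of the entry comes from the shib_conv.rb output, of which only the field mentioned above is shown):
# config/rack-saml.yml
saml_idp: my_idp

# config/metadata.yml
idp_lists:
  my_idp:
    saml2_http_redirect: https://idp.example.edu/idp/profile/SAML2/Redirect/SSO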
I have a YAML file that stores the OAuth::AccessToken value returned by authenticating with the oauth gem. I read this file back in to save myself authenticating each time.
:access_token: !ruby/object:OAuth::AccessToken
  token: 0fXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  secret: eXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  consumer: !ruby/object:OAuth::Consumer
    key: 2aXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    secret: 181XXXXXXXXXXXXXXXXXXXXX
    options:
      :signature_method: HMAC-SHA1
      :request_token_path: /oauth/request_token/
      :authorize_path: /oauth/authorize
      :access_token_path: /oauth/access_token/
      :proxy:
      :scheme: :header
      :http_method: :get
      :oauth_version: '1.0'
      :site: http://api.mendeley.com
    http_method: :get
    http: !ruby/object:Net::HTTP
      address: api.mendeley.com
      port: 80
      curr_http_version: '1.1'
      no_keepalive_server: false
      close_on_empty_response: false
      socket:
      started: false
      open_timeout: 30
      read_timeout: 30
      continue_timeout:
      debug_output:
      use_ssl: false
      ssl_context:
      enable_post_connection_check: true
      compression:
      sspi_enabled: false
      ssl_version:
      key:
      cert:
      ca_file: /etc/ssl/certs/ca-certificates.crt
      ca_path:
      cert_store:
      ciphers:
      verify_mode: 1
      verify_callback:
      verify_depth: 5
      ssl_timeout:
  params:
    :oauth_token: 0fXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    oauth_token: 0XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    :oauth_token_secret: efXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    oauth_token_secret: eXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
When I read this file in using the yaml gem, everything works fine. But I'm using Jekyll and have to read it in with the safe_yaml gem, and even though the YAML appears to correctly declare the class, when I do:
auth_contents = YAML::load(File.open("auth.yaml"))
$access_token = auth_contents[":access_token"]
I get $access_token back as a hash; the class declaration is lost. This means that of course I cannot apply methods like $access_token.get. How can I work around this? Is there any way to persuade Ruby to recognize the correct class?
First of all: make sure that you actually want to load in the class. It appears that you control the YAML file, but if for some reason it is loaded from somewhere you don't trust, you probably want to deserialize the hash manually.
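If you go the manual route, the plain hash that safe_yaml returns contains everything needed to rebuild the token with the oauth gem's ordinary constructors; a sketch against the file layout above (the :site value is copied from the consumer options in the file):
require 'oauth'
require 'yaml'

auth = YAML::load(File.open("auth.yaml"))[":access_token"]

consumer = OAuth::Consumer.new(
  auth["consumer"]["key"],
  auth["consumer"]["secret"],
  site: "http://api.mendeley.com" # from the :site option in the file
)
# Rebuild the access token from the stored token/secret pair.
$access_token = OAuth::AccessToken.new(consumer, auth["token"], auth["secret"])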
That said, you can whitelist trusted types with safe_yaml:
SafeYAML.whitelist!(OAuth::AccessToken, OAuth::Consumer, Net::HTTP)
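With the whitelist in place (run it before loading), the same load call returns real objects again; note that every !ruby/object tag in the file, including the nested Net::HTTP, needs to be covered:
SafeYAML.whitelist!(OAuth::AccessToken, OAuth::Consumer, Net::HTTP)

auth_contents = YAML::load(File.open("auth.yaml"))
$access_token = auth_contents[":access_token"]
$access_token.class # => OAuth::AccessToken, so instance methods like .get work again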