I'm basically trying to send an AJAX add-to-cart POST request from the WooCommerce WordPress plugin. The plugin does this by sending form data to a URL that already contains a GET parameter:
jQuery.post('/?wc-ajax=add_to_cart', { product_id: '123', quantity: 1 }, function(response) {...});
This works fine locally, but on the staging server, which is secured by basic htaccess auth:
AuthName "Staging"
AuthUserFile /path/to/.htpasswd
AuthType Basic
Require valid-user
the POST params are completely stripped from the request. That is, when I remove the block above, or whitelist my IP like so:
Order allow,deny
Allow from 1.2.3.4
Satisfy Any
everything works fine.
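(As an aside: Order/Allow/Satisfy are Apache 2.2-style directives that only keep working on 2.4 via mod_access_compat. A 2.4-native way to express "valid user or whitelisted IP" would be roughly the following sketch, with the IP as a placeholder:)
AuthType Basic
AuthName "Staging"
AuthUserFile /path/to/.htpasswd
# grant access if EITHER condition matches
<RequireAny>
    Require valid-user
    Require ip 1.2.3.4
</RequireAny>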
Is there something I'm missing? I could not find any documentation on why it behaves this way.
The site runs on an Apache 2.4 server with PHP 7.1.17; the OS is Debian 8.10.
I am trying to upgrade my old Polymer application to Polymer v3. Nearly everything works fine if I use polymer serve.
But I also have to use some PHP files to connect to the backend, and that is where the problem lies.
When I run the application using polymer serve, the PHP files are not found and return 404 whenever I try to make a POST request to them.
Non-working example, with the following file structure:
|_ phpFile.php
|
|_ jsFile.js
Inside jsFile.js
fetch("phpFile.php", {
method: 'POST',
headers: new Headers({
Accept: 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify({
testData: true
})
}).then(response => {
console.warn(response);
});
When I run the application with XAMPP (virtual host), making a POST request returns exactly what I need, which is great. But imports like this:
import {PolymerElement, html} from '@polymer/polymer/polymer-element.js';
stop working, because there is no bundler that would resolve the bare '@polymer/...' specifier to an actual path. And no, I can't simply rewrite it to '/node_modules/@polymer/polymer/polymer-element.js': all of the Polymer elements use this notation, so I would have to rewrite all of their sources, which is unreasonable.
I need to either make successful POST requests to PHP files while serving with polymer serve, or resolve the '@' imports while serving from localhost (XAMPP or any other service).
Has anyone successfully implemented a connection to a PHP file in Polymer 3?
Or does anyone know a workaround or solution for this?
If you are using PHP for your APIs, then create a separate application and run it on XAMPP.
In short, create two different applications: one for the back end (the PHP app) and another for the front end (the Polymer app).
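A minimal sketch of what the front-end call could look like with that split, assuming the PHP app is reachable at http://localhost:8080 (a hypothetical host/port); since polymer serve runs on a different origin, the PHP app would also have to send CORS headers:
// hypothetical absolute URL of the separate PHP application
fetch('http://localhost:8080/phpFile.php', {
    method: 'POST',
    headers: new Headers({
        Accept: 'application/json',
        'Content-Type': 'application/json'
    }),
    body: JSON.stringify({ testData: true })
}).then(response => response.json())
  .then(data => console.log(data));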
I am trying to run my local copy of my Yii2 site over HTTPS.
I use this in the config to force HTTP URLs to HTTPS:
'on beforeRequest' => function ($event) {
    if (!Yii::$app->request->isSecureConnection) {
        $url = Yii::$app->request->getAbsoluteUrl();
        $url = str_replace('http:', 'https:', $url);
        Yii::$app->getResponse()->redirect($url);
        Yii::$app->end();
    }
},
The only url I can reach is the home page i.e. a bare url such as
example.ext
Other URLs give
Not Found The requested URL /site/index was not found on this server.
When I remove the 'on beforeRequest' handler from the config, I can reach every HTTP URL.
Question: why do the HTTPS URLs become unreachable?
Eventually I figured out that there was no URL rewrite for pretty URLs in the virtual host listening on port 443.
Adding the recommended rewrite rules to it solved the problem.
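For reference, the rules in question are the standard Yii2 pretty-URL rewrite block; inside the *:443 virtual host (or the <Directory> section for its document root) it would look roughly like this sketch:
RewriteEngine on
# if the requested file or directory does not exist, route the request to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . index.php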
@stfsngue: Thank you for the comment.
Do you see any particular reason for preferring .htaccess over 'on beforeRequest' to force HTTPS?
I want to develop a monitoring web app for different things with AngularJS as the frontend. One of the core elements is showing an overview of Nexus artifacts/repositories.
When I request the REST API, I get the following error back:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:9090' is therefore not allowed access.
To fix this error, I need to modify the response headers to enable CORS.
It would be great if anyone familiar with this type of problem could give me an answer!
The CORS headers have to be present in the response of the system you are trying to invoke. (They are checked on the client side, i.e. in the browser in this case; you could implement the calls on your own backend and ignore those headers there, but that could become quite hard to maintain.) To add those headers you'll need a proxy, so your application will not call the URL directly like
fetch("http://localhost:9090/api/sometest")
There are at least two ways. One is to add a proxy directly in front of the Nexus server and modify the headers for everyone; I do not really recommend this, for security reasons. :)
The other, more maintainable solution is to go through the local domain of the monitoring web app, as follows:
fetch("/proxy/nexus/api/sometest")
To achieve this you need to set up a proxy where your application is running. It can map the different services you depend on and modify the headers if necessary.
I do not know which HTTP server your application is going to use, but here is some proxy configuration documentation on the topic:
For Apache HTTPD mod_proxy you could use a configuration similar to this:
ProxyPass "/proxy/nexus/" "http://localhost:9090/"
ProxyPassReverse "/proxy/nexus/" "http://localhost:9090/"
It may also be necessary to handle cookies, so you might need to take a look at the following configuration directives:
ProxyPassReverseCookiePath
ProxyPassReverseCookieDomain
For an Nginx location block you could use something like the following:
location /proxy/nexus/ {
proxy_pass http://localhost:9090/;
}
For node.js see documentation: https://github.com/nodejitsu/node-http-proxy
// assuming http-proxy (the node-http-proxy package linked above);
// streamify() stands in for a helper that turns the already-consumed
// request body back into a stream for the proxy's `buffer` option
var httpProxy = require('http-proxy');
var proxy = httpProxy.createProxyServer({});

module.exports = (req, res, next) => {
    proxy.web(req, res, {
        target: 'http://localhost:4003/',
        buffer: streamify(req.rawBody)
    }, next);
};
I'm building my first CouchApp (a simple blogging engine) in order to learn more about it. Now, I have it working to the point that the following URL returns blog posts:
http://127.0.0.1:5984/couchblog/_design/couchblog/_list/index/posts
I have a view called posts that returns my posts, and a list called index that renders the posts. So I figured my next step was to rewrite the URLs to something a bit friendlier. Unfortunately the documentation on URL rewriting seems a tad vague, and I just can't seem to get anything to work.
The rewrite section of my design document looks like this:
rewrites: [{
from: '../../../',
to: '/_list/index/posts',
method: 'GET',
query: ''
}],
I'd like to rewrite it so that it serves the list of blog posts from the web server root, but I just can't seem to get anywhere with it. Can anyone see what I'm doing wrong? I'm using CouchDB 1.6.0 on OS X Snow Leopard via Homebrew.
I'd like to rewrite it so that it serves the list of blog posts from the web server root
I think you need to configure your vhosts settings in the couchdb config for that. This is covered pretty well in the vhosts section so I will just post the relevant part here:-
To add a virtual host, add a CNAME pointer to the DNS for your domain name. For development and testing, it is sufficient to add an entry in the hosts file, typically /etc/hosts on Unix-like operating systems:
# CouchDB vhost definitions, refer to local.ini for further details
127.0.0.1 couchdb.local
Test that this is working:
$ ping couchdb.local
PING couchdb.local (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_req=1 ttl=64 time=0.025 ms
64 bytes from localhost (127.0.0.1): icmp_req=2 ttl=64 time=0.051 ms
Finally, add an entry to your configuration file in the [vhosts]* section:
[vhosts]
couchdb.local:5984 = /example
*.couchdb.local:5984 = /example
If your CouchDB is listening on the default HTTP port (80), or is sitting behind a proxy, then you don't need to specify a port number in the vhost key.
*By the way, you can do this from Futon as well. Just make sure to restart CouchDB after you have configured your vhosts section; otherwise the changes will have no effect.
For our case however we need to map the vhosts section to the rewrite handler on our database. So our vhosts will look something like this:-
couchdb.local:5984 = your-db/_design/your-design/_rewrite
Modify your rewrites handler as well:
rewrites: [{
from: 'index',
to: '/_list/index/posts',
method: 'GET',
query: ''
}]
Now if you issue a request to
couchdb.local:5984/index
You should see a list of posts.
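If you want the list served from the vhost root itself (which is what the question asks for), a rule with from: '/' should do it; this is an untested sketch that only changes the from field of the rule above:
rewrites: [{
    from: '/',
    to: '/_list/index/posts',
    method: 'GET',
    query: ''
}]
With the [vhosts] entry above, a request to couchdb.local:5984/ would then be rewritten to the list function.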
I am working on an application that uses two servers hosted on the same machine: Apache, which serves the PHP pages, and Node.js, which provides the REST API. The whole application is built on Backbone/Marionette/RequireJS/Bootstrap.
Coming to the point: my normal pages are loaded from the Apache server, e.g.
http://192.168.20.62/project/design.php
and I have configured my model like this:
define(['backbone'], function (Backbone) {
    'use strict';
    return Backbone.Model.extend({
        url: "http://192.168.20.62:9847/page",
        defaults: {
            ...
        }
    });
});
When I try to save the model I run into the cross-domain AJAX problem, and the save request ends in an error. Following is the node/express server:
var express = require('/root/node_modules/express');
var app = express();
app.configure(function () {
    app.use(express.json());
    app.use(express.urlencoded());
    app.use(function (req, res, next) {
        res.setHeader('Access-Control-Allow-Origin', 'http://192.168.20.62:9847');
        res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
        res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');
        res.setHeader('Access-Control-Allow-Credentials', true);
        next();
    });
});

app.post('/page', function (request, response) {
    console.log(request.body);
    response.send(request.body);
});

app.listen(9847);
As you can see, I have already added some CORS patching in the server code, but the result is the same. I have also added an .htaccess at the root level of both
http://192.168.20.62
and
http://192.168.20.62:9847
with the following code:
Header add Access-Control-Allow-Origin "*"
Header add Access-Control-Allow-Headers "origin, x-requested-with, content-type"
Header add Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS"
but nothing helps. If I run Chrome with web security disabled, then things work properly:
chrome.exe --disable-web-security
Can you guys please help me solve this puzzle? Thanks in advance.
Following is the error message from the Chrome JavaScript console:
OPTIONS http://192.168.20.62:9847/page Origin http://192.168.20.62 is not allowed by Access-Control-Allow-Origin. jquery-2.0.3.min.js:6
XMLHttpRequest cannot load http://192.168.20.62:9847/page. Origin http://192.168.20.62 is not allowed by Access-Control-Allow-Origin.
Oops, I figured out the problem: I had set the wrong URL for whitelisting.
res.setHeader('Access-Control-Allow-Origin', 'http://192.168.20.62');
I have to whitelist the source URL that was generating the cross-domain request; this was the only change I had to make to get things running.
The browser version also matters: when I posted this issue I was testing with Firefox 24.0.*, and after upgrading to 25.0 it surprisingly stopped raising the cross-domain error. Chrome, however, kept giving the cross-domain error, and when I read Chrome's error message carefully I realized that I had whitelisted the wrong URL:
XMLHttpRequest cannot load http://192.168.20.62:9847/page. The 'Access-Control-Allow-Origin' whitelists only 'http://192.168.20.62:9847'. Origin 'http://192.168.20.62' is not in the list, and is therefore not allowed access.
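For completeness, a minimal sketch of the corrected middleware, whitelisting the page's origin rather than the API's own origin; the explicit OPTIONS preflight handling is an addition of mine, not part of the original setup:
app.use(function (req, res, next) {
    // whitelist the origin the page is served from, not the API's own origin
    res.setHeader('Access-Control-Allow-Origin', 'http://192.168.20.62');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
    res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
    res.setHeader('Access-Control-Allow-Credentials', 'true');
    // end the OPTIONS preflight here so it does not fall through to a 404
    if (req.method === 'OPTIONS') {
        res.statusCode = 200;
        return res.end();
    }
    next();
});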
If you don't have to deal with WebSockets, then place Node.js behind Apache via mod_proxy:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule rewrite_module modules/mod_rewrite.so
# disable use as forward proxy
ProxyRequests Off
# don't handle Via: headers - we don't care about them
ProxyVia Off
# no need to transport host name - not doing virtual hosting
ProxyPreserveHost Off
ProxyPass /page/ http://192.168.20.62:9847/page/
ProxyPassReverse /page/ http://192.168.20.62:9847/page/
# If you get HTTP status code 502 (Bad Gateway), maybe this could help
SetEnv force-proxy-request-1.0 1
SetEnv proxy-nokeepalive 1
In your web page you can then access your API on the same port as Apache, and you won't have any problems with cross-domain requests / the same-origin policy.
You just have to separate your URIs now, e.g. /page/ exists only in the node.js app.
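With the proxy in place, the Backbone model can then use a same-origin relative URL instead of the absolute cross-origin one; a sketch based on the model definition above:
define(['backbone'], function (Backbone) {
    'use strict';
    return Backbone.Model.extend({
        // relative URL: Apache proxies /page/ to the node.js app on port 9847
        url: '/page/',
        defaults: {
            // ...
        }
    });
});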
If you're serving REST only (no HTML/CSS/images) via node.js, using JSONP would be another option; there are good, short descriptions of how to handle JSONP in Express.
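As a rough sketch of the JSONP route (assuming Express, which wraps the response in the callback named by the ?callback= query parameter via res.jsonp(); note that JSONP only works for GET requests, so the POST-based save above would still need the proxy or CORS):
// GET http://192.168.20.62:9847/page?callback=handle  ->  handle({"status":"ok"})
app.get('/page', function (req, res) {
    res.jsonp({ status: 'ok' });
});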