One of my clients has received an email saying that PayPal is upgrading its service. When I did some research, I found a similar thread on the Magento forum, but no one has replied so far. Following is the link to that thread.
PAYPAL SERVICE UPGRADES
So my question is: what modifications do I have to make to the current configuration, or are major changes required in the current Magento payment gateway code?
Any help or suggestion is appreciated.
The major thrust of the quoted document is "don't use hard-coded IP addresses".
Magento uses the server name, so DNS will resolve automatically to the newly assigned netblocks.
Your hosting provider may have to modify their firewalls if they're filtering traffic, but that's unlikely. They also need to be running a web server that uses HTTP 1.1.
Viewing your web server's response headers will tell you that; look for HTTP/1.1 200 OK.
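If you'd rather script that check, here is a minimal sketch using Python's standard library (www.example.com is a placeholder for your store's hostname):

```python
# Sketch: check which HTTP version and status line your server answers with.
# "www.example.com" is a placeholder; use your own store's hostname.
import http.client

conn = http.client.HTTPSConnection("www.example.com", timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()

# resp.version is 11 for HTTP/1.1 and 10 for HTTP/1.0
print(f"HTTP/{resp.version / 10:.1f} {resp.status} {resp.reason}")
conn.close()
```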
Quoted PayPal notice of service upgrade
If your site is:
Calling our APIs with a hardcoded PayPal API endpoint IP address, rather than using DNS resolution: Impact of upgrade: API calls will time out or you will encounter an internal error from your system.
You need to: Use DNS resolution to access our API endpoints and/or open your firewall to the new IP addresses which will be communicated later.
Using HTTP methods other than GET, POST, DELETE and PUT: Impact of upgrade: API calls will return HTTP/1.0 400 Bad Request or HTTP Error 405 Method not allowed.
You need to: Send the API requests using one of the allowed methods. Heartbeat calls using the HEAD method won’t be allowed.
Using the HTTP 1.0 protocol: Impact of upgrade: API calls will return HTTP/1.0 400 Bad Request.
You need to: Update your code to HTTP 1.1 and include the Host header in the API request.
Needing firewall changes to allow new IP addresses: Impact of upgrade: API calls will error out if your system responsible for making API calls to PayPal is behind a firewall that uses Access Control List (ACL) rules and limits outbound traffic to a limited number of IP addresses.
You need to: Update your firewall ACL to allow outbound access to a new set of IP addresses we will be publishing. Test your integration on Sandbox (see the IP addresses for Sandbox API endpoints). The list of new IP addresses for our Live API endpoints will be posted here when available in January.
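To illustrate what the notice asks for, here is a minimal sketch of a DNS-resolved HTTP/1.1 call; the endpoint hostname below is an example, so substitute whichever PayPal endpoint your integration actually uses:

```python
# Sketch: reach a PayPal API endpoint via DNS resolution over HTTP/1.1.
# http.client resolves the hostname through DNS on every connection and
# always sends a Host header, which is what the notice requires.
import socket
import http.client

host = "api.paypal.com"  # example endpoint; use the one your integration calls

# The address DNS returns can change at any time, so never pin it in code.
print("resolves today to:", socket.gethostbyname(host))

conn = http.client.HTTPSConnection(host, timeout=10)
conn.request("GET", "/")  # HTTP/1.1, Host header included, allowed method
resp = conn.getresponse()
print(f"HTTP/{resp.version / 10:.1f} {resp.status} {resp.reason}")
conn.close()
```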
I'm new to JMeter, so I am not sure if my setup is correct.
Basically I have a set of APIs that I need to performance test. Starting with setting up a basic connection from JMeter, I am receiving a 1020 error from Cloudflare:
Access denied | "domain" used Cloudflare to restrict access
and
<div class="cf-alert cf-alert-error cf-cookie-error hidden" id="cookie-alert" data-translate="enable_cookies">Please enable cookies.</div>
It works with Postman, so I'm wondering what changes I'll need in JMeter.
I have enabled cookie saving in the jmeter.properties file.
The API is for logging into a portal, with a verified username/password. The VPN connection is verified, as this works from Postman.
If you're absolutely sure that the request works in Postman (although I'm getting this 1020 error even with a real browser), you should be able to get the same behavior in JMeter as well; just make sure to send an identical request (pay attention to the HTTP headers as well).
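For comparison outside both tools, a minimal sketch of what sending an "identical request" means; the URL, credentials, and header values below are placeholders, so copy the real ones from Postman:

```python
# Sketch: replay the exact request Postman sends, headers included.
# Everything below is a placeholder; copy the real URL, body, and headers
# from Postman's code-export / console view.
import requests

headers = {
    "User-Agent": "PostmanRuntime/7.29.0",  # Cloudflare often keys on this
    "Accept": "*/*",
    "Content-Type": "application/json",
}
resp = requests.post(
    "https://portal.example.com/api/login",
    json={"username": "user", "password": "secret"},
    headers=headers,
    timeout=30,
)
# Cloudflare's 1020 page is served as HTTP 403 with an error body
print(resp.status_code, resp.text[:200])
```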
The easiest option is to record your Postman request using JMeter's HTTP(S) Test Script Recorder: configure Postman to use JMeter as the proxy and run your request, and JMeter will generate the appropriate HTTP Request sampler and HTTP Header Manager.
If you need to use a VPN for proper access, you might need to configure the source IP address in the "Advanced" tab of the HTTP Request sampler, as described in the Using IP Spoofing to Simulate Requests from Different IP Addresses with JMeter article.
In any case, load testing an API behind Cloudflare might not be the best idea: Cloudflare provides DDoS protection and may (and will) block this type of traffic, so you need to either whitelist your IP address(es) or let them know about your load testing activities. I believe they will be able to suggest a better workaround than anyone here.
This is related to the security features of Cloudflare, either DDoS protection or bot blocking. Exceptions can be configured from the Cloudflare control panel.
If you don't have access to this panel, you'll have to ask the person inside your company tasked with this job.
I am trying to build a SOCKS solution for a forward proxy. I am using the Dante SOCKS proxy, as I have heard that big companies like Google use it as a forward-proxy solution.
On the SOCKS server, I am allowing traffic based on FQDNs, like google.com:443.
Now the problem is that when the client constructs the packet, it resolves google.com, gets X.X.X.X, and sends a connect request to the SOCKS server. When the server receives the packet and reconstructs it to send out to the internet, it does its own DNS resolution; if the server gets Y.Y.Y.Y as the answer, it denies the client's request, because the destination IP in the client's request differs from the IP the server resolved.
There is a workaround in the Dante client that tells the client to put a dummy destination address (0.0.0.1) in the request, which the server then processes properly. However, that creates a problem with internal domains, because after switching to that DNS resolution method every request goes through the Dante server. :(
Please let me know:
If there is any solution that would help me maintain DNS record expiry DC-wide, e.g. google.com resolves to X.X.X.X and I should be able to resolve to this same IP address on hundreds of DNS clients, and if the record changes, it should immediately change/expire on the clients.
Any other proxy/SOCKS solution that is transparent to applications for forward proxying.
I went ahead with the following solution, in case anyone is curious.
I used the PowerDNS Authoritative Server with the Pipe backend. Requests land on the PowerDNS server for resolution, which passes all the query data to the Pipe backend script over its ABI. The script analyzes the request and checks whether the name is present in a cached variable/memory map: on a cache hit it responds using the cached DNS records, otherwise it uses a DNS resolver to resolve the query the way a resolver normally would.
PowerDNS versions lower than 4.1 support the Pipe backend plus a resolver: the request first lands on the Pipe backend script, and if the script has no cached entries it responds blank (or not at all), and PowerDNS then resolves the query with the resolver server mentioned in the configuration. With version 4.1 and above, the resolver part was removed from the PowerDNS Auth server, so you need to handle that behaviour in the Pipe backend script itself.
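For reference, a minimal sketch of what such a Pipe backend script can look like (ABI version 1; the dict-based cache and fallback resolver are simplifications, and a real deployment needs TTL expiry and error logging):

```python
#!/usr/bin/env python3
# Sketch of a PowerDNS Pipe backend (ABI version 1): answer A queries from
# an in-memory cache, falling back to a normal resolver on a cache miss.
import socket
import sys

cache = {}  # qname -> IP address; refresh/expire this out of band

def resolve(qname):
    if qname not in cache:
        cache[qname] = socket.gethostbyname(qname)  # fallback resolver
    return cache[qname]

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if fields[0] == "HELO":
        print("OK\tpipe-cache backend", flush=True)
    elif fields[0] == "Q":
        qname, qclass, qtype, qid = fields[1], fields[2], fields[3], fields[4]
        if qtype in ("A", "ANY"):
            try:
                ip = resolve(qname)
                print(f"DATA\t{qname}\t{qclass}\tA\t60\t{qid}\t{ip}", flush=True)
            except OSError:
                pass  # no answer; just terminate the reply with END
        print("END", flush=True)
    else:
        print("FAIL", flush=True)
```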
It depends on your client. Firefox, for example, sends the hostname to the SOCKS proxy without resolving it. You can confirm that with Wireshark.
PS: this assumes you are using a SOCKS5/4a proxy; SOCKS4 does not support hostnames. Ref: https://en.wikipedia.org/wiki/SOCKS#SOCKS4a
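The same remote-resolution behavior can be requested programmatically; a minimal sketch, assuming the third-party PySocks library and a Dante server on 127.0.0.1:1080 (adjust both to your setup):

```python
# Sketch: let the SOCKS5 proxy resolve the hostname instead of the client.
# Assumes PySocks (pip install PySocks) and a Dante server on 127.0.0.1:1080.
import socks

s = socks.socksocket()
# rdns=True sends the hostname "google.com" to the proxy unresolved, so
# only the server-side DNS lookup happens and the FQDN rule can match.
s.set_proxy(socks.SOCKS5, "127.0.0.1", 1080, rdns=True)
s.connect(("google.com", 443))
print("connected via proxy")
s.close()
```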
Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far it was easy: the button triggered an AJAX POST request to http://printerserver/print.php with a token; that page connected to the web application to verify the token, got the data to print, and then printed.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox don't make the request to the HTTP address anymore; they don't even send the request to check the CORS headers.
Now, what is a modern alternative to the cross-protocol XHR? Do WebSockets suffer from the same problem? (A Google search did not make clear what the current state is here.) Can I use TCP sockets already? I would rather not switch to GET requests either, because the action is not idempotent and it might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print.
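A minimal sketch of that polling approach (the /print-queue endpoint, its JSON shape, and the token are hypothetical):

```python
# Sketch: the print server pulls pending jobs from the web application over
# HTTPS, so no browser-to-printserver request is needed. The queue endpoint,
# token, and JSON format below are hypothetical.
import time
import requests

QUEUE_URL = "https://myapp.example.com/print-queue"  # hypothetical endpoint
TOKEN = "printserver-secret"                         # hypothetical credential

def send_to_printer(job):
    print("printing:", job)  # stand-in for the real print.php logic

while True:
    resp = requests.get(QUEUE_URL,
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=10)
    for job in resp.json():  # assume the queue returns a JSON list of jobs
        send_to_printer(job)
    time.sleep(5)            # poll every few seconds
```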
If that isn't possible, I would set up a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server side instead of from the client. If you use curl, there are flags to ignore invalid SSL certificates etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking).
If the webserver can make an SSH connection to something on the network the printserver is on, you could do something like: ssh <params> user@host <some curl command here>.
A third option I can think of: if the printserver can bind to, for example, a subdomain of the webserver domain, like print.somedomain.com, you may be able to make it trusted by the somedomain.com certificate. IIRC you have to create a CSR (Certificate Signing Request) from the printserver certificate and sign it with the somedomain.com certificate. Perhaps it doesn't even need to be a subdomain for this per se, but maybe that's a requirement for the browser to accept it client-side.
The easiest way is to add a route to the webapp that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that route makes a request to http://printerserver/print.php with the exact same POST content it received itself. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
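A minimal sketch of that relay route, here written with Flask purely for illustration (the real app can use any server-side stack):

```python
# Sketch of the relay route. Flask is only an illustration; the point is
# that the browser talks HTTPS to the webapp, and the webapp talks plain
# HTTP to the print server over the local network.
from flask import Flask, request, Response
import requests

app = Flask(__name__)

@app.route("/print", methods=["POST"])
def relay_print():
    # Forward the browser's POST body unchanged to the internal print server.
    upstream = requests.post(
        "http://printerserver/print.php",
        data=request.get_data(),
        headers={"Content-Type": request.content_type
                 or "application/octet-stream"},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)
```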
Alternatively, you could switch the printserver to https and directly call it from the client.
Note that an insecure (http) WebSocket connection on a secure (https) page probably won't work either. And for good reason: generally it's a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the https webapp can reverse proxy the print server, but since the printer is local to the user, this may not work.
The print server should have the correct CORS headers:
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However, there are pitfalls with using the wildcard.
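A minimal sketch of a print endpoint answering with those headers, including the preflight response (standard library only; note that, per the surrounding answers, modern browsers will still refuse the call unless the endpoint is also served over HTTPS):

```python
# Sketch: a print endpoint that answers with explicit CORS headers and
# handles the OPTIONS preflight. The origin and port are examples; modern
# browsers additionally require the endpoint itself to be HTTPS.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PrintHandler(BaseHTTPRequestHandler):
    def _cors(self):
        self.send_header("Access-Control-Allow-Origin", "https://www.example.com")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):  # CORS preflight
        self.send_response(204)
        self._cors()
        self.end_headers()

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # ... verify the token and send "body" to the printer here ...
        self.send_response(200)
        self._cors()
        self.end_headers()

HTTPServer(("", 8080), PrintHandler).serve_forever()
```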
From what I understand from the question, the printserver is not accessible from the web application, so the reverse proxy solution won't work here.
You are restricted from making requests from the browser to the printserver by the cross-origin policy.
If you wish to communicate with the printserver from an HTTPS page, you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record as a subdomain of your web application that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers, which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs HTTP) or to internal domains without a TLD.
We often find columns like Address and Port in web browser proxy settings. I know that when we use a proxy to visit a page, the web browser requests the web page from the proxy server, but what I want to know is how the whole mechanism works. I have observed that many ISPs allow access only to a single IP (of their website) after we have exhausted our free data usage. But when we enter the site we want to browse as the proxy URL and then type in the allowed IP, the site gets loaded. How does this work?
In general, your browser simply connects to the proxy address & port instead of whatever IP address the DNS name resolved to. It then makes the web request as per normal.
The web proxy reads the headers, uses the "Host" header of HTTP/1.1 to determine where the request is supposed to go, and then makes that request itself relaying all remaining data in both directions.
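To make that concrete, a minimal sketch of what a browser-style forward-proxy request looks like on the wire (the proxy address is a placeholder):

```python
# Sketch: what the browser sends when a proxy is configured. It connects to
# the proxy's address/port and puts the absolute URL in the request line;
# the proxy uses that URL (and the Host header) to reach the real site.
import socket

proxy = socket.create_connection(("proxy.example.com", 8080))  # placeholder
proxy.sendall(
    b"GET http://www.example.com/ HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)
print(proxy.recv(4096).decode(errors="replace"))  # first chunk of the reply
proxy.close()
```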
Proxies will typically also do caching so if another person requests the same page from that proxy, it can just return the previous result. (This is simplified -- caching is a complex topic.)
Since the proxy is in complete control of the connection, it can choose to route the request elsewhere, scrape request and reply data, inject other things (like ads), or block you altogether. Use SSL to protect against this.
Some web proxies are "transparent". They reside on a gateway through which all IP traffic must pass and use the machine's networking stack to redirect outgoing connections to port 80 to a local port instead. It then behaves the same as though a proxy was defined in the browser.
Other proxies, like SOCKS, have a dedicated protocol that allows non-HTTP requests to be made as well.
There are two types of HTTP proxies: reverse proxies and forward proxies.
The web browser uses a forward proxy: basically, it sends all HTTP traffic through the proxy, and the proxy takes this traffic out to the internet. Every HTTP packet that leaves your computer is sent to the proxy before going to the target site.
The ISP's blocking does not work when you use a proxy because every packet that leaves your machine is addressed to the proxy and not to the target site. The proxy could be getting its internet access through another ISP that has no blocks whatsoever.
Here is the thing:
We've implemented a C++ RESTful API server, with a built-in HTTP parser and no standard HTTP server like Apache or anything of the kind.
It has been in use for several months on Amazon infrastructure, using both plain and SSL communications, and no problems related to the Amazon infrastructure have been identified.
We are deploying our first backend using Amazon ELB
Amazon ELB has a customizable health check system but also an automatic one, as stated here.
We've found no documentation of what data is sent by the health check system
The backend simply hangs on the socket read instruction and, eventually, the connection is closed.
I'm not looking for a solution to the problem, since the backend is not based on a standard web server; I just want to know what kind of message is sent by the ELB health check system, since we've found no documentation about this anywhere.
Help is much appreciated. Thank you.
Amazon ELB has a customizable health check system but also an automatic one, as stated here
With customizable you are presumably referring to the health check configurable via the AWS Management Console (see Configure Health Check Settings) or via the API (see ConfigureHealthCheck).
The requirements to pass health checks configured this way are outlined in field Target of the HealthCheck data type documentation:
Specifies the instance being checked. The protocol is either TCP, HTTP, HTTPS, or SSL. The range of valid ports is one (1) through 65535.
Note
TCP is the default, specified as a TCP:port pair, for example "TCP:5000". In this case a health check simply attempts to open a TCP connection to the instance on the specified port. Failure to connect within the configured timeout is considered unhealthy.
SSL is also specified as an SSL:port pair, for example "SSL:5000".
For the HTTP or HTTPS protocols, the situation is different. You have to include a ping path in the string. HTTP is specified as an HTTP:port/PathToPing grouping, for example "HTTP:80/weather/us/wa/seattle". In this case, an HTTP GET request is issued to the instance on the given port and path. Any answer other than "200 OK" within the timeout period is considered unhealthy.
The total length of the HTTP ping target needs to be 1024 16-bit Unicode characters or less.
[emphasis mine]
With automatic you are presumably referring to the health check described in paragraph Cause within Why is the health check URL different from the URL displayed in API and Console?:
In addition to the health check you configure for your load balancer, a second health check is performed by the service to protect against potential side-effects caused by instances being terminated without being deregistered. To perform this check, the load balancer opens a TCP connection on the same port that the health check is configured to use, and then closes the connection after the health check is completed. [emphasis mine]
The paragraph Solution clarifies that no payload is sent here, i.e. this check is similar to the non-HTTP/HTTPS method described for the configurable health check above:
This extra health check does not affect the performance of your application because it is not sending any data to your back-end instances. You cannot disable or turn off this health check.
Summary / Solution
Assuming your RESTful API server with built-in HTTP parser is indeed supposed to serve HTTP only, you will need to handle two health checks:
The first one you configured yourself as an HTTP:port/PathToPing target: you'll receive an HTTP GET request and must answer with 200 OK within the specified timeout period to be considered healthy.
The second one is configured automatically by the service: it opens a TCP connection on the HTTP port configured above, sends no data, and then closes the connection once the health check is completed.
In conclusion it seems that your server might be behaving perfectly fine already and you are just irritated by the 2nd health check's behavior - does ELB actually consider your server to be unhealthy?
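A minimal sketch of a read loop that survives both checks, assuming the health check is configured as HTTP:8080/ping (port and path are examples; adjust to your settings):

```python
# Sketch: tolerate both ELB checks. The configured HTTP check sends a GET
# and expects 200 OK; the automatic check opens and closes a TCP connection
# with no payload, so a zero-byte read means "peer went away", not an error.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(16)

while True:
    conn, _ = srv.accept()
    conn.settimeout(5)        # never block forever on a silent connection
    try:
        data = conn.recv(4096)
        if data.startswith(b"GET /ping"):
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
        # empty data == the automatic TCP check: just fall through and close
    except socket.timeout:
        pass                  # treat a silent connection like the TCP check
    finally:
        conn.close()
```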
As far as I know it's just an HTTP GET request looking for a 200 OK HTTP response.