Sagepay Server Integration 5003 error code on first POST - opayo

I'm attempting to initiate a test transaction with sagepay by POSTing to https://test.sagepay.com/gateway/service/vspform-register.vsp
I've followed the documentation regarding the format such a request should take, I've whitelisted my IP in the portal on the test environment, and I'm using VPSProtocol=3.00 (the two problems I've seen reported to cause this), but I'm still getting a 5003 error.
I've spoken to support on the phone and POSTed to their showpost endpoint (https://test.sagepay.com/showpost/showpost.asp). It doesn't seem to understand any of the details of my POST, despite the body being in Name=Value format separated by &, with the values URL-encoded as the documentation dictates, and all the required fields provided.
I've also tried URL encoding the =s as well as the &s, in case I had misunderstood the documentation in that regard, but it made no difference.
I believe that I must be sending the body incorrectly somehow. I'd appreciate any suggestions anyone can give. The body I'm sending is below:
VPSProtocol=3.00&TxType=PAYMENT&Vendor=anyjunko&VendorTxCode=123&Amount=143.33&Currency=GBP&Description=TODO&NotificationURL=https%3A%2F%2Fstaging-nelly.anyjunk.co.uk%2Fvs%2Fsagepay-transactions%2F1%2Fsagepay-updates&BillingSurname=NameB&BillingFirstnames=NameA&BillingAddress1=1&BillingAddress2=Putney&BillingCity=London&BillingPostCode=SW11%209YZ&BillingCountry=GB&DeliverySurname=NameB&DeliveryFirstnames=NameA&DeliveryAddress1=1&DeliveryAddress2=Putney&DeliveryCity=London&DeliveryPostCode=SW11%209YZ&DeliveryCountry=GB
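For illustration, this is how a typical form-encoding helper builds such a body: encode only the values, never the = and & separators. A JavaScript sketch with just a few of the fields (note that URLSearchParams emits spaces as + rather than %20, which is equally valid application/x-www-form-urlencoded):

// Sketch: build the form body by encoding values only, not the separators.
const params = new URLSearchParams({
  VPSProtocol: "3.00",
  TxType: "PAYMENT",
  Vendor: "anyjunko",
  NotificationURL: "https://staging-nelly.anyjunk.co.uk/vs/sagepay-transactions/1/sagepay-updates",
  BillingPostCode: "SW11 9YZ"
});
// "VPSProtocol=3.00&TxType=PAYMENT&Vendor=anyjunko&NotificationURL=https%3A%2F%2F..."
console.log(params.toString());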
Update: I've now tried this with cURL and it worked correctly, but it doesn't work when I send it from Postman or from my code using the Akka HTTP client.
The cURL command I used was:
curl -X POST "https://test.sagepay.com/gateway/service/vspserver-register.vsp" -d "VPSProtocol=3.00&TxType=PAYMENT&Vendor=anyjunko&VendorTxCode=123&Amount=143.33&Currency=GBP&Description=TODO&NotificationURL=https%3A%2F%2Fstaging-nelly.anyjunk.co.uk%2Fvs%2Fsagepay-transactions%2F1%2Fsagepay-updates&BillingSurname=NameB&BillingFirstnames=NameA&BillingAddress1=1&BillingAddress2=Putney&BillingCity=London&BillingPostCode=SW11%209YZ&BillingCountry=GB&DeliverySurname=NameB&DeliveryFirstnames=NameA&DeliveryAddress1=1&DeliveryAddress2=Putney&DeliveryCity=London&DeliveryPostCode=SW11%209YZ&DeliveryCountry=GB"

There are some requirements missing from the Sage Pay documentation which must be met, otherwise you will get an HTTP 500 Internal Error with code 5003.
Documentation for Server Integration named
"SERVER_Integration_and_Protocol_Guidelines_270815.pdf" is available here:
https://www.sagepay.co.uk/support/find-an-integration-document/server-inframe-integration-documents
First, make sure that you have followed the documentation to the letter; in particular, be careful to include all the mandatory fields and to URL-encode the values of the name=value pairs.
The following requirements are not mentioned in the documentation:
1 IP Addresses
The IP address from which you send your POST to ~/vspserver-register.vsp must be added to the whitelist in your account settings:
Log in to My Sage Pay here https://testportal.sagepay.com/mysagepay/login.msp using your administrator account (usually your vendor name)
Go to "IP Válidas" ("Valid IPs" - sorry, the admin suite is in Spanish for some reason)
Click the [Añadir] ("Add") button in the bottom right
In the dialog that pops up enter your
"Dirección IP" ("IP address") - type "What's my IP" into a Google search if you don't know it
your "Máscara de subred" ("Subnet mask") - 255.255.255.000
"Descripción" ("Description") - any name you want; it has to be unique
Click the [Añadir] ("Add") button
2 Firewall
Make sure your firewall is not blocking ports 80 and 443 (HTTP and SSL) in either direction.
3 HTTPS Protocol
Make sure that you are POSTing to the server using TLS version 1.0 or higher, and that your client can negotiate one of the supported cipher suites. At the time of writing, Sage Pay only supports these:
TLS1-AES-256-CBC-SHA
TLS1-AES-128-CBC-SHA
TLS1-DHE-DSS-RC4-SHA
TLS1-DHE-DSS-AES-256-CBC-SHA
TLS1-DHE-DSS-AES-128-CBC-SHA
TLS1-DHE-RSA-AES-256-CBC-SHA
TLS1-DHE-RSA-AES-128-CBC-SHA
4 HTTP Headers
There is only one HTTP header required. It still works if you pass in extra headers such as "Host" or "Content-Length", but it will fail if you do not provide exactly:
Content-Type: application/x-www-form-urlencoded
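As a minimal sketch of the whole checklist (JavaScript fetch, untested against Sage Pay; the body string is the one from the question, truncated here). Also worth ruling out: some HTTP clients silently append "; charset=UTF-8" to the Content-Type, which may be why a library fails where cURL succeeds.

// Sketch: POST the form body with exactly the one required header.
// Requires Node 18+ (global fetch) or a browser context.
async function registerTransaction(body) {
  const resp = await fetch("https://test.sagepay.com/gateway/service/vspserver-register.vsp", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body // "VPSProtocol=3.00&TxType=PAYMENT&Vendor=..." as in the question
  });
  return resp.text(); // name=value lines, e.g. Status=OK on success
}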

Related

Firefox Okta integration, setting up Agentless DSSO

I am currently trying to set up DSSO with Okta using Firefox. I have been able to successfully set up Edge/Chrome/IE on the domain without issue. I have followed the documentation as outlined on the Okta website for setting up Firefox, to no avail. We have been troubleshooting with the Okta experts for the last three days with no forward progress, so I figured I would post the available information here:
Firefox version 107.0.1 32-bit
TLS 1.2
NTLM v2
Windows Server 2019
the result that the agentlessDssoPrecheck is returning:
{"result" : "FAIL_NTMLSSP"} - (that is not a misspelling; the return should be NTLM, but whatever)
I have the following options set in Firefox:
network.negotiate-auth.trusted-uris org.kerberos.okta.com
network.negotiate-auth.delegation-uris org.kerberos.okta.com
network.negotiate-auth.allow-non-fqdn true
network.negotiate-auth.allow-proxies true
network.automatic-ntlm-auth.trusted-uris org.kerberos.okta.com
network.automatic-auth.allow-non-fqdn true
I attempted to pull the logs using set NSPR_LOG_MODULES=negotiateauth:5, but while Firefox does create the log, it doesn't write anything, including the failure, to it. (If I set the value to all:5, I get a ton of information, but it appears useless for what I am trying to troubleshoot.)
I attempted to pull Fiddler and Wireshark information; I haven't set up the decoding on the Wireshark side yet. However, I did get an extract of the Fiddler information, but I didn't spot anything in there that seemed to indicate why the failure was occurring.
I have one suspicion: the following option has been set in both Edge and Chrome: DisableAuthNegotiateCnameLookup = enable. I don't see that option, or something similar, in Firefox to be able to adjust that value.
Some additional information to share:
Firefox is connecting to and submitting an authentication message to org.kerberos.okta.com; however, it appears to be in the wrong format.
"Received authorization header contains raw NTLM token, will fail precheck. Okta is not getting Kerberos ticket but raw NTLM token instead."
Firefox with Kerberos can be sensitive to reverse name lookups.
For testing purposes, have you tried setting a manual entry in the hosts file to return the correct DNS name?
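For example, on Windows you would add a line like this to C:\Windows\System32\drivers\etc\hosts (203.0.113.10 is a placeholder; substitute the address that nslookup org.kerberos.okta.com actually returns):

203.0.113.10    org.kerberos.okta.com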
I would be interested to know your results from Wireshark using the following filter (note that Wireshark display filters are lowercase):
dns || kerberos

Send SMS from AngularJS Web App using Ozeki sms Gateway

I want to send SMS from an AngularJS web application using the Ozeki SMS gateway. Can anyone tell me how to do this, or suggest some reference links or code samples?
Plain sending
Assuming we skip the other protocols available inside the Ozeki SMS NG product (like SMPP, Email, DB, etc.) and stick to the HTTP protocol only, you can go this way:
Prerequisites:
Figure out the best way for you to make an HTTP request to send the SMS.
(I'm not an AngularJS guy, so there may already be a few ways to make an HTTP request from Angular, but at the very least any Ajax method passing params to a PHP script that makes the HTTP request (with curl or file_get_contents) will be totally OK.)
Make sure your Ozeki SMS server is reachable via IP/domain name etc. by your PHP script so your code can reach its endpoint.
Doing it:
Inside Ozeki, install a service provider like HTTP Client
http://www.ozekisms.com/index.php?owpn=195&info=service-provider-connections/http-client-connection
or HTTP Server (a more powerful version of HTTP Client allowing callback URLs)
http://www.ozekisms.com/index.php?owpn=197&info=service-provider-connections/http-server-connection
Then, according to the docs, execute a request like:
http://server_ip:9501/api?action=sendmessage&username=________&password=________&originatior=__________________&recipient=__________________&messagetype=SMS:TEXT&messagedata=______________
*Some fields are not necessary; it may vary depending on the Ozeki version you use.
** Port 9501 is the default Ozeki HTTP port, which may be changed in the general settings; there is an HTTPS port as well. Basically, the correct port is the same one you already use when accessing the Ozeki web GUI.
After executing the sending request (try it from a browser or from something like Postman first) you should get a response in XML format informing you about the result of your transaction.
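To connect this back to the AngularJS part of the question, a minimal sketch using Angular's built-in $http service could look like this (server address, credentials and field values are placeholders; the parameter names are the ones from the URL above, some of which are optional):

// Minimal AngularJS sketch: send an SMS via Ozeki's HTTP API.
angular.module("smsApp", []).controller("SmsCtrl", function ($http) {
  this.send = function (recipient, text) {
    return $http.get("http://server_ip:9501/api", {
      params: {
        action: "sendmessage",
        username: "your_user",      // placeholder
        password: "your_password",  // placeholder
        recipient: recipient,
        messagetype: "SMS:TEXT",
        messagedata: text
      }
    }).then(function (response) {
      console.log(response.data); // XML describing the transaction result
    });
  };
});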
Possible next step... DLRs
Getting delivery reports (if supported by your operator) is a common "I want it too" question.
In case you need them, there is a great embedded feature inside the "HTTP Server" connector (mentioned above).
Here you can see more details
http://www.ozekisms.com/index.php?owpn=431
"reporturl" - is a field you may use to set kind of "call back url". In other words in this optional field you may specify full URL and list fields to be passed along. So you only have to create your own endpoint to catch them (as GET request from Ozeki server) and use inside your software.

security of sending passwords through Ajax

Is it OK to pass passwords like this, or should the method be POST, or does it not matter?
xmlhttp.open("GET","pas123",true);
xmlhttp.send();
Additional info: I'm building this using a local virtual web server, so I don't think I'll have HTTPS until I put up some money for a real web server :-)
EDIT: According to Gumo's link, encodeURIComponent should be used. Should I do xmlhttp.send(encodeURIComponent(password)), or would this cause errors in the password matching?
Post them via HTTPS, then you don't need to worry about that ;)
But note that the page which sends the data must be accessed via HTTPS too, due to the same origin policy.
Regarding your money limitation: you can use self-signed certificates, or you can get a certificate for free from https://startssl.com/.
All HTTP requests are sent as text, so the particulars of whether it's a GET or POST or PUT... don't really matter. What matters for security in transmission is that you send it via SSL (and handle it safely on the other end, of course).
You can use a self-signed cert until something better is made available. It will be a special hell later if you don't design with https in mind now :)
It shouldn't matter, the main reason for not using GET on conventional web forms is the fact that the details are visible in the address bar, which isn't an issue when using AJAX.
All HTTP requests (GET/POST/etc.) are sent in plain text, so they could be captured using network tracing software (e.g. Wireshark). To protect against this you will need to use HTTPS.
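Putting the answers together, here is a minimal sketch (placeholder URL) that also addresses the encodeURIComponent question from the edit: encode each value inside a name=value body rather than the raw password alone. The server-side form parser decodes it again, so password matching is unaffected.

// Sketch: send the password as a form-encoded POST body over HTTPS.
// Encode the value only; the server decodes it back to the original.
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("POST", "https://example.com/login", true); // placeholder URL
xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xmlhttp.send("password=" + encodeURIComponent(password));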

getting autodiscover URL from Exchange email address

I'm starting with an address for an Exchange 2007 server:
user@domain.exchangeserver.org
And I attempted to send an autodiscover request, as documented at MSDN.
I attempted to use the generic autodiscover address documented at the TechNet White Paper.
So, using cURL in PHP, I sent the following request:
<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
  <Request>
    <EMailAddress>user@domain.exchangeserver.org</EMailAddress>
    <AcceptableResponseSchema>http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>
  </Request>
</Autodiscover>
to the following URL:
https://domain.exchangeserver.org/autodiscover/autodiscover.xml
But got no response, just an eventual timeout.
I also tried:
https://autodiscover.domain.exchangeserver.org/autodiscover/autodiscover.xml
With the same result.
Now, since my larger goal is to use Autodiscover with Exchange Web Services, and since all of the EWS URLs typically use the same sub-domain as the Outlook Web Access address, I thought I'd see if the same were true for autodiscover URLs. Since the OWA URL is:
OWA: https://wmail.domain.exchangeserver.org
I tried:
https://wmail.domain.exchangeserver.org/autodiscover/autodiscover.xml
And sure enough, I got back the expected response.
However, I only knew the OWA sub-domain because it's the server I have access to and that I'm using to test everything. I would not know it for sure or be able to guess it if this were a live app and the user was entering in their own Exchange email.
I know that the autodiscover settings must be available without knowing the OWA URL, because I can enter:
user@domain.exchangeserver.org
into Apple Mail on Snow Leopard and it finds everything without trouble.
So the question is...
Should https://domain.exchangeserver.org/autodiscover/autodiscover.xml have worked, and I just missed a step when trying to connect to it? Or,
Is there some trick (maybe involving pinging the email address?) that Apple Mail and other clients use to resolve the address to the OWA subdomain before sending the autodiscover request?
Thanks to anyone who knows or can take a wild guess.
After a bit more banging my head against the Google, I found the following very helpful article on MSDN:
http://msdn.microsoft.com/en-us/library/ee332364.aspx
Specifically the section "Calling Autodiscover"
I'm still trying to figure out how to do an Active Directory Service Connection Point search via LDAP, but step 4, for my server at least, worked like a charm:
The application sends an unauthenticated GET request to http://autodiscover.contoso.com/autodiscover/autodiscover.xml. (Note that this is a non-SSL endpoint).
If the GET request returns a 302 redirect response, it gets the redirection URL from the Location HTTP header, and validates it as described in the section “Validating a Potentially Unsafe Redirection URL” later in this article.
Sure enough, a request sent to:
http://domain.exchangeserver.org/autodiscover/autodiscover.xml
sent back a 302 redirect URL:
https://wmail.domain.exchangeserver.org/autodiscover/autodiscover.xml
But this article gives a series of steps, so anyone wanting to implement autodiscover for an Exchange client has 5 things to try before giving up.
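For anyone implementing this, here is a sketch of that step 4 (JavaScript fetch with manual redirect handling, assuming Node 18+; untested against a real Exchange server):

// Sketch: step 4 of the MSDN flow. Send an unauthenticated GET to the
// non-SSL endpoint and read the 302 Location header instead of following it.
async function findAutodiscoverUrl(domain) {
  const resp = await fetch(`http://${domain}/autodiscover/autodiscover.xml`, {
    redirect: "manual" // do not follow; we want the Location header
  });
  if (resp.status >= 300 && resp.status < 400) {
    return resp.headers.get("location"); // validate before trusting it!
  }
  return null;
}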

Why "Content-Length: 0" in POST requests?

A customer sometimes sends POST requests with Content-Length: 0 when submitting a form (10 to over 40 fields).
We tested it with different browsers and from different locations but couldn't reproduce the error. The customer is using Internet Explorer 7 and a proxy.
We asked them to let their system administrator see into the problem from their side. Running some tests without the proxy, etc..
In the meantime (half a year later and still no answer) I'm curious if somebody else knows of similar problems with a Content-Length: 0 request. Maybe from inside some Windows network with a special proxy for big companies.
Is there a known problem with Internet Explorer 7? With a proxy system? The Windows network itself?
Google only showed something in the context of NTLM (and such) authentication, but we aren't using this in the web application. Maybe it's in the way the proxy operates in the customer's network with Windows logins? (I'm no Windows expert. Just guessing.)
I have no further information about the infrastructure.
UPDATE: In December 2010 it was possible to inform one administrator about this, incl. links from the answers here. Contact was because of another problem which was caused by the proxy, too. No feedback since then. And the error messages are still there. I'm laughing to prevent me from crying.
UPDATE 2: This problem exists since mid 2008. Every few months the customer is annoyed and wants it to be fixed ASAP. We send them all the old e-mails again and ask them to contact their administrators to either fix it or run some further tests. In December 2010 we were able to send some information to 1 administrator. No feedback. Problem isn't fixed and we don't know if they even tried. And in May 2011 the customer writes again and wants this to be fixed. The same person who has all the information since 2008.
Thanks for all the answers. You helped a lot of people, as I can see from some comments here. Too bad the real world is this grotesque for me.
UPDATE 3: May 2012 and I was wondering why we hadn't received another demand to fix this (see UPDATE 2). Looked into the error protocol, which only reports this single error every time it happened (about 15 a day). It stopped end of January 2012. Nobody said anything. They must have done something with their network. Everything is OK now. From summer 2008 to January 2012. Too bad I can't tell you what they have done.
UPDATE 4: September 2015. The website had to collect some data and deliver it to the customer's main website. There was an API with an account. Whenever there was a problem they contacted us, even if the problem was clearly on the other side. For a few weeks now we haven't been able to send them the data. The account isn't available anymore. They had a relaunch and I can't find the pages anymore that used the data from our site. The bug report hasn't been answered and nobody has complained. I guess they just ended this project.
UPDATE 5: March 2017. The API stopped working in the summer of 2015. The customer seems to continue paying for the site and is still accessing it in February 2017. I'm guessing they use it as an archive. They don't create or update any data anymore so this bug probably won't reemerge after the mysterious fix of January 2012. But this would be someone else's problem. I'm leaving.
Internet Explorer does not send form fields if they are posted from an authenticated site (NTLM) to a non-authenticated site (anonymous).
This is a feature for challenge-response situations (NTLM- or Kerberos-secured web sites) where IE can expect that the first POST request immediately leads to an HTTP 401 Authentication Required response (which includes a challenge), and only the second POST request (which includes the response to the challenge) will actually be accepted. In these situations IE does not upload the possibly large request body with the first request, for performance reasons. Thanks to EricLaw for posting that bit of information in the comments.
This behavior occurs every time an HTTP POST is made from a NTLM authenticated (i.e. Intranet) page to a non-authenticated (i.e. Internet) page, or if the non-authenticated page is part of a frameset, where the frameset page is authenticated.
The work-around is either to use a GET request as the form method, or to make sure the non-authenticated page is opened in a fresh tab/window (favorite/link target) without a partly authenticated frameset. As soon as the authentication model for the whole window is consistent, IE will start to send form contents again.
Definitely related: http://www.websina.com/bugzero/kb/browser-ie.html
Possibly related: KB923155
Full Explanation: IEInternals Blog – Challenge-Response Authentication and Zero-Length Posts
This is easy to reproduce with MS-IE and an NTLM authentication filter on the server side. I have the same issue with JCIFS (1.2.), Struts 1. and MS-IE 6/7 on XP-SP2. It was finally fixed. There are several workarounds:
Change the form method from POST (the Struts default) to GET.
For most pages with small forms it works well. Unfortunately I have possibly more than 50 records to send back to the server in the HTTP stream. IE has a GET URL limit of 2,083 bytes (not the parameter length, but the whole URL length). So this is a quick workaround, but not applicable for me.
Send a GET before the POST action executes.
This was recommended in the MS KB. My project has many legacy procedures and I would not take the risk at this time. I have never tried this, because it still needs some extra authentication processing when the GET is received by the filter layer (based on my understanding of the MS KB), and I would not like to change the behavior for other browsers, e.g. Firefox or Opera.
Detect whether the POST was sent with a zero Content-Length (you may get it from the header properties hash structure in your framework).
If so, trigger an NTLM authentication cycle by getting a challenge code from the DC or cache, and expect an NTLM response.
When the NTLM type 2 message is received and the session is still valid, you don't really need to authenticate the user; just forward the request to the expected action if the POST Content-Length is not zero. BTW, this will increase network traffic, so check your cache lifetime setting and SMB session soTimeout configuration before applying the change.
Or, more simply, you may just send a 401 Unauthorized status to MS-IE and the browser will send the POST request back with its data in reply; see the sketch below.
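A bare-bones sketch of that last variant, using a plain Node HTTP server purely for illustration (it ignores the real NTLM/session handling described above):

// Sketch: when IE sends a zero-length POST, answer 401 with an NTLM
// challenge so the browser retries the POST with the real body.
const http = require("http");

http.createServer((req, res) => {
  if (req.method === "POST" && req.headers["content-length"] === "0") {
    res.writeHead(401, { "WWW-Authenticate": "NTLM" });
    res.end();
    return;
  }
  // Normal handling: read the body and pass it on to the expected action.
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("received " + body.length + " bytes");
  });
}).listen(8080);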
MS-KB has provided a hotfix, KB-923155 (I could not post more than one link because of a low reputation number :{ ), but it seems not to work. Would someone post a workable hotfix here? Thanks :) Here is a link for reference: http://www.websina.com/bugzero/kb/browser-ie.html
We have a customer on our system with exactly the same problem. We've pinpointed it to the proxy/firewall: Microsoft's IAS. It's stripping the POST body and sending Content-Length: 0. There's not a lot we can do to work around it, however, and we don't want to use GET requests, as this exposes usernames/passwords etc. in the URL string. There are nearly 7,000 users on our system and only one with the problem... also only one using Microsoft IAS, so it has to be this.
There's a good chance the problem is that the proxy server in between implements HTTP 1.0.
In HTTP 1.0 you must use the Content-Length header field: (See section 10.4 here)
A valid Content-Length is required on all HTTP/1.0 POST requests. An HTTP/1.0 server should respond with a 400 (bad request) message if it cannot determine the length of the request message's content.
The request going into the proxy is HTTP 1.1 and therefore does not need to use the Content-Length header field. The Content-Length header is usually used, but not always. See the following excerpt from the HTTP 1.1 RFC, section 14.13:
Applications SHOULD use this field to indicate the transfer-length of the message-body, unless this is prohibited by the rules in section 4.4. Any Content-Length greater than or equal to zero is a valid value. Section 4.4 describes how to determine the length of a message-body if a Content-Length is not given.
So the proxy server does not see a Content-Length header, which it considers absolutely required for an HTTP 1.0 request with a body, and therefore assumes a length of 0 so that the request will eventually reach the server. Remember, the proxy doesn't know the rules of the HTTP 1.1 spec, so it doesn't know how to handle the situation when there is no Content-Length header.
Are you 100% sure your request is specifying the Content-Length header? If it is using another means as defined in section 4.4 because it thinks the server is 1.1 (because it doesn't know about the 1.0 proxy in between) then you will have your described problem.
Perhaps you can use HTTP GET instead to bypass the problem.
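For reference, a well-formed HTTP/1.0 POST spells the body length out explicitly. A hypothetical request (the body here is exactly 13 bytes):

POST /form HTTP/1.0
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 13

field1=value1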
This is a known problem for Internet Explorer 6, but not for 7 as far as I know. You can install the IE6 fix, KB831167.
You can read more about it here.
Some questions for you:
Do you know which type of proxy?
Do you know if there is an actual body sent in the request?
Does it happen consistently every time? Or only sometimes?
Is there any binary data sent in the request? Maybe the data starts with a \0 and the proxy has a bug with binary data.
If the user is going through an ISA proxy that uses NTLM authentication, then it sounds like this issue, which has a solution provided (a patch to the ISA proxy)
http://support.microsoft.com/kb/942638 ("POST requests that do not have a POST body may be sent to a Web server that is published in ISA Server 2006")
I also had a problem where requests from a customer's IE 11 browser had Content-Length: 0 and did not include the expected POST content. When the customer used Firefox, or Chrome the expected content was included in the request.
I worked out that the cause was the customer using an HTTP URL instead of an HTTPS URL (e.g. http://..., not https://...) while our application uses HSTS. It seems there might be a bug in IE 11 whereby, when a request gets upgraded to HTTPS due to HSTS, the request content gets lost.
Getting the customer to correct the URL to https://... resulted in the content being included in the POST request and resolved the problem.
I haven't investigated whether it is actually a bug in IE 11 any further at this stage.
Are you sure these requests are coming from a "customer"?
I've had this issue with bots before; they sometimes probe sites for "contact us" forms by sending blank POST requests based on the action URI in FORM tags they discover during crawling.
The presence and possible values of the Content-Length header in HTTP are described in the HTTP (I assume 1.1) RFC:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.13
In HTTP, it SHOULD be sent whenever the message's length can be determined prior to being transferred.
See also:
If a message is received with both a Transfer-Encoding header field and a Content-Length header field, the latter MUST be ignored.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4
Maybe your message is carrying a Transfer-Encoding header?
Later edit: also please note "SHOULD" as used in the RFC is very important and not equivalent to "MUST":
3. SHOULD. This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
Ref: http://www.ietf.org/rfc/rfc2119.txt
We had a customer using the same website in anonymous and NTLM mode (on different ports). We found out that in our case the 401 was related to the Riverbed Steelhead appliance used for HTTP optimization. The first signal pointing us in that direction was an X-RBT-Optimized-By header. The issue was the Gratuitous 401 feature:
This feature can be used with both per-request and per-connection authentication but it's most effective when used with per-request authentication. With per-request authentication, every request must be authenticated against the server before the server would serve the object to the client. However, most browsers do not cache the server's response requiring authentication and hence it will waste one round-trip for every GET request. With Gratuitous 401, the client-side Steelhead appliance will cache the server response and when the client sends the GET request without any authentication headers, it will locally respond with a "401 Unauthorized" message, therefore saving a round trip. Note that the HTTP module does not participate in the actual authentication itself. What the HTTP module does is to inform the client that the server requires authentication without requiring it to waste one round trip.
Google also shows this as a bug in IE (some versions, anyway) after an HTTPS connection hits the keepalive timeout and reconnects to the server. The solution seems to be configuring the server not to use keepalive for IE under HTTPS.
Microsoft's hotfix for KB821814 can set Content-Length to 0:
The hotfix that this article describes implements a code change in Wininet.dll to:
Detect the RESET condition on a POST request.
Save the data that is to be posted.
Retry the POST request with the content length set to 0. This prevents the reset from occurring and permits the authentication process to complete.
Retry the original POST request.
curl sends PUT/POST requests with Content-Length: 0 when configured to use an HTTP proxy. It's a trick to overcome the required buffering in the case of a first unauthorized PUT/POST request to the proxy. In the case of GET/HEAD requests, curl simply repeats the query. The scheme for PUT/POST is:
Send the first PUT/POST request with Content-Length set to 0.
Get the answer. An HTTP status code of 407 means we have to use proxy authorization. Prepare the proxy authentication headers for the next request.
Send the request again with the proxy authentication headers filled in and the real data to POST/PUT.
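You can observe this with a verbose curl call through an authenticating proxy (proxy address and credentials are placeholders): the first attempt goes out with Content-Length: 0, and after the 407 the request is repeated with the real body.

curl -v -x http://proxy.example.com:8080 --proxy-ntlm --proxy-user user:pass -d "field1=value1" http://example.com/form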
