Having an issue while making a payment with Paysafe

I am facing an issue while making payments using the Paysafe API in test mode.
The error shown is:
[code] => 3027
[message] => The external processing gateway has reported a limit has been exceeded.
What could be a possible solution to resolve this?
Sometimes I also get an error like:
[code] => 5068
[message] => Either you submitted a request that is missing a mandatory field or the value of a field does not match the format expected.
[details] => Array
(
[0] => field: billingDetails.country
[1] => invalid value 'Canada', valid values are [AD, AE, AF, AG, AI, AL, AM, AO, AQ, AR, AS, AT, AU, AW, AX, AZ, BA, BB, BD, BE, BF, BG, BH, BI, BJ, BL, BM, BN, BO, BQ, BR, BS, BT, BV, BW, BY, BZ, CA, CC, CD, CF, CG, CH, CI, CK, CL, CM, CN, CO, CR, CU, CV, CW, CX, CY, CZ, DE, DJ, DK, DM, DO, DZ, EC, EE, EG, EH, ER, ES, ET, FI, FJ, FK, FM, FO, FR, GA, GB, GD, GE, GF, GG, GH, GI, GL, GM, GN, GP, GQ, GR, GS, GT, GU, GW, GY, HK, HM, HN, HR, HT, HU, ID, IE, IL, IM, IN, IO, IQ, IR, IS, IT, JE, JM, JO, JP, KE, KG, KH, KI, KM, KN, KP, KR, KW, KY, KZ, LA, LB, LC, LI, LK, LR, LS, LT, LU, LV, LY, MA, MC, MD, ME, MF, MG, MH, MK, ML, MM, MN, MO, MP, MQ, MR, MS, MT, MU, MV, MW, MX, MY, MZ, NA, NC, NE, NF, NG, NI, NL, NO, NP, NR, NU, NZ, OM, PA, PE, PF, PG, PH, PK, PL, PM, PN, PR, PS, PT, PW, PY, QA, RE, RO, RS, RU, RW, SA, SB, SC, SD, SE, SG, SH, SI, SJ, SK, SL, SM, SN, SO, SR, SS, ST, SV, SX, SY, SZ, TC, TD, TF, TG, TH, TJ, TK, TL, TM, TN, TO, TR, TT, TV, TW, TZ, UA, UG, UM, US, UY, UZ, VA, VC, VE, VG, VI, VN, VU, WF, WS, YE, YT, ZA, ZM, ZW]
)
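The 5068 details above already point at the fix: billingDetails.country must be an ISO 3166-1 alpha-2 code (e.g. CA), not a full country name like 'Canada'. A minimal, hypothetical normalization sketch; the mapping and function name below are illustrative and not part of any Paysafe SDK:

```python
# Hypothetical sketch: normalize full country names to ISO 3166-1
# alpha-2 codes before building the billingDetails payload.
# This mapping is a tiny illustrative subset, not a complete table.
COUNTRY_TO_ISO = {
    "Canada": "CA",
    "United States": "US",
    "United Kingdom": "GB",
}

def normalize_country(value: str) -> str:
    """Return an ISO alpha-2 code; pass through values already in that form."""
    if len(value) == 2 and value.isalpha() and value.isupper():
        return value  # already looks like an ISO code
    try:
        return COUNTRY_TO_ISO[value]
    except KeyError:
        raise ValueError(f"No ISO code known for country name {value!r}")

billing_details = {"country": normalize_country("Canada")}
print(billing_details)  # {'country': 'CA'}
```

In practice a library or a full ISO 3166 table would replace the hand-written dict; the point is only that the API rejects the display name and accepts the two-letter code.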
Can you let me know how to resolve all of these issues?
Thanks,
Rajendra Banker
I have already tried contacting the support team.


How to fix "invalid integration variable or limit(s) in Null"?

I'm trying to make a table of values of the double numerical integral "N\[CurlyPhi]0" for the functions
G1[r1_, r_, L_, \[CurlyPhi]0_] :=
2 Sqrt[r^2 + r1^2 - 2 r r1 Cos[\[CurlyPhi]0]] -
2 Sqrt[r^2 + r1^2 - 2 r r1 Cos[\[CurlyPhi]0] + L^2] -
L Log[1 + (
2 L (L - Sqrt[r^2 + r1^2 - 2 r r1 Cos[\[CurlyPhi]0] + L^2]))/(
r^2 + r1^2 - 2 r r1 Cos[\[CurlyPhi]0])]
G2[r1_, r_, L_] :=
2 r r1 (-Sqrt[L^2 + (r - r1)^2] + Sqrt[(r - r1)^2] +
L ArcTanh[L/Sqrt[L^2 + (r - r1)^2]])
and
a3 = 8000;
b3 = 15000;
N\[CurlyPhi]0[a0_, b0_, \[CurlyPhi]0_, L_] := (
1/(\[Pi] (\[CurlyPhi]0 (b0^2 - a0^2) L) ))
NIntegrate[
G2[r1, r, L] - G1[r1, r, L, \[CurlyPhi]0], {r, a0, b0}, {r1, a0, r,
b0}, MinRecursion -> 9, MaxRecursion -> 40,
Method -> {"GlobalAdaptive",
"MaxErrorIncreases" ->
100000}]
I tried to make the table this way (I have used this same code before for another function):
N\[CurlyPhi]0vsLat\[CurlyPhi]0Dat =
ParallelTable[{L, N\[CurlyPhi]0[a3, b3, (10 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, (30 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, (60 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, (90 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, (120 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, (150 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, \[Pi], L 10^3],
N\[CurlyPhi]0[a3, b3, (210 \[Pi])/180, L 10^3],
N\[CurlyPhi]0[a3, b3, (240 \[Pi])/180, L 10^3]}, {L, 1, 100, 2}]
But I get the following error message, and I don't know how to fix it so that I can generate the table:
NIntegrate: The integrand (...) has evaluated to Overflow, Indeterminate, or Infinity for all sampling points in the region with boundaries (...)

What minimal change to my code would make it preserve logical purity?

I posted the code below as an answer to this question. User "repeat" commented that it is not logically pure and said: "if you are interested in a minimal change to your code that makes it preserve logical-purity, I suggest posting a new question about that. I'd be glad to answer it :)".
% minset_one_(1 in D1, 1 in D2, D1, D2, D1Len, D2Len, T).
minset_one_(true, false, D1, _, _, _, D1).
minset_one_(false, true, _, D2, _, _, D2).
minset_one_(true, true, _, D2, D1Len, D2Len, D2) :- D1Len >= D2Len.
minset_one_(true, true, D1, _, D1Len, D2Len, D1) :- D1Len < D2Len.

minset_one(D1, D2, T) :-
    (member(1, D1) -> D1check = true ; D1check = false),
    (member(1, D2) -> D2check = true ; D2check = false),
    length(D1, D1Len),
    length(D2, D2Len),
    minset_one_(D1check, D2check, D1, D2, D1Len, D2Len, T).
e.g.
?- D1 = [X,Y,Z], D2 = [U,V], minset_one(D1,D2,T).
D1 = [1, Y, Z],
D2 = T, T = [1, V],
U = X, X = 1 ;
false
There are more solutions possible: member(1, D1) does not backtrack through [1, Y, Z], then [X, 1, Z], then [X, Y, 1].
The Problem with (->)/2 (and friends)
Consider the following goal:
(member(1,D1) -> D1check = true ; D1check = false)
(->)/2 commits to the first answer of member(1,D1); all other answers are disregarded.
Can alternatives to (->)/2—like (*->)/2 (SWI, GNU) or if/3 (SICStus)—help us here?
No. While these do not disregard alternative answers of member(1,D1), they still do not consider that the logical negation of member(1,D1) could also succeed.
Back to basics: "If P then Q else R" ≡ "(P ∧ Q) ∨ (¬P ∧ R)"
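The classical equivalence above is easy to sanity-check mechanically; a quick truth-table sketch (in Python, purely as a side check of the propositional identity):

```python
from itertools import product

def if_then_else(p, q, r):
    # "if P then Q else R"
    return q if p else r

def rewritten(p, q, r):
    # "(P ∧ Q) ∨ (¬P ∧ R)"
    return (p and q) or (not p and r)

# The two forms agree on all 8 Boolean assignments.
assert all(
    if_then_else(p, q, r) == rewritten(p, q, r)
    for p, q, r in product([False, True], repeat=3)
)
```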
So let's rewrite (If -> Then ; Else) as (If, Then ; Not_If, Else):
(member(1,D1), D1check = true ; non_member(1,D1), D1check = false)
How should we implement non_member(X,Xs)—can we simply write \+ member(X,Xs)?
No! To preserve logical purity, we had better not build upon "negation as finite failure".
Luckily, combining maplist/2 and dif/2 does the job here:
non_member(X, Xs) :-
    maplist(dif(X), Xs).
Putting it all together
So here's the minimal change I propose:
minset_one_(true, false, D1, _, _, _, D1).
minset_one_(false, true, _, D2, _, _, D2).
minset_one_(true, true, _, D2, D1Len, D2Len, D2) :- D1Len >= D2Len.
minset_one_(true, true, D1, _, D1Len, D2Len, D1) :- D1Len < D2Len.

non_member(X, Xs) :-
    maplist(dif(X), Xs).

minset_one(D1, D2, T) :-
    (member(1, D1), D1check = true ; non_member(1, D1), D1check = false),
    (member(1, D2), D2check = true ; non_member(1, D2), D2check = false),
    length(D1, D1Len),
    length(D2, D2Len),
    minset_one_(D1check, D2check, D1, D2, D1Len, D2Len, T).
Running the sample query we now get:
?- D1 = [X,Y,Z], D2 = [U,V], minset_one(D1,D2,T).
D1 = [1,Y,Z], X = U, U = 1, D2 = T, T = [1,V]
; D1 = [1,Y,Z], X = V, V = 1, D2 = T, T = [U,1]
; D1 = T, T = [1,Y,Z], X = 1, D2 = [U,V], dif(U,1), dif(V,1)
; D1 = [X,1,Z], Y = U, U = 1, D2 = T, T = [1,V]
; D1 = [X,1,Z], Y = V, V = 1, D2 = T, T = [U,1]
; D1 = T, T = [X,1,Z], Y = 1, D2 = [U,V], dif(U,1), dif(V,1)
; D1 = [X,Y,1], Z = U, U = 1, D2 = T, T = [1,V]
; D1 = [X,Y,1], Z = V, V = 1, D2 = T, T = [U,1]
; D1 = T, T = [X,Y,1], Z = 1, D2 = [U,V], dif(U,1), dif(V,1)
; D1 = [X,Y,Z], D2 = T, T = [1,V], U = 1, dif(X,1), dif(Y,1), dif(Z,1)
; D1 = [X,Y,Z], D2 = T, T = [U,1], V = 1, dif(X,1), dif(Y,1), dif(Z,1)
; false.
Better. Sure looks to me like there's nothing missing.
I think it would be:
add:
:- use_module(library(reif)).
... and replace:
%(member(1, D1) -> D1check = true ; D1check = false),
%(member(1, D2) -> D2check = true ; D2check = false),
memberd_t(1, D1, D1check),
memberd_t(1, D2, D2check),
Example of the difference between member/2 and memberd_t/3:
?- member(X, [A, B, C]).
X = A ;
X = B ;
X = C.
?- memberd_t(X, [A, B, C], IsMember).
X = A,
IsMember = true ;
X = B,
IsMember = true,
dif(A,B) ;
X = C,
IsMember = true,
dif(A,C),
dif(B,C) ;
IsMember = false,
dif(A,X),
dif(B,X),
dif(C,X).
?- memberd_t(X, [A, B, C], IsMember), X = 5, A = 5, C = 5.
X = A, A = C, C = 5,
IsMember = true ;
false.
So, memberd_t is itself adding the dif/2 constraints. To aid performance slightly, it loops through the list only once.
The definition of memberd_t is at e.g. https://github.com/meditans/reif/blob/master/prolog/reif.pl#L194 and https://www.swi-prolog.org/pack/file_details/reif/prolog/reif.pl?show=src

Prolog singleton variable in branch

I have the below code:
loc_sucs(R, C, result(A, S)) :-
    loc_sucs(R, C, S),
    Rm is R - 1,
    A \= move-north;
    R = 0;
    o(Rm, C);
    r(Rm, C, S);
    Rp is R + 1,
    dim(Z, _),
    A \= move-south;
    R = Z;
    o(Rp, C);
    r(Rp, C, S);
    Cp is C + 1,
    dim(_, W),
    A \= move-east;
    C = W;
    o(R, Cp);
    r(R, Cp, S);
    Cm is C - 1,
    A \= move-west;
    C = 0;
    o(R, Cm);
    r(R, Cm, S).
And I'm getting the singleton warning for Rm, Cm, Rp, Cp, Z and W. Why am I getting this warning if all of these variables are used more than once?

Why do I receive an ACCESSING_NON_EXISTENT_FIELD warning after a JOIN and setting of an alias?

In the following Pig script, my value ct "disappears" when I run a DUMP on any step after the GENERATE that sets the e3 alias. For example, if I execute a DUMP on e4 immediately after setting that alias, no value is returned.
I will also see the following warning in my output:
[main] WARN
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
- Encountered Warning ACCESSING_NON_EXISTENT_FIELD 9 time(s).
eng_grp = GROUP engs BY (aid, scm_id,ts,etype);
eng_grp_out = FOREACH eng_grp
GENERATE
group.aid as aid,
group.scm_id as scm_id,
group.etype as etype,
group.ts as timestamp,
(long)COUNT_STAR(engs) as ct;
eng_joined = JOIN eng_grp_out BY (aid,scm_id), tgc BY (aid, scm_id);
e3 = FOREACH eng_joined GENERATE
MD5((chararray)CONCAT(CONCAT(CONCAT(CONCAT(CONCAT(CONCAT(eng_grp_out::aid,'_'),eng_grp_out::scm_id),'_'),eng_grp_out::etype),'_'),(chararray)eng_grp_out::timestamp)) as id,
eng_grp_out::aid as v,
eng_grp_out::scm_id as scmid,
eng_grp_out::etype AS et,
eng_grp_out::timestamp as ts,
FLATTEN(tgc::tags),
eng_grp_out::ct as ct;
-- the value for "ct" will be output if I do DUMP e3; here
e4 = FOREACH e3 GENERATE
id,
v,
scmid,
et,
ts,
FLATTEN(tgc::tags::g) as gg,
ct;
-- the value for "ct" will NOT be output if I do DUMP e4; here
e5 = FOREACH e4 GENERATE
id,
v,
scmid,
et,
ts,
gg#'g' as tg,
gg#'v' as tv,
gg#'d' as td,
ct;
e6 = FOREACH e5 GENERATE
id,
v,
scmid,
et,
(long)ts,
tg#'\$oid' as tg,
tv#'\$oid' as tv,
(chararray)td as td,
ct;
e7 = FOREACH e6 GENERATE
id,
v,
scmid,
et,
ts,
'c' as tt,
tg,
tv,
td,
ct;
e8 = FOREACH e7 GENERATE
id,v,scmid,et,ts,tt,
CONCAT(CONCAT(CONCAT(CONCAT(tg,'_'),tv),'_'),td) as ct,
tg,tv,td,ct;
I finally got it to work by changing the assignment of the e3 alias to:
e3 = FOREACH eng_joined GENERATE
-- ...kept everything else the same...
TOMAP('count_val', (long)eng_grp_out::ct);
From there I was able to get the value I needed in the e4 assignment by doing (long)$6#'count_val' as val.

unable to typecast in pig

I am getting errors with the AVG function. Can anyone please help with the following script? (Do I need to use a tuple or bag while loading?) Thanks.
mydata = LOAD 'bigdata.txt' USING PigStorage(',') AS (stn , wban, yearmoda, temp, a , dewp :double, b , slp :double, c, stp :double, d, visib :double, e, wdsp :double, f, mxspd :double, gust :double, max :double, min :double, prcp :double, sndp :double, frshtt);
clean1 = FOREACH mydata GENERATE stn , wban, yearmoda, temp, a , dewp, b , slp, c, stp, d, visib, e, wdsp, f, mxspd, gust, max , min, prcp ,sndp , frshtt;
--clean2 = FILTER clean1 BY (temp == 9999.9);
tmpdata = FOREACH clean1 GENERATE stn, SUBSTRING(yearmoda, 0, 5) as year, temp;
C = GROUP tmpdata BY (year, temp);
avgtemp = FOREACH C GENERATE group, AVG(temp);
You did not assign temp a type when you LOADed your data. So when Pig tries to call the AVG function and checks which version of it to use (since it must behave differently if the field is an int rather than a double, for example), it cannot tell how to proceed. Give temp a type (like temp:int) in your LOAD statement and it should work.
In your case, you have also not specified the field correctly. You need to pass AVG a bag to evaluate. You construct this bag by projecting the temp field out of the bag of records in C. The schema of C is {group: (year, temp), tmpdata: {(stn, year: chararray, temp)}}, so you need to compute avgtemp like this:
avgtemp = FOREACH C GENERATE group, AVG(tmpdata.temp);
