How is Braintree additionalInformation data used in measuring the risk level of a transaction

I came to know that sending additionalInformation to Braintree as part of the lookup Java API increases the number of transactions that go through without an authentication step.
How is this additionalInformation used by Braintree and the issuing bank to measure the risk level of the transaction?

Each piece of information you add is weighted and feeds into a final risk score for the transaction.
You can adjust the threshold: https://developer.paypal.com/braintree/articles/guides/fraud-tools/basic/risk-threshold-rules
It is also not Braintree-specific; you will find an example of how this kind of information is weighted here: https://help.bolt.com/merchants/references/dashboard-features/risk-scoring/
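For intuition, here is a toy sketch of that kind of weighted scoring with a configurable threshold. The signal names, weights, and cut-offs below are invented for illustration; they are not Braintree's actual model, which is proprietary to the gateway and the issuing bank.

```typescript
// Toy illustration only: the field names, weights, and thresholds below are
// invented for this example and are not Braintree's actual model.
type AdditionalInformation = {
  shippingMatchesBilling: boolean;
  accountAgeDays: number;
  ipCountryMatchesCard: boolean;
};

function riskScore(info: AdditionalInformation): number {
  let score = 0;
  // Each signal contributes a weighted amount to the overall score.
  if (!info.shippingMatchesBilling) score += 30;
  if (info.accountAgeDays < 7) score += 20;
  if (!info.ipCountryMatchesCard) score += 40;
  return score;
}

// A configurable threshold (cf. Braintree's risk threshold rules) decides
// whether the transaction is approved frictionlessly, challenged, or rejected.
function decision(score: number): 'approve' | 'challenge' | 'reject' {
  if (score < 40) return 'approve';
  if (score < 70) return 'challenge';
  return 'reject';
}

console.log(decision(riskScore({
  shippingMatchesBilling: true,
  accountAgeDays: 120,
  ipCountryMatchesCard: true,
}))); // approve
```

The more of these signals you supply, the more evidence the scoring model has, so borderline transactions are more likely to clear the frictionless threshold instead of being challenged.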

Related

Autotuning an asymmetric PID Loop utilizing the Relay Method

I am currently working on a custom PID autotuner for a temperature controller. My heater is controlled by a relay and I do not have a cooling element. I am struggling to apply the Åström-Hägglund method to my situation, following this document.
First, the heater block does not function in a symmetric sinusoidal fashion. I do not fully understand how to determine Ku and Pu from this situation.
Second, my relay does not provide the option for a negative step, as seems to be required by Åström-Hägglund. I rely on Newton's law of cooling for this part.
How do I account for these discrepancies in my calculation? Is there a better tuning method to attempt?
My tuning profile: this can be compared to the tuning profile in figure 4 on page 2 of the document linked above for further clarification.
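Not a full answer, but here is a minimal sketch of the describing-function estimate usually paired with a relay test, adapted to a heat-only setup: the ultimate gain is taken as Ku ≈ 4d/(πa), the ultimate period Pu from the spacing of the oscillation peaks, and both are then fed into Ziegler-Nichols rules. Treating the relay amplitude as half the heater swing and the process amplitude as half the peak-to-trough temperature swing is my assumption for the asymmetric case, not something taken from the linked document.

```typescript
// Sketch of the standard describing-function estimate from a relay test,
// adapted for a heat-only setup. Assumptions (not from the linked document):
// the relay output swings between 0 and `power`, so the effective relay
// amplitude d is power / 2, and the process amplitude a is taken as half the
// peak-to-trough temperature swing of the sustained oscillation.
function relayTuning(
  peaks: number[],     // temperatures at oscillation peaks
  troughs: number[],   // temperatures at oscillation troughs
  peakTimes: number[], // timestamps (s) of the peaks
  power: number,       // heater output applied when the relay is on
) {
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;

  const a = (mean(peaks) - mean(troughs)) / 2; // process oscillation amplitude
  const d = power / 2;                         // effective relay amplitude
  const Ku = (4 * d) / (Math.PI * a);          // ultimate gain

  // Ultimate period from the spacing of successive peaks.
  const periods = peakTimes.slice(1).map((t, i) => t - peakTimes[i]);
  const Pu = mean(periods);

  // Classic Ziegler-Nichols PID rules based on Ku and Pu.
  return { Ku, Pu, Kp: 0.6 * Ku, Ti: Pu / 2, Td: Pu / 8 };
}

console.log(relayTuning([62.1, 62.4, 62.2], [57.8, 57.6, 57.9], [120, 300, 480], 100));
```

Because cooling is passive and slower than heating, the oscillation is skewed; this averaging of peaks and troughs only approximates the symmetric case, so expect to refine the resulting gains manually.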

Azure anomaly detector parameters

I'm trialing the Azure Anomaly Detector API (with C#) for alert conditions in my organisation. The data is formatted and ordered correctly, and the API returns a result as expected using the last-data-point technique. What I'm struggling to understand is how the following parameters interact with each other:
Sensitivity
MaxAnomalyRatio
Period
Documentation here only really lists types and vague ranges. Are there any specific, detailed examples showing what these parameters are and how they interact?
This blog article, which I found very helpful, has details on the max anomaly ratio: https://www.smartercode.io/how-azure-anomaly-detection-api-allows-you-to-find-weirdness-inside-crowd-part-2/
[Optional] Maximum Anomaly Ratio: an advanced model parameter, greater than 0 and less than 0.5, which sets the maximum percentage of points that can be determined to be anomalies. You can think of it as a mechanism to limit the top anomaly candidates.
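For reference, this is roughly how the three parameters sit in the v1 "last point" request body as I read the public REST docs; the endpoint path and field names below should be checked against the API version you are actually trialing.

```typescript
// Sketch of the v1 "last point" detection request; the endpoint path and body
// fields follow my reading of the public REST docs. Verify them against the
// API version you are using.
type Point = { timestamp: string; value: number };

async function detectLastPoint(endpoint: string, key: string, series: Point[]) {
  const body = {
    series,
    granularity: 'hourly',
    // period: number of points per seasonal cycle. Omit it to let the service
    // infer seasonality; setting it wrongly degrades the model fit.
    period: 24,
    // sensitivity (0-99): shrinks or widens the expected-value margin. Higher
    // sensitivity means a narrower margin, so more points get flagged.
    sensitivity: 95,
    // maxAnomalyRatio (0 to 0.5): hard cap on the fraction of points the model
    // may treat as anomalous, applied on top of whatever sensitivity would flag.
    maxAnomalyRatio: 0.25,
  };

  const res = await fetch(`${endpoint}/anomalydetector/v1.0/timeseries/last/detect`, {
    method: 'POST',
    headers: { 'Ocp-Apim-Subscription-Key': key, 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.json(); // isAnomaly, expectedValue, upperMargin, lowerMargin, ...
}
```

In short: sensitivity widens or narrows the expected-value margins point by point, maxAnomalyRatio caps how much of the series may be flagged overall, and period pins the seasonality the model assumes.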

Algorithm which allows the backend to find compromised data from the 3rd party service

On my backend I use a 3rd-party service which provides currency exchange rates. I use this rate for converting funds between a customer's accounts. Since this rate affects customers' money, I have to be sure that it's legitimate. Let's pretend the 3rd-party service is experiencing a glitch and returns a rate 100 times bigger than it should be; the backend would still carry out this conversion, which would lead to huge issues.
The only solution I have found is:
to retrieve the same rate from 5 different sources,
to calculate the standard deviation over them,
to compare the needed rate with the arithmetic average.
But this algorithm didn't work for me: even a tiny difference between the rates makes the check reject a legitimate rate.
Is there an algorithm which can help me be sure that the rate is correct?
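One alternative worth trying, sketched below: compare each candidate rate against the median of the other sources and accept it only if the relative deviation stays under a tolerance. The median is robust to a single glitching provider, and a relative (percentage) tolerance avoids the problem where a tiny absolute spread makes a standard-deviation test reject everything. The 5% tolerance here is illustrative, not a recommendation.

```typescript
// Compare a candidate rate against the median of the other sources, which a
// single glitching provider cannot drag around. The 5% tolerance is
// illustrative; pick it from the spread you actually observe between sources.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function isRateTrusted(candidate: number, otherSources: number[], tolerance = 0.05): boolean {
  const reference = median(otherSources);
  // Relative deviation: a 100x glitch gives ~99, a tiny quote difference gives ~0.001.
  const deviation = Math.abs(candidate - reference) / reference;
  return deviation <= tolerance;
}

console.log(isRateTrusted(1.0921, [1.0918, 1.0923, 1.0919, 1.0925])); // true
console.log(isRateTrusted(109.2, [1.0918, 1.0923, 1.0919, 1.0925]));  // false
```

If you cannot afford to query several providers on every conversion, the same check works against your own last known good rate: a legitimate exchange rate rarely moves by orders of magnitude between updates.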

graphql query cost analysis on client side

I am consuming a GraphQL API. The API sends the requestedCost and the actualCost in the response. I want to calculate the requestedCost on the client side so I can rate-limit my requests.
I have the rules for calculating the cost. For example, an object costs 1 point, fields cost zero points, connections cost 2 points, etc.
I am experimenting with the AST using the graphql parse and visit functions, and it seems to work, but I think there may be a better way.
Any thoughts or suggestions?
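For comparison, here is roughly what that looks like with the graphql-js parse and visit functions mentioned in the question. Since the client does not have the schema, classifying fields is a heuristic here: a field with a selection set is counted as an object, one that also takes first/last arguments as a connection, and leaf fields as free; your API's actual cost rules may well differ.

```typescript
// Sketch built on the graphql-js `parse` and `visit` functions. Without the
// schema, the classification below is a heuristic: a field with a selection
// set is treated as an object (1 point), one that also takes first/last
// arguments as a connection (2 points), and leaf fields cost 0.
import { parse, visit, FieldNode } from 'graphql';

function estimateRequestedCost(query: string): number {
  let cost = 0;
  visit(parse(query), {
    Field(node: FieldNode) {
      if (!node.selectionSet) return; // leaf field: 0 points
      const isConnection = (node.arguments ?? []).some(
        (a) => a.name.value === 'first' || a.name.value === 'last',
      );
      cost += isConnection ? 2 : 1;   // connection: 2, object: 1
    },
  });
  return cost;
}

console.log(estimateRequestedCost(`
  query {
    shop { name }                 # object: 1
    products(first: 10) {         # connection: 2
      edges { node { id title } } # edges and node are objects: 1 + 1
    }
  }
`)); // 5
```

Walking the AST with visit is a perfectly reasonable approach; libraries such as graphql-cost-analysis do essentially the same traversal, just driven by schema directives rather than hard-coded rules.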

Is there a way to influence AlchemyAPI sentiment analysis

I was using AlchemyAPI for text analysis. I want to know if there is a way to influence the API results or fine-tune it as per my requirements.
I was trying to analyse different call center conversations available on the internet, to understand the sentiment, i.e. whether the customer was unsatisfied or angry and hence the conversation is negative.
For 9 out of 10 conversations it gave the sentiment as positive, and for 1 it was negative. That conversation was about an emergency response system (911 in the US). It seems that words like shooting, fear, panic, police, and siren could have caused this result.
But actually the whole conversation was fruitful. The caller was not angry with the service; instead, the call center person solved the caller's problem and the caller was relaxed. So logically this should not be treated as negative.
What is the way forward to customize the AlchemyAPI behavior?
We are currently looking at the tools that would be required to allow customization of the AlchemyAPI services. Our current service is entirely pre-trained on billions of web pages, but customization is on the road map. I can't give you any timelines this early, but keep checking back!
Zach, Dev Evangelist AlchemyAPI
