I have implemented Google reCAPTCHA Enterprise with the score-based assessment on a registration page, for now on a test website.
Now I wonder what would count as a non-fraudulent score. If I register with my own email address I get a score of 0.89.
Would it be OK to treat all scores >= 0.7 as non-fraudulent? What would be a good starting point for a minimum score?
I could log the scores and compare the values over time, so I might eventually see what a good minimum score is.
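Something like this minimal sketch is what I have in mind for the logging part (the table and column names are just placeholders I made up):

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical local store for assessment scores so they can be compared over time.
conn = sqlite3.connect("recaptcha_scores.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS scores (created_at TEXT, action TEXT, score REAL)"
)

def log_score(action: str, score: float) -> None:
    """Record one reCAPTCHA Enterprise assessment score for later analysis."""
    conn.execute(
        "INSERT INTO scores VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), action, score),
    )
    conn.commit()

# e.g. log_score("register", 0.89) after each assessment
```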
On the reCAPTCHA Enterprise website it states:
"With low scores, require MFA or email verification to prevent credential stuffing attacks."
Where would I set up MFA or email verification? Is there documentation for it?
Thank you for any recommendations.
When you create an assessment, reCAPTCHA Enterprise provides a score that helps you understand the level of risk posed by user interactions. You can confirm or correct reCAPTCHA Enterprise's assessment later, when your website has more information about user interactions to determine whether they were legitimate or fraudulent. You can send the reCAPTCHA assessment IDs back to Google with the labels LEGITIMATE or FRAUDULENT to confirm or correct the assessment made by reCAPTCHA Enterprise.
Compared to previous versions of reCAPTCHA, reCAPTCHA Enterprise's scoring system now allows for more precise responses. There are 11 levels of scores in reCAPTCHA Enterprise, with values ranging from 0.0 to 1.0. A score of 1.0 indicates that the interaction is low risk and most likely genuine, while a score of 0.0 indicates that it may be fraudulent. Only the following four score levels, out of the 11 levels, are available by default: 0.1, 0.3, 0.7 and 0.9.
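For illustration, here is a minimal Python sketch of how a score threshold and the confirmation step could fit together. It assumes the google-cloud-recaptcha-enterprise client library, and the 0.7 threshold is only an assumed starting point, not an official recommendation; tune it against the scores you log.

```python
from google.cloud import recaptchaenterprise_v1

def handle_score(score: float, threshold: float = 0.7) -> str:
    """Map an assessment score to an action for the registration flow.

    The 0.7 cut-off is an assumed starting point; adjust it based on your own data.
    """
    if score >= threshold:
        return "allow"   # likely legitimate
    return "verify"      # low score: require MFA or email verification

def annotate(project_id: str, assessment_id: str, legitimate: bool) -> None:
    """Send the assessment ID back to Google with a LEGITIMATE or FRAUDULENT label."""
    client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()
    request = recaptchaenterprise_v1.AnnotateAssessmentRequest(
        name=f"projects/{project_id}/assessments/{assessment_id}",
        annotation=(
            recaptchaenterprise_v1.AnnotateAssessmentRequest.Annotation.LEGITIMATE
            if legitimate
            else recaptchaenterprise_v1.AnnotateAssessmentRequest.Annotation.FRAUDULENT
        ),
    )
    client.annotate_assessment(request)
```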
To learn more about MFA configuration, please refer to this documentation.
Related
I'm trialing the Azure Anomaly Detector API (with C#) for alert conditions in my organisation. The data is formatted and ordered correctly, and the API returns a result as expected using the last-data-point technique. What I'm struggling to understand is how the following parameters interact with each other:
Sensitivity
MaxAnomalyRatio
Period
Documentation here only really lists types and vague ranges. Are there any specific, detailed examples showing what these parameters are and how they interact?
This blog article, which I found very helpful, has details on the max anomaly ratio: https://www.smartercode.io/how-azure-anomaly-detection-api-allows-you-to-find-weirdness-inside-crowd-part-2/
[Optional] Maximum Anomaly Ratio: Advanced model parameter; Between 0 and less than 0.5, it tells the maximum percentage of points that can be determined as anomalies. You can think of it as a mechanism to limit the top anomaly candidates.
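To make it concrete, here is a rough sketch of a last-point detection request showing where the three parameters go. The question mentions C#, but the JSON body is the same in any language; this uses Python's requests, and the endpoint, key, and sample values are placeholders:

```python
import requests

# Placeholders: substitute your own resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

body = {
    # Hourly series; the last point is the one being evaluated (min. 12 points required).
    "series": [{"timestamp": f"2021-01-01T{h:02d}:00:00Z", "value": v}
               for h, v in enumerate([10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 12, 80])],
    "granularity": "hourly",
    "sensitivity": 95,        # 0-99; higher = tighter margins, more points flagged
    "maxAnomalyRatio": 0.25,  # cap on the fraction of points that may be anomalies
    # "period": 24,           # set this if you know the seasonality; omit it to let
    #                         # the service infer the period itself
}

resp = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.0/timeseries/last/detect",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
)
result = resp.json()
print(result.get("isAnomaly"), result.get("expectedValue"))
```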
I need to be able to guesstimate an area's population density.
For example, if I selected Times Square, I need to get the rough population density within a 1 km radius.
I know the Places API does not have a specific function for this, but I think something like this could work:
Fetch the count of all the businesses or premises in an area, and compare them to known results. For example, if central Mumbai has a businesses/premises count of 1000, and a rural town area has a businesses/premises count of 10, then it would be fair to say low density is probably < 100, medium density is probably 100 - 700, and high density is over 700. Or something along those lines.
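Roughly, the classification step I have in mind would look something like this sketch (the cut-offs are just the made-up numbers from above and would need calibrating against areas with known density):

```python
def classify_density(premises_count: int) -> str:
    """Bucket an area into a rough density class based on a premises count.

    The 100 / 700 cut-offs are arbitrary placeholders that would need to be
    calibrated against areas whose density is already known.
    """
    if premises_count < 100:
        return "low"
    if premises_count <= 700:
        return "medium"
    return "high"

# e.g. classify_density(1000) -> "high", classify_density(10) -> "low"
```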
So is there a way to fetch the count of businesses or premises in an area using Google Places API?
This functionality is not currently supported by the Google Maps Platform -- Places API.
If you are interested in having this feature, you can file a feature request in Google's Public Issue Tracker:
https://developers.google.com/maps/support/#issue_tracker
Issue Tracker is a tool used internally at Google to track bugs and feature requests during product development.
I currently have an app available on the Google Play store, with an average overall rating of around 4.8. However, the feature ratings (that is the ratings for gameplay, controls and graphics you can see if you scroll down on the app's page) are all at 2.7.
I was confused by the vast difference between these ratings, but gaining any insight into these feature ratings has proven to be very difficult. For example:
In the Google Play developer console, I can see all the information about the normal ratings, but can't find any information about what the feature ratings are based on.
When I attempt to leave a review on another game, I am NOT given the choice to leave a rating of the three features!
Googling has proven useless.
So I was wondering about two things:
Is it possible to get any insight into your app's feature ratings (i.e., to see the number of reviews and the individual ratings)?
Why can't I leave a feature rating review? (Has anyone else had this problem?)
I was using AlchemyAPI for text analysis. I want to know if there is a way to influence the API results or fine-tune it to my requirements.
I was trying to analyse different call center conversations available on the internet, to understand the sentiment, i.e. whether the customer was unsatisfied or angry and hence the conversation was negative.
For 9 out of 10 conversations it gave the sentiment as positive, and for 1 it was negative. That conversation was about an emergency response system (911 in the US). It seems that words like shooting, fear, panic, police, and siren could have caused this result.
But the whole conversation was actually fruitful: the caller was not angry with the service; instead, the call center agent solved the caller's problem and the caller was relaxed. So logically this should not be treated as negative.
What is the way forward to customize AlchemyAPI's behavior?
We are currently looking at the tools that would be required to allow customization of the AlchemyAPI services. Our current service is entirely pre-trained on billions of web pages, but customization is on the road map. I can't give you any timelines this early, but keep checking back!
Zach, Dev Evangelist AlchemyAPI
I have about 200,000 Twitter followers across a few twitter accounts. I am trying to find the twitter accounts that a large proportion of my followers are following.
Having looked over the Search API I think this is going to be very slow, unless I am missing something.
40 calls using GET followers/ids (which returns up to 5,000 IDs per call) to get the list of 200,000 accounts. Then all I can think of is making 200,000 calls to GET friends/ids. But at the current rate limit of 150 calls/hour, that would take roughly 55 days. Even if I could get Twitter to up my limit slightly, this is still going to be slow going. Any ideas?
The short answer to your question is, no, there is indeed no quick way to do this. And furthermore, API v1.0 is being deprecated sometime in March, with v1.1 becoming the law of the land, which only makes this harder (more on this in a moment).
As I understand it, what you want to do is compile a list of followed accounts for each of the initial 200,000 follower accounts. You then want to count each one of these 200,000 original accounts as a "voter", and then the total set of accounts followed by any of these 200,000 as "candidates". Ultimately, you want to be able to rank this list of candidates by "votes" from the list of 200,000.
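The counting step itself is trivial; the API calls are the bottleneck. Here is a minimal Python sketch of the tallying, assuming you had somehow already fetched each follower's friends/ids list:

```python
from collections import Counter

def rank_candidates(friend_lists):
    """Tally how many of your followers ("voters") follow each account ("candidate").

    friend_lists: iterable of iterables, one list of followed account IDs per follower,
    e.g. the result of GET friends/ids for each of the 200,000 followers.
    """
    votes = Counter()
    for friends in friend_lists:
        votes.update(set(friends))   # de-duplicate per voter: one vote per candidate
    return votes.most_common()       # [(account_id, vote_count), ...] best first

# e.g. rank_candidates([[1, 2, 3], [2, 3], [3]]) -> [(3, 3), (2, 2), (1, 1)]
```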
A few things:
1.) I believe you're actually referencing the REST API, not the Search API.
2.) Based on what you've said about getting 150 requests per hour, I can infer that you're making unauthenticated requests to the API endpoints in question. That limits you to only 150 calls per hour. As a short-term fix (i.e., in the next few weeks, prior to v1.0 being retired), you could make authenticated requests instead, which will boost your hourly rate limit to 350 (source: Twitter API documentation). That alone would more than double your calls per hour.
3.) If this is something you expect to need to do on an ongoing basis, things get much worse. Once API v1.0 is no longer available, you'll be subject to the v1.1 API limits, which a.) require authentication, no matter what, and b.) are applied per API method/endpoint. For GET friends/ids and GET followers/ids in particular, you will only be able to make 15 calls per 15 minutes, or 60 per hour. That means that the sort of analysis you want to do will basically become unfeasible (unless you were to skirt the Twitter API terms of service by using multiple apps/IP addresses, etc.). You can read all about this here. Suffice it to say, researchers and developers that rely on these API endpoints for network analysis are less than happy about these changes, but Twitter doesn't appear to be moderating its position on this. A rough comparison of the timescales involved is sketched after this list.
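Here is the back-of-the-envelope comparison mentioned above, assuming one friends/ids call per follower:

```python
calls_needed = 200_000  # one GET friends/ids call per follower

for label, calls_per_hour in [
    ("v1.0 unauthenticated", 150),
    ("v1.0 authenticated", 350),
    ("v1.1 (15 calls / 15 min)", 60),
]:
    days = calls_needed / calls_per_hour / 24
    print(f"{label}: ~{days:.0f} days")

# v1.0 unauthenticated: ~56 days
# v1.0 authenticated: ~24 days
# v1.1 (15 calls / 15 min): ~139 days
```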
Given all of the above, my best advice would be to use API version 1.0 while you still can, and start making authenticated requests.
Another thought -- not sure what your use case is -- but you might consider pulling in, say, the 1,000 most recent tweets from each of the 200,000 followers and then leveraging the metadata each tweet contains about mentions. Mentions of other users are potentially more informative than simply knowing that someone follows someone else. You could still tally the most-mentioned accounts. The benefit here is that in moving from API 1.0 to 1.1, the endpoint for pulling in user timelines will actually have its rate limit raised from 350 per hour to 720 (source: Twitter API 1.1 documentation).
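As a rough sketch of that idea, assuming v1.1-style tweet JSON where mentions appear under entities.user_mentions:

```python
from collections import Counter

def top_mentioned(timelines, limit=50):
    """Tally mentioned accounts across followers' timelines.

    timelines: iterable of tweet lists (one list per follower), where each tweet is
    a dict in the v1.1 JSON format with entities["user_mentions"].
    """
    mentions = Counter()
    for tweets in timelines:
        seen = set()
        for tweet in tweets:
            for m in tweet.get("entities", {}).get("user_mentions", []):
                seen.add(m["screen_name"])
        mentions.update(seen)  # design choice: count each follower at most once per account
    return mentions.most_common(limit)
```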
Hope this helps, and good luck!
Ben