Unable to upload image in Parse Dashboard, logs say: Could not store file, quota exceeded - Heroku

I am working on an Android app that uses Parse as a backend [Parse + Heroku + mLab (Sandbox plan)]. The app lists different services in the city, along with each service owner's complete information, including user images and icons.
Issue: When I try to upload images, they no longer upload (until yesterday this was working fine), although text fields in the dashboard still save correctly.
Parse logs say:
2017-05-28T04:30:17.793Z - Could not store file.
2017-05-28T04:30:17.790Z - quota exceeded
Screenshots of the MongoDB stats are attached.
Could it be that the issue is with mLab? The Sandbox plan gives storage up to 512 MB. I tried freeing up space, but no luck.

Try using a file storage adapter such as the GCSAdapter or the S3Adapter. It is simple to set up, and old images will continue to work as normal.
var GCSAdapter = require('parse-server-gcs-adapter');

var gcsAdapter = new GCSAdapter(
    'project',
    'keyFilePath',
    'bucket',
    {
        bucketPrefix: '',
        directAccess: false
    }
);

var api = new ParseServer({
    appId: 'my_app',
    masterKey: 'master_key',
    filesAdapter: gcsAdapter
});
mLab calculates your quota from the fileSize value reported by MongoDB. A single photo from a modern phone can run from 3 to 12 MB.
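As a rough illustration of why the 512 MB Sandbox quota fills up so quickly when files are stored in MongoDB (the 5 MB average image size here is an assumption for illustration, not a measured figure):

```python
# Back-of-the-envelope: how many phone photos fit in mLab's 512 MB Sandbox quota?
QUOTA_MB = 512
AVG_IMAGE_MB = 5  # assumed average; modern phone photos range roughly 3-12 MB

max_images = QUOTA_MB // AVG_IMAGE_MB
print(max_images)  # roughly a hundred images before the quota is exhausted
```

A modest number of user uploads is enough to hit the limit, which is why offloading files to a dedicated store scales better.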

Related

video.insert failing silently with positive response and video id

I have been testing my Python video upload script for two days.
Yesterday everything was OK: uploads succeeded, then the quota limit was reached.
Today I continued testing: insert/upload succeeds with a response containing a video id, but the video with that id never appears on the channel. It is also not visible in YouTube Studio.
I tried with 2 different videos - same thing, and I get an id for each of them.
Now the quota is reached again.
Uploaded but not visible ids, e.g.: pSqyId96gTk, -kw-yn-qAxI
If I upload the same video through the YouTube web frontend, everything is OK.
Any idea how to analyse this problem?
Here is part of the response dict:
{'kind': 'youtube#video',
 'etag': 'MgZ0r9Yu43ERF415Jw1lPRJgmDc',
 'id': 'ZAuHewNcxL8',
 'snippet': {
     'publishedAt': '2021-12-22T12:40:54Z',
     'channelId': 'UCNSTNQqqGxwBqoOdekZPj_A',
     ...
 },
 'status': {
     'uploadStatus': 'uploaded',
     'privacyStatus': 'private',
     'license': 'youtube',
     'embeddable': True,
     'publicStatsViewable': True}
}
privacyStatus: "public" has the same effect.
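One way to narrow this down: `uploadStatus: 'uploaded'` only means the bytes arrived; whether YouTube then accepted the video shows up in the `status` and `processingDetails` parts of a later `videos.list` call. A minimal sketch of interpreting those fields (the `diagnose` helper is hypothetical; the field names come from the YouTube Data API videos resource):

```python
# Interpret the status/processingDetails parts of a videos.list response.
# You would fetch these with:
#   youtube.videos().list(part="status,processingDetails", id=VIDEO_ID).execute()
def diagnose(status, processing):
    """Map YouTube Data API status fields to a human-readable explanation."""
    if status.get("uploadStatus") == "rejected":
        return "rejected: " + status.get("rejectionReason", "unknown reason")
    if status.get("uploadStatus") == "failed":
        return "failed: " + status.get("failureReason", "unknown reason")
    ps = processing.get("processingStatus")
    if ps == "processing":
        return "still processing; the video may appear later"
    if ps == "failed":
        return "processing failed: " + str(processing.get("processingFailureReason"))
    return "uploadStatus=%s, processingStatus=%s" % (status.get("uploadStatus"), ps)

print(diagnose({"uploadStatus": "uploaded"}, {"processingStatus": "processing"}))
# still processing; the video may appear later
```

Polling this for the invisible ids should at least tell you whether YouTube rejected the videos after upload or is still holding them in processing.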

How to send Base64 image to Google Cloud Vision API label detection in Ruby?

Hi, I'm building a program in Ruby to generate alt attributes for images on a webpage. I'm scraping the page for the images, then sending their src (in other words, a URL) to google-cloud-vision for label detection and other Cloud Vision methods. It takes about 2-6 seconds per image, and I'm wondering if there's any way to reduce the response time. I first used TinyPNG to compress the images; Cloud Vision was a tad faster, but the time it took to compress more than outweighed the improvement. How can I improve response time? I'll list some ideas.
1) Since we're sending a URL to Google Cloud, Google has to fetch the image from the img_src before it can even analyze it. Is it faster to send a base64-encoded image? What's the fastest form in which to send (or really, for Google to receive) an image?
cloud_vision = Google::Cloud::Vision.new project: PROJECT_ID
@vision = cloud_vision.image(@file_name)
@vision.labels # or @vision.web, etc.
2) The above is my current code for label detection. First question: is it faster to send a JSON request rather than call the Ruby (label or web) methods on a Google Cloud project? If so, should I limit responses? Labels with less than a 0.6 confidence score don't seem of much help. Would that speed up image recognition/processing time?
Open to any suggestions on how to speed up response time from Cloud Vision.
TL;DR - You can take advantage of the batching support in the annotation API for Cloud Vision.
Longer version
Google Cloud Vision API supports batching multiple requests in a single call to the images:annotate API. There are also these limits which are enforced for Cloud Vision:
Maximum of 16 images per request
Maximum 4 MB per image
Maximum of 8 MB total request size.
You could reduce the number of requests by batching 16 at a time (assuming you do not exceed any of the image size restrictions within the request):
#!/usr/bin/env ruby

require "google/cloud/vision"

image_paths = [
  ...
  "./wakeupcat.jpg",
  "./cat_meme_1.jpg",
  "./cat_meme_2.jpg",
  ...
]

vision = Google::Cloud::Vision.new

length = image_paths.length
start = 0
request_count = 0
while start < length do
  last = [start + 15, length - 1].min
  current_image_paths = image_paths[start..last]
  printf "Sending %d images in the request. start: %d last: %d\n", current_image_paths.length, start, last
  result = vision.annotate *current_image_paths, labels: 1
  printf "Result: %s\n", result
  start += 16
  request_count += 1
end
printf "Made %d requests\n", request_count
So you're using Ruby to scrape some images off a page and then send the image to Google, yeah?
Why you might not want to base64 encode the image:
Headless scraping becomes more network intensive. You have to download the image to then process it.
Now you also have to worry about adding in the base64 encode process
Potential storage concerns if you aren't just holding the image in memory (and if you do hold it in memory, debugging becomes somewhat more challenging)
Why you might want to base64 encode the image:
The image is not publicly accessible
You have to store the image anyway
Once you have weighed the choices, if you still want to get the image into base64 here is how you do it:
require 'base64'
Base64.encode64(image_binary)
It really is that easy.
But how do I get that image in binary?
require 'curb'
# This line is an example and is not intended to be valid
img_binary = Curl::Easy.perform("http://www.imgur.com/sample_image.png").body_str
How do I send that to Google?
Google has a pretty solid write-up of this process here: Make a Vision API Request in JSON
If you can't click it (or are too lazy to) I have provided a zero-context copy-and-paste of what a request body should look like to their API here:
request_body_json = {
  "requests": [
    {
      "image": {
        "content": "/9j/7QBEUGhvdG9...image contents...eYxxxzj/Coa6Bax//Z"
      },
      "features": [
        {
          "type": "LABEL_DETECTION",
          "maxResults": 1
        }
      ]
    }
  ]
}
So now we know what a request should look like in the body. If you're already sending the img_src in a POST request, then it's as easy as this:
require 'base64'
require 'curb'

requests = []
for image in array_of_image_urls
  img_binary = Curl::Easy.perform(image).body_str
  image_in_base64 = Base64.encode64(img_binary)
  requests << {
    "image" => { "content" => image_in_base64 },
    "imageContext" => "<OPTIONAL: SEE REFERENCE LINK>",
    "features" => [ { "type" => "LABEL_DETECTION", "maxResults" => 1 } ]
  }
end
# Now just POST requests.to_json with your Authorization and such (You did read the reference, right?)
Play around with the hash formatting and values as required. This is the general idea, which is the best I can give you since the question is quite vague.

Random (403) User Rate Limit Exceeded

I am using the Translate API to translate some texts on my page. These are large HTML-formatted texts, so I had to develop a function that splits them into smaller pieces of fewer than 4,500 characters (including HTML tags) to stay under the limit of 5,000 characters per request; I also had to modify the Google PHP API client to allow sending requests via POST.
I have enabled the paid version of the API in the Google Developers Console, and changed the total quota to 50M characters per day and 500 requests/second/user.
Now I am translating the whole database of texts with a script. It works fine, but at random points I receive the error "(403) User Rate Limit Exceeded", and then I have to wait a few minutes before re-running the script, because once the error appears the API keeps returning it until some time has passed.
I don't know why it keeps returning the error when I don't exceed the number of requests; it's as if there were some maximum number of characters per interval of time or something...
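The splitting step described above can be sketched like this (a simplified, hypothetical version that breaks on whitespace so no piece exceeds the limit; it does not try to keep HTML tags intact, which the real function would need to handle):

```python
def split_text(text, limit=4500):
    """Split text into pieces no longer than `limit` characters,
    breaking on whitespace where possible so words stay whole."""
    pieces = []
    while len(text) > limit:
        # Find the last whitespace before the limit.
        cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit  # no whitespace found: hard cut
        pieces.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        pieces.append(text)
    return pieces

chunks = split_text("word " * 2000, limit=4500)
print(all(len(c) <= 4500 for c in chunks))  # True
```

Each resulting piece can then be sent as one POST request to the Translate API.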
You probably exceeded the quota limits you set before: either the daily billable limit or the limit on request characters per second.
To change the usage limits or request an increase to your quota, do the following:
1. Go to the Google Developers Console "https://console.developers.google.com/".
2. Select a project.
3. On the left sidebar, expand APIs & auth.
4. Click APIs.
5. Click the name of an activated API you're interested in (i.e. the Translate API).
6. Near the top of the info page for the API, click Quota.
If you have the billing enabled, just click Quota and it will take you to the quota page where you can view and change the quota-related settings.
If not, clicking Quota shows information about any free quota and limits that apply to the Translate API.
Google Developer Console has a rate limit of 10 requests per second, regardless of the settings or limits you may have changed.
You may be exceeding this limit.
I was unable to find any documentation around this, but could verify it myself with various API requests.
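Whatever the exact cause, the standard remedy for intermittent 403 rate-limit errors is client-side throttling with exponential backoff between retries. A minimal sketch (the `fake_api` stub stands in for the real Translate call; `RuntimeError` stands in for the 403 response):

```python
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff when it raises a rate-limit error."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for a (403) User Rate Limit Exceeded response
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, 8s, ...
            time.sleep(delay)
    return fn()  # final attempt; let any error propagate

# Demo: an API stub that fails twice, then succeeds.
calls = {"n": 0}
def fake_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("(403) User Rate Limit Exceeded")
    return "translated text"

print(call_with_backoff(fake_api, base_delay=0.01))  # translated text
```

With backoff in place, the script rides out the throttling window instead of hammering the API and prolonging it.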
You control the character limit but not the concurrency.
You are either making more than 500 concurrent requests/second, or you are using another Google API that is hitting its concurrency limit.
The referer header is not set by default, but it is possible to add the headers to a request like so:
$result = $t->translate('Hola Mundo', [
    'restOptions' => [
        'headers' => [
            'referer' => 'https://your-uri.com'
        ]
    ]
]);
If it makes more sense for you to set the referer at the client level (so all requests flowing through the client receive the header), this is possible as well:
$client = new TranslateClient([
    'key' => 'my-api-key',
    'restOptions' => [
        'headers' => [
            'referer' => 'https://your-uri.com'
        ]
    ]
]);
This worked for me!
Reference
In my case, this error was caused by my invalid payment information. Go to the Billing area and make sure everything is OK.

Can I reduce my amount of requests in Google Maps JavaScript API v3?

I call 2 locations. From an XML file I get the longitude and latitude of a location: first the closest cafe, then the closest school.
$.get('https://maps.googleapis.com/maps/api/place/nearbysearch/xml?location=' + home_latitude + ',' + home_longtitude + '&rankby=distance&types=cafe&sensor=false&key=X', function(xml) {
    verander($(xml).find("result:first").find("geometry:first").find("location:first").find("lat").text(),
             $(xml).find("result:first").find("geometry:first").find("location:first").find("lng").text());
});

$.get('https://maps.googleapis.com/maps/api/place/nearbysearch/xml?location=' + home_latitude + ',' + home_longtitude + '&rankby=distance&types=school&sensor=false&key=X', function(xml) {
    verander($(xml).find("result:first").find("geometry:first").find("location:first").find("lat").text(),
             $(xml).find("result:first").find("geometry:first").find("location:first").find("lng").text());
});
But as you can see, I call the function verander(latitude, longitude) twice.
function verander(google_lat, google_lng)
{
var bryantPark = new google.maps.LatLng(google_lat, google_lng);
var panoramaOptions =
{
position:bryantPark,
pov:
{
heading: 185,
pitch:0,
zoom:1,
},
panControl : false,
streetViewControl : false,
mapTypeControl: false,
overviewMapControl: false ,
linksControl: false,
addressControl:false,
zoomControl : false,
}
map = new google.maps.StreetViewPanorama(document.getElementById("map_canvas"), panoramaOptions);
map.setVisible(true);
}
Would it be possible to fetch these two locations in only one request (perhaps via an array)? I know it sounds silly, but I really want to know if there isn't a backdoor to reduce these Google Maps requests.
FTR: This is what a request is for Google:
What constitutes a 'map load' in the context of the usage limits that apply to the Maps API? A single map load occurs when:
a. a map is displayed using the Maps JavaScript API (V2 or V3) when loaded by a web page or application;
b. a Street View panorama is displayed using the Maps JavaScript API (V2 or V3) by a web page or application that has not also displayed a map;
c. a SWF that loads the Maps API for Flash is loaded by a web page or application;
d. a single request is made for a map image from the Static Maps API.
e. a single request is made for a panorama image from the Street View Image API.
So I'm afraid it isn't possible, but hey, suggestions are always welcome!
You're calling the Places API twice and loading Street View twice. So that's four calls, but I think they only count those two Street Views as one if you're loading them on one page. Also, your Places calls are client-side, so they won't count towards your limits.
But to answer your question: there's no loophole to get around the double load, since you want to show the user two Street Views.
What I would do is not load anything until the client asks. Instead, have a couple of call-to-action buttons like <button onclick="loadStreetView('cafe')">Click here to see Nearby Cafe</button>; when clicked, they call the nearby search and load the Street View. Since this happens only on client request, your page loads will never increment the usage counts, e.g. when your site gets crawled by search engines.
More on those usage limits
The Google Places API has different usages then the maps. https://developers.google.com/places/policies#usage_limits
Users with an API key are allowed 1 000 requests per 24 hour period
Users who have verified their identity through the APIs console are allowed 100 000 requests per 24 hour period. A credit card is required for verification, by enabling billing in the console. We ask for your credit card purely to validate your identity. Your card will not be charged for use of the Places API.
100,000 requests a day if you verify yourself. That's pretty decent.
As for Google Maps, https://developers.google.com/maps/faq#usagelimits
You get 25,000 map loads per day and it says.
In order to accommodate sites that experience short term spikes in usage, the usage limits will only take effect for a given site once that site has exceeded the limits for more than 90 consecutive days.
So if you go over a bit now and then, it seems like they won't mind.
p.s. you have an extra comma after zoom:1 and after zoomControl: false; they shouldn't be there and will cause errors in some browsers, like IE. You are also missing a semicolon after var panoramaOptions = { ... } and before map = new.

Get image height and width of image stored on Amazon S3

I plan to store images on Amazon S3. How do I retrieve from Amazon S3:
file size
image height
image width?
You can store image dimensions in user-defined metadata when uploading your images and later read this data using REST API.
Refer to this page for more information about user-defined metadata: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
Getting the file size is possible by reading the Content-Length response header to a simple HEAD request for your file. Maybe your client can help you with this query. More info on the S3 API docs.
Amazon S3 just provides you with storage, (almost) nothing more. Image dimensions are not accessible through the API. You have to get the whole file, and calculate its dimensions yourself. I'd advise you to store this information in the database when uploading the files to S3, if applicable.
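If you do have to download the file, note that for many formats the dimensions sit in the first few bytes, so a ranged GET of just the header is enough. For example, a PNG stores width and height as big-endian 32-bit integers at byte offsets 16 and 20. A sketch (real code should also validate the IHDR chunk, not just the signature):

```python
import struct

def png_dimensions(header_bytes):
    """Extract (width, height) from the first 24 bytes of a PNG file.
    Bytes 0-7: PNG signature; 8-15: IHDR chunk length + type; 16-23: width, height."""
    if header_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    width, height = struct.unpack(">II", header_bytes[16:24])
    return width, height

# Demo with a hand-built 24-byte PNG header for an 800x600 image.
header = b"\x89PNG\r\n\x1a\n" + struct.pack(">I", 13) + b"IHDR" + struct.pack(">II", 800, 600)
print(png_dimensions(header))  # (800, 600)
```

The same header-peeking idea works for JPEG and GIF with different offsets, and a Range: bytes=0-23 request to S3 avoids transferring the whole image.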
On Node, it can be really easy using image-size coupled with node-fetch.
const fetch = require('node-fetch');
const imageSize = require('image-size');

async function getSize(imageUrl) {
  const response = await fetch(imageUrl);
  const buffer = await response.buffer();
  return imageSize(buffer); // { width, height, type }
}