Limitations of the Google Speech-to-Text model adaptation APIs (Ruby)

I want to use Google model adaptation to improve speech-to-text accuracy, but these APIs are not well documented anywhere.
https://cloud.google.com/speech-to-text/docs/reference/rest/v1p1beta1/projects.locations.customClasses
I tried to create a custom class with 200,000 values; above that count, the API gives an error about the size of the payload, not about any entry-count limit.
Where can I find proper information/details about this API and its restrictions?
I am using the Ruby client library to create custom classes.
Code to create the custom class:
cname = "TestClass"
items = 3_00_000.times.map{|e| Google::Cloud::Speech::V1p1beta1::CustomClass::ClassItem.new(value: Faker::Name.name) };
_class = Google::Cloud::Speech::V1p1beta1::CustomClass.new(name: cname, items: items);
request = Google::Cloud::Speech::V1p1beta1::CreateCustomClassRequest.new({custom_class: _class, parent: "projects/<projectID>/locations/global", custom_class_id: cname})
_klass = client.create_custom_class request
I get the following error, which suggests the real limit is the 10,485,760-byte request payload size rather than an entry count:
Google::Cloud::InvalidArgumentError: 3:Request payload size exceeds the limit: 10485760 bytes.. debug_error_string:{"created":"@1628230030.306827000","description":"Error received from peer ipv4:142.251.42.10:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"Request payload size exceeds the limit: 10485760 bytes.","grpc_status":3}
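The 10485760 in the error is the standard 10 MiB gRPC request-size cap, so one practical guard is to serialize the CustomClass locally and check its size before calling the API. A minimal sketch, assuming the 10 MiB constant taken from the error message (the trim-the-items strategy is my suggestion, not documented API behavior):

# Assumed cap, taken from the error message: 10 MiB gRPC request limit.
MAX_PAYLOAD_BYTES = 10 * 1024 * 1024

# Protobuf-serialize the CustomClass to measure the request body locally.
payload = Google::Cloud::Speech::V1p1beta1::CustomClass.encode(_class)

if payload.bytesize >= MAX_PAYLOAD_BYTES
  warn "CustomClass serializes to #{payload.bytesize} bytes; trim `items` before create_custom_class"
else
  _klass = client.create_custom_class request
end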

Here's all the publicly available documentation for the API:
https://cloud.google.com/speech/docs/
https://cloud.google.com/speech-to-text/docs/release-notes
https://cloud.google.com/speech-to-text/pricing
https://cloud.google.com/speech-to-text/quotas
https://cloud.google.com/speech-to-text/sla
https://cloud.google.com/speech-to-text/docs/support#troubleshooting
https://cloud.google.com/speech-to-text/docs/best-practices
https://cloud.google.com/speech-to-text/docs/encoding
https://cloud.google.com/speech-to-text/docs/languages
https://cloud.google.com/speech-to-text/docs/apis
https://cloud.google.com/speech-to-text/docs/concepts
https://cloud.google.com/speech-to-text/docs/how-to
https://cloud.google.com/speech/docs/tutorials

Related

How to use entrezpy and Biopython Entrez libraries to access ClinVar data from genomic position of variant

[Disclaimer: I published this question three weeks ago on Biostars with no answers yet. I would really like some ideas/discussion toward a solution, so I am also posting it here.
Biostars post link: https://www.biostars.org/p/447413/]
For one of my PhD projects, I would like to access all variants in the ClinVar database that share the genomic position of the variant in each row of an input GSvar file. The language constraint is Python.
Up to now I have used the entrezpy module entrezpy.esearch.esearcher. For more on entrezpy, see: https://entrezpy.readthedocs.io/en/master/
From the entrezpy docs I followed this guide to access UIDs using the genomic position of a variant: https://entrezpy.readthedocs.io/en/master/tutorials/esearch/esearch_uids.html — in code:
import entrezpy.esearch.esearcher

# First get UIDs for ClinVar records at the same position.
# Credits: https://entrezpy.readthedocs.io/en/master/tutorials/esearch/esearch_uids.html
chrom = variants["chr"].split("chr")[1]  # renamed from chr, which shadows the builtin
start, end = str(variants["start"]), str(variants["end"])
es = entrezpy.esearch.esearcher.Esearcher('esearcher', self.entrez_email)
genomic_pos = chrom + "[chr]" + " AND " + start + ":" + end  # + "[chrpos37]"
entrez_query = es.inquire(
    {'db': 'clinvar',
     'term': genomic_pos,
     'retmax': 100000,
     'retstart': 0,
     'rettype': 'uilist'})  # 'usehistory': False
entrez_uids = entrez_query.get_result().uids
Then I used Entrez from Biopython to fetch the available ClinVar records:
import xml.etree.ElementTree as ET
from Bio import Entrez

# Process the VariationArchive record of each UID.
handle = Entrez.efetch(db='clinvar', id=current_entrez_uids, rettype='vcv')
clinvar_records = {}
tree = ET.parse(handle)
root = tree.getroot()
This approach works. However, it has two main drawbacks:
1. entrezpy fills up my log file by recording every interaction with Entrez, which makes the log too big to be read by the hospital collaborator, who is the variant curator.
2. The entrezpy call entrez_query.get_result().uids returns all UIDs retrieved so far across all requests (one request per variant in the GSvar file), so retrieval is space-inefficient: the entrez_uids list quickly grows as I process all variants of a GSvar file. The simple solution I have implemented is to check which UIDs are new in the current request and keep only those for Entrez.efetch(). However, I still need to keep all UIDs seen for previous variants in order to know which UIDs are new. I do this in code by:
# The first snippet's lines go here.
entrez_uids = entrez_query.get_result().uids
# Keep only the UIDs that no previous request has returned.
current_entrez_uids = [uid for uid in entrez_uids if uid not in self.all_entrez_uids_gsvar_file]
self.all_entrez_uids_gsvar_file += current_entrez_uids
Does anyone have suggestions on how to address these two drawbacks?
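One direction for both drawbacks, sketched below: raising the global logging threshold cuts the chatter (assuming entrezpy logs through Python's standard logging module), and a set makes the seen-UID membership test O(1) instead of a scan of an ever-growing list.

import logging

# Drawback 1: suppress INFO/DEBUG records process-wide; only WARNING and
# above reach the log file (standard-library behaviour, library-agnostic).
logging.disable(logging.INFO)

# Drawback 2: a set keeps every UID seen so far with O(1) lookups.
seen_uids = set()

entrez_uids = entrez_query.get_result().uids
current_entrez_uids = [uid for uid in entrez_uids if uid not in seen_uids]
seen_uids.update(current_entrez_uids)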

How to query google analytics api using the google api ruby gem?

The documentation of the google-api-ruby-client lacks practical examples; it only documents the classes and methods, so it's very hard to work out how to use the gem in real life. For example, I'm trying to obtain all purchases from Enhanced Ecommerce to see where they came from (Acquisition Channel or Channel Grouping), but I'm only interested in transactions that took 5 sessions to convert (our unconvinced clients).
First you will need your Analytics view_id; it can be found at the end of the URL, after the letter p.
Then you need to export the path to the credentials. In your terminal (note: no spaces around the =):
export GOOGLE_APPLICATION_CREDENTIALS='folder/yourproject-a91723dsa8974.json'
For more info about credentials, see the googleauth gem documentation.
After setting this, you can query the API like this:
require 'googleauth'
require 'google/apis/analyticsreporting_v4'
require 'active_support/time' # provides 10.days.ago outside Rails

scopes = ['https://www.googleapis.com/auth/analytics']
date_from = 10.days.ago
date_to = 2.days.ago

authorization = Google::Auth.get_application_default(scopes)
analytics = Google::Apis::AnalyticsreportingV4::AnalyticsReportingService.new
analytics.authorization = authorization

view_id = '189761131'
date_range = Google::Apis::AnalyticsreportingV4::DateRange.new(start_date: date_from.strftime('%Y-%m-%d'), end_date: date_to.strftime('%Y-%m-%d'))
metric = Google::Apis::AnalyticsreportingV4::Metric.new(expression: 'ga:transactions')
transaction_id_dimension = Google::Apis::AnalyticsreportingV4::Dimension.new(name: 'ga:transactionID')
acquisition_dimension = Google::Apis::AnalyticsreportingV4::Dimension.new(name: 'ga:channelGrouping')
filters = 'ga:sessionsToTransaction==5'

request = Google::Apis::AnalyticsreportingV4::GetReportsRequest.new(
  report_requests: [Google::Apis::AnalyticsreportingV4::ReportRequest.new(
    view_id: view_id,
    metrics: [metric],
    dimensions: [transaction_id_dimension, acquisition_dimension],
    date_ranges: [date_range],
    filters_expression: filters
  )]
)
response = analytics.batch_get_reports(request)

response.reports.first.data.rows.each do |row|
  dimensions = row.dimensions
  puts "TransactionID: #{dimensions[0]} - Channel: #{dimensions[1]}"
end
Note the filters_expression: filters argument, where the filters variable takes a form such as ga:medium==cpc,ga:medium==organic;ga:source==bing,ga:source==google.
Commas (,) mean OR and semicolons (;) mean AND, with OR taking precedence over AND.
You can use the Query Explorer to play around with filters; here is the filters documentation.
If the report contains more than 1,000 rows (the default maximum), a next_page_token attribute will appear:
response.reports.first.next_page_token
=> "1000"
You will have to pass that value as the page_token of the next ReportRequest:
next_request = Google::Apis::AnalyticsreportingV4::GetReportsRequest.new(
  report_requests: [Google::Apis::AnalyticsreportingV4::ReportRequest.new(
    view_id: view_id,
    metrics: [metric],
    dimensions: [transaction_id_dimension, acquisition_dimension],
    date_ranges: [date_range],
    filters_expression: filters,
    page_token: "1000"
  )]
)
Repeat this until next_response.reports.first.next_page_token returns nil.
Alternatively, you can change the default page size of the report request by adding, for example, page_size: 10_000.
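Putting the two requests together, the whole report can be paged through with a loop along these lines (my own sketch; build_request is just a local helper, not part of the gem):

# Hypothetical helper: rebuilds the GetReportsRequest shown above,
# attaching the given page_token (nil on the first call).
build_request = lambda do |page_token|
  Google::Apis::AnalyticsreportingV4::GetReportsRequest.new(
    report_requests: [Google::Apis::AnalyticsreportingV4::ReportRequest.new(
      view_id: view_id,
      metrics: [metric],
      dimensions: [transaction_id_dimension, acquisition_dimension],
      date_ranges: [date_range],
      filters_expression: filters,
      page_token: page_token
    )]
  )
end

all_rows = []
page_token = nil
loop do
  report = analytics.batch_get_reports(build_request.call(page_token)).reports.first
  all_rows.concat(report.data.rows || [])
  page_token = report.next_page_token
  break if page_token.nil? # no more pages left
end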

What is the max size for uploading a RingCentral custom greeting audio file?

When calling the RingCentral Create Custom Greeting API:
POST /restapi/v1.0/account/{accountId}/extension/{extensionId}/greeting
I sometimes get the following error with larger MP3 and WAV media files. Is there an official size limit?
HTTP/1.1 413 Request Entity Too Large
{
  "errorCode": "AGW-413",
  "message": "Request entity too large",
  "errors": []
}
There's no limit specified in the API Reference or blog article:
API Reference:
https://developers.ringcentral.com/api-docs/latest/index.html#!#RefCreateUserCustomGreeting
I'm using the ringcentral_sdk gem with the following code:
req = RingCentralSdk::REST::Request::Multipart.new(
  method: 'post',
  url: 'account/~/extension/~/greeting'
).
add_json({ type: 'Voicemail', answeringRule: { id: '11111111' } }).
add_file(file)

res = client.send_request req
puts res.status
puts MultiJson.encode(res.body, pretty: true)
There is more in this blog article:
https://medium.com/ringcentral-developers/updating-ringcentral-user-extension-greetings-using-the-rest-api-and-ruby-db325022c6ee
I was informed there is currently a 1MB file size limit on this API.
I tested this with 0.4 MB and 2.5 MB WAV files from the site below and confirmed that the smaller file worked while the larger one produced this error:
https://www.mediacollege.com/audio/tone/download/
Other useful test files seem to be available here:
https://www.audiocheck.net/testtones_highdefinitionaudio.php
I wrote a Python sample: https://github.com/tylerlong/ringcentral-python/blob/master/test/test_multipart_mixed.py
I can also confirm that if the audio file is too large, the operation fails and you get the message "Request entity too large".
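Given the reported cap, a simple client-side guard avoids the failed round trip. A minimal sketch, assuming the informally reported 1 MB limit (it is not in the API Reference):

# Informally reported RingCentral greeting size cap, in bytes.
MAX_GREETING_BYTES = 1_000_000

if File.size(file) > MAX_GREETING_BYTES
  warn "#{file} is #{File.size(file)} bytes; greetings above ~1 MB " \
       "are rejected with HTTP 413 (AGW-413)"
else
  res = client.send_request req
  puts res.status
end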

Under libwebsockets, how to receive messages bigger than 4096 bytes on the server side?

I have created a WebSocket server with the libwebsockets library, and the protocol list looks like this:
/* List of supported protocols and callbacks. */
static struct libwebsocket_protocols protocols[] = {
    { "plain-websocket-protocol", /* Custom name. */
      callback_websocket,
      sizeof(struct websocket_client_real),
      0 },
    { NULL, NULL, 0, 0 } /* Terminator. */
};
When I use "html + javascript + chromium browser" as client to send websocket message bigger than 4096 bytes, the websocket server will receive the LWS_CALLBACK_RECEIVE callback more than one time, one message is splited to two or more, the max receive size is 4096.
How can I receive unlimited size websocket message on server side?
The lws_protocols struct now has an rx_buffer_size member, so you should be able to configure the 4096-byte size with it.
See the API doc for details: https://libwebsockets.org/libwebsockets-api-doc.html
This answer addresses the question:
How can I receive WebSocket messages of unlimited size on the server side?
It's relatively simple, actually, and you don't need to change your rx_buffer_size as suggested above.
Check out the function size_t lws_remaining_packet_payload(struct lws *wsi), documented here: https://libwebsockets.org/libwebsockets-api-doc.html
You can use this function in your LWS_CALLBACK_RECEIVE callback handler to determine whether the data your callback was given finishes a complete WebSocket "packet" (i.e., message). If this function returns nonzero, more data is coming for this packet in a future callback, so your application should buffer the data until lws_remaining_packet_payload(wsi) returns 0. At that point you have read a complete message and can handle it as appropriate.
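To make the buffering concrete, here is a minimal sketch of such a receive handler (the per-connection buf and len fields and handle_message() are hypothetical names for your own buffering code, not libwebsockets APIs):

case LWS_CALLBACK_RECEIVE: {
    struct websocket_client_real *session = user;

    /* Append this fragment to the connection's own buffer
     * (allocation and bounds checking omitted for brevity). */
    memcpy(session->buf + session->len, in, len);
    session->len += len;

    /* lws_remaining_packet_payload() == 0 means no more data is pending
     * for this frame; lws_is_final_fragment() covers multi-frame messages. */
    if (lws_remaining_packet_payload(wsi) == 0 && lws_is_final_fragment(wsi)) {
        handle_message(session->buf, session->len); /* hypothetical handler */
        session->len = 0; /* reset for the next message */
    }
    break;
}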

Importing binary data to parse.com

I'm trying to import data into parse.com so I can test my application (I'm new to Parse and I've never used JSON before).
Can you please give me an example of a JSON file that I can use to import binary files (images)?
NB: I'm trying to upload my data in bulk directly from the Data Browser. Here is a screencap: i.stack.imgur.com/bw9b4.png
In the Parse docs, I think two sections could help you out, depending on whether you want to use the REST API or the Android SDK:
REST API: see the section on POST, covering files that can be uploaded to Parse with a REST POST.
SDK: see the section on "Files".
The code for REST includes the following: use an HttpClient implementation that provides a ByteArrayEntity class (or something similar), map your image into the ByteArrayEntity, and POST it with the correct MIME-type headers in HttpClient:
case POST:
    HttpPost httpPost = new HttpPost(url); // url ends with "audio" or "pic"
    httpPost.setProtocolVersion(new ProtocolVersion("HTTP", 1, 1));
    httpPost.setConfig(this.config);
    if (mfile.canRead()) {
        FileInputStream fis = new FileInputStream(mfile);
        // Get the file's size and then map it into memory.
        FileChannel fc = fis.getChannel();
        int sz = (int) fc.size();
        MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
        byte[] data2 = new byte[bb.remaining()];
        bb.get(data2);
        ByteArrayEntity reqEntity = new ByteArrayEntity(data2);
        httpPost.setEntity(reqEntity);
        fis.close();
    }
    ...
    httpPost.addHeader("Content-Type", "image/*");
Finally, post a Runnable to execute the HTTP request (pseudocode omitted here).
The only binary data parse.com accepts is images. For other cases, such as files or streams, the most suitable solution is to store a link to the binary data kept in a dedicated external store for that type of information.
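Following that suggestion, a bulk-import JSON file would then carry URL strings instead of raw bytes. A hypothetical example (the "results" wrapper reflects the Parse import format as I understand it, and the column names are made up):

{
  "results": [
    { "name": "photo1", "imageUrl": "https://example.com/storage/photo1.png" },
    { "name": "photo2", "imageUrl": "https://example.com/storage/photo2.png" }
  ]
}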
