YouTube Data API - Number of Results - Maven

I am trying to get videos from YouTube in two different ways.
a) First, using the YouTube Google API client library, following the guidelines and sample code from here: https://developers.google.com/youtube/v3/code_samples/java#search_by_keyword
Nevertheless, since I am implementing this in a Mavenized project, I have difficulty finding the dependency for 'com.google.api.services.samples.youtube.cmdline.Auth', which is required for the following block of code:
try {
    youtube = new YouTube.Builder(Auth.HTTP_TRANSPORT, Auth.JSON_FACTORY, new HttpRequestInitializer() {
        public void initialize(HttpRequest request) throws IOException {
        }
    }).setApplicationName("youtube-cmdline-search-sample").build();
b) Second, I simply send a GET request to YouTube like this:
https://www.googleapis.com/youtube/v3/search?part=snippet&q=madonna&type=video&key={API_KEY}
but I am able to receive only 5 results, although I've read in several related Stack Overflow questions that I can receive up to 50 videos. This does not change even if I set the "max-results" parameter.
Could anyone help me to deal with these issues? Thank you in advance.

In your second approach, write maxResults=50 as a parameter instead of max-results=50.
Use the YouTube Data API v3 API Explorer to get familiar with the parameters:
https://developers.google.com/apis-explorer/#s/youtube/v3/
https://www.googleapis.com/youtube/v3/search?part=snippet&q=madonna&maxResults=50&type=video&key={API_KEY}
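For your first approach: as far as I can tell, the Auth class lives only in the samples repository and is not published as a Maven artifact; it merely exposes a NetHttpTransport and a JacksonFactory. Below is a rough sketch that builds the client without it and also applies maxResults, assuming the google-api-services-youtube artifact (and the google-api-client libraries it depends on) is on your classpath; YOUR_API_KEY is a placeholder.
import java.io.IOException;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.youtube.YouTube;
import com.google.api.services.youtube.model.SearchListResponse;

public class YouTubeSearchSketch {
    public static void main(String[] args) throws IOException {
        // Plain transport/JSON factory instances replace the samples' Auth helper.
        YouTube youtube = new YouTube.Builder(new NetHttpTransport(), new JacksonFactory(),
                new HttpRequestInitializer() {
                    public void initialize(HttpRequest request) throws IOException {
                    }
                }).setApplicationName("youtube-cmdline-search-sample").build();

        YouTube.Search.List search = youtube.search().list("snippet");
        search.setKey("YOUR_API_KEY"); // placeholder for your API key
        search.setQ("madonna");
        search.setType("video");
        search.setMaxResults(50L);     // camelCase maxResults, up to 50 results per page
        SearchListResponse response = search.execute();
        System.out.println(response.getItems().size() + " results");
    }
}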

Related

LiveQuery does not work, if there is no ParseConnectivityProvider provided

Racking my brains over this.
I cannot get past this issue, my code is producing this error:
LiveQuery does not work, if there is no ParseConnectivityProvider provided.
I tried playing around with the liveQueryURL and had no luck. The Flutter docs have no concrete example of how to build this URL from the server. I assume from the JavaScript video and docs that it is the custom subdomain I created, such as customdomain.b4a.io, which makes the final URL 'wss://customdomain.b4a.io'.
I looked into the connectivityProvider: argument for Parse().initialize but found nothing concrete on implementing it.
This is a dart demo project only. Any help or ideas much appreciated!
EDIT: This post does not solve my problem at all. It's also very old.
Is it possible this isn't working because this is a dart program rather than flutter? Wouldn't imagine this being the case...
Code:
import 'package:parse_server_sdk/parse_server_sdk.dart';

Future<void> main(List<String> arguments) async {
  final keyApplicationId = 'XXX';
  final keyClientKey = 'XXX';
  final keyParseServerUrl = 'https://parseapi.back4app.com';
  final liveQueryURL = 'wss://XXX.b4a.io';

  await Parse().initialize(
    keyApplicationId,
    keyParseServerUrl,
    clientKey: keyClientKey,
    liveQueryUrl: liveQueryURL,
    autoSendSessionId: true,
    debug: true,
  );

  final LiveQuery liveQuery = LiveQuery();
  QueryBuilder<ParseObject> query = QueryBuilder<ParseObject>(ParseObject('Color'));

  Subscription subscription = await liveQuery.client.subscribe(query);
  subscription.on(LiveQueryEvent.create, (value) {
    print('Object: ' + value['color']);
    print((value as ParseObject).get('color'));
  });
}
From https://github.com/parse-community/Parse-SDK-Flutter/issues/543#issuecomment-912783019
please provide a custom ParseConnectivityProvider (connectivityProvider in Parse().initialize).
If you can assume your device always has internet access, the implementation can be as simple as this:
class CustomParseConnectivityProvider extends ParseConnectivityProvider {
  @override
  Future<ParseConnectivityResult> checkConnectivity() => Future.value(ParseConnectivityResult.wifi);
  @override
  Stream<ParseConnectivityResult> get connectivityStream => Stream<ParseConnectivityResult>.empty();
}
(Not tested and typed on a smartphone.)
Unfortunately, Parse live query in Flutter does not work with an https server URL. I faced this problem before and it made me crazy! What I did was, on the backend side of the Parse server, provide both http and https servers, and on the client side in Flutter just connect to the http server for live queries!
And that works fine 😉

Convert video to text (transcript) with Google Cloud Speech-to-Text in a Rails application

I am working on a web app in Ruby on Rails.
I want to get subtitles for pre-recorded videos and also for new videos that will be recorded.
I have implemented the gem 'google-cloud-speech'.
But now I'm not able to get text for my video. The Google Cloud API docs suggest adding a model, but when I add model: 'video' to the configuration, it says there is no such field model in the initialization map entry.
My code without adding model is as per below.
speech_client = Google::Cloud::Speech.new
config = {
  encoding:          :LINEAR16,
  sample_rate_hertz: 16000,
  language_code:     "en-US"
}
audio = { uri: @uri }
response = speech_client.recognize config, audio
which gives me an error message like the one below.
Google::Gax::RetryError: GaxError Exception occurred in retry method that was not classified as transient, caused by 3:Request contains an invalid argument.
from /Users/hiren/.rvm/gems/ruby-2.5.1#Snip/gems/google-gax-1.3.0/lib/google/gax/api_callable.rb:369:in `rescue in block in retryable'
Any help is appreciated.
Thanks
Regarding the model issue, this might be because the video model is not available yet in the Ruby V1 API, as this feature is part of the v1p1beta1 version.
Regarding your code issue, I just ran the example shown here successfully. It would be helpful if you attached your full code, as the documented code works well.

Ruby neo4j-core mass processing data

Has anyone used Ruby neo4j-core to mass process data? Specifically, I am looking at taking in about 500k lines from a relational database and insert them via something like:
Neo4j::Session.current.transaction.query
  .merge(m: { Person: { token: person_token } })
  .merge(i: { IpAddress: { address: ip, country: country,
                           city: city, state: state } })
  .merge(a: { UserToken: { token: token } })
  .merge(r: { Referrer: { url: referrer } })
  .merge(c: { Country: { name: country } })
  .break # This will make sure the query is not reordered
  .create_unique("m-[:ACCESSED_FROM]->i")
  .create_unique("m-[:ACCESSED_FROM]->a")
  .create_unique("m-[:ACCESSED_FROM]->r")
  .create_unique("a-[:ACCESSED_FROM]->i")
  .create_unique("a-[:ACCESSED_FROM]->r")
  .create_unique("i-[:IN]->c")
  .exec
However, doing this locally takes hours on hundreds of thousands of events. So far, I have attempted the following:
Wrapping Neo4j::Connection in a ConnectionPool and multi-threading it - I did not see much speed improvements here.
Doing tx = Neo4j::Transaction.new and tx.close every 1000 events processed - looking at a TCP dump, I am not sure this actually does what I expected. It does the exact same requests, with the same frequency, but just has a different response.
With Neo4j::Transaction I see a POST every time the .query(...).exec is called:
Request: {"statements":[{"statement":"MERGE (m:Person{token: {m_Person_token}}) ...{"m_Person_token":"AAA"...,"resultDataContents":["row","REST"]}]}
Response: {"commit":"http://localhost:7474/db/data/transaction/868/commit","results":[{"columns":[],"data":[]}],"transaction":{"expires":"Tue, 10 May 2016 23:19:25 +0000"},"errors":[]}
With Non-Neo4j::Transactions I see the same POST frequency, but this data:
Request: {"query":"MERGE (m:Person{token: {m_Person_token}}) ... {"m_Person_token":"AAA"..."c_Country_name":"United States"}}
Response: {"columns" : [ ], "data" : [ ]}
(Not sure if that is intended behavior, but it looks like less data is transmitted via the non-Neo4j::Transaction technique - it is highly possible I am doing something incorrectly.)
Some other ideas I had:
* Post process into a CSV, SCP up and then use the neo4j-import command line utility (although, that seems kinda hacky).
* Combine both of the techniques I tried above.
Has anyone else run into this / have other suggestions?
Ok!
So you're absolutely right. With neo4j-core you can only send one query at a time. With transactions all you're really getting is the ability to rollback. Neo4j does have a nice HTTP JSON API for transactions which allows you to send multiple Cypher requests in the same HTTP request, but neo4j-core doesn't currently support that (I'm working on a refactor for the next major version which will allow this). So there are a number of options:
You can submit your requests via raw HTTP JSON to the APIs. If you still want to use the Query API you can use the to_cypher and merge_params methods to get the cypher and params for that (merge_params is a private method currently, so you'd need to send(:merge_params))
You can load via CSV as you said. You can either:
* use the neo4j-import command, which allows you to import very fast but requires you to put your CSV in a specific format, requires that you be creating a DB from scratch, and requires that you create indexes/constraints after the fact
* use the LOAD CSV command, which isn't as fast, but is still pretty fast.
You can use the neo4apis gem to build a DSL to import your data. The gem will create Cypher queries under the covers and will batch them for performance. See examples of the gem in use via neo4apis-twitter and neo4apis-github
If you are a bit more adventurous, you can use the new Cypher API in neo4j-core via the new_cypher_api branch on the GitHub repo. The README in that branch has some documentation on the API, but also feel free to drop by our Gitter chat room if you have questions on this or anything else.
If you're implementing a solution which is going to make queries like above where you have multiple MERGE clauses, you'll probably want to profile your queries to make sure that you are avoiding the eager (that post is a bit old and newer versions of Neo4j have alleviated some of the need for care, but you can still look for Eager in your PROFILE)
Also worth a look: Max De Marzi's post on Scaling Cypher Writes

Twitter/Spring - post tweet in a @Scheduled job

I have already got some "Spring-scheduled" tasks up and running successfully.
What I would like now is to post some specific tweets to a known Twitter account (and already configured on Twitter side) based on some event recurrence.
However, all I see in the OAuth process, esp. in order to get an access token, is that it requires some callback URL before being able to do anything.
I might be mistaken but this seems hard to integrate in the context of a scheduled task.
Isn't there any other way to achieve tweeting?
In conjunction with Spring Scheduling features, I would use Twitter4j to post a tweet in a scheduled job.
Here is a sample:
@Component
public class TwitterSender {
    // Tweet text; populate it from your own event data.
    private String latestStatus = "Hello from a scheduled job!";

    @Scheduled(fixedRate = 10000)
    public void sendTweet() {
        try {
            Twitter twitter = TwitterFactory.getSingleton();
            Status status = twitter.updateStatus(latestStatus);
            System.out.println("Status updated to: " + status.getText() + ".");
        } catch (TwitterException e) {
            e.printStackTrace();
        }
    }
}
If you need more information you can check the test case for sending update status with Twitter4j. Or you can just dive and see the source.
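On the callback URL concern: since the target account is already configured on the Twitter side, you can generate an access token and secret for it once in the Twitter developer console and hand them to Twitter4j directly, so no OAuth callback is needed inside the scheduled job. A rough sketch (the credential strings are placeholders):
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class TwitterClientFactory {
    public static Twitter buildTwitter() {
        // Keys and tokens are generated once in the developer console;
        // no callback URL is involved because the access token is pre-issued.
        ConfigurationBuilder cb = new ConfigurationBuilder()
                .setOAuthConsumerKey("CONSUMER_KEY")
                .setOAuthConsumerSecret("CONSUMER_SECRET")
                .setOAuthAccessToken("ACCESS_TOKEN")
                .setOAuthAccessTokenSecret("ACCESS_TOKEN_SECRET");
        return new TwitterFactory(cb.build()).getInstance();
    }
}
The TwitterFactory.getSingleton() call used in the snippet above reads the same four values from a twitter4j.properties file on the classpath, which works just as well.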
It may be a bit of a leap in terms of learning curve, but have you looked at spring-integration's twitter:outbound-channel-adapter?
<twitter:outbound-channel-adapter twitter-template="twitterTemplate"
channel="twitterChannel"/>
http://static.springsource.org/spring-integration/docs/latest-ga/reference/html/twitter.html
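If you go the spring-integration route, the twitterTemplate referenced in that XML is, as far as I know, a Spring Social TwitterTemplate; a rough Java-config sketch of the bean, assuming spring-social-twitter is on the classpath and with placeholder credentials, could look like this:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.social.twitter.api.impl.TwitterTemplate;

@Configuration
public class TwitterConfig {
    // Pre-issued credentials from the Twitter developer console (placeholders),
    // so again no OAuth callback is needed at runtime.
    @Bean
    public TwitterTemplate twitterTemplate() {
        return new TwitterTemplate("CONSUMER_KEY", "CONSUMER_SECRET",
                                   "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET");
    }
}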

Protobuf and Pipeline questions

Is there any way to set more than one protobufEncoder/protobufDecoder?
Let me explain my problem. I have a command which will be sent from the client to the server. The server gets the command and does some work according to the command. Now the response ("answer") of the server could be, for example, the length of a string or the square of an integer (the order is not fixed).
My question is now: what can I do so that the client can receive at least two different responses from the server, both "encoded" with Protobuf? And in turn, what do I need to do so that the server can send at least two different responses, both "encoded" and "decoded" with Protobuf?
Setting two different protobufEncoder/protobufDecoder pairs does not seem possible with Netty's ProtobufDecoder. Looking at the Netty example below, the decoder can only be given a single message prototype.
LocalTimeServerPipelineFactory:
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = pipeline();
    p.addLast("frameDecoder", new ProtobufVarint32FrameDecoder());
    p.addLast("protobufDecoder", new ProtobufDecoder(LocalTimeProtocol.Locations.getDefaultInstance()));
    p.addLast("frameEncoder", new ProtobufVarint32LengthFieldPrepender());
    p.addLast("protobufEncoder", new ProtobufEncoder());
    p.addLast("handler", new LocalTimeServerHandler());
    return p;
}
thanks in advance and best regards.
quartz
You can adjust the ChannelPipeline on the fly. This should help you there. Just add/remove the right one when you need it.
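For example, a handler can swap the decoder's message prototype in place once it knows which message type comes next. A rough sketch against the Netty 3 API used in the example above; LocalTimeProtocol.LocalTimes stands in for whatever second message type your .proto defines, and the import of that generated class is omitted since it depends on your package layout:
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder;

public class DecoderSwitchingHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        // Replace the handler registered as "protobufDecoder" with one that
        // expects the other message type, keeping the same handler name.
        ChannelPipeline p = ctx.getPipeline();
        p.replace("protobufDecoder", "protobufDecoder",
                new ProtobufDecoder(LocalTimeProtocol.LocalTimes.getDefaultInstance()));
        ctx.sendUpstream(e);
    }
}
An alternative that avoids touching the pipeline at all is to wrap both response types in a single outer protobuf message with optional fields (or, in newer protobuf versions, a oneof) and keep one ProtobufDecoder for that wrapper type.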
