I found a plug-in called YouMax which embeds your YouTube channel into your website. The only problem I'm having with this plug-in is changing the number of video results that are fetched: the default is 25 videos, and I want to change this to another value like 12 or 24.
http://www.codehandling.com/2013/03/youmax-20-complete-youtube-channel-on.html?m=1
There seem to be three sections to this plug-in's results: Featured, Uploads, and Playlists.
I edited the youmax.min.js file for the Featured section because it is the first results page that loads. My edit was very small. Essentially, I added the following:
&start-index=1&max-results=2
at the end of the string variable apiFeaturedPlaylistVideosURL.
This variable is located inside the function getFeaturedVideos(playlistId).
You can change the value from 2 to 12 or whatever you want, and that will be the maximum number of results you get back from YouTube.
Also, you can add this same argument (&start-index=1&max-results=2) to the Uploads and Playlists functions in youmax.min.js if that's where you want to limit your results instead of (or in addition to) the Featured section.
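To give an idea of where the parameters end up: I don't have the exact base URL youmax builds in front of me, so treat this purely as an illustration (PLAYLIST_ID stands for whatever ID the plugin fills in), but the final feed URL it requests should look roughly like this after the edit:

http://gdata.youtube.com/feeds/api/playlists/PLAYLIST_ID?v=2&start-index=1&max-results=2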
I created a copy of my edited youmax.min.js file in jsFiddle; my edit is on line 152 there. Download it and give it a try. I hope it helps:
http://jsfiddle.net/wCKKU/
Youmax 2.0 (free version) has been upgraded and now has the maxResults option built in - http://demos.codehandling.com/youmax/home.html
You already get a "maxResults" option with the plugin, as well as "Load More" functionality.
Regarding the timestamps, you can try the PRO version, which has options to display relative timestamps (2 hours ago) or fixed timestamps (23 March 2016).
Cheers :)
I'm trying to import a search result from Google into my spreadsheet. I've had success with Wikipedia pages, but for some reason Google Search isn't working correctly (giving a "could not fetch URL" error). I'm sure the problem is somewhere in my URL or XPath, but I've been trying a variety of things and I'm lost. Here is what I've got:
=IMPORTXML("https://www.google.com/search?q=dom+fera+easy+thing+released", "//div[#class='Z0LcW XcVN5d']")
I'm linking the spreadsheet below as view-only for reference as well. Ultimately the goal is to be able to webscrape release years of songs. I'd appreciate any help!
https://docs.google.com/spreadsheets/d/1bt8MJ23nfGAv6ianaR-sd7DM5DNn98p7zWSG1UzBlEY/edit?usp=sharing
AFAIK, you can't parse results from Google Search in Google Sheets.
Using Discogs, MusicBrainz, AllMusic, etc. to get the release dates could be useful.
But it seems some of your groups are little known, so you can use YouTube to fetch the dates.
Note: we assume the year of publication on YouTube corresponds to the year of release.
Of course, that's not 100% true. For example, artists can publish their video months after the release, or publish nothing on YouTube at all.
So this method will work for a wide range of songs, but not ALL songs. With recent bands and songs, it should be OK.
To do this you can use the YouTube API or IMPORTXML formulas. In both cases, we always take the first result (relevance order) of the search engine as the source.
You need an API key and an ImportJSON script (credits to Brad Jasper) to use the API method. Once you have installed the script and activated your API key, you can paste this in cell B3:
="https://www.googleapis.com/youtube/v3/search?key={yourAPIKey}&part=snippet&type=video&filter=items®ionCode=FR&q="&ENCODEURL(A3)
We generate the URL to query from the content you input in column A.
We use "regionCode=FR" since some songs are not available in the US ("i need you FMLYBND"). That way we get the correct release date.
In C3, you can paste:
=LEFT(QUERY(ImportJSON(B3);"SELECT Col11 LIMIT 1 label Col11''";1);4)
We parse the JSON, select the column of interest, the line of interest, then we clean the result.
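If you'd rather do the same lookup outside of Sheets, here is a minimal Python sketch of the API method, assuming the YouTube Data API v3 search endpoint and the requests library (the key and the query are placeholders):

import requests

API_KEY = "yourAPIKey"  # placeholder: use your own key

def release_year(query, region="FR"):
    # Ask the YouTube Data API v3 search endpoint for the first (most relevant) video hit.
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={"key": API_KEY, "part": "snippet", "type": "video",
                "regionCode": region, "q": query, "maxResults": 1},
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # snippet.publishedAt looks like "2014-05-20T16:30:01Z"; keep only the year.
    return items[0]["snippet"]["publishedAt"][:4] if items else None

print(release_year("dom fera easy thing released"))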
With the IMPORTXML method, you can paste this in E3:
="https://www.youtube.com"&IMPORTXML("https://www.youtube.com/results?search_query="&A3;"(//div[#class='yt-lockup-thumbnail contains-addto'])[3]/a/#href")
We construct the URL from the first search result of the search engine.
In F3, you can paste:
=LEFT(IMPORTXML(E3;"//meta[@itemprop='datePublished']/@content");4)
We parse the previously built URL, then extract the year of publication.
As you can see, there's a difference in the results on line 5. That's because the song is not available in the US: the first result returned by the IMPORTXML method is different from that of the API method, which uses the "FR" region code.
Side note: I'm based in Europe, so the ";" in the formulas may need to be replaced with "," depending on your locale.
Google does not support web scraping of Google Search into Google Sheets; this option was disabled two years ago. You will need to use an alternative search engine.
I've been using Sphinx for my personal website for the past few years and realized that what I have is more of a blog with posts and a few pages, so I did the conversion to Nikola over the past few days. I also took the opportunity to switch to Markdown, as I use it with R and Stack Overflow and everywhere else as well.
I had set up my Sphinx theme to show a local table of contents in the sidebar. There are a handful of very long (over 10k words) posts that would benefit from a local table of contents. I saw that the Nikola manual is written in reST and uses the contents directive. I would like to use that in those posts as well.
I could convert these few posts back to reST and use the contents directive, but I'd like to avoid that. Can this be accomplished somehow?
Nikola uses Python-Markdown by default. It supports a TOC extension that one can enable in the conf.py. Then one can use a [TOC] marker anywhere in the document to get a local table of contents.
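For example, something along these lines in conf.py (this assumes your Nikola version exposes the MARKDOWN_EXTENSIONS setting; the exact list of default extensions may differ):

MARKDOWN_EXTENSIONS = [
    'markdown.extensions.fenced_code',
    'markdown.extensions.codehilite',
    'markdown.extensions.extra',
    'markdown.extensions.toc',   # makes the [TOC] marker work in Markdown posts
]

After that, putting [TOC] on its own line near the top of a long post should render a local table of contents at that spot.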
Updated
Using [TOC], which is a feature of an extension enabled by default. My first answer was a misinterpretation of your question.
First answer
Using Nikola, maybe you are interested in the "archive" option. This is a default page that includes all your posts (optionally grouped by date). Example on my blog: https://www.cosmoscalibur.com/archive.html
Whenever I try to upload my dataset to the AutoML Natural Language Web UI, I get the error
Something is wrong, please try again.
The documentation is not very insightful about how my CSV file is supposed to look, but I tried to make a simple sample file just to make sure it works at all. It looks like this:
text,label
asdf,cat
asodlkao,dog
asdkasdsadksafask,cat
waewq23,cat
dads,cat
saiodjas,cat
skdoaskdoas,dog
hgfkgizk,dog
fzdrgbfd,cat
otiujrhzgf,cat
vchztzr,dog
aksodkasodks,dog
sderftz,dog
dsoakd,dog
qweqweqw,cat
asdqweqe,cat
dkawosdkaodk,dog
ewqeweq,cat
fdsffds,dog
bvcghh,cat
rthnghtd,dog
sdkosadkasodk,cat
sdjidghdfig,cat
kfodskdsof,dog
saodsadok,dog
ksaodksaod,dog
vncvb,cat
I chose this formatting according to Google's suggested syntax.
But even with this formatting I still get the same error.
I've seen this question, Format of the input dataset for Google AutoML Natural Language multi-label text classification, but according to the answers there it seems my formatting should work, so I do not know why I get the error.
I've just copied the CSV file and uploaded it to my own project, and the dataset was created successfully. One problem is that an extra label, "label", was created - this is because a header row is not expected in the CSV file (probably this should get fixed).
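So under that assumption the file should start directly with the data rows, e.g.:

asdf,cat
asodlkao,dog
asdkasdsadksafask,cat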
Based on that, it seems the problem isn't the CSV file format. I would recommend checking whether your project is set up correctly. You can open a bug to get someone's help: either file it in the public issue tracker or send feedback using the UI (there is a 'Feedback' option in the menu at the top right of the page).
I have found the problem! As Michal K said, there was nothing wrong with the formatting. The real problem was that I had not been assigned the Storage Object Creator role, which is necessary because the data is uploaded to Cloud Storage first.
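For anyone hitting the same thing, a project owner can grant that role with something along these lines (project ID and account are placeholders):

gcloud projects add-iam-policy-binding MY_PROJECT_ID --member="user:me@example.com" --role="roles/storage.objectCreator"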
Good morning. I have downloaded the Yahoo Flickr Creative Commons 100M (14G) dataset from the official website. When I extracted it I got a 48 GB file without an extension. I also have a .txt file that explains how the dataset is composed: it consists of a lot of records, and for each image some information is stored, such as the download link, photo/video identifier, photo/video hash, user nickname, date taken, and other fields.
Now, I only need the images and the associated hashes, so the question is: how do I get them? I have literally no idea. Thank you everyone for the help.
EDIT: I have managed to open the file with Word, but not all of it because it is too big. I have over 10,000 records like this, for example:
0 6985418911 4e2f7a26a1dfbf165a7e30bdabf7e72a 39089491#N00 nino63004 2012-02-16 09:56:37.0 1331840483 Canon+PowerShot+ELPH+310+HS IMG_0520 canon,canon+powershot+hs+310,carnival+escatay,cruise,elph,hs+310,key+west+florida,powershot -81.804885 24.550558 12 (link to flickr that i can't post) (other link) Attribution-NonCommercial-NoDerivs License (other link) 7205 8 df7747990d 692d7e0a7f jpg 0
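In case it helps, a minimal Python sketch of pulling out just the identifier, the hash and the download link from such a tab-separated dump might look like this (the file name and the column positions are assumptions based on the record above and on the companion .txt, so check them against your copy):

# Assumed layout per line (tab-separated): line number, photo/video identifier,
# photo/video hash, user NSID, user nickname, date taken, ... , links, ...
with open("yfcc100m_dataset", encoding="utf-8") as src, \
        open("id_hash_link.tsv", "w", encoding="utf-8") as dst:
    for line in src:
        fields = line.rstrip("\n").split("\t")
        photo_id = fields[1]     # photo/video identifier (assumed position)
        photo_hash = fields[2]   # photo/video hash (assumed position)
        # heuristic: take the first field that looks like a direct image link
        link = next((f for f in fields if f.startswith("http") and f.endswith(".jpg")), "")
        dst.write(photo_id + "\t" + photo_hash + "\t" + link + "\n")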
Wikipedia provides all their page views in hourly text files. (See for instance http://dumps.wikimedia.org/other/pagecounts-raw/2014/2014-01/)
For a project I need to extract keywords and their associated page views for the year 2014. But each file (representing one hour, so 24*365 files in total) is ~80 MB, which makes this a hard task to do manually.
My questions:
1. Is there any way to download the files automatically? (The files are named in a structured way, which could be helpful.)
Download? Sure, that's easy:
wget -r -np http://dumps.wikimedia.org/other/pagecounts-raw/
Recursive wget does it. Note, these files are deprecated now; you probably want to use http://dumps.wikimedia.org/other/pagecounts-all-sites/ instead.
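If you'd rather script it in Python than use wget, a rough sketch for one month could look like this (it assumes the directory index pages are plain HTML link listings, which they are for these dumps; loop over months to cover all of 2014):

import re
import urllib.request

# One month's directory of hourly dump files.
BASE = "http://dumps.wikimedia.org/other/pagecounts-raw/2014/2014-01/"

index_html = urllib.request.urlopen(BASE).read().decode("utf-8", "replace")
# Pull every pagecounts-*.gz link out of the index page and fetch it.
for name in sorted(set(re.findall(r'href="(pagecounts-[^"]+\.gz)"', index_html))):
    print("downloading", name)
    urllib.request.urlretrieve(BASE + name, name)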
I worked on this project: https://github.com/idio/wikiviews
You just call it like python wikiviews 2 2015, and it will download all the files for February 2015 and join them into a single file.