Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a wiki of recipes and want to put a "Random recipe" on the main page.
I hope you can help me. I looked through the MediaWiki help but didn't find anything:
http://www.mediawiki.org/w/index.php?title=Special%3ASearch&profile=advanced&search=random+article&fulltext=Search&ns90=1&redirs=1&profile=advanced
This is the wiki I'm working on:
http://lesmots.uy/labs/enciclochef
Thanks
I believe you're looking for Extension:ArticleInclude. It allows you to overwrite the included article name with any random article on the wiki. It also allows you to limit the length, so that you only see a snippet of that article.
For your purposes, I'd imagine that the recipe title and/or ingredients would be sufficient. Including the ingredients may be tricky, since you list them at the top of the page in a simple list format. If you choose to go with the ArticleInclude extension, I'd recommend keeping your ingredients on a single line, and creating line breaks with HTML so that you could use the lines option to get what you need reliably.
Example, altered from Torta aromática:
<blockquote> 3 huevos <br> 3 cucharadas de leche <br> 6 cucharadas de manteca derretida <br> 1 ½ tazas de harina <br> ¾ tazas de azúcar <br> 2 cucharaditas de polvo de hornear <br> 2 cucharaditas de canela <br> ¼ cucharadita de clavo de olor molido <br> ½ cucharadita de cáscara de limón rallada </blockquote>
Relevant Options:
random: Overrides article and shows a random article from the given namespaces
count: How many letters will be shown; negative values display the whole article minus the given number of letters at the end
lines: The number of lines (not letters) that will be shown
Edit: If you don't want to bother with an Extension, Transclusion comes out-of-the-box since MediaWiki 1.8, but not with a random article function. You may have seen transclusion in action before on new user talk pages, where many wikis will commonly link to a page with helpful suggestions for new users.
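To see plain transclusion in action: in wikitext, prefixing the page name with a colon transcludes a page from the main (article) namespace rather than the Template namespace. For example, to embed the Torta aromática recipe on the main page:

```
{{:Torta aromática}}
```

What this cannot do out of the box is pick the page at random, which is why an extension such as ArticleInclude is needed for a "Random recipe" box.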
Here is an example of the data I am using:
"Q7: How does gender income inequality manifest in the communities in which you live and/or work? What do you believe is needed to help close the wealth gap between men and women as well as among women of different races in the county?
Wouldn’t go into pay equity because that would get dismissed. The wealth gap is a more compelling argument.
Equity is more emotional. And wealth gap is more numbers. It goes toward the same thing though."
I am running RStudio and using library(RQDA) and library(tidyverse).
I am trying to analyze several qualitative interviews formatted in question/answer form as in the provided example. I finished the coding process and now I'm trying to find themes. While coding, I created code categories that correspond with each interview question with the hopes that I would be able to pull out all the codings per code category now. Unfortunately, I cannot figure out how to do it and would appreciate some assistance!
thanks
I am not sure about this, but I understood code categories to be helpful for structuring your work according to your theoretical perspective. If you create a code category for each interview question (i.e., the topic of the question is your theme/code category), you may end up with various codes belonging to one "code category" that do not have much in common. Alternatively, you could create cases (case 1 might be the answers to the first interview question, etc.): "b) Open a file, select part of the file, select a case name, then click button "Link" in "Cases" tab, you can thus link the selected part of file to the selected case" (http://rqda.r-forge.r-project.org/documentation_2.html).
I have massive amounts of natural language questions in the format of "Subject-entity [tab] relationship [tab] Object-entity [tab] question" as follows:
www.freebase.com/m/01jp8ww www.freebase.com/music/album/genre www.freebase.com/m/01qzt1 Which genre of album is harder.....faster?
www.freebase.com/m/0np6z99 www.freebase.com/music/album/release_type www.freebase.com/m/02lx2r what format is fearless
www.freebase.com/m/0wzc58l www.freebase.com/people/person/place_of_birth www.freebase.com/m/0n2z what city was alex golfis born in
www.freebase.com/m/0jtw9c www.freebase.com/film/writer/film www.freebase.com/m/05szq8z what film is by the writer phil hay?
www.freebase.com/m/0gys2sn www.freebase.com/people/deceased_person/place_of_death www.freebase.com/m/0tzls Where did roger marquis die
www.freebase.com/m/01fwty www.freebase.com/people/deceased_person/cause_of_death www.freebase.com/m/0gk4g what was the cause of death of yves klein
www.freebase.com/m/02cft www.freebase.com/location/location/people_born_here www.freebase.com/m/0k8n4c1 Which equestrian was born in dublin?
www.freebase.com/m/01htzx www.freebase.com/tv/tv_genre/programs www.freebase.com/m/04g1jv6 What is a tv action show?
How does one parse these natural language question files to obtain the answer to each question? I am planning on using Ruby to parse these millions upon millions of questions, although I am a bit stuck on how to actually obtain the answers to the questions in these files from Freebase.
From what I have seen at the direct links, there doesn't actually seem to be an answer to the question located in them, so is there some other method of obtaining the answers?
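For what it's worth, the tab-separated layout itself is easy to split in Ruby, and in this format the object-entity column appears to be the answer to the question (its trailing MID identifies the Freebase topic, which you would still have to resolve to a human-readable name against a Freebase data dump). A minimal sketch; the field and method names are my own labels, not part of the data:

```ruby
# Each line: subject-entity \t relationship \t object-entity \t question
Line = Struct.new(:subject, :relationship, :object, :question)

def parse_line(raw)
  subject, relationship, object, question = raw.chomp.split("\t", 4)
  Line.new(subject, relationship, object, question)
end

# The object-entity URL carries the answer; extract its Freebase MID.
def answer_mid(line)
  "/" + line.object.split("/", 2).last
end

raw = "www.freebase.com/m/01jp8ww\twww.freebase.com/music/album/genre\t" \
      "www.freebase.com/m/01qzt1\tWhich genre of album is harder.....faster?"
line = parse_line(raw)
line.question    # => "Which genre of album is harder.....faster?"
answer_mid(line) # => "/m/01qzt1"
```

The MID on its own is only an identifier; turning "/m/01qzt1" into a readable answer requires a lookup in a Freebase dump or mirror, since the question files themselves carry no answer text.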
I'm using Wikipedia API to get an article summary. The data is nested within a variable page ID key unique to each article.
How do I get the value of the extract key if I do not initially know the pages key (the page ID)?
Example API response for 'Stack Overflow':
{
"query": {
"pages": {
"21721040": {
"pageid": 21721040, # this is a unique key value for each article
"ns": 0,
"title": "Stack Overflow",
"extract": "Stack Overflow is a privately held website, the flagship site of the Stack Exchange Network, created in 2008 by Jeff Atwood and Joel Spolsky, as a more open alternative to earlier Q&A sites such as Experts Exchange. The name for the website was chosen by voting in April 2008 by readers of Coding Horror, Atwood's popular programming blog.\nIt features questions and answers on a wide range of topics in computer programming. The website serves as a platform for users to ask and answer questions, and, through membership and active participation, to vote questions and answers up or down and edit questions and answers in a fashion similar to a wiki or Digg. Users of Stack Overflow can earn reputation points and \"badges\"; for example, a person is awarded 10 reputation points for receiving an \"up\" vote on an answer given to a question, and can receive badges for their valued contributions, which represents a kind of gamification of the traditional Q&A site or forum. All user-generated content is licensed under a Creative Commons Attribute-ShareAlike license. Questions are closed in order to allow low quality questions to improve. Jeff Atwood stated in 2010 that duplicate questions are not seen as a problem but rather they constitute an advantage if such additional questions drive extra traffic to the site by multiplying relevant keyword hits in search engines.\nAs of April 2014, Stack Overflow has over 2,700,000 registered users and more than 7,100,000 questions. Based on the type of tags assigned to questions, the top eight most discussed topics on the site are: Java, JavaScript, C#, PHP, Android, jQuery, Python and HTML."
}
}
}
}
Update: Solution based on Uri's response...
key = response['query']['pages'].keys # => ["21721040"]
response['query']['pages'][key[0]]['extract'] # data
You can look at the keys of the hash:
response['query']['pages'].keys
# => ["21721040"]
If you're using Ruby 2.3, you could go for a #dig:
Get the value by extracting the key first:
key = response.dig('query', 'pages')&.keys&.first
# => "21721040"
response.dig('query', 'pages', key, 'extract')
# => "Stack Overflow is a privately held website..."
Grab the extract value directly:
response.dig('query', 'pages')&.values&.dig(0, 'extract')
# => "Stack Overflow is a privately held website..."
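Putting the two approaches together as a self-contained sketch, with a trimmed-down stand-in for the parsed API response (the real one has more fields, and newer versions of the MediaWiki API also accept formatversion=2, which returns pages as an array and sidesteps the unknown key entirely):

```ruby
# Trimmed-down stand-in for the parsed Wikipedia API response
response = {
  'query' => {
    'pages' => {
      '21721040' => {
        'pageid'  => 21_721_040,
        'title'   => 'Stack Overflow',
        'extract' => 'Stack Overflow is a privately held website...'
      }
    }
  }
}

# The page ID key is unknown ahead of time, so read it off the hash first:
key     = response.dig('query', 'pages')&.keys&.first  # => "21721040"
extract = response.dig('query', 'pages', key, 'extract')

# Or grab the value positionally, without naming the key at all:
extract2 = response.dig('query', 'pages')&.values&.dig(0, 'extract')
```

Both `extract` and `extract2` end up holding the same string, and the safe-navigation operator keeps either expression from raising if 'query' or 'pages' is missing.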
I want to link to a random question within the "resolved questions" section of Yahoo Answers.
I've found some JS techniques that involve assigning a number to each URL, so the clicked link chooses randomly from a list I'd create that way. But there are tens of thousands of resolved questions, with new ones added every day, so that method won't work for me.
How can I link to a random "resolved question?"
You could use the Yahoo! Answers API to get the data you need (in either XML or JSON).
The documentation is available here: http://developer.yahoo.com/answers/
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 12 months ago.
I've read over the Google specification for crawling AJAX-enabled pages. Since part of Google's indexing method uses the URL itself, will converting to #! negatively affect SEO?
For instance, if I have a page at www.mysite.com/surfing, Google will be likely to rate it highly if a user searches for "surfing" because it has "surfing" in the URL. Would the same be true for www.mysite.com/#!surfing or does it ignore the hash fragments for the purposes of weighting the URL itself?
Perhaps you have already read in Google's AJAX-crawling instructions that the #! is actually transformed into ?_escaped_fragment_= by the Google crawler. So, to use your example: for www.mysite.com/#!surfing, the Google crawler will see the link as www.mysite.com/?_escaped_fragment_=surfing. The question then becomes: which is better for Google SEO, a link with the ?_escaped_fragment_=surfing parameter or one without it (/surfing)?

Search engine representatives have confirmed on numerous occasions that URLs with more than 2 dynamic parameters may not be spidered unless they are perceived as significantly important (i.e. have many, many links pointing to them). So unless you're using too many parameters in the URL, you don't have much to worry about. If you haven't done so already, you can always read the detailed Google documentation: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started

Now, just a piece of advice: don't rely on # in your AJAX website. Use history.pushState() to change your URL to whatever you wish, and fall back to #! only on browsers that don't support history.pushState(), like old IE. The SEO problem with #! doesn't come from the URL itself but from the difficulty of the server-side processing needed to provide an HTML snapshot for the crawler.
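To illustrate the pushState-with-hashbang-fallback idea mentioned above, here is a minimal JavaScript sketch. The function and parameter names are my own; the history and location objects are passed in explicitly so the fallback logic is easy to see (in a browser you would pass window.history and window.location):

```javascript
// Navigate to a clean URL when the History API is available,
// otherwise fall back to a #! fragment (e.g. for old IE).
function navigateTo(path, hist, loc) {
  if (hist && typeof hist.pushState === 'function') {
    hist.pushState({}, '', path);              // clean URL: /surfing
    return 'pushState';
  }
  loc.hash = '#!' + path.replace(/^\//, '');   // fallback: /#!surfing
  return 'hashbang';
}

// In a browser: navigateTo('/surfing', window.history, window.location);
```

With this pattern, modern crawlers and users see the clean /surfing URL, and only legacy browsers ever produce the #! form.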
This question is old. Google no longer supports AJAX crawling:
https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
And the corresponding document has been officially deprecated:
https://developers.google.com/search/docs/ajax-crawling/docs/getting-started
So don't use hashbangs in URLs.
Traditionally, from an SEO perspective, the hash (#) is used to avoid the following issues:
- Cannibalization issues
- Affiliate URLs (here is a good article about how to use the hash for tracking purposes instead of a question mark in the URL)
- Showing limited content on the page (pagination issues)
The usage you are referring to is what Google recommends for making AJAX pages readable by Google: https://support.google.com/webmasters/answer/174992?hl=en
For more info about hash tag and its SEO benefits, check this blog post - https://digitalreadymarketing.com/adding-hash-in-urls-seo-benefits/
In my personal opinion, after 8 years in SEO and development, it won't do harm; it depends more on the site's other parameters, so adding the #! should be fine.
Do you have the site URL so I can take a more in-depth look?
That could cause a problem if Google's crawler thought there could be an infinite number of possibilities, as with a ? in the URL. But beyond that, the answer is clear.
website.com/oreo-cookies
is more semantic and easier to understand for both people and crawlers than
website.com/#!oreo-cookies
But is this going to have a major impact? If you were a client paying me for SEO, I would tell you that incoming text links with relevant keyword phrases from relevant, related websites are far more important. I would also say that if you are submitting an XML sitemap for Google to digest, and lots of popular websites are using #!, Google will figure it out and ignore it.
So bottom line, if my content was worth linking to, and I made sure google was finding all my pages and indexing them, I would not worry about it.
I think it will not harm your SEO in any way. I have been in SEO for the last 5 years and haven't seen such a problem yet, so don't worry about it. In my opinion, you can add the #! with no harm.