Saved Searches and Alerting in Coveo

I'm looking at Coveo's capabilities for a project and the documentation isn't clear in the area of alerting (the high-level docs are marketing-led, and the low-level docs are mainly API-based without much explanation). I see you can set up a 'subscription' to alert on, but it's not clear exactly what that subscription can be against.
Essentially I'm trying to find out if you can save a search query/criteria, and then alert when a document matching that query/criteria is indexed, similar to what you can do with Percolators in Elasticsearch (if you are familiar with that).
Is this possible in Coveo?
Any pointers to precise and clear documentation that I'm missing would be appreciated.

I don't think this is a feature that Coveo provides, especially not for end users.
The only way to build something like you're asking would be to write your own code that queries Coveo, say once a day, with a specific query that normally returns no results, and checks whether any results come back. When results are returned, the system can then alert you.
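A minimal sketch of that polling approach in Python, assuming the standard Coveo Search API endpoint and a search API key; the endpoint, query, and alerting hook below are placeholders you would need to adapt to your own organization:

```python
import requests

# Assumed Coveo Search API endpoint and credentials; adjust for your organization.
SEARCH_URL = "https://platform.cloud.coveo.com/rest/search/v2"
API_KEY = "xx-your-search-api-key"                 # hypothetical key
SAVED_QUERY = '@source=="Contracts" @year==2024'   # hypothetical saved criteria

def check_saved_search():
    resp = requests.post(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"q": SAVED_QUERY, "numberOfResults": 10},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # Since the query normally returns nothing, any hit means a matching
    # document has been indexed since the last check.
    if data.get("totalCount", 0) > 0:
        send_alert(data.get("results", []))

def send_alert(results):
    # Stand-in for your own alerting code (email, Slack, webhook, ...).
    for r in results:
        print("New match:", r.get("title"), r.get("clickUri"))

if __name__ == "__main__":
    check_saved_search()   # run this from cron, say once a day
```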
If you think Coveo should create that feature, please feel welcome to post it on the Coveo Ideas portal: https://connect.coveo.com/s/ideas

Related

ElasticSearch as EventStore

I would like to ask your opinion on the pros and cons of using Elasticsearch as an event store.
I would also like to hear whether anybody has experience using Elasticsearch as an event store: what the results were, how reliable it was, and whether there were any issues.
ETA: reminded of this post by a no-explanation downvoter... This "Kafka is not for Event Sourcing" post is pretty useful when thinking through this sort of question.
Elasticsearch is simply not designed to be an authoritative persistent store for actual app state.
Neither is Redis.
Neither is Kafka.
All three may potentially be useful in the context of an app which does employ an event store.
May I suggest reading a book like NoSQL Distilled to get an idea of the selection criteria in this space, so you'll know how to select something appropriate.
Also, for a question like this to be more answerable (in general, questions like this are considered subjective and get closed), it's important to supply context: what sort of data you're going to maintain, what sort of access patterns, what sort of scale, and any other relevant constraints. A laundry list of contextless pros/cons is not going to help much.

GSA: How to prioritize HTML content with specific metadata

I want to make the GSA prioritize (in the results) content with some specific metadata. Is that possible?
Thanks!
Yes, it's possible. Metadata is one of the options for biasing results.
The admin console help tells you how to do it:
7.4 Admin Console Biasing Help
It's worth pointing out that biasing is more art than science. You may need to experiment a bit.
You should also make sure that you enable Advanced Search Reporting (ASR) so that the self-learning scorer is enabled. This is where the GSA adapts to user behavior, learning to deliver better results over time. Links that get more clicks automatically move up in the search ranking.

How to correlate checkbox values for the scenario below?

In my script I have a scenario where a page contains multiple checkboxes, for example 10. Each user selects checkboxes as they need: for example, one user selects 4 checkboxes and another user selects 5, so it varies per user.
How do I correlate those values?
Thank you.
From the website: "Please don’t share your solutions, ask for help, or help others. This is meant to be a challenge."
So you appear to be violating one of the primary rules of that website. I have looked at this challenge and it's really good for gauging someone's knowledge.
However, to address the technology generally: reading your question, I get the sense you may be missing certain fundamental knowledge for this kind of work. Here are the fundamentals; hopefully they will increase your general knowledge, and hopefully you can use it to address this specific question.
Definitions:
Correlation - you're taking data the SERVER sends to the browser, capturing it, and sending it back. Information present on web pages fits into this category.
Parameterization - you've got a set of values you'd like to put into web forms, usually things like names, addresses, etc. (both concepts are sketched below).
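To illustrate the difference outside of any specific tool, here is a plain-Python sketch; the URLs, form fields, and the hidden-token example are hypothetical, and tools like LoadRunner or JMeter have their own capture syntax for the same ideas:

```python
import re
import requests

session = requests.Session()

# --- Correlation: capture a value the SERVER sent, and send it back. ---
page = session.get("https://example.com/form")   # hypothetical URL
# The server embeds a dynamic token in the page; we must capture it at runtime
# instead of replaying the value that was recorded in the script.
token = re.search(r'name="csrf_token" value="([^"]+)"', page.text).group(1)

# --- Parameterization: values WE supply to the form, varying per user. ---
users = [
    {"name": "Alice", "checkboxes": ["opt1", "opt2", "opt3", "opt4"]},
    {"name": "Bob",   "checkboxes": ["opt1", "opt2", "opt3", "opt4", "opt5"]},
]

for user in users:
    session.post(
        "https://example.com/submit",             # hypothetical URL
        data={
            "csrf_token": token,             # correlated: came from the server
            "name": user["name"],            # parameterized: came from our data
            "selected": user["checkboxes"],  # parameterized: varies per user
        },
    )
```

If the list of checkboxes itself is generated by the server for each user, capturing that list is correlation; deciding how many of them each virtual user ticks is parameterization.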
Also understand exactly what is happening when you perform certain actions in your browser. When you "click" a checkbox, does that actually send a message to a server? Usually it doesn't (though not always). So when you use phrases like 'click a checkbox', that tells me you may not appreciate that performance testing is server-focused, not browser-focused.
Performance testing isn't intuitive, so you need to understand these concepts. If you dedicate time to understanding the concepts I've outlined above, you'll have the knowledge to complete the challenge.
Good luck.
What is driving the variation in which checkboxes get checked? Is it the result of something that comes back from the server in a previous request? Or is it somewhat random, based on whatever the user wants to do at runtime?

Irrelevant messages

I am thinking of making a website.
But how can I make sure that a user who is asking a question is not using any abusive language, and that the message is entirely on topic?
I'm not talking about spam; I know about CAPTCHA and all that.
What I am asking is: how can I keep an eye on human activity (in this case, the messages sent) while at the same time giving the user complete privacy?
One word... manually.
They're on the web, they already don't have complete privacy.
Offer the community the means to police themselves, whether by explicitly appointing moderators (like most bulletin boards), allowing them to decide who they can and cannot see (like social media sites), or collaborative moderation (like here).
You can set up a system where comments/posts must be approved by a moderator before being allowed to be posted. I believe Wordpress can do this.
There are curse-word filtering libraries available in most languages, usually complete with the ability to customize the words that are filtered out.
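A minimal sketch of what such a filter does; the blocked-word list here is just a placeholder for whatever you would customize:

```python
# Minimal word filter: flag or mask words from a customizable list.
BLOCKED_WORDS = {"badword1", "badword2"}   # placeholder list; customize as needed

def contains_blocked_word(message: str) -> bool:
    words = (w.strip(".,!?").lower() for w in message.split())
    return any(w in BLOCKED_WORDS for w in words)

def mask_blocked_words(message: str) -> str:
    return " ".join(
        "*" * len(w) if w.strip(".,!?").lower() in BLOCKED_WORDS else w
        for w in message.split()
    )
```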
To filter spam, there are things like Bayesian spam filters, which attempt to determine whether a message is spam based on the keywords it contains. This really isn't something you would want to attempt to build yourself.
Another thing to look at is Markov chains. They are designed to generate strings of seemingly valid text based on the probability that any given word is followed by any other particular word. Using the reverse process, you can attempt to determine whether a string of text is valid by checking whether the words used are followed by other "on-topic" words.
This would be very difficult as well.
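A rough sketch of that reverse idea: learn bigram probabilities from a corpus of known on-topic messages, then score a new message by how plausible its word transitions are. The corpus, floor value, and any threshold you apply to the score are placeholders; a real system would need smoothing and far more training data:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    # Convert counts to probabilities P(next word | current word).
    probs = {}
    for a, followers in counts.items():
        total = sum(followers.values())
        probs[a] = {b: n / total for b, n in followers.items()}
    return probs

def on_topic_score(message, probs, floor=1e-4):
    """Average transition probability; a low score suggests off-topic text."""
    words = message.lower().split()
    if len(words) < 2:
        return floor
    scores = [probs.get(a, {}).get(b, floor) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores)

# Placeholder usage with a tiny "on-topic" corpus:
model = train_bigrams(["how do I index a document", "how do I write a query"])
print(on_topic_score("how do I index a query", model))
```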
To keep the users' privacy, you could use a combination of these three tests to create a threshold. That is, you examine no messages unless they reach a high curse/spam/off-topic score; at that point, those messages are manually checked to see whether they are appropriate.
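A sketch of that threshold idea, assuming each of the checks above produces a score between 0 and 1; the cutoff is an arbitrary placeholder:

```python
def needs_manual_review(curse_score, spam_score, off_topic_score, threshold=0.7):
    """Only messages that look bad on the automated checks are read by a human;
    everything else is never examined, which preserves the user's privacy."""
    return max(curse_score, spam_score, off_topic_score) >= threshold
```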
There currently is no way to have a 100% automated process that won't block valid messages and let invalid ones through.
how can I keep an eye on human activity
Your answer lies here. I don't quite understand what you're getting at about privacy though.

Google crawling and indexing algorithms

I am looking for some documents on how Google crawls and indexes content. I have read many "light" papers and articles on what you need to do to improve your ranking and make sure your content is properly indexed, but I am looking for more advanced technical documents on how Google crawls and indexes content.
The things I would like to know more about:
What elements Google looks for when it crawls: page content, URL format, keywords, description, etc.
How is the index updated?
Basically, I am trying to understand why some pages are indexed but not others, even when the formats are similar, and why only 10% of my site's pages appear when I do a search on the entire domain, even though I can see in my server logs that Google crawled every single link.
The answers to both questions are closely guarded trade secrets, ostensibly to prevent people from gaming the system.
Also keep in mind that Google makes over 400 algorithmic changes per year, making it close to impossible for an outsider to be accurate and up-to-date. Short of working for Google, you're likely not going to find an in-depth and accurate answer.
However, Matt Cutts, head of the web spam team, frequently provides the most accurate insights into how Google handles content, both on his blog and on the GoogleWebmasterHelp YouTube channel. It's worth going through his content to get a much better understanding of Google's methodology.
To get a technical view of how a web crawler works, I suggest you take a deep look at the nutch.apache.org solution.
A typical web crawler has the following parts: a fetcher, a parser, an indexer, and a searcher. To put it briefly, a web crawler fetches all the URLs available on a website and creates segments where it stores up to 101 KB per page. Those pages are parsed, but typical stop words such as "and", "or", "the" are not stored; the other words are analyzed, using Bayesian calculations, in order to produce a rank.
Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. These tasks are mainly performed by storing a list of occurrences of each search term in an inverted index, typically implemented with a hash table or binary tree.
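A toy sketch of those pieces in Python: fetch a page, drop stop words, and record each remaining term in an inverted index. The URL and stop-word list are placeholders, and a real crawler would also need link extraction, politeness, deduplication, and so on:

```python
import re
from collections import defaultdict

import requests

STOP_WORDS = {"and", "or", "the", "a", "an", "of", "to"}   # placeholder list
inverted_index = defaultdict(set)    # term -> set of URLs containing it

def fetch(url, limit=101 * 1024):
    """Fetcher: download at most ~101 KB of the page, as described above."""
    return requests.get(url, timeout=10).text[:limit]

def parse(html):
    """Parser: crude tokenization, keeping only non-stop-word terms."""
    text = re.sub(r"<[^>]+>", " ", html)    # strip tags
    return [w for w in re.findall(r"[a-z0-9]+", text.lower())
            if w not in STOP_WORDS]

def index(url):
    """Indexer: add every term on the page to the inverted index."""
    for term in parse(fetch(url)):
        inverted_index[term].add(url)

def search(term):
    """Searcher: look up which pages contain the term."""
    return inverted_index.get(term.lower(), set())

index("https://example.com/")    # hypothetical URL
print(search("example"))
```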
As Mark stated, Google's calculations are mainly trade secrets, but patents issued by Google could be a good start. PageRank (http://en.wikipedia.org/wiki/PageRank) mainly analyses backlinks and the importance that the websites pointing to your site have in people's preferences. In my experience it is important to offer an XML sitemap listing all the pages on your site; in that sitemap you can state the crawl frequency for each page. gsitecrawler.com/ is an interesting possibility.
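The PageRank idea itself is public (it is what the patent and the Wikipedia article above describe); here is a small power-iteration sketch over a toy link graph, using the commonly cited damping factor of 0.85:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[p] / len(outgoing)
            for q in outgoing:
                # Each page passes a share of its rank to the pages it links to.
                new_rank[q] = new_rank.get(q, (1 - damping) / n) + share
        rank = new_rank
    return rank

# Toy graph: A and B both link to C, and C links back to A.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```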
Google Website Optimizer will give you the chance to see what Google is finding on your site. Logs are OK, but the robot probably runs into problems, and the best way to know about them is with Google's website optimizer, which displays the errors.
Finally, most of your concerns are things that SEO specialists live for. I suggest you check sites like seomoz.com and their tools; you will learn how to position your website better in the organic results on search engines.
Hope it helps! Sebastian.
"Yes" Google like fresh & unique content.
Use Google webmaster guideline "try this instead" H1 or H2 meta tag on your HTML programming under the head tag ....your keyword. Anchor have to must use your business related keywords in H1, H2, it can help your site search engine.
Also use for Rich snippets in this tag..!
It scans your web page very precisely and sensitively. Factors such as whether your JavaScript is embedded or in a separate file matter, and using frames in your design or heavy graphics can reduce your page's ranking. Keywords obviously affect ranking, and broken links also bring your website's ranking down.
Basically, you can refer to http://www.tutorialspoint.com/seo/ to go through all the important points about Google's crawler. This will take at most 40 minutes.
MapReduce: Simplified Data Processing on Large Clusters
I analysed the latest algorithm and found that Google now gives more importance to CONTENT rather than LINKS.
So if your content is good enough, with the proper tags in place, Google will index it for you automatically. I would suggest using H1 through H6, all in a sensible manner.
