OK, so originally I followed this guide to create a fulltext index, and then I imported all my data onto my Neo4j server. The indexes work great, and now I'm trying to use Sinatra/Ruby to interact with my Neo4j graph.
I'm using the Neo4j ruby gem, and I've created a model (movie.rb) of Movies with a fulltext index on the title as per this wiki entry:
class Movie
  include Neo4j::NodeMixin

  property :id
  property :movieID
  property :name, :index => :fulltext
  property :year
  property :imdB
  property :rtRating
  property :poster
end
However, I am getting this error: NameError: uninitialized constant Neo4j::NodeMixin. The Neo4j Ruby wiki entry states that:
The neo4j-wrapper is included in the neo4j gem. The neo4j-wrapper gem defines these mixins: Neo4j::NodeMixin
So it should be included in my project...
I have no idea how to continue... someone please help me search with my fulltext index?
I think we've already helped you with this separately, but for anybody running across this question:
For various reasons, the Neo4j.rb project doesn't have good support for the legacy indexes. That wiki page is out of date (I just updated it to say so). The searchkick gem is recommended for fulltext search functionality with Neo4j.rb.
Also note that there is a direct integration of elasticsearch with Neo4j:
https://github.com/neo4j-contrib/neo4j-elasticsearch
Here's a video tutorial:
https://www.youtube.com/watch?v=SJLSFsXgOvA
For that, though, I believe you would need to make your own HTTP requests to Neo4j to get the results from the plugin. The upside is that not all of your changes need to go through your Ruby app (as would be the case with searchkick).
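If you do go the direct-HTTP route against a legacy index, the request is just a GET against Neo4j's legacy REST index endpoint. A minimal sketch, where the index name, Lucene query, and host are assumptions to adjust for your setup, and the URL shape follows the Neo4j 2.x legacy REST API:

```ruby
require 'uri'
require 'net/http'
require 'json'

# Build a query URI for the Neo4j 2.x legacy index REST endpoint.
# 'movies_fulltext' and the Lucene query string are placeholders.
def fulltext_query_uri(index_name, lucene_query, host: 'http://localhost:7474')
  URI("#{host}/db/data/index/node/#{index_name}" \
      "?query=#{URI.encode_www_form_component(lucene_query)}")
end

uri = fulltext_query_uri('movies_fulltext', 'name:matrix*')
# response = Net::HTTP.get(uri)  # JSON array of matching nodes
# JSON.parse(response)
```

The actual HTTP call is left commented out since it needs a running Neo4j server.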
I'm building an API with Sinatra, with Sequel as the ORM on a Postgres database.
I have some complex datasets to query in a paging style, so I'd like to keep the dataset in cache to serve requests for the following pages after the first call.
I've read that Sequel datasets are cached by default, but I need to keep the object between two requests to benefit from this behavior.
So I'd like to put this object somewhere and retrieve it later if the same query is called again, rather than building a full new dataset each time.
I tried the Sinatra session hash, but I got a TypeError: can't dump anonymous class #<Class:0x000000028c13b8> when putting the dataset object in it.
I'm wondering whether to use memcached for that.
Any advice on the best way to do this would be very appreciated, thanks.
Memcached or Redis (using LRU) would likely be appropriate solutions for what you are describing. The Ruby Dalli gem makes it pretty easy to get started with memcached. You can find it at https://github.com/mperham/dalli.
On the GitHub page you will see the following basic example:
require 'dalli'
options = { :namespace => "app_v1", :compress => true }
dc = Dalli::Client.new('localhost:11211', options)
dc.set('abc', 123)
value = dc.get('abc')
This illustrates the basics of using the gem. Keep in mind that memcached is simply a key/value store with LRU (least recently used) eviction. This means you allocate memory to memcached and let your keys expire organically unless there is a reason to expire a key manually.
From there it's simply a matter of attempting to fetch a key from memcached, and running your real queries only when no match is found:
found = dc.get('my_unique_key')
unless found
  # Do your Sequel query here
  dc.set('my_unique_key', 'value_goes_here')
end
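To make the LRU eviction mentioned above concrete, here is a toy in-memory sketch (not a substitute for memcached; the class is invented purely for illustration) that exploits Ruby's insertion-ordered Hash:

```ruby
# Toy LRU cache: reinserting a key moves it to the "most recent" end,
# so the first key in the Hash is always the least recently used.
class TinyLRU
  def initialize(max_size)
    @max_size = max_size
    @store = {}
  end

  def get(key)
    return nil unless @store.key?(key)
    @store[key] = @store.delete(key)  # mark as most recently used
  end

  def set(key, value)
    @store.delete(key)
    @store[key] = value
    @store.shift if @store.size > @max_size  # evict least recently used
  end

  def keys
    @store.keys
  end
end

cache = TinyLRU.new(2)
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')     # touch 'a' so 'b' becomes least recently used
cache.set('c', 3)  # evicts 'b'
cache.keys         # => ["a", "c"]
```

Memcached does the same bookkeeping for you, at scale and across processes.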
I'm learning Ruby at the moment and using Twitter as a platform to help me build my first prototype in Sinatra. I'm using the Twitter gem and have managed to get a private list of mine and display all the tweets related to the users in that list.
However, I now want to search through the list for a certain set of keywords and, if a tweet matches, display it.
Does anyone know if there is any way within the Twitter gem to do this? Or how I would go about doing this in Rails in an efficient way?
The only way I can figure out is to iterate through each tweet returned, get the text related to that tweet and search for the keywords, and if found display that tweet. This seems stupidly inefficient to me, and wouldn't it use up unnecessary API requests?
This is what I have so far if this is of any help to anyone.
require 'sinatra'
require 'rubygems'
require 'twitter'
client = Twitter::REST::Client.new do |config|
  config.consumer_key        = 'xxxx'
  config.consumer_secret     = 'xxx'
  config.access_token        = 'xx'
  config.access_token_secret = 'xx'
end

get '/' do
  @tweet = client.list_timeline(1231123123123, { :include_rts => 0 })
  erb :index
end
Many thanks in advance
Matt
You are correct about this: iterate through each tweet returned, get the text related to that tweet and search for the keywords, if found display that tweet.
You wrote: "this to me is stupidly inefficient". You are correct. It's inefficient because you have to retrieve all the tweets, rather than just the tweets that contain the keywords that you want.
The Twitter gem does not do what you want, because Twitter search is slightly unpredictable. This is because the Twitter search is optimizing for relevancy, not thoroughness.
What you're looking for, I think, is Twitter "streams". When you ask for a Twitter stream, you get all the tweets from the user (or site, or globally), and you get them in real time. This is more sophisticated to set up, but it gives you everything.
https://dev.twitter.com/streaming/overview
Then you search the tweets within Rails.
If you want a simple search, you may want to look at using Ruby's select method and Regexp class.
If you want powerful search capabilities, you may want to look at various search gems and search engines such as sunspot_solr and Lucene.
If you're building a real-world business application with more advanced needs for scaling and searching, you may want to read about Twitter Firehose partners and text engines such as discovertext. These partners and engines provide real-time search APIs, caching, and many more features.
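The select/Regexp approach mentioned above can be sketched like this. The tweet data is fabricated for illustration; with the Twitter gem you would match against tweet.text on each returned Tweet object:

```ruby
keywords = %w[ruby sinatra]

# Case-insensitive whole-word match against any of the keywords
pattern = Regexp.union(keywords.map { |k| /\b#{Regexp.escape(k)}\b/i })

# Stand-ins for the objects returned by client.list_timeline
tweets = [
  { user: 'alice', text: 'Loving Sinatra for small web apps' },
  { user: 'bob',   text: 'Lunch was great today' },
  { user: 'carol', text: 'Ruby blocks finally clicked for me' }
]

matching = tweets.select { |t| t[:text] =~ pattern }
matching.map { |t| t[:user] }  # => ["alice", "carol"]
```

Regexp.escape guards against keywords containing regex metacharacters, and Regexp.union keeps the case-insensitive flag of each sub-pattern.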
Consider using the search method, as shown in the example here
I have an ElasticSearch server running that indexes and searches documents using the excellent Tire gem. Everything works great, except I'm not sure how to go about manually removing documents from the search index.
I have poured over the RDoc and searched for hours, but this is the only hint at a solution I can find https://github.com/karmi/tire/issues/309. Is there an easier way other than building a custom wrapper around curl and making the request manually?
Another hitch is that I use a soft-delete gem called ActsAsParanoid, so the Tire::Model::Callbacks won't remove the object on soft-delete.
Any ideas?
In case you only have the ID (e.g. 12345):
User.tire.index.remove 'user', '12345'
Or more generally:
klass.tire.index.remove klass.document_type, record_id
(which I think is equivalent to what remove @user will do behind the scenes)
reference
Turns out you can just manually remove the soft-deleted object from the index like so:
@user = User.find(id) # or whatever your indexed object is
User.tire.index.remove @user # this will remove it from the index
That's it!
I need help getting autocomplete working on my Solr search bar.
I'm using this: https://github.com/xponrails/sunspot_autocomplete
I followed the steps and it doesn't work.
I get stuck at my search bar: how do I add autocomplete to it while keeping params[:search]?
Someone else had the same problem, but deleted their code that got it working.
Does it have to do purely with jQuery UI autocomplete?
Or was there something about having to install the plugin a certain way? I'm not sure if I installed it correctly.
Thank you =)
Please find the steps for autocomplete below. I am going to give an example of autocompleting items.
Step 1: Create an autocomplete text field like <%= text_field_with_auto_complete 'item', 'name' %>
Step 2: Then call a method named auto_complete_for_item_name.
Step 3: Inside that method, get all the rows from the table and keep them in a variable called @items.
Step 4: Then call a new method with this @items variable, like render :inline => "<%= auto_complete_result @items, 'name', 'med_item' %>"
Step 5: In that method, put something like content_tag("ul", items.uniq.join)
It will list all the items based on the entered key.
Please correct me if I'm wrong.
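Stripped of the Rails helpers, the server side of steps 3-5 boils down to filtering the rows by the entered key. A stdlib-only sketch (the item names and helper method are made up for illustration):

```ruby
# Stand-in for the rows fetched from the table in step 3
items = ['Aspirin', 'Atenolol', 'Amoxicillin', 'Ibuprofen', 'Aspirin']

# Narrow the list down to names starting with what the user typed;
# steps 4-5 then render this as a <ul> of suggestions.
def suggestions_for(items, typed)
  items.select { |name| name.downcase.start_with?(typed.downcase) }.uniq.sort
end

suggestions_for(items, 'as')  # => ["Aspirin"]
suggestions_for(items, 'a')   # => ["Amoxicillin", "Aspirin", "Atenolol"]
```

The uniq call mirrors items.uniq in step 5; in a real app you would push the filtering into a SQL/Solr query instead of loading every row.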
Thanks & Regards,
Viji Kumar.M
I just tried it for my project and it worked fine with these steps:
all the necessary sunspot gems were included and installed (https://github.com/sunspot/sunspot/blob/master/README.md)
the autocomplete gem was installed too, of course (I used a slightly different git path, gem 'sunspot_autocomplete', ">= 0.0.3", :git => 'git://github.com/xponrails/sunspot_autocomplete.git', than the one proposed in the manual)
the app was restarted after the gems were added
the sunspot-generated solr schema.xml was modified (all the necessary nodes were added inside these blocks:
<schema name="sunspot" version="1.0">
  <types> *here* </types>
  <fields> *and here* </fields>
</schema>)
solr was restarted after schema.xml was modified
I checked that it started fine with correct XML (that's what I pointed at here)
the necessary rows were included in the searchable do; ...; end block for the model (there has to be a text field and an autocomplete field with a unique name, addressed to the text field with the :using => :[text field name] syntax)
the model was reindexed after the searchable block changed
the necessary javascript libraries were included in application.js (I had to copy all the js files from the gem to the app assets folder)
basic styles were included in application.css (same situation as with the js files)
assets need to be precompiled (if you're trying to get it to work in your production environment)
the correct autocomplete_text_field was added to the view (I had to .html_safe it because it was rendered as raw text)
the app was restarted again
and it worked!
But I guess you missed something in one or more of these steps.
If the whole question is just how to add the search field to the template, you just need to add this row:
<%= autocomplete_text_field("uniq", "field", "http://localhost:8982/solr/", "uniq_field") %> - assuming you have indexed autocomplete :uniq_field, :using => :field and pointed it at the live version of solr (http://localhost:8982/solr/)
If you share some code with me, I can point you to what you missed.
Cheers!
I've recently begun using the aws gem in a Sinatra web application whose purpose is to provide a customized frontend to instance management (integrating non-AWS tools). I am currently working on the form to allow a user to set all the options that might need setting, and one of those options is instance type (m1.small, c1.medium, etc).
What I'd like is to be able to reach out to some source to pull a list of available types. I have looked through the AWS::EC2 documentation and haven't found anything matching this description. I've got no need to insist that a solution be part of the aws gem, but even better if it is, because that's the tool I'm already using.
Do you know of a way to gather this information programmatically?
As far as I can tell this isn't possible. If it were, Amazon would list the API call in their documentation.
I find the omission a little odd, considering they've got APIs to list pretty much everything else.
You could maybe kludge it via the DescribeReservedInstancesOfferings call, which lists all the kinds of reserved instances you can buy; extracting the unique instance types from that should be a reasonable approximation (as far as I know there are no instance types you can't get reserved instances for). It doesn't look like the aws gem supports it, though. The official Amazon SDK does, as does fog.
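The extraction half of that kludge is straightforward. A sketch against a fabricated response — the :instance_type field name follows the aws-sdk v1 response shape, so verify it against your SDK version:

```ruby
require 'set'

# Collect the unique instance types out of DescribeReservedInstancesOfferings
# results; `offerings` stands in for the pages you'd fetch from the API.
def unique_instance_types(offerings)
  offerings.each_with_object(Set.new) { |o, types| types << o[:instance_type] }
end

# Fabricated sample data for illustration only
sample_offerings = [
  { instance_type: 'm1.small',  product_description: 'Linux/UNIX' },
  { instance_type: 'c1.medium', product_description: 'Linux/UNIX' },
  { instance_type: 'm1.small',  product_description: 'Windows' }
]

unique_instance_types(sample_offerings).to_a  # => ["m1.small", "c1.medium"]
```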
Here's a somewhat kludgy work-around for the fact that Amazon still hasn't released an API to enumerate instance types:
require 'set'

instance_types = Set.new
response = { :next_token => '' }
loop do
  response = ec2.client.describe_spot_price_history(
    :start_time           => (Time.now - 86400).iso8601,
    :end_time             => Time.now.iso8601,
    :product_descriptions => ['Linux/UNIX'],
    :availability_zone    => 'us-east-1c',
    :next_token           => response[:next_token]
  )
  response[:spot_price_history_set].each do |history_set|
    instance_types.add(history_set[:instance_type])
  end
  break if response[:next_token].nil?
end