feedzirra update doesn't find updates - ruby

I am trying to use feedzirra to grab RSS/Atom feeds, but feedzirra doesn't seem to find updates. I start by grabbing the feed and dumping the returned structure to a file.
require 'rubygems'
require 'yaml'
require 'feedzirra'
feed = Feedzirra::Feed.fetch_and_parse("http://rss.slashdot.org/Slashdot/slashdot")
File.open('slashdot.yaml','w'){|f| f.puts feed.to_yaml}
Then I wait a while, so that there are updates to the feed, and I try:
require 'rubygems'
require 'yaml'
require 'feedzirra'
feed = YAML.load_file('slashdot.yaml')
puts feed.entries.first.published
updated_feed = Feedzirra::Feed.update(feed)
puts updated_feed.new_entries.first.published
all_new = Feedzirra::Feed.fetch_and_parse("http://rss.slashdot.org/Slashdot/slashdot")
puts all_new.entries.first.published
This results in:
Thu Apr 04 15:28:00 UTC 2013
Thu Apr 04 15:28:00 UTC 2013
Thu Apr 04 21:50:00 UTC 2013
The third line confirms that there are newer posts available, but Feed.update doesn't see them.
What am I doing wrong?

Related

Get Pacific Time in Ruby without Rails or installing gems

In Ruby, I want to get the current Pacific Time without having to worry about whether Daylight Saving Time is in effect. I can get either PST or PDT just fine:
require 'time'
Time.parse('7am PST')
Time.parse('7am PDT')
But I’m having no luck getting Pacific time in general. I’ve tried Pacific, US/Pacific, PT, and a bunch of others.
Is there a way to figure out whether the current Pacific Time is PST or PDT using a vanilla Ruby installation (i.e. no Rails and no downloaded gems)?
The ENV['TZ'] variable can be set so that Ruby's Time works in the specified time zone:
ENV['TZ'] = 'US/Pacific'
Time.now # => 2018-09-09 07:22:30 -0700
Time.local(2018, 9, 9, 7, 20) # => 2018-09-09 07:20:00 -0700
Time.local(2018, 3, 9, 7, 20) # => 2018-03-09 07:20:00 -0800
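With TZ set this way, plain Ruby can also tell you which abbreviation is currently in effect; a small sketch using only core Time methods:
ENV['TZ'] = 'US/Pacific'
Time.now.zone # => "PDT" during Daylight Saving Time, "PST" otherwise
Time.now.dst? # => true while Daylight Saving Time is in effect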

How to Backup & Restore ElasticSearch

I have been trying the following methods to back up and restore an ElasticSearch cluster from one server to another, with little success. I have not used a backup process thus far, and I would like to move my whole ElasticSearch cluster from a small 2GB cluster to a 15GB cluster. I used the following methods.
Using taskrabbit/elasticsearch-dump - I was able to export the complete database to a backup.json file; however, when restoring backup.json it gave me the following output. After researching the output further, I understood that the plug-in's bulk input was not fully developed.
./bin/elasticdump --all=true --input=/home/user/backup.json --output=http://192.168.0.213:9200/ --type=data
Thu, 09 Feb 2017 06:43:29 GMT | starting dump
Thu, 09 Feb 2017 06:43:29 GMT | got 61 objects from source file (offset: 0)
Thu, 09 Feb 2017 06:43:29 GMT | sent 61 objects to destination elasticsearch, wrote 0
Thu, 09 Feb 2017 06:43:29 GMT | got 0 objects from source file (offset: 61)
Thu, 09 Feb 2017 06:43:29 GMT | Total Writes: 0
Thu, 09 Feb 2017 06:43:29 GMT | dump complete
Using elasticsearch-tools (es-export-bulk & es-import-bulk) I was again able to back up the JSON successfully, but the import failed once more with an error:
"statusCode":400,"response":"{"error":{
I used the examples from es-export-bulk.
Using the ElasticSearch built-in Snapshot & Restore.
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
"type": "fs",
"settings": {"location": "/home/shawn/backup", "compress": true}
}'
I believe I'm missing something, as the execution gives me the following error. Do I need to create /_snapshot/my_backup first? If so, how?
{"error":{"root_cause":[
{"type":"repository_exception",
"reason":"[my_backup] location [/home/shawn/backup] doesn't match any of the locations specified by path.repo because this setting is empty"
}],
"type":"repository_exception","reason":"[my_backup] failed to create repository",
"caused_by":
{"type":"creation_exception","reason":"Guice creation errors:\n\n1) Error injecting constructor, RepositoryException[[my_backup] location [/home/shawn/backup] doesn't match any of the locations specified by path.repo because this setting is empty]\n at org.elasticsearch.repositories.fs.FsRepository.<init>(Unknown Source)\n while locating org.elasticsearch.repositories.fs.FsRepository\n while locating org.elasticsearch.repositories.Repository\n\n1 error","caused_by":{"type":"repository_exception","reason":"[my_backup] location [/home/shawn/backup] doesn't match any of the locations specified by path.repo because this setting is empty"}}},"status":500}
You are creating your _snapshot/my_backup just fine. You only need to add a line to /etc/elasticsearch/elasticsearch.yml:
path.repo: ["/home/shawn/backup"]
which is the actual location of your snapshot repository, and then restart Elasticsearch.
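Once the repository is registered on both clusters, the move itself is just a snapshot on the old cluster, a copy of the repository directory to the new server, and a restore there. A minimal sketch in Ruby using Net::HTTP, where the snapshot name snapshot_1 is only an example and the hosts are assumed to match the setup above:
require 'net/http'
require 'uri'

# Small helper: send an HTTP request with an empty body to the given URL.
def es_request(verb, url)
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(verb.new(uri)) }
end

# On the old cluster: snapshot every index into the registered repository
# and wait until the snapshot has finished.
es_request(Net::HTTP::Put,
           'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true')

# Copy /home/shawn/backup to the new server, register the same repository there
# (the path.repo line plus the same repository PUT as before), then restore:
es_request(Net::HTTP::Post,
           'http://192.168.0.213:9200/_snapshot/my_backup/snapshot_1/_restore')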

l10n support in ruby

I am able to parse localized dates using Python's locale module and the POSIX localization database:
import locale, datetime
locale.setlocale(locale.LC_TIME, 'tr_TR.UTF-8')
print datetime.datetime.strptime("1 Haziran 2014", "%d %B %Y")
Edit: This example loads the locale and datetime modules and parses the localized date into an instance of Python's datetime class. I'm looking specifically for Ruby code that can parse localized dates using the POSIX database.
Is there any equivalent of this in Ruby? If there is a Ruby library like Python's locale module or Boost.Locale in C++, can you give example code? I tried the gettext and locale gems (I set the current locale and tried Time.strptime, which failed).
I do not expect to write custom gsub calls or parse an i18n config file; I am asking for code that uses the POSIX database to parse dates.
You will need two custom gems in your Gemfile:
Chronic:
$ git clone git://github.com/mojombo/chronic.git
$ cd chronic && gem build chronic.gemspec
$ gem install chronic-*.gem
Chronic-l10n:
$ git clone git://github.com/luan/chronic-l10n.git
$ cd chronic-l10n && gem build chronic-l10n.gemspec
$ gem install chronic-l10n-*.gem
Usage:
require 'chronic'
require 'chronic-l10n'
Time.now #=> Sun Aug 27 23:18:25 PDT 2006
Chronic.locale = :'pt-BR'
Chronic.parse('amanhã')
#=> Mon Aug 28 12:00:00 PDT 2006
Chronic.parse('segunda', :context => :past)
#=> Mon Aug 21 12:00:00 PDT 2006
Chronic.parse('essa terça 5:00')
#=> Tue Aug 29 17:00:00 PDT 2006
Chronic.parse('essa terça 5:00', :ambiguous_time_range => :none)
#=> Tue Aug 29 05:00:00 PDT 2006
Chronic.parse('27 de maio', :now => Time.local(2000, 1, 1))
#=> Sat May 27 12:00:00 PDT 2000
Chronic.parse('27 de maio', :guess => false)
#=> Sun May 27 00:00:00 PDT 2007..Mon May 28 00:00:00 PDT 2007
Chronic.parse('6/4/2012', :endian_precedence => :little)
#=> Fri Apr 06 00:00:00 PDT 2012
It seems there is currently no Ruby gem or code that can load the POSIX database and parse dates using strptime-style flags. (A copy-pasted answer is not accepted.)

How to create a new timezone in Ruby?

I want to create a timezone definition for UTC+n without any DST changes for our testing purposes, but haven't figured out how to do it. I'm using ActiveSupport::TimeZone, but it seems that it has a hard-coded list of time zones which cannot be programmatically extended.
In particular, however I try to create a new timezone, it is always missing DST information. If I provide an existing timezone as a basis, it ignores the time offset I provided instead.
Below are some of my attempts to create a timezone at UTC+1:
> ActiveSupport::TimeZone[1].now
=> Tue, 06 Aug 2013 12:39:35 CEST +02:00
> ActiveSupport::TimeZone.create("UTC", 3600).now
=> Tue, 06 Aug 2013 10:39:40 UTC +00:00
> ActiveSupport::TimeZone.create("foo", 3600).now
TZInfo::InvalidTimezoneIdentifier: cannot load such file -- tzinfo/definitions/foo
> ActiveSupport::TimeZone.create("foo", 3600, TZInfo::Timezone.get("UTC")).now
=> Tue, 06 Aug 2013 10:39:48 UTC +00:00
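One variation the attempts above do not cover, offered only as a hedged sketch: the tz database ships fixed-offset zones named Etc/GMT-n (note the inverted sign, so Etc/GMT-1 is UTC+1) that carry no DST rules, and one of those can be passed as the TZInfo basis in the same way as the last attempt:
> ActiveSupport::TimeZone.create("UTC+1", 3600, TZInfo::Timezone.get("Etc/GMT-1")).now
Assuming that identifier is present in your tzinfo data, this should report a fixed +01:00 offset year-round, since the Etc zones have no DST transitions.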

Getting specific part of text from file based on in-line date

I am not savvy at all when it comes to scripting. The script I have basically dumps the results of the git log command into a file.
However, I would only like to show the lines for the day the script was run. So if I run myscript.sh on Thu Jun 20, I want to see all lines from the top of the file down until Wed Jun 19.
Here is what the file looks like:
commit 8da0dd9bsd23899d11b4ee7348af0640b98ed4b17
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 20 12:08:59 2013 -0400
Testing Git push 13 6
Multiple lines
commit aca564549f91329fcfa9a9f908f7fdeffa83f139b
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 20 12:01:48 2013 -0400
Testing Git push 13 5
commit b80c51b32f48364c2108588aff4c9e12fbb78370b
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 20 11:59:57 2013 -0400
Testing Git push 13 4
commit c4f8f8d4196f7c0f2deaf8g0ecc61797e7b8afdd9
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Wed Jun 19 11:48:37 2013 -0400
Testing Git push 13 3
commit 9a296b2273528868e3e4dc19310fa802daf76b1f3
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Wed Jun 19 11:45:49 2013 -0400
Testing Git push 13 2
commit 55cb8f2399242f051f577a042713a402137df4456
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Sat Jun 15 11:40:48 2013 -0400
Testing Git push 13 1
commit a48e59ec1de227cc2878dce3330ge7776336eb289
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 13 11:28:56 2013 -0400
Switched datasource to SuprPakJ
Created WWButton and WWLabel (extends JButton and JLabel)
Designed Sales Order screen
commit 57ce2da4673a35f50a5146d43a1f1a969c590c8c9
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Tue Jun 11 08:20:58 2013 -0400
I tried searching, but the best I found was a sed -e command to print everything up to the first blank line, which isn't exactly what I need.
Any help is appreciated!
How does this work for you:
git log --since=yesterday
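Note that --since=yesterday covers roughly the last 24 hours rather than just the current calendar day. If the already-dumped file is all you have to work with, here is a minimal Ruby sketch of the filtering approach, assuming the dump lives in gitlog.txt (the filename is only an example); it prints whole commit blocks and stops at the first Date: line from an earlier day, since git log output is newest-first:
require 'date'

today = Date.today
block = []            # lines of the commit currently being read
from_today = true

File.foreach('gitlog.txt') do |line|
  if line.start_with?('commit ') && !block.empty?
    puts block        # the previous commit was from today, so print it
    block = []
  end
  block << line
  if line =~ /^Date:\s+(.+)$/
    from_today = (Date.parse($1) == today)   # calendar day in the Date: line
    break unless from_today                  # stop at the first older commit
  end
end
puts block if from_today && !block.empty?    # the last commit, if it was from today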
