I want to create a timezone definition for UTC+n without any DST changes for our testing purposes, but haven't figured out how to do it. I'm using ActiveSupport::TimeZone, but it seems that it has a hard-coded list of time zones which cannot be programmatically extended.
In particular, however I try to create a new timezone, it is always missing DST information, and if I provide an existing timezone as a basis, it ignores the offset I passed and uses the basis zone's offset instead.
Below are some of my attempts to create a timezone at UTC+1:
> ActiveSupport::TimeZone[1].now
=> Tue, 06 Aug 2013 12:39:35 CEST +02:00
> ActiveSupport::TimeZone.create("UTC", 3600).now
=> Tue, 06 Aug 2013 10:39:40 UTC +00:00
> ActiveSupport::TimeZone.create("foo", 3600).now
TZInfo::InvalidTimezoneIdentifier: cannot load such file -- tzinfo/definitions/foo
> ActiveSupport::TimeZone.create("foo", 3600, TZInfo::Timezone.get("UTC")).now
=> Tue, 06 Aug 2013 10:39:48 UTC +00:00
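One workaround that may do the trick, offered as a sketch rather than a confirmed answer: back the zone with one of TZInfo's fixed-offset identifiers from the Etc area, which carry no DST rules at all (note the inverted sign convention there, so Etc/GMT-1 means UTC+1). The display name "UTC+1" below is arbitrary.
require 'active_support/time'
require 'tzinfo'

# "Etc/GMT-1" is the IANA fixed-offset zone for UTC+1; it has no DST
# transitions, so the offset never changes.
utc_plus_one = ActiveSupport::TimeZone.create("UTC+1", 3600, TZInfo::Timezone.get("Etc/GMT-1"))
utc_plus_one.now  # => a time one hour ahead of UTC, all year round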
Related
In Ruby, I want to get the current Pacific Time without having to worry about whether it's in Daylight Saving Time or not. I can get either PST or PDT just fine:
Time.parse('7am PST')
Time.parse('7am PDT')
But I’m having no luck getting Pacific time in general. I’ve tried Pacific, US/Pacific, PT, and a bunch of others.
Is there a way to figure out whether the current Pacific Time is PST or PDT using a vanilla Ruby installation (i.e. no Rails or downloading gems)?
The ENV['TZ'] variable can be set so that Ruby's Time works in the specified timezone:
ENV['TZ'] = 'US/Pacific'
Time.now # => 2018-09-09 07:22:30 -0700
Time.local(2018, 9, 9, 7, 20) # => 2018-09-09 07:20:00 -0700
Time.local(2018, 3, 9, 7, 20) # => 2018-03-09 07:20:00 -0800
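To answer the PST-vs-PDT part directly: once ENV['TZ'] is set, core Time can report which abbreviation is currently in effect (a minimal sketch, no gems required):
ENV['TZ'] = 'US/Pacific'
Time.now.zone  # => "PDT" or "PST", whichever currently applies
Time.now.dst?  # => true while daylight saving time is in effect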
I have been trying the following methods to back up and restore an Elasticsearch cluster from one server to another, with little success. I have not used a backup process thus far, and I would like to move my whole Elasticsearch cluster from a small 2GB cluster to a 15GB cluster. I used the following methods.
Using taskrabbit/elasticsearch-dump - I was able to export the complete database to a backup.json file successfully; however, when restoring the backup.json, it gave me the following output. After researching the output further, I understood that the bulk input of the plug-in was not fully developed.
./bin/elasticdump --all=true --input=/home/user/backup.json --output=http://192.168.0.213:9200/ --type=data
Thu, 09 Feb 2017 06:43:29 GMT | starting dump
Thu, 09 Feb 2017 06:43:29 GMT | got 61 objects from source file (offset: 0)
Thu, 09 Feb 2017 06:43:29 GMT | sent 61 objects to destination elasticsearch, wrote 0
Thu, 09 Feb 2017 06:43:29 GMT | got 0 objects from source file (offset: 61)
Thu, 09 Feb 2017 06:43:29 GMT | Total Writes: 0
Thu, 09 Feb 2017 06:43:29 GMT | dump complete
Using elasticsearch-tools (es-export-bulk & es-import-bulk), I was again able to back up the JSON successfully, but the import failed once more with an error:
"statusCode":400,"response":"{"error":{
I used the examples from es-bulk-export
Using Elasticsearch's built-in Snapshot & Restore:
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
"type": "fs",
"settings": {"location": "/home/shawn/backup", "compress": true}
}'
I believe I'm missing something, as the execution gives me the following error. Do I need to create /_snapshot/my_backup? If so, how?
{"error":{"root_cause":[
{"type":"repository_exception",
"reason":"[my_backup] location [/home/shawn/backup] doesn't match any of the locations specified by path.repo because this setting is empty"
}],
"type":"repository_exception","reason":"[my_backup] failed to create repository",
"caused_by":
{"type":"creation_exception","reason":"Guice creation errors:\n\n1) Error injecting constructor, RepositoryException[[my_backup] location [/home/shawn/backup] doesn't match any of the locations specified by path.repo because this setting is empty]\n at org.elasticsearch.repositories.fs.FsRepository.<init>(Unknown Source)\n while locating org.elasticsearch.repositories.fs.FsRepository\n while locating org.elasticsearch.repositories.Repository\n\n1 error","caused_by":{"type":"repository_exception","reason":"[my_backup] location [/home/shawn/backup] doesn't match any of the locations specified by path.repo because this setting is empty"}}},"status":500}
You are creating your _snapshot/my_backup just fine; you only need to add a line to /etc/elasticsearch/elasticsearch.yml:
path.repo: ["/home/shawn/backup"]
That is the actual location of your snapshot repository. Then restart Elasticsearch and re-run the same PUT request above; the repository registration should now succeed.
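From there, a rough sketch of the remaining migration steps, assuming the repository is registered the same way on both clusters and both can reach the same backup data (a shared filesystem or a copied directory); the snapshot name snapshot_1 is just a placeholder:
# take a snapshot on the old cluster and wait for it to finish
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'
# on the new cluster, restore that snapshot from the registered repository
curl -XPOST 'http://192.168.0.213:9200/_snapshot/my_backup/snapshot_1/_restore'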
I am able to parse localized dates using the Python locale module and the POSIX localization database:
import locale, datetime
locale.setlocale(locale.LC_TIME, 'tr_TR.UTF-8')
print datetime.datetime.strptime("1 Haziran 2014", "%d %B %Y")
===
Edit
This example loads the locale and datetime modules and parses the localized date into an instance of Python's datetime class. I'm looking specifically for Ruby code that can parse localized dates using the POSIX database.
===
Is there any equivalent of this in Ruby? If there is a Ruby library like Python's locale module or Boost.Locale in C++, can you give example code? I tried the gettext gem and the locale gem (I set the current locale and tried Time.strptime, which failed).
I do not expect to do custom gsub calls or parse an i18n config file. I am asking for code that uses the POSIX database to parse dates.
You will need 2 custom gems in your Gemfile
Chronic:
$ git clone git://github.com/mojombo/chronic.git
$ cd chronic && gem build chronic.gemspec
$ gem install chronic-*.gem
Chronic-l10n:
$ git clone git://github.com/luan/chronic-l10n.git
$ cd chronic-l10n && gem build chronic-l10n.gemspec
$ gem install chronic-l10n-*.gem
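If pulling them through the Gemfile is preferred over building the gems by hand, the equivalent (pointing Bundler at the same two repositories) would be roughly:
gem 'chronic',      git: 'git://github.com/mojombo/chronic.git'
gem 'chronic-l10n', git: 'git://github.com/luan/chronic-l10n.git'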
Usage:
require 'chronic'
require 'chronic-l10n'
Time.now #=> Sun Aug 27 23:18:25 PDT 2006
Chronic.locale = :'pt-BR'
Chronic.parse('amanhã')
#=> Mon Aug 28 12:00:00 PDT 2006
Chronic.parse('segunda', :context => :past)
#=> Mon Aug 21 12:00:00 PDT 2006
Chronic.parse('essa terça 5:00')
#=> Tue Aug 29 17:00:00 PDT 2006
Chronic.parse('essa terça 5:00', :ambiguous_time_range => :none)
#=> Tue Aug 29 05:00:00 PDT 2006
Chronic.parse('27 de maio', :now => Time.local(2000, 1, 1))
#=> Sat May 27 12:00:00 PDT 2000
Chronic.parse('27 de maio', :guess => false)
#=> Sun May 27 00:00:00 PDT 2007..Mon May 28 00:00:00 PDT 2007
Chronic.parse('6/4/2012', :endian_precedence => :little)
#=> Fri Apr 06 00:00:00 PDT 2012
It seems there is currently no Ruby gem or code that can load the POSIX database and parse dates using strptime-style flags. (A copy-pasted answer is not accepted.)
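For what it's worth, one possible route, offered strictly as a sketch: libc's own strptime(3) honors LC_TIME, so on a machine where the tr_TR.UTF-8 locale is generated it can be reached from Ruby through the ffi gem and made to reuse the POSIX localization data directly. The struct layout and the LC_TIME constant below are glibc/Linux assumptions, not portable values.
require 'ffi'

module LibC
  extend FFI::Library
  ffi_lib FFI::Library::LIBC

  LC_TIME = 2  # glibc value; differs on other platforms

  # glibc's struct tm layout, including the tm_gmtoff/tm_zone extensions
  class Tm < FFI::Struct
    layout :tm_sec,    :int,  :tm_min,  :int, :tm_hour,  :int,
           :tm_mday,   :int,  :tm_mon,  :int, :tm_year,  :int,
           :tm_wday,   :int,  :tm_yday, :int, :tm_isdst, :int,
           :tm_gmtoff, :long, :tm_zone, :string
  end

  attach_function :setlocale, [:int, :string], :string
  attach_function :strptime,  [:string, :string, :pointer], :string
end

LibC.setlocale(LibC::LC_TIME, 'tr_TR.UTF-8')     # assumes the locale is installed
tm = LibC::Tm.new
LibC.strptime('1 Haziran 2014', '%d %B %Y', tm)  # returns nil if parsing fails
puts Time.new(tm[:tm_year] + 1900, tm[:tm_mon] + 1, tm[:tm_mday])
# => 2014-06-01 00:00:00 in the local zone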
I am not savvy at all when it comes to scripting. The script I have basically dumps the results of the git log command into a file.
However, I would like to show only the lines for the day the script was run. So if I run myscript.sh on Thu Jun 20, I want to see all lines from the top of the file down to where the Wed Jun 19 entries begin.
Here is what the file looks like:
commit 8da0dd9bsd23899d11b4ee7348af0640b98ed4b17
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 20 12:08:59 2013 -0400
Testing Git push 13 6
Multiple lines
commit aca564549f91329fcfa9a9f908f7fdeffa83f139b
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 20 12:01:48 2013 -0400
Testing Git push 13 5
commit b80c51b32f48364c2108588aff4c9e12fbb78370b
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 20 11:59:57 2013 -0400
Testing Git push 13 4
commit c4f8f8d4196f7c0f2deaf8g0ecc61797e7b8afdd9
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Wed Jun 19 11:48:37 2013 -0400
Testing Git push 13 3
commit 9a296b2273528868e3e4dc19310fa802daf76b1f3
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Wed Jun 19 11:45:49 2013 -0400
Testing Git push 13 2
commit 55cb8f2399242f051f577a042713a402137df4456
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Sat Jun 15 11:40:48 2013 -0400
Testing Git push 13 1
commit a48e59ec1de227cc2878dce3330ge7776336eb289
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Thu Jun 13 11:28:56 2013 -0400
Switched datasource to SuprPakJ
Created WWButton and WWLabel (extends JButton and JLabel)
Designed Sales Order screen
commit 57ce2da4673a35f50a5146d43a1f1a969c590c8c9
Author: Denis <Denis#WWOscar.Waudware.local>
Date: Tue Jun 11 08:20:58 2013 -0400
I tried searching, but the best I found was a sed -e command to print everything up to the first blank line, which isn't exactly what I need.
Any help is appreciated!
How does this work for you:
git log --since=yesterday
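Note that --since=yesterday is relative to the current time, so it can pull in some of yesterday's commits as well. Git's date parser also understands midnight, so limiting the output to the day the script runs should be as simple as:
git log --since=midnight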
I'm working on a piece of Outlook automation that takes mail placed in a specific folder and exports it as an RFC822 formatted mail message. This output file will then be fed to the SpamAssassin tool sa-learn.exe.
For Each oItem In oFolder.Items
If TypeOf oItem Is RDOMail Then
Set oMailItem = oItem
' Determine the fully qualified path to save the file
sFilePath = GetFilePath(oMailItem, "//Mailbox/SpamAssassin/Spam")
'Save the RFC822 format message
oMailItem.SaveAs sFilePath, rdoSaveAsType.olRFC822
DoEvents
oMailItem.UnRead = False
oMailItem.Delete
End If
DoEvents 'Let the Outlook UI thread breathe a bit
Next 'for each
Here are the message headers from a message saved using this code with redacted e-mail addresses.
From: "Swift Learning" <**********#***.*************.***>
To: <*****#********.***>
Subject: Foreign Languages are easily learned in this program
Date: Tue, 31 Jul 2012 10:11:38 -0700
Message-ID: <8518205138200566845#smx.jacksonpotts2.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="----=_NextPart_000_13AE_01CD6F0A.C9624870"
X-Mailer: Microsoft Outlook 14.0
Thread-Index: AQF4Lq/07oPqx1sKGPa5FKQSalUQXg==
What's missing from this are the relay headers that should look something like this.
Received: from [216.104.163.151] by mail.clarkzoo.org (ArGoSoft Mail Server .NET v.1.0.8.4) with ESMTP (EHLO smtp02-forward-1.daemonmail.net)
for <*****#*********.***>; Tue, 31 Jul 2012 12:36:25 -0700
Received: from mxw03.daemonmail.net (unknown [216.104.161.13])
by smtp02-forward-1.daemonmail.net (Postfix) with ESMTP id 4447681FDB;
Tue, 31 Jul 2012 12:18:01 -0700 (PDT)
Received: from localhost (localhost [127.0.0.1])
by mxw03.daemonmail.net (Postfix) with ESMTP id 748CF6A0DD
for <***#******************.***>; Tue, 31 Jul 2012 12:17:52 -0700 (PDT)
How can I capture those relay headers?
Update:
Looking into this further, the raw headers as stored in the MailItem in Outlook are radically different from the RFC822 format as saved by Redemption.
Here's a side-by-side comparison.
Raw headers from the Properties dialog in Outlook.
Received: from [108.174.54.7] by mail.clarkzoo.org (ArGoSoft Mail Server .NET v.1.0.8.4) with ESMTP (EHLO upgraded.the-ameri-credit-review.com)
for <*****#********.***>; Wed, 01 Aug 2012 07:34:15 -0700
Date: Wed, 1 Aug 2012 09:55:57 -0400
Subject: Your TransUnion, Equifax, and Experian Scores May Have Changed
From: "Credit Check" <info#the-ameri-credit-review.com>
To: <*****#********.***>
Message-ID: <132692318349a4a4158c108651c1428c#upgraded.the-ameri-credit-review.com>
Mime-Version: 1.0
Content-Type: text/html; charset=us-ascii
Content-Transfer-Encoding: 8bit
Content-Disposition: inline
SPF-Received: softfail
X-FromIP: 108.174.54.7
The headers from the RFC822 formatted file:
From: "Credit Check" <info#the-ameri-credit-review.com>
To: <*****#********.***>
Subject: Your TransUnion, Equifax, and Experian Scores May Have Changed
Date: Wed, 1 Aug 2012 06:55:57 -0700
Message-ID: <132692318349a4a4158c108651c1428c#upgraded.the-ameri-credit-review.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="----=_NextPart_000_011B_01CD6FC4.403990C0"
X-Mailer: Microsoft Outlook 14.0
Thread-Index: AQIRB+hjg86/OeRgMx9VYijSdeLwhw==
Those headers are only superficially the same.
The relay headers are missing
The Date and Subject headers are in different positions
The Date header has been modified to represent the local time zone
Content-Type has changed from "text/html; charset=us-ascii" to "multipart/alternative;"
Headers have been added and headers have been removed
The better question is: how does one capture the original headers of the message?
I know this is an old post, but try saving as rdoSaveAsType.olRFC822_Redemption instead of rdoSaveAsType.olRFC822. It seems to preserve all of the headers.
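Applied to the loop in the question, that is a one-argument change (using the constant named above; availability depends on the Redemption version installed):
'Save the RFC822 format message; this variant appears to preserve the original headers
oMailItem.SaveAs sFilePath, rdoSaveAsType.olRFC822_Redemption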