I have a Ruby hash that looks something like this:
myhash = { title: 'http://google.com'}
I'm trying to add this to a yaml file like this:
params['myhash'] = myhash
File.open('config.yaml', 'w') do |k|
k.write params.to_yaml
end
The problem is that YAML removes the quotes around the links even though they are needed (the links contain ':').
According to several questions on Stack Overflow, YAML should only drop the quotes when they are not needed.
I found a solution, but it's really ugly and I'd prefer not to use it if there is an alternative.
I'd expect YAML to include the quotes in this case. Is there any reason why it doesn't?
Note: the links are dynamically created
Quotes aren't necessary for your example string. From the spec:
Normally, YAML insists the “:” mapping value indicator be separated from the value by white space. A benefit of this restriction is that the “:” character can be used inside plain scalars, as long as it is not followed by white space.
For example:
h = { value1: 'quotes: needed', value2: 'quotes:not needed' }
puts h.to_yaml
Results in:
---
:value1: 'quotes: needed'
:value2: quotes:not needed
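To double-check, a round-trip in plain Ruby (using Psych, the default YAML engine) shows that the unquoted URL survives a dump/load cycle unchanged:

```ruby
require 'yaml'

# Psych leaves the URL unquoted because the ':' in "http://..."
# is not followed by whitespace, so the plain scalar is unambiguous.
h = { 'title' => 'http://google.com' }
dumped = h.to_yaml
loaded = YAML.safe_load(dumped)
```

The loaded hash equals the original, so the missing quotes are cosmetic, not a correctness problem.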
After a couple of hours I found it easier to do this in Python.
usage: python quotes.py *.yml
The script uses the literal block style (|) if a string contains '\n'.
It uses ruamel instead of the plain yaml lib, which seems not to handle some UTF-8 entries.
from ruamel import yaml
import io
import sys


class quote_or_literal(unicode):
    pass


def str_presenter(dumper, data):
    if data.count("\n"):  # check for multiline string
        return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
    else:
        return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='"')


yaml.add_representer(quote_or_literal, str_presenter)


def quote_dict(d):
    new = {}
    for k, v in d.items():
        if isinstance(v, dict):
            v = quote_dict(v)
        else:
            v = quote_or_literal(v)
        new[k] = v
    return new


def ensure_quotes(path):
    with io.open(path, 'r', encoding='utf-8') as stream:
        a = yaml.load(stream, Loader=yaml.Loader)
    a = quote_dict(a)
    with io.open(path, 'w', encoding='utf-8') as stream:
        yaml.dump(a, stream, allow_unicode=True,
                  width=1000, explicit_start=True)


if __name__ == "__main__":
    for path in sys.argv[1:]:
        ensure_quotes(path)
I want to write a multiline string with two lines, each uniformly indented by 8 spaces. Ideally without needing to have those 8 spaces in the input, but I'm willing to do that; anything so that I get the result I want. I think I've tried the whole documentation: ', |, >, >-, >+, >8, ... Adding an 8-character indentation that isn't in the source is an optional extra, but so far it seems that in YAML you can have anything except what you actually typed.
• What is the combination for actual as-is multiline string without any yaml transformations (or say impossible if it's impossible)?
• What is the combination for actual as-is multiline string uniformly shifted to the right by N spaces (or say impossible if it's impossible)?
EDIT: specific example:
...
something: |
        #blahblah
        #blahblah
...
I want the field something to contain a 2-line string, each line containing #blahblah, each prefixed by 8 spaces. In JSON you would write (I know I can do that in YAML too, but I'd like to use the YAML way of writing a YAML file):
{"something": "        #blahblah\n        #blahblah"}
EDIT 2: providing minimal working example.
Java code:
import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;
public class Test {
public static void main(String[] args) {
Yaml yaml = new Yaml();
InputStream inputStream = Test.class.getClassLoader().getResourceAsStream("test.yaml");
Map<String, Object> obj = yaml.load(inputStream);
String s = (String) obj.get("a");
System.out.println(s);
}
}
for input:
a: |
    #abc
    #def
I will get:
#abc
#def
For input:
a: |2
    #abc
    #def
I will get:
  #abc
  #def
How can this be explained? If a positive number after | is for removing extra indentation, I understand that without a number it will remove all indentation. So if someone wants to preserve an indentation of 4 spaces, it seems they need to do some math: write the content with an indentation of 5 and request the removal of 1 using |1. No? Is this how it's supposed to work? What am I missing?
What is the combination for actual as-is multiline string without any yaml transformations (or say impossible if it's impossible)?
--- |2
  droggel
  jug
You'll get the string as-is, without the 2 spaces indentation specified by the indentation indicator. Since the indicator must be at least 1 (you can't give 0), there is no way to do that without indentation. Still, this seems to do what you want.
What is the combination for actual as-is multiline string uniformly shifted to the right by N spaces (or say impossible if it's impossible)?
YAML is not a programming language and can't do any kind of string transformation, so this is impossible.
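The same rule can be checked from Ruby (Psych); this is a sketch assuming the block content is indented 4 spaces under a top-level key, as in the question:

```ruby
require 'yaml'

# "|2" sets the indentation indicator to 2: exactly 2 of the 4 leading
# spaces count as indentation and are stripped, so 2 spaces survive
# on every content line.
doc = "a: |2\n    #abc\n    #def\n"
loaded = YAML.safe_load(doc)['a']
# loaded == "  #abc\n  #def\n"
```

With a bare `|` the indentation is instead detected from the first content line, so all 4 spaces would be stripped, which is exactly the asymmetry the question observed.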
I have a CSV file that, as a spreadsheet, looks like this:
I want to parse the spreadsheet with the headers at row 19. Those headers won't always start at row 19, so my question is: is there a simple way to parse this spreadsheet and specify which row holds the headers, say by using the "Date" string to identify the header row?
Right now, I'm doing this:
CSV.foreach(params['logbook'].tempfile, headers: true) do |row|
  Flight.create(row.to_hash)
end
but obviously that won't work because it doesn't pick up the right headers.
I feel like there should be a simple solution to this since it's pretty common to have CSV files in this format.
Let's first create the csv file that would be produced from the spreadsheet.
csv =<<-_
N211E,C172,2004,Cessna,172R,airplane,airplane
C-GPGT,C172,1976,Cessna,172M,airplane,airplane
N17AV,P28A,1983,Piper,PA-28-181,airplane,airplane
N4508X,P28A,1975,Piper,PA-28-181,airplane,airplane
,,,,,,
Flights Table,,,,,,
Date,AircraftID,From,To,Route,TimeOut,TimeIn
2017-07-27,N17AV,KHPN,KHPN,KHPN KHPN,17:26,18:08
2017-07-27,N17AV,KHSE,KFFA,,16:29,17:25
2017-07-27,N17AV,W41,KHPN,,21:45,23:53
_
FName = 'test.csv'
File.write(FName, csv)
#=> 395
We only want the part of the string that begins with "Date,". The easiest option is probably to extract the relevant text first. If the file is not humongous, we can slurp it into a string and then remove the unwanted bit.
str = File.read(FName).gsub(/\A.+?(?=^Date,)/m, '')
#=> "Date,AircraftID,From,To,Route,TimeOut,TimeIn\n2017-07-27,N17AV,
# KHPN,KHPN,KHPN KHPN,17:26,18:08\n2017-07-27,N17AV,KHSE,KFFA,,16:29,
# 17:25\n2017-07-27,N17AV,W41,KHPN,,21:45,23:53\n"
The regular expression that is gsub's first argument could be written in free-spacing mode, which makes it self-documenting:
/
\A # match the beginning of the string
.+? # match any number of characters, lazily
(?=^Date,) # match "Date," at the beginning of a line in a positive lookahead
/mx # multi-line and free-spacing regex definition modes
Now that we have the part of the file we want in the string str, we can use CSV::parse to create the CSV::Table object:
csv_tbl = CSV.parse(str, headers: true)
#=> #<CSV::Table mode:col_or_row row_count:4>
The option :headers => true is documented in CSV::new.
Here are a couple of examples of how csv_tbl can be used.
csv_tbl.each { |row| p row }
#=> #<CSV::Row "Date":"2017-07-27" "AircraftID":"N17AV" "From":"KHPN"\
# "To":"KHPN" "Route":"KHPN KHPN" "TimeOut":"17:26" "TimeIn":"18:08">
# #<CSV::Row "Date":"2017-07-27" "AircraftID":"N17AV" "From":"KHSE"\
# "To":"KFFA" "Route":nil "TimeOut":"16:29" "TimeIn":"17:25">
# #<CSV::Row "Date":"2017-07-27" "AircraftID":"N17AV" "From":"W41"\
# "To":"KHPN" "Route":nil "TimeOut":"21:45" "TimeIn":"23:53">
(I've used the character '\' to signify that the string continues on the following line, so that readers would not have to scroll horizontally to read the lines.)
csv_tbl.each { |row| p row["From"] }
# "KHPN"
# "KHSE"
# "W41"
Readers who want to know more about how Ruby's CSV class is used may wish to read Darko Gjorgjievski's piece, "A Guide to the Ruby CSV Library, Part 1 and Part 2".
You can use the smarter_csv gem for this. Parse the file once to determine how many rows you need to skip to get to the header row you want, and then use the skip_lines option:
header_offset = <code to determine number of lines above the header>
SmarterCSV.process(params['logbook'].tempfile, skip_lines: header_offset)
From this format, I think the easiest way is to detect an empty line that comes before the header line. That would also work under changes to the header text. In terms of CSV, that would mean a whole line that has only empty cell items.
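One way to sketch the header lookup (here keying on the "Date" cell, as the question suggests; the empty-line test would work similarly). The sample rows are trimmed from the data above:

```ruby
require 'csv'

# Locate the header row by its known first cell, then parse only
# from that row onward with headers enabled.
lines = <<~DATA.lines
  Flights Table,,,,,,
  Date,AircraftID,From,To,Route,TimeOut,TimeIn
  2017-07-27,N17AV,KHPN,KHPN,KHPN KHPN,17:26,18:08
DATA
header_offset = lines.index { |l| l.start_with?('Date,') }
table = CSV.parse(lines[header_offset..-1].join, headers: true)
```

For a real upload you would read `lines` with `File.readlines` on the tempfile instead of the heredoc.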
I am scraping data from a website and I need to iterate over its pages, but instead of a counter the URLs use an alphabetical index:
http://funny2.com/jokesb.htm
http://funny2.com/jokesc.htm
...
But I can't figure out how to include an [a-z] iterator. I tried
http://funny2.com/jokes^[a-z]+$.htm
which didn't work.
XPath doesn't support regular expressions. However, as Scrapy is built atop lxml, it supports some EXSLT extensions, in particular the re extension. You can use operations from EXSLT by prefixing them with the corresponding namespace, like this:
response.xpath('//a[re:test(@href, "jokes[a-z]+\.htm")]/@href')
Docs: https://doc.scrapy.org/en/latest/topics/selectors.html?highlight=selector#using-exslt-extensions
If you need just to extract the links, use LinkExtractor with regexp:
LinkExtractor(allow=r'/jokes[a-z]+\.htm').extract_links(response)
You can iterate through every letter in the alphabet and format that letter into some url template:
from string import ascii_lowercase
# 'abcdefghijklmnopqrstuvwxyz'
for char in ascii_lowercase:
    url = "http://funny2.com/jokes{}.htm".format(char)
In a Scrapy context, you need a way to increment the character in the URL: find it with a regex, figure out the next character in the alphabet, and put it into the current URL, something like:
import re
from string import ascii_lowercase
def parse(self, response):
    current_char = re.findall(r'jokes(\w)\.htm', response.url)[0]
    next_index = ascii_lowercase.index(current_char) + 1
    next_char = ascii_lowercase[next_index]
    next_url = re.sub(r'jokes\w\.htm', 'jokes{}.htm'.format(next_char), response.url)
    yield Request(next_url, self.parse2)
I'm using regex to grab parameters from an html file.
I've tested the regexp and it seems to be fine- it appears that the csv conversion is what's causing the issue, but I'm not sure.
Here is what I have:
mechanics_file = File.read(filename)
mechanics = mechanics_file.scan(/(?<=70%">)(.*)(?=<\/td)/)

id_file = File.read(filename)
id = id_file.scan(/(?<="propertyids\[]" value=")(.*)(?=")/)

puts id.zip(mechanics)

CSV.open('csvfile.csv', 'w') do |csv|
  id.zip(mechanics) { |row| csv << row }
end
The puts output looks like this:
2073
Acting
2689
Action / Movement Programming
But the contents of the csv look like this:
"[""2073""]","[""Acting""]"
"[""2689""]","[""Action / Movement Programming""]"
How do I get rid of all of the extra quotes and brackets? Am I doing something wrong in the process of writing to a csv?
This is my first project in ruby so I would appreciate a child-friendly explanation :) Thanks in advance!
String#scan returns an Array of Arrays (bold emphasis mine):
scan(pattern) → array
Both forms iterate through str, matching the pattern (which may be a Regexp or a String). For each match, a result is generated and either added to the result array or passed to the block. If the pattern contains no groups, each individual result consists of the matched string, $&. If the pattern contains groups, each individual result is itself an array containing one entry per group.
a = "cruel world"
# […]
a.scan(/(...)/) #=> [["cru"], ["el "], ["wor"]]
So, id looks like this:
id == [['2073'], ['2689']]
and mechanics looks like this:
mechanics == [['Acting'], ['Action / Movement Programming']]
id.zip(mechanics) then looks like this:
id.zip(mechanics) == [[['2073'], ['Acting']], [['2689'], ['Action / Movement Programming']]]
Which means that in your loop, each row looks like this:
row == [['2073'], ['Acting']]
row == [['2689'], ['Action / Movement Programming']]
CSV#<< expects an Array of Strings, or things that can be converted to Strings as an argument. You are passing it an Array of Arrays, which it will happily convert to an Array of Strings for you by calling Array#to_s on each element, and that looks like this:
[['2073'], ['Acting']].map(&:to_s) == [ '["2073"]', '["Acting"]' ]
[['2689'], ['Action / Movement Programming']].map(&:to_s) == [ '["2689"]', '["Action / Movement Programming"]' ]
Lastly, " is the string delimiter in CSV, and needs to be escaped by doubling it, so what actually gets written to the CSV file is this:
"[""2073""]", "[""Acting""]"
"[""2689""]", "[""Action / Movement Programming""]"
The simplest way to correct this, would be to flatten the return values of the scans (and maybe also convert the IDs to Integers, assuming that they are, in fact, Integers):
mechanics_file = File.read(filename)
mechanics = mechanics_file.scan(/(?<=70%">)(.*)(?=<\/td)/).flatten
id_file = File.read(filename)
id = id_file.scan(/(?<="propertyids\[]" value=")(.*)(?=")/).flatten.map(&:to_i)
CSV.open('csvfile.csv', 'w') do |csv|
  id.zip(mechanics) { |row| csv << row }
end
Another suggestion would be to forgo the Regexps completely and use an HTML parser to parse the HTML.
I would like to write a Ruby script (repl.rb) that can replace a string in a binary file (the string is defined by a regex) with a different string of the same length.
It works like a filter: it outputs to STDOUT, which can be redirected (ruby repl.rb data.bin > data2.bin), and the regex and replacement can be hardcoded. My approach is:
#!/usr/bin/ruby
fn = ARGV[0]
regex = /\-\-[0-9a-z]{32,32}\-\-/
replacement = "--0ca2765b4fd186d6fc7c0ce385f0e9d9--"
blk_size = 1024

File.open(fn, "rb") do |f|
  until f.eof?
    data = f.read(blk_size)
    data.gsub!(regex, replacement)
    print data
  end
end
My problem is when the string is positioned in the file in a way that it straddles the block boundary. For example, when blk_size=1024 and the first occurrence of the string begins at byte position 1000, I will not find it in the data variable, and the same can happen on the next read cycle. Should I process the whole file twice with different block sizes to avoid this worst-case scenario, or is there another approach?
I would posit that a tool like sed might be a better choice for this. That said, here's an idea: Read block 1 and block 2 and join them into a single string, then perform the replacement on the combined string. Split them apart again and print block 1. Then read block 3 and join block 2 and 3 and perform the replacement as above. Split them again and print block 2. Repeat until the end of the file. I haven't tested it, but it ought to look something like this:
File.open(fn, "rb") do |f|
  last_block, this_block = nil
  until f.eof?
    last_block, this_block = this_block, f.read(blk_size)
    data = "#{last_block}#{this_block}".gsub(regex, replacement)
    # on the first pass there is no previous block to emit yet
    last_block, this_block = data.slice!(0, last_block ? last_block.size : 0), data
    print last_block
  end
  print this_block
end
There's probably a nontrivial performance penalty for doing it this way, but it could be acceptable depending on your use case.
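An in-memory, testable version of the same overlap idea (StringIO stands in for the file, and blk_size is shrunk to 4 so the match actually straddles a block boundary; a same-length replacement is assumed, as in the question):

```ruby
require 'stringio'

# Join each pair of adjacent blocks before substituting, so a match
# crossing a block boundary is still seen whole.
def replace_across_blocks(io, regex, replacement, blk_size)
  out = +''
  last_block, this_block = nil
  until io.eof?
    last_block, this_block = this_block, io.read(blk_size)
    data = "#{last_block}#{this_block}".gsub(regex, replacement)
    # on the first pass there is nothing safe to emit yet, so hold it all back
    last_block, this_block = data.slice!(0, last_block ? last_block.size : 0), data
    out << last_block
  end
  out << this_block.to_s
end

result = replace_across_blocks(StringIO.new("xxxx--abcd--yyyy"),
                               /--[a-z]{4}--/, "--ABCD--", 4)
# result == "xxxx--ABCD--yyyy"
```

Here "--abcd--" spans blocks 2 and 3 and is still replaced, which is the failure mode the single-block version misses.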
Maybe a cheeky
f.pos = f.pos - replacement.size
at the end of the while loop, just before reading the next chunk.