I have the following ASCII table:
+---------------------+
|       Report        |
+----------+----------+
| Store    | Total    |
+----------+----------+
| A        | 2723     |
| B        | 7277     |
+----------+----------+
I need to update the total while there are updates running on my database.
How can I do that?
I already have the method that gets the updated total.
But how can I keep the total refreshed on the terminal screen?
You can achieve this using the following gems:
https://github.com/ruby/curses
https://github.com/tj/terminal-table
Example:
require 'terminal-table'
require 'curses'

Curses.init_screen
Curses.crmode
Curses.noecho
Curses.stdscr.keypad = true

begin
  loop do
    # Rebuild the table with fresh values on each iteration;
    # this is where you would call your method that fetches the updated total.
    table = Terminal::Table.new do |t|
      t << ['Random 1', Random.rand(1...10)]
      t.add_row ['Random 2', Random.rand(10...100)]
    end

    # Redraw at the top-left corner, then refresh the screen.
    Curses.setpos(0, 0)
    Curses.addstr(table.render.to_s)
    Curses.refresh
    sleep 1
  end
ensure
  Curses.close_screen
end
I've been trying to do this in SQL for about a month now, but I think it might be easier to do it with .NET LINQ.
The basics are as follows:
The query is supposed to return data from a date range, and return a concatenated list of player names and play times.
The concatenation would ONLY occur if a row's PlayEnd is within 30 minutes of the next player's PlayStart.
So if I have data like this:
Name    | PlayDate   | PlayStart | PlayEnd
------------------------------------------
player1 | 10/8/2018  | 08:00:00  | 09:00:00
player2 | 10/8/2018  | 09:10:00  | 10:10:00
player3 | 10/9/2018  | 10:40:00  | 11:30:00
player4 | 10/11/2018 | 08:30:00  | 08:37:00
player5 | 10/11/2018 | 08:40:00  | 08:50:00
player6 | 10/12/2018 | 09:00:00  | 09:45:00
player7 | 10/12/2018 | 09:50:00  | 10:10:00
player8 | 10/12/2018 | 10:30:00  | 12:20:00
player1 and player2 play times would be concatenated together like: player1, player2 = 08:00:00 - 10:10:00 for 10/8/2018
player3 would just be: player3 = 10:40:00 - 11:30:00 for 10/9/2018
player4 and player5 play times would be concatenated like: player4, player5 = 08:30:00 - 08:50:00 for 10/11/2018
player6 and player7 and player8 play times would be concatenated like: player6, player7, player8 = 09:00:00 - 12:20:00 for 10/12/2018
I've tried modifying the query below in many ways, but I just don't know how to compare one row of data with the next and then combine the two (or more) if needed.
var query = from pl in players
            select new PlaySession
            {
                Name = pl.Name,
                PlayDate = pl.PlayDate,
                PlayStart = pl.PlayStartTime,
                PlayEnd = pl.PlayEndTime
            };

var grouped = query
    .OrderBy(r => r.Name)
    .ThenBy(r => r.PlayDate)
    .ThenBy(r => r.PlayStart);
Now this is where I get confused:
I need to figure out the following:
how to compare the PlayDates of the various rows to make sure that they are the same date, like this: row1.PlayDate == row2.PlayDate
how to compare one row's PlayEnd with the next row's PlayStart, something like this: row2.PlayStart - row1.PlayEnd < 30 minutes
Is there a way to compare values across rows using LINQ?
Thanks!
As far as I'm concerned, it should be something like this:
List<ViewModel> playersGroupList = Players.GroupBy(p => p.PlayDate)
    .Select(group => new ViewModel
    {
        PlayDate = group.Key,
        Names = String.Join(", ", group.Select(g => g.Name).ToArray()),
        PlayDuration = group.Select(g => g.PlayStart).First() + " - " + group.Select(g => g.PlayEnd).Last()
    }).ToList();
And here ViewModel is as follows:
public class ViewModel
{
    public string PlayDate { get; set; }
    public string Names { get; set; }
    public string PlayDuration { get; set; }
}
Note: Some adjustment may be needed to fulfill your exact requirements (notably the 30-minute gap check), but the actual implementation should look like the above.
I'm not sure what the appropriate title for this question is, so if someone could help me with that as well, it would be nice.
I have a CSV file that looks something like
ID | Num
a | 1
a | 2
a | 3
b | 4
b | 5
c | 6
c | 7
I need the result to be:
ID | Num
a | 1,2,3
b | 4,5
c | 6,7
Currently, my solution is:
require 'csv'

ary = CSV.read('some_file')   # ary[0] is the header row
final = Array.new

id = ary[1][0]
numJoin = ary[1][1]

(2...ary.length).each do |i|
  if id == ary[i][0]
    numJoin = numJoin + "," + ary[i][1]
  else
    final << [id, numJoin]
    id = ary[i][0]
    numJoin = ary[i][1]
  end
end
final << [id, numJoin]   # don't forget the last group
It works, but I would like to learn other ways to solve this, as I think there should be simpler ways to do it.
Thanks in advance.
You can use group_by, which groups by the return value of the block passed to it, in this case, it's the ID.
ary = ary.group_by { |v| v[0] }
P.S. That file doesn't look like a CSV.
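To go from that grouping to the joined rows, a minimal sketch (using inline sample data in place of the parsed CSV rows):

```ruby
# Sample rows as they would come out of the parsed file (header stripped).
rows = [["a", "1"], ["a", "2"], ["a", "3"], ["b", "4"], ["b", "5"], ["c", "6"], ["c", "7"]]

# Group by ID, then join each group's numbers with commas.
final = rows.group_by { |id, _num| id }.map do |id, pairs|
  [id, pairs.map { |_id, num| num }.join(",")]
end
# final => [["a", "1,2,3"], ["b", "4,5"], ["c", "6,7"]]
```

This replaces the manual index bookkeeping entirely: no off-by-one handling and no special case for the last group.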
I'm trying to merge the values from two hashes, creating a new one. I've tried with
Hash[b.map{|k,v| [a[k],v]}]
but when an "a" value is empty (nil), it doesn't print b[k]. I've got something like this:
| A key | A value |   | B key | B value |        | C key   | C value |
| key   | value   |   | key   | value   |  ====> | B_value | A_value |
| key   | value   |   | key   | value   |  ====> | B_value | A_value |
| key   | nil     |   | key   | value   |  ====> MISSING
| key   | value   |   | key   | value   |  ====> | B_value | A_value |
The keys are the same in both hashes.
I need to see the nil values as well.
If I try to print in array format I can see everything (nil included):
p = a.map{|k,v| [b[k],v]}
Maybe map isn't the right tool here; is there something else that can give me the same result?
This is my code:
require 'yaml'

header_hostname = Hash.new
working_host = Hash.new

fileset.each do |file|
  header = YAML.load_file("output/#{file}")

  # Remember the header hash (the one whose values are the column names).
  header.each do |k_header, v_header|
    if v_header == "Hostname"
      header_hostname = header
    end
  end

  # Re-key the current hash by the header names.
  working_host = Hash[header.map { |k, v| [header_hostname[k], v] }]
  puts working_host
  File.open("tmp/working_hosts.txt", "a+") << working_host
end
The output of the resulting hash looks like:
...
Erogazione VlanID: '2390'
" SubnetorIP": 10.*.*.*
" Netmask": 255.255.255.240
" Gateway": 10.*.*.*
...
Backup VlanID: ''
Managment VlanID: ''
Privata HB VlanID: ''
Remote Console VlanID: ''
...
Hashes
Header = {"98"=>"Erogazione VlanID", "99"=>" SubnetorIP", "100"=>" Netmask", "101"=>" Gateway", "102"=>" Speed(f,g)", "103"=>" Bond(s/n)", "104"=>" Porte", "105"=>" Switch", "106"=>" Slot/Porte", "107"=>" PortePPanel", "108"=>" PortePPanel(bond)", "109"=>"Backup VlanID", "110"=>" SubnetorIP", "111"=>" Netmask", "112"=>" Gateway", "113"=>" Speed(f,g)", "114"=>" Porte", "115"=>" Switch", "116"=>" Slot/Porte", "117"=>" PortePPanel", "135"=>"Remote Console VlanID", "136"=>" SubnetorIP", "137"=>" Netmask", "138"=>" Gateway", "139"=>" Speed(f,g)", "140"=>" Porte", "141"=>" Switch", "142"=>" Slot/Porte", "143"=>" PortePPanel"}
Machine1 = {"98"=>"3315", "99"=>"10.*.*.*", "100"=>"255.255.255.240", "101"=>"10.*.*.*", "102"=>"g", "103"=>"", "104"=>"2.0", "105"=>"", "106"=>"", "107"=>"", "108"=>"", "109"=>"111", "110"=>"10.*.*.*", "111"=>"255.255.255.240", "112"=>"10.*.*.*", "113"=>"g", "114"=>"1.0", "115"=>"", "116"=>"", "117"=>"", "135"=>"111", "136"=>"10.*.*.*", "137"=>"255.255.255.240", "138"=>"10.*.*.*", "139"=>"", "140"=>"", "141"=>"", "142"=>"", "143"=>"" }
This is the output:
output = {"Erogazione VlanID"=>"3315", " SubnetorIP"=>"10.*.*.*", " Netmask"=>"255.255.255.240", " Gateway"=>"10.*.*.*", " Speed(f,g)"=>"", " Bond(s/n)"=>"", " Porte"=>"", " Switch"=>"", " Slot/Porte"=>"", " PortePPanel"=>"", " PortePPanel(bond)"=>"", "Backup VlanID"=>"111", "Remote Console VlanID"=>"111"}
Your approach seems to work fine, and will not produce a "missing" entry in the resulting Hash.
a = { a: 1, b: 2, c: nil, d: 4 }
b = { a: 5, b: 6, c: 7, d: 8 }
c = Hash[a.map{|k,v| [b[k],v]}]
# {5=>1, 6=>2, 7=>nil, 8=>4}
No issues for the a[:c] value, which is nil. It will generate the 7 => nil mapping, as b[:c] is 7.
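For completeness, the direction from the question (`Hash[b.map { |k, v| [a[k], v] }]`, with `a`'s values as keys) is where entries can seem to disappear: a nil value in `a` becomes a nil key, and if several values in `a` are nil, those entries collapse into a single nil key. A small sketch with the same sample hashes:

```ruby
a = { a: 1, b: 2, c: nil, d: 4 }
b = { a: 5, b: 6, c: 7, d: 8 }

# a's values become the keys; the nil value of a[:c] becomes a nil key.
c = Hash[b.map { |k, v| [a[k], v] }]
# c => {1=>5, 2=>6, nil=>7, 4=>8}
```

If that direction is needed, one option is to fall back to the original key when the looked-up value is nil, e.g. `[a[k].nil? ? k : a[k], v]`.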
I have a .csv file that, for simplicity, has two fields: ID and comment. IDs are duplicated across rows wherever the comment field hit the max character limit of whatever table it was generated from and another row was needed. I now need to merge the associated comments, creating one row for each unique ID, using Ruby.
To illustrate, I'm trying in Ruby, to make this:
ID | COMMENT
1 | fragment 1
1 | fragment 2
2 | fragment 1
3 | fragment 1
3 | fragment 2
3 | fragment 3
into this:
ID | COMMENT
1 | fragment 1 fragment 2
2 | fragment 1
3 | fragment 1 fragment 2 fragment 3
I've come close to finding a way to do this using inject({}) and a hash map, but I'm still working on getting all the data merged correctly. Meanwhile, my code is getting too complicated, with multiple hashes and arrays just to merge a few selected rows.
What's the best/simplest way to achieve this type of row merge? Could it be done with just arrays?
Would appreciate advice on how one would normally do this in Ruby.
Keep the headers and group by ID:
require 'csv'

rows = CSV.read 'comment.csv', :headers => true
rows.group_by { |row| row['ID'] }.values.each do |group|
  puts [group.first['ID'], group.map { |r| r['COMMENT'] } * ' '] * ' | '
end
You could index with 0 and 1 instead, but I think it's clearer to use the header field names.
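A self-contained variant of the same idea, parsing from a string so it runs without the file (with the real file, `CSV.read 'comment.csv', :headers => true` behaves the same):

```ruby
require 'csv'

data = "ID,COMMENT\n1,fragment 1\n1,fragment 2\n2,fragment 1\n"
rows = CSV.parse(data, headers: true)

# Group rows by ID, then join each group's comments with spaces.
lines = rows.group_by { |row| row['ID'] }.values.map do |group|
  [group.first['ID'], group.map { |r| r['COMMENT'] } * ' '] * ' | '
end
# lines => ["1 | fragment 1 fragment 2", "2 | fragment 1"]
```

(`array * ' '` is shorthand for `array.join(' ')`.)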
With the following CSV file, tmp.csv:
1,fragment 11
1,fragment 21
2,fragment 21
2,fragment 22
3,fragment 31
3,fragment 32
3,fragment 33
Try this (demonstrated using irb):
irb> require 'csv'
=> true
irb> h = Hash.new
=> {}
irb> CSV.foreach("tmp.csv") {|r| h[r[0]] = h.key?(r[0]) ? h[r[0]] + r[1] : r[1]}
=> nil
irb> h
=> {"1"=>"fragment 11fragment 21", "2"=>"fragment 21fragment 22", "3"=>"fragment 31fragment 32fragment 33"}
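To get space-separated fragments, as in the desired output, the accumulator only needs a separator when appending. A self-contained sketch (parsing from a string here instead of tmp.csv; with the file, `CSV.foreach("tmp.csv")` works the same way):

```ruby
require 'csv'

data = "1,fragment 11\n1,fragment 21\n2,fragment 21\n2,fragment 22\n"

# Accumulate comments per ID, inserting a space between fragments.
h = Hash.new
CSV.parse(data) do |r|
  h[r[0]] = h.key?(r[0]) ? h[r[0]] + " " + r[1] : r[1]
end
# h => {"1"=>"fragment 11 fragment 21", "2"=>"fragment 21 fragment 22"}
```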