I have seen many similar questions, but nothing I can find that works for this case.
I am trying to calculate a "running" total for the number of support tickets in my queue on any given day prior to today. I know the current (today's) total queue size, and for each day I know how much was added to or removed from that queue.
For example:
Date      Created < Known   Completed < Known   Growth < Known   Total Size < Unknown
10-Jan                                                           100
09-Jan    79                77                  +2               102
08-Jan    97                92                  +5               107
07-Jan    64                67                  -3               104
06-Jan    70                66                  -4               100
05-Jan    78                80                  +2               102
04-Jan    90                82                  -8               94
03-Jan    74                68                  +6               100
02-Jan    83                87                  -4               106
01-Jan    80                70                  +10              116
10-Jan is the only known Total Size value. The remaining totals need to be calculated from it.
In Excel, this would be a simple formula such as D3 = D2 + C3 (each row's Total is the Total of the row above plus that row's Growth).
(Calculated column on 'Table' table)
Running Total =
-- Sum of Growth for the current date and every later date
VAR CurrentDate = 'Table'[Date]
VAR RunningGrowth =
    CALCULATE(
        SUM('Table'[Growth < Known]),
        REMOVEFILTERS('Table'),
        'Table'[Date] >= CurrentDate
    )
-- Latest date in the table (i.e. today); SELECTEDVALUE would return
-- blank if there were more than one row for that date
VAR MaxDate = CALCULATE(MAX('Table'[Date]), REMOVEFILTERS('Table'))
VAR TotalSizeInMaxDate =
    CALCULATE(
        SELECTEDVALUE('Table'[Total Size < Unknown]),
        REMOVEFILTERS('Table'),
        'Table'[Date] = MaxDate
    )
-- Result
VAR Result = TotalSizeInMaxDate + RunningGrowth
RETURN Result
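As a sanity check: for 09-Jan, RunningGrowth sums the Growth of 09-Jan and 10-Jan (+2 plus a blank) and TotalSizeInMaxDate is 100, so the column returns 100 + 2 = 102, matching the expected table above.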
I am trying to marshal a Go struct to bytes (via gob encoding) and then unmarshal those bytes back into the original object. I am getting an unexpected result (the object does not receive the correct values). Please help me correct the program.
Input:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type object struct {
    name string
    age  int
}

func main() {
    inputObject := object{age: 22, name: "Zloy"}
    fmt.Println(inputObject)

    var inputBuffer bytes.Buffer
    gob.NewEncoder(&inputBuffer).Encode(inputObject)
    fmt.Println(inputBuffer)

    destBytes := inputBuffer.Bytes()
    fmt.Println("\n", destBytes, "\n")

    var outputBuffer bytes.Buffer
    outputBuffer.Write(destBytes)
    fmt.Println(outputBuffer)

    var outputObject object
    gob.NewDecoder(&outputBuffer).Decode(&outputObject)
    fmt.Println(outputObject)
}
Output:
{Zloy 22}
{[18 255 129 3 1 1 6 111 98 106 101 99 116 1 255 130 0 0 0] 0 0}
[18 255 129 3 1 1 6 111 98 106 101 99 116 1 255 130 0 0 0]
{[18 255 129 3 1 1 6 111 98 106 101 99 116 1 255 130 0 0 0] 0 0}
{ 0}
Expected Output:
{Zloy 22}
{[18 255 129 3 1 1 6 111 98 106 101 99 116 1 255 130 0 0 0] 0 0}
[18 255 129 3 1 1 6 111 98 106 101 99 116 1 255 130 0 0 0]
{[18 255 129 3 1 1 6 111 98 106 101 99 116 1 255 130 0 0 0] 0 0}
{Zloy 22}
You need to capitalize the field names so that they are exported; the gob package can only encode exported struct fields:
type object struct {
    Name string
    Age  int
}
https://play.golang.org/p/_YqSmeDi6oH
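For completeness, here is the corrected program in full, with error checking added (the Encode and Decode error returns were silently ignored in the original):

package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "log"
)

// Exported (capitalized) fields are visible to encoding/gob.
type object struct {
    Name string
    Age  int
}

func main() {
    inputObject := object{Age: 22, Name: "Zloy"}
    fmt.Println(inputObject)

    var buffer bytes.Buffer
    if err := gob.NewEncoder(&buffer).Encode(inputObject); err != nil {
        log.Fatal("encode error: ", err)
    }

    var outputObject object
    if err := gob.NewDecoder(&buffer).Decode(&outputObject); err != nil {
        log.Fatal("decode error: ", err)
    }
    fmt.Println(outputObject) // prints: {Zloy 22}
}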
What I want to do is read a JSON-format file, modify its contents, then write the modified content back to the file.
cJSON *root, *basicpara;
char *out;

root = dofile("basicparameter.cfg");
out = cJSON_Print(root);
printf("before modify:%s\n", out);
free(out);

basicpara = cJSON_GetObjectItem(root, "basicparameter");
cJSON_GetObjectItem(basicpara, "mode")->valueint = 0;
cJSON_GetObjectItem(basicpara, "TimeoutPoweron")->valueint = 10;

out = cJSON_Print(root);
printf("after modify:%s\n", out);
free(out);
//write_file("basicparameter.cfg",out);
cJSON_Delete(root);
I am confused as to why both outputs are the same...
before modify:{
"basicparameter": {
"mode": 1,
"nBefore": 2,
"nAfter": 2,
"LuxAutoOn": 50,
"LuxAutoOff": 16,
"TimeoutPoweron": 30
}
}
after modify:{
"basicparameter": {
"mode": 1,
"nBefore": 2,
"nAfter": 2,
"LuxAutoOn": 50,
"LuxAutoOff": 16,
"TimeoutPoweron": 30
}
}
Please use the cJSON_SetNumberValue macro for setting the number. The problem is that you are only setting the valueint property, but printing relies on the valuedouble property.
Having both valueint and valuedouble in cJSON was a terrible design decision and will probably confuse many people in the future as well.
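Applied to the code above, the two assignments would look like this (cJSON_SetNumberValue updates valuedouble, which printing relies on, as well as valueint):

/* Sets both valuedouble and valueint, so cJSON_Print sees the change */
cJSON_SetNumberValue(cJSON_GetObjectItem(basicpara, "mode"), 0);
cJSON_SetNumberValue(cJSON_GetObjectItem(basicpara, "TimeoutPoweron"), 10);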
I'm trying to encode and decode a struct which contains an interface{} as a field.
The problem is that the encoding works fine, but when I try to decode the data, the value comes out as { <nil>}.
It actually works if I change Data interface{} to Data SubType, but that is not a solution for me, because I want to cache the results of database queries, which have different types depending on the query (e.g. Users or Cookies).
Minimal working example
Source
http://play.golang.org/p/aX7MIfqrWl
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type Data struct {
    Name string
    Data interface{}
}

type SubType struct {
    Foo string
}

func main() {
    // Encode
    encodeData := Data{
        Name: "FooBar",
        Data: SubType{Foo: "Test"},
    }
    mCache := new(bytes.Buffer)
    encCache := gob.NewEncoder(mCache)
    encCache.Encode(encodeData)
    fmt.Printf("Encoded: ")
    fmt.Println(mCache.Bytes())

    // Decode
    var data Data
    pCache := bytes.NewBuffer(mCache.Bytes())
    decCache := gob.NewDecoder(pCache)
    decCache.Decode(&data)
    fmt.Printf("Decoded: ")
    fmt.Println(data)
}
Outputs
Expected output
Encoded: [37 255 129 3 1 1 4 68 97 116 97 1 255 130 0 1 2 1 4 78 97 109 101 1 12 0 1 4 68 97 116 97 1 255 132 0 0 0 29 255 131 3 1 1 7 83 117 98 84 121 112 101 1 255 132 0 1 1 1 3 70 111 111 1 12 0 0 0 19 255 130 1 6 70 111 111 66 97 114 1 1 4 84 101 115 116 0 0]
Decoded: {FooBar {Test}}
Current Result
Encoded: [37 255 129 3 1 1 4 68 97 116 97 1 255 130 0 1 2 1 4 78 97 109 101 1 12 0 1 4 68 97 116 97 1 255 132 0 0 0 29 255 131 3 1 1 7 83 117 98 84 121 112 101 1 255 132 0 1 1 1 3 70 111 111 1 12 0 0 0 19 255 130 1 6 70 111 111 66 97 114 1 1 4 84 101 115 116 0 0]
Decoded: { <nil>}
The problem is that in your code there is an error when executing encCache.Encode(encodeData), but since you don't check for the error, you don't realize it. The output is blank because encodeData fails to get encoded properly.
If you add error checking,
err := encCache.Encode(encodeData)
if err != nil {
    log.Fatal("encode error:", err)
}
Then you'd see something like
2013/03/09 17:57:23 encode error:gob: type not registered for interface: main.SubType
If you add one line to your original code before encCache.Encode(encodeData),
gob.Register(SubType{})
Then you get expected output.
Decoded: {FooBar {Test}}
See http://play.golang.org/p/xt4zNyPZ2W
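For reference, here is a self-contained version of the fix along the lines of that playground link, with error checking included:

package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "log"
)

type Data struct {
    Name string
    Data interface{}
}

type SubType struct {
    Foo string
}

func main() {
    // Tell gob about the concrete type that may appear
    // behind the interface{} field.
    gob.Register(SubType{})

    encodeData := Data{Name: "FooBar", Data: SubType{Foo: "Test"}}

    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(encodeData); err != nil {
        log.Fatal("encode error:", err)
    }

    var data Data
    if err := gob.NewDecoder(&buf).Decode(&data); err != nil {
        log.Fatal("decode error:", err)
    }
    fmt.Println("Decoded:", data) // Decoded: {FooBar {Test}}
}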
You can't decode into an interface field directly, because the decoder has no way to determine what concrete type the field should be.
You can handle this in a few different ways. One is to have Data hold a struct with a field for every type that could be decoded. But that struct could get very complicated.
Another way is to implement the GobEncoder and GobDecoder interfaces for your struct and write your own serialization for the types. This is probably not ideal, though.
Perhaps the best approach is to have the cache store specific types instead, and use a separate method for each type. To use your example, your application would have a method GetSubType(key string) (*SubType, error) on the cache. This would return the concrete type, or a decoding error, instead of an interface. It would be cleaner and more readable, as well as more type-safe; see the sketch below.
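A minimal sketch of that last approach, assuming a hypothetical Cache type backed by a map of gob-encoded bytes (Cache, store, and GetSubType are illustrative names, not part of any library):

package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type SubType struct {
    Foo string
}

// Cache is a hypothetical store of gob-encoded values.
type Cache struct {
    store map[string][]byte
}

// GetSubType decodes the cached bytes for key directly into a
// concrete *SubType, so no interface{} is involved.
func (c *Cache) GetSubType(key string) (*SubType, error) {
    raw, ok := c.store[key]
    if !ok {
        return nil, fmt.Errorf("cache: no entry for %q", key)
    }
    var st SubType
    if err := gob.NewDecoder(bytes.NewReader(raw)).Decode(&st); err != nil {
        return nil, err
    }
    return &st, nil
}

func main() {
    // Store one encoded value, then fetch it back as a concrete type.
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(SubType{Foo: "Test"}); err != nil {
        panic(err)
    }
    c := &Cache{store: map[string][]byte{"k": buf.Bytes()}}

    st, err := c.GetSubType("k")
    fmt.Println(st, err) // &{Test} <nil>
}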
Say I have a file of chromosomal data I'm processing with Ruby,
#Base_ID    Segment_ID      Read_Depth
1                           100
2                           800
3           seg1            1900
4           seg1            2700
5                           1600
6                           2400
7                           200
8                           15000
9           seg2            300
10          seg2            400
11          seg2            900
12                          1000
13                          600
...
I'm sticking each row into a hash of arrays, with my keys taken from column 2, Segment_ID, and my values from column 3, Read_Depth, giving me
mr_hashy = {
  "seg1" => [1900, 2700],
  ""     => [100, 800, 1600, 2400, 200, 15000, 1000, 600],
  "seg2" => [300, 400, 900],
}
A primer, which is a small segment consisting of two consecutive rows in the above data, precedes and follows each regular segment. Regular segments have a non-empty string for Segment_ID and vary in length, while rows with an empty string in the second column belong to primers. Primer segments always have the same length, 2. In the data above, Base_IDs 1, 2, 5, 6, 7, 8, 12, 13 are parts of primers; in total, there are four primer segments present.
What I'd like to do is, upon encountering a line with an empty string in column 2, Segment_ID, add the Read_Depth to the appropriate element in my hash. For instance, my desired result from the above would look like
mr_hashy = {
  "seg1" => [100, 800, 1900, 2700, 1600, 2400],
  "seg2" => [200, 15000, 300, 400, 900, 1000, 600],
}
hash = Hash.new{ |h,k| h[k]=[] }

# Throw away the first (header) row
rows = DATA.read.scan(/.+/)[1..-1].map do |row|
  # Throw away the first (entire row) match
  row.match(/(\d+)\s+(\w+)?\s+(\d+)/).to_a[1..-1]
end

last_segment = nil
last_valid_segment = nil
rows.each do |base,segment,depth|
  if segment && !last_segment
    # Put the last two values onto the front of this segment
    hash[segment].unshift( *hash[nil][-2..-1] )
    # Put the first two values onto the end of the last segment
    hash[last_valid_segment].concat(hash[nil][0,2]) if last_valid_segment
    hash[nil] = []
  end
  hash[segment] << depth
  last_segment = segment
  last_valid_segment = segment if segment
end

# Put the first two values onto the end of the last segment
hash[last_valid_segment].concat(hash[nil][0,2]) if last_valid_segment
hash.delete(nil)

require 'pp'
pp hash
#=> {"seg1"=>["100", "800", "1900", "2700", "1600", "2400"],
#=>  "seg2"=>["200", "15000", "300", "400", "900", "1000", "600"]}

__END__
#Base_ID    Segment_ID      Read_Depth
1                           100
2                           800
3           seg1            1900
4           seg1            2700
5                           1600
6                           2400
7                           200
8                           15000
9           seg2            300
10          seg2            400
11          seg2            900
12                          1000
13                          600
Second-ish refactor. I think this is clean, elegant, and most of all complete. It's easy to read, with no hardcoded field lengths or ugly regexes. I vote mine as the best! ;)
def parse_chromo(file_name)
  last_segment = ""
  segments = Hash.new { |segments, key| segments[key] = [] }

  IO.foreach(file_name) do |line|
    next if !line || line[0] == "#"
    values = line.split
    # A three-column row starts a new segment: move the two trailing
    # primer values collected under the previous key over to it.
    if values.length == 3 && last_segment != (segment_id = values[1])
      segments[segment_id] += segments[last_segment].pop(2)
      last_segment = segment_id
    end
    segments[last_segment] << values.last
  end

  segments.delete("")
  segments
end

puts parse_chromo("./chromo.data")
I used this as my data file:
#Base_ID    Segment_ID      Read_Depth
1                           101
2                           102
3           seg1            103
4           seg1            104
5                           105
6                           106
7                           201
8                           202
9           seg2            203
10          seg2            204
11                          205
12                          206
13                          207
14                          208
15                          209
16                          210
17                          211
18                          212
19                          301
20                          302
21          seg3            303
21          seg3            304
21                          305
21                          306
21                          307
Which outputs:
{
  "seg1"=>["101", "102", "103", "104", "105", "106"],
  "seg2"=>["201", "202", "203", "204", "205", "206", "207", "208", "209", "210", "211", "212"],
  "seg3"=>["301", "302", "303", "304", "305", "306", "307"]
}
Here's some Ruby code (nice practice example :P). I'm assuming fixed-width columns, which appears to be the case with your input data. The code buffers depth values until it has collected six of them (two leading primer values, two segment values, and two trailing primer values), by which point it knows the segment id, and then flushes the buffer into the hash.
require 'pp'

mr_hashy = {}
primer_segment = nil
primer_values = []

while line = gets
  # Skip the header/comment row so it doesn't pollute the buffer
  next if line.start_with?('#')
  base, segment, depth = line[0..11].rstrip, line[12..27].rstrip, line[28..-1].rstrip
  primer_values.push(depth)
  if segment == ''
    if primer_values.length == 6
      for value in primer_values
        (mr_hashy[primer_segment] ||= []).push(value)
      end
      primer_values = []
      primer_segment = nil
    end
  else
    primer_segment = segment
  end
end

PP::pp(mr_hashy)
Output on input provided:
{"seg1"=>["100", "800", "1900", "2700", "1600", "2400"],
"seg2"=>["200", "15000", "300", "400", "900", "1000"]}