I am trying to parse a JSON response for customer data (names and emails) and build a CSV file with column headings to match.
For some reason, every time I run this code I get a CSV file with all of the first names jammed into one cell (no separation between the names, just one long string of names appended to each other), and the same thing for the last names. The code below does not include the emails yet (I'll worry about that later).
Code:
def self.fetch_emails
  access_token ||= AssistlyArticle.remote_setup
  cust_response = access_token.get("https://blah.desk.com/api/v1/customers.json")
  cust_ids = JSON.parse(cust_response.body)["results"].map{|w| w["customer"]["id"].to_i}

  FasterCSV.open("/Users/default/file.csv", "wb") do |csv|
    # header row
    csv << ["First name", "Last Name"]

    # data rows
    cust_ids.each do |cust_firstname|
      json = JSON.parse(cust_response.body)["results"]
      csv << [json.map{|x| x["customer"]["first_name"]}, json.map{|x| x["customer"]["last_name"]}]
    end
  end
end
Output:
First Name | Last Name
JohnJillJamesBill SearsStevensSethBing
and so on...
Desired Output:
First Name | Last Name
John | Sears
Jill | Stevens
James | Seth
Bill | Bing
Sample JSON:
{
  "page": 1,
  "count": 20,
  "total": 541,
  "results": [
    {
      "customer": {
        "custom_test": null,
        "addresses": [
          {
            "address": {
              "region": "NY",
              "city": "Commack",
              "location": "67 Harned Road, Commack, NY 11725, USA",
              "created_at": "2009-12-22T16:21:23-05:00",
              "street_2": null,
              "country": "US",
              "updated_at": "2009-12-22T16:32:37-05:00",
              "postalcode": "11725",
              "street": "67 Harned Road",
              "lng": "-73.196225",
              "customer_contact_type": "home",
              "lat": "40.716894"
            }
          }
        ],
        "phones": [],
        "last_name": "Suriel",
        "custom_order": "4",
        "first_name": "Jeremy",
        "custom_t2": "",
        "custom_i": "",
        "custom_t3": null,
        "custom_t": "",
        "emails": [
          {
            "email": {
              "verified_at": "2009-11-27T21:41:11-05:00",
              "created_at": "2009-11-27T21:40:55-05:00",
              "updated_at": "2009-11-27T21:41:11-05:00",
              "customer_contact_type": "home",
              "email": "jeremysuriel+twitter#gmail.com"
            }
          }
        ],
        "id": 8,
        "twitters": [
          {
            "twitter": {
              "profile_image_url": "http://a3.twimg.com...",
              "created_at": "2009-11-25T10:35:56-05:00",
              "updated_at": "2010-05-29T22:41:55-04:00",
              "twitter_user_id": 12267802,
              "followers_count": 93,
              "verified": false,
              "login": "jrmey"
            }
          }
        ]
      }
    },
    {
      "customer": {
        "custom_test": null,
        "addresses": [],
        "phones": [],
        "last_name": "",
        "custom_order": null,
        "first_name": "jeremy#example.com",
        "custom_t2": null,
        "custom_i": null,
        "custom_t3": null,
        "custom_t": null,
        "emails": [
          {
            "email": {
              "verified_at": null,
              "created_at": "2009-12-05T20:39:00-05:00",
              "updated_at": "2009-12-05T20:39:00-05:00",
              "customer_contact_type": "home",
              "email": "jeremy#example.com"
            }
          }
        ],
        "id": 27,
        "twitters": [
          null
        ]
      }
    }
  ]
}
Is there a better way to use FasterCSV to accomplish this? I assumed that << would add a new row each time, but it doesn't seem to be working. I would appreciate any help!
You've got it tangled up: you're parsing the JSON multiple times, and inside a loop at that. Let's make it simpler:
# parse the response once, then write one row per customer
customers = JSON.parse(cust_response.body)["results"].map{|x| x['customer']}
customers.each do |c|
  csv << [c['first_name'], c['last_name']]
end
Also, 'wb' is the wrong mode for CSV; just use 'w'.
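Putting it together, here is a minimal sketch of the whole method with those fixes applied; it reuses the AssistlyArticle.remote_setup helper and Desk.com endpoint from the question and leaves the email column for later:

require 'json'
require 'fastercsv'

def self.fetch_emails
  access_token  = AssistlyArticle.remote_setup
  cust_response = access_token.get("https://blah.desk.com/api/v1/customers.json")

  # Parse the response body once, outside any loop.
  customers = JSON.parse(cust_response.body)["results"].map { |r| r["customer"] }

  FasterCSV.open("/Users/default/file.csv", "w") do |csv|
    csv << ["First Name", "Last Name"]           # header row
    customers.each do |c|
      csv << [c["first_name"], c["last_name"]]   # one customer per row
    end
  end
end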
Currently I have a JSON array in which every element has the same fields. What I want is to split this data into independent fields, with each new field name taken from the "name" field.
events.parameters (this is the field name of the JSON array):
{
  "name": "USER_EMAIL",
  "value": "dummy#yahoo.com"
},
{
  "name": "DEVICE_ID",
  "value": "Wdk39Iw-akOsiwkaALw"
},
{
  "name": "SERIAL_NUMBER",
  "value": "9KJUIHG"
}
expected output:
events.parameters.USER_EMAIL : dummy#yahoo.com
events.parameters.DEVICE_ID: Wdk39Iw-akOsiwkaALw
events.parameters.SERIAL_NUMBER : 9KJUIHG
Thanks.
tl;dr: There is no filter that does exactly what you are looking for. You will have to use the ruby filter.
I just fixed the problem. For everyone wondering, here's my ruby filter:
if [events][parameters] {
  ruby {
    code => '
      event.get("[events][parameters]").each { |a|
        name = a["name"]
        value = a["value"]
        event.set("[events][parameters_split][#{name}]", value)
      }
    '
  }
}
The output was exactly what I wanted. Cheers!
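If you want to sanity-check the same transformation outside Logstash, the core of it is plain Ruby; this small sketch mirrors the sample parameters from the question:

parameters = [
  { "name" => "USER_EMAIL",    "value" => "dummy#yahoo.com" },
  { "name" => "DEVICE_ID",     "value" => "Wdk39Iw-akOsiwkaALw" },
  { "name" => "SERIAL_NUMBER", "value" => "9KJUIHG" },
]

# Build one hash keyed by each element's "name" field.
split = parameters.each_with_object({}) { |p, h| h[p["name"]] = p["value"] }
# => {"USER_EMAIL"=>"dummy#yahoo.com", "DEVICE_ID"=>"Wdk39Iw-akOsiwkaALw", "SERIAL_NUMBER"=>"9KJUIHG"}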
Let's say I have this object:
{
  "id": "1a48c847-4fee-4968-8cfd-5f8369c01f64",
  "sections": [
    {
      "id": 0,
      "title": "s1"
    },
    {
      "id": 1,
      "title": "s2"
    },
    {
      "id": 2,
      "title": "s3"
    }
  ]
}
How can I directly change the 2nd title ("s2") to another value, without loading the whole object and saving it again? Thanks.
Use update together with the changeAt term:
r.table('blog').get("1a48c847-4fee-4968-8cfd-5f8369c01f64").update(function(row){
  return {
    sections: row('sections').changeAt(1,
      row('sections')(1).merge({title: "s2-modified"}))
  }
})
The above is good if you already know the index of the item you want to change. If you need to find the index, then update it, you can use the .offsetsOf command to look up the index of the element you want:
r.table('table').get("1a48c847-4fee-4968-8cfd-5f8369c01f64").update(function(row){
  return row('sections').offsetsOf(function(x){
    return x('title').eq('s2')
  })(0).do(function(index){
    return {
      sections: row('sections').changeAt(index,
        row('sections')(index).merge({title: "s2-modified"}))
    }
  })
})
Edit: modified answer to use changeAt
Let's say I have this JSON data file:
{
  "page": {
    "title": "Example Page"
  },
  "employers": {
    "name": "Jon"
  },
  "employees": [
    { "name": "Mike", "nicknames": ["Superman"] },
    { "name": "Peter", "nicknames": ["Peet", "Peetee", "Peterr"] }
  ]
}
This data.json file exists as a separate file outside of the script.
I have these 3 lines to read and parse it with the json Ruby library:
data = File.read("data.json")
obj = JSON.parse(data)
puts obj.values
In my terminal the output comes out like this:
{"title"=>"Example Page"}
{"name"=>"Jon"}
{"name"=>"Mike", "nicknames"=>["Superman"]}
{"name"=>"Peter", "nicknames"=>["Peet", "Peetee", "Peterr"]}
What happened to employers and employees? Now I have duplicate keys (name, in this case), and it's difficult for me to grab the values I want to use.
employers and employees are the keys of the top-level hash, and you asked only for its values, which is why they disappear from the output. Try puts obj instead.
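For example, if you keep the top-level keys and index into the hash directly, the lookups become unambiguous; a small sketch against the data.json above:

require 'json'

obj = JSON.parse(File.read("data.json"))

obj["employers"]["name"]                # => "Jon"
obj["employees"].map { |e| e["name"] }  # => ["Mike", "Peter"]
obj["employees"][1]["nicknames"]        # => ["Peet", "Peetee", "Peterr"]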
I have a json file:
{
  "public_holidays": [
    {
      "date": "2013/1/1",
      "name": "New Years Day"
    },
    {
      "date": "2013/1/21",
      "name": "Luther King Day"
    },
    {
      "date": "2013/5/27",
      "name": "Memorial Day"
    }
  ]
}
My goal is to capture all of the dates in this file. I am able to get the date at a specific index, but not all the dates at once. Here is what I have:
@file = File.read('public_holidays.json')

def json_file
  holiday_dates = JSON.parse(@file)
  holiday_dates.each do |key, value|
    puts value[0]['date']
  end
end
This results in 2013/1/1, but I need all the dates, not just one.
Any ideas?
Your code will only grab the first element in the "public_holidays" array.
Try something like this:
holiday_dates = JSON.parse(@file)
dates = holiday_dates['public_holidays'].map { |x| x['date'] }
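Plugged back into the original method (keeping the @file instance variable from the question), that might look like:

require 'json'

@file = File.read('public_holidays.json')

def json_file
  holiday_dates = JSON.parse(@file)
  holiday_dates['public_holidays'].map { |h| h['date'] }
end

json_file  # => ["2013/1/1", "2013/1/21", "2013/5/27"]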
I am using Ruby version 2.0.0, and I have a demo.json file which looks like this:
{ "demo":
{
"rama" : { "Name": "demo" },
"krishna" : { "Name": "hare","place": "bharat", "hawa": { "maina": "tota"} }
}
}
Now I try to manipulate the JSON file this way:
require 'json'

options = {}
options[:demo] = "kailash"

File.open("demo.json","w") do |f|
  f.write(JSON.pretty_generate(options))
end
I want to replace some values and add some new key-value pairs in the existing JSON file, without completely replacing the whole file. Is there any way to do this?
You must first read and parse your file, then make your changes, and finally you can overwrite the file with the updated object:
require 'json'

options = JSON.parse(IO.read('demo.json'))
options['demo']['kailash'] = { "Name" => "new" }

File.open("demo.json","w") do |f|
  f.write(JSON.pretty_generate(options))
end
Output file:
{
  "demo": {
    "rama": {
      "Name": "demo"
    },
    "krishna": {
      "Name": "hare",
      "place": "bharat",
      "hawa": {
        "maina": "tota"
      }
    },
    "kailash": {
      "Name": "new"
    }
  }
}
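If you also need to change an existing value rather than only add a new key, you modify the parsed hash in the same way before writing it back; a small sketch using the same demo.json:

require 'json'

options = JSON.parse(IO.read('demo.json'))
options['demo']['rama']['Name'] = 'updated'       # overwrite an existing value
options['demo']['kailash'] = { 'Name' => 'new' }  # add a new key
File.write('demo.json', JSON.pretty_generate(options))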