Error when sending a JSON array in an AJAX request using JavaScript - ajax

Hi, I am doing a project involving AJAX (XMLHttpRequest).
How am I going to reach the title inside the note section? Normally, to read startYear, I would do something like detail = eval(...), then loop over it with something like:
var startYear = "";
startYear += ...[i].startYear;
But how do I get the title inside note?
When I try detail.notes.note.title it says it is null or is not an object.
This is the JSON data:
{
"infos": {
"info": [
{
"startYear": "1900",
"endYear": "1930",
"timeZoneDesc": "daweerrewereopreproewropewredfkfdufssfsfsfsfrerewrBlahhhhh..",
"timeZoneID": "1",
"note": {
"notes": [
{
"id": "1",
"title": "Mmm"
},
{
"id": "2",
"title": "Wmm"
},
{
"id": "3",
"title": "Smm"
}
]
},
"links": [
{
"id": "1",
"title": "Red House",
"url": "http://infopedia.nl.sg/articles/SIP_611_2004-12-24.html"
},
{
"id": "2",
"title": "Joo Chiat",
"url": "http://www.the-inncrowd.com/joochiat.htm"
},
{
"id": "3",
"title": "Bake",
"url": "https://thelongnwindingroad.wordpress.com/tag/red-house-bakery"
}
]
}
]
}
}

Use http://jsonlint.com/ to help you out with JSON.
I got this...
Parse error on line 30:
... ] }
----------------------^
Expecting '}', ',', ']'
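Once the JSON itself is valid, the titles are reached through infos.info[i].note.notes[j].title. A rough sketch, assuming the parsed response is stored in a variable named detail and the request object is named xmlhttp (both names are only for illustration):
// parse the response text instead of using eval
var detail = JSON.parse(xmlhttp.responseText);
var titles = "";
var info = detail.infos.info; // array of info objects
for (var i = 0; i < info.length; i++) {
  var notes = info[i].note.notes; // the titles live under note.notes
  for (var j = 0; j < notes.length; j++) {
    titles += notes[j].title + " "; // "Mmm Wmm Smm"
  }
}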

Related

Delete existing records if they are not in the sent array - Rails 5 API

I need help deleting records that exist in the DB but not in the array sent in a request.
My Array:
[
{ "id": "509",
"name": "Motions move great",
"body": "",
"subtopics": [
{
"title": "Tywan",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportations Gracious",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportation part",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
},
{
"name": "Motions kkk",
"body": "",
"subtopics": [
{
"title": "Transportations",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
}
]
Below is my implementation; where am I going wrong?
@topics = @course.topics.map{|m| m.id()}
@delete = @topics
puts @delete
if Topic.where.not('id IN(?)', @topics).any?
@topics.each do |topic|
topic.destroy
end
end
It's not clear to me where, in your code, you pick up the ids sent in the array you showed before, so I'm assuming something like this:
objects_sent = [
{ "id": "509",
"name": "Motions move great",
"body": "",
"subtopics": [
{
"title": "Tywan",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportations Gracious",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
},
{
"title": "Transportation part",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
},
{
"name": "Motions kkk",
"body": "",
"subtopics": [
{
"title": "Transportations",
"url_path": "https://ugonline.s3.amazonaws.com/resources/6ca0fd64-8214-4788-8967-b650722ac97f/WhatsApp+Audio+2021-09-24+at+13.57.34.mpeg"
}
]
}
]
Since you have your array like this, the only information you need to query the database is the ids (also assuming the ids in the array are the ids in the database, otherwise it wouldn't make sense). You can get them like this:
sent_ids = objects_sent.map{|o| o['id'].to_i}
Also, it seems to me that, given the code you showed, you want to destroy them scoped to a specific course. There are two ways to do that. First, using the association (the one I prefer):
@course.topics.where.not(id: sent_ids).destroy_all
Or you can run the query directly on the Topic model, passing the course_id:
Topic.where(course_id: @course.id).where.not(id: sent_ids).destroy_all
ActiveRecord is smart enough to build that query correctly either way. Give it a test and see which works better for you.
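Putting the pieces together, a minimal sketch of a sync action under those assumptions (the action name, params key and variable names here are only illustrative):
# minimal sketch: assumes the array arrives as params[:topics]
# and that @course has already been loaded (e.g. in a before_action)
def sync_topics
  objects_sent = params[:topics] || []

  # ids present in the request; entries without an id (new records) are skipped
  sent_ids = objects_sent.map { |o| o[:id] }.compact.map(&:to_i)

  # destroy every topic of this course that was NOT sent
  @course.topics.where.not(id: sent_ids).destroy_all
end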

NiFi: ReplaceText alternatives to modify JSON

My NiFi application receives two somewhat different types of JSON.
The first of them looks like this:
[
{
"campaign": {
"resourceName": "customers/8952771329/campaigns/11381694617",
"status": "ENABLED",
"name": "Saint_Spring_Active Minerals_oct-nov_2020_trueview_skip_5766500views",
"id": "11381694617"
},
"metrics": {
"interactionEventTypes": [
"VIDEO_VIEW"
],
"clicks": "6",
"videoQuartileP100Rate": 0.44493171079034244,
"videoQuartileP25Rate": 0.9747718298919024,
"videoQuartileP50Rate": 0.7339309987701469,
"videoQuartileP75Rate": 0.5337562301767105,
"videoViewRate": 0.4471109114825628,
"videoViews": "27872",
"viewThroughConversions": "0",
"contentBudgetLostImpressionShare": 0.0000013066088274492382,
"contentImpressionShare": 0.0999,
"contentRankLostImpressionShare": 0.9001,
"conversionsValue": 0,
"conversions": 0,
"costMicros": "9338700950",
"ctr": 0.00009624947864865732,
"currentModelAttributedConversions": 0,
"currentModelAttributedConversionsValue": 0,
"engagementRate": 0,
"engagements": "0",
},
"segments": {
"device": "CONNECTED_TV",
"date": "2020-12-20"
}
}
]
And second:
[
{
"adGroup": {
"resourceName": "customers/5404177717/adGroups/110501283582",
"campaign": "customers/5404177717/campaigns/11628802542"
},
"metrics": {
"interactionEventTypes": [
"CLICK"
],
"clicks": "1",
"averageCpm": 95497428.02172929,
"gmailForwards": "0",
"gmailSaves": "0",
"gmailSecondaryClicks": "0",
"impressions": "4418",
"interactionRate": 0.00022634676324128565,
"interactions": "1"
},
"adGroupAd": {
"resourceName": "customers/5404177717/adGroupAds/110501283582~480227690139",
"status": "ENABLED",
"ad": {
"resourceName": "customers/5404177717/ads/480227690139",
"id": "480227690139",
"name": "20 sec perek"
},
"adGroup": "customers/5404177717/adGroups/110501283582"
},
"segments": {
"device": "DESKTOP",
"date": "2020-11-21"
}
}
]
I already have two tables in my database for saving this data. I have an attribute table.name so that I don't have to create the same block twice when only the table name is different.
My next processor is FlattenJson. After this I'm using ReplaceText with the following search value (the replacement value is an empty string): (customers\\\/${client.customer.id}\\\/campaigns\\\/|customers\\\/${client.customer.id}\\\/adGroups\\\/).
Why this? From this line: "adGroup": "customers/5404177717/adGroups/110501283582" I only need the last value, 110501283582, as ad_group_id. And from this line: "campaign": "customers/5404177717/campaigns/11628802542" I only need 11628802542. ${client.customer.id} can be different, so I'm using Expression Language features.
I also need to rename the JSON field adGroup to ad.group.id, and for that I'm also using ReplaceText.
Can I do this faster, without two ReplaceText processors?
Take a look at the following processors; I think using them could be an alternative:
JoltTransformJSON:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.JoltTransformJSON/
UpdateRecord:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.UpdateRecord/index.html

Combine json response in nifi

We are calling InvokeHTTP processors and getting a JSON response. Example:
{
"id": "h569gcjhcm",
"doi": {
"id": "10.17632/h569gcjhcm.1",
"status": "allocated",
"prefix": "10.17632"
},
"name": "Data for: Flooding of the Caspian Sea at the intensification of Northern Hemisphere Glaciations",
"description": "Supplementary data for the Jeirankechmez section in Azerbaijan.\n\n- Appendix A contains all paleomagnetic data and interpretations of the Jeirankechmez section. This .dir file can be imported into the paleomagnetism.org webportal under \"Interpretation Portal\", \"Advanced Options\", \"Import Application Save\". For further details on the use of paleomagnetism.org please refer to the article by Koymans et al. (2016) - https://doi.org/10.1016/j.cageo.2016.05.007.\n- Appendix B contains the magnetic susceptibility data for the analysed samples, including geographic coordinates and stratigraphic levels.\n- Appendix C contains the 40Ar/39Ar data for the three analysed volcanic ash layers. ",
"version": 1,
"publish_date": "2019-01-29T12:51:38.090Z",
"data_licence": {
"id": "01d9c749-3c4d-4431-9df3-620b2dcfe144",
"short_name": "CC BY 4.0",
"full_name": "Creative Commons Attribution 4.0 International",
"description": "This dataset is licensed under a Creative Commons Attribution 4.0 International licence.\n\nWhat does this mean?\nYou can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY license, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. Note that further permission may be required for any content within the dataset that is identified as belonging to a third party.",
"url": "http://creativecommons.org/licenses/by/4.0",
"category": "Creative"
},
"contributors": [
{
"first_name": "Christiaan",
"last_name": "van Baak"
},
{
"first_name": "Marius",
"last_name": "Stoica"
},
{
"first_name": "Arjen",
"last_name": "Grothe"
},
{
"first_name": "Gareth",
"last_name": "Davies"
},
{
"profile_id": "72970719-95c8-341b-80d2-afa9e7154baf",
"first_name": "Wout",
"last_name": "Krijgsman"
},
{
"profile_id": "3a4bfe2c-4098-3859-9b88-789fa993e05a",
"first_name": "Keith",
"last_name": "Richards"
},
{
"profile_id": "f1660f3c-ebbd-3289-8240-1f4ea7913df4",
"first_name": "Klaudia",
"last_name": "Kuiper"
},
{
"first_name": "Elmira",
"last_name": "Aliyeva"
}
],
"versions": [
{
"version": 1,
"publish_date": "2019-01-29T12:51:38.090Z",
"available": true
}
],
"files": [
{
"filename": "Appendix_A_Jeirankechmez_pmag_interpretations.dir",
"id": "f2f4cba7-2411-4737-a9b2-f094db30dca1",
"content_details": {
"id": "994bc865-5300-4d76-a373-e528ccd830e8",
"sha256_hash": "2427c4b077372760973ce8224694f2a2ee5383c7f022ad818164d847a20e27cc",
"sha1_hash": "73792dc6d6eb2c1de1e04926ba5d4420dd0aaece",
"content_type": "application/x-director",
"size": 917022,
"created_date": "2019-01-03T00:00:00.000Z"
"download_expiry_time": "2019-01-29T13:52:25.729Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
},
{
"filename": "Appendix_B_Sample_locations_susceptibility.xlsx",
"id": "64241bf0-5279-49e8-a505-be9075b910e1",
"content_details": {
"id": "af8809d0-8e63-4599-abaa-e7af9ad39959",
"sha256_hash": "0588f44a0cbd477aa2798323e57ce0b2d4a118e767c0b1ffdc9eb1017e4d23c2",
"sha1_hash": "02e89f6f197ebf495e1e2c3d1aab250efc7545e7",
"content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"size": 24770,
"created_date": "2019-01-03T00:00:00.000Z"
,
"download_expiry_time": "2019-01-29T13:52:25.732Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
},
{
"filename": "Appendix_C_ArAr_data.xlsx",
"id": "2e912027-ff3f-48ad-98b9-b643b59ba0e3",
"content_details": {
"id": "4960377c-060d-41f6-b7af-150617d8ebeb",
"sha256_hash": "235dc32c1e99f350ee5c99908a5f5d72d1aeeab02f78c2e0181d585bd1880fa6",
"sha1_hash": "6483156e4577948cac5d2679eee862c76faed1c9",
"content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"size": 18510,
"created_date": "2019-01-03T00:00:00.000Z"
},
"metrics": {
"downloads": 0,
"previews": 0
}
}
],
"articles": [
{
"id": "10.1016/j.gloplacha.2019.01.007",
"title": "Flooding of the Caspian Sea at the intensification of Northern Hemisphere Glaciations",
"doi": "10.1016/j.gloplacha.2019.01.007",
"journal": {
"issn": "0921-8181",
"name": "Global and Planetary Change",
"url": "http://www.sciencedirect.com/science/journal/09218181"
}
}
],
"categories": [
{
"id": "http://com/vocabulary/OmniScience/Concept-170590667",
"label": "Geology"
},
{
"id": "http://data.elsevier.com/vocabulary/OmniScience/Concept-473860195",
"label": "Strontium Isotope"
}
],
"institutions": [ ],
"metrics": {
},
"available": true,
"related_links": [ ]
}
I am using $contributors.profile_id from the above JSON to call a new endpoint (InvokeHTTP): https://api.xxx.com/profile/$.profile_id
The JSON response for this is:
"contributors": [
{
"profile_id": "cedferfiherhforhforf",
"first_name": "xxx",
"last_name": "van Baak",
"other_ids": [],
"Other info": "deeded"
}
I have to call this endpoint once per object in contributors (say we have 5 objects in contributors, so I have to call the endpoint 5 times) and combine those 5 responses together.
Then I have to merge the combined response back into the main response.
Just an example:
EvaluateJsonPath to extract "id" into an attribute, to join on later
SplitJson to split your JSON by "contributors"
call the endpoint
MergeContent to merge by "id", using the count from SplitJson

Sorting an array of objects based on object property

I have an array like so:
[
{
"name": "aabb",
"commit": {
"id": "1",
"message": "aabb ",
"committed_date": "2018-04-04T15:11:04.000+05:30",
"committer_name": "ak",
"committer_email": "ak#ak.in"
},
"protected": false
},
{
"name": "aacc",
"commit": {
"id": "2",
"message": "aacc ",
"committed_date": "2018-02-04T15:11:04.000+05:30",
"committer_name": "ak",
"committer_email": "ak#ak.in"
},
"protected": false
},
{
"name": "aadd",
"commit": {
"id": "3",
"message": "aadd ",
"committed_date": "2018-04-01T15:11:04.000+05:30",
"committer_name": "ak",
"committer_email": "ak#ak.in"
},
"protected": false
}
]
I need to sort this array based on committed_date. How do I do that?
Do I have to loop and write a custom sorting function, or does Ruby offer something out of the box?
Using sort_by
array.sort_by {|obj| obj.attribute}
Or, more concisely:
array.sort_by(&:attribute)
In your case
array.sort_by {|obj| obj[:commit][:committed_date]}
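Since committed_date is an ISO 8601 string with a consistent offset, plain string comparison already sorts chronologically; if you want the newest commit first, just reverse the result:
array.sort_by {|obj| obj[:commit][:committed_date]}.reverse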

JSON deserializing coming back empty

I took suggestions from here and tried to create a class with objects for the parts I need to parse from my JSON feed. I then tried to deserialize it before mapping the field contents to my objects, but the deserialization returns nothing.
Here is the json feed content:
"data": [
{
"id": "17xxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxx",
"from": {
"name": "Lxxxxxx",
"category": "Sports league",
"id": "17xxxxxxxxxxxxx"
},
"picture": "http://external.ak.fbcdn.net/safe_image.php?d=AQB4GscSy-2RHY_0&w=130&h=130&url=http\u00253A\u00252F\u00252Fwww.ligabbva.com\u00252Fquiz\u00252Farchivos\u00252Fbenzema-quiz-facebook.png",
"link": "http://www.xxxxxva.com/quiz/index.php?qid=34",
"source": "http://www.lxxxxva.com/modulos/redirectQuiz.php?name=benzema&q=34&time=1312827103",
"name": "DEMUESTRA CU\u00c1NTO SABES SOBRE... BENZEMA",
"caption": "www.xxxxxva.com",
"description": "Demuestra cu\u00e1nto sabes sobre Karim Benzema, delantero del Real Madrid.",
"icon": "http://static.ak.fbcdn.net/rsrc.php/v1/yj/r/v2OnaTyTQZE.gif",
"type": "video",
"created_time": "2011-08-08T18:11:54+0000",
"updated_time": "2011-08-08T18:11:54+0000",
"likes": {
"data": [
{
"name": "Jhona Arancibia",
"id": "100000851276736"
},
{
"name": "Luis To\u00f1o",
"id": "100000735350531"
},
{
"name": "Manuel Raul Guerrero Cumbicos",
"id": "100001485973224"
},
{
"name": "Emmanuel Gutierrez",
"id": "100000995038988"
}
],
"count": 127
},
"comments": {
"count": 33
}
},
{
"id": "17xxxxxxxxxxxxxxxx_xxxxxxxxxxxxx",
"from": {
"name"
And here is the code I am trying. I also tried using JsonTextReader and looping through each TokenType, checking each one and then reading the value, but that looks like it will turn out to be tedious.
....
' fetch the content
responsedata = reader.ReadToEnd()
......
Dim pp As faceData = JsonConvert.DeserializeObject(Of faceData)(responsedata)
Console.WriteLine(pp.id.ToString())
and the class objects
Public Class faceData
Public Property id() As String
Get
Return f_id
End Get
Set(ByVal value As String)
f_id = value
End Set
End Property
Private f_id As String
......
Any response appreciated.
Does anyone have links to full code where this actually works using VB.NET?
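One thing worth checking: the feed shown above is an object whose "data" property holds the array of posts, so deserializing the whole response straight into faceData will leave its fields empty. A minimal sketch of a wrapper class, assuming Json.NET (Newtonsoft.Json); the name FaceFeed and the usage snippet are only illustrative:
' minimal sketch, assuming Json.NET and that the root object wraps the posts in a "data" array
Imports System.Collections.Generic
Imports Newtonsoft.Json

Public Class FaceFeed
' maps to the top-level "data" array
Public Property data As List(Of faceData)
End Class

' usage:
' Dim feed As FaceFeed = JsonConvert.DeserializeObject(Of FaceFeed)(responsedata)
' For Each entry As faceData In feed.data
'     Console.WriteLine(entry.id)
' Next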
