How to develop a POST method Web API in ASP.NET C# with JSON data with an array - asp.net-web-api

Please help: I need to develop a POST method Web API that accepts the below format and stores the data into a database.
{
    "jobcardID": "100023",
    "breakdownID": "417217",
    "vehicleNumber": "UP32GB4717",
    "ODOmeter": 3023232,
    "spareparts": [
        {
            "spItemCode": 231,
            "spitemName": "Engine Oil",
            "spitemCost": "234",
            "spitemManhours": "2",
            "spitemEstStartDTM": "2021-06-04 12:32:59",
            "spitemEstEndDTM": "2021-06-04 12:35:59"
        },
        {
            "spItemCode": 342,
            "spitemName": "Piston",
            "spitemCost": "3450",
            "spitemManhours": "5",
            "spitemEstStartDTM": "2021-06-04 12:32:59",
            "spitemEstEndDTM": "2021-06-04 12:35:59"
        }
    ],
    "totalSpareCost": 8000,
    "totalLabourTime": "23hrs",
    "totalLabourCost": 2000,
    "TotalCost": 10000,
    "vehicleRecoveryTime": "2021-06-04 12:35:59"
}
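A minimal sketch of how such an endpoint might look in classic ASP.NET Web API. The model and context names here (JobCard, SparePart, JobCardContext) are assumptions derived from the JSON above, and the Entity Framework persistence is illustrative only; swap in whatever data access your project uses. Web API's default JSON.NET binding is case-insensitive, so the camelCase keys map onto the C# properties.

using System;
using System.Collections.Generic;
using System.Web.Http;

// Models mirroring the JSON payload; names are assumptions based on the sample.
public class SparePart
{
    public int SpItemCode { get; set; }
    public string SpitemName { get; set; }
    public string SpitemCost { get; set; }
    public string SpitemManhours { get; set; }
    public DateTime SpitemEstStartDTM { get; set; }
    public DateTime SpitemEstEndDTM { get; set; }
}

public class JobCard
{
    public string JobcardID { get; set; }
    public string BreakdownID { get; set; }
    public string VehicleNumber { get; set; }
    public int ODOmeter { get; set; }
    public List<SparePart> Spareparts { get; set; }
    public int TotalSpareCost { get; set; }
    public string TotalLabourTime { get; set; }
    public int TotalLabourCost { get; set; }
    public int TotalCost { get; set; }
    public DateTime VehicleRecoveryTime { get; set; }
}

public class JobCardController : ApiController
{
    // POST api/jobcard -- the request body binds to the JobCard model.
    public IHttpActionResult Post([FromBody] JobCard jobCard)
    {
        if (jobCard == null || !ModelState.IsValid)
            return BadRequest("Invalid job card payload.");

        // Hypothetical EF context; replace with your actual data access.
        using (var db = new JobCardContext())
        {
            db.JobCards.Add(jobCard);
            db.SaveChanges();
        }
        return Ok(jobCard.JobcardID);
    }
}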

Related

Defining an array of json docs in Elasticsearch Painless Lab

I'm trying to define some docs in ES Painless Lab to test some logic before running it on the actual index, but I can't figure out how to do it, and the docs are not helping either. There is very little documentation on the actual syntax, and it's not much help for someone with no Java background.
If I try to define a doc like this:
def docs = [{ "id": 1, "name": "Apple" }];
I get an error:
Unhandled Exception illegal_argument_exception
invalid sequence of tokens near ['{'].
Stack:
[
"def docs = [{ \"id\": 1, \"name\": \"Apple ...",
" ^---- HERE"
]
If I want to do it the Java way:
String message;
JSONObject json = new JSONObject();
json.put("test1", "value1");
message = json.toString();
I'm also getting an error:
Unhandled Exception illegal_argument_exception
invalid declaration: cannot resolve type [JSONObject]
Stack:
[
"... ring message;\nJSONObject json = new JSONObject();\n ...",
" ^---- HERE"
]
So what's the proper way to define an array of json objects to play with in Painless Lab?
After more experimenting, I found out that the docs can be passed in the parameters tab as:
{
    "docs": [
        { "id": 1, "name": "Apple" },
        { "id": 2, "name": "Pear" },
        { "id": 3, "name": "Pineapple" }
    ]
}
and then access it from the code as
def doc = params.docs[1];
return doc["name"];
I'd still be interested in how to define an object or array in the code itself.
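For completeness: Painless does support list and map initializer syntax, so a structure like the one above can be defined inline in the code panel. A minimal sketch; note the ['key': value] map literals rather than the JSON-style { } braces that the parser rejected:

def docs = [
    ['id': 1, 'name': 'Apple'],
    ['id': 2, 'name': 'Pear'],
    ['id': 3, 'name': 'Pineapple']
];
return docs[1]['name']; // returns "Pear"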

Trouble Deserializing Object Pulled from SQS using GSON

I have a lambda function that receives an S3Event object when a file is put into an S3 bucket. When the lambda fails, the message goes to a dead-letter queue set up in Amazon SQS.
When I pull these messages, this is the body:
{
    "Records": [
        {
            "eventVersion": "2.1",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "d",
            "eventName": "d:Put",
            "userIdentity": {
                "principalId": ""
            },
            "requestParameters": {
                "sourceIPAddress": "2"
            },
            "responseElements": {
                "x-amz-request-id": "",
                "x-amz-id-2": "g"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "",
                "bucket": {
                    "name": "",
                    "ownerIdentity": {
                        "principalId": ""
                    },
                    "arn": ""
                },
                "object": {
                    "key": "",
                    "size": 12502,
                    "eTag": "",
                    "sequencer": ""
                }
            }
        }
    ]
}
That looks quite a bit like the S3Event object, which contains a list of S3EventNotification records. I have tried to deserialize it to the S3Event object using the following:
S3Event event = new GsonBuilder().serializeNulls().create().fromJson(s3EventString, S3Event.class);
This results in a null object like so:
{"records":null}
I noticed that in the JSON returned from SQS, the "R" in Records is capitalized. I wasn't sure if that made a difference, so I changed it to a lowercase "r", and it throws this error:
java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING
I'm really not sure what type of object this actually is.
Any help would be greatly appreciated.
Strange. Using Jackson it works perfectly, so I will use this for now.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import java.io.IOException;

private S3Event extractS3Event(Message message) throws IOException {
    ObjectMapper objectMapper = new ObjectMapper();
    return objectMapper.readValue(message.getBody(), S3Event.class);
}

// Then, to get the S3 details:
S3Event event = extractS3Event(message);
S3Entity entity = event.getRecords().get(0).getS3();
String bucketName = entity.getBucket().getName();
String s3Key = entity.getObject().getKey();
Re: "Expected BEGIN_OBJECT but was STRING":
This is because AWS uses Joda-Time for eventTime. You can avoid the issue by removing that field from the JSON text (assuming you do not need it).
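Alternatively, if you would rather stay on Gson than switch to Jackson, registering a Joda-Time deserializer is one way around the BEGIN_OBJECT error. This is only a sketch (the method name parseWithGson is made up for illustration), and it does not fix the capital-R "Records" issue on its own: Gson ignores the Jackson annotations on the AWS classes, so that field may still come back null without a rename or an explicit mapping.

import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializer;
import org.joda.time.DateTime;
import org.joda.time.format.ISODateTimeFormat;

private S3Event parseWithGson(String s3EventString) {
    // Teach Gson to read the ISO-8601 eventTime string into a Joda DateTime
    // instead of expecting a JSON object there.
    JsonDeserializer<DateTime> jodaDeserializer = (json, type, ctx) ->
            ISODateTimeFormat.dateTimeParser().parseDateTime(json.getAsString());
    Gson gson = new GsonBuilder()
            .registerTypeAdapter(DateTime.class, jodaDeserializer)
            .serializeNulls()
            .create();
    return gson.fromJson(s3EventString, S3Event.class);
}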

sentiment analysis call fails with cognitive service, returning "HttpOperationError"

text_analytics = TextAnalyticsClient(endpoint=endpoint, credentials=credentials)
documents = [
    {
        "id": "1",
        "language": "en",
        "text": "I had the best day of my life."
    }
]
response = text_analytics.sentiment(documents=documents)
for document in response.documents:
    print("Document Id: ", document.id, ", Sentiment Score: ",
          "{:.2f}".format(document.score))
Hi, with the sample code from the API manual (https://learn.microsoft.com/en-us/azure/cognitive-services/text-analytics/quickstarts/python-sdk#sentiment-analysis), I got the following error while trying to call the sentiment classifier:
HttpOperationError Traceback (most recent call last)
<ipython-input-18-f0fb322c9e8c> in <module>
8 }
9 ]
---> 10 response = text_analytics.sentiment(documents=documents)
11 for document in response.documents:
12 print("Document Id: ", document.id, ", Sentiment Score: ",
~/anaconda3/envs/lib/python3.6/site-packages/azure/cognitiveservices/language/textanalytics/text_analytics_client.py in sentiment(self, show_stats, documents, custom_headers, raw, **operation_config)
361
362 if response.status_code not in [200, 500]:
--> 363 raise HttpOperationError(self._deserialize, response)
364
365 deserialized = None
HttpOperationError: Operation returned an invalid status code 'Resource Not Found'
It works on my side. The "Resource Not Found" status usually means the endpoint URL does not point at your Text Analytics resource, so double-check it, then follow the steps below to get started with the Python sentiment analysis SDK:
Create a Text Analytics service in the Azure portal.
Once created, note its endpoint and either one of its two keys.
Try the code below:
from azure.cognitiveservices.language.textanalytics import TextAnalyticsClient
from msrest.authentication import CognitiveServicesCredentials

subscriptionKey = "<your Azure service key>"
endpoint = "<your Azure service endpoint>"
credentials = CognitiveServicesCredentials(subscriptionKey)
text_analytics = TextAnalyticsClient(endpoint=endpoint, credentials=credentials)

documents = [
    {
        "id": "1",
        "language": "en",
        "text": "I had the best day of my life."
    }
]

response = text_analytics.sentiment(documents=documents)
for document in response.documents:
    print("Document Id: ", document.id, ", Sentiment Score: ",
          "{:.2f}".format(document.score))
Result: the document ID and sentiment score are printed for each document.
Hope it helps.

laravel/codeception : test if json response contains only certain keys

I have a JSON array coming from my API as a response:
{
    "data": [
        {
            "id": 1,
            "name": "abc"
        }
    ]
}
I am using Laravel for the API and laravel-codeception for testing.
public function getAll(ApiTester $I)
{
    $I->sendGET($this->endpoint);
}
I have to test whether the response contains only the id and name keys (not any other key); for example, this response should fail the test:
{
    "data": [
        {
            "id": 1,
            "name": "abc",
            "email": "abc@xyz"
        }
    ]
}
I have found $I->seeResponseContainsJson(), but it only checks whether the given JSON is present. It does not check whether the JSON response contains only the specified keys.
Thanks.
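One way to assert "these keys and no others" is to decode the raw body and compare key sets yourself. A minimal sketch, assuming the suite has the REST module (for grabResponse()) and the Asserts module enabled:

public function getAll(ApiTester $I)
{
    $I->sendGET($this->endpoint);
    $I->seeResponseCodeIs(200);

    // Decode the raw body and compare each item's keys to the whitelist.
    $response = json_decode($I->grabResponse(), true);
    foreach ($response['data'] as $item) {
        $I->assertEquals(['id', 'name'], array_keys($item));
    }
}

If key order may vary, sort both arrays before comparing.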

Ruby - FasterCSV after Parsing JSON

I am trying to parse through a JSON response for customer data (names and email) and construct a CSV file with column headings of the same.
For some reason, every time I run this code I get a CSV file with a list of all the first names in one cell (with no separation between the names, just one string of names appended to each other) and the same thing for the last names. The following code does not include adding emails (I'll worry about that later).
Code:
def self.fetch_emails
  access_token ||= AssistlyArticle.remote_setup
  cust_response = access_token.get("https://blah.desk.com/api/v1/customers.json")
  cust_ids = JSON.parse(cust_response.body)["results"].map{|w| w["customer"]["id"].to_i}

  FasterCSV.open("/Users/default/file.csv", "wb") do |csv|
    # header row
    csv << ["First name", "Last Name"]
    # data rows
    cust_ids.each do |cust_firstname|
      json = JSON.parse(cust_response.body)["results"]
      csv << [json.map{|x| x["customer"]["first_name"]}, json.map{|x| x["customer"]["last_name"]}]
    end
  end
end
Output:
First Name | Last Name
JohnJillJamesBill SearsStevensSethBing
and so on...
Desired Output:
First Name | Last Name
John | Sears
Jill | Stevens
James | Seth
Bill | Bing
Sample JSON:
{
    "page": 1,
    "count": 20,
    "total": 541,
    "results": [
        {
            "customer": {
                "custom_test": null,
                "addresses": [
                    {
                        "address": {
                            "region": "NY",
                            "city": "Commack",
                            "location": "67 Harned Road, Commack, NY 11725, USA",
                            "created_at": "2009-12-22T16:21:23-05:00",
                            "street_2": null,
                            "country": "US",
                            "updated_at": "2009-12-22T16:32:37-05:00",
                            "postalcode": "11725",
                            "street": "67 Harned Road",
                            "lng": "-73.196225",
                            "customer_contact_type": "home",
                            "lat": "40.716894"
                        }
                    }
                ],
                "phones": [],
                "last_name": "Suriel",
                "custom_order": "4",
                "first_name": "Jeremy",
                "custom_t2": "",
                "custom_i": "",
                "custom_t3": null,
                "custom_t": "",
                "emails": [
                    {
                        "email": {
                            "verified_at": "2009-11-27T21:41:11-05:00",
                            "created_at": "2009-11-27T21:40:55-05:00",
                            "updated_at": "2009-11-27T21:41:11-05:00",
                            "customer_contact_type": "home",
                            "email": "jeremysuriel+twitter@gmail.com"
                        }
                    }
                ],
                "id": 8,
                "twitters": [
                    {
                        "twitter": {
                            "profile_image_url": "http://a3.twimg.com...",
                            "created_at": "2009-11-25T10:35:56-05:00",
                            "updated_at": "2010-05-29T22:41:55-04:00",
                            "twitter_user_id": 12267802,
                            "followers_count": 93,
                            "verified": false,
                            "login": "jrmey"
                        }
                    }
                ]
            }
        },
        {
            "customer": {
                "custom_test": null,
                "addresses": [],
                "phones": [],
                "last_name": "",
                "custom_order": null,
                "first_name": "jeremy@example.com",
                "custom_t2": null,
                "custom_i": null,
                "custom_t3": null,
                "custom_t": null,
                "emails": [
                    {
                        "email": {
                            "verified_at": null,
                            "created_at": "2009-12-05T20:39:00-05:00",
                            "updated_at": "2009-12-05T20:39:00-05:00",
                            "customer_contact_type": "home",
                            "email": "jeremy@example.com"
                        }
                    }
                ],
                "id": 27,
                "twitters": [
                    null
                ]
            }
        }
    ]
}
Is there a better use of FasterCSV to allow this? I assumed that << would add to a new row each time...but it doesn't seem to be working. I would appreciate any help!
You've got it all tangled up somehow; you're parsing the JSON too many times (and inside a loop!). Let's make it simpler:
customers = JSON.parse(data)["results"].map{|x| x['customer']}
customers.each do |c|
  csv << [c['first_name'], c['last_name']]
end
Also, 'wb' is the wrong mode for CSV; use just 'w'.
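Putting the pieces together, the whole method might look like this (a sketch built from the answer above: parse the response once, then write one row per customer):

def self.fetch_emails
  access_token ||= AssistlyArticle.remote_setup
  cust_response = access_token.get("https://blah.desk.com/api/v1/customers.json")

  # Parse the response once, outside any loop.
  customers = JSON.parse(cust_response.body)["results"].map { |x| x["customer"] }

  FasterCSV.open("/Users/default/file.csv", "w") do |csv|
    csv << ["First Name", "Last Name"]
    # csv << appends one row per call, so pass one customer's values at a time.
    customers.each do |c|
      csv << [c["first_name"], c["last_name"]]
    end
  end
end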
