Good Afternoon.
I am sending a 270 to the state (Michigan) and receiving a 271, which I then transform into a 4010 version of the 271 so that a legacy web service can absorb the data. The web service uses DBML and LINQ to translate the message into a series of classes that represent the database; after translation it performs a transaction and updates the client. However, I am getting an error that says:
The adapter failed to transmit message going to send port "SendEDI" with URL "http://biz05/WriteEligibilityResponse/service.svc". It will be retransmitted after the retry interval specified for this Send Port. Details: "System.ServiceModel.FaultException: a:InternalServiceFault
An attempt was made to remove a relationship between a X12_NM1 and a X12_271_2120C. However, one of the relationship's foreign keys (X12_271_2120C.X12_NM1_Id) cannot be set to null.
An attempt was made to remove a relationship between a X12_NM1 and a X12_271_2120C. However, one of the relationship's foreign keys (X12_271_2120C.X12_NM1_Id) cannot be set to null.
   at EligibilityLookup.Service.ResponseToSQL.WriteResponse(Message message)
   at SyncInvokeWriteResponse(Object , Object[] , Object[] )
   at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
   at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
   at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
   at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
   at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
System.InvalidOperationException
   at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)".
Keeping in mind that I cannot change the LINQ code (I cannot edit the client as part of a management decision; rebuilding the front end is Stage 2 of the project), is there any suggested way to get around this? I have already removed the 5010-to-4010 link in the map for this element, and I also do not care whether I get a complete 271 dataset into the legacy system.
Just googling the error came up with this:
http://blogs.msdn.com/b/bethmassi/archive/2007/10/02/linq-to-sql-and-one-to-many-relationships.aspx
If you can't change the LINQ model, then it appears you are going to have to map data into the 4010 document you send to the web service so that data is populated in the X12_NM1 that maps to the X12_271_2120C table.
I have an issue where I am unable to receive the URC message from the modem whenever it receives an SMS.
I know that it receives them, since I can find and read them if I use AT+CMGL, but I don't receive any notification when the modem gets them. I played around with the URC-related commands but have been unable to get it to work (other URCs work fine).
The modem is a BG600L M3 from Quectel, and the following is the sequence of commands I'm sending ("AT" is always omitted and the first command is literally "AT\r", basically an empty one).
//general config
AT\r
CFUN=1,0
E1
+QCFG=\"urc/ri/other\",\"pulse\",8,1
H0
&F
V1
+CMEE=1
&D0
E1
+CREG=2
+CGREG=2
+CEREG=2
//sms config
+CPMS=\"ME\",\"ME\",\"ME\"
+QINDCFG=\"smsincoming\",1
+CMGF=1
+CSDH=0
+CSCS=\"GSM\"
+CNMI=2,2,0,2,0
//doing some deleting and reading
+CMGD=1,3
+CPMS?
//getting the gps fix
+QGPS=1
+QGPSCFG=\"gnssconfig\",3
+QGPSLOC=1
+QGPSEND
//resetting the gms connection
+CFUN=0
+CFUN=1,0
//setting up the gsm connection
+QICFG=\"dataformat\",0,0
+QICFG=\"viewmode\",0
+QICFG=\"recvind\",1
+QICFG=\"tcp/retranscfg\",3,600
+QISDE=0
+QCFG=\"band\",0xf,0x80085,0x80085,1
+QCFG=\"nwscanmode\",1,1
+QCFG=\"nwscanseq\",010101,1
+QCFG=\"iotopmode\",2,1
// checking if it's connected
+CREG?
+QNWINFO
+COPS?
//Getting the time
+CTZU=3
+CTZR=0
+QLTS
+CCLK?
You can set AT+CNMI=2,1,2,0,0; that should do the trick.
According to the specification ETSI TS 127 005 V11.0.0 (2012-10):
+CNMI: <mode>,<mt>,<bm>,<ds>,<bfr>
By keeping the <mt> value at 1 we should get an indication when a message is stored in ME/TA:
<mt>: integer type (the rules for storing received SMs depend on its data coding scheme)
0 No SMS-DELIVER indications are routed to the TE.
1 If SMS-DELIVER is stored into ME/TA, indication of the memory location is routed to the TE using unsolicited result code:
+CMTI: <mem>,<index>
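In your sequence that would mean changing only the +CNMI line of the //sms config block (shown here in the same notation as your list; the +CMTI line is the unsolicited result code you should then see, with "ME" coming from your +CPMS setting and the index value being just an example):
//sms config (changed line)
+CNMI=2,1,2,0,0
//expected URC when an SMS is stored, e.g.:
//+CMTI: "ME",3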
I am using the Microsoft Outlook REST API to synchronize messages in a folder using skipTokens with the Prefer: odata.track-changes header.
After 62 successful rounds of results, I get an error 500 ErrorInternalServerError with the message Unable to cast object of type 'LegacyPagingToken' to type 'Microsoft.Exchange.Services.OData.Model.SkipToken'
I have tried:
Retrying the same query (https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o), which results in the same error
Restarting the sync, which results in the same error at the same point
Adding a new message to the Inbox and restarting the sync, which results in the same error at the same point
Moving the messages from that part of the sync to another folder (in case the messages themselves were causing the problem), which results in the same error at the same point
Has anybody run into this error or have suggestions on what might cause it or workarounds?
It looks like the issue was on my end while parsing the skipToken from the @odata.nextLink response. The token in the original question is invalid - the actual skipToken passed back from the API had -AAAA on the end. After 63 queries, in which the skipToken increments, the Base64-encoded form started using characters the regexp I was using didn't match. Switching from a \w regexp to a proper URL parser solved the problem.
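The question doesn't say what language the sync client is written in, but for illustration here is a minimal sketch (in Scala, with a hypothetical skipTokenFrom helper) of pulling the token out of the @odata.nextLink with a URL parser instead of a \w regexp, which misses characters such as '-' and '=' that can appear in the token:
import java.net.{URI, URLDecoder}

// Hypothetical helper: extract the $skipToken query parameter from the
// @odata.nextLink URL instead of regex-matching \w characters.
def skipTokenFrom(nextLink: String): Option[String] = {
  val query = Option(new URI(nextLink).getRawQuery).getOrElse("")
  query.split("&").collectFirst {
    case p if p.startsWith("%24skipToken=") || p.startsWith("$skipToken=") =>
      URLDecoder.decode(p.drop(p.indexOf('=') + 1), "UTF-8")
  }
}

// Usage: skipTokenFrom(nextLinkFromTheResponse) returns the full token,
// including suffix characters like '-' that the \w regexp was dropping.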
I'm currently writing a Scala application made of a Producer and a Consumer. The Producer gets some data from an external source and writes it to Kafka. The Consumer reads from Kafka and writes to Elasticsearch.
The consumer is based on Spark Streaming and every 5 seconds fetches new messages from Kafka and writes them to Elasticsearch. The problem is that I'm not able to write to ES because I get a lot of errors like the one below:
[ERROR] [2015-04-24 11:21:14,734] [org.apache.spark.TaskContextImpl]: Error in TaskCompletionListener
org.elasticsearch.hadoop.EsHadoopException: Could not write all entries [3/26560] (maybe ES was overloaded?). Bailing out...
   at org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:225) ~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3]
   at org.elasticsearch.hadoop.rest.RestRepository.close(RestRepository.java:236) ~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3]
   at org.elasticsearch.hadoop.rest.RestService$PartitionWriter.close(RestService.java:125) ~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3]
   at org.elasticsearch.spark.rdd.EsRDDWriter$$anonfun$write$1.apply$mcV$sp(EsRDDWriter.scala:33) ~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3]
   at org.apache.spark.TaskContextImpl$$anon$2.onTaskCompletion(TaskContextImpl.scala:57) ~[spark-core_2.10-1.2.1.jar:1.2.1]
   at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:68) [spark-core_2.10-1.2.1.jar:1.2.1]
   at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:66) [spark-core_2.10-1.2.1.jar:1.2.1]
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) [na:na]
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) [na:na]
   at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:66) [spark-core_2.10-1.2.1.jar:1.2.1]
   at org.apache.spark.scheduler.Task.run(Task.scala:58) [spark-core_2.10-1.2.1.jar:1.2.1]
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200) [spark-core_2.10-1.2.1.jar:1.2.1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
Consider that the producer is writing 6 messages every 15 seconds, so I really don't understand how this "overload" can possibly happen (I even cleaned the topic and flushed all old messages; I thought it was related to an offset issue). The task executed by Spark Streaming every 5 seconds can be summarized by the following code:
val result = KafkaUtils.createStream[String, Array[Byte], StringDecoder, DefaultDecoder](ssc, kafkaParams, Map("wasp.raw" -> 1), StorageLevel.MEMORY_ONLY_SER_2)
val convertedResult = result.map(k => (k._1 ,AvroToJsonUtil.avroToJson(k._2)))
//TO-DO : Remove resource (yahoo/yahoo) hardcoded parameter
log.info(s"*** EXECUTING SPARK STREAMING TASK + ${java.lang.System.currentTimeMillis()}***")
convertedResult.foreachRDD(rdd => {
rdd.map(data => data._2).saveToEs("yahoo/yahoo", Map("es.input.json" -> "true"))
})
If I try to print the messages instead of sending to ES, everything is fine and I actually see only 6 messages. Why can't I write to ES?
For the sake of completeness, I'm using this library to write to ES: elasticsearch-spark_2.10 with the latest beta version.
I found, after many retries, a way to write to Elasticsearch without getting any error. Basically, passing the parameter "es.batch.size.entries" -> "1" to the saveToEs method solved the problem. I don't understand why using the default or any other batch size leads to the aforementioned error, considering that I would expect an error message if I were trying to write more than the allowed max batch size, not less.
Moreover, I noticed that I actually was writing to ES, just not all my messages: I was losing between 1 and 3 messages per batch.
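For clarity, applied to the snippet from the question the change is just the extra option passed to saveToEs (import shown for completeness):
import org.elasticsearch.spark._

convertedResult.foreachRDD(rdd => {
  rdd.map(data => data._2).saveToEs(
    "yahoo/yahoo",
    Map("es.input.json" -> "true", "es.batch.size.entries" -> "1"))
})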
When I pushed a dataframe to ES from Spark, I had the same error message. Even with the "es.batch.size.entries" -> "1" configuration, I had the same error.
Once I increased the thread pool in ES, the issue was resolved.
For example:
Bulk pool
threadpool.bulk.type: fixed
threadpool.bulk.size: 600
threadpool.bulk.queue_size: 30000
As was already mentioned here, this is a document write conflict.
Your convertedResult data stream contains multiple records with the same id. When written to Elasticsearch as part of the same batch, they produce the error above.
Possible solutions (a rough sketch of both follows below):
Generate a unique id for each record. Depending on your use case, it can be done in a few different ways. As an example, one common solution is to create a new field by combining the id and lastModifiedDate fields and using that field as the id when writing to Elasticsearch.
Perform de-duplication of records based on id - select only one record with a particular id and discard the other duplicates. Depending on your use case, this could be the most current record (based on a timestamp field), the most complete (most of the fields contain data), etc.
The #1 solution will store all records that you receive in the stream.
The #2 solution will store only the unique records for a specific id based on your de-duplication logic. This result would be the same as setting "es.batch.size.entries" -> "1", except you will not limit the performance by writing one record at a time.
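Here is the sketch of both approaches. It is purely illustrative: the "id", "lastModifiedDate" and "docId" field names are assumptions, not from the question; "es.mapping.id" is the connector option that picks the field to use as the Elasticsearch _id.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.elasticsearch.spark._

val sc = new SparkContext(new SparkConf().setAppName("es-dedup-sketch"))

// Two records sharing the same id, as would happen in a conflicting batch.
val records: RDD[Map[String, String]] = sc.parallelize(Seq(
  Map("id" -> "42", "lastModifiedDate" -> "2015-04-24T11:00:00", "payload" -> "a"),
  Map("id" -> "42", "lastModifiedDate" -> "2015-04-24T11:05:00", "payload" -> "b")
))

// Solution 1: combine id and lastModifiedDate into a new field and use it
// as the document _id, so both records are kept as separate documents.
records
  .map(r => r + ("docId" -> s"${r("id")}_${r("lastModifiedDate")}"))
  .saveToEs("yahoo/yahoo", Map("es.mapping.id" -> "docId"))

// Solution 2: keep only the most recent record per id before writing.
records
  .map(r => (r("id"), r))
  .reduceByKey((a, b) => if (a("lastModifiedDate") >= b("lastModifiedDate")) a else b)
  .values
  .saveToEs("yahoo/yahoo")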
One possibility is that the cluster/shard status is RED. Please address that issue, which may be due to unassigned replicas. Once the status turned GREEN, the API call succeeded just fine.
This is a document write conflict.
For example:
Multiple documents specify the same _id for Elasticsearch to use.
These documents are located in different partitions.
Spark writes multiple partitions to ES simultaneously.
The result is Elasticsearch receiving multiple updates for a single document at once - from multiple sources / through multiple nodes / containing different data.
This matches the symptoms in the question: "I was losing between 1 and 3 messages per batch."
A fluctuating number of failures when the batch size is > 1; success when the batch write size is "1".
Just adding another potential reason for this error, hopefully it helps someone.
If your Elasticsearch index has child documents then:
if you are using a custom routing field (not _id), then according to the documentation the uniqueness of the documents is not guaranteed. This might cause issues while updating from Spark.
If you are using the standard _id, the uniqueness will be preserved, however you need to make sure the following options are provided while writing from Spark to Elasticsearch:
es.mapping.join
es.mapping.routing
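For illustration only (the "my-index" resource and the "myJoinField"/"myRoutingField" names are placeholders, not from the answer above), passing those options from Spark looks roughly like this:
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

val sc = new SparkContext(new SparkConf().setAppName("es-join-sketch"))

// Parent and child documents; the child's join field names its parent, and
// both carry the same routing value so they land on the same shard.
val docs = sc.parallelize(Seq(
  Map("id" -> "p1", "myJoinField" -> "question", "myRoutingField" -> "p1"),
  Map("id" -> "c1", "myJoinField" -> Map("name" -> "answer", "parent" -> "p1"), "myRoutingField" -> "p1")
))

docs.saveToEs("my-index", Map(
  "es.mapping.join"    -> "myJoinField",
  "es.mapping.routing" -> "myRoutingField"
))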
I'm trying to index many documents using NEST to Elasticsearch. Things run fine when there's a limited number of documents, but when I ramp up the number - from, say, 1,000 to 50,000 - it throws an error. I'm not convinced it's due to the number of documents - it could be bad data.
I'm trying to safeguard against bad data, though - I'm only indexing documents that have an id. The id is being generated from one of my fields (upc), so I'm positive there's an id for every document. I'm also making sure the class object that it's serializing to/from has all nullable properties.
Still, there's nothing informational that I can see that helps me with this error.
The error I get is:
Unable to perform request: 'POST' on any of the nodes after retrying 0 times
And here's the stack trace when it throws the error:
at Elasticsearch.Net.Connection.Transport.RetryRequest[T](TransportRequestState`1 requestState, Uri baseUri, Int32 retried, Exception e) in c:\Projects\NEST\src\Elasticsearch.Net\Connection\Transport.cs:line 241
at Elasticsearch.Net.Connection.Transport.DoRequest[T](TransportRequestState`1 requestState, Int32 retried) in c:\Projects\NEST\src\Elasticsearch.Net\Connection\Transport.cs:line 215
at Elasticsearch.Net.Connection.Transport.DoRequest[T](String method, String path, Object data, IRequestParameters requestParameters) in c:\Projects\NEST\src\Elasticsearch.Net\Connection\Transport.cs:line 163
at Elasticsearch.Net.ElasticsearchClient.DoRequest[T](String method, String path, Object data, BaseRequestParameters requestParameters) in c:\Projects\NEST\src\Elasticsearch.Net\ElasticsearchClient.cs:line 75
at Elasticsearch.Net.ElasticsearchClient.Bulk[T](Object body, Func`2 requestParameters) in c:\Projects\NEST\src\Elasticsearch.Net\ElasticsearchClient.Generated.cs:line 45
at Nest.RawDispatch.BulkDispatch[T](ElasticsearchPathInfo`1 pathInfo, Object body) in c:\Projects\NEST\src\Nest\RawDispatch.generated.cs:line 34
at Nest.ElasticClient.<Bulk>b__d6(ElasticsearchPathInfo`1 p, BulkDescriptor d) in c:\Projects\NEST\src\Nest\ElasticClient-Bulk.cs:line 20
at Nest.ElasticClient.Dispatch[D,Q,R](D descriptor, Func`3 dispatch, Boolean allow404) in c:\Projects\NEST\src\Nest\ElasticClient.cs:line 86
at Nest.ElasticClient.Dispatch[D,Q,R](Func`2 selector, Func`3 dispatch, Boolean allow404) in c:\Projects\NEST\src\Nest\ElasticClient.cs:line 72
at Nest.ElasticClient.Bulk(Func`2 bulkSelector) in c:\Projects\NEST\src\Nest\ElasticClient-Bulk.cs:line 15
at Nest.ElasticClient.IndexMany[T](IEnumerable`1 objects, String index, String type) in c:\Projects\NEST\src\Nest\ElasticClient-Index.cs:line 44
at ElasticsearchLoad.Program.BuildBulkApi() in c:\Projects\ElasticsearchLoad\ElasticsearchLoad\Program.cs:line 258
Any help would be appreciated!
You are going to be limited in the effective bulk size you can send to Elasticsearch by a combination of your documents and Elasticsearch configuration. There is not any "single best answer" for this, but with some testing and configuration changes you should be able to achieve a suitable bulk indexing performance threshold. Here are some resources to assist you...
elasticsearch bulk indexing gets slower over time with constant number of indexes and documents
Write heavy elasticsearch
Scaling Elasticsearch Part 1: Overview
And for the overall sizing of Elasticsearch I would highly recommend reading Sizing Elasticsearch - Scaling up and out.
If you are running a multi-node cluster, make sure your setup is the same on all nodes.
I am not sure if this can help you, but I had a similar issue in a 2-node cluster. I was adding synonyms and set up the file only on the master machine; I completely forgot to copy it over to the 2nd node. This was causing the error above for me when creating a new index that depended on that synonym file.
After I added the synonym file and restarted the 2nd node, everything went back to normal.
I am creating my first AddOn using Quickbooks POS AddOn Dev Kit v10.
I have created a button in the receipts side buttons panel.
Now what I want is the current sales receipt.
For that, what I am trying to do is get the TxnID and query the request processor with that TxnID to get the whole receipt.
I have managed to get information like Qty, Desc1, ItemNum, etc. I have also got the Receipt schema.
https://idnforums.intuit.com/messageview.aspx?catid=49&threadid=16722
The above URL says DocSID is the TxnID, but I can't get the field value through DocSID.
How can I get the TxnID, or is there a better way to get the current sales receipt?
Thanks in Advance.
After working on it for 2-3 days, I learned that the TxnID is created only after the sales receipt is saved in QB POS through the IPOSService ProcessQBPOSXMLRequest method.
ProcessQBPOSXMLRequest only takes and responds in XML format. I created the receipt request in XML and sent it to ProcessQBPOSXMLRequest for processing.
I was avoiding creating the XML request, since it is long and tedious work, but I had done similar work while creating another application with the QBPOS SDK v3 and the QBPOSFC3 library. I copied that code, added a reference to QBPOSFC3.dll, and created the XML from the IMsgSetRequest interface, which sends the request to the POS request processor and converts the request into XML format.