Flink is not adding any data to Elasticsearch, but there are no errors - elasticsearch

Folks, I'm new to all this data streaming business, but I was able to build and submit a Flink job that reads some CSV data from Kafka, aggregates it, and then puts it in Elasticsearch.
I was able to do the first two parts and print my aggregation to STDOUT. But when I added the code to write to Elasticsearch, nothing seems to be happening there (no data is being added). I looked at the Flink job manager log, and it looks fine (no errors) and says:
2020-03-03 16:18:03,877 INFO
org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge
- Created Elasticsearch RestHighLevelClient connected to [http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200]
Here is my code at this point:
/*
 * This Scala source file was generated by the Gradle 'init' task.
 */
package flinkNamePull

import java.time.LocalDateTime
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{DataTypes, Table}
import org.apache.flink.table.api.scala.StreamTableEnvironment
import org.apache.flink.table.descriptors.{Elasticsearch, Json, Schema}

object Demo {
  /**
   * MapFunction to generate Transfers POJOs from parsed CSV data.
   */
  class TransfersMapper extends RichMapFunction[String, Transfers] {
    private var formatter = null

    @throws[Exception]
    override def open(parameters: Configuration): Unit = {
      super.open(parameters)
      //formatter = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
    }

    @throws[Exception]
    override def map(csvLine: String): Transfers = {
      //var splitCsv = csvLine.stripLineEnd.split("\n")(1).split(",")
      var splitCsv = csvLine.stripLineEnd.split(",")
      val arrLength = splitCsv.length
      // Pad short rows out to the expected 13 columns: empty strings for the
      // text fields and "0.0" for the trailing numeric los column
      if (arrLength != 13) {
        for (i <- arrLength + 1 to 13) {
          if (i == 13) {
            splitCsv = splitCsv :+ "0.0"
          } else {
            splitCsv = splitCsv :+ ""
          }
        }
      }
      val trans = new Transfers()
      trans.rowId = splitCsv(0)
      trans.subjectId = splitCsv(1)
      trans.hadmId = splitCsv(2)
      trans.icuStayId = splitCsv(3)
      trans.dbSource = splitCsv(4)
      trans.eventType = splitCsv(5)
      trans.prev_careUnit = splitCsv(6)
      trans.curr_careUnit = splitCsv(7)
      trans.prev_wardId = splitCsv(8)
      trans.curr_wardId = splitCsv(9)
      trans.inTime = splitCsv(10)
      trans.outTime = splitCsv(11)
      trans.los = splitCsv(12).toDouble
      trans
    }
  }

  def main(args: Array[String]) {
    // Create the streaming execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)

    // Set properties per the KafkaConsumer API
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "kafka.kafka:9092")
    properties.setProperty("group.id", "test")

    // Add the Kafka source to the environment
    val myKConsumer = new FlinkKafkaConsumer010[String]("raw.data3", new SimpleStringSchema(), properties)
    // Read from the beginning of the topic
    myKConsumer.setStartFromEarliest()
    val streamSource = env
      .addSource(myKConsumer)

    // Transform CSV (with a header row per Kafka event) into a Transfers object
    val streamTransfers = streamSource.map(new TransfersMapper())

    // Create a TableEnvironment
    val tEnv = StreamTableEnvironment.create(env)

    println("***** NEW EXECUTION STARTED AT " + LocalDateTime.now() + " *****")

    // Register a Table
    val tblTransfers: Table = tEnv.fromDataStream(streamTransfers)
    tEnv.createTemporaryView("transfers", tblTransfers)

    tEnv.connect(
      new Elasticsearch()
        .version("7")
        .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http") // required: one or more Elasticsearch hosts to connect to
        .index("transfers-sum")
        .documentType("_doc")
        .keyNullLiteral("n/a")
    )
      .withFormat(new Json().jsonSchema("{type: 'object', properties: {curr_careUnit: {type: 'string'}, sum: {type: 'number'}}}"))
      .withSchema(new Schema()
        .field("curr_careUnit", DataTypes.STRING())
        .field("sum", DataTypes.DOUBLE())
      )
      .inUpsertMode()
      .createTemporaryTable("transfersSum")

    val result = tEnv.sqlQuery(
      """
        |SELECT curr_careUnit, sum(los)
        |FROM transfers
        |GROUP BY curr_careUnit
        |""".stripMargin)

    result.insertInto("transfersSum")

    // Elasticsearch: elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200
    env.execute("Flink Streaming Demo Dump to Elasticsearch")
  }
}
I'm not sure how I can debug this beast... Wondering if somebody can help me figure out why the Flink job is not adding data to Elasticsearch :(
From my Flink cluster, I'm able to query Elasticsearch just fine (manually) and add records to my index:
curl -XPOST "http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200/transfers-sum/_doc" -H 'Content-Type: application/json' -d'{"curr_careUnit":"TEST123","sum":"123"}'

A kind soul on the Flink mailing list pointed out that it could be the Elasticsearch connector buffering my records... Well, it was. ;)
I have added the following options to the Elasticsearch connector:
.bulkFlushMaxActions(2)
.bulkFlushInterval(1000L)
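For context, here is how those options slot into the connector descriptor from the code above (a minimal sketch of just the relevant fragment):

tEnv.connect(
  new Elasticsearch()
    .version("7")
    .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http")
    .index("transfers-sum")
    .documentType("_doc")
    .keyNullLiteral("n/a")
    // Flush after every 2 buffered actions and at least once per second,
    // so small test volumes show up in Elasticsearch right away
    .bulkFlushMaxActions(2)
    .bulkFlushInterval(1000L)
)

With bulkFlushMaxActions(2) records appear almost immediately; for production volumes you would want a much larger value, since tiny bulks defeat the purpose of the bulk processor.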

Flink Elasticsearch Connector 7 using Scala
Please find a working and detailed answer which I have provided here.
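In case the link goes stale: the same result can also be reached with the DataStream API instead of the Table descriptor. Below is my own minimal sketch (not necessarily the linked answer's exact code) using the flink-connector-elasticsearch7 builder. Here "aggregated" is an assumed DataStream[(String, Double)] of (curr_careUnit, sum) pairs, e.g. obtained via tEnv.toRetractStream:

import java.util
import org.apache.http.HttpHost
import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink
import org.elasticsearch.client.Requests

val httpHosts = new util.ArrayList[HttpHost]()
httpHosts.add(new HttpHost("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http"))

val esSinkBuilder = new ElasticsearchSink.Builder[(String, Double)](
  httpHosts,
  new ElasticsearchSinkFunction[(String, Double)] {
    override def process(element: (String, Double), ctx: RuntimeContext, indexer: RequestIndexer): Unit = {
      // Build the document with the same field names the JSON format above declares
      val json = new util.HashMap[String, Any]()
      json.put("curr_careUnit", element._1)
      json.put("sum", element._2)
      indexer.add(Requests.indexRequest.index("transfers-sum").source(json))
    }
  }
)
// Same buffering caveat as above: flush eagerly while testing
esSinkBuilder.setBulkFlushMaxActions(2)
// aggregated.addSink(esSinkBuilder.build())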

Related

AWS Glue cannot read a case-sensitive table from Oracle

I am trying to bring data from an Oracle table that is case sensitive into AWS S3 using AWS Glue. The Oracle query looks something like this:
Select *
from myschema."Employee_Salary"
The AWS Glue crawler is able to pull the table metadata, but the Glue job does not pull the data; it fails with the error "no table or view exists".
My sample code looks like this:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "orctest4", table_name = "orcl_orcl_Employee_Salary", transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("empid", "decimal(10,2)", "empid", "decimal(10,2)"), ("addressdescription", "string", "addressdescription", "string"), ("salary_id", "decimal(10,2)", "salary_id", "decimal(10,2)"), ("salary_amount", "decimal(10,2)", "salary_amount", "decimal(10,2)"), ("manager_id", "decimal(10,0)", "manager_id", "decimal(10,0)")], transformation_ctx = "applymapping1")
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://test/my_orcle_parquet"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

Ballerina Integrator and Elasticsearch

I am working on my bachelor's degree project and I have to use the Ballerina Integrator middleware.
Now I need to make a service that can index data arriving in Ballerina into Elasticsearch. Is there a way to communicate with Elasticsearch using only Ballerina (without using Logstash or Filebeat), just like we communicate with a SQL database?
In case someone is looking for the same thing: I just found a way to communicate with Elasticsearch, and it's working very well. Here is the code:
import ballerina/http;
import ballerina/log;

listener http:Listener httpListener = new(9090);
http:Client elasticDB = new("http://localhost:9200/");

@http:ServiceConfig {
    basePath: "/elastic"
}
service GetElasticSource on httpListener {
    @http:ResourceConfig {
        methods: ["GET"],
        path: "/{index}/_source/{id}"
    }
    resource function retrieveElasticIndexById(http:Caller httpCaller, http:Request httpRequest, string index, string id) {
        http:Response resp = new;
        var data = elasticDB->get(<@untainted> ("/" + index + "/_source/" + id));
        if (data is http:Response) {
            resp = data;
        }
        var respRet = httpCaller->respond(resp);
        if (respRet is error) {
            log:printError("error responding to the client", err = respRet);
        }
    }
}

Spring Boot Stackdriver logging is textPayload and not jsonPayload

I have a log filter that logs out essential request information for debugging and log analytics, but as you can see, the text payload is really hard to read. I don't want to have to copy and paste this text payload into a text editor every single time. Is there a way to make Stackdriver print this as collapsible JSON instead?
More info:
- GKE pod
import org.json.JSONObject // assumption: the JSONObject used here is org.json's
import org.slf4j.LoggerFactory
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import org.springframework.web.server.WebFilter
import org.springframework.web.server.WebFilterChain
import reactor.core.publisher.Mono

@Component
class LogFilter : WebFilter {
    private val logger = LoggerFactory.getLogger(LogFilter::class.java)

    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        // Capture the start time before the chain runs, so request_time is meaningful
        val startTime = System.currentTimeMillis()
        return chain
            .filter(exchange)
            .doAfterTerminate {
                val request = exchange.request
                val path = request.uri.path
                val routesToExclude = listOf("actuator")
                var isExcludedRoute = false
                for (r in routesToExclude) {
                    if (path.contains(r)) {
                        isExcludedRoute = true
                        break
                    }
                }
                if (!isExcludedRoute) {
                    val statusCode = exchange.response.statusCode?.value()
                    val requestTime = System.currentTimeMillis() - startTime
                    val msg = "Served $path as $statusCode in $requestTime msec"
                    val requestPrintMap = mutableMapOf<Any, Any>()
                    requestPrintMap["method"] = if (request.method != null) {
                        request.method.toString()
                    } else "UNKNOWN"
                    requestPrintMap["path"] = path.toString()
                    requestPrintMap["query_params"] = request.queryParams
                    requestPrintMap["headers"] = request.headers
                    requestPrintMap["status_code"] = statusCode.toString()
                    requestPrintMap["request_time"] = requestTime
                    requestPrintMap["msg"] = msg
                    logger.info(JSONObject(requestPrintMap).toString())
                }
            }
    }
}
What you will need to do is customize Fluentd in GKE; essentially, you create a Fluentd daemonset for logging instead of using the default logging method.
Once that is done, you can set up structured logging to send jsonPayload logs to Stackdriver Logging.
The default Stackdriver logging agent configuration for Kubernetes will detect single-line JSON and convert it to jsonPayload. You can configure Spring to log as single-line JSON (e.g., via JsonLayout[1]) and let the logging agent pick up the JSON object (see https://cloud.google.com/logging/docs/agent/configuration#process-payload).
[1] Some of the JSON field names are different (e.g., JsonLayout uses "level" for the log level, while the Stackdriver logging agent recognizes "severity"), so you may have to override addCustomDataToJsonMap to fully control the resulting log entries.
See also: GKE & Stackdriver: Java logback logging format?

IllegalStateException when trying to run Spark Streaming with Twitter

I am new to Spark and Scala. I am trying to run an example I found through Google, and I am encountering the following exception when running this program:
17/05/25 11:13:42 ERROR ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error starting Twitter stream - java.lang.IllegalStateException: Authentication credentials are missing.
The code that I am executing is as follows:
PrintTweets.scala
package example

import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.streaming.StreamingContext._
import org.apache.log4j.Level
import Utilities._

object PrintTweets {
  def main(args: Array[String]) {
    // Configure Twitter credentials using twitter.txt
    setupTwitter()

    val appName = "TwitterData"
    val conf = new SparkConf()
    conf.setAppName(appName).setMaster("local[3]")
    val ssc = new StreamingContext(conf, Seconds(5))
    //val ssc = new StreamingContext("local[*]", "PrintTweets", Seconds(10))
    setupLogging()

    // Create a DStream from Twitter using our streaming context
    val tweets = TwitterUtils.createStream(ssc, None)
    // Now extract the text of each status update into RDDs using map()
    val statuses = tweets.map(status => status.getText())
    statuses.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
Utilities.scala
package example

import org.apache.log4j.Level
import java.util.regex.Pattern
import java.util.regex.Matcher

object Utilities {
  /** Makes sure only ERROR messages get logged to avoid log spam. */
  def setupLogging() = {
    import org.apache.log4j.{Level, Logger}
    val rootLogger = Logger.getRootLogger()
    rootLogger.setLevel(Level.ERROR)
  }

  /** Configures Twitter service credentials using twitter.txt in the main workspace directory. */
  def setupTwitter() = {
    import scala.io.Source
    for (line <- Source.fromFile("../twitter.txt").getLines) {
      val fields = line.split(" ")
      if (fields.length == 2) {
        System.setProperty("twitter4j.oauth." + fields(0), fields(1))
      }
    }
  }

  /** Retrieves a regex Pattern for parsing Apache access logs. */
  def apacheLogPattern(): Pattern = {
    val ddd = "\\d{1,3}"
    val ip = s"($ddd\\.$ddd\\.$ddd\\.$ddd)?"
    val client = "(\\S+)"
    val user = "(\\S+)"
    val dateTime = "(\\[.+?\\])"
    val request = "\"(.*?)\""
    val status = "(\\d{3})"
    val bytes = "(\\S+)"
    val referer = "\"(.*?)\""
    val agent = "\"(.*?)\""
    val regex = s"$ip $client $user $dateTime $request $status $bytes $referer $agent"
    Pattern.compile(regex)
  }
}
When I check using print statements, I find the exception is happening at this line:
val tweets = TwitterUtils.createStream(ssc, None)
I am providing the credentials in the twitter.txt file, which is read properly by the program. When I don't place twitter.txt in the appropriate directory, it shows an explicit error, and it shows an explicit "unauthorized access" error when I give blank keys for the consumer key and secret in twitter.txt.
If you need more details about the error or the software versions, let me know.
Thanks,
Madhu.
I could reproduce the issue with your code, and I believe this is your problem: you might not have configured twitter.txt properly. Your twitter.txt file should look like this:
consumerKey your_consumerKey
consumerSecret your_consumerSecret
accessToken your_accessToken
accessTokenSecret your_accessTokenSecret
I hope it helps.
After changing the twitter.txt file syntax to the following, with a single space between the key and the value, it worked:
consumerKey your_consumerKey
consumerSecret your_consumerSecret
accessToken your_accessToken
accessTokenSecret your_accessTokenSecret
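One extra debugging aid (my own addition, not part of the original answers): since setupTwitter() only sets system properties, you can fail fast with a readable message when twitter.txt is malformed, instead of waiting for the receiver restart. A minimal sketch, placed right after setupTwitter() in main():

// Hypothetical sanity check: verify the twitter4j properties that
// setupTwitter() is expected to populate from twitter.txt
val requiredKeys = Seq("consumerKey", "consumerSecret", "accessToken", "accessTokenSecret")
val missing = requiredKeys.filterNot(k => sys.props.contains("twitter4j.oauth." + k))
require(missing.isEmpty, s"twitter.txt is missing credentials for: ${missing.mkString(", ")}")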

Acumatica: How to create a new customer in Ruby?

I need to create a Customer using the SOAP API in Ruby (we want to consume the Acumatica API from a Ruby on Rails project).
Currently my code, using the Savon gem, looks like this:
client = Savon.client(wsdl: 'wsdl.wsdl') # sample wsdl path
response = client.call :login, message: { name: '', password: '' }
auth_cookies = response.http.cookies

class ServiceRequest
  def to_s
    builder = Builder::XmlMarkup.new
    builder.instruct!(:xml, encoding: 'UTF-8')
    # ... the problem is here: I don't know what the XML request should look like
    builder
  end
end

p client.call :submit, message: ServiceRequest.new, cookies: auth_cookies
The problem is that I don't know what the XML request should look like.
C# requests look like this (just a piece of sample code from the docs):
PO302000result = context.PO302000Submit(
    new Command[]
    {
        new Value { Value = "PORE000079", LinkedCommand = PO302000.DocumentSummary.ReceiptNbr },
        new Value { Value = "OK", LinkedCommand = PO302000.AddPurchaseOrderLine.ServiceCommands.DialogAnswer, Commit = true },
        PO302000.Actions.AddPOOrderLine,
        new Key { Value = "='PORG000084'", FieldName = PO302000.AddPurchaseOrderLine.OrderNbr.FieldName, ObjectName = PO302000.AddPurchaseOrderLine.OrderNbr.ObjectName },
        new Key { Value = "='CPU00004'", FieldName = PO302000.AddPurchaseOrderLine.InventoryID.FieldName, ObjectName = PO302000.AddPurchaseOrderLine.InventoryID.ObjectName },
        new Value { Value = "True", LinkedCommand = PO302000.AddPurchaseOrderLine.Selected, Commit = true },
        PO302000.Actions.AddPOOrderLine2,
        new Key { Value = "='CPU00004'", FieldName = PO302000.DocumentDetails_.InventoryID.FieldName, ObjectName = PO302000.DocumentDetails_.InventoryID.ObjectName },
        new Value { Value = "1.00", LinkedCommand = PO302000.DocumentDetails_.ReceiptQty, Commit = true },
        // the next part of code is needed if you use Serial items
        PO302000.BinLotSerialNumbers.ServiceCommands.NewRow,
        new Value { Value = "R01", LinkedCommand = PO302000.BinLotSerialNumbers.Location },
        PO302000.Actions.Save
    });
But I don't know what kind of XML this code produces. It looks like we have a Commands array with Values and then an action name. What XML does this kind of code render? Maybe some C# or Java folks can send me sample XML requests rendered by that kind of code?
Thank you a lot.
Basically, it's a bad idea to generate the XML SOAP package manually; you should have a wrapper on your side that simplifies your code.
Anyway, here is the C# code, plus the XML SOAP request it produces:
Content[] result = context.Submit(
new Command[]
{
new Value { Value = "PORE000079", LinkedCommand = PO302000.DocumentSummary.ReceiptNbr}
,new Value { Value = "OK", LinkedCommand = PO302000.AddPurchaseOrderLine.ServiceCommands.DialogAnswer, Commit = true }
,PO302000.Actions.AddPOOrderLine
,new Key { Value = "='PORG000077'", FieldName = PO302000.AddPurchaseOrderLine.OrderNbr.FieldName, ObjectName = PO302000.AddPurchaseOrderLine.OrderNbr.ObjectName }
,new Key { Value = "='CPU00004'", FieldName = PO302000.AddPurchaseOrderLine.InventoryID.FieldName, ObjectName = PO302000.AddPurchaseOrderLine.InventoryID.ObjectName }
,new Value{ Value = "True", LinkedCommand = PO302000.AddPurchaseOrderLine.Selected, Commit = true }
,PO302000.Actions.AddPOOrderLine2
,new Key{ Value = "='CPU00004'", FieldName = PO302000.DocumentDetails.InventoryID.FieldName, ObjectName = PO302000.DocumentDetails.InventoryID.ObjectName}
,new Value{ Value = "1.00", LinkedCommand = PO302000.DocumentDetails.ReceiptQty, Commit = true}
// the next part of code is needed if you use Serial items
,PO302000.BinLotSerialNumbers.ServiceCommands.NewRow
,new Value { Value = "R01", LinkedCommand = PO302000.BinLotSerialNumbers.Location }
,new Value { Value = "1.00", LinkedCommand = PO302000.BinLotSerialNumbers.Quantity, Commit = true }
,new Value { Value = "25.00", LinkedCommand = PO302000.DocumentDetails.UnitCost, Commit = true }
,new Key { Value = "='CPU00004'", FieldName = PO302000.DocumentDetails.InventoryID.FieldName, ObjectName = PO302000.DocumentDetails.InventoryID.ObjectName }
,new Value { Value = "0.00", LinkedCommand = PO302000.DocumentDetails.ReceiptQty, Commit = true }
,PO302000.Actions.Save
}
);
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <soap:Body>
    <Submit xmlns="http://www.acumatica.com/typed/">
      <commands>
        <Command xsi:type="Value"><Value>PORE000079</Value><LinkedCommand xsi:type="Field"><FieldName>ReceiptNbr</FieldName><ObjectName>Document</ObjectName><Value>ReceiptNbr</Value><Commit>true</Commit><LinkedCommand xsi:type="Action"><FieldName>cancel</FieldName><ObjectName>Document</ObjectName><LinkedCommand xsi:type="Key"><FieldName>ReceiptNbr</FieldName><ObjectName>Document</ObjectName><Value>=[Document.ReceiptNbr]</Value><LinkedCommand xsi:type="Key"><FieldName>ReceiptType</FieldName><ObjectName>Document</ObjectName><Value>=[Document.ReceiptType]</Value></LinkedCommand></LinkedCommand></LinkedCommand></LinkedCommand></Command>
        <Command xsi:type="Value"><Value>OK</Value><Commit>true</Commit><LinkedCommand xsi:type="Answer"><ObjectName>poLinesSelection</ObjectName><Value>='Yes'</Value></LinkedCommand></Command>
        <Command xsi:type="Action"><FieldName>AddPOOrderLine</FieldName><ObjectName>Document</ObjectName><Commit>true</Commit></Command>
        <Command xsi:type="Key"><FieldName>OrderNbr</FieldName><ObjectName>poLinesSelection</ObjectName><Value>='PORG000077'</Value></Command>
        <Command xsi:type="Key"><FieldName>InventoryID</FieldName><ObjectName>poLinesSelection</ObjectName><Value>='CPU00004'</Value></Command>
        <Command xsi:type="Value"><Value>True</Value><Commit>true</Commit><LinkedCommand xsi:type="Field"><FieldName>Selected</FieldName><ObjectName>poLinesSelection</ObjectName><Value>Selected</Value><Commit>true</Commit></LinkedCommand></Command>
        <Command xsi:type="Action"><FieldName>AddPOOrderLine2</FieldName><ObjectName>Document</ObjectName><Commit>true</Commit></Command>
        <Command xsi:type="Key"><FieldName>InventoryID</FieldName><ObjectName>transactions</ObjectName><Value>='CPU00004'</Value></Command>
        <Command xsi:type="Value"><Value>1.00</Value><Commit>true</Commit><LinkedCommand xsi:type="Field"><FieldName>ReceiptQty</FieldName><ObjectName>transactions</ObjectName><Value>ReceiptQty</Value><Commit>true</Commit></LinkedCommand></Command>
        <Command xsi:type="NewRow"><ObjectName>splits</ObjectName></Command>
        <Command xsi:type="Value"><Value>R01</Value><LinkedCommand xsi:type="Field"><FieldName>LocationID</FieldName><ObjectName>splits</ObjectName><Value>Location</Value></LinkedCommand></Command>
        <Command xsi:type="Value"><Value>1.00</Value><Commit>true</Commit><LinkedCommand xsi:type="Field"><FieldName>Qty</FieldName><ObjectName>splits</ObjectName><Value>Quantity</Value></LinkedCommand></Command>
        <Command xsi:type="Value"><Value>25.00</Value><Commit>true</Commit><LinkedCommand xsi:type="Field"><FieldName>CuryUnitCost</FieldName><ObjectName>transactions</ObjectName><Value>UnitCost</Value></LinkedCommand></Command>
        <Command xsi:type="Key"><FieldName>InventoryID</FieldName><ObjectName>transactions</ObjectName><Value>='CPU00004'</Value></Command>
        <Command xsi:type="Value"><Value>0.00</Value><Commit>true</Commit><LinkedCommand xsi:type="Field"><FieldName>ReceiptQty</FieldName><ObjectName>transactions</ObjectName><Value>ReceiptQty</Value><Commit>true</Commit></LinkedCommand></Command>
        <Command xsi:type="Action"><FieldName>Save</FieldName><ObjectName>Document</ObjectName></Command>
      </commands>
    </Submit>
  </soap:Body>
</soap:Envelope>
So in the end, what I did:
gem install 'mumboe-soap4r' # not soap4r, because soap4r is old and buggy with newer Rubies
Then I ran
wsdl2ruby.rb --wsdl customer.wsdl --type client
where wsdl2ruby.rb is installed together with the mumboe-soap4r gem. Replace customer.wsdl with the path to your WSDL; it can be a URL or a file-system path.
After running this command, the following files were created:
default.rb
defaultMappingRegistry.rb
defaultDriver.rb
ScreenClient.rb
Using those files, you can write code similar to the C# or PHP code to interact with the Acumatica API:
require_relative 'defaultDriver'
require 'soap/wsdlDriver'

# This is my helper method to make life easier
def prepare_value(value, command, need_commit = false, ignore = false)
  value_command = Value.new
  value_command.value = value
  value_command.linkedCommand = command
  value_command.ignoreError = ignore unless ignore.nil?
  value_command.commit = need_commit unless need_commit.nil?
  value_command
end

soap_client = SOAP::WSDLDriverFactory.new('customer.wsdl').create_rpc_driver
soap_client.login(name: '', password: '').loginResult
screen = soap_client.getSchema(nil)
soap_client.clear(nil)
content = screen.getSchemaResult
# p schema
p customer = content.customerSummary.customerID
p customer_name = content.customerSummary.customerName
country = content.generalInfoMainAddress.country
customer_class = content.generalInfoFinancialSettings.customerClass

commands = ArrayOfCommand.new
commands << prepare_value('ABBA', customer_name)
commands << prepare_value('US', country)
commands << prepare_value('MERCHANT', customer_class)
commands << content.actions.insert
commands << customer.clone # to return
p commands
p soap_client.submit(commands)
Hope it will help someone.
Actually, 'mumboe-soap4r', 'soap2r', and 'soap4r' don't work with the Acumatica SOAP API; they are too old and buggy.
In the end, I am using the Savon gem (version 2) and building the message with the XmlMarkup class. But how do I know what XML to create? I first build the SOAP request in .NET, look at what the correct XML request looks like, and only then create the SOAP request with the Savon gem. Too much work, but I don't know a better way for now. It works.
In order for Savon to work with the Acumatica API, I set the following options:
client = Savon.client do
  wsdl 'http://path/Soap/AR303000.asmx?wsdl'
  log true
  namespaces 'xmlns:soap' => 'http://schemas.xmlsoap.org/soap/envelope/'
  env_namespace 'soap'
  namespace_identifier nil
  element_form_default ''
end
Don't forget to pass the auth cookies:
response = client.call(:login, message: { name: '', password: '' })
auth_cookies = response.http.cookies
Build your customer-creation XML, then do the submit:
m = build_create_submit(customer_name)
response = client.call(:submit, message: m, cookies: auth_cookies)

# Get the customer id of the newly created customer
customer_id = response.try(:hash).try(:[], :envelope).
  try(:[], :body).
  try(:[], :submit_response).
  try(:[], :submit_result).
  try(:[], :content).
  try(:[], :customer_summary).
  try(:[], :customer_id).
  try(:[], :value)
