Problems with Core Data migrations in RubyMotion

I have been researching how to do schema migrations with Core Data in RubyMotion.
The problem with Core Data schema migrations is that they are normally done through Xcode: if you are an ordinary Objective-C iOS developer, life is good. But since we are using RubyMotion, we have to handle this manually.
An Xcode Core Data project comes with xcdatamodel files, diagram-like graphs that show the entities and properties of an app and let you add or modify them. You can create versioned xcdatamodel files and set up a migration from one version to another; Xcode also offers a feature called Lightweight Migration (http://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/CoreDataVersioning/Articles/vmLightweightMigration.html) which can perform migrations automatically as long as the changes stay within its limits.
These features are only available in Xcode and in projects with xcdatamodel files.
In my current implementation, Core Data's entities and attributes are all defined in code. This approach does not let us use the Xcode way of defining Core Data's structure, and therefore does not give us migration handling through Xcode.
Here are the potential approaches I have come up with so far:
1. Use xcdatamodel files to define Core Data's schema (entities, properties, etc.) and use Xcode to do lightweight migrations.
Nitron references xcdatamodel files to define models; I just don't know how yet. (I posted a question to the author of Nitron, https://github.com/mattgreen/nitron/issues/27, to get more insight into how he does it, but no response yet.) There is also a gem called Xcodeproj (https://github.com/CocoaPods/Xcodeproj) which looks like it lets you interact with an Xcode project from Ruby, but I have not made it work, nor spent much time on it yet.
2. Do the migration manually in code.
This is theoretically possible. We need the original managedObjectModel and the destination managedObjectModel, then follow the steps described in "Use a Migration Manager if Models Cannot Be Found Automatically" (http://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/CoreDataVersioning/Articles/vmLightweightMigration.html). A rough sketch of this appears right after this list of approaches.
The problem here is how to get the original managedObjectModel. We would need to store all versions of NSManagedObjectModel, just like Ruby on Rails does in db/migrate/*. Given the current NSManagedObjectModel and the destination's NSManagedObjectModel, migration is possible. One way of storing all versions of NSManagedObjectModel is in a key-value based persistent store. There is a nice gem called NanoStore (https://github.com/siuying/NanoStoreInMotion) which lets you store arrays and dictionaries, so we could store each version in an array and describe the schema in a nested dictionary format, for example. I have not written code to exercise that, but I presume this is one approach.
3. Ditch Core Data and move on with key-value based storage.
NanoStore looks very powerful, and it is a persistent data store backed by SQLite. As the readme shows, it can create models with attributes, find things, and create collections of objects called bags and do operations on them. Although there are no relationships between models, we could potentially use bags to associate objects and/or define our own relationships, like I do here: https://gist.github.com/4042753
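To make approach 2 concrete, here is a rough, untested sketch of a manual migration in RubyMotion. It assumes source_model and destination_model are NSManagedObjectModel instances you have reconstructed for the old and new schema versions, and store_url/migrated_url are hypothetical NSURLs for the old and new store files:

```ruby
# Rough sketch of a manual Core Data migration (approach 2).
error = Pointer.new(:object)

# Let Core Data infer the mapping between the two versions (lightweight case).
mapping = NSMappingModel.inferredMappingModelForSourceModel(source_model,
  destinationModel: destination_model, error: error)
raise "could not infer mapping: #{error[0].localizedDescription}" unless mapping

manager = NSMigrationManager.alloc.initWithSourceModel(source_model,
  destinationModel: destination_model)

ok = manager.migrateStoreFromURL(store_url,
  type: NSSQLiteStoreType,
  options: nil,
  withMappingModel: mapping,
  toDestinationURL: migrated_url,
  destinationType: NSSQLiteStoreType,
  destinationOptions: nil,
  error: error)
raise "migration failed: #{error[0].localizedDescription}" unless ok
```

For non-trivial schema changes you would build the NSMappingModel by hand instead of inferring it, but the surrounding flow stays the same.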
I am leaning towards the key-value store, mainly because of its simplicity while still giving us persistence. How does it handle schema changes? It just adds/removes attributes on the existing data (all values are dropped if you remove an attribute, and you get nil if you add a new attribute to existing model instances). Is this bad? I don't think it is necessarily bad, since we could re-sync objects from the server if necessary (our app is server-based).
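For reference, defining and querying a NanoStore model is roughly this simple. The names are made up and the query API is from memory of the NanoStoreInMotion README, so double-check it against the gem:

```ruby
# Hypothetical NanoStore model: attributes are schema-less, so adding or
# removing one later is just an edit here, with existing records getting nil
# for any newly added attribute.
class Post < NanoStore::Model
  attribute :title
  attribute :body
end

post = Post.create(:title => "Hello", :body => "First post")

# Attribute-based lookup.
Post.find(:title => "Hello")
```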
What do you think?

For creating xcdatamodel files you can use ruby-xcdm, which makes it easy to manage multiple schema versions in an ActiveRecord-like way.
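A versioned schema file with ruby-xcdm looks roughly like this (adapted from memory of the gem's README; treat the file name, entity, and attribute names as assumptions and check the gem's documentation for the exact syntax):

```ruby
# Hypothetical schemas/0001_initial.rb: each file describes one schema version,
# and ruby-xcdm compiles them into versioned xcdatamodel files at build time.
schema "0001 initial" do
  entity "Task" do
    string   :title, optional: false
    datetime :due_date
  end
end
```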
Then, from the same author, there is Core Data Query (CDQ), which abstracts away a lot of the complexity you would otherwise need to handle manually.
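With CDQ on top of that, day-to-day usage looks roughly like this (model and attribute names are made up; see the cdq README for the exact API):

```ruby
# Hypothetical CDQ usage: Task maps onto the Task entity defined in the schema.
class Task < CDQManagedObject
end

Task.create(title: "Write the migration")
cdq.save

# Query in an ActiveRecord-like style.
Task.where(:title).eq("Write the migration").first
```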
Another resource to take a look at is this example of how to set up (lightweight) migrations manually, purely in code, without any assistance from Xcode.
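For reference, the core of a lightweight migration in code is just passing the right options when the persistent store is added to the coordinator. A minimal sketch, assuming model and store_url are built elsewhere:

```ruby
# Minimal sketch: ask Core Data to migrate the store automatically and to
# infer the mapping model between the old and new versions.
options = {
  NSMigratePersistentStoresAutomaticallyOption => true,
  NSInferMappingModelAutomaticallyOption       => true
}

coordinator = NSPersistentStoreCoordinator.alloc.initWithManagedObjectModel(model)
error = Pointer.new(:object)
store = coordinator.addPersistentStoreWithType(NSSQLiteStoreType,
  configuration: nil,
  URL: store_url,   # hypothetical NSURL pointing at the existing .sqlite file
  options: options,
  error: error)
raise "Migration failed: #{error[0].localizedDescription}" if store.nil?
```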
There's also a chapter in Core Data, 2nd Edition by Marcus Zarra that takes you through how to set up your migrations so that they run in order, which reduces the complexity once you're several schema versions down the line. That's in Objective-C, but it's relatively straightforward to port to RubyMotion.

Related

Databricks External Table

I have data stored on ADLS Gen2 and have two workspaces.
Primary ETL Workspace (Workspace A): prepares data from sources and stores it on ADLS (mounted to Databricks with an SP as Storage Blob Data Contributor).
Sandbox Workspace (Workspace B): uses data from ADLS, read only (mounted to Databricks with an SP as Storage Blob Data Reader). Workspace B should only ever be able to READ from ADLS.
ADLS/
|-- curated/
|   |-- google/
|   |   |-- table1
|   |   |-- table2
|   |-- data.com/
|   |   |-- table1
|   |   |-- table2
Writing to the above location has no issues, since we use Workspace A.
I see an issue when layering external databases/tables within Workspace B.
Steps:
The following works:
create database if not exists google_db comment 'Database for Google' location 'dbfs:/mnt/google'
The following fails:
create external table google_db.test USING DELTA location 'dbfs:/mnt/google/table1'
Message:
Error in SQL statement: AbfsRestOperationException: Operation failed: "This request is not authorized to perform this operation using this permission.", 403, PUT, https://XASASASAS.dfs.core.windows.net/google/table1/_delta_log?resource=directory&timeout=90, AuthorizationPermissionMismatch, "This request is not authorized to perform this operation using this permission. RequestId:76142564-801f-000f-29cf-e6a826000000 Time:2021-12-01T16:21:33.3117578Z"
com.databricks.backend.common.rpc.DatabricksExceptions$SQLExecutionException: Operation failed: "This request is not authorized to perform this operation using this permission.", 403, PUT, https://XASASASAS.dfs.core.windows.net/google/table1/_delta_log?resource=directory&timeout=90, AuthorizationPermissionMismatch, "This request is not authorized to perform this operation using this permission. RequestId:76142564-801f-000f-29cf-e6a826000000 Time:2021-12-01T16:21:33.3117578Z"
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:237)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsClient.createPath(AbfsClient.java:311)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.createDirectory(AzureBlobFileSystemStore.java:632)
at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.mkdirs(AzureBlobFileSystem.java:481)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$mkdirs$3(DatabricksFileSystemV2.scala:748)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.s3a.S3AExceptionUtils$.convertAWSExceptionToJavaIOException(DatabricksStreamUtils.scala:66)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$mkdirs$2(DatabricksFileSystemV2.scala:746)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$withUserContextRecorded$2(DatabricksFileSystemV2.scala:941)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:240)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:235)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:232)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionContext(DatabricksFileSystemV2.scala:455)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:279)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:271)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionTags(DatabricksFileSystemV2.scala:455)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withUserContextRecorded(DatabricksFileSystemV2.scala:914)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.$anonfun$mkdirs$1(DatabricksFileSystemV2.scala:745)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$4(UsageLogging.scala:434)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:240)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:235)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:232)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionContext(DatabricksFileSystemV2.scala:455)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:279)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:271)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.withAttributionTags(DatabricksFileSystemV2.scala:455)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:415)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:341)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.recordOperation(DatabricksFileSystemV2.scala:455)
at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.mkdirs(DatabricksFileSystemV2.scala:745)
at com.databricks.backend.daemon.data.client.DatabricksFileSystem.mkdirs(DatabricksFileSystem.scala:198)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
at com.databricks.sql.transaction.tahoe.commands.WriteIntoDelta.write(WriteIntoDelta.scala:110)
at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.$anonfun$run$2(CreateDeltaTableCommand.scala:132)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperation$5(DeltaLogging.scala:115)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$4(UsageLogging.scala:434)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:240)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:235)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:232)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:19)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:279)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:271)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:19)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:415)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:341)
at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:19)
at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:56)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:129)
at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:70)
at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:57)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:85)
at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:402)
at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:381)
at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.recordOperation(CreateDeltaTableCommand.scala:52)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation(DeltaLogging.scala:113)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation$(DeltaLogging.scala:97)
at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.recordDeltaOperation(CreateDeltaTableCommand.scala:52)
at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.run(CreateDeltaTableCommand.scala:108)
at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:162)
at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:413)
at org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:515)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1677)
at org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.writeToTable(WriteToDataSourceV2Exec.scala:500)
at org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.writeToTable$(WriteToDataSourceV2Exec.scala:495)
at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:104)
at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:126)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:41)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:41)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:47)
at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:233)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3802)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:126)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:267)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:104)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:852)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:217)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3800)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:233)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:103)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:852)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:100)
at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:687)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:852)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
at com.databricks.backend.daemon.driver.SQLDriverLocal.$anonfun$executeSql$1(SQLDriverLocal.scala:91)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.immutable.List.map(List.scala:298)
at com.databricks.backend.daemon.driver.SQLDriverLocal.executeSql(SQLDriverLocal.scala:37)
at com.databricks.backend.daemon.driver.SQLDriverLocal.repl(SQLDriverLocal.scala:144)
at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$13(DriverLocal.scala:544)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:240)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:235)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:232)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:53)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:279)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:271)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:53)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:521)
at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:689)
at scala.util.Try$.apply(Try.scala:213)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:681)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:522)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:634)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:427)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:370)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:221)
at java.lang.Thread.run(Thread.java:748)
at com.databricks.backend.daemon.driver.SQLDriverLocal.executeSql(SQLDriverLocal.scala:129)
at com.databricks.backend.daemon.driver.SQLDriverLocal.repl(SQLDriverLocal.scala:144)
at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$13(DriverLocal.scala:544)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:240)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:235)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:232)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:53)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:279)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:271)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:53)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:521)
at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:689)
at scala.util.Try$.apply(Try.scala:213)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:681)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:522)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:634)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:427)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:370)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:221)
at java.lang.Thread.run(Thread.java:748)
However, if the database/tables are created in the following way, it goes through.
Create the database without a location:
create database if not exists google_db comment 'Database for Google'
This works:
create table google_db.test USING DELTA location 'dbfs:/mnt/google/table1'
I am not sure why Delta/Databricks tries to write to the location when an external database is defined.
The issue with the above is that users can still create additional tables under "google_db" since it is an internal database. User requirements state that data should be immutable (it is, since users cannot update/write to existing tables, but they are still able to add tables to the database).
Any help is appreciated.

Can't submit multiple products in a single request to ChangeCatalogEntry web service (Websphere Commerce)

Using Websphere Commerce V7, FP6, FEP5.
I am attempting to do an update to our catalog using the ChangeCatalogEntry web service. I am able to update a single product just fine. My problem is that any additional CatalogEntry nodes are completely ignored. It appears to process only the first CatalogEntry node it finds. I am using SoapUI to submit the requests. Here is a sample that I am attempting to submit. In this example part number p_MAT153 is updated but p_MAT203 and p_MAT185 are not. Is the webservice designed to only update a single product per message?
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<soapenv:Header>
<wsse:Security soapenv:mustUnderstand="1">
<wsse:UsernameToken>
<wsse:Username>
wcs_sonic
</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">
passw0rd
</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<ChangeCatalogEntry xmlns:udt="http://www.openapplications.org/oagis/9/unqualifieddatatypes/1.1"
xmlns:_wcf="http://www.ibm.com/xmlns/prod/commerce/9/foundation"
xmlns="http://www.ibm.com/xmlns/prod/commerce/9/catalog"
xmlns:oa="http://www.openapplications.org/oagis/9"
xmlns:clmIANAMIMEMediaTypes="http://www.openapplications.org/oagis/9/IANAMIMEMediaTypes:2003"
xmlns:oacl="http://www.openapplications.org/oagis/9/codelists"
xmlns:clm54217="http://www.openapplications.org/oagis/9/currencycode/54217:2001"
xmlns:clm5639="http://www.openapplications.org/oagis/9/languagecode/5639:1988"
xmlns:qdt="http://www.openapplications.org/oagis/9/qualifieddatatypes/1.1"
xmlns:clm66411="http://www.openapplications.org/oagis/9/unitcode/66411:2001"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.ibm.com/xmlns/prod/commerce/9/catalog C:/Users/SteveS/MuleStudio/workspace/shapeitdeltaupdates/src/main/resources/WebContent/component-services/xsd/OAGIS/9.0/Overlays/IBM/Commerce/BODs/ChangeCatalogEntry.xsd"
releaseID="9.0"
versionID="7.0.0.0">
<oa:ApplicationArea xsi:type="_wcf:ApplicationAreaType">
<oa:CreationDateTime>2013-04-29T15:38:19.173-04:00</oa:CreationDateTime>
<_wcf:BusinessContext>
<_wcf:ContextData name="storeId">10651</_wcf:ContextData>
<_wcf:ContextData name="catalogId">10051</_wcf:ContextData>
</_wcf:BusinessContext>
</oa:ApplicationArea>
<DataArea>
<oa:Change>
<oa:ActionCriteria>
<oa:ActionExpression actionCode="Change" expressionLanguage="_wcf:XPath">/CatalogEntry[1]/Description[1]</oa:ActionExpression>
</oa:ActionCriteria>
</oa:Change>
<CatalogEntry>
<CatalogEntryIdentifier>
<_wcf:ExternalIdentifier ownerID="7000000000000000601">
<_wcf:PartNumber>p_MAT153</_wcf:PartNumber>
<_wcf:StoreIdentifier>
<_wcf:UniqueID>10551</_wcf:UniqueID>
</_wcf:StoreIdentifier>
</_wcf:ExternalIdentifier>
</CatalogEntryIdentifier>
<Description language="-1">
<Name>Absorbent Pants Roll</Name>
<ShortDescription> universal XSMP133</ShortDescription>
<LongDescription>These are my pants.</LongDescription>
<Attributes name="auxDescription1">I need an aux description</Attributes>
</Description>
</CatalogEntry>
<CatalogEntry>
<CatalogEntryIdentifier>
<_wcf:ExternalIdentifier ownerID="7000000000000000601">
<_wcf:PartNumber>p_MAT203</_wcf:PartNumber>
<_wcf:StoreIdentifier>
<_wcf:UniqueID>10551</_wcf:UniqueID>
</_wcf:StoreIdentifier>
</_wcf:ExternalIdentifier>
</CatalogEntryIdentifier>
<Description language="-1">
<Name>Absorbent Mat Roll</Name>
<ShortDescription> universal XSMP133</ShortDescription>
<LongDescription>These are not my pants. These are your pants.</LongDescription>
<Attributes name="auxDescription1">These pants should be washed regularly.</Attributes>
</Description>
</CatalogEntry>
<CatalogEntry>
<CatalogEntryIdentifier>
<_wcf:ExternalIdentifier ownerID="7000000000000000601">
<_wcf:PartNumber>p_MAT185</_wcf:PartNumber>
<_wcf:StoreIdentifier>
<_wcf:UniqueID>10551</_wcf:UniqueID>
</_wcf:StoreIdentifier>
</_wcf:ExternalIdentifier>
</CatalogEntryIdentifier>
<Description language="-1">
<Name>Pants on a Roll</Name>
<ShortDescription> universal XSMP133</ShortDescription>
<LongDescription>A roll of pants. Genuius. </LongDescription>
<Attributes name="auxDescription1">Still more pants. Need a different aux description.</Attributes>
</Description>
</CatalogEntry>
</DataArea>
</ChangeCatalogEntry>
</soapenv:Body>
</soapenv:Envelope>
The answer turned out to be in the oa:ActionCriteria node. I needed a matching node for every instance of CatalogEntry.
<oa:ActionCriteria>
<oa:ActionExpression actionCode="Change" expressionLanguage="_wcf:XPath">/CatalogEntry[1]/Description[1]</oa:ActionExpression>
</oa:ActionCriteria>
<oa:ActionCriteria>
<oa:ActionExpression actionCode="Change" expressionLanguage="_wcf:XPath">/CatalogEntry[2]/Description[1]</oa:ActionExpression>
</oa:ActionCriteria>
<oa:ActionCriteria>
<oa:ActionExpression actionCode="Change" expressionLanguage="_wcf:XPath">/CatalogEntry[3]/Description[1]</oa:ActionExpression>
</oa:ActionCriteria>
Just to add to that: you can run several actions on the same data object, for instance to create attributes, remove attributes, set SEO data, etc. However, this can confuse the graph object if you don't sort the actions in the order Add, Change, Delete.

Coherence cache client accessing different caches on different clusters

We have two caches on different clusters. I want to access both of them via my Extend client.
I can access the first cache fine (whichever one I call first), but access to the second one then fails.
For example:
NamedCache cacheOne= CacheFactory.getCache("Cache-One");
NamedCache cacheTwo= CacheFactory.getCache("Cache-Two");
The second call fails with:
Exception in thread "main" java.lang.IllegalArgumentException: No scheme for cache: "Cache-Two".
How can I access both caches via the client? The client config is below:
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>Cache-One</cache-name>
<scheme-name>Scheme-One</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>Cache-Two</cache-name>
<scheme-name>Scheme-Two</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<remote-cache-scheme>
<scheme-name>Scheme-One</scheme-name>
<service-name>TCPProxyCacheService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>address of proxy one</address>
<port>2077</port>
</socket-address>
</remote-addresses>
<connect-timeout>300s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>300s</request-timeout>
</outgoing-message-handler>
</initiator-config>
</remote-cache-scheme>
<remote-cache-scheme>
<scheme-name>extend-castle</scheme-name>
<service-name>TCPProxyCacheService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>address of proxy two</address>
<port>20088</port>
</socket-address>
</remote-addresses>
<connect-timeout>300s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>300s</request-timeout>
</outgoing-message-handler>
</initiator-config>
</remote-cache-scheme>
</caching-schemes>
</cache-config>
You have defined the extend-castle scheme where the Scheme-Two scheme should have been defined. Either change the scheme name in the second remote-cache-scheme to Scheme-Two, or change the scheme name in the second cache-mapping to extend-castle.

SAML Response for Google apps

I am trying to get Google Apps SAML working, but I am getting the following error:
Google Apps - This account cannot be accessed because we could not parse the login request.
Here is my response verbatim:
<?xml version="1.0"?><samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="pfx9c11a3a9-13dc-ff78-7d18-12f795fab19d" Version="2.0" IssueInstant="2011-08-11T05:24:35Z" Destination="https://www.google.com/a/sparxlabs.com/acs" InResponseTo="idnffilcgaeeonionahcpciplkhhhkmlfedkpipl"> <saml:Issuer>http://saml.sparxlabs.com/</saml:Issuer> <ds:Signature xmlns="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI=""> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>Y2E3ZWIyZGEwODFjYjdhZmJjMTZlYmI1NjA4N2IxYzYwMTM5YmEyMA==</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>Eno0HWCgNgxeUhCP0khdEGuLDP3etgzAoKBiK84ENs1ealpgBEOhFTDQQC8qODbAZVxTFYjQLTcW5A7OJ2n02S5tLmg57TeL4+VWyzhwaV9KQ9e1ZU7ZMhPV5aNL4Qm8EIvDyRbPx7mWW70wK1fO+IlPsmxZraL982neOJ8vucc=</ds:SignatureValue> <ds:KeyInfo> <ds:X509Data> <ds:X509Certificate>LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNzRENDQWhtZ0F3SUJBZ0lKQUtYZ0tjTy90RktuTUEwR0NTcUdTSWIzRFFFQkJRVUFNRVV4Q3pBSkJnTlYKQkFZVEFrRlZNUk13RVFZRFZRUUlFd3BUYjIxbExWTjBZWFJsTVNFd0h3WURWUVFLRXhoSmJuUmxjbTVsZENCWAphV1JuYVhSeklGQjBlU0JNZEdRd0hoY05NVEV3T0RFeE1qRXhOelF5V2hjTk1URXdPVEV3TWpFeE56UXlXakJGCk1Rc3dDUVlEVlFRR0V3SkJWVEVUTUJFR0ExVUVDQk1LVTI5dFpTMVRkR0YwWlRFaE1COEdBMVVFQ2hNWVNXNTAKWlhKdVpYUWdWMmxrWjJsMGN5QlFkSGtnVEhSa01JR2ZNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0R05BRENCaVFLQgpnUUMwVTVlVnkxWXJQTXdCNTJvUmk2OFY3cmFWUzR2V1hEd2VQL20wTUwxRkVDL3BUNmxVU01iRUJuWnVranlRClhBOFBrbTkvWFhPcERuU01XN0ZRNXczOUZSeFExY2ZWVXI3dlV6RXNrbm5Sb1p4NXBEck8ybTVVQ25VUFJtNGYKTkljVDRzdERTODAxVzRET24vOEFTUUhKQ1dnTDYwUC9RUGhvU3pmMXVqY1E1UUlEQVFBQm80R25NSUdrTUIwRwpBMVVkRGdRV0JCVDVYbjA1VTdrU3NQbEQyd05yOGlLUTdhQXpYVEIxQmdOVkhTTUViakJzZ0JUNVhuMDVVN2tTCnNQbEQyd05yOGlLUTdhQXpYYUZKcEVjd1JURUxNQWtHQTFVRUJoTUNRVlV4RXpBUkJnTlZCQWdUQ2xOdmJXVXQKVTNSaGRHVXhJVEFmQmdOVkJBb1RHRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MFpJSUpBS1hnS2NPLwp0RktuTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVGQlFBRGdZRUFzZkYwS0h2T0h6emFoRWd4Cit1NmJJUTRldkxYaXB4VnVYNlZ2RnYxd1BSTmtIRWZEWk9HdmJZc1p1ak5VUVFGdXFzRGR2M3lHelJLQXozRVAKd1RoY29pdEN1cWQrT2dlNGdTNkhpaHBCSzU3cmFaMlpad0NxWXpyQldMMjhaZnFhQW5zNy9KNkY3TEZIeEMvcQpnK25HSldINlVycGpZTGJqajJjMFN0VGVIVTg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K</ds:X509Certificate> </ds:X509Data> </ds:KeyInfo></ds:Signature><samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status><saml:Assertion xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" ID="pfx9c11a3a9-13dc-ff78-7d18-12f795fab19d" Version="2.0" IssueInstant="2011-08-11T05:24:35Z"> <saml:Issuer>http://saml.sparxlabs.com</saml:Issuer> <ds:Signature xmlns="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI=""> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> 
<ds:DigestValue>ZWRhZGEzYjE4NmZjNWU2ZWE0NDI1NjBkZTFkYzhmN2YzY2QwZGZiMA==</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>QueL4xlp3NOUJou7mIKERgtPRSJboeht9gFfDcOuhmYvh6uyDsk6UR2GLLb0smkuzuy7cgz0MwzjZ4QdhCyIozOyl1TqUqOvISfNV/w0Wx02Sphi0AQJs/R9S9nv+xbVX5dIgjXbf8N/DYgjSMeACSPzpyoeXpHfedY43HsoMZo=</ds:SignatureValue> <ds:KeyInfo> <ds:X509Data> <ds:X509Certificate>LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNzRENDQWhtZ0F3SUJBZ0lKQUtYZ0tjTy90RktuTUEwR0NTcUdTSWIzRFFFQkJRVUFNRVV4Q3pBSkJnTlYKQkFZVEFrRlZNUk13RVFZRFZRUUlFd3BUYjIxbExWTjBZWFJsTVNFd0h3WURWUVFLRXhoSmJuUmxjbTVsZENCWAphV1JuYVhSeklGQjBlU0JNZEdRd0hoY05NVEV3T0RFeE1qRXhOelF5V2hjTk1URXdPVEV3TWpFeE56UXlXakJGCk1Rc3dDUVlEVlFRR0V3SkJWVEVUTUJFR0ExVUVDQk1LVTI5dFpTMVRkR0YwWlRFaE1COEdBMVVFQ2hNWVNXNTAKWlhKdVpYUWdWMmxrWjJsMGN5QlFkSGtnVEhSa01JR2ZNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0R05BRENCaVFLQgpnUUMwVTVlVnkxWXJQTXdCNTJvUmk2OFY3cmFWUzR2V1hEd2VQL20wTUwxRkVDL3BUNmxVU01iRUJuWnVranlRClhBOFBrbTkvWFhPcERuU01XN0ZRNXczOUZSeFExY2ZWVXI3dlV6RXNrbm5Sb1p4NXBEck8ybTVVQ25VUFJtNGYKTkljVDRzdERTODAxVzRET24vOEFTUUhKQ1dnTDYwUC9RUGhvU3pmMXVqY1E1UUlEQVFBQm80R25NSUdrTUIwRwpBMVVkRGdRV0JCVDVYbjA1VTdrU3NQbEQyd05yOGlLUTdhQXpYVEIxQmdOVkhTTUViakJzZ0JUNVhuMDVVN2tTCnNQbEQyd05yOGlLUTdhQXpYYUZKcEVjd1JURUxNQWtHQTFVRUJoTUNRVlV4RXpBUkJnTlZCQWdUQ2xOdmJXVXQKVTNSaGRHVXhJVEFmQmdOVkJBb1RHRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MFpJSUpBS1hnS2NPLwp0RktuTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVGQlFBRGdZRUFzZkYwS0h2T0h6emFoRWd4Cit1NmJJUTRldkxYaXB4VnVYNlZ2RnYxd1BSTmtIRWZEWk9HdmJZc1p1ak5VUVFGdXFzRGR2M3lHelJLQXozRVAKd1RoY29pdEN1cWQrT2dlNGdTNkhpaHBCSzU3cmFaMlpad0NxWXpyQldMMjhaZnFhQW5zNy9KNkY3TEZIeEMvcQpnK25HSldINlVycGpZTGJqajJjMFN0VGVIVTg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K</ds:X509Certificate> </ds:X509Data> </ds:KeyInfo></ds:Signature><saml:Subject> <saml:NameID SPNameQualifier="google.com" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:email">admin</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData NotOnOrAfter="2011-08-11T06:24:35Z" Recipient="https://www.google.com/a/sparxlabs.com/acs" InResponseTo="idnffilcgaeeonionahcpciplkhhhkmlfedkpipl"/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2011-08-11T05:24:35Z" NotOnOrAfter="2011-08-11T06:24:35Z"> <saml:AudienceRestriction> <saml:Audience>google.com</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2011-08-11T05:24:35Z" SessionNotOnOrAfter="2011-08-11T06:24:35Z" SessionIndex="_e409f914997c09cfb1a4dbe461a660209eba5d94ec"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion></samlp:Response>
Some more info that is important:
1. The X509Certificate, which I am generating as:
cert = OpenSSL::PKey::RSA.new(File.read("dsacert.pem"))
[cert.to_s].pack("m").gsub(/\n/, "") #Base64 encode
2. The digest value:
canonical = canonical_form(element)
sha1 = Digest::SHA1.hexdigest(canonical)
[sha1].pack("m").gsub(/\n/, "") #Base64 encode
3. Finally, the signature (using the digest_value I calculated above):
pkey = OpenSSL::PKey::RSA.new(File.read("rsaprivkey.pem"))
sig = pkey.sign(OpenSSL::Digest::SHA1.new, digest_value)
[sig].pack("m").gsub(/\n/, "") #Base64 encode
If I missed anything, let me know in the comments and I'll update.
Just taking a quick glance, I do not believe Google supports signatures on both the Response and the Assertion. I would simplify the setup by removing the signature from the Assertion and leaving the Response signed as a first step. You may also want to double-check the Audience value and see whether "google.com" or "www.google.com/a/sparxlabs.com" is the expected value.
I see some points that may be a problem:
1. The two Reference URIs in your signatures are empty. This is ambiguous, as implicitly it means that both signatures cover the complete XML document, which is wrong. The SAML specification says that you should explicitly point to the ID of the signed element.
2. The code you posted suggests that this is a custom-made response. Generating an enveloped XML digital signature is not that simple, as it needs to be embedded at the exact moment you sign the document. You only apply the canonicalization; you should also apply the two transforms specified in the signature.
3. As stated there, the Audience element should point to the EntityID of your ACS, as Ian suggested. It's also possible that "google.com" is accepted, but this is a violation of the SAML 2.0 specs.
4. Your NameID attribute seems strange; it should be an email address. The previous link gives an example of a valid NameID element.
If you want to generate a custom-made response, you should start from an unsigned template and then apply the XML DSIG with a suitable library, like XML::Sig. It should be sufficient to sign either the Assertion or the Response.
Hope this helps.
Everything sk_ pointed out is right, but also:
1. NEVER include the XML declaration in the samlResponse message.
2. Your digest value is wrong: it should be the Base64 of the BINARY digest, not the hex form.
3. I don't know Ruby, but the signature is computed the same way as the digest, b64(BINARY-RSA-SHA1(elem)).
4. It's the canonical form of the whole element you have to sign, not just the digest.
5. Don't forget to Base64-encode the whole samlResponse before sending it over a POST binding.
6. And don't touch a bit of the RelayState param; just post it as-is.
You can also verify the xmldsig signature yourself with the (cool and life-saving) xmlsec1 tool. A sketch of the corrected digest and signature computation in Ruby follows below.
And never forget: XML sucks, and c14n/xmldsig is MORONIC!
Good luck!
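To make points 2-4 concrete in the OP's Ruby, here is a minimal sketch of the corrected computations, assuming canonical_form and signed_info_canonical are helpers (both hypothetical names here) that already apply the exclusive c14n and enveloped-signature transforms:

```ruby
require "openssl"
require "digest"

# DigestValue: Base64 of the BINARY SHA-1 digest of the canonicalized element,
# not of the hex string returned by hexdigest.
canonical    = canonical_form(element)                    # hypothetical helper
digest_value = [Digest::SHA1.digest(canonical)].pack("m").gsub(/\n/, "")

# SignatureValue: RSA-SHA1 over the canonicalized SignedInfo element (which
# embeds the DigestValue), not over the digest value itself.
pkey      = OpenSSL::PKey::RSA.new(File.read("rsaprivkey.pem"))
signature = [pkey.sign(OpenSSL::Digest::SHA1.new, signed_info_canonical)].pack("m").gsub(/\n/, "")
```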

JMeter - how to log the full request for a failed response?

I'm using the JMeter command line to stress test our website API. Now, here's a sample result I'm getting back:
Creating summariser <summary>
Created the tree successfully using street_advisor.jmx
Starting the test # Sat Oct 03 15:22:59 PDT 2009 (1254608579848)
Waiting for possible shutdown message on port 4445
summary + 1 in 0.0s = 37.0/s Avg: 27 Min: 27 Max: 27 Err: 1 (100.00%)
<snip a few more lines>
<then I break it>
So I'm getting an error.
Currently, all errors are going to a file. When I check that file, it says the error is a 404. Er... OK. Is there any way I can see exactly what request JMeter tried?
Here's a snippet of my config file:
<ResultCollector guiclass="SimpleDataWriter" testclass="ResultCollector" testname="Error Writer" enabled="true">
<boolProp name="ResultCollector.error_logging">true</boolProp>
<objProp>
<name>saveConfig</name>
<value class="SampleSaveConfiguration">
<time>true</time>
<latency>true</latency>
<timestamp>false</timestamp>
<success>true</success>
<label>true</label>
<code>true</code>
<message>true</message>
<threadName>false</threadName>
<dataType>true</dataType>
<encoding>false</encoding>
<assertions>true</assertions>
<subresults>true</subresults>
<responseData>false</responseData>
<samplerData>false</samplerData>
<xml>true</xml>
<fieldNames>false</fieldNames>
<responseHeaders>true</responseHeaders>
<requestHeaders>true</requestHeaders>
<responseDataOnError>false</responseDataOnError>
<saveAssertionResultsFailureMessage>false</saveAssertionResultsFailureMessage>
<assertionsResultsToSave>0</assertionsResultsToSave>
<bytes>true</bytes>
</value>
</objProp>
<stringProp name="filename">./error.jtl</stringProp>
</ResultCollector>
Now, before someone says 'check the web server log files': I know I can do this, and yep, I've found the 404 there, but I'm hoping to see if it's possible without accessing them, especially if they are on another server and/or I can't get access to them.
Please help!
The View Results Tree component shows a tree of all sample responses, allowing you to view both the request and response for any sample.
When load testing (always in non-GUI mode), fill in the "Filename" field and select to only save responses in error.
Then click Configure and select all fields except the CSV ones.
You can also save the entire response to a file using the "Save Responses to a file" listener.
I found this thread while searching for a solution to log the response only when a sampler fails, so the accepted solution is not good for me. I have occasional sample failures at very high load, involving hundreds of thousands of samples, so a tree listener is completely impractical for me (it would reach several gigabytes in size). Here is what I came up with (which should work for the OP's scenario as well):
Add a JSR223 Assertion (it should come after all the other assertions) and put the code below in it:
// Only log when the DEBUG JMeter variable is set to "true".
if (Boolean.valueOf(vars.get("DEBUG"))) {
    // Walk all assertion results attached to this sample.
    for (a : SampleResult.getAssertionResults()) {
        if (a.isError() || a.isFailure()) {
            // Log the thread name, sampler label and the full response body.
            log.error(Thread.currentThread().getName() + ": " + SampleLabel + ": Assertion failed for response: " + new String((byte[]) ResponseData));
        }
    }
}
This will cause the entire response to be logged to the JMeter log file, which is fine in my case, as I know the responses are really small; for large responses, more intelligent processing could be done.
There is a "Save Responses to a file" listener, which can save to a file only when an error occurs.
This is how I log the full request (request URL + request body) for failed requests.
Add a Listener inside the Thread Group and put this code in it:
try {
    var message = "";

    // Request URL of the sampler that just ran.
    var currentUrl = sampler.getUrl();
    message += ". URL = " + currentUrl;

    // The first argument holds the request body for a typical HTTP sampler.
    var requestBody = sampler.getArguments().getArgument(0).getValue();
    message += " --data " + requestBody;

    // Only log the request when the sample failed.
    if (!sampleResult.isSuccessful()) {
        log.error(message);
    }
} catch (err) {
    // Do nothing: this could be a Debug Sampler with no URL/arguments,
    // so there is no need to log the error.
}
For every sampler inside the Thread Group, the Listener will execute this code after the sampler completes.
