Copy Document/Page excluding field/column or setting new value - events

I'm using version 8 of Kentico, and I have a custom document/page type with a unique numeric identity field. Unfortunately this data comes from an existing source, and because I cannot set the primary key ID of the page's coupled data when using the API, I was forced to add this separate field.
I ensure the field is new and unique during the DocumentEvents.Insert.Before event using node.SetValue("ItemIdentifier", newIdentifier); if the node's class name matches, etc. So that workflow is handled as well, I implemented the same method for WorkflowEvents.SaveVersion.Before.
This works great when creating a new item; however, if we attempt to copy an existing node, the source identifier remains unchanged. I was hoping I could exclude the field from being copied, but I have yet to find an example of that.
So I went ahead and implemented a solution to ensure a new identifier is created when a node is being copied, by handling DocumentEvents.Copy.Before and DocumentEvents.Copy.After.
Unfortunately, in my case the e.Node from these event args was useless; I could not for the life of me get the field modified. When I opened ILSpy I realized why: the node copy method always grabs a fresh copy of the node from the database, which renders DocumentEvents.Copy.Before useless if you want to modify fields before a node is copied.
So instead I pass the identifier along in a RequestStockHelper entry, which the Insert event, further down the cycle, handles to generate a new identifier for the cloned node.
Unfortunately, unbeknownst to me, if we copy a published node, the value in the database is correct, but its NodeXML value is not.
This sounds like a Kentico bug to me: either it's retaining the source node's NodeXML/version, or for some reason node.SetValue("ItemIdentifier", newIdentifier); is not working properly in WorkflowEvents.SaveVersion.Before, since it's a published, workflowed node.
Has anyone come across a similar issue? Is there any other way I can configure a field to be a unique numeric identity field that is not the primary key and is automatically incremented on insert? Or exclude a field from the copy procedure?

As a possible solution, could you create a new document in DocumentEvents.Copy.Before, copy the values over from the copied document, and then cancel the copy event itself?
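Something like the following could express that idea. This is only a rough sketch from memory of the Kentico 8 API; the event-args members (e.TreeProvider, e.TargetParentNodeId, e.Cancel()), the page type class name, and the column filtering are all assumptions that would need verifying:

private void Copy_Before(object sender, DocumentEventArgs e)
{
    // Hypothetical page type name; replace with your own class.
    if (e.Node == null || e.Node.ClassName != "MyCustom.ItemPage")
    {
        return;
    }

    // Build a fresh node of the same type and copy every column except
    // the identifier (system columns would also need to be skipped).
    TreeNode clone = TreeNode.New(e.Node.ClassName, e.TreeProvider);
    foreach (string columnName in e.Node.ColumnNames)
    {
        if (columnName != "ItemIdentifier")
        {
            clone.SetValue(columnName, e.Node.GetValue(columnName));
        }
    }

    TreeNode parent = e.TreeProvider.SelectSingleNode(e.TargetParentNodeId);
    DocumentHelper.InsertDocument(clone, parent, e.TreeProvider);

    // Suppress the built-in copy so only the manual clone is created.
    e.Cancel();
}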

OK, it turns out this is not a Kentico issue but rather the way versions are saved.
If you want to compute a unique value in DocumentEvents.Insert.Before, you need to pass it along to WorkflowEvents.SaveVersion.Before, because the node sent to the latter is the same as the original from the former; i.e., whatever changes you make to the node in Insert are not carried over to SaveVersion, so you need to handle this manually.
So here's the pseudo code that handles the copy scenario and the insert of a new item of the compiled type CineDigitalAV:
protected override void OnInit()
{
    base.OnInit();
    DocumentEvents.Insert.Before += Insert_Before;
    DocumentEvents.Copy.Before += Copy_Before;
    WorkflowEvents.SaveVersion.Before += SaveVersion_Before;
}

private void Copy_Before(object sender, DocumentEventArgs e)
{
    if (e.Node != null)
    {
        SetCopyCineDigitalIdentifier(e.Node);
    }
}

private void SaveVersion_Before(object sender, WorkflowEventArgs e)
{
    if (e.Document != null)
    {
        EnsureCineDigitalIdentifier(e.Document);
    }
}

private void Insert_Before(object sender, DocumentEventArgs e)
{
    if (e.Node != null)
    {
        EnsureCineDigitalIdentifier(e.Node);
    }
}

private void SetCopyCineDigitalIdentifier(TreeNode node)
{
    if (node.ClassName != CineDigitalAV.CLASS_NAME)
    {
        return;
    }
    int identifier = node.GetValue<int>("AVCreation_Identifier", 0);
    // Flag the next insert to create a new identifier.
    if (identifier > 0)
    {
        RequestStockHelper.Add("Copy-Identifier-" + identifier, true);
    }
}

private void EnsureCineDigitalIdentifier(TreeNode node)
{
    // Guard against other page types; without this, every other document
    // type would fall into the identifier == 0 branch below.
    if (node.ClassName != CineDigitalAV.CLASS_NAME)
    {
        return;
    }
    int identifier = node.GetValue<int>("AVCreation_Identifier", 0);
    if (identifier == 0 || RequestStockHelper.Contains("Copy-Identifier-" + identifier))
    {
        // Generate a new identifier for new items or those being copied.
        RequestStockHelper.Remove("Copy-Identifier-" + identifier);
        int newIdentifier = GetNewCineDigitalIdentifierAV(node.NodeSiteName);
        node.SetValue("AVCreation_Identifier", newIdentifier);
        // Store the new identifier so that SaveVersion includes it.
        RequestStockHelper.Add("Version-Identifier-" + identifier, newIdentifier);
    }
    else if (RequestStockHelper.Contains("Version-Identifier-" + identifier))
    {
        // Handle SaveVersion with the value from the insert.
        int newIdentifier = ValidationHelper.GetInteger(RequestStockHelper.GetItem("Version-Identifier-" + identifier), 0);
        RequestStockHelper.Remove("Version-Identifier-" + identifier);
        node.SetValue("AVCreation_Identifier", newIdentifier);
    }
}

private int GetNewCineDigitalIdentifierAV(string siteName)
{
    return (DocumentHelper.GetDocuments<CineDigitalAV>()
        .OnSite(siteName)
        .Published(false)
        .Columns("AVCreation_Identifier")
        .OrderByDescending("AVCreation_Identifier")
        .FirstObject?
        .AVCreation_Identifier ?? 0) + 1;
}

Related

Bing Maps API - mapArea: this parameter value is out of range

I'm using the Bing Maps API to generate images of certain locations using the BotFramework-Location NuGet package (the code for which can be found here).
Sometimes it works, but sometimes the images are not loaded due to an error in the REST call (which is made by the BotFramework, not by me).
This is my code:
IGeoSpatialService geoService = new AzureMapsSpatialService(this.azureApiKey);
List<Location> concreteLocations = new List<Location>();
foreach (string location in locations)
{
    LocationSet locationSet = await geoService.GetLocationsByQueryAsync(location);
    concreteLocations.AddRange(locationSet?.Locations);
}

// Filter out duplicates
var seenKeys = new HashSet<string>();
var uniqueLocations = new List<Location>();
foreach (Location location in concreteLocations)
{
    if (seenKeys.Add(location.Address.AddressLine))
    {
        uniqueLocations.Add(location);
    }
}
concreteLocations = new List<Location>(uniqueLocations.Take(5));

if (concreteLocations.Count == 0)
{
    await context.PostAsync("No dealers found.");
    context.Done(true);
}
else
{
    var locationsCardReply = context.MakeMessage();
    locationsCardReply.Attachments = new LocationCardBuilder(this.bingApiKey, new LocationResourceManager())
        .CreateHeroCards(concreteLocations)
        .Select(card => card.ToAttachment())
        .ToList();
    locationsCardReply.AttachmentLayout = AttachmentLayoutTypes.Carousel;
    await context.PostAsync(locationsCardReply);
}
The reason not all images are shown is that the REST call to the Bing Maps API returns this:
mapArea: This parameter value is out of range.
Here is one of the image URIs that fails (I removed my key):
https://dev.virtualearth.net/REST/V1/Imagery/Map/Road?form=BTCTRL&mapArea=49.5737,5.53792,49.57348,5.53744&mapSize=500,280&pp=49.5737,5.53744;1;1&dpi=1&logo=always&key=NOT_REAL_12758FDLKJLDKJO8769KLJDLKJF
Does anyone know what I've been doing wrong?
I think I understood why you got that error. I tried to get this map and got the same result as yours; as you said:
mapArea: This parameter value is out of range.
If you look at the sample URL you provided, the mapArea is 49.5737,5.53792,49.57348,5.53744.
So I just inverted the coordinates of the two points defining this area, putting the one with the smaller latitude value first (mapArea=49.57348,5.53744,49.5737,5.53792), and I got a reply.
EDIT:
As you commented, this call is made inside the BotBuilder-Location code, not in yours. I had a look at it; this is the method that generates the map, called inside the LocationCardBuilder class that you are instantiating:
public string GetLocationMapImageUrl(Location location, int? index = null)
{
    if (location == null)
    {
        throw new ArgumentNullException(nameof(location));
    }

    var point = location.Point;
    if (point == null)
    {
        throw new ArgumentNullException(nameof(point));
    }

    if (location.BoundaryBox != null && location.BoundaryBox.Count >= 4)
    {
        return string.Format(
            CultureInfo.InvariantCulture,
            ImageUrlByBBox,
            location.BoundaryBox[0],
            location.BoundaryBox[1],
            location.BoundaryBox[2],
            location.BoundaryBox[3],
            point.Coordinates[0],
            point.Coordinates[1],
            index,
            this.apiKey);
    }
    else
    {
        return string.Format(
            CultureInfo.InvariantCulture,
            ImageUrlByPoint,
            point.Coordinates[0],
            point.Coordinates[1],
            index,
            apiKey);
    }
}
As you can see, there is no restriction on the order of the BoundaryBox values. But if you look at the documentation about location data here:
BoundingBox: A geographic area that contains the location. A bounding box contains SouthLatitude, WestLongitude, NorthLatitude, and EastLongitude values in units of degrees.
As you may know, SouthLatitude is smaller than NorthLatitude, as latitudes are expressed with positive and negative values depending on the side of the equator (43N is 43, 43S is -43). So the problem seems to be here.
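If you wanted to guard against this on the calling side, a minimal sketch of the idea could look like the following (the helper name is hypothetical, and BotBuilder-Location itself would have to be patched to call it before formatting the URL):

// Hypothetical helper: reorder a bounding box into the
// (SouthLatitude, WestLongitude, NorthLatitude, EastLongitude) order
// that the Bing Maps mapArea parameter expects.
// Note: min/max on longitude ignores boxes crossing the antimeridian.
private static double[] NormalizeBoundingBox(IList<double> box)
{
    double south = Math.Min(box[0], box[2]);
    double north = Math.Max(box[0], box[2]);
    double west = Math.Min(box[1], box[3]);
    double east = Math.Max(box[1], box[3]);
    return new[] { south, west, north, east };
}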
I made a quick test to see if I could get the error from the call you are making before that (to GetLocationsByQueryAsync), but I wasn't able to reproduce the problem. Can you share the query that led to it?

State Manager not persisting/retrieving data

NiFi 1.1.1
I am trying to persist a byte[] using the State Manager.
private byte[] lsnUsedDuringLastLoad;

@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    ...
    final StateManager stateManager = context.getStateManager();
    try {
        StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        final Map<String, String> newStateMapProperties = new HashMap<>();
        newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN, new String(lsnUsedDuringLastLoad));
        logger.debug("Persisting stateMap : " + newStateMapProperties);
        stateManager.replace(stateMap, newStateMapProperties, Scope.CLUSTER);
    } catch (IOException ioException) {
        logger.error("Error while persisting the state to NiFi", ioException);
        throw new ProcessException("The state(LSN) couldn't be persisted", ioException);
    }
    ...
}
I don't get any exception or even an error log entry, and the processor continues to run.
The following load code always returns a null value (Retrieved the statemap : {}) for the persisted field:
try {
    stateMap = stateManager.getState(Scope.CLUSTER);
    stateMapProperties = new HashMap<>(stateMap.toMap());
    logger.debug("Retrieved the statemap : " + stateMapProperties);
    lastMaxLSN = (stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN) == null
            || stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).isEmpty())
            ? null
            : stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).getBytes();
    logger.debug("Attempted to load the previous lsn from NiFi state : " + lastMaxLSN);
} catch (IOException ioe) {
    logger.error("Couldn't load the state map", ioe);
    throw new ProcessException(ioe);
}
I am wondering whether ZooKeeper is at fault or whether I have missed something while using the StateMap!
The docs for replace say:
"Updates the value of the component's state to the new value if and only if the value currently is the same as the given oldValue."
https://github.com/apache/nifi/blob/master/nifi-api/src/main/java/org/apache/nifi/components/state/StateManager.java#L79-L92
I would suggest something like this:
if (stateMap.getVersion() == -1) {
    stateManager.setState(stateMapProperties, Scope.CLUSTER);
} else {
    stateManager.replace(stateMap, stateMapProperties, Scope.CLUSTER);
}
The first time through, when you retrieve the state, the version should be -1 since nothing has ever been stored before; in that case you use setState. All subsequent times you can use replace.
The idea behind replace() and its return value is to be able to react to conflicts. Another task on the same node or on another node (in a cluster) might have changed the state in the meantime. When replace() returns false, you can react to the conflict, sort out what can be sorted out automatically, and inform the user when it cannot be.
This is the code I use:
/**
 * Set or replace a key-value pair in the state, cluster wide. In case of a conflict, it will retry to set the state
 * when the given key does not yet exist in the map. If the key exists and the value is equal to the given value, it
 * does nothing. Otherwise it fails and returns false.
 *
 * @param stateManager that controls state cluster wide.
 * @param key of key-value pair to be put in state map.
 * @param value of key-value pair to be put in state map.
 * @return true, if the state map contains the key with a value equal to the given value, probably set by this
 *         function. False, if a conflict occurred and the key-value pair is different.
 * @throws IOException if the underlying state mechanism throws an exception.
 */
private boolean setState(StateManager stateManager, String key, String value) throws IOException {
    boolean somebodyElseUpdatedWithoutConflict;
    do {
        // Reset on every pass so a successful replace() exits the loop.
        somebodyElseUpdatedWithoutConflict = false;
        StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        // While the next two lines run, another thread might change the state.
        Map<String, String> map = new HashMap<String, String>(stateMap.toMap()); // Make mutable
        String oldValue = map.put(key, value);
        if (!stateManager.replace(stateMap, map, Scope.CLUSTER)) {
            // Conflict happened. Sort out what action to take.
            if (oldValue == null)
                somebodyElseUpdatedWithoutConflict = true; // A different key was changed. Retry.
            else if (oldValue.equals(value))
                break; // Lazy case. Value already set.
            else
                return false; // Unsolvable conflict.
        }
    } while (somebodyElseUpdatedWithoutConflict);
    return true;
}
You can replace the part after // Conflict happened... with whatever conflict resolution you need.

My CellTable does not sort

I read a lot about sorting a CellTable. I also went through the ColumnSorting with AsyncDataProvider example. But my CellTable does not sort.
Here is my code:
public class EventTable extends CellTable<Event> {
    public EventTable() {
        EventsDataProvider dataProvider = new EventsDataProvider(this);
        dataProvider.addDataDisplay(this);

        SimplePager.Resources pagerResources = GWT.create(SimplePager.Resources.class);
        SimplePager pager = new SimplePager(TextLocation.CENTER, pagerResources, false, 5, true);
        pager.setDisplay(this);

        [...]

        TextColumn<Event> nameCol = new TextColumn<Event>() {
            @Override
            public String getValue(Event event) {
                return event.getName();
            }
        };
        nameCol.setSortable(true);

        AsyncHandler columnSortHandler = new AsyncHandler(this);
        addColumnSortHandler(columnSortHandler);
        addColumn(nameCol, "Name");
        getColumnSortList().push(endCol);
    }
}

public class EventsDataProvider extends AsyncDataProvider<Event> {
    private final EventTable eventTable;

    public EventsDataProvider(EventTable eventTable) {
        this.eventTable = eventTable;
    }

    @Override
    protected void onRangeChanged(HasData<Event> display) {
        int start = display.getVisibleRange().getStart();
        int length = display.getVisibleRange().getLength();
        // Check for invalid values.
        if (start < 0 || length < 0) return;
        // Check the cache before making an RPC call.
        if (pageCached(start, length)) return;
        // Get events asynchronously.
        getEvents(start, length);
    }
}
I do not know if all the methods are needed here; if so, I will add them. But in short:
pageCached calls a method in my PageCache class, which holds a map and a list. Before making an RPC call, the cache is checked to see whether the events were already fetched and can be displayed.
getEvents just makes an RPC call via an AsyncCallback, which updates the row data via updateRowData() on success.
My table is displayed fast with currently around 500 entries (could be more, depending on the customer), with no missing data, and the paging works fine.
I just cannot get the sorting to work. As far as I know, the AsyncHandler will fire a setVisibleRangeAndClearData() and then an onRangeChanged(). onRangeChanged() is never fired. As for setVisibleRangeAndClearData(), I do not know. But the sort indicator (the arrow next to the column name) does change on every click.
I do not want to let the server sort the list; I have my own Comparators. It is enough if the currently visible page of the table is sorted. I do not want to sort the whole list.
Edit:
I changed the following code in the EventTable constructor:
public EventTable() {
    [...]
    addColumnSortHandler(new ColumnSortEvent.AsyncHandler(this) {
        @Override
        public void onColumnSort(ColumnSortEvent event) {
            super.onColumnSort(event);
            MyTextColumn<Event> myTextColumn;
            if (event.getColumn() instanceof MyTextColumn) {
                // Compiler warning here: unchecked cast
                myTextColumn = (MyTextColumn<Event>) event.getColumn();
                MyLogger.log(this.getClass().getName(), "asc " + event.isSortAscending() + " " + myTextColumn.getName(), Level.INFO);
            }
            List<Event> list = dataProvider.getCurrentEventList();
            if (list == null) return;
            if (event.isSortAscending()) {
                Collections.sort(list, EventsComparator.getComparator(EventsComparator.NAME_SORT));
            } else {
                Collections.sort(list, EventsComparator.descending(EventsComparator.getComparator(EventsComparator.NAME_SORT)));
            }
        }
    });
    addColumn(nameCol, "Name");
    getColumnSortList().push(endCol);
}
I had to write my own TextColumn to determine the name of the column; otherwise, how would I know which column was clicked? The page gets sorted now, but I have to click the column twice. After that, the sorting happens on every click, but in the wrong order.
This solution needs polishing and seems kind of hacky to me. Any better ideas?
The tutorial that you linked to states:
This sorting code is here so the example works. In practice, you would
sort on the server.
An async provider is used to display data that is too big to be loaded in a single call. When a user clicks a column to sort it, there are simply not enough objects on the client side to display "the first 20 events by name" or whatever sorting was applied. You have to go back to your server and request the first 20 events sorted by name in ascending order. And when a user reverses the sorting, you have to go to the server again to get the first 20 events sorted by name in descending order, etc.
If you can load all the data in a single call, then you can use a regular DataProvider, and all the sorting can happen on the client side.
EDIT:
The problem in the posted code was in the constructor of EventsDataProvider. Now it calls onRangeChanged, and the app can load a newly sorted list of events from the server.

How can I create temporary records of Linq-To-Sql types without causing duplicate key problems?

I have code that generates records based on my DataGridView. These records are temporary because some of them already exist in the database.
Crop_Variety v = new Crop_Variety();
v.Type_ID = currentCropType.Type_ID;
v.Variety_ID = r.Cells[0].Value.ToString();
v.Description = r.Cells[1].Value.ToString();
v.Crop = currentCrop;
v.Crop_ID = currentCrop.Crop_ID;
Unfortunately, in this little bit of code, because I say v.Crop = currentCrop, currentCrop.Crop_Varieties now includes this temporary record. When I go to insert the new records from this grid, they all hold a reference to the same Crop record, so the temporary records that already exist in the database show up twice, causing duplicate key errors when I submit.
I have a whole system for detecting which records need to be added and which need to be deleted based on what the user has done, but it's getting gummed up by this relentless tracking of references.
Is there a way I can stop LINQ to SQL from automatically adding these temporary records to its table collections?
I would suggest revisiting the code that populates the DataGridView (grid) with records.
Then revisit the code that operates on items from the grid, keeping in mind that you can grab the bound item from a grid row using the following code:
public object GridSelectedItem
{
    get
    {
        try
        {
            if (_grid == null || _grid.SelectedCells.Count < 1) return null;
            DataGridViewCell cell = _grid.SelectedCells[0];
            DataGridViewRow row = _grid.Rows[cell.RowIndex];
            if (row.DataBoundItem == null) return null;
            return row.DataBoundItem;
        }
        catch { }
        return null;
    }
}
It is also hard to understand the nature of the Crop_Variety code you have posted, as Crop_Variety seems to be a subclass of Crop. This leads to problems when the Crop is not yet bound to the database and can potentially lead to problems when you're adding the Crop_Variety to the context.
For this type of form application I normally keep a List<T> _dataList inside the form class, and the main grid is bound to that list, through an ObjectBindingList or another way. That way _dataList holds all the data that needs to be persisted when needed (the user clicked save).
When you assign an entity object reference, you create a link between the two objects. Here you are doing that:
v.Crop = currentCrop;
There is only one way to avoid this: modify the generated code or generate/write your own. I would never do this.
I think you will be better off writing a custom DTO class instead of reusing the generated entities. I have done both approaches and I like the latter far better.
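For illustration, a minimal sketch of such a DTO (the property names come from the question; the class and its mapping method are hypothetical):

// Hypothetical DTO: a plain class with no association properties,
// so building one never mutates currentCrop.Crop_Varieties.
public class CropVarietyDto
{
    public int Type_ID { get; set; }
    public string Variety_ID { get; set; }
    public string Description { get; set; }
    public int Crop_ID { get; set; }

    // Map to a real entity only for the rows that must be inserted.
    public Crop_Variety ToEntity()
    {
        return new Crop_Variety
        {
            Type_ID = this.Type_ID,
            Variety_ID = this.Variety_ID,
            Description = this.Description,
            Crop_ID = this.Crop_ID // set the foreign key, not the Crop reference
        };
    }
}

Because ToEntity() sets Crop_ID instead of assigning the Crop reference, no back-reference is added to currentCrop.Crop_Varieties until you explicitly insert the entity.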
Edit: Here is some sample generated code:
[global::System.Data.Linq.Mapping.AssociationAttribute(Name="RssFeed_RssFeedItem", Storage="_RssFeed", ThisKey="RssFeedID", OtherKey="ID", IsForeignKey=true, DeleteOnNull=true, DeleteRule="CASCADE")]
public RssFeed RssFeed
{
    get
    {
        return this._RssFeed.Entity;
    }
    set
    {
        RssFeed previousValue = this._RssFeed.Entity;
        if (((previousValue != value)
            || (this._RssFeed.HasLoadedOrAssignedValue == false)))
        {
            this.SendPropertyChanging();
            if ((previousValue != null))
            {
                this._RssFeed.Entity = null;
                previousValue.RssFeedItems.Remove(this);
            }
            this._RssFeed.Entity = value;
            if ((value != null))
            {
                value.RssFeedItems.Add(this);
                this._RssFeedID = value.ID;
            }
            else
            {
                this._RssFeedID = default(int);
            }
            this.SendPropertyChanged("RssFeed");
        }
    }
}
As you can see, the generated code establishes the link by calling value.RssFeedItems.Add(this);.
In case you have many entities for which you would need many DTOs, you could code-generate the DTO classes by using reflection.
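As a rough illustration, here is a runtime property copier in the same spirit (not the compile-time code generation mentioned above; all type and member names are hypothetical). It copies only value-typed and string properties, which conveniently skips the association properties that cause the linking:

// Hypothetical sketch: copy same-named scalar properties from an
// entity to a freshly created DTO, ignoring references and collections.
public static TDto ToDto<TEntity, TDto>(TEntity entity) where TDto : new()
{
    var dto = new TDto();
    foreach (var source in typeof(TEntity).GetProperties())
    {
        if (!source.PropertyType.IsValueType && source.PropertyType != typeof(string))
        {
            continue; // skip entity references and collections
        }
        var target = typeof(TDto).GetProperty(source.Name);
        if (target != null && target.CanWrite)
        {
            target.SetValue(dto, source.GetValue(entity, null), null);
        }
    }
    return dto;
}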

Using DataObjectTypeName in DataObjectSource

The functionality I am trying to use is:
- Create an ObjectDataSource for selecting and updating controls on a web page (User Control).
- Use the DataObjectTypeName to have an object created that would send the data to an UpdateMethod.
- Before the values are populated in the DataObjectTypeName object, I would like to pre-populate the object so that the unused items in the class are not defaulted to zeros and empty strings without me knowing whether the zero or empty string was set by the user or by the application.
I cannot find a way to pre-populate the values (this was an issue back in 2006 with Framework 2.0). One might ask, "Why would anyone need to pre-populate the object?" The simple answer is: I want to be able to place controls freely on different User Controls and not have to be concerned with which UpdateMethod needs to handle which fields of an object.
For example, let's say I have a class (reflecting a SQL table) that includes the fields FirstName, LastName, Address, City, State, and Zip. I may want to give the user the option to change FirstName and LastName without even seeing Address, City, State, and Zip (or vice versa). I do not want to create two UpdateMethods, one handling FirstName and LastName and the other handling the remaining fields. I am working with a class of some 40+ columns from multiple tables, and I may want some fields on one screen and not another, and decide later to move fields from one screen to another (which breaks my UpdateMethods without me knowing).
I hope I explained my issue well enough.
Thanks
This is hardly a solution to the problem, but it's my best stab at it.
I have a GridView with its DataSourceID set to an ObjectDataSource.
Whenever a row is updated, I want the property values in the object to be selectively updated - that is - only updated if they appear as columns in the GridView.
I've created the following extension:
public static class GridViewExtensions
{
    private static GridView _gridView;

    public static void EnableLimitUpdateToGridViewColumns(this GridView gridView)
    {
        _gridView = gridView;
        if (_gridView.DataSourceObject != null)
        {
            ((ObjectDataSource)_gridView.DataSourceObject).Updating +=
                new ObjectDataSourceMethodEventHandler(objectDataSource_Updating);
        }
    }

    private static void objectDataSource_Updating(object sender, ObjectDataSourceMethodEventArgs e)
    {
        var newObject = (object)e.InputParameters[0];
        var oldObjects = ((ObjectDataSource)_gridView.DataSourceObject).Select().Cast<object>();
        Type type = oldObjects.First().GetType();

        // Find the existing object whose data key matches the updated object.
        object oldObject = null;
        foreach (var obj in oldObjects)
        {
            if (type.GetProperty(_gridView.DataKeyNames.First()).GetValue(obj, null).ToString() ==
                type.GetProperty(_gridView.DataKeyNames.First()).GetValue(newObject, null).ToString())
            {
                oldObject = obj;
                break;
            }
        }
        if (oldObject == null) return;

        // Restore every property that is not displayed as a grid column,
        // so only the visible columns are actually updated.
        var dynamicColumns = _gridView.Columns.OfType<DynamicField>();
        foreach (var property in type.GetProperties())
        {
            if (dynamicColumns.Where(c => c.DataField == property.Name).Count() == 0)
            {
                property.SetValue(newObject, property.GetValue(oldObject, null), null);
            }
        }
    }
}
And in the Page_Init event of my page, I apply it to the GridView, like so:
protected void Page_Init()
{
    GridView1.EnableLimitUpdateToGridViewColumns();
}
This is working well for me at the moment.
You could probably apply similar logic to other controls, e.g. ListView or DetailsView.
I'm currently scratching my head to think of a way this can be done in a rendering-agnostic manner - i.e. without having to know about the rendering control being used.
I hope this ends up as a standard feature of the GridView or ObjectDataSource control, rather than something that has to be hacked in.
