Duplicate notifications are being received in AppFabric Caching

I am configuring notifications for my AppFabric cache. The first Add operation sends one notification, but when I Replace (update) the same cache item with a new value, or delete that item, I receive multiple notifications for that single operation. I suspect it has nothing to do with the type of operation I perform, because I have seen multiple Add notifications too. Is there any configuration I am messing up?
I have written the delegate in my code-behind file. It keeps hitting the delegate repeatedly for some time, even for a single operation.
My configuration is:
<dataCacheClient requestTimeout="150000" channelOpenTimeout="19000" maxConnectionsToServer="10">
  <localCache isEnabled="true" sync="TimeoutBased" ttlValue="300" objectCount="10000"/>
  <clientNotification pollInterval="30" maxQueueLength="10000" />
  <hosts>
    <host name="192.10.14.20" cachePort="22233"/>
  </hosts>
  <securityProperties mode="None" protectionLevel="None" />
  <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
      maxBufferSize="8388608" maxOutputDelay="2" channelInitializationTimeout="60000"
      receiveTimeout="600000"/>
</dataCacheClient>
And my code which has the delegate is :
public void myCacheLvlDelegate(string myCacheName, string myRegion, string myKey, DataCacheItemVersion itemVersion, DataCacheOperations OperationId, DataCacheNotificationDescriptor nd)
{
    // display some of the delegate parameters
    StringBuilder ResponseOfCache = new StringBuilder("A cache-level notification was triggered!");
    ResponseOfCache.Append(" ; Cache: " + myCacheName);
    ResponseOfCache.Append(" ; Region: " + myRegion);
    ResponseOfCache.Append(" ; Key: " + myKey);
    ResponseOfCache.Append(" ; Operation: " + OperationId.ToString());
    string value = ManualDataCacheOperations.GetCacheValue(myTestCache, txtKey.Text).ToString();
    Legend.Append(string.Format("{0} - Operation {1} Performed at {2}{3}", myKey, OperationId.ToString(), DateTime.Now.ToShortTimeString(), Environment.NewLine));
}
Please let me know where I am going wrong. Why is it sending multiple notifications for the same item? If it is of any use: I am using a cluster with two hosts, and I am using the same cache for session management too.

First, make sure that you added only one callback. It sounds simple, but I've seen too many cases where multiple callbacks were registered.
It's really important to understand that cache notifications are based on polling.
When you use cache notifications, your application checks with the cache cluster at a regular interval (pollInterval) to see whether any new notifications are available.
So, if you update a key 10 times between two checks, you will get 10 notifications.
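The polling behavior described above can be illustrated with a small self-contained simulation (plain Python, not the AppFabric API; class and method names are made up for illustration):

```python
# Minimal simulation of poll-based change notifications: every write between
# two polls is queued, so a single poll can deliver many notifications for the
# same key, even though each was triggered by one operation.

class NotifyingCache:
    def __init__(self):
        self._data = {}
        self._pending = []  # notifications queued until the next poll

    def put(self, key, value):
        op = "Add" if key not in self._data else "Replace"
        self._data[key] = value
        self._pending.append((op, key))

    def poll(self):
        """Deliver everything queued since the last poll (like pollInterval)."""
        delivered, self._pending = self._pending, []
        return delivered

cache = NotifyingCache()
cache.put("item1", "v1")   # Add
cache.put("item1", "v2")   # Replace
cache.put("item1", "v3")   # Replace
print(cache.poll())        # all three notifications arrive in one poll
```

If the key is only written once between polls, the next poll delivers exactly one notification; the duplicates come from batching, not from the operation type.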

Related

Dynamics 365 Customer Service : Email sent from one queue to another queue is getting linked to same ticket

I am facing an issue in Dynamics 365.
Let's say I have two queues, Queue1 and Queue2, with a case-creation rule enabled on both. Initially, the customer sends an email to Queue1 and it is converted into a case; I then want to forward this email to Queue2.
When I forward the email from Queue1 to Queue2, it comes back to Dynamics as an incoming email through Queue2, but it again gets linked to the same old case in Queue1. I want it to create a new case in Queue2 instead.
I also tried a pre-create plugin to clear the regarding object on an incoming email when the sender is a Dynamics queue, and according to the traces the code does clear regardingobjectid. However, the email still gets linked to the same ticket somehow.
Has anyone faced the same issue and found a workaround?
Plugin code snippet - registered on Email Pre-create sync.
TargetEntity = (Entity)PluginContext.InputParameters["Target"];
var sender = TargetEntity["sender"].ToString().ToLowerInvariant();
EntityCollection senderQueue = GetQueue(sender);
if (senderQueue?.Entities != null && senderQueue.Entities.Count != 0)
{
    TracingService.Trace("sender is a queue");
    TracingService.Trace("updating : TargetEntity['regardingobjectid'] = null to platform");
    TargetEntity["regardingobjectid"] = null;
}
I was finally able to do it, after clearing three attributes on the incoming email's target entity.
I wrote a pre-validation synchronous plugin on Email that clears the following three fields:
TargetEntity["regardingobjectid"] = null;
// this line -- parentactivityid fixed the issue.
TargetEntity["parentactivityid"] = null;
TargetEntity["trackingtoken"] = null;

Lync 2013 - consuming 180 ringing responses from a forked request

Is it possible to configure Lync 2013 to send only a single 180/183 ringing response upstream when an INVITE to Lync triggers multiple INVITEs to Lync subscriber endpoints, each of which generates its own 180/183 message?
In the simultaneous-ring case, I want Lync to absorb all of these 180s to avoid unnecessary messaging back to the originator, which is INVITE'ing Lync from behind an SBC.
Lync seems to be acting as a forking proxy rather than a B2BUA.
You're right that Lync forks calls. If a user has multiple endpoints, Lync will fork the call to each endpoint, and each endpoint will in turn return a ringing response.
You can create an MSPL script to catch 180 responses. Since MSPL is stateless, this requires a backing application (a ServerApplication) that checks whether a 180 response has already been sent for the current call and blocks subsequent ringing responses. On the assumption that the Call-ID header is identical for all forks of a request, you can then decide which responses to forward and which not to.
A simple MSPL would be something like:
<lc:applicationManifest
    lc:appUri="http://www.contoso.com/DefaultRoutingScript"
    xmlns:lc="http://schemas.microsoft.com/lcs/2006/05">
  <lc:responseFilter reasonCodes="1XX" />
  <lc:proxyByDefault action="true" />
  <lc:splScript><![CDATA[
    if (sipResponse && sipResponse.StatusCode == 180)
    {
        Dispatch("OnResponse");
    }
  ]]></lc:splScript>
</lc:applicationManifest>
Then, in your server application, you handle the OnResponse event; I imagine something like this:
public void OnResponse(object sender, ResponseReceivedEventArgs e)
{
    if (e.Response.StatusCode == 180)
    {
        var callIdHeader = e.Response.AllHeaders.FindFirst(Header.StandardHeaderType.CallID);
        if (callIdHeader != null)
        {
            var callId = callIdHeader.Value;
            if (ShouldSendRingingResponse(callId))
            {
                e.ClientTransaction.ServerTransaction.SendResponse(e.Response);
            }
        }
    }
}

public bool ShouldSendRingingResponse(string callId) { .... }
Then you can put whatever logic you need in the ShouldSendRingingResponse function to decide whether to forward the 180 response.
Note that I did not test this; it's just a basic outline of how I would attempt to handle the situation.
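To make the ShouldSendRingingResponse stub concrete, here is a language-neutral sketch of the "forward only the first 180 per Call-ID" idea (plain Python rather than the UCMA/MSPL API; the class and method names are made up):

```python
# Forward only the first 180 seen for each Call-ID; ringing responses from
# additional forked endpoints of the same call are suppressed. A real server
# application would also need to expire entries when the call ends.
import threading

class RingingFilter:
    def __init__(self):
        self._seen = set()                 # Call-IDs that already rang
        self._lock = threading.Lock()      # responses may arrive concurrently

    def should_send(self, call_id):
        with self._lock:
            if call_id in self._seen:
                return False               # a 180 was already forwarded
            self._seen.add(call_id)
            return True                    # first 180 for this call

    def call_ended(self, call_id):
        with self._lock:
            self._seen.discard(call_id)    # free state for finished calls

f = RingingFilter()
print(f.should_send("abc@host"))   # first endpoint rings: forward
print(f.should_send("abc@host"))   # forked endpoint rings too: suppress
```

The same set-plus-lock pattern maps directly onto a C# ShouldSendRingingResponse backed by a ConcurrentDictionary.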
There isn't a way to prevent this in Lync itself; however, an AudioCodes SBC is typically deployed alongside it, and it has an option to handle this scenario.
Multiple 18x:
The device supports interworking of the different levels of support for multiple 18x responses (including 180 Ringing, 181 Call Is Being Forwarded, 182 Queued, and 183 Session Progress) that are forwarded to the caller. The UA can be configured as supporting only receipt of the first 18x response (i.e., the device forwards only this response to the caller) or receipt of multiple 18x responses (the default). This is configured by the IP Profile parameter 'SBC Remote Multiple 18x Support'.

Background process/task in genexus?

We're using GeneXus EV3 to develop a project, and we noticed that we can use it to easily alert users through SMS or email about relevant information.
I'd like to know whether it's possible to create some sort of background process in GeneXus that checks a database, so that we can send emails based on the information present in the database.
Thanks.
You can make an asynchronous call to a procedure object using the Submit method, like:
// Some code...
prAnyProcedure.submit(&parm1, &parm2)
// Some code...
// Some code...
In this case, the main program flow will continue processing immediately.
But if you need to call a procedure periodically without any user intervention, you should use a server-side scheduler such as Ant or the cron/crontab Linux utility:
http://ant.apache.org/faq.html#what-is-ant
http://linux.die.net/man/8/cron
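As a concrete (hypothetical) example of the cron approach, a crontab entry that launches the generated main procedure every day at 9 AM could look like this; the script path and log file are placeholders for whatever your GeneXus generator produces:

```
# m h dom mon dow  command   -- run the reminder procedure daily at 09:00
0 9 * * * /opt/myapp/run_reminders.sh >> /var/log/reminders.log 2>&1
```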
I made something similar to what you need.
I wrote a procedure, marked it as MAIN, and launched it with the scheduler task (it runs every day at 9 AM). A table holds the reminders: the email addresses, the message to send, and so on. This is the main procedure:
for each Empresa
where EmpresaEstado = 1
    &EmpresaNombre = EmpresaNombre
    &EmpresaEmail = EmpresaEmail
    &EmpresaServidorSalida = EmpresaServidorSalida
    &EmpresaServidorPassword = EmpresaServidorPassword
endfor

for each Recordatorios
where RecordatoriosEstado = 1
where RecordatoriosDiaEnvio = Day(Today())
    &smtp.Host = 'smtp.gmail.com'
    &smtp.Port = 25
    &smtp.Sender.Address = Trim(&EmpresaEmail)
    &smtp.Sender.Name = Trim(&EmpresaNombre)
    &smtp.Authentication = 1
    &smtp.Secure = 1
    &smtp.UserName = Trim(&EmpresaEmail)
    &smtp.Password = Trim(&EmpresaServidorPassword)
    &RecordatoriosAsunto = RecordatoriosAsunto
    &RecordatoriosTexto = RecordatoriosTexto
    &mail.To.New('Mauricio', 'mlopez.informatica@gmail.com')
    &smtp.ErrDisplay = 0
    &smtp.Login()
    for each
    where RecordatoriosClienteEstado = 1
        &mail.BCC.New(RecordatoriosClienteMail1, Trim(RecordatoriosClienteMail1))
        if RecordatoriosClienteMail2 <> ''
            &mail.BCC.New(RecordatoriosClienteMail2, Trim(RecordatoriosClienteMail2))
        endif
    endfor
    &mail.Subject = &RecordatoriosAsunto
    &mail.Text = &RecordatoriosTexto
    &smtp.Send(&mail)
    &mail.Clear()
    &smtp.Logout()
endfor

Post large amount of data with Wicket.ajax.post

I'm trying to use the Behavior and JavaScript FileReader as can be seen in this question: Wicket Drag and drop functionality for adding an image
If I post more than 200k, I get a "Form too large" error. So perhaps I am missing some multipart setup.
What is the proper way to call Wicket.ajax.post() with a large amount of data?
I tried setting mp to true, but then it started complaining that I do not have a form id. Does it need a form?
Btw. I use Jetty, but that has no problems using a regular file upload using forms.
This error comes from Jetty's Request implementation.
If you look at the source of the Request#extractFormParameters method, you will see the following:
if (_context != null)
{
    maxFormContentSize = _context.getContextHandler().getMaxFormContentSize();
    maxFormKeys = _context.getContextHandler().getMaxFormKeys();
}
if (maxFormContentSize < 0)
{
    Object obj = _channel.getServer().getAttribute("org.eclipse.jetty.server.Request.maxFormContentSize");
    if (obj == null)
        maxFormContentSize = 200000;
    else if
    ...
}
So, in fact, you can set the context values to 0 as pikand proposed, or set the server-wide attribute as follows:
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="setAttribute">
    <Arg>org.eclipse.jetty.server.Request.maxFormContentSize</Arg>
    <Arg>-1<!-- or 0, or any value below 0 --></Arg>
  </Call>
  ...
</Configure>
Your exception is thrown a bit later, by this code:
if (contentLength > maxFormContentSize && maxFormContentSize > 0)
{
    throw new IllegalStateException("Form too large: " + contentLength + " > " + maxFormContentSize);
}
So you can see that maxFormContentSize can be set to a value <= 0 to avoid this exception.
I don't think you need to change anything on the Ajax side. But in fact it is better to keep a limit on the data size, so that users cannot take down your server.
Other application servers have their own settings; for most of them you set the maxPostSize value to zero to disable this restriction.
Also, Wicket's Form component has its own maxSize property, which you can set with Form#setMaxSize. The problem is that Form passes this value as a Bytes value to the FileUploadBase class, whose Javadoc says:
The maximum size permitted for the complete request, as opposed to
fileSizeMax. A value of -1 indicates no maximum.
And this parameter is actually set via fileUpload.setSizeMax(maxSize.bytes()), and Bytes cannot hold a negative value. But I think you can try setting it to 0 and checking whether that works. By default, the Form#getSizeMax() method returns:
return getApplication().getApplicationSettings().getDefaultMaximumUploadSize();
which returns Bytes.MAX, which equals 8388608 terabytes. I think that is about as close to a "no limit" value as you can get. :)
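The 8388608-terabytes figure checks out: Bytes.MAX corresponds to Long.MAX_VALUE bytes, and 2^63 bytes is exactly 2^23 terabytes (a quick arithmetic check, unrelated to the Wicket API itself):

```python
# Long.MAX_VALUE is 2**63 - 1, so Bytes.MAX is effectively 2**63 bytes.
long_max_bytes = 2**63
terabyte = 2**40          # 1 TB in bytes
print(long_max_bytes // terabyte)   # 8388608
```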
Additionally, as far as I know, you don't need to set a Form id to use the multipart parameter. Only if you update your form via Ajax do you have to call Form.setOutputMarkupId(true). In fact, Form creates an id by itself in renderHead if it is multipart:
// register some metadata so we can later properly handle multipart ajax posts for
// embedded forms
registerJavaScriptNamespaces(response);
response.render(JavaScriptHeaderItem.forScript("Wicket.Forms[\"" + getMarkupId()
        + "\"]={multipart:true};", Form.class.getName() + '.' + getMarkupId()
        + ".metadata"));
Note that the getMarkupId() method creates the markup id if it does not already exist.
There is a form size limit in Jetty, defaulting to 200k.
Add a jetty-web.xml file in your webapp/WEB-INF folder; there you can set a form size limit of the needed size.
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
"http://jetty.mortbay.org/configure.dtd">
<Configure id="WebAppContext" class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="maxFormContentSize" type="int">900000</Set>
<Set name="maxFormKeys">5000</Set>
</Configure>

RestKit - Cache saved to Core Data only on second call

I'm loading edited objects from the server with RestKit (0.10.0) and Core Data as the backend, using the following method. The method is called, as part of a sync process, whenever the app enters the foreground.
[syncObjectManager loadObjectsAtResourcePath:[NSString stringWithFormat:@"?config=accounts&since=%@", lastSync] usingBlock:^(RKObjectLoader *loader) {
    [loader.mappingProvider setObjectMapping:companyMappingSync forKeyPath:@"data"];
    loader.backgroundPolicy = RKRequestBackgroundPolicyContinue;
    loader.delegate = self;
}];
The response loads normally, and the cache seems to be found as well.
2012-04-11 15:58:32.147 mobileCRM[3575:707] T restkit.support:RKCache.m:82 Found cachePath '/var/mobile/Applications/C5E4BF4F-4CB0-4F4C-AC11-FC1E17AE4AF2/Library/Caches/RKClientRequestCache-www.URLTOSERVER.de/PermanentStore/37abc4aff62918578288d10530e6bcd6' for PermanentStore/37abc4aff62918578288d10530e6bcd6
2012-04-11 15:58:32.152 mobileCRM[3575:707] T restkit.support:RKCache.m:119 Wrote cached data to path '/var/mobile/Applications/C5E4BF4F-4CB0-4F4C-AC11-FC1E17AE4AF2/Library/Caches/RKClientRequestCache-www.URLTOSERVER.de/PermanentStore/37abc4aff62918578288d10530e6bcd6'
2012-04-11 15:58:32.158 mobileCRM[3575:707] T restkit.support:RKCache.m:100 Writing dictionary to cache key: 'PermanentStore/37abc4aff62918578288d10530e6bcd6.headers'
2012-04-11 15:58:32.159 mobileCRM[3575:707] T restkit.support:RKCache.m:82 Found cachePath '/var/mobile/Applications/C5E4BF4F-4CB0-4F4C-AC11-FC1E17AE4AF2/Library/Caches/RKClientRequestCache-www.URLTOSERVER.de/PermanentStore/37abc4aff62918578288d10530e6bcd6.headers' for PermanentStore/37abc4aff62918578288d10530e6bcd6.headers
2012-04-11 15:58:32.166 mobileCRM[3575:707] T restkit.support:RKCache.m:103 Wrote cached dictionary to cacheKey 'PermanentStore/37abc4aff62918578288d10530e6bcd6.headers'
When testing a change to the "street" attribute, the change is mapped correctly as well.
2012-04-11 15:58:33.013 mobileCRM[3575:1a03] T restkit.object_mapping:RKObjectMappingOperation.m:332 Mapped attribute value from keyPath 'street' to 'street'. Value: Musterweg 55
After the operation finishes, I execute a new fetch request and reload the corresponding table view.
Problem
Somehow the changed object isn't saved in the backend after the first call, even though Core Data shows the COMMIT message.
2012-04-11 16:05:44.235 mobileCRM[3603:351f] I restkit.core_data:RKInMemoryEntityCache.m:131 Caching all 2861 Company objectsIDs to thread local storage
2012-04-11 16:05:44.354 mobileCRM[3603:351f] CoreData: sql: BEGIN EXCLUSIVE
2012-04-11 16:05:44.357 mobileCRM[3603:351f] CoreData: sql: UPDATE ZCOMPANY SET ZSTREET = ?, Z_OPT = ? WHERE Z_PK = ? AND Z_OPT = ?
2012-04-11 16:05:44.361 mobileCRM[3603:351f] CoreData: sql: COMMIT
But when I open the app a second time and the method is called again, the refreshed data is shown as expected. So I'm struggling to get the method working on the first call.
Thanks for your ideas!
After checking the output with
// App Delegate
RKLogConfigureByName("RestKit/*", RKLogLevelTrace);
// Scheme
-com.apple.CoreData.SQLDebug 1
I realized that I was reloading the data before Core Data had finished committing it.
So I resolved the problem by using the following method, even though I'm using the RKRequestQueue:
- (void)objectLoaderDidFinishLoading:(RKObjectLoader *)objectLoader
{
    // Send notification to tableView
    [[NSNotificationCenter defaultCenter] postNotificationName:@"refreshTableView" object:self];
}
... and it works. :)