So far I have used AJAX-enabled WCF services to get records from the DB and display them on the client, without using the AsyncPattern property of OperationContractAttribute.
When should I consider the AsyncPattern property?
Here is a sample of my OperationContract methods:
[OperationContract]
public string GetDesignationData()
{
    DataSet dt = GetDesignationViewData();
    return GetJSONString(dt.Tables[0]);
}
public string GetJSONString(DataTable Dt)
{
    string[] StrDc = new string[Dt.Columns.Count];
    string HeadStr = string.Empty;
    for (int i = 0; i < Dt.Columns.Count; i++)
    {
        StrDc[i] = Dt.Columns[i].Caption;
        HeadStr += "\"" + StrDc[i] + "\" : \"" + StrDc[i] + i.ToString() + "¾" + "\",";
    }
    HeadStr = HeadStr.Substring(0, HeadStr.Length - 1);
    StringBuilder Sb = new StringBuilder();
    Sb.Append("{\"" + Dt.TableName + "\" : [");
    for (int i = 0; i < Dt.Rows.Count; i++)
    {
        string TempStr = HeadStr;
        Sb.Append("{");
        for (int j = 0; j < Dt.Columns.Count; j++)
        {
            if (Dt.Rows[i][j].ToString().Contains("'") == true)
            {
                Dt.Rows[i][j] = Dt.Rows[i][j].ToString().Replace("'", "");
            }
            TempStr = TempStr.Replace(Dt.Columns[j] + j.ToString() + "¾", Dt.Rows[i][j].ToString());
        }
        Sb.Append(TempStr + "},");
    }
    Sb = new StringBuilder(Sb.ToString().Substring(0, Sb.ToString().Length - 1));
    Sb.Append("]}");
    return Sb.ToString();
}
public DataSet GetDesignationViewData()
{
    try
    {
        string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["connectionString"].ConnectionString;
        return SqlHelper.ExecuteDataset(connectionString, CommandType.StoredProcedure, DataTemplate.spDesignation_View);
    }
    catch (Exception)
    {
        // Rethrow without resetting the stack trace.
        throw;
    }
}
AsyncPattern has a few uses. It's mainly a server-side performance optimization that lets you free up worker-pool request threads during blocking operations. For example, when a long-running blocking operation such as DB access occurs, if you're using an async DB API on the server with AsyncPattern, the worker thread can return to the pool and service other requests. The original request is "awakened" later on another worker thread when the DB access completes, and the rest of the work is done (the service client just waits patiently; this is all transparent to it unless you're using an AsyncPattern-aware client and binding). This CAN allow your service to process more requests, if done carefully. To take advantage of it, you need to be using APIs on the server that have native async implementations. The only candidate I see is the DB call happening in your SqlHelper.ExecuteDataset method; you'd have to read up on the underlying API to make sure a TRUE asynchronous option is available (the presence of BeginXXX/EndXXX methods doesn't necessarily mean it's a true async implementation). The System.Data.SqlClient APIs are truly async.
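For illustration only, here is a minimal sketch of what an AsyncPattern version of your operation could look like, using the standard Task-to-APM adapter on top of the truly asynchronous ADO.NET calls. The interface name, stored procedure name, and connection-string key are assumptions rather than part of your original code, and the query is reduced to a single scalar for brevity:

using System;
using System.Data;
using System.Data.SqlClient;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IDesignationService
{
    // WCF treats the Begin/End pair as one logical "GetDesignationData" operation.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetDesignationData(AsyncCallback callback, object state);
    string EndGetDesignationData(IAsyncResult result);
}

public class DesignationService : IDesignationService
{
    // Hypothetical connection-string key; substitute your own.
    private static readonly string ConnStr =
        System.Configuration.ConfigurationManager.ConnectionStrings["connectionString"].ConnectionString;

    public IAsyncResult BeginGetDesignationData(AsyncCallback callback, object state)
    {
        // Standard Task -> APM adapter: the request thread is released here
        // and the continuation runs on another thread when the query finishes.
        var tcs = new TaskCompletionSource<string>(state);
        QueryDesignationsAsync().ContinueWith(t =>
        {
            if (t.IsFaulted) tcs.TrySetException(t.Exception.InnerExceptions);
            else if (t.IsCanceled) tcs.TrySetCanceled();
            else tcs.TrySetResult(t.Result);
            if (callback != null) callback(tcs.Task);
        });
        return tcs.Task;
    }

    public string EndGetDesignationData(IAsyncResult result)
    {
        return ((Task<string>)result).Result;
    }

    private static async Task<string> QueryDesignationsAsync()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("spDesignation_View", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // Truly asynchronous ADO.NET calls: no thread is blocked while
            // the database does its work.
            await conn.OpenAsync();
            object scalar = await cmd.ExecuteScalarAsync();
            return scalar == null ? "{}" : scalar.ToString();
        }
    }
}

On .NET 4.5+ you can also skip AsyncPattern entirely and declare the operation as Task<string> GetDesignationDataAsync(); WCF supports task-based operations natively.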
A word of caution: you have to be processing a lot of requests to make this worthwhile; there's a significant cost in code complexity and readability to splitting things up this way. You also need to understand multi-threaded programming very well; there are numerous pitfalls around locking, error handling, etc., that are well outside the scope of an SO post.
Good luck!
The procedure we currently perform is to extract data from an endpoint, run computational analysis, generate RDF files, and manually load them back into the endpoint.
I was looking into automating this procedure using Apache Jena ARQ, as these dependencies are already used for the retrieval of information.
I managed to get it partially working using INSERT statements, but performing thousands if not millions of inserts one by one seems inefficient to me. The second issue is that we sometimes have regex characters or " in a string, which need to be escaped, and there are many exceptions.
Is there a way to either parse or iterate over the statements of an in-memory Apache Jena model and inject them directly into an endpoint?
We currently use GraphDB, but it would be great if this could be done in a store-agnostic way.
Update
I have updated the code to handle 10 statements at once; not sure yet what the limit will eventually be...
public class EndpointTests extends TestCase {
    // Only for local testing
    public void testEndpoint() throws Throwable {
        String endpoint = "http://10.117.11.77:7200/repositories/Test";
        Domain domain = new Domain("file:///Users/jasperkoehorst/diana_interproscan_head.nt");
        StmtIterator statements = domain.getRDFSimpleCon().getModel().listStatements();
        String strInsert = "INSERT DATA { ";
        int insertCounter = 0;
        while (statements.hasNext()) {
            insertCounter = insertCounter + 1;
            Statement statement = statements.nextStatement();
            String subject = statement.getSubject().getURI();
            String predicate = statement.getPredicate().getURI();
            String object = statement.getObject().toString();
            if (statement.getObject().isURIResource()) {
                object = "<" + statement.getObject().toString() + ">";
            }
            if (statement.getObject().isLiteral()) {
                object = statement.getObject().asLiteral().getString();
                object = object.replaceAll("\\\\", "\\\\\\\\");
                object = object.replaceAll("\"", "\\\\\"");
            }
            if (object.startsWith("http")) {
                object = "<" + object + ">";
            } else {
                object = "\"" + object + "\"";
            }
            strInsert = strInsert + "<" + subject + "> <" + predicate + "> " + object + " . ";
            if (insertCounter % 10 == 0) {
                System.out.println(insertCounter);
                strInsert = strInsert + " } ";
                UpdateRequest updateRequest = UpdateFactory.create(strInsert);
                UpdateProcessor updateProcessor = UpdateExecutionFactory.createRemote(updateRequest, endpoint + "/statements");
                updateProcessor.execute();
                strInsert = "INSERT DATA { ";
            }
        }
    }
}
We have created some sample Lambdas to test SQS insertion performance.
At first we used 128 MB Lambdas and got a 40 ms average for insertions of 7 KB messages.
When we upgraded the Lambda to 256 MB we got 20 ms averages, and when we upgraded to 512 MB we even got an 11 ms average insertion time.
We wanted to know why a Lambda with more memory/CPU gets better insertion speeds into SQS, because we thought SQS insertion should not be related to memory or CPU capacity. In summary: is there any operation inside the SQS client's send operation that needs more CPU/memory to perform better?
Thanks!
Here is our test code:
public class Function
{
    public async Task<string> FunctionHandler(ILambdaContext context)
    {
        var client = new AmazonSQSClient();
        var request = new SendMessageRequest
        {
            MessageAttributes = new Dictionary<string, MessageAttributeValue>(),
            MessageBody = "7kbyte String here...",
            QueueUrl = "https://our_account/sqs-test-queue "
        };
        // Disregard the first 2 sent messages, to exclude the initial connection overhead
        var elapsed00 = await SendMessageResponse(client, request, -2);
        var elapsed0 = await SendMessageResponse(client, request, -1);
        double totalTime = 0;
        const int totalIterations = 1000;
        for (int j = 0; j < totalIterations; j++)
        {
            var elapsed = await SendMessageResponse(client, request, j);
            totalTime += elapsed;
        }
        Console.WriteLine("$$$$$AVERAGE:" + ((double)(totalTime / totalIterations)));
        return "End test";
    }

    private async Task<long> SendMessageResponse(AmazonSQSClient client, SendMessageRequest request, int i)
    {
        var stopwatch = new Stopwatch();
        stopwatch.Start();
        var response = await client.SendMessageAsync(request);
        stopwatch.Stop();
        //Console.WriteLine("&&&#####" + i + "Elapsed ms:" + stopwatch.ElapsedMilliseconds + ". For message ID '" +
        //    response.MessageId + "':");
        i++;
        return stopwatch.ElapsedMilliseconds;
    }
}
SQS isn't behaving differently in relation to your Lambda size; the code you're running in Lambda changes its performance based on the memory/CPU allotted. Lambda allocates CPU power in proportion to the configured memory, so a larger function spends less time on the CPU-bound parts of the SDK call (request signing, TLS, serialization) before the message ever reaches SQS.
I have a default ASP.NET Core Web API application with a single handler:
[HttpGet("{x}")]
public string Get(string x)
{
var guid = Guid.NewGuid();
var start = DateTime.Now;
Console.WriteLine($"{guid}\t1\tSTRT\t{start}");
var sb = new StringBuilder();
using (var conn = new OracleConnection(CONN_STR)) {
using (var cmd = conn.CreateCommand()) {
conn.Open();
Console.WriteLine($"{guid}\t2\tCONN\t{DateTime.Now - start}");
cmd.CommandText = "select hello4(:x) from dual";
var nameParam = cmd.CreateParameter();
nameParam.ParameterName = "x";
nameParam.Value = x;
cmd.Parameters.Add(nameParam);
var ret = cmd.ExecuteScalar();
if (ret is string xname) {
sb.Append("{\"x\":");
sb.Append(x);
sb.Append("\",\"xname\":\"");
sb.Append(xname);
sb.Append("\"}");
} else {
sb.Append("{\"error\":\"no data found\"}");
}
}
}
Console.WriteLine($"{guid}\t3\tDONE\t{DateTime.Now - start}");
return sb.ToString();
}
I load test it using vegeta: vegeta attack -targets=targets.txt -duration=10s -rate=100 -timeout=0 | vegeta report.
When hello4 is fast, I can see in the stdout that the handler is invoked 100 times per second.
When hello4 contains dbms_lock.sleep(1); to simulate extra processing time, I see that the handler is invoked much fewer times per second, about 20. I actually expected it to still be invoked about 100 times per second, placing extra stress on the DB and exhausting the SGA (my connection pool limit is 1024).
Why doesn't that happen and how can I force it to start handling more incoming connections simultaneously?
Running cmd.ExecuteScalar in a Task was the right idea, but it has to be a long-running task so it doesn't tie up the thread-pool threads the application needs:
private static TaskFactory<object> tf = new TaskFactory<object>();
//and in the method
await tf.StartNew((Func<object>)cmd.ExecuteScalar, TaskCreationOptions.LongRunning).ConfigureAwait(false);
This allows Kestrel to keep handling incoming connections at the rate that they arrive.
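As a rough sketch only, reusing the handler shape from the question (the action has to become async for the await to compile; the names are unchanged from the original code):

// Hypothetical async version of the handler; only the DB call changes.
[HttpGet("{x}")]
public async Task<string> Get(string x)
{
    using (var conn = new OracleConnection(CONN_STR))
    using (var cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandText = "select hello4(:x) from dual";
        var nameParam = cmd.CreateParameter();
        nameParam.ParameterName = "x";
        nameParam.Value = x;
        cmd.Parameters.Add(nameParam);

        // LongRunning asks the scheduler for a dedicated thread, so the slow
        // ExecuteScalar call no longer ties up a thread-pool thread that
        // Kestrel needs to accept new connections.
        var ret = await tf.StartNew((Func<object>)cmd.ExecuteScalar,
                                    TaskCreationOptions.LongRunning)
                          .ConfigureAwait(false);

        return ret is string xname
            ? "{\"x\":\"" + x + "\",\"xname\":\"" + xname + "\"}"
            : "{\"error\":\"no data found\"}";
    }
}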
I have a basic for loop that downloads files. It's supposed to update the label as it progresses.
Searching here on Stack Overflow, I found a suggestion to use SetNeedsDisplay(), but the label still refuses to update. Any idea?
for (int i = 0; i < files.Length; i++)
{
    status.Text = "Downloading file " + (i + 1) + " of " + files.Length + "...";
    status.SetNeedsDisplay();
    string remoteFile = assetServer + files[i];
    var webClient2 = new WebClient();
    string localFile = files[i];
    string localPath3 = Path.Combine(documentsPath, localFile);
    webClient2.DownloadFile(remoteFile, localPath3);
}
As previously suggested, avoid blocking the UI thread with heavy work. WebClient already has an async method you can use:
webClient2.DownloadFileAsync(new System.Uri(remoteFile), localPath3);
To avoid accessing the UI from a background thread, use the built-in InvokeOnMainThread method whenever you touch UI elements:
InvokeOnMainThread (() => {
    status.Text = "Downloading file " + (i + 1) + " of " + files.Length + "...";
    status.SetNeedsDisplay ();
});
Finally, use a using statement to take care of resource disposal:
using (var webClient2 = new WebClient ())
{
    webClient2.DownloadFileAsync (new System.Uri (remoteFile), localPath3);
}
You could also move the iteration inside the using statement, as sketched below; that way you don't create a WebClient object for each file but instead reuse the same object to download all the files in your files array.
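Putting the pieces together, here is a rough sketch of the whole loop. It assumes the surrounding method is (or can be made) async, because a single WebClient instance only runs one operation at a time, so each download is awaited before the next one starts:

// Hypothetical async rewrite of the download loop; assetServer, files,
// documentsPath and status are the same members as in the question.
using (var webClient = new WebClient())
{
    for (int i = 0; i < files.Length; i++)
    {
        int current = i + 1;
        InvokeOnMainThread(() =>
        {
            status.Text = "Downloading file " + current + " of " + files.Length + "...";
            status.SetNeedsDisplay();
        });

        string remoteFile = assetServer + files[i];
        string localPath = Path.Combine(documentsPath, files[i]);

        // Await each download so the UI thread stays free and the single
        // WebClient instance never runs two transfers at once.
        await webClient.DownloadFileTaskAsync(new System.Uri(remoteFile), localPath);
    }
}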
I am seeing errors while exporting email to an Office 365 account using the EWS Managed API: "The server cannot service this request right now. Try again later." Why is that error occurring and what can be done about it?
I am using the following code for that:
_GetEmail = (EmailMessage)item;
bool isread = _GetEmail.IsRead;
sub = _GetEmail.Subject;
fold = folder.DisplayName;
historicalDate = _GetEmail.DateTimeSent.Subtract(folder.Service.TimeZone.GetUtcOffset(_GetEmail.DateTimeSent));
props = new PropertySet(EmailMessageSchema.MimeContent);
var email = EmailMessage.Bind(_source, item.Id, props);
bytes = new byte[email.MimeContent.Content.Length];
fs = new MemoryStream(bytes, 0, email.MimeContent.Content.Length, true);
fs.Write(email.MimeContent.Content, 0, email.MimeContent.Content.Length);
Demail = new EmailMessage(_destination);
Demail.MimeContent = new MimeContent("UTF-8", bytes);
// 'SetExtendedProperty' used to maintain historical date of items
Demail.SetExtendedProperty(new ExtendedPropertyDefinition(57, MapiPropertyType.SystemTime), historicalDate);
// PR_MESSAGE_DELIVERY_TIME
Demail.SetExtendedProperty(new ExtendedPropertyDefinition(3590, MapiPropertyType.SystemTime), historicalDate);
if (isread == false)
{
    Demail.IsRead = isread;
}
if (_source.RequestedServerVersion == flagVersion && _destination.RequestedServerVersion == flagVersion)
{
    Demail.Flag = _GetEmail.Flag;
}
_lstdestmail.Add(Demail);
_objtask = new TaskStatu();
_objtask.TaskId = _taskid;
_objtask.SubTaskId = subtaskid;
_objtask.FolderId = Convert.ToInt64(folderId);
_objtask.SourceItemId = Convert.ToString(_GetEmail.InternetMessageId.ToString());
_objtask.DestinationEmail = Convert.ToString(_fromEmail);
_objtask.CreatedOn = DateTime.UtcNow;
_objtask.IsSubFolder = false;
_objtask.FolderName = fold;
_objdbcontext.TaskStatus.Add(_objtask);
try
{
    if (counter == countGroup)
    {
        Demails = new EmailMessage(_destination);
        Demails.Service.CreateItems(_lstdestmail, _destinationFolder.Id, MessageDisposition.SaveOnly, SendInvitationsMode.SendToNone);
        _objdbcontext.SaveChanges();
        counter = 0;
        _lstdestmail.Clear();
    }
}
catch (Exception ex)
{
    ClouldErrorLog.CreateError(_taskid, subtaskid, ex.Message + GetLineNumber(ex, _taskid, subtaskid), CreateInnerException(sub, fold, historicalDate));
    counter = 0;
    _lstdestmail.Clear();
    continue;
}
This error occurs only when exporting to Office 365 accounts; it works fine with Outlook 2010, 2013, 2016, etc.
Usually this happens when you exceed the EWS throttling limits in Exchange. It is explained here.
Make sure you are familiar with the throttling policies and that your code complies with them.
You can inspect the throttling policies using Get-ThrottlingPolicy if you have access to the server.
One way to address the throttling issue you are experiencing is to implement paging instead of requesting all items in one go. You can refer to this link.
For instance:
using Microsoft.Exchange.WebServices.Data;

static void PageSearchItems(ExchangeService service, WellKnownFolderName folder)
{
    int pageSize = 5;
    int offset = 0;
    // Request one more item than your actual pageSize.
    // This will be used to detect a change to the result
    // set while paging.
    ItemView view = new ItemView(pageSize + 1, offset);
    view.PropertySet = new PropertySet(ItemSchema.Subject);
    view.OrderBy.Add(ItemSchema.DateTimeReceived, SortDirection.Descending);
    view.Traversal = ItemTraversal.Shallow;
    bool moreItems = true;
    ItemId anchorId = null;
    while (moreItems)
    {
        try
        {
            FindItemsResults<Item> results = service.FindItems(folder, view);
            moreItems = results.MoreAvailable;
            if (moreItems && anchorId != null)
            {
                // Check the first result to make sure it matches
                // the last result (anchor) from the previous page.
                // If it doesn't, that means that something was added
                // or deleted since you started the search.
                if (results.Items.First<Item>().Id != anchorId)
                {
                    Console.WriteLine("The collection has changed while paging. Some results may be missed.");
                }
            }
            if (moreItems)
                view.Offset += pageSize;
            anchorId = results.Items.Last<Item>().Id;
            // Because you're including an additional item on the end of your results
            // as an anchor, you don't want to display it.
            // Set the number to loop as the smaller value between
            // the number of items in the collection and the page size.
            int displayCount = results.Items.Count > pageSize ? pageSize : results.Items.Count;
            for (int i = 0; i < displayCount; i++)
            {
                Item item = results.Items[i];
                Console.WriteLine("Subject: {0}", item.Subject);
                Console.WriteLine("Id: {0}\n", item.Id.ToString());
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception while paging results: {0}", ex.Message);
        }
    }
}
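Independently of paging, it also helps to honor the back-off hint Exchange returns when it throttles you. Below is a rough sketch built around the batched CreateItems call from the question; it assumes an EWS Managed API version recent enough to expose ServerBusyException and its BackOffMilliseconds property, and the helper name and retry count are made up for the example:

using System;
using System.Collections.Generic;
using Microsoft.Exchange.WebServices.Data;

// Hypothetical retry helper: saves one batch of messages and, if Exchange
// reports it is busy, waits for the server-suggested back-off before retrying.
static void SaveBatchWithBackOff(ExchangeService service,
                                 List<EmailMessage> batch,
                                 FolderId destinationFolderId,
                                 int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            service.CreateItems(batch, destinationFolderId,
                MessageDisposition.SaveOnly, SendInvitationsMode.SendToNone);
            return;
        }
        catch (ServerBusyException ex) when (attempt < maxAttempts)
        {
            // Exchange tells us how long to wait before trying again.
            System.Threading.Thread.Sleep(ex.BackOffMilliseconds);
        }
    }
}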