I'm still a bit new to Couchbase and iOS, and I'm running into a problem restarting my replications. Here are a few notes about the flow.
The backend is using custom authentication.
When the user logs in, a new session is created in the Sync Gateway and the session details are returned to the iOS device. The app then uses those credentials to set up a push and a pull replication (I've dropped the push replication for now while trying to debug this). The options on the replications are minimal:
let pull = self.database.createPullReplication(self.remoteDBURL);
pull.continuous = true;
pull.headers["uuid"] = "device-1";
pull.setCookieNamed(sessionName, withValue: sessionId, path: "/", expirationDate: cookieExpirationDate, secure: false);
pull.start();
self._pull = pull;
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(DataService.replicationChanged(_:)), name: kCBLReplicationChangeNotification, object: self._pull);
This works great and all the proper documents are synced to the device. For testing, I currently have the backend create cookies that only last about 5 minutes so I can exercise the cookie-refresh path. So, during the first few minutes, any doc I add to a channel the app is subscribed to is received by the app, and all is good.
About halfway into the life of the token, the backend is set up to return a 401 error, telling the app to use its current token to get a new one. So I have this in the replication change listener:
@objc public func replicationChanged(n: NSNotification) {
    let replication = n.object as! CBLReplication;
    let error = replication.lastError;
    if (error != nil) {
        print("last error is NOT nil");
        print("last error = \(error)");
        switch error!.code {
        case 401:
            self.updateReplicationSession();
        default:
            break;
        }
    }
}
Then the updateReplicationSession function looks like this:
... make an HTTP call to the getNewToken URL using the 'old' (soon-to-be-expired) token. The server is successfully returning this new session.
self._pull.setCookieNamed(newSessionName, withValue: newSessionId, path: "/", expirationDate: newExpirationDate, secure: false);
self._pull.restart();
...
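For reference, here is a fuller sketch of the refresh flow I've been experimenting with. fetchNewSession and oldSessionName are stand-ins for my own code, and I also try deleting the stale cookie first with deleteCookieNamed, which appears to be available in CBL iOS 1.3:

private func updateReplicationSession() {
    // fetchNewSession is a hypothetical wrapper around my HTTP call to the getNewToken URL.
    fetchNewSession { (newSessionName: String, newSessionId: String, newExpirationDate: NSDate) in
        // Stop the replication completely before swapping cookies.
        self._pull.stop();

        // Wait until the replicator actually reports Stopped before restarting.
        self.whenStopped(self._pull) {
            // Drop the stale cookie so the old and new session cookies aren't sent together.
            self._pull.deleteCookieNamed(self.oldSessionName);
            self._pull.setCookieNamed(newSessionName, withValue: newSessionId, path: "/",
                                      expirationDate: newExpirationDate, secure: false);
            self._pull.start();
        }
    }
}

// Crude polling helper; observing kCBLReplicationChangeNotification would work too.
private func whenStopped(repl: CBLReplication, block: () -> Void) {
    if repl.status == CBLReplicationStatus.Stopped {
        block();
    } else {
        let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(0.2 * Double(NSEC_PER_SEC)));
        dispatch_after(delay, dispatch_get_main_queue()) {
            self.whenStopped(repl, block: block);
        }
    }
}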
It is at this point that the syncing stops working. No errors are thrown that I can see, other than one CFNetwork internal error I received once. I can see in the server logs that the replication sends in the new session once... then everything just seems to hang for the replication: any new docs added to a channel it subscribes to are not received. I don't see anything in the Sync Gateway logs indicating what the problem is. However, I'm still pretty new to this... so there may be something. Additionally, I set up a function to run every few seconds and print the status of the replication, and it is stuck on ACTIVE.
I'm using Sync Gateway 1.3 and Couchbase Lite iOS 1.3. I was using version 1.2.1 and was having this problem... I was hoping upgrading to 1.3 would magically fix it. It didn't, but I'm not sure I should go back to 1.2.1.
I'm completely stumped on this. I've searched high and low and often seem to find an answer that fits the bill... but it still doesn't work.
I've tried delaying the updating of the session so that all in-flight replication calls 'fail' first. I've tried just calling start() on the replication, thinking that maybe the 401 killed the replication and restart() wouldn't do anything. I've called stop()... then waited a bit and called start(). Not sure what to do next.
Any help is appreciated, guys! Is it possible the local DB on the phone and the Sync Gateway have an unresolvable rev problem?
EDIT 1
The only way I can currently get it to work is to completely delete the local DB in the replication-changed function, recreate it, and then start a new replication. This works... unfortunately, though, I then have to broadcast a notification so that any table view that may be up reloads its query. This causes a refresh animation in the table views and isn't sustainable... but at least I can keep moving for now.
EDIT 2
I found out how to enable better logging in CBL on iOS, and here's an error I keep seeing after a token refresh / replication restart:
CBLWebSocketChangeTracker[0x7ff9bb53b5e0 iphone]: Connection error #8, retrying in 256.0 sec: PSWebSocketError[3, "Output stream end encountered"]
Thoughts?
EDIT 3
I changed things around a bit to get rid of the token refreshing. Now, when a user logs in from the iOS app, the backend creates a session with the Sync Gateway for that user and returns it, and the app starts a replication with that session. After the 5 minutes (the TTL used when creating the session), the next time the app tries to sync it gets a 401, stops the replication, and presents the login screen. Then, when the user logs in again, a new session is created, etc...
I found two things:
- Any time I added a doc to a channel that syncs with the app, the session expiration date would increase by about 20 seconds when the app synced. Is this normal behavior? The only way the user would get logged out on an expired token was if I didn't add any docs for long enough.
- The restarted replication still gets stuck. Here are the logs from the Sync Gateway and Xcode:
This is the last part of the Sync Gateway log when the session is expired, which sends the 404 to the app.
2016-08-23T21:11:51.089-05:00 Changes+: Sending seq:163 from channel jmoore2
2016-08-23T21:11:51.089-05:00 Changes+: MultiChangesFeed sending &{Seq:163 ID:un:jmoore2_116 Deleted:false Removed:{} Doc:map[] Changes:[map[rev:1-e775ef6713dc39f6d52d35cefb396fe3]] Err:<nil> allRemoved:false branched:false} (to jmoore2)
2016-08-23T21:11:51.089-05:00 Changes: MultiChangesFeed done (to jmoore2)
2016-08-23T21:11:51.089-05:00 HTTP+: #212: --> 200 OK (0.0 ms)
2016-08-23T21:11:51.459-05:00 HTTP: #216: GET /my_gateway/_session/3fa29222db286e8ec67a51e88b613ba4cd5cbf31 (ADMIN)
2016-08-23T21:11:51.459-05:00 HTTP: #216: --> 404 missing (0.2 ms)
2016-08-23T21:11:51.461-05:00 HTTP: #217: GET /my_gateway/_session/3fa29222db286e8ec67a51e88b613ba4cd5cbf31 (ADMIN)
2016-08-23T21:11:51.461-05:00 HTTP: #217: --> 404 missing (0.2 ms)
Then, when the user logs back in, here is the SG log for that:
2016-08-23T21:15:41.200-05:00 HTTP: #223: GET /my_gateway/_session/1a6105eaf2c91de320a47422041c655dd3d5c279 (ADMIN)
2016-08-23T21:15:41.201-05:00 HTTP+: #223: --> 200 (0.5 ms)
2016-08-23T21:15:41.203-05:00 HTTP: #224: GET /my_gateway/_local/78c229762074a95c864f7fecc03ce88f0ef6c499 (as jmoore2)
2016-08-23T21:15:41.203-05:00 HTTP+: #224: --> 200 (0.5 ms)
2016-08-23T21:15:41.371-05:00 Cache: Received #164 after 455ms ("user-login-info:jmoore2" / "48-ea0b2d9771fa2be1d76838f9e4d55081")
2016-08-23T21:15:41.371-05:00 Cache: #164 ==> channel "*"
2016-08-23T21:15:41.371-05:00 Changes+: Notifying that “mdatabase” changed (keys="{*}") count=31
And the Xcode log, when restarting the replication, immediately shows this:
21:15:41.195‖ Sync: CBLRestPuller[http://mycomp.local:8080/iphone/]: Reachability state = <mycomp.local>:reachable (30002), suspended=0
21:15:41.205‖ Sync: CBLRestPuller[http://mycomp.local:8080/iphone/]: Server is Couchbase Sync Gateway/1.3.0
21:15:41.205‖ Sync: CBLRestPuller[http://mycomp.local:8080/iphone/]: Replicating from lastSequence=162
21:15:41.205‖ Sync: CBLRestPuller[http://mycomp.local:8080/iphone/] starting ChangeTracker: mode=3, since=162
21:15:41.207‖ ChangeTracker: CBLWebSocketChangeTracker[0x7f97d8698190 iphone]: Starting...
21:15:41.207‖ Sync: CBLWebSocketChangeTracker[0x7f97d8698190 iphone]: GET //mycomp.local:8080/iphone/_changes?feed=websocket
21:15:41.208‖ ChangeTracker: CBLWebSocketChangeTracker[0x7f97d8698190 iphone]: Started... <http://mycomp.local:8080/iphone/_changes?feed=websocket>
21:15:41.211‖ CBLWebSocketChangeTracker[0x7f97d8698190 iphone]: Connection error #1, retrying in 2.0 sec: PSWebSocketError[3, "Output stream end encountered"]
And sometimes this error also shows up:
2016-08-23 21:01:15.232 MyApp[52094:36001215] 52094: CFNetwork internal error (0xc01a:/BuildRoot/Library/Caches/com.apple.xbs/Sources/CFNetwork_Sim/CFNetwork-758.3.15/Loading/URLConnectionLoader.cpp:289)
We are using HotChocolate 10.4.3 for our .NET Core service.
We are using the code-first approach.
All settings are in Startup.cs, exactly like their Star Wars example.
The HotChocolate website says the default timeout for a request is 30 seconds, but I found my application throwing a transaction error after 1 minute.
I want to increase that to 2 minutes.
Also, why does the server still execute everything even after the timeout exception?
I always see the transaction error at the end, after all my code has executed properly.
If everything is going to run properly, why even throw the error?
I'm still learning GraphQL. Please correct me if anything sounds incorrect.
In HotChocolate v10, you can set the execution timeout when you add the service in your Startup's ConfigureServices() method:
services.AddGraphQL(
    SchemaBuilder.New()
        .AddQueryType<Query>()
        .Create(),
    new QueryExecutionOptions { ExecutionTimeout = TimeSpan.FromMinutes(2) });
They have a good repo of examples here: https://github.com/ChilliCream/hotchocolate-examples
Documentation on the execution options: https://chillicream.com/docs/hotchocolate/v10/execution-engine/execution-options
EDIT: In v11, the syntax has changed:
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .SetRequestOptions(_ => new HotChocolate.Execution.Options.RequestExecutorOptions { ExecutionTimeout = TimeSpan.FromMinutes(10) });
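As for why the server still executes everything after the timeout: the execution engine signals the timeout through a CancellationToken rather than killing the request, so resolvers that never observe the token run to completion and the error only surfaces at the end. A minimal sketch of a resolver that honors the token (AppDbContext and Order are placeholders for your own types):

public class Query
{
    // HotChocolate injects a CancellationToken when a resolver declares one as a
    // parameter; it is cancelled when ExecutionTimeout elapses. Passing it along
    // lets the underlying work stop instead of running to completion.
    public async Task<List<Order>> GetOrders(
        [Service] AppDbContext db,
        CancellationToken cancellationToken)
    {
        return await db.Orders.ToListAsync(cancellationToken);
    }
}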
I have an ASP.NET MVC website, hosted on Windows Server 2012 R2 Standard, which uses KnockoutJS to display data in a grid. This server is dedicated to the process that I'm having trouble with; it does not serve any other requests.
An ajax call is made to a "GetRecords" action of a controller. This returns data for a couple of dozen records very quickly.
The user is able to make amendments to the data and submit them for update. The Knockout code makes another ajax call, this time posting the records. At this point the site "hangs" for a long time (over 10 minutes), but it does complete successfully and the updated data is persisted to a database. During the "hang time" the CPU for the IIS worker process hovers around the 50% mark.
I'm trying to figure out what's causing the delay. It seems that the delay happens before the first line of code of the controller action is reached. I've added trace statements to the action, and I can see that once the first line is executed, the action completes within a couple of seconds.
From IIS Manager, I've drilled into "Worker Processes" > "Current Requests" while the page is "hung". I can see that the State is listed as "ExecuteRequestHandler" and the Module Name is "ManagedPipelineHandler". There are no other "Current Requests" displayed.
Using the Chrome dev tools, I've captured the JSON being posted for the update; it is approx 4 MB in size.
I've ruled out the problem being caused by bandwidth because I've tested from a browser running locally (on the web server), and I get the same delay.
Also, when I post the same number of records on the same site hosted on my dev VM then it works fine - completes end-to-end in under 3 seconds.
Any suggestions on steps I can take to improve the performance of the post?
I have created a process dump of the IIS worker process when it is in the "hanging" state; this is available at: onedrive link
It seems that "Thread 28" is causing the issue, since it has a "Time spent in user mode" value of over 2 minutes. I requested the process dump about 2 minutes after making the HTTP post request from the website. The post did eventually complete OK after about 20 minutes.
I was able to work around this problem by bypassing the MVC model binding. The view model parameter (editBatchVm) that was passed into the controller method has been replaced. So, instead of:
public void ResubmitRejectedVouchersAsNewBatch(EditBatchViewModel editBatchVm)
{
I now have:
public void ResubmitRejectedVouchersAsNewBatch()
{
    // Read the raw request body and deserialize it with Json.NET (Newtonsoft.Json),
    // skipping the MVC model binder entirely.
    string requestData = "";
    using (var reader = new StreamReader(Request.InputStream))
    {
        requestData = reader.ReadToEnd();
    }
    EditBatchViewModel editBatchVm = JsonConvert.DeserializeObject<EditBatchViewModel>(requestData);
    // ... process editBatchVm as before ...
}
I am new to service workers and Workbox. I am currently using Workbox to precache my static asset files, which works fine, and I expect my other third-party URLs to be cached too at runtime, but that doesn't work until my second load of the page :(
Shown below is the code of my service worker; please note that I intentionally replaced my original link with abc.domain.com :)
workbox.routing.registerRoute(
  // get resources from any abc.domain.com/
  new RegExp('^https://abc.(?:domain).com/(.*)'),
  /*
   * Respond with a cached response if available, falling back to the network
   * request if it's not cached. The network request is then used to update the cache.
   */
  workbox.strategies.staleWhileRevalidate({
    cacheName: 'Bill Resources',
    plugins: [
      // maxEntries/maxAgeSeconds are options of the expiration plugin,
      // not of the strategy itself, so they must be wrapped in it.
      new workbox.expiration.Plugin({
        maxEntries: 60,
        maxAgeSeconds: 30 * 24 * 60 * 60, // 30 Days
      }),
    ],
  }),
);
workbox.routing.registerRoute(
  new RegExp('^https://fonts.(?:googleapis|gstatic).com/(.*)'),
  // serve from the network first; if not available, fall back to the cache
  workbox.strategies.networkFirst(),
);
workbox.routing.registerRoute(
  new RegExp('^https://use.(?:fontawesome).com/(.*)'),
  // serve from the network first; if not available, fall back to the cache
  workbox.strategies.networkFirst(),
);
I have cleared storage countless times and refreshed the cache storage from the Chrome developer tools, but the result is always the same: resources from my custom link, Google Fonts, and Font Awesome fail to be cached the first time. (I had attached screenshots of the console and the Cache Storage tab for the first and second loads.)
Please, I don't know what I am doing wrong or why it behaves like this.
Thanks in advance.
This is expected behaviour.
The way service workers get set up is that they have an install and an activate phase; installation happens whenever a new service worker is registered or an existing one updates.
A service worker will then activate when it's safe to do so (i.e. no windows are currently being "controlled" by a service worker).
Once a service worker is activated, it'll control any new pages.
What you are seeing is:
1. The page is loaded and registers a service worker.
2. The service worker precaches your files during its install phase.
3. The service worker activates but isn't controlling any pages.
4. You refresh the page; at this point the page is controlled, and requests go through the service worker (resulting in the caching on the second load).
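Note that even when a service worker activates and claims open pages early (see the next answer), requests made before it took control won't have gone through it. One common complement, sketched below for the page's own script (not the service worker), is to reload once when the page first becomes controlled:

if ('serviceWorker' in navigator) {
  let refreshing = false;
  navigator.serviceWorker.addEventListener('controllerchange', () => {
    if (refreshing) return; // guard against repeated reloads
    refreshing = true;
    window.location.reload(); // re-request everything through the now-controlling worker
  });
}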
The service worker will not cache anything until it has been activated and has taken control of the page, which by default only happens from the second load. To get caching on the first load, you have to tell the service worker to skip waiting for activation. You can do this with:
self.addEventListener('install', () => {
  // Skip the waiting phase so the service worker activates as soon as install finishes.
  self.skipWaiting();
  /* your code for pre-caching */
});
Once waiting is skipped, the service worker activates, but it still won't control pages that are already open, so it won't see their requests. To take control of them immediately, also add the following:
self.addEventListener('activate', (event) => {
  // Take control of any open, uncontrolled pages right away.
  event.waitUntil(self.clients.claim());
});
This starts caching on the first hit itself.
I was wondering why the resume feature works if you navigate away from an upload, but does not work when you use the retry link.
I am using the S3 uploader, and here are my relevant settings:
retry: {
enableAuto: true
},
resume: {
enabled: true
},
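For context, here is roughly how the uploader itself is constructed; the element and endpoint below are placeholders, and chunking is enabled since resume only applies to chunked uploads:

var uploader = new qq.s3.FineUploader({
    element: document.getElementById('uploader'),      // placeholder container
    request: {
        endpoint: 'https://my-bucket.s3.amazonaws.com' // placeholder bucket endpoint
    },
    chunking: {
        enabled: true // resume only applies to chunked uploads
    },
    retry: {
        enableAuto: true
    },
    resume: {
        enabled: true
    }
});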
Now, when I navigate away from the page during an upload or close the browser, then come back, I can resume the upload by starting a new upload of the same file.
However, if I deliberately disable the network adapter and let the upload error out, then turn the network back on, I would expect to be able to hit retry and have it start from where it stopped. It does not; it starts back at the beginning.
Would someone please enlighten me?
Fine Uploader will always either resume or retry starting with the last failed chunk. My tests show that this is functioning correctly with Fine Uploader S3 5.0.8. If you are seeing something different, please open a bug case in the GitHub project's issue tracker, with all of your code, a detailed description of the issue, and the reproduction steps.
I downloaded the red5-recorder (http://www.red5-recorder.com/), but it fails to let me start recording. After debugging, I found that the NetConnection it creates, which is needed to record to a media server, does not fire a NetStatusEvent, so it essentially fails silently. I have implemented the connection with the following minimal working example:
trace("make net connection");
nc = new NetConnection();
nc.client = { onBWDone: function():void{ trace("bandwidth check done.") } };
trace("add event listener");
nc.addEventListener(NetStatusEvent.NET_STATUS, function(event:NetStatusEvent):void {
    trace("handle");
});
trace("connect!");
nc.connect("rtmp://localshost/oflaDemo/test/");
trace("connect done");
The output of this piece of code is:
make net connection
add event listener
connect!
connect done
The ActionScript API states that the connect call always fires such an event:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetConnection.html#includeExamplesSummary
Moreover, the NetConnection is not 'connected' (a state of the NetConnection object) 10 seconds after the call. I also took a look at this: NetConnect fails silently in Flash when called from SilverLight. But the fix suggested by the author there, swapping rtmp and http in the connection URI, does not work. I also tested the URI, and in fact the exact same code snippet, in a personal project, where it did work. I just cannot seem to find why connecting to a media server fails silently in the red5-recorder project.
The awkward part is that if I pass some random string as a connection URI, still nothing happens (no event, no exception, no crash). Also, not setting nc.client before nc.connect(), which in my experience has caused exceptions, did not cause any here.
Any suggestions are welcome.
You are setting the address to localshost instead of localhost.
nc.connect("rtmp://localshost/oflaDemo/test/");
Correct address:
nc.connect("rtmp://localhost/oflaDemo/test/");