UILocalNotification makes SpringBoard crash

if (application.scheduledLocalNotifications?.count)! > 0 {
    var notificationArray = application.scheduledLocalNotifications!
    let cleanCount = notificationArray.count - 1
    for index in 0 ..< cleanCount {
        let notification = notificationArray[index]
        application.cancelLocalNotification(notification)
        notificationArray.remove(at: index)
    }
}
I create fewer than 64 local notifications in the application(_:didFinishLaunchingWithOptions:) method, and I want to remove these local notifications, but the code above crashes when my application finishes launching.
The device console log reports a raised signal. Below is the log:
SpringBoard(UserNotificationsServer)[57] : Load 2 pending notification dictionaries
SpringBoard(UserNotificationsServer)[57] : Load pending 2 notifications
DDCharge[5570] : [Bugly] Fatal signal(5) raised.
SpringBoard(UserNotificationsServer)[57] : Remove 1 pending notifications by match
SpringBoard(FrontBoard)[57] : checking for prefetched object for key=PendingNotificationRecords bundleID=
SpringBoard(FrontBoard)[57] : no object found
SpringBoard(FrontBoard)[57] : fetching object for key=PendingNotificationRecords bundleID= checkPrefetch=NO (synchronously)
SpringBoard(FrontBoard)[57] : retrieved object from store:
SpringBoard(UserNotificationsServer)[57] : Load 2 pending notification dictionaries
SpringBoard(UserNotificationsServer)[57] : Did not remove all expected pending notifications
SpringBoard(FrontBoard)[57] : checking for prefetched object for key=SBApplicationLocalNotificationsLastFireDate bundleID=
SpringBoard(FrontBoard)[57] : found object 505310400
SpringBoard(UserNotificationsServer)[57] : Load last local notification fire date: 2017-01-05 20:00:00.000 +0800
SpringBoard(UserNotificationsServer)[57] :Save pending 2 notifications to application data store
SpringBoard(FrontBoard)[57] : checking for prefetched object for key=PendingNotificationRecords bundleID=
SpringBoard(FrontBoard)[57] : no object found
SpringBoard(FrontBoard)[57] : fetching object for key=PendingNotificationRecords bundleID= checkPrefetch=NO (synchronously)
SpringBoard(FrontBoard)[57] : retrieved object from store:
SpringBoard(UserNotificationsServer)[57] : Load 2 pending notification dictionaries
SpringBoard(UserNotificationsServer)[57] :Update timers for 2 pending notifications (monitoring: 0)
SpringBoard(UserNotificationsServer)[57] : Invalidate persistent timer
SpringBoard(UserNotificationsServer)[57] : Not scheduling local notifications (user notifications: 0, requires local notifications: 0,
SpringBoard(UserNotificationsServer)[57] : Could not load data at /var/mobile/Library/SpringBoard/PushStore/*.pushstore
SpringBoard(UserNotificationsServer)[57] : Update regions for 2 pending notifications
SpringBoard(UserNotificationsServer)[57] : Saving notification list at /var/mobile/Library/SpringBoard/PushStore/*.pushstore with 0 items
DDCharge[5570] <Notice>: [Bugly] Trapped fatal signal 'SIGTRAP(5)'
(
0 DDCharge 0x000000010018fbb8 0x00000001000ac000 + 932792,
1 DDCharge 0x000000010018f9f8 0x00000001000ac000 + 932344,
2 DDCharge 0x000000010018bae0 0x00000001000ac000 + 916192,
3 UIKit 0x000000018a81c6a4 0x000000018a79b000 + 530084,
4 UIKit 0x000000018aa2ca98 0x000000018a79b000 + 2693784,
5 UIKit 0x000000018aa32808 0x000000018a79b000 + 2717704,
6 UIKit 0x000000018aa47104 0x000000018a79b000 + 2801924,
7 UIKit 0x000000018aa2f7ec 0x000000018a79b000 + 2705388,
8 FrontBoardServices 0x00000001864cb92c 0x0000000186491000 + 239916,
9 FrontBoardServices 0x00000001864cb798 0x0000000186491000 + 239512,
10 FrontBoardServices 0x00000001864cbb40 0x0000000186491000 + 240448,
11 CoreFoundation 0x00000001848a2b5c 0x00000001847c

Elasticsearch curl error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

I am using the requests library to connect to Elasticsearch for fetching data. I have
26 indices,
spread across 2 nodes,
with 1st node having 16GB RAM / 8 vCPU and the
2nd 8GB RAM / 4 vCPU.
All my nodes are in AWS EC2.
In all I have around 200 GB of data. I am primarily using the database for aggregation exercises.
A typical data record looks like this:
SITE_ID  DATE        HOUR  MAID
123      2021-05-05  16    m434
I am using the following Python 3 function to send the request and get the data:
def elasticsearch_curl(uri, json_body='', verb='get'):
    headers = {'Content-Type': 'application/json'}
    resp_text = None
    try:
        resp = requests.get(uri, headers=headers, data=json_body)
        resp_text = json.loads(resp.text)
    except Exception as error:
        print('\nelasticsearch_curl() error:', error)
    return resp_text
## Variables
tabsite = {'r123': [], 'r234': []}   # names of indices (keys; iterated below)
siteid = [123, 124, 125]             # site IDs
I am using the following code to get the data:
for key, value in tabsite.items():
    k = key.replace('_', '')
    if es.indices.exists(index=k):
        url = "http://localhost:9200/" + str(k) + "/_search"
        jb1 = '{"size":0,"query": {"bool" : {"filter" : [{"terms" : {"site_id": ' + str(siteid) + '}},{"range" : {"date" : ' \
              '{"gte": "' + str(st) + '","lte": "' + str(ed) + '"}}}]}}, "aggs" : {"group_by" : {"terms": {"field": "site_id","size":100},"aggs" : {"bydate" : {"terms" : ' \
              '{"field":"date","size": 10000},"aggs" : {"uv" : {"cardinality": {"field": "maid"}}}}}}}}'
        try:
            r2 = elasticsearch_curl(url, json_body=jb1)
            k1 = r2.get('aggregations', {}).get('group_by', {}).get('buckets')
            print(k1)
        except:
            pass
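As an aside, hand-concatenating the jb1 string is easy to get wrong (quoting, how str(siteid) renders). A minimal sketch of building the same aggregation body with json.dumps instead, using the field names from the question:

```python
import json

def build_agg_body(site_ids, start_date, end_date):
    """Build the question's filtered aggregation as a dict and serialize it,
    rather than concatenating JSON fragments by hand."""
    body = {
        "size": 0,
        "query": {"bool": {"filter": [
            {"terms": {"site_id": site_ids}},
            {"range": {"date": {"gte": start_date, "lte": end_date}}},
        ]}},
        "aggs": {"group_by": {
            "terms": {"field": "site_id", "size": 100},
            "aggs": {"bydate": {
                "terms": {"field": "date", "size": 10000},
                "aggs": {"uv": {"cardinality": {"field": "maid"}}},
            }},
        }},
    }
    return json.dumps(body)

print(build_agg_body([123, 124, 125], "2021-05-01", "2021-05-05"))
```

This also guarantees the site_id list serializes as valid JSON regardless of its Python type.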
The above code returns the data from r123, which holds 18 GB of data, but fails for r234, which holds 55 GB. I am getting the following error:
elasticsearch_curl() error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
I have tried the following:
Running the same code on a machine holding only the r234 index, with around 45 GB of data. It worked.
Increasing the RAM of the 2nd production machine from 8 GB to 16 GB. It still failed.
From what I found while searching here, I understood I need to close the connection. I am not sure how.
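One way to "close the connection" per request, sketched here with the requests library (the URL and body are illustrative, and prepare() builds the request without sending anything over the wire): send a 'Connection: close' header and bound the wait with an explicit timeout.

```python
import requests

# Prepare (but do not send) a search request that asks the server to close
# the connection after responding, instead of keeping it alive.
req = requests.Request(
    "GET",
    "http://localhost:9200/r234/_search",
    headers={"Content-Type": "application/json", "Connection": "close"},
    data='{"size": 0}',
)
prepared = req.prepare()
print(prepared.headers["Connection"])   # -> close

# When actually sending, pass a (connect, read) timeout so a long-running
# aggregation fails fast instead of hanging on a dropped socket:
# resp = requests.Session().send(prepared, timeout=(5, 300))
```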
I have the following questions:
How do I keep my Elasticsearch nodes stable, without them shutting down automatically?
How do I get rid of the above error, which shuts down one or both of the nodes?
Is there an optimal configuration ratio of data volume : number of nodes : amount of RAM/vCPUs?

Error when running a regression model on panel data with the plm package

I have a panel covering 27 years (global suicide rates with temperature), but I get this error when I run a regression.
I use the following code:
library(plm)
install.packages("dummies")
library(dummies)

data2 <- cbind(mydata, dummy(mydata$year, sep = "_"))
suicide_fe <- plm(suiciderate ~ dmt, data2,
                  index = c("country", "year"), model = "within")
summary(suicide_fe)
But I got this error:
Error in pdim.default(index[[1]], index[[2]]) :
  duplicate couples (id-time)
In addition: Warning messages:
1: In pdata.frame(data, index) :
  duplicate couples (id-time) in resulting pdata.frame
  to find out which, use e.g. table(index(your_pdataframe), useNA = "ifany")
2: In is.pbalanced.default(index[[1]], index[[2]]) :
  duplicate couples (id-time)
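The message means at least one (country, year) pair appears more than once, so plm cannot build the panel index. The check the warning suggests, table(index(your_pdataframe), useNA = "ifany"), amounts to counting (id, time) couples; a minimal sketch of that count in Python with made-up rows:

```python
from collections import Counter

# Hypothetical (country, year) index pairs; one couple is duplicated.
rows = [("AUT", 1990), ("AUT", 1991), ("BEL", 1990), ("AUT", 1991)]

# Count each (id, time) couple; any count > 1 is what breaks plm's index.
dupes = {pair: n for pair, n in Counter(rows).items() if n > 1}
print(dupes)   # -> {('AUT', 1991): 2}
```

Once the offending couples are found, the fix is to aggregate or drop them before calling plm.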

Grails use Promises to answer AJAX calls

I'm trying to leverage multithreading in Grails to handle AJAX calls. The webpage fires an AJAX call, the controller allocates a new thread to do the job, and when it finishes the result is returned and rendered. Here's my attempt. It fails: seemingly the second AJAX call is never fired.
In the JavaScript in the GSP, there are two AJAX calls; the first triggers the second when complete.
function asynchroCrawl() {
    var jsonData = $.ajax({
        url: "${createLink(controller:'environment', action:'asynchroCrawl')}",
        dataType: "json",
        async: true
    }).done(function(jsonData) {
        console.log("Crawler completed");
        crawlFinished = true;
        asynchroWordCloud();
    });
}

function asynchroWordCloud() {
    var jsonData = $.ajax({
        url: "${createLink(controller:'environment', action:'asynchroKeywords')}",
        dataType: "json",
        async: false
    }).done(function(jsonData) {
        keywordFinished = true;
    });
}
In the controller (the other action is omitted):
def asynchroCrawl = {
    User u = session.getAttribute("user");
    FrameworkController.crawlStarted = true;
    println "Crawling task started.";
    def p = task {
        NetworkGenerator.formNetwork(u);
    }
    p.onError { Throwable err -> println "An error occured \n${err.message}" }
    p.onComplete { result ->
        println "User crawl complete.";
        FrameworkController.crawlComplete = true;
        render u as JSON;
        return;
    }
}
NetworkGenerator is just a normal class that runs some job and updates the User object with User.withTransaction { u.merge() }.
My understanding is that a Promise is created to handle my job, and when it completes, the data should be returned to the webpage, answering the AJAX call. So the .done() callback should also fire, leading the flow to the second AJAX call. However, done() is never triggered: I see no "Crawler completed" printed in my browser console.
In my IDE console I do see "User crawl complete.", indicating the promise has completed. But an exception follows, saying:
2015-12-10 21:05:47,540 [Actor Thread 7] ERROR gpars.LoggingPoolFactory - Async execution error: null
Message: null
Line | Method
->> 1547 | notifyAttributeAssigned in org.apache.catalina.connector.Request
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 1538 | setAttribute in ''
| 541 | setAttribute . . . . . in org.apache.catalina.connector.RequestFacade
| 288 | setAttribute in org.apache.catalina.core.ApplicationHttpRequest
| 431 | storeGrailsWebRequest . in org.codehaus.groovy.grails.web.util.WebUtils
| 61 | doCall in org.codehaus.groovy.grails.plugins.web.async.WebRequestPromsiseDecorator$_decorate_closure1
| -1 | call . . . . . . . . . in ''
| 61 | doCall in org.grails.async.factory.gpars.GparsPromise$_onComplete_closure1
| -1 | call . . . . . . . . . in ''
| 62 | run in groovyx.gpars.dataflow.DataCallback$1
Could anybody please tell me what I'm doing wrong here? I'd really appreciate your help!
Each servlet request is already served on its own thread, so there's no need to spawn another one just to answer the call. You can still spawn a new thread in an action, either with new Thread().start() or with GPars; it doesn't really matter.
What matters is that you have to wait (with thread.join() or the like) until the thread completes, so that your action can deliver the result your JS code expects.
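The answer's point, sketched in Python rather than Groovy (all names here are illustrative, not Grails API): the request handler may offload work to another thread, but it must block on the result before responding, because once the handler returns, the response is gone.

```python
from concurrent.futures import ThreadPoolExecutor

def crawl(user):
    # Stand-in for NetworkGenerator.formNetwork(u): some slow background job.
    return {"user": user, "links": 42}

def handle_request(user):
    # Offload the job, but wait for it before "rendering" the response.
    # Returning from the handler before the job is done is the bug above:
    # the onComplete callback fires after the request is already over.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(crawl, user)
        return future.result()   # blocks until the job completes

print(handle_request("alice"))
```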

Parse "key cannot be nil" error on [PFObject saveInBackground] (Cocoa)

I'm trying out the Parse SDK in an existing Mac OS X application. I followed the setup steps in the Parse Quickstart guide, including adding an import for the Parse library to my AppDelegate .m file and calling:
[Parse setApplicationId:kParseApplicationID clientKey:kParseClientKey];
in the applicationDidFinishLaunching method. The two constants I use are defined in a Constants file which is also imported.
Towards the end the guide says: "Then copy and paste this code into your app, for example in the viewDidLoad method (or inside another method that gets called when you run your app)"
So I imported the Parse header file into my main view controller .m file and copied and pasted their code into its viewDidLoad method:
PFObject *testObject = [PFObject objectWithClassName:@"TestObject"];
testObject[@"foo"] = @"bar";
[testObject saveInBackground];
When this runs, I hit an exception whose message is "setObjectForKey: key cannot be nil" on that last line, not on the previous line where I'm actually setting the object for the key. Furthermore, if I stop on the previous line and po testObject, testObject.allKeys, or testObject[@"foo"], they all show non-nil values for the key "foo". And still furthermore, if I move this code to the end of the AppDelegate's applicationDidFinishLaunching method, it executes without any errors, and the TestObject shows up in my Parse application dashboard.
Can somebody tell me what I'm doing wrong? I'd really like to explore further, but this is a real blocker for me.
Here's the console log from a slightly more involved OS X app, also crashing in [PFObject saveInBackgroundWithBlock:]:
2015-06-03 16:55:56.046 TestApp [15795:15954566] An uncaught exception was raised
2015-06-03 16:55:56.046 TestApp [15795:15954566] *** setObjectForKey: key cannot be nil
2015-06-03 16:55:56.046 TestApp [15795:15954566] (
0 CoreFoundation 0x00007fff8fb0103c __exceptionPreprocess + 172
1 libobjc.A.dylib 0x00007fff978e476e objc_exception_throw + 43
2 CoreFoundation 0x00007fff8f9e7c66 -[__NSDictionaryM setObject:forKey:] + 1174
3 ParseOSX 0x000000010011adbb __74-[PFMultiProcessFileLockController beginLockedContentAccessForFileAtPath:]_block_invoke + 129
4 libdispatch.dylib 0x000000010026cd43 _dispatch_client_callout + 8
5 libdispatch.dylib 0x000000010026e0b1 _dispatch_barrier_sync_f_invoke + 348
6 ParseOSX 0x000000010011ad15 -[PFMultiProcessFileLockController beginLockedContentAccessForFileAtPath:] + 127
7 ParseOSX 0x00000001000f61ea +[PFObject(Private) _objectFromDataFile:error:] + 207
8 ParseOSX 0x000000010010f231 +[PFUser(Private) _getCurrentUserWithOptions:] + 611
9 ParseOSX 0x00000001000fc4bd -[PFObject(Private) saveAsync:] + 118
10 ParseOSX 0x00000001000e1d25 -[PFTaskQueue enqueue:] + 188
11 ParseOSX 0x00000001000ff06b -[PFObject saveInBackground] + 121
12 ParseOSX 0x00000001000ff270 -[PFObject saveInBackgroundWithBlock:] + 49
13 TestApp 0x0000000100001827 +[SBTParseTranslation saveDBObjectToParse:] + 183
14 TestApp 0x0000000100031fb4 -[SWBMainWindowViewController showRecordForItem:] + 3124
15 TestApp 0x0000000100031268 -[SWBMainWindowViewController showRecordForID:] + 184
16 TestApp 0x0000000100031080 -[SWBMainWindowViewController finishLoad] + 448
17 TestApp 0x0000000100030eb1 -[SWBMainWindowViewController loadData] + 97
18 TestApp 0x0000000100030dd5 -[SWBMainWindowViewController viewDidLoad] + 725
This was caused by a case of confusion on my part. I work mostly on iOS projects, where applicationDidFinishLaunching can typically be counted on to have run before view controllers load. Apparently that isn't the case in OS X apps.
In short, I was calling the PFObject save methods before [Parse setApplicationId:clientKey:] had been called.
In my case, iOS 7 was crashing in saveInBackground because the Podfile contained:
platform :ios, '8.0'
while the deployment target was 7.0. I replaced it with
platform :ios, '7.0'
then cleaned and built again.

Kohana 3.2.0, logging works incorrectly - INFO as DEBUG and ALERT as CRITICAL

I use Kohana 3.2.0, and when I log from my code as shown below, the output written to the log file is not what I expect. The message content is written correctly, but the logging level is wrong. This works fine for all logging levels except INFO and ALERT: INFO is written as DEBUG, and ALERT as CRITICAL.
In the controller -
Log::instance()->add(Log::INFO, 'The match found is '.$matches[0]);
In the log file -
2013-03-25 11:48:26 --- DEBUG: The match found is fruits
The \system\classes\kohana\log.php file defines these values:
const EMERGENCY = LOG_EMERG;   // 0
const ALERT     = LOG_ALERT;   // 1
const CRITICAL  = LOG_CRIT;    // 2
const ERROR     = LOG_ERR;     // 3
const WARNING   = LOG_WARNING; // 4
const NOTICE    = LOG_NOTICE;  // 5
const INFO      = LOG_INFO;    // 6
const DEBUG     = LOG_DEBUG;   // 7
const STRACE    = 8;
You're most likely seeing this behaviour on Windows. This is because Windows has fewer log levels; see PHP bug #18090.
On Windows the log levels are mapped as follows:
LOG_EMERG => critical
LOG_ALERT => critical
LOG_CRIT => critical
LOG_ERR => error
LOG_WARNING => warning
LOG_NOTICE => debug
LOG_INFO => debug
LOG_DEBUG => debug
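For illustration only (a sketch, not Kohana or PHP code), the collapsed Windows mapping above as a lookup table, which makes it obvious why LOG_INFO surfaces as DEBUG and LOG_ALERT as CRITICAL:

```python
# The Windows syslog-level collapse described above, as a plain dict.
WINDOWS_LEVEL_MAP = {
    "LOG_EMERG":   "critical",
    "LOG_ALERT":   "critical",
    "LOG_CRIT":    "critical",
    "LOG_ERR":     "error",
    "LOG_WARNING": "warning",
    "LOG_NOTICE":  "debug",
    "LOG_INFO":    "debug",
    "LOG_DEBUG":   "debug",
}

print(WINDOWS_LEVEL_MAP["LOG_INFO"])   # -> debug
print(WINDOWS_LEVEL_MAP["LOG_ALERT"])  # -> critical
```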
