I worked on a promises project earlier this year in Pharo Smalltalk.
The idea was to achieve the behavior below:
([ 30 seconds wait. 4 ] promiseValue) then: [ :a | Transcript crShow: a ].
This means that the promise will wait in the background for 30 seconds and then print to the Transcript. It should not freeze the Pharo user interface.
My implementation below freezes the user interface. Why?
The class Promise that implements the promise behavior:
Object subclass: #Promise
    instanceVariableNames: 'promiseValue promiseError promiseLock'
    classVariableNames: ''
    package: 'META-Project-[pgakuo]'
Methods inside class Promise
doesNotUnderstand: aMessage
    ^ self value
        perform: aMessage selector
        withArguments: aMessage arguments
then: aBlock
    promiseLock isSignaled
        ifTrue: [ ^ self ].
    promiseLock wait.
    promiseError
        ifNotNil: [ promiseError
                privHandlerContext: thisContext;
                signal ].
    aBlock value: promiseValue.
    self value: aBlock
then: aBlock catch: anotherBlock
    promiseLock isSignaled
        ifFalse: [ promiseLock wait.
            promiseError ifNotNil: [ anotherBlock value: promiseError ].
            promiseValue ifNotNil: [ aBlock value: promiseValue. self value: aBlock ] ]
value
    promiseLock isSignaled ifFalse: [ promiseLock wait ].
    promiseError
        ifNotNil: [ promiseError
                privHandlerContext: thisContext;
                signal ].
    ^ promiseValue
value: aBlock
    promiseLock := Semaphore new.
    [ [ [ promiseValue := aBlock value ]
        on: Error
        do: [ :err | promiseError := err ] ]
            ensure: [ promiseLock signal ] ] fork
And one method added to BlockClosure to make closures use the Promise behavior:
promiseValue
    ^ Promise new value: self
A block is passed to an instance of Promise and is executed by Promise>>value:, which uses fork to perform the work in the background. But it does not seem to work as desired.
When working in a playground you'll be working within the UI process. Hence, you are effectively suspending the UI process with your example. Try this:
[ ([ 30 seconds wait. 4 ] promiseValue) then: [ :a |
    Transcript crShow: a ] ] forkAt: Processor userBackgroundPriority.
Edit
As there's an explicit requirement for the original expression to not lock the UI, what you should do is:
First, do not override #doesNotUnderstand:. Then you have a choice:
1. Always fork when evaluating a promise. This will incur an overhead due to process scheduling and process creation. You will also lose the context of the original process unless you explicitly save it (which costs memory and incurs a performance penalty).
2. Only fork if the current process is the UI process. Checking whether the current process is the UI process is simple and fast. It's not something you would typically do, but for your case I'd recommend this approach.
I recommend implementing a class-side method for Promise, e.g. Promise class>>value:. This will allow you to isolate this specific case from the rest of your implementation, e.g.:
value: aBlock
    | instance |
    instance := self new.
    self isUIProcess
        ifTrue: [ [ instance value: aBlock ] forkAt: Processor userBackgroundPriority ]
        ifFalse: [ instance value: aBlock ].
    ^ instance
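The isUIProcess check isn't shown above; a minimal class-side sketch for Pharo, assuming UIManager default uiProcess answers the current UI process:
isUIProcess
    "assumption: UIManager default uiProcess answers the Morphic UI process"
    ^ Processor activeProcess == UIManager default uiProcess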
I solved the freezing problem as follows:
then: aBlock
    promiseLock isSignaled
        ifFalse: [ promiseLock wait.
            promiseValue ifNotNil: [ aBlock value: promiseValue ] ] fork.
And the code below, run in a playground, does not freeze the UI:
[ 12 seconds wait. 12 ] promiseValue then: [ :a | Transcript crShow: a / 2 ]
It prints 6 after 12 seconds, without freezing the UI.
We have an Until loop in an ADF v2 (Azure Data Factory) pipeline.
The time it takes to stop/terminate once the expression condition is met seems to correlate with the length of time the Until loop takes to complete its activities.
This particular Until loop performs a lot of activities and can take anywhere between 90 and 120 minutes to complete. So it takes almost as long to end/terminate (break out of the loop).
If I "hack" it so that it only performs a handful of activities it will quickly end and break once it's finished and the expression to terminate is met.
It's like a spinning wheel that keeps spinning even after the power is turned off. The momentum that was built up while connected takes a while to slow down and eventually stop.
Is this a known issue? How can I troubleshoot the exact cause, or fix it?
Incorrect usage of a nested If/Switch activity inside the Until loop could cause this.
Here is an Until component with some activities after it:
[screenshot: Until activity followed by other activities]
Inside the Until there are some activities.
Correct:
[screenshot: activities inside the Until, correct dependency chain]
Incorrect (slow to end/terminate):
[screenshot: activities inside the Until, incorrect dependency chain]
Why?
In the incorrect case, the last activity, If waiting, depends on three activities. Its behavior is perhaps counterintuitive: an activity only runs once all of its dependsOn conditions are met, so the extra dependencies can keep each iteration from finishing promptly.
// pay attention to "dependsOn"
{
    "name": "If waiting",
    "type": "IfCondition",
    "dependsOn": [
        {
            "activity": "Set loop_waiting_refresh_status to True",
            "dependencyConditions": [ "Succeeded" ]
        },
        {
            "activity": "WeChatEP_Notifier Info Get new bearer",
            "dependencyConditions": [ "Succeeded" ]
        },
        {
            "activity": "Set loop_waiting_refresh_status to False",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    ...
}
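For illustration only, a sketch of a corrected dependsOn where If waiting depends on just one upstream activity; which activity it should actually depend on is specific to the pipeline:
{
    "name": "If waiting",
    "type": "IfCondition",
    "dependsOn": [
        {
            "activity": "Set loop_waiting_refresh_status to True",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    ...
}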
So I adapted some of the kernel/sched/rt.c code to write my own simple CPU scheduler, and I'm getting a NULL pointer dereference exception when I try to acquire a lock. This is despite me printk()'ing all of the relevant pointers and seeing that they're not NULL.
// Snippet from my adaptation of update_curr_rt()
// wrr_rq is a struct wrr_rq *
printk("Before loop, wrr_rq pointer is %p\n", wrr_rq);
printk("Before loop, &wrr_rq->wrr_runtime_lock is %p\n", &wrr_rq->wrr_runtime_lock);

for_each_sched_wrr_entity(wrr_se) {
	printk("1\n");
	wrr_rq = wrr_rq_of_se(wrr_se);
	printk("2\n");
	raw_spin_lock(&wrr_rq->wrr_runtime_lock);
	printk("3\n");
[ 263.595176] Before loop, wrr_rq is 00000000aebb4d6d
[ 263.596283] Before loop, &wrr_rq->wrr_runtime_lock is 0000000015dee87f
[ 263.597764] 1
[ 263.598141] wrr_rq_of_se: called
[ 263.598888] 2
[ 263.599268] BUG: kernel NULL pointer dereference, address: 0000000000000068
[ 263.600836] #PF: supervisor write access in kernel mode
[ 263.602027] #PF: error_code(0x0002) - not-present page
...
[ 263.656134] RIP: 0010:_raw_spin_lock+0x7/0x20
I've printed all the relevant pointers and seen they're not NULL (and have values quite a bit above 0), but I still get this exception. I tried using the Elixir browser to see what is happening with the raw_spin_lock() macro, and it doesn't seem like anything crazy is happening...
In addition, the runqueue lock is already held when this code is called (the runqueue lock is acquired by task_sched_runtime()).
Any thoughts appreciated.
Thanks.
Credit to @0andriy: it turns out that kernel NULL pointers printed with %p get hashed to some other unique value that may not look NULL, and so when I printed things with %px I saw they were in fact NULL.
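For illustration, a minimal sketch of the difference, assuming a kernel where %p hashing applies and NULL pointers are not yet special-cased (print_demo is a made-up function):
#include <linux/printk.h>

static void print_demo(void)
{
	void *p = NULL;

	/* %p hashes the value, so NULL can print as an opaque non-zero token */
	printk("hashed: %p\n", p);
	/* %px prints the raw pointer value: 0000000000000000 for NULL */
	printk("raw:    %px\n", p);
}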
I'm working my way to becoming more familiar with Seaside on Dolphin. I have successfully completed the todo app. Now I have started my own app, using the todo app as a guide.
I am getting a walkback (see below) at session start. I have set my app up similarly to the todo app. One thing I do notice is that when I go back into the Seaside config, the root class says "a JCBYCBi..." rather than "JCBYCBi...", which seems to say an INSTANCE is in the config rather than a CLASS.
Any help welcome,
John
decoration
    ^ decoration contents    "<== fails here: decoration isNil"
addDecoration: aDecoration
    "Add aDecoration to the receivers decoration chain. Answer the added decoration."

    | previous current |
    previous := nil.
    current := self decoration.
    [ current ~~ self and: [ self decoration: current shouldWrap: aDecoration ] ] whileTrue: [
        previous := current.
        current := current next ].
    aDecoration setNext: current.
    previous isNil
        ifTrue: [ self decoration: aDecoration ]
        ifFalse: [ previous setNext: aDecoration ].
    ^ aDecoration
createRoot
    ^ self rootDecorationClasses
        inject: self rootClass new
        into: [ :component :decorationClass |
            component
                addDecoration: decorationClass new;
                yourself ]
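If an instance really has ended up registered as the application root, re-registering the class itself should make the config show "JCBYCBi..." again rather than "a JCBYCBi...". A minimal sketch using Seaside's WAAdmin, where MyRootComponent and 'myapp' are placeholders for the actual class and path:
"register the CLASS, not an instance, as the application root;
MyRootComponent and 'myapp' are placeholders"
WAAdmin register: MyRootComponent asApplicationAt: 'myapp'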
What could be the reasons for the following message:
BUG: spinlock lockup suspected on CPU#0, sh/11786
lock: kmap_lock+0x0/0x40, .magic: dead4ead, .owner: sh/11787, .owner_cpu: 1
BUG: spinlock lockup suspected on CPU#0, sh/11786
This indicates that CPU#0 is locked up, and the thread/process is sh (or was started by sh, I am not sure). You should have a look at the stack trace info dumped by the kernel. For example:
127|uid=0 gid=1007#nutshell:/var # [ 172.285647] BUG: spinlock lockup on CPU#0, swapper/0, 983482f0
[ 172.291523] [<8003cb44>] (unwind_backtrace+0x0/0xf8) from [<801853e4>] (do_raw_spin_lock+0x100/0x164)
[ 172.300768] [<801853e4>] (do_raw_spin_lock+0x100/0x164) from [<80350508>] (_raw_spin_lock_irqsave+0x54/0x60)
[ 172.310618] [<80350508>] (_raw_spin_lock_irqsave+0x54/0x60) from [<7f3cf4a0>] (mlb_os81092_interrupt+0x18/0x68 [os81092])
[ 172.321636] [<7f3cf4a0>] (mlb_os81092_interrupt+0x18/0x68 [os81092]) from [<800abee0>] (handle_irq_event_percpu+0x50/0x184)
[ 172.332781] [<800abee0>] (handle_irq_event_percpu+0x50/0x184) from [<800ac050>] (handle_irq_event+0x3c/0x5c)
[ 172.342622] [<800ac050>] (handle_irq_event+0x3c/0x5c) from [<800ae00c>] (handle_level_irq+0xac/0xfc)
[ 172.351767] [<800ae00c>] (handle_level_irq+0xac/0xfc) from [<800ab82c>] (generic_handle_irq+0x2c/0x40)
[ 172.361090] [<800ab82c>] (generic_handle_irq+0x2c/0x40) from [<800552e8>] (mx3_gpio_irq_handler+0x78/0x140)
[ 172.370843] [<800552e8>] (mx3_gpio_irq_handler+0x78/0x140) from [<800ab82c>] (generic_handle_irq+0x2c/0x40)
[ 172.380595] [<800ab82c>] (generic_handle_irq+0x2c/0x40) from [<80036904>] (handle_IRQ+0x4c/0xac)
[ 172.389402] [<80036904>] (handle_IRQ+0x4c/0xac) from [<80035ad0>] (__irq_svc+0x50/0xd0)
[ 172.397416] [<80035ad0>] (__irq_svc+0x50/0xd0) from [<80036bb4>] (default_idle+0x28/0x2c)
[ 172.405603] [<80036bb4>] (default_idle+0x28/0x2c) from [<80036e9c>] (cpu_idle+0x9c/0x108)
[ 172.413793] [<80036e9c>] (cpu_idle+0x9c/0x108) from [<800088b4>] (start_kernel+0x294/0x2e4)
[ 172.422181] [<800088b4>] (start_kernel+0x294/0x2e4) from [<10008040>] (0x10008040)
[1] This tells you the function call relationships. Notice this line:
[ 172.310618] [<80350508>] (_raw_spin_lock_irqsave+0x54/0x60) from [<7f3cf4a0>] (mlb_os81092_interrupt+0x18/0x68 [os81092])
This shows that the mlb_os81092_interrupt function tried to use spin_lock_irqsave to take a lock. So we can find out what this spinlock protects, and analyse the code and logs to detect who is holding the lock, and then find a way to avoid the lockup.
[2] Also, because CPU#0 is locked up and this may be an SMP (multi-core) system, you should check whether there is an IRQ that uses the critical resource. If the IRQ handler is assigned to another CPU (like CPU#1), it's OK; but if CPU#0 runs the IRQ handler while already holding the lock, you get a deadlock when the lock was taken with spin_lock instead of spin_lock_irqsave, so check that.
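A minimal sketch of the deadlock described in [2]; my_lock, update_shared_state and my_irq_handler are made-up names for illustration:
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(my_lock);

/* process context, running on CPU#0 */
void update_shared_state(void)
{
	spin_lock(&my_lock);    /* IRQs are still enabled here */
	/* ... an IRQ fires on CPU#0 at this point ... */
	spin_unlock(&my_lock);
}

/* IRQ handler: if it runs on CPU#0 it spins forever,
 * because the lock owner it interrupted can never resume */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	spin_lock(&my_lock);
	/* ... */
	spin_unlock(&my_lock);
	return IRQ_HANDLED;
}

/* fix: disable local IRQs while holding the lock */
void update_shared_state_safe(void)
{
	unsigned long flags;

	spin_lock_irqsave(&my_lock, flags);
	/* ... the IRQ is held off on this CPU until we release ... */
	spin_unlock_irqrestore(&my_lock, flags);
}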
How can I tell a Jasmine spy to only listen to the messages I tell it to expect and ignore any others?
For example:
Example Group
describe 'View', ->
  describe 'render', ->
    beforeEach ->
      @view = new View
      @view.el = jasmine.createSpyObj 'el', ['append']
      @view.render()

    it 'appends the first entry to the list', ->
      expect(@view.el.append).toHaveBeenCalledWith '<li>First</li>'

    it 'appends the second entry to the list', ->
      expect(@view.el.append).toHaveBeenCalledWith '<li>Second</li>'
Implementation
class View
  render: ->
    @el.append '<li>First</li>', '<li>Second</li>'
Output
View
  render
    appends the first entry to the list
      Expected spy el.append to have been called \
      with [ '<li>First</li>' ] but was called \
      with [ [ '<li>First</li>', '<li>Second</li>' ] ]
    appends the second entry to the list
      Expected spy el.append to have been called \
      with [ '<li>Second</li>' ] but was called \
      with [ [ '<li>First</li>', '<li>Second</li>' ] ]
There are two options:
1. Using the argsForCall spy property
it 'appends the first entry to the list', ->
  expect(@view.el.append.argsForCall[0]).toContain '<li>First</li>'
2. Using the args property of the mostRecentCall object
it 'appends the first entry to the list', ->
  expect(@view.el.append.mostRecentCall.args).toContain '<li>First</li>'
To be clear, you can't prevent the spy from listening. The spy will record all calls made to it. But you can access every single call by using argsForCall.
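As a side note, argsForCall and mostRecentCall belong to the Jasmine 1.x API; in Jasmine 2.x and later the same information is exposed on the spy's calls object, so an equivalent of option 1 would be:
it 'appends the first entry to the list', ->
  expect(@view.el.append.calls.argsFor(0)).toContain '<li>First</li>'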