preload function with network off - cobalt

If the network is off (e.g. the YouTube page cannot be reached) and Cobalt is started with the --preload parameter while the platform is powering on, it cannot load and show the YouTube UI. Even when it switches from the preloading state to the started state, the URL is not reloaded and no UI is shown. How should Cobalt handle this case?
// Even when the network is on, the YT URL cannot be reloaded from SbSystemRaisePlatformError. (tsa03s02-in-f142.1e100.net is the hostname of www.youtube.com.)
64 bytes from tsa03s02-in-f142.1e100.net (172.217.27.142): icmp_req=51 ttl=52 time=49.1 ms
64 bytes from tsa03s02-in-f142.1e100.net (172.217.27.142): icmp_req=52 ttl=52 time=48.3 ms
64 bytes from tsa03s02-in-f142.1e100.net (172.217.27.142): icmp_req=53 ttl=52 time=66.7 ms
[cobalt]>[11604:2581014753:INFO:h5vcc_url_handler.cc(119)] try to reload url, url= https://www.youtube.com/tv?additionalDataUrl=http://localhost:56789/apps/YouTube/dial_data
[cobalt]>[11604:2581014916:INFO:system_raise_platform_error.cc(49)] SbSystemRaisePlatformError: Connection error.
[cobalt]>[0810/114518:WARNING:system_window.cc(200)] Failed to notify user of error: 0
[cobalt]>[12263:2581046785:WARNING:thread_set_name.cc(36)] Thread name "SplashScreenWebModule" was truncated to "SplashScreenWeb"
64 bytes from tsa03s02-in-f14.1e100.net (172.217.27.142): icmp_req=54 ttl=52 time=48.6 ms
[cobalt]>[0810/114518:INFO:page_visibility_state.cc(70)] PageVisibilityState: app_state=kApplicationStateStarted (2)
[cobalt]>[12267:2581204654:WARNING:thread_set_name.cc(36)] Thread name "Synchronous Load" was truncated to "Synchronous Loa"
[cobalt]>[11604:2581247952:INFO:window_get_size.cc(36)] SbWindowGetSizewidth: 1920, height: 1080, ratio: 1
[cobalt]>[0810/114518:INFO:fetcher_factory.cc(94)] Fetching: h5vcc-embedded://splash_screen.html
[cobalt]>[0810/114518:INFO:fetcher_factory.cc(94)] Fetching: h5vcc-embedded://splash_screen.css
[cobalt]>[0810/114518:INFO:fetcher_factory.cc(94)] Fetching: h5vcc-embedded://you_tube_logo.png
[cobalt]>[0810/114518:INFO:fetcher_factory.cc(94)] Fetching: h5vcc-embedded://splash_screen.js
[cobalt]>[0810/114518:INFO:page_visibility_state.cc(70)] PageVisibilityState: app_state=kApplicationStateStarted (2)
[cobalt]>[12272:2581440597:WARNING:thread_set_name.cc(36)] Thread name "Synchronous Load" was truncated to "Synchronous Loa"
[cobalt]>[0810/114518:INFO:fetcher_factory.cc(94)] Fetching: https://www.youtube.com/tv?additionalDataUrl=http://loc[...]
[cobalt]>[0810/114518:ERROR:host_resolver_proc.cc(155)] [AAAAA]host= www.youtube.com
[cobalt]>[0810/114518:ERROR:browser_module.cc(702)] NetFetcher error on : net::ERR_NAME_RESOLUTION_FAILED, response code -1
[cobalt]>[0810/114518:WARNING:h5vcc_url_handler.cc(30)] url=//network-failure?retry-url=https://www.youtube.com/tv?additionalDataUrl=http://localhost:56789/apps/YouTube/dial_data
[cobalt]>[0810/114518:WARNING:h5vcc_url_handler.cc(92)] HandleNetworkFailure:
64 bytes from tsa03s02-in-f14.1e100.net (172.217.27.142): icmp_req=55 ttl=52 time=48.6 ms
64 bytes from tsa03s02-in-f142.1e100.net (172.217.27.142): icmp_req=56 ttl=52 time=49.5 ms
64 bytes from tsa03s02-in-f14.1e100.net (172.217.27.142): icmp_req=57 ttl=52 time=67.3 ms
[cobalt]>[11604:2584493042:INFO:h5vcc_url_handler.cc(119)] try to reload url, url= https://www.youtube.com/tv?additionalDataUrl=http://localhost:56789/apps/YouTube/dial_data

In regular non-preload mode, if you start up Cobalt with no network, it will attempt to load the URL, fail, and then call SbSystemRaisePlatformError, which is generally expected to display an error dialog of some sort, which can then call back with an indicator to retry.
In preload mode, the situation isn't any different. Moving from preloading to started is just like making the app visible; it doesn't indicate a network status change.
Now, if an implementation got a platform error while preloading, it could wait until the transition to started and then fire the retry indicator. Whether that is desirable is up to the platform.
There are also network status change events that can be sent (see starboard/event.h) but I'm not sure if they will cause Cobalt to retry automatically.
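The "wait until started, then fire the retry" idea above can be sketched as follows. This is a simplified stand-in, not the real Starboard API: the actual error callback type and application states live in starboard/system.h and starboard/event.h, and the names here (Platform, on_connection_error, on_started) are illustrative only.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the Starboard platform-error callback. */
typedef enum { kResponseRetry, kResponseCancel } ErrorResponse;
typedef void (*ErrorCallback)(ErrorResponse response, void *user_data);

typedef struct {
  int preloading;           /* 1 while in the preloading state */
  ErrorCallback pending_cb; /* deferred connection-error callback */
  void *pending_data;
} Platform;

/* Analogue of handling SbSystemRaisePlatformError: while preloading there
 * is no UI to show an error dialog, so stash the callback instead of
 * answering immediately. */
void on_connection_error(Platform *p, ErrorCallback cb, void *data) {
  if (p->preloading) {
    p->pending_cb = cb;
    p->pending_data = data;
  } else {
    cb(kResponseRetry, data); /* e.g. after showing an error dialog */
  }
}

/* Called on the preloading -> started transition: fire the deferred retry
 * so Cobalt reloads the URL. */
void on_started(Platform *p) {
  p->preloading = 0;
  if (p->pending_cb != NULL) {
    p->pending_cb(kResponseRetry, p->pending_data);
    p->pending_cb = NULL;
  }
}
```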

Setting the filter default to kSbSocketResolveFilterIpv4 in SystemHostResolverProc of host_resolver_proc.cc avoids the ERR_NAME_RESOLUTION_FAILED error when the network switches from off to on, and the YT page then reloads correctly.

Related

NTAG I2C FAST_READ is erring out after a particular page address

I'm using an NTAG I2C plus 2k memory tag and am able to successfully perform a FAST_READ for a particular page address range, but just beyond the range I'm getting an error.
iOS
Start address 0x04 and end address 0x46 reads successfully
await cmd([0x3a, 0x04, 0x46]);
while start address 0x04 and end address 0x47 fails with
await cmd([0x3a, 0x04, 0x47]);
Error
input bytes: 3A0C0C
input bytes: 3A0447
[CoreNFC] 00000002 816c6760 -[NFCTagReaderSession transceive:tagUpdate:error:]:771 Error Domain=NFCError Code=100 "Tag connection lost" UserInfo={NSLocalizedDescription=Tag connection lost}
Android
Start address 0x04 and end address 0x49 reads successfully
await cmd([0x3a, 0x04, 0x49]);
while start address 0x04 and end address 0x4b fails with
await cmd([0x3a, 0x04, 0x4b]);
Error
D/NfcService: Transceive start
D/NfcService: Transceive End, Result: 0 mTransceiveSuccess: 1 mTransceiveFail: 0
D/NfcService: Transceive start
D/NfcService: Transceive End, Result: 2 mTransceiveSuccess: 1 mTransceiveFail: 1
D/ReactNativeNfcManager: transceive fail: android.nfc.TagLostException: Tag was lost.
I/ReactNativeJS: Error: transceive fail
Thanks in advance.
From the Tag's datasheet
Remark: The FAST_READ command is able to read out the entire memory of one sector with one command. Nevertheless, the receive buffer of the NFC device must be able to handle the requested amount of data as no chaining is possible
When I do a FAST_READ on a similar tag type on Android in native code, I call getMaxTransceiveLength to find out how big the buffer is, divide that by 4, and round down to find the maximum number of pages a FAST_READ can do at once, breaking up into multiple FAST_READs if necessary.
Generally the max transceive length is 253 bytes on Android, or 63 pages.
The react-native-nfc-manager API for Android also has getMaxTransceiveLength in its API, so you can do the same calculation of the maximum number of pages a FAST_READ can do on your hardware.
I've not done FAST_READs on iOS but expect there is a similar limit (it does have an error code for a transceive packet that is too large, but I've not seen a way to ask for the max transceive length before you send a command).
While getMaxTransceiveLength is probably meant for the size of the command you send, the same number of bytes should be able to be returned before the transceive timeout is hit, as the send and receive data rates are identical.
The transceive timeout can be set, but not read, using the react-native-nfc-manager API.
Again, there are no options for changing timeout values on iOS, but there is an error to indicate that the communication with the tag has timed out.
So you could try increasing the timeout value on Android instead of breaking up into a number of FAST_READs, but working out how long the timeout should be could be difficult, and setting it too big might have a negative impact.
For Android it's probably easier to assume the max send size is safe to receive as well. For iOS, assume a max receive size based on your experiments, or handle the error and re-read with a back-off algorithm.

What causes the dynamically allocated error messages in Twincat 4024 and how do I get rid of them?

We have a project which was made with 4022.29 originally. We also tried to run the project with TwinCAT 4024.x. When the configuration is activated for the first time on my local machine it runs fine. However, when I restart the project or activate the configuration I get the following error messages:
Error 27.08.2019 14:06:37 322 ms | 'Port_851' (851): PLC: PLC instance xxx Instance tried to free pointer 0xffff9e02fe620bd8 which was not allocated by the PLC instance.
Error 27.08.2019 14:06:37 322 ms | 'Port_851' (851): PLC: PLC instance xxx Instance tried to free pointer 0xffff9e02fe620b48 which was not allocated by the PLC instance.
… (~20 more error messages)
Error 27.08.2019 14:06:37 322 ms | 'Port_851' (851): PLC: PLC instance xxx Instance tried to free pointer 0xffff9e02fe61fe28 which was not allocated by the PLC instance.
Error 27.08.2019 14:06:37 322 ms | 'Port_851' (851): PLC: PLC instance xxx Instance did not free dynamically allocated memory block at address 0xffff9e02fe616878 of size 65.
Error 27.08.2019 14:06:37 322 ms | 'Port_851' (851): PLC: PLC instance xxx Instance did not free dynamically allocated memory block at address 0xffff9e02fe6167d8 of size 65.
… (~20 more error messages)
Error 27.08.2019 14:06:37 322 ms | 'Port_851' (851): PLC: PLC instance xxx Instance did not free dynamically allocated memory block at address 0xffff9e02fe615978 of size 55.
What causes these error messages? Why do they suddenly show up? Should I get rid of them and if yes how do I do it?
Partial answer
This answer can use some improvements/better understanding. I'll post it here to collect some information on the solutions.
Origin
From Beckhoff support:
The error messages you receive point to dynamically allocated memory in the router memory (such as via __new()) or to interface pointers that were not released.
The mentioned error messages were introduced with the 4024 release. In older versions of TwinCAT we were not able to detect such a memory leak.
How I got rid of them
I am not quite sure how to put the above in my own words due to lack of understanding. However, I did track down the origin of the error messages in our project.
Binary search
What I did was use binary search to track down the origin of the issue. First I kept disabling half of the active tasks of the project until the error messages disappeared. Then I re-enabled tasks until I had found the specific task causing the issue. After that I did the same with the code running in this task, enabling and disabling the remaining halves to track down the code causing the issues.
Origin
In the end I found a function block which used other function blocks as its input. When I changed the inputs from
VAR_INPUT
fbSomeFb : FB_SomeFB;
END_VAR
into
VAR_INPUT
fbSomeFb : REFERENCE TO FB_SomeFB;
END_VAR
the error messages disappeared when the project was restarted.
Fixed another issue
After getting rid of these error messages, another issue with this particular project was solved. We always had the issue that the PLC crashed and restarted when we activated a configuration or put it into config mode. This only happened on the machine PLC (not on any of our development PLCs).
You allocated memory on the heap for an object using the __NEW function, so you need to deallocate it. With dynamic memory allocation you need to deallocate an object once you're done using it.
The way to do it in TwinCAT is to use the __DELETE function.
If you're using __NEW in a Function Block (FB), you can simply deallocate the object in the FB_Exit(...) method by calling the __DELETE function there.
e.g.
In FB_Init(...) you put:
pData := __NEW(INT);
In FB_Exit(...) you put:
__DELETE(pData);
FB_Exit(...) will be called whenever your FB moves out of scope. This will automatically deallocate the object from memory.
If you don't want to use FB_Exit(...), you need to think carefully about the conditions under which your program should deallocate the object you created.
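Putting the two snippets together, here is a minimal Structured Text sketch of the pattern. The FB name and the INT payload are illustrative, not from the original project; the FB_Init/FB_Exit parameter lists follow the standard TwinCAT 3 method signatures.

```
FUNCTION_BLOCK FB_Buffer
VAR
    pData : POINTER TO INT; // heap object owned by this FB
END_VAR

METHOD FB_Init : BOOL
VAR_INPUT
    bInitRetains : BOOL; // TRUE on a cold start
    bInCopyCode  : BOOL; // TRUE during an online change
END_VAR
pData := __NEW(INT); // allocate once, when the FB comes into scope

METHOD FB_Exit : BOOL
VAR_INPUT
    bInCopyCode : BOOL;
END_VAR
__DELETE(pData); // free it again; __DELETE also sets pData back to 0
```

Every __NEW in FB_Init is thus paired with a __DELETE in FB_Exit, which is exactly the balance the 4024 runtime checks when the PLC instance shuts down.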

Heroku need help handling bloat in database and vacuuming

Today, I started getting timeout errors from heroku. I eventually ran this ...
heroku pg:diagnose -a myapp
and got ...
RED: Bloat
Type Object Bloat Waste
───── ───────────────────────────────────────────── ───── ───────
table public.files 776 1326 MB
index public.files::files__lft__rgt_parent_id_index 63 106 MB
RED: Hit Rate
Name Ratio
────────────────────── ──────────────────
overall cache hit rate 0.8246404842342929
public.files 0.8508127886460272
I ran the VACUUM command and it did nothing to address the bloat. How do I address this?
I know this is an old question. Anyone else facing the same issue might try the following for vacuum:
VACUUM (ANALYZE, VERBOSE, FULL) your-table-name;
Note that VACUUM FULL rewrites the whole table and holds an exclusive lock on it while it runs, unlike a plain VACUUM, so plan for downtime on large tables.

.NET application handle leak, how to locate the source?

I have a .NET application running in a production environment (Windows XP + .NET 3.5 SP1) with a stable handle count of around 2,000, but in some unknown situation its handle count increases extremely fast until the application finally crashes (over 10,000 handles, as monitored with the PerfMon tool).
I made a memory dump during the increasing period (before the crash) and loaded it into WinDbg; here is the overall handle summary:
0:000> !handle 0 0
7229 Handles
Type Count
None 19
Event 504
Section 6108
File 262
Port 15
Directory 3
Mutant 56
WindowStation 2
Semaphore 70
Key 97
Token 2
Process 3
Thread 75
Desktop 1
IoCompletion 9
Timer 2
KeyedEvent 1
  
So, no surprise: the leaking type is Section. Digging deeper:
0:000> !handle 0 ff Section
Handle 00007114
Type Section
Attributes 0
GrantedAccess 0xf0007:
Delete,ReadControl,WriteDac,WriteOwner
Query,MapWrite,MapRead
HandleCount 2
PointerCount 4
Name \BaseNamedObjects\MSCTF.MarshalInterface.FileMap.IBC.AKCHAC.CGOOBGKD
No object specific information available
Handle 00007134
Type Section
Attributes 0
GrantedAccess 0xf0007:
Delete,ReadControl,WriteDac,WriteOwner
Query,MapWrite,MapRead
HandleCount 2
PointerCount 4
Name \BaseNamedObjects\MSCTF.MarshalInterface.FileMap.IBC.GKCHAC.KCLBDGKD
No object specific information available
...
...
...
...
6108 handles of type Section
You can see the BaseNamedObjects all follow the naming convention MSCTF.MarshalInterface.FileMap.IBC.***.*****.
Basically I was stopped here, and could not go any further to link this information back to my application.
Anyone could help?
[Edit0]
I tried several combinations of the GFlags command (+ust, or via the UI) with no luck; dumps opened in WinDbg always showed nothing via !htrace. So I attached to the process instead, and finally got the stack for the leaking handle above:
0:033> !htrace 1758
--------------------------------------
Handle = 0x00001758 - OPEN
Thread ID = 0x00000768, Process ID = 0x00001784
0x7c809543: KERNEL32!CreateFileMappingA+0x0000006e
0x74723917: MSCTF!CCicFileMappingStatic::Create+0x00000022
0x7473fc0f: MSCTF!CicCoMarshalInterface+0x000000f8
0x747408e9: MSCTF!CStub::stub_OutParam+0x00000110
0x74742b05: MSCTF!CStubIUnknown::stub_QueryInterface+0x0000009e
0x74743e75: MSCTF!CStubITfLangBarItem::Invoke+0x00000014
0x7473fdb9: MSCTF!HandleSendReceiveMsg+0x00000171
0x7474037f: MSCTF!CicMarshalWndProc+0x00000161
*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Windows\system32\USER32.dll -
0x7e418734: USER32!GetDC+0x0000006d
0x7e418816: USER32!GetDC+0x0000014f
0x7e4189cd: USER32!GetWindowLongW+0x00000127
--------------------------------------
And then I got stuck again: the stack does not seem to contain any of our user code. What is the suggestion for moving forward?
WinDbg isn't the ideal tool for memory leaks, especially not without preparation in advance.
There's a GFlags option (+ust) which can be enabled for a process to record the stack trace for handle allocations. If you don't have this flag enabled, you'll probably not get more info out of your dump. If you have it, use !htrace to see the stack.
You can also try UMDH (user-mode dump heap), which is a free tool. Or get something like Memory Validator, which certainly has better usability, so it might pay off in the long run.

What is the Faults column in 'top'?

I'm trying to download Xcode (on OS X El Capitan) and it seems to be stuck. When I run top, I see a process called storedownloadd, and its STATE column alternates between sleeping, stuck, and running. The FAULTS column has a quickly increasing number with a plus sign after it; it is now over 400,000 and still increasing. Other than top, I see no sign of download activity. Does this indicate that something is amiss? Here's a screen shot:
Processes: 203 total, 2 running, 10 stuck, 191 sleeping, 795 threads 11:48:14
Load Avg: 4.72, 3.24, 1.69 CPU usage: 56.54% user, 6.41% sys, 37.3% idle SharedLibs: 139M resident, 19M data, 20M linkedit. MemRegions: 18620 total, 880M resident, 92M private, 255M shared. PhysMem: 7812M used (922M wired), 376M unused.
VM: 564G vsize, 528M framework vsize, 0(0) swapins, 512(0) swapouts. Networks: packets: 122536/172M in, 27316/2246K out. Disks: 78844/6532M read, 240500/6746M written.
PID COMMAND %CPU TIME #TH #WQ #PORT MEM PURG CMPRS PGRP PPID STATE BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH
354 storedownloadd 0.3 00:47.58 16 5 200 255M 0B 0B 354 1 sleeping *3[1] 155.53838 0.00000 501 412506+ 54329 359852+ 6620+ 2400843+ 1186426+
57 UserEventAgent 0.0 00:00.35 22 17 378 4524K+ 0B 0B 57 1 sleeping *0[1] 0.23093 0.00000 0 7359+ 235 15403+ 7655+ 24224+ 17770
384 Terminal 3.3 00:12.02 10 4 213 34M+ 12K 0B 384 1 sleeping *0[42] 0.11292 0.04335 501 73189+ 482 31076+ 9091+ 1138809+ 72076+
When top reports back FAULTS it's referring to "page faults", which are more specifically:
The number of major page faults that have occurred for a task. A page
fault occurs when a process attempts to read from or write to a
virtual page that is not currently present in its address space. A
major page fault is when disk access is involved in making that page
available.
If an application tries to access an address on a memory page that is not currently in physical RAM, a page fault occurs. When that happens, the virtual memory system invokes a special page-fault handler to respond to the fault immediately. The page-fault handler stops the code from executing, locates a free page of physical memory, loads the page containing the needed data from disk, updates the page table, and finally returns control to the program, which can then access the memory address normally. This process is known as paging.
Minor page faults can be common depending on the code that is attempting to execute and the current memory availability on the system, however, there are also different levels to be aware of (minor, major, invalid), which are described in more detail at the links below.
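You can watch these counters for your own process programmatically. A small sketch (Python used for illustration; on Unix-like systems getrusage(2) exposes the cumulative minor and major fault counts, which is the same data top aggregates):

```python
import resource

# ru_minflt / ru_majflt are the cumulative minor and major page-fault
# counts for the calling process, as reported by getrusage(2).
before = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", before.ru_minflt)  # page reclaims, no disk I/O
print("major faults:", before.ru_majflt)  # faults that required disk I/O

# Writing to a large fresh allocation forces new pages to be mapped in,
# which shows up as additional minor faults.
buf = bytearray(50 * 1024 * 1024)  # 50 MB, zero-filled so pages are touched
after = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults after touch:", after.ru_minflt)
```

A steadily climbing major-fault count, as in the storedownloadd case, means the process keeps waiting on disk to bring pages in, which is why it bounces between sleeping and running.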
↳ Apple : About The Virtual Memory System
↳ Wikipedia : Page Fault
↳ Stackoverflow.com : page-fault