TwinCAT 3.1 CoE read and write with Complete Access

I am attempting to perform Complete Access reads and writes to an EtherCAT slave controller using TwinCAT 3.1 on Windows 10.
I generated the EtherCAT slave stack code using the Beckhoff Slave Stack Code Tool 5.12 and can successfully enter the operational state with TwinCAT 3.1 as the EtherCAT master. I have a mailbox object at index 0x8001, which is an array of 25 bytes. I am unable to perform a Complete Access, which would allow reading/writing all 25 bytes of the object at 0x8001 at once. Without Complete Access, I have to write to each sub-index of index 0x8001 individually, which means 25 separate read/write operations.
I have tried using EC Engineer 3.06 from Acontis Technologies as the EtherCAT master, and I verified that Complete Access works there.
Does anyone know how to perform complete access using TwinCAT?

TwinCAT does it automatically if the ESI file has the Complete Access flag set.
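For reference, a rough sketch of what that flag looks like in the ESI file: the CoE element inside the device's Mailbox section carries a CompleteAccess attribute (the element and attribute names here follow the public EtherCATInfo schema; verify them against the ESI your SSC Tool actually generated):

<Device>
  <!-- ... -->
  <Mailbox>
    <!-- CompleteAccess="true" advertises SDO Complete Access support,
         so the master can read/write a whole object in one transfer -->
    <CoE SdoInfo="true" CompleteAccess="true"/>
  </Mailbox>
</Device>

After changing the ESI, reload the device descriptions in TwinCAT (TwinCAT > EtherCAT Devices > Reload Device Descriptions) and rescan so the updated flag is picked up.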

Related

How to read data from property file and store it in cache node in IBM MB

I am completely new to IBM MB and would like to know how to read data from a property file and store it in a cache node. I have gone through the IBM sites; I could find that it is possible, but I do not know how to implement it. Could someone show me sample code to read data from a property file, store it in a cache node, and refresh it every hour?
My property file will be like:
Key    Value
Id1    test1
Id2    test2
Check this blog post: Using the global cache in WebSphere Message Broker
This article shows a sample implementation of the WebSphere Message Broker global cache feature in an enterprise environment for storing and accessing reference data. The global cache is useful in many other scenarios, and you can write similar ESQL and Java routines to access it depending on your requirements.
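To give a concrete flavour of such a routine, here is a minimal JavaCompute node sketch that loads a property file into the global cache via the MbGlobalMap API (the real WMB/IIB global cache API); the file path and the map name "REFDATA" are made up for the example:

import java.io.FileInputStream;
import java.util.Properties;

import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbGlobalMap;
import com.ibm.broker.plugin.MbMessageAssembly;

public class PropertyCacheLoader extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        try {
            // Load the property file (the path is an assumption -- use your own)
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("/opt/config/lookup.properties")) {
                props.load(in);
            }

            // "REFDATA" is an example map name; getGlobalMap creates the map on first use
            MbGlobalMap cache = MbGlobalMap.getGlobalMap("REFDATA");
            for (String key : props.stringPropertyNames()) {
                String value = props.getProperty(key);
                // put() fails for an existing key, so use update() when refreshing
                if (cache.containsKey(key)) {
                    cache.update(key, value);
                } else {
                    cache.put(key, value);
                }
            }
        } catch (MbException e) {
            throw e;
        } catch (Exception e) {
            throw new RuntimeException("Failed to load properties into the global cache", e);
        }
        getOutputTerminal("out").propagate(assembly);
    }
}

For the hourly refresh, one common pattern is to trigger a flow containing this node from a TimeoutNotification node configured to fire every hour, so the map contents are rewritten on a schedule.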

Periodic TNS-12531: TNS:cannot allocate memory

I have a problem that's been plaguing me for about a year now. I have Oracle 12.1.x.x installed on my machine. After a day or two the listener stops responding, and listener.log contains a bunch of TNS-12531 messages. If I reboot, the problem goes away and I'm fine for another day or two. I'm lazy and I hate rebooting, so I decided to finally track this down, but I'm having no luck. Since the alternative is to do work that I really don't want to do, I'm going to spend all my time researching this.
Some notes:
Windows 10 Pro
64-Bit
32 GB RAM
Generally, about 20GB free when the error occurs
I have several databases and it doesn't matter which DB is running
Restarting the DB doesn't help
Restarting the listener doesn't help
Only rebooting clears the problem
When I set TRACE_LEVEL_LISTENER = 16, I don't get much more info, and nothing is actually written to the trace files
I can connect to the DB if I bypass the listener (i.e., set ORACLE_SID=xxx and connect without a DB identifier)
All other network interactions seem to work fine after the listener stops
lsnrctl status hangs and adds another TNS-12531 to the listener.log
I have roughly the same config at home and this does not happen
Below is an example of a listener.log file:
Fri Jul 28 14:21:47 2017
System parameter file is D:\app\user\product\12.1.0\dbhome_1\network\admin\listener.ora
Log messages written to D:\app\user\diag\tnslsnr\LJ-Quad\listener\alert\log.xml
Trace information written to D:\app\user\diag\tnslsnr\LJ-Quad\listener\trace\ora_24288_14976.trc
Trace level is currently 16
Started with pid=24288
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=LJ-Quad)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
Listener completed notification to CRS on start
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
28-JUL-2017 14:22:06 * 12531
TNS-12531: TNS:cannot allocate memory
28-JUL-2017 14:22:47 * 12531
TNS-12531: TNS:cannot allocate memory
28-JUL-2017 14:26:24 * 12531
TNS-12531: TNS:cannot allocate memory
Thanks a bunch for any help you can provide!
Issue 1
This error can occur approximately after 2048 connections have been made via the listener when running on a non-English Windows installation.
Fix for Issue 1
Create a Windows User Group named Administrators on the computer where the listener.exe resides. This can fix the issue of the listener dying.
Reference: I'll post the link for the first issue as soon as I find it again
Issue 2
This error can also occur on Windows 64-Bit systems where the Desktop Application Heap is too small.
Fix for Issue 2
Try increasing the desktop application heap in the Windows registry; the setting is located at
HKLM\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
As a note, don't make up the value yourself; rely on the documentation.
Basically, search for the registry entry and alter the third field of the SharedSection=1024,20480,1024 setting. This is a trial-and-error approach, but it seems to improve the listener's stability and memory behaviour.
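For illustration, SharedSection sits inside the long string data of the Windows value under that key, and the third field sizes the non-interactive desktop heap in KB. A before/after example (the default is often 768 on 64-bit Windows, the exact numbers are system-dependent, and a reboot is required for the change to take effect):

SharedSection=1024,20480,768     (typical default)
SharedSection=1024,20480,2048    (example: non-interactive desktop heap raised)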
Reference: TNS:cannot allocate memory - is there limit to the num databases on one box (Oracle Developer Community)

nca_connect_server: cannot communicate with host in LoadRunner 12.53

I'm currently testing HP LoadRunner 12.53 with Oracle EBS R12.2.5.
I created a simple script using both the Oracle Apps and the Oracle NCA + HTTP protocols (log in, bring up a form, close/log out) and replayed it, but ran into the error below (the same error for both scripts):
nca_connect_server: cannot communicate with host
icx_ticket is correlated and works OK, as it is picked up and substituted into the parameter.
There is no need to correlate JSessionIDForms, as EBS is running in socket mode.
It is just a simple script with a single correlation, but I can't find any clue about the error.
What could be the root cause of the error?
Where should I look for a clue, and how can I make the error logging more verbose and detailed?
Thanks in advance
Record it twice. If the value shifts, then correlate it.
Please ensure that you have properly set up the environment before recording. The steps below need to be taken to set up the environment:
1. Set the "record=names" flag for the specific user profile in the Oracle EBS application via an administrator login (search Google for how to achieve this, or simply ask your application team to do it for you).
2. Run-Time Settings and default.cfg file changes
Run-Time Settings
Keep the values below at high limits to avoid replay timeout errors:
Run-Time Settings > Internet Protocol > Preferences > Options >
Step Download Limit
HTTP-request connect timeout
HTTP-receive receive timeout
Keep-Alive timeout
Run-Time Settings > Browser > Browser Emulation >
Simulate a new user on each iteration – checked
3. default.cfg file inside the script directory
The "RelativeURL={NCAJServSessionId}" statement in the default.cfg file rolls back each time the script is run, so check that it reads:
/forms/lservlet;JsessionIDForms={NCAJServSessionId} -- R12 version, or
/forms/formservlet?JServSessionIdforms={NCAJServSessionId} -- EBS 11i version
4. Correlation - last but not least
Ensure correct correlation of each and every parameter. The best way to achieve this is to record the script twice and compare the two recordings with a suitable diff tool, then correlate each value that changes between recordings, as in the sketch below.
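As a rough sketch of what a correlation looks like in a LoadRunner script (the boundaries and the URL below are placeholders -- take the real left/right boundaries from the comparison of your two recordings):

// register the capture *before* the request whose response returns the value
web_reg_save_param("icx_ticket",
                   "LB=icx_ticket=",   // left boundary: an assumption for this example
                   "RB=&",             // right boundary: an assumption for this example
                   "NotFound=warning",
                   LAST);

web_url("AppsLogin",
        "URL=http://ebs.example.com/OA_HTML/AppsLogin",  // placeholder URL
        LAST);

// subsequent steps then reference {icx_ticket} instead of the recorded literal value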
Note: Oracle EBS is not fully supported by LoadRunner. Please download the LoadRunner compatibility matrix and check whether your version is supported.

Can I delete .dmp and .phd files of Liberty Profile server?

In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. These files are between 130 MB and 1.3 GB in size, while my deployed app uses 4 MB.
Can I delete these files *.dmp and *.phd?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryException or have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually you do this to capture the state of your application at the point where something goes wrong so that you can examine it with analysis tools and try to work out what went wrong.
For example, by default the IBM JVM will perform a dump when an OutOfMemoryException is thrown. This allows you to look at the dump file and see what's using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something's asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options which can be used to cause the JVM to create a dump in response to certain events. More information on that in the Knowledge Center.

Can't create multiple striped disk volume in Azure VM

I created an Azure Medium instance Windows 2012 Server and I'm having a problem striping together multiple Azure data disks into a single volume using the Server Manager tool.
In Azure I provisioned the medium instance and then created 4 data disks of 60 GB each. I then RDP'ed into the server, and inside Server Manager under File and Storage Services > Volumes I saw my 4 data disks listed in the Disks section alongside the C:\ and D:\ drives that come with this instance. I initialized my 4 data disks (later I also tried NOT initializing them), but when I clicked on "Storage Pools" in the nav bar, I only saw 1 of my data disks under the Virtual Disk section.
I saw no way to add any of the other 3 data disks into my Storage Pool and then of course into the subsequent Virtual Disk. This problem limits me to just one data disk in my Virtual Disk. I have tried this many different times and the result is always the same.
Does anyone know what can be causing this or have steps to do the same thing I'm trying to do?
Thanks
If you're wondering why I'm trying to stripe these instead of using just 1 large data disk, this article explains the performance benefits of doing so:
http://blog.aditi.com/cloud/windows-azure-virtual-machines-lessons-learned/
In my blog post I explain how to do this, although perhaps the level of detail you are looking for isn't there. Still, everyone who followed the post (it was a lab) was able to create the striped volume. The blog post is a complete lab; go down about halfway to see the section about the striped volume. Let me know if you have any questions.
http://geekswithblogs.net/hroggero/archive/2013/03/20/windows-azure-it-roadshow-lab-i.aspx
Thanks
I hit the same problem, and some Googling revealed that this is a bug in Server Manager (sorry, can't find the link). The workaround is to use PowerShell to create the pool. These commands will create a new Storage Pool called "Storage" and assign all the available disks to it:
# Find the Storage Spaces subsystem and take its unique ID
$spaces = (Get-StorageSubSystem | Where-Object { $_.Model -eq "Storage Spaces" })[0].UniqueID
# Create the pool from every disk that is eligible for pooling
New-StoragePool -FriendlyName "Storage" -StorageSubSystemUniqueId $spaces -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
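If it helps, a possible continuation that builds the striped volume on top of the pool (standard Storage module cmdlets; the friendly names and the column count of 4 are assumptions matching the four-disk setup above):

# Create a simple (striped, non-resilient) virtual disk across the 4 pooled disks
New-VirtualDisk -StoragePoolFriendlyName "Storage" -FriendlyName "Striped" -ResiliencySettingName Simple -NumberOfColumns 4 -UseMaximumSize
# Initialize it, create a partition, and format the volume
Get-VirtualDisk -FriendlyName "Striped" | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume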