I am writing one JSON payload per line to the Liberty console (without any third-party library) and collecting them in LogDNA. This works fine as long as the payload is smaller than 8k. Beyond this the JSON payload is cut with a newline and cannot be processed in LogDNA.
Is there any Liberty setting to go beyond this limit?
This could be for a couple of known reasons. (I see that you are not using a 3rd party library, but I'm including both reasons for completeness in case others are looking for causes of this).
Log4J -
If you're creating logs with an older version of Log4J using the Console Appender, it will break your log entries into 8k chunks before it writes them to System.out (see https://github.com/OpenLiberty/open-liberty/issues/14197). Switching to a newer version of Log4J would fix this.
Liberty -
If you're using an older version of WebSphere Liberty / Open Liberty (before 19.0.0.5) and writing logs to the console, log entries were broken into 8k chunks (see https://github.com/OpenLiberty/open-liberty/issues/6095). Switching to a newer version of Liberty would fix this.
I am using JBoss 7.x, and have the following use case.
I am going to do load testing of messaging queues with JBoss. The queues are external to JBoss.
I will push a lot of messages into the queue, around 1000 messages. When around 100+ messages have been pushed, I want to crash JBoss. Later I want to restart JBoss to verify the message processing.
I had earlier made use of Byteman to crash the JVM using the following:
JAVA_OPTS="-javaagent:/BYTEMAN_HOME/lib/byteman.jar=script:/QUICKSTART_HOME/jta-crash-rec/src/main/scripts/xa.btm ${JAVA_OPTS}"
Details are here: https://github.com/Naresh-Chaurasia/jboss-eap-quickstarts/tree/7.3.x/jta-crash-rec
In the above case, whenever an XA transaction happens the JVM is crashed using Byteman, but in my case I want to crash the JVM/JBoss only after, say, 100+ messages, i.e. not for each transaction but after processing some number of messages.
I have also tried a few examples from here to get ideas on how to achieve it, but did not succeed: https://developer.jboss.org/docs/DOC-17213#top
Question: How can I crash JBoss / the running JVM using Byteman or some other way?
See the Programmer's Guide that comes bundled with the distribution.
The sections headed "CountDowns" and "Aborting Execution" provide what's necessary. These are built-in features of the Rule Language; a sketch of how they combine is below.
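As a sketch only (com.example.MessageProcessor and onMessage are placeholders; substitute your application's actual message-handling class and method):

# Arm a CountDown of 100 the first time a message is processed.
# flag() returns true only on the first call for a given key.
RULE arm countdown on first message
CLASS com.example.MessageProcessor
METHOD onMessage
AT ENTRY
IF flag("armed")
DO createCountDown("crash", 100)
ENDRULE

# countDown() returns true once the counter reaches zero, i.e.
# roughly on the 100th message; killJVM() then aborts the JVM.
RULE kill JVM after 100 messages
CLASS com.example.MessageProcessor
METHOD onMessage
AT ENTRY
IF countDown("crash")
DO killJVM()
ENDRULE

Load the script through the same -javaagent option you already use for xa.btm.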
I have a requirement to upload files up to 150MB. I have written a Java-based REST service using Spring Boot 1.5. I am not able to upload larger files; the code works for smaller file sizes. I have configured all the payload/multipart related configuration for Tomcat, but it is not working for large files. I am getting "502: Bad Gateway: Registered endpoint unable to handle the request". The code is deployed in Pivotal Cloud Foundry. My question is: is there any size limit for the payload configured at the Gorouter level which is causing this issue? Any help is appreciated.
Thanks
Here's what I would suggest:
Run your app locally and confirm that you can upload a 150M+ file, for example as shown below. That will ensure that you have Spring Boot configured correctly, and that there are no limits in Tomcat (embedded) or Spring which would cause this.
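A quick way to test (the /upload path below is a placeholder for your app's actual endpoint):

dd if=/dev/zero of=/tmp/test-150m.bin bs=1M count=150
curl -v -F "file=@/tmp/test-150m.bin" http://localhost:8080/upload

curl's -F option sends the file as multipart/form-data, the same way a browser form would.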
When you deploy to a Cloud Foundry installation, there will not be any additional size-based restrictions. Gorouter does not directly limit the size of a file that can be uploaded. However, Gorouter has an upper limit on how much time a request can consume in its entirety (i.e. receive the request, process it, and respond). By default, that is 900s (your CF platform may differ; consult your platform operator to get the specific value).
I mention this because the upload bandwidth of your client comes into play here. If you have a client that is slowly uploading a 150M file, let's say it would take an hour, then the request will fail with a response like the one you're seeing (staying under 900s already requires roughly 170KB/s of sustained upload bandwidth for a 150MB file).
To confirm, run cf logs and look for the log entry tagged with [RTR] that corresponds to your failed request; it'll have the 502 status code. Then check the response_time field and see if it matches the max request time set on your platform (900s default). If it's a match, then that's your issue.
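For example (my-app is a placeholder for your application name):

cf logs my-app --recent | grep RTR

The --recent flag dumps the buffered logs instead of streaming them, which makes it easier to search for the failed request.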
If none of that helps, you're going to need to look for more information. Perhaps try increasing the log levels and running cf logs to see if you get any more clues from your application.
Try to increase the multipart size with this configuration:
spring.http.multipart.max-file-size=200MB
spring.http.multipart.max-request-size=200MB
and with Tomcat's configuration in the application's web.xml, e.g. webapps/manager/WEB-INF/web.xml:
<multipart-config>
<max-file-size>52428800</max-file-size>
<max-request-size>52428800</max-request-size>
<file-size-threshold>0</file-size-threshold>
</multipart-config>
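The same limits can also be set in code with the Servlet 3.0 @MultipartConfig annotation (UploadServlet is a hypothetical servlet name; the values mirror the web.xml snippet above):

import javax.servlet.annotation.MultipartConfig;
import javax.servlet.http.HttpServlet;

// 52428800 bytes = 50 MB
@MultipartConfig(
    maxFileSize = 52428800,    // limit per uploaded file
    maxRequestSize = 52428800, // limit for the whole multipart request
    fileSizeThreshold = 0      // write uploads straight to disk
)
public class UploadServlet extends HttpServlet {
}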
I have downloaded Apache JMeter 3.1 and developed a JMX script file. But all the other members of my team use Apache JMeter 3.0, and I am unable to open my 3.1 JMX file in 3.0.
Can anyone suggest how to open a JMX file from version 3.1 in Apache JMeter 3.0?
Thanks in advance
Blind shot: my expectation is that you are suffering from Bug #60252. Since JMeter 3.1, a new metric, Sent Bytes, has been introduced:
New Metrics
A new sent_bytes metric has been introduced which reports the bytes sent to server.
Another metric connect_time has been enabled by default in this version
So now the Aggregate Report and Summary Report listeners explicitly rely on this metric. If you have these listeners in your test plan, just remove them and you should be able to open the script using JMeter 3.0.
Things to consider:
Recommend that your colleagues upgrade to JMeter 3.1, as newer JMeter versions normally contain performance improvements and bug fixes
Don't add any listeners to your Test Plan. Really. Listeners should be used for test development and debugging, and for viewing test results after the test has finished.
Run your test in command-line non-GUI mode like:
jmeter -n -t /path/to/script.jmx -l /path/to/results.jtl
When the test is done, open the JMeter GUI, add the listener(s) of your choice, and use the "Browse" button to locate the results.jtl file; you will see the saved and calculated metrics
Check out Greedy Listeners - Memory Leeches of Performance Testing article for more details
Raise an issue in the JMeter Issue Tracker recommending that the aforementioned listeners be listed in the Incompatible Changes section
Going forward, add the essential parts of the jmeter.log file to your question for non-telepathic community members
Like any good software, JMeter takes great care over backward compatibility (believe us, as our team leader is an active member of the dev team), but it cannot handle opening a file saved by version N+1 in version N (as with any software, I think).
So follow Dmitri's advice to make your colleagues upgrade to 3.1, for all the good reasons here:
http://www.ubik-ingenierie.com/blog/jmeter-3-1-is-out-with-new-great-features/
But there is no need to raise a bug (as he recommends), as it is absolutely not a bug.
In WebSphere MQ, the command level for a queue manager is 701. What does it actually specify?
WebSphere products use a "[version].[release].[modification].[Fix Pack]" naming convention. For example, 7.0.1.6 is the current release, specified down to the Fix Pack level.
Fix packs are limited to bug fixes and very limited non-disruptive functionality enhancements.
Modifications can include functionality enhancements but no API changes. For example the Multi-Instance Queue Manager was introduced in 7.0.1.
Releases may introduce significant new function and limited API changes but are highly forward and backward compatible within the same version.
Versions encapsulate a core set of functionality. Changes at this level may sacrifice some backward compatibility in trade for significant new functionality. For example, WMQ Pub/Sub was moved from Message Broker to base MQ in the V7 release.
Since administrative functionality does not change in Fix Packs but may change at the Modification level, compatibility with administrative tools is based on the queue manager's Command Level.
There is an old but still useful TechNote which describes this, written when the numbering conventions were adopted for WMQ.
It displays the command level of WMQ, e.g. 530, 600, 700, 701. Despite being 'only' a .0.1 increment, WMQ 7.0.1 gets a new command level due to a number of internal changes (e.g. multi-instance queue managers), although WMQ 6.0.1.x and 6.0.2.x were both CMDLEVEL 600
Command level, although similar to the V.R.M.F., is not exactly the same thing. The command level is used to allow configuration applications to know what commands (and attributes within those commands) will be understood by the command server.
The first thing any configuration application should do is discover the PLATFORM and CMDLEVEL of the queue manager. Then that application can determine which commands/attributes it would be acceptable to send to that queue manager.
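For instance, with the WebSphere MQ classes for Java, an application connected in bindings mode can read the command level directly ("QM1" below is a placeholder queue manager name):

import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class CmdLevelCheck {
    public static void main(String[] args) throws MQException {
        // Bindings-mode connection to a local queue manager
        MQQueueManager qmgr = new MQQueueManager("QM1");
        try {
            // Prints e.g. 701 for a V7.0.1 queue manager
            System.out.println("CMDLEVEL: " + qmgr.getCommandLevel());
        } finally {
            qmgr.disconnect();
        }
    }
}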
It is possible that CMDLEVEL could be increased in the service stream, in which case the V.R.M.F. would not necessarily match the CMDLEVEL. This would happen if some new external attributes were added in the service stream: queue managers without that patch would not understand them, but queue managers with the patch would. How does an application determine what to send? The CMDLEVEL determines that, and so it would have to be upped by the patch.
We have an existing J2EE application that uses WebSphere MQ to retrieve data from IMS.
The J2EE application sends the IMS transaction name to MQ, which retrieves IMS data. The returned data is then parsed for further processing.
Recently we migrated the application to WebSphere 7. The application worked fine on a Windows box. However, when we ported the application to a zLinux (Linux on System z) box, we were able to talk to IMS and data was returned from IMS to the J2EE application, but the parsing process raises an ArrayIndexOutOfBoundsException.
The inputs are the same in both environments, and with the operational code being the same (same Java build) there is a significant difference in the observed behaviour. Could this be anything to do with the CharacterCodeSet not being accepted by the zLinux environment? We use a hard-coded value for the CCSID within the J2EE application.
Is it that the zLinux environment does not support the existing CCSID and requires a different CCSID?
Incidentally, the answer to the above question lies in the big-endian/little-endian problem. The platforms differ in byte order: Windows on x86 is little-endian, while Linux on System z (like AIX) is big-endian, so multi-byte values in the message are laid out differently. This was causing the parsing failure: the piece of code that successfully parsed the message returned from MQ on one platform was not able to parse it on zLinux, where the byte layout is different.
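Note that Java itself reads multi-byte values the same way on every platform (ByteBuffer defaults to big-endian everywhere), so what matters is the byte order of the data inside the message, which the parsing code must match explicitly. A minimal illustration:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // Four raw bytes as they might arrive in a message payload
        byte[] raw = {0x01, 0x00, 0x00, 0x00};
        // The same bytes decode to very different values depending on the assumed order
        int asBig = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).getInt();       // 16777216
        int asLittle = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getInt(); // 1
        System.out.println(asBig + " vs " + asLittle);
    }
}

In MQ terms, the message descriptor's Encoding field tells the receiver which layout the sender (or data conversion) used; honouring it rather than hard-coding one platform's byte order avoids this class of failure.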