Mage-Resque - Job class not found error - Magento

I'm trying to implement asynchronous functionality in Magento using Mage-Resque. I have followed the instructions at https://github.com/ajbonner/mage-resque and installed all the components except ext-pcntl.
Now I'm able to queue a job to the redis-server. I have tested it using the default Mns_Resque_Model_Job_Logmessage class, but I'm getting the following error:
[info] [11:09:42 2016-03-27] Checking default for jobs
[info] [11:09:42 2016-03-27] Found job on default
[notice] [11:09:42 2016-03-27] Starting work on (Job{default} | ID: 6fe2a430c10ff2920c3f66ec7d52e957 | Mns_Resque_Model_Job_Logmessage | [{"message":"Resque Test 1459057136"}])
[info] [11:09:42 2016-03-27] Forked 3759 at 2016-03-27 11:09:42
[info] [11:09:42 2016-03-27] Processing default since 2016-03-27 11:09:42
[critical] [11:09:42 2016-03-27] (Job{default} | ID: 6fe2a430c10ff2920c3f66ec7d52e957 | Mns_Resque_Model_Job_Logmessage | [{"message":"Resque Test 1459057136"}]) has failed Could not find job class Mns_Resque_Model_Job_Logmessage.
It's reporting that it cannot find the class Mns_Resque_Model_Job_Logmessage. What could be wrong? Have I missed something? Any help would be appreciated.
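For context, php-resque raises "Could not find job class …" when class_exists() fails inside the forked worker, so the class must be autoloadable in the worker process, not just in the web process; a common cause is starting the worker without Magento's autoloader bootstrapped. A minimal sketch of such a job class (the file path follows Magento's autoloader convention; the logging body is an assumption):
<?php
// app/code/community/Mns/Resque/Model/Job/Logmessage.php
// (path implied by the class name under Magento's autoloader convention)
class Mns_Resque_Model_Job_Logmessage
{
    // php-resque instantiates the class in the forked worker and calls
    // perform(); $this->args carries the payload passed when enqueuing,
    // e.g. array('message' => 'Resque Test 1459057136').
    public function perform()
    {
        Mage::log($this->args['message']);
    }
}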

Related

SonarQube - Failed to get CE Task status - HTTP code 502

I am trying to run SonarQube (hosted remotely) via the SonarScanner command from my local machine for a Magento (PHP) application, but I get the error below every time. I tried to find a solution for this, but didn't find much related to my issue.
Does anyone have any idea about this error?
13:22:38.820 INFO: ------------- Check Quality Gate status
13:22:38.820 INFO: Waiting for the analysis report to be processed (max 600s)
13:22:38.827 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=7ms
13:22:43.845 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=11ms
13:22:48.854 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=9ms
13:22:53.866 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=12ms
13:22:58.871 DEBUG: GET 502 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=5ms
13:22:58.899 DEBUG: eslint-bridge server will shutdown
13:23:04.549 DEBUG: stylelint-bridge server will shutdown
13:23:09.571 INFO: ------------------------------------------------------------------------
13:23:09.571 INFO: EXECUTION FAILURE
13:23:09.571 INFO: ------------------------------------------------------------------------
13:23:09.571 INFO: Total time: 12:09.000s
13:23:09.688 INFO: Final Memory: 14M/50M
13:23:09.688 INFO: ------------------------------------------------------------------------
13:23:09.689 ERROR: Error during SonarScanner execution
Failed to get CE Task status - HTTP code 502: <html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
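The 502 and the bare HTML body come from whatever reverse proxy sits in front of SonarQube, not from the scanner itself: the scanner simply polls api/ce/task until the background task finishes, and gives up when the proxy answers 502. You can reproduce the poll by hand to see whether the proxy or SonarQube is at fault (same URL as in the log; the token variable is an assumption, anonymous access may suffice on open instances):
# $SONAR_TOKEN is an assumption; pass it as the basic-auth username
curl -sS -u "$SONAR_TOKEN:" "https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw"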

GitLab job is successful despite assertion failure

I am running SoapUI assertions using a Maven image in GitLab. Even though an assertion fails, the build is reported successful in GitLab. I have tried mvn integration-test -ff as well as -fae, but no luck. I also used allow_failure: false, which did not work either. Please advise how to fail the GitLab pipeline job when an assertion fails.
Here is my yml file
T001-0011:
  extends: .ETE -stage
  image: adoptopenjdk/maven-openjdk11
  variables:
    MAVEN_CLI_OPTS: "--fail-fast"
  script:
    - 'mvn -f ./TV001/pom.xml $MAVEN_CLI_OPTS integration-test'
  allow_failure: false
  when: always
Here is the gitlab log
1 error
09:53:48,937 ERROR [SoapUITestCaseRunner] JDBC_Request failed, exporting to [/builds/gitlab/data/test-team-automation-scripts/./SV321/Warnings/target/surefire-reports/TestSuite_1-AC1-JDBC_Request-0-FAILED.txt]
09:53:48,938 INFO [SoapUITestCaseRunner] Finished running SoapUI testcase [AC1], time taken: 904ms, status: FAILED
09:53:48,953 INFO [SoapUITestCaseRunner] Running SoapUI testcase [AC2]
09:53:48,963 INFO [SoapUITestCaseRunner] running step [IDN220001-Request2]
09:53:48,966 DEBUG [HttpClientSupport$SoapUIHttpClient] Stale connection check
09:53:48,968 DEBUG [HttpClientSupport$SoapUIHttpClient] Attempt 1 to execute request
09:53:48,968 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Sending request: GET /apikey/v1/warnings/waning/IDN22000 HTTP/1.1
09:53:48,974 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Receiving response: HTTP/1.1 404 Not Found
09:53:48,975 DEBUG [HttpClientSupport$SoapUIHttpClient] Connection can be kept alive indefinitely
09:53:49,018 INFO [log] HTTP status code: 404
09:53:49,019 INFO [SoapUITestCaseRunner] Assertion [Valid HTTP Status Codes] has status UNKNOWN
09:53:49,019 INFO [SoapUITestCaseRunner] Assertion [Script Assertion] has status VALID
09:53:49,019 INFO [SoapUITestCaseRunner] Finished running SoapUI testcase [AC2], time taken: 8ms, status: FINISHED
09:53:49,021 INFO [SoapUITestCaseRunner] Project [DPD-3396] finished with status [FAILED] in 2591ms
SoapUI 5.3.0 TestCaseRunner Summary
-----------------------------
Time Taken: 2599ms
Total TestSuites: 1
Total TestCases: 2 (1 failed)
Total TestSteps: 3
Total Request Assertions: 5
Total Failed Assertions: 1
Total Exported Results: 3
[WARNING] JAR will be empty - no content was marked for inclusion!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:03 min
[INFO] Finished at: 2022-06-15T09:53:54+10:00
[INFO] ------------------------------------------------------------------------
Cleaning up file based variables
00:01
Job succeeded
The mvn tests fail, but the exit code that the command itself returns is probably zero.
It is saying: "I managed to run the tests."
That doesn't help you, though, since you want the job to reflect the result of the tests.
GitLab decides whether a job fails by checking the exit code of the commands it runs, so you have to force mvn to return a non-zero exit code when the tests fail.
You could add the following flag:
-Dmaven.test.failure.ignore=false
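Applied to the job above, the script line would become something like the sketch below; maven.test.failure.ignore is a Surefire/Failsafe property, so whether it reaches the SoapUI runner depends on how the tests are bound in the POM:
  script:
    - 'mvn -f ./TV001/pom.xml $MAVEN_CLI_OPTS -Dmaven.test.failure.ignore=false integration-test'
If the tests run under the Failsafe plugin, note that failures are only enforced in the verify phase, so extending the command to end with verify may also be needed.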

Storm Topology does not start with parallelism hint of 1200

Version Info:
"org.apache.storm" % "storm-core" % "1.2.1"
"org.apache.storm" % "storm-kafka-client" % "1.2.1"
I have a Storm topology with 3 bolts (A, B, C), where the middle bolt takes around 450 ms mean time and the other two bolts take less than 1 ms.
I am able to run the topology with the following parallelism hint values:
A: 4
B: 700
C: 10
But when I increase the parallelism hint of B to 1200, the topology does not start.
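For reference, the parallelism hint is the third argument to setBolt when the topology is wired up; a minimal Java sketch, with BoltA, BoltB, and BoltC standing in for the real bolt classes:
import org.apache.storm.topology.TopologyBuilder;

TopologyBuilder builder = new TopologyBuilder();
// Each hint is the number of executors (threads) Storm creates for that
// component; hints as in the question, groupings assumed.
builder.setBolt("A", new BoltA(), 4).shuffleGrouping("spout");
builder.setBolt("B", new BoltB(), 1200).shuffleGrouping("A"); // was 700
builder.setBolt("C", new BoltC(), 10).shuffleGrouping("B");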
In the topology logs, I see the executor for B being loaded multiple times, like this:
2018-05-18 18:56:37.462 o.a.s.d.executor main [INFO] Loading executor B:[111 111]
2018-05-18 18:56:37.463 o.a.s.d.executor main [INFO] Loaded executor tasks B:[111 111]
2018-05-18 18:56:37.465 o.a.s.d.executor main [INFO] Finished loading executor B:[111 111]
2018-05-18 18:56:37.528 o.a.s.d.executor main [INFO] Loading executor B:[355 355]
2018-05-18 18:56:37.529 o.a.s.d.executor main [INFO] Loaded executor tasks B:[355 355]
2018-05-18 18:56:37.530 o.a.s.d.executor main [INFO] Finished loading executor B:[355 355]
2018-05-18 18:56:37.666 o.a.s.d.executor main [INFO] Loading executor B:[993 993]
2018-05-18 18:56:37.667 o.a.s.d.executor main [INFO] Loaded executor tasks B:[993 993]
2018-05-18 18:56:37.669 o.a.s.d.executor main [INFO] Finished loading executor B:[993 993]
2018-05-18 18:56:37.713 o.a.s.d.executor main [INFO] Loading executor B:[765 765]
2018-05-18 18:56:37.714 o.a.s.d.executor main [INFO] Loaded executor tasks B:[765 765]
But in between, the worker process gets restarted. I don't see any error in the topology logs or the Storm logs. The following are the Storm logs when the worker gets restarted:
2018-05-18 18:51:46.755 o.a.s.d.s.Container SLOT_6700 [INFO] Killing eaf4d8ce-e758-4912-a15d-6dab8cda96d0:766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.204 o.a.s.d.s.BasicContainer Thread-7 [INFO] Worker Process 766258fe-a604-4385-8eeb-e85cad38b674 exited with code: 143
2018-05-18 18:51:47.766 o.a.s.d.s.Slot SLOT_6700 [INFO] STATE RUNNING msInState: 109081 topo:myTopology-1-1526649581 worker:766258fe-a604-4385-8eeb-e85cad38b674 -> KILL msInState: 0 topo:myTopology-1-1526649581 worker:766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.766 o.a.s.d.s.Container SLOT_6700 [INFO] GET worker-user for 766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.774 o.a.s.d.s.Slot SLOT_6700 [WARN] SLOT 6700 all processes are dead...
2018-05-18 18:51:47.775 o.a.s.d.s.Container SLOT_6700 [INFO] Cleaning up eaf4d8ce-e758-4912-a15d-6dab8cda96d0:766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.775 o.a.s.d.s.Container SLOT_6700 [INFO] GET worker-user for 766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.775 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/workers/766258fe-a604-4385-8eeb-e85cad38b674/pids/27798
2018-05-18 18:51:47.775 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/workers/766258fe-a604-4385-8eeb-e85cad38b674/heartbeats
2018-05-18 18:51:47.780 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/workers/766258fe-a604-4385-8eeb-e85cad38b674/pids
2018-05-18 18:51:47.780 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/workers/766258fe-a604-4385-8eeb-e85cad38b674/tmp
2018-05-18 18:51:47.781 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/workers/766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.782 o.a.s.d.s.Container SLOT_6700 [INFO] REMOVE worker-user 766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.782 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/workers-users/766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.783 o.a.s.d.s.BasicContainer SLOT_6700 [INFO] Removed Worker ID 766258fe-a604-4385-8eeb-e85cad38b674
2018-05-18 18:51:47.783 o.a.s.l.AsyncLocalizer SLOT_6700 [INFO] Released blob reference myTopology-1-1526649581 6700 Cleaning up BLOB references...
2018-05-18 18:51:47.784 o.a.s.l.AsyncLocalizer SLOT_6700 [INFO] Released blob reference myTopology-1-1526649581 6700 Cleaning up basic files...
2018-05-18 18:51:47.785 o.a.s.d.s.AdvancedFSOps SLOT_6700 [INFO] Deleting path /home/saurabh/storm-run/supervisor/stormdist/myTopology-1-1526649581
2018-05-18 18:51:47.808 o.a.s.d.s.Slot SLOT_6700 [INFO] STATE KILL msInState: 42 topo:myTopology-1-1526649581 worker:null -> EMPTY msInState: 0
This keeps happening and the topology never starts, although it started perfectly when the parallelism hint for bolt B was 700; there is no other change.
One interesting log line I see here, though I'm not yet sure what it means:
Worker Process 766258fe-a604-4385-8eeb-e85cad38b674 exited with code: 143
Any suggestions?
Edit:
Config:
topology.worker.childopts: -Xms1g -Xmx16g
topology.worker.logwriter.childopts: -Xmx1024m
topology.worker.max.heap.size.mb: 3072.0
worker.childopts: -Xms1g -Xmx16g -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1%ID% -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:+UseG1GC -XX:+AggressiveOpts -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/home/saurabh.mimani/apache-storm-1.2.1/logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -Dorg.newsclub.net.unix.library.path=/usr/share/specter/uds-lib/
worker.gc.childopts:
worker.heap.memory.mb: 8192
supervisor.childopts: -Xms1g -Xmx16g
Edit:
Logs from strace -fp PID -e trace=read,write,network,signal,ipc are in a gist.
I'm not yet able to understand them fully; some relevant-looking lines from them:
[pid 3362] open("/usr/lib/locale/UTF-8/LC_CTYPE", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 3362] kill(1487, SIGTERM) = 0
[pid 3362] close(1)
A quick Google search suggests 143 is the exit code for when the JVM receives a SIGTERM (e.g. Always app Java end with "Exit 143" Ubuntu). You might be running out of memory, or the OS may be killing the process for some other reason. Remember that setting the parallelism hint to 1200 means that you will get 1200 tasks (copies) of bolt B, where you only had 700 before.
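Exit codes above 128 conventionally encode the fatal signal as 128 + the signal number, so 143 = 128 + 15 = SIGTERM; you can confirm the mapping from a shell:
$ kill -l $((143 - 128))
TERM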
I was able to get this running by tweaking the following configurations. It seems it was timing out due to nimbus.task.launch.secs, which was set to 120, so the worker was restarted if it had not started within 120 seconds.
Updated value of some of these configs:
drpc.request.timeout.secs: 1600
supervisor.worker.start.timeout.secs: 1200
nimbus.supervisor.timeout.secs: 1200
nimbus.task.launch.secs: 1200
About nimbus.task.launch.secs:
A special timeout used when a task is initially launched. During launch, this is the timeout used until the first heartbeat, overriding nimbus.task.timeout.secs.
A separate timeout exists for launch because there can be quite a bit of overhead to launching new JVM's and configuring them.

Karaf OSGI Bundles Closing on Startup

I am having issues with Karaf/OSGi: when I try to start Karaf, some of my features loop through starting and closing. Here is a log example:
2017-09-05 15:46:03,344 | INFO | rint Extender: 1 | L3vpnProvider | 224 - l3vpn-feature-impl - 0.1.0.SNAPSHOT | L3vpnProvider Session Initiated
2017-09-05 15:46:03,346 | INFO | rint Extender: 2 | L3vpnDataChangeListenerSR | 171 - org.temp.l3vpn-impl - 0.1.0.SNAPSHOT | Service Request Data Listener created
2017-09-05 15:46:03,349 | INFO | ntAdminThread #7 | BlueprintBundleTracker | 144 - org.opendaylight.controller.blueprint - 0.5.3.Boron-SR3 | Blueprint container for bundle org.temp.l3vpn-feature-impl_0.1.0.SNAPSHOT [224] was successfully created
2017-09-05 15:46:03,353 | INFO | Thread-193 | L3vpnProvider | 224 - l3vpn-feature-impl - 0.1.0.SNAPSHOT | L3vpnProvider Closed
And it literally loops and does not stop. The only workaround I've found is rebuilding repeatedly until it starts without complications.
Here is the feature in the feature.xml file to show how it's set up.
<feature name='odl-l3vpn-feature-impl' version='${project.version}' description='OpenDaylight :: l3vpn :: Network Model :: Impl'>
    <feature version='${mdsal.version}'>odl-mdsal-broker</feature>
    <feature version='${project.version}'>odl-l3vpn-network-model</feature>
    <feature version='${project.version}'>odl-l3vpn</feature>
    <bundle>mvn:org.temp/l3vpn-nc-impl/{{VERSION}}</bundle>
    <!-- lots of other bundles being wrapped -->
</feature>
There is an additional feature, but it has a very similar structure, so I will not post it unless it is needed.
I am at a loss as to what could be causing this. Are there any ideas?
What I've already tried is setting the prerequisite/dependency attribute to true on the odl-mdsal-broker feature, to make sure the problem wasn't the bundle starting too early, but there was no luck with that. Any help would be appreciated.
For anyone who may be having the same issue: this was caused by a startup error where both features' blueprint XML pointed at the same config file, so on startup they got confused, which caused the boot loop. Renaming one of the config files and updating the reference in the blueprint.xml solved the issue.
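For illustration, with Aries Blueprint CM each feature's blueprint XML would declare its own persistent-id, so the two features no longer contend for the same config file (the ids and file names below are hypothetical):
<!-- assumes xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0" -->
<!-- feature 1's blueprint.xml, backed by etc/org.temp.l3vpn.impl.cfg -->
<cm:property-placeholder persistent-id="org.temp.l3vpn.impl" update-strategy="reload"/>
<!-- feature 2's blueprint.xml, backed by etc/org.temp.l3vpn.feature.impl.cfg -->
<cm:property-placeholder persistent-id="org.temp.l3vpn.feature.impl" update-strategy="reload"/>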

Error starting remote Bamboo agent: HTTP status code 500 received in response to fingerprint request

I get the following error when starting a remote Bamboo agent:
INFO | jvm 1 | 2012/11/20 01:15:58 | 2012-11-20 01:15:58,235 INFO [WrapperSimpleAppMain] [RemoteAgentHomeLocatorForBootstrap] Agent home located at '/Users/user9066/bamboo-agent-home'
INFO | jvm 1 | 2012/11/20 01:15:58 | 2012-11-20 01:15:58,248 INFO [WrapperSimpleAppMain] [AgentUuidInitializer] Found agent UUID <snip> in temporary UUID file '/Users/user9066/bamboo-agent-home/uuid-temp.properties'
INFO | jvm 1 | 2012/11/20 01:15:58 | 2012-11-20 01:15:58,378 INFO [WrapperSimpleAppMain] [AgentContext] Requesting fingerprint, url: http://<bamboo-server-ip>:8090/bamboo/AgentServer/GetFingerprint.action?hostName=<remote-agent-ip>&version=3&agentUuid=<snip>
ERROR | wrapper | 2012/11/20 01:15:58 | JVM exited while starting the application.
INFO | jvm 1 | 2012/11/20 01:15:58 | Exiting due to fatal exception.
INFO | jvm 1 | 2012/11/20 01:15:58 | com.atlassian.bamboo.agent.bootstrap.RemoteAgentHttpException: HTTP status code 500 received in response to fingerprint request.
INFO | jvm 1 | 2012/11/20 01:15:58 | at com.atlassian.bamboo.agent.bootstrap.AgentContext.initFingerprint(AgentContext.java:131)
The ports 8085 and 54663 are open. Enabling log4j debug logging does not provide any additional information either.
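You can reproduce the request the agent bootstrap makes by issuing the fingerprint URL from the log by hand; the body of the 500 response (or the Bamboo server's own log) may say why the server rejected it:
# Placeholders kept exactly as in the log above
curl -v "http://<bamboo-server-ip>:8090/bamboo/AgentServer/GetFingerprint.action?hostName=<remote-agent-ip>&version=3&agentUuid=<snip>"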
Has anyone seen this error? Any pointers to resolve this please?
I had a similar error. I seemed to fix it by downloading the alternative remote agent installer called bamboo-agent-4.2.0.jar. You can find it as a small link underneath the main remote agent download button.
Once I had run this jar and successfully authenticated, the original jar worked.
