I'm trying to set up TorqueBox inside Vagrant on Ubuntu Quantal. I've deployed my app into TorqueBox, but when I try to run bin/standalone.sh, it hangs for a long time after "Setting up Bundler" and then simply says "Killed".
I'm at a complete loss as to how to debug this.
I followed this guide for the installation of TorqueBox: http://torquebox.org/documentation/2.3.0/production-setup.html
Here's the full log: https://gist.github.com/elabs-dev/5411966
Is there a dump file in $TORQUEBOX_HOME/jboss/standalone/bin? If so, it could indicate that the JVM is crashing.
Otherwise, there may not be enough memory available to deploy whatever you're deploying - a bare "Killed" with no stack trace usually means the Linux OOM killer terminated the process. How large is your app?
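If you want to check both possibilities from the shell, something along these lines should work (the paths are the TorqueBox 2.x defaults and may differ on your install):

# Look for JVM crash dumps in the directory standalone.sh was started from
ls $TORQUEBOX_HOME/jboss/standalone/bin/hs_err_pid*.log

# Check whether the kernel's OOM killer terminated the Java process
dmesg | grep -i 'killed process'

# If memory is the problem, adjust the heap settings (e.g. -Xmx) in
# $TORQUEBOX_HOME/jboss/bin/standalone.conf and try again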
I installed ServerAgent 2.2.3.
When I run it from cmd, I get this error:
[screenshot: error saying the slf4j library cannot be found on the CLASSPATH]
I tried starting startAgent.bat, but it closes automatically.
Thanks for looking.
That's very strange, because the error states that the slf4j library cannot be found on the CLASSPATH, and ServerAgent doesn't use this library at all.
Try downloading it again from GitHub and unpacking it somewhere else. Also, if you have the CLASSPATH environment variable set, try clearing/unsetting it:
set CLASSPATH= && startAgent.bat
More information: How to Monitor Your Server Health & Performance During a JMeter Load Test
Alternatively, you can try downloading slf4j.jar and dropping it next to ServerAgent.jar, but that is not part of the normal ServerAgent installation procedure.
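If you go that route, something like the following should work (the slf4j version and download URL are assumptions - any recent slf4j-api jar from Maven Central should do):

rem Download an slf4j-api jar into the same directory as ServerAgent.jar
curl -L -o slf4j-api-1.7.36.jar https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.36/slf4j-api-1.7.36.jar

If startAgent.bat doesn't pick the jar up automatically, you may need to add it to the java command's classpath inside the batch file.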
We are consistently getting an error when starting our Beam Go SDK pipeline (driver program) from a Docker image, although the same program works when started from a local machine / VM instance. We are using the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set to a service account key for our GCP project. When running the job locally, it gets submitted to Dataflow and completes successfully.
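A minimal sketch of that kind of local invocation, assuming the standard Beam Go Dataflow flags (all values below are placeholders, not our real ones):

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
go run main.go \
  --runner=dataflow \
  --project=my-gcp-project \
  --region=us-central1 \
  --staging_location=gs://my-bucket/staging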
DOCKER SETUP:
The build image used is FROM golang:1.14-alpine. When we package the same program with a Dockerfile and try to run it, it fails with the error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"), skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to figure out what's failing. After multiple retries, we were able to rule out any permission / access related issues with the pods. Not sure what else could be the problem here.
After multiple attempts, we decided to start the job manually from a new Debian 10 based VM instance, and it worked. This brought to our attention that we were using an Alpine-based golang image in Docker, which may not have all the dependencies required to start the job.
On the golang Docker Hub page, we found golang:1.14-buster, where buster is the codename for Debian 10. Using that image for the Docker build solved the issue. Self-answering here to help anyone else facing the same issue.
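For anyone looking for a concrete starting point, a minimal sketch of the change (the binary name and paths are placeholders; adjust them to your project):

# Debian-based image instead of Alpine (buster is the Debian 10 codename)
FROM golang:1.14-buster

WORKDIR /app
COPY . .

# Build the pipeline driver program (binary name is a placeholder)
RUN go build -o pipeline .

ENTRYPOINT ["/app/pipeline"]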
I've been running a few containers (approximately a dozen) for a while now. I've hit whatever the hard limit on container/image sizes is in the past and had to clean them up to keep Docker from barfing all over everything, and recently the same thing has happened again.
I have identified several containers and images I can safely remove to reduce its footprint. But just as I was getting ready to do so, Docker crashed on me. And when I attempt to restart it, it crashes with the error message:
Fatal Error
Docker daemon failed to start
[timestamp] dockerd failed to start daemon: error initializing graphdriver: driver not supported
Thus, I can't use any of the command-line tools to remove these images/containers.
As there are running containers that I don't dare delete at this point, this makes it a little difficult to resolve. Is there a way to start Docker (on the Mac) that doesn't actually start any of the containers, so that maybe I can avoid this error?
Is the error message even related to my problem? I'm on Docker 2.3.0.4 if it matters.
You could switch to the overlay2 storage driver instead of the graph driver you are currently using.
You can follow the documentation below to switch:
https://docs.docker.com/storage/storagedriver/overlayfs-driver/
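On Docker Desktop for Mac the daemon configuration can be edited under Preferences, in the Docker Engine / Daemon tab (depending on the version). A minimal sketch, assuming you have no other custom daemon settings:

{
  "storage-driver": "overlay2"
}

Note that switching storage drivers effectively starts from an empty image/container store, so your existing containers and images will not be visible under the new driver until you rebuild or re-pull them.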
I have created a Neo4j database on my Windows machine.
I have transferred the contents of the database directory to my Linux machine. This is because I have the Community edition, which does not support the backup functions.
mtt#mttPC:/var/lib/neo4j/data/log$ sudo service neo4j-service start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
WARNING! You are using an unsupported Java runtime.
* Please use Oracle(R) Java(TM) 7 to run Neo4j Server. Download "Java Platform (JDK) 7" from:
http://www.oracle.com/technetwork/java/javase/downloads/index.html
* Please see http://docs.neo4j.org/ for Neo4j Server installation instructions.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
Starting Neo4j Server...WARNING: not changing user
process [21498]... waiting for server to be ready..... Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
The file messages.log in the database directory says nothing.
Any ideas? Are the Windows and Linux versions of Neo4j compatible? Thank you.
Edit
I have done a fresh install of Neo4j on my Ubuntu machine.
Now I finally get some logs:
2014-05-16 20:01:10.958+0000 ERROR [o.n.k.EmbeddedGraphDatabase]: Startup failed: Component 'org.neo4j.kernel.impl.transaction.XaDataSourceManager#25984c63' was successfully initialized, but failed to start. Please see attached cause exception.: Component 'org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource#3d34dcb' was successfully initialized, but failed to start. Please see attached cause exception.: 'neostore' has a store version number that we cannot upgrade from. Expected 'NeoStore v0.A.0' but file is version 'NeoStore v0.A.2'.
2014-05-16 20:01:10.958+0000 INFO [o.n.k.EmbeddedGraphDatabase]: Shutdown started
It should be related to this, but I am not sure how to proceed. Is the issue related to the fact that, when I copied the database, I had just stopped Neo4j on my Windows machine from the Neo4j window?
There is no reason why a Neo4j database should not be transferable between operating systems. Can you please provide the output of data/log/console.log? My first thought is that you may have permission issues; the files should be readable/writable by the user the Neo4j process runs as.
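A minimal sketch of what to check, assuming the Debian/Ubuntu package layout with data under /var/lib/neo4j/data and the service running as the neo4j user (the paths and user/group names are assumptions; adjust them to your install):

# Show the server log mentioned above
sudo cat /var/lib/neo4j/data/log/console.log

# Make sure the copied store files are owned by the user the service runs as
sudo chown -R neo4j:neo4j /var/lib/neo4j/data/graph.db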
I am currently trying to deploy SmartFoxServer 2X on EC2 using dotCloud. I have been able to detect the private IP of the Amazon web instance, and using the dotCloud tools I have been able to determine the correct port. However, I am having difficulty installing the server proper via the command line so that I can log into it using the AdminTool.
My postinstall is fairly straightforward:
./SFS2X/sfs2x-service start-launchd
I find that on 'dotcloud push' there is a fair amount of promising output in my Cygwin terminal, but the push hangs after reporting that the sfs2x-service has been launched correctly, until it times out.
Consequently, my question is: has anyone found a way to install SFS2X on EC2 via dotCloud successfully? I had partial success with SFS Pro, with a complete push to dotCloud, by calling ./jre/bin/java -jar installer.jar in my postinstall. Do I need to do extra legwork and build an installer jar for SFS2X? What would be the best way to do this?
I do understand that there is a standard approach to deploying SFS2X using RightScale on EC2, but I am interested in deploying on the dotCloud platform.
Thanks in advance.
The reason it is hanging is that you are trying to start your process in the postinstall script, and that is not the correct place to do it. The postinstall script is supposed to finish; if it doesn't, the deployment will time out and then be cancelled.
Once the postinstall script has finished, the platform will complete the rest of your deployment.
See this page for more information about the dotCloud postinstall script:
http://docs.dotcloud.com/0.9/guides/hooks/#post-install
Pay attention to this warning at the end:
Warning:
If your post-install script returns an error (non-zero exit code), or if it runs for more than 10 minutes, the platform will consider that your build has failed, and the new version of your code will not be deployed.
Instead of putting this in the postinstall script, you should add it as a background process, so that it starts up once the deployment process is complete.
See this page for more information on adding background processes to dotCloud services:
http://docs.dotcloud.com/0.9/guides/daemons/
TL;DR: Create a supervisord.conf file in the root of your project and add your service to it.
Example (you will need to change it to fit your situation):
[program:smartfoxserver]
command = /home/dotcloud/current/SFS2X/sfs2x-service start-launchd
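If you also want the daemon restarted automatically and its output captured, a slightly fuller entry might look like this (the log paths are assumptions; adjust them to your layout):

[program:smartfoxserver]
command = /home/dotcloud/current/SFS2X/sfs2x-service start-launchd
autostart = true
autorestart = true
stdout_logfile = /home/dotcloud/current/smartfoxserver.log
stderr_logfile = /home/dotcloud/current/smartfoxserver-error.log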
Also, make sure you have the correct dotCloud service type specified in your dotcloud.yml so that the correct binaries and libraries are installed for your SmartFoxServer application.