Unable to remote SSH to my docker container on Jelastic - jelastic

I created my custom image based on CentOS and deployed it to Jelastic, but I found I can't SSH into my container.
After some troubleshooting I found SSH was not installed in the container, so I added openssh, but it still wasn't working: I can't run the "service" command in my container. I then tried different ways to get around it, but I still can't get through.
I want to know: am I on the right track? What is the best way to remote SSH into a container created from my custom image? Is SSH required?
Many thanks!
J.

I found an easier way - using an image (lemonbar/centos6-ssh) which already has SSH installed. It is working; I don't know what the difference is, but at least I can move forward!

You can't just launch a process in the background as the only task in the container; something must keep running in the foreground to keep it alive, even if it is a non-daemonized server.
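For example, if sshd is meant to be the container's main process, it can be kept in the foreground as the container's command. A minimal sketch for a CentOS 6 based image (the sshd path is the usual one; check your package):

```dockerfile
# run sshd in the foreground (-D = do not daemonize) as the container's main process
CMD ["/usr/sbin/sshd", "-D"]
```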
Besides that, on CentOS 6.x, to be able to log in over openssh you should disable PAM in sshd_config.
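One way to do that, as a sketch, is a small helper that rewrites the UsePAM setting in a given sshd_config (the path is passed as an argument; restart sshd afterwards):

```shell
# Sketch: force "UsePAM no" in an sshd_config file (path passed as $1).
disable_pam() {
    cfg=$1
    if grep -q '^UsePAM' "$cfg"; then
        # rewrite whatever UsePAM setting is already there
        sed -i 's/^UsePAM.*/UsePAM no/' "$cfg"
    else
        # no UsePAM line yet, so append one
        echo 'UsePAM no' >> "$cfg"
    fi
}
# Usage inside the container, then restart sshd:
#   disable_pam /etc/ssh/sshd_config
#   /etc/init.d/sshd restart
```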
A better option could be not SSHing into the container itself, but into the host, and from there using docker exec -i -t to run a shell in the container.
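From the host, that looks something like the wrapper below (the container name is a placeholder; list yours with docker ps):

```shell
# Sketch: from the Docker host, open an interactive shell in a running container.
# The container name is an argument; find names with: docker ps --format '{{.Names}}'
shell_into() {
    docker exec -i -t "$1" /bin/bash
}
# Usage: shell_into mycontainer
```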

The latest versions of Ubuntu/Debian/CentOS use the systemd init daemon, which has problems running inside Odin Containers without proper patching, so the most recent versions of these OSes won't work when created with Jelastic. Jelastic is aware of the issue and is working on a solution. ETA ~2 weeks.
Also, could you please provide the Docker Hub project page that you want to deploy?

Related

Windows Docker Image keeps hanging up randomly on Azure (web-app)

I won't be able to provide the Dockerfile, so I'll try to provide as much context as I can about the issue. I keep running into issues with Azure and Windows-based Docker containers at random. The app will run fine for weeks with no issues and then suddenly bug out (using the same exact image) and go into an endless cycle of "Waiting for container to be start." followed by "Container failed to reach the container's http endpoint".
I have been able to resolve the issue in the past by re-creating the service (again, using the same exact image), but it seems this time it's not working.
Various Tests:
The same exact Docker image runs locally with no problem
As I mentioned, re-creating the service before did the trick (using the same exact image)
Below are the exact steps I have in place:
Build a Windows-based image using a Docker Compose file. I specify in the compose file to map ports 2000:2000.
Push the Docker Image to a private repository on docker hub
Create a web app service in Azure using the image
Any thoughts on why this happens randomly? My only remaining idea is to re-create the Docker image as a Linux-based image.
Does your app need to access more than a single port? Please note that as of right now, we only allow a single port. More information on that here.
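If the container listens on a single non-default port (2000 here, matching the compose mapping), App Service usually needs to be told which one via the WEBSITES_PORT app setting. A sketch using the az CLI; the resource group and app names are placeholders, and whether this setting applies to your Windows container plan is worth confirming in the Azure docs:

```shell
# Sketch: tell App Service which single port the container exposes.
# my-rg and my-webapp are placeholder names; the port is an argument.
set_single_port() {
    az webapp config appsettings set \
        --resource-group my-rg \
        --name my-webapp \
        --settings WEBSITES_PORT="$1"
}
# Usage: set_single_port 2000
```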
Lastly, please see if turning off the container check, mentioned here, helps to resolve the matter.
Let us know the outcome of these two steps and we can assist you further if needed.

Docker runs old container instead of new one that I'm pulling (AWS)

I recently deployed a Spring Boot application to AWS using Docker.
I'm going crazy trying to update my image/container. I've tried deleting everything (used and unused containers, images, tags, etc.) and pushing everything again: docker system prune, docker rm, docker rmi, even using a different account... It still runs the old version of the project.
It all indicates that there's something going on at the server level. I'm using PuTTY.
Help is much appreciated.
What do you mean by "old container"? Is it some change you made in version control that was never updated in the container? Or just try docker restart <container ID>.
There's a lot to unpack here, so if you can provide more information - that'd be good. But here's a likely common denominator.
If you're using any AWS service (EKS, Fargate, ECS), these just run the Docker image you provide them. So if your image isn't correctly updated, they won't update.
But the same situation occurs with docker run as well. If you keep pointing to the same image (or the image is conceptually unchanged), then you won't see a change.
So I doubt the problem is within Docker or AWS.
Most Likely
You're not rebuilding the spring application binaries with each change
Your Dockerfile is pulling in the wrong binaries
If you are using an image hosting service, like ECR or Nexus, then you need to make sure the image name points to the correct image:tag combination. If you update the image, give it a unique tag, and then reference that tagged image from Docker/AWS.
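For instance, a unique tag per build might be derived from a timestamp or a git SHA; a sketch (the repository name is a placeholder):

```shell
# Sketch: tag each build uniquely so ECS/EKS/docker run pulls the new image.
# "myrepo/myapp" is a placeholder repository name.
unique_tag() {
    date +%Y%m%d%H%M%S   # or: git rev-parse --short HEAD
}
TAG=$(unique_tag)
echo "would build and push myrepo/myapp:$TAG"
# docker build -t "myrepo/myapp:$TAG" .
# docker push "myrepo/myapp:$TAG"
# ...then reference myrepo/myapp:$TAG (not :latest) in the service definition
```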
After you build an image, you can verify that the correct binaries were copied by using docker export <container_id> | tar -xf - <location_of_binary_in_image_filesystem>
That will pull out the binary. Then you can run it locally to test if it's what you wanted.
You can view the entire filesystem with docker export <container_id> | tar -tf - | less

Undeploying Business Network

Using Hyperledger Composer 0.19.1, I can't find a way to undeploy my business network. I don't necessarily want to upgrade to a newer version each time, but rather replace the deployed one with a fix in the JS code, for instance. Is there any replacement for the undeploy command that existed before?
There is no replacement for the old undeploy command, and in fact it was not really an undeploy - it merely hid the old network.
Be aware that every time you upgrade a network it creates a new Docker image and container, so you may want to tidy these up periodically. (You could also try to delete the BNA from the peer servers, but these are very small in comparison to the Docker images.)
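Those leftover chaincode containers and images can be removed on the peer's Docker host; a sketch, assuming the generated names start with "dev-" (the usual Fabric chaincode naming pattern, but check yours first):

```shell
# Sketch: remove chaincode containers/images left behind by network upgrades.
# Assumes the generated names start with "dev-"; verify with `docker ps -a`.
cleanup_dev_images() {
    # stop and remove containers built from dev-* images
    docker ps -a --format '{{.ID}} {{.Image}}' \
        | awk '$2 ~ /^dev-/ {print $1}' \
        | xargs -r docker rm -f
    # remove the dev-* images themselves
    docker images --format '{{.Repository}}:{{.Tag}}' \
        | grep '^dev-' \
        | xargs -r docker rmi
}
# Usage: cleanup_dev_images
```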
It might not help your situation, but if you are rapidly developing and iterating you could try this in the online Playground or local Playground with the Web profile - this is fast and does not create any new images/containers.

Deploy go app to docker in vagrant

I'm currently working on a RESTful API in Go, using Windows and goclipse.
The testing environment consists of a few VMs managed by Vagrant. These machines contain nginx, PostgreSQL, etc. The app should be deployed into Docker on a separate VM.
There is no problem deploying the app the first time using a guide like this one: https://blog.golang.org/docker. I've read a lot of information and guides, but I'm still totally confused about how to automate the deployment process and update the Go app in Docker after changes to the code. At the current stage, code changes happen very often, so deployment should be fast.
Could you please advise me on a correct way to set up some kind of local CI for this case? What approach would be better?
Thanks a lot.
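One low-tech option, until a proper CI server is in place, is a rebuild-and-replace script run on the Docker VM (e.g. via vagrant ssh). A minimal sketch, assuming a Dockerfile in the project root, an image/container named myapi, and the app listening on port 8080 (all placeholder choices):

```shell
# Sketch: rebuild the image with the new code and replace the running container.
# myapi, port 8080, and the Dockerfile location are assumptions.
redeploy() {
    tag=$(date +%s)                          # unique tag per build
    docker build -t "myapi:$tag" . || return 1
    docker rm -f myapi 2>/dev/null           # drop the old container, if any
    docker run -d --name myapi -p 8080:8080 "myapi:$tag"
}
# Usage, on the Docker VM: redeploy
```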

Foreman finish template is not getting resolved when user-data of the image is enabled

I am using Foreman 1.6 with AWS EC2 as a compute resource.
The problem is, Foreman is not able to resolve the finish template when user-data is enabled for the image, and I cannot provision the VM.
When user-data is disabled for the image, Foreman is able to resolve the finish template and provision the VM (though without applying the template, i.e. the Puppet client installation).
Could you guide me on where I am going wrong? I have been struggling with this issue for two weeks.
Thanks,
Sekhar
You need to create a new provisioning script of type "user-data" (or just use the "Kickstart default user data") and associate it with your OS. Finish scripts are not the right "kind" for cloud-init.
