Passing environment variables into Mesos 0.25 - mesos

I have recently upgraded to Mesos mesos-0.25.0-0.2.70 on CentOS 7. In order to set the DOCKER_HOST environment variable for Mesos, I had previously configured it with a file "/etc/mesos-slave/executor_environment_variables", the contents of which read:
{"DOCKER_HOST": "localhost:12375"}
With the upgrade of Mesos and a newer Weave version, this has stopped working. The latest version of Weave listens on a Unix socket before defaulting to a TCP socket, so I have now changed the contents of the aforementioned file to read:
{"DOCKER_HOST": "unix:///var/run/weave/weave.sock"}
Yet when I create a Docker container via Marathon, it gets built in the Mesos cluster without any Weave IP or DNS. I am confused: all that needs to happen is for Mesos to pick up the DOCKER_HOST environment variable, which is not happening.
I'd be happy if anyone can throw pointers my way.

This is an old question, but in case anyone stumbles on it: I was having a similar issue where containers started by Mesos (via Marathon) were not registering with WeaveDNS. To get this to work, when starting the Mesos agent I used the flag "--docker_socket" and set it to the DOCKER_HOST path output by the command "weave env".
My containers started registering with WeaveDNS after this.
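For reference, here is a minimal sketch of wiring that up through the Mesosphere-style config directory (one file per agent flag, like the executor_environment_variables file above); the socket path is Weave's usual default and should be confirmed with weave env:
# write the socket path so the agent starts with --docker_socket=/var/run/weave/weave.sock
echo "/var/run/weave/weave.sock" | sudo tee /etc/mesos-slave/docker_socket
# restart the agent so the flag takes effect
sudo systemctl restart mesos-slave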

Related

Jenkins through docker: How to configure own host as agent for jenkins?

I'm using Jenkins with pipelines on a Mac mini. All builds are working fine with Docker agents (backend, frontend, Android app, etc.).
The only thing I haven't been able to achieve is to use the Mac mini itself as a build agent/slave for the iOS app (I need to build on OS X). Jenkins itself runs in a Docker container as well, so I would need to connect to the host (the OS of the Mac mini) and use that as an agent.
I know one option would be to install Jenkins directly instead of using Docker, but I would prefer to keep Jenkins running in a Docker container.
Does anyone have experience with this, or know of any good documentation on how to set it up?
Go to Manage Jenkins > Manage Nodes > New Node.
Configure a node.
Go to the list of nodes.
Select your newly configured node. It should be offline at this moment.
Run the java command displayed on the interface on your host machine.
Your host machine is now a slave agent.
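For context, the java command shown on the node's status page generally has the shape below; this is only illustrative, since the exact URL and secret are specific to your Jenkins instance and node:
java -jar slave.jar -jnlpUrl http://<jenkins-host>:8080/computer/<node-name>/slave-agent.jnlp -secret <node-secret>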

Using a Windows VM from Jenkins through vsphere

I'm trying to reset-and-launch a Windows VM (in vSphere) during a Jenkins job. I successfully installed the vSphere Cloud Plugin. I've followed instructions to set up the Windows machine as a jenkins-mvn-slave, and have it set up to run as a service.
If I click on the button in Jenkins for Launch Slave Agent, I can see (in vsphere) that the VM does a revert snapshot, and then it does a power on virtual machine. If I attach to the machine, I can see that the Jenkins service starts automatically. However, back in Jenkins, it tells me that the Slave did not come online in allowed time.
Some key settings for my slave:
Force VM launch: Checked
Wait for VMTools: Not checked
Delay between launch and boot complete: 120
Secondary launch method: Launch slave agents via Java Web Start
Versions:
Jenkins: 1.596.2
vSphere: 5.5.0
Windows: Server 2012 R2 Standard, Build 9600
vSphere plugin: 2.7
What am I missing?
I've done a lot of messing around since I posted, but I think the following is what I was doing wrong. I first got the VM working as a normal slave agent. Once I had that working, I tried to set up the same machine as a vsphere-cloud-slave-agent. I hadn't realized that setting up a host as a slave agent is "agent-name specific".
So, I uninstalled the Jenkins service, launched the "vsphere cloud slave agent", logged into the machine, and ran javaws (as specified in the previously mentioned instructions).
A couple of other gotchas that I encountered (not relevant to the initial post, but maybe relevant to someone who reads this):
I originally installed git with a password manager. Unfortunately, since Jenkins jobs aren't interactive, it was hanging on the git clone command. I tried uninstalling and re-installing git, but that didn't fix the problem for whichever user the Jenkins slave was running as. I ended up having to revert to a previous slave image and install git from there. (I probably could have also figured out which user was running the Jenkins slave and entered the desired password there.) A sketch of a non-interactive workaround follows after these gotchas.
I wanted to run a clean VM for each job. I never fully figured this one out. Setting Availability to "Take this slave on-line when in demand and off-line when idle" was a good start. However, if I set the times to 0 and 0, the machine was constantly rebooting. If I set the times to 1 and 1, the machine does mostly what I want, unless there are back-to-back jobs queued to run.
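As a hedged sketch of the non-interactive git workaround mentioned above (GIT_TERMINAL_PROMPT needs git 2.3 or later, and the repository URL is just a placeholder), this makes the clone fail fast instead of hanging on a password prompt:
GIT_TERMINAL_PROMPT=0 git clone https://example.com/your-repo.git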

Issues with two author instances of AEM of different versions in local

We're upgrading from 5.6 to 6.1. I have 5.6 set up on port 4502; I changed the port on the 6.1 jar from 4502 to 4512 and started both at the same time. But it seems like both http://localhost:4512/ and http://localhost:4502/ take me to AEM 6.1.
Are there other configs that need a change to have two versions up and running at the same time?
You can run multiple instances of AEM on your local computer. In fact, as an engineer you should definitely run at least one author instance and one publish instance on your local computer so that you can test your work in both environments before committing any code.
You can rename the jar to cq-author-4502.jar or cq-publish-4503.jar, replacing the port number as needed. By naming the file cq-author-4512.jar and running java -jar cq-author-4512.jar, the instance will start up on port 4512.
If you want to start your instance using the start script, you need to update that script in the /crx-quickstart/bin directory. If you're on Linux or Mac update the start file. If you're on Windows update the start.bat file. Follow the instructions and replace 4502 with 4512 and author with publish if necessary. The /crx-quickstart/bin directory will be available after you run the jar file the first time.
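As an alternative to the filename convention, the quickstart jar also accepts an explicit port and run mode on the command line; a hedged example follows (the jar name is illustrative, and the flags should be checked against your AEM version's documentation):
java -jar cq-quickstart-6.1.jar -p 4512 -r author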
First of all, there is no useful or logical reason to run two instances that do not follow the author|publish configuration.
But you can start, for example, a test|author configuration:
Open ../crx-quickstart/bin/start.sh or .bat
Change CQ_PORT=4504
Change CQ_RUNMODE='test'
if [ -z "$CQ_PORT" ]; then
CQ_PORT=4504
fi
if [ -z "$CQ_RUNMODE" ]; then
CQ_RUNMODE='test'
fi
Open ../crx-quickstart/conf/sling.properties
Replace author with test:
sling.run.mode.install.options=test,publish|...
And start the instances in any order that you like.
It might only be a caching issue in your browser.
When 6.1 has been started on 4502 and you then run another AEM/CQ version on that port (stop 6.1, start 5.6.1, or something like that), your browser will sometimes show the cached 6.1 login screen, or at least some of the cached 6.1 images. Press SHIFT-Reload and all should be well.

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a confound though that I am running on a windows machine.
Their "getting started" documentation in github says you have to run Linux to use kubernetes.
As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean Docker, etcd, and Kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and boot2docker VMs to run anything docker related.
There is no "Docker for Windows" (not yet).
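For reference, the "default" VM referred to below is typically created along these lines (driver and name are the usual defaults):
docker-machine create --driver virtualbox default
eval $(docker-machine env default)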
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized setup; a sketch of the remaining pieces follows below.
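For completeness, the other pieces from the same era of the getting-started guide look roughly like this (image tags are illustrative and should match whichever release you follow):
docker run -d --net=host gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2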
If you're able to run docker on windows, it would probably work. I haven't tried it on windows personally.

Installing Kubernetes on mac with vagrant and virtualbox

This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md I assumed it would be trivial. I executed these lines:
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
This created a master VM and a minion, but Kubernetes seems to have failed to start on the master. On the master, /var/log/salt/master is full of Python traceback errors like this:
2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'.
2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread
self.httpserver.start()
File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start
raise socket.error(msg)
error: No socket could be created
Vagrant is version 1.7.3. VirtualBox is version 4.3.30
Have I made an obvious stupid mistake?
I don't yet know the fix but I know what is going wrong since it happens to me as well:
OS X 10.10.3
Vagrant 1.7.4
VirtualBox 4.3.30
Kubernetes 1.0.1
When I run the default configuration of this (which creates one "master" and one "minion" VM) I see that the static IP address is not being assigned to the "eth1" interface, and I also see that the Salt API server is sitting in what appears to be an infinite retry loop because it is trying to listen on that IP address.
Also, the following message happened during boot:
[vagrant@kubernetes-master ~]$ dmesg | grep eth1
[ 9.321496] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
So basically, the static IP address didn't get assigned because eth1 wasn't ready when the system first booted, and Salt is waiting for it to get assigned.
I could fix this after boot by sshing to the box using "vagrant ssh" and running the command:
sudo /etc/init.d/network restart
on each host.
This "fixes" eth1 by assigning the static IP address, and after that Salt begins to do its thing, installs Docker, boots various containers, and so on.
What I don't know is how to make this work every time without manual intervention. It appears to be some sort of a race condition between Vagrant and VirtualBox.
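A hedged way to script that manual step from the host after vagrant up (the machine names are the guide's defaults and may differ in your setup):
for m in master minion-1; do
  vagrant ssh "$m" -c "sudo /etc/init.d/network restart"
done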
If you just want to kick the tires with Kubernetes, I'd recommend installing boot2docker and then following the Running kubernetes locally via Docker getting started guide. Once you are comfortable interacting with the Kubernetes API and want a more complex local setup, you can then work on installing Vagrant.
If the Vagrant instructions aren't working, you should also feel free to file a bug in the github repository.
The tutorial Robert pointed to is really easy to run. Just change the version to 0.21.2 (maybe 0.21.3 works too).
Otherwise, if you prefer a Vagrant solution, try the pires cluster on Vagrant. It runs with almost nothing to change.
Running Kubernetes inside VirtualBox requires 4 networks and some adjustments to the configuration:
The VirtualBox HOST ONLY network will be the network used to access the Kubernetes master and nodes from the Mac or PC.
The NAT Network to download packages from the Internet.
The internal connections between Kubernetes pods use a TUN tunnel network.
The Kubernetes Cluster IP Network is a private IP range used inside the cluster to give each Kubernetes service a dedicated IP
The Vagrantfile needs to pass the node public IPs to the Ansible roles that configure Kubernetes, so they can set the KUBELET_EXTRA_ARGS environment variable with the public IP of each node (required for reading logs using kubectl); a sketch follows below.
NodePort needs to be used to publish applications running inside the Kubernetes cluster as Load Balancers are not available in VirtualBox.
See the full example and download the code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube); it has been tested on Ubuntu but should work on a Mac as well.
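As a sketch of the kubelet setting described in that list, the extra argument typically ends up in a drop-in similar to the one below (the path is Debian/Ubuntu style and the IP is just an illustrative host-only address):
# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=192.168.50.11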
