I'm currently trying to install a BOSH Director on BOSH Lite - it's clear to me that BOSH Lite already ships with a Director, but I would like to test a release containing a Director "on top of that". Here is my setup:
Everything works fine until I add the warden_cpi job. I would like to configure the Warden CPI to connect to the Warden server running on the virtual machine that hosts BOSH Lite, while keeping it available to the Director. So this is what I tried:
releases:
- name: bosh-warden-cpi
  url: https://bosh.io/d/github.com/cppforlife/bosh-warden-cpi-release?v=29
  sha1: 9cc293351744f3892d4a79479cccd3c3b2cf33c7
  version: latest

instance_groups:
- name: bosh-components
  jobs:
  - name: warden_cpi
    release: bosh-warden-cpi
    properties:
      warden_cpi:
        host_ip: 10.254.50.4 # host IP of the BOSH Lite Vagrant box
        warden:
          connect_network: tcp
          connect_address: 10.254.50.4:7777 # again, host IP and port of garden-linux on the BOSH Lite Vagrant box
        agent:
          mbus: nats://user:password@127.0.0.1:4222
          blobstore:
            provider: dav
            options:
              endpoint: http://127.0.0.1:25250
              user: user
              password: password
where 10.254.50.4 is the IP address of the Vagrant Box and 7777 is the port of garden-linux.
During the deployment, I get this output from bosh vms:
+----------------------------------------------------------+--------------------+-----+---------+--------------+
| VM | State | AZ | VM Type | IPs |
+----------------------------------------------------------+--------------------+-----+---------+--------------+
| bosh-components/0 (37a1938e-e1df-4650-bec6-460e4bc3916e) | unresponsive agent | n/a | small | |
| bosh-director/0 (2bb47ce1-0bba-49aa-b9a3-86e881e91ee9) | running | n/a | small | 10.244.102.2 |
| jumpbox/0 (51c895ae-3563-4561-ba3f-d0174e90c3f4) | running | n/a | small | 10.244.102.4 |
+----------------------------------------------------------+--------------------+-----+---------+--------------+
As an error message from bosh deploy, I get this:
Error 450002: Timed out sending `get_state' to e1ed3839-ade4-4e12-8f33-6ee6000750d0 after 45 seconds
After the error occurs, I can see the VM with bosh vms:
+----------------------------------------------------------+---------+-----+---------+--------------+
| VM | State | AZ | VM Type | IPs |
+----------------------------------------------------------+---------+-----+---------+--------------+
| bosh-components/0 (37a1938e-e1df-4650-bec6-460e4bc3916e) | running | n/a | small | 10.244.102.3 |
| bosh-director/0 (2bb47ce1-0bba-49aa-b9a3-86e881e91ee9) | failing | n/a | small | 10.244.102.2 |
| jumpbox/0 (51c895ae-3563-4561-ba3f-d0174e90c3f4) | running | n/a | small | 10.244.102.4 |
+----------------------------------------------------------+---------+-----+---------+--------------+
But when I ssh into the bosh-components VM, there are no jobs in /var/vcap/jobs.
When I remove the warden_cpi block from the jobs list, everything runs as expected. The full jobs list for my BOSH components VM:
nats
postgres
registry
blobstore
The Director itself runs on another machine. Without the Warden CPI the two machines can communicate as expected.
Can anybody point out how I need to configure the Warden CPI so that it connects to the Vagrant box as expected?
This question is very old: it uses the BOSH v1 CLI whereas the BOSH v2 CLI is now the established standard, and Garden Linux was deprecated a long time ago in favor of Garden runC. Still, having experimented a lot with BOSH-Lite, I'd like to answer it.
First, a semantics remark: you shouldn't say “on top of that” but rather “as instructed by”, because a BOSH Director merely instructs some underlying (API-based) infrastructure to do things that eventually result in workloads being run.
Second, there are two hurdles you might hit here:
The main problem is that the Warden CPI talks to both the Garden backend and the local Linux kernel for setting up various things around those Garden containers. As a direct consequence, you cannot run a Warden CPI inside a BOSH-Lite container.
The filesystem used (here by the long-gone Garden Linux, though nowadays the issue would be similar with Garden runC) might not work inside a Garden container that is itself managed by the pre-existing Warden CPI.
All in all, the main thing to be aware of is that the Warden CPI does not merely talk to the Garden backend through its REST API. More than that, the Warden CPI needs to be co-located with the Linux kernel that runs Garden, in order to make system calls and run local commands for mounting persistent storage and other things.
Related
I have created a NodePort service to forward requests from port 30101 -> 80 -> 8089:
apiVersion: v1
kind: Service
metadata:
  name: gorest-service
spec:
  type: NodePort
  selector:
    app: gorest
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8089
      nodePort: 30101
When I try the service URL http://192.168.49.2:30101, I am unable to access it, but with the URL http://127.0.0.1:64741, retrieved by using minikube service <service>, I am able to access it.
Query: I am unable to understand how http://192.168.49.2:30101 was changed to http://127.0.0.1:64741 as retrieved by minikube service <service>.
% minikube service gorest-service
|-----------|----------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------|-------------|---------------------------|
| default | gorest-service | 8089 | http://192.168.49.2:30101 |
|-----------|----------------|-------------|---------------------------|
🏃 Starting tunnel for service gorest-service.
|-----------|----------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------|-------------|------------------------|
| default | gorest-service | | http://127.0.0.1:64741 |
|-----------|----------------|-------------|------------------------|
🎉 Opening service default/gorest-service in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
In your manifest you map the service's port 80 to port 8089 inside the container (the targetPort).
If you want to access that service from inside Kubernetes, you go through the service on port 80 (or hit the pod directly on port 8089).
BUT you cannot access services inside K8s from the outside world: you need to expose them (for example with a load balancer or an Ingress).
Minikube is meant to be used for development. The 64741 port you see belongs to a tunnel that minikube starts; it lets you test and debug your service from outside the cluster without setting up an Ingress (which might do more than just connect the service with the outside world, for instance authenticating or authorizing requests).
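If all you need during development is to reach the service from your own machine, a port-forward is one more option besides the tunnel (a sketch, assuming kubectl is configured against your minikube cluster):
kubectl port-forward service/gorest-service 8080:80
# then, in another terminal:
curl http://localhost:8080
The forward targets the service's port 80, which in turn maps to port 8089 inside the pod.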
Your understanding is correct: a service exposed using NodePort should be reachable on minikube_IP:NodePort. I first checked this on a Linux VM with minikube installed, and it worked.
Then I noticed you're using MacOS:
Because you are using a Docker driver on darwin
This leads us to some limitations of minikube when it runs with the Docker driver on macOS. Please see this GitHub issue.
There are at least two options (there are more, but these two are simple to do):
use minikube tunnel, which is what you did, and it worked for you.
The tunnel is used to expose the service from inside the VM where minikube is running to the host machine's network. Please refer to the minikube documentation on accessing applications. This is how minikube_IP:NodePort gets transformed into localhost:different_port.
start minikube with the VirtualBox driver to get a reachable IP (if you really need to access your service on the NodePort). Below is the command to start it with the VirtualBox driver (VirtualBox must be installed on your machine):
minikube start --driver=virtualbox
There is a service running on port 8529 on my system (the host) and I want to query its API via a PHP script from inside the XAMPP VM. How can I open the port and allow the VM to access the service? It would be a kind of reverse port forwarding, where a request to localhost:8529 inside the VM is forwarded to the host:
Host (macOS) Guest (Debian)
------------------ ------------------
| localhost:8080 | --> | localhost:80 | Web Server (Apache)
| localhost:8529 | <-- | localhost:8529 | Other Service
------------------ ------------------
Alternatively, could I grant the VM access to the host LAN in order to access the host IP 192.168.178.22?
(However, I am unable to reach the service at http://192.168.178.22:8529 even though it is bound to 0.0.0.0:8529; only localhost:8529 works?!)
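For reference, the kind of reverse forward sketched above could presumably be set up over SSH, just like the existing 8080 forward. A rough sketch, where the user name is hypothetical and 192.168.64.2 is the guest's address from the setup below:
ssh -N -R 8529:localhost:8529 <user>@192.168.64.2
Run from the host, this should make the guest listen on its own localhost:8529 and hand those connections back to localhost:8529 on the host, which is exactly the mapping in the diagram.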
Setup / Environment:
I installed the virtual machine version of XAMPP (xampp-osx-7.4.4-0-vm.dmg) on my MacBook Air (macOS Catalina), started the services (IP address 192.168.64.2), mounted the /opt/lampp volume, and enabled port forwarding for localhost:8080 -> 80 (over SSH). I can reach the XAMPP dashboard in a browser at http://localhost:8080 and http://192.168.64.2:80.
Clicking Open Terminal in the XAMPP app opens a command line for the guest system. It reveals that the VM solution is by Bitnami and that the OS is Linux debian 4.9.0-11-amd64. It doesn't appear to be what Bitnami calls the Server Console, however. How can I actually manage the VM?
I have Google Cloud Functions which I develop and test locally using the Cloud Pub/Sub emulator.
I want to be able to check from within Go code whether the Cloud Pub/Sub emulator is up and running. If it isn't, I would like to tell the developer that they should start the emulator before executing the code locally.
When the emulator starts, I notice the line:
INFO: Server started, listening on 8085
Maybe I can check whether the port is open, or something similar.
I guess you have used this command:
gcloud beta emulators pubsub start
And you got the following output:
[pubsub] This is the Google Pub/Sub fake.
[pubsub] Implementation may be incomplete or differ from the real system.
[pubsub]
[pubsub] INFO: IAM integration is disabled. IAM policy methods and ACL checks are not supported
[pubsub]
[pubsub] INFO: Applied Java 7 long hostname workaround.
[pubsub]
[pubsub] INFO: Server started, listening on 8085
If you take a look at the second INFO message, you'll notice that the emulator runs as a Java process. Now you can run this command:
sudo lsof -i -P -n
This lists all the listening ports and the applications bound to them; the output should look something like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
XXXX
XXXX
java XXX XXX XX IPv4 XXX 0t0 TCP 127.0.0.1:8085 (LISTEN)
Alternatively you can modify the previous command to show only what is happening on the desired port:
sudo lsof -i -P -n | grep 8085
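If you just want a quick, scriptable check that something is listening on the emulator's default address (localhost:8085 here; adjust if you started the emulator with a different --host-port), a simple TCP probe is enough, for example:
nc -z 127.0.0.1 8085 && echo "pubsub emulator is up" || echo "pubsub emulator is down"
From Go code, the same idea amounts to attempting a short TCP connection (with a small dial timeout) to the address in PUBSUB_EMULATOR_HOST and reporting an error to the developer if it fails.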
I'm trying to run an IPython notebook on my remote server (Ubuntu 14.04 64-bit on Amazon EC2).
I can access the IPython notebook via SSH tunnelling, as described in a coderwall blog post:
remote$ipython notebook --no-browser --port=8889
local$ssh -N -f -L localhost:8888:localhost:8889 remote_user@remote_host
However, I can't get simple access over HTTP as described in the official docs or this tutorial:
remote$ipython notebook --no-browser --port=8889
and then point my local browser to http://mypublicip:8889; the browser fails without any warning.
To resolve this problem, I needed to:
Run the notebook server listening on all IP addresses by adding the CLI flag --ip=*:
remote$ipython notebook --no-browser --ip=* --port=8889
Add an inbound rule to the Amazon EC2 instance's security group so that traffic to port 8889 is allowed:
+-----------------+----------+------------+-----------+
| Type | Protocol | Port Range | Source |
+=================+==========+============+===========+
| Custom TCP Rule | TCP | 8889 | 0.0.0.0/0 |
+-----------------+----------+------------+-----------+
Of course, it is now better to add authentication, as the port is listening on all IP addresses.
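For example, with the old ipython notebook CLI, one way to do that (a sketch; the exact flags may differ between IPython versions) is to generate a hashed password and pass it through the NotebookApp.password setting:
python -c "from IPython.lib import passwd; print(passwd())"   # prompts for a password and prints a hash like sha1:...
ipython notebook --no-browser --ip=* --port=8889 --NotebookApp.password='sha1:<hash from above>'
The hash can also be stored in the notebook configuration file instead of being passed on the command line.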
As Monkpit wrote below, your shell might try to glob the * character; in that case you should write --ip=\*. Explicitly setting the IP address to localhost also helped:
ipython notebook --no-browser --ip=localhost --port=7777
I am trying to deploy a Ruby Sinatra api onto port 4567 of an EC2 micro instance.
I have created a Security Group with the following rules (and created the instance with said security group):
--------------------------------
| Ports | Protocol | Source |
--------------------------------
| 22 | tcp | 0.0.0.0/0 |
| 80 | tcp | 0.0.0.0/0 |
| 443 | tcp | 0.0.0.0/0 |
| 4567 | tcp | 0.0.0.0/0 |
--------------------------------
I bound myapp.rb to port 4567 (the default, but set explicitly for clarity):
set :port, 4567
and ran the service:
ruby myapp.rb
[2013-09-05 03:12:54] INFO WEBrick 1.3.1
[2013-09-05 03:12:54] INFO ruby 1.9.3 (2013-01-15) [x86_64-linux]
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from WEBrick
[2013-09-05 03:12:54] INFO WEBrick::HTTPServer#start: pid=1811 port=4567
I ran nmap against localhost while SSH'd into the EC2 instance:
Starting Nmap 6.00 ( http://nmap.org ) at 2013-09-05 03:13 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00019s latency).
PORT STATE SERVICE
4567/tcp open tram
Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
I ran nmap against the external IP while SSH'd into the EC2 instance:
Starting Nmap 6.00 ( http://nmap.org ) at 2013-09-05 03:15 UTC
Nmap scan report for <removed>
Host is up (0.0036s latency).
PORT STATE SERVICE
4567/tcp closed tram
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds
How do I change the state of the port from closed to open?
You’re starting Sinatra in the development environment. When running in development, Sinatra only listens for requests from the local machine.
There are a few ways to change this; the simplest is probably to run in the production environment, e.g.:
$ ruby myapp.rb -e production
You could also explicitly set the bind variable if you wanted to keep running in development:
set :bind, '0.0.0.0' # to listen on all interfaces
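Equivalently, if you keep launching the app with ruby myapp.rb, Sinatra's built-in option parsing should let you pass the bind address (and port) on the command line, something like:
ruby myapp.rb -o 0.0.0.0 -p 4567
Here -o sets the address to bind to and -p the port.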
There are two possible causes for your problem.
Your service is only listening to connections on the loopback interface.
A software firewall is running and is blocking connections from outside on that port.
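A quick way to tell the two apart from inside the instance (a sketch; exact tooling varies by distribution) is to check which address the process is bound to and whether any local firewall rules apply:
sudo netstat -plnt | grep 4567   # 127.0.0.1:4567 means loopback only, 0.0.0.0:4567 means all interfaces
sudo iptables -L -n              # look for rules that drop or reject traffic to port 4567
Given that nmap reported the port as closed rather than filtered on the external IP, the first cause (loopback-only binding) looks more likely here, which is what the development-environment answer above addresses.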