Why can I deploy charms in the LXD controller but not in microk8s? - microk8s

I am learning Juju, which I think is amazing, but I am having problems as usual and there is very little around from people explaining how it works. I hope some of you can guide me.
Question: deploying charms only works in the LXD controller; in microk8s it does not work. What can I do? Why is this happening?
Listing controllers, there are two: a microk8s one (already installed with Ubuntu 20.04) and the
LXD hypervisor for localhost (which I don't fully understand).
Controller Model User Access Cloud/Region Models Nodes HA Version
lxd-staging* lxd-staging-model admin superuser localhost/localhost 3 1 none 2.8.7
microk8s-staging microk8s-staging-model admin superuser microk8s/localhost 2 1 - 2.8.7
Listing models, one for each controller
administrator@master-ubuntu:~$ juju models -c lxd-staging
Controller: lxd-staging
Model Cloud/Region Type Status Machines Access Last connection
controller localhost/localhost lxd available 1 admin just now
default localhost/localhost lxd available 0 admin 3 minutes ago
lxd-staging-model* localhost/localhost lxd available 0 admin 31 seconds ago
administrator@master-ubuntu:~$ juju models -c microk8s-staging
Controller: microk8s-staging
Model Cloud/Region Type Status Access Last connection
controller microk8s/localhost kubernetes available admin just now
microk8s-staging-model* microk8s/localhost kubernetes available admin never connected
Deploying in LXD works as expected.
administrator@master-ubuntu:~$ juju models
Controller: lxd-staging
Model Cloud/Region Type Status Machines Access Last connection
controller localhost/localhost lxd available 1 admin just now
default localhost/localhost lxd available 0 admin 10 minutes ago
lxd-staging-model* localhost/localhost lxd available 0 admin 8 minutes ago
administrator@master-ubuntu:~$ juju deploy mysql mysqldb
Located charm "cs:mysql-58".
Deploying charm "cs:mysql-58".
Deploying in microk8s gives an error.
administrator@master-ubuntu:~$ juju models
Controller: microk8s-staging
Model Cloud/Region Type Status Access Last connection
controller microk8s/localhost kubernetes available admin just now
microk8s-staging-model* microk8s/localhost kubernetes available admin never connected
administrator@master-ubuntu:~$ juju deploy mysql mysqldb
ERROR series "xenial" in a kubernetes model not valid

This occurs because the charm hasn't been written with Kubernetes in mind. In an ideal world this shouldn't matter, but the underlying subsystems differ. There is an ongoing effort to correct this, but for now you could use MariaDB for Kubernetes instead.
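For reference, a rough sketch of deploying a Kubernetes-native charm into the microk8s model from the listing above (the mariadb-k8s charm name is an example and may differ in your charm store):

```shell
# Switch to the Kubernetes model shown in the controller listing
juju switch microk8s-staging:microk8s-staging-model

# Deploy a charm written for Kubernetes rather than for machine
# series such as xenial (charm name is an example, not verified)
juju deploy cs:~charmed-osm/mariadb-k8s mariadb

# Check that the unit comes up
juju status
```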

Related

Unable to log in to go micro's remote server

Description
Hello, I have been following a tutorial that sets up my own microservice in the cloud with go micro and kubernetes.
The tutorial has a kubernetes cluster as a prerequisite, so I followed another tutorial by the same author to create a kubernetes cluster.
To sum up the tutorials so you may get the big picture:
I first used Hetzner Cloud to buy some machines in a remote location so I could deploy my Rancher server there. Rancher is a UI tool for creating and managing a kubernetes cluster.
Therefore, I:
Bought a machine on Hetzner Cloud
Deployed my Rancher server there
Went to a public IP to log into Rancher
Made a kubernetes cluster with one master and one worker node.
Everything was successful there; I could download the kubeconfig and manipulate the cluster from the command line.
The next tutorial was on how you deploy go micro framework and your own helloworld microservice in the kubernetes cluster.
The tutorial walks you through deploying go micro's services first, and then shows you the deployment for your own microservice.
I managed to do everything and all of my services are up and running. There is just one problem: I can't log into the micro server with username admin and password micro.
Symptoms
What I can do:
I can list kubernetes pods with kubectl get pods -n micro
I can log into a particular pod (I logged into api like in tutorial) with kubectl exec -it -n micro {{pod}} -- bash
There I can see the micro executable.
From there, the tutorial just said to log in and execute ./micro services, which lists all the services, but I am unable to log in. When I try the default admin/micro combination it says Invalid token provided.
I checked the jwt tokens in MICRO_AUTH_PRIVATE_KEY and MICRO_AUTH_PUBLIC_KEY and they match in every service.
I am able to create another user, after which I get an "access denied to namespace" error when trying to list the services. I am also unable to create rules with that user.
Please help, this has been haunting me for days. 🙏🏽
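To summarize, the command sequence from the tutorial looks roughly like this ({{pod}} is the pod name from the listing, as above):

```shell
# List the go micro pods
kubectl get pods -n micro

# Open a shell inside one of the pods (the api pod in the tutorial)
kubectl exec -it -n micro {{pod}} -- bash

# Inside the pod: log in (enter admin / micro at the prompts),
# then list the running services
./micro login
./micro services
```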

juju not able to spin up lxd container

Currently I'm trying to set up a three-node mysql-innodb-cluster on a MAAS deployment using Juju.
The setup process worked flawlessly and the deployment of other charms worked fine. When deploying the cluster, I would like to place each node in a separate LXD container, but that doesn't work. I get the error:
Did I miss some critical configuration step here?
Did you turn off IPv6? That is generally needed when setting up LXD and Juju; you do that in the lxd init prompts.
If you can, you should wipe the whole machine, or at the least your network bridges for LXD.
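IPv6 can also be disabled on the LXD bridge after the fact, without re-running lxd init (lxdbr0 is the default bridge name and may differ on your setup):

```shell
# Turn off IPv6 on the default LXD bridge
lxc network set lxdbr0 ipv6.address none

# Verify the bridge configuration
lxc network show lxdbr0
```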

Kafka broker on EC2 is not connecting to my zookeeper on my local network

If someone has experienced this issue, please can you help me? I have been racking my brains on this with no success and have pored over as many posts as I can.
Scenario
I have zookeeper, two brokers, and a producer and consumer running on my local network from different IP addresses within my subnet, and everything is perfect. My producer produces, my consumer consumes, life is happy.
I wanted to replicate this on EC2, so I spun up a Kafka broker on EC2 and want that broker to connect to my zookeeper, but for some reason the broker on EC2 is unable to connect.
Now for clarity's sake:
My laptop IP: 1.1.1.1 ((1) in the attached image)
Zookeeper IP: z.z.z.z ((2) in the attached image)
Broker 1 on my laptop: b.b.b.b
So the issue is that from EC2, when I try connecting to zookeeper, I get an error and a timeout. I do not understand what is going on; I have also opened the ports/IP to my laptop and have these in my inbound and outbound rules.
Please can someone help? Also, I don't understand why the Kafka broker on EC2 is trying to connect to
z.z.z.z.us-east-2.compute.internal ....
Forgive me, but I am not sure if/what I need to change.
In the broker config:
I have the zookeeper connection set as z.z.z.z:2181,1.1.1.1:2181
From the EC2 terminal I can ping my laptop's public DNS, but I cannot ping the internal IP on which zookeeper is running; I think this may be a cause as well.
If you can, please help shed some light on this, and if you are in NY, beers on me.
Thank you!
[Screenshot of the EC2 Kafka log]
Solution
I could not find a definitive answer; I went over the docs, so I took a step back and started checking the logs.
This is what I found:
ssh/scp works from my zookeeper machine to EC2, but not from EC2 to the zookeeper machine.
zookeeper is on a Manjaro box with a dedicated IP on my subnet.
So fundamentally the issue was that I did not have a port open.
I opened a port on my router, updated (very important) the IP address in the EC2 server.properties to my public IP with the assigned port, and forwarded that port to the IP of my Kali box (I stopped Manjaro and spun up zookeeper on Kali) on 2181.
It worked like a charm: EC2 was able to connect to zookeeper, all good.
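The server.properties change on the EC2 broker described above would look roughly like this (the addresses and forwarded port are placeholders; the advertised.listeners line is an assumption about how remote clients would reach the broker, not something stated above):

```properties
# zookeeper runs at home behind the router; the router forwards
# <forwarded-port> on the public IP to the Kali box on 2181
zookeeper.connect=<home-public-ip>:<forwarded-port>

# advertised address so remote clients can reach this broker (assumption)
advertised.listeners=PLAINTEXT://<ec2-public-dns>:9092
```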
Then from my Fedora box I fired up a producer and got EC2 to consume, confirmed in the logs plus in CMAK.
But with my Ubuntu consumer, even though I can see that the consumer was able to connect, I am unable to consume messages and the consumer group cannot be created.
So some findings here :
Do not use Manjaro; use Kali. I am not 100% sure of the reason, but after my updates the Kali zookeeper was able to connect to EC2 more easily, and I can configure Kali more easily. I am pretty sure both are the same, but I found it easier to navigate Kali than Manjaro. So I have zookeeper on Kali, and the more I use Kali the more I am loving it: I can run my Python socket scripts from Kali on my router and create a new subnet inside Kali, which I could not from Manjaro; fundamentally I want to run another broker on the same box.
So Kali thumbs up, then Fedora, Linux Mint, then Ubuntu, then Manjaro. Again, I am sure I am doing something very stupid and this is no reflection on these distributions; it's just my personal opinion on using them. Plus, for some reason I get more info with sudo traceroute on Kali than on Manjaro. All in all, I am going to shut Manjaro down and not use it; too much time wasted (IMHO), it's too GUI driven.
On Manjaro, for some reason, the IP was set to DHCP and kept flipping, and I also found that if the IP flips I need to flush uncommitted messages on a topic and create a new cluster and then a new topic in CMAK. I am not sure why this would be the case; I am using 2.7.0.
Also, since I did not want to spend $$ on EC2, I spun up the basic Amazon Linux 2 image and then created my own AMI with Java/Python (note: yum works on Python 2, so you need to relink to Python 2 to do any yum updates and then link back to 3.7) and Kafka. All in all, I had to set the broker memory to 128M/526M because of the obvious limitations and because I wanted to use the free tier from AWS. So I think the reason my consumer connected but failed to consume any messages was throughput and network bandwidth. Also, I think I made a mistake: being in NY, I should have used the N. Virginia zone instead of Ohio.
From what I have read so far, what I was trying to do with port forwarding is essentially not recommended; the Kafka docs specifically state to do this in a trusted network, in a VPN, etc. But I had to test to make sure it worked; it was just a proof of concept.
Within EC2 itself, the system works perfectly, soup to nuts, no issues.
So, issue resolved. I still need to check throughput on Ubuntu for the consumer and why I could not register a consumer group, but all in all what I wanted to test worked. Thank you all.

Hyperledger Composer 0.15.0 sharing network with the local Playground

I was wondering if, since the 0.15.0 release and the switch to cards, anyone has figured out how to access the same network both locally via the CLI and in the Playground, against the same Fabric runtime.
So far, I have been able to install my network's runtime, then start and ping it on the Playground's Fabric after creating the PeerAdmin card using the script that came with Playground.
However, importing the newly deployed network's admin card fails in the Playground. If, instead, I deploy the network via the Playground, export the admin card, download/import it, and then try to composer ping it, it just sits there, timing out after a while. This is macOS High Sierra. So what gives, and what can be done?
Thanks much!
If I understood your issue correctly, this is how you can solve it:
Create your business network in Playground
Export business network card from Playground (download button on card) which produces {nameOfUser}.card file.
Now you transfer this card to wherever you have installed fabric/playground
Run command: composer card import -f {nameOfUser}.card
Now your business card should appear under location {usersHome}/.composer/cards/user@network-name
Inside /cards folder, you should see 2 folders. One is "PeerAdmin" which was created if you followed setup and another one is your imported one
Copy connection.json from the "PeerAdmin" card folder into your new card's folder, replacing the existing one. (This is the most important step.)
Run the command composer-rest-server and use user@network-name (the folder you copied into) as the network card.
With all these steps I successfully created and ran the server. Now you can access it at IP:3000/explorer.
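Steps 4-8 above as commands (the card file name, the PeerAdmin@hlfv1 card name from the dev-server scripts, and the network card name are illustrative):

```shell
# 4. Import the card exported from Playground
composer card import -f admin.card

# 7. Replace the imported card's connection profile with PeerAdmin's
cp ~/.composer/cards/PeerAdmin@hlfv1/connection.json \
   ~/.composer/cards/admin@network-name/connection.json

# 8. Start the REST server against the imported card
composer-rest-server -c admin@network-name
```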
You can share the Business Network Cards between Playground and the CLI. However, it can be a bit more difficult if you are running Playground in a Docker container.
With the CLI you connect to your Fabric servers on localhost, and Docker deals with the port forwarding into the containers for the Fabric.
The Fabric containers (and the Playground, if you start it in a container) connect with each other on 'fake' addresses managed by docker-compose, e.g. orderer.example.com:7050.
So if you start composer-playground using the CLI, any card you export will have localhost as the address of the Fabric servers, and other CLI commands will be able to use it. If, however, you are using Playground in a container, the card will use the fake addresses and you will not be able to connect from the CLI straight away.
I assume you are using Playground in a container and hence having the problem. If you find the connection.json in a location similar to ~/.composer/cards/admin@*xxxxxx*/connection.json and edit the addresses of the Fabric servers to be localhost, you should be able to use the CLI as expected.
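A quick way to rewrite those addresses is sed; here is a sketch on a sample file (the docker-compose hostnames are the usual Fabric dev defaults; in practice you would run the sed line against the connection.json inside the card folder):

```shell
# Demo file standing in for the card's connection.json
cat > connection.json <<'EOF'
{
  "orderers": [ { "url": "grpc://orderer.example.com:7050" } ],
  "peers": [ { "requestURL": "grpc://peer0.org1.example.com:7051" } ]
}
EOF

# Rewrite the container hostnames to localhost; the ports stay the
# same because Docker forwards them into the containers
sed -i.bak -e 's/orderer\.example\.com/localhost/g' \
           -e 's/peer0\.org1\.example\.com/localhost/g' connection.json

grep localhost connection.json
```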

Exposing several services with Vagrant and Kubernetes on my own server

Assume the following stack:
A dedicated server
The server is running Vagrant
Vagrant is running 2 virtual machines master + minion-1 (Kubernetes)
minion-1 is running a pod
Within the pod are 2 containers: webservice and fileservice
Both webservice and fileservice should be accessible from the internet, i.e. from outside, either as web.mydomain.com and file.mydomain.com, or as www.mydomain.com/web/ and www.mydomain.com/file/
Before using Kubernetes, I was using a remote proxy (HAproxy) and simply mapped domain names to an internal IP/port.
Now with Kubernetes, I imagine there is something dedicated to this task, but I honestly have no clue where to start.
I read about "createExternalLoadBalancer", Kubernetes Services and kube-proxy. Should a reverse proxy still be put somewhere (in front of Vagrant, or within a pod)? Also, staying in the scope of this question, is Vagrant a good option for production?
The easiest thing for you to do at the moment is to create a Service of type NodePort, and to configure your HAproxy to point at minion-1 on the allocated node port.
createExternalLoadBalancer is the old, less flexible way to do this; it requires the cloud provider to do the work. Type=NodePort doesn't require anything special from the cloud provider.
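A NodePort service for the webservice container might look like this (names, labels, and ports are illustrative); HAproxy would then forward web.mydomain.com to minion-1 on the chosen nodePort:

```yaml
# Expose the webservice container outside the cluster on a node port
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  type: NodePort
  selector:
    app: mypod          # illustrative pod label
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port (assumption)
      nodePort: 30080   # reachable as minion-1:30080
```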
