How do you give the master node a RESTful endpoint in IBM Cloud Private (ICP)?

So far (v2.1 Beta) the solution is to:
1) Go to the upper right-hand corner and click the user profile (e.g. admin).
2) A menu unfolds with four options: Sign out, Change Password, Configure Client, and About.
3) Click Configure Client, which presents a dialog of four or five kubectl invocations that include user-specific tokens.
4) Copy and paste that text into your command-line terminal and press Enter.
You should then be all set. If you run into a blank dialog, refresh the entire browser page and try again; it will populate.
Has anyone found a better way to do this?
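For reference, the commands behind that Configure Client dialog are ordinary `kubectl config` invocations. A rough sketch of their shape, with placeholder values (the server URL, cluster and context names, and token below are illustrative, not values from any real dialog):

```shell
# Shape of the commands the Configure Client dialog typically produces.
# SERVER and TOKEN are illustrative placeholders; paste the real values
# from your own dialog instead of running this verbatim.
SERVER="https://mycluster.icp:8443"
TOKEN="<token-from-dialog>"

CMDS="kubectl config set-cluster mycluster --server=$SERVER --insecure-skip-tls-verify=true
kubectl config set-credentials admin --token=$TOKEN
kubectl config set-context mycluster-context --cluster=mycluster --user=admin
kubectl config use-context mycluster-context"

printf '%s\n' "$CMDS"
```

Whatever the dialog emits, the final use-context step is what makes subsequent kubectl calls hit the master's REST endpoint.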

With ICP 2.1 Beta, you can actually install an application from the App Center that provides a web terminal for kubectl, helm, and also calicoctl. The detailed steps are:
1. Install `local-charts/web-terminal` from the App Center.
2. Go to Workload -> Application -> default-web-terminal-web.
3. Click to open the Endpoint page.
4. Use admin/admin as the credentials.

In 2.1.0 Beta 3, the first version of the command-line tools is available for IBM Cloud Private.
These tools can be installed on your platform of choice from the "Tools > Command Line" option in the web console.
bx pr login -a https://mycluster.icp:8443 -u admin --skip-ssl-validation
Login method invoked
API endpoint: https://mycluster.icp:8443
Password>
Authenticating...
OK
Select an account:
1. ICP Account (9335b8949793c6fb1b96cf2a103a9d50)
Enter a number> 1
Targeted account: ICP Account (9335b8949793c6fb1b96cf2a103a9d50)
bx pr clusters
OK
Name        ID                                 State      Created                    Masters   Workers   Datacenter
mycluster   00000000000000000000000000000001   deployed   2017-10-13T03:28:53+0000   1         3         default
bx pr cluster-config mycluster
Configuring kubectl: /Users/mdelder/.bluemix/plugins/icp/clusters/mycluster/kube-config
Cluster "master.cfc" set.
Context "master.cfc-context" set.
User "master.cfc-user" set.
Context "master.cfc-context" set.
Switched to context "master.cfc-context".
OK
Cluster mycluster configured successfully.
Now you can use kubectl:
kubectl get nodes
NAME          STATUS   AGE   VERSION
10.10.25.27   Ready    2d    v1.7.3-7+154699da4767fd
10.10.25.28   Ready    2d    v1.7.3-7+154699da4767fd
10.10.25.72   Ready    2d    v1.7.3-7+154699da4767fd
10.10.25.73   Ready    2d    v1.7.3-7+154699da4767fd
You can also discover information about your cluster:
bx pr help
NAME:
bx pr - IBM Cloud Private Service.
USAGE:
bx pr command [arguments...] [command options]
COMMANDS:
api View or set the API endpoint and API version for the service.
cluster-config Download the Kubernetes configuration and configure kubectl for a specified cluster.
cluster-get View details for a cluster.
clusters List all the clusters in your account.
init Initialize the IBM Cloud Private plugin with the API endpoint.
load-helm-chart Loads a Helm chart archive to an IBM Cloud Private cluster.
load-ppa-archive Load Docker images and Helm charts compressed file that you downloaded from Passport Advantage.
login Log user in.
master-get View the details about a master node.
masters List all master nodes in an existing cluster.
worker-get View the details about a worker node.
workers List all worker nodes in an existing cluster.
help
Enter 'bx pr help [command]' for more information about a command.
bx pr masters mycluster
OK
ID                      Public IP     Private IP    State
mycluster-00000000-m1   10.10.25.73   10.10.25.73   deployed
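Since bx pr clusters and bx pr cluster-config compose, you can script the whole hand-off. A sketch, assuming the two-line header shown above; the SAMPLE variable stands in for live output (on a real system, use SAMPLE=$(bx pr clusters)):

```shell
# Sketch: configure kubectl for every cluster `bx pr clusters` reports.
# SAMPLE stands in for live output; on a real system use SAMPLE=$(bx pr clusters).
SAMPLE='OK
Name        ID                                 State      Created                    Masters   Workers   Datacenter
mycluster   00000000000000000000000000000001   deployed   2017-10-13T03:28:53+0000   1         3         default'

# Skip the "OK" line and the header row, then take the first column.
NAMES=$(printf '%s\n' "$SAMPLE" | awk 'NR > 2 {print $1}')

for name in $NAMES; do
  echo "bx pr cluster-config $name"   # drop the echo to run it for real
done
```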

Related

How do I do an On Demand Backup for an IBM Cloud database

I have an Elasticsearch deployment in the IBM Cloud and I want to take regular on-demand backups from it. Is there a way of initiating backups using the command line?
IBM Cloud Databases takes regular daily backups of all its databases, but you cannot choose the backup schedule. If you want to create backups more often or on your own schedule, you can use the IBM Cloud CLI backup-now command.
After installing the IBM Cloud CLI, you will need to add the Cloud Databases plugin with:
ibmcloud plugin install cloud-databases
Log into the IBM Cloud CLI with:
ibmcloud login --sso
Follow the on-screen instructions to log in.
You can then list all the database deployments in your account with:
ibmcloud cdb ls
#Name                          Location   State
#Databases for PostgreSQL-76   us-south   inactive
#testelastic                   eu-gb      active
#Databases for MySQL-9j        us-south   active
#elastic-target                eu-gb      active
To back up one of those databases do:
ibmcloud cdb backup-now testelastic
#Key                   Value
#ID                    crn:v1:bluemix:public:databases-for-elasticsearch:eu-gb:a/xyz/abc
#Deployment ID         crn:v1:bluemix:public:databases-for-elasticsearch:eu-gb:a/abc/def::
#Description           Creating an on-demand backup
#Created At            2023-02-01T10:09:12Z
#Status                running
#Progress Percentage   0
#Progress Percentage   50
#Status                completed
#Progress Percentage   100
There is more information on backup policies in this document.
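Because backup-now is a plain CLI call, putting it on your own schedule is just a cron entry away. A minimal sketch (the 02:00 time and the deployment name are examples, and cron must run as a user with an active IBM Cloud CLI login, e.g. via an API key):

```shell
# Sketch: run an on-demand backup every night at 02:00.
# DEPLOYMENT is an example name; adjust the schedule to taste.
DEPLOYMENT="testelastic"
CRON_LINE="0 2 * * * ibmcloud cdb backup-now $DEPLOYMENT"

echo "$CRON_LINE"
# Install it with: (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```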

Gcloud and Kubectl see me logged in as two different users

Mac here, in case it makes a difference. I am on 2 separate GCP/gcloud/GKE/Kubernetes projects and have two different gmails for each of them:
Project 1: flim-flam, where my email is myuser1#gmail.example.com (pretend it's a Gmail address)
Project 2: foo-bar, where my email is myuser2#gmail.example.com
I log into my myuser1#gmail.example.com account via gcloud auth login and confirm I am logged in as that account. For instance, I go to the GCP console and verify (in the UI) that I am in fact logged in as myuser1#gmail.example.com. Furthermore, when I run gcloud config configurations list I get:
NAME        IS_ACTIVE   ACCOUNT                     PROJECT     COMPUTE_DEFAULT_ZONE   COMPUTE_DEFAULT_REGION
flim-flam   True        myuser1#gmail.example.com   flim-flam
foo-bar     False       myuser2#gmail.example.com   foo-bar
From my flim-flam project, when I run kubectl delete ns flimflam-app I get permission errors:
Error from server (Forbidden): namespace "flimflam-app" is forbidden: User "myuser2#gmail.example.com" cannot delete resource "namespaces" in API group "" in the namespace "flimflam-app": requires one of ["container.namespaces.delete"] permission(s).
So gcloud thinks I'm logged in as myuser1 but kubectl thinks I'm logged in as myuser2. How do I fix this?
gcloud and kubectl share user identities, but their configuration lives in different files. Running gcloud auth login does not update existing kubectl configurations. The former (on Linux) is stored in ${HOME}/.config/gcloud and the latter in ${HOME}/.kube/config.
I don't have a copy on hand but, if you check ${HOME}/.kube/config, it likely references the other Google account. You can either duplicate the users entry and reference it from the context, or edit the existing users entry.
Better yet, use gcloud container clusters get-credentials to update kubectl's configuration with the currently active gcloud user. This command updates ${HOME}/.kube/config for you.
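Concretely, the fix looks something like the sketch below (CLUSTER and ZONE are placeholders for your own cluster; the final kubectl line just prints which user the refreshed context uses):

```shell
# Sketch: realign kubectl with the active gcloud account.
# CLUSTER and ZONE are placeholders; substitute your own values.
CLUSTER="my-gke-cluster"
ZONE="us-central1-a"

CMDS="gcloud config configurations activate flim-flam
gcloud container clusters get-credentials $CLUSTER --zone $ZONE --project flim-flam
kubectl config view --minify -o jsonpath='{.contexts[0].context.user}'"

printf '%s\n' "$CMDS"   # review, then run each command in order
```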

Unable to access H2O Flow using H2O_connection_url

I am using both H2O and Sparkling Water on Amazon Clusters. I have been using Qubole and have been able to access the Flow UI on that platform. I am currently testing Databricks and Sagemaker, but I am unable to access the Flow UI using either platform (using port 54321). I am using H2O_cluster_version: 3.32.1.3. Do I need to use another port?
Getting the right Flow URL can be tricky because of changes in the base URL on Databricks (DBC). More recent releases of Sparkling Water print the proper URL within Databricks, so make sure you try the latest version.
You should get the URL from the output printed when you create an H2OContext. The port would be 9009; if you want to change it, you can use spark.ext.h2o.client.web.port.
You can also find the link in the "Spark UI" -> "Sparkling Water" tab.
The format would be something like: https://your-dbc-domain/driver-proxy/o/xxxxxxxx/yyyyyyy/9009/flow/index.html
From the docs for reference:
Flow is accessible via the URL printed out after H2OContext is started. Internally we use open port 9009. If you have an environment where a different port is open on your Azure Databricks cluster, you can configure it via spark.ext.h2o.client.web.port or the corresponding setter on H2OConf.
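If a different port is open on your cluster, the override mentioned above is a single Spark conf. A sketch (9009 is the default named in the docs; your_app.py is a placeholder for your own application):

```shell
# Sketch: the Spark conf that moves the Flow UI port (9009 is the default).
FLOW_PORT_CONF="--conf spark.ext.h2o.client.web.port=9009"

echo "spark-submit $FLOW_PORT_CONF your_app.py"   # your_app.py is a placeholder
```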

How can I create an external node in WAS 8.5.5.x?

I have two machines, and on both I have WAS 8.5.5.x ND. I want to create an app server on Computer1 and, on Computer2, a cluster for this server.
On Computer1 I've created a deployment manager (Dmgr01), a node agent (node hostnameNode01, cell hostnameCell01), and a server (mysrv on the same cell and node), and it works fine. But how can I create an external node/custom profile on another host (Computer2's hostname) to create a cluster for hostnameCell01 on Computer2 using Dmgr01?
Create a standalone application server on Computer2, then follow these steps to federate it into the cell: https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/tagt_svr_conf_nodes.html They show you how to use the administrative console on the deployment manager to add the new node.
If you want to do it from Computer2 instead, you can use the addNode command: https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rxml_addnode.html If you go that route, make sure that when you create the standalone application server profile you give it a different cell name from the cell you actually intend it to be federated into. There's a list of best practices for addNode here: https://www.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rxml_nodetips.html
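The addNode route from Computer2 boils down to one command against the deployment manager. A sketch, with every value a placeholder for your environment (8879 is the usual deployment-manager SOAP port, and the profile path below is only a typical ND install location):

```shell
# Sketch: federate the Computer2 profile into the Dmgr01 cell from Computer2.
# Host, path, port, and credentials are placeholders for your environment.
DMGR_HOST="computer1.example.com"
PROFILE_BIN="/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin"

ADD_NODE_CMD="$PROFILE_BIN/addNode.sh $DMGR_HOST 8879 -username wasadmin -password <password>"

echo "$ADD_NODE_CMD"   # review, then run on Computer2
```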

Websphere Message broker multi-instance message flow

I am looking for a command to change the number of message flow instances at run time. I know it is quite easy with MB Explorer, but I am more interested in the server-side mqsi commands. Ours is an AIX environment with Message Broker 8 installed.
The number of instances a message flow has on the execution group is configured in the BAR file, before deployment.
If you want to change the number of additional instances you will need to redeploy your flow.
You can use the mqsiapplybaroverride command to change the configuration of the flow in the BAR file, and the mqsideploy command to redeploy the BAR.
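Put together, the override-and-redeploy cycle is two commands. A sketch in which the broker, execution group, flow, and file names are all placeholders (additionalInstances is the BAR property being overridden):

```shell
# Sketch: raise additionalInstances in the BAR, then redeploy it.
# Broker, execution group, flow, and file names are placeholders.
OVERRIDE_CMD="mqsiapplybaroverride -b my.bar -o my-override.bar -m MyFlow#additionalInstances=4"
DEPLOY_CMD="mqsideploy MYBROKER -e default -a my-override.bar"

printf '%s\n' "$OVERRIDE_CMD" "$DEPLOY_CMD"   # review, then run on the broker host
```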
As of IIB v9 you can control the number of instances dynamically at runtime by assigning a workload management policy.
See the description here:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bn34262_.htm
Once you have assigned a policy, you can change it using the mqsichangepolicy command, specifying an XML policy document with a different number of instances.
Alternatively, you can use the web UI to change it directly on the running broker.
