I am trying to create a subgraph with the graph-cli and publish it to the Graph network. Here is my subgraph yaml file:
specVersion: 0.0.4
schema:
  file: ./schema.graphql
dataSources:
  – kind: ethereum
    name: Contract
    network: mainnet
    source:
      address: “0xc944e90c64b2c07662a292be6244bdf05cda44a7”
      abi: Contract
      startBlock: 11446769
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.5
      language: wasm/assemblyscript
      entities:
        – Transfer
      abis:
        – name: Contract
          file: ./abis/Contract.json
      eventHandlers:
        – event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/contract.ts
But when I try to build the subgraph with the npm run codegen command, I get the following error:
But I am not sure how there can be an indentation problem. How can this be resolved?
You are using the wrong dash for your YAML list items. You should use a plain hyphen (-) rather than an en dash (–).
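For example, the start of your dataSources block should look like this (just a sketch of the first few lines; it is also worth replacing the curly quotes around the address with straight quotes, since YAML only treats straight quotes as string delimiters):
dataSources:
  - kind: ethereum
    name: Contract
    network: mainnet
    source:
      address: "0xc944e90c64b2c07662a292be6244bdf05cda44a7"
      abi: Contract
      startBlock: 11446769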
I have added the SonarQube operator (https://github.com/RedHatGov/sonarqube-operator) to my cluster, and when I try to spin up a Sonar instance from the operator, the container terminates with this failure message:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [253832] is too low, increase to at least [262144]
The problem lies in the fact that the operator refers to the label:
tuned.openshift.io/elasticsearch
which leaves the necessary tuning to me, but there is no Elasticsearch operator or tuning on this pristine cluster.
I have created a Tuned resource for Sonar, but for whatever reason it is not being applied. It currently looks like this:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: sonarqube
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=A custom OpenShift SonarQube profile
      include=openshift-control-plane
      [sysctl]
      vm.max_map_count=262144
    name: openshift-sonarqube
  recommend:
  - match:
    - label: tuned.openshift.io/sonarqube
      match:
      - label: node-role.kubernetes.io/master
      - label: node-role.kubernetes.io/infra
      type: pod
    priority: 10
    profile: openshift-sonarqube
and on the deployment I set the label:
tuned.openshift.io/sonarqube
But for whatever reason it is not picked up and I still get the above error message. Does anyone have an idea, and/or are these the necessary steps? I followed the documentation (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/scalability_and_performance/using-node-tuning-operator) and it didn't work with the customized example. I also tried nesting the match inside match, but that didn't work either.
Any suggestions?
Maybe try this:
oc create -f - <<EOF
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-elasticsearch
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Optimize systems running ES on OpenShift nodes
      include=openshift-node
      [sysctl]
      vm.max_map_count=262144
    name: openshift-elasticsearch
  recommend:
  - match:
    - label: tuned.openshift.io/elasticsearch
      type: pod
    priority: 20
    profile: openshift-elasticsearch
EOF
(Got it from: https://github.com/openshift/cluster-node-tuning-operator/blob/master/examples/elasticsearch.yaml)
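One thing worth double-checking: with type: pod the Node Tuning Operator matches labels on the Pods themselves, so the label has to end up in the Deployment's pod template, not only on the Deployment object. A minimal sketch of the relevant fragment (assuming a plain SonarQube Deployment):
# hypothetical Deployment fragment; only the pod template labels matter here
spec:
  template:
    metadata:
      labels:
        tuned.openshift.io/elasticsearch: ""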
I have deployed Elasticsearch, Kibana and Enterprise Search to my local Kubernetes Cluster via this official guide and they are working fine individually (and are connected to the Elasticsearch instance).
Now I wanted to set up Kibana to connect with Enterprise Search like this:
I tried it with localhost, but that obviously did not work in Kubernetes.
So I tried the service name inside Kubernetes, but now I am getting this error:
The Log from Kubernetes is the following:
{"type":"log","#timestamp":"2021-01-15T15:18:48Z","tags":["error","plugins","enterpriseSearch"],"pid":8,"message":"Could not perform access check to Enterprise Search: FetchError: request to https://enterprise-search-quickstart-ent-http.svc:3002/api/ent/v2/internal/client_config failed, reason: getaddrinfo ENOTFOUND enterprise-search-quickstart-ent-http.svc enterprise-search-quickstart-ent-http.svc:3002"}
So the question is: how do I configure my Kibana enterpriseSearch.host so that it will work?
Here are my deployment yaml files:
# Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    enterpriseSearch.host: 'https://enterprise-search-quickstart-ent-http.svc:3002'
# Enterprise Search
apiVersion: enterprisesearch.k8s.elastic.co/v1beta1
kind: EnterpriseSearch
metadata:
  name: enterprise-search-quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    ent_search.external_url: https://localhost:3002
I encountered much the same issue, but in a development environment based on docker-compose.
I fixed it by setting the ent_search.external_url value to the same value as enterpriseSearch.host.
In your case, I guess your 'Enterprise Search' deployment YAML file should look like this:
# Enterprise Search
apiVersion: enterprisesearch.k8s.elastic.co/v1beta1
kind: EnterpriseSearch
metadata:
  name: enterprise-search-quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    ent_search.external_url: 'https://enterprise-search-quickstart-ent-http.svc:3002'
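If Kibana still reports ENOTFOUND after that, it may be worth verifying the in-cluster DNS name of the Enterprise Search service, e.g. (assuming everything is deployed in the default namespace):
kubectl get svc enterprise-search-quickstart-ent-http
# the fully qualified in-cluster name would then be
#   enterprise-search-quickstart-ent-http.default.svc:3002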
I'm trying to run a simple Pipeline on OpenShift Online. Here are my steps:
oc new-project ess
Content of bc.yaml:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "yngwuoso-pipeline"
spec:
source:
git:
uri: "https://github.com/yngwuoso/spring-boot-rest-example.git"
strategy:
type: JenkinsPipeline
oc create -f bc.yaml
The result is:
Error from server (Forbidden): error when creating "bc.yaml": buildconfigs.build.openshift.io "yngwuoso-pipeline" is forbidden: unrecognized build strategy: build.BuildStrategy{DockerStrategy:(*build.DockerBuildStrategy)(nil), SourceStrategy:(*build.SourceBuildStrategy)(nil), CustomStrategy:(*build.CustomBuildStrategy)(nil), JenkinsPipelineStrategy:(*build.JenkinsPipelineBuildStrategy)(nil)}
Can anyone tell me what's missing?
If you want to run a pipeline build based on Git source code, first create a BuildConfig with a source strategy for the Git repo, then create a pipeline BuildConfig to control the overall build process.
The following is a sample for your understanding; it might not work in your environment as-is, but you can customize the configuration below for your setup.
The BuildConfig for the source strategy (GitHub) is as follows:
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    app: yngwuoso-pipeline
  name: yngwuoso-git-build
spec:
  failedBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: yngwuoso-pipeline-image:latest
  runPolicy: Serial
  source:
    git:
      uri: https://github.com/yngwuoso/spring-boot-rest-example.git
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: redhat-openjdk18-openshift:1.3
        namespace: openshift
    type: Source
  triggers:
  - type: ConfigChange
  - type: ImageChange
The pipeline BuildConfig that triggers the above BuildConfig based on the Git repo is as follows:
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    name: yngwuoso-pipeline
  name: yngwuoso-pipeline
spec:
  runPolicy: Serial
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node(''){
          stage 'Build by S2I'
          openshiftBuild(namespace: 'PROJECT NAME', bldCfg: 'yngwuoso-git-build', showBuildLogs: 'true')
        }
    type: JenkinsPipeline
  triggers:
  - github:
      secret: gitsecret
    type: GitHub
  - generic:
      secret: genericsecret
    type: Generic
You should configure the GitHub webhook using the authentication secret in the pipeline BuildConfig; refer to GitHub Webhooks for more information.
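Once the pipeline BuildConfig exists, oc can print the webhook payload URL to register on the GitHub side; roughly (the exact host depends on your cluster):
oc describe bc yngwuoso-pipeline
# Look for a line similar to:
#   Webhook GitHub:
#     URL: https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<project>/buildconfigs/yngwuoso-pipeline/webhooks/<secret>/github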
I am using the golang lib client-go to connect to a running local Kubernetes cluster. To start with I took code from the example: out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/serviceAccount to use, or configure kubectl in a way that everyone can connect to its API.
Here's the state of my kubectl setup, shown through some command results:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

$ kubectl get serviceAccounts
NAME        SECRETS   AGE
default     1         3d
test-user   1         1d

$ kubectl describe serviceaccount test-user
Name:                test-user
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   test-user-token-hxcsk
Tokens:              test-user-token-hxcsk
Events:              <none>

$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer could be a little outdated but I will try to give more perspective/baseline for future readers that encounter the same/similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of a token in the /var/run/secrets/kubernetes.io/serviceaccount location when using the in-cluster-client-configuration. It could also be related to using in-cluster-client-configuration code outside of the cluster (for example, running this code directly on a laptop or in a plain Docker container).
You can use the following commands to troubleshoot your issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml:
look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
look for: containers.mounts and volumeMounts where Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
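For completeness, a minimal out-of-cluster sketch, modeled on the official out-of-cluster-client-configuration example (error handling shortened; with recent client-go versions the List call takes a context):
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build the REST config from the local kubeconfig instead of the
	// in-cluster service account token.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Simple smoke test: list pods across all namespaces.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}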
Additional information:
I've added additional information for reference on further troubleshooting when the code is run inside a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the serviceAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
  go-serviceaccount-token-4rst8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  go-serviceaccount-token-4rst8
    Optional:    false
If it's not:
<-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e. the master, in order to provide that token, and I'm guessing it fails to do so for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!
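As a quick check, this prints which API server your current context actually points at:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'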
I am using the below YAML file to create the pod, but the kubectl command gives the error below.
How can I correct this error?
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  env:
  - name: MESSAGE
    value: "hello world"
  command: ["/bin/echo"]
  args: ["$(MESSAGE)"]
kubectl create -f commands.yaml
error: error validating "commands.yaml": error validating data: found invalid field env for v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
I am following the example from this page:
https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/
Thanks
-SR
Your YAML is syntactically correct, but it results in an incorrect data structure for Kubernetes. In YAML, indentation affects the structure of the data. See this.
I think this should be correct:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]