How to use Kaniko in Jenkins Pipeline Script to build Docker Image - jenkins-pipeline

I want to use Kaniko in my Jenkins script (Groovy) file to build an image.
I don't have any other configuration in my Jenkins; I want to use only my Jenkins script.
The script looks like this:
podTemplate(label: 'jenkins-kaniko',
  containers: [
    containerTemplate(name: 'kaniko', image: 'gcr.io/kaniko-project/executor:debug', command: '/busybox/cat', ttyEnabled: true)
  ],
  volumes: [
    secretVolume(mountPath: '/home/jenkins/.aws/', secretName: 'aws-secret'),
    configMapVolume(mountPath: '/kaniko/.docker/', configMapName: 'docker-config')
  ])
{
  node ('jenkins-kaniko') {
    environment {
      registry = ""
      registryCredential = ''
      imageName = 'jenkins_slave'
      dockerImage = ''
      //dockerHome = tool 'docker_latest'
      //PATH = "$dockerHome/bin:$PATH"
    }
    stages {
      stage('Prepare') {
        steps {
          echo "CheckOut"
          script {
            //here is checkout git code
          }
        }
      }
      stage('Building image') {
        /* agent {
             label 'jenkinskaniko'
           }
        */
        steps {
          dir('jenkins-slave') {
            echo 'build image'
            container('kaniko') {
              sh "/kaniko/executor --dockerfile `pwd`/Dockerfile `pwd` --insecure --skip-tls-verify --cache=true --destination= jenkins_slave:${env.BUILD_ID}"
            }
          }
        }
      }
      stage('Deploy Image') {
        steps {
          script {
            docker.withRegistry(registry) {
              dockerImage.push()
            }
          }
        }
      }
    }
  }
}
But I have tried it several times and got this error:
Created Pod: kubernetes crpcc-jenkins-prodslaves/jenkins-kaniko-hrfk9-j06mk
[Warning][crpcc-jenkins-prodslaves/jenkins-kaniko-hrfk9-j06mk][FailedScheduling] 0/33 nodes are available: 33 node(s) didn't match node selector.
[Warning][crpcc-jenkins-prodslaves/jenkins-kaniko-hrfk9-j06mk][FailedScheduling] 0/33 nodes are available: 33 node(s) didn't match node selector.
any solutions?

Can you try the following script:
def label = "goweb-1.$BUILD_NUMBER-pipeline"
podTemplate(label: label, containers: [
    containerTemplate(name: 'kaniko', image: 'gcr.io/kaniko-project/executor:debug', command: '/busybox/cat', ttyEnabled: true)
  ],
  volumes: [
    secretVolume(mountPath: '/root/.docker/', secretName: 'dockercred')
  ]) {
  node(label) {
    stage('Stage 1: Build with Kaniko') {
      container('kaniko') {
        // triple quotes so the multi-line command parses as a single Groovy string
        sh '''/kaniko/executor --context=git://github.com/repository/project.git \
              --destination=docker.io/repository/image:tag \
              --insecure \
              --skip-tls-verify \
              -v=debug'''
      }
    }
  }
}
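Regarding the FailedScheduling warnings in the question: "0/33 nodes are available: 33 node(s) didn't match node selector" means Kubernetes never scheduled the agent pod at all, so the pipeline syntax inside the node block is not even reached yet. That selector usually comes from the Jenkins Kubernetes cloud configuration (or an inherited pod template) rather than from the script itself. If you cannot change the cloud defaults, the podTemplate step accepts a nodeSelector parameter you can set explicitly. A minimal sketch; the kubernetes.io/os=linux label is only an example and should be replaced with a label your nodes actually carry:
podTemplate(label: 'jenkins-kaniko',
  // Example only: set a node selector that matches labels your worker nodes actually have
  nodeSelector: 'kubernetes.io/os=linux',
  containers: [
    containerTemplate(name: 'kaniko', image: 'gcr.io/kaniko-project/executor:debug', command: '/busybox/cat', ttyEnabled: true)
  ],
  volumes: [
    secretVolume(mountPath: '/home/jenkins/.aws/', secretName: 'aws-secret'),
    configMapVolume(mountPath: '/kaniko/.docker/', configMapName: 'docker-config')
  ]) {
  node('jenkins-kaniko') {
    // stages as in the question
  }
}
Also note that once the pod does schedule, the original sh step will still fail: in --destination= jenkins_slave:${env.BUILD_ID} the image reference must follow the = with no space (and should include the registry host you intend to push to).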

Related

Elasticsearch Reenable shard allocation ineffective?

I am running a 2-node cluster on version 5.6.12.
I followed this rolling upgrade guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/rolling-upgrades.html
After reconnecting the last upgraded node to the cluster, the health status remained yellow due to unassigned shards.
Re-enabling shard allocation seemed to have no effect:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
My query results when checking cluster health:
GET _cat/health:
1541522454 16:40:54 elastic-upgrade-test yellow 2 2 84 84 0 0 84 0 - 50.0%
GET _cat/shards:
v2_session-prod-2018.11.05 3 p STARTED 6000 1016kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 3 r UNASSIGNED
v2_session-prod-2018.11.05 1 p STARTED 6000 963.3kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 1 r UNASSIGNED
v2_session-prod-2018.11.05 4 p STARTED 6000 1020.4kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 4 r UNASSIGNED
v2_session-prod-2018.11.05 2 p STARTED 6000 951.4kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 2 r UNASSIGNED
v2_session-prod-2018.11.05 0 p STARTED 6000 972.2kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 0 r UNASSIGNED
v2_status-prod-2018.11.05 3 p STARTED 6000 910.2kb xx.xxx.xx.xxx node-25
v2_status-prod-2018.11.05 3 r UNASSIGNED
Is there another way to get shard allocation working again so I can get my cluster health back to green?
The other node within my cluster had a "high disk watermark [90%] exceeded" warning message so shards were "relocated away from this node".
I updated the config to:
cluster.routing.allocation.disk.watermark.high: 95%
After restarting the node, shards began to allocate again.
This is a quick fix - I will also attempt to increase the disk space on this node to ensure I don't lose reliability.
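As an alternative to editing the config file and restarting, the disk watermark thresholds are dynamic cluster settings in 5.x, so the same change can be applied at runtime. A sketch mirroring the 95% value used above (a transient setting is lost on a full cluster restart):
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}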

Unable to move Elasticsearch shards

I have an Elasticsearch cluster with two nodes and eight shards. I am in the situation where all primaries are on one node and all replicas are on the other.
Running the command:
http://xx.xx.xx.1:9200/_cat/shards
returns this result:
myindex 2 r STARTED 16584778 1.4gb xx.xx.xx.2 node2
myindex 2 p STARTED 16584778 1.4gb xx.xx.xx.1 node1
myindex 1 r STARTED 16592755 1.4gb xx.xx.xx.2 node2
myindex 1 p STARTED 16592755 1.4gb xx.xx.xx.1 node1
myindex 3 r STARTED 16592009 1.4gb xx.xx.xx.2 node2
myindex 3 p STARTED 16592033 1.4gb xx.xx.xx.1 node1
myindex 0 r STARTED 16610776 1.3gb xx.xx.xx.2 node2
myindex 0 p STARTED 16610776 1.3gb xx.xx.xx.1 node1
I am trying to swap around certain shards by posting this command:
http://xx.xx.xx.1:9200/_cluster/reroute?explain
with this body:
{
  "commands" : [
    {
      "move" : {
        "index" : "myindex",
        "shard" : 1,
        "from_node" : "node1",
        "to_node" : "node2"
      }
    },
    {
      "allocate_replica" : {
        "index" : "myindex",
        "shard" : 1,
        "node" : "node1"
      }
    }
  ]
}
It doesn't work, and the only "NO" I get in the list of decisions in the explanations is:
{
  "decider": "same_shard",
  "decision": "NO",
  "explanation": "the shard cannot be allocated on the same node id [xxxxxxxxxxxxxxxxxxxxxx] on which it already exists"
},
It's not fully clear to me if this is the actual error, but there is no other negative feedback. How can I resolve this and move my shard?
This is expected.
Why would you do that? Primaries and replicas are doing the same job.
What problems do you think this would solve?
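That explanation is consistent with the shard listing in the question: node2 already holds the replica of shard 1, so the same_shard decider refuses to place shard 1's primary there as well. If you want to see every decider's verdict for a particular shard copy, the cluster allocation explain API (available from Elasticsearch 5.0 onward; adjust if you are on an older release) reports them. A minimal sketch using the index and shard from the question:
GET _cluster/allocation/explain
{
  "index": "myindex",
  "shard": 1,
  "primary": true
}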

How to delete unassigned shards in elasticsearch?

I have only one node on one computer, and the index has 5 shards without replicas. Here are some parameters describing my Elasticsearch node (healthy indices are omitted from the following list):
GET /_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
red open datas 5 0 344999414 0 43.9gb 43.9gb
GET _cat/shards
datas 4 p STARTED 114991132 14.6gb 127.0.0.1 Eric the Red
datas 3 p STARTED 114995287 14.6gb 127.0.0.1 Eric the Red
datas 2 p STARTED 115012995 14.6gb 127.0.0.1 Eric the Red
datas 1 p UNASSIGNED
datas 0 p UNASSIGNED
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
14 65.9gb 710gb 202.8gb 912.8gb 77 127.0.0.1 127.0.0.1 Eric the Red
3 UNASSIGNED
Although deleting created shards doesn't seem to be supported, as mentioned in the comments above, reducing the number of replicas to zero for the indices with UNASSIGNED shards might do the job, at least for single-node clusters.
PUT /{my_index}/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
You can try deleting an unassigned shard in the following way (not sure if it works for data indices; it works for Marvel indices):
1) Install the elasticsearch-head plugin. Refer to Elastic Search Head Plugin Installation.
2) Open the elasticsearch-head URL in your browser. From there you can easily check which shards are unassigned, along with other related info. It will display information about the shard, for example:
{
  "state": "UNASSIGNED",
  "primary": true,
  "node": null,
  "relocating_node": null,
  "shard": 0,
  "index": ".marvel-es-2016.05.18",
  "version": 0,
  "unassigned_info": {
    "reason": "DANGLING_INDEX_IMPORTED",
    "at": "2016-05-25T05:59:50.678Z"
  }
}
From here you can copy the index name, i.e. .marvel-es-2016.05.18.
3) Now you can run this query in Sense:
DELETE .marvel-es-2016.05.18
Hope this helps!
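If you would rather not install the head plugin, the _cat API can list the unassigned shards directly. A sketch (the h= column selection works on recent versions; on very old releases the unassigned.reason column may not exist, but plain GET _cat/shards?v still shows the UNASSIGNED state):
GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason
Any row whose state is UNASSIGNED shows the index name you would pass to DELETE, exactly as in step 3 above.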

elasticsearch - remove a second elasticsearch node and add another node, get unassigned shards

As a newcomer to Elasticsearch, I have only been using it for two weeks and I have just done a silly thing.
My Elasticsearch has one cluster with two nodes: one master/data node (version 1.4.2) and one non-data node (version 1.1.1). Because of the version conflict, I decided to shut down and delete the non-data node and then install another data node (version 1.4.2). See my image for an easier picture; node3 was then renamed node2.
Then I checked the cluster status:
{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 725,
  "active_shards": 1175,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 273
}
Check the cluster state
curl -XGET http://localhost:9200/_cat/shards
logstash-2015.03.25 2 p STARTED 3031 621.1kb 10.146.134.94 node1
logstash-2015.03.25 2 r UNASSIGNED
logstash-2015.03.25 0 p STARTED 3084 596.4kb 10.146.134.94 node1
logstash-2015.03.25 0 r UNASSIGNED
logstash-2015.03.25 3 p STARTED 3177 608.4kb 10.146.134.94 node1
logstash-2015.03.25 3 r UNASSIGNED
logstash-2015.03.25 1 p STARTED 3099 577.3kb 10.146.134.94 node1
logstash-2015.03.25 1 r UNASSIGNED
logstash-2014.12.30 4 r STARTED 10.146.134.94 node2
logstash-2014.12.30 4 p STARTED 94 114.3kb 10.146.134.94 node1
logstash-2014.12.30 0 r STARTED 111 195.8kb 10.146.134.94 node2
logstash-2014.12.30 0 p STARTED 111 195.8kb 10.146.134.94 node1
logstash-2014.12.30 3 r STARTED 110 144kb 10.146.134.94 node2
logstash-2014.12.30 3 p STARTED 110 144kb 10.146.134.94 node1
I have read the related questions below and tried to follow them, but with no luck. I also commented on the answer with the error I got.
ElasticSearch: Unassigned Shards, how to fix?
https://t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode.html
elasticsearch - what to do with unassigned shards
http://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html#cluster-reroute
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "logstash-2015.03.25",
      "shard" : 4,
      "node" : "node2",
      "allow_primary" : true
    }
  } ]
}'
I get
"routing_nodes":{"unassigned":[{"state":"UNASSIGNED","primary":false,"node":null,
"relocating_node":null,"shard":0,"index":"logstash-2015.03.25"}
And I followed the answer in https://stackoverflow.com/a/23781013/1920536
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "all"
  }
}'
but it had no effect.
What should I do?
Thanks in advance.
Update: when I check the pending tasks, it shows:
{"tasks":[{"insert_order":88401,"priority":"HIGH","source":"shard-failed
([logstash-2015.01.19][3], node[PVkS47JyQQq6G-lstUW04w], [R], s[INITIALIZING]),
**reason [Failed to start shard, message** [RecoveryFailedException[[logstash-2015.01.19][3]: **Recovery failed from** [node1][_72bJJX0RuW7AyM86WUgtQ]
[localhost][inet[/localhost:9300]]{master=true} into [node2][PVkS47JyQQq6G-lstUW04w]
[localhost][inet[/localhost:9302]]{master=false}];
nested: RemoteTransportException[[node1][inet[/localhost:9300]]
[internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[[logstash-2015.01.19][3] Phase[2] Execution failed];
nested: RemoteTransportException[[node2][inet[/localhost:9302]][internal:index/shard/recovery/prepare_translog]];
nested: EngineCreationFailureException[[logstash-2015.01.19][3] **failed to create engine];
nested: FileSystemException**[data/elasticsearch/nodes/0/indices/logstash-2015.01.19/3/index/_0.si: **Too many open files**]; ]]","executing":true,"time_in_queue_millis":53,"time_in_queue":"53ms"}]}
If you have two nodes like:
1) Node-1 - ES 1.4.2
2) Node-2 - ES 1.1.1
then follow these steps to debug.
1) Stop every Elasticsearch instance on Node-2.
2) Install Elasticsearch 1.4.2 on the new Node-2 and change elasticsearch.yml to match the master node configuration, especially these three settings:
cluster.name: <same as master node>
node.name: <node name for Node-2>
discovery.zen.ping.unicast.hosts: <master node IP>
3) Restart Elasticsearch on Node-2.
4) Verify the Node-1 logs.
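Separately, the pending-task output above ends in FileSystemException ... Too many open files, so recovery may keep failing even after the version mismatch is fixed until the file descriptor limit on the data node is raised. A sketch of the usual checks; the 65536 value is a common recommendation, not something taken from the question:
# Check the limit the Elasticsearch process actually got
# (max_file_descriptors is reported by the nodes API; depending on version it appears
# under _nodes/process or _nodes/stats/process)
curl 'localhost:9200/_nodes/process?pretty'

# Raise the limit for the user running Elasticsearch, e.g. in /etc/security/limits.conf,
# then restart that node:
#   elasticsearch  -  nofile  65536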

How to move shards around in a cluster

I have a 5-node cluster with 5 indices and 5 shards for each index. Currently the shards of each index are evenly distributed across the nodes. I need to move the shards belonging to 2 different indices from a specific node to a different node in the same cluster.
You can use the cluster reroute API. A sample command looks like this:
curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands" : [ {
    "move" : {
      "index" : "test", "shard" : 0,
      "from_node" : "node1", "to_node" : "node2"
    }
  } ]
}'
This moves shard 0 of the index test from node1 to node2.
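After the reroute is acknowledged, you can confirm the new placement by listing the shards of that index again; a small sketch using the same index name as the example:
curl -XGET 'localhost:9200/_cat/shards/test?v'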
