string param in hudson.plugins.git.BranchSpec does not resolve - jenkins-pipeline

Why is it that one job doesn't resolve a param used in hudson.plugins.git.BranchSpec while another does? Is BRANCH_NAME special in some way, and BACKEND_BRANCH not?
Good Job
<parameterDefinitions>
<hudson.model.StringParameterDefinition>
<name>BRANCH_NAME</name>
<description>The branch to deploy from.</description>
<defaultValue>mybranch</defaultValue>
</hudson.model.StringParameterDefinition>
<hudson.model.StringParameterDefinition>
....
<branches>
<hudson.plugins.git.BranchSpec>
<name>*/${BRANCH_NAME}</name>
</hudson.plugins.git.BranchSpec>
</branches>
Failing Job
<parameterDefinitions>
...
<hudson.model.StringParameterDefinition>
<name>BACKEND_BRANCH</name>
<description>Branch for build deployed to environment</description>
</hudson.model.StringParameterDefinition>
...
<branches>
<hudson.plugins.git.BranchSpec>
<name>*/${BACKEND_BRANCH}</name>
</hudson.plugins.git.BranchSpec>
</branches>
Error
hudson.plugins.git.GitException: Command "/usr/bin/git fetch --tags --progress origin +refs/heads/${BACKEND_BRANCH}:refs/remotes/origin/${BACKEND_BRANCH} --prune" returned
...
stderr: fatal: Couldn't find remote ref refs/heads/${BACKEND_BRANCH}

Related

Spring Cloud Config server: client does not see properties from git uri

I have a server and a client, built by following https://spring.io/guides/gs/centralized-configuration/ But my client does not run, because of:
java.lang.IllegalStateException: Unable to load config data from 'http://localhost:8888'
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.getReferences(StandardConfigDataLocationResolver.java:128)
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.resolve(StandardConfigDataLocationResolver.java:115)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.lambda$resolve$1(ConfigDataLocationResolvers.java:115)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.resolve(ConfigDataLocationResolvers.java:126)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.resolve(ConfigDataLocationResolvers.java:115)
at org.springframework.boot.context.config.ConfigDataLocationResolvers.resolve(ConfigDataLocationResolvers.java:107)
at org.springframework.boot.context.config.ConfigDataImporter.resolve(ConfigDataImporter.java:101)
at org.springframework.boot.context.config.ConfigDataImporter.resolve(ConfigDataImporter.java:93)
at org.springframework.boot.context.config.ConfigDataImporter.resolveAndLoad(ConfigDataImporter.java:81)
at org.springframework.boot.context.config.ConfigDataEnvironmentContributors.withProcessedImports(ConfigDataEnvironmentContributors.java:121)
at org.springframework.boot.context.config.ConfigDataEnvironment.processInitial(ConfigDataEnvironment.java:242)
at org.springframework.boot.context.config.ConfigDataEnvironment.processAndApply(ConfigDataEnvironment.java:230)
at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:97)
at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:89)
at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEnvironmentPreparedEvent(EnvironmentPostProcessorApplicationListener.java:100)
at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEvent(EnvironmentPostProcessorApplicationListener.java:86)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:131)
at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:82)
at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:63)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:117)
at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:111)
at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:62)
at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:362)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1311)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1300)
at bobryakov.dmitry.ConfigurationClientApplication.main(ConfigurationClientApplication.java:9)
Caused by: java.lang.IllegalStateException: File extension is not known to any PropertySourceLoader. If the location is meant to reference a directory, it must end in '/'
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.getReferencesForFile(StandardConfigDataLocationResolver.java:214)
at org.springframework.boot.context.config.StandardConfigDataLocationResolver.getReferences(StandardConfigDataLocationResolver.java:125)
... 30 common frames omitted
As I understand it, this is because the client doesn't see its files through the server's spring.cloud.config.server.git.uri or spring.cloud.config.server.native.search-locations (or spring.cloud.config.server.native.searchLocations) setting. I tried different notations:
file:///C:/Users/u_m167y/Desktop/config
file:///C:/Users/u_m167y/Desktop/config/
file://C:/Users/u_m167y/Desktop/config
C:/Users/u_m167y/Desktop/config
C:\\\\Users\\\\u_m167y\\\\Desktop\\\\config\\\\
But nothing works. I have a git repo in that directory with .git and a-bootiful-client.properties. In the client I have spring.application.name=a-bootiful-client. What is wrong here?
On Windows, you need an extra "/" in the file URL if it is absolute with a drive prefix (for example, file:///${user.home}/config-repo).
Refer to the spring-cloud-config documentation.
EDIT :
The above link has the following listing, which shows a recipe for creating the git repository in the preceding example:
$ cd $HOME
$ mkdir config-repo
$ cd config-repo
$ git init .
$ echo info.foo: bar > application.properties
$ git add -A .
$ git commit -m "Add application.properties"
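Putting the answer together, the config server's own application.properties would then point at the local repo using the triple-slash form (a sketch; the repo path and port 8888 are taken from the question):
# config server's src/main/resources/application.properties
# note the three slashes after "file:" for an absolute Windows path
spring.cloud.config.server.git.uri=file:///C:/Users/u_m167y/Desktop/config
server.port=8888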

Oozie - Setting strategy on DistCp through action configuration

I have a workflow with a distCp action, and it's running fairly well. However, now I'm trying to change the copy strategy and am unable to do that through the action arguments. The documentation is fairly slim on this topic and looking at the source code for the distCp action executor did not help.
When running distCp from the command line, I can use the argument -strategy {uniformsize|dynamic} to set the copy strategy.
Using that logic, I tried to do this in the Oozie action:
<action name="distcp-run" retry-max="3" retry-interval="1">
<distcp xmlns="uri:oozie:distcp-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapreduce.job.queuename</name>
<value>${poolName}</value>
</property>
</configuration>
<arg>-Dmapreduce.job.queuename=${poolName}</arg>
<arg>-Dmapreduce.job.name=distcp-s3-${wf:id()}</arg>
<arg>-update</arg>
<arg>-strategy dynamic</arg>
<arg>${region}/d=${day2HoursAgo}/h=${hour2HoursAgo}</arg>
<arg>${region2}/d=${day2HoursAgo}/h=${hour2HoursAgo}</arg>
<arg>${region3}/d=${day2HoursAgo}/h=${hour2HoursAgo}</arg>
<arg>${nameNode}${rawPath}/${partitionDate}</arg>
</distcp>
<ok to="join-distcp-steps"/>
<error to="error-report"/>
</action>
However, the action fails when I execute it.
From stdout:
...>>> Invoking Main class now >>>
Fetching child yarn jobs
tag id : oozie-1d1fa70383587ae625b6495e30a315f7
Child yarn jobs are found -
Main class : org.apache.hadoop.tools.DistCp
Arguments :
-Dmapreduce.job.queuename=merged
-Dmapreduce.job.name=distcp-s3-0000019-160622133128476-oozie-oozi-W
-update
-strategy dynamic
s3a://myfirstregion/d=21/h=17,s3a://mysecondregion/d=21/h=17,s3a://ttv-logs-eu/tsv/clickstream-clean/y=2016/m=06/d=21/h=17,s3a://mythirdregion/d=21/h=17
hdfs://myurl:8020/data/raw/2016062117
found Distcp v2 Constructor
public org.apache.hadoop.tools.DistCp(org.apache.hadoop.conf.Configuration,org.apache.hadoop.tools.DistCpOptions) throws java.lang.Exception
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.DistcpMain], main() threw exception, Returned value from distcp is non-zero (-1)
java.lang.RuntimeException: Returned value from distcp is non-zero (-1)
at org.apache.oozie.action.hadoop.DistcpMain.run(DistcpMain.java:66)...
Looking at the syslog, it seems that it grabbed the -strategy dynamic argument and tried to put it in the array of source paths:
2016-06-22 14:11:18,617 INFO [uber-SubtaskRunner] org.apache.hadoop.tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[-strategy dynamic, s3a://myfirstregion/d=21/h=17,s3a:/mysecondregion/d=21/h=17,s3a:/ttv-logs-eu/tsv/clickstream-clean/y=2016/m=06/d=21/h=17,s3a:/mythirdregion/d=21/h=17], targetPath=hdfs://myurl:8020/data/raw/2016062117, targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
2016-06-22 14:11:18,624 INFO [uber-SubtaskRunner] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at sandbox/10.191.5.128:8032
2016-06-22 14:11:18,655 ERROR [uber-SubtaskRunner] org.apache.hadoop.tools.DistCp: Invalid input:
org.apache.hadoop.tools.CopyListing$InvalidInputException: -strategy dynamic doesn't exist
So DistCpOptions has a copyStrategy, but it's set to the default uniformsize value.
I've tried moving the argument to the first position, but then both -Dmapreduce arguments end up in the source paths (though -update does not).
How can I, through Oozie workflow configuration, set the copy strategy to dynamic?
Thanks.
Looking at the code, it doesn't seem possible to set the strategy via configuration. Instead of using the distcp-action, you could use a map-reduce action; that way you can configure it however you want.
The Oozie MapReduce Cookbook has examples.
Looking at the DistCp code, the relevant part is around line 237, in createJob():
Job job = Job.getInstance(getConf());
job.setJobName(jobName);
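// the copy strategy takes effect here: getStrategy maps it to an input format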
job.setInputFormatClass(DistCpUtils.getStrategy(getConf(), inputOptions));
job.setJarByClass(CopyMapper.class);
configureOutputFormat(job);
job.setMapperClass(CopyMapper.class);
job.setNumReduceTasks(0);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputFormatClass(CopyOutputFormat.class);
job.getConfiguration().set(JobContext.MAP_SPECULATIVE, "false");
job.getConfiguration().set(JobContext.NUM_MAPS, String.valueOf(inputOptions.getMaxMaps()));
The code above isn't everything you will need; you'll have to look at the distcp source to work the rest out.
So you would need to configure all of the properties yourself in a map-reduce action. This way you could set the InputFormatClass, which is where the strategy setting is used.
You can see the available properties for the InputFormatClass in the distcp properties file here.
The one you need is org.apache.hadoop.tools.mapred.lib.DynamicInputFormat.
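A partial sketch of such a map-reduce action might look like the following (untested; it shows only the strategy-related wiring, and the rest of DistCp's job setup, such as the copy listing and output format, would still have to be reproduced from the source):
<action name="distcp-mr">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<!-- use the new MapReduce API so the class-name properties below take effect -->
<property>
<name>mapred.mapper.new-api</name>
<value>true</value>
</property>
<!-- this input format is what the -strategy flag normally selects -->
<property>
<name>mapreduce.job.inputformat.class</name>
<value>org.apache.hadoop.tools.mapred.lib.DynamicInputFormat</value>
</property>
<property>
<name>mapreduce.job.map.class</name>
<value>org.apache.hadoop.tools.mapred.CopyMapper</value>
</property>
</configuration>
</map-reduce>
<ok to="join-distcp-steps"/>
<error to="error-report"/>
</action>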

Running JAR in Hadoop on Google Cloud using Yarn-client

I want to run a JAR in Hadoop on Google Cloud using yarn-client.
I use this command on the master node of Hadoop:
spark-submit --class find --master yarn-client find.jar
but it returns this error:
15/06/17 10:11:06 INFO client.RMProxy: Connecting to ResourceManager at hadoop-m-on8g/10.240.180.15:8032
15/06/17 10:11:07 INFO ipc.Client: Retrying connect to server: hadoop-m-on8g/10.240.180.15:8032. Already tried 0
time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
What is the problem? In case it is useful, this is my yarn-site.xml:
<?xml version="1.0" ?>
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/yarn-logs/</value>
<description>
The remote path, on the default FS, to store logs.
</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-m-on8g</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>5999</value>
</property>
</configuration>
In your case, it looks like the YARN ResourceManager may be unhealthy for unknown reasons; you can try to fix yarn with the following:
sudo sudo -u hadoop /home/hadoop/hadoop-install/sbin/stop-yarn.sh
sudo sudo -u hadoop /home/hadoop/hadoop-install/sbin/start-yarn.sh
However, it looks like you're using the Click-to-Deploy solution; Click-to-Deploy's Spark + Hadoop 2 deployment actually doesn't support Spark on YARN at the moment, due to some bugs and lack of memory configs. You'd normally run into something like this if you just try to run it with --master yarn-client out-of-the-box:
15/06/17 17:21:08 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
appMasterRpcPort: -1
appStartTime: 1434561664937
yarnAppState: ACCEPTED
15/06/17 17:21:09 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
appMasterRpcPort: -1
appStartTime: 1434561664937
yarnAppState: ACCEPTED
15/06/17 17:21:10 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
appMasterRpcPort: 0
appStartTime: 1434561664937
yarnAppState: RUNNING
15/06/17 17:21:15 ERROR cluster.YarnClientSchedulerBackend: Yarn application already ended: FAILED
15/06/17 17:21:15 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
15/06/17 17:21:15 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
The well-supported way to deploy a cluster on Google Compute Engine with Hadoop 2 and Spark configured to run on YARN is to use bdutil. You'd run something like:
./bdutil -P <instance prefix> -p <project id> -b <bucket> -z <zone> -d \
-e extensions/spark/spark_on_yarn_env.sh generate_config my_custom_env.sh
./bdutil -e my_custom_env.sh deploy
# Shorthand for logging in to the master
./bdutil -e my_custom_env.sh shell
# Handy way to run a socks proxy to make it easy to access the web UIs
./bdutil -e my_custom_env.sh socksproxy
# When done, delete your cluster
./bdutil -e my_custom_env.sh delete
With spark_on_yarn_env.sh, Spark should default to yarn-client, though you can always re-specify --master yarn-client if you want. You can see a more detailed explanation of the flags available in bdutil with ./bdutil --help. Here are the help entries just for the flags I included above:
-b, --bucket
Google Cloud Storage bucket used in deployment and by the cluster.
-d, --use_attached_pds
If true, uses additional non-boot volumes, optionally creating them on
deploy if they don't exist already and deleting them on cluster delete.
-e, --env_var_files
Comma-separated list of bash files that are sourced to configure the cluster
and installed software. Files are sourced in order with later files being
sourced last. bdutil_env.sh is always sourced first. Flag arguments are
set after all sourced files, but before the evaluate_late_variable_bindings
method of bdutil_env.sh. see bdutil_env.sh for more information.
-P, --prefix
Common prefix for cluster nodes.
-p, --project
The Google Cloud Platform project to use to create the cluster.
-z, --zone
Specify the Google Compute Engine zone to use.

Hbase Error java.lang.RuntimeException: Unable to run quorum server

I am not able to start HBase; whenever I start it, I get only HMaster and HRegionServer in jps. HQuorumPeer keeps going missing. I checked the logs and I am getting the error below:
java.lang.RuntimeException: Unable to run quorum server
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:454)
at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:151)
at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:80)
at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
Caused by: java.io.IOException: Failed to process transaction type: 1 error: KeeperErrorCode = NoNode for /hbase
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:153)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
... 4 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.processTransaction(FileTxnSnapLog.java:211)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:151)
The reason you are encountering this error could be that the data directory where ZooKeeper stores snapshots and logs is corrupted.
To keep the HQuorumPeer daemon from dying, you need to provide a path to a new directory where ZooKeeper can store its snapshots. To do this, add the following property in hbase-site.xml:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>location of the newly created folder</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
The default path of "hbase.zookeeper.property.dataDir" is /tmp/hbase-*/zookeeper (e.g., /tmp/hbase-hadoop/zookeeper); remove it and try to start ZooKeeper again.
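For example (a sketch, assuming the default location; adjust the path to your setup):
# remove the corrupted snapshot/log data, then restart the ZK daemon
rm -rf /tmp/hbase-hadoop/zookeeper
hbase-daemon.sh start zookeeper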
Removing all files from the ZooKeeper data directory solved the issue. In my case:
rm /var/lib/zookeeper/version-2/*

"hudson.util.IOException2: Failed to create a temp file"

I came to work today and found my Hudson with this problem! I've tried to research it, but I didn't find anything that helps me.
Here is the full stack trace:
hudson.util.IOException2: Failed to create a temp file on /home/cpcaserver5/.hudson/jobs/SVN/workspace
at hudson.FilePath.createTextTempFile(FilePath.java:966)
at hudson.tasks.CommandInterpreter.createScriptFile(CommandInterpreter.java:124)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:68)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:60)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:630)
at hudson.model.Build$RunnerImpl.build(Build.java:175)
at hudson.model.Build$RunnerImpl.doRun(Build.java:137)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:429)
at hudson.model.Run.run(Run.java:1366)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:145)
Caused by: hudson.util.IOException2: Failed to create a temporary directory in /etc/tomcat6/apache-tomcat-6.0.35/temp
at hudson.FilePath$12.invoke(FilePath.java:955)
at hudson.FilePath$12.invoke(FilePath.java:944)
at hudson.FilePath.act(FilePath.java:758)
at hudson.FilePath.act(FilePath.java:740)
at hudson.FilePath.createTextTempFile(FilePath.java:944)
... 12 more
Caused by: java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.checkAndCreate(File.java:1716)
at java.io.File.createTempFile(File.java:1804)
at hudson.FilePath$12.invoke(FilePath.java:953)
... 16 more
Email was triggered for: Failure
Sending email for trigger: Failure
It looks like you have a permissions problem. Make sure you run Jenkins/Tomcat with appropriate user permissions. Ditto if this happens on a slave: check that the slave process runs as a user with appropriate permissions.
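For example (a sketch; tomcat6 as the service user is an assumption, substitute whatever user actually runs Tomcat/Hudson):
# check who owns the temp directory named in the stack trace
ls -ld /etc/tomcat6/apache-tomcat-6.0.35/temp
# hand it to the service user
sudo chown -R tomcat6:tomcat6 /etc/tomcat6/apache-tomcat-6.0.35/temp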
