Jenkinsfile default workspace name is too long

I am currently setting up Jenkins with Bitbucket.
I have created a new Jenkins project as a multibranch project.
The Jenkinsfile is hosted inside the Git repository.
How can I force Jenkins to generate a shorter workspace name than the default one?
E:\jenkins\workspace\reposName-BrancheName-ZKIZ7BNGL6RTDKLQAQ7QR4FKZMOB3DDAVZ564BLWT2BY5ZV652VA
How can I get rid of ZKIZ7BNGL6RTDKLQAQ7QR4FKZMOB3DDAVZ564BLWT2BY5ZV652VA?
This is my Jenkinsfile:
#!/usr/bin/env groovy
import java.util.regex.Pattern

env.PATH = env.PATH + ";c:\\Windows\\System32"

def call(String label = null, Closure body) {
    node(label) {
        def path = pwd() // def, because it is reassigned to a String[] below
        String branchName = env.BRANCH_NAME
        if (branchName) {
            path = path.split(Pattern.quote(File.separator))
            def workspaceRoot = path[0..<-1].join(File.separator)
            def currentWs = path[-1]
            String newWorkspace = env.JOB_NAME.replace('/', '-')
            newWorkspace = newWorkspace.replace(File.separator, '-')
            newWorkspace = newWorkspace.replace('%2f', '-')
            newWorkspace = newWorkspace.replace('%2F', '-')
            if (currentWs =~ '#') {
                newWorkspace = "${newWorkspace}#${currentWs.split('#')[-1]}"
            }
            path = "${workspaceRoot}${File.separator}${newWorkspace}"
        }
        ws(path) {
            body()
        }
    }
}
pipeline
{
} // pipeline
Is there a way to force Jenkins to generate a shorter name?

You can change the value of jenkins.branch.WorkspaceLocatorImpl.PATH_MAX (for example to 20) in Jenkins's Script Console.
The change will be lost if you restart the Jenkins server. To make it permanent, add the Java property -Djenkins.branch.WorkspaceLocatorImpl.PATH_MAX=20 to the Jenkins startup options.
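A minimal sketch of both options (the property name comes from the answer above; where exactly you pass the startup flag depends on how your Jenkins is installed):
// Manage Jenkins > Script Console: takes effect immediately, but is lost on restart
jenkins.branch.WorkspaceLocatorImpl.PATH_MAX = 20
// To persist it across restarts, start the Jenkins JVM with:
//   -Djenkins.branch.WorkspaceLocatorImpl.PATH_MAX=20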

This is not the best way to fix it, but it works :)
First, create a method that gets the current workspace and reworks the final path, like this:
def GetWorkspace()
{
    node
    {
        // requires 'import java.util.regex.Pattern' at the top of the Jenkinsfile
        def path = pwd() // def, because it is reassigned to a String[] below
        String branchName = env.BRANCH_NAME
        if (branchName)
        {
            path = path.split(Pattern.quote(File.separator))
            def workspaceRoot = path[0..<-1].join(File.separator)
            def currentWs = path[-1]
            // Here is where we make branch names safe for directories -
            // the most common bad character is '/' in 'feature/add_widget',
            // which gets replaced with '%2f', so JOB_NAME will be
            // ${PROJECT_NAME}%2f${BRANCH_NAME}
            String newWorkspace = env.JOB_NAME.replace('/', '-')
            newWorkspace = newWorkspace.replace(File.separator, '-')
            newWorkspace = newWorkspace.replace('%2f', '-')
            newWorkspace = newWorkspace.replace('%2F', '-')
            // Add on the '#n' suffix if it was there
            if (currentWs =~ '#')
            {
                newWorkspace = "${newWorkspace}#${currentWs.split('#')[-1]}"
            }
            path = "E:\\Jenkins\\workspace${File.separator}${newWorkspace}"
        }
        return path
    }
}
Then set it as the custom workspace of your agent, like this:
pipeline {
    environment {
        //Your Env Setup
    }
    agent { //Global Agent.
        node {
            label 'AgentName'
            customWorkspace GetWorkspace()
        }
    }
    stages {
        //Your stages
    }
}

KMM Signing Pod Dependency for Xcode 14

I've recently updated to Xcode 14 for my KMM project and I now run into this build error when syncing Gradle.
shared/build/cocoapods/synthetic/IOS/Pods/Pods.xcodeproj: error: Signing for "gRPC-C++-gRPCCertificates-Cpp" requires a development team. Select a development team in the Signing & Capabilities editor. (in target 'gRPC-C++-gRPCCertificates-Cpp' from project 'Pods')
I'm using Firebase and am including the pods as dependencies in my shared build.gradle file.
kotlin {
    ...
    cocoapods {
        ...
        ios.deploymentTarget = "15.0"
        podfile = project.file("../iosApp/Podfile")
        framework {
            baseName = "shared"
        }
        xcodeConfigurationToNativeBuildType
        val firebaseVersion = "9.6.0"
        pod("FirebaseCore") { version = firebaseVersion }
        pod("FirebaseAuth") { version = firebaseVersion }
        pod("FirebaseFirestore") { version = firebaseVersion }
        pod("FirebaseCrashlytics") { version = firebaseVersion }
        pod("FirebaseAnalytics") { version = firebaseVersion }
    }
    ...
}
I have a team set up, and if I open Xcode, go to the Pods project, select gRPC-C++-gRPCCertificates-Cpp, and add my team to it, the project builds fine from Xcode.
However, this doesn't fix the Gradle issue, as Gradle seems to use its own temporary project file when syncing (in shared/build/cocoapods/synthetic/IOS/Pods/Pods.xcodeproj).
Is there any way to add my team to the Gradle sync build?
As a temporary workaround, I've switched back to Xcode 13.4.1, which doesn't have this problem.
This will be fixed in Kotlin 1.8.0, but in the meantime here is a workaround (from https://youtrack.jetbrains.com/issue/KT-54161).
First, create the directories buildSrc/src/main/kotlin in the root of your project. In buildSrc, create the file build.gradle.kts and add the following contents:
plugins {
    `kotlin-dsl`
}

repositories {
    mavenCentral()
    google()
}

dependencies {
    implementation("org.jetbrains.kotlin:kotlin-gradle-plugin:1.7.20")
    implementation("com.android.tools.build:gradle:7.3.1")
}
Then create the file PatchedPodGenTask.kt in buildSrc/src/main/kotlin. Add the following to it.
import org.gradle.api.Project
import org.gradle.api.logging.Logger
import org.gradle.api.provider.Provider
import org.gradle.api.tasks.*
import java.io.File
import java.io.IOException
import org.jetbrains.kotlin.konan.target.*
import org.jetbrains.kotlin.gradle.targets.native.tasks.PodGenTask
import org.jetbrains.kotlin.gradle.plugin.cocoapods.CocoapodsExtension.SpecRepos
import org.jetbrains.kotlin.gradle.plugin.cocoapods.CocoapodsExtension.CocoapodsDependency.PodLocation.*
import org.jetbrains.kotlin.gradle.plugin.cocoapods.CocoapodsExtension.CocoapodsDependency.PodLocation
import org.jetbrains.kotlin.gradle.plugin.mpp.NativeBuildType
import kotlin.concurrent.thread
import kotlin.reflect.KClass
//import org.jetbrains.kotlin.gradle.plugin.cocoapods.cocoapodsBuildDirs
private val Family.platformLiteral: String
get() = when (this) {
Family.OSX -> "macos"
Family.IOS -> "ios"
Family.TVOS -> "tvos"
Family.WATCHOS -> "watchos"
else -> throw IllegalArgumentException("Bad family ${this.name}")
}
internal val Project.cocoapodsBuildDirs: CocoapodsBuildDirs
get() = CocoapodsBuildDirs(this)
internal class CocoapodsBuildDirs(val project: Project) {
val root: File
get() = project.buildDir.resolve("cocoapods")
val framework: File
get() = root.resolve("framework")
val defs: File
get() = root.resolve("defs")
val buildSettings: File
get() = root.resolve("buildSettings")
val synthetic: File
get() = root.resolve("synthetic")
fun synthetic(family: Family) = synthetic.resolve(family.name)
val externalSources: File
get() = root.resolve("externalSources")
val publish: File = root.resolve("publish")
fun externalSources(fileName: String) = externalSources.resolve(fileName)
fun fatFramework(buildType: NativeBuildType) =
root.resolve("fat-frameworks/${buildType.getName()}")
}
/**
* The task generates a synthetic project with all cocoapods dependencies
*/
open class PatchedPodGenTask : PodGenTask() {
private val PODFILE_SUFFIX = """
post_install do |installer|
installer.pods_project.build_configurations.each do |config|
config.build_settings["EXCLUDED_ARCHS[sdk=iphonesimulator*]"] = "arm64"
end
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '13.0'
end
end
installer.generated_projects.each do |project|
project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings["DEVELOPMENT_TEAM"] = "..."
end
end
end
end
""".trimIndent()
@TaskAction
fun patchedGenerate() {
val syntheticDir = project.cocoapodsBuildDirs.synthetic(family).apply { mkdirs() }
val specRepos: Collection<String> = specReposAccessor.get().getAllAccessor()
val projResource = "/cocoapods/project.pbxproj"
val projDestination = syntheticDir.resolve("synthetic.xcodeproj").resolve("project.pbxproj")
projDestination.parentFile.mkdirs()
projDestination.outputStream().use { file ->
javaClass.getResourceAsStream(projResource).use { resource ->
resource.copyTo(file)
}
}
val podfile = syntheticDir.resolve("Podfile")
podfile.createNewFile()
val podfileContent = getPodfileContent(specRepos, family.platformLiteral) + PODFILE_SUFFIX
podfile.writeText(podfileContent)
val podInstallCommand = listOf("pod", "install")
runCommand(
podInstallCommand,
project.logger,
exceptionHandler = { e: IOException ->
CocoapodsErrorHandlingUtil.handle(e, podInstallCommand)
},
errorHandler = { retCode, output, _ ->
CocoapodsErrorHandlingUtil.handlePodInstallSyntheticError(
podInstallCommand.joinToString(" "),
retCode,
output,
family,
podNameAccessor.get()
)
},
processConfiguration = {
directory(syntheticDir)
})
val podsXcprojFile = podsXcodeProjDirAccessor.get()
check(podsXcprojFile.exists() && podsXcprojFile.isDirectory) {
"Synthetic project '${podsXcprojFile.path}' was not created."
}
}
private fun getPodfileContent(specRepos: Collection<String>, xcodeTarget: String) =
buildString {
specRepos.forEach {
appendLine("source '$it'")
}
appendLine("target '$xcodeTarget' do")
//if (useLibraries.get().not()) {
appendLine("\tuse_frameworks!")
//}
pods.get().mapNotNull {
buildString {
append("pod '${it.name}'")
val pathType = when (it.source) {
is Path -> "path"
is Url -> "path"
is Git -> "git"
else -> null
}
val path = it.source?.getLocalPathAccessor(project, it.name)
if (path != null && pathType != null) {
append(", :$pathType => '$path'")
}
}
}.forEach { appendLine("\t$it") }
appendLine("end\n")
}
}
fun <T : Any, R> KClass<T>.invokeDynamic(methodStart: String, instance: Any?, vararg params: Any?): R {
val method = this.java.methods.firstOrNull { it.name.startsWith(methodStart) } ?: error("Can't find accessor for $methodStart")
return method.invoke(instance, *params) as R
}
val PodGenTask.podsXcodeProjDirAccessor: Provider<File> get() = this::class.invokeDynamic("getPodsXcodeProjDir", this)
val PodGenTask.specReposAccessor: Provider<SpecRepos> get() = this::class.invokeDynamic("getSpecRepos", this)
val PodGenTask.podNameAccessor: Provider<String> get() = this::class.invokeDynamic("getPodName", this)
fun SpecRepos.getAllAccessor(): Collection<String> = this::class.invokeDynamic("getAll", this)
fun PodLocation.getLocalPathAccessor(project: Project, podName: String): String? = this::class.invokeDynamic("getLocalPath", this, project, podName)
private fun runCommand(
command: List<String>,
logger: Logger,
errorHandler: ((retCode: Int, output: String, process: Process) -> String?)? = null,
exceptionHandler: ((ex: IOException) -> Unit)? = null,
processConfiguration: ProcessBuilder.() -> Unit = { }
): String {
var process: Process? = null
try {
process = ProcessBuilder(command)
.apply {
this.processConfiguration()
}.start()
} catch (e: IOException) {
if (exceptionHandler != null) exceptionHandler(e) else throw e
}
if (process == null) {
throw IllegalStateException("Failed to run command ${command.joinToString(" ")}")
}
var inputText = ""
var errorText = ""
val inputThread = thread {
inputText = process.inputStream.use {
it.reader().readText()
}
}
val errorThread = thread {
errorText = process.errorStream.use {
it.reader().readText()
}
}
inputThread.join()
errorThread.join()
val retCode = process.waitFor()
logger.info(
"""
|Information about "${command.joinToString(" ")}" call:
|
|${inputText}
""".trimMargin()
)
check(retCode == 0) {
errorHandler?.invoke(retCode, inputText.ifBlank { errorText }, process)
?: """
|Executing of '${command.joinToString(" ")}' failed with code $retCode and message:
|
|$inputText
|
|$errorText
|
""".trimMargin()
}
return inputText
}
private object CocoapodsErrorHandlingUtil {
fun handle(e: IOException, command: List<String>) {
if (e.message?.contains("No such file or directory") == true) {
val message = """
|'${command.take(2).joinToString(" ")}' command failed with an exception:
| ${e.message}
|
| Full command: ${command.joinToString(" ")}
|
| Possible reason: CocoaPods is not installed
| Please check that CocoaPods v1.10 or above is installed.
|
| To check CocoaPods version type 'pod --version' in the terminal
|
| To install CocoaPods execute 'sudo gem install cocoapods'
|
""".trimMargin()
throw IllegalStateException(message)
} else {
throw e
}
}
fun handlePodInstallSyntheticError(command: String, retCode: Int, error: String, family: Family, podName: String): String? {
var message = """
|'pod install' command on the synthetic project failed with return code: $retCode
|
| Full command: $command
|
| Error: ${error.lines().filter { it.contains("[!]") }.joinToString("\n")}
|
""".trimMargin()
if (
error.contains("deployment target") ||
error.contains("no platform was specified") ||
error.contains(Regex("The platform of the target .+ is not compatible with `$podName"))
) {
message += """
|
| Possible reason: ${family.name.toLowerCase()} deployment target is not configured
| Configure deployment_target for ALL targets as follows:
| cocoapods {
| ...
| ${family.name.toLowerCase()}.deploymentTarget = "..."
| ...
| }
|
""".trimMargin()
return message
} else if (
error.contains("Unable to add a source with url") ||
error.contains("Couldn't determine repo name for URL") ||
error.contains("Unable to find a specification")
) {
message += """
|
| Possible reason: spec repos are not configured correctly.
| Ensure that spec repos are correctly configured for all private pod dependencies:
| cocoapods {
| specRepos {
| url("<private spec repo url>")
| }
| }
|
""".trimMargin()
return message
}
return null
}
}
Find the line that says config.build_settings["DEVELOPMENT_TEAM"] = "..." and replace ... with your actual dev team ID.
Finally, in the build.gradle.kts file of the module you want to apply it to (mine was the shared module), add the following:
// #TODO: This is a hack, and we should remove it once PodGenTask is fixed or supports adding suffixes in the official Kotlin plugin
tasks.replace("podGenIOS", PatchedPodGenTask::class.java)
Clean your project and re-sync Gradle. The patch should then be applied, and the build should work with Xcode 14.
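A sketch of the equivalent command-line steps (the :shared module and the podGenIOS task name are taken from the snippets above; your module name may differ):
./gradlew clean
./gradlew :shared:podGenIOS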

How to use environment variables in terraform using TF_VAR_name?

I'm trying to export list variables and use them via TF_VAR_name, and I'm getting an error when combining them with the toset function.
Success scenario:
terraform apply -auto-approve
# Variables
variable "sg_name" { default = ["SG1", "SG2", "SG3", "SG4", "SG5"] }
variable "Project" { default = "POC" }
variable "Owner" { default = "Me" }
variable "Environment" { default = "Testing" }
locals {
  common_tags = {
    Project     = var.Project
    Owner       = var.Owner
    Environment = var.Environment
  }
}

# Create Security Group
resource "aws_security_group" "application_sg" {
  for_each    = toset(var.sg_name)
  name        = each.value
  description = "${each.value} security group"
  tags        = merge(local.common_tags, { "Name" = each.value })
}

# Output the SG IDs
output "sg_id" {
  value = values(aws_security_group.application_sg)[*].id
}
Failure scenario:
TF_VAR_sg_name='["SG1", "SG2", "SG3", "SG4", "SG5"]' terraform apply -auto-approve
# Variables
variable "sg_name" { }
variable "Project" { default = "POC" }
variable "Owner" { default = "Me" }
variable "Environment" { default = "Testing" }
locals {
  common_tags = {
    Project     = var.Project
    Owner       = var.Owner
    Environment = var.Environment
  }
}

# Create Security Group
resource "aws_security_group" "application_sg" {
  for_each    = toset(var.sg_name)
  name        = each.value
  description = "${each.value} security group"
  tags        = merge(local.common_tags, { "Name" = each.value })
}

# Output the SG IDs
output "sg_id" {
  value = values(aws_security_group.application_sg)[*].id
}
Error
Error: Invalid function argument
on main.tf line 16, in resource "aws_security_group" "application_sg":
16: for_each = toset(var.sg_name)
|----------------
| var.sg_name is "[\"SG1\", \"SG2\", \"SG3\", \"SG4\", \"SG5\"]"
Invalid value for "v" parameter: cannot convert string to set of any single
type.
You'll need to specify the type of your variable (i.e. type = list(string) in your case); then it should work.
I tested it with the following configuration:
variable "sg_name" {
type = list(string)
}
resource "null_resource" "application_sg" {
for_each = toset(var.sg_name)
triggers = {
name = each.key
}
}
Then TF_VAR_sg_name='["SG1", "SG2", "SG3", "SG4", "SG5"]' terraform apply works.
If I remove the type = list(string), it errors out as you describe.
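Equivalently (a minimal sketch using the same variable definition as above), the variable can be exported once and reused across Terraform commands:
export TF_VAR_sg_name='["SG1", "SG2", "SG3", "SG4", "SG5"]'
terraform plan
terraform apply -auto-approve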

How to download a jar from artifactory with gradle to project directory?

What is the correct way to write a Gradle task which downloads a .jar (or any other) file from Artifactory to the project directory (if the file does not already exist)?
For example, I have a project at /root/project with the file /root/project/build.gradle inside. I want to download generator.jar from Artifactory (com/company/project/generator/1.0) to the /root/project directory if /root/project/generator.jar is not present.
tasks.register("downloadGeneratorJar") {
group = 'Download'
description = 'Downloads generator.jar'
doLast {
def f = new File('generator.jar')
if (!f.exists()) {
URL artifactoryUrl = new URL('myurl')
HttpURLConnection myURLConnection = (HttpURLConnection)artifactoryUrl.openConnection()
String userCredentials = gradle.ext.developerUser + ":" + gradle.ext.developerPassword
String basicAuth = "Basic " + new String(Base64.getEncoder().encode(userCredentials.getBytes()))
myURLConnection.setRequestProperty ("Authorization", basicAuth)
InputStream is = myURLConnection.getInputStream()
new FileOutputStream(f).withCloseable { outputStream ->
int read
byte[] bytes = new byte[1024]
while ((read = is.read(bytes)) != -1) {
outputStream.write(bytes, 0, read)
}
}
}
}
}
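For reference, a more Gradle-idiomatic sketch of the same idea uses a dedicated configuration plus a Copy task instead of a manual HTTP download. This is only a sketch: it assumes your Artifactory repository and credentials are already declared in a repositories block, and that the coordinates com.company.project:generator:1.0 correspond to the path from the question.
configurations {
    generator
}
dependencies {
    generator('com.company.project:generator:1.0') {
        transitive = false // only fetch the generator jar itself
    }
}
tasks.register('copyGeneratorJar', Copy) {
    group = 'Download'
    description = 'Copies generator.jar into the project directory if it is not already there'
    from configurations.generator
    into projectDir
    rename { 'generator.jar' }
    onlyIf { !new File(projectDir, 'generator.jar').exists() }
}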

Set environment variables in Jenkins Declarative Pipeline

Below is my code for the deploy stage in my Jenkinsfile:
stage('Deploy') {
    node('slave1') {
        if ("${env.Build_testapp1}" == 'true') {
            script {
                env.packageid = "Applications/testapp1/revesion1"
                env.environmentId = "Environments/SysTest1/machine1"
            }
            xldDeploy serverCredentials: 'developer', environmentId: env.environmentId, packageId: env.packageid
        }
    }
}
But how can I make it vary per environment?
I was looking for something like this:
if ("${env.Build_EVN}" == 'dev'){
env.environmentId = "Environments/Dev/machine1"
}
if ("${env.Build_EVN}" == 'systest1'){
env.environmentId = "Environments/SysTest1/machine1"
}
then using "env.environmentId" in stage('Deploy')
You can add one more stage before Deploy to handle env.environmentId for subsequent stages to use.
stage('Prepare env') {
    steps {
        script {
            if ("${env.Build_EVN}" == 'dev') {
                env.environmentId = "Environments/Dev/machine1"
            }
            if ("${env.Build_EVN}" == 'systest1') {
                env.environmentId = "Environments/SysTest1/machine1"
            }
        }
    }
}
stage('Deploy') {
    ...
}
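If more environments are added later, a map lookup keeps this stage compact (a sketch; the environment names and paths are the ones from the question):
stage('Prepare env') {
    steps {
        script {
            // Map each Build_EVN value to its XL Deploy environment id
            def environmentsById = [
                dev     : 'Environments/Dev/machine1',
                systest1: 'Environments/SysTest1/machine1'
            ]
            env.environmentId = environmentsById[env.Build_EVN]
        }
    }
}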

AEM 6.1 Uber Jar Maven Dependency

I am using AEM 6.1 with Maven as the build manager. I have updated the .m2 local folder with the unobfuscated UberJar provided by Adobe. I am getting the following error:
ERROR [JobHandler: /etc/workflow/instances/server0/2016-07-15/model_157685507700064:/content/myApp/testing/wf_test01]
com.adobe.granite.workflow.core.job.JobHandler Process implementation not found: com.myApp.workflow.ActivatemyAppPageProcess
com.adobe.granite.workflow.WorkflowException: Process implementation not found: com.myApp.workflow.ActivatemyAppPageProcess
    at com.adobe.granite.workflow.core.job.HandlerBase.executeProcess(HandlerBase.java:197)
    at com.adobe.granite.workflow.core.job.JobHandler.process(JobHandler.java:232)
    at org.apache.sling.event.impl.jobs.JobConsumerManager$JobConsumerWrapper.process(JobConsumerManager.java:512)
    at org.apache.sling.event.impl.jobs.queues.JobRunner.run(JobRunner.java:205)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
The UberJar does not seem to have the com.adobe.granite.workflow.core.job package. Is there any way to resolve this issue?
Here is the execute method for the process step ActivatemyAppPageProcess:
public void execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap args) throws WorkflowException {
Session participantSession = null;
Session replicationSession = null;
// ResourceResolver resourceResolver = null;
try {
log.info("Inside ActivatemyAppPageProcess ");
Session session = workflowSession.getSession();
if (replicateAsParticipant(args)) {
String approverId = resolveParticipantId(workItem, workflowSession);
if (approverId != null) {
participantSession = getParticipantSession(approverId, workflowSession);
}
}
if (participantSession != null)
replicationSession = participantSession;
else {
replicationSession = session;
}
WorkflowData data = workItem.getWorkflowData();
String path = null;
String type = data.getPayloadType();
if ((type.equals("JCR_PATH")) && (data.getPayload() != null)) {
String payloadData = (String) data.getPayload();
if (session.itemExists(payloadData))
path = payloadData;
}
else if ((data.getPayload() != null) && (type.equals("JCR_UUID"))) {
Node node = session.getNodeByUUID((String) data.getPayload());
path = node.getPath();
}
ReplicationOptions opts = null;
String rev = (String) data.getMetaDataMap().get("resourceVersion", String.class);
if (rev != null) {
opts = new ReplicationOptions();
opts.setRevision(rev);
}
opts = prepareOptions(opts);
if (path != null) {
ResourceCollection rcCollection =
ResourceCollectionUtil
.getResourceCollection(
(Node) this.admin.getItem(path),
(ResourceCollectionManager) this.rcManager);
boolean isWFPackage = isWorkflowPackage(path, resolverFactory, workflowSession);
List<String> paths = getPaths(path, rcCollection);
for (String aPath : paths)
if (canReplicate(replicationSession, aPath)) {
if (opts != null) {
if (isWFPackage) {
setRevisionForPage(aPath, opts, data);
}
this.replicator
.replicate(replicationSession,
getReplicationType(),
aPath,
opts);
} else {
this.replicator
.replicate(replicationSession,
getReplicationType(),
aPath);
}
} else {
log.debug(session.getUserID() + " is not allowed to replicate " + "this page/asset " + aPath + ". Issuing request for 'replication");
Dictionary properties = new Hashtable();
properties.put("path", aPath);
properties.put("replicationType", getReplicationType());
properties.put("userId", session.getUserID());
Event event = new Event("com/day/cq/wcm/workflow/req/for/activation", properties);
this.eventAdmin.sendEvent(event);
}
} else {
log.warn("Cannot activate page or asset because path is null for this workitem: " + workItem.toString());
}
} catch (RepositoryException e) {
throw new WorkflowException(e);
} catch (ReplicationException e) {
throw new WorkflowException(e);
} finally {
if ((participantSession != null) && (participantSession.isLive())) {
participantSession.logout();
participantSession = null;
}
}
}
com.adobe.granite.workflow.core.job is not exported in AEM at all. That means you cannot use it, because it is invisible to your code.
The com.adobe.granite.workflow.core bundle only exports com.adobe.granite.workflow.core.event.
If you work with the AEM workflows, you should stick to the com.adobe.granite.workflow.api bundle.
The following packages are exported in this bundle and are therefore usable:
com.adobe.granite.workflow,version=1.0.0
com.adobe.granite.workflow.collection,version=1.1.0
com.adobe.granite.workflow.collection.util,version=1.0.0
com.adobe.granite.workflow.event,version=1.0.0
com.adobe.granite.workflow.exec,version=1.0.0
com.adobe.granite.workflow.exec.filter,version=1.0.0
com.adobe.granite.workflow.job,version=1.0.0
com.adobe.granite.workflow.launcher,version=1.0.0
com.adobe.granite.workflow.metadata,version=1.0.0
com.adobe.granite.workflow.model,version=1.0.0
com.adobe.granite.workflow.rule,version=1.0.0
com.adobe.granite.workflow.serialization,version=1.0.0
com.adobe.granite.workflow.status,version=1.0.0
Even if the uber.jar has the packages, if you look at your AEM instance under /system/console/bundles and click on the com.adobe.granite.workflow.core bundle, you will see that in "Exported Packages" there is no com.adobe.granite.workflow.core.job available. So even if your IDE, Maven and/or Jenkins can handle it, AEM will not be able to execute your code.
In AEM you can only use packages that are exported by one of the available bundles, or packages that are embedded in your own bundle, which would be a bad idea: you would then have two versions of the same code, and that will lead to further problems.
Having seen the code, I would say there is another problem here, and solving that one will help you get rid of the other one, too.
You try to start another workflow (request for activation) for a path that is already used in a workflow.
You have to terminate the current workflow instance to be able to do this.
An example of a clean way to do this would be:
Workflow workflow = workItem.getWorkflow();
WorkflowData wfData = workflow.getWorkflowData();
workflowSession.terminateWorkflow(workflow);
Map<String, Object> paramMap = new HashMap<String, Object>();
if (!StringUtils.isEmpty(data.getNextParticipantUid())) {
    paramMap.put("nextParticipant", "admin");
}
workflowSession.startWorkflow(workflowSession.getModel(WORKFLOW_MODEL_PATH), wfData, paramMap);
A possible reason for the error is that your workflow process service/component com.myApp.workflow.ActivatemyAppPageProcess is not active, so it is not bound to the JobHandler's list of available processes, which causes this exception.
Can you check in /system/console/components that your custom process component is active? If not, you will have to resolve the dependency that is causing the service/component to be unavailable.
