Spring Boot Gradle plugin credentials

I'm using the Spring Boot Gradle plugin to build and push Docker images to a remote Docker registry:
bootBuildImage {
    imageName = "docker-repo/app-name"
    publish = true
    docker {
        publishRegistry {
            username = project.property('repoUsername')
            password = project.property('repoPassword')
        }
    }
}
The Docker repository credentials are stored in ~/.gradle/gradle.properties.
Is this secure? Would I need to store a similar ~/.gradle/gradle.properties file in the CI/CD environment?
What are the best approaches from a security perspective?

My opinion on this is: no, it is not secure.
Storing the credentials in gradle.properties as plain text is a bad idea, although it is still widely done.
I'd rather use environment variables, as they are one level above plain text on disk and a bit harder to stumble across.
To be clear, both approaches are weak; environment variables are just slightly less exposed than a plain-text file.
This is the Gradle helper I use to read an environment variable first and fall back to gradle.properties:
artifactoryUser = getConfigurationProperty("ARTIFACTORY_USER", "artifactoryUser", null)
artifactoryPwd = getConfigurationProperty("ARTIFACTORY_PWD", "artifactoryPwd", null)

String getConfigurationProperty(String envVar, String sysProp, String defaultValue) {
    def result = System.getenv(envVar) ?: project.findProperty(sysProp)
    result ?: defaultValue
}
I use this myself, but I'm sure more secure options exist (for example, the secret-management features of your CI system or a dedicated secrets manager).
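To answer the CI/CD part of the question: you typically would not ship a gradle.properties file to the build agents; instead you inject the credentials as masked CI secrets and read them from the environment. A minimal sketch of wiring that into bootBuildImage, assuming DOCKER_USERNAME and DOCKER_PASSWORD are placeholder variable names you define in your CI system (the plugin does not require those names):

```groovy
// build.gradle -- prefer CI-provided environment variables,
// fall back to ~/.gradle/gradle.properties for local builds
bootBuildImage {
    imageName = "docker-repo/app-name"
    publish = true
    docker {
        publishRegistry {
            username = System.getenv('DOCKER_USERNAME') ?: project.findProperty('repoUsername')
            password = System.getenv('DOCKER_PASSWORD') ?: project.findProperty('repoPassword')
        }
    }
}
```

Most CI systems mask secret values in build logs, which is the main practical advantage over a properties file sitting on disk.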

Jib for gradle, how to set permissions on a folder

I have a Java Spring Boot project where I need to copy a file into a specific folder inside the image, but the copy fails. I suspect I need to set permissions on the folder I am copying into. I am just copying files from one area of the image to another.
Is the following code using the correct syntax? It does not seem to be working, because I still cannot copy anything.
jib {
    from {
        image = "aregistry.com/images/jre11"
        auth {
            username = "${registryUsername}"
            password = "${registryPassword}"
        }
    }
    to {
        image = "us-central1-docker.pkg.dev/myartifactregistry"
        auth {
            username = "oauth2accesstoken"
            password = "${artifactRegistryToken}"
        }
    }
    extraDirectories {
        permissions = [
            '/nativelibs': '755'
        ]
    }
}
Specifically I am asking about the extraDirectories section. The folder lies at the root of the project.
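One thing worth checking: `permissions` only applies to files that Jib itself copies into the image via `extraDirectories`, and by default Jib copies from `src/main/jib`, not the project root. So a folder at the project root needs an explicit `paths` entry alongside `permissions`. A hedged sketch, where the `nativelibs` folder name is taken from the question and `libfoo.so` is a purely hypothetical file name:

```groovy
// Hypothetical: copy the project-root folder "nativelibs" into /nativelibs
// in the container, and mark a copied file as executable.
jib {
    extraDirectories {
        paths {
            path {
                from = file('nativelibs')   // directory at the project root
                into = '/nativelibs'        // destination inside the container
            }
        }
        // keys are absolute container paths, values are octal permission strings
        permissions = [
            '/nativelibs/libfoo.so': '755'
        ]
    }
}
```

If the files never appear in the image at all, the missing `paths` entry is the more likely culprit than the permissions.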

SpringBoot how to get full URL of application

Basically, I want to log everything that happens in the life cycle of my SpringBoot REST API, and I'd like to log something like App started at [ip]:[port]/[everything else]
I had already seen a question like this, but it was using the embedded Tomcat; I use another web server. Can it be done? It would be really cool.
You can retrieve this information in your controller using the ServletUriComponentsBuilder:
URI currentUri = ServletUriComponentsBuilder.fromCurrentRequestUri()
        .build()
        .toUri();

String asString = currentUri.toString(); // "http://localhost:8080/orders/1/items/18"
String host = currentUri.getHost();      // "localhost"
int port = currentUri.getPort();         // 8080
String path = currentUri.getPath();      // "/orders/1/items/18"
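Note that the builder above only works while a request is being handled. For logging the address once at startup (the "App started at [ip]:[port]" part of the question), one option is to listen for Spring Boot's WebServerInitializedEvent, which fires for any supported embedded server, not just Tomcat. A sketch under that assumption; the bean name and log message here are my own, not anything Spring prescribes:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.springframework.boot.web.context.WebServerInitializedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

@Component
public class StartupUrlLogger implements ApplicationListener<WebServerInitializedEvent> {

    @Override
    public void onApplicationEvent(WebServerInitializedEvent event) {
        // The event carries the actual port the server bound to,
        // which also works when server.port=0 picks a random port.
        int port = event.getWebServer().getPort();
        String ip;
        try {
            ip = InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            ip = "localhost";
        }
        System.out.println("App started at " + ip + ":" + port);
    }
}
```

The context path, if any, would have to be appended from the `server.servlet.context-path` property, since the web server itself does not know about it.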

SpringBoot Unable to load Property file from Runnable jar due to multiple exclamation marks

I have a Spring Boot project and everything works fine locally. However, when I create a runnable jar and run it via Jenkins, it is not able to load the property file.
Following is the code where the PropertyPlaceholderConfigurer is set up:
@Bean
public static EncryptablePropertyPlaceholderConfigurer propertyPlaceholderConfigurerEncrypted() {
    String env = System.getProperty("spring.profiles.active") != null
            ? System.getProperty("spring.profiles.active")
            : "ci";
    EncryptablePropertyPlaceholderConfigurer ppc =
            new EncryptablePropertyPlaceholderConfigurer(getStandardPBEStringEncryptor());
    ppc.setLocations(new ClassPathResource("application.properties"),
            new ClassPathResource("application-" + env + ".properties"));
    return ppc;
}
In order to debug, I added the following code inside this method:
try {
    String s = new String(Files.readAllBytes(
            new ClassPathResource("application-" + env + ".properties").getFile().toPath()));
    LOG.info(s);
} catch (IOException e) {
    LOG.error("Unable to read file", e);
}
And it gives this error:
java.io.FileNotFoundException: class path resource [application-qa1.properties] cannot be resolved to absolute file path because it does not reside in the file system: jar:file:/var/hudson/workspace/pv/target/T-S-21.3.40.jar!/BOOT-INF/classes!/application-qa1.properties
17:23:44 at org.springframework.util.ResourceUtils.getFile(ResourceUtils.java:217)
I have confirmed that the file is located in the jar at BOOT-INF/classes/application-qa1.properties.
So effectively the issue is caused by the second exclamation mark showing up in the path while loading the file from the jar: /var/hudson/workspace/pv/target/T-S-21.3.40.jar!/BOOT-INF/classes!/application-qa1.properties
Ideally, an exclamation mark should appear only after the jar name.
Can someone please advise how to address this issue?
You can't read files from inside a JAR as java.io.File objects like this; an entry in a nested jar does not exist on the file system. You have to read it as a stream, for example with getResourceAsStream:
InputStream is = this.getClass().getClassLoader().getResourceAsStream("application-" + env + ".properties");
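Expanding on that one-liner, here is a minimal, self-contained sketch that loads the properties through a stream using only the plain ClassLoader API (Spring's `ClassPathResource.getInputStream()` behaves the same way inside a nested Boot jar, so you could also swap the debug snippet above to use that instead of `getFile()`). The resource name is taken from the question:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ClasspathProps {

    // Load a properties file from the classpath via a stream. Unlike
    // ClassPathResource#getFile(), this works when the file lives inside
    // a nested Spring Boot jar (BOOT-INF/classes).
    static Properties load(String resourceName) throws IOException {
        Properties props = new Properties();
        try (InputStream is = ClasspathProps.class.getClassLoader()
                .getResourceAsStream(resourceName)) {
            if (is != null) {
                props.load(is);
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // A missing resource simply yields an empty Properties object here.
        Properties p = load("application-qa1.properties");
        System.out.println(p.size());
    }
}
```

The key point is that nothing in this path ever asks for an absolute file location, so the `jar:file:...!/BOOT-INF/classes!/...` URL stops being a problem.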

flyway + gradle + spring boot configuration

How can I configure Flyway in build.gradle so that it reads the url, username, and password from another properties file?
Instead of this:
flyway {
    url = 'jdbc:postgresql://localhost:5432/db'
    user = 'a'
    password = 'a'
    locations = ['filesystem:db/migration']
}
something like this:
flyway {
    path = ['filesystem:src/main/resources/data-access.properties']
    locations = ['filesystem:db/migration']
}
You can do something like this:
ext.flywayProps = new Properties()
flywayProps.load(new FileInputStream(this.projectDir.absolutePath + "/src/main/resources/data-access.properties"))
Placed at the root of your build script, this loads the properties file into a Properties variable. After that you can use the properties however you need, for example:
flyway {
    url = "jdbc:postgresql://${flywayProps['dbIp']}:${flywayProps['dbPort']}/db"
    user = flywayProps['dbUsername']
    password = flywayProps['dbPassword']
    locations = ['filesystem:db/migration']
}
And in your data-access.properties you need to specify it as follows:
dbIp=localhost
dbPort=5432
dbUsername=a
dbPassword=a

is there a way to get Spark Tracking URL other than mining log files for the log output?

I have a Scala application that creates a Spark Session, and I have set up health checks that use the Spark REST API. The Spark application itself runs on Hadoop YARN. The REST API URL is currently retrieved by parsing the Spark logs generated when the Spark Session is created. This works most of the time, but there are some edge cases in my application where it doesn't work so well.
Does anyone know of another way to get this tracking URL?
"You can do this by reading the yarn.resourcemanager.webapp.address value from YARN's config and the application ID (which is exposed both in an event sent on the listener bus, and an existing SparkContext method."
Copied the paragraph above as is from the developer's response found at: https://issues.apache.org/jira/browse/SPARK-20458
UPDATE:
I did try the solution and got pretty close. Here's some Scala/Spark code to build that URL:
@transient val ssc: StreamingContext = StreamingContext.getActiveOrCreate(rabbitSettings.checkpointPath, CreateStreamingContext)

// Update yarn logs URL in Elasticsearch
YarnLogsTracker.update(
  ssc.sparkContext.uiWebUrl,
  ssc.sparkContext.applicationId,
  "test2")
And the YarnLogsTracker object goes something like this:
object YarnLogsTracker {
  private def recoverURL(u: Option[String]): String = u match {
    case Some(a) => a.split(":").take(2).mkString(":")
    case None => ""
  }

  def update(rawUrl: Option[String], rawAppId: String, tenant: String): Unit = {
    val logUrl = s"${recoverURL(rawUrl)}:8042/node/containerlogs/container${rawAppId.substring(11)}_01_000002/$tenant/stdout/?start=-4096"
    ...
Which produces something like this: http://10.99.25.146:8042/node/containerlogs/container_1516203096033_91164_01_000002/test2/stdout/?start=-4096
I've discovered a "reasonable" way to obtain this. Obviously, the best way would be for Spark libraries to expose the ApplicationReport that they're already fetching to the launcher application directly, since they go to the trouble of setting delegation tokens, etc. However, this seems unlikely to happen.
This approach is two-pronged. First, it attempts to build a YarnClient itself, in order to fetch the ApplicationReport, which will have the authoritative tracking URL. However, from my experience, this can fail (ex: if the job was run in CLUSTER mode, with a --proxy-user in a Kerberized environment, then this will not be able to properly authenticate to YARN).
In my case, I'm calling this helper method from the driver itself, and reporting the result back to my launcher application on the side. However, in principle, any place where you have the Hadoop Configuration available should work (including, possibly, your launcher application). You can obviously use either "prong" of this implementation (or both) depending on your needs and tolerance for complexity, extra processing, etc.
/**
 * Given a Hadoop {@link org.apache.hadoop.conf.Configuration} and appId, use the YARN API (via a
 * {@link YarnClient} instance) to get the application report, which includes the trackingUrl. If this fails,
 * then as a fallback, it attempts to "guess" the URL by looking at various YARN configuration properties,
 * and assumes that the URL will be something like: <pre>[yarnWebUI:port]/proxy/[appId]</pre>.
 *
 * @param hadoopConf the Hadoop {@link org.apache.hadoop.conf.Configuration}
 * @param appId the YARN application ID
 * @return the app trackingUrl, either retrieved using the {@link YarnClient}, or manually constructed using
 *         the fallback approach
 */
public static String getYarnApplicationTrackingUrl(org.apache.hadoop.conf.Configuration hadoopConf, String appId) {
    LOG.debug("Attempting to look up YARN url for applicationId {}", appId);
    YarnClient yarnClient = null;
    try {
        // do not attempt to fail over on authentication error (ex: running with proxy-user and Kerberos)
        hadoopConf.set("yarn.client.failover-max-attempts", "0");
        yarnClient = YarnClient.createYarnClient();
        yarnClient.init(hadoopConf);
        yarnClient.start();
        final ApplicationReport report = yarnClient.getApplicationReport(ConverterUtils.toApplicationId(appId));
        return report.getTrackingUrl();
    } catch (YarnException | IOException e) {
        LOG.warn(
                "{} attempting to get report for YARN appId {}; attempting to use manually constructed fallback",
                e.getClass().getSimpleName(),
                appId,
                e
        );
        String baseYarnWebappUrl;
        String protocol;
        if ("HTTPS_ONLY".equals(hadoopConf.get("yarn.http.policy"))) {
            // YARN is configured to use HTTPS only, hence return the https address
            baseYarnWebappUrl = hadoopConf.get("yarn.resourcemanager.webapp.https.address");
            protocol = "https";
        } else {
            baseYarnWebappUrl = hadoopConf.get("yarn.resourcemanager.webapp.address");
            protocol = "http";
        }
        return String.format("%s://%s/proxy/%s", protocol, baseYarnWebappUrl, appId);
    } finally {
        if (yarnClient != null) {
            yarnClient.stop();
        }
    }
}
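The fallback branch of the method above reduces to a small pure function, which makes it easy to sanity-check in isolation. Here is a self-contained sketch with plain strings standing in for the Hadoop Configuration lookups (the host names below are made up for illustration):

```java
public class YarnUrlFallback {

    // Mirrors the fallback logic above: pick the webapp address and scheme
    // based on yarn.http.policy, then build [scheme]://[webapp]/proxy/[appId].
    static String fallbackTrackingUrl(String httpPolicy, String httpAddress,
                                      String httpsAddress, String appId) {
        boolean httpsOnly = "HTTPS_ONLY".equals(httpPolicy);
        String base = httpsOnly ? httpsAddress : httpAddress;
        String protocol = httpsOnly ? "https" : "http";
        return String.format("%s://%s/proxy/%s", protocol, base, appId);
    }

    public static void main(String[] args) {
        System.out.println(fallbackTrackingUrl(
                "HTTP_ONLY", "rm-host:8088", "rm-host:8090",
                "application_1516203096033_91164"));
        // prints http://rm-host:8088/proxy/application_1516203096033_91164
    }
}
```

Separating the string construction from the YarnClient call this way also keeps the guessing logic testable without a running cluster.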
