Java MS Graph SDK: get a GraphClient using an existing access token (version 5.13.0) - spring-boot

Our front end uses the PKCE flow and fetches an access token. In the old implementation (microsoft-graph version 2.8.1), the snippet below builds a Graph client from an existing access token. I cannot get the same thing working with the newer MS Graph Java SDK.
IGraphServiceClient client = GraphServiceClient.builder()
        .authenticationProvider(request -> request.addHeader("Authorization",
                "Bearer " + tokenAuthentication.getToken().getTokenValue()))
        .buildClient();
Dependencies I have added to my project:
<dependency>
    <!-- Include the SDK as a dependency -->
    <groupId>com.microsoft.graph</groupId>
    <artifactId>microsoft-graph</artifactId>
    <version>5.13.0</version>
</dependency>
<dependency>
    <!-- This dependency is only needed if you are using the TokenCredentialAuthProvider -->
    <groupId>com.azure</groupId>
    <artifactId>azure-identity</artifactId>
    <version>1.2.5</version>
</dependency>

Finally got it working, see the snippet below.
final IAuthenticationProvider authProvider = new IAuthenticationProvider() {
    @Override
    public CompletableFuture<String> getAuthorizationTokenAsync(final URL requestUrl) {
        CompletableFuture<String> future = new CompletableFuture<>();
        future.complete(yourToken);
        return future;
    }
};

final GraphServiceClient<Request> graphClient = GraphServiceClient
        .builder()
        .authenticationProvider(authProvider)
        .buildClient();

return graphClient.me().buildRequest().get();
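Since the 5.x IAuthenticationProvider interface has a single method, the same thing can, as far as I can tell, be written more compactly with a lambda. A minimal sketch, assuming yourToken holds the access token from the front end and using the same imports as in the snippet above:

// same imports as above (IAuthenticationProvider, GraphServiceClient, okhttp3 Request)
final IAuthenticationProvider authProvider =
        requestUrl -> CompletableFuture.completedFuture(yourToken);

final GraphServiceClient<Request> graphClient = GraphServiceClient
        .builder()
        .authenticationProvider(authProvider)
        .buildClient();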

Related

Micrometer with Prometheus Pushgateway - Add TLS Support

I have a Spring Boot application with Prometheus Pushgateway using Micrometer, mainly based on this tutorial:
https://luramarchanjo.tech/2020/01/05/spring-boot-2.2-and-prometheus-pushgateway-with-micrometer.html
pom.xml has the following related dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_pushgateway</artifactId>
    <version>0.16.0</version>
</dependency>
And the application.properties file has:
management.metrics.export.prometheus.pushgateway.enabled=true
management.metrics.export.prometheus.pushgateway.shutdown-operation=PUSH
management.metrics.export.prometheus.pushgateway.baseUrl=localhost:9091
It works fine locally in the dev environment, connecting to a Pushgateway without TLS. In our CI environment, the Prometheus Pushgateway has TLS enabled. How do I configure TLS support and certificates in this Spring Boot application?
Due to the usage of TLS, you will need to customize a few Spring classes:
HttpConnectionFactory -> PushGateway -> PrometheusPushGatewayManager
An HttpConnectionFactory is used by Prometheus' PushGateway to create a secure connection; you then create a PrometheusPushGatewayManager that uses that PushGateway.
You will need to implement Prometheus' HttpConnectionFactory interface. I'm assuming you are able to create a valid javax.net.ssl.SSLContext object (if not, see more details at the end¹).
HttpConnectionFactory example:
public class MyTlsConnectionFactory implements io.prometheus.client.exporter.HttpConnectionFactory {

    private final SSLContext sslContext;

    public MyTlsConnectionFactory(SSLContext sslContext) {
        // assumes you can obtain a valid javax.net.ssl.SSLContext (see ¹ at the end)
        this.sslContext = sslContext;
    }

    @Override
    public HttpURLConnection create(String hostUrl) throws IOException {
        URL url = new URL(hostUrl);
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.setSSLSocketFactory(sslContext.getSocketFactory());
        return connection;
    }
}
PushGateway and PrometheusPushGatewayManager:
@Bean
public HttpConnectionFactory tlsConnectionFactory(SSLContext sslContext) {
    // an SSLContext bean is assumed to be available (see ¹ at the end for one way to build it)
    return new MyTlsConnectionFactory(sslContext);
}

@Bean
public PushGateway pushGateway(HttpConnectionFactory connectionFactory) throws MalformedURLException {
    String url = "https://localhost:9091"; // replace with your properties
    PushGateway pushGateway = new PushGateway(new URL(url));
    pushGateway.setConnectionFactory(connectionFactory);
    return pushGateway;
}

@Bean
public PrometheusPushGatewayManager tlsPrometheusPushGatewayManager(PushGateway pushGateway,
        CollectorRegistry registry) {
    // fill in the other params accordingly (the important one is pushGateway!)
    return new PrometheusPushGatewayManager(
            pushGateway,
            registry,
            Duration.of(15, ChronoUnit.SECONDS),
            "some-job-id",
            null,
            PrometheusPushGatewayManager.ShutdownOperation.PUSH
    );
}
¹ If you face difficulty creating the SSLContext from Java code, I recommend studying the library https://github.com/Hakky54/sslcontext-kickstart and https://github.com/Hakky54/mutual-tls-ssl (which shows how to apply it with different client libraries).
It then becomes possible to create the SSLContext in Java code in a clean way, e.g.:
String keyStorePath = "client.jks";
char[] keyStorePassword = "password".toCharArray();
SSLFactory sslFactory = SSLFactory.builder()
.withIdentityMaterial(keyStorePath, keyStorePassword)
.build();
javax.net.ssl.SSLContext sslContext = sslFactory.getSslContext();
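To tie this back to the tlsConnectionFactory(SSLContext) bean above, the SSLContext itself could be exposed as a Spring bean. A minimal sketch, assuming sslcontext-kickstart is on the classpath; the keystore path and password are placeholders:

@Bean
public SSLContext sslContext() {
    // placeholder keystore path and password; adjust to your environment
    SSLFactory sslFactory = SSLFactory.builder()
            .withIdentityMaterial("client.jks", "password".toCharArray())
            .build();
    return sslFactory.getSslContext();
}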
Finally, if you need to set up a local Prometheus + TLS environment for testing purposes, I recommend following this post:
https://smallstep.com/hello-mtls/doc/client/prometheus

Global header parameters added using Swagger 3 UI are not working

I have migrated an existing project from Swagger to Swagger 3 using the springdoc-openapi-ui dependency, version 1.6.8.
I'm having an issue after adding a global header parameter in the Swagger config file: it is not showing on the Swagger dashboard.
Please advise me if there is any issue in the code below.
Code:
@Bean
public OpenAPI customOpenAPI() {
    return new OpenAPI()
            .components(new Components()
                    .addSecuritySchemes("basicScheme",
                            new SecurityScheme().type(SecurityScheme.Type.HTTP).scheme("basic"))
                    .addParameters("myHeader1",
                            new Parameter().in("header").schema(new StringSchema()).name("myHeader1"))
                    .addHeaders("myHeader2",
                            new Header().description("myHeader2 header").schema(new StringSchema())))
            .info(new Info().title("eWallet API Sandbox").description("eWallet API Sandbox").version("v1.0")
                    .contact(new Contact().name("WOW Finstack").url("https://wowdigital.ai/")
                            .email("info@wowdigital.ai"))
                    .termsOfService("WOW Finstack").license(new License().name("License").url("#")));
}
Dependency:
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.6.8</version>
</dependency>
I recently upgraded to springdoc-openapi-ui and struggled a bit to make global headers work on Spring Boot 2.6.3.
I managed to make a global header work when it is defined as a Parameter (using new Parameter()...), but I did not make it work when it is defined as a Header (using new Header()...).
I guess you have already defined a GroupedOpenApi Spring bean. So what you have to do is add an OpenApiCustomiser to this GroupedOpenApi, see addOpenApiCustomiser(globalHeaderCustomizer()) below:
@Bean
public GroupedOpenApi publicGroup() {
    return GroupedOpenApi.builder()
            .packagesToScan("com.my.package")
            .pathsToMatch("/**")
            .group("public")
            .addOpenApiCustomiser(globalHeaderCustomizer()) // --> you need this!
            .build();
}
where globalHeaderCustomizer() is:
private OpenApiCustomiser globalHeaderCustomizer() {
    return openApi -> openApi.getPaths().values().stream()
            .flatMap(pathItem -> pathItem.readOperations().stream())
            .forEach(operation -> operation.addParametersItem(
                    new HeaderParameter().$ref("#/components/parameters/myHeader1")));
}
I think that should fix your issue.
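If you are not using GroupedOpenApi at all, springdoc should also pick up an OpenApiCustomiser that is exposed as a plain Spring bean. A minimal sketch, reusing the same $ref to the myHeader1 parameter defined in your customOpenAPI() components:

@Bean
public OpenApiCustomiser globalHeaderOpenApiCustomiser() {
    // adds the myHeader1 parameter (defined under components.parameters) to every operation
    return openApi -> openApi.getPaths().values().stream()
            .flatMap(pathItem -> pathItem.readOperations().stream())
            .forEach(operation -> operation.addParametersItem(
                    new HeaderParameter().$ref("#/components/parameters/myHeader1")));
}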

Spring Boot + Azure SDK, extra characters at the end of the file while copying to Azure Storage account

Some extra characters are added at the end of the file after uploading it to the storage account. There is no issue with a 1.33 GB file, but I observed a size difference for a 2.22 GB file. Below are the code snippet and pom.xml details.
How do I resolve this? Let me know if any further details are needed.
Code:
private boolean upload(final MultipartFile file) throws IOException {
    BlobClientBuilder blobClientBuilder = new BlobClientBuilder();
    blobClientBuilder.endpoint(STORAGE_URL).connectionString(storageConnectionString);
    blobClientBuilder.containerName(CONTAINER_NAME);
    BlobClient blobClient = blobClientBuilder.blobName(file.getOriginalFilename()).buildClient();
    blobClient.upload(file.getInputStream(), file.getSize());
    boolean uploadStatus = blobClient.exists();
    return uploadStatus;
}
pom.xml:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.2</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-core</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-storage-blob</artifactId>
    <version>12.12.0</version>
    <exclusions>
        <exclusion>
            <groupId>io.projectreactor</groupId>
            <artifactId>reactor-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- https://mvnrepository.com/artifact/io.projectreactor/reactor-core -->
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.4.8</version>
    <!--$NO-MVN-MAN-VER$ -->
    <!-- Please don't remove/downgrade the version, to avoid possible compatibility issues -->
</dependency>
<!-- https://mvnrepository.com/artifact/io.projectreactor.netty/reactor-netty -->
<dependency>
    <groupId>io.projectreactor.netty</groupId>
    <artifactId>reactor-netty</artifactId>
    <version>1.0.9</version>
    <!--$NO-MVN-MAN-VER$ -->
    <!-- Please don't remove/downgrade the version, to avoid possible compatibility issues -->
</dependency>
The 1.33 GB file uploaded correctly, but the 2.22 GB file shows some extra characters, which increases the size of the file in bytes.
Instead of uploading the large file directly, upload it as a zip file or in chunks.
Try this code:
public static void uploadFilesByChunk() {
    String connString = "<conn str>";
    String containerName = "<container name>";
    String blobName = "UploadOne.zip";
    String filePath = "D:/temp/" + blobName;

    BlobServiceClient client = new BlobServiceClientBuilder().connectionString(connString).buildClient();
    BlobClient blobClient = client.getBlobContainerClient(containerName).getBlobClient(blobName);

    long blockSize = 2 * 1024 * 1024; // 2MB
    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
            .setBlockSizeLong(blockSize).setMaxConcurrency(2)
            .setProgressReceiver(new ProgressReceiver() {
                @Override
                public void reportProgress(long bytesTransferred) {
                    System.out.println("uploaded:" + bytesTransferred);
                }
            });

    BlobHttpHeaders headers = new BlobHttpHeaders().setContentLanguage("en-US").setContentType("binary");
    blobClient.uploadFromFile(filePath, parallelTransferOptions, headers, null, AccessTier.HOT,
            new BlobRequestConditions(), Duration.ofMinutes(30));
}
For more details, refer to this SO thread.
Thanks @ShrutiJoshi-MT for your code snippet.
I am not sure why it works with the 'uploadFromFile' method but has an issue with the 'upload' method of BlobClient. Below is the final code I am using; it works for different file extensions. If anyone finds a bug or has suggestions for the code below, please let me know, it would help me a lot.
First I copy the MultipartFile to a local file and then provide the path.
public boolean uploadWithFile(final MultipartFile multipartFile) throws Exception {
    logger.info("uploadWithFile started");
    File file = null;
    try {
        String fileName = multipartFile.getOriginalFilename();
        file = new File(fileName);
        logger.info("uploadWithFile fileName: {}", fileName);
        Path path = Paths.get(fileName);
        logger.debug("Copying from MultipartFile to file");
        try (InputStream inputStream = multipartFile.getInputStream()) {
            Files.copy(inputStream, path, StandardCopyOption.REPLACE_EXISTING);
        }
        logger.debug("Copied from MultipartFile to file");
        String filePath = file.getPath();
        logger.debug("Copied file name: {}", file.getName());
        logger.debug("Copied file Path: {}", filePath);
        logger.debug("Copied file length: {}", file.length());
        String containerName = "temp";
        String storageConnectionString = "<primarykey> or <secondarykey>";
        BlobClientBuilder blobClientBuilder = new BlobClientBuilder();
        blobClientBuilder.endpoint(STORAGE_URL).connectionString(storageConnectionString);
        blobClientBuilder.containerName(containerName);
        BlobClient blobClient = blobClientBuilder.blobName(fileName).buildClient();
        logger.debug("uploading to storage account");
        blobClient.uploadFromFile(filePath);
        logger.debug("uploaded to storage account");
        boolean uploadStatus = blobClient.exists();
        logger.debug("uploaded status : {}", uploadStatus);
        logger.info("uploadWithFile ended");
        return uploadStatus;
    } catch (Exception exception) {
        logger.error("uploadWithFile upload failed: {}", exception);
        throw exception;
    } finally {
        if (Objects.nonNull(file) && file.exists()) {
            logger.debug("delete file: {}", file.getName());
            file.delete();
            logger.debug("deleted file: {}", file.getName());
        }
    }
}

How to use resilience4j when calling a method?

I tried to use Spring Retry for circuit breaking and retry as below, and it works as expected, but the issue is that I am unable to configure maxAttempts/openTimeout/resetTimeout from environment variables (the error is that annotation attribute values must be constants). My question is: how do I use resilience4j to achieve the requirement below?
Also, please suggest whether there is a way to pass environment variables to maxAttempts/openTimeout/resetTimeout.
@CircuitBreaker(value = {
        MongoServerException.class,
        MongoSocketException.class,
        MongoTimeoutException.class,
        MongoSocketOpenException.class},
        maxAttempts = 2,
        openTimeout = 20000L,
        resetTimeout = 30000L)
public void insertDocument(ConsumerRecord<Long, GenericRecord> consumerRecord) {
    retryTemplate.execute(args0 -> {
        LOGGER.info(String.format("Inserting record with key -----> %s", consumerRecord.key().toString()));
        BasicDBObject dbObject = BasicDBObject.parse(consumerRecord.value().toString());
        dbObject.put("_id", consumerRecord.key());
        mongoCollection.replaceOne(<<BasicDBObject with id>>, getReplaceOptions());
        return null;
    });
}

@Recover
public void recover(RuntimeException t) {
    LOGGER.info(" Recovering from Circuit Breaker ");
}
Dependencies used are:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>
You are not using resilience4j, but spring-retry.
You should adapt the title of your question.
CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerConfig.custom()
        .waitDurationInOpenState(Duration.ofMillis(20000))
        .build();
CircuitBreakerRegistry circuitBreakerRegistry = CircuitBreakerRegistry.of(circuitBreakerConfig);
CircuitBreaker circuitBreaker = circuitBreakerRegistry.circuitBreaker("mongoDB");

RetryConfig retryConfig = RetryConfig.custom().maxAttempts(3)
        .retryExceptions(MongoServerException.class,
                MongoSocketException.class,
                MongoTimeoutException.class,
                MongoSocketOpenException.class)
        .ignoreExceptions(CircuitBreakerOpenException.class).build();
Retry retry = Retry.of("helloBackend", retryConfig);

Runnable decoratedRunnable = Decorators.ofRunnable(() -> insertDocument(consumerRecord))
        .withCircuitBreaker(circuitBreaker)
        .withRetry(retry)
        .decorate();

Try.runRunnable(decoratedRunnable)
        .recover(exception -> ...)
        .get();
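As for passing environment variables to maxAttempts/openTimeout/resetTimeout: because the resilience4j configuration above is built in plain Java rather than in annotation attributes, the values can simply be read from the environment (or from Spring properties) before the configs are created. A minimal sketch; the variable names CB_WAIT_OPEN_MS and RETRY_MAX_ATTEMPTS are placeholders:

// read externalized settings with sensible defaults
long waitOpenMs = Long.parseLong(System.getenv().getOrDefault("CB_WAIT_OPEN_MS", "20000"));
int maxAttempts = Integer.parseInt(System.getenv().getOrDefault("RETRY_MAX_ATTEMPTS", "3"));

CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerConfig.custom()
        .waitDurationInOpenState(Duration.ofMillis(waitOpenMs))
        .build();
RetryConfig retryConfig = RetryConfig.custom()
        .maxAttempts(maxAttempts)
        .build();

If you add the resilience4j-spring-boot2 starter, the same values can instead be set under resilience4j.retry.instances.* and resilience4j.circuitbreaker.instances.* in application.properties/application.yml and referenced by name from the @Retry/@CircuitBreaker annotations.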

Spring Social ProviderSignInUtils.getConnection is returning "error: cannot find symbol"

I'm creating a controller to register a user who has logged in using OAuth2, but whenever I try to get the connection using ProviderSignInUtils.getConnection(request), it says the method does not exist.
This is my controller:
import org.springframework.social.connect.web.ProviderSignInUtils;

@RequestMapping(value = "/register", method = RequestMethod.GET)
public String showRegistrationForm(WebRequest request, Model model) {
    Connection<?> connection = ProviderSignInUtils.getConnection(request);
    RegistrationForm registration = createRegistrationDTO(connection);
    model.addAttribute("user", registration);
    return "user/registrationForm";
}
These are the Maven dependencies:
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-config</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-core</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-security</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-web</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
ProviderSignInUtils.getConnection was removed in Spring Social 1.1.2; however, the documentation wasn't updated to reflect this. The example code on GitHub shows this instead:
@Inject
public SignupController(AccountRepository accountRepository,
        ConnectionFactoryLocator connectionFactoryLocator,
        UsersConnectionRepository connectionRepository) {
    this.accountRepository = accountRepository;
    this.providerSignInUtils = new ProviderSignInUtils(connectionFactoryLocator, connectionRepository);
}

@RequestMapping(value = "/signup", method = RequestMethod.GET)
public SignupForm signupForm(WebRequest request) {
    Connection<?> connection = providerSignInUtils.getConnectionFromSession(request);
    if (connection != null) {
        request.setAttribute("message", new Message(MessageType.INFO,
                "Your " + StringUtils.capitalize(connection.getKey().getProviderId())
                + " account is not associated with a Spring Social Showcase account. If you're new, please sign up."),
                WebRequest.SCOPE_REQUEST);
        return SignupForm.fromProviderUser(connection.fetchUserProfile());
    } else {
        return new SignupForm();
    }
}
You need to create your own local providerSignInUtils so it has access to the connectionFactoryLocator and connectionRepository.
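Applied to the controller from the question, that could look roughly like this. A sketch only: the RegistrationController class name is assumed, the ConnectionFactoryLocator and UsersConnectionRepository beans are assumed to exist in your Spring Social configuration, and createRegistrationDTO(...) is your existing helper.

@Controller
public class RegistrationController {

    private final ProviderSignInUtils providerSignInUtils;

    @Autowired
    public RegistrationController(ConnectionFactoryLocator connectionFactoryLocator,
            UsersConnectionRepository connectionRepository) {
        // build a local ProviderSignInUtils instead of calling the removed static method
        this.providerSignInUtils = new ProviderSignInUtils(connectionFactoryLocator, connectionRepository);
    }

    @RequestMapping(value = "/register", method = RequestMethod.GET)
    public String showRegistrationForm(WebRequest request, Model model) {
        Connection<?> connection = providerSignInUtils.getConnectionFromSession(request);
        RegistrationForm registration = createRegistrationDTO(connection); // your existing helper
        model.addAttribute("user", registration);
        return "user/registrationForm";
    }
}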
