I have a Dataflow job written in Apache Beam with Java. I am able to run the Dataflow job in GCP through these steps:
Created a Dataflow template from my code, then uploaded the template to Cloud Storage.
Created a job directly from the "create job from template" option available in GCP -> Dataflow -> Jobs.
This flow is working fine.
I want to do the same steps through a Java app. That is, I have an API, and when someone sends a request to that API, I want to start this Dataflow job from the template which I have already stored in Cloud Storage.
I can see a REST API is available to implement this approach, as below:
POST /v1b3/projects/project_id/locations/loc/templates:launch?gcsPath=template-location
But I didn't find any references or samples for this, so I tried the approach below.
In my Spring Boot project I added this dependency:
<!-- https://mvnrepository.com/artifact/com.google.apis/google-api-services-dataflow -->
<dependency>
    <groupId>com.google.apis</groupId>
    <artifactId>google-api-services-dataflow</artifactId>
    <version>v1b3-rev20210825-1.32.1</version>
</dependency>
and added the below code in a controller:
public static void createJob() throws IOException {
    GoogleCredential credential = GoogleCredential.fromStream(new FileInputStream("myCertKey.json"))
            .createScoped(java.util.Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
    try {
        Dataflow dataflow = new Dataflow.Builder(new LowLevelHttpRequest(), new JacksonFactory(),
                credential).setApplicationName("my-job").build(); // this line gives the error

        // RuntimeEnvironment
        RuntimeEnvironment env = new RuntimeEnvironment();
        env.setBypassTempDirValidation(false);
        // all my env configs added

        // parameters
        HashMap<String, String> params = new HashMap<>();
        params.put("bigtableEmulatorPort", "-1");
        params.put("gcsPath", "gs://bucket//my.json");
        // all other params

        LaunchTemplateParameters content = new LaunchTemplateParameters();
        content.setJobName("Test-job");
        content.setEnvironment(env);
        content.setParameters(params);

        dataflow.projects().locations().templates().launch("project-id", "location", content);
    } catch (Exception e) {
        log.error("error occurred", e);
    }
}
This gives {"id":null,"message":"'boolean com.google.api.client.http.HttpTransport.isMtls()'"}
error in this line itself
Dataflow dataflow = new Dataflow.Builder(new LowLevelHttpRequest(), new JacksonFactory(),
credential).setApplicationName("my-job").build();
This is because the Dataflow builder expects an HttpTransport as its first argument, but I passed a LowLevelHttpRequest().
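Based on that, I think the builder needs a real HttpTransport; here is a rough, untested sketch of what I mean (GoogleNetHttpTransport and JacksonFactory come from the google-api-client dependencies):

// Untested sketch: use a real HttpTransport instead of LowLevelHttpRequest.
// Note: newTrustedTransport() also throws GeneralSecurityException.
Dataflow dataflow = new Dataflow.Builder(
        GoogleNetHttpTransport.newTrustedTransport(),
        JacksonFactory.getDefaultInstance(),
        credential)
        .setApplicationName("my-job")
        .build();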
I am not sure whether this is the correct way to implement this. Can anyone suggest ideas on how to implement it? Any examples or references?
Thanks a lot :)
I have a Spring Boot API which deals with a lot of processes in the backend. I need to stream the status to the frontend. Since I am new to Spring Boot, can anyone help me with how to achieve this scenario?
Note - the application is going to be containerized in the future, and I cannot use any cloud service for this.
As there is not much to go off of, I will try my best:
If you are using Log4j2 you could simply use the SocketAppender.
If not:
I did something similar recently; you will need to somehow turn your logs into a stream. I'd advise using the information found here.
private Stream<String> logStream; // a stream of log lines, however you produce them

@GetMapping("/stream-sse-mvc")
public SseEmitter streamSseMvc() {
    SseEmitter emitter = new SseEmitter();
    ExecutorService sseMvcExecutor = Executors.newSingleThreadExecutor();
    sseMvcExecutor.execute(() -> {
        try {
            // turn each log line into an SSE event and send it to the client
            logStream.forEach(line -> {
                try {
                    emitter.send(SseEmitter.event()
                            .id("") // fill in a real id if your client needs one
                            .name("EVENT_TYPE")
                            .data(line));
                } catch (IOException e) {
                    emitter.completeWithError(e);
                }
            });
            emitter.complete();
        } catch (Exception ex) {
            emitter.completeWithError(ex);
        }
    });
    return emitter;
}
There might be better ways to reach your endpoints, but without knowing which frameworks you are using this is hard to answer. Essentially, what we are doing is capturing all log output into a stream, which is then broadcast via SSE.
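If it helps, here is a rough sketch (the class and event names are made up) of an OutputStream that forwards each completed log line to an SseEmitter; you could point System.out or a logging appender at an instance of it:

import java.io.IOException;
import java.io.OutputStream;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

// Hypothetical bridge: forwards each completed line written to this stream
// to the given SseEmitter as one SSE event per line.
public class SseLogStream extends OutputStream {

    private final SseEmitter emitter;
    private final StringBuilder line = new StringBuilder();

    public SseLogStream(SseEmitter emitter) {
        this.emitter = emitter;
    }

    @Override
    public void write(int b) throws IOException {
        if (b == '\n') {
            // the event name "log" is arbitrary; pick whatever your frontend listens for
            emitter.send(SseEmitter.event().name("log").data(line.toString()));
            line.setLength(0);
        } else {
            line.append((char) b);
        }
    }
}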
I am working on a Fabric8 unit test, and I am trying to create a CRD against KubernetesServer.
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

import io.fabric8.kubernetes.api.model.apiextensions.v1.CustomResourceDefinition;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.server.mock.KubernetesServer;
import org.junit.Rule;
import org.junit.Test;
import org.junit.jupiter.api.DisplayName;

public class TestCertManagerService {

    @Rule
    public KubernetesServer server = new KubernetesServer();

    @Test
    @DisplayName("Should list all CronTab custom resources")
    public void testCronTabCrd() throws IOException {
        // Given
        // server.expect().get().withPath("/apis/stable.example.com/v1/namespaces/default/crontabs")
        //         .andReturn(HttpURLConnection.HTTP_OK, ?????).once();
        KubernetesClient client = server.getClient();

        CustomResourceDefinition cronTabCrd = client.apiextensions().v1().customResourceDefinitions()
                .load(new BufferedInputStream(new FileInputStream("src/test/resources/crontab-crd.yml")))
                .get();
        client.apiextensions().v1().customResourceDefinitions().create(cronTabCrd);
    }
}
When I ran it, I got the following error
TestCertManagerService > testCronTabCrd FAILED
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://localhost:60690/apis/apiextensions.k8s.io/v1/customresourcedefinitions.
at app//io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:694)
at app//io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:673)
at app//io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:626)
at app//io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:566)
at app//io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:527)
at app//io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:510)
at app//io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:136)
at app//io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:505)
at app//io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:494)
at app//io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:87)
at app//com.ibm.si.qradar.cp4s.service.certmanager.TestCertManagerService.testCronTabCrd(TestCertManagerService.java:94)
I have a few questions:
(1) In this case I am using the v1() interface, but sometimes I see example code using v1beta1(). What decides this version? By the way, I am using the kubernetes-client library 5.9.0.
(2) In my code, I commented out this line:
server.expect().get().withPath("/apis/stable.example.com/v1/namespaces/default/crontabs").andReturn(HttpURLConnection.HTTP_OK, ?????).once();
What is this statement for? In my case, I want to load a CRD, then create a CR. What is the "?????" in the statement?
Any ideas about the stack trace? How do I fix it?
I appreciate it in advance.
From the code you shared, it looks like you're using the Fabric8 Kubernetes Mock Server in expectations mode. Expectations mode requires the user to set the REST API expectations, so the code shown below is setting some expectations from the Mock Server's viewpoint.
// Given
server.expect().get()
        .withPath("/apis/stable.example.com/v1/namespaces/default/crontabs")
        .andReturn(HttpURLConnection.HTTP_OK, getCronTabList())
        .once();
These are the expectations set:
The Mock Server expects a GET request at this URL: /apis/stable.example.com/v1/namespaces/default/crontabs. From the URL we can infer a resource under the stable.example.com apiGroup, with version v1, namespace default, and crontabs as the plural.
When this URL is hit, you're also defining the response code and response body in the andReturn() method. The first argument is the response code (200 in this case) and the second argument is the response body (a list of CronTab objects which would be serialized and sent as the response by the mock server).
This expectation is only matched .once(): if the KubernetesClient created by the Mock Server requests this endpoint more than once, the test would fail. If you want to hit the endpoint more than once, you can use the .times(..) method instead.
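For example, to let the same GET expectation serve two requests (the count here is arbitrary):

server.expect().get()
        .withPath("/apis/stable.example.com/v1/namespaces/default/crontabs")
        .andReturn(HttpURLConnection.HTTP_OK, getCronTabList())
        .times(2); // the expectation can now be consumed by two GET requests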
But in your test I see you're loading a CustomResourceDefinition from YAML and creating it, which doesn't seem to match the expectations you set earlier. If you're writing a test for creating a CustomResourceDefinition, it should look like this:
@Test
@DisplayName("Should Create CronTab CRD")
void testCronTabCrd() throws IOException {
    // Given
    KubernetesClient client = server.getClient();
    CustomResourceDefinition cronTabCrd = client.apiextensions().v1()
            .customResourceDefinitions()
            .load(new BufferedInputStream(new FileInputStream("src/test/resources/crontab-crd.yml")))
            .get();
    server.expect().post()
            .withPath("/apis/apiextensions.k8s.io/v1/customresourcedefinitions")
            .andReturn(HttpURLConnection.HTTP_OK, cronTabCrd)
            .once();

    // When
    CustomResourceDefinition createdCronTabCrd = client.apiextensions().v1()
            .customResourceDefinitions()
            .create(cronTabCrd);

    // Then
    assertNotNull(createdCronTabCrd);
}
By the way, if you don't like setting REST expectations, the Fabric8 Kubernetes Mock Server also has a CRUD mode which mocks a real Kubernetes API server. You can enable it like this:
@Rule
public KubernetesServer server = new KubernetesServer(true, true);
Then use it in your test like this:
@Test
@DisplayName("Should Create CronTab CRD")
void testCronTabCrd() throws IOException {
    // Given
    KubernetesClient client = server.getClient();
    CustomResourceDefinition cronTabCrd = client.apiextensions().v1()
            .customResourceDefinitions()
            .load(new BufferedInputStream(new FileInputStream("src/test/resources/crontab-crd.yml")))
            .get();

    // When
    CustomResourceDefinition createdCronTabCrd = client.apiextensions().v1()
            .customResourceDefinitions()
            .create(cronTabCrd);

    // Then
    assertNotNull(createdCronTabCrd);
}
I added CustomResourceLoadAndCreateTest and CustomResourceLoadAndCreateCrudTest tests in my demo repository: https://github.com/r0haaaan/kubernetes-mockserver-demo
I'm using the official Plaid Java API to make a demo application. I've got the back end working in Sandbox, with public tokens generated by their /sandbox/public_token/create endpoint.
Now, I'm trying to modify the front-end from Plaid's quickstart project to talk with my back end, so I can start using the development tier to work with my IRL bank account.
I'm implementing the basic first step - generating a link_token. However, when the front end calls my controller, I get the following error:
ErrorResponse{displayMessage='null', errorCode='INVALID_FIELD', errorMessage='client_id must be a properly formatted, non-empty string', errorType='INVALID_REQUEST', requestId=''}
This is my current iteration on trying to generate a link_token:
public LinkTokenResponse generateLinkToken() throws IOException {
    List<String> plaidProducts = new ArrayList<>();
    plaidProducts.add("transactions");
    List<String> countryCodes = new ArrayList<>();
    countryCodes.add("US");
    countryCodes.add("CA");

    Response<LinkTokenCreateResponse> response =
            plaidService.getClient().service().linkTokenCreate(new LinkTokenCreateRequest(
                    new LinkTokenCreateRequest.User("test_user_ID"),
                    "test client",
                    plaidProducts,
                    countryCodes,
                    "en"
            ).withRedirectUri("")).execute();

    try {
        ErrorResponse errorResponse = plaidService.getClient().parseError(response);
        System.out.println(errorResponse.toString());
    } catch (Exception e) {
        // deal with it. you didn't even receive a well-formed JSON error response.
    }
    return new LinkTokenResponse(response.body().getLinkToken());
}
I modeled this after how it seems to work in the Plaid Quickstart example. I do not see the client ID being set explicitly anywhere in there, or anywhere else in Plaid's Java API. I'm at a bit of a loss.
I'm not super familiar with the Java Plaid library specifically, but when using the Plaid client libraries, the client ID is generally set when initializing the client instance. From there, it is automatically included in any calls you make from that client.
You can see the client ID being set in the Java Quickstart here:
https://github.com/plaid/quickstart/blob/master/java/src/main/java/com/plaid/quickstart/QuickstartApplication.java#L67
PlaidClient.Builder builder = PlaidClient.newBuilder()
        .clientIdAndSecret(configuration.getPlaidClientID(), configuration.getPlaidSecret());

switch (configuration.getPlaidEnv()) {
    case "sandbox":
        builder = builder.sandboxBaseUrl();
        break;
    case "development":
        builder = builder.developmentBaseUrl();
        break;
    case "production":
        builder = builder.productionBaseUrl();
        break;
    default:
        throw new IllegalArgumentException("unknown environment: " + configuration.getPlaidEnv());
}

PlaidClient plaidClient = builder.build();
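Once the client is built this way, every call made through it should carry the client ID automatically. For example, the link-token call from your snippet would just go through the configured client (an untested sketch reusing your request objects):

// Sketch: no explicit client_id anywhere; the credentials configured on
// the builder are attached to the request by the client itself.
Response<LinkTokenCreateResponse> response = plaidClient.service()
        .linkTokenCreate(new LinkTokenCreateRequest(
                new LinkTokenCreateRequest.User("test_user_ID"),
                "test client",
                plaidProducts,
                countryCodes,
                "en"))
        .execute();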
I have created a Lambda function which is triggered by a DynamoDB stream. I am trying to process the DynamoDB events and put them into a Kinesis stream after some transformation. The Lambda has full access to both DynamoDB and the Kinesis stream.
I am using CloudWatch to check the logs and can see that the DynamoDB events are successfully processed. But when I try to create the Kinesis client (present in a different class), the code fails. I tried logging the error and even printing it, but it did not help. Sometimes the logs end with this message:
END RequestId: {some request id}
Other times, I get the following error
log4j:WARN No appenders could be found for logger (com.amazonaws.AmazonWebServiceClient).
The code fails at the time of creation of the Kinesis client. I can see the log messages / print statements before the creation of the Kinesis client, but the code fails right at that line. I am not sure what the problem is. Can someone please help me out?
Here is the class in which the code fails:
private AmazonKinesis kinesisClient;
private AmazonKinesisClientBuilder clientBuilder;
private String streamName;

public TestKinesisPut(String streamName) {
    this.streamName = streamName;
    BasicAWSCredentials awsCreds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
    System.out.println("aws creds are: " + awsCreds);
    clientBuilder = AmazonKinesisClientBuilder.standard()
            .withRegion(Regions.AP_SOUTH_1)
            .withCredentials(new AWSStaticCredentialsProvider(awsCreds));
    System.out.println("Credentials are set: \n " + clientBuilder);
    try {
        System.out.println("About to build new kinesis client");
        // the code fails after this line
        kinesisClient = clientBuilder.build();
        System.out.println("Built kinesis client");
    } catch (Exception e) {
        System.out.println("failed to initialize producer: " + e.getMessage());
        kinesisClient = null;
    }
}
Thanks
After a few days of head scratching I decided to tinker with the configuration of my Lambda function. It looks like the problem was caused by an OutOfMemoryError. I increased the memory of my Lambda function and it started working.
It seems that at the time of creation of the Kinesis client, the JVM was running out of metaspace. I did some research and found this Stack Overflow thread; please refer to the link for a detailed discussion of a similar scenario.
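Increasing the memory is usually done from the Lambda console or CLI; for completeness, here is a sketch of doing the same with the AWS SDK for Java v1 (the function name and size below are placeholders, not the actual values I used):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionConfigurationRequest;

public class RaiseLambdaMemory {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
        // Placeholder function name; 512 MB is an arbitrary example size.
        lambda.updateFunctionConfiguration(new UpdateFunctionConfigurationRequest()
                .withFunctionName("my-dynamodb-to-kinesis-fn")
                .withMemorySize(512));
    }
}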
Hi, in our application we create certain dynamic integration flows that we remove and recreate on the fly.
Mostly things have worked great, but we have sometimes observed the error below, especially when the integration flow tries to remove dependent beans. Can someone comment on whether this is a bug or whether we are missing something? Error trace below:
java.lang.NullPointerException: null
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resetBeanDefinition(DefaultListableBeanFactory.java:912)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.removeBeanDefinition(DefaultListableBeanFactory.java:891)
    at org.springframework.integration.dsl.context.IntegrationFlowContext.removeDependantBeans(IntegrationFlowContext.java:203)
    at org.springframework.integration.dsl.context.IntegrationFlowContext.remove(IntegrationFlowContext.java:189)
Code that removes the integration flow:
flowContext.remove("flowId");
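For context, we register each flow roughly like this (a simplified sketch; the channel name and handler here are made up, not our real flow definition):

// Simplified sketch: the id used at registration time is the same id that
// is later passed to flowContext.remove(..).
IntegrationFlow flow = IntegrationFlows
        .from("inputChannel") // made-up channel name
        .handle(msg -> LOG.debug("payload [{}]", msg.getPayload()))
        .get();
flowContext.registration(flow)
        .id("flowId")
        .register();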
Update: here is the invoking code:
if (discoveryService.isFLowPresent(flowId)) {
    LOG.debug("Removing and creating flow [{}]", flowId);
    discoveryService.removeIntegrationFlow(fc.getFeedId());
    LOG.debug("Removing old job and creating a fresh one with new params [{}]", flowId);
    try {
        discoveryService.createFlow(fc.getFeedId());
    } catch (ExecutionException e) {
        throw new IllegalStateException("Error while starting flow for integration adapter " + fc.getFeedId(), e);
    }
}