Push Notification with FCM + Spring Boot, SenderId mismatch error

I'm working on a project that uses Spring Boot to send notifications to my React Native app. Since I'm using Expo for my React Native app, I followed a different configuration from this tutorial, tested it with the Notifications composer in Firebase, and it worked well.
For Spring Boot, I followed another tutorial to configure FCM, and I also created a private key file from the Admin SDK page in Firebase to use here:
try {
    FirebaseOptions options = FirebaseOptions.builder()
            .setCredentials(GoogleCredentials.fromStream(new ClassPathResource(MY_PRIVATE_KEY_FILE).getInputStream()))
            .build();
    if (FirebaseApp.getApps().isEmpty()) {
        FirebaseApp.initializeApp(options);
    }
} catch (IOException e) {
    e.printStackTrace();
}
So I wrote a method to send the message, using the token from the Server key, like this:
private String token = "AAAAm_p0hoc:************************************";

public void sendPushNotificationWithData() {
    PushNotificationRequest pushNotificationRequest = new PushNotificationRequest();
    pushNotificationRequest.setMessage("Send push notifications from Spring Boot server");
    pushNotificationRequest.setTitle("test Push Notification");
    pushNotificationRequest.setToken(token);
    Map<String, String> appData = new HashMap<>();
    appData.put("name", "PushNotification");
    try {
        fcmService.sendMessage(appData, pushNotificationRequest);
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
But I keep getting this response from FCM:
POST https://fcm.googleapis.com/v1/projects/myproject/messages:send
{
  "error": {
    "code": 403,
    "message": "SenderId mismatch",
    "status": "PERMISSION_DENIED",
    "details": [
      {
        "@type": "type.googleapis.com/google.firebase.fcm.v1.FcmError",
        "errorCode": "SENDER_ID_MISMATCH"
      }
    ]
  }
}
I tried creating a new project in Firebase and updating the private and server keys, but I still get the same response.
Is there something I forgot to set up, or am I doing something wrong?
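(For reference, fcmService.sendMessage isn't shown above; with the Firebase Admin SDK it would presumably look something like the sketch below. PushNotificationRequest comes from the tutorial; the rest is assumption, not the actual code in question.)

import com.google.firebase.messaging.FirebaseMessaging;
import com.google.firebase.messaging.Message;
import com.google.firebase.messaging.Notification;

// Sketch: the FCM v1 API expects a device registration token in setToken(),
// not a legacy server key.
public String sendMessage(Map<String, String> data, PushNotificationRequest request)
        throws InterruptedException, ExecutionException {
    Message message = Message.builder()
            .setToken(request.getToken())
            .setNotification(Notification.builder()
                    .setTitle(request.getTitle())
                    .setBody(request.getMessage())
                    .build())
            .putAllData(data)
            .build();
    // sendAsync(...) returns an ApiFuture<String> holding the message ID.
    return FirebaseMessaging.getInstance().sendAsync(message).get();
}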

Related

WebFlux subscribe() method getting stuck

I am developing a microservice application in Spring Boot. I am using Spring Cloud Gateway there. Since Spring Cloud Gateway uses the WebFlux module, I want to extract the username and password inside a ServerAuthenticationConverter. But unfortunately the flow gets stuck on the subscribe() method.
@Component
public class MyConverter implements ServerAuthenticationConverter {

    @Override
    public Mono<Authentication> convert(ServerWebExchange exchange) {
        String token = exchange.getRequest().getHeaders().getFirst("token");
        Map<String, String> credentialMap = new HashMap<>();
        if (StringUtils.containsIgnoreCase(exchange.getRequest().getPath().toString(), "/login")) {
            exchange.getFormData().subscribe(data -> {
                for (Map.Entry<String, List<String>> mapEntry : data.entrySet()) {
                    for (String value : mapEntry.getValue()) {
                        credentialMap.put(mapEntry.getKey(), value);
                        log.info("key=" + mapEntry.getKey() + "|value=" + mapEntry.getValue());
                    }
                }
            });
            User user = new User(credentialMap.get("username"), credentialMap.get("password"));
            return Mono.justOrEmpty(new UsernamePasswordAuthenticationToken(user, credentialMap.get("password"), List.of(new SimpleGrantedAuthority("ADMIN"))));
        } else {
            if (StringUtils.isNotBlank(token)) {
                if (StringUtils.contains(token, "Bearer")) {
                    return Mono.justOrEmpty(new MyToken(AuthorityUtils.NO_AUTHORITIES, token.substring(7)));
                } else {
                    return Mono.justOrEmpty(new MyToken(AuthorityUtils.NO_AUTHORITIES, token));
                }
            }
        }
        throw new IllegalArgumentException("Invalid Access");
    }
}
But after printing the log statement inside the subscribe() callback, the program flow halts, with no exception.
I think the subscribe() method is causing some thread-level issue. Can someone figure out the problem?
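(For context: a fully non-blocking variant would chain exchange.getFormData() into the returned Mono instead of subscribing inside convert(). A minimal sketch of the /login branch only, not the original code:)

// Build the Authentication from the form data reactively; getFirst(...) is
// from MultiValueMap, so nothing blocks or races inside convert().
return exchange.getFormData().map(data -> {
    String username = data.getFirst("username");
    String password = data.getFirst("password");
    User user = new User(username, password);
    return (Authentication) new UsernamePasswordAuthenticationToken(
            user, password, List.of(new SimpleGrantedAuthority("ADMIN")));
});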

Getting FirebaseApp with name [DEFAULT] doesn't exist after deploying Spring Boot jar file on AWS Beanstalk

I have a Spring Boot REST API which uploads some documents to Firestore DB. The problem is that when I run it locally it works absolutely fine without any issue, but when I package it as a jar and upload it to AWS Beanstalk, that endpoint gives the error response below.
{
  "timestamp": "2022-11-01T13:53:21.121+00:00",
  "status": 500,
  "error": "Internal Server Error",
  "message": "FirebaseApp with name [DEFAULT] doesn't exist. "
}
This is how I am reading the Firebase service account file (file name is serviceaccount.json), which is located in the src/main/resources folder, and I have a
firebase.credential.resource=serviceaccount.json
entry in my application.properties:
@Value("${firebase.credential.resource}")
String resourcePath;

@PostConstruct
public void initialize() {
    try {
        Resource resource = new ClassPathResource(resourcePath);
        //FileInputStream serviceAccount = new FileInputStream(resource.getFile());
        FirebaseOptions options = new FirebaseOptions.Builder()
                .setCredentials(GoogleCredentials.fromStream(resource.getInputStream()))
                .build();
        if (FirebaseApp.getApps().isEmpty()) {
            FirebaseApp.initializeApp(options);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
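(Note: the catch block above swallows any initialization failure, so the app still starts and the missing [DEFAULT] app only surfaces when the endpoint is hit. A fail-fast sketch of the same method, as an assumption rather than the original code, would rethrow so the deployment fails with the real cause:)

@PostConstruct
public void initialize() throws IOException {
    // getInputStream() works when the resource is packaged inside the jar;
    // resource.getFile() does not, which is why the commented-out
    // FileInputStream line would break after packaging.
    Resource resource = new ClassPathResource(resourcePath);
    FirebaseOptions options = new FirebaseOptions.Builder()
            .setCredentials(GoogleCredentials.fromStream(resource.getInputStream()))
            .build();
    if (FirebaseApp.getApps().isEmpty()) {
        FirebaseApp.initializeApp(options);
    }
    // No catch: if the credentials can't be read, startup fails loudly here.
}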

Panache reactiveTransactional timeout with no stack trace

Hi, I have played a lot with the following code and have read https://github.com/quarkusio/quarkus/issues/21111
I think I am facing a very similar issue, where it works the first 4 times and then stops working; things get stuck and eventually show:
2022-09-15 23:21:21,029 ERROR [io.sma.rea.mes.provider] (vert.x-eventloop-thread-16) SRMSG00201: Error caught while processing a message: io.vertx.core.impl.NoStackTraceThrowable: Timeout
I have seen exactly this behaviour in multiple bug reports and discussion threads.
I am using quarkus-hibernate-reactive-panache + quarkus-smallrye-reactive-messaging with Kafka (v2.12).
@Incoming("words-in")
@ReactiveTransactional
public Uni<Void> storeToDB(Message<String> message) {
    return storeMetamodels(message).onItemOrFailure().invoke((v, throwable) -> {
        if (throwable == null) {
            Log.info("Successfully stored");
            message.ack();
        } else {
            Log.error(throwable, throwable);
            message.nack(throwable);
        }
    });
}
private Uni<Void> storeMetamodels(Message<String> message) {
    List<EntityMetamodel> metamodels = Lists.newArrayList();
    for (String metamodelDsl : metamodelDsls.getMetamodelDsls()) {
        try {
            EntityMetamodel metamodel = new EntityMetamodel();
            metamodel.setJsonSchema("{}");
            metamodels.add(metamodel);
        } catch (IOException e) {
            Log.error(e, e);
        }
    }
    return Panache.getSession().chain(session -> session.setBatchSize(10)
            .persistAll(metamodels.toArray((Object[]) new EntityMetamodel[metamodels.size()])));
}
NOTE: This same code works when running on RESTEasy Reactive, but I need to move the actual processing and storing to the DB away from RESTEasy, as it will be a large process and I do not want it stuck on the REST API waiting for a few minutes.
I hope some Panache or SmallRye Reactive Messaging experts can shed some light.
Could you try this approach, please?
@Inject
Mutiny.SessionFactory sf;

@Incoming("words-in")
public Uni<Void> storeToDB(Message<String> message) {
    return storeMetamodels(message).onItemOrFailure().invoke((v, throwable) -> {
        if (throwable == null) {
            Log.info("Successfully stored");
            message.ack();
        } else {
            Log.error(throwable, throwable);
            message.nack(throwable);
        }
    });
}
private Uni<Void> storeMetamodels(Message<String> message) {
    List<EntityMetamodel> metamodels = Lists.newArrayList();
    for (String metamodelDsl : metamodelDsls.getMetamodelDsls()) {
        try {
            EntityMetamodel metamodel = new EntityMetamodel();
            metamodel.setJsonSchema("{}");
            metamodels.add(metamodel);
        } catch (IOException e) {
            Log.error(e, e);
        }
    }
    return sf.withTransaction(session -> session
            .setBatchSize(10)
            .persistAll(metamodels.toArray((Object[]) new EntityMetamodel[metamodels.size()])));
}
I suspect you've hit a bug where the session doesn't get closed at the end of storeToDB. Because the session isn't closed when it's injected through Panache or dependency injection, the connection stays open, and you eventually hit the limit on open connections.
At the moment, using the session factory makes it easier to control when the session gets closed.

Google Cloud Pub/Sub subscription not found on cloud but found on local

So I have a weird problem. I have a subscription in Google Cloud Pub/Sub. On my local machine, the program can find the subscription without any problem, but in the cloud (I use a GCP VM) I get this error:
com.google.api.gax.rpc.NotFoundException: com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Resource not found (resource=subs_id).
Even though I used subs_id on my local machine too.
My setup is like this:
Service account credentials.json:
{
  "type": "service_account",
  "project_id": "project-id-123",
  "private_key_id": "privateKeyId",
  "private_key": "-----BEGIN PRIVATE KEY-----\nprivateKey\n-----END PRIVATE KEY-----\n",
  "client_email": "user@project-id-123.iam.gserviceaccount.com",
  "client_id": "clientId",
  "auth_uri": "auth_uri",
  "token_uri": "token_uri",
  "auth_provider_x509_cert_url": "certs_url",
  "client_x509_cert_url": "clients_certs_url"
}
Code (mostly copied from their documentation):
@Component
public class GoogleCloudPubSubListenerImpl implements GoogleCloudPubSubListener, InitializingBean {

    private static final String PROJECT_ID = ServiceOptions.getDefaultProjectId();
    private static final Logger LOGGER = LoggerFactory.getLogger(GoogleCloudPubSubListenerImpl.class);

    @Value("${pub.sub.subscription}")
    private String subId;

    @Override
    public void listenPubSub() {
        ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(PROJECT_ID, subId);
        Subscriber subscriber;
        try {
            subscriber = Subscriber.newBuilder(subscriptionName, new someService()).build();
            subscriber.startAsync().awaitRunning();
        } catch (Exception e) {
            LOGGER.error("Subscriber err : {}, {}", e.getLocalizedMessage(), e);
        }
    }
}
In the cloud I run this command:
gcloud pubsub subscriptions pull subs_id
ERROR: (gcloud.pubsub.subscriptions.pull) NOT_FOUND: Resource not found (resource=subs_id).
But this command succeeds:
gcloud pubsub subscriptions pull projects/project-id-123/subscriptions/subs_id
Listed 0 items.
When I change the subs_id in my program to projects/project-id-123/subscriptions/subs_id,
I get this error:
Invalid character "/" in path section "projects/project-id-123/subscriptions/subs_id".
Is anyone having the same problem? I need advice here ... Thank you
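(For what it's worth, the "Invalid character" message suggests a fully qualified name being passed where a bare ID is expected. The resource-name helper distinguishes the two forms; a small sketch reusing the IDs from the question:)

import com.google.pubsub.v1.ProjectSubscriptionName;

// of() takes the bare project and subscription IDs and builds the full path itself;
// parse() accepts the already-qualified "projects/.../subscriptions/..." string.
ProjectSubscriptionName fromIds =
        ProjectSubscriptionName.of("project-id-123", "subs_id");
ProjectSubscriptionName fromPath =
        ProjectSubscriptionName.parse("projects/project-id-123/subscriptions/subs_id");
// Both represent projects/project-id-123/subscriptions/subs_id.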

How to configure the ObjectMapper for Unirest in a spring boot project

I am using Unirest in a project which is working fine for me. However, I want to post some data and do not want to escape all the JSON as it looks ugly and is just a pain in the neck.
I found a few links on how to configure the ObjectMapper for Unirest, and they give this code.
Unirest.setObjectMapper(new ObjectMapper() {
    com.fasterxml.jackson.databind.ObjectMapper mapper =
            new com.fasterxml.jackson.databind.ObjectMapper();

    public String writeValue(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    public <T> T readValue(String value, Class<T> valueType) {
        try {
            return mapper.readValue(value, valueType);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
});
But no examples show where it is best to do this in a Spring Boot API project.
I tried to set it up in the main class, but I get an error that 'setObjectMapper' cannot be resolved. I also tried to do this in the controller, but I get the same error.
My Gradle deps for these two libraries are:
// https://mvnrepository.com/artifact/com.mashape.unirest/unirest-java
compile group: 'com.mashape.unirest', name: 'unirest-java', version: '1.4.5'
compile 'com.fasterxml.jackson.core:jackson-databind:2.10.1'
Can anyone show me how to use the Jackson object mapper with Unirest in a Spring Boot API project? I have been googling and reading docs for two days now and would appreciate some help.
Thank you in advance.
You have several issues here:
The version of Unirest you're using (1.4.5) does not contain the feature to configure the object mapper. This feature was added later (see the GitHub PR), so you should update to the latest version available at Maven Central, 1.4.9. This alone will fix your compilation problem.
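For example, the dependency line would become:
compile group: 'com.mashape.unirest', name: 'unirest-java', version: '1.4.9'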
You can keep your Unirest configuration code in the main method. However, if you want to use not the default Jackson ObjectMapper() but the one from the Spring context, then it's better to create a small configuration bean to inject the ObjectMapper:
@Configuration
public class UnirestConfig {

    @Autowired
    private com.fasterxml.jackson.databind.ObjectMapper mapper;

    @PostConstruct
    public void postConstruct() {
        Unirest.setObjectMapper(new ObjectMapper() {
            public String writeValue(Object value) {
                try {
                    return mapper.writeValueAsString(value);
                } catch (JsonProcessingException e) {
                    throw new RuntimeException(e);
                }
            }

            public <T> T readValue(String value, Class<T> valueType) {
                try {
                    return mapper.readValue(value, valueType);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
    }
}
Other than that, it looks like this library has changed its package name: it is now com.konghq. You might want to consider updating, though the library API may have changed.
Update: for the latest version
compile group: 'com.konghq', name: 'unirest-java', version: '3.1.04'
the new API is Unirest.config().setObjectMapper(...)
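A rough sketch of the same configuration bean against the newer com.konghq API (assuming kong.unirest.ObjectMapper is the mapper interface in your version; verify against the release you're on):

@Configuration
public class UnirestConfig {

    @Autowired
    private com.fasterxml.jackson.databind.ObjectMapper mapper;

    @PostConstruct
    public void postConstruct() {
        // In 3.x the mapper is set on the config object, and the
        // interface to implement lives in the kong.unirest package.
        Unirest.config().setObjectMapper(new kong.unirest.ObjectMapper() {
            @Override
            public String writeValue(Object value) {
                try {
                    return mapper.writeValueAsString(value);
                } catch (JsonProcessingException e) {
                    throw new RuntimeException(e);
                }
            }

            @Override
            public <T> T readValue(String value, Class<T> valueType) {
                try {
                    return mapper.readValue(value, valueType);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
    }
}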
