Editing/removing messages from ViewController - swift2

I have built a real-time database messenger in Swift 2.3 using Firebase 3.6.0 and JSQMessagesViewController pod.
At the moment I can send and receive messages between different devices that have the application installed from Xcode, but unfortunately I'm unable to edit or remove the messages in the MessengerViewController, where the JSQMessagesViewController pod is being used.
How can I go about doing this? I've included an example of my dilemma to further illustrate what my problem seems to be. I know I may have to do something with my Firebase database in my code to remove or edit these messages but I can't seem to wrap my head around how to go about doing it.
At the moment, the only way I can remove messages from the MessengerViewController is if I go into my database from my Firebase console and manually delete the data.
Also, I'm using the following pods:
pod 'JSQMessagesViewController'
pod 'Firebase/Database'
pod 'Firebase/Auth' (my application uses user authentication)
pod 'Firebase/Core'
pod 'Firebase/Messaging' (my application also includes cloud messaging to send push notifications from the Firebase console)
I'm wondering if maybe I should've also used pod 'Firebase/Storage' 💭

You just need to add a handler (for example a tap or long-press gesture) to the message cell. This handler triggers an action, and since it was triggered from a specific message you can look that message up by its ID and call the appropriate delete method to have Firebase remove it from your database. Alternatively, I would suggest you simply tag the message as deleted and then skip rendering any message that carries that tag on the client side. But that is really up to you.
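For example, here is a minimal sketch in Swift 2.3 with Firebase 3.x, assuming your messages are stored under /messages/<messageId> and that messages and messageKeys are hypothetical arrays in MessengerViewController holding the displayed JSQMessage objects and their Firebase keys. Call it from whatever handler you attach to the message cell, for example a long-press gesture recognizer:

// Sketch only: `messages` and `messageKeys` are assumed properties of
// MessengerViewController, and /messages/<messageId> is an assumed layout.
func deleteMessageAtIndexPath(indexPath: NSIndexPath) {
    let messageKey = messageKeys[indexPath.item]
    let messageRef = FIRDatabase.database().reference()
        .child("messages")
        .child(messageKey)

    // Option 1: remove the message node from the database entirely.
    messageRef.removeValue()

    // Option 2 (softer): flag the message as deleted and skip it when you
    // rebuild the local messages array in your Firebase observer.
    // messageRef.child("deleted").setValue(true)

    // Keep the local data source in sync and refresh the UI.
    messages.removeAtIndex(indexPath.item)
    messageKeys.removeAtIndex(indexPath.item)
    collectionView.reloadData()
}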

Related

Kubernetes HTTP Traffic goes to pod being updated

I have a pipeline with the following structure:
Step 1: kubernetes-deployment
Step 2: kubectl rollout restart deploy -n NAMESPACE
Step 3: http-calls to deployment A and B
In the same namespace there is a database pod, and pods A and B are connected to this database.
The problem
The problem is caused by rolling updates: when a rolling update is applied, Kubernetes starts new pods because the deployment was updated, but an old pod is not terminated until its replacement is up.
Since kubectl rollout restart deploy is a non-blocking call, it does not wait for the update to finish, and as far as I know there is no built-in way to make kubectl do so.
Because I execute some HTTP requests right after this call, I sometimes run into the problem that, when the update is not fast enough, the HTTP calls are received and answered by the old pods of deployments A and B. Shortly afterwards the old pods are terminated, as the new ones are up and running.
This means the effects of those HTTP requests are no longer visible: they were received by the old pods, which saved the corresponding data in the database located in the "old" database pod. Since the database pod is also restarted, that data is lost.
Note that I am deliberately not using a persistent volume here: this is a nightly build scenario, I want to restart those deployments every day, and the database state should only ever contain the data from the current day's build.
Do you see any way of solving this?
Including a simple wait step would probably work, but I am curious if there is a fancier solution.
Thanks in Advance!
Adding a startupProbe and livenessProbe to the deployments, together with kubectl rollout status deployment <deploymentname> as a blocking wait step in the pipeline, solved my problem.
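For reference, a sketch of that setup; the deployment name, health-check path and port (my-deployment, /healthz, 8080) are placeholders. The probes make a pod count as started and healthy only once the application actually answers requests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: app
          image: my-app:latest
          startupProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10

The pipeline step then becomes a blocking sequence, since kubectl rollout status waits until the rollout has finished:

kubectl rollout restart deploy -n NAMESPACE
kubectl rollout status deployment/my-deployment -n NAMESPACE --timeout=300s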

discord.py how to make a JSON file work on Heroku

When I host the bot on Heroku it no longer keeps the JSON files up to date (even when the writes seem to work, the changes do not show up), and when I restart it is as if nothing had happened and everything has been reset.
How can I fix this?
Heroku does not persist changes made to files on disk. Heroku dynos restart every once in a while, and that is when the data is lost; redeploying the app can also cause the data to be lost. Using a third-party database, such as MongoDB, is recommended.
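For example, here is a minimal sketch of the kind of change involved, assuming a MONGODB_URI config var is set on the Heroku app and pymongo is listed in requirements.txt; the database, collection and field names used here (botdata, scores, points) are just placeholders:

# Persist bot data in MongoDB instead of a local JSON file.
import os
from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_URI"])  # set via Heroku config vars
scores = client["botdata"]["scores"]

def add_points(user_id: int, amount: int) -> None:
    # Upsert so the document is created the first time a user appears.
    scores.update_one({"_id": user_id}, {"$inc": {"points": amount}}, upsert=True)

def get_points(user_id: int) -> int:
    doc = scores.find_one({"_id": user_id})
    return doc.get("points", 0) if doc else 0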

How can I see when a Heroku app was deleted?

One of our test apps got deleted recently. We're trying to track down how and why (our suspicion is an error in an API call that we've made). Is there a way to get any logs from Heroku that show who, when, and how an app was deleted?

Cloud Automation Manager Pods on CrashLoopBackOff

I'm having an issue where some of my pods are in CrashLoopBackOff when I try to deploy CAM from the Catalog. I also followed the instructions in the IBM documentation to clear the data from the PVs (by doing rm -Rf /export/CAM_db/*) and to purge the previous installations of CAM.
Here are the pods that are in CrashLoopBackOff:
[screenshot: Cam Pods]
Here's the specific error when I describe the pod:
[screenshot: MongoDB Pod]
Ro-
It is almost always the case that if the cam-mongo pod does not come up properly, the issue is that the PV is unable to mount/read/access the actual disk location, or there is a problem with the data itself on the PV.
Since your pod events indicate that the container image already exists and is scoped to the store, it seems like you have already tried to install CAM before and it is using the CE version from the Docker store, correct?
If a prior deploy did not go well, do clean up the disk locations as per the doc,
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/cam_uninstalling.html
but as you showed, you already tried cleaning CAM_db, so do the same for the CAM_logs, CAM_bpd and CAM_terraform locations.
Make a note of our install troubleshooting section as it describes a few scenarios in which CAM mongo can be impacted:
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/ts_cam_install.html
At the bottom of the PV Create topic we provide some guidance on the NFS mount options that work best; please review it:
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/cam_create_pv.html
Hope this helps you make some forward progress!
You can effectively ignore the postStart error; it means the mongo container probably failed to start, so the post-start script gets killed.
This issue is usually due to an NFS configuration problem.
I would recommend trying the troubleshooting steps in the section "cam-mongo pod is in CrashLoopBackoff" here:
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/ts_cam_install.html
If it's NFS, it's typically things like the following (see the example export entry below):
- no_root_squash is missing on the base directory
- fsid=0 needs to be removed from the base directory for that setup
- folder permissions
Note: I have seen another customer experiencing this issue where the problem was caused by NFS: there was already a .snapshot file there, and they had to remove it first.
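For reference, a base-directory export along these lines usually works for that kind of setup (the path and the client wildcard are placeholders), i.e. with no_root_squash present and without fsid=0:

/export  *(rw,sync,no_root_squash,no_subtree_check)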

Cloudant add on Heroku not working

I am trying to install Cloudant on my Vulcan-based app.
However, when I try to add the free version of Cloudant through the Heroku add-ons I get the following error:
Could not communicate with vendor, please try again later
I want to confirm: is this a temporary vendor issue, or is it something with my app?
I contacted Heroku support and was told Heroku is in the process of removing the add-on. I don't know any further details, but it looks like in order to have Cloudant work on Heroku you'll need to set the account up yourself.
This sounds like an error in the brokering between Heroku and Cloudant. If you can file a ticket with support#cloudant.com (including account information and time of the failure) we can track it down on our side and see if there's action we can take. Alternatively, you can always signup directly at cloudant.com as a short-term work around.
