CocoaPods: group pod libraries under a company name

If I want to publish a set of public pods, is there a way to group them under a company name to prevent collision with other pods and to be clear these pods are from the expected company?
I see Firebase is able to do this (https://cocoapods.org/pods/Firebase), but I don't see anything different in their .podspecs that would distinguish this, unless it's all based on spec.author or the name we set up when we register with trunk?
If we can do this, how do we add maintainers to this group?

CocoaPods trunk is a flat namespace: it has no grouping mechanism other than name strings, which are really only a convention.
Firebase is an umbrella pod with a set of subspecs that reference other pods which are, by convention, prefixed with the "Firebase" string. For example, the Firebase/Storage subspec depends on the FirebaseStorage pod.
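For illustration, an umbrella podspec following that same convention might look like this (the company and pod names here are hypothetical, not Firebase's actual spec):

```ruby
Pod::Spec.new do |s|
  s.name     = 'MyCompany'
  s.version  = '1.0.0'
  s.summary  = 'Umbrella pod for MyCompany libraries.'
  s.homepage = 'https://example.com'
  s.license  = { :type => 'MIT' }
  s.authors  = { 'MyCompany' => 'dev@example.com' }
  s.source   = { :git => 'https://example.com/mycompany.git', :tag => s.version.to_s }

  # Each subspec simply depends on a separately published pod
  # that shares the "MyCompany" prefix by convention.
  s.subspec 'Storage' do |ss|
    ss.dependency 'MyCompanyStorage', '~> 1.0'
  end
end
```

The prefix itself carries no special meaning to trunk; it only signals ownership by convention, so each "MyCompany*" pod is still pushed and maintained individually.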

How to create golang modules for others?

Example Scenario
I have an AWS S3 bucket with lots of objects which a couple of my applications need to access. The applications use some info available to them to form the S3 object name, get the object, and run a transformation on the object data before using it for further processing specific to the application.
I would like to create a module that holds the logic for forming the object name, obtaining the object from S3, and running the transformation on the object data, so that I won't be duplicating these functions in multiple places.
In this scenario should I add AWS SDK as a dependency in the module? Keep in mind that the applications might have to use AWS SDK for something else specific to that application or they might not require it at all.
In general what is the best way to solve problems like this i.e where to add the dependency? And how to manage different versions?
If your code has to access packages from the AWS SDK, then yes, you have no choice but to add it as a dependency. If it doesn't, and the logic is generic or you can abstract it away from the AWS SDK, then you don't need the dependency (and in fact the Go tooling, like go mod tidy, will remove the dependency from go.mod if it is unused).
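As a sketch of that abstraction, the shared module can define an interface for fetching objects so that only the AWS-backed implementation (not shown) imports the SDK. All names here are hypothetical stand-ins for the scenario above:

```go
package main

import (
	"fmt"
	"strings"
)

// ObjectFetcher abstracts the storage backend. A real implementation
// would wrap the AWS SDK's S3 GetObject call, but this package itself
// never imports the SDK.
type ObjectFetcher interface {
	Fetch(key string) ([]byte, error)
}

// ObjectKey builds the S3 object name from application-supplied info
// (the naming scheme here is purely illustrative).
func ObjectKey(app, id string) string {
	return fmt.Sprintf("%s/%s.json", app, id)
}

// Transform runs the shared transformation on the raw object data
// (uppercasing stands in for the real logic).
func Transform(data []byte) []byte {
	return []byte(strings.ToUpper(string(data)))
}

// fakeFetcher is an in-memory double; applications would plug in an
// AWS-SDK-backed implementation instead.
type fakeFetcher struct{ objects map[string][]byte }

func (f fakeFetcher) Fetch(key string) ([]byte, error) {
	return f.objects[key], nil
}

// Process ties the steps together: form the key, get the object,
// transform the data.
func Process(f ObjectFetcher, app, id string) ([]byte, error) {
	data, err := f.Fetch(ObjectKey(app, id))
	if err != nil {
		return nil, err
	}
	return Transform(data), nil
}

func main() {
	f := fakeFetcher{objects: map[string][]byte{
		"billing/42.json": []byte("hello"),
	}}
	out, _ := Process(f, "billing", "42")
	fmt.Println(string(out))
}
```

With this split, an application that never touches S3 directly can depend on the module without the AWS SDK ever appearing in its own build, since go mod tidy only records dependencies that are actually imported.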
In this scenario should I add AWS SDK as a dependency in the module? Keep in mind that the applications might have to use AWS SDK for something else specific to that application or they might not require it at all.
Yes, if any package in your module depends on the AWS SDK, the Go modules system is going to add the AWS SDK as a dependency of your module. There is nothing special you are supposed to do with your module.
Try this recipe with Go 1.11 or higher (and make sure to work outside of GOPATH):
Write your module like this:
Tree:
moduledir/packagedir1
moduledir/packagedir2
Initialize the module:
Recipe:
cd moduledir
go mod init moduledir ;# or go mod init github.com/user/moduledir
Build module packages:
Recipe:
go install ./packagedir1
go install ./packagedir2
Module things are supposed to automagically work!
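Assuming the github.com path form was used with go mod init, the resulting go.mod might look like this (module path and Go version are illustrative):

```
module github.com/user/moduledir

go 1.11
```

Dependencies are then appended to this file automatically as you build packages that import them.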
In general what is the best way to solve problems like this i.e where to add the dependency? And how to manage different versions?
The modules system automatically manages dependencies for your module and records them in the go.mod and go.sum files. If you need to override some dependency, use the go get command. For instance, see this question: How to point Go module dependency in go.mod to a latest commit in a repo?
You can also find much information on Modules here: https://github.com/golang/go/wiki/Modules
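For illustration, after pinning a dependency with go get, the require block in go.mod might read like this (all paths, versions, and the commit hash are made up):

```
require (
	github.com/aws/aws-sdk-go v1.44.100
	// recorded by `go get github.com/some/dep@abcdef1` as a pseudo-version:
	github.com/some/dep v0.0.0-20190101120000-abcdef123456
)
```

The pseudo-version encodes a timestamp and commit hash, which is how modules refer to a commit that has no release tag.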

Cloud Automation Manager Pods on CrashLoopBackOff

I'm having an issue where some of my pods are in CrashLoopBackOff when I try to deploy CAM from the Catalog. I also followed the instructions in the IBM documentation to clear the data from the PVs (by doing rm -Rf /export/CAM_db/*) and purge the previous installations of CAM.
Here are the pods that are on CrashLoopBackOff:
Cam Pods
Here's the specific error when I describe the pod:
MongoDB Pod
It is almost always the case that if the cam-mongo pod does not come up properly, the issue is with the PV being unable to mount/read/access the actual disk location or the data itself on the PV.
Since your pod events indicate the container image already exists and is scoped to the store, it seems like you have already tried to install CAM before and it's using the CE version from the Docker store, correct?
If a prior deploy did not go well, do clean up the disk locations as per the doc,
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/cam_uninstalling.html
but as you showed, I can see you already tried cleaning CAM_db, so do the same for the CAM_logs, CAM_bpd and CAM_terraform locations.
Make a note of our install troubleshooting section, as it describes a few scenarios in which CAM mongo can be impacted:
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/ts_cam_install.html
At the bottom of the PV Create topic we provide some guidance around the NFS mount options that work best; please review it:
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/cam_create_pv.html
Hope this helps you make some forward progress!
You can effectively ignore the postStart error; it means the mongo container probably failed to start, so it kills a post-start script.
This issue is usually due to an NFS configuration problem.
I would recommend you try the troubleshooting steps in the section titled "cam-mongo pod is in CrashLoopBackoff" here:
https://www.ibm.com/support/knowledgecenter/SS2L37_3.1.0.0/ts_cam_install.html
If it's NFS, typically it's things like:
- no_root_squash is missing on the base directory
- fsid=0 needs to be removed on the base directory for that setup
- folder permissions
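For reference, an NFS export line that follows those points might look like this in /etc/exports (the path and client spec are illustrative, not from the CAM docs):

```
/export  *(rw,sync,no_root_squash,no_subtree_check)
```

Note there is no fsid=0 option, and no_root_squash is present so the pods can write as root; remember to re-export with exportfs -ra after editing.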
Note: I have seen another customer experience this issue where the problem was caused by NFS: a .snapshot file already existed there, and they had to remove it first.

Is it possible to use a private Pod in a public Pod?

I'm creating a public pod which uses a private pod. When I try to push the public pod, it shows error messages like "Can't find the specification".
Is it possible to use the private Pod inside the public Pod?
No.
If one of your pods is private, that means it's only accessible to you. Since pods are not embedded libraries, the source code needs to be accessible to the host project as well.
In essence, the dependencies must have an access level higher than or equal to that of the host project.

Editing/removing messages from ViewController

I have built a real-time database messenger in Swift 2.3 using Firebase 3.6.0 and JSQMessagesViewController pod.
At the moment I can send and receive messages between different devices that have the application installed from Xcode, but unfortunately I'm unable to edit or remove the messages in the MessengerViewController, where the JSQMessagesViewController pod is being used.
How can I go about doing this? I've included an example of my dilemma to further illustrate what my problem seems to be. I know I may have to do something with my Firebase database in my code to remove or edit these messages but I can't seem to wrap my head around how to go about doing it.
At the moment, the only way I can remove messages from the MessengerViewController is if I go into my database from my Firebase console and manually delete the data.
Also, I'm using the following pods:
pod 'JSQMessagesViewController'
pod 'Firebase/Database'
pod 'Firebase/Auth' (my application uses user authentication)
pod 'Firebase/Core'
pod 'Firebase/Messaging' (my application also includes cloud messaging to send push notifications from the Firebase console)
I'm wondering if maybe I should've also used pod 'Firebase/Storage' 💭
You just need to attach a handler to the message. This handler will trigger an action, and since it was triggered off a specific message, you can look the message up by ID and call the appropriate delete method to have Firebase remove it from your database. I would actually suggest you instead tag the message as deleted and then, if that tag exists on a message, not display it client side. But that is really up to you.
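As a sketch, the soft-delete approach could be modeled in the Realtime Database like this (the node layout and field names are hypothetical):

```json
{
  "messages": {
    "-Labc123xyz": {
      "senderId": "user_1",
      "text": "Hello!",
      "deleted": false
    }
  }
}
```

To soft-delete, flip deleted to true on that message's reference (e.g. via updateChildValues in the Firebase SDK) and filter out messages with deleted == true before handing them to JSQMessagesViewController; for a hard delete, call removeValue() on the message reference instead.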

Maintaining staging+prod environments from single repo, 2 remotes with revel buildpack on heroku

Revel models are defined under the models package, so in order to import them one must use the full repo path relative to the $GOPATH/src folder, which in this case is PROJECTNAME/app/models, and thus results in
import PROJECTNAME/app/models
So far, so good if you're using your app name as the folder name on your local dev machine and have dev + prod environments only.
Heroku's docs recommend using multiple apps for different environments (e.g. for staging), with the same repository but distinct remotes.
This is where the problem starts: since the staging environment resides under an alternative app name (let's say PROJECTNAME_STAGING), its sources are stored under PROJECTNAME_STAGING, but the actual code still imports PROJECTNAME/app/models instead of PROJECTNAME_STAGING/app/models, so the compile fails, etc.
Is there any possibility to manage multiple environments with a single local repo and multiple origins with revel's heroku buildpack? or a feature is needed in the buildpack that is yet to be implemented?
In addition, there is this possible issue with the .godir file that is required to be versioned and contain the git path to the app, so what about the multi-environment duality regarding this file?
The solution was simple enough:
The buildpack uses the string in .godir both as the argument for revel run and as the directory name under GOPATH/src. My .godir file had a git.heroku.com/<APPNAME>.git format; instead I just used the APPNAME format.
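Illustratively (a real .godir contains only a single line, not the comments):

```
# before: buildpack creates GOPATH/src/git.heroku.com/<APPNAME>.git, breaking imports
git.heroku.com/<APPNAME>.git
# after: sources land in GOPATH/src/APPNAME, matching the import paths
APPNAME
```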
