I've got a connection to IronMQ from my Laravel 5.1 project. All the credentials I entered in the config are correct, and I am able to send messages to Iron; they are visible on the dashboard IronMQ provides.
My question is: how do I get those messages back into my project to be read?
Apparently Queue::marshal() is now deprecated, but even so, the concept of that feature doesn't make sense to me.
How exactly do I get a message back from Iron, and how do I send it to a specific place in my Laravel project?
I currently have a Job Class in place, but no way to execute it.
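For reference, my Job class looks roughly like the sketch below (class and payload names here are just placeholders); what I can't figure out is what actually pulls messages off the Iron queue and runs the handle() method.

<?php
// A rough sketch of the Job class I have in place (names are placeholders).
// In Laravel 5.1, queued jobs implement ShouldQueue and expose a handle() method.

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessIronMessage extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $payload;

    public function __construct(array $payload)
    {
        $this->payload = $payload;
    }

    public function handle()
    {
        // This is where a message pulled back from IronMQ should end up.
        \Log::info('Got message from Iron', $this->payload);
    }
}

// Pushing works fine - from a controller using the DispatchesJobs trait:
//   $this->dispatch(new ProcessIronMessage($data));
// Presumably a worker is what pulls it back off the Iron queue and calls handle():
//   php artisan queue:work iron --daemon   (or `php artisan queue:listen iron`)
// but I'm not sure whether that's the intended replacement for Queue::marshal().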
When I attempt to add a new custom domain to my Heroku app, I receive the following error:
Domain "..." could not be created:
... is currently in use by another app.
[Image: the error shown in the "add new domain" sidebar]
Details so far:
I use a free dyno for my app
I own the domain and do not use it in any other Heroku app (in fact, it's not used for any website)
I used this form, Heroku Domain Release, to release the domain from any existing usage on Heroku. This was successful and seems to imply it was in use by another app at some point.
I ran that form again, and received the message that it was no longer in use by any app.
It's been at least an hour since I tried using the release form and I still receive the same error message when trying to add my domain.
I cannot contact Heroku support directly since I use a free dyno.
Any help greatly appreciated!
I have a web application (Angular frontend, Laravel backend API). In it there is a section with a messaging system. Basically, four user roles can post and read messages.
When a user is logged in, I need to check every 10 seconds whether there are new messages for any user of any role (out of the 4 roles).
This works fine, but from time to time I get an HTTP 429 Too Many Requests status code. I have no idea what the reason is. Does anyone have an idea, or can you point me in the right direction to fix this?
Note: I have a custom field system built, and I use that to hold extra data for messages. I notice that when I fetch messages, a considerable number of models related to custom fields are also queried. Could this be a reason?
I found the solution; here are my steps.
I needed to find all the requests going to the API, so I installed this API logger package. Using it, I examined the requests and found that some requests were being made to the API over and over.
I corrected the requests made to the API, and the issue seems to be fixed.
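For anyone else hitting this: in a Laravel API the 429 responses typically come from the throttle middleware on the api route group, so duplicate polling requests can push you over the limit. A minimal sketch of where that limit lives, assuming the stock Laravel 5.x app/Http/Kernel.php (the numbers shown are the defaults):

<?php
// Sketch of app/Http/Kernel.php in a stock Laravel 5.x install (assumed here);
// the throttle middleware is what returns 429 Too Many Requests.

namespace App\Http;

use Illuminate\Foundation\Http\Kernel as HttpKernel;

class Kernel extends HttpKernel
{
    protected $middlewareGroups = [
        'web' => [
            // ... default web middleware ...
        ],

        'api' => [
            // 60 requests per minute per user/IP by default; anything above gets a 429.
            // Deduplicating the repeated requests found with the API logger - or raising
            // the limit, e.g. 'throttle:120,1' - keeps a 10-second poll under the cap.
            'throttle:60,1',
            'bindings',
        ],
    ];
}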
When I researched this online, most of the solutions were about triggering a Slack notification from TravisCI. I want to go in the reverse direction: type a message in Slack and trigger a build task in TravisCI.
I'm looking at Slack's Outgoing WebHooks, under "Custom Integrations" in the Slack app directory. However, their webhook POST data spec is fixed and doesn't seem to be programmable through their web UI alone. There is a column in the UI that lets you fill in URL(s) to POST to, but I don't see any way to customize the data fields of the POST request.
The same goes for TravisCI's Triggering Builds API v3: the data fields it expects in the POST are fixed and unchangeable.
I know I could sign up for a cloud service, write some code, and spin up a server to repackage the parameters, acting as middleware between these two APIs. But I want to see if anyone has managed to trigger TravisCI from Slack in a way that doesn't involve spinning up a server myself.
I ended up hosting a server and writing the porting logic myself. I guess there's no simple way to do this; after all, they are different APIs. Here is the code where I make the request against the TravisCI API, and here is the code where I unpack the Slack webhook POST request.
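For anyone who wants to build the same bridge, a minimal sketch of the porting logic (not my exact code - the environment variable names, repo slug, and branch are assumptions): it accepts the Slack Outgoing WebHook POST and forwards a build request to the TravisCI v3 API.

<?php
// Hypothetical bridge endpoint: receives a Slack Outgoing WebHook POST and
// triggers a Travis CI build via the v3 "Triggering Builds" API.

$slackToken  = getenv('SLACK_OUTGOING_TOKEN');   // token Slack sends with each request
$travisToken = getenv('TRAVIS_API_TOKEN');       // from the Travis settings page / `travis token`
$repoSlug    = urlencode('owner/repo');          // e.g. my-org%2Fmy-project

// Slack Outgoing WebHooks POST form fields such as `token`, `user_name`, `text`.
if (($_POST['token'] ?? '') !== $slackToken) {
    http_response_code(403);
    exit;
}

// Travis CI API v3: POST /repo/{slug}/requests with a JSON body.
$payload = json_encode([
    'request' => [
        'branch'  => 'master',
        'message' => 'Triggered from Slack by ' . ($_POST['user_name'] ?? 'unknown'),
    ],
]);

$ch = curl_init("https://api.travis-ci.org/repo/{$repoSlug}/requests");
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Accept: application/json',
        'Travis-API-Version: 3',
        'Authorization: token ' . $travisToken,
    ],
]);
$response = curl_exec($ch);
curl_close($ch);

// Slack posts whatever `text` you echo back into the channel.
header('Content-Type: application/json');
echo json_encode(['text' => 'Build requested: ' . $response]);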
Developing an SPA on the frontend (with Vue.js) that consumes endpoints from a (Laravel) API on the backend introduces some challenges that need to be tackled:
1. How to sync deployment when introducing new backend/frontend code
If the code is separated into two VCS repositories (frontend/backend), it can be challenging to sync the deployment of both frontend and backend and make sure both finish at exactly the same time. Otherwise this can lead to unexpected behaviour (e.g. calling endpoints that are not yet deployed or have changed). Has anyone come up with a great solution for this? What is your best practice for tackling this problem? What if versioning every little change is not an option?
2. How to make sure that the frontend code of the SPA is being refreshed after deployment?
So you managed to keep your deployments in sync (see problem 1), but how do you make sure that the SPA code of every currently active end user is refreshed? With webpack code splitting enabled, the application might break immediately for users who are in the middle of using your app during a deployment.
How do you make sure that your users are served the latest JS without having them reload the entire application on every request? What are the best practices (besides forcing the user to refresh the entire page via websockets)? Are there solutions that allow currently active users to keep using the application without being forced to refresh, even if they've just finished something that's ready to be saved?
I am very interested in your findings, learnings and solutions!
1. How to sync deployment when introducing new backend/frontend code
The best practice here is to keep the backend and frontend in the same repo. You can, of course, extract some reusable code out of them to use in other projects, but the code base should ideally live in the same repo, or you will keep facing these frustrating code-sync issues. Even if you look at popular Laravel libraries - they all keep the frontend and backend in the same repo.
If that's not an option, I would suggest using a versioning system that can link the versions of both repos. Yep, that means versioning every little change!
2. How to make sure that the frontend code of the SPA is being refreshed after deployment?
Usually I'd avoid doing things that force a refresh on the client codebase, but if you have long user sessions, it may actually make sense.
To do that, you can use any web socket implementation (such as Pusher) and have your CI notify the frontend through web sockets of any deployment. The frontend can then queue a page refresh. Check out this article on how to implement it.
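A minimal sketch of the broadcast side of that idea, assuming a Laravel backend with broadcasting configured (e.g. the Pusher driver); the event name, channel, and version field are illustrative, not prescribed:

<?php
// Sketch of a "deployment happened" broadcast event the CI can fire.

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class AppDeployed implements ShouldBroadcast
{
    /** @var string e.g. a git SHA or build number passed in by the CI pipeline */
    public $version;

    public function __construct($version)
    {
        $this->version = $version;
    }

    public function broadcastOn()
    {
        // Every connected SPA client subscribes to this public channel.
        return new Channel('deployments');
    }
}

// After a successful deploy, the CI job triggers event(new AppDeployed($buildSha))
// - for example through a tiny artisan command it calls - and the Vue app listening
// on the "deployments" channel can queue a page refresh at a safe moment.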
The two questions are tightly coupled and, in my opinion, can't be answered separately. I have some possible strategies to deal with such a scenario:
1. Never introduce breaking changes in the API
API deployments should be incremental, without breaking anything for users of the previous version. That way you can simply push the changes to your backend and, once the backend deployment is complete, deploy the frontend. This is easily achieved if you have separate projects.
This can be done for major releases by prefixing the API with the version:
https://website.url/api/v${version}/${endpoint}
while minor deployments should only be minor adjustments/bugfixes that do not break frontend functionality.
This approach is the best because it ensures absolutely no downtime in user activity, but it requires additional work and may not be feasible in many projects. If the backend does not introduce breaking changes, you can implement a simple polling system (with a long interval, such as minutes) in the frontend that detects whether a reload is necessary to load the new frontend deployment.
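A minimal sketch of what that version prefix can look like in Laravel routes (assuming routes/api.php and a recent 5.x release with the fluent route registrar; controller names are placeholders):

<?php
// Version-prefixed API routes: old and new endpoints coexist during the rollout.

use Illuminate\Support\Facades\Route;

// v1 keeps working untouched for clients that haven't picked up the new frontend yet.
Route::prefix('v1')->group(function () {
    Route::get('messages', 'Api\V1\MessageController@index');
});

// v2 may introduce breaking changes; only the new frontend build calls it.
Route::prefix('v2')->group(function () {
    Route::get('messages', 'Api\V2\MessageController@index');
});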
2. Standard response for outdated requests
Each request from the frontend includes information about the version in use by the frontend. It could be a standard header, a param, whatever. You should wrap your requests in a function that adds this information before sending the request itself.
If the server detects a request from an outdated frontend, it returns a standard response, such as:
{
"error": "update required"
}
The frontend detects the error and reloads the page.
I honestly don't like this approach, because the request may be a POST request with some form data, and a page reload may lose all the user's input, which is annoying.
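For completeness, a minimal sketch of the server-side check as a Laravel middleware, assuming the frontend sends its build version in a custom X-Client-Version header and the current version lives in config (the header name, config key, and 409 status code are illustrative choices):

<?php
// Sketch of the "outdated frontend" check as route middleware.

namespace App\Http\Middleware;

use Closure;

class EnsureClientIsCurrent
{
    public function handle($request, Closure $next)
    {
        $clientVersion  = $request->header('X-Client-Version');
        $currentVersion = config('app.frontend_version'); // written during deployment

        if ($clientVersion !== null && $clientVersion !== $currentVersion) {
            // The standard response the SPA recognises and reacts to (e.g. by reloading).
            return response()->json(['error' => 'update required'], 409);
        }

        return $next($request);
    }
}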
1. How to sync deployment when introducing new backend/frontend code
With a staging environment where you run both test suites before pulling the changes onto production.
2. How to make sure that the frontend code of the SPA is being refreshed after deployment?
Don't just break your API. Implement a grace period. For example, you could check for updates on every request, then notify the user that a new version is available so that they can click a button at their earliest convenience. Record the client version in use in your DB. Once all your users have updated, you can delete the old endpoints.
I am trying to implement Google Checkout on my website.
I have the PHP code sample named "checkout-php-1.3.2" from http://code.google.com/p/google-checkout-php-sample-code/.
I have followed the instructions and am able to send contents to Google Cart successfully.
The problem is I do not know how to update my website's database after the payment has been made.
I looked a little into the demo code, and there is a page, responsehandlerdemo.php, where I can see a lot of notification cases, namely:
merchant-calculation-callback
new-order-notification
order-state-change-notification
charge-amount-notification
If anybody can provide any help regarding which callback to use and how to parse the XML, it would be very helpful.
Regards,
Sourav Mukherjee
With the exception of merchant-calculation-callback (ref), all the other notifications mean something to your order processing (everything that happens after a successful checkout).
E.g.
new-order-notification - is what it says it is: data representing a new order
order-state-change - orders move through different states (statuses), so this notification informs you of those changes
You should go over the Developer docs, particularly the Notification API, for details.
I'm not a PHP developer (.NET), but I've seen the sample code, and it already includes XML parsing for the notifications you receive. Once you get to know the API, you'll know when/where in the flow you need to add your business code (i.e. database storage, etc.).
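To make the flow concrete, here is a rough sketch (not the official sample code, which does this parsing for you) of handling the raw new-order-notification POST and recording the order locally; the element names follow the Notification API docs, and the database details are placeholders, so verify both against what you actually receive.

<?php
// Minimal notification handler sketch for new-order-notification.

$xmlBody = file_get_contents('php://input');
$xml     = simplexml_load_string($xmlBody);

if ($xml !== false && $xml->getName() === 'new-order-notification') {
    // Google Checkout XML uses a default namespace, so fetch children through it.
    $namespaces = $xml->getNamespaces(true);
    $ns         = isset($namespaces['']) ? $namespaces[''] : null;
    $node       = $xml->children($ns);

    $orderNumber = (string) $node->{'google-order-number'};
    $orderTotal  = (string) $node->{'order-total'};

    // Hypothetical persistence step: record the order in your own database.
    $pdo  = new PDO('mysql:host=localhost;dbname=shop', 'db_user', 'db_pass');
    $stmt = $pdo->prepare(
        'INSERT INTO orders (google_order_number, total, status) VALUES (?, ?, ?)'
    );
    $stmt->execute([$orderNumber, $orderTotal, 'new']);
}

// Respond with HTTP 200 so Google stops re-sending the notification.
http_response_code(200);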