Laravel IoC and singleton pattern

I'm trying to use the Laravel IoC container by creating a singleton object. I'm following the pattern from the tutorial, as below. I put a log message into the object's constructor (FooBar in this example) and I can see that the object is created every time I refresh the page in the browser. How is the singleton pattern meant to work in Laravel's IoC container? I understood that it's a shared object for the entire application, but it's obviously being created every time it's requested by App::make(...). Can someone explain, please? I thought I would use the singleton pattern to maintain a shared MongoDB connection.
App::singleton('foo', function()
{
    return new FooBar;
});

Here is what the Laravel documentation says:
Sometimes, you may wish to bind something into the container that
should only be resolved once, and the same instance should be returned
on subsequent calls into the container:
This is how you bind a singleton object, and you did it right:
App::singleton('foo', function()
{
    return new FooBar;
});
But the problem is that you are thinking about the whole request/response process in the wrong way. You mentioned that:
I can see that the object is created every time I refresh the page in the browser.
Well, this is the normal behaviour of an HTTP request. Every time you refresh the page you are sending a new request, the application boots up and processes that request, and once it sends the response to your browser its job is finished; nothing is kept on the server (sessions and cookies are persistent, but that's a different case).
Now, it has been said that the same instance should be returned on subsequent calls. In this case, subsequent calls means calling App::make(...) several times within the same request, i.e. within a single life cycle of the application; the container won't make a new instance on every call. For example, say you bind the singleton like this:
App::before(function($request)
{
    App::singleton('myApp', function(){ ... });
});
Then, within the same request, you call it first in your controller:
class HomeController {
    public function showWelcome()
    {
        App::make('myApp'); // new instance will be returned
        // ...
    }
}
And then you call it a second time, in the after filter:
App::after(function($request, $response)
{
    App::make('myApp'); // Application will check for an instance and if found, it'll be returned
});
In this case, both calls happen in the same request, and because the binding is a singleton, the container makes only one instance on the first call, keeps that instance, and returns the same instance on subsequent calls.
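You can convince yourself of this within a single request; a minimal sketch:
$a = App::make('myApp');
$b = App::make('myApp');
var_dump($a === $b); // true: both calls return the same shared instance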

It is meant to be used multiple times throughout the application's instance. Each time you refresh the page, it's a new instance of the application.
Check this out for more info and practical usage: http://codehappy.daylerees.com/ioc-container
It's written for Laravel 3, but the same applies to Laravel 4.
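For the shared MongoDB connection mentioned in the question, the same pattern applies. A minimal sketch, assuming the MongoClient class from the legacy PHP mongo extension and a hypothetical database name:
App::singleton('mongo', function()
{
    // Created once per request life cycle, then reused
    return new MongoClient('mongodb://localhost:27017');
});

$db = App::make('mongo')->selectDB('mydb'); // same connection object on every make() call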

Related

FireAndForget call to WebApi from Azure Function

I want to be able to call an HTTP endpoint (that I own) from an Azure Function at the end of the Azure Function request.
I do not need to know the result of the request
If there is a problem in the HTTP endpoint that is called I will log it there
I do not want to hold up the return to the client calling the initial Azure Function
Offloading the call of the secondary WebApi onto a background job queue is considered overkill for this requirement
Do I simply call HttpClient.PutAsync without an await?
I realise that the dependencies I have used up until the point that the call is made may well not be available when the call returns. Is there a safe way to check if they are?
My answer may cause some controversy, but you can always start a background task and execute the call that way.
For anyone reading this answer, this is far from recommended. The OP has been very clear that they don't care about exceptions or understanding what sort of result the request is returning ...
// Fire and forget: the task is never awaited, so any exception is silently swallowed
Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync(...);
    }
});
If you want to ensure that the call has fired, it may be worth waiting for a second or two after the call is made to ensure it's actually on its way.
await Task.Delay(1000);
If you're worried about dependencies in the call, be sure to construct your payload (i.e. serialise it, etc.) outside the Task.Run; basically, minimise the work the background task does.
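For example, a sketch of that idea, assuming Newtonsoft.Json and a hypothetical model and endpoint URL:
// Serialise while the request-scoped dependencies are still alive
var json = JsonConvert.SerializeObject(model);
var content = new StringContent(json, Encoding.UTF8, "application/json");

// The background task now only touches objects it owns
Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync("https://example.com/api/notify", content);
    }
});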

keeping laravel session variable till user logs out

I have simple authentication for users. In UserController I have a function called postLogin():
public function postLogin()
{
    // Attempt to authenticate with the supplied credentials
    if (Auth::attempt($credentials))
    {
        return Redirect::intended('desk')->with('stream', "SomeData");
    }
}
With the above code I am able to log in successfully with the "SomeData" variable, which I am retrieving by:
<?php
$class = Session::get('stream');
var_dump($class);
?>
The first time it goes to the "/desk" URL it dumps the value perfectly fine, that is, "SomeData", but once I refresh the page the session resets and the value turns to null.
How do I keep this value until the user logs out?
From the official Laravel documentation:
Flash Data
Sometimes you may wish to store items in the session only for the next
request. You may do so using the flash method. Data stored in the
session using this method will only be available during the subsequent
HTTP request, and then will be deleted. Flash data is primarily useful
for short-lived status messages:
$request->session()->flash('status', 'Task was successful!');
If you need to keep your flash data around for even more requests, you
may use the reflash method, which will keep all of the flash data
around for an additional request. If you only need to keep specific
flash data around, you may use the keep method:
$request->session()->reflash();
$request->session()->keep(['username', 'email']);
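In other words, ->with() flashes the data, so by default it survives only one subsequent request. If you want the value to live until the user logs out, store it in the session directly instead of flashing it; a minimal sketch:
public function postLogin()
{
    if (Auth::attempt($credentials))
    {
        // Persists for the whole session, not just the next request
        Session::put('stream', 'SomeData');
        return Redirect::intended('desk');
    }
}

// On any later request:
$class = Session::get('stream'); // still "SomeData"

// Removed when the session is flushed at logout:
Auth::logout();
Session::flush();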

Angular.JS multiple $http post: canceling if one fails

I am new to Angular and want to use it to send data to my app's backend. On several occasions, I have to make several HTTP POST calls that should either all succeed or all fail. This is the scenario that's causing me a headache: given two HTTP POST calls, what if one call succeeds but the other fails? This will lead to inconsistencies in the database. I want to know if there's a way to cancel the succeeding calls if at least one call has failed. Thanks!
Without knowing more about your specific situation, I would urge you to use promise error handling if you are not already doing so. The only way I know of to cancel a request that has already been sent is the timeout option of $http (look at this SO post), but you can definitely prevent future requests. When you make an $http call, it returns a promise object (look at $q here). It exposes two methods that you can chain onto your $http request, success and error, so it looks like $http(...).success(...).error(...). So if you have error handling in each of these scenarios and you get an .error, don't make the next call.
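A minimal sketch of that chaining, assuming Angular 1.x's legacy success/error helpers and a hypothetical payload:
// Only fire the second call once the first has succeeded
$http.post('/first/url', payload)
    .success(function (data) {
        $http.post('/second/url', data);
    })
    .error(function (err) {
        // First call failed: don't make the next call
        console.log('first call failed', err);
    });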
You can cancel the next requests in the chain, but the previous ones have already been sent. You need to provide the necessary backend functionality to reverse them.
If every step is dependent on the other and causes changes in your database, it might be better to do the whole process in the backend, triggered by a single "POST" request. I think it is easier to model this process synchronously, and that is easier to do in the server than in the client.
However, if you must do the post requests in the client side, you could define each request step as a separate function, and chain them via then(successCallback, errorCallback) (Nice video example here: https://egghead.io/lessons/angularjs-chained-promises).
In your case, at each step you can check whether the previous one failed and take action to reverse it by using the error callback of then:
var firstStep = function(initialData){
    return $http.post('/some/url', initialData).then(function(dataFromServer){
        // Do something with the data
        return {
            dataNeededByNextStep: processedData,
            dataNeededToReverseThisStep: moreData
        };
    });
};
var secondStep = function(dataFromPreviousStep){
    return $http.post('/some/other/url', dataFromPreviousStep.dataNeededByNextStep).then(function(dataFromServer){
        // Do something with the data
        return {
            dataNeededByNextStep: processedData,
            dataNeededToReverseThisStep: moreData
        };
    }, function(){
        // On error, undo the previous step
        reversePreviousStep(dataFromPreviousStep.dataNeededToReverseThisStep);
    });
};
var thirdStep = function(){ ... };
...
firstStep(initialData).then(secondStep)
    .then(thirdStep)
    ...
If any step in the chain fails, its promise is rejected and the subsequent steps will not be executed.

Request/Acknowledge pattern ASP.NET WebAPI

Does WebAPI have any built-in support to fire off some work and return immediately to the caller with an acknowledgement? I am looking to build a data processing server which has some long-running processes that need to be run. The client never expects the results straight away and can query for them later.
With this being the case, I am looking for a way to fire off some work in such a way that won't block the controller from returning.
There's nothing in WebAPI keeping you from starting some background work and returning immediately. So you could have an action implemented like this:
public HttpResponseMessage Post()
{
    Task.Factory.StartNew(() => DoWork());
    return new HttpResponseMessage(HttpStatusCode.Accepted);
}
This is just a simple example, but you would probably want to track the Task in some kind of dictionary, so you could return the results when the client queries for them later.
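A minimal sketch of that idea, assuming an in-memory ConcurrentDictionary keyed by a job id (fine for a single server, but entries are lost when the process recycles):
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class JobsController : ApiController
{
    // Hypothetical in-process job store
    private static readonly ConcurrentDictionary<Guid, Task<string>> Jobs =
        new ConcurrentDictionary<Guid, Task<string>>();

    public HttpResponseMessage Post()
    {
        var id = Guid.NewGuid();
        Jobs[id] = Task.Run(() => DoWork()); // start the work without awaiting it
        // 202 Accepted plus the id the client can poll later
        return Request.CreateResponse(HttpStatusCode.Accepted, id);
    }

    public HttpResponseMessage Get(Guid id)
    {
        Task<string> job;
        if (!Jobs.TryGetValue(id, out job))
            return Request.CreateResponse(HttpStatusCode.NotFound);
        if (!job.IsCompleted)
            return Request.CreateResponse(HttpStatusCode.Accepted, "still running");
        return Request.CreateResponse(HttpStatusCode.OK, job.Result);
    }

    private static string DoWork() { /* long-running work elided */ return "done"; }
}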

TransactionScope disposal failure

I'm using the TransactionScope class within a project based on Silverlight and RIA services. Each time I need to save some data, I create a TransactionScope object, save my data using Oracle ODP, then call the Complete method on my TransactionScope object and dispose the object itself:
public override bool Submit(ChangeSet changeSet)
{
    bool result;
    TransactionOptions txopt = new TransactionOptions();
    txopt.IsolationLevel = IsolationLevel.ReadCommitted;
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required, txopt))
    {
        // Here I open an Oracle connection and fetch some data
        GetSomeData();
        // This is where I persist my data
        result = base.Submit(changeSet);
        tx.Complete();
    }
    return result;
}
My problem is, the first time I get the Submit method to be called, everything is fine, but if I call it a second time, the execution gets stuck for a couple of minutes after the call to Complete (so, when disposing tx), then I get the Oracle error "ORA-12154". Of course, I already checked that my persistence code completes without errors. Any ideas?
Edit: today I repeated the test and for some reason I'm getting a different error instead of the Oracle exception:
System.InvalidOperationException: Operation is not valid due to the current state of the object.
at System.Transactions.TransactionState.ChangeStatePromotedAborted(InternalTransaction tx)
at System.Transactions.InternalTransaction.DistributedTransactionOutcome(InternalTransaction tx, TransactionStatus status)
at System.Transactions.Oletx.RealOletxTransaction.FireOutcome(TransactionStatus statusArg)
at System.Transactions.Oletx.OutcomeEnlistment.InvokeOutcomeFunction(TransactionStatus status)
at System.Transactions.Oletx.OletxTransactionManager.ShimNotificationCallback(Object state, Boolean timeout)
at System.Threading._ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Object state, Boolean timedOut)
I somehow managed to solve this problem, although I still can't figure out why it showed up in the first place: I just moved the call to GetSomeData outside the scope of the distributed transaction. Since the call to Submit may open many connections and perform any kind of operation on the DB, I can't tell why GetSomeData was causing this problem (it just opens a connection, calls a very simple stored function and returns a boolean). I can only guess it has something to do with the implementation of the Submit method and/or with the instantiation of multiple Oracle connections within the same transaction scope.
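For reference, a minimal sketch of the reordered method, assuming GetSomeData doesn't itself need to run inside the transaction:
public override bool Submit(ChangeSet changeSet)
{
    // Workaround: fetch the data before entering the transaction scope,
    // so its connection is never enlisted in the distributed transaction
    GetSomeData();

    bool result;
    TransactionOptions txopt = new TransactionOptions();
    txopt.IsolationLevel = IsolationLevel.ReadCommitted;
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required, txopt))
    {
        result = base.Submit(changeSet); // persist inside the transaction
        tx.Complete();
    }
    return result;
}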
