Laravel - Is there a pattern for creating complex scheduled tasks? [closed]

I am creating several scheduled tasks within my Laravel site using the schedule() function in the Console/Kernel object. It is acceptable to place simple logic in the schedule() function (based on the explanation from the Laravel site):
$schedule->call(function () {
    DB::table('recent_users')->delete();
})->daily();
However, if the logic needs to be more complicated (100 lines of code or more to decide which models to focus on), basic programming rules instruct us to break the code up into additional functions and classes, which can then be called from schedule(). However, I am a bit confused as to where these functions/classes should go. A few possibilities:
A Controller function, even though it does not use a route?
A trait, used by the Kernel?
Console command, even though it's never called from the console?
A helper?
(edit) Other classes that share the Kernel's namespace?
Another kind of class that functions like a Controller, but internally only? (Something that I am not aware of.)
Where do these classes/functions go?
====
There are many established patterns that can be regarded as correct or incorrect based on Laravel's intended use, regardless of opinion. There are also many valid ways in which Laravel can be used, and within this context there are many opinions on the best way to do things, many of which could be considered correct. That said, there are framework design patterns which are beyond the scrutiny of opinion (for now). We use routes to link URLs to Controllers (not jobs). We access models from the controllers and not the other way around. My question is intended to discover an established pattern that I am not aware of, not to debate opinions regarding these patterns.
Basically, I want to avoid building my site in a way that will prove blatantly embarrassing in the future (especially regarding less well-known site elements such as jobs, scheduling, and console commands) simply because I did not know Laravel facts.
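For illustration only: one pattern commonly suggested in the Laravel community is to move the heavy logic into a dedicated invokable class and pass an instance straight to $schedule->call(), which keeps the Kernel thin. A minimal sketch, where the App\Tasks namespace and the class name are my own invention, not an official convention:

<?php

namespace App\Tasks;

use Illuminate\Support\Facades\DB;

// Hypothetical invokable task class; Laravel invokes __invoke()
// when an instance is passed to $schedule->call().
class PruneStaleUsers
{
    public function __invoke(): void
    {
        // The complex model-selection logic lives here, not in the Kernel.
        DB::table('recent_users')
            ->where('last_seen_at', '<', now()->subDays(30))
            ->delete();
    }
}

// In App\Console\Kernel::schedule():
// $schedule->call(new \App\Tasks\PruneStaleUsers)->daily();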

Related

Basic REST API concepts [closed]

I'm trying to develop a REST API for the first time, and it seems to me that I have a problem with some basic REST API concepts. I'm not sure whether I should only create CRUD operations for each model and then analyze the responses from those operations using Vue (in my case), or whether I should let my DRF side do some business logic.
SPECIFIC QUESTION
Here's an example. I want to remove an object and also update some other objects in another table which are related to the object I'm deleting. Should I create one POST(?) endpoint to do all of that, or should I fetch the related objects using Vue, call "delete" on each of them from Vue, and only then delete the original object? As you can see, the first case is one complex operation, and the second is a couple of CRUD operations.
I'm asking this because I found many interpretations of REST APIs on Google and I struggle to find the truth. It seems to me that DRF doesn't want me to create complex views; it looks like it just wants me to create four operations for each model.
Hope I made myself clear, thank you for trying to help.
What you really seem to be asking is what degree of coupling is appropriate for a REST API. The answer is as little as possible, but what's possible will depend on your application and your requirements.
To use your example, yes, it's preferable to have a uniform interface for deletion for each one of your resources, but what are your other requirements? Is it a problem if you cascade the deletion of children resources? Is it OK for you to automate deletion of orphan resources? Can you afford to lose transaction integrity by requiring the client to explicitly delete multiple resources through their own endpoints? If you can't find a way to make the uniform deletion interface work for you, there's nothing wrong or unRESTful in creating a single POST endpoint for doing what you need, as long as that's not coupled to the needs of a particular client implementation.
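To make the single-endpoint option concrete, here is a minimal, framework-agnostic sketch in plain PHP with PDO (the question uses DRF, but the shape is the same in any stack; the endpoint, table, and column names are invented for illustration). The point is that the composite operation runs in one transaction, which a client-driven sequence of CRUD calls cannot guarantee:

<?php
// POST /widgets/{id}/retire -- hypothetical composite endpoint.
// Deletes a widget and updates its related rows atomically.
function retireWidget(PDO $db, int $widgetId): void
{
    $db->beginTransaction();
    try {
        // Update the related rows first (invented table/column names).
        $stmt = $db->prepare(
            'UPDATE orders SET widget_id = NULL WHERE widget_id = :id'
        );
        $stmt->execute(['id' => $widgetId]);

        // Then delete the original object.
        $stmt = $db->prepare('DELETE FROM widgets WHERE id = :id');
        $stmt->execute(['id' => $widgetId]);

        $db->commit();
    } catch (Throwable $e) {
        $db->rollBack();
        throw $e; // Let the framework translate this into a 5xx response.
    }
}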
Don't expect to find "the truth", or a manual of best practices for REST API or final answers to your questions about that, because REST is just an architectural style, not a rigid specification for architectures. It's just a set of principles used to guide long-term design and evolution of applications.
If you don't have long-term requirements that ask for a careful adoption of REST principles, more important than finding the truth about REST is to respect the Principle of Least Astonishment, since many people already have strong opinions about how a REST API should be implemented. A good example is API versioning with URLs. Adding a version number to your URLs is a REST anti-pattern, but it's a widespread practice believed by many to be a REST best practice. That happens because most so-called APIs are strongly coupled to their clients, and API versioning makes it much easier to make backwards-incompatible changes. Making backwards-incompatible changes is not a problem when you implement the REST API and clients correctly, but it takes a lot more work than simply tacking a version number somewhere.
If you really have long-term requirements or if you are genuinely interested in learning about how to design and implement a REST API correctly, try searching for "Hypermedia API" instead of "REST API". Many people gave up on the term REST and decided to start using a new term to refer to APIs that implement REST correctly.

New to ABAP GUI Programming - what to learn? [closed]

I am new to ABAP programming. To prepare myself for my new job, I am reading ABAP books. While reading, I learned that ABAP has several legacy elements that keep it backwards compatible with older SAP releases.
Regarding GUIs, I am reading about SAP-UI (PARAMETERS, etc.), Dynpros, and WebDynpros. Now I am unsure what to focus my learning efforts on.
Are there common rules, like "You should know a little about basic SAP-UI, but mainly focus on WebDynpros"?
Background information: My new employer does SAP customizing for small and medium-sized enterprises.
I'm not a consultant, but I work for a medium-sized (~120 employees) company myself. If you were to work for us, you would mostly create custom ABAP reports and maybe sometimes program a user exit. Small companies usually don't spend the money needed for big SAP-driven portals, so they probably don't use NetWeaver AS Java at all. That means ABAP Dynpros and ABAP lists as your main UI elements. Sometimes it is good to also know your way around other ways of creating reports, for instance SAP Query.
If I were you, I would start with basic ABAP. You won't have any fun working with Dynpros if you haven't gotten your head around the basics first. Learn to work with internal tables, work areas, and field symbols. Have a look at some basic ABAP Objects material (for instance the ALV grid, very useful for displaying all sorts of tables). You should also understand the ABAP Dictionary, the place where structures, tables, data elements, data domains, and search helps are defined.

Will the joomla performance be affected if there is only one controller handling all the requests? [closed]

The controller is responsible for responding to user actions. In the case of a web application, a user action is (generally) a page request. The controller will determine what request is being made by the user and respond appropriately by triggering the model to manipulate the data appropriately and passing the model into the view.
Source: Controller - J1.5:Developing a MVC Component/Introduction
So I was wondering: how many simultaneous HTTP requests (JSON calls, XML calls, plain HTTP calls) can a single controller handle before it starts hurting the application? I could use multiple controllers, but honestly, how many requests can a single controller in Joomla handle? In other words, will Joomla's performance be affected if one controller handles all the requests, in contrast to breaking the logic into multiple controllers?
By thinking in terms of Joomla, you are just going to confuse the answer a lot, because you introduce a lot of extra factors. You could ask the same question about any PHP file, like so:
Simpler question:
I have a file called script.php; how many HTTP requests can call this file at the same time?
The answer: however many your server can support. Making two files (script1.php and script2.php) won't necessarily improve performance at all. It will likely have some improvement, though, because every PHP script that is called is loaded into memory, and your server only has so much memory.
The second variable would likely be processing power. The less the controller has to process, the less load each call places on the server. (For example, if you were performing a calculation on a set of data but needed to display it in three different places on the page, calculate it only once and then save the result in a variable that can be reused for each display.)
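For illustration, that compute-once advice in plain PHP (the data and labels are invented):

<?php
// Invented example data.
$orders = [12.50, 8.00, 19.95];

// Calculate once...
$total = array_sum($orders);

// ...then reuse the variable in each of the three displays,
// instead of re-running array_sum() every time.
echo "Header total: {$total}\n";
echo "Sidebar total: {$total}\n";
echo "Footer total: {$total}\n";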
In all of this, though, there is no magic number for the number of requests you could handle. Even if you ran tests and told us your controller could handle 72 simultaneous connections, that is a useless number.
What you actually want to know:
So, the test you actually should run on your server is the difference between one controller and multiple controllers. This comparison takes into account the current hardware you run the test on and helps you optimize the code.
And honestly, on that note, having worked with Joomla a lot, I'm not sure there will be enough of a difference to matter. There are probably far worse bottlenecks in your code, and you would do best to focus on standard optimization practices: PHP code optimization
As one final note, I do think it is valuable to have multiple controllers, but that is more so I can remember where the different functions are and what they do than because of any inherent speed issue.
Generally, the controller is instantiated once for each request, so each controller instance handles exactly one request. How many requests can be served simultaneously depends on the resources available (and, of course, consumed per thread) and will vary from environment to environment.
It depends on the server you have. Read the articles below:
HTTP Server Performance Tuning
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html

retrieving subset of FHIR resource [closed]

All,
I'm interested in the ability to retrieve a specific element within a FHIR resource using a single URL call. For example, suppose I'm interested in the gender of my patients. I would like to read just the gender element using the URL, without having to walk the XML node path every time. Right now, this functionality does not appear to exist. What do you think about the usefulness of this? I would like to get a sense of the community's interest. Thanks.
-Jeff
For the default query mechanism, you can't bring anything back other than the full resource. (And you don't even have a guarantee that the desired element will be present on all instances of the resource unless that element was part of your search criteria - in which case, why bother asking? :>) There's a new mechanism for defining custom queries; refer to _query in the search/query section of the FHIR spec. However, it's not clear whether this will allow retrieval of anything other than full resource instances either.
This functionality does not exist at this time. It's on the wishlist, and we're trying to decide whether we can frame it in a sensible and safe fashion. The case you describe is relatively obvious, but many others aren't. And, in fact, when I think about it, it's not really clear to me how it would work. What do you get back? Just the gender element? So the server needs to - in effect - do the node walk for you, and you get, instead, to deal with a profusion of different schemas. It's not really obvious to me that this is a net saving for the client, and it's certainly a greater cost for the server.
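In the meantime, any such extraction has to happen on the client: fetch the full resource and walk the node path yourself. A minimal sketch in PHP against a hypothetical FHIR server (the base URL and patient ID are invented for illustration):

<?php
// Fetch the full Patient resource -- the server cannot return a subset,
// so the node walk happens on the client.
$xml = simplexml_load_string(
    file_get_contents('https://example.com/fhir/Patient/123')
);

// FHIR XML resources use the http://hl7.org/fhir namespace.
$xml->registerXPathNamespace('f', 'http://hl7.org/fhir');

// Walk the path to the single element we care about.
$nodes = $xml->xpath('/f:Patient/f:gender/@value');
echo $nodes ? (string) $nodes[0] : 'gender not present';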

How to write a spec that is productive? [closed]

I've seen different program managers write specs in different formats. Almost everyone has had his/her own style of writing a spec.
On one hand are those wordy documents which, given to a programmer, are likely to cause him/her to miss a few things. I personally dread the Word document specs... I think it's because of my reading style: I am always speed-reading, which I think will cause me to miss key points.
On the other hand, I have seen innovative specs written in Excel by one of our clients. The way he wrote the spec was to create a mock application in Excel and use some VBA to simulate it. He would note things like where the form should go on a button click, or what action it should perform (in comments).
On a data form, he would display the form in cells, and on each data-entry cell he would comment on what the valid values were, what validation it should perform, etc.
I think that using this technique made it less likely to miss things that needed to be done. Also, it was much easier for the developer to unit test. The tester, too, had a better understanding of the system, since it 'performed' before actually being written.
Visio is another tool for screen design, but I still think Excel has an edge over it, considering its VBA support and its functions.
Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge... I myself could see a lot of productivity gain from using it. Are there any better spec formats that would actually help the programmer?
Joel on Software is particularly good at these and has some good articles about the subject...
A specific case: the write-up and the spec.
Two approaches have worked well for me.
One is the "working prototype" which you sort of described in your question. In my experience, the company contracted a user interface expert to create fully functional HTML mocks. The data on the page was static, but it allowed for developers and management to see and play with a "functional" version of the site. All that was left to do was replace the static data on the pages with dynamic content - this prototype was our spec for the initial version of our product. The designer even included detailed explanation of some subtle behavior in popup dialogs that would appear when hovering over mock links. It worked well for our team.
On a subsequent project, we didn't have the luxury of a UI expert, but we used a similar approach. We used a wiki to mock up a version of the site. We created links between the functional aspects of the system and documented each piece of functionality in detail. Each piece of functionality could, in turn, link to detailed design and architecture decisions. We also used the wiki to hold our feature list for each release (which became our release notes); these documents linked back to the detailed feature pages. The wiki became a living document, describing our releases and the evolution of our system in great detail. It was an invaluable resource.
I prefer the wiki to the working prototype because it's more easily extensible - growing and becoming more valuable as your system evolves.
You may want to have a look at Test-Driven Requirements, which is a technique for making executable specifications.
There are some great tools like FIT, Fitnesse, GreenPepper or Concordion for that purpose.
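To give a flavor of the idea, independent of any one of those tools: an executable specification is a requirement phrased as a test that passes or fails against the real system. A minimal PHPUnit-style sketch, where both the discount rule and the Discount class are invented for illustration:

<?php

use PHPUnit\Framework\TestCase;

// The spec line "orders over $100 get a 10% discount" expressed as
// an executable test. Discount is a hypothetical class under test.
class DiscountSpecTest extends TestCase
{
    public function testOrdersOverOneHundredGetTenPercentOff(): void
    {
        $this->assertSame(135.0, (new Discount())->apply(150.0));
    }

    public function testSmallOrdersPayFullPrice(): void
    {
        $this->assertSame(50.0, (new Discount())->apply(50.0));
    }
}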
One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Weigert (I think that's his name; I'm blanking on it right now). I've seen US government organizations use it as a template, and from my three work experiences with the government, they like to make their own wherever they can, so if they are reusing it, it must be good.
Also - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and cannot do, using text and diagrams.
