CodeIgniter libraries are expected to be stored in the application/libraries directory so that they can be found when initialized.
Drivers, on the other hand, are said to be a special type of library and live in the system/libraries/ directory, each in its own sub-directory.
Yet when you create your own drivers, they are not supposed to go under system/libraries/ as you might expect, but under /application/libraries/, named just like ordinary libraries.
What, then, are the differences between CodeIgniter libraries and drivers?
As the documentation says:
Drivers are a special type of Library that has a parent class and any number of potential child classes.
Child classes have access to the parent class, but not their siblings.
They are useful when you want to create an abstraction layer.
The class CI_Cache (found in /system/libraries/Cache/Cache.php) is probably the easiest to get your head around; it "abstracts" various cache systems (apc, memcached, redis, etc.) so that the different cache systems can be used with the same set of functions.
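For instance (a minimal sketch based on the CodeIgniter 3 caching driver docs; the 'backup' adapter is an optional fallback), the calling code stays the same no matter which backend is configured:

    $this->load->driver('cache', array('adapter' => 'apc', 'backup' => 'file'));

    if ( ! $data = $this->cache->get('foo'))
    {
        $data = 'bar';
        $this->cache->save('foo', $data, 300); // keep for 300 seconds
    }

    // A specific backend can also be addressed directly through the parent:
    $this->cache->apc->save('foo', 'bar', 300);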
Other examples in the framework that use abstraction (but not CI_Driver_Library) are CI_Session and the database classes.
These two tutorials might give you some ideas and additional background:
Codeigniter Drivers Tutorial
How to Create Custom Drivers in CodeIgniter
All developer-created classes (controllers, models, drivers, etc.) should be placed in the appropriate sub-directory of the /application folder.
You should never place developer-created files in the /system folder or its sub-folders.
Drivers are loaded using $this->load->driver('lib_name');
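To make the file layout concrete, here is a minimal sketch of a custom driver (all names are hypothetical; note that a child class is reached through its parent, never directly):

    // application/libraries/Payment/Payment.php
    class Payment extends CI_Driver_Library {

        // Children this library may load (short names, without the parent prefix)
        protected $valid_drivers = array('paypal');
    }

    // application/libraries/Payment/drivers/Payment_paypal.php
    class Payment_paypal extends CI_Driver {

        public function charge($amount)
        {
            // ...talk to the payment gateway here...
            return TRUE;
        }
    }

    // In a controller:
    $this->load->driver('payment');
    $this->payment->paypal->charge(10); // child accessed through the parent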
I would like to test some software and make it well-behaved regarding cloud files. For reference, functions like RtlIsPartialPlaceholder and RtlIsCloudFilesPlaceholder were introduced to inspect the information returned when traversing the folder hierarchy. The linked documentation covers kernel mode, but these functions also exist in user mode (ntdll.dll), and they are implemented quite trivially.
However, in order to test said software, I need a way to create the various states a placeholder for a OneDrive file can be in.
What functions (registered COM classes would also be fine) can I use to automate creation of a reproducible test set up which I can use to:
create placeholders
only partially hydrate a placeholder
achieve the same for both directories and files
PS: this question was the only one I was able to find here on SO that is even remotely connected to the topic.
Standard file/directory operations (create/read/write, etc.) can be achieved transparently using the Win32 API. That's the whole point of this technology.
So, you can create a placeholder using the standard Win32 APIs.
If you create a file or directory in a "sync root" (the OneDrive folder hierarchy, in the OneDrive case), the associated sync engine process (like OneDrive.exe) will ensure the file or directory is a placeholder. And AFAIK you can't create a placeholder outside of a sync root hierarchy.
For a file (you can't hydrate or dehydrate a directory), reading and writing correspond to hydration. Note that some sync engines (and/or depending on their configuration) may decide to always fully hydrate a file even if the calling application only asked for some of its bytes.
There are some specific Win32 APIs though for special operations.
You can dehydrate a file using CfDehydratePlaceholder and hydrate one using CfHydratePlaceholder. For all of the Cloud Filter API, when creating directory handles, don't forget to use the FILE_FLAG_BACKUP_SEMANTICS flag.
What's the recommendation on grouping your business logic in Laravel? I find Laravel to be quite messy when it comes to large web applications. Should we stick to Laravel's default file locations, or has anyone tried a modular package like https://github.com/nWidart/laravel-modules ?
Laravel is a fantastic platform, not only for its elegant syntax and rapid development tooling, but also for its community and open source packages. These packages go a long way toward reducing development time.
Modules
Modules are like packages in that they have their own Models, Views, Controllers, Migrations, and other classes.
In a Laravel application, all controllers and models are placed in the app/ folder by default, while migrations, seeders, providers, and other components have their own folders.
These folders become inconvenient as the application grows: it gets difficult to locate the logic for a specific part of the application.
This is where using a modular approach to large projects simplifies production and maintenance. You can build different modules for different parts of your application using Modules. Each module has its own set of configuration options, such as controllers, models, views, migration, seeders, and providers.
To use modules in Laravel, you can simply autoload a Modules folder using PSR-4 and you're done (see the sketch below). However, this entails additional tasks such as registering namespaces for language files, views, and configuration, as well as running the migrations.
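For the manual route, the composer.json change is just a PSR-4 mapping (a minimal sketch, assuming a hypothetical top-level Modules/ directory):

    {
        "autoload": {
            "psr-4": {
                "App\\": "app/",
                "Modules\\": "Modules/"
            }
        }
    }

Run composer dump-autoload afterwards so the new mapping is picked up.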
Laravel Modules
Installing Laravel Modules is similar to installing any other package.
Laravel Modules provides Artisan commands to create new modules, activate/deactivate modules, and create migrations, e.g. php artisan module:make Blog, php artisan module:enable Blog, and php artisan module:migrate Blog.
When you create a new module, it also registers a new custom namespace for Lang, View, and Config:

Lang::get('blog::group.name');
// or the trans() helper: trans('blog::group.name');
Apart from these, it also provides useful facade methods and module methods.
You can also publish your modules, similar to a package (see the package documentation).
How we used Laravel Modules
Initially, when we started working on a POS (point of sale) application, we didn't have the idea of creating different modules.
But as the requirements grew to include a plug-and-play Restaurant extension, going modular made more sense.
So after creating the Restaurant module and many other optional modules, we added a setting for each business to enable or disable the different modules.
The Module::all() method was used to list the different modules the application had.
Each business can enable or disable modules as per their needs.
We used a combination of Module::has('blog') and the business settings to check whether a module is available and enabled.
The Module (or extension or plugin) can be put in any other application to add the functionality.
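In code, that check looks roughly like this (a sketch; Module::all() and Module::has() come from the package's facade, while the enabled_modules field on the business settings is hypothetical):

    use Nwidart\Modules\Facades\Module;

    // List every module so a business can toggle them in its settings screen
    $modules = Module::all();

    // Hypothetical helper combining package state with per-business settings
    function moduleEnabledFor($business, $name)
    {
        return Module::has($name) && in_array($name, $business->enabled_modules);
    }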
I have an application which has a core website, an API, and an admin area. I wanted to know: is it a bad idea to have everything in one app, or should I create a different Symfony2 project, or should I split them into different kernels?
I'm not sure whether adding lots of bundles to the same kernel will affect performance a lot, or only a little bit, which would not matter.
The options are:
keep everything on the same kernel, it won't make much difference
have multiple kernels for different parts of the application (api, admin and core website)
create a different Symfony2 project for the admin area and the api
or your wise words :)
You can define more "environments".
For example, in AppKernel.php:
use Symfony\Component\Config\Loader\LoaderInterface;
use Symfony\Component\HttpKernel\Kernel;

class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = array(
            new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
            new Symfony\Bundle\SecurityBundle\SecurityBundle(),
            new Symfony\Bundle\TwigBundle\TwigBundle(),
            new Symfony\Bundle\MonologBundle\MonologBundle(),
            new Symfony\Bundle\SwiftmailerBundle\SwiftmailerBundle(),
            new Doctrine\Bundle\DoctrineBundle\DoctrineBundle(),
            new Sensio\Bundle\FrameworkExtraBundle\SensioFrameworkExtraBundle(),
            //new AppBundle\AppBundle()
        );

        if (in_array($this->getEnvironment(), array('api'), true)) {
            $bundles[] = new ApiBundle\ApiBundle();
            //-- Other bundles
        }

        //-- Other environments

        return $bundles;
    }

    public function registerContainerConfiguration(LoaderInterface $loader)
    {
        // Loads app/config/config_api.yml for the "api" environment, etc.
        $loader->load($this->getRootDir().'/config/config_'.$this->getEnvironment().'.yml');
    }
}
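To boot that environment you would point a separate front controller at it. Here is a sketch modeled on the stock Symfony2 web/app.php (the api.php file name is made up, and a matching app/config/config_api.yml is assumed to exist):

    // web/api.php
    use Symfony\Component\HttpFoundation\Request;

    require_once __DIR__.'/../app/bootstrap.php.cache';
    require_once __DIR__.'/../app/AppKernel.php';

    // Boot the kernel with the custom "api" environment, debug disabled
    $kernel = new AppKernel('api', false);
    $request = Request::createFromGlobals();
    $response = $kernel->handle($request);
    $response->send();
    $kernel->terminate($request, $response);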
It mostly depends on the quality of the bundles, and on how tightly coupled they are.
I would reject option 3 (creating a different Symfony2 project for the admin area and the api) right away, as you are probably not building two separate applications.
Having multiple kernels for different parts of the application (api, admin and core website):
A common problem is created by listeners and services in the container, especially when a listener should only work in one of the app contexts (api/frontend/backend). Even if you remember to check the context at the very beginning of the listener method (and only do the magic in the wanted context), the listener can still depend on injected services which need to be constructed and injected anyway. A good example is FOS/RestBundle: even if you configure zones, the view_handler is still initialized and injected into the view listener on the frontend (when view_listener is activated for the api) - see https://github.com/FriendsOfSymfony/FOSRestBundle/blob/master/Resources/config/view_response_listener.xml#L11. I'm not 100% sure here, but disabling translations, Twig, etc. for the API (most APIs don't need them) should also speed things up.
Creating a separate kernel for the API context would solve that issue (in our project we use one kernel and had to disable that listener, as blackfire.io profiles were telling us this saves ~15ms on every frontend request).
Creating a new kernel for the API would make sure that no API-only services/listeners interfere with frontend/backend rendering (it works both ways). But it creates the additional work of extracting shared components used by bundles in different kernels - though in a world with Composer that is no longer a huge task.
But this only matters to people who measure every millisecond of response time, and it depends on the quality of your own and third-party bundles. If everything there is perfectly OK, then you don't need to mess with kernels.
It's a personal choice, but I have a similar project and I have a publicBundle, adminBundle and apiBundle all within the same project.
The extra performance hit is negligible, but organisation is key ... that is why we're using an MVC framework (Symfony) in the first place, is it not? :)
NB: Your terminology is a little confusing; I think by "kernel" you mean "bundle".
Having several kernels would not necessarily help.
Split your application into bundles and keep all the advantages of sharing your entities (and so on) across the different parts of your application.
You can define separate routing/controllers/configuration that are loaded depending on the host/URL.
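For instance, a host-restricted route (a sketch assuming Symfony >= 2.2, where routes accept a host; the bundle, controller, and host names are made up):

    // app/config/routing.php
    use Symfony\Component\Routing\Route;
    use Symfony\Component\Routing\RouteCollection;

    $collection = new RouteCollection();

    // Only matched when the request comes in on admin.example.com
    $collection->add('admin_dashboard', new Route(
        '/dashboard',
        array('_controller' => 'AdminBundle:Dashboard:index'),
        array(),             // requirements
        array(),             // options
        'admin.example.com'  // host
    ));

    return $collection;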
Note:
If you are going to separate your app into two big bundles (i.e. Admin & Api) that share the same entities, you will surely have to make a choice.
That choice may mean one of your bundles contains too much (and unrelated) logic and will need to be refactored into several bundles later.
Create a bundle per section of your application that corresponds to a set of related resources, and differentiate the two parts through different contexts in configuration.
Also, name your classes/namespaces sensibly.
I am writing a Mac Cocoa application that will manipulate database files, which can easily be implemented using NSDocument technology, as the databases relate directly to disk files.
However, the majority of the app will manipulate items within this database. When the user opens a database item, a new window should appear to allow the item to be viewed, edited, and saved, so a database item doesn't directly relate to a disk file. Note that undo and redo are appropriate here.
Is it appropriate to use NSDocument technology for both database windows and database item windows, or is there a better approach?
I think using NSDocument would be a great choice. It would allow you to take advantage of most of the provided functionality, such as NSDocumentController, undo support, window management, etc. You will have to override some methods, such as loading and saving. It might be difficult to get the "Open Recent" menu to work correctly for these documents (maybe use a custom URL scheme?). The disadvantages of using NSDocument are... none that I can think of. Without it you would have to write everything from scratch, and it would be even harder to integrate those windows into the rest of the application.
I built my application based on NSDocument - well, actually NSPersistentDocument, as it gives access to Core Data services for storing my object graph. It worked great for me and I found no disadvantages.
When you work with NS(Persistent)Document you will have to come up with some mechanism for passing the instance of your document to the various controllers you build to manage the views/windows and their associated data. I implemented this by creating a generic view controller class capable of holding a reference to my instance of NSPersistentDocument. All my view controllers are subclasses of this generic controller and can thus easily access Core Data services.
My app manages 15 Core Data entities, with volumes varying per entity from hundreds to hundreds of thousands of instances. Not part of your original question, but you might want to consider using Core Data for object persistence. I found it to be a real time-saver while building my app (having worked before with PHP, Java, and various DB layers, which generally do not contribute much towards productivity).
I'm studying Prism and need to create a small demo app. I have some design questions; the differences between the approaches may be small, but I need to apply the practices to a large-scale project later, so I'm trying to think ahead.
Assuming the classical DB-related scenario - I need to get a list of employees, and a double click on a list item fetches extra information for that employee: should the data access project be a module, or is a project accessed via the repository pattern a better solution? What about a large-scale project, where the DB is more than one table and provides, say, information about employees, sales, companies, etc.?
I'm currently considering using the DataAccess module as a standalone module, and I have defined its interface in the Infrastructure project, as well as its return type (EmployeeInformation). This means that both my DataAccess module and my application have to reference the Infrastructure project. Is this a good way to go?
I'm accessing said DataAccess module using the ServiceLocator (MEF) from my application. Should the ServiceLocator be accessed by all parts of the application, or is it meant to be used in the initialization section only?
Thanks.
A module is needed and makes sense when it contains one part of the application that can live on its own. This can be a part of the application that only certain people need or are allowed to use, e.g. a user management module that only administrators may access. But your data access layer is not that kind of isolated functionality, so it does not usually go into a module. It is better placed in a common assembly that the real modules can use. The catch is that all modules then depend on this DAL assembly, so keep the task of updating your DAL in mind when designing your application (backward compatibility).
Usually there is no problem having broadly used types reside in a common assembly. But this is not the infrastructure assembly: infrastructure, as the word implies, provides services that make the modules work together. Your common types should go into something like YourNamespace.Types or YourNamespace.Client.Base or ...
This is the subject of many arguments and still unclear (at least from my point of view). Purists of dependency injection say the ServiceLocator should only be used during initialization; pragmatists use it all over their application.