First of all, let me thank you all! Really, you guys help a lot. When I finish my website and have plenty of time to sit and watch the user base grow, I will come back again and again to answer other people's questions (if I can).
So here is the problem.
I made a website on CodeIgniter. A social network engine - something like phpFox, Classmates.com or Facebook.
Right now it is not multilingual, so the UI strings sit in the view files; the next step will be to move them into language files.
I want the user to be able to change the language. So I assume each user will have a lang_local column in the database, which defaults to en and which the user can change to any other language.
So what is eating my nerves and energy is the following.
I will build several demographic social networks on this engine, and I would like to manage these websites in a centralized manner with one backend. So whenever I want to launch a new network, I just add the domain settings, install the script in a new folder and add it to the sites table in the database.
I see it like this:
Every table in the database (users, comments, messages, categories, etc.) will have a column site_id, and to every add/update/delete query I add WHERE site_id = XXX.
The table sites (site_id, site_name, domain_name) will hold all the domains, so that in the backend I can filter data by website.
Is this a good way? What if I then need to go multi-server - what about load balancing? Who can tell me what the right, PROFESSIONAL way would be? My maximum user count for the database is something like 10,000 to start and 100,000 users within one to two years.
There are loads of ways to do multi-site, but this is a perfectly good way to handle things. I use this approach in my internal work CMS.
The only downside is that it could potentially become massive and have performance issues. You may need to write an export script so you can grab everything belonging to a site and move it to its own install.
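For what it's worth, here is a minimal sketch of the site_id scoping idea using CodeIgniter 3's query builder. It assumes a SITE_ID constant resolved from the current domain at bootstrap; model and table names are just illustrative.

<?php
// Base model that scopes every read and write to the current site.
// Assumes SITE_ID is defined at bootstrap from the request's domain.
class Site_Model extends CI_Model
{
    protected $table;

    public function get_all()
    {
        // Every read is filtered by the current site's ID.
        return $this->db->where('site_id', SITE_ID)
                        ->get($this->table)
                        ->result();
    }

    public function insert(array $data)
    {
        // Every write is stamped with the current site's ID.
        $data['site_id'] = SITE_ID;
        return $this->db->insert($this->table, $data);
    }
}

class User_model extends Site_Model
{
    protected $table = 'users';
}

Centralizing the WHERE site_id = ... filter in one base model means you cannot forget it on an individual query, which is the main risk of this design.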
I wanted to know if there is any way in Laravel (7) to load only the language files of certain zones/sections. By this I mean that, for example, if a user enters the Users section, only the "users" and "general" language files are loaded.
As far as I know, Laravel seems to always load all the files, which I consider unnecessary. Especially as the number of entries in the arrays grows large, I think a solution along these lines is needed.
I know that in other frameworks like CodeIgniter this option is available, but in Laravel I have searched several times for something similar and can't find anything that solves this problem.
Thank you very much.
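Not a full answer, but for reference: as far as I understand Laravel's file loader, with PHP-array translations a group file is only read the first time a key from that group is requested, so a page that touches only "users" and "general" should load only those two files. A small sketch (file and key names hypothetical):

<?php
// resources/lang/en/users.php would contain something like:
//     return ['title' => 'Users', 'created' => 'User created.'];
//
// The first access to a group is what triggers loading its file;
// other group files in resources/lang/en/ stay untouched.
echo __('users.title');      // loads users.php only
echo __('general.welcome');  // loads general.php only when this line runs

(JSON translations behave differently: the whole {locale}.json is loaded at once.)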
I've just started a new job and we have several installs of Magento, all of different versions!
Now it really seems to me that we firstly need to upgrade them all, and then get them all under one installation of Magento using one database.
What is the best way (in general terms) of doing this?
Is it even possible, or is my best bet to rebuild the sites under one installation and import the products into it?
There is some talk by a fellow developer that having them under different installs helps with performance. Is this true?
Once we have them all under one install, things like stock control and orders, as well as putting products on multiple sites, should also be very straightforward - correct?
We are talking quite a few stores, say around 15-ish, and quite a few products, around 4000 or maybe more.
My first suggestion is to consider the reasons why you need all the Magento instances moved under one installation. The reasons are not clear from your question, so the best developer's advice is "Does it work? Then don't touch it" :)
If there are no specific reasons, then you'd better leave it as is. All reorganization processes (upgrading, infrastructure configuration, access setup, etc.) for a software system are hard, costly, time-consuming, error-prone, usually of little value from a business point of view, and a little boring. This is not a Magento-specific thing; it is a general characteristic of any software.
Also note that it is the holiday season, so it is better not to touch e-commerce stores until the middle of January.
If you do see value in reorganizing your Magento stores, then the best way is to go gradually - step by step, store by store:
Take your most complex store. Prepare everything you need for the further steps - i.e. get the tools ready, write automatic scripts, go through the process with a copy of the store on a testing server. Write a set of functional tests to cover it with at least smoke tests (a minimal sketch of such a check appears after this list). You'll have to repeat such light checks many times to be sure the store still appears to be working, so the automatic tests will save a lot of time. All these preparations will decrease your downtime.
Close public access to the store.
Upgrade the store to the Magento version you need. Move it to the new infrastructure.
Verify all the user scenarios, manually and with the automatic tests. Fix the issues, if any.
Open public access to the store. Monitor logs and load reports on the server machines. Fix issues, if any.
Take the next store (let's call it NextStore). Make a copy of it on a sandbox server.
Make a copy of your already-converted store (let's call it ConvertedStore) on the sandbox server.
Export all the data from the copy of NextStore and import it into the copy of ConvertedStore. You can use the Magento Dataflow or Import/Export modules to do that. Not all data can be imported/exported with those modules - just Catalog, Orders, Customers. You will need to develop custom scripts to import/export other entities, if you need them.
Verify the result with the automatic tests and manually. Write automatic scripts that fix the issues you find; you will need those scripts later, during the real conversion.
Close NextStore.
Move it to the new infrastructure, using the already-prepared procedures and scripts. Consider whether to close ConvertedStore during the conversion; it depends on your judgement whether it is OK to keep it open. For safety reasons it is better to close it.
Verify that everything works fine. Monitor logs and reports.
Fix issues, if any.
Proceed with the rest of your stores.
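As a minimal example of the smoke checks mentioned in the first step, here is a sketch in PHP that simply verifies a handful of critical URLs respond with HTTP 200. The URLs are placeholders; a real suite would also check page content and user scenarios.

<?php
// Smoke check: request a few critical store URLs and report failures.
$urls = [
    'https://store.example.com/',
    'https://store.example.com/customer/account/login/',
    'https://store.example.com/checkout/cart/',
];

$failures = 0;
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOBODY         => true, // a HEAD-style request is enough here
        CURLOPT_TIMEOUT        => 10,
    ]);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status !== 200) {
        echo "FAIL $url -> HTTP $status\n";
        $failures++;
    } else {
        echo "OK   $url\n";
    }
}

exit($failures === 0 ? 0 : 1); // non-zero exit makes it usable from cron/CI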
That is my (totally personal) view on the procedures.
There is some talk by a fellow developer that having them under different installs helps with performance. Is this true?
Yes, your colleague is right. Separating Magento (actually, anything in this world) into smaller instances makes it lighter to handle. The performance difference is very small (for your size of 4000 products), but it is inevitable. Consider that after combining the instances (suppose there are ten of them with 400 products each), you'll be handling data for 10x more customers, reports, products, stores, etc. Therefore any search will have to go through ten times more products in order to return data. Of course, it doesn't matter if the search takes 0.00001 seconds, because 0.0001s for the combined instance is fine as well. But some things, like sorting or matching sets, grow non-linearly. Still, as said before, for 4000 products you won't see a big difference.
Once we have them all under one install, things like stock control and orders, as well as putting products on multiple sites, should also be very straightforward - correct?
You're right - after combining the stores, handling orders, stock and customers will be a much simpler and more straightforward process.
Good luck! :)
The most important thing to consider is what problem you're solving by having all these sites on one Magento "instance". What's more important to your business/team: having these sites share products and inventory, or having the flexibility to modify each site independently? Any downtime or impact on availability may affect all the sites at once.
Further questions/areas of investigation:
How much does the product hierarchy (categories and attributes) differ?
Is pricing the same across each site or different?
Are any of these sites multi-regional and how is pricing handled for each region?
It's certainly possible to run multiple sites on one Magento instance, even if there are some rough edges within the platform.
Since there's no way to export all entities in Magento, there's no functionality to merge stores. You'd have to write custom code - it would have to take all the records from the old store, assign them new IDs while preserving referential integrity, and insert them into the new store (this is what the product import does, but there is no equivalent for categories, orders, customers, etc.).
The amount of code you'd write to do that would take almost longer than just starting over, in my opinion. You'd basically be writing the missing functionality for Magento; if it were easy, they would have done it already.
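To illustrate the re-keying problem, here is a rough sketch of what such a merge script has to do, with hypothetical table and column names (real Magento schemas are far more involved):

<?php
// Copy customers from the old store into the new one, recording the
// old-ID => new-ID mapping so dependent records keep valid references.
$old = new PDO('mysql:host=localhost;dbname=old_store', 'user', 'pass');
$new = new PDO('mysql:host=localhost;dbname=new_store', 'user', 'pass');

$idMap = [];
foreach ($old->query('SELECT * FROM customer') as $row) {
    $stmt = $new->prepare('INSERT INTO customer (email, name) VALUES (?, ?)');
    $stmt->execute([$row['email'], $row['name']]);
    $idMap[$row['entity_id']] = (int) $new->lastInsertId();
}

// Dependent records are rewritten through the map, preserving
// referential integrity under the new auto-increment IDs.
foreach ($old->query('SELECT * FROM sales_order') as $row) {
    $stmt = $new->prepare('INSERT INTO sales_order (customer_id, grand_total) VALUES (?, ?)');
    $stmt->execute([$idMap[$row['customer_id']], $row['grand_total']]);
}

Multiply this by every entity type and every foreign key, and you can see why starting over is often cheaper.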
However, splitting two stores apart is very easy, since you don't have to worry about reassigning unique identifiers in the DB.
I am kind of a newbie at programming (I worked a bit with Delphi years back) but have started to build an application for Windows Phone 7.5 Mango, as I have a great idea for an app :D
In the application the user should be able to pick different locations from a list (a very large list, 5k+ items). To make sure that all users always get the latest list, I have set up a SQL query on my website that generates the list as XML, which I load into the application via HttpWebRequest. I am not quite sure what best practice is when dealing with a large list that will be updated frequently, etc.
That is not the main question though, because this seems to work pretty OK. My real question is: how do I add a search function to my application, so the user can search for a location instead of scrolling through the entire list?
My SQL table is built up of ID, Country, State, Region and City (plus a few more columns irrelevant to a search function).
I do not know the best way to approach this. Should I run a query on my website, generate the result as XML and fetch it on the phone via HttpWebRequest - or should it be a search function on the device that searches the entire list? And if so, how do I do that?
Thank you ;-)
First of all, I have to point out that fetching a list of 5k+ items on a smartphone that is not on a wireless network will take a while. So, in my view, it would be a huge waste of traffic to download the whole list when the user is only interested in a few items. It basically means you are downloading a bunch of data but using only 0.01% of it, which is not the way you should build a program.
So, in my opinion, you should create a web service the user can call with an HTTP request, passing a search parameter. Then you simply use that parameter in a SQL search query, which could be a stored procedure or just inline in code - I don't know how your server/database is built and structured, so choose whatever fits.
Here is an example:
SELECT * FROM TABLE_NAME WHERE (ID = @ID) OR (Country = @Country) OR (State = @State) OR (Region = @Region) OR (City = @City)
If I had built this application, I would have used two parameters: one representing the user input, in other words the search text, and one saying which field the search applies to (ID, Country, State, Region or City).
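A sketch of what such an endpoint could look like server-side, assuming PHP with PDO and a hypothetical locations table - the field name is whitelisted and the search term is bound, so user input never reaches the SQL directly:

<?php
// Hypothetical endpoint: /search.php?field=City&term=Berl
$allowed = ['ID', 'Country', 'State', 'Region', 'City'];
$field   = isset($_GET['field']) ? $_GET['field'] : 'City';
$term    = isset($_GET['term'])  ? $_GET['term']  : '';

if (!in_array($field, $allowed, true)) {
    http_response_code(400);
    exit('Unknown search field');
}

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
// $field is safe to interpolate because it was whitelisted above.
$stmt = $pdo->prepare(
    "SELECT ID, Country, State, Region, City FROM locations WHERE $field LIKE ? LIMIT 50"
);
$stmt->execute([$term . '%']);

header('Content-Type: application/xml');
$xml = new SimpleXMLElement('<locations/>');
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    $node = $xml->addChild('location');
    foreach ($row as $key => $value) {
        $node->{$key} = (string) $value; // property assignment escapes XML entities
    }
}
echo $xml->asXML();

The phone then requests this URL via HttpWebRequest exactly as you already do for the full list, only with the two query parameters appended.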
You have to register a handler for TextBox.TextChanged with some logic to throttle queries that would otherwise fire too often (for example, typing the name John may cause 4 requests: J, Jo, Joh, John). It can be done with a System.Threading.Timer with a delayed start, resetting its start time whenever the user types a new character (if you need an example - ask).
Then I recommend you use a WCF service to "talk" to the SQL database. In the WCF service, use any ORM (Entity Framework is the simplest) to query your database.
Hoping someone can chime in on an ideal methodology.
I don't want to run my site through a crawler every month to add new pages to my sitemap; I'd like some robust, systematic method, because maintaining it by hand seems very prone to, ahem, human forgetfulness. Is there some way to programmatically register new controllers, controller methods, views, etc. with some special controller? What I'm picturing is a mechanism that enforces updating the sitemap whenever you create a new controller method or view. I work in a LAMP stack, if that's relevant. This guy here is doing it through the file system, and that's not what I want for a public-facing sitemap.
Perhaps there's another best practice for this type of maintenance beyond the concept I'm proposing. Would love to hear how everyone else does this! :)
If your site is content-based, best practice is to read the database periodically and generate a link for each piece of content. With this method you can also mark some subjects as higher priority than others in the sitemap.
That method is already mentioned in the topic you linked.
Otherwise, you can hold a (static) list of visited pages server-side, or just log them. After recording your site traffic - asynchronously, without blocking the user experience - check the sitemap and add your page links there. You can set priority with this method too, based on how intensively pages are visited plus some statistical logic.
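A minimal sketch of the database-driven approach, assuming a hypothetical posts table and run from cron:

<?php
// Regenerate sitemap.xml from the content tables on a schedule.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$xml = new SimpleXMLElement(
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"/>'
);

foreach ($pdo->query('SELECT slug, updated_at FROM posts') as $row) {
    $url = $xml->addChild('url');
    $url->addChild('loc', 'https://example.com/posts/' . $row['slug']);
    $url->addChild('lastmod', date('Y-m-d', strtotime($row['updated_at'])));
    $url->addChild('priority', '0.8'); // tune per content type or traffic
}

$xml->asXML('/var/www/public/sitemap.xml');

Because the sitemap is rebuilt from the same database your controllers serve from, a new content row shows up automatically - no crawling, no hand maintenance.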
I've read a statement somewhere that generating the UI automatically from the DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges one would have to face to make something like this work.
However, I have not seen (nor could I find) any examples of people attempting it. So I'm wondering - is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify - by "generating UI automatically" I mean that all the forms with all their controls are generated completely automatically (at runtime or compile time), based perhaps on metadata hints about how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can give pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the forms manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored procedures as well as the UI from business classes. It was done in .NET and we used a lot of custom attributes on the classes and properties to make it behave how we wanted. It worked great, and if you manage to follow your design you can create customizations of your software really easily. We also had a way of plugging in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a commercial banking product and the source is not available.
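The answer above describes .NET custom attributes; just to make the idea concrete, here is an analogous sketch using PHP 8 attributes and reflection (class and attribute names are made up for illustration):

<?php
// Field metadata lives on the business class as attributes...
#[Attribute(Attribute::TARGET_PROPERTY)]
class FormField
{
    public function __construct(
        public string $label,
        public string $type = 'text',
        public bool $required = false,
    ) {}
}

class Customer
{
    #[FormField(label: 'Full name', required: true)]
    public string $name = '';

    #[FormField(label: 'Email address', type: 'email', required: true)]
    public string $email = '';
}

// ...and the generator walks the metadata to emit the form.
function renderForm(object $model): string
{
    $html = "<form method=\"post\">\n";
    foreach ((new ReflectionObject($model))->getProperties() as $prop) {
        foreach ($prop->getAttributes(FormField::class) as $attr) {
            $field    = $attr->newInstance();
            $required = $field->required ? ' required' : '';
            $html .= sprintf(
                "<label>%s <input type=\"%s\" name=\"%s\"%s></label>\n",
                htmlspecialchars($field->label),
                $field->type,
                $prop->getName(),
                $required
            );
        }
    }
    return $html . "</form>\n";
}

echo renderForm(new Customer());

The "custom user controls for exceptional cases" mentioned above would slot in as an extra attribute pointing at a hand-written renderer for that one property.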
It's OK for something tiny where all you need is a utilitarian way to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that the machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit - to clarify: a UI generated from code/DB is fine as a starting point; it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea either. It all depends on your project's needs. A lot of software products (mind you, not projects but products) depend on this model, so they don't have to rewrite their code/UI logic for different clients' needs. Clients can customize their UI the way they want using a designer form in the admin system.
I have used XML to store the metadata for this sort of thing. Some of the attributes I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop-down list / multi-check-box list)
multiselect (if yes then check-box list, if no then drop-down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning - I did not care much and simply generated table/tr/td tags one below the other. However, if you want to implement this as well, you can add one more attribute called CssClass where you define UI-specific properties (look and feel, positioning, etc.).
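For illustration, a single field definition carrying the attributes from the list above might look something like this (the exact schema is up to you):

<field name="country"
       friendlyname="Country"
       datatype="string"
       maxlength="50"
       required="yes"
       haspredefinedvalues="yes"
       multiselect="no"
       enabled="yes"
       sortkey="3"
       cssclass="form-row" />

The renderer reads this and, because haspredefinedvalues is yes and multiselect is no, emits a drop-down list.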
UPDATE: Also note that a lot of e-commerce products follow this kind of dynamic UI for entering product information - their clients can be selling everything under the sun, from furniture to sex toys ;-) So instead of rewriting their code for every different industry, they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend you look at the Entity-Attribute-Value model - it has its own pros and cons, but I feel it can work quite well with your requirements.
In my opinion, there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
OK, I think it's pretty obvious why you should think about these. It really depends on your project whether that kind of model makes sense...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as a kind of "engine", then the effort could be worth it, because you can save a lot of time.
With that kind of rendering engine you could automatically add error reporting, validate values, or add other things that always follow the same pattern. But if you have too many of these elements or attributes, performance can degrade rapidly.
Another thing that becomes interesting in bigger projects is that a change that has to occur in every form only has to be made in the engine, not in each form. This can save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash-register software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI but an output file for one of the applications.
We use XML to define the structure and how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between to interpret the DB information for the user. Another thing is that when you generate the UI from the DB, you end up displaying the inner workings of the system - something you normally don't want to do.
But it depends on where the DB came from. If it was created to exactly reflect the users' goals for the system - if the users' mental model of what the application should help them with is stored in the DB - then it might just work. But then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? You come across as another database terrorist - trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it all in DB scripts. What kind of composite system would you end up with under such an approach? When a system serves different businesses, try modularization, selectively discovered components, and restricted sharing of references. The UI should be replaceable and independent of the business layer. When you store this much in the DB there is a hard dependency on the UI, and the system becomes a monolith. How would you implement the MVVM pattern in a scenario where the UI is generated? Designers like Blend contain lots of features which cannot be replaced by even the most futuristic UI generator - unless your development platform is Notepad only.
There is a hybrid approach where the forms and so on are described in a database to ensure consistency server-side, and then compiled on deploy to ensure efficiency client-side.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' database stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL stands for Common Intermediate Language, used by Microsoft in .NET).
This approach is suitable for enterprise-wide software and can handle large customizations. But keep in mind that it sets up a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP/MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code generating the forms became very opaque and difficult to understand, therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions (naming, etc.), which made it possible for the UI to expect particular columns and infer information from the naming of the columns and tables. There is a need for meta-information to help the UI display the data.
Generally it can work; however, if your UI just mirrors the database, there is probably a lot of room to improve. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically, if you want to be cheap and do a quick-and-dirty interface which mirrors your DB, go for it. The main challenge would be finding good-quality code that can do this, or writing it yourself.
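As a rough sketch of that convention-based approach, the generator can introspect a table's columns and emit a crude edit form; the conventions here (TEXT columns become textareas, primary keys are skipped) are just examples:

<?php
// Build a bare-bones edit form straight from the table definition.
$pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');

function generateForm(PDO $pdo, string $table): string
{
    $html = "<form method=\"post\">\n";
    foreach ($pdo->query("SHOW COLUMNS FROM `$table`") as $col) {
        if ($col['Key'] === 'PRI') {
            continue; // primary keys are not user-editable
        }
        // Convention: derive the label from the column name.
        $label = ucwords(str_replace('_', ' ', $col['Field']));
        // Convention: TEXT-ish columns get a textarea, the rest an input.
        $html .= (strpos($col['Type'], 'text') !== false)
            ? "<label>$label <textarea name=\"{$col['Field']}\"></textarea></label>\n"
            : "<label>$label <input name=\"{$col['Field']}\"></label>\n";
    }
    return $html . "<button type=\"submit\">Save</button>\n</form>\n";
}

echo generateForm($pdo, 'articles');

This is exactly the kind of code that tends to become opaque, as noted above: every new display rule lands as another special case inside generateForm().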
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we had to spend too much time rewriting the CRUD forms instead of developing the useful stuff, like processing/reporting/analyzing data, giving alerts for decisions, etc...
For this reason, a long time ago I made a code generator. It became easy to re-generate the forms, with one simple restriction: keep the CSS class names. Simply that!
The UI was always based on very "standard" code, controlled by a custom CSS.
Whenever I needed to change the database structure, and thus update an edit form, I had to re-generate the code and redeploy.
One disadvantage I noticed was that changes (customizations, improvements, etc.) made to the previously generated code are lost when you re-generate it.
But anyway, the advantage of having so much work done by the code generator was great!
I initially did it for 2000s-era Microsoft ASP (Active Server Pages) and Microsoft SQL Server... so when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP, but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be far more helpful (and this approach does not exclude SAVED generated code): no worries about changing the database, etc.
So the next step was to create something that I am very proud to show here, and I think it is one nice resolution of the issue raised in this thread.
I would start with applicable use cases: https://data-seed.tech/usecases.php.
I have worked to add details on how to use it, but if something is still missing, please let me know here!
You can change the database structure, and without a line of code you can start editing data; more than that, you have an API available for CRUD operations.
I am still a fan of the code-generator approach, and I think it is just a flavor of the XML/XSLT technique I used for DATA-SEED. I plan to add code-generator functionality.