I know this may sound crazy but hear me out...
Say you have a game and you want to update it (add new features / redecorate for seasonal themes / add LTMs etc.) Now, instead of editing your code and then waiting days for your app market provider (Google/Microsoft/Apple etc.) to approve the update and roll out the changes, why not:
Put all of your code into a database
Remove all of your existing code from your code files
Add code which can run code from a database (reads it in and eval()s it)
This way, there'd be no need for software updates unless you wanted to change your database-related code, and you could simply update your database to change what the app does when it's live.
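For what it's worth, step 3 is only a few lines in most dynamic languages. Here is a minimal sketch in Python; the sqlite table `code_blobs` and its columns are hypothetical, purely for illustration:

```python
# Minimal sketch of the "run code from a database" idea (step 3 above).
# The table `code_blobs` and its columns are made up for illustration only.
import sqlite3

def run_latest_code(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT source FROM code_blobs ORDER BY version DESC LIMIT 1"
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise RuntimeError("no code found in database")
    source = row[0]
    # exec() compiles and runs the fetched source in a fresh namespace.
    # Note that anything eval()/exec()-based like this runs code that was
    # never reviewed or signed, which is the crux of the objections below.
    namespace: dict = {}
    exec(source, namespace)

# run_latest_code("game_code.db")
```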
My Question: Why hasn't this been done?
For example:
Fortnite (a real game) often has LTMs (Limited Time Modes) which are available for a few weeks and are then removed. Generally, the software updates are ~ 5GB and take a lot of time unless your broadband is fast. If the code was fetched from a database and then executed, there'd be no need for these updates and the changes could be instantaneous.
EDIT: (In response to the close votes)
I'm looking for facts and statistics to back up reasons rather than just pure opinions. Answers like 'I think this would be good/bad...' aren't needed (that's what comments are for); answers like 'This would be good/bad, as this fact shows that...' are much better and desired.
There are a few challenges with your suggested approach.
Putting everything in the database will increase the size of the DB, although that in itself doesn't matter much.
If all the code lives in the database, anyone who decompiles your software can recover the connection details and get at your code.
eval() carries a lot of performance overhead; precompiled code is optimized for its runtime.
What about multiple versions? What do you do when you want to support multiple versions of your software?
The main reason updates exist is that they are easy to maintain, flexible, and let the developer ship the fastest, most optimized build.
Put all of your code into a database
Remove all of your existing code from your code files
Add code which can run code from a database (reads it in and eval()s it)
My Question: Why hasn't this been done?
This is exactly how every game works already.
Each time you launch a game, an executable binary game engine (which you describe in step 3) already reads the rest of your code (often in an embedded language like Lua) from a "database" (the file system) and "evals" (interprets) it to run the game, along with assets like level geometry, textures, sounds and music.
You're talking about introducing a layer of abstraction (a real database) between the engine and its data to hide some of your assets, but the database stores its data on the file system anyway, so you really haven't gained anything; you've just changed the way the data is encoded at rest and queried at runtime, and introduced a ton of overhead in both cases.
On the other hand, you're intentionally cheating your way through the app review process this way, and whatever real technical problems you would have are moot, because your app would not be allowed in the app store. The entire point of the app review process is to prevent people from shipping unverified, unreviewed code to users, and if your program is obviously designed to circumvent this, your app will be rejected.
Fortnite (a real game) often has LTMs (Limited Time Modes) which are available for a few weeks and are then removed. Generally, the software updates are ~ 5GB and take a lot of time unless your broadband is fast.
Fortnite will have a small binary executable that is the game engine. Updates to this binary will account for a fraction of a percent of that 5GB. The rest will be some kind of interpreted/embedded language describing the game's levels (also a tiny fraction) and then assets, which account for the rest (geometry, textures, sound, music).
If the code was fetched from a database and then executed, there'd be no need for these updates and the changes could be instantaneous
This makes no sense. If you move that entire 5GB from the file system into a database, you still have to transfer around 5GB worth of database updates. 5GB of data in a database still lives as 5GB of data on the file system, it's just that you can't access it directly anymore. You have to transfer around the exact same amount of data, regardless of how you store it.
I want to make a program for a very specific, tailored purpose: to aid me in making a large set of quest mods for the video game Elder Scrolls III: Morrowind. I'm attempting to do this through either Excel or Visual Basic, and here I've provided a little summary of how dialogue works in the game's normal creation program and then what I want to create outside of it and improve on.
How does Morrowind dialogue work?
For those of you who may be familiar with the game, you'll remember that talking to NPCs brings up a block of text, and this text is their dialogue. There are different "topics"; if an NPC has dialogue set for one, the player can see the topic and click on it, bringing up a new wall of text. This is generally how dialogue works throughout the game on the player's end.
In creating a Morrowind mod, the way dialogue really works in the "Construction Set" (the program used to create and edit the game) is that a database contains every entry of text, and these entries have conditions set on them which limit which NPCs can say a given entry of dialogue. So for instance, a topic like "latest rumors" will have lots of entries in it, with lots of different NPCs having something to say about it. The topic itself is a condition of sorts, with potentially dozens of entries attached to it, and conditions set on specific entries can also be applied. Conditions can include checking whether the NPC is in a given city, whether the in-game time is night or day, whether the player is at a certain numbered stage/index of a given quest line, and much, much more. This system is what makes all quests possible and the game dynamic.
What I want to create:
I am beginning a rather large mod project that includes many entries of dialogue, many new and old topics, and many quests and quest stages. I could list all the reasons here, but essentially my problem is that the Construction Set has many limitations in terms of organization that make it difficult to write a large mod's dialogue in. I would be better off designing, setting the topics for, and editing all of my dialogue entries outside of the Construction Set and implementing them once I'm confident that the writing and quests are finished.
Essentially if this is too complicated I could just write all the quests and dialogue in Microsoft Word, but optimistically I'd like to do something more dynamic and helpful to me, as a writer, and be able to use real variables to store and set Journal/Quest Indexes, filter dialogue by Quest or by NPC, and easily edit dialogue and quests without getting lost in the normal game’s thousands of lines of other dialogue.
*I can't post more than two links here, but I posted on reddit and there I have a gallery showing how the Construction Set works and what I have made in Visual Studio so far:
https://www.reddit.com/r/learnprogramming/comments/4oap6w/making_an_application_in_visual_basic_to_handle/
So, my intention is to make a program in Visual Studio, using Visual Basic or Python, that lets me write, organize, and set the text for dialogue and filter it based on conditions.
This likely requires creating a database file for the program and being able to create variables at runtime, because I want the user of the program to be able to add new dialogue topics and new journals/quests, and all of these things will have conditions with values associated with them.
Any help, advice, and direction is appreciated. I am relearning Visual Studio (I took two courses in it) and I am unfortunately very new to Excel and databases in general.
You are correct in that a database of some kind would be needed. However, you could approach this several different ways depending upon your comfort level, money, portability requirements, etc...
One way to do it would be to use XML to store your data. It has the advantage of being extremely portable and transformable. Since this is likely a program where only one person would be directly accessing the data at any given time, it might be your best bet.
Another option is to use MS Access if you have Office. This gives you a workable, albeit fairly basic, relational database. This would probably be a better choice if you have two or three people who could be working in it.
A third option would be a full DBMS. MySQL is free and you could install that to your local machine, or to a remote server. Installed to a remote server would give you the option of allowing many people to connect to it and modify data transactionally. However, this would be overkill if it is only a one or two person system.
Circling back around to XML... That will most likely be your best bet. It is simple and integrates perfectly with .Net applications. It can be imported/transformed to any data-store later once you are finished (or multiple times as you progress). Interfacing with XML via .Net allows you to work with it like a database within your code, so if you design your data layer properly up front, you could even migrate to a full database later if the project expands drastically. The biggest downside to XML would be that it isn't relational in the way that a regular DBMS is, and it is not inherently transactional. You do not have atomic updates, so if you have several people modifying things at once you could lose data if it is overwritten.
You could get around that to an extent by writing a more advanced data layer to interface with the XML files, but if only one person is making changes locally, and then the data file is, say, uploaded to a remote datastore later, the only thing to keep in mind would be coordinating when and who can modify that file. Mostly logistics stuff at that point.
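To make the XML suggestion concrete, here is a minimal sketch in Python's standard library of what such a flat-file data layer for dialogue entries could look like. The element and attribute names (entry, topic, npc, quest, index) are invented for illustration, and the same structure maps directly onto LINQ to XML in VB.NET:

```python
# Sketch of a flat-file XML data layer for dialogue entries.
# Element/attribute names are illustrative, not Morrowind's actual format.
import xml.etree.ElementTree as ET

def new_file(path: str) -> None:
    """Create an empty dialogue file with a single root element."""
    ET.ElementTree(ET.Element("dialogue")).write(path, encoding="utf-8", xml_declaration=True)

def add_entry(path: str, topic: str, npc: str, text: str, quest: str = None, index: int = None) -> None:
    """Append one dialogue entry with its filtering conditions as attributes."""
    tree = ET.parse(path)
    entry = ET.SubElement(tree.getroot(), "entry", topic=topic, npc=npc)
    if quest is not None:
        entry.set("quest", quest)
    if index is not None:
        entry.set("index", str(index))
    entry.text = text
    tree.write(path, encoding="utf-8", xml_declaration=True)

def entries_for_quest(path: str, quest: str) -> list:
    """Filter entries by quest, the kind of query the Construction Set makes hard."""
    root = ET.parse(path).getroot()
    return [e for e in root.findall("entry") if e.get("quest") == quest]
```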
I've just started a new job and we have several installs of Magento, all of different versions!
Now it really seems to me that we need to first upgrade them all and then get them all under one installation of Magento, using one database.
What is the best way (in general terms) of doing this?
Is it even possible, or is my best bet to rebuild the sites under one installation and import the products into it?
There is some talk by a fellow developer that having them under different installs helps with performance. Is this true?
Once we have them all under one install things like stock control and orders as well as putting products on multiple site should also be very straightforward - correct?
We are talking quite a few stores, say around 15, and quite a few products, around 4000, maybe more.
My first suggestion is to consider the reasons why you need all Magento instances to be moved under one installation. The reasons are not clear from your question. So the best developer's advice is "Does it work? Then don't touch it" :)
If there are no specific reasons, then you'd better leave it as is. All reorganization processes (upgrading, infrastructure configuration, access setup, etc.) for a software system are hard, costly, time-consuming, error-prone, usually of little value from a business point of view, and a little boring. This is not a Magento-specific thing; it is just a general characteristic of any software.
Also note that it is the holiday season, so it is better not to do anything with e-commerce stores until the middle of January.
If you see value in a reorganization of your Magento stores, then the best way to do it is to go gradually - step by step, store by store:
Take your most complex store. Prepare everything you need for the later steps: get the tools ready, write automation scripts, and go through the process with a copy of the store on a testing server. Write a set of functional tests to cover it with at least smoke tests; you'll have to repeat such light checks many times to be sure the store still appears to be working, and the automated tests will save a lot of time. All this preparation will reduce your downtime.
Close public access to the store.
Upgrade the store to the Magento version you need. Move it to the new infrastructure.
Verify all the user scenarios manually and with the automated tests. Fix any issues.
Open public access to the store. Monitor logs and load reports on the server machines. Fix any issues.
Take next store (let's call it NextStore). Make its copy at a sandbox server.
Make a copy of your already converted store (let's call it ConvertedStore) on a sandbox server.
Export all the data from the copy of NextStore and import it into the copy of ConvertedStore. You can use the Magento Dataflow or Import/Export modules to do that. Not all data can be imported/exported with those modules - just Catalog, Orders, and Customers. You will need to develop custom scripts to import/export other entities if you need them.
Verify the result manually and with the automated tests. Write scripts that fix the issues you find; you will need those scripts later during the real conversion.
Close NextStore.
Move it to the new infrastructure, using the procedures and scripts you already prepared. Consider whether to close ConvertedStore during the conversion; it depends on whether you feel it is OK to keep it open, but for safety reasons it is better to close it.
Verify that everything works fine. Monitor logs and reports.
Fix issues, if any.
Proceed with the rest of your stores.
That is my (totally personal) view on the procedures.
There is some talk by a fellow developer that having them under different installs helps with performance. Is this true?
Yes, your friend is right. Separating Magento (actually, anything in this world) into smaller instances makes it lighter to handle. The performance difference is very small (for your size of 4000 products), but it is inevitable. Consider that after combining the instances (suppose there are ten of them with 400 products each) you'll be handling data for 10x more customers, reports, products, stores, etc. Therefore any search will have to go through ten times more products in order to return data. Of course, it hardly matters if a search takes 0.00001s on a single instance, because 0.0001s on the combined instance is fine as well. But some things, like sorting or matching sets, grow non-linearly. Still, as said before, for 4000 products you won't see a big difference.
Once we have them all under one install things like stock control and orders as well as putting products on multiple sites should also be very straightforward - correct?
You're right - after combining the stores, handling orders, stock, and customers will be a much simpler and more straightforward process.
Good luck! :)
The most important thing to consider is what problem you're solving by having all these sites on one Magento "instance". What's more important to your business/team: having these sites share product and inventory or having the flexibility of independently modifying these sites? Any downtime or impacts to availability may affect all sites.
Further questions/areas of investigation:
How much does the product hierarchy (categories and attributes) differ?
Is pricing the same across each site or different?
Are any of these sites multi-regional and how is pricing handled for each region?
It's certainly possible to run multiple sites on one Magento instance, even if there are some rough edges within the platform.
Since there's no way to export all entities in Magento, there's no functionality to merge stores. You'd have to write custom code - it would have to take all the records from the old store, assign them new IDs while preserving referential integrity & insert them into the new store (this is what the "product import" does, but they don't have it for categories, orders, customers, etc.).
The amount of code you'd have to write to do that would take almost as long as just starting over, in my opinion. You'd basically be writing the missing functionality for Magento. If it were easy, they would have done it already.
However splitting two stores apart is very easy, since you don't have to worry about reassigning unique identifiers in the DB.
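For illustration only, the ID-remapping idea described above looks roughly like this on a generic parent/child pair of tables; this is not Magento's real schema, just a sketch of why referential integrity is the hard part of a merge:

```python
# Illustrative sketch of merging records from one store database into another
# while assigning new IDs and keeping foreign keys consistent.
import sqlite3

def merge_customers_and_orders(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    id_map = {}  # old customer id in src -> new customer id in dst
    for old_id, email in src.execute("SELECT id, email FROM customers"):
        cur = dst.execute("INSERT INTO customers (email) VALUES (?)", (email,))
        id_map[old_id] = cur.lastrowid
    for _old_id, customer_id, total in src.execute("SELECT id, customer_id, total FROM orders"):
        # Re-point the foreign key at the new customer id to preserve referential integrity.
        dst.execute(
            "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
            (id_map[customer_id], total),
        )
    dst.commit()
```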
I'm building a CMS-type webapp that allows users to enter arbitrary-sized blocks of HTML. These blocks are entered by the user in their admin area and inserted into their template of choice when a page is delivered.
I'm guessing a user is not going to add more than 50-100 blocks and I'm not going to be getting more than 1000 users any time soon.
I was planning on using MySQL's LONGTEXT type to store these, but I'm wondering if storing files in a directory will be more performant, as the Linux OS will cache them. Given that I'm building for at most (1000 * 100) text blocks, is there any reasonable performance worry with using MySQL?
Obviously I will be caching the HTML before delivery so I won't be reading these blocks on every delivery - reads will only occur when someone updates/creates new content.
I could use memcached/other cache/noSQL implementation or some other storage mechanism but I'm focusing on keeping it simple and delivering ASAP so don't want to introduce other stuff that I don't have experience with unless there's a significant performance worry.
Are the blocks of HTML content the only thing you are saving? If so, a file may be easiest.
However, it seems likely that you may want to save other bits of information along with the HTML and be able to query based on those bits of data. For example: date created, date last modified, name of the block, the user(s) who have edited the block.
If this is the case, then a database may be the best way to go. Since you said you do not expect to have many users (at least not at first), I would concentrate on finding the solution that is the fastest / most flexible to program, and focus on performance and caching after your website begins to grow.
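As a rough sketch of that database option, assuming a single table holding the HTML body plus a few metadata columns (SQLite is used here as a stand-in for MySQL, and all table and column names are made up):

```python
# Sketch of "HTML blocks + metadata in a database"; names are assumptions.
import sqlite3

def init(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS html_blocks (
            id         INTEGER PRIMARY KEY,
            user_id    INTEGER NOT NULL,
            name       TEXT NOT NULL,
            body       TEXT NOT NULL,   -- the HTML block (LONGTEXT territory in MySQL)
            updated_at TEXT DEFAULT CURRENT_TIMESTAMP
        )""")

def save_block(conn: sqlite3.Connection, user_id: int, name: str, body: str) -> None:
    conn.execute(
        "INSERT INTO html_blocks (user_id, name, body) VALUES (?, ?, ?)",
        (user_id, name, body),
    )
    conn.commit()

def blocks_for_user(conn: sqlite3.Connection, user_id: int) -> list:
    # Being able to query by metadata like this is the main win over flat files.
    return conn.execute(
        "SELECT name, body, updated_at FROM html_blocks "
        "WHERE user_id = ? ORDER BY updated_at DESC",
        (user_id,),
    ).fetchall()
```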
I advise you to use a flat file rather than MySQL to store this kind of data.
HTML is more of a "file" than a "value", so it doesn't have to live in a DB.
Moreover, you will likely get better performance.
You can also read this post.
I'm currently building a CMS that needs to save a lot of pictures per article, and I have a lot of questions :-)
I need to show the pictures in a few sizes, with or without a watermark. In addition I need to keep the original picture, for archive and admin purposes. What I'm thinking of doing right now is saving the pictures in the database in two versions: 1. the original picture, 2. a web-optimized version.
It is a really convenient way to keep all the images in one table, but is it really a good idea? Let's say the database will contain hundreds of thousands of pictures and the original picture size is around 3MB; the DB could easily grow to hundreds of gigabytes. Is this really a good strategy?
On the other hand, I'd save a smaller version of each picture. This version needs to be shown in a few sizes, with and without a watermark. Currently I'm thinking of doing this on each request: the request will carry a width parameter, and based on it I can decide the size and the watermark (I'll cache this work, of course). Again, is this a good strategy? Will it really work, or is it very expensive extra work?
Is it really better to store this in the DB? I mean, each request for an article will need around 50 further requests for its images, and each of those requires opening/closing a connection to the database.
Technologies I'm going to use: .NET, SQL Server 2008, NHibernate.
The best approach would be storing those images in the filesystem and their IDs in the database, for performance and maintenance reasons. Backing up and restoring is much easier on the filesystem, and pushing the DBMS to do that work is not the best idea: you would need to transfer the images from the DB to the application and then push them to the client, and I just don't believe that's its job. Put a lighttpd daemon or something similar in front for image hosting and let it do its job.
But if you like the idea, since you are going with SQL Server 2008 you can use FILESTREAM to store your images in your tables. It creates files in a storage location that you choose and keeps the binary data in the filesystem, while still providing transactional features and data integrity, which is a big bonus. Take a look at that option; as I remember, it performs well and the actual database stays much more compact.
As for dynamic resizing, I say avoid it. Storage is cheaper than CPU time: just create a variety of thumbnails and watermarked versions at upload time, store them once somewhere, and use them when required. Do not perform the same operations again and again. Alternatively, you can generate them on the first request for a given size; that way it is easier to add new versions or to purge the cache periodically and remove unused files. You will also be able to back up just the original versions.
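A minimal sketch of the "generate the derived versions once, at upload time" idea, assuming the Pillow imaging library; the sizes, output paths, and watermark file are placeholders:

```python
# Sketch: pre-generate resized and watermarked variants of an uploaded image.
from pathlib import Path
from PIL import Image

SIZES = {"thumb": (150, 150), "medium": (800, 800)}  # example sizes only

def create_versions(original_path: str, out_dir: str, watermark_path: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stem = Path(original_path).stem
    watermark = Image.open(watermark_path).convert("RGBA")
    for name, size in SIZES.items():
        img = Image.open(original_path).convert("RGBA")
        img.thumbnail(size)  # resize in place, keeping the aspect ratio
        img.convert("RGB").save(out / f"{stem}_{name}.jpg", quality=85)
        # Watermarked variant: paste a scaled watermark into the bottom-right corner.
        wm = watermark.copy()
        wm.thumbnail((img.width // 3, img.height // 3))
        marked = img.copy()
        marked.paste(wm, (img.width - wm.width, img.height - wm.height), wm)
        marked.convert("RGB").save(out / f"{stem}_{name}_wm.jpg", quality=85)
```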
Putting the images in the database has a couple of advantages: ACID transactions and backup consistency come to mind. If you absolutely need those, then put the images in the database. As you pointed out, this comes at a price: you'll need a huge database infrastructure (machines, licenses, an operations team), and each image retrieval is a sizable DB I/O effort.
A lot of things will be much easier with only storing metadata in the DB and putting the image blobs on a filesystem.
Two approaches to come to a decision:
What is the killer feature you absolutely (absolutely like in "if I don't have that, the whole thing will not work at all") need from the image-in-database approach? If there is one, go for it
Do a back-of-the-napkin business case, calculating the total cost of the image-in-database approach (project efforts, infrastructure, machine, license, operation) and compare that with an image-in-filesystem approach. That should give some hints on how to proceed.
I've read a statement somewhere that generating UI automatically from the DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges that one would have to face in order to make something like this work.
However, I have not seen (nor could I find) any examples of people attempting it. Thus I'm wondering - is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify - by "generating UI automatically" I mean that all forms with all their controls are generated completely automatically (at runtime or compile time), based perhaps on some metadata hints about how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored proc as well as the UI from business classes. It was done in .NET and we used a lot of Custom Attributes on the classes and properties to make it behave how we wanted it to. It worked great though and if you manage to follow your design you can create customizations of your software really easily. We also did have a way of putting in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately, it is a commercial banking product, so no source is available.
It's OK for something tiny where all you need is a utilitarian way to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that the machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit - to clarify: UI generated from code/DB is fine as a starting point; it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea either. It all depends on your project's needs. A lot of software products (mind you, not projects but products) depend on this model so they don't have to rewrite their code / UI logic for different clients' needs; clients can customize their UI the way they want using a designer form in the admin system.
I have used XML to preserve metadata for this sort of thing. Some of the attributes I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop-down list / multi-checkbox list)
multiselect (if yes then checkbox list, if no then drop-down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning: I did not care much and simply generated table tr/td tags one below the other. However, if you want to implement this as well, you can have one more attribute called CssClass where you define UI-specific properties (look and feel, positioning, etc.); a small sketch of this kind of metadata-driven rendering follows below.
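Here is a minimal sketch, in Python rather than .NET, of how such per-field metadata could be turned into form markup. The metadata keys mirror the attributes listed above, and the HTML is deliberately plain:

```python
# Sketch: render one form field from the kind of metadata listed above.
from html import escape

def render_field(meta: dict) -> str:
    label = f'<label>{escape(meta["friendlyname"])}</label>'
    if meta.get("haspredefinedvalues"):
        if meta.get("multiselect"):
            # multiselect -> one checkbox per predefined value
            control = "".join(
                f'<label><input type="checkbox" name="{meta["name"]}" '
                f'value="{escape(v)}">{escape(v)}</label>'
                for v in meta["values"]
            )
        else:
            # single select -> drop-down list
            options = "".join(f"<option>{escape(v)}</option>" for v in meta["values"])
            control = f'<select name="{meta["name"]}">{options}</select>'
    else:
        required = " required" if meta.get("required") else ""
        control = (
            f'<input type="text" name="{meta["name"]}" '
            f'maxlength="{meta.get("maxlength", 255)}"{required}>'
        )
    # One table row per field, as described above; CssClass drives look and feel.
    return f'<tr class="{meta.get("cssclass", "")}"><td>{label}</td><td>{control}</td></tr>'

# Example usage with a hypothetical field definition; a real app would load
# these dicts from the XML metadata and sort them by sortkey first.
print(render_field({
    "name": "colour", "friendlyname": "Colour", "sortkey": 1,
    "haspredefinedvalues": True, "multiselect": False, "values": ["Red", "Blue"],
}))
```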
UPDATE: also note that a lot of e-commerce products follow this kind of dynamic UI for entering product information - their clients can be selling everything under the sun, from furniture to sex toys ;-) so instead of rewriting their code for every different industry, they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend you look at the Entity-Attribute-Value model - it has its own pros and cons, but I feel it fits your requirements quite well.
In my opinion, there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
Okay, I think it's pretty obvious why you should think about these. Whether that kind of model makes sense really depends on your project...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as some kind of "engine", the effort could be worth it because you can save a lot of time.
With that kind of "rendering engine" you could automatically add error reporting, validate values, or add other things that always follow the same pattern. But if you have too many of these things, elements, or attributes, then performance can degrade rapidly.
Another thing that becomes interesting in bigger projects is that changes that have to occur in every form only have to be made in the engine, not in each form. This can save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash-register software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI but an output file for one of the applications.
We use XML to define the structure, how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between to interpret the DB information for the user. Another thing is that when you generate the UI from the DB, you end up exposing the inner workings of the system, something that you normally don't want to do.
But it depends on where the DB came from. If it was created to exactly reflect the users' goals for the system, that is, if the users' mental model of what the application should help them with is stored in the DB, then it might just work; but then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? I see you as another database terrorist, trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it all in DB scripts; what kind of composite system would you end up with? When a system serves different businesses, try modularization, selectively discovered components, and restricted sharing of references. The UI should be replaceable and independent from the business layer. When you store so much in the DB, there is a hard dependency on the UI and the system becomes a monolith. How do you implement the MVVM pattern when the UI is generated? Designers like Blend contain lots of features that cannot be replaced by even the most futuristic UI generator, unless your development platform is Notepad only.
There is a hybrid approach where the forms and everything else are described in a database to ensure consistency server-side, and then compiled on deployment to ensure efficiency client-side.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL stands for Common Intermediate Language, used by Microsoft in .NET).
This approach is suitable for enterprise-wide software and can handle large customizations. But keep in mind that it establishes a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP / MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code that generated the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions (naming, etc.), which made it possible for the UI to expect particular columns and infer information from the naming of the columns and tables. There is a need for meta-information to help the UI display the data.
Generally it can work; however, if your UI just mirrors the database, there is probably a lot of room for improvement. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically, if you want to be cheap and build a quick-and-dirty interface that mirrors your DB, go for it. The main challenge is finding good-quality code that can do this, or writing it yourself.
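As a sketch of what "an interface that just mirrors the DB" amounts to, here is a quick-and-dirty example in Python that introspects a table's columns (SQLite here) and emits one input per column; in the spirit of the answers above, it is a starting point, not an end point:

```python
# Sketch: derive a plain HTML form from a table's schema via introspection.
import sqlite3

def form_for_table(conn: sqlite3.Connection, table: str) -> str:
    # PRAGMA table_info returns (cid, name, type, notnull, default, pk) per column.
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    inputs = []
    for _cid, name, col_type, notnull, _default, pk in rows:
        if pk:
            continue  # skip the primary key; the database generates it
        required = " required" if notnull else ""
        input_type = "number" if col_type.upper() in ("INTEGER", "REAL") else "text"
        inputs.append(
            f'<label>{name}<input type="{input_type}" name="{name}"{required}></label>'
        )
    return f'<form method="post">{"".join(inputs)}<button>Save</button></form>'
```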
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we had to spend too much time rewriting CRUD forms instead of developing the useful stuff: processing / reporting / analyzing data, raising alerts for decisions, etc.
For this reason, I wrote a code generator a long time ago. It became easy to regenerate the forms, with a single restriction: keep the CSS class names. As simple as that!
The UI was always based on very "standard" code, controlled by custom CSS.
Whenever I needed to change the database structure, and hence update an edit form, I had to regenerate the code and redeploy.
One disadvantage I noticed was that changes (customizations, improvements, etc.) made to the previously generated code are lost when you regenerate it.
But anyway, the advantage of having so much work done by the code generator was great!
I initially did it for the 2000s Microsoft ASP (Active Server Pages) and Microsoft SQL Server... so, when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be even more helpful (and this approach does not exclude saving the generated code): no worries about changing the database, etc.
So, the next step was to create something that I am very proud to show here, and I think it is a nice resolution to the issue raised in this thread.
I would start with applicable use cases: https://data-seed.tech/usecases.php.
I have worked to add details on how to use it, but if something is still missing, please let me know here!
You can change the database structure and, without writing a line of code, start editing data; on top of that, an API for CRUD operations is available.
I am still a fan of the "code generator" approach, and I think the XML/XSLT technique I used for DATA-SEED is just another flavor of it. I plan to add code-generator functionality.