What is the best (or most appropriate in a general situation) structure for building a website administration section with Code Igniter?
I have heard of several structure ideas, but they all stem from the two basic structures.
Have two applications, administration and application
Have one application which encompasses both sections.
What are the pros and cons of each method?
Are there any other structures that you can suggest?
When examining these ideas, one must take into account the possibility of shared resources.
E.g. shared login accounts, a shared database, shared designs, etc.
Single Application:
Pros:
Shared Authentication
Shared Configuration
Shared Model Code
View Level Integration ([Edit This] links in main application)
Cons:
Low Site Separation (you can run into issues putting your site into maintenance mode, or bugs causing the admin to blow up the main site, or vice versa)
Shared Authentication (both a pro and a con: if your auth is compromised, the attacker has access to the admin).
Multiple Applications:
Pros:
Site Separation, no taint from the admin in the live site (and vice versa)
Clearly defined application roles, each of which can be optimized for its tasks.
For example, you can optimize a database for reads and point your front end at it, then set up a different database that the back end works on and sync between them. (This only really makes a difference on high-load websites, but it is still a consideration.)
You can host the admin site at a separate domain for extra security, or on a subdomain with a separate SSL certificate, etc.
Cons:
Code Duplication
Configuration Duplication
No tie-ins to main site (or limited).
Overview
It really depends on what you're doing. I'd say for any high-traffic site, or any site with sensitive customer data (like credit card details), the site separation of separate applications is invaluable. It allows for a great deal of security considerations above and beyond your average site, and it lets you further optimize each application for its purpose for raw speed.
On medium/low-traffic sites, I'd say the simplest approach is always the best (KISS). It'll help you keep your site up to date and moving forward faster. This is essential for sites that are building up and starting out; less development time is always a great thing.
That's just a personal opinion, though; it's really up to you, and it comes down to what's most important to you and your business.
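For what it's worth, a common middle ground in CodeIgniter is two application folders sharing one system folder. A minimal sketch of that layout (folder and file names are illustrative, not prescriptive):

```
system/               # one shared CodeIgniter core
app-frontend/         # application folder for the public site
  config/
  controllers/
  models/
app-admin/            # application folder for the admin section
  config/
  controllers/
index.php             # sets $application_folder = 'app-frontend';
admin/index.php       # sets $application_folder = '../app-admin';
```

Each front controller points at its own application folder via the $application_folder variable in its index.php, so the two sections can share the framework core (and duplicate or symlink shared models) while keeping separate configs, controllers, and deployment targets.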
Related
I have a website that could be visited from countries on different continents. I noticed that most hosting companies have data centers in the US only, which might affect performance when people from India, for example, visit the site. AWS and Google own data centers all around the world, so would one of them be a better choice to address this concern? Do they use some technology that replicates the website across all their data centers?
More about the website :
It is a dynamic website which depends heavily on the database. It mostly involves text, with a little Ajax code.
It is a Q & A website.
You would use some sort of load balancer.
Such as
AWS Elastic Load Balancing
Cloud Load Balancing
Cloud providers such as AWS have something called edge locations. When you deploy a website, AWS can replicate its content to edge locations around the world (this is what its CDN, CloudFront, does). When a user visits your website and the request reaches AWS, AWS routes the request to the edge location that is geographically closest to the user, so the response reaches the user faster.
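The routing idea can be sketched as a lookup that picks the lowest-latency edge for each user. The regions, edge names, and latency numbers below are all made up for illustration; real CDNs do this with DNS and anycast routing, not application code:

```python
# Hypothetical latency table (ms) from user regions to edge locations.
LATENCY_MS = {
    "india":  {"us-east": 220, "eu-west": 130, "ap-south": 20},
    "france": {"us-east": 90,  "eu-west": 15,  "ap-south": 140},
}

def nearest_edge(user_region: str) -> str:
    """Pick the edge location with the lowest latency for a user's region."""
    edges = LATENCY_MS[user_region]
    return min(edges, key=edges.get)
```

A user in India would be served from the hypothetical "ap-south" edge instead of crossing the ocean to "us-east" on every request.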
I noticed that most hosting companies have data centers in the US only, which might affect the performance when people from India, for example, are visiting the site.
If your web site has purely or mostly static content, it usually won't matter (read about web caching), unless its traffic is large. As a typical example, I manage http://refpersys.org/ (physically hosted by OVH in France) and it is perfectly reachable from India: the latency is less than a few hundred milliseconds.
If your web site is extremely dynamic, it could matter (e.g. if every keystroke in a web browser in India required an AJAX call to a US-located host).
Read much more about HTTP and perhaps TCP/IP. Don't confuse the World Wide Web with the Internet (which existed before the Web).
If performance really matters to you, you would set up some distributed and load-balanced web service, with hosting on each continent. You might for instance use some distributed database technology for the data (read about database replication), e.g. with PostgreSQL as your back-end database.
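As a rough illustration of what streaming replication involves with PostgreSQL (version 12+ syntax; hostnames and the user name are placeholders):

```
# On the primary, in postgresql.conf:
wal_level = replica          # ship enough WAL for standbys to replay
max_wal_senders = 5          # allow a few replication connections

# On each continental replica: create an empty standby.signal file,
# then point the standby at the primary:
primary_conninfo = 'host=primary.example.com user=replicator'
```

Reads can then be served from a nearby replica, while writes still go to the single primary.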
Of course, you can find web hosting in India.
And all that has some cost, mostly software development and deployment (network sysadmin skills are rare).
It is a Q & A website.
Then it is not that critical (assuming a small to medium traffic), and you can afford (at first) a single hosting located in a single place. I assume no harm is done if a given answer becomes visible worldwide only after several minutes.
Once your website is popular enough, you will have the resources (human labor and computing/hosting) to redesign it. AFAIK, Stack Overflow started with a single web host and later evolved to its current state. Design your website with an agile software development mindset: the data (that is, past questions and answers typed by human users) is the most important asset, so make sure to design your database schema correctly (taking into account database normalization), and ensure that your data is backed up correctly and often enough. Web technologies are evolving too (in 2021 the Web won't be exactly the same as today in December 2019; see e.g. this question).
If you wanted a world-wide fault-proof Q & A website, you could get a PhD in designing it well enough. Global distributed database consistency is still a research topic (see e.g. this research report).
I had an architecture question, and I had to rewrite the question title multiple times, since SO asked me to. So please feel free to correct it, if you feel so. I am not an expert in cache related things so I would very much appreciate some insights about my architecture related question.
So the situation is like this. We have a web-based design app (front end JavaScript, back end PHP) which presents lots of clipart images to our customers, who use them to create online artwork. Earlier, our app was hosted on an AWS machine, and we stored the clipart images locally on the same server to avoid any network transfer when loading the clipart, and thus make the design app load faster. The customer-created designs were also saved into a backend MySQL server connected directly to the web-based design app (in JSON and a relational model).
A while ago, a new team joined to make a mobile version of this app, and they insisted that the cliparts should be loaded from a "central location" for both our web app and the mobile app they were creating. They also said that the designs should be stored in a "central database" accessible by the web and mobile apps (and there was some major re-architecting of the JSON structure as well).
So finally, the architecture changed such that the cliparts now reside in a centralized location (an S3 bucket), and there is an "Asset Delivery and Storage (ADS) System" to which our design app makes requests for clipart images. (Please note that the clipart repository is very large, and only a subset of clipart images is served based on various parameters, such as the style of the design, the account type of the customer, etc.) This task is now done by the ADS system (written in Python).
And since our web design app no longer has any local storage of cliparts nor any clipart-filtering logic (which got delegated to ADS, so no more server-side PHP), it has become a purely web-based (front-end JavaScript) app without any server requirements, and it was subsequently moved to S3.
Now the real matter is that our web app seems much slower at initial loading than when we had our own stash of cliparts stored on the server. I read that if an app requests images, those images are cached in the browser, and if the customer, for example, loads the same order before that cache has expired, then no repeat request needs to be sent to the server (in this case ADS).
If that is true, is there any case I can make that moving the clipart images from the design app server to the ADS system, and having to send a request and load them every time a design is loaded, has contributed in part to the recent slowness of the design app?
Also, most times I hear the answer that "the mobile app also does the same and is faster". I am not a mobile developer. Could there be some mobile cache tricks that make the mobile app much more "cache-efficient" than the purely web-based design app, such that even though the architecture is the same for both (sending requests to ADS for cliparts), the mobile app does it in a better and more efficient manner?
End note: I realise I am not asking a specific programming question. But from some of the notes I have read here, SO is a community for programmers, and I do not know of any other community that answers programming-related questions so well. The architecture question I have is a genuine programming-related question I face at work, and sadly I am not skilled enough to understand whether all the recent architectural changes have any drawbacks that are causing our web app's performance to degrade noticeably.
Thanks for reading, and I would really appreciate any pointers or even links to reading for better understanding this.
In Chrome, open up the developer tools and click on the Network tab. 90% of the time you can identify the slow resource from there.
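Beyond eyeballing the Network tab, you can check whether the ADS responses are cacheable at all by looking at their Cache-Control headers. A minimal sketch of the freshness rule a browser applies (simplified; real HTTP caching also involves Expires, ETag, and revalidation):

```python
import re
import time
from typing import Optional

def max_age_seconds(cache_control: str) -> Optional[int]:
    """Extract max-age (in seconds) from a Cache-Control header, if present."""
    m = re.search(r"max-age=(\d+)", cache_control)
    return int(m.group(1)) if m else None

def is_fresh(fetched_at: float, cache_control: str,
             now: Optional[float] = None) -> bool:
    """True if a response fetched at `fetched_at` is still fresh, so the
    browser can reuse it without re-contacting the server (here, ADS)."""
    age = max_age_seconds(cache_control)
    if age is None:
        return False  # no max-age: treat as uncacheable in this sketch
    if now is None:
        now = time.time()
    return (now - fetched_at) < age
```

If the ADS responses arrive without a max-age (or with no-store), the browser re-fetches every clipart on every load, which would explain part of the slowdown compared to locally served images.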
Good day,
I will begin developing a Web API solution for a multi-company organization. I'm hoping to make available all useful data to any company across the organization.
Given that I expect there to be a lot of growth with this solution, I want to ensure that it's organized properly from the start.
I want to organize various services by company, and then again by application or function.
So, with regards to the URL, should I target a structure like:
/company1/application1/serviceOperation1
or is there some way to leverage namespaces:
/company2.billing/serviceOperation2
Is it possible to create a separate Web API project in my solution for each company? Is there any value in doing so?
Hope we're not getting too subjective, but the examples I have seen have a smaller scope, and I really see my solution eventually exposing a lot of Web API services.
Thanks,
Chris
Before writing a line of code, I would be looking at how the information is to be secured, deployed, and versioned, and at the culture of the company.
Will the same security mechanisms (protocols, certificates, modes, etc.) be shared across all companies and divisions?
If they are shared then there is a case for keeping them in the same solution
Will the services cause differing amounts of load and be deployed onto multiple servers with different patching schedules?
If the services are going onto different servers then they should probably be split to match
Will the deployment and subsequent versioning schedule be independent for each service or are all services always deployed together?
If they are versioned independently then you would probably split the solution accordingly
How often does the company restructure and keep their applications?
If the company is constantly restructuring, you would probably want to split the services by application. If the company is somewhat stable and focused on changing the application capabilities, then you would probably want to split the services by division function (accounts, legal, human resources, etc.)
As for the URL structure, it should naturally flow from the answers above. Hope this helps.
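As a neutral illustration of the /company/application/serviceOperation shape from the question (sketched in Python rather than ASP.NET Web API, which would express the same thing as a route template like "{company}/{application}/{operation}"):

```python
def parse_service_path(path: str) -> dict:
    """Split '/company/application/operation' into its three segments.
    Purely illustrative of the URL convention, not a real router."""
    parts = [p for p in path.strip("/").split("/") if p]
    if len(parts) != 3:
        raise ValueError(
            f"expected /company/application/operation, got {path!r}")
    company, application, operation = parts
    return {"company": company,
            "application": application,
            "operation": operation}
```

The nice property of this shape is that each segment maps cleanly to an ownership boundary, so you can later split a company or an application into its own project without changing public URLs.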
Is cloud hosting the way to go? Or is there something better that delivers fast page loads?
The reason I ask is because I run a BuddyPress site on a Bluehost dedicated server, but it seems to run slowly at most times of the day. This scares me because at the moment the site's not live, and I'm afraid when it gets traffic it'll become worse and my visitors will lose interest. I use Amazon Cloud to handle all my media, JS, and CSS files, along with a caching plugin, but it still loads slowly at times.
I feel like the problem is Bluehost, because I visit other sites running BuddyPress and their sites seem to load instantly. I'm not web-hosting savvy, so can someone please point me in the right direction here?
The hosting choice depends on many factors such as technical requirements, growth rates, burst rates, budgets and more.
Bigger Hardware
To scale up a hosting operation, your first choice is often just using a more powerful server, VPS, or cloud instance. The point is not so much cloud vs. dedicated, but that you simply bring more compute power to the problem. Cloud can make scaling up easier, often with a few clicks.
Division of Labor
The next step is often division of labor. You offload the database, static content, caching, or other items to specific servers or services. For example, you could offload static content to a CDN, or use a dedicated database server.
Once again, cloud vs non-cloud is not the issue. The point is to bring more resources to your hosting problems.
Pick the Right Application Stack
I cannot stress enough picking the right underlying technology for your needs. For example, I recently helped a client switch from an Apache/PHP stack to a Varnish/Nginx/PHP-FPM stack for a very busy WordPress operation (>100 million page views/mo). This change boosted capacity by nearly 5X with modest hardware changes.
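For a rough idea of what such a stack looks like, here is a minimal Nginx + PHP-FPM server block (paths and the socket location are illustrative; Varnish would typically sit in front of this as the caching layer on port 80):

```nginx
server {
    listen 8080;
    root /var/www/site;
    index index.php;

    # Serve static assets directly with long-lived cache headers
    location ~* \.(css|js|png|jpg|gif)$ {
        expires 30d;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Hand PHP requests to the PHP-FPM worker pool
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

The win comes from Nginx serving static files and Varnish serving cached pages without ever touching PHP, so PHP-FPM workers only handle the requests that genuinely need them.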
Same App. Different Story
Also, just because you are using a specific application, it does not mean the same hosting setup will work for you. I don't know about the specific app you are using, but with Drupal, WordPress, Joomla, vBulletin, and others, the plugins, site design, themes, and other items are critical to overall performance.
To complicate matters, user behavior is something to consider as well. Consider a discussion forum that has a 95:1 read:post ratio, i.e. roughly 1% of requests are writes. What if you do something in the design to encourage more posts and that ratio moves to 75:1? Writes rise to about 1.3% of requests, which means more database writes, less caching, etc.
In short, details matter, so get a good understanding of your application before you start to scale out hosting.
A hosting service is part of the solution. Another part is proper server configuration.
For instance, this guy has optimized his setup to serve 10 million requests a day off a micro instance on AWS.
I think you should look at your server config first, then shop for other hosts. If you can't control server configuration, try AWS, Rackspace or other cloud services.
Just an FYI: you can sign up for AWS and use a micro instance free for one year. In the link I posted, he just optimized on the same server. You might have to upgrade to a small instance, because Amazon has stated that micro instances are only meant to handle spikes, not sustained traffic.
Good luck.
I am writing a Java EE application which is supposed to consume SAP BAPIs/RFCs using JCo and expose them as web services to other downstream systems. The application needs to scale to huge volumes, on the order of tens of thousands of simultaneous users.
I would like to have suggestions on how to design this application so that it can meet the required volume.
It's good that you are thinking of scalability right from the design phase. Martin Abbott and Michael Fisher (of PayPal/eBay fame) lay out a framework called the AKF Scale Cube for scaling web apps. The main principle is to scale your app along 3 axes.
X-axis: cloning of services/data such that work can be easily distributed across instances. For a web app, this implies the ability to add more web servers (clustering).
Y-axis: separation of work by responsibility, action, or data. For example, in your case you could serve different API calls from different servers.
Z-axis: separation of work by customer or requester. In your case you could say requesters from region 1 will access server 1, requesters from region 2 will access server 2, etc.
Design your system so that you can follow all 3 above if you need to. But when you initially deploy, you may not need to use all three methods.
You can check out the book "The Art of Scalability" by the above authors. http://amzn.to/oSQGHb
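The axes compose naturally; here is a toy sketch of how a Z-axis (region) decision and an X-axis (clone) decision might combine. The region names, server names, and hashing scheme are all hypothetical:

```python
import hashlib

# Hypothetical deployment map: server pools per region (Z-axis);
# each pool contains interchangeable clones (X-axis).
REGION_SERVERS = {
    "region1": ["server1a", "server1b"],
    "region2": ["server2a", "server2b"],
}

def route_request(region: str, customer_id: str) -> str:
    """Z-axis: pick the pool by region; X-axis: spread customers across
    the clones in that pool with a stable hash, so a given customer
    always lands on the same clone."""
    pool = REGION_SERVERS[region]
    digest = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]
```

The Y-axis would appear as separate pools per API responsibility rather than per region; the routing logic stays the same shape.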
A final answer is not possible, but based on the information you provided this does not seem to be a problem as long as your application is stateless so that it only forwards requests to SAP and returns the responses. In this case it does not maintain any state at all. If it comes to e.g. asynchronous message handling, temporary database storage or session state management it becomes more complex. If this is true and there is no need to maintain state you can easily scale-out your application to dozens of application servers without changing your application architecture.
In my experience this is not necessarily the case when it comes to SAP integration, think of a shopping cart you want to fill based on products available in SAP. You may want to maintain this cart in your application and only submit the final cart to SAP. Otherwise you end up building an e-commerce application inside your backend.
Most important is that you reduce CPU utilization in your application to avoid a 'too-large' cluster and to reduce all kinds of I/O wherever possible, e.g. small SOAP messages to reduce network I/O.
Furthermore, I recommend designing a proper abstraction layer on top of JCo, including the JCO.PoolManager for connection pooling. You may also need a well-thought-out authorization concept if you work with a connection pool managed by only one technical user.
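As a generic illustration of why pooling bounds resource usage (this is not JCo code; JCO.PoolManager handles this for real SAP connections, and make_connection here is a stand-in factory):

```python
import queue

class ConnectionPool:
    """Fixed-size pool sketch: at most `size` backend connections exist,
    and callers block when all of them are in use."""

    def __init__(self, make_connection, size: int):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_connection())

    def acquire(self, timeout: float = 5.0):
        # Blocks until a pooled connection is free, bounding concurrency
        # against the backend (e.g. SAP) regardless of user load.
        return self._pool.get(timeout=timeout)

    def release(self, conn) -> None:
        # Return the connection for reuse instead of closing it.
        self._pool.put(conn)
```

The point is that tens of thousands of users share a small, fixed number of expensive backend connections, so the backend sees bounded load while the stateless application tier scales out freely.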
Just some (not well structured) thoughts...