I am going to create an enterprise web application (it's a personal dream project), and I admit that I might not know all the technologies and stacks required to build it, but I am positive I can learn them quickly. I am interested to know which architecture would suit my software here. I was thinking about Workflow Foundation, but was not sure if it's the right approach. Are there any other tools or architectural patterns that would help me?
More about my application
The expected end users of the application will be the general public and various government departments. Each of these government departments will have several sub-departments that also require access.
The Complexity
1. The staff in these government departments might be transferred, promoted, or demoted at any time.
2. A particular role or function might be moved from one department to another at any point in time.
3. The number of departments might increase or decrease at any point in time.
4. There should be easy access management at both the department level and the individual level.
Extra Complexity (If its possible)
1. Data in the reports can be easily changed (for example, adding another column, or even manipulating the data) without changing the basic application logic or structure.
I have done several government projects; to share a few of my experiences:
Government management is usually hierarchical, divided into state, county, city, and so on. Each level of government is divided into sections responsible for different tasks, which can be called "roles"; each staff member belongs to an administrative division and has one or more roles.
In general, higher-level government units can manage the data of their subordinate units, so using a hierarchical, incremental code is relatively simple. For example, in the code 330102, 33 is the state, 01 the county, and 02 the city; a condition such as xxx LIKE '3301%' retrieves the data of county 3301, including its sub-districts. Each role also has a number of menu items, giving a very simple permissions system, so I won't elaborate on that.
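As a minimal sketch of that idea (the division codes come from the example above, but the class, column, and parameter names are assumptions for illustration):

using System;

// Hypothetical example of prefix-based access by administrative division code:
// a user assigned to "3301" may see any row whose DivisionCode starts with "3301".
public static class DivisionAccess
{
    public static bool CanAccess(string userDivisionCode, string rowDivisionCode)
    {
        // Higher-level units (shorter codes) see all data of their subordinate units.
        return rowDivisionCode.StartsWith(userDivisionCode, StringComparison.Ordinal);
    }

    // Builds a parameterized LIKE filter, e.g. "DivisionCode LIKE :divPrefix" with value "3301%".
    public static (string Sql, string ParameterValue) BuildFilter(string userDivisionCode)
    {
        return ("DivisionCode LIKE :divPrefix", userDivisionCode + "%");
    }
}

The same prefix check works both for row-level filtering in SQL and for in-memory authorization checks.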
In addition, for custom fields you need a metadata dictionary that stores the fields of each report. Each field has a property specifying which administrative division it belongs to; when the user logs in, the corresponding field list is retrieved according to the user's administrative division. Because different sections view different reports, no additional fields are needed beyond that.
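A rough sketch of such a field dictionary (the class, the property names, and the prefix-matching rule are all assumptions for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical metadata dictionary: one row per report field, tagged with the
// administrative division it belongs to; the user's division decides which
// columns appear in the report, with no change to the application logic.
public sealed class ReportFieldDefinition
{
    public string ReportName { get; set; }
    public string FieldName { get; set; }
    public string DataType { get; set; }      // e.g. "string", "decimal", "date"
    public string DivisionCode { get; set; }  // division this field belongs to
    public int DisplayOrder { get; set; }
}

public static class ReportMetadata
{
    // Returns the columns a user should see for a report. The matching rule here
    // (a field defined for a division also applies to its sub-divisions) is an assumption.
    public static IEnumerable<ReportFieldDefinition> FieldsFor(
        IEnumerable<ReportFieldDefinition> allFields, string reportName, string userDivisionCode)
    {
        return allFields
            .Where(f => f.ReportName == reportName
                     && userDivisionCode.StartsWith(f.DivisionCode, StringComparison.Ordinal))
            .OrderBy(f => f.DisplayOrder);
    }
}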
These are just the situations I encountered; they may not match your case, so take them as a reference only.
Additions:
I used ASP.NET 2.0, with Oracle as the database (a customer requirement). I used WebForms to generate the user interface and did not use any other frameworks, because other frameworks have a learning cost and most requirement changes would still need coding. My goal was that when requirements change, the customer can make the adjustment themselves, without relying on any other tools or environments.
The ASP.NET feature I use most is the WebUserControl (.ascx). I store the user interface definitions in the database; when the user opens a document (report), the user controls are generated and then combined into the WebForm.
// public class Common_CustomBill : Page
protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    // User interface definition retrieved from the database
    LoadBillItems();
    foreach (DataRow ARow in BillItems.Rows)
    {
        HtmlTableCell ACell = new HtmlTableCell();
        HtmlTableRow ATableRow = new HtmlTableRow();
        ATableRow.Cells.Add(ACell);

        // Load the control depending on its type
        NovaNet.WebBase.BillItem AItem = LoadControl("BillItem/" + ARow["type"].ToString() + ".ascx") as NovaNet.WebBase.BillItem;
        AItem.OrderId = ARow["OrderId"].ToString();
        AItem.ID = string.Format("C{0:X}{1}", Session.SessionID.GetHashCode(), ARow["OrderId"]);

        if (P.OftenFunction.SafeToString(ARow["Border"]) == "none")
        {
            ACell.Controls.Add(AItem);
        }
        else
        {
            // Wrap the control in a <fieldset> with a <legend> label
            HtmlGenericControl FIELDSET = new HtmlGenericControl("FIELDSET");
            HtmlGenericControl Legend = new HtmlGenericControl("legend");
            FIELDSET.Controls.Add(Legend);
            Legend.InnerHtml = ARow["label"].ToString();
            FIELDSET.Controls.Add(AItem);
            ACell.Controls.Add(FIELDSET);
        }

        tabMain.Rows.Add(ATableRow);
        AItem.BillData = Data;
        // Initialize the control
        AItem.InitiData(ARow["content"] as byte[], ModuleName);
    }
}
Once a user control is initialized, different types of user controls behave differently. For example, the multi-selection control initializes the Items property of its CheckBoxList (a sketch of this case follows below). The special, custom-type control generates a separate .ascx file during initialization and loads it, as shown in the code after the sketch.
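As a minimal illustration of the multi-selection case, a control following the same pattern might look like the sketch below; the CheckBoxList field name (chkList) and the semicolon-separated "options" format are assumptions for illustration, not part of the original project.

using System;
using System.Web.UI.WebControls;

// Hypothetical sketch only: derives from the same NovaNet.WebBase.BillItem base class
// and overrides OnInitiData, like the custom control shown next.
public partial class Common_BillItem_MultiSelect : NovaNet.WebBase.BillItem
{
    public override void OnInitiData()
    {
        base.OnInitiData();
        // FData is the incoming "initialization data"; the "options" key and its
        // semicolon-separated format are assumptions made for this example.
        string options = FData["options"].ToString();

        // chkList is assumed to be a CheckBoxList declared in the .ascx markup.
        chkList.Items.Clear();
        foreach (string option in options.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            chkList.Items.Add(new ListItem(option.Trim()));
        }
    }
}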
public partial class Common_BillItem_CustomControl : NovaNet.WebBase.BillItem
{
    public override void OnInitiData()
    {
        base.OnInitiData();
        // FData is the incoming "initialization data"
        string Value = FData["code"].ToString();

        // Generate the file on disk
        string AFileName = "~/Temp/" + ModuleName + "_" + Value.GetHashCode().ToString("X") + ".ascx";
        if (!Directory.Exists(Path.GetDirectoryName(Server.MapPath(AFileName))))
            Directory.CreateDirectory(Path.GetDirectoryName(Server.MapPath(AFileName)));
        if (!File.Exists(Server.MapPath(AFileName)))
        {
            using (StreamWriter AWriter = new StreamWriter(Server.MapPath(AFileName), false, System.Text.Encoding.UTF8))
            {
                AWriter.Write(Value);
                AWriter.Close();
            }
        }

        // Load it into the interface
        Controls.Add(LoadControl(AFileName));
    }
}
IIS dynamic compilation works great here: the first time an .ascx file is loaded it is compiled into a DLL, and subsequent loads use that DLL directly.
Related
The UI is decoupled from the domain, but the UI should try its best to never allow the user to issue commands that are sure to fail.
Consider the following example (pseudo-code):
DiscussionController

    #Security(is_logged)
    #Method('POST')
    #Route('addPost')
    addPostToDiscussionAction(request)
        discussionService.postToDiscussion(
            new PostToDiscussionCommand(request.discussionId, session.myUserId, request.bodyText)
        )

    #Method('GET')
    #Route('showDiscussion/{discussionId}')
    showDiscussionAction(request)
        discussionWithAllThePosts = discussionFinder.findById(request.discussionId)
        canAddPostToThisDiscussion = ???
        // render the discussion to the user, and use `canAddPostToThisDiscussion` to show/hide the form
        // from which the user can send a request to `addPostToDiscussionAction`.
        renderDiscussion(discussionWithAllThePosts, canAddPostToThisDiscussion)

PostToDiscussionCommand

    constructor(discussionId, authorId, bodyText)

DiscussionApplicationService

    postToDiscussion(command)
        discussion = discussionRepository.get(command.discussionId)
        author = collaboratorService.authorFrom(discussion.Id, command.authorId)
        post = discussion.createPost(postRepository.nextIdentity(), author, command.bodyText)
        postRepository.add(post)

DiscussionAggregate

    // originalPoster is the Author that started the discussion
    constructor(discussionId, originalPoster)

    // if the discussion is closed, you can't create a post,
    // *unless* you're the author (OP) that started the discussion
    createPost(postId, author, bodyText)
        if (this.closed && !this.originalPoster.equals(author))
            throw "Discussion is closed."
        return new Post(this.discussionId, postId, author, bodyText)

    close()
        if (this.closed)
            throw "Discussion already closed."
        this.closed = true

    isClosed()
        return this.closed
The user goes to /showDiscussion/123 and sees the discussion with the <form> from which he can submit a new post, but only if the discussion is not closed or the current user is the one who started that discussion.
Or, the user goes to /showDiscussion/123 where it's presented as a REST-as-in-HATEOAS API. A hypermedia link to /addPost will be provided only if the discussion is not closed or the authenticated user is the one who started that discussion.
How can I provide that knowledge to the UI?
I could code that into the read model,
canAddPostToThisDiscussion = !discussionWithAllThePosts.discussion.isClosed
&& discussionWithAllThePosts.discussion.originalPoster.id == session.currentUserId
but then I need to maintain that logic and keep it in sync with the write model. This is a fairly simple example, but as the state transitions of an aggregate become more complex, it may become really hard to do. I'd like to imagine my aggregates as state machines, with their workflows (like the RESTBucks example). But I don't like the idea of moving that business logic outside my domain model and putting it in a service that both the read side and the write side can use.
Maybe this isn't the best example, but since an aggregate root is basically a consistency boundary, we know that we need to prevent invalid state transitions in its life cycle, and in each transition to a new state some operations may become illegal and vice versa. So, how can the user interface know what is allowed or not? What are my alternatives? How should I approach this problem? Do you have any examples to provide?
How can I provide that knowledge to the UI?
The easiest way is probably to share the domain model's understanding of what is possible with the UI. Ta Da.
Here's a way to think about it -- in the abstract, all of the write model logic has a fairly simple looking shape.
{
    // Notice that these statements are queries
    State currentState = bookOfRecord.getState()
    State nextState = model.computeNextState(currentState, command)

    // This statement is a command
    bookOfRecord.replace(currentState, nextState)
}
Key ideas here: the book of record is the authority of state; everybody else (including the "write model") is working with a stale copy.
What the model represents is a collection of constraints that ensure that the business invariant is satisfied. Over the lifetime of a system, there might be many different sets of constraints, as the understanding of the business changes.
The write model is the authority for which collection of constraints is currently enforced when replacing the state in the book of record. Everybody else is working with a stale copy.
The staleness is something to keep in mind; in a distributed system, any validation you perform is provisional -- unless you have a lock on the state and a lock on the model, either could be changed while your messages are in flight.
This means that your validation is approximate anyway, so you don't need to be too concerned that you have all of the fiddly details right. You assume that your stale copy of the state is approximately right, and your current understanding of the model is approximately right, and if the command is valid given those pre-conditions, then it is checked enough to send.
I don't like the idea of moving that business logic outside my domain model and putting it in a service that both the read side and the write side can use.
I think the best answer here is "get over it". I get it; having the business logic inside the aggregate root is what the literature tells us to do. But if you continue to refactor, identifying common patterns and separating concerns, you'll see that entities are really just plumbing around a reference to state and a functional core.
AggregateRoot {
    final Reference<State> bookOfRecord;
    final Model<State,Command> theModel;

    onCommand(Command command) {
        State currentState = bookOfRecord.getState()
        State nextState = theModel.computeNextState(currentState, command)
        bookOfRecord.replace(currentState, nextState)
    }
}
All we've done here is take the "construct the next state" logic, which we used to have scattered throughout the AggregateRoot, and encapsulate it into a separate responsibility boundary. Here, it's specific to the root itself, but an equivalent refactoring is to pass it in as an argument.
AggregateRoot {
    final Reference<State> bookOfRecord;

    onCommand(Model<State,Command> theModel, Command command) {
        State currentState = bookOfRecord.getState()
        State nextState = theModel.computeNextState(currentState, command)
        bookOfRecord.replace(currentState, nextState)
    }
}
In other words, the model, teased out from the plumbing of tracking state, is a domain service. The domain logic within the domain service is just as much a part of the domain model as the domain logic within the aggregate -- the two implementations are dual to one another.
And there's no reason that a read model of your domain shouldn't have access to a domain service.
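As a rough C# sketch of that idea (all type and member names here are illustrative, not taken from a specific framework), the transition rule lives in a small domain service that the write side enforces and the read side queries:

using System;

// Illustrative sketch only: the rule is a domain service shared by the write side
// (which enforces it) and the read side (which asks it what to show).
public sealed record DiscussionState(Guid DiscussionId, Guid OriginalPosterId, bool IsClosed);

public interface IDiscussionRules
{
    bool CanAddPost(DiscussionState state, Guid authorId);
}

public sealed class DiscussionRules : IDiscussionRules
{
    public bool CanAddPost(DiscussionState state, Guid authorId)
        => !state.IsClosed || state.OriginalPosterId == authorId;
}

public sealed class Discussion
{
    private readonly DiscussionState _state;
    public Discussion(DiscussionState state) => _state = state;

    // Write side: the aggregate delegates the decision to the shared rules service.
    public void EnsureCanAddPost(IDiscussionRules rules, Guid authorId)
    {
        if (!rules.CanAddPost(_state, authorId))
            throw new InvalidOperationException("Discussion is closed.");
    }
}

// Read side (e.g. in showDiscussionAction):
//   canAddPostToThisDiscussion = rules.CanAddPost(stateFromReadModel, session.CurrentUserId);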
I don't like the idea of sharing domain knowledge (code) between the write and the read models, as you will have to continuously keep them in sync, and that'd really be a challenge even if you are the only developer in your company.
But the good news is that you don't have to duplicate anything. If you designed your aggregate to be pure, with no side effects, as you should (!), you can simply send it the command without persisting the changes. If the command throws an exception, it would not succeed; otherwise it would. With CQRS this is even better, as you have a third possible outcome: idempotent command detection, in which case the command succeeds but has no effect (no events are raised but no exception is thrown either), and the UI might find this interesting.
So, as an example you could have something like this:
DiscussionController

    #Security(is_logged)
    #Method('POST')
    #Route('addPost')
    addPostToDiscussionAction(request)
        discussionService.postToDiscussion(
            new PostToDiscussionCommand(request.discussionId, session.myUserId, request.bodyText)
        )

    #Method('GET')
    #Route('showDiscussion/{discussionId}')
    showDiscussionAction(request)
        discussionWithAllThePosts = discussionFinder.findById(request.discussionId)
        canAddPostToThisDiscussion = discussionService.canPostToDiscussion(request.discussionId, session.myUserId, "some sample body")
        // render the discussion to the user, and use `canAddPostToThisDiscussion` to show/hide the form
        // from which the user can send a request to `addPostToDiscussionAction`.
        renderDiscussion(discussionWithAllThePosts, canAddPostToThisDiscussion)

DiscussionApplicationService

    postToDiscussion(command)
        discussion = discussionRepository.get(command.discussionId)
        author = collaboratorService.authorFrom(discussion.Id, command.authorId)
        post = discussion.createPost(postRepository.nextIdentity(), author, command.bodyText)
        postRepository.add(post)

    canPostToDiscussion(discussionId, authorId, bodyText)
        discussion = discussionRepository.get(discussionId)
        author = collaboratorService.authorFrom(discussion.Id, authorId)
        try
        {
            post = discussion.createPost(postRepository.nextIdentity(), author, bodyText)
            return true
        }
        catch (exception)
        {
            return false
        }
You could even have a method named whyCantPostToDiscussion that would return the exception or the exception message and display it in the UI.
There is only one issue with the code: the call to postRepository.nextIdentity(), because it would increment the next ID every time it is called. You could replace it with something like postRepository.getBiggestIdentity(), which should have no side effect.
I find it is rare that authorization is actually part of the domain. If it isn't, it makes sense to move that logic out into its own service which the UI and the domain can make use of.
I like to build up a set of rules using the specification pattern. I find it to be a fairly elegant way to build up the rules.
This also plays very well in a CQRS context, as you can run each command through the 'rules engine' before it gets issued to your ARs. If you push queries through a message routing system you can do the same for queries. I've had a lot of success with this approach.
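As a minimal sketch of the specification idea in this context (the type names, the snapshot shape, and the Rules helper are made up for illustration, not taken from a specific library):

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the specification pattern; names are illustrative only.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
    string FailureReason { get; }
}

public sealed record DiscussionSnapshot(Guid OriginalPosterId, bool IsClosed);

public sealed class DiscussionIsOpen : ISpecification<DiscussionSnapshot>
{
    public string FailureReason => "Discussion is closed.";
    public bool IsSatisfiedBy(DiscussionSnapshot d) => !d.IsClosed;
}

public sealed class AuthorIsOriginalPoster : ISpecification<DiscussionSnapshot>
{
    private readonly Guid _authorId;
    public AuthorIsOriginalPoster(Guid authorId) => _authorId = authorId;
    public string FailureReason => "Only the original poster may do this.";
    public bool IsSatisfiedBy(DiscussionSnapshot d) => d.OriginalPosterId == _authorId;
}

// A tiny "rules engine": evaluate the specifications and collect the failures.
// The UI can show/hide actions based on the same result the command pipeline uses.
// And/Or composition can be layered on top as needed.
public static class Rules
{
    public static IReadOnlyList<string> Check<T>(T candidate, params ISpecification<T>[] specs)
        => specs.Where(s => !s.IsSatisfiedBy(candidate)).Select(s => s.FailureReason).ToList();
}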
The response you are looking for is HATEOAS; look no further. You must implement your REST API as truly RESTful (maturity level 3), using hypertext to model the state transitions and returning links to the clients (the UI being one of those clients). These links represent the actions the user can execute in their context according to the model state. It's simple: if you return a link from the server, you bind it to a button in the UI; if you don't return the link because of a business invariant, you do not show the button. There are many more concepts behind it, such as designing a good API on top of a well-designed domain model, but this is the general idea, and it fits exactly what you want.
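A rough sketch of building such a hypermedia representation on the server (the Link and DiscussionRepresentation shapes are assumptions for illustration, not a specific framework's API):

using System.Collections.Generic;

// Illustrative only: a discussion representation that carries hypermedia links.
// The client renders the "add post" form or button only when the corresponding link is present.
public sealed class Link
{
    public string Rel { get; init; }
    public string Href { get; init; }
    public string Method { get; init; }
}

public sealed class DiscussionRepresentation
{
    public string DiscussionId { get; init; }
    public bool IsClosed { get; init; }
    public List<Link> Links { get; } = new();
}

public static class DiscussionRepresentationBuilder
{
    public static DiscussionRepresentation Build(string discussionId, bool isClosed,
                                                 string originalPosterId, string currentUserId)
    {
        var rep = new DiscussionRepresentation { DiscussionId = discussionId, IsClosed = isClosed };
        rep.Links.Add(new Link { Rel = "self", Href = $"/showDiscussion/{discussionId}", Method = "GET" });

        // The business invariant decides whether the transition is offered at all.
        if (!isClosed || originalPosterId == currentUserId)
            rep.Links.Add(new Link { Rel = "addPost", Href = "/addPost", Method = "POST" });

        return rep;
    }
}

The client then simply checks for the presence of the addPost link before rendering the form or button.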
Data validation should occur at the following places in a web application:
Client-side: browser. To speed up user error reporting.
Server-side: controller. To check whether user input is syntactically valid (for example, no SQL injection, a valid format for all passed-in fields, all required fields filled in, etc.).
Server-side: model (domain layer). To check whether user input is valid domain-wise (no duplicate usernames, account balance is not negative, etc.).
I am currently a DDD fan, so I have UI and Domain layers separated in my applications.
I am also trying to follow the rule that the domain model should never contain invalid data.
So, how do you design the validation mechanism in your application so that validation errors that take place in the domain propagate properly to the client? For example, when the domain model raises an exception about a duplicate username, how do I correctly bind that exception to the submitted form?
Some article, that inspired this question, can be found here: http://verraes.net/2015/02/form-command-model-validation/
I've seen no such mechanisms in the web frameworks known to me. What first springs to mind is to make the domain model include the name of the offending field in the exception data, and then have the UI layer provide a map between form fields and model fields so the error can be shown in its proper context for the user. Is this approach valid? It looks shaky... Are there examples of better designs?
Although not exactly the same question as this one, I think the answer is the same:
Encapsulate the validation logic into a reusable class. These classes are usually called specifications, validators or rules and are part of the domain.
Now you can use these specifications in both the model and the service layer.
If your UI uses the same technology as the model, you may also be able to use the specifications there (e.g. when using NodeJS on the server, you're able to write the specs in JS and use them in the browser, too).
Edit - additional information after the chat
Create fine-grained specifications, so that you are able to display appropriate error messages if a spec fails.
Don't make business rules or specifications aware of form fields.
Only create specs for business rules, not for basic input validation tasks (e.g. checking for null).
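As a small sketch of such a fine-grained specification, using the duplicate-username example from the question (the IUserQuery interface and its method are assumptions made for illustration):

// Sketch only: a fine-grained specification with its own error message.
// IUserQuery and UsernameExists are hypothetical names, not a real API.
public interface IUserQuery
{
    bool UsernameExists(string username);
}

public sealed class UsernameIsUnique
{
    private readonly IUserQuery _users;
    public UsernameIsUnique(IUserQuery users) => _users = users;

    public string ErrorMessage => "This username is already taken.";

    public bool IsSatisfiedBy(string username) => !_users.UsernameExists(username);
}

The same specification can run in the application service before the aggregate is touched, and the UI layer can map its ErrorMessage to the username form field without the specification knowing anything about forms.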
I want to share the approach we used in one DDD project.
We created a BaseClass with the fields ErrorId and ErrorMessage. Every domain model derives from this BaseClass and thus has those two extra fields, ErrorId and ErrorMessage, available.
Whenever an exception occurs, we handle it (log it on the server, take appropriate compensating steps, and fetch a user-friendly message from a localized resource file based on the client's location), and then propagate the data through the normal flow without raising or throwing an exception.
On the client side, if ErrorMessage is not null, the error is shown.
It's a basic, simple approach that we followed from the start of the project.
For a new project this is the least complicated and an efficient approach, but if you are making changes to a big old project it might not help, as the changes are large.
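A minimal sketch of that base class (only ErrorId and ErrorMessage come from the description above; the rest is illustrative):

// Sketch of the described approach: domain models inherit ErrorId/ErrorMessage
// instead of throwing across the service boundary.
public abstract class BaseClass
{
    public int? ErrorId { get; set; }
    public string ErrorMessage { get; set; }

    public bool HasError => ErrorId.HasValue || !string.IsNullOrEmpty(ErrorMessage);
}

public sealed class Customer : BaseClass
{
    public string CustomerNo { get; set; }
    public string FirstName { get; set; }
}

// Server side: catch, log, translate, and flow the result back without rethrowing.
// Client side: if (result.HasError) showError(result.ErrorMessage);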
For validation at the individual field level, use the Validation Application Block from Enterprise Library.
It can be used as follows:
Decorate domain model properties with the appropriate attributes, for example:
public class AttributeCustomer
{
    [NotNullValidator(MessageTemplate = "Customer must have valid no")]
    [StringLengthValidator(5, RangeBoundaryType.Inclusive,
        5, RangeBoundaryType.Inclusive,
        MessageTemplate = "Customer no must have {3} characters.")]
    [RegexValidator("[A-Z]{2}[0-9]{3}",
        MessageTemplate = "Customer no must be 2 capital letters and 3 numbers.")]
    public string CustomerNo { get; set; }
}
Create a validator instance:
Validator<AttributeCustomer> cusValidator =
    valFactory.CreateValidator<AttributeCustomer>();
Use the object and perform the validation:
customer.CustomerNo = "AB123";
customer.FirstName = "Brown";
customer.LastName = "Green";
customer.BirthDate = "1980-01-01";
customer.CustomerType = "VIP";
ValidationResults valResults = cusValidator.Validate(customer);
Check the validation results:
if (valResults.IsValid)
{
    MessageBox.Show("Customer information is valid");
}
else
{
    foreach (ValidationResult item in valResults)
    {
        // Put your validation detection logic here
    }
}
The code example is taken from Microsoft Enterprise Library 5.0 - Introduction to Validation Block.
These links will help in understanding the Validation Application Block:
http://www.codeproject.com/Articles/256355/Microsoft-Enterprise-Library-Introduction-to-V
https://msdn.microsoft.com/en-in/library/ff650131.aspx
https://msdn.microsoft.com/library/cc467894.aspx
I am working on .NET 4.0 MVC3 web application. The application is all in English and allows users to fill information regarding different regions. For simplicity let's say we have two regions: United States and Western Europe.
Now in the view I present a string, let's say "Project opening", but if the user works on the United States region I would like it to read "Project initiation".
When I think about this functionality I immediately think of resource files for different regions, but independent of the UI culture.
Does anyone have a recipe for how to achieve what I want?
It would also be nice if, in the future, I could make it read something like ExtendedDisplayAttribute(string displayName, int regionId) placed on properties of my ViewModels.
EDIT
I am already at the stage where I can access the region information in a helper that should return the string for this region. Now I have a problem with the resource files. I want to create multiple resource files with a failover mechanism. I expected there would be something working out of the box, but the ResourceManager cannot be used to read .resx files directly.
Is there any technique that will allow me to read the values from specific resource files without messing around with resgen.exe?
I also do not want to use System.Resources.ResXResourceReader, because it belongs to System.Windows.Forms.dll and this is a web app.
Just in case someone wants to do the same in the future: this article turned out to be really helpful: http://www.jelovic.com/articles/resources_in_visual_studio.htm
The piece of code that I use (VB) is:
<Extension()>
Public Function Resource(Of TModel)(ByVal htmlHelper As HtmlHelper(Of TModel), resourceKey As String) As MvcHtmlString
    Dim regionLocator As IRegionLocator = DependencyResolver.Current.GetService(GetType(IRegionLocator))

    Dim resources = New List(Of String)
    If Not String.IsNullOrEmpty(regionLocator.RegionName) Then
        resources.Add(String.Format("Website.Resources.{0}", regionLocator.RegionName))
    End If
    resources.Add("Website.Resources")

    Dim value = String.Empty
    For Each r In resources
        Dim rManager = New System.Resources.ResourceManager(r, System.Reflection.Assembly.GetExecutingAssembly())
        rManager.IgnoreCase = True
        Try
            value = rManager.GetString(resourceKey)
            If Not String.IsNullOrEmpty(value) Then
                Exit For
            End If
        Catch
        End Try
    Next

    Return New MvcHtmlString(value)
End Function
I have an IEnumerable that I'd like to add to Azure Table in the most efficient way possible. Since every batch write has to be directed to the same PartitionKey, with a limit of 100 rows per write...
Does anyone want to take a crack at implementing this the "right" way as referenced in the TODO section? I'm not sure why MSFT didn't finish the task here...
Also, I'm not sure whether error handling will complicate this, or what the correct way to implement it is. Here is the code from the Microsoft patterns & practices team for the Windows Azure "Tailspin Toys" demo:
public void Add(IEnumerable<T> objs)
{
    // todo: Optimize: The Add method that takes an IEnumerable parameter should check the number of items
    // in the batch and the size of the payload before calling the SaveChanges method with the
    // SaveChangesOptions.Batch option. For more information about batches and Windows Azure table storage,
    // see the section, "Transactions in aExpense," in Chapter 5, "Phase 2: Automating Deployment and Using
    // Windows Azure Storage," of the book, Windows Azure Architecture Guide, Part 1: Moving Applications
    // to the Cloud, available at http://msdn.microsoft.com/en-us/library/ff728592.aspx.
    TableServiceContext context = this.CreateContext();

    foreach (var obj in objs)
    {
        context.AddObject(this.tableName, obj);
    }

    var saveChangesOptions = SaveChangesOptions.None;
    if (objs.Distinct(new PartitionKeyComparer()).Count() == 1)
    {
        saveChangesOptions = SaveChangesOptions.Batch;
    }

    context.SaveChanges(saveChangesOptions);
}

private class PartitionKeyComparer : IEqualityComparer<TableServiceEntity>
{
    public bool Equals(TableServiceEntity x, TableServiceEntity y)
    {
        return string.Compare(x.PartitionKey, y.PartitionKey, true, System.Globalization.CultureInfo.InvariantCulture) == 0;
    }

    public int GetHashCode(TableServiceEntity obj)
    {
        return obj.PartitionKey.GetHashCode();
    }
}
Well, we (the patterns & practices team) just optimized for showing other things we considered useful. The code above is not really a "general purpose library", but rather a specific method for the sample that uses it.
At the time we thought that adding that extra error handling would not add much, and we decided to keep it simple, but... we might have been wrong.
Anyway, if you follow the link in the //TODO:, you will find another section of a previous guide we wrote that talks a little bit more about error handling in "complex" storage transactions (not in the "ACID" sense, though, as DTC-style transactions are not supported in Windows Azure Storage).
Link is this: http://msdn.microsoft.com/en-us/library/ff803365.aspx
The limitations are listed in more detail there:
Only one instance of the entity should be present in the batch
Max 100 entities or 4 MB payload
Same PartitionKey (which is being handled in the code: notice that "batch" is only specified if there's a single Partition key)
etc.
Adding some extra error handling should not overcomplicate things too much, but it depends on the type of app you are building on top of this and your preference for handling this higher or lower in your app stack. In our example, the app would never expect more than 100 entities anyway, so it would simply bubble the exception up if that situation happened (because it should be truly exceptional). The same goes for the total size. The use cases implemented in the app make it impossible to have the same entity twice in the same collection, so again, that should never happen (and if it did, it would simply throw).
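For anyone who does want to handle larger inputs, here is a rough sketch of what the TODO asks for, reusing the same calls that appear in the sample (CreateContext, tableName, AddObject, SaveChanges). The grouping logic and the chunk size of 100 are the additions; the 4 MB payload check is still left out, and it assumes the class constrains T to TableServiceEntity and imports System.Linq:

// Sketch only: group the entities by PartitionKey and save them in batches of at most 100,
// since an entity group transaction requires a single partition and no more than 100 entities.
// The 4 MB payload limit is not checked here.
public void AddInBatches(IEnumerable<T> objs)
{
    foreach (var partition in objs.GroupBy(o => o.PartitionKey))
    {
        var chunks = partition
            .Select((entity, index) => new { entity, index })
            .GroupBy(x => x.index / 100, x => x.entity);

        foreach (var chunk in chunks)
        {
            TableServiceContext context = this.CreateContext();
            foreach (var obj in chunk)
            {
                context.AddObject(this.tableName, obj);
            }
            // Every entity in this call shares a PartitionKey, so Batch is always safe here.
            context.SaveChanges(SaveChangesOptions.Batch);
        }
    }
}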
All "entity group transactions" limitations are documented here: http://msdn.microsoft.com/en-us/library/dd894038.aspx
Let us know how it goes! I'm also interested to know if other pieces of the guide were useful for you.
So, I'm not quite sure how I should structure this in CakePHP to work correctly in the proper MVC form.
Let's, for argument's sake, say I have the following data structures, which are related in various ways:
Team
Task
Equipment
This is generally how sites are structured, and it's quite easy to set up in Cake. For example, I would have a model, controller, and view for each item set.
My problem (and I'm sure countless others have had it and already solved it) is that I have a level above the item sets. So, for example:
Department
    Team
    Task
    Equipment
Department
    Team
    Task
    Equipment
Department
    Team
    Task
    Equipment
On my site, I need the ability for someone to view the site at an individual group level as well as to view it all together (i.e., ignoring the groups).
So, I have models, views and controllers for Department, Team, Task and Equipment.
How do I structure my site so that, from the Department view, someone can select a Department and then move around the site to the different views for Team/Task/Equipment, showing only those that belong to that particular Department?
In this same format, is there a way to also move around ignoring the department associations?
Hopefully the following example URLs clarify anything that was unclear:
// View items while disregarding which group-set record they belong to
http://www.example.com/Team/action/id
http://www.example.com/Task/action/id
http://www.example.com/Equipment/action/id
http://www.example.com/Departments
// View items as if only those associated with the selected group-set record exist
http://www.example.com/Department/HR/Team/action/id
http://www.example.com/Department/HR/Task/action/id
http://www.example.com/Department/HR/Equipment/action/id
Can I get the controllers to function in this manner? Is there something I can read so I can figure this out?
Thanks to those that read all this :)
I think I know what you're trying to do. Correct me if I'm wrong:
I built a project manager for myself in which I wanted the URLs to be more logical, so instead of using something like
http://domain.com/project/milestones/add/MyProjectName I could use
http://domain.com/project/MyProjectName/milestones/add
I added a custom route to the end (!important) of my routes so that it catches anything that's not already a route and treats it as a "variable route".
Router::connect('/project/:project/:controller/:action/*', array(), array('project' => '[a-zA-Z0-9\-]+'));
Whatever route prefix you use means that you can't already (or ever) have a controller by that name; for that reason I consider it good practice to use a singular word instead of a plural. (I have a Projects controller, so I use "project" to avoid conflicting with it.)
Now, to access the :project parameter anywhere in my app, I use this function in my AppController:
function __currentProject(){
    // Finding the current Project's Info
    if(isset($this->params['project'])){
        App::import('Model', 'Project');
        $projectNames = new Project;
        $projectNames->contain();
        $projectInfo = $projectNames->find('first', array('conditions' => array('Project.slug' => $this->params['project'])));
        $project_id = $projectInfo['Project']['id'];
        $this->set('project_name_for_layout', $projectInfo['Project']['name']);
        return $project_id;
    }
}
And I utilize it in my other controllers:
function overview(){
    $this->layout = 'project';

    // Getting currentProject id from App Controller
    $project_id = parent::__currentProject();

    // Finding out what time it is and performing queries based on time.
    $nowStamp = time();
    $nowDate = date('Y-m-d H:i:s', $nowStamp);
    // Add the offset to the timestamp before formatting (1209600 seconds = two weeks).
    $twoWeeksFromNow = date('Y-m-d H:i:s', $nowStamp + 1209600);

    $lateMilestones = $this->Project->Milestone->find('all', array('conditions' => array('Milestone.project_id' => $project_id, 'Milestone.complete' => 0, 'Milestone.duedate <' => $nowDate)));
    $this->set(compact('lateMilestones'));

    $currentProject = $this->Project->find('all', array('conditions' => array('Project.slug' => $this->params['project'])));
    $this->set(compact('currentProject'));
}
For your project you can try using a route like this at the end of your routes.php file:
Router::connect('/:groupname/:controller/:action/*', array(), array('groupname' => '[a-zA-Z0-9\-]+'));
// Notice I removed "/project" from the beginning. If you put the :groupname first, as I've done in the last example, then you only have one option for these custom url routes.
Then modify the other code to your needs.
If this is a public site, you may want to consider using named variables. This will allow you to define the group on the URL still, but without additional functionality requirements.
http://example.com/team/group:hr
http://example.com/team/action/group:hr/other:var
It may require custom routes too... but it should do the job.
http://book.cakephp.org/view/541/Named-parameters
http://book.cakephp.org/view/542/Defining-Routes
SESSIONS
Since the web is stateless, you will need to use sessions (or cookies). The question you will need to ask yourself is how to reflect the selection (or not) of a specific department. It could be as simple as putting a drop-down in the upper right that offers ALL, HR, Sales, etc. When the drop-down changes, it sets (or clears) the Group session variable.
As for the functionality in the controllers, you just check for the session. If it is there, you limit the data to the selected group. So you would use the same URLs, but the controller or model would manage how the data gets displayed.
// for all functionality use:
http://www.example.com/Team/action/id
http://www.example.com/Task/action/id
http://www.example.com/Equipment/action/id
You don't change the URL to accommodate the functionality. That would be like using a different URL for every USER wanting to see their ADDRESS, PHONE NUMBER, or BILLING INFO, where USER would be the group and ADDRESS, PHONE NUMBER, and BILLING INFO would be the item sets.
WITHOUT SESSIONS
The other option would be to put the Group filter on each page. So for example on Team/index view you would have a group drop down to filter the data. It would accomplish the same thing without having to set and clear session variables.
The conclusion, and the key thing to remember, is that neither the functionality nor the URLs change. The only thing that changes is that you will be working with filtered data sets.
Does that make sense?