I need some advice on how to go about developing the model for a Client/Server system using UML.
In a short explanation the system consists of a Mobile client which runs on mobile phones. As is common with most mobile applications, the mobile application connects to a server in order to carry out some processing, logging for backups, and connectivity to third-party applications.
Where I need advice is this: envisaging the whole system, almost all the classes in the mobile application are replicated in the server application, with the exception of a few classes. Likewise, the server application contains most of the same classes as the mobile application, plus some others and some extra functionality.
To give an example, the mobile application has a User class that consists of the actor's personal details and login details. Likewise, the server application has a User class with the same members as the mobile application's User class, except that it has some functionality/methods that are not in the mobile application.
The server application also has a class that connects to a third-party application to carry out its billing functionality. This class is replicated in the mobile application too, except that the mobile application's billing class does not have the method to connect to the third party.
OK, to the issue at hand: I feel that if I am going to follow the principles of UML modeling, I should not replicate these classes but rather make use of reuse in the modeling. As I am using packages to separate the mobile application from the server application, I guess it would involve:
1. Having the basic classes that do the same thing (methods and members) in both the mobile and server applications.
2. For classes with extra members and functionality in either the mobile or server application, using inheritance dependencies to build extra classes to take care of them.
3. Using an <<include>> dependency to add the classes generated in #2 to the Mobile and Server packages, or an <<include>> dependency to add the classes generated in #1, as the case may require.
Please, is my line of thought correct on how to implement the modeling? I feel that replicating the same classes would go against the ideals of UML modeling. Yet the fact that there is a separation between the mobile and server applications makes me want to model the mobile application entirely separately, and then model the server application separately as well.
Again, please: is my line of thought correct?
It seems to me you just have one model with three packages:
a commonComponents package containing the classes which are used in the mobile and server application
a mobile package containing the classes used in the mobile application
a server package containing the classes used in the server application
the mobile and server packages import (via an «import» relationship) the elements contained in the commonComponents package. For instance, the commonComponents::User class is imported into the server package, where it is extended by the server::User class. Note that since packages are namespaces, you can have classes with the same name in different packages.
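To make the structure concrete, here is a minimal sketch in Go (Go is used purely for illustration, and the class and field names are assumptions). Struct embedding stands in for extending the imported commonComponents::User class:

```go
package main

import "fmt"

// Stand-in for commonComponents::User: the shared personal and
// login details used by both the mobile and server applications.
type CommonUser struct {
	Name     string
	Login    string
	Password string
}

func (u CommonUser) DisplayName() string { return u.Name }

// Stand-in for server::User: reuses the common class via embedding
// and adds server-only behavior (e.g. recording a backup timestamp).
type ServerUser struct {
	CommonUser
	LastBackup string
}

func (u ServerUser) LogBackup(ts string) ServerUser {
	u.LastBackup = ts
	return u
}

func main() {
	u := ServerUser{CommonUser: CommonUser{Name: "Ada", Login: "ada"}}
	fmt.Println(u.DisplayName()) // the shared method is reused, not duplicated
}
```

The point of the sketch is that the shared members live in exactly one place; the mobile-side User would embed the same common type without the server-only additions.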
I hope this might help you
http://lowcoupling.com/post/47802411601/uml-diagrams-and-models-with-papyrus
My company has an app on the Salesforce Platform that we are planning to expand. I'm researching the pros and cons of hosting the expanded features on Heroku instead of on Salesforce. One of the biggest drawbacks I see currently is not being able to access @salesforce modules, but I cannot find documentation on that. Would you know if @salesforce modules can be imported?
ref: https://developer.salesforce.com/docs/component-library/documentation/en/lwc/lwc.reference_salesforce_modules
Some, but not all, base Lightning Web Components can be used in open source LWC-based applications built on the Heroku platform or elsewhere. Learn more about LWC OSS at Trailhead.
However, other parts of the Lightning Web Components ecosystem are specific to the Salesforce platform. For example, in LWC OSS you cannot import from within @salesforce/apex. You also won't be able to import from modules like @salesforce/schema, which provides schema details of your org, when you're not deploying code and metadata to your org.
What you'll be able to use is the portions of LWC that are built on standard JavaScript, but not the interaction with your Salesforce org. If you need to interact with the org, you'd have to establish your own API connection and make all of the calls yourself.
In Google's latest docs, they say to test Go 1.12+ apps locally, one should just go build.
However, this doesn't take into account all the routing etc that would happen in the app engine utilizing the app.yaml config file.
I see that dev_appserver.py is still included in the SDK, but it doesn't seem to work on Windows 10.
How does one test their Go App Engine app locally with the app.yaml, i.e. as an actual emulated App Engine app?
Thank you!
On one hand, if your application consists of just the default service, I would recommend following @cerise-limón's comment suggestion. In general, it is recommended that the routing logic of the application be handled within the code. Although I'm not a Go programmer, for single-service applications that use static_files and static_dir there shouldn't be any problems when testing the application locally. You might also deploy the new version without promoting traffic to it in order to test it, as explained here.
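As a sketch of what "routing handled within the code" looks like for a single-service Go app, so that a plain `go build` and running the binary is enough for local testing (the route and message below are assumptions for illustration):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// greeting is the response body for the root route, extracted so the
// handler's behavior can be exercised without starting a server.
func greeting() string { return "Hello from the default service" }

func indexHandler(w http.ResponseWriter, r *http.Request) {
	if r.URL.Path != "/" { // the routing decision lives in code, not app.yaml
		http.NotFound(w, r)
		return
	}
	fmt.Fprintln(w, greeting())
}

func main() {
	http.HandleFunc("/", indexHandler)

	// App Engine injects PORT; default to 8080 for local runs.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Run locally with `go build && ./app`, then hit http://localhost:8080/; the same binary works when deployed, because the PORT environment variable is honored.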
On the other hand, if your application is distributed across multiple services and the routing is managed through the dispatch.yaml configuration file you might follow two approaches:
Test each service locally one by one. This could be the way to go if each service has a single responsibility/functionality that could be tested in isolation from the other services. In fact, with this kind of architecture the testing procedure would be more or less the same as for single service applications.
Run all services locally at once and build your own routing layer. This option would allow you to test applications where services need to reach one another in order to fulfill the requests made to them.
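The second approach could be sketched as a small reverse proxy placed in front of the locally running services, mimicking dispatch.yaml-style rules (the paths and ports below are assumptions for illustration):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// pickBackend mirrors a dispatch.yaml-style rule: requests under /api/
// go to the locally running "api" service, everything else to "default".
func pickBackend(path string) string {
	if strings.HasPrefix(path, "/api/") {
		return "http://localhost:8081" // api service, started separately
	}
	return "http://localhost:8080" // default service
}

func main() {
	director := func(r *http.Request) {
		target, err := url.Parse(pickBackend(r.URL.Path))
		if err != nil {
			return
		}
		r.URL.Scheme = target.Scheme
		r.URL.Host = target.Host
	}
	proxy := &httputil.ReverseProxy{Director: director}
	// All local traffic enters at :9000 and is routed between services.
	log.Fatal(http.ListenAndServe(":9000", proxy))
}
```

Each service is started on its own port with its own `go build` binary, and the proxy stands in for the routing App Engine would otherwise do from dispatch.yaml.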
Another approach that is widely used is to have a separate project for development purposes, where you can just deploy the application and observe its behavior in the App Engine environment. For applications with highly coupled services this would be the easiest option, but it largely depends on your budget.
I have a project which is a single solution in VS2010, and I wanted to have it such that:
Solution one: Admin
Solution Two: Front end
Solution three: Models
The reason for this is that Admin will sit in its own app pool and the front end will sit in another app pool. We then have the models talk to both, and under the models is the SQL database.
my question is:
How do I set this up in to three separate projects such that models can talk to Admin, Front end and the database?
For what you describe I think you'd be better off keeping one Solution and having multiple projects under it.
You can deploy the results of building the Admin project to one web site and the results of building the Front End project to another web site.
Each of these would reference the Models project. Models don't normally 'talk to' other projects but are referenced by them because the initiating action is coming from a web page request to either site - that's what's controlling the flow.
Often you'd also have another project which is a background service which may also reference your models project. This project would run as an NT Service providing time-based execution of work items that aren't tied to an incoming web request, for example, sending emails.
A further level of complexity would be to introduce a services layer and Data Transfer Objects (DTOs). Your background service and all web sites now call into the services layer and interact only with DTOs, while the service layer uses model objects to communicate with the database. You can now evolve your database schema independently of your web applications, provided the service contract remains the same.
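As a language-agnostic sketch of the services-layer/DTO idea (Go is used here purely for illustration; all names are assumptions), the callers see only the DTO while the model type stays private to the layer:

```go
package main

import "fmt"

// Model object: mirrors the database schema and is free to evolve.
// It stays unexported, so callers outside the service layer never see it.
type userModel struct {
	ID           int
	Email        string
	PasswordHash string // never leaves the service layer
}

// DTO: the stable contract exposed to the web sites and background service.
type UserDTO struct {
	ID    int
	Email string
}

// GetUser is the service-layer call; it maps model -> DTO, so schema
// changes to userModel don't break callers as long as UserDTO is stable.
func GetUser(id int) UserDTO {
	m := userModel{ID: id, Email: "a@example.com", PasswordHash: "hashed-secret"}
	return UserDTO{ID: m.ID, Email: m.Email}
}

func main() {
	fmt.Println(GetUser(1).Email)
}
```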
I am working on creating a Windows 8 application. The UI uses HTML5. Using WinJS, I am calling a WCF service that returns a DataTable used to build out the UI. All that is good.
I would also like to create a Windows Service that gets packaged up with the application, so that when someone downloads/installs it, it gets unpackaged and the Windows service is started/executed. Is this type of configuration possible?
The WCF service today is a web service, but I would like to make it a windows service. The idea is to make everything self contained. This would allow me to make it available in the Microsoft Online store - if I wanted to go that route.
Windows 8 Applications don't support installing services. The best you can do is install a service separately.
Your WCF service should be decoupled from your app and most probably running on a different machine! I am pretty sure they are not going to allow you to install or run services in the context of a Windows Store app.
Installing a Windows service is not an ideal approach for any Windows 8 application. I understand that you want to make everything self-contained, but why as a local WCF service then? Why don't you consider having it as a data access layer in your app itself? Just a thought.
REGARDING CLIENT SIDE
Web services are separate projects and separate deployment models. You can have one Visual Studio project for the Windows 8 client app and one project for the Web Services side.
Windows 8 apps have several options for saving persistent data, and there are several considerations when storing Windows 8 application data, such as the endpoints of the web services to be consumed.
Windows 8 Application data also includes session state, user preferences, and other settings. It is created, read, updated, and deleted when the app is running.
There are three types of data stores to consider; the system manages these stores for your app:
(1) local: Persistent data that exists only on the current device
(2) roaming: Data that exists on all devices on which the user has installed the app
(3) temporary: Data that could be removed by the system any time the app isn't running
As a developer, you concern yourself with a couple of objects to persist application data:
The first container object is ApplicationDataContainer. The other is ApplicationData. You can use these objects to store your local, roaming, or temporary data.
REGARDING SERVER SIDE
Your Windows 8 Client app will consume http-based web services.
Most developers deploy web services to the cloud to be consumed by iOS, Android, Windows, and other server side services.
Windows Azure is a cloud offering that makes exposing services to clients very simple.
You can leverage either cloud services for robust solutions or the lighter weight Azure Web Sites.
You can typically choose either of these two project types to create web services:
(1) Windows Communication Foundation WCF; or
(2) ASP.NET Web API, which is included with MVC version 4.
WCF has been around longer and has historically been the primary choice for developers when it comes to exposing services.
Microsoft's more modern approach to web services is the ASP.NET Web API, which truly embraces HTTP concepts (URIs and verbs). The ASP.NET Web API can also be used to create services that leverage request/response headers, hypermedia, etc.
We are looking at a standard way of configuring the various "endpoints" of our application. Our application is a distributed system with Windows Desktop applications, Windows Server "services" and databases.
We currently configure each piece using XML files. This is getting a little out of hand as we work with larger customers, who can have dozens of servers running our application and hundreds of desktop clients.
Can anyone recommend a Microsoft technology, or a third-party one, that would allow us to centralize all that configuration information and manage it in one place for all our applications? Any changes would be "pushed" to the endpoint(s) that are interested.
For example, if we were to change the login for one of our database, we would make that change on the database, then reflect that change in our centralized system. Following that last step, any service that needs to connect to the database would be notified of the change (and potentially receive the new data). How and what each endpoint does with that information is outside the scope of the system.
Our primary business is not "Centralized Configuration Services". We are a GIS company that provides solutions for various utilities worldwide.
I've done a couple of things to give myself this functionality over the years. I build enterprise applications that may be distributed across many servers. I don't want to bury config settings in each service's config file or each web server's web.config file. For application-specific stuff I usually create an application settings table in the app's database. The table only has two fields: SettingName and SettingValue. I then write a web or WCF service whose sole function is to retrieve these settings. I write a function called GetSetting where you pass SettingName and it returns SettingValue, or an empty string if your setting is not found. This way I can store all application settings for all components of the application in one spot. Maintenance and troubleshooting for this is really easy; I'm not hunting through scads of config files spread across a dozen web and app servers.
For larger scale apps I might create a separate AppSettings database where I add a new field to my table mentioned above. ApplicationName. My web or wcf service for this approach has the same method call (GetSetting) only at this scope I pass ApplicationName and SettingName and it returns SettingValue or an empty string.
Doing either of these things allows you to centralize all app settings for any size application or IT shop. It has worked really well for us.
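A minimal sketch of the GetSetting lookup described above (Go is used for illustration; in practice the store would be the application-settings table rather than an in-memory map):

```go
package main

import "fmt"

// SettingsStore stands in for the AppSettings database: keyed by
// ApplicationName, then SettingName, yielding SettingValue.
type SettingsStore map[string]map[string]string

// GetSetting returns the value for an application's setting, or an
// empty string if the application or setting is not found.
func (s SettingsStore) GetSetting(app, name string) string {
	if settings, ok := s[app]; ok {
		return settings[name]
	}
	return ""
}

func main() {
	store := SettingsStore{
		"Billing": {"DbLogin": "svc_billing"},
	}
	fmt.Println(store.GetSetting("Billing", "DbLogin")) // svc_billing
	fmt.Println(store.GetSetting("Billing", "Missing")) // empty string
}
```

The empty-string fallback matches the behavior described in the answer, so callers never have to distinguish "missing application" from "missing setting".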
You could use RSS together with BitTorrent to distribute changes. See Wikipedia. It is not MS-specific, but it should provide the flexibility you need: a configuration server holding the configuration and providing the feeds needed to configure the clients and possibly the servers.
Any VCS through a secure channel?
For example, git through ssh (both available in cygwin).
I think the first step is to have the secure channel (if you want the push ability, pulling might be different).
As for managing the "versions" in different "branches", what's better than a version control system?
As for the Microsoft requirement, the Microsoft software that exists in that area would be a poor fit for your case (as in, not the best tool for the job).