Starting out, any suggestions? - vbscript

I started working in C# a few months ago and I am looking for something more challenging and interesting. I use a media player called MediaMonkey that supports custom VB scripts; I made one that writes the currently playing song to a file in a directory, overwriting what was there each time a new song starts.
Now I want to add this information to a database, keep a record of it, and possibly display the information on my home page. I know I could hack together a way to make it work, but I want to know what the "professional way" of doing things would be.
I came up with the following and got stuck: I would need an ODBC driver to connect to a database, which seems messy. Would a web service work? How would that work? Can a VBScript call a DLL file to call upon a web service to modify data on a separate server? Is that safe to do?

Many professional C# apps are n-tier. In your case, you would probably layer it like this:
On the server:
-Database Store
-Database Access/Business layer (sometimes two distinct components, depending on how complex the app is)
-Web Service
On the client:
-Web Service Client
-Any other layers to support client functionality.
So the Database Store would be something like some tables in an Oracle or Microsoft SQL Server database, and it would live on your server.
The Database Access/Business layer would be your code that retrieves data from and stores data to your database. It might also contain business objects, which are basically classes with properties representing the data from your database. The benefit of the data access layer is that reading and writing to a database can require specialized code, and you don't want that code sprinkled throughout your application. Instead you call functions in your data access layer that load the needed data into objects, so the rest of your application is just interacting with a regular old .NET object/class. These are called POCOs, which stands for Plain Old CLR Object. There are lots of variations on this, of course, as people have taken different approaches to the problem of isolating database access. It also serves the purpose of minimizing breaking changes whenever the database changes: since the database access logic is not sprinkled throughout the app, there are fewer places that need to be updated if the database changes (such as adding new columns to a table or changing a name).
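For a rough idea of what that might look like in C# (the table, class, and property names here are just placeholders for your situation):

    using System;
    using System.Data.SqlClient;

    // A POCO: a plain class whose properties mirror a row in a hypothetical SongPlay table.
    public class SongPlay
    {
        public int Id { get; set; }
        public string Artist { get; set; }
        public string Title { get; set; }
        public DateTime PlayedAt { get; set; }
    }

    // The data access layer: all of the SQL lives here, so the rest of the
    // application only ever deals with SongPlay objects.
    public class SongPlayRepository
    {
        private readonly string _connectionString;

        public SongPlayRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Add(SongPlay play)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "INSERT INTO SongPlay (Artist, Title, PlayedAt) VALUES (@artist, @title, @playedAt)",
                connection))
            {
                command.Parameters.AddWithValue("@artist", play.Artist);
                command.Parameters.AddWithValue("@title", play.Title);
                command.Parameters.AddWithValue("@playedAt", play.PlayedAt);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }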
Sometimes the business layer will be its own layer, containing most of the "logic" of the application, and it would sit between the data access and web service layers. Using concepts from Service Oriented Architecture (SOA), you might have an authentication service and a web request handling service. These services are a lot like classes that are always instantiated, waiting to process requests. Your web request handling service would take a request, and maybe first call into the authentication service to verify credentials before honoring the request. SOA is one of those things I think should be used only when appropriate. In some cases just using object-oriented techniques will give you the same benefits, but not always. SOA, when done right, is more scalable, so it really depends on whether SOA offers additional benefits that you need.
The Webservice would be responsible for receiving requests from the web, parsing/interpreting them, and acting on those requests by making calls into your business layer to update or retrieve data.
So the concept here would be that you could have many users of your service who publish their song updates through your service.
Your client would have a "web service client" layer which would be responsible for formatting requests into messages, sending them to the web service, and retrieving messages from the web service. You would put very little application "logic" in this layer.
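As a rough illustration (the endpoint and class names are hypothetical), that client layer can be very thin:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    // Hypothetical web service client layer: its only job is to turn a
    // "song changed" event into a message and hand it to the server.
    public class NowPlayingServiceClient
    {
        private readonly HttpClient _http = new HttpClient();
        private readonly Uri _endpoint;

        public NowPlayingServiceClient(Uri endpoint)
        {
            _endpoint = endpoint;
        }

        public async Task PublishAsync(string artist, string title)
        {
            // Format the request as a message and send it; no business logic here.
            var json = JsonSerializer.Serialize(new { artist, title });
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var response = await _http.PostAsync(_endpoint, content);
            response.EnsureSuccessStatusCode();
        }
    }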
Now all this is probably overkill and inefficient for what you are wanting to do, since you just want something for yourself, but it's the basic anatomy of a lot of web service applications and would be a good learning exercise. The whole purpose of the layers is decoupling and simplicity. While more layers/components make the application more complex overall, each individual component is simpler. This means it's easier to wrap your head around problems when you are only dealing with one component which interacts with only a couple of other components (the surrounding layers). So there is a careful balance between few components and many components. Too few and they become monolithic and difficult to manage. Too many, and they become intertwined in complex ways.
I have heard it said something along the lines of "If a class is getting too big and too complex, then split it up into a few more classes". In essence, don't start subdividing stuff for the heck of it just because it sounds like the right thing to do. Evaluate how complex your component is going to be before deciding if you want to split it up. Sometimes, for simple cases, you have one layer serving more than one purpose, for the sake of getting it done faster and making the overall design simpler. The point is, apply these concepts where appropriate. You will learn what is appropriate with experience, and you obviously understand that you can learn the most by "doing".
"Can vbscript call a COM component?" You can compile .NET DLLs with COM support. Many older things can call COM dlls.
I googled: vbscript dll
and got this: VB Script and DLLs
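For instance, the .NET side could be a small COM-visible class (the ProgId and method are made up here); once the DLL is registered with regasm, VBScript can create it with CreateObject:

    using System.Runtime.InteropServices;

    // A .NET class exposed to COM. After building, register the assembly with
    // something like "regasm MediaLog.dll /codebase" so scripts can create it by ProgId.
    [ComVisible(true)]
    [ProgId("MediaLog.SongPublisher")]                 // hypothetical ProgId
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class SongPublisher
    {
        // VBScript: Set pub = CreateObject("MediaLog.SongPublisher") : pub.Publish "Artist", "Title"
        public void Publish(string artist, string title)
        {
            // From here you could call your web service client / data access code.
            System.Console.WriteLine("Now playing: {0} - {1}", artist, title);
        }
    }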
"Is that safe to do?" Your webservice will be where you would be most concerned with security. It's safe only if you design with security in mind and don't screw up. We all screw up sometimes though, which means there is no guarantee of it being perfectly secure.

Related

Mocking 3rd party integrations outside of the context of a test

In a lot of the apps I work on, we have this problem where we rely heavily on 1st- and 3rd-party APIs. So much so that, in some of our apps, it is useless to try to log in without those APIs in place: either critical pieces of data are not there, or the entire app is like a server-side-rendered SPA that houses no data of its own but pulls it from an API at the time of a request (we cache it when we can).
This raises a huge problem when trying to develop the app locally, since we do not have a sandbox environment. Our current solution is to create a service layer between our business logic and the actual HTTP calls. Then, in our local environments, we swap out the HTTP implementation for a class that just returns fake data (a rough sketch of that swap is shown after this list). This works pretty well most of the time, except for a couple of issues:
This only really gives us one state of the application at a time. Unlike data in the database, we are not able to easily run different seeders to replicate different scenarios.
If we run into a bug in production, we have no way of replicating the API response without actually diving into the code and adding some conditional to return that specific response. With data that is stored in the database, it is easy to log in to TablePlus and manually set up some condition, or even pull down a select table from production.
In our mocks, our functions can get quite large and nasty if we try to have them dynamically respond with a different response based on, for example, the resource id being requested.
This makes the overhead of creating each test for each scenario quite high, in my opinion. If we could use something similar to a database factory to generate a bunch of different request-response pairs, we could test a lot more cases, and we could dynamically set up certain scenarios when trying to replicate bugs we run into in production.
Since our applications are built with Laravel and PHP, the mocks, unlike the database, don't persist from one request to another. We cannot simply throw open a tinker session and start seeding our API integrations with data like we can with the database.
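Roughly, the swap we do today looks like this (sketched in C# just to show the shape, since the pattern is the same regardless of language; the interface and class names are invented):

    using System.Collections.Generic;

    // The service layer depends on this abstraction instead of on raw HTTP calls.
    public interface IThirdPartyApi
    {
        string GetResource(string resourceId);
    }

    // Local/dev implementation: returns canned data keyed by resource id.
    // The pain points above come from having to hand-maintain these pairs per scenario.
    public class FakeThirdPartyApi : IThirdPartyApi
    {
        private readonly Dictionary<string, string> _responses;

        public FakeThirdPartyApi(Dictionary<string, string> seededResponses)
        {
            _responses = seededResponses;
        }

        public string GetResource(string resourceId)
        {
            return _responses.TryGetValue(resourceId, out var body)
                ? body
                : "{\"error\":\"not found\"}";
        }
    }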
I was trying to think of a way to do it with a cache and a set of request-response pairs. This could also be moved to the database, but I would prefer not to have an extra table there that is only used locally.
Any ideas?

ViewModel Class Design - Should it be on Server Side or Client Side?

In my application, I have a View and it needs data from multiple server side data models.
We have two options.
Call a single WebService once, get a ViewModel class object from the server side, and bind it to the View.
Call multiple WebServices, get different server-side Model classes, create a new ViewModel class on the client side, and bind it to the View.
What's the best approach out of these two options? Please advise.
If in doubt, consider your UX.
One of the most frustrating things from a user's perspective is to be waiting for your app to respond after they've pressed something.
Every time your app issues a request to your server, users will experience a delay - each additional request increases the length of that delay. Most typical users have a very low tolerance for that sort of thing before they get irritated and decry your app as being "slow".
In the interest of minimising the time required for content to load, keep the number of calls between your client and your server API to an absolute minimum; in general, the fewer calls, the better. This leans heavily toward the 'single request, single ViewModel' approach.
Also be mindful of the size of your ViewModel payload; don't just return a huge data-dump to your user when the majority of it will never be seen or used - not only does this waste bandwidth and make things slower, but it also implies the client is going to be doing additional unnecessary work.
This has benefits on your server too; with your server needing to fulfil fewer requests, you will have less work to do later on when scaling up your app to cope with more users.
Lastly, consider the difference between a simple, lightweight "dumb" client which is only responsible for presentation and user interactivity, versus a heavyweight client application.
By making your Server responsible for generating a ViewModel and doing all the hard work, you can avoid business logic on your client; therefore maintaining clean separation between your Business and Application Layers.
On the other hand, if you require multiple server API calls, then it's likely you'll need a lot more complexity on your client to build your View Model, which risks blurring the line between your Application and your Business Layer.
If you eventually build multiple different client Applications which call the same server, you may find yourself needing to re-use that business logic between those applications; it's easier to re-use business logic which already exists on the server - especially if your client applications use different technologies (e.g. a Web Client and a Mobile App).
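To make the single-ViewModel option concrete, a server-side composition might look roughly like this (the repositories, types, and names are invented for illustration):

    using System.Collections.Generic;
    using System.Threading.Tasks;

    public class Customer { public string Name; public decimal Balance; }

    public interface ICustomerRepository { Task<Customer> FindAsync(int id); }
    public interface IOrderRepository { Task<List<int>> GetOpenOrderIdsAsync(int customerId); }

    // A ViewModel shaped for one specific view, assembled on the server.
    public class OrderPageViewModel
    {
        public string CustomerName { get; set; }
        public decimal OutstandingBalance { get; set; }
        public int OpenOrderCount { get; set; }
    }

    // One endpoint, one round trip: the server pulls from several models
    // and returns only what the view actually needs.
    public class OrderPageService
    {
        private readonly ICustomerRepository _customers;
        private readonly IOrderRepository _orders;

        public OrderPageService(ICustomerRepository customers, IOrderRepository orders)
        {
            _customers = customers;
            _orders = orders;
        }

        public async Task<OrderPageViewModel> GetOrderPageAsync(int customerId)
        {
            var customer = await _customers.FindAsync(customerId);
            var orderIds = await _orders.GetOpenOrderIdsAsync(customerId);

            return new OrderPageViewModel
            {
                CustomerName = customer.Name,
                OutstandingBalance = customer.Balance,
                OpenOrderCount = orderIds.Count
            };
        }
    }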

Share model between client and server

Due to our domain model and process we are looking at sharing the model between the client and server. Our client is a really thick client.
Is there any information about such architecture, pros and cons?
Theoretically if your Domain layer is totally decoupled (from persistence, presentation, infrastructure, etc.) it can easily be reused as a library in different places.
However, as Adrian points out, this raises practical issues:
Security: distributing your domain, especially in client applications, can be risky. One way around that is to obfuscate your binaries if the client is a desktop app.
Platform mismatch: you might not be able to use the same technology/language on the client and server. This will result in a translation of your domain, basically doubling the initial amount of work, the maintenance cost, and the bug proneness.
Versioning: even if the same library is reused, its versions on the client and server will probably have to be kept in sync to prevent incompatibilities.
Besides, except if you're developing a web version that is an exact clone of a desktop version, I feel that the domain reuse will be partial at best. In the case of a single client/server application, I'm curious to know why you would use the same domain on both tiers... usually what you have on the client side is data structures that might look a bit like the domain entities, but tweaked for the UI, and with no behavior. In that case, reusing the whole domain layer on the client side would mean dragging along a bulky object graph that maybe partly does what you need but also a ton of other unneeded stuff.
Maybe what you need instead is the concept of Bounded Context from Domain Driven Design - same class names but slightly different classes in Client context and Server context.
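A tiny sketch of that idea (the names are made up): the same concept shows up in both contexts, but the two classes are deliberately not the same class.

    // Server-side bounded context: a rich entity with behaviour and invariants.
    namespace Server.Ordering
    {
        public class Order
        {
            public int Id { get; private set; }
            public decimal Total { get; private set; }

            public void AddLine(decimal price, int quantity)
            {
                if (quantity <= 0) throw new System.ArgumentException("quantity must be positive");
                Total += price * quantity;
            }
        }
    }

    // Client-side bounded context: same name, but just a flat shape tweaked for the UI.
    namespace Client.Ordering
    {
        public class Order
        {
            public int Id { get; set; }
            public string TotalDisplay { get; set; }   // pre-formatted for data binding
        }
    }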
DDD and modern development practices encourage keeping domain logic out of the client. Most of the code happening in the client-side these days is there to leverage the GUI goodness of the client platform.
Two good reasons for keeping domain logic out of the client are security and maintainability.
For security, the server should regulate what the client can do. The client can be hacked to bits, but if all the domain logic and security are in the server, then no amount of hacking (on the client) can circumvent or damage the system.
For maintainability, if all your domain logic is on the server, then it's in one place. If it's in one place (or better still in a clearly defined module or namespace) then it's easier for anyone on the team to maintain the code.

Should I make my CouchDB database server public-facing?

I'm new to CouchDB and am trying to understand how to make proper use of it. I'm coming from MongoDB, where I would always write a web layer and put it in front of Mongo so that I could allow users to access the data inside it; in fact, this is how I've used every database for every web site I've ever written. Looking at Couch, I see that its native API is HTTP and that it has built-in things like OAuth support and other features that hint to me that perhaps I should no longer have my code layer sitting in front of Couch, but instead write views and just give my users accounts on Couch directly. I'm thinking in terms of an HTTP-based API for a site of mine, or something through which users would consume my data. Opening up Couch like this seems odd to me, though. Is OAuth, in Couch's sense, meant more for remote access by software that I'd write and run "officially" inside my own network, or is it literally meant for end users?
I know there might be things that could only be done through a code layer on top of CouchDB, for example if you wanted additional non-database-related things to occur during API requests. Thinking along those lines, I suspect I will still need a code layer anyway.
Dealer's choice.
Nodejitsu has a great writeup on this sort of topic here.
Not knowing your application specifics I'll take a broad approach...
Back-end
If you want to prevent users from ever seeing your database then make it back-end. You can pipe everything through something like node.js and present only what the user needs to see and they'll never know anything about the database.
See Resource View Presenter
Front-end
If you are not concerned about data security, you can host an entire app on CouchDB; see CouchApp. This approach has the benefit of using the replication mechanism to control publishing your site/data. The drawback here is that you will almost certainly run into some technical limitations that will require moving CouchDB closer to the backend.
Bl-end
Have the app server present the interface and the client pull the data from the database separately. This gives the most flexibility but can be a bag of hurt because even with good design this could lead to supportability and scalability issues.
My recommendation
Use CouchDB on the backend. If you need mobile clients to synchronize then use a secondary DB publicly exposed for this purpose and selectively sync this data to wherever it needs to go.
Simply put, no.
There's no way to secure Couch properly on a public-facing site. There's no way to discriminate access at a fine-grained enough level: if someone has access to any of the data, they have access to all of the data.
Not all data on a site is meant for public consumption, save for the most trivial of sites.

MVCS - Model View Controller Service

I've been using MVC for a long time and heard about the "Service" layer (for example in Java web project) and I've been wondering if that is a real architectural pattern given I can't find a lot of information about it.
The idea of MVCS is to have a Service layer between the controller and the model, to encapsulate all the business logic that could be in the controller. That way, the controllers are just there to forward and control the execution. And you can call a Service in many controllers (for example, a website and a webservice), without duplicating code.
The service layer can be interpreted a lot of ways, but it's usually where you have your core business processing logic, and sits below your MVC architecture, but above your data access architecture.
For example, the layers of a complete system may look like this:
View Layer: Your MVC framework & code of choice
Service Layer: Your Controller will call this layer's objects to get or update Models, or other requests.
Data Access Objects: These are abstractions that your service layer will call to get/update the data it needs. This layer will generally either call a Database or some other system (eg: LDAP server, web service, or NoSql-type DB)
The service layer would then be responsible for:
Retrieving and creating your 'Model' from various data sources (or data access objects).
Updating values across various repositories/resources.
Performing application-specific logic and manipulations, etc.
The Model you use in your MVC may or may not come from your services. You may want to take the results your service gives you and manipulate them into a model that's more specific to your medium (eg: a web page).
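A small sketch of that flow (C#-flavoured; the names and the "business rule" are invented):

    using System.Collections.Generic;

    // Data access object: the only layer that knows where the data lives.
    public interface IProductDao
    {
        List<string> GetProductNames(string category);
    }

    // Service layer: core application logic, reusable from any controller
    // (a web page controller and a web service could both call it).
    public class ProductService
    {
        private readonly IProductDao _dao;

        public ProductService(IProductDao dao)
        {
            _dao = dao;
        }

        public List<string> GetFeaturedProducts(string category)
        {
            var names = _dao.GetProductNames(category);
            names.Sort();   // stand-in for real application-specific logic
            return names;
        }
    }

    // Controller: stays thin; it forwards to the service and shapes the result for the view.
    public class ProductController
    {
        private readonly ProductService _service;

        public ProductController(ProductService service)
        {
            _service = service;
        }

        public object Index()
        {
            return new { Products = _service.GetFeaturedProducts("featured") };
        }
    }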
I had been thinking of this pattern myself without seeing any reference to it anywhere else, and I searched Google and found your question here :)
Even today there is not much of anybody talking about or posting about the View-Controller Service pattern.
Thought I'd let you know others are thinking the same thing, and the image above shows how I think it should be structured.
I am currently using it in a project I am working on.
I have it organized in modules, with each layer in the image above in its own self-contained module.
The Service layer is the "connector", the "middleman", the "server-side Controller": what the client-side "Controller" does for the client, the "Service" does for the server.
In other words, the client-side "Controller" only "talks" with the "Service", a.k.a. the server-side Controller.
Controller ---> requests to and receives from <--- Service layer
The Service layer fetches information from, or gives information to, the server-side layers that need it.
By itself the Service does not do anything but connect the server layers with what they need.
Here is a code sample:
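A minimal sketch of that shape (the class and method names are only illustrative):

    using System.Threading.Tasks;

    // Server side: the "Service" acts as the server-side Controller.
    // It is the single thing the client-side Controller talks to.
    public class AccountService
    {
        public Task<string> GetDisplayName(int userId)
        {
            // In a real module this would fetch from whichever server-side
            // layer needs to be involved and hand the result back.
            return Task.FromResult("user-" + userId);
        }
    }

    // Client side: the Controller never touches server layers directly;
    // it only requests from, and receives from, the Service.
    public class ProfileController
    {
        private readonly AccountService _service;

        public ProfileController(AccountService service)
        {
            _service = service;
        }

        public async Task<string> ShowProfile(int userId)
        {
            var name = await _service.GetDisplayName(userId);
            return "Profile for " + name;
        }
    }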
I have been using the MVCS pattern for years and I didn't know anyone else did, as I couldn't find any solid info on the web. I started using it instinctively, if you like, and it's never let me down on Laravel projects. I'd say it's a very maintainable solution for mid-sized projects, especially when working in an agile environment where the business logic changes constantly. Having that separation of concerns is very handy.
That said, I found the service layer to be unnecessary for small projects, prototypes, and whatnot. I've made the mistake of over-complicating the project when making prototypes, and it ultimately just means it takes longer to get your idea out. If you're serious about maintaining the project in the mid term, then MVCS is a perfect solution IMO.
