I contribute to a cluster of C++ and C# servers that publish data statistics, connection status, and management commands for use by a management client. The current implementation uses custom middleware.
Code for both the servers and the client can be changed. I am considering migration to some standard management solution to simplify the code and improve stability. The potential to use 3rd party tools would also be a plus.
What technology should I use for the management interface ... WMI? It seems to be the default, but I don't see a lot of current books or articles. Or should I expose some common web service? Or?
I would say that the answer really depends on the scope of your project.
If you target a Microsoft Windows-only client-server platform, you can plan to instrument your server code and build a WMI provider. WMI comes from a standard (WBEM), but it is for Microsoft platforms only. Using WMI, however, you keep the ability to use open-source management tools like Nagios, and on the client machine your server state stays queryable from PowerShell or VBScript.
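To give a feel for the client side, here is a minimal C# sketch that queries such a provider with System.Management. The class name YourCompany_ServerStatus and its properties are hypothetical stand-ins for whatever your provider would actually register:

    using System;
    using System.Management; // reference/NuGet package: System.Management

    // Minimal sketch: a management client reading state from a custom WMI provider.
    // "YourCompany_ServerStatus" and its properties are hypothetical placeholders.
    class WmiQuery
    {
        static void Main()
        {
            var searcher = new ManagementObjectSearcher(
                @"root\cimv2", // or the custom namespace your provider registers
                "SELECT * FROM YourCompany_ServerStatus");

            foreach (ManagementObject obj in searcher.Get())
            {
                // Property names must match your provider's schema.
                Console.WriteLine("{0}: {1} active connections",
                    obj["ServerName"], obj["ActiveConnections"]);
            }
        }
    }

The same query works unchanged from PowerShell: Get-WmiObject -Query "SELECT * FROM YourCompany_ServerStatus".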
If you target a mixed Windows/Linux client-server platform, I think SNMP (yes, this old stuff) is still in the race: you can plan to instrument your server code and build an SNMP agent. This is not so hard on a Windows box, and this solution opens up a wide range of client management tools on any platform.
I would use a web service only if the scope is private use, where you develop the management client for your own server tools; as far as I know, web services are not that standardized where management is concerned.
A web service interface is how I'd do it. It really decouples the server from the client and gives you the ability to use many different types of applications on the client side to communicate with the server. Utilizing WMI within the services would also be part of going this route. WMI is a bit confusing at first, but it offers the greatest amount of flexibility, and there are several libraries that can abstract your code from the nitty-gritty of WMI so you can treat it as a control layer.
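As a rough sketch of that route, the management endpoint can be as simple as a self-hosted HTTP listener returning JSON. The /stats path and the fields below are placeholder assumptions, not a prescribed schema:

    using System;
    using System.Net;
    using System.Text;

    // Minimal sketch: exposing server statistics over HTTP so any client
    // (dashboard, script, third-party tool) can poll them.
    // The /stats path and the JSON fields are placeholder assumptions.
    class StatsService
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:9000/stats/");
            listener.Start();

            while (true)
            {
                HttpListenerContext ctx = listener.GetContext();
                string json = "{\"connections\": 42, \"status\": \"ok\"}"; // stub values
                byte[] bytes = Encoding.UTF8.GetBytes(json);
                ctx.Response.ContentType = "application/json";
                ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
                ctx.Response.Close();
            }
        }
    }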
Many vendors, such as Microsoft with SharePoint and Dynamics, have made it impossible to access database tables directly in newer versions as they convert their products to Software as a Service (SaaS) offerings.
I am working with PTC Windchill and have developed extensive ETL processing at the Oracle SQL layer. Is this a future-proof practice within the context of this product line, or will I eventually be required to work through some sort of DAL? If so, is there a recommended practice?
The information available about Windchill for the cloud appears vague, mostly suggesting to me virtualization at the infrastructure layer, which implies I would be able to query at the database layer for many years to come. Any confirmation, pointers, or feedback would be appreciated.
Windchill offers extensive APIs for data access (and customization) in Java. Starting from version 11.0 there are also some SOAP and REST web services for data access, though not for everything. It is always better to use the APIs: they provide a data abstraction layer in a supported way. PTC would recommend that you engage a consultant for this job.
But if you want to try:
There is extensive documentation on Windchill customization, and you can also create your own web services in Java to access the data you want if the standard web services do not suffice. A starting point can be the Windchill help and the Javadoc located on the Windchill server in this folder:
WINDCHILL_HOME/codebase/wt/clients/library/api/index.html
There are also some examples under:
WINDCHILL_HOME/prog_examples
More documentation and appropriate training are available at https://support.ptc.com, for registered customer users only.
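If you do try the web-service route, the client side is simple from any language. Below is a minimal C# sketch; note that the OData-style path is a hypothetical placeholder, since the routes that Windchill REST Services actually expose depend on your version and configuration:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    // Minimal sketch: calling a Windchill web service from a client.
    // The endpoint path below is hypothetical; check your server's help
    // and Javadoc for the routes your Windchill version actually exposes.
    class WindchillClient
    {
        static async Task Main()
        {
            using var client = new HttpClient { BaseAddress = new Uri("https://windchill.example.com/") };
            var credentials = Convert.ToBase64String(
                System.Text.Encoding.ASCII.GetBytes("user:password")); // replace with real credentials
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // Hypothetical OData-style resource path:
            var response = await client.GetAsync("Windchill/servlet/odata/ProdMgmt/Parts");
            response.EnsureSuccessStatusCode();
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }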
In .NET Core, there are two built-in servers: Kestrel and HTTP.sys.
I would like to know the differences between these two servers and when to use which, in terms of performance, reliability, microservice friendliness, etc.
See Kestrel vs. HTTP.sys from the official Microsoft docs.
The main differences are that HTTP.sys is Windows-only, while Kestrel can run on Linux as well. That also means HTTP.sys works with Windows Authentication out of the box with a few settings, whereas Kestrel needs a lot more setup for it. Performance-wise they are similar, with HTTP.sys being a bit faster since it is optimized for Windows. HTTP.sys is also the foundation of IIS.
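For illustration, here is a minimal sketch of choosing one server or the other at startup, assuming .NET 6+ minimal hosting; the Windows Authentication settings shown for HTTP.sys are one example of its out-of-the-box integration:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Server.HttpSys; // NuGet: Microsoft.AspNetCore.Server.HttpSys
    using Microsoft.Extensions.Hosting;

    // Minimal sketch: selecting Kestrel or HTTP.sys when building the host.
    bool useHttpSys = OperatingSystem.IsWindows(); // HTTP.sys is Windows-only

    var builder = WebApplication.CreateBuilder(args);

    if (useHttpSys)
    {
        builder.WebHost.UseHttpSys(options =>
        {
            // Windows Authentication with a few settings:
            options.Authentication.Schemes =
                AuthenticationSchemes.NTLM | AuthenticationSchemes.Negotiate;
            options.Authentication.AllowAnonymous = true;
            options.UrlPrefixes.Add("http://localhost:5000");
        });
    }
    else
    {
        builder.WebHost.ConfigureKestrel(options => options.ListenLocalhost(5000));
    }

    var app = builder.Build();
    app.MapGet("/", () => "Hello from " + (useHttpSys ? "HTTP.sys" : "Kestrel"));
    app.Run();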
Reliability depends not only on the server but on the infrastructure it runs on. E.g. if you put both in Docker with Kubernetes, they will be reliable and scalable, since the containers take care of that part.
I have microservices running on both; both are microservice-friendly, and I use them for different purposes and environments depending on the service in question.
I should also mention that I put a reverse proxy in front of public-facing services anyway, so I am not familiar with how the two behave in that role. That said, Microsoft recommends HTTP.sys for internet-facing services, since it is more resilient to attacks out of the box; but since my services sit behind a reverse proxy that handles those requests, I cannot verify that claim.
Hope this helps a bit.
We are in the initial stages of designing microservices for my client, starting from their standard monolithic app that sits on 4 JBoss servers in their own data center. Is microservice architecture targeted only at cloud-based deployment? Can I deploy a microservice on-premises on production-ready Tomcat/JBoss? Is that a good fit?
Sure you can.
Microservice architecture is the concept of having many small interacting components, each performing a well-defined part of the work, and doing it well.
It's an extension of the Linux way and the concept of decoupling components.
In your case you can split your service into several smaller services, each with its own development and deployment cycle and a well-defined API.
"Is microservice architecture targeted only at cloud-based deployment?"
No, it is an architecture for application development. The basic idea of microservices is to separate a complex application's functionality into small functions, to reduce complexity and achieve high performance.
There are a few things you need to consider before moving to microservices:
1. Scale of your application
If your application contains a high number of complex functions, it is better to go with microservices: separate them and deploy them separately. Then changes and maintenance become easy.
2. Performance of the application
If some application functions need high computing power, you can allocate separate hardware resources to them, provided you implement them as microservices.
3. Deployment and maintenance
If you use microservices, you can deploy and maintain each service separately without affecting the other services.
4. Data migration
If your database contains highly related tables, it will be a little difficult to split it into per-function databases (each microservice needs its own DB). So, as a first step, keep the DB monolithic and separate the functions into services; then start to refactor the DB.
5. Calling each service
Keep the front-end application clean and logic-free: wrap your microservices with an API gateway and publish all the services as one service.
6. Application security
Since each and every service runs separately, there is no session tracking; use JWT (OAuth2) API security (see the JWT sketch below).
7. Multiple services and transactions
If you need to handle one business function with more than one service, you need to check that each and every service's work completes correctly (e.g. DB operations, rollbacks), so you need to develop a transaction handler.
Implementing microservices
There is no specific technology stack for it, but there are plenty of freely available technologies, e.g.:
Java Spring Boot for the microservices (with the built-in Tomcat server)
Zuul and Eureka for the API gateway
OAuth2 and JWT for security
Note: there is no fixed way to implement microservices; use the right technology stack for performance, implement small business functions, and it doesn't matter whether you host on cloud or local servers.
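Regarding point 6, here is a hedged sketch of validating a JWT at a service boundary. It is shown in C# with the System.IdentityModel.Tokens.Jwt package for illustration (the issuer, audience, and signing key are placeholder assumptions); Spring Security provides the equivalent on the Java side:

    using System;
    using System.IdentityModel.Tokens.Jwt;  // NuGet: System.IdentityModel.Tokens.Jwt
    using System.Security.Claims;
    using System.Text;
    using Microsoft.IdentityModel.Tokens;

    // Minimal sketch: validating a JWT presented to a microservice.
    // Issuer, audience, and signing key are placeholder assumptions.
    static class JwtCheck
    {
        public static ClaimsPrincipal Validate(string token)
        {
            var parameters = new TokenValidationParameters
            {
                ValidIssuer = "https://auth.example.com",  // hypothetical issuer
                ValidAudience = "orders-service",          // hypothetical audience
                IssuerSigningKey = new SymmetricSecurityKey(
                    Encoding.UTF8.GetBytes("replace-with-a-real-256-bit-secret!")), // HS256 needs >= 32 bytes
            };

            try
            {
                // Throws if the signature, issuer, audience, or lifetime is invalid.
                return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out _);
            }
            catch (SecurityTokenException)
            {
                return null; // reject the request
            }
        }
    }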
There are definitely no limitations on whether you deploy your microservices on local, physical servers or in the cloud. Both approaches are valid, but they come with different advantages and disadvantages.
With local/physical servers, you will have:
bigger operations overhead (it is better to have good DevOps on your team)
manual scaling (when you experience bigger traffic, you need to manually fire up new instances, or use some management tool for this)
manual fault detection: if a server goes down (this depends on your company's server environment), someone will need to fix it "manually"
lower cost (a friend buys old server instances on Amazon and runs their semi-microservice architecture on them; he calculated they achieve quite big savings this way)
With cloud infrastructure, you get some of the advantages below (in contrast to the disadvantages above):
less operations overhead (the cloud takes care of most of the operations)
flexible scaling (when your traffic goes up, the cloud can automatically fire up new instances; when it goes down, it will shut down instances)
error/fault handling: if a problem occurs in the cloud, you do not need to worry about it
I have not mentioned all the advantages and disadvantages of the two approaches, as it also depends on the project (will it receive different traffic at different times of day, does it need to keep data locally or can the data sit in a cloud in a foreign country, ...).
So far I've read some blog articles about cloud computing and services for hosting applications in the grid.
If I wanted to have a web application running in the cloud for as little cost as possible, what would be the best solution?
Let's assume the following configuration:
J2EE web application
Any free database (MySQL, PostgreSQL)
Any web container to deploy the web application to
What application stack would you suggest to be the best combination of services to
host
deploy
run
web applications?
As an additional requirement, the chosen services shouldn't require much server management, such as firewall settings etc.
This space is changing very quickly right now, so I think you will find a lot of different good answers. If I were to do something on the cheap right now, I would probably pick the following stack:
Web server: Apache
App server: Tomcat. Use the clustering support if you need to grow, split at the Apache level, or even introduce a load-balancer box at the very front.
DB server: MySQL, mainly because it is easy to cluster.
Platform: Scalr. The cloud setup is simple and cheap. It uses Amazon's cloud on the back end, which gets you a lot of extras, like putting servers in different data centers for redundancy.
Now you can add or remove parts of this. You may not need a web tier and can expose Tomcat directly. You may need EJBs, in which case you can fire up more nodes for that and create another tier. You may want to add a load-balancing tier in front of Apache. You may want to use Amazon's CloudFront service to push static files to their edge network.
I have investigated Amazon's EC2 solution recently. It is quite good, and there are many pre-built boxes you can use if you find one that suits your needs. I think there will still be some server management involved; you cannot get away from that. But the pre-built boxes make it easier.
The cost is reasonable as you only pay for what you use.
[EDIT] The pre-built boxes are called Amazon Machine Images (AMIs).
I think you can get nowhere closer than Jelastic. It has all the stuff that @carson mentioned. I would especially mention their unique web console, and that they have no dependency on any API or console to be installed. I use their platform for many of my startup's clients. Additionally, you get nginx support for load balancing, configurable right from the console.
This is slightly off the topic of programming but still has to do with my programming project. I'm writing an app that uses a custom proxy server. I would like to write the server in C# since it would be easier to write and maintain, but I am concerned about the licensing cost of Windows Server + CALs vs. a Linux server (obviously, no CALs). There could potentially be many client sites, each with their own server and 200-500 users per site.
The proxy will work similarly to a content filter: take returning web pages, process them based on their content, and either return the web page or redirect to a page on another web server. There will be no use of SQL Server, user authentication, etc.
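For context, here is a minimal sketch of the kind of filtering proxy I mean; the blocked-word rule and redirect target are placeholders, and a real proxy would also handle headers, HTTPS/CONNECT, and streaming:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Minimal sketch: fetch the upstream page, inspect its content, and
    // either relay it or redirect. The content rule and redirect target
    // below are placeholder assumptions.
    class FilteringProxy
    {
        static readonly HttpClient Upstream = new HttpClient();

        static async Task Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();

            while (true)
            {
                HttpListenerContext ctx = await listener.GetContextAsync();
                // Simplified: target URL passed as ?url=...
                string target = ctx.Request.QueryString["url"];
                string body = target == null ? "" : await Upstream.GetStringAsync(target);

                if (body.Contains("blocked-word")) // placeholder content rule
                {
                    ctx.Response.Redirect("http://filter.example.com/blocked");
                }
                else
                {
                    byte[] bytes = System.Text.Encoding.UTF8.GetBytes(body);
                    await ctx.Response.OutputStream.WriteAsync(bytes, 0, bytes.Length);
                }
                ctx.Response.Close();
            }
        }
    }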
Will I need CALs for this? If so, about how much would it cost to set up a Windows Server with proper licensing (per server, in the USA)?
This really is an off-topic question. In any case, there is nothing easier than contacting your local MS distributor. As Stack Overflow is by nature an international site, a question like this, where the answer is most likely to vary by location (MS license prices really are highly variable and country-specific), is in my opinion unlikely to receive a useful answer.
I realize this isn't exactly answering your question, but if you want to use Linux, you may want to look into using Mono: .NET on Linux.
If users will not be actually connecting to any MS server apps (such as Exchange, SQL Server, etc) and won't be using any OS features directly (i.e. connecting to UNC paths) then all that should be required is the server license for the machine to run the OS. You need Windows Server CALs when clients connect to shares, Exchange CALs for mail clients, and SQL Server CALs for apps that connect to your databases. If the clients of your server won't be connecting to anything but the ports offered by your service, you should be in the clear, and it shouldn't cost any more to build a server for 100 users than 10.
You may not need any CALs for users depending on how you use the server. Certain functionality requires the purchase of CALs but some doesn't. There's no real good way to answer this question since the requirements are too vague. Does it use domain services? Does it use SQL server? Clustering? There are many variables.
If you are looking for the most you could possibly pay, go to CDW and look at the Open License/Open Business products to get an estimate.
As said above, if clients are only using your own service's connections and nothing else on the server, you won't need CALs.
I would Google the ROI on Linux vs. Windows for a commercial server. I have no opinion on this in general, but I have seen that long-term they level out; in the grand scheme of things, the initial cost of the Windows license is actually minimal and insignificant.
Choose the best technology to solve the end user's problem, document why, and provide an evaluation report that includes maintenance costs, development costs, etc. When you do this, the answer will be clear to you and your customer.
If your users are not connecting to any other Windows resources (Active Directory, SQL Server, file shares, etc.), then you shouldn't need CALs, but I believe there is something like an external connector license. There's also a 'Web Edition', which looks like it's in the range of ~$400.
It also looks like Microsoft will be removing the CAL restrictions on web servers completely in Windows Server 2008.
Microsoft should call their licensing division Enigma...