I've got an XSD file within a web application that's running on my server. I've hotfixed it, but my changes are not being reflected when I use the web application.
I believe it's being cached somewhere; however, I've cleared the caches I can find, restarted IIS on the server, and restarted my application server through the command prompt (stopServer.bat & startServer.bat).
These are the caches I've found and cleared:
ibm\websphere\appserver\profiles\app server profile\temp\node\app server
ibm\websphere\appserver\profiles\app server profile\wstemp
ibm\websphere\appserver\profiles\app server profile\tranlog\cell\node\app server
ibm\websphere\appserver\profiles\app server profile\logs\app server
ibm\websphere\appserver\profiles\app server profile\logs\ffdc
My changes are not being picked up: I've updated the version number within my XSD to 4, yet it always shows 3. I've found every instance of the XSD on the hard drive and they're all up to date. (I know it's bad practice to keep updating old leftover copies, but this is frying my skull.)
Am I missing anything else? Pulling my hair out here!
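One diagnostic that can help in a case like this is to ask the running application's own classloader where it is actually loading the XSD from, instead of searching the disk by hand. Below is a minimal sketch; the resource path and class name are placeholders, and it must be called from inside the affected web app (e.g. from a throwaway servlet), since it is the container's classloader that matters:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Call probe() from inside the affected web app (e.g. a throwaway servlet),
// because it is the container's classloader that decides which copy wins.
public final class XsdProbe {
    public static void probe(String resourcePath) throws Exception {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        URL xsd = cl.getResource(resourcePath); // e.g. "schemas/myschema.xsd" (placeholder)
        System.out.println("XSD resolved from: " + xsd);
        if (xsd == null) return;
        // Print the first few lines so the version attribute is visible.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(xsd.openStream()))) {
            String line;
            for (int i = 0; i < 5 && (line = in.readLine()) != null; i++) {
                System.out.println(line);
            }
        }
    }
}
```

If the printed URL points somewhere unexpected (a cached expanded EAR, an old JAR on the classpath), that is the copy to chase.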
Update: other files had been edited on the server.
I have a Visual Studio load test that runs through the pages on a website, but I have experienced big differences in performance when using a load balancer. If I run the tests going straight to Web Server 1, bypassing the load balancer, I get an average page load time of under 1 second for 100 users, as an example. If I direct the same test at the load balancer with 2 web servers behind it, then I get an average page load time of about 30 seconds - it starts quickly but then deteriorates. This is strange, as I now have 2 load-balanced web servers instead of 1 direct, so I would expect to be able to handle more load. I am testing this now with Azure Application Gateway and Azure VMs. I experienced the same problem previously with an nginx setup; I thought it was due to that setup, but now I'm seeing the same thing on Azure. Any thoughts would be great.
I had to completely disable the firewall to get consistent performance. I also ran into other issues with the firewall: it gave us max-entity-size errors from a security module, and after discussing with Azure Support, this entity size cannot be configured, so keeping the firewall would mean some large pages would no longer function and would get this error. This happened even if all rules were disabled; I spent a lot of time experimenting with different rules on/off. The SQL injection rules didn't seem to like our ASP.NET Web Forms site. I have now simulated 1,000 concurrent users split between two test agents, and performance was good for our site, with an average page load time well under a second.
Here is a list of things that helped me improve the same situation:
Add a non-SSL listener and use that (e.g. HTTP instead of HTTPS). Obviously this is not the advised solution, but maybe it can give you a hint (offload SSL to the backend pool servers? Add more gateway instances?)
Disable WAF rules (slight improvement)
Disable WAF and add more gateway instances (increased from 2 to 4 in my case) - this solved the problem!
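If it helps to reproduce the direct-versus-gateway comparison outside Visual Studio, here is a rough sketch of a concurrent measurement harness; the URL, user count, and request count are all hypothetical placeholders. Run it once against the web server directly and once against the gateway, then compare the averages:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.LongAdder;

// Fires concurrent GET requests at one endpoint and reports the average
// response time. Point it at the web server directly, then at the gateway.
public class LoadProbe {
    public static void main(String[] args) throws Exception {
        String target = args.length > 0 ? args[0] : "https://example.com/"; // placeholder URL
        int users = 100;          // concurrent "users" (placeholder)
        int requestsPerUser = 20; // requests each (placeholder)

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        LongAdder totalMillis = new LongAdder();
        LongAdder completed = new LongAdder();

        List<Future<?>> futures = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            futures.add(pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(target)).GET().build();
                    long start = System.nanoTime();
                    try {
                        client.send(req, HttpResponse.BodyHandlers.discarding());
                        totalMillis.add((System.nanoTime() - start) / 1_000_000);
                        completed.increment();
                    } catch (Exception e) {
                        System.err.println("request failed: " + e.getMessage());
                    }
                }
                return null;
            }));
        }
        for (Future<?> f : futures) f.get(); // wait for all users to finish
        pool.shutdown();
        System.out.printf("%d requests, average %d ms%n",
                completed.sum(), totalMillis.sum() / Math.max(1, completed.sum()));
    }
}
```

A single shared HttpClient is used deliberately so connection reuse behaves roughly like browsers keeping connections alive; if only the gateway run degrades over time, that points at the gateway (or its WAF) rather than the backends.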
How come the response time is very different when calling the same action/page at different times of day? I'm working on an internal server where I'm the only one who uses the application (it doesn't work with an internet connection).
I'm not connected to a network, and there is only one user running the app (me). It's an ASP site with a remote database.
Once again, where are you going to start? You're seriously going to need to look at all aspects of the server that the application is on.
If you have a connected database then you'll need to look at whether:
the database is on a remote server - network issues can interfere quite heavily with your timings here.
the database is on the same server - if this is an instanced database, you will need to take into account the performance impact of the service that is managing your database and all of the related aspects of that (e.g. do you have any kind of agents running background tasks for the database?).
you are running a standalone database like MS Access - this may cause the least disruption in some ways but can be disastrous in others.
What type of web application are you looking at?
A simple scripted, non-managed IIS ASP site - very little to manage via IIS here; no need to section off a pool for the application.
A full-blown IIS-managed application - IIS-managed, passing of cookies, credentials, etc. (all of which take slices of time).
If you are connected to a network, then...
How many users are on the network? - Though any individual machine on the network may have a negligible impact on your application server or PC, there are definitely some that do have an impact, such as DNS servers and what have you; they need to gather network information for the successful management and running of the network as a whole. Your application server will also communicate with other servers to say things like: "Hi! I'm over here!".
Perhaps the most important questions concern your server(s):
What services are running? - every service that runs on your server swallows time slices.
What services are not running on your server? - to keep your timings realistic, should you stop any services or (more importantly) not?
What services are running on your database server? - just as important as your main application server, your database server needs time to furnish data to your application. If there are other services running on it, this can impact heavily on your timings.
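Before auditing every service, it can help to simply sample the same page on a schedule, log timestamped durations, and then correlate the slow periods with whatever the server (or database server) was doing at that time. A minimal sketch, assuming a hypothetical URL, output file, and one-minute interval:

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.time.LocalDateTime;

// Hits the same page at a fixed interval and appends a timestamped duration
// to a CSV file, so slow periods can be matched to scheduled jobs, backups,
// antivirus scans, etc. URL, interval, and output file are placeholders.
public class ResponseTimeLogger {
    public static void main(String[] args) throws Exception {
        URL page = new URL("http://intranet-server/app/page.asp"); // placeholder
        try (PrintWriter log = new PrintWriter(new FileWriter("timings.csv", true), true)) {
            while (true) {
                long start = System.nanoTime();
                String result;
                try {
                    HttpURLConnection conn = (HttpURLConnection) page.openConnection();
                    int status = conn.getResponseCode();  // forces the request
                    conn.getInputStream().readAllBytes(); // drain the body
                    result = status + "," + (System.nanoTime() - start) / 1_000_000;
                } catch (Exception e) {
                    result = "error," + e.getMessage();
                }
                log.println(LocalDateTime.now() + "," + result);
                Thread.sleep(60_000); // one sample per minute (placeholder)
            }
        }
    }
}
```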
Please everyone, chip in here - there's just so much to take into account.
Without an adequate description of your setup, it's very difficult for anyone to give a wholly valid answer.
I'm currently upgrading an application to Tridion 2011.
We have two load-balanced web servers and a single database server hosting the broker database. All content is stored in the broker database and all pages are deployed locally on the web servers (the Tridion deployer is installed on the web servers).
Because the broker will write the content and metadata to a shared database, we'll get errors when we deploy to both web servers, as they will both try to store the content. There are a couple of ways to solve this that I know of:
Deploy to one web server that writes the content to the broker DB and use FTP sync to copy pages and directories to the second web server.
Deploy to one web server and have the broker write the files to a shared network disk, and point both web servers to the shared network disk instead of storing the files locally.
Deploy to both web servers and have them each work on a separate database.
I was wondering if Tridion 2011 has more advanced broker features to enable the scenario where I publish to both web servers but only have one of them actually write the content to the database (while both read), so I can use 1 broker database instead of 2.
I hope this is a bit clearer.
Tridion is not clustering software and thus cannot manage your high-availability requirements for you. You should treat clustering separately from Tridion and then consider how you would solve this without Tridion.
If you have your web/application servers set up for high availability with some form of sync in place (for both the filesystem and the broker database), then Tridion can just publish to one of the nodes (which technically can even be behind a load balancer).
If you do not want to invest in clustering software and prefer a "poor man's" cluster, you should set up both of your web/application servers with their own deployer and their own database. Then Tridion can just publish to both nodes and everything will automatically stay in sync (as long as both nodes are online).
I'm using Kaazing WebSocket Gateway and I can run the demos - everything is working.
But I want to change the server code (the code that handles the WebSocket messages sent to the server and responds). How can I do that?
Zippo,
Would you mind telling me how you contacted Kaazing Support? I can find no voicemail, forum entry, or record of a call. I'd like to make sure we didn't miss anything.
It sounds like you would like to change the Gateway code? If so, the answer is that we offer a developer's version of the Gateway with unlimited connections. We don't currently offer an open source version.
If I have misunderstood your question, please contact Kaazing Global Support with your question so we can help you out. Call our switchboard at 1-877-KAAZING (1-877-522-9464) and ask for "Technical Support".
Regards
Jan Carlin
Director Kaazing Global Support
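For what it's worth: as I understand the product, the Gateway itself is usually configured (in gateway-config.xml, e.g. with a "proxy"-type service) to relay WebSocket traffic to a back-end service that you write and control, so "changing the server code" normally means changing that back end rather than the Gateway. Below is a minimal, hypothetical sketch of such a back end in Java, assuming a simple newline-delimited text protocol; the port and class name are made up:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal back-end line service. Pointing a gateway "proxy" service's
// connect URL at tcp://localhost:61234 (port is arbitrary here) would let
// the Gateway relay WebSocket traffic to this code, where the responses
// are entirely under your control.
public class BackendService {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(61234)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("server says: " + line); // your custom logic goes here
            }
        } catch (Exception e) {
            // client disconnected; nothing to do
        }
    }
}
```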
I'm currently looking at hosting solutions for my Ruby on Rails SaaS web application, and the biggest issue I see is that if I go with something like Amazon EC2, then I still need to configure my own server and install what I need (e.g. database, programming framework, application server, etc.). Each one of these is an opportunity for something to go wrong. I also have to worry about how my data is getting backed up, how frequently, and a host of other "low-level" details. Being a startup, I don't have the resources for a sysadmin, so I would have to play one myself. I currently do some work for a startup, and my boss is always talking about how great EC2 is because it lets us "get out of the hardware business" - in reality, though, it doesn't feel that way, because we still have to set up the server instances, install software, and configure it properly. It feels like we're still in the hardware business; we just don't own the server we're using.
In contrast is a service like Heroku (which actually uses EC2 underneath, I believe), which basically takes care of all the low-level details. They do automatic backups for me; I just specify the frequency. They have a server configuration already set up. They have ways to manage it and keep it running so I don't have to monitor traffic. I can focus on my application, just deploy the code, and let them worry about administration and about making sure the database is properly configured with the web server and the right folders have permissions.
The problem with Heroku is obviously that I don't have control over these things if I want to modify them. Heroku uses nginx as its web server; if I want to use Phusion Passenger on Apache to stay on the "cutting edge" of RoR development, I'm SOL. If I need to make a quick patch in production (the root of all evil, I know, but it happens sometimes), I don't have SSH access to Heroku's servers. If I need to set up a new database user to allow somebody else to remotely access data, I don't think I can do that. And worst of all, if something does happen with the server, I have no way of doing anything except waiting for Heroku to fix it.
Basically, at what point, if ever, can we as developers focus on our code and application and not have to play sysadmin with server configuration? As a startup with limited resources and limited knowledge of configuring servers (enough to get by), would I be better off sacrificing some configurability for the ability to let somebody else worry about the hardware/software end of things?
Make the server config part of your project and use scripts to set up and tear down your servers. Keep everything under VCS and use the scripts routinely to recreate your development setup.
https://stackoverflow.com/questions/162144/what-is-a-good-ruby-on-rails-hosting-service/265646#265646
I'm not interested in learning how to configure Apache, ModRails, Phusion, Mongrel, Thin, MySQL, and whatever. With Heroku I don't worry. nginx is the web server, and PostgreSQL is the database. They have settled on Ruby/Rack for all new apps. Frameworks that run on Rack include Rails, Merb, and Sinatra. Limited choices.