I have been developing a simulation model in my local environment as a way to learn how to code (I am using PHP). In my current environment it takes around 30 seconds to run one simulation. I was expecting this to be much quicker. My theory is that the local environment has limitations, due to shared resources, that a standard web server would not have. I used the Laragon installer with its default settings to configure the local environment. Does this theory hold any water?
Asked more simply: does a standard web server with default settings compute faster than a local environment with default settings?
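For what it's worth, you can test the theory directly by timing a run from the command line, which takes the web server out of the equation entirely. A minimal sketch, where runSimulation() is a hypothetical stand-in for your own entry point:

    <?php
    // benchmark.php - time one simulation run from the CLI:  php benchmark.php
    // Running via the CLI bypasses the web server, so if this is still slow,
    // the bottleneck is the PHP code itself rather than Laragon's defaults.
    $start = microtime(true);
    runSimulation(); // hypothetical: replace with your simulation's entry point
    $elapsed = microtime(true) - $start;
    printf("Simulation took %.2f seconds\n", $elapsed);

If the CLI run is just as slow, the time is going into the algorithm itself rather than the local environment's configuration.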
Is there a way to monitor the CPU utilisation of a Linux server where my website is hosted using external tools, without installing any software on the server (i.e. just using the server's IP address)?
Please let me know if that would be possible.
If your web application supports some kind of performance-monitor plugin, you might set it up on a hidden page. Your hosting control panel might also include some kind of monitoring for your site.
How useful the returned data is, however, is another matter, as you are most likely running in a Docker container inside a VM that is tuned for the provider's benefit (and not for yours).
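That said, if you can add one PHP file to the site, a minimal sketch of such a hidden page might look like this (the filename status-hidden.php is arbitrary; sys_getloadavg() works on Linux hosts):

    <?php
    // status-hidden.php - expose the host's load averages to an external poller.
    // sys_getloadavg() returns the 1-, 5- and 15-minute load averages on Linux.
    list($load1, $load5, $load15) = sys_getloadavg();
    header('Content-Type: application/json');
    echo json_encode([
        'load_1min'  => $load1,
        'load_5min'  => $load5,
        'load_15min' => $load15,
    ]);

An external monitor could then poll http://<server-ip>/status-hidden.php on a schedule, with nothing extra installed on the server.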
I have TFS 2015 installed on one of the company's servers. I try to access TFS using web access and it is extremely slow: it takes more than 5 minutes for a page to load, and sometimes even longer. If I restart the server, TFS becomes a little faster (a page needs only a minute or so to load), but it soon slows down again.
The server itself is okay. The CPU and memory are not even fully utilized (roughly 20%-40% utilization).
Other applications that are installed on the server are working fine, so it's just TFS.
Any suggestions?
Log on to the application-tier machine and try to access web access there, to see whether you get the same behavior.
Check the network connection between the application-tier machine and the data-tier machine if you set up TFS in a multiple-server configuration. You may also try turning off the firewall and anti-virus software on the machines.
Clean the cache folder on the application tier; the folder is usually located at C:\TfsData\ApplicationTier\_fileCache (see the command sketch after this list).
Check the requirements and compatibility documentation to see whether your TFS is set up in an appropriate environment.
If the items above are not helpful, you may need to consider moving your TFS to different hardware.
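As a sketch of the cache-cleaning step, assuming a default single-server installation and an elevated PowerShell prompt (stop IIS first so the cache files are not in use):

    iisreset /stop
    Remove-Item -Recurse -Force "C:\TfsData\ApplicationTier\_fileCache\*"
    iisreset /start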
I am looking for a way to be able to do the following:
Create an instance of Windows with installed prerequisites and configuration
An isolated environment would be recommended (As in it will not modify the existing configuration on local machine only in that VM-like environment)
Ability to use the internet within that environment
Using it sort of like a "check-point" (Start working on it, doing something wrong and being able to start once again from the instance that we created)
Ability to share the environment
Possibility of creating multiple different environments
Low disk usage if possible
Fast deployment of environment on local machine
I have looked into Docker, which seems pretty good for what I need, but I want to investigate other options as well because it requires Windows 10 x64 Enterprise.
Something that works on Windows 7/Server/8/8.1 would be nice.
I would also love to get arguments on why X option is better than Y option.
Thanks in advance!
If you want a completely separate environment, creating a virtual machine is worth considering.
There are products from VMware and Oracle for creating virtual machines. I have been using Oracle VirtualBox (Oracle's virtual machine software) for some time now and find it pretty useful.
A virtual machine addresses all your concerns:
Create an instance of Windows with installed prerequisites and configuration - A virtual machine runs on top of your installed OS without making any modifications to the current installation.
An isolated environment would be recommended - It runs completely isolated, like a separate machine.
Ability to use the internet within that environment - You can use the internet inside a virtual machine.
Using it sort of like a "check-point" - You can take a snapshot and save the state; the next time you start the VM, it will start from that state (see the command sketch after this list).
Ability to share the environment - Export a created VM and it can be reused.
Possibility of creating multiple different environments - You can run multiple VMs on your machine; configure the disk usage and RAM accordingly.
Low disk usage if possible - Configurable while creating a virtual machine.
Fast deployment of environment on local machine - Yes, you'll need the .iso image of your operating system.
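As a concrete sketch of the check-point and sharing workflow using VirtualBox's VBoxManage command-line tool (the VM name "DevEnv" is hypothetical):

    VBoxManage snapshot "DevEnv" take "clean-baseline"    # save a check-point
    VBoxManage snapshot "DevEnv" restore "clean-baseline" # roll back after a mistake
    VBoxManage export "DevEnv" -o DevEnv.ova              # package the VM for sharing
    VBoxManage import DevEnv.ova                          # recreate it on another machine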
I'm the lead developer at a startup and we currently have the following setup:
- Development Server
- Staging Server
- Production Server
- Paid Subversion Hosting
- My local machine
- 2 other developers' local machines
Where is the best place to host the CI server? On an entire new server? Or is my local machine sufficient for this?
Definitely not your local machine. I'd suggest a separate server, unless you don't mind slowing down your dev server.
I say not your local machine because the last thing you want to be hindered by is builds; nothing is more frustrating than a slow machine. You should also generally generate official builds on a separate server.
Generally not the local machine (when other options are available), as you mostly want to have the same "stuff" installed (or not installed) on the build server as you have on the production server, so that whatever runs on the build server runs in as realistic a scenario as possible.
Speaking from a .NET point of view, this means that I don't want (for example) Visual Studio running on the build server, ruling out my local machine.
It would also be a good idea to be sure someone on your team has access to the machine and can perform actions on it, thus potentially ruling out the hosted solution.
Aside from that, as long as it's on a box with a half-decent spec, I don't think it really matters.
I would put it on the development server, the staging server, or the paid Subversion hosting instance, if possible.
UDS (formerly known as Forte 4GL) is the platform our current system runs on. I remember finding a forum post that documented the flag needed to deploy it on a Veritas cluster, but now that we have modern hardware coming in, I can't find that note anywhere on Google.
Finally found it (after hours of searching) at http://sundusum.com/gpunkt/knowledgebase/articles/7240.html
"To allow for configurations like clustered environments, a test for a
new environment variable, FORTE_CM_NO_CANONICAL_NAMECHK, was added in
Forte release 3M. If that environment variable is set, the check for
the match between FORTE_NS_ADDRESS and the machine name is skipped.
"The FORTE_CM_NO_CANONICAL_NAMECHK can be set in the fortedef files on
machines where running the environment manager is allowed. Setting that
environment variable on all nodes should be avoided, since that would
allow the environment manager to be started on any node which can lead
to problems that canonical name check is designed to prevent."
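For illustration only, the setting might look like this in a csh-style fortedef file on the cluster nodes that are allowed to run the environment manager (the exact file name and syntax depend on your installation, so treat this as an assumption):

    # Skip the check that FORTE_NS_ADDRESS matches the machine name,
    # so the environment manager can start on a clustered (virtual) hostname.
    setenv FORTE_CM_NO_CANONICAL_NAMECHK TRUE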