How can I increase TPS in Google Cloud Platform? [closed]

I'm running a modded Minecraft server with 4 GB of RAM allocated on a machine with 32 GB.
It's stable with 1-2 players, but when more people join the server, the TPS drops.
I don't think it's a problem with RAM; rather, too many packets are being transferred between the client and the server.
How can I increase TPS?

The primary causes of TPS drops come down to what you have going on in your world.
When adding mods or plugins, you should be thinking about the long-term effects of your choices.
For each modded block you add that provides some type of function, the server has to allocate resources to ensure that function is carried out. On its own, that one block is of little consequence. But if that block forms an array, as is typically done with solar panels, then the server will need to dedicate more resources to carry out that array's functions. When we break it down, we can get an idea of how much is really going on in the background.
Minecraft does not have a built-in method for checking RAM usage, but you can check it by installing the Essentials plugin and using the “/memory” command. You can take a look at this link for more information; the same command can also help you determine the current TPS.
Additionally, in that last link you will find some good recommendations that may help you resolve your problem:
Reduce view distance
Your Minecraft server runs at a view distance of 10 by default. We recommend changing it to 6; this will not make any noticeable difference to players, but it can hugely help your server's performance. You can learn how to access your server settings here.
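As a sketch, assuming a standard server.properties file, this is the one line to change (the server needs a restart for the new value to take effect):

    view-distance=6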
Setup automated restarts
Setting up automatic restarts can help your server run more smoothly by freeing up RAM. It also reclaims RAM used by plugins and mods that have small memory leaks. You can view a tutorial on how to set up automated restarts here.
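A minimal sketch of one way to do this on a Linux host; the paths, session name, and 4 GB heap below are assumptions, not part of the linked tutorial:

    # crontab entry: restart the server every day at 6:00 AM
    0 6 * * * /home/minecraft/restart.sh

    #!/bin/bash
    # restart.sh -- hypothetical restart script for a server running inside GNU screen
    screen -S minecraft -X stuff "stop$(printf '\r')"        # ask the server to save and stop
    sleep 30                                                 # give it time to shut down cleanly
    screen -dmS minecraft java -Xmx4G -jar server.jar nogui  # start a fresh detached instance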
Run the latest version
We recommend using the latest versions of Minecraft, plugins, and mods on your server. Most newer versions of software include bug fixes and performance improvements that will make your server run faster and more stably.
Remove unnecessary mods and plugins
Unused plugins and mods on the server consume server resources even when they are not being used, so it is a good idea to remove any unnecessary ones. If you think you may use a plugin in the future but are not using it right now, you can disable it by renaming the plugin's .jar file to end with “.disable”, e.g. Essentials.jar.disable. Remove “.disable” from the file name to enable the plugin again.
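For example, from the server directory, using the plugin named above:

    mv plugins/Essentials.jar plugins/Essentials.jar.disable   # disable the plugin
    mv plugins/Essentials.jar.disable plugins/Essentials.jar   # enable it again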
I also found this documentation, which explains how to optimize the server's performance; it may help you troubleshoot your issue.
On the other hand, I recommend you review the following guides on asking questions: How do I ask a good question? and How to create a Minimal, Complete, and Verifiable example, in order to provide better context on what you are doing and what you want to achieve.

Related

How to manage multiple products that share code [closed]

The company I just joined has a system of products that share a large percentage of their code base (via shared links in Visual SourceSafe). There are about 25 product types in this system as well as a PC interface.
The products network together using proprietary protocols that are largely undocumented. Historically, the method for maintaining this mess has been to require that all firmware and software be released as a single package. This, of course, causes significant delays in release schedules due to the required regression testing.
Has anyone else had a successful method of dealing with this type of issue? We're really getting beat up over it by management (I honestly can't fault them for feeling this way).
My first thought is to try to separate the device releases from each other somehow, perhaps by pulling shared functionality into versioned libraries and then only updating the devices that use the libraries that changed. I see issues with version mismatches from this, however.
This is an organizational question. I understand how to keep the house of cards going via testing and processes, but I believe that better organization of the code base could have many good results.
I appreciate the advice.
"significant delays in release schedules due to the required regression testing."
That's why folks do a "daily build".
Daily builds typically include a set of tests, sometimes called a smoke test (as in, where there is smoke there is fire). These tests are included to assist in determining what may have been broken by the changes included in the latest build. The critical piece of this process is to include new and revised tests as the project progresses.
When the organization -- as a whole -- has to keep the daily build working, then people change their responsibilities, points of view, biases, complaints and actions to keep the daily build running.
Daily stand-up meetings become focused on things that might break the build.
Individual developers have to refactor their code more carefully to avoid breaking the build.
Breaking the build becomes an immediate, instantaneous indicator of something being out of sync. Immediate. No delay. If I break the build today, everyone will know it tomorrow morning. No days were wasted assuming (or hoping) that things still worked. We can immediately roll changes back, or apply changes to keep going forward.
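As an illustration only (the repository layout, build targets, and mail address below are hypothetical), the mechanics of a daily build can be as simple as a cron-driven script:

    #!/bin/bash
    # nightly_build.sh -- sketch of a daily-build job
    set -e                               # abort on the first failure so breakage is loud
    cd /srv/build/product
    svn update                           # pull the latest integrated sources
    make clean all                       # full rebuild from scratch
    make smoke-test                      # fast suite: did today's changes break anything?
    echo "daily build OK: $(date)" | mail -s "daily build" dev-team@example.com

Wire that to a 2 AM crontab entry and the whole team knows by morning whether yesterday's changes kept the build alive.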

Measuring Web Application Performance (Stress-Testing) and Bandwidth Requirements [closed]

I am in the process of measuring the bandwidth requirements and how a Web application behaves in terms of response time and memory requirements when the number of users is increased.
Is there a particularly good tool that can help us here? I believe JMeter is the standard tool, but are there other tools, considering that the site is IE-only?
Any answers would be greatly appreciated.
Well, the question is how do you want to profile? Do you want to simulate real-world activity? Or do you just want to bombard the heck out of the site?
Load Testing Individual Pages
I don't think you can go wrong with using Apache Bench (ab). It's dirt simple to use, and can really stress your application. My typical usage is:
ab -c 10 -n 1000 http://www.example.com/path/to/page
The -c parameter is the number of simultaneous requests to issue. I would suggest starting low (like 5 to 10) and working your way up. Watch the output for failed requests and a falling response rate. You're limited to about 1000 connections on most Linux machines, so don't go too crazy.
The -n parameter is how many requests to issue. I would suggest doing at least 100 times the number of concurrent requests to get a good average...
Another great use for apache bench is to benchmark individual database queries. Just create a simple script that runs the query, and load away. This can be a really good way to detect fast but expensive queries that will take your server down in production yet seem fine in testing.
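For instance (the endpoint here is hypothetical), expose the query behind a bare page that does nothing else, then point ab at it:

    # the page runs just the one query and returns its result
    ab -c 5 -n 500 http://www.example.com/bench/slow-query.php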
Load Testing The Whole Application
I've had good luck with WebLoad. There's an open-source version that will get you started if you don't have much of a budget, but I'd suggest springing for the pro version. With it, you can set up a distributed test environment (as simple as installing the client on every machine in the office, or as complex as spinning up a bunch of VMs for it).
The cool thing is that you can program it in JavaScript, so you can tell it to take random click paths through the site with random delays. This should simulate a user far better than you could do manually. Then, once you have it set up, push the tests to the distributed engine and hit go.
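You can rough out the same idea without any tool, just to get a feel for it; here is a minimal sketch (host and paths are placeholders) that walks random click paths with random think times:

    #!/bin/bash
    # crude random click-path simulator -- all URLs below are placeholders
    PAGES=("/" "/products" "/products/42" "/cart" "/checkout")
    for i in $(seq 1 100); do
      PAGE=${PAGES[RANDOM % ${#PAGES[@]}]}             # pick a random page
      curl -s -o /dev/null "http://www.example.com$PAGE"
      sleep $((RANDOM % 5))                            # random think time, 0-4 seconds
    done

A real tool like WebLoad adds the distributed engine, load profiles, and reporting on top of this basic loop.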
It supports many different load profiles (stair-step, where it adds load little by little for the duration of the test, etc.), so you can simulate a slashdot-effect profile, normal day-to-day usage, and so on.
The reports it generates are immensely useful: they show you the slow URLs, where the bottlenecks are, etc.
There are plenty of other test platforms and systems out there. This was just one that I found that I felt worked pretty well at the time (I did a comparison about 2 to 3 years ago). I am not affiliated with the company in any way.
Load Testing parts of the application
This is a really useful technique called profiling. The how-to and the tools are fairly language-specific, so I won't go into too much detail here (since you don't have a language tag on your question). But the point is that once you find a slow page, you'll need to profile it to figure out what's slowing it down. Then fix the low-hanging fruit (the slowest parts), and re-test to see whether you made a difference...
Conclusion
Since it's almost impossible to simulate real-world load, this is really more of an art than a science. Have at it, and have fun. Don't take the results too seriously, though: even with the best testing, you're likely to miss something, so I wouldn't take them as gospel and go telling the CEO that you tested that it's capable of 100k concurrent users. When the day comes that it crashes (and if you're unlucky, it will crash), he will blame you, since you told him it would work...
Just a thought: you say IE-only, so is it hosted on IIS? If so, you might want to look at Microsoft's WCAT (Web Capacity Analysis Tool); more information is available here:
http://support.microsoft.com/kb/231282
It isn't open source, but it is free - and do you really need the source?

Looking for an issue tracker / project management software that automatically manages start/completion dates based on priority/relationships [closed]

So, a little background. We are a small company with a half-dozen developers. We have been evaluating many project management / issue tracking software packages (TRAC, Redmine, FogBugz, etc) and trying to create a decent process/workflow for managing projects, adding features, fixing bugs, etc. I'd like to think our requirements are similar to most other companies our size.
Essentially, what this comes down to is
1) An easy way for the PM and developers to track projects, issues, bugs, etc
2) An easy way for the PM and admin/executives to get a birds-eye view of progress and easily manage timelines, schedules, and priorities.
After trying TRAC, we moved to Redmine. We found Redmine easier than TRAC to administer, and the ability to have sub-projects and sub-tickets is great.
However, the big problem we ran into is that it is very difficult to manage schedules and timelines. It seems like it would be incredibly time-intensive, because you have to manually enter a start date, estimated time, and end date for each ticket, project, etc.
So if you set up a month's schedule based on priorities, what are you supposed to do when a particular ticket/issue/subproject takes more time than was estimated? Right now, it appears I would have to go back in and MANUALLY change the start/end dates of every single item.
What would be ideal is to be able to set priorities/dependencies and estimated time on tickets/milestones, and have the software automatically manage the start/end dates. Does anyone know how to get Redmine to do this, or can anyone recommend a different software package that can?
From what I understand, you need more than an issue tracking system; more precisely, you also need a task scheduling mechanism, and I do not know of an issue tracker with a task scheduling engine. I guess that with this feature you are entering project management territory, so I would recommend a project management tool.
I think MS Project (as Kalven said) is too much for you. Try a simpler alternative like RationalPlan first.
If you use the "schedules" plugin (not sure if it's compatible with Redmine 1.1.0), you can set it up to fill in the start and end dates of issues automatically. I'm not certain that it takes issue relations into account, but if it does, it should be enough to just change the estimated time for an issue that is taking longer than you first thought and then refill the calendar/schedules.
I have also used many project management packages for my small business. However, for the last 8 months I have been using Project Professional 2010, which helps me a lot with project task management and time management. I would recommend you opt for the same.

Starting off on a new project, the ideal way for knowledge management? [closed]

We are starting a from-scratch implementation of an eCommerce solution and have already decided on the framework to use. People joining the project will have no real experience with this framework. How should we go about the knowledge management/transfer? What other challenges should we be prepared for, and how?
I can think of starting a wiki with the most often needed content and addressing the most common roadblocks... Is that a good idea?
A glossary will go a long way.
Make sure that you and your team use the glossary when talking. There's no point having specific words for things if you don't use them correctly.
Stand up meetings each day should help with whatever roadblocks you have.
Make sure that your team has as high-bandwidth communication as possible, but also allow for quiet time where people can focus. Maybe make the first 15 minutes of each hour a time when people can walk up to each other and ask questions, and keep the other 45 minutes as silent time. Reassess this until you find a balance that works for everyone.
I second the wiki recommendation - we have had a great deal of success using it to form a knowledge base and glossary for our projects.
Another technique that has worked well for us is creating a scratch repository (e.g. SVN, Git, etc.) for the purpose of technology ramp up spikes. We're currently working on an enterprise-scale project leveraging Spring's OSGi support, and we created several spike projects to explore different facets of the technology. This helped us grasp the technology before getting too encumbered by the business needs.
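For example (the repository names here are hypothetical), each spike can live in its own throwaway repository so the experiments never tangle with the product code:

    # one disposable sandbox per facet of the technology being explored
    git init spikes/osgi-service-registry
    git init spikes/osgi-config-admin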
As far as challenges for which to be prepared? Expect the unexpected. Any time you embark on using a new framework/technology, you will run into roadblocks and your initial velocity will suffer. My best advice - simple determination. Don't give up on your framework at the first sign of stormy weather. Work through the issues. Eventually you'll clear most of the hurdles and your velocity will increase exponentially as the entire team gets more comfortable with the technology.
I think this is the ideal situation for a wiki. Let the developers choose which wiki to use, because they're the ones that will be using it!
There is this thread:
http://discuss.joelonsoftware.com/default.asp?biz.5.738060.3
You might consider Alfresco.com's open source solution for content management.
I think the wiki is a good idea, but there is also no substitute for real code. To that end, provide a good-quality reference implementation of a single function that shows all the layers in the code stack, i.e. from the browser/form down to the DB and back again.

Where can I find a template for documentation about server-side installation of software? [closed]

I'm looking for a good template on server-side installation of software for a project I'm working on.
The client-side is pretty straightforward. The server-side installation is a little trickier: it is made up of several pieces (services, database connections, dependencies, ports that need to be unblocked, etc.). During a recent test, several undocumented pieces were discovered. Now I need to create installation documentation for our disaster-recovery plans, and ways to test the installation without necessarily having a "full-up" system to test on.
I'd really like a suggestion of where I can get a template or a really good example of such a document. I'd like it to be something that an operator could read and comprehend in the heat of a recovery.
[EDIT]
Our current documentation comes mainly from the questions our administrators have asked during off-site tests. As new code is written, I'd like to make sure the documentation is written ahead of time. I've been collecting VMware images to start testing, but was looking for some good examples. It's a Windows Server shop (2000 & 2003). Word templates would be great, but if I could see good documentation, I could create the templates myself. Any suggestions about what should be tested would be great as well.
[2nd EDIT]
I've gotten several good ideas from the answers posted. After changing my Google search, I came up with some good starting points. They're not perfect, but they are a good start.
Microsoft Exchange - http://technet.microsoft.com/en-us/library/bb125074(EXCHG.65).aspx
iPhone - http://manuals.info.apple.com/en_US/Enterprise_Deployment_Guide.pdf
http://www.novell.com/documentation/gwgateways/gw7_exch/index.html?page=/documentation/gwgateways/gw7_exch/data/ab32nt1.html
http://cregan.wordpress.com/2006/06/22/exchange-2003-step-by-step-installation-instructions/
http://technet.microsoft.com/en-us/magazine/cc160942.aspx
Covers planning in the design stage well - http://www.onlamp.com/pub/a/onlamp/2004/04/08/disaster_recovery.html?page=2
[Edit 10/29/2008]
THIS is the type of sample I was looking for. It doesn't have a lot of garbage, but seems to explain enough of the why along with the how: http://wiki.alfresco.com/wiki/Installing_Labs_3_Nile
The most complete method we've come up with for creating our DR documentation involves going through a full cycle (or two) of installation and documenting each step along the way.
I realize this can be a bit difficult if you don't have a test (or replacement) system to use to create your documentation - but it's worth lobbying for running through this cycle at least once.
(I recommend twice, the second being done by someone not involved with the project - this is how you test the documentation for future admins, who may not be as experienced with the process.)
A side effect of the above is that your documentation grows fairly large - last I had to do it, I believe the completed installation manual for our database servers was 30+ pages.
What should be tested? Well, in the case of a web site: can you get to the page? Include a URL as a starting point and let the admin click through to a certain point. It is not necessary for the admin to go through the whole QA cycle, just to confirm that what you meant to deploy is really what got deployed.
Other ideas
Also, we (my team at my last job) had QA test the deployment. As a QA person should be, he was not intimate with the details, and as he deployed to QA we got feedback on what went wrong.
Another thing that is useful is sitting down with the admin(s) before the deployment. Go over the instructions and make sure they understand them the same way you do.
Template? Just make sections with fields for data such as the URLs to DEV, QA, and PROD; when you write out the instructions you can refer to those. Just make it clear what is being deployed.
Depending on the admins, automation is helpful. I've had Windows admins who wanted a Word doc with step-by-step instructions and other admins who wanted a script.
However, some helpful things to include, probably as sections:
- Database changes
  - scripts to run
  - verification that they worked
- Configuration changes
  - what the changes are
  - where a version of the new file lives (in my case they diffed the two, which helped reduce errors concerning production-specific values)
- General verification
  - what should be different from the user perspective (feature changes)
For web farm deployments, it might be helpful to have a coordination document describing how the servers need to be pulled in and out of the pool.
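Pulling the pieces above together, a skeleton for such a document might look like this (the section names are illustrative, not a standard):

    DEPLOYMENT: <product and version>          DATE: <date>
    Environments: DEV=<url>   QA=<url>   PROD=<url>
    1. Database changes
       - scripts to run, in order
       - verification that each worked
    2. Configuration changes
       - what changed, and where the new file lives (diff against production)
    3. Services, ports, and dependencies to check
    4. General verification
       - what should look different from the user perspective
    5. Rollback steps if verification fails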
