I have to install an Oracle 18c database server. The hardware I currently have is four 960 GB solid-state drives, 32 Intel Xeon processors at 2.10 GHz, and 255 GB of RAM. The database that will be mounted on this server is approximately 750 GB in size, grows by about 350 GB per year, and handles 500 simultaneous connections. Do you consider this hardware sufficient to obtain good performance in Oracle, or should it be upgraded?
The answer is, "it depends":
Is this enough storage capacity to meet your long term needs for data, archived transaction logs, backups, auditing, and whatever else you need to store?
Do you have an existing production system, and what sort of performance do you get out of it? How does the old hardware and new hardware compare?
How many transactions does your database process in an hour? Your overall data might only grow by 350 GB per year, but if there are a lot of updates and not just inserts then you could be archiving that much in log files and backups every day.
What those 500 concurrent sessions are actually doing at any moment will drive the size of your memory and processor requirements, as will the amount of data that you need to cache to support them.
Do you have any HA requirements, and if so how does that affect the configuration of your storage (i.e. do you lose some percentage of your storage to RAID)?
Which operating system you choose can also affect how efficiently your hardware performs.
My personal preference is to use virtualized servers running Oracle Linux with an SSD SAN, but that might not be right for you. In the end, there really isn't enough information in your question to say for sure whether the hardware you describe is sufficient for your needs.
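As a back-of-envelope illustration of the storage question above (the RAID layout, backup policy, and growth handling are assumptions made for this sketch, not recommendations):

```python
def usable_capacity_gb(disk_gb, n_disks, raid_factor):
    """Usable space after RAID overhead (0.5 for mirrored RAID 10)."""
    return disk_gb * n_disks * raid_factor

def years_until_full(db_gb, growth_gb, usable_gb, backup_copies=1):
    """Whole years until the database plus on-disk backup copies outgrow the storage."""
    years = 0
    while (db_gb + growth_gb * (years + 1)) * (1 + backup_copies) <= usable_gb:
        years += 1
    return years

usable = usable_capacity_gb(960, 4, 0.5)       # 1920.0 GB usable under RAID 10
print(usable)
print(years_until_full(750, 350, usable))      # 0: already tight if backups live on the same array
print(years_until_full(750, 350, usable, backup_copies=0))  # 3 years if backups go elsewhere
```

The point of the sketch is the question itself: whether the 4 × 960 GB is "enough" depends entirely on what else (backups, archive logs) shares it.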
If in-memory databases are as fast as they claim to be, why aren't they more commonly utilized?
One of the main reasons in-memory databases aren't commonly used is cost. As you stated, in-memory databases are usually an order of magnitude faster than disk-resident databases, for obvious reasons. However, RAM is also significantly more expensive than hard drives, and consequently not viable for large databases. That said, as RAM gets cheaper, in-memory databases are becoming increasingly viable for enterprise use.
Another reason is that in-memory databases are often not ACID compliant. This is because memory is volatile, and unforeseen events like power losses may result in complete loss of data. As this is unacceptable for the vast majority of use cases, most in-memory databases do end up utilizing disks to persist data. Of course, this ends up undermining some of the benefits of in-memory databases by re-introducing disk I/O as a performance bottleneck.
In any case, in-memory databases will likely become predominant as RAM becomes cheaper. The performance differences between the two are too drastic to be ignored. Knowing this, multiple vendors have thrown their hats into the in-memory space, such as Oracle TimesTen, SAP HANA, and many others. Also, some companies like Altibase offer "hybrid" DBMS systems, which contain both in-memory and disk-resident components.
You may want to read up on these in-memory offerings to get a better understanding of in-memory databases.
http://www.oracle.com/technetwork/database/database-technologies/timesten/overview/index.html
http://www.saphana.com/
http://altibase.com/in-memory-database-hybrid-products/hdbtm-hybrid-dbms/
Possibly because one or more of:
there is often a mismatch between the data size and the available RAM size
when the data is small normal disk caching and OS memory/disk management may be as effective
when the data is large, swapping to disk is likely to void any benefit
fast enough to meet performance requirements and service levels does not mean as fast as possible; fast enough is good enough.
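The size-versus-RAM mismatch described above can be sketched as a simple sizing check (the headroom fraction and the 2x hybrid threshold are illustrative assumptions, not vendor guidance):

```python
def storage_recommendation(data_gb, ram_gb, headroom=0.25):
    """Suggest an engine type based on whether the working set fits in RAM.

    `headroom` is the fraction of RAM reserved for the OS and query
    working memory (an illustrative assumption).
    """
    usable_ram = ram_gb * (1 - headroom)
    if data_gb <= usable_ram:
        return "in-memory"       # whole dataset fits comfortably in RAM
    if data_gb <= usable_ram * 2:
        return "hybrid"          # hot set in RAM, cold data on disk
    return "disk-resident"       # swapping would void the in-memory benefit

print(storage_recommendation(50, 128))    # in-memory
print(storage_recommendation(150, 128))   # hybrid
print(storage_recommendation(500, 128))   # disk-resident
```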
Memory Issue
One of our server boxes shows 96% memory usage in task manager (137/140GB or so used).
When I look in the "Processes" tab though (even with "show processes from all users" checked), the top processes only use 40GB or so combined at peak times. I've provided an image of the top used processes below, as well as an image of the performance panel showing the memory usage.
Note: CPU usage isn't usually at 99%, it spiked when I took that screenshot.
My Question
What is the reason for this discrepancy, and how can I more accurately tell which processes are eating the other 100GB of memory?
To verify, here's an image of the performance panel:
Sergmat is correct in his comment (thanks, by the way); I actually found RAMMap myself yesterday, and using it revealed the problem.
Our server runs a very heavily used SQL Server instance. RAMMap reveals that there is a 105GB region of memory used for AWE (Address Windowing Extensions), which is used to manipulate large regions of memory very quickly by things like RDBMSs such as SQL Server.
It turns out you can configure the maximum memory SQL Server will use, this allocation included, via its "max server memory" setting; so that's the solution.
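A rough sketch of the sizing rule of thumb behind that setting (the reservation numbers here are my own assumptions, not Microsoft guidance): leave the OS a flat chunk plus a fraction of total RAM, and cap SQL Server at the rest.

```python
def max_server_memory_mb(total_ram_gb, os_reserved_gb=4, other_fraction=0.1):
    """Rule-of-thumb cap for SQL Server's 'max server memory', in MB.

    Reserves a flat amount for the OS plus a fraction of total RAM for
    other processes; both numbers are illustrative assumptions.
    """
    sql_gb = total_ram_gb - os_reserved_gb - total_ram_gb * other_fraction
    return int(sql_gb * 1024)

# For the 140 GB box in the question:
print(max_server_memory_mb(140))   # 124928 MB (~122 GB) left for SQL Server
```

The actual value is then applied with sp_configure / SSMS; the point is that without an explicit cap, SQL Server will happily grow toward all available physical memory.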
Typically in a working environment, I have many windows open: Outlook, two or three Word documents, a few browser windows, Notepad++, a VPN client, Excel, etc.
Having said that, chances are that about 40% of these apps are not frequently used and are referred to only sparingly. They occupy memory nonetheless.
Now, how does a typical OS deal with that kind of memory consumption? Does it suspend those apps to the hard disk (pagefile, Linux swap area, etc.), thereby freeing up memory for use elsewhere, or do they keep occupying memory as they are?
Can this suspension be a practical, doable solution? Are there any downsides, such as response time?
Is there some study material I can refer to for reading on this topic? I would appreciate any help here.
The detailed answer depends on your OS and how it implements its memory management, but here is a generality:
The OS doesn't look at memory in terms of how many processes are in RAM; it looks at it in terms of discrete units called pages. Most processes have several pages of RAM. Pages that are least referenced can be swapped out of RAM and onto the hard disk when physical RAM becomes scarce. Rarely, therefore, is an entire process swapped out of RAM, but only certain parts of it. It could be, for example, that some aspect of your currently running program is idle (i.e. its page is rarely accessed). In that case, it could be swapped out even though the process is in the foreground.
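The "least-referenced pages get evicted" idea can be sketched with a toy LRU page-replacement simulation (the frame count and reference string below are made up for illustration; real kernels use approximations of LRU, not this exact bookkeeping):

```python
from collections import OrderedDict

def simulate_lru(reference_string, n_frames):
    """Count page faults under LRU replacement for a sequence of page accesses."""
    frames = OrderedDict()   # pages ordered least- to most-recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)          # hit: mark as most recently used
        else:
            faults += 1                       # fault: page must come from "disk"
            if len(frames) == n_frames:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = None
    return faults

# 12 accesses competing for 3 physical frames
print(simulate_lru([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 10 faults
```

Each fault stands in for the disk I/O cost the answer mentions: the fewer frames (physical RAM) you have relative to the working set, the more often idle pages get pushed out and faulted back in.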
Try the wiki article for starters on how this process works and the many methods to implement it.
Which is the recommended Amazon EC2 instance size for creating a large (100k+ users) Ejabberd cluster?
I mean, is it more efficient/less costly to use a larger number of small instances, or a smaller number of large instances?
And are HVM images, intended for cluster computing, of any use for an Ejabberd cluster, or will standard images be enough for the purpose?
ejabberd can use lots of memory but doesn't use a lot of CPU, so memory is the biggest consideration. It really depends how many connections you're talking about; for 100k+ connections you will need at least a large instance.
From MetaJack's blog (the creator of Strophe, the JavaScript XMPP library):
"For Chesspark, we use over a gig of RAM for a few hundred connections. Jabber.org uses about 2.7GB of RAM for its 10k+ connections."
A large instance has 7.5GB of RAM, which isn't enough for 100k+ connections. I'd say you're looking at a cluster of two or three Large instances, or a High-Memory instance.
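Extrapolating from the Jabber.org figure quoted above (2.7 GB for 10k connections) gives a rough sizing sketch; note this assumes memory scales linearly with connections and ignores clustering overhead, both of which are simplifications:

```python
import math

def instances_needed(connections, gb_per_10k=2.7, instance_ram_gb=7.5):
    """Estimate instance count from a linear memory-per-connection figure.

    Defaults come from the numbers quoted in the answer: 2.7 GB per 10k
    connections, 7.5 GB of RAM per EC2 Large instance.
    """
    total_gb = connections / 10_000 * gb_per_10k
    return math.ceil(total_gb / instance_ram_gb)

print(instances_needed(100_000))   # 4 Large instances for ~27 GB of connection state
print(instances_needed(10_000))    # 1
```

Real-world needs will be higher once you account for OS overhead, message traffic, and Mnesia replication, which is why erring toward fewer, bigger memory-optimized instances is attractive.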
HVM is only really needed when you need hardware support that is not provided by the software virtual machine (e.g. graphics card processing). It is not required for memory or CPU.
What is the dominant factor in disk price: capacity or IOPS? I think the answer to this question should also answer the other one, in which I asked why the cost of disk I/O per access is PricePerDiskDrive/AccessesPerSecondPerDisk.
Thanks
Chang
The factor dominating the price is the market segment: Home disks are cheapest, server disks most expensive.
It depends on several factors. As stated in the previous answer, there is the segment: home or business.
Then there is the architecture:
SCSI (bus controller with high speeds)
SSD (flash)
SATA (regular drive)
SAS (Serial Attached SCSI, backwards compatible with SATA)
SAS and SCSI disks mostly run at high speeds, which makes them more expensive.
SATA disks for home use at normal speeds (5,400 or 7,200 rpm) are priced based on capacity and brand. If one company ships the first 3 TB disk, it will be very expensive; once three companies sell such disks, prices decrease because of competition.
SSD is a technology that has become affordable, but it is still a lot more expensive than regular SATA (with platters). This is because it has no moving parts and uses faster memory.
Also a very nice thing to remember:
The faster the drive, the more expensive it is; therefore it is normal that the better your IOPS, the more the drive costs.
Capacity has a price too, but it is linked to the drive's speed and to recent evolutions in technology.
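The formula mentioned in the question can be sketched numerically. The drive prices, capacities, and IOPS figures below are made-up illustrations, not market data; the point is only how the two cost metrics pull in opposite directions:

```python
def cost_per_iops(price_per_drive, accesses_per_second):
    """PricePerDiskDrive / AccessesPerSecondPerDisk, per the question."""
    return price_per_drive / accesses_per_second

def cost_per_gb(price_per_drive, capacity_gb):
    return price_per_drive / capacity_gb

# Hypothetical drives: (name, price in $, capacity in GB, random IOPS)
drives = [
    ("7200 rpm SATA", 100, 2000, 100),
    ("15k rpm SAS",   300,  600, 200),
    ("SATA SSD",      400,  500, 50_000),
]

for name, price, cap, iops in drives:
    print(f"{name}: ${cost_per_gb(price, cap):.3f}/GB, "
          f"${cost_per_iops(price, iops):.4f} per IOPS")
```

With numbers like these, the SATA drive wins on cost per GB while the SSD wins on cost per IOPS by orders of magnitude, which is why "capacity or IOPS" has no single answer: it depends on which metric your workload buys.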