MinIO: too many open files, please increase 'ulimit -n'

1. When I visit MinIO in the browser, the log shows the following:
returned an error (too many open files, please increase 'ulimit -n') (*fmt.wrapError)
internal/logger/logonce.go:54:logger.(*logOnceType).logOnceIf()
internal/logger/logonce.go:94:logger.LogOnceIf()
cmd/erasure-metadata-utils.go:139:cmd.readAllFileInfo.func1()
internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()
2. At the same time, the total object count MinIO reports is less than what I have uploaded; the actual number of objects is about 150,000.
3. The following is my MinIO configuration file: [screenshot]

Essentially, this error means you have exceeded the number of open file descriptors allowed for this process/user. You can check the current limit with ulimit -n and raise the maximum number of open fds with ulimit -n 65535.
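For example, a minimal sketch (the 65535 value is a common choice, not a MinIO requirement; pick a limit that fits your deployment):
# Check the current per-process limit on open file descriptors
ulimit -n
# Raise the soft limit for the current shell, then start MinIO from that shell
ulimit -n 65535
Note that ulimit -n only affects the current shell session. If MinIO runs under systemd, set LimitNOFILE=65535 in the [Service] section of its unit file instead (the unit name, e.g. minio.service, depends on your install).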
Please know you can find us at https://slack.min.io/ 24/7/365. If you have commercial questions, please reach out to us at hello@min.io or via our Ask an Expert chat at https://min.io/pricing?action=talk-to-us.

Related

How to increase the size of the request in the ab benchmarking tool?

I am testing with ab, the Apache HTTP server benchmarking tool.
Can I increase the size of the request in Apache? (Right now I see that one request has a size of 146 bytes.)
I tried increasing the size of the TCP send/receive buffer (the -b option), but it does not seem to work: I still see "Total transferred" as 146 bytes.
Do you know any way to increase the size of the request (by changing the source code or something)?
Or, if that is impossible, can you suggest a tool similar to ab that can increase the request size?
Thank you so much!
Although the -b option does seem like it should have worked, I can't say for sure as I haven't used it.
Alternatively, have you tried sending a large dummy file in your POST request? That can be accomplished with the -p option followed by a plain-text file, for instance, which you can either create yourself or find by googling something like "generate large file in bytes online" and download to pass into the command.
As far as alternatives go, I've heard that httperf, another open source project from HP, is a great option as well, though I doubt we can't figure out how to do this with Apache Benchmark.
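For instance, a minimal sketch of the -p approach (the payload size, URL, and request counts are assumptions, not tested values):
# Generate a 1 MB dummy payload, then POST it 100 times with concurrency 10
dd if=/dev/zero of=payload.txt bs=1024 count=1024
ab -n 100 -c 10 -p payload.txt -T 'text/plain' http://example.com/upload
Here -p supplies the POST body and -T sets its Content-Type, so each request carries the 1 MB payload.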

Monitor network bytes/sec on Windows for single process/port from command line

I need to monitor the average network bytes sent/received per second over a period of time from the command line, but only for traffic sent/received by a certain process or port.
I am currently able to monitor all network traffic using:
logman create counter -n CounterName -c "\Network Interface(*)\Bytes Total/sec" -f csv -o C:\output.log -si 1
which gives me a CSV of total network bytes/sec at 1-second intervals, which I can then parse to determine an average. However, I need to monitor only traffic sent/received on a single port or by a single process (port would be better).
I've done a good amount of googling and can't find anything built into Windows to do this (I've also looked at netstat). I am open to any free third-party tools that can do this; they just need to run from the command line and produce some kind of log.
If you want to implement something yourself, you can write an Upper-Layer Windows Filter driver:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff564862(v=vs.85).aspx#possible_driver_layers
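Short of writing a driver, one rough approximation (an assumption on my part, and per-process only, not per-port) is the built-in per-process IO counters. Note that they count disk as well as network bytes, so this only gives an upper bound:
logman create counter ProcIO -c "\Process(myprocess)\IO Data Bytes/sec" -f csv -o C:\proc_io.log -si 1
Here "myprocess" is a placeholder for the instance name of the process you want to watch.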

Hadoop fair scheduler open file error?

I am testing the fair scheduler mode for job assignment; however, I get the following error:
java.io.IOException: Cannot run program "bash": java.io.IOException: error=24, Too many open files
After googling, most results say to check how many files are currently open in the system (with the Unix command lsof) and how that number relates to your system limit (checked with the bash command ulimit -n). Increasing the maximum number of open files is a short-term solution, in my opinion.
Is there any way to avoid this?
Given that your system is reaching the #(max open files) limit, you should check:
How many other operations are running on the system?
Are they heavily opening many files?
Is your Hadoop job itself heavily opening many files?
Is the current limit for #(max open files) too small on your system? (You can google typical values.) If it is too small, consider increasing it.
I think increasing the #(max open files) limit will work for now, but in the long term you might run into the problem again if #1, #2, and #3 are not addressed; a quick way to check them is sketched below.
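To check those points before (or after) raising the limit, a quick sketch (the PID is a placeholder for your TaskTracker or child-task JVM):
# How many file descriptors does a given process hold open?
lsof -p <pid> | wc -l
# Which processes hold the most open files system-wide? (column 2 of lsof output is the PID)
lsof | awk '{print $2}' | sort | uniq -c | sort -rn | head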

How to avoid "Too many open files" issue in load balancer, what are side effects of setting ulimit -n to a higher value?

When load balancing a production system under a heavy request load, or while load testing it, we face a 'Too many open files' issue like the one below.
[2011-06-09 20:48:31,852] WARN - HttpCoreNIOListener System may be unstable: IOReactor encountered a checked exception : Too many open files
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvent(DefaultListeningIOReactor.java:129)
at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvents(DefaultListeningIOReactor.java:113)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:315)
at org.apache.synapse.transport.nhttp.HttpCoreNIOListener$2.run(HttpCoreNIOListener.java:253)
at java.lang.Thread.run(Thread.java:662)
This exception occurs in the load balancer if the maximum number of files allowed to be open (ulimit) is set too low.
We can fix the above exception by increasing the ulimit to a higher value (655350 here):
ulimit -n 655350
However, setting ulimit -n to a higher number may affect the overall performance of the load balancer, and hence the response time of our website, in unforeseen ways. Are there any known side effects of setting ulimit -n to a higher value?
The side effects will be exactly that: under normal circumstances you want to limit the number of open file handles for a process so it doesn't exhaust the available file descriptors or do other bad things due to programming errors or DoS attacks. If you're confident that your system won't do that (or plainly can't work within the normal limits!), then there are no other side effects.
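Also note that ulimit -n only changes the current shell session. A sketch of making the limit persistent, assuming PAM's pam_limits is in use and "lbuser" is the account running the load balancer (both are assumptions):
# Append to /etc/security/limits.conf
lbuser  soft  nofile  65535
lbuser  hard  nofile  655350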

Analyze Local Network Traffic, Update Quota with tshark and BASH [duplicate]

This question already has answers here: How do I calculate network utilization for both transmit and receive (2 answers). Closed 2 years ago.
I have a slightly weird problem and I really hope someone can help with this:
I go to university and the wireless network here issues every login a certain quota/week (mine is 2GB). This means that every week, I am only allowed to access 2GB of the Internet - my uploads and downloads together must total at most 2GB (I am allowed access to a webpage that tells me my remaining quota). I'm usually allowed a few grace KB but let's not consider that for this problem.
My laptop runs Ubuntu and has the conky system monitor installed, which I've configured to display (among other things) my remaining wireless quota. Originally, I had conky hit the webpage and grep for my remaining quota. However, since my conky refreshes every 5 seconds and I'm on the wireless connection for upwards of 12 hours, checking the webpage itself kills my wireless quota.
To solve this problem, I figured I could do one of two things:
Hit the webpage much less frequently so that doing so doesn't kill my quota.
Monitor the wireless traffic at my wireless card and keep subtracting it from 2GB
(1) is what I've done so far: I set up a cron job to hit the webpage every minute and store the result in a file on my local filesystem. Conky then reads this file; no need for it to hit the webpage, and no loss of wireless quota thanks to conky.
This solution is a win by a factor of 12, which is still not enough. However, I'm a fan of realtime data and will not reduce the cron frequency further.
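For reference, a sketch of that cron approach (the quota URL and the parsing pattern are hypothetical placeholders for the real page):
# crontab entry: fetch the quota page once a minute and cache the value for conky
* * * * * curl -s https://quota.example.edu/status | grep -o '[0-9]* KB' > /tmp/quota.txt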
So, the only other solution I have is (2). This is when I found out about Wireshark and its command-line version, tshark. Now, here's what I think I should do:
daemonize tshark
set tshark to monitor the amount (in KB or B or MB - I can convert this later) of traffic flowing through my wireless card
keep appending this traffic information to file1
sum up the traffic information in file1 and subtract it from 2GB; store the result in file2
set conky to read file2 - that is my remaining quota
set up a cron job to delete or erase the contents of file1 every Monday at 6:30 AM (that's when the weekly quota resets)
At long last, my questions:
Do you see a better way to do this?
If not, how do I setup tshark to make it do what I want? What other scripts might I need?
If it helps, the website reports my remaining quota in KB.
I've already looked at the tshark man page, which unfortunately makes little sense to me, being the network-n00b that I am.
Thank you in advance.
Interesting question. I've no experience using tshark, so personally I would approach this using iptables.
Looking at:
[root@home ~]# iptables -nvxL | grep -E "Chain (INPUT|OUTPUT)"
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Chain OUTPUT (policy ACCEPT 9763462 packets, 1610901292 bytes)
we see that iptables keeps a tally of the bytes that pass through each chain. So one can presumably monitor bandwidth usage as follows:
When your system starts up, retrieve your remaining quota from the web
Zero the byte tally in iptables (use the -Z option)
Every X seconds, get usage from iptables and deduct from quota
Here are some examples of using iptables for IP accounting.
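As a concrete illustration, a minimal sketch of that polling loop (run as root; the file path, 5-second interval, and field position are assumptions based on the sample output above):
#!/bin/bash
# Zero the byte tallies once at startup (step 2 above)
iptables -Z INPUT
iptables -Z OUTPUT
while true; do
    # Field 7 of the chain header line is the byte count (see the sample output above)
    in_bytes=$(iptables -nvxL INPUT | awk 'NR==1 {print $7}')
    out_bytes=$(iptables -nvxL OUTPUT | awk 'NR==1 {print $7}')
    # Running total of bytes used; conky can subtract this from the cached quota
    echo $((in_bytes + out_bytes)) > /tmp/bytes_used
    sleep 5
done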
Caveats
There are some drawbacks to this approach. First of all, you need root access to run iptables, which means you either need conky running as root, or a cron daemon that writes the current values to a file conky has access to.
Also, not all INPUT/OUTPUT packets may count towards your bandwidth allocation, e.g. intranet access, DNS, etc. One can filter out only the relevant connections by matching them and placing them in a separate iptables chain (examples in the link given above). An easier approach (if the disparity is not too large) would be to occasionally grab your real quota from the web, reset your tallies, and start again.
It also gets a little tricky when you have existing iptables rules that are either complicated or use custom chains. You'll then need some knowledge of iptables to retrieve the right values.
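For the tshark route the asker originally had in mind, a hedged sketch (wlan0 is an assumed interface name; capturing requires root or capture privileges):
# Capture on the wireless interface for 60 seconds, then print aggregate frame/byte totals
tshark -i wlan0 -a duration:60 -q -z io,stat,0
A wrapper script could parse the byte total from that report and append it to file1, as in the asker's plan.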
