Can't install sample dataset on Kibana - Forbidden Error - macOS

Installed Kibana and Elasticsearch on my macOS Catalina via Brew (not sudo), but I'm not able to install the sample data sets. Anyone have any idea why I'm getting this Forbidden error and how to resolve? The error message is on the bottom right of the picture

Go to the Elasticsearch config file (elasticsearch.yml), uncomment the path.logs line, and fill it in with the right path.

Check that you have enough disk space available (usage below the 90% high watermark).
A good way to find the reason for any error is the logs, if available :)
I was trying to load the Sample Data (eCommerce orders, flights, web logs) into my Kibana and was getting an error. The logs are shown below.
elasticsearch.log
[o.e.c.r.a.DiskThresholdMonitor] high disk watermark [90%] exceeded on [/Users/xyz/Installs/ElasticSearch/elasticsearch-7.9.3/data/nodes/0] free: 15.1gb[6.4%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete
My Mac has 250 GB of space in total; I freed up an extra 20 GB and then it worked. Please check that you have enough disk space free (usage should stay below the 90% high watermark).
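If you want to check this from Elasticsearch itself rather than the OS, the cat allocation API shows per-node disk usage (a read-only diagnostic call, shown here in Kibana Dev Tools console syntax):

```
GET _cat/allocation?v
```

The disk.percent column tells you how close each node is to the 90% high watermark.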

Go to Management -> Index Patterns -> Create index pattern.

Looks like I hit the threshold where the indices get locked because there is no more disk space, so I had to unlock the indices manually.
https://discuss.elastic.co/t/unable-to-create-index-pattern-from-kibana/167184
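For reference, the manual unlock from that thread boils down to clearing the block after freeing disk space (older Elasticsearch versions do not release it automatically); in Dev Tools console syntax:

```
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```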

SSD got only 500 MB reserved for the system and won't let me delete this partition nor allocate the 450 GB of free space that is not allocated [migrated]

This question was migrated from Stack Overflow because it can be answered on Super User.
Migrated yesterday.
I bought a new SSD to use in my old computer, and once I plugged it in and started the allocation process, it only allocated 500 MB, said to be reserved for the system, leaving the remaining 450 GB unallocated. The disk also won't let me delete this partition nor allocate the empty space, even though I can click to do it.
Disk 1 is the new one
So, after that I tried Delete this partition to retry the process.
Delete partition action
And strangely it did not delete anything, since the partition is still there, and once I plugged the disk in again, even the name returned.
Because of this, I then tried at least to allocate the rest of the space, as shown in the next image. Notice that in this image you can also see the deleted partition that doesn't go away.
Allocate partition action
And so I waited for about 15 minutes, just for it to show a screen saying:
Failed to complete the operation because the Disk Management console display is not up to date. Update the display using the update task. If the problem persists, close the Disk Management console and restart Disk Management or restart your computer.
And so I did, only to return to the same situation.
This is the image of the message that appeared.

Unexpected reset of read_only_allow_delete

In some of my indices, I'm setting "index.blocks.read_only_allow_delete": true using the PUT /index/_settings API call. But after around 10 seconds, the setting disappears and the index is writable again.
I'm wondering if this could be a bug in ES, as in version 6.8 a change was made to automatically reset this setting when a node whose disk had gone over the flood stage dropped back below the thresholds.
I'm experiencing that odd behaviour in ES 7.9. What I expected is that, if ES set the setting to true because of the watermarks, then it could reset it to false later; but if an operator changed the setting to true manually, ES would respect that setting.
These are the docs where I read about that behaviour:
Controls the flood stage watermark, which defaults to 95%. Elasticsearch enforces a read-only index block ( index.blocks.read_only_allow_delete ) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block is automatically released when the disk utilization falls below the high watermark.
Cross-posted here.
I ended up using index.blocks.read_only instead, as that one is not updated by Elasticsearch automatically.
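For example (my-index is a placeholder name), in Dev Tools console syntax:

```
PUT my-index/_settings
{
  "index.blocks.read_only": true
}
```

Unlike read_only_allow_delete, this block is not touched by the disk-threshold monitor, so it stays in place until you clear it yourself.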

Elasticsearch bulk update is extremely slow

I am indexing a large amount of daily data, ~160 GB per index, into Elasticsearch. I am facing a case where I need to update almost all the docs in the indices with a small amount of data (~16 GB) in the format
id1,data1
id1,data2
id2,data1
id2,data2
id2,data3
.
.
.
My update operations start at 16000 lines per second, and over 5 minutes the rate comes down to 1000 lines per second and doesn't go up after that. The update process for these 16 GB of data currently takes longer than indexing the entire 160 GB.
My conf file for the update operation currently looks as follows
output {
  elasticsearch {
    action => "update"
    doc_as_upsert => true
    hosts => ["host1","host2","host3","host4"]
    index => "logstash-2017-08-1"
    document_id => "%{uniqueid}"
    document_type => "daily"
    retry_on_conflict => 2
    flush_size => 1000
  }
}
The optimizations I have done to speed up indexing in my cluster, based on the suggestions at https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html, are:
Setting "indices.store.throttle.type" : "none"
Index "refresh_interval" : "-1"
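If it helps, the two settings above can be applied like this in Dev Tools console syntax (the index name is just the one from the question; indices.store.throttle.type was a cluster-level setting in that era of Elasticsearch and was removed in later versions):

```
PUT _cluster/settings
{
  "transient": { "indices.store.throttle.type": "none" }
}

PUT logstash-2017-08-1/_settings
{
  "index": { "refresh_interval": "-1" }
}
```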
I am running my cluster on 4 d2.8xlarge EC2 instances. I have allocated 30 GB of heap to each node.
While the update is happening, barely any CPU is used and the load is very low as well.
Despite everything, the update is extremely slow. Is there something obvious that I am missing that is causing this issue? Looking at the threadpool data, I find that the number of threads working on bulk operations is constantly high.
Any help on this issue would be really helpful
Thanks in advance
There are a couple of rule-outs to try here.
Memory Pressure
With 244GB of RAM, this is not terribly likely, but you can still check it out. Find the jstat command in the JDK for your platform, though there are visual tools for some of them. You want to check both your Logstash JVM and the ElasticSearch JVMs.
jstat -gcutil -h7 {PID of JVM} 2s
This will give you a readout of the various memory pools, garbage-collection counts, and GC timings for that JVM as it works. It will update every 2 seconds and print headers every 7 lines. Spending excessive time in FGCT (full-GC time) is a sign that your heap is under-allocated.
I/O Pressure
The d2.8xlarge is a dense-storage instance, and may not be great for a highly random, small-block workload. If you're on a Unix platform, top will tell you how much time you're spending in IOWAIT state. If it's high, your storage isn't up to the workload you're sending it.
If that's the case, you may want to consider provisioned IOP EBS instances rather than the instance-local stuff. Or, if your stuff will fit, consider an instance in the i3 family of high I/O instances instead.
Logstash version
You don't say which version of Logstash you're using. This being StackOverflow, you're likely on 5.2. If that's the case, this isn't a rule-out.
But, if you're using something in the 2.x series, you may want to set the -w flag to 1 at first, and work your way up. Yes, that's single-threading this. But the ElasticSearch output has some concurrency issues in the 2.x series that are largely fixed in the 5.x series.
With Elasticsearch version 6.0 we had exactly the same issue of slow updates on AWS, and the culprit was slow I/O. The same data was upserting completely fine on a local test stack, but once in the cloud on an EC2 stack, everything died after an initial burst of speedy inserts lasting only a few minutes.
The local test stack was a low-spec server in terms of memory and CPU, but it contained SSDs.
The EC2 stack used EBS volumes of the default gp2 type with 300 IOPS.
Converting the volumes to type io1 with 3000 IOPS solved the issue and everything got back on track.
I am using the Amazon AWS Elasticsearch Service, version 6.0. I needed heavy writes/inserts from a series of JSON files into Elasticsearch, for 10 billion items. The elasticsearch-py bulk write speed was really slow most of the time, with only occasional bursts of high-speed writes. I tried all kinds of methods, such as splitting the JSON files into smaller pieces, reading the JSON files with multiple processes, and parallel_bulk inserts into Elasticsearch; nothing worked. Finally, after I upgraded to an io1 EBS volume with 10000 write IOPS, everything went smoothly.

Solr ate all Memory and throws -bash: cannot create temp file for here-document: No space left on device on Server

I had Solr running for a long time, approximately 2 weeks, and then I saw that Solr had eaten around 22 GB of my server's 28 GB of RAM.
While checking the status of Solr using bin/solr -i, it throws -bash: cannot create temp file for here-document: No space left on device
I stopped Solr and restarted it. It is working fine now.
What is actually the problem? I don't get it.
And what is the solution?
I never want Solr to stop/halt while it is running.
First you should check the space on your file system, for example using df -h, and post the output here.
Is there any mount point without free space?
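A minimal version of that check:

```shell
# Show usage for all mounted filesystems in human-readable units;
# look at the Use% column for any mount at or near 100%.
df -h
```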
Second: find out the reason why there is no space left. Your question covers two different things: no space left on the file system, and big RAM usage.
Solr stores two different kinds of data: the search index and the data itself.
Storing the data is only needed if you want to output the documents after finding them in the index, for example if you want to use highlighting. So take a look at your schema.xml and decide for every single field whether it must be stored or whether indexing the field is enough for your needs. Use the stored attribute for that.
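As a sketch (the field name and type here are hypothetical, not from the question), a schema.xml entry for a field that is searchable but not stored might look like:

```xml
<!-- hypothetical field: searchable (indexed) but not returned in results (not stored) -->
<field name="body" type="text_general" indexed="true" stored="false"/>
```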
Next: if you rebuild the index, keep in mind that you need double the space on disk during the rebuild.
You could also think about moving your index/data files to another disk.
If you have solved your "free space" problem on disk, you probably won't have a RAM issue any more.
If there is still a RAM problem, please post your Java start parameters. With those you can define how much RAM is available to Solr. Solr needs a lot of virtual RAM, but only a moderate amount of physical RAM.
Also, you could post the output of your logfile.

PVCS service goes down once the server's physical memory usage becomes high. What's the issue and how do I resolve it?

Our PVCS service goes down once the physical memory usage of the server goes high. Once the server restarts (not recommended), the service is up again. Is there any permanent fix for this?
I resolved this issue by increasing the heap size parameters... :-)
1. On the server system, open the following file in a text editor:
Windows as of VM 8.4.6: VM_Install\vm\common\bin\pvcsrunner.bat
Windows prior to VM 8.4.6: VM_Install\vm\common\bin\pvcsstart.bat
UNIX/Linux: VM_Install/vm/common/bin/pvcsstart.sh
2. Find the following line:
set JAVA_OPTS=
And set the values of the following parameters as needed:
-Xmsvaluem -Xmxvaluem
3. If you are running a VM release prior to 8.4.3, make sure -Dpvcs.mx= is followed by the same value shown after -Xmx.
4. Save the file and restart the server.
The following is a rule of thumb when increasing the values for -Xmx:
•256m -> 512m
•512m -> 1024m
•1024m -> 1280m
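Putting steps 2 and 3 together, the edited line might end up looking like this (1024m is just the rule-of-thumb value from above, not a universal recommendation):

```
set JAVA_OPTS=-Xms1024m -Xmx1024m
```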
As Riant points out above, adjusting the HEAP size is your best course of action here. I actually supported PVCS for nine years until this time in 2014 when I jumped ship. Riant's numbers are exactly what I would recommend.
I would actually counsel a lot of customers to set -Xms and -Xmx to the same value (basically start it at 1024) because if your PDBs and/or your user community are large you're going to hit the ceiling quicker than you might realize.
