I am using OpenShift 3 Pro to run an Elasticsearch server (not the full ELK stack).
To do this I am using this image:
-- https://github.com/lbischof/openshift3-elk
only the elasticsearch part.
After installing it, I use elasticdump to load data from another server.
The process is very long and crashes multiple times. During the dump, the pod is always using all of its 512Mi memory quota.
How can I allow 1024 or 2048 Mi for my Elasticsearch pod?
You can change the resource limits by going to the deployment config in the web console and selecting 'Edit Resource Limits' from the drop-down menu on the right side. You will need to first ensure your Pro account has enough memory associated with it.
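If you prefer the command line, a quick sketch with the oc client would be something like this (assuming the deployment config is named elasticsearch; check the real name with oc get dc):

# set a higher memory limit on the deployment config; this rolls out a new deployment
oc set resources dc/elasticsearch --limits=memory=1024Mi --requests=memory=512Mi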
We have TFS 2017.3 with a separate Code Search server.
We have a huge TFS DB (about 1.6 TB); on the Code Search server we have 700 GB of disk space.
After a few weeks the disk space runs out and Code Search stops working in TFS.
After we increase the disk space, search starts working again.
How can we create a retention policy to delete old Code Search data (the index)? We don't want to keep increasing the disk space.
Search indexing (Code and Work Item) works in two phases:
Bulk Indexing (BI), where all code and work item artifacts in every project/repository under a collection are indexed. This is a time-consuming operation and depends on the size of the artifacts under the collection.
Continuous Indexing (CI), which handles all incremental updates to the artifacts (add/update/delete) and indexes them. This is a notification-based model in which the indexer listens to TFS events and acts on those notifications. CI handles almost all update operations, including CRUD operations at the project/repository/collection level (such as repository renames, project adds/deletes, etc.). The time a CI pass takes again depends on the size of the incremental update. BI always precedes CI, i.e. CI will never run on a project/repository until BI has completed for it.
To clean up the index data and re-index, follow these steps:
Pause indexing for all collections by running the following script on the TFS configuration DB:
https://github.com/Microsoft/Code-Search/blob/master/PauseIndexing.ps1
Log in to the machine where Elasticsearch (ES) is running.
Stop the ES service.
Delete the entire search index folder (something like C:\TfsData\Search\IndexStore, or wherever you configured it to be).
Restart the TFS Job Agent service(s) on the AT machines.
Clear the following tables in each of the collection DBs:
DELETE FROM [Search].[tbl_IndexingUnit]
DELETE FROM [Search].[tbl_IndexingUnitChangeEvent]
DELETE FROM [Search].[tbl_IndexingUnitChangeEventArchive]
DELETE FROM [Search].[tbl_JobYield]
DELETE FROM [Search].[tbl_TreeStore]
DELETE FROM [Search].[tbl_DisabledFiles]
DELETE FROM [Search].[tbl_ResourceLockTable]
Restart the ES service.
Run this script on the TFS configuration DB:
https://github.com/Microsoft/Code-Search/blob/master/ResumeIndexing.ps1
Run this script (pick it from the correct TFS release folder) on each of the collections:
https://github.com/Microsoft/Code-Search/blob/master/TFS_2017Update2/MissingIndexFolderTriggerCollectionIndexing.ps1
Try the last script on a smaller collection first (one with fewer repositories) so that you can verify that indexing happened correctly and the results are queryable.
For more details, please refer to this MSDN blog post: Resetting Search Index in Team Foundation Server.
I was able to reduce the disk usage by deleting the ES folders and reinstalling the Code Search extension; sometimes I also had to run MissingIndexFolderTriggerCollectionIndexing.ps1.
But I came to the conclusion that it was not worth doing: the disk usage grew rapidly back to its original size, so I didn't save anything.
Although Microsoft recommends provisioning disk space of about 35% of the DB size, that is not enough for us, and we increase the size whenever the disk fills up completely (currently about 45% of the DB size).
The conclusion: don't touch the ES index; if the disk fills up, increase the disk size.
Hi, I'm trying to mount a new volume for my DB pod. I ran kubectl describe pod rc-chacha-5064p to see why it's taking so long, and I get the following:
FailedMount AttachVolume.Attach failed for volume "db-xxxx-disk-pv" : googleapi: Error 403: Exceeded limit 'maximum_persistent_disks' on resource 'gke-xxxx-cluster-1-db-pool-xxxxx-xxxx'. Limit: 16.0
Is there a way to raise that limit? I already went through the Google quotas but there is nothing about this kind of restriction. Any help would be appreciated.
This is not a quota issue but a node-level limit. Using the beta APIs, you can use a machine type that can attach a larger number of disks. See https://cloud.google.com/compute/docs/disks/#increased_persistent_disk_limits
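As a sketch, you could add a dedicated node pool on a bigger machine type and let the DB pods land there; the cluster, pool, zone and machine type below are only placeholders:

# larger machine types can attach more persistent disks under the beta limits
gcloud beta container node-pools create db-pool-big \
    --cluster=cluster-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-16 \
    --num-nodes=3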
So, I have a Java app on Heroku that uses the RedisCloud add-on.
The add-on clearly states that the free version comes with a maximum of 30 connections.
The problem is that I'm getting this error:
ERR max number of clients reached
So the first thing I did, obviously, was check the RedisCloud monitor, and to my surprise it shows a limit of 10 connections.
The question:
Why are we getting a connection limit of 10 on RedisCloud when the Heroku add-on page says it should be 30?
It appears that your add-on is using an old version of the plan, from before we launched our Bigger and Improved XXXL Free plan earlier this year.
The easiest way to resolve that is to use the Heroku Toolbelt and run the command:
heroku addons:upgrade rediscloud:30 -a <your app's name>
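If you want to double-check which plan the add-on is actually on before and after the upgrade, you can list the add-ons for the app (the app name is a placeholder, as above):

# lists each add-on attached to the app together with its current plan
heroku addons -a <your app's name>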
I created an Azure Medium instance running Windows Server 2012 and I'm having a problem striping multiple Azure data disks together into a single volume using the Server Manager tool.
In Azure I provisioned the Medium instance and then created 4 data disks of 60 GB each. I then RDP'd into the server, and inside Server Manager under File and Storage Services\Volumes I saw my 4 data disks in the Disks section, listed alongside the C:\ and D:\ drives that come with this instance. I initialized my 4 data disks (later I also tried NOT initializing them), but when I clicked on "Storage Pools" in the nav bar, under the Virtual Disk section I only saw 1 of my data disks.
I saw no way to add any of the other 3 data disks into my Storage Pool and then of course into the subsequent Virtual Disk. This problem limits me to just one data disk in my Virtual Disk. I have tried this many different times and the result is always the same.
Does anyone know what can be causing this or have steps to do the same thing I'm trying to do?
Thanks
If you're wondering why I'm trying to stripe these instead of using just 1 large data disk, this article explains the performance benefits of doing so:
http://blog.aditi.com/cloud/windows-azure-virtual-machines-lessons-learned/
In my blog post I explain how to do this, although perhaps the level of detail you are looking for isn't there. Still, everyone who followed this post (it was a lab) was able to create the striped volume. The blog post is a complete lab; scroll down about halfway to see the section about the striped volume. Let me know if you have any questions.
http://geekswithblogs.net/hroggero/archive/2013/03/20/windows-azure-it-roadshow-lab-i.aspx
Thanks
I hit the same problem and some Googling revealed that this is a bug in Server Manager (sorry, can't find the link). The workaround is to use PowerShell to create the pool. These commands will create a new Storage Pool called "Storage" and assign all the available disks to it:
# Get the unique ID of the "Storage Spaces" subsystem (take the first match)
$spaces = (Get-StorageSubSystem | Where-Object { $_.Model -eq "Storage Spaces" })[0].UniqueID
# Create a pool named "Storage" from all disks that are eligible for pooling
New-StoragePool -FriendlyName "Storage" -StorageSubSystemUniqueId $spaces -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
I am facing a strange problem with Solr. After Solr has been running for a few hours, the client starts reporting errors that it is unable to contact Solr, although the Solr instance is up on the server.
I can't see any high traffic on the website, which is sometimes the reason for connection refusals.
The issue goes away after a Solr restart.
Any idea what is going wrong here?
The answer to most problems can be found in the logs. Thanks D_K for reminding me.
SEVERE: java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
I have increased the heap size to fix this issue.
java -Xms<initial heap size> -Xmx<maximum heap size>
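For example, assuming you start the Jetty bundled with an older Solr release via start.jar, the flags would be passed like this (paths and sizes are placeholders; newer Solr versions can use bin/solr start -m 2g instead):

# start Solr with an initial 512 MB heap and a 2 GB maximum heap
cd /path/to/solr/example
java -Xms512m -Xmx2048m -jar start.jar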
Also, we reduced the document size by removing unnecessary information that we don't need to retain in Solr.
If you have a client with a long-running connection but a low amount of traffic, you may have a firewall in between. Firewalls keep limited-size connection (state) tables, so they eventually drop the mapping for connections they haven't seen traffic on for a while.
Try sending a ping query every 30 minutes or so through that specific connection and see if the issue goes away. If you need to validate it, run Wireshark on the client and see whether the client is getting RST (reset) packets from an unexpected endpoint (that would be the firewall).
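If the client machine is a headless server where Wireshark is awkward, a tcpdump filter along these lines shows the resets (this assumes Solr is listening on the default port 8983):

# capture only TCP packets to/from port 8983 that have the RST flag set
tcpdump -i any 'tcp port 8983 and tcp[tcpflags] & tcp-rst != 0'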
You just need to add your collection in Solr by following the steps given in this URL ( https://drupal.stackexchange.com/questions/95897/apache-solr-4-6-0-insta... ), then select your collection from your Solr instance, which is running on localhost or on the live site (http://localhost:8983/solr/), and go to the Schema tab. Click the Schema tab and you can see the schema file that comes with the apachesolr module.
Now you just need your core URL, which looks like this: http://localhost:8983/solr/your_core_name/. Add this URL in the apachesolr module.
Then your Drupal site will show that it has contacted the Apache Solr server.
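Before pasting the URL into the module, it can help to confirm that the core actually responds; a quick check from the command line (the core name is a placeholder):

# should return a response with "status":"OK" if the core is reachable
curl "http://localhost:8983/solr/your_core_name/admin/ping?wt=json"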