IBM Cloud Private space issue on Docker devmapper - ibm-cloud-private

I got the following error.
devmapper: Thin Pool has 82984 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
Can someone assist?

We ran tests in our lab environment and applied the solution from the Docker docs, https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#increase-capacity-on-a-running-device, under the topic "Resize a direct-lvm thin pool", and it solved the issue.
I had not extended the thin pool LV when the problem occurred in PROD, which gave the impression that the resolution would be very difficult, but I just executed lvextend -l+100%FREE -n docker/thinpool and Docker recognized the newly available space and went back to creating containers without a problem.
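For anyone hitting the same error, here is a minimal sketch of that resize procedure (it assumes the direct-lvm setup from the Docker doc, with the thin pool in a volume group named docker as in the command above, and a newly attached disk at /dev/xvdg; substitute your own device):
sudo lvs                                       # check Data% usage of the thin pool
sudo pvcreate /dev/xvdg                        # prepare the new disk
sudo vgextend docker /dev/xvdg                 # add it to the volume group
sudo lvextend -l+100%FREE -n docker/thinpool   # grow the thin pool
Docker picks up the extra space without a restart; docker info shows the new Data Space Total.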

Related

Where is Max Stack Size set on ORACLE database machines running ORACLE LINUX 7 (on top of VMWARE)

We had an issue where Oracle advised us that our stack size was too small. Running ulimit -Ss as the oracle user showed 10240k (a previously recommended setting). However, when looking at an Oracle process (pmon, for example) and running cat /proc/<pid>/limits, we would see a max stack size of 2MB, so it seems the 10MB setting was not taking effect.
Oracle recommended adding the line "oracle soft stack 16384" to /etc/security/limits.conf, but this line seems to have no effect on my servers (and yes, I rebooted after adding it).
I'd be grateful if someone could shed some light on where it is actually being set
Do you use systemd? If you start the database via a script executed from systemd, then /etc/security/limits.conf is intentionally ignored, and you have to set the limits again in the systemd unit file.
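For example, a minimal sketch, assuming the database is started by a unit named oracle-db.service (hypothetical; substitute your actual unit name). The unit-file equivalent of the stack ulimit is LimitSTACK=, and 16M matches Oracle's recommended 16384 KB:
# drop-in created with: sudo systemctl edit oracle-db.service
[Service]
LimitSTACK=16M
Then reload and restart, and verify against the running process (the pmon process name pattern may differ on your system):
sudo systemctl daemon-reload
sudo systemctl restart oracle-db.service
grep -i stack /proc/$(pgrep -f ora_pmon | head -1)/limits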

Databricks and Delta cache setting

I am trying to follow the instructions on the MSFT website to use delta cache and hoping someone would help me understand it a little better:
https://learn.microsoft.com/en-us/azure/databricks/delta/optimizations/delta-cache
So in the guide it mentions that I should use the Standard_E or L series of VMs. Our workload is currently set to use F series machines, and when I tried using only E or L, the job seemed to run longer and use more DBUs.
I did, however, notice that the Dv3 series also allows delta caching (e.g., Standard_D16s_v3 VMs). I tried running some of our workloads on those machine types and noticed that the storage tab now shows a screen similar to the one in the MSFT docs.
The problem is that I am not sure whether that is the right way to go about this. The reason I wanted to try the Dv3 VMs was that they are relatively comparable to the F series but also seem to allow delta caching.
I am also wondering if the MSFT recommendation of using the following settings is correct or if they can be different:
spark.databricks.io.cache.maxDiskUsage 50g
spark.databricks.io.cache.maxMetaDataCache 1g
spark.databricks.io.cache.compression.enabled false
If anyone else has played with this and can recommend what they did, it would be much appreciated.
As background, we spin up the Databricks clusters using our Databricks Linked Service (from ADF), and in that linked service we put the config settings above. This is what sends the config settings to the automated clusters that are spun up when we execute Databricks notebooks through ADF.
Thank you
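For what it's worth, a sketch of how those settings can be passed from the ADF side, assuming the linked service JSON's newClusterSparkConf property (part of the ADF Databricks linked service schema; the values are the ones from the MSFT doc, not a recommendation):
"newClusterSparkConf": {
    "spark.databricks.io.cache.enabled": "true",
    "spark.databricks.io.cache.maxDiskUsage": "50g",
    "spark.databricks.io.cache.maxMetaDataCache": "1g",
    "spark.databricks.io.cache.compression.enabled": "false"
}
Note that spark.databricks.io.cache.enabled true is what turns the cache on for non-L-series workers; on the Ls series the delta cache is, as far as I know, enabled by default.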

"There are no OCR keys; creating a new key encrypted with given password" Crashes when running Chainlink node

I am setting up a Chainlink node on AWS EC2 + AWS RDS (PostgreSQL) and have followed every step in the documentation (https://docs.chain.link/docs/running-a-chainlink-node/).
Everything runs smoothly until the OCR keys creation step. Once it gets here, it shows "There are no OCR keys; creating a new key encrypted with given password". This is supposed to happen but the docker container exits right after (see image below).
Output after OCR keys creation
I have tried the following:
Checking whether there is a problem with the specific PostgreSQL table these keys are stored in (public.encrypted_ocr_key_bundles), which gets populated when this step succeeds. Nothing here so far.
Using a different version of the Chainlink Docker image (see the Chainlink Docker hub). I am currently using version 0.10.0. No success either, even with the latest ones.
Using AWS CloudFormation to "let AWS + Chainlink" take care of this, but even so I encountered similar problems, so no success.
I have thought about populating the OCR table manually with a query, but I am far from having the proper OCR key-generation knowledge (or a script) in hand, so I do not like this option.
Does anybody know what else to try/where the problem could be?
Thanks a lot in advance!
UPDATE: It was a simple memory problem. The AWS micro instance (1 GB RAM) was running out of memory when the OCR keys were generated. I only got a log of the error after switching to an updated version of the CL Docker image. In conclusion: migrate to a bigger instance. Should've thought of that, but learning never stops!
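If anyone hits the same symptom, a quick way to confirm the out-of-memory theory before resizing (standard Docker/Linux commands; the container name chainlink is illustrative):
dmesg | grep -i 'out of memory'                           # did the kernel OOM killer fire?
docker inspect --format '{{.State.OOMKilled}}' chainlink  # did Docker record an OOM kill?
docker stats chainlink                                    # watch memory usage live during key generation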

Huge performance hit on a simple Go server with Docker

I've tried several things to get to the root of this, but I'm clueless.
Here's the Go program. It's just one file and has a /api/sign endpoint that accepts POST requests. These POST requests have three fields in the body, and they are logged in a sqlite3 database. Pretty basic stuff.
I wrote a simple Dockerfile to containerize it. Uses golang:1.7.4 to build the binary and copies it over to alpine:3.6 for the final image. Once again, nothing fancy.
I use wrk to benchmark performance. With 8 threads and 1k connections for 50 seconds (wrk -t8 -c1000 -d50s -s post.lua http://server.com/api/sign) and a lua script to create the post requests, I measured the number of requests per second between different situations. In all situations, I run wrk from my laptop and the server is in DigitalOcean VPS (2 vCPUs, 2 GB RAM, SSD, Debian 9.4) that's very close to me.
Directly running the binary produced 2979 requests/sec.
Docker (docker run -it -v $(pwd):/data -p 8080:8080 image) produced 179 requests/sec.
As you can see, the Docker version is over 16x slower than running the binary directly. Everything else is the same during both experiments.
I've tried the following things and there is practically no improvement in performance in the Docker version:
Tried using host networking instead of bridge. There was a slight increase to around 190 requests/sec, but it's still miserable.
Tried increasing the limit on the number of file descriptors in the container version with --ulimit nofile=262144:262144. No improvement.
Tried different go versions, nothing.
Tried debian:9.4 for the final image instead of alpine:3.7 in the hope that it's musl that's performing terribly. Nothing here either.
(Edit) Tried running the container without a mounted volume and there's still no performance improvement.
I'm out of ideas at this point. Any help would be much appreciated!
Using an in-memory sqlite3 database completely solved all performance issues!
db, err = sql.Open("sqlite3", "file:dco.sqlite3?mode=memory")
I knew there was a disk I/O penalty associated with Docker's abstractions (even on Linux; I've heard it's worse on macOS), but I didn't know it would be ~16x.
Edit: Using an in-memory database isn't really an option most of the time, so I found another sqlite-specific solution. Before all database operations, run the following to switch sqlite to WAL mode instead of the default rollback journal:
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
This dramatically improved the Docker version's performance to over 2.7k requests/sec!
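For anyone else doing this from Go, a minimal sketch of applying the pragmas once at startup; this assumes the mattn/go-sqlite3 driver, and the file name comes from the snippet above:
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", "file:dco.sqlite3")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// Switch to WAL and relax fsync before any other database work.
	if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec("PRAGMA synchronous=NORMAL"); err != nil {
		log.Fatal(err)
	}
	// ... set up the /api/sign handler and start the server as before.
}
The driver also accepts these as DSN parameters (file:dco.sqlite3?_journal_mode=WAL&_synchronous=NORMAL), which is safer with database/sql's connection pool: journal_mode=WAL sticks to the database file, but PRAGMA synchronous is per-connection.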

Unable to connect to apache solr - process restart needed

I am facing a strange problem with Solr. After running Solr for a few hours, the client starts reporting an error saying it is unable to contact Solr, although the Solr instance is up on the server.
I can't see any high traffic on the website, which is sometimes the reason for connection refusals.
The issue gets fixed by a Solr restart.
Any idea what is going wrong here?
The answer to most problems can be found in the logs. Thanks to D_K for reminding me.
SEVERE: java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
I have increased the heap size to fix this issue.
java -Xms<initial heap size> -Xmx<maximum heap size>
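For example, if you run Solr with the bundled Jetty (heap values are illustrative; size them to your index and available RAM):
java -Xms512m -Xmx2048m -jar start.jar
On newer Solr versions the same is normally set via SOLR_HEAP in bin/solr.in.sh.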
Also, we reduced the document size by removing unnecessary information that we don't need to retain in Solr.
If you have a client with a long-running connection but a low amount of traffic, you may have a firewall in between. Firewalls keep limited-size state tables, so they eventually drop the mapping for connections they haven't seen traffic on for a while.
Try sending a ping query every 30 minutes or so through that specific connection and see if the issue goes away. If you need to validate it, run Wireshark on the client and see whether the client is getting RST (reset) packets from an unexpected endpoint (that would be the firewall).
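Solr's ping handler is a cheap query to use for this (core name and interval are illustrative; this assumes the default PingRequestHandler is configured):
*/30 * * * * curl -s http://localhost:8983/solr/your_core_name/admin/ping > /dev/null
Note that a cron probe like this only verifies the server side; to keep the client's own connection mapping alive, the ping needs to go through the client's connection pool.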
You just need to add your collection in Solr by following the steps given in this URL (https://drupal.stackexchange.com/questions/95897/apache-solr-4-6-0-insta...), then select your collection from your Solr instance, which is running on localhost or the live site (http://localhost:8983/solr/), and go to the Schema tab. Click the Schema tab and you can see the schema file attached in the apachesolr module.
You then just need your schema URL, which looks like this: http://localhost:8983/solr/your_core_name/. Now add this URL in the apachesolr module.
Then your Drupal site will show that it has contacted the Apache Solr server.
