Unable to read 'format.json' from http://111.202.*****:9000/data1: Do not upgrade one server at a time - minio

When I first deploy a MinIO cluster on two TrueNAS nodes, the container logs the errors below. I don't know why.
Unable to read 'format.json' from http://111.202.****:9000/data1: Do not upgrade one server at a time - please follow the recommended guidelines mentioned here https://github.com/minio/minio#upgrading-minio for your environment
Unable to read 'format.json' from http://111.202.****:9000/data1: Do not upgrade one server at a time - please follow the recommended guidelines mentioned here https://github.com/minio/minio#upgrading-minio for your environment
Unable to read 'format.json' from http://111.202.****:9000/data2: Do not upgrade one server at a time - please follow the recommended guidelines mentioned here https://github.com/minio/minio#upgrading-minio for your environment
Unable to read 'format.json' from http://111.202.****:9000/data2: Do not upgrade one server at a time - please follow the recommended guidelines mentioned here https://github.com/minio/minio#upgrading-minio for your environment
API: SYSTEM()
Time: 11:17:18 UTC 07/22/2022
Error: Read failed. Insufficient number of disks online (*errors.errorString)
6: internal/logger/logger.go:270:logger.LogIf()
5: cmd/prepare-storage.go:242:cmd.connectLoadInitFormats()
4: cmd/prepare-storage.go:302:cmd.waitForFormatErasure()
3: cmd/erasure-server-pool.go:109:cmd.newErasureServerPools()
2: cmd/server-main.go:694:cmd.newObjectLayer()
1: cmd/server-main.go:531:cmd.serverMain()
Waiting for a minimum of 2 disks to come online (elapsed 22s)
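For reference, a distributed MinIO setup requires every node to run the same MinIO version and an identical server command listing all endpoints; the "Do not upgrade one server at a time" message typically appears when nodes disagree on version or cannot read each other's format.json. A docker-compose sketch for one of the two nodes (image tag, hostnames, volume paths, and credentials are placeholders, not values from the question):

```yaml
services:
  minio:
    image: minio/minio   # pin an explicit RELEASE tag and use the SAME one on both nodes
    # {1...2} is MinIO's ellipsis syntax for enumerating hosts and drives;
    # both nodes must run this exact same command.
    command: server http://node{1...2}:9000/data{1...2}
    environment:
      MINIO_ROOT_USER: minioadmin            # placeholder credentials
      MINIO_ROOT_PASSWORD: change-me-please
    volumes:
      - /mnt/pool/minio/data1:/data1         # placeholder host paths
      - /mnt/pool/minio/data2:/data2
    ports:
      - "9000:9000"
```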

Related

Akeneo PIM No alive nodes found in your cluster ERROR

I keep getting the same error when starting the Akeneo Community Edition! It seems to be an error caused by Elasticsearch, but I cannot figure out what is wrong.
The Error message:
[OK] Database schema created successfully!
Updating database schema...
37 queries were executed
[OK] Database schema updated successfully!
Reset elasticsearch indexes
In StaticNoPingConnectionPool.php line 50:
No alive nodes found in your cluster
I'm running on an Uberspace server without Docker, and I'm trying to install it as described here:
https://docs.akeneo.com/4.0/install_pim/manual/installation_ee_archive.html, but with the Community Edition instead.
Has anyone had the same error and knows how to help me out?
Maybe it's a problem with the .env entry pointing at Elasticsearch. My .env: APP_INDEX_HOSTS=localhost:9200
Can you verify that the Elasticsearch search server is available on localhost:9200 when accessing it via curl/Postman/Sense or something else?
That error usually means the node is either not running, or not running on the configured port.
Also pay attention that your server meets the system requirements - https://docs.akeneo.com/4.0/install_pim/manual/system_requirements/system_requirements.html
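The reachability check suggested above can be as simple as a curl probe (host and port taken from the APP_INDEX_HOSTS value in the question):

```shell
# Probe the Elasticsearch HTTP endpoint; a healthy node answers on /
if curl -s --max-time 5 http://localhost:9200/ >/dev/null; then
  ES_STATUS="reachable"
else
  ES_STATUS="not reachable"
fi
echo "Elasticsearch on localhost:9200 is $ES_STATUS"
```

If the node is up, `curl http://localhost:9200/_cluster/health?pretty` also shows whether the cluster status is green, yellow, or red.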

Updating service [default] (this may take several minutes)...failed

This was working perfectly until exactly four days ago. When I run gcloud app deploy now, it completes the build and then, straight after the build, hangs on "Updating service".
Here is the output:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/just-sleek/regions/us-central1/operations/8260bef8-b882-4313-bf97-efff8d603c5f error [INTERNAL]: An internal error occurred while processing task /appengine-flex-v1/insert_flex_deployment/flex_create_resources>2020-05-26T05:20:44.032Z4316.jc.11: Deployment Manager operation just-sleek/operation-1590470444486-5a68641de8da1-5dfcfe5c-b041c398 errors: [
code: "RESOURCE_ERROR"
location: "/deployments/aef-default-20200526t070946/resources/aef-default-20200526t070946"
message: {
\"ResourceType\":\"compute.beta.regionAutoscaler\",
\"ResourceErrorCode\":\"403\",
\"ResourceErrorMessage\":{
\"code\":403,
\"errors\":[{
\"domain\":\"usageLimits\",
\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200526t070946\'. Limit: 8.0\",
\"reason\":\"limitExceeded\"
}],
\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200526t070946\'. Limit: 8.0\",
\"statusMessage\":\"Forbidden\",
\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/just-sleek/regions/us-central1/autoscalers\",
\"httpMethod\":\"POST\"
}
}"]
I tried the following ways to resolve the error:
I deleted all my previous versions and left only the running version.
I ran gcloud components update; it still fails.
I created a new project, changed the region from [REGION1] to [REGION2], deployed, and am still getting the same error.
I also ran gcloud app deploy --verbosity=debug, which does not give any different result.
I have no clue what is causing this issue or how to solve it. Please assist.
Google is already aware of this issue and it is currently being investigated.
There is a Public Issue Tracker you may 'star' and follow to receive any further updates on this. In addition, you may see workarounds posted there that you can apply temporarily if they suit your setup.
There is currently no ETA for the resolution, but an update will be provided as soon as the team makes progress on the issue.
I resolved this by adding this to my app.yaml:
automatic_scaling:
min_num_instances: 1
max_num_instances: 7
I found the solution here:
https://issuetracker.google.com/issues/157449521
And I was also redirected to:
gcloud app deploy - updating service default fails with code 13 Quota for instances limit exceeded, and 401 unauthorized
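For context, the automatic_scaling block shown above sits at the top level of app.yaml for a Flex service; a minimal sketch (the runtime value is an example, not from the question):

```yaml
runtime: nodejs          # example; use whatever runtime the service actually has
env: flex
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 7   # stays under the regional instance quota of 8 from the error
```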

Unable to Execute Distributed Testing in JMeter 4.0

I have followed all the instructions for setting up distributed testing in JMeter.
But when I run "Remote Start - XXX.XXX.XXX.XXX",
I get "Exception creating connection: XXX.XXX.XXX.XXX; nested exception is: java.io.IOException: java.security.UnrecoverableKeyException: Cannot recover key".
Can anyone tell me why I am getting that exception?
Reference that I used: LINK
This is how I solved the same problem; give it a try, I hope it helps:
When keytool asks "What is your first and last name?", you must reply with rmi, which corresponds to the value of server.rmi.ssl.keystore.alias in jmeter.properties. Do NOT enter any custom value while creating the keystore unless you have also changed server.rmi.ssl.keystore.alias in jmeter.properties.
Since you use JMeter 4.0, you additionally need to follow the steps from the "Setting Up SSL" User Manual chapter.
Alternatively, if you don't want secure RMI communication between the master and slave(s), you can add the following line to the user.properties file:
server.rmi.ssl.disable=true
or pass it via the -J command-line argument:
jmeter -Jserver.rmi.ssl.disable=true
The change has to be done on all engines (master and all the slaves).
References:
Remote hosts and RMI configuration
Configuring JMeter
In JMeter 4.0 you need to generate a keystore that will be used in distributed testing.
Follow this tutorial instead:
http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html
Pay particular attention to:
https://jmeter.apache.org/usermanual/remote-test.html#setup_ssl

Which is the correct way to install a service in Hortonworks?

I'm a bit confused about which is the correct way to install Oozie in a cluster (2 masters or namenodes, 2 workers or datanodes).
1) Basically, the documentation gives a set of command-line instructions you can follow. I don't really know on which machine I should execute these shell instructions, since I have 2 namenodes.
2) Then I also noticed that in the Ambari UI you can use 'Admin' > 'Stack And Versions' > 'Add service' (on the service you want, in this case Oozie).
3) Finally, also from the Ambari UI, you can go to 'Actions' > 'Add Service', which starts the 'Add Service Wizard', where I guess you can install new services.
Which would be the correct way to do it and how?
Whether a beginner using a VM sandbox or a professional Hadoop administrator working on a huge live cluster, you should almost always use Ambari to add a service. That's what it's there for, after all. It drastically reduces the complexity and chance of failure when installing a service, by:
Letting you specify which nodes to install on.
Automatically generating valid configuration (no chance of mis-typing a port number and spending a day debugging!).
Moving all required files into the correct locations on the correct nodes with the correct permissions.
Running smoke tests to ensure installation is successful.
Giving you monitoring/admin of the service once it's running.
@Nachiket is right to say that your options 2 and 3 will have the same outcome. I always use 'Actions' > 'Add Service' just because it's fewer clicks from the home screen.
There are only a few situations where you'd not use Ambari, primarily if you're installing an unsupported version.
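For the curious, the "Add Service" wizard is itself driven by Ambari's REST API; registering a service is the first of the steps the wizard performs (it also assigns components to hosts, installs, and starts them). A sketch of that first call — host, cluster name, and credentials are placeholders, not values from the question:

```shell
# Register the Oozie service with Ambari (what the wizard does behind the scenes).
# The X-Requested-By header is required by Ambari for any write request.
AMBARI_URL="http://ambari-host:8080"   # placeholder Ambari server
CLUSTER="mycluster"                    # placeholder cluster name
curl -s -u admin:admin -H "X-Requested-By: ambari" \
  -X POST "$AMBARI_URL/api/v1/clusters/$CLUSTER/services/OOZIE" \
  || true   # no real cluster here, so a connection failure is expected
echo "POST sent to $AMBARI_URL/api/v1/clusters/$CLUSTER/services/OOZIE"
```

In practice the wizard remains the safer route, for all the reasons listed above.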

JDataConnect license issue with valid free license

We used JDataConnect many years ago to access Microsoft Access databases from Java. For a migration project, we are tentatively thinking about using it again. In the meantime, remote access is possible in principle; I had one successful test a year ago.
Today I am retrying on a different machine with the free license from:
http://www.jnetdirect.com/free-software/jdataconnect-single.html
I have downloaded and installed JDCSetup_4_0.exe (3,679,232 bytes).
After opening the firewall on port 1150, I can in principle connect using JData2_0.sql.$Driver as the driver and a connection string like:
jdbc:JDataConnect:1150//leto/c:\\y_wf\\data\\smartCRM\\smartCRM.mdb
Then I get the error message:
Attempt 1, Connect to JDataServer on server leto port:1150 Result was: java.sql.SQLException: ServerException:You should upgrade the license for this version. The current License is valid only for version 3
SQLState: 01000
VendorError: 0
I am confused, since I used the license key from JNetDirect's web page. There seems to be no version 3 download available. What might be going on here that makes this fail?
JNetDirect support answered the following:
After further verification, I noticed (and corrected) that while the key provided on the download page before the file is downloaded and the main JDataConnect page was the correct one (the one in red below), the one provided on another page (which Wolfgang used) had 1 different digit. It has been corrected.
