EC2 AMI 2.4/2.5 dsc20/dsc21 dependency issue - amazon-ec2

I'm trying to create a one-node DataStax Community Edition cluster using these guidelines on an EC2 m3.xlarge (eu-west).
Here are the parameters I provided:
--clustername cassandra
--totalnodes 1
--version community
As mentioned in the guidelines, I opened these ports:
22
8888
1024-65355
Here is the error I found in ~/datastax_ami/ami.log:
The following packages have unmet dependencies:
dsc20 : Depends: cassandra (= 2.0.14) but 2.1.4 is to be installed
[ERROR] 04/21/15-12:58:29 sudo service cassandra stop:
cassandra: unrecognized service
[EXEC] 04/21/15-12:58:29 sudo rm -rf /var/lib/cassandra
[EXEC] 04/21/15-12:58:29 sudo rm -rf /var/log/cassandra
[EXEC] 04/21/15-12:58:29 sudo mkdir -p /var/lib/cassandra
[EXEC] 04/21/15-12:58:29 sudo mkdir -p /var/log/cassandra
[ERROR] 04/21/15-12:58:29 sudo chown -R cassandra:cassandra /var/lib/cassandra:
chown: invalid user: `cassandra:cassandra'
[ERROR] 04/21/15-12:58:29 sudo chown -R cassandra:cassandra /var/log/cassandra:
chown: invalid user: `cassandra:cassandra'
[INFO] Reflector loop...
[INFO] 04/21/15-12:58:29 Reflector: Received 1 of 1 responses from: [u'172.31.46.236']
[INFO] Seed list: set([u'172.31.46.236'])
[INFO] OpsCenter: 172.31.46.236
[INFO] Options: {'username': None, 'cfsreplication': None, 'heapsize': None, 'reflector': None, 'clustername': 'cassandra', 'analyticsnodes': 0, 'seed_indexes': [0, 1, 1], 'realtimenodes': 1, 'java7': None, 'opscenter': 'no', 'totalnodes': 1, 'searchnodes': 0, 'release': None, 'opscenterinterface': None, 'version': 'community', 'dev': None, 'customreservation': None, 'password': None, 'email': None, 'raidonly': None, 'javaversion': None}
[ERROR] Exception seen in ds1_launcher.py:
Traceback (most recent call last):
File "/home/ubuntu/datastax_ami/ds1_launcher.py", line 33, in initial_configurations
ds2_configure.run()
File "/home/ubuntu/datastax_ami/ds2_configure.py", line 1058, in run
File "/home/ubuntu/datastax_ami/ds2_configure.py", line 521, in construct_yaml
IOError: [Errno 2] No such file or directory: '/etc/cassandra/cassandra.yaml'
Related GitHub issue: Add support for DSC 2.2 Versions #81
Does anyone know what I've done wrong?
Thanks

There was a bug with the dependencies, as you encountered (Add support for DSC 2.2 Versions #81), which was fixed in AMI 2.5.
Therefore, be sure to use the new AMI. Do not use:
DataStax Auto-Clustering AMI
which is the AMI 2.4 version, instead use:
DataStax Auto-Clustering AMI 2.5.1-pv
or
DataStax Auto-Clustering AMI 2.5.1-hvm

Per the GitHub issue:
Proposed fix committed to dev-2.5 and dev-2.6. Will test today and release today.
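If you launch from the console, just pick the 2.5.1 image for your region. From the CLI, a minimal sketch looks like the following (ami-XXXXXXXX is a placeholder for the 2.5.1-hvm AMI ID in eu-west-1; the key pair and security group names are examples; the user data is the parameter string from the question):
# Placeholder AMI ID -- look up "DataStax Auto-Clustering AMI 2.5.1-hvm" for your region
aws ec2 run-instances \
  --region eu-west-1 \
  --image-id ami-XXXXXXXX \
  --instance-type m3.xlarge \
  --key-name my-key \
  --security-groups cassandra-sg \
  --user-data "--clustername cassandra --totalnodes 1 --version community"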

@QuarkusIntegrationTest: Unable to determine the status of the running process. See the above logs for details

I'm currently learning Quarkus and I have an issue with Native testing.
In this repository (the dev branch), I can package into a JAR and a native binary and run them: AdinhLux/quarkus-1-intro (https://github.com/AdinhLux/quarkus-1-intro/tree/dev).
I'm just encountering an issue when running the command below. It seems Maven is looking for information in target/quarkus.log, but nothing is written there.
mvn verify -Pnative
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.agoncal.quarkus.starting.BookResourceIT
Jul 20, 2022 4:48:58 PM org.jboss.threads.Version <clinit>
INFO: JBoss Threads version 3.4.2.Final
Executing "/Users/adinhlux/development/IntelliJProjects/rest-book/target/rest-book-1.0.0-SNAPSHOT-runner -Dquarkus.http.port=8081 -Dquarkus.http.ssl-port=8444 -Dtest.url=http://localhost:8081 -Dquarkus.log.file.path=/Users/adinhlux/development/IntelliJProjects/rest-book/target/quarkus.log -Dquarkus.log.file.enable=true"
Waited 60 seconds for target/quarkus.log to contain info about the listening port and protocol but no such info was found
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 61.737 s <<< FAILURE! - in org.agoncal.quarkus.starting.BookResourceIT
[ERROR] org.agoncal.quarkus.starting.BookResourceIT.shouldCountAllBooks Time elapsed: 0.012 s <<< ERROR!
java.lang.RuntimeException: java.lang.IllegalStateException: Unable to determine the status of the running process. See the above logs for details
I'm running my project on macOS Monterey (M1) with the following settings:
sdk install java 17.0.4-oracle
brew install --cask graalvm/tap/graalvm-ce-java17
sdk install maven 3.8.6
sdk install quarkus 2.10.2.Final
xattr -r -d com.apple.quarantine /Library/Java/JavaVirtualMachines/graalvm-ce-java17-22.1.0/Contents/Home
export JAVA_HOME=$(/usr/libexec/java_home -v 17)
export PATH=$JAVA_HOME/bin:$PATH
cd $JAVA_HOME/bin
gu install native-image
I resolved my issue.
In application.properties you need this:
quarkus.log.category."org.agoncal".level=DEBUG
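For example, a sketch (org.agoncal is the base package of the test class above; adjust the category to your own package):
# Append the property (assumes the standard src/main/resources layout):
cat >> src/main/resources/application.properties <<'EOF'
quarkus.log.category."org.agoncal".level=DEBUG
EOF
# Then re-run the native integration tests:
mvn verify -Pnative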

Azure Pipeline failed for maven app engine deploy

I have been trying to deploy my App Engine app from an Azure pipeline.
I was able to run mvn clean package, but when I use mvn appengine:deploy it throws a permission error. I went through some related questions, such as:
I cant init Google Cloud SDK on Ubuntu
and
gcloud components update permission denied
What I did: I added a script task before my Maven task in the YAML file.
- script: |
    sudo chown -R $USER /home/vsts/.config/gcloud/config_sentinel
- task: Maven@3
  displayName: 'Maven api/pom.xml'
  inputs:
    mavenPomFile: 'api/pom.xml'
    goals: 'clean package appengine:deploy'
But I'm not sure what the issue is or which other permissions I need to set; my pipeline user here is vsts. Please let me know if I've made any mistake so far.
Error log from the pipeline is below for reference:
Downloaded from central: https://repo.maven.apache.org/maven2/com/google/guava/guava/27.0-jre/guava-27.0-jre.jar (2.7 MB at 3.4 MB/s)
Nov 06, 2019 6:51:59 PM com.google.cloud.tools.managedcloudsdk.install.Downloader download
INFO: Downloading https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.tar.gz to /home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/downloads/google-cloud-sdk.tar.gz
Welcome to the Google Cloud SDK!
WARNING: Could not setup log file in /home/vsts/.config/gcloud/logs, (IOError: [Errno 13] Permission denied: u'/home/vsts/.config/gcloud/logs/2019.11.06/18.52.02.245238.log')
Traceback (most recent call last):
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/bootstrapping/install.py", line 225, in <module>
main()
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/bootstrapping/install.py", line 200, in main
Prompts(pargs.usage_reporting)
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/bootstrapping/install.py", line 123, in Prompts
scope=properties.Scope.INSTALLATION)
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/lib/googlecloudsdk/core/properties.py", line 2269, in PersistProperty
named_configs.ActivePropertiesFile.Invalidate(mark_changed=True)
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py", line 413, in Invalidate
file_utils.WriteFileContents(config.Paths().config_sentinel_file, '')
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/lib/googlecloudsdk/core/util/files.py", line 1103, in WriteFileContents
with FileWriter(path, private=private) as f:
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/lib/googlecloudsdk/core/util/files.py", line 1180, in FileWriter
return _FileOpener(path, mode, 'write', encoding='utf8', private=private)
File "/home/vsts/.cache/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/lib/googlecloudsdk/core/util/files.py", line 1208, in _FileOpener
raise exc_type('Unable to {0} file [{1}]: {2}'.format(verb, path, e))
googlecloudsdk.core.util.files.Error: Unable to write file [/home/vsts/.config/gcloud/config_sentinel]: [Errno 13] Permission denied: '/home/vsts/.config/gcloud/config_sentinel'
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.958 s
[INFO] Finished at: 2019-11-06T18:52:02Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.google.cloud.tools:appengine-maven-plugin:2.0.0:deploy (default-cli) on project configuration-api: Execution default-cli of goal com.google.cloud.tools:appengine-maven-plugin:2.0.0:deploy failed: com.google.cloud.tools.managedcloudsdk.command.CommandExitException: Process failed with exit code: 1 -> [Help 1]
As requested, posting my YAML, which may help others solve the same problem.
I now grant permissions on the parent gcloud directory, since, as shown above, other paths inside it were causing issues too.
- script: |
    sudo chown -R $USER:$USER /home/$USER/.config/gcloud/
- task: Maven@3
  displayName: 'Maven api/pom.xml'
  inputs:
    mavenPomFile: 'api/pom.xml'
    goals: 'clean package appengine:deploy'
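If you want to confirm the fix on the agent, a quick diagnostic sketch can go in the same script task before Maven runs:
whoami                             # vsts on Microsoft-hosted agents
ls -ld /home/$USER/.config/gcloud  # should now be owned by that user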
I use $USER so it works for everyone. Since I used a Microsoft-hosted agent, my pipeline user was vsts, which $USER picks up automatically, so this should help others regardless of their user.
If any more help is required, let me know. Thanks.

Install DataNode by Ambari

I have
OS Red Hat Enterprise Linux Server release 7.4 (Maipo)
Ambari Version 2.5.1.0
HDP 2.6
After the component deployment finished, 2 DataNodes would not start.
Trying to start them returns this error:
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 127. -bash: /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: No such file or directory
I tried deleting the component and doing a fresh install through Ambari.
The install completed without error:
2018-02-27 20:47:31,550 - Execute['ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.6 | tail -1`'] {'only_if': 'ls -d /usr/hdp/2.6*'}
2018-02-27 20:47:31,554 - Skipping Execute['ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.6 | tail -1`'] due to only_if
2018-02-27 20:47:31,554 - FS Type:
2018-02-27 20:47:31,554 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'only_if': 'ls /usr/hdp/current/hadoop-client/conf', 'configurations': ...}
2018-02-27 20:47:31,568 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2018-02-27 20:47:31,569 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-02-27 20:47:31,583 - Could not load 'version' from /var/lib/ambari-agent/data/structured-out-3374.json
Command completed successfully!
But a new start shows the same error again.
I checked the folder /usr/hdp/current/hadoop-client/.
The new files, for example sbin/hadoop-daemon.sh, did not appear.
How can I redeploy the DataNode component through Ambari?
I'd guess the issue is caused by wrong symlinks under /usr/hdp. You may even try to fix them manually; the structure is simple enough. Though the issue does not sound like a common one after a plain stack deployment.
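For example, a sketch of checking and re-pointing the version symlinks, reusing the same hdp-select command that appears in your install log:
hdp-select versions                    # lists installed stack builds, e.g. 2.6.x.y-z
ls -l /usr/hdp/current/hadoop-client   # should resolve to /usr/hdp/<build>/hadoop
sudo /usr/bin/hdp-select set all "$(hdp-select versions | grep '^2.6' | tail -1)"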
Are you running Ambari as a non-root/custom user? Maybe Ambari does not have sufficient permissions? See https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-security/content/how_to_configure_ambari_server_for_non-root.html
Ambari Version 2.5.1.0 is considerably outdated, so it would make sense to update Ambari and see whether it helps.
Also, if you want to wipe everything out, see https://github.com/hortonworks/HDP-Public-Utilities/blob/master/Installation/cleanup_script.sh
Also, it may be more productive to ask Ambari-related questions at https://community.hortonworks.com/

Spark Controller installation fails via ambari

When we try to install Spark Controller via Ambari, it gives an error.
Below is the error we are getting:
stderr: /var/lib/ambari-agent/data/errors-403.txt
File "/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/SparkController/package/scripts/controller_conf.py", line 10, in controller_conf
    recursive = True
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 147, in __init__
    raise Fail("%s received unsupported argument %s" % (self, key))
resource_management.core.exceptions.Fail: Directory['/usr/sap/spark/controller/conf'] received unsupported argument recursive
stdout: /var/lib/ambari-agent/data/output-403.txt
2016-12-15 08:44:36,441 - Skipping installation of existing package curl
2016-12-15 08:44:36,441 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 08:44:36,496 - Skipping installation of existing package hdp-select
Start installing
2016-12-15 08:44:36,668 - Execute['cp -r /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/SparkController/package/files/sap/spark /usr/sap'] {}
2016-12-15 08:44:36,685 - Execute['chown hanaes:sapsys /var/log/hanaes'] {}
Configuring...
Command failed after 1 tries
Versions:
Ambari : 2.4.2.0
Spark : 1.5.2.2.3
Spark Controller : 1.6.1
We raised a customer message with SAP, and the resolution was: "Known issue for Spark Controller 1.6.2, so please upgrade to Spark Controller 2.0."
After upgrading to Spark Controller 2.0, the installation was successful. Hence, closing this thread.
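For background, the Fail itself comes from an Ambari API change: the Directory resource's recursive argument was replaced by create_parents around Ambari 2.4, so the old controller_conf.py no longer runs on Ambari 2.4.2.0. If upgrading had not been an option, an untested workaround sketch would have been to patch the cached stack script and retry the install from Ambari:
# Untested sketch -- upgrading the Spark Controller is the supported fix:
sudo sed -i 's/recursive = True/create_parents = True/' \
  /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/SparkController/package/scripts/controller_conf.py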

elasticsearch can not rebuild_index

I want to use Haystack + Elasticsearch, so I installed Elasticsearch:
$ brew info elasticsearch
elasticsearch: stable 5.0.1, HEAD
Distributed search & analytics engine
https://www.elastic.co/products/elasticsearch
/usr/local/Cellar/elasticsearch/5.0.1 (98 files, 34.8M) *
Built from source on 2016-11-29 at 17:52:15
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/elasticsearch.rb
==> Requirements
Required: java >= 1.8 ✔
==> Caveats
Data: /usr/local/var/elasticsearch/elasticsearch_hanminsoo/
Logs: /usr/local/var/log/elasticsearch/elasticsearch_hanminsoo.log
Plugins: /usr/local/Cellar/elasticsearch/5.0.1/libexec/plugins/
Config: /usr/local/etc/elasticsearch/
plugin script: /usr/local/Cellar/elasticsearch/5.0.1/libexec/bin/plugin
To have launchd start elasticsearch now and restart at login:
brew services start elasticsearch
Or, if you don't want/need a background service you can just run:
elasticsearch
and I started Elasticsearch:
$ brew services start elasticsearch
==> Tapping homebrew/services
Cloning into '/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services'...
remote: Counting objects: 10, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 10 (delta 0), reused 6 (delta 0), pack-reused 0
Unpacking objects: 100% (10/10), done.
Checking connectivity... done.
Tapped 0 formulae (36 files, 47K)
==> Successfully started `elasticsearch` (label: homebrew.mxcl.elasticsearch)
and I want to rebuild my index:
$ python manage.py rebuild_index
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
but it shows an error:
Removing all documents from your index because you said so.
DELETE http://127.0.0.1:9200/haystack [status:N/A request:0.002s
[...]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/hanminsoo/.pyenv/versions/spec/lib/python3.5/site-packages/haystack/backends/elasticsearch_backend.py", line 231, in clear
self.conn.indices.delete(index=self.index_name, ignore=404)
File "/Users/hanminsoo/.pyenv/versions/spec/lib/python3.5/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped
return func(*args, params=params, **kwargs)
File "/Users/hanminsoo/.pyenv/versions/spec/lib/python3.5/site-packages/elasticsearch/client/indices.py", line 198, in delete
params=params)
File "/Users/hanminsoo/.pyenv/versions/spec/lib/python3.5/site-packages/elasticsearch/transport.py", line 307, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/Users/hanminsoo/.pyenv/versions/spec/lib/python3.5/site-packages/elasticsearch/connection/http_urllib3.py", line 89, in perform_request
raise ConnectionError('N/A', str(e), e)
elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x110791e48>: Failed to establish a new connection: [Errno 61] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x110791e48>: Failed to establish a new connection: [Errno 61] Connection refused)
All documents removed.
Indexing 0 products
The products aren't being indexed...
At first I thought it was a connection error:
$ curl -v -X GET 127.0.0.1:9200
* Rebuilt URL to: 127.0.0.1:9200/
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 9200 failed: Connection refused
* Failed to connect to 127.0.0.1 port 9200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused
but my settings.py is configured:
# Django Haystack
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
        'TIMEOUT': 10,
    },
}
I tried restarting Elasticsearch, reinstalling with pip, and so on,
but I cannot understand why it fails...
pip freeze
appnope==0.1.0
certifi==2016.9.26
decorator==4.0.10
Django==1.9.7
django-debug-toolbar==1.5
django-extensions==1.7.2
django-haystack==2.5.1
djangorestframework==3.4.4
elasticsearch==1.9.0
get==0.0.0
httpie==0.9.6
ipython==5.0.0
ipython-genutils==0.1.0
pep8==1.7.0
pexpect==4.2.0
pickleshare==0.7.3
post==0.0.0
prompt-toolkit==1.0.3
ptyprocess==0.5.1
public==0.0.0
pyelasticsearch==1.4
Pygments==2.1.3
query-string==0.0.0
requests==2.12.2
setupfiles==0.0.0
simplegeneric==0.8.1
simplejson==3.10.0
six==1.10.0
sqlparse==0.2.1
traitlets==4.2.2
urllib3==1.19.1
wcwidth==0.1.7
java -version:
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)
It's because your ES server is not starting.
According to your logs, you need to change discovery.zen.ping.timeout to discovery.zen.ping_timeout in your /usr/local/etc/elasticsearch/elasticsearch.yml file and restart ES.
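A sketch of that change from the shell, assuming the config path from the brew caveats above (BSD sed on macOS needs -i ''):
$ sed -i '' 's/discovery\.zen\.ping\.timeout/discovery.zen.ping_timeout/' /usr/local/etc/elasticsearch/elasticsearch.yml
$ brew services restart elasticsearch
$ curl -s http://127.0.0.1:9200/      # should now return the cluster info JSON
$ python manage.py rebuild_index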
