Liquibase cannot find the file containing the H2 binary - h2

When trying to generate a liquibase diff I get the following:
Starting Liquibase at 00:18:57 (version 4.1.1 #10 built at 2020-10-12 19:24+0000)
Unexpected error running Liquibase: '/usr/local/apps/h2/h2-1.3.176.jar' does not exist
For more information, please use the --logLevel flag
[2022-10-02 00:18:57] SEVERE [liquibase.integration] Unexpected error running Liquibase: '/usr/local/apps/h2/h2-1.3.176.jar' does not exist
liquibase.exception.CommandLineParsingException: '/usr/local/apps/h2/h2-1.3.176.jar' does not exist
at liquibase.integration.commandline.Main.configureClassLoader(Main.java:1278)
at liquibase.integration.commandline.Main$1.run(Main.java:360)
at liquibase.integration.commandline.Main$1.run(Main.java:193)
at liquibase.Scope.child(Scope.java:169)
at liquibase.Scope.child(Scope.java:145)
at liquibase.integration.commandline.Main.run(Main.java:193)
at liquibase.integration.commandline.Main.main(Main.java:156)
The problem is, the jar file does exist in that directory.
(base) user@userpad:/usr/local/apps/h2$ ls -la
total 1632
drwxr-xr-x 2 root root 4096 Oct 2 00:12 .
drwxr-xr-x 3 root root 4096 Oct 2 00:12 ..
-rw-rw-r-- 1 user user 1659879 Oct 2 00:07 h2-1.3.176.jar
(base) user@userpad:/usr/local/apps/h2$ pwd
/usr/local/apps/h2
(base) user@userpad:/usr/local/apps/h2$
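For reference, the jar path would normally be supplied via the classpath property in liquibase.properties or the --classpath command-line flag. A minimal sketch of what that configuration might look like; only the classpath value comes from this post, and the driver, url, and changeLogFile values are placeholders:

# liquibase.properties (sketch; url/changeLogFile are placeholder values)
classpath: /usr/local/apps/h2/h2-1.3.176.jar
driver: org.h2.Driver
url: jdbc:h2:mem:testdb
changeLogFile: changelog.xml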
Any help would be appreciated.

Related

Problems when Installing Ruby on Linux

I am completely new to Linux and want to build a website with Jekyll, so I have to install Ruby first. Unfortunately this does not work properly. As far as I can see, I do not have the correct permissions for Ruby, but I have already set the rights to rwx, so I cannot see what else I can do. Can anybody help me, please?
Here is what I did and received:
raphael@raphael-ThinkCentre-M58:~$ gem install jekyll bundler
ERROR: While executing gem ... (Gem::FilePermissionError)
You don't have write permissions for the /var/lib/gems/2.5.0 directory.
raphael@raphael-ThinkCentre-M58:~$
raphael@raphael-ThinkCentre-M58:~$ sudo ls -al /var/lib/gems/2.5.0
[sudo] password for raphael:
total 32
drwxr-xr-x 8 root root 4096 Nov 17 10:08 .
drwxr-xr-x 3 root root 4096 Nov 17 09:51 ..
drwxr-xr-x 2 root root 4096 Nov 17 10:08 build_info
drwxr-xr-x 2 root root 4096 Nov 17 10:25 cache
drwxr-xr-x 3 root root 4096 Nov 17 10:08 doc
drwxr-xr-x 3 root root 4096 Nov 17 10:25 extensions
drwxr-xr-x 7 root root 4096 Nov 17 14:48 gems
drwxr-xr-x 2 root root 4096 Nov 17 10:25 specifications
This is probably too small a problem to take to a professional, but nevertheless it has been blocking my work for two days. :(
Any ideas? Thanks!
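For anyone hitting the same Gem::FilePermissionError: a common workaround, not something the original poster mentioned, is to install gems into your home directory instead of the root-owned /var/lib/gems, for example:

# install gems under ~/.gem instead of the system-wide path
gem install jekyll bundler --user-install
# put the user gem executables on PATH
# (the 2.5.0 segment matches the Ruby version shown above)
echo 'export PATH="$HOME/.gem/ruby/2.5.0/bin:$PATH"' >> ~/.bashrc

This avoids running gem with sudo, which tends to cause further permission problems later.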

Not able to configure databricks with external hive metastore

I am following this document https://docs.databricks.com/data/metastores/external-hive-metastore.html#spark-configuration-options
to connect to my external Hive metastore. My metastore version is 3.1.0, and I followed the document.
I am getting this error when trying to connect to the external Hive metastore:
org/apache/hadoop/hive/conf/HiveConf when creating Hive client using classpath:
Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars
spark.sql.hive.metastore.jars=/databricks/hive_metastore_jars/*
When I do an ls on /databricks/hive_metastore_jars/, I can see all copied files
Do I need to copy any Hive-specific files into this folder?
I did exactly what was mentioned on the site.
These are the contents of my hive_metastore_jars
total 56K
drwxr-xr-x 3 root root 4.0K Mar 24 05:06 1585025573715-0
drwxr-xr-x 2 root root 4.0K Mar 24 05:06 d596a6ec-e105-4a6e-af95-df3feffc263d_resources
drwxr-xr-x 3 root root 4.0K Mar 24 05:06 repl
drwxr-xr-x 2 root root 4.0K Mar 24 05:06 spark-2959157d-2018-441a-a7d3-d7cecb8a645f
drwxr-xr-x 4 root root 4.0K Mar 24 05:06 root
drwxr-xr-x 2 root root 4.0K Mar 24 05:06 spark-30a72ee5-304c-432b-9c13-0439511fb0cd
drwxr-xr-x 2 root root 4.0K Mar 24 05:06 spark-a19d167b-d571-4e58-a961-d7f6ced3d52f
-rwxr-xr-x 1 root root 5.5K Mar 24 05:06 _CleanRShell.r3763856699176668909resource.r
-rwxr-xr-x 1 root root 9.7K Mar 24 05:06 _dbutils.r9057087446822479911resource.r
-rwxr-xr-x 1 root root 301 Mar 24 05:06 _rServeScript.r1949348184439973964resource.r
-rwxr-xr-x 1 root root 1.5K Mar 24 05:06 _startR.sh5660449951005543051resource.r
Am I missing anything?
Strangely, if I look into the cluster boot logs, here is what I get:
20/03/24 07:29:05 INFO Persistence: Property spark.hadoop.javax.jdo.option.ConnectionDriverName unknown - will be ignored
20/03/24 07:29:05 INFO Persistence: Property spark.hadoop.javax.jdo.option.ConnectionURL unknown - will be ignored
20/03/24 07:29:05 INFO Persistence: Property spark.hadoop.javax.jdo.option.ConnectionUserName unknown - will be ignored
20/03/24 07:29:05 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
20/03/24 07:29:05 INFO Persistence: Property spark.hadoop.javax.jdo.option.ConnectionPassword unknown - will be ignored
20/03/24 07:29:05 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
20/03/24 07:29:05 INFO Persistence: Property datanucleus.schema.autoCreateAll unknown - will be ignored
20/03/24 07:29:09 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
20/03/24 07:29:09 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
I have already set the above configurations, and they show up in the logs as well:
20/03/24 07:28:59 INFO SparkContext: Spark configuration:
spark.hadoop.javax.jdo.option.ConnectionDriverName=org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionPassword=*********(redacted)
spark.hadoop.javax.jdo.option.ConnectionURL=*********(redacted)
spark.hadoop.javax.jdo.option.ConnectionUserName=*********(redacted)
Also, version information is available in my Hive metastore; I can connect to MySQL and see it. It shows:
SCHEMA_VERSION : 3.1.0
VER_ID = 1
From the output, it looks like the jars were not copied to the "/databricks/hive_metastore_jars/" location. As mentioned in the documentation link you shared:
Set spark.sql.hive.metastore.jars to maven.
Restart the cluster with the above configuration and then check the Spark driver logs for the message:
17/11/18 22:41:19 INFO IsolatedClientLoader: Downloaded metastore jars to <path>
From that location, copy the jars to DBFS from the same cluster, and then use an init script to copy the jars from DBFS to "/databricks/hive_metastore_jars/".
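A minimal sketch of such an init script, assuming the downloaded jars were staged at dbfs:/hive_metastore_jars (that DBFS path is an assumption; use whatever location you copied them to):

#!/bin/bash
# cluster init script: copy the staged metastore jars from DBFS
# (FUSE-mounted at /dbfs) to the local path that
# spark.sql.hive.metastore.jars points at
mkdir -p /databricks/hive_metastore_jars
cp -r /dbfs/hive_metastore_jars/* /databricks/hive_metastore_jars/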
As I am using Azure MySQL, there is one more step I need to perform:
https://learn.microsoft.com/en-us/azure/databricks/data/metastores/external-hive-metastore

Rabbitmq /usr/local/etc/rabbitmq/rabbitmq-env.conf Missing

I just installed RabbitMQ on an AWS EC2 instance (CentOS) using the following:
sudo yum install erlang
sudo yum install rabbitmq-server
I was then able to successfully turn it on using:
sudo chkconfig rabbitmq-server on
sudo /sbin/service rabbitmq-server start
...and
sudo /sbin/service rabbitmq-server stop
sudo rabbitmq-server # run in foreground
But now I'm trying to modify the /usr/local/etc/rabbitmq/rabbitmq-env.conf file so I can change the NODE_IP_ADDRESS, but the file is nowhere to be found.
No rabbitmq folder under,
[ec2-user@ip-0-0-0-0 sbin]$ ls /usr/local/etc
[ec2-user@ip-0-0-0-0 sbin]$
There's a rabbitmq folder under /etc but there's nothing in it,
[ec2-user@ip-0-0-0-0 rabbitmq]$ pwd
/etc/rabbitmq
[ec2-user@ip-0-0-0-0 rabbitmq]$ ls
[ec2-user@ip-0-0-0-0 rabbitmq]$
And the only thing in my environment variables for rabbitmq is this
[ec2-user@ip-0-0-0-0 rabbitmq]$ printenv | grep rabbit
PWD=/etc/rabbitmq
I was able to go to the location of the rabbitmq logs and find this information,
root@ip-0-0-0-0
[/var/log/rabbitmq]# pwd
/var/log/rabbitmq
root@ip-0-0-0-0
[/var/log/rabbitmq]# ls -al
total 20
drwxr-x--- 2 rabbitmq rabbitmq 4096 Jun 7 17:28 .
drwxr-xr-x 10 root root 4096 Jun 7 17:23 ..
-rw-r--r-- 1 rabbitmq rabbitmq 3638 Jun 7 17:33 rabbit@ip-0-0-0-0.log
-rw-r--r-- 1 rabbitmq rabbitmq 0 Jun 7 17:25 rabbit@ip-0-0-0-0-sasl.log
-rw-r--r-- 1 root root 0 Jun 7 17:28 shutdown_err
-rw-r--r-- 1 root root 65 Jun 7 17:28 shutdown_log
-rw-r--r-- 1 root root 0 Jun 7 17:25 startup_err
-rw-r--r-- 1 root root 385 Jun 7 17:28 startup_log
cat rabbit@ip-0-0-0-0.log
=INFO REPORT==== 7-Jun-2018::17:29:01 ===
node : rabbit@ip-0-0-0-0
home dir : /var/lib/rabbitmq
config file(s) : (none)
cookie hash : W/uaA12+PF+KOIbCmdKTkw==
log : /var/log/rabbitmq/rabbit@ip-0-0-0-0.log
sasl log : /var/log/rabbitmq/rabbit@ip-0-0-0-0-sasl.log
database dir : /var/lib/rabbitmq/mnesia/rabbit@ip-0-0-0-0
And /var/lib/rabbitmq contains this,
[/var/lib/rabbitmq/mnesia]# cd /var/lib/rabbitmq/
root@ip-0-0-0-0
[/var/lib/rabbitmq]# ls
mnesia
And
[/var/lib/rabbitmq/mnesia]# pwd
/var/lib/rabbitmq/mnesia
root@ip-0-0-0-0
[/var/lib/rabbitmq/mnesia]# ls -al
total 20
drwxr-xr-x 4 rabbitmq rabbitmq 4096 Jun 7 17:29 .
drwxr-x--- 3 rabbitmq rabbitmq 4096 Jun 7 17:25 ..
drwxr-xr-x 4 rabbitmq rabbitmq 4096 Jun 7 17:35 rabbit@ip-0-0-0-0
-rw-r--r-- 1 rabbitmq rabbitmq 5 Jun 7 17:28 rabbit@ip-0-0-0-0.pid
drwxr-xr-x 2 rabbitmq rabbitmq 4096 Jun 7 17:29 rabbit@ip-0-0-0-0-plugins-expand
root@ip-0-0-0-0
And,
[/var/lib/rabbitmq/mnesia/rabbit@ip-0-0-0-0]# pwd
/var/lib/rabbitmq/mnesia/rabbit@ip-0-0-0-0
root@ip-0-0-0-0
[/var/lib/rabbitmq/mnesia/rabbit@ip-0-0-0-0]# ls -al
total 100
drwxr-xr-x 4 rabbitmq rabbitmq 4096 Jun 7 17:35 .
drwxr-xr-x 4 rabbitmq rabbitmq 4096 Jun 7 17:29 ..
-rw-r--r-- 1 rabbitmq rabbitmq 59 Jun 7 17:29 cluster_nodes.config
-rw-r--r-- 1 rabbitmq rabbitmq 160 Jun 7 17:35 DECISION_TAB.LOG
-rw-r--r-- 1 rabbitmq rabbitmq 99 Jun 7 17:35 LATEST.LOG
drwxr-xr-x 2 rabbitmq rabbitmq 4096 Jun 7 17:29 msg_store_persistent
drwxr-xr-x 2 rabbitmq rabbitmq 4096 Jun 7 17:29 msg_store_transient
-rw-r--r-- 1 rabbitmq rabbitmq 29 Jun 7 17:29 nodes_running_at_shutdown
-rw-r--r-- 1 rabbitmq rabbitmq 1123 Jun 7 17:29 rabbit_durable_exchange.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 2422 Jun 7 17:32 rabbit_durable_exchange.DCL
-rw-r--r-- 1 rabbitmq rabbitmq 8 Jun 7 17:25 rabbit_durable_queue.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 8 Jun 7 17:25 rabbit_durable_route.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 8 Jun 7 17:25 rabbit_runtime_parameters.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 3 Jun 7 17:29 rabbit_serial
-rw-r--r-- 1 rabbitmq rabbitmq 344 Jun 7 17:35 rabbit_user.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 193 Jun 7 17:29 rabbit_user_permission.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 461 Jun 7 17:35 rabbit_user_permission.DCL
-rw-r--r-- 1 rabbitmq rabbitmq 134 Jun 7 17:29 rabbit_vhost.DCD
-rw-r--r-- 1 rabbitmq rabbitmq 289 Jun 7 17:32 rabbit_vhost.DCL
-rw-r--r-- 1 rabbitmq rabbitmq 19108 Jun 7 17:25 schema.DAT
-rw-r--r-- 1 rabbitmq rabbitmq 233 Jun 7 17:25 schema_version
And last but not least, apparently the logs say there isn't a config file:
[/var/log/rabbitmq]# cat rabbit@ip-0-0-0-0.log | grep config
config file(s) : (none)
config file(s) : (none)
RabbitMQ Version: {rabbit,"RabbitMQ","3.1.5"}
Does anyone know what's going on here? I'm surprised I didn't see any errors when I started the rabbitmq-server. Do I just create the config files myself?
UPDATE:
I was setting up a cluster environment for Apache Airflow, configuring it with the CeleryExecutor and using RabbitMQ as the queue. It turns out I'm running my EC2 instance on Amazon Linux 1, which doesn't include systemd, so I wasn't able to get RabbitMQ properly installed. Had I built my server on Amazon Linux 2, Ubuntu, or any other Linux that doesn't suck, I could potentially have gotten further in installing RabbitMQ and getting it to work with Airflow. So I went on to using AWS SQS for my queue, and then I ran into this error. By now I've wasted over two and a half days just trying to get a queue working with Celery and Airflow, and then I read this article, which says that Airbnb (the creators of Airflow) use Celery with Redis as their queue. So I tried it out, and it literally took me three minutes and it's working flawlessly... All I did was install Redis with sudo yum install redis, start it with redis-server, change the broker_url field in my airflow.cfg to broker_url = redis://, run airflow initdb, restart the scheduler with airflow scheduler, then start a worker with airflow worker, and BAM, my DAGs started running on the Redis queue with the CeleryExecutor. HALLELUJAH, just use Redis as your queue....
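For reference, the relevant airflow.cfg settings take roughly this shape. This is a sketch assuming a Redis instance on its default local port; the poster only wrote broker_url = redis://:

[core]
# route tasks through Celery workers instead of running them locally
executor = CeleryExecutor

[celery]
# placeholder host/port/db; the poster's value was just redis://
broker_url = redis://localhost:6379/0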
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
You should be using the latest version of RabbitMQ (3.7.5) and Erlang 19.3 or later. Version 3.1.5 is very, very, very old. Please see this document for instructions on how to install a recent RMQ on an rpm-based distro.
After that, you will create rabbitmq-env.conf yourself.
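A minimal sketch of that file, assuming the goal from the question is to set the node's bind address (the address below is a placeholder; note that in rabbitmq-env.conf the RABBITMQ_ prefix is dropped from variable names):

# /etc/rabbitmq/rabbitmq-env.conf (create it if absent)
# bind the node to a specific interface; placeholder address
NODE_IP_ADDRESS=10.0.0.5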

sudo /opt/gitlab/embedded/bin/bundle exec /opt/gitlab/embedded/bin/rake .... Fails

I have configured gitlab.rb with LDAP, and after failing to sign in, I wanted to test LDAP with:
sudo /opt/gitlab/embedded/bin/bundle exec /opt/gitlab/embedded/bin/rake gitlab:ldap:check RAILS_ENV=production
When I run this I get the following error:
rake aborted!
Errno::ENOENT: No such file or directory - No file specified as Settingslogic source
/opt/gitlab/embedded/service/gitlab-rails/config/initializers/1_settings.rb:173:in `new'
/opt/gitlab/embedded/service/gitlab-rails/config/initializers/1_settings.rb:173:in `block in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/config/initializers/1_settings.rb:172:in `each'
/opt/gitlab/embedded/service/gitlab-rails/config/initializers/1_settings.rb:172:in `<top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/config/environment.rb:5:in `<top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:check => gitlab:gitlab_shell:check => environment
(See full trace by running task with --trace)
I am new to GitLab and rake. I tried to search for the problem, but found myself at a dead end.
I don't know what the problem is, and I am afraid that if I experiment I will break something else.
Does anyone recognise this problem? Can anyone help point me towards the cause and hopefully a solution?
Much appreciated.
Install directory:
[root@centos7template ~]# ll /var/opt/gitlab
total 8
drwx------ 2 git root 6 Nov 1 08:45 backups
-rw------- 1 root root 38 Nov 1 08:47 bootstrapped
drwx------ 2 git root 24 Nov 3 12:28 gitaly
drwx------ 3 git root 25 Nov 1 08:45 git-data
drwxr-xr-x 3 git root 19 Nov 1 08:45 gitlab-ci
drwxr-xr-x 2 git root 31 Nov 3 10:39 gitlab-monitor
drwxr-xr-x 9 git root 150 Nov 3 10:39 gitlab-rails
drwx------ 2 git root 23 Nov 3 10:39 gitlab-shell
drwxr-x--- 2 git gitlab-www 51 Nov 3 12:29 gitlab-workhorse
drwx------ 3 root root 68 Nov 3 13:39 logrotate
drwxr-x--- 9 root gitlab-www 154 Nov 3 12:29 nginx
drwxr-xr-x 3 root root 31 Nov 1 08:47 node-exporter
drwx------ 2 gitlab-psql root 25 Nov 3 10:39 postgres-exporter
drwxr-xr-x 3 gitlab-psql root 77 Nov 3 12:29 postgresql
drwxr-x--- 3 gitlab-prometheus root 38 Nov 3 10:39 prometheus
drwxr-x--- 2 gitlab-redis git 57 Nov 3 12:29 redis
-rw-r--r-- 1 root root 40 Nov 1 08:45 trusted-certs-directory-hash
Installed versions:
gitaly v0.43.0
gitlab-config-template 10.1.0
gitlab-cookbooks 10.1.0
gitlab-ctl 10.1.0
gitlab-ctl-ee 10.1.0
gitlab-elasticsearch-indexer v0.2.1
gitlab-monitor v1.9.0
gitlab-pages v0.6.0
gitlab-rails v10.1.0-ee
gitlab-scripts 10.1.0
gitlab-selinux 10.1.0
gitlab-shell v5.9.3
gitlab-workhorse v3.2.0
Created a new LDAP configuration in gitlab.rb, reconfigured with gitlab-ctl, tried various authentication methods, and with simple_tls it worked.
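For anyone landing here, the working configuration takes roughly this shape in /etc/gitlab/gitlab.rb. This is a sketch: host, port, uid, bind credentials, and base are placeholders, and key names vary slightly across GitLab versions; only simple_tls comes from the resolution above:

gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS
  main:
    label: 'LDAP'
    host: 'ldap.example.com'                   # placeholder
    port: 636
    uid: 'sAMAccountName'                      # placeholder
    encryption: 'simple_tls'                   # the method that worked here
    bind_dn: 'CN=service,OU=users,DC=example,DC=com'   # placeholder
    password: 'secret'                         # placeholder
    base: 'DC=example,DC=com'                  # placeholder
EOS

After editing, apply the change and re-run the check (the omnibus wrapper gitlab-rake is the usual way to invoke it):

sudo gitlab-ctl reconfigure
sudo gitlab-rake gitlab:ldap:check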
Thanks for all the effort to post comments with questions! Appreciated.

Subclipse not recognizing my JavaHL

I keep getting the following error:
Failed to load JavaHL Library.
These are the errors that were encountered:
no libsvnjavahl-1 in java.library.path
no svnjavahl-1 in java.library.path
no svnjavahl in java.library.path
java.library.path = "/usr/lib/x86_64-linux-gnu/jni"
Although the library path is correct:
user@localhost /usr/lib/x86_64-linux-gnu/jni $ ls -l
total 336
lrwxrwxrwx 1 root root 24 Apr 6 02:06 libatk-wrapper.so -> libatk-wrapper.so.0.0.18
lrwxrwxrwx 1 root root 24 Apr 6 02:06 libatk-wrapper.so.0 -> libatk-wrapper.so.0.0.18
-rw-r--r-- 1 root root 85168 Sep 20 2012 libatk-wrapper.so.0.0.18
lrwxrwxrwx 1 root root 23 Sep 28 2012 libsvnjavahl-1.so -> libsvnjavahl-1.so.0.0.0
lrwxrwxrwx 1 root root 23 Sep 28 2012 libsvnjavahl-1.so.0 -> libsvnjavahl-1.so.0.0.0
-rw-r--r-- 1 root root 256104 Sep 28 2012 libsvnjavahl-1.so.0.0.0
The above was installed with apt-get install libsvn-java on Ubuntu 12.10. Basically this package here.
The installed version of svn is 1.7.5.
The installed version of subclipse is 1.8.19.
I understand that the required svn version for subclipse 1.8.x to work is 1.7.x.
How can I make subclipse recognize my installed JavaHL library?
Okay, I have found it...
The problem was in my eclipse.ini file, which looked like this:
-startup
plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.200.v20120913-144807
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-Xms40m
-Xmx512m
-vmargs
-Djava.library.path="/usr/lib/x86_64-linux-gnu/jni"
I had to remove the extra quotes, changing -Djava.library.path="/usr/lib/x86_64-linux-gnu/jni" to -Djava.library.path=/usr/lib/x86_64-linux-gnu/jni.
That fixed it.
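So the tail of the working eclipse.ini reads (everything above it unchanged from the file shown earlier):

-vmargs
-Djava.library.path=/usr/lib/x86_64-linux-gnu/jni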
