I am following the MonetDB tutorial but after executing this command:
shell> mclient -u monetdb -d voc
and entering the password, I receive the following error:
monetdbd: internal error while starting mserver
This is the error in the log file. How can I solve this? Thanks a lot!
2016-05-23 08:41:39 MSG voc[1060]: !IOException:mal_mapi.listen:operation failed: binding to UNIX socket file /vagrant/mydbfarm/voc/.mapi.sock failed: No such file or directory
2016-05-23 08:41:39 MSG merovingian[1054]: database 'voc' (1060) has exited with exit status 0
2016-05-23 08:41:39 MSG merovingian[1054]: database 'voc' has shut down
2016-05-23 08:41:39 ERR merovingian[1054]: client error: database 'voc' started up, but failed to open up a communication channel
Try running mkdir -p /vagrant/mydbfarm/voc/ as the user who runs MonetDB, perhaps?
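For example, something along these lines might get the database up again (a minimal sketch, assuming the dbfarm is /vagrant/mydbfarm as shown in the error above and that the commands run as the same user that owns MonetDB; adjust paths to your setup):

mkdir -p /vagrant/mydbfarm/voc       # create the missing directory
monetdbd start /vagrant/mydbfarm     # (re)start the daemon for this dbfarm
monetdb start voc                    # start the voc database
mclient -u monetdb -d voc            # then try connecting again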
When I run the query below on Hive installed on Windows:
CREATE TABLE emp.filter AS SELECT id,name FROM emp.employee WHERE gender = 'F';
I get the following error:
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2022-08-10 16:17:41,710 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Checking the YARN logs further, the problem seems to occur when the container is launched:
"Launching container"
[2022-08-10 16:16:48.142]Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
'"C:\Users\aksha\Java\jdk1.8.0_202"' is not recognized as an internal or external command,
operable program or batch file.
How can I set this right?
Got it!
After changing "set JAVA_HOME=C:\Users\aksha\Java\jdk1.8.0_202" to
"set JAVA_HOME=C:/Users/aksha/Java/jdk1.8.0_202" in your hadoop-env.cmd file, should launch container/map-red task.
Script error as follows:
[omm@db1 ~]$ gs_ctl reload -D /gaussdb/data/db1/
[2021-03-15 15:13:11.849][7126][][gs_ctl]: gs_ctl reload, datadir is /gaussdb/data/db1
[2021-03-15 15:13:11.850][7126][][gs_ctl]: PID file "/gaussdb/data/db1/postmaster.pid" does not exist
[2021-03-15 15:13:11.850][7126][][gs_ctl]: Is server running?
[omm@db1 ~]$
I'm trying to build a report of all the objects in all the projects we have in Cloud Storage in our org. I'm using this repo from Google Professional Services, as it does exactly what we want: https://github.com/GoogleCloudPlatform/professional-services/tree/main/tools/gcs2bq
We want to use containers instead of just the Go code in a Cloud Function, mainly for portability.
Locally everything is fine and the program behaves as expected, but when I try it in Cloud Run things get tricky. From what I understand, the Go part needs to listen on a port, so I added that at the beginning of main so the container can be deployed, which it is:
// Determine port for HTTP service
port := os.Getenv("PORT")
if port == "" {
    port = "8080"
    log.Printf("defaulting to port %s", port)
}

// Start HTTP server.
log.Printf("listening on port %s", port)
if err := http.ListenAndServe(":"+port, nil); err != nil {
    log.Fatal(err)
}
But as you can see in the repo, the first file called is run.sh, which sets the environment variables and then calls the compiled Go binary (./gcs2bq). The binary successfully completes its task, which is gathering the sizes of the different files. But after that, run.sh doesn't "resume" and move on to the part where it uploads the data to a BigQuery table, which works locally.
Here is the part of the run.sh file where I have the problem. Note: I don't get any errors from executing ./gcs2bq. Note 2: every environment variable has a correct value.
./gcs2bq $GCS2BQ_FLAGS || error "Export failed!" 2 <- doesn't get past this line
gsutil mb -p "${GCS2BQ_PROJECT}" -c standard -l "${GCS2BQ_LOCATION}" -b on "gs://${GCS2BQ_BUCKET}" || echo "Info: Storage bucket already exists: ${GCS2BQ_BUCKET}"
gsutil cp "${GCS2BQ_FILE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed copying ${GCS2BQ_FILE} to gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 3
bq mk --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" "${GCS2BQ_DATASET}" || echo "Info: BigQuery dataset already exists: ${GCS2BQ_DATASET}"
bq load --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" --schema bigquery.schema --source_format=AVRO --use_avro_logical_types --replace=true "${GCS2BQ_DATASET}.${GCS2BQ_TABLE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" ||
  error "Failed to load gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME} to BigQuery table ${GCS2BQ_DATASET}.${GCS2BQ_TABLE}!" 4
gsutil rm "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed deleting gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 5
rm -f "${GCS2BQ_FILE}"
I'm fairly new to containers and Cloud Run, and even after reading projects and documentation I'm not sure what I'm doing wrong. Is it normal that the .sh is "stuck" when calling the Go binary? I can provide more details/explanation if needed.
Okay, so for anyone who encounters a similar situation, this is how I made it work for me.
The container isn't supposed to stop, so there is no exit; it just stays in the main function.
That means that when I called the executable, it just kept serving and never exited, so the rest of the script never ran. The solution here is to "recode" everything past that call in Go, directly in main.go.
The run.sh then becomes useless, so I used another .go file that listens for HTTP requests and then calls the code that gathers the data and sends it to BigQuery.
Are there known issues with image-based provisioning using logical volumes in libvirt? I am getting this error while trying to do so:
Unable to save
Failed to create a compute kvm2 (Libvirt) instance test3.xxx.local: Call to virNetworkCreateXML failed:
internal error: Child process (/usr/sbin/lvcreate --name test3.xxx.local-disk1 -L 1K --type snapshot --virtualsize 10485760K -s /vm-images-pool/images-vol/template_minimal) unexpected exit status 3:
2017-01-05 00:42:08.133+0000: 12330: debug : virFileClose:102 : Closed fd 29
2017-01-05 00:42:08.133+0000: 12330: debug : virFileClose:102 : Closed fd 31
2017-01-05 00:42:08.133+0000: 12330: debug : virFileClose:102 : Closed fd 27
Volume group name expected (no slash)
Run `lvcreate --help' for more information
This link from Red Hat flags it as a known issue:
https://access.redhat.com/solutions/1995053
That doc has a date of October 20, 2015. Not sure if anything changed after that to support LVs.
I tried to satisfy the requirement in that doc by creating a pool based on dir like this:
Setup:
Storage pool vm-images-pool-dir of type dir
Storage pool vm-images-pool of type logical
template_minimal is the image template.
[root@kvm2 libvirt]# virsh vol-list vm-images-pool-dir
 Name               Path
------------------------------------------------------------------------------
 template_minimal   /vm-images-pool/images-vol/template_minimal
vm-images-pool storage pool is of type VG with one volume:
images-vol vm-images-pool -wi-ao---- 249.00g
images-vol is mounted under /vm-images-pool/images-vol/
Any insight is appreciated.
Thanks,
TG
=======================================
more details.
Daniel, Thanks. I am a bit confused. I couldn't put the actual commands earlier since I had cleaned them up. I recreated the setup. Here are the commands I used:
virsh pool-define-as vm-images-pool logical --source-dev /dev/mapper/mpathd
virsh pool-build vm-images-pool
virsh pool-start vm-images-pool
virsh vol-create-as vm-images-pool images-vol --capacity 249G
virsh pool-define-as vm-images-pool-dir dir - - - - /vm-images-pool/images-vol/
virsh pool-build vm-images-pool-dir
virsh pool-start vm-images-pool-dir
[root@kvm2 ~]# virsh vol-list vm-images-pool-dir
 Name               Path
------------------------------------------------------------------------------
 lost+found         /vm-images-pool/images-vol/lost+found
 template_minimal   /vm-images-pool/images-vol/template_minimal
=======================================
/vm-images-pool/images-vol/template_minimal is the path used for the template image
==================================
More tests:
I mounted the logical volume at a mount point to match the directory-based storage pool:
[root@kvm2 ~]# df -h /vm-images-pool-dir/images-vol
Filesystem                                Size  Used Avail Use% Mounted on
/dev/mapper/vm--images--pool-images--vol  245G  1.2G  232G   1% /vm-images-pool-dir/images-vol
[root@kvm2 ~]# virsh vol-list vm-images-pool-dir
 Name               Path
------------------------------------------------------------------------------
 lost+found         /vm-images-pool-dir/images-vol/lost+found
 template_minimal   /vm-images-pool-dir/images-vol/template_minimal
[root@kvm2 ~]#
I used /vm-images-pool-dir/images-vol/template_minimal as the template path, with the same result:
Unable to save
Failed to create a compute kvm2 (Libvirt) instance test3.xxx.local: Call to virNetworkCreateXML failed:
internal error: Child process (/usr/sbin/lvcreate --name test3.xxx.local-disk1 -L 1K --type snapshot --virtualsize 10485760K -s /vm-images-pool-dir/images-vol/template_minimal) unexpected exit status 3:
2017-01-05 16:45:10.694+0000: 40712: debug : virFileClose:102 : Closed fd 27
2017-01-05 16:45:10.694+0000: 40712: debug : virFileClose:102 : Closed fd 29
2017-01-05 16:45:10.694+0000: 40712: debug : virFileClose:102 : Closed fd 24
Volume group name expected (no slash)
Run `lvcreate --help' for more information.
The source of the image is "/vm-images-pool-dir/images-vol/template_minimal" and the guest's target back end is a 10G LV on another storage pool called "virtual-machines".
I don't understand what the lvcreate command is trying to do; shouldn't it at least use "virtual-machines" as the target VG? The tool I am using is Satellite 6.2. I am thinking it's something silly that I am overlooking. Not sure where :)
Thanks
TG
Based on the paths in that command, it seems you wanted to create a new file-based volume in /vm-images-pool/images-vol/, i.e. your "vm-images-pool-dir" pool. The fact that you are seeing an error from "lvcreate", though, suggests that you mistakenly specified "vm-images-pool" to libvirt as the pool to use, causing it to try to create a logical volume instead. You don't show the actual command / API you are running, but check that you've given the right pool name to it.
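To illustrate the difference (a hedged sketch using the pool names from this thread; the volume name test3-disk1 is made up, and the exact call your tooling issues may differ):

virsh vol-create-as vm-images-pool-dir test3-disk1 10G --format qcow2   # file-based volume in the dir pool
virsh vol-create-as vm-images-pool test3-disk1 10G                      # logical pool: libvirt runs lvcreate for this one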
I know the question was asked long ago, but I just hit the same problem and found the answer. I couldn't find the exact virsh command you are using that leads to this error, but here I used the following XML file with virsh vol-create libvirtVG logical.xml:
<volume>
  <name>vol02</name>
  <capacity unit='KiB'>2097152</capacity>
  <allocation unit='KiB'>0</allocation>
  <backingStore>
    <path>/dev/libvirtVG/sles15sp1</path>
  </backingStore>
</volume>
To get rid of the error, I had to set the allocation to the value of the capacity. You can also see that virt-manager does this for you automatically:
https://github.com/virt-manager/virt-manager/blob/master/virtinst/storage.py#L646
The equivalent using the virsh vol-create-as command would be:
virsh vol-create-as libvirtVG vol02 2048MiB --allocation 2048MiB \
--backing-vol /dev/libvirtVG/sles15sp1
I'm getting an error in the Cloudera QuickStart VM I downloaded from http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html.
I am trying a toy example from Tom White's Hadoop: The Definitive Guide book called max_temp.pig, which "finds the maximum temperature by year".
I created a file called temps.txt that contains (year, temperature, quality) entries on each line:
1950 0 1
1950 22 1
1950 -11 1
1949 111 1
Using the example code in the book, I typed the following Pig code into the Grunt terminal:
records = LOAD '/home/cloudera/Desktop/temps.txt'
AS (year:chararray, temperature:int, quality:int);
DUMP records;
After I typed DUMP records;, I got the error:
2014-05-22 11:33:34,286 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias records. Backend error : org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1400775973236_0006' doesn't exist in RM.
…
Details at logfile: /home/cloudera/Desktop/pig_1400782722689.log
I attempted to find out what was causing the error through a Google search: https://www.google.com/search?q=%22application+with+id%22+%22doesn%27t+exist+in+RM%22.
The results there weren't helpful. For example, http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-troubleshoot-error-vpc.html mentioned this bug and said "To solve this problem, you must configure a VPC that includes a DHCP Options Set whose parameters are set to the following values..."
Amazon's suggested fix doesn't seem to be the problem because I'm not using AWS.
EDIT:
I think the HDFS file path is correct.
[cloudera@localhost Desktop]$ ls
Eclipse.desktop  gnome-terminal.desktop  max_temp.pig  temps.txt
[cloudera@localhost Desktop]$ pwd
/home/cloudera/Desktop
There's another exception before your error:
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://localhost.localdomain:8020/home/cloudera/Desktop/temps.txt
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
Is your file in HDFS? Have you checked the file path?
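For example, you could check whether the file is in HDFS and copy it there if it's missing (a sketch; it mirrors the absolute path the LOAD statement already uses, so no script change would be needed):

hdfs dfs -ls /home/cloudera/Desktop/temps.txt        # does this path exist in HDFS?
hdfs dfs -mkdir -p /home/cloudera/Desktop            # create the directory in HDFS if not
hdfs dfs -put /home/cloudera/Desktop/temps.txt /home/cloudera/Desktop/temps.txt   # copy the local file into HDFS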
I was able to solve this problem by running pig -x local to start the Grunt interpreter instead of just pig.
I should have used local mode because I did not have access to a Hadoop cluster.
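For example, the local-mode session looks like this (a sketch reusing the same LOAD/DUMP statements from above):

pig -x local
grunt> records = LOAD '/home/cloudera/Desktop/temps.txt'
           AS (year:chararray, temperature:int, quality:int);
grunt> DUMP records;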
Running just pig (the default MapReduce mode) is what gave me the errors:
2014-05-22 11:33:34,286 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias records. Backend error : org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1400775973236_0006' doesn't exist in RM.
2014-05-22 11:33:28,799 [JobControl] WARN org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://localhost.localdomain:8020/home/cloudera/Desktop/temps.txt
From http://pig.apache.org/docs/r0.9.1/start.html:
Pig has two execution modes or exectypes:
Local Mode - To run Pig in local mode, you need access to a single machine; all files are installed and run using your local host and file system. Specify local mode using the -x flag (pig -x local).
Mapreduce Mode - To run Pig in mapreduce mode, you need access to a Hadoop cluster and HDFS installation. Mapreduce mode is the default mode; you can, but don't need to, specify it using the -x flag (pig OR pig -x mapreduce).
You can run Pig in either mode using the "pig" command (the bin/pig Perl script) or the "java" command (java -cp pig.jar ...).
Running the toy example from Tom White's Hadoop: The Definitive Guide book:
-- max_temp.pig: Finds the maximum temperature by year
records = LOAD 'temps.txt' AS (year:chararray, temperature:int, quality:int);
filtered_records = FILTER records BY temperature != 9999 AND
(quality == 0 OR quality == 1 OR quality == 4 OR quality == 5 OR quality == 9);
grouped_records = GROUP filtered_records BY year;
max_temp = FOREACH grouped_records GENERATE group,
MAX(filtered_records.temperature);
DUMP max_temp;
against the following data set in temps.txt (remember that Pig's default input is tab-delimited files):
1950 0 1
1950 22 1
1950 -11 1
1949 111 1
gives this:
grunt> [cloudera@localhost Desktop]$ pig -x local -f max_temp.pig 2>log
(1949,111)
(1950,22)