How to log messages from a query module procedure in Memgraph logs?

I am using Memgraph with the Memgraph Platform Docker image, version 2.4.0, on macOS. I started developing my own Python query module procedure in Memgraph Lab. It is not loading correctly because I have an error in my code, and I can see the reason why in the logs. But to be able to fix the bug, it would help me to print out certain values from my code. I've found the debugging process pretty hard, and I am wondering if there is a way to log certain messages. I tried using a simple logger and print, but neither of those works.

You are in luck, because Memgraph Platform 2.4.0 (that is, Memgraph 2.4.0) ships a new feature: an extension of the Python API that enables logging at different levels. This means that you can use the Logger class from the mgp module. Here is the documentation for Logger objects. To be able to use this object, please make sure to first set the Memgraph flag --also-log-to-stderr to true. You can do that by specifying the configuration options in the docker run command when starting the Memgraph Platform image. For example:
docker run -it -p 7687:7687 -p 7444:7444 -p 3000:3000 -e MEMGRAPH="--also-log-to-stderr=true" memgraph/memgraph-platform:2.4.0
Here is an example usage of the Logger object:
import mgp

@mgp.read_proc
def myProcedure(ctx: mgp.ProcCtx) -> mgp.Record(return_statement=mgp.Nullable[str]):
    # Messages logged through mgp.Logger end up in the Memgraph log
    logger = mgp.Logger()
    logger.info("Logging my procedure")
    return mgp.Record(return_statement="hello logging in procedure")
If you run the procedure in the Query execution tab in Memgraph Lab:
CALL test_module.myProcedure() YIELD return_statement;
you are going to see the 'hello logging in procedure' output.
The logged messages will show up in the Memgraph logs once the procedure runs. If you are using Memgraph Lab, simply head over to the Logs tab and check what's new after you run your procedure.
If you want to check the logs directly in the Memgraph log file instead of in Memgraph Lab, please read the how-to guide for accessing logs.
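Since --also-log-to-stderr=true also mirrors the log messages to the container's stderr, another quick way to watch them is docker logs (a minimal sketch; look up your own container ID first):
docker ps                      # find the ID of the Memgraph Platform container
docker logs -f <containerID>   # mgp.Logger messages appear in this stream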

Related

Unable to output container password details when using ansible with podman

When using ansible-podman, I am unable to output the stdout of the container run command as I would when using the command line. This means that I don't get to see the automatically generated password and keystore password, along with other details.
Even when using the tty parameter of the ansible-podman-container, the logs report:
Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the bin/elasticsearch-reset-password tool to set the password for the elastic user.
There is no elastic user created, and when I exec into the container, the bin/elasticsearch-reset-password tool fails with:
ERROR: Failed to reset password for the [elasticsearch] user
As https is standard on the 8.5 image, I am unable to use it, as I cannot set up auth properly. Also, I cannot use apt to install an editor, as the user elasticsearch does not have sufficient permissions.
If you think this is a podman error then please let me know, and I will hassle the devs, and see if I can't get better output and tty detection etc.
An alternative I have tried is using ansible to run a shell command, but the output is no different.
What I really want is to be able to obtain the password to output to an ansible variable so that I can spin up a pod of containers, including elasticsearch, for running tests.
Alternatively, I can use elasticsearch 7.17.7 with http, but I am going to need encryption for production, and there doesn't seem to be a way to do it with ansible.
Perhaps there is an environment variable that I am missing that I could set to create the password? I have tried setting ELASTIC_PASSWORD, but it is of no help.
I am connecting from django using django-elasticsearch-dsl, and I get the following error when verify_cert is set to false:
AuthenticationException(401, 'security_exception', 'missing authentication credentials for REST request [/forum_posts_index/_search]')
Any help gratefully received...

What to do when Memgraph stops working without any info?

Sometimes the Docker container where Memgraph is running just stops working or says that the process was aborted with exit code 137. How can I fix this?
You should check the Memgraph logs, where you'll probably find the reason why the process was aborted.
Since you said that you're using Memgraph with Docker, there are two options:
If you run Memgraph with Docker using a volume for the logs, that is, with -v mg_log:/var/log/memgraph, then the mg_log folder can usually be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (Windows) or /var/lib/docker/volumes/ (Linux and macOS).
If you run Memgraph without using a volume for the logs, then you need to enter the Docker container. To do that, first find out the container ID by running docker ps. Then copy the container ID and run docker exec -it <containerID> bash. For example, if the container ID is 83d76fe4df5a, run docker exec -it 83d76fe4df5a bash. Next, you need to find the folder where the logs are located by running cd /var/log/memgraph. To read the logs, run cat <memgraph_date>.log; that is, if the log file memgraph_2022-03-02.log is located inside the log folder, run cat memgraph_2022-03-02.log.
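Put together, the two options look roughly like this (a sketch; the ports, image name, and log file name are assumptions to adjust to your setup):
# Option 1: start Memgraph with a named volume for the logs
docker run -it -p 7687:7687 -v mg_log:/var/log/memgraph memgraph/memgraph
# Option 2: no volume, read the logs inside the running container
docker ps                          # find the container ID
docker exec -it <containerID> bash
cd /var/log/memgraph
cat memgraph_2022-03-02.log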
Hopefully, when you read the logs, you'll be able to fix your problem.
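One extra hint: exit code 137 means the process received SIGKILL (128 + 9), which is often the kernel's OOM killer stopping a container that ran out of memory. If the logs end abruptly, it may be worth checking that flag too (the container ID is a placeholder):
docker inspect <containerID> --format '{{ .State.OOMKilled }}'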

Access k8s pod logs generated from ssh exec

I have Filebeat configured to send my k8s cluster logs to Elasticsearch. When I connect to the pod directly (kubectl exec -it <pod> -- sh -c bash), the generated output logs aren't being sent to the destination.
Digging through the k8s docs, I couldn't find how k8s handles STDOUT from a running shell.
How can I configure k8s to send live shell logs?
Kubernetes has (mostly) nothing to do with this, as logging is handled by the container runtime backing Kubernetes, which is usually Docker.
Depending on the Docker version, container logs can be written to json-file, journald, or other destinations, with the default being a JSON file. You can run docker info | grep -i logging to check which Logging Driver Docker uses. If the result is json-file, logs are written to a file in JSON format. If there is another value, logs are handled in another way (and as there are various logging drivers, I suggest checking their documentation).
If the logs are written to a file, chances are that by using docker inspect container-id | grep -i logpath, you'll be able to see the path on the node.
Filebeat simply harvests the logs from those files; it's Docker that handles the redirection between the application's STDOUT inside the container and one of those files, via its driver.
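The same checks can be written a bit more precisely with Docker's format templates (a sketch; the container ID is a placeholder):
docker info --format '{{ .LoggingDriver }}'               # e.g. json-file
docker inspect <container-id> --format '{{ .LogPath }}'   # the file Filebeat reads
tail -f "$(docker inspect <container-id> --format '{{ .LogPath }}')"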
Regarding exec commands not showing up in the logs, this is an open proposal (https://github.com/moby/moby/issues/8662), as not everything is redirected: only the logs of the apps started by the entrypoint itself.
There is a suggested workaround (https://github.com/moby/moby/issues/8662#issuecomment-277396232):
In the meantime you can try this little hack:
echo hello > /proc/1/fd/1
This redirects your output into PID 1's (the docker container's) file descriptor for STDOUT.
It works just fine but has the problem of requiring a manual redirect.
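Applied to the original question, the same hack can be run through kubectl, so the output of an exec session lands in the pod's log stream that Filebeat harvests (a sketch; the pod name is a placeholder):
kubectl exec -it <pod> -- sh -c 'echo hello > /proc/1/fd/1'
kubectl logs <pod> --tail=1    # should now show "hello"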
Use the following process:
Make changes in your application to push logs to STDOUT. You may configure this in your logging configuration file.
Configure Filebeat to read those STDOUT logs (which eventually end up in some Docker log file location, such as under /var/log).
Run Filebeat as a DaemonSet, so that logs from new pods and nodes are automatically pushed to ES.
For better readability of the logs, make sure you push them in JSON format.
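To sanity-check that chain on a node before involving Filebeat, you can look at the files the json-file driver writes (a sketch, assuming the default json-file driver and a standard Docker data root):
ls /var/lib/docker/containers/*/*-json.log   # one log file per container
tail -n 1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
# each line is a JSON object wrapping one STDOUT/STDERR line of the app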

Using composer-wallet-redis , where to see the card in redis service?

I followed the guideline:
Installed the composer-wallet-redis image and started the container.
export NODE_CONFIG={"composer":{"wallet":{"type":"@ampretia/composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}
composer card import admin@test-network.card
I found that the card is still stored on my local machine at ~/.composer/card/.
How can I check whether the card exist in the redis server?
How to import the business network cards into the cloud custom wallet?
The primary issue (which I will correct in the README) is that the module name should be composer-wallet-redis. The @ampretia scope was a temporary repo.
Assuming that redis is started on the default port, you can run the redis CLI like this
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379
You can then issue redis CLI commands to look at the data, though it is not recommended to view or modify it; this is just useful to confirm to yourself that it's working. The KEYS * command will display everything, but it should only be used in a development context. See the warnings on the Redis docs pages.
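Once connected, a couple of read-only commands are enough to confirm that the imported card landed in Redis (a sketch; the actual key names depend on the wallet implementation, so treat them as assumptions):
KEYS *             # development only: list every stored entry
TYPE <some-key>    # see how a given entry is stored before reading it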
To recap, I did the following:
export NODE_ENVIRONMENT
started the docker container
ran composer card import
executed the docker run *** command followed by KEYS *, which returned "(empty list or set)"
@Calanais

How to build a cassandra cluster with docker on a windows machine?

I want to build a Cassandra cluster with Docker. The documentation already tells you how to do this, so that is not the problem I have.
However, I am currently using Docker on Windows 10, and obviously it cannot execute the nested command in docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag, which results in an empty seed list for the container.
How can I nest a command like this in Windows or - if this is not possible - get a workaround for this?
I managed to fix it thanks to a docker-compose.yml by Jason Giedymin. It should work in v1 as well as v2 of docker-compose. By doing it this way, you let Docker do the linking from the get-go and tell the Cassandras about the other seeds with the environment variable the container already gives you.
The sleep 30 part is pretty smart as well, as it makes sure that the second container doesn't try to connect to a container that isn't fully up yet.
One thing I would recommend, though, is using external_links instead of links. This way, other containers don't rely on all of the Cassandra containers being up in order to start and work; relying on that would defeat the purpose of a distributed database.
I still don't know how to nest Windows cmd commands into each other, so I would still be thankful for some tips.
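One possible answer to the nesting question itself: PowerShell also supports $(...) subexpressions inside double-quoted strings, so a near-verbatim variant of the documented command may work there (an untested sketch, assuming PowerShell rather than plain cmd.exe):
docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag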
