Google Cloud Logs Export Names - hadoop

Is there a way to configure the names of the files exported from Logging?
Currently the exported files include colons. These are invalid characters as path elements in Hadoop, so PySpark, for instance, cannot read the files. Obviously the easy solution is to rename the files, but this interferes with syncing.
Is there a way to configure the names, or change them so they do not include colons? Any other solutions are appreciated. Thanks!
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md

At this time, there is no way to change the naming convention when exporting log files, as this process is automated on the backend.
If you would like to request this feature in GCP, I would suggest filing a request in the Public Issue Tracker (PIT). That page allows you to report bugs and request new features to be implemented within GCP.
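In the meantime, if the sink writes to a Cloud Storage bucket, a possible workaround is to copy the exported objects to colon-free names before reading them with PySpark. A minimal sketch, assuming the google-cloud-storage Python client and a hypothetical bucket and prefix (and keeping in mind the caveat above that renaming interferes with syncing):

# Untested sketch: rename exported log objects so their names contain no colons.
# BUCKET_NAME and PREFIX are assumptions; adjust to your sink's destination.
from google.cloud import storage

BUCKET_NAME = "my-log-export-bucket"
PREFIX = "my-log-sink/"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

for blob in client.list_blobs(BUCKET_NAME, prefix=PREFIX):
    if ":" in blob.name:
        new_name = blob.name.replace(":", "_")
        # rename_blob copies the object to the new name and deletes the original,
        # which is exactly what the question notes interferes with syncing.
        bucket.rename_blob(blob, new_name)
        print(f"renamed {blob.name} -> {new_name}")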

Related

How to include config files for the Google Ops-agent

I want to do some configurations for Google Cloud Ops-Agent in order to deploy it via Ansible.
For example /etc/google-cloud-ops-agent/kafka.yaml
How can I include *.yaml configs?
If I use /etc/google-cloud-ops-agent/config.yaml, I'm worried the configuration will be overwritten.
There are two ways I can think of to do this.
The easiest (and least precise): use the copy module to recursively copy the directory contents to the target. Of course, if there are files other than *.yaml, you'll get those as well.
The more complex way (and I have not tested this): use the find module, run locally on the control node, to get a list of the .yaml files, register their locations, and then copy them up. There's probably a simpler way.
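A rough playbook sketch of both approaches (untested; the source and destination paths are assumptions, not from the question):

# Untested sketch; paths are assumptions.
- name: Deploy Ops Agent config snippets
  hosts: all
  become: true
  tasks:
    # Approach 1: recursively copy the whole directory (non-.yaml files come along too).
    - name: Copy everything under files/ops-agent/
      ansible.builtin.copy:
        src: files/ops-agent/      # trailing slash copies the contents, not the directory
        dest: /etc/google-cloud-ops-agent/
        mode: "0644"

    # Approach 2: find the *.yaml files on the control node, then copy only those.
    - name: Find *.yaml configs locally
      ansible.builtin.find:
        paths: files/ops-agent   # assumed location on the control node
        patterns: "*.yaml"
      delegate_to: localhost
      become: false
      register: ops_agent_yaml

    - name: Copy only the *.yaml files
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: /etc/google-cloud-ops-agent/
        mode: "0644"
      loop: "{{ ops_agent_yaml.files }}"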

how to temporary store uploaded files using FLASK

I'm creating a web application using Flask that takes three inputs from the user: name, picture, and grades.
I want to store this information temporarily, tied to the user's session.
As a beginner, I read that sessions are not meant for storing files. What other secure approach would you recommend?
I would recommend writing the files to disk.
If this is really temporary, e.g. you have a two-step sign-up form, you could write them to temporary files or into a temporary directory.
Please see the excellent documentation at https://docs.python.org/3/library/tempfile.html
Maybe this should not be so temporary? A user picture sounds like something more permanent.
In that case, I would recommend creating a directory for each user and storing the files there.
This is done with standard Python I/O, e.g. with the open function.
More info about reading and writing files also can be found in the official Python documentation:
https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files
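A minimal sketch of the temporary-directory approach (untested; the route, field names, and secret key are assumptions for illustration):

# Untested sketch: store an uploaded picture in a per-session temporary directory.
import os
import tempfile

from flask import Flask, request, session
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.secret_key = "change-me"  # required for sessions

@app.route("/upload", methods=["POST"])
def upload():
    # Create a private temporary directory once per session and remember its path.
    if "upload_dir" not in session:
        session["upload_dir"] = tempfile.mkdtemp(prefix="signup_")

    picture = request.files["picture"]
    filename = secure_filename(picture.filename)  # sanitize the user-supplied name
    picture.save(os.path.join(session["upload_dir"], filename))

    session["name"] = request.form["name"]
    session["grades"] = request.form["grades"]
    return "stored temporarily"

Remember to clean up the directory (e.g. with shutil.rmtree) once the sign-up completes or the session expires.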

Configure GCS as Parse Server storage

I am trying to use GCS as a storage for my files in Parse Server. I followed the tutorial, installing the parse-server-gcs-adapter and setting the environment variables in my .bashrc:
export PARSE_SERVER_FILES_ADAPTER=parse-server-gcs-adapter
export GCP_PROJECT_ID=my-project-id
export GCP_KEYFILE_PATH=path-to-keyfile.json
export GCS_BUCKET=my-bucket-name
export GCS_DIRECT_ACCESS=true
I can upload files in my Parse Dashboard, and they are correctly saved in the class, but the files cannot be seen in the bucket browser.
Some sources such as this one talk about a config file, but I cannot find info about this file anywhere.
I want to know if there is any means to debug what is happening, or if there is anything obvious that I am missing.
After some more research, I found the solution to the issue.
The first thing (to debug what was happening) was to take a look (tail -f) at the log file, located at /opt/bitnami/apps/parse/htdocs/logs/parse.log. There I could see that nothing was happening, confirming my suspicion that I was not changing the right file. I then located the config file I was looking for, /opt/bitnami/apps/parse/htdocs/server.js, and was able to configure it as described in the Parse docs.
As I had another problem with the GCS adapter, I also found an issue specific to Bitnami machines, identical to this answer to an issue on GitHub. Now it's all working.

Jenkins plugin auto import

I'm fairly new to Jenkins, but I am looking for a way to collect test results and feed them back into Jenkins.
What this means is, I have a bash script that collects metrics on a number of applications and checks whether or not certain files exist. I collect this data in a plain text file, basically as counters: 1/5, 2/5, 5/10, etc.
The output can be however I want it, but I was wondering if there is a good/clean process that can take this data file and output it nicely inside of Jenkins web interface?
We also use Trac, so if there is a Trac plugin that can do something similar, that would be good too.
Best practice would be to escape the values and pass them as parameters to a parameterized Jenkins build, or to save/capture them as a file. Parameters are finicky and subject to URL encoding, so I would personally pass a file via a shared filesystem such as S3. Your best bet is the Remote access API: https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API

Need help to find out why Websphere Application Server has many .lck files

The file names seem to point to our WAS data sources. However, we're not sure what is creating them or why there are so many. The servers didn't seem to crash. Why is WAS 6.1.0.23 creating these, and why aren't they being cleaned up?
There are many files like these, with some going up to xxx.43.lck
DWSqlLog0.0.lck
DWSqlLog0.0
TritonSqlLog0.0.lck
TritonSqlLog0.0
JTSqlLog0.0
JTSqlLog0.1
JTSqlLog0.3
JTSqlLog0.2
JTSqlLog0.4.lck
JTSqlLog0.4
JTSqlLog0.3.lck
JTSqlLog0.2.lck
JTSqlLog0.1.lck
JTSqlLog0.0.lck
WAS uses JDK logging, and the JDK logger creates such files with extensions .0, .1, etc., along with a .lck file so that the WAS runtime holds a lock on the files it is writing to.
Cheers
Manglu
