AzureFileCopy produces application/octet-stream files in blob

I am using the AzureFileCopy task to copy files to a blob container for a CDN, but all my new files end up with the content type application/octet-stream by default. Is there any way this can be changed?
steps:
- task: AzureFileCopy@1
  displayName: 'File Copy - blob'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/xxxxxx/blob'
    azureSubscription: 'xxxxxx'
    Destination: AzureBlob
    storage: xxxxxx
    ContainerName: cdn

When you upgrade to the latest task version (V5), Azure File Copy sets the content types by default for the AzureBlob destination type.

AzCopy sets the content type for a blob or file to application/octet-stream by default. You can set the content type for all blobs or files by explicitly specifying a value for this option.
If you specify this option without a value, then AzCopy sets each blob or file's content type according to its file extension. You can refer to https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy#setcontenttypecontent-type for more details.
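As a minimal sketch of the upgrade, reusing the inputs from the question (only the task version changes here; verify the V5 input names against your own pipeline):

steps:
- task: AzureFileCopy@5
  displayName: 'File Copy - blob'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/xxxxxx/blob'
    azureSubscription: 'xxxxxx'
    Destination: AzureBlob
    storage: xxxxxx
    ContainerName: cdn

With V5, blobs such as .css, .js, or .svg files should then get content types inferred from their file extensions rather than application/octet-stream.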


Error generating documentation for my component

I have created a Backstage scaffolding template to create a Spring Boot REST service deployed to AWS EKS.
When a component is created from it in Backstage, the component builds using GitHub Actions, is deployed to AWS EKS, and is registered in Backstage.
However, clicking on Docs for the component fails with the following error:
info: Step 1 of 3: Preparing docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:54.963Z"}
info: Prepare step completed for entity component:default/stephendemo16, stored at /tmp/backstage-EjxBxi {"timestamp":"2022-04-28T22:36:56.663Z"}
info: Step 2 of 3: Generating docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:56.663Z"}
error: Failed to build the docs page:
Could not read MkDocs YAML config file mkdocs.yml or mkdocs.yaml for validation; caused by Error: ENOENT: no such file or directory,
open '/tmp/backstage-EjxBxi/mkdocs.yml' {"timestamp":"2022-04-28T22:36:56.664Z"}
ERROR 404: Page not found. This could be because there is no index.md file in the root of the docs directory of this repository.
Looks like someone dropped the mic!
My catalog-info.yaml registers the docs subdirectory:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: "stephendemo16"
  description: "try using template"
  annotations:
    github.com/project-slug: xxxx/stephendemo16
    backstage.io/techdocs-ref: dir:docs
The docs subdirectory contains an index.md file, which contains:
## stephendemo16
try using template
## Getting started
Start writing your documentation by adding more markdown (.md) files to this folder (/docs) or replace the content in this file.
## Table of Contents
The Table of Contents on the right is generated automatically based on the hierarchy
of headings. Only use one H1 (`#` in Markdown) per file.
...
What have I missed?
Having an index.md alone is not sufficient.
Internally, TechDocs currently uses MkDocs. MkDocs has a config file called mkdocs.yaml that defines some metadata, plugins, and your file structure (table of contents).
Place an mkdocs.yaml inside your root directory. MkDocs expects all markdown files to be located inside a /docs subdirectory, and it references your index.md file relative to that folder:
# You can pass the custom site name here
site_name: 'example-docs'
nav:
  # relative reference to your Markdown file and an optional title
  - Home: index.md
plugins:
  - techdocs-core
The location of your mkdocs.yaml is the root folder of your documentation. Therefore, you have to adjust your backstage.io/techdocs-ref annotation to dir:. (meaning the same folder as your catalog-info file).
You can find more details about using the TechDocs setup in the Backstage docs.
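As a minimal sketch of the resulting layout (assuming catalog-info.yaml and mkdocs.yaml both sit in the repository root, which is an assumption about this particular repo):

# repository layout (sketch)
catalog-info.yaml    # contains backstage.io/techdocs-ref: dir:.
mkdocs.yaml
docs/
  index.md

With this layout, TechDocs finds mkdocs.yaml next to the catalog-info file, and mkdocs.yaml in turn resolves index.md inside the docs/ folder.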

How to upload key values to etcd using a YAML file?

I have a YAML file which includes key values. How do I upload it into etcd?
tech:
  corev2uat:
    "#services":
      pms:
        multifirm:
          environment: true
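etcd has no built-in YAML importer, so one way to approach this (a sketch only; the key layout below is just one possible convention, not something etcd prescribes) is to flatten the nested YAML into key paths and write each leaf value with etcdctl:

# flatten each nested YAML path into an etcd key and put its leaf value
ETCDCTL_API=3 etcdctl put /tech/corev2uat/#services/pms/multifirm/environment true

For more than a handful of keys, you would script the flattening over the YAML file rather than typing each etcdctl put by hand.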

Issues with loading MaxMind data into a ClickHouse database using a local file

I'm trying to load MaxMind data into a ClickHouse dictionary, defining its source as a local file on the machine I am running my client from.
To define my dictionary, I use the query:
CREATE DICTIONARY usage_analytics.city_locations(
geoname_id UInt64 DEFAULT 0,
...
...
...
...
)
PRIMARY KEY geoname_id
SOURCE(File(path '/home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv' format 'CSVWithNames'))
SETTINGS(format_csv_allow_single_quotes = 0)
LAYOUT(HASHED())
LIFETIME(300);
Yet I keep getting hit with the error:
Failed to load dictionary 'usage_analytics.city_locations': std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in canonical: No such file or directory [\home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv] [/],
According to the documentation, I have to use its absolute path, which I did by using readlink, and still it cannot detect my file. I am running a ClickHouse client from a remote machine and have the files on that remote machine. Am I supposed to have my files elsewhere?
It looks like this file is not accessible. To fix it, you need to set the right ownership on the file:
chown clickhouse:clickhouse /home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv
# chown -R clickhouse:clickhouse /home/ubuntu/maxmind_csv
An XML-defined dictionary is allowed to read files from any folder.
An SQL-defined dictionary is not.
https://clickhouse.tech/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources/#dicts-external_dicts_dict_sources-local_file
When dictionary with source FILE is created via DDL command (CREATE DICTIONARY ...), the source file needs to be located in user_files directory, to prevent DB users accessing arbitrary file on ClickHouse node.
/etc/clickhouse-server/config.xml
<!-- Directory with user provided files that are accessible by 'file' table function. -->
<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
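Given that, a likely fix (a sketch, assuming the default user_files_path shown above) is to move the CSV under that directory, give it the right ownership, and point the dictionary source at the new location:

mkdir -p /var/lib/clickhouse/user_files
cp /home/ubuntu/maxmind_csv/GeoLite2-City-Locations-en.csv /var/lib/clickhouse/user_files/
chown clickhouse:clickhouse /var/lib/clickhouse/user_files/GeoLite2-City-Locations-en.csv

and then in the CREATE DICTIONARY statement:

SOURCE(FILE(path '/var/lib/clickhouse/user_files/GeoLite2-City-Locations-en.csv' format 'CSVWithNames'))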

Ansible `archive` module to archive without compression

I have to archive a bunch of files, and want to avoid compression to save time. This is a daily operation to archive 1 TB of data, and write it to a different drive, so "time is of the essence".
Looking at the Ansible archive module documentation, it's not clear how to build the target file without compression.
Currently, my Ansible task looks like this:
- name: Create snapshot tarball
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{backup_location}}{{short_date.stdout}}_snapshot.tgz"
    owner: "{{backup_user}}"
    group: "{{backup_group}}"
Is it possible to speed up this process by telling the module to NOT compress? If yes, how?
Based on this other answer on Super User, tar does not compress files by default; gz, on the other hand, which is the default format of archive, does.
So you could try the following:
- name: Create snapshot tarball
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{backup_location}}{{short_date.stdout}}_snapshot.tar"
    format: tar
    owner: "{{backup_user}}"
    group: "{{backup_group}}"
This is also backed up by the tar manual page:
DESCRIPTION
GNU tar is an archiving program designed to store multiple files in a
single file (an archive), and to manipulate such archives. The
archive can be either a regular file or a device (e.g. a tape drive,
hence the name of the program, which stands for tape archiver), which
can be located either on the local or on a remote machine.
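If you want to confirm that the task really produced an uncompressed archive, a quick check along these lines (the path below is only a placeholder for whatever dest resolves to) should report a plain POSIX tar archive rather than gzip compressed data:

# placeholder path; substitute the actual dest of the task
file /backup/20240101_snapshot.tar
# list the first few archived files as a sanity check
tar -tf /backup/20240101_snapshot.tar | head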

Google Deployment Manager: can you import files in a Jinja template that you call directly with --template?

https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? If you use a --config file, you can define imports in that file and call the template as a resource. But you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file; it was failing and I thought it wasn't supported. So the solution was just to swap out the config for a Jinja template (with a schema file), and then I can use --properties.
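To illustrate that edit, a companion schema file placed next to the template can declare the imports and the allowed properties; this is only a sketch, and the imported helper file name is hypothetical:

# vm_template.jinja.schema (sketch)
info:
  title: Single VM template
imports:
- path: common_helpers.jinja
properties:
  zone:
    type: string
    description: Zone for the VM

With the schema in place, the template can still be deployed directly, e.g. gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja --properties zone:us-central1-a.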
Maybe you can try importing the dependent files into your config file as follows:
imports:
- path: vm-template.jinja
- path: vm-template-2.jinja
# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
- name: vm-1
  type: vm-template.jinja
- name: vm-2
  type: vm-template-2.jinja
and set arbitrary metadata in it to create a special variable that you can pass and might use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves
  variable-x: ultra-secret-sauce
More info about the optional flags for gcloud deployment-manager deployments create, along with an example, can be found here.
More info about passing properties using a schema can be found here.
Hope it helps.
