I have created a Backstage scaffolding template that creates a Spring Boot REST service deployed to AWS EKS.
When a component is created from it in Backstage, the component builds using GitHub Actions, is deployed to AWS EKS, and is registered in Backstage.
However, clicking on Docs for the component fails with the following error:
info: Step 1 of 3: Preparing docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:54.963Z"}
info: Prepare step completed for entity component:default/stephendemo16, stored at /tmp/backstage-EjxBxi {"timestamp":"2022-04-28T22:36:56.663Z"}
info: Step 2 of 3: Generating docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:56.663Z"}
error: Failed to build the docs page:
Could not read MkDocs YAML config file mkdocs.yml or mkdocs.yaml for validation; caused by Error: ENOENT: no such file or directory,
open '/tmp/backstage-EjxBxi/mkdocs.yml' {"timestamp":"2022-04-28T22:36:56.664Z"}
ERROR 404: Page not found. This could be because there is no index.md file in the root of the docs directory of this repository.
Looks like someone dropped the mic!
The catalog-info.yaml registers the docs subdirectory:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: "stephendemo16"
description: "try using template"
annotations:
github.com/project-slug: xxxx/stephendemo16
backstage.io/techdocs-ref: dir:docs
The docs subdirectory contains an index.md with the following content:
## stephendemo16
try using template
## Getting started
Start writing your documentation by adding more markdown (.md) files to this folder (/docs) or replace the content in this file.
## Table of Contents
The Table of Contents on the right is generated automatically based on the hierarchy
of headings. Only use one H1 (`#` in Markdown) per file.
...
What have I missed?
Having an index.md alone is not sufficient.
Internally, TechDocs currently uses MkDocs. MkDocs has a config file called mkdocs.yaml that defines some metadata, plugins, and your file structure (table of contents).
Place an mkdocs.yaml inside your root directory. MkDocs expects all Markdown files to be located inside a /docs subdirectory, and it references your index.md file relative to that folder:
# You can pass the custom site name here
site_name: 'example-docs'
nav:
  # relative reference to your Markdown file and an optional title
  - Home: index.md
plugins:
  - techdocs-core
The location of your mkdocs.yaml is the root folder of your documentation. Therefore you have to adjust your backstage.io/techdocs-ref annotation to dir:. (meaning the same folder as your catalog-info file).
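As an illustration, the catalog-info.yaml from the question might then look like this (only the annotation changes; everything else is taken from the question as-is):

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: "stephendemo16"
  description: "try using template"
  annotations:
    github.com/project-slug: xxxx/stephendemo16
    # Point TechDocs at the folder containing mkdocs.yaml,
    # i.e. the same folder as this catalog-info file.
    backstage.io/techdocs-ref: dir:.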
You can find more details about using the TechDocs setup in the Backstage docs.
I have two aws-lambda projects.
The first one uses serverless-bundle (serverless-bundle on GitHub).
When I deploy the first project, I see logs like these:
(...)
Serverless: Uploading service hello.zip file to S3 (34.56 KB)...
Serverless: Uploading service bye.zip file to S3 (12.34 KB)...
(...)
Each function's .zip is small, and the sizes differ.
The second project uses serverless-plugin-typescript, and when I deploy it I see:
(...)
Serverless: Uploading service hello.zip file to S3 (22.83 MB)...
Serverless: Uploading service bye.zip file to S3 (22.83 MB)...
(...)
Each function's .zip has the same size, and it is much bigger than in the first project.
I am going to use TypeScript, so I can't use serverless-bundle because it doesn't support TS yet.
So my question is: how can I reduce the size of the function .zip files, the way serverless-bundle does?
The Serverless framework now has native support for TypeScript via the aws-nodejs-typescript template.
For new projects you can create them using serverless create --template aws-nodejs-typescript && npm install
For existing projects, you just need to include the serverless-webpack plugin.
You can use serverless-webpack like this:
service:
  name: my-functions

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
In your case, the zip files have different sizes because the first method, serverless-bundle, is an extension of serverless-webpack: webpack bundles each function separately, so each archive only contains what that function actually needs.
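As a hedged sketch of how that per-function packaging is typically configured with serverless-webpack (the service name and option values below are illustrative, not taken from your project):

service: my-functions  # hypothetical service name

plugins:
  - serverless-webpack

package:
  # Build and zip each function separately so every archive only contains
  # the code and dependencies that function actually imports.
  individually: true

custom:
  webpack:
    # Copy only the node_modules that are really required at runtime
    # into the bundle instead of the whole dependency tree.
    includeModules: true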
Problem:
I am trying to deploy a function with this step in a second-level compilation (second-level-compilation.yaml):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions',
         'deploy', '${_FUNCTION_NAME}',
         '--source', 'path/to/function',
         '--runtime', 'go111',
         '--region', '${_GCP_CLOUD_FUNCTION_REGION}',
         '--entry-point', '${_ENTRYPOINT}',
         '--env-vars-file', '${_FUNCTION_PATH}/.env.${_DEPLOY_ENV}.yaml',
         '--trigger-topic', '${_TRIGGER_TOPIC_NAME}',
         '--timeout', '${_FUNCTION_TIMEOUT}',
         '--service-account', '${_SERVICE_ACCOUNT}']
I get this error from Cloud Build in the Console:
Step #1: Step #11: ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory path/to/function: ZIP does not support timestamps before 1980
Here is the global flow:
The following step is in a first-level compilation (first-level-compilation.yaml), which is triggered by Cloud Build from a GitHub repository (via the Cloud Build GitHub App):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'launch-second-level-compilation.sh ${_MY_VAR}']
The script "launch-second-level-compilation.sh" does specific operations based on ${_MY_VAR} and then launches a second-level compilation passing a lot of substitutions variables with "gcloud builds submit --config=second-level-compilation.yaml --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,....."
Then, the "second-level-compilation.yaml" described at the beginning of this question is executed, using the substitutions values generated and passed through the launch-second-level-compilation.sh script.
The main idea here is to have a generic first-level-compilation.yaml in charge of calling a second-level compilation with specific dynamically generated substitutions.
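For illustration only (the substitution names are the ones used in the step above, while the default values are placeholders), second-level-compilation.yaml can declare defaults for these variables, which the launch script then overrides via --substitutions:

# Placeholder defaults; launch-second-level-compilation.sh overrides them
# with --substitutions on gcloud builds submit.
substitutions:
  _FUNCTION_NAME: 'my-function'
  _GCP_CLOUD_FUNCTION_REGION: 'europe-west1'
  _ENTRYPOINT: 'Handler'
  _FUNCTION_PATH: 'path/to/function'
  _DEPLOY_ENV: 'dev'
  _TRIGGER_TOPIC_NAME: 'my-topic'
  _FUNCTION_TIMEOUT: '60s'
  _SERVICE_ACCOUNT: 'deployer@my-project.iam.gserviceaccount.com'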
Attempts / Investigations
As described in the issue "Cloud Container Builder, ZIP does not support timestamps before 1980", I tried to "ls" the files in the /workspace directory, but none of the files at the /workspace root have a strange date.
I changed path/to/function from a relative path to /workspace/path/to/function, but with no success, which is not surprising since it ends up being the same directory.
Please make sure you don't have folders without files. For example:
|--dir
|  |--subdir1
|  |  |--file1
|  |--subdir2
|     |--file2
In this example, dir doesn't directly contain any file, only subdirectories. During local deployment, the gcloud SDK puts dir into the tarball without copying its last-modified field,
so it falls back to 1 Jan 1970, which causes problems with ZIP.
As a possible workaround, just make sure every directory contains at least one file.
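As a hedged sketch (the placeholder file name .keep is made up), you could add a step before the deploy in second-level-compilation.yaml that drops a marker file into every empty directory under the function source:

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  # Create a placeholder file in every empty directory so no directory
  # ends up with the 1970 fallback timestamp when the source is zipped.
  args: ['-c', 'find path/to/function -type d -empty -exec touch {}/.keep \;']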
https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? If using a --config file, you can define imports in that file and call the template as a resource. But you can't pass parameters/properties to a config file. I want to call a template directly, so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file; it was failing and I thought it wasn't supported. So the solution was just to swap out the config for a Jinja template (with a schema file), and then I can use --properties.
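For reference, a minimal sketch of what such a schema file might contain; the imported file names and the properties below are hypothetical, but the layout (a vm_template.jinja.schema file next to the template, with info, imports, and properties sections) is the standard Deployment Manager schema structure:

info:
  title: VM template
  description: Schema for vm_template.jinja

# Files the template depends on; they are uploaded together with the template.
imports:
  - path: vm-instance.jinja
  - path: startup-script.sh

# Properties that can be supplied with --properties on the command line.
properties:
  zone:
    type: string
    default: us-central1-a
  machineType:
    type: string
    default: n1-standard-1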
Maybe you can try importing the dependent files into your config file as follows:
imports:
  - path: vm-template.jinja
  - path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
  - name: vm-1
    type: vm-template.jinja
  - name: vm-2
    type: vm-template-2.jinja
and set arbitrary metadata in it to create a special variable that you can pass and might use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves
  variable-x: ultra-secret-sauce
More info about the optional flags for gcloud deployment-manager deployments create, with examples, can be found here.
More info about passing properties using a schema can be found here.
Hope it helps.
I am trying to deploy a Shiny app and am running into trouble...
I have an Rmd file and am trying to publish this document, first by running it locally in RStudio, then on the web.
My files are stored in a folder named Shiny in my home directory. It holds the imported files, my Rmd, my shinyapps.io file, and my rsconnect file.
title: "Sedentary Analysis"
author: "Bianca Gonzalez"
date: "July 26, 2016"
output: html_document
runtime: shiny
When I run rsconnect::deployApp('SedentaryAnalysis.Rmd') I get: Document successfully deployed to https://biancagonzalez.shinyapps.io/SedentaryAnalysis/
However when I open my link, I get the error:
/home/shiny/SedentaryAnalysis does not exist.
Thanks for helping me understand this error.
Bianca G
When you call rsconnect::deployApp('SedentaryAnalysis.Rmd'), it only deploys that one file (SedentaryAnalysis.Rmd). Your .Rmd probably has code in it that refers to other files. Those files need to be deployed too for your code to work on shinyapps.io. Here is what you need to do:
Replace any absolute paths in your document with relative ones.
Call rsconnect::deployDoc(...) instead of rsconnect::deployApp(...). This will tell RStudio to look for the files you use in the document and deploy them with the document.
If you're using RStudio, its Publish button will do most of this for you, so try clicking that in the toolbar.
I want to create a link that refers to a section defined in another file.
I have found a similar question on "Python-Sphinx: Link to Section in external File" and I noticed there is an extension called "intersphinx".
So I tried this extension, but it doesn't work (probably my usage is wrong).
I have tried the following:
conf.py
extensions = ['sphinx.ext.todo', 'sphinx.ext.intersphinx']
...
intersphinx_mapping = {'myproject': ('../build/html', None)}
foo.rst
...
****************
Install Bar
****************
Please refer :ref:`Bar Installation Instruction<myproject:bar_installation>`
I want to create a link like 'Bar Installation Instruction' with the markup above.
bar.rst
...
**************************
Installation Instruction
**************************
.. _bar_installation:
some text...
When I run make html, I get the following warning and the link is not created.
foo.rst: WARNING: undefined label: myproject:bar_installation (if the link has no caption the label must precede a section header)
Thanks in advance.
Looks like it's not able to find your mapping's inventory file. The first part of the tuple serves as the base URL for your links, while the second part is the path to the inventory file. I believe the auto-downloading of inventory files (when you pass None) only works with URIs, not file paths.
In this example, I can build the documentation locally, but it will link to http://example.com/docs/bar.html:
intersphinx_mapping = {
    'myproject': (
        'http://example.com/docs/',  # base URL used for the generated links
        '../html/objects.inv',       # local path to the inventory file
    ),
}