Is a Bullet List With a Long Link Possible in RST Without Warnings? - python-sphinx

I have a long link escaped like this:
`Build Azure Devops Agents With Linux cloud-init for Dotnet Development \
[terraform, azure, devops, docker, dotnet, cloud-init]`_
.. _Build Azure Devops Agents With Linux cloud-init for Dotnet Development [terraform, azure, devops, docker, dotnet, cloud-init]: https://codingsoul.org/2022/04/25/build-azure-devops-agents-with-linux-cloud-init-for-dotnet-development/
However, if I try to make that link part of a bullet list:
- `Build Azure Devops Agents With Linux cloud-init for Dotnet Development \
[terraform, azure, devops, docker, dotnet, cloud-init]`_
.. _Build Azure Devops Agents With Linux cloud-init for Dotnet Development [terraform, azure, devops, docker, dotnet, cloud-init]: https://codingsoul.org/2022/04/25/build-azure-devops-agents-with-linux-cloud-init-for-dotnet-development/
I get the following warning:
Inline interpreted text or phrase reference start-string without end-string.
I don't know how to parse that warning and the docs / online searches haven't helped me.
Surely bullet lists of long links are a common use case and I'm just missing some obvious documentation somewhere, right?
Here is a bullet list (of long links, coincidentally) I came across in my search that didn't help:
How to use inline code with a trailing whitespace?
https://github.com/sphinx-doc/sphinx/issues/3778
https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#lists-and-quote-like-blocks

This works for me. You need to indent the second and subsequent lines of a list item so that they align with the first character of the first line.
- `Build Azure Devops Agents With Linux cloud-init for Dotnet Development \
  [terraform, azure, devops, docker, dotnet, cloud-init]`_

.. _Build Azure Devops Agents With Linux cloud-init for Dotnet Development [terraform, azure, devops, docker, dotnet, cloud-init]: https://codingsoul.org/2022/04/25/build-azure-devops-agents-with-linux-cloud-init-for-dotnet-development/
Your link label also did not match its target, which generated the error ERROR: Unknown target name: "Build Azure Devops Agents With Linux cloud-init for Dotnet Development \ [terraform, azure, devops, docker, dotnet, cloud-init]".

Related

Gitlab CI and Xamarin build fails

I've created a completely new Xamarin Forms app project in Visual Studio for Mac and added it to a GitLab repository. After that I created a .gitlab-ci.yml file to set up my CI build. But the problem is that I get error messages:
error MSB4019: The imported project "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" was not found. Confirm that the expression in the Import declaration "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" is correct, and that the file exists on disk.
This error also pops up for Xamarin.Android.CSharp.targets.
My YML file looks like this:
image: mono:latest

stages:
  - build

build:
  stage: build
  before_script:
    - msbuild -version
    - 'echo BUILDING'
    - 'echo NuGet Restore'
    - nuget restore 'XamarinFormsTestApp.sln'
  script:
    - 'echo Cleaning'
    - MONO_IOMAP=case msbuild 'XamarinFormsTestApp.sln' $BUILD_VERBOSITY /t:Clean /p:Configuration=Debug
Some help would be appreciated ;)
You will need a macOS host to build a Xamarin.iOS application, and AFAIK that's not available in GitLab's shared runners yet. You can find the discussion here and the private beta here. For now, I would recommend running your own macOS host with a registered GitLab runner on that host:
https://docs.gitlab.com/runner/
You can set up the host wherever you want (a VM or a physical device), install the GitLab runner and the Xamarin environment there, tag the runner, and use it with your GitLab pipelines just like any other shared runner.
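For example, registering such a runner on the macOS host could look roughly like this (a sketch; the URL, token, description and tags below are placeholders for your own values):

# install gitlab-runner on the mac, then register it against your GitLab instance
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token YOUR_PROJECT_TOKEN \
  --executor shell \
  --description "mac-xamarin-builder" \
  --tag-list "xamarin,macos"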
From the comments on your question, it looks like Xamarin isn't available in the mono:latest image, but that's OK because you can create your own Docker images to use in GitLab CI. You will need access to a registry, but if you use gitlab.com (as opposed to a self-hosted instance) the registry is enabled for all users. You can find more information on that in the docs: https://docs.gitlab.com/ee/user/packages/container_registry/
If you are using self-hosted, the registry is still available (even for free versions) but it has to be enabled by an admin (docs here: https://docs.gitlab.com/ee/administration/packages/container_registry.html).
Another option is to use Docker's own registry, Docker Hub. It doesn't matter what registry you use, but you'll have to have access to one of them so your runners can pull down your image. This is especially true if you're using shared runners that you (or your admins) don't have direct control over. If you can directly control your runners, another option is to build the docker image on all of your runners that need it.
I'm not familiar with Xamarin, but here's how you can create a new Docker image based on mono:latest:
# ./mono-xamarin/Dockerfile
# Build on top of an existing image rather than starting from scratch;
# everything in the mono:latest image will be available in this image.
FROM mono:latest
# Run whatever you need to in order to install Xamarin or anything else you need.
RUN ./install_xamarin.sh
# Just an example; -y avoids an interactive prompt during the image build.
RUN apt-get update && apt-get install -y git
Once your Dockerfile is written, you can build it like this:
docker build --file ./mono-xamarin/Dockerfile --tag mono-xamarin:latest ./mono-xamarin
If you build the image on your runners, you can use it immediately like:
# .gitlab-ci.yml
image: mono-xamarin:latest

stages:
  - build
...
Otherwise you can now push it to whichever registry you want to use.
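If you go the registry route, pushing the image looks roughly like this (a sketch; the registry path below is a placeholder for your own project's path, and inside a CI job the predefined variables CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD and CI_REGISTRY_IMAGE provide these values):

# authenticate against the GitLab container registry
docker login registry.gitlab.com
# tag the locally built image with your project's registry path (placeholder)
docker tag mono-xamarin:latest registry.gitlab.com/your-group/your-project/mono-xamarin:latest
# push it so runners can pull it via the "image:" keyword
docker push registry.gitlab.com/your-group/your-project/mono-xamarin:latest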

How to deploy web application to AWS instance from GitLab repository

Right now, I deploy my (Spring Boot) application to EC2 instance like:
Build JAR file on local machine
Deploy/Upload JAR via scp command (Ubuntu) from my local machine
I would like to automate that process, but:
without using Jenkins + Rundeck CI/CD tools
without using AWS CodeDeploy service since that does not support GitLab
Question: Is it possible to perform these two simple steps (building and deploying via scp, which are now done manually) with GitLab CI/CD tools, and if so, can you present simple steps to do it?
Thanks!
You need to create a .gitlab-ci.yml file in your repository with CI jobs defined to do the two tasks you've defined.
Here's an example to get you started.
stages:
  - build
  - deploy

build:
  stage: build
  image: gradle:jdk
  script:
    - gradle build
  artifacts:
    paths:
      - my_app.jar

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    - scp my_app.jar target.server:/my_app.jar
In this example, the build job runs in a gradle container and uses gradle to build the app. GitLab CI artifacts are used to capture the built jar (my_app.jar), which will be passed on to the deploy job.
The deploy job runs an ubuntu container, installs openssh-client (for scp), then executes scp to copy my_app.jar (passed from the build job) to the target server.
You have to fill in the actual details of building and copying your app. For secrets like SSH keys, set project-level CI/CD variables that will be passed into your CI jobs.
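As a sketch of that last point, assuming you store the private key in a CI/CD variable named SSH_PRIVATE_KEY (a name you choose yourself) and that target.server accepts that key, the deploy job's before_script could prepare SSH like this:

# openssh-client must already be installed, as in the script section above
# start an ssh agent inside the job's container and load the key from the CI variable
eval $(ssh-agent -s)
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
# trust the target host so scp does not prompt interactively
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keyscan target.server >> ~/.ssh/known_hosts
# after this, the scp in the script section can run non-interactively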
Create a shell file with the following contents.
#!/bin/bash
# Copy JAR file to EC2 via SCP with PEM in home directory (usually /home/ec2-user)
scp -i user_key.pem myJar.jar ec2-user@my.ec2.id.amazonaws.com:/home/ec2-user
# SSH to EC2 instance
ssh -T -i "bastion_keypair.pem" ec2-user@my.ec2.id.amazonaws.com /bin/bash <<-'END2'
# The following commands will be executed automatically by bash.
# Consider this as a remote shell script.
killall java
java -jar ~/myJar.jar server ~/config.yml &>/dev/null &
echo 'done'
# Once completed, the shell will exit.
END2
In 2020, this should be easier with GitLab 13.0 (May 2020), using an older feature Auto DevOps (introduced in GitLab 11.0, June 2018)
Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically detect, build, test, deploy, and monitor your applications.
Leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
Overview
But now (May 2020):
Auto Deploy to ECS
Until now, there hasn't been a simple way to deploy to Amazon Web Services. As a result, GitLab users had to spend a lot of time figuring out their own configuration.
In GitLab 13.0, Auto DevOps has been extended to support deployment to AWS!
GitLab users who are deploying to AWS Elastic Container Service (ECS) can now take advantage of Auto DevOps, even if they are not using Kubernetes. Auto DevOps simplifies and accelerates delivery and cloud deployment with a complete delivery pipeline out of the box. Simply commit code and GitLab does the rest! With the elimination of the complexities, teams can focus on the innovative aspects of software creation!
In order to enable this workflow, users need to:
define the AWS-typed environment variables 'AWS_ACCESS_KEY_ID', 'AWS_ACCOUNT_ID' and 'AWS_REGION', and
enable Auto DevOps.
Then, your ECS deployment will be automatically built for you with a complete, automatic, delivery pipeline.
See documentation and issue

How to make SSL/HTTPS certificates on Gitlab Auto DevOps with GKE and kube-lego

The instructions say to install it, but they don't suggest which method will work, in what order, or with which load balancer. I keep getting useless test certificates installed on my deployments.
It sounds like you are getting the default chart from helm, which is slightly different from the instructions you will find on the kube-lego GitHub. The helm chart uses Let's Encrypt's staging server by default. Also, if you are not familiar with helm, it can be quite tricky to undo the remnants that the kube-lego helm chart leaves behind, especially as tiller likes to keep the old installation history.
Here's a brief overview of setting up GitLab Auto DevOps with Google Kubernetes Engine. I should mention that you can skip up to step 13 if you use the GitLab GKE wizard introduced in 10.1, but it won't really tell you what's happening or how to debug when things go wrong; also, the wizard is a one-way process.
In GitLab, under Integrations, you'll find a Kubernetes integration button
You can also find this under CI/CD -> Clusters
The API URL is the endpoint IP prefixed by https://
The "CA Certificate" it asks for is the service account CA, not the same as the cluster CA
To connect to GitLab you'll need to create a k8s service account and get the CA and token for it. Do so by using gcloud to authenticate kubectl; GKE makes it easy by doing this for you through the "Connect" button in the Kubernetes Engine console
https://kubernetes.io/docs/admin/authentication/#service-account-tokens
All commands must be run with a custom namespace; it will not work with the default one
kubectl create namespace (NS)
kubectl create serviceaccount (NAME) --namespace=(NS)
This will create two tokens in your configs/secrets: a default one and your service account one
kubectl get -o json serviceaccounts (NAME) --namespace=(NS)
kubectl get -o json secret (secrets-name-on-prev-result) --namespace=(NS)
To decode the base64 values, pipe them to base64 -d, for example:
echo mybase64stringasdfasdf= | base64 -d
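As a shortcut, the token and the CA certificate can also be extracted and decoded in one step with kubectl's jsonpath output (using the same placeholders as above):

kubectl get secret (secrets-name-on-prev-result) --namespace=(NS) -o jsonpath='{.data.token}' | base64 -d
kubectl get secret (secrets-name-on-prev-result) --namespace=(NS) -o jsonpath='{.data.ca\.crt}' | base64 -d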
To install helm, use the installation script method
https://docs.helm.sh/using_helm/#from-script
Init helm and update its repo
helm init
helm repo update
Install nginx-ingress, an ingress controller backed by nginx. You could use the Google load balancer as well, but it's not as portable.
helm install stable/nginx-ingress
Make a wildcard subdomain with an A record pointing to the IP address set up by ingress
In GitLab, make Auto DevOps use your newly set up wildcard subdomain: if it's "*.x" on "me.com", you should enter "x.me.com" in the Auto DevOps settings.
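To find the IP address for that A record, look at the external IP assigned to the nginx-ingress controller's LoadBalancer service, for example:

# the EXTERNAL-IP column of the *-nginx-ingress-controller service is the address to use
kubectl get svc --all-namespaces | grep nginx-ingress-controller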
Now install kube-lego
helm install --name lego-main \
  --set config.LEGO_EMAIL=CHANGEMENOW@example.com \
  --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
  stable/kube-lego
Helm packages are called charts, and an installed chart is called a release.
If you wish to delete a release, you must purge it (otherwise tiller will keep a history of it):
helm delete --purge my-release-name
You can find the release names and their associated chart in
helm list
Troubleshooting
Order doesn't seem to matter too much. Attaching to a pod can be a useful way of debugging problems, such as a bad email address. The ideal order, however, is probably nginx-ingress, then kube-lego, then GitLab. I did make it work with GitLab first, then nginx-ingress, then kube-lego.
I heard from Sid that they are working to make this easier... let's hope so.

Simple docker deployment tactics

Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect Dev setup where I've created a file volume tied to my app.
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a nodejs/express app and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) can I have different Dockerfiles, one for Dev and one for Dist? If not, B) can I have if statements in my Dockerfiles that would do something like... if ENV == 'dist', add /dist... etc.
I'm struggling to figure out how to move this from a Dev environment locally to a tightened up production ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV"; if it is set to "DEV" it starts my development server with debugging turned on, otherwise it launches the production version of the server.
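A minimal sketch of such an entrypoint script, assuming a nodejs/express app like the one in the question (the exact start commands are placeholders):

#!/bin/sh
# entrypoint.sh: switch between dev and production based on the ENV variable
if [ "$ENV" = "DEV" ]; then
    # development: run with debugging / auto-reload enabled
    exec npm run dev
else
    # production: run the built output from the dist folder
    exec node dist/server.js
fi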
Alternatively, you can avoid using Docker in development and instead have a Dockerfile at the root of your repo. You can then use your CI server (in our case Jenkins, but Docker Hub also allows for automated build repositories) to build the image for you, which helps if you're a small team or don't have access to a dedicated build server.
Then you can just pull the image and run it on your production box.

Source code changes in kubernetes, SaltStack & Docker in production and locally

This is an abstract question and I hope that I am able to describe it clearly.
Basically: what is the workflow for distributing source code to Kubernetes running in production? As you don't run Docker with -v in production, how do you update running pods?
In production:
Do you use SaltStack to update each container in each pod?
Or
Do you rebuild Docker images and restart every pod?
Locally:
With Vagrant you can share a local folder for source code. With Docker you can use -v, but if you have Kubernetes running locally, how would you mirror production as closely as possible?
If you use Vagrant with boot2docker, how can you combine this with Docker -v?
The short answer is that you shouldn't "distribute source code"; rather, you should "build and deploy". In terms of Docker and Kubernetes, you build and upload the container image to the registry and then perform a rolling update with Kubernetes.
It would probably help to take a look at the specific example script, but the gist is in the usage summary in current Kubernetes CLI:
kubecfg [OPTIONS] [-u <time>] [-image <image>] rollingupdate <controller>
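kubecfg was the early Kubernetes CLI; with current kubectl the equivalent rolling update of a Deployment looks roughly like this (the deployment and image names below are placeholders):

# point the deployment at the newly pushed image and watch the rollout
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2
kubectl rollout status deployment/my-app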
If you intend to try things out in development and are looking for instant code updates, I'm not sure Kubernetes helps much there. It's been designed for production systems, and shadow deploys are not the kind of thing one does sanely.
