Encountered unknown tag 'snapshot' in DBT Snapshot

I want to run a DBT snapshot and am following a near-identical template to the one outlined in the documentation. However, I get the following error when I run dbt snapshot:
Compilation Error in model test_snapshot (.../project_folder/snapshots/test_snapshot.sql)
Encountered unknown tag 'snapshot'.
line 1
{% snapshot test_snapshot %}
Below is the code I am attempting to compile.
{% snapshot test_snapshot %}

{{
    config(
      strategy='check',
      unique_key='id',
      target_schema='snapshots',
      check_cols='all'
    )
}}

select *
from {{ ref('model_in_sample_folder') }}

{% endsnapshot %}
The paths of the snapshot file and the referenced model are .../project_folder/snapshots/test_snapshot.sql and .../project_folder/intermediate/model_in_sample_folder.sql respectively.

The problem was the location of my snapshots folder. Once I moved it out of my models folder and up one level in the folder hierarchy, I was able to run it successfully.
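For anyone hitting the same error: dbt compiles everything under the model paths as models, so snapshot files must live in a directory registered under snapshot-paths instead. A minimal dbt_project.yml sketch (key names per recent dbt versions; older versions call the first one source-paths):

```yaml
# dbt_project.yml (excerpt)
# Snapshots must live outside the models directory,
# in a folder listed under snapshot-paths.
model-paths: ["models"]
snapshot-paths: ["snapshots"]
```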


Get Hugo's version number in YAML files in Hugo v0.55

Question
What's the right way to get Hugo's version number in locale files i18n/*.yaml under Hugo v0.55?
Background
I'm using Hugo with the theme Beautiful Hugo, which includes the following syntax, deprecated since v0.55:
.URL
.Hugo
.RSSLink
#2 is used in locale files i18n/*.yaml to get Hugo's version number with {{ .Hugo.Version }}, but this is deprecated.
As a result, my terminal showed messages similar to this when running hugo server.
Building sites … WARN 2019/04/09 10:14:55 Page's .URL is deprecated and will be removed in a future release. Use .Permalink or .RelPermalink. If what you want is the front matter URL value, use .Params.url.
WARN 2019/04/09 10:14:55 Page's .Hugo is deprecated and will be removed in a future release. Use the global hugo function.
WARN 2019/04/09 10:14:55 Page's .RSSLink is deprecated and will be removed in a future release. Use the Output Format's link, e.g. something like:
{{ with .OutputFormats.Get "RSS" }}{{ .RelPermalink }}{{ end }}.
Source: https://gist.github.com/chris-short/78582dc32f877d65eb388f832d2c1dfa
Goal
How do I suppress warning #2 (.Hugo)? (I've already fixed #1 and #3.)
Attempt
#peaceiris on Qiita suggests changing {{ .Hugo.Generator }} to {{ hugo.Generator }}.
I applied this to the locale files i18n/*.yaml, i.e. I replaced {{ .Hugo.Version }} with {{ hugo.Version }} in those YAML files. However, I got a function "hugo" not defined error. I tested {{ hugo.Version }} in a Go HTML template file, and there it works fine.
Error: "/home/vin100/beautifulhugo/i18n/zh-TW.yaml:1:1": failed to load translations: unable to parse translation #14 because template: 由 Hugo v{{ hugo.Version }} 提供 • 主題 Beautiful Hugo 移植自 Beautiful Jekyll:1: function "hugo" not defined
map[id:poweredBy translation:由 Hugo v{{ hugo.Version }} 提供 • 主題 Beautiful Hugo 移植自 Beautiful Jekyll]
Thanks to #bep on Hugo Discourse, I've found the solution! Simply use {{ .Site.Hugo.Version }}.
Reference: #bep's answer on Hugo Discourse
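For reference, the fixed locale entry then looks like this (reconstructed from the error message above):

```yaml
# i18n/zh-TW.yaml -- use .Site.Hugo.Version, since the global `hugo`
# function is not defined inside i18n translation templates
- id: poweredBy
  translation: "由 Hugo v{{ .Site.Hugo.Version }} 提供 • 主題 Beautiful Hugo 移植自 Beautiful Jekyll"
```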

How to debug Ansible issues?

Sometimes, Ansible doesn't do what you want, and increasing verbosity doesn't help. For example, I'm now trying to start the coturn server, which comes with an init script, on a systemd OS (Debian Jessie). Ansible considers it running, but it's not. How do I look into what's happening under the hood? Which commands are executed, and with what output/exit code?
Debugging modules
The most basic way is to run ansible/ansible-playbook with an increased verbosity level by adding -vvv to the execution line.
The most thorough way for the modules written in Python (Linux/Unix) is to run ansible/ansible-playbook with an environment variable ANSIBLE_KEEP_REMOTE_FILES set to 1 (on the control machine).
It causes Ansible to leave the exact copy of the Python scripts it executed (either successfully or not) on the target machine.
The path to the scripts is printed in the Ansible log and for regular tasks they are stored under the SSH user's home directory: ~/.ansible/tmp/.
The exact logic is embedded in the scripts and depends on each module. Some are using Python with standard or external libraries, some are calling external commands.
Debugging playbooks
Similarly to debugging modules, increasing the verbosity level with the -vvv parameter causes more data to be printed to the Ansible log.
Since Ansible 2.1, a Playbook Debugger allows you to interactively debug failed tasks: check and modify the data, then re-run the task.
Debugging connections
Adding -vvvv parameter to the ansible/ansible-playbook call causes the log to include the debugging information for the connections.
Debugging Ansible tasks can be almost impossible if the tasks are not your own, contrary to what the Ansible website states:
No special coding skills needed
Ansible requires highly specialized programming skills because it is not YAML or Python, it is a messy mix of both.
The idea of using markup languages for programming has been tried before. XML was very popular in Java community at one time. XSLT is also a fine example.
As Ansible projects grow, the complexity grows exponentially as a result. Take for example the OpenShift Ansible project, which has the following task:
- name: Create the master server certificate
  command: >
    {{ hostvars[openshift_ca_host]['first_master_client_binary'] }} adm ca create-server-cert
    {% for named_ca_certificate in openshift.master.named_certificates | default([]) | lib_utils_oo_collect('cafile') %}
    --certificate-authority {{ named_ca_certificate }}
    {% endfor %}
    {% for legacy_ca_certificate in g_master_legacy_ca_result.files | default([]) | lib_utils_oo_collect('path') %}
    --certificate-authority {{ legacy_ca_certificate }}
    {% endfor %}
    --hostnames={{ hostvars[item].openshift.common.all_hostnames | join(',') }}
    --cert={{ openshift_generated_configs_dir }}/master-{{ hostvars[item].openshift.common.hostname }}/master.server.crt
    --key={{ openshift_generated_configs_dir }}/master-{{ hostvars[item].openshift.common.hostname }}/master.server.key
    --expire-days={{ openshift_master_cert_expire_days }}
    --signer-cert={{ openshift_ca_cert }}
    --signer-key={{ openshift_ca_key }}
    --signer-serial={{ openshift_ca_serial }}
    --overwrite=false
  when: item != openshift_ca_host
  with_items: "{{ hostvars
                  | lib_utils_oo_select_keys(groups['oo_masters_to_config'])
                  | lib_utils_oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True}) }}"
  delegate_to: "{{ openshift_ca_host }}"
  run_once: true
I think we can all agree that this is programming in YAML. Not a very good idea. This specific snippet could fail with a message like
fatal: [master0]: FAILED! => {"msg": "The conditional check 'item !=
openshift_ca_host' failed. The error was: error while evaluating
conditional (item != openshift_ca_host): 'item' is undefined\n\nThe
error appears to have been in
'/home/user/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml':
line 39, column 3, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n\n-
name: Create the master server certificate\n ^ here\n"}
If you hit a message like that, you are doomed. But we have the debugger, right? Okay, let's take a look at what is going on.
[master0] TASK: openshift_master_certificates : Create the master server certificate (debug)> p task.args
{u'_raw_params': u"{{ hostvars[openshift_ca_host]['first_master_client_binary'] }} adm ca create-server-cert {% for named_ca_certificate in openshift.master.named_certificates | default([]) | lib_utils_oo_collect('cafile') %} --certificate-authority {{ named_ca_certificate }} {% endfor %} {% for legacy_ca_certificate in g_master_legacy_ca_result.files | default([]) | lib_utils_oo_collect('path') %} --certificate-authority {{ legacy_ca_certificate }} {% endfor %} --hostnames={{ hostvars[item].openshift.common.all_hostnames | join(',') }} --cert={{ openshift_generated_configs_dir }}/master-{{ hostvars[item].openshift.common.hostname }}/master.server.crt --key={{ openshift_generated_configs_dir }}/master-{{ hostvars[item].openshift.common.hostname }}/master.server.key --expire-days={{ openshift_master_cert_expire_days }} --signer-cert={{ openshift_ca_cert }} --signer-key={{ openshift_ca_key }} --signer-serial={{ openshift_ca_serial }} --overwrite=false"}
[master0] TASK: openshift_master_certificates : Create the master server certificate (debug)> exit
How does that help? It doesn't.
The point here is that it is an incredibly bad idea to use YAML as a programming language. It is a mess. And the symptoms of the mess we are creating are everywhere.
Some additional facts: the provisioning-of-prerequisites phase of OpenShift Ansible on Azure takes over 50 minutes. The deploy phase takes more than 70 minutes. Each time, whether it is the first run or a subsequent run. And there is no way to limit provisioning to a single node. This limit problem was part of Ansible in 2012, and it is still part of Ansible today. That fact tells us something.
The point here is that Ansible should be used as it was intended: for simple tasks, without the YAML programming. It is fine for lots of servers, but it should not be used for complex configuration-management tasks.
Ansible is not an Infrastructure as Code (IaC) tool.
If you are asking how to debug Ansible issues, you are using it in a way it was not intended to be used. Don't use it as an IaC tool.
Here's what I came up with.
Ansible sends modules to the target system and executes them there. Therefore, if you change a module locally, your changes will take effect when running the playbook. On my machine the modules are at /usr/lib/python2.7/site-packages/ansible/modules (ansible-2.1.2.0), and the service module is at core/system/service.py. Ansible modules (instances of the AnsibleModule class declared in module_utils/basic.py) have a log method, which sends messages to the systemd journal if available, or falls back to syslog. So, run journalctl -f on the target system, add debug statements (module.log(msg='test')) to the module locally, and run your playbook. You'll see the debug statements under the ansible-basic.py unit name.
Additionally, when you run ansible-playbook with -vvv, you can see some debug output in systemd journal, at least invocation messages, and error messages if any.
One more thing: if you try to debug code that's running locally with pdb (import pdb; pdb.set_trace()), you'll most likely run into a BdbQuit exception. That's because Python closes stdin when creating a thread (the Ansible worker). The solution is to reopen stdin before running pdb.set_trace(), as suggested here:
sys.stdin = open('/dev/tty')
import pdb; pdb.set_trace()
Debugging roles/playbooks
Basically, debugging Ansible automation over a big inventory across large networks is nothing other than debugging a distributed network application. It can be very tedious and delicate, and there are not enough user-friendly tools.
Thus I believe the answer to your question is the union of all the answers before mine, plus a small addition. So here:
absolutely mandatory: you have to know what's going on, i.e. what you're automating and what you expect to happen. For example, Ansible failing to detect a service with a systemd unit as running or stopped usually means a bug in the service unit file or in the service module, so you need to: 1. identify the bug; 2. report the bug to the vendor/community; 3. provide your workaround with a TODO and a link to the bug; 4. when the bug is fixed, delete your workaround
to make your code easier to debug, use modules as much as you can
give all tasks and variables meaningful names.
use static code analysis tools like ansible-lint. This saves you from really stupid small mistakes.
utilize verbosity flags and log path
use debug module wisely
"Know thy facts" - sometimes it is useful to dump target machine facts into file and pull it to ansible master
use strategy: debug. In some cases you can drop into a task debugger on error; you can then evaluate all the params the task is using and decide what to do next
the last resort would be using the Python debugger, attaching it to the local Ansible run and/or to the remote Python executing the modules. This is usually tricky: you need an additional open port on the machine, and what if the code opening the port is the very code causing the problem?
Also, sometimes it is useful to "look aside" - connect to your target hosts and increase their debuggability (more verbose logging)
Of course log collection makes it easier to track changes happening as a result of ansible operations.
As you can see, as with any other distributed applications and frameworks, debuggability is still not what we'd wish for.
Filters/plugins
This is basically Python development, debug as any Python app
Modules
Depending on the technology, and complicated by the fact that you need to see both what happens locally and what happens remotely, you had better choose a language that is easy to debug remotely.
You can use register together with the debug module to print return values. For example, to see the return code of a script called "somescript.sh", the tasks inside the play would be:
- name: my task
  shell: "bash somescript.sh"
  register: output

- debug:
    msg: "{{ output.rc }}"
For the full set of return values you can access in Ansible, see this page: http://docs.ansible.com/ansible/latest/common_return_values.html
There are multiple levels of debugging that you might need, but the easiest one is to add the ANSIBLE_STRATEGY=debug environment variable, which enables the debugger on the first error.
1st approach: debug the Ansible module via the q module, printing debug logs with calls like q('Debug statement'). Check the q module's page for where in the temp directory the logs are generated; in most cases it will be either $TMPDIR/q or /tmp/q, so you can run tail -f $TMPDIR/q to follow the logs once the play runs the module (ref: q module).
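To illustrate the 1st approach: the real q library appends each call's output to a single file under the temp directory, which you can then tail. A minimal stdlib sketch of that logging pattern (an assumed stand-in for illustration, not the actual q API):

```python
import os
import tempfile

# Log file in the same place the q library uses ($TMPDIR/q or /tmp/q).
LOG_PATH = os.path.join(tempfile.gettempdir(), "q")

def q(msg):
    """Append a debug message to the q-style log file."""
    with open(LOG_PATH, "a") as fh:
        fh.write("%s\n" % msg)

# Sprinkle calls like this through the module code,
# then run `tail -f` on the log file while the play executes.
q("Debug statement")
```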
2nd approach: if the play is running on localhost, you can use the pdb module to debug the play, following the doc: https://docs.ansible.com/ansible/latest/dev_guide/debugging.html
3rd approach: use the Ansible debug module to print the play result (ref: Debug module).

Render markdown from a yaml multiline string in a Jekyll data file

When using Jekyll data files, I would like to store a formatted description, primarily so that I can have links in it. It works with HTML.
- name: Project name
  description: >
    I want to include a <a href="http://foobar.com">link</a>
That renders properly in the generated page when included with {{ project.description }}.
Can I use markdown instead of HTML? I would prefer to do this:
- name: Project name
  description: >
    I want to include a [link](http://foobar.com)
Turns out Liquid supports filters, but doesn't have one for processing markdown. Thankfully, Jekyll adds its own set of handy filters, which includes markdownify, so now I can do this:
{{ project.description | markdownify }}
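Putting the pieces together, a minimal sketch (the file name _data/projects.yml and the surrounding loop are assumptions) of the data file plus the template that renders it:

```yaml
# _data/projects.yml
- name: Project name
  description: >
    I want to include a [link](http://foobar.com)
```

```liquid
{% for project in site.data.projects %}
  <h3>{{ project.name }}</h3>
  {{ project.description | markdownify }}
{% endfor %}
```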

Jekyll/SASS site not compiling via Ruby running on Windows 7, tested also with Prepros, site a no go

I have Jekyll 2.4.0 and Ruby 2.0.09576 installed.
I am working along with Travis from YouTube channel 'DevTips' and using all of his files/information to compile his 'Artist' project site. Last week I successfully served the project site via Ruby/Jekyll serve from source folder. Today, I tried the same process and the site would not compile. I am using same localhost:8000, that is verified and worked last week. I am running Win7 64bit and followed advice to run UTF-8 encoding command: chcp 65001.
No files have changed in the portfolio folder on my PC, and there have been NO Windows updates or other software installs. The other thing jekyllrb.com/windows/ says is to add a line of code for 'Auto-regeneration' to the 'Gemfile'... OK, it does not say where this 'Gemfile' is to edit. Is it here: the C:.../RubyDevKit/bin/gem.windows batch file?
I am stumped as to why the project files compiled and displayed just fine last week, yet today it does not work: NO connection can be made to localhost:8000, so no site is generated. I also tried to view the site via Prepros, using their 'Open Live Preview', and that generates a site on localhost:8001 displaying only this text: --- --- {% include header.html %} {% include about.html %} {% include work.html %} {% include clients.html %} {% include contact.html %} {% include form.html %} {% include footer.html %}
I've tried other localhost addresses...8000 through 8005.
Any ideas?
Thank you!
Mark
Have you changed the localhost address?
The default localhost port for Jekyll is 4000.
Also, the Prepros live preview will not work in your Jekyll folder, because the actual site is
rendered within the _site folder. This folder is what jekyll serve generates.
So, can you maybe access the site over localhost:4000, and/or is the _site folder there? Or does Jekyll just not work at all?
SOLVED! I will be buying 'Mixture' front end developer framework software! Enough of the bugs and wasted time coming mostly from lame Windows software issues!
CHEERS!
Mark
Do you have the latest version of Python installed? I had the same issue, but after installing Python's latest update (2.7.9) the issue was resolved.

How do I prevent module.run in saltstack if my file hasn't changed?

In the 2010.7 version of SaltStack, the onchanges element is available for states. However, that version isn't available for Windows yet, so that's right out.
And unfortunately salt doesn't use the zipfile module to extract zipfiles. So I'm trying to do this:
/path/to/nginx-1.7.4.zip:
  file.managed:
    - source: http://nginx.org/download/nginx-1.7.4.zip
    - source_hash: sha1=747987a475454d7a31d0da852fb9e4a2e80abe1d

extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
But this tries to extract the files every time. I don't want it to do that; I only want it to extract the file if the .zip file changes, because once it's been extracted it'll be running (I've got something set up to take care of that). And once it's running, I can't overwrite nginx.exe, because Windows is awesome like that.
So how can I extract the file only if it's a newer version of nginx?
I would probably use jinja to test for the existence of a file that you know would only exist if the zip file has been extracted.
{% if not salt['file.file_exists']('/path/to/extract/known_file.txt') %}
extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
{% endif %}
This will cause the extract_nginx state to not appear in the final rendered sls file if the zip file has been extracted.
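A variant of the same guard keys the check on a version-specific path, so that a new nginx zip triggers a fresh extraction (the marker path and the file.file_exists call here are assumptions to adapt to your layout):

```sls
{% if not salt['file.file_exists']('/path/to/extract/nginx-1.7.4/nginx.exe') %}
extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
{% endif %}
```

Because the marker path embeds the version, bumping the managed zip to a newer release makes the guard false again and the extraction state is rendered once more.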
