I'm trying to use CodeDeploy's permissions handling to deploy a Laravel app, but I keep getting an error saying /home/tether/storage/app has duplicate permissions. To my eyes, the except list should mean only one rule applies to that path.
permissions:
  - object: /home/tether
    pattern: "**"
    except:
      - storage
      - storage/app
      - storage/framework
      - storage/framework/cache
      - storage/framework/sessions
      - storage/framework/views
      - storage/framework
      - storage/logs
    owner: tether
    group: tether
  - object: /home/tether/storage
    pattern: "**"
    owner: tether
    group: tether
    mode: 755
    type:
      - directory
Can you try adding

type:
  - directory

to your /home/tether object? That way the codedeploy-agent would apply the rule only to directories and exclude the ones listed in your except list while setting permissions.
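For reference, a sketch of how the first permissions entry would look with that change applied (paths taken from the question, minus the duplicated storage/framework entry):

permissions:
  - object: /home/tether
    pattern: "**"
    except:
      - storage
      - storage/app
      - storage/framework
      - storage/framework/cache
      - storage/framework/sessions
      - storage/framework/views
      - storage/logs
    owner: tether
    group: tether
    type:
      - directory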
I would like to use npmAuthToken from .yarnrc.yml which is located in my home directory
~/.yarnrc.yml content
npmScopes:
  company:
    npmAuthToken: NpmToken.token
~/Project/MyProject/.yarnrc.yml content
enableGlobalCache: true
logFilters:
  - code: YN0013
    level: discard
nodeLinker: node-modules
npmScopes:
  company:
    npmAlwaysAuth: true
    npmPublishRegistry: "https://company-url"
    npmRegistryServer: "https://company-url"
plugins:
  - path: .yarn/plugins/@yarnpkg/plugin-workspace-tools.cjs
    spec: "@yarnpkg/plugin-workspace-tools"
yarnPath: .yarn/releases/yarn-3.1.0.cjs
However, when I try to run yarn install it fails at the resolution step with the following message: Invalid authentication (as an anonymous user).
It worked fine with Yarn 1 and an ~/.npmrc file. Now we're migrating to Yarn 2+, but I'm stuck on this issue. Any ideas on how to solve this? Thanks!
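One avenue worth trying (an assumption on my part, since Yarn 2+ ties auth settings to registries as well as scopes): key the token by registry URL in the home ~/.yarnrc.yml using npmRegistries, so it applies to the registry the project config points the company scope at:

# ~/.yarnrc.yml — a sketch; "https://company-url" is the registry from the question
npmRegistries:
  "https://company-url":
    npmAuthToken: NpmToken.token
    npmAlwaysAuth: true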
I have to archive a bunch of files, and want to avoid compression to save time. This is a daily operation to archive 1 TB of data, and write it to a different drive, so "time is of the essence".
Looking at the Ansible archive module documentation, it's not clear how to build the target file without compression.
Currently, my Ansible task looks like this:
- name: Create snapshot tarball
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{ backup_location }}{{ short_date.stdout }}_snapshot.tgz"
    owner: "{{ backup_user }}"
    group: "{{ backup_group }}"
Is it possible to speed up this process by telling the module to NOT compress? If yes, how?
Based on this other answer on Super User, tar does not compress files by default; gz, which is the default format of the archive module, does.
So you could try:
- name: Create snapshot tarball
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{ backup_location }}{{ short_date.stdout }}_snapshot.tar"
    format: tar
    owner: "{{ backup_user }}"
    group: "{{ backup_group }}"
This is also backed up by the tar manual page:
DESCRIPTION
GNU tar is an archiving program designed to store multiple files in a
single file (an archive), and to manipulate such archives. The
archive can be either a regular file or a device (e.g. a tape drive,
hence the name of the program, which stands for tape archiver), which
can be located either on the local or on a remote machine.
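If you want to confirm the result really is uncompressed, a quick check with file (the archive name here is just an example; exact output varies by platform):

$ file /backups/20190101_snapshot.tar
/backups/20190101_snapshot.tar: POSIX tar archive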
When I deploy my serverless API using:
serverless deploy
the Lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}

functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'

package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**

plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding

custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes

layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error while using the explicit layers keys that you are using to define a Lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I consider this a temporary solution, because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue, they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under a folder named nodejs, so that when unzipped the layout is nodejs/node_modules/<various packages>.
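Building that artifact might look something like this (a sketch; test.zip matches the artifact name used below):

# run from the directory that contains the nodejs/ folder
zip -r test.zip nodejs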
Then, instead of using the explicit definition of layers, I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function, the layer is referenced like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
TestLambdaLayer follows the naming convention <your name of layer>LambdaLayer, as documented here.
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
If anyone faces a similar Runtime.ImportModuleError, it's fair to say that another cause of this issue could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
It will cause exactly the same error at runtime once you've deployed your Lambda function (with the Serverless Framework). Just make sure to remove the patterns that could exclude your dependencies.
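If you're unsure which files survive your exclude patterns, one way to check is to inspect the artifact that serverless package produces under .serverless/ (a sketch; <service-name> is a placeholder for your own service's zip name):

serverless package
unzip -l .serverless/<service-name>.zip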
I am using TypeScript with serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems the files were not included in the zip file by Serverless when I was using require.
PS: Removing serverless-plugin-typescript and switching back to JavaScript also solved the problem.
I have a bunch of Ansible roles that I'd like to reuse. They are each kept in a repo in a private Bitbucket.
I want to add projects that are hosted in Git as meta/dependencies for the roles I'm working on, but I can't quite figure out what the syntax is.
In this non-working example, a role requires another role to be deployed first, with parameters, prior to running.
FYI, the remote role acm_layout is intended to create a standard directory layout for the server, so that my role can run knowing that all of the standard directories already exist.
---
dependencies:
  - { role: project_keys }  # Works fine, just reuses a local role
  - name: acm_layout        # Doesn't work, but this is what I want to fix
    src: ssh://git@bigcompany.com/acm/acm_layout.git
    scm: git
    version: feature/initialize
    application_storage_dir: "{{ base_storage_dir }}"
    application_data_dir: "{{ app_data_dir }}"
When I run this I get the following error:
ERROR! the role 'acm_layout' was not found in [lots of paths deleted]
The error appears to have been in '/home/zs5fgzg/_tmp/horizon_deployment_scf/ansible/roles/horizon_layout/meta/main.yaml': line 4, column 6, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- { role: horizon_keys }
- src: ssh://git@bigcompany.com:7999/acm/acm_layout.git
^ here
So what's the correct way to do this?
Yes, you can use ansible-galaxy install with the requirements.yml option to get roles remotely. Create requirements.yml as follows:
https://github.com/avinash6784/elk-stack/blob/master/requirements.yml
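For the role in the question, the requirements file might look something like this (a sketch; src, scm, and version are taken from the meta file above):

# requirements.yml
- name: acm_layout
  src: ssh://git@bigcompany.com/acm/acm_layout.git
  scm: git
  version: feature/initialize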
And run the following command:
$ ansible-galaxy install -r requirements.yml -p roles/
For more info on how to get roles using ansible-galaxy, please visit http://docs.ansible.com/ansible/latest/galaxy.html
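Once the role has been installed into roles/, the meta/main.yml dependency can then reference it by name and still pass the parameters from the question (a sketch):

---
dependencies:
  - { role: project_keys }
  - role: acm_layout
    application_storage_dir: "{{ base_storage_dir }}"
    application_data_dir: "{{ app_data_dir }}"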
So one thing we've encountered in our project is that we do not want to store our large files in the Git repo for our Ansible roles, because it slows down cloning (and hosts like GitHub limit files to 100 MB anyway).
What we've done is store our files in a separate internal location where they can sit statically, with no size restrictions. Our roles are written so that they first pull these static files into their local files folder and then continue as normal.
i.e.
roles/foo/tasks/main.yml
- name: Create role's files directory
  file:
    path: "{{ roles_files_directory }}"
    state: directory

- name: Copy static foo to local
  get_url:
    url: "{{ foo_static_gz }}"
    dest: "{{ roles_files_directory }}/{{ foo_gz }}"

# ....Do rest of the tasks...
roles/foo/vars/main.yml
roles_files_directory: "/some/path/roles/foo/files"
foo_static_gz: "https://internal.foo.tar.gz"
foo_gz: "foo.tar.gz"
The main thing I don't find really sound is the hard-coded path to the role's files directory. I would prefer to look up the path dynamically when running Ansible, but I haven't been able to find documentation on that. The issue can arise because different users may check out the roles to different root paths. Does anyone know how to dynamically determine the role path, or have some other pattern that solves the overall problem?
Edit:
I discovered there's actually a {{ playbook_dir }} variable that would return "/some/path", which might be dynamic enough in this case. It still isn't safe against the role name changing, but that's a far rarer occurrence and can be handled through version control.
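With that variable, the hard-coded prefix in the vars file could become relative to the playbook (a sketch; note that modern Ansible also exposes a role_path magic variable inside a running role, which would sidestep the role-name problem entirely):

# roles/foo/vars/main.yml
roles_files_directory: "{{ playbook_dir }}/roles/foo/files"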
What about passing values from the command line?
---
- hosts: '{{ hosts }}'
  remote_user: '{{ user }}'
  tasks:
    - ...
ansible-playbook release.yml --extra-vars "hosts=vipers user=starbuck"
http://docs.ansible.com/playbooks_variables.html#passing-variables-on-the-command-line
I just want to add another possible solution: you can try adding custom "facts".
Here is a link to the official documentation: http://docs.ansible.com/setup_module.html
And I found this article that might be useful: http://serverascode.com/2015/01/27/ansible-custom-facts.html
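A static local fact from those links might look like this (a sketch; the file name and keys are made up for illustration):

# /etc/ansible/facts.d/deploy.fact on the managed host (INI format)
[general]
datacenter=east

After fact gathering, it can be read in a playbook as {{ ansible_local.deploy.general.datacenter }}.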