Sentry Sourcemaps: bad json: invalid type: map, expected a string - sentry

I've been tasked with integrating Sentry into our front-end application and I'm having some trouble with it.
After building our source, I've set up the pipelines to upload our sourcemaps to Sentry with:
sentry-cli releases -o "myorg" -p "myproject" files "$VERSION" upload-sourcemaps ./
This seems to almost work:
DEBUG 2022-12-09 15:18:39.496486472 +00:00 sentry-cli version: 2.10.0, platform: "linux", architecture: "x86_64"
INFO 2022-12-09 15:18:39.496862713 +00:00 sentry-cli was invoked with the following command line: "sentry-cli" "releases" "-o" "uswitchcom" "-p" "my-mojo-ui" "files" "1.0.1638" "upload-sourcemaps" "./"
> Found 69899 release files
> Analyzing 69899 sources
> Rewriting sources
error: bad json: invalid type: map, expected a string at line 1 column 328
The dist folder structure is really flat; it's built with Parcel.
dist
├── index.0bf95743.js
├── index.0bf95743.js.map
├── index.a11de8bb.js
├── index.a11de8bb.js.map
├── index.fee677fd.css
├── index.fee677fd.css.map
├── index.html
(I'm not sure why there are two index bundles; the app works fine though.)
Any ideas what's wrong? The source maps work in the browser when checking with the dev tools, so I can't imagine they're broken.

Related

"The specified collections path is not part of the configured Ansible collections paths" but I'm installing into a relative directory

I'm installing the ansible.posix collection to use in my playbook like this:
ansible-galaxy collection install -r ansible/requirements.yml -p ansible/collections
However, I get this warning message that I want to get rid of:
[WARNING]: The specified collections path '/home/myuser/path/to/my/repo/ansible/collections' is not part of the
configured Ansible collections paths '/home/myuser/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be
picked up in an Ansible run.
My repo is laid out like this:
├── ansible
│   ├── playbook.yml
│   ├── files
│   │   ├── ...
│   ├── tasks
│   │   ├── ...
│   ├── requirements.yml
├── ansible.cfg
...
ansible.cfg looks like this:
[defaults]
timeout = 60
callback_whitelist = profile_tasks
Here's the output of ansible --version:
ansible 2.9.17
config file = /home/myuser/path/to/my/repo/ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
In the docs for installing collections with ansible-galaxy, they say the following:
You can also keep a collection adjacent to the current playbook, under a collections/ansible_collections/ directory structure.
play.yml
├── collections/
│   └── ansible_collections/
│       └── my_namespace/
│           └── my_collection/<collection structure lives here>
And, like the documentation suggests, I can still use the collection just fine in my play. But this warning message is quite annoying. How do I get rid of it?
I created an ansible.cfg within the Ansible project I'm working on.
You could simply cp /etc/ansible/ansible.cfg . into the project, but since the file only needs to look like:
[defaults]
collections_paths = ./collections/ansible_collections
it is just easier to create it from scratch.
Once it's there, Ansible will know about your custom configuration file.
In your project folder, run:
mkdir -p ./collections/ansible_collections
And then run the install.
If your requirements.yml contains a collection like:
collections:
- community.general
You'd have to install it as:
ansible-galaxy collection install -r requirements.yml -p ./collections/
And the output would be:
[borat@mypotatopc mycoolproject]$ ansible-galaxy collection install -r requirements.yml -p ./collections/
Process install dependency map
Starting collection install process
Installing 'community.general:3.1.0' to '/home/borat/projects/mycoolproject/collections/ansible_collections/community/general'
If you don't set up the modified ansible.cfg, the output would be:
[borat@mypotatopc mycoolproject]$ ansible-galaxy collection install -r requirements.yml -p ./
[WARNING]: The specified collections path '/home/borat/projects/mycoolproject' is not part of the configured Ansible collections paths
'/home/borat/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be picked up in an Ansible run.
Process install dependency map
Starting collection install process
Installing 'community.general:3.1.0' to '/home/borat/projects/mycoolproject/ansible_collections/community/general'
There are other methods too, but I like this one.
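Applied to the layout in the question, the repo-root ansible.cfg could simply gain a collections_paths entry. A minimal sketch, assuming ansible-galaxy is run from the repo root with -p ansible/collections as in the question's install command:
[defaults]
timeout = 60
callback_whitelist = profile_tasks
collections_paths = ./ansible/collections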

Can't deploy go lang app to AWS Elastic Beanstalk

Here is how my files are organized before I zip the file.
├── app
│   ├── main.go
│   ├── Procfile
│   ├── Buildfile
│   └── build.sh
Buildfile
make: ./build.sh
build.sh
#!/usr/bin/env bash
# Install dependencies.
go get ./...
# Build app
go build -o bin/application ./
Procfile
web: bin/application
The error I get
[Instance: i-03f3c230e7b575431] Command failed on instance. Return code: 1 Output: (TRUNCATED)... inflating: /var/app/staging/app/main.go Unable to launch application as the source bundle does not contain a Buildfile, Procfile or an executable. Unable to launch application as the source bundle does not contain a Buildfile, Procfile or an executable. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/01_configure_application.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
More error logs
Application update failed at 2018-10-02T01:33:44Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/01_configure_application.sh failed.
Executing: /usr/bin/unzip -o -d /var/app/staging /opt/elasticbeanstalk/deploy/appsource/source_bundle
Archive: /opt/elasticbeanstalk/deploy/appsource/source_bundle
creating: /var/app/staging/app/
inflating: /var/app/staging/app/Buildfile
inflating: /var/app/staging/app/build.sh
inflating: /var/app/staging/app/Procfile
inflating: /var/app/staging/app/main.go
Unable to launch application as the source bundle does not contain a Buildfile, Procfile or an executable.
Unable to launch application as the source bundle does not contain a Buildfile, Procfile or an executable.
Incorrect application version ".01" (deployment 29). Expected version ".01" (deployment 18).
My main.go file has third-party packages. I'm using
port := os.Getenv("PORT")
if port == "" {
port = "5000"
log.Println("[-] No PORT environment variable detected. Setting to ", port)
}
like the docs say in the example app. It compiles and runes locally no problem.
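For context, a minimal, self-contained sketch of a main.go along those lines; the handler and its response are placeholders, and the real app's third-party packages aren't shown:
package main

import (
    "log"
    "net/http"
    "os"
)

func main() {
    // Fall back to port 5000 when no PORT environment variable is provided.
    port := os.Getenv("PORT")
    if port == "" {
        port = "5000"
        log.Println("[-] No PORT environment variable detected. Setting to ", port)
    }

    // Placeholder handler; the real application logic goes here.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello from Elastic Beanstalk"))
    })

    log.Fatal(http.ListenAndServe(":"+port, nil))
}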
As you can see in your error message:
Unable to launch application as the source bundle does not contain a Buildfile, Procfile or an executable.
Your Procfile, Buildfile and build.sh should be in the root of the project, like this:
├── app
│   └── main.go
├── Procfile
├── Buildfile
└── build.sh
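In practice that means building the source bundle from the project root after moving Procfile, Buildfile and build.sh up out of app/. A sketch, with application.zip as a placeholder name:
zip -r application.zip Procfile Buildfile build.sh app
The key point is that unzipping the bundle must yield Buildfile and Procfile at the top level, not inside an app/ subdirectory.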

Puppet unable to find my .erb template but finds other files OK?

No one inside my company has been able to explain this, so if you can solve it, kudos to you!
Inside my Puppet repo I have the following setup:
environment/ops/modules/papertrail
├── files
│   ├── elasticsearch_log_files.yml
│   ├── log_files.yml
│   └── remote_syslog.conf
├── manifests
│   ├── elasticsearch.pp
│   └── init.pp
└── templates
    └── elasticsearch_log_files.yml.erb
My elasticsearch.pp file contains the following:
class papertrail::elasticsearch inherits papertrail {
  $source = "puppet:///modules/papertrail"

  file { "/etc/log_files.yml":
    mode   => 0644,
    owner  => root,
    group  => root,
    ensure => present,
    source => "$source/elasticsearch_log_files.yml",
  }
}
Now when I try to change the last line to:
"$source/elasticsearch_log_files.yml.erb",
or
"$source/templates/elasticsearch_log_files.yml",
Puppet errors out and says that it can't locate the file:
Error: /Stage[main]/Papertrail::Elasticsearch/File[/etc/log_files.yml]: Could not evaluate: Could not retrieve information from environment ops source(s) puppet:///modules/papertrail/elasticsearch_log_files.yml.erb
What is strange is that when I use the following stanza to include just the yml file instead of the erb, it works fine and the file gets populated on the target:
"$source/elasticsearch_log_files.yml",
How can I include my erb? I have dynamic variables that I need to assign in the configuration file log_files.yml, and so far I am unable to do so =(
This is solved. I hadn't added the templates directory to my git commit, so once I added it with git add . it worked.
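For reference, an ERB template is normally rendered with the content attribute and the template() function rather than source. A minimal sketch using the module and file names from the question (any variables referenced inside the template come from the class's scope):
file { "/etc/log_files.yml":
  ensure  => present,
  mode    => 0644,
  owner   => root,
  group   => root,
  content => template("papertrail/elasticsearch_log_files.yml.erb"),
}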

How to setup nginx as reverse proxy for Ruby application (all running inside Docker)

I'm trying to set up a simple reverse proxy using nginx and a Ruby application, but I also want to have it all running inside Docker containers.
I've reached the point where I can run the Ruby application from inside Docker and access the running service from the host machine (Mac OS X using Boot2Docker). But I'm now stuck trying to implement the nginx part, as I've not used it before and the majority of articles/tutorials/examples on the topic use Rails (rather than a simple Sinatra/Rack application) and also utilise sockets, which I have no need of as far as I'm aware.
I'm also using Docker Compose.
The complete directory structure looks like:
├── docker-compose.yml
├── front-end
│   ├── Dockerfile
│   ├── Gemfile
│   ├── app.rb
│   ├── config.ru
├── proxy
│   ├── Dockerfile
│   ├── nginx.conf
│   └── public
│       ├── 500.html
│       └── index.html
To see the contents of these files then refer to the following gist:
https://gist.github.com/Integralist/5cfd5c884b0f2c0c5d11
But when I run docker-compose up -d I get the following error:
Creating clientcertauth_frontend_1...
Pulling image ./front-end:latest...
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 31, in main
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 27, in dispatch
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 59, in perform_command
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 445, in up
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.project", line 184, in up
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.service", line 259, in recreate_containers
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.service", line 242, in create_container
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/docker.client", line 824, in pull
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 67, in resolve_repository_name
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
docker.errors.DockerException: HTTPS endpoint unresponsive and insecure mode isn't enabled.
I'm not sure what's causing this error (a bit of googling returns https://github.com/docker/compose/issues/563 but it seems a separate issue as far as I can tell?).
I'm also not entirely sure the nginx.conf is set up correctly, even once I get past this error. The nginx config looks like it should do the reverse proxying properly (e.g. using frontend as the upstream app server, which should resolve to the Docker IP address of the front-end container; as you'll see, I've linked the two containers together, so I'd expect the front-end app to be set as an alias inside /etc/hosts of the proxy container).
Does anyone know what I might be missing?
Thanks.
In your gist, you are using image as a key instead of build, so docker-compose is trying to pull the image from the registry, which is failing.
Since you are building these images locally, the syntax for your docker-compose.yml file should look like this:
frontend:
  build: front-end/
  ports:
    - "8080:5000"

Simulate an SNMP agent using snmpsim

My goal is to simulate an SNMP agent using snmpsim.
In that respect I walked an SNMP device and captured the output in a file, mydevice.snmprec.
According to the snmpsim instructions, I'm supposed to start the agent by invoking snmpsimd.py --agent-udpv4-endpoint=127.0.0.1:1161. The problem is that this command does not point to mydevice.snmprec.
Any idea how to include mydevice.snmprec as part of the command to simulate the agent?
Usually you would put it in ~/.snmpsim/data, but there is also a --data-dir switch.
You should see some output like this telling you the community name:
Configuring /home/someuser/.snmpsim/data/foo.snmprec controller
SNMPv1/2c community name: foo
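Concretely, either of these invocations should pick up the recording (a sketch; the paths are placeholders):
# Put the recording in the default data directory:
mkdir -p ~/.snmpsim/data
cp mydevice.snmprec ~/.snmpsim/data/
snmpsimd.py --agent-udpv4-endpoint=127.0.0.1:1161

# Or point snmpsimd.py at a custom directory:
snmpsimd.py --data-dir=/path/to/recordings --agent-udpv4-endpoint=127.0.0.1:1161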
Just in case someone comes across the same issue, here is what I did to simulate the agent and the manager:
Installed net-snmp via port install net-snmp for a CLI manager. Also got a MIB browser for the Mac.
Installed snmpsim to simulate the agent.
Captured the OIDs from an actual device:
sudo snmprec.py --agent-udpv4-endpoint=10.1.1.10 --start-oid=1.3.6.1.4.1 --stop-oid=1.3.6.1.4.30 --use-getbulk --output-file=snmpsim/data/mydevice.snmprec
Opened a terminal window and started the simulated agent:
$ pwd
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/snmpsim-0.2.4-py2.7.egg/snmpsim
$ ls
__init__.py confdir.pyc data grammar record
__init__.pyc daemon.py error.py log.py variation
confdir.py daemon.pyc error.pyc log.pyc
$ tree
.
├── __init__.py
├── __init__.pyc
├── confdir.py
├── confdir.pyc
├── daemon.py
├── daemon.pyc
├── data
│   ├── mydevice.snmprec
│   ├── foreignformats
│   │   ├── linux.snmpwalk
│   │   ├── winxp1.snmpwalk
│   │   └── winxp2.sapwalk
$ snmpsimd.py --data-dir=data --agent-udpv4-endpoint=127.0.0.1:1161
You should see something like this; the last lines show the agent waiting for queries:
……………
………………..
………….
SNMPv3 USM SecurityName: simulator
SNMPv3 USM authentication key: auctoritas, authentication protocol: MD5
SNMPv3 USM encryption (privacy) key: privatus, encryption protocol: DES
Listening at UDP/IPv4 endpoint 127.0.0.1:1161, transport ID 1.3.6.1.6.1.1.0
Opened another terminal window to run the manager:
$ snmpwalk -On -v2c -c mydevice 127.0.0.1:1161 .1.3.6.1.4.1
At this point you should see the agent reacting to the query and the manager displaying whatever the agent sends back.
Also, you can do the same thing from a MIB browser manager.
Note: this supports read-only operations!
I haven't got writing to the simulated agent working yet; I will post it if I can get it working.
