Brownie compile fails with "CompilerError: solc returned the following errors"

CompilerError: solc returned the following errors:
ParserError: Source "OpenZeppelin/openzeppelin-contracts#3.0.0/contracts/math/SafeMath" not found:
I am getting an error from brownie compile and I can't figure out why it won't import one of the OpenZeppelin contracts when all the others import fine. Here is my brownie-config.yaml:
project_structure:
    build: build
    contracts: contracts
    interfaces: interfaces
    reports: reports
    scripts: scripts
    tests: tests

remappings:
    - zeppelin=/usr/local/lib/open-zeppelin/contracts/
    - github.com/ethereum/dapp-bin/=/usr/local/lib/dapp-bin/

networks:
    default: development
    development:
        gas_limit: max
        gas_buffer: 1
        gas_price: 0
        max_fee: null
        priority_fee: null
        reverting_tx_gas_limit: max
        default_contract_owner: true
        cmd_settings: null
    live:
        gas_limit: auto
        gas_buffer: 1.1
        gas_price: auto
        max_fee: null
        priority_fee: null
        reverting_tx_gas_limit: false
        default_contract_owner: false

compiler:
    evm_version: null
    solc:
        version: null
        optimizer:
            enabled: true
            runs: 200
        remappings: null
    vyper:
        version: null
I tried compiling the contracts within the Brownie framework and expected to get a compiled contract ready for testing.
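For what it's worth, Brownie resolves GitHub packages declared under a dependencies key and maps import prefixes through compiler remappings, and a missing .sol extension in the import path can trigger exactly this kind of ParserError. A sketch of the relevant config sections (the @openzeppelin prefix and the 3.0.0 pin are assumptions read off the error message, not a verified fix):

```yaml
# Hypothetical brownie-config.yaml fragment (sketch, not the poster's verified fix)
dependencies:
    # Brownie package ids use the form org/repo@version
    - OpenZeppelin/openzeppelin-contracts@3.0.0

compiler:
    solc:
        remappings:
            # lets `import "@openzeppelin/contracts/math/SafeMath.sol";` resolve
            - "@openzeppelin=OpenZeppelin/openzeppelin-contracts@3.0.0"
```

Note that the failing source path ends in SafeMath with no .sol extension, which is worth checking in the importing contract as well.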

Related

Kong error using deck sync - service that already exists

I'm using deck in a CI pipeline to sync configurations to Kong from a declarative yaml file, like this:
_format_version: "1.1"
_info:
  defaults: {}
  select_tags:
    - ms-data-export
services:
  - connect_timeout: 60000
    enabled: true
    host: <the-host-name>
    name: data-export-api
    path: /api/download
    port: <the-port>
    protocol: http
    read_timeout: 60000
    retries: 5
    routes:
      - name: data-export
        https_redirect_status_code: 426
        path_handling: v0
        preserve_host: false
        regex_priority: 0
        request_buffering: true
        response_buffering: true
        strip_path: true
        paths:
          - /api/download
        protocols:
          - http
    plugins:
      - config:
          bearer_only: "yes"
          client_id: kong
          ...
...
The error occurs while running deck sync --kong-addr <kong-gateway> -s <the-above-yaml-file>, even when there are no actual changes to sync from the file (because the particular service already exists). It says:
creating service data-export-api
Summary:
Created: 0
Updated: 0
Deleted: 0
Error: 1 errors occurred:
while processing event: {Create} service data-export-api failed: HTTP status 409 (message: "UNIQUE violation detected on '{name=\"data-export-api\"}'")
data-export-api is the name of the service that already exists in Kong and that deck tries to create.
Is there a way to avoid this error?
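One likely cause (an assumption, not confirmed by the output above): with select_tags set, deck only manages entities carrying the ms-data-export tag, so an existing service that lacks that tag is invisible to deck's read pass, and deck tries to create it again and hits the UNIQUE violation. A hedged sketch of how to reconcile the state before the next sync, using the same placeholder gateway address as above:

```shell
# Sketch, not a verified fix: export the current Kong state for this tag
# so the declarative file and the gateway agree before the next sync.
deck dump --kong-addr <kong-gateway> --select-tag ms-data-export -o kong.yaml

# Then diff before syncing to confirm no spurious creates remain:
deck diff --kong-addr <kong-gateway> -s kong.yaml
```

If the existing service simply lacks the tag, adding the ms-data-export tag to it (e.g. through the Admin API) should also let deck see it as already present.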

Syntax troubles with dbt, Incorrect type. Expected "Seed configs"

I am learning dbt via the jaffle_shop project. I'm trying to set up the raw data in a raw database and a jaffle_shop schema, in order to call the relevant sources later on.
I am having a bit of trouble with my seeds config syntax within my dbt_project.yml; what am I doing wrong?
seeds:
  - schema: jaffle_shop
  - database: raw
  - name: customers
    config:
      enabled: true
      column_types:
        id: integer
        first_name: varchar(50)
        last_name: varchar(50)
    columns:
      - name: id
        tests:
          - not_null
          - unique
      - name: first_name
      - name: last_name
  - name: orders
    config:
      enabled: true
      column_types:
        id: integer
        user_id: integer
        order_date: date
        status: varchar(100)
    columns:
      - name: id
        tests:
          - not_null
          - unique
      - name: user_id
      - name: order_date
      - name: status
  - name: payments
    config:
      enabled: true
      column_types:
        id: integer
        order_id: integer
        payment_method: varchar(100)
        amount: integer
    columns:
      - name: id
        tests:
          - not_null
          - unique
      - name: user_id
      - name: order_date
      - name: status
I am working in VSCode using the dbt Power User extension; perhaps it doesn't recognize the underlined config parameters?
I tried writing it this way:
seeds:
  +schema: jaffle_shop
which gives no errors until I add +database.
I searched the docs, but I don't see the discrepancy with what I wrote...
You're mixing together two different ways to provide configuration to dbt. See the docs for configuring seeds, and the more general page on the difference between configs and properties.
You can provide general config that applies to all seeds (or a subset of seeds in a specific directory) in your dbt_project.yml file. These configs use the + syntax and only include a subset of all configuration options. This works because it's a project-level config:
seeds:
  +schema: jaffle_shop
As of v1.3, these are all of the possible seed configs:
seeds:
  <resource-path>:
    +quote_columns: true | false
    +column_types: {column_name: datatype}
    +enabled: true | false
    +tags: <string> | [<string>]
    +pre-hook: <sql-statement> | [<sql-statement>]
    +post-hook: <sql-statement> | [<sql-statement>]
    +database: <string>
    +schema: <string>
    +alias: <string>
    +persist_docs: <dict>
    +full_refresh: <boolean>
    +meta: {<dictionary>}
    +grants: {<dictionary>}
You can provide properties or config for an individual seed using what is now called a "property" file (formerly a schema.yml file). These property files can be named anything, as long as they are .yml files. It's usually a good idea to put a single resource's properties in a single file, put that file in the same directory as the resource definition, and name it my_resource.yml. Or you can group them together, with something like seeds.yml. It's these property files that use the syntax that you are trying to place in your dbt_project.yml file:
version: 2
seeds:
  - name: customers
    config:
      enabled: true
      column_types:
        id: integer
        first_name: varchar(50)
        last_name: varchar(50)
    columns:
      - name: id
        tests:
          - not_null
          - unique
      - name: first_name
      - name: last_name
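Putting the two halves together for this project, the database/schema settings the question is after would live in dbt_project.yml as project-level + configs. A sketch, where using jaffle_shop as the resource path is an assumption based on the project name:

```yaml
# dbt_project.yml (sketch)
seeds:
  jaffle_shop:
    +database: raw
    +schema: jaffle_shop
```

Per-seed column_types, column names, and tests would then stay in a property file like the one above.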

GNU Radio ZMQ Blocks REP - REQ

I am trying to connect GNU Radio to a python script using the GR ZMQ REP / REQ blocks. GR is running on a Raspberry Pi 4 on router address 192.168.1.25. The python script is on a separate computer, from which I can successfully ping 192.168.1.25. I am setting up the REQ-REP pairs on separate ports, 55555 and 55556.
Python script:
import pmt
import zmq

# create a REQ socket
req_address = 'tcp://192.168.1.25:55555'
req_context = zmq.Context()
req_sock = req_context.socket(zmq.REQ)
rc = req_sock.connect(req_address)

# create a REP socket
rep_address = 'tcp://192.168.1.25:55556'
rep_context = zmq.Context()
rep_sock = rep_context.socket(zmq.REP)
rc = rep_sock.connect(rep_address)

while True:
    data = req_sock.recv()
    print(data)
    rep_sock.send(b'1')
Running this code leads to the following error:
ZMQError: Operation cannot be accomplished in current state
The error is flagged at this line:
data = req_sock.recv()
Can you comment on the cause of the error? I know there is a strict REQ-REP, REQ-REP.. relationship, but I cannot find my error.
Your current code has two problems:
You call req_sock.recv(), but then you call rep_sock.send(): that's not how a REQ/REP pair works. You only need to create one socket (the REQ socket); it connects to a remote REP socket.
When you create a REQ socket, you need to send a REQuest before you receive a REPly.
Additionally, you should only create a single ZMQ context, even if you have multiple sockets.
A functional version of your code might look like this:
import zmq

# create a REQ socket
ctx = zmq.Context()
req_sock = ctx.socket(zmq.REQ)

# connect to a remote REP sink
rep_address = 'tcp://192.168.1.25:55555'
rc = req_sock.connect(rep_address)

while True:
    req_sock.send(b'1')
    data = req_sock.recv()
    print(data)
I tested the above code against the following GNU Radio config:
options:
  parameters:
    author: ''
    catch_exceptions: 'True'
    category: '[GRC Hier Blocks]'
    cmake_opt: ''
    comment: ''
    copyright: ''
    description: ''
    gen_cmake: 'On'
    gen_linking: dynamic
    generate_options: qt_gui
    hier_block_src_path: '.:'
    id: example
    max_nouts: '0'
    output_language: python
    placement: (0,0)
    qt_qss_theme: ''
    realtime_scheduling: ''
    run: 'True'
    run_command: '{python} -u (unknown)'
    run_options: prompt
    sizing_mode: fixed
    thread_safe_setters: ''
    title: Example
    window_size: (1000,1000)
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [8, 8]
    rotation: 0
    state: enabled

blocks:
- name: samp_rate
  id: variable
  parameters:
    comment: ''
    value: '32000'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 12]
    rotation: 0
    state: enabled
- name: analog_sig_source_x_0
  id: analog_sig_source_x
  parameters:
    affinity: ''
    alias: ''
    amp: '1'
    comment: ''
    freq: '1000'
    maxoutbuf: '0'
    minoutbuf: '0'
    offset: '0'
    phase: '0'
    samp_rate: samp_rate
    type: complex
    waveform: analog.GR_COS_WAVE
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 292.0]
    rotation: 0
    state: true
- name: blocks_throttle_0
  id: blocks_throttle
  parameters:
    affinity: ''
    alias: ''
    comment: ''
    ignoretag: 'True'
    maxoutbuf: '0'
    minoutbuf: '0'
    samples_per_second: samp_rate
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [344, 140.0]
    rotation: 0
    state: true
- name: zeromq_rep_sink_0
  id: zeromq_rep_sink
  parameters:
    address: tcp://0.0.0.0:55555
    affinity: ''
    alias: ''
    comment: ''
    hwm: '-1'
    pass_tags: 'False'
    timeout: '100'
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [504, 216.0]
    rotation: 0
    state: true

connections:
- [analog_sig_source_x_0, '0', blocks_throttle_0, '0']
- [blocks_throttle_0, '0', zeromq_rep_sink_0, '0']

metadata:
  file_format: 1
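As a minimal illustration of the strict send/receive alternation, independent of GNU Radio: the sketch below uses an in-process transport so it runs anywhere pyzmq is installed (the socket names are illustrative).

```python
import zmq

ctx = zmq.Context.instance()

# REP side: binds and services one request at a time
rep = ctx.socket(zmq.REP)
rep.bind('inproc://demo')

# REQ side: must send a request before it may receive a reply
req = ctx.socket(zmq.REQ)
req.connect('inproc://demo')

req.send(b'ping')       # REQ sends first
request = rep.recv()    # REP receives the request
rep.send(b'pong')       # REP replies
reply = req.recv()      # only now may REQ call recv()

print(request, reply)
```

Calling recv() on a REQ socket before send() (as in the question) violates this state machine, which is what raises "Operation cannot be accomplished in current state".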

SQL script is not executed during startup

I want to run an SQL script every time I start my Spring Boot application. I added this Liquibase configuration:
application.yml
spring:
  datasource:
    platform: org.hibernate.dialect.PostgreSQL95Dialect
    url: jdbc:postgresql://10.10.10.10:5432/test
    driverClassName: org.postgresql.Driver
    username: root
    password: test
  liquibase:
    changeLog: 'classpath:db/changelog/db.changelog-master.yaml'
    dropFirst: false
  jpa:
    hibernate:
      ddl-auto: update
    show-sql: true
    database: postgresql
db.changelog-master.yaml
databaseChangeLog:
  - include:
      file: db/changelog/changes/ch_0001/changelog.yaml
changelog.yaml
databaseChangeLog:
  - include:
      file: db/changelog/changes/ch_0001/data/data.yaml
data.yaml
databaseChangeLog:
  - changeSet:
      id: 0001
      author: test
      dbms: postgres
      runAlways: true # WARNING - remove this before prod - it will run every time with clean data
      changes:
        - sqlFile:
            - relativeToChangelogFile: true
            - path: data.sql
data.sql
INSERT into tasks SELECT generate_series(1,355) AS id,
left (md5(random()::text), 10) AS business_name,
(select NOW() + (random() * (NOW()+'90 days' - NOW())) + '30 days') AS created_at,
left (md5(random()::text), 10) AS meta_title,
left (md5(random()::text), 10) AS status,
left (md5(random()::text), 10) AS title,
left (md5(random()::text), 10) AS task_type,
(select NOW() + (random() * (NOW()+'90 days' - NOW())) + '30 days') AS updated_at;
The database table should be populated with test data, but it isn't. I don't see the data.sql file being executed in the logs.
Do you know what the issue could be?
It seems you did not add the full configuration.
Try adding this:
spring:
  liquibase:
    change-log: classpath:db/changelog/db.changelog-master.yaml
    url: {same as spring.datasource.url}
    user: {same as spring.datasource.username}
    password: {same as spring.datasource.password}
    enabled: true
Update
For a Postgres DB the dbms value is postgresql, not postgres.
Updated data.yaml
databaseChangeLog:
  - changeSet:
      id: 0001
      author: test
      dbms: postgresql
      runAlways: true # WARNING - remove this before prod - it will run every time with clean data
      changes:
        - sqlFile:
            - relativeToChangelogFile: true
            - path: data.sql

Bukkit yaml checker

Here's my code; it says there's something wrong with one of the mapping values when I put it in the yaml checker. (Note: I took the addresses out as they are very confidential; it shouldn't affect anything.)
groups:
  md_5:
  - admin
disabled_commands:
- disabledcommandhere
player_limit: -1
stats: 34cce1fc-17ab-4156-bb9a-a1c06151137d
permissions:
  default:
  - bungeecord.command.server
  - bungeecord.command.list
  admin:
  - bungeecord.command.alert
  - bungeecord.command.end
  - bungeecord.command.ip
  - bungeecord.command.reload
listeners:
- max_players: -1
  fallback_server: hub
  host: 0.0.0.0:25577
  bind_local_address: true
  ping_passthrough: false
  tab_list: GLOBAL_PING
  default_server: hub
  forced_hosts:
    pvp.md-5.net: pvp
  tab_size: 60
  force_default_server: false
  motd: '&1Another Bungee server'
  query_enabled: false
  query_port: 25577
timeout: 30000
connection_throttle: 4000
servers:
  Hub:
    address: 198.50.128.131:25565
    restricted: false
    motd: '&1&l>&d&l>&r&b&lWelcome to &6&l&NFooseNetwork&1&l<&d&L<'
    ip_forward: false
    online_mode: true
      Skyblock:
        address: 198.50.128.133:25565
        restricted: false
        motd: ''
        ip_forward: false
        online_mode: true
          Factions:
            address: 198.50.128.143:25565
            motd: ''
            ip_forward: false
            online_mode: true
The problem is here:
    online_mode: true
      Skyblock:
        address: 198.50.128.133:25565
        restricted: false
        motd: ''
The mapping that begins with the Skyblock key is indented under the online_mode key, which would make it the value of that key, but that key already has the value true.
A few lines later you have a second online_mode key (duplicate keys are not allowed, although not all parsers are strict about this), and you repeat the same error as above with Factions.
I'm not certain, but I think what you want is something like this:
servers:
  Hub:
    address: 198.50.128.131:25565
    restricted: false
    motd: '&1&l>&d&l>&r&b&lWelcome to &6&l&NFooseNetwork&1&l<&d&L<'
    ip_forward: false
    online_mode: true
  Skyblock:
    address: 198.50.128.133:25565
    restricted: false
    motd: ''
    ip_forward: false
    online_mode: true
  Factions:
    address: 198.50.128.143:25565
    motd: ''
    ip_forward: false
    online_mode: true
Here the value of the servers key is a mapping with three keys (Hub, Skyblock and Factions), each of whose values is in turn a mapping.
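A quick local way to reproduce what the online checker reports is to feed a reduced sample to a YAML parser. A sketch using PyYAML (a third-party library, assumed installed; the snippet is an illustration, not the poster's file):

```python
import yaml  # PyYAML (third-party)

# Reduced reproduction of the mis-indented region: Skyblock is nested
# under the online_mode key, which already holds the scalar value true.
broken = """\
servers:
  Hub:
    online_mode: true
      Skyblock:
        restricted: false
"""

try:
    yaml.safe_load(broken)
    error = None
except yaml.YAMLError as exc:
    error = exc

print(error)  # parser error pointing at the over-indented Skyblock line
```

Parsing the corrected version (Skyblock dedented to the same level as Hub) succeeds without error.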
