How to connect two Candy Machines? - solana

I need to create a collection with a total of 100 NFTs, where the first 10 (IDs 0 to 9) will be minted to the same wallet from the start, and the remaining 90 can be minted through a web page.
I understand that the procedure would be as follows:
Create a CMv2 with a total of 10 assets.
Mint all of them (because the minting is random, it must be done before adding the remaining NFTs).
Create the second CMv2 with the remaining 90 assets. I must specify the address of the mint created in the first CMv2 (the "collection mint address") with the -m parameter.
However, I encounter several errors when doing this:
Case 1:
private.json -> "number": 10
public.json -> "number": 90
assets
├── private
│   ├── 0.json
│   ├── 0.png
│   ├── ...
│   ├── 9.json
│   └── 9.png
└── public
    ├── 10.json
    ├── 10.png
    ├── ...
    ├── 99.json
    └── 99.png
config
├── private.json
└── public.json
Case 2 (same file structure as above):
private.json -> "number": 10
public.json -> "number": 100
Case 3:
private.json -> "number": 10
public.json -> "number": 90
assets
├── private
│   ├── 0.json
│   ├── 0.png
│   ├── ...
│   ├── 9.json
│   └── 9.png
└── public
    ├── 0.json
    ├── 0.png
    ├── ...
    ├── 89.json
    └── 89.png
config
├── private.json
└── public.json
Case 4 (same file structure as above):
private.json -> "number": 10
public.json -> "number": 100
All four cases return the same error: "Error Number: 6003. Error Message: Index greater than length!"

I had the same issue not too long ago; take a look here: One Collection, Multiple Candy Machines
First of all, I recommend using the Sugar CLI to upload and deploy the Candy Machines; the experience is smoother. If you are on Windows you can use WSL2. I also recommend getting a custom RPC; take a look at QuickNode, which is easy to set up.
To upload and then deploy the public collection:
sugar upload assets/public -c config/public.json --cache .cache/public.json -k <WALLET KEYPAIR.json> -l debug -r <RPC ENDPOINT URL>
sugar deploy -c config/public.json --cache .cache/public.json -k <WALLET KEYPAIR.json> -l debug -r <RPC ENDPOINT URL>
Repeat the same steps as above for the private collection (just change private wherever there is public).
To set the same collection using SUGAR:
sugar collection set --cache .cache/public.json -k <WALLET KEYPAIR.json> --candy-machine <CANDY MACHINE ID> --collection-mint <COLLECTION ADDRESS> -r <RPC ENDPOINT URL>
Repeat for private assets.
I've managed to show the total number of NFTs in the UI by connecting to both the private machine and the public machine (you cannot, however, mint from the private machine through the UI). This behavior is not supported by default; you are going to have to do some coding for that.
And regarding the index problem: the separate configurations (private.json and public.json) should have done the trick, but when the metadata itself was the problem, I used a Python script to renumber the indexes properly. If that is something you are interested in, I can provide it.
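A renumbering script along those lines can be sketched in a few lines of Python. Everything here is an assumption from the question's layout: the assets/public directory, the i.json/i.png pairing, and a metadata name field ending in #&lt;index&gt;; adjust to your actual files.

```python
import json
from pathlib import Path

def renumber(asset_dir, offset):
    """Shift i.json / i.png asset pairs up by `offset`, updating the
    `name` field inside each metadata file to match the new index."""
    # Process highest index first so renames never collide with
    # a not-yet-moved lower index (e.g. 10 -> 20 while 20 still exists).
    metas = sorted(asset_dir.glob("*.json"),
                   key=lambda p: int(p.stem), reverse=True)
    for meta in metas:
        old = int(meta.stem)
        new = old + offset
        data = json.loads(meta.read_text())
        # Replace the trailing index in the NFT name, e.g. "Thing #0" -> "Thing #10"
        if "name" in data:
            data["name"] = data["name"].replace(f"#{old}", f"#{new}")
        (asset_dir / f"{new}.json").write_text(json.dumps(data, indent=2))
        meta.unlink()
        png = asset_dir / f"{old}.png"
        if png.exists():
            png.rename(asset_dir / f"{new}.png")

# Assumed layout from the question: shift public assets 0..89 to 10..99.
renumber(Path("assets/public"), offset=10)
```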

Once you have uploaded your assets and created a Candy Machine, you cannot add or remove assets from it. So, to answer the question of how to merge two Candy Machines: create a single collection/parent NFT and point both Candy Machines' assets to that collection/parent NFT. You can use the tool metaboss to do that.

Related

"The specified collections path is not part of the configured Ansible collections paths" but I'm installing into a relative directory

I'm installing the ansible.posix collection to use in my playbook like this:
ansible-galaxy collection install -r ansible/requirements.yml -p ansible/collections
However, I get this warning message that I want to get rid of:
[WARNING]: The specified collections path '/home/myuser/path/to/my/repo/ansible/collections' is not part of the
configured Ansible collections paths '/home/myuser/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be
picked up in an Ansible run.
My repo is laid out like this:
├── ansible
│   ├── playbook.yml
│   ├── files
│   │   └── ...
│   ├── tasks
│   │   └── ...
│   └── requirements.yml
├── ansible.cfg
...
ansible.cfg looks like this:
[defaults]
timeout = 60
callback_whitelist = profile_tasks
Here's the output of ansible --version:
ansible 2.9.17
config file = /home/myuser/path/to/my/repo/ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
In the docs for installing collections with ansible-galaxy, they say the following:
You can also keep a collection adjacent to the current playbook, under a collections/ansible_collections/ directory structure.
play.yml
├── collections/
│   └── ansible_collections/
│       └── my_namespace/
│           └── my_collection/<collection structure lives here>
And, like the documentation suggests, I can still use the collection just fine in my play. But this warning message is quite annoying. How do I get rid of it?
I created an ansible.cfg within the Ansible project I'm working on.
You could simply cp /etc/ansible/ansible.cfg ., but since the file only needs to look like:
[defaults]
collections_paths = ./collections/ansible_collections
it is easier to just create it.
Once it is there, Ansible will know about your custom configuration file.
In your project folder, run:
mkdir -p ./collections/ansible_collections
And then run the install.
If your requirements.yml contains a collection like:
collections:
- community.general
You'd have to install it as:
ansible-galaxy collection install -r requirements.yml -p ./collections/
And the output would be:
[borat@mypotatopc mycoolproject]$ ansible-galaxy collection install -r requirements.yml -p ./collections/
Process install dependency map
Starting collection install process
Installing 'community.general:3.1.0' to '/home/borat/projects/mycoolproject/collections/ansible_collections/community/general'
If you don't set up the modified ansible.cfg, the output would instead be:
[borat@mypotatopc mycoolproject]$ ansible-galaxy collection install -r requirements.yml -p ./
[WARNING]: The specified collections path '/home/borat/projects/mycoolproject' is not part of the configured Ansible collections paths
'/home/borat/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be picked up in an Ansible run.
Process install dependency map
Starting collection install process
Installing 'community.general:3.1.0' to '/home/borat/projects/mycoolproject/ansible_collections/community/general'
There are other methods too, but I like this one.
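One of those other methods, for anyone who prefers not to touch ansible.cfg: the same path can be supplied through the ANSIBLE_COLLECTIONS_PATHS environment variable (a sketch; note the variable name is plural in Ansible 2.9):

```shell
# Instead of editing ansible.cfg, export the collections path for the
# current shell session, then run the same ansible-galaxy install
# command as above.
export ANSIBLE_COLLECTIONS_PATHS="./collections/ansible_collections"
echo "$ANSIBLE_COLLECTIONS_PATHS"
```

This only lasts for the current session, which can be handy in CI where you may not want a checked-in config file.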

Puppet unable to find my .erb template but finds other files OK?

No one has been able to explain this inside my company so if you are able to solve this KUDOS to you!
Inside my puppet repo I have setup as follows:
environment/ops/modules/papertrail
├── files
│   ├── elasticsearch_log_files.yml
│   ├── log_files.yml
│   └── remote_syslog.conf
├── manifests
│   ├── elasticsearch.pp
│   └── init.pp
└── templates
    └── elasticsearch_log_files.yml.erb
My elasticsearch.pp file contains the following:
class papertrail::elasticsearch inherits papertrail {
  $source = "puppet:///modules/papertrail"

  file { "/etc/log_files.yml":
    mode   => 0644,
    owner  => root,
    group  => root,
    ensure => present,
    source => "$source/elasticsearch_log_files.yml",
  }
}
Now when I try to change the last line to:
"$source/elasticsearch_log_files.yml.erb",
or
"$source/templates/elasticsearch_log_files.yml",
Puppet errors out and says that it can't locate the file:
Error: /Stage[main]/Papertrail::Elasticsearch/File[/etc/log_files.yml]: Could not evaluate: Could not retrieve information from environment ops source(s) puppet:///modules/papertrail/elasticsearch_log_files.yml.erb
What is strange is that when I use the following stanza to just include the yml file instead of erb it works fine and the file gets populated on the target:
"$source/elasticsearch_log_files.yml",
How can I include my erb? I have dynamic variables that I need to assign to the configuration file log_files.yml and I am so far unable to do so =(
This is solved. I hadn't added the templates directory to my Git commit, so once it was added with git add . it worked.
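Worth noting for anyone hitting the same error: files under templates/ are never served through the puppet:/// file server at all; they are rendered with the template() function and passed via the content attribute. A sketch of the ERB variant of the resource above (assuming the variables the template uses are in scope in the class):

```puppet
class papertrail::elasticsearch inherits papertrail {
  file { "/etc/log_files.yml":
    mode    => 0644,
    owner   => root,
    group   => root,
    ensure  => present,
    # templates/<file> inside a module is referenced as <module>/<file>
    content => template("papertrail/elasticsearch_log_files.yml.erb"),
  }
}
```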

An addon's ToggleButton icon is not displayed when used in Tor browser

I have a question about a strange behaviour of an addon used in Firefox (40) and Tor browser 5.0.1 (Firefox 38.2.0). The goal would be to have a working addon for both environments.
This simple example was created with jpm init and slightly adapted to highlight the ToggleButton problem. While the ToggleButton and its icon are displayed nicely in Firefox via jpm run, Tor seems to have trouble finding the icon files and displays nothing. To import the addon into Tor I used jpm xpi and installed the addon via the add-on manager.
My current directory layout has the following structure:
├── README.md
├── data
│   ├── skull-16.png
│   ├── skull-32.png
│   ├── skull-48.png
│   └── skull-64.png
├── icon.png
├── index.js
├── package.json
└── test
    └── test-index.js
And this is the content of the index.js file:
const self = require('sdk/self');
const { ToggleButton } = require("sdk/ui/button/toggle");
// a dummy function, to show how tests work.
// to see how to test this function, look at test/test-index.js
function dummy(text, callback) {
callback(text);
}
let button = ToggleButton({
  id: "skull-link",
  label: "Skull Master",
  icon: {
    "16": "./skull-16.png",
    "32": "./skull-32.png",
    "48": "./skull-48.png",
    "64": "./skull-64.png"
  },
  onChange: function() {
    console.log("toggle");
  },
  badge: 0
});
exports.dummy = dummy;
Nothing special, I have just added the ToggleButton part.
I haven't found any clashes between the API in Firefox 38 and 40, so I'm clueless what might trigger this behavior. Thank you all for your help.
You can find the example in as zip-file here: sample addon
It was actually a simple one, but it took me very long to figure out. I found the answer in ndm13's answer to the linked post. If you have problems with addons working in Firefox but not Tor, append
"permissions": {"private-browsing": true}
to your package.json. Tor browser is always in private browsing mode.
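So the relevant part of package.json would end up looking something like this (the name is hypothetical and other jpm fields are elided):

```json
{
  "name": "skull-master",
  "permissions": {
    "private-browsing": true
  }
}
```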

How to setup nginx as reverse proxy for Ruby application (all running inside Docker)

I'm trying to set-up a simple reverse proxy using nginx and a Ruby application, but I also want to have it set-up inside of Docker containers.
I've reached the point where I can run the Ruby application from inside Docker and access the running service from the host machine (my Mac OS X using Boot2Docker). But I'm now stuck trying to implement the nginx part, as I've not used it before and it seems the majority of articles/tutorials/examples on the topic use Rails (rather than a simple Sinatra/Rack application) and also utilises sockets - which I've no need of as far as I'm aware.
I'm also using Docker Compose.
The complete directory structure looks like:
├── docker-compose.yml
├── front-end
│   ├── Dockerfile
│   ├── Gemfile
│   ├── app.rb
│   └── config.ru
└── proxy
    ├── Dockerfile
    ├── nginx.conf
    └── public
        ├── 500.html
        └── index.html
To see the contents of these files then refer to the following gist:
https://gist.github.com/Integralist/5cfd5c884b0f2c0c5d11
But when I run docker-compose up -d I get the following error:
Creating clientcertauth_frontend_1...
Pulling image ./front-end:latest...
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 31, in main
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 27, in dispatch
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 59, in perform_command
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 445, in up
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.project", line 184, in up
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.service", line 259, in recreate_containers
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/compose.service", line 242, in create_container
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/docker.client", line 824, in pull
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 67, in resolve_repository_name
File "/Users/ben/fig/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
docker.errors.DockerException: HTTPS endpoint unresponsive and insecure mode isn't enabled.
I'm not sure what's causing this error (a bit of googling returns https://github.com/docker/compose/issues/563 but it seems a separate issue as far as I can tell?).
I'm also not entirely sure the nginx.conf is set up correctly, even if I could get past this error. The config looks like it should do a reverse proxy properly (e.g. using frontend as the upstream app server, which should then resolve to the Docker IP address of the front-end container; as you'll see, I've linked the two containers together, so I'd expect front-end to be set as an alias inside /etc/hosts of the proxy container).
Does any one know what I might be missing?
Thanks.
In your gist, you are using image as a key instead of build, so docker-compose is trying to pull the image from the registry, which is failing.
Since you are building these images locally, the syntax for your docker-compose.yml file should look like this:
frontend:
  build: front-end/
  ports:
    - "8080:5000"
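As for the nginx.conf itself, a minimal reverse-proxy configuration for this layout could be sketched as follows. The frontend hostname assumes the Compose link described in the question, and port 5000 is an assumption based on the ports mapping; verify both against the actual gist:

```nginx
upstream frontend {
    # "frontend" resolves via the Docker link to the Ruby container
    server frontend:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```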

Simulate an SNMP agent using snmpsim

My goal is to simulate an SNMP agent using snmpsim.
In that respect I walked an SNMP device and captured the output in a file, mydevice.snmprec.
According to the instructions for snmpsim, I am supposed to start the agent by invoking snmpsimd.py --agent-udpv4-endpoint=127.0.0.1:1161. The problem is that this command does not point to mydevice.snmprec.
Any idea how to include mydevice.snmprec as part of the command to simulate the agent?
Usually you would put it in ~/.snmpsim/data but there is also a --data-dir switch.
You should see some output like this telling you the community name:
Configuring /home/someuser/.snmpsim/data/foo.snmprec controller
SNMPv1/2c community name: foo
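For context, an .snmprec file is plain text with one OID|tag|value triple per line, where the tag number follows the BER type codes (e.g. 2 for Integer, 4 for OctetString, 67 for TimeTicks). A minimal sketch of reading one; the sample records below are made up:

```python
def parse_snmprec(text):
    """Parse snmprec lines of the form OID|tag|value into tuples."""
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Values may themselves contain "|", so split at most twice.
        oid, tag, value = line.split("|", 2)
        records.append((oid, tag, value))
    return records

sample = """\
1.3.6.1.2.1.1.1.0|4|Example device description
1.3.6.1.2.1.1.3.0|67|233425120
"""
for oid, tag, value in parse_snmprec(sample):
    print(oid, tag, value)
```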
Just in case someone comes across the same issue, here is what I did to simulate the agent and the manager:
Installed net-snmp via port install net-snmp for the CLI manager. Also got a MIB browser for Mac.
Installed snmpsim to simulate the agent.
Captured the OIDs from an actual device:
sudo snmprec.py --agent-udpv4-endpoint=10.1.1.10 --start-oid=1.3.6.1.4.1 --stop-oid=1.3.6.1.4.30 --use-getbulk --output-file=snmpsim/data/mydevice.snmprec
Opened a terminal window and started the simulated agent:
$ pwd
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/snmpsim-0.2.4-py2.7.egg/snmpsim
$ ls
__init__.py confdir.pyc data grammar record
__init__.pyc daemon.py error.py log.py variation
confdir.py daemon.pyc error.pyc log.pyc
$ tree
.
├── __init__.py
├── __init__.pyc
├── confdir.py
├── confdir.pyc
├── daemon.py
├── daemon.pyc
├── data
│   ├── mydevice.snmprec
│   ├── foreignformats
│   │   ├── linux.snmpwalk
│   │   ├── winxp1.snmpwalk
│   │   └── winxp2.sapwalk
$ snmpsimd.py --data-dir=data --agent-udpv4-endpoint=127.0.0.1:1161
You should see something like this (the last lines of the output, after which the agent waits for queries):
...
SNMPv3 USM SecurityName: simulator
SNMPv3 USM authentication key: auctoritas, authentication protocol: MD5
SNMPv3 USM encryption (privacy) key: privatus, encryption protocol: DES
Listening at UDP/IPv4 endpoint 127.0.0.1:1161, transport ID 1.3.6.1.6.1.1.0
Opened another terminal window to run the manager:
$ snmpwalk -On -v2c -c mydevice 127.0.0.1:1161 .1.3.6.1.4.1
At this point you should see the agent reacting to the query and the manager displaying whatever the agent sends back.
Also, you can do the same thing from a MIB browser manager.
Note: this supports read-only operations!
I haven't gotten the write path (setting values on the simulated agent) working yet. I will post it if I get it working.
