{
  "price": 1.0,
  "number": 10,
  "gatekeeper": null,
  "solTreasuryAccount": "",
  "splTokenAccount": null,
  "splToken": null,
  "goLiveDate": "25 Dec 2021 00:00:00 GMT",
  "endSettings": null,
  "whitelistMintSettings": null,
  "hiddenSettings": null,
  "storage": "arweave-sol",
  "ipfsInfuraProjectId": null,
  "ipfsInfuraSecret": null,
  "nftStorageKey": null,
  "awsS3Bucket": null,
  "noRetainAuthority": false,
  "noMutable": false
}
Hi, you can have the config file anywhere on your PC. In simple terms, and following the command you showed, you need to have the config file in the same folder as your assets folder.
Another way to see it: you need to have config.json in the folder your terminal/cmd is currently in. For example, if you open a terminal at /home/your_user, then you need config.json in your user's home folder.
Another option is to pass the full path of your config.json to the -cp flag, for example: npx ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k ~/.config/solana/devnet.json -cp ~/my/full/path/to/config.json -c example ./assets, where the CLI will look for config.json at /home/your_user/my/full/path/to/config.json.
Hope this resolves your doubt; if you have any other questions, drop a comment on this answer.
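For illustration, a layout that satisfies the default behaviour (the project folder name here is hypothetical; what matters is that config.json sits next to the assets folder and that you run the command from that folder):
cd ~/my-candy-machine
ls
# assets/  config.json
npx ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k ~/.config/solana/devnet.json -cp ./config.json -c example ./assets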
I am preparing for the Azure AZ-500 certification.
I am stuck on a lab that consists of creating, with a tag, a local Docker image based on the loginServer.
The aim is to be able to push the image previously created locally to a private Azure registry.
PS C:\> az acr list
[
  {
    "adminUserEnabled": false,
    "creationDate": "2020-12-26T23:55:33.321068+00:00",
    "dataEndpointEnabled": false,
    "dataEndpointHostNames": [],
    "encryption": {
      "keyVaultProperties": null,
      "status": "disabled"
    },
    "id": "/subscriptions/5a89d093-ada0-4d4e-9a12-fe7b87e94eff/resourceGroups/AZContainersRG/providers/Microsoft.ContainerRegistry/registries/AZDemoACRE",
    "identity": null,
    "location": "westeurope",
    "loginServer": "azdemoacre.azurecr.io",
    "name": "AZDemoACRE",
    "networkRuleSet": null,
    "policies": {
      "quarantinePolicy": {
        "status": "disabled"
PS C:\> docker images
REPOSITORY                                     TAG         IMAGE ID       CREATED        SIZE
mcr.microsoft.com/azuredocs/azure-vote-front   v1          0e7801ad0561   12 hours ago   944MB
tiangolo/uwsgi-nginx-flask                     python3.6   997d8bb5f569   8 days ago     944MB
mcr.microsoft.com/oss/bitnami/redis            6.0.8       3a54a920bb6c   2 months ago   103MB
PS C:\> docker tag azure-vote-front azdemoacre.azurecr.io/azure-vote-front:v1
Error response from daemon: No such image: azure-vote-front:latest
First, verify that the image exists:
docker image ls
Note that in your docker images output the image is named mcr.microsoft.com/azuredocs/azure-vote-front with tag v1, so docker tag azure-vote-front fails: Docker looks for azure-vote-front:latest, which does not exist locally. If the image is present, use its image ID to create the new tag:
docker tag <image_id> TARGET_IMAGE[:TAG]
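With the values from your question (the image ID from your docker images output, the loginServer from az acr list), that would be roughly:
docker tag 0e7801ad0561 azdemoacre.azurecr.io/azure-vote-front:v1
az acr login --name AZDemoACRE
docker push azdemoacre.azurecr.io/azure-vote-front:v1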
For more details, see:
https://docs.docker.com/engine/reference/commandline/tag/
I am following the Substrate Developer Hub tutorial here: https://substrate.dev/docs/en/tutorials/start-a-private-network/customspec
I have successfully executed the command:
./target/release/node-template build-spec --disable-default-bootnode --chain local > customSpec.json
But when I try to parse this last file using this command:
./target/release/node-template build-spec --chain=customSpec.json --raw --disable-default-bootnode > customSpecRaw.json
I got the following error:
Error: Input("Error parsing spec file: expected value at line 1 column 1")
The contents of the customSpec.json are:
{
  "name": "Local Testnet",
  "id": "local_testnet",
  "chainType": "Local",
  "bootNodes": [],
  "telemetryEndpoints": null,
  "protocolId": null,
  "properties": null,
  "consensusEngine": null,
  "lightSyncState": null,
  "genesis": {
    "runtime": {
      "frameSystem":
      ...
      ...
      "palletSudo": {
        "key": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY"
      }
    }
  }
}
I am attempting the Hello World Hackathon by Polkadot.
Thank you in advance.
I was having this problem when building the chain spec using Windows PowerShell. I used the regular Windows console (cmd) instead and it worked fine.
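A likely explanation (my assumption, but it matches the "expected value at line 1 column 1" error): Windows PowerShell's > redirection writes UTF-16LE with a byte-order mark by default, so the file no longer starts with a plain { that the spec parser can read. If you want to stay in PowerShell, forcing a plain encoding should work:
# assumes Windows PowerShell 5.x, where > defaults to UTF-16; ascii is safe here because the spec is plain ASCII JSON
./target/release/node-template build-spec --disable-default-bootnode --chain local | Out-File -Encoding ascii customSpec.json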
I'm playing on localhost with a DC/OS installation. While everything works fine, I can't seem to run a Docker image located in a private repo. I'm using Python to communicate with Chronos:
@celery.task(name='add-job', soft_time_limit=5)
def add_job(job_id):
    job_document = mongo.jobs.find_one({
        '_id': job_id
    })
    if job_document:
        worker_document = mongo.workers.find_one({
            '_id': job_document['workerId']
        })
        if worker_document:
            job = {
                'async': True,
                'name': job_document['_id'],
                'owner': 'owner@gmail.com',
                'command': "python /code/run.py",
                "disabled": False,
                "shell": True,
                "cpus": worker_document['cpus'],
                "disk": worker_document['disk'],
                "mem": worker_document['memory'],
                'schedule': 'R1//PT300S',  # start now
                "epsilon": "PT60M",
                "container": {
                    "type": "DOCKER",
                    "forcePullImage": True,
                    "image": "quay.io/username/container",
                    "network": "HOST",
                    "volumes": [{
                        "containerPath": "/images/",
                        "hostPath": "/images/",
                        "mode": "RW"
                    }]
                },
                "uris": [
                    "file:///images/docker.tar.gz"
                ]
            }
            return chronos_client.add(job)
        else:
            return 'worker not found'
    else:
        return 'job not found'
The job runs fine with a public image (alpine:latest), but with the private image it fails without any error inside the DC/OS installation.
The job gets executed but fails immediately. The error log of the job inside Chronos looks like this:
I1212 12:39:11.141639 25058 fetcher.cpp:498] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/61d6d037-c9f5-482b-a441-11d85554461b-S1\/root","items":[{"action":"BYPASS_CACHE","uri":{"cache":false,"executable":false,"extract":false,"value":"file:\/\/\/images\/docker.tar.gz"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/61d6d037-c9f5-482b-a441-11d85554461b-S1\/docker\/links\/7029bbea-4c3d-439a-8720-411f6fe40eb9","user":"root"}
I1212 12:39:11.143575 25058 fetcher.cpp:409] Fetching URI 'file:///images/docker.tar.gz'
I1212 12:39:11.143587 25058 fetcher.cpp:250] Fetching directly into the sandbox directory
I1212 12:39:11.143602 25058 fetcher.cpp:187] Fetching URI 'file:///images/docker.tar.gz'
I1212 12:39:11.143612 25058 fetcher.cpp:167] Copying resource with command:cp '/images/docker.tar.gz' '/var/lib/mesos/slave/slaves/61d6d037-c9f5-482b-a441-11d85554461b-S1/docker/links/7029bbea-4c3d-439a-8720-411f6fe40eb9/docker.tar.gz'
I1212 12:39:11.146726 25058 fetcher.cpp:547] Fetched 'file:///images/docker.tar.gz' to '/var/lib/mesos/slave/slaves/61d6d037-c9f5-482b-a441-11d85554461b-S1/docker/links/7029bbea-4c3d-439a-8720-411f6fe40eb9/docker.tar.gz'
Stdout is empty. When executed directly inside Marathon as an application with the same settings, the authentication works and my image is downloaded and executed. Is this something that Chronos does not support? It should... I mean, it has options for Docker.
Update: digging deeper into the agent logs I found this:
Failed to run 'docker -H unix:///var/run/docker.sock pull quay.io/username/container': exited with status 1; stderr='Error: Status 403 trying to pull repository username/container: "{\"error\": \"Permission Denied\"}"
I tried the archive with its config.json file on the agent itself, and it can download the image when triggered from the command line. I just can't understand why Chronos is not using it properly. I can't find any other reference on how to supply my credentials other than this.
As it turns out... the uris param is deprecated in favor of fetch. I started from scratch with a Marathon config applied to Chronos and watched the logs carefully, where I saw this: {'message': 'Tried to add both uri (deprecated) and fetch parameters on aBPepwhG5z33e4teG', 'status': 'Bad Request'}. Then I changed my uris parameter into:
"fetch": [{
"uri": "/images/docker.tar.gz",
"extract": true,
"executable": false,
"cache": false
}]
...and it worked.
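In the Python job dictionary from the question, that change amounts to swapping the "uris" key for a "fetch" key (a sketch mirroring the values above):
"fetch": [{
    "uri": "/images/docker.tar.gz",
    "extract": True,    # Python booleans instead of JSON true/false
    "executable": False,
    "cache": False
}]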
Your post looked a little like this one, which turned out to be a problem with volumes.
I am deploying a package that contains a deploy.ps1 file. As you may know, Octopus runs this script on deployment by default; I want to prevent that from happening and run a custom script instead.
If you have a requirement like this, it's better to move the PowerShell that starts the services into a separate step and then tag the Tentacles you want that script to run on.
In your deployment step for the service, set the start mode to "Manual".
Then have a step that starts the service, and scope that step to the environments/servers where you want it to auto-start.
The code for the step template I use here is:
{
  "Id": "ActionTemplates-1",
  "Name": "Enable and start service",
  "Description": null,
  "ActionType": "Octopus.Script",
  "Version": 8,
  "Properties": {
    "Octopus.Action.Package.NuGetFeedId": "feeds-builtin",
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptSource": "Inline",
    "Octopus.Action.RunOnServer": "false",
    "Octopus.Action.Script.ScriptBody": "$serviceName = $OctopusParameters[\"ServiceName\"]\n\nwrite-host \"the service is: \" $serviceName\n\n& \"sc.exe\" config $serviceName start= delayed-auto\n& \"sc.exe\" start $serviceName\n\n"
  },
  "Parameters": [
    {
      "Name": "ServiceName",
      "Label": "Service Name",
      "HelpText": null,
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-10-10T10:21:21.980Z",
    "OctopusVersion": "3.3.2",
    "Type": "ActionTemplate"
  }
}
You may want to modify the step template, as it sets the service to "Automatic (Delayed Start)" and then starts it.
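For readability, the inline ScriptBody above, unescaped from the JSON, is:
$serviceName = $OctopusParameters["ServiceName"]

write-host "the service is: " $serviceName

& "sc.exe" config $serviceName start= delayed-auto
& "sc.exe" start $serviceName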
Are you able to move the script to a subfolder?
These scripts must be located in the root of your package
http://docs.octopusdeploy.com/display/OD/Custom+scripts
Alternatively, don't include your deploy.ps1 script in the deployment package at all if it should never be run as part of the deployment.
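If you build the package with nuget.exe and a .nuspec, one way to leave the script out (a sketch; the src pattern is illustrative and depends on your layout) is an exclude in the files section:
<files>
  <file src="**\*" exclude="deploy.ps1" />
</files>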
I recently installed the Sublime Text 2 plugin "SFTP" to work directly with my FTP server files.
It connects to my FTP server correctly, but when I upload/save a file (in order to save the edits to the original file) it doesn't overwrite the original file with the updates; instead, it creates a new file in my FTP folder: if I edit "index.php", on save Sublime Text creates a new file "index.php.1" and the original index.php remains unchanged.
I thought this could be a permission issue, but on my server the folder has 777 permissions for all users and the files have 777 permissions too.
Here is my SFTP plugin config file (sftp-config.json):
{
    // The tab key will cycle through the settings when first created
    // Visit http://wbond.net/sublime_packages/sftp/settings for help
    // sftp, ftp or ftps
    "type": "ftp",
    "save_before_upload": true,
    "upload_on_save": true,
    "sync_down_on_open": true,
    "sync_skip_deletes": false,
    "confirm_downloads": false,
    "confirm_sync": true,
    "confirm_overwrite_newer": false,
    "host": "mobility.unixdata.es",
    "user": "admin",
    "password": "mlcud",
    //"port": "22",
    "remote_path": "/Escriptori/htdocs/betamobility/testing",
    "ignore_regexes": [
        "\\.sublime-(project|workspace)", "sftp-config(-alt\\d?)?\\.json",
        "sftp-settings\\.json", "/venv/", "\\.svn", "\\.hg", "\\.git",
        "\\.bzr", "_darcs", "CVS", "\\.DS_Store", "Thumbs\\.db", "desktop\\.ini"
    ],
    "file_permissions": "777",
    "dir_permissions": "777",
    //"extra_list_connections": 0,
    "connect_timeout": 30,
    //"keepalive": 120,
    //"ftp_passive_mode": true,
    //"ssh_key_file": "~/.ssh/id_rsa",
    //"sftp_flags": ["-F", "/path/to/ssh_config"],
    //"preserve_modification_times": false,
    //"remote_time_offset_in_hours": 0,
    //"remote_encoding": "utf-8",
    //"remote_locale": "C",
}
Thanks for your help!
Most likely this is due to a setting of the FTP server. See https://superuser.com/questions/333022/ftp-upload-overwrite-does-not-overwite-but-creates-file-ext-instead for an answer to a similar question.
If you are transferring files over a public network, I recommend you use SFTP instead of FTP.
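If the server supports SSH, the change is small; based on the commented-out lines already present in your sftp-config.json (host and credentials unchanged):
// sftp, ftp or ftps
"type": "sftp",
"port": "22",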