Solana deploy "account data too small for instruction"

When I try to deploy a program with Anchor (devnet or mainnet, same error), I get the following error: Deploying program failed: Error processing Instruction 0: account data too small for instruction.
I have no clue where this comes from.
The .so file is around 331 KB, and the error apparently shows up when I try to use "mpl-token-metadata" to fetch NFT metadata.
Does anyone have an idea how to fix this?

When you deploy a program on Solana, the space allocated for the program account is twice the size of the original binary.
This leaves headroom for upgrades: you can redeploy new versions as long as they stay within that 2x limit.
The program you are deploying now exceeds that limit, so you will have to get a new program ID and deploy again:

Delete the target folder.
Run anchor build; this generates a new keypair in target/deploy.
Run anchor keys list; this prints the new program ID.
Copy that ID into the declare_id! macro at the top of your lib.rs.
Run anchor build again.
Run anchor deploy.
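Put together, the sequence looks like this (a sketch; the program ID shown is just a placeholder for whatever anchor keys list prints for you):

rm -rf target
anchor build
anchor keys list
# prints e.g.: my_program: Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS
# paste that ID into declare_id!("...") in lib.rs (and into Anchor.toml if it appears there)
anchor build
anchor deploy

If you would rather keep the existing program ID, newer versions of the Solana CLI also offer solana program extend <PROGRAM_ID> <ADDITIONAL_BYTES> to grow the allocated program account; check solana program extend --help to confirm your CLI version supports it.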

Related

Retry a transaction on Candy Machine

I am just finishing an upload of 8000 assets to Candy Machine (via the upload command). Everything seemed to be working well while it was creating the bundles and saving them to the cache, but once it started to write the indices I started seeing these two errors on and off:
1)
Waiting 5 seconds to check Bundlr balance.
Requesting a withdrawal of 0.638239951 SOL from Bundlr...
Successfully withdrew 0.638244951 SOL.
Writing all indices in 719 transactions...
Progress: [█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 2% | 18/719
Transaction simulation failed: Blockhash not found
2)
Failed writing indices 3682-3691: Transaction was not confirmed in 60.01 seconds. It is unknown if it succeeded or failed.
I have been searching the internet, and from what I can tell these errors are out of my control. Is this correct? Or what can I do to get these indices to write successfully? It's at 50% progress right now, but I assume the upload is not going to be successful when it finishes. If that's the case, do I need to run the candy machine upload command all over again, or is there a way to rerun just the transaction portion (where it started to fail)? I've seen some notes on retrying, but it wasn't completely clear to me.
The upload process took about 2.5 hours, so I would like to avoid repeating it if at all possible.
Help is very much appreciated.
Both errors are common, so you don't have to worry about them. You should use a custom RPC (pass --rpc-url to the upload command) and wait until the upload command ends. When it does, run the verify_upload command to see whether everything went well; if verify_upload shows an error, run upload again, and repeat until verify_upload shows the "ready to deploy" message.
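For reference, with the Candy Machine v2 CLI the two steps look roughly like this (a sketch only; the environment, keypair path, RPC URL, and assets directory are placeholders, and flag spellings can vary between CLI versions):

ts-node js/packages/cli/src/candy-machine-v2-cli.ts upload -e mainnet-beta -k ~/.config/solana/keypair.json -cp config.json --rpc-url https://your-custom-rpc.example.com ./assets
ts-node js/packages/cli/src/candy-machine-v2-cli.ts verify_upload -e mainnet-beta -k ~/.config/solana/keypair.json

verify_upload reads the local cache that upload wrote, so run it from the same directory.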

Can I delete .dmp and .phd files of Liberty Profile server?

In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. These files are between 130 MB and 1.3 GB each, while my deployed app is only 4 MB.
Can I delete these *.dmp and *.phd files?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryError or have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the application's memory from a specific point in time. Usually a dump is taken to capture the state of the application at the point where something goes wrong, so that you can examine it with analysis tools and work out what happened.
For example, by default the IBM JVM performs a dump when an OutOfMemoryError is thrown. This lets you look at the dump file and see what is using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something is asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options, which can be used to make the JVM create a dump in response to certain events. There is more information on that in the Knowledge Center.
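For example, a jvm.options along these lines keeps the OutOfMemoryError dumps but suppresses the large core files (a sketch for the IBM/OpenJ9 JVM; verify the agent names against your JVM's -Xdump:what output before relying on it):

# javacore and heap dump when an OutOfMemoryError is thrown
-Xdump:java:events=systhrow,filter=java/lang/OutOfMemoryError
-Xdump:heap:events=systhrow,filter=java/lang/OutOfMemoryError
# never write the large system (core.*.dmp) files
-Xdump:system:none

Liberty picks this file up from the server directory (e.g. usr/servers/defaultServer/jvm.options) on the next server restart.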

How to set up Cloud Code on the open-source Parse Server using Heroku

I have looked everywhere but cannot seem to figure out how to set up Cloud Code on the open-source Parse Server using Heroku.
I see this link, which tells me what to put in the index.js and main.js files: Implementing Cloud Code on Open Source Parse Server. However, I cannot find those files, nor can I find the "cloud" folder.
How do I find the cloud folder?
I created the Parse Server on MongoDB using the "Deploy to Heroku" link on this page: https://github.com/ParsePlatform/parse-server-example. After creating my application by filling out all the information, I ran heroku git:clone -a yourAppName to clone the application files. However, that gives me an empty repository, with the following message in my terminal:
Cloning into 'hyv3-moja'...
warning: You appear to have cloned an empty repository.
So how/where do I find the cloud folder with main.js? Did I miss a step in creating the Parse Server?
I also tried the Parse command line. However, the parse new command requires logging in to a Parse account, and since Parse is shutting down they are not accepting new accounts, and I did not have one before. So that seems like a dead end.
Can someone please explain how to set up Cloud Code? I want code that decrements a column in the database every second, so it behaves like a timer. Basically, I want my application to create objects in the database that last an amount of time chosen by the user; for this example, say 24 hours. From the moment an object is created, I want those 24 hours to count down in the database. That way, when a user clicks to view the object, I can read the time remaining from the database and show how much time is left in the life of the object.
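A sketch of the usual workflow, assuming the ParsePlatform/parse-server-example layout (the app name is a placeholder): the cloud folder ships in the example repository, not in the empty repository Heroku gives you, so clone the example, edit cloud/main.js, and push that to your Heroku app:

git clone https://github.com/ParsePlatform/parse-server-example.git
cd parse-server-example
# Cloud Code lives in cloud/main.js in this repository
heroku git:remote -a yourAppName
git push heroku master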

Using maketorrent in libtorrent examples

So I am trying to build an application that uses libtorrent. However, before I start I would like to make sure that I have compiled the lib correctly and that I have a functioning environment for testing.
I am currently running a VM with opentracker, and I am trying to connect using the example client in libtorrent.
First I start by creating a .torrent file using libtorrent (I am currently not sitting in front of a computer with libtorrent available so I might be remembering the exact commands a bit wrong):
maketorrent.exe dummy.txt -t "http://10.XXX.XXX.XXX/announce"
This gives me a .torrent file called a.torrent. Opening the file, everything looks OK: the bencoding is correct and the announce address is there.
Next I try to add it to the example client hoping it starts to seed:
client_test.exe a.torrent
Everything starts up OK, but no tracker is found. Then if I press t to show tracker information I see an error (maybe not the exact phrasing):
Alert: {null} unsupported URL protocol
OK, so maybe something is wrong with how I built libtorrent. So I got the Halite client instead, since that is also supposed to be built on libtorrent. But there I had the same problem.
So I had a look at the code and found where this error message is generated. The code checks whether I am supplying an address using the HTTP or HTTPS protocol, which I am. So could it be that I am not able to use a bare IP address, or am I doing something wrong?
I found the problem. It was not a problem with the IP address or the torrent itself; it was a caching problem.
The first time I added the torrent I used http:\XXX.XXX.XXX.XXX instead of http://XXX.XXX.XXX.XXX, which didn't work. However, whatever change I made to the torrent file after that did not stick: the client kept falling back to the original file until I removed the .resume folder.
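In other words (a sketch on Windows; the IP address is a placeholder for the real tracker address):

rem wrong: backslashes are not a valid URL scheme separator
maketorrent.exe dummy.txt -t "http:\\10.0.0.1\announce"
rem correct
maketorrent.exe dummy.txt -t "http://10.0.0.1/announce"
rem if an old, malformed announce URL keeps coming back, delete the cached resume data and re-add the torrent
rmdir /s /q .resume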

Batch file called by scheduled task throws error when scheduled, runs fine when double-clicked

I have a batch file that maps a networked drive. About a week or so ago the password expired, so the program calling the batch file started throwing errors.
I've updated the password in the batch file, and when I double-click the batch file, the drive maps fine. However, when the scheduled task kicks off, I get the following error:
Logon failure: unknown user name or bad password.
Anyone seen this before? I've tried recreating the scheduled task, but it doesn't seem to make any difference.
EDIT
I've updated the properties of the scheduled task; that isn't the problem. The problem seems to be the username and password in the batch file. The strange thing is that if I log on interactively and double-click the executable, everything works perfectly.
The last time the job ran, it threw a "semaphore timeout period has expired" error. I've never seen this particular error before, but it looks like the task had actually logged on and was trying to copy files when it happened.
EDIT
I've revised my code to make it as simple as possible: a batch file maps the drive, then code transfers the files. I still run into the same issue - it works fine when I double-click the batch file, but once I throw the Task Scheduler into the picture, it throws a "Bad username or invalid password" error.
Occasionally, when I run the file by double-clicking it, I get a "Could not find part of the path" error. This says to me that the drive mapping actually worked but something failed during the copy. (Most of the time, testing by double-clicking works fine.)
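For reference, the mapping script boils down to something like this (server, share, drive letter, and credentials replaced with placeholders):

@echo off
rem map the share with explicit credentials (all values are placeholders)
net use Z: \\fileserver\share P@ssw0rd /user:DOMAIN\svc_account /persistent:no
rem copy the outbound files, then drop the mapping
xcopy /y C:\outbound\*.* Z:\
net use Z: /delete /y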
The username and password associated with the task when you created it are no longer valid or have changed.
This generally occurs when you create the task and forget to select the option not to store the password. If your password is set to expire, you will hit this problem every time you have to reset it. Make sure the "Do not store password" option is checked in the task's properties.
It sounds like the username and/or password associated with the scheduled task is no longer correct. The batch file is probably fine; you just need to change the properties of the scheduled task.
I ran into something similar while testing a new PowerShell script we wrote to create a scheduled task that backs up to one or more network locations. I had to go through a number of iterations, and when I decreased from two network locations to one, the scheduled task stopped working: individual steps in the called script gave "Logon failure: unknown user name or bad password" errors, even though when I copied the arguments and ran them from the command line they worked.
After reading this question and Tim's comment, I tried deleting the scheduled task and re-creating it. After that it worked. I concur that the scheduled task had likely cached something.
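Deleting and re-creating the task from the command line makes that easy to test (a sketch; the task name, script path, schedule, and account are placeholders, and the /rp * switch makes schtasks prompt for the password instead of embedding it in the command):

schtasks /delete /tn "MapDriveAndCopy" /f
schtasks /create /tn "MapDriveAndCopy" /tr "C:\scripts\mapdrive.bat" /sc daily /st 02:00 /ru DOMAIN\svc_account /rp *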
From:
https://danblee.com/log-on-as-batch-job-rights-for-task-scheduler/
Go to the Start menu and choose Run.
Type secpol.msc and press Enter. The Local Security Policy manager opens.
Go to the Security Settings – Local Policies – User Rights Assignment node.
Double-click Log on as a batch job on the right side.
Click Add User or Group…
Select the user and click OK.
