Is it possible to create a snapshot by code and save it, using Pine Script?

I was wondering whether it is possible to create and save a snapshot from Pine Script, and how long the links that are created last.

Yes, this is possible.
See the TradingView Help Center article "How do I take a snapshot and share it afterwards?".
It literally says "Feel free to share a link to your snapshot at any time as it doesn't expire."

Can a script be too new for a Google Sheet?

I work on a Google Sheet that several departments view and/or add data to all day long, every day. I have been writing scripts to make my department's life a lot easier. I created an exact duplicate of the sheet so I could make sure everything works before deploying new scripts.
I have one that sets up an order, sends an email and puts it on the calendar all in one click. It works great.
In the email we need to send a link to a job folder. So we have a script to find that folder and get the link to it.
var folders = DriveApp.getFoldersByName("12345 - Help me"); // iterator over folders with this exact name
var folder = folders.next(); // take the first match
var link = folder.getUrl();
In my testing grounds this works exactly how it should. When I put it into the actual sheet that we work in I get an error
"Error Exception: We're sorry, a server error occurred. Please wait a bit and try again."
I have been trying to figure it out for 4 days so far and am getting nowhere.
I had the "owner" of the sheet transfer ownership to me in case that was the problem.
I moved it to a shared drive.
Made a copy of the whole spreadsheet to test it; it worked in the copy just fine.
Switching to a new spreadsheet would be a lot of work that would have to happen after hours, when no one should be using it. I am hoping there is a way to refresh the spreadsheet so that we need to reapprove the scripts (or something similar). The spreadsheet in question was created in 2018. I am wondering if it's just too old for the script; not that that makes any sense, but I can't think of anything else.
Thoughts?
From the question
The spreadsheet in question was created in 2018. I am wondering if it's just too old for the script; not that that makes any sense, but I can't think of anything else.
Nowadays Google Apps Script supports two runtimes, the old one (Rhino) and the new one (V8), and there are posts reporting that switching from one runtime to the other fixed an issue. Given this, the first thing to check is which runtime is used by each script, both in the "testing grounds" and in production, since running different runtimes in the two environments is a common source of confusion.
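For reference, the runtime is declared per project in the appsscript.json manifest; a minimal sketch (the timeZone value is just an example):

```json
{
  "timeZone": "America/New_York",
  "runtimeVersion": "V8",
  "exceptionLogging": "STACKDRIVER"
}
```

Comparing this field in the test project and the production project is a quick way to rule out a runtime mismatch.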
Another thing to try is to create a standard Google Cloud Platform (GCP) project to replace the default GCP project, and enable the Google Drive API on it.
Resources
https://developers.google.com/apps-script/guides/v8-runtime
https://developers.google.com/apps-script/guides/support/troubleshooting

Distributing data on cluster (using torrents?)

I hope this is a good place to ask this, otherwise please redirect me to the correct forum.
I have a large amount of data (~400 GB) that I need to distribute to all nodes in a cluster (~100 nodes). Any help with how to do this would be appreciated; what I've tried so far follows.
I was thinking of doing this using torrents but I'm running into a bunch of issues. These are the steps I tried:
I downloaded ctorrent to create the torrent, seed it, and download it, but ran into a problem because I didn't have a tracker.
I found that qbittorrent-nox has an embedded tracker, so I downloaded it onto one of my nodes and set the tracker up.
I then created the torrent using that tracker and copied it to my nodes.
When I run the torrent with ctorrent on the node with the actual data on it to seed the data I get:
Seed for others 72 hours
- 0/0/1 [1/1/1] 0MB,0MB | 0,0K/s | 0,0K E:0,1 Connecting
When I run on one of the nodes to download the data I get:
- 0/0/1 [0/1/0] 0MB,0MB | 0,0K/s | 0,0K E:0,1
So it seems they aren't connecting to the tracker properly, but I don't know why.
I am probably doing something very wrong, but I can't figure it out.
If anyone can help me with what I am doing, or has any way of distributing the data efficiently, even not with torrents, I would be very happy to hear.
Thanks in advance for any help available.
But the node that's supposed to be seeding thinks it has 0% of the file, and so it doesn't seed.
If you create a metadata file (.torrent) with tool A and then want to seed it with tool B, you need to point B to both the metadata and the data (the content files) itself.
I know it is a different issue now and might require a different topic, but I'm hoping you might have ideas.
You should create a new question which will have more room for you to provide details.
So this is embarrassing: I might have had it working for a while now, but I did change my implementation since I started. I just re-checked, and the files I was transferring were corrupted in one of my earlier tries, and I have been using them ever since.
So to sum up this is what worked for me if anybody else ends up needing the same setup:
I create torrents using "transmission-create /path/to/file/or/directory/to/be/torrented -o /path/to/output/directory/output_file_name.torrent" (this is because qbittorrent-nox doesn't provide a torrent-creation tool that I could find).
I run the torrent on the computer with the actual files so it will seed using "qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent"
I copy the .torrent file to all nodes and run "qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent" to start downloading
qbittorrent settings I needed to configure:
In "Downloads", change "Save files to location" to the location of the data on the node that is going to be seeding; otherwise that node won't know it has the files specified in the torrent and won't seed them.
To avoid issues with torrents sometimes starting as queued and requiring a "force resume" (this doesn't appear to have fixed the problem 100%, though):
In "Speed" tab uncheck "Enable bandwidth management (uTP)"
uncheck "Apply rate limit to uTP connections"
In "BitTorrent" tab uncheck "Torrent Queueing"
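The working setup above can be sketched as two shell functions (the paths and the tracker URL are placeholders, not the actual values used):

```shell
# Create the .torrent with transmission-create, since qbittorrent-nox
# ships no torrent-creation tool. Arguments: data path, output .torrent.
make_torrent() {
  transmission-create "$1" -o "$2" -t "http://tracker-node:9000/announce"
}

# Run the same command on the seeder and on every downloader. On the
# seeder, the configured save path must already contain the data,
# otherwise the client reports 0% and never seeds.
run_torrent() {
  qbittorrent-nox "$1"
}
```

With the embedded tracker running on one node, only the small .torrent file needs to be copied to all ~100 nodes before starting the clients.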
Thanks for all the help, and I'm sorry I hassled people for no reason at some point.

Local file link to shared dropbox files

Since this is my first time posting a question on stackexchange, please excuse me if I've not included anything. Suggestions for a better post are very welcome!
Background
I'm looking for a way to create a file:// link in e-mails for a specific purpose. In my company we all use MacBooks with Outlook as our e-mail client. As soon as a specific document is updated, I would like to be able to e-mail a colleague saying: "here is the link to the file". My personal link would be file:///Users/<MyUserName>/Dropbox/Filepath.ext; however, this does not resolve correctly on my colleague's computer. I have made it work with a manual username change, but I'm hoping there is a way to automatically fill in that person's username.
My Question:
How can I make the link in such a way that it will always refer to that user's specific user folder?
Resources explored
I've tried working with file://~/, but that always gives a "can't find the document" error. I've tried googling it, but Dropbox and other services only point towards URL links or to their website. Stack Exchange hasn't provided me with an answer so far (Internal links / ":file//" links is unanswered). Searching for "computer-independent file links" hasn't given me any solace either.
Any help would be greatly appreciated!
I'm not sure if this is what you want. You could check the Dropbox API and read a bit about it, but an easier way might be IFTTT, a free tool that runs triggers. Basically, you need to create a folder in Dropbox for each user and then use this tool to set up triggers for each user. You can send an e-mail that includes the new Dropbox link, and you can also program IFTTT to send a file://Users//Dropbox/USER_DROPBOX_FOLDER/{{FILENAME}} link whenever a file is placed in that user's folder.
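Since file:// URLs have no built-in notion of "the current user's home folder", any automatic approach has to expand the username on each recipient's machine. A minimal sketch (the Dropbox-relative path is illustrative): each user generates their own link from $HOME:

```shell
# Build a per-user file:// link by expanding this machine's $HOME;
# the path under Dropbox is a made-up example.
link="file://$HOME/Dropbox/Projects/report.ext"
echo "$link"
```

Since $HOME is an absolute path, the result has the usual file:/// triple-slash form and points into that user's own Dropbox folder.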

Is there any way to semi-automatically commit?

Please bear with me here, because I'm a beginner when it comes to version control systems. I've decided to start with the very simple GitHub app. Because I work in Dreamweaver, what I want is for a window to pop up when I save a file and ask me whether I want to commit. Is something like this achievable, and if so, how?
Perhaps there's a solution that uses a directory watcher to watch for changes and then prompt?
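As a rough sketch of that idea (the function name is made up; it assumes inotify-tools on Linux, with fswatch as the macOS analogue), a watcher could stage and commit on every save:

```shell
# Hypothetical watcher: commit every time a file in the working tree
# is written. This only defines the behaviour; nothing runs until the
# function is called on a repository path.
watch_and_commit() {
  repo="$1"
  inotifywait -m -r -e close_write --format '%w%f' "$repo" |
  while read -r changed; do
    git -C "$repo" add "$changed"
    git -C "$repo" commit -m "autosave: $(basename "$changed")"
  done
}
```

The close_write event fires when a writer closes the file, which is a reasonable approximation of "the user hit save".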
In my opinion, this isn't really a good solution though - you don't just want to use Git as a "backup" solution, you want each commit to be a mini-milestone that represents some logical group of changes. I can't think of a single instance where the first time I saved a change to a file it was commit-worthy. If you were to commit with every save, how would you ever test those changes?
I haven't used it myself but the GitWeaver extension may be what you are looking for.

Windows hard links - protect against writes

I have a bunch of files that I download at some point and then customize. I want to keep the originals, but also allow modifications, and I want to do this using hard links.
I figure I first download the batch of files into some sort of repository, then create hard links into my work location. I want to let the user delete his files (e.g. delete the hard link), which doesn't pose problems.
However I also want to let him write to them, in which case I want my original file to be left untouched in the repository, so I can revert later. How can I do this transparently, without actually locking the file and forcing him to delete it and recreate it?
Any ideas greatly appreciated, thanks.
Cosmin
In Windows you have no such option, as NTFS/FAT hard links are not copy-on-write. Hard links are just links anyway: both names point to a single file, and if the file is changed through link A, the change is visible through link B as well.
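This can be demonstrated on any POSIX filesystem (NTFS hard links behave the same way); the files below are throwaway temp files:

```shell
tmp=$(mktemp -d)
echo original > "$tmp/repo.txt"
ln "$tmp/repo.txt" "$tmp/work.txt"   # hard link: a second name for the same inode
echo modified > "$tmp/work.txt"      # write through the "work" name
cat "$tmp/repo.txt"                  # the "repository" name shows "modified" too
```

This is exactly why a plain hard link can't protect the original against writes; only a real copy (or a snapshotting mechanism) preserves it.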
You can partially achieve the same result with Windows File History; however, I don't know any way to set it up exactly as you described.
