I'm trying to use a zoomable sunburst to display some data. I've generated the JSON file and am able to use the site to display my data. Now I'm trying to do this on my local machine, but I'm not sure of the correct way to go about it.
I think there are a couple of ways of going about this. One would be to just dump the JS code into a .js file and import it into an HTML file. I've seen some implementations on GitHub that give me the ability to do this, but they are not as clean as the one I've found on observablehq, and I'm unable to get the observablehq one to work locally by copy/pasting.
I also see an option on observablehq to download the code. I did that, and the README that came with it says I need to run it on a server (e.g. python -m http.server), but when I run the server from the folder containing the downloaded code, I keep getting a bunch of
code 404, message File not found
Now I'm a bit confused. I'd like to know the "right" way to go about using the zoomable sunburst to show my data, and whether it's at all possible to run this on my local machine.
Any suggestions/advice would be great. Thanks.
I'm super late to the party, but here is what the problem was for me:
I was running python -m http.server in the wrong directory, i.e. a directory that didn't have the index.html file inside. Once I ran it in the directory that had the index.html file, it worked perfectly.
Hope this helps someone!
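For reference, what `python -m http.server` does can also be reproduced in a few lines of the standard library, which makes the served directory explicit so the "wrong directory" mistake is harder to make. A minimal sketch (the function name is my own):

```python
# Minimal sketch of `python -m http.server`: serve one explicit directory
# over HTTP. If that directory doesn't contain index.html, requests for
# the page will 404 - which is the symptom described above.
import functools
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def make_server(directory, port=8000):
    # Bind the handler to an explicit directory so the server cannot
    # accidentally serve whatever folder you happened to launch it from.
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer(("", port), handler)

# Usage, from the folder that holds index.html:
#   make_server(".").serve_forever()   # then browse to http://localhost:8000/
```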
I just started learning Python. Today I tried to do some problems along with tutorials. Somewhere along the way, I don't know what I clicked to make it all stop working. Originally it said it couldn't find my file, no matter what I typed in the code. I tried clicking Projects and making a new file, but ended up deleting the files. I then tried to redownload it from the Python website to restore factory settings, and it says "verify that you have access to that directory." All code I write in Google Colab works, but once I put it in Python I can't get anything to run.
I tried to reset Python and also went to Google to find out what I did wrong. I think I somehow broke the path, but I don't know to what.
https://huggingface.co/models
For example, I want to download 'bert-base-uncased', but I can't find a 'Download' link. Please help. Or is it not downloadable?
The accepted answer is good, but writing code to download the model is not always convenient. It seems git works fine for getting models from huggingface. Here is an example:
git lfs clone https://huggingface.co/sberbank-ai/ruT5-base
where 'lfs' stands for 'large file storage'. Technically this command is deprecated and a simple 'git clone' should work, but then you need to set up filters so large files aren't skipped (How do I clone a repository that includes Git LFS files?)
The models are automatically cached locally when you first use them.
So, to download a model, all you have to do is run the code that is provided in the model card (I chose the corresponding model card for bert-base-uncased).
At the top right of the page you can find a button called "Use in Transformers", which even gives you the sample code, showing you how to use it in Python. Again, for bert-base-uncased, this gives you the following code snippet:
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
When you run this code for the first time, you will see a download bar appear on screen. See this post (disclaimer: I gave one of the answers) if you want to find the actual folder where Huggingface stores its models.
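If you just want to know where the cache lives on your machine, you can compute the default location with the standard library alone. This sketch assumes the library's documented default of ~/.cache/huggingface, overridable via the HF_HOME environment variable; check your installed version's docs to be sure:

```python
# Print the default Hugging Face cache directory. The path below is the
# library's documented default; the HF_HOME environment variable
# overrides it if set.
import os
from pathlib import Path

def hf_cache_dir():
    return Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))

print(hf_cache_dir())
```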
I agree with Jahjajaka's answer. In addition, you can find the git URL by clicking the button called "Use in Transformers", shown in the picture.
I typically see if the model has a GitHub repo where I can download the zip file. Due to my company protocols I often cannot directly connect to some sources without getting an SSL certificate error, but I can download from GitHub.
How about using hf_hub_download from the huggingface_hub library?
hf_hub_download returns the local path where the model was downloaded, so you can hook this one-liner into another shell command.
python3 -c 'from huggingface_hub import hf_hub_download; downloaded_model_path = hf_hub_download(
repo_id="CompVis/stable-diffusion-v-1-4-original",
filename="sd-v1-4.ckpt",
use_auth_token=True
); print(downloaded_model_path)'
I used to use Dreamweaver. I have a huge Classic ASP website. I edit the files on my local system and, when done, I can upload the file(s) via FTP to the remote webserver. Now I'm trying to switch to VSCode. I've installed ftp-simple, ftp-sync and deploy, but I can't find the setup to get Dreamweaver-like behaviour. E.g., for each file I want to upload/deploy, I have to locate the exact location in the remote file tree.
I really feel like deploy deserves more attention. I spent the past 4 days or so looking for an extension that does just that: auto-upload to an FTP folder from a local folder. I wanted to make git work for my website, but I couldn't get that to work on the server with ftp-simple or ftp-sync, because those extensions only download the opened files, or open in a different temporary folder each time. I set up deploy now and got exactly what I wanted thanks to your tiny comment, thank you!
(I'm sorry if this post is too old to comment on, but I browsed Stack overflow for days to find this, so I thought it might help others in the future to point this out.)
It sounds like you're just missing your mapping configuration. Most text-editor FTP packages include a configuration file where you specify the server, your credentials, and the root folder of your FTP server. Have you specified this?
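For illustration, extensions in the deploy family read a mapping along these lines from .vscode/settings.json. The exact key names vary by extension and version, so treat every name here as an assumption to verify against your extension's README; the host, credentials, and dir values are placeholders:

```json
{
  "deploy": {
    "packages": [
      { "name": "site", "files": [ "**/*" ], "exclude": [ ".git/**" ] }
    ],
    "targets": [
      {
        "type": "ftp",
        "name": "production",
        "host": "ftp.example.com",
        "user": "username",
        "password": "password",
        "dir": "/public_html"
      }
    ]
  }
}
```

The "dir" entry is the piece that maps your local folder onto the remote file tree, so you no longer have to locate each file's remote location by hand.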
I have to use a d3 graph on my web page. I've never worked with d3 before, which is why I'm facing some problems using it. Basically, I have to work in a Dropbox folder so my clients can see the worked file locally in their browser. Say I put a d3 chart in file.html, and my folder structure is like this:
D:\Projects\Dropbox (Company)\MyName\FolderName\file.html
But I saw that to run and see/show d3 examples, I need to run a web server, e.g. a Python server. So I downloaded Python 3.4.1 and installed it by double-clicking. It's installed in this directory: C:\Python34
After that, I tried to follow d3's documentation for starting a Python server. So I opened my cmd and typed:
python -m http.server 8888 &
But it gives me this error:
So, my question is:
How can I install python web server?
After installing that web server, is it possible to see the d3 chart via this link: file:///D:/Projects/Dropbox (Company)/MyName/FolderName/file.html, or do I have to put my files inside htdocs and run them via http://localhost/folderName/file.html? (I don't want to put the files inside htdocs; it'll be tough for my clients to see the output of the files directly from their PCs.)
If it can't be seen without putting it inside the htdocs folder, I may look for a solution for running d3.js locally without installing any additional software/server (though I've found this type of solution for some d3.js charts, but not for all). Thanks in advance, and please don't mind if it's a lame question. Basically, it's my first day working with d3.js, and I'm only amateur-level skilled with JavaScript and jQuery.
OK, browsers are designed with security in mind; by default they don't let scripts go and grab a file from just anywhere, for very good reasons. They only allow you to grab a file from the server or through requests. So to share your work with your client, you will either need to use a hosting service (I would recommend bl.ocks), design your visualisation so it doesn't require any external data, or provide instructions on how to disable browser security. You can read more about this here, here and here.
As for Python: in many cases Python is already installed on people's machines, so running a server from Python shouldn't be an issue. All you have to do (on a Windows machine) is launch your command prompt, navigate to your directory and start your Python server. Then open a browser and navigate to localhost. Please note that Python needs to be on your system path (i.e. set as an environment variable); the Python documentation might help you here.
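That PATH requirement can be checked programmatically before handing the instructions to a client. A small cross-platform sketch (the helper name is my own):

```python
# Check whether a Python executable is reachable from the system PATH,
# which is what `python -m http.server` needs to work from any folder.
import shutil

def find_python():
    """Return the path of the first Python executable found on PATH, or None."""
    for name in ("python", "python3", "py"):
        path = shutil.which(name)
        if path:
            return path
    return None

print(find_python() or "Python is not on the PATH - add its install folder to PATH")
```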
I downloaded the latest version of phpcrawler, and I can access a test website of my own.
I only have an image and some text on this site. I run the crawler and I receive the text minus the image, because I added the proper $crawler->addNonFollowMatch("/.(jpg|gif|png)$/ i");
I cannot get it to save the tmp file. It does not save the unique tmp file in the folder I run the crawler from, and I have tried to save a named file with no luck.
I did run into many deprecation errors on different lines in all the PHP files, for example #fopen; the # causes problems in different areas. I use PHP and can also do regex.
David.
I answered my own question, since I see that PHPCrawler questions really do not get answered; I saw a question from last year that wasn't answered. I will answer it as well, though it might be too late to do any good. This is the answer.
I added the following to a modified phpcrawler that I adjusted for my needs:
$fp = fopen('c:/test/poopoo.txt','w');
fwrite($fp,($page_data['source']));
fclose($fp);
You put it before flushing the file, after you create your instance of the class.
I found that using the PHP Simple HTML DOM Parser from this project works well. If you need more control, use regex, but that does have a steep learning curve.