Making use of Colab local runtime only for accessing data, while using a remote GPU runtime

I also looked at Google Colab: Local Runtime use, but did not find an answer for my needs.
I am interested in making use of the local runtime to access my data.
I can also import my local .py files to make use of functions already created. Good.
Now, the thing is, I would like to install GPU-based libraries to exploit CUDA and Colab's functionality.
But if I install them via pip, I see they will execute on my local machine.
Instead, I would like to get things executed on a remote machine.
Can I connect via the local runtime to access my data, without needing to upload it to Google Drive, and use a remote GPU instance to process it?
Thank you for advising, and also for hinting at how the architecture of a "runtime" may work.

You can't do this anymore. Google only permits you to use your own machine in a local runtime.
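As for how the runtime architecture works: the notebook front end attaches to exactly one Jupyter kernel, and every cell, pip installs included, executes wherever that kernel runs. Here is a sketch of the local-runtime connection steps as Colab documents them (at the time of writing), which makes that locality explicit:

# Install and enable the WebSocket extension Colab connects through.
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
# Start a local Jupyter server that Colab is allowed to reach.
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 --NotebookApp.port_retries=0

Colab then attaches to that kernel via the URL the server prints, so the local machine is the only execution target; there is no supported way to split a single notebook between local data access and a hosted GPU.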

Related

Stanford.NLP.NER NuGet: does this pass data to a service, or is all processing local on your devbox?

I need to run NER on sensitive data and would like to know whether, when using the Stanford.NLP.NER NuGet package on my devbox, the text is sent to a service outside of my corporate network, or whether the data is processed locally on my machine.
Thanks,
Roger
I'm not familiar with Microsoft NuGet or what you're using in particular, but in general you can absolutely run Stanford NER strictly on your local machine. For one, you can just run the pipeline, which will simply launch a Java process on your local machine and use local resources. You can also launch a server which is entirely encapsulated on your local machine and, once again, only uses resources on the local machine.
If someone has made this available via NuGet, I would hope it was just wrapping the local process. If you could let me know more details about how you are accessing Stanford NER I can provide more insight.
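For reference, here is roughly how you would run Stanford NER as a server entirely on your own machine; the memory setting, the classifier path (from the stock distribution), and the port are just examples:

# From the directory of an unzipped Stanford NER distribution;
# nothing here contacts any outside service.
java -mx1000m -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer \
  -loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz -port 9191

Clients then tag text by connecting to that port on the machine itself, so the sensitive data never leaves it (assuming your firewall blocks inbound connections to the port).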

How to build and install spark in an offline environment?

I am trying to install Spark 1.3.1 on an offline cluster (no Internet access at all, only LAN). However, I don't know how to build it from source, since both the Maven and sbt builds require a network connection. Can someone offer some help or possible solutions?
Thanks.
A simple (albeit somewhat hacky) solution would be to build it on a machine with internet access and then copy everything in ~/.ivy2 over to the machine with only LAN access, so that the build can use the cached artifacts. Another, perhaps simpler, option would be to use a pre-built Spark, if that's an acceptable solution.
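A rough sketch of the cache-copy approach; the host name and paths are placeholders, and the sbt launcher script location differs between Spark versions:

# On the machine WITH internet access: run a full build once, so that
# sbt/ivy downloads every dependency into ~/.ivy2.
sbt/sbt clean package        # or build/sbt, depending on the Spark version
# Copy the dependency cache over the LAN to the offline machine.
rsync -a ~/.ivy2/ user@offline-host:~/.ivy2/
# On the offline machine: the build should now resolve from the local cache.
sbt/sbt clean package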

How do I remotely obtain a system's network shares and connections?

I'm looking for a way to obtain information similar to the following console applications, remotely:
net use
net share
netstat -ano
However, I need to be able to do this without running a 3rd party application on the system. This effectively rules out using psexec to execute the command remotely, because psexec would then be installed as a service.
I should add that I have administrative credentials on the remote system. I've considered using WMI's remote execution ability, but that requires me to write output to a file and then retrieve it. It's possible, but I'd like to know if anyone has a better way.
I am using Delphi 2010.
There are a couple of Delphi WMI components that allow remote access. I have not used the remote options personally, though.
MagWmi - http://www.magsys.co.uk/delphi/magwmi.asp (Delphi 2010 support, and free with source)
WMISet/NTSet - http://www.online-admin.com/ntset.html (TNTShare manages shared resources on a local computer and on remote hosts. Using this component you can change the list of shared devices, see files that have been opened by remote users, watch and terminate remote sessions opened to the destination computer, and change the list of mapped network drives. It is not free.)
GLibWMI - Found at Torry.net; home page not available. (Delphi 2010 support, and freeware with source.) Not sure if it's capable of remote access; I have not used it.
Hope this helps
I agree with Logman: you can access this information using WMI.
The GLibWMI components can be found on this website (http://neftali.clubdelphi.com) or on SourceForge (http://sourceforge.net/projects/glibwmi/).
The current version is 1.8b and has a component called SharedInfo with which you can get that information.
The source code is available so you can expand it to access other WMI classes if necessary.
Regards.
P.S.: Sorry for my mistakes with English.
You can enumerate shares using the NetShareEnum function (headers are in the Jedi Apilib).
I assume there must be an API for "net use", but I have never used it (check the WNet functions). An alternative is to use the EnumNetworkDrives method of the WshNetwork COM object.
As for netstat, I don't think it's possible to do that remotely (other than by using some kind of method to spawn a process remotely).
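If a built-in tool is acceptable, the WMI classes these components wrap can also be queried with wmic, which ships with Windows and installs nothing on the target. The host and credentials below are placeholders, and note that Win32_NetworkConnection may only report connections visible to the querying account:

rem Shares on the remote host, roughly the "net share" output.
wmic /node:HOSTNAME /user:DOMAIN\adminuser share get Name,Path,Description
rem Mapped drives via Win32_NetworkConnection, roughly "net use".
wmic /node:HOSTNAME /user:DOMAIN\adminuser netuse get LocalName,RemoteName,Status

From Delphi, the components above essentially issue the same WQL, e.g. SELECT Name, Path FROM Win32_Share.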

Best way to interface between Windows dev platform and Linux test platform?

My project is a PHP web application. This applies to my test server (local), not the production server! I am also the solo developer on this project (however, that may change in the very far future). Also, all my source code is committed to a repository, and the production server gets the source code from the repository.
I do my development in Windows while my test server runs on Ubuntu (perhaps you can also recommend another distro that is easy to use and can serve as a good web server). I need an elegant way to interface between the two environments. Currently, I do my coding in Windows and then FTP the changed files to the test server. However, this is quite cumbersome and tedious, since I have to manually go to my FTP client each time. Please suggest something elegant! Perhaps FTP sync? Or OpenVPN (where the root www directory on the test server acts like a folder in Windows)? Thanks for your awesome time!
Easiest would be: in Ubuntu, right-click a folder, click "Sharing Options", and share the folder. In Windows, connect to the share and work on that copy.
If you're using version control, using continuous integration like Hudson ( http://hudson-ci.org/ ) would help if you create a task that builds/exports the website for the testing server. This approach would be better in the long term, but you'll waste a day setting it up initially.
I prefer SFTP to FTP.
That said, ExpanDrive lets you map SFTP servers to local drive letters, which then means you can use any text editor to access your files directly on the test server, or use other mechanisms to keep the files in sync. Since they show up as two local drives, you can use just about any product out there.
If you want to use FTP, you can just map the drive in Windows Explorer. If you open My Computer, then go to Tools > Map Network Drive, you can map an FTP server folder to any local drive. Just type in the address as the folder, i.e. ftp://mscharley@192.168.0.10/htdocs
This will at least save you a trip to the FTP client...
Is there any reason you couldn't just test on your local computer? At my job, we all develop and do developer testing locally, most of us using Windows. Our production and test servers are all linux based. Working locally is really nice, because you don't need to worry about making changes on the server with every small change.
Another option would be to create a checkout or working copy of your code on the server, and then run svn up or svn export (or the equivalent in your version control software) each time you change the code (assuming you are SSH'd into the server). This is kind of slow, but it's easy. The other option would be to write a script that goes through the svn log for the recent commits and only exports or updates the files that changed. This is much faster, and for all I know there is already something out there that does this.
Finally, some IDEs allow you to edit files live over FTP/SFTP. Basically, the IDE downloads a copy of the code and then re-uploads it when you save.
Currently I develop on Windows (PHP) as well and deploy on a Linux box for testing and production. This is how I do it.
Set up a local development server with e.g. WAMP.
Set up your code base in version control, e.g. Subversion.
Check out your code base onto the testing/staging server, not just on your local dev environment.
In the early stages of development you want to deploy to the testing environment A LOT to sort out any discrepancies between your Windows and Linux environments. When your programming efforts turn more toward program-flow work, this constant testing will probably slow down. But still take the effort to test on a regular basis.
To test your code base on staging, do an svn update; I just log in with an SSH session to do this. A key thing to note here is that you do not have to make any config changes to your code base. If you do need to make config changes to your environment on staging, it is worthwhile spending the time to SCRIPT this process rather than doing it manually.
Do the same for production. I use a Subversion checkout on production as well. Make sure you set your .htaccess file to deny access to the hidden .svn folders, and script the deployment, especially if config changes are necessary.
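A sketch of that checkout-based deploy; the repository URL and paths are placeholders:

# One time: check the code base out on the staging/production box.
svn checkout http://svn.example.com/myapp/trunk /var/www/myapp
# Each deploy, from an SSH session:
cd /var/www/myapp && svn update
# One common way to hide the .svn folders from the web (Apache .htaccess):
# RedirectMatch 404 /\.svn(/|$)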
Some ideas:
Use a server environment under windows (e.g. EasyPHP).
Use a development tool that can save over FTP (e.g. UltraEdit).
Use a network drive connected to the remote machine via FTP.
Use a network drive connected to the remote machine via Samba (see the sketch after this list).
Run a Linux distro inside a virtualization tool (e.g. VirtualBox) and write from the Windows host to a shared directory of the guest.
Use Dropbox to sync files between machines (this is more a hack than an "enterprise" solution).
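For the Samba option, mapping the share from the Windows side is a one-liner; the server name, share name, and drive letter below are placeholders:

rem Map the test server's shared www directory as drive W:
net use W: \\testserver\www /persistent:yes

After that you can open and save files on W: from any Windows editor, and they land directly in the web root of the test server.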

Simplest way to get access to a remote server for computing tasks

I'm working on some academic research projects involving scraping large data sets from the web using Python. It's been inconvenient to work on my academic institution's Linux server because (1) I don't have superuser access, meaning I'm dependent on the IT staff to install my packages, and (2) my disk quota is somewhat limited (I would ideally want ~10 GB). What is the simplest way for me to get access to a machine that solves these problems? I don't need huge processing power; I just need access to a reasonably fast machine that runs 24/7, so that my programs can run continuously, and above all, something very simple to get running, use, and maintain, since I have a few non-CS people working on this project with me. Linux would be preferable, but I'd consider Windows too.
I'm aware of Amazon Web Services, but am wondering if there's something more appropriate to my specific needs.
By the way, it would be a huge bonus if I could get some sort of remote desktop access to this machine so I wasn't limited to using SSH and SFTP.
Suggestions?
EDIT: I can't use VirtualBox or Virtual PC because I need the program to be running around the clock, and I need to turn off my laptop often, etc.
If you do want to stick with running on your CS department's machines, use virtualenv to solve your package installation woes. And if disk space is an issue, you could use S3 (and perhaps FUSE) to store huge amounts of data extremely cheaply.
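A minimal sketch of the virtualenv route; the path and the package are only examples, and nothing here needs superuser access:

# Create an isolated Python environment in your home directory.
virtualenv ~/envs/scraper
# Activate it for this shell session.
source ~/envs/scraper/bin/activate
# Packages now install under ~/envs/scraper, not into the system Python.
pip install requests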
However, if that's not really what you're after, I can recommend Slicehost very highly. They give you a virtual private server - so you have complete control over what gets installed, users, admin, etc.
In principle, it's very much like EC2 (which I prefer to use for "real" servers), but has a friendly interface, great customer service and is aimed at smaller projects like yours.
Use x11vnc with ssh.
'sudo apt-get install x11vnc' on your remote server.
Once you have that, you can access your remote server via vnc, but the great thing is that you can tunnel vnc over ssh like so:
ssh -X -C -L 5900:localhost:5900 remotehost x11vnc -localhost -display :0
For more details see the x11vnc manpage.
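With the tunnel from the command above still open, point whatever VNC client you have at the local end of it; display :0 corresponds to TCP port 5900, and vncviewer below stands in for your client of choice:

vncviewer localhost:0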
Or, just set up remote desktop (which is actually VNC) on your Linux distribution. Most distributions come with a GUI to configure remote desktop access.
If you have a Linux machine you can use, then ssh -X will allow you to start GUI programs. It's not remote desktop, but it's close.
ssh -X whoever@whatever.com
firefox
Then bam. A Firefox window pops up on your desktop.
I have been pretty happy with TekTonic Virtual Private Servers. It's a virtualized environment, but you have full root access to install any packages you need. I'm not sure what your CPU and memory constraints are, but if they aren't too extensive then this should fit the bill nicely for you. I don't know if you would be able to enable a remote desktop, as I've never tried, but it may be possible to install the requisite packages.
The plans range from $15/mo to $100/mo; the $15/mo plan comes with 294MB RAM, 13GB disk space, and 2.6GHz max CPU speed. I ran on that plan for quite a while and eventually moved up to the next level with double the disk/CPU/memory, and I've been quite happy with it. I've been with them since 2003 and have yet to find anyone who offers equivalent plans at these prices.