Stanford.NLP.NER NuGet: Does this pass data to a service, or is all processing local on your devbox? - stanford-nlp

I need to run NER on sensitive data and would like to know whether, when using the Stanford.NLP.NER NuGet package on my devbox, the text is sent to a service outside of my corporate network or whether the data is processed locally on my machine.
Thanks,
Roger

I'm not familiar with Microsoft NuGet or what you're using in particular, but in general you can absolutely run Stanford NER strictly on your local machine. For one, you can just run the pipeline, which simply launches a Java process on your local machine and uses only local resources. You can also launch a server that is entirely encapsulated on your local machine and, once again, only uses resources on the local machine.
If someone has made this available via NuGet, I would hope it is just wrapping the local process. If you can let me know more details about how you are accessing Stanford NER, I can provide more insight.
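For what it's worth, here is a minimal sketch of running the standalone NER server entirely locally; it assumes the standard Stanford NER distribution is unpacked in the current directory and uses the classifier file name shipped with it:

# Launch the Stanford NER socket server as a plain local Java process.
# Nothing is sent to an outside service; clients connect to this port on this machine.
java -mx1g -cp "*" edu.stanford.nlp.ie.NERServer `
    -loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz `
    -port 9191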

Related

Making use of local runtime on Colab only for accessing data, using a remote GPU

I also looked at Google Colab: Local Runtime use,
but did not find an answer for my needs.
I am interested in using the local runtime to access my data.
I can also import my local .py files to make use of functions I have already created. Good.
Now, the thing is, I would like to install GPU-based libraries to exploit CUDA and Colab's functionality.
But if I install them via pip, I see that they will execute on my local machine.
Instead, I would like to have things executed on a remote machine.
Can I connect via the local runtime to access my data, without needing to import it into Google Drive, and use a remote GPU instance to process it?
Thank you for advising, and also for hinting at how the architecture of a "runtime" works.
You can't do this anymore. Google now only permits the local runtime to use your local machine.

Can CPU performance be monitored on a server using external tools, without installing any software on the server, using just the server IP?

Is there a way to monitor the CPU utilisation of the Linux server where my website is hosted using any available external tools, without installing any software on the server (i.e. just using the IP address of the server)?
Please let me know if that would be possible.
If your web tool includes some kind of performance monitor plugin, you might set it up on a hidden page. Your tool might also already include some kind of monitoring for your site.
But how much use the data you get back will be is another matter, as you are most likely running in a Docker container inside a VM that is sized to suit the provider (and not you).
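If you do end up exposing some kind of hidden status page, a crude external poller is all you need on your side. A sketch follows; the URL and the "cpu" field are purely hypothetical and depend on whatever your hosting tool actually exposes:

# Poll a hypothetical status endpoint exposed by the site and log the reported CPU load.
# The URL and the 'cpu' property are assumptions for illustration only.
$statusUrl = "http://203.0.113.10/hidden-status.json"
while ($true) {
    try {
        $status = Invoke-RestMethod -Uri $statusUrl
        "{0}  CPU: {1}%" -f (Get-Date -Format o), $status.cpu |
            Tee-Object -FilePath .\cpu-log.txt -Append
    }
    catch {
        Write-Warning "Status page unreachable: $_"
    }
    Start-Sleep -Seconds 60
}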

How to create a Windows Docker image with VS test agents?

I couldn't find any Windows image with test agents in Microsoft's public Docker repo. How can I create a Windows Docker image with Visual Studio test agents to run CodedUI/MSTest?
On a more general note, how do you create a Windows Docker image with any GUI-based software pre-installed and pre-configured?
Note: This may look like a low-research question, but I had to post it here because Docker + Windows is a relatively new thing and there isn't much information available on the net either.
You could have a VM template with the Microsoft agent software already installed on the machine. All the GUI setup does is modify an XML file, so you could effectively:
Have a VM container with the agent already installed.
Stop the TestAgent service.
Modify the agent configuration XML.
Restart the TestAgent service.
This could probably be achieved with a PowerShell script or a custom console application.
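A rough sketch of such a PowerShell script follows; the service name, configuration path and XML element are assumptions, so check what your test agent version actually uses:

# Hypothetical sketch: reconfigure a pre-installed test agent without the GUI.
# Service name, config path and XML element names are placeholders.
$configPath = "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\TestAgentConfig.xml"

Stop-Service -Name "TestAgent"                      # 1. stop the agent service
[xml]$config = Get-Content -Path $configPath        # 2. load the configuration XML
$config.Configuration.ControllerName = "controller.domain.com:6901"   # hypothetical element
$config.Save($configPath)                           #    write the change back
Start-Service -Name "TestAgent"                     # 3. restart with the new settings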
If you need more help, we could figure this out together. Please feel free to contact me on LinkedIn.
https://www.linkedin.com/in/david-o-neill-8a1aa498/

Automatic applications deployment

I want to automate the unattended deployment of applications/roles/features on a Windows 2012 R2 infrastructure. This project needs many hours of programming, which is why I'm asking here.
I want to deploy the following applications and roles: Active Directory, DNS, SQL Server 2012, Citrix XenApp server, Citrix XenDesktop server, Citrix Data Collector, Citrix Licence server, Citrix StoreFront server.
So the basic deployment will be on 8 servers (already installed on ESXi, with IP configuration only).
I imagined this scenario :
I will fill in a CSV file that contains all of the information, and execute PowerShell scripts to deploy everything; we can imagine one main script that calls a different script for each component (SQL, AD, DNS, Citrix, etc.).
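A minimal sketch of that orchestration idea (the CSV columns and the per-role script names are hypothetical):

# servers.csv is assumed to have the columns: Name, Role, IPAddress
# Each role maps to its own deployment script (Deploy-AD.ps1, Deploy-Sql.ps1, ...).
Import-Csv -Path .\servers.csv | ForEach-Object {
    $script = Join-Path $PSScriptRoot ("Deploy-{0}.ps1" -f $_.Role)
    Write-Host "Deploying role '$($_.Role)' to $($_.Name) ($($_.IPAddress))"
    & $script -ComputerName $_.Name -IPAddress $_.IPAddress
}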
I don't want to depend on any tool (SCCM, Puppet or whatever), which is why I want to create it from scratch, but maybe I'm wrong.
I also read that there is a new feature called PowerShell DSC that simplifies application deployment: http://blogs.technet.com/b/privatecloud/archive/2013/08/30/introducing-powershell-desired-state-configuration-dsc.aspx
Here is a simple example: if you need 4 IIS web servers, execute this code:
Configuration DeployWebServers
{
    Node ("test1.domain.com","test2.domain.com","test3.domain.com","test4.domain.com")
    {
        WindowsFeature IIS
        {
            Name = "Web-Server"
            Ensure = "Present"
        }
    }
}
DeployWebServers -OutputPath "C:\scripts"
Start-DscConfiguration -Path "C:\scripts" -Verbose -Wait -Force
But in my case I'll have only one server per application/role/feature. If I understand correctly, this feature is interesting only if you need to deploy the same configuration on (x) servers.
What's your advice? Should I write PowerShell scripts from scratch, or choose a solution like Puppet or Chef (much easier), accepting that in that case I'll be dependent on a tool?
The best solution would be to use a SQL database: the final goal of my project is a web application with a database that will execute my PowerShell scripts to deploy my infrastructure.
Of course, from this web application I will populate my database through forms, and my PowerShell scripts will query this database to get information (IP address, client name, domain name, password, users...).
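A sketch of that last part, assuming the SQL Server PowerShell module is available; the server, database, table and script names are placeholders:

# Hypothetical sketch: a deployment script pulls its parameters from the database
# that the web application populates. All names below are placeholders.
$rows = Invoke-Sqlcmd -ServerInstance "DEPLOYDB01" -Database "Deployments" -Query @"
SELECT ClientName, DomainName, IPAddress
FROM dbo.PendingDeployments
WHERE Status = 'Queued'
"@

foreach ($row in $rows) {
    .\Deploy-Infrastructure.ps1 -ClientName $row.ClientName `
                                -DomainName $row.DomainName `
                                -IPAddress  $row.IPAddress
}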
Thank you for your advice.
Chef or Puppet will be the easiest way forward, and both tools have been around long enough that you needn't worry about them disappearing off the internet. Both work pretty much out of the box and will get you up and running in considerably less time than if you were to design your own system.
Having said that, a benefit of going with a PS solution is that it doesn't require any agents installed on the destination boxes (connectivity is handled by WinRM). Ultimately you can wrap it up as a PowerShell module, hand it out to your sysadmins and retain full control of what's going on under the hood.
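For instance, a minimal agentless sketch over WinRM (the server names and the feature being installed are placeholders):

# No agent needed on the targets: WinRM ships the script block to each server.
$targets = "sql01.domain.com", "ctx01.domain.com"
Invoke-Command -ComputerName $targets -Credential (Get-Credential) -ScriptBlock {
    # Placeholder workload: install a role on each remote box.
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools
}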
A PS solution will give you full control and a better understanding of the underlying process, but at the cost of time and other design headaches.
To sum up: if you have the time, the will or a specific use case then go with PS. Otherwise do what the big boys do and save yourself reinventing the wheel - or seventeen.
Disclaimer: I did the PS thing for a previous employer.
If you're looking for a repeatable deployment solution, and you don't mind using some light, free infrastructure, I propose you use the Windows ADK 8.1 and MDT 2013 (if you're using Windows Server 2012 R2). You can develop a front end to choose a deployment type. Rather than using a CSV file, all the information can be contained within the task sequence, which can be configured to trigger tasks on different conditions.
Johan Arwidmark from deploymentresearch.com has developed a great example of this called the Hydration Kit, with a full step-by-step guide that sets up a Configuration Manager 2012 R2 infrastructure, running on Windows Server 2012 R2 and SQL Server 2012 SP1, in either Hyper-V or VMware. If you ask him nicely, he might allow you to use his work as a base for your project.

Creating a virtual machine image as a continuous integration artifact?

I'm currently working on a server-side product that is a bit complex to deploy on a new server, which makes it an ideal candidate for testing in a VM. We are already using Hudson as our CI system, and I would really like to be able to produce a virtual machine image with the latest and greatest software as a build artifact.
So, how does one go about doing this exactly? What VM software is recommended for this purpose? How much scripting needs to be done to accomplish this? Are there any issues in particular when using Windows 2003 Server as the OS here?
Sorry to deny anyone an accepted answer here, but based on further research (thanks to your answers!), I've found a better solution and wanted to summarize what I've found.
First, both VirtualBox and VMWare Server are great products, and since both are free, each is worth evaluating. We've decided to go with VMWare Server, since it is a more established product and we can get support for it should we need to. This is especially important since we are also considering distributing our software to clients as a VM instead of as a special server installation, assuming that the overhead of the VMWare Player is not too high. Also, there is a VMWare scripting interface called VIX which can be used to install files directly into the VM without needing SSH or SFTP, which is a big advantage.
So our solution is basically as follows: first we create a "vanilla" VM image with the OS and nothing else, and check it into the repository. Then we write a script which acts as our installer, putting the artifacts created by Hudson onto the VM. This script should have interfaces to copy files directly, over SFTP, and through VIX. This will allow us to continue distributing software directly for the target machine, or through a VM of our choice. The resulting image is then compressed and distributed as an artifact of the CI server.
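To illustrate the VIX part, a rough sketch using the vmrun utility that ships with it; all paths and credentials are placeholders, and the host-authentication flags (such as -T/-h/-u/-p) vary by VMWare product and version, so they are omitted here:

# Rough sketch: push the Hudson build artifact into the guest and run the installer.
# Paths, credentials and the .vmx location are placeholders.
$vmx = "C:\VMs\product-vanilla\product.vmx"

& vmrun start $vmx
& vmrun -gu Administrator -gp "guestPassword" copyFileFromHostToGuest $vmx `
    "C:\hudson\jobs\product\lastSuccessful\archive\installer.msi" "C:\install\installer.msi"
& vmrun -gu Administrator -gp "guestPassword" runProgramInGuest $vmx `
    "C:\Windows\System32\msiexec.exe" "/i C:\install\installer.msi /qn"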
Regardless of the VM software (I can recommend VirtualBox, too), I think you are looking at the following scenario (a rough sketch of the last two steps follows the list):
Build is done
CI launches the virtual machine (or it is always running)
CI uses scp/sftp to upload the build into the VM over the network
CI uses ssh (if available on the target OS running in the VM) or another remote command execution facility to trigger the installation in the VM environment
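A rough sketch of the upload and remote-install steps; the host name, key file, paths and install command are placeholders, and the guest is assumed to run an SSH server:

# Upload the build into the VM, then trigger the install remotely.
$vm  = "builder@buildvm.local"
$key = "$env:USERPROFILE\.ssh\ci_key"

& scp -i $key ".\dist\product-installer.zip" "${vm}:/tmp/product-installer.zip"
& ssh -i $key $vm "unzip -o /tmp/product-installer.zip -d /opt/product && /opt/product/install.sh"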
VMWare Server is free and a very stable product. It also gives you the ability to create snapshots of the VM slice and roll back to a previous version of your virtual machine when needed. It will run fine on Win 2003.
In terms of provisioning new VM slices for your builds, you can simply copy and paste the folder that contains the VMWare files, change the SID and IP of the new VM, and you have a new machine. It takes about 15 minutes, depending on the size of your VM slice. No scripting required.
If you use VirtualBox, you'll want to look into running it headless, since it'll be on your server. Normally, VirtualBox runs as a desktop app, but it's possible to start VMs from the commandline and access the virtual machine over RDP.
VBoxManage startvm "Windows 2003 Server" -type vrdp
We are using Jenkins + Vagrant + Chef for this scenario.
So you can follow this process (a rough sketch of the CI side follows the list):
Version-control your VM environment using Vagrant provisioning scripts (Chef or Puppet)
Build your system using Jenkins/Hudson
Run your Vagrant script to fetch the last stable release from the CI output
Save the VM state for future reuse
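A rough sketch of how the CI job might drive this; the artifact URL, paths and output box name are placeholders, and vagrant package assumes the VirtualBox provider:

# Fetch the last stable artifact from CI, then bring the box up so the
# Vagrantfile's Chef recipes provision it, and package the result for reuse.
Invoke-WebRequest -Uri "http://jenkins.local/job/product/lastStableBuild/artifact/dist/product.zip" `
                  -OutFile ".\product.zip"

& vagrant up
& vagrant package --output product-vm.box   # snapshot the provisioned VM for later reuse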
Reference:
vagrantup.com
I'd recommend VirtualBox. It is free and has a well-defined programming interface, although I haven't personally used it in automated build situations.
Choosing VMWare is currently NOT a bad choice.
However, just as VMWare provides support for VMWare Server, Sun provides support for VirtualBox.
You can also accomplish this task using VMWare Studio, which is also free.
The basic workflow is this:
1. Create an XML file that describes your virtual machine.
2. Use Studio to create the shell.
3. Use VMWare Server to provision the virtual machine.
