I'm forced to use Processing 2 behind a proxy. My problem is: how can I set the host and port of the proxy?
I have searched the settings used by the IDE, e.g. to add libraries or tools. My question is about using a proxy in the applications to be developed.
The solution is given in this document: Processing's default settings
The proxy settings for Processing (the application itself) can be set in the file C:\Users\[username]\AppData\Roaming\Processing\preferences.txt (Windows 7 and higher)
You have to add the values for the keys proxy.host and proxy.port.
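For example (the host and port here are placeholders; quit Processing before editing, since the IDE rewrites preferences.txt on exit):
proxy.host=proxy.example.com
proxy.port=8080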
It's probably easier to look for a solution in plain Java and you should be able to apply the same solutions in Processing (if you're using the Java Mode).
Unfortunately I won't be able to test, but these related answers seem to address your proxy issue, for example: How do I set the proxy to be used by the JVM
I'd try this in setup() first:
System.setProperty("java.net.useSystemProxies", "true");
Only because it looks very simple; it's the fourth answer on the page.
If it doesn't work I'd move towards the top. If you want to do this straight from Processing you probably need to do it programmatically at runtime, as sketched below. If neither of the programmatic runtime options works, you should be able to export your sketch from Processing, then run the generated .jar from the command line, passing the proxy settings as well.
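For instance, the programmatic route might look like this in a sketch (a hedged sketch; proxy.example.com and 8080 are placeholder values, while java.net.useSystemProxies, http.proxyHost and http.proxyPort are the standard JVM networking properties):
void setup() {
  // Option 1: let Java pick up the operating system's proxy settings.
  System.setProperty("java.net.useSystemProxies", "true");
  // Option 2: set an explicit proxy host and port instead (placeholders):
  // System.setProperty("http.proxyHost", "proxy.example.com");
  // System.setProperty("http.proxyPort", "8080");
}
And for an exported sketch, the same properties can be passed to the JVM on the command line (the .jar name is a placeholder):
java -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -jar MySketch.jar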
Although it's a long shot, if drawing some graphics on screen is all you need, you could move from Java to JavaScript with p5.js
This is an ongoing problem with all versions of Processing. For anyone else with this issue in Windows:
Processing is unable to run even a single line of code unless it has an internet connection; no idea why this should be the case. If you are behind a proxy, there are (at least) two ways to add proxy settings. You can do it per user in c:\users\[username]\appdata\roaming\Processing\preferences.txt
or for all users under the main Processing folder (this will vary with version), in the file default.txt in the lib folder. This file says something about "do not edit"; take a backup first if you're concerned, but you can scroll down and edit the proxy settings there easily enough. When a user runs Processing for the first time, these settings will be put into their preferences.txt. If they already have a preferences.txt, you will need to delete or rename it, and a new one will be created on next use.
For a student lab situation, you can copy this default.txt file to the lib folder on all machines, and it will then work for all users.
Related
We want to improve the reproducibility of the analyses at our institute. To this end, we are contemplating a system based on Singularity. The idea is that at the beginning of the analysis, the user can choose a machine configuration (later amendments must be possible) that sticks with them until the project is complete. Then the image is archived with the analysis. Ideally, the user doesn't have to issue system-admin commands (install packages etc.) in the process.
She just makes a request like "I need R with tidyverse and Python 3 and this and that in-house package" and gets a command she can use to ssh into a Singularity container that has those features. When she makes a new request, she gets the newest versions of the programs, but once the container has been deployed those versions don't change anymore.
It gets tricky when I think of the fact that multiple users will need different combinations of software. Do I need to provide an image for every combination of software and extension packages? Think only of a scenario where users can choose an arbitrary combination of {R, Julia, Python, r-tidyverse, r-data.table, r-whatever-genomic-analysis-package-on-bioconductor, python-...}.
Is there a feature selection method in the vein of
singularity pull library://alpine:3.7 +r:3.2.1 +python3:3.7 +r-package:1.2.3
such that the user can
ssh cluster01 -- singularity shell project-abc.simg
and start/continue working?
If not, is there an alternative approach to supplying custom machine configurations to users using Singularity?
I did find Singularity Compose, but this seems to just run multiple containers as services next to each other, so the images stay separate, whereas I need to merge them.
Yes, with Singularity, a dedicated image must be provided for each possible combination of packages.
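For illustration, each such combination would need its own definition file and build along these lines (a minimal sketch; the base image and package list are placeholders):
Bootstrap: docker
From: ubuntu:20.04
%post
    apt-get update && apt-get install -y r-base python3
built with singularity build project-abc.simg project-abc.def, which quickly becomes unmanageable as the number of combinations grows.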
Selecting a set of applications per user is possible by switching your servers to the package managers Nix or Guix (which builds on Nix). The concept here is that each application/library lives within its own directory, whose name contains a hash of the app! Therefore, multiple application versions can coexist and each application can link to another version of the same library.
A user can select a set of those directories as a user profile. This is a folder of symlinks into the binaries in the proper application folders; the Nix manual describes this in detail.
So, each user can set up their environment as they like, down to bitwise reproducibility.
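Installing into such a profile might look like this (a hedged example; the attribute names assume the standard nixpkgs channel):
nix-env -iA nixpkgs.R nixpkgs.python3
This places symlinks into the user's ~/.nix-profile without touching anyone else's selection.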
After the analysis, the profile can be turned into an image. I know it's possible with Guix using guix pack (tar, Docker, Singularity).
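For example (hedged; the package names assume the Guix collection, and squashfs is the format Singularity consumes):
guix pack -f squashfs bash r r-tidyverse
bash is included so that singularity shell has a shell to run inside the resulting image.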
For Nix, I'm not sure. There is a project on GitHub, datakurre/nix-build-pack-docker, but it's been dormant since 2015. Maybe it's enough to copy the needed subset of /nix/store into a folder, pull a NixOS image, and bind the /nix/store of that image to your own folder?
Our application is deployed to the target machine with an msi file. All works nicely. Our tester has gone through his plan, and one of the tests requires deleting the application's configuration file. The application is designed to alert the user with a dialog on startup saying "missing config". However, what happens is that - somehow! - the software starts the installer again and retrieves the missing file from the msi! Which is nice, but not what we want. How do we disable that behaviour?
Without going into much depth on the Windows Installer mechanics (if you're interested in that, there are plenty of articles about it), the shortcut of the software is probably advertised, which means Windows Installer checks that everything is in its place before the software is started.
If you can edit the MSI, make the shortcut non-advertised.
If you can't, install it with DISABLEADVTSHORTCUTS,
e.g. msiexec /i myMsi.msi DISABLEADVTSHORTCUTS=1
Please note that this is only a quick (and dirty) workaround;
to fix this properly you need to understand the whole Windows Installer advertising (also called repair or self-resiliency) mechanism.
Explaining all the causes and mechanics of the repair is far beyond this answer, and there are quite a few articles and posts about it on the internet (especially on MSDN and Stack Overflow).
There is a more correct answer to this, and it is NOT DISABLEADVTSHORTCUTS. You set the component ID to null in the MSI file to prevent repair of that individual file. See the ComponentId remarks here:
http://msdn.microsoft.com/en-us/library/aa368007(v=vs.85).aspx
Edit the MSI file with Orca to delete the Component ID, and write an uninstall custom action to delete the file at uninstall if it's there.
In addition, that's a redundant test. Windows will restore that file for you if it's missing, so the idea that you need a test to notify that it's missing is pointless. The true test should be that Windows restores the file if it's lost, and your app potentially needs to do nothing about the missing file.
You don't mention what tool you are using to make your MSI, but I'm going to go out on a limb and guess Visual Studio Deployment Projects (.VDPROJ).
One of the (many) horrible things about this tool was that it failed to expose the foundational concept of components. Instead it made every file the key file of its own component and hid the existence of the components from you. I say 'was' because Microsoft killed this project type in VS. There are around 50k people complaining on UserVoice to bring this tool back, and I'm guessing that 49,990 of them don't know what a key path is.
Windows Installer has a concept called the component rules, and each component has a keypath. The keypath teaches MSI how to handle repair scenarios. But your tool has to let you control this for it to work.
Windows Installer is functioning exactly the way it's supposed to function. You just aren't up to speed on what that is.
However, if you want to ignore Windows Installer best practices and continue using the tool you use today, the trick is to install the app.config file as a different file. Then have the application copy the file to the real file name on run. Windows Installer won't service what it didn't install.
Several answers have been provided that can work:
You can install the file with a blank GUID. Then you need to remove it on uninstall using the RemoveFile feature. You will also run into issues if you want to replace it during an upgrade; this can be tricky at times (see the WiX sketch after this list).
You can disable the advertised shortcut(s), but this affects too much in my opinion.
Finally you can use my suggestion to install a separate non-advertised shortcut to use to launch the application. Such a shortcut bypasses the self-repair check. It may still be invoked by other means such as missing file associations, COM registration or similar, but those are exception states.
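A minimal WiX sketch of the blank-GUID approach (assuming WiX authoring, which the question doesn't confirm; IDs, names and paths are placeholders):
<DirectoryRef Id="INSTALLFOLDER">
  <!-- Blank GUID: Windows Installer won't track, repair or remove this file. -->
  <Component Id="AppConfig" Guid="">
    <File Id="AppConfigFile" Source="app.config" KeyPath="yes" />
  </Component>
  <!-- Tracked sibling component so the untracked file still gets cleaned up on uninstall. -->
  <Component Id="AppConfigCleanup" Guid="PUT-GUID-HERE">
    <RegistryValue Root="HKCU" Key="Software\MyCompany\MyApp" Name="cleanup" Type="integer" Value="1" KeyPath="yes" />
    <RemoveFile Id="RemoveAppConfig" Name="app.config" On="uninstall" />
  </Component>
</DirectoryRef>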
However, my preference is that an application can start without a config file present, if at all possible. I always suggest a good startup routine with "internal defaults" available. The startup routine should also degrade gracefully if faced with any file system access denied conditions.
Most importantly you should place this config file in the userprofile so you can generate the file on first launch for the user in question. It can even be copied from a read-only copy in the main installation directory.
When you generate a file from internal defaults and put it in a userprofile location, the file will have no interference from Windows Installer at all. The issue that results is how to clean up user data on uninstall. I discussed this with Stefan Kruger (MSI MVP) at one point, and I agree with his notion that user data is indeed user data and should not be automatically dealt with by your installer at all. Leave it installed, and clean it up via system administrator tools if necessary - for example logon scripts.
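That first-launch generation might look like this in Java (a sketch for illustration only; all paths and names are hypothetical):
import java.nio.file.*;

static void ensureUserConfig() throws Exception {
    // Seed the per-user config from a read-only template in the install
    // directory, so Windows Installer never services the live copy.
    Path template = Paths.get("C:\\Program Files\\MyApp", "app.config.template");
    Path userConfig = Paths.get(System.getenv("APPDATA"), "MyApp", "app.config");
    if (!Files.exists(userConfig)) {
        Files.createDirectories(userConfig.getParent());
        Files.copy(template, userConfig);
    }
}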
I just discovered why Maven doesn't work properly on my machine. For some reason it reads the user configuration from the completely wrong location, and I don't understand why. When I run Maven with the -X switch I get the following output at the beginning:
[DEBUG] Reading global settings from D:\dev\maven\active\conf\settings.xml
[DEBUG] Reading user settings from D:\.m2\settings.xml
[DEBUG] Using local repository at D:\dev\maven_repo
Why is it reading user settings from D:\.m2 and not my actual user directory like it normally should? It worked fine on my old computer. Does it have something to do with me having installed Maven on a different drive this time? On my old computer it was installed on the C drive.
Where does it get this D:\.m2 from? How can I make it read the user settings file from the actual default location, %userprofile%\.m2?
Finally figured it out. Found the solution in this blog post. To find the home directory in Java you do this:
System.getProperty("user.home");
Problem is, for some dumb reason, Java isn't using Windows environment variables or anything like that to find this path. It actually uses the parent directory of the Desktop directory. Since I like to keep certain main folders of my user directory on a separate drive (documents, downloads, music, desktop, etc.), I had moved the desktop directory to D:\Desktop. Java then takes that directory, goes one level up, and makes Maven and other Java applications think D:\ is my home directory.
Gotta say, the more I use Java the more I hate it... anyways, hopefully this might help save some hours of head scratching for someone else too.
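As a stopgap, Maven can also be pointed at the right settings file explicitly, or user.home can be overridden for Maven's JVM (hedged examples; the paths assume a standard Windows profile):
mvn --settings %USERPROFILE%\.m2\settings.xml clean install
set MAVEN_OPTS=-Duser.home=%USERPROFILE%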
Update
The original blog post is gone but can be found on the Wayback Machine (the URL has been updated); here's the gist from that post in case that goes too...
The issue: So how does Java play into all of this? Well, Java developers sometimes want to store settings for their applications in a folder within the user's profile directory. It's the Linux way, and Java tends to do things the Linux way. (As mentioned earlier, Windows' "AppData" folder serves the same purpose, with some extra separation for data depending on whether or not it should roam with the user's profile.) For some reason, Java does not use the Windows environment variable to determine the location of the user's profile, but instead accesses a registry key that references the user's desktop folder. It then takes the parent directory of the desktop and assumes that is the user's profile folder (assuming the user makes use of the default setup Windows chooses).
Essentially, when a programmer calls the Java command:
System.getProperty("user.home");
Java uses the following idea to determine where my user profile folder is:
PATH_TO_DESKTOP_FOLDER_AS_SET_IN_THE_REGISTRY + "\..\"
This breaks down when the desktop folder has been modified.
So, with my setup, instead of saving settings at:
c:\users\tim\
Java apps tend to save data to:
t:\tim\
In reality, Java apps should save settings to:
c:\users\tim\AppData\Roaming\
or something like that.
To add insult to injury, the Java apps continue to follow the Linux way and use a period at the beginning of the folder name in an attempt to "hide" the folder (as is done on Linux). For Windows users, this simply ensures these folders are listed first in directory listings. (Hiding a folder on Windows is achieved through setting the hidden attribute for the file.)
It looks like NetBeans has addressed the issue for their application, but the root issue remains an unresolved, low-priority bug. Somehow I'd bet it would get fixed a lot faster if the mechanism for determining the user's home path on Linux were wrong.
I have a set of JVMs configured and WAS components (queues, SIB, etc.) created in one environment (WAS 8.0), and it is all working fine. I need to replicate the same setup on another set of new servers (and potentially another one). How do I replicate all the steps without typing the information in again?
Ideally, you'd make the original changes via scripting and re-run them. An alternative is "properties based configuration" for export/import.
http://www.ibm.com/developerworks/websphere/techjournal/0904_chang/0904_chang.html
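The export/import pair looks roughly like this in wsadmin (a hedged sketch; the properties file name is a placeholder). Extract on the source cell, then apply on the target:
wsadmin -lang jython -c "AdminTask.extractConfigProperties('[-propertiesFileName cellConfig.props]')"
wsadmin -lang jython -c "AdminTask.applyConfigProperties('[-propertiesFileName cellConfig.props]')"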
I got Team Explorer Everywhere so we can use TFS on the Mac Mini we got to test iPhone apps. Since we're using Xcode for PhoneGap, we need to use the command-line program, and it is giving me a lot of grief.
What I've done so far (Listing out for anyone who stumbles on this so they can use it):
-Downloaded the trial (free)
-Set the path using PATH=$PATH:/FOLDERLOCATION
-Accepted EULA and got trial product key... for command line program (tf eula/tf productkey -trial)
-Set up workspace:
tf workspace -new WORKSPACENAME -server:http://SERVERNAME:PORT/FILEPATH -comment:"WORKSPACENAME" && prompted for username -> domain -> password
-Trying to setup the folder path (Fixed):
tf workfold -map SERVERFOLDERPATH LOCALFOLDERPATH -collection:http://SERVERNAME:PORT/FILEPATH -workspace:WORKSPACENAME && prompted for username -> domain -> password
-Make sure I can check out/check in (On hold):...
The error I'm getting right now is "An argument error occurred: First free argument must be a server path." This is what I've been following ever since I got the path set, but I think the versions are different because mine doesn't seem to be set up the same. Any help at all would be appreciated, and I'll keep up with the post as I figure parts out, because there doesn't seem to be much that I can find online about TFS on Macs.
Update: As normal, I'm an idiot. You have to put the options at the end of the command and have the server folder path as the first thing after -map. Now I just need to figure out how to use the damn thing. I'll post any other questions I have and try to get all the correct commands up, for the selfish reason of having them somewhere in case I forget them later.
Update 2: The mapping hasn't worked out as well as I'd hoped; it seems a combination of my unfamiliarity with Unix/Mac file systems and some missing settings is keeping me from using 'tf get' to load all of the test data I was trying to get. I'm planning on trying again after I find out where my boss wants the data saved, and after I can look into something that would save the workspace so it won't say that it can't find the mapped path every time...
It looks like you're setting up your workspace and some working folder mappings just fine, after the edit. If you're having problems doing a tf get after this, then there are some common problems that might be occurring. TFS workspaces can be a little bit opaque and having a better understanding of them can sometimes help you understand where the problem is:
Team Foundation Server requires a workspace to be configured before you can get files out of source control, edit them or check them back in. A workspace basically contains working folder mappings that map your local path(s) to server path(s).
Workspaces are stored on the server and are uniquely identified by your computer's hostname, your username and the workspace's name. A cache of this information for the local host is saved on the client. This implies:
If you remove a workspace on the server, your workstation will be unable to connect.
If you remove the cache, your local computer will not be able to identify the workspace based on working folder mappings until the cache is rebuilt (which happens every time you connect to the server.)
If you change your username or local workstation's name, you cannot access those workspaces.
(Note that very early versions of the Teamprise command line client had certain issues on Mac OS that made identifying the local workstation name difficult. This is fixed, however, in Team Explorer Everywhere.)
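If the local cache does get out of sync, querying the server for your workspaces is usually enough to rebuild it (a hedged suggestion; the collection URL placeholder matches the commands above):
tf workspaces -collection:http://SERVERNAME:PORT/FILEPATH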
Because you can have multiple workspaces for a single server on a single workstation, you can't always simply provide server paths to tf commands, since server paths are ambiguous. ($/ exists in every workspace, for example.) So the command line client resolves paths based on the current working directory and/or the arguments provided. Meaning that you can run tf get foo.txt if you're in a working folder, or you can run tf get /tmp/foo.txt if /tmp is mapped.
One more point - the configuration data for Team Explorer Everywhere is shared between the TFS plug-in for Eclipse and the command line client. So if you're more comfortable using a GUI to set up your workspace(s), you can do that and then use the CLC as you see fit. You don't need to be a Java programmer to use Eclipse - simply download Eclipse and install the TFS plug-in for Eclipse into it, and select Window > Open Perspective > Team Foundation Server Exploring. After that, you'll have the full GUI Team Explorer experience and this perspective will be restored when you open Eclipse, so you won't even need to worry about the Java IDE bits if you don't want to.