UDS (formerly known as Forte 4GL) is the platform our current system runs on. I remember finding a forum post that documented a flag for deploying it on a Veritas cluster. However, now that we have modern hardware coming in, I can't find that note anywhere on Google.
Finally found it (after hours of searching) at http://sundusum.com/gpunkt/knowledgebase/articles/7240.html
"To allow for configurations like clustered environments, a test for a
new environment variable, FORTE_CM_NO_CANONICAL_NAMECHK, was added in
Forte release 3M. If that environment variable is set, the check for
the match between FORTE_NS_ADDRESS and the machine name is skipped.
"The FORTE_CM_NO_CANONICAL_NAMECHK can be set in the fortedef files on
machines where running the environment manager is allowed. Setting that
environment variable on all nodes should be avoided, since that would
allow the environment manager to be started on any node which can lead
to problems that canonical name check is designed to prevent."
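The quoted behavior amounts to a simple gate: skip the check when the variable is set, otherwise compare FORTE_NS_ADDRESS against the local machine name. Forte's actual implementation isn't public, so the following is only an illustrative sketch in Python; the function name and the host:port parsing are assumptions.

```python
import os
import socket

def canonical_name_check_passes():
    """Illustrative stand-in for the check described above: the
    environment manager only starts if FORTE_NS_ADDRESS names this
    machine, unless FORTE_CM_NO_CANONICAL_NAMECHK is set."""
    if os.environ.get("FORTE_CM_NO_CANONICAL_NAMECHK"):
        # Check skipped, e.g. on cluster nodes where the service
        # address floats between machines.
        return True
    # Assumption: FORTE_NS_ADDRESS looks like "host:port".
    ns_address = os.environ.get("FORTE_NS_ADDRESS", "")
    host = ns_address.split(":")[0]
    return host.lower() == socket.gethostname().lower()
```

This also makes the warning in the quote concrete: setting the variable everywhere turns the function into an unconditional "yes" on every node.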
Related
I'm trying to learn ML on GCP. Some of the Qwiklabs and Tutorials start with Cloud Shell to setup things like env variables and install Python packages, while others start by opening an SSH terminal into a VM to do those preliminary steps.
I can't really tell the difference between the two approaches, other than the fact that in the second case a VM needs to be provisioned first. Presumably, when you use Cloud Shell some sort of VM instance is being provisioned for you behind the scenes anyway.
So how are the two approaches different?
Cloud Shell is a product designed to give you a large number of preconfigured tools that are kept updated, as well as being quick to start, accessible from the UI, and free. Basically, it's a quick way to get an interactive shell. You can learn more about this environment from its documentation.
There are also limits to Cloud Shell -- you can only use it for 60 hours a week, if you go idle your session is terminated, and there is only 5 GB of storage. It is also only an f1-micro instance, IIRC. So while it is provisioned for you (and free!), it isn't really useful for anything other than an interactive shell.
On the other hand, SSHing into a VM places you directly in a terminal on that VM, much like on any specific host -- you only have whatever tools the VM's image provides (and many images are pretty bare-bones). But you're now in a terminal on the host that is likely executing the code you want to work with, and it has as much CPU and RAM as you provisioned for that instance.
As far as guides pointing you to one or the other -- that's really up to them, but I suspect they'd point client / tool type work to Cloud Shell (since it's easy and a reasonably standard environment, which can even be scripted with tutorials), while they'd probably point instructions for installing software for production use at a "real" VM.
I have been developing a simulation model in my local environment as a way to learn how to code (I am using PHP). In my current environment it takes around 30 seconds to run one simulation. I was expecting this to be much quicker. My theory is that the local environment has limitations due to shared resources that a standard web server would not have. I used the Laragon installer with its default settings to configure the local environment. Does this theory hold any water?
Asked more simply: does a standard web server with default settings compute faster than a local environment with default settings?
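One way to test the theory is to time the same pure CPU-bound loop in both environments; if the numbers are similar, the bottleneck is the simulation code rather than the machine. A minimal sketch in Python (the same idea works in PHP with microtime(true); the loop is just a stand-in workload):

```python
import time

def simulate(n=1_000_000):
    # Stand-in for one CPU-bound simulation step.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
simulate()
elapsed = time.perf_counter() - start
print(f"one run took {elapsed:.3f}s")
```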
I am looking for a way to be able to do the following:
Create an instance of Windows with installed prerequisites and configuration
An isolated environment would be recommended (As in it will not modify the existing configuration on local machine only in that VM-like environment)
Ability to use the internet within that environment
Using it sort of like a "check-point" (Start working on it, doing something wrong and being able to start once again from the instance that we created)
Ability to share the environment
Possibility of creating multiple different environments
Low disk usage if possible
Fast deployment of environment on local machine
I have looked into Docker, which seems pretty good for what I need, but I want to investigate other options as well because it requires Windows 10 x64 Enterprise.
Something that works on Windows 7/Server/8/8.1 would be nice.
I would also love to get arguments on why X option is better than Y option.
Thanks in advance!
If you want a completely separate environment, creating a virtual machine is worth considering.
There are products from VMware and Oracle to create your virtual machine. I have been using Oracle VirtualBox (Oracle's virtual machine software) for some time now and find it pretty useful.
A virtual machine addresses all your concerns:
Create an instance of Windows with installed prerequisites and configuration - A virtual machine runs on top of your installed OS without modifying the current installation.
An isolated environment would be recommended (As in it will not modify the existing configuration on local machine only in that VM-like environment) - It runs completely isolated, like a separate machine.
Ability to use the internet within that environment - You can use the internet inside a virtual machine.
Using it sort of like a "check-point" (Start working on it, doing something wrong and being able to start once again from the instance that we created) - You can take a snapshot to save the state; the next time you start the VM, it resumes from that snapshot.
Ability to share the environment - Export a created VM and it can be reused elsewhere.
Possibility of creating multiple different environments - You can run multiple VMs on your machine; configure disk usage and RAM for each accordingly.
Low disk usage if possible - Configurable while creating a virtual machine.
Fast deployment of environment on local machine - Yes, though you'll need the .iso image of your operating system.
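The snapshot, restore, and export steps above are also scriptable through VirtualBox's VBoxManage CLI. As a sketch, here are helpers that build (but do not run) those command lines in Python; the VM name "DevEnv" and the snapshot names in the example are made up, while the subcommands themselves come from the VirtualBox manual:

```python
# Build VBoxManage command lines for the workflow described above.
def take_snapshot_cmd(vm, snapshot):
    # VBoxManage snapshot <vm> take <name>
    return ["VBoxManage", "snapshot", vm, "take", snapshot]

def restore_snapshot_cmd(vm, snapshot):
    # VBoxManage snapshot <vm> restore <name>
    return ["VBoxManage", "snapshot", vm, "restore", snapshot]

def export_vm_cmd(vm, ova_path):
    # VBoxManage export <vm> -o <file.ova>
    return ["VBoxManage", "export", vm, "-o", ova_path]

# To actually run one (requires VirtualBox installed):
#   import subprocess
#   subprocess.run(take_snapshot_cmd("DevEnv", "clean-install"), check=True)
```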
When installing a specific software package (cheops) via SCCM to German PCs, we are facing an issue with the language of the operating system.
It seems that the install package is searching for a folder named "Program Files".
On German-staged PCs, where the operating system is usually in German, that folder is called "Programme".
Is it possible that this might cause a problem?
How can we fix this?
A couple of ideas:
Complain to the software vendor. They should be using environment variables rather than a hard-coded path. Sigh... I suspect, though, that you don't have much control over that.
Many setup programs allow parameters to be passed via the command line. Perhaps you could pass a German-specific path for those users.
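To illustrate the first point: the localized folder name never needs to be hard-coded, because Windows publishes the real path in the ProgramFiles environment variable. A small Python sketch (the fallback value is only a default for systems where the variable is absent):

```python
import os

# On a German install this resolves to the localized directory
# (e.g. C:\Programme on older systems) with no language-specific logic.
program_files = os.environ.get("ProgramFiles", r"C:\Program Files")
print(program_files)
```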
What is the meaning of these Windows Environment variables:
HOMEDRIVE,
HOMEPATH,
HOMESHARE,
and USERPROFILE?
Who sets them? When?
Who uses them, and for what?
How does the configuration of a Samba server modify these variables?
USERPROFILE is set by userenv!LoadUserProfileW which is called when, well, loading the user's profile (the HKEY_USERS\<sid> hive).
This typically happens the first time a process is started for the user.
If you specifically arranged not to load the profile (e.g. with /noprofile for runas) then the process is run in the Default User profile which still has this variable set - since the moment it was loaded at system's startup.
HOMEDRIVE, HOMEPATH and HOMESHARE (as well as several other variables) are set by shell32!RegenerateUserEnvironment which is called on Explorer initialization [1]. They are placed in the (volatile) HKCU\Volatile Environment key which, being volatile, persists until the profile's unload.
Consequently, they are only set when the user logs into their desktop session. NOT for secondary logons or services.
No wonder people prefer USERPROFILE nowadays.
For HOMEPATH to be set, SYSTEM must have permissions for the profile's directory (they are initially set, of course, but may vanish when e.g. playing with Cygwin's chmod).
[1] The code also sets a few variables that are already set by userenv. This suggests that this is older code that persists since NT4 days. Difference between profile and home path - Server Fault confirms that.
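A quick way to see all of this on a given machine is simply to dump the variables. Python is used here purely for illustration; any language that can read the environment works the same way:

```python
import os

# On a Windows desktop logon all four should be set; in a service or
# secondary-logon context the HOME* ones may be missing, as described.
for name in ("HOMEDRIVE", "HOMEPATH", "HOMESHARE", "USERPROFILE"):
    print(name, "=", os.environ.get(name, "<not set>"))

# HOMEDRIVE and HOMEPATH are meant to be concatenated to form the
# home path (e.g. "C:" + "\Users\alice").
home = os.environ.get("HOMEDRIVE", "") + os.environ.get("HOMEPATH", "")
```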
HOMEDRIVE/HOMEPATH is where the user's personal files are: downloads, music, documents, etc.
HOMESHARE is used instead of HOMEDRIVE if the home directory uses UNC paths.
USERPROFILE is used to store the user's application and OS configuration files and personalization settings. It includes both local and roaming (Active Directory) folders. It seems like people favor using this more than HOMEPATH nowadays.
It's important to note that although HOMEDRIVE/HOMEPATH is often the same path as USERPROFILE, it's not always the case.
I don't think Samba would modify these. It might make use of them to provide an initial (home) directory. Active Directory may change them though.
References:
[dead link] Environment Variables in Windows NT
Where Should I Store my Data and Configuration Files if I Target Multiple OS Versions?
If you go to the Run box and type any of the above, like this:
%HOMEPATH%
then it will go to the environment path that is set on your machine. It's useful when writing VB scripts and things like that, where you want to perform a task on the user's profile area, for example.
Hope this helps
Those are all set on login, and they are, as SocialAddict said, very useful in scripts when you need to perform an action on different systems.
I'm not too clear on your other question; a Samba server shouldn't care about those variables.
See http://vlaurie.com/computers2/Articles/environment.htm for a detailed explanation.