Talend - System env variables not reflecting without restart

I'm using system environment variables to parameterize jobs in Talend, but every time I change a value, the change isn't picked up unless I restart Talend. Is there any workaround? I don't want to use Context Groups or Implicit Context Load. I'm using Talend Open Studio (the free edition). Is this any different in the Enterprise version?

This has to do with how Talend handles environment variables: Talend reads them once at startup and stores them. There is a good answer here which explains this behavior in more detail for Java (Talend is built on Java). It also lists some tricks for getting at the current values, depending on your OS.
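For example, on Windows you can read the current value straight from the registry from inside a running job (say, in a tJava component), bypassing the JVM's copy that was captured at startup. A minimal sketch, assuming a user-level variable with the hypothetical name MY_JOB_PARAM; system-level variables live under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment instead:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class FreshEnv {
        // Read the current value of a user environment variable via "reg query",
        // so changes made after the JVM started are visible (System.getenv()
        // would still return the value cached at startup).
        public static String read(String name) throws Exception {
            Process p = new ProcessBuilder(
                    "reg", "query", "HKCU\\Environment", "/v", name).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    // Matching output line looks like: <name>  REG_SZ  <value>
                    if (line.startsWith(name)) {
                        String[] parts = line.split("\\s{2,}", 3);
                        if (parts.length == 3) return parts[2];
                    }
                }
            }
            return null; // variable not set for the current user
        }

        public static void main(String[] args) throws Exception {
            System.out.println(read("MY_JOB_PARAM")); // hypothetical variable
        }
    }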

Related

Modify user and system variables in Visual Studio 2017 installer

I want to append the installation folder of my program to the Path variable, for both the user and system variables.
I followed michaelmoo's instructions:
https://stackoverflow.com/a/21390793/9678802
The problem is that the existing value of the Path variable gets removed.
Digression: Adding to the Path involves some security risks, some performance issues, and can cause application interference - and probably a few other things. It is best avoided in general. It is a "known risk" avoided by deployment professionals - if they can help it.
The concept of App Paths is a (partial) alternative to updating the Path environment variable. It allows your application to be started from the Start => Run dialog, but it does not work from the command prompt.
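For illustration, an App Paths registration can be as small as two registry values; the application and folder names below are hypothetical:

    rem Register myapp.exe so Start => Run (and ShellExecute) can find it
    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\myapp.exe" ^
        /ve /t REG_SZ /d "C:\Program Files\MyApp\myapp.exe" /f

    rem Optional Path value: folders prepended to the Path for that process only
    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\myapp.exe" ^
        /v Path /t REG_SZ /d "C:\Program Files\MyApp" /f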
Warning: Ouch, that answer is very bad indeed (with apologies to the author, who clearly tried to help others). That procedure should not be followed in any shape or form! It is so dangerous that I have seen people sent out the door for far less. Wiping out a whole developer team's environment Path with a package deployment will cause drama - that you can be sure of. The warning really needs to be that strong in this case. I have seen it happen, even in packages made by experienced professionals.
Built-In Support: As far as I am concerned, the correct answer in the above "thread" is this one. Windows Installer has built-in support for adding environment variables that takes care of all merging and update issues - and it even has rollback support, meaning your environment variable will be restored to its original state should the package installation fail. This built-in feature is "must use" functionality.
Deployment Tool: So the built-in feature has to be used, but how, when the tool does not support it? The best option is to get a "real deployment tool", especially since there are several further problems with the Visual Studio Installer Projects. (Note: this is not pitching products, it is warning people about serious pitfalls that will cause real problems in almost all cases - what product you choose is up to you, obviously - but the VS Installer Project just isn't a complete solution.)
WiX: Updating the Path variable using WiX is simple enough; see the documentation for the Environment element.
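A minimal sketch of what that can look like (component ID, directory property and GUID are hypothetical placeholders):

    <Component Id="PathComponent" Guid="PUT-GUID-HERE" KeyPath="yes">
      <CreateFolder />
      <!-- Appends [INSTALLFOLDER] to the system PATH. Permanent="no" means the
           change is rolled back on a failed install and removed on uninstall. -->
      <Environment Id="UpdatePath"
                   Name="PATH"
                   Value="[INSTALLFOLDER]"
                   Part="last"
                   Action="set"
                   Permanent="no"
                   System="yes" />
    </Component>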
Orca: Though it is possible to "post-process" your compiled MSI and create the required entry in the Environment table, I would recommend that you use a proper tool instead, one that has been tested and designed to help you succeed. One wrong comma or one wrong star in the Environment table and you get completely wrong behavior.
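For reference, a correct append-to-Path row in the Environment table looks roughly like this (component name hypothetical): the * prefix targets the system variable, = sets it, - removes the change on uninstall, and [~] stands for the variable's existing value:

    Environment   Name      Value                  Component_
    UpdatePath    *=-PATH   [~];[INSTALLFOLDER]    PathComponent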

Configuring deployment of SSAS multidimensional cube to different environments

I have seen this answer, Deploying SSAS cube to environments, which describes how deployment of an SSAS cube can be automated. What I can't see, however, is how to configure the project so it can deploy to multiple environments, i.e. Development, UAT and Production, where each is on a different server and has a different data source.
I can see in the Visual Studio SSDT Analysis Services project that the configuration manager can be used to set multiple deployment configurations in the (user-specific!) .dwproj.user file.
This allows me to set multiple SSAS deployment locations, and when building for DEV, UAT, etc., the .deploymenttargets file gets set correctly.
What it doesn't do is let me set a different data source automatically during a build, to automatically change the .configsettings file.
Does anyone know how to do this?
I believe the dwproj file holds the different SQL connection strings per environment as described here:
http://www.artisconsulting.com/blogs/greggalloway/2008/3/19/analysis-services-project-configurations
I have seen a bug where, when you first set this up, the .dwproj file is never flagged as dirty, so those connection-string changes are never saved to disk. If you run into this, add a new Role object and then immediately delete it; that marks the .dwproj file as needing to be saved.

Deploying/Re-Deploying SAS job in DIS via Script

Is it possible to deploy or redeploy a SAS Data Integration Studio job via a shell script?
Also, is there a way to create its SPK file via a script?
You can deploy DI jobs from the command line; see here:
http://support.sas.com/documentation/cdl/en/etlug/65807/HTML/default/viewer.htm#p1jxhqhaz10gj2n1pyr0hbzozv2f.htm
I have imported and exported objects into SAS DIS via shell script using the SAS ExportPackage utility. I personally find it far more convenient than the GUI method. However, for it to work you need an X Windows environment; I used Xming for this.
As for deploying jobs, I have never tried it that way.
To redeploy jobs, DI Studio versions 4.901 and higher have a DeployJobs tool which is designed to perform this function: read more in the SAS documentation. It is available on the server. Older versions had a similar but much more restrictive client-side tool based on Ant.
Also see Paper 1067-2017, An Introduction to the Improved SAS® Data Integration Studio Batch Deployment Utility on UNIX by Jeff Dyson, The Financial Risk Group, which gives a run-through of how to use it.
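As a rough sketch of both steps from a shell (host, port, credentials, paths and object names are all hypothetical, and the exact options should be verified against the documentation linked above):

    #!/bin/sh
    # Redeploy a job with the DI Studio batch deployment utility (4.901+)
    ./DeployJobs -host meta.example.com -port 8561 \
        -user sasdemo -password secret \
        -deploymentdir /sas/deployedjobs \
        -objects "/Shared Data/Jobs/MyJob"

    # Create an SPK package for the same job with the ExportPackage utility
    ./ExportPackage -host meta.example.com -port 8561 \
        -user sasdemo -password secret \
        -package /tmp/MyJob.spk \
        -objects "/Shared Data/Jobs/MyJob(Job)"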

Replicating WAS components without redoing

I have a set of JVMs configured and WAS components (queues, SIB, etc.) created in one environment (WAS 8.0), and it is all working fine. I need to replicate the same setup on another set of new servers (and potentially another one after that). How do I replicate all the steps without typing the information in again?
Ideally, you'd have made the original changes via scripting and could simply re-run the scripts. An alternative is "properties-based configuration" for export/import.
http://www.ibm.com/developerworks/websphere/techjournal/0904_chang/0904_chang.html
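For instance, properties-based configuration lets you dump an existing cell's configuration to a properties file and apply it to a new one via wsadmin; a minimal sketch (script and file names hypothetical):

    # Run with: wsadmin -lang jython -f replicate.py
    # On the source cell: extract the configuration to a properties file
    AdminTask.extractConfigProperties('[-propertiesFileName /tmp/sourceCell.props]')

    # On the target cell: apply the extracted properties, then persist them
    AdminTask.applyConfigProperties('[-propertiesFileName /tmp/sourceCell.props]')
    AdminConfig.save()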

Seriously, overriding the DefaultDataPath in the sqlcmdvars for a SQL Database project deployment

I have a SQL 2008 database project in Visual Studio 2010 that is synced on a regular basis from a schema comparison during the development phase. This same project is also under TFS source control. I have two environments, Debug and Production. Each environment is a single machine that runs both IIS and SQL Server. The production environment, however, has different data and log paths for the database (D:\Data\ and E:\Logs\) versus my development server's standard c:\program files\sql....\data.
What I'm trying to do is set up the way I transact my deployments from the Debug to Production environments. I've got WebDeploy 2.1 set up, and I build my deployment packages in Visual Studio via the right-click context menu on the website project. I want to manually copy deployment packages to the production server via RDP, so there are no over-the-wire concerns here. The deployment package settings are set up to include all databases configured in the Package/Publish SQL tab. In the Package/Publish SQL tab I don't pull data/schema from an existing database, because I want to deploy from the SQL database project instead. So I just point to the pre-generated .sql script file located in my database project's /sql/release folder. To top it off, I generate the .sql script in the post-build events of the SQL project via VSDBCMD.exe /dd:- /a:Deploy /manifest:... so that a simple solution rebuild-all, then website project deploy, ensures I always have the latest .sql script in the deployment package.
This is great and all, but I have a major problem here I can't seem to overcome. It has to do with the database data and log file paths being different between the Debug and Production environments. I actually receive an exception during the WebDeploy in IIS on the production server that says it can't find the c:\program files...\MyDatabase.mdf file. And what's scary is that after this exception, the entire database is deleted - the empty database I create right before doing the deployment. This happened both times I tried messing around with it. I'm not sure how I feel about that, but I'm hoping I can find a reliable solution to this.
I have been feverishly looking for a way to change the paths during a deployment and have found many places that mention changing the paths in the *.sqlfiles.sql files under Schema Objects\Database level objects\Storage\Files, because the path it tries to deploy to is the one specified there, written in by the schema comparisons against the Debug SQL Server database. Changing the paths here works temporarily, until I do my next schema comparison and write, at which point the .sqlfiles.sql files get overwritten with the info from the Debug database again. And I don't want to have to remember to never update these files during a schema comparison, because any mistake has the potential to delete the production database.
I think my salvation lies in my Release.sqlcmdvars file. It's a tease, actually: I can see a place where I "could" type the default database path, but it appears to be a read-only field, as it mentions "Location where database files are created by default (set when you deploy)." It would be grand if I could specify the paths here. Is there any way at all to specify a path in a variable here that would override the paths from the *.sqlfiles.sql files?
In the solution where I work, there are two custom variables in the sqlcmdvars called Path1 and Path2 that I thought were reserved names that do exactly that. However, this doesn't work in my solution; the difference between the two solutions is that the other one gets deployed via a TFS build controller. The TFS build controller route isn't really an option, because I opted for a third-party source control service to save money.
Any help with this would be great. I have even gone so far as to create separate *.sqlfiles.sql files for Debug and Release and configured the dbproj file to use one or the other depending on the configuration, but this doesn't seem to work either. Also, using the custom PATH1 variable in the sqlfiles.sql file, like FILENAME = '$(PATH1)\Cameleon_log.ldf', doesn't work either. I seriously think it shouldn't be this difficult. Am I missing something simple here?
Thanks!
Okay, this was an exercise in futility. Apparently, without syncing with the target database during script generation, the script is exactly what is needed to build the database from scratch. Even if I could override the file paths, the deployment would complain about database objects already existing. I needed to specify the connection string of the target database in the deploy settings, so a comparison is done during script generation and only the relevant differences are added to the script. I really wanted to avoid exposing my production SQL Server to the outside world, but it is what it is. No need to override the paths anymore, because it looks like the database file paths are conveniently ignored during this comparison!
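In other words, pointing VSDBCMD at the live target makes it generate a differential deployment instead of a from-scratch build. A sketch of that deploy command, with a hypothetical server and database name (/dd:+ deploys directly rather than only generating the script, as /dd:- does):

    rem Compare against the live target and deploy only the differences
    VSDBCMD.exe /a:Deploy /dd:+ ^
        /cs:"Data Source=PRODSERVER;Integrated Security=True" ^
        /p:TargetDatabase=MyDatabase ^
        /manifest:MyDatabase.deploymanifest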
