How to export Server Group configurations? - azure-data-studio

How can I export to file my Server Group configurations in Azure Data Studio?
I've created a Workspace file in my application solution that contains useful queries for troubleshooting, etc. I'd like to also include connection configurations for servers relevant to the solution so that any developer can use the Workspace queries quickly without having to ask which servers to run them on.

You can copy your Server Group configurations from User Settings. You will find them under datasource.connectionGroups, with the connection details under datasource.connections (if you have more than one group, match connections to groups by group id). Any user can copy and paste them into their own user settings; only the password is not included, so everyone needs to add that afterwards. I would suggest creating a JSON file with those settings and adding the steps to a readme.md file.
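For reference, the relevant blocks in settings.json look roughly like this (a hand-written sketch, not copied from a real installation; exact property names can vary between Azure Data Studio versions, and the group name, server, and user values are placeholders):
"datasource.connectionGroups": [
    {
        "name": "Solution Servers",
        "id": "11111111-1111-1111-1111-111111111111"
    }
],
"datasource.connections": [
    {
        "options": {
            "server": "solution-sql.example.com",
            "database": "master",
            "authenticationType": "SqlLogin",
            "user": "dev_user",
            "password": ""
        },
        "groupId": "11111111-1111-1111-1111-111111111111",
        "providerName": "MSSQL"
    }
]
The empty password is deliberate: as noted above, each developer fills in their own after pasting these blocks into their user settings.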

All connections and groups are stored in the settings.json file located at:
%AppData%\azuredatastudio\User\
For me specifically the path looked like this:
C:\Users\MyLogin123\AppData\Roaming\azuredatastudio\User

Related

Configuring settings for last participant support wsadmin/websphere

Recently I've run into an issue configuring Last Participant Support on a deployed application. I found an old post about it:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found how to do it, but I'm not able to find how to do it on the application itself with jython or wsadmin commands. So the post above doesn't help me. Any ideas?
There is no command assistance available for the action of changing last participant support from the admin console, which typically implies there is no scripting command associated with the action. And there doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repo changes made as a result of the admin console action, the IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi" for an application is created/modified by the action.
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes wouldn't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the knowledge that redeploying the app will overwrite your changes. The changes can be made by doing something along the lines of:
1) create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) In wsadmin or your jython script, do the following (note that in this example the xmi file you created is in the current directory; if not, include the full path to it in the createDocument command):
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) restart the server
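As a convenience, steps 2 and 3 can be collected into a small wsadmin (jython) snippet; this is only a sketch of the commands already shown above, with the cell and application names as placeholders you need to fill in:
# Placeholders: replace with your own cell and application names.
cellName = "<your_cell_name>"
appName = "<your_app_name>"
# Target location of the PME descriptor inside the deployment.
deployUri = "cells/%s/applications/%s.ear/deployments/%s/META-INF/ibm-application-ext-pme.xmi" % (cellName, appName, appName)
# The second argument is the local file from step 1 (use a full path if it is not in the current directory).
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
# Then restart the server (step 3) for the change to take effect.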

How to create .rdp file on Mac OS that allows auto-login

I'm working on a tool that generates .rdp files and then invokes them using Microsoft RDP Client. This tool is running on Mac OS.
Everything works well; the only problem is that I can't figure out how to generate the 'password 51:b' field properly. On Windows this can be done easily by using the CryptProtectData method from the Crypt32.dll library. How can I do the same on Mac OS?
Another option could be to use the "rdp://" URL scheme, but it doesn't seem to allow passing a password that way.
So the question is: how can I implement auto-login on Mac OS if I use a third-party RDP client?
As far as I know you can't. You can, however, create a "User Account" and a Server configuration and add both to the client. The connection will then be visible in the main window and you just need to double-click it.
To do so, you need to add the password to the Keychain; use /usr/bin/security to do that from a script. It needs to be a generic-password and saved under com.microsoft.rdc.macos. Also be sure to generate an ID according to the RDP client's scheme, like BFF77777-7777-7777-7777-777777777777.
You also need to set the permissions to read that key using /usr/bin/security and set-generic-password-partition-list, specifying the right teamid (UBF8T346G9) and, again, com.microsoft.rdc.macos. You need the admin password to do this step.
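A sketch of those two keychain steps with /usr/bin/security is below. The ID, label, and password values are placeholders, and the mapping of the generated ID to the account field (with com.microsoft.rdc.macos as the service) is my assumption, so verify it against an entry the RDP client created itself:
/usr/bin/security add-generic-password -a "BFF77777-7777-7777-7777-777777777777" -s "com.microsoft.rdc.macos" -l "myserver.example.com" -w "thePassword"
/usr/bin/security set-generic-password-partition-list -S "teamid:UBF8T346G9" -a "BFF77777-7777-7777-7777-777777777777" -s "com.microsoft.rdc.macos"
The second command prompts for a password, which is the step mentioned above that requires admin rights.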
Then you can alter the RDP client's config file, which is a .sqlite file located at /Users/$(whoami)/Library/Containers/com.microsoft.rdc.macos/Data/Library/Application Support/com.microsoft.rdc.macos/com.microsoft.rdc.application-data.sqlite. Add the user configuration to the ZCREDENTIALENTITY table and make sure the ZID matches the one added to the keychain.
To add a server configuration you need to alter the ZBOOKMARKENTITY table. Add a configuration by hand using the UI and look at the table to get a feeling for how it needs to be set up. Basically, you link your user configuration with the server configuration by making sure that ZCREDENTIAL in ZBOOKMARKENTITY matches Z_PK in ZCREDENTIALENTITY of your user configuration.
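To sanity-check that linkage, a read-only query against the same sqlite file can help; this sketch only uses the tables and columns mentioned above, and the database path is the one given earlier:
sqlite3 "/Users/$(whoami)/Library/Containers/com.microsoft.rdc.macos/Data/Library/Application Support/com.microsoft.rdc.macos/com.microsoft.rdc.application-data.sqlite" "SELECT b.Z_PK AS bookmark, c.Z_PK AS credential, c.ZID FROM ZBOOKMARKENTITY b JOIN ZCREDENTIALENTITY c ON b.ZCREDENTIAL = c.Z_PK;"
Each returned row pairs a bookmark (server configuration) with the credential it points to; the ZID column should match the ID used for the keychain entry.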
I know the answer is a bit late, but it may give you a starting point. This will, however, not fully automate the process; you will still need to go to the UI and double-click the connection you want to use.

How do I give a service running as SYSTEM shared directory network access over EC2 hosts running Windows Server 2012?

The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PSTOOLS and use the command:
PSEXEC -i -s cmd.exe
I can test commands that will be given by TeamCity. Again, it is a service being run as SYSTEM which, I need to emphasize, cannot be changed to a normal User due to other issues we have when using TeamCity agents under the User account type.
After much searching I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain but after doing so, I still face access denied errors. I am probably missing something important and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
This is not an answer for the permissions issue, but rather a way to avoid it. (Wanted to add this as a comment, but StackOverflow won't let me - weird.)
The shared network drive is used only for the remote worker discovery. If you have a fixed list of workers, instead of using the worker discovery, you can specify them explicitly in your config file as follows:
Settings
{
    .Workers =
    {
        'hostname1'     // specify a hostname
        'hostname2'
        '192.168.0.10'  // or an IP address
    }
    ... // the other stuff that goes here
}
This functionality is not documented, as to date all users have wanted the automatic worker discovery. It is fine to use, however, and if it is indeed useful, it can be elevated to a supported feature with just a documentation update.

What's the difference between the 'installedApps' and 'applications' folders in WebSphere Application Server?

Normally, after we create both the DMGR and Node profiles, we have an applications folder under $DMGRPROFILE_HOME/config/cells/$cellName and an installedApps folder under $NODEPROFILE_HOME/.
All the applications to be deployed are put into the installedApps folder, and we can also see the same contents under the applications folder above. So my question is: what's the difference between them? Why does WebSphere Application Server put such apps into the applications folder in addition to installedApps?
What's more, if I need to update one file, such as web.xml, in my deployed application's war file, do I have to update the file under both paths above?
Thanks in advance
The applications path under the Dmgr profile contains the files that have been deployed in the admin console.
The installedApps path under the Node profile contains those files after they've been synchronized out to each node. In most cases, this will be immediately after the deployment as well.
Deploying a single file
The safest practice would be to deploy the single file using the admin console, rather than editing it in place on the filesystem.
The downside is that you have to enter the entire path to the server-deployed file name. e.g. webapp.war/WEB-INF/classes/com/yourcompany/project/package1/YourClass.class.
If you have a typo, it will deploy, but not where you wanted, and you might not notice it until your expected changes didn't take effect.
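If you'd rather script this than click through the console, a wsadmin (jython) sketch using AdminApp.update with the 'file' content type does the same single-file update; the application name and the local file path here are placeholders, and the content URI is the example path from above:
# Update one file inside the deployed application, then persist the change.
AdminApp.update('YourAppName', 'file',
    '[-operation update -contents /tmp/YourClass.class -contenturi webapp.war/WEB-INF/classes/com/yourcompany/project/package1/YourClass.class]')
AdminConfig.save()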
Direct edit on the filesystem
That said, it is faster to edit on the filesystem, so we do that at times, especially for things like JSPs. To do that, you need to edit the copy under the Node's installedApps directory. (The location is controlled by the WebSphere variable APP_INSTALL_ROOT, which defaults to ${USER_INSTALL_ROOT}/installedApps.)
web.xml
web.xml, however, is different. If you edit that in installedApps, the changes won't take effect. Instead, you'll need to edit the one in a path something like:
$NODEPROFILE_HOME/config/cells/cellName/applications/earName.ear/deployments/applicationName/warName.war/WEB-INF
Or do it in the $DMGRPROFILE_HOME and then synchronize the node (either through syncNode.sh or through the admin console).
Either way, you'll then need to restart the enterprise application.
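If you want to script that restart as well, one common way (not part of the original answer; the node, server, and application names are placeholders) is to invoke the server's ApplicationManager MBean from wsadmin:
# Locate the ApplicationManager MBean of the server running the application.
appMgr = AdminControl.queryNames('type=ApplicationManager,node=yourNode,process=yourServer,*')
# Stop and start the enterprise application to pick up the change.
AdminControl.invoke(appMgr, 'stopApplication', 'yourAppName')
AdminControl.invoke(appMgr, 'startApplication', 'yourAppName')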

What is my Eclipse-RCP application storing in $HOME/.eclipse, and how do I prevent it?

When I run my Eclipse RCP application, it creates a whole lot of directories in my $HOME/.eclipse directory. What is this?
I don't want the files there; how can I prevent them from being created there? The rationale for this: the application must run very clean and only leave files in one specific location (not $HOME/.eclipse).
I figured it was controlled by osgi.instance.area, so I tried setting this to different values (a directory, @none, @noDefault, etc.) but couldn't stop the application from creating directories in $HOME/.eclipse. -data and other arguments work as expected.
On my system the only thing that is stored in .eclipse is the Equinox Secure Storage. Here is the blurb on the doc page for that:
By default, secure storage is located in your home directory. On Windows that typically resolves to "C:\Documents and Settings\<user name>\.eclipse\org.eclipse.equinox.security". This location is selected to allow multiple Eclipse-based applications to share the same secure storage.
If you would like to modify the location of the default secure storage, you can use the "-eclipse.keyring <file path>" runtime option. The <file path> is a path to the file which is used to persist the secure storage data.
Here is the online reference.
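For example, launching the RCP application with something like the following (the launcher name and keyring path are placeholders, not from the original question) keeps the secure storage file out of $HOME/.eclipse:
./myrcpapp -eclipse.keyring /opt/myrcpapp/data/secure_storage
The same argument can also go in the application's .ini file, one token per line, before -vmargs.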
