Is a Spring DriverManagerDataSource password stored in plaintext?

You can set the user name and password on an org.springframework.jdbc.datasource.DriverManagerDataSource with:
dataSource.setUsername("johnsmith");
dataSource.setPassword("myplaintextpassword");
My question is - if I were to create an object this way, and then examine the memory of the machine this is running on, could I see the plaintext password?
If so, how can one securely create a database connection using a passed in password?

Sure. Create a complete heap dump of the process / JVM and you will be able to see it.
I don't know what operating system your application runs on, or whether it is standalone or runs in a container like Tomcat, but this is exactly the reason why processes need to be separated.
You have to make sure that the file or JNDI configuration your password is stored in is only accessible by those processes / users that absolutely need access to it. An additional layer of encryption will help too. There will always be someone (like root on Linux) who can read every process's memory, but your job is to keep the group of people able to do this as small as possible.
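For illustration, a minimal sketch of the JNDI route with Spring, assuming a container-managed DataSource; the JNDI name java:comp/env/jdbc/mydb is a hypothetical placeholder that depends on how your container (e.g. Tomcat) defines the resource:
// Minimal sketch: look up a container-managed DataSource instead of hard-coding
// credentials. The container owns the real user name and password in its own
// configuration; the JNDI name below is a placeholder. This lookup only works
// when the code runs inside the container that provides the JNDI context.
import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;
public class JndiDataSourceExample {
    public static void main(String[] args) throws Exception {
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        DataSource dataSource = lookup.getDataSource("java:comp/env/jdbc/mydb");
        try (Connection con = dataSource.getConnection()) {
            System.out.println("Connected to " + con.getMetaData().getURL());
        }
    }
}
Note that the credentials still end up in process memory once a connection is opened; moving them to JNDI only takes them out of your application code and configuration.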
Perhaps Server Fault is a better place for asking or searching for details about this. If you describe your environment (OS, container for your application, ...), I am sure you can get help for that specific setup.

Related

Windows registry storage best practice

Background
I've recently been shunted into the world of Windows programming and I'm still trying to find my way around the best practices and ways of doing things, so I was just hoping for some pointers on use of the registry.
Not particularly relevant, but the background is that I am creating an installer in Golang. A couple of points to get out of the way on that:
I am aware MSIs would usually be best practice for an installer (I have my reasons for going with a custom exe)
I know there are more obvious language choices than Golang, just go with it
Current registry use
As part of the install process, I store several pieces of data in the registry:
run once commands:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
I create a few entries here: to restart the process after a system reboot and to delete some temp files on reboot after uninstall
an uninstall entry:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Vendor Product
Content here is the same as an MSI would create; I was careful not to create any additional custom fields here (all static data until uninstall)
an application entry:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Vendor\Product
I store some additional data about the installation here, some of which is needed for uninstall, such as state info from before the installation (again, all static content)
a temporary entry:
Computer\HKEY_CURRENT_USER\SOFTWARE\Vendor\Product
I store some temporary data here which can include some sensitive user-entered data (usernames/passwords). I run some symmetric encryption to obscure the data, though my understanding is that this area of the registry is encrypted so only the user could access it anyway (I would like confirmation on that)
This data is used to resume after restart and then deleted
Questions
I'm looking for confirmation / corrections on my current use of the registry.
I now have a need to pass some data between an application and a running service; this data would be updated every 1-2 minutes and would be a few bytes of JSON. Does the registry seem like a reasonable place to store variable data like this? If so, is there a particular place that is better suited for variable data? I was going to add it to:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Vendor\Product
HKCU isn't encrypted to my knowledge. It's stored in a file called NTUser.dat, can be loaded as a hive under HKEY_USERS, and is visible to other processes with sufficient rights to do so.
You would need to open up the rights to HKLM\SOFTWARE\Vendor\Product if you expect a user-privileged process to be able to write to it. If you want to pass data to a service you might want to use some sort of IPC pipe to do so. Not sure what's available in Golang for this.

how to run a bash script at startup with a specific user on Ubuntu 12.04 (stable)

Being fairly new to the Linux environment, and not having local resources to ask, I would like to know the preferred method of starting a process at startup as a specific user on an Ubuntu 12.04 system. The reasoning for such a setup is that these machines will be hosting an Input/Output Controller (IOC) in an industrial setting. If a machine fails or restarts, this process must boot automatically... every time.
My internet searches have turned up two such areas for performing this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, and preferably in the same manner.
Within whatever method above is deemed the most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root, so I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Where within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP>=(<user or group to run as>)NOPASSWD:<list of comma separated applications>
root ALL=(user)NOPASSWD:/usr/bin/start_ioc.sh
As a final piece of additional information, the ultimate reason for this approach, which may also be flawed logic, is that the IOC process needs to have access to network-attached storage (NAS). Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task:
Exactly as this post here:
how to run script as another user without password
I did use rc.local to initiate the process at startup. It seems to be working quite well.
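For reference, a minimal sketch of the relevant /etc/rc.local entry, reusing the placeholder user and script path from the sudoers example above (substitute your own values); the line has to sit above the final exit 0:
# /etc/rc.local runs as root at the end of boot, so drop privileges here.
# "user" and the script path are the placeholders from the sudoers example.
sudo -u user /usr/bin/start_ioc.sh &
exit 0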

JDBC via BPXBATCH - how to get TSO-user credentials?

I have a little Java program that reads a DB2 table via JDBC. This program is invoked via "tso bpxbatch myjavatool".
I wonder if there is the possibility to "pass" the username/password of my TSO user to the JDBC driver?
For example, if I connect to DB2 with a simple REXX script I don't have to specify my username/password again and DB2/RACF checks if my user is allowed to execute the SQLs.
Now my java tool is not running in my TSO address space but under the control of the J9 in the USS address space...
Is there also a way to automatically log in to DB2 with the current TSO user?
I don't know too much about BPXBATCH, but I assume you are still running under your own userid in the USS address space.
In your Java code you should be able to get your userid via
String user = System.getProperty("user.name");
As for the password, you could try using RACF PassTickets instead. There is a library IRRRacf.jar in /usr/include/java_classes and the corresponding javadoc in IRRRacfDoc.jar in the same directory. The code for generating the PassTicket is rather simple:
IRRPassTicket generator = new IRRPassTicket();
String ptkt = generator.generate(user,applid);
Then just pass the PassTicket instead of the password and you should be fine (a fuller sketch follows the list below).
Alas, there are several aspects you have to make sure of before using this approach:
Set up RACF to use PassTickets for DB2 - it might already be configured, else you'll have to set up proper profiles in the PTKTDATA class (see the RACF documentation for more details)
Make sure each user running the code has the proper RACF authorization to use the r_ticketserv callable service (again, see the RACF documentation)
Find the correct application name (applid) for your DB2 system. See the DB2 documentation about using PassTickets.
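Putting the pieces together, a minimal sketch could look like the following; the package name com.ibm.eserver.zos.racf, the applid DB2APP, and the JDBC URL are assumptions you will need to adapt to your system:
// Sketch only: generate a PassTicket for the current userid and use it as the
// JDBC password. Requires IRRRacf.jar and a DB2 JDBC driver on the classpath.
// The applid and the JDBC URL below are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import com.ibm.eserver.zos.racf.IRRPassTicket;
public class Db2PassTicketExample {
    public static void main(String[] args) throws Exception {
        String user = System.getProperty("user.name");  // userid the USS process runs under
        String applid = "DB2APP";                        // must match your PTKTDATA profile
        IRRPassTicket generator = new IRRPassTicket();
        String ptkt = generator.generate(user, applid);  // one-time PassTicket
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:446/MYDB2", user, ptkt)) {
            System.out.println("Connected as " + con.getMetaData().getUserName());
        }
    }
}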

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers I copy the installation to different paths
and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test
The way you describe is not possible, because the iDempiere web app is always reached as
http://hostname:port/webui
Running multiple instances of iDempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances you could use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; a minimal vhost sketch follows this list.)
There is another benefit to using subdomains for browser access: If all your server instances use the same host name the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session as discussed here in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances you will run into trouble once you need to restore a database from a backup file. The RUN_DBRestore.sh script will delete and recreate the DB user, which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment you need to assign a different telnet port number to each of the instances: in the editor of your choice open the file /etc/init.d/iDempiere, find the line export TELNET_PORT=12612, and change the port number to something else.
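As referenced in the proxying point above, a minimal Apache mod_proxy virtual host sketch could look like the following; the subdomain and port 8081 are assumptions matching the test instance example, and mod_proxy / mod_proxy_http must be enabled:
# Hypothetical subdomain for one instance; adjust ServerName and the port.
<VirtualHost *:80>
    ServerName idempiere-test.example.com
    ProxyPreserveHost On
    ProxyPass        /webui http://localhost:8081/webui
    ProxyPassReverse /webui http://localhost:8081/webui
</VirtualHost>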
Please Note:
OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community (by giving your own answer to this question or by posting to the iDempiere Google group). Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web server configuration if it uses some default ports.

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the surrounding stack, so I would appreciate precise answers, thanks.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survives a reboot.
Recommendation: Don't write locally; read below ...
EDIT: Got to thinking about this, and while I still recommend against it, there is a 3rd option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and then you will have access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. So anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
