There is a server where the DB is installed. My system communicates with the DB, and an ADR structure has been created on my machine's side. I know that the path was previously changed by one of the admins. I was trying adrci, but I don't really understand how setting paths works. In adrci it looks to me like I'm only setting the path where the diagnostics are stored so that they can be analyzed by the interpreter, but I need to change the place where everything is being saved. It's all about changing the path on the application side, not on the DB side (unless that is needed). Any description of how this works remotely would be very useful.
I tried to find the solution in the Oracle documentation. I understood some points, but there are gaps in my knowledge of DB administration.
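What I have gathered so far, though I am not sure it is the right mechanism: in adrci, "set base" seems to only tell the interpreter which ADR base to read, while the place where the client actually writes its diagnostics appears to be controlled by sqlnet.ora. Roughly (the directory below is just an example):

# sqlnet.ora on the client/application machine
DIAG_ADR_ENABLED = ON
ADR_BASE = /path/to/new/adr_base

adrci> show base
adrci> set base /path/to/new/adr_base
adrci> show homes

Is that the correct way to move where everything gets written, or is there more to it?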
I have a huge SQLite file containing my DB. I need to know whether it is possible, and how, to connect to this DB as an embedded one with JPA.
I'm developing an app that packs this database inside its own jar, so that when I use it on another system I don't have to import a copy of my DB back and forth.
The technologies I'd like to use are Angular and Spring, since those are the ones I know best. If there are technologies that better suit this purpose, I'd like some suggestions.
Thanks :)
I hope I understood your question correctly, so I made a small project for you that you can have a look at: spring-jpa-sqlite-sample. It may guide you a bit, though I don't claim correctness or completeness.
The path to the SQLite file can easily be changed by setting the correct url in the persistence.properties file:
driverClassName=org.sqlite.JDBC
url=jdbc:sqlite:src/main/resources/chinook.db
# relative paths work here too
hibernate.dialect=dev.mutiny.semo.config.SQLiteDataTypesConfig
hibernate.hbm2ddl.auto=none
hibernate.show_sql=true
You can also use environment variables from your system, which Spring can read, so that you can reference the correct directory for the file. See: Read system environment var (SO)
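Purely as an illustration (the variable name SQLITE_DB_PATH is made up, and this assumes Spring Boot's DataSourceBuilder is on the classpath), you could read such a variable when building the data source yourself:

import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;

public class SqliteDataSourceFactory {

    // SQLITE_DB_PATH is a hypothetical environment variable pointing at the .db file
    public static DataSource create() {
        String dbPath = System.getenv("SQLITE_DB_PATH");
        return DataSourceBuilder.create()
                .driverClassName("org.sqlite.JDBC")
                .url("jdbc:sqlite:" + dbPath)
                .build();
    }
}

That way the jar itself stays unchanged and only the environment decides which file is opened.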
Last but not least: beware of using huge SQLite files. Consider transferring the data into a 'real' database first, i.e. any client/server RDBMS you know (Oracle, MariaDB, MSSQL; it depends on your scenario/taste).
Have a closer look at the documentation: When to use SQLite (and when not to!)
We have a number of (developer) existDb database servers, and some staging/production servers.
Each has its own configuration, and they are slightly different.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet, or external). That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing local files in the file system is neither secure nor fast.)
How can I get the physical name of the eXist-db server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
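As a rough sketch (the property name and the config paths are just examples, not something your repository already contains), the per-server configuration could then be selected like this:

let $server := util:system-property("our-server-name")
return doc(concat("/db/config/", $server, "-config.xml"))

Since the property is set at startup, it stays the same no matter which URL the request came in on.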
Good morning,
I would like to generate an Excel file from Oracle, so I have imported POI 3.16 and all prerequisites based on the bottom table at this link:
http://poi.apache.org/overview.html#components
Exactly the following files:
commons-logging, commons-codec, commons-collections, log4j ,poi.jar
The dbms command I have used:
dbms_java.loadjava('filename.jar -resolve');
Everything went fine, but all the classes within "org/apache/poi/hssf/usermodel/" remained invalid. The most important part. :)
Does anybody have an idea what the problem could be? Should I import any other classes? First I would like to try a solution that does not require checking log files on the hard disk or any other action on the server itself. I have no access to the server, so I have to communicate with the administrators, which is complicated in our company :(. Of course, if there is no solution within Oracle, I have no other option...
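If it helps, one check I can run purely from a SQL session (assuming the jars were loaded into my own schema, so no access to the server's file system is needed) is listing which Java classes stayed invalid:

-- list the POI classes that remained invalid after loadjava
SELECT dbms_java.longname(object_name) AS class_name, status
  FROM user_objects
 WHERE object_type = 'JAVA CLASS'
   AND status = 'INVALID';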
Thanks in advance,
Sz.
A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the surrounding stack, so I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
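If you did decide to try it anyway, the startup task would run something along these lines (the path and the account are placeholders, since I don't know the exact identity your role uses, and as noted the drive letter may not be stable):

icacls "E:\approot\framework\log" /grant IIS_IUSRS:(OI)(CI)M /T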
Recommendation: Don't write local, read below ...
EDIT: I got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and then you will have access to that folder. Please remember that local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
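For reference, the local storage allocation in ServiceDefinition.csdef looks roughly like this (the role and resource names are placeholders):

<WebRole name="MyPhpWebRole">
  <LocalResources>
    <LocalStorage name="LogStorage" sizeInMB="64" cleanOnRoleRecycle="false" />
  </LocalResources>
</WebRole>

Even with cleanOnRoleRecycle set to false, the data is still gone if the instance is moved to another machine, as described below.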
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
I realise there are some similar questions on here already but I couldn't see one that matched my problem so I'm afraid I had to ask a new question.
I have a webservice running on a server, which is throwing an "ORA-12154: TNS:could not resolve the connect identifier specified" error. However, when I log onto the said server I am able to tnsping the entry successfully and connect to it via sqlplus, but not through the webservice.
If anyone has any suggestions as to things to look for then I would greatly appreciate it.
Cheers
Some other things to look at include:
If you're using a service name instead of SID, are you specifying the entire service name?
If you're using the ORACLE_SID environmental variable, check the case (mydb vs MYDB)
Check for a sqlnet.log file
If you're using a username/password@SID connect string, you may need to quote your password if it contains special characters (like an @ symbol).
The webservice can't find tnsnames.ora, which usually means that Oracle's environment wasn't set up properly when the process was started; typically the shell script which starts it doesn't source oraenv.sh.
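A minimal sketch of what that setup might look like in the startup script (the paths and SID below are examples, adjust them to your installation):

# set the Oracle client environment before launching the web service
export ORACLE_SID=MYDB
export ORAENV_ASK=NO
. /usr/local/bin/oraenv                          # sets ORACLE_HOME, PATH, etc.
export TNS_ADMIN=$ORACLE_HOME/network/admin      # directory containing tnsnames.ora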
So your interactive login works - what is different between your interactive login and the user that runs your web service?
Are they the same user? If not then you will need to update some of your configs in order to make the Oracle client files available to the webservice.
Details like operating system, Oracle version, etc. are always a help.