I'm trying to start some Oracle databases on RHEL; however, when I run the dbstart command, I get an error message saying ORACLE_HOME_LISTNER isn't set.
[oracle@olxxxa ~]$ dbstart $ORACLE_HOME
ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener
Usage: /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbstart ORACLE_HOME
Processing Database instance "xxxa": log file /u01/app/oracle/product/12.1.0/dbhome_1/startup.log
Processing Database instance "xxxb": log file /u01/app/oracle/product/12.1.0/dbhome_1/startup.log
Looking online, I saw people saying to change the dbstart file so that ORACLE_HOME_LISTNER is set from $1 to $ORACLE_HOME, which I did; however, I'm still getting the same error. I also read that I could pass $ORACLE_HOME directly to the dbstart command, but I get the same output with or without the variable being passed.
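For reference, the edit I made in $ORACLE_HOME/bin/dbstart looks roughly like this (quoting from memory; the exact position of the line differs between versions):
# the stock script reads the listener home from its first argument:
# ORACLE_HOME_LISTNER=$1
# I changed it to use the exported ORACLE_HOME instead:
ORACLE_HOME_LISTNER=$ORACLE_HOME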
I am starting a PostgreSQL 11 server from the command line on Windows and trying to pass the log file parameter; however, when I start the server, the log file is switched to the default one assigned in postgresql.conf by the log_directory and log_filename settings.
I tried deleting the log_directory and log_filename entries from the postgresql.conf file, but it didn't work: the log file is still switched to the default one given by the old log_directory and log_filename values.
I am stopping the server every time so that the new settings are picked up, and I am starting it with this command line:
"C:\Program Files\PostgreSQL11\bin\pg_ctl.exe" -D "C:\Program Files\PostgreSQL11\data\pg11" -w -o "-F -p 5423" -l "C:\Program Files\PostgreSQL11\data\logs\pg11\MY_LOG_FILE.log" start
I get this log message in my log file, and after that the log messages are saved in the old default log file:
2019-07-30 11:18:00 CEST [19996]: [4-1] user=,db=,app=,client= HINT:
The further log output will appear in the directory
»C:/PROGRA~1/POSTGR~2/data/logs/pg11«
It is mentioned in the documentation:
pg_ctl encapsulates tasks such as redirecting log output and properly
detaching from the terminal and process group.
However, since nobody seems to have any idea about this issue, it looks like there is a difference between the log file passed to the executable and the log file from postgresql.conf. The one passed to the executable only captures output from the executable while it is starting the server; the one from the config file captures output from inside the server, for example when you execute a query. So the result I got makes sense now, and what I saw is actually the normal behavior, but in that case the documentation should be fixed.
If this is not the case, and pg_ctl really should redirect the server log output, then this is a bug in PostgreSQL 11.4, just so you know.
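In case it is useful: the way I now understand it, the server's own output has to be pointed at the file through postgresql.conf rather than through pg_ctl's -l option. A sketch of what I mean (these settings are my assumption about how to do this; the path and file name are from my setup, and a restart is needed for logging_collector to take effect):
logging_collector = on
log_directory = 'C:/Program Files/PostgreSQL11/data/logs/pg11'
log_filename = 'MY_LOG_FILE.log'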
I have run into a problem with the psql command in my Bash script: I am trying to log in to my local Postgres database and submit a query. I am using the command in the following way:
psql -U postgres -d rebasoft_appauditor -c "SELECT * FROM katan_scripts"
However, I get the following error message.
psql: FATAL: Ident authentication failed for user "postgres"
This runs perfectly fine from the command line after I added the following lines to /var/lib/pgsql/data/pg_hba.conf:
local all all trust
host all all 127.0.0.1/32 trust
Also, could these entries please be verified for correctness?
I find it rather strange that database authentication works fine on the command line but fails in a script. Could anyone please help with this?
Note: I am using macOS
It might possibly depend on your bash script.
Watch out that the asterisk (*) is not replaced with the file names in your current directory. And possibly a semicolon or \g might help to actually send the SQL statement to the database server.
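A minimal sketch of what I mean, using the connection details from your question; the double quotes keep the shell from expanding the asterisk into file names, and the trailing semicolon makes sure the statement is terminated:
#!/bin/bash
psql -U postgres -d rebasoft_appauditor -c "SELECT * FROM katan_scripts;"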
I'm trying to pull a Heroku database to my local Windows computer by using the Heroku bash command
heroku pg:pull HEROKU_POSTGRESQL_COLOR mydatabase --app appname
but when I run the above command I get the following error:
'env' is not recognized as an internal or external command, operable program or batch file.!
The local database 'mydatabase' is created, but without any tables.
My Heroku app's database has a table in it, but it is not getting pulled to my local database.
Help me to solve it.
A couple of things:
1. When there is an error such as "'env' is not recognized as an internal or external command, operable program or batch file", it means that the system is trying to execute a command named env. This has nothing at all to do with setting up your environment variables.
env is not a command on Windows, but it is on Unix. I understand that you have a Windows machine, though. What you can do is run Git Bash. (You could get it by itself, but it also comes with Heroku's CLI.)
This gives you a Unix-like environment where the env command is supported, and then you can run the actual heroku pg:pull command.
2. If that still doesn't work, there is a workaround that works without installing anything extra. It is based on a ticket I submitted to Heroku, so I'm just going to quote their response:
"The pg:push command is just a wrapper around pg_dump and pg_restore commands. Due to the bug you encountered, it sounds like we should go ahead and do things manually. Run these using cmd.exe (The Command Prompt application you first reported the bug). First grab the connection string from your heroku application config vars.
heroku config:get DATABASE_URL
Then you want to pick out the username / hostname / databasename parts from the connection string, i.e. postgres://username:password@hostname:port/databasename. Use those values in the following command and paste in the password when prompted for one. This will dump the contents of your Heroku database to a local file.
pg_dump --verbose -F c -Z 0 -U username -h hostname -p port databasename > heroku.dump
Next you will load this file into your local database. One thing that the CLI does before running this command is to check and make sure the target database is empty, because running this against a database with real data is something you want to avoid so be careful with pg_restore. When running this manually you run the risk of mangling your data without the CLI check, so you may want to manually verify that the target database is empty first.
pg_restore --verbose --no-acl --no-owner -h localhost -p 5432 -d mydb2 < heroku.dump
I am sorry this is not a better experience; I hope this helps you make progress. We are in the process of rewriting our pg commands so that they work better on all platforms, including Windows, but there is no solid timeline for when this will be completed."
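To make the quoted steps concrete, here is a sketch with a completely made-up connection string; every value below is illustrative only, so substitute the parts of your own DATABASE_URL (the target mydatabase is the empty local database from the question):
# suppose heroku config:get DATABASE_URL printed
#   postgres://uabc123:s3cr3tpw@ec2-12-34-56-78.compute-1.amazonaws.com:5432/d9example
pg_dump --verbose -F c -Z 0 -U uabc123 -h ec2-12-34-56-78.compute-1.amazonaws.com -p 5432 d9example > heroku.dump
pg_restore --verbose --no-acl --no-owner -h localhost -p 5432 -d mydatabase < heroku.dump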
To take a backup (a dump file) from Heroku, you first need the backups add-on. Install it with:
$ heroku addons:add pgbackups
Then the commands below will capture a backup and download the dump file as latest.dump:
$ heroku pgbackups:capture
$ curl -o latest.dump `heroku pgbackups:url`
or
wget "`heroku pgbackups:url --app app-name`" -O backup.dump
Edit (after chatting with the user):
Problem: 'env' is not recognized as an internal or external command, operable program or batch file.
I suspected that one of the PATH entries pointing to a particular program is messed up. You can check that in the WINDOWS\system32 folder.
OK, so how to edit it:
My Computer > Advanced > Environment Variables
Then choose PATH and click the Edit button
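A quick, read-only sanity check from Git Bash (the Unix-like shell mentioned in the other answer) to confirm the CLI is actually on the PATH the shell sees:
echo $PATH
which heroku    # prints the Heroku CLI location if it is found on the PATH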
I'm trying to log in to Oracle. It prompts for a username and password,
but I'm getting this error:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
What should I do? Thanks in advance.
To me it looks like either your environment is not set correctly,
i.e. the ORACLE_HOME and ORACLE_SID environment parameters are not set,
or the database has an issue.
Please check which of the two it is and report back, so we can help you.
To check the environment parameters:
echo $ORACLE_HOME
echo $ORACLE_SID
If any of them is not correct, change them to the right values, log in, and enjoy your DB.
If they're correct, check whether an Oracle process exists on the system:
ps -ef | grep oracle
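A minimal sketch of setting these and checking for the instance, assuming a typical single-instance install; the home path and SID below are hypothetical examples only, so use the values from your own installation (on Linux they are usually listed in /etc/oratab):
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1    # example path only
export ORACLE_SID=orcl                                        # example SID only
export PATH=$ORACLE_HOME/bin:$PATH
ps -ef | grep pmon    # an ora_pmon_<SID> process means the instance is running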
I am just getting started with shell scripts to save me typing in the same commands over and over. This command is used to copy a database over to a slave server as part of setting up MySQL database replication.
It works when typed into the command prompt directly:
mysqldump --host=192.168.1.1 -uUSER -pPASSWORD --opt database_name | mysql --host=192.168.1.2 -uUSER -pPASSWORD -C database_name
USER, PASSWORD and database_name all are replaced with their actual values in the real script.
When I type this command into a scripts.sh file, give it the execute permission, and then run it with ./scripts.sh I get:
'RROR1102 (42000): Incorrect database name 'database_name
mysqldump: Got errno 32 on write
What could be causing this error? Do I need to modify the command somehow when it is contained in a shell script?
The variable your database name is in has a CR at the end. You may need to run your script through dos2unix, or use one of the solutions on this site for stripping CRs from data if you're getting the database name from an external source.
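If the database name does come from a file or variable with Windows line endings, a small sketch of stripping the CR inside the script itself (the variable name is hypothetical):
# remove a trailing carriage return before the value is used
database_name=$(printf '%s' "$database_name" | tr -d '\r')
Alternatively, convert the whole script once:
dos2unix scripts.sh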