I have a RHEL 5 system and have been trying to capture the output of a batch of 3-4 scripts, each into a separate variable, but it is not working as expected.
I tried the following syntax, which I have seen a lot of people on this site use and which should really work, though for some reason it doesn't work for me.
Syntax -
testvar=`sqlplus -s foo/bar@SCHM <<EOF
set pages 0
set head off
set feed off
@test.sql
exit
EOF`
Now, I have access to the sqlplus command from the Oracle bin folder, and I have also set and exported the ORACLE_HOME and ORACLE_PATH variables (I echoed them just to ensure they are actually set).
However, I keep getting an error saying "you need to set EXPORT for ORACLE_HOME", even though I have confirmed with everyone that I am indeed using the correct path.
Another question I have: once I get each script's output (a numeric value in bits or bytes) into a variable (there will be 4-5 variables, as there are 4-5 scripts), how do I convert it into human-readable output in either MB or GB?
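To make it concrete, here is a rough sketch of the kind of conversion I am hoping for (assuming the value ends up in bytes; the variable name and number are just placeholders):
testvar=123456789   # in reality this would come from the sqlplus call above
# convert bytes to MB and GB with two decimal places
mb=$(awk -v b="$testvar" 'BEGIN { printf "%.2f", b / (1024 * 1024) }')
gb=$(awk -v b="$testvar" 'BEGIN { printf "%.2f", b / (1024 * 1024 * 1024) }')
echo "${mb} MB (${gb} GB)"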
Please guide me on this, and I assure you that I will post everything back here so that anyone who gets stuck on the same issue in the future doesn't have to waste time (and your precious time won't go to waste either...).
Thanks in advance,
Brian
We have a number of databases at our company, among them an Oracle 12c (12.2.0.1.0 to be precise), but we have no (qualified) Oracle DBAs. Performance has deteriorated drastically over the last 6 months or so, and I now have the task of finding out why. My research suggested that I should increase some memory parameters in the initDBN.ora file. Here's what the original looked like:
DBN.__data_transfer_cache_size=0
DBN.__db_cache_size=50331648
DBN.__inmemory_ext_roarea=0
DBN.__inmemory_ext_rwarea=0
DBN.__java_pool_size=79691776
DBN.__large_pool_size=8388608
DBN.__oracle_base='/orabin/app/oracle'#ORACLE_BASE set from environment
DBN.__pga_aggregate_target=197132288
DBN.__sga_target=734003200
DBN.__shared_io_pool_size=12582912
DBN.__shared_pool_size=536870912
DBN.__streams_pool_size=4194304
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_16k_cache_size=8388608
*.db_32k_cache_size=8388608
*.db_4k_cache_size=8388608
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.memory_max_target=0
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=700m
*.shared_pool_size=536870912
*.streams_pool_size=4194304
*.undo_tablespace='UNDOTBS1'
Please don't blame me for this, because I did not write it. It certainly doesn't look like the sample init.ora and I am not at all certain where the syntax came from. The values I changed were:
DBN.__sga_target=1024m
*.sga_target=1024m
*.memory_max_target=1408m
DBN.__pga_aggregate_target=384m and *.pga_aggregate_target=384m
That's the order in which I made the changes. After each change I used sqlplus to first recreate the spfile with:
create spfile='spfileDBN.ora' from pfile='initDBN.ora';
This was followed by an attempt to start up the database with startup nomount. In each case I got an error message which led me to make the next change.
Finally I got the error which is in the title of this post. When I tried to search for information on it, the findings were grim. Mostly the information dealt with other parameters and did not explain what this error actually means. The only thing that gave any real background was this link from Burleson Consulting. It didn't really help me solve the problem, so I decided to revert the initDBN.ora file and do some more research. A slow database is generally better than no database.
But hey! I still get that same error, even after reverting to the original init file. I'm getting desperate now. I have no idea how to fix this. From what I've read to date, setting "underscore parameters" in your init file is a "NO NO".
Can anybody provide me with some helpful tips as to how to get rid of this error?
We don't know if the apps running on this database need specific block sizes, but if the priority is getting the database open, you can shrink the init.ora down to the smallest, simplest set of parameters that gets you moving forward.
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1000m
*.undo_tablespace='UNDOTBS1'
should get you an open database. Notice I have bumped up sga_target to 1000m, which is about the minimum you need to get a database started. The true values for sga_target and pga_aggregate_target really need to be set based on your expected usage and the server configuration, but the init.ora above should get your database running.
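A minimal sketch of how you might apply it (assuming the stripped-down parameters are saved as a pfile named initDBN_min.ora, a name used here only for illustration, and that you can connect as sysdba on the server):
sqlplus / as sysdba <<EOF
-- build a new spfile from the minimal pfile, then try to open the database
create spfile from pfile='initDBN_min.ora';
startup
EOF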
I am not sure that this really qualifies as a "solution", but it does fix the initial problem. As mentioned in my reply to Connor McDonald, I set the parameter _shared_pool_reserved_min_alloc to 3000 in the initDBN.ora file, which I copied from Connor's example (thanks for that). After recreating the spfile and trying to restart the database, I got the following error:
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 11953766
That got me thinking that the value 0 in the original error was probably a stand-in value which really means "the maximum allowed". By actually setting the parameter, I have apparently managed to generate an error that is more meaningful.
The value of _shared_pool_reserved_min_alloc is now set to 4200, which is a value I recall reading in one of the less helpful posts. (No, that post did not say that this is a value that should be used, just that it could be used.) This time, after re-creating the spfile I was able to start the database.
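For reference, the change itself is just one extra line in initDBN.ora (the value is simply the one I ended up with, not a general recommendation), after which I re-created the spfile exactly as before:
*._shared_pool_reserved_min_alloc=4200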
Before I do any more fiddling with parameters, I will do a bit more research... or maybe a lot more.
I am working on code which runs SQL scripts with SQL*Plus. Most of the scripts have DDL in them, which requires a / (forward slash) to run the buffer. Some of the scripts are missing the slash. I have tried running the script like @script.sql and then /, but it sometimes tries to create something twice. I also want to run the show errors command at the end, and sometimes that caused trouble for me if the / was missing.
So, I changed to @script.sql followed by . (a period, which terminates the buffer without running it) and then show errors. This works when the / is present in the file, but now if the / is missing from the file, it messes up by not creating the object.
We tried adding some logic to check whether the file ends with /, but that is a messy option in my opinion. I had false positives, and it isn't a perfect solution.
I thought about running the scripts with another tool, but the company I work for has already released this, customers expect it to run with sqlplus, and the scripts may include commands that only work in sqlplus and are not JDBC-compliant, so I don't think that is an adequate solution.
I am thinking now that if I could check whether / had been run on the current buffer, and run / only if it had not, that would be a great solution, perhaps perfect, but I can't find any documentation that such a command exists. Is there another way to solve the problem? Does such a command exist?
Thanks!
SQL*Plus is pretty limited. There's no control mechanism to do an IF/THEN, even if you could determine whether the buffer had been sent.
One thing you can do is, before a script is run, call
exec begin raise login_denied; end;
After the script, call
#show sqlcode
If that still shows 1017 (the error code for login_denied), then you can be pretty sure that the script didn't execute any SQL statement. You can use a user-defined error code if you prefer, such as raise_application_error(-20123,'dummy');
Note that I included a # prefix in the show statement. That runs the command without affecting the contents of the buffer, so if you find nothing has been executed, you could try running the buffer contents with the slash.
If you have a single script that contains multiple DDL statements (e.g. a table plus indexes/grants/constraints...), then you still run the risk of not running the final command if it didn't have the slash.
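Put together, the sequence might look something like this sketch (the connection string is a placeholder, script.sql is the script in question, and acting on the result would still have to happen inside the same session):
sqlplus -s user/password@db <<EOF
exec begin raise login_denied; end;
@script.sql
#show sqlcode
EOF
# if the output still shows 1017, script.sql never executed anything,
# and the buffer would still need to be run with /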
I am working on an Ubuntu server with 10 users at any point in time. We usually keep our code there and use the server to make builds. A build usually takes 30 to 50 minutes, depending on the concurrency defined. The build command is make -jX, where X can be anything from 1 to 24.
My problem starts when many users run the make command with a high X value. Is there any way to block these commands or to put a limit on them?
For example, if someone runs make -jX (X > 4), I should be able to override the command as make -j4.
I know one way is to use an alias, but I have no idea how to inspect the argument value through an alias (something like alias ll='ls -la' in the .bashrc file is fine, but how do I handle ll -lha through bashrc?).
Also, is there any way to make the alias work for all the users without editing the bashrc files of all the users?
Thanks in advance.
Although limiting the parameters to a particular command (in this case make) in a way that cannot be circumvented is generally hard, you can configure system-level process limits on each user using /etc/security/limits.conf.
If you open this file on your system, you will see in the comments that you can limit various user resources such as nproc and memory. If you play with these limits, you may be able to get something reasonable for fair resource sharing among your developers.
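For example, entries along these lines would cap process counts and address space for a group of developers (the group name and numbers are made up for illustration, not recommendations):
# hypothetical /etc/security/limits.conf entries
@developers    soft    nproc    150
@developers    hard    nproc    200
@developers    hard    as       4194304   # address space limit, in KB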
I am trying to understand the code in this page: https://github.com/corroded/git-achievements/blob/gh-pages/git-achievements
and I'm kind of at a loss as to how it actually works. I do know some bash and shell scripting, but how does this script actually "store" how many times you've used a command (I'm guessing by saving it into a text file?), and how does it "sense" that you actually typed in a git command? I have a feeling it's line 464 onwards that does it, but I don't seem to quite follow the logic.
Can anyone explain this in a bit more understandable context?
I plan to write some achievements for other commands, and I hope to get an idea of HOW to go about it without randomly copying and pasting stuff and voodoo.
Yes, the script proper starts at line 464; everything before that is helper functions. I don't know how it gets installed, but I would assume you have to call this script instead of the normal git command. It checks whether the first parameter is achievement, and if not, regular git is executed with the remaining parameters. Afterwards it checks whether an error happened (and exits if one did). Then it calls log_action and check_for_achievments. log_action just writes the issued command with a date into a text file, while check_for_achievments scans that log file for certain events. If you want to add another achievement, you have to do it in check_for_achievments.
Just look at how the big case statement handles it (most of the achievements call the count_function, which counts the number of usages of the command and matches when a power of 2 is reached).
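A stripped-down sketch of the same pattern (not the actual git-achievements code; the file name and messages are invented) might look like this:
#!/bin/bash
# hypothetical wrapper: run the real git, log what was run, then scan the log
LOG="$HOME/.git-usage.log"

log_action() {
    # append a timestamp and the exact command line to the log file
    echo "$(date '+%Y-%m-%d %H:%M:%S') git $*" >> "$LOG"
}

check_for_achievements() {
    # count how often this subcommand appears in the log
    local count
    count=$(grep -c " git $1" "$LOG")
    # "unlock" an achievement whenever the count is a power of two
    if [ "$count" -gt 0 ] && [ $(( count & (count - 1) )) -eq 0 ]; then
        echo "Achievement unlocked: 'git $1' used $count times"
    fi
}

git "$@" || exit $?          # run the real command; bail out on error
log_action "$@"              # record what was issued
check_for_achievements "$1"  # see whether that earned anything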
I'm running a Minecraft server on my Linux box in a detached screen session. I'm not very fond of screen and would like to be able to constantly pipe the output of the server to a file (like a pipe) and pipe some input from a file to the server (so that I can send input to and read output from the server from remote programs, like a Python script). I'm not very experienced in bash, so could somebody tell me how to do this?
Thanks, NikitaUtiu.
It's not clear if you need screen at all. I don't know the minecraft server, but generally for server software, you can run it from a crontab entry and redirect output to log files.
Assuming your server kills itself at midnight on Sunday night (we can discuss changing this if restarting once per week is too little or too much, or if you require ad-hoc restarts), here, for a basic idea of what to do, is a crontab entry that starts the server each Monday at 1 minute after midnight.
01 00 * * 1 dtTm=`/bin/date +\%Y\%m\%d.\%H\%M\%S`; export dtTm; { /usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... ; } > /tmp/mineserver_trace_log.${dtTm} 2>&1
Consult your man page for crontab to confirm that day-of-week ranges are 0-6 (0=Sunday), and change the day-of-week value if 0 != Sunday.
Normally I would break the code up so it is easier to read, but crontab entries have to be all on one line (with some weird exceptions), AND there is usually a limit of 1024 bytes to 8K on how long the line can be. Note that the ';' just before the closing '}' is super-critical. If it is left out, you'll get undecipherable error messages, or no error messages at all.
Basically, you're redirecting any output into a file (including std-err output). Now you can do a lot of stuff with the output: use more or less to look at the file, grep ERR ${logFile}, write scripts that grep for error messages and then send you emails that errors have been found, etc.
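A small sketch of that last idea (the path pattern matches the crontab entry above; the mail address is a placeholder, and it assumes a working mail command):
#!/bin/bash
# find the newest dated trace log and mail any ERR lines it contains
logFile=$(ls -t /tmp/mineserver_trace_log.* 2>/dev/null | head -1)
[ -n "$logFile" ] || exit 0
if grep -q ERR "$logFile"; then
    grep ERR "$logFile" | mail -s "mineserver errors in $logFile" you@example.com
fi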
You may have some sysadmin work on your hands to get the mineserver user set up so it can run crontab entries. Also, if you're not comfortable using the vi or emacs editors, creating a crontab file may require help from others. Post to superuser.com to get answers for problems you have with Linux admin issues.
Finally, there are two points I'd like to make about dated logfiles.
Good: a. If your app dies, you never have to rerun it just to capture output and figure out why something has stopped working. For long-running programs this can save you a lot of time. b. Keeping dated files gives you the ability to prove to yourself, your boss, and others that it used to work just fine: see, here are the log files. c. Keeping the log files, assuming there is useful information in them, gives you the opportunity to mine those files for facts, e.g. the program used to take 1 sec for processing and now it is taking 1 hr, etc.
Bad: a. You'll need to set up a mechanism to sweep old log files, otherwise at some point everything will have stopped, AND when you finally figure out what the problem was, you discover that /tmp (or whatever dir you chose to use) is completely full.
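A one-line sweep along these lines (run from cron itself, say) is usually enough; the 14-day retention is just an example:
# remove dated mineserver logs older than two weeks
find /tmp -name 'mineserver_trace_log.*' -mtime +14 -delete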
There is a self-maintaining solution to using dates on the logfiles that I can tell you about if you find this approach useful. It will take a little explaining, so I don't want to spend the time writing it up unless you find the crontab solution useful.
I hope this helps!