derby.stream.error.rollingFile.limit property in derby.properties is not working: rollover does not happen as expected - derby

Need help with the configuration below, which is not taking effect from derby.properties.
Kindly have a look at this and see if you can help us resolve it or shed some light on it.
"derby.stream.error.rollingFile.limit=5120000" in derby.properties does not trigger rotation even after the 5 MB file size is reached. The log keeps growing as large as it can.
Other parameters such as "derby.infolog.append=true" are being picked up, as I can see derby.log being appended to instead of a new log being created.

The rolling log was implemented in version 10.11.1.1 and above: https://db.apache.org/derby/releases/release-10.11.1.1.cgi
You are most likely using an older version.
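For what it's worth, as far as I can tell from the Derby documentation, the rollingFile.* properties also only take effect once the rolling style itself is selected. So on 10.11.1.1 or later, a derby.properties along these lines should rotate at roughly 5 MB (the count value below is just illustrative):

derby.stream.error.style=rollingFile
derby.stream.error.rollingFile.limit=5120000
derby.stream.error.rollingFile.count=10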

Related

Creating a rolling file using Spring Reactive

Looking for advice on how to approach a problem.
Given a Flux<String>, I'd like to write it to a rolling file based on file size - much like the way the rolling file appender works in logback or log4j. For example, if my completed flux were to generate 9mb of data, and I wanted a rolling file of size 2mb, I'd like 4x2mb files and then a 1mb file.
I have seen some posts showing how to write to an OutputStream using DataBufferUtils, but I can't figure out where to start with the rolling file concept. I have considered using one of the previously mentioned loggers as a means of achieving this, but I feel this is something that could probably be done without them, and hopefully someone could provide some ideas how!
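One way to start, as a minimal blocking sketch rather than a definitive implementation (RollingWriter and everything in it are hypothetical names of mine, not from Reactor or any logging library): keep a byte counter next to the current OutputStream and open the next numbered file whenever the limit would be exceeded. In a real pipeline you would probably also move the blocking writes onto a boundedElastic scheduler.

import reactor.core.publisher.Flux;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper: writes lines to base.0, base.1, ... and rolls
// to the next file when maxBytes would be exceeded.
final class RollingWriter implements AutoCloseable {
    private final Path dir;
    private final String base;
    private final long maxBytes;
    private OutputStream out;
    private long written;
    private int index;

    RollingWriter(Path dir, String base, long maxBytes) throws IOException {
        this.dir = dir;
        this.base = base;
        this.maxBytes = maxBytes;
        roll(); // open the first file eagerly
    }

    synchronized void write(String line) throws IOException {
        byte[] bytes = (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8);
        if (written > 0 && written + bytes.length > maxBytes) {
            roll(); // limit reached: close the current file and start the next one
        }
        out.write(bytes);
        written += bytes.length;
    }

    private void roll() throws IOException {
        if (out != null) out.close();
        out = Files.newOutputStream(dir.resolve(base + "." + index++));
        written = 0;
    }

    @Override
    public synchronized void close() throws IOException {
        if (out != null) out.close();
    }
}

Used against a Flux<String> named flux, draining into roughly 2 MB chunks:

// Blocks until the flux completes; each out.N file holds whole lines up to ~2 MB.
try (RollingWriter writer = new RollingWriter(Paths.get("/tmp"), "out", 2L * 1024 * 1024)) {
    flux.doOnNext(s -> {
        try { writer.write(s); } catch (IOException e) { throw new UncheckedIOException(e); }
    }).blockLast();
} catch (IOException e) {
    throw new UncheckedIOException(e);
}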

Oracle 12c startup error: ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0

We have a number of databases at our company. Among them is an Oracle 12c (12.2.0.1.0, to be precise), but we have no (qualified) Oracle DBAs. The performance has deteriorated drastically over the last 6 months or so, and I now have the task of finding out why. My research suggested that I should increase some memory parameters in the initDBN.ora file. Here's what the original looked like:
DBN.__data_transfer_cache_size=0
DBN.__db_cache_size=50331648
DBN.__inmemory_ext_roarea=0
DBN.__inmemory_ext_rwarea=0
DBN.__java_pool_size=79691776
DBN.__large_pool_size=8388608
DBN.__oracle_base='/orabin/app/oracle'#ORACLE_BASE set from environment
DBN.__pga_aggregate_target=197132288
DBN.__sga_target=734003200
DBN.__shared_io_pool_size=12582912
DBN.__shared_pool_size=536870912
DBN.__streams_pool_size=4194304
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_16k_cache_size=8388608
*.db_32k_cache_size=8388608
*.db_4k_cache_size=8388608
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.memory_max_target=0
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=700m
*.shared_pool_size=536870912
*.streams_pool_size=4194304
*.undo_tablespace='UNDOTBS1'
Please don't blame me for this, because I did not write it. It certainly doesn't look like the sample init.ora and I am not at all certain where the syntax came from. The values I changed were:
DBN.__sga_target=1024m
*.sga_target=1024m
*.memory_max_target=1408m
DBN.__pga_aggregate_target=384m and *.pga_aggregate_target=384m
That's the order in which I made the changes. After each change I used sqlplus to first re-create the spfile with:
create spfile='spfileDBN.ora' from pfile='initDBN.ora';
This was followed by an attempt to start up the database with startup nomount. In each case I got an error message which led me to make the next change.
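(In other words, each iteration in sqlplus consisted of the two steps quoted above:)

create spfile='spfileDBN.ora' from pfile='initDBN.ora';
startup nomount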
Finally I got the error which is in the title of this post. When I tried to search for information on this, the findings were grim. Mostly the information dealt with other parameters and did not explain what this error actually meant. The only thing that gave any real background was this link from Burleson Consulting. It didn't really help me solve the problem, so I decided to revert the initDBN.ora file and do some more research. A slow database is generally better than no database.
But hey! I still get that same error, even after reverting to the original init file. I'm getting desperate now. I have no idea how to fix this. From what I've read to date, setting "underscore variables" in your init file is a "NO NO".
Can anybody provide me with some helpful tips as to how to get rid of this error?
We don't know if the apps running on this database need specific block sizes, but if the priority is getting the database open, you can shrink the init.ora down to the smallest, simplest set of parameters that gets you moving forward.
*.audit_file_dest='/orabin/app/oracle/admin/tmf/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='/orabin/app/oracle/oradata/tmf/control01.ctl','/orabin/app/oracle/fast_recovery_area/tmf/control02.ctl'
*.db_block_size=8192
*.db_domain='ubs-hainer.com'
*.db_name='tmf'
*.db_recovery_file_dest='/orabin/app/oracle/fast_recovery_area/tmf'
*.db_recovery_file_dest_size=4096m
*.diagnostic_dest='/orabin/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TMFXDB)'
*.local_listener='LISTENER_TMF'
*.nls_language='GERMAN'
*.nls_territory='GERMANY'
*.open_cursors=300
*.pga_aggregate_target=188m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1000m
*.undo_tablespace='UNDOTBS1'
should get you an open database. Notice I have bumped up sga_target to 1000m, which is about the minimum you need to get a database started. The true values for sga_target and pga_aggregate_target really need to be set based on your expected usage and the server configuration. But the init.ora above should get your database running.
I am not sure that this really qualifies as a "solution", but it does fix the initial problem. As mentioned in my reply to Connor McDonald, I set the parameter _shared_pool_reserved_min_alloc to 3000 in the initDBN.ora file, which I copied from Connor's example (thanks for that). After recreating the spfile and trying to restart the database, I got the following error:
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 11953766
That got me thinking that the value 0 in the original error was probably a stand-in value which really means "the maximum allowed". By actually setting the parameter, I have apparently managed to generate an error that is more meaningful.
The value of _shared_pool_reserved_min_alloc is now set to 4200, which is a value I recall reading in one of the less helpful posts. (No, that post did not say that this is a value that should be used, just that it could be used.) This time, after re-creating the spfile I was able to start the database.
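(For anyone following along, the working line in initDBN.ora was presumably of this form, in the same *. syntax as the rest of the file, with 4200 being the value described above rather than a recommendation:)

*._shared_pool_reserved_min_alloc=4200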
Before I do any more fiddling with parameters, I will do a bit more research... or maybe a lot more.

Apache NiFi GetTwitter

I have a simple question, as I am new to NiFi.
I have a GetTwitter processor set up and configured (correctly, I assume). I have the Twitter Endpoint set to Sample Endpoint. I run the processor and it runs, but nothing happens: I get no input/output.
How do I troubleshoot what it is doing (or in this case not doing)?
A couple of things you might look at:
What activity does the processor show? You can look at the metrics to see if anything has been attempted (Tasks/Time) as well as whether it succeeded (Out).
Stop the downstream processor temporarily to make any output FlowFiles visible in the connection queue.
Are there errors? Typically these appear in the top-left corner as a yellow icon.
Are there related messages in the logs/nifi-app.log file?
It might also help us help you if you describe the GetTwitter Property settings a bit more. Can you share a screenshot (minus keys)?
In my case it's because there are two sensitive values set. According to the documentation, when a sensitive value is set, the nifi.properties file's nifi.sensitive.props.key value must be set - it is an empty string by default in the Hortonworks Data Platform distribution. I set this to some random string (literally random_STRING, but you can use anything) and re-created my process from the template, and it began working.
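(That is, in nifi.properties, normally found under the NiFi conf directory; random_STRING is just the literal example value from this answer:)

nifi.sensitive.props.key=random_STRING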
In general I suppose this topic can be debugged by setting the log level to DEBUG.
However, in my case the issue was resolved more easily:
I had just set up a new cluster, and decided to copy all the Twitter keys and secrets to Notepad first.
It turns out that despite carefully copying the keys from Twitter, one of them had a leading tab. When pasting directly into the GetTwitter processor, this would not show, but fortunately it showed up in Notepad and I was able to remove it and make this work.

Visual Studio, database save error "There was an error parsing the query. [ Token line number = 1,Token line offset = 13,Token in error = , ]"

OK, so I recently started getting into programming simple programs. I have taught myself everything using guides on the internet thus far, and I have been doing really well and things have been working great, until now. I feel that I have done a lot, A LOT, of searching on this question, and I can't seem to find an answer or even anyone else out there having the same problem as me. I'm going to try to make this as simple as possible, so here it goes.
I have made a program in Visual Studio that is simply supposed to allow me to make entries in fields such as first name, last name, and so on. It works like a charm and saves data like normal; I can close it, re-open it, and the data is there. BUT
While messing with the program and trying to test it, I found that if I DELETED a row of data that I had previously inserted, then added ANY new data, then clicked the SAVE button, it would crash every single time.
First it would say there was no proper delete command. So I gave it one, and it stopped giving me that error message. Then it said it was the update command, so I gave it one, and that stopped showing up. But now it gives me the error message "There was an error parsing the query. [ Token line number = 1,Token line offset = 13,Token in error = , ]", and it gives me this error every time I try to save after following the above steps. I honestly don't know what to make of it.
I know I'm going to need to post a sample of my code, but I don't know what part of it to post, so I figured I would wait and see rather than post 1,500+ lines on here.
This is the only thing preventing my program from working correctly, and I wouldn't ask if I wasn't completely stumped. And I'm roughly 300% stumped. So any help would be greatly appreciated.

UnauthorizedAccessException on MemoryMappedFile in C# 4

I wanted to play around with using a MemoryMappedFile to access an existing binary file. Is this even at all possible, or am I a crazy person?
The idea would be to map the existing binary file directly into memory for some preferably higher-speed operations, or at least to see how these things work.
using System.IO.MemoryMappedFiles;
System.IO.FileInfo fi = new System.IO.FileInfo(@"C:\testparsercap.pcap");
MemoryMappedFileSecurity sec = new MemoryMappedFileSecurity();
System.IO.FileStream file = fi.Open(System.IO.FileMode.Open, System.IO.FileAccess.ReadWrite, System.IO.FileShare.ReadWrite);
MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(file, "testpcap", fi.Length, MemoryMappedFileAccess.Read, sec, System.IO.HandleInheritability.Inheritable, true);
MemoryMappedViewAccessor FileMapView = mf.CreateViewAccessor();
PcapHeader head = new PcapHeader();
FileMapView.Read<PcapHeader>(0, out head);
I get System.UnauthorizedAccessException was unhandled (Message=Access to the path is denied.) on the mf.CreateViewAccessor() line.
I don't think it's file-permissions, since I'm running as a nice insecure administrator user, and there aren't any other programs open that might have a read-lock on the file. This is on Vista with UAC disabled.
If it's simply not possible and I missed something in the documentation, please let me know. I could barely find anything at all referencing this feature of .NET 4.0.
Thanks!
I know this is an old question, but I just ran into the same error and was able to solve it.
Even though I was opening the MemoryMappedFile as read-only (MemoryMappedFileRights.Read) as you are, I also needed to create the view accessor as read-only:
var view = mmf.CreateViewAccessor(offset, size, MemoryMappedFileAccess.Read);
Then it worked. Hope this helps someone else.
If the size passed to CreateViewAccessor is more than the file length, it gives the UnauthorizedAccessException as well, because you are trying to map memory beyond the limits of the file.
var view = mmf.CreateViewAccessor(offset, size, MemoryMappedFileAccess.Read);
It is difficult to say what might be going wrong. Since there is no documentation on the MSDN website yet, your best bet is to install FILEMON from SysInternals, and see why that is happening.
Alternately, you can attach a native debugger (like WinDBG) to the process, and put a breakpoint on MapViewOfFile and other overloads. And then see why that call is failing.
Using .CreateViewStream() on the instance of MemoryMappedFile removed the error from my code. I was unable to get .CreateViewAccessor() working with the access-denied error.
