I'm currently running a Derby DB instance created with version 10.13.1.1.
I connect via network mode (startNetworkServer) on a Red Hat server.
I now want to upgrade to version 10.14.2.0.
However, when trying to connect to the upgraded database, I receive an access denied "java.io.FilePermission" error.
Details:
I downloaded both versions 10.13.1.1 and 10.14.2.0 onto my Windows desktop.
A backup of the database was created using the SYSCS_UTIL.SYSCS_BACKUP_DATABASE system procedure.
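For reference, a minimal sketch of how that backup procedure can be invoked over JDBC; the connection URL and the backup target directory are placeholders and should be adjusted to your own environment:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class BackupDerbyDatabase {
    public static void main(String[] args) throws Exception {
        // Connect through the network server (derbyclient.jar on the classpath).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:derby://localhost:1527/c:/Temp/13/database");
             CallableStatement cs = conn.prepareCall(
                "CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)")) {
            // Directory that will receive the backup copy (placeholder path).
            cs.setString(1, "c:/Temp/backups");
            cs.execute();
        }
    }
}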
I copied this backup to both the 10.13 and 10.14 folders.
Starting with my current version (10.13), I start the network server and then use ij to connect to the database. This works fine and I can see the tables, which validates that my backup is fine.
connect 'jdbc:derby://localhost:1527/c:\Temp\13\database;create=false';
I then start the 10.14 network server and go to 10.14's ij. When I try to connect to the backup:
connect 'jdbc:derby://localhost:1527/c:\Temp\14\database;create=false';
I get the FilePermission error:
ERROR XJ001: DERBY SQL error: ERRORCODE: 0, SQLSTATE: XJ001, SQLERRMC: java.security.AccessControlException
access denied ("java.io.FilePermission" "C:\Temp\updating_derby\threatadvisor" "read")
XJ001.U
Fair enough; I assumed this was because I was trying to connect to an older-version database without having specified the upgrade=true parameter. However, when I remove the create parameter and add the upgrade parameter, it still fails with the same error.
OK, so perhaps I can't upgrade a DB via the network server and have to connect to the database directly. From within my app, I use the following connection string:
jdbc:derby:C:/Temp/14/database;upgrade=true;
The app has the 10.14 jar on its classpath, so it should use that version and upgrade the database. It does: the app starts normally and I can see all the data. How do I know it upgraded? Because when I try to connect to this 10.14 database using the 10.13 network server and ij, it fails (as expected, due to the version difference).
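For clarity, a minimal standalone sketch of that embedded upgrade connection (assuming only the 10.14 derby.jar is on the classpath; the getDatabaseProductVersion call is just an illustrative way to confirm which engine booted):

import java.sql.Connection;
import java.sql.DriverManager;

public class UpgradeDerbyDatabase {
    public static void main(String[] args) throws Exception {
        // Embedded connection; upgrade=true performs the hard upgrade in place.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:derby:C:/Temp/14/database;upgrade=true")) {
            System.out.println("Now running on Derby "
                    + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}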
So I'm done, right? No. When I once more try to connect to this now-upgraded database via the network server and ij, I again get the java.io.FilePermission error.
I went in and ensured the actual OS permissions on the folders and files inside the "database" folder are not read-only. None are, yet it still errors.
I've even tried running the 10.14 network server on the Red Hat box (on a different port) and connecting to this DB via ij, and even there I get the FilePermission error.
I'm really at a loss as to what to do next. Please help!
FYI, the full error from the derby.log file:
Tue Jun 11 12:04:15 AEST 2019 : Apache Derby Network Server - 10.14.2.0 - (1828579) started and ready to accept connections on port 1527
Tue Jun 11 12:04:28 AEST 2019 Thread[DRDAConnThread_2,5,main] Cleanup action starting
java.security.AccessControlException: access denied ("java.io.FilePermission" "C:\Temp\14\database" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.io.File.exists(File.java:814)
at java.io.WinNTFileSystem.canonicalize(WinNTFileSystem.java:434)
at java.io.File.getCanonicalPath(File.java:618)
at org.apache.derby.impl.io.DirStorageFactory.doInit(Unknown Source)
at org.apache.derby.impl.io.BaseStorageFactory.init(Unknown Source)
at org.apache.derby.impl.io.DirStorageFactory.init(Unknown Source)
at org.apache.derby.impl.services.monitor.StorageFactoryService.privGetStorageFactoryInstance(Unknown Source)
at org.apache.derby.impl.services.monitor.StorageFactoryService.access$400(Unknown Source)
at org.apache.derby.impl.services.monitor.StorageFactoryService$12.run(Unknown Source)
at org.apache.derby.impl.services.monitor.StorageFactoryService$12.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.impl.services.monitor.StorageFactoryService.getCanonicalServiceName(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection$4.run(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.impl.jdbc.EmbedConnection.startPersistentService(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source)
at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.EmbeddedDriver.connect(Unknown Source)
at org.apache.derby.impl.drda.Database.makeConnection(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
Cleanup action completed
EDIT 1
I'm now trying to set up the security policy file as per this guide. However, after creating a new policy file based on the template in the demo directory, we can't even get Derby to pick up our file.
When we try to run:
java -classpath "C:\Temp\14\lib\derby.jar;C:\Temp\14\lib\derbynet.jar;C:\Temp\14\lib\derbyclient.jar;C:\Temp\14\lib\derbytools.jar;C:\Temp\14\lib\derbyoptionaltools.jar" -Djava.security.manager -Djava.security.policy=C:\Temp\14\server.policy org.apache.derby.drda.NetworkServerControl start
We get the following error:
java.security.AccessControlException: access denied org.apache.derby.security.SystemPermission( "engine", "usederbyinternals" )
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at org.apache.derby.iapi.security.SecurityUtil.checkDerbyInternalsPrivilege(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.getMonitorLite(Unknown Source)
at org.apache.derby.iapi.services.property.PropertyUtil$2.run(Unknown Source)
at org.apache.derby.iapi.services.property.PropertyUtil$2.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.iapi.services.property.PropertyUtil.getMonitorLite(Unknown Source)
at org.apache.derby.iapi.services.property.PropertyUtil.getSystemProperty(Unknown Source)
at org.apache.derby.iapi.services.property.PropertyUtil.getSystemProperty(Unknown Source)
at org.apache.derby.impl.drda.NetworkServerControlImpl.init(Unknown Source)
at org.apache.derby.impl.drda.NetworkServerControlImpl.<init>(Unknown Source)
at org.apache.derby.drda.NetworkServerControl.main(Unknown Source)
I know this line is in the policy file (and uncommented):
permission org.apache.derby.security.SystemPermission "engine", "usederbyinternals";
However, I don't think it is even picking up our policy file, because if we point it at a non-existent policy file, we still get the same error.
Thanks to @BryanPendleton for pointing me in the right direction. The initial issue was indeed that we needed the server.policy file. His link was helpful:
db.apache.org/derby/docs/10.14/security/csecjavasecurity.html
The second issue we were having was resolved by using the server.policy file template located here:
https://builds.apache.org/job/Derby-docs/lastSuccessfulBuild/artifact/trunk/out/security/rsecbasicserver.html
instead of the one provided in the download (the one in the Derby download didn't mention as many jars). More to the point, the way we referenced the jars had to be tweaked. You will see that all the examples are in Unix format, whereas we were developing on a test Windows PC. Therefore, instead of something like (Unix):
grant codeBase "file:///home/someone/derby/lib/derby.jar"
We needed to do:
grant codeBase "file:///C:/Temp/14/lib/derby.jar"
Note the additional '/' after 'file://' - we had assumed it was merely "file://C:....".
There is another solution to the problem, which is to use this code:
https://github.com/apache/hive/blob/master/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
and call:
Policy.setPolicy(new DerbyPolicy());
to set the policy programmatically.
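A minimal sketch of that programmatic approach, assuming you have copied the DerbyPolicy class from the linked Hive source file into your own project (the host and port here are placeholders):

import java.io.PrintWriter;
import java.net.InetAddress;
import java.security.Policy;
import org.apache.derby.drda.NetworkServerControl;

public class StartServerWithDerbyPolicy {
    public static void main(String[] args) throws Exception {
        // Install the permissive policy before any Derby code boots, so the
        // SystemPermission("engine", "usederbyinternals") check passes.
        Policy.setPolicy(new DerbyPolicy());

        NetworkServerControl server =
                new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
        server.start(new PrintWriter(System.out));
    }
}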
Related
Update: This has been resolved. It was a typo in the URL.
--
I'm trying to read data from Netezza using pyspark on Windows 10 1909.
I can read from it using DbVisualizer no problem. Then I tried running pyspark --driver-class-path <path to nzjdbc.jar> --jars <path to nzjdbc.jar> --master local[*] (same machine, VPN connection, JDBC driver jar, and all).
I used this code from the pyspark shell:
dataframe = spark.read.format("jdbc").options(
url="jdbc:netezza://<server>:5480/<database>",
dbtable="ADMIN.<table>",
user="***",
password="***",
driver="org.netezza.Driver",
).load()
but this fails for me, with the following stack, after about 10-20 seconds (I also tried adding queryTimeout="300", but that didn't make a difference):
"...\AppData\Local\Continuum\miniconda3\envs\spark\lib\site-packages\pyspark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o41.load.
: org.netezza.error.NzSQLException: Connection timed out: connect
at org.netezza.sql.NzConnection.initSocket(NzConnection.java:2859)
at org.netezza.sql.NzConnection.open(NzConnection.java:293)
at org.netezza.datasource.NzDatasource.getConnection(NzDatasource.java:675)
at org.netezza.datasource.NzDatasource.getConnection(NzDatasource.java:662)
at org.netezza.Driver.connect(Driver.java:155)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:64)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:339)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:279)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:268)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:268)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
A coworker is able to run the same code from his Mac with no issues (also on VPN).
Is there something in Windows or in Netezza itself that could affect which clients are able to connect to Netezza? Or could I be missing something in the pyspark command?
Can you try increasing the LoginTimeout value? FYI, queryTimeout refers to the timeout for a single query.
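For what it's worth, a plain-JDBC sketch of what raising the login timeout could look like. The standard DriverManager.setLoginTimeout call applies to any driver that supports it; whether nzjdbc also honors a "loginTimeout" connection property is an assumption, so check the driver documentation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NetezzaLoginTimeoutCheck {
    public static void main(String[] args) throws Exception {
        // Global login timeout in seconds, honored by drivers that support it.
        DriverManager.setLoginTimeout(300);

        Properties props = new Properties();
        props.setProperty("user", "***");
        props.setProperty("password", "***");
        // Hypothetical driver-specific property; verify the exact name
        // against the nzjdbc documentation before relying on it.
        props.setProperty("loginTimeout", "300");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:netezza://<server>:5480/<database>", props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}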
I've found a typo in my URL. That was a really sneaky one.
After running the hive command, it failed to create the database.
I am following the official "Getting Started" guide on the Apache website.
NestedThrowables:
java.sql.SQLException: Unable to open a test connection to the given
database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true,
username = APP. Terminating connection pool (set lazyInit to true if you
expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details.
Caused by: java.sql.SQLException: Directory /opt/hive/bin/metastore_db cannot be created.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
... 80 more
Caused by: ERROR XBM0H: Directory /opt/hive/bin/metastore_db cannot be created.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.services.monitor.StorageFactoryService$10.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
... 80 more
Check that the particular user has permission to create /opt/hive/bin/metastore_db.
If not, then add permission:
sudo chmod -R 777 /opt/hive/bin/metastore_db
Hope this helps.
Use chown -R hduser:hduser xxxxxxxxx/hive/ (or whatever username/usergroup you are using) to grant ownership, and hence full permissions, including write/create.
(I don't know whether there is a default hive user/group named hive:hive; I created one, but granting ownership to that newly created hive user didn't work.)
Basically, the installation instructions assume you (and hence Hadoop and Hive, which inherit your rights) have sufficient permissions.
(Note: this is roughly equivalent to running the command prompt in Admin Mode on Windows, even though you are already an admin. Linux tends to be flexible in many things, which can cause some chaos.)
I just realized that in hive-site.xml, and in the installation steps that create /user/hive/warehouse, "user" can easily be confused with "usr" without any alert at all. So remember:
under hadoop fs, the directory to create is /user/hive/warehouse (NOT /usr/hive/warehouse), which is where the metastore_db folder points.
I'm trying to run SonarQube.
Installing through cmd as Admin:
InstallNTService.bat
StartSonar.bat
--> Wrapper Started as Console
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2016.03.25 21:09:40 INFO app[o.s.a.AppFileSystem] Cleaning or creating temp directory C:\sonarqube-5.4\temp
WrapperSimpleApp: Encountered an error running main:java.lang.RuntimeException: Failed to reset file system
java.lang.RuntimeException: Failed to reset file system
at org.sonar.process.monitor.Monitor.resetFileSystem(Monitor.java:125)
at org.sonar.process.monitor.Monitor.startProcesses(Monitor.java:105)
at org.sonar.process.monitor.Monitor.start(Monitor.java:99)
at org.sonar.application.App.start(App.java:51)
at org.sonar.application.App.main(App.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:240)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.file.AccessDeniedException: C:\sonarqube-5.4\temp\jffi837955644087697080.tmp
at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
at java.nio.file.Files.delete(Unknown Source)
at org.sonar.process.FileUtils$CleanDirectoryFileVisitor.visitFile(FileUtils.java:151)
at org.sonar.process.FileUtils$CleanDirectoryFileVisitor.visitFile(FileUtils.java:135)
at java.nio.file.Files.walkFileTree(Unknown Source)
at org.sonar.process.FileUtils.cleanDirectoryImpl(FileUtils.java:123)
at org.sonar.process.FileUtils.cleanDirectory(FileUtils.java:60)
at org.sonar.application.AppFileSystem.createOrCleanDirectory(AppFileSystem.java:116)
at org.sonar.application.AppFileSystem.reset(AppFileSystem.java:73)
at org.sonar.process.monitor.Monitor.resetFileSystem(Monitor.java:122)
... 10 more
<-- Wrapper Stopped
I'm getting these errors:
WrapperSimpleApp: Encountered an error running main:java.lang.RuntimeException: Failed to reset file system
java.lang.RuntimeException: Failed to reset file system
Caused by: java.nio.file.AccessDeniedException: C:\sonarqube-5.4\temp\jffi837955644087697080.tmp
Please assist.
Kill the Java process and try deleting the temp folder contents again. Worked for me.
The user that is running SonarQube (look at the user details in the Windows Services screen) must have read/write rights on several sub-directories in C:\Sonarqube-6.X. We assigned read/write rights to the whole C:\Sonarqube-6.X directory tree. Before retrying, you can safely delete the temp directory. It is also essential that you unblock the SonarQube zip file after download and before unzipping.
I had the same issue but was unable to delete the temp directory because it was locked. I restarted my computer, fired up SonarQube, and it started without a problem. I'm guessing that when I last closed my SonarQube session, some resource was still holding onto the temp folder and wouldn't release it, but I couldn't find a SonarQube process to kill in Task Manager.
You can follow the steps below; it worked for me:
1: Stop the running sonar cmd window.
2: Open Task Manager and kill the Java process, then manually delete the temp folder.
3: Run again.
Thanks,
Vinod
I also got the same problem.
Use the command:
source ~/.bash_profile (2 times)
then go and execute sonarqn.
So, my google-fu is weak...
I could not find another instance of my errors.
I've been having problems with my TeamCity NuGet repository for about a day.
NuGet downloads from the repo fail with unexpected EOFs or corrupt-package warnings.
As far as I can tell this is not a hardware failure; the VM and VM host do not report disk errors.
To add insult to injury, the TeamCity log file 'teamcity-javaLogging-2013-07-17.log' grows unbounded (+3 GB in the time it took to type this, where 10 MB a day is normal) with stack traces like the ones below.
My TeamCity version is 7.1.5 (build 24400).
Does anyone know how to recover from this failure?
I've not yet summoned up enough courage to just clear all the caches I can find on the TeamCity admin page (Administration > Diagnostics > Caches), because there is a warning in scary yellow on that page telling me not to do that.
Below is a sample of the stack traces I'm getting.
17-jul-2013 3:00:02 net.sf.ehcache.store.DiskStore get
SEVERE: provider-nugetCache: Could not read disk store element for key 2731. Error was unexpected EOF in middle of data block
java.io.StreamCorruptedException: unexpected EOF in middle of data block
at java.io.ObjectInputStream$BlockDataInputStream.refill(Unknown Source)
at java.io.ObjectInputStream$BlockDataInputStream.read(Unknown Source)
at java.io.DataInputStream.readInt(Unknown Source)
at java.io.ObjectInputStream$BlockDataInputStream.readInt(Unknown Source)
at java.io.ObjectInputStream.readInt(Unknown Source)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.SerializableEntry.readSplitted(SerializableEntry.java:5)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.EntryImpl.readObjectInternal(EntryImpl.java:34)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.SerializableEntry.readExternal(SerializableEntry.java:16)
at java.io.ObjectInputStream.readExternalData(Unknown Source)
at java.io.ObjectInputStream.readOrdinaryObject(Unknown Source)
at java.io.ObjectInputStream.readObject0(Unknown Source)
at java.io.ObjectInputStream.defaultReadFields(Unknown Source)
at java.io.ObjectInputStream.readSerialData(Unknown Source)
at java.io.ObjectInputStream.readOrdinaryObject(Unknown Source)
at java.io.ObjectInputStream.readObject0(Unknown Source)
at java.io.ObjectInputStream.readObject(Unknown Source)
at net.sf.ehcache.store.DiskStore.loadElementFromDiskElement(DiskStore.java:313)
at net.sf.ehcache.store.DiskStore.get(DiskStore.java:268)
at net.sf.ehcache.Cache.searchInDiskStore(Cache.java:1290)
at net.sf.ehcache.Cache.get(Cache.java:904)
at net.sf.ehcache.Cache.get(Cache.java:879)
at jetbrains.buildServer.serverSide.metadata.impl.cache.TypedCacheImpl.getValue(TypedCacheImpl.java:3)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.MetadataStorageImpl.getReportedKeys(MetadataStorageImpl.java:7)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.MetadataStorageImpl.removeBuild(MetadataStorageImpl.java:45)
at jetbrains.buildServer.serverSide.metadata.impl.indexer.BuildIndexCleaner.performCleanup(BuildIndexCleaner.java:16)
at jetbrains.buildServer.serverSide.impl.cleanup.HistoryEntryCleaner.cleanupExtensionsData(HistoryEntryCleaner.java:38)
at jetbrains.buildServer.serverSide.impl.cleanup.HistoryEntryCleaner.performCleanup(HistoryEntryCleaner.java:138)
at jetbrains.buildServer.serverSide.impl.cleanup.HistoryEntryCleaner.performCleanup(HistoryEntryCleaner.java:132)
at jetbrains.buildServer.serverSide.impl.cleanup.ServerCleanupManagerImpl$3.performCleanup(ServerCleanupManagerImpl.java)
at jetbrains.buildServer.serverSide.db.DBFacade$1$1.doInConnection(DBFacade.java:178)
at jetbrains.buildServer.serverSide.db.DBFacade$6.doInConnection(DBFacade.java:415)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:341)
at jetbrains.buildServer.serverSide.db.DBFacade._runSql(DBFacade.java:411)
at jetbrains.buildServer.serverSide.db.DBFacade.access$000(DBFacade.java:33)
at jetbrains.buildServer.serverSide.db.DBFacade$1.doInTransaction(DBFacade.java:174)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:130)
at jetbrains.buildServer.serverSide.db.DBFacade.compact(DBFacade.java:171)
at jetbrains.buildServer.serverSide.impl.cleanup.ServerCleanupManagerImpl.startCleanup(ServerCleanupManagerImpl.java:74)
at jetbrains.buildServer.serverSide.impl.cleanup.ServerCleanupManagerImpl$2.run(ServerCleanupManagerImpl.java:0)
at java.util.TimerThread.mainLoop(Unknown Source)
at java.util.TimerThread.run(Unknown Source)
17-jul-2013 3:00:02 net.sf.ehcache.store.DiskStore remove
SEVERE: provider-nugetCache: Could not remove disk store entry for key 2731. Error was unexpected EOF in middle of data block
java.io.StreamCorruptedException: unexpected EOF in middle of data block
at java.io.ObjectInputStream$BlockDataInputStream.refill(Unknown Source)
at java.io.ObjectInputStream$BlockDataInputStream.read(Unknown Source)
at java.io.DataInputStream.readInt(Unknown Source)
at java.io.ObjectInputStream$BlockDataInputStream.readInt(Unknown Source)
at java.io.ObjectInputStream.readInt(Unknown Source)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.SerializableEntry.readSplitted(SerializableEntry.java:5)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.EntryImpl.readObjectInternal(EntryImpl.java:34)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.SerializableEntry.readExternal(SerializableEntry.java:16)
at java.io.ObjectInputStream.readExternalData(Unknown Source)
at java.io.ObjectInputStream.readOrdinaryObject(Unknown Source)
at java.io.ObjectInputStream.readObject0(Unknown Source)
at java.io.ObjectInputStream.defaultReadFields(Unknown Source)
at java.io.ObjectInputStream.readSerialData(Unknown Source)
at java.io.ObjectInputStream.readOrdinaryObject(Unknown Source)
at java.io.ObjectInputStream.readObject0(Unknown Source)
at java.io.ObjectInputStream.readObject(Unknown Source)
at net.sf.ehcache.store.DiskStore.loadElementFromDiskElement(DiskStore.java:313)
at net.sf.ehcache.store.DiskStore.remove(DiskStore.java:483)
at net.sf.ehcache.Cache.remove(Cache.java:1465)
at net.sf.ehcache.Cache.remove(Cache.java:1392)
at net.sf.ehcache.Cache.remove(Cache.java:1350)
at net.sf.ehcache.Cache.remove(Cache.java:1328)
at jetbrains.buildServer.serverSide.metadata.impl.cache.TypedCacheImpl.remove(TypedCacheImpl.java:16)
at jetbrains.buildServer.serverSide.metadata.impl.metadata.MetadataStorageImpl.removeBuild(MetadataStorageImpl.java:30)
at jetbrains.buildServer.serverSide.metadata.impl.indexer.BuildIndexCleaner.performCleanup(BuildIndexCleaner.java:16)
at jetbrains.buildServer.serverSide.impl.cleanup.HistoryEntryCleaner.cleanupExtensionsData(HistoryEntryCleaner.java:38)
at jetbrains.buildServer.serverSide.impl.cleanup.HistoryEntryCleaner.performCleanup(HistoryEntryCleaner.java:138)
at jetbrains.buildServer.serverSide.impl.cleanup.HistoryEntryCleaner.performCleanup(HistoryEntryCleaner.java:132)
at jetbrains.buildServer.serverSide.impl.cleanup.ServerCleanupManagerImpl$3.performCleanup(ServerCleanupManagerImpl.java)
at jetbrains.buildServer.serverSide.db.DBFacade$1$1.doInConnection(DBFacade.java:178)
at jetbrains.buildServer.serverSide.db.DBFacade$6.doInConnection(DBFacade.java:415)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:341)
at jetbrains.buildServer.serverSide.db.DBFacade._runSql(DBFacade.java:411)
at jetbrains.buildServer.serverSide.db.DBFacade.access$000(DBFacade.java:33)
at jetbrains.buildServer.serverSide.db.DBFacade$1.doInTransaction(DBFacade.java:174)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:130)
at jetbrains.buildServer.serverSide.db.DBFacade.compact(DBFacade.java:171)
at jetbrains.buildServer.serverSide.impl.cleanup.ServerCleanupManagerImpl.startCleanup(ServerCleanupManagerImpl.java:74)
at jetbrains.buildServer.serverSide.impl.cleanup.ServerCleanupManagerImpl$2.run(ServerCleanupManagerImpl.java:0)
at java.util.TimerThread.mainLoop(Unknown Source)
at java.util.TimerThread.run(Unknown Source)
As far as I know, the TeamCity NuGet Server is subject to the artifact cleanup policy defined in "Administration | Project-related Settings | Build History Clean-up", so make sure your package is still there!
My personal advice is to ALWAYS set up a dedicated NuGet server.
You can set one up for free simply by cloning the official NuGetGallery project on GitHub: it's the same codebase used by nuget.org, so you'll have a familiar UI and better performance (NuGetGallery leverages Lucene.NET indexing capabilities).
As Remco said, you can clear the package cache by going to http://{teamcity}/admin/admin.html?item=diagnostics&tab=cache and clicking "reset" next to "buildsMetadata". This will remove all the NuGet packages from your feed until you reindex.
You can reindex the NuGet package(s) generated by an individual build by calling TeamCity's REST API. To reindex all builds, you'd have to write a script that loops over all the builds and reindexes each one.
Bug reports:
http://youtrack.jetbrains.com/issue/TW-25384
http://youtrack.jetbrains.com/issue/TW-23576
Sample reindex script:
http://youtrack.jetbrains.com/issue/TW-19411#comment=27-408230
It seems that the index for the TeamCity NuGet Server got corrupted somehow; the packages themselves were fine, so clearing the cache was the solution.
However, I didn't figure out how the index could be repopulated.
Be warned: clearing the NuGet package cache removes all packages from the TeamCity NuGet Server, so you start with a clean slate...
We have Sonar set up to run on a separate server. It runs, and a client application (sonar-runner) can connect to it successfully. However, the run is interrupted with the following exception:
Runner configuration file: C:\Program Files (x86)\sonar-runner-1.3\bin\..\conf\sonar-runner.properties
Project configuration file: C:\project\subproject\sonar-project.properties
Runner version: 1.3
Java version: 1.6.0_33, vendor: Sun Microsystems Inc.
OS name: "Windows 7", version: "6.1", arch: "x86"
Server: http://<serverip>:80
Work directory: C:\project\subproject\.sonar
Total time: 1:30.902s
Final Memory: 0M/15M
Exception in thread "main" org.sonar.batch.bootstrapper.BootstrapException: org.sonar.batch.bootstrapper.BootstrapException: Fail to download the file: http://<serverip>:80/batch/guava-10.0.1.jar
at org.sonar.batch.bootstrapper.Bootstrapper.downloadBatchFiles(Bootstrapper.java:164)
at org.sonar.batch.bootstrapper.Bootstrapper.createClassLoader(Bootstrapper.java:87)
at org.sonar.runner.Runner.createClassLoader(Runner.java:155)
at org.sonar.runner.Runner.execute(Runner.java:78)
at org.sonar.runner.Main.main(Main.java:61)
Caused by: org.sonar.batch.bootstrapper.BootstrapException: Fail to download the file: http://<serverip>:80/batch/guava-10.0.1.jar
at org.sonar.batch.bootstrapper.Bootstrapper.remoteContentToFile(Bootstrapper.java:113)
at org.sonar.batch.bootstrapper.Bootstrapper.downloadBatchFiles(Bootstrapper.java:159)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read1(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at sun.net.www.http.ChunkedInputStream.readAheadBlocking(Unknown Source)
at sun.net.www.http.ChunkedInputStream.readAhead(Unknown Source)
at sun.net.www.http.ChunkedInputStream.read(Unknown Source)
at java.io.FilterInputStream.read(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(Unknown Source)
at org.sonar.batch.bootstrapper.BootstrapperIOUtils.copyLarge(BootstrapperIOUtils.java:63)
at org.sonar.batch.bootstrapper.Bootstrapper.remoteContentToFile(Bootstrapper.java:109)
... 5 more
I can reproduce this with a normal browser. Retrieving the file opens the download manager; however, it takes up to 5 minutes until the file finally downloads (it's only 1.5 MB). Other files retrieved by the sonar-runner or a browser do not have this problem.
The Sonar logging doesn't seem to know that there is a problem. Downloads are not logged in the sonar.log file, neither the successful ones nor the unsuccessful one, and syslog doesn't contain any hints of problems.
I had a similar problem with Sonar plus the PHP plugin and ESET Smart Security. I had to disable filtering on 127.0.0.1 in the protocol filtering section. It was happening randomly on different jars, and it happened with both the Sonar Ant task and the Sonar runner.
So the solution was not something on the server, but rather a client-side problem. Kaspersky Endpoint Security seems to have a bug/feature where it scans everything going over the network, and somehow this one JAR file triggered a multi-minute delay while it was being scanned.