I'm trying to upload a 6MB file to my JHipster app server. However, I get the following error. Where can I find the related configuration?
io.undertow.server.RequestTooBigException: UT000020: Connection terminated as request was larger than 10485760
at io.undertow.conduits.FixedLengthStreamSourceConduit.checkMaxSize(FixedLengthStreamSourceConduit.java:168)
at io.undertow.conduits.FixedLengthStreamSourceConduit.read(FixedLengthStreamSourceConduit.java:229)
at org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127)
at io.undertow.channels.DetachableStreamSourceChannel.read(DetachableStreamSourceChannel.java:209)
at io.undertow.server.HttpServerExchange$ReadDispatchChannel.read(HttpServerExchange.java:2332)
at org.xnio.channels.Channels.readBlocking(Channels.java:294)
at io.undertow.servlet.spec.ServletInputStreamImpl.readIntoBuffer(ServletInputStreamImpl.java:192)
at io.undertow.servlet.spec.ServletInputStreamImpl.read(ServletInputStreamImpl.java:168)
at io.undertow.server.handlers.form.MultiPartParserDefinition$MultiPartUploadHandler.parseBlocking(MultiPartParserDefinition.java:213)
at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:792)
Spring Boot has the following default properties:
spring.servlet.multipart.max-file-size=1MB # Max file size. Values can use the suffixes "MB" or "KB" to indicate megabytes or kilobytes, respectively.
spring.servlet.multipart.max-request-size=10MB # Max request size. Values can use the suffixes "MB" or "KB" to indicate megabytes or kilobytes, respectively.
10485760 bytes = 10MB, which is exactly the default max-request-size limit hit in the error above.
See the file upload Spring Boot guide.
For Spring Boot 1.5.13.RELEASE, try these properties:
spring.http.multipart.max-request-size=100MB
spring.http.multipart.max-file-size=100MB
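On Spring Boot 2.x the prefix is spring.servlet.multipart (as in the defaults above). If you prefer to raise the limits in code instead of properties, here is a minimal sketch, assuming Spring Boot 2.1+ where MultipartConfigFactory accepts DataSize values (the 20MB figures are illustrative):
import javax.servlet.MultipartConfigElement;
import org.springframework.boot.web.servlet.MultipartConfigFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.unit.DataSize;

@Configuration
public class MultipartConfig {
    // Programmatic equivalent of spring.servlet.multipart.max-file-size
    // and spring.servlet.multipart.max-request-size.
    @Bean
    public MultipartConfigElement multipartConfigElement() {
        MultipartConfigFactory factory = new MultipartConfigFactory();
        factory.setMaxFileSize(DataSize.ofMegabytes(20));
        factory.setMaxRequestSize(DataSize.ofMegabytes(20));
        return factory.createMultipartConfig();
    }
}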
At the container level, there is the maxPostSize property, which can be specified directly on the connector.
From the docs:
The maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to 0. If not specified, this attribute is set to 2097152 (2 megabytes).
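If you are on Spring Boot 2.x with an embedded Tomcat, the same attribute can be applied through a connector customizer; a minimal sketch (the 50000000 value is illustrative, and on a standalone Tomcat the attribute goes on the <Connector> element in server.xml instead):
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MaxPostSizeConfig {
    // Mirrors the maxPostSize connector attribute quoted above;
    // a value <= 0 disables the limit entirely.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> maxPostSizeCustomizer() {
        return factory -> factory.addConnectorCustomizers(
                connector -> connector.setMaxPostSize(50_000_000));
    }
}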
In addition to maslbl4's answer, for those who use a .yml configuration file like me, use the following:
spring:
  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 100MB
It worked fine for me.
Open the standalone-full.xml file and set max-post-size="50000000" on the HTTP listener. Here I set the size to 50MB, and my project runs successfully.
<server name="default-server">
<http-listener name="default" max-post-size="50000000" socket-binding="http" redirect-socket="https" enable-http2="true"/>
<https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<http-invoker security-realm="ApplicationRealm"/>
</host>
</server>
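If you run the embedded Undertow that ships with Spring Boot rather than a standalone WildFly, a hedged equivalent is to raise the server's entity-size limit; a minimal sketch for Spring Boot 2.x (50 MB is illustrative):
import io.undertow.UndertowOptions;
import org.springframework.boot.web.embedded.undertow.UndertowServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UndertowPostSizeConfig {
    // Embedded counterpart of the max-post-size attribute in standalone-full.xml:
    // caps the request body size at the listener level.
    @Bean
    public WebServerFactoryCustomizer<UndertowServletWebServerFactory> undertowCustomizer() {
        return factory -> factory.addBuilderCustomizers(builder ->
                builder.setServerOption(UndertowOptions.MAX_ENTITY_SIZE, 50_000_000L));
    }
}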
Using Spring Boot version 2.3.12, you can use the properties below.
spring.servlet.multipart.max-file-size=35MB
spring.servlet.multipart.max-request-size=35MB
Note: the older spring.http.multipart.* prefix is deprecated; use spring.servlet.multipart.* as shown above.
Related
I want to transfer files up to 16MB through websockets. When I try to send a file larger than 3MB, I get the following error:
Warning: Unexpected error, closing connection.
java.lang.IllegalArgumentException: Buffer overflow.
at org.glassfish.tyrus.core.Utils.appendBuffers(Utils.java:346)
at org.glassfish.tyrus.core.TyrusWebSocketEngine$TyrusReadHandler.handle(TyrusWebSocketEngine.java:523)
I have read that the buffer size can be changed in the glassfish-web.xml file by adding:
<context-param>
    <param-name>org.glassfish.tyrus.servlet.incoming-buffer-size</param-name>
    <param-value>17000000</param-value>
</context-param>
into:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN" "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app>
<context-root>/PROJECT</context-root>
</glassfish-web-app>
but it didn't work for me. Is there any other option, or am I doing something wrong?
I have followed this manual to migrate from GlassFish to WildFly:
http://wildfly.org/news/2014/02/06/GlassFish-to-WildFly-migration/
However, I'm getting the following error when running my application in WildFly:
ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "exampleProject-ear-1.0-SNAPSHOT.ear")]) - failure description: {"WFLYCTL0180: Services with missing/unavailable dependencies" => [
"jboss.persistenceunit.\"exampleProject-ear-1.0-SNAPSHOT.ear/exampleProject-web-1.0-SNAPSHOT.war#exampleProjectPU\".FIRST_PHASE is missing [jboss.naming.context.java.jdbc.__TimerPool]",
"jboss.persistenceunit.\"exampleProject-ear-1.0-SNAPSHOT.ear/exampleProject-web-1.0-SNAPSHOT.war#exampleProjectPU\" is missing [jboss.naming.context.java.jdbc.__TimerPool]"
]}
The error mentions jboss.naming.context.java.jdbc.__TimerPool. Any idea what I should do? I'm using WildFly 10 with MySQL as the database.
Forget about this. __TimerPool was the name of a datasource in GlassFish, and I was using it without knowing it. I simply removed the persistence.xml file that referenced it, and it worked.
Check your standalone.xml. It should have a datasource with pool-name "exampleProjectPU", something like this. Remove the full XML block:
<datasources>
    <datasource jndi-name="xxx:exampleProjectPU" pool-name="exampleProjectPU" enabled="true">
        <connection-url>jdbc:oracle:thin:@//host:port/SID</connection-url>
        <driver>oracle</driver>
        <security>
            <user-name></user-name>
            <password></password>
        </security>
    </datasource>
</datasources>
Go to the deployments folder and check if there is any sample project named "example project.war". If yes, remove it and start the server again. It should work fine.
Try changing your MySQL connector to the bin file, e.g. mysql-connector-java-5.1.47-bin.
Make sure the datasource name in persistence.xml is the same as the jndi-name.
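For reference, a minimal persistence.xml sketch of that match; the unit name comes from the error above, everything else is illustrative:
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="exampleProjectPU" transaction-type="JTA">
        <!-- must match the jndi-name of the datasource in standalone.xml -->
        <jta-data-source>xxx:exampleProjectPU</jta-data-source>
    </persistence-unit>
</persistence>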
In Oracle Coherence 12, what is the backing-map-scheme that can provide durable storage (NOT a database)?
For example, Redis writes to an RDB/AOF file and restores KV entries after a restart.
I configured persistence environments in the operational config:
<persistence-environments>
<!-- -Dcoherence.distributed.persistence.base.dir=override USR_HOME -->
<persistence-environment id="stage_env_w_active_store">
<persistence-mode>active</persistence-mode>
<active-directory system-property="coherence.distributed.persistence.active.dir">
/opt/datastore/staged/active</active-directory>
<snapshot-directory system-property="coherence.distributed.persistence.snapshot.dir">
/opt/datastore/staged/snapshot</snapshot-directory>
<trash-directory system-property="coherence.distributed.persistence.trash.dir">
/opt/datastore/staged/trash</trash-directory>
</persistence-environment>
<persistence-environment id="stage_env_w_ondemand_store">
<persistence-mode>on-demand</persistence-mode>
<active-directory system-property="coherence.distributed.persistence.active.dir">
/opt/datastore/staged/dactive</active-directory>
<snapshot-directory system-property="coherence.distributed.persistence.snapshot.dir">
/opt/datastore/staged/dsnapshot</snapshot-directory>
<trash-directory system-property="coherence.distributed.persistence.trash.dir">
/opt/datastore/staged/dtrash</trash-directory>
</persistence-environment>
</persistence-environments>
I configured the backing-map-scheme persistence in the cache scheme:
<distributed-scheme>
<scheme-name>server</scheme-name>
<service-name>PartitionedCache</service-name>
<local-storage system-property="coherence.distributed.localstorage">true</local-storage>
<backing-map-scheme>
<local-scheme>
<high-units>{back-limit-bytes 0B}</high-units>
</local-scheme>
</backing-map-scheme>
<persistence>
<environment>stage_env_w_active_store</environment>
</persistence>
<autostart>true</autostart>
</distributed-scheme>
The "Active Space Used on disk (MB)" shows apt space used in JMX JVisualVM.
We are trying to set up an ActiveMQ cluster in a production environment on Amazon EC2, with auto-discovery and multicast.
I was able to successfully configure auto-discovery with multicast on my local ActiveMQ server, but on Amazon EC2 it is not working.
From the link, I found that Amazon EC2 does not support multicast. Hence we have to use HTTP transport or a VPN for multicast. I tried HTTP transport for multicast by downloading activemq-optional-5.6.jar (we are using ActiveMQ 5.6). It requires the httpcore and httpclient jars to be on its classpath.
In the broker configuration (activemq.xml), the following is added:
<networkConnectors>
<networkConnector name="default" uri="http://localhost:8161/activemq/DiscoveryRegistryServlet"/>
</networkConnectors>
<transportConnectors>
<transportConnector name="activemq" uri="tcp://localhost:61616" discoveryUri="http://localhost:8161/activemq/DiscoveryRegistryServlet"/>
</transportConnectors>
But the broker is not finding the DiscoveryRegistryServlet.
Any help is much appreciated.
Finally figured out how to set up ActiveMQ auto-discovery with HTTP.
ActiveMQ broker configuration:
In the $ACTIVEMQ_HOME/webapps folder, create a new folder structure:
|_activemq
|_WEB-INF
|_classes
|_web.xml
Create the web.xml file with the following contents:
<web-app>
<display-name>ActiveMQ Message Broker Web Application</display-name>
<description>
Provides an embedded ActiveMQ Message Broker embedded inside a web application
</description>
<!-- context config -->
<context-param>
<param-name>org.apache.activemq.brokerURL</param-name>
<param-value>tcp://localhost:61617</param-value>
<description>The URL that the embedded broker should listen on in addition to HTTP</description>
</context-param>
<!-- servlet mappings -->
<servlet>
<servlet-name>DiscoveryRegistryServlet</servlet-name>
<servlet-class>org.apache.activemq.transport.discovery.http.DiscoveryRegistryServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>DiscoveryRegistryServlet</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
</web-app>
Place httpclient-4.0.3.jar, httpcore-4.3.jar, xstream-1.4.5.jar and activemq-optional-5.6.0.jar in the $ACTIVEMQ_HOME/lib directory.
In the $ACTIVEMQ_HOME/conf directory, modify the jetty.xml file to expose the activemq web app:
<bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
...
<property name="handler">
<bean id="sec" class="org.eclipse.jetty.server.handler.HandlerCollection">
<property name="handlers">
...
...
<bean class="org.eclipse.jetty.webapp.WebAppContext">
<property name="contextPath" value="/activemq" />
<property name="resourceBase" value="${activemq.home}/webapps/activemq" />
<property name="logUrlOnStart" value="true" />
<property name="parentLoaderPriority" value="true" />
...
...
</list>
</property>
</bean>
</property>
</bean>
Modify the activemq.xml file in the $ACTIVEMQ_HOME/conf directory to use the HTTP protocol:
<broker name="brokerName">
...
<networkConnectors>
<networkConnector name="default" uri="http://<loadbalancer_IP>:<locadbalancer_Port>/activemq/DiscoveryRegistryServlet?group=test"/>
<!--<networkConnector name="default-nc" uri="multicast://default"/>-->
</networkConnectors>
<transportConnectors>
<transportConnector name="http" uri="tcp://0.0.0.0:61618" discoveryUri="http://<loadbalancer_IP>:<locadbalancer_Port>/activemq/test"/>
</transportConnectors>
...
</broker>
Make sure that the broker names are unique. "test" in the URL is the group name of the brokers.
Client configuration:
1. Keep httpclient-4.0.3.jar, httpcore-4.3.jar, xstream-1.4.5.jar and activemq-optional-5.6.0.jar on the client's classpath.
2. URL to be used by the client:
discovery:(http://<loadbalancer_IP>:<loadbalancer_Port>/activemq/test)?connectionTimeout=10000
Here "test" is the group name.
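To illustrate, a minimal client sketch, assuming ActiveMQ 5.x with the jars above on the classpath; the host and port are placeholders for your load balancer:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DiscoveryClientExample {
    public static void main(String[] args) throws Exception {
        // Discovery URL from above; note the '?' before the transport options.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "discovery:(http://loadbalancer:8161/activemq/test)?connectionTimeout=10000");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}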
I managed to set up a Mule project to download a file from an FTP server and save it to a local disk. However, after transferring the file, Mule keeps trying to delete the remote file on the FTP server.
Is there a way to tell Mule not to delete the original file and just leave it as it is?
Here's my project XML:
<?xml version="1.0" encoding="UTF-8"?>
<mule ...>
<flow name="copy-remote-fileFlow1" doc:name="copy-remote-fileFlow1">
<ftp:inbound-endpoint host="ftp.secureftp-test.com" port="21" path="subdir1" user="test" password="test" pollingFrequency="60000" responseTimeout="10000" doc:name="FTP">
<file:filename-wildcard-filter pattern="box.ico" />
</ftp:inbound-endpoint>
<file:outbound-endpoint path="I:\test\" outputPattern="fromMule.ico" responseTimeout="10000" doc:name="File" />
</flow>
</mule>
And in my case, I don't have the rights to delete the file, so I get an exception:
ERROR 2013-05-24 17:35:47,286 [[copy-remote-file].connector.ftp.mule.default.receiver.02] org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: Failed to delete file box.ico. Ftp error: 550
java.io.IOException: Failed to delete file box.ico. Ftp error: 550
at org.mule.transport.ftp.FtpMessageReceiver.postProcess(FtpMessageReceiver.java:202)
at com.mulesoft.mule.transport.ftp.EEFtpMessageReceiver.postProcess(EEFtpMessageReceiver.java:71)
at org.mule.transport.ftp.FtpMessageReceiver$FtpWork.run(FtpMessageReceiver.java:316)
at org.mule.work.WorkerContext.run(WorkerContext.java:311)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Your only option is to extend org.mule.transport.ftp.FtpMessageReceiver and override the postProcess method, which is the one that takes care of deleting the file on the FTP server.
To register your custom FtpMessageReceiver, use the service-overrides configuration element on your FTP connector:
<ftp:connector name="nonDeletingFtpConnector">
<service-overrides messageReceiver="com.amce.NonDeletingFtpMessageReceiver" />
</ftp:connector>
Adding a few things to what David already mentioned. The NonDeletingFtpMessageReceiver class constructor should look like this:
public NonDeletingFtpMessageReceiver(EEFtpConnector connector,
Flow flowConstruct, DefaultInboundEndpoint endpoint,
long frequency, String value1, String value2, long value3)
throws CreateException {
super(connector, flowConstruct, endpoint, frequency);
}
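With the constructor in place, the override itself is simply a no-op; a sketch, assuming the Mule 3.x postProcess signature implied by the stack trace above (check your version's FtpMessageReceiver for the exact parameter types):
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.mule.api.MuleMessage;

// inside NonDeletingFtpMessageReceiver:
@Override
protected void postProcess(FTPClient client, FTPFile file, MuleMessage message) throws Exception {
    // Deliberately empty: the inherited implementation is what deletes
    // the remote file after a successful transfer.
}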
Another solution is to set streaming="true" on the FTP connector, which disables the file deletion.
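For example, a sketch assuming the Mule 3.x FTP transport (the connector name is illustrative):
<ftp:connector name="streamingFtpConnector" streaming="true" />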