Universal Image Loader and 302 redirects

I'm using UIL version 1.8.0 to load a Twitter profile image URL:
http://api.twitter.com/1/users/profile_image/smashingmag.jpg?size=bigger
with disc and memory caching enabled. The images fail to load, and the HTML that comes along with the 302 redirect is stored in the disc cache file. The images never load or decode successfully (the onLoadingFailed method of my SimpleImageLoadingListener is called for every Twitter profile image URL). Can anyone load a simple Twitter image URL with UIL?
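For reference, the image is requested roughly like this (a minimal sketch; imageView stands in for whatever view the avatar is bound to, and the listener overrides are omitted here):

String url = "http://api.twitter.com/1/users/profile_image/smashingmag.jpg?size=bigger";
// The listener's onLoadingFailed callback is where I see every Twitter profile image fail.
ImageLoader.getInstance().displayImage(url, imageView, new SimpleImageLoadingListener());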
Here is the content of my cache file for that url:
cat /mnt/sdcard/MyCache/CacheDir/1183818163
<html><body>You are being redirected.</body></html>
Here is my configuration:
File cacheDir = StorageUtils.getOwnCacheDirectory(FrequencyApplication.getContext(), "MyCache/CacheDir");
DisplayImageOptions defaultOptions = new DisplayImageOptions.Builder()
        .cacheInMemory()
        .cacheOnDisc()
        .imageScaleType(ImageScaleType.IN_SAMPLE_POWER_OF_2)
        .build();
ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(FrequencyApplication.getContext())
        .memoryCacheExtraOptions(480, 800)
        .threadPoolSize(20)
        .threadPriority(Thread.MIN_PRIORITY)
        .offOutOfMemoryHandling()
        .memoryCache(new UsingFreqLimitedMemoryCache(2 * 1024 * 1024))
        .discCache(new TotalSizeLimitedDiscCache(cacheDir, 30 * 1024 * 1024))
        .discCacheFileNameGenerator(new HashCodeFileNameGenerator())
        .imageDownloader(new BaseImageDownloader(MyApplication.getContext(), 20 * 1000, 30 * 1000))
        .tasksProcessingOrder(QueueProcessingType.FIFO)
        .defaultDisplayImageOptions(defaultOptions)
        .build();
ImageLoader.getInstance().init(config);

It seems HttpURLConnection can't handle a redirect from HTTP to HTTPS automatically (link). I'll fix it in the next version of the library.
The fix for now - extend BaseImageDownloader and set it into the configuration:
public class MyImageDownloader extends BaseImageDownloader {

    public MyImageDownloader(Context context, int connectTimeout, int readTimeout) {
        super(context, connectTimeout, readTimeout);
    }

    @Override
    protected InputStream getStreamFromNetwork(URI imageUri, Object extra) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) imageUri.toURL().openConnection();
        conn.setConnectTimeout(connectTimeout);
        conn.setReadTimeout(readTimeout);
        conn.connect();
        // Follow 3xx redirects manually, e.g. the 302 that sends HTTP to HTTPS.
        while (conn.getResponseCode() >= 300 && conn.getResponseCode() < 400) {
            String redirectUrl = conn.getHeaderField("Location");
            conn = (HttpURLConnection) new URL(redirectUrl).openConnection();
            conn.setConnectTimeout(connectTimeout);
            conn.setReadTimeout(readTimeout);
            conn.connect();
        }
        return new FlushedInputStream(conn.getInputStream(), BUFFER_SIZE);
    }
}
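Then register the custom downloader in place of the stock one when building the configuration, something like this (a small sketch reusing the context and timeouts from the configuration above):

ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(MyApplication.getContext())
        // ... same options as above ...
        .imageDownloader(new MyImageDownloader(MyApplication.getContext(), 20 * 1000, 30 * 1000))
        .build();
ImageLoader.getInstance().init(config);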

Related

How to use RemoteFileTemplate<SmbFile> in Spring Integration?

I've got a Spring @Component where an SmbSessionFactory is injected to create a RemoteFileTemplate<SmbFile>. When my application runs, this piece of code is called multiple times:
public void process(Message myMessage, String filename) {
    StopWatch stopWatch = StopWatch.createStarted();
    byte[] bytes = marshallMessage(myMessage);
    String destination = smbConfig.getDir() + filename + ".xml";
    if (log.isDebugEnabled()) {
        log.debug("Result: {}", new String(bytes));
    }
    Optional<IOException> optionalEx =
        remoteFileTemplate.execute(
            session -> {
                try (InputStream inputStream = new ByteArrayInputStream(bytes)) {
                    session.write(inputStream, destination);
                } catch (IOException e1) {
                    return Optional.of(e1);
                }
                return Optional.empty();
            });
    log.info("processed Message in {}", stopWatch.formatTime());
    optionalEx.ifPresent(
        ioe -> {
            throw new UncheckedIOException(ioe);
        });
}
This works (i.e. the file is written) and all is fine, except that I see warnings appearing in my log:
DEBUG my.package.MyClass Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>....
INFO org.springframework.integration.smb.session.SmbSessionFactory SMB share init: XXX
WARN jcifs.smb.SmbResourceLocatorImpl Path consumed out of range 15
WARN jcifs.smb.SmbTreeImpl Disconnected tree while still in use SmbTree[share=XXX,service=null,tid=1,inDfs=true,inDomainDfs=true,connectionState=3,usage=2]
INFO org.springframework.integration.smb.session.SmbSession Successfully wrote remote file [path\to\myfile.xml].
WARN jcifs.smb.SmbSessionImpl Logging off session while still in use SmbSession[credentials=XXX,targetHost=XXX,targetDomain=XXX,uid=0,connectionState=3,usage=1]:[SmbTree[share=XXX,service=null,tid=1,inDfs=false,inDomainDfs=false,connectionState=0,usage=1], SmbTree[share=XXX,service=null,tid=5,inDfs=false,inDomainDfs=false,connectionState=2,usage=0]]
jcifs.smb.SmbTransportImpl Disconnecting transport while still in use Transport746[XXX/999.999.999.999:445,state=5,signingEnforced=false,usage=1]: [SmbSession[credentials=XXX,targetHost=XXX,targetDomain=XXX,uid=0,connectionState=2,usage=1], SmbSession[credentials=XXX,targetHost=XXX,targetDomain=null,uid=0,connectionState=2,usage=0]]
INFO my.package.MyClass processed Message in 00:00:00.268
The process method is called from a REST method, which does little else.
What am I doing wrong here?
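For reference, the session factory and template are wired roughly like this (a minimal sketch with placeholder host, share and credentials; not my exact configuration):

import jcifs.smb.SmbFile;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.file.remote.RemoteFileTemplate;
import org.springframework.integration.smb.session.SmbSessionFactory;

@Configuration
public class SmbConfiguration {

    @Bean
    public SmbSessionFactory smbSessionFactory() {
        SmbSessionFactory factory = new SmbSessionFactory();
        factory.setHost("smb-host");          // placeholder
        factory.setDomain("DOMAIN");          // placeholder
        factory.setUsername("user");          // placeholder
        factory.setPassword("secret");        // placeholder
        factory.setShareAndDir("share/dir/"); // placeholder
        return factory;
    }

    @Bean
    public RemoteFileTemplate<SmbFile> remoteFileTemplate(SmbSessionFactory smbSessionFactory) {
        return new RemoteFileTemplate<>(smbSessionFactory);
    }
}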

Logging to Elasticsearch with Serilog in ASP.NET Web API

I am trying to log to Elasticsearch from one of my ASP.NET Web API projects using Serilog, but unfortunately I can't find the logs in Kibana.
public class Logger
{
    private readonly ILogger _localLogger;

    public Logger()
    {
        ElasticsearchSinkOptions options = new ElasticsearchSinkOptions(new Uri("xxx"))
        {
            IndexFormat = "log-myservice-dev",
            AutoRegisterTemplate = true,
            ModifyConnectionSettings = (c) => c.BasicAuthentication("yyy", "zzz"),
            NumberOfShards = 2,
            NumberOfReplicas = 0
        };

        _localLogger = new LoggerConfiguration()
            .MinimumLevel.Information()
            .WriteTo.File(HttpContext.Current.Server.MapPath("~/logs/log-.txt"), rollingInterval: RollingInterval.Day)
            .WriteTo.Elasticsearch(options)
            .CreateLogger();
    }

    public void LogError(string error)
    {
        _localLogger.Error(error);
    }

    public void LogInformation(string information)
    {
        _localLogger.Information(information);
    }
}
I can see the logs in the file specified above, just not in Elasticsearch. So I am wondering whether there is any way I can debug why it fails to log to Elasticsearch. I am also open to using another logging framework to log to Elasticsearch.
*The credentials and URL for Elasticsearch are valid, as I have used them in my other AWS Lambda project (.NET Core).
To see exactly what went wrong, the easiest way is to enable Serilog's self-log and write its output to the console; in an ASP.NET project that means Debug.WriteLine. The code to see what went wrong would be:
Serilog.Debugging.SelfLog.Enable(msg => Debug.WriteLine(msg));
ElasticsearchSinkOptions options = new ElasticsearchSinkOptions(new Uri("xxx"))
{
    IndexFormat = "log-myservice-dev",
    AutoRegisterTemplate = true,
    ModifyConnectionSettings = (c) => c.BasicAuthentication("yyy", "zzz"),
    NumberOfShards = 2,
    NumberOfReplicas = 1,
    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog,
    MinimumLogEventLevel = Serilog.Events.LogEventLevel.Information
};
The following error message was retrieved from the output console:
Failed to create the template.
Elasticsearch.Net.ElasticsearchClientException: The request was aborted: Could not create SSL/TLS secure channel.. Call: Status code unknown from: HEAD /_template/serilog-events-template ---> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.
The issue is quite clear cut. Adding the following to my logger class constructor resolved it:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;
Hope this helps others who run into issues trying to use Serilog to log to Elasticsearch on .NET Framework.

WebTestClient with multipart file upload

I'm building a microservice using Spring Boot + WebFlux, and I have an endpoint that accepts a multipart file upload, which works fine when I test with curl and Postman:
#PostMapping("/upload", consumes = [MULTIPART_FORM_DATA_VALUE])
fun uploadVideo(#RequestPart("video") filePart: Mono<FilePart>): Mono<UploadResult> {
log.info("Video upload request received")
return videoFilePart.flatMap { video ->
val fileName = video.filename()
log.info("Saving video to tmp directory: $fileName")
val file = temporaryFilePath(fileName).toFile()
video.transferTo(file)
.thenReturn(UploadResult(true))
.doOnError { error ->
log.error("Failed to save video to temporary directory", error)
}
.onErrorMap {
VideoUploadException("Failed to save video to temporary directory")
}
}
}
I'm now trying to test using WebTestClient:
@Test
fun shouldSuccessfullyUploadVideo() {
    client.post()
        .uri("/video/upload")
        .contentType(MULTIPART_FORM_DATA)
        .syncBody(generateBody())
        .exchange()
        .expectStatus()
        .is2xxSuccessful
}

private fun generateBody(): MultiValueMap<String, HttpEntity<*>> {
    val builder = MultipartBodyBuilder()
    builder.part("video", ClassPathResource("/videos/sunset.mp4"))
    return builder.build()
}
The endpoint returns a 500 because I haven't created the temp directory location to write the files to. However, the test passes even though I'm checking for is2xxSuccessful. If I debug into the assertion that is2xxSuccessful performs, I can see it is failing because of the 500, yet I still get a green test.
I'm not sure what I am doing wrong here. The VideoUploadException that I map to simply extends ResponseStatusException:
class VideoUploadException(reason: String) : ResponseStatusException(HttpStatus.INTERNAL_SERVER_ERROR, reason)

Xcode: Realm thread issue

I am new to iOS development and I recently tried Realm.
The problem is that I have to get the URLs from a JSON response and then put those URLs in Realm as an object; whenever I start my app again, the URL variable should get the respective URL from Realm,
like this:
getUrls()
let realm = try! Realm()
// Query Realm for the stored URL collector object
let urls = realm.objects(UrlCollector.self).first
let sss = realm.objects(UrlCollector.self)
print("no of objects in did load \(sss.count)")
loginUrl = urls!.login
print("login url inside didload \(loginUrl)")
But the problem is the getUrls method: it updates the URLs using Alamofire.
The getUrls method:
Alamofire.request("<<<myurl>>>", method: .post, encoding: JSONEncoding.default, headers: nil).responseJSON { (response: DataResponse<Any>) in
    switch response.result {
    case .success(_):
        if let data = response.result.value {
            print(data)
            let data = JSON(data)
            for item in data["result"].arrayValue {
                let url = UrlCollector()
                url.login = "\(self.server)\(item["login"].stringValue)"
                print(url.login)
                url.changePassword = "\(self.server)\(item["changePassword"].stringValue)"
                print(url.changePassword)
                url.phoneNumberVerify = "\(self.server)\(item["phoneNumberVerify"].stringValue)"
                print(url.phoneNumberVerify)
                url.sessionCheck = "\(self.server)\(item["sessionCheck"].stringValue)"
                print(url.sessionCheck)
                // Get the default Realm
                let realm = try! Realm()
                var urls = realm.objects(UrlCollector.self)
                // Replace any previously stored URLs with the fresh object
                try! realm.write {
                    realm.delete(urls)
                    realm.add(url)
                }
                // Query Realm again to check the stored objects
                urls = realm.objects(UrlCollector.self)
                print(urls.count)
            }
        }
        break
    case .failure(_):
        print("Error message: \(response.result.error)")
        break
    }
}
}
This code runs in viewDidLoad.
My log:
no of objects in did load 1
login url inside didload
{
result = (
{
changePassword = "/iust_app/android/passwordChange.php";
login = "/iust_app/android/login.php";
phoneNumberVerify = "/iust_app/android/onNumberVerification.php";
sessionCheck = "/iust_app/android/sessionCheck.php";
}
);
}
/iust_app/android/login.php
/iust_app/android/passwordChange.php
/iust_app/android/onNumberVerification.php
/iust_app/android/sessionCheck.php
1
print("no of objects in did load \(sss.count)")
loginUrl = urls!.login
print("login url inside didload \(loginUrl)")
As you can see, these lines run before the request completes; please read my log lines to understand.
All network requests are executed asynchronously. If you want your code to be executed after you get a response from the server, put it into the completion handler of this request.

How can I read and transfer chunks of a file with Hadoop WebHDFS?

I need to transfer big files (at least 14 MB) from the Cosmos instance of the FIWARE Lab to my backend.
I used the Spring RestTemplate as a client interface for the Hadoop WebHDFS REST API described here, but I run into an IO exception:
Exception in thread "main" org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>?op=open&user.name=<user.name>":Truncated chunk ( expected size: 14744230; actual size: 11285103); nested exception is org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 14744230; actual size: 11285103)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:580)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:545)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:466)
This is the actual code that generates the Exception:
RestTemplate restTemplate = new RestTemplate();
restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());

HttpEntity<?> entity = new HttpEntity<>(headers);

UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(hdfs_path)
        .queryParam("op", "OPEN")
        .queryParam("user.name", user_name);

ResponseEntity<byte[]> response = restTemplate
        .exchange(builder.build().encode().toUri(), HttpMethod.GET, entity, byte[].class);

FileOutputStream output = new FileOutputStream(new File(local_path));
IOUtils.write(response.getBody(), output);
output.close();
I think this is due to a transfer timeout on the Cosmos instance, so I tried sending a curl request to the path specifying the offset, buffer and length parameters, but they seem to be ignored: I got the whole file.
Thanks in advance.
OK, I found a solution. I don't understand why, but the transfer succeeds if I use a Jetty HttpClient instead of the RestTemplate (and thus the Apache HttpClient). This works now:
ContentExchange exchange = new ContentExchange(true) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();

    protected void onResponseContent(Buffer content) throws IOException {
        bos.write(content.asArray(), 0, content.length());
    }

    protected void onResponseComplete() throws IOException {
        if (getResponseStatus() == HttpStatus.OK_200) {
            FileOutputStream output = new FileOutputStream(new File(<local_path>));
            IOUtils.write(bos.toByteArray(), output);
            output.close();
        }
    }
};

UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(<hdfs_path>)
        .queryParam("op", "OPEN")
        .queryParam("user.name", <user_name>);

exchange.setURL(builder.build().encode().toUriString());
exchange.setMethod("GET");
exchange.setRequestHeader("X-Auth-Token", <token>);

HttpClient client = new HttpClient();
client.setConnectorType(HttpClient.CONNECTOR_SELECT_CHANNEL);
client.setMaxConnectionsPerAddress(200);
client.setThreadPool(new QueuedThreadPool(250));
client.start();
client.send(exchange);
exchange.waitForDone();
Is there any known bug in the Apache HttpClient for chunked file transfers?
Was I doing something wrong in my RestTemplate request?
UPDATE: I still don't have a solution.
After a few tests I see that I haven't actually solved my problem.
I found out that the Hadoop version installed on the Cosmos instance is quite old (Hadoop 0.20.2-cdh3u6), and I read that this WebHDFS version doesn't support partial file transfer with the length parameter (introduced in v0.23.3).
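For reference, on a newer Hadoop (0.23.3 or later) the chunked read I was hoping for would look roughly like this (a sketch reusing the restTemplate, entity and path variables from the code above; the 1 MB chunk size is an arbitrary choice):

// Read the file in 1 MB chunks using the WebHDFS offset/length parameters.
long chunkSize = 1024 * 1024;
long offset = 0;
try (FileOutputStream output = new FileOutputStream(new File(local_path))) {
    while (true) {
        URI chunkUri = UriComponentsBuilder.fromHttpUrl(hdfs_path)
                .queryParam("op", "OPEN")
                .queryParam("user.name", user_name)
                .queryParam("offset", offset)
                .queryParam("length", chunkSize)
                .build().encode().toUri();
        ResponseEntity<byte[]> chunk =
                restTemplate.exchange(chunkUri, HttpMethod.GET, entity, byte[].class);
        byte[] body = chunk.getBody();
        if (body == null || body.length == 0) {
            break; // nothing left to read
        }
        output.write(body);
        offset += body.length;
        if (body.length < chunkSize) {
            break; // last (short) chunk
        }
    }
}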
These are the headers I received from the server when I sent a GET request using curl:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
server: Apache-Coyote/1.1
set-cookie: hadoop.auth="u=<user>&p=<user>&t=simple&e=1448999699735&s=rhxMPyR1teP/bIJLfjOLWvW2pIQ="; Version=1; Path=/
Content-Type: application/octet-stream; charset=utf-8
content-length: 172934567
date: Tue, 01 Dec 2015 09:54:59 GMT
connection: close
As you can see, the Connection header is set to close. In fact, the connection is usually closed whenever the GET request lasts more than 120 seconds, even if the file transfer has not completed.
In conclusion, I can say that Cosmos is totally useless if it doesn't support large file transfer.
Please correct me if I'm wrong, or if you know a workaround.
