I am building an SD card/SCSI adapter using an NCR 53CF94 IC and an STM32.
Everything is going pretty well; I even managed to make the device work to some degree, i.e. it accepts all the basic commands and I can even boot DOS from it. BUT I have a problem: when the initiator (the PC) asks my device to write something, everything seems fine, I get the block address and the data. I then write the data to the SD card and finally respond with status = 0 and message = 0 to complete the WRITE command, but the initiator never increases the sector number to continue the write process; it always retries the first one, and after a few attempts the PC reports an error ("Error writing to drive...").

I can't figure out why the initiator is not satisfied with the GOOD status and message. Do I need to send some specific data back to the initiator, like a CRC? Or is there some specific command I need to issue to the 53C94?
I've been banging my head against this for a few days now.
Need your assistance please.
Thanks !
Artiom.
I figured it out. I was writing 512-byte blocks to a 256-byte array. I'm not sure how this is related to the issue, but after fixing the size everything started to work.
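For anyone else who hits this, here is a minimal sketch of what the fix amounts to; the buffer name is made up, since the actual variable isn't shown above.

#include <stdint.h>

#define SCSI_BLOCK_SIZE 512u              /* logical block size the initiator writes */

/* Hypothetical DATA OUT buffer for one WRITE block. The bug was declaring
 * this as uint8_t write_buf[256]: every 512-byte block clocked out of the
 * 53CF94 overflowed the array and corrupted adjacent state, even though
 * GOOD status and COMMAND COMPLETE were still sent afterwards. */
static uint8_t write_buf[SCSI_BLOCK_SIZE];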
Today I started getting timeout errors from Heroku. I eventually ran this ...
heroku pg:diagnose -a myapp
and got ...
RED: Bloat
  Type   Object                                          Bloat  Waste
  ─────  ──────────────────────────────────────────────  ─────  ───────
  table  public.files                                    776    1326 MB
  index  public.files::files__lft__rgt_parent_id_index   63     106 MB

RED: Hit Rate
  Name                    Ratio
  ──────────────────────  ──────────────────
  overall cache hit rate  0.8246404842342929
  public.files            0.8508127886460272
I ran the VACUUM command and it did nothing to address the bloat. How do I address this?
I know this is an old question, but anyone else facing the same issue might try the following for the vacuum:
VACUUM (ANALYZE, VERBOSE, FULL) your-table-name;
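Unlike a plain VACUUM, VACUUM FULL rewrites the table into a new file and rebuilds its indexes, which is what actually reclaims the wasted space; be aware that it takes an ACCESS EXCLUSIVE lock, so the table is unavailable while it runs. For the table flagged in the pg:diagnose output above, that would look like:

-- VACUUM FULL rewrites public.files and rebuilds its indexes,
-- reclaiming the ~1.3 GB of waste reported by pg:diagnose.
VACUUM (ANALYZE, VERBOSE, FULL) public.files;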
What are the computer hardware requirements for running GridSearchCV() or RandomizedSearchCV() in parallel (i.e. with either n_jobs > 1 or n_jobs == -1)?
Do all of today's computers support that?
Q: Do all of today's computers support that?
A: Yes, they certainly do.
The problem is the process-instantiation cost and the indirect effects of replicating the whole Python interpreter per worker. For further details, refer to the explanation here.
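To make that concrete, here is a minimal, self-contained sketch (toy dataset and parameter grid of my own choosing) showing the n_jobs=-1 setting the question asks about; with the default process-based backend each worker is a separate Python process, which is exactly where the instantiation cost mentioned above comes from.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# The __main__ guard keeps process-based parallelism working on Windows.
if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)

    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100], "max_depth": [None, 5]},
        n_jobs=-1,   # -1 = use all available CPU cores
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_)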
I'm working on a client application, written in C++, that uses OpenSSL 1.0.2f for streaming data to the server. The call to SSL_CTX_new hangs 60% of the time soon after the connection starts; sometimes the call returns after a while (recovering from the hang after about 30 seconds to a minute), but most of the time it doesn't.
Here is my code:
SSL_library_init();
SSLeay_add_ssl_algorithms();
SSL_load_error_strings();
BIO_new_fp(stderr, BIO_NOCLOSE);

const SSL_METHOD *m_ssl_client_method = TLSv1_2_client_method();
if (m_ssl_client_method)
{
    sslContext = SSL_CTX_new(m_ssl_client_method);
}
That looks similar to the SSL initialization steps given in the OpenSSL wiki.
After debugging with the Very Sleepy profiler, I found that the initialization of the random numbers causes the hang: it consumes 100% of the CPU and appears to go into an infinite loop.
Here is a snapshot captured from the Very Sleepy tool.
I'm using VC++ with whole-program optimization and the SSE2 instruction set enabled (disabling these optimizations doesn't seem to change the results).
I have come across a thread that talks about a similar problem, but it doesn't provide a solution, and I did not find any other threads about this kind of problem. Could someone help me with this?
Thanks in advance.
The problem seemed to be a possible bug in OpenSSL 1.0.2h; upgrading to the latest version (1.1.0e) solved it.
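For reference, here is a minimal sketch of the equivalent client setup after the upgrade, assuming nothing else about the surrounding code: in 1.1.0 the library initializes itself (the explicit SSL_library_init() / SSL_load_error_strings() calls survive only as compatibility macros), and the version-flexible TLS_client_method() plus a minimum-version setting can replace TLSv1_2_client_method().

#include <openssl/ssl.h>

// Hypothetical helper; the original code stores the result in sslContext.
SSL_CTX *create_client_context()
{
    const SSL_METHOD *method = TLS_client_method();          // negotiates highest mutual version
    SSL_CTX *ctx = SSL_CTX_new(method);
    if (ctx)
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);  // still require TLS 1.2 or newer
    return ctx;
}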
Alright, so whenever I upload this GIF to my board (NGINX + PHP-FPM) I get a slowdown and eventually a 504 Gateway Time-out. I know what you're thinking: "go ahead and fix those nginx.conf and php-fpm settings." Well, I tweaked them to near perfection last night and my server is running brilliantly now. However, that one particular GIF still screws things up and drives PHP-FPM to almost 100% (I have a top-of-the-line quad-core processor in the server; it is by no means primitive).
Want to know where it gets weirder? I've uploaded 10 MB GIFs with bigger dimensions than the one in question (the one causing the issues is about 600 KB) and the server processed them ridiculously quickly.
Alright! So let's get into the logs. error_log doesn't output anything regarding this issue, so I went ahead and set up a slowlog in the php-FPM config.
Here's the issue:
[02-Oct-2011 05:54:17] [pool www] pid 76004
script_filename = /usr/local/www/mydomain/post.php
[0x0000000805afc6d8] imagefill() /usr/local/www/mydomain/inc/post.php:159
[0x0000000805afb908] fastImageCopyResampled() /usr/local/www/mydomain/inc/post.php:107
[0x0000000805af4240] createThumbnail() /usr/local/www/mydomain/classes/upload.php:182
[0x0000000805aeb058] HandleUpload() /usr/local/www/mydomain/post.php:235
Okay, let's look at post.php (line 159 in bold):
if (preg_match("/png/", $system[0]) || preg_match("/gif/", $system[0])) {
    $colorcount = imagecolorstotal($src_image);
    if ($colorcount <= 256 && $colorcount != 0) {
        imagetruecolortopalette($dst_image, true, $colorcount);
        imagepalettecopy($dst_image, $src_image);
        $transparentcolor = imagecolortransparent($src_image);
        **imagefill($dst_image, 0, 0, $transparentcolor);**
        imagecolortransparent($dst_image, $transparentcolor);
    }
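One thing I noticed while staring at that block: imagecolortransparent() returns -1 when the source image has no transparent colour set, and line 159 passes whatever it returns straight into imagefill(). I don't know yet whether this particular GIF hits that case, but guarding it would look something like:

$transparentcolor = imagecolortransparent($src_image);
// imagecolortransparent() returns -1 if no transparent colour is defined;
// only fill and re-flag transparency when we actually got a valid index
if ($transparentcolor >= 0) {
    imagefill($dst_image, 0, 0, $transparentcolor);
    imagecolortransparent($dst_image, $transparentcolor);
}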
Line 107:
fastImageCopyResampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_w, $thumb_h, $old_x, $old_y, $system);
upload.php, line 182 (in bold):
**if (!createThumbnail($this->file_location, $this->file_thumb_location, KU_REPLYTHUMBWIDTH, KU_REPLYTHUMBHEIGHT))** { exitWithErrorPage(_gettext('Could not create thumbnail.'));
(Note that this error does not show up.)
The other post.php (line 235):
$upload_class->HandleUpload();
So what can I do? How can I fix this? I know this is a tough issue, but if you guys could give me any input, it would be greatly appreciated.
Oh and in case anyone is curious, here's the GIF: http://i.imgur.com/rmvau.gif
Have you tried setting the client_body_buffer_size directive in your nginx configs?
See more here: http://www.lifelinux.com/how-to-optimize-nginx-for-maximum-performance/
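For example, something along these lines in the http (or server) block; the sizes are only illustrative and should be tuned to your actual upload sizes:

http {
    # Buffer client request bodies up to this size in memory before
    # nginx spills them to a temporary file on disk.
    client_body_buffer_size 1m;

    # Usually tuned together with the largest body you accept at all.
    client_max_body_size    12m;
}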