I have followed all the suggestions I can find.
I am running the current version of Redis on Windows 2008.
I can run it fine from the command line.
I can install the service, but it doesn't run.
I do...
redis-server --service-install redis.windows.conf
and get "redis successfully installed as a service"
Then I try to start the service doing...
redis-server --service-start redis.windows.conf --loglevel verbose
and get "Redis service failed to start".
I have made sure I have the .net framework 4.5.2 installed, I have tried with the firewall off and have played with security on the folder.
Anyone have any ideas?
(Merry Christmas all)
Start the Redis server from the command line instead of as a service and it will display a more useful error message. If you are just using the default configuration, it is most likely a problem with the maxmemory/maxheap configuration.
C:\redis>redis-server.exe redis.windows.conf
[1576] 04 Feb 10:32:54.172 #
The Windows version of Redis allocates a memory mapped heap for sharing with
the forked process used for persistence operations. In order to share this
memory, Windows allocates from the system paging file a portion equal to the
size of the Redis heap. At this time there is insufficient contiguous free
space available in the system paging file for this operation (Windows error
0x5AF). To work around this you may either increase the size of the system
paging file, or decrease the size of the Redis heap with the --maxheap flag.
Sometimes a reboot will defragment the system paging file sufficiently for
this operation to complete successfully.
Please see the documentation included with the binary distributions for more
details on the --maxheap flag.
Redis can not continue. Exiting.
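If that is the error, the fix is to cap the heap. A minimal sketch of what that looks like, assuming the stock redis.windows.conf and with 1gb chosen purely as an example value:
maxheap 1gb
or, equivalently, on the command line:
redis-server.exe redis.windows.conf --maxheap 1gb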
In my case the default command-line config did not have logging enabled, but the service config did, and nothing anywhere complained about that.
Try creating the directory ./Logs.
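The reason this matters: the service config typically points logfile at a relative path inside a Logs folder, and if that folder does not exist the service cannot open its log file and fails to start. A hedged sketch, assuming the default layout (the exact logfile value depends on your redis.windows-service.conf):
cd C:\redis
mkdir Logs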
Old question, but I came across it while trying to get a Win7 x64 install working using the Redis-x64-2.8.2101 binaries. I couldn't get it to start despite fiddling with various options: no meaningful error was given when run with the config, and only an apparently spurious disk-space error when run natively.
There appears to be a related issue on GitHub, linked here for future benefit: https://github.com/MSOpenTech/redis/issues/267
I have a problem with Cuckoo Sandbox and the memory dump it should generate so that I can analyse it with Volatility.
My issue is:
Cuckoo's log files tell me that a memory dump has been generated successfully, but it cannot be accessed because it cannot be found. Manually looking in the directory confirms that it does not exist. Cuckoo tells me to enable memory_dump in cuckoo.conf, which is already enabled.
My Cuckoo version and operating system are:
Cuckoo: 2.0.6
Host: Ubuntu 18.04.1 LTS
Guest: Win7 Ultimate, Service Pack 1, 32-bit
Those are my config files:
cuckoo.conf
memory_dump = yes
memory.conf
guest_profile = Win7SP1x86
delete_memdump = no
processing.conf
[memory]
enabled = yes
This is the output of the cuckoo.log:
INFO: Successfully generated memory dump for virtual machine with label Win7 to path /home/test/.cuckoo/storage/analyses/1/memory.dmp
[...]
ERROR: VM memory dump not found: to create VM memory dumps you have to enable memory_dump in cuckoo.conf!
Any kind of help is appreciated. If you need any more information from me, please let me know.
Edit: only the memory dump of the full machine is not being generated. If the malware is injected into a new process, then a memory dump of that process is generated, as shown in report.json:
INFO: injected into process with pid 3844 and name 'iexplorer.exe'
INFO: memory dump of process with pid 3844 completed
and I can also find the 3844-1.dmp file in the directory
I had a similar issue some time back where the memory dump creation was a little inconsistent. However, that was with an older version of Cuckoo Sandbox.
In processing.conf, check to see if you have set
[procmemory]
enabled = yes
I do remember that I had issues where I would sometimes get full memory dumps if I submitted a sample via the web GUI but not via the command line, or vice versa. Sometimes I would only get memory dumps after the first sample failed. I found that a good place to start was with something like a 32-bit putty.exe. Once the memory dumps started to work, though, I never had an issue after that, so I never documented what I did. I do remember playing around with the memory settings, so it may be worth experimenting with the processing.conf settings, turning them on and off to see what works.
[memory]
enabled = yes
[procmemory]
enabled = yes
and cuckoo.conf
memory_dump = yes
I know it may sound odd, but I sometimes saw different behaviour when submitting samples through the terminal versus the web GUI. I no longer have my setup, so I have nothing to compare it to.
[Edit]
Also make sure you have the correct dependencies installed
https://github.com/volatilityfoundation/volatility/wiki/Linux
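As a quick sanity check of the guest_profile value, you can also list the profiles your Volatility install knows about (assuming Volatility 2.x with vol.py on your PATH; the grep pattern is just the profile from your memory.conf):
vol.py --info | grep Win7SP1x86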
I am trying to install Elasticsearch in a Linux VM but am not able to start the service, even though Java is installed. I get the following message when the elasticsearch script runs.
[xxxx@ABCWCW0ASMGNJ01A bin]$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (novell-2.5.1.2.el6_5-x86_64 u65-b17)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[xxxx@ABCWCW0ASMGNJ01A bin]$ ./elasticsearch
Error occurred during initialization of VM
Too small initial heap for new size specified
I have downloaded 2.3.2 from the Elasticsearch website. After some initial googling I set ES_HEAP_SIZE=1g in .bash_profile, but still no luck. Can you throw some light on what the issue could be?
Thanks
It seems you don't have enough heap space to start Elasticsearch. Please see "increase the java heap size permanently?" and adjust accordingly.
Depending on the flavor of Linux you are using, you may need to use a text editor to update the following file to increase the heap size.
/etc/sysconfig/elasticsearch
(uncomment the 'ES_HEAP_SIZE=' line and set it to half of the RAM allocated to the VM). Based on: https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
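For example, on an RPM/sysconfig-style install the edited line would look something like the following; 1g is only an illustrative value, and the service needs a restart afterwards (or the systemd equivalent on newer systems):
ES_HEAP_SIZE=1g
sudo service elasticsearch restart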
I will caution you to read this as well afterwards if you see something like "Unable to lock JVM memory" in your elasticsearch application log after starting it up:
https://github.com/elastic/elasticsearch/issues/9357
I ran into problems myself with trying to use the environment variable method on Centos 6 and 7.
You may also want to try using the Kopf plugin (open source) to get some simple visibility:
https://github.com/lmenezes/elasticsearch-kopf
Using the following general instructions of course:
https://www.elastic.co/guide/en/elasticsearch/plugins/current/installation.html
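For Elasticsearch 2.x the install command is something like the line below; the /2.0 branch suffix is my assumption based on kopf's branch naming, so check its README for the branch matching your release:
./bin/plugin install lmenezes/elasticsearch-kopf/2.0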
If you don't know where certain things are located for elasticsearch on your system, please use the below defaults listing as guidance:
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-dir-layout.html
The root cause of the problem you are experiencing is most likely that the startup script used to bring up your Elasticsearch instance on the VM is not picking up the environment variable as expected. I hope you don't mind the extra information; I'm just trying to help you save some time.
I am running Cloudera Hadoop on my laptop and Oracle VirtualBox VM.
I have given it 5.6 GB of my 8 GB, and six of my eight cores as well.
And still I am not able to keep it up and running.
Even without load, services will not stay up, and when I try a query, at least Hive goes down within 20 minutes. Sometimes they go down like dominoes: one after another.
More memory seemed to help somewhat: with 3 GB and all services, Hue was blinking with red colors whenever Hue itself managed to come up, and after rebooting it would take 30-60 minutes before I managed to get the system up enough to even try running anything on it.
There have been two sensible notes (that I have managed to find):
- A warning about swapping.
- A crash note when the system had used 26 GB of virtual memory, which was not enough.
My dataset is less than one megabyte, so it is hard to understand why the system would go up to dozens of gigabytes, but whatever the reason was, it has passed: the system now runs more steadily around the 5.6 GB I have given it, after closing down a few services; see my answer to myself below.
Still, it is only somewhat more stable. Just now I got a swapping warning and Hive went down again. What could be the reason for more-or-less all Hadoop services going down when the VM starts to swap?
I don't have enough reputation to post the picture here, but when Hive went down again it was swapping 13 pages/second and using 5.9 GB out of the 5.6 GB. So basically my system starts crashing more-or-less right after it starts to swap. "428 pages were swapped to disk in the previous 15 minute(s)"
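(For anyone trying to reproduce this: swap activity like the above can be watched live inside the VM with standard Linux tools; the si/so columns of vmstat show pages swapped in/out per interval.)
free -m
vmstat 5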
I have used default installation options as far as hard drive is concerned.
The only addition is a shared folder between Windows and the VM. It works somewhat strangely, locking files all the time, so I use it just like FTP, only for passing files from one system to another. I can go days without using it and the system still crashes, so that is not the cause either.
Now that the system is mostly up, services still crash about twice a day: Service Monitor and Hive are about even in their crash frequency. After those come Activity Monitor and Event Server, which always seem to crash together. I believe YARN crashes as well, but it comes back up on its own. Last time Hive crashed first, and then it was followed by Service Monitor, Hive (a second time), Activity Monitor, and Event Server.
As swap is on disk, perhaps the problem is with the disk:
# cat /etc/fstab
# swapoff -a
# badblocks -v /dev/VolGroup/lv_swap
Checking blocks 0 to 8388607
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
# badblocks -vw /dev/VolGroup/lv_swap
Checking for bad blocks in read-write mode
From block 0 to 8388607
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
So nothing wrong with swap disk and I have not noticed any disk error anywhere else either.
Note that you could check the file system from the Windows side as well, but I expect that if you let Windows fix your Linux file system you have a good chance of destroying your Linux installation, so I did my checks somewhat conservatively; AFAIK the commands above are safe to execute.
About half of the services kept going down, so giving more specifics would be a long story.
I succeeded in getting the system more stable by shutting down flume, hbase, impala, ks_indexer, oozie, spark and sqoop, and by giving more memory to some of the remaining services that complained they had not been given enough.
I also fixed a couple of things on the Windows side; I am not sure which of these helped:
- MsMpEng.exe kept my hard drive busy. I didn't have permission to kill it, but I lowered its priority as far as possible.
- CcmExec.exe got into a loop on my DVD and kept reading it forever. I solved this by taking the DVD out of the drive, and later I killed the process tree to keep it from interfering for a while.
I found these using Windows resource manager.
The VM requires 4GB: http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html You should use that.
I am not clear whether you are using the QuickStart VM though. It's set up to run just the essential services and tuned to conserve memory rather than exploit lots of memory.
It sounds like you are running your own installation, in one virtual machine, on your Windows machine. You may be running an entire cluster's worth of services on one desktop machine. Each of these services has master processes, worker processes, monitoring processes, and so on, and you don't need most of them.
You have also probably left memory settings at defaults suitable for a server-class machine with 16+ GB of RAM. Remember, these services usually run across many machines, not all on one.
Finally, you're clearly swapping, and that makes things incredibly slow. Remember this is all through a VM too!
Bottom line, use the QuickStart VM if you really want a 1-machine cluster tuned correctly. If you want a real cluster or more services, you need more hardware.
Also consider: cloudera.com/live contains a full CDH 5.1 cluster + sample data, running on demand on AWS. Of course, the advantage of the VM is that you can BYOD, but if you're simply looking for a hands-on Hadoop experience, Live is a great option.
I'm using Redis-server for windows ( 2.8.4 - MSOpenTech) / windows 8 64bit.
It is working great, but even after I run:
I see this: (and here are my questions)
When redis-server.exe is up, I see 3 large files:
When redis-server.exe is down, I see 2 large files:
Questions:
- Didn't I just tell it to erase the whole DB? So why are those 2/3 huge files still there?
- How can I completely erase those files (without them being regenerated)?
NB
It seems that it deletes keys without freeing the occupied space. If so, how can I free this unused space?
From https://github.com/MSOpenTech/redis/issues/83
"Redis uses the fork() UNIX system API to create a point-in-time snapshot of the data store for storage to disk. This impacts several features on Redis: AOF/RDB backup, master-slave synchronization, and clustering. Windows does not have a fork-like API available, so we have had to simulate this behavior by placing the Redis heap in a memory mapped file that can be shared with a child(quasi-forked) process. By default we set the size of this file to be equal to the size of physical memory. In order to control the size of this file we have added a maxheap flag. See the Redis.Windows.conf file in msvs\setups\documentation (also included with the NuGet and Chocolatey distributions) for details on the usage of this flag. "
I know this is an old thread, but I am facing the same issues with the file sizes.
In case you have problems with your C: SSD drive (like me), you can make a directory junction:
1) Stop redis service
2) Move C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis folder to another drive / location.
3) Open a command prompt in C:\Windows\ServiceProfiles\NetworkService\AppData\Local then execute:
mklink /J "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "[newpath]"
PS: [newpath] must be absolute, like "D:\directory junctions\Redis"
4) Start redis service. Now the files are in another drive.
Check http://ss64.com/nt/mklink.html if you have any doubts about this command.
I faced this same issue on my development machine. I resolved it by stopping the Redis service and using WinDirStat (which is what I used to detect the issue originally) to permanently delete those files in appdata/local/redis.
I then started redis back up and things were working fine.
Before following this same procedure others may want to ensure that this data isn't needed. In my case it wasn't critical since this is my development workstation.
When you flush the DB, you only flush the keys from memory. I'm not sure why you've got files with different names; it may be an artifact of the way the Windows port of Redis manages files, but Redis itself doesn't delete files when you remove keys. You will need to manage outdated files outside of Redis.
I've recently started seeing this line in my Visual Studio 2005 output window when launching my application:
FTH: (7156): *** Fault tolerant heap shim applied to current process. This is usually due to previous crashes. ***
I've tried turning off the fault tolerant heap using the instructions here:
http://msdn.microsoft.com/en-us/library/dd744764(VS.85).aspx
I'm running Windows 7 64-bit edition, so I have made the changes to both the 32-bit and 64-bit registries, and run the "Rundll32.exe fthsvc.dll,FthSysprepSpecialize" command using both the 32-bit and 64-bit versions of Rundll32.exe.
However, after rebooting I am still getting the fault tolerant heap when trying to debug my application!
This is a real problem since it masks the bug I am trying to reproduce, and it also kills performance.
Does anyone have any other suggestions how to disable the fault tolerant heap?
To disable it for a single application
Go to the HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER versions of Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers\your_application.exe and delete the FaultTolerantHeap entry.
Set this registry value to 0:
HKEY_LOCAL_MACHINE\Software\Microsoft\FTH\Enabled
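If you prefer an elevated command prompt over regedit, this is the same change as a one-liner (a reboot may be needed before it takes effect):
reg add "HKLM\SOFTWARE\Microsoft\FTH" /v Enabled /t REG_DWORD /d 0 /f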
You can add the name of your executable to the ExclusionList.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\FTH\ExclusionList
Works for me.
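If you want to script that instead of using regedit, a hedged sketch; note that reg add rewrites the whole REG_MULTI_SZ value, so include every name already in the list, separated by \0 (existing_entry.exe and myapp.exe are placeholders):
reg add "HKLM\SOFTWARE\Microsoft\FTH" /v ExclusionList /t REG_MULTI_SZ /d "existing_entry.exe\0myapp.exe" /f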
You can edit the application manifest to exclude your program from PCA.
See also: How to reset Program Compatibility Assistant for testing
You can clear the list of applications tracked by FTH without stopping the service by following these steps:
Click the Start menu.
Right-click Computer and click Manage.
Click Event Viewer -> Applications and Services Logs -> Microsoft ->
Windows -> Fault-Tolerant-Heap.
View FTH Events.
You will find a log named Operational; right-click it and choose Clear Log.
Then you can run your program again and the warning message will disappear.
This worked for me without restarting the operating system.
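The same log can also be cleared from an elevated prompt with wevtutil; I believe the channel name is the one below, but you can confirm it first by listing the channels:
wevtutil el | findstr Fault
wevtutil cl Microsoft-Windows-Fault-Tolerant-Heap/Operational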
On Windows 10 the registry location is:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\FTH
You can remove your executable from the list in:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\FTH\State
or you can run this command from an elevated command prompt
Rundll32.exe fthsvc.dll,FthSysprepSpecialize
You may need to reboot your machine.
"Rundll32.exe fthsvc.dll,FthSysprepSpecialize" looks to only clear the list of currently flagged applications. if your application still causes oddities, the FTH should still step in and take over.
As already mentioned:
Set this registry value to 0: HKEY_LOCAL_MACHINE\Software\Microsoft\FTH\Enabled
This should disable FTH for the whole system.
I had to rename the file as well, because the registry entries associated with this key were empty of applicable data. I expect they get populated if you have a misbehaving application, but in my case I was debugging my own application within Visual Studio, so it was my process that was somehow loading the FTH whether the FTH service was running or not. And in fact I had no applications listed as previously tagged as misbehaving.
But I had to follow these instructions:
http://billroper.livejournal.com/960825.html
because it wouldn't let me rename the file until I took ownership and made sure I had full control.
I had a similar issue when running a unit test using Microsoft::VisualStudio::CppUnitTestFramework.
Somehow I had violated some heap allocation, and the next time I tried to debug I received the message "Fault tolerant heap shim applied to current process. This is usually due to previous crashes." and the debug environment froze.
To get it to work again, I had to remove the test case, recompile, add it again and recompile; then I could set a breakpoint and step into the test.
I also ran into this. Renaming/deleting AcXtrnal.dll inside Windows\AppPatch seems to work for me. I like how the Microsoft-recommended action (which I tried first) does nothing.