Resize public workspace back to micro - cloud9-ide

My processes were getting killed, so I resized my free public workspace from micro to small. It turns out it was the RAM limit, not the disk space limit, that was causing the processes to get killed. I can't afford to upgrade the workspace any higher, so I want to downgrade back to micro, but when I try to, I get an error saying "not enough ram quota". Apparently my RAM usage is at 1.5 GB/1 GB, and if I downgrade to micro it will still be over quota.
How can I downgrade a workspace back to the free, public, micro size? Do I have to delete the workspace and start over?

I believe there are some services you are making use of that you need to disable or stop, by killing the process (e.g. SSH). At that point, they will allow you to downgrade.
See this blurb at the very end of their FAQ page:
"If you're using premium resources, you can't downgrade your account until you're no longer using those resources."
I am having a different but related issue where my resize to micro is hung and/or taking forever. The only support channel I can find is to tweet them in the hope that they will then DM you about your issue. I cannot locate a support email.
https://twitter.com/C9Support

Related

free space getting low for no reason

I have just noticed that my C drive is getting full, whereas it still had 30 GB free space 3 days ago.
Given the last few days' activity, I can't find any reason for this.
Now I realize that my C: drive's free space is getting lower and lower even though there's no current activity on my PC (except that it's turned on).
Every 2 minutes, I lose approximately 100 MB of free space, even though I don't download anything.
I launched my antivirus and I have closed my internet connection in order to see if the free space would stop decreasing, but it continued decreasing at the same pace.
I checked Task Manager and noticed there was a piece of software running which I think was named "One Drive setup.exe" (during the past weeks, I had many pop-up windows saying I had to update OneDrive, but there was a problem with the auto-update, etc. I didn't care, because I don't even know what OneDrive is and I don't think I use it). So I killed this running task.
I thought it had stopped the loss of free space (I even gained 100 MB), but the decrease started again.
Now I have connected to the Internet again.
I got 300 MB of free space back, and it has now seemed constant for 4 minutes. Maybe these little ups and downs are due to the antivirus scan currently running.
But what can explain the loss of 30 GB during the past 2 or 3 days?
Could it be Windows Update? How can I check this in Windows 10?
Could it be a virus or something bad?
Please answer quickly, I only have 17 GB left :-(
Thanks
Which version of Windows OS are you using?
Turn off/disable System Restore; this way you will be able to recover some space. Other than that, use CCleaner (https://www.piriform.com/ccleaner/download) to clean your system.
They released a patch, I believe. But you can also use the built-in Disk Cleanup tool (https://support.microsoft.com/en-us/help/4026616/windows-disk-cleanup-in-windows-10).
Also, uninstall OneDrive/Google Drive unless you actively use those services. OneDrive syncs files with the cloud so that you can use them offline.

Xcode Server Bots not running tests

I'm trying, but failing, to set up a reliable continuous integration environment using Xcode Server.
I have a git repository on a headless Mac mini server running the Xcode Server service; the server has a separate development user account with administrator privileges that is used by Xcode.
I have set up my schemes, with testing included, and shared them to the repository.
The bots run, check out code, build, analyze and archive, but only seem to run tests when they feel like it, which is almost never. I've checked the schemes, and they have not changed between when Xcode ran the tests and when it didn't.
When I first set them up, tests wouldn't run at all until I added administrator privileges to the development account; then the tests ran a couple of times, before Xcode Server decided to stop running them again.
I never get any reason why the tests aren't run. Sometimes the bots fail to run because of some crash during setup, and an error is reported, but mostly the bots seem to run; they just don't execute the tests, and no error is reported.
I've logged in remotely to the server, and the simulator is running, but never seems to do anything.
Here's a screenshot of an example bot. You can see that the tests used to run, and that I've reduced my warnings and got rid of an analysis issue. You can also see where no tests ran, and no kind of warning or error is given as to why.
I've tried restarting the server, nope.
I've tried restarting the client, nope.
It's really frustrating, and I can't find any recent issues that offer a proper solution to this. The server is in constant use running backups and other tasks, so I'd rather not have a solution that involves me logging in to the server and restarting something every time there's a problem, which is always; it makes the whole point of bots useless if I'm spending more time logging in to my server trying to get them to work than they spend actually running.
Anyone have similar issues and a solution?
Edit: I noticed that my memory usage was very high on the server, and memory pressure was practically always amber, so I went out and got some memory today and increased the Mac mini's memory from 4 GB to 16 GB, and now the tests have started running again. Also, the whole process is much faster (less than surprising, I guess).
Could it just be low memory causing problems with the simulator? I've only just installed the memory and restarted, so I'll give it a few test runs before I confirm this solution; it's stopped working before...
It seems that this may be a memory issue. I upgraded the server's memory from 4 GB to 16 GB, as Activity Monitor was showing significant memory pressure.
Since doing this, the bots have started running tests again, and the total running time for the bot is a quarter of what it was.
As per my edit, I've been running the bots for a day now, including bots that run on multiple simulators, and everything seems to be fine.
It's not very good that no obvious indication is given in Xcode as to why the tests didn't run.
For reference, and to see if this might fix your problems, the original server specs were:
Mac Mini Server edition (late 2012)
2.3 GHz Intel Core i7
4GB memory
2x1TB drives
Replaced the 2x2GB memory sticks with 2x8GB sticks (The maximum allowed for the model)
EDIT : After a month of running with no problems, increasing the memory has solved the problem permanently.
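If you want to check memory pressure on a headless build server before spending money on RAM, here is a rough sketch using stock macOS command-line tools over SSH (the exact output varies between OS versions):
# summary of current memory pressure
memory_pressure
# raw VM statistics; watch "Pages swapped out" grow between runs
vm_stat
# top ten processes by memory, single sample
top -l 1 -o mem -n 10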

Prevent my Windows app from causing the Windows Runtime Broker to run out of memory

When my Windows 10 app runs, it causes a process called Runtime Broker to execute, which takes up a lot of memory.
I know my app isn't "memory-hungry" and it hardly takes 80 MB of RAM to execute. But from the time it starts, the memory used by Runtime Broker keeps increasing until the PC gets stuck.
Upon killing that process, the app is force closed by Windows.
I would have posted my source code here, if only I knew which part of the code is causing this to happen.
What are the possible technical reasons for this problem to happen, and what are the possible fixes in my code to prevent this?
Is there something wrong with my code, or is it some API that I am calling?
You can easily delete RuntimeBroker.exe and any other file. I deleted RuntimeBroker.exe and Livecomm.exe by booting a live Linux DVD; after it loads, mount the C: drive, then simply navigate to the file and delete it. Done!
RuntimeBroker seems to hold about 60 KB per file held via StorageFile objects. It's still a bad problem, and the only solution is to not hold on to very many of these.
Microsoft just never does anything about this.
Update: Microsoft seems to have quietly ditched UWP. The replacement uses "WinUI" and is probably called the Windows App SDK at the moment. No more RuntimeBroker.exe.

5.6 GB not enough for Cloudera?

I am running Cloudera Hadoop on my laptop in an Oracle VirtualBox VM.
I have given it 5.6 GB out of my 8 GB, and six of my eight cores as well.
And still I am not able to keep it up and running.
Even without load, services will not stay up and running, and when I try a query, at least Hive will be down within 20 minutes. And sometimes they go down like dominoes: one after another.
More memory seemed to help some: with 3 GB and all services, Hue was blinking red whenever Hue itself managed to come up, and after rebooting it would take 30-60 minutes before I managed to get the system up enough to even try running anything on it.
There have been two sensible notes (that I have managed to find):
- A warning about swapping.
- A crash note when the system used 26 GB of virtual memory, which was not enough.
My dataset is less than one megabyte, so it is hard to understand why the system would go up to dozens of gigabytes, but whatever the reason was, it has passed: now the system is running more steadily around the 5.6 GB that I have given it, after closing down a few services (see my answer to myself).
Still, it is only somewhat more stable: right after that, I got a warning about swapping and Hive went down again. What could be the reason for more-or-less all Hadoop services going down when the VM starts to swap?
I don't have enough reputation to post the picture here, but when Hive went down again it was swapping 13 pages per second and utilizing 5.9 GB / 5.6 GB. So basically my system starts crashing more-or-less right after it starts to swap. "428 pages were swapped to disk in the previous 15 minute(s)"
I have used the default installation options as far as the hard drive is concerned.
The only addition is a shared folder between Windows and the VM. It works somewhat strangely, locking files all the time, so I use it just like FTP and only for passing files from one system to another. I can go days without using it, but the services still crash, so that is not the cause either.
Now that the system is mostly up, services still crash about twice a day: Service Monitor and Hive crash about equally often. After those come Activity Monitor and Event Server, which always appear to crash together. I believe YARN crashes as well, but it comes back up on its own. Last time Hive crashed first, followed by Service Monitor, Hive (a second time), Activity Monitor and Event Server.
As swap lives on disk, perhaps the problem is with the disk:
# cat /etc/fstab
# swapoff -a
# badblocks -v /dev/VolGroup/lv_swap
Checking blocks 0 to 8388607
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
# badblocks -vw /dev/VolGroup/lv_swap
Checking for bad blocks in read-write mode
From block 0 to 8388607
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
So there is nothing wrong with the swap disk, and I have not noticed any disk errors anywhere else either.
Note that you could also check the file system from the Windows side, but I expect that if you let Windows fix your Linux file system, you have a good chance of destroying your Linux install, so I did my checks somewhat pessimistically; AFAIK the commands above are safe to execute.
About half of the services kept going down, so giving more specifics would be a long story.
I succeeded in getting the system more stable by shutting down flume, hbase, impala, ks_indexer, oozie, spark and sqoop, and by giving more memory to some of the remaining services that complained they had not been given enough.
I also fixed a couple of things on the Windows side; I am not sure which of these helped:
- MsMpEng.exe kept my hard drive busy. I didn't have permission to kill it, but I decreased its priority to the lowest possible.
- CcmExec.exe got into a loop on my DVD and kept reading it forever. I solved this by taking the DVD out of the drive; later on I killed the process tree to keep it from bothering me for a while.
I found these using the Windows Resource Monitor.
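For anyone who wants to confirm whether a trimmed-down set of services still pushes the VM into swap, this is roughly what I mean by watching the swapping (standard Linux tools, run inside the guest):
# overall memory and swap usage in megabytes
free -m
# the si/so columns show pages swapped in/out; sample every 5 seconds, 3 times
vmstat 5 3
# which swap devices are in use and how full they are
cat /proc/swaps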
The VM requires 4GB: http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html You should use that.
I am not clear whether you are using the QuickStart VM though. It's set up to run just the essential services and tuned to conserve memory rather than exploit lots of memory.
It sounds like you are running your own installation, on one virtual machine, on your Windows machine. You may be running an entire cluster's worth of services on one desktop machine. Each of these services has master processes, worker processes, monitoring processes, etc. You don't need most of them.
You have also probably left memory settings at defaults suitable for a server-class machine with 16+ GB of RAM. Remember, these services usually run across many machines, not all on one.
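To give an idea of what "default memory settings" means in practice, here is a sketch of the kind of heap trimming involved, assuming a hand-rolled install configured through the usual environment files (if Cloudera Manager is managing the cluster, the equivalent knobs live in each service's configuration page instead):
# hadoop-env.sh: heap size for the HDFS daemons, in MB (the default assumes a real server)
export HADOOP_HEAPSIZE=512
# yarn-env.sh: heap size for the YARN daemons, in MB
export YARN_HEAPSIZE=512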
Finally, you're clearly swapping, and that makes things incredibly slow. Remember this is all through a VM too!
Bottom line, use the QuickStart VM if you really want a 1-machine cluster tuned correctly. If you want a real cluster or more services, you need more hardware.
Also consider: cloudera.com/live contains a full CDH 5.1 cluster + sample data, running on demand on AWS. Of course, the advantage of the VM is that you can BYOD, but if you're simply looking for a hands-on Hadoop experience, Live is a great option.

How to investigate a web performance issue that accumulates over time

Our web app is running on AWS on Ubuntu. We developed it on top of the Play Framework. Right after the app is deployed, it is pretty quick. However, after a day or so, it slows down significantly. I checked the resource usage of the OS; it seems normal and the machine is responsive. Only the web service is slow to respond to requests. I suspect there is a memory, thread pool or other resource leak. Any suggestion about how to investigate it? I used the 'top' and 'ps' commands to look at current resource usage, but everything seems normal.
You may want to create a core dump and then take that to your dev computer and examine it. This is not the easiest way, but if you have limited access to the box, it may be required.
Create a core dump
Analyze Core Dump File?
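Since Play runs on the JVM, an alternative to a native core dump is a heap or thread dump taken with the standard JDK tools; here is a rough sketch, assuming the JDK (not just a JRE) is installed on the box and <PID> is the server process:
# find the JVM process id
jps -l
# object counts by class; run it twice a few hours apart and diff the output
jmap -histo <PID> | head -n 30
# full heap dump for offline analysis (Eclipse MAT, VisualVM) on your dev machine
jmap -dump:live,format=b,file=/tmp/heap.hprof <PID>
# thread dump, in case requests are piling up on a saturated thread pool
jstack <PID> > /tmp/threads.txt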

Resources