Computing Sky View Factor in GRASS GIS

Hi community,
I'm currently working on my master thesis and I have to compute the "sky view factor". Since ESRI ArcMap is not a helpful choice for this, I found that it is fairly easy to compute with GRASS GIS (v7) using the r.skyview command.
But I get an error message in the log file that I can't really make sense of. I hope someone here is experienced with this kind of problem and can help me out.
Here is what the GrassGIS output says:
(Fri Jan 09 16:17:10 2015)
r.skyview input=Subset#PERMANENT output=Subset_SVF ndir=16 maxdistance=15.0
Unknown module parameter "keyword" at line 21
Unknown module parameter "keyword" at line 22
ERROR: Value <rast> ambiguous for parameter <type>
Valid options: raster,raster_3d,vector,old_vector,ascii_vector,labels,region,group,all
Traceback (most recent call last):
File "C:\Users\Axel-HP\AppData\Roaming\GRASS7\addons/scripts/r.skyview.py", line 120, in <module>
sys.exit(main())
File "C:\Users\Axel-HP\AppData\Roaming\GRASS7\addons/scripts/r.skyview.py", line 82, in main
old_maps = _get_horizon_maps()
File "C:\Users\Axel-HP\AppData\Roaming\GRASS7\addons/scripts/r.skyview.py", line 114, in _get_horizon_maps
pattern=TMP_NAME + "*")[gcore.gisenv()['MAPSET']]
File "C:\Temp\GRASSGIS7\etc\python\grass\script\core.py", line 1176, in list_grouped
type=types, pattern=pattern,
exclude=exclude).splitlines():
File "C:\Temp\GRASSGIS7\etc\python\grass\script\core.py", line 425, in read_command
return handle_errors(returncode, stdout, args, kwargs)
File "C:\Temp\GRASSGIS7\etc\python\grass\script\core.py", line 308, in handle_errors
returncode=returncode)
grass.exceptions.CalledModuleError: Module run None
['g.list', '--q', '-m', 'type=rast', 'pattern=tmp_horizon_2340*'] ended with error
Process ended with non-zero return code 1. See errors in the (error) output.
(Fri Jan 09 16:17:11 2015) Command executed (1 sec)

I just tested r.skyview and it is working. There were recent big changes to GRASS module parameter names, which caused the trouble, but it should now work without problems.
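The failing call in the traceback used the old abbreviation `type=rast`, which the renamed `g.list` parameter no longer accepts unambiguously. A minimal sketch of normalizing legacy values to the names listed in the error message (the old/new mapping here is assumed for illustration, not taken from GRASS source):

```python
# Valid options as reported by the error message above; old abbreviations
# such as "rast" must be translated to these full names.
VALID = {"raster", "raster_3d", "vector", "old_vector",
         "ascii_vector", "labels", "region", "group", "all"}
OLD_TO_NEW = {"rast": "raster", "rast3d": "raster_3d", "vect": "vector"}

def normalize_type(t):
    """Map a legacy g.list type value to its current name."""
    t = OLD_TO_NEW.get(t, t)
    if t not in VALID:
        raise ValueError(f"unknown type: {t}")
    return t
```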


Rasa Timeout Issue

When running Rasa (tried on versions 1.3.3, 1.3.7, 1.3.8) I encounter this timeout exception almost every time I make a call. I am running a simple program that recognises when a user gives their age and stores it in a database through an action response:
Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> I am 24 years old
2019-10-10 13:29:33 ERROR asyncio - Task exception was never retrieved
future: <Task finished coro=<configure_app.<locals>.run_cmdline_io() done, defined at /Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/run.py:123> exception=TimeoutError()>
Traceback (most recent call last):
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/run.py", line 127, in run_cmdline_io
server_url=constants.DEFAULT_SERVER_FORMAT.format("http", port)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/channels/console.py", line 138, in record_messages
async for response in bot_responses:
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 366, in step
return await ANextIter(self._it, start_fn, *args)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 205, in throw
return self._invoke(self._it.throw, type, value, traceback)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/async_generator/_impl.py", line 209, in _invoke
result = fn(*args)
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/rasa/core/channels/console.py", line 103, in send_message_receive_stream
async for line in resp.content:
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 40, in __anext__
rv = await self.read_func()
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 329, in readline
await self._wait('readline')
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/streams.py", line 297, in _wait
await waiter
File "/Users/Kami/Documents/rasa/venv/lib/python3.7/site-packages/aiohttp/helpers.py", line 585, in __exit__
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
Transport closed # ('127.0.0.1', 63319) and exception experienced during error handling
Originally I thought the timeout was caused by the large lookup tables used in another part of my Rasa program, but for age recognition I am using a simple regex:
## regex:age
- ^(0?[1-9]|[1-9][0-9]|[1][1-9][1-9])$
Even this causes the timeout.
Please help me solve this. I don't even need to avoid the timeout; I just want to know where I can catch or ignore this exception.
Thanks!
I was fetching data from an API and getting a timeout error because the data could not be fetched within the default time limit.
Go to venv/Lib/site-packages/rasa/core/channels/console.py and change the default value of DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS to more than 10; in my case I changed it to 30 and it worked.
Another possible cause is fetching data again and again within a short period of time, which can also result in a timeout.
Observations:
When DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS is set to 10, I get a timeout error.
When it is set to 30 and I keep running rasa shell again and again, I still get a timeout error.
When it is set to 30 and I do not run rasa shell too frequently, it functions perfectly.
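The constant above bounds how long the console channel waits on a streaming read. A self-contained sketch of the same pattern using plain asyncio (the names here are illustrative, not Rasa's internals):

```python
import asyncio

async def read_with_timeout(coro, timeout_s=30):
    """Bound a streaming read by a timeout, analogous to what
    DEFAULT_STREAM_READING_TIMEOUT_IN_SECONDS controls in the shell."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout_s)
    except asyncio.TimeoutError:
        return None  # caller may retry or surface a friendly message

async def slow_reply():
    await asyncio.sleep(0.01)  # stands in for a slow action server
    return "hello"

result = asyncio.run(read_with_timeout(slow_reply(), timeout_s=1))
```

Raising the timeout simply gives the awaited read more headroom before `asyncio.TimeoutError` fires.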
Make sure you uncomment the following in endpoints.yml; it is used when custom actions query the database:
action_endpoint:
url: "http://localhost:5055/webhook"
I had the same problem, and it was not solved by increasing the timeout.
Make sure you are sending a string back to the rasa shell from the rasa action server. That is, if you are using text= in your utter_message, make sure the async result is actually a string and not an object or something else; convert the type if required.
dispatcher.utter_message(text='has to be a string')
Running rasa shell -vv showed me that it was receiving an object, which it could not parse, hence the timeout.
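A defensive sketch of the coercion described above (the helper name is mine, not part of Rasa's API):

```python
def as_text(result):
    """Ensure whatever the action computed is sent as a plain string,
    since a non-string payload is what made the shell time out above."""
    return result if isinstance(result, str) else str(result)

# usage inside an action: dispatcher.utter_message(text=as_text(result))
```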
I can't comment yet, so I'm adding a follow-up to Vishal's answer. To check that the hooks are present and waiting for connections, you can use the -vv command-line switch. This displays all available hooks at startup. For example:
2020-04-21 14:05:56 DEBUG rasa.core.utils - Available web server routes:
/webhooks/rasa GET custom_webhook_RasaChatInput.health
/webhooks/rasa/webhook POST custom_webhook_RasaChatInput.receive
/webhooks/rest GET custom_webhook_RestInput.health
/webhooks/rest/webhook POST custom_webhook_RestInput.receive
/ GET hello

Trouble with Google API in python autosub

I'm trying to set up autosub to translate subtitles. I checked the GitHub repo and saw this thread, where they happen to get the same errors as me. However, when I tried their solution of enabling the Cloud Translation API, it didn't fix the problem. I am running this command for autosub, wrapped in a script that translates to different languages, which is why there are bash variables in the command.
"$tool_path" -o "$output_file" -F "$sub_format" -C 3 -K "key=$api_key" -S "$language_input" -D "$language_output" "$input_file"
When I run this command, I get the exact same error as in the thread, which is as follows:
Converting speech regions to FLAC files: 100% |################################################################################################################################################################################| Time: 0:00:03
Performing speech recognition: 100% |##########################################################################################################################################################################################| Time: 0:00:45
Exception in thread Thread-3:2% |#### | ETA: 0:00:00
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 389, in _handle_results
task = get()
File "/home/eddy/.local/lib/python2.7/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
TypeError: ('__init__() takes at least 3 arguments (1 given)', <class 'googleapiclient.errors.HttpError'>, ())

USA and Canada with jvectormap

I'm looking for a map of the US and Canada together, with their states and provinces respectively.
This is what I've done so far:
Downloaded jVectorMap 1.2.2 from here;
After reading this, installed GDAL and Shapely;
Downloaded 10m Admin 1 package from Natural Earth;
Then, according to this thread, it should be possible to do what I need with the following:
python converter.py --width 900 --country_name_index 12 --country_code_index 18 --longitude0 -100 --where="iso_a2 = 'CA' OR iso_a2 = 'US'" --projection lcc --name us_ca ne_10m_admin_1_states_provinces_shp/ne_10m_admin_1_states_provinces_shp.shp ../jquery-jvectormap-us-ca-lcc-en.js
The --country_name_index 12 --country_code_index 18 part doesn't make any sense to me, since I'm trying to convert two countries.
Anyway, after running the suggested code I get:
Traceback (most recent call last):
File "converter.py", line 296, in <module>
converter.convert(args['output_file'])
File "converter.py", line 144, in convert
self.loadData()
File "converter.py", line 89, in loadData
self.loadDataSource( sourceConfig )
File "converter.py", line 130, in loadDataSource
shapelyGeometry = shapely.wkb.loads( geometry.ExportToWkb() )
AttributeError: 'module' object has no attribute 'wkb'
I find this really odd, unless I missed something during installation.
After adding import shapely.wkb to converter.py, I get Alaska named State and Yukon as Territory, and that's it.
What am I missing here?
Thanks for your time.
I had the same problem as you. I solved it by using the shapefile 10m_cultural/ne_10m_admin_1_states_provinces_shp.shp from the "all vector themes" package on Natural Earth.
The only downside is that the output JS file is too big; it easily reaches 2 MB. I'll try a shapefile from a different source next time and let you know, but for now at least this works.
I had the same problem when building continent maps. The fix was to use an older convert.py version (1.1.1 rather than 1.2.2). You still need to pass the --country_name_index and --country_code_index flags, so give them whatever values you want; the map produced is fine.
convert.py 1.1.1 can be found here:
https://github.com/jfhovinne/jvectormap-maps-builder
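A side note on the AttributeError above: importing a package does not automatically import its submodules, which is why the explicit import shapely.wkb fixes converter.py. The same pitfall, illustrated with the standard library so it runs without Shapely installed:

```python
import importlib
import xml  # importing the package alone...

# ...does not guarantee the submodule attribute is available yet
# (it may already be present if something else imported it earlier).
before = hasattr(xml, "etree")

importlib.import_module("xml.etree.ElementTree")  # explicit import
after = hasattr(xml, "etree")  # now the attribute is guaranteed present
```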

How do I get useful diagnostics from boto?

How can I get useful diagnostics out of boto? All I ever seem to get is the infuriatingly useless "400 Bad Request". I recognize that boto is just passing along what the underlying API makes available, but surely there's some way to get something more useful than "Bad Request".
Traceback (most recent call last):
File "./mongo_pulldown.py", line 153, in <module>
main()
File "./mongo_pulldown.py", line 24, in main
print "snap = %r" % snap
File "./mongo_pulldown.py", line 149, in __exit__
self.connection.delete_volume(self.volume.id)
File "/home/roy/deploy/current/python/local/lib/python2.7/site-packages/boto/ec2/connection.py", line 1507, in delete_volume
return self.get_status('DeleteVolume', params, verb='POST')
File "/home/roy/deploy/current/python/local/lib/python2.7/site-packages/boto/connection.py", line 985, in get_status
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
I didn't have much luck with putting the debug setting in the config file, but the call to ec2.connect_to_region() takes a debug parameter with the same values as in j0nes' answer.
ec2 = boto.ec2.connect_to_region("eu-west-1", debug=2)
Everything that connection object sends/receives will get dumped to stdout.
You can configure the boto.cfg file to be more verbose:
[Boto]
debug = 2
debug: Controls the level of debug messages that will be printed by
the boto library. The following values are defined:
0 - no debug messages are printed
1 - basic debug messages from boto are printed
2 - all boto debugging messages plus request/response messages from httplib
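Independent of the debug level, the exception object itself carries the response details: EC2ResponseError exposes status, reason, and the raw XML body, and the body is usually where AWS names the precise fault. A sketch using a stand-in class so it runs without boto installed:

```python
class EC2ResponseError(Exception):
    """Stand-in mirroring boto.exception.EC2ResponseError's attributes."""
    def __init__(self, status, reason, body):
        super().__init__(f"{status} {reason}")
        self.status, self.reason, self.body = status, reason, body

def describe(err):
    # The XML body names the real fault, e.g. VolumeInUse, which the
    # one-line "400 Bad Request" message hides.
    return f"{err.status} {err.reason}: {err.body}"

err = EC2ResponseError(
    400, "Bad Request",
    "<Errors><Error><Code>VolumeInUse</Code></Error></Errors>")
```

In real code, catch boto's exception and print its body instead of just its message.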

Tornado app halting regularly for few seconds with 100% CPU

I am trying to troubleshoot an app running on Tornado 2.4 on Ubuntu 11.04 on EC2. It regularly hits 100% CPU and halts on one request for a few seconds.
Any help on this is greatly appreciated.
Symptoms:
top shows 100% CPU just at the time it halts. Normally server CPU utilization is about 30-60%.
It halts every 2-5 minutes, for just one request. I have checked that there are no cronjobs affecting this.
It halts for about 2 to 9 seconds. The problem goes away on restarting Tornado and worsens with Tornado uptime: the longer the server is up, the longer it halts.
The HTTP requests for which the problem appears do not seem to follow any pattern.
Interestingly, the next request in the log sometimes matches the halting duration and sometimes does not. Example:
00:00:00 GET /some/request ()
00:00:09 GET /next/request (9000ms)
00:00:00 GET /some/request ()
00:00:09 GET /next/request (1ms)
# 9 seconds gap in requests is certainly not possible as clients are constantly polling.
Database (mongodb) shows no expensive or large number of queries. No page faults. Database is on the same machine - local disk.
vmstat shows no change in read/write sizes compared to last few minutes.
tornado is running behind nginx.
Sending SIGINT when it was most likely halting gives a different stack trace every time. Some of them are below:
Traceback (most recent call last):
File "chat/main.py", line 3396, in <module>
main()
File "chat/main.py", line 3392, in main
tornado.ioloop.IOLoop.instance().start()
File "/home/ubuntu/tornado/tornado/ioloop.py", line 515, in start
self._run_callback(callback)
File "/home/ubuntu/tornado/tornado/ioloop.py", line 370, in _run_callback
callback()
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/iostream.py", line 303, in wrapper
callback(*args)
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 298, in _on_request_body
self.request_callback(self._request)
File "/home/ubuntu/tornado/tornado/web.py", line 1421, in __call__
handler = spec.handler_class(self, request, **spec.kwargs)
File "/home/ubuntu/tornado/tornado/web.py", line 126, in __init__
application.ui_modules.iteritems())
File "/home/ubuntu/tornado/tornado/web.py", line 125, in <genexpr>
self.ui["_modules"] = ObjectDict((n, self._ui_module(n, m)) for n, m in
File "/home/ubuntu/tornado/tornado/web.py", line 1114, in _ui_module
def _ui_module(self, name, module):
KeyboardInterrupt
Traceback (most recent call last):
File "chat/main.py", line 3398, in <module>
main()
File "chat/main.py", line 3394, in main
tornado.ioloop.IOLoop.instance().start()
File "/home/ubuntu/tornado/tornado/ioloop.py", line 515, in start
self._run_callback(callback)
File "/home/ubuntu/tornado/tornado/ioloop.py", line 370, in _run_callback
callback()
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/iostream.py", line 303, in wrapper
callback(*args)
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 285, in _on_headers
self.request_callback(self._request)
File "/home/ubuntu/tornado/tornado/web.py", line 1408, in __call__
transforms = [t(request) for t in self.transforms]
File "/home/ubuntu/tornado/tornado/web.py", line 1811, in __init__
def __init__(self, request):
KeyboardInterrupt
Traceback (most recent call last):
File "chat/main.py", line 3351, in <module>
main()
File "chat/main.py", line 3347, in main
tornado.ioloop.IOLoop.instance().start()
File "/home/ubuntu/tornado/tornado/ioloop.py", line 571, in start
self._handlers[fd](fd, events)
File "/home/ubuntu/tornado/tornado/stack_context.py", line 216, in wrapped
callback(*args, **kwargs)
File "/home/ubuntu/tornado/tornado/netutil.py", line 342, in accept_handler
callback(connection, address)
File "/home/ubuntu/tornado/tornado/netutil.py", line 237, in _handle_connection
self.handle_stream(stream, address)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 156, in handle_stream
self.no_keep_alive, self.xheaders, self.protocol)
File "/home/ubuntu/tornado/tornado/httpserver.py", line 183, in __init__
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
File "/home/ubuntu/tornado/tornado/iostream.py", line 139, in read_until
self._try_inline_read()
File "/home/ubuntu/tornado/tornado/iostream.py", line 385, in _try_inline_read
if self._read_to_buffer() == 0:
File "/home/ubuntu/tornado/tornado/iostream.py", line 401, in _read_to_buffer
chunk = self.read_from_fd()
File "/home/ubuntu/tornado/tornado/iostream.py", line 632, in read_from_fd
chunk = self.socket.recv(self.read_chunk_size)
KeyboardInterrupt
Any tips on how to troubleshoot this are greatly appreciated.
Further observations:
strace -p during the time it hangs shows empty output.
ltrace -p during the hang shows only free() calls in large numbers:
free(0x6fa70080) =
free(0x1175f8060) =
free(0x117a5c370) =
It sounds like you're suffering from garbage-collection (GC) storms. The behavior you describe is typical of that diagnosis, and the ltrace output further supports the hypothesis.
Lots of objects are being allocated and disposed of in the main/event loop exercised by your usage, and the periodic flurries of calls to free() result from that.
One possible approach would be to profile your code (or the libraries you depend on) and see if you can refactor it to use (and re-use) objects from pre-allocated pools.
Another possible mitigation would be to make your own, more frequent, calls to trigger garbage collection: more expensive in aggregate but possibly less costly at each call, a trade-off for more predictable throughput.
You can use the Python gc module both to investigate the issue more deeply (using gc.set_debug()) and for a simple attempted mitigation (for example, calls to gc.collect() after each transaction). You might also try running your application with gc.disable() for a reasonable length of time to see whether that further implicates the Python garbage collector. Note that disabling the garbage collector for an extended period will almost certainly cause paging/swapping, so use it only to validate the hypothesis and don't expect it to solve the problem in any meaningful way. It may just defer the problem until the whole system is thrashing and needs to be rebooted.
Here's an example of using gc.collect() in another SO thread on Tornado: SO: Tornado memory leak on dropped connections
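A minimal stdlib-only sketch of the two gc hooks mentioned above, verbose collection statistics plus an explicit collection after each unit of work:

```python
import gc

gc.set_debug(gc.DEBUG_STATS)   # print per-collection statistics to stderr

# ... handle one request/transaction here ...

unreachable = gc.collect()     # force a full collection; returns the
                               # number of unreachable objects found
gc.set_debug(0)                # turn the diagnostics back off
```

Logging `unreachable` per transaction would show whether garbage builds up in step with the observed halts.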
