Provision: how to set the device name? - provisioning

Twenty days ago I successfully provisioned an ESP-32, and it has been working fine.
Today I've successfully provisioned the second ESP-32 chip on another computer:
5.40 MiB / 5.40 MiB [------------------------------------] 100.00% 14.69 MiB p/s
looking for available hardware identities on disk
no hardware identities found on disk, claiming new hardware identity
Flashing device on port /dev/ttyUSB0
+--------------------------+--------------------------------------+
| SETTING | VALUE |
+--------------------------+--------------------------------------+
| Firmware | v1.0.2 |
| Device Model | esp32-4mb |
| Hardware ID | XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
| Hardware Batch & Seq. No | 2020-11-10#524 |
| context | remote |
| broker.host | device.toit.io |
| broker.cn | device.toit.io |
| broker.port | 9426 |
| wifi.ssid | SureDemo |
| wifi.password | suremote |
+--------------------------+--------------------------------------+
erasing device flash
successfully erased device flash
writing device partitions
successfully written device partitions
reading hardware chip information
successfully read hardware chip information
+--------------------------+--------------------------------------+
| SETTING | VALUE |
+--------------------------+--------------------------------------+
| factory device model | esp32-4mb |
| factory firmware version | v1.0.2 |
| chip ID | |
+--------------------------+--------------------------------------+
device was successfully flashed
However, I cannot start the application on this device:
michael_k@michaelk:~/toit_apps/Hsm2/tests$ toit run test_hsm_switch_async_4.toit
No default device set. Provide the device name (with the --device flag) to the command
michael_k@michaelk:~/toit_apps/Hsm2/tests$
I realized that this new device needs to be given a different name from my default device micrcx-1. By the way, I can see my first device:
michael_k@michaelk:~/toit_apps/Hsm2/tests$ toit devices
+--------------------------------------+----------+-------------------+----------+
| DEVICE ID | NAME | LAST SEEN | FIRMWARE |
+--------------------------------------+----------+-------------------+----------+
| XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX | micrcx-1 | Apr 29 2021 04:05 | v1.0.2 |
+--------------------------------------+----------+-------------------+----------+
michael_k@michaelk:~/toit_apps/Hsm2/tests$
So, the question is: how do I give a name to a new, additional device, and how do I run an application on it?
Thanks in advance, MK
PS. Naturally, I could be wrong, but as far as I remember, the name of the first device was assigned by the Toit system automatically; I had nothing to do with it. micrcx is my computer's identifier.

It might be that your device wasn't claimed yet.
In the current release (but hopefully not in future releases), provisioning a device only puts the Toit framework on the device. At this point it is not yet associated with your account and must be claimed.
You can simply run:
toit device claim <hardware-ID> or toit device claim <hardware-ID> --name=<some-name>.
If no name is provided, then the system generates one. Typically these are built out of two words, for example nervous-plastic. You can always change the names at a later point.
Alternatively you can claim the device in the web UI. There is a "CLAIM OR REPLACE DEVICE" button on the top right of the "Devices" view.
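Put together, the flow above looks like this (a sketch; the device name micrcx-2 is just an example, and you substitute the redacted hardware ID from your flashing output):

```shell
# Claim the newly flashed device under your account and name it in one step:
toit device claim <hardware-ID> --name=micrcx-2
# Then target that device explicitly when running an app:
toit run --device micrcx-2 test_hsm_switch_async_4.toit
```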
FYI: I have edited your post to remove the hardware ID of the new device, so nobody else claims the device in the meantime.

Related

How to build a lab with ASLR and DEP turned off to prepare a remote, stack-based buffer overflow exploit for a pentesting certification exam?

For my first pentesting certification exam I have to prepare a virtual lab in order to locally analyze a vulnerable binary and build a BOF exploit, which I then have to use against a remote target machine. As far as I know, I will not have any access to the target host except the vulnerable service, so it won't be possible to analyze the program on the target machine as in the labs and the BOF exam prep course on TryHackMe. I will have to set up my own local target machine, run the binary there, analyze it, prepare the exploit, and run it against the remote target machine.
Now I am facing multiple problems while setting up my local virtual test environment.
I installed both a Windows 7 32-bit and a Windows 10 32-bit virtual machine. On both machines I installed Python 2.7.1, Immunity Debugger, and mona.py. On Windows 7 there was no Defender running; on Windows 10 I disabled Defender real-time protection.
Afterwards I uploaded the binary to both machines and went through the standard process of building an OSCP-level stack-based BOF-exploit:
Crash the program with a fuzzer
Identify the offset to the return address
Identify bad characters
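The offset step in that list is typically done with a cyclic pattern (what msf-pattern_create / msf-pattern_offset do); a minimal, self-contained Python sketch of the same idea, with function names of my own choosing:

```python
import string

def pattern_create(length):
    # Metasploit-style cyclic pattern made of Upper/lower/digit triplets,
    # e.g. "Aa0Aa1Aa2..." -- any 4-byte slice is unique within ~20 KB.
    chunks = []
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                chunks.append(upper + lower + digit)
                if len(chunks) * 3 >= length:
                    return "".join(chunks)[:length]
    return "".join(chunks)[:length]

def pattern_offset(value, length=20280):
    # Offset of the bytes observed in EIP after the crash.
    return pattern_create(length).find(value)
```

After the crash, feeding the four bytes that landed in EIP to pattern_offset gives the distance to the return address.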
Next, I wanted to use mona.py to find a JMP ESP instruction (or something similar), as I always did in the labs. Now the problems started. mona.py returned 0 pointers when I entered the following command:
!mona jmp -r esp -cpb "\x00\x0a\x0d"
Usually (in the labs I did) I got a list of possible JMP ESP instructions with their memory addresses. But in my own environment I got the following mona output:
0BADF00D !mona jmp -r esp -cpb "\x00\x0a\x0d"
---------- Mona command started on 2022-07-16 17:59:06 (v2.0, rev 616) ----------
0BADF00D [+] Processing arguments and criteria
0BADF00D - Pointer access level : X
0BADF00D - Bad char filter will be applied to pointers : "\x00\x0a\x0d"
0BADF00D [+] Generating module info table, hang on...
0BADF00D - Processing modules
0BADF00D - Done. Let's rock 'n roll.
0BADF00D [+] Querying 1 modules
0BADF00D - Querying module 32bitftp.exe
6ED20000 Modules C:\Windows\System32\rasadhlp.dll
0BADF00D - Search complete, processing results
0BADF00D [+] Preparing output file 'jmp.txt'
0BADF00D - (Re)setting logfile jmp.txt
0BADF00D Found a total of 0 pointers
0BADF00D
0BADF00D [+] This mona.py action took 0:00:03.265000
I noticed that only one module (32bitftp.exe) had been queried. In the course labs, many more (system) modules were queried. So I asked myself why and used the
!mona modules
command to check the modules. I got the following output:
0BADF00D !mona modules
---------- Mona command started on 2022-07-16 18:04:03 (v2.0, rev 616) ----------
0BADF00D [+] Processing arguments and criteria
0BADF00D - Pointer access level : X
0BADF00D [+] Generating module info table, hang on...
0BADF00D - Processing modules
0BADF00D - Done. Let's rock 'n roll.
0BADF00D -------------------------------------------------------------------------------------------------------------------------
0BADF00D  Module info :
0BADF00D -------------------------------------------------------------------------------------------------------------------------
0BADF00D  Base       | Top        | Size       | Rebase | SafeSEH | ASLR | NXCompat | OS Dll | Version, Modulename & Path
0BADF00D -------------------------------------------------------------------------------------------------------------------------
0BADF00D  0x74ef0000 | 0x75010000 | 0x00120000 | True   | True    | True | False    | True   | 10.0.19041.789 [ucrtbase.dll] (C:\Windows\System32\ucrtbase.dll)
0BADF00D  0x715a0000 | 0x715b6000 | 0x00016000 | True   | True    | True | False    | True   | 10.0.19041.1151 [NLAapi.dll] (C:\Windows\system32\NLAapi.dll)
0BADF00D  0x74e70000 | 0x74eeb000 | 0x0007b000 | True   | True    | True | False    | True   | 10.0.19041.789 [msvcp_win.dll] (C:\Windows\System32\msvcp_win.dll)
0BADF00D  0x72ee0000 | 0x72f7f000 | 0x0009f000 | True   | True    | True | False    | True   | 10.0.19041.1 [apphelp.dll] (C:\Windows\SYSTEM32\apphelp.dll)
0BADF00D  0x74480000 | 0x74511000 | 0x00091000 | True   | True    | True | False    | True   | 10.0.19041.1 [DNSAPI.dll] (C:\Windows\SYSTEM32\DNSAPI.dll)
0BADF00D  0x760f0000 | 0x761af000 | 0x000bf000 | True   | True    | True | False    | True   | 7.0.19041.546 [msvcrt.dll] (C:\Windows\System32\msvcrt.dll)
0BADF00D  0x72880000 | 0x72afe000 | 0x0027e000 | True   | True    | True | False    | True   | 10.0.19041.546 [CoreUIComponents.dll] (C:\Windows\System32\CoreUIComponents.dll)
0BADF00D  0x76ef0000 | 0x7708e000 | 0x0019e000 | True   | True    | True | False    | True   | 10.0.19041.1023 [ntdll.dll] (C:\Windows\SYSTEM32\ntdll.dll)
0BADF00D  0x68df0000 | 0x68e06000 | 0x00016000 | True   | True    | True | False    | True   | 10.0.19041.1 [pnrpnsp.dll] (C:\Windows\system32\pnrpnsp.dll)
0BADF00D  0x640b0000 | 0x640c0000 | 0x00010000 | True   | True    | True | False    | True   | 10.0.19041.546 [wshbth.dll] (C:\Windows\system32\wshbth.dll)
[...]
Every module has ASLR, Rebase, and SafeSEH enabled. I have some basic knowledge of these security mechanisms, but I'm pretty sure the exam will not require me to bypass them. In the labs, there were always modules with ASLR, Rebase, and SafeSEH disabled. So I came to the conclusion that mona.py didn't show me a result because these mechanisms are enabled.
My next idea was of course that I should turn off ASLR and DEP on my local Windows machines. After some research, I found out that on Windows 7 DEP can be disabled with the following command
bcdedit.exe /set {current} nx AlwaysOff
and ASLR can be disabled by using regedit to create a new 32-bit DWORD value "MoveImages", set to 0, under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]. After a reboot, ASLR should be disabled.
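Put together, the Windows 7 steps look like this (a sketch from an elevated prompt; MoveImages = 0 is the value commonly documented for pre-Windows 8 systems, and a reboot is required afterwards):

```shell
bcdedit.exe /set {current} nx AlwaysOff
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v MoveImages /t REG_DWORD /d 0 /f
```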
But it's not! If I use
!mona modules
after the reboot, the output stays the same: all security mechanisms (including ASLR) are still turned on. After some further research, I was not able to find a way to disable it on Windows 7.
So I tried it on Windows 10. Here I did not have to create a new registry key. DEP and ASLR could be disabled under "Windows Security -> App and Browser Control -> Exploit Protection". After a reboot, the mechanisms should be disabled. But again: They are not!
If I load the program into ImmunityDebugger and use
!mona modules
to show the modules, the table is still unchanged, showing that all system modules have ASLR turned on.
Of course I was able to get a JMP ESP instruction from kernel32.dll for example with the following command:
!mona jmp -r esp -cpb "\x00\x0a\x0d" -m kernel32.dll
If I use it to exploit the BOF while the local Windows 7/10 system is still running, it works fine. But after a reboot, the system modules' addresses change thanks to ASLR, and the addresses won't work anymore.
And of course, if I use the exploit against the remote target system, the exploit will fail.
So my questions are:
What am I doing wrong? (Maybe my whole approach is wrong.)
How can I really disable ASLR and DEP on Windows 7/10 systems?
In the exam, how can I know which modules on the remote target server have ASLR turned on? Even if I manage to turn off my local ASLR, I might be unlucky and pick a module that has ASLR turned on on the remote target host...
Since my exam is not far away, I would be very, very happy if someone could help me out with this. Anyway, thanks so much for taking the time to read all of this :)

openstack cinder can't create; cinder list and openstack volume service list show "The server is currently unavailable."

I have installed Cinder on the controller node and the block node.
I have checked the status of openstack-cinder-scheduler and openstack-cinder-api (on the controller node) and of openstack-cinder-volume and target.service (on the block node); they are all running.
But when I use "cinder list", "cinder create", or "openstack volume service list", I only ever get one kind of output:
[root@controller //]# openstack volume service list
The server is currently unavailable. Please try again at a later time.
The Keystone service is temporarily unavailable.
(HTTP 503)
[root@controller //]# cinder list
ERROR: The server is currently unavailable. Please try again at a later time.
The Keystone service is temporarily unavailable.
I have checked the configuration in cinder.conf and nova.conf several times and have no idea what is wrong with them. Can you give a suggestion? Thank you.
I found that I can create a volume from the dashboard, while from the command line I can't. It may be that something was wrong between the admin and cinder users. Try it again:
[root@controller //]# openstack volume list
+--------------------------------------+----------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+----------+-----------+------+-------------+
| 435acaae-44d8-4793-9a0c-61a4436b6b37 | volumev4 | available | 1 | |
+--------------------------------------+----------+-----------+------+-------------+
[root@controller //]# openstack role add --project service --user cinder admin
[root@controller //]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2021-12-28T07:40:01.000000 |
| cinder-volume | compute2@lvm | nova | enabled | up | 2021-12-28T07:40:07.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
Now it produces output.

ffmpeg - cuda encode - OpenEncodeSessionEx failed: out of memory

I'm having a problem with ffmpeg video encoding using GPU (CUDA).
I have 2x nVidia GTX 1050 Ti
The problem comes when I try to do multiple parallel encodings. With more than 2 processes, ffmpeg dies like this:
[h264_nvenc @ 0xcc1cc0] OpenEncodeSessionEx failed: out of memory (10)
The problem is nvidia-smi shows there are a lot of resources available on the gpu:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.66 Driver Version: 384.66 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... Off | 00000000:41:00.0 Off | N/A |
| 40% 37C P0 42W / 75W | 177MiB / 4038MiB | 30% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 105... Off | 00000000:42:00.0 Off | N/A |
| 40% 21C P8 35W / 75W | 10MiB / 4038MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
The second GPU doesn't seem to be used at all, and there's more than enough memory left on the first one to support a third file.
Any ideas would be extremely helpful!
Actually your card is "non-qualified" (in NVIDIA's terms) and supports only 2 simultaneous sessions. You could consult https://developer.nvidia.com/video-encode-decode-gpu-support-matrix#Encoder or download the NVENC SDK, which contains a PDF with the license terms for qualified and non-qualified GPUs. There are some driver patches which disable the session-count check; you could try them: https://github.com/keylase/nvidia-patch
Since you haven't shown the code that sets up the encoding context, I can't tell why the second GPU is not used. Have you selected it via av_opt_set() or a command-line argument?
The more important problem here is that GeForce cards cannot run more than 2 encoding sessions in one system. If you need more, you have to use the expensive ones like Quadro, Tesla, etc.
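For the GPU-selection part, h264_nvenc exposes a -gpu option; a sketch of pinning an encode onto the second card (input/output filenames are placeholders):

```shell
# Device index 1 = the second GPU in nvidia-smi's listing.
ffmpeg -i input3.mp4 -c:v h264_nvenc -gpu 1 output3.mp4
```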

Windows directory junction: no files are copied via network access

I have a problem with file storage organization.
There is a network of Windows 7 and Windows XP computers. One of them is a file storage server.
Software packs are located on the storage server. Each software pack includes
1. an active folder;
2. some folders, one per program.
(see the structure example below)
The active folder is a structure holding the programs' current versions; it must allow copying all the software in their current versions via a direct copy, while avoiding file duplication. Fast switching of the current version is also needed.
Program folders contain folders for each program's version.
I tried to solve the task with Windows directory junctions. I created the links via Far Manager 3 (Alt+F6, link type: directory junction).
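For reference, the same kind of junction can be created from cmd with mklink /J instead of Far Manager. One way to reproduce the tree shown below (drive and folder paths are illustrative):

```shell
:: Make "active\notepad++" an ordinary folder and junction the current
:: version folder into it; repeat per program, re-pointing the junction
:: when the current version changes.
mkdir "D:\storage\active\notepad++"
mklink /J "D:\storage\active\notepad++\7.3.1" "D:\storage\notepad++\7.3.1"
```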
Desired structure example
+---active % 389 MB, desired package of actual software
| +---notepad++ % directory junction
| | \---7.3.1
| | npp.7.3.1.Installer.exe
| | npp.7.3.1.Installer.x64.exe
| |
| +---octave % directory junction
| | \---4.2.1
| | octave-4.2.1-w32-installer.exe
| | octave-4.2.1-w64-installer.exe
| |
| \---texstudio % directory junction
| \---2.12.4
| texstudio-2.12.4-win-qt5.6.2.exe
|
+---notepad++
| +---6.8.8
| | npp.6.8.8.Installer.exe
| |
| \---7.3.1 % actual version
| npp.7.3.1.Installer.exe
| npp.7.3.1.Installer.x64.exe
|
+---octave
| +---4.2.0
| | octave-4.2.0-w32-installer.exe
| | octave-4.2.0-w64-installer.exe
| | octave-4.2.0-w64.zip
| |
| \---4.2.1 % actual version
| octave-4.2.1-w32-installer.exe
| octave-4.2.1-w64-installer.exe
|
\---texstudio
+---2.11.0
| texstudio-2.11.0-win-qt5.5.1.exe
|
+---2.12.0
| texstudio-2.12.0-win-qt5.6.2.exe
|
+---2.12.2
| texstudio-2.12.2-win-qt5.6.2.exe
|
\---2.12.4 % actual version
texstudio-2.12.4-win-qt5.6.2.exe
Local usage
Local means I operate on the storage server's Windows GUI.
If I copy the active folder to another folder on the server, the observed result matches the desired one:
TEST_COPY
\---active % 389 MB, all the files are copied
+---notepad++
| \---7.3.1
| npp.7.3.1.Installer.exe
| npp.7.3.1.Installer.x64.exe
|
+---octave
| \---4.2.1
| octave-4.2.1-w32-installer.exe
| octave-4.2.1-w64-installer.exe
|
\---texstudio
\---2.12.4
texstudio-2.12.4-win-qt5.6.2.exe
Network file access
If I access the storage server's active folder via network sharing and copy it, the desired result does not happen.
The tree /A /F command does show the same structure for the copied tree, but active is 0 MB in size, and the child folders notepad++, octave, and texstudio are also zero-sized and empty.
TEST_COPY
\---active % 0 MB, no any files, only subfolders
+---notepad++
| \---7.3.1
|
+---octave
| \---4.2.1
|
\---texstudio
\---2.12.4
Only if I directly copy the junctions' subfolders inside the active folder (7.3.1, 4.2.1, 2.12.4) is the content copied as desired. But every user wants to copy the active folder, not its second-level children.
By the way, sometimes an error occurred while copying ("file/folder already exists") and the copy process aborted unexpectedly.
Maybe the links are set up wrong, or there are other methods to achieve the desired result.
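One thing worth testing is whether the copy tool matters: robocopy follows directory junctions by default when recursing (its /XJ switch does the opposite and excludes them), so a command like the following may pull the real file content across the share (server and destination paths are illustrative):

```shell
robocopy \\server\share\active C:\TEST_COPY\active /E
```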

What are the valid instanceState values for the Amazon EC2 API?

What are the valid instanceState values for the Amazon EC2 API? They don't seem to be defined in the current API doc, and Google doesn't turn up much. So far I know about:
0: pending
16: running
32: shutting-down
48: terminated
but I'm pretty sure I've seen an error state before.
Thanks!
As of posting, the current states are:
+--------+---------------+
| Code | State |
+--------+---------------+
| 0 | pending |
| 16 | running |
| 32 | shutting-down |
| 48 | terminated |
| 64 | stopping |
| 80 | stopped |
+--------+---------------+
And the documentation can be found within the Amazon Elastic Compute Cloud API reference under InstanceStateType
The docs also mention a state code 272 which "typically indicates a problem with the host running the instance". They suggest trying a reboot in the first instance, and posting on the EC2 forums if that doesn't solve the issue.
The docs seem to have been moved, and I still can't find them. Here are two more state codes:
64: stopping
80: stopped
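The table above can be turned into a small decoder. Per the EC2 API reference, only the low byte of the state code is significant and the high byte is used for internal purposes, which is also why an unusual code like 272 (0x110) still maps onto a normal state:

```python
# Valid EC2 instance states, keyed by the low byte of the state code.
INSTANCE_STATES = {
    0: "pending",
    16: "running",
    32: "shutting-down",
    48: "terminated",
    64: "stopping",
    80: "stopped",
}

def decode_state(code):
    # Mask off the internal-use high byte: 272 = 0x110 -> 16 ("running").
    return INSTANCE_STATES[code & 0xFF]
```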

Resources