Issue with net-snmp v3 authPriv using AES - snmp

I am creating a C++ project using the net-snmp libraries I built. I was able to interface with my hardware via SNMP v2c as well as SNMP v3 (authNoPriv). However, I was unsuccessful when I tried using authPriv. Is there any advice on this?
What I suspect is that net-snmp does not support AES.
When I tried to run net-snmp directly, I see that the only option offered for the privacy protocol is DES. So I would like to confirm: does net-snmp support both the AES128 and DES privacy protocols?
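For reference, the kind of command-line test I would expect to work looks roughly like this (the agent address here is a placeholder; user and passphrases match the code below):
snmpget -v3 -l authPriv -u snmpuser -a MD5 -A passphrase -x AES -X privphrase 192.0.2.10 1.3.6.1.2.1.1.1.0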

For authNoPriv, I get an "Authentication failure" report when I use the SHA-1 authentication protocol.
For authPriv, I couldn't establish any connection with the SNMP hardware.
I suspect there is something wrong in my code, as there was no issue with authNoPriv using the MD5 authentication protocol, but the above errors occur when I configure the respective security protocols.
// Definitions
const char *user = "snmpuser";
const char *our_v3_passphrase = "passphrase";
const char *our_v3_privphrase = "privphrase";
struct snmp_session session;
SOCK_STARTUP;
// Initialize the SNMP library
snmp_sess_init(&session);
session.peername = _strdup(argv[1]);
// set the SNMP version number
session.version = SNMP_VERSION_3;
// set the SNMPv3 user name
session.securityName = _strdup(user);
session.securityNameLen = strlen(session.securityName);
// set the security level
session.securityLevel = SNMP_SEC_LEVEL_AUTHPRIV; // SNMP_SEC_LEVEL_AUTHNOPRIV (for authNoPriv)
// set the authentication protocol
session.securityAuthProto = usmHMACMD5AuthProtocol; // usmHMACSHA1AuthProtocol
session.securityAuthProtoLen = USM_AUTH_PROTO_MD5_LEN; // USM_AUTH_PROTO_SHA_LEN
session.securityAuthKeyLen = USM_AUTH_KU_LEN;
// set the authentication key to a hashed version of the passphrase
if (generate_Ku(session.securityAuthProto, session.securityAuthProtoLen,
                (u_char *)our_v3_passphrase, strlen(our_v3_passphrase),
                session.securityAuthKey, &session.securityAuthKeyLen) != SNMPERR_SUCCESS) {
    snmp_perror(argv[0]);
    snmp_log(LOG_ERR, "Error generating Ku from authentication passphrase.\n");
    SOCK_CLEANUP;
    exit(1);
}
// set the privacy protocol
session.securityPrivProto = usmAES128PrivProtocol; // usmDESPrivProtocol
session.securityAuthProtoLen = USM_PRIV_PROTO_AES128_LEN; // USM_PRIV_PROTO_DES_LEN
session.securityAuthKeyLen = USM_PRIV_KU_LEN;
// set the privacy key to a hashed version of the privphrase
if (generate_Ku(session.securityAuthProto, session.securityAuthProtoLen,
                (u_char *)our_v3_privphrase, strlen(our_v3_privphrase),
                session.securityPrivKey, &session.securityPrivKeyLen) != SNMPERR_SUCCESS) {
    snmp_perror(argv[0]);
    snmp_log(LOG_ERR, "Error generating Ku from privacy passphrase.\n");
    SOCK_CLEANUP;
    exit(1);
}
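For completeness, after the session is configured I open it and issue a GET roughly like the following sketch (OID hard-coded to sysDescr.0 and error handling trimmed; this part behaves fine under v2c and authNoPriv):
// Sketch of the rest of the flow: open the configured session and issue a GET.
netsnmp_session *ss = snmp_open(&session);
if (!ss) {
    snmp_sess_perror("snmp_open", &session);
    SOCK_CLEANUP;
    exit(1);
}
netsnmp_pdu *pdu = snmp_pdu_create(SNMP_MSG_GET);
oid anOID[MAX_OID_LEN];
size_t anOID_len = MAX_OID_LEN;
read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len); // sysDescr.0
snmp_add_null_var(pdu, anOID, anOID_len);
netsnmp_pdu *response = NULL;
int status = snmp_synch_response(ss, pdu, &response);
if (status == STAT_SUCCESS && response && response->errstat == SNMP_ERR_NOERROR)
    print_variable(response->variables->name, response->variables->name_length, response->variables);
if (response)
    snmp_free_pdu(response);
snmp_close(ss);
SOCK_CLEANUP;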

Related

Using IOCTL_VIDEO_QUERY_AVAIL_MODES to get a list of modes supported by a video adapter

I'm trying to query a list of supported modes from a video adapter driver:
// IOCTL_VIDEO_QUERY_NUM_AVAIL_MODES - Retrieve the count of modes on the display adapter
// Input-Buffer: none
// Output-Buffer: VIDEO_NUM_MODES
VIDEO_NUM_MODES videoNumModes{};
// Send the IOCTL_VIDEO_QUERY_NUM_AVAIL_MODES control code directly to the device driver
ULONG bytesReturned{};
if (::DeviceIoControl(
        hDevice,                              // Handle to the display adapter device
        IOCTL_VIDEO_QUERY_NUM_AVAIL_MODES,    // IOCTL code
        nullptr, 0,                           // No input param struct
        &videoNumModes, sizeof videoNumModes, // Address/size of output param struct
        &bytesReturned,                       // Bytes returned in the output param struct
        nullptr))                             // Optional OVERLAPPED structure
{
    // Allocate a buffer to receive the array of supported modes
    const auto bufferSizeInBytes = videoNumModes.NumModes * videoNumModes.ModeInformationLength;
    pVideoModeInfo = new VIDEO_MODE_INFORMATION[videoNumModes.NumModes];
    // IOCTL_VIDEO_QUERY_AVAIL_MODES - Retrieve the array of supported modes
    // Input-Buffer: none
    // Output-Buffer: <allocated buffer>
    // Send the IOCTL_VIDEO_QUERY_AVAIL_MODES control code directly to the device driver
    if (::DeviceIoControl(
            hDevice,
            IOCTL_VIDEO_QUERY_AVAIL_MODES,
            nullptr, 0,
            pVideoModeInfo, bufferSizeInBytes,
            &bytesReturned,
            nullptr))
I get FALSE back on the first DeviceIoControl call with LastError set to ERROR_INVALID_FUNCTION (0x1).
I use this same code successfully to call custom IOCTL stuff in my drivers, so I'm confident that the implementation itself is sound. However, when I open a handle to the device, I'm supposed to use a string containing information about both the device and the interface I'm going to use. I defined the GUID for my custom IOCTL interface, and I use something like the following to send custom IOCTL commands:
hDevice = ::CreateFileW(L"\\\\?\\ROOT#DISPLAY#0000#{5f2f2b485bbd-5201-f1f9-4520-30f4bf353599}", ...);
But the documentation for IOCTL_VIDEO_QUERY_NUM_AVAIL_MODES and IOCTL_VIDEO_QUERY_AVAIL_MODES doesn't mention which interface (GUID) they're a part of.
I assumed that I had to open the adapter device with the GUID_DEVINTERFACE_DISPLAY_ADAPTER interface, but I'm getting Incorrect Function on the first DeviceIoControl call. Same result if I open the adapter or one of its monitors with GUID_DEVINTERFACE_MONITOR.
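For reference, this is roughly how I resolve the interface path and open the adapter (a sketch; the use of GUID_DEVINTERFACE_DISPLAY_ADAPTER here is exactly the assumption I'm unsure about):
// Sketch: open the display adapter via its device interface
// (assumes GUID_DEVINTERFACE_DISPLAY_ADAPTER is the right interface for these IOCTLs).
#include <windows.h>
#include <initguid.h>
#include <ntddvdeo.h>   // GUID_DEVINTERFACE_DISPLAY_ADAPTER, IOCTL_VIDEO_*
#include <setupapi.h>
#include <cstdlib>
#pragma comment(lib, "setupapi.lib")

HANDLE OpenFirstDisplayAdapter()
{
    const HDEVINFO devInfo = ::SetupDiGetClassDevsW(
        &GUID_DEVINTERFACE_DISPLAY_ADAPTER, nullptr, nullptr,
        DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
    if (devInfo == INVALID_HANDLE_VALUE)
        return INVALID_HANDLE_VALUE;

    HANDLE hDevice = INVALID_HANDLE_VALUE;
    SP_DEVICE_INTERFACE_DATA ifData{ sizeof(SP_DEVICE_INTERFACE_DATA) };
    if (::SetupDiEnumDeviceInterfaces(devInfo, nullptr,
            &GUID_DEVINTERFACE_DISPLAY_ADAPTER, 0, &ifData))
    {
        // First call fails with ERROR_INSUFFICIENT_BUFFER and reports the required size.
        DWORD requiredSize = 0;
        ::SetupDiGetDeviceInterfaceDetailW(devInfo, &ifData, nullptr, 0, &requiredSize, nullptr);
        auto pDetail = static_cast<PSP_DEVICE_INTERFACE_DETAIL_DATA_W>(::malloc(requiredSize));
        if (pDetail)
        {
            pDetail->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA_W);
            if (::SetupDiGetDeviceInterfaceDetailW(devInfo, &ifData, pDetail, requiredSize, nullptr, nullptr))
            {
                // Open the interface path (e.g. \\?\ROOT#DISPLAY#...#{GUID})
                hDevice = ::CreateFileW(pDetail->DevicePath, GENERIC_READ | GENERIC_WRITE,
                                        FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                                        OPEN_EXISTING, 0, nullptr);
            }
            ::free(pDetail);
        }
    }
    ::SetupDiDestroyDeviceInfoList(devInfo);
    return hDevice;
}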
I've searched online for any code examples, but all I find are from the driver side, responding to the query.
The display adapter driver that I'm issuing this against is an IddCx driver, if that helps. Any clues?

WinVerifyTrust function takes a long time to execute

I am using the Windows WinVerifyTrust function on Windows 10 Pro to verify DLL signatures.
When I call this function for the first time, it takes 4 seconds to execute and return the verification status for the first DLL. For the subsequent DLLs, the function returns quickly.
Can anyone help me understand the possible reason for that latency?
The call that takes 4 seconds is this one:
lStatus = WinVerifyTrust(
NULL,
&WVTPolicyGUID,
&WinTrustData);
The wrapper function I'm using looks like this:
#define _UNICODE 1
#define UNICODE 1
#include <tchar.h>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <Softpub.h>
#include <wincrypt.h>
#include <wintrust.h>
// Link with the Wintrust.lib file.
#pragma comment (lib, "wintrust")
BOOL VerifyEmbeddedSignature(LPCWSTR pwszSourceFile)
{
LONG lStatus;
DWORD dwLastError;
// Initialize the WINTRUST_FILE_INFO structure.
WINTRUST_FILE_INFO FileData;
memset(&FileData, 0, sizeof(FileData));
FileData.cbStruct = sizeof(WINTRUST_FILE_INFO);
FileData.pcwszFilePath = pwszSourceFile;
FileData.hFile = NULL;
FileData.pgKnownSubject = NULL;
GUID WVTPolicyGUID = WINTRUST_ACTION_GENERIC_VERIFY_V2;
WINTRUST_DATA WinTrustData;
// Initialize the WinVerifyTrust input data structure.
// Default all fields to 0.
memset(&WinTrustData, 0, sizeof(WinTrustData));
WinTrustData.cbStruct = sizeof(WinTrustData);
// Use default code signing EKU.
WinTrustData.pPolicyCallbackData = NULL;
// No data to pass to SIP.
WinTrustData.pSIPClientData = NULL;
// Disable WVT UI.
WinTrustData.dwUIChoice = WTD_UI_NONE;
// No revocation checking.
WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
// Verify an embedded signature on a file.
WinTrustData.dwUnionChoice = WTD_CHOICE_FILE;
// Verify action.
WinTrustData.dwStateAction = WTD_STATEACTION_VERIFY;
// Verification sets this value.
WinTrustData.hWVTStateData = NULL;
// Not used.
WinTrustData.pwszURLReference = NULL;
// This is not applicable if there is no UI because it changes
// the UI to accommodate running applications instead of
// installing applications.
WinTrustData.dwUIContext = 0;
// Set pFile.
WinTrustData.pFile = &FileData;
// WinVerifyTrust verifies signatures as specified by the GUID
// and Wintrust_Data.
lStatus = WinVerifyTrust(
NULL,
&WVTPolicyGUID,
&WinTrustData);
switch (lStatus)
{
case ERROR_SUCCESS:
/*
Signed file:
- Hash that represents the subject is trusted.
- Trusted publisher without any verification errors.
- UI was disabled in dwUIChoice. No publisher or
time stamp chain errors.
- UI was enabled in dwUIChoice and the user clicked
"Yes" when asked to install and run the signed
subject.
*/
wprintf_s(L"The file \"%s\" is signed and the signature "
L"was verified.\n",
pwszSourceFile);
break;
case TRUST_E_NOSIGNATURE:
// The file was not signed or had a signature
// that was not valid.
// Get the reason for no signature.
dwLastError = GetLastError();
if (TRUST_E_NOSIGNATURE == dwLastError ||
TRUST_E_SUBJECT_FORM_UNKNOWN == dwLastError ||
TRUST_E_PROVIDER_UNKNOWN == dwLastError)
{
// The file was not signed.
wprintf_s(L"The file \"%s\" is not signed.\n",
pwszSourceFile);
}
else
{
// The signature was not valid or there was an error
// opening the file.
wprintf_s(L"An unknown error occurred trying to "
L"verify the signature of the \"%s\" file.\n",
pwszSourceFile);
}
break;
case TRUST_E_EXPLICIT_DISTRUST:
// The hash that represents the subject or the publisher
// is not allowed by the admin or user.
wprintf_s(L"The signature is present, but specifically "
L"disallowed.\n");
break;
case TRUST_E_SUBJECT_NOT_TRUSTED:
// The user clicked "No" when asked to install and run.
wprintf_s(L"The signature is present, but not "
L"trusted.\n");
break;
case CRYPT_E_SECURITY_SETTINGS:
wprintf_s(L"CRYPT_E_SECURITY_SETTINGS - The hash "
L"representing the subject or the publisher wasn't "
L"explicitly trusted by the admin and admin policy "
L"has disabled user trust. No signature, publisher "
L"or timestamp errors.\n");
break;
default:
wprintf_s(L"Error is: 0x%x.\n",
lStatus);
break;
}
// Any hWVTStateData must be released by a call with close.
WinTrustData.dwStateAction = WTD_STATEACTION_CLOSE;
lStatus = WinVerifyTrust(
NULL,
&WVTPolicyGUID,
&WinTrustData);
return true;
}
Please see the MSDN documentation on WinVerifyTrust; it seems you will need to prevent retrieval of revocation lists over the network as well:
// Use only the local cache for revocation checks. Prevents revocation checks over the network.
WinTrustData.dwProvFlags = WTD_CACHE_ONLY_URL_RETRIEVAL;
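For example, a sketch combining this with the revocation-check field already set in the wrapper (WTD_REVOCATION_CHECK_NONE is my addition here, not something the original answer calls for):
// Skip revocation checks entirely and never go to the network for URL retrievals.
WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
WinTrustData.dwProvFlags = WTD_CACHE_ONLY_URL_RETRIEVAL | WTD_REVOCATION_CHECK_NONE;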

How to do asymmetric encryption/decryption with OSX 10.7+ without openssl?

Since OpenSSL is deprecated in OS X 10.7+, I'd like to switch from OpenSSL to the built-in OS X keychain and crypto functions.
But now I am stuck on asymmetric encryption/decryption.
How can I encrypt/decrypt a randomly generated symmetric key with an asymmetric (RSA) key? With OpenSSL it's quite easy.
In the Apple dev docs, they say that CommonCrypto supports asymmetric encryption, but checking the headers, I can only see support for the symmetric primitives.
Any hints?
Take a look at Cryptographic Message Syntax Services and see if that can do what you need.
Also, you're misreading the OpenSSL thing just a bit. The OpenSSL libraries that ship with the OS are deprecated. That doesn't mean you can't continue to use OpenSSL. OpenSSL is Open Source, and there's nothing stopping you from downloading it and using it freely in your application.
Apple's deprecation just means that if you use OpenSSL, you need to include your own copy of the OpenSSL libraries so that you are responsible for keeping your OpenSSL library up-to-date and for fixing any breakage that occurs whenever you do so. :-)
And if not, the iOS asymmetric encryption and decryption functions (SecKeyEncrypt and SecKeyDecrypt) do exist in OS X, and the iOS header even shows that they are available in OS X. I'm not sure why they aren't in the OS X SDK. I filed a bug, and it was marked as a dup.
It probably would not be possible for Apple to remove those functions in the future without breaking the Simulator, but if you're submitting to the app store and they give you grief about using them, here's a roughly compatible replacement for SecKeyEncrypt built using the Security Transforms API:
// Workaround for SecKeyEncrypt not really being public API in OS X
OSStatus OSXSecKeyEncrypt ( SecKeyRef key, SecPadding padding, const uint8_t *plainText, size_t plainTextLen, uint8_t *cipherText, size_t *cipherTextLen )
{
    CFMutableDictionaryRef parameters = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks,
        &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(parameters, kSecAttrKeyType, kSecAttrKeyTypeAES);
    CFErrorRef error = NULL;
    SecTransformRef encrypt = SecEncryptTransformCreate(key, &error);
    if (error) {
        AFNSLog(@"Encryption failed: %@\n", (__bridge NSError *)error);
        return (OSStatus)[(__bridge NSError *)error code];
    }
    SecTransformSetAttribute(
        encrypt,
        kSecPaddingKey,
        NULL, // kSecPaddingPKCS1Key (rdar://13661366 : NULL means kSecPaddingPKCS1Key and
              // kSecPaddingPKCS1Key fails horribly)
        &error);
    CFDataRef sourceData = CFDataCreate(kCFAllocatorDefault, plainText, plainTextLen);
    SecTransformSetAttribute(encrypt, kSecTransformInputAttributeName,
                             sourceData, &error);
    CFDataRef encryptedData = SecTransformExecute(encrypt, &error);
    if (error) {
        AFNSLog(@"Encryption failed: %@\n", (__bridge NSError *)error);
        return (OSStatus)[(__bridge NSError *)error code];
    }
    if ((unsigned long)CFDataGetLength(encryptedData) > *cipherTextLen) {
        return errSecBufferTooSmall;
    }
    *cipherTextLen = CFDataGetLength(encryptedData);
    CFDataGetBytes(encryptedData, CFRangeMake(0, *cipherTextLen), cipherText);
    return noErr;
}
You should be able to adapt the code for decryption fairly easily; I didn't need it for my purposes, so I didn't write that function.
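For reference, an untested sketch of what the decryption counterpart might look like, mirroring the function above with SecDecryptTransformCreate (same NULL-padding workaround; treat it as a starting point, not working code):
// Untested sketch: decryption counterpart built on the Security Transforms API.
OSStatus OSXSecKeyDecrypt ( SecKeyRef key, SecPadding padding, const uint8_t *cipherText, size_t cipherTextLen, uint8_t *plainText, size_t *plainTextLen )
{
    // padding is kept for signature compatibility; the NULL kSecPaddingKey workaround is used instead.
    CFErrorRef error = NULL;
    SecTransformRef decrypt = SecDecryptTransformCreate(key, &error);
    if (error) {
        CFShow(error);
        return (OSStatus)CFErrorGetCode(error);
    }
    // Same rdar://13661366 workaround as in the encrypt path.
    SecTransformSetAttribute(decrypt, kSecPaddingKey, NULL, &error);
    CFDataRef sourceData = CFDataCreate(kCFAllocatorDefault, cipherText, cipherTextLen);
    SecTransformSetAttribute(decrypt, kSecTransformInputAttributeName, sourceData, &error);
    CFDataRef decryptedData = SecTransformExecute(decrypt, &error);
    if (error) {
        CFShow(error);
        CFRelease(sourceData);
        CFRelease(decrypt);
        return (OSStatus)CFErrorGetCode(error);
    }
    if ((unsigned long)CFDataGetLength(decryptedData) > *plainTextLen) {
        return errSecBufferTooSmall;
    }
    *plainTextLen = CFDataGetLength(decryptedData);
    CFDataGetBytes(decryptedData, CFRangeMake(0, *plainTextLen), plainText);
    CFRelease(decryptedData);
    CFRelease(sourceData);
    CFRelease(decrypt);
    return noErr;
}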

Using a specific network interface for a socket in windows

Is there a reliable way in Windows, apart from changing the routing table, to force a newly created socket to use a specific network interface? I understand that bind() to the interface's IP address does not guarantee this.
(Ok second time lucky..)
FYI there's another question here perform connect() on specific network adapter along the same lines...
According to The Cable Guy:
Windows XP and Windows Server® 2003 use the weak host model for sends and receives for all IPv4 interfaces and the strong host model for sends and receives for all IPv6 interfaces. You cannot configure this behavior. The Next Generation TCP/IP stack in Windows Vista and Windows Server 2008 supports strong host sends and receives for both IPv4 and IPv6 by default on all interfaces except the Teredo tunneling interface for a Teredo host-specific relay.
So to answer your question (properly, this time): in Windows XP and Windows Server 2003 it's no for IPv4 but yes for IPv6, and in Windows Vista and Windows Server 2008 it's yes (except in certain circumstances).
Also from http://www.codeguru.com/forum/showthread.php?t=487139:
On Windows, a call to bind() affects card selection only for incoming traffic, not outgoing traffic. Thus, on a client running in a multi-homed system (i.e., more than one interface card), it's the network stack that selects the card to use, and it makes its selection based solely on the destination IP, which in turn is based on the routing table. A call to bind() will not affect the choice of the card in any way.
It's got something to do with something called a "Weak End System" ("Weak E/S") model. Vista changed to a strong E/S model, so the issue might not arise under Vista. But all prior versions of Windows used the weak E/S model.
With a weak E/S model, it's the routing table that decides which card is used for outgoing traffic in a multihomed system.
See if these threads offer some insight:
"Local socket binding on multihomed host in Windows XP does not work" at http://www.codeguru.com/forum/showthread.php?t=452337
"How to connect a port to a specified Networkcard?" at http://www.codeguru.com/forum/showthread.php?t=451117. This thread mentions the CreateIpForwardEntry() function, which (I think) can be used to create an entry in the routing table so that all outgoing IP traffic to a specified server is routed via a specified adapter.
"Working with 2 Ethernet cards" at http://www.codeguru.com/forum/showthread.php?t=448863
"Strange bind behavior on multihomed system" at http://www.codeguru.com/forum/showthread.php?t=452368
Hope that helps!
I'm not sure why you say bind() is not working reliably. Granted, I have not done exhaustive testing, but the following solution worked for me (Win10, Visual Studio 2019). I needed to send a broadcast message via a particular NIC, where multiple NICs might be present on a computer. In the snippet below, I want the broadcast message to go out on the NIC with the IP ending in .202.106.
In summary:
create a socket
create a sockaddr_in address with the IP address of the NIC you want to send FROM
bind the socket to that FROM sockaddr_in
create another sockaddr_in with the IP of your broadcast address (255.255.255.255)
do a sendto, passing the socket created in step 1 and the sockaddr of the broadcast address.
static WSADATA wsaData;
static int ServoSendPort = 8888;
static char ServoSendNetwork[] = "192.168.202.106";
static char ServoSendBroadcast[] = "192.168.255.255";
... < snip >
if ( WSAStartup(MAKEWORD(2,2), &wsaData) != NO_ERROR )
    return false;
// Make a UDP socket
SOCKET ServoSendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
int iOptVal = TRUE;
int iOptLen = sizeof(int);
int RetVal = setsockopt(ServoSendSocket, SOL_SOCKET, SO_BROADCAST, (char*)&iOptVal, iOptLen);
// Bind it to a particular interface
sockaddr_in ServoBindAddr = {0};
ServoBindAddr.sin_family = AF_INET;
ServoBindAddr.sin_addr.s_addr = inet_addr( ServoSendNetwork ); // target NIC
ServoBindAddr.sin_port = htons( ServoSendPort );
int bindRetVal = bind( ServoSendSocket, (sockaddr*) &ServoBindAddr, sizeof(ServoBindAddr) );
if (bindRetVal == SOCKET_ERROR )
{
    int ErrorCode = WSAGetLastError();
    CString errMsg;
    errMsg.Format ( _T("rats! bind() didn't work! Error code %d\n"), ErrorCode );
    OutputDebugString( errMsg );
}
// now create the address to send to...
sockaddr_in ServoSendAddr = {0};
ServoSendAddr.sin_family = AF_INET;
ServoSendAddr.sin_addr.s_addr = inet_addr( ServoSendBroadcast );
ServoSendAddr.sin_port = htons( ServoSendPort );
...
#define NUM_BYTES_SERVO_SEND 20
unsigned char sendBuf[NUM_BYTES_SERVO_SEND];
int BufLen = NUM_BYTES_SERVO_SEND;
ServoSocketStatus = sendto(ServoSendSocket, (char*)sendBuf, BufLen, 0, (SOCKADDR *) &ServoSendAddr, sizeof(ServoSendAddr));
if (ServoSocketStatus == SOCKET_ERROR)
{
    int ErrorCode = WSAGetLastError();
    CString message;
    message.Format(_T("Error transmitting UDP message to Servo Controller: %d."), ErrorCode);
    OutputDebugString(message);
    return false;
}

In Cocoa, how do I set the TTL on a packet?

I want to be able to explicitly set the TTL value for a socket connection using Cocoa. I've been unable to see anything useful in the CoreFoundation docs. Do I need to go even lower to the BSD Sockets to set the TTL value?
Are you writing yet another variant of traceroute? ;)
And yes, the plain C sockets API is your friend: call setsockopt() as usual with the IP_TTL socket option for IPv4 or IPV6_UNICAST_HOPS for IPv6.
There are two possibilities.
1) You can use plain C/Unix-style sockets: first create your socket, then set its options using setsockopt(), including the ones you want to add (you may want to check first whether they are supported), and finally create a CFSocket from it using CFSocketCreateWithNative(). A sketch of this appears at the end of this answer.
2) You use the CF APIs directly, for instance
CFSocketSendData
Sends data over a CFSocket object.
CFSocketError CFSocketSendData (
CFSocketRef s,
CFDataRef address,
CFDataRef data,
CFTimeInterval timeout
);
allows you to set a timeout, which is equivalent to setting the socket option SO_SNDTIMEO.
CFSocketCreateConnectedToSocketSignature
Creates a CFSocket object and opens a connection to a remote socket.
CFSocketRef CFSocketCreateConnectedToSocketSignature (
CFAllocatorRef allocator,
const CFSocketSignature *signature,
CFOptionFlags callBackTypes,
CFSocketCallBack callout,
const CFSocketContext *context,
CFTimeInterval timeout
);
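To make option 1 concrete, here is a minimal, untested sketch (IPv4 only, arbitrary TTL value, no callbacks wired up; the function name is just for illustration):
// Sketch of option 1: set the TTL on a native BSD socket, then wrap it in a CFSocket.
#include <CoreFoundation/CoreFoundation.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <unistd.h>

CFSocketRef CreateTCPSocketWithTTL(int ttl)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return NULL;
    // Set the TTL before handing the descriptor to CoreFoundation.
    if (setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) != 0) {
        close(fd);
        return NULL;
    }
    // Wrap the configured descriptor; add callback types/callout as needed.
    return CFSocketCreateWithNative(kCFAllocatorDefault, fd, kCFSocketNoCallBack, NULL, NULL);
}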
Kind regards.
I used the CocoaAsyncSocket library. It contains a class called AsyncUdpSocket, which is an Obj-C wrapper around the lower-level socket API. I added a method to set the TTL of a socket.
- (BOOL)setTTL:(int)ttlValue {
    int socketFD = SOCKET_NULL;
    if (socket4FD != SOCKET_NULL) {
        socketFD = socket4FD;
    }
    else {
        if (socket6FD != SOCKET_NULL) {
            socketFD = socket6FD;
        }
        else {
            NSLog(@"ERROR: TTL - No Socket Found!");
            return NO;
        }
    }
    int status = setsockopt(socketFD, IPPROTO_IP, IP_TTL, &ttlValue, sizeof(ttlValue));
    if (status == -1) {
        NSLog(@"Error: TTL not set!");
        return NO;
    }
    NSLog(@"TTL: %d", ttlValue);
    return YES;
}
I only tested it for IPv4. For IPv6, try IPV6_UNICAST_HOPS.
