Is there any good and stable online SMS to PDU converter?

I'm looking for a good online converter that works with several modems. The problem I'm dealing with: I can't send an SMS in PDU mode (with a Cinterion BGS-2T). I tried my own library (still a work in progress) and several online converters such as:
http://www.smartposition.nl/resources/sms_pdu.html
http://m2msupport.net/m2msupport/module-tester/
http://hardisoft.ru/soft/otpravka-sms-soobshhenij-v-formate-pdu-teoriya-s-primerami-na-c-chast-1/
The user data seems to be encoded correctly (same result everywhere), but the first part of the TPDU (PDU-Type, TP-MR, ...) varies a little between converters (and never works, damn).
A few notes:
The modem definitely supports PDU mode.
There is credit on the balance.
The modem responds to "AT+CMGS" with ">", and it responds to the PDU string with "\r\nOK\r\n", but it never sends "+CMGS" (and of course my SMS is not received).
If necessary, here is the relevant part of my code:
void get_pdu_string(sms_descriptor* sms, char dst[]) {
    char tempnum[8] = "";
    char* pTemp = dst;
    uint8_t i = 0;
    // SMSC
    //*pTemp++ = 0x00;
    // PDU-Type
    *pTemp++ = (0<<TP_MTIH) | (1<<TP_MTIL); // MTI = 01 - outgoing SMS
    // TP-MR
    *pTemp++ = 0x00; // unnecessary, modem assigns it
    // TP-DA
    *pTemp++ = strlen(sms->to_number); // address length in digits
    *pTemp++ = 0x91;                   // address format (0x91 - international)
    gsm_number_swap(sms->to_number, tempnum);
    // round the digit count up to whole octets
    i = (((*(pTemp-2) & 0x01) == 0x01) ? (*(pTemp-2)+1) : *(pTemp-2)) >> 1;
    strncpy(pTemp, tempnum, i); // address digits, semi-octet encoded
    pTemp += i;
    // TP-PID
    *pTemp++ = 0;
    // TP-DCS
    switch (sms->encoding) {
    case SMS_7BIT_ENC:
        *pTemp++ = 0x00;
        break;
    case SMS_UCS2_ENC:
        *pTemp++ = 0x08;
        break;
    }
    if (sms->flash == 1)
        *(pTemp-1) |= 0x10;
    // TP-VP
    // skipped if not needed
    // TP-UDL
    switch (sms->encoding) {
    case SMS_7BIT_ENC:
        *pTemp++ = strlen(sms->msg);
        break;
    case SMS_UCS2_ENC:
        *pTemp++ = strlen(sms->msg) << 1;
        break;
    }
    // TP-UD
    switch (sms->encoding) {
    case SMS_7BIT_ENC: {
        char packed_msg[140] = "";
        char* pMsg = packed_msg;
        gsm_7bit_enc(sms->msg, packed_msg);
        while (*pMsg != 0)
            *pTemp++ = *pMsg++;
    } break;
    case SMS_UCS2_ENC: {
        wchar_t wmsg[70] = L"";
        wchar_t* pMsg = wmsg;
        strtoucs2(sms->msg, wmsg, METHOD_TABLE);
        while (*pMsg != 0) {
            *pTemp++ = (char)(*pMsg >> 8);
            *pTemp++ = (char)(*pMsg++);
        }
    } break;
    }
    *pTemp = 0x1A; // Ctrl+Z
    return;
}
Example output of my routine:
To: 380933522620
Message: Hello! Test SMS in GSM-7
Encoded PDU string:
00 01 00 0C 81 83 90 33 25 62 02 00 00 18 C8 32 9B FD 0E 81 A8 E5 39 1D 34 6D 4E 41 69 37 E8 38 6D B6 6E 1A
Details about PDU string:
1. 00 - skipped SMSC
2. 01 - PDU-Type
3. 00 - TP-MR
4. 0C - length of To number.
5. 81 - type of number (unknown, also tried 0x91 which is international)
6. 83 90 33 25 62 02 - To number
7. 00 - TP-PID
8. 00 - TP-DCS (GSM 7bit, default SMS class)
9. 18 - TP-UDL (24 characters)
10. C8 32 ... B6 6E - packed message (TP-UD)
11. 1A - Ctrl+Z (terminates the AT+CMGS input; not part of the PDU)
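For reference, the destination number in field 6 comes from semi-octet (nibble-swapped) packing of the digits. A minimal sketch of that step (encode_number is a hypothetical helper for illustration, not the gsm_number_swap from the code above):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Semi-octet encode a phone number: pair the digits, swap the nibbles
   in each pair, and pad an odd trailing digit with 0xF.
   "380933522620" -> 83 90 33 25 62 02 (matching field 6 above). */
static size_t encode_number(const char *digits, unsigned char *out) {
    size_t i, n = strlen(digits), o = 0;
    for (i = 0; i < n; i += 2) {
        unsigned char lo = (unsigned char)(digits[i] - '0');
        unsigned char hi = (i + 1 < n) ? (unsigned char)(digits[i + 1] - '0') : 0x0F;
        out[o++] = (unsigned char)((hi << 4) | lo);
    }
    return o;
}
```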

The problem is fixed. I was sending the message as raw binary instead of a hex string, silly me :(
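In other words, after the ">" prompt the modem expects each PDU byte rendered as two ASCII hex digits, not the raw bytes themselves. A minimal sketch of that conversion (pdu_to_hex is a hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Render raw PDU bytes as the ASCII hex string the modem expects
   after the ">" prompt of AT+CMGS. */
static void pdu_to_hex(const unsigned char *pdu, size_t len, char *out) {
    static const char digits[] = "0123456789ABCDEF";
    for (size_t i = 0; i < len; ++i) {
        out[2 * i]     = digits[pdu[i] >> 4];
        out[2 * i + 1] = digits[pdu[i] & 0x0F];
    }
    out[2 * len] = '\0'; /* the Ctrl+Z (0x1A) is still sent raw afterwards */
}
```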

I've created a balance checker for my OpenWrt routers. It is written in C and is very simple. It works fine for velcom.by and mts.by.

Related

Trouble writing NDEF record to NTAG213 using external NFC reader (but writing to memory works)

I am using the sample provided by Michael Roland in this answer, with the byte command structure modified to match this answer.
After I scan the tag, I receive 90 00 responses from the reader. When I then scan the tag using NFC Tools, though, I don't see that it has an NDEF record (photo). If I examine the memory, I can see my data written starting at block 4, as follows here.
Meanwhile, if I use the Write Tag feature of NFC Tools to write an NDEF message and then scan the tag again, it does work. The memory in the blocks other than those starting at block 4 appears to be identical (photo).
I don't believe it's a capability container issue, as the memory in block 3 is identical whether I write to the tag from my reader or from NFC Tools.
Do I need to do any other kind of NDEF read / check command prior to writing to block 4?
My code is below:
byte[] ndefMessage = new byte[] {
    (byte)0xD1, (byte)0x01, (byte)0x0C, (byte)0x55, (byte)0x01, (byte)0x65,
    (byte)0x78, (byte)0x61, (byte)0x6D, (byte)0x70, (byte)0x6C, (byte)0x65,
    (byte)0x2E, (byte)0x63, (byte)0x6F, (byte)0x6D, (byte)0x2F
};
// wrap into TLV structure
byte[] tlvEncodedData = null;
Log.e("length", String.valueOf(ndefMessage.length));
if (ndefMessage.length < 255) {
    tlvEncodedData = new byte[ndefMessage.length + 3];
    tlvEncodedData[0] = (byte)0x03; // NDEF TLV tag
    tlvEncodedData[1] = (byte)(ndefMessage.length & 0x0FF); // NDEF TLV length (1 byte)
    System.arraycopy(ndefMessage, 0, tlvEncodedData, 2, ndefMessage.length);
    tlvEncodedData[2 + ndefMessage.length] = (byte)0xFE; // Terminator TLV tag
} else {
    tlvEncodedData = new byte[ndefMessage.length + 5];
    tlvEncodedData[0] = (byte)0x03; // NDEF TLV tag
    tlvEncodedData[1] = (byte)0xFF; // NDEF TLV length (3 byte, marker)
    tlvEncodedData[2] = (byte)((ndefMessage.length >>> 8) & 0x0FF); // NDEF TLV length (3 byte, hi)
    tlvEncodedData[3] = (byte)(ndefMessage.length & 0x0FF); // NDEF TLV length (3 byte, lo)
    System.arraycopy(ndefMessage, 0, tlvEncodedData, 4, ndefMessage.length);
    tlvEncodedData[4 + ndefMessage.length] = (byte)0xFE; // Terminator TLV tag
}
// fill up with zeros to block boundary:
tlvEncodedData = Arrays.copyOf(tlvEncodedData, (tlvEncodedData.length / 4 + 1) * 4);
for (int i = 0; i < tlvEncodedData.length; i += 4) {
    byte[] command = new byte[] {
        (byte)0xFF, // WRITE
        (byte)0xD6,
        (byte)0x00,
        (byte)((4 + i / 4) & 0x0FF), // block address
        (byte)0x04,
        0, 0, 0, 0
    };
    System.arraycopy(tlvEncodedData, i, command, 5, 4);
    ResponseAPDU answer = cardChannel.transmit(new CommandAPDU(command));
    byte[] response = answer.getBytes();
    writeLogWindow("response: " + byteArrayToHexString(response));
}
I believe the problem is that Michael Roland's answer has a bug in it:
D1 01 0C 55 01 65 78 61 6D 70 6C 65 2E 63 6F 6D 2F is not a valid NDEF message.
If you look at the various specs for NDEF at https://github.com/haldean/ndef/tree/master/docs (specifically NFCForum-TS-RTD_URI_1.0.pdf and NFCForum-TS-NDEF_1.0.pdf), his example "http://www.example.com/" is actually made up of "http://www.", which has a type code of 01, plus the remaining 12 characters of the URL.
Thus the payload length is 13 (1 + 12) bytes, i.e. 0D, whereas his message:
D1 01 0C 55 01 65 78 61 6D 70 6C 65 2E 63 6F 6D 2F
only counts the second part of the URL and not the prefix, so it is one byte too short.
This is confirmed if you write a record for that URL using the NFC Tools app or NXP's TagWriter app; both generate a message of
D1 01 0D 55 01 65 78 61 6D 70 6C 65 2E 63 6F 6D 2F
So try using this in your code:
byte[] ndefMessage = new byte[] {
(byte)0xD1, (byte)0x01, (byte)0x0D, (byte)0x55, (byte)0x01, (byte)0x65, (byte)0x78,
(byte)0x61, (byte)0x6D, (byte)0x70, (byte)0x6C, (byte)0x65, (byte)0x2E, (byte)0x63,
(byte)0x6F, (byte)0x6D, (byte)0x2F
};
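The length rule is easy to check in code. The sketch below builds a short-record NDEF URI record and counts the prefix byte in the payload length (build_uri_record is a hypothetical helper for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Build a short NDEF URI record: D1 = MB|ME|SR set, TNF = 0x01
   (well-known); type length 1 ("U"); payload length = 1 prefix byte
   + the URI remainder. */
static size_t build_uri_record(unsigned char prefix_code, const char *rest,
                               unsigned char *out) {
    size_t rest_len = strlen(rest);
    out[0] = 0xD1;                          /* record header */
    out[1] = 0x01;                          /* type length */
    out[2] = (unsigned char)(1 + rest_len); /* payload incl. prefix byte */
    out[3] = 'U';                           /* URI record type */
    out[4] = prefix_code;                   /* 0x01 = "http://www." */
    memcpy(out + 5, rest, rest_len);
    return 5 + rest_len;
}
```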

A very strange phenomenon when decrypting an AES-128-GCM TLS packet using OpenSSL in Ruby

I'm implementing my own SSL/TLS library for learning purposes. These days, when I send an encrypted message to the server, I receive a "bad mac" alert. Today I used this question's record to debug my code.
This is where the strange thing happened. I used his server_random, client_random, and master secret to generate client_write_key and client_write_iv, and I got the same output as he did. However, when I use the client_write_key and client_write_iv to decrypt the "GET / HTTP/1.0\n" message, I get this output:
Q▒W▒ ▒7▒3▒▒▒
This is completely different from the correct message! My full debug output is:
C:/Users/ayanamists/.babun/cygwin/home/ayanamists/my_ssl_in_ruby/encrypt_handler/aes_gcm_handler.rb:87:in `final': OpenSSL::Cipher::CipherError
from C:/Users/ayanamists/.babun/cygwin/home/ayanamists/my_ssl_in_ruby/encrypt_handler/aes_gcm_handler.rb:87:in `recv_decrypt'
from decrypt.rb:72:in `<main>'
client write key is
4b 11 9d fb fc 93 0a be 13 00 30 bd 53 c3 bf 78
nonce is
20 29 ca e2 c9 1d e0 05 e2 ae 50 a8
what to be decrypt:
a5 7a be e5 5c 18 36 67 b1 36 34 3f ee f4 a3 87 cb 7c f8 30 30
the tag is
a4 7e 23 0a f2 68 37 8c 4f 33 c8 b5 ba b3 d2 6d
the additional data is
00 00 00 00 00 00 00 01 17 03 03 00 00 15
decrypter is
OpenSSL::Cipher::AES
Q▒W▒ ▒7▒3▒▒▒
output is
bd 8c e8 87 b7 ab c6 f7 eb 31 fd cb 65 4c d4 a9 16 ae 1b ca da
the correct is
47 45 54 20 2f 20 48 54 54 50 2f 31 2e 30 0a
You can see that my key and nonce are no different from his key and nonce, so why is the result wrong?
The code is:
require_relative 'encrypt_message_handler'
require 'pp'

class AES_CGM_Handler
  include EncryptMessageHandler
  attr_accessor :send_cipher, :recv_cipher, :send_implicit, :recv_implicit,
                :send_seq_num, :recv_seq_num

  def initialize(server_random, client_random, certificate = '', length = 0,
                 usage = 'client', version_major = 0x03, version_minor = 0x03)
    if block_given?
      @master = yield
      @version = [0x03, 0x03]
    else
      super(server_random, client_random, certificate)
    end
    # the nonce of AES_GCM is defined by:
    # +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    # + 0 1 2 3 | 0 1 2 3 4 5 6 7 +
    # + salt    | nonce_explicit  +
    # +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    # salt is server_write_iv or client_write_iv, so you need 4 * 2
    # and length needs /8 (bits -> bytes) and * 2 (both server and client), so it's length/4
    key_block = (length/4 + 4 * 2).tls_prf(@master, "key expansion", server_random + client_random)
    arr = key_block.unpack "a#{length/8}a#{length/8}a4a4"
    client_write_key = arr[0]
    server_write_key = arr[1]
    client_write_iv = arr[2]
    server_write_iv = arr[3]
    @send_cipher = OpenSSL::Cipher::AES.new(length, :GCM).encrypt
    @recv_cipher = OpenSSL::Cipher::AES.new(length, :GCM).decrypt
    if usage == 'client'
      @send_cipher.key = client_write_key
      @send_implicit = client_write_iv
      @recv_cipher.key = server_write_key
      @recv_implicit = server_write_iv
      puts "server write key is #{server_write_key.to_hex}"
    elsif usage == 'server'
      @send_cipher.key = server_write_key
      @send_implicit = server_write_iv
      @recv_cipher.key = client_write_key
      @recv_implicit = client_write_iv
      puts "client write key is\n #{client_write_key.to_hex}"
    else
      raise "AES_GCM_HANDLER: BAD_ARGUMENT"
    end
    @send_seq_num = 0
    @recv_seq_num = 0
  end

  def send_encrypt(type = 22, seqence = '')
    nonce_explicit = OpenSSL::Random.random_bytes(8)
    nonce = @send_implicit + nonce_explicit
    @send_cipher.iv = nonce
    length = seqence.length
    # the handling of seq_num may be wrong
    @send_cipher.auth_data = [0, @send_seq_num,
                              type, @version[0], @version[1], 0, length].pack("NNCCCCn")
    encrypt = @send_cipher.update(seqence) + @send_cipher.final
    encrypt = encrypt + @send_cipher.auth_tag
    return encrypt
  end

  def recv_decrypt(type = 22, sequence = '', seq_num = 0)
    if seq_num != 0
      @recv_seq_num = seq_num
    end
    template = 'a8a*'
    arr = sequence.unpack(template)
    nonce_explicit = arr[0]
    length = sequence.length - 8 - 16
    sequence = arr[1]
    encrypted = sequence[0, sequence.length - 16]
    @recv_cipher.auth_tag = sequence[sequence.length - 16, sequence.length]
    @recv_cipher.iv = @recv_implicit + nonce_explicit
    puts "nonce is\n #{(@recv_implicit + nonce_explicit).to_hex}"
    puts "what to be decrypt: \n #{encrypted.to_hex}"
    puts "the tag is \n #{sequence[sequence.length - 16, sequence.length].to_hex}"
    @recv_cipher.auth_data =
      [0, @recv_seq_num, type, @version[0], @version[1], 0, length].pack("NNCCCCn")
    puts "the additional data is\n #{([0, @recv_seq_num, type, @version[0], @version[1], 0, length].pack("NNCCCCn")).to_hex}"
    puts "decrypter is\n #{@recv_cipher.class}"
    puts @recv_cipher.update(encrypted)
    puts "output is\n #{@recv_cipher.update(encrypted).to_hex}"
    puts "the correct is\n #{"GET / HTTP/1.0\n".to_hex}"
    decrypt = @recv_cipher.update(encrypted) + @recv_cipher.final
    return decrypt
  end
end
Please help me solve this problem; I'll be very grateful!
I have finally solved this question!
If anyone wants to use this test vector, do not forget to add "789c", the zlib compression header.
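Two details worth double-checking in the dump above. First, recv_decrypt calls @recv_cipher.update(encrypted) three times on the same cipher object; GCM is a stream mode, so each call advances the keystream and only the first call's output corresponds to the ciphertext. Second, per RFC 5246 the AEAD additional data for TLS 1.2 is seq_num(8) || type(1) || version(2) || length(2), 13 bytes in total, while the printed AAD is 14 bytes: the pack format "NNCCCCn" emits a stray 0x00 between the version and the length. A sketch of the expected layout:

```c
#include <assert.h>
#include <stdint.h>

/* Assemble the 13-byte TLS 1.2 AEAD additional data (RFC 5246, 6.2.3.3):
   seq_num(8) || content type(1) || version(2) || plaintext length(2). */
static void tls12_gcm_aad(uint64_t seq, uint8_t type, uint8_t vmaj,
                          uint8_t vmin, uint16_t len, uint8_t out[13]) {
    for (int i = 0; i < 8; ++i)
        out[i] = (uint8_t)(seq >> (8 * (7 - i))); /* big-endian sequence number */
    out[8]  = type;                               /* 0x17 = application data */
    out[9]  = vmaj;
    out[10] = vmin;
    out[11] = (uint8_t)(len >> 8);                /* length of the PLAINTEXT */
    out[12] = (uint8_t)(len & 0xFF);
}
```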

NSScanner scanCharactersFromSet fails to find leading tabs

A simple routine to find the number of leading tabs using NSScanner, except it finds none of the tabs. The code goes to the 'else' branch of the scanCharactersFromSet statement. I tried changing asciiTabRange.location to 0x49 (ASCII 'I') and changed the input string to start with 'I' instead of a tab, and it finds the leading 'I', so the problem seems specific to leading tabs. The lldb output shows the string starts with a tab, and the [asciiTab characterIsMember:] call returns YES.
{
    self = [super init];
    if (self) {
        if ([recordString length] != 0) {
            NSScanner* theScanner = [NSScanner scannerWithString: recordString];
            NSString* testConfirmString = [theScanner string];
            const char* testConfirmCStr = [testConfirmString cStringUsingEncoding: NSASCIIStringEncoding];
            if (testConfirmCStr != NULL);
            NSRange asciiTabRange;
            asciiTabRange.location = 0x09;
            asciiTabRange.length = 1;
            NSCharacterSet* asciiTab = [NSCharacterSet characterSetWithRange: asciiTabRange];
            if ([asciiTab characterIsMember: 0x09] == YES) {
                testConfirmCStr = 0;
            }
            NSString* tabString = nil;
            if ([theScanner scanCharactersFromSet: asciiTab
                                       intoString: &tabString] == YES) {
                tabLevel = [tabString length];
            } else {
                tabLevel = 0;
            }
            itemText = [recordString substringFromIndex: tabLevel];
        } else {
            // ?
        }
        children = [[NSMutableArray alloc] init];
    }
    return self;
}
(lldb) x -c 32 testConfirmCStr
0x600000043a91: 09 49 6e 64 75 73 74 72 79 3a 09 54 65 73 74 53 .Industry:.TestS
0x600000043aa1: 65 76 65 72 61 6c 00 00 00 00 00 00 00 00 00 90 everal..........
By default NSScanner ignores whitespace and newlines; the whitespace character set includes the tab character (U+0009). Try removing this default:
NSScanner* theScanner = [NSScanner scannerWithString: recordString];
theScanner.charactersToBeSkipped = nil;
It's also worth being aware of the NSScanner property scanLocation; in simple cases it may help you count the actual tabs:
NSString *recordString = @"\t\t\tHello\t\tGoodbye\tHello";
NSScanner* theScanner = [NSScanner scannerWithString: recordString];
theScanner.charactersToBeSkipped = nil;
NSRange asciiTabRange;
asciiTabRange.location = 0x09;
asciiTabRange.length = 1;
NSCharacterSet* asciiTab = [NSCharacterSet characterSetWithRange: asciiTabRange];
NSString *tabString;
unsigned long indexOfFirstTabInRun = 0;
unsigned long tabsInRun = 0;
while (!theScanner.isAtEnd) {
    indexOfFirstTabInRun = (unsigned long)theScanner.scanLocation;
    if ([theScanner scanCharactersFromSet: asciiTab intoString: &tabString]) {
        tabsInRun = (unsigned long)theScanner.scanLocation - indexOfFirstTabInRun;
        NSLog(@"tabCount: %lu - starting at index %lu", tabsInRun, indexOfFirstTabInRun);
    } else {
        [theScanner scanCharactersFromSet: asciiTab.invertedSet intoString: nil];
    }
}

Message format websocket (Arduino + Esp8266)

Update
Almost there. I think I can receive messages now. When the code is readable, I will post it. Trying to send as well...
Original question
I'm trying to connect my ESP8266 (@38400 baud, a $3.50 wifi chip :)) to a WebSocket. The chip is connected to an Arduino Pro Mini. This setup is OK and it works.
I am able to do a handshake, thanks to some existing code (https://github.com/ejeklint/ArduinoWebsocketServer).
So this is what the program has to do:
Handle handshake: done
Receive message: received some unknown chars
Sending: (once I'm able to receive, I will work out how to send)
I'm testing websocket with:
http://www.websocket.org/echo.html
connecting with my wifi module
ws://192.168.1.104:8000
When I send the message "aaaa" 3 times to my Arduino, I receive this:
+IPD,0,10: | | | q | | b | k | | c | | |
+IPD,0,10: | | | ¦ | ¡ | 0 | P | Ç | À | Q | 1 |
+IPD,0,10: | | | _ | ò | ± | ? | > | | Ð | ^ | |
How can I decode this?
#include "sha1.h"
#include "Base64.h"
#include <SoftwareSerial.h>
#include <MemoryFree.h>

SoftwareSerial debug(8, 9); // RX, TX

void setup() {
    Serial.begin(38400);
    debug.begin(38400);
    delay(50);
    debug.println("start");
    Serial.println("AT+RST");
    delay(5000);
    Serial.println("AT+CWMODE=1"); // NO CHANGE
    delay(1500);
    Serial.find("OK");
    Serial.println("AT+CIPMUX=1");
    Serial.find("OK");
    delay(3000);
    Serial.println("AT+CIPSERVER=1,8000");
    boolean server = Serial.find("OK");
    delay(3000);
    Serial.println("AT+CIFSR"); // Display the ip please
    boolean r = readLines(4);
    debug.println("end setup");
    debug.println(server);
    boolean found = false;
    while (!found) // wait for the link
        found = Serial.find("Link");
    debug.println("link built, end setup");
}

void loop() {
    String key = "";
    boolean isKey = Serial.find("Key: ");
    if (isKey) {
        debug.println("Key found!");
        while (true) {
            if (Serial.available()) {
                char c = (char)Serial.read();
                if (c == '=') {
                    doHandshake(key + "==");
                    key = "";
                    break;
                }
                if (c != '\r' && c != '\n') { // was ||, which is always true
                    key = key + c;
                }
            }
        }
        // _________________________ PROBLEMO ____________________________________
        while (true) { // So far so good. Handshake done. Now wait for the message
            if (Serial.available()) {
                char c = (char)Serial.read();
                debug.print(c);
                debug.print(" | ");
            }
        }
        // _________________________ /PROBLEMO ____________________________________
    }
}

boolean readLines(int lines) {
    boolean found = false;
    int count = 0;
    while (count < lines) {
        if (Serial.available()) {
            char c = (char)Serial.read();
            if (c != '\r') {
                debug.write(c);
            } else {
                count++;
            }
        }
    }
    return true;
}

bool doHandshake(String k) {
    debug.println("do handshake: " + k);
    char bite;
    char temp[128];
    char key[80];
    memset(temp, '\0', sizeof(temp));
    memset(key, '\0', sizeof(key));
    byte counter = 0;
    int myCo = 0;
    while ((bite = k.charAt(myCo++)) != 0) {
        key[counter++] = bite;
    }
    strcat(key, "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"); // Add the omni-valid GUID
    Sha1.init();
    Sha1.print(key);
    uint8_t *hash = Sha1.result();
    base64_encode(temp, (char*)hash, 20);
    debug.print(temp);
    int cc = -1;
    while (temp[cc++] != '\0') {} // cc is the length of the return key
    cc = 165 + cc; // length of return key + 165 bytes for the rest of the header
    Serial.print("AT+CIPSEND=0,");
    Serial.println(129); // +30 // was 129
    boolean found = false;
    while (!found)
        found = Serial.find(">"); // Wait until I can send
    Serial.print("HTTP/1.1 101 Switching Protocols\r\n");
    Serial.print("Upgrade: websocket\r\n");
    Serial.print("Connection: Upgrade\r\n");
    Serial.print("Sec-WebSocket-Accept: ");
    Serial.print(temp);
    Serial.print("\r\n\r\n");
    return true;
}
I have no experience with websockets, but I think websockets use UTF-8 while the Arduino terminal uses ASCII. I do not see any conversion between UTF-8 and ASCII in your code.
I can now receive messages from the websocket on the Arduino, but sending is still not working :(.
boolean getFrame() {
    debug.println("getFrame()");
    byte bite;
    unsigned short payloadLength = 0;
    bite = Serial.read();
    frame.opcode = bite & 0xf;   // Opcode
    frame.isFinal = bite & 0x80; // Final frame?
    bite = Serial.read();
    frame.length = bite & 0x7f;  // Length of payload
    frame.isMasked = bite & 0x80;
    // Frame complete!
    if (!frame.isFinal) {
        return false;
    }
    // First check if the frame size is within our limits.
    if (frame.length > 126) {
        return false;
    }
    // If the length part of the header is 126, it means it contains an extended length field.
    // The next two bytes contain the actual payload size, so we need to get the "true" length.
    if (frame.length == 126) {
        byte exLengthByte1 = Serial.read();
        byte exLengthByte2 = Serial.read();
        payloadLength = (exLengthByte1 << 8) + exLengthByte2;
    }
    // If frame length is less than 126, that is the size of the payload.
    else {
        payloadLength = frame.length;
    }
    // Check if our buffer can store the payload.
    if (payloadLength > MAX_RECEIVE_MESSAGE_SIZE) {
        debug.println("too big");
        return false;
    }
    // Client should always send a mask, but check just to be sure
    if (frame.isMasked) {
        frame.mask[0] = Serial.read();
        frame.mask[1] = Serial.read();
        frame.mask[2] = Serial.read();
        frame.mask[3] = Serial.read();
    }
    // Get message bytes and unmask them if necessary
    for (int i = 0; i < payloadLength; i++) {
        if (frame.isMasked) {
            frame.data[i] = Serial.read() ^ frame.mask[i % 4];
        } else {
            frame.data[i] = Serial.read();
        }
    }
    for (int i = 0; i < payloadLength; i++) {
        debug.print(frame.data[i]);
        if (frame.data[i] == '\r') // was '/r', a multi-character constant
            break;
    }
    return true;
}

// !!!!!!!!!! NOT WORKING
boolean sendMessage(char *data, byte length) {
    Serial.print((uint8_t)0x1);    // Txt frame opcode
    Serial.print((uint8_t)length); // Length of data
    for (int i = 0; i < length; i++) {
        Serial.print(data[i]);
    }
    delay(1);
    return true;
}
See https://github.com/zoutepopcorn/esp8266-Websocket/blob/master/arduino_websocket.ino
The only problem now is that the websocket frame format from the Arduino to the websocket is not OK :(. But I think this is a separate issue / question.
WebSocket connection to 'ws://192.168.1.101:8000/?encoding=text' failed: One or more reserved bits are on: reserved1 = 0, reserved2 = 1, reserved3 = 1
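A plausible cause of that exact error: Arduino's Serial.print((uint8_t)0x1) sends the decimal text "1", i.e. ASCII 0x31 (bit pattern 0011 0001), which the browser parses as a frame header with RSV2 and RSV3 set, matching the message above. Raw header bytes need Serial.write, and the first byte of a final text frame should be 0x81 (FIN + opcode 1). A minimal header sketch under those assumptions (ws_text_header is a hypothetical helper):

```c
#include <assert.h>
#include <stddef.h>

/* Build the 2-byte header of an unmasked single-frame text message
   (RFC 6455): 0x81 = FIN + text opcode, then the 7-bit payload length
   (valid for payloads up to 125 bytes). On the Arduino these bytes must
   go out with Serial.write, not Serial.print, which would send their
   decimal text representation instead. */
static size_t ws_text_header(unsigned char payload_len, unsigned char out[2]) {
    out[0] = 0x81;               /* FIN set, opcode 0x1 (text) */
    out[1] = payload_len & 0x7F; /* no mask bit for server-to-client frames */
    return 2;
}
```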
What you are looking at is the first WebSocket frame ( | | | q | | b | k | | c | | |) concatenated with the ESP8266's +IPD header (+IPD,0,10:). The data you are delimiting with pipes (|) is unintelligible because it is neither ASCII nor UTF-8. You must display the data after the last colon (:) as binary; then it should make complete sense. I was doing exactly the same thing, and it was only when I displayed the whole thing as binary that I "got it".
I was using the "Web Sockets rock" demo from the web. It's an echo demo that just sends "Web Sockets rock" to a server that you nominate. I changed the server address to the IP of my ESP8266 and started to look at the frames.
I did a little analysis myself (same as you did) to see what the ESP8266 would send back after a successful handshake. (I got the handshake working first.)
Here is the 'post handshake' listing straight out of TeraTerm:
+IPD,0,21:r¨$v%ÍF%ËVÃW (NOTE: garbage after the :)
I expected to find "Web Sockets rock" somewhere in there.
Here is the listing converted to binary, extracted from my receive buffer:
0 2B 0010 1011
1 49 0100 1001
2 50 0101 0000
3 44 0100 0100
4 2C 0010 1100
5 30 0011 0000
6 2C 0010 1100
7 32 0011 0010 (Ascii for 21 bytes to follow)
8 31 0011 0001 (Ascii for 21 bytes to follow)
9 3A 0011 1010 (Colon)
10 -7F 1000 0001 (Start of actual FRAME)
11 -71 1000 1111
12 72 0111 0010
13 -58 1010 1000
14 24 0010 0100
15 76 0111 0110
16 25 0010 0101
17 -33 1100 1101
18 46 0100 0110
19 25 0010 0101
20 1D 0001 1101
21 -35 1100 1011
22 4F 0100 1111
23 13 0001 0011
24 6 0000 0110
25 -78 1000 1000
26 56 0101 0110
27 19 0001 1001
28 11 0001 0001
29 -3D 1100 0011
30 57 0101 0111
Description of the fields (starting from the first byte after the colon, 81):
// First byte has the FIN bit and frame type opcode = text
// Second byte has the mask bit and payload length
// Next four bytes are the masking key
// So 6 bytes total of overhead
// The size of the payload in this case is "F" = 15 (the 4th nibble)
// So the total number of bytes is (6 + 15) = 21
// The first byte says: FIN bit is set. This is the last frame in the sequence. The opcode is 1 = TEXT data.
// The second byte says: MASK bit is set. The following data is masked. The data length is "F" = 15.
// The 3rd, 4th, 5th, and 6th bytes are the masking key. In this case 72, A8, 24, 76.
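The unmasking step those fields imply is a simple XOR with the 4-byte key; a minimal sketch (ws_unmask is a hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>

/* Unmask a client-to-server WebSocket payload in place (RFC 6455, 5.3):
   each payload byte is XORed with mask[i % 4]. Applying it twice
   restores the original bytes, since XOR is its own inverse. */
static void ws_unmask(unsigned char *payload, size_t len,
                      const unsigned char mask[4]) {
    for (size_t i = 0; i < len; ++i)
        payload[i] ^= mask[i % 4];
}
```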

Most elegant way to expand card hand suits

I'm storing 4-card hands in a way to treat hands with different suits the same, e.g.:
9h 8h 7c 6c
is the same as
9d 8d 7h 6h
since you can replace one suit with another and have the same thing. It's easy to turn these into a unique representation using wildcards for suits. The previous example would become:
9A 8A 7B 6B
My question is - what's the most elegant way to turn the latter back into a list of the former? For example, when the input is 9A 8A 7B 6B, the output should be:
9c 8c 7d 6d
9c 8c 7h 6h
9c 8c 7s 6s
9h 8h 7d 6d
9h 8h 7c 6c
9h 8h 7s 6s
9d 8d 7c 6c
9d 8d 7h 6h
9d 8d 7s 6s
9s 8s 7d 6d
9s 8s 7h 6h
9s 8s 7c 6c
I have some ugly code that does this on a case-by-case basis depending on how many unique suits there are. It won't scale to hands with more cards. Also in a situation like:
7A 7B 8A 8B
it will have duplicates, since in this case A=c and B=d is the same as A=d and B=c.
What's an elegant way to solve this problem efficiently? I'm coding in C, but I can convert higher-level code down to C.
There are only 4 suits, so the space of possible substitutions is really small: 4! = 24 cases.
In this case, I don't think it is worth trying to come up with something especially clever.
Just parse a string like "7A 7B 8A 8B", count the number of different letters in it, and based on that number generate substitutions from a precomputed set.
1 letter -> 4 possible substitutions: c, d, h, or s
2 letters -> 12 substitutions, as in your example.
3 or 4 letters -> 24 substitutions.
Then sort the set of substitutions and remove duplicates. You have to sort the tokens in every string like "7c 8d 9d 9s" and then sort the array of strings to detect duplicates, but that shouldn't be a problem. It's good to have the patterns like "7A 7B 8A 8B" sorted too (the tokens like "7A", "8B" in ascending order).
EDIT:
An alternative to sorting might be to detect identical sets of ranks associated with two or more suits and take that into account when generating substitutions, but I think that is more complicated. You would have to create a set of ranks for each letter appearing in the pattern string.
For example, for the string "7A 7B 8A 8B", the set {7, 8} is associated with the letter A, and the same set is associated with the letter B. Then you have to look for identical sets associated with different letters. In most cases those sets will have just one element, but they might have two, as in the example above. Letters associated with the same set are interchangeable. You can have the following situations:
1 letter no duplicates -> 4 possible substitutions c, d, h, or s
2 letters no duplicates -> 12 substitutions.
2 letters, 2 letters interchangeable (identical sets for both letters) -> 6 substitutions.
3 letters no duplicates -> 24 substitutions.
3 letters, 2 letters interchangeable -> 12 substitutions.
4 letters no duplicates -> 24 substitutions.
4 letters, 2 letters interchangeable -> 12 substitutions.
4 letters, 3 letters interchangeable -> 4 substitutions.
4 letters, 2 pairs of interchangeable letters -> 6 substitutions.
4 letters, 4 letters interchangeable -> 1 substitution.
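The counts in the list above all follow one formula: P(4, k) = 4!/(4-k)! ordered suit assignments for k distinct letters, divided by g! for each group of g interchangeable letters. A sketch (substitution_count is a hypothetical helper for illustration):

```c
#include <assert.h>

/* Number of distinct substitutions: 4!/(4-k)! ordered suit choices for
   k letters, divided by g! for every group of g interchangeable letters.
   group_sizes lists the size of each interchangeable group (size >= 2). */
static int substitution_count(int k, const int *group_sizes, int ngroups) {
    static const int perms[5] = {1, 4, 12, 24, 24}; /* 4!/(4-k)! for k = 0..4 */
    int n = perms[k];
    for (int i = 0; i < ngroups; ++i)
        for (int g = 2; g <= group_sizes[i]; ++g)
            n /= g; /* divide by group_sizes[i]! step by step */
    return n;
}
```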
I think a generic permutation function that takes an array arr and an integer n and returns all possible permutations of n elements of that array would be useful here.
Find how many unique suits exist in the hand. Then generate all possible permutations of that many elements from the actual suits [c, d, h, s]. Finally, go through each permutation of suits and assign each unknown letter [A, B, C, D] in the hand to the permuted values.
The following Ruby code takes a given hand and generates all suit permutations. The heaviest work is done by the Array#permutation(n) method, which should simplify things a lot for a corresponding C program as well.
# all 4 suits needed for generating permutations
suits = ["c", "d", "h", "s"]
# current hand
hand = "9A 8A 7B 6B"
# find number of unique suits in the hand. In this case it's 2 => [A, B]
unique_suits_in_hand = hand.scan(/.(.)\s?/).uniq.length
# generate all possible permutations of 2 suits, and for each permutation
# do letter assignments in the original hand
# tr is a translation function which maps corresponding letters in both strings.
# it doesn't matter which unknowns are used (A, B, C, D) since they
# will be replaced consistently.
# After suit assignments are done, we split the cards in hand, and sort them.
possible_hands = suits.permutation(unique_suits_in_hand).map do |perm|
hand.tr("ABCD", perm.join ).split(' ').sort
end
# Remove all duplicates
p possible_hands.uniq
The above code outputs
9c 8c 7d 6d
9c 8c 7h 6h
9c 8c 7s 6s
9d 8d 7c 6c
9d 8d 7h 6h
9d 8d 7s 6s
9h 8h 7c 6c
9h 8h 7d 6d
9h 8h 7s 6s
9s 8s 7c 6c
9s 8s 7d 6d
9s 8s 7h 6h
Represent suits as sparse arrays or lists, numbers as indexes, and hands as associative arrays.
In your example:
H [A[07080000] B[07080000] C[00000000] D[00000000] ] (place for four cards)
To get the "real" hands, always apply all 24 permutations (fixed time), so you don't have to care how many cards your hand has. A,B,C,D -> c,d,h,s with the following "trick": always store in alphabetical order:
H1 [c[xxxxxx] d[xxxxxx] s[xxxxxx] h[xxxxxx]]
Since hands are associative arrays, duplicated permutations do not generate two different output hands.
#include <stdio.h>
#include <stdlib.h>

const int RANK = 0;
const int SUIT = 1;
const int NUM_SUITS = 4;
const char STANDARD_SUITS[] = "dchs";
int usedSuits[] = {0, 0, 0, 0};
const char MOCK_SUITS[] = "ABCD";
const char BAD_SUIT = '*';

char pullSuit(int i) {
    if (usedSuits[i] > 0) {
        return BAD_SUIT;
    }
    ++usedSuits[i];
    return STANDARD_SUITS[i];
}

void unpullSuit(int i) {
    --usedSuits[i];
}

int indexOfSuit(char suit, const char suits[]) {
    int i;
    for (i = 0; i < NUM_SUITS; ++i) {
        if (suit == suits[i]) {
            return i;
        }
    }
    return -1;
}

int legitimateSuits(const char suits[]) {
    return indexOfSuit(BAD_SUIT, suits) == -1;
}

int distinctSuits(const char suits[]) {
    int i, j;
    for (i = 0; i < NUM_SUITS; ++i) {
        for (j = 0; j < NUM_SUITS; ++j) {
            if (i != j && suits[i] == suits[j]) {
                return 0;
            }
        }
    }
    return 1;
}

void printCards(char* mockCards[], int numMockCards, const char realizedSuits[]) {
    int i;
    for (i = 0; i < numMockCards; ++i) {
        char* mockCard = mockCards[i];
        char rank = mockCard[RANK];
        char mockSuit = mockCard[SUIT];
        int idx = indexOfSuit(mockSuit, MOCK_SUITS);
        char realizedSuit = realizedSuits[idx];
        printf("%c%c ", rank, realizedSuit);
    }
    printf("\n");
}

/*
 * Example usage:
 * char* mockCards[] = {"9A", "8A", "7B", "6B"};
 * expand(mockCards, 4);
 */
void expand(char* mockCards[], int numMockCards) {
    int i, j, k, l;
    for (i = 0; i < NUM_SUITS; ++i) {
        char a = pullSuit(i);
        for (j = 0; j < NUM_SUITS; ++j) {
            char b = pullSuit(j);
            for (k = 0; k < NUM_SUITS; ++k) {
                char c = pullSuit(k);
                for (l = 0; l < NUM_SUITS; ++l) {
                    char d = pullSuit(l);
                    char realizedSuits[] = {a, b, c, d};
                    int legitimate = legitimateSuits(realizedSuits);
                    if (legitimate) {
                        int distinct = distinctSuits(realizedSuits);
                        if (distinct) {
                            printCards(mockCards, numMockCards, realizedSuits);
                        }
                    }
                    unpullSuit(l);
                }
                unpullSuit(k);
            }
            unpullSuit(j);
        }
        unpullSuit(i);
    }
}

int main() {
    char* mockCards[] = {"9A", "8A", "7B", "6B"};
    expand(mockCards, 4);
    return 0;
}
