I have some problems with the documentation.
Here is my program:
package main

import (
	"bytes"
	"code.google.com/p/go.crypto/openpgp"
	"encoding/base64"
	"fmt"
)

func main() {
	var entity *openpgp.Entity
	entity, err := openpgp.NewEntity("bussiere", "test", "bussiere#gmail.com", nil)
	if err != nil {
	}
	var (
		buffer bytes.Buffer
	)
	entity.SerializePrivate(&buffer, nil)
	data := base64.StdEncoding.EncodeToString([]byte(buffer.String()))
	fmt.Printf("%q\n", data)
	entity.Serialize(&buffer)
	data2 := base64.StdEncoding.EncodeToString([]byte(buffer.String()))
	fmt.Printf("%q\n", data2)
	entity.PrivateKey.Serialize(&buffer)
	data3 := base64.StdEncoding.EncodeToString([]byte(buffer.String()))
	fmt.Printf("%q\n", data3)
	entity.PrimaryKey.Serialize(&buffer)
	data4 := base64.StdEncoding.EncodeToString([]byte(buffer.String()))
	fmt.Printf("%q\n", data4)
	//fmt.Printf(buffer.String())
}
Here are the data:
https://gist.github.com/bussiere/5159890
here is the code on gist:
https://gist.github.com/bussiere/5159897
What is the public key?
And how do I use it?
And how do I generate a bigger key?
UPDATE: This issue has been fixed: see here.
So the solution/description below is no longer appropriate.
---------------- legacy answer starts below --------------------
Concerning your question of how to build keys of another size: it's impossible.
I ran into the exact same problem; look at the source code of the NewEntity function:
const defaultRSAKeyBits = 2048

// NewEntity returns an Entity that contains a fresh RSA/RSA keypair with a
// single identity composed of the given full name, comment and email, any of
// which may be empty but must not contain any of "()<>\x00".
// If config is nil, sensible defaults will be used.
func NewEntity(name, comment, email string, config *packet.Config) (*Entity, error) {
	currentTime := config.Now()

	uid := packet.NewUserId(name, comment, email)
	if uid == nil {
		return nil, errors.InvalidArgumentError("user id field contained invalid characters")
	}
	signingPriv, err := rsa.GenerateKey(config.Random(), defaultRSAKeyBits)
	if err != nil {
		return nil, err
	}
	encryptingPriv, err := rsa.GenerateKey(config.Random(), defaultRSAKeyBits)
	if err != nil {
		return nil, err
	}
defaultRSAKeyBits is a package-level unexported constant, so there is no way to modify this behavior.
I ended up copying the whole function out, adding a parameter for the key bits, and keeping it in my codebase.
If someone has a better solution, I'd be glad to hear it.
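For completeness, after the fix mentioned in the update the key size can be configured instead of copying the function. Here is a minimal sketch, assuming the golang.org/x/crypto/openpgp import path and the RSABits field of packet.Config (check the version of the package you are on):

package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/openpgp"
	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	// Ask for a 4096-bit RSA key pair instead of the 2048-bit default.
	config := &packet.Config{RSABits: 4096}

	entity, err := openpgp.NewEntity("name", "comment", "user@example.com", config)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("generated key with fingerprint %x\n", entity.PrimaryKey.Fingerprint)
}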
I am attempting to generate a personal_sign in Golang like it's implemented in ethers.js. There is a similar question, but that one ended up using the regular sign instead of the personal_sign implementation.
Ethers
// keccak256 hash of the data
let dataHash = ethers.utils.keccak256(
  ethers.utils.toUtf8Bytes(JSON.stringify(dataToSign))
);
// 0x8d218fc37d2fd952b2d115046b786b787e44d105cccf156882a2e74ad993ee13
let signature = await wallet.signMessage(dataHash); // 0x469b07327fc41a2d85b7e69bcf4a9184098835c47cc7575375e3a306c3718ae35702af84f3a62aafeb8aab6a455d761274263d79e7fc99fbedfeaf759d8dc9361c
Golang:
func signHash(data []byte) common.Hash {
	msg := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(data), data)
	return crypto.Keccak256Hash([]byte(msg))
}

privateKey, err := crypto.HexToECDSA(hexPrivateKey)
if err != nil {
	log.Fatal(err)
}

dataHash := crypto.Keccak256Hash(dataToSign) // 0x8d218fc37d2fd952b2d115046b786b787e44d105cccf156882a2e74ad993ee13
signHash := signHash(dataHash.Bytes())

signatureBytes, err := crypto.Sign(signHash.Bytes(), privateKey)
if err != nil {
	log.Fatal(err)
}
// signatureBytes 0xec56178d3dca77c3cee7aed83cdca2ffa2bec8ef1685ce5103cfa72c27beb61313d91b9ad9b9a644b0edf6352cb69f2f8acd25297e3c64cd060646242e0455ea00
As you can see the hash is the same, but the signature is different:
0x469b07327fc41a2d85b7e69bcf4a9184098835c47cc7575375e3a306c3718ae35702af84f3a62aafeb8aab6a455d761274263d79e7fc99fbedfeaf759d8dc9361c Ethers
0xec56178d3dca77c3cee7aed83cdca2ffa2bec8ef1685ce5103cfa72c27beb61313d91b9ad9b9a644b0edf6352cb69f2f8acd25297e3c64cd060646242e0455ea00 Golang
Looking at the source code of Ethers.js, I can't find anything different aside from how the padding is managed.
Edit
Check the approved answer
func signHash(data []byte) common.Hash {
	hexData := hexutil.Encode(data)
	msg := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(hexData), hexData)
	return crypto.Keccak256Hash([]byte(msg))
}
There is a bug in the JavaScript code.
From the documentation of signer.signMessage() (see the Note section), it appears that a string is UTF-8 encoded, and binary data must be passed as a TypedArray or Array.
The Keccak hash is returned hex encoded, i.e. as a string, and is therefore UTF-8 encoded, which is incorrect. Instead, it must be converted to a TypedArray. For this purpose the library provides the function ethers.utils.arrayify().
The following JavaScript is based on the posted code, but performs the required hex decoding:
(async () => {
  let privateKey = "0x8da4ef21b864d2cc526dbdb2a120bd2874c36c9d0a1fb7f8c63d7f7a8b41de8f";
  let dataToSign = {"data1": "value1", "data2": "value2"};
  let dataHash = ethers.utils.keccak256(
    ethers.utils.toUtf8Bytes(JSON.stringify(dataToSign))
  );
  let dataHashBin = ethers.utils.arrayify(dataHash);
  let wallet = new ethers.Wallet(privateKey);
  let signature = await wallet.signMessage(dataHashBin);
  document.getElementById("signature").innerHTML = signature; // 0xfcc3e9431c139b5f943591af78c280b939595ce9df66210b7b8bb69565bdd2af7081a8acc0cbb5ea55bd0d673b176797966a5180c11ac297b7e6344c5822e66d1c
})();
<script src="https://cdn.ethers.io/lib/ethers-5.0.umd.min.js" type="text/javascript"></script>
<p style="font-family:'Courier New', monospace;" id="signature"></p>
which produces the following signature:
0xfcc3e9431c139b5f943591af78c280b939595ce9df66210b7b8bb69565bdd2af7081a8acc0cbb5ea55bd0d673b176797966a5180c11ac297b7e6344c5822e66d1c
The Go code below is based on the unmodified posted Go code, but using key and data from the JavaScript code for a comparison:
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func signHash(data []byte) common.Hash {
	msg := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(data), data)
	return crypto.Keccak256Hash([]byte(msg))
}

func main() {
	hexPrivateKey := "8da4ef21b864d2cc526dbdb2a120bd2874c36c9d0a1fb7f8c63d7f7a8b41de8f"
	dataMap := map[string]string{"data1": "value1", "data2": "value2"}
	dataToSign, _ := json.Marshal(dataMap)

	privateKey, err := crypto.HexToECDSA(hexPrivateKey)
	if err != nil {
		log.Fatal(err)
	}

	dataHash := crypto.Keccak256Hash(dataToSign) // 0x8d218fc37d2fd952b2d115046b786b787e44d105cccf156882a2e74ad993ee13
	signHash := signHash(dataHash.Bytes())

	signatureBytes, err := crypto.Sign(signHash.Bytes(), privateKey)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("0x" + hex.EncodeToString(signatureBytes))
}
The Go code gives the following signature:
0xfcc3e9431c139b5f943591af78c280b939595ce9df66210b7b8bb69565bdd2af7081a8acc0cbb5ea55bd0d673b176797966a5180c11ac297b7e6344c5822e66d01
Both signatures match except for the last byte.
The JavaScript code returns the signature in the format r|s|v (see here). v is one byte in size and is just the value in which both signatures differ.
It is v = 27 + rid where rid is the recovery ID. The recovery ID has values between 0 and 3, so v has values between 27 and 30 or 0x1b and 0x1e (see here).
The Go code, on the other hand, returns the recovery ID in the last byte instead of v. So that the signature of the Go code matches that of the JavaScript code in the last byte as well, the recovery ID must be replaced by v:
signatureBytes[64] += 27
fmt.Println("0x" + hex.EncodeToString(signatureBytes))
Does anyone have an example of using "github.com/gohugoio/hugo/resources/images/exif" to extract metadata from a local image using Go?
I looked through the docs, and since I'm new to Go I'm not 100% sure I'm doing things right. I do read the image, but I'm not sure what the next step would be.
fname := "image.jpg"
f, err := os.Open(fname)
if err != nil {
log.Fatal("Error: ", err)
}
(Edit 1)
Actually I think I found a solution:
d, err := exif.NewDecoder(exif.IncludeFields("File Type"))
x, err := d.Decode(f)
if err != nil {
	log.Fatal("Error: ", err)
}
fmt.Println(x)
However, the question is: how do I know which fields are available? "File Type", for example, returns <nil>.
Looks like Hugo uses github.com/rwcarlsen/goexif.
The documentation of the package on go.dev shows that Exif.Walk walks the name and tag of every non-nil EXIF field.
E.g., a small program:
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/rwcarlsen/goexif/exif"
	"github.com/rwcarlsen/goexif/tiff"
)

type Printer struct{}

func (p Printer) Walk(name exif.FieldName, tag *tiff.Tag) error {
	fmt.Printf("%40s: %s\n", name, tag)
	return nil
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("please give filename as argument")
	}
	fname := os.Args[1]

	f, err := os.Open(fname)
	if err != nil {
		log.Fatal(err)
	}

	x, err := exif.Decode(f)
	if err != nil {
		log.Fatal(err)
	}

	var p Printer
	x.Walk(p)
}
Example:
$ go run main.go IMG_123.JPG
ResolutionUnit: 2
YCbCrPositioning: 2
Make: "Canon"
Model: "Canon IXUS 255 HS"
ThumbJPEGInterchangeFormat: 5620
PixelYDimension: 3000
FocalPlaneResolutionUnit: 2
GPSVersionID: [2,3,0,0]
ExifVersion: "0230"
WhiteBalance: 1
DateTime: "2016:10:04 17:27:56"
CompressedBitsPerPixel: "5/1"
... etc ...
Orientation: 1
MeteringMode: 5
FocalLength: "4300/1000"
PixelXDimension: 4000
InteroperabilityIFDPointer: 4982
FocalPlaneXResolution: "4000000/244"
XResolution: "180/1"
ComponentsConfiguration: ""
ShutterSpeedValue: "96/32"
ApertureValue: "101/32"
ExposureBiasValue: "-1/3"
FocalPlaneYResolution: "3000000/183"
SceneCaptureType: 0
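If you only need a couple of specific fields instead of walking all of them, goexif also exposes Exif.Get together with the FieldName constants defined in the exif package. A minimal sketch (whether a field is present depends on the image, so the error must be checked):

// After x, err := exif.Decode(f), as in the program above:
tag, err := x.Get(exif.DateTime) // exif.Make, exif.Model, ... work the same way
if err != nil {
	log.Println("DateTime not present:", err)
} else if s, err := tag.StringVal(); err == nil {
	fmt.Println("DateTime:", s)
}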
I have a large int array that I want to persist on the filesystem. My understanding is the best way to store something like this is to use the gob package to convert it to a byte array and then to compress it with gzip.
When I need it again, I reverse the process. I am pretty sure I am storing it correctly, however recovering it is failing with EOF. Long story short, I have some example code below that demonstrates the issue. (playground link here https://play.golang.org/p/v4rGGeVkLNh).
I am not convinced gob is needed; however, from reading around it seems more efficient to store it as a byte array than as an int array, though that may not be true. Thanks!
package main

import (
	"bufio"
	"bytes"
	"compress/gzip"
	"encoding/gob"
	"fmt"
)

func main() {
	arry := []int{1, 2, 3, 4, 5}

	// now gob this
	var indexBuffer bytes.Buffer
	writer := bufio.NewWriter(&indexBuffer)
	encoder := gob.NewEncoder(writer)
	if err := encoder.Encode(arry); err != nil {
		panic(err)
	}

	// now compress it
	var compressionBuffer bytes.Buffer
	compressor := gzip.NewWriter(&compressionBuffer)
	compressor.Write(indexBuffer.Bytes())
	defer compressor.Close()
	// <--- I think all is good until here

	// now decompress it
	buf := bytes.NewBuffer(compressionBuffer.Bytes())
	fmt.Println("byte array before unzipping: ", buf.Bytes())
	if reader, err := gzip.NewReader(buf); err != nil {
		fmt.Println("gzip failed ", err)
		panic(err)
	} else {
		// now ungob it...
		var intArray []int
		decoder := gob.NewDecoder(reader)
		defer reader.Close()
		if err := decoder.Decode(&intArray); err != nil {
			fmt.Println("gob failed ", err)
			panic(err)
		}
		fmt.Println("final int Array content: ", intArray)
	}
}
You are using bufio.Writer which, as its name implies, buffers bytes written to it. This means that if you use it, you have to flush it to make sure the buffered data makes its way to the underlying writer:
writer := bufio.NewWriter(&indexBuffer)
encoder := gob.NewEncoder(writer)
if err := encoder.Encode(arry); err != nil {
	panic(err)
}
if err := writer.Flush(); err != nil {
	panic(err)
}
That said, the use of bufio.Writer is completely unnecessary here: you're already writing to an in-memory buffer (bytes.Buffer), so just skip it and write directly to the bytes.Buffer (then you don't even need to flush):
var indexBuffer bytes.Buffer
encoder := gob.NewEncoder(&indexBuffer)
if err := encoder.Encode(arry); err != nil {
	panic(err)
}
The next error is how you close the gzip stream:
defer compressor.Close()
This deferred close only happens when the enclosing function (here main()) returns, not a moment earlier. By that time you have already tried to read the zipped data, which may still sit in an internal cache of gzip.Writer rather than in compressionBuffer, so you obviously can't read the compressed data from compressionBuffer yet. Close the gzip stream without using defer:
if err := compressor.Close(); err != nil {
	panic(err)
}
With these changes, your program runs and outputs (try it on the Go Playground):
byte array before unzipping: [31 139 8 0 0 0 0 0 0 255 226 249 223 200 196 200 244 191 137 129 145 133 129 129 243 127 19 3 43 19 11 27 7 23 32 0 0 255 255 110 125 126 12 23 0 0 0]
final int Array content: [1 2 3 4 5]
As a side note: buf := bytes.NewBuffer(compressionBuffer.Bytes()) is also completely unnecessary; you can just start decoding from compressionBuffer itself, since you can read data from it that was previously written to it.
As you might have noticed, the compressed data is much larger than the initial, uncompressed data. There are several reasons: both encoding/gob and compress/gzip streams have significant overhead, and they (may) only make the input smaller at a larger scale (5 int numbers don't qualify).
Please check this related question: Efficient Go serialization of struct to disk.
For small arrays, you may also consider variable-length encoding, see binary.PutVarint().
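As a rough illustration of the variable-length option, here is a minimal sketch that packs the same small slice with binary.PutVarint and reads it back with binary.Varint; no gob or gzip is involved, and the resulting byte count depends on the magnitudes of the values:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	arry := []int{1, 2, 3, 4, 5}

	// Each int64 needs at most binary.MaxVarintLen64 bytes; small values take one byte.
	buf := make([]byte, 0, len(arry)*binary.MaxVarintLen64)
	tmp := make([]byte, binary.MaxVarintLen64)
	for _, v := range arry {
		n := binary.PutVarint(tmp, int64(v))
		buf = append(buf, tmp[:n]...)
	}
	fmt.Println("varint-encoded bytes:", buf, "length:", len(buf))

	// Decoding reverses the process with binary.Varint.
	var decoded []int
	for len(buf) > 0 {
		v, n := binary.Varint(buf)
		decoded = append(decoded, int(v))
		buf = buf[n:]
	}
	fmt.Println("decoded:", decoded)
}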
I have a question related to using the blob type as a partition key.
I use it because I need to save a hash value.
(The hash function returns binary data, usually shown as hexadecimal.)
I tried a select query with gocql, but it failed with the following error.
Is there any way to get a successful result for this kind of query?
Your advice is highly appreciated!!
-- result
hash_value: [208 61 222 22 16 214 223 135 169 6 25 65 44 237 166 229 50 5 40 221]
/ hash_value: ?=??߇?A,???2(?
/ hash_value: 0xd03dde1610d6df87a90619412ceda6e5320528dd
string
2018/03/22 10:03:17 can not unmarshal blob into *[20]uint8
-- select.go
package main

import (
	"crypto/sha1"
	"fmt"
	"log"
	"reflect"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("10.0.0.1")
	cluster.Keyspace = "ks"
	cluster.Consistency = gocql.Quorum
	cluster.ProtoVersion = 4
	cluster.Authenticator = gocql.PasswordAuthenticator{
		Username: "cassandra",
		Password: "cassandra",
	}
	session, _ := cluster.CreateSession()
	defer session.Close()

	text := "text before hashed"
	data := []byte(text)
	hash_value := sha1.Sum(data)
	hexa_string := fmt.Sprintf("0x%x", hash_value)
	fmt.Println("hash_value: ", hash_value)
	fmt.Println(" / string: ", string(hash_value[:]))
	fmt.Println(" / column1: ", hexa_string)
	fmt.Println(reflect.TypeOf(hexa_string))

	// *** select ***
	var column1 int
	returned_hash := sha1.Sum(data)
	//if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value= ? LIMIT 1`,
	//	hexa_string).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
	if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value=0xd03dde1610d6df87a90619412ceda6e5320528dd`).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
		//fmt.Println(err)
		log.Fatal(err)
	}
	fmt.Println("comment: ", returned_hash, column1)
}
-- table definition --
CREATE TABLE IF NOT EXISTS ks.sample (
    hash_value blob,
    column1 int,
    ...
    PRIMARY KEY((hash_value), column1)
) WITH CLUSTERING ORDER BY (column1 DESC);
I fixed the problem by changing the type of the variable returned_hash.
returned_hash (the variable that stores the returned result) should be []byte.
I understand it as follows:
marshal: convert the data given in the code to a type Cassandra can handle.
unmarshal: convert the data Cassandra returned back to a type the Go code can handle.
The original error means the latter step failed, so the type of returned_hash must have been wrong.
Correct me if I'm wrong. Thanks.
package main

import (
	"crypto/sha1"
	"fmt"
	"log"
	"reflect"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "browser"
	cluster.Consistency = gocql.Quorum
	//cluster.ProtoVersion = 4
	//cluster.Authenticator = gocql.PasswordAuthenticator{
	//	Username: "cassandra",
	//	Password: "cassandra",
	//}
	session, _ := cluster.CreateSession()
	defer session.Close()

	text := "text before hashed"
	data := []byte(text)
	hash_value := sha1.Sum(data)
	hexa_string := fmt.Sprintf("0x%x", hash_value)
	fmt.Println("hash_value: ", hash_value)
	fmt.Println(" / string(hash_value): ", string(hash_value[:]))
	fmt.Println(" / hexa(hash_value): ", hexa_string)
	fmt.Println(reflect.TypeOf(hexa_string))

	// *** select ***
	var column1 int
	//returned_hash := sha1.Sum(data)
	//var returned_hash *[20]uint8
	var returned_hash []byte
	if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value=? LIMIT 1`,
		hash_value[:]).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
		//if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value=0xd03dde1610d6df87a90619412ceda6e5320528dd`).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Returned: %#x %d \n", returned_hash, column1)
}
I was wondering what would be the most elegant way to write a Key-Value Form encoded map to an http.ResponseWriter.
Respond(kv map[string]string) {
	for key, value := range kv {
		fmt.Fprintf(a.w, "%s:%s\n", key, value)
	}
}
I have to follow this Key-Value format:
Key-Value Form Encoding
A message in Key-Value form is a sequence of lines. Each line begins
with a key, followed by a colon, and the value associated with the
key. The line is terminated by a single newline (UCS codepoint 10,
"\n"). A key or value MUST NOT contain a newline and a key also MUST
NOT contain a colon.
Additional characters, including whitespace, MUST NOT be added before
or after the colon or newline. The message MUST be encoded in UTF-8 to
produce a byte string.
I thought about using encoding/csv but isn't that a bit overkill?
Edit: What I came up with so far. (Thanks for all the suggested answers)
http://godoc.org/github.com/kugutsumen/encoding/keyvalue
https://github.com/kugutsumen/encoding/tree/master/keyvalue
The standard library provides support for this: look at http://godoc.org/net/url#Values.
You can do something like:
f := make(url.Values)
for k, v := range myMap {
	f.Set(k, v)
}
myOutput.WriteString(f.Encode())
For example,
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
)

func WriteRespond(w io.Writer, kv map[string]string) error {
	var buf bytes.Buffer
	for k, v := range kv {
		buf.WriteString(k)
		buf.WriteByte(':')
		buf.WriteString(v)
		buf.WriteByte('\n')
		_, err := buf.WriteTo(w)
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	kv := map[string]string{
		"k1": "v1",
		"k2": "v2",
	}
	var buf = new(bytes.Buffer)
	w := bufio.NewWriter(buf)
	err := WriteRespond(w, kv)
	if err != nil {
		fmt.Println(err)
		return
	}
	err = w.Flush()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(buf.Bytes())
	fmt.Println(buf.String())
}
Output:
[107 49 58 118 49 10 107 50 58 118 50 10]
k1:v1
k2:v2
If you want to write strings to any Writer in Go (including an http.ResponseWriter) without using the fmt package, you can use the bytes package to read the strings and write them to the Writer.
The code below creates a Buffer from the key and value strings using bytes.NewBufferString and then writes them to the http.ResponseWriter using the WriteTo function.
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	kv := map[string]string{"key1": "val1", "key2": "val2", "key3": "val3", "key4": "val4", "key5": "val5"}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		for key, value := range kv {
			kvw := bytes.NewBufferString(key + ":" + value + "\n")
			if _, err := kvw.WriteTo(w); err != nil {
				log.Fatal("Error: ", err)
			}
		}
	})

	if err := http.ListenAndServe("localhost:8080", nil); err != nil {
		log.Fatal("ListenAndServe: ", err)
	}
}
Will output:
key1:val1
key2:val2
key3:val3
key4:val4
key5:val5
Hopefully that's close to what you're after.
EDIT: You can also use the strings.Reader type and its corresponding WriteTo method from the strings package.
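A minimal sketch of that variant, reusing the handler loop from the program above but going through strings.NewReader and its WriteTo method (an extra "strings" import is assumed):

for key, value := range kv {
	// Build a reader over one key:value line and copy it to the ResponseWriter.
	r := strings.NewReader(key + ":" + value + "\n")
	if _, err := r.WriteTo(w); err != nil {
		log.Fatal("Error: ", err)
	}
}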