How to provide read access to implementers of a protocol while others have write access in Swift (Swift 2)

I'd like to provide read access for certain properties to all classes/structs that implement a protocol while client classes of the protocol are allowed read+write access. Is there a way to do this in Swift?
protocol WheelsProtocol {
    var count: Int { get set }
}

struct Car: WheelsProtocol {
    var count: Int = 0

    func checkTirePressure() {
        // Here, we will iterate over the count of wheels but we should
        // not allow the number of wheels to be changed
    }
}

struct CarFactory {
    var wheels: WheelsProtocol

    init(wheels: WheelsProtocol) {
        self.wheels = wheels
    }

    mutating func configureVehicle() {
        self.wheels.count = 4
    }
}

What about a protocol for car makers and a different one for cars, something like
protocol MakesCars {
    var wheelCount: Int { get set }
}

protocol HasWheels {
    var wheelCount: Int { get }
}

struct Car: HasWheels {
    let wheelCount: Int

    init(wheelCount: Int) {
        self.wheelCount = wheelCount
    }

    func checkTirePressure() {
        // Here, we will iterate over the count of wheels but we should
        // not allow the number of wheels to be changed
        wheelCount = 5 // COMPILER ERROR
    }
}

struct CarFactory: MakesCars {
    ...
}
The key is to declare the read-only property in the protocol as a var with { get } only; in the type that adopts the protocol, you then declare it as a let and set it in an initializer.
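For instance, a short usage sketch (not from the original answer):

let car = Car(wheelCount: 4)
print(car.wheelCount)   // reading is fine anywhere
// car.wheelCount = 5   // compile error: 'wheelCount' is a 'let' constant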

There is not a way to limit access to a particular method in the way you're describing. From the documentation on Access Control:
Swift provides three different access levels for entities within your code. These access levels are relative to the source file in which an entity is defined, and also relative to the module that source file belongs to.

Public access enables entities to be used within any source file from their defining module, and also in a source file from another module that imports the defining module. You typically use public access when specifying the public interface to a framework.

Internal access enables entities to be used within any source file from their defining module, but not in any source file outside of that module. You typically use internal access when defining an app’s or a framework’s internal structure.

Private access restricts the use of an entity to its own defining source file. Use private access to hide the implementation details of a specific piece of functionality.
To accomplish this, you would need to create a separate module (i.e., a framework), restrict writes to internal scope, and make reads public.
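Swift also lets a property's setter have a more restrictive access level than its getter, which is a natural fit here. A minimal sketch, assuming the type lives in its own framework module:

// Importers of the framework can read `count` but not write it;
// code inside the defining module keeps full read-write access.
public struct Wheels {
    public internal(set) var count: Int

    public init(count: Int) {
        self.count = count
    }
}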

Related

Is there a known way to add new syntax features to Protobuf?

Protobuf provides the service keyword, which defines the RPC interface of an application.
I also want to use the concept of an entity, which is a part of a service (one service contains multiple entities). Each entity type has its own unique identifier, which makes it possible to address different entities in a service.
I would like to use proto like this
message UserReq {
    string username = 1;
    string password = 2;
}

message RegReq {
    uint32 result_code = 1;
}

message RemoteEntityInterface {
    MyEntity entity = 1;
}

message GiveItemResult {
    uint32 result_code = 1;
}

service MyService {
    rpc RegisterUser (UserReq) returns (RegReq) {}
    rpc Login (UserReq) returns (RemoteEntityInterface) {}
}

entity MyEntity {
    rpc GiveItem (GiveItemReq) returns (GiveItemResult) {}
}
As you can see in the example, I used the keyword entity, which is unknown to protobuf; it means that MyService can return an interface to some remote object (MyEntity) through the Login remote method.
What are the ways to do this? (Perhaps writing a plugin, or is there a known way to modify protobuf's source code?) Or is there a more flexible solution than protobuf?
I would also like to use multiple parameters per rpc; to add Java-like attributes to rpc, service, and entity; and to add a data model for entity (variables/fields) in order to support real-time replication from an entity to another service.
I think this would be very flexible for services in game development.
The only official way to extend .proto syntax is to define custom options.
For example, you could have something like:
extend google.protobuf.ServiceOptions {
    optional bool is_entity = 123456;
}

service MyEntity {
    option (is_entity) = true;
    rpc GiveItem (GiveItemReq) returns (GiveItemResult) {}
}
The default code generator will not do anything special with this option, but you can access it from your own code and from a protoc plugin if you write one.
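For illustration, a hedged sketch of how a protoc plugin written in Go might check that option. It assumes the extend block above was compiled into a hypothetical package mypb exposing E_IsEntity; the plugin API is google.golang.org/protobuf/compiler/protogen:

package main

import (
    "google.golang.org/protobuf/compiler/protogen"
    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/descriptorpb"

    "example.com/mypb" // hypothetical package generated from the extend block
)

func main() {
    protogen.Options{}.Run(func(gen *protogen.Plugin) error {
        for _, file := range gen.Files {
            for _, svc := range file.Services {
                // Read the raw service options and look up the custom extension.
                opts := svc.Desc.Options().(*descriptorpb.ServiceOptions)
                if isEntity, ok := proto.GetExtension(opts, mypb.E_IsEntity).(bool); ok && isEntity {
                    // This service was marked as an entity; emit whatever
                    // entity-specific code your generator needs here.
                }
            }
        }
        return nil
    })
}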

Evaluating an hcl.Expression whose expected value is a Go interface

I am writing an HCL-based configuration language in which some types of blocks can reference other blocks using expressions, as follows:
source "my_source" {
// some blocks and attributes...
to = destination.my_kafka_topic
}
destination "kafka" "my_kafka_topic" {
// some blocks and attributes...
}
destination "elasticsearch" "my_es_index" {
// some blocks and attributes...
}
(The goal is to model the flow of events in a messaging system, and then materialize the actual system in terms of infrastructure.)
During parsing, I simply perform a partial decoding of the static attributes and blocks and ensure that "to" hcl.Attributes can be interpreted as hcl.Traversals, but I don't evaluate them (yet):
type Source struct {
    Name string // first HCL tag
    /* ... */
    To hcl.Traversal
}

type Destination struct {
    Type string // first HCL tag
    Name string // second HCL tag
    /* ... */
}
Later, referenceable blocks (e.g. destination in the example above) are associated with a certain Go type based on their Type label, and this Go type always implements an interface called Referenceable.
type Referenceable interface {
    AsReference() *Destination
}

type Destination struct {
    // fields that represent the destination in terms of address, protocol, etc.
}
At that stage, I know how to build a map of referenceables map[string]Referenceable where the key is <block_type>.<block_name> (e.g. "destination.my_kafka_topic"), but I am unsure how to leverage that in an hcl.EvalContext in order to evaluate the variables to Go Referenceables.
What I would like to know is: how do I express a Go interface in the cty type system used by hcl.EvalContext? Is it even possible? If not, is it advisable to inject instances of *Destination into the hcl.EvalContext instead?
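One possible direction (a hedged sketch, not from the original thread): cty supports "capsule" types, which wrap opaque Go values and can be placed in an hcl.EvalContext. Names follow the github.com/zclconf/go-cty and github.com/hashicorp/hcl/v2 APIs:

package main

import (
    "reflect"

    "github.com/hashicorp/hcl/v2"
    "github.com/zclconf/go-cty/cty"
)

// Destination as in the question: address, protocol, etc.
type Destination struct{}

// A capsule type wrapping *Destination. Capsule values carry a pointer to a
// native Go value, so wrap the concrete *Destination rather than the
// Referenceable interface (interfaces themselves cannot be capsule-typed).
var destinationType = cty.Capsule("destination", reflect.TypeOf(Destination{}))

// buildEvalContext exposes each referenceable as destination.<name>,
// matching traversals like destination.my_kafka_topic.
func buildEvalContext(refs map[string]*Destination) *hcl.EvalContext {
    vals := make(map[string]cty.Value, len(refs))
    for name, dest := range refs {
        vals[name] = cty.CapsuleVal(destinationType, dest)
    }
    return &hcl.EvalContext{
        Variables: map[string]cty.Value{
            "destination": cty.ObjectVal(vals),
        },
    }
}

func main() {
    ctx := buildEvalContext(map[string]*Destination{"my_kafka_topic": {}})
    _ = ctx
    // After to.TraverseAbs(ctx) yields a cty.Value v, the Go value can be
    // recovered with v.EncapsulatedValue().(*Destination).
}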

How to apply an ascending ID parameter to thousands of modules of multiple types

Suppose I have a simple module Client defined in Client.ned along with two subclassed simple modules:
simple Client
{
    parameters:
        volatile int clientId;
}

simple ClientA extends Client
{
}

simple ClientB extends Client
{
}
Now what I wish to do is define a network Network with 1000 ClientA instances and 1000 ClientB instances as its submodules. I would like each instantiation to have a clientId one bigger than the last, i.e. I would like the clientId parameter to ascend with each instantiation. For example, suppose we have the following Network.ned file:
network Network
{
    submodules:
        clientA[1000]: ClientA {
            clientId = index;
        };
        clientB[1000]: ClientB {
            clientId = 1000 + index;
        };
}
What I'm looking for is a general approach, where we don't know beforehand the number of clients to be instantiated, or even the number of Client subclasses; just that if a Client of some sort is instantiated, its clientId parameter should be one larger than that of the previous instantiation.
Remove volatile from the clientId declaration in Client.ned and your solution will work properly.
The main purpose of volatile is to guarantee that a "fresh" value of a parameter is returned when it is read several times. In your network clientId is constant, so volatile is not necessary. A side effect of using volatile is problems with using index and parentIndex.
Besides the above, one should be aware that omnetpp.ini is a very convenient method of controlling the simulation. For example, your NED files may look like:
simple Client {
    parameters:
        int clientId;
}

simple ClientA extends Client { }
simple ClientB extends Client { }

network Network {
    submodules:
        clientA[1000]: ClientA;
        clientB[1000]: ClientB;
}
And the parameters may be set in omnetpp.ini:
**.clientA[*].clientId = index()
**.clientB[*].clientId = 1000 + index()
EDIT
When the number of clients is not known in advance, the sizeof() method may be used to determine this number:
**.clientA[*].clientId = index()
**.clientB[*].clientId = sizeof(clientA) + index()
Since OMNeT++'s simulator assigns an ascending ID to every module and submodule, using getId(), getIndex(), getVectorSize(), and getParentModule() allows me to access this information for each module.
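As an illustration of that approach, a minimal C++ sketch (assumed cSimpleModule subclass, not from the original thread) that reads those built-in identifiers in initialize():

#include <omnetpp.h>

using namespace omnetpp;

// A Client module that logs the kernel-assigned identifiers at startup.
class Client : public cSimpleModule
{
  protected:
    virtual void initialize() override
    {
        // getId() is the simulation-wide unique module ID assigned by the kernel;
        // getIndex() and getVectorSize() describe the submodule vector position.
        EV << getFullPath()
           << " id=" << getId()
           << " index=" << getIndex()
           << " vectorSize=" << getVectorSize()
           << " parent=" << getParentModule()->getFullName()
           << "\n";
    }
};

Define_Module(Client);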

Abstract base class design in Go vs C++

I am still learning the Go way of doing things, coming from a C++ background. I am looking for feedback contrasting OOP inheritance to interface composition.
I have a design situation in a Go program where, if I was implementing in C++, I would solve with an abstract base class.
Suppose I need a base class, which has many implementors. The base class has shared methods that do work on abstract data items. Different Worker implementations provide CRUD operations on different item types, but workers all use the shared methods of the base class for general work.
In C++ I might do it this way
class IItem
{
    // virtual methods
};

class IWorker
{
public:
    // one of many virtual functions that deal with IItem CRUD
    virtual IItem* createItem() = 0;

    // concrete method that works on interfaces
    void doWork()
    {
        IItem* item = createItem();
        // do stuff with an IItem*
    }
};

class Toy : public IItem
{
};

// one of many kinds of workers
class ElfWorker : public IWorker
{
public:
    ElfWorker()
    {
        // constructor implicitly calls IWorker()
    }

    IItem* createItem() override
    {
        return new Toy;
    }
};
In Go you don't have abstract virtual methods such as IWorker::createItem(). Concrete types need to supply the base with an interface or function that does the right thing.
So I think that in the Go code below, the ew.ItemCRUD interface has to be set explicitly to a pointer to the ElfWorker.
The elf knows how to create() an Item, which in its case happens to be a Toy. Other workers would implement their own ItemCRUD for their data objects.
type Item interface {
    // various methods
}

type ItemCRUD interface {
    create() Item
    // other CRUD
}

type Worker struct {
    ItemCRUD // embedded interface
}

func (w *Worker) doWork() {
    item := w.create()
    // do stuff with item
    _ = item
}

type Toy struct {
}

type ElfWorker struct {
    Worker // embedded
    // ..
}

func NewElfWorker() *ElfWorker {
    ew := &ElfWorker{}
    ew.ItemCRUD = ew // <-- #### set Worker ItemCRUD explicitly ####
    return ew
}

// create satisfies the embedded ItemCRUD interface
func (ew *ElfWorker) create() Item {
    return &Toy{}
}

// more ElfWorker Item CRUD

func bigFunction(w *Worker) {
    // ...
    w.doWork()
    // ..
}
So the part that I am wrestling with a bit is the explicit setting. It seems like the "Go way" of composition does require this explicit step if I want the base Worker type to provide shared methods on Items.
Thoughts?
Being new to Go myself, this answer is not backed by years of Go experience :-)
I don't know if the way you tackle this is the correct approach.
Go allows interfaces to be implemented without explicit declaration. If you have elves and you need them to do ItemCRUD methods, just implement them.
The method set will then match the interface, and you can use the elf as an ItemCRUD where required.
To supply any elf object with a default ItemCRUD implementation, you could implement an adapter for ItemCRUD and compose the adapter with your abstract elf. The abstract methods could have a default implementation such as log.Fatal("not implemented").
The concrete elves shadow the adapter's methods, which answers your question: it is not required to inject the object during creation. A sketch of the adapter idea follows.
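A brief sketch of that adapter idea (hypothetical names, everything assumed to live in one package):

package main

import "log"

type Item interface{}

type Toy struct{}

// ItemCRUDAdapter supplies a failing default for each ItemCRUD method,
// so concrete types only override the operations they actually support.
type ItemCRUDAdapter struct{}

func (ItemCRUDAdapter) create() Item {
    log.Fatal("create: not implemented")
    return nil
}

// ElfWorker embeds the adapter and shadows create with a real implementation.
type ElfWorker struct {
    ItemCRUDAdapter
}

func (ElfWorker) create() Item { return &Toy{} }

func main() {
    var elf ElfWorker
    _ = elf.create() // resolves to ElfWorker.create, not the adapter default
}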
Yet, since Go has no generics, having an ItemCRUD may not be the right approach.
I'm not entirely clear what the plan is with the above code, and without understanding that it's hard to suggest specific solutions. What is clear is that you are very much coming to the party with an established OOP mindset (I did too), which is rarely helpful in finding the best solution in Go.
In Go I wouldn't usually embed an interface in an implementation. Interfaces are satisfied implicitly in Go, which allows for a nice separation of expectation and implementation that should generally be respected.
A receiver method should expect an interface; the implementation passed at runtime should just satisfy the signature of that interface implicitly.
So perhaps your doWork method needs to be able to create items; then it would be the doWork method that accepts any implementation of ItemCRUD, which it could call to create an item. But this is me guessing at what you really want to do here. I suspect that if you just separate implementation from interface you will probably answer your own question.
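A minimal sketch of that suggestion (hypothetical names): doWork takes the interface as a parameter rather than embedding it in a base struct.

package main

import "fmt"

type Item interface{}

type ItemCRUD interface {
    Create() Item
}

// doWork expects the interface; any implementation can be passed at runtime.
func doWork(c ItemCRUD) {
    item := c.Create()
    fmt.Printf("made a %T\n", item) // do stuff with item
}

type Toy struct{}

// ElfWorker satisfies ItemCRUD implicitly; no embedding or explicit wiring.
type ElfWorker struct{}

func (ElfWorker) Create() Item { return &Toy{} }

func main() {
    doWork(ElfWorker{}) // prints: made a *main.Toy
}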

golang API interface, what am I missing?

I want to create an interface to make it easy to add new storage backends.
package main

// Storage is an interface to describe storage backends
type Storage interface {
    New() (newStorage Storage)
}

// File is a type of storage that satisfies the interface Storage
type File struct {
}

// New returns a new File
func (File) New() (newFile Storage) {
    newFile = File{}
    return newFile
}

// S3 is a type of storage that satisfies the interface Storage
type S3 struct {
}

// New returns a new S3
func (S3) New() (newS3 S3) {
    newS3 = S3{}
    return newS3
}

func main() {
    // List of backends to choose from
    myStorage := make(map[string]Storage)
    myStorage["file"] = File{}
    myStorage["s3"] = S3{}

    // Using one of the backends on demand
    myStorage["file"].New()
    myStorage["s3"].New()
}
But it seems it is not possible to define and satisfy a method that should return an object satisfying the interface itself.
File.New() returns an object of type Storage, which satisfies Storage.
S3.New() returns an object of type S3.
S3 should satisfy the interface Storage as well, but I get this:
./main.go:32: cannot use S3 literal (type S3) as type Storage in assignment:
    S3 does not implement Storage (wrong type for New method)
        have New() S3
        want New() Storage
What am I doing wrong?
I hope I am missing something basic here.
This code does not make sense. You are either implementing a factory pattern that is tied to a struct of the type the factory is going to produce, or you are reinventing the wheel in a wrong way by reimplementing the already existing new keyword and tying it to a struct which is nil at the time you would use it.
You can either get rid of the helper function and simply use
s := new(S3)
f := new(File)
Or you could use a static factory function like:
// Do NOT tie your factory to your type
func New() S3 {
    return S3{}
}
Or, which seems to better suit your use case, create a factory interface, implement it, and have its New() function return a Storage instance:
type StorageFactory interface {
    New() Storage
}

type S3Factory struct{}

func (f *S3Factory) New() Storage {
    return S3{}
}
There are various ways of registering your factory. You could use a global var and init:
import _ "example.com/foo/storage/s3" // imported for its registering init function

type FactoryGetter func() StorageFactory
type FactoryRegistry map[string]FactoryGetter

// Registry will be updated by an init function in the storage provider packages
var Registry FactoryRegistry

func init() {
    Registry = make(map[string]FactoryGetter)
}

// For the sake of shortness, a const. Make it a flag, for example.
const storageProvider = "s3"

func main() {
    f := Registry[storageProvider]()
    s := f.New()
    s.List()
}
And somewhere in the S3 package:
func init() {
    Registry["s3"] = func() StorageFactory { return &S3Factory{} }
}
You could even think of making the factories take parameters.
I like what you're doing here and I've actually worked on projects that involved very similar design challenges, so I hope my suggestions can help you out some.
In order to satisfy the interface, you'd need to update your code from...
// New returns a new S3
func (S3) New() (newS3 S3) {
    newS3 = S3{}
    return newS3
}
to this
// New returns a new S3
func (S3) New() (newS3 Storage) {
    newS3 = S3{}
    return newS3
}
This means you will receive an instance of Storage back, so to speak. If you then want to access anything from S3 without having to use a type assertion, it would be best to expose that S3 function/method in the interface.
So let's say you want a way to List the objects in your S3 client. A good approach to supporting this would be to update the Storage interface to include List, and update S3 so it has its own implementation of List:
// Storage is an interface to describe storage backends
type Storage interface {
    New() (newStorage Storage)
    List() ([]entry, error) // or however you would prefer to trigger List
}

...

// List returns the entries stored in S3
func (S3) List() ([]entry, error) {
    // initialize "entry" slice
    // do work, looping through pages or something
    // return entry slice and error if one exists
}
When it comes time to add support for Google Cloud Storage, Rackspace Cloud Files, Backblaze B2, or any other object storage provider, each of them will also need to implement List() ([]entry, error) - which is good! Once you've used this List function in the way you need, adding more clients/providers will be more like developing plugins than actually writing/architecting code (since your design is complete by that point).
The real key with satisfying interfaces is to have the signature match exactly and think of interfaces as a list of common functions/methods that you'd want every storage provider type to handle in order to meet your goals.
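To make that concrete, here is a small self-contained sketch of the matched-signature version (the entry type and the sample names are hypothetical):

package main

import "fmt"

// entry is a stand-in for whatever your listings return.
type entry struct {
    name string
}

// Storage describes storage backends; every provider must implement List
// with exactly this signature to satisfy the interface.
type Storage interface {
    List() ([]entry, error)
}

type File struct{}

func (File) List() ([]entry, error) {
    return []entry{{name: "local.txt"}}, nil
}

type S3 struct{}

func (S3) List() ([]entry, error) {
    return []entry{{name: "remote.txt"}}, nil
}

func main() {
    // Both providers live behind the same map of backends.
    backends := map[string]Storage{
        "file": File{},
        "s3":   S3{},
    }
    for name, b := range backends {
        entries, err := b.List()
        fmt.Println(name, entries, err)
    }
}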
If you have any questions or if anything I've written is unclear, please comment and I'll be happy to clarify or adjust my post :)
