Question about Cyclone - visual-studio

I have read on Wikipedia that the Cyclone programming language is a safe dialect of C, so consider the following C code:
int strlen(const char *s)
{
    int iter = 0;
    if (s == NULL) return 0;
    while (s[iter] != '\0') {
        iter++;
    }
    return iter;
}
This function assumes that the string being passed in is terminated by NUL ('\0').
But if we pass a string like this,
char buf[] = {'h','e','l','l','o','!'};
it would cause strlen to iterate through memory not necessarily associated with the string s. So there is another version of this code in Cyclone:
int strlen(const char ? s)
{
    int iter, n = s.size;
    if (s == NULL) return 0;
    for (iter = 0; iter < n; iter++, s++) {
        if (*s == '\0') return iter;
    }
    return n;
}
Can I use Cyclone in Visual Studio, or do I have to download a new compiler?

You will need Cyclone, which is also the name of the compiler. The Cyclone compiler is available in source form here. According to the documentation, it can only be built with GCC; check this for instructions on building it.
You can use it from Visual Studio if you provide custom rules for *.cyc files; this way you can use the IDE as a better text editor. For syntax highlighting and style, add *.cyc to the C language extension list.

You can run custom tools for a file with a specific filename extension. The MSDN Library article on how to set up the custom build rule is here. Beware that this is pretty broken in VS2010 right now.

Related

Can complex numbers be used in Apple Metal code?

I'm trying to make a stand-alone binary which uses Metal for acceleration. (found a solution in another answer)
I'm trying to compile the Metal code and I get an error:
metal_mandel.metal:24:16: error: subscript of pointer to incomplete type 'device complex' (aka 'device __Reserved_Name__Do_not_use_complex')
using namespace metal;
///#include <complex.h>

kernel void point_in_mandel(device complex* inC,
                            device complex* inZ,
                            device int* count,
                            uint index [[thread_position_in_grid]])
{
    // the for-loop is replaced with a collection of threads, each of which
    // calls this function.
    count[index] = 0;
    while (cabs(inZ[index]) < 2.0 && count[index] < 256)
    {
        inZ[index] = (inZ[index] * inZ[index]) + inC;
        count[index]++;
    }
}
I've tried both adding and removing the complex include, but my head is too sore from looking at this code all day to work this out further.
Do I just have to implement complex numbers manually?
Thanks
Mark

compiler segfault when printf is added (gcc 10.2 aarch64-none-elf- from Arm)

I know this is not an adequate Stack Overflow question, but...
This is a function in scripts/dtc/libfdt/fdt_ro.c of u-boot v2021.10.
const void *fdt_getprop_namelen(const void *fdt, int nodeoffset,
                                const char *name, int namelen, int *lenp)
{
    int poffset;
    const struct fdt_property *prop;

    printf("uuu0 nodeoffset = 0x%x, name = %s, namelen = %d\n",
           nodeoffset, name, namelen);
    prop = fdt_get_property_namelen_(fdt, nodeoffset, name, namelen, lenp,
                                     &poffset);
    //printf("uuu1 prop = 0x%lx, *lenp = 0x%x, poffset = 0x%x\n", prop, *lenp, poffset);
    if (!prop)
        return NULL;

    /* Handle realignment */
    if (fdt_chk_version() && fdt_version(fdt) < 0x10 &&
        (poffset + sizeof(*prop)) % 8 && fdt32_to_cpu(prop->len) >= 8)
        return prop->data + 4;
    return prop->data;
}
When I build the program, if I uncomment the second printf, the compiler seg-faults.
I have no idea. Is it purely a compiler problem (I think so; it should never die, at least)? Or could it be linked to a fault of mine somewhere else in the code? Is there any method to find the cause of the segfault? (Probably not.)
If you're getting a segmentation fault when running the compiler itself, the compiler has a bug. There are some errors in your code, but those should result in compile-time diagnostics (warnings or error messages), never a compiler crash.
The code in your question is incomplete (missing declarations for fdt_get_property_namelen_, printf, NULL, etc.). Reproduce the problem with a complete self-contained source file and submit a bug report: https://gcc.gnu.org/bugzilla/
printf("uuu1 prop = 0x%lx, *lenp = 0x%x, poffset = 0x%x\n", prop, *lenp, poffset);
prop is a pointer, so I'd use %p instead of %lx.
lenp is a pointer, so I'd make sure that it points to valid memory.

GCC, clang/llvm, exe file size

Windows 10, LLVM 7, GCC 8.1, Visual studio 2019.
#include <iostream>
#include <fstream>
using namespace std;

char exe[1000000] = {};
int n = 0;
int filesize;

void read() {
    int pointer = 0;
    cin >> filesize;
    fstream f;
    f.open("s.exe", ios::in | ios::app | ios::binary);
    f.seekp(pointer, ios::beg);
    while (pointer < filesize) {
        f.read((char*)&n, sizeof(char));
        exe[pointer] = n;
        pointer += 1;
    }
    f.close();
}

void showMassive() {
    int pointer = 0;
    while (pointer < filesize) {
        cout << pointer << ":" << (unsigned int8_t)exe[pointer] << endl;
        pointer += 1;
    }
}

void showAssembler() {
}

void write() {
    int pointer = 0;
    fstream f;
    f.open("s1.exe", ios::out | ios::app | ios::binary);
    f.seekp(pointer, ios::beg);
    while (pointer < filesize) {
        n = exe[pointer];
        pointer += 1;
        f.write((char*)&n, sizeof(char));
    }
    f.close();
}

void MachineCodeOptimizer() {
    // some code
    exe[1031] += 1; // just for example
}

int main() {
    read();
    showMassive();
    showAssembler();
    MachineCodeOptimizer();
    write();
    return 0;
}
With this code, Clang creates an exe file of 312 KB at best (with -O1). GCC creates a 66 KB exe regardless. What is happening? Why such a difference between compilers? I looked at the machine code, but don't understand it. Now I have tried Visual Studio 2019: 26 KB! Visual Studio 2019 gives a result close to assembler (in file size).
Clang and GCC are two completely independent compilers. When you write code in your source language, you only specify what you want the machine to do, not how it should do it. Compilers are free to choose their own way of getting there, as long as they stay within the limits specified by the source language, so it is not surprising that the two resulting executables differ in file size.
The instructions chosen by the two compilers may also differ a lot (or completely), since there are, for example, a dozen different ways to represent loops in machine code (including taking advantage of parallel execution on the target processor, or not).
You might want to check out Matt Godbolt's talk from 2017 (https://www.youtube.com/watch?v=bSkpMdDe4g4); it is a short but thorough introduction to what compilers actually do (for you) behind the scenes.

gcc/clang optimization when de-/serializing

Some gcc/clang compiler optimizations allow reordering the execution of code in the assembly (e.g. for gcc: -freorder-blocks -freorder-blocks-and-partition -freorder-functions). Is it safe to use such optimizations when de-/serializing data structures in a specific order?
For instance:
void write(int* data, std::ofstream& outstream)
{
    outstream.write(reinterpret_cast<char*>(data), sizeof(int));
}

void read(int* data, std::ifstream& instream)
{
    instream.read(reinterpret_cast<char*>(data), sizeof(int));
}

void serialize()
{
    std::ofstream ofs("/somePath/to/some.file");
    int i = 1;
    int j = 2;
    int k = 3;
    write(&i, ofs);
    write(&j, ofs);
    write(&k, ofs);
    ofs.close();
}

void deserialize()
{
    std::ifstream ifs("/somePath/to/some.file");
    int i;
    int j;
    int k;
    read(&i, ifs);
    read(&j, ifs);
    read(&k, ifs);
    ifs.close();
}
-freorder-blocks, -freorder-blocks-and-partition, and -freorder-functions all influence the order in which code is laid out in your executable file:
-freorder-blocks allows gcc to rearrange the straight-line blocks of assembly code that combine to form your function to try and minimize the number of branch prediction misses that your CPU might make.
-freorder-functions takes the functions that the compiler deems unlikely to execute and moves them to a far-away part of your executable file. The goal is to pack as much frequently-executed code into as little memory as possible, so as to benefit from the instruction cache as much as possible.
-freorder-blocks-and-partition is like -freorder-functions, but at the assembly-block level. If you have an if (unlikely(foo)) { x = 1; } branch in your code, then the compiler might choose to take the code representing x = 1; and move it away from frequently executed code.
None of these will affect the control flow of your program. All of these optimizations guarantee that if you write i to a file and then j, that is still the order observed after optimizations are applied.

Does using unique_ptr mean that I don't have to use the restrict keyword?

When trying to get loops to auto-vectorise, I've seen code like this written:
void addFP(int N, float *in1, float *in2, float * restrict out)
{
    for (int i = 0; i < N; i++)
    {
        out[i] = in1[i] + in2[i];
    }
}
Here the restrict keyword is needed to reassure the compiler that the pointers don't alias, so that it can vectorise the loop.
Would something like this do the same thing?
void addFP(int N, float *in1, float *in2, std::unique_ptr<float> out)
{
    for (int i = 0; i < N; i++)
    {
        out[i] = in1[i] + in2[i];
    }
}
If this does work, which is the more portable choice?
tl;dr Can std::unique_ptr be used to replace the restrict keyword in a loop you're trying to auto-vectorise?
restrict is not part of C++11; it is part of C99.
std::unique_ptr<T> foo; is telling your compiler: I only need the memory in this scope. Once this scope ends, free the memory.
restrict tells your compiler: I know that you can't know or prove this, but I pinky swear that this is the only reference to this chunk of memory that occurs in this function.
unique_ptr doesn't stop aliases, nor should the compiler assume they don't exist:
int* pointer = new int[3];
int* alias = pointer;
std::unique_ptr<int> alias2(pointer);
std::unique_ptr<int> alias3(pointer); //compiles, but crashes when deleting
So your first version is not valid C++11 (though it works on many modern compilers), and the second doesn't do the optimization you are expecting. To still get the behavior, consider std::valarray.
I don't think so. Consider this code:
auto p = std::make_unique<float>(0.1f);
auto raw = p.get();
addFP(1, raw, raw, std::move(p));
