I need to run a loop 10 billion times but I am failing to get it to run; please help me get this done. I am getting an ordinal error.
program kittu;
var
  i: qword;
  j: qword;
  k: qword;
begin
  i := 10000000000;
  k := 0;
  for j := 1 to i do
  begin
    k := k + 1;
  end;
  writeln(k);
  readln();
end.
From the FreePascal docs for this error message:
Error: Ordinal expression expected
The expression must be of ordinal type, i.e., maximum a Longint. This happens, for instance, when you specify a second argument to Inc or Dec that doesn't evaluate to an ordinal value.
Your loop variables i and j are defined as qword, which is 64 bits wide; LongInt is 32-bit.
The type allowed for the for-statement counter is platform dependent.
Observation: qword is not supported as a for-loop counter variable on a 32-bit platform.
But there seems to be no documentation saying which data types are supported as counter variables.
Tried on both 32-bit and 64-bit platforms:
32-bit:
The declaration of variable j can be changed to dword to get the program to compile successfully.
It is also necessary to compile in release mode (overflow checking off) to avoid an error due to overflow.
Compiler: Free Pascal IDE for Win32 for i386
Target CPU: i386
Version 1.0.12 2017/02/13
Compiler Version: 3.0.2
Environment: Win10
Edit:
Successfully compiled with the i386 Free Pascal IDE using the x86_64 cross-compiler on 64-bit Win10 (edit 2: see the command line on the left-hand side of the screenshot).
[Image]
Guess: the counter in the for statement might be optimized into a register. Under the i386 configuration, a qword is too large to fit in a 32-bit register.
64-bit:
[Image]
But it seems to work fine on a 64-bit platform.
Compiler: Free Pascal Compiler version 3.0.2 [2017/03/18] for x86_64
Environment: Mac OSX 10.11.6
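If the loop has to run on a 32-bit target, one workaround (a sketch, not taken from the answers above) is to use a while loop, since the ordinal restriction applies only to the for-loop counter:

program kittu2;
var
  i, j, k: qword;
begin
  i := 10000000000;
  k := 0;
  j := 1;
  { a while loop places no ordinal restriction on the counter type,
    so a qword counter also works when targeting 32-bit }
  while j <= i do
  begin
    k := k + 1;
    j := j + 1;
  end;
  writeln(k);
  readln();
end.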
Context
I'm working through some examples in a book by Johnathan Bartlett titled "Learn to Program with Assembly" (2021). The author assumes a linux environment. I'm on OSX (Monterey). He's using gcc. I've got clang (v 13.1.6). In chapter 7 the author introduces laying out data records.
To facilitate this, he uses the .equ directive to define some constants in a file titled persondata.s, which happens to contain only a data segment. For example:
# Describe the components of the struct
.globl WEIGHT_OFFSET, HAIR_OFFSET, HEIGHT_OFFSET, AGE_OFFSET
.equ WEIGHT_OFFSET, 0
.equ HAIR_OFFSET, 8
.equ HEIGHT_OFFSET, 16
.equ AGE_OFFSET, 24
In another file, tallest.s, he makes use of the HEIGHT_OFFSET constant to access the height of a person record. This file has only a text segment.
movq HEIGHT_OFFSET(%rbx), %rax
The Problem
When I assemble tallest.s using the built-in tools on OSX, the assembler complains that I'm trying to use 32-bit absolute addressing in 64-bit mode.
The Question
How is this supposed to work on OSX? How am I supposed to make use of .equ defined constants?
Things I Tried
If I merge these two files into one file, then the assembler doesn't complain. It treats HEIGHT_OFFSET as the constant that it is.
I presume the idea is to have constants defined along with the data, and then make use of those constants in code to avoid 'magic numbers'. Sounds like a good idea.
I tried assembling, linking, and running this code using the book's docker image (johnnyb61820/linux-assembly). It works. No complaints. Some details:
# as -v
GNU assembler version 2.31.1 (x86_64-linux-gnu) using BFD version (GNU Binutils for Debian) 2.31.1
^C
# ld -v
GNU ld (GNU Binutils for Debian) 2.31.1
# uname -a
Linux eded2adb9c06 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 GNU/Linux
So it works as written under that set-up, just not under my set-up, which uses clang (v 13.1.6).
Based on the fact that this works in the linuxkit docker image, I thought to install gcc via homebrew on my machine. This got me version 12.2.0 of gcc, which I used to try to compile/link my files. It also thinks HEIGHT_OFFSET is a problem due to 32-bit absolute addressing in 64-bit mode.
Based on the output of uname -a in the docker image, I'm guessing it is 64-bit: Linux eded2adb9c06 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 x86_64 GNU/Linux
Oddly enough, that set-up doesn't complain about 32-bit absolute addressing not being supported. Under OSX, I had to make everything rip-relative to access any static data (true for both gcc and clang). It makes me wonder what it is doing with these addresses.
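For reference, the kind of change I mean looks something like this (a sketch; people is an illustrative label in the data section, not from the book):

movq people, %rbx          # rejected on OSX: 32-bit absolute address in 64-bit mode
leaq people(%rip), %rbx    # rip-relative form that the OSX assemblers accept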
As a possibly final note, under OSX yasm also doesn't like me using .equ defined constants from another file. It complains about wanting to make use of "32 bit absolute relocations" in 64 bit mode. GCC (12.2.0) and llvm-mc (13.0.1) also take issue with the HEIGHT_OFFSET constant.
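One way to keep the constants shared without merging the files (a sketch, not necessarily how the book intends it; the include-file name is made up) is to put the .equ lines in a separate file and pull it into each user with the .include directive, which GNU as understands and clang's integrated assembler generally accepts as well. The symbols then resolve to plain constants at assembly time, so no relocation is needed:

# person_offsets.s -- hypothetical shared include file
.equ WEIGHT_OFFSET, 0
.equ HAIR_OFFSET, 8
.equ HEIGHT_OFFSET, 16
.equ AGE_OFFSET, 24

# tallest.s
.include "person_offsets.s"
movq HEIGHT_OFFSET(%rbx), %rax   # HEIGHT_OFFSET is now an assemble-time constant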
I just checked my Linux box's config file /boot/config-$(uname -r) and found that both of these flags are defined:
CONFIG_X86_64=y
CONFIG_X86=y
Shouldn't these two flags be mutually exclusive?
In addition, I am wondering whether these two flags should only be used in kernel code, because I see many
#ifdef CONFIG_X86_64
in the kernel source. Can a user-space application use this flag too?
Also, the processor can be switched from 64-bit mode to compatibility mode. If that happens, code that depends on CONFIG_X86_64 will all fail at run time, right? How does an application (kernel or user space) detect whether the machine is in 64-bit mode or compatibility mode?
Thanks.
CONFIG_X86 is the flag targeting the architecture as a whole, the entire x86 family.
This includes both the 32-bit and 64-bit processors.
This can be seen by looking at the Kconfig file of the latest kernel (4.15.1 at the time of writing) [1]:
# SPDX-License-Identifier: GPL-2.0
# Select 32 or 64 bit
config 64BIT
        bool "64-bit kernel" if ARCH = "x86"
        default ARCH != "i386"
        ---help---
          Say yes to build a 64-bit kernel - formerly known as x86_64
          Say no to build a 32-bit kernel - formerly known as i386

config X86_32
        def_bool y
        depends on !64BIT
        # ... other options removed

config X86_64
        def_bool y
        depends on 64BIT
In this file, config options are stripped of the CONFIG_ prefix.
CONFIG_X86_64 is defined if and only if CONFIG_64BIT is defined; otherwise CONFIG_X86_32 is.
Look at the depends on declarations to see this.
On a 64-bit kernel, the command cat /boot/config-$(uname -r) | grep 'CONFIG_64BIT' should return CONFIG_64BIT=y.
This is also confirmed in this answer for a question on how to make a 32-bit config into a 64-bit one.
The counterpart of CONFIG_X86_64 is thus CONFIG_X86_32.
TL;DR: CONFIG_X86 is defined for all x86 processors, of either bitness. CONFIG_X86_64 is defined only for the subset of x86 processors supporting AMD64/IA-32e.
[1] This link may change at any time. See this.
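As for the user-space part of the question: the CONFIG_* macros come from the kernel's .config and are only visible to code built against it (the kernel itself and modules), so ordinary user-space programs normally test the compiler's own target macros instead. A minimal sketch, not part of the original answer:

#include <stdio.h>

int main(void)
{
    /* __x86_64__ and __i386__ are defined by the compiler for the target
       it is building for; no kernel CONFIG_* symbols are involved. */
#if defined(__x86_64__)
    puts("compiled for 64-bit x86");
#elif defined(__i386__)
    puts("compiled for 32-bit x86");
#else
    puts("compiled for a non-x86 target");
#endif
    return 0;
}

Note that these macros describe what the binary was compiled for, not the mode the CPU happens to be running in; a 32-bit binary running on a 64-bit kernel (compatibility mode) still sees __i386__.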
I've been trying to compile a SPARC program. Just a simple one taken straight out of the book: SPARC Architecture, Assembly Language Programming, and C: Second Edition. However, I get an error leading me to believe SPARC wasn't correctly configured on my computer yet. This is on a Windows machine.
.global main
main:
        save    %sp, 96, %sp
        mov     9, %l0
        sub     %l0, 1, %o0
        sub     %l0, 7, %o1
        call    .mul
        nop
        sub     %l0, 11, %o1
        call    .div
        mov     %o0, %l1
        mov     1, %g1
        ta      0
I have GCC 4.9.2 installed through Cygwin 1.7.5.
I get the following error when trying to compile with GCC:
C:\Users\Matt\Desktop>gcc expr.s -o expr
expr.s: Assembler messages:
expr.s: Warning: end of file not at end of a line; newline inserted
expr.s:3: Error: no such instruction: `save %sp,96,%sp'
expr.s:4: Error: bad register name `%l0'
expr.s:5: Error: bad register name `%l0'
expr.s:6: Error: bad register name `%l0'
expr.s:9: Error: bad register name `%l0'
expr.s:11: Error: bad register name `%o0'
expr.s:13: Error: bad register name `%g1'
expr.s:14: Error: no such instruction: `ta 0'
This flags almost everything that is unique to SPARC, compared to other architectures, as an 'error'.
So, I tried setting the architecture specifically for the program:
gcc -march=sparc expr.s -o expr
This still throws an error, which leads me to believe that my current configuration isn't set up for SPARC.
The procedure I used to set up GCC is here.
The only difference is instead of specifying c,c++ for the languages, I used all.
Thanks
You are right, your gcc is not set up for SPARC. If you are running Windows, the computer you are running on has an ISA other than SPARC (most likely x86). Your ISA is the hardware interface and cannot be changed by a software upgrade.
To compile SPARC programs, you will need to rebuild gcc as a SPARC cross-compiler (host and target ISAs are different). When building from source, this is done with the --target= configure option. Building a cross-compiler for Linux is similar to building one for Cygwin (link).
Once you have built the cross-compiler, you will need a way to simulate a SPARC processor in order to execute the resulting binaries. Using a system such as QEMU will work; a rough sketch follows.
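Very roughly, the steps look like this (a sketch only; version numbers, the install prefix and the sparc-linux-gnu triplet are illustrative, and a full toolchain also needs a target C library):

# build binutils and gcc as a SPARC cross-toolchain (each in its own build dir)
../binutils-2.31/configure --target=sparc-linux-gnu --prefix=$HOME/opt/cross
make && make install
../gcc-8.3.0/configure --target=sparc-linux-gnu --prefix=$HOME/opt/cross --enable-languages=c
make && make install

# assemble and link with the cross tools, then run under QEMU's user-mode emulator
$HOME/opt/cross/bin/sparc-linux-gnu-gcc expr.s -o expr
qemu-sparc ./expr

Be aware that the book's example uses Solaris-style conventions (.mul, .div and the ta 0 exit trap), so it may need small adjustments to run against a Linux-targeted toolchain.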
Here's a small tutorial on compiling simple programs for a SPARC V8 target and running them on QEMU. The tutorial includes steps for obtaining a cross-compiler (assuming you're working with Linux).
I am trying to compile Apple's Libm (version 2026, tarball here). The only file that is failing to compile properly is Source/Intel/frexp.s because
/<path>/Libm-2026/Source/Intel/frexp.s:239:5:
error: invalid instruction mnemonic 'movsxw'
movsxw 8+(8 + (32 - 8))(%rsp), %eax
^~~~~~
/<path>/Libm-2026/Source/Intel/frexp.s:291:5:
error: invalid instruction mnemonic 'movsxw'
movsxw 8(%rsp), %eax
^~~~~~
Looking around on the Internet I can find only very scant details about the movsxw instruction, but it does appear to exist for i386 architectures. I am running OS X 10.9.3 with a Core i5 processor. The macro __x86_64__ is predefined; however, it seems the __i386__ macro is NOT*.
I was under the impression that the x86_64 instruction set was fully compatible with the i386 set. Is this incorrect? I can only assume that the movsxw instruction does not exist in the x86_64 instruction set, thus my question is: what does it do, and what can I replace it with from the x86_64 instruction set?
*Checked with: clang -dM -E -x c /dev/null
The canonical AT&T mnemonic for movsxw is movswl, although at least some assembler versions accept the former too.
movsxb : Sign-extend a byte into the second operand
movsxw : Sign-extend a word (16 bits) into the second operand
movsxl : Sign-extend a long (32 bits) into the second operand
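Applied to the failing lines in the error output above, the replacement would be (a sketch):

movswl 8+(8 + (32 - 8))(%rsp), %eax   # was: movsxw, sign-extend 16-bit word into %eax
movswl 8(%rsp), %eax                  # was: movsxw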
movsxw assembles just fine for me in 64-bit mode using gcc/as (4.8.1/2.24). I don't have clang for x86 installed on this machine, but you could try specifying the size of the second operand (i.e. change movsxw to movsxwl, which would be "sign-extend word into long").
I have a 64-bit system, but gcc is 32-bit, and when I do
>./gcc -c foobar.c
it makes foobar.o, which is 64-bit. OK, but how does it know to do that? Based on what environment setting does it know to produce a 64-bit object, and where is that documented?
Come to think of it, it is strange that it does that, is it not? But the file utility clearly says gcc is 32-bit and foobar.o is 64-bit. (I moved everything into the same directory so there would be no confusion.)
I also checked the three dynamically linked libraries that it reads (libc, libm and libz) and they are also all 32-bit.
To clarify, I don't want to know how to make it produce 32-bit output. I want to know what it is looking at now that makes it produce 64-bit output. That is my question, not how to force it the other way around.
When GCC is configured, three different systems can be specified:
build: the system where GCC is going to be built (probably not relevant to your question)
host: the system where GCC is going to be executed (32-bit in your case)
target: the system where binaries produced by GCC are going to be executed (64-bit in your case)
You can see how your GCC was configured by running the
gcc -v
command (look for the --build, --host and --target options).
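The relevant lines look something like this (illustrative output, not from the asker's machine; the triplets will differ):

$ gcc -v
Using built-in specs.
Target: x86_64-linux-gnu
Configured with: ../gcc/configure --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu --target=x86_64-linux-gnu ...

Here the compiler itself is a 32-bit (i686) host binary, but its target, and therefore the objects it produces by default, is x86_64.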