[Grace-core] Typing of Number

James Noble kjx at ecs.vuw.ac.nz
Tue Jun 21 14:32:22 PDT 2011


> My suggestion would be that if an exact number is passed as an argument, and if the number is a valid index for the data structure, then the result is to extract the element.  Otherwise, it's an error.  

sounds reasonable 

> The normal case is that the index will come from an internal iterator.  The above rule will also work if the program does integer arithmetic, or uses indexes like floor(x/2), as one might do in a partition.    It will raise an error in other cases, which is the correct behavior, in my view.  
> 
> A more interesting question is: what should at()ifAbsent() do when the index is 2+3.i ?   The ifAbsentBlock, or an error?

it should raise an error like the others, I guess.
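To make the rule concrete, here's a sketch in Python of how I read the proposal (the function names and 1-based indexing are my assumptions, not anything we've agreed on):

```python
from fractions import Fraction

def at(elements, index):
    # Proposed rule: accept an index only when it is an exact number that
    # denotes a valid position; anything else is an error.
    if isinstance(index, Fraction) and index.denominator == 1:
        index = index.numerator          # an exact rational like 4/2 is fine
    if isinstance(index, int) and 1 <= index <= len(elements):
        return elements[index - 1]       # assuming 1-based collections
    raise IndexError("invalid index: " + repr(index))

def at_if_absent(elements, index, absent_block):
    # Per the discussion: a non-exact index (e.g. 2+3i) is still an error;
    # only a valid-but-missing position runs the block.
    if isinstance(index, Fraction) and index.denominator == 1:
        index = index.numerator
    if not isinstance(index, int):
        raise TypeError("index is not an exact integer: " + repr(index))
    if 1 <= index <= len(elements):
        return elements[index - 1]
    return absent_block()
```

So floor(x/2) and exact rationals that happen to be integers work, while 2+3.i raises rather than running the ifAbsent block.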


>> - when an integral but non Natural type is passed (byte, etc etc)
> 
> Eh?  Bytes aren't integers.

the question is whether they should be.

>>> Do you mean "distinguished from each other" or "distinguished from non-numeric types"?
>> 
>> both I think: primarily distinguished from each other but this implies distinguished from non-numeric types.
>> 
>>> Binary64 and Binary128 have the same behavior as types.  And maybe that's exactly right: you want to be able to replace the Binary64 constructor methods by Binary128 constructors, and have the program continue to be type-correct, but with a more exact result. 
>> 
>> that only requires that Binary128 is a subtype of Binary64. 
> 
> Yes, but it should also be possible to replace Binary128 by Binary64 and have the program continue to be type-correct, but with a less exact result.  That requires Binary64 is a subtype of Binary128.

I'm sure something breaks right about here.

perhaps Binary64 and Binary128 are subtypes of BinaryFloatingPoint.

(or they are actually Binary<DoubleWord> and Binary<QuadWord> or something)
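The common-supertype arrangement might look something like this - a sketch in invented Grace-ish syntax, not anything in the spec:

```
// Neither width is a subtype of the other; both conform to a
// common supertype, dodging the mutual-subtyping problem above.
type BinaryFloatingPoint = {
    +(other : BinaryFloatingPoint) -> BinaryFloatingPoint
}

// Binary64 and Binary128 each conform to BinaryFloatingPoint,
// so code written against the supertype accepts either width,
// but substituting one width for the other is not automatic.
```

The cost is that code wanting a *specific* width has to ask for it by name.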

> Which raises the question: are our types a partial-order or a pre-order?
> 
>>> There is of course an efficiency argument for prohibiting re-implementation of integer: if you can tell from the type that a value has the particular built-in machine integer representation, you can emit machine instructions rather than method requests.  But it precludes the nice automatic conversion to BigNums that are so convenient.  Didn't we have some principles that covered this?
>> 
>> (looks it up)
>> we've got:
>> * The execution of the language should not depend on a program’s static types.
>> * Efficiency is not a concern of this language design.
>> * The language should support a simple performance model for simple programs. 
>> 
>> pragmatically we at least need to be able to link to C or Java libraries.
> 
> Does that mean that we need wrap-around 32-bit arithmetic as well, for compatibility with Java?  Argh!

and C... I'd been assuming we do: perhaps that is simply crazy.

The big issue actually isn't 32/64/128-bit MachineIntegers, it's floating point:
I've convinced myself we do need to support floats (or doubles) for performance, 
but (somehow) we can get away without machine integers...

I think the reason is that, within say a 31-bit range, rationals are only a constant overhead slower than machine integers,
but things go weird without truncation for arbitrary-precision floating point.
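A quick illustration of the trade-off, using Python's stdlib Fraction as a stand-in for exact rationals:

```python
from fractions import Fraction

# Exact rationals never lose precision: the cost is bookkeeping on the
# numerator and denominator, a (roughly) constant factor in small ranges.
total = Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10)
print(total == Fraction(3, 10))   # True: exactly 3/10

# Fixed-width binary floats truncate instead, so the "same" sum misses.
f = 0.1
print(f + f + f == 0.3)           # False: 0.30000000000000004
```

Without truncation, though, repeated arithmetic can grow the numerator and denominator without bound - which is the "goes weird" part for anything float-like.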

>> I think that it would be good e.g. to be able to say (and check statically) 
>> that some computation is really in bytes,
> 
> Eh?  Bytes?  Bites?  Mad dogs and Englishmen?

:-) 

> No, it doesn't.  The obvious (to me) way to get these two conflicting behaviors is with two distinct operations: rational plus, which is constrained to keep one within the rationals, and numeric plus, which is constrained only to keep one within the numbers.

that way lies ... O'Caml (or BCPL, I forget which) 

do we then do the same for floating point vs arbitrary precision? 

> + is an operation on Numbers, not on Bytes.   If we have byte objects, I would suggest that they understand operations like and() and nand() and or() and xor() but NOT +, because that would be confusing.   We could use ⊕ for addition mod 256, if you think that it's necessary, or just call it plusMod()

right, and then another name for 64bit, 128bit, 32bit etc?  
and another for float, and for double? 

I don't think that will really work.
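For what it's worth, the byte proposal is easy enough to sketch (in Python, with invented method names):

```python
class Byte:
    """Sketch of the proposal: bytes understand bitwise operations but
    NOT '+'; wrap-around addition gets an explicit name instead."""

    def __init__(self, value):
        self.value = value & 0xFF        # always an 8-bit quantity

    def and_(self, other): return Byte(self.value & other.value)
    def or_(self, other):  return Byte(self.value | other.value)
    def xor(self, other):  return Byte(self.value ^ other.value)
    def nand(self, other): return Byte(~(self.value & other.value))

    def plus_mod(self, other):
        # explicit wrap-around addition mod 256, never spelled '+'
        return Byte(self.value + other.value)
```

The awkward part, as above, is that every width and every float format then needs its own vocabulary.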

> Yes.  Emerald had self-types (we invented them too) but no exact types.  Personally, I've never really seen the need for exact types; they are of the same ilk as final classes: a way of restricting the options of those who want to (re-)use your work.

yes, that's just it: sometimes you want to do that - the question is whether the language supports it, or some library
written in a low-level language like C or Java does.

I mean, we'll need to reflect Java/C/CLR types somehow into Grace - and do so with a certain amount of efficiency.
One way is not really to bother, and to do that "below" Grace in some kind of stub generator.
Another way is to provide a library that exports those types, and at least write *some* of that in Grace.

I guess we don't have to decide that now.


For Numbers I'm kind of leaning towards the   "type Number { method +(other : selftype) : selftype }"
route, but I'm not sure if that would actually work. 
Perhaps the Number *type* should be called Numeric, and Number be the arbitrary-precision-rational class
(which may be implemented in a bunch of concrete, inter-operable, subclasses)

Then normal Grace Numbers, IEEE.Binary64s, EvilModuloArithmeticIntegers could all be subtypes of Number,
but they wouldn't interoperate.    For indexing, I guess (small) collections could use a method like 
"asSmallNaturalElse: {errorBlock}" to return a value from 1..2^30 or 1..2^62 - nominally as a Number -  
perhaps implemented as a directly tagged integer - or even as some special internal subrange type
which a reasonably clever compiler/vm could know about and optimise. 
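Roughly this, modelled in Python (the name, the error-block protocol, and the 2^30 bound are all my assumptions):

```python
SMALL_NATURAL_MAX = 2 ** 30   # assumed tagged-integer range on 32-bit

def as_small_natural_else(number, error_block):
    # Sketch of the proposed indexing helper: answer a small natural
    # if the number denotes one, otherwise run the error block.
    if isinstance(number, int) and 1 <= number <= SMALL_NATURAL_MAX:
        return number
    return error_block()
```

Collections would funnel every index through this, so only one place needs to know about the tagged representation.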

Perhaps our pragmatic performance goal is: no slower than python! 

J
