Dozensonline > New Systems of Measure > Binary Representation In Dozenals

I have asked something like this before, but I got no responses.

Since I don't recall this topic being discussed elsewhere in this forum, I thought I would start one. I am surprised that bases other than dozenal get more attention here than actually improving things in a dozenal society, given that this is the Dozensonline forum.

Computers understand only base 2 at their core level. Base-2 numerals grow in length faster than those of any other base, and we already have a term for a single binary digit: the bit, a natural unit that can take either of two states, 0 or 1, and stores information accordingly. So we have a unit that is as simple as a unit should be. But computers store bits in packets, while usually transferring data as single bits; for example, one packet of 8 bits is required to encode one character, and some encodings require up to 32 bits, made up of 4 packets. This so-called 'packet' is another unit, the byte. The byte has not always been 8 bits historically, but now that 8 bits is the de facto standard, we have an unnecessary unit existing alongside the simple bit. By convention, the byte is usually used when storing data and the bit when transferring it.

Then, in the decimal world, the usual prefixes can mean something else as binary prefixes, where '1000' really means 1024. Adding to the confusion, 'b' was defined as bit and 'B' as byte; 'Ki' was defined as 1024 (kibi) and 'K' as 1000 (kilo), 1024^2 as mebi (Mi) and 1000^2 as mega (M), and so on; before that, 'K' was the binary kilo and 'k' the decimal kilo, a convention I am not sure was ever consistently followed. On top of that, 1 kbps is not a transfer of 1 KB every second, i.e. 1 kb = 1000 bits whereas 1 KB = 1024 × 8 bits; and the 'i' after K, M, G, T, etc. is hardly seen anywhere in practice now, as far as I know. Hard-disk manufacturers still advertise capacity misleadingly for their own benefit; some people still don't know that a transfer speed of 1 mbps does not mean 1 MB each second, and those who do have to make awkward calculations back and forth.
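To make the prefix clash concrete, here is a small Python sketch (nothing here beyond the standard decimal and binary prefix values; the printed figures are approximate):

```python
# Decimal (SI) vs binary (IEC) prefix values.
KILO, KIBI = 10**3, 2**10
MEGA, MEBI = 10**6, 2**20
GIGA, GIBI = 10**9, 2**30
TERA, TEBI = 10**12, 2**40

# kb vs KB: 1 kb = 1000 bits, but 1 KiB = 1024 * 8 bits.
assert KILO == 1000                # kilobit: 1000 bits
assert KIBI * 8 == 8192            # kibibyte: 8192 bits

# The drift grows with each power: ~2.4% at kilo, ~10% at tera.
print(KIBI / KILO)                 # 1.024
print(TEBI / TERA)                 # 1.099511627776

# A drive advertised as "1 TB" (decimal), reported in binary gibibytes:
print(TERA / GIBI)                 # ~931.3, as most operating systems report it
```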
Adding to this, there are many further problems I could share (I have heard of cases of merging binary and decimal prefixes, such as 1000 × 1024 for 'mega'), but I wanted to keep this short and more of a proposal for reform than a lecture.

In the dozenal world, this problem (where a prefix can mean another kind of prefix and a unit another kind of unit, originally mostly a marketing gimmick by storage manufacturers to increase profit, followed ever since as an informal standard) can be solved by strictly dropping the unnecessary prefixes and units: the binary prefixes, and the grouping of bits into 'bytes'. For the unit, we should deal only in bits, to keep things simple; the computer architecture (which ironically works in bits, not bytes) knows what packet sizes it can work with to store and send data. Humans don't have to adjust to computers; computers have to adjust to humans. They don't do arithmetic like 5 + 4 directly: they convert it to binary, add, and convert back to decimal to show the result, 9. Likewise, we could drop the binary prefixes and keep dozenal ones. The approach should not be something like 2^12 = kilo, 2^(12×2) = mega, 2^(12×3) = giga, etc., because then dozenal radix-point shifting no longer works. Instead, simple prefixes (Kode's prefixes) like unqua, biqua, etc., pure powers of the dozen applied to the true binary unit, the bit, would work.

So, a usual speed of *10 pb/t is 8.6 mb/s, where p = *10^5, b = bit, m = mega (decimal, not binary Mega), and t = pentciaday.

Also, a usual file size of *2 nb is 1.2 GB, where n = *10^9, G = giga (binary), i.e. gibi, b = bit, and B = byte.
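Both figures can be checked numerically; here is a sketch in Python, taking the pentciaday as one day divided by 12^5 and the prefixes as pure powers of twelve, per the definitions above:

```python
DAY = 86400                      # seconds in a day
PENTCIADAY = DAY / 12**5         # one pentciaday ~ 0.347 s

# *10 pb/t: a dozen pentquabits (12 * 12^5 = 12^6 bits) per pentciaday
bits_per_pentciaday = 12**6
bits_per_second = bits_per_pentciaday / PENTCIADAY
print(bits_per_second / 10**6)   # ~8.6 decimal megabits per second

# *2 nb: two ennquabits (2 * 12^9 bits) expressed in binary gigabytes
size_bits = 2 * 12**9
size_gib = size_bits / 8 / 2**30
print(size_gib)                  # ~1.2 GiB
```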

As simple as that.

I know that some storage architectures come in sizes that can be exactly represented only with round binary prefixes; these can still be represented in decimal or dozenal by rounding off the fractional part. For example, the typical HD display resolution at a 16:9 aspect ratio is 1366 × 768, which is not exactly 1 megapixel as advertised but 1,049,088 pixels; it is simply rounded to 1 mega in decimal. The same rounding can be done with numbers that are not round in dozenal, even if they are perfectly round in binary-prefix representation. Moreover, although bits are stored in groups of bytes and are usually accessed that way, we should not forget that the actual base unit here is the bit, the smallest possible piece of information; the grouping of bits into one byte to represent one character is somewhat arbitrary, and can change as technology improves or as the character repertoire grows beyond what a byte can accommodate.
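The rounding claim is easy to verify; a quick Python check (the last line just compares the same count against 12^6, for the dozenal view):

```python
# The advertised "HD" resolution is not an exact megapixel in any base.
width, height = 1366, 768
pixels = width * height
print(pixels)                    # 1049088

# Decimal rounds it to "1 mega"; the binary count is no rounder:
print(pixels / 10**6)            # ~1.049 decimal megapixels
print(pixels / 2**20)            # ~1.0005 "binary megapixels"
print(pixels / 12**6)            # ~0.351 hexqua-pixels in dozenal terms
```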

If I got everything right above, tell me: why won't this work? And if it would, what could be improved?

The problem with dozenal here is that none of the powers of 12 is close enough to be taken as a power of 2.

In decimal, we have 1024, which is passable as 1000. In twelfty, 128 is close enough to pretend it's 120. So something like 36 bits is 30 + 6, giving 64 gigs, while in twelfty it is 35 + 1, giving 2 hundred-millions.

CODE |

[d:\]rxc 2**36
291,4825 V616:0000 for 200,0000,0000
68,719 476 736.0 0 for 64,000 000 000. |

Of course, if you pretend that 128 = 144, you get around the problem that the prefixes name the lower number while the computers use the larger, since the binary number and the lower number would then be the same.

2**36 = 1.139,(10)01,(11)85,4 for 2.000,000,000,0 (2D10), in dozenal digits with (10) and (11) as single digits.

Although not as close as the twelfty, it is still reasonably acceptable. It would amount to using digit-pairing or foursomes, rather than three-digit grouping.

But neither twelfty nor dozenal has the feature of decimal, where the equality falls at the 10th power, i.e. you can take something like '48' and write it as 8 + 40, i.e. 2^8 * K^4 = 256T. In the other bases, you need to divide by seven, i.e. 48 = 6 + 42 = 2^6 * H^6 = 64 ŞM.
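The exponent-splitting trick can be sketched numerically; assuming H = 120 for twelfty's hundred (since 2^7 = 128 is the power being approximated):

```python
# Decimal: 48 = 8 + 40, so 2^48 ~ 2^8 * K^4 = 256 T
exact = 2**48
approx_dec = 2**8 * 1000**4          # 256 * 10^12
print(exact, approx_dec)
print(exact / approx_dec)            # ratio ~1.0995, i.e. ~10% high

# Twelfty: 48 = 6 + 42, so 2^48 ~ 2^6 * H^6, with 128 pretended to be 120
approx_tw = 2**6 * 120**6
print(exact / approx_tw)             # ratio ~1.47; the 128 ~ 120 pretence compounds
```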

In terms of paper tape, a foot of tape with holes at 1/10 inch makes 120, and it's not at all inconceivable that you could have something like 128 bytes making a foot.

I use this scale to translate decimal data sizes into twelfty, and I should imagine the same would go for dozenal, if somewhat smaller.

Thank you for your comments, Wendy, but my problem is still with the approach of finding powers of the base that are close to the binary powers: naming those closest numbers (if any) would just revive the question of which prefix should be used for which purpose. My proposal was simply to eradicate the binary powers and use only pure powers of the base, where radix shifting works. I also proposed eradicating bytes (for measuring bandwidth, representing storage space, etc.) and using only the basic unit, the bit. So, per my examples, a bandwidth of 1 hexquabit per pentciaday is equivalent to around 8.6 megabits per second, and a storage space of 2 ennquabits is 1.2 gibibytes. That way, there is no more fiddling with numbers that are merely close enough to share a name, and no more pair of units (bits and bytes) that start with the same letter and sound similar but differ in magnitude, distinguished only by lowercase versus uppercase. It's not that using them for their separate purposes always causes confusion, but they make quick calculation unnecessarily troublesome. For example, a 1 GB file transferred at 10 mb/s does not take 100 seconds, because 'G' here is not 10^9 but 1024^3, 'm' is not 1024^2 but 10^6, and on top of that 1 B is actually 8 b. But in my suggested scheme, if a file of size *10 ennquabits is transferred at an average rate of *10 hexquabits per pentciaday, it downloads in *(10/10)(ennqua/hexqua) = *1000 pentciadays.
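The contrast between the two schemes can be sketched in Python (using the definitions above: ennqua = 12^9, hexqua = 12^6, pentciaday = day/12^5):

```python
# Conventional units: a 1 GiB file at 10 Mb/s is nowhere near 100 s.
size_bits = 2**30 * 8            # 1 GiB in bits ('G' is binary, 1 B = 8 b)
rate_bps = 10 * 10**6            # 10 Mb/s, decimal mega
print(size_bits / rate_bps)      # ~859 seconds, not 100

# Dozenal-prefixed bits: *10 ennquabits at *10 hexquabits per pentciaday.
size = 12 * 12**9                # *10 ennquabit = 12^10 bits
rate = 12 * 12**6                # *10 hexquabit/pentciaday = 12^7 bits/pentciaday
t = size // rate
print(t, t == 12**3)             # 1728 pentciadays = *1000: a pure radix shift
```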

The closest is 2**18 = 107854 (dozenal) against 12**5 = 100000, or 1/4 meg. The square of this is 64 gig. This is a semitone equation of 216 = 215, about 10 times better than the next pair.

Outside of this, I would fall back on either 128 = 144 or 2048 = 1728. In semitone terms, these correspond to 84 = 86 or 132 = 129; that is, an error of a log-difference of 1/21.

A ten-bit byte is not really used, except for the close connection to 1000; you use things like 8 or 9 bits or whatever.

The difference here is that kibi is far enough from kilo to make it worthwhile having different, but close, prefixes.

I was reading a bit (lol) of the history of the bit and the byte. Largely, 8 bits per byte seems a result of being a power of 2, as is 16. I discovered there were different byte sizes in the early days of computing. The idea of a 12-bit byte is a bit awkward, I gather, for being half again the size of 8 or a quarter smaller than 16; it is a bit more workable when you think in terms of a 'nibble' of 4 bits, making 3 nibbles per 12-bit byte.

As a side question to the folks here: does anyone know how quantum computing and quantum programming will be implemented? I understand there are the same two states in quantum computing, with the addition of a third state that is the superposition of both and can be either until it is determined to be one or the other. Does this mean there are the same two states in quantum computing, or a variable third state? Maybe even 4 states? Grouping in 12 may become more useful as a bridge from binary to either a 3- or 4-state system.

dan

Qubits are still binary, since they can only be in superpositions of exactly two states, 0 or 1. There are similarly ternary qutrits.

Consider the Multiple Base Number System. I reviewed the book here. It essentially proposes using the primes 2 and 3 as a basis for computation; that is, base (2, 3), not base 6. It crucially relies on rapidly determining a representation of n that involves the fewest digits. The places are not really the same as in a standard base: the only digits are 0 and 1, and the place values {1, 2, 3, 4, 6, 8, 9, etc.} = A003586 (the 3-smooth numbers) are arranged in a grid indexed by the exponents of 2 (horizontally) and 3 (vertically). This is called the "canonic representation". It is not currently easy to determine this representation.

I wish there were a more open sort of synopsis available online. I have the book; if anyone is coming to Atlanta, I could let you read it. It is fairly expensive.

I wrote a few of the considerations in the book into the OEIS last year:

A276380: Irregular triangle where row n contains the terms k of the partition of n produced by a greedy algorithm such that all elements are in A003586. The reference defines a "canonic" representation of n on page 33 as one having the lowest number of terms. The greedy algorithm does not always render the canonic representation: a(41) = {1, 4, 36}, but {9, 32} is the shortest possible partition of 41 such that all terms are in A003586 (i.e., 3-smooth).
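The greedy algorithm is easy to sketch: repeatedly peel off the largest 3-smooth number not exceeding the remainder. A small Python illustration (a brute-force search, not an efficient implementation), reproducing the a(41) example:

```python
def is_3smooth(k):
    # k is 3-smooth if its only prime factors are 2 and 3
    for p in (2, 3):
        while k % p == 0:
            k //= p
    return k == 1

def greedy_partition(n):
    # Greedily subtract the largest 3-smooth number <= remainder (A276380)
    parts = []
    while n > 0:
        k = max(j for j in range(1, n + 1) if is_3smooth(j))
        parts.append(k)
        n -= k
    return parts

print(greedy_partition(41))   # [36, 4, 1], though the canonic form is {9, 32}
```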

A277070: Row length of A276380(n). Represents the partition size generated by greedy algorithm at A276380(n) such that all parts k are unique and in A003586.

A237442: a(n) is the least number of 3-smooth numbers that add up to n. Canonic length of n.

A277071: Numbers n for which A277070(n) does not equal A237442(n). These are numbers n for which the greedy algorithm A276380(n) produces a partition of n with more than A237442(n) terms that are all unique and in A003586.

A277045: Irregular triangle T(n,k) read by rows giving the number of partitions of length k such that all of the members of the partition are distinct and in A003586. If n is in A003586, then T(n,1) = 1, else T(n,1) = 0. T(n,k) also is the number of ways of representing n involving k 1's in the base(2,3) or "dual-base number system" (i.e., base(2,3)). The number of "canonic" representations of n in a dual-base number system as defined by the reference as having the lowest number of terms, appears in the first column of the triangle with a value greater than 0. A237442(n) = the least k with a nonzero value.

Although the bit is an absolute thing, the addressing is done in parallel, and the byte is an item of address. The actual number of bits in a byte depends on the number of channels your computer uses.

Fortran was written to work off a five-bit byte, basically, the alphabet in one channel, and numerals and punctuation in the other. The telephone system worked on a seven-bit system, using the eighth bit as a parity bit. Because computers were largely devised on obsolete telephone equipment, computers follow this.

Even in the days of DOS, the high page of the code page was put to different functions. WordStar used the high bit for formatting. Hex viewers have a function to 'strip the high bit', to show as letters what WordStar stored, but which the code page shows as other characters.

One might note that the code pages generally have a uniform low page, and the high page has all sorts of weird stuff - eg cp 437 vs 850. There are also ANSI code pages, used by Windows in the 3.1/9x days, with cp 1252 as the most common.

The nine-bit byte is common enough to have left marks on Unix, where file permissions are given as three octal numerals. One notes here the 36-bit word as common enough.

I have played around with 12-bit bytes in my experiments. There's nothing fancy. It's just eight bits with an extra nibble.

One might note that 'kilo = kibi' etc. really is not used in computing, except to express size. You don't necessarily have 10 and 20 as more preferred bit structures than any other size. The standard block size in DOS days was 512 bytes, while modern Windows uses 4096-byte chunks on the hard drive. Zip drives were formatted with a cylinder of 1 MB, but this is 63 sectors of 32 blocks of 512 bytes.
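For illustration, that Zip cylinder works out to neither a decimal nor a binary megabyte:

```python
# A Zip-drive "1 MB" cylinder: 63 sectors of 32 blocks of 512 bytes
cylinder = 63 * 32 * 512
print(cylinder)                  # 1032192 bytes
print(cylinder / 10**6)          # ~1.032 decimal MB
print(cylinder / 2**20)          # ~0.984 MiB
```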

I have played around with an abacus based on Icarus's canonic representation. In essence, one can represent all 2-3 bases along different sloping lines, e.g. base 12 as the line through 1, 12, 144, &c.

If you draw the line in a different direction, not passing through 1, you get something that looks like base 3/2, but with leading and trailing zeros. So 5 is 11, and 10 and 15 become 011 and 110; 20 is 0011, 30 is 0110, and 45 is 1100. The usual square rules and carries work, so 11 * 11 is 121, but a carry is called for, and you end up with something like 121 = 4 + 12 + 9. Fiddling around with carry rules is possible. It's possible to express every number as a sum of 3-smooth numbers in no more than two powers (that is, several 2^x 3^y, where x + y is n or n+1).
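The digit strings above can be checked by reading each one against the place values 2^x 3^y with x + y fixed, written from the 3-heavy end; this is my own reconstruction of the scheme, sketched in Python:

```python
def places(n):
    # place values 2^x * 3^(n-x) for a string of length n+1,
    # written from the 3-heavy end (3^n) down to the 2-heavy end (2^n)
    return [2**x * 3**(n - x) for x in range(n + 1)]

def value(digits):
    # read a digit string over one sloping line of the 2-3 grid
    n = len(digits) - 1
    return sum(d * p for d, p in zip(digits, places(n)))

# The examples from the post:
print(value([1, 1]))             # 5  = 3 + 2
print(value([0, 1, 1]))          # 10 = 6 + 4
print(value([1, 1, 0]))          # 15 = 9 + 6
print(value([0, 0, 1, 1]))       # 20 = 12 + 8
print(value([0, 1, 1, 0]))       # 30 = 18 + 12
print(value([1, 1, 0, 0]))       # 45 = 27 + 18
```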

These sorts of devices are used to demonstrate rules for more advanced systems, and to test whether it is possible to reduce every class-2 system to a base, if any one of them is a base.

In reference to danthemanxf and bits v trits.

Three-state computing is possible, but it involves some interesting surprises. I drew up a whole series of different devices using a three-state CMP gate, in the same way you see binary computers built from NAND gates.

The compare gate, supposing the number is correctly spelt, is simply a tower of sign gates. For example, if you write a three-input gate that gives the first non-zero digit of three inputs, you can then use a tower of these to get the first non-zero digit of any number.

I did a four-input, two-output trit sum, where you add three inputs and a carry, giving a carry-forward and an output.

Three-state arithmetic has some interesting puzzles that need to be addressed; for example, ½ = 1 + (-½) and -½ = -1 + ½ can cause some looping.

I never got the clock or memory to work, but I had a lot of fun with the thing.

The thing is that 12 has little to do with bits or trits. Instead, you have to look for numbers that are close together. An interesting pair is 2048 v 2187, roughly the size of the CD-ROM sector (2187 is larger still). 243 v 256 is another interesting bit section.

I did not get around to writing a three-state MMIX in the style of Knuth, but some of us are not that clever.

QUOTE (wendy.krieger @ Sep 28 2017, 06:29 AM) |

Fortran was written to work off a five-bit byte, basically, the alphabet in one channel, and numerals and punctuation in the other. |

Fortran was originally developed for the IBM 704, which was a 36-bit machine that used a 6-bit character encoding. This is why early FORTRAN limited variable names to 6 letters: It allowed them to be represented as a single machine word.

QUOTE (wendy.krieger @ Sep 28 2017, 06:29 AM) |

The telephone system worked on a seven-bit system, using the eighth bit as a parity bit. |

I assume you're referring to the original use of ASCII as a teletype code.

The English language naturally lends itself to a 7-bit encoding, since its orthography includes 52 letters (2 cases × 26 letters), a space between words, and 10 digits. That's 63 characters. Punctuation and symbols are not a closed set, but if you want to encode more than one of them, a 6-bit (64-character) encoding isn't enough (unless you want to fold the alphabet into a single case, as the aforementioned IBM 704 did). Thus, a 7-bit encoding was used.

Computers might have developed differently if they had been invented in East Asia (with thousands of Han characters in their writing system) instead of in the West.

Wendy, note that I did not invent the MBNS; this was a group effort by the authors of the source (just to note proper credit).

The MBNS can be three dimensional, or n-dimensional, to give you further food for thought. The authors did cover base {2, 3, 5}. We could surmise a 4-dimensional consideration with {2, 3, 5, 7}. Then other bases like 12, 60, 120, 210 might be as simple as "shifting gears".

I am not sure how practical the entire thing is. I have a lot of experience with the "regular counting function" r(n) = OEIS A010846 and have written at least 8 algorithms to compute it; it is not an easy thing to compute as n increases. To do this "on the fly" would seem ridiculous. I have calculated r for numbers as large as primorial p_15# = 614889782588491410; it took a day. The algorithm to arrive at a canonical representation is about as tough currently.

This sort of idea is fun to think about, though.

Thank you, doublesharp and wendy.

I was hoping the superposition, as a variable(?) state, would be like a neutral state that throws its weight to one of the binary states when determined. It's a binary world and we are binary girls, I guess :P I understood that it resolves to a binary state, but before it is resolved I hoped it was... a third state: undefined, variable, possible, maybe even (-1, 0, 1), with -1 negative, 0 neutral, and 1 positive (just rolling with the ideas).

dan

@icarus: I know you did not invent multi-base notation, partly because you gave a reference to a book on the subject. Most of the mathematics I play with is somewhat practical: I just look at what the limits are. That I have played around with it suggests that multi-base is indeed practical.

I've played around with the occasional six and eight-axis multi-base, but only small bits at a time.

@Dan: I know something about Fortran. Fortran 77 was pretty new when I did computing. In one of the lessons, they said it was designed for a five-bit byte, but evidently the six-bit byte and six-byte word of the IBM suited it better. I have 16- and 32-bit versions of REXX, but this is also a mainframe language (from IBM too).

The teletype and telegraph were parts of the PMG's telephone department. Of course, we should recall that UNIX was written at Bell Laboratories, and even in the UK, it was the PMG's department that provided the expertise for computers.

Faxes have tended to wane in the west, but they are more common in Japan etc., where they are better suited to their runes.

The original telegraph was only six-bit, since they made no use of case. Specifically, they were uncial, but because you could not write the name of the Lord in lower case, they used the great runes throughout.

@danthemanxf: I still have a fondness for trinary numbers, after all of these years. It's an interesting contrast to computing in binary, particularly in the handling of sign. You cannot just use a sign trit. You have to do a comparison against zero. But then you can do a spell-compare of two numbers and find the sign of the difference, and this gives the relative order.

Spell-compare is where you go digit by digit: 0 = 0, 7 = 7, 8 > 2, 4 < 9, so the spell-compare is ==><, which can pairwise sign-gate to =>, then >, so 0784 is greater than 0729. In binary, you instead do a subtraction with carries, i.e. 0784 - 0729 = 0055; the first (sign) bit is 0, so the result is non-negative.

The spell-compare feature is in geometry, a distinctive class-2 system feature, and one uses a MBNS to prove it.
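A small sketch of the spell-compare in Python (my own illustration, not wendy's implementation): step 1 produces one sign per digit position, like the ==>< string in the example, and step 2 folds them pairwise, keeping the earliest nonzero sign.

```python
def spell_compare(a: str, b: str) -> int:
    """Digit-by-digit ('spell') comparison of two equal-length digit
    strings. Returns -1, 0, or 1 for a < b, a == b, a > b."""
    # step 1: one sign per position, e.g. "0784" vs "0729" -> [0, 0, 1, -1]
    signs = [(da > db) - (da < db) for da, db in zip(a, b)]
    # step 2: pairwise sign-gate; the earliest nonzero sign wins
    out = 0
    for s in signs:
        if out == 0:
            out = s
    return out
```

Note that no arithmetic on the digit values is needed, only per-position comparisons, which is the point of the contrast with binary subtraction.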

wendy, bear with me: i am not a mathematician or computer scientist... or formally educated beyond high school in the usa, so i am way out of my depth.

i actually looked up whether zero is considered a positive number or not, and saw that it can be considered 'unsigned' - neither positive nor negative, or both positive and negative, or specifically signed for 'open ended intervals'. that had the flavor of this superposition bit in quantum computing: a negative zero would "give weight", implied direction, to -1 and a positive zero would "give weight" to 1. this is really just talking out of my bum lol, but i do find some elegance in it despite it being hogwash. when the spin of a particle of an atom is what determines its state, this is movement from non-movement, a measure from zero in different directions. of course the model for visualizing what is going on may be flawed too; it may not be spin but something similar. there are maybe other ways of determining the quantum state of a particle as well.

i think i am too off topic - binary is a very elemental way of counting; it is natural and logical and powerful. binary itself i don't think has a convenient synergy with base 12, except how wendy mentions 12 bits being 3 nibbles of 4 bits each. with a ternary or trinary state system, 12 could be the bridge between languages, the common denominator. why i am focusing on bit states is because bit grouping is a bit arbitrary outside of powers of two. a ternary state computer i assume would have a natural method of grouping: 3, 9, 27... etc. if three states is not a thing and never will be then it's a moot point, and sorry for wasting your time, please forgive me for being slow lol.
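dan's grouping observation can be made concrete with a small sketch (my own, purely illustrative): taking base-b digits k at a time yields base-b^k digits, so trits grouped in threes give base-27 digits exactly as bits grouped in fours give hexadecimal nibbles.

```python
def to_base(n, b):
    """Digits of n in base b, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1]

def group(digits, k, b):
    """Collapse base-b digits k at a time (from the right) into
    base-b**k digits, like grouping bits into nibbles."""
    digits = [0] * ((-len(digits)) % k) + digits  # pad on the left
    return [sum(d * b ** (k - 1 - i) for i, d in enumerate(digits[j:j + k]))
            for j in range(0, len(digits), k)]

# grouping trits in threes = reading the number in base 27,
# just as grouping bits in fours = reading it in base 16
assert group(to_base(1000, 3), 3, 3) == to_base(1000, 27)
assert group(to_base(1000, 2), 4, 2) == to_base(1000, 16)
```

So a ternary machine would indeed get "free" groupings at 3, 9, 27, ... states per digit, the same way binary machines get nibbles, bytes, and words.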

QUOTE (Double sharp @ Sep 27 2017, 06:37 AM)

Qubits are still binary, since they can only be in superpositions of exactly two states, 0 or 1. There are similarly ternary qutrits.

QUOTE (wendy.krieger @ Sep 28 2017, 11:53 AM)

In reference to danthemanxf and bits v trits...

double sharp and wendy:

i apologize for not recognizing that qutrits and trits were mentioned. i did see wendy say trits, but i thought it was her term rather than THE term - many of the exact words and detailed maths are foreign to me, and there is a lot of information to take in while trying to keep up with you. i did not see mention of trits or qutrits in the wiki on quantum computing or programming, and i thank you both for delivering the idea to me, albeit seeping through to me slowly; it was educational and humbling.

dan

*edit*

icarus: you rock man, sorry i didn't realize that the book was relevant to the binary/ternary comments until i saw wendy reference it when she addressed me. i assumed you were replying to the OP and staying on topic.

everyone: i realize i am a bit out of my depth, or height depending on how you look at it. i'll try to be more careful with how i post in the future. trying to find the balance between asking questions and sharing thoughts, i failed extremely on this thread lol.

thank you for your patience, and kindness

dan, don't sweat it! We're all here because we are interested in these things. You are among friends.