Sunday, January 10, 2010

Real Difficulties

I've been Cantoring about set theory for a few posts now and I can't find any obvious inconsistencies. No holes, no places where 135 years of brilliant people pounding on it have left in-roads.

There are aspects of the way set theory handles various infinities that I wonder about, but within the context of set theory itself, it all appears consistent.

Still, this is unsatisfying because it never really answered my questions about infinities. I got curious about the structure of the reals precisely because I couldn't picture any way that a structure could head off to multiple infinities in a way that can't be enumerated. No matter how many infinities, it should be possible to place a finite limit on all of them, and then just parcel them out, one by one.

Well, in that sense it sounds simple. However, when it comes to actually dealing with the infinite paths that make up the real numbers, there do seem to be a few added problems. A few that are surprisingly deep.

Strangely, there is a well-known structure that holds the representations of all of the permutations for R, and mathematicians agree that it does in fact express all of the possibilities. In A Very Large Structure, I go through the basics of taking one of these base10 real representation trees, called P0, and, along with an infinite node parent tree, creating a structure T that (in theory) contains all of the real values.

The problem with that post was that given the unknowns in the underlying structure, it is assumed that the constructed sets P0, Pn and T are all uncountable. We can construct the sets, but we cannot count the elements in those sets.

Still, it is a good first step, in that at least R is contained in a set T, and we now have some way of visualizing the different elements in R.


INFINITE CONSTRAINTS

A point that will come up again and again in this post is how to constrain a set of multiple infinite values (on different "axes"). If we do have things stretching off to some large number of different infinities, we can limit each one, and then permute them, one at a time.

In this way, we can count things. An example is being able to count the cells of a bi-infinite spreadsheet: it stretches off in four different directions, along two different axes (vertical and horizontal). We can count the cells by circling around the center, slowly increasing the size of the circle. In a regular spreadsheet that only goes off to the right and to the bottom, we still have two axes, and we can use just the diagonals to count everything.
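
As a quick sketch (Python; the function name is mine), the diagonal counting of the regular spreadsheet looks something like this:

import itertools

def diagonal_cells():
    # Walk the anti-diagonals of a spreadsheet that only stretches to the
    # right and downward: diagonal d holds the cells (row, col) with
    # row + col = d, and each diagonal is finite, so nothing gets skipped.
    d = 0
    while True:
        for row in range(d + 1):
            yield (row, d - row)
        d += 1

print(list(itertools.islice(diagonal_cells(), 10)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]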

Another example is how I was enumerating the right-hand side of the example in the post on Magic Sequences; it's another diagonal argument. Yet another was in the original Infinite Node Tree post, where we essentially circle around the inside of the tree, gradually opening the different infinities, one at a time.

It works out that so long as we have some type of 'memory' or stack, we can constrain any number of infinities (even an infinite number of them; it just takes more memory), parceling out the elements, one by one, and gradually opening up the size of the bounds. There is likely a nice way to prove this, but all on its own it is mostly self-evident.
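
A sketch of that idea (Python; names are mine, and the 'memory' is just a set recording what has already been handed out): cap the number of axes and the size along each axis at n, parcel out anything new, then let n grow.

import itertools

def bounded_tuples():
    seen = set()   # the 'memory' of what has already been parceled out
    n = 1
    while True:
        # Constrain both the number of axes and the size along each axis to n...
        for length in range(1, n + 1):
            for t in itertools.product(range(n), repeat=length):
                if t not in seen:
                    seen.add(t)
                    yield t
        # ...then open the bound a little wider and go around again.
        n += 1

print(list(itertools.islice(bounded_tuples(), 8)))
# [(0,), (1,), (0, 0), (0, 1), (1, 0), (1, 1), (2,), (0, 2)]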

And it's exactly this point that keeps driving me to find out why everybody believes that the real values seem to have a way to counter this. The essence of 'uncountable' seems to come not from the structure of the reals, or the number of embedded infinities, but from how they are related to each other in a combinatorial sense.

I kept assuming that the structural attributes would take precedence. After all, if we can pack all of R into some fixed structural representation, then there ought to be a large number of different alternative structures. One of them must be countable.


FUNNY NUMBERS

Another issue that we can deal with early on in this post is that 1.0 = 0.9999... Personally, I see this as an abuse of the way infinities work. In the proof listed in Wikipedia:

http://en.wikipedia.org/wiki/0.999...

I don't think the subtraction at line 3 in the Digit Manipulation section is particularly valid. The missing difference is caused by a subtle shift in the way the finite representations are being handled. The two infinities are slightly offset from each other. But that's not a popular opinion in mathematical circles.
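
For reference, the digit manipulation being questioned runs roughly like this (my paraphrase of the article's steps; the third line is the subtraction in dispute):

x = 0.999...
10x = 9.999...
10x - x = 9.999... - 0.999...
9x = 9
x = 1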

Another way of looking at this is that the following very small number:

0.000...001 

is considered to be just too small to be a real number. Which is kinda mean, don't you think? I mean, so it is tiny, sure, but does that somehow invalidate its existence?

If we have an example, such as a base10 tree, we see that at any node, at any time, anything can just peel off. Even if it's extremely small. We can go down a near infinite number of zeros, and then just start winding around.

It's similar to how we often get junk digits hidden below floating point numbers in a computer after a long series of arithmetic operations. Mostly it is worth ignoring, but in this case while the paths lead to very small numbers, they are still valid paths in the tree, and by (some) definition every path in the tree is a unique number.

Still, if we have a set with:

S = { 1.0, 0.99999... }

we can allow it to have either two elements or just a single element. Set theory neatly hides this for us.

It makes no real practical difference. Everybody is allowed to have their own viewpoint, and it won't affect the overall results. There is enough space in this area of mathematics to support minuscule-number-haters. :-)

Although I didn't investigate deeply, one would expect that if there is one case of this with little numbers, there are probably an infinitely large number of cases. It should likely be possible to build up a large number of little differences, and prove something horrible like 1=2, or some other fun result like that.


TREE TIPPING

Getting back to the reals, if we have all of the reals in a base10 structure, then we should be able to manipulate it. This is a popular place for cranks to start from, because the base10 version seems to be easily complete. The problem with the base10 structures is that they leave the real numbers as endless paths. That makes them more or less impossible to count directly, so we ought to try some funny business.

P0 is our base10 representation tree, over the range [0,1).

We can fiddle with this tree, but I'll switch it to base2 (binary) first because it's a little easier to deal with:

So we have a tree called B2, which is:

- a binary tree
- root is labeled 0
- every node has a left child labeled 0 and a right child labeled 1
- covers the range [0,1)

If we run down an infinite winding path and finesse it into a base2 number, we get:

0.11 0010010000 11111 10110 …

This is actually Pi shifted right twice, that is, Pi/4, so we know it is an infinite string of non-repeating digits.
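
A quick check of that (Python; ordinary floating point only gives us the first 50 or so bits, which is enough here):

import math

x = math.pi / 4
bits = []
for _ in range(22):
    x *= 2
    bit = int(x)      # the next binary digit is the integer part
    bits.append(str(bit))
    x -= bit
print("0." + "".join(bits))
# 0.1100100100001111110110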

Now, it is well known that the nodes of a tree are enumerable, but the actual paths themselves are not. Being sort of clever, I first figured that I could just "tip" the tree onto its side, so that the nodes no longer held the representation of the path. Instead, each node could hold a complete infinitely long real value.

If I have such a tree of reals, each completely contained in its own unique node, then I can enumerate all of the nodes, and thus all of the reals...

Tipping is easy, so long as we have an infinite node tree to tip it into. We just take the far left path:

0.000000000...

As the root node, and then we add in all of its children. In this case, the children are the infinite list of values that wind off to the right at some point along the above path, i.e.:

0.100000000...
0.010000000...
0.001000000...
...

Now, for each one of these children, starting with 0.100000... we can add in all of their right children:

0.110000000...
0.101000000...
0.100100000...
...

Etc.

In that way, if we follow the first child down, then for each level we descend we will get longer and longer strings of 1s.

So now as we approach infinity, we get an infinitely large tree, where each node holds an infinitely long real value.
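
A rough sketch of the tipping (Python; names are mine, and the tree is truncated to a finite number of binary places so that it can actually run; in the full construction every node has infinitely many children):

from collections import deque

def tipped_tree(width=4, limit=12):
    # Each node is a prefix of a binary expansion; the trailing '...' printed
    # below stands for an endless run of zeros.
    root = "0" * width
    queue = deque([(root, 0)])      # (digits, first index a child may set to 1)
    emitted = 0
    while queue and emitted < limit:
        digits, start = queue.popleft()
        print("0." + digits + "...")
        emitted += 1
        # Children: flip one more 0 to a 1 somewhere to the right.
        for i in range(start, width):
            if digits[i] == "0":
                child = digits[:i] + "1" + digits[i + 1:]
                queue.append((child, i + 1))

tipped_tree()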

This almost seems like it would work, but there are two very serious problems:

0.1111111111...  is never in the tree.

0.11 00100100001111110110… isn't either.

This happens because no matter how deep we go, our path is made up of only a finite number of transformations. That is, each value is an endless string of zeros, with some finite "head" string placed in front by the permutations.

By definition we essentially get a permanent tail of all zeros. And it is that tail of zeros that keeps either of the two numbers above from ever appearing in the tree.

So the numbers:

0.111111111100000000...
0.11 0010010000 1111110110 00000 00000 00000 …

Are in the tree, but the actual infinite right branch and all of the irrationals are not.

Just to recap, if I tip a tree like my P0, from my other article, the irrationals, which were basically the last elements in P0, never make it into the tipped version of the tree. We can count the new tree, but only because the difficult nodes are missing.

In a sense, this implies that the original base10 representation trees never even really held numbers like Pi/10. Well, they did, because they were defined that way, and we can transfer these numbers to a set P0, but they remain strangely disconnected because of their infinite, non-repeating tails.


A TALE OF TAILS

Now it turns out that these "tails" are far more difficult than they first appear. They keep coming back to haunt me in different ways.

In fact, the real complication in R being uncountable isn't rooted in the way it goes off to three different infinities all at once; what makes it difficult is really just finding a deterministic way of permuting between the different tails.

That is, the problem isn't that we can't find a structure large enough to handle the reals; it is that we can't bind them all together in a way that makes it easy to get from one to another. We can't find a bridge between them. They are disconnected.

Some interesting things about tails:

- they don't have to go through all of the possible permutations. They can miss some.
- they can repeat a nearly endless number of times, only to peel away at the last second, and still be considered non-repeating.
- they can have any number of little junk bits at the end (if you're not a minuscule-number-hater :-)

Still, tails and the permutations on them have some issues:

- finite changes to a tail leave it irrational
- infinite but repeating changes to a tail also leave it irrational
- irrational changes can leave the tail as either a rational or another irrational
- the difference between two distinct tails is an infinite permutation of an infinite number of changes.

In that last point, we have to make an infinitely large number of small changes to align two distinct tails together, all of the way to infinity. Pi/10 - sqrt(2)/10 looks like:

0.314159265...

-.2
+.03
-.003
+.0003
-.00003
-.000008
+.0000001
-.00000001
+.000000001
...

0.141421356...

Where the differences are non-repeating.

To get from one to the other requires an infinite number of changes, stretching all of the way back to the deepest darkest corners of infinity. It would be nice if there were only a finite set of changes between them, but it doesn't work that way. The tails are distinct, and remain so forever.

What is interesting to note from the third point above is that Pi/10 - sqrt(2)/10 is itself another irrational:

0.172737909...

That is, if we have two independent irrationals, the difference is a third one.


RANDOM CHANCE AND WALKS

A possible way to enumerate the different real values is to bounce around, one at a time, using random numbers. If we just pick randomly from the original base10 representation tree, then maybe our problems go away?

We can get each infinite path from the tree one at a time, by using the following random generation:

0.[0-9][0-9][0-9][0-9][0-9]...

where [0-9] is some random digit between 0 and 9.

For each iteration, we just pick another random number. Since we are picking them one at a time, if, as we head off to infinity, we have a good chance of picking all possible numbers, then we can count them as we pick them.
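
A sketch of the picking itself (Python; the function name is mine, and of course only a finite prefix of each path ever gets generated):

import random

def random_real_prefix(n_digits=20):
    # One 'path': each decimal digit chosen uniformly at random.
    return "0." + "".join(random.choice("0123456789") for _ in range(n_digits))

for _ in range(3):
    print(random_real_prefix())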

The big issue is determining whether or not all of the permutations will get selected at some point. When dealing with probabilities like flipping coins, we can always turn to the Law of Large Numbers:

http://en.wikipedia.org/wiki/Law_of_large_numbers

which says that given enough samples, the observed averages will gradually approach the expected value.

That is, if we flip a coin enough, then 50% of the time it will be heads, and 50% of the time it will be tails. In the short run, the percentages may be skewed one way or the other, but eventually they will work out close to the expected value.
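
A tiny simulation of that convergence (Python; nothing here is specific to the reals, it just shows the law at work):

import random

heads = 0
for n in range(1, 100001):
    heads += random.random() < 0.5
    if n in (10, 100, 1000, 10000, 100000):
        print(n, heads / n)   # the ratio drifts toward the expected 0.5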

Using the law of large numbers we realize that we can get:

- all single random values to be between 0 and 9
- all nodes in the path to be between 0 and 9
- all pairs, triplets, etc. to eventually exist for all finite combinations

That seems fine, but again what works for finite problems quickly runs into trouble with infinite ones.

We don't know whether or not we'll get all infinite permutations, and we have no way of working that out. We may miss a few.

That's fatal, because the difference between Pi/10 and sqrt(2)/10 is an infinite set of permutations. It's exactly that infinite permutation that we are trying to consistently generate. That's what ties all of the seeds together.

I took a quick look at Markov chains, hoping that might help. If we consider each random path to be a state of some type, and each state to only be dependent on the last one, we can see a random walk through the whole space as a Markov chain.

However, chains with infinite state space, such as this, don't seem to be as well understood, or at least I had trouble finding any reference materials.

Still, you would expect to run into problems trying to calculate probabilities for infinite events. Most of that branch of mathematics seems geared towards finite things, winning game shows, or proving that not everyone can go bankrupt all at once :-) (OK, I admit that's not fair, it was a mis-use of actuarial science, or so I was told ...)


SEEDS

What we really have is a large set of independent tails, each belonging to what we can see as discrete seeds with some combinatorial relationship between them.

That is, we can start to classify all of the reals as either:

- finite
- repeating
- irrational seed
- the permutation of an irrational seed.

We don't care about the rationals (finite and repeating) because we already know they are easily countable. We know how to step one by one through all of their possible permutations and combinations (we even know how to over-count them :-) They are just included in the base10 structure along with the more difficult irrationals.

To clarify, we can define these irrational seeds to be combinatorically independent (algebraically independent?) non-repeating irrational elements, for which there is an infinite number of finite permutations separating any two elements. The term "distinct" is probably an easier way of saying it.


BASIC PERMUTATIONS

In Crank it Up, I built up an infinite spreadsheet with a bunch of simple permutations. The post started with using Pi as the seed, and then applied two different families of permutations on the entries to generate a whole lot of irrationals. Many people pointed out that this was only a slice of the possible reals, but it was interesting anyway because it was a fairly large slice.

Since all of the permutations were either finite, or infinitely repeating, the result of any such operation on an irrational is another irrational. That is, so long as there is a definitive pattern to the change, the change itself will not be able to unwind the non-repeating nature of the original number. Non-pattern trumps pattern. Initially that was a bit of a problem, but it can also be beneficial.

We can work out the basic permutations on irrationals as being:

- finite or infinite
- addition/subtraction or multiplication/division
- reconstructive

Using those possibilities, and guessing at a few others, we can define a larger set of different permutations:

- finite changes, example +0.1
- infinite repeating changes, example +0.101010101....
- right/left shift or divide/multiply by 10
- divide/multiply by repeating 0.1010101....
- block swap (re-arrange blocks)

The first three are from the earlier post. The infinite divide is just another possible permutation.

The last change comes from realizing that if you take an irrational, swap some of its blocks of digits, and subtract the result from the original, you get a finite number, making the swap itself just a basic permutation.

In fact, this category includes any permutation where the start is an irrational, the finish is an irrational and the difference is a finite set of finite or simply repeating rationals ...
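
A rough sketch of a few of these (Python; a finite Decimal prefix stands in for the full infinite expansion of Pi/10, so every tail here is truncated):

from decimal import Decimal, getcontext

getcontext().prec = 35
seed = Decimal("0.314159265358979323846264338327950")   # finite stand-in for Pi/10

finite_change = seed + Decimal("0.1")                    # finite change
repeating     = seed + Decimal("0." + "10" * 15)         # truncated stand-in for +0.101010...
shifted       = seed / 10                                 # right shift (divide by 10)

digits = "314159265358979323846264338327950"
block_swap = Decimal("0." + digits[3:6] + digits[:3] + digits[6:])  # swap the first two 3-digit blocks

for name, value in [("finite", finite_change), ("repeating", repeating),
                    ("shifted", shifted), ("block swap", block_swap)]:
    print(name, value)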

If we take a seed, and apply any finite number of these different types of permutations, the results are a countable set, because we apply them one at a time.

If we apply an infinite number of such permutations, the results are also countable, and countably infinite. An example of this being a countably infinite set is in the bi-infinite spreadsheet. We applied the first three permutations listed above, but by circling around, we can avoid the infinities and gradually count all of the elements.

Strangely, we can apply an infinite number of different combinations of permutations, and assuming that we do so, one at a time, and limit how we span across infinite sets, then the results are still a countably infinite set.

So, there appear to be a finite number of different types of permutations that can be applied in an infinite number of different ways. And in this way we can generate really large sets of related real numbers from a seed that are all countably infinite by construction.


SEED PERMUTATIONS

Now it's interesting to note that if we subtract two seeds from each other, we either get zero, a finite set of permutations, or we get a new seed. This is true for all of the different operators (+ - / *) on the reals, not just subtraction.

If we get a new seed, we can define this to be a 'seed permutation', not just a basic one.

So if two seeds are related, their difference is either 0, a finite number or a repeating rational. This means that they are combinatorically related in some way, the results of some type of basic permutation, not a seed one.

If not, we get a new, unique irrational number.

For the operators in R:

- the difference between two seeds is either finite or a seed: A - B = C
- the sum is either finite or a seed: A + B = D
- the product is either finite or a seed: A * B = E
- the quotient is either finite or a seed: A / B = F

That is, for Pi/10 and sqrt(2)/10, we get:

0.314159265... - 0.141421356... = 0.172737909...
0.314159265... + 0.141421356... = 0.455580621...
0.314159265... * 0.141421356... = 0.044428829...
0.314159265... / 0.141421356... = 2.221441469...
0.141421356... / 0.314159265... = 0.450158158...

Which is a new set of five different irrationals. When the result is a new irrational then we can consider the permutation type to be a seed permutation. That is, a permutation that allows us to jump from two non-repeating seeds to a new seed.
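
A quick double-check of those values with ordinary floating point (Python; only good to about 15 digits, but enough to confirm the leading digits above):

import math

a = math.pi / 10          # 0.314159265...
b = math.sqrt(2) / 10     # 0.141421356...

print(a - b)   # 0.172737909...
print(a + b)   # 0.455580621...
print(a * b)   # 0.044428829...
print(a / b)   # 2.221441469...
print(b / a)   # 0.450158158...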

Now, we know for some types of permutations that seeds themselves are the sum of the permutations between other seeds. That sense of infinity likely means that although we can identify a finite set of seed permutations, it may be impossible to identify all of them, since there may actually be an infinite number of different types. But I am not sure about that.

The really big problem is that the seeds are very independent from each other. We can't just apply simple numbers to them to get to the next set of seeds. We must look for permutations from a higher level, that is, the types that are needed on the seeds themselves to get from one to another.

Some known seed permutations:

- limited symbols (numbers using only 2, 3, ..., 9 symbols)
- cut out finite blocks/add in finite blocks
- cut out infinite repeating blocks/add in repeating blocks
- irrational changes (infinite, non-repeating).
- irrational operations (+ - * / on two independent irrationals). 

Given a few seeds, can we find all of the possible seeds that are between the seeds? It seems likely, since there are ways to permute between some seeds, but it certainly isn't an easy question.

How do we know we've hit all seeds? That is the hardest problem. There are likely some major permutations, seed and basic, that I am missing. Still, my current set is fairly large and still useful. Is it complete? I think that would take further research.

If we apply seed permutations to a countably infinite set of seeds, then the results are still countably infinite. Because we get a new seed, and we can swap back and forth between them. That means we can take a countably infinite set of seeds and generate a much larger set of seeds that is still countably infinite.


COMBINATORICALLY COMPLETE SETS

Now, we can look at this all from a much higher perspective.

If we apply all the basic permutations, we can construct countably infinite sets that hold large numbers of real values. The bi-infinite spreadsheet is an example of this.

We want to define a 'combinatorically complete' set to mean a set of real numbers, based around a seed (or seeds), and permuted with all of the known general permutations possible, but still being countably infinite by definition.

Start with a real element E0

We can create:

set C0 as a combinatorically complete set based on E0

Now, since we have a complete set, we apply Cantor's diagonal construction to that set, and it will produce a new value that is not in our original set. That is, Cantor's will act as a bridge and give us an element from some other combinatorically complete set.
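
A sketch of the bridge itself (Python; the enumerated set is stood in for by a short list of digit strings, all the same length for display):

def cantor_bridge(rows):
    # Build a value that differs from row n at digit position n, so it
    # cannot equal any row in the enumerated set.
    out = []
    for n, row in enumerate(rows):
        out.append("5" if row[n] != "5" else "6")
    return "0." + "".join(out)

# Digit strings standing in for an enumeration of C0 (hypothetical values).
C0 = ["3141592653", "1414213562", "1732050807", "2718281828"]
print(cantor_bridge(C0))   # 0.5555 differs from every listed row

The interesting case is of course an infinite enumeration; there the loop never finishes, but every prefix of the new value is already pinned down.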

E1 is the new seed from C0 created by Cantor's diagonal construction.

set C1 is from the permutations on E1

We can now combine these two base sets:

CC0 = C0 ∪ C1

Since CC0 is a complete set, we can apply Cantor's again which will produce a new element from outside the set:

E2 is another seed from CC0 created by Cantor's

We can keep this up for the sets:

En, Cn and CCn

Where CCn is:

CCn = CCn-1 ∪ Cn

We can see that this is growing, very fast.

For fun, we can add in applying all of the seed permutations SPn as well, and thus any new Cn sets on any newly created seeds. Thus we can create something like:

CCSn = CCSn-1 ∪ CCn ∪ SPn

We can take the union of all of these possibilities, both basic and seed permutations, and then apply Cantor's to generate a new seed outside of the initial set. We can keep this up forever.

Thus our set CCSn is getting quite large. Of course, we can continue this indefinitely, allowing n to approach infinity.


WHAT'S MISSING

While our union of combinatorically complete sets is growing massively and quickly, it is not sufficient to just assume that it will encompass all of the real values. Instead, we need to prove that there is no E'0 that is left out.

Start with an E'0 that is defined to be missing. We know that we could get to this value through any basic permutation, so we have to include all of those related values as well:

U0 is the combinatorically complete set from E'0

Now, for each E'n in U0, it is possible that some set of values UU will generate E'n when using Cantor's diagonalization construction. Since we are interested in making sure those values are uncountable, we have to consider UU to be a possible pathway to E'0, so it has to stay outside of CCSn.

UU0 is the set that will generate U0,

But each element in UU0 itself could possibly be generated through a bridge from Cantor's, so we have to really consider

UUn

Now, there is always at least one element E'n not in any UUn, so we have to include all of those values in our uncountable set as well.

We have to be careful that none of the seed permutations create a path to E'0 either. So we need to consider:

UUSn = UUSn-1 ∪ UUn ∪ USPn

Now, interestingly enough, because we are working backwards from Cantor's, there is no way of being sure that different subsets of UUSn don't just generate each other's values. Thus it is possible for UUSn to be infinite, but not growing sufficiently large, the way CCSn does.

The nature of the two sets is quite different.


TOTAL REMAINING SPACE

While we can create a UUSn, we have a problem. UUSn itself is actually countably infinite because of its construction. We can easily count it.

Still, there must be uncountable sets, so that means there is a whole series of UUSnm sets which are distinct from each other. That is, there is no bridge between them.

That means that there has to be a whole series of un-related UUSnm sets:

R = CCSn ∪ UUSn0 ∪ UUSn1 ∪ ... ∪ UUSnm

Still, even that poses a problem, since even an infinite number of UUSnm sets is still countable, and now all of our uncountable values are actually countable in some way. So it must actually be something like

R = CCSn ∪ UUSn0 ∪ UUSn1 ∪ ... ∪ UUSnm ∪ ...

Where that final ... is somehow a way of keeping the first part of R -- which, although it is disconnected, is still countable -- uncountable in some way.

So now let's take some element Ek. This element might be in CCSn; if it is, then it is countable.

If not, then it might be in UUSnm, and so it is found and counted.

But if it is in neither, i.e. it is in the "..." part, then by definition it belongs together with all of its permutations, and all of the seed permutations, in a new set DDSn.

But if DDSn is not equal to or a subset of CCSn, and it is not equal to or a subset of any UUSnm, then it must be another one of the UUSnm sets, because it is disconnected from all of the other UUSnm sets.

Because of that, every Ek must belong to either CCSn or one of the UUSnm sets, which means that R must be laid out as follows:

R = CCSn ∪ UUSn0 ∪ UUSn1 ∪ ... ∪ UUSnm

This means that there must just be the two different types of sets and nothing else. Since each of these is countably infinite by definition, and there is a countable number of them, the union of all of them is also countably infinite.

Now, the construction and countability remain the same, even if we remove the seed permutations. It's also the same if we remove all of the basic permutations and just build up the elements one at a time; however, it is far easier to understand when it is applied to large 'pools' of dependent elements.


ANOTHER PERSPECTIVE

There is another way to look at this. We can start with a T, which we assume to be uncountable. Within T were the Pn sets, each of which is also assumed to be uncountable. That stands to reason, since at least one subset of any uncountable set must be uncountable itself.

But if we follow that subset deconstruction downward, we quickly start to get rid of a lot of elements. That is, for a set U that is uncountable, we need a series of subsets:

U0, U1, U2, ..., Un ...

of uncountable subsets, all of the way down to an infinite, but very small subset. However there can be no smallest uncountable set, and certainly there can be no finite set at the bottom of everything.

But, at the very bottom, there must be some set of elements. And they must be brought together in some way to form the larger sets:

..., Un, ..., Cn, ...

That is, there can be no smallest quantum in the system, such that it is countable. But everything has to be built up from something.

If we map this back onto our base10 representation tree, we have a structure that is shooting off to infinity both in its large number of paths and along each path. If we prune some nodes from this tree, the above essentially says that whatever is left is still essentially isomorphic to the original tree.

That is, no matter how many branches you prune, the tree just grows right back. However, one can guess that we should be able to prune faster than the tree can grow, and as such at some point we'll get down to something that is countable, but if we do, then the entire tree was countable.

Not only that, but if we keep pruning the tree, then there needs to be some sort of 'asymmetry' in the tree that causes its uncountable property.

But the base10 tree is really simple: it grows uniformly and, except for being massive, is not really all that complex. At least, not complex enough to hide some asymmetry that accounts for it being impossible to enumerate.


THE HALLWAY ANALOGY

There is another way to look at base10 structures. It is as if you were standing in a very long hallway. There is a door down at one end, but the other end stretches off to infinity. In the hallway is a clump of wires. As each wire runs down the hallway, it becomes many more wires. That is, the clump of wires in front of you, if you are facing the door, is way smaller than the clump next to you, which is smaller than the clump behind you.

If you reach out and grab the clump of wires, there are a finite number of them, and they can easily be counted. Now, if you take a step backwards, and grab the next clump, there are more wires but still finite in number. As we work our way backwards down the hall, we can count more and more of the wires.

In a sense, a base10 representation tree is a rather fixed structure. Its difficulty comes from the fact that it is what I've been calling 2D. That is, there are two separate types of infinities built into the tree: the number of branches, and the length of the branches themselves. We don't need to enumerate both infinities, we only need to know how many infinite branches there are (or could be).

The problem comes from the idea that branches such as the irrationals are not actually in the tree until later, but again we're only concerned with counting all of the reals, not just the irrationals. The wires running down the hallway are the parents of both, even if they are not fully 'infinite' until much later. We don't need to know how long the wires are, we just need to know how many of them are splitting off to infinity.


SEED PERMUTATIONS

It seems likely that seed permutations should also be able to generate everything. That is, there is only one reason why some complete set of seed permutations shouldn't be able to bridge all of the different seed sets together. An infinite number of different infinite permutations would be a pretty solid reason, but if that isn't the case then we can do all of the generation.

Although the infinitely long tails make the conceptualization more complex, underneath, the relationships between the seeds are not that sophisticated. At least not sophisticated enough to be able to prevent counting, or the generation of other seeds.

We do know, for instance, that we can fit all of the permutations together into a base10 representation tree. Application of a structure, any structure, implies some type of inter-relationship between the elements.

Possibly we can start with all of the permutations of symbols, and then add/delete all possible infinite pattern representations into the mix to cover the whole spectrum.

Does a simple cut/diff loop generate a lot of seeds? That is, if we start with Pi, and then randomly remove one digit, and subtract the two numbers, we'll get another seed. If we keep this up, how much coverage will we get from such a simple algorithm?
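
A sketch of that loop (Python; a finite Decimal prefix of Pi stands in for the full expansion, so the 'seeds' it produces are only approximations):

import random
from decimal import Decimal, getcontext

getcontext().prec = 45
PI_DIGITS = "31415926535897932384626433832795028841"   # finite prefix of Pi's digits

def cut_one_digit(digits):
    # Randomly remove a single digit from the expansion.
    i = random.randrange(len(digits))
    return digits[:i] + digits[i + 1:]

x = Decimal("0." + PI_DIGITS)
for _ in range(3):
    y = Decimal("0." + cut_one_digit(PI_DIGITS))
    print(x - y)    # each difference (possibly negative) is a candidate new seed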

Does permuting between all of the different possible combinations of symbols generate a lot of seeds? Can we just start with all of the two-symbol patterns, and move through each different set of permutations?

We know that any seed may or may not go through a specific set of patterns, and it may end up going through the same patterns over and over again, but in a way that doesn't repeat in the long run. In this sense, we could probably generate a giant dictionary of all possible patterns, and then permute them against each other to generate all possible seeds. Of course, as always the weakness is how to handle the infinitely long nature of the patterns.

It's easy enough to use a few methods to permute an infinite number of seeds, and then use the differences to get an even larger infinite number. Continued on infinitely, this may also likely produce full coverage.

If we can see the base10 representation, then our ability to easily jump from infinite branch to infinite branch should not be that complex.


SUMMARY

Tails and infinite permutations in R are a difficult problem. Not because of the complexity of the structures, not even because most of it stretches out to infinity.

It's hard because there is some combinatorial independence between the different sets. Still, all we need to do is tie it all together with a bridge or two.

Cantor's diagonal construction is defined to find things not in a set; therefore it is the perfect bridge builder between sets. Since it can't fail, and it can be used one at a time, we can bounce from countable set to countable set, forever.

Seed permutations should work as well. There is nothing intrinsically complicated about applying permutations to seeds; they're just not as obvious as the basic permutations.

Things should be countable, especially if they can be contained in a structure of some type. Uncountable implies that there is some type of structure that exists, but isn't representable in some strange way; that we can't navigate around it.

It is the structure that has no structure, which is, well, rather odd, is it not?