
Data correction methods resistant to pessimistic cases

Jarek Duda
Data correction methods resistant to pessimistic cases

Standard data correction methods take blocks of the data and enlarge them to protect against some specific set of errors. We lose the whole block if the errors fall outside this set.
Look at two well known data correction methods: Hamming 4+3 (to store 4 bits we use 3 additional bits) and tripling each bit (1+2).
Assume the simplest error distribution model - for each bit the probability that it is flipped is constant, say e=1/100. Then
- the probability that a 7 bit block contains at least 2 errors is 1 - (1-e)^7 - 7e(1-e)^6 ≈ 0.2%,
- for a 3 bit block it is about 0.03%.
So for each kilobyte of data we irreversibly lose about 4*4=16 bits with Hamming and about 2.4 bits with bit tripling.
We see that even for methods which look well protected, we lose a lot of data because of pessimistic cases.
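For a quick numeric check of these figures, here is a minimal Python sketch (assuming a kilobyte of 1024*8 = 8192 data bits):

e = 0.01
# Hamming(4+3): a 7-bit block fails with 2 or more flipped bits
p_fail_hamming = 1 - (1 - e)**7 - 7 * e * (1 - e)**6     # ~0.002 (0.2%)
# bit tripling: a 3-bit block fails with 2 or more flipped bits
p_fail_triple = 1 - (1 - e)**3 - 3 * e * (1 - e)**2      # ~0.0003 (0.03%)

data_bits = 1024 * 8
print(data_bits / 4 * p_fail_hamming * 4)   # ~16.6 lost bits per kilobyte (about 4 blocks of 4 bits)
print(data_bits * p_fail_triple)            # ~2.4 lost bits per kilobyte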

To be able to correct errors we usually add a constant density of redundancy, but the density of errors usually isn't constant - it fluctuates - sometimes it is above average, sometimes below.
When it is above, we need a lot of redundancy to cope with these pessimistic cases.
When it is below, we have used more redundancy there than was really needed - we waste some capacity.

The idea is to somehow transfer these unused surpluses of redundancy to help with the difficult cases.
Thanks to that, instead of adding redundancy according to the pessimistic error density, we could add it for a density only a bit above average, which is usually a few orders of magnitude smaller.
To do this we cannot place the data in independent blocks (like 7 bits in Hamming) - their redundancy is also independent.
We should use a coding which allows hiding redundancy in such a way that each piece of it affects a large part of the message.

I'll now show an example of how it can be done.
Unfortunately it has variable latency - simple errors can be corrected quickly, but more complicated ones may take a long time... maybe there are better ways?

The idea is to use a very precise coding - such that any error makes the following decoded sequence look like a completely random bit sequence.
For example a block cipher which uses the previous block to calculate the following one... but there is a much better coding for this purpose, which I will describe later.

Now add to the message some easily recognizable redundancy - for example insert a '1' between the message bits (each such bit is hidden in all succeeding bits of the encoded message).
If while decoding it turns out that there is '0' in one of these places, it means that there was an error before, and most probably it is nearby.
Knowing the statistical characteristics of the expected errors, we can make a list of the most probable error scenarios in such cases, sorted by probability - at the top of this list there should be 'the previous bit was flipped', ... and after a while there can appear 'two bits were flipped: ...'. This list can be very large.
Now if we know that an error appeared nearby, we take this list position by position, apply the correction it describes (flip some bits) and try to decode a fixed number of further bits (a few dozen).
If everything is fine - we get only '1' on the chosen positions - we can assume that it was this error.
If not - we try the next one from the list.
This list can be generated online - given a large amount of time we could repair even a badly damaged transmission.
While creating the list, we have to remember that errors can also appear in the succeeding bits.
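As a rough illustration of this search, here is a minimal Python sketch; decode_and_check is a hypothetical placeholder for "decode a few dozen further bits and verify that all inserted marker bits come out as '1'", not an actual decoder:

from itertools import combinations

def candidate_errors(window, max_flips=2):
    # yield candidate corrections (offsets of bits to flip back), roughly ordered
    # from most to least probable for low, independent noise: single flips near
    # the detection point first, then pairs, and so on
    positions = range(window)                  # offset 0 = the bit where the marker failed
    for k in range(1, max_flips + 1):
        for flips in combinations(positions, k):
            yield flips

def try_repair(received, error_pos, decode_and_check, window=50):
    # when a marker bit fails near error_pos, try the candidates one by one
    for flips in candidate_errors(window):
        trial = list(received)
        for offset in flips:
            pos = error_pos - offset
            if 0 <= pos < len(trial):
                trial[pos] ^= 1                # undo the suspected flip
        if decode_and_check(trial):            # further decoding gives only '1' markers
            return trial
    return None                                # extend the candidate list or give up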

Using block ciphers is a bit nasty - they are slow and we have large blocks to search for errors...
There is a new coding just ideal for the above purpose - Asymmetric Numeral Systems (ANS) - a new entropy coder (http://www.c10n.info/archives/720), which has very nice properties for cryptography... and data correction - it is much faster than block ciphers and uses small blocks of various lengths.
Here for example is a demonstration of it:
http://demonstrations.wolfram.com/DataCompressionUsingAsymmetricNumeralS...

We can use this ANS property to make the above process quicker and distribute the redundancy really uniformly:
to create an easily recognizable pattern, instead of inserting the '1' symbol regularly, we can add a new symbol - a forbidden one. If it occurs, we know that something went wrong, the nearer the more probable.

Let's say we use symbols with some probability distribution (p_i), so on average we need H = -sum_i p_i lg p_i bits/symbol.
For example, if we just want to encode bytes without compression, we can treat it as 256 symbols with p_i = 1/256 (H = 8 bits/symbol).

Our new symbol will have some chosen probability q. The nearer it is to 1, the larger the redundancy density we add and the easier it is to correct errors.
We have to rescale the rest of the probabilities: p_i -> (1-q) p_i.
In this way the size of the file will increase r = (H - lg(1-q)) / H times.
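A quick Python check of this formula (the two calls preview the 4+3-like and 1+2-like settings discussed below):

from math import log2

def redundancy_ratio(q, H):
    # size growth r = (H - lg(1-q)) / H after adding a forbidden symbol of probability q
    return (H - log2(1 - q)) / H

print(redundancy_ratio(7/8, 4))   # 1.75 - the Hamming(4+3)-like setting
print(redundancy_ratio(3/4, 1))   # 3.0  - the bit-tripling-like setting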

Now if while decoding we get the forbidden symbol, we know that
- with probability q the first not-yet-corrected error occurred in the bits used to decode the last symbol,
- with probability (1-q)q it occurred in the bits used while decoding the previous symbol,
- with probability (1-q)^2 q ...

The probability of the succeeding cases drops exponentially, especially if (1-q) is near 0.
But the number of required tries also grows exponentially.
Observe, however, that for example the number of possible placements of 5 errors in 50 bits is only about 2 million - it can be checked in a moment.
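A small Python sketch of these numbers (q = 7/8 is just an example value):

from math import comb

q = 7/8                                   # example forbidden-symbol probability
for k in range(4):                        # error located k symbols before the forbidden symbol
    print(k, (1 - q)**k * q)              # 0.875, 0.109, 0.0137, 0.0017, ...

print(comb(50, 5))                        # 2118760 placements of 5 errors in 50 bits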

Using redundancy like in Hamming (4+3) or bit tripling (1+2):
4+3 case (r=7/4) - we add the forbidden symbol with probability q = 1 - 1/2^3 = 7/8, and each of the 2^4 = 16 symbols gets probability 1/16 * 1/8 = 1/128.
In practice ANS works best if the lg(p_i) aren't natural numbers, so q should (not necessarily) be not exactly 7/8 but something around it.
Now if the forbidden symbol occurs, with probability about 7/8 we only have to try flipping one of the (about) 7 bits used to decode this symbol.
With 8 times smaller probability we have to flip one of the 7 bits of the previous symbol... with much smaller probability, depending on the error density model, we should try flipping some two bits... even extremely pessimistic cases look to take reasonable time to correct.
For the 1+2 case (r=3), the forbidden symbol gets probability about 3/4, and 0, 1 get 1/8 each.
With probability 3/4 we only have to correct one of 3 bits... with probability 255/256 one of 12 bits...

Jarek Duda
Re: Data correction methods resistant to pessimistic cases

Let's think about the theoretical limit on the number of bits of redundancy we have to add per bit of information, for an assumed statistical error distribution, to be able to fully correct the file.
To find this threshold, let's consider a simpler looking question: how much information is stored in such an uncertain bit?
Take the simplest error distribution model - for each bit the probability that it is flipped equals e (near zero), so if we see '1' we know that with probability 1-e it is really '1', and with probability e it is '0'.
So if we knew which of these cases we have - which is worth h(e) = -e lg(e) - (1-e) lg(1-e) bits - we would have a whole bit.
So such an uncertain bit carries 1 - h(e) bits.
To transfer n real bits, we have to use at least n / (1 - h(e)) such uncertain bits - the theoretical limit to be able to read the message is (asymptotically)
h(e) / (1 - h(e)) additional bits of redundancy per bit of information.

So a perfect data correction coder for e=1/100 error probability would need only about 0.088 additional bits/bit to be able to restore the message.
Hamming 4+3, despite using an additional 0.75 bits/bit, loses about 16 bits/kilobyte with the same error distribution.
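A minimal Python check of this limit:

from math import log2

def h(e):                                 # binary entropy: -e lg e - (1-e) lg(1-e)
    return -e * log2(e) - (1 - e) * log2(1 - e)

e = 0.01
print(h(e))                               # ~0.081 bits of uncertainty per received bit
print(h(e) / (1 - h(e)))                  # ~0.088 redundancy bits per information bit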

There are two main reasons why such codes are far from optimal:
- they place the data in independent blocks, making them sensitive to pessimistic cases,
- they don't correspond to the error distribution. For example Hamming 4+3 assumes that we have one of 8 error scenarios - no error, or one of the 7 bits changed. It encodes all of these cases equally, but usually the scenario that there is no error is much more probable.

Using ANS we can get Hamming codes or bit tripling as a degenerate case - in which the blocks are independent.
In other cases we create a connection between the redundancy of the blocks, and we can use the fact that with large probability the succeeding blocks are correct to correct a damaged block.

Jarek Duda
Re: Data correction methods resistant to pessimistic cases

I've just finished a large paper about ANS. It adds some deeper analysis and gathers, rethought, the information I've placed on different forums...

It also shows that the presented data correction approach can really allow reaching the theoretical Shannon limit and looks to have expected linear time, so it should be much better than the only such practical method in use - Low Density Parity Check (LDPC):
http://arxiv.org/abs/0902.0271

Jarek Duda
Re: Data correction methods resistant to pessimistic cases

The simulator of the correction process has just been published on Wolfram's page:
http://demonstrations.wolfram.com/CorrectionTrees/
It shows that we finally have a near Shannon's limit method working in nearly linear time for any noise level.

For a given probability of bit damage (p_b), we choose the p_d parameter. The higher this parameter, the more redundancy we add and the easier it is to correct errors.
We want to find the proper correction (the red path in the simulator). The main correction mechanism is that while we are expanding the proper correction everything is fine, but in each step of expanding a wrong correction we have probability p_d of realizing it. With p_d large enough, the number of corrections we should check no longer grows exponentially.
At each step the tree structure built so far is known, and using it we choose the most probable leaf to expand.
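A minimal sketch of this search in Python, assuming hypothetical expand and is_complete routines (this is not the simulator's code, just the "expand the most probable leaf" idea with a priority queue):

import heapq
from itertools import count

def correct(root, expand, is_complete):
    tie = count()                              # tiebreaker so the heap never compares nodes
    heap = [(0.0, next(tie), root)]            # (negated log-probability, tie, correction node)
    while heap:
        neg_logp, _, leaf = heapq.heappop(heap)    # most probable partial correction so far
        if is_complete(leaf):                  # the whole message decodes consistently
            return leaf
        for child, logp_step in expand(leaf):  # candidate corrections of the next step
            heapq.heappush(heap, (neg_logp - logp_step, next(tie), child))
    return None                                # no consistent correction found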

I've realized that for practical correction methods (not requiring exponential correction time) we need a bit more redundancy than the theoretical (Shannon's) limit. Redundancy allows reducing the number of corrections to consider. In practical correction methods we have to keep elongating corrections, so we have to assume that the expected number of corrections up to a given moment is finite, which requires more redundancy than Shannon's limit (observe that block codes fulfill this assumption).
This limit is calculated in the last version of the paper (0902.0271). The basic correction algorithm (as in the simulator) works for a slightly worse limit (it needs the encoded file to be larger by at most 13%), but it can probably be improved.
Finally, this new family of random trees has two phase transitions - for small p_d the tree grows without bound; above one threshold its expected size becomes finite, and above a second threshold (in p_d^2) it has finite expected width.

Error correction methods used today work practically only for very low noise (p_b < 0.01). The presented approach works well for any noise (p_b < 0.5).
For small noise it needs an encoded file size practically at Shannon's limit. The difference starts for large noise: there it needs a file size at most twice larger than the limit.
A practical method for large noise gives a new way to increase the capacity of transmission lines and storage devices - for example placing two bits where we would normally place one - the cost is a large noise increase, but now we can handle it.

For extremely large noise we can no longer use ANS. Fig. 3 of the paper shows how to handle it. For example, if we have to increase the size of the file 100 times, we can encode each bit in 100 bits - encode 1 as (11...111 XOR 'hash value of the already encoded message'), and the same with 0. Now while creating the tree, each split will have a different number of corrected bits - a different weight.
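A rough sketch of this extreme-rate encoding in Python; the choice of hash function and the exact truncation are illustrative assumptions, not the paper's construction:

import hashlib

R = 100                                        # bits used to encode a single data bit

def mask(encoded_so_far: bytes) -> int:
    # illustrative 'hash of the already encoded message', truncated to R bits
    return int.from_bytes(hashlib.sha256(encoded_so_far).digest(), 'big') >> (256 - R)

def encode_bit(bit: int, encoded_so_far: bytes) -> int:
    base = (1 << R) - 1 if bit else 0          # 111...1 for 1, 000...0 for 0
    return base ^ mask(encoded_so_far)         # XOR with the history-dependent mask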

Jarek Duda
Update

I defended my PhD in computer science (now [URL=http://www.scienceforums.net/topic/53324-from-maximal-entropy-random-wal... physics in progress[/URL]), half of which was about this new approach to data correction - it is basically the convolutional codes concept extended to use much larger states (which requires working not on the whole space of states as usual, but only on the [URL=http://demonstrations.wolfram.com/CorrectionTrees/]used tree of states[/URL], and allows practically complete repair in linear time) and using an entropy coder to add redundancy (simultaneous data compression, and the rate can be changed fluently).
The thesis is the paper from arxiv with a few small improvements (it can be downloaded from [URL]http://tcs.uj.edu.pl/graduates.php?degree=1&lang=0[/url]).
[URL=https://docs.google.com/fileview?id=0B7ppK4IyMhisMTNkNmQ2NGUtYzBlNy00ODJ... is the presentation I've used[/URL] - with a few new pictures which could make understanding the concept easier and e.g. some comparison to LDPC.

Jarek Duda
Implementation reached

I apologize for digging this thread up, but finally there is a practical implementation and it beats modern state-of-the-art methods in many applications. It can be seen as a greatly improved convolutional-code-like concept – for example [B]no longer using convolution[/B], but a carefully designed, extremely fast operation allowing to work on much larger states instead. Other main improvements are [B]bidirectional[/B] decoding and a heap (logarithmic complexity) instead of the stubbornly used stack (linear complexity). For simplicity it will be called [B]Correction Trees (CT)[/B].
The most important improvement is that [B]it can handle larger channel damage for the same rate[/B]. Adding redundancy to double (rate 1/2) or triple (rate 1/3) the message size should theoretically allow to completely repair up to correspondingly 11% or 17.4% damaged bits for the Binary Symmetric Channel (each bit is flipped independently with this probability). Unfortunately, this Shannon limit is rather unreachable - in practice we can only reduce the Bit Error Rate (BER) if the noise is significantly lower than this limit. Turbo Codes (TC) and Low Density Parity Check (LDPC) are nowadays seen as the best methods – here is a comparison of some of their implementations with the CT approach (triangles) – output BER versus input noise level:

[IMG]http://dl.dropbox.com/u/12405967/comaprison_resize.jpg[/IMG]

We can see that [B]CT still repairs when the others have given up[/B] – making it perfect for extreme applications like deep space or underwater communication. Unfortunately, repairing such extreme noise requires extreme resources – a software implementation on a modern PC decodes a few hundred bytes per second at extreme noise levels. Additionally, using more resources the correction capability can be improved further (lowering the line in the figure above).
On the other hand, [B]CT encoding is extremely fast and correction for low noise is practically for free[/B] – up to about 5-6% noise for rate 1/2. In comparison, TC correction always requires a lot of calculation, while LDPC additionally requires a lot of work for encoding alone.
So in contrast to them, [B]CT is just perfect for e.g. hard disks[/B] – everyday work involves low noise, so using CT would make it extremely cheap. On the other hand, if the disk is really badly damaged, correction is still possible but becomes costly. Such correction could also be performed externally, allowing for further improvement of the correction capabilities.
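For reference, the 11% and 17.4% thresholds quoted above come from solving 1 - h(p) = R for the Binary Symmetric Channel; a minimal Python check by bisection:

from math import log2

def h(p):                                  # binary entropy
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_threshold(rate, lo=1e-12, hi=0.5):
    for _ in range(200):                   # bisection: 1 - h(p) is decreasing on (0, 1/2)
        mid = (lo + hi) / 2
        if 1 - h(mid) > rate:
            lo = mid
        else:
            hi = mid
    return lo

print(bsc_threshold(1/2))                  # ~0.110
print(bsc_threshold(1/3))                  # ~0.174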

Paper: [url]http://arxiv.org/abs/1204.5317[/url]
Implementation: [url]https://indect-project.eu/correction-trees/[/url]
Simulator: [url]http://demonstrations.wolfram.com/CorrectionTrees/[/url]

Jarek Duda
I have just implemented this

I have just implemented this method for the deletion channel: https://github.com/JarekDuda/DeletionChannelPracticalCorrection
It has a more than 2 times better rate than a Harvard LDPC-based implementation: http://www.eecs.harvard.edu/~chaki/doc/code-long.pdf
For example, for rate 1/2 it practically always corrects at 0.05 deletion probability, while the LDPC-based implementation shows similar behavior only at rate 0.2333.
Looking at the behavior of the Pareto coefficients, it also allows estimating the channel capacity, e.g. p ~ 0.1 for this case.
It can also be easily combined with other error types, like bit flips.