Short:        AGF V0.9 - n*8-bit Sample Pre-Packing Processor
Author:       olethros@geocities.com (Christos Dimitrakakis)
Type:         util/pack
Architecture: m68k-amigaos
Date:         1999-09-08
Requires:     68020+ (fpu opt.)
Download:     util/pack/agf.lha
Readme:       util/pack/agf.readme
Downloads:    6366
OVERVIEW
AGF is a sample pre-processor. It transforms the data into a form with very
little information content, which makes it easier for compression programs to
pack it down to a small size. (For example, the prediction residuals of a
smooth waveform cluster near zero, and an entropy coder packs such a stream
far more tightly than the raw samples.) AGF combined with GZIP compresses
samples to about 50% of their size on average, and this combination is always
better than any other compression method used on its own. It is similar to
ADPCM, but better :)
HISTORY
06-09-1999 : Released a version that works :)
05-09-1999 : Released a version that works properly (more or less)
SUMMARY
AGF - Adaptive Gradient-descent FIR filter.
This is a neural-network-like adaptive FIR filter, employing a layer of
32 neurons. The adaptation is deterministic, which means that the sample
can be recovered from the processed file without having to store any FIR
coefficients in it as well. Adaptation is done on-line, on a
sample-by-sample basis.
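As a rough illustration, here is a minimal sketch in C of an LMS-style
adaptive FIR predictor of this kind. This is NOT the shipped implementation:
only the tap count mirrors the 32 neurons above, while the adaptation rate
MU, the truncation to integer residuals and all names are illustrative
guesses. It does show why no coefficients need saving: the decoder repeats
exactly the same weight updates on exactly the same reconstructed samples,
so encoder and decoder weights stay in lockstep.

  /* Sketch only: an FIR predictor adapted by one gradient-descent
   * (LMS) step per sample. TAPS matches the readme's "32 neurons";
   * MU and everything else are guesses, not the author's values.  */
  #include <stdio.h>

  #define TAPS 32
  #define MU   1e-5f   /* small adaptation rate, kept below the LMS
                          stability bound for 8-bit-range input     */

  typedef struct { float w[TAPS]; float hist[TAPS]; } Agf;

  static float predict(const Agf *s)
  {
      float p = 0.0f;
      for (int i = 0; i < TAPS; i++)
          p += s->w[i] * s->hist[i];
      return p;
  }

  static void step(Agf *s, float err, float sample)
  {
      /* gradient-descent update on the squared prediction error,
       * then shift the new sample into the history window         */
      for (int i = 0; i < TAPS; i++)
          s->w[i] += MU * err * s->hist[i];
      for (int i = TAPS - 1; i > 0; i--)
          s->hist[i] = s->hist[i - 1];
      s->hist[0] = sample;
  }

  static int encode(Agf *s, int x)      /* sample -> residual */
  {
      int r = x - (int)predict(s);      /* near zero for smooth audio */
      step(s, (float)r, (float)x);
      return r;
  }

  static int decode(Agf *s, int r)      /* residual -> sample */
  {
      int x = (int)predict(s) + r;      /* exact inverse of encode()  */
      step(s, (float)r, (float)x);
      return x;
  }

  int main(void)
  {
      Agf enc = {{0}}, dec = {{0}};
      int ok = 1;
      for (int i = 0; i < 1000; i++) {
          int x = (i * 7) % 128 - 64;   /* toy 8-bit test signal */
          if (decode(&dec, encode(&enc, x)) != x) ok = 0;
      }
      puts(ok ? "lossless round trip" : "mismatch");
      return 0;
  }

The round-trip check in main() confirms that decoding is exact even while
the weights are still adapting, since both sides run identical updates.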
USAGE
AGF.fpu MODE sample processed_sample
AGF.int MODE sample processed_sample
The processed sample can then be packed efficiently with any kind of packer.
I recommend xpk (xGZIP or xSQSH); lha/lzx will also do :)
The results are always MUCH better than packing the raw sample directly.
Modes:
x : extract (decode) using a linear ANN
c : compress (encode) using a linear ANN
xd : extract (decode) using a static filter
cd : compress (encode) using a static filter
AGF.fpu & AGF.int, implement the same algorithm using floating point and fixed
point representations respectively. The first one is compiled specifically for
68060 with FPU and the second for 68060 (using the math libs for any FPU
instructions.. which are only a couple). The integer version is twice as fast
on my 68030+68882.. and the packing performance difference is negligible. I
expect the int version to be also faster on 060 machines (lots of MULs), but
maybe the .fpu version is faster on 040.. test it..
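For example, a typical session (the sample file names here are made up)
could look like this:

  AGF.int c drums.raw drums.agf
  lha a drums.lha drums.agf

and to get the original sample back:

  lha x drums.lha
  AGF.int x drums.agf drums.raw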
OUTPUT
While running, it outputs the average error of the ANN predictor, and when
it finishes it shows the final values of the ANN weights, in case you are
interested :)
TODO
Add an RBF layer before the 32-neuron layer.
Make an xpksublib out of it.
Add options for adjusting the number of coefficients and adaptation rate.
BUGS
Please send bug reports to olethros@geocities.com with "AGF BUG" as the subject.
SEE ALSO
See dev/basic/gasp.lha for a similar pre-processor in which the adaptive
process is controlled by a Genetic Algorithm.
Contents of util/pack/agf.lha
PERMSSN UID GID PACKED SIZE RATIO CRC STAMP NAME
---------- ----------- ------- ------- ------ ---------- ------------ -------------
[generic] 1317 2564 51.4% -lh5- a710 Sep 6 1999 agf.readme
[generic] 8456 17928 47.2% -lh5- 95ca Sep 6 1999 agf.int
[generic] 8245 17000 48.5% -lh5- 5129 Sep 6 1999 agf.fpu
[generic] 677 1744 38.8% -lh5- 2bcc Sep 6 1999 agf.c
[generic] 606 1421 42.6% -lh5- 0ce8 Sep 6 1999 fir.c
[generic] 544 1534 35.5% -lh5- 2f9f Jan 19 1999 main.c
[generic] 129 233 55.4% -lh5- a7c4 Jan 19 1999 agf.h
[generic] 194 366 53.0% -lh5- cb4f Sep 6 1999 fir.h
---------- ----------- ------- ------- ------ ---------- ------------ -------------
Total 8 files 20168 42790 47.1% Sep 8 1999