if all circles were green and all squares yellow, a considerable number of
bits would be wasted.

To understand why it is possible to learn to discriminate particular
textures easily, consider the task of learning {\em one} texture.
This is a two-class problem, on which there is an extensive literature...
Here, there are two categories: the texture $A$ and everything else.
This problem is quite easy to solve: the probability of error
decreases exponentially in the number of features.
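
To see one way this exponential decay could arise (a sketch under
assumptions of our own, not the draft's: $m$ conditionally independent
binary features, each active with probability $p$ on texture $A$ and
with probability $q < p$ otherwise), threshold the feature count
$S = \sum_{i=1}^{m} x_i$ at $t = m(p+q)/2$. Hoeffding's inequality then
bounds both kinds of error:
\[
\Pr[S \le t \mid A] \;\le\; e^{-m(p-q)^2/2},
\qquad
\Pr[S > t \mid \bar{A}] \;\le\; e^{-m(p-q)^2/2},
\]
so the error probability indeed falls off exponentially in the number of
features $m$.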

A perceptron\cite{XXX} is a simple neural network with XXX.
In the input layer, various features are activated; the output is a
thresholded linear combination of those features.
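
As a concrete illustration, the following minimal sketch trains a
perceptron on binary feature vectors. The update rule is the classic
perceptron rule; the feature probabilities, sizes, and all identifiers
are our own assumptions, not the draft's.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, N = 64, 200                    # binary features, training vectors

# Hypothetical data: "texture A" (label +1) activates each feature
# with probability 0.7, "everything else" (label -1) with 0.3.
y = rng.choice([-1, 1], size=N)
p = np.where(y[:, None] > 0, 0.7, 0.3)
X = (rng.random((N, m)) < p).astype(float)

w, b = np.zeros(m), 0.0
for _ in range(20):               # perceptron learning rule
    for x, t in zip(X, y):
        if t * (w @ x + b) <= 0:  # misclassified: move the boundary
            w += t * x
            b += t

print("training accuracy:",
      np.mean(np.sign(X @ w + b) == y))
\end{verbatim}

With many informative features the two classes are far apart, so the
loop typically converges after a few passes; this is the easy regime the
text alludes to.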

The task is easy to learn when there are many features, STRICT CORRELATION.

$m$ binary features, $N$ vectors. Probability that ...
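
The draft leaves this probability unspecified; one natural candidate
(our guess, not the author's) is the probability that two of the $N$
random $m$-bit feature vectors coincide, which by a union bound is at
most
\[
\binom{N}{2} 2^{-m} \;\le\; N^{2}\, 2^{-(m+1)},
\]
again vanishing exponentially in the number of features $m$.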

It is easiest to remember the presence and absence of features;
therefore, the basis should be relatively small, so as not to have too
many features. XXX why?

4 textures, texture shading. These correspond to the G400, GeForce2, and
GeForce3.

\section{Experiment}

\section{A neurocomputing interpretation}

The recognizability of the generated textures is perhaps surprising
in the light of the experiments on XXX.

\section{Conclusions}