In this work, we are not using the GPU to simulate or model any
real-world phenomena. Instead, we use the GPU to produce an unlimited
number of different, novel shapes.

We present a perceptually designed hardware-accelerated algorithm for
generating unique background textures for distinguishing documents.
The procedurally generated unique backgrounds are used as a
visualization of document identity. In our approach, each document has
a different, easily distinguishable background texture. The user can
thus identify an item at a glance, even if only a *fragment* of the
item is shown, without reading the title (which the fragment may not
even show). The user should be able to learn the textures of the most
often visited documents, as per Zipf's law.

The motivating example for unique backgrounds is the BuoyOING
(Buoy-Oriented Interface, Next Generation) user interface, a
focus+context interface for navigating hypertext (see figxupdfdiag).

figxupdfdiag: The motivating example for unique backgrounds: the
    BuoyOING focus+context interface for browsing bidirectionally
    hyperlinked documents. The interface shows the relevant
    *fragments* of the other ends of the links and animates them
    fluidly to the focus upon traversing the link. a) shows a small
    document network. b) and c) show what a user sees while browsing
    the network, b) without and c) with background texture. There are
    three keyframes where the animation stops. Two frames of each
    animation between the keyframes are shown. The unique backgrounds
    help the user notice that the upper right buoy in the last
    keyframe is actually a part of the same document (1) which was in
    the focus in the first keyframe. Our (as yet untested) hypothesis
    is that this will aid user orientation.

An initial experiment has shown that the generated textures are indeed
recognizable.


Generating Unique Background Textures
=====================================

To be useful, the unique backgrounds should be easily distinguishable
and recognizable, and should not significantly impair the reading of
black text on top of them.

The ability to distinguish a particular texture from a large set
depends on the distribution of textures in the set. For instance, it
is intuitively clear that textures with independently random texel
values would be a very bad choice: all such textures would look alike,
being just noise. In order to design a distinguishable distribution
of textures, we have to take into account the properties of the human
visual system.

The simple model of texture perception we use assumes that at some
point, the results from the different pre-attentive feature detectors,
such as different shapes and colors, are combined to form an abstract
*feature vector* (see Fig.~\ref{fig-perceptual}). However, only a
limited number of the detected features can be grouped into objects,
indicating that the spatial resolution of the feature vector is quite
low. As a well-known example, conjunction coding is not preattentive:
red squares are hard to find among green squares and red and green
circles.

fig-perceptual: The qualitative model of visual perception used to
    create the algorithm. The visual input is transformed into a
    feature vector, which contains numbers (activation levels)
    corresponding to e.g. colors, edges, curves and small patterns.
    The feature vector is matched against the memorized textures. In
    order to generate recognizable textures, random seed values should
    produce a distribution of feature vectors with maximum entropy.

From the model we can see that to be recognizable, the feature vector
for a given texture should always be the same. Fragments of a
non-repeating texture will be slightly different, resulting in
slightly different vectors even if the local structure is the same. A
repeating texture should thus be easier to recognize. Our anecdotal
observations confirm this.

Additionally, the entropy of the feature vectors over the distribution
of textures should be maximized. The distribution should contain
occurrences of as many different features as possible, and the features
should be distributed independently of each other.

However, because of the limited spatial resolution of the feature
vector, in any *single* texture only a limited range of features
should be used.

In a sense, the model of perception should be *inverted* in order to
produce a unique background from a random vector. Features that are
orthogonal for human perception (e.g., color and direction of fastest
luminance change) should be independently random, and features not
orthogonal (e.g. colors of neighbouring pixels) should be correlated
so as to maximize the entropy.
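
For illustration, the inversion can be sketched as follows; the
parameter names, ranges and the RNG below are illustrative assumptions
rather than the exact distributions of our implementation. The point
is that each perceptually orthogonal feature is drawn independently
from the same seed.

    #include <cstdint>
    #include <random>

    // Hypothetical per-texture parameters; each corresponds to a
    // feature axis that is roughly orthogonal in human perception.
    struct TextureParams {
        float hueAngle;    // base hue of the palette, radians
        float gradientDir; // direction of fastest luminance change, radians
        float unitScale;   // relative size of the repeating unit
    };

    // Draw each perceptually orthogonal feature independently from the
    // same seed; independent draws maximize the entropy of the joint
    // distribution for the given marginals.
    TextureParams drawParams(uint32_t seed) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<float> angle(0.0f, 6.2831853f);
        std::uniform_real_distribution<float> scale(0.8f, 1.25f);
        TextureParams p;
        p.hueAngle    = angle(rng); // independent of...
        p.gradientDir = angle(rng); // ...the gradient direction
        p.unitScale   = scale(rng);
        return p;
    }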

An important point in generating the backgrounds is that the texture
appearance should have *no correlation* with any attribute or content
of the document, so that the textures of any hyperlinked documents are
similar only by chance.
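
One simple way to guarantee this independence is to derive the seed
from the document's identity alone, for example by hashing its URI.
The following sketch uses FNV-1a as the hash; the text above does not
prescribe a particular seed derivation, so this choice is illustrative.

    #include <cstdint>
    #include <string>

    // Derive the texture seed from the document's identifier alone
    // (e.g. its URI), never from its content or attributes, so that
    // the textures of linked documents are similar only by chance.
    // FNV-1a is a simple non-cryptographic string hash.
    uint64_t seedFromDocumentId(const std::string& id) {
        uint64_t h = 14695981039346656037ULL; // FNV offset basis
        for (unsigned char c : id) {
            h ^= c;
            h *= 1099511628211ULL;            // FNV prime
        }
        return h;
    }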

Hardware-accelerated implementation
===================================

One major goal for the implementation is to support complicated
mappings between paper and screen coordinates, such as fisheye
distortion. To make this simple, all processing when rendering the
[...]
GL_ARB_fragment_program once suitable hardware and Linux drivers
emerge.
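
As a concrete example of such a mapping (purely illustrative; nothing
above fixes a particular distortion function), a simple radial fisheye
from paper to screen coordinates might look as follows.

    #include <cmath>

    struct Point { float x, y; };

    // A radial fisheye mapping from paper to screen coordinates,
    // centered on the focus point: magnifies near the focus and
    // compresses far away. Illustrative only.
    Point fisheye(Point paper, Point focus, float magnification) {
        float dx = paper.x - focus.x, dy = paper.y - focus.y;
        float r = std::sqrt(dx * dx + dy * dy);
        if (r == 0.0f) return focus;
        float rs = magnification * r / (1.0f + magnification * r);
        return { focus.x + dx * rs / r, focus.y + dy * rs / r };
    }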

Colors
------

To maintain recognizability, we use a small palette of colors for each
paper, selected randomly from a heuristic distribution. The final
image contains convex combinations of the palette colors.

For readability, we only use colors with the CIE Lightness value over
80.
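
For illustration, one way to realize the lightness constraint is to
sample palette colors in CIE L*a*b* space and reject colors outside
the sRGB gamut. In the following sketch, only the Lightness-over-80
rule comes from our design; the a*/b* sampling ranges and the
rejection scheme are illustrative assumptions.

    #include <array>
    #include <cmath>
    #include <random>

    // Convert CIE L*a*b* (D65 white point) to sRGB in [0,1].
    // Returns false if the color falls outside the sRGB gamut.
    bool labToSrgb(float L, float a, float b, std::array<float, 3>& rgb) {
        auto finv = [](float t) {
            const float d = 6.0f / 29.0f;
            return t > d ? t * t * t : 3.0f * d * d * (t - 4.0f / 29.0f);
        };
        float fy = (L + 16.0f) / 116.0f;
        float X = 0.95047f * finv(fy + a / 500.0f);
        float Y = 1.00000f * finv(fy);
        float Z = 1.08883f * finv(fy - b / 200.0f);

        // XYZ -> linear sRGB, then gamma-encode.
        float lin[3] = { 3.2406f * X - 1.5372f * Y - 0.4986f * Z,
                        -0.9689f * X + 1.8758f * Y + 0.0415f * Z,
                         0.0557f * X - 0.2040f * Y + 1.0570f * Z };
        for (int i = 0; i < 3; ++i) {
            if (lin[i] < 0.0f || lin[i] > 1.0f) return false; // out of gamut
            rgb[i] = lin[i] <= 0.0031308f
                         ? 12.92f * lin[i]
                         : 1.055f * std::pow(lin[i], 1.0f / 2.4f) - 0.055f;
        }
        return true;
    }

    // Rejection-sample one palette color with CIE Lightness over 80.
    std::array<float, 3> drawPaletteColor(std::mt19937& rng) {
        std::uniform_real_distribution<float> Ld(80.0f, 100.0f);
        std::uniform_real_distribution<float> abd(-40.0f, 40.0f);
        std::array<float, 3> rgb;
        while (!labToSrgb(Ld(rng), abd(rng), abd(rng), rgb)) {}
        return rgb;
    }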

Texture coordinates
-------------------

The choice of the geometry of the repeating unit (a parallelogram)
fixes an absolute scale for the paper. The repeating unit should be
fairly isotropic to avoid the degeneration of textures into diagonal
lines, and the units for different textures should be relatively
similar in size. The repeating unit is chosen from a heuristic
distribution satisfying these criteria.

After a repeating unit is fixed, there is still freedom in choosing
texture coordinates for each basis texture: any mapping of the texture
is fine, as long as it repeats with the selected repeating unit. For
example, a texture can repeat multiple times inside the repeating
unit, or can be skewed w.r.t. the repeating unit. Again, a heuristic
distribution is used which does not skew or scale the basis texture
too much too often.
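
A sketch of a heuristic sampler along these lines follows; the exact
length and angle bounds are illustrative assumptions, chosen only to
keep the unit fairly isotropic and close to a common absolute scale.

    #include <cmath>
    #include <random>

    // A 2D repeating unit spanned by two edge vectors.
    struct RepeatingUnit {
        float ux, uy; // first edge vector
        float vx, vy; // second edge vector
    };

    // Draw a fairly isotropic parallelogram: edge lengths close to a
    // common absolute scale, and an angle between the edges kept away
    // from 0 and 180 degrees so that the unit does not degenerate
    // into diagonal lines.
    RepeatingUnit drawRepeatingUnit(std::mt19937& rng) {
        std::uniform_real_distribution<float> len(0.8f, 1.25f);
        std::uniform_real_distribution<float> dir(0.0f, 6.2831853f);
        // Keep the second edge 60..120 degrees away from the first.
        std::uniform_real_distribution<float> sep(1.0472f, 2.0944f);

        float a = dir(rng), la = len(rng), lb = len(rng);
        float b = a + sep(rng);
        return { la * std::cos(a), la * std::sin(a),
                 lb * std::cos(b), lb * std::sin(b) };
    }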

Basis textures
--------------

The shapes of the final background texture are generated entirely from
a small set of static *basis textures*. Even though the basis
textures are RGB textures, they contain no color information: they are
simply treated as 3- or 4-vectors to be used in various ways to create
shapes, and color is added by the register combiners using the palette
selected as described above.

fig-basis: The complete set of 2D basis textures used by our
    implementation. All textures shown in this article are built from
    these textures and the corresponding HILO textures for offsetting.

On the NV25 architecture, the texture accesses can be customized
further by the use of texture shading: the texture coordinates used by
a texture unit can be made to depend on the result of a previous
texture unit. This can be used to create a large variety of
shapes\cite{perlin-noise-intro}. So far, we have only used offset
textures with random offset matrices, but even they do improve the
quality of the output.
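
In CPU terms, the offset-texture operation simply displaces a texture
unit's coordinates by a previous unit's signed texel value through a
2x2 matrix; randomizing the matrix per document varies the resulting
shapes. The sketch below mirrors the NV_texture_shader offset-texture
arithmetic, not our actual GPU setup code.

    struct Vec2 { float x, y; };

    // Offset-texture arithmetic: the coordinates used by a later
    // texture unit are displaced by the (signed) result of a previous
    // unit, transformed by a random 2x2 offset matrix m.
    Vec2 offsetTexCoord(Vec2 st,   // incoming texture coordinates
                        Vec2 prev, // signed previous result (HILO texel)
                        const float m[4]) {
        return { st.x + m[0] * prev.x + m[1] * prev.y,
                 st.y + m[2] * prev.x + m[3] * prev.y };
    }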

Register combiners
------------------

The NVIDIA register combiners extension is used to combine the 3-
and 4-vectors obtained from the basis textures and the palette colors
into the final fragment color. Our need for the combiners is rather
unconventional: we want to lose most of the original shapes of the
basis textures in order to create new, different shapes from the
interaction of the basis texture values and combiner parameters chosen
randomly from the seed number. For this, we use dot products of
texture values with each other and with random constant vectors, and
scale up with the register combiner output mappings to sharpen the
result (see Fig.~\ref{fig-regcomb}). The resulting values are used
for interpolating between the palette colors. Because some basis
textures have blurrier edges than others, the output scalings need to
be adjusted depending on the basis textures selected.
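
Written out on the CPU for clarity, the shape-forming arithmetic of
Fig.~\ref{fig-regcomb} looks as follows; the helper names are ours,
and the clamping imitates the saturation of the combiner outputs.

    #include <algorithm>
    #include <array>

    using Vec3 = std::array<float, 3>;

    // Signed dot product of two basis-texture texels, as the register
    // combiners compute it: inputs expanded from [0,1] to [-1,1].
    float signedDot(const Vec3& a, const Vec3& b) {
        float d = 0.0f;
        for (int i = 0; i < 3; ++i)
            d += (2.0f * a[i] - 1.0f) * (2.0f * b[i] - 1.0f);
        return d;
    }

    // Fig. fig-regcomb, bottom left: 2(2a-1).(2b-1) + 1/2.
    float shapeLinear(const Vec3& a, const Vec3& b) {
        return std::clamp(2.0f * signedDot(a, b) + 0.5f, 0.0f, 1.0f);
    }

    // Fig. fig-regcomb, bottom right: 32((2a-1).(2b-1))^2; squaring
    // and scaling up sharpens the edges of the resulting shapes.
    float shapeSquared(const Vec3& a, const Vec3& b) {
        float d = signedDot(a, b);
        return std::clamp(32.0f * d * d, 0.0f, 1.0f);
    }

    // The clamped value interpolates between two palette colors.
    Vec3 modulate(const Vec3& c0, const Vec3& c1, float t) {
        return { c0[0] + t * (c1[0] - c0[0]),
                 c0[1] + t * (c1[1] - c0[1]),
                 c0[2] + t * (c1[2] - c0[2]) };
    }
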
fig-regcomb: How the limited register combiners of the NV10
    architecture can be used to generate shapes. Top: the two basis
    textures. Bottom left: dot product of the basis textures:
    2(2a-1)\cdot(2b-1)+1/2, where a and b are the texture RGB values.
    Bottom right: dot product of the basis textures squared:
    32((2a-1)\cdot(2b-1))^2. This term can then be used to modulate
    between two colors.

fig-examples: A number of unique backgrounds generated by our system.
    This view can be rendered, without pre-rendering the textures, in
    20 ms on a GeForce4 Ti 4200 in a 1024x768 window
    (fill-rate/bandwidth limited).

figxanalogicalexample: Two different screenshots of a structure of PDF
    documents viewed in a focus+context view. The user interface
    shows relationships between specific points in the documents.
    Each document has a unique background, which makes it easy to see
    that the fragment of a document on the left side of the bottom
    view is the document fully seen in the top view; without unique
    backgrounds, this would be relatively difficult and would require
    traversing the link.