=============================================================
Rendering recognizably unique textures on GPUs using shading
=============================================================

*Janne V. Kujala*

In this work, we apply the GPU to an unusual task: we are not attempting to
obtain a specific appearance or effect.  Instead, we use the GPU to produce an
unlimited number of distinct, novel, randomly generated textures, with the
goal that they should be recognizable to users.

In [Kujala & Lukka: "Rendering recognizably unique textures", to be published
in the Information Visualization '03 conference], we introduced the use of
procedurally generated unique background textures as a visualization of
document identity.  In our approach, each document has a different, easily
distinguishable background texture.  The user can thus identify an item at a
glance, even if only a *fragment* of the item is shown, without reading the
title (which the fragment may not even show) `[Figure 1]`__.  Because document
visits tend to follow a Zipf-like distribution, the user should be able to
learn the textures of the most frequently visited documents.  An initial
experiment has shown that the generated textures are indeed recognizable.

.. _fig1: #motivating

__ fig1_

The perceptually designed algorithm runs, after the random seeding and setup
stages, entirely in the fragment pipeline of the GPU, in order to allow
complicated mappings such as fisheye distortion between the paper and screen
coordinates.

For each unique background texture, a small palette of colors is selected
randomly from a heuristic distribution.  The shapes of the final background
texture are generated entirely from a small set of static "basis textures"
bound to texture units with randomly chosen affine texture-coordinate
mappings applied by vertex programs.  Even though the basis textures are RGB
textures, they contain no color information: they are simply treated as 3- or
4-vectors, combined nonlinearly, and the results are used to interpolate
between the palette colors to produce the final fragment colors.

Plain OpenGL 1.3 does not by itself provide enough flexibility in the
fragment pipeline to generate features nonlinearly from the basis textures
`[Figure 3]`__.  Because of this, and because of the availability of stable
Linux drivers, our main platforms are NV10, i.e., OpenGL 1.3 +
GL_NV_register_combiners, and NV25, i.e., NV10 + GL_NV_texture_shader3.  Now
that we have obtained our first NV3X-based card, we are also working on an
implementation based on GL_ARB_fragment_program and GL_NV_fragment_program.

.. _fig3: #basistex

__ fig3_

The use of the combiners is rather unconventional: we want to lose most of
the original shapes of the basis textures in order to create new, different
shapes from the interaction of the basis-texture values and the combiner
parameters chosen randomly from the seed number.  For this, we take dot
products of texture values with each other and with random constant vectors,
and scale the results up with the register-combiner output mappings to
sharpen them `[Figure 4]`__.  The resulting values are used to interpolate
between the palette colors.

.. _fig4: #combiners

__ fig4_

On the NV25, we also use offset textures to ease the creation of new shapes
in the fragment pipeline.
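As a concrete illustration of this combiner usage, the following sketch sets
up a single general combiner stage that expands two basis-texture texels to
[-1, 1], forms their dot product scaled by two, and lets the final combiner
use the result to interpolate between two palette colors.  This is a
simplified, hypothetical example rather than our actual setup: entry-point
loading and error handling are omitted, only one of the many randomly chosen
scale/constant combinations is shown, and the helper name is illustrative::

    /* Sketch only: one register-combiner stage computing roughly the
     * 2*((2a-1) . (2b-1)) term of Figure 4, then a final combiner that
     * mixes two palette colors by that value. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    void setup_combiners(const GLfloat col0[4], const GLfloat col1[4])
    {
        glEnable(GL_REGISTER_COMBINERS_NV);
        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

        /* Palette colors go into the two constant-color registers. */
        glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, col0);
        glCombinerParameterfvNV(GL_CONSTANT_COLOR1_NV, col1);

        /* Stage 0, RGB portion:
         * A = texture0 expanded to [-1,1] (2a-1),
         * B = texture1 expanded to [-1,1] (2b-1),
         * spare0 = (A . B) scaled by two. */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                          GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                          GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                           GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                           GL_SCALE_BY_TWO_NV, GL_NONE,
                           GL_TRUE, GL_FALSE, GL_FALSE);

        /* Final combiner computes A*B + (1-A)*C + D, i.e. a mix of the
         * two palette colors weighted by the (clamped) dot product. */
        glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_CONSTANT_COLOR0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_CONSTANT_COLOR1_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    }

In the actual system, the combiner inputs, constant vectors, scales, and
output routings are the quantities chosen randomly from the per-document
seed, as described above.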
.. figure:: images/paper/motivating.png
   :label: motivating

   Figure 1. The motivating example for unique backgrounds: the BuoyOING
   focus+context interface for browsing bidirectionally hyperlinked documents.
   The interface shows the relevant *fragments* of the other ends of the links
   and animates them fluidly to the focus when a link is traversed.
   a) A small document network.  b) and c) What a user sees while browsing the
   network, b) without and c) with background textures.  There are three
   keyframes where the animation stops; two frames of each animation between
   the keyframes are shown.  The unique backgrounds help the user notice that
   the upper-right buoy in the last keyframe is part of the same document (1)
   that was in focus in the first keyframe.  Our (as yet untested) hypothesis
   is that this will aid user orientation.

.. figure:: images/paper/model.png
   :label: model

   Figure 2. The qualitative model of visual perception used to create the
   algorithm.  The visual input is transformed into a feature vector, which
   contains numbers (activation levels) corresponding to, e.g., colors, edges,
   curves, and small patterns.  The feature vector is matched against the
   memorized textures.  In order to generate recognizable textures, random
   seed values should produce a distribution of feature vectors with maximum
   entropy.

.. figure:: images/paper/basistex.png
   :label: basistex

   Figure 3. The complete set of 2D basis textures used by our implementation.
   All textures shown in this proposal are built from these textures and the
   corresponding HILO textures used for offsetting.

.. figure:: images/paper/combiners.png
   :label: combiners

   Figure 4. How the limited register combiners of the NV10 architecture can
   be used to generate shapes.  Top: the two basis textures.  Bottom left:
   dot product of the basis textures, 2 (2a-1)·(2b-1) + 1/2, where a and b are
   the RGB values of the textures.  Bottom right: the squared dot product of
   the basis textures, 32 ((2a-1)·(2b-1))^2.  This term can then be used to
   modulate between two colors.

.. figure:: images/paper/examples.png
   :label: examples

   Figure 5. A number of unique backgrounds generated by our system.  This
   view can be rendered, without pre-rendering the textures, in 20 ms on a
   GeForce4 Ti 4200 in a 1024x768 window (fill-rate/bandwidth limited).

.. figure:: images/paper/buoyoing1.png
   :label: buoyoing1

.. figure:: images/paper/buoyoing2.png
   :label: buoyoing2

   Figures 6-7. Two different screenshots of a structure of PDF documents
   viewed in a focus+context view.  The user interface shows relationships
   between specific points in the documents.  Each document has a unique
   background, which makes it easy to see that the fragment of the document on
   the right side of the second view is the document seen in full in the first
   view; without unique backgrounds, this would be relatively difficult and
   would require traversing the link.
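For concreteness, the CPU-side seeding and setup stage described above could
look roughly like the sketch below.  The function name, the palette size, and
the uniform distributions are illustrative assumptions only; the actual
system draws the palette from a heuristic distribution and applies the affine
texture-coordinate mappings in vertex programs, whereas this sketch uses the
fixed-function texture matrix for brevity::

    /* Sketch only: derive a palette and per-unit affine texture-coordinate
     * mappings deterministically from a document-specific seed. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdlib.h>

    static float frand(void) { return (float)rand() / (float)RAND_MAX; }

    void seed_unique_background(unsigned document_seed,
                                int num_texture_units,
                                float palette[2][4])
    {
        int i, unit;
        srand(document_seed);  /* all randomness derives from the document id */

        /* A small palette of colors; a real heuristic distribution would,
         * e.g., keep the texture light enough for foreground text. */
        for (i = 0; i < 2; i++) {
            palette[i][0] = frand();
            palette[i][1] = frand();
            palette[i][2] = frand();
            palette[i][3] = 1.0f;
        }

        /* A random affine mapping from paper coordinates to basis-texture
         * coordinates for each texture unit: translation, rotation, and
         * anisotropic scaling. */
        for (unit = 0; unit < num_texture_units; unit++) {
            glActiveTexture(GL_TEXTURE0 + unit);
            glMatrixMode(GL_TEXTURE);
            glLoadIdentity();
            glTranslatef(frand(), frand(), 0.0f);
            glRotatef(360.0f * frand(), 0.0f, 0.0f, 1.0f);
            glScalef(0.5f + 4.0f * frand(), 0.5f + 4.0f * frand(), 1.0f);
        }
        glMatrixMode(GL_MODELVIEW);
    }

The resulting palette would then be loaded into the combiner constant-color
registers as in the earlier combiner sketch, with the combiner parameters
drawn from the same seed.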