Group of authors

Rethinking Prototyping




      Frequencies of Wood – Designing in Abstract Domains

      Mathias Bernhard

      Abstract Frequencies of Wood – Designing in Abstract Domains presents the application of Fourier analysis to translate images of wood textures into the frequency domain. With this encoding, far more detail can be captured with the same number of data points than with comparable procedures in the spatial domain. The vectors composed in this way are then used to compare, qualify and mix different types of wood and to re-synthesize them into new artifacts.

      Mathias Bernhard

      CAAD / ITA / DARCH / ETH Zurich, Switzerland

      1 Introduction

      Even though this paper deals with the analysis and synthesis of image textures, its primary goal is not to provide new algorithms for the seamless texture synthesis used by rendering engines. A lot of the knowledge involved is, however, drawn from the relevant works in the field of computer graphics (CG), where these topics are described in depth (e.g. Szeliski 2011). The goal is rather to investigate tools that can help architects in the design process. Architects and designers have always worked with references, examining different answers to similar questions. How can the computer learn, and help us learn, from these references, which are available in tremendous and still growing abundance?

      2 Why Wood?

      2.1 Textural Richness

fig01_collection.jpg

      Fig. 1 144 of the wood texture samples that form the basis for this work

      The examined wood textures present a huge variety of colours, from bright yellow over fiery red and royal purple to coffee brown. They are distinguished from one another by an enormous richness of patterns and structures, such as fiddleback flames, burls, birds' eyes, quilts, masur and curly waves. At the same time, they share many common features. They all have regions of different densities: denser, darker areas and less dense, brighter areas. The textures are neither completely random (noise) nor deterministic (grid) but somewhere in between (stochastic). The periodic occurrence of annual rings makes wood an ideal subject for investigation with the process presented here.

      2.2 Sample Selection

      All wood pieces are cut parallel to the fibers and then scanned. The original dataset also contains microscopic images, cuts perpendicular to the grain, quarter-sawn and sealed samples, and photos of products made from each wood; these are not analyzed in the present work. Neither are other quantifiable aspects of the different sorts of wood, such as regional provenance or physical coefficients like elasticity, density and load-bearing capacity.

      2.3 Natural Wood

      Carrying out the experiments presented here on wood also has a provocative side. In public perception, wood counts as warm, natural, pure and honest, the least processed building material. It is supposed to grow in a wild forest, to be cut, planed and nailed directly onto the floor. In fact, most construction materials, not only synthetic but also “natural” ones, are processed - doped - to improve their fitness to meet the requirements. The works of Christoph Schindler (Schindler and Salmerón Espinosa 2011) or Hironori Yoshida (Digitized Grain, Scan to Production, Yoshida 2012) are good examples of wood being tailored and customized to individual needs. The objects intrigue and make people ask: ‘What is this?’

      3 Process / Technical Description

      3.1 Data Preparation

      The sample images of the scanned wood types (Meier 2007) are available as JPG files in 8-bit RGB colour. To improve the comparability of the different types and for reasons of computational performance, they are first downsampled to 256 x 256 pixels. The colour information of every single pixel is then split up into four two-dimensional dense matrices of double values. The matrices contain the red, green, blue and luminosity values of each pixel's colour, mapped to a range from 0 to 1. Luminosity L is calculated as a weighted sum of the three colour channels.
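      The following Java sketch illustrates this preparation step. It is not the author's code: the file name is a placeholder, and the luminosity weights used below are the common Rec. 601 coefficients, an assumption rather than necessarily the coefficients used in the paper.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class WoodSamplePreparation {

    static final int SIZE = 256; // target resolution used in the paper

    public static void main(String[] args) throws IOException {
        // "sample.jpg" is a placeholder file name, not a file from the original dataset
        BufferedImage source = ImageIO.read(new File("sample.jpg"));

        // Downsample to 256 x 256 pixels
        BufferedImage small = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = small.createGraphics();
        g.drawImage(source, 0, 0, SIZE, SIZE, null);
        g.dispose();

        // Four dense matrices of doubles in the range 0..1: red, green, blue, luminosity
        double[][] red = new double[SIZE][SIZE];
        double[][] green = new double[SIZE][SIZE];
        double[][] blue = new double[SIZE][SIZE];
        double[][] lum = new double[SIZE][SIZE];

        for (int y = 0; y < SIZE; y++) {
            for (int x = 0; x < SIZE; x++) {
                int rgb = small.getRGB(x, y);
                double r = ((rgb >> 16) & 0xFF) / 255.0;
                double gr = ((rgb >> 8) & 0xFF) / 255.0;
                double b = (rgb & 0xFF) / 255.0;
                red[y][x] = r;
                green[y][x] = gr;
                blue[y][x] = b;
                // Weighted sum; the Rec. 601 weights below are an assumption,
                // the paper's exact coefficients are not reproduced here.
                lum[y][x] = 0.299 * r + 0.587 * gr + 0.114 * b;
            }
        }

        // Example: luminosity of the centre pixel
        System.out.println(lum[SIZE / 2][SIZE / 2]);
    }
}
```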

      This conversion into grayscale returns a value much closer to the perceptible “brightness” than the brightness of a hue/saturation/brightness (HSB) colour model, see Fig. 2.

fig02_org-bright-lum.jpg

      Fig. 2 Original colour image (left), conversion using brightness (centre), conversion using luminosity (right)

      In HSB, the brightness is defined as the largest of the three colour channels and technically stands for the voltage needed to display the colour on a screen. In the Venn diagram above, one of the channels is always 255, so the conversion returns all white.
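      A minimal comparison of the two conversions makes this difference concrete. Again, the luminosity weights are the assumed Rec. 601 coefficients, not necessarily those of the paper:

```java
public class BrightnessVsLuminosity {

    // HSB-style brightness: the largest of the three channels
    static double brightness(double r, double g, double b) {
        return Math.max(r, Math.max(g, b));
    }

    // Luminosity as a weighted sum; Rec. 601 weights assumed for illustration
    static double luminosity(double r, double g, double b) {
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }

    public static void main(String[] args) {
        // A fully saturated red pixel
        double r = 1.0, g = 0.0, b = 0.0;
        System.out.println(brightness(r, g, b)); // 1.0 -> converts to white
        System.out.println(luminosity(r, g, b)); // ~0.3 -> converts to a dark grey
    }
}
```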

      Therefore, 256 x 256 x 3 values (the fourth channel being calculated from the other three) mean a lot of data (196’608 floating point values) but not yet much meaning. The questions now are: Which are the most important values? How do they - or at least some of them - relate to and depend on each other? Are there redundancies that one could get rid of?

      3.2 Fourier Analysis

      The idea of the procedure called Fourier analysis (named after Jean Baptiste Joseph Fourier, 1768-1830) is that any signal (of any dimension) can be decomposed into a series of sine waves of different frequencies, magnitudes and phase angles. Fig. 3 shows the Fourier transform of a one-dimensional signal. The brightness values of one line of pixels (a) are fed into a one-dimensional matrix (b, x-axis = index, y-axis = value), from which the Fourier transform is calculated. The first eight component waves (d, wavelengths 1/1, 1/2, 1/3, 1/4 …) are added up to form the inverse transform (c), a low-pass filtered approximation of the original curve. The Fourier transform translates a signal from the time domain (or, in the case of two-dimensional images, the spatial domain) into the frequency domain. The signals can be sound, heart pulses, stock market prices or raster images, where the intensities of each row and each column are computed.

fig03_fourier-analysis.jpg

      Fig. 3 Fourier analysis of a one-dimensional signal: original image, one line of pixels (a), corresponding section (b), reconstructed approximation or inverse transform (c) by overlaying the first eight individual frequencies (d)
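      To make the procedure of Fig. 3 concrete, the following self-contained Java sketch computes a naive discrete Fourier transform of a hypothetical line of brightness values and reconstructs it from the first eight frequencies only, i.e. a low-pass approximation. The naive O(n²) transform is chosen purely for readability; the paper itself relies on the FFT routines of Parallel Colt.

```java
public class LowPassReconstruction {

    /**
     * Naive discrete Fourier transform of a real-valued signal.
     * Returns an array {re, im} with the real and imaginary coefficients.
     */
    static double[][] dft(double[] signal) {
        int n = signal.length;
        double[] re = new double[n];
        double[] im = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double angle = -2.0 * Math.PI * k * t / n;
                re[k] += signal[t] * Math.cos(angle);
                im[k] += signal[t] * Math.sin(angle);
            }
        }
        return new double[][] { re, im };
    }

    /**
     * Inverse transform using only frequencies 0..keep-1 (and their mirrored
     * conjugates), i.e. a low-pass filtered approximation of the original.
     */
    static double[] lowPass(double[][] coeffs, int keep) {
        double[] re = coeffs[0];
        double[] im = coeffs[1];
        int n = re.length;
        double[] out = new double[n];
        for (int t = 0; t < n; t++) {
            for (int k = 0; k < n; k++) {
                // skip frequencies above the cut-off (and their mirror images)
                if (k >= keep && k <= n - keep) continue;
                double angle = 2.0 * Math.PI * k * t / n;
                out[t] += (re[k] * Math.cos(angle) - im[k] * Math.sin(angle)) / n;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical brightness values of one line of 16 pixels, range 0..1
        double[] line = { 0.2, 0.3, 0.5, 0.8, 0.9, 0.7, 0.4, 0.2,
                          0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 0.6, 0.3 };
        double[] approx = lowPass(dft(line), 8); // keep the first eight frequencies
        for (double v : approx) System.out.printf("%.3f%n", v);
    }
}
```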

      The short introduction above is meant to give readers unfamiliar with Fourier analysis the necessary basic knowledge. The mathematics involved is not described in further detail here; the reader is referred to the amply available specialist literature. Implementations also come with linear algebra libraries for most programming languages. Using the Java linear algebra library Parallel Colt (Wendykier and Nagy 2010), the Fast Fourier Transform (FFT) of each of the four matrices is calculated, resulting in four matrices of complex numbers. Fig. 4 shows one sample input image and the corresponding FFT matrix of its luminosity channel. Following the usual convention, the values are shifted by half of the matrix's dimension, so that the value of column 0 / row 0 appears in the center of the image. The grey level G of each pixel is calculated