Abstracts 1997

Mathematical Simulation of Induration in Pelletizing Processes

Roland Drugge
Research and Development, Technical Systems
LKAB
S-981 86 Kiruna

 

In this talk we describe the principles behind the simulation of induration processes developed at LKAB. The model can simulate most types of processes, such as the Grate Kiln, Straight Grate and Steel Belt processes. It simulates the chemical reactions which take place in the process as well as the heat transfer between gases and beds. What is described here is the kernel of the simulation package called BEDSIM. Pelletizing involves mass and heat exchange between gas and pellets in packed beds. Heat is produced when magnetite is oxidized to hematite, an exothermic reaction, while the evaporation of water in the drying stage consumes heat. Additions of lime and dolomite result in heat losses due to the calcination of these products. The chemical equilibria of these reactions are very sensitive to temperature, which leads to mathematically stiff problems that have been treated in a special way. Future development of the model will include modelling in two and three dimensions.
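
As a purely illustrative aside (not part of BEDSIM, and with made-up parameter values), the Python sketch below shows why the temperature-sensitive reaction terms call for a stiff integrator: a simplified gas/pellet heat balance for one bed layer, with an Arrhenius-type oxidation source, is advanced with an implicit (BDF) method that remains stable where an explicit scheme would need very small steps.

    # Minimal sketch (not BEDSIM): a simplified gas/pellet heat balance in one bed layer.
    # The Arrhenius-type oxidation source makes the system stiff, so an implicit (BDF)
    # integrator is used; all parameter values below are illustrative only.
    import numpy as np
    from scipy.integrate import solve_ivp

    h_gp = 500.0              # gas-pellet heat transfer coefficient [W/(m^3 K)], assumed
    c_g, c_p = 1.2e3, 2.0e3   # volumetric heat capacities of gas and pellets [J/(m^3 K)], assumed
    dH = 1.5e6                # heat released by magnetite -> hematite oxidation [J/m^3], assumed
    A, Ea, R = 1.0e7, 9.0e4, 8.314   # Arrhenius prefactor, activation energy, gas constant (assumed)

    def rhs(t, y):
        Tg, Tp, x = y                                 # gas temp, pellet temp, oxidized fraction
        r = A * np.exp(-Ea / (R * Tp)) * (1.0 - x)    # oxidation rate
        dTg = -h_gp * (Tg - Tp) / c_g                 # gas loses heat to the pellets
        dTp = (h_gp * (Tg - Tp) + dH * r) / c_p       # pellets heated by gas and by the reaction
        return [dTg, dTp, r]

    sol = solve_ivp(rhs, (0.0, 600.0), [1500.0, 300.0, 0.0], method="BDF", rtol=1e-6)
    print(f"final pellet temperature: {sol.y[1, -1]:.0f} K, oxidized fraction: {sol.y[2, -1]:.2f}")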

 

Scientific Computing with Application in Fluid Mechanics

Björn Engquist
Department of Numerical Analysis and Computing Science
Royal Institute of Technology
S-100 44 Stockholm
E-mail: engquist@nada.kth.se

 

The progress in computational fluid mechanics has followed the rapid development of scientific computing for over fifty years. We shall briefly present this development and then focus on the state of the art today. There is a clear trend towards the simulation of more complex industrial flows, such as combustion, multiphase flows and non-Newtonian flows. We shall also discuss the relation between computational fluid dynamics and parallel processing and give examples of applications from the Center for Parallel Computers at the Royal Institute of Technology.

 

The Spatial Dimension in Some Forestry Applications

Ljusk Ola Eriksson
Department of Forest Resource Management and Geomatics
Swedish University of Agricultural Sciences
S-901 83 Umeå
E-mail: Ola.Eriksson@resgeom.slu.se

 

How many trees are there in the forest? What do they look like? How many will there be in the future, and with what attributes? And what is there besides trees? The forest resource is extremely diverse and the result of a multitude of different influences. The assessment of its present composition and future status necessitates the use of complex models. As the degree of sophistication of these models increases and the amount of data grows, so does our need for adequate computer power.

The purpose of this presentation is to give some examples from forestry where we are severely restricted by the limitations of conventional computer technology. The examples share two common traits. The first is that they are associated with increased demands on the utilization of the forests. Following public concern and international conventions, Swedish forestry is aiming at biodiversity as well as a sustained yield of timber. Not only the trees but other creatures, ranging from lichens to moose, should be included in our databases and management models. The trees as such are subject to intensified scrutiny: the more information we have about them, the better they can serve in their end products and the less raw material is wasted. The second trait common to the examples is the spatial perspective. Whether we want to promote biodiversity or better use of the trees, we need to know where things are and where they are relative to each other. Certain aspects of, for instance, biodiversity can only be defined in a spatial context.

One example refers to the classification of remote sensing data. The aim is to be able to give a spatially complete description of the forest, not only the average for a region. However, efficient algorithms involve numerous computations for each of the pixels of a scene and, as a result, we are only able to analyze very limited areas. Another example describes an experiment in landscape planning. By adjusting the management of the area, an environmentally benign pattern should evolve over time. Solving the optimization problems that result is so time consuming that only the simplest applications can be approached. The last example is akin to the previous one in that it involves spatial management. In this case the aim is to locate harvesting operations such that the right trees can be allocated to the right use.
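
To give a feel for the per-pixel computational load mentioned in the first example, the Python sketch below runs a generic maximum-likelihood classifier over a synthetic multispectral scene. It is only an illustration under assumed class statistics, not the classification algorithm used in the work described above; the cost grows with the number of pixels times the number of classes, which is what limits the area that can be analyzed.

    # Generic per-pixel maximum-likelihood classification sketch (illustrative only):
    # each pixel's spectral vector is scored against Gaussian class models, so the
    # cost grows with (pixels x classes), which is what limits the area analysed.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, n_classes = 4, 5
    scene = rng.random((1024, 1024, n_bands))             # synthetic multispectral scene

    means = rng.random((n_classes, n_bands))              # per-class mean spectra (assumed)
    covs = np.stack([np.eye(n_bands) * 0.05 for _ in range(n_classes)])  # class covariances (assumed)

    pixels = scene.reshape(-1, n_bands)
    scores = np.empty((pixels.shape[0], n_classes))
    for k in range(n_classes):
        diff = pixels - means[k]
        inv = np.linalg.inv(covs[k])
        # log-likelihood up to a constant: -0.5 * (Mahalanobis distance + log|cov|)
        scores[:, k] = -0.5 * (np.einsum("ij,jk,ik->i", diff, inv, diff)
                               + np.log(np.linalg.det(covs[k])))

    class_map = scores.argmax(axis=1).reshape(1024, 1024)
    print("class counts:", np.bincount(class_map.ravel(), minlength=n_classes))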

 

High Performance Computing -- The Road to the Future

Michael Henesey
IBM Corporation
Route 100, B/3
MD3308 Somers, NY 10589
USA
E-mail: henesey@vnet.ibm.com

 

The HPC industry has changed dramatically in the last three years and there is a new business model emerging for successful HPC companies. IBM re-entered the HPC business on the crest of this wave and gained the #2 position in the market. The evolution of the market through the year 2000 and the rationale for IBM's continued investment in a leadership position will be explored.

 

The Problem of Parallel Program Performance Estimation

Anthony J. G. Hey
Department of Electronics and Computer Science
University of Southampton
SOUTHAMPTON, SO17 1BJ
United Kingdom
E-mail: ajgh@ecs.soton.ac.uk

 

Realistic parallel performance estimation depends most critically on single-node performance. Effects due to network contention are likely to be dwarfed by inaccuracies introduced when inadequate attention is paid to the complex memory hierarchies of modern RISC microprocessor nodes. These memory hierarchy effects can affect both computation and communication, but the present model has so far concentrated on the computational aspects.

After a review of present estimation methods, our approach based on execution-driven simulation is presented. The three phases of the model -- slicing, annotation and feedback -- are necessary to obtain realistic estimates in reasonable time. The performance of our 'PERFORM' tool -- Performance Estimation for RISC Microprocessors -- is evaluated on several well-known benchmarks. Prospects for future developments conclude the talk.
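
The details of PERFORM are given in the talk, not here; the Python sketch below is merely a toy version of the underlying idea, in which the cost of a sliced-out and annotated basic block is estimated from its operation counts combined with assumed memory-hierarchy hit rates and latencies (all numbers are invented for illustration).

    # Toy single-node time estimate for an annotated basic block (illustrative only):
    # counts of instructions and memory references are combined with assumed
    # per-level latencies and hit rates for a RISC node's memory hierarchy.
    from dataclasses import dataclass

    @dataclass
    class MemoryLevel:
        name: str
        hit_rate: float   # fraction of references satisfied at this level (assumed)
        latency: float    # cycles per access at this level (assumed)

    HIERARCHY = [
        MemoryLevel("L1", 0.90, 1.0),
        MemoryLevel("L2", 0.08, 10.0),
        MemoryLevel("DRAM", 0.02, 60.0),
    ]

    def estimate_cycles(n_flops: int, n_mem_refs: int, cycles_per_flop: float = 1.0) -> float:
        """Estimate cycles for a block from its operation mix and memory behaviour."""
        mem_cycles = sum(level.hit_rate * level.latency for level in HIERARCHY) * n_mem_refs
        return n_flops * cycles_per_flop + mem_cycles

    # Example: a block sliced out of a loop nest, with counts taken from annotation.
    print(f"estimated cycles: {estimate_cycles(n_flops=2_000_000, n_mem_refs=1_500_000):,.0f}")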

 

Spatial Micro Simulation in Social Science --
a Quest for High Performance Computing

Einar Holm
Department of Social and Economic Geography
Umeå University
S-901 87 Umeå
E-mail: Einar.Holm@geography.umu.se

 

One pertinent, never-ending issue in social science is the role of micro versus macro theory: of in-depth studies of a few individuals, their freedom of choice, aspirations and behaviours in their social and geographical context, versus the identification of key general, aggregated dimensions of socio-economic forces within regions and countries. The latter, more structurally oriented approach has dominated policy-oriented research, whereas micro studies have perhaps more often remained within the realm of basic social science research. Gradually, however, conditions for resolving the conflict have improved due to the increased availability of large micro databases and of computers powerful enough to handle such data and models. The basic idea behind micro simulation is to represent and model a social system at the actor level and thereby maintain resolution and heterogeneity between individuals during simulation. Such models can directly use theory and findings at the level where the relevant behaviour occurs and can be observed -- the level of decision-making agents: individuals, families, firms, organizations. Macro-level results are obtained by aggregating the micro results after the simulation, instead of depending on postulated behavioural assumptions for average aggregates of agents in the simulation. Such a posteriori aggregated results are less biased and can be obtained for arbitrarily chosen levels and combinations of attributes.

The quest for computing capacity becomes even more urgent in geographical micro simulation applications, since detailed spatial attributes are then added to the representation of the social objects in the model. Therefore the transformation of micro simulation into spatial micro simulation in social science is only just emerging. Ongoing time-geography-based developments at the Department of Social and Economic Geography in Umeå aim at modelling the development of the entire Swedish population via individual events such as giving birth, dying, leaving home, mating, divorcing, migrating, moving, commuting, studying, working, earning income, consuming, etc. Experiments aim at testing hypotheses regarding the driving forces behind changes in fertility, mortality, household formation, education, labour market clearing, migration, place of living and work, consumption, etc. Policy applications include questions such as the impact of different taxes, regulations, transfer incomes and subsidies on the individual and regional distribution of welfare, (un)employment and settlement structure. At the Spatial Modelling Centre (SMC) in Kiruna, similar developments will be targeted towards modelling environmental behaviour, including its socio-economic determinants and impacts. The models developed so far contain rudimentary individual behaviour for as many individuals as there are Swedes. Such a model runs for almost two hours while producing eight million interwoven biographies over thirty years on an eight-CPU server, fully utilizing the compiler's threading capabilities. The more comprehensive models we aim at will certainly demand ten times as much computer resources. Therefore, the prospect of running the models on a 64-node machine is compelling: not only will the same experiment run one to two orders of magnitude faster, but such resources also facilitate levels of ambition in model realism and policy relevance otherwise not achievable.
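
As a purely schematic illustration of the micro-to-macro idea (not the Umeå/SMC model, and with invented event probabilities), the Python sketch below advances a small synthetic population one year at a time with individual-level demographic events and only afterwards aggregates the results to regional totals.

    # Schematic micro simulation sketch (illustrative only, not the SMC model):
    # each individual carries its own attributes; events fire with assumed
    # probabilities; macro results come from aggregating after the simulation.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000                                        # synthetic population size
    age = rng.integers(0, 90, n)
    region = rng.integers(0, 21, n)                    # e.g. 21 counties

    P_BIRTH, P_DEATH, P_MIGRATE = 0.012, 0.009, 0.04   # assumed annual probabilities

    for year in range(30):
        age += 1
        died = rng.random(n) < P_DEATH * (age / 80.0)          # crude age-dependent mortality
        movers = rng.random(n) < P_MIGRATE
        region[movers] = rng.integers(0, 21, movers.sum())     # random destination region
        births = int((rng.random(n) < P_BIRTH).sum())
        # replace the dead and append newborns (kept simple: arrays rebuilt each year)
        keep = ~died
        age = np.concatenate([age[keep], np.zeros(births, dtype=age.dtype)])
        region = np.concatenate([region[keep], rng.integers(0, 21, births)])
        n = age.size

    # Macro-level result obtained only after the micro simulation: population per region.
    print("population by region after 30 years:", np.bincount(region, minlength=21))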

 

Simulation of Material Processing
and the Need for High Performance Computing

Lars-Erik Lindgren
Department of Computer Aided Design
Luleå University of Technology
S-971 87 Luleå
E-mail: lel@cad.luth.se

 

The development of computers and computational algorithms has made it possible to simulate strongly non-linear processes such as those in material processing, e.g. welding, cutting and various forming procedures. As a consequence, a number of international conferences and some journals are devoted to this theme. Simulation of Material Processing can be used in design in two ways. Firstly, it can be used as a means of improving the manufacturing process itself: the influence of the process parameters on the properties of the material can be studied. Secondly, it can be used in the design of products; sometimes it is important to include the influence of the manufacturing process on the in-service behaviour of the product. Traditionally, the computer cost has been regarded as the main obstacle to simulating processes like welding. The developments during the last decade have increased the size and accuracy of the computational models used. This is illustrated in the figure below. The number of unknown displacements in a finite element model is used as a measure of the computational model; the size of the time steps should also be decreased as the model is improved. The plot reveals the increased capacity of FEM with respect to the simulation of welding. The product of the number of unknowns (dofs) and the number of time steps (nsteps) is plotted versus the year when the analysis was performed.


Figure: Evolution of size of FE-simulations of welding during the last decades.

The plot does not distinguish between two-dimensional and three-dimensional analyses, even though the latter require more computational effort for the same number of unknowns. This increased capacity is due to developments in hardware and software. The speed of a computer typically doubles every year. The use of moving mesh techniques etc. can also reduce the required computer time. However, in order to meet some of the ``grand challenges'' in Simulation of Material Processing, further development is necessary, such as the combined development of parallel computations and adaptive mesh techniques. Some cases of Simulation of Material Processing are presented in the talk. Each type of process is outlined and the possible gains in the design of the manufacturing process and/or the product are pointed out. Furthermore, some specific problems of the simulations are outlined with respect to computational methods.
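
As a hedged illustration of why the dofs x nsteps product is a natural size measure, the Python sketch below solves a one-dimensional transient heat-conduction model with a moving heat source, a toy stand-in for a welding simulation with assumed material data; each implicit time step requires a solve whose cost scales with the number of unknowns, so the total work grows roughly as the plotted product.

    # Minimal 1D transient heat-conduction FE sketch with a moving heat source
    # (a toy stand-in for a welding simulation; material data and source are assumed).
    # Cost per run is roughly (work per implicit solve) x nsteps, which is why
    # dofs x nsteps is used above as a measure of model size.
    import numpy as np

    L, n_el = 1.0, 200                     # bar length [m], number of elements
    n_dof = n_el + 1
    h = L / n_el
    k, rho_c = 30.0, 4.0e6                 # conductivity, volumetric heat capacity (assumed)
    dt, n_steps = 0.1, 300                 # time step [s], number of time steps
    v_src, q0, width = 0.002, 2.0e8, 0.02  # source speed [m/s], power density [W/m^3], width [m] (assumed)

    x = np.linspace(0.0, L, n_dof)

    # Assemble the global conductivity (K) and lumped capacity (M) matrices.
    K = np.zeros((n_dof, n_dof))
    M = np.zeros(n_dof)
    for e in range(n_el):
        i, j = e, e + 1
        ke = (k / h) * np.array([[1, -1], [-1, 1]])
        K[np.ix_([i, j], [i, j])] += ke
        M[[i, j]] += rho_c * h / 2.0

    T = np.full(n_dof, 300.0)              # initial temperature [K]
    A = np.diag(M) + dt * K                # backward Euler system matrix
    for step in range(n_steps):
        x_src = v_src * step * dt          # current torch position
        q = q0 * np.exp(-((x - x_src) / width) ** 2) * h   # nodal heat load
        T = np.linalg.solve(A, M * T + dt * q)             # one implicit time step

    print(f"peak temperature after {n_steps} steps: {T.max():.0f} K")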

 

Theoretical Chemistry: from Abacus to Supercomputer,
from H- to RNA, from Obscurity to Nobility.

Roland Lindh
Department of Theoretical Chemistry
University of Lund
Box 124
S-221 00 Lund
E-mail: teohrl@teokem.lu.se

 

The development of quantum mechanics in the 1920s is the origin of modern theoretical chemistry. During its eighty years of maturing, the field has developed tremendously. A brief survey will be presented of the early developments (1920-1960), a period which laid the ground for the basic concepts still in use today. During the 1950s it became more and more clear that the favorite tool of the quantum chemist would be the computer. Until recently, the computer algorithms used in quantum chemical analysis were limited to modest-sized systems of 2-20 atoms. A combination of so-called direct methods and a detailed analysis of how the electron-electron interaction changes from quantum mechanical in nature at short distances to classical at long distances has inspired the development of algorithms which can now handle much larger systems. Furthermore, the development of algorithms which can utilize the power of parallel computers is making significant progress. Depending on the level of approximation, we can today study molecular systems in the range of 2-10,000 atoms. Some examples of this capacity and the prospects for theoretical chemistry in the future will be presented during the lecture.
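
One common ingredient of the direct methods mentioned above is screening of the two-electron integrals with a Cauchy-Schwarz-type bound, so that only spatially close pairs contribute. The Python sketch below is a toy illustration of that idea with assumed Gaussian charge distributions, not a real integral code; it merely counts how few of the formally O(N^4) integral quartets survive the screen for a spatially extended system.

    # Toy illustration (not a real integral code) of Cauchy-Schwarz-type screening:
    # the bound |(ij|kl)| <= sqrt((ij|ij)) * sqrt((kl|kl)) lets most of the formally
    # O(N^4) two-electron integrals be skipped for spatially extended systems.
    # The geometry and the Gaussian exponent below are assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 60                                   # number of s-type basis functions (toy)
    centers = rng.random((n, 3)) * 20.0      # centres spread over a 20-bohr box
    alpha = 1.0                              # common Gaussian exponent (assumed)

    # Crude magnitude estimate for each pair distribution (i j): overlap-like decay.
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    pair_mag = np.exp(-0.5 * alpha * d2)     # stands in for sqrt((ij|ij))

    threshold = 1e-8
    significant_pairs = int((pair_mag > threshold).sum())
    # Keep only quartets (ij|kl) in which both pair bounds exceed the threshold
    # (a simple, conservative stand-in for product screening).
    n_quartets = significant_pairs ** 2
    print(f"formal quartets: {n**4:,}  surviving the screen: {n_quartets:,}")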

 

High-Performance Computing Applications in Space Physics

Rickard Lundin
Swedish Institute of Space Physics, IRF
Box 812
S-981 28 Kiruna
E-mail: rickard@irf.se

 

The Swedish Institute of Space Physics, IRF, is a government institute for space physics research with its main office and main research division situated in Kiruna, but with research divisions also located in Umeå, Uppsala and Lund. IRF focuses largely on experimental research in space physics, with emphasis on the plasma physics associated with magnetized bodies such as the Earth and other magnetized planets -- but also on solar physics and the solar wind interaction with comets, asteroids, etc. Furthermore, IRF recently started a new program in atmospheric physics and chemistry (of, e.g., stratospheric ozone). IRF has specialized in various plasma measurement techniques, from ground-based radars to instruments on in-situ missions in deep space.

IRF intends to focus on several scientific aspects and applications on the basis of the new high-performance computing facility, HPC2N:

  • Simulations of microscopic plasma properties and processes
  • Hybrid simulations of the solar wind interaction with magnetized (e.g. the Earth) and non-magnetized planets (e.g. Mars).
  • General global (morphological) simulations of plasma domains in space -- including plasma interaction with neutral gases (producing e.g. energetic neutral atoms, ENAs).
  • Middle atmosphere (stratosphere, mesosphere) modelling simulations.
  • Solar-Terrestrial coupling simulations (Eco-column).

Finally, IRF also intends to make use of the HPC2N facility in the design, development and optimization of new space plasma instruments. The idea is to simulate the instrument performance numerically before the hardware and manufacturing phase, thereby shortening the currently long test phase of the instrument.
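
As a minimal, hypothetical example of the kind of calculation involved in both the hybrid simulations and the instrument-response studies mentioned above, the Python sketch below traces a single ion through assumed uniform electric and magnetic fields with the standard Boris scheme; the field strengths and the initial velocity are illustrative values only.

    # Toy charged-particle tracer (illustrative only): integrate an ion trajectory
    # through assumed uniform E and B fields with the Boris scheme, a common building
    # block both of hybrid plasma codes and of instrument-response studies.
    import numpy as np

    q_m = 9.58e7                          # charge-to-mass ratio of a proton [C/kg]
    E = np.array([0.0, 50.0, 0.0])        # electric field [V/m] (assumed)
    B = np.array([0.0, 0.0, 5e-5])        # magnetic field [T] (assumed)
    dt, n_steps = 1e-7, 20_000

    def boris_push(v, dt):
        """One Boris velocity update under the uniform fields E and B."""
        t = q_m * B * dt / 2.0
        s = 2.0 * t / (1.0 + t @ t)
        v_minus = v + q_m * E * dt / 2.0
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        return v_plus + q_m * E * dt / 2.0

    x = np.zeros(3)
    v = np.array([4.0e5, 0.0, 0.0])       # 400 km/s solar-wind-like proton (assumed)
    for _ in range(n_steps):
        v = boris_push(v, dt)
        x = x + v * dt

    print(f"final position [km]: {x / 1e3}")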

 

Parallel Systems Architecture Alternatives -- Hardware and Software Issues

Jamshed H. Mirza
IBM Corporation
41AA/P967
522 South Road Poughkeepsie, NY 12601
USA
E-mail: mirza@vnet.ibm.com

 

Parallel computing holds the promise of ``unlimited'' performance in a cost-effective way. It is now generally recognized that parallel computing is inevitable. It is the only way for researchers to discover the secrets of life within the DNA molecule, simulate the Earth's environment using global climate models, and probe the depths of the universe. Industry will employ very large parallel computers for product design and analyzing customer data to gain competitive advantage. Governments will use these systems to solve problems of national importance.

But to make effective and productive use of parallel computing, these systems must provide more than just raw performance. This talk will give an overview of architecture concepts relevant to parallel computers, discuss the primary architectures in use today, and investigate the implications of architecture for scalability, flexibility, programming models, ease of use, and reliability and availability characteristics.

 

Structural Biology and its way to High Performance Computing

Uwe H. Sauer
Umeå Centre for Molecular Pathogenesis
Umeå University
S-901 87 Umeå
E-mail: Uwe.Sauer@ucmp.umu.se

 

In the past, biologists were not renowned as heavy users of high-performance supercomputers. This situation has changed in recent years, and biologists are now beginning to rely on Gigaflop and Teraflop performance. This trend is due to the advances briefly mentioned below.

The megabase output of the various genome sequencing projects generates a wealth of information that can be digested only with sophisticated algorithms on ultra-fast computer systems.

In structural biology, it is the X-ray crystallographer, the NMR spectroscopist and the electron microscopist/crystallographer who are faced with computational problems that benefit drastically from the use of high-performance computers.

Another area of structural biology, which is split into an experimental and a theoretical branch, is the field of (ab initio) protein folding. Recent theoretical breakthroughs in the models used for calculation, as well as in computer performance, will lead to exciting new fundamental understanding of the underlying processes. Monte Carlo and Molecular Dynamics calculations are employed to simulate the behavior of proteins; these calculations have an enormous appetite for computer performance.
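
As a schematic example of the Monte Carlo approach mentioned above (a toy bead-chain model with an assumed energy function, not a real protein force field), a Metropolis simulation in Python might look like the following:

    # Toy Metropolis Monte Carlo sketch for a bead-chain "protein" (illustrative only;
    # the energy function and all parameters are assumptions, not a real force field).
    import numpy as np

    rng = np.random.default_rng(3)
    n_beads, k_bond, eps, kT = 30, 100.0, 1.0, 1.0
    n_steps = 20_000
    coords = np.cumsum(rng.normal(0, 0.6, (n_beads, 3)), axis=0)   # initial chain

    def energy(c):
        # Harmonic bonds towards unit length plus a Lennard-Jones-like pair term.
        bond = ((np.linalg.norm(np.diff(c, axis=0), axis=1) - 1.0) ** 2).sum() * k_bond
        d = np.linalg.norm(c[:, None] - c[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        lj = (4 * eps * ((1.0 / d) ** 12 - (1.0 / d) ** 6)).sum() / 2.0
        return bond + lj

    E = energy(coords)
    accepted = 0
    for step in range(n_steps):
        i = rng.integers(n_beads)
        trial = coords.copy()
        trial[i] += rng.normal(0, 0.1, 3)          # displace one bead
        E_trial = energy(trial)
        # Metropolis acceptance: always accept downhill, otherwise with Boltzmann weight.
        if E_trial < E or rng.random() < np.exp(-(E_trial - E) / kT):
            coords, E, accepted = trial, E_trial, accepted + 1

    print(f"final energy: {E:.1f}, acceptance rate: {accepted / n_steps:.2f}")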

In conclusion, biologists in general, and structural as well as theoretical biologists in particular, have embarked on the road to high performance computing, with increasing needs for performance in the future. The collaboration with a center like HPC2N will inevitably increase computer literacy among biologists and will influence, if not change, the speed and the way biology is performed and the results obtained. We are looking forward to the collaboration with HPC2N.

Parallelizing Real Engineering Codes -- The Europort Experience

Richard Wait
Department of Information Technology
Mid Sweden University
S-851 70 Sundsvall
E-mail: richard@nts.mh.se

 

The purpose of the EU-funded Europort programme was to demonstrate to European industry that it is practicable and cost effective to run commercial application codes on parallel computers. In this context, ``parallel computers'' include both large machines at High Performance Computing Centres and distributed networks of workstations. It was necessary to convince code providers, and industrial end-users of third-party software, that the effort involved in porting large codes from workstations to parallel computing environments was worthwhile. The code owners had to be convinced that the parallel codes would have a long-term future in order to recoup the initial investment in time and effort. The end-users had to be convinced that an investment in new hardware and software would be cost effective. The results demonstrate that the project achieved both these objectives.

 

A National Perspective on HPC and Competence Build-up

Anders Ynnerman
Swedish Council for High Performance Computing (HPDR)
Box 7136
S-103 87 Stockholm
E-mail: Anders.Ynnerman@tfr.se

 

A couple of years into the 1990s it was realized that Sweden had not kept up with international developments in terms of nationally available high-performance computers. The Swedish Council for High Performance Computing (HPDR) was formed on July 1st, 1994. The main task of the council is to provide access to leading-edge computational capacity for Swedish academic research. After the initial two-year build-up period, the council is now funding three national centers for HPC, all with slightly different areas of responsibility. The Center for Parallel Computers (PDC) at KTH is the main HPDR production and user services center. The National Supercomputer Centre (NSC) at LiU provides access to traditional vector capacity as well as a parallel computer installation; the NSC facilities are shared with SAAB and SMHI. To ensure that the skills needed to make efficient use of parallel computers are disseminated to the user community, the council has given High Performance Computing Center North (HPC2N), coordinated by UmU, the responsibility for a directed competence development program regarding the efficient use of, and operating environments for, scalable parallel computer systems.

As part of the general competence-raising program, a national graduate school in scientific computing has been founded and is coordinated by HPDR. The school is funded by the Foundation for Strategic Research. It will focus on providing a core of mathematical, numerical and computational skills to students coming from a wide range of scientific disciplines; the disciplinary training of the students is provided by their home institutions. Each year approximately 10 new students will be admitted to the program.
