High Performance Computing Center North
Mathematical Simulation of Induration in Pelletizing Processes
Research and Development, Technical Systems
S-981 86 Kiruna
In this talk we describe the principles for simulation of induration processes developed at LKAB. The model can simulate most types of processes, such as Grate Kiln, Straight Grate and the Steel Belt process. It simulates the chemical reactions which take place in the process as well as the heat transfer between gases and beds. What is described here is the kernel of the simulation package called BEDSIM. Pelletizing involves mass and heat exchange between gas and pellets in packed beds. Heat is produced when magnetite is oxidized to hematite, an exothermal reaction, while the evaporation of water in the drying stage consumes heat. The addition of lime and dolomite results in heat losses due to the calcination of these products. The chemical equilibria for these reactions are very sensitive to temperature, which results in mathematically stiff problems that have been dealt with in a special way. Future development of the model will include extension to two and three dimensions.
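The abstract notes that the temperature-sensitive chemical equilibria make the governing equations mathematically stiff. The actual BEDSIM equations are not given here, but the essence of the difficulty can be sketched on a toy linear model y' = -k(y - cos t) with a large rate constant k: an explicit integrator is unstable unless the time step is absurdly small, while an implicit one remains stable at practical step sizes. All names and parameter values below are illustrative assumptions, not part of BEDSIM.

```python
import math

def explicit_euler(k, h, steps, y0=0.0):
    # Forward Euler on y' = -k*(y - cos t); unstable once h > 2/k.
    y, t = y0, 0.0
    for _ in range(steps):
        y += h * (-k) * (y - math.cos(t))
        t += h
    return y

def implicit_euler(k, h, steps, y0=0.0):
    # Backward Euler: solve y_new = y + h*(-k)*(y_new - cos(t_new)).
    # For this linear model the implicit update has a closed form.
    y, t = y0, 0.0
    for _ in range(steps):
        t += h
        y = (y + h * k * math.cos(t)) / (1.0 + h * k)
    return y

k, h = 1000.0, 0.01          # h is 5x the explicit stability limit 2/k
print(abs(explicit_euler(k, h, 50)))    # grows without bound
print(abs(implicit_euler(k, h, 500)))   # stays bounded, tracks cos(t)
```

This is why stiff problems must "be dealt with in a special way": an implicit (or otherwise specialized) scheme lets the step size be chosen for accuracy rather than stability.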
Department of Numerical Analysis and Computing Science
Royal Institute of Technology
S-100 44 Stockholm
The progress in computational fluid mechanics has followed the rapid development of scientific computing for over fifty years. We shall briefly present this development and then focus on the state of the art today. There is a clear trend towards simulation of more complex industrial flows, such as combustion, multiphase flows and non-Newtonian flows. We shall also discuss the relation between computational fluid dynamics and parallel processing and give examples of applications from the Center for Parallel Computers at the Royal Institute of Technology.
Ljusk Ola Eriksson
Department of Forest Resource Management and Geomatics
Swedish University of Agricultural Sciences
S-901 83 Umeå
How many trees are there in the forest? What do they look like? How many will there be in the future, and with what attributes? And what is there besides trees? The forest resource is extremely diverse and the result of a multitude of different influences. The assessment of its present composition and future status necessitates the use of complex models. As the degree of sophistication of these models increases and the amount of data grows, so does our need for adequate computer power.
The purpose of this presentation is to give some examples from forestry where we are severely restricted by the limitations of conventional computer technology. The examples share two common traits. The first is that they are associated with increased demands on the utilization of the forests. Following public concern and international conventions, Swedish forestry is aiming at biodiversity as well as a sustained yield of timber. Not only the trees but other creatures, ranging from lichens to moose, should be included in our databases and management models. The trees as such are subject to intensified scrutiny: the more information we have about them, the better service they will perform in their end products and the less raw material is wasted. The second trait common to the examples is the spatial perspective. Whether we want to promote biodiversity or better use of the trees, we need to know where things are and where they are relative to each other. Certain aspects of, for instance, biodiversity can only be defined in a spatial context.
One example refers to the classification of remote sensing data. The aim is to be able to give a spatially complete description of the forest and not only the average for a region. However, efficient algorithms involve numerous computations for each of the pixels of a scene and, as a result, we are only able to analyze very limited areas. Another example describes an experiment in landscape planning. By adjusting the management of the area, an environmentally benign pattern should evolve over time. Solving the resulting optimization problems is so time consuming that only the simplest applications can be approached. The last example is akin to the previous one in that it involves spatial management. In this case the aim is to locate harvesting operations such that the right trees can be allocated for the right use.
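To make the per-pixel cost concrete, here is a minimal sketch of minimum-distance-to-mean classification, one standard scheme for multispectral remote sensing data. The band values and class names are invented for illustration and are not taken from the work described above.

```python
def classify_pixels(pixels, centroids):
    """Minimum-distance-to-mean classification: each pixel (a tuple of
    band values) is assigned to the class whose spectral centroid is
    nearest. The cost grows as pixels x classes x bands, which is why
    full scenes quickly become expensive."""
    labels = []
    for px in pixels:
        best, best_d2 = None, float("inf")
        for cls, c in centroids.items():
            d2 = sum((p - m) ** 2 for p, m in zip(px, c))
            if d2 < best_d2:
                best, best_d2 = cls, d2
        labels.append(best)
    return labels

# Hypothetical 3-band reflectance centroids for two cover types.
centroids = {"forest": (30.0, 60.0, 20.0), "water": (10.0, 15.0, 5.0)}
scene = [(28.0, 55.0, 22.0), (11.0, 14.0, 6.0)]
print(classify_pixels(scene, centroids))  # ['forest', 'water']
```

A Landsat-sized scene has tens of millions of pixels, so even this simplest of classifiers implies hundreds of millions of arithmetic operations per scene, and the efficient algorithms referred to above do far more work per pixel.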
Route 100, B/3
MD3308 Somers, NY 10589
The HPC industry has changed dramatically in the last three years, and a new business model is emerging for successful HPC companies. IBM reentered the HPC business on the crest of this wave and gained the #2 position in the market. The evolution of the market through the year 2000 and the rationale for IBM's continued investment in a leadership position will be explored.
Anthony J. G. Hey
Department of Electronics and Computer Science
University of Southampton
SOUTHAMPTON, SO17 1BJ
Realistic parallel performance estimation depends most critically on single node performance. Effects due to network contention are likely to be dwarfed by inaccuracies introduced by inadequate attention being paid to the complex memory hierarchies of modern RISC microprocessor nodes. These memory hierarchy effects can affect both computation and communication but the present model has so far concentrated on the computing aspects.
After a review of present estimation methods, our approach based on execution-driven simulation is presented. The three phases of the model -- slicing, annotation and feedback -- are necessary to obtain realistic estimates in reasonable time. The performance of our 'PERFORM' tool -- Performance Estimation for RISC Microprocessors -- is evaluated on several well-known benchmarks. Prospects for future developments conclude the talk.
Department of Social and Economic Geography
S-901 87 Umeå
One pertinent, never-ending issue in social science is the role of micro versus macro theory: of in-depth studies of a few individuals, their freedom of choice, aspirations and behaviours in their social and geographical context, versus the identification of key general, aggregated dimensions of socio-economic forces within regions and countries. The latter, more structurally oriented approach has dominated policy-oriented research, whereas micro studies have more often remained within the realm of basic social science research. Gradually, however, conditions for resolving the conflict have improved due to the increased availability of large micro databases and of computers powerful enough to handle such data and models. The basic idea behind micro simulation is to represent and model a social system at the actor level and thereby maintain resolution and heterogeneity between individuals during simulation. Such models can directly use theory and findings on the level where relevant behaviour occurs and can be observed -- the level of decision-making agents: individuals, families, firms, organizations. Macro-level results are obtained by aggregating the micro results after simulation, instead of depending on postulated behaviour assumptions for average aggregates of agents in the simulation. Such a posteriori aggregated results are less biased and can be obtained for arbitrarily chosen levels and combinations of attributes.
The quest for computing capacity becomes even more urgent in geographical micro simulation applications, since detailed spatial attributes are then added to the representation of the social objects in the model. Therefore the transformation of micro simulation into spatial micro simulation in social science is only just emerging. Ongoing time-geography-based developments at the Department of Social and Economic Geography in Umeå aim at modelling the development of the entire Swedish population via individual events such as giving birth, dying, moving from home, mating, divorcing, migrating, moving, commuting, studying, working, earning income, consuming, etc. Experiments aim at testing hypotheses regarding the driving forces in the change of fertility, mortality, household formation, education, labour market clearing, migration, place of living and work, consumption, etc. Policy applications include questions such as the impact of different taxes, regulations, transfer incomes and subsidies on the individual and regional distribution of welfare, (un)employment and settlement structure. At the Spatial Modelling Centre (SMC) in Kiruna, similar developments will be targeted towards modelling environmental behaviour, including its socio-economic determinants and impacts. The models developed so far contain rudimentary individual behaviour for as many individuals as there are Swedes. Such a model runs for almost two hours while producing eight million interwoven biographies over thirty years on an eight-CPU server, fully utilizing the compiler's threading capabilities. The more comprehensive models we aim at will certainly demand ten times as much in computer resources. Therefore, the prospect of running the models on a 64-node machine is compelling -- not only will the same experiment run one to two orders of magnitude faster, but such resources also facilitate levels of ambition in model realism and policy relevance otherwise not achievable.
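The micro-to-macro logic described above can be sketched in a few lines: draw demographic events per individual each simulated year, then aggregate afterwards. This is a deliberately crude toy (individuals reduced to an age, two event types, invented rates), not the Swedish population model itself.

```python
import random

def simulate(population, years, birth_rate, death_rate, seed=1):
    """Toy demographic microsimulation: events are drawn per individual
    each year, and macro figures (population size, mean age) are obtained
    afterwards by aggregating the micro results."""
    rng = random.Random(seed)
    people = list(population)  # each person is represented only by an age
    for _ in range(years):
        survivors = [age + 1 for age in people if rng.random() > death_rate]
        births = sum(1 for _ in survivors if rng.random() < birth_rate)
        people = survivors + [0] * births
    return people

people = simulate(population=[30] * 1000, years=30,
                  birth_rate=0.013, death_rate=0.010)
print(len(people), sum(people) / len(people))  # aggregated after the run
```

Even this skeleton makes the scaling pressure visible: a full model multiplies the individual count by four orders of magnitude and the per-individual, per-year work by far more, which is exactly where a 64-node machine pays off.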
Department of Computer Aided Design
Luleå University of Technology
S-971 87 Luleå
The development of computers and computational algorithms has made it possible to simulate strongly non-linear processes such as those in material processing: welding, cutting and different forming procedures. As a consequence, a number of international conferences and some journals are devoted to this theme. Simulation of Material Processing can be used in design in two ways. Firstly, it can be used as a means of improving the manufacturing process itself; the influence of the process parameters on the properties of the material can be studied. Secondly, it can be used in the design of products; sometimes it is important to include the influence of the manufacturing process on the in-service behaviour of the product. Traditionally, the computer cost has been regarded as the main obstacle to simulating processes like welding. The developments during the last decade have increased the size and accuracy of the computational models used. This is illustrated in the figure below. The number of unknown displacements in a finite element model is used as a measure of the computational model. The size of the time steps should also be decreased as the model is improved. The plot reveals the increased capacity of FEM with respect to simulation of welding: the product of the number of unknowns (dofs) and the number of time steps (nsteps) is plotted versus the year when the analysis was performed.
Figure: Evolution of size of FE-simulations of welding during the last decades.
The plot does not distinguish between two-dimensional and three-dimensional analyses, even though the latter automatically require more computational effort for the same number of unknowns. This increased capacity is due to developments in hardware and software. The speed of a computer typically doubles every year. The use of moving mesh techniques etc. can also reduce the required computer time. However, in order to meet some of the ``grand challenges'' in Simulation of Material Processing, further development is necessary, such as combining parallel computation with adaptive mesh techniques. Some cases of Simulation of Material Processing are presented in the talk. Each type of process is outlined and the possible gain in the design of the manufacturing process and/or product is pointed out. Furthermore, some specific problems of the simulations are outlined with respect to computational methods.
Department of Theoretical Chemistry
University of Lund
S-221 00 Lund
The development of quantum mechanics in the 1920s is the origin of modern theoretical chemistry. In the decades of maturing since then, this field has developed tremendously. A brief survey will be presented of the early developments (1920-1960), a period which laid the ground for the basic concepts still in use today. During the 1950s it became more and more clear that the favorite tool of the quantum chemist would be the computer. Until recently, the computer algorithms used in quantum chemical analysis were limited to modest-sized systems of 2-20 atoms. A combination of so-called direct methods and a detailed analysis of how the electron-electron interaction changes from a quantum mechanical nature at short distances to a classical mechanical one at large distances has inspired the development of algorithms which can now handle much larger systems. Furthermore, the development of algorithms which can utilize the power of parallel computers is making significant progress. Depending on the level of approximation, we can today study molecular systems in the range of 2-10,000 atoms. Some examples of this capacity and the prospects for theoretical chemistry in the future will be presented during the lecture.
Swedish Institute of Space Physics, IRF
S-981 28 Kiruna
The Swedish Institute of Space Physics, IRF, is a government institute for space physics research with its main office and main research division situated in Kiruna, but with research divisions also located in Umeå, Uppsala and Lund. IRF largely focuses on experimental research in space physics, with emphasis on the plasma physics associated with magnetized bodies such as the Earth and other magnetized planets -- but also on solar physics and the solar wind interaction with comets, asteroids, etc. Furthermore, IRF recently started a new program in atmospheric physics and chemistry (of e.g. stratospheric ozone). IRF has specialized in various plasma measurement techniques, from ground-based radars to instruments on in-situ missions in deep space.
IRF intends to focus on several scientific aspects and applications on the basis of the new high-performance computing facility, HPC2N.
Finally, IRF also intends to make use of the HPC2N facility in the design, development and optimization of new space plasma instruments. The idea is to numerically simulate instrument performance before the hardware manufacturing phase, thereby minimizing the presently long test phase of the instruments.
Jamshed H. Mirza
522 South Road Poughkeepsie, NY 12601
Parallel computing holds the promise of ``unlimited'' performance in a cost-effective way. It is now generally recognized that parallel computing is inevitable. It is the only way for researchers to discover the secrets of life within the DNA molecule, simulate the Earth's environment using global climate models, and probe the depths of the universe. Industry will employ very large parallel computers for product design and analyzing customer data to gain competitive advantage. Governments will use these systems to solve problems of national importance.
But to make effective and productive use of parallel computing, these systems must provide more than just raw performance. This talk will give an overview of architecture concepts relevant to parallel computers, discuss the primary architectures in use today, and investigate the implication of architecture on scalability, flexibility, programming models, ease of use, and reliability and availability characteristics.
Uwe H. Sauer
Umeå Centre for Molecular Pathogenesis
S-901 87 Umeå
In the past, biologists were not renowned as heavy users of high performance supercomputers. This situation has changed in recent years, and biologists are now beginning to rely on Giga- and Teraflop performance. This trend is due to the advances briefly mentioned below.
The Megabase output of the various genome sequencing projects generates a wealth of information that can be digested only with sophisticated algorithms on ultra fast computer systems.
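One classic example of such an algorithm is dynamic-programming sequence comparison. The sketch below computes the edit distance between two short DNA strings; its O(m x n) cost per pair of sequences, multiplied over megabases of data and millions of comparisons, is exactly what pushes sequence analysis onto high performance systems. The sequences here are arbitrary illustrations.

```python
def edit_distance(a, b):
    """Classic dynamic-programming distance between two sequences,
    using a rolling row so memory stays O(len(b)). The O(len(a)*len(b))
    running time explains why megabase-scale comparisons quickly
    outgrow ordinary workstations."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # (mis)match
        prev = cur
    return prev[-1]

print(edit_distance("GATTACA", "GACTATA"))  # 2 (two substitutions)
```

Biological alignment tools use weighted variants of this recurrence (substitution matrices, gap penalties), but the quadratic cost structure is the same.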
In structural biology, it is the X-ray crystallographer, the NMR spectroscopist and the electron microscopist/crystallographer who are faced with computational problems that benefit drastically from the use of high performance computers.
Another area of structural biology, split into an experimental and a theoretical branch, is the field of (ab initio) protein folding. Recent theoretical breakthroughs in the models used for calculation, as well as in computer performance, will lead to exciting new fundamental understanding of the underlying processes. Monte Carlo and Molecular Dynamics calculations are employed to simulate the behavior of proteins. These calculations have a great hunger for extreme computer performance.
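The Monte Carlo side of this can be sketched with the Metropolis algorithm on a toy one-dimensional energy landscape. A real protein model has thousands of coupled degrees of freedom and a far more elaborate energy function; the double-well potential and all parameters below are illustrative stand-ins.

```python
import math
import random

def metropolis(energy, x0, steps, beta, step_size, seed=0):
    """Minimal Metropolis Monte Carlo: propose a random move and accept
    it with probability min(1, exp(-beta * dE)), so that low-energy
    states dominate the collected samples."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    samples = []
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples.append(x)
    return samples

# Double-well energy standing in for a folding landscape: minima at x = +/-1.
E = lambda x: (x * x - 1.0) ** 2
xs = metropolis(E, x0=2.0, steps=20_000, beta=5.0, step_size=0.5)
print(sum(xs) / len(xs))  # samples concentrate around the two minima
```

Scaling this recipe from one coordinate to a full protein, with millions of sweeps needed for convergence, is what gives these calculations their "great hunger" for compute.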
In conclusion, biologists in general, and structural as well as theoretical biologists in particular, have embarked on the road to high performance computing, with increasing needs for performance in the future. The collaboration with a center like HPC2N will unavoidably increase computer literacy among biologists and will influence, if not change, the speed and the way biology is performed and the results obtained. We are looking forward to the collaboration with HPC2N.
Department of Information Technology
Mid Sweden University
S-851 70 Sundsvall
The purpose of the EU-funded Europort programme was to demonstrate to European industry that it is practicable and cost effective to run commercial application codes on parallel computers. In this context, ``parallel computers'' include both large machines at High Performance Computing Centres and distributed networks of workstations. It was necessary to convince code providers, and industrial end-users of third-party software, that the effort involved in porting large codes from workstations to parallel computing environments was worthwhile. The code owners had to be convinced that the parallel codes would have a long-term future, in order to recoup the initial investment in time and effort. The end-users had to be convinced that an investment in new hardware and software would be cost effective. The results demonstrate that the project achieved both these objectives.
Swedish Council for High Performance Computing (HPDR)
S-103 87 Stockholm
A couple of years into the 1990s it was realized that Sweden had not been keeping up with the international development in terms of nationally available high performance computers. The Swedish Council for High Performance Computing (HPDR) was formed on July 1st, 1994. The main task of the council is to provide access to leading-edge computational capacity for Swedish academic research. After the initial two-year build-up period, the council is now funding three national centers for HPC, all with slightly different areas of responsibility. The Center for Parallel Computers (PDC) at KTH is the main HPDR production and user services center. The National Supercomputer Centre (NSC) at LiU provides access to traditional vector capacity as well as a parallel computer installation; the NSC facilities are shared with SAAB and SMHI. To ensure that the skills needed to make efficient use of parallel computers are disseminated to the user community, the council has given High Performance Computing Center North (HPC2N), coordinated by UmU, the responsibility for a directed competence development program regarding the efficient use of, and operating environments for, scalable parallel computer systems.
As a part of the general competence-raising program, a national graduate school in scientific computing has been founded and is coordinated by HPDR. The school is funded by the Foundation for Strategic Research. It will focus on providing a core of mathematical, numerical and computational skills to students coming from a wide range of scientific disciplines. The disciplinary training of the students is provided by their home institutions. Each year approximately 10 new students will be admitted to the program.