Applications of multi-threading paradigms to simulate turbulent flows
Grossman, Igor (2017) Applications of multi-threading paradigms to simulate turbulent flows. PhD thesis, Victoria University.
Abstract
Flow structures in turbulent flows span many orders of magnitude of length and time scales. They range from the length scale at which very small eddies lose their coherence, as their translational kinetic energy is dissipated into heat, up to eddies whose size is comparable to that of the macroscopic system. The behaviour of this range of flow structures can be captured by assuming that the fluid is a continuum, and it can be described by solving the Navier-Stokes equations. However, analytical solutions of the Navier-Stokes equations exist only for simple cases. A complete description of turbulent flow, in which the velocity and pressure fields are resolved as functions of space and time, can be obtained only numerically. The instantaneous range of scales in turbulent flows increases rapidly with the Reynolds number. As a result, most engineering problems involve a range of scales too wide to be computed with direct numerical simulation (DNS). As the complexity of the calculated flows increases, improved turbulence models are often needed. One way to address this problem is to search for models that better capture the features of turbulence. Furthermore, the models should be parameterised in a way that allows flows to be simulated under a wide range of conditions. DNS is a useful tool in this endeavour, and it can be used to complement the long-established methodologies of experimental research. A very large number of grid points must be used to simulate the high Reynolds number flows that occur in the complicated geometries often encountered in practical applications. This approach requires a considerable amount of computational power. For example, halving the grid spacing increases the computational cost by a factor of about sixteen: there are eight times as many grid points, and the time step must roughly halve, doubling the number of time steps. Limitations imposed by computer hardware therefore severely restrict the number of practical numerical solutions that can be obtained to satisfy engineering needs.

In this work, we propose an alternative approach. Rather than running an application that solves the Navier-Stokes equations on one computer, we have developed a platform that allows a group of computers to communicate with one another and work together to obtain the solution of a specific flow problem. This approach helps to overcome the problem of hardware limitations. However, to meet these challenges, we must devise new strategies for the computational paradigms associated with parallel computing. In the case of solving the Navier-Stokes equations, we have to deal with significant computational and memory requirements. To meet these requirements, the software must be able to run on many high-performance computers simultaneously, and network communication may become a new limiting factor that is specific to parallel environments.

Moving to parallel environments also introduces several scenarios that do not exist when developing software that executes sequential operations. For example, "race conditions" may appear, in which many threads read different values of a shared variable or simultaneously attempt to overwrite it. The order of execution is effectively non-deterministic, because the operating system can switch between threads at any time. Attempts to synchronise the threads may result in "deadlock", in which all resources become locked simultaneously. Debugging and problem-solving in parallel environments are therefore often difficult, owing to the potentially random order in which threads run.
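To make these hazards concrete, the following minimal C++ sketch (illustrative only, not taken from the thesis software) shows two threads incrementing a shared counter; removing the lock exposes the race condition described above.

#include <iostream>
#include <mutex>
#include <thread>

// Shared state touched by both threads.
long counter = 0;
std::mutex counterMutex;

void addOneMillion() {
    for (int i = 0; i < 1000000; ++i) {
        // Without this lock the two threads read and write 'counter'
        // concurrently, increments interleave, and updates are lost.
        std::lock_guard<std::mutex> lock(counterMutex);
        ++counter;
    }
}

int main() {
    std::thread t1(addOneMillion);
    std::thread t2(addOneMillion);
    t1.join();
    t2.join();
    // Prints 2000000 only because access to 'counter' is synchronised.
    std::cout << "counter = " << counter << std::endl;
    return 0;
}

Deadlock arises in a similar setting when two threads each hold one lock while waiting for the other's, which is why the order in which locks are acquired must be designed carefully.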
All of these features require the development of new paradigms, and we must transform the way we envision software development for parallel execution. The solution to this problem is the motivation for the work presented in this thesis. A significant contribution of this work is the strategic use of thread injection to speed up the execution of sequential code. Bottlenecks are identified, and thread injection is used to parallelise the code so that it can be distributed across many different systems. This approach is implemented by creating a class that takes control of the sequential instructions that create the bottlenecks. The challenge to engineers and scientists is to determine how a given task can be split into components that can be run in parallel. The method is illustrated by applying it to Channelflow (Gibson, 2014), open-source DNS software used to simulate flows between two parallel plates.

Another challenge that arises when representing real geometries is the sheer size of the data sets involved. For example, the Johns Hopkins Turbulence Database (JHTDB) contains the results of a direct numerical simulation of isotropic turbulence in an incompressible fluid in three-dimensional space, and this alone occupies about 100 TB of data. Much more data is needed to perform a simulation, and this is just a simple model. A natural answer to this challenge is to exploit the opportunities offered by contemporary applications of ‘database technology’ in computational fluid dynamics (CFD) and turbulence research.

Direct numerical solution of the Navier-Stokes equations resolves all of the flow structures that influence a turbulent flow. In Large Eddy Simulation (LES), by contrast, the Navier-Stokes equations are spatially filtered so that they are expressed in terms of the velocities of the larger-scale structures. The effect of the unresolved scales, including the rate of viscous dissipation, is then captured by modelling the subgrid-scale stress, and this modelling can introduce inaccuracies. A means of rapidly testing and evaluating such models is therefore required, and this involves working with large data sets. The contribution of this work is the development of a computational platform that allows LES models to be dynamically loaded and rapidly evaluated against DNS data. An idea permeating the methodology is that a core component encapsulates the ‘know-how’ associated with accessing and manipulating the data, and operates independently of any plug-in. The thesis presents an example demonstrating how users can examine the accuracy of LES models and obtain results almost instantaneously. Such methods allow engineers and scientists to propose their own LES models and implement them as plug-ins with only a few lines of code. We demonstrate how this can be done by converting the Smagorinsky model into a plug-in that runs on our platform.
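As an illustration of the plug-in idea, a hypothetical C++ interface of the kind described above might look as follows. The names LESModel, eddyViscosity and createModel are assumptions made for this sketch and are not taken from the thesis platform; the Smagorinsky closure itself computes the subgrid eddy viscosity as nu_t = (Cs * Delta)^2 * |S|.

// Hypothetical plug-in interface: the host core only needs to know this
// abstract type, not the details of any particular LES model.
struct LESModel {
    virtual ~LESModel() = default;
    // Subgrid-scale eddy viscosity for one grid cell, given the magnitude
    // of the filtered strain-rate tensor |S| and the filter width Delta.
    virtual double eddyViscosity(double strainRateMagnitude, double filterWidth) const = 0;
};

// Smagorinsky model expressed as a plug-in: nu_t = (Cs * Delta)^2 * |S|.
struct SmagorinskyModel : LESModel {
    explicit SmagorinskyModel(double cs = 0.17) : cs_(cs) {}
    double eddyViscosity(double strainRateMagnitude, double filterWidth) const override {
        const double lengthScale = cs_ * filterWidth;
        return lengthScale * lengthScale * strainRateMagnitude;
    }
private:
    double cs_; // Smagorinsky constant
};

// Factory exported by the shared library so the core can load the model
// dynamically without being recompiled.
extern "C" LESModel* createModel() { return new SmagorinskyModel(); }

In a scheme of this kind, the core evaluates the loaded plug-in cell by cell against the stored DNS fields; only the few lines defining the model itself are specific to the plug-in.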
Item type | Thesis (PhD thesis)
URI | https://vuir.vu.edu.au/id/eprint/40454 |
Subjects | Historical > FOR Classification > 0802 Computation Theory and Mathematics
         | Historical > FOR Classification > 0915 Interdisciplinary Engineering
         | Current > Division/Research > College of Science and Engineering
Keywords | turbulent flows; Navier-Stokes equations; direct numerical simulation; thread injection; Johns Hopkins Turbulence Database; computational fluid dynamics; Large Eddy Simulation