Dhrubaditya Mitra


Numerical Studies of Turbuphoresis

This problem has now been written up as a paper; the preprint is available on the arXiv.

The study of turbulent flows with small particles suspended in them has a wide range of applications. In an engineering context a typical example is dust deposition on turbine blades or in combustion chambers. In a geophysical context a typical example is the spreading of dust after a volcanic eruption or, worse, a nuclear fallout. In an astrophysical context a typical example is dusty accretion disks.

Small particles suspended in a fluid can be moved by imposing a temperature gradient, a phenomenon known as thermophoresis that has been studied experimentally since 1870 [Tyndall, Proceedings of the Royal Institution of Great Britain, 6 3 (1870)] and theoretically since 1879 [J.C. Maxwell, Scientific Papers 3 709 (1879)]. In the last decade it has been observed in numerical simulations of turbulence that particles suspended in a turbulent flow can also develop a directional motion if the flow is inhomogeneous, a phenomenon that has been named "turbuphoresis". The aim of this project is to study this phenomenon in a simple setting.

In particular, recent direct numerical simulations (DNS) of particle-laden channel flow, in which the particles are treated as heavy, inertial particles (Sardina et al., Journal of Fluid Mechanics, 2012), have found that initially uniformly distributed particles migrate away from the center of the channel and cluster near the walls. In an attempt to understand this phenomenon, Belan et al. (Phys. Rev. Lett. 2014) have argued that this clustering is due to the inhomogeneity of turbulence in the channel. Our focus is to test this suggestion in an even simpler setting.

We plan to run simulations of forced turbulence where the forcing is inhomogeneous in space; in particular, it is a harmonic function of one of the coordinates, say $z$. Due to the inhomogeneity of the turbulence, this setup should generate a flux of particles. In analogy with thermophoresis, we model this flux by \begin{equation} {\bf J} = -\kappa \nabla n - n \kappa_{\rm u} \nabla [u_{\rm rms}^2] \end{equation} where $\kappa$ is the diffusion coefficient for the particle number density $n$ and $\kappa_{\rm u}$ is the coefficient of turbuphoresis that we want to measure. When the simulations reach a stationary state the flux must vanish, hence by balancing the two terms on the right-hand side of the above equation we can calculate $\kappa_{\rm u}$. We then want to calculate $\kappa_{\rm u}$ as a function of the inertia of the suspended particles, as measured by the Stokes number.
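Setting ${\bf J} = 0$ and integrating the balance once gives the expected stationary profile directly: \begin{equation} n(z) = n_0 \exp\left[ -\frac{\kappa_{\rm u}}{\kappa}\, u_{\rm rms}^2(z) \right] \end{equation} so that $\ln n$ is a linear function of $u_{\rm rms}^2$ with slope $-\kappa_{\rm u}/\kappa$.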

A slightly more detailed plan of our work is described in the hand-written notes in this pdf file.
Preliminary Results

Our simulations have shown that the heavy particles form clusters. Assuming the flux prescription given above, we can calculate the ratio of the two turbulent transport coefficients, $\kappa_{\rm u}/\kappa$, from our simulations. Let us call this ratio the turbulent Soret number, \begin{equation} {\rm So} \equiv \frac{\kappa_{\rm u}}{\kappa}. \end{equation}
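Given horizontally averaged stationary profiles of $n$ and $u_{\rm rms}^2$ along $z$, ${\rm So}$ is simply (minus) the slope of $\ln n$ against $u_{\rm rms}^2$. A minimal sketch of this fit in Python; the profile arrays are hypothetical stand-ins for post-processed simulation output, not actual pencil-code data structures:

import numpy as np

def estimate_soret(n, u_rms_sq):
    # At a stationary state J = -kappa grad(n) - n kappa_u grad(u_rms^2) = 0,
    # so ln(n) is linear in u_rms^2 with slope -So = -kappa_u/kappa.
    slope, _ = np.polyfit(u_rms_sq, np.log(n), 1)
    return -slope

# Synthetic check with a profile that obeys n ~ exp(-So u_rms^2), So = 0.7
z = np.linspace(0.0, 2.0 * np.pi, 384)
u_rms_sq = 1.0 + 0.5 * np.cos(z)     # harmonic forcing profile in z
n = np.exp(-0.7 * u_rms_sq)
print(estimate_soret(n, u_rms_sq))   # ~0.7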

We find that ${\rm So}$ has a non-monotonic dependence on the Stokes number. I presented these preliminary results at the Woods Hole GFD summer school in July 2015. The slides of the talk can be found here.
Further Studies
There are two straightforward extensions of this work that we propose below:
Computational Resources

We shall use the pencil-code, an open-source, MPI-parallelized solver for the partial differential equations (PDEs) relevant to fluid and magnetohydrodynamic turbulence. The code shows (weak) linear scaling up to 70,000 cores. I am one of the core developers of this code. As part of other SNIC projects the code has been ported to and run on Abisko.

It will be necessary to run the code at 384^3 and 768^3 resolutions in periodic boxes. To get the best results we need to run the code on 768 cores, which corresponds to 16 nodes of Abisko. We would also perform smaller runs at 384^3 resolution on 192 cores.

Let us first estimate the core-hours necessary for the runs at 384^3 resolution. The simulation will need to reach a stationary state before particles can be introduced, and it is often necessary to run for several large-eddy-turnover times (the largest correlation time in the system) before the system reaches a stationary state. We have measured that the code requires about $2\times10^{-2}$ microseconds of wall-clock time per timestep per meshpoint at 384^3 resolution using 192 cores. One large-eddy-turnover time requires about 2000 timesteps, i.e. about 0.63 hours of wall-clock time, so one large-eddy-turnover time costs approximately 0.63 × 192 ≈ 120 core-hours. To obtain statistically reliable data it will eventually be necessary to run for about 100 large-eddy-turnover times, which implies in total about 12,000 core-hours. The memory requirement for such a job is 1.8 GB.

If we now extrapolate to jobs at 768^3 resolution then, assuming linear scaling, we expect roughly the same wall-clock time for a given number of timesteps; but because of the time-step limitation coming from the CFL criterion, one large-eddy-turnover time now requires 10 times more timesteps, i.e. about 6.3 wall-clock hours. This implies 6.3 × 768 ≈ 4,800 core-hours per large-eddy-turnover time. At this resolution we expect to run for about 25 large-eddy-turnover times, which implies in total about 120,000 core-hours. The memory requirement for such a job is 14.5 GB.
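The arithmetic behind the two estimates above can be checked in a few lines of Python; this merely restates the stated assumptions (the per-meshpoint cost measured at 384^3, about 2000 timesteps per turnover time, the same wall-clock time per timestep at 768^3, and the CFL factor of 10), it does not measure anything new:

# Back-of-the-envelope check of the core-hour budget quoted above.
wall_per_step = 384**3 * 2e-8            # ~1.13 s of wall time per timestep

def core_hours(cores, steps):
    return cores * wall_per_step * steps / 3600.0

small = core_hours(192, 2000 * 100)      # 384^3, 100 turnover times
big = core_hours(768, 2000 * 10 * 25)    # 768^3, 10x steps/turnover, 25 turnovers
print(round(small), round(big), round(small + big))   # 12080 120796 132876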

To summarise, in total we shall require about 132,000 core-hours for two sets of jobs. One set will be run on 192 cores and requires 1.8 GB of total memory; the other will be run on 768 cores and requires 14.5 GB of memory. Assuming some of the runs need to be rerun due to accidental mistakes, it is necessary to have about 140,000 core-hours for this project.

As the project is timely and topical, it requires urgent attention. We aim to finish the project within two months; hence we would ask for 70,000 core-hours per month.
