2011-2012

Laurent FASNACHT – On-line measurement for proactive optimization of bandwidth usage

Professor: Amin Shokrollahi

ALGO and LMA

Abstract
Reliable data distribution over the Internet is mostly based on TCP (Transmission Control Protocol). With TCP, the rate of data transmission between two nodes decreases as the distance between them increases. In a globalized Internet, it becomes more and more difficult to make optimal use of the broad bandwidth now commonly available in private households. In the case of one-to-many data distribution, large content providers address the problem by deploying expensive content delivery networks.
 
The success of new bandwidth-demanding peer-to-peer applications such as tele-presence and high-definition video-conferencing will have to rely on transmission protocols that make more effective use of the available bandwidth. Forward error correcting codes (FEC) on top of the standard UDP protocol have proven to provide reliable and efficient data transmission over long-distance and/or lossy networks. In particular, using fountain codes it is possible to saturate any available bandwidth by flooding the connection with an arbitrarily large number of repair packets. Although optimal for the two peers, this method is not very fair to the users sharing the same link and will end up wasting resources. In fact, the sender's transmission rate should not exceed the effective bandwidth linking the two peers.
 
The goal of this project was to develop an efficient method to quickly determine and monitor the actual bandwidth available between two peers in order to make optimal and fair use of it.
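One classical way to estimate the bottleneck bandwidth of a path, sketched here purely for illustration (the report does not state that this is the method the project adopted), is the packet-pair technique: packets sent back to back are spaced apart by the bottleneck link, so the inter-arrival gap reveals its capacity. A minimal Python sketch, with hypothetical names:

```python
def packet_pair_bandwidth(packet_size_bytes, arrival_times):
    """Estimate bottleneck bandwidth (bytes/s) from arrival times of
    back-to-back packets: the bottleneck spaces consecutive packets by
    packet_size / bandwidth, so bandwidth = packet_size / gap."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]
    # Use the median gap to be robust against queuing noise.
    gaps.sort()
    median_gap = gaps[len(gaps) // 2]
    return packet_size_bytes / median_gap

# Example: 1500-byte packets arriving 1.2 ms apart -> 1.25 MB/s bottleneck.
times = [0.0, 0.0012, 0.0024, 0.0036, 0.0048]
estimate = packet_pair_bandwidth(1500, times)
```

Real measurements would of course need many probe pairs and filtering of cross-traffic effects; the median is the simplest such safeguard.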
 
After formulating the model hypotheses, a simulator was written to validate the theoretical aspects. A network application was then written to test whether the model was realistic on real connections. Finally, the algorithm was implemented to verify that it gave correct results.
 
The report focuses on the algorithmic and modeling aspects of the project. The source code will also be provided, as an open source library.

Nicolò PAGAN – Implementation of the point-implicit algorithm into the Eilmer3 CFD code

Supervisors: Dr. Pénélope Leyland, Ojas Joshi

Interdisciplinary Aerodynamics Group – IAG

Abstract
The aim of the work presented is to implement the point-implicit scheme in the Eilmer3 code. Once the scheme is integrated into the existing code, three different approaches can be compared on different simulations. Two gas models are adopted in order to study the improvement brought by the point-implicit scheme: the ideal gas model and the 11-species, 2-temperature gas model. The performances of three different schemes, applied to these two gas models, are compared: the explicit scheme, the implicit scheme (which uses the point-implicit scheme for both the viscous and inviscid updates), and the hybrid scheme (which uses the point-implicit scheme only for the viscous update). It turns out that the hybrid scheme performs best: the implicit scheme performs worse in terms of simulation time, while the explicit scheme performs worse in terms of computation time, and its convergence is also slower.
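The stability advantage that motivates point-implicit treatment can be seen on a toy stiff equation y' = -λy: treating the stiff source term implicitly (and solving pointwise, since no spatial coupling is involved) stays stable at time steps where the explicit update diverges. This is a generic sketch of the idea, not Eilmer3's actual update:

```python
def explicit_step(y, lam, dt):
    # Explicit Euler: diverges when lam*dt > 2.
    return y + dt * (-lam * y)

def point_implicit_step(y, lam, dt):
    # Point-implicit update: the source is evaluated at the new level,
    # y_new = y + dt * (-lam * y_new)  ->  y_new = y / (1 + lam*dt),
    # a purely local (pointwise) solve, stable for any dt > 0.
    return y / (1.0 + lam * dt)

lam, dt, steps = 100.0, 0.1, 50   # lam*dt = 10: far beyond explicit stability
y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = explicit_step(y_exp, lam, dt)
    y_imp = point_implicit_step(y_imp, lam, dt)
```

With these parameters the explicit iterate grows without bound while the point-implicit iterate decays towards the exact solution's limit of zero.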

Rui WANG – WebGL Interface Design for Panoramic Video Sequences

Professor: Pierre Vandergheynst
Project leader: Luigi Bagnato

Signal Processing Laboratory – LTS2

Abstract
The project uses advanced web technologies to display panoramic 360° videos. We use the recent Khronos Group standard, WebGL, to build an application that displays 3D spherical videos. Using the Three.js library, a panoramic video, ELBuildingEntrance.mp4, is successfully displayed on a predefined sphere. In the second part of this semester project, we first use a test image as a texture; following the procedure previously applied to video, we read the value of each pixel of the image while displaying it on the predefined sphere. With the image data of the whole image, we then manipulate the data to increase the brightness of the image. The values printed after the change confirm that the data are indeed modified by our filter.
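The brightness manipulation described above amounts to adding an offset to the RGB channels of each pixel while clamping to the 8-bit range. The project's actual filter runs in JavaScript on the WebGL pixel buffer; the following is only an illustrative Python sketch of the same operation, with hypothetical names:

```python
def adjust_brightness(pixels, delta):
    """Increase the brightness of flat 8-bit RGBA pixel data (the layout
    returned by e.g. WebGL's readPixels) by adding `delta` to the R, G
    and B channels, clamping to [0, 255] and leaving alpha untouched."""
    out = list(pixels)
    for i in range(0, len(out), 4):
        for c in range(3):                       # R, G, B only
            out[i + c] = max(0, min(255, out[i + c] + delta))
    return out

# One dark pixel (RGBA) brightened by 40 per channel.
brighter = adjust_brightness([10, 20, 30, 255], 40)
```

Clamping matters: a channel already near 255 must saturate rather than wrap around.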
1.1 WebGL technology
WebGL is based on the canvas element of HTML5. As with the 2D canvas, the getContext() method is called on the canvas element to obtain a rendering context. Shaders take shape data and turn it into pixels on the screen. Using GLSL, two types of shaders are implemented: the vertex shader and the fragment shader. The vertex shader runs for every corner (vertex) of every triangle being rendered. It transforms the points and passes along the texture coordinates and a lighting factor, which is calculated from the 'normal'; these two values are passed on in 'varying' variables. All of these values reach the fragment shader, which fetches the appropriate pixel from the texture, applies the lighting factor, and outputs the pixel.
1.2 Three.js library
For this project, we implement our code with the existing library Three.js. It provides a high-quality and convenient way to create scenes, renderers, cameras and objects, which are the basic components of this project. We therefore use the Three.js library in order to achieve our goal effectively.

Some of the results are displayed in the following figures:

 

Mahmoud JAFARGHOLI – General Solver for Cardiovascular Lumped Parameter Models

Supervisors: Mr. A. Cristiano I. Malossi, Dr. Toni M. Lassila

Summary
Lumped parameter models, or zero-dimensional models, are models capable of capturing the main flow characteristics of the cardiovascular system. For this purpose we use the analogy between electrical circuits and flow circuits. Resistors are elements that create resistance against the flow and consume energy; voltage sources and current sources inject energy into the system; inductors and capacitors store energy with different phase responses; and diodes allow flow to pass in only one direction.
In this project, we developed a general zero-dimensional solver in the LifeV library (http://www.lifev.org). The time-dependent governing equations are assembled in matrix form as Differential Algebraic Equations (DAE). These DAEs are solved by numerical solvers in Trilinos (http://trilinos.sandia.gov/).
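The simplest instance of the electrical analogy above is the two-element Windkessel model: a compliance C (capacitor) in parallel with a peripheral resistance R, driven by an inflow Q(t). The sketch below, illustrative only (the project's solver handles general DAE systems in C++/LifeV), integrates it with backward Euler, the kind of implicit step one would use for stiff DAEs:

```python
def windkessel_2element(R, C, flow, dt, P0=0.0):
    """Two-element Windkessel model: compliance C and peripheral
    resistance R satisfy  C * dP/dt = Q(t) - P / R.
    Integrated with implicit (backward) Euler."""
    P = P0
    history = [P]
    for Q in flow:
        # C*(P_new - P)/dt = Q - P_new/R
        #   ->  P_new = (C*P/dt + Q) / (C/dt + 1/R)
        P = (C * P / dt + Q) / (C / dt + 1.0 / R)
        history.append(P)
    return history

# Constant inflow: pressure relaxes towards the steady state P = Q * R.
pressures = windkessel_2element(R=1.0, C=1.5, flow=[2.0] * 200, dt=0.1)
```

With a constant inflow the pressure converges monotonically to Q·R, the "Ohm's law" of the flow analogy.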

Fabien MARGAIRAZ – Particle-In-Cell and Particle-In-Fourier methods

Supervisors: Prof. Laurent Villard, Dr. Stephan Brunner, Dr. Sébastien Jolliet

Centre de Recherches en Physique des Plasmas – CRPP

Context
Turbulence in magnetized plasmas is known to induce heat, particle and momentum transport at levels typically much larger than that due to collisional processes alone. It leads to a degradation of the quality of confinement in magnetic fusion experiments.

First-principles simulations of turbulence are based on a variety of numerical schemes. In particular, a Lagrangian, Particle-In-Cell (PIC), Finite Element scheme is used in the ORB5/NEMORB suite of codes developed at CRPP in collaboration with the Max-Planck IPP in Garching.

Whereas the ORB5/NEMORB code has been shown to scale well up to 32k cores, bottlenecks to further scalability have been identified, related to the way particles interact with the finite element grid used to solve for the EM fields. This has prompted a reexamination of the algorithms used.

The intrinsic problem of PIC schemes is the accumulation of statistical sampling noise. In ORB5/NEMORB the fields are Fourier transformed, a filter is applied that eliminates unphysical modes, and the result is transformed back to real space. This has proven very efficient at reducing noise, however at the expense of communications. A source of the problem is that the real-space 3D grid data that needs to be communicated across processors far exceeds, in size, the Fourier-space data for the physically meaningful modes. Hence the idea of going directly from particle data to physically meaningful Fourier-space modes and dispensing with the real-space grid.

Brief Project Description
In this project, an alternative scheme, using projections on Fourier modes rather than projections on finite elements, is examined as a possible candidate to alleviate some of the scalability problems.
Instead of the 5D gyrokinetic turbulence problem in magnetized plasmas, a simpler physical model will be considered, namely the Vlasov-Poisson system describing electrostatic perturbations in a collisionless plasma, in a 2D phase space (x,v).

1) A code based on the standard PIC-delta-f finite element formulation will be written and tested.

2) This code will then be modified according to a new “Particle-In-Fourier” (PIF) scheme, i.e. replacing the particle-to-grid (and v.v.) operations with particle-to-Fourier-modes (and v.v.) ones.

3) Single-process performance will be measured for both code versions (PIC and PIF) and for various problem sizes. Ways to optimize the PIF operations will be sought.

4) If time permits, the code will be parallelized with domain cloning and/or domain decomposition parallel schemes, using MPI and/or OpenMP. Parallel scalability tests will then be performed.
The codes will be developed in Fortran and will make use of various libraries for Fourier transforms, finite elements and linear algebra.
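The core of the Particle-In-Fourier idea, replacing particle-to-grid deposition with a direct projection of marker weights onto Fourier modes, can be sketched in a few lines. This is an illustrative 1D Python version (the project itself uses Fortran), with mode amplitudes ρ_k = Σ_p w_p · exp(−i·2πk·x_p/L):

```python
import math

def pif_project(positions, weights, modes, L):
    """'Particle-In-Fourier' charge deposition: project marker weights
    directly onto Fourier modes, rho_k = sum_p w_p exp(-i 2 pi k x_p / L),
    skipping the real-space grid entirely."""
    rho = {}
    for k in modes:
        re = sum(w * math.cos(2 * math.pi * k * x / L)
                 for x, w in zip(positions, weights))
        im = -sum(w * math.sin(2 * math.pi * k * x / L)
                  for x, w in zip(positions, weights))
        rho[k] = complex(re, im)
    return rho

# Markers loaded with a cos(2*pi*x/L) weight perturbation: only the k=1
# mode should pick up a significant amplitude (0.5, from cos^2 averaging).
L, N = 1.0, 4000
xs = [(i + 0.5) * L / N for i in range(N)]
ws = [math.cos(2 * math.pi * x / L) / N for x in xs]
rho = pif_project(xs, ws, modes=[0, 1, 2, 3], L=L)
```

Only the modes kept in `modes` are ever computed or communicated, which is precisely the data-size advantage motivating the scheme.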
 

Emmanuel Froustey – Numerical solution of the monodomain equation: an inverse problem for infarction models

Supervisor: Dr. Luca Dede’

Chair of Modelling and Scientific Computation – CMCS

Abstract
This project deals with the finite element approximation of an inverse problem for the monodomain equation, which models the propagation of the electrical potential in the cardiac muscle. The goal consists in recovering the shape of an infarcted area inside the cardiac muscle by measuring the electrical potential at the border of the domain, which is the principle of electrocardiograms (ECG). We consider a problem in two dimensions with implementations in FreeFem++ [3].
Keywords: finite elements, Newton method, monodomain equation, infarction model, optimal control, Lagrangian formalism, steepest descent.
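The steepest-descent step listed in the keywords has a simple generic shape, independent of the PDE: repeatedly move the control variable against the gradient of the cost functional. The toy example below is illustrative only (a scalar forward model standing in for the monodomain solve, with hypothetical names):

```python
def steepest_descent(grad, x0, step, iters):
    """Plain steepest-descent loop, as used in PDE-constrained optimal
    control: at each iteration move against the gradient of the cost."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Toy inverse problem: recover a* = 3.0 from the "measurement" m = F(a*)
# with forward model F(a) = 2*a, minimizing J(a) = (F(a) - m)^2 / 2.
m = 2.0 * 3.0
grad_J = lambda a: 2.0 * (2.0 * a - m)   # dJ/da = F'(a) * (F(a) - m), F' = 2
a_rec = steepest_descent(grad_J, x0=0.0, step=0.1, iters=100)
```

In the actual problem, the gradient of the cost with respect to the infarction shape would be supplied by the Lagrangian (adjoint) formalism also named in the keywords.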

Nicolò Pagan – Numerical Approximation of PDEs with Isogeometric Analysis and implementation in the LifeV library

Supervisor: Dr. Luca Dede’

Chair of Modelling and Scientific Computation – CMCS 

Abstract
The aim of the project is twofold: to understand the flexibility of the Isogeometric Analysis tools through the solution of some PDE problems, and to test the improvement in computational time given by a partial compile-time vectorization of the loops in the LifeV IGA code. Three different applications have been selected: the potential flow problem around an airfoil profile, the heat equation problem in a bent cylinder, and the Laplace problem in a multi-patch geometry representing a blood-vessel bifurcation. The geometries are built with the NURBS package available with the software GeoPDEs. The numerical analysis of the first application is performed with both GeoPDEs and the LifeV IGA code. The comparison between different implementations shows that the compile-time vectorization of the degrees-of-freedom loop reduces the matrix assembly time by around 20%. The automatic compile-time vectorization of the loop on the elements requires too much computational effort without a reasonable improvement in running time. Unsteady problems and multi-patch geometries have not been tested with the LifeV IGA code, but the GeoPDEs results show the expected solutions.
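The building block of the NURBS bases used in Isogeometric Analysis is the B-spline basis function, defined by the Cox–de Boor recursion. A minimal, illustrative Python version (the project's code lives in GeoPDEs and LifeV, not here):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p evaluated at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Quadratic basis on an open knot vector: the four basis functions form a
# partition of unity in the interior of the parameter domain.
knots = [0, 0, 0, 0.5, 1, 1, 1]
total = sum(bspline_basis(i, 2, 0.3, knots) for i in range(4))
```

The open (clamped) knot vector, with the end knots repeated p+1 times, is what makes the basis interpolatory at the patch boundary, the property IGA exploits to match CAD geometry exactly.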

Loïc Perruchoud – 3D Physics-Based Soft Multi-Cellular Simulator

Professor: Dario Floreano
Assistants: Andrea Maesani and Jürg Markus Germann

Laboratory of Intelligent Systems – LIS

Description
The tremendous technological advance we are currently experiencing will soon lead to the feasibility of soft multi-cellular robots, which could potentially display many of the characteristics that can be observed in natural organisms. At the LIS, we are currently investigating different aspects of such multi-cellular artificial systems, both at the level of hardware and in software simulations.
In this project, the student is expected to extend an existing 2D physics-based soft multi-cellular robot simulator into 3D. The current 2D simulator supports features like soft cell membranes, active/passive membrane adhesion and active deformation of the membranes.
In the first part of the project the student is expected to perform an evaluation study of existing 3D physics engines. Then, the student will select the best existing technology and implement the mechanisms already present in the 2D version of the simulator.
The student will characterize the scalability of the simulator and demonstrate its capabilities in a few test scenarios.

Jérémie Despraz – Indirect encodings for soft-multicellular robots

Professor: Dario Floreano

Assistants: Andrea Maesani, Jürg Markus Germann

Laboratory of Intelligent Systems – LIS

Description
Since the seminal work of Sims on virtual creatures, different systems for the evolution of morphology and control of modular robots have been proposed. However, the aim of generating the morphology of a modular robot that could reach levels of complexity comparable to the ones observed in natural systems is far from being achieved.

To achieve this goal, many challenges must still be solved. It is clear that to design the structures of such multi-cellular robots, automatic design methods are needed that could possibly replicate the incredible diversity level produced by nature in an artificial system. Various generative encodings have been proposed in the past, including grammatical-encoding and methods that simulate natural morphogenesis.

In this project, the student will investigate existing indirect encodings for multi-cellular systems and test them on morphology matching problems. Furthermore, as a test problem, he will investigate the emergence of skeletal structures in a soft multi-cellular robot. In the first part of the project, the student is expected to review existing encodings for the automatic design of multi-cellular structures. Then, the student will perform a series of experiments with the selected encodings to evaluate their capabilities on morphology matching benchmarks. Finally, he will employ the best encoding to evolve multi-cellular structures composed of cells with varying levels of stiffness, in order to investigate the conditions that favour the evolution of skeletal structures.
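A classic example of the grammatical (indirect) encodings mentioned above is the L-system: a tiny genotype, an axiom plus rewrite rules, unfolds into a much larger phenotype string describing a structure. The sketch below is illustrative only, not one of the encodings the project will evaluate:

```python
def expand_lsystem(axiom, rules, iterations):
    """Minimal L-system expansion: repeatedly rewrite every symbol
    according to `rules` (symbols without a rule are copied unchanged)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Genotype: one axiom and two rules. After 5 rewriting steps the phenotype
# string has grown to 13 symbols (Fibonacci growth), illustrating the
# compression an indirect encoding achieves over a direct one.
phenotype = expand_lsystem("A", {"A": "AB", "B": "A"}, 5)
```

The exponential genotype-to-phenotype expansion is exactly what makes such encodings attractive for evolving complex multi-cellular morphologies from compact descriptions.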

 

Lucien Xu – Montgomery multiplication on ARM

Professor: Arjen Lenstra

Assistants: Dr. Thorsten Kleinjung, Joppe Bos and Maxime Augier

Laboratory for Cryptologic Algorithms – LACAL

Description
Implement an optimized Montgomery multiplication algorithm on the ARM architecture, in order to speed up cryptographic operations.
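Montgomery multiplication computes a·b·R⁻¹ mod n without any division by n, using only multiplications, additions and shifts, which is what makes it fast on architectures like ARM. A reference sketch in Python with toy parameters (an optimized ARM version would work word by word in assembly; `%` R and `//` R are bit masks/shifts since R is a power of two):

```python
def montgomery_multiply(a, b, n, n_prime, R):
    """Montgomery product a*b*R^{-1} mod n, for odd n, R a power of two
    greater than n, and n_prime = -n^{-1} mod R."""
    t = a * b
    m = (t * n_prime) % R        # cheap: R is a power of two
    u = (t + m * n) // R         # exactly divisible by construction
    return u - n if u >= n else u

# Toy parameters (in practice n is a large odd modulus and R = 2^k with
# k a multiple of the machine word size).
n = 97                                # odd modulus
R = 128                               # power of two > n
n_prime = (-pow(n, -1, R)) % R        # precomputed once per modulus
result = montgomery_multiply(5, 9, n, n_prime, R)   # = 5*9*R^{-1} mod 97
```

Operands are normally kept in "Montgomery form" a·R mod n, so that the extra R⁻¹ factor cancels across a chain of multiplications and conversion costs are paid only at the start and end.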

Pascal Bienz – Artifact reduction in phase-contrast X-ray imaging

Supervision: Prof. Michael Unser and Masih Nilchian

Laboratory of Medical and Biological Images – LIB

Description
Grating interferometry is a phase-contrast X-ray imaging method that is extraordinarily sensitive to density variations in the sample. The method is especially suited for imaging of biomedical samples and will play an indispensable role in future X-ray imaging applications. However, the high sensitivity to variations in the sample is accompanied by a high sensitivity to intensity fluctuations (horizontal streaks) during image acquisition. The latter lead to artifacts in the 3D reconstructions, which in turn constitute a major obstacle for 3D data visualization and analysis.

The goal of the project is to design and test out image processing algorithms to reduce these artifacts. The potential impact of such work could be quite significant; in case of success, it would be immediately incorporated in the data processing pipeline of the TOMCAT beamline at the Swiss Light Source (Paul Scherrer Institute).
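Since the streaks are row-wise intensity offsets, the crudest conceivable correction is to subtract from each row the difference between its mean and the global mean. This is only a didactic sketch of the artifact's structure, not the algorithm the project would design (a realistic filter would work in, e.g., a wavelet or Fourier domain to preserve genuine horizontal features):

```python
def remove_horizontal_streaks(image):
    """Naive streak-reduction sketch: treat horizontal streaks as
    per-row intensity offsets and subtract each row's deviation from
    the global mean. Illustrative only."""
    flat = [v for row in image for v in row]
    global_mean = sum(flat) / len(flat)
    cleaned = []
    for row in image:
        row_mean = sum(row) / len(row)
        offset = row_mean - global_mean
        cleaned.append([v - offset for v in row])
    return cleaned

# A flat image of value 10 with a streaky +5 offset on the middle row:
streaky = [[10, 10, 10], [15, 15, 15], [10, 10, 10]]
clean = remove_horizontal_streaks(streaky)
```

On this synthetic input the row offsets are removed entirely; on real tomograms the challenge is precisely to do this without also erasing true anatomy that happens to vary row by row.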

Vincent Zimmern – Quantum-Mechanical Simulations of Diels-Alder Reactions for Drug Development

Professor: Matteo Dal Peraro

Advisor: Dr. Marco Stenta

Laboratory for Biomolecular Modeling – LBM

Abstract
We explored the possibility of using a pro-drug as a chassis for targeted drug delivery, in which the active ingredient, nitroxyl, would be released by way of a reverse Diels-Alder reaction. To assess the feasibility of such drug delivery, we used density-functional theory and vibrational mode analysis to analyze the energies of the reactants, products, and possible transition states of such a reaction.

Hainan Hu – Analysis of thin film solar cells with OpenMaX

Supervisors: Dr. Franz-Josef Haug and Mr. Ali Naqavi

Photovoltaics and Thin Film Electronics Laboratory – PV-LAB

Abstract
The Multiple Multipole Program (MMP) was developed by Hafner in 1980. The primary goal of this program is to obtain accurate and reliable computer solutions of electromagnetic problems. It is a pure boundary method: the field in each domain is evaluated by a series of expansions, which include multipole expansions, plane waves, Rayleigh expansions, Bessel expansions, etc. The basis functions of this method fulfill Maxwell's equations. The method belongs to the group of generalized multipole techniques (GMT). It can achieve very high accuracy, but establishing the model is hard, because the allocation of the multipole function origins is not an easy job. Several attempts have been made to optimize the placement of the multipoles.

OpenMaX is a graphical electromagnetics platform with a number of electromagnetic solvers. It can also visualize the field solution, for example with vector plots and animations, to enhance understanding of the solution.

In this project, we first made a quick theoretical study of the multiple-multipole method, then became familiar with the software and applied it to a simple problem: a thin-film solar cell with a sinusoidal grating structure.

We compared our results with the RCWA method, which is widely used in the optical simulation of solar cells. The results show that some parts of the EQE curves of the TE and TM simulations are quite different; we analyze where these differences may come from, compare the efficiency of the two methods, and sketch possible future work.


Dana Christen – Development of QMMM/MD software for biomolecular modeling

Professor: Matteo Dal Peraro

Laboratory for Biomolecular Modeling – LBM

Description
While providing the most accurate results, quantum molecular dynamics simulations are limited to small systems due to the very high computation times they require. Classical molecular dynamics on the other hand allows larger simulations at the expense of less accurate results.

Hybrid simulations aim at modeling large molecular systems using conventional classical methods while involving quantum mechanics algorithms to enhance subsets of the simulation domain, thus combining reasonable computation time and accurate results in critical areas.

The goal of this project is to implement a basic hybrid framework inside of the molecular simulation software NAMD.

Abd-Ur-Rehman Mustafa – Evaluation of price and performance of HPC in the cloud

Supervisor: Dr. Vittoria Rezzonico

Computational Science and Engineering – CSE

Description
Cloud computing is the delivery of computing as a service, where resources are shared and the service is provided to the client via a specific interface. More and more IT services are being outsourced to the cloud (mail, calendar, document management, as well as web servers, databases, …), and some studies of outsourcing HPC to the cloud have been done. It has been shown that not all computing workloads are suitable for the cloud (for instance communication- and data-intensive computations), but the cloud could be suitable for those computations which do not require too many data transfers.

The goal of this project is to evaluate the solutions for HPC in the cloud.

Gabriel Gnaegi – 3D Numerical simulation of a reentry capsule

Supervisors: Dr. Pénélope Leyland, Ojas Joshi

Interdisciplinary Aerodynamics Group – IAG

Abstract
The goal of this semester project is to perform a 3D simulation of a reentry capsule called Phoebus. It is a capsule developed by ESA to make atmospheric measurements at high velocity, such as temperature, heat transfer and shock stand-off distance. The first step is to create a 3D geometry of the capsule and a suitable computational domain that allows the shock to be captured correctly. The second step, which is the most sensitive, is to generate with ICEMCFD a suitable mesh that correctly captures the shock and the boundary layer.
Once the mesh is good enough, a campaign of simulations, using the NSMB code, is run. The goal is a simulation close to reality, which means dealing with non-equilibrium chemical species and thermodynamic processes such as radiative and convective heat transfer. To achieve this final purpose, simple models are used at the beginning to adjust the mesh, and more complex models, such as chemical equilibrium, are then added up to the final model. At each step, the mesh needs to be adapted to the new model.
The results will help to better understand the physics of such high-velocity flights and will give additional data for the sizing of the capsule.