
Markus Peuhkuri

I work at the Networking Laboratory of Helsinki University of Technology. I work as a part-time (3/5) researcher in various projects; the rest of my workday is taken by the duties of a laboratory engineer, mainly supporting computer systems. I completed my Licentiate Thesis in 2002. I have completed my studies for a Ph.D. and only have to write the thesis to defend.

Work history as researcher

Jan 1993 – Dec 1994: Assistant in HUT Telecommunications Technology Laboratory
Sept 1995 – Dec 1995: Research Assistant in HUT Telecommunications Technology Laboratory
Jan 1996 – Aug 1996: Assistant in HUT Telecommunications Technology Laboratory
Sept 1996 – Dec 1996: Research Assistant in HUT Telecommunications Technology Laboratory, in the FASTER project (part-time Nov–Dec)
Jan 1997 – Dec 1998: Part-time research assistant / scientist (since Jun 1997) in the MITTA project
Jan 1998 – Dec 2001: Senior research scientist in the Mi2tta project
Jan 2002 – : Senior research scientist in multiple projects

Interest Area

Background of Current Research

My previous work (Master's Thesis, 1997) studied the effect of operating systems on network traffic. One key idea was to develop a method to remove or smooth out the bursts typical of data traffic. However, no feasible way to do it was found. Now I am shifting my research to larger scales, away from the bursts, to model sources at longer time scales.

Currently, not very much is known about traffic patterns in the Internet. Traffic measurements have shown that the traffic has so-called long-range dependent behaviour. The most probable reason for this is that each user generates several data transfers, even simultaneously. Compared to traditional teletraffic, where users start calls at fairly random times, the Internet has several layers of processes; some of those can be modelled as stochastic while others are very deterministic.
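Long-range dependence in a measured traffic series is commonly checked with the aggregated-variance estimate of the Hurst parameter H: for an LRD series the variance of block means decays as m^(2H-2), so H clearly above 0.5 signals long memory. The sketch below (not part of the original work; names and parameters are illustrative) applies the estimator to independent samples, which serve as a short-memory baseline with H near 0.5:

```python
import random
from math import log

def hurst_aggregated_variance(series, block_sizes):
    """Estimate the Hurst parameter H by the aggregated-variance method.

    For a long-range dependent series the variance of block means decays
    as m**(2H - 2); for independent samples it decays as 1/m (H = 0.5).
    """
    points = []
    for m in block_sizes:
        n_blocks = len(series) // m
        means = [sum(series[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
        grand = sum(means) / n_blocks
        var = sum((x - grand) ** 2 for x in means) / (n_blocks - 1)
        points.append((log(m), log(var)))
    # least-squares slope of log(variance) versus log(block size)
    xs, ys = zip(*points)
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - xbar) * (y - ybar) for x, y in points) / \
        sum((x - xbar) ** 2 for x in xs)
    return 1 + slope / 2          # slope = 2H - 2

random.seed(42)
iid = [random.gauss(0, 1) for _ in range(100_000)]  # memoryless baseline
h = hurst_aggregated_variance(iid, [1, 2, 4, 8, 16, 32, 64, 128])
print(round(h, 2))  # independent traffic has no memory: H close to 0.5
```

Applied to real packet or byte counts per interval, the same fit would yield H between roughly 0.7 and 0.9 for the long-range dependent traffic described above.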

The deterministic processes are believed to be implementation dependent: characteristic of a certain implementation of the TCP/IP protocol suite or of a networking application, further modified by the existing network and end-system load. This process is thus environment-affected and changes all the time; it should be reconstructible as accurately as needed for simulations, and it will keep changing as protocols, implementations and hardware develop.

In the Internet the traffic patterns may change dramatically over time. For example, one site measured the mean size of a transferred file to be 4500 bytes; five months later it was less than half of that. At another site, the amount of web traffic doubled every six weeks for two full years. There are also great differences in traffic profile between different sites (user groups) and areas of networks.

This motivates continuous traffic measurements to handle this immense moving target. Traditional on-wire traffic measurements need dedicated high-performance equipment with enormous memory capacity to capture traffic, and are in that sense not suitable for continuous large-scale measurements. Fortunately, several network applications produce log files, which are generally used to measure service usage and to solve problems. These can also be used to measure network traffic, as one can usually find the wall-clock time, the end systems and the amount of data transferred (excluding protocol headers). In some cases some extra information can be found as well, such as the time taken by the data transfer. Some accuracy is lost, but we believe that most of the traffic process can be reconstructed.
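As an illustration of the log-based approach (not from the original work, and assuming the server writes the standard Apache Common Log Format), the per-transfer fields mentioned above, i.e. the wall-clock time, the end system and the bytes transferred, can be pulled out of one log line like this:

```python
import re
from datetime import datetime

# Apache Common Log Format: host ident user [time] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def transfer_record(line):
    """Extract the fields usable for traffic modelling from one log line:
    the end system, the wall-clock completion time and the payload size.
    Protocol headers are not logged, so sizes exclude them."""
    m = CLF.match(line)
    if m is None:
        return None
    ts = datetime.strptime(m.group('time'), '%d/%b/%Y:%H:%M:%S %z')
    size = 0 if m.group('bytes') == '-' else int(m.group('bytes'))
    return m.group('host'), ts, size

# hypothetical log line for demonstration
line = '192.0.2.7 - - [10/Oct/2000:13:55:36 +0200] "GET /index.html HTTP/1.0" 200 4500'
host, ts, size = transfer_record(line)
print(host, size)
```

Aggregating such records over time gives the per-source byte counts needed for the longer-time-scale source models, at the cost of losing packet-level timing.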

Research Done in Current Topic

The work done for my Master's Thesis was to create a tool to analyze the interaction between the operating system (mainly the scheduler and network buffers) and the network. The resulting toolbox can collect information from both the operating system and the network.

The very first target in my thesis was to control scheduling based on the generated network traffic. However, further research indicated that the implementation is anything but easy: there is no optimal location for the throttle/accelerator, and to work efficiently, some buffers must be allocated.

Future Research

The future research can be titled ''Model creator'': the main tasks are to
  1. collect traffic traces and transaction logs from various real-world applications and networks,
  2. model various applications, and
  3. find out how the characteristic properties of network traffic generated by a specific application or protocol can be improved.


List of publications.
