I work at the Networking Laboratory of Helsinki University of Technology. I am a part-time (3/5) researcher in various projects; the rest of my workday is taken up by my duties as laboratory engineer, mainly supporting computer systems. I completed my Licentiate Thesis in 2002. I have completed my studies for a Ph.D.; I only have to write the thesis and defend it.
|Jan 1993||Dec 1994:||Assistant in HUT Telecommunications Technology Laboratory|
|Sept 1995||Dec 1995:||Research Assistant in HUT Telecommunications Technology Laboratory|
|Jan 1996||Aug 1996:||Assistant in HUT Telecommunications Technology Laboratory|
|Sept 1996||Dec 1996:||Research Assistant in HUT Telecommunications Technology Laboratory, in the FASTER project. (Part-time Nov-Dec)|
|Jan 1997||Dec 1998:||Part-time research assistant / research scientist (since Jun 1997) in the MITTA project.|
|Jan 1998||Dec 2001:||Senior research scientist in the MITTA project.|
|Jan 2002||present:||Senior research scientist in multiple projects.|
Currently, not very much is known about traffic patterns in the Internet. Traffic measurements have shown that the traffic has so-called long-range dependent behaviour. The most probable reason for this is that each user generates several data transfers, even simultaneously. Compared to traditional teletraffic, where users initiate calls quite randomly, the Internet has several layers of processes - some of these can be modelled as stochastic while others are very deterministic.
The deterministic processes are believed to be implementation dependent: characteristic of a certain implementation of the TCP/IP protocol suite or a networking application, further modified by the existing network and end-system load. This process is thus environment-affected and changing all the time - it should be reconstructible as accurately as needed for simulations, and it changes as protocols, implementations and hardware develop.
In the Internet, traffic patterns may change dramatically over time. For example, one site measured the mean size of a transferred file to be 4500 bytes - five months later it was less than half of that. At another site, the amount of web traffic doubled every six weeks for two full years. There are also great differences in traffic profile between different sites (user groups) and areas of networks.
This motivates continuous traffic measurements to handle this immense moving target. Traditional on-wire traffic measurements need dedicated high-performance equipment with enormous memory capacity to capture traffic and are, in that sense, not suitable for continuous large-scale measurements. Fortunately, several network applications produce log files, which are generally used to measure service usage and to solve problems. These can also be used to measure network traffic, as one can usually find the wall-clock time, the end systems involved and the amount of data transferred (excluding protocol headers). In some cases extra information can be found as well, such as the duration of the data transfer. Some accuracy is lost, but we believe that most of the traffic process can be reconstructed.
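As a hypothetical illustration of this approach, the sketch below extracts the traffic-relevant fields (wall-clock time, end system, bytes transferred) from a web server access log. It assumes the Apache Common Log Format; the log layout, field names and sample line are assumptions for the example, not part of any particular measurement setup.

```python
import re
from datetime import datetime

# Pattern for the Apache Common Log Format (an assumption; other
# applications write different log layouts).
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_line(line):
    """Extract the fields needed to reconstruct the traffic process:
    end system, wall-clock time, and payload size (no protocol headers)."""
    m = CLF.match(line)
    if m is None:
        return None  # line not in the expected format
    size = 0 if m.group('bytes') == '-' else int(m.group('bytes'))
    when = datetime.strptime(m.group('time'), '%d/%b/%Y:%H:%M:%S %z')
    return m.group('host'), when, size

# Fabricated sample line, for illustration only.
sample = ('192.0.2.7 - - [10/Oct/2000:13:55:36 +0300] '
          '"GET /index.html HTTP/1.0" 200 4500')
print(parse_line(sample))
```

Running the parser over a full log and aggregating sizes per host or per time bin gives a coarse traffic trace without any on-wire capture hardware, at the cost of losing packet-level timing.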
The very first target in my thesis was to control scheduling based on the generated network traffic. However, further research indicated that the implementation is anything but easy: there is no optimal location for the throttle/accelerator, and to work efficiently, some buffers must be allocated.
List of publications.