Connections Between Network Topology and Network Security


Nils Kalstad Svendsen

Gjøvik University College

email: nilss@hig.no

Abstract. This paper is the project proposal for a Ph.D. thesis at Gjøvik University College. Based on recent advances in graph theory and studies of complex networks, we want to emphasise the connection between network topology and risks in the network. Today network security focuses on single-machine, single-session security. We hope to show that this approach misses the crucial fact that all machines are part of a large, dynamic, complex network, which might imply that many measures taken today are ineffective. The theory of heavy-tailed distributions eliminates to some extent the finite-size cut-off effect from the models and includes observations that previous models treated as extreme values or outliers. Possible paths for our project, depending on our findings, are: network dependability, epidemiology in networks, redundancy strategies in networks, and risk management in networks.

Contents

1. Introduction
1.1. Graph theory
1.2. Network modelling
1.3. Network security
2. Our approach to network security
3. Engagement
References

1. Introduction

In 1998 Duncan Watts and Steven Strogatz published the article "Collective dynamics of 'small-world' networks" in Nature [15]. This article was the first in a series of articles describing how classical Erdős–Rényi random graphs (described, among others, by Bollobás in [5]) do not fit the structure of real-world networks, and proposing more suitable models. The work of Watts and Strogatz was based on analyses of social networks, but it did not take long before large technical networks were analysed. In 1999 the Faloutsos brothers published an article on the topology of the Internet [9], stating that the connectivity of the routers and switches of the Internet satisfies a so-called power-law distribution. A breakthrough came in 1999 with the article of Barabási and Albert [2], proposing a model for how networks with power-law connectivity distributions can be grown, and showing that pages on the WWW also satisfy this model. We want to clarify and describe what impact this new understanding of networks has on information security.

1.1. Graph theory. Networks are normally modelled as graphs. A graph consists of N nodes and L edges connecting the nodes. Edges are distributed between nodes according to rules based on local or global properties of the graph, and they can be directed or undirected, with or without weights. As stated in [3], there has recently been much interest in developing and studying new random graph models that capture certain observed common features of many large-scale real-world networks. Although the recent activity in this area started with the "small-world" model of Watts and Strogatz [15], the main focus now seems to be on so-called "scale-free" random graphs, whose degree distributions follow power laws (also referred to as heavy-tail distributions, Pareto distributions, Zipfian distributions, etc.).

1.1.1. The Erdős–Rényi model. The first random, undirected graph model introduced was the Erdős–Rényi network from the early 1960s. There are two main constructions of such graphs with a fixed number N of vertices:

(1) The set of graphs G_{N,p} is constructed by connecting any two vertices of the network by an edge with probability p.

(2) The set of graphs G_{N,L} is constructed by distributing L edges between randomly chosen pairs of vertices.

These two constructions define two equivalent statistical ensembles of graphs. According to Dorogovtsev and Mendes [8], the set of graphs in construction (1) is similar to the grand canonical ensemble in statistical mechanics (temperature and chemical potential are fixed), whereas the set of graphs in construction (2) is similar to the microcanonical ensemble (energy and number of particles are fixed).
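As an illustration, the two constructions can be sketched in a few lines of Python (a minimal stdlib-only sketch; the function names `gnp` and `gnl` are mine, not taken from the literature):

```python
import random

def gnp(n, p, seed=None):
    """Construction (1): connect each of the n*(n-1)/2 vertex pairs
    independently with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def gnl(n, l, seed=None):
    """Construction (2): distribute exactly l edges between randomly
    chosen pairs of vertices (no self-loops, no multi-edges)."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < l:
        i, j = rng.sample(range(n), 2)
        edges.add((min(i, j), max(i, j)))
    return edges
```

For large N the two ensembles are statistically equivalent when l ≈ p·N(N−1)/2, which is the equivalence of ensembles referred to above.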

The degree distribution, P(k), is one of the parameters often used to describe a network. Considering model (1), where each edge is established independently, the degree of a node follows the binomial distribution with parameters N − 1 and p. It is a well-known fact that this distribution can be approximated by the Poisson distribution for large N, where pN ≈ ⟨k⟩ is the average degree of the nodes in the network. Hence P(k) is given by

P(k) = e^(−⟨k⟩) ⟨k⟩^k / k!

This result can be obtained by more rigorous arguments, as by Bollobás in [5]. The most characteristic trait of the degree distribution of the Erdős–Rényi model is that it decays faster than exponentially for large k, allowing only very small degree fluctuations.
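The Poisson approximation is easy to check numerically. The sketch below (stdlib only; all names are mine) samples one G(N, p) graph and compares the empirical fraction of nodes of degree ⟨k⟩ with the Poisson prediction:

```python
import math
import random
from collections import Counter

def degree_fractions(n, p, seed=0):
    """Sample one G(n, p) graph and return the fraction of nodes of each degree."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return {k: c / n for k, c in Counter(deg).items()}

def poisson(k, mean):
    """P(k) = e^{-<k>} <k>^k / k!"""
    return math.exp(-mean) * mean ** k / math.factorial(k)

n, p = 2000, 0.005   # average degree <k> = p*n = 10
emp = degree_fractions(n, p)
```

For these (illustrative) parameters the empirical fraction at k = 10 should lie close to the Poisson value of about 0.125, with sampling noise of order 1/sqrt(N).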

Another basic parameter for describing a network is the clustering coefficient. The clustering coefficient c of a node is the ratio between the number y of edges connecting its nearest neighbours and the total number of possible edges between those neighbours. Given that a node has z nearest neighbours, we have

c = 2y / (z(z − 1)).

In the Erdős–Rényi model each pair of neighbours is connected with probability p, so

c = p = ⟨k⟩ / N.

From this we conclude that the clustering coefficient of the Erdős–Rényi model, at fixed average degree, vanishes in the limit of large N. Ordered lattices, on the other hand, allow one to tune c to any desired value. Inspired by the fact that many social networks are highly clustered, while at the same time exhibiting a small average distance between vertices, Watts and Strogatz [15] proposed a model that interpolates between ordered lattices and purely random networks.
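The definition of the clustering coefficient translates directly into code (a stdlib-only sketch; the adjacency-dictionary representation and names are mine):

```python
from itertools import combinations

def clustering(adj, node):
    """c = 2y / (z(z-1)): z is the number of nearest neighbours of `node`,
    y the number of edges realised between those neighbours."""
    nbrs = adj[node]
    z = len(nbrs)
    if z < 2:
        return 0.0
    y = sum(1 for u, v in combinations(sorted(nbrs), 2) if v in adj[u])
    return 2 * y / (z * (z - 1))

# A triangle (0,1,2) with one pendant vertex 3 attached to node 0.
graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

Here node 0 has three neighbours with one realised edge among them, so c = 1/3, while node 1 sits in a closed triangle and has c = 1.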

The Watts–Strogatz model starts with a ring of N nodes in which each node is symmetrically connected to its 2m nearest neighbours (m in the clockwise and m in the counter-clockwise sense). Then, for every node, each edge connected to a clockwise neighbour is rewired with probability p and preserved with probability 1 − p. The rewiring connects the edge endpoint to a randomly chosen vertex, avoiding self-connections. The parameter p therefore tunes the level of randomness present in the graph, keeping the number of edges constant. With this construction we obtain a graph of average degree ⟨k⟩ = 2m.
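The construction can be sketched as follows (stdlib only; `watts_strogatz` is my name for it, and rejecting duplicate edges during rewiring is one common convention):

```python
import random

def watts_strogatz(n, m, p, seed=None):
    """Ring of n nodes, each joined to its m nearest neighbours on either
    side; every clockwise edge is then rewired with probability p to a
    uniformly random endpoint, avoiding self-loops and duplicate edges."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for k in range(1, m + 1):
            j = (i + k) % n
            adj[i].add(j)
            adj[j].add(i)
    for i in range(n):
        for k in range(1, m + 1):
            j = (i + k) % n
            if j in adj[i] and rng.random() < p:
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].discard(j)
                adj[j].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

net = watts_strogatz(100, 3, 0.1, seed=4)
```

Every rewiring removes one edge and adds one new edge, so the total number of edges stays fixed at n·m, as the model requires.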

For p > 0 the degree distribution can be shown to be

P(k) = sum_{n=0}^{min(k-m, m)} C(m, n) (1-p)^n p^(m-n) [(pm)^(k-m-n) / (k-m-n)!] e^(-pm), for k ≥ m,

where C(m, n) denotes the binomial coefficient. In the limit of p → 1 it can be shown that the above expression reduces to

P(k) = [m^(k-m) / (k-m)!] e^(-m),

a Poisson-type distribution peaked at the average degree. For the clustering coefficient one obtains the approximation

c_p ≈ [3(m-1) / (2(2m-1))] (1-p)^3.

In a wide range of the parameter p the Watts–Strogatz model thus combines a high clustering coefficient c and a small average distance between vertices. Its degree distribution, however, still decays exponentially, whereas measurements on many real networks indicate power-law tails of the form P(k) ~ k^(-3) or similar. The Barabási–Albert model [2] reproduces such tails by growing the network: starting from a small core of m_0 nodes, a new node with m edges is added at every time step, and these edges are attached to already existing nodes with probability proportional to their degree (preferential attachment). After t time steps the network consists of t + m_0 nodes.
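The preferential-attachment growth rule can be sketched with the standard degree-weighted-list trick (stdlib only; all names are mine):

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=None):
    """Growing network: each new node attaches m edges to existing nodes,
    chosen with probability proportional to their current degree.  The
    `repeated` list holds each node once per unit of degree, so a uniform
    draw from it is exactly a degree-proportional draw."""
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))   # the initial core of m nodes
    repeated = []
    for source in range(m, n):
        edges.extend((source, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([source] * m)
        targets = []
        while len(targets) < m:
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

edges = barabasi_albert(1000, 3, seed=5)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
```

The average degree is close to 2m, but the oldest nodes accumulate degrees far above it, which is the hallmark of the power-law tail.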

In the limit when t → ∞ we can obtain the time-independent solution

P(k) = 2 m^2 k^(-3),

which indicates that the preferential attachment spontaneously generates a network with power-law degree behaviour. Pastor-Satorras and Vespignani also give an estimate for the average clustering coefficient,

c ≈ m (ln N)^2 / (8N).

We see that this clustering coefficient also vanishes in the limit of large N.

1.2. Network modelling. The models described above have been studied extensively by simulation, whereas the mathematical work done in the field to date is rather limited, according to Bollobás and Riordan [3]. Besides describing the topology of the models and fitting models onto real-world networks, the main features deduced from the models are:

(1) Scale-free graphs are extremely robust against random attacks.

(2) Scale-free graphs are extremely vulnerable to targeted attacks.

(3) Infinite-size scale-free graphs have no epidemic threshold.
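Features (1) and (2) can be illustrated numerically: remove 5% of the nodes of a preferentially grown graph either at random or in order of decreasing degree, and compare the size of the largest surviving component (a stdlib-only sketch; the generator, the 5% fraction, and all names are my illustrative choices):

```python
import random
from collections import Counter, deque

def grow_scale_free(n, m, seed=0):
    """Preferential-attachment growth (Barabási–Albert style)."""
    rng = random.Random(seed)
    edges, targets, repeated = [], list(range(m)), []
    for source in range(m, n):
        edges.extend((source, t) for t in targets)
        repeated.extend(targets + [source] * m)
        targets = []
        while len(targets) < m:
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

def largest_component(alive, edges):
    """Size of the largest connected component among the `alive` nodes."""
    adj = {u: [] for u in alive}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].append(v)
            adj[v].append(u)
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        q, size = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, size)
    return best

n, m, removed = 1000, 2, 50
edges = grow_scale_free(n, m, seed=1)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
hubs = {u for u, _ in deg.most_common(removed)}           # targeted attack
rand = set(random.Random(2).sample(range(n), removed))    # random failures
after_attack = largest_component(set(range(n)) - hubs, edges)
after_failure = largest_component(set(range(n)) - rand, edges)
```

Removing the hubs strips away a disproportionate share of the edges, so the giant component shrinks far more under the targeted attack than under the same number of random failures.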

There is a set of characteristics of a network that can be extracted from its topology. Examples of such characteristics are: the total numbers of nodes (N) and edges (L), the degree distribution (P(k)), the average shortest path length (⟨l⟩), and the clustering coefficient (c).
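The average shortest path length, for instance, can be computed by breadth-first search from every node (a stdlib-only sketch for small graphs; names are mine):

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean of d(u, v) over all connected ordered pairs, via BFS from each node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# 4-cycle: from each node the three others sit at distances 1, 1 and 2,
# so the average shortest path length is 4/3.
ring4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

For large graphs one would replace the all-pairs BFS with sampling, but the definition is the same.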

As described by Pastor-Satorras and Vespignani in [14], Internet topology generators impose the desired properties of the network from the outset by designing ad hoc algorithms. This means that topology generators are just a representation of the limited knowledge computer scientists have of the Internet. A far more interesting method is that of statistical physics, where one explains the large-scale properties of matter in terms of basic elements, such as molecules and atoms. In the case of the Internet, the ultimate goal of such an approach is to understand the observed empirical laws in terms of emergent properties, spontaneously developed from the microscopic dynamics of its elements.

The statistical physics approach to network modelling naturally focuses on the dynamical evolution rules as the key elements responsible for the structural properties of the whole system. As a result of this change of perspective a new class of models has emerged, based on the realization of two fundamental facts:

(1) The Internet, as well as many other complex networks, is a growing network, whose numbers of vertices and edges continuously increase with time.

(2) Connections are placed by following random processes biased by the local properties of nodes.

In view of the expectations and demands of the end users, edges are established following preferential mechanisms related to the connectivity and centrality of already existing vertices. These considerations have triggered the development of degree-driven models, whose paradigm can be found in the Barabási–Albert model [1].

We also note that in network theory it is important to be aware of the fact that networks are built on top of networks. Hence the network depicted at one level does not necessarily reflect the layout of the "real" network, and the (too) common assumption of independent failure and repair of the various links and nodes of networks should be used with great care [10].

1.3. Network security. Network security is a broad field ranging from protocol verification to firewalls and detection of malicious software. The main focus up to now has been to enable secure communication between two or several clients and to protect participants against known attacks. On the other hand, following from Turing's halting theorem, it can be proved that one cannot protect oneself against all attacks. There is an increasing awareness of the fact that there is no such thing as an absolutely secure system. As a consequence, new fields, such as security metrics and information assurance, have been introduced to information security.


Unfortunately these new fields also often take single-session approaches. Looking for literature focusing on how network topology influences the security of the network, one will discover that few works focus on this. Williamson and Léveillé [18] state that since 2000 there has been a growing body of interest in using epidemiological models to gain insight into computer viruses. Interesting questions related to the spread of malware are:

(1) How fast does it spread?

(2) How can outbreaks be isolated?

(3) How much time do we have to develop countermeasures?

Today the spread of malicious software in a network is very rapid. According to Moore et al. [11], the Sapphire worm, for example, infected more than 90% of all vulnerable hosts on the Internet within 10 minutes. Taking into account the results of Pastor-Satorras and Vespignani's paper "Epidemic dynamics in finite size scale-free networks" [13], cited earlier in this proposal, this is without doubt due to the scale-free structure of the Internet: a negligible percolation threshold implies that one or a few sources can infect the entire network, and the small-world effect causes the rapid spreading. With such propagation speeds human action can in no way slow down the contamination rate; human-based systems simply do not have the time to react before the entire network is contaminated.
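The qualitative point, that a single source suffices and spreading is limited only by path lengths, shows up in even the crudest susceptible-infected simulation (a stdlib-only sketch; the ring topology and parameters are illustrative, not a worm model):

```python
import random

def si_spread(adj, beta, steps, seed=0):
    """Discrete-time SI dynamics: each infected node infects each susceptible
    neighbour independently with probability beta per step.  Returns the
    number of infected nodes after each step, starting from one seed node."""
    rng = random.Random(seed)
    infected = {min(adj)}            # start from a single node
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for u in infected:
            for w in adj[u]:
                if w not in infected and rng.random() < beta:
                    new.add(w)
        infected |= new
        history.append(len(infected))
    return history

# Ring of 50 nodes; with beta = 1 the infection front advances one hop per
# step in both directions, so the whole ring is infected after 25 steps.
ring = {i: {(i - 1) % 50, (i + 1) % 50} for i in range(50)}
history = si_spread(ring, 1.0, 25)
```

On a ring the contamination time grows linearly with N; on a scale-free graph the hubs act as shortcuts, which is exactly why the contamination time collapses to a few steps.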

This observation is crucial when we take into account the increasing reliance on computing networks and network services, and it raises a number of security concerns. These concerns have been formulated by the organisers of the Adaptive and Resilient Computing Security Workshop taking place at the Santa Fe Institute in November 2004. On their home page they write:

"The increasing reliance on computing networks and services raises a number of security concerns. Firstly, that the increasing resource value in such networks will attract greater levels of targeted attacks (and general malicious activity via computer viruses/worms/hacking). Secondly, that the massive increase in network complexity engenders higher risks of system failure. Some of the specific problems which will be addressed include:

(1) Design of self-healing networks

(2) Optimisation versus robustness

(3) Machine learning and defence strategies

(4) Dynamic stability in large-scale networks

(5) Self and non-self recognition, immunology models

(6) Impact of topology on network resilience"

The workshop will set the state of the art in these fields. We conclude that there is a growing awareness of the fact that the field of network security should not only deal with the challenges of setting up secure sessions between two users in a network; one also has to consider how every machine influences the network and how the network influences the machines.

2. Our approach to network security

In the introduction we saw how graph theory and network modelling have interacted to improve the theory and understanding of communication networks. One result of this is more accurate network models, which create better environments for simulating new protocols. We notice that few groups have studied how the new network models influence security. Network security models today have a single-machine/single-session focus. We want to focus on how the topology of the network influences epidemiology in the network and network resilience. To achieve our goal we will have to build a simulation tool based on different network models. Through cooperation with FFI we will hopefully be able to calibrate our model. Based on the model there is a variety of topics we would like to explore. Possible problems are:

(1) Epidemiology in large complex networks

(2) Measuring the effect of implementing common measures against malware spreading at several network hubs

(3) Robustness and vulnerability versus optimisation in network design

(4) Redundancy strategies

(5) Risk management in networks

How we attack these problems will depend on the cooperation with our partners. The ultimate goal of our work would be a tool monitoring risks in the network, such as:

(1) The risk of a fraction of the nodes being disconnected.

(2) The risk of malicious software spreading through the network at high speed.

(3) The risk and consequences of random failure of nodes.

We will also investigate the possibility of including system-dynamics models of networks. These models can be used to describe how networks interact with their environment. There is a wide range of networks involved in information and communication systems, and one has to be careful, as several networks are built on top of each other.

Networks of interest include, among others: the Internet, mail networks, internal networks in enterprises, physical communication networks, ad hoc network structures and distributed computing systems. It is difficult to be specific about networks and particular applications, as we are dependent on how the theoretical studies evolve.

Our methods will be mathematical modelling and simulation. We intend to explore the topology of information and communication networks and show the influence of network topology on network behaviour. As a basis for our research we see clear connections to several well-matured fields, among these: graph theory, percolation theory, queueing systems, Markov chains, statistical physics and physical kinetics.

3. Engagement

We commit to this project description.

Gjøvik, 01.10.2004,

Nils Kalstad Svendsen
Stipendiat, HiG
2653 Vestre Gausdal

Jan Arild Audestad
Professor, HiG
Postboks 1505
1506 Moss

Pål Spilling
Professor, UiO
Postboks 1080
0316 Oslo

References

[1] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 2002.

[2] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286:509–511, 1999.

[3] Béla Bollobás and Oliver Riordan. Robustness and vulnerability of scale-free random graphs. Internet Mathematics, 1:1–35, 2003.

[4] Béla Bollobás and Oliver M. Riordan. Handbook of Graphs and Networks: From the Genome to the Internet, chapter 1, pages 1–34. Wiley-VCH, first edition, 2003.

[5] Béla Bollobás. Random Graphs. Cambridge Studies in Advanced Mathematics. Cambridge University Press, second edition, 2001.

[6] Stefan Bornholdt and Heinz Georg Schuster (Eds.). Handbook of Graphs and Networks: From the Genome to the Internet. Wiley-VCH, first edition, 2003.

[7] S. N. Dorogovtsev and J. F. F. Mendes. Evolution of networks. Advances in Physics, 51:1079–1187, 2002.

[8] S. N. Dorogovtsev and J. F. F. Mendes. Evolution of Networks: From Biological Nets to the Internet and WWW. Oxford University Press, first edition, 2003.

[9] Michalis Faloutsos, Petros Faloutsos, and Christos Faloutsos. On power-law relationships of the Internet topology. In SIGCOMM, pages 251–262, 1999.

[10] Bjarne E. Helvik. Dependable computing systems and communication networks: design and evolution. Draft lecture notes, Department of Telematics, NTNU, January 2001.

[11] David Moore, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas Weaver. Inside the Slammer worm. IEEE Security and Privacy, 1(4):33–39, July 2003.

[12] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45(2):167–256, 2003.

[13] Romualdo Pastor-Satorras and Alessandro Vespignani. Epidemic dynamics in finite size scale-free networks. Physical Review E, 65, 2002.

[14] Romualdo Pastor-Satorras and Alessandro Vespignani. Evolution and Structure of the Internet: A Statistical Physics Approach. Cambridge University Press, first edition, 2004.

[15] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of "small-world" networks. Nature, 393:440–442, 1998.

[16] Matthew M. Williamson. Throttling viruses: Restricting propagation to defeat malicious mobile code. In Proceedings of the Annual Computer Security Applications Conference, December 2002.

[17] Matthew M. Williamson. Design, implementation and test of an email virus throttle. In Proceedings of the Annual Computer Security Applications Conference, December 2003.

[18] Matthew M. Williamson and Jasmin Léveillé. An epidemiological model of virus spread and cleanup. In Proceedings of the Virus Bulletin Conference, September 2003.
