# BENet 2014 Abstracts

#### Keynote

**Abstract:**

Multilevel networks can be defined, for two levels, as a set of lower-level (level 1) nodes and their connections, a set of higher-level (level 2) nodes and their connections, and perhaps a cross-level network between the level 1 nodes and the level 2 nodes. An example would be academics (level 1 nodes) who collaborate within and between universities on their research (the level 1 network); universities (level 2 nodes) that set up student exchange programmes or shared degree programmes (the level 2 network); and perhaps individual academics (level 1 nodes) who are invited to be external examiners at universities (level 2 nodes) other than their usual place of work (the cross-level network). Given such a multilevel network, our target of inference might be understanding the nature of this multilevel structure at a given point in time, or understanding how an attribute of the level 1 nodes varies between the level 1, level 2, and cross-level networks in an analysis of multilevel network dependencies.
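The two-level structure described above can be sketched as three separate edge sets; a minimal Python illustration with hypothetical toy data (all names and ties are invented):

```python
# Level 1 nodes (academics), level 2 nodes (universities), and the three
# networks of a two-level multilevel network kept as separate edge sets.
level1 = {"ann", "bob", "eve"}        # academics
level2 = {"UA", "UB"}                 # universities
level1_edges = {("ann", "bob")}       # research collaboration
level2_edges = {("UA", "UB")}         # student exchange programme
cross_edges = {("eve", "UA")}         # external examiner invitation

def degree(node, edges):
    """Number of ties incident to `node` within one of the three networks."""
    return sum(node in e for e in edges)

# An attribute of a level 1 node can then be related to its position in
# each of the three networks separately:
print(degree("ann", level1_edges), degree("eve", cross_edges))  # 1 1
```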

Multilevel analysis is sometimes useful for testing substantive hypotheses relating to social networks. However, the multilevel analysis of a network does not automatically make that network a multilevel network. For example, multilevel analysis is very useful for the analysis of single-level ego-nets, and has an important role to play in analysing multiplex network dependencies for a single level of network nodes. Equally, given certain targets of inference, a multilevel analysis can also be very useful for analysing a multilevel network.

These subtle differences can be confusing. In this talk I seek to clarify the difference between multilevel networks and the multilevel analysis of networks, and to highlight these differences with real-data examples of the multilevel analysis of single-level and multilevel networks. I will also give an example of when a multilevel analysis is not the right approach for a multilevel network.

#### Session I

Chair: Glenn Magerman (KUL)

**Abstract:**Over the past century, increasing international trade in goods, services, financial flows and migration has dramatically altered living conditions around the globe for billions of people. The globalization pattern itself has also changed significantly in response to new technologies and changes in the international political landscape. This paper tracks the evolution of the global trade network over the years 1880 to 2000 and studies how it has affected, and been affected by, political and institutional changes. Starting from historical trade and GDP data, we first construct the Historical Integration Index, hii(A,B), which captures the impact of bilateral exports to, and imports from, country B on country A's economy. This index is then aggregated into the global Historical Integration Network (HIN). Using this weighted, directed network, we then ask whether trade has evolved along clearly delineated trading blocs (communities) or whether it has instead developed into a core-periphery structure. In addition, we look at the effect of the World Wars, colonialism and institutional integration agreements (for example the European Union) on the global trade network. We find that the Historical Integration Network cannot be divided into clear-cut communities and hence can be considered truly globalized. Countries can be grouped into somewhat more close-knit communities, but the inter-community integration is too high to consider them clear-cut. This leads to a low modularity M of the network, which is insignificant in most years. Concerning the core-periphery structure of the world, we uncover a highly centralized, unequal HIN. The time evolution of the cp-coreness C can be divided into three periods. From 1880 to 1900 the HIN becomes less centralized, from 1900 to 1970 C increases, and from 1970 to 2000 it flattens out.
If globalization and cp-coreness are considered to be each other's antagonists, then the first and last periods could be tentatively identified with the two waves of globalization. We also found that the number of countries in the core stays constant and that when a new country is added to the network, it is always added to the periphery. This would explain the gradual decrease in network density over time. Finally, the historical integration index behaves in the way predicted by trade and political theory. Countries are more highly integrated the closer they are to each other, if they speak a similar language, and if they are trading with their former hegemon. Surprisingly, the level of integration is significantly lower when dealing with a (former) colony, but this is probably because we are not controlling for the level of development of the partner country. The World Wars lowered the level of integration, but less so than the interbellum did, which confirms that this period was one of reversed globalization. Finally, the North American Free Trade Agreement, the European Union and the European Free Trade Agreement all succeeded in raising the level of integration, even though they were mostly concluded between countries that were already likely to be trading partners.
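The modularity M mentioned above can be illustrated on a toy example. The sketch below computes the standard Newman modularity for an undirected weighted edge list (a simplification: the paper's HIN is directed), with hypothetical trade data:

```python
def modularity(edges, partition):
    """Newman modularity Q for an undirected weighted edge list.

    edges: iterable of (u, v, w); partition: dict node -> community id."""
    m = sum(w for _, _, w in edges)                 # total edge weight
    strength = {}                                   # weighted degree per node
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    q = 0.0
    for c in set(partition.values()):
        # within-community weight fraction minus its expected value
        e_c = sum(w for u, v, w in edges
                  if partition[u] == c and partition[v] == c)
        d_c = sum(s for n, s in strength.items() if partition[n] == c)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two clear trade "blocs" joined by one weak link (hypothetical data):
edges = [("A", "B", 1), ("B", "C", 1), ("A", "C", 1),
         ("D", "E", 1), ("E", "F", 1), ("D", "F", 1),
         ("C", "D", 0.1)]
part = {**{n: 0 for n in "ABC"}, **{n: 1 for n in "DEF"}}
print(round(modularity(edges, part), 3))  # 0.484
```

A genuinely globalized network, as the abstract describes, would admit no partition with modularity much above zero.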

**Abstract:**Networks have been the object of much interest in finance, chiefly since the collapse of Lehman Brothers in 2008. Other areas of economics are beginning to integrate networks into their toolboxes. In this work we focus on the International Trade Network, consisting of the import-export data across all countries over a year. While most classic measures used to assess the health of a country in the trade network are direct, i.e. counting the intensity of imports or exports with direct trade partners, we develop tutorial arguments to show that "the network matters": paths originating from one country (studying trade partners' partners, for instance) are just as revealing as mere edges. Such arguments must be developed convincingly in order to validate network-based methods as a useful complement to mainstream measures. We briefly review some recent results in the literature pursuing that direction. As a specific case study, we investigate the ever more integrated and ever more unbalanced trade relationships between European countries. To better capture the complexity of economic networks, we propose two global measures that assess the trade integration and the trade imbalances of the European countries. These measures are the network (or indirect) counterparts to traditional (or direct) measures such as the trade-to-GDP (Gross Domestic Product) and trade deficit-to-GDP ratios. Our indirect tools account for the European inter-country trade structure and follow (i) a decomposition of the global trade flow into elementary flows that highlight the long-range dependencies between exporting and importing economies and (ii) the commute-time distance for trade integration, which measures the impact of a perturbation in the economy of one country on another country, possibly through intermediate partners by domino effect. Our application addresses the impact of the launch of the Euro.
We find that the indirect imbalance measures better identify the countries ultimately bearing deficits and surpluses, by neutralizing the impact of trade-transit countries such as the Netherlands. Among other results, we find that Germany's ultimate surpluses are quite concentrated in only three partners. We also show that for some countries the direct and indirect measures of trade integration diverge, thereby revealing that these countries (e.g. Greece and Portugal) trade to a smaller extent with countries considered central in the European Union network. More information can be found in the PLoS ONE 2014 paper:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0083448
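The commute-time distance used above can be illustrated on a tiny graph. The sketch below computes it from expected random-walk hitting times via a small linear solve; it assumes an unweighted, undirected toy graph, whereas the paper works with the weighted European trade network:

```python
def hitting_time(adj, target):
    """Expected number of random-walk steps to first reach `target` from
    every other node, by solving (I - P)h = 1 with Gauss-Jordan elimination."""
    nodes = [u for u in adj if u != target]
    idx = {u: k for k, u in enumerate(nodes)}
    n = len(nodes)
    A = [[0.0] * (n + 1) for _ in range(n)]        # augmented linear system
    for u in nodes:
        i = idx[u]
        A[i][i] = 1.0
        for v in adj[u]:
            if v != target:
                A[i][idx[v]] -= 1.0 / len(adj[u])  # random-walk transition
        A[i][n] = 1.0
    for col in range(n):                           # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    h = {u: A[idx[u]][n] / A[idx[u]][idx[u]] for u in nodes}
    h[target] = 0.0
    return h

def commute_time(adj, i, j):
    """Commute time: expected steps to go from i to j and back to i."""
    return hitting_time(adj, j)[i] + hitting_time(adj, i)[j]

# Path graph 0 - 1 - 2 (m = 2 edges): the commute time between adjacent
# nodes equals 2m * effective resistance = 4.
path = {0: [1], 1: [0, 2], 2: [1]}
print(commute_time(path, 0, 1))  # 4.0
```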

**Abstract:**The recent financial crisis has fostered a renewed interest in systemic risk. While numerous measures have been developed in the literature to cope with the various dimensions of the phenomenon, a recent literature has emphasized the need for more explorations through the lens of complexity science. More specifically, representing financial institutions and their relationships as the vertices and edges of a directed graph capturing the underlying structure of the financial market as a complex adaptive system can help to better understand the nature of the mechanisms at stake during contagion episodes. In this paper, we propose an empirical strategy to assess, on real data, the impact of the financial network structure on systemic risk and, more specifically, to test the existence of a tipping point as conjectured by Haldane (2009) and further modelled in the recent contribution of Acemoglu et al. (2013). Our results suggest that in addition to the traditional firm-level determinants, network structure significantly impacts systemic risk, as more central nodes appear to represent the most systemically important financial institutions in our sample. In addition, we do detect threshold effects in the impact of large adverse shocks when combined with high levels of closeness centrality, which tends to confirm Acemoglu et al.'s theoretical predictions.
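As a small illustration of the centrality measure involved, here is a minimal closeness-centrality sketch on a hypothetical interbank graph (an unweighted BFS version, not the authors' actual estimation strategy):

```python
from collections import deque

def closeness(adj, node):
    """Closeness centrality via BFS: (n - 1) / sum of shortest-path lengths."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# Hypothetical star-shaped interbank market: the hub is the most central bank.
star = {"hub": ["b1", "b2", "b3"],
        "b1": ["hub"], "b2": ["hub"], "b3": ["hub"]}
print(closeness(star, "hub"))           # 1.0
print(round(closeness(star, "b1"), 2))  # 0.6
```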

**Abstract:**In this paper, we propose an agent-based model (ABM) of the interbank market in which agents' (banks') bilateral lending/borrowing decisions are driven primarily by the level of trust between every pair of banks in the system. This approach provides a behavioural foundation for the freeze in interbank credit that occurred in the wake of the collapse of Lehman Brothers in 2008. Moreover, we embed the ABM in a network model calibrated to replicate observed properties of real interbank markets (namely, scale-free behaviour and disassortative mixing). Consequently, each bank represents a node in the network, with the (weighted, directed) edges representing the bilateral exposures (i.e. the lending and borrowing relationships) between them. This network is taken as an initial condition for our dynamic approach, whereby edge weights are endogenously updated as banks suffer exogenous shocks and provide/request liquidity on the interbank market. We begin by assuming that each agent i is characterized by a bilateral lender trust, which captures how much i trusts each of its borrowing counterparties, and a borrower trust, capturing i's level of trust in the neighbours from whom it borrows. Note that the number of lending/borrowing neighbours for each i is determined by the in- and out-degree of the node, respectively. The initial lender (borrower) trust parameter ascribed to each node is simply given by the inverse of the in- (out-)degree (this is intuitive, as it implies that banks with fewer connections have a stronger trust relationship with each of their counterparties; indeed, banks have been observed to engage in relationship lending with each other). Dynamic trust between banks enters via simple heuristics on how to redistribute total lending and borrowing amongst potential borrowers and lenders. Specifically, we assume that each bank first determines potential lending and borrowing as a function of its lender and borrower trust parameters.
Effective lending and borrowing is obtained by matching borrowing requests with loan provision for each pair (i, j) and taking the minimum of the two. In the first of two endogenous trust updates, borrowing banks update their trust in each lending counterparty as a function of the difference between how much they requested and how much they received. From a network perspective, this varies the weight of the edge in question.
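The heuristics described above can be sketched in a few lines; note that the multiplicative update rule and its speed parameter `eta` are illustrative assumptions, not the authors' exact specification:

```python
def initial_trust(counterparties):
    """Initial trust: the inverse of the number of counterparties (degree)."""
    return {j: 1.0 / len(counterparties) for j in counterparties}

def match(offers, requests):
    """Effective lending: the minimum of offer and request for each pair."""
    return {j: min(offers[j], requests[j]) for j in offers}

def update_borrower_trust(trust, requests, received, eta=0.5):
    """Lower trust in lenders that provided less than was requested;
    eta is a hypothetical adjustment speed."""
    new = {}
    for j in trust:
        shortfall = (requests[j] - received[j]) / requests[j] if requests[j] else 0.0
        new[j] = trust[j] * (1.0 - eta * shortfall)
    return new

lenders = ["L1", "L2"]
trust = initial_trust(lenders)        # both start at 1 / in-degree = 0.5
requests = {"L1": 10.0, "L2": 10.0}
offers = {"L1": 10.0, "L2": 4.0}      # L2 rations the borrower
received = match(offers, requests)
trust = update_borrower_trust(trust, requests, received)
print(trust["L1"], trust["L2"])       # trust in L2 falls from 0.5 to 0.35
```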

#### Session II

Chair: Dirk Jacobs (ULB)

**Abstract:**This article reflects on the use of predetermined genre lists to measure patterns in music taste and, more specifically, cultural omnivorousness. The use of a predetermined array of genres assumes that music genres are rigid and stable concepts, whereas in reality genre boundaries continually emerge, evolve, and disappear. Inspired by Lamont’s (2010) call to study classification systems ‘from the ground up’, we present an alternative strategy to measure patterns of music taste using an open question about artist preferences. We build a two-mode network of artists and respondents to identify clusters of respondents that have similar relationships to the same set of artists. Our results show that research using measurements of cultural omnivorousness based on genre preferences might be hampered, as it misses important subdivisions within genres and is not able to capture respondents who combine specific aspects within and across music genres.
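One simple way to group respondents by shared artist preferences in such a two-mode network is sketched below; the Jaccard-based greedy grouping, the threshold, and all data are illustrative assumptions, not the authors' actual method:

```python
def jaccard(a, b):
    """Overlap between two artist sets."""
    return len(a & b) / len(a | b)

def cluster(prefs, threshold=0.3):
    """Greedy single-link grouping: a respondent joins the first group
    containing someone with sufficiently similar artist preferences."""
    groups = []
    for r, artists in prefs.items():
        for g in groups:
            if any(jaccard(artists, prefs[m]) >= threshold for m in g):
                g.append(r)
                break
        else:
            groups.append([r])
    return groups

prefs = {  # hypothetical open-question answers
    "r1": {"Radiohead", "Portishead"},
    "r2": {"Radiohead", "Massive Attack"},
    "r3": {"Bach", "Chopin"},
    "r4": {"Chopin", "Liszt"},
}
print(cluster(prefs))  # [['r1', 'r2'], ['r3', 'r4']]
```

Note how the clusters cut across any predetermined genre list: they are induced purely by which artists respondents name together.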

**Abstract:**For many years, educational research has focused on the job satisfaction of teachers to explain well-being, absenteeism, school quality and the decision to stay in or leave the profession (Ingersoll & Smith, 2003). Job satisfaction is therefore one of the most frequently investigated job attitudes and can be defined as “the pleasurable or positive emotional state resulting from the appraisal of one’s job and job experience” (Locke, 1976, p. 1300). In recent decades, researchers in organization studies have increasingly used social integration or ‘fit’ in the organization to explain job satisfaction from a contextual perspective. Based on the literature, this fit in an organization can be conceptualized in several ways. One way is person-organization fit (P-O fit), which reflects the compatibility between a person and the organization (Kristof, 1996). P-O fit can be measured on various dimensions, but this study uses value congruence, one of the most commonly used measures. Multiple studies have already provided evidence that the value congruence of an employee is linked to his or her job satisfaction (Bretz Jr & Judge, 1994; Silverthorne, 2004). A second way to conceptualize the fit of an individual is the social-structural position in the organizational network. Studies using embeddedness theory as a framework indicate that the ‘links’ a person has are crucial for social integration in the organization (Granovetter, 1985). ‘Links’ can then be described as the number of ties individuals have with other people and activities at work (Mitchell, Holtom, Lee, Sablynski, & Erez, 2001). Thomas and Feldman (2007) indicated that the more links a person has, the more professionally and personally tied this person will feel to the organization. Often, researchers investigate which network configurations lead to higher job satisfaction (Lee & Kim, 2011).
Previous research in non-educational settings showed that the centrality of an actor has positive effects on job satisfaction (Kilduff & Krackhardt, 1994) and employee retention (Mossholder, Settoon, & Henagan, 2005). In educational research, attention to the fit of teachers in their school has been limited, focusing especially on the integration or isolation of teachers. Bakkenes, De Brabander & Imants (1999) showed that teacher isolation causes absenteeism and low job satisfaction. Skaalvik and Skaalvik (2011) found that the extent to which teachers share values with other school team members is important for their job satisfaction. Others have argued that collegial relationships and integration are important predictors of the satisfaction teachers derive from doing their job (Xin & MacMillan, 1999). However, limited attention has been paid to the P-O fit and social-structural fit of teachers. Given that organizational attitudes and behaviours are socially constructed, this study aims to provide clarity on whether and to what extent the fit of teachers can be associated with their job satisfaction. To provide an answer, both attribute and social network data of approximately 1050 school team members, working in 14 secondary schools in Flanders, were gathered. The attribute data that were collected concerned several attitudes about the profession and the school as a workplace, such as job satisfaction and the desired and actual collaboration in the school. Based on the latter two, different measures of value congruence can be calculated: perceived P-O fit and objective P-O fit (O'Reilly, Chatman, & Caldwell, 1991).
Perceived P-O fit is constructed by calculating the difference between the individual’s desired collaboration and how the individual perceives the actual collaboration in the school, while objective P-O fit compares the individual’s perception of the desired collaboration with the school team’s perception of the actual collaboration. Although both operationalize the concept of P-O fit, previous research reports disparate findings due to the difference in measurement (Verquer, Beehr, & Wagner, 2003). As this study is one of the first to explore the P-O fit of teachers, both measures will be included in the analyses. Relational data were derived from two sociometric questions concerning the information and the social support network. Social network analysis is the most appropriate method to analyse relational data, whereby relations are seen as the linkages between agents (Scott, 2004). The basic building block of social networks is the tie and its presence or absence, which makes it possible to calculate several network measures, such as indegree centrality, betweenness centrality and closeness centrality (Borgatti & Foster, 2003). UCINET (Borgatti, Everett, & Freeman, 2002) will be used to compute the network centrality measures. To investigate if and to what extent the several measures of fit can be related to teachers’ job satisfaction, regression models will be estimated.
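The two value-congruence measures described above can be sketched as follows; the 1-5 Likert data are hypothetical and the exact operationalization in the study may differ:

```python
def perceived_fit(desired, perceived_actual):
    """Perceived P-O fit: gap between one's own desired collaboration and
    one's own perception of the actual collaboration (smaller gap = better fit)."""
    return abs(desired - perceived_actual)

def objective_fit(desired, team_actual):
    """Objective P-O fit: gap between one's own desired collaboration and
    the school team's mean perception of the actual collaboration."""
    return abs(desired - sum(team_actual) / len(team_actual))

# Hypothetical 1-5 Likert answers for one teacher and their school team:
desired, own_view = 4.0, 3.0
team_views = [3.0, 3.5, 2.5, 3.0]
print(perceived_fit(desired, own_view))    # 1.0
print(objective_fit(desired, team_views))  # 1.0
```

The two measures coincide here only because the teacher's own view happens to match the team mean; in general they diverge, which is why both enter the analyses.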

**Abstract:**The objective of this paper is to analyze the formation and structure of organizational networks in the field of immigration from a comparative perspective. More specifically, we will test hypotheses on the impact of specific opportunities in the field of immigration on migrants’ organizational networks by analyzing the collaborations of migrant organizations with other migrant and native organizations and the prevailing logics of interaction. Our main argument is that the political context of migrants’ city of residence affects the way migrant organizations send ties to other migrant and non-migrant organizations active in the field. To test our hypotheses we will use a unique data set from an organizational survey of migrant organizations in five European cities (Budapest, Lyon, Madrid, Milan and Zurich) and analyze the networks of the total population of migrant organizations in each city.

**Abstract:**The presence of a well-connected civic elite, linking the different organizations expressed by an ethnic community, is considered an important determinant of the political participation and trust of minority groups [Fennema & Tillie, 2008]. Previous quantitative work on this topic has been limited to simple structural measures on one-mode projections onto the organization mode [Fennema & Tillie, 1999, 2001, 2008; Vermeulen & Berger, 2008]. We remark that the projection introduces biases in some of the measures, increasing the number of ties in a combinatorial fashion, and propose instead a structural analysis of the unprocessed two-mode networks. Inspired by existing measures of hierarchy in one-mode networks [Everett & Krackhardt, 2012], we consider different measures of clustering and redundancy that have been developed for two-mode data [Latapy et al, 2008; Opsahl, 2013]; additionally, we look for correlations between these measures and node degrees, as in [Latapy et al, 2008]. Using data from [Vermeulen & Berger, 2008] on Amsterdam and Berlin, and additional data on Brussels, we compare the association networks developed by the Turkish and Moroccan communities in different host countries, characterized by different political opportunity structures. We further discuss how the proposed measures can be used to identify the different structures described in previous work based on one-mode projections, such as umbrella organizations and cliques.
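The combinatorial inflation introduced by one-mode projection can be seen in a few lines: a person affiliated with k organizations contributes only k two-mode ties, but k(k-1)/2 projected organization-organization ties (toy data):

```python
from itertools import combinations

def project_onto_orgs(affiliations):
    """One-mode projection: two organizations are tied when they share a member."""
    ties = set()
    for person, orgs in affiliations.items():
        for a, b in combinations(sorted(orgs), 2):
            ties.add((a, b))
    return ties

# One person affiliated with k = 5 organizations (hypothetical):
affiliations = {"p1": {"o1", "o2", "o3", "o4", "o5"}}
two_mode_ties = sum(len(orgs) for orgs in affiliations.values())
projected = project_onto_orgs(affiliations)
print(two_mode_ties, len(projected))  # 5 10, i.e. k ties become k(k-1)/2
```

This quadratic growth is exactly why density-style measures computed on the projection overstate the cohesion of the original two-mode structure.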

#### Session III

Chair: Matteo Gagliolo (ULB)

**Abstract:**We study the role of conflicting interests in long-run belief dynamics. Agents meet pairwise with their neighbors in the social network and exchange information strategically. We disentangle the terms belief (what is held to be) and opinion (what ought to be, due to a bias): the sender of information would like to spread his opinion (biased belief), while the receiver would like to learn the sender's true belief. In equilibrium the sender only communicates a noisy message containing information about his belief. The receiver interprets the sent message and updates her belief by taking the average of the interpretation and her pre-meeting belief. With conflicting interests, the belief dynamics typically fail to converge: each agent's belief converges to some interval and keeps fluctuating on it forever. These intervals are mutually confirming: they are the convex combinations of the interpretations used when communicating, given that all agents hold beliefs in the corresponding intervals.

**Abstract:**Networks have been used to model systems as different as power grids, flights between airports, or social relations. The standard approach is to collect all interactions between the elements, or nodes, forming the system during a certain period and study the static structure of connections between them. Diverse network structures are typically associated with different constraints that supposedly regulate dynamic processes taking place on the networks, such as epidemics or communication. In some contexts, however, these interactions are themselves dynamic and thus the network structure varies in time. If the dynamics on and of the network occur at the same scale, the temporal patterns of node and link activity may non-trivially affect the diffusion dynamics. In this talk, I aim to briefly introduce the concept of temporal networks and review some of my research related to this topic. I will present an original large dataset of sexual contacts between sex workers and their clients and some results on the spread of simulated infections in this network, discussing the impact of temporal structures (particularly bursts of vertex activity) and static structures (particularly network clustering) on the prevalence of infections. If time permits, I will briefly present some results about random walk dynamics in temporal networks. Some emphasis will be given to an original random walk centrality measure for temporal networks, based on the adjacency matrix at each time step, and to recent results where we propose that temporal heterogeneities may reduce the importance of network structure in regulating diffusion on networks, particularly the convergence time of a stochastic process to equilibrium. Considering the nature of the workshop, more emphasis will be given to results and insights than to details of methods and calculations.
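The key point that time ordering constrains spreading can be sketched with a minimal susceptible-infected process over time-stamped contacts (transmission probability set to 1 for clarity; the contact data are hypothetical):

```python
def si_spread(contacts, seed):
    """Susceptible-Infected spread over time-ordered contacts.

    contacts: list of (t, i, j) events; transmission is certain (p = 1)
    to keep the example deterministic. Returns the set of infected nodes."""
    infected = {seed}
    for t, i, j in sorted(contacts):
        if i in infected:
            infected.add(j)
        elif j in infected:
            infected.add(i)
    return infected

# The same static edges reach different node sets depending on seed and timing:
contacts = [(1, "b", "c"), (2, "a", "b"), (3, "c", "d")]
print(sorted(si_spread(contacts, "a")))  # ['a', 'b'] (the b-c contact fired too early)
print(sorted(si_spread(contacts, "c")))  # ['a', 'b', 'c', 'd']
```

A static aggregation of these contacts is connected, yet seed "a" can never reach "c" or "d": exactly the kind of effect the talk attributes to temporal structure.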

**Abstract:**A simple way to analyze the networks that one might expect to emerge in the long run is to examine the requirement that individuals do not benefit from altering the structure of the network. A prominent example of such a condition is the pairwise stability notion defined by Jackson and Wolinsky (1996). A network is pairwise stable if no individual benefits from deleting a link and no two individuals benefit from adding a link between them, with at least one benefiting strictly. While pairwise stability is natural, easy to work with and a very important tool in network analysis, it assumes that individuals are myopic, not farsighted, in the sense that they do not forecast how others might react to their actions. Indeed, the addition or deletion of one link might lead to subsequent additions or deletions of other links. For instance, individuals might not add a link that appears valuable to them given the current network, as this might induce the formation of other links, ultimately leading to lower payoffs for them. Herings, Mauleon and Vannetelbosch (2009) introduce the notion of pairwise farsighted stability. A set of networks is pairwise farsightedly stable (i) if all possible farsighted pairwise deviations from any network within the set to a network outside the set are deterred by the threat of ending worse off or equally well off, (ii) if there exists a farsighted improving path from any network outside the set leading to some network in the set, and (iii) if there is no proper subset satisfying conditions (i) and (ii). Pairwise farsighted stability makes sense if players have perfect anticipation of how others might react to changes in the network. But in general, especially when the set of players becomes large, it requires too much foresight on the part of the players. Our aim is to provide a tractable concept that can be used to study the influence of the degree of farsightedness on network stability.
We define the notion of a level-K farsightedly stable set. We show that a level-K farsightedly stable set always exists and we provide a sufficient condition for its uniqueness. We find that there is a unique level-1 farsightedly stable set. Level-K farsighted stability leads to a refinement of myopic stability for generic allocation rules. We provide easy-to-verify conditions for a set to be level-K farsightedly stable. We also consider the relationship between limited farsighted stability and efficiency of networks. We show that if there is a network that Pareto dominates all other networks, then that network is the unique prediction of level-K farsighted stability if K is greater than the maximum number of links in a network. In addition, we introduce a property on the allocation rule under which level-K farsighted stability singles out the complete network. Finally, we illustrate the tractability of our new concept by analyzing the criminal network model of Calvo-Armengol and Zenou (2004). We find that in criminal networks with n players, the set consisting of the complete network (in which all criminals are linked to each other) is the unique level-(n-1) farsightedly stable set.
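The myopic Jackson-Wolinsky condition that the paper refines can be checked mechanically. The sketch below uses a hypothetical utility function in which every direct link yields a net benefit to both endpoints, so only the complete network passes the test:

```python
from itertools import combinations

def is_pairwise_stable(players, links, utility):
    """Jackson-Wolinsky pairwise stability check.

    links: set of frozenset({i, j}); utility(i, links) -> payoff of i."""
    # (1) no player gains by deleting one of their links
    for link in links:
        smaller = links - {link}
        if any(utility(i, smaller) > utility(i, links) for i in link):
            return False
    # (2) no pair gains by adding a missing link (at least one strictly)
    for i, j in combinations(players, 2):
        link = frozenset({i, j})
        if link in links:
            continue
        larger = links | {link}
        gi = utility(i, larger) - utility(i, links)
        gj = utility(j, larger) - utility(j, links)
        if (gi > 0 and gj >= 0) or (gj > 0 and gi >= 0):
            return False
    return True

# Hypothetical payoffs: each direct link is worth a net 0.6 to both ends,
# so every missing link would be added and only the complete network is stable.
def u(i, links):
    return sum(0.6 for link in links if i in link)

players = [0, 1, 2]
complete = {frozenset(pair) for pair in combinations(players, 2)}
print(is_pairwise_stable(players, complete, u))  # True
print(is_pairwise_stable(players, set(), u))     # False
```

Level-K farsightedness replaces these one-step deviations with chains of anticipated reactions of length up to K; the check above corresponds to the myopic K = 0 benchmark.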

**Abstract:***1 Introduction.* In this work, we look at an epidemic-like propagation process on a network of agents. We suppose that the time between two consecutive meetings is random and described by a given inter-meeting time probability distribution, which is typically a power law for human-related networks [1], and that at each meeting an infected individual transmits the disease to his/her neighbours with some probability p. The goal is to determine the effective inter-event time distribution of the underlying process, that is, the distribution of the time it takes for an individual to transmit the disease to his/her neighbour once he/she is infected. Knowing this allows us to compare different behaviours showing the same characteristics, such as the mean time between two successful transmissions, but driven by different inter-meeting time distributions.

*2 Results.* Working in the Laplace domain and with probability generating functions, one can analytically determine the first and second moments of the inter-event time distribution, which directly give the mean and variance of this process as functions of p and the moments of the inter-meeting time distribution. From these results, we obtain some important values of this diffusion process, such as the average relay time, which is a standard measure of the burstiness of a process [2]. Once recovery is taken into account, we can find the transmissibility P of the diffusion, which is the overall probability that an individual transmits the infection before recovery. In the case of a tree network, this transmissibility directly gives the reproduction number R0, and therefore the epidemic threshold. We also find that for a given average time <t>/p between two infectious contacts, rarer (high <t>) but more 'efficient' contacts (high p) lead to less bursty (low relay time) but more transmissible contacts, that is, a higher probability of transmitting the disease before recovery once infected.
*3 Conclusion.* In this work, we determine important quantities such as the average relay time and the transmissibility of an epidemic-like propagation process taking place on a network of agents. We find that the mean time between two successful transmissions is not sufficient to characterize the diffusion speed of the epidemics: between two agents with the same mean time of successful transmission, the one with fewer but more efficient contacts has a higher transmissibility for the disease, that is, a higher probability of transmitting the disease to his/her neighbours before recovery.

References
[1] A. L. Barabasi (2005). "The origin of bursts and heavy tails in human dynamics", Nature, 435(7039), 207-211.
[2] M. Kivela et al. (2012). "Multiscale analysis of spreading in a large communication network", Journal of Statistical Mechanics: Theory and Experiment, 2012(03), P0300
[3] R. Lambiotte et al. (2013). "Burstiness and spreading on temporal networks", European Physical Journal B: Condensed Matter and Complex Systems.
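As an illustrative special case of the transmissibility P (assuming exponential inter-meeting times with rate lam and exponential recovery with rate mu, for which P = p*lam / (p*lam + mu); the abstract treats general distributions in the Laplace domain), a seeded Monte Carlo check:

```python
import random

def transmissibility_mc(lam, p, mu, trials=50000, seed=42):
    """Monte Carlo estimate of the probability that an infected individual
    transmits before recovering, assuming exponential inter-meeting times
    (rate lam), per-meeting transmission probability p, and exponential
    recovery (rate mu). Under these assumptions P = p*lam / (p*lam + mu)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        recovery = rng.expovariate(mu)
        t = rng.expovariate(lam)              # time of first meeting
        while t < recovery:
            if rng.random() < p:              # successful transmission
                hits += 1
                break
            t += rng.expovariate(lam)         # wait for the next meeting
    return hits / trials

lam, p, mu = 1.0, 0.3, 0.2
print(p * lam / (p * lam + mu))   # analytic value: 0.6
print(round(transmissibility_mc(lam, p, mu), 1))  # Monte Carlo estimate
```

Holding the mean infectious-contact time lam/p fixed while raising p and lowering lam increases this P, matching the abstract's "rarer but more efficient contacts" finding.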

#### Poster session

**Abstract:**Spatial networks have a great capacity to capture information through space and time. To demonstrate this hypothesis we focus on the road network, extracting from cities the skeleton of their streets. The point is to understand how, by extracting information as simple as the elementary geometry and topology of those networks, one can describe an object as complex as a city. Road network continuity has been studied in depth, drawing on social perception or constructing a dual graph to explore its centralities. But, most of the time, the particular geographical nature of such networks has been neglected. To ensure the relevance of our study, we create a complex object, named the way. It is fundamentally anchored in space, as it is a geographical object constructed with local rules associating arcs at each crossing. As we want to create a generic multi-scale element which is meaningful to physicists as well as to town planners, we developed three methods of construction. The first (Method 0) favors the straight line (local alignment between two arcs at a crossing); the second (Method 1) favors a global alignment at the intersection; the third (Method 2) associates the arcs randomly at each crossing, to test the significance of the reconstruction. Continuity is determined through a limit on the deviation angle (the threshold angle). By being independent of toponymy, the way allows us to analyze road structures independently of administrative borders. Considering only the network skeleton, it also does not depend on road use and treatment, which are subject to change over time. The whole space is considered; circulation is not hindered by anything other than geometry. Several geometrical and topological characteristics have been studied on ways. For instance, the angle distribution was considered, both for the original arcs (which reveals a clear attraction to alignment or perpendicularity, as we observe peaks at 0° and 90°) and for the ways.
Another interesting road network property is how the logarithm of way lengths fits a Gaussian curve. With these first observations, one can appreciate the concordance of results across very different spaces. Once the method and the threshold angle were chosen, the study was pursued by developing more sophisticated indicators, taking into account the whole network and the relation of a specific way to it. Two indicators of this kind will be explained here. The first is connectivity (the number of segments from other ways connected to a particular one), which appears relevant as it highlights some historical main axes, such as the old access roads of Paris. Keeping in mind the importance of alignment, correlated with the cost of turning, we will present a second indicator, structurality (measuring the centrality of a way considering its connections to the whole network). A first characteristic of this indicator is that it is very stable in space. To visualize the impact of the sample borders, several subparts of the greater Avignon area (a city in the south of France) were considered. The stability of the indicator in space is shown by observing that the most structural ways remain the same whatever the sample borders are. Even if main coherent axes are cut, the coherence of the network remains stable, and only the most truncated ways are affected. This is in sharp contrast to the case where one considers only the topological distance on the original graph constituted of arcs rather than ways. In that case, the most important arcs change drastically depending on the sample chosen, and they do not reveal historical roads but rather motorways (when their continuity is respected) or the center of the map. A second characteristic of structurality is that it reveals the history of the network. For a sample chosen to cover Avignon and its surroundings, the outer walls and historical access roads are highlighted.
For Paris, the historical center appear to be the most structural for the city. In general, we see that with only a local rule, computed at each vertex, one can elaborate more complex and multi-scale elements to analyze the deep structure of a reticular network. The use of the alignment as criteria to construct an hypergraph of ways reveal to be very powerful, as allowing to recover both the structure of a city and its history. The geometry of the spatial networks associated to the topological information is thus relevant to analyze them.
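The Method 0 rule — locally pairing arcs by smallest deviation angle under a threshold — can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the greedy pairing strategy, and the 60° default threshold are assumptions.

```python
from itertools import combinations

def deviation(a, b):
    """Deviation angle (degrees) between the headings of two arcs
    leaving a crossing; 0 means one arc continues the other in a
    straight line (headings 180 degrees apart)."""
    diff = abs(a - b) % 360
    return abs(180 - min(diff, 360 - diff))

def pair_arcs_at_crossing(headings, threshold=60.0):
    """Method 0 sketch: at one crossing, greedily pair incident arcs
    by smallest deviation angle, never exceeding the threshold angle.
    `headings` are the directions (degrees) of arcs leaving the node."""
    pairs, used = [], set()
    candidates = sorted(
        (deviation(headings[i], headings[j]), i, j)
        for i, j in combinations(range(len(headings)), 2))
    for dev, i, j in candidates:
        if dev <= threshold and i not in used and j not in used:
            pairs.append((i, j))
            used.update((i, j))
    return pairs
```

Chaining such pairs across successive crossings would then merge arcs into multi-scale ways; a random pairing in place of the sorted greedy choice would give a Method 2-style null model.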

**Abstract:**The field of community detection has attracted much attention in recent years. While efficient methods exist for overlapping or non-overlapping communities in static networks, finding communities in temporal networks remains a challenge. Basic approaches treat the temporal system as a sequence of static networks, to each of which standard methods can be applied. Alternative approaches represent the system as a tensor to be clustered. The main purpose of this work is to develop a statistical approach that takes advantage of the temporal correlations between edges in order to uncover overlapping, synchronized communities in networks.

**Abstract:**Many interaction matrices in natural communities have been found to share certain properties, with nestedness being of particular interest in recent years. In this paper, we provide evidence that networks arising from credit relationships between banks and firms display a significantly nested structure. These networks combine mutualistic tendencies (bank-sector relationships) and antagonistic ones (competition among banks and among sectors, respectively). Furthermore, we introduce a dynamical model of such a system in order to shed light on important policy issues in financial ecosystems. In particular, we explore the circumstances under which nested architectures arise, and investigate the impact of competition and mutualism on both system stability and biodiversity.
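The abstract does not say which nestedness metric is used; as an illustration only, here is a minimal sketch of the NODF-style paired-overlap measure commonly applied to binary bipartite matrices (it assumes rows and columns are already sorted by decreasing fill; all names are mine):

```python
from itertools import combinations

def nodf(matrix):
    """NODF-style nestedness (0-100) of a binary matrix: for each
    ordered pair of rows with strictly decreasing fill, score the
    percentage of the sparser row's 1s shared with the denser row;
    do the same for columns, then average all pair scores."""
    def paired_overlap(lines):
        scores = []
        for a, b in combinations(lines, 2):
            fill_a, fill_b = sum(a), sum(b)
            if fill_a <= fill_b or fill_b == 0:
                scores.append(0.0)  # no decreasing fill: no contribution
            else:
                shared = sum(x and y for x, y in zip(a, b))
                scores.append(100.0 * shared / fill_b)
        return scores
    cols = [list(c) for c in zip(*matrix)]
    scores = paired_overlap(matrix) + paired_overlap(cols)
    return sum(scores) / len(scores)
```

A perfectly nested (triangular) matrix scores 100, a checkerboard scores 0; assessing "significantly nested" would further require comparison against a randomized null model.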

**Abstract:**When dealing with systems of interacting agents, beyond understanding their deep principles, the question naturally arises of predicting and controlling their possible collective behaviors. It is of fundamental importance, for instance in artificial system design, to ask whether the system will display a collective behavior or whether its response will be chaotic. In this work we study the influence of network topology on the collective properties of a dynamical system defined upon it. For this purpose, we propose a network model in which links of a regular chain are rewired with probability $p$ within a specific range $r$. We then focus on how the thermodynamic behavior of a dynamical system, the $XY$-rotor model, is affected by the controlled topological changes of $(p,r)$. We identify a crucial parameter, the network dimension $d(p,r)$. Varying this dimension, we are able to cross over from topologies with $d<2$ exhibiting no phase transition to ones with $d>2$ displaying a second-order phase transition, passing through topologies with dimension $d=2$ which exhibit states characterized by infinite susceptibility and macroscopic chaotic/turbulent dynamical behavior. In more detail, this work belongs to the frame of control issues: we provide a means to construct a class of networks which, through controlled topological changes, can give rise to a whole range of dynamical and statistical behaviors, among which is a chaotic state of infinite susceptibility. We then relate the different dynamical behaviors to the network dimension. Our network model rests on essentially three ingredients: first, we impose sparseness; second, we introduce an interaction range, constraining links to be at most of a fixed length $r$; and last, we inject randomness into the structure so as to obtain a non-uniform degree.
In practice, we proceed with a construction similar to the Watts-Strogatz one for small-world networks: we rewire each link with probability $p$, but we impose that it be rewired within a range $r$. Tuning $(p,r)$ changes the network dimension through the presence of long-range links. We then consider, as the dynamical system, $N$ $XY$-rotors, each described by an angle $\theta_i(t)$ and a momentum $p_i(t)$. Each rotor $i$ is located on a network vertex, and its interactions are provided by the set of vertices attached to it via the links. We thus run molecular dynamics (MD) simulations of the isolated system and, in order to grasp the amount of coherence in the system, we choose the average magnetization $M=\left|\mathbf{m}\right|$ as order parameter, with $\mathbf{m}=\frac{1}{N}\sum_i\left(\cos\theta_i,\sin\theta_i\right)$. Finally, our results for ranges scaling as $\sqrt{N}$ and as a power of $N$ point to a strong correlation between the different thermodynamic behaviors and the dimension of the underlying network.
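The ranged rewiring just described can be sketched as follows; this is an illustrative construction only, and the authors' handling of details such as duplicate links or degree constraints may differ.

```python
import random

def ranged_rewiring(n, p, r, seed=0):
    """Watts-Strogatz-like rewiring of a ring of n nodes, except each
    rewired link must stay within range r (ring distance <= r).
    Returns the set of (undirected) edges, endpoints sorted."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        u, v = i, (i + 1) % n  # a link of the regular chain
        if rng.random() < p:
            # rewire: new endpoint at ring distance 1..r from u
            d = rng.choice([k for k in range(-r, r + 1) if k != 0])
            v = (u + d) % n
        edges.add((min(u, v), max(u, v)))
    return edges
```

With p = 0 the regular chain is recovered; increasing p within a given r injects randomness (and a non-uniform degree, since duplicate links collapse) while keeping every link shorter than r, which is what lets the effective dimension be tuned.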

**Abstract:**The importance of social networks (and more recently online social networks) for describing and predicting various behaviors in domains such as education, psychology, marketing, and medicine is now widely recognized. Two aspects of the research carried out on these increasingly complex networks have drawn our attention. The first aspect covers data analysis (e.g., data mining tools such as clustering techniques), which allows us to reduce large amounts of data to concise information. Indeed, detecting communities is of great importance in SNA, where networks are graphs of interconnected individuals. According to Fortunato (2010), “this problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years”. For instance, researchers have focused on the investigation of “small world networks” (Mathias & Gopal, 2000; Kleinberg, 2001) and on the optimization problem of selecting the most influential nodes (Kempe, Kleinberg & Tardos, 2003). The second aspect concerns knowledge management through the maintenance of a body of knowledge (e.g., an ontology) to which data analysis contributes. According to Gruber (1993): “A specification of a representational vocabulary for a shared domain of discourse – definitions of classes, relations, functions, and other objects – is called an ontology”. This tool therefore allows information about one particular domain to be shared, by defining a common vocabulary for that domain (Noy & McGuinness, 2001). According to Braun et al.
(2007), developing an ontology is a dynamic and collaborative process in which the ontology evolves through four maturing phases: (1) the emergence of ideas, in which the ontology is not yet well-defined and is rather descriptive, as a vocabulary; (2) the consolidation in communities, in which the ontology grows into a shared vocabulary (also called a folksonomy); (3) the formalization, in which the vocabulary concepts are organized into relations; and (4) the axiomatization, which allows inferencing processes. Ontologies are used in several sectors, such as medicine, industry, linguistic and cognitive engineering, and business-process management, and they are also helpful in social network analysis; for instance, representing knowledge about the members of an online social network is important for the website concerned (Crémer, 2011). In a first phase, an “emerging” ontology related to a specific (online) social network consisting of students enrolled at the university would be built, given prior knowledge about this type of network and similar networks. The ontology would focus on network structure, in terms of students’ individual characteristics, of the network clusters to which they belong, and of links between clusters. In a second phase, students present in the (online) social network would be clustered according to their links with other students. To achieve this goal, we will look for the clustering techniques most appropriate to the structure of our graph. For our purpose, we will test several segmentation algorithms (e.g., hierarchical agglomerative clustering, k-means) and different distances/similarities between the nodes of a graph, e.g., the Randomized Shortest Path (Yen et al., 2008; Saerens et al., 2009) or the Minimax Path-Based Dissimilarity Measure (Kim & Choi, 2012).
In a third phase, the ontology of the network would be validated and matured through in-depth analysis of the clusters resulting from the segmentation. In addition, a matured network ontology should help us analyze and understand the emergence of groups in a network of students, through semantic enrichment of the connections between clusters (Lasso-Sambony et al., 2013). At a final stage, the clusters would be used to analyze and predict the achievement of students in the segmented social network. We would use mixed models to study the effects of these clusters, in addition to the effects of students’ individual characteristics. Our main interest will therefore be to determine how (online) social networks affect these behaviors, in order to discover whether data augmentation with external variables can significantly improve prediction performance. At that stage, we would also be able to test and compare different hypotheses concerning the mechanisms of diffusion, i.e. the effects of several features of the clustered social network. We would investigate whether the number of groups, the degree of separation between the groups, the existence of shortcuts across the social space, the size of the network, the strength of ties, and other features have a significant impact on predictions (Centola, 2005, 2010; Bakshy & Rosenn, 2012; Stattner, 2012; Aral & Van Alstyne).
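As an illustration of the clustering phase, here is a minimal sketch pairing plain shortest-path distances with single-linkage agglomerative clustering. The authors intend to test richer dissimilarities (e.g. the Randomized Shortest Path) and other algorithms, so the distance choice and all names here are illustrative only.

```python
from itertools import combinations

def shortest_path_matrix(nodes, edges):
    """All-pairs shortest-path distances (Floyd-Warshall) on an
    unweighted, undirected graph."""
    inf = float("inf")
    d = {(u, v): (0 if u == v else inf) for u in nodes for v in nodes}
    for u, v in edges:
        d[u, v] = d[v, u] = 1
    for k in nodes:
        for u in nodes:
            for v in nodes:
                if d[u, k] + d[k, v] < d[u, v]:
                    d[u, v] = d[u, k] + d[k, v]
    return d

def single_linkage(nodes, dist, k):
    """Agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest, until k clusters remain."""
    clusters = [{u} for u in nodes]
    while len(clusters) > k:
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda ij: min(dist[u, v]
                               for u in clusters[ij[0]]
                               for v in clusters[ij[1]]))
        clusters[i] |= clusters.pop(j)
    return clusters
```

On a student friendship graph, `single_linkage(nodes, shortest_path_matrix(nodes, edges), k)` returns k groups of tightly connected students, which could then feed the cluster-level ontology analysis.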

**Abstract:**Popularized by the Netflix Challenge, non-negative matrix factorization (NMF) is a powerful technique for extracting features from a matrix-like dataset, and it has been widely used in recommender systems. NMF assigns a vector of features to each row and column of a matrix and aims to minimize the reconstruction error, i.e. the difference between the actual matrix and the matrix whose entries (i,j) are the dot product of the feature vectors of row i and column j. Besides providing a natural representation of the rows and columns in a feature space, NMF is extensively used to predict the value of unobserved entries of the original matrix, by computing the dot product of the feature vectors of row i and column j for an unknown entry (i,j). Applied to networks, a symmetrized version of NMF assigns features to each node of the network based on the observed interactions between those nodes, and it has been successfully used for overlapping community detection and link prediction. A key property of non-negative matrix factorization is its ability to process large-scale problems. Thanks to optimization methods taking advantage of the sparsity of the data, networks of millions of nodes and tens of millions of edges can be factorized on a single machine, but handling the largest datasets still requires tens of hours or even days. The limitations of NMF appear on rapidly changing datasets, for which the traditional optimization methods of NMF are unable to offer an up-to-date factorization in real time. This is especially problematic for web-based applications, such as social networks, where “real-time” has become the norm expected by users, but where the continuous flow of modifications and the scale of the data leave traditional NMF methods behind.
We propose here a fast method for updating the factorization of an evolving matrix, based on rapid point-wise gradient descent benefiting from the sparsity of the data, opening the way to always up-to-date factorizations in real-world dynamic environments. Our method is able to update the factorization of a sparse matrix when a single entry (or edge, in the case of networks) is modified, with a time complexity independent of the size of the matrix. Our method can also update the factorization to take new rows and columns into account (for example, on the arrival of new users in a social network). Specifically, we present two variants of our method, adapted to two different loss functions (i.e. two expressions of the reconstruction error). Moreover, our method is compatible with L1 and L2 regularizations. The first loss function is the well-known square loss, for which the update complexity is quadratic in the number of features. The second loss function is the absolute loss, which has, to our knowledge, never been used in non-negative matrix factorization but offers even faster updates, with a complexity linear in the number of features. In summary, we present a general method for updating non-negative matrix factorization, which can for example be applied to maintain a representation of the users of a social network in a feature space as interactions are created or disappear and as users join or leave the network.
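The general idea of a point-wise update can be sketched for the square-loss case as follows. This is a hedged illustration, not the authors' method: the learning rate, iteration count, and the simple projected-gradient rule are my assumptions, and regularization is omitted.

```python
def dot(a, b):
    """Dot product of two feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def update_entry(W, H, i, j, value, lr=0.05, steps=300):
    """When entry (i, j) of the data matrix changes to `value`,
    refresh only the factors W[i] and H[j] by gradient descent on
    the squared error (W[i].H[j] - value)^2, projecting onto the
    non-negative orthant after each step. The cost per step depends
    only on the number of features, not on the matrix size."""
    for _ in range(steps):
        err = dot(W[i], H[j]) - value
        wi = [max(0.0, w - lr * err * h) for w, h in zip(W[i], H[j])]
        hj = [max(0.0, h - lr * err * w) for w, h in zip(W[i], H[j])]
        W[i], H[j] = wi, hj
    return W, H
```

Because only one row of W and one column factor of H are touched, the update cost is independent of the matrix dimensions, matching the scaling property claimed in the abstract; the actual method may use a different rule (e.g. fewer steps, absolute loss, or regularized gradients).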

**Abstract:**Some educational systems are characterized by free school choice and public funding of schools according to the number of pupils enrolled. This is what we call a school quasi-market (Le Grand, 1991). In these systems, schools are in competition with each other according to the number, and the characteristics, of the pupils they enrol; schools are said to be interdependent (Delvaux & Joseph, 2006). These interdependencies can be revealed by comparing the actual distribution of pupils between schools with what would happen if pupils simply attended the school closest to their home (Friant, 2012; Taylor, 2009). We can then analyse which schools attract pupils and which schools are avoided, thus characterizing a competition space. Problems arise when we want to broaden the analysis to a larger scale (e.g. the educational system as a whole): we need more advanced tools to analyse such a large network of interdependencies. This paper addresses this problem by applying social network analysis to better describe and analyse school quasi-markets. We use the results of an agent-based simulation of school choice in French-speaking Belgium (Friant, 2012) and consider the data as a network of schools exchanging pupils with each other. The resulting network is cyclic, directed, and weighted, with nodes representing schools and edge weights representing flows of pupils. Using metrics such as weighted in- and out-degrees, clustering, betweenness centrality, and flows, we propose new ways of characterizing the position of schools in a hierarchical competition space, and in the educational system as a whole.

References

Delvaux, B., & Joseph, M. (2006). Hiérarchie scolaire et compétition entre écoles : le cas d’un espace local belge. Revue Française de Pédagogie, (156), 19–27.

Friant, N. (2012, November 14). Vers une école plus juste : entre description, compréhension et gestion du système (Thèse de Doctorat en Sciences Psychologiques et de l’Education). Université de Mons, Mons. Retrieved from http://tel.archives-ouvertes.fr/tel-00752087

Le Grand, J. (1991). Quasi-Markets and Social Policy. The Economic Journal, 101(408), 1256–1267.

Taylor, C. (2009). Choice, Competition, and Segregation in a United Kingdom Urban Education Market. American Journal of Education, 115(4), 549–568.
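The weighted in- and out-degrees used in the abstract above are straightforward to compute from a list of pupil flows; here is a minimal sketch, with invented school names and pupil counts for illustration.

```python
def weighted_degrees(flows):
    """flows: list of (origin_school, destination_school, n_pupils).
    Returns weighted out-degree (pupils sent elsewhere) and weighted
    in-degree (pupils attracted) for each school."""
    out_deg, in_deg = {}, {}
    for src, dst, w in flows:
        out_deg[src] = out_deg.get(src, 0) + w
        in_deg[dst] = in_deg.get(dst, 0) + w
    return out_deg, in_deg

# Invented example: pupils flowing between three schools
flows = [("A", "B", 30), ("A", "C", 10), ("C", "B", 5)]
out_deg, in_deg = weighted_degrees(flows)
# Net attraction: positive for schools that attract pupils,
# negative for schools that are avoided
net = {s: in_deg.get(s, 0) - out_deg.get(s, 0)
       for s in set(out_deg) | set(in_deg)}
```

A positive net value marks an attractive school and a negative one an avoided school, giving a first ranking of positions in the competition space before richer metrics such as betweenness centrality are applied.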

**Abstract:**This presentation addresses the use of (personal) social networks by self-employed immigrants, comparing immigrant entrepreneurs with a transnational business (transnational entrepreneurs - TEs) with those running a more local entrepreneurial activity (local entrepreneurs - LEs). In recent years, the increasing opportunities for mobility and communication have led to the emergence and diffusion of cross-border businesses owned by immigrants, and a field of studies on transnational entrepreneurship is emerging. The majority of these studies show the relevant role of social networks for cross-border businesses. However, it is still not clear whether these findings are peculiar to transnational entrepreneurs or apply to all immigrant entrepreneurs. The presentation aims to fill this gap, trying to understand whether TEs differ from LEs in their use of social networks. In particular, I focus on: the use of the network and its relevance; network composition (e.g. nationality, place of residence); the role of different groups (e.g. co-nationals, natives, etc.); and differences between strong and weak ties. In order to further understand the role of social networks for the business, I first analyse the literature, comparing work on immigrant entrepreneurs in general with work on transnational ones. Secondly, I present the findings of a recent study carried out in Milan on about 40 Moroccan entrepreneurs (with transnational and non-transnational businesses). The literature stresses the role of both co-national and family networks for LEs. They are resources (providing financial help, free or low-cost labour, and information) but also oppressive mobility traps: belonging to these networks entails a set of obligations which might affect the development and growth of the business.
Hence, it seems that these contacts are essential for entrepreneurs with weaker businesses, namely small businesses owned by entrepreneurs with limited skills. Past studies also underline the benefits received from contacts with natives, since they provide non-redundant information, such as bureaucratic help. Transnational entrepreneurs, for their part, seem to have a wider range of contacts (in terms of geographical dispersion). In particular, they rely on glocalised networks - support networks composed of both local (host-country) and global contacts. From the literature also emerges a more relevant and qualified contribution from the extended and dispersed family, who often manage the foreign side of the business. I test the tendencies that seem to emerge from the literature by analysing the case of Moroccan entrepreneurs in Milan. I interviewed 40 entrepreneurs (transnational and non-transnational), using both a personal-network approach and qualitative interviews. The research shows that TEs have more geographically dispersed business contacts than LEs. Furthermore, their support contacts (those providing help for the activity) are also more dispersed: TEs have both local and transnational contacts. Unexpectedly, LEs have no transnational contacts (e.g. relatives in the country of origin). This might be explained by the fact that TEs may decide to start a transnational business precisely because they already had some key contacts abroad. This is confirmed by both the qualitative interviews and the network analysis: they usually start a business because they have some contacts (usually strong ties) on which they can rely. Regarding the role of different groups, co-nationals in the host country are important for both groups, but those in the home country are also essential for transnational businesses. Natives are relevant for both groups, and they usually give both economic and informational help.
Furthermore, TEs’ relatives play a more relevant role, since they manage the foreign side of the business; by contrast, they give only occasional help to LEs. Both groups rely on both weak and strong ties. Strong and weak ties are both relevant in the start-up of the business, while weak ties are more relevant than strong ties for consolidation and enlargement (in particular for economic help from suppliers). The main difference concerns informational help: TEs rely more on strong ties (family and friends) to obtain information for the business (in particular for its foreign side). In conclusion, two differences emerge from my analysis. Firstly, TEs’ business contacts, as well as the persons on whom they rely, are more geographically dispersed (as expected). Secondly, the contribution of TEs’ families is more qualified (they often manage a part of the business).
