The Coordinated Science Laboratory (CSL) invites you to attend its 2012 Symposium on Emerging Topics in Control and Modeling: Networked Systems, October 15-16, in Urbana, IL.


First Day (Monday, October 15, 2012):

8:00 am- 9:00 am Breakfast and Registration CSL B02
9:00 am- 9:30 am Opening remarks and overview CSL B02
9:30 am- 10:30 am

Dr. Colin Harrison

Smart City Operations - Complex and Uncertain 

10:30 am- 11:00 am Coffee Break  
11:00 am- 12:00 pm

Dr. Eric Feron

Proof-Carrying Auto Coding of Embedded Control Software 

12:00 pm- 1:00 pm Lunch Siebel Center 2nd Floor Atrium
1:00 pm- 2:00 pm

Dr. Marija Ilic

Structure-Based Modeling and Control of Future Electric Energy Systems 

2:00 pm- 3:00 pm

Dr. Fred Hadaegh

State of the Art in Formation Flying and Distributed Spacecraft Systems 

3:00 pm- 3:30 pm Coffee Break  
3:30 pm- 4:30 pm

Dr. Alexandre Bayen

Nash-Stackelberg Games in Transportation Networks: Leveraging the Power of Smartphones for Traffic Monitoring and Management 

4:30 pm- 5:30 pm Reception  
   End of Day I  


Second Day (Tuesday, October 16, 2012): 

8:00 am- 8:30 am Breakfast and Registration CSL B02
8:30 am- 9:30 am

Dr. P. R. Kumar

The Challenge of Cyber-Physical Systems 

9:30 am- 10:30 am

Dr. Volkan Isler

Robotic Sensor Networks for Environmental Monitoring 

10:30 am- 11:00 am Coffee Break  
11:00 am- 12:00 pm

Dr. Jie Liu

Discovering the Genome of the Data Center 

12:00 pm- 2:00 pm Lunch and Poster Session NCSA Lobby


2:00 pm- 3:00 pm

Dr. Domitilla Del Vecchio

A Control Theory Approach to Engineering Biomolecular Networks CSL B02

3:00 pm- 3:30 pm Coffee Break  
3:30 pm- 4:30 pm

Dr. Thomas Corbet, Jr.

Simulating the U.S. Transportation Fuels Network: From Oil Fields to Consumers 

4:30 pm- 5:30 pm

Panel Discussion

Panelists to be announced

  End of Day II  


The pressures of urbanization, the coming stress nexus of water, food, and energy, and the rising impacts of natural catastrophes, which appear to varying degrees around the world, are leading city leaders to seek more intelligent means of managing their infrastructure and services. Our work on Smarter Cities arose in part from the observation that growing amounts of "real-time" information are available, often from existing urban systems, and that these sources enable us to model and predict the inhabitants' intentions and how they will exploit the city's infrastructure and services. From this we can develop OODA loops (Observe, Orient, Decide, Act) for the intelligent management of city infrastructure and services such as water, transportation, energy, buildings, and public safety. However, these systems are both complex - with many cross-domain interactions and inter-dependencies - and increasingly uncertain - as additional data sources are incorporated that are untrusted, unreliable, and of uncertain context. This talk will look at specific instances of these challenges and how IBM expects to address them.

Proof-carrying code has been in existence since Necula and Lee coined the term in 1996. This talk brings forward the details of the development of proof-carrying code from control system specifications. The motivation for this work is the safety-critical nature of many control applications, such as aeronautics, robot-assisted surgery, and ground transportation. Several challenges must be addressed during this effort, including the formal representation of control-theoretic proofs; the migration and representation of these proofs across different layers of software implementation; and the design of a back-end to verify the claimed software properties. The expected payoff from these efforts is to include more semantics in the output of computer-aided control system design environments and to influence the software certification processes currently in use for transportation and health applications.

To begin with, we explain how the need to include unconventional resources changes the basic model structure of electric energy network systems, as typical assumptions underlying hierarchical systems no longer hold. We point out the potential of synchronized wide-area measurements for parameter identification of these qualitatively different dynamical models.

We next briefly review state-of-the-art modeling for today's hierarchical control of large-scale electric power systems. Models currently used by the industry are intended for analysis, not for control design. This lack of dynamical models in standard state-space form with well-understood structure is one of the major roadblocks to the systematic design of enhanced communications and control for electric energy systems with intermittent resources in particular.

A new multi-layered modeling framework in standard state-space form is proposed as the basis for systematic control design. It is shown how the unique structures in these complex network models can represent portfolios of unconventional resources embedded within existing electric power systems. These models are used to assess the tradeoffs between the complexity of control and communications and their technical (reliability) and economic (efficiency) performance in multi-layered electric energy systems. The ultimate question concerning plug-and-play standardization for dynamics is posed by assessing the possible performance of distributed control. The zoom-in and zoom-out models underlying the distributed control design are derived using standard and non-standard singular-perturbation-based model reduction. The physical interpretation of the information required for ensuring system-level dynamical performance with distributed controllers is discussed for the first time in the context of these reduced-order models. We point out that a similar modeling approach can be used for other complex network systems.

Proof-of-concept examples of the newly proposed multi-layered models, and of distributed control design using these models, are discussed to show how one can make the Azores Islands green at low cost. A detailed treatment of the models and control design for these islands is provided in the upcoming book [1].

[1] Ilic, Xie and Liu (co-editors), Engineering IT-Enabled Electricity Services: The Tale of Two Low-Cost Azores Islands, Springer, 2012 (to appear).

Formation flying (FF) of multiple spacecraft is an emerging technology being considered by many space agencies around the world for a variety of applications. A key application of FF for space exploration is distributed telescopes to detect and image exoplanets. Other applications include distributed interferometers, Synthetic Aperture Radars (SAR), flight of very large numbers of tiny spacecraft for massively distributed sensing of the atmosphere, gravity detection, optical relays, and distributed antennae. These applications pose new and significant challenges to the underlying guidance and control system. FF requires new control systems and architectures, and greater levels of autonomy, to meet the required performance in the presence of environmental disturbances, system uncertainties, and complex system interactions. This presentation will trace the motivation for these changes and will lay out approaches taken to meet the new challenges. An overview of US and non-US FF missions, along with an overview of JPL's formation, robotics, and distributed control software labs, will be presented. A set of critical technologies that enable high-precision formation flight of space telescopes and interferometers will be highlighted. In particular, a novel architecture is introduced that will enable the fabrication of 100-gram-class spacecraft to be flown in swarms of 100s to 1000s in low Earth orbit. This Silicon Wafer Integrated Femto-satelliTes (SWIFT) concept provides a paradigm-shifting approach to distributed spacecraft development, missions, and applications.

The first part of this work investigates the problem of real-time estimation and control of distributed parameter systems in the context of monitoring traffic with smartphones. The recent explosion of smartphones with internet connectivity, GPS, and accelerometers is rapidly increasing sensing capabilities for numerous infrastructure systems. The talk will present theoretical results, algorithms, and implementations designed to integrate mobile measurements obtained from smartphones into distributed parameter models of traffic. The models considered include Hamilton-Jacobi equations, first-order conservation laws, and systems of conservation laws. Other techniques developed, relying on ensemble Kalman filtering, will also be briefly presented.
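The ensemble Kalman filtering mentioned above can be illustrated with a generic, textbook-style analysis step (this is a sketch of the standard perturbed-observation EnKF, not the talk's traffic-specific implementation; the toy "road cell" state and all parameter values are invented for illustration):

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """One ensemble Kalman filter analysis step (perturbed observations).

    ensemble: (N, n) array of N forecast state samples
    obs: (m,) observation vector
    H: (m, n) linear observation operator
    obs_var: observation noise variance (scalar, assumed i.i.d.)
    """
    N, _ = ensemble.shape
    m = obs.shape[0]
    # Sample covariance of the forecast ensemble
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (N - 1)
    # Kalman gain K = P H^T (H P H^T + R)^{-1}
    R = obs_var * np.eye(m)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # One noisy copy of the observation per ensemble member
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), size=(N, m))
    # Shift each member toward its perturbed observation
    innov = obs_pert - ensemble @ H.T
    return ensemble + innov @ K.T

rng = np.random.default_rng(0)
# Toy state: densities on 3 road cells; only cell 1 is observed
ensemble = rng.normal(50.0, 10.0, size=(200, 3))
H = np.array([[0.0, 1.0, 0.0]])
updated = enkf_update(ensemble, np.array([80.0]), H, obs_var=4.0, rng=rng)
```

Because the observation noise (variance 4) is small relative to the prior spread (variance ~100), the analysis pulls the observed component most of the way toward the measurement while leaving unobserved, uncorrelated components nearly unchanged.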

In the second part of the work, we develop a game-theoretic framework for studying Stackelberg routing games on parallel networks with horizontal queues, applicable to transportation networks. We first introduce a new class of latency functions to model congestion with horizontal queues, and then study, for this class of latency functions, the Stackelberg routing game: assuming a central authority can incentivize the routes of a subset of the players on a network, and that the remaining players choose their routes selfishly, can we compute an optimal route assignment (optimal Stackelberg strategy) that minimizes the total cost? We propose a simple strategy, the Non-Compliant First (NCF) strategy, that can be computed in polynomial time, and we show that it is optimal. We also show that it is robust, in the sense that some perturbations of the NCF strategy are still optimal strategies.
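The tension between selfish and coordinated routing that motivates the Stackelberg game can be seen in Pigou's classic two-link example (a standard textbook instance with vertical-queue-style latencies, not the horizontal-queue latency class or the NCF strategy of the talk):

```python
def total_cost(x):
    """Average latency when a fraction x of the traffic uses link 1
    (latency equal to its load, x) and the rest uses link 2
    (constant latency 1)."""
    return x * x + (1.0 - x)

def stackelberg_cost(alpha):
    """A coordinator routes the compliant fraction alpha onto link 2;
    the remaining 1 - alpha choose selfishly and all take link 1,
    since its latency 1 - alpha never exceeds link 2's latency of 1."""
    return total_cost(1.0 - alpha)

nash = total_cost(1.0)     # selfish equilibrium: everyone piles onto link 1
optimum = total_cost(0.5)  # social optimum: split the flow evenly
poa = nash / optimum       # price of anarchy for this instance: 4/3
```

Here controlling half of the players recovers the social optimum: `stackelberg_cost(0.5)` equals `optimum`, while `stackelberg_cost(0.0)` degenerates to the selfish equilibrium cost.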

The talk will illustrate the results using a traffic monitoring system launched jointly by UC Berkeley and Nokia, called Mobile Millennium, which is operational in Northern California and streams more than 60 million data points a day into traffic models. The talk will also present a new program recently launched in California, called the Connected Corridor program, which will prototype and pilot California’s next generation traffic management infrastructure.

Cyber-physical systems (CPSs) are the next generation of engineered systems in which computing, communication, and control technologies are tightly integrated. We present a historical account of the paths leading to the present interest in CPSs. Research on CPSs is fundamentally important in application domains such as transportation, energy, and medical systems. We overview CPS research both from a historical point of view, in terms of technologies developed for early generations of control systems, and in terms of several foundational research topics that underlie this area, including data fusion, real-time communication, security, middleware, hybrid systems, and proofs of correctness.

Robotic Sensor Networks composed of robots and wireless sensing devices hold the potential to revolutionize environmental sciences by enabling researchers to collect data across expansive environments, over long, sustained periods of time. In this talk, I will report our progress on building such a system for monitoring invasive fish (common carp) in inland lakes using autonomous surface vehicles and mobile robots. After presenting results from field experiments, I will go over some of the algorithmic challenges and give an overview of our work on active localization, designing search strategies for finding (possibly mobile) targets, energy efficiency and harvesting.

To meet the ever-growing demands of online and cloud services, the data center industry is experiencing exponential expansion. Data centers consume billions of kWh of electricity every year, and that number is expected to double every five years. Most data centers are conservatively provisioned and operated, resulting in wasted resources and high cost. In the Data Center Genome project, we take a data-driven approach to data center energy management, leveraging networked sensing, modeling, and control technologies. We have designed and deployed wireless environmental sensors to monitor heat distribution in server rooms, and software-based services to estimate server power consumption. We build models that bridge the cyber dynamics of computing and the physical dynamics of the facility. The findings are used to advance the way equipment is provisioned, loads are distributed, and systems are operated in data centers.

The past decade has seen tremendous advances in the fields of Systems and Synthetic Biology to the point that de novo creation of simple biomolecular networks, or "circuits", in living organisms to control their behavior has become a reality. A near future is envisioned in which re-engineered bacteria will turn waste into energy and kill cancer cells in ill patients. To meet this vision, one key challenge must be tackled, namely designing biomolecular networks that can realize substantially more complex functionalities than those currently available.

A promising approach to analyzing or designing complex networks is to modularly connect simple components whose behavior can be isolated from that of the surrounding modules. The assumption underlying this approach is that the behavior of a component does not change upon interconnection. This is often taken for granted in fields such as electrical engineering, in which insulating amplifiers enforce modular behavior by suppressing impedance effects. This triggers the fundamental question of whether a modular approach is viable in biomolecular circuits. Here, we address this research question and illustrate how, just as in many mechanical, hydraulic, and electrical systems, impedance-like effects are found in biomolecular systems. These effects, which we call retroactivity, dramatically alter the behavior of a component upon interconnection. We illustrate how, similarly to what is performed in electrical networks, one can reduce the description of an arbitrarily complex system by calculating equivalent retroactivities to the input. By merging disturbance rejection and singular perturbation techniques, we provide an approach that exploits the structure of biomolecular networks to design insulating amplifiers, which buffer systems from retroactivity effects. We provide the first experimental demonstration of our theory on a reconstituted protein modification cycle extracted from bacterial signal transduction and on a synthetic biology circuit in vivo.

Our analysis team has developed an applied model to represent the U.S. supply network for refined petroleum products. Individual components of this model network include oil fields, petroleum (crude oil and refined products) import terminals, refineries, transmission pipelines, tank farms, and distribution terminals.

Simulations are intended to help our team provide timely analyses to decision makers in the federal government. Specifically, the simulations help answer questions of the form:

Which regions of the United States would experience shortages of transportation fuel after a specified disruption to one or more components of the fuel infrastructure?

What would be the duration and magnitude of the shortages?

To answer these questions, simulations must represent both human behavior (decisions made by markets, operators of the components of the supply network, and consumers) and the physical constraints on providing service during disruptions.

Our analysis approach favors using multiple algorithms to simulate the behavior of the transportation fuel supply network. These include a market-based algorithm coupled to a dynamically reconfiguring pipeline network; a network in which individual entities (nodes) seek to maintain target inventories by interacting only with neighboring nodes; and the solution of a diffusion equation for a potential field intended to represent aggregate business and market behaviors.
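As a rough sketch of the third approach, solving a diffusion equation over a network, here is a minimal explicit finite-difference scheme on a 1-D chain of nodes with reflecting (zero-flux) boundaries; the node values, coefficient, and topology are illustrative only and are not the team's actual model:

```python
import numpy as np

def diffuse_1d(u, d, steps):
    """Explicit finite-difference diffusion along a 1-D chain of nodes.

    u: initial inventory/potential at each node
    d: dimensionless diffusion number dt*D/dx^2 (stable for d <= 0.5)
    """
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        # Edge padding implements zero-flux (reflecting) boundaries,
        # so total inventory is conserved as it spreads
        padded = np.pad(u, 1, mode="edge")
        u = u + d * (padded[2:] - 2.0 * u + padded[:-2])
    return u

u0 = np.array([0.0, 0.0, 10.0, 0.0, 0.0])  # a localized surplus at one node
u = diffuse_1d(u0, d=0.25, steps=50)
```

The localized surplus relaxes toward a uniform level across the chain while the total is conserved, which is the qualitative behavior a potential-field representation of aggregate market rebalancing relies on.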