
CREATE Programme for a Trustworthy and Secure Cyber-Plexus (TSCP)

Project 1.1.  Authentication Core (PIs: UIUC: David Nicol, Zbigniew Kalbarczyk; SUTD: Jianying Zhou, Aditya Mathur).

Description.  The TSCP paradigm calls for extensive checking of computations and communications.  Checks include deep authentication and data and command provenance.  A first and essential part of this project is to identify which authentication linkages are needed (potentially trailing back from the process requesting a command, through the processes which spawned it, to the communications that led to the spawning, to the owner of the process generating the communication, to biometrics on that user).  The space of options is large, and we must focus on the chaining relationships that make the most sense and can be supported efficiently.  The same is true of digital artifact provenance relations: a similar kind of chained recording is necessary, and the first question is what should be recorded.  A second question for both problems is implementation.  Certificate-based PKI could clearly provide the functionality, but it is heavy-weight and doubtfully the best approach; Thrust 2 considers this question.

Needed next is a comprehensive base for description of TSCP concepts, their relationships, and TSCP consistency checks.  This project aims to place TSCP checking on a unified basis through development and use of ontologies, and to develop TSCP access nodes that enforce these checks.

A large number of security-oriented ontologies already exist.  While we can learn from these constructs and even borrow from them, TSCP security checks run more deeply.  For example, in standard cyber-security practice an authentication check may require two factors, and the ontological description would capture that requirement.  TSCP will require description of an authentication chain, which links the process being checked to the process that spawned it, to a communication that triggered that process, to the process which initiated the communication, to the role which allows for that communication, to a user logged into that role.  TSCP may call for description of command/data provenance; it may call for a framework within which the legitimacy of access depends on system state as well as on variables related to the request.
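As a minimal sketch of the chain structure described above (link types, field names, and the termination condition are illustrative assumptions, not part of any TSCP specification), a deep-authentication check might walk the chain back from the process being checked to a human principal:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical link types in a deep-authentication chain; the names
# "process", "message", "role", "user" are illustrative only.
@dataclass
class Link:
    kind: str                        # e.g. "process", "message", "role", "user"
    subject: str                     # identifier of the entity at this link
    parent: Optional["Link"] = None  # next link back toward the human principal

def chain_terminates_in_user(link: Link) -> bool:
    """Walk the chain from the checked process back to its origin and
    confirm it ends at an authenticated human user."""
    while link.parent is not None:
        link = link.parent
    return link.kind == "user"

# Example chain: process <- triggering message <- role <- logged-in user
user = Link("user", "alice")
role = Link("role", "operator", user)
msg  = Link("message", "msg-42", role)
proc = Link("process", "pid-7", msg)
```

A real implementation would attach cryptographic evidence to each link; the sketch only shows the chained shape that the ontology must describe.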

More extensive checking will increase the latency of messages through a TSCP-protected system.  A critical part of this thrust’s work will be to use modelling tools such as CyberSage, S3FNet, and Mobius to analyse the impact that a given placement and functionality of TSCP appliances will have, and to identify and discard non-compliant solutions.

We drive policy framework development with four policy problems.  The first is the upload of configurations to devices, with policy requirements on who can perform the upload, how strongly they must be authenticated, when they can do it, and on the provenance of the configurations themselves.  A second policy problem is provisioning protection to legacy devices by placing a TSCP appliance between the device and network access.  Questions of what to let through to the device are more straightforward than questions of what to let through to the TSCP infrastructure, but this is an important problem that will arise again when we consider intermittently bringing COTS devices to interact with a TSCP-enabled CPS; the different types of policy this application needs will drive further development of the policy ontology.  A third application for policy development is command validation, which has two pieces: assurance that the command is given by a user who is authorized to give it, and assurance that the command makes sense in the given context.  All but the last of these questions are policy oriented.  A fourth policy problem is limiting the egress of potentially sensitive data.  Such policy can prevent sensitive information from leaving a CPS and, applied to other domains, prohibit the data breaches that so plague commercial enterprises.
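To make the first policy problem concrete, a rule-based check for a configuration upload might combine the four stated requirements. This is a sketch only; the field names, role set, and two-factor threshold are assumptions, not TSCP policy:

```python
from dataclasses import dataclass

# Illustrative request attributes for the configuration-upload policy problem.
@dataclass
class UploadRequest:
    user_role: str               # who is requesting the upload
    auth_factors: int            # how many authentication factors were verified
    in_maintenance_window: bool  # when the upload is attempted
    config_provenance_ok: bool   # provenance chain of the configuration verified

def allow_config_upload(req: UploadRequest) -> bool:
    # All four policy dimensions must be satisfied (assumed roles/threshold).
    return (req.user_role in {"engineer", "admin"}
            and req.auth_factors >= 2
            and req.in_maintenance_window
            and req.config_provenance_ok)
```

In the project, checks of this kind would be expressed in the policy ontology and enforced at a TSCP policy node rather than hard-coded.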

As progress is made developing policies and embedding TSCP appliances into testbeds, we will develop reference architecture documentation.

SUTD Prof Aditya Mathur will work closely with UIUC on the design of architecture and authentication mechanisms in Research Thrust 1.  Aditya’s intimate knowledge of, and experience with, the architecture of legacy water treatment and distribution systems will enable the design of new architectures and authentication mechanisms that can be used not only in the design of new plants but also to enhance the security of legacy plants such as those in Singapore.  The placement of sensors, the control code inside the Programmable Logic Controllers, and the multi-layer communication mechanisms in such plants will need to be re-evaluated with respect to the proposed architecture and authentication mechanisms.

SUTD Prof Jianying Zhou will also work closely with UIUC on the authentication core mechanisms in Research Thrust 1, and authentication trust technologies in Research Thrust 2 (see below at Thrust 2).


  • Identification and development of a framework to implement deep authentication.
  • Identification and development of a framework to implement provenance checking for digital artifacts.
  • Development of policy ontologies to address:
    • device configuration,
    • legacy device protection,
    • command validation, and
    • data egress.
  • Design and implementation of TSCP policy nodes implementing the policies above.
  • Methodologies and front-ends for modelling tools to analyse the feasibility of prospective placement of TSCP appliances and the functionality they apply.
  • TSCP reference architecture and APIs.
  • Prototypes developed and evaluated in EPIC testbed.

Project 2.1.  Authentication Trust Technology (PIs: UIUC: Deming Chen, David Nicol; SUTD: Jianying Zhou)

Description.  The deep authentication and digital provenance efforts of Thrust 1 will depend on application of cryptographic signatures.  The choice of framework for implementing those signatures and managing their keys is a serious one.  One can imagine using standard PKI certificates, but looking ahead to TSCP integrating many systems with the need for cross-system validation, one foresees exactly the same problems that limit PKI in enterprise systems.  We will investigate alternative means with promise of better scaling, using technologies such as Google’s macaroons or some application of blockchains.  Macaroons are based on chaining (which we find in deep authentication and digital provenance) and provide decentralized delegation in a way that allows a designer to fine-tune, in context, what is being authorized.  Blockchains offer an even more decentralized structure, with full transparency into the provenance of an associated digital artifact, and so will be studied for application here.
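The chaining that makes macaroons attractive here is HMAC chaining: the signature over the token is re-keyed with each added caveat, so a holder can narrow authority without the root key but can never widen it. A minimal sketch of that mechanism (not a full macaroon library; identifiers and caveat strings are illustrative):

```python
import hmac
import hashlib

def _hmac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: bytes) -> bytes:
    # The initial signature binds the token to its identifier.
    return _hmac(root_key, identifier)

def add_caveat(sig: bytes, caveat: bytes) -> bytes:
    # Each caveat narrows authority; the new signature chains over the old one,
    # so caveats can be appended by any holder but never removed.
    return _hmac(sig, caveat)

def verify(root_key: bytes, identifier: bytes,
           caveats: list, sig: bytes) -> bool:
    # Only the minting service (holding the root key) can replay the chain.
    expected = mint(root_key, identifier)
    for c in caveats:
        expected = add_caveat(expected, c)
    return hmac.compare_digest(expected, sig)
```

The same chain-over-the-previous-signature shape is what suggests a fit with the deep-authentication and provenance chains of Thrust 1.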

As noted above, SUTD Prof Jianying Zhou will embed with UIUC colleagues working on Project 2.1, supporting investigations into technologies of trust with his deep experience in this area.  Jianying was lead PI of the project SecSG-EPD090005RFP(D) "Cyber Security and Secure Intelligent Electronics Devices for EV Ecosystem in Smart Grid" funded by EMA under the Smart Energy Challenge Programme.  In this project, the cyber-physical vulnerabilities in NIST 7628 "Smart Grid Cyber Security Standard" were identified, and a novel cyber-physical device authentication protocol was designed.  The team developed the world's first cyber-physical secure EV charging system (prototype) with several unique features, including cyber-physical entity authentication and demand-response control.  One of the industrial collaborators in this project is Singapore Power.  Jianying is also lead PI of the project SecUTS-NRF2014NCR-NCR001-031 "A Cyber-Physical Approach to Securing Urban Transportation Systems" funded by NRF under the National Cybersecurity R&D Programme, which includes ADSC as a principal partner.  In this project, a novel two-factor entity authentication protocol, since filed for a US patent, was designed to protect SCADA devices using big data.  A virtually isolated network was designed which can boost the bandwidth of a legacy isolated network by over 20 times at very low cost while retaining the security requirements of the isolated network.  An advanced SCADA firewall was designed to perform real-time comprehensive packet inspection to detect more sophisticated attacks.


  • Methodology for supporting cryptographic basis for deep authentication, with implementation.
  • Identification of existing base of trust technology appropriate for TSCP appliances and their use, e.g. TPM in commercial chips and/or existing research results on software-implemented base of trust.
  • Methodology for supporting provenance of digital artifacts, with implementation.

Project 3.1.  Standards for interaction of COTS devices with TSCP (PIs: UIUC: Zbigniew Kalbarczyk, Bill Sanders; SUTD: Hock Beng Lim)

Description.  There will always be contexts where a device that has not been developed to TSCP specifications will have to interact with a CPS protected by TSCP.  Legacy devices are a good example, and are a class of devices so important that Thrust 1 devotes effort to developing policy governing how they interact with the rest of the system.  A harder problem is to accommodate the reality that commercial off-the-shelf (COTS) devices will have to be brought to a TSCP-protected system to monitor it, to do diagnostics, and to provide maintenance support.  In the future, one can even imagine apps running on tablets that routinely interact with the TSCP-protected system but also have connections elsewhere, potentially even into the broader Internet.

The objective of this project is to determine what standards (or applications) must be imposed on commodity products for them to play a limited role in a TSCP-protected CPS.  Applications which participate passively need only restrictions on connectivity that ensure they cannot inject anything into the system.  Applications which require direct interaction (e.g., software/firmware update) need to adhere to principles governing that interaction, or, better still, be limited to using a TSCP-validated application.

We will make this problem and solution concrete by designing and implementing an application for delivering a software update to a CPS device, using a COTS device running an application which constrains the interaction to be a software update.
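The gateway side of such an application could be as simple as rejecting every message type except an authenticated software update. A sketch under stated assumptions (a pre-shared key and an HMAC tag on the payload are assumptions of the sketch; the project would use the Thrust 2 trust technologies instead):

```python
import hmac
import hashlib

def admit(shared_key: bytes, msg_type: str, payload: bytes, tag: bytes) -> bool:
    """Admit a message from a COTS device only if it is a software update
    whose payload carries a valid authentication tag."""
    if msg_type != "software_update":
        # Everything other than the single permitted interaction is rejected.
        return False
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Constraining the COTS side to one validated application, and the TSCP side to one admissible message type, is what makes the interaction auditable.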

The standards to be developed by this effort are specific to interaction with TSCP components.  As background, we may conclude that certain existing standards should be recommended; there is no need to replicate them.  Rather, TSCP components will offer specific interfaces with certain expectations regarding support for chained authentication and the like.  The standards to be developed are better understood as standards for applications running on COTS devices, where all interaction with TSCP passes through those applications.

SUTD Centre for Smart Systems (CS2) Director Prof. Lim is keen to contribute to the design and development of virtualization and ontology-driven techniques to support the interoperability of COTS and IoT devices with TSCP, in particular for software updates.  He has developed virtualization techniques for the interoperability of COTS devices with IoT gateways in smart cities applications.  CS2 will also serve as the host centre for collaborations with TSCP partner companies and agencies working on joint development and deployment projects.


  • Threat analysis of COTS/IoT devices against a TSCP-protected CPS.
  • Standards defining limits on device interaction, depending on function.
  • Design, implementation, and demonstration of application running on COTS device to update software on TSCP protected device in CPS.

Project 3.2.  TSCP System Verification (PIs: UIUC: Grigore Rosu, Bill Sanders; SUTD: SUN Jun)

Description.  To meet the highest standards of assurance, dependability, and security, the ultimate goal of future TSCP systems is formally guaranteed correctness: that is, to produce mathematically grounded certification that the final TSCP system indeed satisfies the formal requirements of its application domain.  Consider, e.g., the task of shutting down a component (e.g., in a power plant) safely, where “safely” means only after radio approval from some authority and only if the radio signal has not been down since that approval (see picture to the right).  The fact that the system is monitored against this requirement is not sufficient, because many things can go wrong.  For example, the safety property may not be coded correctly by the developer.  Or the system may be incorrectly instrumented, so that some relevant events are not observed, or are duplicated.  Or an attacker may exploit a buffer overflow in the application and disable the monitor altogether.  And so on.

New formal methods research is needed to develop techniques for the complex, large, distributed, and decentralized systems required by TSCP.  While there are isolated techniques that can be used to verify a program in a given language, and techniques that can be used to generate provably correct programs for some isolated domains, there is currently no general approach specifically crafted to formally guarantee, statically, that a runtime monitored or verified system will execute correctly.  We believe that such a verification technology is timely and critical for future systems, and in particular for TSCP.  As shown in the figure above, we plan to develop domain-specific languages for specifying safety requirements for target domains, provably correct monitor generators for such formalisms, automatic instrumentation, as well as a general verification infrastructure that can take all of these artifacts and produce a third-party machine-checkable correctness proof of the resulting system.  Using this approach, a certifying authority can check the correctness of the resulting system by simply checking the produced proof certificates, without having to trust our (admittedly complex) system verification framework.
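For the shutdown example above, the monitor that a generator would synthesize from the safety property can be sketched as a small state machine (event names are illustrative; a generated monitor would come with a correctness proof rather than being hand-written like this):

```python
# Monitor for the example safety property: a SHUTDOWN is safe only after an
# APPROVE from the authority, with no RADIO_DOWN since that approval.
class ShutdownMonitor:
    def __init__(self):
        self.approved = False   # is a still-valid approval in effect?
        self.violations = 0     # count of observed unsafe shutdowns

    def observe(self, event: str) -> None:
        if event == "APPROVE":
            self.approved = True
        elif event == "RADIO_DOWN":
            # A radio outage invalidates any earlier approval.
            self.approved = False
        elif event == "SHUTDOWN":
            if not self.approved:
                self.violations += 1
```

The project's contribution is not the monitor itself but the machine-checkable proof that monitors like this one are generated correctly and woven into the system without gaps in instrumentation.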

As with any formal methods based approach, an important question is how the method scales with the complexity of the code being validated.  This thrust will address that problem by assessing the cost of static and dynamic methods as the functionality of a TSCP node increases by adding policies.  Through the Illinois/SUTD collaboration, SUTD Prof Sun Jun will add complementary capabilities and analytic focus, drawing on 15 years of engagement with formal specification and verification, through: i) investigation and design of a domain-specific language for capturing system requirements, based on temporal logic and complementary specification languages; ii) design and implementation of approaches for automatically synthesizing monitors from the formal specification; iii) formal proofs that the synthesized monitors guarantee the system satisfies its requirements, together with proofs that the synthesizer itself is correct; and iv) investigation of concise, human-understandable, third-party-verifiable certification of system safety based on the monitors.

  • Monitor generation techniques for some common formalisms used to specify safety properties, such as temporal logics.
  • Machine-checkable proofs of correctness of monitor generators.
  • Demonstration of how provably correct monitoring code can be woven into a given application so that the resulting monitored program can be proved correct with respect to the monitored safety property in a scalable manner, where the size and/or complexity of the system plays a minor role.
  • Scalability study.

Project 4.1.  Data Analytics and Alert Response (PIs: UIUC: Ravi Iyer, Marianne Winslett, Michael Bailey; SUTD: Aditya Mathur, David Yau)

Description.  TSCP protects the cyber-physical systems underpinning critical infrastructures, which continuously generate massive quantities of data.  Analytics over this data is key to identifying many kinds of security issues.  Affordable analytics in real time and at scale is the key enabler, allowing real-time intelligent alerts.

In this project, we aim to enable this new capability by exploiting maturing programmable hardware technologies to push real-time analytics out of back-end servers and into front-end data collection devices, enabling a massive reduction in data transmission and reducing response time for anomalous events.  We will exploit the new hardware now becoming available in clouds (GPU, FPGA) to cut the cost of key mining tasks for our applications, especially for deep learning models and difficult types of data. 

The result of real-time data analytics will be alerts: for existing failures, for imminent failures, for conditions conducive to failures, and for evidence of intrusion or any other kind of abnormal behaviour.  Operators can be overwhelmed by alerts.  One challenge is accuracy in the alerts; another is to filter and prioritize them, presenting to the operator those that must be attended to most urgently.  A core challenge of this area is to use physics and knowledge of the behaviour of the industrial control system to recognize alerts that are not meaningful or not important, and to identify those that are.  The thrust area is not so much about finding new alerts as about finding them faster, and using domain knowledge to do a much better job of presenting alerts to operators.

The first approaches we will explore in alert filtering and prioritization may be rule-based, to aid in codifying which alerts are most important and under what conditions, although developing such rules by hand can be problematic.  We see promise, though, in bringing machine learning to the problem, particularly if we are able to develop successful unsupervised learning algorithms.
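A rule-based baseline of the kind mentioned above could weight alerts by simple domain predicates and present the operator with a priority-ordered queue. The rules, field names, and weights below are assumptions for illustration only:

```python
# Illustrative scoring rules: each is a (predicate, weight) pair over an alert
# record.  Real rules would be derived from plant physics and operator input.
RULES = [
    (lambda a: a["source"] == "safety_plc", 10),                 # safety controllers first
    (lambda a: a["kind"] == "intrusion", 5),                     # security over anomaly
    (lambda a: a.get("physics_consistent", True) is False, 8),   # violates plant physics
]

def priority(alert: dict) -> int:
    """Sum the weights of all rules the alert satisfies."""
    return sum(weight for rule, weight in RULES if rule(alert))

def triage(alerts: list) -> list:
    """Order alerts so the operator sees the most critical ones first."""
    return sorted(alerts, key=priority, reverse=True)
```

An unsupervised-learning approach would replace the hand-written `RULES` table with learned scores while keeping the same triage interface.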

Aditya’s group in iTrust has initiated work on command validation in water treatment systems.  Existing work at iTrust will serve as one of the bases for the creation of advanced command validation, data analytics, and alert systems.  Current work at iTrust has led to the creation of physics-based alert mechanisms.  However, additional research is needed to develop technology to rapidly identify what sequence of events led to an alert.  Furthermore, the current implementation of the alert mechanisms in iTrust issues alerts but does not take, or recommend, any action; the responsibility to act rests entirely on the operator.  As part of Thrust 4, Aditya will contribute to the technology of post-alert actions, considering a human-centric approach combined with automated options.  Such technologies will likely need to be integrated with the authentication mechanisms proposed in Thrust 1.

SUTD Prof David Yau, who has been PI/Co-PI for several collaborative SUTD-ADSC projects, has been working actively with UIUC/ADSC researchers in demonstrating the potential of ADSC’s world-class real-time analytics capabilities for significantly improving the detection and mitigation of attacks launched by strategic and knowledgeable adversaries.  As part of the overall array of research within this project thrust, Prof Yau will work closely with the Illinois/ADSC PI collaborators to provide the envisioned at-scale, real-time (and potentially predictive), actionable monitoring against a key range of energy system security problems.


  • Accelerator components for affordable real-time analytics pipeline and data lifecycle management.
  • Hardware-sensitive partitioning strategies.
  • Scalable mining components.
  • Algorithms to use physics and knowledge of the industrial control system’s behaviour to filter and prioritize intrusion and anomaly alerts, to aid the operator in addressing the most critical problems during an event first.