SCADA Standardization: The NIST (National Institute of Standards and Technology) Approach

 


  1. Introduction

    1.1 Program Overview

 

The National Institute of Standards and Technology (NIST) is developing a cybersecurity testbed for industrial control systems. The goal of the testbed is to measure the performance of industrial control systems when instrumented with cybersecurity protections in accordance with best practices prescribed by national and international standards and guidelines, such as IEC 62443 and NIST SP 800-82. The testbed will include a variety of industrial control simulation scenarios. The first scenario will simulate a well-known chemical process called the Tennessee Eastman (TE) problem. The TE problem is an ideal candidate for cybersecurity investigation because it is an open-loop unstable process that requires closed-loop supervision to maintain process stability and optimize operating costs.
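To illustrate why an open-loop unstable process demands closed-loop supervision, the short Python sketch below simulates a scalar plant whose state grows without bound unless a feedback controller intervenes. The plant model, gains, and numbers are purely illustrative and are not part of the TE simulator.

    # Minimal illustration (not the TE model): a scalar discrete-time plant
    # x[k+1] = a*x[k] + b*u[k] with a > 1 is open-loop unstable -- any
    # disturbance grows without bound unless a controller closes the loop.

    a, b = 1.05, 0.5          # illustrative plant parameters (a > 1: unstable)
    kp = 0.4                  # illustrative proportional gain
    setpoint = 0.0

    def simulate(steps, closed_loop):
        x, history = 1.0, []                  # start from a small disturbance
        for _ in range(steps):
            u = kp * (setpoint - x) if closed_loop else 0.0
            x = a * x + b * u
            history.append(x)
        return history

    open_loop = simulate(50, closed_loop=False)   # diverges (roughly 1.05**50)
    closed = simulate(50, closed_loop=True)       # decays toward the setpoint
    print(f"open-loop final state:   {open_loop[-1]:8.2f}")
    print(f"closed-loop final state: {closed[-1]:8.4f}")

With feedback, the closed-loop dynamics become x[k+1] = (a - b*kp) x[k] = 0.85 x[k], so the disturbance dies out instead of growing.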

 

    1.2 Terms, Acronyms, and Abbreviations

 

Many acronyms and abbreviations are used throughout this document; Table 1 lists the most common of them.

 

Table 1. List of Terms

 

Term | Definition
AC | Alternating Current
CIP | Common Industrial Protocol
CPU | Central Processing Unit
DC | Direct Current
DMZ | Demilitarized Zone
GbE | Gigabit Ethernet
GPS | Global Positioning System
HDMI | High-Definition Multimedia Interface
HMI | Human-Machine Interface
Hz | Hertz
IGMP | Internet Group Management Protocol
IP | Internet Protocol
KVM | Keyboard, Video, and Mouse
LAN | Local Area Network
LED | Light-Emitting Diode
LSM | Loadable Software Module
MST | Multiple Spanning Tree
OPC | OLE for Process Control
P/S | Power Supply
PC | Personal Computer
PCI | Peripheral Component Interconnect
PCIe | Peripheral Component Interconnect Express
PLC | Programmable Logic Controller
QoS | Quality of Service
RAM | Random Access Memory
RPM | Revolutions per Minute
RSTP | Rapid Spanning Tree Protocol
SATA | Serial ATA (Advanced Technology Attachment)
STP | Spanning Tree Protocol
TCP | Transmission Control Protocol
TE | Tennessee Eastman
UPS | Uninterruptible Power Supply
USB | Universal Serial Bus
V | Volt
VGA | Video Graphics Array
VLAN | Virtual Local Area Network
W | Watt(s)

 

 

 

    1.3 Note

 

Note that while specific manufacturers' products are identified in this requirements document, use of those particular products is not required for compliance. All solutions will be considered if they meet the minimum requirements and address the intended usage of the system.

 

 

 

  2. System Overview

    2.1 Intended Usage

 

The system, referred to herein interchangeably as the testbed, rack, or enclave, is intended to emulate a real-world industrial enterprise system as closely as possible without replicating the plant itself. The system is intended to be reconfigurable such that different components may be interconnected in a variety of network configurations. Traditional industrial systems are often designed with the controller (such as a PLC) close to the machines being controlled; however, with IP-routable protocols and PC-based control systems becoming more prevalent, the controller may be located remotely from the machines being controlled, with communications conducted over Ethernet-based and wireless media. The testbed will be used to measure process control performance when instrumented with perimeter-based and host-based security protections. Some research areas of interest for the reconfigurable testbed are listed below.

 

Security Approaches

 

  • Recommendations for perimeter network security

  • Host-based security such as anti-virus

  • User and device authentication

  • In-line encryption

  • Packet integrity and authentication

  • Deep-packet inspection

  • Zone-based security policies

  • Cyber-physical redundancy

  • Cyber-physical anomaly detection

  • Robust/fault-tolerant control

  • Automated fault recovery

  • Distributed state estimation and validation

 

Networking Components and Protocols

 

  • IP-routable protocols

  • Field bus (non-IP-routable) protocols

  • Firewalls with deep packet inspection

  • Managed industrial switches

  • Network traffic monitoring

 

As shown in Figure 1, the TE enclave will be part of a broader lab network that will include a robotics assembly enclave and one other enclave that has yet to be determined. The entire lab will include a measurement assembly that will allow capture of network traffic and security events through a syslog capture server and Wireshark. The requirements within this document pertain only to the TE enclave highlighted in the diagram. Ultimately, the TE enclave will be used for industrial network security research, and the system architecture and components should be designed for that purpose.

 

Figure 1. System Context for the Tennessee Eastman Enclave

 

 

 

    2.2 Default Configuration

 

In its default configuration, the testbed will be constructed as a multi-zone architecture as shown in Figure 2. The “Plant Zone” will contain the plant simulator, the PLC, OPC server, local historian, and the network configuration software required for industrial protocol operation. The “Manufacturing Control Zone” will contain the controller and the human-machine interface (HMI). A third zone, the “Lab DMZ,” will contain the enterprise historian.

 

Figure 2. Logical Network Architecture – Default Configuration

 

A notional rack configuration for the TE simulator is shown in Figure 3. The enclave consists of rack-mounted computers capable of running the various components shown in Figure 2. The rack itself will be a standard 19” cabinet. It is desired that the rack have doors that can close and lock with the PLC and switches installed, and that the rack be only as deep as required to house the mounted equipment. It may be necessary to recess the PLC and switch center from the front, or to mount them rear-facing, so that the front cabinet doors may close. A video management system shall be provided such that a single user workstation (keyboard, video, and mouse) can be used to control any of the computers in the rack. All equipment shall be delivered in new condition; refurbished equipment will not be acceptable.

 

Figure 3. A Notional Rack Deployment Configuration

 

    2.3 Security Protections

 

As stated in the Intended Usage section, the system will be used to measure the performance of a simulated process control system when instrumented with perimeter-based and host-based security protections. Perimeter-based security protections include measures such as user and device authentication, transmission integrity, transmission authentication, encryption, and deep packet inspection applied by network devices. Host-based security protections include anti-virus software, software firewalls, intrusion detection software, application software updates, and operating system updates. If additional security products exist, particularly for the control hardware, it is desired that those products be offered as proposal options even when they are not explicitly required by this document.

 

2.3.1 Host Virtualization

 

It is not the intent of this document to prescribe a solution. The hosts identified in the Intended Usage and Default Configuration sections, and elaborated in the Components section, specify the minimum requirements for an enclave implemented entirely with dedicated computing resources. Given modern advances in computing performance and host virtualization, proposers are encouraged to include host virtualization as part of their solution. A solution that includes virtualization would enable rapid reconfiguration of the network architecture, redeployment of (virtual) hosts to different subnets, and faster restoration of operating systems that are damaged by malware during research. Virtualization would also enable reconfigurable network storage options common to network data centers. Solutions that include virtualization as part of the architecture will be considered compliant if they provide the functional and computing performance specified in the Components section for a dedicated hardware approach. A notional virtualization approach is shown in Figure 4.
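As an illustration of the rapid-restoration benefit mentioned above, the following sketch assumes a KVM/libvirt-based hypervisor and a hypothetical virtual machine named "te-hmi-host" with a pre-existing snapshot named "clean"; neither the hypervisor choice nor these names are mandated by this document.

    # Illustrative only: rolling a compromised virtual host back to a clean state.
    # Assumes a KVM/libvirt hypervisor; the domain and snapshot names are hypothetical.
    import libvirt

    conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
    dom = conn.lookupByName("te-hmi-host")       # hypothetical virtualized HMI host
    snap = dom.snapshotLookupByName("clean")     # snapshot taken before the experiment
    dom.revertToSnapshot(snap)                   # discard malware damage, restore the clean OS image
    conn.close()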

 

 

 

Figure 4. A Notional Rack Deployment Configuration with Virtualized Servers and Network Storage

 

  3. Components

    3.1 Rack and Enclosure

 

As shown in Figure 3, the system will be entirely rack-mounted. The requirements for the rack are listed in the following sections. The terms rack, enclosure, and cabinet may be used interchangeably throughout this document.

 

  3.1.1 Enclosure Layout and Wiring

 

The following requirements pertain to the rack enclosure and wiring.

 

  1. The rack and enclosure shall be no more than 42U in height.

  2. The rack and enclosure shall be delivered with side-panels installed.

  3. The rack and enclosure shall come installed with casters for mobility.

  4. The rack and enclosure shall provide lockable front-side and rear-side doors that will close completely when all equipment is installed.

  5. The rack and enclosure shall come installed with cable management to route and protect data cabling. Patch panels are encouraged.

  6. The rack shall provide power strips such that components inside the rack may be connected to power from within the rack.

  7. All rack-mounted equipment shall include rear support mechanisms to prevent sagging. Sliding rails for the heavier rack-mounted computers are acceptable.

 

    3.1.2 Uninterruptible Power Supply

 

The following requirements pertain to the uninterruptible power supply (UPS).

 

  1. The rack and enclosure shall provide a built-in UPS that is sized to support 100% load for no less than five (5) minutes. This is intended to compensate for short-term dips in electrical power to the rack.

  2. The UPS shall be mounted at the bottom of the rack.

 

    3.1.3 Human Input/Output Devices

 

The following requirements pertain to the input/output devices and connectivity to the rack.

 

  1. The rack and enclosure shall provide a single point of control and visualization for all the computing equipment in the rack.

  2. The rack and enclosure shall provide a USB keyboard and a USB optical mouse.

 

Monitors and a video management system are required for the TE system. Refer to the Monitors and Video Management System section for additional requirements regarding the video management system.

 

 

 

 

 

    3.2 Switch Center

 

The switch center will provide industrial network segmentation within the rack. As shown in Figure 2, the network will be segmented into zones using the switching capabilities provided in the switch center.

 

  1. The switch center shall provide the set of functionality listed in Table 2.

 

Table 2. Switch Center Functionality

 

Qty | Capability | Example
1 | Router with IEEE-1588 support | Allen-Bradley Stratix 5900/8300 Layer 3 switch, or Cisco Layer 3 switch with IEEE-1588 support
2 | Managed industrial access switch with IEEE-1588 support and NAT support | Allen-Bradley Stratix 5700 (1783-BMS10CGN)
1 | Industrial power supply | Rockwell Automation 1606-XL60D: Standard Power Supply, 24V DC, 60 W, 120/240V AC / 160-375V DC input voltage
1 | Industrial firewall with deep packet inspection capability | Tofino Security Appliance with deep packet inspection and loadable security modules

 

 

 

  1. Each managed industrial switch shall be compliant with the minimum hardware requirements specified in Table 3.

 

Table 3. Industrial Switch Minimum Hardware Requirements

 

Feature | Specification
Number of RJ-45 Ports | 8 Fast Ethernet ports
Combo Ports | 2 GbE ports
IEEE-1588 Compatible | Yes

 

 

 

  1. Each managed industrial switch shall be compliant with the minimum software requirements specified in Table 4.

 

Table 4. Industrial Switch Minimum Firmware/Software Requirements

 

Feature | Mandatory
Switching
CIP Sync (IEEE-1588) | Yes
Resilient Ring Protocol | Yes
FlexLinks | Yes
QoS | Yes
STP/RSTP/MST (instances) | 128
IGMP snooping with querier | Yes
VLANs with trunking | Yes
Link aggregation | No
Storm control and traffic shaping | Yes
IPv6 | No
Access control lists | Yes
Static and InterVLAN routing | Yes
Security
CIP port control and fault detection | No
MAC ID port security | Yes
IEEE 802.1x security | Yes
TACACS+, RADIUS authentication | Yes
Encryption (SSH, SNMPv3, HTTPS) | Yes
Diagnostics
Port mirroring | Yes
Syslog | Yes
Broken wire detection | Yes
Duplicate IP detection | Yes
Network Address Translation (NAT) | No
Command Line Interface | Yes
Cisco Tool Compatible | Yes
Application Interface
EtherNet/IP (CIP) interface | Yes

 

 

 

  1. The Industrial Firewall shall meet the minimum functional requirements specified in Table 5.

 

Table 5. Minimum Requirements for the Industrial Firewall

 

Feature | Example
Firewall | Tofino Firewall
Device Discovery | Tofino Secure Asset Management (SAM)
Security Event Logger | Tofino Event Logger
Deep Packet Inspection | Tofino Deep Packet Inspection LSM
Content Inspector for Modbus TCP | Tofino Modbus TCP Enforcer LSM
Content Inspector for OPC | Tofino OPC Enforcer LSM
Content Inspector for EtherNet/IP | Tofino EtherNet/IP Enforcer LSM
Centralized Security Management | Tofino Central Management Platform

 

 

 

    3.3 Control Center

 

The control center will be divided into two distinct control centers: a hardware-based PLC (a hard PLC) and a PC-based PLC (a soft PLC).

 

  3.3.1 Hard-PLC Control Center

 

The hard-PLC portion of the control center shall serve as the industrial control interface between the plant simulator and other manufacturing hosts within the system, such as the controller host and OPC server. It is intended to serve as the industrial control center of the system, providing protocol interfaces and translation for downstream automation processes.

 

  1. The hard PLC portion of the control center shall provide the functionality specified in Table 6.

 

Table 6. Hard-PLC Minimum Hardware Requirements

 

Qty | Description | Example
1 | Chassis | Rockwell Automation 1756-A7
1 | DeviceNet Bridge/Scanner Module | Rockwell Automation 1756-DNB
1 | EtherNet Interface Module | Rockwell Automation 1756-EN2T
1 | Secure Communications Module, EtherNet/IP, with IPsec VPN and 1 RJ45 port | Rockwell Automation 1756-EN2TSC
1 | Programmable Controller | Rockwell Automation 1756-L71
1 | 85-265 VAC Power Supply (13 Amp @ 5V) | Rockwell Automation 1756-PA75
1 | GPS Time Sync Module | Rockwell Automation 1756-TIME
1 | DeviceNet cabling, 24V/3A | Belden DeviceBus cables for ODVA DeviceNet connectivity

 

 

 

  1. All hard-PLC control center components shall be delivered installed, wired, and connected within the rack.

  2. The hard-PLC shall be compatible with the HMI software.

  3. The hard-PLC shall be compatible with the OPC server software.

  4. The hard-PLC shall be compatible with the historian software.

 

    3.3.2 PC-based PLC (Soft-PLC) Control Center

 

One purpose of the testbed is to measure the performance of control systems when instrumented with host-based security. Host-based security includes protections such as anti-virus, intrusion detection, live operating system upgrades, live software patching, and virtualization. In order to insert such security protections, it will be necessary to have administrative access to the operating system. Many PC-based PLC solutions exist in the marketplace; SoftPLC Corporation is one such vendor and advertises an open architecture that allows user and software development access to the operating system. The SoftPLC Corporation Hardbook (Catalog# HB4-HPLA-1K) may serve as a reference for the proposer. Minimum hardware requirements are listed in Table 7.

 

  1. The soft-PLC shall be mounted within the rack enclosure.

  2. The soft-PLC shall comply with the minimum hardware requirements listed in Table 7.

 

Table 7. Soft-PLC Minimum Hardware Requirements

 

Feature | Requirement
CPU | 1 GHz
RAM | 512 MB
Expansion Slots | Four (4) PCI slots
Networking Interfaces | Two (2) Fast Ethernet, RJ-45; Two (2) USB 2.0 ports
Indicators | Fault/Status LEDs
Power | AC power with internal AC power adapter, or 24 VDC nominal power input
Network Support | EtherNet/IP scanner; Ethernet
Surge suppression | Yes

 

 

 

  1. The soft-PLC shall allow administrative access to the operating system.

  2. The soft-PLC shall allow installation of third-party software applications.

  3. The soft-PLC shall allow installation of third-party anti-virus applications.

  4. The soft-PLC shall be delivered with a ladder programming software toolset that includes basic ladder constructs, PID control blocks, and timers.

  5. The soft-PLC shall be compatible with the HMI software.

  6. The soft-PLC shall be compatible with the OPC server software.

  7. The soft-PLC shall be compatible with the historian software.

  8. All soft-PLC control center components shall be delivered installed, wired, and connected within the rack.

 

    3.4 Hosts

      3.4.1 Plant Simulator Host

 

The plant simulator will be used to simulate the Tennessee Eastman industrial chemical process as well as other factory processes. The simulated TE factory will contain virtual sensors and actuators. These virtual sensors and actuators will communicate with “real-world” control hardware through a variety of protocols, including DeviceNet and EtherNet/IP. Both IP-routable and non-IP-routable protocols will be utilized within the testbed. When bridging from the virtual environment to the physical environment, PCI-based adapter cards will be used; to enable this bridging, the plant simulator must provide ample PCI expansion slots. In addition, because multiple sensors and actuators will be simulated, the card used must be capable of supporting multiple virtual sensors/actuators. For example, the MOLEX DeviceNet Multi-Adapter PCI card (Part# SST-DN4MS-PCU, MOLEX Part# 112113-0009) has been identified as capable of supporting multiple devices on the adapter side. Requirements for the plant simulator are listed below, after a brief illustrative sketch of the virtual-to-physical bridging.
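The exchange between the simulated process and the physical control hardware can be pictured as a scan loop: at each simulation step, actuator commands are read from the fieldbus adapter and updated sensor values are written back. The sketch below is illustrative Python only; the FieldbusAdapter class and its methods are hypothetical placeholders and do not represent the MOLEX card's actual driver interface.

    # Illustrative scan-loop structure for bridging a simulated plant to real
    # control hardware. The FieldbusAdapter class and its methods are
    # hypothetical placeholders, not the MOLEX card's actual driver API.
    import time

    class FieldbusAdapter:
        """Hypothetical wrapper around a DeviceNet/EtherNet/IP adapter card."""
        def read_actuators(self) -> dict:
            # In a real bridge this would read commanded outputs from the PLC.
            return {"feed_valve": 0.25, "coolant_valve": 0.40}
        def write_sensors(self, values: dict) -> None:
            # In a real bridge this would publish simulated sensor readings.
            pass

    def plant_step(state: dict, actuators: dict, dt: float) -> dict:
        # Placeholder for one integration step of the simulated process model.
        return state

    adapter, state, dt = FieldbusAdapter(), {"reactor_pressure": 2700.0}, 0.1
    for _ in range(10):                      # fixed number of steps for illustration
        commands = adapter.read_actuators()  # commands computed by the real controller
        state = plant_step(state, commands, dt)
        adapter.write_sensors(state)         # expose simulated measurements to the PLC
        time.sleep(dt)                       # pace the loop at the simulation rate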

 

  1. The plant simulator computer shall be rack mounted within the enclosure.

  2. The plant simulator shall meet the minimum technical specifications listed in Table 8.

 

Table 8. Plant Simulator Minimum Technical Specifications

 

Feature | Minimum Specification
CPU | Intel® Core™ i5-4440 Processor (6M Cache, up to 3.30 GHz)
Supports Virtualization | Yes
Memory | 8 GB
Drive | SATA, 250 GB, 7200 RPM
Optical Drive | SATA, DVD-RW/CD-RW
Network Interface | One (1) rear panel 1 GbE
USB | Four (4) rear panel USB 2.0; Two (2) front panel USB 2.0 (desired)
Video | Compatible with the video management system; at least 1920x1080 resolution
PCI Expansion Slots | One (1) full-height PCI; One (1) full-height PCIe x1
DeviceNet Multi-Adapter Card Installed | MOLEX DeviceNet Multi-Adapter PCI card (Part# SST-DN4MS-PCU, MOLEX Part# 112113-0009). Note that this card must be the “multi-adapter” card, not the single-adapter card.
EtherNet/IP Adapter Card Installed | MOLEX EtherNet/IP Adapter/Scanner PCIe x1 card (Part# DRL-EIP-PCIE). Note that a PCI version is also available.
Installed operating system | Windows 7 Professional 64-bit with the latest service pack

 

 

 

    3.4.2 Control Host

 

The Control Host will serve as the main controller within the enclave. The control host will communicate with the plant using IP-routable and non-IP-routable protocols. When the plant simulator is operating with a non-IP protocol such as DeviceNet, communication between the controller and the plant will be conducted via OPC and mediated by the PLC. When the plant simulator is operating with an IP-routable protocol such as EtherNet/IP, the control host may communicate with the plant simulator using a variety of application layer protocols.
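As a sketch of the second (IP-routable) case, the snippet below reads a sensor tag and writes an actuator tag over EtherNet/IP, assuming the open-source pycomm3 library; the controller address and tag names are hypothetical and are not specified by this document.

    # Illustration of the direct EtherNet/IP path (IP-routable case), assuming
    # the pycomm3 library; the IP address and tag names are hypothetical.
    from pycomm3 import LogixDriver

    with LogixDriver("192.168.10.20") as plc:        # hypothetical controller address
        pressure = plc.read("ReactorPressure")       # read a simulated sensor tag
        print(pressure.tag, pressure.value)
        plc.write(("FeedValveCommand", 0.35))        # write an actuator command tag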

 

  1. The control computer shall be rack mounted within the enclosure.

  2. The Control host shall be compliant with the minimum technical specifications listed in Table 9.

 

Table 9. Control Host Minimum Technical Specifications

 

Feature | Minimum Specification
CPU | Intel® Core™ i5-4440 Processor (6M Cache, up to 3.30 GHz)
Supports Virtualization | Yes
Memory | 8 GB
Drive | SATA, 250 GB, 7200 RPM
Optical Drive | SATA, DVD-RW/CD-RW
Network Interface | One (1) rear panel 1 GbE
USB | Four (4) rear panel USB 2.0; Two (2) front panel USB 2.0 (desired)
Video | Compatible with the video management system; at least 1920x1080 resolution
PCI Expansion Slots | One (1) full-height PCI; One (1) full-height PCIe x1
EtherNet/IP Adapter Card Installed | MOLEX EtherNet/IP Adapter/Scanner PCIe x1 card (Part# DRL-EIP-PCIE). Note that a PCI version is also available.
Installed operating system | Windows 7 Professional 64-bit with the latest service pack

 

    3.4.3 OPC Server Host

 

The OPC server will serve as the system state collector. Sensor/actuator states will be mapped to OPC tag values by the PLC and passed to the OPC server for storage. The OPC server software shall have the capability to read and write process state data to both the hard PLC and the soft PLC. This is an important requirement because the intent is to control the states of actuators in the plant by setting values in the OPC data server; similarly, the states of sensors in the plant model are to be reflected in the OPC server so that the controller may read sensor information and act accordingly. The PLC will serve as a bridge between the plant simulator and the OPC server.
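The tag-based read/write pattern described above can be sketched as follows, assuming the OpenOPC package for classic OPC DA access; the server name and tag names are hypothetical and not mandated by this specification.

    # Illustration of the OPC-mediated path, assuming the OpenOPC package for
    # classic OPC DA access; the server name and tag names are hypothetical.
    import OpenOPC

    opc = OpenOPC.client()
    opc.connect("RSLinx OPC Server")                        # hypothetical OPC DA server name
    value, quality, timestamp = opc.read("TE.ReactorTemp")  # read a sensor tag
    print(value, quality, timestamp)
    opc.write(("TE.FeedValveCommand", 0.35))                # command an actuator via its tag
    opc.close()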

 

  1. The OPC Server computer shall be rack mounted within the enclosure.

  2. The OPC Server computer shall be compliant with the minimum technical specifications listed in Table 10.

 

Table 10. OPC Server Host Minimum Technical Specifications

 

Feature | Minimum Specification
CPU | Intel® Core™ i5-4440 Processor (6M Cache, up to 3.30 GHz)
Supports Virtualization | Yes
Memory | 8 GB
Drive | SATA, 500 GB, 7200 RPM
Optical Drive | SATA, DVD-RW/CD-RW
Network Interface | One (1) rear panel 1 GbE
USB | Four (4) rear panel USB 2.0; Two (2) front panel USB 2.0 (desired)
Video | Compatible with the video management system; at least 1920x1080 resolution
Installed operating system | Windows 7 Professional 64-bit with the latest service pack
Application Software | Commercial-grade OPC server; OPC configuration software; network configuration software

 

 

 

    3.4.4 HMI Host

 

The Human-Machine Interface (HMI) host will serve as the historian, the HMI development environment, and the main HMI display. Requirements for the HMI host are provided in this section.

 

  1. The HMI Server computer shall be rack mounted within the enclosure.

  2. The HMI host shall be compliant with the minimum requirements specified in Table 11.

 

Table 11. HMI Host Minimum Technical Specifications

 

Feature | Minimum Specification
CPU | Intel® Core™ i5-4440 Processor (6M Cache, up to 3.30 GHz)
Supports Virtualization | Yes
Memory | 8 GB
Drive | SATA, 250 GB, 7200 RPM
Optical Drive | SATA, DVD-RW/CD-RW
Network Interface | One (1) rear panel 1 GbE
USB | Four (4) rear panel USB 2.0; Two (2) front panel USB 2.0 (desired)
Video | Compatible with the video management system; at least 1920x1080 resolution
Installed operating system | Windows 7 Professional 64-bit with the latest service pack, or another version if required for compatibility with the application software
Application Software | Commercial-grade HMI development software package; commercial-grade HMI display software

 

 

 

 

 

 

 

    3.4.5 Local Historian Host (Plant Zone)

 

The Local Historian host will serve as the historian machine local to the PLC. Requirements for the Local Historian host computer are provided in this section.

 

  1. The Local Historian computer shall be rack mounted within the enclosure.

  2. The Local Historian computer shall be compliant with the minimum requirements specified in Table 12.

 

Table 12. Local Historian Host Minimum Technical Specifications

 

Feature | Minimum Specification
CPU | Intel® Core™ i5-4440 Processor (6M Cache, up to 3.30 GHz)
Supports Virtualization | Yes
Memory | 8 GB
Drive | SATA, 500 GB, 7200 RPM
Optical Drive | SATA, DVD-RW/CD-RW
Network Interface | One (1) rear panel 1 GbE
USB | Four (4) rear panel USB 2.0; Two (2) front panel USB 2.0 (desired)
Video | Compatible with the video management system; at least 1920x1080 resolution
Installed operating system | Windows 7 Professional 64-bit with the latest service pack, or another version if required for compatibility with the application software
Application Software | Commercial-grade historian software

 

 

 

 

 

    3.4.6 Enterprise Historian (DMZ)

 

The Enterprise Historian host will serve as a replicated historian host machine for clients in the enterprise zone. Requirements for the Enterprise Historian host computer are provided in this section.

 

  1. The Enterprise Historian computer shall be of mini-tower form factor with monitor, USB mouse, and keyboard.

  2. The Enterprise Historian computer shall be compliant with the minimum requirements specified in Table 13.

 

Table 13. Enterprise Historian Host Minimum Technical Specifications

 

Feature | Minimum Specification
Form factor | Mini-Tower
CPU | Intel® Core™ i5-4440 Processor (6M Cache, up to 3.30 GHz)
Supports Virtualization | Yes
Memory | 8 GB
Drive | SATA, 500 GB, 7200 RPM
Optical Drive | SATA, DVD-RW/CD-RW
Network Interface | One (1) rear panel 1 GbE
USB | Two (2) rear panel USB 2.0; Two (2) front panel USB 2.0
Video | Compatible with the video management system; at least 1920x1080 resolution
Monitor | 24” with minimum 1920x1080 resolution, 16:9 aspect ratio
Input device support | USB optical mouse, USB keyboard
Installed operating system | Windows 7 Professional 64-bit with the latest service pack, or another version if required for compatibility with the application software
Application Software | Commercial-grade historian software

 

 

 

 

 

 

 

    3.5 Monitors and Video Management System

 

A centralized video system with multiple monitors is envisioned to emulate a factory network operations center. The intended lab layout is shown in Figure 5. The Tennessee Eastman rack is labeled as “TE Rack” and is connected to a 32-inch monitor on the “TE HMI” computer table and to the multi-screen system on the adjacent wall. The vendor will provide only the 32-inch monitor.

 

The multi-screen system will be connected to the TE Rack. One large monitor and four smaller monitors will be mounted together with the main screen in the center and two smaller screens on each side for a total of five screens. The vendor is required to provide video routing management equipment that will allow any source to be displayed on any of the monitors.

 

  1. The system shall be provided with video equipment meeting the minimum requirements listed in Table 14.

 

Monitor sizes are given as a diagonal measurement, which is the common standard for specifying monitor dimensions. Monitors can be assumed to be rack-mounted.

 

Table 14. Video Management System Minimum Specifications

 

Qty | Item Description
1 | 32” LED monitor with three (3) or more HDMI audio/video inputs
1 | Video management system for the TE rack

 

 

 

Figure 5. Laboratory Layout and HDMI Video Interconnectivity [TE Enclave components are shown in red].

 

    3.6 Software

 

The system is intended to emulate a typical factory environment in which control of the plant may be collocated within the plant subnet or remotely located in another subnet. The software types listed within this section have been selected to facilitate the emulation of a factory environment with the re-configurability required for the research goals.

 

  1. The system shall be delivered with the software listed in Table 15.

 

Table 15. List of Application Software (Minimum Requirements Specification)

 

Qty | Description | Example
1 | License locator hardware/software | Rockwell Software USB Dongle
1 | HMI viewer software | FactoryTalk View SE Station 100 Display
1 | HMI development software | FactoryTalk View Studio for FactoryTalk View Enterprise
1 | PLC software development environment for the hard PLC | RSLogix 5000 Standard, ENG
1 | PLC software development environment for the soft PLC | Unspecified
1 | DeviceNet configuration software | RSNetworx
1 | OPC server and configuration environment | RSLinx Gateway
1 | Local Historian server with up to 250 tags | FactoryTalk Historian 250 Tags
1 | Enterprise Historian server with up to 250 tags | FactoryTalk Historian 250 Tags
1 | Software to allow export of historical data from the historian database | FactoryTalk Historian DataLink Excel Add-In, Single User
1 | Graphical configuration software for switches and routers delivered as part of the switch center | Cisco Network Assistant

 

 

 

  1. The PLC software development environment shall include, at a minimum, basic ladder constructs, timers, and PID loop controllers (a minimal, illustrative discrete PID sketch is shown after this list for reference). It is desired that the PLC programming tools provide the constructs necessary to control manufacturing processes that include regulation of both analog and discrete-event processes.

  2. The PLC software development environment shall provide online monitoring and troubleshooting of PLC programs.

  3. The OPC server software shall have the capability to read and write to the selected PLC. This is an important requirement as the intent is to have the OPC state of actuators reflected in the plant model. Similarly, the states of sensors in the plant model are to be reflected in the OPC server. The PLC will serve as a bridge between the plant simulator and the OPC server.

  4. The OPC server software shall interoperate with the MATLAB OPC Toolbox which requires that the OPC server comply with the OPC Foundation Data Access (DA) standard version 2.05a.

  5. All servers, whether physical or virtual, shall come installed with the specified operating systems.
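For reference, the following minimal Python sketch shows the discrete-time PID update that the required PID control blocks implement conceptually; the gains and sample time are arbitrary illustrations, not values drawn from this specification.

    # Illustrative discrete-time PID update, analogous in function to the PID
    # block expected from the PLC toolset; gains and sample time are arbitrary.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt                    # accumulate integral term
            derivative = (error - self.prev_error) / self.dt    # finite-difference derivative
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=1.2, ki=0.3, kd=0.05, dt=0.1)
    print(controller.update(setpoint=120.0, measurement=118.4))  # one control update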

 

  4. Assembly and Acceptance

 

The selected supplier shall fulfill all of the requirements specified in this document. The product shall be assembled at the supplier’s facility and delivered to the government as a whole or in part.

 

Prior to delivery, the supplier shall provide a factory test report documenting that the system has been constructed according to specifications and that all requirements have been satisfactorily met. The government may require an onsite inspection of the system prior to delivery. The factory test report shall, at a minimum, include a checklist of all requirements within this document indicating satisfactory completion of those requirements.

 

Final acceptance test will be conducted at NIST. Final acceptance will consist of an inspection of the system and government validation that all hardware and software components have been supplied and operate as specified by this document.

 

  5. Meetings and Design Reviews

 

The following sections describe the meetings that are required for contract performance. All meetings will be conducted by teleconference. If the awardee is local to the NIST Gaithersburg facility, an in-person meeting may be arranged but is not required.

 

5.1 Kick-off Meeting

 

A project kick-off meeting will be conducted. The purpose of the meeting is to review the requirements and contract deliverables.

 

5.2 Design Review

 

A single design review will be conducted. The purpose of the review is to ensure that requirements are understood, external interfaces are compatible, and that the design meets the requirements in this document.

 

5.3 Acceptance Readiness Review

 

After delivery of the factory test report, an acceptance readiness review meeting will be conducted. The purpose of this meeting is to address defects and other problems that are identified during factory test.

 

5.4 Final Acceptance Review

 

After final acceptance testing has been conducted, a final acceptance review meeting will be conducted. The purpose of this meeting is to address defects and other problems that are identified during final acceptance test.

 

5.5 Status Meetings

 

Regular status meetings will be conducted by phone to communicate progress between the government and awardee. Regular meetings will occur no less often than once every two weeks.

 

5.6 Project Close-out Review

 

The project close-out review meeting will be conducted to review all open issues. It is expected that all issues will be resolved by the time this meeting is conducted.

 

6. Schedule

 

All products and services shall be delivered, and all meetings conducted, in accordance with the schedule specified in Table 16. All delivery dates are specified in calendar days after receipt of order (ARO).

 

Table 16. Tennessee Eastman Enclave Delivery Schedule

 

Milestone | Delivery Date
Kick-off meeting | 10 days ARO
Design Review | 20 days ARO
Acceptance Readiness Review | 60 days ARO
Enclave (product) delivery | 90 days ARO
Final Acceptance Review | 100 days ARO
Project Close-out | 120 days ARO

 

 

 


 
