Each year SCinet’s Network Research Exhibition (NRE) showcases a number of interesting network-based experiments during SC. The goal of the NRE is to showcase technologies that will impact HPC in general and SCinet in particular.
Topics for SC16’s Network Research Exhibition demos and experiments range from software-defined networking (SDN) to security/encryption and resilience. The titles and booths for the demos are listed before each description. Please stop by and check out these new network technologies and innovations!
Here is the list of demos:
- #1 Advance Reservation Access Control using Software-defined Networking and Tokens
- #2 SDX: Interconnecting SDN and Legacy L2 domains with AutoGOLE
- #3 Data Transfer Nodes Experiments for LHC
- #4 Network Function Virtualisation for real-time Virtual Reality application
- #5 E2E Real Service Analytics Over 100 Gbps WANs Using The Blue Planet Framework
- #6 mdtmFTP @ 100 Gbps Networks
- #7 Automated GOLE (AutoGOLE) showing worldwide provisioning of circuits through NSI
- #8 IRNC International Software Defined Networking Exchange (SDXs) 100 Gbps Services for Data Intensive (Petascale) Science, Including Integration with 100 Gbps Data Transfer Nodes (DTNs)
- #9 KREONET-S: SD-WAN Based On Distributed Controls for Virtual Dedicated Networks Across Multiple Domains
- #10 Bioinformatics SDX for Precision Medicine
- #11 Data Commons and Data Peering at 100 Gbps
- #12 SDN Optimized High Performance Data Transfer Systems
- #13 International WAN High Performance Data Transfer Services Integrated With 100 Gbps Data Transfer Nodes for Petascale Science (PetaTrans)
- #14 Deep Network Visibility Using R-Scope® and ENSIGN Technologies by Reservoir Labs
- #15 GÉANT 100G DTN
- #16 TYPHOON: An SDN Enhanced Real-Time Big Data Streaming Framework
- #17 Rapid Earth Science Data Distribution over a Multi-Institutional Open Storage Research Infrastructure
- #18 FPGA Accelerated Key-Value Store (KVS) For Supercomputing
- #19 Demonstrations of 200 Gbps Disk-to-Disk WAN File Transfers using Parallelism across NVMe Drives
- #20 Dynamic Remote I/O for Distributed Compute and Storage
- #21 On-demand Data Analytics and Storage for Extreme-Scale Distributed Computing
- #22 SIMECA: SDN-based IoT Mobile Edge Cloud Architecture
- #23 HyPer4: Portable, Dynamic Data Plane Virtualization
- #24 Single Node disk-to-disk transfers over regular IP at 100 Gbps with Aspera FASP
- #25 Demonstration of Programmable Network Measurement of Data Intensive Flows at 100Gbps
- #26 CloudSight: A Tenant-oriented Transparency Framework for Cross-layer Cloud Troubleshooting
- #27 Secure Autonomous Research Networks
- #28 CoreFlow experimentation
- #29 SDN-Enabled QoS Scheduling for Data Transfer Infrastructure Applications
- #30 Panorama: Tools for Modeling, Monitoring, Anomaly Detection, and Adaptation for Scientific Workflows on Dynamic Networked Cloud Infrastructures
- #31 INDIRA: Intent-based User-defined Service Deployment over Multi-Domain SDN applications for Science Applications by ESnet
- #32 AtlanticWave/SDX Controller for Scientific Data Flows
- #33 Advanced Research Computing Over 100 Gbps WANs Using DTNs
- #34 PacificWave SDX and StarLight SDX interoperability
- #35 International Testbed Federation Between NSF Chameleon cloud testbed and the European Grid’5000 testbed
- #36 Detecting Elephant Flows at 100Gbps Using R-Scope®
- #37 Sandia Emulytics and Automated Discovery
#1 Title: Advance Reservation Access Control using Software-defined Networking and Tokens
Booth: 1030 (DOE), 2543 (Georgia Tech)
Timing: Tuesday or Wednesday
Summary: Advanced high-speed networks allow users to reserve dedicated bandwidth connections through advance reservation systems. A common use case for advance reservation systems is data transfers in distributed science environments. In this scenario, a user wants exclusive access to his/her reservation. However, current advance network reservation methods cannot ensure exclusive access of a network reservation to the specific flow for which the user made the reservation. In this demo we present a network architecture that addresses this limitation and ensures that a reservation is used only by the intended flow. We achieve this by leveraging software-defined networking (SDN) and tokens. We use SDN to orchestrate and automate the reservation of networking resources end to end and across multiple administrative domains. Finally, tokens are used to create a strong binding between the user or application that requested the reservation and the flows provisioned by SDN.
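The summary does not spell out the token format, so the following is only a minimal sketch of the binding idea, assuming an HMAC-style token tied to the reservation ID and the flow's 5-tuple; the key handling, encoding, and API shape are illustrative, not the demo's actual design.

```python
import hashlib, hmac

# Assumed shared key between the reservation system and the SDN controller
# (purely illustrative; not part of the published demo description).
SECRET = b"shared-secret-between-reservation-system-and-controller"

def issue_token(reservation_id: str, five_tuple: tuple) -> str:
    """Bind a token to a reservation ID and the flow's 5-tuple."""
    msg = (reservation_id + "|" + "|".join(map(str, five_tuple))).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorize_flow(reservation_id: str, five_tuple: tuple, token: str) -> bool:
    """Controller-side check before installing flow rules for the reservation."""
    return hmac.compare_digest(issue_token(reservation_id, five_tuple), token)

# src IP, dst IP, protocol, src port, dst port
flow = ("10.1.1.5", "10.2.2.9", 6, 50000, 2811)
token = issue_token("resv-1234", flow)
print(authorize_flow("resv-1234", flow, token))      # True: flow matches the reservation
print(authorize_flow("resv-1234", flow, "forged"))   # False: token rejected
```

In this sketch, the controller would only install forwarding rules for a flow whose presented token verifies against the reservation it claims to use.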
#2 Title: SDX: Interconnecting SDN and Legacy L2 domains with AutoGOLE
Booth: 801 (NCHC), 2611 (OCC, StarLight, iCAIR)
Summary: Many Internet Exchange Points (IXPs) provide dynamic circuit provisioning among connected networks to reduce manual operator intervention and to enable direct control by edge applications, services, and processes. However, incorporating similar functionality between SDNs and legacy networks in Software Defined eXchange Points (SDXs) requires additional mechanisms for the interconnection between heterogeneous networks. In our proposed SDX system, SDN networks and legacy networks are connected by using Network Service Interface (NSI) [1] protocol, which can be used for East-West provisioning. This demo shows a prototype implementation of SDX automated inter-domain dynamic virtual circuit provisioning services. The main goal of our demonstration is to explain the mechanisms underneath the implementation of the dynamic virtual circuit services in an SDX and the use of the NSI protocol as an interface between SDN networks and legacy networks.
#3 Title: Data Transfer Nodes Experiments for LHC
Booth: 2611 (iCAIR), 2437 / 2537 (Caltech)
Summary: Data Transfer Nodes (DTNs) allow for fast and efficient transport of data for data intensive science e.g., the high energy physics (HEP) community. This demonstration will show how multiple data streams can be transferred with high performance among DTNs at SC16 and around the globe on high capacity international interdomain WANs.
#4 Title: Network Function Virtualisation for real-time Virtual Reality application
Booth: 3421 (SURF)
Summary: In this demonstration we will show real-time transformations on a point cloud captured by a Kinect sensor pointed towards the VR glasses of the user in the SURF booth. In our setup the data from this sensor may flow through zero or more virtual network functions (VNFs), and the user will experience a real-time difference when applying various effects. The network functions are controlled by a LeapMotion sensor the user is wearing. Under the hood we’re using the Network Service Header (NSH).
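For readers unfamiliar with the Network Service Header, here is a small sketch of packing the 8-byte NSH base and service path headers in Python, following the layout later standardized in RFC 8300 (at SC16 time NSH was still an IETF draft, so the demo's exact header version may differ).

```python
import struct

def nsh_headers(spi: int, si: int, ttl: int = 63, md_type: int = 0x2,
                next_proto: int = 0x3) -> bytes:
    """Pack the NSH base + service path headers (RFC 8300 bit layout)."""
    ver, o_bit, u_bit = 0, 0, 0
    length = 2  # total header length in 4-byte words (MD Type 2, no metadata)
    word1 = ((ver << 30) | (o_bit << 29) | (u_bit << 28) | (ttl << 22)
             | (length << 16) | (md_type << 8) | next_proto)
    word2 = (spi << 8) | si   # 24-bit Service Path Identifier, 8-bit Service Index
    return struct.pack("!II", word1, word2)

# Service path 42, service index 255, inner payload is Ethernet (next protocol 0x3).
print(nsh_headers(spi=42, si=255).hex())
```

The service path identifier selects the chain of VNFs and the service index is decremented at each hop, which is how traffic can be steered through "zero or more" functions as described above.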
#5 Title: E2E Real Service Analytics Over 100 Gbps WANs Using The Blue Planet Framework
Booth: 2523 (Ciena), 2611 (OCC, StarLight, iCAIR)
Summary: Monitoring, measuring, and analyzing large-scale, continuous, high-performance, large-capacity single-stream data flows over 100 Gbps WANs is an especially important requirement of network production operations support for many data-intensive applications. This demonstration showcases the capabilities of the Ciena Blue Planet control framework tools for such measurement and analysis. The streams will be generated by high-resolution (4K) digital media streamed over live 100 Gbps WAN paths incorporating the CENI testbed, which provides real-time analytic processing. These demonstrations are supported by Ciena Research Labs, iCAIR at Northwestern University, the Electronic Visualization Laboratory at the University of Illinois at Chicago, and the University of Amsterdam.
#6 Title: mdtmFTP @ 100 Gbps Networks
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: mdtmFTP is a data transfer tool built on top of multicore-aware data transfer middleware. To address the high-performance challenges of data transfer in the big data era, this project is researching, developing, implementing, and demonstrating mdtmFTP, a high-performance data transfer tool for big data. mdtmFTP has several salient features. First, it adopts an I/O-centric architecture to execute data transfer tasks. Second, it can fully utilize the underlying multicore system. Third, it implements a large virtual file mechanism to address the lots-of-small-files (LOSF) problem. Finally, mdtmFTP combines multiple optimization mechanisms (zero copy, asynchronous I/O, pipelining, batch processing, and pre-allocated buffer pools) to improve performance. These demonstrations use mdtmFTP to show optimal bulk data movement over wide area networks. mdtmFTP will be compared with existing data transfer tools such as GridFTP, BBCP, and FDT, showing that it performs better than these tools, including over long-distance WANs. These demonstrations are supported by Fermi National Accelerator Laboratory, NAR Labs, iCAIR at Northwestern University, ESnet, StarLight, NERSC, and the Metropolitan Research and Education Network (MREN).
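To make the "large virtual file" idea concrete, here is a conceptual Python sketch of the LOSF approach: many small files are indexed and presented to the transfer engine as one contiguous logical stream, so the sender issues large sequential I/O instead of per-file opens and closes. This is only an illustration of the concept, not mdtmFTP's actual format or implementation.

```python
import os

def build_index(paths):
    """Record each small file's (path, offset, length) within the virtual file."""
    index, offset = [], 0
    for p in paths:
        size = os.path.getsize(p)
        index.append((p, offset, size))
        offset += size
    return index, offset          # offset is now the total virtual-file length

def stream_virtual_file(index, chunk_size=4 << 20):
    """Yield the virtual file as large, fixed-size chunks suitable for bulk transfer."""
    buf = bytearray()
    for path, _, _ in index:
        with open(path, "rb") as f:
            while data := f.read(chunk_size):
                buf += data
                while len(buf) >= chunk_size:
                    yield bytes(buf[:chunk_size])
                    del buf[:chunk_size]
    if buf:
        yield bytes(buf)
```

The receiver would use the same index to split the stream back into individual files, avoiding the per-file protocol overhead that makes LOSF workloads slow.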
#7 Title: Automated GOLE (AutoGOLE) showing worldwide provisioning of circuits through NSI
Booth: 3421 (SURF)
Summary: The Automated GOLE is a fabric of networks and exchanges that allows for the dynamic creation of circuits. This year we will present worldwide scheduling and provisioning of those services through a GUI that may be used worldwide by all collaborating entities within our fabric.
#8 Title: IRNC International Software Defined Networking Exchange (SDXs) 100 Gbps Services for Data Intensive (Petascale) Science, Including Integration with 100 Gbps Data Transfer Nodes (DTNs)
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: With multiple international partners, the International Center for Advanced Internet Research (iCAIR), with support from the Global Lambda Integrated Facility (GLIF), the StarLight International/National Communications Exchange Facility, the Metropolitan Research and Education Network (MREN), CANARIE, NetherLight, SURFnet, TWAREN, NCHC, KISTI/KREONET, HPDMnet, iGENI, PacificWave, the PNWGP, the Global Research Platform Network (GRPnet), and other regional, national, and international networks, is preparing a series of demonstrations showcasing Software Defined Networking Exchanges (SDXs) for data intensive sciences, including those based on 100 Gbps services and those that use E2E single 100 Gbps streams. These demonstrations are supported by iCAIR and multiple national and international research partners, including the NASA Goddard Space Flight Center High End Computing Network initiative, the Naval Research Laboratory, the StarLight consortium, the StarWave consortium, MREN, the Mid-Atlantic Crossroads (MAX), the Open Commons Consortium, the Open Science Data Cloud, the Center for Data Intensive Science, ESnet, and SCinet.
#9 Title: KREONET-S: SD-WAN Based On Distributed Controls for Virtual Dedicated Networks Across Multiple Domains.
Booth: 2043 (KISTI), 2611 (OCC, StarLight, iCAIR)
Summary: This demonstration will showcase KREONET-S, a new network initiative established to drive the softwarization of the KREONET infrastructure. The first phase of this project targets the softwarization of six regional and international network centers: Daejeon, Seoul, Busan, Gwangju, and Changwon in South Korea, and Chicago in the US, specifically the StarLight Facility, which is a GLIF (Global Lambda Integrated Facility) Open Lambda Exchange (GOLE). Eventually, KREONET-S, the first production SD-WAN service for KREONET users, will provide multiple advanced networking service capabilities, including for international science collaborations, by implementing more flexible and programmable networking capabilities, specialized provisioning for large-scale data flows, and flow isolation. These demonstrations will be supported by KISTI, iCAIR at Northwestern University, and the StarLight International/National Communications Exchange Facility.
#10 Title: Bioinformatics SDX for Precision Medicine
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: This demonstration will showcase a prototype Bioinformatics Software Defined Networking Exchange (SDX), designed to support the specific requirements of bioinformatics workflows, including at 100 Gbps, in highly distributed research environments, demonstrating how precision medicine is enabled by precision networks. This approach is being developed for a project supported by multiple partners, including iCAIR, the Laboratory for Advanced Computing, the European Bioinformatics Institute, the Open Commons Consortium, the Bionimbus Data Cloud, the Open Science Data Cloud, the Center for Data Intensive Science, the National Institutes of Health, the Ontario Institute for Cancer Research, Academia Sinica, the Singapore Centre for Supercomputing, the University of Amsterdam, SURFnet, NetherLight, StarLight, MREN, CANARIE, and SingAREN.
#11 Title: Data Commons and Data Peering at 100 Gbps.
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: These demonstrations showcase the data science capabilities of the Open Commons Consortium (OCC) Data Commons and the OCC Open Science Data Cloud (OSDC). The demonstrations stream and peer data among the OCC Data Commons, the OSDC, and SC16 over 100 Gbps paths, based on a scientific Software Defined Network Exchange (SDX), supported by the Center for Data Intensive Science, the Open Science Data Cloud, the Open Commons Consortium, iCAIR at Northwestern University, the StarLight International/National Communications Exchange Facility, SCinet, and the Metropolitan Research and Education Network (MREN).
#12 Title: SDN Optimized High Performance Data Transfer Systems
Booth: 2437 / 2537 (Caltech)
Summary: The Caltech group and its partners believe that optimization of large scientific data flows can be successfully achieved by using intelligent software-defined networks. For SC16 we plan the following main demonstrations:
1. Traffic management for CMS applications: the CMS high-level software (PhEDEx and ASO) utilizes the application-layer traffic optimization (ALTO) file transfer scheduler in the OpenDaylight SDN controller to optimally route the physics data across various partner booths as well as to remote sites.
2. SDN-controlled infrastructure: the core of all the planned demonstrations at SC16 uses a custom SDN controller to install 100GE flows on all our switches. File transfers are subsequently initiated in an automated way using a custom scheduler.
3. 1 Tbps network transmission using a small number of DTN nodes: the Caltech and StarLight/OCC booths will each host one server with 10 network cards and will transfer the network traffic using RDMA over Converged Ethernet (RoCE).
4. Simplified 100/200GE DTN server design: the most recent Caltech DTN designs contain SSD drives, NVMe, or traditional JBODs with a large number of mechanical drives. We will showcase two DTN designs, one with NVMe 2.5” drives and one with M.2 cards using a PCIe adapter.
5. Distributed XRootD storage cache for the CMS experiment: the LHC experiments have recently improved data access over the WAN. A caching system for use with the CMS Any Data, Anytime, Anywhere infrastructure will be demonstrated for the first time.
6. Virtual reality event display of the CMS experiment, developed with NovaVR LLC and FNAL. The display and software run on NVIDIA graphics equipment and the Unity3D gaming engine.
7. SDN applications for scientists: we will demonstrate a rapid development environment for writing complex high-level applications in an easy-to-code way, to achieve operations such as DTNs and firewalls in a Science DMZ and a traffic load balancer.
#13 Title: International WAN High Performance Data Transfer Services Integrated With 100 Gbps Data Transfer Nodes for Petascale Science (PetaTrans)
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: The PetaTrans 100 Gbps Data Transfer Node (DTN) research project is directed at improving large-scale WAN services for high-performance, long-duration, large-capacity single data flows. iCAIR is designing, developing, and experimenting with multiple designs and configurations for 100 Gbps Data Transfer Nodes (DTNs) over 100 Gbps Wide Area Networks (WANs), especially trans-oceanic WANs, under PetaTrans (high performance transport for petascale science), including demonstrations at SC16. These DTNs are being designed specifically to optimize capabilities for supporting E2E (e.g., edge servers with 100 Gbps NICs) large-scale, high-capacity, high-performance, reliable, high-quality, sustained individual data streams for science research. This project is supported by the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the StarLight International/National Communications Exchange Facility, the Metropolitan Research and Education Network (MREN), and SCinet.
#14 Title: Deep Network Visibility Using R-Scope® and ENSIGN Technologies by Reservoir Labs
Booth: 3001 (SCinet)
Timing: Please join us for a demonstration at the SCinet booth. Send an e-mail indicating your preferred meeting time(s) via https://www.reservoir.com/company/contact/
Summary: Reservoir Labs will demonstrate the fusion of two of its cutting-edge technologies – 1) R-Scope, a high-performance cyber security appliance enabling deep network visibility, advanced situational awareness, and real-time security event detection by extracting cyber-relevant data from network traffic, and 2) ENSIGN, a high-performance Big Data analysis tool that provides the fastest and most scalable tensor analysis routines to reveal interesting patterns and discover subtle undercurrents and deep cross-dimensional correlations in data. Our objective is to demonstrate the effectiveness of ENSIGN analytics in an operational cyber security setup to extract anomalous patterns of network traffic, detect alarming behaviors, and provide actionable insights into network data, by analyzing network metadata output logs from the R-Scope systems.
#15 Title: GÉANT 100G DTN
Booth: 3357 (GÉANT)
Summary: GÉANT is currently testing data transfer nodes in the lab. Two machines will be deployed in the GÉANT backbone before SC16. One of the DTN nodes will be deployed in Paris, where GÉANT’s 100G trans-Atlantic link from New York terminates, and the other in London, where there is access to a 100G trans-Atlantic link from Washington. These nodes will be connected to Corsa switches in Paris and London, forming one SDN domain. A second SDN domain will be formed, in collaboration with Corsa, on the SC16 show floor. The two SDN domains will be connected using the trans-Atlantic links. The demonstration will show the establishment of multi-domain BoD services across OpenFlow and non-OpenFlow domains, showing data transfers between the SC show floor and the DTN nodes connected to GÉANT in Paris and London. The proposed solution relies on a framework called DynPaC, which is able to provide resilient L2 services taking into account bandwidth and VLAN utilization constraints. DynPaC has been extended to be compliant with the Network Services Interface Connection Service (NSI-CS) protocol, which makes the SDN-based BoD solution multi-domain capable. The demo will establish two multi-domain BoD services across the OpenFlow and non-OpenFlow domains, show how traffic is limited at the networking devices to enforce the QoS constraints, and select the optimal intra-domain paths during the process. At some point during the demonstration, a failure will be emulated on one of the trans-Atlantic links and the system will show how traffic is re-routed to alternative pre-computed paths without disrupting service provisioning.
#16 Title: TYPHOON: An SDN Enhanced Real-Time Big Data Streaming Framework
Booth: 2143 (University of Utah)
Timing: Wednesday, 16 November: 12:00pm-2:00pm
Summary: Processing big-data in real-time in a scalable fashion is inherently challenging because of the volume of data, unexpected data patterns, etc. This results in skewed data distributions and load fluctuations. Current big data streaming frameworks (e.g., Storm) focus mainly on the scalability of real-time stream data processing, and are not flexible enough for dynamic reconfiguration of applications to deal with changes in the distribution and volume of data. To address these problems, we propose an SDN-based real-time big data framework, called Typhoon. Typhoon makes use of SDN to perform routing in a streaming framework, allowing greater flexibility, elasticity and robustness. In the demo, we present several practical use cases of Typhoon, including live debugging of an active streaming pipeline.
#17 Title: Rapid Earth Science Data Distribution over a Multi-Institutional Open Storage Research Infrastructure
Booth: 1543 (University of Michigan), 1010 (Indiana University School of Informatics and Computing)
Summary: Research scientists face a number of obstacles in obtaining and working with large data sets over existing network and storage infrastructure. This data-access problem is often amplified when coordination among distributed, multi-institutional resources is in the critical path for conducting collaborative research. This demonstration highlights an application integrated with the Multi-Institutional Open Storage Research Infrastructure (MI-OSiRIS), showing how optimized data interfaces within a software-defined, scalable storage buildout can address many of the data-intensive and collaboration challenges faced by researchers and their respective communities. The Earth Observation Depot Network (EODN) is our representative application; it disseminates multi-resolution remote sensing data across distributed Ceph storage resources within the MI-OSiRIS deployment, which span well-connected sites in Michigan and Indiana as well as local SCinet and Utah CloudLab infrastructure in Salt Lake City. Our demonstration intends to show the automated provisioning of a new OSiRIS site on the SC16 exhibition floor, which will join the existing OSiRIS Ceph sites back in Michigan. Our goal is to understand the impact that the large latency has on our distributed Ceph deployment, using the EODN network-intensive workflow as an example of a collaborative, data-intensive science use case.
#18 Title: FPGA Accelerated Key-Value Store (KVS) For Supercomputing
Booth: 4423 (Algo-Logic Systems)
Summary: By reducing latency and increasing throughput, supercomputing centers can speed up data sharing between machines. Algo-Logic Systems is working to accelerate a wide base of applications for high performance computing (HPC), low-latency finance, scientific applications, communications, database speedup, sensor tracking for location and movement, and multimedia. In this live demonstration, we will show an ultra-low-latency KVS implemented in FPGA hardware. We will run benchmarks showing that key/value searches performed over standard 10 or 40 Gigabit/second Ethernet (GE) achieve a fiber-to-fiber lookup latency of under half a microsecond (0.5 µs). We will also show that a single instance of the FPGA-accelerated KVS connected to a 40 GE interface achieves a throughput of 150 Million Searches Per Second (MSPS). We observe that the KVS in FPGA is 88 times faster while using 21x less power than socket I/O running on a traditional CPU-based server.
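As a rough illustration of how a client-side lookup benchmark against such an appliance might look, here is a small Python sketch. The address, port, and wire format (fixed 16-byte key, raw value reply over UDP) are assumptions made for illustration only and are not Algo-Logic's actual protocol.

```python
import socket, struct, time

KVS_ADDR = ("192.0.2.10", 5000)   # placeholder appliance address (assumption)

def lookup(sock: socket.socket, key: bytes) -> bytes:
    """Send a fixed-width key and return the raw value from the reply datagram."""
    sock.sendto(key.ljust(16, b"\x00")[:16], KVS_ADDR)
    value, _ = sock.recvfrom(2048)
    return value

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)              # avoid hanging if no appliance is listening

t0 = time.perf_counter()
value = lookup(sock, b"sensor-42")
rtt_us = (time.perf_counter() - t0) * 1e6
print(f"value={value!r} round-trip={rtt_us:.1f} us")
```

Note that a software client like this measures end-to-end round-trip time including host networking stacks; the sub-microsecond figure quoted above is a fiber-to-fiber latency measured at the appliance itself.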
#19 Title: Demonstrations of 200 Gbps Disk-to-Disk WAN File Transfers using Parallelism across NVMe Drives
Booth: 2611 (StarLight/OCC/iCAIR)
Summary: NASA requires the processing and exchange of ever-increasing, vast amounts of scientific data, so NASA networks must scale up to ever-increasing speeds, with 100 Gigabit per second (Gbps) networks being the current challenge. However, it is not sufficient to simply have 100 Gbps network pipes, since normal data transfer rates would not even fill a 1 Gbps pipe. The NASA Goddard High End Computer Networking (HECN) team will demonstrate systems and techniques to achieve near 200G line-rate disk-to-disk data transfers between a single pair of high-performance NVMe servers across a national wide-area 2x100G network, by utilizing parallelism across multiple NVMe drives, system cores and processes, and network data streams.
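The following is a minimal sketch of the parallelism idea: one worker per NVMe drive, each reading its own files and sending over its own TCP stream, so that no single drive, core, or socket becomes the bottleneck. The paths, destination host, and ports are placeholders, and this is not the HECN team's actual tooling.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Placeholder layout: one source file per NVMe drive, one TCP port per stream.
DRIVES = ["/mnt/nvme0/data.bin", "/mnt/nvme1/data.bin", "/mnt/nvme2/data.bin"]
DEST = "198.51.100.20"            # placeholder receiver address

def send_drive(path: str, port: int) -> int:
    """Stream one drive's file over its own TCP connection; return bytes sent."""
    sent = 0
    with socket.create_connection((DEST, port)) as s, open(path, "rb") as f:
        while chunk := f.read(4 << 20):   # large 4 MiB reads keep the drive streaming
            s.sendall(chunk)
            sent += len(chunk)
    return sent

with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
    totals = pool.map(send_drive, DRIVES, range(9000, 9000 + len(DRIVES)))
    print("bytes sent per stream:", list(totals))
```

At 200 Gbps aggregate, each of N streams only needs roughly 200/N Gbps, which is what makes per-drive, per-core, per-stream parallelism the practical route to line rate.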
#20 Title: Dynamic Remote I/O for Distributed Compute and Storage
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: This demonstration will show large-scale remote data access, a dynamic pipelined distributed processing framework and Software Defined Networking (SDN) enabled automation between distant operating locations. We will dynamically deploy a complex video processing workflow, and then rapidly redeploy as needed to satisfy varying processing needs and leverage available distributed resources in a way that is relevant to emerging intelligence data processing challenges. The video content includes production quality, live, 4K by 60 fps video and the infrastructure to be utilized is a nationally distributed set of storage and computing resources leveraging multiple 100G SDN network paths. Processing at NRL and on the floor will use new systems with 12 NVIDIA P100 GPU processors (~120 TFLOPS in a box) and 800Gbps of RDMA network bandwidth from each system. The Remote I/O strategy allows data processing to begin as soon as data begins to arrive at the compute location rather than waiting for bulk transfers to complete.
#21 Title: On-demand Data Analytics and Storage for Extreme-Scale Distributed Computing
Booth: 2501 (NCSA)
Summary: This is a distributed analysis and visualization demo with efficient transfer from the originating HPC system to a secondary point of high-capacity analysis and, finally, visualization rendered on demand at a third location. Data size and throughput are becoming one of the main limiting factors of extreme-scale simulations and experiments. The extreme-scale example considered here is taken from computational cosmology. The science requirements demand running very large simulations, such as N-body runs with trillions of particles. The resulting data products are scientifically very rich and of interest to many research groups. It is therefore very desirable that the data be made broadly available. However, as a fiducial example, a trillion-particle simulation with the HACC code generates 20 PB of raw data (40 TB per snapshot and 500 snapshots), which is more than many systems can store for a single run in their file systems. An interesting point is that while one version of HACC is optimized for Argonne National Laboratory’s Mira and can scale to multi-millions of cores, the National Center for Supercomputing Applications’ Blue Waters resource offers exceptional data analytics capabilities with its thousands of GPUs. This suggests a combined infrastructure based on using Mira for the simulation, NCSA’s Blue Waters for a first-level data analysis, and a separate, possibly distributed, data center to store the distilled results. Users from other universities and labs would then pull the data from the data center and run further data analytics locally following their particular scientific interests.
#22 Title: SIMECA: SDN-based IoT Mobile Edge Cloud Architecture
Booth: 2143 (University of Utah)
Timing: Tuesday, 15 November, 3:00pm-5:00pm
Summary: In this work we propose an SDN-based IoT Mobile Edge Cloud Architecture (SIMECA) which can deploy diverse IoT services at the mobile edge by leveraging distributed, lightweight control and data planes optimized for IoT communications. We implemented a prototype of SIMECA using emulated and SDR-based SDN-enabled base stations, the OpenEPC (www.openepc.com) core network, and Open vSwitch. We run our demo as a profile in PhantomNet. In the demo, we will show how to initiate an instance of a SIMECA network and show basic SIMECA functionality, including device initial connection setup and seamless device mobility (handover).
#23 Title: HyPer4: Portable, Dynamic Data Plane Virtualization
Booth: 2143 (University of Utah)
Timing: Wednesday, 16 November: 3:00pm-5:00pm
Summary: Software Defined Networking’s recent innovations include the P4 language and the Reconfigurable Match Table architecture, which permit us to define custom protocols and functionality for the data plane of network devices, even ASIC-based devices, after such devices are already deployed. A straightforward application of P4 involves writing P4 programs, each defining one function and one set of protocols, and deploying and running one of these programs on each device at any moment in time. We demonstrate HyPer4, a virtualization approach that includes a special P4 program capable of emulating multiple other P4 programs simultaneously. HyPer4 is highly portable (the target simply needs to support P4) and dynamic (virtualized programs may be added or modified at runtime). This demo illuminates 1) the possibilities for network slicing, checkpointing, complex function composition, and moving target defense; and 2) the performance impacts resulting from this new level of flexibility.
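As a conceptual analogy (in Python rather than P4), the core idea is a single generic pipeline whose behavior is defined entirely by table entries installed at runtime, so different "virtual" programs can be swapped without recompiling the data plane. The table schema below is invented for illustration and is not HyPer4's actual representation.

```python
def run_pipeline(packet, tables):
    """Process a packet through match-action tables whose entries define the behavior."""
    for table in tables:
        key = tuple(packet.get(f) for f in table["match_fields"])
        action, args = table["entries"].get(key, table["default"])
        packet = action(packet, **args)
    return packet

def set_field(pkt, field, value):
    out = dict(pkt)
    out[field] = value
    return out

def drop(pkt):
    return {**pkt, "dropped": True}

# "Virtual program" A, expressed purely as data: rewrite VLAN 10 to VLAN 20,
# drop everything else. Loading a different rule set changes the behavior
# without touching run_pipeline itself.
vlan_rewrite = [{
    "match_fields": ["vlan"],
    "entries": {(10,): (set_field, {"field": "vlan", "value": 20})},
    "default": (drop, {}),
}]

print(run_pipeline({"vlan": 10, "dst": "10.0.0.2"}, vlan_rewrite))
print(run_pipeline({"vlan": 30, "dst": "10.0.0.9"}, vlan_rewrite))
```

In HyPer4 the same effect is achieved inside a P4 target: the emulated programs are encoded as entries in the tables of one persistent P4 program, which is what makes runtime addition and modification possible.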
#24 Title: Single Node disk-to-disk transfers over regular IP at 100 Gbps with Aspera FASP
Booth: 3930 (Aspera)
Summary: The next generation of Aspera FASP is about optimizing both the WAN and Storage to provide maximum utilization of available resources to transfer data as fast as possible anywhere in the world. FASP operates over regular IP, supports AES-GCM encryption, and works on commodity Intel hardware.
#25 Title: Demonstration of Programmable Network Measurement of Data Intensive Flows at 100Gbps
Booth: 2611 (OCC)
Summary: The demonstration will show the Advanced Measurement Instrument and Services (AMIS) achieving flow-granularity network measurement and monitoring at line rate at 100 Gbps. As a passive measurement system, AMIS provides software APIs to examine selected flows with no impact on the performance of user traffic, and supports existing measurement tools (e.g., perfSONAR). With scalable hardware and an open-source software stack, the programmable measurement services will equip network operators with effective tools to quantify flow-level network performance and, more importantly, enable in-depth flow analysis through software libraries. Specifically, the demo will (1) invoke RESTful APIs to achieve programmable and composable measurement; (2) passively measure the TCP window size and, when certain conditions are met, trigger active measurements via perfSONAR nodes and alert operators to check the receive buffer size on the end host; and (3) perform passive measurement to obtain packet loss and then trigger perfSONAR to conduct active measurements for troubleshooting.
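To illustrate the "programmable and composable" pattern described above, here is a hedged sketch of driving such a measurement service through REST calls using the `requests` library. The endpoint paths, JSON fields, and controller URL are hypothetical, not AMIS's published API.

```python
import requests

BASE = "http://amis.example.org:8080/api/v1"   # placeholder controller URL (assumption)

# 1. Register a flow of interest for passive monitoring.
flow = {"src": "10.10.1.5", "dst": "10.10.2.7", "proto": "tcp", "dport": 5001}
r = requests.post(f"{BASE}/flows", json=flow, timeout=5)
flow_id = r.json()["id"]                        # hypothetical response field

# 2. Attach a trigger: if the observed TCP window stays small, launch an active
#    perfSONAR test toward the destination and raise an alert about the end host.
trigger = {"metric": "tcp_window", "condition": "below", "threshold": 65535,
           "action": {"type": "perfsonar_throughput", "target": "10.10.2.7"}}
requests.post(f"{BASE}/flows/{flow_id}/triggers", json=trigger, timeout=5)

# 3. Poll flow-level statistics (e.g., packet loss) for in-depth analysis.
print(requests.get(f"{BASE}/flows/{flow_id}/stats", timeout=5).json())
```

The point of the sketch is the composition: passive per-flow observations feed conditions that programmatically trigger active perfSONAR measurements, matching steps (1)-(3) above.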
#26 Title: CloudSight: A Tenant-oriented Transparency Framework for Cross-layer Cloud Troubleshooting
Booth: 2143 (University of Utah)
Timing: Thursday, 17 November: 10:00am-12:00pm
Summary: Infrastructure-as-a-Service (IaaS) cloud platforms simplify the work of cloud tenants by providing clean virtual datacenter abstractions and the means to automate tenant system administration tasks. When things go wrong, however, the inherent multi-player, multi-layer environment of IaaS platforms complicates troubleshooting and root cause analysis. To address these concerns, we present our work on CloudSight, in which cloud providers allow tenants greater system-wide visibility through a transparency-as-a-service abstraction.
#27 Title: Secure Autonomous Research Networks
Booth: 2523 (Ciena)
Timing: Continuous
Summary: SARNET, Secure Autonomous Response NETworks, is a project funded by the Dutch Research Foundation. The University of Amsterdam, TNO, KLM, and Ciena conduct research on automated methods against attacks on computer network infrastructures. By using the latest techniques in Software Defined Networking and Network Function Virtualization, a SARNET can use advanced methods to defend against cyber-attacks and return the network to its normal state. The research goal of SARNET is to obtain the knowledge to create ICT systems that: 1) model the system’s state based on the emerging behavior of its components; 2) discover by observations and reasoning if and how an attack is developing and calculate the associated risks; 3) have the knowledge to calculate the effect of countermeasures on states and their risks; 4) choose and execute the most effective countermeasure.
Similar to last year’s demonstration [1,2], we will show an interactive touch-table-based demonstration that controls a Software Defined Network running on (Exo)GENI infrastructure. This year the visitor will select the attack origin and the type of attack, and the system will respond and defend against this attack autonomously. The response includes security VNFs that are deployed when necessary; the underlying Software Defined Network will route the attack traffic to the VNF for analysis or mitigation.
[1] SARNET demonstration at SC15 – http://sc.delaat.net/sc15/SARNET.html
[2] R. Koning et al., “Interactive analysis of SDN-driven defence against distributed denial of service attacks,” in 2016 IEEE NetSoft Conference and Workshops (NetSoft), IEEE, 2016, pp. 483–488.
#28 Title: CoreFlow experimentation
Booth: 2523 (Ciena)
Timing: Continuous
Summary: CoreFlow is a framework that can enrich generated security events with data from multiple sources [1]. Currently it is implemented to correlate Bro with NetFlow information. The enriched data from CoreFlow can be used to create better judgements and potentially more advanced responses. We will experiment using this framework on security events generated on the SCinet network, and try to gather some performance data.
[1] CoreFlow: Enriching Bro security events using network traffic monitoring data – INDIS workshop SC16 Nov 13, 2016
#29 Title: SDN-Enabled QoS Scheduling for Data Transfer Infrastructure Applications
Booth: 1201 (Pittsburgh Supercomputing Center), 4160 (Corsa Technology)
Summary: Pittsburgh Supercomputing Center (PSC), National Institute for Computational Sciences (NICS), and Corsa Technology are demonstrating the NSF CC*-funded “Developing Applications with Networking Capabilities via End-to-End SDN” (DANCES) project. DANCES integrates network QoS provisioning with supercomputing infrastructure scheduling and resource management to facilitate the transfer of large data sets. The motivation for DANCES was to provide QoS protection for priority bulk data flows traversing congested 10G end-site infrastructure. SDN/OpenFlow is the programmatic basis for implementing the dynamic QoS provisioning. The data transfer mechanisms used in the demonstration are PSC’s SLASH2 wide area distributed file system and GridFTP. The SLASH2 file system was designed to meet large data set management needs such as wide area multi-residency of files, system-managed data distribution, and data integrity verification. Provisioning is initiated within a Torque prologue script and the data transfer is performed as part of the job workflow. The demonstration shows a) how bandwidth of a priority data transfer is protected from disruption by competing best-effort flows and b) limiting control of bandwidth allocated to background SLASH2 replication traffic. The networking technology demonstrated includes Corsa OpenFlow switch hardware, metering and QoS controlled by PSC’s Ryu-based OpenFlow controller, and CONGA bandwidth management software.
#30 Title: Panorama: Tools for Modeling, Monitoring, Anomaly Detection, and Adaptation for Scientific Workflows on Dynamic Networked Cloud Infrastructures
Booth: 3628 (RENCI)
Timing: Tuesday (11/15) – 11:30am -12:30pm; Wednesday (11/16) – 10:30am – 11:30am
Summary: This demonstration will showcase a novel, dynamically adaptable networked cloud infrastructure driven by the demand of a data-driven scientific workflow through use of performance models. It will use resources from ExoGENI – a Networked Infrastructure-as-a-Service (NIaaS) testbed funded through NSF’s Global Environment for Network Innovation (GENI) project. The demo will run on dynamically provisioned ‘slices’ spanning multiple ExoGENI cloud sites that are interconnected using dynamically provisioned connections from Internet2 and ESnet. We will show how a virtual Software Defined Exchange (SDX) platform, instantiated on ExoGENI, provides additional functionality for supporting the modeling and management of scientific workflows. Using the virtual SDX slice, we will demonstrate how tools developed in the DoE Panorama project can enable the Pegasus Workflow Management System to monitor and manipulate network connectivity and performance between sites, pools, and tasks within a workflow. We will use representative, data-intensive DoE workflows as driving use cases to showcase the above capabilities.
#31 Title: INDIRA: Intent-based User-defined Service Deployment over Multi-Domain SDN applications for Science Applications by ESnet
Booth: 1030 (DOE)
Summary: Intent-based networking allows users to define the ‘what’ and not the ‘how’ when provisioning their network service. Using principles of natural language processing and semantic ontologies, users can converse with the network, defining quality-of-service attributes, bandwidth, and time frames, and get guarantees about their service behavior. We present INDIRA (Intent-based Network Deployment Interface Renderer), which provides an easy-to-use interface for scientists that automatically configures networks for them. The project uses a well-defined ontology to reason about service needs and identify any conflicts in setting up the user-defined topology. Once the request is validated, INDIRA’s engine proceeds to automatically set up topologies using the NSI circuit creation API, demonstrated over the ESnet production network. The system guarantees improvement in service level expectations by communicating with scientists (i.e., users from non-networking backgrounds) in a high-level language. INDIRA uses an intelligent semantic ontology, producing RDF graphs to perform policy checking and conflict resolution before network rendering commands are called. The INDIRA platform is able to demonstrate case studies for on-demand provisioning and user-defined file transfers for high-bandwidth distributed science applications.
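To make the RDF step concrete, here is a minimal sketch of turning a parsed intent into RDF triples with the `rdflib` library, in the spirit of INDIRA's ontology-driven rendering. The namespace, predicate names, and endpoint hostnames are invented for illustration; INDIRA's actual ontology differs.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

NET = Namespace("http://example.org/intent#")   # placeholder ontology namespace
g = Graph()

# One user intent: "move data from site A to site B at 50 Gbps before a deadline".
req = URIRef("http://example.org/intent#request-42")
g.add((req, RDF.type, NET.TransferIntent))
g.add((req, NET.source, Literal("dtn-a.example.net")))        # placeholder endpoints
g.add((req, NET.destination, Literal("dtn-b.example.net")))
g.add((req, NET.bandwidthGbps, Literal(50, datatype=XSD.integer)))
g.add((req, NET.deadline, Literal("2016-11-17T12:00:00Z", datatype=XSD.dateTime)))

# A rendering engine could now run policy and conflict checks over this graph
# before issuing the circuit-creation calls.
print(g.serialize(format="turtle"))
```

Representing the intent as a graph is what allows policy checking and conflict resolution to be done declaratively before any network rendering commands are issued.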
#32 Title: AtlanticWave/SDX Controller for Scientific Data Flows
Booth: 2501 (NCSA), 2437 / 2537 (Caltech)
Summary: A Software-Defined Internet Exchange Point (SDX) is a new technology that has many, often conflicting definitions. We hope to cement the definition to refer to two specific aspects: first, an SDN-based data plane that connects various networks together, and second, participant networks that are able to define forwarding policy within the exchange point by way of a configuration API. Participants can make forwarding decisions based not only on BGP, but also on other considerations.
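As a sketch of what a participant-submitted forwarding policy might look like through such a configuration API, here is a small JSON example built in Python. The schema (field names and match/action vocabulary) is hypothetical and only illustrates the idea of expressing policy beyond plain BGP; it is not the AtlanticWave/SDX controller's actual interface.

```python
import json

policy = {
    "participant": "AS64500",   # placeholder participant identifier
    "rules": [
        {   # steer large science data flows to a dedicated 100G research port
            "match": {"dst_prefix": "192.0.2.0/24", "tcp_dport": 2811},
            "action": {"forward_to": "port-research-100g"},
        },
        {   # everything else follows normal BGP-derived forwarding
            "match": {},
            "action": {"forward_to": "bgp-default"},
        },
    ],
}
print(json.dumps(policy, indent=2))
```

The key point is that the participant, not the exchange operator, authors these rules, and matches can use fields (ports, prefixes, applications) that BGP alone cannot express.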
#33 Title: Advanced Research Computing Over 100 Gbps WANs Using DTNs
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: Compute Canada is deploying new Advanced Research Computing (ARC) resources consisting of multi-petabyte storage systems and well over 100,000 compute cores. These resources, located across four sites in Canada, will be used by Canadian researchers from any region in the country. Each Compute Canada site will provide a Science DMZ network architecture, enabling 100 Gbps connectivity to the CANARIE IP network and specialized data transfer nodes (DTNs). These demonstrations use Data Transfer Nodes (DTNs) optimized for 100 Gbps disk-to-disk data transfer, and illustrate how such a platform enables data intensive science, such as bioinformatics (cancer research, personalized medicine, *omics). A dedicated 100 Gbps path from Toronto to StarLight has been developed for these demonstrations, and a 100 Gbps SCinet WAN path from StarLight to SC16 has been implemented. These demonstrations will be supported by Compute Canada, CANARIE, GTAnet, ORION, RISQ, iCAIR at Northwestern University, StarLight, SCinet, and Ciena.
#34 Title: PacificWave SDX and StarLight SDX interoperability.
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: For demonstrations at SC16, iCAIR has formed a research partnership with PacificWave to showcase interoperability between the StarLight SDX and the SDX being implemented at a CENIC site in LA. These demonstrations are supported by iCAIR, CENIC, PacificWave, StarLight and MREN.
#35 Title: International Testbed Federation Between NSF Chameleon cloud testbed and the European Grid’5000 testbed
Booth: 2611 (OCC, StarLight, iCAIR)
Summary: iCAIR has formed a partnership to federate the NSF Chameleon cloud testbed and the European Grid’5000 testbed, using SDX services and technology. This federation has designed a showcase demonstration at SC16. Partners include iCAIR, Inria, the University of Rennes, the Chameleon consortium, iMinds, the University of Ghent, NetherLight, SURFnet, and CANARIE.
#36 Title: Detecting Elephant Flows at 100Gbps Using R-Scope®
Booth: 3001 (SCinet)
Timing: Please join us for a demonstration at the SCinet booth. Send an e-mail indicating your preferred meeting time(s) via https://www.reservoir.com/company/contact/
Summary: A general objective in the design of high-performance computer networks is to ensure the quality of service (QoS) required by their users. This objective is often challenged by the presence of very large flows, known as elephant flows, because of their adverse effects on delay-sensitive flows. Deterioration of QoS due to these large flows is especially relevant in high-speed networks such as ESnet, which is used to transport both very large flows carrying scientific experiment data and smaller delay-sensitive flows. In these networks, it is key to actively detect the elephant flows and use QoS mechanisms for routing and scheduling them to protect the smaller ones.
This demonstration will showcase rHNTES/FlowTec, a new mathematical framework, algorithm, and high-performance implementation for detecting elephant flows at very high rates, jointly developed by Reservoir Labs and the University of Virginia. Embedded into our R-Scope HPC network analysis appliance, the rHNTES/FlowTec algorithms will be demonstrated by processing live traffic generated at the conference.
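To make the elephant-flow idea concrete, here is a simple, generic byte-count threshold detector over flow records in Python. It is shown only as an illustration of the problem; it is not rHNTES/FlowTec, whose approach is a different and more sophisticated mathematical framework, and the threshold and sample records are arbitrary.

```python
from collections import defaultdict

ELEPHANT_BYTES = 1 << 30   # flag flows exceeding 1 GiB (arbitrary threshold)

def detect_elephants(flow_records):
    """Accumulate per-flow byte counts and flag flows that cross the threshold."""
    totals = defaultdict(int)
    elephants = set()
    for src, dst, sport, dport, nbytes in flow_records:
        key = (src, dst, sport, dport)
        totals[key] += nbytes
        if totals[key] > ELEPHANT_BYTES and key not in elephants:
            elephants.add(key)
            print("elephant flow:", key, totals[key], "bytes")
    return elephants

# Illustrative records: (src, dst, sport, dport, bytes)
records = [("10.0.0.1", "10.0.0.2", 40000, 2811, 600_000_000),
           ("10.0.0.1", "10.0.0.2", 40000, 2811, 700_000_000),
           ("10.0.0.3", "10.0.0.4", 52000, 443, 20_000)]
detect_elephants(records)
```

In practice the hard part, and the focus of the demonstration, is doing this kind of detection early and at 100 Gbps line rate, where naive per-flow counting of every packet becomes infeasible.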
#37 Title: Sandia Emulytics and Automated Discovery
Booth: 1030 (DOE)
Timing: Wednesday, 16 November – 13:00-15:00
Summary: This presentation showcases Sandia’s state-of-the-art network modeling and emulation capabilities. A key part of Sandia’s modeling methodology is the discovery and specification of the information system under study, and the ability to recreate that specification with the highest fidelity possible in order to extrapolate meaningful results. Sandia’s minimega platform (minimega.org) is an open-source tool for launching and managing large-scale virtual machine-based experiments. The platform supports research in virtualization, SDN, cybersecurity, and cloud computing.
The demonstration will include automated network structure and behavior discovery on the SC16 network. In order to build the model, the toolset introspects, anonymizes, and correlates data sources such as PCAP, NetFlow, active scans, and router configurations. In addition to model building, Sandia will demonstrate the ability to rapidly launch the model in a large-scale virtual machine testbed.