Showing posts with label Project Crossbow.

Monday, August 29, 2011

Solaris Tab - Solaris 10 Networking Update

The following has been added to the Solaris Tab for Networking information.

Solaris Reference Material
2009-07 [PDF] OpenSolaris Crossbow: Virtual Wire in a Box
2010-05 [HTML] Solaris 10 Networking - The Magic Revealed
2011-08 [HTML] Solaris 11 Express Network Tunables

Friday, July 1, 2011

Solaris Tab Update: Solaris 11 & Crossbow

New resources have been added to the Solaris Tab, primarily concerning Solaris 11.

There is a helpful PDF document demonstrating Virtual Networking via "Crossbow".

Network Management Connection

Setting up a completely virtualized server and switch environment on a single platform meets various requirements, such as: portable network management demonstrations, a framework to build network management test labs, and a framework to simulate and test network management applications in a WAN environment, all without purchasing the hardware.

Powerful frameworks like Crossbow are available under Solaris 11 Express and related operating systems such as OpenSolaris, OpenIndiana, and Illumos.
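As a rough sketch of what such a "network in a box" can look like with Crossbow (the etherstub and VNIC names here are illustrative, not taken from the referenced PDF), an etherstub acts as a software switch and the VNICs created over it become its ports:

  # dladm create-etherstub stub0       # software switch, no physical NIC required
  # dladm create-vnic -l stub0 vnic0   # virtual NIC for the first zone or virtual server
  # dladm create-vnic -l stub0 vnic1   # virtual NIC for the second zone or virtual server
  # dladm show-vnic                    # verify the virtual wire

Each VNIC can then be assigned to a non-global zone, giving two virtual servers connected by a virtual switch on a single laptop.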

Solaris Reference Material
2010-12 [PDF] Set Up a Virtual Network Automatically with Solaris 11 Express
2011-03 [PDF] Solaris 11 ISV Adoption Guide
2011-06 [HTML] Lab: Introduction to the Solaris ZFS File System
2011-06 [HTML] Lab: Protecting Your Applications with Solaris 10 Security
2011-06 [HTML] Lab: Protecting Your Applications with Solaris 11 Security
2011-06 [HTML] Lab: Installing Solaris 11 Express in Oracle VM VirtualBox

Tuesday, January 18, 2011

Sun Developer Days for NY/NJ: 2010-Dec

Abstract
Isaac Rozenfeld from Oracle/Sun posted the agenda and materials from a two-day Solaris Days tour covering New York City and Bridgewater.

Agenda
08:30 Registration & Breakfast
09:00 Welcome Back, Agenda – Isaac Rozenfeld [Audio]; Focus on Financial Services – Ambreesh Khanna [Audio]
09:10 Solaris Networking Virtualization – Nicolas Droux [Audio]
10:00 Solaris Zones Update – Dan Price [Audio]
10:45 Image Packaging System – Bart Smaalders [Audio]
11:30 Platform Updates: x86 and SPARC – Sherry Moore [Audio]
12:15 Lunch, Isaac Rozenfeld's bonus session on running Solaris on top of the VirtualBox hypervisor [Audio]
01:00 Solaris Integration into Oracle – Damien Farnham [Audio]
01:45 Leaping Forward with Solaris Infiniband – David Brean [Audio]
02:30 Installation Experience Modernization – David Miner [Audio]
03:15 Oracle Enterprise Manager Ops Center – Mike Barrett [Audio]
04:00 Service Management Facility Architecture and Deployment – Liane Praza [Audio]
04:45 Q&A/Raffle

Executive Overview
Some of the important take-aways from a Network Management perspective:

10:00AM Solaris Zones Update by Dan Price
  • Page 5 - Older Solaris 8 & Solaris 9 SPARC physical machines can be virtualized (p2v), as well as Linux under Intel
  • Page 8 - Security and patch OS updates can be made by merely migrating a zone containing an application from the old server to another server that already has the patch applied
  • Page 24 - p2v supports virtualizing Solaris 8 and Solaris 9 (and now Solaris 10 from a Solaris 11 platform); v2v supports moving a zone between physical machines
  • Page 26 - A support matrix for the common applications about which inquiries are constantly made
  • Page 19 - New "zonestat" command for quickly seeing the health of components across multiple zones simultaneously (a short sketch follows at the end of this overview)
10:45AM - Image Packaging System by Bart Smaalders
  • Pages 1-44 - Overview of the Solaris 11 Image Packaging System
11:30 AM - Platform Updates: x86 and SPARC by Sherry Moore
  • Page 4 - New SPARC T3 Processor (16 cores) image and features
  • Page 5 - I am tickled that Oracle used a SPARC diagram drawn by me (unfortunately they stretched it)
  • Page 6 - Current generation systems: images and features
1:45PM - Leaping Forward with Solaris Infiniband by David Brean
  • Page 16 - Infiniband usage in Solaris Virtualized Zones Diagram
  • Page 30 - Important OS commands for Infiniband Fabric
2:30PM - Installation Experience Modernizations by David Miner
  • Page 4 - Solaris 10 and Solaris 11 Comparisons (important: Jumpstart Replaced!)
  • Page 5 - New Boot Environments based upon ZFS with "unlimited snapshots"; breaking a mirror to get a single rollback is a thing of the past with Solaris 11
  • Page 9 - New Automated Installer Diagram, to replace Jumpstart… following pages illustrate use cases!
4:00PM - Service Management Facility Architecture and Deployment by Liane Praza
  • Page 4 - Best Practices for deploying applications across networks
  • Page 7 - Best Practices for deploying applications onto ZFS
  • Page 9 - Software Support and Admin teams no longer require root or sudo with Solaris SMF for stop/start/restart
  • Page 11 - Application layer firewalls bundled as a service
  • Page 16 - Solaris 11 Image Packaging System no longer uses scripts; that work is bundled into SMF instead
  • Page 17 - Automatic Fault notifications through SMF via email & SNMP
  • Page 19 - Best Practices of modern virtualized Solaris Application Deployment
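As a quick, hedged illustration of the zonestat command mentioned under the Zones Update above (the zone name below is hypothetical):

  # zonestat 5              # summarize per-zone resource utilization every 5 seconds, until interrupted
  # zonestat -z nmzone 5    # restrict the report to a single zone of interest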

Tuesday, July 13, 2010

Solaris Crossbow Virtual Wire: Network in a Box

Abstract:

For 8 years, Sun has been re-developing the TCP/IP stack in Solaris. Nicolas Droux is one of the core Solaris architects involved in re-architecting the stack. At the 23rd Large Installation System Administration Conference (LISA '09), Nicolas presented a short session describing the new features in Solaris TCP/IP from Project Crossbow.

Problems
  • Host Virtualization
  • Service Virtualization
Key Issues to Solve
  • Virtualizing Hardware NIC's
  • Zones Sharing a NIC
  • Maintain Performance
  • The desire is to allow the virtualized network stack to use as much of the hardware as possible.
  • Allow the Virtual Machines to understand how much bandwidth they are allowed to use, to keep zones from stepping on one another.
  • Management integrated into the stack itself, to avoid users having to look at multiple man pages.
  • Security to ensure badly behaved applications are not injecting bad packets on a shared network
8 Years of Development
  • Old code was based upon Streams; solutions to resolve this included:
  • closer integration of IP to TCP layers
  • data link (MAC) to IP
  • new interface to device drivers (Project Nemo)
  • IP QoS integrated, simplified, and made more efficient
  • Crossbow integrated at MAC layer
  • Requested more modern NIC features from hardware vendors: more hardware ring buffers, DMA channels, and rich classifiers... building new features into the TCP/IP stack
Enablers & Key Opportunities
  • Server and Network Consolidation
  • Open Networking
  • Cloud Computing
Features
  • Hardware Lanes, to assign traffic to virtual NIC's, buffers, kernel threads, interrupts, CPU threads, Zones, and/or Virtual Machines!
  • Stack adjusts flow based upon server load or traffic load, with ability to adjust interrupts, so large chains of packets can be pulled from the NIC without an interrupt per packet penalty
  • Virtual NIC's (pseudo-MAC instances) can be configured with bandwidth and priorities; link aggregations can have V-NIC's assigned on top
  • Bind: VLAN and Priority Flow Control to a V-NIC; a hardware lane to a switch
  • Virtual switch built automatically whenever 2 VNIC's are assigned to a Data Link
  • Virtual Switch can be built on EtherStubs, isolated from real hardware
  • Assigning a CPU Pool to a VNIC is coming
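A minimal command-line sketch of the bandwidth and priority properties described above, assuming the Solaris 11 Express dladm syntax (the link and VNIC names are illustrative):

  # dladm create-vnic -l net0 -p maxbw=100M,priority=high vnic_web0   # VNIC capped at 100 Mbps on a high-priority lane
  # dladm set-linkprop -p maxbw=50M vnic_web0                         # lower the bandwidth cap later, on the fly
  # dladm show-linkprop -p maxbw,priority vnic_web0                   # verify the lane settings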
Implications to Hardware
  • Zones can replace real machines in a Solaris model on a laptop
  • Virtual Switches can replace real switches in a Solaris model on a laptop
  • Virtual Routers can replace real routers in a Solaris model on a laptop
  • The configuration can be deployed in a production data center
Implications to Services: Crossbow Flows
  • A flow describes a type of traffic moving through a network
  • Flows can be described by: Services, Transport, Port Number, etc.
  • Properties can be attached to flows: Bandwidth, CPU, Priorities, etc.
  • Flows can be created on NIC's and V-NIC's
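A minimal sketch of a flow, assuming the Solaris 11 Express flowadm syntax (the flow, link, and property values are illustrative):

  # flowadm add-flow -l vnic0 -a transport=tcp,local_port=443 https_flow   # describe the traffic by transport and port
  # flowadm set-flowprop -p maxbw=200M,priority=high https_flow            # attach bandwidth and priority properties
  # flowadm show-flow                                                       # list the configured flows
  # flowadm show-flowprop https_flow                                        # inspect the flow's properties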
Question & Answers
  • Bandwidth caps can be assigned to a NIC; bandwidth guarantees, to allow bursting, were on the roadmap in 2009.

Friday, June 5, 2009

OpenSolaris 2009.06 - Network Virtualization

Network Virtualization Technology: Project Crossbow

Sun has been working on re-architecting the TCP/IP stack in Solaris for virtualization for close to 3 years, making progress each year with new features. OpenSolaris 2009.06 exhibits some of the most recent enhancements.
http://link.brightcove.com/services/player/bcpid1640183659?bctid=24579687001

Network infrastructure in Solaris has been re-written at the NIC, Driver, and Socket levels - all the way up the stack.

Network Virtualization has to do with dedicated resources and isolation of network resources. It means multiples of everything: hardware ring buffers in a NIC, TCP/IP stacks in a kernel, and kernel ring buffers in a stack.

http://www.opensolaris.com/use/ProjectCrossbow.pdf
"Crossbow is designed as a fully parallelized network stack structure. If you think of a physical network link as a road, then Crossbow allows dividing that road into multiple lanes. Each lane represents a flow of packets, and the flows are architected to be independent of each other — no common queues, no common threads, no common locks, no common counters."

Some of the more interesting results of this integration: create networks with no physical NIC cards; create switches in software; assign bandwidth to a virtual NIC card (vNIC); assign CPU resources to a vNIC; assign quality of service (QoS) attributes to a vNIC; throttle protocols on a vNIC; virtualize dumb NIC's via the kernel to look like smart NIC's; switch automatically between interrupt and polled modes.

The implications are staggering:

  • Heavy consumption of network resources by an application does not necessarily have to step on other mission-critical applications running in another virtual server
  • Priorities for latency sensitive protocols (ex. VoIP) can be specified for traffic based upon various packet policies, like Source IP, Destination IP, MAC address, Port, or Protocol
  • Security is enhanced since Solaris 10 containers no longer have to share an IP stack for the same physical NIC; a physical NIC can now have multiple IP stacks, one for each container
  • Multiple physical ports can be aggregated into a single virtual port and then re-subdivided into multiple virtual NIC's so many applications or many virtual servers can experience load sharing and redundancy in a simplified way (once at the lowest layer instead of multiple times, for each virtual machine)
  • Older systems can be retained for D-R or H-A since their dumb NIC's would be virtualized in the kernel, while newer NIC's on newer equipment can be added into the application cluster for enhanced performance
  • Heavily used protocols will switch a stack into "polled mode" to remove the overhead of interrupts on the overall operating system, providing better overall system performance as well as faster network throughput than competing operating systems
  • Enhanced performance at a lower system resource expense is achieved by tuning the vNIC's to more closely match the clients, meaning flow control can happen at the hardware or NIC card level (instead of forcing the flow control higher in the TCP stack)
  • Modeling of applications and their performance can be done completely on a laptop - all application tiers, including H-A - allowing architects to test system performance implications by making live configuration changes
  • Repelling DoS attacks at the NIC card - if there is a DoS attack against a virtual server's vNIC card, the other virtual servers do not necessarily have to be impacted on the main system due to isolation and resource management, and packets are dropped at the hardware layer instead of at the kernel or application, where high levels of interrupts are soaking up all available CPU capacity.
Usually, adding and leveraging features like QoS and virtualization decreases operating system performance, but in OpenSolaris, adding these features alongside a substantial re-write of the code enabled a substantial increase in read & write throughput over Solaris, as well as a substantial increase in read throughput (with close to on-par write throughput) in comparison to Linux on the same hardware.
http://www.opensolaris.com/learn/features/networking/networkperformance/

This OpenSolaris technology is truly ground-breaking for the industry.

Usage of Network Virtualization in Network Management

In the realm of Network Management, there is usually a mix of unreliable protocols (ICMP and UDP) with reliable protocols (TCP sockets). The unreliable protocols are used to gather (ICMP, SNMP) or collect (Syslog) data from the edge devices, while reliable protocols are used to aggregate that data within the management platform cluster.

While the UDP packets are sent/received, they can be dropped during times of high utilization (event storms, denial of service attacks, managed network outages, etc.) - so applying a higher quality of service to these protocols becomes desirable to ensure the network management tools have the most accurate view of the state of the network.

Communication to the internal systems that aggregate that data is required for longer-term usage (i.e. monthly reporting) and must be maintained (i.e. backups) - but these subsystems are nowhere near as important as maintaining an accurate state of the managed network when debugging an outage, which affects the bottom line of the company. These packets can be delayed a few microseconds to ensure the critical packets are being processed.
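As a hedged sketch of how Crossbow flows could express this policy (the ports are the standard SNMP trap and syslog ports; the link and flow names are illustrative), the unreliable management protocols get a higher priority than the bulk aggregation traffic:

  # flowadm add-flow -l net0 -a transport=udp,local_port=162 snmptrap_flow   # SNMP traps from the managed edge
  # flowadm add-flow -l net0 -a transport=udp,local_port=514 syslog_flow     # syslog collection from the managed edge
  # flowadm set-flowprop -p priority=high snmptrap_flow
  # flowadm set-flowprop -p priority=high syslog_flow

The internal TCP aggregation traffic is left at the default priority, so it is the traffic that gets delayed when the platform is under pressure.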

Enhanced performance in the overall TCP/IP stack also means more devices can be managed by the network management platform while maintaining the same hardware.

Implementation of Network Virtualization in Network Management

The H-A platform can be loaded up with OpenSolaris 2009.06 and the LDOM holding the Network Management application can be live-migrated seamlessly in minutes.
http://blogs.sun.com/weber/entry/logical_domain_mobility_between_solaris

After running on the production H-A platform for a time, the production platform can be upgraded, and the LDOM migrated back in minutes.
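A hedged sketch of what that migration might look like with the Logical Domains Manager (the domain and host names are hypothetical, and the available options depend on the LDoms release):

  # ldm migrate-domain nm_ldom root@target-host   # move the Network Management LDOM to the other platform
  # ldm list-domain                               # run on the target host to confirm the domain arrived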

Conclusion

Operating systems like OpenSolaris 2009.06 offer the Network Management Architect new options for lengthening asset lifespan, increasing return-on-investment for hardware assets, ensuring better system performance of network management assets, and ensuring the best possible performance from the network management team.