
Thursday, April 11, 2013

Solaris: Massive Internet Scalability


[SPARC processor, courtesy Oracle SPARC T5/M5 Kick-Off]
Abstract:
Computing systems started with single processors. As computing requirements increased, multiple processors were lashed together using SMP (Symmetric Multi-Processing) to add more computing power to a single system, breaking work into processes and threads - but the transition to multi-threaded computing was a long process. The lack of scalability for some problems produced MPP (Massively Parallel Processing) platforms, lashing systems together with special software to load-balance jobs. MPP platforms were very difficult to program for general-purpose applications, so massively multi-core and multi-threaded processors started to appear. Oracle recently released the SPARC T5 processor and systems - an SMP platform scaling to massive socket, core, and thread counts in a single chassis - leveraging existing multi-threaded software, reducing the need for MPP in real-world applications, while placing tremendous pressure upon the Operating System layer.

[SPARC logo, courtesy SPARC.org]
SPARC Growth Rate:
The SPARC processors grew rapidly in core and thread counts, alongside a movement to massively threaded software:
SPARC   Cores   GHz   Threads   Sockets   Total-Cores   Total-Threads
T1        8     1.4      32        1           8              32
T2        8     1.6      64        1           8              64
T2+       8     1.6      64        4          32             256
T3       16     1.6     128        4          64             512
T4        8     3.0      64        4          32             256
T5       16     3.6     128        8         128            1024
M5        6     3.6      48       32         192            1536

The movement to massively threaded processors meant that applications needed to be re-written to take advantage of the new, higher throughput. Certain applications were already well suited for this workload (e.g., web servers) - but many were not.

[DTrace infrastructure and providers]
Application Challenges:
The movement to massively threaded software, to take advantage of the higher overall throughput offered by the new processor technology, was difficult for application programmers. Technologies such as DTrace were added to advanced operating systems such as Solaris to assist developers and systems administrators in pin-pointing their code hot-spots for later re-write.
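For a concrete illustration, a standard DTrace one-liner (not specific to any product named here; "myapp" is a hypothetical executable name) samples user stacks roughly 997 times per second and counts the hottest ones - the stacks with the highest counts are the single-threaded hot-spots worth a re-write:

    dtrace -n 'profile-997 /execname == "myapp"/ { @[ustack()] = count(); }'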

When the SPARC T4 was released, there was a feature called the "Critical Thread API" in the S3 core, to assist application programmers who could not resolve some single-threaded bottlenecks. The S3 core could automatically switch into a single-threaded mode (sacrificing throughput) to address hot-spots. The T4 (and T5) S3 core was also clocked at a higher rate, providing an overall boost to single-threaded workloads over previous processors - even at the same number of cores and threads. The S3's ability to perform out-of-order instruction handling also sped up execution of single-threaded applications.
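As documented for Solaris 11, the usual way to engage the critical-threads optimization is to run the hot process in the fixed-priority (FX) scheduling class at priority 60, which hints the dispatcher to give it exclusive use of a core. A sketch, where 1234 is a hypothetical process ID:

    # place the hot process in the FX class at priority 60 (limit 60)
    priocntl -s -c FX -m 60 -p 60 -i pid 1234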

The SPARC T4 and T5 processors finally offered application developers a no-compromise processor. For heavy single-threaded workloads, Oracle released the SPARC M5 processor, driving increasing scales of single-threaded workloads without having to rely upon systems produced by long-time SPARC partner & competitor - Fujitsu.


[Solaris logo, courtesy Sun Microsystems]
Operating System Challenges:

A single system scaling to 192 cores and 1536 threads offers incredible challenges to Operating System designers. Steve Sistare from Oracle discusses some of these challenges in a Part 1 article and solutions in a Part 2 article. Some of the challenges overcome by Solaris included:
CPU scaling issues include:
  • increased lock contention at higher thread counts (see the sketch after this list)
  • O(NCPU) and worse algorithms

Memory scaling issues include:
  • working sets that exceed VA translation caches
  • unmapping translations in all CPUs that access a memory page
  • O(memory) algorithms
  • memory hotspots

Device scaling issues include:
  • O(Ndevice) and worse algorithms
  • system bandwidth limitations
  • lock contention in interrupt threads and service threads
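A minimal user-land sketch of the lock-contention point above (illustrative C, not Solaris kernel source): a single global lock serializes every thread in the box, while per-CPU counters padded to their own cache lines push the O(NCPU) cost onto the rare read path instead of the hot write path.

    /* Illustrative sketch: global-lock counter vs. per-CPU counters. */
    #include <pthread.h>
    #include <stdint.h>
    #include <atomic.h>                   /* Solaris atomic_ops(3C)        */

    #define NCPU_MAX 1536                 /* e.g., a 192-core/1536-thread box */

    /* Naive: all threads contend on one lock and one cache line. */
    static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t g_counter;

    void count_naive(void) {
        pthread_mutex_lock(&g_lock);      /* serializes up to 1536 threads */
        g_counter++;
        pthread_mutex_unlock(&g_lock);
    }

    /* Scalable: one counter per CPU, each padded to its own cache line. */
    struct percpu { uint64_t v; char pad[64 - sizeof (uint64_t)]; };
    static struct percpu g_percpu[NCPU_MAX];

    void count_percpu(int cpu) {          /* cpu = caller's CPU id         */
        atomic_add_64(&g_percpu[cpu].v, 1);
    }

    uint64_t count_read(void) {           /* O(NCPU) only on the rare read */
        uint64_t sum = 0;
        for (int i = 0; i < NCPU_MAX; i++)
            sum += g_percpu[i].v;
        return sum;
    }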
Clearly, the Solaris engineering team at Oracle was up to the task set for it by the SPARC hardware engineering team. Innovation from Sun Microsystems continues under Oracle. It will take years for other Operating System vendors to "catch up".
Network Management Applications:

In the realm of Network Management, many polling applications use threads to scale, because network communication to edge devices is latency-bottlenecked - making the SPARC "T" processors an excellent choice in carrier environments.
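A hedged skeleton of such a poller (plain POSIX threads; poll_device() is a placeholder, not a real library call): each worker spends most of its time blocked on network round-trips, which is exactly the behavior the T-series hides by running other hardware threads on the same core.

    /* Hypothetical skeleton of a latency-bound, threaded poller. */
    #include <pthread.h>

    #define NWORKERS 128                 /* one per hardware thread, e.g. T3 */
    #define NDEVICES 10000

    static int next_device;              /* index into the device list      */
    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

    static void poll_device(int id) {
        /* placeholder: SNMP GET / ICMP echo; dominated by round-trip latency */
        (void)id;
    }

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&q_lock);
            int id = (next_device < NDEVICES) ? next_device++ : -1;
            pthread_mutex_unlock(&q_lock);
            if (id < 0)
                break;
            poll_device(id);             /* blocks on I/O; core stays busy  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid[NWORKERS];
        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }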
The data returned by the massively multi-threaded pollers needs to be placed in a database in a consistent fashion, which posed a problem during the device "discovery" process. Discovery is normally single-threaded, and it experienced massive slow-downs under the "T" processors - until the T4 was released. With processors like the SPARC T4 and SPARC T5, Network Management applications gain the proverbial "best of both worlds": massive hardware thread scalability for pollers, and excellent single-threaded throughput during discovery bottlenecks via the "Critical Thread API."

The latest SPARC platforms are optimal platforms for massive Network Management applications. There is no other platform on the planet which compares to SPARC for managing "The Internet".

Wednesday, April 11, 2012

Solaris Tab: SPARC White Paper Addendums

The Solaris Tab was recently updated with some white papers.

A new category was added for SPARC White Papers.

White papers are listed in date order with shortened titles at the top for easy access, and categorized with their full titles below according to topic.

Solaris Reference Material
2010-04 [PDF] Oracle's Sun SPARC T5120/T5220, T5140/T5240 Server Architecture
2010-04 [PDF] Oracle Sun SPARC Enterprise T5440 Server Architecture
2011-02 [PDF] Oracle's SPARC T3-4, T3-2, T3-1, and T3-1B Server Architecture
2012-02 [PDF] Oracle's SPARC T4-1, T4-2, T4-4, and T4-1B Server Architecture
2012-04 [PDF] How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

SPARC White Papers
2010-04 [PDF] Oracle's Sun SPARC Enterprise T5120/T5220 and Oracle's Sun SPARC Enterprise T5140/T5240 Server Architecture
2010-04 [PDF] Oracle Sun SPARC Enterprise T5440 Server Architecture
2011-02 [PDF] Oracle's SPARC T3-4, SPARC T3-2, SPARC T3-1, and SPARC T3-1B Server Architecture
2012-02 [PDF] Oracle's SPARC T4-1, SPARC T4-2, SPARC T4-4, and SPARC T4-1B Server Architecture
2012-04 [PDF] How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

Tuesday, January 18, 2011

Sun Developer Days for NY/NJ: 2010-Dec

Sun Developer Days for NY/NJ: 2010-Dec

Abstract
Isaac Rozenfeld from Oracle/Sun posted an agenda and materials from a two-day Solaris Days tour of New York City and Bridgewater.

Agenda
08:30 Registration & Breakfast
09:00 Welcome Back, Agenda – Isaac Rozenfeld [Audio]; Focus on Financial Services – Ambreesh Khanna [Audio]
09:10 Solaris Networking Virtualization – Nicolas Droux [Audio]
10:00 Solaris Zones Update – Dan Price [Audio]
10:45 Image Packaging System – Bart Smaalders [Audio]
11:30 Platform Updates: x86 and SPARC – Sherry Moore [Audio]
12:15 Lunch, Isaac Rozenfeld's bonus session on running Solaris on top of the VirtualBox hypervisor [Audio]
01:00 Solaris Integration into Oracle – Damien Farnham [Audio]
01:45 Leaping Forward with Solaris Infiniband – David Brean [Audio]
02:30 Installation Experience Modernization – David Miner [Audio]
03:15 Oracle Enterprise Manager Ops Center – Mike Barrett [Audio]
04:00 Service Management Facility Architecture and Deployment – Liane Praza [Audio]
04:45 Q&A/Raffle

Executive Overview
Some of the important take-aways, from a Network Management perspective:

10:00AM Solaris Zones Update by Dan Price
  • Page 5 - Older Solaris 8 & Solaris 9 SPARC physical machines can be virtualized (p2v), as well as Linux under Intel
  • Page 8 - Security and Patch OS Updates can be made by merely migrating a zone containing an application from the old server to another server which had the patch applied
  • Page 24 - p2v support virtualizing Solaris 8, Solaris 9 (now Solaris 10 from a Solaris 11 platform); v2v for moving a zone between physical machines
  • Page 26 - A common application support matrix covering the applications about which inquiries are constantly made
  • Page 19 - New "zonestat" command for quickly seeing health of components across multiple zones simultaneously.
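For example, a quick health sweep across every running zone at a five-second interval (the interval value is arbitrary):

    zonestat 5        # CPU and memory utilization per zone, every 5 seconds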
10:45AM - Image Packaging System by Bart Smaalders
  • Pages 1-44 - Overview of the Solaris 11 Image Packaging System
11:30 AM - Platform Updates: x86 and SPARC by Sherry Moore
  • Page 4 - New SPARC T3 Processor (16 cores) image and features
  • Page 5 - I am tickled that Oracle used a SPARC diagram drawn by me (unfortunately they stretched it)
  • Page 6 - Current generation systems: images and features
1:45PM - Leaping Forward with Solaris Infiniband by David Brean
  • Page 16 - Infiniband usage in Solaris Virtualized Zones Diagram
  • Page 30 - Important OS commands for Infiniband Fabric
2:30PM - Installation Experience Modernizations by David Miner
  • Page 4 - Solaris 10 and Solaris 11 Comparisons (important: Jumpstart Replaced!)
  • Page 5 - New Boot Environments based upon ZFS with "unlimited snapshots"; breaking a mirror to get a single rollback is a thing of the past with Solaris 11
  • Page 9 - New Automated Installer Diagram, to replace Jumpstart… following pages illustrate use cases!
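A short sketch of the Solaris 11 boot-environment workflow this enables (the BE name is arbitrary):

    beadm create pre-update     # snapshot the current boot environment
    beadm list                  # confirm boot environments and active flags
    # ...apply updates; if anything misbehaves:
    beadm activate pre-update   # roll back on the next reboot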
4:00PM - Service Management Facility Architecture and Deployment by Liane Praza
  • Page 4 - Best Practices for deploying applications across networks
  • Page 7 - Best Practices for deploying applications onto ZFS
  • Page 9 - Software Support and Admin teams no longer require root or sudo with Solaris SMF for stop/start/restart (example after this list)
  • Page 11 - Application layer firewalls bundled as a service
  • Page 16 - Solaris 11 Image Packaging System no longer uses scripts, but bundles into SMF
  • Page 17 - Automatic Fault notifications through SMF via email & SNMP
  • Page 19 - Best Practices of modern virtualized Solaris Application Deployment
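One common pattern behind the no-root restart point above (the service FMRI, authorization name, and user are hypothetical):

    # tag the service with an action authorization...
    svccfg -s site/myapp setprop general/action_authorization = \
        astring: solaris.smf.manage.myapp
    svcadm refresh site/myapp
    # ...grant that authorization to the support account:
    usermod -A solaris.smf.manage.myapp appadmin
    # appadmin can now restart the service without root or sudo:
    svcadm restart site/myapp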

Sunday, December 5, 2010

CoolThreads UltraSPARC and SPARC Processors


[UltraSPARC T3 Micrograph]

CoolThreads UltraSPARC and SPARC Processors

Abstract:

Processor development takes an immense quantity of time, to architect a high-performance solution, and an uncanny vision of the future, to project market demand and acceptance. In 2005, Sun embarked on a bold path moving toward many cores and many threads per core. Since the purchase of Sun by Oracle, the internal SPARC road map from Sun has been clarified.


[UltraSPARC T1 Micrograph]
Generation 1: UltraSPARC T1
A new family of SPARC processors was announced by Sun on 2005 November 14.
  • Single die
  • Single socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4 threads/core
  • 1 shared floating point core
  • 1.0 GHz - 1.4 GHz clock speed
  • 279 million transistors
  • 378 mm2
  • 90 nm CMOS (TI)
  • 1 JBUS port
  • 3 Megabyte Level 2 Cache
  • 1 Integer ALU per Core
  • ??? Memory Controllers
  • 6 Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: ???
The platform was designed as a front-end server for web applications. With a massive number of cores, it was designed to deliver web-tier performance similar to existing quad-socket systems while leveraging a single socket.

To understand the ground-breaking advancement in this technology: most processors of the day were single-core, with the occasional dual-core part built by gluing cores together through a more expensive process referred to as a multi-chip module - driving higher software licensing costs for those platforms.


Generation 2: UltraSPARC T2
The next generation of the CoolThreads processor was announced by Sun in 2007 August.
  • Single die
  • Single Socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x Dual Channel FBDIMM DDR2 Controllers
  • 8 Stage Integer Pipeline per Core
  • 2x 10 GigabitEthernet on-CPU ports
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor was designed for higher compute-intensive requirements and incredibly efficient network capacity. The platform made an excellent front-end server for applications as well as Middleware, with the ability to do 10 Gigabit wire-speed encryption with virtually no CPU overhead.
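The on-chip crypto units are reached through the Solaris Cryptographic Framework; assuming the Solaris-bundled OpenSSL and its pkcs11 engine, the offload can be observed with something like:

    openssl speed -engine pkcs11 -evp aes-128-cbc   # compare against a software-only run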

Competitors had started to build single-die dual-core CPU's, producing quad-core processors by gluing dual-core dies into a Multi-Chip Module.


[UltraSPARC T2 Micrograph]
Generation 3: UltraSPARC T2+
Sun quickly released the first CoolThreads SMP capable UltraSPARC T2+ in 2008 April.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 2x? Dual Channel FBDIMM DDR2 Controllers
  • 8? Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor allowed the T processor series to move from the Tier 0 web engines and Middleware to the Application tier. Architects started to understand the benefits of this platform entering the Database tier. This was the first CoolThreads processor to scale past 1 and up to 4 sockets.

By this time, the competition really started to understand that Sun had properly predicted the future of computing. The drive toward single-die Quad-Core chips had started, with Hex-Core Multi-Chip Modules being predicted.


Generation 4: SPARC T3
The market became nervous when Oracle purchased Sun. The first Oracle-branded CoolThreads SMP capable UltraSPARC T3 was launched in 2010 September.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 16 integer cores
  • 16 crypto cores
  • 16 floating point units
  • 8 threads/core
  • 1.67 GHz clock speed
  • ??? million transistors
  • 377 mm2
  • 40 nm
  • 2x PCI Express port (2.0 x8)
  • 6 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x DDR3 SDRAM Controllers
  • 8? Stage Integer Pipeline per Core
  • 2x 10 GigabitEthernet on-CPU ports
  • Crypto Algorithms: DES, 3DES, AES, RC4, SHA1, SHA256/384/512, Kasumi, Galois Field, MD5, RSA to 2048 key, ECC, CRC32
This processor was more than what the market was anticipating from Oracle. It took all the features of the T2 and T2+ and combined them into the new T3, with an increase in overall features. No longer did the market need to choose between multiple sockets or embedded 10 GigE interfaces - this chip had it all, plus double the cores.

Immediately before this release, the competition was shipping hex-core and octal-core CPU's built by gluing smaller dies together into multi-chip modules. The T3 was a substantial upgrade over the competition, offering double the cores on a single die.


Generation 5: SPARC T4
Oracle indicated in December 2010 that they had thousands of these processors in the lab and predicted the processor would be released by the end of 2011.

After the announcement, a separate press release indicated the processor would have a renovated core, for higher single-threaded performance, but the socket would offer half the cores.

Most vendors are projected to have 8-core processors available (through Multi-Chip Modules) by the time the T4 is released, but only the T4 should be on a single piece of silicon during this period.


[2010-12 SPARC Solaris Roadmap]
Generation 6: SPARC T5

Some details on the T5 were announced with the T4. The processor will use the renovated T4 core, with a 28nm process, and return to 16 cores per socket. This may be the first CoolThreads T processor able to scale from 1 to 8 sockets. It is projected to appear in early 2013.

Some vendors are projecting to have 12-core processors on the market using Multi-Chip Module technology, but when the T5 is released, its 16 cores per socket should still lead the market.

Network Management Connection

Consolidating most network management stations in a globalized environment works very well with the CoolThreads T-Series processors. Consolidating multiple slower SPARC platforms onto single- and double-socket T-series systems has worked well over the past half decade.

While most network management polling engines will scale linearly with these highly-threaded processors, there are some operations which are bound to single threads. These types of processes include event correlation, startup time, and synchronization after a discovery in a large managed topology.

The market will welcome the enhanced T4 processor core and the T5 processor, when it is released.

Friday, December 3, 2010

Scalable Highest Performing Clusters at Value Pricing



Scalable Highest Performing Clusters at Value Pricing

Abstract:
Oracle presented another milestone achievement in their 5 year SPARC/Solaris road map with Fujitsu. John Fowler stated: "Hardware without Software is a Door-Stop, Solaris is the gateway."

High-Level:
The following is a listing of my notes from the two sessions. The notes have been combined, with Larry Ellison outlining the high-level and John Fowler presenting the lower-level details. SPARC T3 making world-record benchmarks. New T3 based integrated products. Oracle's Sun/Fujitsu M-Series gets a speed bump. SPARC T4 is on the way.

Presentation Notes:


New TpmC Database OLTP Performance
  • SPARC Top cluster performance
  • SPARC Top cluster price-performance
  • (turtle)
    HP Superdome Itanium 4 Million Transactions/Minute
  • (stallion)
    IBM POWER7 Power 780 10 Million Transactions/Minute
    (DB2 clustered through custom applications)
  • Uncomfortable 4 months for Oracle, when IBM broke the Oracle record
  • (cheetah)
    Sun SPARC 30 Million Transactions/Minute
    (standard off-the-shelf Oracle running RAC)
  • Oracle/Sun performance benchmark => ( IBM + HP ) x 2 !
  • Sun to IBM Comparison:
    3x OLTP Throughput, 27% better Price/Performance, 3.2x faster response time
  • Sun to HP Comparison:
    7.4x OLTP Throughput, 66% better Price/Performance, 24x compute density
  • Sun Supercluster:
    108 sockets, 13.5 TB Memory, Infiniband 40 Gigabit link, 246 Terabytes Flash, 1.7 Petabytes Storage, 1 Quadrillion rows, 43 Trillion transactions per day, 0.5 sec avg response

New Gold Release
  • Gold Standard Configurations are kept in the lab
  • What the customer has, the support organization will have assembled in the lab
  • Oracle, Sun, Cisco, IBM will all keep their releases and bug fixes in sync with releases

SPARC Exalogic Elastic Cloud
  • Designed to run Middleware
  • New T3 processor based
  • 100% Oracle Middleware is Pure Java
  • Tuned for Java and Oracle Fusion Middleware
  • Load-balances with elasticity
  • Ships Q1 2011
  • T3-1B SPARC Compute Blades based
    30 Compute Servers, 16 cores/server, 3.8 TB RAM, 960 GB mirrored flash disks, 40 TB SAS Storage, 4 TB Read Cache, 72 GB Write Cache, 40 Gb/sec Infiniband, 10 GigE to Datacenter

SPARC Supercluster
  • New T3 processor based and M processor based
  • T3-2 = 2 nodes, 4 CPU's, 64 cores/512 threads, 0.5 TB RAM, 96 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband
  • T3-4 = 3 nodes, 12 CPU's, 192 cores/1536 threads, 1.5 TB RAM, 144 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband
  • M5000 = 2 nodes, 16 CPU's, 64 core/128 threads, 1 TB RAM, 144 TB HDD ZFS, 1.7TB Write Flash, 4TB Read Flash, 40 Gbit Infiniband

T3 Processor in production
  • Released already, performing in these platforms
  • 1-4 processors in a platform
  • 16 cores/socket, 8 threads/core
  • 16 crypto-engines/socket
  • More cores, threads, 10 GigE on-chip, more crypto engines

T4 Processor in the lab!
  • Thousands under test in the lab, today
  • To be released next year
  • 1-4 processors
  • 8 cores/socket, 8 threads/core
  • faster per-thread execution

M3 Processor from Fujitsu
  • SPARC64 VII+
  • 1-64 SPARC64 VII+ Processors
  • 4 cores, 2 threads/core
  • Increased CPU frequency
  • Double cache memory
  • 2.4x performance of original SPARC64 VI processor
  • VII+ boards will slot into the VI and VII board chassis
Flash Optimization
- Memory hierarchy with software awareness

Infiniband
- Appropriate for High Performance Computing
- Dramatically better performance than Ethernet for linking servers to servers & storage

New Solaris 11 Release

  • Next Generation Networking
    re-engineered network stack
    low latency high bandwidth protocols
    virtualized
  • Cores and Threads Scale
    Adaptive Thread and Memory Placement
    10,000's of core & threads
    thread observability with DTrace

  • Memory Scale
    Dynamic optimization for large memory configs
    Advanced memory placement
    VM systems for 1000's TB memory configs

  • I/O Performance
    Enhanced NUMA I/O framework
    Auto-Discovery of NUMA architecture
    I/O resources co-located with CPU for scale/performance

  • Data Scale
    ZFS Massive storage for massive datasets

  • Availability
    Boot times in seconds
    Minimized OS Install
    Risk-Free Updates with lightweight boot and robust package dependency
    Extensive Fault Management with Offline failing components
    Application Service Management with Restart of failed applications and associated services quickly

  • Security
    Secure by default
    Secure boot validated with onboard Trusted Platform Module
    Role Based Root Access
    Encrypted ZFS datasets
    Accelerated Encryption with hardware encryption support

  • Trusted Solaris Extensions
    Dataset labels for explicit access rules
    IP labels for secure communication

  • Virtualization
    Network Virtualization to add to Server and Storage Virtualization
    Network Virtualization includes Virtual NICs and Virtual Switches (sketched below)
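A minimal sketch of the Crossbow-style commands behind this (link names are illustrative):

    dladm create-vnic -l net0 vnic0      # virtual NIC over physical link net0
    dladm create-etherstub stub0         # software "virtual switch"
    dladm create-vnic -l stub0 vnic1     # VNIC plugged into that switch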
SPARC Supercluster Architecture
  • Infiniband is 5x-8x faster than most common Enterprise interconnects
    Infiniband has been leveraged with storage and clustering in software
  • Flash is faster than Rotating Media
    Integrated into the Memory AND Storage Hierarchy

SPARC 5 Year Roadmap
  • SPARC T3 delivered in 2010
  • SPARC VII+ delivered in 2010
  • Solaris 11 and SPARC T4 to be delivered in 2011
Next generation of mission critical enterprise computing
  • Engineer software with hardware products
  • Deliver clusters for general purpose computing
  • Enormous levels of scale
  • Built in virtualization
  • Built in Security
  • Built in management tools
  • Very Very high availability
  • Tested with Oracle software
  • Supported with Gold Level standard
  • Customers spend less time integrating and start delivering services on systems engineered with highest performance components

Tuesday, October 5, 2010

US Department of Energy: No POWER Upgrade From IBM


US Department of Energy: No POWER Upgrade From IBM

Abstract:

Some say no one was ever fired for buying IBM, but no government or business ever got trashed for buying SPARC. The United States Department of Energy bought an IBM POWER system with no upgrade path and no long-term spare parts.


[IBM Proprietary POWER Multi-Chip Module]

Background:

The U.S. Department of Energy purchased a petaflops-class hybrid blade supercomputer called the IBM "Roadrunner" that performed into the multi-petaflop range for nuclear simulations at the Los Alamos National Laboratory. It was based upon the IBM Blade platform. Blades were based upon an AMD Opteron and hybrid IBM POWER / IBM Cell architecture. A short article was published in October 2009 in The Register.

Today's IBM:

A month later, the supercomputer was not mentioned at the SC09 Supercomputing trade show in Oregon, because IBM killed it. Apparently, it had been killed off 18 months earlier - what a waste of American taxpayer funding!

Tomorrow's IBM:

In March 2010, it was published that IBM gave its customers (i.e. the U.S. Government) three months to buy spares, because future hybrid IBM POWER / Cell products were killed. Just a few months ago, IBM demonstrated its untrustworthiness with its existing Thin Client customers and partners by abandoning their thin client partnership and using the existing partner to help fund IBM's movement to a different future thin client partner!



Obama Dollars:

It looks like some remaining Democratic President Obama stimulus dollars will be used to buy a new super computer from Cray and cluster from SGI. The mistake of buying IBM was so huge that it took a massive spending effort from the Federal Government to recover from losing money on proprietary POWER.

[Fujitsu SPARC64 VII Processor]

[Oracle SPARC T3 Processor]
Lessons Learned:
If only the U.S. Government had not invested in IBM's proprietary POWER, but had chosen an open CPU architecture like SPARC, which offers two hardware vendors: Oracle/Sun and Fujitsu.

[SUN UltraSPARC T2; Used in Themis Blade for IBM Blade Chassis]

Long Term Investment:

IBM POWER is not an open processor advocated by other systems vendors. Motorola abandoned the systems market for POWER from a processor production standpoint. Even Apple abandoned POWER in the desktop & server arena. One might suppose that when IBM kills a depended-upon product, one could always buy video game consoles and place them in your lights-out data center, but that is not what the Department of Energy opted for.

Oracle/Sun has a reputation of providing support for systems a decade old, and if necessary, Open SPARC systems and even blades for other chassis can be (and are) built by other vendors (e.g., Themis built an Open SPARC blade for an IBM Blade chassis.) SPARC processors have been designed & produced by different processor and system vendors for over a decade and a half. SPARC is a well-proven long-term investment in the market.

Network Management Connection:

If you need to build a Network Operation Center, build it upon the infrastructure the global telecommunications providers have trusted for over a decade: SPARC & Solaris. One will not find serious network management applications on IBM POWER, so don't bother wasting time looking. There are reasons for that.

Friday, July 23, 2010

UltraSPARC T3: Open Solaris Support


UltraSPARC T3: Open Solaris Support

There is news on the OpenSolaris front: new additions to OpenSolaris and formal naming of the new CoolThreads CPU from Oracle/Sun!


OpenSolaris PSARC/2010/274 has not been published, but it appeared on Twitter describing a new "-xtarget value for UltraSPARC T3".

For those of you who are unaware, the 16 core with 8 thread per core UltraSPARC T3 from Oracle appears to be the successor to the UltraSPARC T2+ from Sun.
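If the PSARC case lands as the tweet describes, building for the new chip would presumably follow the existing Sun Studio -xtarget convention (flag value inferred from the tweet; source and binary names are hypothetical):

    cc -fast -xtarget=T3 -o poller poller.c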

Network Management Tie-In

What does this mean for Network Management?

The Register published a possible SPARC roadmap, showing a 16-core, 8-thread/core processor arriving just after mid-2010, so one might suspect the arrival is close.

Oracle OpenWorld starts September 20, 2010 - so we may have an announcement around then.

If you are building or expanding a NOC, it might be prudent to ask for an NDA to determine the best purchase time, or to wait a couple of months (if you are a new customer) before striking a purchase, since the high-performance UltraSPARC T3 may be just around the corner. Twice the throughput per socket may be worth the wait, if the cost is not significantly higher.

Monday, July 5, 2010

Political Posturing Holding Up Solaris, But Coming!


Political Posturing Holding Up Solaris, But Coming!

Mika Borner, leader of the Switzerland OpenSolaris User Group, which is sponsored by the Advocacy Community Group, recently had a NDA discussion with Dan Roberts (Director of Solaris Product Management.) His impressions of the future of Solaris have been recorded for all to see:
Oracle is still working out how Solaris/Solaris Next/OpenSolaris will play together. As I understood, this is the main reason why OpenSolaris 2010H? is delayed.
New Solaris and/or SPARC releases on the way.
I can't tell you about the future of Solaris, but I see it quite rosy. The promises Oracle has made about Solaris/SPARC have/will more or less be fulfilled. There will be some interesting announcements ;-)
...

My personal opinion is that Oracle will invest more into (Open)Solaris
than they will in Oracle Unbreakable Linux. In the former, Oracle has full control, while the latter has to follow RHEL development closely.

The biggest question at the moment is whether OpenSolaris 2010H? will come out
at all, and if yes, when... Honestly I don't know, but Oracle Open World could be a good time to release it.


New Solaris Releases During OpenWorld?

The question seems to be WHEN and HOW the latest releases will be conducted, not necessarily IF. Speculation seems to point to around Oracle Open World 2010 in September. How this fits in could be tied to the various agenda items.
Oracle and Fujitsu Keynote Addresses, for SPARC Solaris communities.
Oracle and Intel Keynote Addresses, for Intel Solaris (and Linux & Microsoft) communities.
While never having attended an Oracle Open World in the past, one would certainly be interested in attending the next one virtually!


New UltraSPARC T3 Release During OpenWorld?

What the SPARC community is waiting for, with much anticipation, is the arrival of the UltraSPARC T3 processor. This processor will help the SPARC community jump ahead of the competition in central processors for the next few years.

The UltraSPARC T2, while still competitive from an aggregate socket performance perspective, is a little weak on cost competitiveness. With the doubling of cores on the T3, cost competitiveness should improve.

Friday, February 12, 2010

Two Billion Transistors: Niagara T3



Two Billion Transistors: Niagara T3

Abstract:

Sun Microsystems has been developing octal core processors for almost a half decade. During the past few years, a new central processor unit called "Rainbow Falls" or "UltraSPARC KT" has been in development. With the release of the Power7, IBM's first octal core CPU, there has been a renewal of interest in the OpenSPARC processor line, in particular the T3.

Background:

OpenSPARC was an Open Source project started with Sun Microsystems as the initial contributor. It was based upon the open SPARC architecture, which had many companies and manufacturers contributing to and leveraging the open specification over the years. Afara Websystems was one of the SPARC vendors who originated the idea of combining many SPARC cores onto a single piece of silicon. They were later purchased by Sun Microsystems, who had the deep pockets to invest in the engineering required to bring it to fruition (as the OpenSPARC or UltraSPARC T1) and advance it (with the design of the T2, T2+, and now the T3.) Sun was later purchased by Oracle, who had deeper pockets still.

Features:

As is typical with the highly integrated OpenSPARC processors, PCIe is included on-chip, providing very fast access to I/O subsystems.

The T3 looks like a combined T2 and T2+ with enhancements. The T2 had embedded 10Gig Ethernet, while the T2+ had 4-chip cache coherency glue. Well, the T3 has it all, in conjunction with an uplifted DDR3 DRAM interface with 4 memory channels, enhanced crypto co-processors, and a doubling of cores!

The benefits to Network Management:

Small and immature Network Management products are usually thread-bound, but those days of poorly programmed systems are long gone (except in the Microsoft Windows world.)

Network management workloads are typically highly threaded and UNIX-based. Platforms like the OpenSPARC were designed to meet these workloads from their very early design days in the early 2000's, with other CPU vendors anxiously trying to catch up in the late 2000's.

When thousands of devices need to have information polled from numerous subsystems on various minute intervals, latency on the receiving of the information adds a level of complexity to the polling software, and highly threaded CPU's with a well written OS reward the programmer for their work.
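For a rough, illustrative calculation: polling 10,000 devices once per minute is about 167 requests per second, and at a 0.5-second average round-trip, Little's law (N = λ × W ≈ 167 × 0.5) says roughly 84 requests must be in flight at any instant. On a low-thread-count CPU that demands elaborate asynchronous plumbing; on a T-series socket it is simply 84 hardware threads parked cheaply on I/O.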

It was not that long ago when Solaris was updated to manage processes in the millions, when those processes could have dozens, hundreds, or thousands of threads apiece.

In the Network Management arena, we welcome these high-throughput workhorses!