
Tuesday, April 27, 2021

Sun SPARC Enterprise T5120 - ALOM Introduction


Abstract

UNIX systems manufacturers originated their markets with workstations, at a time when they shipped 32-bit systems while the rest of the PC market concentrated on 8- and 16-bit systems, and some CPU vendors, such as Intel, used segmentation to keep their 16-bit software alive while struggling to move to 32-bit architectures. Some of the original servers were simply workstations stacked on a rack in a cabinet. The formerly high-powered video cards were ignored, as remote management needed command-line interfaces. Engineering quickly determined that console access needed to be built into a new class of systems: rack-mounted servers. These early servers offered a traditional ALOM compatibility shell as well as the newer ILOM shell. The ALOM shell is quite functional.

Sun Enterprise T5120 LOM

The Sun SPARC Enterprise T5120 is a server built around a second-generation OpenSPARC processor. It includes a Lights Out Management (LOM) capability referred to as Integrated Lights Out Management (ILOM); the Advanced Lights Out Management (ALOM) compatibility shell may be its default. Most remote systems-management work can be done from the LOM. Oracle publishes an ILOM 3.0 manual, and Sun Microsystems formerly published manuals for OpenBoot 3.x and OpenBoot 4.x.

ALOM: Logging In

A serial cable at 9600 baud can be attached to the console port to provide immediate access:

SUNSP00144FAC0BE7 login: admin
Password:
Waiting for daemons to initialize...

Daemons ready

Oracle(R) Integrated Lights Out Manager
Version 3.0.12.4.y r77080
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

sc>

ALOM: Help Screen

The console port provides a reasonable help screen:

sc> help
Available commands
------------------
Power and Reset control commands:
powercycle [-y] [-f]
poweroff [-y] [-f]
poweron [-c] [FRU]
reset [-y] [-c] [-d] [-f] [-n]
Console commands:
break [-y] [-c]
console [-f]
consolehistory [-b lines|-e lines|-v] [-g lines] [boot|run]
Boot control commands:
bootmode [normal|reset_nvram|bootscript="string"|config="configname"]
setkeyswitch [-y] <normal|stby|diag|locked>
showkeyswitch
Locator LED commands:
setlocator [on|off]
showlocator
Status and Fault commands:
clearasrdb
clearfault <UUID>
disablecomponent [asr-key]
enablecomponent [asr-key]
removefru [-y] <FRU>
setfru -c [data]
showcomponent [asr-key]
showenvironment
showfaults [-v]
showfru [FRU]
showlogs [-b lines|-e lines|-v] [-g lines] [-p logtype[r|p]]
shownetwork [-v]
showplatform [-v]
showpower [-v]
ALOM Configuration commands:
setdate <[mmdd]HHMM | mmddHHMM[cc]yy][.SS]>
setsc [param] [value]
setupsc
showdate
showhost [version]
showsc [-v] [param]
ALOM Administrative commands:
flashupdate <-s IPaddr -f pathname> [-v] [-y] [-c]
help [command]
logout
password
resetsc [-y]
restartssh [-y]
setdefaults [-y]
ssh-keygen [-l|-r] <-t {rsa|dsa}>
showusers [-g lines]
useradd <username>
userclimode <username> <default|alom>
userdel [-y] <username>
userpassword <username>
userperm <username> [c][u][a][r][o][s]
usershow [username]
sc>

ALOM: Setting the Date

The LOM serial port provides access to set the date, which is likely not correct:

sc> showdate
Fri Apr 23 14:24:11 2021

sc> setdate 042318242021
sc> showdate
Fri Apr 23 18:24:44 2021

sc>
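The setdate argument packs the timestamp as mmddHHMM[cc]yy[.SS], per the help screen above. A minimal sketch (plain Python, not part of ALOM) of building that argument from an ordinary timestamp:

```python
from datetime import datetime

def alom_setdate_arg(ts: datetime) -> str:
    """Format a timestamp as ALOM's full setdate argument: mmddHHMMccyy."""
    # month, day, hour, minute, then the 4-digit year (century + year)
    return ts.strftime("%m%d%H%M%Y")

# The transcript above set 2021-04-23 18:24:
print(alom_setdate_arg(datetime(2021, 4, 23, 18, 24)))  # 042318242021
```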

ALOM: Show System Controller & Network

The LOM can be configured to use DHCP, so that access can be made over an Ethernet cable:

sc> showsc
SP firmware version: 3.0.12.4.y

Parameter Value
--------- -----
if_network true
if_connection ssh
if_emailalerts true
netsc_dhcp true
netsc_ipaddr 192.168.1.66
netsc_ipnetmask 255.255.255.0
netsc_ipgateway 192.168.1.254
mgt_mailhost 0.0.0.0
mgt_mailalert
sc_customerinfo
sc_escapechars #.
sc_powerondelay false
sc_powerstatememory false
sc_clipasswdecho true
sc_cliprompt sc
sc_clitimeout 0
sc_clieventlevel 2
sc_backupuserdata true
diag_trigger power-on-reset error-reset
diag_verbosity normal
diag_level max
diag_mode normal
sys_autorunonerror false
sys_autorestart reset
sys_boottimeout 0
sys_bootrestart none
sys_maxbootfail 3
sys_bootfailrecovery poweroff
ser_baudrate 9600
ser_commit (Cannot show property)
netsc_enetaddr 00:14:4F:AC:0B:E7
netsc_commit (Cannot show property)
sys_enetaddr 00:14:4f:ac:0b:de

sc>
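The showsc output is a simple list of parameter/value pairs, so it is easy to collect for scripted audits. A sketch, assuming whitespace-delimited pairs (the sample text is abbreviated from the transcript above; real values such as mgt_mailalert may be empty):

```python
sample = """\
if_network true
if_connection ssh
netsc_dhcp true
netsc_ipaddr 192.168.1.66
ser_baudrate 9600
"""

params = {}
for line in sample.splitlines():
    # Split on the first run of whitespace; a missing value becomes "".
    parts = line.split(None, 1)
    if parts:
        params[parts[0]] = parts[1] if len(parts) > 1 else ""

print(params["netsc_ipaddr"])  # 192.168.1.66
```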

ALOM: Access via TCP/IP

The LOM can be accessed over TCP/IP, via SSH on an Ethernet cable, using a terminal package such as PuTTY:

login as: admin
Using keyboard-interactive authentication.
Password:
Waiting for daemons to initialize...

Daemons ready

Oracle(R) Integrated Lights Out Manager
Version 3.0.12.4.y r77080
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

sc>

ALOM: The Keyswitch

If the keyswitch is in the NORMAL position, the system can be booted; otherwise, use "setkeyswitch normal":

sc> showkeyswitch
Keyswitch is in the NORMAL position.
sc>

ALOM: Chassis Power On

The Chassis can be powered on from the LOM prompt:
sc> showpower -v
--------------------------------------------------------------------------------
Power metrics information cannot be displayed when the System power is off

sc> poweron
sc> Chassis | major: Host has been powered on

sc>

ALOM: Chassis Power Consumption

The Chassis power usage can be shown from the LOM prompt:
sc> showpower -v
--------------------------------------------------------------------------------
Power Supplies:
--------------------------------------------------------------------------------
                              INPUT     OUTPUT
                              Power     Power
Supply         Status         (W)       (W)
--------------------------------------------------------------------------------
/SYS/PS0       OK             130.0     120.0
/SYS/PS1       OK             135.0     125.0
--------------------------------------------------------------------------------
Total Power                   265.0

--------------------------------------------------------------------------------
               INPUT    INPUT     INPUT     OUTPUT   OUTPUT    OUTPUT
               Volt     Current   limit     Volt     Current   limit
Supply         (V)      (A)       (A)       (V)      (A)       (A)
--------------------------------------------------------------------------------
/SYS/PS0       122.4    1.06      9.20      12.0     10.00     54.00
/SYS/PS1       122.4    1.10      9.20      12.0     10.42     54.00
sc>
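The Total Power line is just the sum of the per-supply input wattage. A quick sketch checking that arithmetic against the figures in the transcript above (values copied from the showpower -v output):

```python
supplies = {
    # name: (input_watts, output_watts), from the showpower -v output above
    "/SYS/PS0": (130.0, 120.0),
    "/SYS/PS1": (135.0, 125.0),
}

total_input = sum(inp for inp, _ in supplies.values())
efficiency = sum(out for _, out in supplies.values()) / total_input

print(f"Total Power {total_input:.1f}")  # Total Power 265.0
print(f"overall efficiency {efficiency:.0%}")
```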

ALOM: Gain Firmware/OS Console

The Chassis LOM prompt also provides console access to the firmware and the OS:

sc> console
Enter #. to return to ALOM.
/
0:0:0>
0:0:0>POST 4.33.6 2012/03/14 08:28
0:0:0>
0:0:0>Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
0:0:0>POST enabling CMP 0 threads: ffffffff.ffffffff
0:0:0>VBSC mode is: 00000000.00000001
0:0:0>VBSC level is: 00000000.00000001
0:0:0>VBSC selecting Normal mode, MAX Testing.
0:0:0>VBSC setting verbosity level 2
0:0:0>Basic Memory Tests....Done
0:0:0>Test Memory....Done
0:0:0>Setup POST Mailbox ....Done
0:0:0>Master CPU Tests Basic....Done
0:0:0>Init MMU.....
0:0:0>NCU Setup and PIU link train....Done
0:0:0>L2 Tests....Done
0:0:0>Extended CPU Tests....Done
0:0:0>Scrub Memory....Done
0:0:0>SPU CWQ Tests...Done
0:0:0>MAU Tests...Done
0:0:0>Network Interface Unit Port 0 Tests ..Done
0:0:0>Network Interface Unit Port 1 Tests ..Done
0:0:0>Functional CPU Tests....Done
0:0:0>Extended Memory Tests....Done
2021-04-23 20:53:59.521 0:0:0>INFO:
2021-04-23 20:53:59.578 0:0:0> POST Passed all devices.
2021-04-23 20:53:59.633 0:0:0>POST: Return to VBSC.
2021-04-23 20:53:59.687 0:0:0>Master set ACK for vbsc runpost command and spin...
Chassis | major: Host is running
ChassisSerialNumber BEL07492JB

SPARC Enterprise T5120, No Keyboard
Copyright (c) 1998, 2012, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.b, 16256 MB memory available, Serial #78384094.
Ethernet address 0:14:4f:ac:b:de, Host ID: 84ac0bde.

{0} ok
Note: the "ok" prompt provides access to OpenFirmware

ALOM: OpenFirmware: Show Devices

The Chassis ALOM firmware prompt allows access to all devices available to an operating system:

{0} ok show-devs
/ebus@c0
/pci-performance-counters@0
/niu@80
/pci@0
/cpu@3f
/cpu@3e
/cpu@3d
/cpu@3c
/cpu@3b
/cpu@3a
/cpu@39
/cpu@38
/cpu@37
/cpu@36
/cpu@35
/cpu@34
/cpu@33
/cpu@32
/cpu@31
/cpu@30
/cpu@2f
/cpu@2e
/cpu@2d
/cpu@2c
/cpu@2b
/cpu@2a
/cpu@29
/cpu@28
/cpu@27
/cpu@26
/cpu@25
/cpu@24
/cpu@23
/cpu@22
/cpu@21
/cpu@20
/cpu@1f
/cpu@1e
/cpu@1d
/cpu@1c
/cpu@1b
/cpu@1a
/cpu@19
/cpu@18
/cpu@17
/cpu@16
/cpu@15
/cpu@14
/cpu@13
/cpu@12
/cpu@11
/cpu@10
/cpu@f
/cpu@e
/cpu@d
/cpu@c
/cpu@b
/cpu@a
/cpu@9
/cpu@8
/cpu@7
/cpu@6
/cpu@5
/cpu@4
/cpu@3
/cpu@2
/cpu@1
/cpu@0
/virtual-devices@100
/iscsi-hba
/virtual-memory
/memory@m0,8000000
/aliases
/options
/openprom
/chosen
/packages
/ebus@c0/serial@0,ca0000
/pci@0/pci@0
/pci@0/pci@0/pci@9
/pci@0/pci@0/pci@8
/pci@0/pci@0/pci@2
/pci@0/pci@0/pci@1
/pci@0/pci@0/pci@8/pci@0
/pci@0/pci@0/pci@8/pci@0/pci@a
/pci@0/pci@0/pci@8/pci@0/pci@9
/pci@0/pci@0/pci@8/pci@0/pci@8
/pci@0/pci@0/pci@8/pci@0/pci@2
/pci@0/pci@0/pci@8/pci@0/pci@1
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0,1
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0/tape
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0/disk
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0/fp@0,0
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/tape
/pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/disk
/pci@0/pci@0/pci@2/scsi@0
/pci@0/pci@0/pci@2/scsi@0/disk
/pci@0/pci@0/pci@2/scsi@0/tape
/pci@0/pci@0/pci@1/pci@0
/pci@0/pci@0/pci@1/pci@0/pci@3
/pci@0/pci@0/pci@1/pci@0/pci@2
/pci@0/pci@0/pci@1/pci@0/pci@1
/pci@0/pci@0/pci@1/pci@0/pci@3/network@0,1
/pci@0/pci@0/pci@1/pci@0/pci@3/network@0
/pci@0/pci@0/pci@1/pci@0/pci@2/network@0,1
/pci@0/pci@0/pci@1/pci@0/pci@2/network@0
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,1
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/hub@4
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@3
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@2
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@1
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@3/disk
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@2/disk
/pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@1/disk
/virtual-devices@100/rtc@5
/virtual-devices@100/console@1
/virtual-devices@100/random-number-generator@e
/virtual-devices@100/ncp@6
/virtual-devices@100/n2cp@7
/virtual-devices@100/channel-devices@200
/virtual-devices@100/tpm@f
/virtual-devices@100/flashprom@0
/virtual-devices@100/channel-devices@200/virtual-domain-service@0
/virtual-devices@100/channel-devices@200/virtual-channel-client@1
/virtual-devices@100/channel-devices@200/virtual-channel@0
/virtual-devices@100/channel-devices@200/virtual-channel-client@2
/virtual-devices@100/channel-devices@200/virtual-channel@3
/iscsi-hba/disk
/openprom/client-services
/packages/obp-tftp
/packages/kbd-translator
/packages/SUNW,asr
/packages/dropins
/packages/terminal-emulator
/packages/disk-label
/packages/deblocker
/packages/SUNW,builtin-drivers
{0} ok

ALOM: OpenFirmware: Show Disks

The Chassis ALOM firmware prompt allows access to the devices hosting the operating system:

{0} ok show-disks
a) /pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0/disk
b) /pci@0/pci@0/pci@8/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/disk
c) /pci@0/pci@0/pci@2/scsi@0/disk
d) /pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@3/disk
e) /pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@2/disk
f) /pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@1/disk
g) /iscsi-hba/disk
q) NO SELECTION
Enter Selection, q to quit: q

{0} ok

ALOM: OpenFirmware: Device Aliases

The Chassis ALOM Firmware prompt provides access to common alias names for long device names.

{0} ok devalias
backup /pci@0/pci@0/pci@2/scsi@0/disk@0
primary /pci@0/pci@0/pci@2/scsi@0/disk@2
ttya /ebus@c0/serial@0,ca0000
nvram /virtual-devices/nvram@3
net3 /pci@0/pci@0/pci@1/pci@0/pci@3/network@0,1
net2 /pci@0/pci@0/pci@1/pci@0/pci@3/network@0
net1 /pci@0/pci@0/pci@1/pci@0/pci@2/network@0,1
net0 /pci@0/pci@0/pci@1/pci@0/pci@2/network@0
net /pci@0/pci@0/pci@1/pci@0/pci@2/network@0
cdrom /pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@2/disk@0:f
disk3 /pci@0/pci@0/pci@2/scsi@0/disk@3
disk2 /pci@0/pci@0/pci@2/scsi@0/disk@2
disk1 /pci@0/pci@0/pci@2/scsi@0/disk@1
disk0 /pci@0/pci@0/pci@2/scsi@0/disk@0
disk /pci@0/pci@0/pci@2/scsi@0/disk@0
scsi /pci@0/pci@0/pci@2/scsi@0
virtual-console /virtual-devices/console@1
name aliases

{0} ok

ALOM: OpenFirmware: Default Environment

The Chassis ALOM Firmware prompt provides basic default values, which can be changed.

{0} ok printenv
Variable Name Value Default Value
ttya-rts-dtr-off false false
ttya-ignore-cd true true
keyboard-layout US-English
reboot-command
security-mode none No default
security-password No default
security-#badlogins 0 No default
verbosity min min
pci-mem64? true true
diag-switch? false false
local-mac-address? true true
fcode-debug? false false
scsi-initiator-id 7 7
oem-logo No default
oem-logo? false false
oem-banner No default
oem-banner? false false
ansi-terminal? true true
screen-#columns 80 80
screen-#rows 34 34
ttya-mode 9600,8,n,1,- 9600,8,n,1,-
output-device virtual-console virtual-console
input-device virtual-console virtual-console
auto-boot-on-error? false false
load-base 16384 16384
auto-boot? false true
network-boot-arguments
boot-command boot boot
boot-file
boot-device /pci@0/pci@0/pci@2/scsi@ ... disk net
multipath-boot? false false
boot-device-index 0 0
use-nvramrc? true false
nvramrc devalias primary /pci@0/ ...
error-reset-recovery boot boot
{0} ok printenv boot-device
boot-device = /pci@0/pci@0/pci@2/scsi@0/disk@2,0:a /pci@0/pci@0/pci@2/scsi@0/disk@0,0:a

{0} ok
Note: The system is set to not automatically boot, but there is a default boot disk defined.
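OpenBoot treats boot-device as an ordered list: each whitespace-separated entry is tried in turn until one boots. A trivial sketch of that fallback order, using the values shown above:

```python
# The boot-device value from the printenv output above.
boot_device = ("/pci@0/pci@0/pci@2/scsi@0/disk@2,0:a "
               "/pci@0/pci@0/pci@2/scsi@0/disk@0,0:a")

# Each whitespace-separated entry is tried in order until one succeeds.
candidates = boot_device.split()
print(candidates[0])  # tried first
print(candidates[1])  # fallback
```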

ALOM: OpenFirmware: Boot DVD OS Installer

The Chassis ALOM OpenFirmware prompt can initiate an OS boot from DVD to install an OS.

{0} ok boot cdrom
Boot device: /pci@0/pci@0/pci@1/pci@0/pci@1/pci@0/usb@0,2/storage@2/disk@0:f File and args:
|
SunOS Release 5.10 Version Generic_147440-01 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Configuring devices.
...

ALOM: OpenFirmware: Escape from OpenFirmware to ALOM

The Chassis OpenFirmware can be exited with the escape sequence "#." to return to the ALOM.

{0} ok #.
Serial console stopped.

sc>

ALOM: Chassis Power Off

The Chassis can be powered off from the LOM prompt (this may take a long time):

sc> poweroff
Are you sure you want to power off the system [y/n]? y
sc>

ALOM: Chassis Power Off Quickly

The Chassis can be powered off from the LOM prompt, forcibly and quickly:

sc> poweroff -f
Are you sure you want to power off the system [y/n]? y
Chassis | critical: Host has been powered off

sc> showpower -v
--------------------------------------------------------------------------------
Power metrics information cannot be displayed when the System power is off

sc>

ALOM: Logging Out of LOM

Once the user is done with the Serial Port access to the LOM, they can log out to close the session.

sc> logout

SUNSP00144FAC0BE7 login:



Conclusions

OpenSPARC systems are very viable platforms, offering tremendous Lights Out Management capabilities. The most recent SPARC systems have been among the fastest platforms in the world for nearly four years, so there is tremendous growth potential in migrating up from these older open-source SPARC platforms. Being based upon open hardware and Open Firmware, new hardware vendors can also create their own newer-generation platforms to meet their own requirements, even if Lights Out Management capabilities are not a requirement.

Thursday, April 11, 2013

Solaris: Massive Internet Scalability


[SPARC processor, courtesy Oracle SPARC T5/M5 Kick-Off]
Abstract:
Computing systems started with single processors. As computing requirements increased, multiple processors were lashed together using SMP (Symmetric Multi-Processing) to add more computing power to a single system, breaking work into processes and threads, but the transition to multi-threaded computing was a long process. The lack of scalability for some problems produced MPP (Massively Parallel Processing) platforms, which lashed whole systems together using special software to load-balance the jobs to be processed. MPP platforms were very difficult to program for general-purpose applications, so massively multi-core and multi-threaded processors started to appear. Oracle recently released the SPARC T5 processor and systems, producing an SMP platform scalable to massive counts of sockets, cores, and threads in a single chassis, leveraging existing multi-threaded software and reducing the need for MPP in real-world applications, while placing tremendous pressure upon the operating-system layer.

[SPARC logo, courtesy SPARC.org]
SPARC Growth Rate:
The SPARC processors set an aggressive growth rate, with a movement to massively threaded software:
SPARC  Cores  GHz  Threads  Sockets  Total-Cores  Total-Threads
T1     8      1.4  32       1        8            32
T2     8      1.6  64       1        8            64
T2+    8      1.6  64       4        32           256
T3     16     1.6  128      4        64           512
T4     8      3.0  64       4        32           256
T5     16     3.6  128      8        128          1024
M5     6      3.6  48       32       192          1536
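The totals in the growth table are simple arithmetic: threads per socket is cores times threads per core (4 threads/core on the T1, 8 on later chips), and the system totals multiply by sockets. A sketch checking the figures (the per-chip tuples are taken from the table itself):

```python
chips = {
    # name: (cores per socket, threads per core, max sockets)
    "T1":  (8, 4, 1),
    "T2":  (8, 8, 1),
    "T2+": (8, 8, 4),
    "T3":  (16, 8, 4),
    "T4":  (8, 8, 4),
    "T5":  (16, 8, 8),
    "M5":  (6, 8, 32),
}

for name, (cores, tpc, sockets) in chips.items():
    threads = cores * tpc            # threads per socket
    total_cores = cores * sockets
    total_threads = threads * sockets
    print(f"{name:3} {threads:4} threads/socket "
          f"{total_cores:4} cores {total_threads:5} threads")
```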

The movement to massively threaded processors meant that applications needed to be re-written to take advantage of the new higher throughput. Certain applications were already well suited for this workload (i.e. web servers) - but many were not.

[DTrace infrastructure and providers]
Application Challenges:
The movement to massively threaded software, to take advantage of the higher overall throughput offered by the new processor technology, was difficult for application programmers. Technologies such as DTrace were added to advanced operating systems such as Solaris to assist developers and systems administrators in pin-pointing their code hot-spots for later re-write.

When the SPARC T4 was released, its S3 core offered a feature called the "Critical Thread API" to assist application programmers who could not resolve some single-threaded bottlenecks: the core could automatically switch into a single-threaded mode (sacrificing throughput) to address hot-spots. The S3 core in the T4 (and T5) was also clocked at a higher rate, providing an overall boost to single-threaded workloads over previous processors, even at the same number of cores and threads. Out-of-order instruction handling in the S3 likewise increased the speed of single-threaded applications.

The SPARC T4 and T5 processors finally offered application developers a no-compromise processor. For heavy single-threaded workloads, Oracle released the SPARC M5 processor, driving increasing scales of single-threaded work without having to rely upon systems produced by the long-time SPARC partner and competitor, Fujitsu.


[Solaris logo, courtesy Sun Microsystems]
Operating System Challenges:

A single system scaling to 192 cores and 1536 threads offers incredible challenges to Operating System designers. Steve Sistare from Oracle discusses some of these challenges in a Part 1 article and solutions in a Part 2 article. Some of the challenges overcome by Solaris included:
CPU scaling issues include:
•increased lock contention at higher thread counts
•O(NCPU) and worse algorithms

Memory scaling issues include:
•working sets that exceed VA translation caches
•unmapping translations in all CPUs that access a memory page
•O(memory) algorithms
•memory hotspots

Device scaling issues include:
•O(Ndevice) and worse algorithms
•system bandwidth limitations
•lock contention in interrupt threads and service threads
Clearly, the engineering team at Oracle was up to the tasks created for it by the Oracle SPARC hardware team. Innovation from Sun Microsystems continues under Oracle. It will take years for other operating-system vendors to "catch up".
Network Management Applications:

In the realm of Network Management, many polling applications used threads to scale, since network communication to edge devices was latency-bound, making the SPARC "T" processors an excellent choice in carrier environments.
The data returned by the massively multi-threaded pollers needed to be placed in a database in a consistent fashion. This posed a problem during the device "discovery" process, which is normally single-threaded and experienced massive slow-downs on the "T" processors until the T4 was released. With processors like the SPARC T4 and SPARC T5, Network Management applications gain the proverbial "best of both worlds": massive hardware-thread scalability for the pollers, and excellent single-threaded throughput during discovery bottlenecks via the "Critical Thread API."
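The latency-bound poller pattern described above is easy to sketch with a thread pool. Everything here is illustrative: the device names and the sleep standing in for network round-trip latency are assumptions, not any real poller's API.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def poll(device):
    # Stand-in for an SNMP GET or similar; real pollers block on network
    # latency, which is why many hardware threads help throughput.
    time.sleep(0.05)
    return device, "up"

devices = [f"edge-{n}" for n in range(40)]

start = time.time()
with ThreadPoolExecutor(max_workers=40) as pool:
    results = dict(pool.map(poll, devices))
elapsed = time.time() - start

# 40 latency-bound polls overlap, finishing in roughly one poll's time
print(f"polled {len(results)} devices in {elapsed:.2f}s")
```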

The latest SPARC platforms are optimal platforms for massive Network Management applications. There is no other platform on the planet which compares to SPARC for managing "The Internet".

Sunday, December 5, 2010

CoolThreads UltraSPARC and SPARC Processors


[UltraSPARC T3 Micrograph]


Abstract:

Processor development takes an immense quantity of time to architect a high-performance solution, and an uncanny vision of the future to project market demand and acceptance. In 2005, Sun embarked on a bold path toward many cores and many threads per core. Since the purchase of Sun by Oracle, Sun's internal SPARC road map has been clarified.


[UltraSPARC T1 Micrograph]
Generation 1: UltraSPARC T1
A new family of SPARC processors was announced by Sun on 2005 November 14.
  • Single die
  • Single socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4 threads/core
  • 1 shared floating point core
  • 1.0 GHz - 1.4 GHz clock speed
  • 279 million transistors
  • 378 mm2
  • 90 nm CMOS (TI)
  • 1 JBUS port
  • 3 Megabyte Level 2 Cache
  • 1 Integer ALU per Core
  • ??? Memory Controllers
  • 6 Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: ???
The platform was designed as a front-end server for web applications. With a massive number of cores, it was designed to provide web-tier performance similar to existing quad-socket systems while leveraging a single socket.

To understand the ground-breaking advancement in this technology: most processors at the time were single-core, with an occasional dual-core processor (cores glued together through a more expensive process referred to as a multi-chip module, driving higher software licensing costs for those platforms).


Generation 2: UltraSPARC T2
The next generation of the CoolThreads processor was announced by Sun in 2007 August.
  • Single die
  • Single Socket
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x Dual Channel FBDIMM DDR2 Controllers
  • 8 Stage Integer Pipeline per Core
  • 2x 10 GigabitEthernet on-CPU ports
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor was designed for more compute-intensive requirements and incredibly efficient network capacity. The platform made an excellent front-end server for applications as well as middleware, with the ability to do 10 Gigabit wire-speed encryption with virtually no CPU overhead.

Competitors started to build single-die dual-core CPUs, with quad-core processors made by gluing dual-core dies into a multi-chip module.


[UltraSPARC T2 Micrograph]
Generation 3: UltraSPARC T2+
Sun quickly released the first CoolThreads SMP capable UltraSPARC T2+ in 2008 April.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 4, 6, 8 integer cores
  • 4, 6, 8 crypto cores
  • 4, 6, 8 floating point units
  • 8 threads/core
  • 1.2 GHz - 1.6 GHz clock speed
  • 503 million transistors
  • 342 mm2
  • 65 nm CMOS (TI)
  • 1 PCI Express port (1.0 x8)
  • 4 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 2x? Dual Channel FBDIMM DDR2 Controllers
  • 8? Stage Integer Pipeline per Core
  • No embedded Ethernet into CPU
  • Crypto Algorithms: DES, Triple DES, AES, RC4, SHA1, SHA256, MD5, RSA-2048, ECC, CRC32
This processor allowed the T processor series to move from the Tier 0 web engines and middleware to the application tier. Architects started to understand the benefits of this platform entering the database tier. This was the first CoolThreads processor to scale past 1 and up to 4 sockets.

By this time, the competition really started to understand that Sun had properly predicted the future of computing. The drive toward single-die quad-core chips had started, with hex-core multi-chip modules being predicted.


Generation 4: SPARC T3
The market became nervous with Oracle purchasing Sun. The first Oracle-branded CoolThreads SMP-capable SPARC T3 was launched in 2010 September.
  • Single die
  • 1-4 Sockets
  • 64 bits
  • 16 integer cores
  • 16 crypto cores
  • 16 floating point units
  • 8 threads/core
  • 1.67 GHz clock speed
  • ??? million transistors
  • 377 mm2
  • 40 nm
  • 2x PCI Express port (2.0 x8)
  • 6 Megabyte Level 2 Cache
  • 2 Integer ALU per Core
  • 4x DDR3 SDRAM Controllers
  • 8? Stage Integer Pipeline per Core
  • 2x 10 GigabitEthernet on-CPU ports
  • Crypto Algorithms: DES, 3DES, AES, RC4, SHA1, SHA256/384/512, Kasumi, Galois Field, MD5, RSA to 2048 key, ECC, CRC32
This processor was more than what the market was anticipating from Oracle. It took all the features of the T2 and T2+ and combined them into the new T3, with an increase in overall features. No longer did the market need to choose between multiple sockets or embedded 10 GigE interfaces: this chip has it all, plus double the cores.

Immediately before this release, the competition was releasing hex-core and octal-core CPUs by gluing dies together into multi-chip modules. The T3 was a substantial upgrade over the competition, offering double the cores on a single die.


Generation 5: SPARC T4
Oracle indicated in December 2010 that it had thousands of these processors in the lab, and predicted the processor would be released at the end of 2011.

After the announcement, a separate press release indicated the processor would have a renovated core for higher single-threaded performance, but the socket would offer half the cores.

Most vendors are projected to have 8-core processors available (through multi-chip modules) by the time the T4 is released, but only the T4 should be on a single piece of silicon during this period.


[2010-12 SPARC Solaris Roadmap]
Generation 6: SPARC T5

Some details on the T5 were announced with the T4. The processor will use the renovated T4 core, with a 28 nm process, and will return to 16 cores per socket. It may be the first CoolThreads T processor able to scale from 1 to 8 sockets. It is projected to appear in early 2013.

Some vendors are projecting 12-core processors on the market using multi-chip-module technology, but when the T5 is released, it should still be the market leader at 16 cores per socket.

Network Management Connection

Consolidating most network management stations in a globalized environment works very well with the CoolThreads T-series processors. Consolidating multiple slower SPARC platforms onto single- and dual-socket T-series systems has worked well over the past half decade.

While most network management polling engines will scale nearly linearly with these highly threaded processors, there are some operations which are bound to single threads. These types of processes include event correlation, startup time, and synchronization after a discovery in a large managed topology.

The market will welcome the enhanced T4 processor core and the T5 processor, when it is released.

Tuesday, October 5, 2010

US Department of Energy: No POWER Upgrade From IBM



Abstract:

Some say no one was ever fired for buying IBM, but no government or business ever got trashed for buying SPARC. The United States Department of Energy bought an IBM POWER system with no upgrade path and no long-term spare parts.


[IBM Proprietary POWER Multi-Chip Module]

Background:

The U.S. Department of Energy purchased a petaflops-class hybrid blade supercomputer, the IBM "Roadrunner", which performed in the multi-petaflop range for nuclear simulations at the Los Alamos National Laboratory. It was based upon the IBM blade platform, with blades built on AMD Opteron processors and a hybrid IBM POWER / IBM Cell architecture. A short article was published in October 2009 in The Register.

Today's IBM:

A month later, the supercomputer was not mentioned at the SC09 supercomputing trade show in Oregon, because IBM had killed it. Apparently, it was killed off 18 months earlier. What a waste of American taxpayer funding!

Tomorrow's IBM:

In March 2010, it was published that IBM gave its customers (i.e. the U.S. Government) three months to buy spares, because future hybrid IBM POWER / Cell products had been killed. Just a few months ago, IBM demonstrated its untrustworthiness to its existing thin-client customers and partners by abandoning their thin-client partnership, and using the existing partner to help fund IBM's movement to a different future thin-client partner!



Obama Dollars:

It looks like some remaining stimulus dollars from Democratic President Obama will be used to buy a new supercomputer from Cray and a cluster from SGI. The mistake of buying IBM was so huge that it took a massive spending effort from the Federal Government to recover from losing money on proprietary POWER.

[Fujitsu SPARC64 VII Processor]

[Oracle SPARC T3 Processor]
Lessons Learned:
If only the U.S. Government had not invested in IBM's proprietary POWER, but had instead chosen an open CPU architecture like SPARC, which offers two hardware vendors: Oracle/Sun and Fujitsu.

[SUN UltraSPARC T2; Used in Themis Blade for IBM Blade Chassis]

Long Term Investment:

IBM POWER is not an open processor advocated by other systems vendors. Motorola abandoned the systems market for POWER from a processor-production standpoint. Even Apple abandoned POWER in the desktop and server arenas. One might suppose that when IBM kills a depended-upon product, one could always buy video game consoles and place them in one's lights-out data center, but that is not what the Department of Energy opted for.

Oracle/Sun has a reputation for providing support for systems a decade old, and if necessary, OpenSPARC systems, and even blades for other vendors' chassis, can be (and are) built by other vendors (e.g. Themis built an OpenSPARC blade for an IBM blade chassis). SPARC processors have been designed and produced by different processor and system vendors for over a decade and a half. SPARC is a well-proven long-term investment in the market.

Network Management Connection:

If you need to build a Network Operations Center, build it upon the infrastructure the global telecommunications providers have trusted for over a decade: SPARC and Solaris. One will not find serious network management applications on IBM POWER, so don't bother wasting time looking. There are reasons for that.
