Tuesday, October 13, 2009

Sun Takes #1 Spot in TPC-C Benchmarks!



Sun Takes #1 Spot in TPC-C Benchmarks!

Abstract
Sun has long participated in industry benchmarks, though some benchmarks have been left idle by Sun for many years. Sun has now released a new TPC-C result, using a cluster of T2+ systems, earlier than advertised.
An interesting blog on the topic
Interesting Observations
  • Order of magnitude fewer racks to produce a faster solution
  • Order of magnitude fewer watts per 1000 tpmC
  • Sun's 36 sockets to IBM's 32 sockets
  • 10 GigE & FC instead of InfiniBand
  • Intel-based OpenSolaris storage servers, instead of AMD-based "Thumper" servers
Some thoughts:
  • The order of magnitude improvements in space and power consumption was obviously more compelling to someone than shooting for an order of magnitude improvement in performance
  • The performance could have been faster by adding more hosts to the RAC configuration, but the order of magnitude comparisons would be lost
  • The cost savings for the superior-performing SPARC cluster are dramatic: fewer hardware components to maintain, lower HVAC costs, lower UPS costs, lower generator costs, lower cabling costs, and lower data center square footage costs
  • The pricing per SPARC core is still too high for the T2 and T2+ processors, in comparison to the performance of competing sockets
  • The negative hammering by a few internet posters, claiming the Sun OpenSPARC CoolThreads processors are not capable of running large databases, is finally put to rest
It would have been nice to see:
  • a more scalable SMP solution, although this clustered solution will scale out better in a horse race with IBM
  • a full Sun QDR InfiniBand configuration
  • a full end-to-end 10GigE configuration
  • the T2 with embedded 10GigE clustered, instead of the T2+ with a 10GigE card

Thursday, October 8, 2009

IBM fends off T3 as Apple fends off Psystar; Future of Computing

IBM fends off T3 as Apple fends off Psystar

The Woes of IBM

The Department of Justice is investigating IBM on behalf of the CCIA. IBM has also refused to license its software on competing mainframe hardware in the past. European rack server company T3 now appears to be the latest aggressor.

The Woes of Apple

Apple has been defending its right to license its MacOSX software only on Apple hardware for some time. The latest upstart, Psystar (based in the southern United States), ironically released a new line of products referred to as the "Rebel Series".

An Odd Turn of Events

What makes these cases so interesting is a more recent western U.S. court ruling against software company Autodesk, which leans in the direction that software is not licensed, but owned.

Impacts to the Industry

As people turn to Open Source software to reduce business costs, providers have been bundling their software with hardware at near-free prices in order to stay competitive.

Now, with hardware under attack for this practice, vendors will not be able to hide the cost of software creation. If the traditionally liberal U.S. Western court ruling is not overturned, the computing industry is at tremendous risk.

Basically, no one will pay the salaries of good software designers, and hardware designers will continually have their products knocked off by clone manufacturers... leaving the industry in a place where innovation may suffer, because no one will reward innovation with a salary.

What Could Be Next?

If no one would pay for software and hardware innovation on commodity hardware, where might people go, to secure their investment?

Very possibly, the next turn could be back to proprietary platforms, again. If an innovative software solution is only available on a proprietary OS which is available only on a proprietary hardware platform - there is a guaranteed return on investment... regardless of whether the software is licensed or purchased.

This is not very good news, for the industry.

On the other hand, this could lead the industry back to a period of Open Computing - where there was a choice between hardware vendors, OS vendors, and software vendors... all according to open standards and open APIs that were cooperatively created by heterogeneous industry groups like X.Org, The Open Group, POSIX, Open Firmware, etc.

It is very possible that the liberal U.S. Western Federal Court ruling could be overturned (as happens quite often), once justices or the U.S. Congress realize that they may be pushing a huge industry into non-existence, or causing the balkanization of the industry back into medieval fiefdoms.

It could also mean the end of commercial software, if the court case is not overturned. Programmers could be turned into perpetual free-lancers, in areas where labor is expensive.

In areas where labor is less expensive, programmers could be considered little more than factory workers: hire a worker by the hour to finish a project and then release them. They will seldom see more than a segment of code and never gain the experience to really architect a complete solution.

It could also mean the end of high quality generic open-source software. This is, perhaps, the most ironic result. If software has no value, since few will have the means to pay for it, innovative programmers may choose to leave their software closed-source, in order to survive.

As much as people do not like licensing terms, the alternative is somewhat stark.

Wednesday, October 7, 2009

ZFS: The Next Word

ZFS: The Next Word

Abstract

ZFS is the latest in disk and hybrid storage pool technology from Sun Microsystems. Unlike competing 32 bit file systems, ZFS is a 128-bit file system, allowing for near limitless storage boundaries. ZFS is not a stagnant architecture, but a dynamic one, where changes are happening often to the open source code base.

What's Next in ZFS?

Jeff Bonwick and Bill Moore did a presentation at the Kernel Conference Australia 2009 regarding what is happening next in ZFS. A lot of the features were driven by the Fishworks team as well as the Lustre clustering file system team.
What are the new enhancements in functionality?
  • Enhanced Performance
    Enhancements all over the system
  • Quotas on a per-user basis (see the command sketch after this list)
    ZFS has always had quotas on a per-filesystem basis; it was originally thought that each user would get a filesystem, but this does not scale well to thousands of users with many existing management tools
    Works with industry standard POSIX-based UIDs & names
    Works with Microsoft SMB SIDs & names
  • Pool Recovery
    Disk drives often "out-right lie" to the operating system when they re-order the writing of blocks.
    Disk drives often "out-right lie" to operating systems when they acknowledge a "write barrier", indicating that the write was completed when it was not.
    If there is a power outage in the middle of a write, even after a "write barrier" was issued, the drive will often silently drop the "write commit", making the OS think that the writes were safe when they were not - resulting in pool corruption.
    Simplification in this area - during a scrub, go back to an earlier uber-block and correct the pool... and never over-write a recently changed transaction group in the case of a new transaction.
  • Triple Parity RAID-Z
    Double parity RAID-Z has been around from the beginning (i.e. lose 2 out of 7 drives)
    Triple parity RAID-Z allows for bigger, faster drives with higher bit error rates
    Quadruple Parity is on the way (i.e. lose 3 out of 10 drives)
  • De-duplication
    This is a very nice capacity enhancement with application, desktop, and server virtualization
  • Encryption
  • Shadow Migration (aka Brain Slug?)
    Pull out that old file server and replace it with a ZFS [NFS] server without any downtime.
  • BP Rewrite & Device Removal
  • Dynamic LUN Expansion
    Before, if a larger drive was inserted, the default behavior was to resize the LUN
    During a hot-plug, tell the system admin that the LUN has been resized
    Property added to make LUN expansion automatic or manual
  • Snapshot Hold property
    A hold can be placed on a snapshot with an arbitrary string tag; if a destroy is issued while the hold is in place, the destroy is deferred until the hold is released (see the command sketch after this list)
    Makes ZFS look sort of like a relational database with transactions
  • Multi-Home Protection
    If a pool is shared between two hosts, works great as long as clustering software is flawless.
    The Lustre team prototyped a heart-beat protocol on the disk to allow for multi-home-protection inherent in ZFS
  • Offline and Remove a separate ZFS Log Device
  • Extend Underlying SCSI Framework for Additional SCSI Commands
    SCSI "Trim" command, to allow ZFS to direct less wear leveling on unused flash areas, to increase life and performance of flash
  • De-Duplicate in a ZFS Send-Receive Stream
    This is in the works, to make backups & restores more efficient
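Several of the features above surface directly in the ZFS command line on recent OpenSolaris builds. The following is a minimal sketch of how they might be exercised; the pool, dataset, user, tag, and device names are hypothetical, and de-duplication and encryption are omitted since they were not yet integrated at the time of writing.

zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   (triple parity RAID-Z group)
zfs set userquota@alice=10G tank/home                                (per-user quota by POSIX name)
zfs userspace tank/home                                              (report per-user consumption)
zpool set autoexpand=on tank                                         (make LUN expansion automatic)
zfs hold mytag tank/home@backup                                      (hold a snapshot with an arbitrary tag)
zfs destroy -d tank/home@backup                                      (destroy is deferred while the hold exists)
zfs release mytag tank/home@backup                                   (release the hold; the deferred destroy completes)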
Performance Enhancements include:
  • Hybrid Storage Pools
    Makes everything go (a lot) faster with a little cache (lower cost) and slower drives (lower cost); see the pool sketch after this list.
    - Expensive (fast, reliable) Mirrored SSD Enterprise Write Cache for ZFS Intent Logging
    - Inexpensive consumer grade SSD cache for block level Read Cache in a ZFS Level 2 ARC
    - Inexpensive consumer grade drives with massive disk storage potential with a 5x lower energy consumption
  • New Block Allocator
    This was an extremely simple 80-line code segment that works well on empty pools; it was finally re-engineered for performance when the pool gets full. ZFS will now use both algorithms.
  • Raw Scrub
    Increase performance by running through the pool and metadata to ensure checksums are validated without uncompressing data in the block.
  • Parallel Device Open
  • Zero-Copy I/O
    The folks in the Lustre cluster storage group requested and implemented this feature.
  • Scrub Prefetch
    A scrub will now prefetch blocks to increase utilization of the disk and decrease scrub time
  • Native iSCSI
    This is part of the COMSTAR enhancements. Yes, this is there today, under OpenSolaris, and offers tremendous performance improvements and simplified management
  • Sync Mode
    NFS benchmarking in Solaris is shown to be slower than Linux, because Linux does not guarantee that a write to NFS actually makes it to disk (which violates the NFS protocol specification.) This feature allows Solaris to use a "Linux" mode, where writes are not guaranteed, to increase performance at the expense of data integrity.
  • Just-In-Time Decompression
    Prefetch hides latency of I/O, but burns CPU. This allows prefetch to get the data without decompressing the data, until needed, to save CPU time, and also conserve kernel memory.
  • Disk drives with higher capacity and less reliability
    Formatting options to reduce error-recovery on a sector-by-sector basis
    30-40% improved capacity & performance
    Increased ZFS error recovery counts
  • Mind-the-Gap Reading & Writing Consolidation
    Consolidate read gaps so a single aggregate read can be used, reading the data between adjacent sectors and throwing away the intermediate data, since fewer I/Os allow for streaming data from drives more efficiently
    Consolidate write gaps so a single aggregate write can be used, even if adjacent regions have a blank sector gap between them, streaming data to drives more efficiently
  • ZFS Send and Receive
    Performance has been improved using the same Scrub Prefetch code
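As a rough illustration of the Hybrid Storage Pool item above: write-optimized SSDs can be attached to a pool as a mirrored separate intent log, and read-optimized SSDs as Level 2 ARC cache devices. A minimal sketch, with hypothetical device names:

zpool add tank log mirror c4t0d0 c4t1d0   (mirrored enterprise SSD write cache for the ZFS Intent Log)
zpool add tank cache c5t0d0 c5t1d0        (consumer grade SSD read cache, the Level 2 ARC)
zpool iostat -v tank                      (confirm the log and cache vdevs and watch their activity)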
Conclusion

The ZFS implementation in the Solaris 10 10/09 release already includes some of the ZFS features detailed in the most recent conferences.

Wednesday, September 30, 2009

Sun / Oracle License Change - T2+ Discount!

Sun / Oracle License Change - T2+ Discount!

Abstract

Oracle licenses its database by several factors, typically the Standard License (by socket) and the Enterprise License (by core scaling factor). Occasionally, Oracle will change the core scaling factor, resulting in a discount or a liability for the consumer.

The Platform

The OpenSPARC platform is an open sourced SPARC implementation where the specification is also open. There have been several series of chips based upon this implementation: T1, T2, and T2+. The T1 & T2 are both single socket implementations, while the T2+ is a multi-socket implementation.

The Discount

While reviewing the Oracle licensing PDF, the following information has come to light concerning the OpenSPARC processor line, in particular the Sun UltraSPARC T2+ processor.

Factor  Vendor/Processor
0.25    SUN T1 1.0GHz and 1.2GHz (T1000, T2000)
0.50    SUN T1 1.4GHz (T2000)
0.50    Intel Xeon 74xx or 54xx multi-core series (or earlier); Intel Laptop
0.50    SUN UltraSPARC T2+ Multicore [new]
0.75    SUN UltraSPARC T2 Multicore [new]

0.75    SUN UltraSPARC IV, IV+, or earlier
0.75    SUN SPARC64 VI, VII
0.75    SUN UltraSPARC T2, T2+ Multicore [old]
0.75    IBM POWER5
1.00    IBM POWER6, SystemZ
1.00    All Single Core Chips


Note: in the original color-coded table, red indicated old entries and green indicated new ones (marked [old] and [new] above). Oracle has broken out the T2+ processor to a core factor of 0.50, instead of the 0.75 that previously applied to the combined T2/T2+ line.

To see a copy of some of the old license factors, please refer to my old blog on the Oracle IBM license change entry.

Impacts to Network Management infrastructure

To calculate your discount, see the table below; a worked example follows the table. The discount is basically 33% for the Enterprise Edition of Oracle on the T2+ processor.

Chips  Cores  Old Licenses  New Licenses
  1      8         6             4
  2     16        12             8
  3     24        18            12
  4     32        24            16
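As a worked example of the table above (not taken from the Oracle document): a four-socket T2+ server has 4 x 8 = 32 cores; under the old 0.75 core factor that is 32 x 0.75 = 24 Enterprise licenses, while under the new 0.50 core factor it is 32 x 0.50 = 16 licenses - a 33% reduction.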


If you have been waiting for a good platform to move your polling-intensive workloads to, this may be the right time, since the T2+ has had its licensing liability reduced.

Wednesday, September 16, 2009

ZFS: Adding Mirrors

ZFS: Adding Mirrors

Abstract

Several articles have been written about ZFS including: [Managing Storage for Network Management], [More Work With ZFS], [Apache: Hack, Rollback, Recover, and Secure], and [What's Better, USB or SCSI]. This is a short article on adding a mirrored drive to an existing ZFS volume.

Background

A number of weeks ago, a 1.5 Terabyte external drive was added to a Sun Solaris 10 storage server. Tests were conducted to observe the differences between SCSI and USB drives, as well as UFS and ZFS filesystems. The pool created on that original disk will now have a mirror added to it.


Inserting a new USB drive into the system is the first step. If the USB drive is not recognized upon insertion, a discovery can be forced using the classic "disks" command, as the "root" user.
Ultra60-root$ disks
A removable (i.e. USB) drive can be labeled using the "expert" mode of the "format" command.
Ultra60-root$ format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 [SEAGATE-SX1181677LCV-C00B cyl 24179 alt 2 hd 24 sec 611]
/pci@1f,4000/scsi@3/sd@0,0
1. c0t1d0 [SEAGATE-SX1181677LCV-C00C cyl 24179 alt 2 hd 24 sec 611]
/pci@1f,4000/scsi@3/sd@1,0
2. c2t0d0 [Seagate-FreeAgent XTreme-4115-1.36TB]
/pci@1f,2000/usb@1,2/storage@4/disk@0,0
3. c3t0d0 [Seagate-FreeAgent XTreme-4115-1.36TB]
/pci@1f,2000/usb@1,2/storage@3/disk@0,0
This is what the pool appears to be before adding a mirrored disk
Ultra60-root$ zpool status
pool: zpool2
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zpool2 ONLINE 0 0 0
/dev/rdsk/c2t0d0 ONLINE 0 0 0

errors: No known data errors
Process

An individual slice can be added as a mirror to an existing disk through "zpool attach"
Ultra60-root$ zpool attach zpool2 /dev/rdsk/c2t0d0 /dev/dsk/c3t0d0s0
Verification

The result of adding a disk slice to create a mirror can be checked with "zpool status"
Ultra60-root$ zpool status
pool: zpool2
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 1h4m, 6.81% done, 14h35m to go
config:
NAME STATE READ WRITE CKSUM
zpool2 ONLINE 0 0 0
mirror ONLINE 0 0 0
/dev/rdsk/c2t0d0 ONLINE 0 0 0
c3t0d0s0 ONLINE 0 0 0

errors: No known data errors
The CPU utilization during the resilver can be observed through "sar"
Ultra60-root$ sar

SunOS Ultra60 5.10 Generic_141414-09 sun4u 09/16/2009

00:00:00 %usr %sys %wio %idle
00:15:01 0 40 0 60
00:30:00 0 39 0 60
00:45:00 0 39 0 61
01:00:00 0 39 0 61
01:15:00 0 39 0 61
01:30:01 0 41 0 59
...
10:45:00 0 43 0 57
11:00:00 0 40 0 59
11:15:01 0 40 0 60
11:30:00 0 40 0 59
11:45:00 0 39 0 61
12:00:00 0 43 0 56
12:15:00 0 47 0 53
12:30:01 0 44 0 56

Average 0 39 0 60
If you are curious about the performance of the system during the resilvering process over the USB ports, there is the "zpool iostat" command.

Ultra60-root$ zpool iostat 2 10
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
zpool2 568G 824G 12 0 1.30M 788
zpool2 568G 824G 105 0 6.92M 0
zpool2 568G 824G 156 0 9.81M 7.48K
zpool2 568G 824G 157 1 10.1M 5.74K
zpool2 568G 824G 117 6 10.3M 11.5K
zpool2 568G 824G 154 5 10.1M 7.49K
zpool2 568G 824G 222 31 8.44M 36.7K
zpool2 568G 824G 120 13 8.45M 10.2K
zpool2 568G 824G 113 4 9.75M 8.99K
zpool2 568G 824G 120 5 9.48M 11.0K
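
Once the resilver completes, the health of the mirror can be re-checked and a periodic scrub scheduled. A minimal follow-up sketch, using the same pool name as above:

Ultra60-root$ zpool status -x zpool2
Ultra60-root$ zpool scrub zpool2
Ultra60-root$ zpool status zpool2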

Conclusion

The above session demonstrates how a whole external USB device was used to create a ZFS pool and an individual slice from another USB device was used to mirror an existing pool.

Now, if I could just get this Seagate FreeAgent Xtreme 1.5TB disk to be recognized by some system using FireWire (no, it cannot be used reliably on an old Mac G4, a dual G5, a dual-core Intel Mac, or a dual-processor SPARC Solaris platform) - I would be much happier than using USB.

Monday, September 14, 2009

Solaris Containers vs VMWare and Linux

Solaris Containers vs VMWare and Linux

I saw an interesting set of benchmarks today - two similarly configured boxes with outstanding performance differences.

SAP Hardware: Advantage Linux

Two SAP benchmarks were released - one under Solaris while the other was under Linux.

2009034: Sun Fire x4270, Solaris 10, Solaris Container as Virtualization, 8vCPUs (half the CPUs available in the box), Oracle 10g, EHP4: 2800 SD-User.

2009029: Fujitsu Primergy RX 3000, SuSE Linux Enterprise Server 10, VMWare ESX Server 4.0, 8vCPUs (half the CPUs available in the box), MaxDB 7.8, EHP4: 2056 SD-User.

SAP Benchmark: Results

What were the results?

Vendor       Server OS  Partitioning  RDBMS       Memory
Oracle/SUN   Solaris    Zones         Oracle 10g  48 Gigabytes
Novell/SuSE  Linux      VMWare        MaxDB 7.8   96 Gigabytes

Benchmark      Solaris      Linux        Result
Users          2,800        2,056        Solaris 36% more users
Response       0.97s        0.98s        Solaris 1% greater responsiveness
Line Items     306,330/hr   224,670/hr   Solaris 36% greater throughput
Dialog Steps   919,000/hr   674,000/hr   Solaris 36% greater throughput
SAPS           15,320       11,230       Solaris 36% greater performance
Avg DB Dialog  0.008 sec    0.008 sec    tie!
Avg DB Update  0.007 sec    0.012 sec    Solaris 71% faster updates

SAP System Advantage: Solaris

VMWare has offered highly functional virtualization under Intel & AMD platforms for some time, but there are alternatives.
  • Solaris has yielded a significantly higher-performance solution on multiple platforms (Intel, AMD, and SPARC) for years
  • Solaris server required half the RAM as the Linux server, to achieve higher performance
  • A single OS solution (Solaris 10) offers greater security vs a multiple OS solution (VMWare Hypervisor in conjunction with SuSE Linux)
  • When partitioning servers, database license liability (under Oracle) can be reduced with Solaris Containers, while it cannot be reduced under VMWare (a configuration sketch follows this list)
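
For reference, a Solaris 10 Container with a hard CPU and memory cap, along the lines of the partitioning described above, might be defined roughly as follows; the zone name, path, and resource values are illustrative assumptions rather than the published benchmark configuration:

zonecfg -z sapzone
zonecfg:sapzone> create
zonecfg:sapzone> set zonepath=/zones/sapzone
zonecfg:sapzone> add dedicated-cpu
zonecfg:sapzone:dedicated-cpu> set ncpus=8
zonecfg:sapzone:dedicated-cpu> end
zonecfg:sapzone> add capped-memory
zonecfg:sapzone:capped-memory> set physical=48g
zonecfg:sapzone:capped-memory> end
zonecfg:sapzone> commit
zonecfg:sapzone> exit
zoneadm -z sapzone install
zoneadm -z sapzone boot

The dedicated-cpu resource is the mechanism the last bullet alludes to: the zone only sees the CPUs assigned to it.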
Oracle / Sun with Solaris 10 - perfect together!

Thursday, September 10, 2009

What's Better: USB or SCSI?

What's Better: USB or SCSI?

Abstract
Data usage and archiving is just exploding everywhere. The bus options for adding data increase often, with new bus protocols being added regularly. With systems so prevalent throughout businesses and homes, when should one choose a different bus protocol for accessing the data? This set of tests will be done with some older mid-range internal SCSI drives against a brand new massive external USB drive.

Test: Baseline
The Ultra60 test system is a Sun UltraSPARC-II server, with dual 450MHz CPUs and 2 Gigabytes of RAM. Internally, there are two 80-pin 180 Gigabyte SCSI drives. Externally, there is one 1.5 Terabyte Seagate FreeAgent Xtreme drive. A straight "dd" will be done from a 36 Gigabyte root slice to the internal drive and to the external disk.
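
For context, the three test targets referenced below (/u001, /u002, and /u003) could be prepared roughly as follows; the slice numbers and the internal pool name are illustrative assumptions, not the exact commands used on this system:

newfs /dev/rdsk/c0t1d0s0                  (UFS on an internal SCSI slice)
mount /dev/dsk/c0t1d0s0 /u001
zpool create -m /u002 zpool1 c0t1d0s1     (ZFS on another internal SCSI slice)
zpool create -m /u003 zpool2 c2t0d0       (ZFS on the whole external USB drive)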


Test #1a: Write Internal SCSI with UFS
The first copy was to an internal disk running the UFS file system. The system hovered around 60% idle time with about 35% CPU time pegged in the SYS category for the entire time of the copy.

Ultra60-root$ time dd if=/dev/dsk/c0t0d0s0 of=/u001/root_slice_0
75504936+0 records in
75504936+0 records out

real 1h14m6.95s
user 12m46.79s
sys 58m54.07s


Test #1b: Read Internal SCSI with UFS
The read back of this file was used to create a baseline for other comparisons. The system hovered around 50% idle time with about 34% CPU time pegged in the SYS category for the entire time of the copy. The read spanned about 34 minutes.

Ultra60-root$ time dd if=/u001/root_slice_0 of=/dev/null
75504936+0 records in
75504936+0 records out

real 34m13.91s
user 10m37.39s
sys 21m54.72s


Test #2a: Write Internal SCSI with ZFS
The internal disk was tested again using the ZFS file system instead of the UFS file system. The system hovered around 50% idle with about 45% pegged in the SYS category. The write time lengthened by about 50% using ZFS.

Ultra60-root$ time dd if=/dev/dsk/c0t0d0s0 of=/u002/root_slice_0
75504936+0 records in
75504936+0 records out

real 1h49m32.79s
user 12m10.12s
sys 1h34m12.79s


Test #2b: Read Internal SCSI with ZFS
The 36 Gigabyte read under ZFS took about 50% longer than under UFS. The CPU capacity was not strained much more, however.

Ultra60-root$ time dd if=/u002/root_slice_0 of=/dev/null
75504936+0 records in
75504936+0 records out

real 51m15.39s
user 10m49.16s
sys 36m46.53s


Test #3a: Write External USB with ZFS
The third copy was to an external disk running the ZFS file system. The system hovered around 0% idle time with about 95% CPU time pegged in the SYS category for the entire time of the copy. The copy consumed about the same amount of time as the ZFS copy to the internal disk.

Ultra60-root$ time dd if=/dev/dsk/c0t0d0s0 of=/u003/root_slice_0
75504936+0 records in
75504936+0 records out

real 1h52m13.72s
user 12m49.68s
sys 1h36m13.82s


Test #3b: Read External USB with ZFS
Read performance is slower over USB than it is over SCSI with ZFS. The time is 82% slower than the UFS SCSI read and 21% slower than the ZFS SCSI read. CPU utilization seems to be slightly higher with USB (about 10% less idle time with USB than with SCSI.)

Ultra60-root$ time dd if=/u003/root_slice_0 of=/dev/null
75504936+0 records in
75504936+0 records out

real 1h2m50.76s
user 12m6.22s
sys 42m34.05s


Untested Conditions

FireWire and eSATA were attempted, but these bus protocols would not work reliably with the Seagate Xtreme 1.5TB drive under any platform tested (several Macintoshes and Sun workstations). If you are interested in a real interface besides USB, this external drive is not the one you should be investigating - it is a serious mistake to purchase.

Conclusion

The benefits of ZFS do not come without a cost in time. Reads and writes are about 50% slower, but the cost may be worth it for the benefits: unlimited snapshots, unlimited file system expansion, error correction, compression, 1 or 2 disk failure tolerance, future 3 disk failure tolerance, future encryption, and future clustering.

If you are serious about your system performance, SCSI is definitely a better choice over USB to provide throughput with minimum CPU utilization - regardless of file system. If you have invested in CPU capacity and have capacity to burn (i.e. a multi-core CPU), then buying external USB storage may be reasonable, over purchasing SCSI.