Showing posts with label MacOSX. Show all posts

Sunday, April 13, 2014

Security: Heartbleed, Apple, MacOSX, iOS, Linux, and Android


Abstract:
Nearly every computing device today is connected to others via a network of some kind. These connections open up opportunities for exploitation by organized crime, common criminals, or government espionage via malware. While Apple computers running MacOSX are immune, along with their mobile devices based upon iOS (iPhone and iPad)... huge numbers of Linux and Android devices are at risk!





Heartbleed:

This particular vulnerability can be leveraged by many actors to capture usernames and passwords, and those account credentials can later be used for nefarious purposes. Nefarious includes: command and control of machines to attack commercial, financial, or government targets, or even entire national electrical grids; stealing money; and stealing compute resources. The defect is well documented.


Apple and Android/Linux Vulnerabilities:

There are many operating systems which are vulnerable to this defect, but for this article, we are only really concerned about the mobile market.
While most of the buzz surrounding OpenSSL's Heartbleed vulnerability has focused on websites and other servers, the SANS Institute reminds us that software running on PCs, tablets and more is just as potentially vulnerable.
The SANS Institute's Williams said a dodgy server could easily send a message to vulnerable software on phones, laptops, PCs, home routers and other devices, and retrieve up to 64KB of highly sensitive data from the targeted system at a time. It's an attack that would probably yield handy amounts of data if deployed against users of public Wi-Fi hotspots, for example.
While Google said in a blog post on April 9 that all versions of Android are immune to the flaw, it added that the “limited exception” was one version dubbed 4.1.1, which was released in 2012.
Security researchers said that version of Android is still used in millions of smartphones and tablets, including popular models made by Samsung Electronics Co., HTC Corp. and other manufacturers. Google statistics show that 34 percent of Android devices use variations of the 4.1 software.

Google said less than 10 percent of active devices are vulnerable. More than 900 million Android devices have been activated worldwide.
After taking a few days to check its security, Apple joined other companies in publicly announcing how worried or secure its customers should feel.
"Apple takes security very seriously. iOS and OS X never incorporated the vulnerable software and key Web-based services were not affected," an Apple spokesperson said.

Conclusions:
To give an adequate sense of the number of mobile Android devices at risk, take the population of the United States, roughly 317 million people, as a baseline. With roughly 90 million vulnerable Android (Linux-based) devices, the equivalent of nearly 28% of the population of the United States is at risk! This is no small number of mobile devices - there is a lot of patching that either needs to be done, or a lot of mobile devices which should be destroyed. Ensure you check your Android device!
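The back-of-the-envelope math above can be checked in one line, using the figures given in the article (roughly 90 million vulnerable devices, i.e. about 10% of the 900+ million activations, against a U.S. population of roughly 317 million):

```shell
# Vulnerable Android devices as a share of the U.S. population
awk 'BEGIN { printf "%.1f%%\n", 100 * 90 / 317 }' # prints 28.4%
```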

Friday, December 28, 2012

Notable Events: NeXT and Perl


ARS Technica published:
The Legacy of NeXT Lives On in MacOSX
NeXTSTEP technologies still fuel Macs, iPhones, and iPads 16 years later.

The Register published:
The Perl Programming Language Marks 25th Anniversary
Munging data since 1987



Wednesday, November 14, 2012

Automatic Storage Tiering

Image courtesy: WWPI.
Automatic Storage Tiering

Abstract:
Automatic Storage Tiering, or Hierarchical Storage Management, is the process of placing data onto the storage tier which is most cost effective, while still meeting basic accessibility and efficiency requirements. There has been much movement over the past half-decade in storage management.



Early History:
When computers were first being built on boards, RAM (most expensive) held the most volatile data while ROM held the least changing data. EPROMs provided a way for users to occasionally change mostly static data (requiring a painfully slow erasing mechanism using light, and a special burning mechanism using high voltage), placing the technology in a middle tier. EEPROMs provided a simpler way to update data on the same machine, without removing the chip for burning. Tape was created to archive storage for longer periods of time, but it was slow, so it went to the bottom of the capacity distribution pyramid. Rotating disks (sometimes referred to as rotating rust) were created, taking a middle storage tier. As disks became faster, the fastest disks (15,000 RPM) moved towards the upper part of the middle tier while slower disks (5,400 RPM or slower) moved towards the bottom part of the middle tier. Eventually, consumer-grade (not very reliable) IDE and SATA disks became available, occupying the higher areas of the lowest tier, slightly above tape.

Things seemed to settle out for a while in the storage hierarchy: RAM, ROM, EEPROM, 15K FibreChannel/SCSI/SAS disks, 7.2K FibreChannel/SCSI/SAS disks, IDE/SATA, tape - until the creation of ZFS by Sun Microsystems.
Logo Sun Microsystems
Storage Management Revolution:
In the early 2000's, Sun Microsystems started to invest more in flash technology. They anticipated a revolution in storage management, driven by the increasing performance of a technology called "flash", which is little more than EEPROMs. These became known as Solid State Drives, or SSDs. In 2005, Sun released ZFS under their Solaris 10 operating system and started adding features that included flash acceleration.

There was a general problem that Sun noticed: flash was either cheap with low reliability (Multi-Level Cell, or MLC) or expensive with higher reliability (Single-Level Cell, or SLC). It was also noted that flash was not as reliable as disks in write-heavy environments. The basic idea was to turn automatic storage management on its head with the introduction of flash to ZFS under Solaris: RAM via the ARC (Adaptive Replacement Cache), cheap flash via the L2ARC (Level 2 ARC), expensive flash via the write log, and cheap disks (protected with disk parity and CRCs at the block level).
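That hierarchy maps directly onto a handful of zpool commands. A minimal sketch, assuming Solaris-style device names (the pool and device names here are illustrative, not from the article):

```shell
# Bottom tier: mirrored cheap disks, protected by ZFS block-level checksums
zpool create tank mirror c1t0d0 c1t1d0
# Middle tier: cheap MLC flash as a read cache (L2ARC)
zpool add tank cache c2t0d0
# Top tier: expensive SLC flash as the dedicated write log (ZIL device)
zpool add tank log c2t1d0
```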

Sun Fire X4500 aka Thumper, courtesy Sun Microsystems

For the next 5-7 years, the industry experienced hand-wringing over what to do with the file system innovation introduced by Sun Microsystems in Solaris ZFS. More importantly, ZFS is a 128 bit file system, meaning massive data storage was now possible, on a scale that could not previously be imagined. The OS and file system were also open-sourced, meaning it was a 100% open solution.

Sun built this technology into all of their servers. It became increasingly important as Sun released mid-range storage systems based upon it, combining high capacity, lower price, and higher speed access. Low-end solutions based upon ZFS also started to hit the market, as well as super-high-end solutions in MPP clusters.

Drobo-5D, courtesy Drobo

Recent Vendor Advancements - Drobo:
A small-business centric company called Data Robotics, or Drobo, released a small RAID system with an innovative feature: add drives of different sizes, and the system adds capacity and reliability on-the-fly, at the cost of some disk space. While some disk space is lost, the user is protected against the failure of any drive in the RAID, regardless of size, and a drive of any size can replace the bad unit.
Drobo-Mini, courtesy Drobo
The Drobo was very innovative, but the company moved further into Automatic Storage Tiering with the release of flash drive recognition. When an SSD (an expensive flash drive) is added to the system, it is recognized and used to accelerate access to the storage on the rotating disks. A short review of the product was done by IT Business Edge. With a wide range of products, from the portable market (leveraging 2.5 inch drives), to the midsize market (leveraging 3.5 inch drives), to the small business market (leveraging up to twelve 3.5 inch drives) - this is a challenging low to medium end competitor.

The challenge with Drobo: it is a proprietary solution, sitting on top of fairly expensive hardware, for the low to medium end market. This is not your 150ドル RAID box, where you can stick in a couple of drives and go.



Recent Vendor Advancements - Apple:
Not to be left out, Apple was going to bundle ZFS into every Apple Macintosh they shipped. Soon enough, Apple canceled their ZFS announcement, canceled their open-source ZFS announcement, placed job ads for people to create a new file system, and went into hibernation for years.

Apple recently released a Mac OSX feature they referred to as their Fusion Drive. Mac Observer noted:
"all writes take place on the SSD drive, and are later moved to the mechanical drive if needed, resulting in faster initial writes. The Fusion will be available for the new iMac and new Mac mini models announced today"
Once again, the market was hit with an innovative product, bundled into an Operating System, not requiring proprietary software.


Implications for Network Management:
With continual improvement in storage technology, this will place an interesting burden on Network, Systems, and Storage Management vendors. How does one manage all of these solutions in an Open way, using Open protocols, such as SNMP?

The vendors who "crack" the management "nut" may be in the best position to have their products accepted into existing heterogeneous storage management shops, or possibly supplant them.

Monday, April 16, 2012

Apple MacOSX Malware: Java Exploit Phase 2


Apple MacOSX Malware: Java Exploit Phase 2

Abstract:
As noted in a previous article, MacOSX experienced a pretty severe malware exploit through an Oracle Java vulnerability. It appears a second Java exploit targeting Apple Macintosh OSX is currently active on the Internet.

Previous Resolution:
Apple shipped a Java fix, as well as forcing the shutdown of Java applets by default. The latter was considered pretty heavy-handed, but considering the second exploit was just revealed, one must wonder whether Apple was aware of this issue looming on the horizon.

New Java Exploit:
A writer at securelist.com described the new malware issue.
This new threat is a custom OS X backdoor, which appears to have been designed for use in targeted attacks. After it is activated on an infected system, it connects to a remote website in typical C&C fashion to fetch instructions. The backdoor contains functionality to make screenshots of the user's current session and execute commands on the infected machine.

Interesting:
It appears from the screenshot that a Microsoft ASPX page is involved in the malware: a Microsoft system seems to be receiving/controlling it. Whether this means it is some type of hybrid malware (also infecting Microsoft systems) or the malware designer is using a Microsoft OS as their virus distribution system is an interesting question.

Saturday, April 7, 2012

Inevitable: Apple MacOSX Infected Via Java on Web


Inevitable: Apple MacOSX Infected via Java on Web

Abstract:
Desktop and server systems based upon the Microsoft Windows platform have long been the most vulnerable platforms on the internet, providing the most efficient platform for malware writers to steal computing and network cycles from owners around the world. Various other open platforms (i.e. UNIX based systems), which serve much of the internet's traffic, have long tried to keep from being infected by applying more rigorous security rules at the OS level. Apple, being one such vendor who migrated to a UNIX platform, had been successful in keeping their clients secure - but finally a single Java based vulnerability has been discovered (and leveraged) to exploit some systems.

Virus Buster:
A virus vendor located in Russia recently published a short research article on a particular threat, which has been closed by Apple.

Doctor Web—the Russian anti-virus vendor—conducted a research to determine the scale of spreading of Trojan BackDoor.Flashback that infects computers running Mac OS X. Now BackDoor.Flashback botnet encompasses more than 550 000 infected machines, most of which are located in the United States and Canada. This once again refutes claims by some experts that there are no cyber-threats to Mac OS X.
While infection is very uncommon, MacOSX based Apple Macintosh computers occasionally carry third-party software (i.e. Flash, Java, etc.) which can introduce some level of vulnerability on any platform, including MacOS, Windows, UNIX, etc.

The Origin:

The virus research company explains how computers get infected.

According to some sources, links to more than four million compromised web-pages could be found on a Google SERP at the end of March. In addition, some posts on Apple user forums described cases of infection by BackDoor.Flashback.39 when visiting dlink.com.
The Morphing:
Companies started working on a solution, but before Apple released a patch, the attackers attempted to diversify the virus, so it might survive once the original vulnerability was closed.

Attackers began to exploit CVE-2011-3544 and CVE-2008-5353 vulnerabilities to spread malware in February 2012, and after March 16 they switched to another exploit (CVE-2012-0507).
Security, At Last:
While this vulnerability had been "in the wild" on the internet for a while, this particular virus was exterminated.

The vulnerability has been closed by Apple only on April 3, 2012.
Protecting Yourself:
This particular threat is not unique to Apple; it also affects other systems like Windows. Apple released a security patch to close this vulnerability - you would be well advised to regularly download updates from Apple and apply these patches whenever possible.

A general rule of thumb: STAY AWAY FROM IMMORAL (i.e. pornography) AND ILLEGAL (i.e. copyrighted material like music, videos, software, etc.) DOWNLOADS - NEVER VIEW OR DOWNLOAD SOFTWARE OFF OF THE INTERNET, UNLESS IT IS A WELL KNOWN SITE - NO MATTER WHAT COMPUTER YOU ARE ON... these sites notoriously try to download viruses to your computer!

Friday, February 3, 2012

ZFS: Apple Enters Storage Arena


ZFS: Apple Enters Storage Arena

Abstract:
File systems have existed nearly as long as computing systems. First, systems used storage based upon tape solutions with serial access. Next came random block file access. Various filesystems were created, offering different capabilities, and eventually allowing a disk drive to be divided up into multiple logical slices. Volume managers arrived later on the scene, to aggregate disks below individual filesystems into larger capacities. ZFS was created by Sun Microsystems for the purpose of erasing the distinction between volume manager and file system - to add flexibility that the divided pair could not easily achieve. Apple computers often have the need for massive data storage, but the native filesystem has been lacking - until ZFS became a possibility.

History:
Apple computers are the traditional workhorse for graphic design houses. They work with large media such as billboards and books with high resolution photographs... which all take a lot of space. As computers continued to advance, Apple knew they needed a real filesystem.

In 2007, Apple was originally intending to package ZFS into their MacOSX operating system and ship it with Leopard. This would have fixed a lot of problems experienced in the Macintosh environment, including the long time it takes to re-silver a mirrored set if someone kicks a power cable on a desktop USB drive, and would have allowed virtually unlimited expansion of a filesystem by merely adding disks.

Then 2009 came along, and Apple dumped ZFS. There was an outcry in the community, which was looking for a real filesystem under MacOSX, but Apple instead started looking for a new team to "roll their own" filesystem.


In 2011, Apple still had not developed a modern filesystem, and some of the people who had been porting ZFS to MacOSX decided to form their own startup - with the purpose of porting ZFS to MacOSX.


Enter Ten's Complement LLC

Here it is - 2012... a half-decade later, and Apple has been unable to release a modern filesystem. To be fair, nearly every other operating system vendor has been equally unable, including AIX, HPUX, Linux, and Windows. Interestingly, the old MacOSX developer finally released ZFS. The limited liability company Ten's Complement now offers a Single Disk and a Multiple Disk edition, and will offer a De-Duplication option for MacOSX in the near future.

Network Management Connection

With the arrival of ZFS, Apple MacOSX has finally made it into the realm of being a very viable platform for server applications. No longer will people need to use MacOSX as a client and buy a SPARC or Intel Solaris platform as a server to gain the benefits of ZFS. Designers, video publishers, and media collectors can now just add the occasional multi-terabyte hard drive and keep on building their data collections with limited concern for failure - it will all be protected with parity, and old deletions can be easily rolled back.
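The "easily rolled back" part refers to ZFS snapshots. A quick sketch, with hypothetical pool and dataset names:

```shell
# Take a cheap, instantaneous point-in-time snapshot before risky work
zfs snapshot tank/media@before-cleanup
# ...an accidental deletion later...
rm -rf /tank/media/old-project
# ...is undone by rolling the dataset back to the snapshot
zfs rollback tank/media@before-cleanup
```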

With the addition of ZFS to MacOSX - expect to see more MacOSX platforms in the small enterprises. The benefits of Solaris with the simplicity of MacOSX will surely be an awesome win for the computer community - which means Network Managers will need to take this into their consideration as they roll out management platforms.

Saturday, July 16, 2011

ZFS: A Multi-Year Case Study in Moving From Desktop Mirroring (Part 1)



Abstract:
ZFS was created by Sun Microsystems to innovate in the storage subsystem of computing systems: simultaneously expanding capacity and security while collapsing the formerly striated layers of storage (i.e. volume managers, file systems, RAID, etc.) into a single layer, in order to deliver capabilities that would normally be very complex to achieve. One such innovation introduced in ZFS was the ability to place inexpensive, limited-life solid state storage (FLASH media), which offers fast (or at least more deterministic) random read and write access, into the storage hierarchy where it can enhance the performance of less deterministic rotating media. This paper discusses the process of upgrading attached external mirrored storage to external network attached ZFS storage.

Case Study:
A particular Media Design House had formerly used multiple external mirrored drives on desktops, as well as racks of archived optical media, to meet their storage requirements. A pair of (formerly high-end) 400 Gigabyte Firewire drives lost a drive. An additional pair of (formerly high-end) 500 Gigabyte Firewire drives lost a drive within a month. A media wall of CDs and DVDs was getting cumbersome to retain.

The goal was to consolidate the mirrored sets of current data, recent data, and long-term old data onto a single set of mirrored media. The target machine the business was most concerned about was a high-end 64 bit dual 2.5GHz PowerMac G5 deskside server running MacOSX.


The introduction of mirrored external higher capacity media (1.5 TB disks with eSATA, Firewire, and USB 2.0 options) proved to be far too problematic. These drives were just released and proved unfortunately buggy. Improper shutdowns, or proper shutdowns where the media did not flush the final writes from cache in time, resulted in horrible delays: rebuilding the mirrored set upon the next startup would take over a day, and access to that media was tremendously degraded during the rebuild process.

Moving a 1.5TB drive to the external USB storage connector on a new top-of-the-line Linksys WRT610N Dual-Band N Router with Gigabit Ethernet and Storage Link proved impossible. The thought was that the business would copy the data manually from the desktop to the network storage nightly, by hand, over the gigabit ethernet. Unfortunately, the embedded Linux file system did not support USB drives of this size. The embedded Linux in the WRT610N also did not support mirroring, or SNMP for remote management.

The decision was to hold off any final decision until the next release of MacOSX, when a real enterprise grade file system - ZFS - was expected to be added.


With the withdrawal of ZFS from the next Apple operating system, the decision was made to migrate all the storage from the Media Design House onto a single deskside ZFS server, which could handle the company's storage requirements. Solaris 10 was selected, since it offered a stable version of ZFS under a nearly Open Source operating system, without being on the bleeding edge as OpenSolaris was. If there was ever a decision to change the licensing of Solaris 10, it was understood that OpenSolaris could be leveraged, so long term data storage was safe.

Selected Hardware:
Two Seagate FreeAgent XTreme external drives were selected for storage. A variety of interfaces were supported, including eSATA, Firewire 400, and USB 2.0. At the time, this was the highest capacity external disk which could be purchased with the widest variety of high-capacity storage interfaces off-the-shelf at local computer retailers. 2 Terabyte drives were expected to be released in the next 9 months, so it was important that the system would be able to accept them without BIOS or other file system size limitations. These were considered "green" drives, meaning that they would spin down when not in use, to conserve energy.


A dual 450MHz deskside Sun Ultra60 Creator 3D with 2 Gigabytes of RAM was chosen for the solution. These were well built machines at a current low price-point which could run current releases of Solaris 10 with the modern ZFS filesystem. Dual 5 port USB PCI cards were selected (as the last choice, after eSATA and Firewire cards proved incompatible with the Seagate external drives... more on this choice later.) Solaris offered security with stability, since few viruses and worms target this enterprise and managed services grade platform, and a superior file system to any other platform on the market at the time (as well as today): ZFS. SPARC offered long term equipment supportability, since 64 bit had been supported for a decade, while consumer grade Intel and AMD CPUs were still struggling to get off of 32 bit.

The Apple laptops and Deskside Server all supported Gigabit Ethernet and 802.11N. Older Apple systems supported 100 megabit Ethernet and 802.11G. A 1 Gigabit Ethernet card for the Sun Ultra60 was purchased, in addition to several Gigabit Ethernet Switches for the office. A newly released Linksys dual-band Wireless N router with 4xGigabit Ethernet ports was also purchased, the first of a new generation of wireless router in the consumer market. This new wireless router would offer simultaneous access to network resources over full-speed 2.4GHz 802.11G and 5GHz 802.11 N wireless systems. The Gigabit ethernet switches were also considered "green" switches, where power was greatly conserved when ports were not in use.


CyberPower UPSs were chosen for all aspects of the solution, from disks, to Sun server, to switches, to wireless access point. These UPSs were considered "green" UPSs, since their power consumption was far less than competing UPSs, plus the displays clearly showed information regarding load, battery capacity, input voltage, output voltage, and component run time.

Speed Bumps:
It proved notoriously difficult to acquire eSATA cards which would work reliably in the 64 bit PCI buses of the Apple deskside server and the Sun deskside workstation. The drives worked independently under FireWire, but two drives would not work reliably on the same machine with FireWire. A pair of FireWire cards was also purchased, in order to move the drives to independent controllers, but this did not work under either the MacOSX or Solaris platforms with these external Seagate drives. The move to USB 2.0 was a last ditch effort. Under MacOSX, rebuild times ran more than 24 hours, which drove the decision to move to Solaris with ZFS. Two 5 port USB 2.0 cards were selected, one for each drive, with enough extra ports to add more storage over the next 4 years. The USB 2.0 cards had a firmware bug, which required a patch to Solaris 10 in order to make the cards operate at full USB 2.0 speed.

Implementation:
A mirror of the two 1.5 Terabyte drives was created and the storage was shared from ZFS with a couple of simple commands.

The configuration is as shown below.
Ultra60/user# zpool status
pool: zpool2
state: ONLINE
config:
 NAME STATE READ WRITE CKSUM
 zpool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c4t0d0s0 ONLINE 0 0 0
 c5t0d0s0 ONLINE 0 0 0
errors: No known data errors
Ultra60/user# zfs get sharenfs zpool2
NAME PROPERTY VALUE SOURCE
zpool2 sharenfs on local
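For reference, the "couple of simple commands" behind the listing above would have looked something like this (device names taken from the status output; the exact invocation is an assumption, not from the case study):

```shell
# Create a mirrored pool from the two 1.5 Terabyte USB drives
zpool create zpool2 mirror c4t0d0s0 c5t0d0s0
# Share the pool over NFS
zfs set sharenfs=on zpool2
```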

Implementation Results:
Various tests were conducted, such as:
  • Pulling the power out of a USB disk during read and write operations
  • Pulling the USB cord out of a USB disk during read and write operations
  • Pulling the power out of the SPARC Workstation during read and write operations
Under all cases, the system recovered within seconds to minutes with complete data availability and quick access to the data (instead of days of sluggishness, due to completing a rebuild, with the former desktop mirrored solution.)

Even though the SPARC CPU was vastly slower in raw clock speed than the PowerPC G5 in the Apple deskside unit, the overall performance of the storage area network was vastly superior to the former desktop mirroring attempt using the high-capacity storage.

Copying the data across the ethernet network experienced some short delays, during the time the disks needed to spin up from sleep mode. With future versions of ZFS projected to support both a Level 2 ARC for reads and intent logging for writes, the performance was considered more than acceptable until Solaris 10 received sufficient upgrades.

The system was implemented and accepted within the Media Design House. The process of moving old desktop mirrors and racks of CD and DVD media to Solaris ZFS storage began.

Wednesday, March 30, 2011

MacOSX: ZFS Update


MacOSX: ZFS Update


Abstract:


The Apple Macintosh Operating System was built around the Hierarchical File System (HFS). The file system was upgraded from 16 bits to 32 bits and renamed HFS+, while several other operating features were also added. The market has been clamoring for a real storage solution; Apple briefly released a ZFS beta; and finally a new commercial company is doing the heavy lifting of providing MacOSX a reasonably current ZFS implementation.

History:
The Zettabyte File System (ZFS) was built by Sun in 2004 on top of a 128 bit base, differentiating it from competing (16 and 32 bit) platforms. Sun open-sourced ZFS in 2005.

In 2006, a skunkworks operation at Apple, run by Chris Emura (Apple Filesystem Development Manager) and Don Brady (Apple filesystem and OS engineer), started to port ZFS to MacOSX. Apple started down the road of adopting ZFS for MacOSX Server "Leopard" 10.5 in 2007. MacOSX Server "Snow Leopard" 10.6 was supposed to have full ZFS support, but ZFS was later canceled.

After working on HFS+ and the former Apple ZFS port, 20 year kernel and file system veteran Don Brady announced the formation of the corporation "Ten's Complement" to finally bring ZFS to MacOSX. The intention is to use the Illumos source code base to provide the much needed (and much desired) functionality to MacOSX.

The MacZFS group offered a package download for MacZFS-74.1.0 on March 5, 2011.

Helpful Links:

Ziff-Davis industry reporter Robin Harris clearly outlines the benefits of ZFS under MacOSX. For more information on MacOSX, ZFS, and its Illumos source base, see the following.
Network Management Connection:
Network management is all about tying a lot of (remote) data together into a large database for easy investigation. ZFS is the only modern reliable file and volume management system in the open sourced and commercial world at this point. MacOSX may be one of the most simple, secure, and robust user facing UNIX based systems in the world at this point.

The marriage of the two (ZFS and MacOSX) offer tremendous possibilities to tie together robust user end experience (through appliances such as iPhone, iPod Touch, and iPad families) with robust back-end processing (virtually virus-proof MacOSX and ZFS.)

Thursday, March 24, 2011

2011 March 20-26: Articles of Interest

Security, Networking, and Industry Articles of Interest


2011年03月16日 - Microsoft malware removal tool takes out Public Enemy No. 4
Microsoft finally used its Malicious Software Removal Tool to remove the fourth-biggest threat in the automated program's history, dating back to at least 2005.


2011年03月18日 - RSA breach leaks data for hacking SecurID tokens
'Extremely sophisticated' attack targets 2-factor auth


2011年03月20日 - AT&T acquires T-Mobile USA from Deutsche Telekom for 39ドルbn
There was one GSM network, to rule them all...


2011年03月23日 - Mac OS X daddy quits Apple
Bertrand Serlet, Apple’s senior vice president of Mac software engineering and the man who played a lead role in the development of Mac OS X, is leaving the company.


2011年03月23日 - 'Iranian' attackers forge Google's Gmail credentials
Skype, Microsoft, Yahoo, Mozilla also targeted.

Extremely sophisticated hackers, possibly from the Iranian government or another state-sponsored actor, broke into the servers of a web authentication authority and counterfeited certificates for Google mail and six other sensitive addresses, the CEO of Comodo said.


2011年03月23日 - Oracle announced all software development stopped on Intel's Itanium CPU.
Red Hat was the first to pull the plug on Itanium, saying back in December 2009 that its Enterprise Linux 6 operating system, which was released last summer, would not be supported on Itanium processors.

Microsoft followed suit in April 2010, saying that Windows Server 2008 R2 and SQL Server 2008 R2 would be the final releases supported on Itanium.


2011年03月24日 - Apple Mac OS X: ten years old today
OS X was the product of Apple's 1996 purchase of NeXT, a move that not only saw the acquisition of a modern operating system, but also the return of its co-founder, Steve Jobs, to the company.

Thursday, October 8, 2009

IBM fends off T3 as Apple fends off Psystar; Future of Computing

IBM fends off T3 as Apple fends off Psystar

The Woes of IBM

The Department of Justice is investigating IBM on behalf of the CCIA. IBM has also refused to license their software on non-IBM mainframes in the past. European rack server company T3 now appears to be the latest aggressor.

The Woes of Apple

Apple has been defending its right to license their Apple Computer MacOSX software only on Apple hardware for some time. The latest upstart, Psystar (which resides in the southern part of the United States), ironically released a new line of products referred to as the "Rebel Series".

An Odd Turn of Events

What makes these cases so very interesting is a more recent ruling by a western U.S. court against software company Autodesk, which leans in the direction that software is not licensed, but owned.

Impacts to the Industry

As people turn to Open Source software, to reduce costs on their business, providers have been bundling their software at near-free costs with hardware, to be competitive.

Now, with hardware under attack for this practice, vendors will not be able to hide the cost of software creation. If the traditionally liberal U.S. Western court ruling is not overturned, the computing industry is at tremendous risk.

Basically, no one will pay for the salaries of good software designers and hardware designers will continually have their products knocked-off by clone manufacturers... leaving the industry in a place where innovation may suffer - because no one will reward innovation with a salary.

What Could Be Next?

If no one would pay for software and hardware innovation on commodity hardware, where might people go, to secure their investment?

Very possibly, the next turn could be back to proprietary platforms, again. If an innovative software solution is only available on a proprietary OS which is available only on a proprietary hardware platform - there is a guaranteed return on investment... regardless of whether the software is licensed or purchased.

This is not very good news, for the industry.

On the other hand, this could lead the industry back to a period of Open Computing - where there was choice between hardware vendors, OS vendors, and software vendors... all according to open standards and open APIs that were cooperatively created between heterogeneous industry groups like: x.org, the Open Group, POSIX, Open Firmware, etc.

It is very possible that the liberal U.S. Western Federal Courts could be overturned (as happens quite often), once justices or the U.S. Congress realize that they may be pushing a huge industry into non-existence, or causing the balkanization of this industry back into medieval fiefdoms.

It could also mean the end of commercial software, if the court case is not overturned. Programmers could be turned into perpetual free-lancers, in areas where labor is expensive.

In areas where labor is less expensive, programmers could be considered little more than factory workers. Hire your factory worker by the hour, to finish your project, and just release them. They will seldom see more than a segment of code and never gain the experience to really architect a complete solution.

It could also mean the end of high quality generic open-source software. This is, perhaps, the most ironic result. If software has no value, since few will have the means to pay for it, innovative programmers may choose to leave their software closed-source, in order to survive.

As much as people do not like licensing terms, the alternative is somewhat stark.