Showing posts with label SSDS. Show all posts

Tuesday, October 27, 2009

Amazon Attempts to Preempt PDC 2009 Release of SQL Azure with MySQL 5.1 Relational Database Service

For the second year in a row, Amazon Web Services (AWS) announces new and improved services a few weeks before Microsoft’s Professional Developers Conference (PDC). Last year it was Amazon Web Services Announces SLA Plus Windows Server and SQL Server Betas for EC2, which I reported on 10/23/2008.

This year, it’s Amazon Relational Database Service (Amazon RDS) Beta, announced on 10/27/2009, which delivers pre-configured MySQL 5.1 instances with up to 68 GB of memory and 26 ECUs (8 virtual cores with 3.25 ECUs each) servicing up to 1 TB of data storage. One Elastic Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. A summary of RDS beta pricing is here.
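The instance-size arithmetic above is easy to verify; a trivial sketch (the figures come from the announcement, the helper is just illustrative):

```python
# Back-of-the-envelope check of the specs quoted above:
# 8 virtual cores at 3.25 ECUs each should equal the advertised 26 ECUs.

def total_ecus(cores, ecus_per_core):
    """Total Elastic Compute Units for an instance type."""
    return cores * ecus_per_core

print(total_ecus(8, 3.25))  # 26.0
```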

Stay tuned for a comparison of AWS RDS and SQL Azure pricing.

Werner Vogels’ Expanding the Cloud: The Amazon Relational Database Service (RDS) post of the same date contrasts AWS’ three approaches to cloud-based databases:

AWS customers now have three database solutions available:

  • Amazon RDS for when the application requires a relational database but you want to reduce the time you spend on database management. Amazon RDS automates common administrative tasks to reduce your complexity and total cost of ownership. Amazon RDS allows you to manage your database compute and storage resources with a simple API call, and only pay for the infrastructure resources you actually consume.
  • Amazon EC2 Relational Database AMIs for when the application requires the use of a particular relational database and/or when the customer wants to exert complete administrative control over their database. An Amazon EC2 instance can be used to run a database, and the data can be stored within an Amazon Elastic Block Store (Amazon EBS) volume. Amazon EBS is a fast and reliable persistent storage feature of Amazon EC2. Available AMIs include IBM DB2, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Sybase, and Vertica.
  • Amazon SimpleDB for applications that do not require a relational model, and that principally demand index and query capabilities. Amazon SimpleDB eliminates the administrative overhead of running a highly-available production database, and is unbound by the strict requirements of a RDBMS. With Amazon SimpleDB, you store and query data items via simple web services requests, and Amazon SimpleDB does the rest. In addition to handling infrastructure provisioning, software installation and maintenance, Amazon SimpleDB automatically indexes your data, creates geo-redundant replicas of the data to ensure high availability, and performs database tuning on customers' behalf. Amazon SimpleDB also provides no-touch scaling. There is no need to anticipate and respond to changes in request load or database utilization; the service simply responds to traffic as it comes and goes, charging only for the resources consumed.

Jeff Barr’s Introducing Amazon RDS - The Amazon Relational Database Service post of the same date to the AWS Blog digs into the details of creating and populating an RDS instance:

… Using the RDS APIs or the command-line tools, you can access the full capabilities of a complete, self-contained MySQL 5.1 database instance in a matter of minutes. You can scale the processing power and storage space as needed with a single API call and you can initiate fully consistent database snapshots at any time.

Much of what you already know about building applications with MySQL will still apply. Your code and your queries will work as expected; you can even import a dump file produced by mysqldump to get started.

Amazon RDS is really easy to use. I'll illustrate the most important steps using the command-line tools, but keep in mind that you can also do everything shown here using the APIs.

The first step is to create a database instance. …

Jeff then goes on to demonstrate how to:

  • Create a database named mydb with room for up to 20 GB of data
  • Check on the status of your new database at any time
  • Edit the database's security group so that it allows inbound connections, enable connections from any (or all) of your EC2 security groups, or enable connections from a particular IP address or address range using CIDR notation
  • Expand your instance’s storage space immediately or during the instance’s maintenance window
  • Set up a two-hour backup window and a retention period for automated backups, and create a database snapshot at any time
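The CIDR-based access rule in the security-group step above can be sanity-checked locally with Python’s ipaddress module; the helper name and addresses are mine, not part of the RDS tooling:

```python
import ipaddress

def address_allowed(client_ip, cidr_rules):
    """Return True if client_ip falls inside any authorized CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(rule) for rule in cidr_rules)

rules = ["203.0.113.0/24"]                      # e.g. your office address range
print(address_allowed("203.0.113.45", rules))   # True
print(address_allowed("198.51.100.7", rules))   # False
```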

James Hamilton’s Relational Database Service, More Memory, and Lower Prices provides an overview of AWS RDS, which begins:

I’ve worked on or around relational database systems for more than 20 years. And, although I freely admit that perfectly good applications can, and often are, written without using a relational database system, it’s simply amazing how many of the world’s commercial applications depend upon them. Relational database offerings continue to be the dominant storage choice for applications with a need for structured storage.

There are many alternatives, some of which are very good. ISAMs like Berkeley DB. Simple key value stores. Distributed Hash Tables. There are many excellent alternatives and, for many workloads, they are very good choices. There is even a movement called Nosql aimed at advancing non-relational choices. And yet, after 35 to 40 years depending upon how you count, relational systems remain the dominant structured storage choice for new applications.

Understanding the importance of relational DBs and believing a big part of the server-side computing world is going to end up in the cloud, I’m excited to see the announcement last night of the Amazon Relational Database Service.

He concludes:

AWS also announced last night the EC2 High Memory Instance, with over 64GB of memory. This instance type is ideal for large main memory workloads and will be particularly useful for high-scale database work. Databases just love memory. …

AWS EC2 On-Demand instance prices were reduced by up to 15%.

Lower prices, more memory, and a fully managed, easy to use relational database offering.

James is a Vice President and Distinguished Engineer on the Amazon Web Services team where he is focused on infrastructure efficiency, reliability, and scaling.

RightScale’s Amazon launches Relational Database Service and larger server sizes post of 10/26/2009 takes another tack when describing RDS capabilities:

… With the Relational Database Service AWS fulfills a long standing request from a large number of its users, namely to provide a full relational database as a service. What Amazon is introducing today is slightly different than what most people might have expected, it’s really MySQL 5.1 as a service. The RDS product page has the low-down on how it works, but the short is that with an API call you can launch a MySQL 5.1 server that is fully managed by AWS. You can’t SSH into the server, instead you point your MySQL client library (or command line tool) at the database across the network. Almost anything you can do via the MySQL network protocol you can do against your RDS instance. Pretty simple and the bottom line is that businesses that don’t want to manage MySQL themselves can outsource much of that to AWS. For background on RDS I’d also recommend reading Jeff Barr’s write-up and Werner’s blog which recaps the data storage options on AWS.

What AWS does is keep your RDS instance backed up and running, plus give you automation to up-size (and down-size). You can create snapshot backups on-demand from which you can launch other RDS instances and AWS automatically performs a nightly backup and keeps transaction logs that allow you to do a point-in-time restore. …

One of the current shortcomings of RDS is the lack of replication. This means you’re dependent on one server and it’s impossible to add slave MySQL servers to an RDS instance in order to increase read performance. It’s also impossible to use MySQL replication to replicate from a MySQL server located in your datacenter to an RDS instance. But replication is in the works according to the RDS product page. …

Note re “… and down-size”: According to RDS documentation, you can’t reduce the size of allocated data storage.

Alan Williamson offers his knowledgeable and balanced third-party view of RDS in his Amazon takes MySQL to the cloud; or have they? post of 10/27/2009:

Amazon have just announced a whole slew of updates to their Web Services platform, at a time, I might add, when their stock is riding at an all-time high of 124ドル per share; no better time to announce price cuts. They announced the following high-level features:

  • New Amazon RDS service
  • Additional 2 EC2 instance types
  • Price reduction on EC2 per-hour prices
Amazon RDS service

RDS, or Relational Database Service, is basically a fully managed MySQL 5.1 server within the Amazon infrastructure. Instead of spinning up an EC2 instance, installing MySQL and faffing with all the security, backup and tuning settings, you merely call the API endpoint CreateDBInstance, supply which type of instance you want and how much storage you want, and you are up and running.

Their pricing starts off at 0ドル.11 per hour for the smallest instance, going the whole way up to 3ドル.10 for their highest instance. You also get charged for the amount of storage you reserve at the usual 0ドル.10 per GB[-month].
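At those rates, a back-of-the-envelope monthly estimate is straightforward; a sketch assuming a 30-day month and the per-GB-month storage rate quoted above:

```python
# Rough monthly cost for an RDS instance: hourly instance rate plus
# 0ドル.10 per GB-month of reserved storage (rates from the post above).

def monthly_cost(hourly_rate, storage_gb, gb_month_rate=0.10, hours=24 * 30):
    return hourly_rate * hours + storage_gb * gb_month_rate

# Smallest instance (0ドル.11/hour) with 20 GB reserved: about 81ドル/month.
print(round(monthly_cost(0.11, 20), 2))
```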

A number of big gains are to be had here with Amazon RDS straight out of the gate. Namely, no longer do you have to wrestle with EBS to manage your persistent storage - no panics of data loss should the instance you had MySQL running on suddenly die. Secondly, you don't have to concern yourself with running the latest patched version of MySQL, as Amazon will keep the server binaries up-to-date for you.

They provide tools to easily and quickly take snapshots and create new instances, which are essentially wrappers around their existing EBS service, and provide nightly backups for you automatically. Again, simply utilising their existing EBS service.

You do actually get a MySQL server, listening on port 3306 for all your connections from your app servers and all the tools you've built up to manage MySQL over the years. From an operational stand point, it’s business as usual.

But before you go terminating all your existing MySQL instances, allow some caution.

Amazon RDS, at the moment (although they plan to address this soon), is a one-singer-one-song setup. There is no real time replication and you are relying on a single Amazon EC2 instance not to fail. [Emphasis added.]

Some are probably not too worried about this, as they are probably sailing close to the wind without replication at the minute. However, how does a forced 2-hour downtime per week work out for your application? This is called the "Maintenance Window" and is an opportunity for Amazon to patch the server with the latest security and performance updates. They claim to bring the server down for as short a time as possible, and you get to pick the time of day. [Emphasis added.]

That's going to sting, especially with no replication to take up the slack from the blackout. …
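The arithmetic behind that concern is easy to check: if the full two-hour weekly window were always used, it alone caps availability at roughly 98.8%, before any unplanned failures. A quick sketch:

```python
# Worst-case availability ceiling from a forced two-hour weekly
# maintenance window (assumes the whole window is consumed every week).
weekly_hours = 7 * 24          # 168 hours in a week
downtime_hours = 2
availability = 1 - downtime_hours / weekly_hours
print(f"{availability:.4%}")   # 98.8095%
```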

Jeff Barr says in his blog post: “A High Availability offering so that you can easily and cost-effectively provision synchronously replicated RDS instances in two different availability zones” is “planned for the coming months.”

Jeffrey Schwartz begins his Amazon Sets Stage for Cloud Battle With Microsoft article of 10/27/2009 for Redmond Developer News with:

In what could be an escalating war in the emerging arena of cloud-based computing services, Amazon today said it will let customers host relational data in its EC2 cloud service using the MySQL database. The company today also said that it plans to slash the costs of its EC2 service by as much as 15 percent.

The news comes just weeks before Microsoft is expected to make available its Azure cloud service at the annual Microsoft Professional Developers Conference in Los Angeles (see SQL Azure Is PDC Ready). Microsoft initially did not plan to offer a relational database service with Azure, but the company reversed course after earlier testers largely rejected the non-relational offering. Microsoft's SQL Azure Database will be part of the Azure offering (see Microsoft Revamps Cloud Database and Targeting Azure Storage). …

And concludes:

Roger Jennings, principal analyst at Oakleaf Systems and author of the book Cloud Computing with the Windows Azure Platform (Wrox), said Amazon clearly has taken a swipe at Microsoft. "It certainly appears that way," Jennings said in an interview.

The offering from Amazon includes support for larger databases of up to 1 terabyte, compared to just 10 gigabytes for SQL Azure, Jennings pointed out. "Larger SQL Azure databases must be sharded, but distributed queries and transaction[s] aren’t supported yet," he said in a follow-up email.

Jennings also noted that Amazon MySQL instances scale up resources with an hourly surcharge and scale out by sharding, if desired. "SQL Azure scales out by sharding but not up," he said. "SQL Azure instances have a fixed set of resources not disclosed publicly."
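Scaling out by sharding, as mentioned above, amounts to routing each key deterministically to one of several database instances. A minimal sketch of the idea (the shard names and hashing scheme are illustrative, not anything SQL Azure or RDS prescribes):

```python
import hashlib

def shard_for(key, shards):
    """Pick a shard deterministically from the key's hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]

shards = ["db-shard-0", "db-shard-1", "db-shard-2"]
# The same key always routes to the same shard:
print(shard_for("customer:1042", shards))
```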

Wednesday, February 25, 2009

Ballmer Says Azure Will Release at PDC 2009

According to Elizabeth Montalbano’s Ballmer: Azure ready for release by end of year InfoWorld post of 2/24/2009:

Microsoft plans to release its Windows Azure cloud computing platform before the end of the year, Microsoft CEO Steve Ballmer said Tuesday [at the company’s Winter Wall Street Analysts briefing].

In comments made to members of the financial community, Ballmer said Microsoft will have "the ability to go to market" with Azure by the end of this year at its Professional Developers Conference (PDC) in November.

"[Azure] will reach fruition with the PDC this year," he said. Ballmer spoke to Wall Street analysts Tuesday to give them an update on Microsoft's financial status and what they can expect from the company for the remainder of the fiscal and calendar year. Microsoft's fiscal year ends on June 30.

This represents a significant delivery slip for SQL Data Services (SDS), which was slated to release in 2009H1 when introduced as SQL Server Data Services (SSDS) at MIX08. My SQL Server Data Services to Deliver Entities from the Cloud post of 3/7/2008 provides the low-down on SDS’s predecessor.

The Azure team will need to speed development substantially if promised relational features are to be included in SDS v1 (see my A Mid-Course Correction for SQL Data Services post of 2/24/2009). Most features promised by the SSDS team in mid-2008 still haven’t made it into the current SDS CTP.

Details will be added when a transcript is available.

Dave Cutler Rationalizes Azure “Skunk Works”

Dave Cutler answers his own sixth question in Mary Jo Foley’s “Red Dog: Five questions with Microsoft mystery man Dave Cutler” of 2/25/2009. Cutler writes:

One of the things you did not ask is why aren’t we saying more about Azure and in the process filling the marketplace with sterling promises for the future. The answer to this is simply that the RD group is very conservative and we are not anywhere close to being done. We believe that cloud computing will be very important to Microsoft’s future and we certainly don’t want to do anything that would compromise the future of the product. We are hypersensitive about losing people’s data. We are hypersensitive about the OS or hypervisor crashing and having properties experience service outages. So we are taking each step slowly and attempting to have features 100% operational and solidly debugged before talking about them. The opposite is what Microsoft has been criticized for in the past and the RD dogs hopefully have learned a new trick.

I don’t believe the skunk works approach is appropriate for Azure. Developers need a full description and timeline for required features. Failure to deliver promised features in a timely manner is what led to A Mid-Course Correction for SQL Data Services.

Tuesday, February 24, 2009

A Mid-Course Correction for SQL Data Services

The Azure Services folks have decided that SQL Data Services (SDS) needs more relational attributes at the expense of the “simplicity” policy espoused by the original SQL Server Data Services (SSDS) team. First news about the change in direction came at the MSDN Developer Conference’s visit to San Francisco on 2/23/2009 in conjunction with 1105 Media’s Visual Studio Live! conference at the Hyatt Regency.

Gavin Clarke reports in 'Full' SQL Server planned for Microsoft's Azure cloud in a 2/23/2009 midnight (GMT) post to The Register:

[Microsoft] told The Reg it's working to add as many features as possible from SQL Server to its fledgling Azure Services Platform cloud as quickly as possible, following feedback.

General manager [of] developer and platform evangelism Mark Hindsbro said Microsoft hoped to complete this work with the first release of Azure, currently available as a Community Technology Preview (CTP). But he added that some features might be rolled into subsequent updates to Azure. Microsoft has not yet given a date for the first version of Azure, which was released as a CTP last October.

"We are still getting feedback from ISVs for specific development scenarios they want. Based on feedback we will prioritize features and get that out first," he said.

"The aim is to get that in the same ship cycle of the overall Azure platform but it might be that some of it lags a little bit and comes shortly thereafter."

Hopefully, SDS hasn’t reached the point of no return.

Adding RDBMS Features Reverses Original Policy

Traditional relational databases don’t deliver the extreme scalability expected of cloud computing in general and Azure in particular. So SQL Server Data Services (SSDS) adopted an Entity-Attribute-Value (EAV) data model built on top of a customized version of SQL Server 2005 (not SQL Server 2008), as I reported in my Test-Drive SQL Server Data Services cover story for Visual Studio Magazine’s July 2008 issue.

SSDS architect Soumitra Sengupta posted Philosophy behind the design of SSDS and some personal thoughts to the S[S]DS Team blog on 6/26/2008. According to Soumitra, the first and foremost problems the team needed to solve were:

  1. Building a scale free, highly available consistent data service that is fault tolerant and self healing
  2. Building the service using low cost commonly available hardware
  3. Building a service that was also cheap to operate - lights out operation

The team favored simplicity at the expense of traditional relational database features, which potential users (such as me) expected.

Since we made an early decision to limit the number of hard problems we needed to solve, we decided that we would focus less on the features of the service but more on the quality of the service and the cost of standing up and running the service. The less the service does, we argued, the easier it would be for us to achieve our objectives. In hindsight, this was probably one of the best decisions we made. Istvan, Tudor and Nigel deserve special credit for keeping us focused on "less is better".

The result of this policy was schemaless EAV tables that offered flexible properties (property bags) in an Authority-Container-Entity (ACE) architecture that mystified .NET developers, who were then in the process of about-facing their mindset from traditional SQL queries to .NET 3.5’s Language Integrated Query (LINQ) constructs and object/relational mapping with LINQ to SQL and the Entity Framework. SSDS offered SOAP and REST data access protocols with a very limited query syntax.

The SSDS folks believed the simplified ACE construct made it easy for developers who weren’t database experts to create data-driven applications that used SSDS instead of Amazon Web Service’s SimpleDB or the Google App Engine as a scalable data store in the cloud.
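The ACE shape described above can be sketched in a few lines: entities are schemaless property bags keyed inside containers, which live inside an authority. The names and helper functions here are illustrative only, not the SSDS API:

```python
# Minimal Authority-Container-Entity (ACE) sketch with flexible
# property bags: entities in the same container need not share a schema.
store = {}  # authority -> container -> entity_id -> property bag

def put_entity(authority, container, entity_id, **properties):
    store.setdefault(authority, {}).setdefault(container, {})[entity_id] = properties

def query(authority, container, **where):
    """Return entities whose flexible properties match every filter."""
    entities = store.get(authority, {}).get(container, {})
    return {eid: props for eid, props in entities.items()
            if all(props.get(k) == v for k, v in where.items())}

put_entity("oakleaf", "customers", "c1", name="Contoso", city="Oakland")
put_entity("oakleaf", "customers", "c2", name="Fabrikam")  # no city property
print(query("oakleaf", "customers", city="Oakland"))       # only c1 matches
```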

Less wasn’t Better

Apparently, “less” didn’t turn out to be “better” for the .NET developers who are Azure’s target audience. Microsoft promotes the Azure Services Platform’s ability to leverage their Visual Studio 2008 expertise. VS 2008 is all about ADO.NET, object/relational mapping (O/RM), and integration with SQL Server 200x via the SqlClient classes. SSDS’s REST interface didn’t even align with the heavily promoted ADO.NET Data Services.

Gavin continues:

According to Hindsbro, partners want a full SQL Server database in the cloud. The current SQL Data Services (SDS), which became available last March, provides a lightweight and limited set of features. Prior to SDS, Microsoft's database service was called SQL Server Data Services.

"If you go there now you will find more rudimentary database experiences exposed. Not a lot of these apps would be interesting without a full database in the cloud, and that is coming," Hindsbro said.

He did not say what SQL Server features Microsoft would add to Azure, other than to say it'll include greater relational functionality.

Microsoft in a statement also did not provide specifics, but said it's "evolving SDS capabilities to provide customers with the ability to leverage a traditional RDBMS data model in a cloud-based environment. Developers will be able to use existing programming interfaces and apply existing investments in development, training, and tools to build their applications."

The pre-beta SDS restricts what users can do in a number of ways that make it hard to set up and manage and that limit its usefulness in large deployments.

Gavin’s last paragraph is an understatement, to be charitable.

Less is Azure Table Services

Azure’s early testers are mystified by SDS’s overlap with Azure’s Table Service, which has a feature set that’s almost identical to SDS today, but is aligned with ADO.NET Data Services and its LINQ to REST queries.

Microsoft’s standard answer to Azure and SDS Forum questions, such as “The confusion here is why are there two different kinds of storage. Are they different?  If so why and if not what is the relation?” in the Azure Forum’s Difference between Azure Storage and SDS Storage thread and SDS Forum’s What Are the Advantages of SDS Over Table Storage Services with the StorageClient Helper Classes? thread is:

"SDS will provide scalable relational database as a service (today, Joins, OrderBy, Top...are supported) and as it evolves, we plan to support other features such as aggregates, schemas, fan-out queries, and so on.  SDS just like any other database also supports blobs.  SDS is for Unstructured, Semi, and Structured data, with a roadmap of having highly available relational capabilities."

Microsoft won’t reveal pricing for Azure services, but it’s clear that SDS is positioned as a value-added feature with premium per-hour and per-GB storage charges compared with prices for renting plain old tables (POTs).

Early RDBMS Feature Promises

The SDS team began promising more SQL Server features shortly after releasing the SQL Server Data Services (SSDS) invitation-only CTP on 3/5/2008 at the MIX08 conference. Primary examples were optional schemas, full-text indexing and search, blob data type, ORDER BY clauses for queries, cross-container queries, transactions, JOINs, TOP(n), simplified backup and restore, and alignment of the REST API with ADO.NET Data Services.

The team delivered blobs, pseudo-JOINs, ORDER BY, and Take (but not Skip) by PDC 2008 (late October 2008) when the Azure invitation-only CTP released. My SQL Data Services (SDS) Update from the S[S]DS Team post of 10/27/2008 describes the new features in Sprint #5.

The SDS team will need to deliver all the promised features, and perhaps a few more, to justify a significant increase to service charges over those for Azure tables.

Competition from Amazon

In the meantime, Amazon Web Services (AWS) announced on 10/1/2008 that Amazon EC2 “will offer you the ability to run Microsoft Windows Server or Microsoft SQL Server … later this Fall.” My Amazon Adds SQL Server to Oracle and MySQL as EC2 Relational Database Management Systems post of 10/1/2008 has more details. Amazon announced support for IBM DB2 and Informix Dynamic Server in this IBM and AWS page on 2/11/2009.

EC2 currently supports Windows Server 2003 R2 and SQL Server 2005 Express and Standard editions. There’s no surcharge for the Express edition and the surcharges for the three instance types that offer SQL Server Standard edition are:

Instance Type          Surcharge/Hour   Surcharge/Year
Standard Large         US0ドル.60          US5,256ドル
Standard Extra Large   US1ドル.20          US10,512ドル
High-CPU Extra Large   US1ドル.20          US10,512ドル

Note that SQL Server Standard isn’t available for the Standard Small instance type, which costs US0ドル.375 per hour less than Standard Large. So even if you don’t need Standard Large’s added capacity, running SQL Server Standard costs you an extra US3,285ドル per year for the larger instance.
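The yearly figures above are just the hourly surcharge times the 8,760 hours in a year, and the Small-versus-Large gap works out the same way; a quick check:

```python
# Yearly surcharge = hourly surcharge x hours per year (non-leap year).
hours_per_year = 24 * 365              # 8,760 hours
print(round(0.60 * hours_per_year))    # 5256  -> Standard Large
print(round(1.20 * hours_per_year))    # 10512 -> the Extra Large types
print(round(0.375 * hours_per_year))   # 3285  -> Small-vs-Large gap per year
```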

You probably can’t beat AWS’s SimpleDB for low-cost usage and storage charges. Amazon now offers a simplified SQL subset for querying SimpleDB EAV tables.
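Queries in that SQL subset are select expressions over a domain’s flexible attributes. A sketch of composing one (the select shape follows SimpleDB’s documented syntax; the domain and attribute names are made up for illustration, and real code should escape values rather than interpolate them):

```python
# Compose a SimpleDB-style select expression over an EAV domain.
def simpledb_select(domain, attribute, value):
    return f"select * from {domain} where {attribute} = '{value}'"

print(simpledb_select("products", "category", "books"))
```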

Soumitra’s SQL Server Data Services (SSDS) is simple, but it is not SimpleDB post of 3/7/2008 claims that SSDS isn’t a SimpleDB-compete and concludes:

Underneath the hood, the service is running on SQL Server.  So the rich capabilities of our server software is all there.  We have chosen to expose a very simple slice of it for now.  As Nigel explained, we will be refreshing the service quite frequently as we understand our user scenarios better.  So you can expect to see more capabilities of the Data Platform to start showing up in our service over time.  What we announced here is just a starting point, our destination remains the extension of our Data Platform to the cloud.  I know you are asking "I need more details and a timeline".  As we on-board beta customers and get their feedback, we will be able to give you more details.

Whether or not SSDS is a SimpleDB-compete, I’m sure that the SDS Team would like to offer their product at a surcharge that’s competitive with Amazon’s for SQL Server Standard.

Silence Isn’t Golden

In the first few months of SSDS’s existence, the team posted frequently to the S[S]DS Team Blog, but went silent after PDC 2008. I mentioned the lack of communication in my The SQL Data Services Team’s Recent Silence Isn’t Golden post of 1/3/2009.

Jeff Currier replied in a comment:

We've been a bit more silent than usual because the features we've been focusing on have been more of an operational nature (and therefore not customer facing). This should explain the recent silence (along with the holidays).

That might be an explanation, but it isn’t a very satisfactory one.

Dave Robinson posted SQL Data Services – What’s with the silence? today, presumably in response to The Register’s article:

Just wanted to drop a quick note. People are starting to question what’s going on in the SDS world and why we have been so silent. Well, to be honest, we have been so silent because the entire team has been heads down adding some new exciting features that customers have been demanding. Last year at Mix we told the world about SDS. This time around we will be unveiling some new features that are going to knock your socks off. So, that’s it for now. Just wanted to let everyone [know] the team is alive and well and super excited for the road ahead. We are 3 weeks away from Mix so hang on just a little bit longer. Trust me, it’s worth it.

I’d like to know why it’s “worth it” to wait for MIX09 to find out what’s in store for SDS and when can we finally expect it.

What new features are going to knock my socks off?

Saturday, November 01, 2008

Microsoft Must Publish Projected Azure Service Levels and Pricing Now

Art Whittman asks Is The Cloud The End Of Microsoft? in his InformationWeek column of 10/30/2008 and then provides a partial answer:

Microsoft's failure to explain any aspect of its cloud business model renders the rest of its good words about as intelligible as Charlie Brown's teacher. Its competition can tell you exactly how you'll pay for services, and for a developer looking to field their own SaaS product, that makes all the difference. More than anything, Microsoft is describing what's come to be known as platform as a service. The platform is for developers, and developers have to understand how (or whether) they'll make money.

Update 11/1/2008: Rocky Lhotka says in his Thoughts on PDC 2008 post of 10/31/2008:

The real question is whether [porting my Web site to Azure] would even make sense, and that comes down to the value proposition. One big component of value is price. Like anyone else, I pay a certain amount to run my web site. Electricity, bandwidth, support time, hardware costs, software costs, etc. I've never really sorted out an exact cost, but it isn't real high on a per-month basis. And I could host on any number of .NET-friendly hosting services that have been around for years, and some of them are pretty inexpensive. So the question becomes whether Azure will be priced in such a way that it is attractive to me. If so, I'm excited about Azure!! If not, then I really don't care about Azure.

I suspect most attendees went through a similar thought process. If Microsoft prices Azure for "the enterprise" then 90% of the developers in the world simply don't care about Azure. But if Microsoft prices Azure for small to mid-size businesses, and for the very small players (like me), then 90% of the developers in the world should (I think) really be looking at this technology.

Ray Ozzie said in his PDC keynote of 10/27/2008:

When it is released commercially, Windows Azure will have a very straightforward business model, with costs primarily being derived as a function of two key factors[: An app’s] resource consumption and a specific service level that we agree to provide.

The pricing and models for all the Azure services will be competitive with the marketplace, and we'll provide a variety of offers and service levels where there may be differentiated requirements across the breadth of developers and markets that we serve as a company from the individual developer to the enterprise.

I don’t believe that platitudes promising “competitive” pricing and models will be sufficient when the primary competitor, Amazon Web Services (AWS), has a published service level agreement (SLA) and resource price list that potential users can use to analyze their business plans for cloud deployment.

Furthermore, the SDS (then SSDS) team has claimed that they don’t compete against AWS. See Soumitra Sengupta’s It is simple, but it is not SimpleDB post of March 7, 2008 and Nigel Ellis’s observation quoted from Tim Anderson’s Microsoft reveals its database for the cloud article in the Register for 3/10/2008:

This is not a SimpleDB compete. Our goal is to take our existing enterprise server products and capabilities and give that a service delivery mechanism. This is only a start... four months from now, it's going to look very different.

It’s no wonder that, as noted on PDC 2008’s main Web page on 10/29/2008, the ES18 Business Considerations for Cloud Computing at 4:45PM in Petree Hall has been CANCELLED.

Wednesday, October 29, 2008

LINQ and Entity Framework Posts for 10/27/2008+

Note: This post is updated daily or more frequently, depending on the availability of new articles.

• Updated 10/29/2008 8:00 AM PDT: Additions

Entity Framework and Entity Data Model (EF/EDM)

Paul Gielens liveblogs Tim Mallalieu’s The Future of the Entity Framework (TL20) PDC 2008 session on 10/28/2008. The Entity Framework Futures video should appear shortly on Channel9.

Danny Simmons used Damien Guard’s T4 templates for LINQ to SQL as inspiration for his Using T4 Templates to generate EF classes post of 10/28/2008. At this point, Danny’s template is an “early draft” and is missing a VB version and some other features. However, it’s a step in the right direction.

Faisal Mohamood’s Foreign Keys in the Conceptual and Object Models post of 10/27/2008 compares the treatment of foreign key values in LINQ to SQL (visible) and EF/EDM (not visible), as well as the trade-offs of both approaches. The topic is controversial, but I’m in favor of making FK values optional in EF/EDM, just as adding the “Set” suffix to entity sets should have been optional (and presumably will be in v2).

LINQ to SQL

• Sidar Ok’s Lazy Loading with Linq to SQL POCO post of 10/28/2008 shows you how to implement lazy loading for his earlier Achieving POCO s in Linq to SQL project with the LinFu Dynamic Proxy.

Sebastien Lachance suggests use of the DataContext’s DeleteDatabase() and CreateDatabase() methods to assure that test databases are in the proper state in his Unit Testing, LinqToSql and CreateDatabase post of 10/27/2008.

Ilan Assayag provides links to methods of providing Configurable connection string with Linq to SQL in his 10/27/2008 post.

LINQ to Objects, LINQ to XML, et al.

Charlie Calvert begins coverage of C# 4.0 in his LINQ Farm: Covariance and Contravariance in C# 4.0 post of 10/28/2008.

ADO.NET Data Services (Astoria)

Pablo Castro’s Now you know...it's Windows Azure post of 10/28/2008 describes Azure’s table service for structured storage of entity/attribute/value (EAV) data in row/column (property) containers (partitions). Table service features an Astoria-compatible RESTful external interface, as well as internal ADO.NET connectivity. Azure includes .NET 3.5 SP1 so you can use the current Astoria client to interact with table services.

Pablo and Niranjan Nilakantan will present Windows Azure: Modeling Data for Efficient Access at Scale (ES07 | Wed 10/29 | 1:15 PM-2:30 PM | 403AB):

Learn how to use the highly scalable, available and durable table storage service. This session presents a deep dive with demos into the programming APIs and data models for structured storage.

Brad Calder presented the Windows Azure Storage: Essential Cloud Services (ES04 01:14:50) PDC session on 10/28/2008:

Modern services need available, scalable and durable data in many forms, including both structured and unstructured data. This session presents blob, table and queue storage services and the APIs for manipulating and querying data.

Brad Calder is Director/Architect of Cloud Storage, which is the essential scalable, available and durable storage for Microsoft’s Cloud Platform.

Brad’s session describes Azure’s three basic data abstractions: Blobs, Tables, and Queues. Table-oriented content begins at 00:21:29. ADO.NET Data Services access with LINQ or REST starts at 00:30:00. Queue content begins at 00:40:00.

Matthieu Mezil’s ADO.NET Data Services Hooking POC V4 post of 10/28/2008 improves on Astoria’s rights management approach by moving it from the service to the business logic layer’s entity class. His downloadable proof of concept code is available from CodePlex.

Joe Gregorio, Google’s main man for the Atom Pub Protocol, offers a 15-minute An Introduction to REST video he describes as:

Google Data APIs are based on the Atom Publishing Protocol and both Google Data APIs and AtomPub get many advantages from being RESTful protocols. Often the meaning of REST and the advantages of RESTfulness go unexplained so I put together this short 15 minute video that explains REST and some of the advantages you get with a protocol built in that style.

Steve Maine’s Announcing the WCF REST Starter Kit post of 10/27/2008 describes the REST Starter Kit as follows:

When I talk to customers about the API’s for building REST services we added to WCF in .NET 3.5, most people “get it” at some abstract level but come away really wishing for deeper guidance and samples on how to use the platform to address common problems. To answer some of those questions, we’ve taken about 20 of the top customer questions and put them into the REST Starter Kit download as SDK samples so you can look at code and see how the API’s are used in practice.

ASP.NET Dynamic Data (DD)

Scott Hunter will present ASP.NET Dynamic Data [for MVC] (PC30 | Wed 10/29 | 3:00 PM-4:15 PM | 411):

The next version of ASP.NET MVC contains a new scaffolding feature based on Dynamic Data that provides a rich framework for creating data driven web sites. Learn how to quickly build a Dynamic Data web site using features like model level validation, field and entity templates, and scaffolding.

Mike Ormond’s Lots of new ASP.NET bits post of 10/28/2008 notes that the ASP.NET CodePlex site has the following new items:

    • ASP.NET AJAX 4.0 Preview 3
    • ASP.NET Dynamic Data 4.0 Preview 1
    • ASP.NET MVC Beta Source Code Release

SQL [Server] Data Services (S[S]DS) and Cloud Computing

Jim Nakashima covers the simplest way to get a Blob Storage service up and running in his Windows Azure Walkthrough: Simple Blob Storage Sample post of 10/29/2008.

OakLeaf’s Cloud Computing at PDC and Elsewhere: Day 2 (10/28) is the second “daily compendium of PDC keynotes and sessions about Cloud Computing, SQL [Server] Data Services, and related topics.”

• Jim Nakashima’s ASP.Net MVC Projects running on Windows Azure post of 10/28/2008 describes how to tweak an ASP.NET MVC project to run on WinAz and includes a sample application.

MSDN’s SQL Data Services Developer Center has updated documentation for SDS (updated from SSDS).

Anthony Carrabino provides an overview of what’s new in SDS in his New version of SQL Data Services part of “Azure” CTP post of 10/28/2008.

Jim Nakashima’s Deploying a Service on Windows Azure post of 10/28/2008 is a detailed, illustrated primer that explains how to deploy as a .NET Service the simple ASP.NET app he demonstrated in his Quick Lap around the Windows Azure Tools for Microsoft Visual Studio article.

Ryan Dunn explains What's new in SQL Data Services for Developers in this Channel9 screencast wherein

Ryan visits Jason Hunter and Jeff Currier, a couple of the lead developers on the SQL Data Services team (formerly called SQL Server Data Services) to find out what the new PDC CTP build of the SDS service brings for developers.

The SSDS Getting Started Forum’s New Features thread discusses the lack of cross-container joins in the new SDS version.

Soumitra Sengupta’s Microsoft Announces Windows Azure and Azure Services Platform post provides a brief synopsis of previous and forthcoming PDC 2008 sessions about SQL Data Services.

OakLeaf’s SQL Data Services (SDS) Update from the S[S]DS Team of 10/27/2008 is a copy of Soumitra Sengupta’s e-mail sent to all registered SSDS users about the upgrade to SDS coming in “early” and “mid-November.”

OakLeaf’s Cloud Computing at PDC and Elsewhere: Day 1 (10/27) is the first of “a daily compendium of PDC keynotes and sessions about Cloud Computing, SQL Server Data Services, and related topics.”

SQL Server Compact (SSCE) 3.5 and Sync Services

Lev Novik’s Microsoft Sync Framework Advances in v2 PDC 2008 presentation of 10/28/2008 “shows you how the next version of the Microsoft Sync Framework makes it easier to synchronize distributed copies of data across desktops, devices, services, or anywhere else they may be stored.”

Liam Cavanaugh’s Announcing Sync Framework v2 CTP1 post of 10/28/2008 describes the first glimpse of Sync Services v2, which adds the following new features:

    • Simple Providers
    • Change Unit Filtering
    • Filter negotiation

Liam Cavanaugh explains sync-related SQL Services from the SQL Services Labs incubator in his Introducing SQL Services Labs post of 10/27/2008.

Miscellaneous (WPF, WCF, MVC, Silverlight, etc.)

• Jim Nakashima’s ASP.Net MVC Projects running on Windows Azure post of 10/28/2008 describes how to tweak an ASP.NET MVC project to run on WinAz and includes a sample application. (Repeated from SDS and Cloud Computing.)

Rob Conery is finally getting closer to release of his venerable MVC sample project, as noted in his MVC Storefront Preview 1 Available post of 10/28/2008.

Mike Ormond’s Lots of new ASP.NET bits post of 10/28/2008 notes that the ASP.NET CodePlex site has the following new items:

    • ASP.NET AJAX 4.0 Preview 3
    • ASP.NET Dynamic Data 4.0 Preview 1
    • ASP.NET MVC Beta Source Code Release

(Repeated from the Dynamic Data topic.)

John Papa waxes enthusiastic in his Silverlight 2 - What a Ride! post of 10/27/2008 about Silverlight 2 adoption for O’Reilly’s InsideRIA blog. But the Flash proponents claim Silverlight 2 hasn’t achieved significant usage. I agree with John; until Silverlight 1.0 reared its head, Adobe and Macromedia were resting on their laurels. Flash (and Flex) need competition.

Tim Heuer’s Silverlight Toolkit Released – More controls! post of 10/28/2008 describes the new Silverlight Toolkit that RTW’d on the same date.

Shawn Wildermuth’s

Mike Snow explains that there is now a released version of the Silverlight Tools for Visual Studio 2008 SP1 in Silverlight Tools RTW Released! of 10/28/2008.

Rob Conery’s SubSonic MVC Addin Updated for Beta 1 post of 10/27/2008 announces an updated version of the MVC Add in for his flagship object/relational modeling tool, SubSonic.

Monday, October 27, 2008

Cloud Computing at PDC and Elsewhere: Day 1 (10/27)

A daily compendium of PDC keynotes and sessions about Cloud Computing, SQL Server Data Services, and related topics. This post will be updated frequently from 8:00 AM to 5:00 PM PDT or later. Unless otherwise noted all blog posts are dated 10/27/2008.

Ray Ozzie Keynote: Red Dog Turns Azure

Yet another example of a “heap big smoke; no fire” PDC keynote. As expected, Ray Ozzie announced the name of SQL Server Data Services’ and Live Services’ underlying cloud operating system that Steve Ballmer recently called Windows Cloud. The new name is Windows Azure (WinAz).

 

Steve Marx built and deployed a “Hello Cloud” service on a machine with WinAz SDK and the Azure Tools for VS 2008 SP1 installed. The newly scaled-up service is live at http://hellocloud.cloudapp.net. Steve Marx: Windows Azure for Developers is a Channel9 video interview with Steve for the developer audience. The Manuvir Das: Introducing Windows Azure interview is directed toward IT management.

Bob Muglia assured developers that WinAz will be the “next generation of platform for developers to take advantage of” with “built-in scale-out Service bus; federated access control; and scale-out workflow services so workflows span from on-premises to the cloud.” A primary feature will be symmetry between on-premises application design and debugging with the Windows Azure SDK and projects deployed to the WinAz cloud.

J. Nicholas Hoover’s Microsoft PDC Live Blog for Information Week is one of the first reasonably complete and readable synopses of the keynotes, so this completes OakLeaf coverage of the day one keynote.

Mary Jo Foley offers a more technically oriented post, Microsoft’s Azure cloud platform: A guide for the perplexed, and answers the rhetorical Why ‘Azure’? question. She cites the following Azure PDC usage limitations:

    • Total compute usage: 2000 VM hours
    • Cloud storage capacity: 50GB
    • Total storage bandwidth: 20GB/day

Joe Wilcox’s Azure: Windows Becomes the Web post is more marketing oriented. Joe says:

If there was ever a "Microsoft conquers, or perhaps becomes, the Web" strategy, Azure is it. The Web services—cloud computing—platform is brilliant in concept, but execution will determine whether or not Microsoft walks rather than just talks.

Here’s the complete transcript of the Day One keynotes from Microsoft PressPass: Professional Developers Conference 2008 Day 1 Keynote: Ray Ozzie, Amitabh Srivastava, Bob Muglia, Dave Thompson.

Useful Windows Azure Links for Developers

Soma Somasegar provides more details on Windows Azure Tools for Microsoft Visual Studio in his Windows Azure Tools for Microsoft Visual Studio CTP post, which includes Solution Explorer and New Project screen captures. He also discusses the Emerging Trends pillar of VS 2010.

The OakLeaf Blog’s SQL Data Services (SDS) Update from the S[S]DS Team post of 10/27/2008 provides a detailed list of changes to SQL Data Services (SDS) (formerly SQL Server Data Services, SSDS) in Sprint #5.

Ryan Dunn posted a link to the What's new in SQL Data Services for Developers Channel9 screencast that includes details on new relational features (joins) and blob support in the upcoming public CTP.

From the Cloud Computing Tools Team’s Bookmarks: Windows Azure post of 10/27/2008:

Here's a set of bookmarks that you'll find useful as you ramp up and use Windows Azure.

Downloads:

Portal:

Community:

Some of these links haven’t fully propagated yet. If you receive 404’s, try again later today.

Once you have the SDK and Tools, check these links from the Register for Azure Services page:

and the Azure Tools blog:

Note that the SQL Data Services SDK is the SQL Server Data Services SDK with a new name.

Posts About Cloud-Related Sessions

Channel9’s John Shewchuk and Dennis Pilarinos: Inside .NET Services video covers much of the content of BB01 A Lap Around the Azure Services Platform (Mon 10/27 | 3:30 PM-4:45 PM | Petree Hall CD).

Channel9 describes the Dave Campbell: Inside SQL Services video thusly:

Technical Fellow Dave Campbell digs into the "fabric" of Azure's SQL Services. What are the current capabilities of SQL Services and how will they evolve? Can you upload stored procedures to the cloud and expect them to run? What does extending a shrink-wrapped application to the world of distributed cloud services really mean?

You can expect the video to cover much of the content of Dave’s BB15 SQL Server: Database to Data Platform - Road from Server to Devices to the Cloud (Mon 10/27 | 5:15 PM-6:30 PM | 408B).

Paul Gielens’ A Lap Around Windows Azure post of 10/27/2008 is a detailed analysis of session ES16 A Lap Around Windows Azure (Mon 10/27 | 11:00 AM-12:15 PM | Petree Hall CD), presented by Manuvir Das. Paul has several earlier posts that are equally insightful.

SQL Services Incubation Projects

Following are links to descriptions of current SQL Services Incubation projects:

  • Codename “Astoria” Offline - Version 1 of ADO.NET Data Services Framework (a.k.a. Project "Astoria") introduced a way of creating and consuming flexible, data-centric REST services. Now we are working on creating an end-to-end story for taking data services offline using synchronization. Integrating data services with the Microsoft Sync Framework will enable developers to create offline-capable applications that have a local replica of their data, synchronize that replica with an online data service when a network connection becomes available, and use replicas with the ADO.NET Entity Framework for regular data access.
  • Accessing SDS using ADO.NET Data Services - This incubation project focuses on aligning SDS and ADO.NET Data Services. With this alignment, SDS will support the AtomPub and JSON formats. It will also provide support for an established set of conventions for constructing URLs that point to resources. We are also extending ADO.NET Data Services to provide access to the flexible data stored in SDS.
  • Data Mining in the Cloud - The SQL Server Data Mining team is working to extend the power and ease of use of SQL Server Data Mining to the Cloud. Our goal is to provide services that allow you to build rich, predictive applications without worrying about server infrastructure, and to showcase these services with cool applications that give you a glimpse of what’s possible.
  • U Rank – This Microsoft Research project explores how personalization, social context, and communication may be used to improve the search experience, and leverages SQL Data Services to power the service. Use the search engine to re-rank search results, move results from one search to another, add notes, and otherwise edit searches. Not only will you see your changes again the next time you come back, but your friends will see the changes too!
  • Project Codename "Anchorage" – We’re evolving the popular SyncToy application to enable much more than just file/folder synchronization between PCs, devices, and services! With this project, providers will be able to register and be discovered in a variety of sync groups including contacts, files, favorites, videos, as well as photos across endpoints such as the Live Mesh, PhotoBucket.com, Smugmug.com, and more.
  • Project Codename “Huron” – Leverage the power of SQL Data Services to enable enterprise edge scenarios using the technologies in this incubation! Share data with relational stores like Access, SQL Express, SQL CE, and SQL Server; enable B2B data sharing; and push workgroup databases to field workers and mobile users.
  • Reporting against SQL Data Services – Leverage SQL Server Reporting Services (SSRS) 2008 to build and deploy rich reports against data hosted in SQL Data Services (SDS). The SSRS data source extensibility framework is used to provide an incubation custom data extension for SDS. Developers can download the custom extension and configure it against their on-premises SSRS 2008 installation. This will allow them to connect to SDS authorities and containers via HTTP SOAP to extract data sets, build rich reports using standard tools like Report Designer / Report Builder, and deploy the reports to Report Manager.

VS 2010 Prerelease Download

Get your copy from the Microsoft Pre-release Software Visual Studio 2010 and .NET Framework 4.0 Community Technology Preview (CTP) page. The CTP is available only as a Virtual PC image and requires Virtual PC 2007 SP1. VS 2010 is not required to run the WinAz SDK or VS 2008 WinAz tools.

Links to Significant Cloud Computing Background Content

The Economist presents a multipart report on cloud computing by Ludwig Siegele in its 10/23/2008 edition:

Click here for an audio interview of the author.

The Seattle Times offers a Details about Microsoft's cloud computing expected at conference preview by Benjamin J. Romano:

Microsoft is expected to sort out its strategy for cloud computing, a broad change in how computer users retrieve and process information and applications, at the company's Professional Developers Conference this week.

Tim O’Reilly attempts in Web 2.0 and Cloud Computing of 10/26/2008 to explain why Larry Ellison is both right and wrong about the future of cloud computing:

A couple of months ago, Hugh Macleod created a bit of buzz with his blog post The Cloud's Best Kept Secret. Hugh's argument: that cloud computing will lead to a huge monopoly. Of course, a couple of weeks ago, Larry Ellison made the opposite point, arguing that salesforce.com is "barely profitable", and that no one will make much money in cloud computing.

In this post, I'm going to explain why Ellison is right, and yet, for the strategic future of Oracle, he is dangerously wrong.

Tim can’t avoid Web 2.0 agitprop in his conclusion:

So here's the real trick: cloud computing is real. Everything is moving into the cloud, in whole or in part. The utility layer of cloud computing will be just that, a utility, without outsized profits.

But the cloud platform, like the software platform before it, has new rules for competitive advantage. And chief among those advantages are those that we've identified as "Web 2.0", the design of systems that harness network effects to get better the more people use them.

SQL Data Services (SDS) Update from the S[S]DS Team

The S[S]DS team sent the following e-mail about the result of Sprint #5 to registered users of SQL Data Services (SDS), formerly SQL Server Data Services (SSDS), which runs under the Windows Azure cloud operating system:

This morning, we announced the latest upgrade of SSDS [Sprint #5] at the PDC2008 conference in Los Angeles. We have also changed the name of the service from SSDS to SDS (SQL Data Services). In order to support this announcement, the documentation on our DevCenter has been updated to cover the new features in this upgrade.

Key members of our team are at the PDC. We hope to be able to meet with many of you who are attending this event. Please come to our talks, stop by our booth, and say "Hello" to us in the services lounge.

The SDS upgrade announced at the PDC will be made available to you in early November and broadly available as a public CTP (Community Technology Preview) in mid-November. [Emphasis added.]

New features announced today:

Additional Query Support

a) Joins

Join query support in SQL Data Services (SDS) allows you to retrieve entities from a container based on a join condition involving properties on different kinds of entities. For example, if you have a container with customer and order entities, a query to find orders for a given customer requires you to join the customer and order entities on a common property. Because both the customers and orders are in the same container, you query the same container twice (using aliases): first find the customer, and then find that customer’s orders using a join condition.
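A hypothetical sketch of such a self-join, modeled on the OfKind example in section b) below (the Customer and Order kind names, the CustomerId property, and the bracketed flexible-property accessor are illustrative assumptions, not verified SDS grammar):

from c in entities.OfKind("Customer")
from o in entities.OfKind("Order")
where o["CustomerId"] == c["Id"]
select o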

b) OfKind

To simplify the join syntax, SQL Data Services (SDS) has introduced an OfKind function. This function can be specified in a query’s From clause to distinguish among multiple Kinds within a container.

An example would be:

from c in entities.OfKind("Customer") select c

c) Order By

SQL Data Services (SDS) now supports an Order By clause in its query syntax. This optional clause returns your query results ordered by the property of your choosing, in either ascending or descending order.

d) Take

SQL Data Services (SDS) now supports a Take function in its query language. This new function can be used to restrict the number of entities returned in a given query.
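A hypothetical query combining the Order By clause and the Take function, again in the style of the OfKind example above (the City property and the composition of Take over a parenthesized query are assumptions about the syntax, not verified SDS grammar):

(from c in entities.OfKind("Customer")
 orderby c["City"] descending
 select c).Take(10)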

.Net Access Control Integration (SOAP Only)

The .NET Access Control Service is a hosted, secure, standards-based infrastructure for multi-party, federated authentication and rules-driven, claims-based authorization. SQL Data Services (SDS) supports authentication and authorization via tokens issued by the .NET Access Control Service. This allows applications to secure not only their own web service layer, but also their SDS-based data layer, using the same declarative access control mechanism.

Other forms of credentials, besides username/password, can be used to obtain a token from the identity provider. They are:

a) X.509 certificate

b) InfoCard

In this release, using the token-based authentication scheme with SDS has the following limitations:

a) When communicating with SDS, token-based security is available only when using the SOAP protocol. For applications using the REST protocol, only basic authentication (username/password) is supported.

b) Requires a .NET Services solution account.

Metrics

Service users often want to know the usage pattern of the service. A Microsoft® SQL Data Services (SDS) user may want to find:

a) How many containers or entities do I have?

b) What is the total storage consumed by an authority or a container?

c) What is the amount of storage used by blobs in a container?

d) What is the aggregate number of requests (GET, POST, PUT and so on) sent against an authority or a container?

e) What is the number of request and response bytes sent with a specific authority or container in scope?

The service provides this information as properties on authorities and containers.

User Limits during the public CTP (these limits will change when we go "Live")

a) Each user will be allowed 50 GB of storage across all Authorities

b) 1,000 Containers per Authority

c) 1 GB of Blob Entities per Container (up from 500 MB)

d) 100 MB of Flexible Entities per Container (up from 20 MB)

e) Each Blob Entity will be capped at 100 MB

For more detail, please visit SDS DevCenter at http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx

Thank You,

The SSDS Team

When this article was posted, there were no more details about the S[S]DS upgrade at the preceding link.

Ryan Dunn posted a link to the What's new in SQL Data Services for Developers Channel9 screencast that includes details on new relational features (joins) and blob support in the upcoming public CTP.

Channel9 presents the Dave Campbell: Inside SQL Services video with the following deck:

Technical Fellow Dave Campbell digs into the "fabric" of Azure's SQL Services. What are the current capabilities of SQL Services and how will they evolve? Can you upload stored procedures to the cloud and expect them to run? What does extending a shrink-wrapped application to the world of distributed cloud services really mean?

You can expect the video to cover much of the content of Dave’s BB15 SQL Server: Database to Data Platform - Road from Server to Devices to the Cloud (Mon 10/27 | 5:15 PM-6:30 PM | 408B).

Sunday, October 19, 2008

LINQ and Entity Framework Posts for 10/13/2008+

Note: This post is updated daily or more frequently, depending on the availability of new articles.

Update 10/18/2008 5:30 PM PDT: Minor additions
• Update 10/16/2008 4:30 PM PDT: ASP.NET MVC Beta and other additions
• Update 10/14/2008 5:00 PM PDT: Minor additions

Entity Framework and Entity Data Model (EF/EDM)

Matthieu Mezil’s ADO.NET Data Services Hooking POC V3 post of 10/17/2008 continues his Proof of Concept series about using EF as a data source for ADO.NET Data Services.

He finishes implementing the IUpdatable interface and enables the System.Collections.ObjectModel.Collection’s Count property in ADO.NET Data Services Hooking POC V3 .1 of 10/18/2008.

Note: The current v3.1 CodePlex download appears to be a T-SQL script, not source code for an IUpdatable implementation.

Ezequiel Sculli’s How-To: Perform Update Actions using an ObjectContainerDataSource with Entity Framework post of 10/16/2008 describes how to work around the failure of EF to update the data store with changes and display them in the postback. The workaround, according to patterns & practices’s How-To: Perform Update Actions using an Object Container Data Source with Entity Framework is to create a new instance of the entities for each operation performed.

Note: The ObjectContainerDataSource is a component of the Web Client Software Factory (WCSF).

•• The EF Team posted Migrating from LINQ to SQL to Entity Framework: Deferred Loading, the second in its EF from LINQ to SQL migration series, on 10/16/2008.

When the team finishes the series, will we receive the official deprecation notice for LINQ to SQL? See my Is the ADO.NET Team Abandoning LINQ to SQL? post of May 23, 2008.

Beth Massi’s Editing Data from Two Tables in a Single DataGridView post of 10/15/2008 explains how to update data from two tables in a DataGridView by entity splitting with the Entity Framework, LINQ to SQL, and DataSets as data sources.

Kristofer Andersson updated the Huagati DBML/EDMX Tools to version 1.40 on 10/15/2008. For more information, see the entry in the LINQ to SQL section.

David Sceppa announced on 10/14/2008 that Npgsql's ADO.NET Provider for PostgreSQL Supports the ADO.NET Entity Framework! For more details, see the PGFoundry blog’s Release Notes for Npgsql2.0RTM.

Matthieu Mezil’s Bug with IQueryable and yield syntax: System.BadImageFormatException "An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)" post of 10/14/2008 describes an exception that occurs when attempting to execute the yield return statement on an IQueryable<T> sequence.

LINQ to SQL

Beth Massi’s Editing Data from Two Tables in a Single DataGridView post of 10/15/2008 explains how to update data from two tables in a DataGridView by entity splitting with the Entity Framework, LINQ to SQL, and DataSets as data sources. (Repeated from EF/EDM.)

Kristofer Andersson updated the Huagati DBML/EDMX Tools to version 1.40 on 10/15/2008. According to Kris, the updates include:

  • Added [HuagatiDBMLTools.msi] installer / uninstaller
  • Improved performance of the "Update Documentation" feature for LINQ to SQL
  • Moved menu items to a separate dropdown menu ("DBML/EDMX Tools") on the VS2008 menu bar to avoid congestion in the standard Tools menu
  • Fixed issue where the mouse pointer sometimes showed the hourglass/wait pointer when dialogs were visible
  • A couple of minor adjustments to the pluralization/singularization rules
  • Better handling of some reserved SQL keywords

Save the .zip file to a temporary folder, extract the files with the Use Folder Names option selected, and then run HuagatiDBMLTools.msi to install the files in a \Program Files\…\HuagatiExtensions folder. The installer adds a Huagati DBML EDMX Tools node to your Programs menu with Reset Add-in Menus in VS 2008 and User Guide commands. If you’ve installed an earlier version, choose Reset Add-in Menus in VS 2008 after installation.

Sidar Ok writes his own POCO classes and basic unit tests, and then uses SqlMetal’s XML mapping file option to generate the mapping layer in his Achieving POCO s in Linq to SQL post of 10/14/2008. Be sure to read the comments.

Rob Conery’s Make Visual Studio Generate Your Repository post of 10/13/2008 shows you how to use the VS 2008 Text Template Transformation Toolkit (T4) feature to wrap a LINQ to SQL model in a testable IQueryable Repository framework for use with (what else but) MVC.

Rob includes links to the Clarius's T4 template editor and Damien Guard’s LINQ to SQL templates. For additional details on Damien’s templates, see my Bidirectional Serialization of LINQ to SQL Object Graphs with Damien Guard’s T4 Template in VS 2008 SP1 post of 9/25/2008. (See Scott Hanselman’s T4 post in the Misc. category.)

Neil Pullinger recommends that you Always set AutoPage to true in LinqDataSource in his 10/14/2008 post. If AutoPage is false, your page retrieves all rows from the data source.

LINQ to Objects, LINQ to XML, et al.

•• Corey Roth’s My 500th Post! Left Outer Joins with LINQ post of 10/16/2008 is his 500th DotNetMafia.com - Tip of the Day item. The series began on 12/1/2004.

•• Erik Meijer and Bart De Smet conduct the centennial edition of Channel9’s Going Deep series: Erik Meijer and Bart De Smet: LINQ-to-Anything of 10/15/2008. Here’s an excerpt from the deck:

Meet Bart de Smet, a software engineer extraordinaire on the WPF team who spends his free time blogging (what an incredible wealth of truly useful technical information to be found on Bart's blog!) and creating custom LINQ providers. In fact, Bart is probably the world's most prolific LINQ provider creator, from LINQ-to-MSI to LINQ-to-Simpsons! How does he do it???

Who better to have involved in this LINQ'ified conversation (with lots of whiteboarding) than LINQ co-creator, programming languages designer, fundamentalist functional programming high priest and Channel 9 star Erik Meijer?

•• Stephen Toub and Hazim Shafi contributed the “Improved Support For Parallelism In The Next Version Of Visual Studio” article to the “Coding Tools” column for MSDN Magazine’s October 2008 issue. VS 2010 will include a new viewer that offers MultiStack and Task views of parallel processes to simplify debugging:

MultiStack View (screen capture courtesy of Microsoft)

Task View (screen capture courtesy of Microsoft)

The article also describes forthcoming Concurrency, Thread Blocking, and Core Execution and Thread Migration views to analyze performance issues and aid debugging.

Stephen Toub announces that .NET 4.0’s System.Core.dll will include PLINQ and mscorlib.dll will contain the Parallel Extensions (ParallelFx) in his Parallel Programming and the .NET Framework 4.0 post of 10/10/2008. His Parallelism in October 2008 MSDN Magazine post of 10/2/2008 has links to articles about parallelism in MSDN Magazine’s October 2008 issue.

Kevin Hoffman’s Implementing the Weak Event Pattern in CLINQ v2.0 of 10/13/2008 describes why CLINQ requires the Weak Event Pattern (weak delegates) to avoid inadvertent memory leaks.

ADO.NET Data Services (Astoria)

Matthieu Mezil’s ADO.NET Data Services Hooking POC V3 post of 10/17/2008 continues his Proof of Concept series about using EF as a data source for ADO.NET Data Services.

He finishes implementing the IUpdatable interface and enables the System.Collections.ObjectModel.Collection’s Count property in ADO.NET Data Services Hooking POC V3.1 of 10/18/2008. (Repeated from the EF category.)

Note: The current v3.1 CodePlex download appears to be a T-SQL script, not source code for an IUpdatable implementation.

Phani Raju of the Astoria team explains why the Astoria client’s DataServiceContext.SetLink() method is used for change-tracking associated objects in his Viewer Mail, #1 post of 10/15/2008.

For additional background on updating associated objects with the Astoria client library, see Phani’s Working with Associations in ADO.NET Data Services post of 7/2/2008.

Tim Heuer’s Silverlight 2 Released: New controls, tools, announcements! post of 10/14/2008 says the following about support for Astoria in the RTM version:

If you have an ADO.NET Data Services (the artist formerly known as Astoria) endpoint, in your Silverlight 2 project you can choose Add Service Reference and point to that endpoint and the appropriate proxies will be generated for you.

Tim’s Silverlight and ADO.NET Data Service proxy generation post of later the same day provides more details on the use of Add Service Reference for Astoria services. (Don’t forget that the service must be running for the Add Service Reference dialog to find it.)

Shawn Wildermuth’s ADO.NET Data Services and TimeZone post points out a problem with URL-encoding positive ISO 8601 time zone offsets, such as +3:00 for Bulgaria. URL encoding replaces the + with a space, which leads to an incorrect result. Applying the ToUniversalTime() method is a solution, but Shawn says “that’s a hack at best.”
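The failure mode is easy to reproduce outside Astoria; a small Python sketch (the timestamp literal is illustrative, not Astoria’s wire format) shows a raw + being decoded as a space, while percent-encoding it as %2B round-trips correctly:

```python
from urllib.parse import quote, unquote_plus

# An ISO 8601 value with a positive UTC offset.
offset_literal = "2008-10-13T10:00:00+03:00"

# Decoding the raw '+' under form-encoding rules turns it into a
# space, corrupting the timestamp:
corrupted = unquote_plus(offset_literal)
# corrupted == "2008-10-13T10:00:00 03:00"

# Percent-encoding the literal first turns '+' into '%2B', so the
# value survives the round trip (':' kept readable via safe=":"):
encoded = quote(offset_literal, safe=":")
# encoded == "2008-10-13T10:00:00%2B03:00"
restored = unquote_plus(encoded)
# restored == offset_literal
```

Converting to UTC before serializing sidesteps the issue entirely, which is why Shawn’s ToUniversalTime() workaround functions, even if he considers it a hack.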

ASP.NET Dynamic Data (DD)

••• Scott Hanselman unleashed a tsunami of Hanselminutes podcasts on 10/28/2008. Two of these are related to MVC, DD, and scaffolding:

•• Scott Hanselman’s ASP.NET MVC Beta released - Coolness Ensues post of 10/16/2008 (3:37 PM PDT) announces that the beta has a Go-Live license, includes jQuery, and offers a set of links into ScottGu’s “new feature” topics described in the Misc. section. ScottHa says:

I also showed [at VS Live! 2008 Las Vegas] ASP.NET+Dynamic Data that you'll be hearing more about at PDC and even more next year. You should feel free to use these subsystems as you like, mix and match, promote and ignore. Whatever makes you happy. All the ASP.NET core stuff like Authentication, Authorization, Session, Caching, etc, that all works in all of these subsystems because they are all ASP.NET.

SQL Server Data Services (SSDS) and Cloud Computing

••• Dare Obasanjo analyzes Tim Bray’s concern about vendor lock-in in cloud computing in his aptly named Cloud Computing and Vendor Lock-In post of October 19, 2008. Dare concludes:

[T]he fact is that today if a customer has heavily invested in either [the Amazon EC2/S3 or Google App Engine] platform then there isn't a straightforward way for customers to extricate themselves from the platform and switch to another vendor. In addition there is not a competitive marketplace of vendors providing standard/interoperable platforms as there are with email hosting or Web hosting providers.

As long as these conditions remain the same, it may be that lock-in is too strong a word [to] describe the situation but it is clear that the options facing adopters of cloud computing platforms aren't great when it comes to vendor choice.

Of course the lock-in problem also applies to SSDS. If you’re willing to lock yourself into Windows Server 2008 and SQL Server 200x, Microsoft and Amazon might be interchangeable cloud hosting vendors. 

••• John Foley announces Information Week’s new “Cloud Computing Destination” in his InformationWeek Launches PlugIntoTheCloud.com post of 10/17/2008. John says:

Our research tells us that business technologists are intrigued by cloud computing, but not yet swayed. InformationWeek Analytics (our in-depth reports business) surveyed 456 business technology professionals to gauge their plans for cloud computing. Among the respondents, 20% were considering cloud services, while another 28% said they didn't know enough about them. In other words, nearly half are still mulling it over. Of the rest, 18% said they were already using cloud services and 34% had no interest.

If 18% of respondents are using cloud services and 20% are considering using them, I’d say that 38% is more than intrigued with a new and controversial technology.

••• Pete Kooman of the Google App Engine team says in his Announcing HTTPS support for appspot.com! post of 10/16/2008 that GAE will finally support secure HTTP communication, but only with appspot.com URLs. There will be no HTTPS support for arbitrary Google Apps domains.

Gartner Inc. ranks Cloud Computing as the #2 Strategic Technology for 2009 in its Gartner Identifies the Top 10 Strategic Technologies for 2009 post of 10/16/2008. (Virtualization is #1.)

• Eugenio Pace and Gianpaolo Carraro wrote a “Head in the Cloud, Feet on the Ground” post for Microsoft Architecture Journal, issue #17, which was posted online on 10/15/2008.

Gianpaolo excerpts liberally from the article in his Head in the cloud, Feet on the ground: an article about the cloud post of the same date to support the article’s transportation analogy to computing services, which is the underlying theme of the pair’s Cloud Services Architecture Symposium to be held on day 4 of PDC 2008.

Ken Oestreich’s Postcards from the Cloud Summit Executive Conference post of 10/14/2008 provides a detailed rundown of the Cloud Summit Executive conference held at the Computer History Museum in San Jose on the same date.

Eric Eldon announces the arrival of Cloudera, a startup specializing in helping organizations adopt the open-source Hadoop software platform, in his Ex-Google, Yahoo, Facebook employees snub recession, launch Hadoop startup post of 10/14/2008. According to the post:

Cloudera will help other companies “install, configure and run” Hadoop [and MapReduce], either on a company’s own servers or using Amazon’s hosted Elastic Compute Cloud (EC2) service.

For more details, see Amr Awadallah’s The Startup is Cloudera, the Business is Hadoop MapReduce post of 10/13/2008, which has links to founders and initial employees’ blogs. Tony Bain explains the differences between Hadoop and RDBMSs in his What is Hadoop? post of October 15, 2008.

Hadoop and MapReduce are likely to be the primary competition to SQL Server Data Services (SSDS) for data sources that are extremely large (petabytes), non-structured, or both. Yahoo! and Facebook run Hadoop.
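For readers new to the model, the map/shuffle/reduce pipeline that Hadoop distributes across a cluster can be sketched as a single-process toy word count (plain Python, not Hadoop’s API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework's shuffle/sort step does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per word, as a Hadoop reducer would.
    return {key: sum(values) for key, values in groups.items()}

docs = ["Hadoop runs MapReduce", "MapReduce scales to petabytes"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
# counts["mapreduce"] == 2, counts["hadoop"] == 1
```

The appeal at petabyte scale is that the mapper and reducer are the only user-written pieces; the framework handles partitioning, shuffling, and fault tolerance across the cluster — which is precisely the schema-free, scale-out model that contrasts with an RDBMS or with SSDS entities.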

Mike Amundsen’s Strongly-typed DataSets for SSDS Demo post of 10/14/2008 to the SQL Server Data Services (SSDS) - Getting Started forum describes how to populate DataSets with SSDS entities.

However, C.C.Chai notes in a reply:

Basically, [the] Typed Data Set approach works. Its major disadvantage is preloading large amount of data in order to enable joining, grouping, etc.

When the application requires Customer Entities, I will have to load all the customers first before doing sorting / filtering. Otherwise, I could get wrong results.

The matter is even worse if you try to enforce relationship constraints in the data set. When populating a data table, you have to look for parent tables and fill those first. In my testing, I almost load all the records in my SSDS container because of the complex relationships.

SQL Server Compact (SSCE) 3.5 and Sync Services

••• Steve Lasker’s Evolution or Revolution for moving to offline architectures post of 10/17/2008 is a lengthy analysis of the benefits and pitfalls of occasionally-connected systems. Steve makes reference to his earlier Sync Services for ADO.NET and SQL Server Compact Presentation (8/21/2007), his two Tech*Ed US Developers 2007 presentations, and a more recent article.

Mary Jo Foley reports in her Microsoft’s online/offline sync platform (re)released to manufacturing post of 10/17/2008 that the Sync Services team stealth-posted an updated version (RTM1) of the Microsoft Sync Framework v1.0 on 10/13/2008.

To date, there’s been no comment on the updated version in the Microsoft Sync Framework blog, nor any related messages in the five Microsoft Sync Framework forums, which is very strange. Did the original version have bugs?

Miscellaneous (WPF, WCF, MVC, Silverlight, etc.)

Phil Haack’s ASP.NET MVC Beta Released! post of 10/16/2008 reiterates Scott Hanselman’s post on the same topic (see the Dynamic Data section) and notes:

As I warned before, we no longer bundle the Mvc Futures assembly (Microsoft.Web.Mvc.dll). However, we did just publish a release of this assembly updated for Beta on CodePlex. Source code for the Beta and Futures releases will be pushed to CodePlex shortly. Sorry about the delay but there’s so much work to be done here.

•• Scott Guthrie dropped the other shoe by finally posting ASP.NET MVC Beta Released at 12:30 PM PDT on 10/16/2008. His very detailed post “contains a quick summary of some of the new features and changes in this build compared to the previous ‘Preview 5’ release.”

He’s “also planning to publish a few end to end tutorials in the weeks ahead that explain ASP.NET MVC concepts in more depth for folks who have not looked at it before, and who want a ‘from the beginning’ set of tutorials on how to get started.”

Robert Shelton, a Microsoft Software Development and Platform Evangelist in the Washington DC area, announced the availability of the official ASP.NET MVC Beta download in his Free download: Microsoft ASP.NET MVC Beta post of 10/15/2008.

I would have expected to see a blog post from Scott Guthrie, Scott Hanselman, or Phil Haack announcing a beta of this significance. According to Redmond Media Group editor Becky Nagle’s “Microsoft Posts ASP.NET MVC Beta for Download” article, Hanselman discussed the beta in his VS Live! 2008 Las Vegas keynote and Guthrie will make the official announcement on 10/16/2008. Regarding the keynote, Becky says:

He also showed off some of the dynamic data capabilities of MVC, including some future capabilities scheduled to be released next summer.

Scott Hanselman seconds Rob Conery’s review of the Text Template Transformation Toolkit (T4) for VS 2008 with an extensive list of links to T4 resources in his T4 (Text Template Transformation Toolkit) Code Generation - Best Kept Visual Studio Secret post of 10/14/2008.

Tim Heuer’s Silverlight 2 Released: New controls, tools, announcements! post of 10/14/2008 provides a detailed description of the new features and controls in the RTW version. Tim notes that the Silverlight Tools RC1 work with the released bits.

Rick Strahl’s detailed Client Templating with jQuery post of 10/13/2008 describes several methods for templating controls, such as displaying rich lists and updating data sources. 

Shawn Wildermuth outlines the changes between Silverlight 2 Beta 2 and the RTM version in his Silverlight 2 Released! post of 10/13/2008.

John Papa offers an abbreviated transcript of Microsoft’s conference call that announced the availability of Silverlight 2.0 RTM in his ScottGu Announces Silverlight 2 Due Out Tomorrow post of 10/13/2008. He also summarizes highlights of the related press release.

Shawn Wildermuth’s Dirty Little Secrets - Episode 2 describes how to “use control templates to skin a complex control in Silverlight 2.”
