Wednesday, January 14, 2015
cancannible role-based access control gets an update for Rails 4
Can You Keep a Secret? / 宇多田ヒカル
cancannible is a gem that has been kicking around in a few large-scale production deployments for years. It still gets loving attention - most recently an official update for Rails 4 (thanks to the push from @zwippie).
There are now also some demo sites - one for Rails 3.2.x and another for Rails 4.x - so that anyone can see it in action.
So what exactly does cancannible do? In a nutshell, it is a gem that extends CanCan with a range of capabilities:
- permissions inheritance (so that, for example, a User can inherit permissions from Roles and/or Groups)
- general-purpose access refinements (to automatically enforce multi-tenant or other security restrictions)
- automatically stores and loads permissions from a database
- optional caching of abilities (so that they don't need to be recalculated on each web request)
- export CanCan methods to the model layer (so that permissions can be applied in model methods, and easily set in a test case)
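To make the first point concrete, the inheritance idea can be sketched in plain Ruby: a grantee's effective permissions are the union of its own grants and those of its roles/groups. This is an illustrative model only - the class and method shapes below are invented for the example and are not cancannible's actual API:

```ruby
# Plain-Ruby sketch of permissions inheritance: a grantee's effective
# permissions are its own grants plus those inherited from roles/groups.
Role = Struct.new(:name, :permissions)

class User
  attr_reader :permissions, :roles

  def initialize(permissions: [], roles: [])
    @permissions = permissions
    @roles = roles
  end

  # Union of directly-granted and role-inherited permissions
  def effective_permissions
    (permissions + roles.flat_map(&:permissions)).uniq
  end

  # CanCan-style query: can this user perform +action+ on +subject+?
  def can?(action, subject)
    effective_permissions.include?([action, subject])
  end
end

admin = Role.new(:admin, [[:manage, :Report]])
user  = User.new(permissions: [[:read, :Invoice]], roles: [admin])
user.can?(:manage, :Report)  # => true (inherited via the admin role)
user.can?(:delete, :Invoice) # => false
```

In the real gem, of course, the grants live in the database and the resolved abilities can be cached rather than recomputed per request.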
Tuesday, January 28, 2014
Learning Devise for Rails
(blogarhythm ~ Points of Authority / Linkin Park)
I recently got my hands on a review copy of Learning Devise for Rails from Packt and was quite interested to see if it was worth a recommendation (tldr: yes).
A book like this has to be current. Happily this edition covers Rails 4 and Devise 3, and code examples worked fine for me with the latest point releases.
The book is structured as a primer and tutorial, perfect for those who are new to devise, and requires only basic familiarity with Rails. Tutorials are best when they go beyond the standard trivial examples, and the book does well on this score. It covers a range of topics that will quickly become relevant when actually trying to use devise in real life. Beyond the basic steps needed to add devise in a Rails project, it demonstrates:
- customizing devise views
- using external authentication providers with Omniauth
- using NoSQL storage (MongoDB) instead of ActiveRecord (SQLite)
- integrating access control with CanCan
- how to test with Test::Unit and RSpec
I particularly appreciate the fact that the chapter on testing is even there in the first place! These days, "how do I test it?" should really be one of the first questions we ask when learning something new.
The topics are clearly demarcated so after the first run-through the book can also be used quite well as a cookbook. It does however suffer from a few cryptic back-references in the narrative, so to dive in cookbook-style you may find yourself having to flip back to previous sections to connect the dots. A little extra effort on the editing front could have improved this (along with some of the phraseology, which is a bit stilted in parts).
Authentication has always been a critical part of Rails development, but since Rails 3 in particular it is fair to say that devise has emerged as the mature, conventional solution (for now!). So I can see this book being the ideal resource for developers just starting to get serious about building Rails applications.
Learning Devise for Rails would be a good choice if you are looking for that shot of knowledge to fast-track the most common authentication requirements, but also want to learn devise in a little more depth than a copy/paste from the README and wiki will get you. It will give you enough foundation to move on to more advanced topics not covered in the book, such as developing custom strategies or understanding devise and warden internals.
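For orientation, the baseline devise setup that all of these topics build on is just a model declaration and a route. This sketch is standard devise API from memory rather than the book's exact listings, and the module selection shown is a typical (not mandatory) choice:

```ruby
# app/models/user.rb - pick the devise modules the app needs
class User < ActiveRecord::Base
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :validatable
end

# config/routes.rb - generates the sign-in, sign-up and password routes
Rails.application.routes.draw do
  devise_for :users
end
```

From there, the book's chapters layer on view customization, Omniauth providers, alternative storage and access control.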
Tuesday, November 12, 2013
Punching firewalls with Mandrill webhooks
(blogarhythm ~ Fire Cracker - Ellegarden)
Mandrill is the transactional email service by the same folks who do MailChimp. I've written about it before, in particular how to use the mandrill-rails gem to simplify inbound webhook processing.
Mandrill webhooks are a neat, easy way for your application to respond to various events, from recording when users open email, to handling inbound mail delivery.
That all works fine if your web application lives on the public internet i.e. Mandrill can reach it to post the webhook. But that's not always possible: your development/test/staging environments for example; or perhaps production servers that IT have told you must be "locked down to the max".
Mandrill currently doesn't offer an official IP whitelist, so it's not possible to use firewall rules to just let Mandrill servers in. Mandrill does provide webhook authentication (supported by the mandrill-rails gem), but that solves a different problem: anyone can still reach your server, but you can distinguish the legitimate webhook requests.
I thought I'd share a couple of techniques I've used to get Mandrill happily posting webhooks to my dev machine and servers behind firewalls.
Using HAProxy to reverse-proxy Mandrill Webhooks
If you have at least one internet-visible address, HAProxy is excellent for setting up a reverse-proxy to forward inbound Mandrill webhooks to the correct machine inside the firewall. I'm currently using this for some staging servers so we can run real inbound mail scenarios.

Here's a simple scenario:
- gateway.mydomain.net - your publicly-visible server, with HAProxy installed and running OK
- internal/192.168.0.1 - a machine on an internal network that you want to receive webhooks posted to 192.168.0.1:8080/inbox
Say the gateway machine already hosts http://gateway.mydomain.net, but we want to be able to tell Mandrill to post its webhooks to http://gateway.mydomain.net/inbox_internal, and have these (and only these) requests forwarded to http://192.168.0.1:8080/inbox.
Here are the important parts of the /etc/haproxy/haproxy.cfg used on the gateway machine:
```
global
  #...

defaults
  mode http  # enable http mode which gives us layer 7 filtering
  #...

# this is HAProxy listening on http://gateway.mydomain.net
frontend app *:80
  default_backend webapp  # set the default server for all requests
  # next we define a rule that will send requests to the internal_mandrill
  # backend instead, if the path starts with /inbox_internal
  acl req_mandrill_inbox_path path_beg /inbox_internal
  use_backend internal_mandrill if req_mandrill_inbox_path

# define a group of backend servers for the main web app
backend webapp
  server app1 127.0.0.1:8001

# this is where we will send the Mandrill webhooks
backend internal_mandrill
  reqirep ^([^\ ]*)\ /inbox_internal(.*) \1\ /inbox\2
  server int1 192.168.0.1:8080  # add a server to this backend
```

Obviously the path mapping is optional (but neat to demonstrate), and I've left out all the normal HAProxy options like balancing, checks and SSL option forwarding that you might require in practice, but are not relevant to the issue at hand.
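The reqirep directive is just a regular-expression rewrite of the HTTP request line, so the pattern is easy to sanity-check outside HAProxy before trusting it. Here's a quick plain-Ruby check of the same rewrite (my own verification sketch, not part of the HAProxy config):

```ruby
# Simulate HAProxy's reqirep rewrite of the HTTP request line:
# "POST /inbox_internal HTTP/1.1" should become "POST /inbox HTTP/1.1"
pattern      = /^([^ ]*) \/inbox_internal(.*)/
request_line = "POST /inbox_internal HTTP/1.1"
rewritten    = request_line.sub(pattern, '\1 /inbox\2')
puts rewritten # => "POST /inbox HTTP/1.1"
```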
Job done! Our internal server remains hidden behind the firewall, but Mandrill can get through to it by posting webhooks to http://gateway.mydomain.net/inbox_internal.
Tunneling to dev machines with ngrok
For development, we usually don't want anything so permanent. There are quite a few services for tunneling to localhost, mainly with developers in mind. Lately I've been using ngrok which is living up to its name - it rocks! Trivial to set up and works like a dream. Say I'm developing a Rails app:

```
# run app locally (port 3000)
rails s
# run ngrok tunnel to port 3000
ngrok 3000
```
Once started, ngrok will give you http and https addresses that will tunnel to port 3000 on your machine. You can use these addresses in the Mandrill webhook and inbound domains configuration, and they'll work as long as you keep your app and ngrok running.
Saturday, June 15, 2013
Optimising presence in Rails with PostgreSQL
(blogarhythm ~ Can't Happen Here - Rainbow)
It is a pretty common pattern to branch depending on whether a query returns any data - for example to render a quite different view. In Rails we might do something like this:
```ruby
query = User.where(deleted_at: nil).and_maybe_some_other_scopes
if results = query.presence
  results.each {|row| ... }
else
  # do something else
end
```

When this code executes, we raise at least 2 database requests: one to check presence, and another to retrieve the data. Running this at the Rails console, we can see the queries logged as they execute, for example:

```
   (0.9ms)  SELECT COUNT(*) FROM "users" WHERE "users"."deleted_at" IS NULL
  User Load (15.2ms)  SELECT "users".* FROM "users" WHERE "users"."deleted_at" IS NULL
```

This is not surprising since under the covers, presence (or present?) ends up calling count, which must do the database query (unless you have already accessed/loaded the result set). And 0.9ms doesn't seem too high a price to pay to determine if you should even try to load the data, does it?
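As a refresher on the idiom itself: ActiveSupport's presence returns the receiver when present? is true and nil otherwise, which is what makes the `if results = query.presence` assignment trick work. A minimal plain-Ruby mimic (deliberately simplified - it ignores ActiveSupport's whitespace-string handling, so use the real thing in Rails):

```ruby
# Minimal re-implementation of ActiveSupport's Object#presence semantics,
# purely to illustrate the idiom used in this post.
def presence_of(value)
  present = value.respond_to?(:empty?) ? !value.empty? : !value.nil?
  present ? value : nil
end

presence_of([1, 2]) # => [1, 2]
presence_of([])     # => nil
presence_of("")     # => nil
presence_of(nil)    # => nil
```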
But when we are running on PostgreSQL in particular, we've learned to be leery of COUNT(*) due to its well-known performance problems. In fact I first started digging into this question when I started seeing expensive COUNT(*) queries show up in NewRelic slow transaction traces. How expensive COUNT(*) actually is depends on many factors including the complexity of the query, availability of indexes, size of the table, and size of the result set.
So can we improve things by avoiding the COUNT(*) query? Assuming we are going to use all the results anyway, and we haven't injected any calculated columns in the query, we could simply to_a the query before testing presence i.e.:
```ruby
query = User.where(deleted_at: nil).and_maybe_some_other_scopes
if results = query.to_a.presence
  results.each {|row| ... }
else
  # do something else
end
```

I ran some benchmarks comparing the two approaches with different kinds of queries on a pretty well-tuned system and here are some of the results:
| Query | Using present? | Using to_a | Faster By |
|---|---|---|---|
| 10k indexed queries returning 1 / 1716 rows | 17.511s | 10.938s | 38% |
| 4k complex un-indexed queries returning 12 / 1716 rows | 23.603s | 15.221s | 36% |
| 4k indexed queries returning 1 / 1763218 rows | 22.943s | 20.924s | 9% |
| 10 complex un-indexed queries returning 15 / 1763218 rows | 23.196s | 14.072s | 40% |
Clearly, depending on the type of query we can gain up to 40% performance improvement by restructuring our code a little. While my aggregate results were fairly consistent over many runs, the performance of individual queries did vary quite widely.
I should note that the numbers were *not* consistent or proportional across development, staging, test and production environments (mainly due to differences in data volumes, latent activity and hardware) - so you can't benchmark on development and assume the same applies in production.
Things get murky with ActiveRecord add-ons
So far we've talked about the standard ActiveRecord situation. But there are various gems we might also be using to add features like pagination and search magic. MetaSearch is an example: a pretty awesome gem for building complex and flexible search features. But (at least with version 1.1.3) present? has a little surprise in store for you:

```ruby
irb> User.where(id: '0').class
=> ActiveRecord::Relation
irb> User.where(id: 0).present?
   (0.8ms)  SELECT COUNT(*) FROM "users" WHERE "users"."id" = 0
=> false
irb> User.search(id_eq: 0).class
=> MetaSearch::Searches::User
irb> User.search(id_eq: 0).present?
=> true
```
Any Guidelines?
So, always to_a my query results? Well, no, it's not that simple. Here are some things to consider:
- First, don't assume that <my_scoped_query>.present? means what you think it might mean - test or play it safe
- If you are going to need all result rows anyway, consider calling to_a or similar before testing presence
- Avoid this kind of optimisation except at the point of use. One of the beauties of ActiveRecord::Relation is the chainability - something we'll kill as soon as we hydrate to a result set Array for example.
- While I got a nice 40% performance bonus in some cases with a minor code fiddle, mileage varies and much depends on the actual query. You probably want to benchmark in the actual environment that matters and not make any assumptions.
Sunday, June 09, 2013
My Virtual Swag from #rdrc
(blogarhythm ~ Everybody's Everything - Santana)
So the best swag you can get from a technology conference is code, right? Well RedDotRubyConf 2013 did not disappoint! Thanks to some fantastic speakers, my weekends for months to come are spoken for. Here's just some of the goodness:
- First up, @tenderlove kicked off an #irresponsibleruby meme with some crazy internal fiddles.
- From @gazay: gon is a gem that makes it easy to expose server data to javascript.
- From @flyerhzm and others: convinced me to go try Celluloid and Celluloid::IO for building actor-based asynchronous systems.
- From @zacksiri: transponder, an opinionated javascript library for working with front-end-heavy Rails apps.
- From @netzke: check out netzke for building complex UIs on Rails with Ext JS.
- From @jimweirich: equator_weight.rb so @tenderlove knows how much more he can eat while in Singapore;-)
- From @dqminh: The RSpec applause_formatter.rb so you too can get applause when your tests go green, just like @jimweirich.
- From @josevalim: awesome talk on concurrency and a tip to check out thread_safe. And somehow I now have an urge to go play with Elixir too;-)
- From @sausheong: tanks and tanksworld demonstrating Gosu-based games.
- From @sikachu: appraisal for testing against multiple permutations of gem dependencies.
- @a_matsuda convinced us to use Ruby 2.0, spam his friends in Japan on twitter, and blow our minds with Trick 2013 ruby code.
- From @nigelr: fracture, an interesting way of co-ordinating expectations for multiple test conditions so things don't fall through the cracks.
- From @steveklabnik: Shoes and Functional Reactive Programming in Ruby with frappuccino. wtf!
- And just for completeness, from @tardate (me): RGovData, sps_bill_scanner, and megar.
Will I still be a Rubyist in 5 years? #rdrc
(blogarhythm ~ Ruby - Kaiser Chiefs)
The third RedDotRubyConf is over, and I think it just keeps getting better! Met lots of great people, and saw so many of my Ruby heroes speak on stage. Only thing that could make it even better next year would be to get the video recording thing happening!
I had the humbling opportunity to share the stage and here are my slides. Turned out to be a reflection on whether I'd still be a Rubyist in another 5 years, and what are the external trends that might change that. Short story: Yes! Of course. I'll always think like a Rubyist even though things will probably get more polyglot. The arena of web development is perhaps the most unpredictable though.
A couple of areas I highlight that really need a bit more love include:
- There's a push on SciRuby. Analytics are no longer the esoteric domain of bioinformaticists. Coupled with Big Data (which Ruby is pretty good at), analytics are driving much of the significant innovation in things we build.
- Krypt - an effort led by Martin Boßlet to improve the cryptographic support in Ruby. My experience building megar made it painfully obvious why we need to fix this.
Let it never be said, the romance is dead
'Cos there’s so little else occupying my head
I mentioned a few of my projects in passing. Here are the links for convenience:
- RGovData is a ruby library for really simple access to government data. It aims to make consuming government data sets a "one liner", letting you focus on what you are trying to achieve with the data, and happily ignore all the messy underlying details of transport protocols, authentication and so on.
- sps_bill_scanner is a ruby gem for converting SP Services PDF invoices into data that can be analysed with R. Only useful if you are an SP Services subscriber in Singapore, but otherwise perhaps an interesting example of extracting positional text from PDF and doing some R.
- megar ("megaargh!" in pirate-speak) is a Ruby wrapper and command-line (CLI) client for the mega.co.nz API. My example of how you *can* do funky crypto in Ruby ... it's just much harder than it should be!
Saturday, February 16, 2013
Easy Mandrill inbound email and webhook handling with Rails
(blogarhythm ~ Psycho Monkey - Joe Satriani)
Mandrill is the transactional email service by the same folks who do MailChimp, and I've been pretty impressed with it. For SMTP mail delivery it just works great, but where it really shines is inbound mail handling and the range of event triggers you can feed into to your application as webhooks (for example, to notify on email link clicks or bounces).
The API is very nice to use, but in a Rails application it's best to keep all the crufty details encapsulated and hidden away, right? That's what the mandrill-rails gem aims to do - make supporting Mandrill web hooks and inbound email as easy and Rails-native as possible.
I recently added some new methods to mandrill-rails to provide explicit support for inbound mail attachments (in the 0.0.3 version of the gem).
With the mandrill-rails gem installed, we simply define the routes to our webhook receiver (in this example an 'inbox' controller):
```ruby
resource :inbox, :controller => 'inbox', :only => [:show, :create]
```

And then in the controller we provide handler implementations for any of the 9 event types we wish to consume. Here's how we might get started handling inbound email, including pulling out the attachments:
```ruby
class InboxController < ApplicationController
  include Mandrill::Rails::WebHookProcessor

  # Defines our handler for the "inbound" event type.
  # This gets called for every inbound event sent from Mandrill.
  def handle_inbound(event_payload)
    # [... do something with the event_payload here,
    #      or stuff it on a background queue for later ... ]
    if attachments = event_payload.attachments.presence
      # yes, we have at least 1 attachment. Let's examine the first:
      a1 = attachments.first
      a1.name    # => e.g. 'sample.pdf'
      a1.type    # => e.g. 'application/pdf'
      a1.content # => the raw content provided by Mandrill,
                 #    base64-encoded if not plain text
                 #    e.g. 'JVBERi0xLjMKJcTl8uXrp/Og0MTGCjQgMCBvY ... (etc)'
      a1.decoded_content # => the content decoded by Mandrill::Rails,
                         #    ready to be written as a File or whatever
                         #    e.g. '%PDF-1.3\n%\xC4\xE5 ... (etc)'
    end
  end
end
```

That's nice and easy, yes? See the Mandrill::Rails Cookbook for more tips.
If you love playing with transactional mail and haven't tried Mandrill yet, it's well worth a look!
Saturday, January 12, 2013
2013: Time for web development to have its VB3 moment
(blogarhythm ~ Come Around Again - JET)
And that's a compliment!
Wow. This year we mark the 20th anniversary of the Visual Basic 3.0 launch way back in 1993.
It's easy to forget the pivotal role it played in revolutionizing how we built software. No matter what you think of Microsoft, one can't deny the impact it had at the time. Along with other products such as PowerBuilder and Borland Delphi, we started to see long-promised advances in software development (as pioneered by Smalltalk) become mainstream reality:
- finally, Rapid Application Development that really was rapid
- simplicity that put the development of non-trivial applications within the realm of the average computer user. It made simple things simple and complex things possible (to borrow from Alan Kay)
- development environments that finally did the obvious: want to build a graphical user interface? Then build it graphically (i.e. WYSIWYG), and build a complete client or client-server app from a single IDE.
- an event-driven programming model that explicitly linked code to the user-facing triggers and views (like buttons and tables)
- perhaps the first mainstream example of a viable software component reuse mechanism (improved and rebranded many times over time: ActiveX, COM, .NET)
In its day, Visual Basic 3.0 was variously lauded (by non-programmers who could finally make the app they always wanted) and loathed (by IT professionals shocked at the prospect of ceding control to the great unwashed). Interestingly, Visual Basic succeeded *despite* the language (BASIC, probably the most widely derided language of all time. Or perhaps it shares that crown with COBOL).
The party didn't last long however, as by the late 90's the internet had fundamentally changed the rules of the game.
VB, PowerBuilder and the like suffered from an implicit assumption of a client-server architecture, and were not prepared for a webified world. They didn't (all) disappear of course, with Visual Basic in particular finding a significant role as Microsoft's mainstream server-side language, and it lives on in Visual Studio. Yet it lost its revolutionary edge, and had to be content to simply fit in as an "also can do in this language" alternative.
Web Development - a case of one step back and one step forward?

You would think that over the past 20 years, web development would have been able to leap far ahead of what was best practice in client-server computing at the time. We have certainly come a long way since then, and many advances in practice and technology have become de rigueur. Here are some examples that would not have been considered normal by any stretch in 1993:
- Reliance on open standard protocols at every tier: from client to server, server to database and messaging systems
- Global, well-known repositories of shared, reusable code (Github, Rubygems .. and let's not forget grand-daddy CPAN)
Yet it is also salutary to reflect on some of the great innovations we saw back in 1993 that have yet to be re-invented and re-imagined successfully for the web.
I am thinking in particular of the radical productivity that was possible with the event-driven, WYSIWYG GUI programming model. It certainly hasn't gone away (take Xcode for example). But why is that not the leading way of building for the web today? After all, the web is graphical and event-driven. A perfect fit, one would think.
It has perhaps been the very success of the internet, and the rapid unconstrained innovation it has enabled, that has in turn inhibited major advances in web development.
Those that have come close (such as Adobe Flash) have ultimately failed primarily because they did not embrace the open standards of the web. And others, like Microsoft Visual Studio and Oracle JDeveloper have remained locked in proprietary silos.
On the whole, we still work at levels of abstraction that are no higher, and many times lower, than those embodied by the best tools of 1993. It is, after all, very difficult to build abstractions over a foundation that is in constant flux. And with highly productive languages and frameworks at our disposal (like Ruby/Rails), it makes complete sense for many - myself included - to actively spurn graphical IDEs for the immense flexibility we get in return for working at the coding coalface.
On the backend, our technology stacks are mature and battle-tested (LAMP, Rails). And we have an array of cloud-ready, open source solutions for just about every back-end infrastructure need you can imagine: from BigData (Hadoop, MongoDB ..) to messaging (RabbitMQ, ØMQ ..) and more.
My sense is that in the past couple of years we have been edging towards the next leap forward. Our current plateau is now well consolidated. Yet despite efforts such as codecademy to open up software coding to all, web development remains as complex as ever. To do it well, you really need to master a dizzying array of technologies and standards.
Do we have the perfect solution yet? No.
But we are starting to see enticing inklings of what the future may look like. Perhaps one of the most compelling and complete visions is that provided by the meteor project. It is very close.
Will meteor streak ahead to gain massive mind-share and traction? Or will an established platform like Rails take another giant step forward? Or is there something else in the wings we don't know about yet?
It will be an interesting year. And if the signs are to be trusted, I expect we'll look back on 2013 as a tipping point in web development - its VB3 moment.
Do you think we're in for such a radical shift? Or heading in a different direction altogether? Or will inertia simply carry the status quo... I'd love to hear what others think!
And that's a compliment!
Wow. This year we mark the 20th anniversary of the Visual Basic 3.0 launch way back in 1993.
It's easy to forget the pivotal role it played in revolutionizing how we built software. No matter what you think of Microsoft, one can't deny the impact it had at the time. Along with other products such as PowerBuilder and Borland Delphi, we started to see long-promised advances in software development (as pioneered by Smalltalk) become mainstream reality:
- finally, Rapid Application Development that really was rapid
- simplicity that put the development of non-trivial applications within the realm of the average computer user. It made simple things simple and complex things possible (to borrow from Alan Kay)
- development environments that finally did the obvious: want to build a graphical user interface? Then build it graphically (i.e. WYSIWYG), and build a complete client or client-server app from a single IDE.
- an event-driven programming model that explicitly linked code to the user-facing triggers and views (like buttons and tables)
- perhaps the first mainstream example of a viable software component reuse mechanism (improved and rebranded many times over time: ActiveX, COM, .NET)
In its day, Visual Basic 3.0 was variously lauded (by non-programmers who could finally make the app they always wanted) and loathed (by IT professionals shocked at the prospect of ceding control to the great unwashed). Interestingly, Visual Basic succeeded *despite* the language (BASIC, probably the most widely derided language of all time. Or perhaps it shares that crown with COBOL).
The party didn't last long however, as by the late 90's the internet had fundamentally changed the rules of the game.
VB, PowerBuilder and the like suffered from an implicit assumption of a client-server architecture, and were not prepared for a webified world. They didn't (all) disappear of course, with Visual Basic in particular finding a significant role as Microsoft's mainstream server-side language, and it lives on in Visual Studio. Yet it lost its revolutionary edge, and had to be content to simply fit in as an "also can do in this language" alternative.
Web Development - a case of one step back and one step forward?
You would think that over the past 20 years, web development would have been able to leap far ahead of what was best practice in client-server computing at the time.
We have certainly come a long way since then, and many advances in practice and technology have become de rigueur. Here are some examples that would not have been considered normal by any stretch in 1993:
- Reliance on open standard protocols at every tier: from client to server, server to database and messaging systems
- Global, well-known repositories of shared, reusable code (Github, Rubygems .. and let's not forget grand-daddy CPAN)
- Version control. There is no argument.
- Automated testing tools and continuous integration.
- Open source is mainstream, and even preferred in many contexts.
Yet it is also salutary to reflect on some of the great innovations we saw back in 1993 that have yet to be re-invented and re-imagined successfully for the web.
I am thinking in particular of the radical productivity that was possible with the event-driven, WYSIWYG GUI programming model. It certainly hasn't gone away (take Xcode for example). But why is that not the leading way of building for the web today? After all, the web is graphical and event-driven. A perfect fit one would think.
It has perhaps been the very success of the internet, and the rapid unconstrained innovation it has enabled, that has in turn inhibited major advances in web development.
Those that have come close (such as Adobe Flash) have ultimately failed primarily because they did not embrace the open standards of the web. And others, like Microsoft Visual Studio and Oracle JDeveloper have remained locked in proprietary silos.
On the whole, we still work at levels of abstraction that are no higher, and many times lower, than those embodied by the best tools of 1993. It is, after all, very difficult to build abstractions over a foundation that is in constant flux. And with highly productive languages and frameworks at our disposal (like Ruby/Rails), it makes complete sense for many - myself included - to actively spurn graphical IDEs for the immense flexibility we get in return for working at the coding coalface.
The Tide is Turning
Once the wild west of hackety scripts and rampant browser incompatibilities, the building blocks of the web have been coalescing. HTML5, CSS3 and leading browser rendering engines are more stable, consistent and reliable than ever. Javascript is now considered a serious language, and the community has embraced higher-level APIs like jQuery and RIA frameworks such as ember.js and backbone.js. Web design patterns are more widely understood than ever, with kits like bootstrap putting reusable good practice in the hands of novices.
On the backend, our technology stacks are mature and battle-tested (LAMP, Rails). And we have an array of cloud-ready, open source solutions for just about every back-end infrastructure need you can imagine: from BigData (Hadoop, MongoDB ..) to messaging (RabbitMQ, ØMQ ..) and more.
My sense is that in the past couple of years we have been edging towards the next leap forward. Our current plateau is now well consolidated. Yet despite efforts such as codecademy to open up software coding to all, web development remains as complex as ever. To do it well, you really need to master a dizzying array of technologies and standards.
Time for Web Development to Level Up
What does the next level offer? We don't know yet, but I'd suggest the following as some of the critical concerns for next gen web development:
- a unified development experience: the ability to build a full-stack application as one without the need for large conceptual and technological leaps from presentation, to business logic, to infrastructure concerns.
- implicit support for distributed event handling: a conventional mechanism for events raised on a client or server to be consumed by another client or server.
- event-driven GUI development: draw a web page as you want it to be presented, hook up events and data sources.
- it is mobile: more than just responsive web design. Explicit support for presenting appropriately on the full range of desktop, tablet and mobile devices
- distributed data synchronisation: whether data is used live on a web page, stored for HTML5 offline, or synchronized with a native mobile application, our tools know how to distribute and synchronize updates.
- (ideally) let's not have to go back to square one and re-invent our immense investments in standard libraries and reusable code (like the extensive collection of ruby gems)
Do we have the perfect solution yet? No.
But we are starting to see enticing inklings of what the future may look like. Perhaps one of the most compelling and complete visions is that provided by the meteor project. It is very close.
Will meteor streak ahead to gain massive mind-share and traction? Or will an established platform like Rails take another giant step forward? Or is there something else in the wings we don't know about yet?
It will be an interesting year. And if the signs are to be trusted, I expect we'll look back on 2013 as a tipping point in web development - its VB3 moment.
Do you think we're in for such a radical shift? Or heading in a different direction altogether? Or will inertia simply carry the status quo... I'd love to hear what others think!
Monday, November 07, 2011
Adding Mobile Support with Web 2.0 Touch to the NoAgenda Attack Vector Dashboard
The quest for an ideal javascript framework for mobile web applications has been a bit of a work-in-progress for some time (at least if you cared about cross-platform).
You might have got started (like me) with Jonathan Stark's excellent books Building iPhone Apps with HTML, CSS, and JavaScript and Building Android Apps with HTML, CSS, and JavaScript, and maybe tried the jQTouch framework that these spawned. Meanwhile, the official jQuery mobile framework has slowly been moving to fruition.
I recently discovered another project - Web 2.0 Touch - that is pitched as a mini framework with better features and more ease of use than jQTouch. Since I had a little side-project underway that could benefit from mobile support, I thought I'd give it a test drive.
And I was duly impressed. In just a few hours, I had a full and distinct mobile version of the site. Better yet, I didn't run into any weird behaviours that can plague mobile development. It just worked.
Now I'm not going to stop tracking the jQuery Mobile project or other solutions like Rhomobile, but if all you need is a quick, functional and good looking mobile view, then Web 2.0 Touch is well worth a look.
The NoAgenda Attack Vector Dashboard is the project I used Web 2.0 Touch for, and if you want to see all the intricate details of how I made the site mobile-friendly - you can! The entire site is open sourced and available on GitHub. I'll just describe a couple of the features here...
Differentiated Views
The first has not much to do with Web 2.0 Touch per se, and is more just a demonstration of how easy it is to work with a range of view technologies in Rails.
Since the application has a very specific and rich desktop presentation, I knew the mobile version was going to be very different. Here are the desktop and mobile "home pages" side-by-side:
Rather than weigh down view code with lots of conditionals, I decided to use the MIME-type method of differentiation.
If you haven't used this before, it essentially means registering a suitable MIME-type (I called it mobile), and in the main ApplicationController, the request.format is set to this type if the client is detected to require the special mobile view. Now a request to an :index page will render with index.mobile.erb (or index.mobile.haml as is my preference), while the non-mobile view will render with index.html.erb / index.html.haml.
I've added the browser gem to the project for device identification, and for this app I've decided to only specifically handle iPhone and Android. I also don't give these phones a desktop view alternative, since I know it is not going to be nice.
# config/initializers/mime_types.rb:
Mime::Type.register_alias "text/html", :mobile
# application_controller.rb:
class ApplicationController < ActionController::Base
before_filter { request.format = :mobile if (browser.iphone? || browser.android?) }
end
With that in place, my *.mobile.haml view and layout files just need to focus on rendering the mobile site.
Page Transitions
The jsTouch.loadPage method is used to load and navigate pages in the Web 2.0 Touch framework.
In the application, I've made this 'unobtrusive' so it might be worth pointing out what is going on. The .touch_load class is used to tag items that should initiate a page transition. The data-url and data-transition attributes tell it where to go and what transition animation to use.
.toolbar
  %h1= t('.title')
  %a.button.back.touch_load{'data-url' => menu_dashboard_path, 'data-transition' => 'pop-out' }= t(:done)
.content
  = render :partial => 'notes/table'

The enableSmartphonePageLoad function runs during page load to set up the behaviour:

enableSmartphonePageLoad: function() {
  $('.touch_load').live('click', function() {
    var url = $(this).data('url') || $(this).attr('href');
    var transition = $(this).data('transition') || 'slide-left';
    if (url != "") {
      jsTouch.loadPage(url, { transition: transition });
    }
    return false;
  });
},
Blogarhythm: Touch - Noiseworks
Sunday, April 24, 2011
Multi-tenancy with Rails
RedDotRubyConf 2011 in Singapore is over. It was an amazing event (ryan takes notes so we don't have to - day#1 day#2)
Somehow I managed to cheat my way into a line-up of legendary speakers that included Matz himself. Here are the slides..
I spoke about multi-tenancy - what it is and why it's increasingly relevant for Rails development. It dives a little into four of the many approaches and ends with the challenge: Isn't it about time there was a 'Rails Way'?
Blogarhythm: So Many Ways - POP DISASTER
Sunday, January 30, 2011
Paranoid Yak Shaving
So a few weeks ago I found myself wanting "soft-delete" in a Rails app. Ruby Toolbox is a little long in the tooth on the subject, but after a little more research I discovered xpond's paranoid project that was just what I wanted:
- packaged as a gem, not a plugin
- built for Rails 3 (arel-aware in particular)
- can be selectively applied to your models
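On that last point, selective application just means declaring soft-delete per model; a sketch along these lines, where the model names are hypothetical and the `paranoid` macro is as I recall it from the gem's README:

```ruby
# Hypothetical models: only those that opt in get soft-delete behaviour.
class Note < ActiveRecord::Base
  paranoid  # destroy on a Note will flag the record rather than remove it
end

class Comment < ActiveRecord::Base
  # no declaration here - destroy on a Comment really deletes the row
end
```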
All was cool, except at about the same time we updated to Rails 3.0.3 and it broke (as it turned out, due to changes in AREL 2.0.6 internals).
One of the beautiful things about github and the way it's been adopted by the ruby/rails community in particular is that it makes it so damn easy to just dive in and help update code originally contributed by other people. So paranoid needs updating for Rails 3.0.3? No problem - fork it, diagnose the issue and push your fixes back up to github.
But that's also a great recipe for yak shaving ;-)
The fixes are yet to make it into the released gem, but if you desperately need 3.0.3 support you can install from my repo. i.e. in your Gemfile:
gem 'paranoid', '~> 0.0.10', :require => 'paranoid', :git => 'git://github.com/tardate/paranoid'
Blogarhythm: Paranoid (of course - but this is the bluegrass version!)
Tuesday, December 14, 2010
CruiseControlling Ruby 1.9.2 and Rails 3.0.2
CruiseControl for Ruby from ThoughtWorks has long been one of the easiest ways to get your Rails project under continuous integration.
But there's still an issue that it can't run under Ruby 1.9.x. That's not very good if you are targeting 1.9.2 for your project.
Here's a quick recipe for how you can build a 1.9.2 project with CC, using the wonders of rvm..
# download and unpack CC to /usr/local/cruisecontrol-1.4.0 (or where you like)
# for convenience, add .rvmrc in /usr/local/cruisecontrol-1.4.0 to have it run 1.8.7
echo "rvm 1.8.7-p302" > /usr/local/cruisecontrol-1.4.0/.rvmrc
# configure CC:
cd /usr/local/cruisecontrol-1.4.0
./cruise add my_project_name --source-control git --repository git@github.com:myname/myproject.git
# ^^ This will initialize the ~/.cruise/projects/my_project_name folder for a git-based project
# if you have an .rvmrc file in your git repo, pre-emptively trust it to avoid clogging CC:
mkdir -p ~/.cruise/projects/my_project_name
rvm rvmrc trust ~/.cruise/projects/my_project_name
In ~/.cruise/projects/my_project_name, edit cruise_config.rb to run a shell script instead of the standard build task (I'm calling it ccbuild.sh and it will be in the root of my git repo):
Project.configure do |project|
# [.. other stuff ..]
project.build_command = './ccbuild.sh'
# [.. other stuff ..]
end
Add ccbuild.sh to your repository (don't forget to chmod u+x it). It needs to ensure rvm script is loaded and activate the correct ruby & gemset.
The script initialization is necessary because it seems the way CC spawns the shell script it doesn't pick up the rvm initialization you might already have in .bash_profile. Without rvm script initialization, "rvm" will invoke the binary which can't make the direct environment mods it needs to do.
Here's what I have in ccbuild.sh:
#!/bin/bash
if [ "$(type rvm | head -1)" != "rvm is a function" ]
then
source ~/.rvm/scripts/rvm || exit 1
fi
if [ "$(type rvm | head -1)" != "rvm is a function" ]
then
echo "rvm not properly installed and available"
exit 1
fi
rvm use 1.9.2-p0@mygemsetname --create
bundle check || bundle install || exit 1
rake # setup to run all required tests by default
Once that's checked in and pushed to the repo, you can kick-off CC:
cd /usr/local/cruisecontrol-1.4.0
./cruise start
Now my ccmenu is green, and CruiseControl is running my project under 1.9.2 and rails 3.0.2;-)
Blogarhythm: Waiting for the light ART-SCHOOL
Sunday, October 03, 2010
Add to Calendar with a jQuery Widget
If you deal with any kind of event-based information on your websites, you would probably really like an easy way of letting users add it to their calendar.
Unlike link sharing—where there are some great drop-in solutions like AddToAny and AddThis—calendar integration unfortunately remains a bit rough around the edges. Varying standards with varying degrees of adoption; consideration for desktop and web-based calendar clients; and the complicating factor of timezones make it all a bit harder than it really should be.
AddToCal is a jQuery UI Widget that I put together to help me solve the problem and do things like you see in the screen clip on the right. It's freely shared and available on github.
Using it on a web page is as simple as including the js links, binding it to the DOM elements or classes on your page that contain "events", and providing an implementation of the getEventDetails method that knows how to extract the event details from your particular DOM structure.
The example also demonstrates how to use AddToCal in conjunction with the hCalendar microformat for event notation (try it out here).
I've currently included support for the web-based calendars by Google, Yahoo!, and Microsoft Live. If you can serve iCal or vCalendar format event links then AddToCal also links to 30boxes and iCal/vCalendar desktop software—including the iPad Calendar application;-)
Serving iCal and vCalendar links
What about iCal and vCalendar formats? These are a little more complicated because we need a URL to the respective iCal and vCalendar format resources .. so we need to be able to serve them before AddToCal can link to them.
Thankfully, this can be relatively trivial once you get a handle on the file formats. Here's an example of how to implement with Ruby on Rails.
Say we have an Events controller and associated Event model that represents an activity people may like to add to their calendars. A simple iCal implementation with ERB means creating a views/events/show.ics.erb along these lines:
BEGIN:VCALENDAR
PRODID:-//AddToCal Example//EN
VERSION:2.0
METHOD:PUBLISH
BEGIN:VEVENT
DTSTART:<%= @event.start_time.rfc3339 %>
DTEND:<%= @event.end_time.rfc3339 %>
LOCATION:<%= event_url( @event ) %>
SEQUENCE:0
UID:
DTSTAMP:<%= Time.zone.now.rfc3339 %>
DESCRIPTION:<%= event_url( @event ) %>\n<%= @event.full_title %>
SUMMARY:<%= @event.synopsis %>
PRIORITY:5
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR
Sharp eyes will note the unusual rfc3339 helper method I've provided to make it easy to get date/times in the compact UTC date-time format required by the iCal and vCal formats. You could extend Time::DATE_FORMATS, but here I've simply added the method to ActiveSupport::TimeWithZone
class ActiveSupport::TimeWithZone
def rfc3339
utc.strftime("%Y%m%dT%H%M%SZ")
end
end
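To see what that helper emits, here is the same strftime pattern applied to a plain Time value (plain Time rather than TimeWithZone, just to keep the example standalone):

```ruby
# The compact UTC date-time form used in the calendar files above:
t = Time.utc(2010, 10, 3, 9, 30, 0)
puts t.strftime("%Y%m%dT%H%M%SZ")  # => 20101003T093000Z
```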
To support vCalendar, we also implement views/events/show.vcs.erb
BEGIN:VCALENDAR
PRODID:-//AddToCal Example//EN
VERSION:1.0
BEGIN:VEVENT
SUMMARY:<%= @event.full_title %>
PRIORITY:0
CATEGORIES:SHOW
CLASS:PUBLIC
DTSTART:<%= @event.start_time.rfc3339 %>
DTEND:<%= @event.end_time.rfc3339 %>
URL:<%= event_url( @event ) %>
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:<%= event_url( @event ) %> =0A<%= @event.synopsis %>
LOCATION;ENCODING=QUOTED-PRINTABLE:<%= event_url( @event ) %>
END:VEVENT
END:VCALENDAR
Depending on your Rails version and web server, you may have to teach it about these MIME types e.g. add to config/initializers/mime_types.rb:
Mime::Type.register "text/x-vcalendar", :vcs, ["text/calendar", "application/hbs-vcs"]
Blogarhythm: Remember - Jimi Hendrix
Sunday, July 25, 2010
DHH, Lars, and the Quality of Water
Just for the record:
- David Heinemeier Hansson: born in Denmark 1979. Partner at 37signals
- Lars Ulrich: born in Denmark 1963. Drummer for Metallica
- DHH: outspoken proponent for building businesses from revenue.
- Lars: outspoken proponent for exploiting copyright for money.
Blogarhythm: Smoke on the Water - Metallica covering Deep Purple.
Wednesday, January 20, 2010
Released: Authlogic_RPX 1.1.1 gem now with identity mapping for Rails
Authlogic_RPX is the gem I released some time back that lets you easily support multiple authentication schemes in Rails without really having to think about it (Facebook, twitter, OpenID, Google, Yahoo! etc etc etc).
With thanks for contributions from John and Damir, Authlogic_RPX 1.1.1 is now available and includes an internal identity mapping that lets you tie multiple authentication methods to a single user. There are also some other bug fixes and improvements, so it is recommended you update Authlogic_RPX even if you don't want to enable the identity mapping.
See the Authlogic_RPX repo on github for information about installation, usage, and don't forget the sample site (and sample code).
Under the covers, it uses the RPX service from JanRain, a bit like this:
To get a better idea of why you might want to use Authlogic_RPX, and some pointers on how to do it, here's a reprise of my presentation on the subject to the Singapore Ruby Brigade last year:
Rails authentication with Authlogic RPX
Soundtrack for this post? (I want) Security Jo Jo Zep & The Falcons. 1977, tight pants, mo and all;-)
Saturday, January 16, 2010
ActiveWarehouse/ETL and Reflections on BI for Rails
I've recently been considering the opportunity to apply Ruby and Rails goodness to mainstream Business Intelligence applications.
During my research into prior art I discovered Anthony Eden's ActiveWarehouse and ActiveWarehouse-ETL projects, and gave them a test drive using a fictitious "Cupcakes Inc" site.
I presented this at the Jan 2010 Singapore Ruby Brigade meetup held at hackerspace.sg. My "point-of-view" slides are embedded below, and you can find the sample project and doco on github.
Conclusions?
- ActiveWarehouse is a textbook implementation of classic data warehousing techniques. That was clearly Anthony's intention, but it also means it does not really attempt to explore how data warehousing might be approached quite differently with Ruby and Rails
- ActiveWarehouse/ETL are not for the faint-hearted. When you get them working, they work well, but the lack of documentation basically means it's inevitable you'll end up reading the sources to figure it all out
- I have concerns about scalability. Having worked on terabyte warehouses using "classic" technology, I know just how far you have to push databases in order to scale. This bears more investigation and testing before it would be sensible to commit to ActiveWarehouse for a large-scale DWH implementation
Nevertheless, ActiveWarehouse and ActiveWarehouse-ETL are interesting projects, and the underlying implementations make for some educational code reading. Hopefully my slides and the Cupcakes sample project will add a bit to the available documentation, and give a bit of a leg up to anyone interested in checking out these projects;-)
ActiveWarehouse/ETL - BI & DW for Ruby/Rails
Soundtrack for this post: Information Overload - Living Colour
Tuesday, December 29, 2009
Understanding Authlogic Plugin Dynamics
authlogic is by far and away my favourite authentication framework for Rails. I've raved enough in my slides on Authlogic_RPX.
Its true beauty is making authentication so unobtrusive for application developers.
However, the same can't be said for Authlogic plugin developers. I spent quite a bit of time meandering through the authlogic source and other plugins in order to produce Authlogic_RPX (the RPX plugin for authlogic, to support JanRain's RPX service).
I recently returned to the Authlogic_RPX in order to provide an update that finally adds identity mapping (with contributions from John and Damir; thanks guys!).
Luckily my previous exploits were recent enough that much of what I learned about authlogic were still pretty fresh. But before I forget it all again, I thought it would be worthwhile to write up a few of the "insights" I had on the authlogic source.
Hence this post. I'm just going to focus on one thing for now. Since authlogic is so "unobtrusive", one of the big conceptual hurdles you need to get over if you are attempting to write an authlogic plugin is simply:
(To follow this discussion, I'd recommend you have a plugin close to hand. Either my previously mentioned Authlogic_RPX, or another like Authlogic_OAuth, or Authlogic_openid)
By unobtrusive, I mean like this. Here is the minimal configuration for a user model that uses Authlogic_RPX:
Pretty simple, right? But what power lies behind that little "acts_as_authentic" statement?
What follows is my attempt at a description of what goes on behind the scenes..
The main file in an authlogic plugin/gem is going to have the relevant requires to the library files. But they do squat. We start mixing in our plugin with the includes and helper registrations:
Note that your plugin ActsAsAuthentic module get's mixed in with ActiveRecord itself (not just a specific ActiveRecord model). That's crucial to remember when considering class methods in your plugin (they are basically global across all ActiveRecord).
What happens when the previous lines included the plugin's ActsAsAuthentic module?
The self.included method handles the initial bootstrap..
Here we see we do a class_eval on the class that the module is included in (i.e. ActiveRecord::Base). You'll immediately get the sense we're doing some kind of mixin with the Config and Methods modules. The Config / Methods module structure is a common pattern you will see throughout authlogic.
extend Config takes the Config module (AuthlogicRpx::ActsAsAuthentic::Config) and add it to the ActiveRecord::Base class cdefinition. i.e. methods defined in Config become class methods of ActiveRecord::Base. (If you add a def self.extended(klass) method to Config you'll be able to hook the extension).
add_acts_as_authentic_module(Methods, :prepend) adds the Methods module (AuthlogicRpx::ActsAsAuthentic::Methods) to the authlogic modules list. That's all. Take a look at add_acts_as_authentic_module:
It is only when we add the acts_as_authentic in our model class that things start to happen. This method loads all the modules from the list built up by all the call(s) to "add_acts_as_authentic_module". Note the include in the last line of the method:
Once the include is invoked, our plugin will usually hook the event and do some setup activity in our module's def self.included method.
Unlike the Config extension, the class you are including in (the klass parameter in the example), is the specific ActiveRecord model you have marked as "acts_as_authentic".
In other words, the methods in the Methods module get included as instance methods for the specific ActiveRecord models class (User in the example I presented earlier).
Let's hang it all out in a simplified and contrived example. Take this basic structure:
If we add this to our User model, then the result we'd end up with is this:
I've covered the main points in bootstrapping authlogic. There's obviously a lot more that goes on, but I think once you get these basics it makes authlogic-related code so much easier to read and understand. It's a pretty neat demonstration of dynamic ruby at work.
Understanding the loading process is also makes it possible to be definitive about how your application will behave, rather than just treating it as a heuristic black box.
Take authlogic configuration settings for example. Say we have a configuration parameter in our plugin called "big_red_button" that takes values :on and :off.
Syntactically, both of these user model definitions are valid:
However, the behaviour is slightly different, and the difference will be significant if you have any initialisation code in the plugin that cares about the setting of the big_red_button.
In the second case, it should be clear that setting big_red_button :on only happens after all the plugin initialisation is complete.
But in the first case, it is a little more subtle. If you go back to review the acts_as_authentic method you'll see that setting the big_red_button occurs at yield self if block_given?. Implications:
How's that? Pretty cool stuff, but thankfully as I mentioned before, these details only really concern plugin authors and anyone who just loves to read dynamic ruby code.
There's much more to authlogic that what I've discussed here of course (and RPX). Perhaps good fodder for a future post? Let's see..
Soundtrack for this post: Because it's There - Michael Hedges
Its true beauty is making authentication so unobtrusive for application developers.
However, the same can't be said for Authlogic plugin developers. I spent quite a bit of time meandering through the authlogic source and other plugins in order to produce Authlogic_RPX (the RPX plugin for authlogic, to support JanRain's RPX service).
I recently returned to Authlogic_RPX in order to provide an update that finally adds identity mapping (with contributions from John and Damir; thanks guys!).
Luckily my previous exploits were recent enough that much of what I learned about authlogic was still pretty fresh. But before I forget it all again, I thought it would be worthwhile to write up a few of the "insights" I had on the authlogic source.
Hence this post. I'm just going to focus on one thing for now. Since authlogic is so "unobtrusive", one of the big conceptual hurdles you need to get over if you are attempting to write an authlogic plugin is simply:
Just how the heck does it all get loaded and mixed in with my models??
(To follow this discussion, I'd recommend you have a plugin close to hand. Either my previously mentioned Authlogic_RPX, or another like Authlogic_OAuth, or Authlogic_openid)
By unobtrusive, I mean like this. Here is the minimal configuration for a user model that uses Authlogic_RPX:
class User < ActiveRecord::Base
  acts_as_authentic
end
Pretty simple, right? But what power lies behind that little "acts_as_authentic" statement?
What follows is my attempt at a description of what goes on behind the scenes..
First: get loaded
The main file in an authlogic plugin/gem is going to have the relevant requires to the library files. But they do squat. We start mixing in our plugin with the includes and helper registrations:
require "authlogic_rpx/version"
require "authlogic_rpx/acts_as_authentic"
require "authlogic_rpx/session"
require "authlogic_rpx/helper"
require "authlogic_rpx/rpx_identifier"
ActiveRecord::Base.send(:include, AuthlogicRpx::ActsAsAuthentic)
Authlogic::Session::Base.send(:include, AuthlogicRpx::Session)
ActionController::Base.helper AuthlogicRpx::Helper
Note that your plugin's ActsAsAuthentic module gets mixed in with ActiveRecord itself (not just a specific ActiveRecord model). That's crucial to remember when considering class methods in your plugin (they are basically global across all ActiveRecord models).
What including ActsAsAuthentic in ActiveRecord::Base does..
What happens when the previous lines included the plugin's ActsAsAuthentic module?
The self.included method handles the initial bootstrap..
module AuthlogicRpx
  module ActsAsAuthentic
    def self.included(klass)
      klass.class_eval do
        extend Config
        add_acts_as_authentic_module(Methods, :prepend)
      end
    end
    ..
Here we do a class_eval on the class that the module is included in (i.e. ActiveRecord::Base). You'll immediately get the sense we're doing some kind of mixin with the Config and Methods modules. The Config / Methods module structure is a common pattern you will see throughout authlogic.
extend Config takes the Config module (AuthlogicRpx::ActsAsAuthentic::Config) and adds it to the ActiveRecord::Base class definition, i.e. methods defined in Config become class methods of ActiveRecord::Base. (If you add a def self.extended(klass) method to Config, you'll be able to hook the extension.)
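To see the extend pattern in isolation, here's a tiny self-contained sketch; the module and class names are made up for illustration, not authlogic's own:

```ruby
# A minimal sketch of the extend + self.extended hook pattern,
# using hypothetical names (not actual authlogic code).
module Config
  # Fires when Config is extended into a class, analogous to
  # a def self.extended(klass) hook in an authlogic plugin.
  def self.extended(klass)
    puts "Config extended into #{klass}"
  end

  def a_config_method
    "I am a class method of #{self}"
  end
end

class Base
  extend Config  # methods in Config become class methods of Base
end

class Model < Base; end

puts Base.a_config_method   # class method on Base...
puts Model.a_config_method  # ...and inherited by every subclass
```

Because the extension happens on the base class, every subclass picks up the config methods for free, which is exactly why an authlogic plugin's Config methods behave like globals across all models.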
add_acts_as_authentic_module(Methods, :prepend) adds the Methods module (AuthlogicRpx::ActsAsAuthentic::Methods) to the authlogic modules list. That's all. Take a look at add_acts_as_authentic_module:
def add_acts_as_authentic_module(mod, action = :append)
  modules = acts_as_authentic_modules
  case action
  when :append
    modules << mod
  when :prepend
    modules = [mod] + modules
  end
  modules.uniq!
  write_inheritable_attribute(:acts_as_authentic_modules, modules)
end
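The append/prepend bookkeeping is nothing magical; stripped of the authlogic specifics, it's plain array manipulation (a standalone sketch with made-up names):

```ruby
# Standalone sketch of the append/prepend module-list logic,
# mimicking add_acts_as_authentic_module with a plain array.
def add_module(modules, mod, action = :append)
  case action
  when :append
    modules = modules + [mod]
  when :prepend
    modules = [mod] + modules
  end
  modules.uniq  # duplicates are dropped, keeping first occurrence
end

list = []
list = add_module(list, :core)
list = add_module(list, :rpx, :prepend)  # plugin modules jump the queue
list = add_module(list, :core)           # re-adding is a no-op via uniq
puts list.inspect  # => [:rpx, :core]
```

Prepending matters because module inclusion order determines method lookup order; a plugin that prepends its Methods module gets its hooks in ahead of the defaults.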
Ready to launch..
It is only when we add acts_as_authentic in our model class that things start to happen. This method loads all the modules from the list built up by the call(s) to add_acts_as_authentic_module. Note the include in the last line of the method:
def acts_as_authentic(unsupported_options = nil, &block)
  # Stop all configuration if the DB is not set up
  return if !db_setup?
  raise ArgumentError.new("You are using the old v1.X.X configuration method for Authlogic. Instead of " +
    "passing a hash of configuration options to acts_as_authentic, pass a block: acts_as_authentic { |c| c.my_option = my_value }") if !unsupported_options.nil?
  yield self if block_given?
  acts_as_authentic_modules.each { |mod| include mod }
end
Ignition..
Once the include is invoked, our plugin will usually hook the event and do some setup activity in our module's def self.included method.
module Methods
  def self.included(klass)
    klass.class_eval do
      ..
    end
    ..
  end
  ..
Unlike the Config extension, the class you are including into (the klass parameter in the example) is the specific ActiveRecord model you have marked as "acts_as_authentic".
In other words, the methods in the Methods module get included as instance methods of the specific ActiveRecord model class (User in the example I presented earlier).
Hanging it on the line..
Let's hang it all out in a simplified and contrived example. Take this basic structure:
module AuthlogicPlugin
  module ActsAsAuthentic
    def self.included(klass)
      klass.class_eval do
        extend Config
        add_acts_as_authentic_module(Methods, :prepend)
      end
    end

    module Config
      def config_item
      end
    end

    module Methods
      def self.included(klass)
        klass.class_eval do
          def self.special_setting
          end
        end
      end

      def instance_item
      end
    end
  end
end
If we add this to our User model, then the result we'd end up with is this:
- config_item: will be a class method on ActiveRecord::Base
- instance_item: will be an instance method on User
- special_setting: will be a class method on User
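Pulling those three observations into a runnable sketch (plain Ruby, with a Base class standing in for ActiveRecord::Base, and the module registry simplified away for brevity):

```ruby
# Self-contained illustration of where each method lands.
# Base stands in for ActiveRecord::Base; the module list is
# simplified to a direct include (not authlogic's actual code).
module AuthlogicPlugin
  module ActsAsAuthentic
    def self.included(klass)
      klass.extend Config  # Config methods become class methods of Base
    end

    module Config
      def config_item
        "config_item called on #{self}"
      end

      # stand-in for the real acts_as_authentic: includes the
      # Methods module into the calling model class
      def acts_as_authentic
        include Methods
      end
    end

    module Methods
      def self.included(klass)
        klass.class_eval do
          def self.special_setting
            "special_setting called on #{self}"
          end
        end
      end

      def instance_item
        "instance_item called on #{self.class}"
      end
    end
  end
end

class Base
  include AuthlogicPlugin::ActsAsAuthentic
end

class User < Base
  acts_as_authentic
end

puts Base.config_item        # => "config_item called on Base"
puts User.special_setting    # => "special_setting called on User"
puts User.new.instance_item  # => "instance_item called on User"
```

Run it and you can see the asymmetry directly: config_item is available on Base (and hence every model), while special_setting and instance_item only exist on User because only User called acts_as_authentic.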
Conclusions & Implications?
I've covered the main points of bootstrapping authlogic. There's obviously a lot more that goes on, but I think once you get these basics, authlogic-related code becomes much easier to read and understand. It's a pretty neat demonstration of dynamic ruby at work.
Understanding the loading process also makes it possible to be definitive about how your application will behave, rather than just treating it as a heuristic black box.
Take authlogic configuration settings for example. Say we have a configuration parameter in our plugin called "big_red_button" that takes values :on and :off.
Syntactically, both of these user model definitions are valid:
class User < ActiveRecord::Base
  acts_as_authentic do |c|
    c.big_red_button :on
  end
end

class User < ActiveRecord::Base
  acts_as_authentic
  big_red_button :on
end
However, the behaviour is slightly different, and the difference will be significant if you have any initialisation code in the plugin that cares about the setting of the big_red_button.
In the second case, it should be clear that setting big_red_button :on only happens after all the plugin initialisation is complete.
But in the first case, it is a little more subtle. If you go back to review the acts_as_authentic method you'll see that setting the big_red_button occurs at yield self if block_given?. Implications:
- Config extension of ActiveRecord::Base takes place before the big_red_button is set
- the Methods module is registered (via add_acts_as_authentic_module) before the big_red_button is set
- the Methods module's def self.included is called after the big_red_button is set (meaning you can safely do conditional initialisation there based on the big_red_button setting)
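A hypothetical sketch (not authlogic's actual code) makes the ordering concrete: the config block runs at yield, before the modules are included, so def self.included can rely on the setting being in place:

```ruby
# Hypothetical minimal model base showing the yield-then-include
# ordering. big_red_button is an invented config parameter.
class ModelBase
  def self.big_red_button(value = nil)
    @big_red_button = value unless value.nil?
    @big_red_button
  end

  def self.acts_as_authentic
    yield self if block_given?  # configuration applied first...
    include Methods             # ...then the registered modules
  end

  module Methods
    def self.included(klass)
      # safe: big_red_button has already been set at this point
      puts "included with big_red_button=#{klass.big_red_button.inspect}"
    end
  end
end

class User < ModelBase
  acts_as_authentic do |c|
    c.big_red_button :on
  end
end
# prints: included with big_red_button=:on
```

Had the second (post-acts_as_authentic) configuration style been used instead, self.included would have run before the setting existed and seen nil.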
How's that? Pretty cool stuff, but thankfully as I mentioned before, these details only really concern plugin authors and anyone who just loves to read dynamic ruby code.
There's much more to authlogic than what I've discussed here of course (and RPX). Perhaps good fodder for a future post? Let's see..
Soundtrack for this post: Because it's There - Michael Hedges
Thursday, October 08, 2009
Rails authentication with Authlogic and RPX
The Singapore Ruby Brigade had its monthly meetup last night. Great discussions as always, and how can you complain when you get to carry on over maggi mee goreng supper until the wee hours ;-)
Here are the slides from my talk about Rails authentication options in general, and specifically why and how to use RPX with Authlogic (using the Authlogic_RPX gem I released last week).
Sunday, September 27, 2009
Released: Authlogic_RPX gem, the easiest way to support multiple authentication schemes in Rails
I've just made Authlogic_RPX public for the first time and invite any rails developers to take a look. It's a ruby gem that adds support for RPX authentication in the Authlogic framework for Ruby on Rails. With RPX, you get to support all the common authentication schemes in one shot (Facebook, Twitter, OpenID, etc).
Authlogic_RPX is available under the MIT license, and a number of resources are available:
- Authlogic_RPX source repository: github.com/tardate/authlogic_rpx
- Live Demonstration Site: rails-authlogic-rpx-sample.heroku.com
- Demonstration site source repository: github.com/tardate/rails-authlogic-rpx-sample
The best place to find out more is the README, it contains the full background and details on how to start using it. Feedback and comments are welcome and invited (either directly to me, or you can enter them in the github issues list for the project).
Authlogic_RPX plugs into the fantastic Authlogic framework by Ben Johnson/binarylogic. Authlogic is elegant and unobtrusive, making it currently one of the more popular approaches to authentication in Rails.
RPX is the website single-sign-on service provided by JanRain (the OpenID folks). It complements their OPX offerings I wrote about recently. RPX abstracts the authentication process for developers and provides a single, simple API to deal with. This approach is great for developers because you only need to build a single authentication integration, and leave to JanRain the messy details of implementing and maintaining support for the range of authentication providers: OpenID, OAuth, Facebook Connect, AOL, Yahoo, Google, and so on..
If you want to learn more, there was recently a great podcast interview with Brian Ellin from JanRain on the IT Conversations Network: RPX and Identity Systems
Thursday, September 10, 2009
Twitpocalypse II: Developers beware of DB variances
Alert: "Twitpocalypse II" coming Friday, September 11th - make sure you can handle large status IDs!
Twitter operations team will artificially increase the maximum status ID to 4294967296 this coming Friday, September 11th.
"Twitpocalypse (I)" occured back in June, when twitter and application developers had to deal with the fact that message status IDs broke the signed 32-bit integer limit (2,147,483,647).
At that point, the limit was raised to the unsigned 32-bit limit of 4,294,967,296. Now we're heading to crack that this week. You can track our collective rush to the brink social celebrity meltdown at www.twitpocalypse.com;-)
First reaction: OMG, it's taken only 3 months to double the volume of tweets sent over all time? That's a serious adoption curve.
Next reaction: once again, application developers are reminded that we unfortunately can't ignore the specifics of the database platform we are running on and just take it for granted.
It's actually quite common for development and production infrastructure to be subtly different. This is especially true in the Rails world where SQLite is the default development database, but production systems will often be using MySQL or PostgreSQL.
If you are using a hosted ("cloud") service it may even take some digging to actually find out what kind of database you are running on. For example, if you use Heroku to host Rails applications, most of the time you don't care that they run PostgreSQL (originally I think they were using MySQL but migrated a while back).
It's in situations like Twitpocalypse that you care. With a Rails-based twitter application, use an "integer" in your database migrations and you will have no problem running locally on SQLite, but your app will blow up on a production PostgreSQL database when you encounter a message with status_id above 2,147,483,647.
Fortunately, the solution is simple: migrate to bigint data types.
And the even better news is that ActiveRecord database migrations make this a cinch if you have been using integer types in the past. For example, if you've been using an integer type to store "in_reply_to_status_id" references in twitter mentions table, the change_column method will happily manage the messy details for you:
class ForcebigintMentions < ActiveRecord::Migration
  def self.up
    change_column :mentions, :in_reply_to_status_id, :bigint
  end

  def self.down
    change_column :mentions, :in_reply_to_status_id, :integer
  end
end
It's always a good idea to check fundamental limits for the database platforms you are using. They are not always what you expect, and you can't safely apply lessons from one product to another without doing your homework.
Here's a quick comparison of integer on some of the common platforms:
- SQLite: INTEGER. The value is a signed integer, stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value. i.e. will automatically scale to an 8 byte signed BIGINT (-9223372036854775808 to 9223372036854775807)
- PostgreSQL: INTEGER 4 bytes (-2147483648 to +2147483647). Use BIGINT for 8 byte signed integer.
- MySQL: INT (alias INTEGER) has a signed range of -2147483648 to 2147483647, or an unsigned range of 0 to 4294967295. Use BIGINT for 8 byte integers.
- Oracle: NUMBER type ranges from 1.0 x 10^-130 to but not including 1.0 x 10^126. The activerecord-oracle-enhanced-adapter provides facilities for interpreting NUMBER as FixNum or BigDecimal in ActiveRecord as appropriate.
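A quick Ruby sanity check of the limits quoted above (pure arithmetic, no database required):

```ruby
# The integer boundaries behind Twitpocalypse I and II.
SIGNED_32_MAX   = 2**31 - 1  # 2,147,483,647: 4-byte signed INTEGER limit
UNSIGNED_32_MAX = 2**32 - 1  # 4,294,967,295: unsigned 32-bit limit
SIGNED_64_MAX   = 2**63 - 1  # 9,223,372,036,854,775,807: BIGINT limit

status_id = 4_294_967_296  # the Twitpocalypse II target status ID

puts status_id > SIGNED_32_MAX    # true: overflows a signed 32-bit INTEGER
puts status_id > UNSIGNED_32_MAX  # true: even unsigned 32 bits won't hold it
puts status_id <= SIGNED_64_MAX   # true: fits comfortably in a BIGINT
```

Ruby itself transparently promotes to arbitrary-precision integers, which is exactly why the overflow bites only at the database layer and not in your application code.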
PS: there's been some discussion of why twitter would schedule this update on Sep 11th and publicise it as the Twitpocalypse II. I hope it was just an EQ+IQ deficiency, not someone's twisted idea of a funny or attention-grabbing stunt.