I recently needed to store org charts as JSONB in a Postgres table and query them. Here's how I did it.
Let’s start by defining the schema. We’ll create a table that will store org charts as a JSONB object:
CREATE TABLE org_charts (
    id BIGSERIAL PRIMARY KEY,
    chart JSONB
);
Next, let’s define the JSON. It’s going to be a recursive structure of employee nodes with a reports collection containing the employee’s direct reports. Since there can be multiple levels of reporting, our org chart can be arbitrarily deep:
{
"id": 1,
"name": "Charles Montgomery Burns",
"title": "Owner",
"reports": [
{
"id": 2,
"name": "Waylon Smithers, Jr."
},
{
"id": 3,
"name": "Inanimate carbon rod",
"reports": [
{
"id": 4,
"name": "Homer Simpson",
"title": "Safety Engineer"
}
]
},
{
"id": 5,
"name": "Lenny Leonard"
},
{
"id": 6,
"name": "Carl Carlson"
},
{
"id": 7,
"name": "Frank Grimes",
"status": "deceased"
}
]
}
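If you want to follow along, you can load this document into the table like so:
INSERT INTO org_charts (chart) VALUES ('{
  "id": 1, "name": "Charles Montgomery Burns", "title": "Owner",
  "reports": [
    {"id": 2, "name": "Waylon Smithers, Jr."},
    {"id": 3, "name": "Inanimate carbon rod",
     "reports": [{"id": 4, "name": "Homer Simpson", "title": "Safety Engineer"}]},
    {"id": 5, "name": "Lenny Leonard"},
    {"id": 6, "name": "Carl Carlson"},
    {"id": 7, "name": "Frank Grimes", "status": "deceased"}
  ]
}'::jsonb);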
We would like to write a query that can navigate this tree and, for instance, print out the names of all employees. To do that, we need to use Postgres' powerful recursive common table expressions (CTEs). Recursive CTEs sound intimidating, but they're actually fairly straightforward to write. Here's the one I wrote for this purpose:
WITH RECURSIVE reports (id, json_element) AS (
    -- non recursive term
    SELECT id, chart FROM org_charts
    UNION
    -- recursive term
    SELECT
        id,
        CASE
            WHEN jsonb_typeof(json_element) = 'array'
                THEN jsonb_array_elements(json_element)
            WHEN jsonb_exists(json_element, 'reports')
                THEN jsonb_array_elements(json_element -> 'reports')
        END AS json_element
    FROM reports
    WHERE jsonb_typeof(json_element) = 'array'
       OR jsonb_typeof(json_element) = 'object'
)
Let’s dissect this a bit. The query starts with:
WITH RECURSIVE reports (id, json_element) AS (
Here, we're declaring a recursive CTE called reports, which takes two parameters, id and json_element. Note that these parameters represent the columns returned by the overall query as well as by its internal subqueries.
Next, we define the non-recursive term:
-- non recursive term
SELECT id, chart FROM org_charts
This query tells Postgres how to start evaluating the CTE. It’s evaluated first (and only once) and its results drive the rest of the query. Note that this query must return two columns that match those defined by the CTE (id and json_element in this case).
After the non-recursive term, we need to define the recursive term:
-- recursive term
SELECT
    id,
    CASE
        WHEN jsonb_typeof(json_element) = 'array'
            THEN jsonb_array_elements(json_element)
        WHEN jsonb_exists(json_element, 'reports')
            THEN jsonb_array_elements(json_element -> 'reports')
    END AS json_element
FROM reports
WHERE jsonb_typeof(json_element) = 'array'
   OR jsonb_typeof(json_element) = 'object'
A few things to note here:
– The recursive term references the CTE itself (FROM reports), which is what allows it to be recursive.
– Like the non-recursive term, it must return the same two columns (id and json_element).
– The two terms must be combined with UNION or UNION ALL.
Now, let's talk about what this query is doing. Essentially, it's unrolling the nested JSON structure into rows. If it runs into a JSON array, it creates one row per member using the jsonb_array_elements() function. If it runs into an element that contains the reports collection, it unrolls that as well using jsonb_array_elements(). Finally, it only looks at JSON arrays or objects (and not scalar values or NULLs).
So, how do we actually run this CTE? By simply treating reports as any other table:
SELECT * FROM reports;
Running the query above results in the following:
| id | json_element |
|---|---|
| 1 | {"id": 1, "name": "Charles Montgomery Burns", "title": "Owner", "reports": [...]} (the full chart) |
| 1 | {"id": 2, "name": "Waylon Smithers, Jr."} |
| 1 | {"id": 3, "name": "Inanimate carbon rod", "reports": [{"id": 4, "name": "Homer Simpson", "title": "Safety Engineer"}]} |
| 1 | {"id": 5, "name": "Lenny Leonard"} |
| 1 | {"id": 6, "name": "Carl Carlson"} |
| 1 | {"id": 7, "name": "Frank Grimes", "status": "deceased"} |
| 1 | NULL |
| 1 | {"id": 4, "name": "Homer Simpson", "title": "Safety Engineer"} |
As you can see here, our query created one row per employee plus a NULL. The NULL row appears to come from leaf objects: an employee with no reports key matches neither branch of the CASE expression, which therefore returns NULL (and UNION collapses those duplicate NULLs into a single row). Either way, it's easy enough to filter the NULL row out.
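For example:
SELECT id, json_element
FROM reports
WHERE json_element IS NOT NULL;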
To get the list of names (which was our original goal), we can do this:
SELECT json_element -> 'name' AS employee_name
FROM reports
WHERE jsonb_exists(json_element, 'name');
Here, we're looking only at those rows that contain the element name and then returning the value of name:
| employee_name |
|---|
| "Charles Montgomery Burns" |
| "Waylon Smithers, Jr." |
| "Inanimate carbon rod" |
| "Lenny Leonard" |
| "Carl Carlson" |
| "Frank Grimes" |
| "Homer Simpson" |
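One note: the -> operator returns jsonb values, which is why the names above come back wrapped in quotes. To get them as plain text, use the ->> operator instead:
SELECT json_element ->> 'name' AS employee_name
FROM reports
WHERE jsonb_exists(json_element, 'name');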
Overall, using Postgres recursive CTEs to navigate JSON trees proved to be a challenging but rewarding exercise. Once you grok the basics, it's fairly straightforward to build up complex queries.
I wanted a tool that could make plans simple to understand and visually pleasing.
Let’s see how Pev helps with these. I’ll use the plan produced by the query below for illustration (you can run this query against the dellstore2 database):
SELECT C.STATE, SUM(O.NETAMOUNT), SUM(O.TOTALAMOUNT)
FROM CUSTOMERS C
INNER JOIN CUST_HIST CH ON C.CUSTOMERID = CH.CUSTOMERID
INNER JOIN ORDERS O ON CH.ORDERID = O.ORDERID
GROUP BY C.STATE
LIMIT 10 OFFSET 1
I should also note that Pev only works with EXPLAIN plans in JSON format. To produce one, prefix your query with:
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)
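For example, to capture a plan for the query above, you would run:
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)
SELECT C.STATE, SUM(O.NETAMOUNT), SUM(O.TOTALAMOUNT)
FROM CUSTOMERS C
INNER JOIN CUST_HIST CH ON C.CUSTOMERID = CH.CUSTOMERID
INNER JOIN ORDERS O ON CH.ORDERID = O.ORDERID
GROUP BY C.STATE
LIMIT 10 OFFSET 1;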
First off, Pev uses a classic tree graph to visualize the plan. I find this easier to view than the left-to-right tree used by PgAdmin.
By default, each node displays its type and relevant details (like the object being scanned or the join condition), its duration, and key insights (like whether the node is some type of outlier).
Speaking of insights, Pev currently calculates the following:
– outlier nodes (largest, slowest, costliest)
– nodes with bad planner estimates (planner missed by a factor of 100 or more)
Pev also allows for various customizations, like showing planner estimate details and a graph of rows, duration, or cost.
If you want to see absolutely everything Postgres knows about a node, just click on its title to get the extended view.
Using these customizations (available in the settings menu on the left), you can easily create a graph showing how fast each node is.
I personally find it hard to mentally map the plan I'm seeing to the query that generated it. Pev helps in this regard by showing the query right next to the node and highlighting the relevant part wherever possible. Just click the little blue database icon inside the node.
I must admit that highlighting the relevant part of the query is quite rudimentary at this point, but I’m hopeful that it can be improved in the future.
Pev is heavily influenced by the excellent explain.depesz.com. I learned a lot about how the Postgres planner works from using it and reading its help.
If you do use Pev, please let me know how you like it at @alexTatiyants. If you want to make it better, the code is on GitHub.
Here’s a diagram of the overall process:
[Diagram: Android to Mac photo backup via Dropbox]
First thing you need to do is to enable “Camera Upload” capability in the Dropbox Android app. If you’re installing it for the first time, the app will prompt you during setup. Otherwise, go to Settings > Turn on Camera Upload.
If you haven't done so already, install Dropbox on your Mac. Once the phone starts uploading pictures, you'll see them in a folder called "Camera Uploads".
At this point, you should have pictures automatically downloading to your Mac. You could stop here, and if you’re already paying for Dropbox, you very well may. However, if you’re using the free 2GB version, you’ll need to make sure that you don’t run out of space.
The way to do that is with an Automator Folder Action: you can configure your Mac to automatically move files from the Camera Uploads folder used by Dropbox to another folder of your choosing, so your Dropbox quota stays clear.
And that’s all there is to it!
You may be wondering why not just use Google+ Auto Backup. It comes standard on your Android phone and does indeed back up your photos to Google's cloud. However, getting those photos onto your computer automatically is not easy (or, in my case, possible).
As of today (August 2014), you have two options: (1) use the web UI to individually select photos and then download them or (2) use Picasa's "import from Google+" feature. The first option is a non-starter, and the second didn't work for me. For some reason, Picasa can only see (and sync) the last day's photos.
That said, its default output is not great. There's a fair amount of visual noise, and the default color choices could be better. So, in this tutorial, I'll explain how to customize your JFreeChart to look clean and modern.
We’ll start by generating a simple bar chart:
import org.jfree.chart.ChartFactory
import org.jfree.chart.ChartRenderingInfo
import org.jfree.chart.ChartUtilities
import org.jfree.chart.JFreeChart
import org.jfree.chart.entity.StandardEntityCollection
import org.jfree.chart.plot.PlotOrientation

Integer width = 500, height = 300

JFreeChart chart = ChartFactory.createBarChart(
    "",                       // chart title
    "",                       // domain axis label
    "",                       // range axis label
    createDataset(),          // data
    PlotOrientation.VERTICAL, // orientation
    true,                     // include legend
    true,                     // tooltips?
    false                     // URLs?
)

ChartRenderingInfo info = new ChartRenderingInfo(new StandardEntityCollection())
ChartUtilities.saveChartAsPNG(new File("bar-chart.png"), chart, width, height, info)
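createDataset() isn't defined in the snippet above; here's a minimal stand-in with made-up data so it runs:
import org.jfree.data.category.DefaultCategoryDataset

def createDataset() {
    // hypothetical sample data; any CategoryDataset will do
    def dataset = new DefaultCategoryDataset()
    ['Series 1', 'Series 2', 'Series 3'].eachWithIndex { series, i ->
        dataset.addValue((i + 1) * 10, series, 'Category 1')
        dataset.addValue((i + 1) * 15, series, 'Category 2')
    }
    return dataset
}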
This is what the output looks like:
[Screenshot: JFreeChart bar chart, default output]
Though it's not terrible, there are a few issues: the gray background and gridlines add visual noise, the gradient fill and default colors of the bars look dated, and the legend could use cleaner formatting.
So, let's fix that, shall we?
Let’s start by cleaning up the plot area. In particular, let’s get rid of various lines as well as that background gray:
import java.awt.Color
import org.jfree.chart.plot.CategoryPlot

CategoryPlot plot = chart.categoryPlot
plot.backgroundPaint = Color.white
plot.domainGridlinePaint = Color.white
plot.rangeGridlinePaint = Color.white
plot.outlineVisible = false
This is what the chart now looks like:
[Screenshot: JFreeChart bar chart with the plot area cleaned up]
Next, let's make the bars look nicer. We'll remove the gradient and change the default colors:
import java.awt.Paint
import org.jfree.chart.renderer.category.StandardBarPainter

// remove the gradient fill
plot.renderer.gradientPaintTransformer = null
plot.renderer.barPainter = new StandardBarPainter()

Paint[] colors = [
    new Color(0, 172, 178), // teal
    new Color(239, 70, 55), // red
    new Color(85, 177, 69)  // green
]

// change the default colors
for (int i = 0; i < 4; i++) {
    plot.renderer.setSeriesPaint(i, colors[i % colors.length])
}
Our chart now looks like this:
[Screenshot: JFreeChart bar chart with formatted plot area and bars]
The next item on our hit list is the legend. There are many ways one can format the legend, but I'm going to focus on (1) removing the border and separating it with extra padding and (2) giving each item its own line. Note that the second fix will require us to increase the height of the chart (in my example, I'll increase it from 300px to 400px).
Here’s the code:
import org.jfree.chart.block.BlockBorder
import org.jfree.ui.RectangleInsets

chart.legend.frame = BlockBorder.NONE
chart.legend.itemLabelPadding = new RectangleInsets(5.0, 2.0, 10.0, width)
chart.legend.padding = new RectangleInsets(20.0, 20.0, 0.0, 0.0)
Although there doesn’t appear to be an option to automatically put legend items on separate lines, there’s a trick you can use. The trick is to set the right padding to something large, like the width of the chart.
Ok, here’s what our chart now looks like:
[Screenshot: JFreeChart bar chart, fully formatted]
For the final step, I will simplify the chart even more by getting rid of the range (Y) axis and putting value labels directly on the bars. Furthermore, I will make the category labels less pronounced.
import org.jfree.chart.axis.CategoryAxis
import org.jfree.chart.axis.NumberAxis
import org.jfree.chart.labels.StandardCategoryItemLabelGenerator

// add values to bars
plot.renderer.baseItemLabelGenerator = new StandardCategoryItemLabelGenerator()
plot.renderer.baseItemLabelsVisible = true

// hide y axis
NumberAxis rangeAxis = (NumberAxis) plot.rangeAxis
rangeAxis.visible = false

// format the x axis
CategoryAxis domainAxis = plot.domainAxis
domainAxis.tickLabelPaint = new Color(160, 163, 165)
domainAxis.categoryLabelPositionOffset = 4
domainAxis.lowerMargin = 0
domainAxis.upperMargin = 0
domainAxis.categoryMargin = 0.2
And here’s the final chart:
Well, I was (and still am) a huge fan of Angular. I’ve made the necessary (and significant) time investment to learn it and can now do pretty much whatever I need with it. Therefore, constantly hearing about how some wet-behind-the-ears framework from Facebook (groan!) was so much better than my beloved Angular did nothing to endear me to it.
And so, as is my custom, I assumed that React was a flash-in-the-pan, here-today-gone-tomorrow kind of framework and decided to simply ignore it until that happened. This was a fine plan except for one flaw: React refused to go away. On the contrary, it seemed like every other day someone was singing its praises.
Since ignoring React was getting me nowhere, I decided to give in and take a closer look. Here’s what I found.
tl;dr: React is a very impressive library for developing fast, component-based UIs. That said, it's not a full framework and will likely require additional libraries in order to build large apps.
React is a JavaScript library designed around component-based UIs and one-way data flow. It’s simple, internally consistent, and theoretically sound. Oh, and it’s very, very fast.
React is all about components. Its creators firmly believe in components as the right way to do UI and they’ve built a framework which goes out of its way to push the developer in that direction.
I couldn’t agree more with this approach. Components are a really good way to structure application UI for two reasons. First, it’s much easier to reason about (and build and debug) an application when you can restrict your mental model to a specific component within it.
Second, a component-centric mindset forces you to look for and extract common patterns used in your application into reusable pieces. Instead of thinking “To implement a table filter, I’ll just add a few tags to my template”, you think “To implement a table filter, I’ll add a new component to my app and drop that component into my template”.
By the way, unless component orientation is front and center in your framework, you won't think this way. Your framework has to not only encourage components, but make them easy to create. For example, Angular definitely has the concept of components in the form of directives. This is great, except that creating directives is not trivial.
To create directives you need to have a deep understanding of how Angular’s runtime is managed (compiling vs linking), how scope inheritance works (isolated vs. parent vs. child), and possibly what transclusion is. And so, many Angular devs simply ignore directives and use templates instead (I certainly was in this category until fairly recently).
React is also all about immutability. Data flow between components is one-way and immutable by default. There’s a special ceremony around mutating state.
Again, this is a fantastic idea. Making state mutations so explicit and intentional forces the developer to really think about the flow of data within the app. It encourages eliminating needless state (such as creating separate variables to keep track of list counts) and makes bugs due to state inconsistencies between different parts of the app less likely.
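As a concrete illustration, here's roughly what that ceremony looks like with the 2014-era React API (the component and its markup are made up for this sketch):
var Counter = React.createClass({
    getInitialState: function () {
        return { count: 0 };
    },
    handleClick: function () {
        // state is never assigned to directly; setState makes the
        // mutation explicit and triggers a re-render
        this.setState({ count: this.state.count + 1 });
    },
    render: function () {
        return React.DOM.button({ onClick: this.handleClick },
            'Clicked ' + this.state.count + ' times');
    }
});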
It's important to remember that React (unlike Angular or Ember) is not a full framework. It is about the UI (or, as its website puts it, "React is the V in MVC"). It gives you everything you need to create and render components, but it is missing a few things you'd need for a full app, such as routing, a model layer, and server communication.
There are many options for each of these available on React’s Complementary Tools page.
Another thing to consider is testing. Facebook has its own test runner called Jest, which makes it easier to test React components. I haven’t tried it yet, but it does look similar to Angular’s Karma (which by the way is an excellent test runner).
Looking at the list of missing items, one can’t help but notice that Angular has support for all of these. Given that, one can’t help but wonder whether Angular and React can be combined.
It turns out that it's at least possible to combine React and Angular. It boils down to calling React's renderComponent() method from an Angular directive's $watch() handler.
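Here's a rough sketch of what that could look like (the directive and the MyWidget component are hypothetical):
angular.module('app').directive('reactWidget', function () {
    return {
        restrict: 'E',
        scope: { data: '=' },
        link: function (scope, element) {
            // re-render the React component whenever the bound data changes
            scope.$watch('data', function (newData) {
                React.renderComponent(MyWidget({ data: newData }), element[0]);
            }, true);
        }
    };
});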
I haven’t made up my mind about whether this is a good idea. After all, you still need to go through the hassle of creating directives, and you now have to also figure out what belongs in that directive and what belongs in React’s component.
So, should Angular devs switch to React? It’s difficult for me to answer this question because, while I understand where React excels, I don’t know what the warts are. I don’t know what it’s like to write large apps in React, what it’s like to debug weird issues in React.
I suspect that skilled Angular devs can get by just fine with Angular. If you know what you’re doing with Angular, you can get a lot of mileage out of it. Unless you have some crazy performance problem which requires a rewrite, there may not be enough of a reason to switch.
On the other hand, new Angular devs could well benefit from taking a closer look at React. It does a better job than Angular in forcing you to do the right thing (whether it is organizing your application in terms of components or forcing you to manage state better).
React is a great library for creating UI. It solves some very complex problems in a clever and elegant way. It strongly encourages good habits. It is simple to grok.
It’s also true that React doesn’t give you everything you need to build apps. You need to make choices about a set of supporting technologies before you can really get going with it. Some may really like this best-of-breed approach, while others prefer a single, comprehensive framework.
I think that React’s adoption may be aided by a comprehensive framework built around it (à la Marionette). Facebook seems to be moving in that direction with Flux, but nothing concrete has materialized as of yet.
As it turns out, there is nothing especially hard about DI and testing in Angular. You just need to get used to the syntax. One way to do so is to pretend that it’s something else. Allow me to explain.
Let’s imagine that we’re writing a Pastry Shop application. Our application needs a service for baking cakes, which we’ll call Baker. The Baker service requires a few things to do its work. For example, it needs access to a Store where it can get the necessary ingredients. It also needs a Mixer and an Oven. In other words, it has certain dependencies.
Now, let’s imagine that we’re using a special version of JavaScript, which makes it easy to express dependencies using annotations. Here’s what our Baker service might look like:
@inject(Store, Mixer, Oven)
function Baker() {
    var bake = function (item) {
        if (item === 'vanilla cake') {
            var ingredients = [
                Store.get('sugar'),
                Store.get('flour'),
                Store.get('eggs')
            ];
            var dough = Mixer.mix(ingredients);
            var cake = Oven.cook(dough);
            cake.frosting = 'vanilla';
            return cake;
        }
    };
    return {
        bake: bake
    };
}
We’re using an annotation called @inject to specify that Baker needs Store, Mixer, and Oven to do its work. This same annotation could be used in writing the corresponding tests:
@inject(Baker, Store, Mixer, Oven)
describe('Baker Service tests', function () {
    it('should bake vanilla cake', function () {
        // arrange
        spyOn(Store, 'get').andReturn({});
        spyOn(Mixer, 'mix').andReturn({});
        spyOn(Oven, 'cook').andReturn({});
        // act
        var cake = Baker.bake('vanilla cake');
        // assert
        expect(Store.get).toHaveBeenCalledWith('sugar');
        expect(Store.get).toHaveBeenCalledWith('flour');
        expect(Store.get).toHaveBeenCalledWith('eggs');
        expect(Mixer.mix).toHaveBeenCalled();
        expect(Oven.cook).toHaveBeenCalled();
        expect(cake.frosting).toEqual('vanilla');
    });
});
Sadly, the code above won’t work in JavaScript. The good news is that Angular gives us multiple ways to do the same thing:
– $inject – set the $inject property on the object which requires dependencies to an array of dependency names.
– implicit annotation – name the function's parameters after the dependencies themselves and let Angular infer them (note that this breaks under minification).
– inline array annotation – pass an array of dependency names, with the function itself as the last element, to the registration method (like factory() or controller()).
Of the three methods described above, #3 seems to be the most widely accepted. So, let's rewrite our code above using this method. First, let's look at the service:
// simple, but not possible
@inject(Store, Mixer, Oven)
function Baker() {...}
// noisy, but possible
angular.module('PastryShop').factory('Baker', ['Store', 'Mixer', 'Oven',
function(Store, Mixer, Oven) {...}
]);
Next, let’s rewrite the test:
// simple, but not possible
@inject(Store, Mixer, Oven)
describe('Baker Service tests', function () {
it('should bake vanilla cake', function () {...});
});
// noisy, but possible
describe('Baker Service tests', function () {
var Baker, Store, Mixer, Oven;
beforeEach(angular.mock.module('PastryShop'));
beforeEach(function () {
angular.mock.inject(function (_Baker_, _Store_, _Mixer_, _Oven_) {
Baker = _Baker_;
Store = _Store_;
Mixer = _Mixer_;
Oven = _Oven_;
});
});
it('should bake vanilla cake', function () {...});
});
Let’s understand what’s going on here. Angular provides a special module called ngMock, which exposes (among other things) two very useful methods:
– module() – used to bootstrap your app before every test.
– inject() – used to get instances of various services injected into your tests. By convention, if you wrap the name of the service in underscores, Angular will strip those underscores before giving you the right service.
Once we obtain the necessary services from inject(), we simply store them within the scope of our describe() block and they become available to all of our tests.
When testing controllers, there are a couple of extra things to consider. First, to actually create a controller for our tests, we need to use the $controller service. Second, since most controllers need a $scope to work with, we can get one by using $rootScope's $new() method.
Here’s the code:
describe('Baker Controller tests', function () {
var scope, BakerController, Baker;
beforeEach(angular.mock.module('PastryShop'));
beforeEach(function () {
angular.mock.inject(function ($rootScope, $controller, _Baker_) {
scope = $rootScope.$new();
Baker = _Baker_;
BakerController = $controller('BakerController', {
$scope: scope,
Baker: Baker
});
});
});
});
Once again, we’re using the inject() method to get our dependencies, which in this case include $rootScope and $controller. We create an instance of a $scope and inject it, along with an instance of Baker, into the BakerController.
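With this setup in place, an actual test might look something like the following. Note that this assumes, hypothetically, that BakerController exposes a bake() method which stores its result on the scope; your controller's API will differ:
it('should bake a cake via the controller', function () {
    // arrange (hypothetical controller contract)
    spyOn(Baker, 'bake').andReturn({ frosting: 'vanilla' });
    // act
    scope.bake('vanilla cake');
    // assert
    expect(Baker.bake).toHaveBeenCalledWith('vanilla cake');
    expect(scope.cake.frosting).toEqual('vanilla');
});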
GORM uses good API design and some Groovy magic to be both novice and expert friendly. It does this via five increasingly powerful data querying mechanisms: dynamic finders, where clauses, criteria, HQL, and native SQL.
In this post I’ll cover how each mechanism works and, perhaps even more importantly, when to use each one. But first, a disclaimer: information presented below is based on GORM documentation and confirmed by my own experiments. It is current as of Grails version 2.3.7. If I missed or misrepresented anything, please let me know.
With that out of the way, here's a summary of when each method should be used:
| | dynamic finder | where clause | criteria | HQL | SQL |
|---|---|---|---|---|---|
| simple queries | x | x | x | x | x |
| complex filters | | x | x | x | x |
| associations | | x | x | x | x |
| property comparisons | | x | x | x | x |
| some subqueries | | x | x | x | x |
| eager fetches w/ complex filters | | | x | x | x |
| projections | | | x | x | x |
| queries with arbitrary return sets | | | | x | x |
| highly complex queries (like self joins) | | | | | x |
| some database specific features | | | | | x |
| performance-optimized queries | | | | | x |
Before we can get into GORM, we’ll need to define a few domain objects to work with:
Here, we have a Company and a Store. Company manufactures Products, which are sold at Stores. Details of each sale (which Product got sold and which Store sold it) are recorded in a Transaction. Here’s the code:
class Company {
String name
String location
}
class Store {
String name
String city
String state
}
class Product {
String name
Company manufacturer
BigDecimal salesPrice
}
class Transaction {
Product product
Store store
Date salesDate
Integer quantity
}
This is definitely a matter of personal preference, but I dislike that (1) GORM assumes all properties to be non-nullable by default and (2) it fails silently on error. I also find it useful to see the SQL generated by Hibernate while I’m developing and testing. So, I’ll make the following tweaks to Config.groovy and DataSource.groovy:
// *** in Config.groovy ***
// 1. make all properties nullable by default
grails.gorm.default.constraints = {
'*'(nullable: true)
}
// 2. turn off silent GORM errors
grails.gorm.failOnError = true
// *** in DataSource.groovy ***
// 3. enable logging of Hibernate's SQL queries
test {
dataSource {
logSql = true
// .... other settings
}
}
development {
dataSource {
logSql = true
// .... other settings
}
}
Ok, we’re finally ready to query some data.
The simplest way of querying in GORM is by using dynamic finders. Dynamic finders are methods on a domain object that start with findBy, findAllBy, and countBy. For example, we can use dynamic finders to get a list of products filtered in different ways:
Company ACME = Company.findByName('ACME')
Product.findAllByManufacturer(ACME)
Product.findAllByManufacturerAndSalesPriceBetween(ACME, 200, 500)
We can also get counts:
Product.countByManufacturer(ACME)
The interesting thing about dynamic finders methods is that they don’t actually exist on the domain object. Instead, GORM uses Groovy’s Meta Object Programming (MOP) hooks to intercept calls to them and construct queries on the fly.
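To get a feel for the mechanism, here's a toy version of a dynamic finder built on methodMissing (an illustration only, not GORM's actual implementation):
class ToyFinder {
    List rows = [[name: 'ACME'], [name: 'Globex']]

    // invoked when an undefined method like findByName('ACME') is called
    def methodMissing(String name, args) {
        if (name.startsWith('findBy')) {
            def property = (name - 'findBy').uncapitalize()
            return rows.find { it[property] == args[0] }
        }
        throw new MissingMethodException(name, getClass(), args as Object[])
    }
}

assert new ToyFinder().findByName('ACME') == [name: 'ACME']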
Another thing to note about dynamic finders is that associations on the returned objects are lazily loaded by default. This can be changed by specifying how specific objects should be fetched:
Product fluxCapacitor = Product.findByName('flux capacitor')
Transaction.findAllByProduct(fluxCapacitor, [fetch: [product: 'eager', store: 'eager']])
Originally introduced in Grails 2.0, the where clause gives us another simple option for querying data. Here’s how the examples above can be done using it:
// Product.findAllByManufacturer(ACME)
Product.where {
manufacturer == ACME
}.list()
// Product.findAllByManufacturerAndSalesPriceBetween(ACME, 200, 500)
Product.where {
manufacturer == ACME && (salesPrice >= 200 && salesPrice <= 500)
}.list()
// Product.countByManufacturer(ACME)
Product.where {
manufacturer == ACME
}.count()
Although the where clause can do the same stuff as dynamic finders, it’s definitely more powerful. For instance, you can define more complex filter conditions.
Imagine that you want to get a list of all sales with either 1 item sold or sales of a specific product over a specific date range. Doing that with a dynamic finder could look like findAllByQuantityOrProductAndSalesDateBetween, but this doesn’t actually work. Instead, we’ll use a where clause:
Transaction.where {
quantity == 1 ||
(product == fluxCapacitor &&
(salesDate >= new Date('1/1/2014') && salesDate <= new Date('1/10/2014'))
)
}.list()
Another place where the where clause helps is when querying associations. For example, to get the list of transactions for a specific manufacturer we can do this:
Transaction.where {
product.manufacturer.name == 'ACME'
}.list()
Note that product.manufacturer references an associated object. The query above will result in the following SQL joins:
FROM transaction this_
INNER JOIN product product_al1_ ON this_.product_id = product_al1_.id
INNER JOIN company manufactur2_ ON product_al1_.manufacturer_id = manufactur2_.id
WHERE manufactur2_.name = ?
There are two other use cases where the where clause can be useful: property comparison and subqueries:
// find stores named after the city they're located in
Store.where {
name == city
}.list()
// find the largest sales of the flux capacitor
Transaction.where {
quantity == max(quantity) && product == fluxCapacitor
}.list()
I should note that subqueries for the where clause are limited to projections (i.e. aggregates like min, max, or avg).
The two methods we've covered so far are certainly straightforward, but they can be limiting. For example, imagine that you wanted to get a list of all products and the stores they were sold in for a given manufacturer.
To do this efficiently, you’d want all product and store information to be retrieved in one shot (eagerly). Unfortunately, where clauses don’t (yet) allow you to specify which objects should be eagerly fetched. Fortunately, there is a way to do just this by using Criteria:
Transaction.createCriteria().list {
fetchMode 'product', FetchMode.JOIN
fetchMode 'store', FetchMode.JOIN
product {
manufacturer {
eq 'id', ACME.id
}
}
}
In this example we’re using fetchMode of JOIN to indicate that both product and store properties should be eagerly retrieved. We’re also using a nested condition to get at the right manufacturer.
Keep in mind that GORM’s Criteria is actually a DSL for Hibernate’s criteria builder. Therefore, it allows you to build up quite sophisticated queries.
Aside from eager joins, Criteria can also be useful for projections. Projections are a way to further shape a data set and are typically used for aggregate functions like sum(), count(), and average().
For example, here’s a projection that gets product quantities sold for a given manufacturer:
Transaction.createCriteria().list {
projections {
groupProperty 'product'
sum 'quantity'
}
product {
manufacturer {
eq 'id', ACME.id
}
}
}
Note that we’re creating a projections clause and specifying both the aggregate (sum) and the grouping (via groupProperty).
Dynamic finders, where clauses, and criteria give us a lot of power over how and what we can query. However, there are a few situations where we need even more power and that’s where HQL comes in.
But before we talk about HQL and its use cases, there is one important thing to realize. If you’re using the first 3 ways of querying, GORM will always give you strongly typed domain objects (unless you’re using counts or projections). This is not necessarily true if you’re using HQL.
GORM gives you two ways of using HQL. The first is to use it in combination with find() or findAll() methods of the domain object. If you’re using it this way, you’re essentially limited to specifying the WHERE clause. For example:
Transaction.findAll('from Transaction as t where t.product.manufacturer.id = :companyId', [companyId: 1])
Here, we’re asking for all transactions for a given manufacturer. Note that, like the other methods we’ve discussed so far, using HQL this way still allows GORM to give you back domain objects.
On a separate note, this example passes query parameters as a map of named parameters ([companyId: 1]). Although you can also use positional parameters, I definitely prefer named ones because they're explicit and you can use the same parameter multiple times in the query without having to specify it more than once.
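For comparison, the positional version of the same query would look like this:
Transaction.findAll('from Transaction as t where t.product.manufacturer.id = ?', [1])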
Up to now, every querying method we’ve used returned strongly typed domain objects. This is great, but sometimes you need something different. That’s where executeQuery() comes in.
GORM allows you to execute arbitrary HQL using executeQuery(). For example, here's a query that returns the names of the Store, the Product, and the Manufacturer for everything sold over a given time period:
String query = $/
select
s.name,
m.name,
p.name
from Transaction as t
inner join t.product as p
inner join t.store as s
inner join p.manufacturer as m
where t.product.manufacturer.id = :companyId
and t.salesDate between :startDate and :endDate
/$
List queryResults = Transaction.executeQuery(query,
[companyId: ACME.id, startDate: new Date('1/1/2014'), endDate: new Date('1/31/2014')]
)
What’s really noteworthy here is that we can shape the return set however we want. Obviously this makes it impossible for GORM to give us the right domain objects, but in certain instances the tradeoff is justified.
Another point I should make is that the data set returned by this query is a List of Arrays. To make it more useful, we could post-process it and convert it to a List of Maps with named properties:
Transaction.executeQuery(query,
[companyId: ACME.id, startDate: new Date('1/1/2014'), endDate: new Date('1/31/2014')]
).collect {
[
storeName: it[0],
manufacturerName: it[1],
productName: it[2]
]
}
The output of this query can, for instance, be easily serialized into JSON and rendered as a response from a controller.
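A sketch of what that could look like (the controller name is hypothetical; the query and parameters are the ones defined above):
import grails.converters.JSON

class SalesReportController {
    def productSales() {
        // query is the HQL string defined above
        def results = Transaction.executeQuery(query,
            [companyId: ACME.id, startDate: new Date('1/1/2014'), endDate: new Date('1/31/2014')]
        ).collect {
            [storeName: it[0], manufacturerName: it[1], productName: it[2]]
        }
        render results as JSON
    }
}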
For better or worse, quite a few devs believe that using an ORM means never having to look at SQL. While that may be true for the large majority of queries, certain situations still require it.
Consider a query which wants to compare sales of all products for a given manufacturer to a similar time period last year. This query requires joining the Transaction table to itself (over different time ranges).
HQL joins (including self-joins) are possible if there’s an association defined between objects. In other words, we’d need to modify our Transaction class like this:
class Transaction {
Product product
Store store
Date salesDate
Integer quantity
Transaction baseline
}
If we did that, we could then define the following HQL query:
String query = $/
    select t1.product.name, sum(t1.quantity), sum(t2.quantity)
    from Transaction as t1
    inner join t1.baseline as t2
    where t1.product.manufacturer.id = :companyId
        and t1.salesDate between :startDate and :endDate
        and t2.salesDate between :baselineStartDate and :baselineEndDate
    group by t1.product.name
/$
Now, while this is possible to do, I find the solution distasteful. After all, doing this forces us to pollute the domain object with almost arbitrary associations just to make the query work.
The other option is to use native SQL:
String query = $/
SELECT p.name,
sum(t1.quantity),
sum(t2.quantity)
FROM transaction t1
LEFT OUTER JOIN transaction t2 ON t1.product_id = t2.product_id
INNER JOIN product p ON t1.product_id = p.id
WHERE p.manufacturer_id = :companyId
AND t1.sales_date between :startDate and :endDate
AND t2.sales_date between :baselineStartDate and :baselineEndDate
GROUP BY p.name
/$
new Transaction()
.domainClass
.grailsApplication
.mainContext
.sessionFactory
.currentSession
.createSQLQuery(query)
.setLong('companyId', 1)
.setDate('startDate', new Date('1/1/2014'))
.setDate('endDate', new Date('1/31/2014'))
.setDate('baselineStartDate', new Date('1/1/2013'))
.setDate('baselineEndDate', new Date('1/31/2013'))
.list()
There are a couple of things to note here. First, in order to execute this query we need to get a hold of Hibernate's current session and call its createSQLQuery() method. The two ways to do this are (1) have sessionFactory injected into our class by Grails or (2) new up the domain class inside the method and walk a long chain of dependencies to get it.
I’m using option 2 here because I put the method which implements this query inside my domain class and I wanted to keep it static:
class Transaction {
...
    static List compareToBaseline() { /* hypothetical name; the native SQL query above lives here */ }
}

If you were putting this method somewhere other than your domain class (like a controller or service), I would recommend using option 1.
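For reference, here's a sketch of option 1 (the service and method names are hypothetical):
class SalesReportService {
    def sessionFactory  // injected by Grails by convention

    List runNativeQuery(String query) {
        sessionFactory.currentSession
                      .createSQLQuery(query)
                      .list()
    }
}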
The other thing I want to point out is that because we're using the actual Hibernate method, we cannot pass it a map of parameters. Instead, we have to use Hibernate's strongly typed set*() methods.
Aside from complex-yet-still-generic SQL queries, we sometimes need to take advantage of database-specific features. For example, Postgres allows storing data as arrays, maps (hstore), or JSON. Certain types of queries against these data types are difficult, if not impossible, to write in HQL.
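For instance, a JSONB containment predicate like the one below (against a hypothetical attributes column) has no direct HQL equivalent:
SELECT *
FROM product
WHERE attributes @> '{"color": "red"}'::jsonb;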
There’s one other reason to use native SQL from GORM: performance tuning. Though Hibernate is typically pretty good about how it creates the necessary SQL, it’s definitely not perfect. So, there are rare instances where hand-tuned SQL can give you a significant performance boost.
Querying options supported by GORM are all appropriate under the right circumstances. I personally try to use the simplest option wherever possible (less code to test and maintain). On the other hand, if the unthinkable happens and either HQL (or SQL) is required, it’s good to understand how to make it work.
[Screenshot: IntelliJ 13 running Karma Jasmine tests]
Note the fact that IDEA is aware of Jasmine syntax and uses the built-in test runner to run Karma tests. Here’s a quick summary of how to do it.
At the command line, type the following (assumes you’ve already installed node/npm):
npm install -g karma
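Karma also needs a configuration file. A minimal karma.conf.js (with made-up file paths, assuming the Jasmine framework and Chrome) might look like this:
module.exports = function (config) {
    config.set({
        frameworks: ['jasmine'],
        files: ['src/**/*.js', 'test/**/*.spec.js'],
        browsers: ['Chrome'],
        autoWatch: true
    });
};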
Go to Settings > Plugins > Browse Repositories > search for “karma”:
[Screenshot: installing the Karma plugin in IntelliJ 13]
Note that this requires IDEA 13.
Go to Run > Edit Configurations, add new configuration of type Karma:
[Screenshot: adding a Karma run configuration]
Go to Settings > JavaScript > Libraries, add a new Global Library. Then, navigate to wherever npm installs its modules (probably /usr/local/lib/) and select the jasmine.js file from node_modules/karma-jasmine/lib/jasmine.js:
[Screenshot: IntelliJ 13 global library configuration for Jasmine]
Click on the “Hector” icon (a little guy in a bowler hat at the bottom right of the screen) and click “Configure Inspections”:
[Screenshot: configuring inspections]
Select the jasmine library and click OK:
[Screenshot: enabling the jasmine library for the project]
Now, I should note that this guessing game can be a lot of fun, especially if you take into account what permissions the app is asking for. That said, I would like to propose an explicit rating system for free apps. Using simple iconography, app makers can finally let their users know exactly how they plan to generate revenue:
Though this set is incomplete, it does cover a good range of free app money making options. If you think of anything I missed, please do let me know here or on Twitter @AlexTatiyants.
Now, these books are clearly written by talented professionals at the top of their game. They’re all unique and wonderful, each in their own special way. However, if you study these gems as much as I have, you will notice subtle similarities among them.
In fact, I was able to synthesize these similarities, as subtle as they are, into a framework of sorts. This framework (which I call Child Readership Authoring Plan) can be used by non-talented un-professionals nowhere near the top (or even the middle) of their game to write their very own children’s book.
And so, without further delay, I present to you this framework. Please use it responsibly.
The first step is to pick a name. It should be common, but not too common, fashionable, but not too fashionable. And it should preferably be a girl’s name.
For my book, I’ll pick Wendy.
Next, find an adjective that rhymes with the name from step 1. It could be any adjective at all. It doesn’t need to be related in any way to the story you’ll write. It just has to rhyme.
Hmm, let’s see, what rhymes with Wendy… How about Bendy? Yes, Bendy Wendy, sounds great!
Make sure that the adjective name combo you ended up with is appropriate for a children’s book. For this step, I would recommend running your idea by an editor or spouse.
It has been pointed out to me that Bendy Wendy may not work. My bad, I see what I did wrong there. No problem, I’ll change it to Trendy Wendy.
Next, get a random collection of words. Again, they don’t have to make sense as a collection or be meaningful in any way, they just have to rhyme. In case you have trouble coming up with words on your own, I’d recommend a site like rhymezone. Pro tip: if you aren’t able to get enough real words that rhyme, feel free to make some up.
For my book, I came up with the following: bike, spike, mike, like, alike, tyke, and trike. Also, just to be safe, I made up fyke and drike.
You’re finally ready to write something. Remember that it’s not at all necessary for your story to be interesting, educational, life affirming, or morally unambiguous. In fact, it doesn’t even have to make sense.
All it has to do is sound as cute as possible. Also remember that no amount of alliteration, no matter how labored, is too much. The same is true for repetition.
Here’s what I came up with:
On her Sunday morning hike
With her fluffy kitty Fyke,
And her little brother Mike,
And his fluffy doggy Drike
Trendy Wendy saw a bike
Chained with something to a spike.
“Wowy wow, this bike I like!”
Trendy Wendy said to Mike
And his fluffy doggy Drike
Who was looking at the bike.
“Great for boys and girls alike,
Let’s unchain it from that spike!”
“I too like this snazzy bike,
It is very nice!” said Mike.
“But it’s too big for a tyke,
So don’t unchain it from that spike!
Trendy Wendy, Fyke, and Drike,
Let’s go home and ride my trike.”
So there you have it: a brand new children's book, written using a simple 4-step framework in roughly 20 minutes (your results may vary). Feel free to use this framework to unleash the majesty of the written word.