Monday, February 25, 2013
Keyword arguments.
Thank god.
# Ruby 1.9:
# (From action_view/helpers/text_helper.rb)
def cycle(first_value, *values)
  options = values.extract_options!
  name = options.fetch(:name, 'default')
  # ...
end

# Ruby 2.0:
def cycle(first_value, *values, name: 'default')
  # ...
end
More at http://blog.marc-andre.ca/2013/02/23/ruby-2-by-example/
Saturday, June 20, 2009
PEAR in June
June is here, and things are beginning to pick up again.
We've welcomed Rodrigo Sampaio Primo, probably better known for his efforts with TikiWiki and elsewhere, and Peter Bittner, who joined us to feed back some of his Open Document improvements. We've also seen feature and bugfix releases of Services_Amazon_SQS, Net_LDAP2, Console_Commandline, XML_Serializer, PHP_UML, Payment_DTA, Net_UserAgent_Detect, Net_DNS, Services_Facebook, Testing_DocTest and Net_Nmap.
Christian Weiske has been working on getting Open Document back into shape, and Greg Beaver is once again helping us move forward to elect a new PEAR group, as well as getting the next version of the PEAR installer ready for testing.
Slightly worryingly, we haven't heard much from Amir since the elections in Iran, and he hasn't been on IRC.
PHP 5.3 isn't far off, and I think it's fair to suggest that we've all got a subdued sense of excitement about it. That, and the consumption of a metric tonne of meat.
Tags: Elections in Iran, Internet Relay Chat, Iran, pear, php, Programming, TikiWiki
Friday, April 24, 2009
Handy hint for unit tests
We've got loads of unit tests. A run takes approximately 20 minutes.
This is because we've got a lot of database interaction, and the re-engineering effort required to go back and mock that all out is immense.
So, what's the best way to make sure you catch problems quickly?
In your AllTests.php, make it a policy to put new test suites at the top, rather than the bottom.
Basically:
public static function suite() {
    $suite = new PHPUnit_Framework_TestSuite();
    $suite->addTestSuite('NewTest');  // newest suites first, so fresh breakage shows up early
    $suite->addTestSuite('CoreTest'); // older, stabler suites run last
    return $suite;
}
This is the opposite of "write your most important test cases first", but it helps you find the newest broken features quickly.
Sunday, January 25, 2009
Using Image_Graph neatly
Here are my two best tips around using Image_Graph for projects. They aren't necessarily right, but have worked fantastically for me.
Use it like Google Chart API (on demand)
Build a simple page which takes a number of arguments via GET variables, and serves up an image. You can then use simple commands to render whatever you like.
# Rendering code:
require_once 'Net/URL.php';

function make_graph_url($data) {
    $url = new Net_URL('graph.php');
    $url->querystring['data'] = $data;
    $url->querystring['type'] = 'pie';
    return $url->getURL(); // "graph.php?type=pie&data[Cats]=1&data[Fish]=2"
}

# HTML / Presentation bit
<img src="<?php print make_graph_url($data); ?>" alt="Graph of Cats and Fish" />

# Graph.php
require_once 'Image/Graph.php';

$graph = new Image_Graph();
// read in $_GET and construct your graph
$graph->done();
It's worth maintaining an approach pretty similar to Google's Chart API, so that you can swap one for the other almost trivially.
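As an illustration of that swap, here's a minimal sketch of a hypothetical make_google_chart_url() taking the same $data shape (label => value pairs); the cht/chs/chd/chl parameters are the Google Chart API's real ones, everything else here is invented:

# Hypothetical drop-in alternative to make_graph_url(), pointing at
# the Google Chart API instead of our own graph.php.
function make_google_chart_url($data) {
    $params = array(
        'cht' => 'p',                                      // pie chart
        'chs' => '300x200',                                // image size
        'chd' => 't:' . implode(',', array_values($data)), // data, text encoding
        'chl' => implode('|', array_keys($data)),          // slice labels
    );
    return 'http://chart.apis.google.com/chart?' . http_build_query($params);
}

Because both helpers just return a URL, the <img> tag above doesn't care which one you call.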
Pre-rendering
Say you have a set of reports you must run. The amount of data is huge, so you really don't want to do things on the fly, and you only have to update the data periodically - e.g., once a week or once a month.
Steps here:
1. Denormalize in the database - precalculate answers and render them into tables. It will save you loads of time.
2. When you have the data you need, pre-render the graphs and save them to disk. Do it with an easy naming scheme.
Now when someone hits your pages to look at information, you've got everything already there - it's just a matter of wiring it together.
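To make that concrete, here's a minimal sketch of the batch step; render_graph_to_file() is a hypothetical wrapper around your Image_Graph rendering code, and the naming scheme is just one example:

# Hypothetical cron job: pre-render one graph per report under a
# predictable name, e.g. graphs/report-42-2009-01.png
$report_ids = array(1, 2, 3); // e.g. read from your denormalized tables
foreach ($report_ids as $id) {
    $filename = sprintf('graphs/report-%d-%s.png', $id, date('Y-m'));
    if (!file_exists($filename)) {
        render_graph_to_file($id, $filename); // your Image_Graph code goes here
    }
}

The display pages then just emit <img> tags pointing at those files.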
These two things are pretty obvious and self-explanatory, but worth keeping in mind. The last thing you want to do is build a page which assembles data, then realize Image_Graph renders to a different stream (i.e., not inline), and resort to copy-and-paste coding.
Tags: advice, Application programming interface, Cut copy and paste, google, html, javascript, pear, php, Programming, Searching, tutorial, Web APIs
Tuesday, January 20, 2009
PEAR Bug Triage Day Results (December 17-18th)
Here are the results of Bug Triage Day, December 17-18th, and the few days surrounding it.
If you missed it, the next one will be Feb 7-8th.
Accomplishments
- Triaged the latest 50 bugs
- Crypt_GPG - gauthierm fixed broken unit tests
- Services_Amazon_SQS - gauthierm is making progress on mocking out all HTTP requests
- Validate_BE - doconnor fixed the failing unit tests
- HTTP_Session - doconnor made it skip if it can't possibly pass in the unit test environment
- XML_Feed_Parser - doconnor excluded it from the unit test results (too much noise for too little benefit)
- PEAR 1.8 - dufuz is fixing up the unit tests
- Text_Wiki_Creole saw a release
- Services_ExchangeRates 0.6 - doconnor did a release; mocked out all unit tests and tweaked the API
- MDB2 / MDB2_Driver_mysql - a new beta was put out! Lots of bugs fixed since the last release
- The new PEAR server was tested - lots of little patches, not quite ready yet! - doconnor, dufuz, cweiske, farell
Tags: bug triage, open source, pear, php, Programming, Testing, Unit Tests
Wednesday, January 07, 2009
Generating Nutritional Data RDF from USDA, Part 2
I had a bit of a whinge yesterday about copyright, nutrition data, and so forth.
Today, my inbox has a nice copy of the NUTTAB data I want, I've located SR21, and I've had someone else point me at Canadian nutritional data too.
To get the NUTTAB data, you have to email Food Standards Australia, but that's not a huge deal.
So, progress:
I've made a script to import the USDA SR21 data into a database (i.e., MySQL) and render it out as RDF.
Installing it
Pretty easy stuff! It's in PHP, and makes use of PEAR.
# Get the code:
$ svn checkout http://freebase-owl.googlecode.com/svn/trunk/nutrition/

# 0. Install dependencies
$ sudo apt-get install php-pear mysql-server wget unzip
$ sudo pear install -fa MDB2 XML_Beautifier

# 1. Get the SR21 data and extract it
$ wget http://www.nal.usda.gov/fnic/foodcomp/Data/SR21/dnload/sr21.zip
$ unzip sr21.zip

# 2. Make the configuration
$ cp config.php.dist config.php
$ vim config.php

# 3. Create a database of your choosing, with the same settings as your configuration
$ mysql -u root -p
mysql> CREATE DATABASE usda;

# 4. Run the install script. This will take a while as it imports all the data.
#    If it fails, just DROP the database and start again.
$ php install.php

# 5. Give it a shot from the command line. "1002" is the USDA food id.
$ php rdfizer.php 1002
$ php rdfizer.php 1002 > 1002.rdf

# 6. Generate the whole set:
$ php generate-all.php
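Under the hood, the rdfizer step boils down to something like this - a minimal sketch only, with an invented table, column and namespace (the real script reads the imported SR21 schema, which differs):

# rdfizer.php, roughly (hypothetical schema):
require_once 'MDB2.php';

$db = MDB2::connect('mysql://user:pass@localhost/usda');
if (PEAR::isError($db)) {
    die($db->getMessage());
}

$id  = (int)$argv[1];
$row = $db->queryRow('SELECT name FROM food WHERE id = '
    . $db->quote($id, 'integer'), null, MDB2_FETCHMODE_ASSOC);

// One RDF description per food item, keyed by the USDA food id
printf('<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:food="http://example.org/food#">
  <rdf:Description rdf:about="http://example.org/food/%d">
    <food:name>%s</food:name>
  </rdf:Description>
</rdf:RDF>', $id, htmlspecialchars($row['name']));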
The basic plan: import everything, render out individual items, publish them statically on the web. Maybe later, get someone to stuff them all into a SPARQL endpoint.
Rinse, repeat with Canadian, Australian data. Grow a common ontology for Food, Nutrients, etc.
You can view some of the output RDF; I've not generated the whole set yet, as my poor computer is far too old and creaky to do so.
Additionally, there are lots of linked data connections I want to make.
I want to link the sources with PubMed; the units with... something (side note: I couldn't find much in the way of unit and measurement ontologies!); the USDA-style names with Wikipedia/DBpedia/Freebase; and the compound names (PROCNT - protein content) with... something.
How silly is this: there's no Semantic Web URL for milligrams. The best I could do was a few related concepts, because someone at Wikipedia decided to merge all of the sub-articles for measurements into the single base unit (i.e., mg into g).
Monday, December 29, 2008
PEAR bug triage roundup - Dec 27th/28th
We had PEAR bug triage on the 27th/28th.
I'd expected this to be a quiet one, but CVS activity was actually pretty heavy!
We accomplished:
* XML_Feed_Parser tests got added (1500 unit tests)! - doconnor
* HTTP_Upload parse errors fixed - doconnor
* Net_SMPP parse errors fixed - doconnor
* Net_Whois bugfix release - doconnor
* Massive improvements to PEAR_PackageFileManager tests - dufuz
* Auth_Prefmanager tests now skip if not configured - doconnor
* HTML_Template_IT 1.3.0a1 released - doconnor
* Image_Color 1.0.3 released - doconnor
* MP3_Playlist - phpcs - doconnor
* Net_IPv6 got into the pear test suite - doconnor
* Started Services_Akismet2 - gauthierm
* Started the process for new releases of DB_DataObject, HTML_Page2, HTTP_Upload, HTTP_WebDAV_Client, HTTP_WebDAV_Server, Image_Canvas, Image_Graph, Image_Transform, MDB2, MDB2_Driver_mysql, Mail_Mime, Net_SmartIRC, SQL_Parser, Spreadsheet_Excel_Writer, Validate - doconnor, troehr
The most important item here is the PEAR_PackageFileManager improvements - they're part of getting PEAR 1.8 out the door.
Coming in second was the addition of 1500 or so tests for XML_Feed_Parser - unfortunately, we went from 145 failures to over 1000. The benefit: you can really see where PHP / libxml have a few holes, so over time more bugs will be filed and things will improve.
Unfortunately, overall, it felt like we just ended up with more work on our plates as we unravelled bug after bug - so we'll power on through at the next bug triage day!
Friday, December 19, 2008
WTF: Refactoring snippet of the day
<?php
class SortingClassNameHere {
public function __construct($string) {
$this->string = $string;
}
private static $c = null;
public static function cmp($a, $b) {
$r = strnatcasecmp($a[self::$c], $b[self::$c]);
return ($r > 0 ? 1 : ($r < 0 ? -1 : 0));
}
public function process($data) {
self::$c = $this->string;
if (!usort($data, array("SortingClassNameHere", "cmp"))) {
throw new Exception('Unable to sort results.');
}
return $data;
}
}
Hints:
* SortingClassNameHere::cmp() is never called anywhere else in the code base apart from process()
* If you don't know why this is bad, I will shoot you.
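For the record, one possible cleanup - a sketch assuming PHP 5.3's anonymous functions, which remove the need to smuggle the column name through a static property:

class SortingClassNameHere {
    private $string;

    public function __construct($string) {
        $this->string = $string;
    }

    public function process($data) {
        $column = $this->string;
        // The closure captures the column directly - no shared static state,
        // and strnatcasecmp()'s return value is already fine for usort().
        $ok = usort($data, function ($a, $b) use ($column) {
            return strnatcasecmp($a[$column], $b[$column]);
        });
        if (!$ok) {
            throw new Exception('Unable to sort results.');
        }
        return $data;
    }
}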
Tags: Array, Method, php, Programming, puzzle, refactoring, unit testing
Wednesday, December 10, 2008
PEAR Bug Triage Day Results (December 6th)
PEAR's December Bug Triage Day was alright, if somewhat quiet - the activity was spread out over the preceding weeks rather than concentrated on the day itself.
Christian got a new version of Services_Blogging out earlier in the month, while Console_GetArgs and Numbers_Words were updated with increased unit test coverage. Just about everyone under the sun chipped in and wrote translated unit tests for Numbers_Words: Igor, Christian, Lorenzo, Kouber, David, and anyone else I missed.
Payment_DTA got a new owner in Martin Schutte, which saw a good few bug fixes applied.
I fixed up Text_Figlet, which had broken PHP 4 compatibility, and got out releases of it and Services_Yadis.
Finally, Validate got back in the unit test good books, with almost all unit test failures resolved.
The next one is calendared for December 28th, so we'll see how that goes :)
Tags: bug triage, open source, pear, php, Programming, quality, Test, unit testing