Code rant: Life as a mort. By Mike Hadlow (http://mikehadlow.blogspot.com/)

<h2>New Blog At mikehadlow.com</h2>
<p>This is my last post here at Code Rant. From now on I will be posting at <a href="https://mikehadlow.com/" target="_blank">mikehadlow.com</a>. I've written a post on my new blog <a href="https://mikehadlow.com/posts/welcome-to-my-new-blog/" target="_blank">here</a> explaining the reasons. Thanks for visiting Code Rant, and please take a moment to look at my new blog.</p>

<h2>C# preprocessor directive symbols from the dotnet build command line via DefineConstants</h2>
<p>Invoking the C# compiler directly allows one to pass in symbols for the preprocessor via a <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/define-compiler-option">command-line option</a> (-define or -d). But it's not at all obvious how to do this with the dotnet build command. There is no 'define' flag, so how do you do it?</p><p>Let me first show you how this works using the C# compiler directly.</p><p>Create a new file 'Program.cs' with this code:</p>
<pre>using System;
namespace CscTest
{
class Program
{
static void Main(string[] args)
{
#if FOO
Console.WriteLine("Hello FOO!");
#else
Console.WriteLine("NOT FOO!");
#endif
}
}
}
</pre>
<p>Now compile it with CSC:</p>
<pre>>csc -d:FOO Program.cs
</pre>
<p>And run it:</p>
<pre>>Program
Hello FOO!
</pre>
<p>Happy days.</p><p>It is possible to do the same thing with dotnet build; it relies on populating the MSBuild DefineConstants property, but unfortunately one is not allowed to set this directly from the command line.</p><p>If you invoke this command:</p>
<pre>dotnet build -v:diag -p:DefineConstants=FOO myproj.csproj
</pre>
<p>It has no effect, and somewhere deep in the diagnostic output you will find this line:</p>
<pre>The "DefineConstants" property is a global property, and cannot be modified.</pre>
<p>Instead one has to employ a little indirection. In your csproj file it <i>is</i> possible to populate DefineConstants. Create a project file, say 'CscTest.csproj', with a DefineConstants element inside a PropertyGroup, set to the value FOO:</p>
<pre><Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.1</TargetFramework>
<DefineConstants>FOO</DefineConstants>
</PropertyGroup>
</Project>
</pre>
<p>Build and run it with dotnet run:</p>
<pre>>dotnet run .
Hello FOO!
</pre>
<p>The csproj file acts somewhat like a template: one can pass in arbitrary properties using the -p flag, so we can replace our hard-coded FOO in DefineConstants with a property placeholder:</p>
<pre><Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.1</TargetFramework>
<DefineConstants>$(MyOption)</DefineConstants>
</PropertyGroup>
</Project>
</pre>
<p>And pass in FOO (or not) on the command line. Unfortunately this now means building and running as two separate steps:</p>
<pre>>dotnet build -p:MyOption=FOO .
...
>dotnet run --no-build
Hello FOO!
</pre>
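<p>One thing to be aware of: written this way, $(MyOption) becomes the whole of DefineConstants, so any symbols that would otherwise be defined earlier in the build (such as DEBUG and TRACE in a Debug configuration) can be lost. A common pattern, shown here only as a sketch using the same hypothetical MyOption property, is to append to whatever is already defined instead:</p>
<pre><Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <!-- append the externally supplied symbol rather than replacing DefineConstants wholesale -->
    <DefineConstants>$(DefineConstants);$(MyOption)</DefineConstants>
  </PropertyGroup>
</Project>
</pre>
<p>The build command stays exactly the same: dotnet build -p:MyOption=FOO .</p>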
<p>And all is well with the world. It would be nice if the MSBuild team allowed preprocessor symbols to be added directly from the command line though.</p>

<h2>Restoring from an Azure Artifacts NuGet feed from inside a Docker Build</h2>
<p>If you are using Azure DevOps pipelines to automate building your .NET Core application Docker images, it's natural to also want to use the DevOps Artifacts NuGet feed for your internally hosted NuGet packages. Unfortunately there is much confusion and misinformation about how to authenticate against the Artifacts NuGet feed. While researching this topic I found various sources saying that you needed to install the <a href="https://github.com/microsoft/artifacts-credprovider">NuGet credential provider</a> as part of the docker build, and then set various environment variables. I followed this route (excerpt from an example below), even to the extent of creating a custom Docker image for all our dotnet builds with the credential provider already installed.</p>
<pre>ARG PAT
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json\", \"password\":\"${PAT}\"}]}"
</pre>
<div>The technique is to install the credential provider, then configure it with the DevOps Artifacts endpoint and a Personal Access Token (PAT), which you can generate by going to your user settings from the DevOps UI:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTinQjWN-hzYY_4jnjPHO-HtWSCSLhuXkWJX1-VPloqj0TD674zSNJqgBc5vaA0cPa2p1kaD3oUZEfv6hq4-jOauL6PJNwra7G2nhr6fbS283lnoPCAMVMIGtTahZAOckwZt3A5Q/s538/devops-user-settings.png" style="margin-left: 1em; margin-right: 1em;"><img alt="DevOps User Settings" border="0" data-original-height="538" data-original-width="356" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTinQjWN-hzYY_4jnjPHO-HtWSCSLhuXkWJX1-VPloqj0TD674zSNJqgBc5vaA0cPa2p1kaD3oUZEfv6hq4-jOauL6PJNwra7G2nhr6fbS283lnoPCAMVMIGtTahZAOckwZt3A5Q/d/devops-user-settings.png" /></a></div><div><br /></div><div><br /></div><div><b>After wasting over a day on this, I was then very surprised indeed to find that a colleague was restoring from the same DevOps Artifacts feed on a locally hosted TeamCity server, simply by providing the PAT as the NuGet API-Key! They hadn't installed the NuGet credential provider, so according to the Microsoft documentation it shouldn't work?</b></div><div><br /></div><div>I tried it myself. The PAT does indeed work as a NuGet API-Key. A slight further complication is that the 'dotnet restore' command doesn't have an API-Key switch, so the next easiest thing is to simply use a nuget.config file as follows:</div><div><br /></div>
<pre><?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="DevOpsArtifactsFeed" value="your-devops-artifacts-nuget-source-URL" />
</packageSources>
<packageSourceCredentials>
<DevOpsArtifactsFeed>
<add key="Username" value="foo" />
<add key="ClearTextPassword" value="your-PAT" />
</DevOpsArtifactsFeed>
</packageSourceCredentials>
</configuration>
</pre>
<div>Replace the place-holders with your Artifacts NuGet feed URL and your PAT. The Username is ignored by the Artifacts feed and can be any string. Copy the above configuration into a file named 'nuget.config' and create a Dockerfile like this:</div>
<div><br /></div>
<pre>FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /app
# copy source code, nuget.config file should be placed in the 'src' directory for this to work.
COPY src/ .
# restore nuget packages
RUN dotnet restore --configfile nuget.config
# build
RUN dotnet build
# publish
RUN dotnet publish -o output
# build runtime image
FROM mcr.microsoft.com/dotnet/core/runtime:3.1 AS runtime
WORKDIR /app
COPY --from=build /app/output/ ./
#
ENTRYPOINT ["your/entry/point"]
</pre>
<div><br /></div>
<div>
This is the simplest thing that could possibly work. <b>But you really shouldn't hard-code secrets such as your PAT into your source control system</b>. Very conveniently, the dotnet restore command will do environment variable replacement in the nuget.config file, so you can replace your hard-coded PAT with a reference to an environment variable and then pass that to docker build.
In your nuget.config file:
<pre> <packageSourceCredentials>
<DevOpsArtifactsFeed>
<add key="Username" value="foo" />
<add key="ClearTextPassword" value="%NUGET_PAT%" />
</DevOpsArtifactsFeed>
</packageSourceCredentials>
</pre>
<div>
In your Dockerfile:
</div>
<pre>ARG NUGET_PAT
ENV NUGET_PAT=$NUGET_PAT
</pre>
<div>Your docker build command:</div>
<pre>docker build -t my-image --build-arg NUGET_PAT="your PAT" .
</pre>
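<div>On a build server you would typically not type the PAT at all, but surface it from the CI system's secret store. A minimal sketch, assuming the secret has been exposed to the build agent as an environment variable named NUGET_PAT:</div>
<pre># NUGET_PAT is assumed to be provided by the CI system as a secret environment variable
docker build -t my-image --build-arg NUGET_PAT="$NUGET_PAT" .
</pre>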
<div>
I hope this short post saves somebody from the many hours that I wasted on this. I also hope that Microsoft updates their documentation!
</div></div>

<h2>A Framework to DotNet Core Conversion Report</h2>
<div class="has-line-data" data-line-end="2" data-line-start="1">
An experience report of converting a large microservice platform from .NET Framework to dotnet core.</div>
<h2 class="code-line" data-line-end="4" data-line-start="3">
<a href="https://www.blogger.com/null" id="Background_3"></a>Background</h2>
<div class="has-line-data" data-line-end="5" data-line-start="4">
For the last year or so I’ve been working with a company that maintains a significant trading platform built in .NET. The architecture consists of a number of Windows Service components that communicate using <a href="https://www.rabbitmq.com/">RabbitMQ</a> with <a href="https://easynetq.com/">EasyNetQ</a>. These are all backend components that at the top level communicate with various clients via a web API maintained by a different team. The infrastructure is hosted in the company’s own data center with a <a href="https://en.wikipedia.org/wiki/Continuous_delivery">CI/CD</a> software process featuring <a href="https://www.atlassian.com/software/bitbucket">BitBucket</a>, <a href="https://www.jetbrains.com/teamcity/">Team City</a>, and <a href="https://octopus.com/">Octopus</a>, a pretty standard .NET delivery pipeline.</div>
<h3 class="code-line" data-line-end="7" data-line-start="6">
<a href="https://www.blogger.com/null" id="Motivation_6"></a>Motivation</h3>
<div class="has-line-data" data-line-end="8" data-line-start="7">
Our motivation for porting to dotnet core was essentially twofold: to keep the technology platform up to date, and to be in a position to exploit new developments in application platforms, specifically to take advantage of container technology, such as <a href="https://www.docker.com/">Docker</a>, and container orchestrators, such as <a href="https://kubernetes.io/">Kubernetes</a>.</div>
<div class="has-line-data" data-line-end="10" data-line-start="9">
Microsoft is, on the whole, very good at supporting their technology for the long term; there are many companies with VB6 applications still running, for example, and the .NET Framework will undoubtedly be supported on Windows for years to come. However, there are significant costs and risks in supporting legacy software platforms, such as: difficulty in using newer technologies and protocols because libraries aren’t available for the legacy platform; difficulty hiring and retaining technology staff who will fear that their skills are not keeping up to date with the market; and the increasing cost over time of porting to a newer platform as year on year the gap with the legacy technology widens. There is a danger that at some point in the future the legacy platform will become unsupportable, but the technology gap is so wide that the only feasible solution is a very expensive re-write.</div>
<div class="has-line-data" data-line-end="12" data-line-start="11">
Software infrastructure has experienced a revolution in the last few years. I’ve written before why I think that <a href="http://mikehadlow.blogspot.com/2019/01/why-containers-are-game-changer-for.html">containerization is a game changer</a>, especially for distributed microservice architectures such as ours. It has the potential to significantly reduce risks and costs and increase flexibility. For all Microsoft’s efforts, Windows containers are still a platform that one should use with caution; all the maturity is with Linux containers. We are very keen to exploit the opportunities of Docker and Kubernetes, which has the prerequisite that our software can run on Linux. This provides the second strong incentive for our move to dotnet core.</div>
<h2 class="code-line" data-line-end="14" data-line-start="13">
<a href="https://www.blogger.com/null" id="Process_13"></a>Process</h2>
<h3 class="code-line" data-line-end="16" data-line-start="15">
<a href="https://www.blogger.com/null" id="Analysis_15"></a>Analysis</h3>
<div class="has-line-data" data-line-end="17" data-line-start="16">
A dotnet core application can only consume dotnet core or dotnet standard dependencies, so the first task is to understand the dependency tree: what are the projects, NuGet packages, and system assemblies that the application relies upon, and which assemblies do these rely on in turn. Once we have that picture, we can work from the leaves down to the trunk; from the top-level dependencies down to the application itself. For third party NuGet packages we have to make sure that a dotnet standard version is available. For libraries internal to the organisation, we have to add each one to our list of projects that we will need to convert to dotnet standard.</div>
<div class="has-line-data" data-line-end="19" data-line-start="18">
I used my own tool, <a href="https://github.com/mikehadlow/AsmSpy">AsmSpy</a>, to help with this. It was originally designed to report on assembly version conflicts, but since it already built an internal dependency graph, it was a relatively simple extension to add a visualizer to <a href="https://github.com/mikehadlow/AsmSpy/commit/f60398a78edea988ccee8a8459c2a492537e04bf">output the graph as a tree view</a>. To do this, simply add the <code>-tr</code> option:</div>
<div class="has-line-data" data-line-end="21" data-line-start="20">
<code>asmspy.exe <path to application executable> -tr</code></div>
<div class="has-line-data" data-line-end="23" data-line-start="22">
At the end of the analysis process, we should have a list of NuGet packages to be checked for dotnet standard versions, our internal libraries that need to be converted to dotnet standard, and our applications/services that need to be converted to dotnet core. We didn’t have any problems with base class libraries or frameworks because our services are all console executables that communicate via EasyNetQ, so the BCL footprint was very light. Of course you will have a different experience if your application uses something like WCF.</div>
<h3 class="code-line" data-line-end="25" data-line-start="24">
<a href="https://www.blogger.com/null" id="Converting_Projects_to_dotnet_Standard_and_Core_24"></a>Converting Projects to dotnet Standard and Core</h3>
<div class="has-line-data" data-line-end="26" data-line-start="25">
Some early experiments we tried with converting .NET Framework projects to dotnet Standard or Core in place, by modifying the <code>.csproj</code> files, did not go well, so we soon settled on the practice of creating entirely new solutions and projects and simply copying the .cs files across. For this, <a href="https://git-scm.com/docs/git-worktree">Git Worktree</a> is your very good friend. Worktree allows you to create a new branch with a new working tree in a separate directory, so you can maintain both your main branch (master for example), and your conversion branch side by side. The project conversion process looks something like this:</div>
<ol>
<li class="has-line-data" data-line-end="28" data-line-start="27">Create a new branch in a new worktree with the worktree command: <code>git worktree add -b core-conversion <path to new working directory></code></li>
<li class="has-line-data" data-line-end="29" data-line-start="28">In the new branch open the solution in Visual Studio and remove all the projects.</li>
<li class="has-line-data" data-line-end="30" data-line-start="29">Delete all the project files using explorer or the command line.</li>
<li class="has-line-data" data-line-end="31" data-line-start="30">Create new projects, copying the names of the old projects, but using the dotnet Standard project type for libraries, ‘Class Library (.NET Standard)’, and the dotnet Core project type for services and applications. In our case all the services were created as ‘Console App (.NET Core)’. For unit tests we used ‘xUnit Test Project (.NET Core)’, or ‘MSTest Test Project (.NET Core)’, depending on the source project test framework.</li>
<li class="has-line-data" data-line-end="32" data-line-start="31">From our analysis (above), add the project references and NuGet packages required by each project.</li>
<li class="has-line-data" data-line-end="33" data-line-start="32">Copy the .cs files <em>only</em> from the old projects to the new projects. An interesting little issue we found was that old .cs files were still in the repository despite being removed from their projects. .NET Framework projects enumerate each file by name (the source of many a problematic merge conflict) but Core and Standard projects simply use a wildcard to include every .cs file in the project directory, so a compile would include these previously deleted files and cause build problems. Easily fixed by deleting the rogue files.</li>
<li class="has-line-data" data-line-end="34" data-line-start="33">Once all this is done the solution should build and the tests should all pass.</li>
<li class="has-line-data" data-line-end="35" data-line-start="34">NuGet package information is now maintained in the project file itself, so for your libraries you will need to copy that from your old <code>.nuspec</code> files.</li>
<li class="has-line-data" data-line-end="37" data-line-start="35">One you are happy that the application is working as expected, merge your changes back into your main branch.</li>
</ol>
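<div>For reference, a converted library project file ends up looking something like the sketch below. The package id, version numbers and package reference are illustrative only, but it shows the wildcard file inclusion mentioned in step 6 (no per-file Compile entries) and the NuGet metadata moved in from the old .nuspec file (step 8):</div>
<pre><Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <!-- package metadata that previously lived in the .nuspec file -->
    <PackageId>MyCompany.Messaging</PackageId>
    <Version>1.0.0</Version>
    <Authors>MyCompany</Authors>
    <Description>Internal messaging helpers.</Description>
  </PropertyGroup>
  <ItemGroup>
    <!-- no per-file Compile items: every .cs file under the project directory is included automatically -->
    <PackageReference Include="EasyNetQ" Version="3.7.1" />
  </ItemGroup>
</Project>
</pre>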
<div class="has-line-data" data-line-end="38" data-line-start="37">
You have now successfully converted your projects from .NET Framework to dotnet core and standard. Read on if you want to take advantage of the new dotnet Core frameworks available, and for ideas about build and deployment pipelines.</div>
<h3 class="code-line" data-line-end="40" data-line-start="39">
<a href="https://www.blogger.com/null" id="Taking_advantage_of_new_dotnet_core_frameworks_39"></a>Taking advantage of new dotnet core frameworks</h3>
<div class="has-line-data" data-line-end="41" data-line-start="40">
At this point we need to make a strategic decision about how far we want to take advantage of the new hosting, dependency-injection, configuration, and logging frameworks that now come out-of-the-box with dotnet core. We may decide that we will simply use the .NET Standard compatible versions of all our existing frameworks. In our case we had: TopShelf for Windows service hosting, Ninject for DI, System.Configuration for configuration, and log4net and NLog for logging, but we decided to replace all these with their Generic Host equivalents from the <code>Microsoft.Extensions.*</code> namespaces.</div>
<table class="table table-striped table-bordered"><thead>
<tr>
<th>Framework NuGet Package</th>
<th><code>Microsoft.Extensions.*</code> equivalent</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="http://topshelf-project.com/">TopShelf</a></td>
<td><a href="https://www.nuget.org/packages/Microsoft.Extensions.Hosting.WindowsServices">Microsoft.Extensions.Hosting.WindowsServices</a></td>
</tr>
<tr>
<td><a href="http://www.ninject.org/">Ninject</a></td>
<td><a href="https://www.nuget.org/packages/Microsoft.Extensions.DependencyInjection/">Microsoft.Extensions.DependencyInjection</a></td>
</tr>
<tr>
<td>System.Configuration</td>
<td><a href="https://www.nuget.org/packages/Microsoft.Extensions.Configuration/">Microsoft.Extensions.Configuration</a></td>
</tr>
<tr>
<td><a href="https://logging.apache.org/log4net/">log4net</a></td>
<td><a href="https://www.nuget.org/packages/Microsoft.Extensions.Logging/">Microsoft.Extensions.Logging</a></td>
</tr>
</tbody>
</table>
<div class="has-line-data" data-line-end="50" data-line-start="49">
The APIs of the existing 3rd party frameworks differ from the equivalent <code>Microsoft.Extensions.*</code> frameworks, so some refactoring is required to replace these. In the case of TopShelf and Ninject, the scope of this refactoring is limited; largely to the Program.cs file and the main service class for TopShelf, and to the NinjectModules where service registration occurs for Ninject. This makes it relatively painless to do the substitution. With Ninject, the main issue is the limited feature set of <code>Microsoft.Extensions.DependencyInjection</code>. If you make widespread use of advanced container features, you’ll find yourself writing a lot of new code to make the same patterns work. Most of our registrations were pretty straightforward to convert.</div>
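<div>As an illustration, the skeleton of a converted service ends up looking something like the sketch below. This is not our production code; the worker class is a hypothetical stand-in for the old TopShelf service class, and the registrations stand in for what previously lived in NinjectModules:</div>
<pre>using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// hypothetical worker, replaces the old TopShelf service class
public class Worker : BackgroundService
{
    private readonly ILogger<Worker> log;
    public Worker(ILogger<Worker> log) => this.log = log;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            log.LogInformation("Working...");
            await Task.Delay(TimeSpan.FromSeconds(10), stoppingToken);
        }
    }
}

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Microsoft.Extensions.Hosting.WindowsServices: runs as a Windows service
            // when installed as one, and as a plain console app otherwise
            .UseWindowsService()
            .ConfigureServices((context, services) =>
            {
                // registrations that previously lived in NinjectModules
                services.AddHostedService<Worker>();
            })
            .Build()
            .Run();
}
</pre>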
<div class="has-line-data" data-line-end="52" data-line-start="51">
Replacing <code>log4net</code> with <code>Microsoft.Extensions.Logging</code> is a bit more of a challenge since references to <code>log4net</code>, especially the <code>ILog</code> class and its methods, were spread liberally throughout our codebase. Here we found that the best refactoring method was to let the type system do the heavy lifting, using the following steps:</div>
<ol>
<li class="has-line-data" data-line-end="54" data-line-start="53">Uninstall the <code>log4net</code> NuGet package. The build will fail with many missing class and method exceptions.</li>
<li class="has-line-data" data-line-end="55" data-line-start="54">Create a new interface named <code>ILog</code> with namespace <code>log4net</code>, now the build will fail with just missing method exceptions.</li>
<li class="has-line-data" data-line-end="56" data-line-start="55">Add methods to your <code>Ilog</code> interface to match the missing <code>log4net</code> methods (for example <code>void Info(object message);</code>) until you get a clean build.</li>
<li class="has-line-data" data-line-end="57" data-line-start="56">Now use Visual Studio’s rename symbol refactoring to change your <code>ILog</code> interface to match the <code>Microsoft.Extensions.Logging</code> <code>ILogger</code> interface and its methods to match <code>ILogger</code>'s methods. For example rename <code>void Info(object message);</code> to <code>void LogInformation(string message);</code>.</li>
<li class="has-line-data" data-line-end="58" data-line-start="57">Rename the namespace from <code>log4net</code> to <code>Microsoft.Extensions.Logging</code>. This is a two step process because you can’t use rename symbol to turn one symbol into three, so rename <code>log4net</code> to some unique string, then use find and replace to change it to <code>Microsoft.Extensions.Logging</code>.</li>
<li class="has-line-data" data-line-end="60" data-line-start="58">Finally delete your interface .cs file, and assuming you’ve already added the <code>Microsoft.Extensions.Hosting</code> NuGet package and its dependencies (which include logging), everything should build and work as expected.</li>
</ol>
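<div>To make steps 2 and 3 concrete, the temporary shim interface looked something like the sketch below; the exact method list simply grows until the build is clean, and the whole file is deleted again at step 6:</div>
<pre>using System;

// temporary shim: same name and namespace as the log4net interface that was just removed,
// so the existing call sites compile unchanged while we rename towards ILogger
namespace log4net
{
    public interface ILog
    {
        void Debug(object message);
        void Info(object message);
        void Warn(object message);
        void Error(object message, Exception exception);
    }
}
</pre>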
<div class="has-line-data" data-line-end="61" data-line-start="60">
Configuration is another challenge. Gone are our old friends <code>App.config</code> and <code>System.Configuration.ConfigurationManager</code> to be replaced with a new configuration framework, <code>Microsoft.Extensions.Configuration</code>. This is far more flexible and can load configuration from various sources, including JSON files, environment variables, and command line arguments. We replaced our <code>App.config</code> files with <code>appsettings.json</code>, and refactored our attributed configuration classes into POCOs and used the <code>IConfigurationSection.Bind<T>(..)</code> method to load the config. An easier and more streamlined process than the clunky early 2000’s era <code>System.Configuration</code>. At a later date we will probably move to loading environment specific configuration from environment variables to better align with the Docker/k8s way of doing things.</div>
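<div>A sketch of what that looks like, with hypothetical section and property names, and assuming the relevant <code>Microsoft.Extensions.Configuration.*</code> packages are referenced (they come along with the generic host):</div>
<pre>// appsettings.json:
// {
//   "RabbitMq": { "Host": "localhost", "Port": 5672 }
// }

using Microsoft.Extensions.Configuration;

// plain POCO, no more attributed configuration section classes
public class RabbitMqSettings
{
    public string Host { get; set; }
    public int Port { get; set; }
}

public static class ConfigExample
{
    public static RabbitMqSettings Load()
    {
        IConfiguration configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json")
            .Build();

        // bind the "RabbitMq" section onto the POCO
        var settings = new RabbitMqSettings();
        configuration.GetSection("RabbitMq").Bind(settings);
        return settings;
    }
}
</pre>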
<h3 class="code-line" data-line-end="63" data-line-start="62">
<a href="https://www.blogger.com/null" id="Changes_to_our_build_and_deployment_pipeline_62"></a>Changes to our build and deployment pipeline</h3>
<div class="has-line-data" data-line-end="64" data-line-start="63">
As I mentioned above, we use a very common combination of <a href="https://www.atlassian.com/software/bitbucket">BitBucket</a>, <a href="https://www.jetbrains.com/teamcity/">Team City</a>, and <a href="https://octopus.com/">Octopus</a> to host our build and deployment pipeline. We follow a continuous delivery style deployment process. Any commit to a BitBucket Git repository immediately triggers a build, test and package process in Team City, which in turn triggers Octopus to deploy the package to our development environment. We then have to manually use the Octopus UI to release first to our QA environment and then to Production. Although our ultimate aim, and a prime motivation for the transition to Core, is to move to Docker and Kubernetes, we needed to be able to build and deploy using our existing tooling for the time being. This proved to be pretty straightforward. The changes were in three main areas:</div>
<ol>
<li class="has-line-data" data-line-end="66" data-line-start="65"><strong>Using the <code>dotnet</code> tool</strong>: The build and test process changed from using NuGet, MSBuild and xUnit, to having every step, except the Octopus trigger, run with the <code>dotnet</code> tool. This simplifies the process. One very convenient change is how easy it is to version the build with the command line switch <code>/p:Version=%build.number%</code>. We also took advantage of the self-contained feature to free us from having to ensure that each deployment target had the correct version of Core installed. This is a great advantage.</li>
<li class="has-line-data" data-line-end="67" data-line-start="66"><strong>JSON configuration variables</strong>: We previously used the <a href="https://octopus.com/docs/projects/variables/variable-substitutions">Octopus variable substitution feature</a> to inject environment specific values into our <code>App.config</code> files. This involved annotating the config file with Octopus substitution variables, a rather fiddly and error prone process. But now with the new <code>appsettings.json</code> file we can use the convenient <a href="https://octopus.com/docs/deployment-process/configuration-features/json-configuration-variables-feature">JSON configuration variable feature</a> to do the replacement, with no need for any Octopus specific annotation in our config file.</li>
<li class="has-line-data" data-line-end="69" data-line-start="67"><strong>Windows service installation and startup</strong>: Previously, with TopShelf, installing our windows services on target machines was a simple case of calling <code>ourservice.exe install</code> and <code>ourservice.exe start</code> to start it. Although the <code>Microsoft.Extensions.Hosting</code> framework provides hooks into the Windows service start and stop events, it doesn’t provide any facilities to install or start the service itself, so we had to write somewhat complex powershell scripts to invoke <code>SC.exe</code> to do the installation and the powershell <code>Start-Service</code> command to start. This is definitely a step backward.</li>
</ol>
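<div>For illustration, the Team City build and package steps reduce to a handful of <code>dotnet</code> commands, something like the sketch below; the configuration, runtime identifier and output path are examples only, and <code>%build.number%</code> is the Team City build counter mentioned in point 1:</div>
<pre>dotnet restore
dotnet build -c Release /p:Version=%build.number%
dotnet test -c Release --no-build
# self-contained publish: the deployment target does not need the Core runtime installed
dotnet publish -c Release -r win-x64 --self-contained true -o output /p:Version=%build.number%
</pre>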
<h2 class="code-line" data-line-end="70" data-line-start="69">
<a href="https://www.blogger.com/null" id="Observations_69"></a>Observations</h2>
<div class="has-line-data" data-line-end="71" data-line-start="70">
The conversion of our entire suite of services from .NET Framework to Core turned out to be a bigger job than we at first expected. This was mainly because we took the opportunity to update our libraries and services to replace our 3rd party NuGet packages with the new <code>Microsoft.Extensions.*</code> frameworks. This was a significant refactoring effort. Doing a thorough analysis of your project and its dependencies before embarking on the conversion is essential. With large scale distributed applications such as ours, it’s often surprising how deep the organisation’s internal dependency graph goes, especially if, like me, you are converting large codebases which you didn’t have any input into writing. With the actual project conversion I would highly recommend starting with new projects rather than trying to convert them in place. This turned out to be a far more reliable method.</div>
<div class="has-line-data" data-line-end="73" data-line-start="72">
DotNet Core is a complete ground-up reinvention of the .NET tooling and frameworks, and the 20 year difference shows in many places. The tooling is modern, as are the frameworks, and although there’s plenty to argue about with the individual decisions the team have made, on the whole it’s a large step forward. This was apparent in many ways during the conversion process, with many things being simpler and easier than with the old .NET Framework. Having the entire SDK surfaced through a single command line tool (the <code>dotnet</code> command), making automated build processes so much easier, is probably the most prominent example. I for one am very pleased we were able to make the effort to make the change.</div>
http://mikehadlow.blogspot.com/2020/04/a-framework-to-dotnet-core-conversion.htmlnoreply@blogger.com (Mike Hadlow)2tag:blogger.com,1999:blog-15136575.post-35042844306478321642019年1月29日 14:47:00 +00002019年01月29日T14:47:08.623+00:00Why Containers are a Game Changer for Software Development<p>I originally wrote this piece as of part of a paper evaluating container technology for a client.</p> <p>This document describes container technology, best represented by Docker. Containerization is a game changing technology that’s experiencing rapid adoption. Some measures have around 25% of companies now using Docker in some form (<a href="https://www.datadoghq.com/docker-adoption/">https://www.datadoghq.com/docker-adoption/</a>). Containers can dramatically simplify the software development process, allowing companies to be more agile and lower the cost of building and maintaining large software systems. This document looks at how containers fit within the general evolution of software systems.</p> <p><strong>A brief history of software</strong></p> <p>The history of software development is a story of successive rounds of abstraction and commodification. If you can treat a class of something (a computer, a network or a peripheral) as a black box with a consistent API, it enables common industry wide tooling and commodification.</p> <p>In the early years of computing software was written for a particular version of hardware. Each program would would take complete control of the machine, use the processor’s physical instruction set, directly address physical memory and have intimate knowledge of the locations and capabilities of any devices attached to the machine. This meant that a program written for one model of machine would not work on a different model. Machines were typically sold with a dedicated software suite, which meant that the same classes of  software had to be written repeatedly for each machine. In the early days of home computers it was typical for a word processor, for example, to come in different versions for all the major machines on the market and with drivers for a range of popular printers. If your printer wasn’t included it wouldn’t work.</p> <p>To solve this problem and allow a single program to run on a variety of machines, operating systems were created to provide an abstraction layer over the underlying hardware. So long as a piece of software was designed to run on the operating system of your computer, it worked. The operating system also isolated the program from variations in peripheral hardware. You no longer had to care about what particular printer was attached to the computer because an operating system driver provided a common abstract printer API regardless of the actual hardware model. As operating systems evolved they provided not only isolation from the hardware, but also isolation from other programs running on the same computer with innovations such as protected memory and pre-emptive multitasking. With the adoption of an operating system as a common platform, the thing it abstracted, the hardware, became a commodity. This lead to dramatic cost reductions and economies of scale, both for hardware and software.</p> <p>The same adoption and standardisation also occurred with networking. TCP/IP became the standard which allowed computer systems to be connected world wide and HTTP has become a standard for sharing data globally. 
This has allowed software solutions to serve customers at a massive scale.</p> <p>As software that runs on commodified platforms became more complex, various mechanisms evolved to make software more modular and reusable. Collections of modular software ‘libraries’ could be brought together to create more powerful applications in less time. Software environments also evolved to include runtimes to relieve programmers from the need to manage memory and to further abstract the program from its environment. Software systems also evolved to be composed of multiple processes running on multiple machines to better aid scalability and resilience. Various services and infrastructure tools such as web servers and databases provided off-the-shelf capabilities to further aid software development.</p> <p><strong>The complexity of the modern software environment</strong></p> <p>All these libraries, services and infrastructure have to be correctly configured for the software to run. This is often a semi-manual, complex, time consuming and error prone task. When multiple pieces of software run on a single machine there can often be complex and damaging interactions between conflicting library and tool versions. The complexity of provisioning environments, installing tools and libraries of the correct version, opening the correct ports and configuring connections, especially when this is done in different environments with differing network topologies, a fertile environment for human error.</p> <p>Once in production, these complex systems need to be monitored, managed and audited. This introduces additional tooling and configuration, adding yet another vector for misconfiguration and error.</p> <p>Also the difficulty of coordinating teams of software developers who create complex software systems requires the formalisation and automation of the software development process. This introduces new tools, such as build and deployment systems that must also be configured correctly for the software to be successfully delivered into production. This configuration work is also often manual, fragile and error prone, and since a single toolset is often shared by many teams and components, it creates significant friction when introducing new services, libraries or tools.</p> <p>Because the delivery and runtime environments are maintained and versioned separately from source code, this introduces risk and friction. Services often share both environments and delivery processes, meaning that upgrades and changes have to be coordinated. In a worse-case scenario separate teams may be tasked with maintaining the runtime infrastructure and the delivery process, escalating any change to a large scale organisational issue. Often the overwhelming task of synchronising software and environment upgrades means that they are done infrequently and with a great deal of ceremony and risk.</p> <p>Virtual machines don’t really help here. They can make the work of technical operations easier; they decouple an entire operating system environment from hardware and make it easy to replicate and move environments around hardware infrastructure. However, VMs make very little difference to software developers. 
The software pipeline and runtime environment is still maintained and configured separately from the the software source code itself.</p> <p>The stage has been set for another round of abstraction, this time the abstraction is the interface between the operating system, the userland environment and the network topology that the software is built and runs within.</p> <p>Containerisation is the technology that provides this abstraction and solves many of the problems described above. Containers provide a scripted per-process runtime user environment that is maintained alongside the source code. The software build process and target network topology of a large software system is also defined in container and composition/orchestration scripts. Because the scripts are maintained by developers on a per-process (per service) basis and are maintained under source control alongside the service’s source code, the software describes the environment that builds it and that it runs in, and this description is versioned with the software. Effectively it reverses the usual hierarchy and allows each component to own it’s environment and delivery process. The environment for a component is identical regardless of whether it’s running on the developer’s machine, in a test environment or in production and removes much of the risk of configuring the software pipeline and runtime environment described above. This idea of extending Git workflow to operations is known as GitOps. (see <a href="https://www.weave.works/blog/what-is-gitops-really">https://www.weave.works/blog/what-is-gitops-really</a>). In the same way that operating systems removed the need for software to care about hardware, so containers allow the software environment to be described without having to know or care about the specific operating system environment and the physical network.</p> <p><strong>Conclusion</strong></p> <p>Docker and its various orchestration options offer game changing performance increases for large software organisations. It provides a single, integrated, scripted, scalable platform for both the software delivery pipeline and production operations. It’s experiencing fast adoption and will soon be as standard a part of IT infrastructure as VMs are currently. Any software organisation of reasonable scale should now be seriously looking at a path for adoption.</p>http://mikehadlow.blogspot.com/2019/01/why-containers-are-game-changer-for.htmlnoreply@blogger.com (Mike Hadlow)0tag:blogger.com,1999:blog-15136575.post-58092351138491671612018年11月06日 15:54:00 +00002018年11月06日T15:54:41.393+00:00Decoupling, Architecture and Teams<p>This article discusses the relationship in software development between code organisation and social organisation. I discuss why software and teams do not scale easily, lessons we can learn from biology and the internet, and show how we can decouple software and teams to overcome scaling problems.</p> <p>The discussion is based on my 20 years experience of building large software systems, but I’ve also been very impressed with the book <a href="https://www.amazon.co.uk/gp/product/1942788339/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=coderantmikeh-21&creative=6738&linkCode=as2&creativeASIN=1942788339&linkId=ba152af123517ccd1d3100240ee00b78" target="_blank">Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations</a><em> by Nicole Forsgren, Jez Humble and Gene Kim</em>, which provides research data to back up most of the assertions that I make here. 
It’s a highly recommended read.</p> <p><a href="https://www.amazon.co.uk/gp/product/1942788339/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=coderantmikeh-21&creative=6738&linkCode=as2&creativeASIN=1942788339&linkId=7bb9a3a50edf60a24941965e791a3d33" target="_blank"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLWsBlXz3qE6Gjz1ifo7zrQci9X2cO9xUbz_tntHQock6rENGus0gseSjLSwmxA_FhKOlhGNXUOOau5WdefE0mAY-y_JoRJ4Zyjs6oQ3ls69k5LqQvk__gNHmcJcaL-An3qDFRZQ/?imgmax=800" width="159" height="240" /></a></p> <p><strong>Software and software teams do not scale.</strong></p> <p>It’s a very common story, the first release of a product, perhaps written by one or two people, often seems remarkably easy. It might only provide limited functionality, but it is written quickly and fulfils the customer’s requirements. Customer communication is great because the customer is usually in direct communication with the developers. Any defects are quickly fixed and new features can be added quite painlessly. After a while though the pace slows. Version 2.0 takes a little longer than expected, it’s harder to fix bugs and new features don’t seem to come out quite so easily. The natural response to this is to add new developers to the team, yet each extra person added to the team seems to reduce productivity. As the software ages and grows in complexity it appears to atrophy. In extreme cases, organizations can find themselves running on software that’s hugely expensive to maintain and that it seems almost impossible to change. There are negative scale effects. The problem is that you don’t have to make ‘mistakes’ for this to happen, it’s so common that one could almost say that it’s a ‘natural’ property of software.</p> <p>Why is this? There are two reasons, code related and team related. Neither code nor teams scale well.</p> <p>As a codebase grows, it becomes harder for a single person to understand. There are fixed human cognitive limits and while it is possible for a single individual to maintain a mental model of the details of a small system, once it gets past a certain size, it grows larger than the cognitive range of a single person. Once a team grows past five or more people it’s almost impossible for one person to stay up to speed with how all parts of the system work. When no one person understands the complete system, fear reigns. In a tightly coupled large system it’s very hard to know the impact of any significant change since the consequences are not localised. Developers learn to work in a minimum-impact style of work-arounds and duplication rather than factoring out commonalities and creating abstractions and generalisations. This feeds back into system complexity, further amplifying these negative trends. Developers stop feeling any ownership of code they don’t really understand and are reluctant to refactor. Technical debt increases. It also makes for unpleasant and unsatisfying work and encourages ‘talent evaporation’, where the best developers, those who can more easily find work elsewhere, move on.</p> <p>Teams also don’t scale. As the team grows, communication gets harder. 
The simple network formula comes into play: </p> <p><strong>c = n(n-1)/2</strong> <br />(where n is the number of people and c is the number of communication channels)</p> <table cellspacing="0" cellpadding="2" border="1"><tbody> <tr> <td valign="top">Number of team members</td> <td valign="top">Number of communication channels</td> </tr> <tr> <td valign="top">1</td> <td valign="top">0</td> </tr> <tr> <td valign="top">2</td> <td valign="top">1</td> </tr> <tr> <td valign="top">5</td> <td valign="top">10</td> </tr> <tr> <td valign="top">10</td> <td valign="top">45</td> </tr> <tr> <td valign="top">100</td> <td valign="top">4950</td> </tr> </tbody></table> <p>The communication and coordination needs of the team rise geometrically as the team size increases. It’s very hard for a single team to stay a coherent entity over a certain size and the natural human social tendency to split into smaller groups will lead to informal sub-groups forming even if there is no management input. Peer level communication becomes difficult and will naturally be replaced by emergent leaders and top-down communication. Team members change from being equal stakeholders in the system to directed workers. Motivation suffers and there is a lack ownership driven by the <a href="https://en.wikipedia.org/wiki/Diffusion_of_responsibility">diffusion of responsibility effect</a>.</p> <p>Management often intervenes at this stage and formalises the creation of new teams and management structures to organize them. But whether formal or informal, larger organisations find it hard to keep people motivated and actively engaged.</p> <p>It’s typical to blame poorly skilled developers and bad management for these scaling pathologies, but that’s unfair, scale issues are a ‘natural’ property of growing and aging software, it’s what always happens unless you spot the problem early, recognise the inflexion point and work very hard to mitigate it. Software teams are constantly being created, the amount of software in the world is constantly growing and most software is small scale, so it’s quite common for a successful and growing product to have been created by a team that has no experience of large-scale software development. Expecting them recognise the inflexion point when the scale issues start to bite and to know what to do about it is unrealistic.</p> <p><strong>Scaling lessons from nature</strong></p> <p>I recently read <a href="https://www.amazon.co.uk/gp/product/1780225598/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=coderantmikeh-21&creative=6738&linkCode=as2&creativeASIN=1780225598&linkId=97371bb5f23ce8ac77e1478aceaa4dc3" target="_blank">Geoffrey West’s excellent book Scale</a>. He talks about the mathematics of scale in biological and social-economic systems. His thesis is that all large complex systems obey fundamental scaling laws. It’s a fascinating read and very much recommended. For the purposes of this discussion I want to focus on his point that many biological and social systems scale amazingly well. Take the basic mammal body plan. We share the same cell types, bone structure, nervous and circulatory system of all mammals. Yet the difference in size between a mouse and a blue whale is around 10^7. How does nature use the same fundamental materials and plans for organisms of such hugely different scales? The answer appears to be that evolution has discovered fractal branching networks. This can be seen quite obviously if you consider a tree. <em>Each small part of the tree looks like a small tree</em>. 
The same is true for our mammalian circulatory and nervous systems, they are branching fractal networks where a small part of your lungs or blood vessels looks like a scaled down version of the whole.</p> <p><a href="https://www.amazon.co.uk/gp/product/1780225598/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=coderantmikeh-21&creative=6738&linkCode=as2&creativeASIN=1780225598&linkId=97371bb5f23ce8ac77e1478aceaa4dc3"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8C2KavWRDln5WMM220LJ9JLOB6z3508ZtfMiubjXlQl8e8PAx8mKPVN22Ig-F81EfPSa-nktUZmSoMr75nwv2DiERrjLepYEBuJrBNQddcv-8jTTJvhO17YPZalHVJ1U6DIkt_A/?imgmax=800" width="156" height="240" /></a></p> <p>Can we take these ideas from nature and apply them to software? I think there are important lessons that we can learn. If we can build large systems which have smaller pieces that look like complete systems themselves, then it might be possible to contain the pathologies that affect most software as it grows and ages.</p> <p>Are there existing software systems that scale many orders of magnitude successfully? The obvious answer is the internet, a global software system of millions of nodes. Subnets do indeed look and work like smaller versions of the whole internet.</p> <p><strong>Attributes of decoupled software.</strong></p> <p>The ability to decouple software components from the larger system is the core technique for successful scaling. The internet is fundamentally a decoupled software architecture. This means that each node, service or application on the network has the following properties:</p> <ul> <li> <p>Obeys a shared communication protocol.</p> </li> <li> <p>Only shares state via a clear contract with other nodes.</p> </li> <li> <p>Does not require implementation knowledge to communicate.</p> </li> <li> <p>Versioned and deployed independently.</p> </li> </ul> <p>The internet scales because it is a network of nodes that communicate over a set of clearly defined protocols. The nodes only share their state via the protocol, the implementation details of one node do not need to be understood by the nodes communicating with it. The global internet is not deployed as a single system, each node is separately versioned and deployed. Individual nodes come and go independently of each other. Obeying the internet protocol is the only thing that really matters for the system as a whole. Who built each node, when is created or deleted, how it’s versioned, what particular technologies and platforms it uses are all irrelevant to the internet as a whole. This is what we mean by decoupled software.</p> <p><strong>Attributes of decoupled teams.</strong></p> <p>We can scale teams by following the similar principles:</p> <ul> <li> <p>Each sub-team should look like a complete small software organization.</p> </li> <li> <p>The internal processes and communication of the team should not be a concern outside the team.</p> </li> <li> <p>How the team implements software should not be important outside the team.</p> </li> <li> <p>Teams should communicate with the wider organisation about external concerns: common protocols, features, service levels and resourcing.</p> </li> </ul> <p>Small software teams are more efficient than large ones, so we should break large teams into smaller groups. The lesson from nature and the internet is that the sub-teams should look like a single, small software organisations. How small? 
Ideally one to five individuals.</p> <p>The point that each team should look like a small independent software organisation is important. Other ways of structuring teams are less effective. It’s often tempting to split up a large team by function, so we have a team of architects, a team of developers, a team of DBAs, a team of testers, a deployment team and an operations team, but this solves none of the scaling problems we talked about above. A single feature needs to be touched by every team, often in an iterative fashion if you want to avoid waterfall style project management - which you do. The communication boundaries between these functional teams become a major obstacle to effective and timely delivery. The teams are not decoupled because they need to share significant internal details in order to work together. Also the interests of the different teams are not aligned: The development team is usually rewarded for feature delivery, the test team for quality, the support team for stability. These different interests can lead to conflict and poor delivery. Why should the development team care about logging if they never have to read the logs? Why should the test team care about delivery when they are rewarded for quality?</p> <p>Instead we should organise teams by decoupled software services that support a business function, or a logical group of features. Each sub-team should design, code, test, deploy and support their own software. The individual team members are far more likely to be generalists than specialists because a small team will need to share these roles. They should focus on automating as much of the process as possible: automated tests, deployment and monitoring. Teams should choose their own tools and decide for themselves how to architect their systems. While the organizational protocols that the system uses to communicate must be decided at an organization level, the choice of tools used to implement the services should be delegated to the teams. This very much aligns with a DevOps model of software organization.</p> <p>The level of autonomy that a team has is a reflection of the level of decoupling from the wider organization. Ideally the organization should care about the features, and ultimately business value, that the team provides, and the cost of resourcing the team.</p> <p>The role of the software architect is important in this style of organisation. They should not focus on the specific tools and techniques that teams use, or micro-manage the internal architecture of the services, instead they should concentrate on the protocols and interactions between the various services and the health of the system as a whole.</p> <p><strong>Inverse Conway: software organisation should model the target architecture.</strong></p> <p>How do decoupled software and decoupled teams align? <a href="https://en.wikipedia.org/wiki/Conway%27s_law" target="_blank">Conway’s Law</a> states that:</p> <p><em>"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."</em></p> <p>This is based on the observation that the architecture of a software system will reflect the team structure of the organization that creates it. We can ‘hack’ Conway’s law by inverting it; organize our teams to reflect our desired architecture. With this in mind we should align our decoupled teams with our decoupled software components. Should this be a one-to-one relationship? 
I think this is ideal, although it seems that it’s fine for a single small team to deliver several decoupled software services. I would argue that the scaling inflexion point for teams is larger than that for software, so this style of organisation seems valid. However, it’s important that the software components should remain segregated with their own version and deployment story even if some share the same team. We would like to be able to split the team if it grows too large, and being able to hand off various services to different teams would be a major benefit. We can’t do that if the services are tightly coupled or share process, versioning or deployment. </p> <p>We should avoid having multiple teams work on the same components, this is an anti-pattern and is in some ways worse than having a single large team working on an oversize single codebase because the communication barriers between the teams leads to even worse feelings of lack-of-ownership and control.</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkik0tz7aHdX_UMkmCjzOeuVZFsuJt-Om-6y3S7dt2vGX06zAEDLCJyH4KsNUcUILjmSbqjK1PbImjHHuflbNjpZEtAOh-nmigBhFk8a51cRPfkzoQ0UmU2kB8EW02pNn66Nf02w/s1600-h/image%255B4%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3yPfUhCotU1NrfsFyn4RtWdmzLXXstEMPOvHgzUpPzHGfo4sAYp1dnucqo6pIVsMGKeEEFdGG2fXoBPuS9XTfJhwzvl6YPuIgZBYTXP0XTkLaDTeEnsqm0yYoQawoXX48YONMLw/?imgmax=800" width="796" height="386" /></a></p> <p>The communication requirements between decoupled teams building decoupled software are minimised. Taking the example of the internet again, it’s often possible to use an API provided by another company without any direct communication if the process is simple and documentation sufficient. The communication should not require any discussion of software process or implementation, that is internal to the team, instead communication should be about delivering features, service levels, and resourcing.</p> <p>An organisation of decoupled software teams building decoupled software should be easier to manage than the alternatives. The larger organization should focus on giving the teams clear goals and requirements in terms of features and service levels. The resource requirements should come from the team, but can be used by the organization to measure return on investment.</p> <p><strong>Decoupled Teams Building Decoupled Software</strong></p> <p>Decoupling software and teams is key to building a high performance software organisation. My anecdotal experience supports this view. I’ve worked in organisations where teams were segregated by software function or software layer or even where they’ve been segregated by customer. I’ve also worked in chaotic large teams on a single codebase. All of these suffer from the scaling problems discussed above. The happiest experiences were always where my team was a complete software unit independently building, testing and deploying decoupled services. 
But you don’t have to rely on my anecdotal evidence, the book Accelerate (described above), has survey data to support this view.</p>http://mikehadlow.blogspot.com/2018/11/decoupling-architecture-and-teams.htmlnoreply@blogger.com (Mike Hadlow)0tag:blogger.com,1999:blog-15136575.post-44664140987933711882018年10月01日 13:56:00 +00002018年10月03日T09:30:05.671+01:00Visual Programming - Why it’s a Bad Idea<p><img style="border: 0px currentcolor; border-image: none; background-image: none;" border="0" src="https://upload.wikimedia.org/wikipedia/commons/f/fb/Scratch_2.0_Screen_Hello_World.png" /></p> <p><strong>Note. <a href="https://www.reddit.com/r/programming/comments/9kgk75/visual_programming_why_its_a_bad_idea/" target="_blank">This post had a great response on Reddit with over 300 comments</a>. I’ve added an update section to the end of this post to address some of the main criticisms.</strong></p> <p>A <a href="https://en.wikipedia.org/wiki/Visual_programming_language" target="_blank">visual programming language</a> is one that allows the programmer to create programs by manipulating graphical elements rather than typing textual commands. A well known example is <a href="https://scratch.mit.edu/" target="_blank">Scratch</a>, a visual programming language from MIT that’s used to teach children. The advantages given are that they make programming more accessible to novices and non-programmers. There was a very popular movement in the 1990’s to bring these kinds of tools into the enterprise with so called <a href="https://en.wikipedia.org/wiki/Computer-aided_software_engineering" target="_blank">CASE tools</a>, where enterprise systems could be defined with UML and generated without the need for trained software developers. This involved the concept of ‘round tripping’, where a system could be modelled visually, the program code would be generated from the models, and any changes to the code could be pushed back to the model. These tools failed to deliver on their promises and most of these attempts have now been largely abandoned.</p> <p>So visual programming has failed to catch on, except in some very limited domains. This is fundamentally attributable to the following misconceptions about programming:</p> <ul> <li> Textual programming languages obfuscate what is essentially a simple process.</li> <li>Abstraction and decoupling play a small and peripheral part in programming.</li> <li>The tools that have been developed to support programming are unimportant.</li> </ul> <p>The first misconception holds that software development has significant barriers to entry because textual programming languages obfuscate the true nature of programming. The popularity of Scratch among educationalists plays to this misconception. The idea is that programming is actually quite simple and if we could only present it in a clear graphical format it would dramatically lower the learning curve and mental effort required to create and read software. I expect this misconception comes from a failure to actually read a typical program written in a standard textual programming language and imagine it transformed into graphical elements of boxes and arrows. If you do this it soon becomes apparent that a single line of code often maps to several boxes and since it’s not untypical for even a simple program to contain hundreds of lines of code, this translates into hundreds or even thousands of graphical elements. 
The effort to mentally parse such a complex picture is often far harder than reading the equivalent text.</p> <p>The solution for most visual programming languages is to make the ‘blocks’ represent more complex operations so that each visual element is equivalent to a large block of textual code. Visual workflow tools are a particular culprit here. The problem is that this code needs to be defined somewhere. It becomes ‘property dialogue programming’. The visual elements themselves only represent the very highest level of program flow and the majority of the work is now done in standard textual code hidden in the boxes. Now we have the worst of both worlds, textual programming unsupported by modern tooling. The property dialogues are usually sub-standard development environments and enforce a particular choice of language, usually a scripting language of some kind. Visual elements can’t be created except by experienced programmers, or understood except by reading their underlying code, so most of the supposed advantages of the visual representation are lost. There’s an impedance mismatch between the visual ‘code’ and the textual code, and the programmer has to navigate the interface between the two, often spending more effort on conforming to the needs of the graphical programming tool than solving the problem at hand.</p> <p>Which brings us to the second misconception, that abstraction and decoupling are peripheral concerns. Visual programming makes the assumption that most programs are simple procedural sequences, somewhat like a flowchart. Indeed, this is how most novice programmers imagine that software works. However, once a program gets larger than a quite trivial example, the complexity soon overwhelms the novice programmer. They find that it’s very hard to reason about a large procedural code base and often struggle to produce stable and efficient software at scale. Most of the innovation in programming languages is an attempt to manage complexity, most commonly via abstraction, encapsulation and decoupling. All the type systems and apparatus of object-oriented and functional programming are really just an effort to get this complexity under control. Most professional programmers will be continually abstracting and decoupling code. Indeed, the difference between good and bad code is essentially how well this has been done. Visual programming tools rarely have efficient mechanisms to do this and essentially trap the developer in an equivalent of 1970’s BASIC.</p> <p>The final misconception is that visual programmers can do without all the tools that have been developed over the decades to support programming. Consider the long evolution of code editors and IDEs. Visual Studio, for example, supports efficient intellisense allowing the look-up of thousands of APIs available in the base class library alone. The lack of good source control is another major disadvantage of most visual programming tools. Even if they persist their layout to a textual format, the diffs make little or no sense. It’s very hard to do a ‘blame’ on a large lump of XML or JSON. Things that make no difference to the functional execution of the program, such as the position and size of the graphical elements, still lead to changes in the metadata, which make it harder still to parse a diff. Textual programming languages have learnt to separate units of code into separate source files, so a change in one part of the system is easy to merge with a change in another. 
Visual tools will usually persist as a diagram per file which means that merges become problematic, made harder still when the semantic meaning of the diff is difficult to parse.</p> <p>In conclusion, the advantages given for visual programming tools, that they make the program easier to create and understand, are almost always a mirage. They can only succeed in the simplest of cases and at best result in the suboptimal situation where the visual elements are simply obfuscating containers for textual code.</p> <p><strong>Update...</strong></p> <p>I was probably wrong to use a screen-shot of Scratch and use it as the primary example in my first paragraph. I’m not an educator and I don’t really have an opinion about Scratch’s effectiveness as a teaching tool. Many people say that they find it enormously useful in teaching programming, especially to children. Anything that introduces more people to the wonderful and exciting world of programming is only to be celebrated. I really didn’t intend this post as a criticism of Scratch specifically; it was simply the visual programming system that I thought the largest number of people would have heard of.</p> <p>Other counter examples cited on Reddit were static structure tools, such as UI designers, database schema designers, or class designers. I agree that they can be very useful. Anything that helps to visualise the structure of data or the large scale structure of a program is a bonus. These are never enough on their own though. The ultimate failure of 90’s tools such as Power Builder that attempted to build on graphical visualisations to create a fully code-free development environment attests to this.</p>http://mikehadlow.blogspot.com/2018/10/visual-programming-why-its-bad-idea.htmlnoreply@blogger.com (Mike Hadlow)37tag:blogger.com,1999:blog-15136575.post-30468669511505914032018年9月14日 13:57:00 +00002018年09月14日T15:03:27.744+01:00What I Learnt Creating Guitar Dashboard: SVG, TypeScript and Music Theory.<p><a title="Guitar Dashboard" href="http://guitardashboard.com/" target="_blank"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpbCep3EGyQufWudWJCRW3TVnOxkU7YlZoDieBZB3si668dfm-mpbOv0O-GDhiips5qQfgqZvgBl0DmWw7_bUlNrHW04SerByTCA3lSmLYlT-YKt5_eLblT6cwYbmrAae3lh1QSQ/?imgmax=800" width="607" height="484" /></a></p> <p>Guitar Dashboard is a side project I’ve been working on occasionally over the past two years. It’s an open source web application (you can find it at <a title="http://guitardashboard.com/" href="http://guitardashboard.com/">http://guitardashboard.com/</a> and the code at <a title="https://github.com/mikehadlow/gtr-cof" href="https://github.com/mikehadlow/gtr-cof">https://github.com/mikehadlow/gtr-cof</a>). It’s intended as an interactive music theory explorer for guitarists that graphically links theoretical concepts, such as scales, modes and chords, to the guitar fretboard. It evolved out of my own attempts, as an amateur guitarist, to get a better understanding of music theory. It includes an algorithmic music theory engine that allows arbitrarily complex scales and chords to be generated from first principles. This gives it far more flexibility than most comparable tools. Coming at music theory from the point of view of a software developer, and implementing a music theory rules engine, has given me a perspective that’s somewhat different from most traditional approaches. 
This post outlines what I’ve learnt, technically and musically, while building Guitar Dashboard. There are probably things here that are only interesting to software developers, and others only of interest to musicians, but I expect there’s a sizable group of people, like me, who fit in the intersection of that Venn diagram and who will find it interesting.</p> <h4>Why Guitar Dashboard?</h4> <p>Guitar dashboard’s core mission is to graphically and interactively integrate music theory diagrams, the chromatic-circle and circle-of-fifths, with a graphical representation of the fretboard of a stringed instrument. It emerged from my own study of scales, modes and chords over the past three or four years.</p> <p>I expect, like many self taught guitarists, my main aim when I first learnt to play at the age of 15 was to imitate my guitar heroes, Jimmy Page, Jimi Hendrix, Steve Howe, Alex Lifeson and others. A combination of tips from fellow guitarists, close listening to the 60’s and 70’s rock canon, and a ‘learn rock guitar’ book was enough to get me to a reasonable imitation. I learnt how to play major and minor bar chords and a pentatonic scale for solos and riffs. This took me happily through several bands in my 20s and 30s. Here’s me on stage in the 1980’s with The Decadent Herbs.</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHyZ_osuUVa9fx_iip1QjWcGKvetHpk2BL782MVZm-7lmJRVu-yZiNWLLeRoYSJZN2_jBWno4PwJ3OQJbA6pRANiqUIexUCm-uRspFKy_RJW3XByU5oFNgGCxeDygW_x9h1HsAjw/s1600-h/mike-decadent-herbs5"><img title="mike-decadent-herbs" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="mike-decadent-herbs" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhT1PmmoVMWkbFBUfhvlGcHND0VdMy6ROHmH1UUZfR9DkV2XBRCx0O5uVQTL-bdmMaG94XrL7qrbVnE80HeSfdYeL8zxAzipqsTIzA3jbrFtW5XWgFg7fb3iTruYVn23qcd-qk8Kg/?imgmax=800" width="366" height="480" /></a></p> <p>I was aware that there was a whole school of classical music theory, but it didn’t at first appear to be relevant to my rock ambitions, and any initial attempts I tried at finding out more soon came to grief on the impenetrable standard music notation and vocabulary, and the very difficult mapping of stave to fretboard. I just couldn’t be bothered with it. I knew there were major and minor scales, I could even play C major on my guitar, and I’d vaguely heard of modes and chord inversions, but that was about it. In the intervening years I’ve continued to enjoy playing guitar, except these days it’s purely for my own amusement, but I’d become somewhat bored with my limited range of musical expression. It wasn’t until around four years ago, on a train ride, that a question popped into my head, "what is a ‘mode’ anyway?" </p> <p>In the intervening decades since my teenage guitar beginnings the internet had happened, so while then I was frustrated by fusty music textbooks, now Wikipedia, immediately to hand on my phone, provided a clear initial answer to my ‘what is a mode’ question, followed soon after by a <a href="http://www.ethanhein.com/wp/2015/music-theory-for-the-perplexed-guitarist/" target="_blank">brilliant set of blog posts by Ethan Hein</a>, a music professor at NYU. His clear explanations of how scales are constructed from the 12 chromatic tones by selecting certain intervals, and how chords are then constructed from scales, and especially how he relates modes to different well known songs, opened up a whole new musical world for me. 
I was also intrigued by his use of the circle-of-fifths, which led me to look for interactive online versions. I found <a href="https://randscullard.com/CircleOfFifths/" target="_blank">Rand Scullard’s excellent visualisation</a> a great inspiration. At the same time in my professional work as a software developer I’d become very excited by the possibilities of SVG for interactive browser based visualisations and realised that Rand’s circle-of-fifths, which he’d created by showing and hiding various pre-created PNG images, would be very easy to reproduce with SVG, and that I could drive it from an algorithmic music engine implemented from the theory that Ethan Hein had taught me. The flexibility offered by factoring out the music generation from the display also meant that I could easily add new visualisations, the obvious one being a guitar fretboard.</p> <p>My first version was pretty awful. Driven by the hubris of the novice, I’d not really understood the subtleties of note or interval naming, and my scales sometimes had duplicate note names, amongst other horrors. I had to revisit the music algorithm a few times before I realised that intervals are the core of the matter and the note names come out quite easily once the intervals are correct. The algorithmic approach paid off though; it was very easy to add alternative tunings and instruments to the fretboard since it was simply a case of specifying a different set of starting notes for each string, and any number of strings. Flipping the nut and providing a left-handed fretboard were similarly straightforward. I more recently added non-diatonic scales (access them via the ‘Scale’ menu). This also came out quite easily since the interval specification for the original diatonic scale is simply a twelve element Boolean array. Unfortunately the note naming issue appears again, especially for non-seven-note scales. Moving forward, it should be relatively easy to add a piano keyboard display, or perhaps, to slay an old demon, a musical stave that would also display the selected notes.</p> <p>For an introduction to Guitar Dashboard, I’ve created a video tour:</p> <h4><iframe height="315" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/JPcEvxoWTJM" frameborder="0" width="560" allow="autoplay; encrypted-media"></iframe> </h4> <p>So that’s Guitar Dashboard and my motivation for creating it. Now a brief discussion of some of the things I’ve learnt. First some technical notes about SVG and TypeScript, and then some reflections on music theory. <br /></p> <h4>The awesome power of SVG.</h4> <p>The visual display of Guitar Dashboard is implemented using SVG. </p> <p><a href="https://en.wikipedia.org/wiki/Scalable_Vector_Graphics" target="_blank">SVG</a> (Scalable Vector Graphics) is an "XML-based vector image format for two-dimensional graphics with support for interactivity and animation." (Wikipedia). All modern browsers support it. You can think of it as the HTML of vector graphics. The most common use case for SVG is simple graphics and graphs, but it really shines when you introduce animation and interactivity. 
Have a look at these <a href="https://www.creativebloq.com/design/examples-svg-7112785" target="_blank">blog</a> <a href="https://webdesign.tutsplus.com/articles/svg-brilliance-10-inspiring-examples-from-around-the-web--cms-27050" target="_blank">posts</a> to see some excellent examples.</p> <p>I was already a big fan of SVG before I started work on Guitar Dashboard and the experience of creating it has only made me even more enamoured. The ability to programmatically build graphical interactive UIs or dashboards is SVG’s strongest, but most underappreciated asset. It gives the programmer, or designer, far more flexibility than image based manipulation or HTML and CSS. The most fine grained graphical elements can respond to mouse events and be animated. I used the excellent <a href="https://d3js.org/" target="_blank">D3js</a> library as an interface to the SVG elements but I do wonder sometimes whether it was an appropriate choice. As a way of mapping data sets to graphical elements, it’s wonderful, but I did find myself fighting it to a certain extent. Guitar Dashboard is effectively a data generator (the music algorithm) and some graphs (the circles and the fretboard), but the graphs are so unlike most D3js applications that it’s possible I would have been better off just manipulating the raw SVG or developing my own targeted library.</p> <p>Another strength of SVG is the tooling available to manipulate it. Not only is it browser native, which also means that it’s easy to print and screen-shot, but there are also powerful tools, such as the open source vector drawing tool, <a href="https://inkscape.org/en/" target="_blank">Inkscape</a>, that make it easy to create and modify SVG documents. One enhancement that I’m keen to include in Guitar Dashboard is a ‘download’ facility that will allow the user to download the currently rendered SVG as a file that can be opened and modified in Inkscape or similar tools. Imagine you want to illustrate a music theory article or a guitar lesson: it would be easy to select what you want to see in Guitar Dashboard, download the SVG and then edit it at will. You could easily just cut out the fretboard, or the circle-of-fifths, if that’s all you needed. You could colour and annotate the diagrams in any way you wanted. Because SVG is a vector graphics format, you can blow up an SVG diagram to any size without rasterization. You could print a billboard with a Guitar Dashboard graphic and it would be completely sharp. This makes it an excellent choice for printed materials such as textbooks.</p> <h4>TypeScript makes large browser based applications easy.</h4> <p>Creating Guitar Dashboard was my first experience of writing anything serious in <a href="https://www.typescriptlang.org/" target="_blank">TypeScript</a>. I’ve written plenty of Javascript during my career, but I’ve always found it a rather unhappy experience and I’ve always been relieved to return to the powerful static type system of my main professional language, C#. I’ve experimented with Haskell and Rust, which both have even stronger type systems, and the experience with Haskell of "if it compiles it will run" is enough to make anyone who might have doubted the power of types a convert. I’ve never understood the love for dynamic languages. Maybe for a beginner, the learning curve of an explicit type system seems quite daunting, but for anything but the simplest application, its lack means introducing a whole class of bugs and confusion that simply don’t exist for a statically typed language. 
Sure you can write a million unit tests to ensure you get what you think you should get, but why have that overhead?</p> <p>Typescript allows you to confidently create large scale browser based applications. I found it excellent for making Guitar Dashboard. I’m not sure I am writing particularly good Typescript code though. I soon settled into basing everything around interfaces, enjoying the notion of <a href="https://www.triplet.fi/blog/type-system-differences-in-typescript-structural-type-system-vs-c-java-nominal-type-system/" target="_blank">structural rather than nominal typing</a>. I didn’t use much in the way of composition and there’s no dependency injection. Decoupling is achieved with a little home made event bus:</p> <pre> export class Bus<T> {
    private listeners: Array<(x:T)=>void> = [];
    private name: string;

    constructor(name: string) {
        this.name = name;
    }

    public subscribe(listener: (x:T)=>void): void {
        this.listeners.push(listener);
    }

    public publish(event: T): void {
        //console.log("Published event: '" + this.name + "'")
        for (let listener of this.listeners) {
            listener(event);
        }
    }
}
</pre>
<p>A simple event bus is just a device to decouple code that wants to inform that something has happened from code that wants to know when it does. It’s a simple collection of functions that get invoked every time an event is published. The core motivation is to prevent event producers and consumers from having to know about each other. There’s one instance of Bus for each event type.</p>
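<p>For illustration, here is roughly how such a bus might be wired up. The event type and bus name below are hypothetical, not taken from the Guitar Dashboard source:</p>
<pre>
// a hypothetical event, published whenever the user picks a new scale
interface ScaleChangedEvent {
    readonly tonic: string;
    readonly scaleName: string;
}

let scaleChangedBus = new Bus<ScaleChangedEvent>("scaleChanged");

// a display module subscribes...
scaleChangedBus.subscribe(e =>
    console.log("redraw the fretboard for " + e.tonic + " " + e.scaleName));

// ...and the music engine publishes, without either side knowing about the other.
scaleChangedBus.publish({ tonic: "A", scaleName: "major" });
</pre>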
<p>Each of the main graphical elements is its own namespace, which I treated like a stand alone module. Each subscribes to and raises typed events via a Bus instance. I only created classes when there was an obvious need, such as the Bus class above and the <a href="https://github.com/mikehadlow/gtr-cof/blob/master/src/cof-module.ts" target="_blank">NoteCircle</a> class, which has two instances, the chromatic-circle and the circle of fifths. I didn’t write any unit tests either, although now I think the music module algorithm is complex enough that it’s really crying out for them. Guitar Dashboard is open source, so you can see for yourself what you think of my Typescript by <a href="https://github.com/mikehadlow/gtr-cof" target="_blank">checking it out on GitHub</a>.</p>
<p>Another advantage of TypeScript is the excellent tooling available. I used <a href="https://code.visualstudio.com/" target="_blank">VS Code</a> which itself is written in TypeScript and which supports it out-of-the-box. The fact that VS Code has been widely adopted outside of the Microsoft ecosystem is a testament to its quality as a code editor. It came top in the most recent <a href="https://insights.stackoverflow.com/survey/2018/#development-environments-and-tools" target="_blank">Stack Overflow developer survey</a>. I’ve even started experimenting with using it for writing C# and it’s a pretty good experience.
<br /></p>
<h4>What I learnt about music.</h4>
<p>Music is weird. Our ears are like a serial port into our brain. With sound waves we can reach into our cerebral cortex and tweak our emotions or tickle our pleasure senses. A piece of music can take you on a journey, but one which bears no resemblance to concrete reality. Music defines human cultures and can make and break friendships; people feel that strongly about it. But fundamentally it’s just sound waves. It greatly confuses evolutionary psychologists. What possible survival advantage does it confer? Maybe it’s the human equivalent of the peacock’s tail; a form of impressive display; a marker of attendant mental agility and fitness? Who knows. What is true is that we devote huge resources to the production and consumption of music: the hundreds of thousands of performers; the huge marketing operations of the record companies; the global business of producing and selling musical instruments and the kit to record it and play it back. The biggest company in the world, Apple, got its second wind from a music playback device, and musical performers are amongst the most popular celebrities.</p>
<p>But why do our brains favour some forms of sound over others? What makes a melody, a harmony, a rhythm, more or less attractive to us? I recently read a very good book on this subject, The Music Instinct by Philip Ball. The bottom line is that we have no idea why music affects us like it does, but that’s unsurprising given that the human brain is still very much a black box to science. It does show, however, that across human cultures there are some commonalities: rhythm, the recognition of the octave, where we perceive two notes an octave apart as being the same note, and also something close to the fifth and the third. It’s also true that music is about ratios between frequencies rather than the frequencies themselves, with perhaps the exception of people with perfect pitch. The more finely grained the intervals become, the more cultures diverge, and it’s probably safe to say that the western twelve tone chromatic scale with its ‘twelfth root of two’ ratio is very much a technical innovation to aid modulation rather than something innate to the human brain. Regardless of how much is cultural or innate, the western musical tradition is very much globally dominant. Indeed, it’s hard to buy a musical instrument that isn’t locked down to the twelve note chromatic scale.</p>
<a href="https://www.amazon.co.uk/gp/product/0099535440/ref=as_li_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=0099535440&linkCode=as2&tag=coderantmikeh-21&linkId=a1f1b9dec4a0dede47a89d6b23259f68" target="_blank"><img border="0" src="//ws-eu.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=GB&ASIN=0099535440&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL250_&tag=coderantmikeh-21" /></a><img style="margin: 0px !important; border: currentcolor !important; border-image: none !important;" border="0" alt="" src="//ir-uk.amazon-adsystem.com/e/ir?t=coderantmikeh-21&l=am2&o=2&a=0099535440" width="1" height="1" />
<p>However, despite having evolved a very neat, mathematical and logical theory, western music suffers from a common problem that bedevils any school of thought that’s evolved over centuries, a complex and difficult vocabulary and a notation that obfuscates rather than reveals the structure of what it represents. Using traditional notation to understand music theory is like doing maths with Roman numerals. In writing the music engine of guitar dashboard, by far the most difficult challenges have been outputting the correct names for notes and intervals.</p>
<p>This is a shame, because the fundamentals are really simple. I will now explain western music theory in four steps: </p>
<ol>
<li>Our brains interpret frequencies an octave apart as the same ‘note’, so we only need to care about the space between n and 2n frequencies.</li>
<li>Construct a ratio such that applying the ratio to n twelve times gives 2n. Maths tells you that this must be the 12th root of 2. (first described by Simon Stevin in 1580). Each step is called a semitone.</li>
<li>Start at any of the twelve resulting notes and jump up or down in steps of 7 semitones (traditionally called a 5th) until you have a total of 7 tones/notes. Note that we only care about n to 2n, so going up two sets of 7 semitones (or two 5ths) is the same as going up 2 semitones (a tone) (2 x 7 – 12 = 2. In music all calculations are mod 12). This is a diatonic scale. If you choose the frequency 440hz, jump down one 7-semitone step and up 5, you have an A major scale. Up two 7-semitone steps and down four gives you A minor. The other five modes (Lydian, Mixolydian, Dorian, Phrygian and Locrian) are just different numbers of up and down 7-semitone steps.</li>
<li>Having constructed a scale, choose any note. Count 3 and 5 steps of the scale (the diatonic scale you just constructed, not the original 12 step chromatic scale) to give you three notes. This is a triad, a chord. Play these rhythmically in sequence while adding melody notes from the scale until you stumble across something pleasing.</li>
</ol>
<p>That, in four simple steps, is how you make western music.</p>
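<p>For the programmers reading this, here is a minimal TypeScript sketch of steps 3 and 4. It is only an illustration, not the actual Guitar Dashboard engine, and it sidesteps the note naming subtleties mentioned above by sticking to sharps:</p>
<pre>
let names: string[] = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

// step 3: jump down and up from the tonic in 7-semitone steps (5ths), mod 12,
// then order the resulting pitch classes starting from the tonic.
function diatonicScale(tonic: number, jumpsDown: number, jumpsUp: number): number[] {
    let pitches: number[] = [];
    for (let jump = -jumpsDown; jump <= jumpsUp; jump++) {
        pitches.push(((tonic + jump * 7) % 12 + 12) % 12);
    }
    return pitches.sort((a, b) => ((a - tonic + 12) % 12) - ((b - tonic + 12) % 12));
}

// step 4: a triad is built from the 1st, 3rd and 5th notes of the scale, counted from any degree.
function triad(scale: number[], degree: number): number[] {
    return [0, 2, 4].map(i => scale[(degree + i) % scale.length]);
}

let aMajor = diatonicScale(9, 1, 5);                        // A = 9; one jump down, five up
console.log(aMajor.map(p => names[p]).join(" "));           // A B C# D E F# G#
console.log(triad(aMajor, 0).map(p => names[p]).join(" ")); // A C# E, an A major chord
</pre>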
<p>OK, that’s a simplification, and the most interesting music breaks the rules, but this simple system is the core of everything else you will learn. But try to find this in any music textbook and it simply isn’t there. Instead there is arcane language and confusing notation. I really believe that music education could be far simpler with a better language, notation and tools. Guitar Dashboard is an attempt to help people visualise this simplicity. Everything but the fretboard display is common to all musical instruments. It’s only aimed at guitarists because that’s what I play and it also helps that guitar is the second most popular musical instrument. The most popular, piano, would be easy to add. Piano Dashboard anyone?</p>http://mikehadlow.blogspot.com/2018/09/what-i-learned-creating-guitar.htmlnoreply@blogger.com (Mike Hadlow)16tag:blogger.com,1999:blog-15136575.post-73128326208096150132018年9月05日 11:17:00 +00002018年09月05日T12:17:55.219+01:00The Possibilities of Web MIDI With TypeScript<p>If you’ve ever had any experience with music technology, or more specifically sequencers, keyboards or synthesisers, you will have come across <a href="https://en.wikipedia.org/wiki/MIDI" target="_blank">MIDI</a> (Musical Instrument Digital Interface). It’s used to send note and controller messages from musical devices, such as keyboards or sequencers, which are used to play and record music, to devices that produce sounds, such as samplers or synthesizers. It’s pure control information, for example, "play a c# in the 3rd octave with a velocity of 85"; there’s no actual audio involved. It dates back to the early 1980s, when a group of musical instrument manufacturers such as Roland, Sequential Circuits, Oberheim, Yamaha and Korg got together to define the standard. It soon led to a huge boom in low cost music production and the genesis of new musical styles. It’s no accident that rap and electronic dance music date from the mid to late 80’s.</p> <p><a href="https://webaudio.github.io/web-midi-api/" target="_blank">Web MIDI</a> is a new W3C specification for an API to allow browser applications to access MIDI input and output devices on the host machine. You can enumerate the devices, then choose to listen for MIDI messages, or format and send your own messages. It’s designed to allow applications to consume and emit MIDI information at the protocol level, so you receive and send the actual raw message bytes rather than the API providing the means to play MIDI files using General MIDI for example. Don’t let this put you off though; the protocol is very simple to interpret as I’ll demonstrate later.</p> <p>The potential for a large new class of browser based musical applications is huge. The obvious examples are things like browser based sequencers and drum machines emitting MIDI messages and synthesizers and samplers on the consuming side using <a href="https://www.w3.org/TR/webaudio/" target="_blank">Web Audio</a>, another interesting new standard. But it goes much wider than that; the MIDI protocol is ideally suited to any real-time parameter control. It’s already widely used for lighting rigs and special effects in theatrical productions for example. Also because it’s such an established standard, there are all kinds of cheaply available hardware controller interfaces full of knobs and buttons. If you’ve got any application that requires physical control outside the range of keyboard/mouse/trackpad, it might be a solution. 
Imagine a browser based application that allowed you to turn knobs on a cheap MIDI controller to tweak the parameters of a mathematical visualisation, or some network based industrial controller, or even as new input for browser based games. The possibilities are endless.</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuxV-Zm6OXYBdMhwUDs63u7_ZvbEnJoqZjIPPhdIWJO9d2t42jMOGELdDQBqZ5-qznsUvCnQjG0GJrcwcKGVMw9I7U0rpeOVZNUlUtsQNpK58PQ4FKXzwyXras34llQLlRJdYTrQ/s1600-h/image%255B4%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg152dhQrMrRAPbGnANnaqM5h5z-B9_pBX44qsxvG64anrnOJtjpr1I-JTkZu_3Gkif0uGVPjVAE-pR0OAU6v-GBG7nCTqirS85lMMvaS66O4xLDG6PwiJSWfjC97clDIhNTzIRjw/?imgmax=800" width="531" height="480" /></a></p> <p>I’m going to show a simple TypeScript example. I’m currently working on a TypeScript application that consumes MIDI and I couldn’t find much good example code so I’m hoping this might help. I’m using the type definitions from here: <a title="https://www.npmjs.com/package/@types/webmidi" href="https://www.npmjs.com/package/@types/webmidi">https://www.npmjs.com/package/@types/webmidi</a>.</p> <p>The entry point into the new API is a new method on navigator, requestMIDIAccess. This returns a Promise<MIDIAccess> that you can use to enumerate the input and output devices on the system. Here I’m just looking for input devices:</p> <pre>window.navigator.requestMIDIAccess()
    .then((midiAccess) => {
        console.log("MIDI Ready!");
        for(let entry of midiAccess.inputs) {
            console.log("MIDI input device: " + entry[1].id);
            entry[1].onmidimessage = onMidiMessage;
        }
    })
    .catch((error) => {
        console.log("Error accessing MIDI devices: " + error);
    });
</pre>
<p>I’ve bound my onMidiMessage function to the onmidimessage event on every input device. This is the simplest possible scenario; it would be better to provide an option for your user to choose the device they want to use. This allows us to process MIDI events as they arrive from MIDI devices.</p>
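<p>As an aside, here is roughly what binding to a single named device might look like, reusing the same iteration pattern. The function and the device name are just illustrative:</p>
<pre>
function bindToInput(midiAccess: WebMidi.MIDIAccess, deviceName: string): void {
    for(let entry of midiAccess.inputs) {
        let input = entry[1];
        // each MIDIInput exposes a name we can match on.
        if(input.name === deviceName) {
            input.onmidimessage = onMidiMessage;
            console.log("Listening to " + deviceName);
            return;
        }
    }
    console.log("No MIDI input called '" + deviceName + "' was found.");
}
</pre>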
<p>MIDI events arrive as byte arrays with a length of 1 to 3 bytes. The first byte is always the ‘status’ byte. The four most significant bits are the status type. Here we’re only concerned with note on (9) and off (8) messages. The four least significant bits tell us the MIDI channel. This allows up to 16 different devices, or voices, to be controlled by a single controller device. If you ignore the channel, as we’re doing here, it’s known as OMNI mode. For note on/off messages, the second byte is the note number and the third is the velocity, or how loud we want the note to sound. The note number describes the frequency of the note using the classical western chromatic scale; good luck if you want to make Gamelan dance music! The notes go from C0 (around 8hz) to G11 (approx 12543hz). This is much wider than a grand piano keyboard and sufficient for the vast majority of applications. See the code for how to convert the note number to name and octave. See <a href="http://www.songstuff.com/recording/article/midi_message_format/" target="_blank">this page</a> and the <a href="https://en.wikipedia.org/wiki/MIDI" target="_blank">Wikipedia page</a> for more details.</p>
<p>In this example we filter for on/off messages, then write the channel, note name, command type and velocity to the console:</p>
<pre>
let noteNames: string[] = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function onMidiMessage(midiEvent: WebMidi.MIDIMessageEvent): void {
    let data: Uint8Array = midiEvent.data;
    if(data.length === 3) {
        // status is the first byte.
        let status = data[0];
        // command is the four most significant bits of the status byte.
        let command = status >>> 4;
        // channel 0-15 is the lower four bits.
        let channel = status & 0xF;
        console.log(`Command: ${command.toString(16)}, Channel: ${channel.toString(16)}`);
        // just look at note on and note off messages.
        if(command === 0x9 || command === 0x8) {
            // note number is the second byte.
            let note = data[1];
            // velocity is the third byte.
            let velocity = data[2];
            let commandName = command === 0x9 ? "Note On " : "Note Off";
            // calculate octave and note name.
            let octave = Math.trunc(note / 12);
            let noteName = noteNames[note % 12];
            console.log(`${commandName} ${noteName}${octave} ${velocity}`);
        }
    }
}
</pre>
<p>Here’s the output. I’m using <a href="https://sourceforge.net/projects/vmpk/" target="_blank">Vmpk</a> (Virtual MIDI Piano Keyboard) to play the notes. You’ll also need a MIDI loopback device such as <a href="https://www.tobias-erichsen.de/software/loopmidi.html" target="_blank">loopMIDI</a> if you want to connect software devices, but it should be plug and play with a hardware controller:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglGlz-WvnVU65Qm53NgUFqQkbBfpXJ_iwlNwFNfK2-uoaMpFVJGvY7mhxiP0zlg-VPB1JMUh0Lz37YfGihTssK6ba2i01o9h5FMCb8KR1YjKEWI8xHniyErRzqHIbxu1vdXkFTgg/s1600-h/image%255B9%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3A92vMHR7yfhigXQGqj6MNHrcVFCLKWE2bBJaeh9hVIPeJpBH2moOumtuKAuIWncc7h1rPOPEIQcoSz4p-ukwCIQZeUGumd48UuMdWfwQ-jmj_dCNq1WX_h0lKD2w22YC8LwzrQ/?imgmax=800" width="825" height="340" /></a></p>
<p>So there we have it. MIDI is now very easy to integrate into a browser based application. I’ve demonstrated this with just a few lines of code. It opens up possibilities for a new class of software and not for just musical applications. It’s going to be very interesting to see what people do with it.</p>http://mikehadlow.blogspot.com/2018/09/the-possibilities-of-web-midi-with.htmlnoreply@blogger.com (Mike Hadlow)2tag:blogger.com,1999:blog-15136575.post-9214437606159838042018年1月18日 11:19:00 +00002018年01月18日T11:19:13.722+00:00Configure AsmSpy as an external tool in Visual Studio<p><a href="https://github.com/mikehadlow/AsmSpy">AsmSpy</a> is a tool I wrote a few years ago to view assembly version conflicts. Despite the fact that it started as a single page of code command line application, it’s been one of my more successful open source efforts. I still use it all the time, especially now with the ‘forking’ of .NET into Framework and Core and spreading use of dotnet standard, both good things IMHO, but not without the occasional assembly version head scratcher.</p> <p>Today I want to show how easy it is to integrate AsmSpy into Visual Studio as an ‘external tool’.</p> <p>First download AsmSpy from the <a href="https://github.com/mikehadlow/AsmSpy">GitHub repository</a>. If you download the <a href="https://ci.appveyor.com/project/rahulpnath/asmspy/branch/master/artifacts">zip file</a>, you’ll see that it’s merely a stand alone exe that you can run from the command line.</p> <p>In VS select External Tools from the ‘Tools’ menu.</p> <p><img title="external-tools" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="external-tools" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrd-3EW9wd9iv_sJt4Xf6AuZvJYJ0xO0awOI_GGxzRJYuH0Paz7l_5p3FBVHabWYzd8JvFui5SKhwOYVV-BFtPyl6LkY29b1ofbbSi34mkeRQBNfLI8LJylvpp54afqHsTdBX2/?imgmax=800" width="359" height="437" /></p> <p>Now configure AsmSpy as follows: <br />Title: AsmSpy <br />Command: The path to where you’ve put the AsmSpy.exe file. <br />Arguments: $(BinDir)  - this points AsmSpy at the output directory of the currently selected project. <br />Initial Directory: $(ProjectDir) <br />Use Output Window: checked. – this ensures that the output from AsmSpy will go to Visual Studio’s output window. <br /></p> <p><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjw6KmeaVQ0sBleHesyAiCIfI_36qpldlppgz6GgpvbLeze2CG1yLL2SQsMsqbuNyZVc5i2umhkf86yn4av4vOLZRB4DFATZyZWx94UytI7GycudvFtsYSsIrO0C2qRHnvVSyz/?imgmax=800" width="458" height="461" /></p> <p>Now you can select a project in Solution Explorer and go to Tools –> AsmSpy. AsmSpy will run against the build output of your project and you can view Assembly version conflicts in the Visual Studio output window.</p>http://mikehadlow.blogspot.com/2018/01/configure-asmspy-as-external-tool-in.htmlnoreply@blogger.com (Mike Hadlow)1tag:blogger.com,1999:blog-15136575.post-2929551800315316312016年1月14日 11:43:00 +00002016年01月14日T11:43:46.932+00:00Running The KestrelHttpServer On Linux With CoreCLR<p>Being both a long-time .NET developer and Linux hobbyist, I was very excited about the recent ‘go live’ announcement for CoreCLR on Linux (and Windows and Mac). I thought I’d have a play with a little web server experiment on an Amazon EC2 instance. 
To start with I tried to get the <a href="https://github.com/aspnet/KestrelHttpServer">KestrelHttpServer</a> sample application working, <a href="https://github.com/aspnet/KestrelHttpServer/issues/574#issuecomment-171402256">which wasn’t as easy as I’d hoped</a>, so this post is a note of the steps you currently need.</p> <p>So first create a new Ubuntu Server 14.04 AMI:</p> <p><img title="image[5]" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image[5]" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyeyEs8ORqGbjYJJ0ehViXahKzYygtLe-zPtxm4OWr9DfZCoQ4Q76FM5i0FUk9wEnn6fV4yfx0_1VhGhlAbaWZogw7Y_degZPDQCZ7Lc9Q3FvIl7Iag5Sk9Vjz9IVX7Q9BcMcS7Q/?imgmax=800" width="1263" height="114" /></p> <p>With a t2.micro instance type:</p> <p><img title="image[11]" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image[11]" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-7jmIlf4YscdT6x3mYYi3M7buDSkAtEEYdJdEwcfNJ92MUJ1_q1nhmTZkPoKzYOoIj3JWUcXhtQRuADg2xk8WKgBTLqwd3YffqnSr2kTV-ybJfqlpyJRXeZYD5dFh4BUcOUZ7MQ/?imgmax=800" width="1047" height="70" /></p> <p>Next log in and update:</p> <pre>ssh -i .ssh/mykey.pem ubuntu@-the-ip-address
...
sudo apt-get update
sudo apt-get upgrade</pre>
<p>Currently there are two different sets of instructions for installing CoreCLR on Linux. <a href="https://dotnet.github.io/getting-started/">The first one I found</a> (I think linked from Scott Hanselman’s blog) shows how to use the standard Debian package manager to install the new ‘dotnet’ command line tool. Apparently Kestrel will not currently work with this. <a href="https://docs.asp.net/en/latest/getting-started/installing-on-linux.html">The second set of instructions</a> uses the existing ‘dnvm’, ‘dnu’ and ‘dnx’ tools. These do work, but you need to get the latest unstable RC2 version of CoreCLR, like this:</p>
<p>First install the Dot Net Version Manager tool:</p>
<pre>sudo apt-get install unzip curl
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
source /home/ubuntu/.dnx/dnvm/dnvm.sh</pre>
<p>Next install the latest unstable (thus the '-u' flag) CoreCLR:</p>
<pre>sudo apt-get install libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
dnvm upgrade -u -r coreclr</pre>
<p>At the time of writing this installed 1.0.0-rc2-16357</p>
<p><img title="image[17]" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image[17]" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhImhlgPlnpDWtJLOF8ETp6Ju8phNanudOX_nWNv6i_WQNYtlVfkCIQxWw0diHfvBZLVkmLrsvwUlV6Syh9rV31Byy8ayMwZ7-ANPkUG4xDsYI4UdTHEKSwueAhkG1lwSrvvvvcNA/?imgmax=800" width="719" height="183" /></p>
<p>You also need to follow the instructions to build the libuv library from source. This sounds hairy, but it worked fine for me:</p>
<pre>sudo apt-get install make automake libtool curl
curl -sSL https://github.com/libuv/libuv/archive/v1.8.0.tar.gz | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.8.0
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.8.0 && cd ~/
sudo ldconfig</pre>
<p>Next get the KestrelHttpServer source code:</p>
<pre>git clone https://github.com/aspnet/KestrelHttpServer.git</pre>
<p>Restore the Kestrel packages by running dnu restore in the root of the repository:</p>
<pre>cd KestrelHttpServer
dnu restore</pre>
<p>Next navigate to the sample app and restore the packages there too:</p>
<pre>cd samples/SampleApp/
dnu restore</pre>
<p>Now you should be able to run the sample app by typing:</p>
<pre>dnx web</pre>
<p><img title="image[23]" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="image[23]" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuER7WOYD-ziu6Qr76CMeApNc6lSRdLHvexHptC6wAiOE13zXHZv3eUsSef6v49LvR21i-msPBBMWttNHtL5VrL-Z0wa995hK0rt1i-77Bx8lXYhQLD22f_2ZVDEIZUz9KRyabWQ/?imgmax=800" width="966" height="862" /></p>
<p>Voilà!</p>
<p>There’s obviously some way to go before this is a straightforward out-of-the-box experience. The team should also try and unify their getting started instructions because there are various different conflicting pages floating around. The Kestrel team were very helpful though in getting this working. Now to do something with my new found Linux web server.</p>http://mikehadlow.blogspot.com/2016/01/running-kestrelhttpserver-on-linux-with.htmlnoreply@blogger.com (Mike Hadlow)3tag:blogger.com,1999:blog-15136575.post-174446292159421052015年12月04日 13:28:00 +00002015年12月04日T13:28:38.971+00:00Learn To Code, It’s Harder Than You Think<p><em>TL;DR: All the evidence shows that programming requires a high level of aptitude that only a small percentage of the population possess. The current fad for short learn-to-code courses is selling people a lie and will do nothing to help the skills shortage for professional programmers.</em></p> <p><em>This post is written from a UK perspective. I recognise that things may be very different elsewhere, especially concerning the social standing of software developers.</em></p> <p>It’s a common theme in the media that there is a shortage of skilled programmers (‘programmers’, ‘coders’, ‘software developers’, all these terms mean the same thing and I shall use them interchangeably). There is much hand-wringing over this coding skills gap. The narrative is that we are failing to produce candidates for the "high quality jobs of tomorrow". For example, <a href="http://www.telegraph.co.uk/education/educationnews/10985961/Britain-faces-growing-shortage-of-digital-skills.html">this from The Telegraph</a>:</p> <blockquote> <p>"Estimates from the Science Council suggest that the ICT workforce will grow by 39 per cent by 2030, and a 2013 report from O2 stated that around 745,000 additional workers with digital skills would be needed to meet demand between now and 2017.</p> <p>Furthermore, research by City & Guilds conducted last year revealed that three quarters of employers in the IT, Digital and Information Services Sector said that their industry was facing a skills gap, while 47 per cent of employers surveyed said that the education system wasn’t meeting the needs of business."</p> </blockquote> <p>Most commentators see the problem as being a lack of suitable training. Not enough programmers are being produced from our educational institutions. For example, here is Yvette Cooper, a senior Labour party politician, <a href="http://www.theguardian.com/politics/2015/may/23/yvette-cooper-labour-leadership-general-election">in The Guardian</a>:</p> <blockquote> <p>"The sons and daughters of miners should all be learning coding. We have such huge advantages because of the world wide web being invented as a result of British ingenuity. We also have the English language but what are we doing as a country to make sure we are at the heart of the next technology revolution? Why are we not doing more to have coding colleges and technical, vocational education alongside university education?"</p> </blockquote> <p>There is also a common belief in the media that there are high barriers to entry to learning to code. 
<a href="http://www.theguardian.com/technology/2015/jul/26/founders-coders-coding-free-training-london">This from the Guardian is typical</a>:</p> <blockquote> <p>"It’s the must-have skill-set of the 21st century, yet unless you’re rich enough to afford the training, or fortunate enough to be attending the right school, the barriers to learning can be high."</p> </blockquote> <p>So the consensus seems to be that high barriers to entry and a lack of accessible training mean that only a rich and well educated elite have access to these highly paid jobs. The implication is that there is a large population of people for whom programming would be a suitable career if only they could access the education and training that is currently closed to them.</p> <p>In response, there are now a number of initiatives to encourage people to take up programming. The UK government created ‘Year of Code’ in 2014:</p> <p><img title="year-of-code" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="year-of-code" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhD3SAnXGxevHa-WN0fx0-iwLiiJyfe2KYYJBQ6fCL_bodo5hgE63eVeFxzwq_B3kGHL_dcirLV63WzZc6YdVGJf3eyzfDDIPIiJ9Lls-zBM2QL20N1q2i3goLHZmpasHwMyrofQ/?imgmax=800" width="789" height="440" /></p> <p>The message is "start coding this year, it’s easier than you think." Indeed the executive director of Year of Code, Lottie Dexter, said in a <a href="https://www.youtube.com/watch?v=e3q3KxX82tY">Newsnight interview</a> that people can "pick it up in a day". Code.org, a "non-profit dedicated to expanding participation in computer science education", says on its website, "Code.org aims to help demystify that coding is difficult".</p> <p><img title="hour-of-code" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="hour-of-code" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL_0CsKxZUd9J6-WuQ5luU8jqpR2RNeJPe9UQGXAJg1P27PMVTbmhZ7elgxpnAxwU9ZX987VDvCeNkV3-TfllPMBXafZ_WU0Q9V6-nr5zqM_hd34XSUGe_YYvDAHaa-1EmfVjpew/?imgmax=800" width="336" height="259" /></p> <p>So is it really that easy to learn how to code and get these high paying jobs? Is it really true that anyone can learn to code? Is it possible to take people off the streets, give them a quick course, and produce professional programmers?</p> <p>What about more traditional formal education? Can we learn anything about training programmers from universities? </p> <p>Given the skills shortage one would expect graduates from computer science courses to have very high employment rates. However, it seems that is not the case. The Higher Education Statistics Agency <a href="http://www.software.ac.uk/blog/2013-10-31-whats-wrong-computer-scientists">found</a> that computer science graduates have "the unwelcome honour of the lowest employment rate of all graduates." Why is this? Anecdotally there seems to be a mismatch between the skills the students graduate with and those that employers expect them to have. Or more bluntly, after three years of computer science education they can’t code. 
A comment on <a href="http://www.software.ac.uk/blog/2013-10-31-whats-wrong-computer-scientists">this article</a> by an anonymous university lecturer has some interesting insights:</p> <blockquote> <p>"Every year it's the same - no more than a third of them [CS students] are showing the sort of ability I would want in anyone doing a coding job. One-third of them are so poor at programming that one would be surprised to hear they had spent more than a couple of weeks supposedly learning about it, never mind half-way through a degree in it. If you really test them on decent programming skills, you get a huge failure rate. In this country it's thought bad to fail students, so mostly we find ways of getting them through even though they don't really have the skills."</p> </blockquote> <p><a href="http://blog.codinghorror.com/separating-programming-sheep-from-non-programming-goats/">Other research points to similar results</a>. There seems to be a ‘double hump’ in the outcome of any programming course between those who can code and those who can’t.</p> <blockquote> <p>"In particular, most people can't learn to program: between 30% and 60% of every university computer science department's intake fail the first programming course."</p> </blockquote> <p>Remember we are talking about degree level computing courses. These are students who have been accepted by universities to study computer science. They must be self selecting to a certain extent. If the failure rate for programming courses is so high amongst undergraduates it would surely be even higher amongst the general population - the kinds of candidates that the short ‘learn to code’ courses are attempting to attract.</p> <p>Let’s look at the problem from the other end of the pipeline. Let’s take successful professional software developers and ask them how they learnt to code. One would expect from the headlines above that they had all been to expensive, exclusive coding schools. But here again that seems not to be the case. Here are the results of the <a href="http://stackoverflow.com/research/developer-survey-2015">2015 Stack Overflow developers survey</a>. Note that this was a global survey, but I think the results are relevant to the UK too:</p> <p><img title="SO-dev-survey" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="SO-dev-survey" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD90aXlXowxO9OIhl6VUvoNeILMmwZ-hhWrO6TI001uB2SZeQiRG53JgVKGtpkMX7OZDiHt2axTZherhnuE-iVjMuAv6IdKE87xtGniXfzZ6OT4BPD5eEm5VoaY0AcCvvi0bvEpQ/?imgmax=800" width="494" height="519" /></p> <p>Only a third have a computer science or related degree and nearly 42%, the largest group, are self taught. I have done my own small and highly unscientific research on this matter. I run a monthly meet-up for .NET developers here in Brighton, and a quick run around the table produced an even more pronounced majority for the self-taught. 
For fun, I also did a quick Twitter poll:</p> <p><img title="mh-self-taught-twitter-pol" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="mh-self-taught-twitter-pol" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdS2xhfAJ-CCQRrEHo1SmSKCpsrAcUY0TOZfY8kAyEtM70-w3jQcw2evDvW_l06DtJZoe6y742rzbhAK_qYC3warTXXCHkNO2-bNpJxofFsVNDsrcXdAzHgHnhtN8SCbpEYEZIkA/?imgmax=800" width="583" height="270" /></p> <p>76% say they are self taught. Also interesting were the comments around the poll. This was typical:</p> <p><img title="self-taught-tweet" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="self-taught-tweet" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzJ9sy1tE9HC234OVyAUMOQwsjug5FjdcOhn8FnTudY-Rm5tUkZfrcKWsnjDMsOUYNS-EBEETe8xzd04al14fsv8EcwNldf5iAFovi4sGYJa0rfv5LhxZgC8Tfm98vJlRqFeh0rA/?imgmax=800" width="601" height="100" /></p> <p>Even programmers with CS degrees insist that they are largely self taught. Others complained that it was a hard question to answer since the rate of change in the industry means that you never stop learning. So even if you did at some point have formal training, you can’t rely on that for a successful career. Any formal course will be just a small element of the continual learning that defines the career of a programmer.</p> <p>We are left with a very strange and unexpected situation. Formal education for programmers seems not to work very well and yet the majority of those who are successful programmers are mostly self taught. On the one hand we seem to have people who don’t need any guided education to give them a successful career; they are perfectly capable of learning their trade from the vast sea of online resources available to anyone who wants to use it. On the other hand we have people who seem unable to learn to code even with years of formal training. </p> <p>This rather puts the lie to the barriers to entry argument. If the majority of current professional software developers are self taught, how can there be barriers to entry? Anyone with access to the internet can learn to code if they have the aptitude for it.</p> <p>The evidence points to a very obvious conclusion: there are two populations: one that finds programming a relatively painless and indeed enjoyable thing to learn and another that can’t learn no matter how good the teaching. The elephant in the room, the thing that Yvette Cooper, the ‘year of code’ or ‘hour of code’ people seem unwilling to admit is that programming is a very high aptitude task. It is not one that ‘anyone can learn’, and it is not easy, or rather it is easy, but only if you have the aptitude for it. The harsh fact is that most people will find it impossible to get to any significant standard.</p> <p>If we accept that programming requires a high level of aptitude, it’s fun to compare some of the hype around the ‘learn to code’ movement with more established high-aptitude professions. Just replace ‘coder’ or ‘coding’ with ‘doctor’,  ‘engineer’,  ‘architect’ or ‘mathematician’.</p> <ul> <li>"You can pick up Maths in a day." </li> <li>Start surgery this year, it’s easier than you think! 
</li> <li>skyscraper.org aims to help demystify that architecture is difficult. </li> <li>"The sons and daughters of miners should all be learning to be lawyers." </li> </ul> <p>My friend Andrew Cherry put it very well:</p> <p><img title="free-training-ac" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="free-training-ac" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtJeQQuAGTrYe0nmcwgDdjkMUG7dEEDDGFvrVyQCsJmMNRvSF_vgZnsY93mu6XuKLybBUZ8ye3V4uoZ5BV-32Z6g60vrhDjHtygtawPqpecQ9w8HVb3kB3fcOYWYZcLW9y8DEdug/?imgmax=800" width="544" height="95" /></p> <p>Answer:  only one: software development. You want to be a doctor? Go to medical school for seven years.</p> <p>Accepting that aptitude is important for a successful career in programming, we can approach the ‘shortage’ problem from a different angle. We can ask how we can persuade talented people to choose programming rather than other high-aptitude professions. The problem is that these individuals have a great deal of choice in their career path and, as I’m going to explain, programming has a number of negative social and career attributes which make them unlikely to choose it.</p> <p>There’s no doubt that software development is a very attractive career. It’s well paid, mobile, and the work itself is challenging and rewarding. But it has an image problem. I first encountered this at university in the 1990’s. I did a social science degree (yes I’m one of those self taught programmers). Socially, us arts students looked down on people studying computer science, they were the least cool students on the campus - mostly guys, with poor dress sense. If anyone considered them at all it was with a sense of pity and loathing. When towards the end of my degree, I told my then girlfriend, another social science student, that I might choose a career in programming, she exclaimed, "oh no, what a waste. Why would you want to do that?" If you did a pop-quiz at any middle-class gathering in the UK and asked people to compare, say, medicine, law, architecture or even something like accountancy, with software development, I can guarantee that they would rate it as having a lower social status. Even within business, or at least more traditional businesses, software development is seen as a relatively menial middle-brow occupation suitable for juniors and those ill-qualified for middle management. Perversely, all these courses saying ‘learn to code, it’s easy’ just reinforce the perception that software development is not a serious career.</p> <p>There’s another problem with software development that’s the flip side of the low barriers to entry mentioned above, and that is there is no well established entry route into the profession. 
Try Googling for ‘how to become a doctor’, or ‘how to become a lawyer’ for example:</p> <p><img title="how-to-become-a-doctor" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="how-to-become-a-doctor" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB86jNsn6U78Y7RRA4MLvwmG1N2JdhY65RL-Xw4US-tdimYILz5z1X551fByH3GjxqTZdOcC5ym01wKrDWBVKl1WEEX07WW80r2Ru4WwXe3u2dN_xiulwKWAPAhG3BOVZn1e2fUw/?imgmax=800" width="654" height="365" /></p> <p>There is a well-established series of steps to a recognised professional qualification. If you complete the steps, you become a recognised member of one of these professions. I’m not saying it’s easy to qualify as a doctor, but there’s little doubt about how to go about it. Now Google for ‘how to become a software developer’: the results, <a href="http://www.theguardian.com/careers/careers-blog/how-to-become-a-software-developer">like this one for example</a>, are full of vague platitudes like ‘learn a programming language’, ‘contribute to an open source project’, ‘go to a local programming group’. No clear career path, no guarantees about when and if you will be considered a professional and get access to those high-paying jobs of the future.</p> <p><img title="Yes, I made this up, but it makes the point. :)" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="Yes, I made this up, but it makes the point. :)" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghGfyi0fOOKgkSVVmDc0k708boPXW8tB735uSj0gTYMpLLm4TuY5fDh6pz1-TVvXkO4uiiDVPScEtTpmncAna66I86GPfVemAgsIEXToz399X7DSb3W3n3eHlrSQIHl8tF1g8lWQ/?imgmax=800" width="665" height="256" /></p> <p>Now take a high-aptitude individual who has done well at school and finds demanding intellectual tasks relatively straightforward, and offer them a choice. On the one hand, here is a career, let’s take medicine for example: you follow these clearly enumerated steps, which are demanding but you are good at passing exams, and at the end you will have a high-status, high-paying job. Or, how about this career: go away, learn some stuff by yourself, we’re not sure exactly what; try and get a junior, low-status job, and just learn more stuff – which you can work out somehow – and work your way up. No guarantees that there’s a well-paying job at the end of it. Oh, and, by the way, the whole world will think you are a bit of a social pariah while you are about it. Which would you choose?</p> <p>So could software development follow the example of older professions and establish a professional qualification with high barriers to entry? There are attempts to do this. The British Computer Society (BCS) calls itself ‘the chartered institute for IT’ and seeks to establish professional qualifications and standards. The problem is that it’s comprehensively ignored by the software industry. Even if you could get the industry to take a professional body seriously, how would you test people to see if they qualified? What would be on the exam? There are very few established practices in programming and as soon as one seems to gain some traction it gets undermined by the incredibly rapid pace of change. Take Object Oriented programming for example. 
In the 2000s, it seemed to be establishing itself as the default technique for enterprise programming, but now many people, including myself, see it as a twenty-year diversion and largely a mistake. How quickly would programming standards and qualifications keep up to date with current practice? Not quickly enough, I suspect.</p> <p>However, my main point in this post has been to establish that programming is a high-aptitude task, one that only some people are capable of doing with any degree of success. If the main point of a professional qualification is to filter out people who can’t code, does it really matter if what is being tested for is out of date, or irrelevant to current industry practices? Maybe our tentative qualification would involve the completion of a reasonably serious program in LISP? A kind of <a href="https://en.wikipedia.org/wiki/The_Glass_Bead_Game">Glass Bead Game</a> for programmers? The point would be to find out if they can code. They can learn what the current fads are later. The problem still remains how to get industry to recognise the qualification.</p> <p>In the meantime we should stop selling people a lie. Programming is not easy; it is hard. You can’t learn to code, certainly not to a standard that will get you a well-paid-job-of-the-future, in just a few weeks. The majority of the population cannot learn to code at all, no matter how much training they receive. I doubt very much if the plethora of quick learn-to-code courses will have any impact at all on the skills shortage, or on the problems of low pay and unemployment among the unskilled. Let’s stop pretending that there are artificial barriers to entry and accept that the main barrier to anyone taking it up is their natural aptitude for it. Instead let’s work on improving the social status of the software industry – I think this is in any case happening slowly – and also work on encouraging talented young people to consider it as a viable alternative to some of the other top professions.</p> http://mikehadlow.blogspot.com/2015/12/learn-to-code-its-harder-than-you-think.htmlnoreply@blogger.com (Mike Hadlow)126tag:blogger.com,1999:blog-15136575.post-34244941125371016622015年9月10日 11:02:00 +00002015年09月11日T09:43:03.202+01:00Partial Application in C#<p>My recent post, <a href="http://mikehadlow.blogspot.dk/2015/08/c-program-entirely-with-static-methods.html">C# Program Entirely With Static Methods</a>, got lots of great comments. Indeed, as is often the case, the comments are in many ways a better read than the original post. However, there were several commenters who claimed that C# does not have partial application. I take issue with this. Any language that supports higher-order functions, that is, functions that can take functions as arguments and can return functions, by definition supports partial application. C# supports higher-order functions, so it also supports partial application.</p> <p>Let me explain.</p> <p>Let’s start by looking at partial application in F#. Here’s a simple function that adds two numbers (you can type this into F# interactive):</p> <pre>>let add a b = a + b;;</pre>
<p>Now we can use our ‘add’ function to add two numbers, just as we’d expect:</p>
<pre>> add 3 4;;
val it : int = 7</pre>
<p>But because F# supports partial application we can also do this:</p>
<pre>> let add3 = add 3;;
> add3 4;;
val it : int = 7</pre>
<p>We call add with a single argument and it returns a function that takes a single argument which we can then use to add three to any number.</p>
<p>That’s partial application. Of course, if I try this in C# it doesn’t work:</p>
<p><img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-BXOJbWJQzulU0nLfCO21a_8o76kDVZAWuI9F6xpbtvDvRCKpYT3xuRFeItWk5U61z_k3xjUNeFE3mftDgu6I25eFUFdcsPuocDT9dwpgnsiUrXJn5zycjz4onnWBWl-ZtYBFAQ/?imgmax=800" width="320" height="44" /></p>
<p>A red squiggly line saying "delegate Func has two parameters but is invoked with one argument".</p>
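<p>For anyone who can’t make out the screenshot, here is a minimal sketch of the kind of code it shows; the names are illustrative and the failing call is commented out so that the snippet still compiles:</p>
<pre>using System;

class Program
{
    static void Main()
    {
        // An 'ordinary' two-argument C# function expressed as a delegate.
        Func<int, int, int> add = (a, b) => a + b;

        Console.WriteLine(add(3, 4)); // 7

        // var add3 = add(3); // compile error: the delegate expects two arguments
    }
}
</pre>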
<p>Case proven you say: C# does not support partial application!</p>
<p>But wait!</p>
<p>Let’s look again at the F# add function. This time I’ll include the response from F# interactive:</p>
<pre>> let add a b = a + b;;
val add : a:int -> b:int -> int</pre>
<p>This shows us the type of the add function. The important bit is: "a:int -> b:int -> int". This tells us that ‘add’ is a function that takes an int and returns a function that takes an int and returns an int. It is <em>not</em> a function with two arguments. F# is a restrictive language: it only has functions with single arguments. That is a good thing. See Mark Seemann’s post <a href="http://blog.ploeh.dk/2015/04/13/less-is-more-language-features/">Less is More: Language Features</a> for an in-depth discussion of how taking features away from a language can make it better. When people say "F# supports partial application" what they really mean is that "F# functions can only have one argument." The F# compiler understands the syntax ‘let add a b = ...’ to mean "I want a function that takes a single argument and returns a function that takes another single argument."</p>
<p>There’s nothing to stop us from defining our C# function with the same signature as our F# example. Then we can partially apply it in the same way:</p>
<p><img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNiUEfOcHBW5nQ4SU_6l5ztcP2FrJgPkelG3KfPE1Efumw2a6DlWfoDa-W9w3pHN-3kOMDI-GDakPPTirZMCCwn3ZKd_eQz6GLm9Tr8tvqqqakb2vWZAFpLghdxedkScZ-jn2vkQ/?imgmax=800" width="386" height="81" /></p>
<p>There you are: partial application in C#. No problem at all.</p>
<p>"But!" You cry, "That’s weird and unusual C#. I don’t want to define all my functions in such a strange way." In that case, let me introduce you to my friend <a href="https://en.wikipedia.org/wiki/Currying">Curry</a>. It’s not a spicy dish of South Asian origin but the process of turning a function with multiple arguments into a series of higher order functions. We can define a series of overloaded Curry extension methods:</p>
<p><img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6GRpECTFmntr_obsD2zoGPw-4BlrA6mTVrBF8Xcp2XQcud6Hx0hkG-y3OToSFEwhi-VJjlWAkS5EIINlBdjB4VpvE2wiVwBTCyTb1Znepp8T5yZdv3lv61PEE0EqVRFNoKya_tQ/?imgmax=800" width="696" height="185" /></p>
<p>We can then use them to turn ‘ordinary’ C# functions with multiple arguments into higher-order functions which we can partially apply:</p>
<p><img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzoC6GsVtzV50jn0fV-AzDxcDG-Yj3LgQ1Jet7RQ5DpjEc1rddHpoDKYv3w031Ybw_YQN_8-SOYbQpIsbEu-trvXiD4W2s36yDaUCvCbpDDezmgaLhZ_2Yf6G2A16OO2ugxQdAkQ/?imgmax=800" width="330" height="78" /></p>
<p>Thinking more about Mark Seemann’s blog post, it would be an interesting exercise to start to take features away from C# whilst keeping syntactic changes to a minimum. If we took away multiple function arguments, classes, interfaces, nullable types, default mutability etc, would we end up with a subset language that would be perfect for functional programming, but still familiar to C# developers? You would of course lose backward compatibility with existing C# code, so the incentive to do it isn’t that great, but it’s a fascinating thought experiment.</p> http://mikehadlow.blogspot.com/2015/09/partial-application-in-c.htmlnoreply@blogger.com (Mike Hadlow)5tag:blogger.com,1999:blog-15136575.post-33498870790836677522015年8月07日 14:13:00 +00002015年08月07日T15:40:13.963+01:00C#: Program Entirely With Static Methods<p><em>OK, that’s a provocative title to get your attention. This post is really about how one can move to a more functional programming style and remove the need for much of the apparatus of object-oriented programming, including interfaces and classes. In this post, I’m going to take some typical object-oriented C# code and refactor it in a more functional style. I’ll show that the result is more concise and easier to test.</em></p> <p>Over the past couple of years I’ve noticed that my C# coding style has changed drastically under the influence of functional programming. Gone are interfaces and instance classes to be replaced by static methods, higher-order functions and closures. It’s somewhat ironic since I spent many years as a cheerleader for object-oriented programming and I considered static methods a code smell.</p> <p>I guess if I look at my programming career, it has the following progression:</p> <p>Procedural –> Object-Oriented –> Functional</p> <p>The OO phase now looks like something of a detour.</p> <p>C# has all the essential features you need for functional programming – higher-order functions, closures, lambda expressions – that allow you to entirely ditch the OO programming model. This results in more concise, readable and maintainable code. It also has a huge impact on unit testing, allowing one to do away with complex mocking frameworks, and write far simpler tests.</p> <p><strong>Introducing our object oriented example</strong></p> <p>Let’s look at an example. First I’ll introduce a highly simplified OO example, a simple service that grabs some customer records from a data-store, creates some reports and then emails them. Then I’ll show the same code refactored in a more functional style using delegates and higher-order static methods.</p> <p>Let’s look at the object-oriented example first:</p> <p>Well written object-oriented code is compositional. Concrete classes depend on abstractions (interfaces). These interfaces are consumed as dependencies by classes that rely on them and are usually injected as constructor arguments. This is called Dependency Injection. It’s good practice to compose object instances in a single place in the application - the composition root - usually when the application starts up, or on a significant event, such as an HTTP request. The composition can be hand coded or handed off to an IoC container. The constructed graph is then executed by invoking a method on the root object. This often occurs via an application framework (such as MVC or WebApi) rather than being explicitly invoked by user code.</p> <p>We are going to get some customer records, create some reports and then email them to our customers. 
So first we need three interfaces: a data access abstraction, a report building abstraction, and an emailing abstraction:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=Interfaces.csx"></script> <p>And here are the implementations. This is not a real program of course, I’ve just coded some dummy customers and the emailer simply writes to the console.</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=Implementations.csx"></script> <p>Now we have our service class that depends on the three abstractions and orchestrates the reporting process:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=ReportingService.csx"></script> <p>As you can see, we inject the dependencies as constructor arguments, store them in class properties, then invoke methods on them in the code in the RunCustomerReportBatch method. Some people like to store the dependencies in class fields instead. That’s a matter of choice.</p> <p>Our composition root composes the ReportingService with its dependencies and then returns it for the program to invoke. Don’t forget this is a highly simplified example. Composition is usually far more complex:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=ObjectOrientedComposition.csx"></script> <p>To write a unit test for the reporting service we would typically use either hand-crafted mocks, or some kind of mocking framework. Here’s an example unit test using XUnit and Moq:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=ObjectOrientedTest.csx"></script> <p>We first create mocks for ReportingService’s dependencies with the relevant methods stubbed, which we inject as constructor arguments. We then invoke ReportingService and verify that the emailer was invoked as expected.</p> <p>So that’s our object-oriented example. It’s typical of much well constructed C# code that you will find in the wild. It’s the way I’ve been building software for many years now with much success.</p> <p>However, this object-oriented code is verbose. About a third of it is simply OO stuff that we have to write repeatedly and mechanically rather than code that is actually solving our problem. This boilerplate includes: the class’ properties (or fields) to hold the dependencies; the assigning of constructor arguments to those properties; writing the class and constructor. We also need complex mocking frameworks simply to test this code. Surely that’s a smell that’s telling us something is wrong?</p> <p><strong>Enlightenment</strong></p> <p>Enlightenment begins when you realise that the dependencies and method arguments can actually just be seen as arguments that are applied at different times in the application’s lifecycle. Consider a class with a single method and a single dependency:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=ClassWithDependency.csx"></script> <p>We could equally represent this as a static method with two arguments:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=DoThing.csx"></script> <p>But how do we partially apply these arguments? How do we give ‘DoThing’ the IDependency argument at composition time and the ‘string arg’ at the point where it is required by the application logic? Simple: We use a closure. Anything taking a dependency on ‘DoThing’ will ask for an Action<string>, because that is the signature of the ‘Do’ method in our ‘Thing’ class. 
So in our composition root, we ‘close over’ our previously created IDependency instance in a lambda expression with the signature, Action<string>, that invokes our DoThing static method. Like this:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=ThingComposition.csx"></script> <p>So the interface is replaced with the built-in Action<T> delegate, and the closure is effectively doing the job of our ‘Thing’ class, the interface’s implementation, but with far fewer lines of code.</p> <p><strong>Refactoring to functional</strong></p> <p>OK. Let’s go back to our example and change it to use this new insight. We don’t need the interface definitions. They are replaced by built in delegate types:</p> <p>ICustomerData becomes Func<IEnumerable<Customer>></p> <p>IEmailer becomes Action<string, string></p> <p>IReportBuilder becomes Func<Customer, Report></p> <p>The classes are replaced with static methods:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=StaticMethodsReplaceClasses.csx"></script> <p>Our ReportingService is also replaced with a single static method that takes its dependencies as delegate arguments:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=RunCustomerReportsBatch.csx"></script> <p>Composition looks like this:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=FunctionalComposition.csx"></script> <p>This is functionally equivalent to the object-oriented code above, but it has 57 lines of code as opposed to 95; exactly 60% of the original code.</p> <p>There’s also a marked simplification of the unit test:</p> <script src="https://gist.github.com/mikehadlow/474172cbf6452bce91e3.js?file=FunctionalUnitTest.csx"></script> <p>The requirement for a complex mocking framework vanishes. Instead we merely have to set up simple lambda expressions for our stubs. Expectations can be validated with closed over local variables. It’s much easier to read and maintain.</p> <p>Moving to a functional style of programming is certainly a huge departure from most C# code that you find in the wild and can initially look a little odd to the uninitiated. But it has many benefits, making your code more concise and easier to test and reason about. C# is, surprisingly, a perfectly adequate functional programming language, so don’t despair if for practical reasons you can’t use F#.</p> <p>The complete code example for this post is on GitHub here: <a title="https://github.com/mikehadlow/FunctionalDemo" href="https://github.com/mikehadlow/FunctionalDemo">https://github.com/mikehadlow/FunctionalDemo</a></p> http://mikehadlow.blogspot.com/2015/08/c-program-entirely-with-static-methods.htmlnoreply@blogger.com (Mike Hadlow)47tag:blogger.com,1999:blog-15136575.post-78654383627416611772015年6月05日 10:54:00 +00002015年06月05日T11:54:26.743+01:00C#: How to Record What Gets Written to or Read From a Stream<p>Streams are a very nice abstraction over a read/write loop. We can use them to represent the contents of a file, or a stream of bytes to or from a network socket. They make it easy to read and write large amounts of data without consuming large amounts of memory. Take this little code snippet:</p> <script src="https://gist.github.com/mikehadlow/45448d151784b5f75a79.js?file=FileCopy.cs"></script> <p>Example.txt may be many GB in size, but this operation will only ever use the amount of memory configured for the buffer. 
As an aside, the .NET framework’s Stream class’s default buffer size is the maximum multiple of 4096 that is still smaller than the large object heap threshold (85K). This means it is likely to be collected at gen zero by the garbage collector, but still gives good performance.</p> <p>But what if we want to log or view the contents of Example.txt as it’s copied to the output file? Let me introduce my new invention: <strong>InterceptionStream</strong>. This is a simple class that inherits and decorates Stream and takes an additional output stream. Each time the wrapped stream is read from, or written to, the additional output stream gets the same information written to it. You can use it like this:</p> <script src="https://gist.github.com/mikehadlow/45448d151784b5f75a79.js?file=FileCopyWithLog.cs"></script> <p>I could just as well have wrapped the input stream with the InterceptionStream for the same result:</p> <script src="https://gist.github.com/mikehadlow/45448d151784b5f75a79.js?file=FileCopyWithLogOnInput.cs"></script> <p>You can use a MemoryStream if you want to capture the log in memory and assign it to a string variable, but of course this negates the memory advantages of the stream copy since we’re now buffering the entire contents of the stream in memory:</p> <script src="https://gist.github.com/mikehadlow/45448d151784b5f75a79.js?file=FileCopyWithLogToString.cs"></script> <p>Here is the InterceptionStream implementation. As you can see it’s very simple. All the work happens in the Read and Write methods:</p> <script src="https://gist.github.com/mikehadlow/45448d151784b5f75a79.js?file=InterceptionStream.cs"></script> http://mikehadlow.blogspot.com/2015/06/c-how-to-record-what-gets-written-to-or.htmlnoreply@blogger.com (Mike Hadlow)5tag:blogger.com,1999:blog-15136575.post-57090100285463120382015年5月28日 10:10:00 +00002015年05月28日T11:25:41.696+01:00Inject DateTime.Now to Aid Unit TestsIf you have logic that relies on the current system date, it's often difficult to see how to unit test it. But by injecting a function that returns <code>DateTime.Now</code> we can stub the current date to be anything we want it to be.<br />
Let's look at an example. Here we have a simple service that creates a new user instance and saves it in a database:<br />
<pre> public class UserService : IUserService
{
private readonly IUserData userData;
public UserService(IUserData userData)
{
this.userData = userData;
}
public void CreateUser(string username)
{
var user = new User(username, createdDateTime: DateTime.UtcNow);
userData.SaveUser(user);
}
}</pre>Now if I want to write a unit test that checks that the correct created date is set, I have to rely on the assumption that the system date won't change between the creation of the User instance and the test assertions.<br />
<pre> [TestFixture]
public class UserServiceTests
{
private IUserService sut;
private IUserData userData;
[SetUp]
public void SetUp()
{
userData = MockRepository.GenerateStub<IUserData>();
sut = new UserService(userData);
}
[Test]
public void UserServiceShouldCreateUserWithCorrectCreatedDate()
{
User user = null;
// using Rhino Mocks to grab the User instance passed to the IUserData stub
userData.Stub(x => x.SaveUser(null)).IgnoreArguments().Callback<User>(x =>
{
user = x;
return true;
});
sut.CreateUser("mike");
Assert.AreEqual(DateTime.UtcNow, user.CreatedDateTime);
}
}</pre>But in this case, probably because Rhino Mocks is doing some pretty intensive proxying, a few milliseconds pass between the user being created and my assertions running.<br />
<pre>Test 'Mike.Spikes.InjectingDateTime.UserServiceTests.UserServiceShouldCreateUserWithCorrectCreatedDate' failed:
Expected: 2015年05月28日 09:08:18.824
But was: 2015年05月28日 09:08:18.819
InjectingDateTime\InjectDateTimeDemo.cs(75,0): at Mike.Spikes.InjectingDateTime.UserServiceTests.UserServiceShouldCreateUserWithCorrectCreatedDate()</pre>The solution is to inject a function that returns a DateTime:<br />
<pre> public class UserService : IUserService
{
private readonly IUserData userData;
private readonly Func<DateTime> now;
public UserService(IUserData userData, Func<DateTime> now)
{
this.userData = userData;
this.now = now;
}
public void CreateUser(string username)
{
var user = new User(username, createdDateTime: now());
userData.SaveUser(user);
}
}</pre>Now our unit test can rely on a fixed DateTime value rather than one that is changing as the test runs:<br />
<pre> [TestFixture]
public class UserServiceTests
{
private IUserService sut;
private IUserData userData;
// stub the system date as some arbitrary date
private readonly DateTime now = new DateTime(2015, 5, 28, 10, 46, 33);
[SetUp]
public void SetUp()
{
userData = MockRepository.GenerateStub<IUserData>();
sut = new UserService(userData, () => now);
}
[Test]
public void UserServiceShouldCreateUserWithCorrectCreatedDate()
{
User user = null;
userData.Stub(x => x.SaveUser(null)).IgnoreArguments().Callback<User>(x =>
{
user = x;
return true;
});
sut.CreateUser("mike");
Assert.AreEqual(now, user.CreatedDateTime);
}
}</pre>And the test passes as expected.<br />
In our <a href="http://blog.ploeh.dk/2011/07/28/CompositionRoot/">composition root</a> we inject the current system time (here as UTC):<br />
<pre> var userService = new UserService(userData, () => DateTime.UtcNow);</pre>This pattern can be especially useful when we want to test business logic that relies on time passing. For example, say we want to check if an offer has expired; we can write unit tests for the case where the current (stubbed) time is both before and after the expiry time just by injecting different values into the system-under-test. Because we can stub the system time to be anything we want it to be, it is easy to test time-based business logic.http://mikehadlow.blogspot.com/2015/05/if-you-have-logic-that-relies-on.htmlnoreply@blogger.com (Mike Hadlow)11tag:blogger.com,1999:blog-15136575.post-6843467747556641752015年4月22日 10:54:00 +00002015年04月22日T11:58:26.662+01:00A Simple Nowin F# Example<p><a href="http://mikehadlow.blogspot.com/2015/04/basic-owin-self-host-with-f.html">In my last post</a> I showed a simple F# <a href="http://owin.org/">OWIN</a> self-hosted server without an application framework. Today I want to show an even simpler example that doesn’t reference any of the Microsoft OWIN libraries, but instead uses an open source server implementation, <a href="https://github.com/Bobris/Nowin">Nowin</a>. Thanks to <a href="https://twitter.com/randompunter">Damien Hickey</a> for pointing me in the right direction.</p> <p>The great thing about the <a href="http://owin.org/">Open Web Interface for .NET (OWIN)</a> is that it is simply a specification. There is no OWIN library that you have to install to allow web servers, application frameworks and middleware built to the OWIN standard to communicate. There is no interface that they must implement. They simply need to provide an entry point for the <a href="http://owin.org/spec/spec/owin-1.0.0.html#ApplicationDelegate">OWIN application delegate</a> (better known as the AppFunc):</p> <pre> Func<IDictionary<string, object>, Task></pre>
<p>For simple applications, where we don’t need routing, authentication, serialization, or an application framework, this means we can simply provide our own implementation of the AppFunc and pass it directly to an OWIN web server.</p>
<p><a href="https://github.com/Bobris/Nowin">Nowin</a>, by <a href="https://github.com/Bobris">Boris Letocha</a>, is a .NET web server, built directly against the standard .NET socket API. This means it should work on all platforms that support .NET without modification. The author claims that it has equivalent performance to NodeJS on Windows and can even match HttpListener. Although not ready for production, it makes a compelling implementation for simple test servers and stubs, which is how I intend to use it.</p>
<p>To use any OWIN web server with F#, we simply need to provide an AppFunc and since F# lambdas have an implicit cast to System.Func<..> we can simply provide the AppFunc in the form:</p>
<pre> fun (env: IDictionary<string, obj>) -> Task.FromResult(null) :> Task</pre>
<p>Let’s see it in action. First create an F# console application and install the Nowin server with NuGet:</p>
<pre> Install-Package Nowin</pre>
<p>Now we can host our Nowin server in the application’s entry point:</p>
<pre> [<EntryPoint>]
let main argv =
use server =
Nowin.ServerBuilder
.New()
.SetEndPoint(new IPEndPoint(IPAddress.Any, port))
.SetOwinApp(fun env -> Task.FromResult(null) :> Task)
.Build()
server.Start()
printfn "Server listening on http://localhost:%i/ \nhit <enter> to stop." port
Console.ReadLine() |> ignore
0</pre>
<p>Of course this server does nothing at all. It simply returns the default 200 OK response with no body. To do any useful work you need to read the OWIN environment, understand the request and create a response. To make this easier in F# I’ve created a simple OwinEnvironment type with just the properties I need. You could expand this to encompass whatever OWIN environment properties you need. Just look at the OWIN spec for this.</p>
<pre> type OwinEnvironment = {
httpMethod: string;
requestBody: Stream;
responseBody: Stream;
setResponseStatusCode: (int -> unit);
setResponseReasonPhrase: (string -> unit)
}</pre>
<p>Here is a function that takes the AppFunc environment and maps it to my OwinEnvironment type:</p>
<pre> let getOwinEnvironment (env: IDictionary<string, obj>) = {
httpMethod = env.["owin.RequestMethod"] :?> string;
requestBody = env.["owin.RequestBody"] :?> Stream;
responseBody = env.["owin.ResponseBody"] :?> Stream;
setResponseStatusCode =
fun (statusCode: int) -> env.["owin.ResponseStatusCode"] <- statusCode
setResponseReasonPhrase =
fun (reasonPhrase: string) -> env.["owin.ResponseReasonPhrase"] <- reasonPhrase
}</pre>
<p>Now that we have our strongly typed OwinEnvironment, we can grab the request stream and response stream and do some kind of mapping. Here is a function that does this. It also only accepts POST requests, but you could do whatever you like in the body. Note the transform function is where the work is done.</p>
<pre> let handleOwinEnvironment (owin: OwinEnvironment) : unit =
use writer = new StreamWriter(owin.responseBody)
match owin.httpMethod with
| "POST" ->
use reader = new StreamReader(owin.requestBody)
writer.Write(transform(reader.ReadToEnd()))
| _ ->
owin.setResponseStatusCode 400
owin.setResponseReasonPhrase "Bad Request"
writer.Write("Only POST requests are allowed")</pre>
<p>Just for completeness, here is a trivial transform example:</p>
<pre> let transform (request: string) : string =
sprintf "%s transformed" request</pre>
<p>Now we can re-visit our console Main function and pipe everything together:</p>
<pre> [<EntryPoint>]
let main argv =
use server =
Nowin.ServerBuilder
.New()
.SetEndPoint(new IPEndPoint(IPAddress.Any, port))
.SetOwinApp(fun env ->
env
|> getOwinEnvironment
|> handleOwinEnvironment
|> endWithCompletedTask)
.Build()
server.Start()
printfn "Server listening on http://localhost:%i/ \nhit <enter> to stop." port
Console.ReadLine() |> ignore
0</pre>
<p>The endWithCompletedTask function, is a little convenience to hide the ugly synchronous Task return code:</p>
<pre> let endWithCompletedTask = fun x -> Task.FromResult(null) :> Task</pre>
<p>So as you can see, OWIN and Nowin make it very easy to create small web servers with F#. Next time you need just a simple service stub or test server, consider doing something like this, rather than using a heavyweight server and application framework such as IIS, MVC, WebAPI or WebForms. </p>
<p>You can find the complete code for the example in this Gist <a href="https://gist.github.com/mikehadlow/c88e82ee98619f22f174">https://gist.github.com/mikehadlow/c88e82ee98619f22f174</a>: </p> http://mikehadlow.blogspot.com/2015/04/a-simple-nowin-f-example.htmlnoreply@blogger.com (Mike Hadlow)6tag:blogger.com,1999:blog-15136575.post-718859358050653872015年4月16日 11:49:00 +00002015年04月27日T12:59:08.842+01:00Basic OWIN Self Host With F#<p>I’m still very much an F# noob, but yesterday I thought I’d use it to write a little stub web service for a project I’m currently working on. I simply want to respond to any POST request to my service. I don’t need routing, or any other ‘web framework’ pieces. I just wanted to use the Microsoft.AspNet.WebApi.OwinSelfHost package to create a little web service that runs inside a console program.</p> <p>First create a new F# console project. Then install the self host package:</p> <pre> Microsoft.AspNet.WebApi.OwinSelfHost</pre>
<p>Note that this will also install various WebApi pieces which we don’t need here, so we can go ahead and uninstall them:</p>
<pre> uninstall-package Microsoft.AspNet.WebApi.OwinSelfHost
uninstall-package Microsoft.AspNet.WebApi.Owin
uninstall-package Microsoft.AspNet.WebApi.Core
uninstall-package Microsoft.AspNet.WebApi.Client</pre>
<p>My requirement is to simply take any POST request to the service, take the post body and transform it in some way (that’s not important here), and then return the result in the response body.</p>
<p>So first, here’s a function that takes a string and returns a string:</p>
<pre> let transform (input: string) =
sprintf "%s transformed" input</pre>
<p>Next we’ll write the OWIN start-up class. This needs to be a class with a single member, Configuration, that takes an IAppBuilder:</p>
<pre> open Owin
open Microsoft.Owin
open System
open System.IO
open System.Threading.Tasks
type public Startup() =
member x.Configuration (app:IAppBuilder) = app.Use( ... ) |> ignore</pre>
<p>We need something to pass into the Use method on IAppBuilder. The Use method looks like this:</p>
<pre> public static IAppBuilder Use(
this IAppBuilder app,
Func<IOwinContext, Func<Task>, Task> handler
)</pre>
<p>So we need a handler with the signature Func<IOwinContext, Func<Task>, Task>. Since F# lambdas cast directly to Func<..> delegates, we simply use lots of type annotations and write a function which looks like this:</p>
<pre> let owinHandler = fun (context:IOwinContext) (_:Func<Task>) ->
handleOwinContext context;
Task.FromResult(null) :> Task</pre>
<p>Note that this is running synchronously. We’re just returning a completed task.</p>
<p>Now lets look at the handleOwinContext function. This simply takes the IOwinContext, grabs the request, checks that it’s a ‘POST’, and transforms the request stream into the response stream using our transform function:</p>
<pre> let handleOwinContext (context:IOwinContext) =
use writer = new StreamWriter(context.Response.Body)
match context.Request.Method with
| "POST" ->
use reader = new StreamReader(context.Request.Body)
writer.Write(transform(reader.ReadToEnd()))
| _ ->
context.Response.StatusCode <- 400
writer.Write("Only POST")</pre>
<p>Now all we need to do is register our Startup type with the OWIN self host in our Program.Main function:</p>
<pre>open System
open Microsoft.Owin.Hosting
[<EntryPoint>]
let main argv =
let baseAddress = "http://localhost:8888"
use application = WebApp.Start<Startup.Startup>(baseAddress)
Console.WriteLine("Server running on {0}", baseAddress)
Console.WriteLine("hit <enter> to stop")
Console.ReadLine() |> ignore
0</pre>
<p>And we’re done. Now let’s try it out with the excellent <a href="https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en">Postman client</a>, just run the console app and send a POST request to <a href="http://localhost:8888/">http://localhost:8888/</a>:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3YFv_-mt8gN_mWiK_JCod0xnalb2gp-kWpgyEUkn3Ql1Qju30rBX6wakHbf5lCjhuJEgk1NYOzTgxqNWQzBnl_jNPZfDljJxej_J4gMSNaJuB6NkXIHb1kTfF7R61dVq2rUsS4Q/s1600-h/Postman_owin_self_host_fsharp%25255B4%25255D.png"><img title="Postman_owin_self_host_fsharp" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="Postman_owin_self_host_fsharp" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEEsKaFkglEkKZ8C89MCQ0d7PJDuDN09QE7FAgohFPtOiITUbWwyBGLtGi6GAjzDzayf1pwbMft88VG-S-DGVM4XSAvNIc2yny2O13wPa-fxePK1ViIPjU4RNMANyi5Cm5o3d-wg/?imgmax=800" width="946" height="640" /></a> </p>
<p>Full source code in <a href="https://gist.github.com/mikehadlow/67b242c95eb77f3d6aca">this Gist</a>. </p> http://mikehadlow.blogspot.com/2015/04/basic-owin-self-host-with-f.htmlnoreply@blogger.com (Mike Hadlow)0tag:blogger.com,1999:blog-15136575.post-64143447441157197872014年12月15日 17:15:00 +00002014年12月15日T20:27:53.966+00:00The Lava Layer Anti-Pattern<p><em>TL:DR Successive, well intentioned, changes to architecture and technology throughout the lifetime of an application can lead to a fragmented and hard to maintain code base. Sometimes it is better to favour consistent legacy technology over fragmentation.</em></p> <p>An ‘anti-pattern’ describes a commonly encountered pathology or problem in software development. The Lava Layer (or Lava Flow) anti-pattern is well documented (<a href="http://www.antipatterns.com/lavaflow.htm">here</a> and <a href="http://en.wikipedia.org/wiki/Lava_flow_(programming)">here</a> for example). It’s symptoms are a fragile and poorly understood codebase with a variety of different patterns and technologies used to solve the same problems in different places. I’ve seen this pattern many times in enterprise software. It’s especially prevalent in situations where the software is large, mission critical, long-lived and where there is high staff turn-over. In this post I want to show some of the ways that it occurs and how it’s often driven by a very human desire to improve the software.</p> <p>To illustrate I’m going to tell a story about a fictional piece of software in a fictional organisation with made up characters, but closely based on real examples I’ve witnessed. In fact, if I’m honest, I’ve been several of these characters at different stages of my career.  I’m going to concentrate on the data-access layer (DAL) technology and design to keep the story simple, but the general principles and scenario can and do apply to any part of the software stack.</p> <p>Let’s set the scene...</p> <p>The Royal Churchill is a large hospital in southern England. It has a sizable in-house software team that develop and maintain a suite of applications that support the hospital’s operations. One of these is WidgetFinder, a physical asset management application that is used to track the hospital’s large collection of physical assets; everything from beds to CT scanners. Development on WidgetFinder was started in 2005. The software team that wrote version 1 was lead by Laurence Martell, an developer with may years experience building client server systems based on VB/SQL Server. VB was in the process of being retired by Microsoft, so Laurence decided to build WidgetFinder with the relatively new ASP.NET platform. He read various Microsoft design guideline papers and a couple of books and decided to architect the DAL around the ADO.NET RecordSet. He and his team hand coded the DAL and exposed DataSets directly to the UI layer, as was demonstrated in the Microsoft sample applications. After seven months of development and testing, Version 1 of WidgetFinder was released and soon became central to the Royal Churchill’s operations. Indeed, several other systems, including auditing and financial applications, soon had code that directly accessed WidgetFinders database.</p> <p>Like any successful enterprise application, a new list of requirements and extensions evolved and budget was assigned for version 2. Work started in 2008. Laurence had left and a new lead developer had been appointed. His name was Bruce Snider. 
Bruce came from a Java background and was critical of many of Laurence’s design choices. He was especially scornful of the use of DataSets: "an un-typed bag of data, just waiting for a runtime error with all those string indexed columns." Indeed WidgetFinder did seem to suffer from those kinds of errors. "We need a proper object-oriented model with C# classes representing tables, such as Asset and Location. We can code gen most of the DAL straight from the relational schema." He asked for time and budget to rewrite WidgetFinder from scratch, but this was rejected by the management. Why would they want to re-write a two-year-old application that was, as far as they were concerned, successfully doing its job? There was also the problem that many other systems relied on WidgetFinder’s database and they would need to be re-written too.</p> <p>Bruce decided to write the new features of WidgetFinder using his OO/Code Gen approach and refactor any parts of the application that they had to touch as part of version 2. He was confident that in time his Code Gen DAL would eventually replace the hand-crafted DataSet code. Version 2 was released a few months later. Simon, a new recruit on the team, asked why some of the DAL was code generated, and some of it hand-coded. It was explained that there had been this guy called Lawrence who had no idea about software, but he was long gone.</p> <p>A couple of years went by. Bruce moved on and was replaced by Ina Powers. The code gen system had somewhat broken down after Bruce had left. None of the remaining team really understood how it worked, so it was easier just to modify the code by hand. Ina found the code confusing and difficult to reason about. "Why are we hand-coding the DAL in this way? This code is so repetitive, it looks like it was written by an automaton. Half of it uses DataSets and the other half some half-baked Active Record pattern. Who wrote this crap? If you hand code your DAL, you are stealing from your employer. The only sensible solution is an ORM. I recommend that we re-write the system using a proper domain model and NHibernate." Again the business rejected a rewrite. "No problem, we will adopt an evolutionary approach: write all the new code DDD/NHibernate style, and progressively refactor the existing code as we touch it." Many months later, Version 3 was released.</p> <p>Mandy was a new hire. She’d listened to Ina’s description of how the application was architected around DDD with the data access handled by NHibernate, so she was surprised and confused to come across some code using DataSets. She asked Simon what to do. "Yeah, I think that code was written by some guy who was here before me. I don’t really know what it does. Best not to touch it in case something breaks."</p> <p>Ina, frustrated by management who didn’t understand the difficulty of maintaining such horrible legacy applications, left for a start-up where she would be able to build software from scratch. She was replaced by Gordy Bannerman who had years of experience building large scale applications. The WidgetFinder users were complaining about its performance. Some of the pages took 30 seconds or more to appear. Looking at the code horrified him: huge LINQ statements generating hundreds of individual SQL requests; no wonder it was slow. Who wrote this crap? "ORMs are a horrible leaky abstraction with all kinds of performance problems. We should use a lightweight data-access technology like Dapper. Look at Stack-Overflow, they use it. 
They also use only static methods for performance, we should do the same." And so the cycle repeated itself. Version 4 was released a year later. It was buggier than the previous versions. Gordy had dismissed Ina’s love of unit testing. It’s hard to unit test code written mostly with static methods.</p> <p>Mandy left to be replaced by Peter. Simon introduced him to the WidgetFinder code. "It’s not pretty. A lot of different things have been tried over the years and you’ll find several different ways of doing the same thing depending on where you look. I don’t argue, just get on with trawling through the never-ending bug list. Hey, at least it’s a job."</p> <p>This is a graphical representation of the DAL code over time. The Y-axis shows the version of the software. It starts with version one at the bottom and ends with version four at the top. The X-axis shows features, the older ones to the left and the newer ones to the right. Each technology choice is coloured differently: red is the hand-coded RecordSet DAL, blue the Active Record code gen, green the DDD/NHibernate code and yellow the Dapper/static methods code.</p> <p><img title="LavaLayer" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="LavaLayer" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZsG2qech-jgzd2mKaZ6hNj-i14Jef93b38Ne7dTIrFR5Ojbr14egaG4caVtUG8JiDGfHbka6pVT0TeiU-HumiktqGKDPH8eYHG-X-xqpdnF5Gy-kdkaW4Jqxku1CTariesDIHCg/?imgmax=800" width="711" height="196" /></p> <p>No new design or technology choice ever completely replaced the one that went before. The application has archaeological layers revealing its history and the different technological fashions taken up successively by Laurence, Bruce, Ina and Gordy. If you look along the Version 4 line, you can see that there are four different ways of doing the same thing scattered throughout the code base.</p> <p>Each successive lead developer acted in good faith. They genuinely wanted to improve the application and believed that they were using the best design and technology to solve the problem at hand. Each wanted to re-write the application rather than maintain it, but the business owners would not allow them the resources to do it. Why should they when there didn’t seem to be any rational business reason for doing so? High staff turnover exacerbated the problem. The design philosophy of each layer was not effectively communicated to the next generation of developers. There was no consistent architectural strategy. Without exposition or explanation, code standing alone needs a very sympathetic interpreter to understand its motivations.</p> <p>So how should one guard against the Lava Layer anti-pattern? How can we approach legacy application development in a way that keeps the code consistent and well architected? A first step would be a little self-awareness.</p> <p>We developers should recognise that we suffer from a number of quite harmful pathologies when dealing with legacy code:</p> <ul> <li>We are highly (and often overly) critical of older patterns and technologies. "You’re not using a relational database?!? NoSQL is far far better!" "I can’t believe this uses XML! So verbose! 
JSON would have been a much better choice." </li> <li>We think that the current shiny best way is the end of history; that it will never be superseded or seen to be suspect with hindsight.</li> <li>We absolutely must ritually rubbish whoever came before us. Better still if they are no longer around to defend themselves. There’s a <a href="http://www.dilbert.com/strips/comic/2013-02-24/">brilliant Dilbert cartoon for this</a>.</li> <li>We despise working on legacy code and will do almost anything to carve something greenfield out of an assignment, even if it makes no sense within the existing architecture.</li> <li>Rather than try to understand legacy code, how it works and the motivations that created it, we throw up our hands in despair and declare that the whole thing needs to be rewritten.</li> </ul> <p>If you find yourself suggesting a radical change to an existing application, especially if you use the argument that "we will refactor it to the new pattern over time", consider that you may never complete that refactoring, and think about what the application will look like with two different ways of doing the same thing. Will this aid those coming after you, or hinder them? What happens if your way turns out to be sub-optimal? Will replacing it be easy? Or would it have been better to leave the older, but more consistent code in place? Is WidgetFinder better for having four entirely separate ways of getting data from the database to the UI, or would it have been easier to understand and maintain with one? Try and have some sympathy and understanding for those who came before you. There was probably a good reason why things were done the way they were. Be especially sympathetic to consistency, even if you don’t necessarily agree with the design or technology choices. </p> http://mikehadlow.blogspot.com/2014/12/the-lava-layer-anti-pattern.htmlnoreply@blogger.com (Mike Hadlow)34tag:blogger.com,1999:blog-15136575.post-50608132609579206662014年7月03日 14:22:00 +00002014年07月03日T15:26:12.552+01:00Hire Me<p>I’m on a sales drive. I want to move away from daily-rate contracting, and focus on full-lifecycle project delivery. I’ve created a new website to help market myself: <a href="http://mikehadlow.com/">http://mikehadlow.com/</a>. I’m looking for customers who want software written to a specification for a fixed price.</p> <p><a href="http://mikehadlow.com/"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="Screen Shot 2014年07月03日 at 14.49.11" border="0" alt="Screen Shot 2014年07月03日 at 14.49.11" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlQfe1qj347_Z5yR5m95Aa1L_yUBItxFgvxLtPkHs2GyVaLNPuSeYYsYYqeSao2wzTIJIPdpwqttlEHsplN06LY1GExSAM8yrmJMUv0OMYHp8JUfnfEMK2xylg2BGT42Sn08nw/?imgmax=800" width="644" height="367" /></a> </p> <p>I’ve been working in IT since 1996, although I’ve played with computers and programming since I was a teenager. Except for the first two years when I had a permanent job, I’ve worked as a daily or hourly rate contractor, with just the occasional foray into fixed-price project work. Looking at my CV I can count 17 different organizations that I’ve worked for during that time. Some of them were large companies where I was just a small part of a large team. For example, I was one of over a hundred contractors at one particular public sector project. Others were tiny local Brighton companies where I was often the end-to-end developer for a complete system. 
I’ve had a variety of roles, from being a travelling troubleshooter, driving around the country fixing installs of one particularly nasty system, a bug-fixer for months on end on a huge mission critical system, and a plug-n-play C# programmer on a whole range of different projects. More recently, for the last five years or so, I’ve mostly been hired in an ‘architect’ role. What this means is somewhat vague, but it usually encompasses giving higher-level strategic design direction and getting involved in team structure, process design, and planning. All this experience has given me some very strong opinions about what makes a successful software project. I hope that’s pretty obvious to anyone reading this blog. It’s also given me the confidence to take responsibility for the entire project lifecycle.</p> <p>During this time I’ve also occasionally done fixed-price projects. The largest of these was a customer relationship management system for a pharmaceutical company, this was a six month project which I worked with a DBA to deliver. I’ve also built a property management system for a legal practice, and a complete eCommerce system that I’ve also maintained for the last six years. I always enjoyed these projects the most. It’s very satisfying to be able to deliver a working system to a client and see it really helping their business. The problem has been finding the work. I’m hoping now that the popularity of this blog and the success of <a href="http://easynetq.com/" target="_blank">EasyNetQ</a> will provide enough of an audience for me that I’ll be able to do projects full time.</p> <p>I want to move from being just an element of a project’s delivery, to being the person responsible for it.  Taking responsibility means delivering to a price and time-scale and creating and managing the team that does it. So if you have a requirement for bespoke software and you need a safe pair of hands to deliver it, please get in touch with me at <a href="mailto:mike@suteki.co.uk">mike@suteki.co.uk</a> and let’s talk.</p> http://mikehadlow.blogspot.com/2014/07/hire-me.htmlnoreply@blogger.com (Mike Hadlow)4tag:blogger.com,1999:blog-15136575.post-19236311464979373162014年6月06日 15:01:00 +00002014年06月06日T16:01:05.118+01:00Heisenberg Developers<p>TL:DR You can not observe a developer without altering their behavior.</p> <p><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9mwJj7A9dZnSeiMnmHi0n8CC_uBcegLPwmIia6-WkjYvCbVNWv7kS949KK1nXgRHpAVtgsBnZuwQsoU_JIR4F9CyZ1xRzRD-UtVBv9Xmzra4rYEdGFwsEtozCizCJwj_alEXS/?imgmax=800" width="300" height="300" /> </p> <p><strong>First a story.</strong></p> <p>Several years ago I worked on a largish project as one of a team of developers. We were building an internal system to support an existing business process. Initially things went very well. The user requirements were reasonably well defined and we worked effectively iterating on the backlog. We were mostly left to our own devices. We had a non-technical business owner and a number of potential users who gave us broad objectives, and who tested features as they became available. When we felt that piece needed refactoring, we spent the time to do it. When a pain point appeared in the software we changed the design to remove it. 
We didn’t have to ask permission to do any of these things, and so long as features appeared at reasonable intervals, everyone was happy.</p> <p>Then came <em>that</em> requirement. The one where you try to replace an expert user’s years of experience and intuition with software. What started out as a vague and woolly requirement soon became a monster as we started to dig into it. We tried to push back against it, or at least get it scheduled for a later version of the software to be delivered at some unspecified time in the future. But no, the business was insistent, they wanted it in the next version. A very clever colleague thought the problem could be solved with a custom DSL that would allow the users themselves to encode their business rules and he and another guy set to work building it. Several months later, he was still working on it. The business was frustrated by the lack of progress and the vaguely hoped-for project delivery dates began to slip. It was all a bit of a mess.</p> <p>The boss looked at this and decided that we were loose cannons and the ship needed tightening up. He hired a project manager with an excellent CV and a reputation for getting wayward software projects under control. He introduced us to ‘Jira’, a word that strikes fear into the soul of a developer. Now, rather than taking a high level requirement and simply delivering it at some point in the future, we would break the feature into finely grained tasks, estimate each of the tasks, then break the tasks into finer grained tasks if the estimate was more than a day’s work. Every two weeks we would have a day-long planning meeting where these tasks were defined. We then spent the next 8 days working on the tasks and updating Jira with how long each one took. Our project manager would be displeased when tasks took longer than the estimate and would immediately assign one of the other team members to work with the original developer to hurry it along. We soon learned to add plenty of contingency to our estimates. We were delivery focused. Any request to refactor the software was met with disapproval, and our time was too finely managed to allow us to refactor ‘under the radar’.</p> <p>Then a strange thing started to happen. Everything slowed.</p> <p>Of course we had no way to prove it because there was no data from ‘pre-PM’ to compare to ‘post-PM’, but there was a noticeable downward notch in the speed at which features were delivered. With his calculations showing that the project’s delivery date was slipping, our PM did the obvious thing and started hiring more developers; I think they were mostly people he’d worked with before. We, the existing team, had very little say in who was hired, and it did seem that there was something of a cultural gap between us and the new guys. Whenever there was any debate about refactoring the code, or backing out of a problematic feature, the new guys would argue against it, saying it was ‘ivory tower’, and not delivering features. The PM would veto the work and side with the new guys.</p> <p>We became somewhat de-motivated. After losing an argument about how things should be done more than a few times, you start to have a pretty clear choice: knuckle down, don’t argue and get paid, or leave. Our best developer, the DSL guy, did leave, and those of us arguing for good design lost one of our main champions. I learnt to inflate my estimates, do what I was told to do, and to keep my imagination and creativity for my evening and weekend projects. 
I found it odd that few of my new colleagues seemed to actually enjoy software development; the talk in our office was now more about cars than programming languages. They actually seemed to like the finely grained management. As one explained to me, "you take the next item off the list, do the work, check it in, and you don’t have to worry about it." It relieved them of the responsibility to make difficult decisions, or take a strategic view.</p> <p>The project was not a happy one. Features took longer and longer to be delivered. There always seemed to be a mounting number of bugs, few of which seemed to get fixed, even as the team grew. The business spent more and more money for fewer and fewer benefits.</p> <p><strong>Why did it all go so wrong?</strong></p> <p>Finely grained management of software developers is compelling to a business. Any organization craves control. We want to know what we are getting in return for those expensive developer salaries. We want to be able to accurately estimate the time taken to deliver a system in order to do an effective cost-benefit analysis and to give the business an accurate forecast of delivery. There’s also the hope that by building an accurate database of estimates versus actual effort, we can fine-tune our estimation, and by analysis find efficiencies in the software development process.</p> <p>The problem with this approach is that it fundamentally misunderstands the nature of software development: that it is a creative and experimental process. Software development is a complex system of multiple poorly understood feedback loops and interactions. It is an organic process of trial and error, false starts, experiments and monumental cock-ups. Numerous studies have shown that effective creative work is best done by motivated autonomous experts. As developers we need to be free to try things out, see how they evolve, back away from bad decisions, maybe try several different things before we find one that works. We don’t have hard numbers for why we want to try this or that, or why we want to stop in the middle of this task and throw away everything we’ve done. We can’t really justify all our decisions; many of them are hunches, many of them are wrong.</p> <p>If you ask me how long a feature is going to take, my honest answer is that I really have no idea. I may have a ball-park idea, but there’s a long tail of lower-probability possibilities that means I could easily be out by a factor of 10. What about the feature itself? Is it really such a good idea? I’m not just the implementer of this software, I’m a stakeholder too. What if there’s a better way to address this business requirement? What if we discover a better way half way through the estimated time? What if I suddenly stumble on a technology or a technique that could make a big difference to the business? What if it’s not on the road map?</p> <p>As soon as you ask a developer to tell you exactly what he’s going to do over the next 8 days (or worse, weeks or months), you kill much of the creativity and serendipity. You may say that he is free to change the estimates or the tasks at any time, but he will still feel that he has to at least justify the changes. The more finely grained the tasks, the more you kill autonomy and creativity. No matter how much you say it doesn’t matter if he doesn’t meet his estimates, he’ll still feel bad about it. 
His response to being asked for estimates is twofold: first, he will learn to put in large contingencies, just in case one of those rabbit-holes crosses his path; second, he will look for the quick fix, the hack that just gets the job done. Damn technical debt, that’s for the next poor soul to deal with, I must meet my estimate. Good developers are used to doing necessary but hard-to-justify work ‘under the radar’; they effectively lie to management about what they are really doing, but finely grained management makes it hard to steal the time in which to do it.</p> <p>To be clear, I’m not speaking for everyone here. Not all developers dislike micromanagement. Some are more attracted to the paycheck than the art. For them, micromanagement can be very attractive. So long as you know how to work the system you can happily submit inflated estimates, just do what you’re told, and check in the feature. If users are unhappy and the system is buggy and late, you are not to blame, you just did what you were told.</p> <p>Finely grained management is a recipe for ‘talent evaporation’. The people who live and breathe software will leave – they usually have few problems getting jobs elsewhere. The people who don’t like to take decisions and need an excuse will stay. You will find yourself with a compliant team that meekly carries out your instructions, doesn’t argue about the utility of features, fills in Jira correctly, meets their estimates, and produces very poor quality software.</p> <p><strong>So how should one manage developers?</strong></p> <p>Simple: give them autonomy. It seems like a panacea, but finely grained management is poisonous for software development. It’s far better to give high level goals and allow your developers to meet them as they see fit. Sometimes they will fail; you need to build in contingency for this. But don’t react to failure by putting in more process and control. Work on building a great team that you can trust and that can contribute to success rather than employing rooms full of passive code monkeys.</p> http://mikehadlow.blogspot.com/2014/06/heisenberg-developers.htmlnoreply@blogger.com (Mike Hadlow)86tag:blogger.com,1999:blog-15136575.post-71101620543174994972014年4月29日 10:44:00 +00002014年04月29日T13:19:26.035+01:00JSON Web Tokens, OWIN, and AngularJS<p>I’m working on an exciting new project at the moment. The main UI element is a management console built with <a href="https://angularjs.org/">AngularJS</a> that communicates with an HTTP/JSON API built with <a href="http://nancyfx.org/">NancyFX</a> and hosted using the <a href="https://katanaproject.codeplex.com/documentation">Katana</a> <a href="http://owin.org/">OWIN</a> self host. I’m quite new to this software stack, having spent the last three years buried in SOA and messaging, but so far it’s all been a joy to work with. AngularJS makes building single page applications so easy, even for a newbie like me, that it almost feels unfair. I love the dependency injection, templating and model binding, and the speed with which you can get up and running. On the server side, NancyFx is perfect for building HTTP/JSON APIs. I really like the design philosophy behind it. The built-in dependency injection, component oriented design, and convention-over-configuration, for example, is exactly how I like to build software. OWIN is a huge breakthrough for C# web applications. 
Decoupling the web server from the web framework is something that should have happened a long time ago, and it’s really nice to finally say goodbye to ASP.NET.</p> <p>Rather than using cookie based authentication, I’ve decided to go with <a href="http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html">JSON Web Tokens</a> (JWT). This is a relatively new authorization standard that uses a signed token, transmitted in a request header, rather than the traditional ASP.NET cookie based authorization.</p> <p>There are quite a few advantages to JWT:</p> <ul> <li>Cross Domain API calls. Because it’s just a header rather than a cookie, you don’t have any of the cross-domain browser problems that you get with cookies. It makes implementing single-sign-on much easier because the app that issues the token doesn’t need to be in any way connected with the app that consumes it. They merely need to have access to the same shared secret encryption key. </li> <li>No server affinity. Because the token contains all the necessary user identification, there’s no need for shared server state – a call to a database or shared session store. </li> <li>Simple to implement clients. It’s easy to consume the API from other servers, or mobile apps. </li> </ul> <p>So how does it work? The JWT token is a simple string of three ‘.’ separated base 64 encoded values:</p> <p><header>.<payload>.<hash></p> <p>Here’s an example:</p> <div id="codeSnippetWrapper"> <pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoibWlrZSIsImV4cCI6MTIzNDU2Nzg5fQ.KG-ds05HT7kK8uGZcRemhnw3er_9brQSF1yB2xAwc_E</pre>
<br /></div>
<p>The header and payload are simple JSON strings. In the example above the header looks like this:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">{ "typ": "JWT", "alg": "HMACSHA256" }</pre>
<br /></div>
<p>This is defined in the JWT standard. The ‘typ’ is always ‘JWT’, and the ‘alg’ is the hash algorithm used to sign the token (more on this later). </p>
<p>The payload can be any valid JSON, although the standard does define some keys that client and server libraries should respect:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">{<br /> "user": "mike",<br /> "exp": 123456789<br />}</pre>
<br /></div>
<p></p>
<p>Here, ‘user’ is a key that I’ve defined, ‘exp’ is defined by the standard and is the expiration time of the token given as a UNIX time value. Being able to pass around any values that are useful to your application is a great benefit, although you obviously don’t want the token to get too large.</p>
<p>The payload is not encrypted, so you shouldn’t put sensitive information in it. The standard does provide an option for encrypting the JWT inside an encrypted wrapper, but for most applications that’s not necessary. In my case, an attacker could get the user of a session and the expiration time, but they wouldn’t be able to generate new tokens without the server side shared-secret.</p>
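<p>To see just how transparent the payload is, here’s a minimal standalone sketch (my own illustration, not part of the JWT library code) that decodes the payload segment of the example token above using nothing more than the standard .NET base 64 APIs. Anyone holding the token can do the same:</p>
<pre>using System;
using System.Text;

class JwtPayloadPeek
{
    static void Main()
    {
        // The middle ('payload') segment of the example token shown earlier.
        var payloadSegment = "eyJ1c2VyIjoibWlrZSIsImV4cCI6MTIzNDU2Nzg5fQ";

        // Undo the base 64 url encoding: restore the standard alphabet and padding.
        var base64 = payloadSegment.Replace('-', '+').Replace('_', '/');
        switch (base64.Length % 4)
        {
            case 2: base64 += "=="; break;
            case 3: base64 += "="; break;
        }

        // Prints: {"user":"mike","exp":123456789}
        Console.WriteLine(Encoding.UTF8.GetString(Convert.FromBase64String(base64)));
    }
}</pre>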
<p>The token is signed by taking the header and payload, base  64 encoding them, concatenating with ‘.’ and then generating a hash value using the given algorithm. The resulting byte array is also base 64 encoded and concatenated to produce the complete token. Here’s some code (taken from John Sheehan’s <a href="https://github.com/johnsheehan/jwt/blob/master/JWT/JWT.cs">JWT project on GitHub</a>) that generates a token. As you can see, it’s not at all complicated:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet"><span style="color: #008000">/// <summary></span><br /><span style="color: #008000">/// Creates a JWT given a payload, the signing key, and the algorithm to use.</span><br /><span style="color: #008000">/// </summary></span><br /><span style="color: #008000">/// <param name="payload">An arbitrary payload (must be serializable to JSON via <see cref="System.Web.Script.Serialization.JavaScriptSerializer"/>).</param></span><br /><span style="color: #008000">/// <param name="key">The key bytes used to sign the token.</param></span><br /><span style="color: #008000">/// <param name="algorithm">The hash algorithm to use.</param></span><br /><span style="color: #008000">/// <returns>The generated JWT.</returns></span><br /><span style="color: #0000ff">public</span> <span style="color: #0000ff">static</span> <span style="color: #0000ff">string</span> Encode(<span style="color: #0000ff">object</span> payload, <span style="color: #0000ff">byte</span>[] key, JwtHashAlgorithm algorithm)<br />{<br /> var segments = <span style="color: #0000ff">new</span> List<<span style="color: #0000ff">string</span>>();<br /> var header = <span style="color: #0000ff">new</span> { typ = <span style="color: #006080">"JWT"</span>, alg = algorithm.ToString() };<br /><br /> <span style="color: #0000ff">byte</span>[] headerBytes = Encoding.UTF8.GetBytes(jsonSerializer.Serialize(header));<br /> <span style="color: #0000ff">byte</span>[] payloadBytes = Encoding.UTF8.GetBytes(jsonSerializer.Serialize(payload));<br /><br /> segments.Add(Base64UrlEncode(headerBytes));<br /> segments.Add(Base64UrlEncode(payloadBytes));<br /><br /> var stringToSign = <span style="color: #0000ff">string</span>.Join(<span style="color: #006080">"."</span>, segments.ToArray());<br /><br /> var bytesToSign = Encoding.UTF8.GetBytes(stringToSign);<br /><br /> <span style="color: #0000ff">byte</span>[] signature = HashAlgorithms[algorithm](key, bytesToSign);<br /> segments.Add(Base64UrlEncode(signature));<br /><br /> <span style="color: #0000ff">return</span> <span style="color: #0000ff">string</span>.Join(<span style="color: #006080">"."</span>, segments.ToArray());<br />}</pre>
<br /></div>
<p><strong>Implementing JWT authentication and authorization in NancyFx and AngularJS</strong></p>
<p>There are two parts to this: first we need a login API that takes a username (email in my case) and a password and returns a token, and second we need a piece of OWIN middleware that intercepts each request and checks that it has a valid token.</p>
<p>The login Nancy module is pretty straightforward. I took John Sheehan’s code and pasted it straight into my project with a few tweaks, so it was just a question of taking the email and password from the request, validating them against my user store, generating a token and returning it as the response. If the email/password doesn’t validate, I just return 401:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet"><span style="color: #0000ff">using</span> System;<br /><span style="color: #0000ff">using</span> System.Collections.Generic;<br /><span style="color: #0000ff">using</span> Nancy;<br /><span style="color: #0000ff">using</span> Nancy.ModelBinding;<br /><span style="color: #0000ff">using</span> MyApp.Api.Authorization;<br /><br /><span style="color: #0000ff">namespace</span> MyApp.Api<br />{<br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">class</span> LoginModule : NancyModule<br /> {<br /> <span style="color: #0000ff">private</span> <span style="color: #0000ff">readonly</span> <span style="color: #0000ff">string</span> secretKey;<br /> <span style="color: #0000ff">private</span> <span style="color: #0000ff">readonly</span> IUserService userService;<br /><br /> <span style="color: #0000ff">public</span> LoginModule (IUserService userService)<br /> {<br /> Preconditions.CheckNotNull (userService, <span style="color: #006080">"userService"</span>);<br /> <span style="color: #0000ff">this</span>.userService = userService;<br /><br /> Post [<span style="color: #006080">"/login/"</span>] = _ => LoginHandler(<span style="color: #0000ff">this</span>.Bind<LoginRequest>());<br /><br /> secretKey = System.Configuration.ConfigurationManager.AppSettings [<span style="color: #006080">"SecretKey"</span>];<br /> }<br /><br /> <span style="color: #0000ff">public</span> dynamic LoginHandler(LoginRequest loginRequest)<br /> {<br /> <span style="color: #0000ff">if</span> (userService.IsValidUser (loginRequest.email, loginRequest.password)) {<br /><br /> var payload = <span style="color: #0000ff">new</span> Dictionary<<span style="color: #0000ff">string</span>, <span style="color: #0000ff">object</span>> {<br /> { <span style="color: #006080">"email"</span>, loginRequest.email },<br /> { <span style="color: #006080">"userId"</span>, 101 }<br /> };<br /><br /> var token = JsonWebToken.Encode (payload, secretKey, JwtHashAlgorithm.HS256);<br /><br /> <span style="color: #0000ff">return</span> <span style="color: #0000ff">new</span> JwtToken { Token = token };<br /> } <span style="color: #0000ff">else</span> {<br /> <span style="color: #0000ff">return</span> HttpStatusCode.Unauthorized;<br /> }<br /> }<br /> }<br /><br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">class</span> JwtToken<br /> {<br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">string</span> Token { get; set; }<br /> }<br /><br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">class</span> LoginRequest<br /> {<br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">string</span> email { get; set; }<br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">string</span> password { get; set; }<br /> }<br />}<br /></pre>
<br /></div>
<p>On the AngularJS side, I have a controller that calls the LoginModule API. If the request is successful, it stores the token in the browser’s sessionStorage; it also decodes and stores the payload information in sessionStorage. To update the rest of the application, and allow other components to change state to show a logged in user, it sends an event (via $rootScope.$emit) and then redirects to the application’s root path. If the login request fails, it simply shows a message to inform the user:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">myAppControllers.controller(<span style="color: #006080">'LoginController'</span>, <span style="color: #0000ff">function</span> ($scope, $http, $window, $location, $rootScope) {<br /> $scope.message = <span style="color: #006080">''</span>;<br /> $scope.user = { email: <span style="color: #006080">''</span>, password: <span style="color: #006080">''</span> };<br /> $scope.submit = <span style="color: #0000ff">function</span> () {<br /> $http<br /> .post(<span style="color: #006080">'/api/login'</span>, $scope.user)<br /> .success(<span style="color: #0000ff">function</span> (data, status, headers, config) {<br /> $window.sessionStorage.token = data.token;<br /> <span style="color: #0000ff">var</span> user = angular.fromJson($window.atob(data.token.split(<span style="color: #006080">'.'</span>)[1]));<br /> $window.sessionStorage.email = user.email;<br /> $window.sessionStorage.userId = user.userId;<br /> $rootScope.$emit(<span style="color: #006080">"LoginController.login"</span>);<br /> $location.path(<span style="color: #006080">'/'</span>);<br /> })<br /> .error(<span style="color: #0000ff">function</span> (data, status, headers, config) {<br /> <span style="color: #008000">// Erase the token if the user fails to login</span><br /> delete $window.sessionStorage.token;<br /><br /> $scope.message = <span style="color: #006080">'Error: Invalid email or password'</span>;<br /> });<br /> };<br />});</pre>
<br /></div>
<p>Now that we have the JWT token stored in the browser’s sessionStorage, we can use it to ‘sign’ each outgoing API request. To do this we create an <a href="https://docs.angularjs.org/api/ng/service/$http">interceptor</a> for Angular’s http module. This does two things: on the outbound request it adds an Authorization header ‘Bearer <token>’ if the token is present. This will be decoded by our OWIN middleware to authorize each request. The interceptor also checks the response. If there’s a 401 (unauthorized) response, it simply bumps the user back to the login screen.</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">myApp.factory(<span style="color: #006080">'authInterceptor'</span>, <span style="color: #0000ff">function</span> ($rootScope, $q, $window, $location) {<br /> <span style="color: #0000ff">return</span> {<br /> request: <span style="color: #0000ff">function</span> (config) {<br /> config.headers = config.headers || {};<br /> <span style="color: #0000ff">if</span>($window.sessionStorage.token) {<br /> config.headers.Authorization = <span style="color: #006080">'Bearer '</span> + $window.sessionStorage.token;<br /> }<br /> <span style="color: #0000ff">return</span> config;<br /> },<br /> responseError: <span style="color: #0000ff">function</span> (response) {<br /> <span style="color: #0000ff">if</span>(response.status === 401) {<br /> $location.path(<span style="color: #006080">'/login'</span>);<br /> }<br /> <span style="color: #0000ff">return</span> $q.reject(response);<br /> }<br /> };<br />});<br /><br />myApp.config(<span style="color: #0000ff">function</span> ($httpProvider) {<br /> $httpProvider.interceptors.push(<span style="color: #006080">'authInterceptor'</span>);<br />});</pre>
<br /></div>
<p>The final piece is the <a href="http://benfoster.io/blog/how-to-write-owin-middleware-in-5-different-steps">OWIN middleware</a> that intercepts each request to the API and validates the JWT token.</p>
<p>We want some parts of the API to be accessible without authorization, such as the login request and the API root, so we maintain a list of exceptions, currently this is just hard-coded, but it could be pulled from some configuration store. When the request comes in, we first check if the path matches any of the exception list items. If it doesn’t we check for the presence of an authorization token. If the token is not present, we cancel the request processing (by not calling the next AppFunc), and return a 401 status code. If we find a JWT token, we attempt to decode it. If the decode fails, we again cancel the request and return 401. If it succeeds, we add some OWIN keys for the ‘userId’ and ‘email’, so that they will be accessible to the rest of the application and allow processing to continue by running the next AppFunc.</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet"><span style="color: #0000ff">using</span> System;<br /><span style="color: #0000ff">using</span> System.Collections.Generic;<br /><span style="color: #0000ff">using</span> System.Threading.Tasks;<br /><br /><span style="color: #0000ff">namespace</span> MyApp.Api.Authorization<br />{<br /> <span style="color: #0000ff">using</span> AppFunc = Func<IDictionary<<span style="color: #0000ff">string</span>, <span style="color: #0000ff">object</span>>, Task>;<br /><br /> <span style="color: #008000">/// <summary></span><br /> <span style="color: #008000">/// OWIN add-in module for JWT authorization.</span><br /> <span style="color: #008000">/// </summary></span><br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">class</span> JwtOwinAuth<br /> {<br /> <span style="color: #0000ff">private</span> <span style="color: #0000ff">readonly</span> AppFunc next;<br /> <span style="color: #0000ff">private</span> <span style="color: #0000ff">readonly</span> <span style="color: #0000ff">string</span> secretKey;<br /> <span style="color: #0000ff">private</span> <span style="color: #0000ff">readonly</span> HashSet<<span style="color: #0000ff">string</span>> exceptions = <span style="color: #0000ff">new</span> HashSet<<span style="color: #0000ff">string</span>>{ <br /> <span style="color: #006080">"/"</span>,<br /> <span style="color: #006080">"/login"</span>,<br /> <span style="color: #006080">"/login/"</span><br /> };<br /><br /> <span style="color: #0000ff">public</span> JwtOwinAuth (AppFunc next)<br /> {<br /> <span style="color: #0000ff">this</span>.next = next;<br /> secretKey = System.Configuration.ConfigurationManager.AppSettings [<span style="color: #006080">"SecretKey"</span>];<br /> }<br /><br /> <span style="color: #0000ff">public</span> Task Invoke(IDictionary<<span style="color: #0000ff">string</span>, <span style="color: #0000ff">object</span>> environment)<br /> {<br /> var path = environment [<span style="color: #006080">"owin.RequestPath"</span>] <span style="color: #0000ff">as</span> <span style="color: #0000ff">string</span>;<br /> <span style="color: #0000ff">if</span> (path == <span style="color: #0000ff">null</span>) {<br /> <span style="color: #0000ff">throw</span> <span style="color: #0000ff">new</span> ApplicationException (<span style="color: #006080">"Invalid OWIN request. Expected owin.RequestPath, but not present."</span>);<br /> }<br /> <span style="color: #0000ff">if</span> (!exceptions.Contains(path)) {<br /> var headers = environment [<span style="color: #006080">"owin.RequestHeaders"</span>] <span style="color: #0000ff">as</span> IDictionary<<span style="color: #0000ff">string</span>, <span style="color: #0000ff">string</span>[]>;<br /> <span style="color: #0000ff">if</span> (headers == <span style="color: #0000ff">null</span>) {<br /> <span style="color: #0000ff">throw</span> <span style="color: #0000ff">new</span> ApplicationException (<span style="color: #006080">"Invalid OWIN request. 
Expected owin.RequestHeaders to be an IDictionary<string, string[]>."</span>);<br /> }<br /> <span style="color: #0000ff">if</span> (headers.ContainsKey (<span style="color: #006080">"Authorization"</span>)) {<br /> var token = GetTokenFromAuthorizationHeader (headers [<span style="color: #006080">"Authorization"</span>]);<br /> <span style="color: #0000ff">try</span> {<br /> var payload = JsonWebToken.DecodeToObject (token, secretKey) <span style="color: #0000ff">as</span> Dictionary<<span style="color: #0000ff">string</span>, <span style="color: #0000ff">object</span>>;<br /> environment.Add(<span style="color: #006080">"myapp.userId"</span>, (<span style="color: #0000ff">int</span>)payload[<span style="color: #006080">"userId"</span>]);<br /> environment.Add(<span style="color: #006080">"myapp.email"</span>, payload[<span style="color: #006080">"email"</span>].ToString());<br /> } <span style="color: #0000ff">catch</span> (SignatureVerificationException) {<br /> <span style="color: #0000ff">return</span> UnauthorizedResponse (environment);<br /> }<br /> } <span style="color: #0000ff">else</span> {<br /> <span style="color: #0000ff">return</span> UnauthorizedResponse (environment);<br /> }<br /> }<br /> <span style="color: #0000ff">return</span> next (environment);<br /> }<br /><br /> <span style="color: #0000ff">public</span> <span style="color: #0000ff">string</span> GetTokenFromAuthorizationHeader(<span style="color: #0000ff">string</span>[] authorizationHeader)<br /> {<br /> <span style="color: #0000ff">if</span> (authorizationHeader.Length == 0) {<br /> <span style="color: #0000ff">throw</span> <span style="color: #0000ff">new</span> ApplicationException (<span style="color: #006080">"Invalid authorization header. It must have at least one element"</span>);<br /> }<br /> var token = authorizationHeader [0].Split (<span style="color: #006080">' '</span>) [1];<br /> <span style="color: #0000ff">return</span> token;<br /> }<br /><br /> <span style="color: #0000ff">public</span> Task UnauthorizedResponse(IDictionary<<span style="color: #0000ff">string</span>, <span style="color: #0000ff">object</span>> environment)<br /> {<br /> environment [<span style="color: #006080">"owin.ResponseStatusCode"</span>] = 401;<br /> <span style="color: #0000ff">return</span> Task.FromResult (0);<br /> }<br /> }<br />}<br /></pre>
<br /></div>
<p>So far this is all working very nicely. There are some important missing pieces. I haven’t implemented an expiry key in the JWT token, or expiration checking in the OWIN middleware. When the token expires, it would be nice if there was some algorithm that decides whether to simply issue a new token, or whether to require the user to sign-in again. Security dictates that tokens should expire relatively frequently, but we don’t want to inconvenience the user by asking them to constantly sign in.</p>
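<p>For what it’s worth, here’s a rough sketch of how that expiry handling might look. It’s not part of the middleware above, and the class and method names are just placeholders: the idea is that the login module adds an ‘exp’ claim when it builds the payload, and the middleware compares that claim against the current UNIX time after decoding the token, returning the same 401 response as a failed signature check when it has passed:</p>
<pre>// Hypothetical helper, not part of the code above.
using System;
using System.Collections.Generic;

public static class TokenExpiry
{
    private static readonly DateTime UnixEpoch =
        new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    // Used by the login module, e.g. payload.Add("exp", TokenExpiry.ExpiresIn(TimeSpan.FromMinutes(20)));
    public static long ExpiresIn(TimeSpan lifetime)
    {
        return (long)(DateTime.UtcNow.Add(lifetime) - UnixEpoch).TotalSeconds;
    }

    // Used by the OWIN middleware once the token has been successfully decoded.
    public static bool HasExpired(IDictionary<string, object> payload)
    {
        object exp;
        if (!payload.TryGetValue("exp", out exp))
        {
            // No 'exp' claim: safer to treat the token as expired than as valid forever.
            return true;
        }
        var expirySeconds = Convert.ToInt64(exp);
        var nowSeconds = (long)(DateTime.UtcNow - UnixEpoch).TotalSeconds;
        return nowSeconds > expirySeconds;
    }
}</pre>
<p>Whether an expired token should trigger a silent re-issue or a full sign-in is still the interesting design question; the check itself is the easy part.</p>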
<p>JWT is a really nice way of authenticating HTTP/JSON web APIs. It’s definitely worth looking at if you’re building single page applications, or any API-first software.</p> http://mikehadlow.blogspot.com/2014/04/json-web-tokens-owin-and-angularjs.htmlnoreply@blogger.com (Mike Hadlow)14tag:blogger.com,1999:blog-15136575.post-82254914043465830512014年4月16日 13:53:00 +00002014年04月16日T14:53:29.037+01:00A Contractor’s Guide To Recruitment Agencies<p>I haven’t contracted through an agency for a long time, but I thought I’d write up my experiences from almost ten years of working as an IT contractor for anyone considering it as a career choice.</p> <p>IT recruitment agencies provide a valuable service. Like any middle-man, their job is to bring buyers and sellers together. In this case the buyer is the end client, the company that needs a short-term resource to fill a current skills gap. The seller is you, the contactor offering the skill. The agency needs to do two things well: market intelligence - finding clients in need of resources and contractors looking to sell their skills; and negotiation – negotiating the highest price that the client will pay, and the lowest price that the contractor will work for. The agency’s income is a simple formula: </p> <p><em>(client rate – contractor rate) * number of contractors placed.</em></p> <p>Minimize the contractor rate, maximize the client rate, and place as many contractors as possible. That’s success.</p> <p>Anyone with a phone can set themselves up as a recruitment agency. There are zero or low startup costs. The greatest difficultly most agencies face is finding clients. Finding contractors is a little easier, as I’ll explain presently. Having a good relationship with a large corporate or government client is a gold standard for any agency. Even better if that relationship is exclusive. Getting a foot in the door with one of these clients is very difficult, usually some long established, large agency has long ago stitched up a deal with someone high-up. But any company or organization in need of a contractor is a potential client, and agencies spend inordinate amounts of time in the search for names they can approach with potential candidates.</p> <p>As I said before, finding contractors is somewhat easier. There are a number of well known websites, Jobserve is the most common one to use in the UK, so it’s merely a case of putting up a job description and waiting for the CVs to roll in. The agent will try to make the job sound as good as possible to maximize the chances of getting applications within the limits of the client’s job spec.</p> <p>An ideal contractor for an agency is someone who the client wants to hire, and who is willing to work for the lowest possible rate, and who will keep the client happy by turning up every day and doing the work that the client expects. Since agencies take an on-going percentage of the daily rate, the longer the contract lasts the better. The agency will attempt to do some filtering to ‘add value’, but since few agencies have any real technology knowledge, this mainly consists of matching keywords and years-of-experience. Anyone with any experience of talking to agencies will know how frustrating it can be, "Do you know any ASPs?" "No, they don’t want .NET, they want C#." I’m not making those quotes up. Ideally they will want to persuade the client that they have some kind of exclusive arrangement with ‘their’ contractors and that the client would not be able to hire them through anyone else. 
It can be very embarrassing for them if the client receives your CV through a competing agency as well as theirs. </p> <p><strong>The job hunt. How you should approach it.</strong></p> <p>Let’s say you’re a competent C# developer, how should you approach landing your dream contract role? The obvious first place to look are the popular jobsites. Do a search for C# contracts in your local area, or further afield if you’re willing to travel. Scan the job listings looking for anything that looks like it vaguely fits. Don’t be too fussy at this stage, you want to increase your chances by applying for as many jobs as possible. Once you’ve got a list of jobs it’s worth trying to see if you can work out who the company is. If you can make a direct contract with the client, so much the better. Don’t worry about feeling underhand, agencies do this to each other all the time, it’s part of the game.</p> <p>Failing a direct contact, the next step is to email your CV to the agency. Remember they’ll be trying to match keywords, so it’s worth customizing your CV to the job advert. Make sure as many keywords as possible match those in the advert, remembering of course that you <em>might</em> have to back up your claims in an interview. </p> <p>The next step is usually a short telephone conversation with the recruiter. This call is the beginning of the negotiations with the recruiter. Negotiating is their full time job, they are usually very good at it. Be very wary. Your attitude is that you are a highly qualified professional who is somewhat interested in the role, but it’s by no means the only possibility at this stage. Whatever you do, don’t appear desperate. Remember at this stage you are an unknown quantity. Most contractors a recruiter comes into contact with will be duds (there’s no barriers to entry in our profession either), and they will initially be suspicious of you. Confidently assert that you have all the experience you mention in your CV, and that, of course, you can do the job. There is no point in getting into any technical discussion with the recruiter, they simply won’t understand. Remember: match keywords and experience. At this stage, even if you’ve got doubts about the job, don’t express them, just appear keen and confident.</p> <p>Sometimes there’s a rate mentioned on the advert, at other times it will just say ‘market rates’, which is meaningless. If the agent doesn’t bring up rates at this point, there’s no need to mention them. At this stage you are still an unknown quantity. Once the client has decided that they really want you, you are gold, and in a much stronger bargaining position. If there’s a rate conversation at the pre interview stage, try to stay non-committal. If there’s a range, say you’ll only work for the top number.</p> <p>They may ask you for references. Your reply should be to politely say that you only give references after an interview. It’s a common trick to put an imaginary job on a jobsite then ask applicants for references. Remember, an agency’s main difficulty is finding clients and the references are used as leads. If you give them references you will never hear from them again, but your previous clients will be hounded with phone calls.</p> <p>Another common trick is to ask you where else you are applying. They are looking for leads again. Be very non-committal. They may also ask you for names of people you worked for at previous jobs, this is just like asking for references, you don’t need to tell them. 
Sometimes it’s worth having a list of made-up names to give out if they’re very persistent.</p> <p>Next you will either hear back from the agent with an offer of an interview, or you won’t hear from them at all. No agency I’ve ever had contact with bothered to call me giving a reason why an interview hadn’t materialized. If you don’t hear from them, move on with applying for the next job. Constantly calling the agency smacks of desperation and won’t get you anywhere. There are multiple possible reasons that the interview didn’t materialize, the most common being that the job didn’t exist in the first place (see above).</p> <p>At all times be polite and professional with the agent even if you’re convinced they’re being liberal with the truth.</p> <p>If you get an interview, that’s good. This isn’t a post about interviewing, so let’s just assume that you were wonderful and the client really wants you. You’ll know this because you’ll get a phone call from the agent congratulating you on getting the role. You are now a totally different quantity in the agent’s eyes, a successful candidate, a valuable commodity, a guaranteed income stream for as long as the contract lasts. Their main job now is to get you to work for as little as possible while the client pays as much as possible. If you agreed a rate before the interview, now is their chance to try and lower it. You may well have a conversation like this: "I’m very sorry John, but the client is not able to offer the rate we agreed, I’m afraid it will have to be XXX instead." Call their bluff. Your answer should be: "Oh that’s such a shame, I was really looking forward to working with them, but my minimum rate is <whatever you initially agreed>. Never mind, it was nice doing business with you." I guarantee they will call you back the next day telling you how hard they have been working on your behalf to persuade the client to increase your rate.</p> <p>If you haven’t already agreed a rate, now is the time to have a good idea of the minimum you want to work for. Add 30% to it. That’s your opening rate with the agent. They will choke and tell you there’s no way that you’ll get that. Ask them for their maximum and choke in return. Haggle back and forth until you discover what their maximum is. If it’s lower than your minimum, walk away. You may have to walk away and wait for them to phone you. Of course you’ve got to be somewhere in the ballpark of the market rate or you won’t get the role. Knowing the market rate is tricky, but a few conversations with your contractor mates should give you some idea.</p> <p>Once the rate has been agreed and you start work, your interests are aligned with the agent. You both want the contract to last and you both want to maintain a good relationship with the client. The agency should pay you promptly. Don’t put up with late or missing payments, just leave. Usually a threat to walk off site can work wonders with outstanding invoices. Beware, at the worst some agents can be downright nasty and bullying. I’ve been told that I would never work in IT again by at least two different characters. It’s nice to see how that turned out. Just ignore bullies, except to make a note that <em>you</em> will never work for <em>their</em> agency again.</p> <p>Agencies are a necessary evil until you have built up a good enough network and reputation that you don’t need to use them any more. 
Some are professional and honest, many aren’t, but if you understand their motivations and treat anything they say with a pinch of salt, you should be fine. </p> http://mikehadlow.blogspot.com/2014/04/a-contractors-guide-to-recruitment.htmlnoreply@blogger.com (Mike Hadlow)23tag:blogger.com,1999:blog-15136575.post-42997526956298399502014年4月03日 13:22:00 +00002014年04月03日T14:23:00.162+01:00DockerMonoA Docker ‘Hello World' With Mono<p><a href="https://www.docker.io/">Docker</a> is a lightweight virtualization technology for Linux that promises to revolutionize the deployment and management of distributed applications. Rather than requiring a complete operating system, like a traditional virtual machine, Docker is built on top of Linux containers, a feature of the Linux kernel, that allows light-weight Docker containers to share a common kernel while isolating applications and their dependencies.</p> <p>There’s a very good Docker SlideShare presentation <a href="http://www.slideshare.net/dotCloud/docker-intro-november">here</a> that explains the philosophy behind Docker using the analogy of standardized shipping containers. Interesting that the standard shipping container has done more to create our global economy than all the free-trade treaties and international agreements put together.</p> <p>A Docker image is built from a script, called a ‘Dockerfile’. Each Dockerfile starts by declaring a parent image. This is very cool, because it means that you can build up your infrastructure from a layer of images, starting with general, platform images and then layering successively more application specific images on top. I’m going to demonstrate this by first building an image that provides a Mono development environment, and then creating a simple ‘Hello World’ console application image that runs on top of it.</p> <p>Because the Dockerfiles are simple text files, you can keep them under source control and version your environment and dependencies alongside the actual source code of your software. This is a game changer for the deployment and management of distributed systems. Imagine developing an upgrade to your software that includes new versions of its dependencies, including pieces that we’ve traditionally considered the realm of the environment, and not something that you would normally put in your source repository, like the Mono version that the software runs on for example. You can script all these changes in your Dockerfile, test the new container on your local machine, then simply move the image to test and then production. The possibilities for vastly simplified deployment workflows are obvious.</p> <p><em>Docker brings concerns that were previously the responsibility of an organization’s operations department and makes them a first class part of the software development lifecycle. Now your infrastructure can be maintained as source code, built as part of your CI cycle and continuously deployed, just like the software that runs inside it.</em></p> <p>Docker also provides <a href="https://index.docker.io/">docker index</a>, an online repository of docker images.  Anyone can create an image and add it to the index and there are already images for almost any piece of infrastructure you can imagine. 
Say you want to use RabbitMQ: all you have to do is grab a handy RabbitMQ image such as <a title="https://index.docker.io/u/tutum/rabbitmq/" href="https://index.docker.io/u/tutum/rabbitmq/">https://index.docker.io/u/tutum/rabbitmq/</a> and run it like this:</p> <div id="codeSnippetWrapper"> <pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">docker run -d -p 5672:5672 -p 55672:55672 tutum/rabbitmq</pre>
</div>
<p>The -p flag maps ports between the container and the host.</p>
<p>Let’s look at an example. I’m going to show you how to create a docker image for the Mono development environment and have it built and hosted on the docker index. Then I’m going to build a local docker image for a simple ‘hello world’ console application that I can run on my Ubuntu box.</p>
<p>First we need to create a Dockerfile for our Mono environment. I’m going to use the <a href="https://launchpad.net/~directhex/+archive/monoxide">Mono Debian packages from directhex</a>. These are maintained by the official Debian/Ubuntu Mono team and are the recommended way of installing the latest Mono versions on Ubuntu.</p>
<p>Here’s the Dockerfile:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">#DOCKER-VERSION 0.9.1<br />#<br />#VERSION 0.1<br />#<br /># monoxide mono-devel package on Ubuntu 13.10<br /><br />FROM ubuntu:13.10<br />MAINTAINER Mike Hadlow <span style="color: #0000ff"><</span><span style="color: #800000">mike</span>@<span style="color: #ff0000">suteki</span>.<span style="color: #ff0000">co</span>.<span style="color: #ff0000">uk</span><span style="color: #0000ff">></span><br /><br />RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q software-properties-common<br />RUN sudo add-apt-repository ppa:directhex/monoxide -y<br />RUN sudo apt-get update<br />RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q mono-devel</pre>
</div>
<p>Notice the first line (after the comments) that reads, ‘FROM ubuntu:13.10’. This specifies the parent image for this Dockerfile. This is the official Docker Ubuntu image from the index. When I build this Dockerfile, that image will be automatically downloaded and used as the starting point for my image.</p>
<p>But I don’t want to build this image locally. Docker provide a build server linked to the docker index. All you have to do is create a public GitHub repository containing your dockerfile, then link the repository to your profile on docker index. <a href="https://index.docker.io/help/docs/#trustedbuilds">You can read the documentation for the details</a>.</p>
<p>The GitHub repository for my Mono image is at <a title="https://github.com/mikehadlow/ubuntu-monoxide-mono-devel" href="https://github.com/mikehadlow/ubuntu-monoxide-mono-devel">https://github.com/mikehadlow/ubuntu-monoxide-mono-devel</a>. Notice how the Dockerfile is in the root of the repository. That’s the default location, but you can have multiple files in sub-directories if you want to support many images from a single repository.</p>
<p>Now any time I push a change of my Dockerfile to GitHub, the docker build system will automatically build the image and update the docker index. You can see the image listed here: <a title="https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/" href="https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/">https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/</a></p>
<p>I can now grab my image and run it interactively like this:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">$ sudo docker pull mikehadlow/ubuntu-monoxide-mono-devel<br />Pulling repository mikehadlow/ubuntu-monoxide-mono-devel<br />f259e029fcdd: Download complete <br />511136ea3c5a: Download complete <br />1c7f181e78b9: Download complete <br />9f676bd305a4: Download complete <br />ce647670fde1: Download complete <br />d6c54574173f: Download complete <br />6bcad8583de3: Download complete <br />e82d34a742ff: Download complete <br /><br />$ sudo docker run -i mikehadlow/ubuntu-monoxide-mono-devel /bin/bash<br />mono --version<br />Mono JIT compiler version 3.2.8 (Debian 3.2.8+dfsg-1~pre1)<br />Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com<br /> TLS: __thread<br /> SIGSEGV: altstack<br /> Notifications: epoll<br /> Architecture: amd64<br /> Disabled: none<br /> Misc: softdebug <br /> LLVM: supported, not enabled.<br /> GC: sgen<br />exit</pre>
<br /></div>
<p>Next let’s create a new local Dockerfile that compiles a simple ‘hello world’ program, and then runs it when we run the image. You can follow along with these steps. All you need is an Ubuntu machine with Docker installed.</p>
<p>First here’s our ‘hello world’, save this code in a file named hello.cs:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">using System;<br /><br />namespace Mike.MonoTest<br />{<br /> public class Program<br /> {<br /> public static void Main()<br /> {<br /> Console.WriteLine("Hello World");<br /> }<br /> }<br />}</pre>
<br /></div>
<p>Next we’ll create our Dockerfile. Copy this code into a file called ‘Dockerfile’:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">#DOCKER-VERSION 0.9.1<br /><br />FROM mikehadlow/ubuntu-monoxide-mono-devel<br /><br />ADD . /src<br /><br />RUN mcs /src/hello.cs<br />CMD ["mono", "/src/hello.exe"]</pre>
<br /></div>
<p>Once again, notice the ‘FROM’ line. This time we’re telling Docker to start with our mono image. The next line ‘ADD . /src’, tells Docker to copy the contents of the current directory (the one containing our Dockerfile) into a root directory named ‘src’ in the container. Now our hello.cs file is at /src/hello.cs in the container, so we can compile it with the mono C# compiler, mcs, which is the line ‘RUN mcs /src/hello.cs’. Now we will have the executable, hello.exe, in the src directory. The line ‘CMD ["mono", "/src/hello.exe"]’ tells Docker what we want to happen when the container is run: just execute our hello.exe program.</p>
<p>As an aside, this exercise highlights some questions around what best practice should be with Docker. We could have done this in several different ways. Should we build our software independently of the Docker build in some CI environment, or does it make sense to do it this way, with the Docker build as a step in our CI process? Do we want to rebuild our container for every commit to our software, or do we want the running container to pull the latest from our build output? Initially I’m quite attracted to the idea of building the image as part of the CI but I expect that we’ll have to wait a while for best practice to evolve.</p>
<p>Anyway, for now let’s manually build our image:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">$ sudo docker build -t hello .<br />Uploading context 1.684 MB<br />Uploading context <br />Step 0 : FROM mikehadlow/ubuntu-monoxide-mono-devel<br /> ---<span style="color: #0000ff">></span> f259e029fcdd<br />Step 1 : ADD . /src<br /> ---<span style="color: #0000ff">></span> 6075dee41003<br />Step 2 : RUN mcs /src/hello.cs<br /> ---<span style="color: #0000ff">></span> Running in 60a3582ab6a3<br /> ---<span style="color: #0000ff">></span> 0e102c1e4f26<br />Step 3 : CMD ["mono", "/src/hello.exe"]<br /> ---<span style="color: #0000ff">></span> Running in 3f75e540219a<br /> ---<span style="color: #0000ff">></span> 1150949428b2<br />Successfully built 1150949428b2<br />Removing intermediate container 88d2d28f12ab<br />Removing intermediate container 60a3582ab6a3<br />Removing intermediate container 3f75e540219a</pre>
<br /></div>
<p>You can see Docker executing each build step in turn and storing the intermediate result until the final image is created. Because we used the tag (-t) option and named our image ‘hello’, we can see it when we list all the docker images:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">$ sudo docker images<br />REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE<br />hello latest 1150949428b2 10 seconds ago 396.4 MB<br />mikehadlow/ubuntu-monoxide-mono-devel latest f259e029fcdd 24 hours ago 394.7 MB<br />ubuntu 13.10 9f676bd305a4 8 weeks ago 178 MB<br />ubuntu saucy 9f676bd305a4 8 weeks ago 178 MB<br />...</pre>
<br /></div>
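<p>If you’re curious how those build steps map onto image layers, the docker history command lists them, one layer per Dockerfile instruction (the exact output format varies between Docker versions, so I won’t reproduce it here):</p>
<pre># show the layers of the 'hello' image, most recent instruction (CMD) first
$ sudo docker history hello
</pre>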
<p>Now let’s run our image. Each time we do this Docker creates a new container from the image and runs our program inside it:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">$ sudo docker run hello<br />Hello World</pre>
<br /></div>
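<p>Because every invocation of docker run creates a fresh container, repeated runs leave a trail of exited containers behind. A couple of illustrative commands (the --rm flag may be spelt -rm on older Docker releases):</p>
<pre># list all containers, including the exited ones left over from previous runs
$ sudo docker ps -a

# run the image and automatically remove the container when it exits
$ sudo docker run --rm hello
</pre>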
<p>And that’s it.</p>
<p>Imagine that instead of our little hello.exe, this image contained our web application, or maybe a service in some distributed system. In order to deploy it, we’d simply ask Docker to run it on any server we like: development, test, production, or many servers in a web farm. This is an incredibly powerful way of doing consistent, repeatable deployments.</p>
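<p>As a sketch of what such a deployment might look like (the image name ‘mywebapp’ and the ports are purely hypothetical):</p>
<pre># run a web application image detached (-d), mapping host port 8080
# to port 80 inside the container (-p host:container)
$ sudo docker run -d -p 8080:80 mywebapp
</pre>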
<p>To reiterate, I think Docker is a game changer for large server side software. It’s one of the most exciting developments to have emerged this year and definitely worth your time to check out.</p> http://mikehadlow.blogspot.com/2014/04/a-docker-hello-world-with-mono.htmlnoreply@blogger.com (Mike Hadlow)1tag:blogger.com,1999:blog-15136575.post-60895331028581433022014年4月01日 10:24:00 +00002014年04月01日T11:24:22.776+01:00DockerDocker: Bulk Remove Images and Containers<p>I’ve just started looking at <a href="https://www.docker.io/">Docker</a>. It’s a cool new technology that has the potential to make the management and deployment of distributed applications a great deal easier. I’d very much recommend checking it out. I’m especially interested in using it to deploy Mono applications because it promises to remove the hassle of deploying and maintaining the mono runtime on a multitude of Linux servers. </p> <p>I’ve been playing around creating new images and containers and debugging my Dockerfile, and I’ve wound up with lots of temporary containers and images. It’s really tedious repeatedly running ‘docker rm’ and ‘docker rmi’, so I’ve knocked up a couple of bash commands to bulk delete images and containers.</p> <p>Delete all containers:</p> <div id="codeSnippetWrapper"> <pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">sudo docker ps -a -q | xargs -n 1 -I {} sudo docker rm {}</pre>
<br /></div>
<p></p>
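<p>As an aside, command substitution gives a slightly shorter equivalent (assuming a bash-like shell); the same pattern also works for the image clean-up command below:</p>
<pre># remove all containers listed by 'docker ps -a -q' (IDs only);
# this will fail for any container that is still running
$ sudo docker rm $(sudo docker ps -a -q)
</pre>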
<p>Delete all un-tagged (or intermediate) images:</p>
<div id="codeSnippetWrapper">
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt; border-right-style: none; background-color: #f4f4f4; margin: 0em; padding-left: 0px; width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr; border-top-style: none; color: black; font-size: 8pt; border-left-style: none; overflow: visible; padding-top: 0px" id="codeSnippet">sudo docker rmi $( sudo docker images | grep '<span style="color: #0000ff"><</span><span style="color: #800000">none</span><span style="color: #0000ff">></span>' | tr -s ' ' | cut -d ' ' -f 3)</pre>
<br /></div> http://mikehadlow.blogspot.com/2014/04/docker-bulk-remove-images-and-containers.htmlnoreply@blogger.com (Mike Hadlow)5