The only reason for the swap is to get programmatic public access to the repo. Azure DevOps supports public projects, but REST access still appears to require authentication. Personal Access Tokens work just fine, but they expire, which is a pain in the backside.
Maybe anonymous access to Azure Repos REST services gets enabled in the future, but I didn't want to wait. It was pretty easy to port the Azure DevOps REST Wrapper to Octokit. My code was already using TPL Dataflow to load all the post metadata from the Azure Git repo. All I really needed to do was change the client initialization code and the URL construction scheme and I was good to go.
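For anonymous reads of a public GitHub repo, Octokit doesn't need credentials at all. Here's a minimal sketch of the kind of client initialization and content call involved - the owner, repo, and path names are placeholders, not the real Hawk values:

using System;
using System.Threading.Tasks;
using Octokit;

// Sketch only - anonymous access to a public GitHub repo; names below are placeholders.
static async Task ListPostFilesAsync()
{
    // No Personal Access Token needed for public repos; the product header just identifies the caller.
    var client = new GitHubClient(new ProductHeaderValue("hawk-blog-loader"));

    var contents = await client.Repository.Content.GetAllContents("some-owner", "some-repo", "posts");
    foreach (var item in contents)
    {
        // item.Type distinguishes files from directories; item.Path is repo-relative.
        Console.WriteLine($"{item.Type}: {item.Path}");
    }
}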
Back when I was working for the IronPython team, I would write series of posts on a single topic, such as writing an IronPython debugger or using IronPython's __clrtype__ metaclass feature. I've also written series on building parsers in F#, a step-by-step guide to brokered WinRT components, and the home-grown engine I use for this blog. And with the new job, I suspect I'll be writing even more.
So I decided to make series a first-class construct in my blog. You can go to a top-level page to see all my series. Or you can go to a specific series page, such as the one for Hawk Notes. One thing I really like about Series is that they display in chronological order. That makes reading a series like the IronPython Debugger much easier.
Implementing Series was fairly straightforward...or at least it would have been if I hadn't decided to significantly refactor the service code. I didn't like how I was handling configuration - too much direct config reading that was better handled by options. I made some poor service design decisions that limited the use of dependency injection. Most of all, I wanted to change the way memory caching worked so that more data loaded on demand instead of ahead of time. I also took the opportunity to use newer language constructs like value tuples and local functions.
I still have two different sources for posts - the local file system and an Azure Repo. I used to use Azure Storage but I got rid of that source as part of this refactoring. I have a simple interface, PostLoader 1, which the AzureGitLoader and FileSystemLoader classes implement. In order to select between them at run time, I have a third PostLoader implementation named PostLoaderSelector. PostLoaderSelector chooses between the sources based on configuration and uses IServiceProvider to activate the specified type from the DI container. PostLoaderSelector gets the IServiceProvider instance via constructor injection. I couldn't find a good example of how to manually activate types from ASP.NET's DI container, so for future reference it looks like this:
public Task<IEnumerable<Post>> LoadPostsAsync()
{
    // Look Ma, a Local Function!
    PostLoader GetLoader()
    {
        switch (options.PostStorage)
        {
            case BlogOptions.PostStorageType.FileSystem:
                return serviceProvider.GetService<FileSystemLoader>();
            case BlogOptions.PostStorageType.AzureGit:
                return serviceProvider.GetService<AzureGitLoader>();
            default:
                throw new NotImplementedException();
        }
    }

    var loader = GetLoader();
    return loader.LoadPostsAsync();
}
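For completeness, here's roughly what the other side of that pattern looks like - registering the options and the loader types with the DI container so PostLoaderSelector can resolve them. This is a sketch rather than Hawk's actual startup code; the service lifetimes and the configuration section name are assumptions:

using Microsoft.Extensions.DependencyInjection;

// Inside the app's Startup.ConfigureServices (Configuration is the usual injected IConfiguration).
public void ConfigureServices(IServiceCollection services)
{
    // Bind the options that drive the PostStorage switch above; "Blog" is a placeholder section name.
    services.Configure<BlogOptions>(Configuration.GetSection("Blog"));

    // Register the concrete loaders so PostLoaderSelector can activate them via IServiceProvider.
    services.AddSingleton<FileSystemLoader>();
    services.AddSingleton<AzureGitLoader>();

    // PostLoaderSelector is the PostLoader the rest of the app actually sees.
    services.AddSingleton<PostLoader, PostLoaderSelector>();
}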
Note the lack of the "I" prefix for my interface type names. Death to this final vestigial Hungarian notation!↩
Once I got my code updated to the latest version of ASP.NET, I decided to actually add some functionality. When I built Hawk, I focused primarily on how the site was going to render my existing content. I didn't give much thought to how I would write new blog posts. That turned out to be a bad decision. Getting new posts published turned out to be such a huge pain in the butt that I rarely bothered to write new posts. With the new job, I want to go back to blogging much more often. So I figured I should update the engine to make publishing new posts easier.
As I wrote in my previous Hawk Note, Hawk loads all the post metadata into memory on startup. It supports loading posts from either the file system or Azure Storage. The master copy of my posts is stored in its own ~~Visual Studio Online~~ ~~Visual Studio Team Services~~ Azure DevOps repo. Moving posts from the git repo into Azure Storage turned out to be the biggest publishing obstacle. So I decided to eliminate that step. If you're reading this, I guess it worked!
Since Hawk already supported loading posts from two different repositories, it was pretty straightforward to add a third that reads directly from the Azure Repo. I also added code to render the markdown to HTML using Markdig, eliminating the need to use Edge.js 1.
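Markdig's rendering API is basically a one-liner once the pipeline is configured. Something along these lines - the exact extension list Hawk enables isn't spelled out in the post, so UseAdvancedExtensions here is just a stand-in:

using Markdig;

// Build a reusable pipeline once; UseAdvancedExtensions pulls in footnotes, custom containers, and more.
var pipeline = new MarkdownPipelineBuilder()
    .UseAdvancedExtensions()
    .Build();

// Render a post body (placeholder content here).
string markdownText = "# Hello, *Hawk*";
string html = Markdown.ToHtml(markdownText, pipeline);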
Originally, I decided to store my blog posts in what is now called Azure Repos because it offered free private repos for personal use. While GitHub now offers free private repos, I've decided to keep my posts in Azure Repos because I find the Azure DevOps REST interface much easier to wrap my head around than GitHub's GraphQL-based approach. Yeah, I'm sure GraphQL is the better approach. But I was able to add the ability to load posts from Azure Repos in roughly 150 lines of code. For now anyway, easy and proven beats newer and more powerful.
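For the curious, reading a file out of an Azure Repo over REST boils down to one authenticated GET against the Git items endpoint. This isn't Hawk's actual loader - the organization, project, and repo names are placeholders, and the PAT-based auth shown here is exactly the expiring-token annoyance that prompted the Octokit swap described above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch of fetching a single file from an Azure Repo via the Git items REST endpoint.
static async Task<string> GetFileFromAzureRepoAsync(string path, string pat)
{
    using (var client = new HttpClient())
    {
        // Azure DevOps accepts a PAT as the password of a Basic auth header with an empty user name.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // Organization/project/repository names are placeholders.
        var url = "https://dev.azure.com/some-org/some-project/_apis/git/repositories/some-repo/items"
            + "?path=" + Uri.EscapeDataString(path)
            + "&api-version=5.0";

        return await client.GetStringAsync(url);
    }
}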
Hawk pulling from the Azure Repo is the first step for creating a simple publishing process. Next, I want to set up a service hook to automatically trigger a Hawk refresh when I push to the post repo's master branch. Eventually, I also want to add image publishing and maybe some type of tool to help author the post metadata (auto-set the date, auto-slugify the title, validate categories and tags, etc.). I also want to re-enable comments, though social media makes that somewhat less important than it was back in the day. Regardless, I figure it's best to tackle improvements incrementally. Last thing I want is another long silence on this blog.
Note, I might not use Edge.js in Hawk anymore, but I still think Tomasz Janczuk must be some kind of a wizard.↩
I've been at Microsoft longer than I've been a husband, longer than I've been a father, longer than I've lived in the Pacific Northwest. It's been an awesome twenty-plus year ride, but the time has come for me to take on new challenges.
I'm joining NEO Global Development's brand new Seattle office (which is really in Redmond). NEO is a community driven open source project delivering the technical underpinnings for the Smart Economy. NEO Global Development (aka NGD) is the technical R&D arm of the NEO Foundation, the NEO project's governing body. I'm going to be the Chief Architect for the Seattle office.
This move will reunite me with former colleague and long-time friend John deVadoss. I worked for John back in my Architecture Strategy Team days. John is the director of NGD's Seattle office and I'm thrilled to be working with him again.
I had the privilege of presenting at NEO DevCon back in February. It was inspiring to meet folks from NEO's global community. NGD's main office is in Shanghai, NSPCC is in St. Petersburg, NeoResearch is in Brazil and the City of Zion community has team members from all corners of the Earth. I can't lie - the opportunity to work with this far-reaching and diverse global community was a big selling point for me joining NGD Seattle.
NGD Seattle's primary focus is on developer tools and experience ('natch) for the NEO platform. John and my soon-to-be colleague Longfei Wang previewed a few things we're working on last month at Consensus 2019. In particular, John showed off NEO Express Node, a private NEO blockchain management tool that I built. Of course, the Consensus preview is just a small taste of what we plan to deliver - especially once I join NGD full time and we build out more of our Seattle-based engineering team.
While developer experience will be my primary focus, I also expect to pitch in on the core NEO platform. NEO 3.0 development is already in full swing. Core platform might not be my focus, but platform capabilities and developer experience go hand in hand. I'm sure I'll have plenty of opportunity to contribute to the core as we work towards our 3.0 release.
I'll miss Microsoft - especially the amazing people I've had the opportunity to work with over the years. It's particularly hard to leave the xlang project. Cross platform language projection has been a passion project of mine for several years now. Knowing xlang is in the capable hands of folks like Ben, Scott, Ryan and Kenny does make it easier. Besides, xlang is open source so I can still submit PRs if I get really Microsoft-homesick, right?
[NotNull] attribute.
I decided to scrap my use of BufferedHtmlContent and built out several classes that implement IHtmlContent directly instead. For example, the links at the bottom of my master layout are rendered by the SocialLink class.
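The post doesn't include the SocialLink source, but a hand-rolled IHtmlContent implementation only needs one method. Here's a minimal sketch using the interface shape that eventually shipped in ASP.NET Core (the namespaces moved around during the ASP.NET 5 betas), with purely illustrative markup rather than Hawk's actual output:

using System.IO;
using System.Text.Encodings.Web;
using Microsoft.AspNetCore.Html;

// Illustrative only - not the actual SocialLink implementation from Hawk.
public class SocialLink : IHtmlContent
{
    readonly string url;
    readonly string title;

    public SocialLink(string url, string title)
    {
        this.url = url;
        this.title = title;
    }

    public void WriteTo(TextWriter writer, HtmlEncoder encoder)
    {
        // Encode the attribute and text values so the snippet is safe to emit into the layout.
        writer.Write("<a href=\"");
        encoder.Encode(writer, url);
        writer.Write("\">");
        encoder.Encode(writer, title);
        writer.Write("</a>");
    }
}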
Frankly, I'm not sure if rolling your own IHtmlContent class for every snippet of HTML you want to automate is a best practice. It seems like it's harder than it should be. It feels like ASP.NET needs a built-in class like BufferedHtmlContent, so I'm not sure why it's been removed.
However, as I went thru and converted all my old content to Markdown, I discovered that I needed some features that aren't supported by either the original implementation or the new CommonMark project. Luckily, I discovered the markdown-it project which implements the CommonMark spec but also supports syntax extensions. Markdown-it already had extensions for all of the extra features I needed - things like syntax highlighting, footnotes and custom containers.
The only problem with using markdown-it in Hawk is that it's written in JavaScript. JavaScript ~~is a fine language~~ has lots of great libraries, but I find it a chore to write significant amounts of code in JavaScript - especially async code. I did try to rewrite my blog post upload tool in JavaScript. It was much more difficult than the equivalent C# code. Maybe once promises become more widely used and async/await is available, JavaScript will feel like it has a reasonable developer experience to me. Until then, C# remains my weapon of choice.
I wasn't willing to use JavaScript for the entire publishing tool, but I still needed to use markdown-it 1. So I started looking for a way to integrate the small amount of JavaScript code that renders Markdown into HTML with the rest of my C# code base. I was expecting to have to set up some kind of local web service with Node.js to host the markdown-it code and call out to it from C# with HttpClient.
But then I discovered Edge.js. Holy frak, Edge.js blew my mind.
Edge.js provides nearly seamless interop between .NET and Node.js. I was able to drop the 30 lines of JavaScript code into my C# app and call it directly. It took all of about 15 minutes to prototype and it's less than 5 lines of C# code.
Seriously, I think Tomasz Janczuk must be some kind of a wizard.
To demonstrate how simple Edge.js is to use, let me show you how I integrated markdown-it into my publishing tool. Here is a somewhat simplified version of the JavaScript code I use to render markdown in my tool using markdown-it, including syntax highlighting and some other extensions.
// highlight.js integration lifted unchanged from
// https://github.com/markdown-it/markdown-it#syntax-highlighting
var hljs = require('highlight.js');
var md = require('markdown-it')({
    highlight: function (str, lang) {
        if (lang && hljs.getLanguage(lang)) {
            try {
                return hljs.highlight(lang, str).value;
            } catch (__) {}
        }
        try {
            return hljs.highlightAuto(str).value;
        } catch (__) {}
        return '';
    }
});

// I use a few more extensions in my publishing tool, but you get the idea
md.use(require('markdown-it-footnote'));
md.use(require('markdown-it-sup'));

var html = md.render(markdown);
As you can see, most of the code is just setting up markdown-it and its extensions. Actually rendering the markdown is just a single line of code.
In order to call this code from C#, we need to wrap the call to md.render with a JavaScript function that follows the Node.js callback style. We pass this wrapper function back to Edge.js by returning it from the JavaScript code.
// Ain't first order functions grand?
return function (markdown, callback) {
    var html = md.render(markdown);
    callback(null, html);
}
Note, I have to use the callback style in this case even though my code is synchronous. I suspect I'm the outlier here. There's a lot more async Node.js code out in the wild than synchronous.
To make this code available to C#, all you have to do is pass the JavaScript code into the Edge.js Func function. Edge.js includes an embedded copy of Node.js as a DLL. The Func function executes the JavaScript and wraps the returned Node.js callback function in a .NET async delegate. The .NET delegate takes an object input parameter and returns a Task<object>. The delegate input parameter is passed in as the first parameter to the JavaScript function. The second parameter passed to the callback function becomes the return value from the delegate (wrapped in a Task of course). I haven't tested it, but I assume Edge.js will convert the callback function's first parameter to a C# exception if you pass a value other than null.
It sounds complex, but it's a trivial amount of code:
// markdown-it setup code omitted for brevity
Func<object, Task<object>> _markdownItFunc = EdgeJs.Edge.Func(@"
    var md = require('markdown-it')()

    return function (markdown, callback) {
        var html = md.render(markdown);
        callback(null, html);
    }");

async Task<string> MarkdownItAsync(string markdown)
{
    return (string)await _markdownItFunc(markdown);
}
To make it easier to use from the rest of my C# code, I wrapped the Edge.js delegate with a statically typed C# function. This handles type checking and casting as well as providing IntelliSense for the rest of my app.
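With that wrapper in place, rendering a post from anywhere in the C# code is a one-liner (the input string here is just an example):

// Produces "<h1>Hello from <em>Edge.js</em></h1>", give or take a trailing newline.
string html = await MarkdownItAsync("# Hello from *Edge.js*");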
The only remotely negative thing I can say about Edge.js is that it doesn't support .NET Core yet. I had to build my markdown rendering tool as a "traditional" C# console app instead of a DNX Custom Command like the rest of Hawk's command line utilities. However, Luke Stratman is working on .NET Core support for Edge.js. So maybe I'll be able to migrate my markdown rendering tool to DNX sooner rather than later.
Rarely have I ever discovered such an elegant solution to a problem I was having. Edge.js simply rocks. As I said on Twitter, I owe Tomasz a beer or five. Drop me a line Tomasz and let me know when you want to collect.
I also investigated what it would take to update an existing .NET Markdown implementation like CommonMark.NET or F# Formatting to support custom syntax extensions. That would have been dramatically more code than simply biting the bullet and rewriting the post upload tool in JavaScript.↩
First off, I've changed jobs (again). Last year, I made the switch from program manager to dev. Unfortunately, the project I was working on was cancelled. After several months in limbo, I was reorganized into the .NET Core framework team back over in DevDiv. I've got lots of friends in DevDiv and love the open source work they are doing. But I really missed being in Windows. Earlier this year, I joined the team that builds the platform plumbing for SmartGlass. Not much to talk about publicly right now, but that will change sometime soon.
In addition to my day job in SmartGlass, I'm also pitching in to help the Microsoft Services Disaster Response team. I knew Microsoft has a long history of corporate giving. However, I was unaware of the work we do helping communities affected by natural disasters until recently. My good friend Lewis Curtis took over as Director of Microsoft Services Disaster Response last year. I'm currently helping out on some of the missions for Nepal in response to the devastating earthquake that hit there earlier this year.
Finally, I decided that I was tired of running Other People's Code™ on my website. So I built out a new blog engine called Hawk. It's written in C# (plus about 30 lines of JavaScript), uses ASP.NET 5 and runs on Azure. It's specifically designed for my needs - for example, it automatically redirects old DasBlog-style links like http://devhawk.net/2005/10/05/code+is+model.aspx. But I'm happy to let other people use it and would welcome contributions. When I get a chance, I'll push the code up to GitHub.
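As an aside, that kind of legacy-link redirect only takes a small piece of middleware. Hawk's code isn't up on GitHub yet, so this is just a guess at the shape - the regex, the slug scheme, and the ASP.NET Core-style names are all assumptions, not Hawk's actual implementation:

using System.Text.RegularExpressions;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

// Redirect DasBlog-style /yyyy/mm/dd/some+title.aspx links to a hypothetical slug-based scheme.
app.Use(async (context, next) =>
{
    var match = Regex.Match(context.Request.Path.Value ?? string.Empty,
        @"^/(\d{4})/(\d{2})/(\d{2})/(.+)\.aspx$");

    if (match.Success)
    {
        // DasBlog used '+' for spaces in titles; turn that into a dash-separated slug.
        var slug = match.Groups[4].Value.Replace("+", "-").ToLowerInvariant();
        context.Response.Redirect(
            $"/{match.Groups[1].Value}/{match.Groups[2].Value}/{match.Groups[3].Value}/{slug}",
            permanent: true);
        return;
    }

    await next();
});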
Furthermore, even though they lost, these playoffs are a promise of future success. I tell my kids all the time that the only way to get good at something is to work hard while you’re bad at it. Playoff hockey is no different. Most of the Caps had little or no playoff experience going into this series and it really showed thru the first three games. But they kept at it and played much better over the last four games of the series. They went 2-2 in those games, but the two losses went to overtime. A little more luck (or better officiating) and the Caps are headed to Pittsburgh instead of the golf course.
What a difference six seasons makes. Sure, they won the Presidents’ Trophy in 2010. But the promise of future playoff success has been broken, badly. The Caps have been on a pretty steep decline after getting beat by the eighth-seeded Canadiens in the first round of the playoffs in 2010. Since then, they’ve switched systems three times and head coaches twice. This year, they missed the playoffs entirely even with Alex Ovechkin racking up a league-leading 51 goals.
Today, the word came down that both the coach and general manager have been let go. As a Caps fan, I’m really torn about this. I mean, I totally agree that the coach and GM had to go – frankly, I was surprised it didn’t happen 7-10 days earlier. But now what do you do? The draft is two months and one day away, free agency starts two days after that. The search for a GM is going to have to be fast. Then the GM will have to make some really important decisions about players at the draft, free agency and compliance buyouts with limited knowledge of the players in our system. Plus, he’ll need to hire a new head coach – preferably before the draft as well.
The one positive note is that the salary cap situation for the Capitals looks pretty good for next year. The Capitals currently have the second largest amount of cap space / open roster slot in the league. (The Islanders are first with 14ドル.5 million / open roster slot. The Caps have just over 7ドル million / open roster slot.) They have only a handful of unrestricted free agents to re-sign – with arguably only one "must sign" (Mikhail Grabovski) in the bunch. Of course, this could also be a bug rather than a feature – having that many players under contract may make it harder for the new GM to shape the team in his image.
Whoever the Capitals hire as GM and coach, I’m not expecting a promising start. It feels like next season is already a wash, and we’re not even finished with the first round of this year’s playoffs yet.
I guess it could be worse.
I could be a Toronto Leafs fan.
But before we get to the manual steps, let’s create the WinRT client app. Again, we’re going to create a new project but this time we’re going to select "Blank App (Windows)" from the Visual C# -> Store Apps -> Windows App node of the Add New Project dialog. Note, I’m not using "Blank App (Universal)" or "Blank App (Windows Phone)" because the brokered WinRT component feature is not supported on Windows Phone. Call the client app project whatever you like, I’m calling mine "HelloWorldBRT.Client".
Before we start writing code, we need to reference the brokered component. We can’t reference the brokered component directly or it will load in the sandboxed app process. Instead, the app needs to reference a reference assembly version of the .winmd that gets generated automatically by the proxy/stub project. Remember in the last step when I said Kieran Mockford is an MSBuild wizard? The proxy/stub template project includes a custom target that automatically publishes the reference assembly winmd file used by the client app. When he showed me that, I was stunned – as I said, the man is a wizard. This means all you need to do is right click on the References node of the WinRT Client app project and select Add Reference. In the Reference Manager dialog, add a reference to the proxy/stub project you created in step two.
Now I can add the following code to the top of my App.OnLaunched function. Since this is a simple Hello World walkthru, I’m not going to bother to build any UI. I’m just going to inspect variables in the debugger. Believe me, the less UI I write, the better for everyone involved. Note, I’ve also added the P/Invoke signatures for GetCurrentProcess/ThreadID to the client app like I did in the brokered component in step one. This way, I can get the process and thread IDs for both the app and broker process and compare them.
var pid = GetCurrentProcessId();
var tid = GetCurrentThreadId();

var c = new HelloWorldBRT.Class();
var bpid = c.CurrentProcessId;
var btid = c.CurrentThreadId;
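For reference, the P/Invoke declarations behind those two calls look something like this - standard kernel32 signatures, not code lifted from the sample project:

using System.Runtime.InteropServices;

// Both APIs live in kernel32 and return the ID as an unsigned 32-bit value.
[DllImport("kernel32.dll")]
static extern uint GetCurrentProcessId();

[DllImport("kernel32.dll")]
static extern uint GetCurrentThreadId();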
At this point the app will compile, but if I run it the app will throw a TypeLoadException when it tries to create an instance of HelloWorldBRT.Class. The type can’t be loaded because we’re using the reference assembly .winmd published by the proxy/stub project – it has no implementation details, so it can’t load. In order to be able to load the type, we need to declare HelloWorldBRT.Class as a brokered component in the app’s package.appxmanifest file. For non-brokered components, Visual Studio does this for you automatically. For brokered components, we have to do it manually unfortunately. Every activatable class (i.e. class you can construct via "new") needs to be registered in the appx manifest this way.
To register HelloWorldBRT.Class, right click the Package.appxmanifest file in the client project, select "Open With" from the context menu and then select "XML (Text) editor" from the Open With dialog. Then you need to insert an inProcessServer extension that includes an ActivatableClass element for each class you can activate (aka has a public constructor). Each ActivatableClass element contains an ActivatableClassAttribute element that contains a pointer to the folder where the brokered component is installed. Here’s what I added to the Package.appxmanifest of my HelloWorldBRT.Client app.
<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>clrhost.dll</Path>
      <ActivatableClass ActivatableClassId="HelloWorldBRT.Class" ThreadingModel="both">
        <ActivatableClassAttribute
          Name="DesktopApplicationPath"
          Type="string"
          Value="D:\dev\HelloWorldBRT\Debug\HelloWorldBRT.PS"/>
      </ActivatableClass>
    </InProcessServer>
  </Extension>
</Extensions>
The key thing here is the addition of the DesktopApplicationPath ActivatableClassAttribute. This tells the WinRT activation logic that HelloWorldBRT.Class is a brokered component and where the managed .winmd file with the implementation details is located on the device. Note, you can use multiple brokered components in your side loaded app, but they all have the same DesktopApplicationPath.
Speaking of DesktopApplicationPath, the path I’m using here is the final output location of the proxy/stub components generated by the compiler. Frankly, this isn’t a good choice to use in a production deployment. But for the purposes of this walk thru, it’ll be fine.
Now when we run the app, we can load a HelloWorldBRT.Class instance and access its properties. We’re definitely seeing different process IDs when comparing the result of calling GetCurrentProcessId directly in App.OnLoaded vs. the result of calling GetCurrentProcessId in the brokered component. Of course, each run of the app will have different ID values, but this proves that we are loading our brokered component into a different process from where our app code is running.
Now you’re ready to go build your own brokered components! Here’s hoping you’ll find more interesting uses for them than comparing the process IDs of the app and broker processes in the debugger! 😄
Proxies and stubs look like they might be scary, but they’re actually trivial (at least in the brokered component scenario) because 100% of the code is generated for you. It couldn’t be much easier.
Right click the solution node and select Add -> New Project. Alternatively, you can select File -> New -> Project in the Visual Studio main menu, but if you do that make sure you change the default solution from "Create new Solution" to "Add to Solution". Regardless of how you launch the new project wizard, search for "broker" again, but this time select the "Brokered Windows Runtime ProxyStub" template. Give the project a name – I chose "HelloWorldBRT.PS".
Once you’ve created the proxy/stub project, you need to set a reference to the brokered component you created in step 1. Since proxies and stubs are native, this is a VC++ project. Adding a reference in a VC++ project is not as straightforward as it is in C# projects. Right click the proxy/stub project, select "Properties" and then select Common Properties -> References from the tree on the left. Press the "Add New Reference..." button to bring up the same Add Reference dialog you’ve seen in managed code projects. Select the brokered component project and press OK.
Remember when I said that 100% of the code for the proxy/stub is generated? I wasn’t kidding – creating the project from the template and referencing the brokered component project is literally all you need to do. Want proof? Go ahead and build now. If you watch the output window, you’ll see a bunch of output go by referencing IDL files and MIDLRT among other stuff. This proxy/stub template has some custom MSBuild tasks that generate the proxy/stub code using winmdidl and midlrt. The process is similar to what is described here. BTW, if you get a chance, check out the proxy/stub project file – it is a work of art. Major props to Kieran Mockford for his msbuild wizardry.
Unfortunately, it’s not enough just to build the proxy/stub – you also have to register it. The brokered component proxy/stub needs to be registered globally on the machine, which means you have to be running as an admin to do it. VS can register the proxy/stub for you automatically, but that means you have to run VS as an administrator. That always makes me nervous, but if you’re OK with running as admin you can enable proxy/stub registration by right clicking the proxy/stub project file, selecting Properties, navigating to Configuration properties -> Linker -> General in the tree of the project properties page, and then changing Register Output to "Yes".
If you don’t like running VS as admin, you can manually register the proxy/stub by running regsvr32 <proxystub dll> from an elevated command prompt. Note, you do have to re-register every time the public surface area of your brokered component changes, so letting VS handle registration is definitely the easier route to go.
In the third and final step, we’ll build a client app that accesses our brokered component.