Along with several of my endjin colleagues, I’m attending NDC London this week. Today was day 1, and here’s a run through of the sessions I attended and my thoughts.
— Jonathan George (@jon_george1) January 29, 2020
The day started with the keynote from Tess Ferrandez-Norlander, titled “We are the guardians of our future”. The focus of the talk was the unsettling ways that AI is being used right now across all aspects of our day to day lives, and the responsibilities we as software engineers must accept as we work on these systems.
It introduced me to a number of new terms, the most interesting of which was the concept of “bias laundering”. This is when the data we use to train our machine learning models reinforces institutional bias, while allowing us to shrug off responsibility by pinning the resultant biased inferences on the computer – “maths isn’t biased”. Tess showed a number of examples, ranging from the training sets used for various facial recognition models to natural language systems such as Google Translate.
Of particular interest was the behaviour of Google Translate when translating sentences to a gender-neutral language and back:
Great trick from @TessFerrandez to expose bias in Google’s algorithm:
— Carol 🥱 (@CarolSaysThings) January 29, 2020
It was also fascinating to discover that a standard image library, used to benchmark facial recognition models, is built from images of people in the news during the first few years of the 21st century. Apparently this image set is 77% male, 80% white, and 5% George W Bush. Yep.
The talk started and ended with a challenge to software engineers to take responsibility for the systems we develop, and consider a question which has existed in science for years – just because we can, does that mean we should?
Session 1 – Capability mapping
This was the first of what I expect to be several sessions I’ll attend that are focussed on microservices. This one, run by Ian Cooper, looked at defining the boundaries of microservices, and how this can be done by better understanding business processes and capabilities.
It started with some background on the history of microservices, including the now-famous edict from Jeff Bezos that all internal Amazon functionality be exposed via service interfaces – aka microservices. Ian described how this translates into a shift from project teams, working to schedules and delivery dates, to product teams, working on making specific areas of business functionality better. He also looked at the typical problems associated with monolithic architectures, focussing on the long release cycle and how this impacts productivity.
The main focus of the talk was how you go about defining the boundaries of these services. Go too fine-grained, and you end up with what he refers to as the “nanoservices antipattern” – essentially services which just implement CRUD operations for individual entities. When this happens you need to ask – where did the business logic go?
He looked at a couple of approaches for identifying the capabilities that should be implemented as services. The first is the classic Lean Manufacturing approach – the Value Stream Map. By going through the classic value stream mapping process, you map out the different processes that form part of fulfilling specific user requirements. These processes can be further decomposed into activities, and the activities (or in some cases, the processes themselves) become candidates for being turned into microservices.
The second approach – which was somewhat rushed through due to time constraints – was event storming, which can similarly result in a good set of candidate activities for microservices.
Finally Ian took a quick look at Domain Driven Design’s notion of Bounded Contexts, and why that isn’t necessarily the best place to start when trying to decompose a monolith into microservices. The problem here is that an existing monolith will often – by necessity – form a bounded context. It may do so in an extremely tortuous way. The point Ian made was that while all microservices are, by definition, bounded contexts, the reverse is not true. Bounded contexts that exist in the monolith are most likely too large to represent microservices, and a more fundamental rethink will likely be needed if the change is going to be effected well.
One thing that struck me in the first half of the session is that while it was not explicitly called out, there are a lot of parallels between the rules of classic Object Oriented Design and those of SOA/microservice architectures. In the same way that OO promotes the union of data and behaviour in a single package – the object – so does a microservice, albeit at a different level of granularity. Classic OO antipatterns – such as the anaemic domain model – have parallels in the world of microservices – the “nanoservice” referred to above. This is not really news, just one of those useful reminders that whilst software development approaches change, the fundamental underlying principles often remain applicable to the new ways of working.
Session 2 – Make your custom .NET GC – “whys” and “hows”
I chose this session as one I thought would be technically fascinating whilst being of little practical use, and it’s fair to say I wasn’t disappointed. It was a whirlwind tour, led by Konrad Kokosa, through the details of implementing a custom garbage collector in .NET – starting with why you’d want to do it, through the simplest possible implementation (illustrated with old-school “here be dragons” style pirate maps), to the ultimate goal of a concurrent compacting garbage collector and the reasons why it’s currently impossible to get there.
It was interesting to see that even the most basic calloc-based GC that Konrad showed performed less well than the default .NET GC. He was really clear that anyone who decides to write a GC with the goal of producing something better than the default is setting themselves a hugely ambitious goal – a more realistic objective would be to produce something that doesn’t just blow up the runtime!
Sadly the current state of play is that deficiencies in the interfaces that custom GCs use to talk to the runtime mean that it’s not really possible to build a “proper” custom GC right now, but Konrad is undaunted, and working on a PR for the runtime that will allow him to proceed with his goal.
So, this was a really interesting session with little practical application, but I’m glad I went.
Session 3 – Application diagnostics in .NET Core 3.1
This was quite a fun session – the first one at NDC that’s had two presenters – David Fowler and Damian Edwards – and it worked well, with both of them obviously enjoying their talk. They gave us a run through of the various diagnostic constructs now available in .NET Core, from ILogger at the top all the way down to EventSource at the bottom. This was followed by an example of how to use each to produce diagnostic data, and demos of how to consume the results, with a sprinkling of dos and don’ts for good measure.
They also showed us how to use some of the dotnet tools for gathering data from running processes – counters, dumps, GC dumps and so on – and how to view captured data in Visual Studio, using some samples of badly behaving code that they’ve made available on GitHub. I don’t generally need to do low level diagnostic work, so it was good to get this refresher on the available tooling.
They finished off showing a couple of the Visual Studio tools for debugging parallel code – the Parallel Stacks and Tasks windows. This I can see being more useful day to day, and I’m looking forward to playing with it some more.
Session 4 – Micro Frontends – a strive for fully verticalized systems
This session from David Leitner was a look at how we could apply the same goals we have when building microservices – small, autonomous, independently deployable, etc. – to the front ends we build over the top of them. It’s a pretty common situation to see a bunch of back-end microservices being fronted by a monolithic web application, but is there another way?
David started by suggesting that for the majority of scenarios, the monolithic web app is still a good choice. The scenarios where it’s worth taking on the added overhead of micro-frontends are really when you’re dealing with Amazon-level scale – Amazon themselves have multiple web front ends that make up their site, with some of the changes more obvious than others. For example, there’s an obvious visual difference in the UI when you hit the checkout, but actually the complete journey from landing, through searching for a product, adding it to the basket and checking out takes you through around 7 different front ends.
The challenge then is, if we do implement micro front-ends, how do we bring them together without impacting the user experience, and without forcing overly chatty or unnecessarily large interactions with the back end services. On the service interaction side he covered off various options, such as introducing presentation services tightly coupled to the micro front-ends, or using GraphQL based middleware.
He then moved onto options for composing the micro front-ends into a single application, covering off build-time, server side and client side integration and looking at methods and tools that can be used to help with these approaches, with potential trade-offs for each.
Whilst the talk didn’t get down into the detail, it was an interesting listen and definitely provided some food for thought.
Session 5 – Blazor, a new framework for browser-based .NET apps
I saw @stevensanderson pull a tin of beans out of his bag about 10 minutes before his talk and it didn't even cross my mind that that was an unusual thing for a speaker to do. pic.twitter.com/Deu26nXtb7
— Pete Smith (@beyond_code) January 29, 2020
I came away from the session enthused to learn more about Blazor. Steve’s demo was slick, and he has an obvious passion for what he’s doing with the Blazor team. They are bang up to date with gRPC support, and have a well thought through front-end testing story. To finish off, Steve showed a couple of their “experimental” projects – native desktop apps and native mobile apps built using Blazor without the need for web rendering – which were really impressive. Whilst I went in thinking that Blazor was just another attempt to hide underlying complexity from developers – à la WebForms – I came out thinking that it has serious potential to help in my day to day work.
This is the first time I’ve attended an NDC event and it was a really enjoyable day with a good selection of sessions to attend. I’m writing this with what I feel is a well-earned beer, and really looking forward to the next two days.