
In this post we show how a combination of Kubernetes, Azure Durable Functions and Azure API Management can be used to make legacy batch processing code available as a RESTful API. This is a great example of how serverless technologies can be used to expose legacy software to the public internet in a controlled way, allowing you to reap some of the benefits of a cloud-first approach without fully rewriting and migrating existing software.
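As a rough, hypothetical sketch of that shape (not the implementation from the post), an HTTP-triggered Durable Functions orchestration can accept a request, kick off the long-running batch work, and hand back status-polling URLs; all of the function names here are invented:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class LegacyBatchApi
{
    // HTTP entry point: starts the orchestration and returns status-check
    // URLs, so callers poll for the result rather than holding a connection
    // open while a long batch run completes.
    [FunctionName("StartBatch")]
    public static async Task<HttpResponseMessage> Start(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient client)
    {
        string instanceId = await client.StartNewAsync<object>("RunBatchOrchestration", null);
        return client.CreateCheckStatusResponse(req, instanceId);
    }

    // The orchestrator hands the real work to an activity function, which
    // could call into the legacy workload running on Kubernetes.
    [FunctionName("RunBatchOrchestration")]
    public static Task<string> Orchestrate(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
        => context.CallActivityAsync<string>("InvokeLegacyBatch", null);

    // Stand-in for invoking the legacy batch process.
    [FunctionName("InvokeLegacyBatch")]
    public static Task<string> InvokeLegacyBatch([ActivityTrigger] string input)
        => Task.FromResult("done");
}
```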


NDC London 2020 – My highlights

by Ed Freeman

A couple of weeks back, along with a rabble of other endjineers, I was fortunate enough to attend NDC London. This wasn’t my first time at an NDC conference – in fact, my previous outing was to Oslo to experience the “original” flavour of NDC back in 2018. That was extremely fun and packed with […]


NDC London day 1 was mainly focused on the responsibility we all face when developing new technology. As developers, we cannot absolve ourselves of the consequences of failing to consider diversity and inclusivity when designing our solutions.


In this blog from the Azure Advent Calendar 2019 we discuss building a secure data solution using Azure Data Lake. Data Lake has many features which enable fine-grained security and data separation. It is also built on Azure Storage, which enables us to take advantage of all of those features and means that ADLS is still a cost-effective storage option!

This post runs through some of the great features of ADLS and walks through an example of how we build our solutions using this technology!


Very excited to be speaking at NDC in London in January! The talk, “Combatting illegal fishing with Machine Learning and Azure”, focuses on the recent work we did with OceanMind. OceanMind are a not-for-profit who are working on cleaning up the world’s oceans with the help of Microsoft’s cloud technologies. […]


How Azure DevTestLabs is helping me climb Everest

by Carmel Eve

Remote working allows us to work from anywhere we want. This brings a huge amount of flexibility and freedom, however we do need the help of a working laptop! When Carmel’s laptop gave in just before a trip, she used Azure DevTestLabs to allow her to continue to work using a 10-year-old Mac that probably wouldn’t have been up to the task alone…


Machine learning often seems like a black box. This post walks through what’s actually happening under the covers, in an attempt to de-mystify the process!

Neural networks are built up of neurons. In a shallow neural network we have an input layer, a “hidden” layer of neurons, and an output layer. In deep learning, there are simply more hidden layers, which allows neurons’ inputs and outputs to be combined to build up a more detailed picture.
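As a rough illustration of that structure, here is a minimal forward pass through a shallow network in C#; the weights and biases are arbitrary illustrative values, not trained ones:

```csharp
using System;
using System.Linq;

class NeuralNetSketch
{
    // A single neuron: a weighted sum of its inputs plus a bias, passed
    // through a non-linear activation (here, the sigmoid function).
    static double Neuron(double[] inputs, double[] weights, double bias) =>
        Sigmoid(inputs.Zip(weights, (x, w) => x * w).Sum() + bias);

    static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

    static void Main()
    {
        double[] input = { 0.5, -1.2, 3.0 };

        // One "hidden" layer of two neurons; a deep network would simply
        // stack more layers like this, feeding each layer's outputs forward.
        double h1 = Neuron(input, new[] { 0.1, 0.4, -0.2 }, 0.3);
        double h2 = Neuron(input, new[] { -0.3, 0.2, 0.5 }, -0.1);

        // The output layer combines the hidden layer's outputs.
        double output = Neuron(new[] { h1, h2 }, new[] { 0.7, -0.6 }, 0.0);
        Console.WriteLine(output);
    }
}
```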

If you have an interest in Machine Learning and what is really happening, definitely give this a read (WARNING: Some algebra ahead…)!


Building a secure solution on Azure can be a daunting task. Using Azure Functions and Managed Identities, we have built up a pattern for giving services access to one another, without the need to store credentials. These managed identities can be given access to the necessary resources. For example, they can be granted roles and added to access control lists in ADLS Gen2 accounts, or given the ability to access keys in Key Vault. This means that data can be securely accessed without needing to store connection strings or app passwords.
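A minimal sketch of what that looks like from the consuming side, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets packages; the vault URL and secret name are hypothetical:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class ManagedIdentityExample
{
    static void Main()
    {
        // DefaultAzureCredential picks up the service's managed identity at
        // runtime (and falls back to developer credentials locally), so no
        // connection string or app password is ever stored in configuration.
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"), // hypothetical vault
            new DefaultAzureCredential());

        KeyVaultSecret secret = client.GetSecret("my-secret"); // hypothetical name
        Console.WriteLine(secret.Value);
    }
}
```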


This is the second blog in a series around design patterns. This post focuses on the builder pattern. The builder pattern is used when there is complex setup involved in creating an object. Like the other creational patterns, it also separates out the construction of an object from the object’s use.
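A minimal illustration of the pattern in C# (the Report class and its options are hypothetical, not taken from the post):

```csharp
using System;

// A hypothetical object with complex setup, built step by step.
public class Report
{
    public string Title { get; }
    public string Body { get; }
    public bool IncludeCharts { get; }

    public Report(string title, string body, bool includeCharts)
    {
        Title = title;
        Body = body;
        IncludeCharts = includeCharts;
    }
}

// The builder separates construction from use: callers describe what they
// want step by step, and only Build() produces the finished object.
public class ReportBuilder
{
    private string title = "Untitled";
    private string body = "";
    private bool includeCharts;

    public ReportBuilder WithTitle(string value) { title = value; return this; }
    public ReportBuilder WithBody(string value) { body = value; return this; }
    public ReportBuilder WithCharts() { includeCharts = true; return this; }

    public Report Build() => new Report(title, body, includeCharts);
}

class Program
{
    static void Main()
    {
        Report report = new ReportBuilder()
            .WithTitle("Quarterly results")
            .WithCharts()
            .Build();
        Console.WriteLine(report.Title);
    }
}
```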


Here at endjin we’ve done a lot of work around data analysis and ETL. As part of this we have done some work with Databricks Notebooks on Microsoft Azure. Notebooks can be used for complex and powerful data analysis using Spark. Spark is a “unified analytics engine for big data and machine learning”. It allows you to run data analysis workloads, and can be accessed via many APIs. This means that you can build up data processes and models using a language you feel comfortable with. They can also be run as an activity in an ADF pipeline, and combined with Mapping Data Flows to build up a complex ETL process which can be run via ADF.
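For example, one of those APIs is .NET for Apache Spark; here is a minimal sketch, assuming the Microsoft.Spark package and a hypothetical sales.csv file:

```csharp
using Microsoft.Spark.Sql;

class SparkSketch
{
    static void Main()
    {
        // Start (or attach to) a Spark session from C#.
        SparkSession spark = SparkSession.Builder()
            .AppName("etl-example")
            .GetOrCreate();

        // Read a (hypothetical) CSV file and run a simple aggregation.
        DataFrame df = spark.Read()
            .Option("header", "true")
            .Csv("sales.csv");

        df.GroupBy("region").Count().Show();
    }
}
```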


Mapping Data Flows are a relatively new feature of ADF. They allow you to visually build up complex data transformation sequences. This can aid in the streamlining of data manipulation and ETL processes, without the need to write any code! This post gives a brief introduction to the technology, and what this could enable!


At endjin we have a high quality bar when it comes to our code. As part of this we carry out regular code reviews. One of the tools we have used for these code reviews is NDepend. This is the second in a blog series written as we carried out that process. This post focuses on the insight you can quickly gain just by glancing at the NDepend UI.


In this post Carmel runs through some of the main principles behind agile estimation and planning. At endjin we use a lot of these techniques in our projects and this is a great post which highlights the reasons behind some of what we do. The key motivation behind good estimation is to be useful for project planning. There is a huge amount of inherent uncertainty surrounding estimates, especially early in the project. So, we shift our aim away from 100% precise, or “true”, estimates and towards providing estimates which are useful and accurate.

Carmel also runs through the steps in an agile delivery and release process. Definitely worth the read if you have an interest in agile and/or project management!


11 cheers for binary (And 3 for hexadecimal)!

by Carmel Eve

Sometimes it’s good to go back to basics… This is a quick post that runs through binary and hexadecimal numbers, and how those relate to our everyday computing!
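For a flavour of the content, base conversion is built right into .NET:

```csharp
using System;

class NumberBases
{
    static void Main()
    {
        // Decimal 42 in binary and hexadecimal.
        Console.WriteLine(Convert.ToString(42, 2)); // 101010
        Console.WriteLine(42.ToString("X"));        // 2A

        // And back again: parse a string in a given base.
        Console.WriteLine(Convert.ToInt32("101010", 2)); // 42
        Console.WriteLine(Convert.ToInt32("2A", 16));    // 42

        // The title's joke: binary "11" and hexadecimal "3" are both three.
        Console.WriteLine(Convert.ToInt32("11", 2)); // 3
    }
}
```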


So, this week we are looking at the Buffer and Window Rx operators. (If you have no idea what I’m on about, I suggest you start at the beginning!) There are a few different implementations of these operators, and we are going to focus on the time-based versions. In order to do this, we need […]
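For a flavour of those time-based overloads, here is a minimal Rx.NET sketch (assuming the System.Reactive package; the intervals are arbitrary):

```csharp
using System;
using System.Reactive.Linq;

class BufferWindowSketch
{
    static void Main()
    {
        // A value every 300ms, purely for illustration.
        IObservable<long> source = Observable.Interval(TimeSpan.FromMilliseconds(300));

        // Buffer collects everything seen in each 1-second interval and
        // emits it as a list once the interval closes.
        source.Buffer(TimeSpan.FromSeconds(1))
              .Subscribe(batch => Console.WriteLine($"Batch of {batch.Count}"));

        // Window is similar, but emits a nested observable per interval,
        // so items can be processed as they arrive rather than at the end.
        source.Window(TimeSpan.FromSeconds(1))
              .Subscribe(window => window.Count()
                  .Subscribe(count => Console.WriteLine($"Window of {count}")));

        Console.ReadLine(); // keep the process alive while values flow
    }
}
```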


After a brief foray into Azure AD, we’re back onto Rx! (If you missed parts 1 and 2 then it might be worth having a quick read – I’m going to gloss over some of the stuff common to both.) OnNext(The GroupBy operator) This week we’re looking at the GroupBy operator. This one’s a bit more involved, […]
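A minimal sketch of the operator in Rx.NET (the even/odd split is an illustrative example, not the one from the post):

```csharp
using System;
using System.Reactive.Linq;

class GroupBySketch
{
    static void Main()
    {
        IObservable<int> numbers = Observable.Range(1, 10);

        // GroupBy splits one stream into a stream of keyed sub-streams;
        // here, one group for the even numbers and one for the odd.
        numbers.GroupBy(n => n % 2 == 0 ? "even" : "odd")
               .Subscribe(group =>
                   group.Subscribe(n => Console.WriteLine($"{group.Key}: {n}")));
    }
}
```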


When he joined endjin, Technical Fellow Ian sat down with founder Howard for a Q&A session. This was originally published on LinkedIn in 5 parts, but is republished here, in full. Ian talks about his path into computing, some highlights of his career, the evolution of the .NET ecosystem, AI, and the software engineering life.


There’s been a little bit of a gap since my last Rx blog, I’ve been pretty busy with keeping up with Advent of Code in any spare time (and I’m sure there will be a blog along those lines at some point in the near future). But, for now, it’s time for a deep dive […]


In case you missed it… Here’s a link to my last blog on understanding Rx (luckily this blog has an internal buffer so if you’re just tuning in now, you’ve not missed your chance)! OnNext(Understanding of the Rx operators) Now one of the most exciting things about Rx is that it has its own implementation […]


Overflowing with dataflow part 2: TPL Dataflow

by Carmel Eve

Edit: In case you missed it! Here’s a link to part 1, a general overview of dataflow as a processing technique! The specific implementation of dataflow that I want to talk about is the TPL dataflow library. The task parallel library is a .NET library which aims to make parallel processing and concurrency simpler to […]
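A minimal sketch of a TPL Dataflow pipeline (the squaring stage and the options chosen are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class DataflowSketch
{
    static async Task Main()
    {
        // A TransformBlock runs its delegate on each input, potentially in
        // parallel, and passes the results downstream.
        var square = new TransformBlock<int, int>(
            n => n * n,
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

        // An ActionBlock is a terminal stage that just consumes items.
        var print = new ActionBlock<int>(n => Console.WriteLine(n));

        // Link the blocks into a pipeline and propagate completion.
        square.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 1; i <= 5; i++)
        {
            square.Post(i);
        }

        square.Complete();
        await print.Completion;
    }
}
```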