
Carmel Eve's Blog

NDC London day 1 was mainly focused on the responsibility we all face when developing new technology. As developers, we cannot absolve ourselves of the consequences of failing to consider diversity and inclusivity when designing our solutions.


There are many different paths into the tech industry, and Carmel has been speaking at some local schools about joining it from a scientific background. In this post she discusses the crucial tools a science background gives you which can help you succeed in tech!


In this blog from the Azure Advent Calendar 2019 we discuss building a secure data solution using Azure Data Lake. Data Lake has many features which enable fine grained security and data separation. It is also built on Azure Storage, which means we can take advantage of all of Storage's features while ADLS remains a cost effective storage option!

This post runs through some of the great features of ADLS and walks through an example of how we build our solutions using this technology!


In January 2020, Carmel is speaking about creating high performance geospatial algorithms in C# which detect suspicious vessel activity, helping to alert law enforcement to illegal fishing. The input data is fed from Azure Data Lake Storage Gen 2 and converted into data projections optimised for high-performance computation. This code is then hosted in Azure Functions for cheap, consumption based processing.


How Azure DevTestLabs is helping me climb Everest

by Carmel Eve

Remote working allows us to work from anywhere we want. This brings a huge amount of flexibility and freedom, however we do need the help of a working laptop! When Carmel’s laptop gave up just before a trip, she used Azure DevTestLabs to allow her to continue to work using a 10 year old Mac that probably wouldn’t have been up to the task alone…


We worked on a project recently which required us to build a highly performant system for processing vast quantities of messages in real time. We had made the decision to run this processing using Azure Functions with C#. This post runs through some of the techniques we used for writing highly performant, low allocation code, including data streaming, list preallocation and the relatively new C# feature: Span.
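For a flavour of what that looks like, here is a minimal sketch (not the project's actual code) of two of those techniques: preallocating a list to its expected size, and slicing a line with ReadOnlySpan&lt;char&gt; so that no intermediate strings are allocated. The ParseValues method and its comma-separated input are purely illustrative.

```csharp
using System;
using System.Collections.Generic;

public static class LowAllocationExample
{
    public static List<double> ParseValues(string line, int expectedCount)
    {
        // Preallocate the backing array once, rather than letting the list grow repeatedly.
        var values = new List<double>(expectedCount);

        ReadOnlySpan<char> remaining = line.AsSpan();
        while (!remaining.IsEmpty)
        {
            int comma = remaining.IndexOf(',');
            ReadOnlySpan<char> field = comma < 0 ? remaining : remaining.Slice(0, comma);

            // On modern .NET, double.TryParse has a ReadOnlySpan<char> overload,
            // so no substring is ever allocated for the field.
            if (double.TryParse(field, out double value))
            {
                values.Add(value);
            }

            remaining = comma < 0 ? ReadOnlySpan<char>.Empty : remaining.Slice(comma + 1);
        }

        return values;
    }
}
```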


Machine learning often seems like a black box. This post walks through what’s actually happening under the covers, in an attempt to de-mystify the process!

Neural networks are built up of neurons. In a shallow neural network we have an input layer, a “hidden” layer of neurons, and an output layer. In deep learning there are simply more hidden layers, which allows the neurons’ inputs and outputs to be combined to build up a more detailed picture.

If you have an interest in Machine Learning and what is really happening, definitely give this a read (WARNING: Some algebra ahead…)!
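As a rough illustration of what a single neuron in one of those layers does, here is a toy sketch: a weighted sum of the inputs plus a bias, passed through an activation function. The inputs, weights and the choice of a sigmoid activation are all hypothetical, just to show the shape of the calculation.

```csharp
using System;

public static class NeuronExample
{
    // A common activation function; it squashes any value into the range (0, 1).
    static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

    // One neuron: weighted sum of inputs, plus a bias, through the activation.
    public static double NeuronOutput(double[] inputs, double[] weights, double bias)
    {
        double z = bias;
        for (int i = 0; i < inputs.Length; i++)
        {
            z += inputs[i] * weights[i];
        }

        return Sigmoid(z);
    }

    public static void Main()
    {
        // Two inputs feeding a single hidden neuron (illustrative numbers only).
        double activation = NeuronOutput(new[] { 0.5, 0.8 }, new[] { 0.4, -0.6 }, 0.1);
        Console.WriteLine(activation);
    }
}
```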


This blog is part of a series around design patterns. This post focuses on the composite pattern. The composite pattern is often used in situations where you want to be able to treat groups and individuals in the same way during processing.
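As a quick, hypothetical illustration (not taken from the post itself), the sketch below shows an individual product and a bundle of products sharing one interface, so calling code can ask either for a price without caring which it has.

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IPriceable
{
    decimal GetPrice();
}

// The "individual" - a leaf in composite terms.
public class Product : IPriceable
{
    private readonly decimal price;
    public Product(decimal price) => this.price = price;
    public decimal GetPrice() => this.price;
}

// The "group" - a composite that holds other IPriceables, including nested bundles.
public class ProductBundle : IPriceable
{
    private readonly List<IPriceable> children = new List<IPriceable>();
    public void Add(IPriceable child) => this.children.Add(child);

    // The composite simply delegates to its children.
    public decimal GetPrice() => this.children.Sum(c => c.GetPrice());
}
```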


Building a secure solution on Azure can be a daunting task. Using Azure Functions and Managed Identities, we have built up a pattern for giving services access to one another, without the need to store credentials. These managed identities can be given access to the necessary resources. For example, they can be granted roles and added to access control lists in ADLS Gen2 accounts, or given the ability to access keys in Key Vault. This means that data can be securely accessed without needing to store connection strings or app passwords.
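As a rough sketch of what this looks like in code, the example below reads a secret from Key Vault using the Azure.Identity and Azure.Security.KeyVault.Secrets packages. The vault URI and secret name are placeholders, and it assumes the function's managed identity has already been granted access to the vault; no connection string or password appears anywhere in the code.

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SecretAccessExample
{
    public static string ReadSecret(string vaultUri, string secretName)
    {
        // DefaultAzureCredential picks up the Function App's managed identity at runtime.
        var client = new SecretClient(new Uri(vaultUri), new DefaultAzureCredential());

        KeyVaultSecret secret = client.GetSecret(secretName);
        return secret.Value;
    }
}
```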


This is the second blog in a series around design patterns. This post focuses on the builder pattern. The builder pattern is used when there is complex set-up involved in creating an object. Like the other creational patterns, it also separates out the construction of an object from the object’s use.
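For a flavour of the pattern, here is a small, hypothetical example: a ReportBuilder gathers the set-up step by step and only produces the Report at the end, keeping construction separate from use.

```csharp
public class Report
{
    public string Title { get; }
    public string Body { get; }
    public Report(string title, string body) { Title = title; Body = body; }
}

public class ReportBuilder
{
    private string title = "Untitled";
    private string body = string.Empty;

    // Each step returns the builder so the set-up can be chained fluently.
    public ReportBuilder WithTitle(string title) { this.title = title; return this; }
    public ReportBuilder WithBody(string body) { this.body = body; return this; }

    // Construction happens in one place, only once the set-up is complete.
    public Report Build() => new Report(this.title, this.body);
}

// Usage: var report = new ReportBuilder().WithTitle("Q1 sales").WithBody("...").Build();
```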


Here is a blog written by our apprentice Carmel after her second year of the apprenticeship. We think it demonstrates the huge variety of things we get to work on here at endjin, and highlights the best of the blogs that Carmel produced during the year – of which there were a lot!

If you think an apprenticeship with us is something which might interest you – send a CV through to hello@endjin.com!


This is the first blog in a series about design patterns. This blog focuses on the differences between the factory method and abstract factory patterns. The factory method is a method which takes the creation of objects and moves it out of the main body of the code. An abstract factory is similar to the factory method, but instead of a method it is an object in its own right.
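The hypothetical sketch below (not taken from the post) shows the contrast: in the factory method version, creation is a single overridable method on a class that also does other work; in the abstract factory version, the factory is a standalone object that can be passed around and swapped as a whole.

```csharp
public interface ITransport { }
public class Truck : ITransport { }
public class Ship : ITransport { }

// Factory method: object creation is moved out of the main body of the code
// into one method that subclasses override.
public abstract class Logistics
{
    protected abstract ITransport CreateTransport();   // the factory method

    public ITransport PlanDelivery() => this.CreateTransport();
}

public class RoadLogistics : Logistics
{
    protected override ITransport CreateTransport() => new Truck();
}

// Abstract factory: the factory is an object in its own right.
public interface ITransportFactory
{
    ITransport CreateTransport();
}

public class SeaTransportFactory : ITransportFactory
{
    public ITransport CreateTransport() => new Ship();
}
```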


Here at endjin we’ve done a lot of work around data analysis and ETL. As part of this we have done some work with Databricks Notebooks on Microsoft Azure. Notebooks can be used for complex and powerful data analysis using Spark. Spark is a “unified analytics engine for big data and machine learning”. It allows you to run data analysis workloads, and can be accessed via many APIs. This means that you can build up data processes and models using a language you feel comfortable with. Notebooks can also be run as an activity in an ADF pipeline, and combined with Mapping Data Flows to build up a complex ETL process which can be run via ADF.


Software architecture and Agile project planning are often seen to be at odds. However, here at endjin we think that the way in which they intersect solves a lot of the common issues surrounding architecture. The key to successful architecture is constantly keeping the drivers in mind and having a tight communication loop as the architecture is implemented. These concepts are ones which are key to agile project management, and the combination of these two disciplines can be extremely powerful and successful when applied correctly.


Mapping Data Flows are a relatively new feature of ADF. They allow you to visually build up complex data transformation sequences. This can aid in the streamlining of data manipulation and ETL processes, without the need to write any code! This post gives a brief introduction to the technology, and what this could enable!


At endjin we have a high quality bar when it comes to our code. As part of this we carry out regular code reviews. One of the tools we have used for these code reviews is NDepend. This is the second in a blog series written as we carried out that process. This post focuses on the insight you can quickly gain just by glancing at the NDepend UI.


At endjin we have a high quality bar when it comes to our code. As part of this we carry out regular code reviews. One of the tools we have used for these code reviews is NDepend. This is the first in a blog series written as we carried out that process. This post runs through the different metrics used by NDepend, and the reasons that each of these can be an indication of code quality.


In this post Carmel runs through some of the main principles behind agile estimation and planning. At endjin we use a lot of these techniques in our projects and this is a great post which highlights the reasons behind some of what we do. The key motivation behind good estimation is to be useful for project planning. There is a huge amount of inherent uncertainty surrounding estimates, especially early in a project. So, we shift our aim away from 100% precise, or “true”, estimates and towards providing estimates which are useful and accurate. Carmel also runs through the steps in an agile delivery and release process. Definitely worth the read if you have an interest in agile and/or project management!


11 cheers for binary (And 3 for hexadecimal)!

by Carmel Eve

Sometimes it’s good to go back to the basics… This is a quick post that runs through binary and hexadecimal numbers, and how those relate to our everyday computing!
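As a quick worked example of the idea (the numbers are arbitrary): the value eleven is 1011 in binary, and one hexadecimal digit stands for exactly four bits.

```csharp
using System;

public static class NumberBasesExample
{
    public static void Main()
    {
        int value = 0b1011;                             // binary literal: 8 + 0 + 2 + 1 = 11
        Console.WriteLine(value);                       // 11
        Console.WriteLine(Convert.ToString(value, 2));  // "1011" - back to binary text
        Console.WriteLine(value.ToString("X"));         // "B" - one hex digit covers four bits
        Console.WriteLine(0xFF);                        // 255 - two hex digits make one byte
    }
}
```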


This is the final blog in a series which delves into how the Rx operators work under the covers. This series aims to provide a greater understanding of Rx and its operators. This post focuses on the JOIN operator.
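As a rough sketch of the operator's shape (assuming the System.Reactive NuGet package; the stream names and the one-second window are purely illustrative), Join pairs elements from two streams whose duration windows overlap:

```csharp
using System;
using System.Reactive.Linq;

public static class RxJoinExample
{
    public static IObservable<string> PairClicksWithReadings(
        IObservable<string> clicks,
        IObservable<double> readings)
    {
        return clicks.Join(
            readings,
            _ => Observable.Timer(TimeSpan.FromSeconds(1)),  // each click stays "open" for one second
            _ => Observable.Empty<long>(),                   // each reading's window closes immediately
            (click, reading) => $"{click}: {reading}");      // pair up overlapping click/reading windows
    }
}
```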