
Carmel Eve's Blog

This is the third blog in a series about learning DAX and Power BI. The first two blogs focused on filter and row contexts. We are now moving on to calculated columns and measures, the main features used to support the display of complex visuals. They allow you to combine columns, aggregate values, reformat data, and much more. The difference between the two can get a bit confusing, so we’ve attempted to make it clearer using some concrete examples!


Optimising C# for a serverless environment

by Carmel Eve

In our recent project with OceanMind we used #AzureFunctions to process marine vessel telemetry from around the world. This involved processing huge quantities of data in close to real time. We optimised our processing for a #serverless environment, with the outcome that the compute would cost less than £10 / month!

This post summarises some of the techniques we used, including some concrete examples of optimisations we made.

#bigdata #dataprocessing #dataanalysis #bigcompute


Applying the scientific experimental process to software development leads to fully validated solutions. This approach gives you confidence in your designs and means that you can quickly identify ideas which are not worth pursuing.

At endjin we use hypotheses and experimentation when designing any solution, and this gives us full confidence in the designs we produce. In this post we outline the steps and advantages of this approach.


Learning DAX and Power BI – Row Contexts

by Carmel Eve

Here is the second blog in a series around learning DAX and Power BI. This post focuses on row contexts, which are used when iterating over the rows of a table, for example when evaluating a calculated column. Row contexts, along with filter contexts, form the basis of the DAX language. Once you understand this underlying theory, it is purely a case of learning the syntax for the different operations built on top of it.


Learning DAX and Power BI – Filter Contexts

by Carmel Eve

Here is the first in a series of blog posts around understanding DAX and Power BI. This post focuses on filter contexts, a central concept that is vital for writing effective and powerful DAX!

In this series Carmel walks through the main ideas and syntax surrounding the DAX language, and provides examples of using these over a dataset. DAX is an extremely powerful language. Using these techniques it is possible to build up complex reports which provide the insight you really need!


Five quick tips for public speaking

by Carmel Eve

We all get nervous in the run-up to a public speaking event. However, there are things we can do to help alleviate some of the pressure. Here are five quick tips for preparing for a talk!


Remote working has many benefits. It allows us a huge amount of freedom, especially around managing our personal and professional lives. But alongside these benefits it also brings challenges. When you combine these challenges with certain aspects of mental health, things can sometimes be difficult to manage. However, there are also ways in which remote work can enable us to control our environments in a way that would be extremely difficult if working from a conventional office.

We have been a fully remote company now for over two years, and in this post Carmel shares some of her experiences of managing mental health whilst remote working.


NDC London day 1 was mainly focused on the responsibility we all face when developing new technology. As developers, we cannot absolve ourselves of the consequences of failing to consider diversity and inclusivity when designing our solutions.


There are many different paths into the tech industry. Carmel has been speaking at some local schools about joining the industry from a scientific background, and in this post she discusses the crucial tools which science gives you and which can help you succeed in tech!


In this blog from the Azure Advent Calendar 2019 we discuss building a secure data solution using Azure Data Lake. Data Lake has many features which enable fine-grained security and data separation. It is also built on Azure Storage, which lets us take advantage of all of those features and means that ADLS is still a cost-effective storage option!

This post covers some of the great features of ADLS and runs through an example of how we build our solutions using this technology!


In January 2020, Carmel is speaking about creating high-performance geospatial algorithms in C# that detect suspicious vessel activity, helping alert law enforcement to illegal fishing. The input data is fed from Azure Data Lake Storage Gen 2 and converted into data projections optimised for high-performance computation. This code is then hosted in Azure Functions for cheap, consumption-based processing.


How Azure DevTestLabs is helping me climb Everest

by Carmel Eve

Remote working allows us to work from anywhere we want. This brings a huge amount of flexibility and freedom; however, we do need the help of a working laptop! When Carmel’s laptop gave up just before a trip, she used Azure DevTestLabs to continue working on a 10-year-old Mac that probably wouldn’t have been up to the task alone…


We worked on a project recently which required us to build a highly performant system for processing vast quantities of messages in real time. We had made the decision to run this processing using Azure Functions with C#. This post runs through some of the techniques we used for writing performant, low-allocation code, including data streaming, list preallocation and the relatively new C# feature: Span&lt;T&gt;.
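
As a flavour of the kind of technique the post covers, here is a minimal sketch (illustrative only, not the project’s actual code) of using Span&lt;T&gt; to pull a field out of a message without allocating intermediate strings; the message format and method names here are hypothetical:

```csharp
using System;

public static class TelemetryParsing
{
    // Hypothetical example: parse the second comma-separated field of a
    // message such as "vesselId,42.5,...". Slicing a ReadOnlySpan<char>
    // creates no new strings, so nothing extra is allocated.
    public static double ParseSecondField(string message)
    {
        ReadOnlySpan<char> span = message.AsSpan();

        span = span.Slice(span.IndexOf(',') + 1);   // skip the first field

        int end = span.IndexOf(',');
        if (end >= 0)
        {
            span = span.Slice(0, end);              // trim trailing fields
        }

        // double.Parse has an overload accepting ReadOnlySpan<char>,
        // avoiding a substring allocation here too.
        return double.Parse(span);
    }
}
```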


Machine learning often seems like a black box. This post walks through what’s actually happening under the covers, in an attempt to demystify the process!

Neural networks are built up of neurons. In a shallow neural network we have an input layer, a “hidden” layer of neurons, and an output layer. In deep learning there are simply more hidden layers, which allows neurons’ inputs and outputs to be combined to build up a more detailed picture.
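
As a rough illustration of that structure (a sketch, not code from the post), each layer is just a set of weighted sums fed through an activation function, and a deeper network simply chains more layers together:

```csharp
using System;

public static class ShallowNetwork
{
    // One layer of a network: each neuron computes a weighted sum of the
    // inputs plus a bias, then applies an activation function.
    public static double[] Layer(double[] inputs, double[][] weights, double[] biases)
    {
        var outputs = new double[weights.Length];
        for (int n = 0; n < weights.Length; n++)
        {
            double sum = biases[n];
            for (int i = 0; i < inputs.Length; i++)
            {
                sum += weights[n][i] * inputs[i];
            }
            outputs[n] = Sigmoid(sum);
        }
        return outputs;
    }

    // The sigmoid activation squashes any value into the range (0, 1).
    private static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // A "shallow" network is input -> one hidden layer -> output; a deep
    // network just inserts more Layer calls between input and output.
    public static double[] Forward(
        double[] inputs,
        double[][] hiddenWeights, double[] hiddenBiases,
        double[][] outputWeights, double[] outputBiases)
    {
        double[] hidden = Layer(inputs, hiddenWeights, hiddenBiases);
        return Layer(hidden, outputWeights, outputBiases);
    }
}
```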

If you have an interest in Machine Learning and what is really happening, definitely give this a read (WARNING: Some algebra ahead…)!


This blog is part of a series around design patterns. This post focuses on the composite pattern. The composite pattern is often used in situations where you want to be able to treat groups and individuals in the same way during processing.
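
As a minimal illustration of the idea (the shape types here are hypothetical, not taken from the post), both an individual shape and a group of shapes expose the same interface, so calling code can process either without knowing which it has:

```csharp
using System.Collections.Generic;

// The composite pattern: individuals and groups share one interface.
public interface IShape
{
    double Area();
}

public class Circle : IShape
{
    public double Radius { get; set; }
    public double Area() => System.Math.PI * Radius * Radius;
}

public class ShapeGroup : IShape
{
    private readonly List<IShape> children = new List<IShape>();

    public void Add(IShape shape) => children.Add(shape);

    // A group's area is the sum of its children's areas; the children may
    // themselves be groups, which is what makes the pattern recursive.
    public double Area()
    {
        double total = 0;
        foreach (var child in children) total += child.Area();
        return total;
    }
}
```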


Building a secure solution on Azure can be a daunting task. Using Azure Functions and Managed Identities, we have built up a pattern for giving services access to one another without the need to store credentials. These managed identities can be given access to the necessary resources. For example, they can be granted roles and added to access control lists in ADLS Gen2 accounts, or given the ability to access keys in Key Vault. This means that data can be securely accessed without needing to store connection strings or app passwords.
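
A minimal sketch of the general approach using the Azure.Identity and Key Vault SDKs (the vault URL and secret name are hypothetical, and the exact services involved will vary by solution):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SecretAccess
{
    // With a managed identity assigned to the app (and granted access to
    // the vault), DefaultAzureCredential picks that identity up at runtime,
    // so no connection string or password is stored anywhere.
    public static async Task<string> GetSecretValueAsync(string secretName)
    {
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"), // hypothetical vault URL
            new DefaultAzureCredential());

        KeyVaultSecret secret = await client.GetSecretAsync(secretName);
        return secret.Value;
    }
}
```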


This is the second blog in a series around design patterns. This post focuses on the builder pattern. The builder pattern is used when there is complex set-up involved in creating an object. Like the other creational patterns, it also separates the construction of an object from the object’s use.
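
A minimal, hypothetical sketch of the pattern: configuration is accumulated step by step on the builder, and the object is only constructed when Build() is called:

```csharp
// The object being built; its constructor is only invoked by the builder.
public class Report
{
    public string Title { get; }
    public string Author { get; }

    public Report(string title, string author)
    {
        Title = title;
        Author = author;
    }
}

public class ReportBuilder
{
    private string title = "Untitled";
    private string author = "Unknown";

    public ReportBuilder WithTitle(string value) { title = value; return this; }
    public ReportBuilder WithAuthor(string value) { author = value; return this; }

    // Construction is separated from use: callers configure the builder,
    // then ask it for the finished object.
    public Report Build() => new Report(title, author);
}

// Usage: var report = new ReportBuilder().WithTitle("Q1").WithAuthor("Carmel").Build();
```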


Here is a blog written by our apprentice Carmel after her second year of the apprenticeship. We think it demonstrates the huge variety of things we get to work on here at endjin, and highlights the best of the blogs that Carmel produced throughout the year – of which there were a lot!

If you think an apprenticeship with us is something which might interest you – send a CV through to hello@endjin.com!


This is the first blog in a series about design patterns. This blog focuses on the differences between the factory method and abstract factory patterns. The factory method is a method which takes the creation of objects and moves it out of the main body of the code. An abstract factory is similar to the factory method, but instead of a method it is an object in its own right.
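
A small, hypothetical sketch of the distinction (the transport types are illustrative, not from the post):

```csharp
public interface ITransport
{
    void Deliver();
}

public class Truck : ITransport
{
    public void Deliver() { /* deliver by road */ }
}

public class Ship : ITransport
{
    public void Deliver() { /* deliver by sea */ }
}

// Factory method: creation is moved out of the main body of the code into
// a single method that callers use instead of "new".
public static class TransportFactoryMethod
{
    public static ITransport Create(bool bySea) =>
        bySea ? (ITransport)new Ship() : new Truck();
}

// Abstract factory: the factory is an object in its own right, so a whole
// family of related objects can be swapped by swapping the factory.
public interface ITransportFactory
{
    ITransport CreateTransport();
}

public class SeaTransportFactory : ITransportFactory
{
    public ITransport CreateTransport() => new Ship();
}

public class RoadTransportFactory : ITransportFactory
{
    public ITransport CreateTransport() => new Truck();
}
```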


Here at endjin we’ve done a lot of work around data analysis and ETL. As part of this we have done some work with Databricks Notebooks on Microsoft Azure. Notebooks can be used for complex and powerful data analysis using Spark, a “unified analytics engine for big data and machine learning”. Spark allows you to run data analysis workloads, and can be accessed via many APIs, meaning you can build up data processes and models using a language you feel comfortable with. Notebooks can also be run as an activity in an ADF pipeline, and combined with Mapping Data Flows to build up a complex ETL process which can be run via ADF.

