
endjin blogs

We help small teams achieve big things.

This is the sixth blog in a series about DAX and Power BI. This post focuses on relationships and related tables. These relationships allow us to build up intricate and powerful models using a combination of sources and tables. The use of relationships in DAX powers many of the features around slicing and page filtering of reports.


Jess and Carmel recently gave a talk at Azure Oxford on “Combatting illegal fishing with Machine Learning and Azure – for less than £10 / month”. The recording of that talk is now available for viewing!

The talk focuses on the recent work we completed with OceanMind. Jess and Carmel run through how to construct a cloud-first architecture based on serverless and data analytics technologies, and explore the important principles and challenges in designing this kind of solution. Finally, they show how the architecture designed through this process not only provides all the benefits of the cloud (reliability, scalability, security), but, because of the pay-as-you-go compute model, has a compute cost that we could barely believe!


Whilst testing Power BI Dataflows isn’t something that many people think about, it’s critical that business rules and associated data preparation steps are validated to ensure the right insights are available to the right people across the organisation. Data insights are useless, even dangerous, if they can’t be trusted, so despite the lack of “official support” or recommended approaches from Microsoft, endjin treats Power BI solutions just like any other software project with respect to testing – building automated quality gates into the end-to-end development process. This post outlines an approach that endjin has used to test Power BI Dataflows to add quality gates and build confidence in large and complex Power BI solutions.


Effectively managing mental capacity

by Carmel Eve

We are currently living in extremely uncertain times. As we try to process these changes, many of us have found that we do not have the same mental capacity we once did. Some of that might be due to extra constraints on our time, but a lot of our brain-power is also spent trying to process the huge worldwide changes which are currently taking place.

In this blog, Carmel talks about some techniques for managing that mental capacity, and how those apply in an ever-changing world.

Category: Culture

C# 8.0 nullable references offer benefits besides the obvious win of detecting null-related coding errors. This new language feature can also improve the expressiveness of your code.
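As a flavour of that expressiveness, here is a minimal sketch (the types and names are hypothetical, not taken from the post) showing how an annotated signature documents intent:

```csharp
#nullable enable
using System.Collections.Generic;

public class Customer
{
    public Customer(string name) => this.Name = name;

    public string Name { get; }   // declared never-null
}

public class CustomerDirectory
{
    private readonly Dictionary<string, Customer> customers =
        new Dictionary<string, Customer>();

    // The signature now says it all: 'name' is required, and the result may
    // legitimately be absent. Callers get a compiler warning if they
    // dereference the return value without checking it first.
    public Customer? FindByName(string name) =>
        this.customers.TryGetValue(name, out Customer? match) ? match : null;
}
```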


Learning DAX and Power BI – Table Functions

by Carmel Eve

This is the fifth blog in a series on DAX and Power BI. This post focuses on table functions. In DAX, table functions return a table which can then be used for future processing. This can be useful if, for example, you want to perform an operation over a filtered dataset. Table functions, like most functions in DAX, operate under the filter context in which they are applied.


Azure Analysis Services provides an enterprise-grade analytical platform with massive scale and flexibility. But, as one of the more expensive services in the Azure platform, consideration should be given to cost management, especially in multi-environment ALM scenarios. This post explains how to massively reduce running costs through automation using PowerShell and orchestration tools like Azure DevOps.


Building a proximity detection pipeline

by Carmel Eve

At endjin, our approach focuses on using the scientific experimental method to support fully proven and tested decision making, and on using scientific research to support our work. This post runs through how we applied that process to the creation of a pipeline to detect vessel proximity.
This example is based on the project we recently worked on with OceanMind, in which we helped them to build a serverless architecture which could detect vessel proximity in close to real time. The vessel proximity events we detected were then fed into machine learning algorithms in order to detect illegal fishing!
Carmel also runs through some of the actual calculations we used to detect proximity, how we used data projections to efficiently process large quantities of incoming data, and the use of Durable Functions to orchestrate the processing.
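As a rough, simplified illustration of the orchestration side, here is a minimal Durable Functions fan-out/fan-in sketch in C#. This is not the actual OceanMind pipeline: the function names, the position and event types, and the proximity threshold are all hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public class VesselPosition
{
    public string VesselId { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class PositionBatch
{
    public List<VesselPosition> Positions { get; set; }
}

public class ProximityEvent
{
    public string VesselA { get; set; }
    public string VesselB { get; set; }
}

public static class ProximityOrchestration
{
    [FunctionName("DetectProximityOrchestrator")]
    public static async Task<List<ProximityEvent>> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Fan out: one activity invocation per batch of vessel positions.
        List<PositionBatch> batches = context.GetInput<List<PositionBatch>>();
        var tasks = batches
            .Select(b => context.CallActivityAsync<List<ProximityEvent>>(
                "DetectProximityInBatch", b));

        // Fan in: gather the proximity events detected in each batch.
        List<ProximityEvent>[] results = await Task.WhenAll(tasks);
        return results.SelectMany(r => r).ToList();
    }

    [FunctionName("DetectProximityInBatch")]
    public static List<ProximityEvent> DetectProximityInBatch(
        [ActivityTrigger] PositionBatch batch)
    {
        const double thresholdKm = 0.5; // hypothetical proximity threshold
        var events = new List<ProximityEvent>();

        // Naive pairwise comparison; the post describes using data
        // projections to avoid scanning every pair at scale.
        for (int i = 0; i < batch.Positions.Count; i++)
        {
            for (int j = i + 1; j < batch.Positions.Count; j++)
            {
                if (HaversineKm(batch.Positions[i], batch.Positions[j]) <= thresholdKm)
                {
                    events.Add(new ProximityEvent
                    {
                        VesselA = batch.Positions[i].VesselId,
                        VesselB = batch.Positions[j].VesselId,
                    });
                }
            }
        }

        return events;
    }

    // Great-circle distance between two positions, in kilometres.
    private static double HaversineKm(VesselPosition a, VesselPosition b)
    {
        const double earthRadiusKm = 6371;
        double dLat = ToRadians(b.Latitude - a.Latitude);
        double dLon = ToRadians(b.Longitude - a.Longitude);
        double h = (Math.Sin(dLat / 2) * Math.Sin(dLat / 2))
            + (Math.Cos(ToRadians(a.Latitude)) * Math.Cos(ToRadians(b.Latitude))
                * Math.Sin(dLon / 2) * Math.Sin(dLon / 2));
        return 2 * earthRadiusKm * Math.Asin(Math.Sqrt(h));
    }

    private static double ToRadians(double degrees) => degrees * Math.PI / 180;
}
```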


Using complex objects in BDD Scenarios with SpecFlow

by Jonathan George

During our projects at endjin, we often find ourselves evangelising Behaviour Driven Development, and specifically SpecFlow. In this post we look at a technique for defining complex test data objects in your Gherkin feature files, which we’ve also made available via the endjin-sponsored Corvus.NET project.
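To give a flavour of the general idea, here is a minimal sketch using SpecFlow’s built-in table helpers from TechTalk.SpecFlow.Assist, rather than the Corvus.NET specifics covered in the post; the Person type and the step wording are hypothetical.

```csharp
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

// Feature file step this binding matches (shown here as a comment):
//   Given the following people
//     | Name  | Age | Email             |
//     | Alice | 34  | alice@example.com |

[Binding]
public class PersonSteps
{
    private readonly ScenarioContext scenarioContext;

    public PersonSteps(ScenarioContext scenarioContext) =>
        this.scenarioContext = scenarioContext;

    [Given("the following people")]
    public void GivenTheFollowingPeople(Table table)
    {
        // CreateSet maps each table row onto a Person by matching
        // column headers to property names.
        var people = table.CreateSet<Person>();
        this.scenarioContext.Set(people, "people");
    }
}

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }
}
```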


C# 8.0 has an ambitious new feature, called nullable references. In this post, Ian explains why this makes a fundamental change to the default assumptions about nullability.
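A small illustrative sketch of that change in defaults (hypothetical types; the compiler warning messages are paraphrased):

```csharp
#nullable enable

public class Greeter
{
    // With nullable references enabled, 'string' means "never null" by
    // default; allowing null now requires an explicit '?' annotation.
    public string Greet(string name) => "Hello, " + name;
}

public static class Program
{
    public static void Main()
    {
        var greeter = new Greeter();

        greeter.Greet(null);          // compiler warning: cannot convert null
                                      // literal to non-nullable reference type

        string? maybeName = null;     // '?' opts this variable in to null
        if (maybeName != null)
        {
            greeter.Greet(maybeName); // no warning: flow analysis knows the
                                      // value is non-null after the check
        }
    }
}
```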


Learning DAX and Power BI – Aggregators

by Carmel Eve

This is the fourth blog in a series about DAX and Power BI. We have so far covered filter and row contexts, and the difference between calculated columns and measures. This post focuses on aggregators. We cover the limitations of the classic aggregators, and demonstrate the power of the iterative versions. We also highlight some of the less intuitive features around how these functions interact with both filter and row contexts.


Power BI Dataflow refresh polling

by Ed Freeman

If you’re a frequent user of the Power BI REST API and Power BI Dataflows, you may have come across the problem that there’s seemingly no programmatic way to get the refresh history of a Dataflow. The ability to know the status of a refresh operation is useful when you’re performing automated operations, and you need to know that something has succeeded or failed before deciding what to do next. For example, a desired feature in the Power BI Service is to be able to refresh a dataflow, and then automatically refresh a dataset that depends on that dataflow. Without a refresh history endpoint, this is made more complicated than necessary. This blog outlines a way to programmatically retrieve a Dataflow’s refresh history in order to poll a refresh operation’s status, useful for any fully automated scenario.
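By way of illustration, here is one possible shape of such a poller in C#. It is a sketch, not necessarily the approach the post takes: the transactions endpoint URL, the JSON property names, and the status values are assumptions to verify against the post and the current Power BI REST API documentation, and token acquisition is left out entirely.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public static class DataflowRefreshPoller
{
    // Assumed endpoint shape (check the current Power BI REST API docs):
    // GET .../v1.0/myorg/groups/{groupId}/dataflows/{dataflowId}/transactions
    public static async Task<string> PollLatestRefreshStatusAsync(
        HttpClient client, string accessToken, Guid groupId, Guid dataflowId)
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        string url =
            $"https://api.powerbi.com/v1.0/myorg/groups/{groupId}" +
            $"/dataflows/{dataflowId}/transactions";

        while (true)
        {
            string json = await client.GetStringAsync(url);

            using (JsonDocument doc = JsonDocument.Parse(json))
            {
                // Assumes the first entry in 'value' is the newest transaction
                // and that it exposes a 'status' property.
                JsonElement latest = doc.RootElement.GetProperty("value")[0];
                string status = latest.GetProperty("status").GetString();

                if (status != "InProgress")
                {
                    return status; // e.g. "Success" or "Failed" (assumed values)
                }
            }

            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }
}
```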


This post explains how to update Azure Analysis Services model schemas from inside custom .NET applications. Whilst not a common scenario, it shows that this is easy to do using the AMO SDK. So, there’s nothing stopping you from developing complex and rich end-user functionality over the top of your data analysis solutions – providing run-time, user-driven schema changes like “what if” analysis.
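To give a sense of how little code this involves, here is a minimal sketch using the Tabular Object Model from the AMO SDK; the connection string, database, table, and measure are all hypothetical.

```csharp
using Microsoft.AnalysisServices.Tabular;

public static class ModelSchemaUpdater
{
    public static void AddWhatIfMeasure(
        string connectionString, string databaseName)
    {
        var server = new Server();
        try
        {
            server.Connect(connectionString);

            Database database = server.Databases.FindByName(databaseName);
            Model model = database.Model;

            // Add a measure at run time - for example, in response to a
            // user-driven "what if" parameter - then commit the change.
            Table sales = model.Tables.Find("Sales");
            sales.Measures.Add(new Measure
            {
                Name = "Projected Revenue",
                Expression = "SUMX(Sales, Sales[Quantity] * Sales[Price] * 1.1)",
            });

            model.SaveChanges();
        }
        finally
        {
            server.Disconnect();
        }
    }
}
```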


See how to manage consistent default configuration across all your .NET projects by using NuGet build assets.


Wardley Maps are a fantastic tool to help provide situational awareness, in order to help you make better decisions. We use Wardley Maps to help our customers think about the various benefits and trade-offs that can be made when migrating to the Cloud. In this blog post, Jess Panni demonstrates how we used Wardley Maps to plan the migration of OceanMind to Microsoft Azure, and how the maps highlighted where the core value of their platform was, and how PaaS and Serverless services offered the most value for money for the organisation.


This is the third blog in a series about learning DAX and Power BI. The first two blogs focused on filter and row contexts. We are now moving on to talk about calculated columns and measures. These are the main features used to support the display of complex visuals. They allow you to combine columns, aggregate values, reformat data, and much more. The difference between these features can get a bit confusing so we’ve attempted to make that clearer using some concrete examples!


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the final post in this series, we show how to ensure specs written using Corvus.SpecFlow.Extensions can run as part of your build pipeline.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the fourth of this series of posts, we look at how configuration can be supplied from your tests to the functions apps being tested.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the third of a series of posts, we look at using classes in the Corvus.SpecFlow.Extensions library to run functions apps via scenario and feature hooks.
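While the posts use Corvus.SpecFlow.Extensions, the underlying idea can be sketched from scratch: a SpecFlow hook that starts a functions host before a tagged scenario runs and tears it down afterwards. This simplified sketch is not the Corvus.SpecFlow.Extensions API; it launches the host via the Azure Functions Core Tools, and the tag, project path, and port are placeholders.

```csharp
using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public static class FunctionsAppHooks
{
    private static Process functionsHost;

    [BeforeScenario("usesFunctionsApp")]
    public static void StartFunctionsApp()
    {
        // Launch the functions app under test with the Azure Functions
        // Core Tools; the project path and port are placeholders.
        functionsHost = Process.Start(new ProcessStartInfo
        {
            FileName = "func",
            Arguments = "start --port 7075",
            WorkingDirectory = @"..\..\..\MyFunctionsApp",
            UseShellExecute = false,
        });

        // A real implementation would wait until the host reports that it
        // is listening before letting the scenario proceed.
    }

    [AfterScenario("usesFunctionsApp")]
    public static void StopFunctionsApp()
    {
        if (functionsHost != null && !functionsHost.HasExited)
        {
            functionsHost.Kill();
            functionsHost.Dispose();
        }
    }
}
```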


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the second of a series of posts, we look at using step bindings provided by the Corvus.SpecFlow.Extensions library to run functions apps as part of your SpecFlow scenarios.