
Engineering Practices

Whilst “read/write XMLA endpoint” might seem like a technical mouthful, its addition to Power BI is a significant milestone in the strategy of bringing Power BI and Analysis Services closer together. As well as closing the gap between IT-managed workloads and self-service BI, it presents a number of new opportunities for Power BI developers in terms of tooling, process and integrations. This post highlights some of the key advantages of this new capability and what they mean for the Power BI developer.
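As a taste of what the read/write XMLA endpoint enables, here is a minimal sketch of querying a published dataset from .NET using ADOMD.NET – assuming a hypothetical “Sales Workspace” workspace and “Sales Model” dataset, and the Microsoft.AnalysisServices.AdomdClient NuGet package:

```csharp
// Minimal sketch: querying a Power BI dataset over the XMLA endpoint with
// ADOMD.NET. The workspace and dataset names are hypothetical.
using System;
using Microsoft.AnalysisServices.AdomdClient;

var connectionString =
    "Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace;" +
    "Initial Catalog=Sales Model;";

using var connection = new AdomdConnection(connectionString);
connection.Open(); // authenticates via Azure AD; supply credentials for unattended runs

using var command = new AdomdCommand("EVALUATE TOPN(10, 'Sales')", connection);
using var reader = command.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(reader.GetValue(0));
}
```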


C# 8.0’s nullable references feature dramatically changes a fundamental aspect of the language. In this post, Ian explains how you can soften the impact by enabling it gradually across your projects.


Although Power BI reports are inherently difficult to test, it’s important to validate data modelling, business rules and security boundaries, and to ensure that quality doesn’t regress over time as the insights evolve. This post explains that, by connecting to the underlying tabular model, it is possible to execute scenario-based specifications to add quality gates and build confidence in Power BI reports, just as in any other software project.
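As an illustrative sketch (not endjin’s exact framework), a scenario-style quality gate can be as simple as executing a DAX query over that tabular connection and asserting a business rule; the workspace, dataset and measure names here are hypothetical:

```csharp
// Sketch of a quality gate: run a DAX query against the tabular model behind
// a Power BI report and assert that a business rule holds.
using System;
using Microsoft.AnalysisServices.AdomdClient;
using NUnit.Framework;

[TestFixture]
public class TotalSalesSpecs
{
    [Test]
    public void Total_sales_measure_is_positive()
    {
        using var connection = new AdomdConnection(
            "Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace;" +
            "Initial Catalog=Sales Model;");
        connection.Open();

        using var command = new AdomdCommand(
            "EVALUATE ROW(\"Total\", [Total Sales])", connection);
        using var reader = command.ExecuteReader();

        Assert.That(reader.Read(), Is.True);
        Assert.That(Convert.ToDecimal(reader.GetValue(0)), Is.GreaterThan(0m));
    }
}
```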


Whilst testing Power BI Dataflows isn’t something that many people think about, it’s critical that business rules and associated data preparation steps are validated to ensure the right insights are available to the right people across the organisation. Data insights are useless, even dangerous, if they can’t be trusted, so despite the lack of “official support” or recommended approaches from Microsoft, endjin treats Power BI solutions just like any other software project with respect to testing – building automated quality gates into the end-to-end development process. This post outlines an approach that endjin has used to test Power BI Dataflows, adding quality gates and building confidence in large and complex Power BI solutions.


Azure Analysis Services provides an enterprise-grade analytical platform with massive scale and flexibility. But, as one of the more expensive services in the Azure platform, consideration should be given to cost management, especially in multi-environment ALM scenarios. This post explains how to massively reduce running costs through automation using PowerShell and orchestration tools like Azure DevOps.
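The post itself uses PowerShell, but to sketch the same idea in C#: pausing a server via the Azure Resource Manager REST API stops its compute billing until it is resumed. The identifiers below are placeholders, and the api-version shown is an assumption:

```csharp
// Hedged sketch: suspend an Azure Analysis Services server via the ARM REST
// API. Subscription, resource group, server name, token and api-version are
// all placeholders/assumptions.
using System.Net.Http;
using System.Net.Http.Headers;

var subscriptionId = "<subscription-id>";
var resourceGroup = "<resource-group>";
var serverName = "<aas-server>";
var accessToken = "<aad-token-for-management.azure.com>";

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

// Suspending the server stops compute billing until it is resumed.
var suspendUri =
    $"https://management.azure.com/subscriptions/{subscriptionId}" +
    $"/resourceGroups/{resourceGroup}/providers/Microsoft.AnalysisServices" +
    $"/servers/{serverName}/suspend?api-version=2017-08-01";

var response = await client.PostAsync(suspendUri, content: null);
response.EnsureSuccessStatusCode();
```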


Building a proximity detection pipeline

by Carmel Eve

At endjin, our approach focuses on using the scientific experimental method to support fully proven and tested decision making, and on using scientific research to support our work. This post runs through how we applied that process to the creation of a pipeline to detect vessel proximity.
The example is based on a project we recently worked on with OceanMind, in which we helped them build a #serverless architecture that could detect vessel proximity in close to real time. The vessel proximity events we detected were then fed into machine learning algorithms in order to detect illegal fishing!
Carmel also runs through some of the actual calculations we used to detect proximity, how we used #data projections to efficiently process large quantities of incoming data, and the use of #durablefunctions to orchestrate the processing.
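One building block of such a pipeline is the great-circle distance between two vessel positions. As a general sketch (not necessarily the project’s exact calculation), the haversine formula in C#:

```csharp
// The haversine formula: great-circle distance between two lat/lon points.
// A general sketch of one building block of proximity detection.
using System;

static double HaversineDistanceMetres(
    double lat1, double lon1, double lat2, double lon2)
{
    const double EarthRadiusMetres = 6_371_000;
    double ToRadians(double degrees) => degrees * Math.PI / 180;

    double dLat = ToRadians(lat2 - lat1);
    double dLon = ToRadians(lon2 - lon1);

    double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
             + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
             * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

    return 2 * EarthRadiusMetres * Math.Asin(Math.Sqrt(a));
}

// Two vessels ~111m apart; "close" here means within an illustrative 500m threshold.
double d = HaversineDistanceMetres(50.0, -1.0, 50.001, -1.0);
Console.WriteLine($"{d:F1}m apart; close: {d < 500}");
```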


Using complex objects in BDD Scenarios with SpecFlow

by Jonathan George

During our projects at endjin, we often find ourselves evangelising Behaviour Driven Development, and specifically SpecFlow. In this post we look at a technique for defining complex test data objects in your Gherkin feature files, which we’ve also made available via the endjin-sponsored Corvus.NET project.
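As a flavour of the underlying technique, SpecFlow’s own Assist helpers can already map a Gherkin table onto a plain C# object – the Corvus.NET extensions build on this idea, and their exact API may differ from this sketch:

```csharp
// Binding a Gherkin table to a C# object with SpecFlow.Assist. Given a step:
//
//   Given the following order:
//     | CustomerName | ItemCount | Total |
//     | Alice        | 3         | 42.50 |
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

public class Order
{
    public string CustomerName { get; set; }
    public int ItemCount { get; set; }
    public decimal Total { get; set; }
}

[Binding]
public class OrderSteps
{
    private Order order;

    [Given("the following order:")]
    public void GivenTheFollowingOrder(Table table)
    {
        // CreateInstance maps table columns onto properties by name.
        this.order = table.CreateInstance<Order>();
    }
}
```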


This post explains how to update Azure Analysis Services model schemas from inside custom .NET applications. Whilst not a common scenario for most, it shows that this is easy to do using the AMO SDK. So, there’s nothing stopping you from developing complex and rich end-user functionality over the top of your data analysis solutions – providing run-time, user-driven schema changes like “what if” analysis.
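A minimal sketch of such a run-time schema change using the Tabular Object Model (Microsoft.AnalysisServices.Tabular, part of the AMO SDK) – the server, database, table and measure names here are hypothetical:

```csharp
// Sketch: add a measure to an Azure Analysis Services model at run time via
// the Tabular Object Model, then push the change to the server.
using Microsoft.AnalysisServices.Tabular;

var server = new Server();
server.Connect("Data Source=asazure://westeurope.asazure.windows.net/myserver;");

Database database = server.Databases.GetByName("AdventureWorks");
Table salesTable = database.Model.Tables["Sales"];

// Define a new measure and save the model change back to the server.
salesTable.Measures.Add(new Measure
{
    Name = "Total Sales",
    Expression = "SUM(Sales[Amount])"
});

database.Model.SaveChanges();
server.Disconnect();
```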


See how to manage consistent default configuration across all your .NET projects by using NuGet build assets.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the final post in this series, we show how to ensure specs written using Corvus.SpecFlow.Extensions can run as part of your build pipeline.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the fourth of this series of posts, we look at how configuration can be supplied from your tests to the functions apps being tested.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the third of a series of posts, we look at using classes in the Corvus.SpecFlow.Extensions library to run functions apps via scenario and feature hooks.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the second of a series of posts, we look at using step bindings provided by the Corvus.SpecFlow.Extensions library to run functions apps as part of your SpecFlow scenarios.


If you use Azure Functions on a regular basis, you’ll likely have grappled with the challenge of testing them. Even now, several years after their introduction, the testing story for Functions is not hugely well defined. In the first of a series of posts, we look at some different approaches to testing your functions apps, and introduce the Corvus.SpecFlow.Extensions library.
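To illustrate the shape of the problem (this is a generic sketch, not the Corvus.SpecFlow.Extensions API itself), one approach is to start the Functions host as a child process for the duration of a test fixture and exercise the function over HTTP; the paths and port below are assumptions:

```csharp
// Generic sketch: run a functions app under test by launching the Azure
// Functions Core Tools ("func") as a child process, then call it over HTTP.
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class MyFunctionTests
{
    private Process functionsHost;
    private static readonly HttpClient Client = new HttpClient();

    [OneTimeSetUp]
    public void StartFunctionsHost()
    {
        // Requires the Azure Functions Core Tools on the path; the app folder
        // and port are illustrative.
        this.functionsHost = Process.Start(new ProcessStartInfo
        {
            FileName = "func",
            Arguments = "start --port 7071",
            WorkingDirectory = @"..\..\..\..\MyFunctionsApp",
            UseShellExecute = false,
        });
        // In practice you'd poll until the host is ready rather than sleep.
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(10));
    }

    [Test]
    public async Task Function_responds()
    {
        var response = await Client.GetAsync("http://localhost:7071/api/MyFunction");
        Assert.That(response.IsSuccessStatusCode, Is.True);
    }

    [OneTimeTearDown]
    public void StopFunctionsHost()
    {
        if (this.functionsHost is { HasExited: false })
        {
            this.functionsHost.Kill();
        }
    }
}
```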


Optimising C# for a serverless environment

by Carmel Eve

In our recent project with OceanMind we used #AzureFunctions to process marine vessel telemetry from around the world. This involved processing huge quantities of data in close to real time. We optimised our processing for a #serverless environment, with the outcome that the compute costs less than £10 / month!

This post summarises some of the techniques we used, including some concrete examples of optimisations we made.

#bigdata #dataprocessing #dataanalysis #bigcompute
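One example of the kind of optimisation that matters in a pay-per-use environment: parsing fields with ReadOnlySpan<char> instead of string.Split avoids per-message allocations and the GC pressure they create. This sketch uses an illustrative message format, not the real AIS format:

```csharp
// Allocation-free field extraction with ReadOnlySpan<char>: no intermediate
// strings are created, unlike string.Split.
using System;

static int ParseSecondField(ReadOnlySpan<char> line)
{
    // e.g. "123,456,789" -> 456
    int firstComma = line.IndexOf(',');
    ReadOnlySpan<char> rest = line.Slice(firstComma + 1);
    int secondComma = rest.IndexOf(',');
    return int.Parse(rest.Slice(0, secondComma));
}

Console.WriteLine(ParseSecondField("123,456,789")); // 456
```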


High-performance C#: a test pattern for ref structs

by Ian Griffiths

C# 7.2 introduced ref structs, a new kind of type (Span<T> is a ref struct) designed to support certain high-performance scenarios. There are constraints around their use, and when writing unit tests for our Ais.Net parser, this caused some challenges. This post describes the technique we used to work around those constraints.
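A ref struct cannot be stored in a class field, captured by a lambda, or used in an async method, so the usual trick of holding the object under test in a fixture field doesn’t work. One workaround, sketched here with a hypothetical parser type (not necessarily the post’s exact technique), is to copy the values you want to assert on into ordinary types within a single stack frame:

```csharp
// Sketch: testing a ref struct by extracting plain values inside one stack
// frame, then asserting on those. MessageParser is hypothetical.
using System;
using NUnit.Framework;

public readonly ref struct MessageParser
{
    private readonly ReadOnlySpan<char> message;
    public MessageParser(ReadOnlySpan<char> message) => this.message = message;
    public int Length => this.message.Length;
    public char First => this.message[0];
}

[TestFixture]
public class MessageParserSpecs
{
    // A plain struct to carry results out of the ref struct's stack frame.
    private struct ParseResult
    {
        public int Length;
        public char First;
    }

    [Test]
    public void Parses_a_simple_message()
    {
        // The ref struct lives and dies inside this frame...
        var parser = new MessageParser("!AIVDM".AsSpan());
        var result = new ParseResult { Length = parser.Length, First = parser.First };

        // ...and the assertions run against the copied-out plain values.
        Assert.That(result.Length, Is.EqualTo(6));
        Assert.That(result.First, Is.EqualTo('!'));
    }
}
```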


Applying the scientific experimental process to software development leads to fully-validated solutions. This approach gives you confidence in your designs and means that you can quickly identify ideas which are not worth pursuing.

At endjin we use the ideas of hypotheses and experimentation when designing any solution, and this gives us full confidence in the designs we produce. This post outlines the steps and advantages of using this approach.


Integrating Azure Analysis Services into custom applications doesn’t just mean read-only data querying. But if your application changes the underlying model, it will need to be re-processed before the changes take effect. This post describes how to use the REST API for Azure Analysis Services inside a custom .NET application to perform asynchronous model refreshes, meaning your applications can reliably and efficiently deal with model updates.
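A minimal sketch of triggering such a refresh with HttpClient – the region, server and model names are placeholders, and a valid AAD access token for the *.asazure.windows.net resource is assumed:

```csharp
// Sketch: kick off an asynchronous model refresh via the Azure Analysis
// Services REST API. The POST returns 202 Accepted with a Location header
// that can be polled for completion.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var accessToken = "<aad-token>";
using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

var refreshUri =
    "https://westeurope.asazure.windows.net/servers/myserver/models/AdventureWorks/refreshes";

var body = new StringContent(
    "{ \"Type\": \"Full\", \"MaxParallelism\": 2 }",
    Encoding.UTF8,
    "application/json");

var response = await client.PostAsync(refreshUri, body);
response.EnsureSuccessStatusCode();
Console.WriteLine(response.Headers.Location); // poll this URI for refresh status
```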


Power BI Data Type Mappings

by Ed Freeman

If you’ve worked with Power BI at all, you’ll have probably realised that there are numerous mediums through which you work with (potentially the “same”) data. Data types across these mediums can be called different things but actually refer to the same thing. They can also (unsurprisingly) be called different things and actually mean different things. It’s useful to know what the corresponding data types are across these mediums, as you may need to, for example, convert queries from one format to another. This blog and the accompanying report intend to clarify what the corresponding data types are across each of the separate mediums within Power BI.
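For example (one row of the full mapping in the report): a column typed as Int64.Type in a Power Query (M) expression appears as a “Whole Number” in the Power BI Desktop UI, and as an Int64 column in the underlying tabular model.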


There are hidden pitfalls with dependency injection, particularly when managing the lifetime of scoped components. What is safe? And are there other approaches we can take to managing scoped object lifetimes?

