
Analytics

Did you know that Azure Synapse has great support for .NET and #csharp? Learning new languages is often a barrier to digital transformation; being able to use your existing people, skills, tools and engineering disciplines can be a massive advantage.
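For a flavour of what that looks like, here is a minimal sketch of a Spark job written with .NET for Apache Spark (the Microsoft.Spark package), as you might run it on a Synapse Spark pool — the storage path and column names are purely illustrative:

```csharp
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

class SalesJob
{
    static void Main()
    {
        // Storage path and column names are purely illustrative.
        SparkSession spark = SparkSession
            .Builder()
            .AppName("csharp-on-synapse")
            .GetOrCreate();

        // Read a CSV from the workspace's storage account...
        DataFrame sales = spark.Read()
            .Option("header", true)
            .Csv("abfss://data@mystorageaccount.dfs.core.windows.net/sales.csv");

        // ...and aggregate with the same DataFrame API you would use from
        // Scala or Python, but in C#.
        sales.GroupBy("Region")
            .Agg(Sum(Col("Amount")).Alias("TotalAmount"))
            .Show();
    }
}
```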


For years we have been building modern cloud data solutions on Azure and helping our customers transform their use of data to drive outcomes. Here are 5 reasons why Azure Synapse Analytics might just be the service that we have been crying out for.


Power BI Embedded is a great tool for ISVs offering a BI product for their customers. Generally, the reports exposed to customers are personalised to some extent – one can use the Power BI JavaScript library to interact with the reports and tweak the visuals based on the logged-in user. Another way reports differ from user to user is in the underlying data each user is allowed to see. This is controlled using Row-level Security.

A standard method to implement Row-level Security is to pass the user’s email address to the data model, and have rules filter the data model appropriately based on the data that user is permitted to see. However, sometimes it can be useful to filter the data model based on more than just a user’s email address. This blog shows how to modify an Embed Request (made by Power BI Embedded) to provide additional context on what data the embedding application would like to be returned for a user to view on a report.
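As a sketch of the idea, using the Microsoft.PowerBI.Api .NET SDK – the client is assumed to be authenticated already, and the IDs, role name and custom data string are all placeholders. The custom data is surfaced to the model’s RLS rules via the DAX CUSTOMDATA() function, alongside the username via USERNAME():

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.PowerBI.Api;
using Microsoft.PowerBI.Api.Models;

public static class EmbedTokens
{
    // 'client' is assumed to be an already-authenticated PowerBIClient;
    // the IDs, role name and custom data string are placeholders.
    public static async Task<EmbedToken> GetEmbedTokenAsync(
        PowerBIClient client, Guid workspaceId, Guid reportId, string datasetId)
    {
        var identity = new EffectiveIdentity(
            username: "user@contoso.com",                  // available to RLS rules via USERNAME()
            datasets: new List<string> { datasetId },
            roles: new List<string> { "RegionalManager" }, // hypothetical RLS role
            customData: "region=EMEA;tier=premium");       // available via CUSTOMDATA()

        var request = new GenerateTokenRequest(
            accessLevel: "View",
            identities: new List<EffectiveIdentity> { identity });

        return await client.Reports.GenerateTokenInGroupAsync(workspaceId, reportId, request);
    }
}
```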


Configuring model properties in Power BI allows you to create a model which is far more discoverable and which better supports the visualisations you need. Several different model properties can be configured: some focus on discoverability, whilst others allow you to alter the ways in which data is sorted, displayed and summarised in the reports.
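Many of these properties can also be set programmatically. Here is an illustrative sketch using the Tabular Object Model (Microsoft.AnalysisServices.Tabular) against a model.bim file – the file, table and column names are hypothetical:

```csharp
using System.IO;
using Microsoft.AnalysisServices.Tabular;

// Load a model definition from a .bim file (paths and names are hypothetical).
var database = JsonSerializer.DeserializeDatabase(File.ReadAllText("Model.bim"));
var sales = database.Model.Tables["Sales"];

// Discoverability: describe the table for report authors, hide a key column.
sales.Description = "One row per sales transaction.";
sales.Columns["CustomerKey"].IsHidden = true;

// Display and summarisation behaviour.
var amount = sales.Columns["Amount"];
amount.FormatString = "#,0.00";
amount.SummarizeBy = AggregateFunction.Sum;

// Sort month names by their underlying month number.
var date = database.Model.Tables["Date"];
date.Columns["MonthName"].SortByColumn = date.Columns["MonthNumber"];

File.WriteAllText("Model.bim", JsonSerializer.SerializeDatabase(database));
```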


Whilst “read/write XMLA endpoint” might seem like a technical mouthful, its addition to Power BI is a significant milestone in the strategy of bringing Power BI and Analysis Services closer together. As well as closing the gap between IT-managed workloads and self-service BI, it presents a number of new opportunities for Power BI developers in terms of tooling, process and integrations. This post highlights some of the key advantages of this new capability and what they mean for the Power BI developer.
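For a flavour of what this enables, here is a minimal sketch using the Tabular Object Model to connect to a Premium workspace over the XMLA endpoint and request a refresh – the workspace and dataset names are placeholders, and authentication is elided:

```csharp
using Microsoft.AnalysisServices.Tabular;

// Workspace and dataset names are placeholders; authentication is elided
// (the XMLA endpoint accepts AAD credentials, e.g. via an interactive prompt).
var server = new Server();
server.Connect("Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace");

var database = server.Databases.GetByName("Sales Dataset");

// With read/write enabled, refreshes and deployments can be scripted
// exactly as they would be against Azure Analysis Services.
database.Model.RequestRefresh(RefreshType.Full);
database.Model.SaveChanges();

server.Disconnect();
```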


Learning DAX and Power BI – CALCULATE

by Carmel Eve

This is the final blog in a series about DAX and Power BI. This post focuses on the CALCULATE function, which is unique in DAX: it can alter filter contexts, and can therefore be used to enable extremely powerful and complex processing. This post covers some of the most common scenarios for using CALCULATE, and some of the gotchas in the way these different features interact!
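To give a flavour, the hypothetical DAX query below uses CALCULATE to override the filter context on a colour column; it is executed from .NET via ADOMD.NET, and all connection, table and measure names are illustrative:

```csharp
using System;
using Microsoft.AnalysisServices.AdomdClient;

// Connection, table and measure names are all illustrative.
using var connection = new AdomdConnection("Data Source=localhost;Initial Catalog=SalesModel");
connection.Open();

var command = connection.CreateCommand();

// CALCULATE re-evaluates [Total Sales] under a modified filter context:
// whatever colour filters the report applies, this returns red products only.
command.CommandText = @"
    EVALUATE
    ROW(""Red Sales"", CALCULATE([Total Sales], 'Product'[Colour] = ""Red""))";

using var reader = command.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(reader.GetValue(0));
}
```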


Although Power BI reports are inherently difficult to test, it is important to validate the data modelling, business rules and security boundaries within them, and to ensure that quality doesn’t regress over time as the insights evolve. This post explains how, by connecting to the underlying tabular model, it is possible to execute scenario-based specifications to add quality gates and build confidence in Power BI reports, just as in any other software project.
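A minimal sketch of what such a specification might look like: an xUnit test that connects over the XMLA endpoint with ADOMD.NET and asserts a measure against a known baseline. The connection details, measure name and expected value are all hypothetical:

```csharp
using System;
using Microsoft.AnalysisServices.AdomdClient;
using Xunit;

public class SalesModelSpecs
{
    // The endpoint, dataset, measure and expected value are all hypothetical.
    private const string ConnectionString =
        "Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace;Initial Catalog=Sales Dataset";

    [Fact]
    public void Total_sales_for_2019_matches_the_finance_baseline()
    {
        using var connection = new AdomdConnection(ConnectionString);
        connection.Open();

        var command = connection.CreateCommand();
        command.CommandText =
            "EVALUATE ROW(\"Actual\", CALCULATE([Total Sales], 'Date'[Year] = 2019))";

        using var reader = command.ExecuteReader();

        Assert.True(reader.Read());
        Assert.Equal(1_250_000m, Convert.ToDecimal(reader.GetValue(0)));
    }
}
```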


In order to work effectively with our data in Power BI, we need to structure the model to best support the representations we need. This process is called data modelling. Data modelling includes loading, shaping, cleansing and enhancing the data.

This post runs through some of the important steps used in data modelling, and gives an example of loading and shaping data using Power BI.


This is the sixth blog in a series about DAX and Power BI. This post focuses on relationships and related tables. These relationships allow us to build up intricate and powerful models using a combination of sources and tables. The use of relationships in DAX powers many of the features around slicing and page filtering of reports.


Jess and Carmel recently gave a talk at Azure Oxford on “Combatting illegal fishing with Machine Learning and Azure – for less than £10 / month”. The recording of that talk is now available for viewing!

The talk focuses on the recent work we completed with OceanMind. They run through how to construct a cloud-first architecture based on serverless and data analytics technologies, and explore the important principles and challenges in designing this kind of solution. Finally, they show how the architecture designed through this process not only provides all the benefits of the cloud (reliability, scalability, security) but, because of the pay-as-you-go compute model, has a compute cost that we could barely believe!


Whilst testing Power BI Dataflows isn’t something that many people think about, it’s critical that business rules and associated data preparation steps are validated to ensure that the right insights are available to the right people across the organisation. Data insights are useless, even dangerous, if they can’t be trusted, so despite the lack of “official support” or recommended approaches from Microsoft, endjin treat Power BI solutions just like any other software project with respect to testing – building automated quality gates into the end-to-end development process. This post outlines an approach that endjin has used to test Power BI Dataflows to add quality gates and build confidence in large and complex Power BI solutions.


Learning DAX and Power BI – Table Functions

by Carmel Eve

This is the fifth blog in a series on DAX and Power BI. This post focuses on table functions. In DAX, table functions return a table which can then be used for future processing. This can be useful if, for example, you want to perform an operation over a filtered dataset. Table functions, like most functions in DAX, operate under the filter context in which they are applied.


Azure Analysis Services provides an enterprise-grade analytical platform with massive scale and flexibility. But, as one of the more expensive services in the Azure platform, consideration should be given to cost management, especially in multi-environment ALM scenarios. This post explains how to massively reduce running costs through automation using PowerShell and orchestration tools like Azure DevOps.
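The post’s approach is PowerShell-based; as a C# sketch of the same idea, the Azure management REST API exposes suspend and resume operations on the server resource. The subscription, resource group and server names below are placeholders, and the ARM bearer token is assumed to have been acquired elsewhere:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class AnalysisServicesPause
{
    // Subscription, resource group and server names are placeholders; the ARM
    // bearer token is assumed to have been acquired elsewhere (e.g. via MSAL).
    public static async Task SuspendServerAsync(HttpClient http, string accessToken)
    {
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var url = "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000" +
                  "/resourceGroups/analytics-rg/providers/Microsoft.AnalysisServices" +
                  "/servers/myaasserver/suspend?api-version=2017-08-01";

        // The service responds 202 Accepted while the server pauses;
        // a matching 'resume' operation restarts it.
        var response = await http.PostAsync(url, content: null);
        response.EnsureSuccessStatusCode();
    }
}
```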


Learning DAX and Power BI – Aggregators

by Carmel Eve

This is the fourth blog in a series about DAX and Power BI. We have so far covered filter and row contexts, and the difference between calculated columns and measures. This post focuses on aggregators. We cover the limitations of the classic aggregators, and demonstrate the power of the iterative versions. We also highlight some of the less intuitive features around how these functions interact with both filter and row contexts.


Power BI Dataflow refresh polling

by Ed Freeman

If you’re a frequent user of the Power BI REST API and Power BI Dataflows, you may have come across the problem that there’s seemingly no programmatic way to get the refresh history of a Dataflow. The ability to know the status of a refresh operation is useful when you’re performing automated operations, and you need to know that something has succeeded or failed before deciding what to do next. For example, a desired feature in the Power BI Service is to be able to refresh a dataflow, and automatically refresh a dataset that depends on that dataflow. Without a refresh history endpoint, this is made more complicated than necessary. This blog outlines a way to programmatically retrieve a Dataflow’s refresh history in order to poll a refresh operation’s status, useful for any fully automated scenario.
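A sketch of the approach, assuming the dataflow “transactions” endpoint (which holds the refresh history) and an AAD access token with the appropriate Power BI scopes – the IDs are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public static class DataflowPolling
{
    // Group and dataflow IDs are placeholders; the access token is assumed
    // to have been acquired with the appropriate Power BI API scopes.
    public static async Task<string> GetLatestRefreshStatusAsync(
        HttpClient http, string accessToken, Guid groupId, Guid dataflowId)
    {
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var url = $"https://api.powerbi.com/v1.0/myorg/groups/{groupId}" +
                  $"/dataflows/{dataflowId}/transactions";

        var json = await http.GetStringAsync(url);
        using var doc = JsonDocument.Parse(json);

        // The first transaction is the most recent; its status reads e.g.
        // "InProgress", "Success" or "Failed" - poll until it is terminal.
        return doc.RootElement.GetProperty("value")[0].GetProperty("status").GetString();
    }
}
```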


This post explains how to update Azure Analysis Services model schemas from inside custom .NET applications. Whilst not a common scenario for most, it shows that this is easy to do using the AMO SDK. So, there’s nothing stopping you from developing complex and rich end-user functionality over the top of your data analysis solutions – providing run-time, user-driven schema changes like “what if” analysis.
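For example, here is a minimal sketch that adds a new measure to a deployed model at run time using the Tabular Object Model – the server, database, table and expression are all illustrative:

```csharp
using Microsoft.AnalysisServices.Tabular;

// Server, database, table and expression names are illustrative.
var server = new Server();
server.Connect("Data Source=asazure://westeurope.asazure.windows.net/myaasserver");

var model = server.Databases.GetByName("SalesModel").Model;

// A user-driven "what if" measure, created at run time.
model.Tables["Sales"].Measures.Add(new Measure
{
    Name = "Total Sales (10% uplift)",
    Expression = "SUMX(Sales, Sales[Amount] * 1.1)",
    FormatString = "#,0.00"
});

// Push the schema change to the server; clients see it on their next query.
model.SaveChanges();
server.Disconnect();
```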


Wardley Maps are a fantastic tool to help provide situational awareness, in order to help you make better decisions. We use Wardley Maps to help our customers think about the various benefits and trade-offs that can be made when migrating to the Cloud. In this blog post, Jess Panni demonstrates how we used Wardley Maps to plan the migration of OceanMind to Microsoft Azure, and how the maps highlighted where the core value of their platform was, and how PaaS and Serverless services offered the most value for money for the organisation.


This is the third blog in a series about learning DAX and Power BI. The first two blogs focused on filter and row contexts. We are now moving on to talk about calculated columns and measures. These are the main features used to support the display of complex visuals. They allow you to combine columns, aggregate values, reformat data, and much more. The difference between these features can get a bit confusing so we’ve attempted to make that clearer using some concrete examples!
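Structurally, the two are distinct object types in the tabular model, which hints at the behavioural difference: a calculated column is evaluated once per row (under a row context) when the data is refreshed and then stored, whilst a measure is evaluated at query time under the current filter context. A hypothetical sketch in Tabular Object Model terms, with invented file, table and expression names:

```csharp
using System.IO;
using Microsoft.AnalysisServices.Tabular;

// File, table and expression names are all hypothetical.
var model = JsonSerializer.DeserializeDatabase(File.ReadAllText("Model.bim")).Model;
var sales = model.Tables["Sales"];

// A calculated column: evaluated once per row under a row context
// when the data is refreshed, and stored in the model.
sales.Columns.Add(new CalculatedColumn
{
    Name = "Margin",
    Expression = "Sales[Amount] - Sales[Cost]"
});

// A measure: evaluated at query time under the current filter context.
sales.Measures.Add(new Measure
{
    Name = "Total Margin",
    Expression = "SUM(Sales[Margin])"
});
```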


Optimising C# for a serverless environment

by Carmel Eve

In our recent project with OceanMind we used #AzureFunctions to process marine vessel telemetry from around the world. This involved processing huge quantities of data in close to real time. We optimised our processing for a #serverless environment, with the outcome that the compute costs less than £10 / month!

This post summarises some of the techniques we used, including some concrete examples of optimisations we made.
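As one illustrative example of the kind of change involved (hypothetical, not lifted from the OceanMind codebase): parsing telemetry with Span&lt;T&gt; rather than string.Split avoids allocating an intermediate array for every message on a hot path.

```csharp
using System;

public static class TelemetryParsing
{
    // Hypothetical hot-path example: slicing a comma-separated telemetry line
    // with Span<T> instead of string.Split avoids allocating an intermediate
    // string[] (and two substrings) for every message processed.
    public static (double Lat, double Lon) ParsePosition(ReadOnlySpan<char> line)
    {
        int comma = line.IndexOf(',');
        double lat = double.Parse(line[..comma]);
        double lon = double.Parse(line[(comma + 1)..]);
        return (lat, lon);
    }
}
```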

#bigdata #dataprocessing #dataanalysis #bigcompute


Learning DAX and Power BI – Row Contexts

by Carmel Eve

Here is the second blog in a series about learning DAX and Power BI. This post focuses on row contexts, which are used when iterating over the rows of a table – for example, when evaluating a calculated column. Row contexts, along with filter contexts, underpin the DAX language. Once you understand this underlying theory, it is purely a case of learning the syntax for the different operations which are built on top of it.

