
Big Compute

How to plan your cloud transformation journey

by Howard van Rooijen

This week I received an email from someone who asked how they could use our free Thought Leadership content to help their organisation move to the cloud. I realised that although we’ve released a lot of content, we’d never talked publicly about the rationale behind it or how the individual pieces interconnect. Our Thought Leadership […]


Choosing the right cloud platform provider can be a daunting task. Take the big three, AWS, Azure, and Google Cloud Platform; each offers a huge number of products and services, but understanding how they enable your specific needs is not easy. Since most organisations plan to migrate existing applications, it is important to understand how […]


In this series, we’re comparing cloud services from AWS, Azure and Google Cloud Platform. A full breakdown and comparison of cloud providers and their services is available in this handy poster. We have assessed services across three typical migration strategies: Lift and shift – the cloud service can support running legacy systems with minimal change […]


We produced a booklet to coincide with our Future Decoded talk “The 100 Year Start-up: Embracing Disruption in Financial Services”, where we examine the challenges and opportunities in the Microsoft Cloud for the Financial Services Industry, covering the following topics: Security, Privacy & Data Sovereignty; Data Ingestion, Transformation & Enrichment; Big Compute; Big Data – […]


Azure Batch – Time is Money in Big Compute

by James Broome

Earlier in the year, endjin worked with the Azure Batch Product Team to run a series of experiments against the Azure Batch service, using a framework we developed for performing scale, soak and performance tests. Over the last five years we’ve had conversations with a number of organisations that have scaled their compute-intensive workloads (SAS, […]


A short while ago, I was trying to classify some data using Azure Machine Learning, but the training data was very imbalanced. While attempting to build a useful model from this data, I came across the Synthetic Minority Oversampling Technique (SMOTE), an approach to dealing with imbalanced training data. This blog describes what I […]
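
The post works with the SMOTE module in Azure Machine Learning; purely as an illustration of the idea, here is a minimal sketch using the open-source imbalanced-learn library in Python. The synthetic dataset, class weights and random seeds below are illustrative assumptions, not the data from the post.

```python
# A minimal sketch of SMOTE for an imbalanced binary classification problem.
# Uses imbalanced-learn rather than the Azure Machine Learning SMOTE module
# discussed in the post; dataset and parameters are illustrative assumptions.
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic data with a 95/5 class split to mimic an imbalanced training set.
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=42
)
print("Before SMOTE:", Counter(y))

# SMOTE synthesises new minority-class samples by interpolating between
# existing minority examples and their nearest neighbours, rather than
# simply duplicating rows.
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print("After SMOTE:", Counter(y_resampled))
```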


Spinning up 16,000 A1 Virtual Machines on Azure Batch

by Howard van Rooijen

Big Compute, like Big Data, has a different meaning for every organisation; for Big Data this generally tends to be when data grows to a point where it can no longer be stored, queried, backed up, restored or processed easily on traditional database architectures. For Big Compute this tends to be when computation grows to […]
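
The post itself describes the 16,000-node experiment; as a rough illustration of what provisioning capacity on Azure Batch looks like, the sketch below requests a pool of Standard_A1 nodes with the azure-batch Python SDK. The account name, URL, pool id and node count are placeholder assumptions, and parameter names vary slightly between SDK versions.

```python
# A minimal sketch (not endjin's actual test framework) of requesting a large
# Azure Batch pool with the azure-batch Python SDK. Account details, pool id
# and node count are placeholders; quota increases are needed long before
# anything approaching 16,000 nodes will be granted.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.westeurope.batch.azure.com"
)

pool = batchmodels.PoolAddParameter(
    id="big-compute-pool",
    vm_size="STANDARD_A1",  # the small, cheap SKU used in the experiment
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical",
            offer="ubuntuserver",
            sku="18.04-lts",
            version="latest",
        ),
        node_agent_sku_id="batch.node.ubuntu 18.04",
    ),
    target_dedicated_nodes=100,  # scale towards whatever quota has been granted
)
client.pool.add(pool)
```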