FTTx Planning & Design optimisation at BAM 2016

The Biarri Applied Mathematics Conference is on again and registrations are open for 2016. This year, FTTx planning and design optimisation is a core component of the free 2-day conference, so don’t miss out!

What are Fibre Optic Networks, how do we predict how expensive they will be and why do we do it?

Patrick Edwards will take BAM attendees through the complexities of a fibre network rollout and show how it’s not always as straightforward as it seems. With billions of dollars being spent around the world on the deployment of these networks, Patrick will explore the importance of optimisation and mathematics when predicting costs and comparing different architectures.

Patrick Edwards has a background in mathematics, physics, programming and biochemistry. Patrick enjoys finding new ways to apply the skills from those areas to problems in the FTTx space. By conducting experiments for clients, Patrick helps companies across the telco industry make informed architectural and strategic decisions across their FTTx rollouts.

Why don’t Engineers and Mathematicians get along?

Alex Grime will be looking into the differences in how engineers and mathematicians think and speak across FTTx deployments and how that often gets in the way of successfully leveraging each other’s strengths:

  • Engineers want accuracy, mathematicians want precision,
  • Engineers are interested in the destination, mathematicians are interested in the journey,
  • Engineers think 3 dimensionally, mathematicians think n dimensionally.

Alex Grime has over 20 years of experience in the telco industry across Network Strategy, Technology, Planning, Design, Cost Optimisation, and Operations. With a strong history in various roles across Optus and NBN, Alex is now one of the leading telecommunications consultants for Biarri Networks.

How freedom to innovate is optimising global fibre rollouts.

Laura Smith will be discussing how FTTx networks are now being planned, designed and deployed with greater certainty and speed, and at a lower cost, by empowering smart mathematical minds. Through the use of optimisation, machine learning and other mathematical techniques, the entire industry is being re-imagined around us, and for the better.

Laura joined Biarri Networks after graduating with a Science degree in 2014. Starting as a member of the design team, she began taking on leadership roles and her focus changed to team development. Laura is passionate about process improvement and thrives on the challenges of working with a wide variety of people and clients.

The BAM Conference 2016

Registrations are now open and this year the conference will be held in Brisbane, Australia, on June 28 and 29 at QUT Gardens Point with support from The Queensland University of Technology, The Australian Mathematical Sciences Institute and Biarri.

Head over to the website to explore the other speakers and presenters, and register now!

Biarri at Mater’s River to Rooftop 2015

Last week a bunch of the Biarri team gave up their Friday morning sleep-in to take on one of Brisbane’s tallest stair climbs to raise money and awareness for cancer.

The River to Rooftop stair climb is an annual event held by the Mater Foundation and does a fantastic job of raising money for prostate cancer research and the search for a cure!

Tzara getting ready to climb!

Great views at the top, but exhausted

Rav getting excited about the chance of winning a whipper snipper!

Good work to everyone who participated, and we’re looking forward to a bigger and better stair climb next year!

About Mater Research

Mater Research is a world-class institute that aims to discover, develop and commercialise medical research that can be translated into clinical care for the benefit of all. Mater Research is based in South Brisbane and specialises in cancer, maternity and obesity-related research, discovering ways to prevent and treat conditions affecting babies, children, adolescents and adults, helping them to lead healthy lives.

Mitigation by Iteration – Facilitating the Optimisation Journey

To remain competitive, companies require smart systems that solve their unique day-to-day business problems. However, when applying these systems, many decision makers get lost in the complexity due to limited communication and collaboration during the implementation process.

We are at the cutting edge of the latest optimisation methodologies and web technologies. However, unlike many other optimisation and analytics companies out there, one of our main goals is to make powerful optimisation accessible in the real world and bring value to our clients.

We specialise in the development of web applications and smart optimisation engines, delivered in less time than you would probably think. It’s not unusual for us to go from an initial workshop with a client, to understanding their problem, to having a fully functional optimiser in a production environment within three months.

On top of this, the same people are often involved through the entire SDLC (software development life cycle), from spec and design, through theory and implementation, to delivery and support. This reduces the overhead many organisations incur by splitting work between separate business analyst and developer roles. The people implementing the solution actually understand your problem and work with you to solve it.

Optimisation through the cloud – The Biarri Workbench

Optimising your business, by operating in the most efficient and effective way, is essential to delivering a greater competitive advantage and is key to driving business success.

But, how can you drive innovation, empower your business decisions and survive in a highly competitive and volatile global marketplace?

A KPMG report on elevating business in the cloud found that:

As cloud adoption picks up pace, cloud is poised not only to grow in scale, but will also increasingly impact more and more areas of the business. They do not only result in cost savings, but they can help organisations increase workforce flexibility, improve customer service, and enhance data analytics. In other words, the cloud should be considered a key enabler of the corporate strategy, driving strategic business transformations of all kinds.

Gartner Research supports this by naming the cloud as one of the top 10 strategic technology trends for 2015 that will have a significant impact on organisations during the next three years.

Agile Analytics driving business optimisation through the cloud

Analytics delivered through the cloud empowers you to make adaptable decisions quickly, with far more rigour. Having access to your data anywhere, any time, on any device means that you no longer require large IT systems that are overly complex and not built for your specific problem. Cloud solutions allow you to scale up and down, and adapt depending on your specific target requirements. This means you can have point solutions targeting specific pain points within your business.

How does cloud-based optimisation fit into my business?

Optimisation can be applied to most business problems. The question you should be asking yourself is: what is the best possible outcome of the decisions I’m about to make?

For your Supply Chain – How can I best support effective capital decisions to ensure end-to-end efficiency? – Learn More

For your Logistics – How should I best manage my fleet, workforce, facilities and work communications? – Learn More

For your Workforce – How should I best plan my workforce across the next few hours, days, and months into the future? – Learn More

For your Analytics – How can I be sure that I am making the right decisions and considering all variables? – Learn More

How do we do it?

With our team of mathematicians, software developers and UI designers, we use the Biarri Workbench, an intuitive cloud-based platform designed to support the rapid development of powerful web-based software solutions.

With the powerful development platform of The Biarri Workbench, we are able to easily customise, alter, and build a solution for your specific business requirements.

The Workbench is Accessible. Empowering you to reduce your company’s IT footprint and access world-class optimisation anywhere, anytime, on any device.

The Workbench is Customisable. Built from the ground up to allow for rapid, bespoke deployment of software tailored to your specific optimisation requirements.

The Workbench is Easy To Use. Through simple linear workflows and customised visualisation widgets, anyone in your business can easily master your software, reducing the need for long training and workforce upskilling.

The Workbench is Scalable. Regardless of business size or project complexity, cloud-based delivery and bespoke software built around your requirements mean there is no more one-size-fits-all approach.

The Workbench is Powerful. At the core of the Workbench are complex mathematical engines powered by industry-grade commercial solvers. This means you can be confident in the justification behind your decision making.

The Workbench is Efficient. Cloud-based software delivery gives you the power to determine who sees what data, and when, providing you with more control and rigour over your optimisation processes.

The Workbench is Secure. Security measures exceed both industry and customer requirements with the ability to easily accommodate your specific needs.

 Ask us how we can deliver optimisation via the cloud for your business!

Death by parameters

In my previous blog post I wrote about the great flexibility and power of Genetic Algorithms. So you may have thought: why do I need help with my optimisation problems? Can’t I just grab an off-the-shelf Genetic Algorithm and use it? However, as with everything, there are two sides to every story, and this time I’ll show you why optimising with Genetic Algorithms is much harder than it seems.

The flexibility of Genetic Algorithms arises, in part, from the freedom to choose a dizzying number of parameters. When writing your own code you potentially have to decide on things such as the number of competing candidate solutions, the number of generations over which to improve them, the mutation and crossover probabilities, the percentage of the population to eliminate in each generation, and many more.
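
As a rough illustration, a home-rolled genetic algorithm might expose a parameter set along these lines (a hypothetical sketch; the names and default values below are invented rather than taken from any particular library):

    from dataclasses import dataclass

    # Hypothetical parameter set for a home-rolled genetic algorithm.
    # Every one of these values has to be chosen, and the "right" choice
    # is usually problem-specific.
    @dataclass
    class GAParameters:
        population_size: int = 100       # number of competing candidate solutions
        generations: int = 500           # how many times the population is improved
        crossover_probability: float = 0.9
        mutation_probability: float = 0.05
        elite_fraction: float = 0.1      # best solutions carried over unchanged
        cull_fraction: float = 0.2       # worst solutions eliminated each generation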

With so many choices, selecting the parameters correctly can determine whether the algorithm bears fruit or withers and dies. This difficulty has led to many papers on the best way to choose parameters. Unfortunately, even if one is able to choose good parameters for one problem, there is no guarantee that the same parameters will work for the next problem.

A reed warbler raising the young of a common cuckoo

So over the years researchers have searched for other powerful optimisation techniques which don’t suffer from such a parameter overload. From this research we now have a number of promising algorithms. In particular, in 2009 Xin-She Yang and Suash Deb came up with the ultimate of all parameter-starved algorithms, the Cuckoo Search Algorithm. In this algorithm there is one parameter. Yes, only one.

The Cuckoo Search Algorithm is inspired by the parasitic nature of some cuckoo species, such as the European common cuckoo. These species lay their eggs in the nests of other host birds in an attempt to trick the host into raising their nestlings. Sometimes this devious trick succeeds. When it doesn’t, the host bird either throws the egg over the side of the nest or simply abandons the nest altogether.

In the Cuckoo Search Algorithm the cuckoo’s ploy translates into an optimisation algorithm via four idealised rules, which are repeated until the desired optimisation criteria are fulfilled. In the following, each egg represents a solution, and by a cuckoo laying an egg we mean creating a new random solution:

  1. Each cuckoo lays an egg in a random nest.
  2. Out of all the eggs laid, keep the best ones, up to the number of cuckoos.
  3. Abandon a fixed fraction of the worst eggs.
  4. Repeat.

Did you find the parameter? The single, lonely parameter in the Cuckoo Search Algorithm is the fraction of the worst nests that are abandoned. This parameter affects how thoroughly the algorithm searches all possible solutions, so a lower value means the algorithm will find a local optimum faster (although maybe not the desired global optimum).
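
To make the four rules concrete, here is a minimal Python sketch of the idea (my own simplified illustration: it uses plain random candidate solutions supplied by the caller rather than the Lévy flights of the original 2009 paper, and the function names are invented for this example):

    import random

    def cuckoo_search(objective, new_solution, n_nests=25,
                      abandon_fraction=0.25, iterations=1000):
        """Minimise `objective` using the simplified rules described above.

        `new_solution` is a user-supplied function returning a random candidate.
        The only real tuning knob is `abandon_fraction`, the fraction of the
        worst nests thrown away each round.
        """
        nests = [new_solution() for _ in range(n_nests)]
        for _ in range(iterations):
            # 1. Each cuckoo lays an egg (a new random solution) in a random nest.
            eggs = nests + [new_solution() for _ in range(n_nests)]
            # 2. Keep only the best eggs, up to the number of nests.
            nests = sorted(eggs, key=objective)[:n_nests]
            # 3. Abandon a fraction of the worst nests and rebuild them randomly.
            n_abandon = int(abandon_fraction * n_nests)
            if n_abandon:
                nests[-n_abandon:] = [new_solution() for _ in range(n_abandon)]
            # 4. Repeat.
        return min(nests, key=objective)

    # Toy usage: minimise x^2 with random candidates drawn from [-10, 10].
    best = cuckoo_search(lambda x: x * x, lambda: random.uniform(-10, 10))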

This avian-inspired algorithm has been used in numerous difficult problems to oust other optimisation methods from their leadership positions. For example, it has been used for spring and beam design problems, scheduling problems, the famous travelling salesman problem and even optimisation challenges in nanoelectronics! Like most other heuristic optimisation methods, the range of application areas can be quite astounding.

So now that you’ve cancelled the download of a promising Genetic Algorithm and started one of a new Cuckoo Search Algorithm, I thought I’d warn you that there’s another side to this story too. Although the bird-based algorithm makes parameter choice simple, it may or may not be the best choice for a given optimisation problem. There are many heuristics for optimisation problems, and choosing the right heuristic is probably much harder than choosing the right parameters for a given optimisation method. But you don’t have to worry about your precious nest eggs, because luckily you’re on the website of a company competent enough to help you with this choice.

Can Analytics help fight Ebola?

The issue facing many countries, both directly and indirectly affected, is: how can we prevent the spread of Ebola?

The use of analytics in crises and natural disasters is not a new phenomenon. In 2010, during the Haiti earthquake, a research team made up of staff from the Karolinska Institute in Sweden and Columbia University managed to map the spread of cholera using mobile phone data.

What is happening in Africa?

Orange Telecom has handed over data from 150,000 mobile devices to a Swedish organisation in order to determine where people are moving. The BBC reported that this allowed authorities to see where best to place treatment centres and to plan where to restrict and prevent travel.

Nalini Joshi is a Professor in the School of Mathematics and Statistics at the University of Sydney. She stated during her appearance on Q&A that,

"The latest mathematical models from the CDC show that if you can isolate or hospitalise 70% of the infected patients by December, then the epidemic will be over in January. So, it gives you a measure of what you can do to finish, to make sure that the epidemic doesn’t become a pandemic across the globe."

She went on to say,

"It leads to a decision-making process, where you have to decide what resources you need to be able to hospitalise the 70% of infected patients that are expected by December. So it leads to all kinds of other branches: how many volunteers should you be sending, how many blankets and gowns and all of that should you send? So it gives you a measuring tool. It’s a ruler for deciding how to make the action happen."
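
To make the role of that 70% figure concrete, here is a toy, heavily simplified SIR-style simulation in Python (my own illustrative sketch with made-up transmission and recovery rates, not the CDC’s actual model), showing how the fraction of infected patients who are isolated changes the size of an outbreak:

    # Toy discrete-time SIR-style sketch (not the CDC's model). Isolated
    # patients are assumed not to transmit the disease at all.
    def peak_infections(isolation_fraction, beta=0.3, gamma=0.1, days=365,
                        population=1_000_000, initial_infected=100):
        s, i, r = population - initial_infected, initial_infected, 0
        peak = i
        for _ in range(days):
            effective_i = i * (1 - isolation_fraction)   # only these transmit
            new_infections = beta * s * effective_i / population
            recoveries = gamma * i
            s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
            peak = max(peak, i)
        return peak

    for frac in (0.0, 0.4, 0.7):
        print(f"isolation {frac:.0%}: peak infections ~ {peak_infections(frac):,.0f}")

With these made-up numbers, isolating 70% of patients pushes the effective reproduction number below one and the outbreak never takes off, which is exactly the kind of "measuring tool" the models provide.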

So, what does this all mean?

At the end of the day, analytics within disaster control is a tool that empowers authorities to predict and properly plan. By providing quantitative analysis that is supported by data, it reduces the reliance on spur-of-the-moment gut feeling. This initiative and innovation by authorities shows how analytics really can be used everywhere, and can help with disaster control.

Is it time for you to start using analytics?

2 years at Biarri

LinkedIn recently reminded me about my 2-year anniversary at Biarri. It feels like longer (a colleague has called it “Biarri time-dilation”), and I think that is a good thing.

In my previous job with a multinational corporate, I had a comfortable position running a great development and support team, essentially for a single, well-earning COTS product. The product has an international client base, and that meant I scored a few business-class trips each year to meet with generally happy customers. Needless to say, it was pretty sweet for a number of years and, again, comfortable.

But as the years passed, the software was ageing along with its core technologies. I had worked on the large code base for so long, I can still recall its structure. I wasn’t learning anything new – it was time to move on.

I’d worked with a few guys who had left the company for various reasons, and increasingly frequently I would hear of “Biarri” – a startup company that was based on accessible mathematics delivered in SaaS fashion. Biarri was gaining momentum, and the more I heard about the kind of projects and people involved, the keener I was to look into a possible move away from my comfortable office.

I met with one of the Biarri founders, Joe Forbes, over coffee at his favourite cafe (in what we started calling his “coffice”). I liked what I heard – it seemed the Biarri guys had learned from the development mistakes we often see in software development in a corporate culture (“remember the Alamo”, Joe often says). Software was stripped back to the essentials, with the aim to have interaction design done first rather than as an afterthought (or worse still, by developers!). Interfacing with other systems was somewhat bypassed – using simple formats (e.g. CSV) to move data between systems. A mathematical approach was taken to solving each problem, by formulating each engine in a formal way to exploit the “power of maths”. The company was keeping its IT footprint as light as possible through the use of remotely hosted / cloud based computing resources, and had a strong focus on keeping costs low (I had always found the waste, and lack of care about it, frustrating within a corporate). They were using new web-app based technologies, and finally, they were growing! I jumped ship.

My probation period was hard work. Another newbie – our COO George – and I were placed on a project aiming to model a redesign of a national parcel network, and some of the major Biarri players were on a well-earned spate of leave. George took the reins, and I was back in the developer’s chair, trying to cover ground as quickly as possible. My learning of “new stuff” started from that first week, and pretty much hasn’t abated since. As the project wore on, I got to team-lead a few of the more “junior” developers – however Joe and Ash are exceptionally good at hiring very clever people, so not much leading was required. By the end of the project I had been reasonably well deprogrammed from my old corporate culture (it isn’t your job title that makes you in Biarri – it’s how you perform), I’d worked on the tech. stack from back to front, and was ready for the next challenge.

Since then, I’ve worked on a number of quite different scheduling optimisation software projects. Along the way I’ve learned about linear and mixed integer programming in the “real world” (not just on toy problems), and how the real difficulty can lie in the customised algorithms which feed just enough of a problem into commercial solvers without killing solve time. I’ve seen classic statistical, AI and dynamic programming techniques applied to problems as diverse as hub selection, vehicle routing, fibre network design, demand forecasting and resource allocation. I’ve learned how agent-based approaches can be used in the field of optimisation as well as simulation, and I’ve seen the company mature its software development processes to keep the machine working at the top speed it requires (where “agile” can also mean “predictable”).

As Biarri has rapidly grown over the last few years, so has the code base. It’s great to know refactoring isn’t a dirty word at Biarri. I think the key to keeping Biarri agile will be to avoid falling into the trap of developing large “in house” libraries, and not being afraid to throw away code – that should set us up to be able to refresh the “tech. stack” regularly (keeping us ready to use leading edge technology, not locking us into past decisions).

Just like any workplace, the people make or break the culture. Joe has a “hire slow, fire fast” policy, and this has made for a great place to work. It’s a pretty flat structure, and I hope it stays that way – sometimes someone has to be the boss, but everyone needs to be able and willing to get their hands dirty when required.

I can’t say I’m always “comfortable” at Biarri, but I wouldn’t have it any other way. Looking forward to the next 2 years.

Melbourne Open Science Workshop on the 19th of July!

I am very excited to announce that on the 19th of July 2014, the first Open Science Workshop will be held in Melbourne, Australia, at Inspire 9.

It’s going to be a really exciting day. It’s completely free to attend, and there are still some tickets available here – Melbourne Open Science Workshop Tickets – or on the website itself.

The plan is to get together 100 scientists and researchers, talk to them about all the cool open science software that is out there, and get everyone started using the collaborative tools that can help make papers and research truly reproducible.

On the day we will be talking specifically about GitHub and the SageMathCloud. We will spend the morning getting set up with Git, practising pushing, pulling, forking and pull requests, motivated by examples.

In the afternoon, we will hear from some people working on open science projects, and then take a look at how to work with IPython notebooks in Git and the SageMathCloud (SMC). Among other things, SMC provides a way to run IPython notebooks in a collaborative environment. This means you can share computations with colleagues, and keep everything nice and controlled in repositories.

If you are a scientist or researcher and want to learn about easy ways to make your papers more reproducible, about the best ways to collaborate with your colleagues, and generally to have fun and work towards making the scientific process more open and available to all, then you should come along! Hope to see you on the day! Head over to the website for more information and to reserve your spot!

I should also thank our sponsors: Biarri Networks, GitHub, and Inspire 9.

Data! Data! Who owns the data?

When I started in this industry there was no such thing as a CIO, though it wasn’t long in coming.  IT was usually found in the Finance department under the CFO (who had absolutely NO CLUE about IT at all).  I typed memos on a PC using some character-based word processor but had to print the memos out and put them into internal mail because we didn’t have email!  At the time, companies owned and managed their own big mainframes, and corporate information systems were generally character-based and accessed via a terminal, or, in advanced cases, through a terminal emulator running on a PC – woo hoo!  There was no concept of “data as an asset” and it was bloody expensive to buy storage, so every effort was made to minimise the size of any database.  ERP was the hottest thing ever and was going to revolutionise business by eliminating all those pesky employees required to manually execute transactions.

So, what’s different a quarter of a century on?  Lots of things, obviously; I’ll just cherry-pick a few to make my point.  The falsity of the ERP marketing hype was harshly discovered by everyone who bought into it, and the message shifted from “you can get rid of employees by automating your transactions” to “think what you can do with access to all of the data that’s now available!”  The computing platform and apps have shifted so far they’re on another planet; who needs a word processor now that our email programs are so sophisticated?  Do companies even have internal mail rooms any more?  “Data is an asset” and, with relatively cheap storage, companies have lots and lots and lots of it.  We have Chief Information Officers (CIOs) who are supposedly responsible for the company’s information but, after a quick search for a modern definition of the role, seem to be mainly focused on the appropriate use of technology within the organisation.  Now, Analytics is going to revolutionise business!

OK, I’ve bought into that line about analytics.  It’s really cool stuff.  However, analytics is data hungry.  In fact, it’s famished.  But, it doesn’t need just any data.  It needs good, clean, sanitary data!  “So, what is that?” you ask.

Let me illustrate with an example of what it is NOT; I’ve got to be a bit cryptic to protect the innocent, but hopefully you’ll get the idea.

Let’s take a company that has over 10 years of sales data in a 120GB database.  The level of detail tracked for each purchase is fantastic!  Almost overwhelming, in fact, as there are hundreds of tables to mine.  We know each product purchased, quantity and date; each transaction is lovingly coded with a 3-character transaction type and a 3-character discount code (if any) amongst a plethora of other highly informative codes.  Codes relate back to a “master” table where you can find the highly informative description.

“Wow!”  “Great!”, you think.  Now we can use those handy codes to look for buying patterns and, maybe, predict sales.  If we are good, we might be able to look for a slowdown in sales and trigger a “why don’t you introduce discount x” message which, again if we’re good, will result in a boost in sales which we can track (testing our hypothesis and adding that information to the mix).

Everything seems good.  Then you realise that the end-users have the ability, and the right, to add any code they want at any time, even if it means the same as a previously used code (it just has a slightly different 3-character code).  Even better, they can reuse a code with a totally different description (hence meaning) from year to year or product to product!  This data is now pretty much useless because there is no consistency in it at all.  We can’t look for patterns programmatically.  A human would have to trawl through hundreds of thousands of records to recode everything consistently.
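
A tiny, hypothetical illustration of the problem (the codes and figures below are invented, not the client’s data): two codes that mean the same promotion quietly split the analysis.

    import pandas as pd

    # Hypothetical example: "EOF" and "EFY" were both entered by users to mean
    # the same end-of-financial-year discount.
    sales = pd.DataFrame({
        "discount_code": ["EOF", "EFY", "EOF", "NON", "EFY"],
        "amount":        [120.0, 80.0, 95.0, 200.0, 60.0],
    })

    # The promotion's true total (355.0) is split across two groups, so any
    # pattern tied to that discount is diluted before the analysis even starts.
    print(sales.groupby("discount_code")["amount"].sum())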

In talking with the customer, it emerged that the department that is interested in doing the sales analysis has no control over the department doing the data entry.  The department doing data entry is following its own agenda and doesn’t see why it should be inconvenienced by entering data according to the whims of another department.

At another customer, the spatial data that is used to catalogue assets is owned by the spatial specialists who enter the data.  The spatial database has been designed to provide information on which assets are located where (the final state of the asset).  It does not support the question: “what do I need to do to install the asset?”  For example, installation of an asset might require significant infrastructure to be created.  Let’s say a platform needs to be constructed and some holes need to be dug for supports.  Even though someone has gone out and assessed the site ahead of time (it’s going to be on concrete so we need to get through the concrete first, which is harder than just going into grass, and then need to make sure the concrete is fixed after installation) and that information is held in a separate Excel (ugh) file with a reference back to the asset identifier, it is not supplied in the spatial database.  Why?  Because it’s not relevant to the final state of the asset, only the construction of the asset.  Once construction is complete they don’t care about the fact that it’s installed over concrete.  So, in planning construction someone has to manually review the Excel files against the spatial database to plan the cost, timing and means of constructing the asset.  The spatial specialists don’t see why the construction information should be entered into the database; it will take them longer to update the database and the information needs to be maintained until it becomes obsolete (after construction).  Yet, by having that data in the database, the cost, timing and means of construction could be automatically generated, saving time and eliminating the errors generated through the manual process!

Am I the only one who finds these situations bizarre?  Irritating?  Annoying?  Unbelievable?

Remember the tree-swing cartoons? http://www.businessballs.com/treeswing.htm

How were these issues resolved in the manufacturing industry?  Someone realised that sales increased and costs reduced when they got it right!  And those companies who didn’t solve the problem eventually went out of business.  Simple, right?

So, I pose the following questions:

  • Are companies who aren’t able to solve this problem with their data going to die a painful death, as those who can solve it overtake them through the “power of analytics?”  I think, Yes!  And, they deserve to die!  (though, what will their employees do?).
  • Who in the organisation has ultimate responsibility for ensuring that data meets the organisation’s needs today and into the future?  I naively assumed that the CIO would make sure that data would be relevant and useful across the entire organisation, across the years (as much as possible).  However, this does not seem to be the case.  Departments are still fighting over who owns what data and don’t seem to be willing to help each other out for the overall good of the company.  Surely we don’t need to pay yet another executive obscene amounts of money to get this right?
  • Maybe the Universe is just trying to send me a message here by sending me all the difficult cases?
  • Maybe I’m just being overly cynical due to lack of sleep and food…

Here’s to better data!

Using Scirate to stay up to date

So for the past few months, I’ve been using a website called Scirate to catch up on the latest research in my field (quantum computing), and while Scirate is quite well-known in my community, it seems that it is not so widely known outside it. So in this post I would like to introduce you to it, and comment briefly on my workflow, hoping to inspire you to use it, and create your own!

Scirate assumes familiarity with arXiv – a very popular website among the physics/maths/computer science community for publishing “preprints” – research articles before they are published in professional journals. The arXiv doesn’t do much in the way of filtering for technical quality, so it is an “exercise for the reader” to decide which papers are worth reading. Partly, Scirate helps to solve this problem, but depending on how you use it, you will still need to exercise some discretion based on arbitrary criteria of your choosing. I’ll admit to favouring authors I know (of), or research institutions that I know of, or that I observe doing a lot of work in a particular area.

Before continuing on with this article, I recommend you sign up for Scirate. It’s possible to use Scirate without signing up, but by default it only shows papers in the “quantum physics” category.

Having signed up, the most important task is to properly configure the “Subscriptions” section. Navigate to this section of the website, and you will be faced with a very large list of words with checkboxes. The words correspond to arXiv categories. To find out what they mean, they are listed here – arXiv categories. As an example, below is part of my selection:

Having properly configured the list of categories you are interested in, head back to the Scirate homepage, and you will see a list of the papers for the currently selected period. By default this period will be the current “day”. I put day in quotes because no new papers are processed on the weekend, so the listing stays constant over that period. Note in particular that you can select different periods to browse.

As mentioned above, one of the main features of Scirate is the ability to “Scite” things. For example, consider my current homepage view (figure below).

Note that I have Scited all these papers, as have a few other people. The idea is that particularly “popular” papers appear at the top. The implication is that the more people in your field who use the system, the better it becomes!

My personal workflow for Scirate is to browse it daily at around 2-3pm, and again in the morning at about 8-9am. I choose these times as it appears that Scirate actually updates its daily listing at around 12pm in my timezone (Melbourne, Australia). So it means that I can go carefully through all the day’s papers around lunchtime, “Scite” the ones I want to read, and then come back the following morning to see what everyone else is interested in. In this way I catch any papers that I might’ve missed in my first pass!

Perhaps you have another way of using it! Feel free to share it!