Posts

Using OpenStreetMap data for Australia

I’ve been using OpenStreetMap (OSM) data for Australia to generate matrices of travel times and distances suitable as input to truck routing optimisation problems. OSM is a world-wide map data set that anyone can edit, and it is impressively comprehensive.

Some of the resources I’ve used are www.osmaustralia.org, which runs a regular batch job to extract OSM data country by country, and http://keepright.x10hosting.com/, which keeps an up-to-date list of map data errors. These kinds of errors are important to know about when producing routable data from OSM, in particular dead-end one-way streets and “almost junctions”.

I’ve been using Quantum GIS (QGIS) to look at the data after converting it to MID/MIF format. I can get it to label the lines that are one-way streets, but unfortunately there is no way to show the line direction in QGIS, which is a major pain. (Actually there is a way, but you have to get a source code branch from their repository and build it all yourself.) Now that I think of it, a silly cheat would be to add another field to the map data, say a letter representing each arc’s rough direction (“D” for down, “L” for left etc., based on the difference between the lat/longs of the arc’s start and end nodes), and label with that; a rough sketch follows.
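
A rough Python sketch of that cheat (the function and argument names here are hypothetical, just to show the idea):

def direction_letter(start_lon, start_lat, end_lon, end_lat):
    # Classify an arc by its dominant direction based on the difference
    # between its endpoint coordinates: (R)ight, (L)eft, (U)p or (D)own.
    d_lon = end_lon - start_lon
    d_lat = end_lat - start_lat
    if abs(d_lon) >= abs(d_lat):
        return "R" if d_lon >= 0 else "L"
    return "U" if d_lat >= 0 else "D"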

I also found that QGIS performance differs noticeably depending on whether the data is opened in TAB or MID/MIF format. TAB format is fairly fast (just zooming, panning etc.) but MID/MIF is noticeably slower. You’d think it wouldn’t make a difference, as the same internal data representation should presumably be used, but obviously not.

I’m using some extra layers to show some of the processed data (picture below). For example, one layer shows just the arcs involved in restricted turns, which I can draw on top of the street network with a thicker line style. Another layer uses dots produced by my “island” processing: the dots represent “orphaned” nodes which are not on the main “island” of map data (connected in the sense that all nodes can reach all other nodes). These orphaned nodes will be ignored by the travel time calculation; a sketch of the island idea follows. There are around 9,600 of these nodes in the entire set of Australian OSM routable data I’m using (which has 620,042 nodes and 1,503,668 arcs in total). The filtered subset of OSM data I’m using includes only street segments with a “highway” tag, which excludes cycleways, pathways, hiking trails, ferry routes, ski slopes, building outlines, administrative boundaries, waterways etc.
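
For what it’s worth, a minimal sketch of the island idea (assuming the network is held as an adjacency list keyed by node id; for simplicity this treats every arc as two-way, whereas the real processing has to respect one-way arcs):

from collections import deque

def orphaned_nodes(adjacency):
    # adjacency: dict mapping each node id to an iterable of neighbour ids.
    # Returns the nodes that are not on the largest connected "island".
    unvisited = set(adjacency)
    islands = []
    while unvisited:
        seed = unvisited.pop()
        island = {seed}
        queue = deque([seed])
        while queue:
            node = queue.popleft()
            for neighbour in adjacency.get(node, ()):
                if neighbour in unvisited:
                    unvisited.remove(neighbour)
                    island.add(neighbour)
                    queue.append(neighbour)
        islands.append(island)
    main_island = max(islands, key=len)
    return set(adjacency) - main_island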

Some observations on the OSM data (for Australia):
• One-way information seems quite complete.
• Only 12% of the streets have road speed information (the “maxspeed” tag). This is an issue, as vehicle routing needs highways to have a faster speed, otherwise they won’t be used (and there will be lots of rat-running, for example). On longer routes (e.g. interstate) the travel time will also be grossly overestimated. A couple of things we could do here: search for all segments with “highway”, “motorway” or “freeway” in their names and assume some sort of speed, like 80 km/h, on those segments; or, with a bit more coding effort, where there are chains of segments and one has a speed but others with the same street name don’t, assume that speed on all of those segments. (A sketch of both ideas follows this list.)
• About 70% of the road segments have street names. Some of the unnamed segments are bits of roundabout, service roads, motorway on/off-ramps etc. But there are also a lot of streets classified (by their “highway” tag) as “secondary”, “tertiary”, “unclassified” and even “residential” which are not named. This is an issue when vehicle routing needs to produce driver directions or verbose path information.
• There are only several hundred instances of streets with restricted manoeuvre information (banned right-hand turns being the prime example), and most of them are in Sydney. In reality this number should be in the thousands or tens of thousands. This will likely be the biggest issue from a routing quality point of view: it will mean routes can include many illegal right turns.
• I found a weird character or two at the end of the file, which causes both Python and C++ file reading functions to get confused and read blanks forever. Weird, but easily avoided.
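
Here is a rough Python sketch of both speed heuristics (the segment representation is hypothetical, the 80 km/h and 50 km/h values are just assumed defaults, and a proper version of the second idea would check that same-named segments actually form a connected chain):

def infer_speeds(segments, default_kmh=50, fast_kmh=80):
    # Each segment is a dict with a 'name' (a string, possibly empty) and a
    # 'speed_kmh' that is None when the maxspeed tag is absent.
    fast_words = ("highway", "motorway", "freeway")
    # Pass 1: remember a known speed for each named street.
    known = {}
    for seg in segments:
        if seg["speed_kmh"] is not None and seg["name"]:
            known.setdefault(seg["name"], seg["speed_kmh"])
    # Pass 2: fill the gaps, preferring a same-named segment's speed.
    for seg in segments:
        if seg["speed_kmh"] is None:
            if seg["name"] in known:
                seg["speed_kmh"] = known[seg["name"]]
            elif any(w in seg["name"].lower() for w in fast_words):
                seg["speed_kmh"] = fast_kmh
            else:
                seg["speed_kmh"] = default_kmh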

Drawing figures for scientific publications

I recently had to modify a picture (figure) as part of a resubmission to an academic journal.

The picture was created using XFig some time ago. XFig is one of the most widely used applications for creating figures for academic publications, and easily the best application I have used for this purpose. Its ability to incorporate LaTeX scientific formulas and fonts (use “xfig -specialtext -latexfonts -startlatexFont default” when launching XFig) means that figures can be easily and beautifully integrated into LaTeX documents.

As I now have a Windows machine, and am definitely no expert on Linux (I am slowly trying to remedy this), I was dreading having to jump through the hoops to get XFig working on my new Windows box (see http://www.cs.usask.ca/~wew036/latex/xfig.html). Part of the process involves installing Cygwin (a Linux-like environment for Windows). For someone not familiar with Linux, the process is quite convoluted (and I have been down this path before).

Googling XFig also brings up WinFig, which is supposed to be very similar to XFig but runs on MS Windows. After downloading WinFig I quickly found out that you can only save figures with 15 or fewer objects without paying for the full version (making the free version not very useful), something the homepage neglects to mention.

I soon realised that, because I had already installed VirtualBox and Ubuntu (a very pain-free process), I should definitely try to use XFig within Ubuntu. Installing XFig within Ubuntu is what one of my colleagues would call automagical: in a terminal inside Ubuntu, type “sudo apt-get install xfig”, then, as mentioned before, launch it with “xfig -specialtext -latexfonts -startlatexFont default”, and I was cooking straight away.

In order to open my file in XFig in Ubuntu, I was hoping to be able to share some folders with MS Windows and then mount them inside Ubuntu. Alas, despite all my efforts I have still not been able to get this to work. Email to the rescue: I emailed the files to myself, opened my Gmail in Ubuntu, saved the file, and the problem was solved.

The moral of the story (well, this blog post) is that if you are trying to get XFig working on Windows: don’t. Use the power that VirtualBox gives you and run XFig in Ubuntu within VirtualBox on your Windows machine! Now to move back to the LaTeX editing applications in Linux and away from those I have been using with MS Windows!

Well Done Nick and Daryl – Maths Prize Winners

Daryl Bruce and Nick Vaskrsic both won awards at the recent RMIT Mathematical Sciences Awards Night.  Daryl won the Operations Research Prize and Nick won the Mathematics Rising Star Prize.  Both of them currently work as part-time business analysts and route optimisation planners for Biarri.

It is great to see the guys doing so well in their studies. All the best to everyone in the Biarri team who are currently in the midst of exams – good luck!

Managing Database Connections “with”

I’m using a pool to manage database connections. I decided to change from per-thread database connections to per-function database connections.

I was unsure how to do this elegantly and thought decorating the functions might be the best approach. However, I really wanted to have local variables defined: enter context managers and the with statement.


from contextlib import contextmanager

import psycopg2.extras

@contextmanager
def db_wrap():
    # Borrow a connection from the module-level psycopg2 connection pool.
    conn = pool.getconn()
    dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cur = conn.cursor()
    try:
        yield conn, cur, dict_cur
    finally:
        # Always hand the connection back to the pool, even if the body raised.
        pool.putconn(conn)

and to use it:

with db_wrap() as (conn, cur, dict_cur):
    cur.execute('select stuff etc')
    conn.commit()

Rather elegant 🙂

Official docs: http://docs.python.org/library/contextlib.html

Loki

Flexible authentication with cherrypy and repoze.who

I recently struggled with the amazing lack of clear examples of how to set up an easy, flexible and effective authentication and authorisation system with cherrypy. The solution: repoze.who and the cherrypy WSGI pipeline.

So you define a function that returns the middleware:


import logging

from repoze.who.classifiers import default_request_classifier, default_challenge_decider
from repoze.who.middleware import PluggableAuthenticationMiddleware

def setup_auth(app):
    # Wrap the WSGI app in repoze.who's pluggable authentication middleware.
    middleware = PluggableAuthenticationMiddleware(
        app,
        identifiers,
        authenticators,
        challengers,
        mdproviders,
        default_request_classifier,
        default_challenge_decider,
        log_stream=log_stream,
        log_level=logging.DEBUG,
    )
    return middleware
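
The identifiers, authenticators, challengers and mdproviders are lists of (name, plugin) pairs, and log_stream is any file-like object. A minimal sketch using the basic-auth and htpasswd plugins that ship with repoze.who (the realm name and password file path below are placeholders):

import sys

from repoze.who.plugins.basicauth import BasicAuthPlugin
from repoze.who.plugins.htpasswd import HTPasswdPlugin, crypt_check

basicauth = BasicAuthPlugin('Workbench')  # placeholder realm name
htpasswd = HTPasswdPlugin('passwords.htpasswd', crypt_check)  # placeholder file

identifiers = [('basicauth', basicauth)]   # extract credentials from the request
authenticators = [('htpasswd', htpasswd)]  # verify them against the file
challengers = [('basicauth', basicauth)]   # issue the 401 challenge
mdproviders = []                           # no extra user metadata here

log_stream = sys.stdout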


and then append that middleware to the pipeline.

import cherrypy

if __name__ == '__main__':
    # Root is the application's root controller class, defined elsewhere.
    app = cherrypy.Application(Root())
    app.wsgiapp.pipeline.append(('repoze.who', setup_auth))
    cherrypy.quickstart(app, config='workbench.conf')

Easy!

The benefits of hosted solutions

Cloud computing offers many advantages over more traditional software channels. There is an interesting analogy between computing power and electricity (elucidated well in Nicholas Carr’s 2008 book The Big Switch: Rewiring the World, from Edison to Google): cloud computing offers computing power and business applications served up from a “centralised power plant”.

At Biarri we are excited about the advantages of cloud-hosted solutions instead of the more traditional desktop approach, in particular:

  • All clients run the latest and greatest version of the software
  • Accessible anywhere there is an internet connection
  • From a development point of view, problems can be solved once in a controlled computing environment. This lets developers concentrate more on features of value to customers and less on making software robust enough to run under many different environments (OS versions or variants for example).
  • Avoiding desktop licensing issues (e.g. software tied to a particular machine, with an awkward process for transferring the licence to another machine)
  • Short-circuits problems with desktop software deployment
  • Instant support – pursuing the electricity analogy, this would be like having a handyman always immediately on call to fix your problem
  • Scalability – ability to increase the computing power required on demand by “spinning up” more machine instances in the cloud
  • And best of all for our business clients – available on a low-cost monthly subscription (with the ability to easily turn on and off the service) without an arduous IT deployment

Biarri’s Workbench continues to be developed and enhanced, offering a hosted solution that comprises many different optimisation engines, all of which “plug in” to a central, flexible hosted architecture. In keeping with Biarri’s key values of accessibility and power, it provides a workflow-oriented user interface with rich data manipulation and visualisation functionality.

Accessibility – at work every day

The posts below describe Biarri’s approach to delivering optimisation through a number of accessible channels. Whether it is modelling in Excel, a ‘Biarri Inside’ engine, using the Biarri Workbench, or a managed service, these are all great ways for businesses to quickly and affordably benefit from the power of optimisation and Operations Research.

On a more practical level, each day we see examples where Biarri’s open and accessible approach gives customers the power to “drive” the model or application and gain new insights and approaches to complex problems, delivering savings.

The ability to change parameters and input data and re-run optimisations, all in a matter of seconds, means that in a very short time customers have used Biarri’s tools to gain a deep understanding of what drives cost and yield in a supply chain or business operation, and of the impact of the different levers they can pull to effect change.

We at Biarri believe that powerful insight and analysis should be available quickly, to answer the burning questions that management have. For this reason, every day the team at Biarri look for new ways to make our commercial mathematics more accessible for our customers.

Accessible Engines: Biarri Inside

From the start Biarri has been committed to providing easy access to powerful optimisation. Our focus at inception was twofold:

  • Custom optimisation models delivered through consultancy projects, and
  • Task-specific engines made available on a Software as a Service (SaaS) basis via the “Biarri WorkBench” (WB).

The strong demand for our analytical consulting services continues to surprise us and the WB is now in Beta with a launch expected later in the year.

However, a new delivery channel has emerged to provide even greater access to our optimisation capability.  Over the last few months we have been developing optimisation formulations and engines that other companies are embedding in their own software.  This ‘Biarri Inside’ approach allows companies to quickly and affordably add optimisation capability to an existing application and leverage Biarri’s expertise in formulating and coding optimisation engines.

We are responding to the demand for the ‘Biarri Inside’ approach by ensuring that the optimisation engines we develop have a well-documented and well-designed API, so each can be easily incorporated into existing solutions. We are now involved in a number of projects that leverage this delivery channel for accessible optimisation.