
OECD Study Validates The Mind Lab by Unitec’s Postgrad Programme

Recently the OECD published a study called Students, Computers and Learning: Making the Connection. Unfortunately the media did the usual thing the media does and made a superficial (mis)reading of the document to come up with the headline ‘Computers “do not improve” pupil results, says OECD’, and that was the BBC, for heaven’s sake!

Of course the report is narrow in scope, in that it focuses on PISA results and concerns itself with only a handful of external references. It also only grudgingly acknowledges the success of digital teaching and learning in Australia, preferring to focus on its apparent mission to talk up the deadly drill-and-practice traditional schooling of the Asian territories that play the PISA game so well and make their children so miserable in the process. No-one, surely, wants to see anti-suicide fences put up around our examination halls?

Nevertheless, a deeper reading of the document gives more interesting insights that validate our postgraduate programme in digital and collaborative learning at The Mind Lab by Unitec. As the OECD report says, ‘technology can support new pedagogies that focus on learners as active participants with tools for inquiry-based pedagogies and collaborative workspaces’, a philosophy very much in tune with our own.

Of most interest, however, is Chapter 8, Implications of Digital Technology for Education Policy and Practice. In a previous study, Pathways to a Better World: Assessing Mobile Learning Policy against UNESCO Guidelines in a New Zealand Case Study, I looked at New Zealand policy in the context of the UNESCO Policy Guidelines for Mobile Learning. One of the conclusions from that piece of research was that it reaffirmed the importance of some core policy recommendations, such as the need to introduce the use of mobile devices into teacher education.
The OECD’s much broader study also acknowledges the critical importance of teacher education in making the most of technology in schools: ‘Technology can amplify great teaching but great technology cannot replace poor teaching.’ It also acknowledges that there are many benefits that PISA cannot measure, including the way that ‘technology provides great platforms for collaboration among teachers and for their participation in continued professional development, thus empowering them as knowledge professionals and change leaders.’

These three themes of digital tools, collaboration and leadership lie at the heart of our programme. We would wholeheartedly echo the final words of the OECD report: ‘The key elements for success are the teachers, school leaders and other decision makers who have the vision, and the ability, to make the connection between students, computers and learning.’ We share that vision, and are busy giving teachers the same vision, and the ability, to transform education for the better.

(dis)connectivism: a learning theory for the ghost in the machine

One of the most recent attempts at a learning theory is connectivism, which attempts to address the relationship between knowledge and technology. At the same time there is an increasing disconnect between our physical bodies and our digital souls. In a somewhat baffling and opaque paper from 2010 called ‘Academetron, automaton, phantom: uncanny digital pedagogies’, Siân Bayne of the University of Edinburgh addressed the concept of the ‘uncanny’ in online learning. Once the layers are peeled aside, there are some useful ideas to consider. Bayne refers to ‘the multiple synchronicities available to us when we work online…[the] blurring of being and not-being, presence and absence online.’

Our online lives are schizophrenically littered across multiple contexts, each one demanding a slightly different type of e-presence; an avatar, a profile, a photograph. We spread ourselves thin over the personal, the professional, the store, the auction, the review; constructing at one moment a Facebook life of “success so huge and wholly farcical”, the next, a LinkedIn profile designed to get that elusive new job to make that success less fictional.

We lose the distinction between past and present. Chronology blurs. It is indeed uncanny when my dead mother’s Facebook account sends me a message, or a Google search tells me that we will have nuclear fusion by…oh…2011? Alarming news stories of teenage suicide cults, seemingly driven by a desire to achieve digital immortality through physical death, seem to take the disconnect between our real and virtual lives to extremes. Perhaps, notwithstanding Ryle’s critique of mind-body dualism, we are all becoming ghosts in the machine. Can we ever call them back from heaven? This disconnectivism between a life lived and a fragmented digital artefact should perhaps raise some disquiet as to the role of pedagogy in the age of ghosts.
Perhaps one question for educators is how we temper the tendency to make learning a process of digital publication. It sometimes feels as if the default assignment task these days is to ‘broadcast yourself’. Perhaps a better mantra would be ‘reflect on yourself, protect yourself’, “for the vision of one man lends not its wings to another man.” Some things are better left to the imagination, rather than the app.

Ragile software development – a longitudinal reflection on post-agile methodology

Ok, so there is no such thing as a ‘ragile’ software development method. Nevertheless, for a number of converging reasons, I have recently been given cause to reflect on the history of rapid, lightweight and agile approaches to software development, and the current dominant ideology of methods. I use the label ‘ragile’ as an indicator of where we might have been, or where we might still go, in developing software in the post-agile era.

There’s a scene in Woody Allen’s movie ‘Sleeper’, where two doctors from 200 years in the future discuss beliefs about diet. It goes like this:

“You mean there was no deep fat? No steak or cream pies or… hot fudge?”

“Those were thought to be unhealthy… precisely the opposite of what we now know to be true.”

Scientists tend to realise that this week’s theory is just that, and may be replaced by a new theory at any time, based on our empirical observations. Diet promoters tend to take the opposite view: everything in the past was wrong, but now we know the truth. I hope that software developers are more like scientists than fad dietitians, and will embrace change in their thinking.

Received wisdom has it, perhaps, that the problems of the waterfall approach to software development have been overcome by a legion of certified Scrum Masters leading their agile organisations to the continuous delivery of quality applications. If only we were so enlightened. Royce himself, in his famous ‘waterfall’ paper, stated, “I believe in this concept, but the implementation described above is risky and invites failure.” He was talking about Figure 2 in his paper, the oft-copied waterfall diagram. It seems that few bothered to read the rest of the paper and the rather more challenging figures within it, consigning the software industry to decades of misunderstanding. Assuming that software development was in a chronic pre-agile crisis is also a misreading of history. Participants at the 1968 NATO conference, the apocryphal source of terms like ‘software engineering’ and ‘software crisis’, acknowledged that many large data processing systems were working perfectly well, thank you. DeGrace and Stahl told us that software was a ‘wicked problem’ in 1990, but ‘wicked’ does not mean ‘insoluble’, though it does mean that we should not expect there to be one right answer.

The software industry has seen a series of new ideas about software development over the decades, many of which have been based on leveraging improvements in the hardware and software tools available to us. Kent Beck, in the first XP book, referred to fully utilising these new tools, turning up the dial on all the best practices at once, possibly to 11 (or was that just Spinal Tap?). Almost a decade earlier, James Martin had published his ‘Rapid Application Development’ book, stressing the value of, among other things, prototyping, code generation, metrics, visualisation tools, process support tools and shared, reusable domain models. Later, in 1996, Steve McConnell’s ‘Rapid Development’ emphasised many of the same ideas, stressing productivity tools, risk management and best practices. Both authors prefigured many practices of the lightweight (later agile) methods that were emerging in the late 1990s: iterative, timeboxed development, customer engagement, small teams, adapting to changing requirements and quality assurance.

An underlying theme in rapid development is the concept of domain modelling and automated tools, including code generation. Similar themes appear in Agile Modelling, Model Driven Development and Domain Driven Design. Tools like the Eclipse Modeling Framework, and those based on the naked objects pattern, such as Apache Isis, put modelling at the heart of systems development.

The agile methods movement is at a point of change (hmmm, isn’t that the definition of a crisis?). Recent efforts have revisited the lessons of the Japanese car industry, with Lean and Kanban, in a search for ever ‘lighter’ processes, while at the same time a vision of agile as a traditional fixed methodology has become established (endemic, even). This has recently caused two signatories of the original agile manifesto to refer to ‘the failure of agile’ (Andrew Hunt) and to state that ‘agile is dead’ (Dave Thomas). From another perspective, Vaughn Vernon lamented in his 2013 book on domain driven design that in Scrum “a product backlog is thrust at developers as if it serves as a set of designs”.

So, coming back to ‘ragile’, what is it? Well, no more than an acknowledgement that there may be a collection of practices from both rapid and agile (and model/domain driven development) that remain relevant to the future of software development. Such an approach would emphasise tools, leverage prototypes, include shared domain models, embrace code generation, automate as much as possible (including estimation and project management) and deliver continuously. Such an approach might be considered radical, in the particular sense of going back to the roots of a phenomenon. Some of the ideas of Martin and McConnell were much harder to do in the 1990s than they are now. Can there be any software developers who do not use code generators of one type or another? They generate object property methods, test stubs, service end points and a host of other components; they refactor code and create user interfaces and database schemas. Rails and Grails developers work with domain models as a matter of course, allowing frameworks to build whole architectures automatically. It’s time to rethink how these strands might become a method for the 2020s, one able to cope with the distributed and parallel domain models of the cloud-based, in-database, Internet-of-things, plugin-driven, Web 3.0 applications of the future.
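To make the idea concrete, here is a toy sketch of what it means to let a shared domain model drive code generation, in the spirit of the scaffolding that Rails and Grails perform. It is not taken from any of the tools mentioned above; the Book model and the SQL mapping are invented purely for illustration.

```python
from dataclasses import dataclass, fields

@dataclass
class Book:
    """A shared domain model: one definition, written once."""
    title: str
    author: str
    price: float

def generate_schema(model):
    """Derive a SQL table definition from the domain model by
    introspecting its fields, rather than writing the DDL by hand."""
    type_map = {str: "TEXT", int: "INTEGER", float: "REAL"}
    cols = ", ".join("%s %s" % (f.name, type_map[f.type]) for f in fields(model))
    return "CREATE TABLE %s (%s);" % (model.__name__.lower(), cols)

print(generate_schema(Book))
# CREATE TABLE book (title TEXT, author TEXT, price REAL);
```

A real framework would, of course, derive far more from the same model (accessors, forms, service end points, test stubs); the point is simply that the model is the single source from which everything else is generated.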


The .sucks Top Level Domain

There’s been quite a bit of debate in the press about the new .sucks top level Internet domain, including this article in the New Zealand Herald. It does have its proponents, of course. The registry’s website claims that it can be used to ‘foster debate’ and ‘share opinions’, and suggests that it is valuable for cause marketing, consumer advocacy, anti-bullying and so on. I can’t help wondering why other, less infantile, domains can’t be used for these worthy causes. In fact, of course, it’s just a free-for-all that forces individuals and organisations to run around paying stupid prices for these domains just to protect themselves from Internet trolls. Obviously no-one could have seen that coming, right?

The body responsible for allowing new domain names is ICANN, the Internet Corporation for Assigned Names and Numbers, which claims to be a ‘not-for-profit public-benefit corporation’. I do wonder about the public benefit aspects. The socially aware and compassionate people who suggested the SUCKS domain name were Top Level Spectrum, Inc., Vox Populi Registry Inc. and Dog Bloom, LLC. All concerned charities with our welfare at heart, I’m sure. Vox Populi also won the auction for the right to extort money from everyone wanting to defend themselves from this domain name. The three SUCKS entries were among the 1,930 suggestions received by ICANN for new top level domain names in 2012; the full list is available on the ICANN website. Most of the suggestions were reasonably sensible, if largely self-serving, with lots of corporations wanting their own domains. There were, however, several stupid and destructive suggestions that were clearly rejected out of hand. These included SEX, SEXY, WTF and SUCKS… oh, wait…

I suppose if you make more than half a million dollars from the faceless corporations who suggested a domain like SUCKS (that’s just for making the suggestions – each one cost $185,000) you owe them something back, however much collateral damage you cause in the process. Not to mention the millions of dollars you can make from selling the rights to the domain itself, as this list of domain auctions shows. ICANN are now running around trying to close the stable door after the horse has bolted. Too little, too late.

It will be interesting to see who ends up as the owner of

Kinross Flat and the Amazon Jungle – An Indie Publishing Experience with CreateSpace and Kindle

I recently self-published my first novel, Kinross Flat, via Amazon CreateSpace and Kindle. This post is about my experience of the whole process, which was quite complex but well supported by Amazon’s various self-publishing tools. Amazon is not the only independent publishing platform, and I can’t speak for the relative merits of the alternative channels, so I’m not necessarily claiming that Amazon is the best. However, as the owner of a Kindle, it was the one that came to mind when I started thinking about indie publishing. I’d welcome others’ views on the alternatives.

CreateSpace is basically for print-on-demand, so if you only want to publish an eBook on Kindle then you don’t need it. However, the advantage of CreateSpace is that once you’ve set up your print-on-demand copy, the addition of a Kindle version is practically automatic, so you get both options at no cost. Yes, no cost – the whole thing is basically free (well, up to a point – I’ll get back to that later!)

So, what do you have to do? There are a number of ways of preparing your book for publication, but the best approach, I think, is to use the tools that are provided for you. You will need to register on the CreateSpace website, after which you will have access to an author dashboard that leads you through all the different steps required to publish your book. The easiest way to make sure your book is in the correct format is to download the Word template, which contains all the required styles and layouts. You can choose your book size, but I went with the recommended 6″ x 9″. For cover design, there is a free-to-use Cover Creator tool, which provides a relatively small number of basic layouts, but all of these can be customised in terms of font and background colour. You can also upload your own cover image and, of course, write all the cover text. The system will generate an ISBN for you and add it to the cover, with a bar code, automatically.

Once you submit your interior file (i.e. the book text), the system automatically checks it for compatibility, then generates a preview file, which you can check online or download as a PDF. You can also order a preview hard copy, which is probably the best way to proofread it, but you may have to wait several weeks to get it, and you have to pay for it.

Once you approve the preview, after a few more system checks, including a spell check, your book gets released to the Amazon sales channels, but only after a number of other things have been done. You have to fill in a U.S. tax declaration, which will specify how much U.S. withholding tax you will pay on any royalties, based on the country where you are a tax resident. For New Zealand that was 10%. On the subject of royalties, you also have to choose your royalty rate (35% or 70%) and the retail price of your book (there’s a handy calculator for this, which shows you the royalties you would receive through each distribution channel based on a U.S. dollar price). Incidentally, there are several different distribution channels you can choose from, but since it costs you nothing to choose them all, it seems a bit pointless to exclude any.
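As a rough sketch of the arithmetic involved (my own simplification: the real calculator also deducts printing or delivery costs, which are ignored here), the royalty you actually receive combines the royalty rate with the withholding tax:

```python
def net_royalty(list_price, royalty_rate, withholding):
    """Royalty per copy after U.S. withholding tax.

    A simplified sketch: printing and delivery costs, which the real
    CreateSpace/Kindle calculators deduct, are ignored here."""
    return list_price * royalty_rate * (1.0 - withholding)

# A US$10.00 book at the 70% rate, with 10% New Zealand withholding:
print(round(net_royalty(10.00, 0.70, 0.10), 2))  # 6.3
```

The same function with the 35% rate gives $3.15 per copy, which is why the choice of rate (and the pricing rules attached to each) matters.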

After your CreateSpace book is published, it’s a simple step to choose to also distribute on Kindle. You get another chance to check the preview on a Kindle emulator, choose whether you want digital rights management, select a retail price, and away you go. The Kindle version appears a day or two later on the Amazon site, and eventually the print and Kindle versions get linked together. Not all Amazon sites will support the print-on-demand version, just Amazon’s main site, Amazon Europe and the CreateSpace store. The Australian site, for example, will only offer the Kindle version.

So, is it really free? Well, basically, yes. There are all kinds of options on the CreateSpace site to get help with design, formatting, marketing etc., and these can be quite expensive, but as long as you are reasonably computer literate, the tools don’t require much expertise to do everything yourself. You can, if you like, pay for a review on Kirkus, which may or may not be favourable and costs hundreds of dollars. It’s possible that might pay off in sales, but it’s an unknown quantity. You will, of course, have to pay for any preview copies, or any hard copies of the final book, but these are more or less at cost.

Overall, I found the whole process quite fascinating and well supported. I did occasionally get lost in the Amazon jungle, and ended up, for example, filling in the tax form twice, for reasons I still don’t understand. Nevertheless, I’d recommend it to anyone else who, like me, regards themselves as an amateur author who just wants to share their work. If you’re a ‘real’ writer, I suspect that the more traditional publishing channels are still the best way to go, since ‘indie publishing’, although it sounds cooler, is still just what used to be called ‘vanity publishing’, which doesn’t sound so cool!

Refactoring Coderetreats: In Search of Simple Design

A while ago I posted a blog about the Global Day of Coderetreat. Since then I’ve been gathering and analysing data about coderetreats to see what their benefits are, and how they might be made even more effective. I’ve just written up some of this work in an article for InfoQ (thanks for the opportunity, Shane Hastie), which you can find on the InfoQ site.

The title has two meanings (sort of.) In one respect it’s about changing the design of coderetreats (i.e. refactoring the coderetreat itself) and in another respect it’s about bringing more refactoring activities into a coderetreat in order to focus more directly on the four rules of simple design (for more detail on this in the context of a coderetreat you could try Corey Haines’ book Understanding the 4 Rules of Simple Design.)

I hope the article encourages more software developers to attend and run coderetreats.

Converting a Google Doc to a Kindle Format .mobi File

I recently had a document, written using Google Docs, that I wanted to make available in Kindle format (a .mobi file). The thing was, I didn’t want to publish it through Amazon; I just wanted to provide a file that could be copied to a Kindle by anyone who wanted to access the material in that format. It turned out to be a little more complicated than I first thought, so if anyone else wants to do the same, I’ll explain how I did it. The thing to remember is that Google and Amazon are competitors, so they’re not going to make it easy to go from one to the other, are they? No indeed…

My first thought was to export my Google Doc to Microsoft Word format. The catch seems to be that the usual way of converting a Word document to Kindle format is to upload it using Amazon Kindle Direct Publishing. This wasn’t really what I wanted to do, as I had no intention of publishing the document via Amazon. I just wanted a tool that would do a local file conversion on my machine. There are some third-party apps that claim to do that with a Word file, but I picked one at random and it was pretty flaky.

My next approach was to use KindleGen, a command line tool provided by Amazon. This works on several input file formats, but not Microsoft Word. It does, however, convert HTML documents, which is one of the formats that you can export from Google Docs. The problem is that the default CSS styles in the HTML document that Google Docs gives you are not well suited to Kindle. The font sizes will be all over the place, because Google Docs generates a style sheet that uses point values for font sizes, which look really bad on a Kindle screen. I found when reading the document on my Kindle that only the largest font-size setting was readable, and that was too big. The last thing you want is a Kindle doc that doesn’t look like the other books on the reader’s Kindle. For similar reasons I also chose to remove the font-family settings, preferring to let the Kindle use its default fonts. However, you can leave these alone if you want.

Another issue with the default HTML export is that a couple of useful meta tags are missing. Anyway, all this is easily fixed! What does make life a bit difficult is that Google Docs generates the names of its class styles inconsistently. Yes, that’s right: every time it generates an HTML document, it randomly renames the class styles! This completely stuffs up any attempt you might make to set up a reusable style sheet. Thank you, Google (not).

Anyway, here’s the process I followed:

Google Doc to .mobi, step-by-step

1. Start with a Google Doc. I started with a very simple one.


2. Create a new working folder. I called mine ‘kindlegen’

3. Download KindleGen from the Amazon KindleGen page. It is downloaded as a zipped archive

4. Unzip the archive into your folder

5. Export your Google Doc in HTML format: File -> Download as… -> Web Page (.html, zipped)


6. Unzip the HTML page into your working folder

7. Open the HTML page in a suitable HTML editor. If you don’t have one, a text editor will do, though it makes things harder: the generated file has no line feeds, so it’s not very human-readable in something like Notepad. You can manually put line breaks in if you find it easier to navigate that way. With a proper HTML editor with colour syntax highlighting it’s a lot easier.


8. You will see that near the beginning of the HTML source is a ‘style’ element containing a large number of internal CSS styles. I changed the font sizes of all of these, as I couldn’t be bothered working out which ones were actually being used in my document. You need to replace all the ‘pt’ values for the ‘font-size’ entries with ‘em’ values. I chose similar values: for example, for 11pt, which is the standard paragraph font size, I used 1em; for 16pt headings I used 1.5em, and so on. Basically, it’s more or less a divide-by-ten exercise.

For example, the generated entry for the paragraph (p) tag (unlike the various class styles, the HTML element styles are at least named consistently) specified its font size in points; in my updated version I replaced that with an em value, and also removed the font-family. I didn’t find there was a need to replace any of the other parts of the styles; KindleGen will ignore any that don’t apply.

9. If you like, also remove all the ‘font-family’ entries (as above). The Kindle will be able to cope with the fonts used by the Google Doc if you leave them in.

10. By default, the ‘title’ element will contain the original file name, which may not actually be your preferred document title. If you need to, change the content of the ‘title’ element at the beginning of the file to the one you want:

<title>My Book Title</title>

11. Near the top of the HTML source you should find the following ‘meta’ element, between the title element and the style element.

<meta content="text/html; charset=UTF-8" http-equiv="content-type">

Leave this alone, but add the following element above or beneath it:

<meta name="author" content="my name">

If you don’t do this, when the document appears in your Kindle book list, there will be no author name associated with it.

If you want your document to have a cover image (a JPEG), you will also need to add the following element:

<meta name="cover" content="mycoverfile.jpg">

This assumes that your cover JPEG is going to be in the same folder as the HTML document when you convert it. If you have a cover image, add it to your working folder.

12. Open a command window in your working folder and run KindleGen against your HTML file:

kindlegen myhtmlfile.html

You may get some warnings, for example if you haven’t defined a cover image, or there are CSS styles that don’t apply. These won’t matter. In this example I didn’t provide a cover file, and the ‘max-width’ CSS property is being ignored.


Assuming there are no fatal errors, the tool will create a .mobi file in the same folder.

13. Connect your Kindle using a USB cable. Navigate to the ‘documents’ folder on the Kindle and copy your .mobi file into it (if you want, you can put it in a subfolder; the Kindle will still pick it up).

14. Eject the Kindle and check the book list. You should find your document has been added and is readable.

Here’s my file on my elderly Kindle.


