I recently had a document, written using Google Docs, that I wanted to make available in Kindle format (a .mobi file). The thing was, I didn’t want to publish it through Amazon; I just wanted to provide a file that anyone could copy to a Kindle if they wanted the material in that format. It turned out to be a little more complicated than I first thought, so if anyone else wants to do the same, I’ll explain how I did it. The thing to remember is that Google and Amazon are competitors, so they’re not going to make it easy to go from one to the other, are they? No indeed…
My first thought was to export my Google Doc to Microsoft Word format. The catch is that the usual way of converting a Word document to Kindle format is to upload it using Amazon Kindle Direct Publishing. This wasn’t really what I wanted to do, as I had no intention of publishing the document via Amazon. I just wanted a tool that would do a local file conversion on my machine. There are some third-party apps that claim to do that with a Word file, but I picked one at random and it was pretty flaky.
My next approach was to use KindleGen, a command line tool provided by Amazon. This works on several input file formats, but not Microsoft Word. It does, however, convert HTML documents, which is one of the formats that you can export from Google Docs. The problem is that the default CSS styles of the HTML document that Google Docs gives you are not well suited to Kindle. The font sizes will be all over the place, because the generated style sheet specifies them in points, which render badly on a Kindle screen. I found when reading the document on my Kindle that only the largest font size setting was readable, and that was too big. The last thing you want is a Kindle doc that doesn’t look like the other books on the reader’s Kindle. For similar reasons I also chose to remove the font family settings, preferring to let the Kindle use its default fonts. However, you can leave these alone if you want.
Another issue with the default HTML format is that a couple of useful meta tags are missing from the HTML. Anyway, all this is easily fixed! What does make life a bit difficult is that Google Docs generates the names of its class styles inconsistently. Yes, that’s right, every time it generates an HTML document, it randomly renames the class styles! This completely stuffs up any attempt you might make to set up a reusable style sheet. Thank you Google! (not).
Anyway, here’s the process I followed:
Google Doc to .mobi, step-by-step
1. Start with a Google Doc. Here’s a very simple one
2. Create a new working folder. I called mine ‘kindlegen’
3. Download KindleGen from the Amazon KindleGen page. It is downloaded as a zipped archive
4. Unzip the archive into your folder
5. Export your Google Doc in HTML format: File -> Download as… -> Web Page (.html, zipped)
6. Unzip the HTML page into your working folder
7. Open the HTML page in a suitable HTML editor. If you don’t have one, a text editor will do, though it makes things harder. Opened in Notepad it’s not very human readable, as there are no line feeds; you can manually add them if that makes it easier to navigate. With a proper HTML editor with colour syntax highlighting it’s a lot easier.
8. Near the beginning of the HTML source you will see a ‘style’ element containing a large number of internal CSS styles. I changed the font sizes of all of these, as I couldn’t be bothered working out which ones were actually being used in my document. You need to replace all the ‘pt’ values of the ‘font-size’ properties with ‘em’ values. I chose similar values; for example, for 11pt, the standard paragraph font size, I used 1em, and for 16pt headings I used 1.5em, and so on. Basically, it’s more or less a divide-by-10 exercise.
For example, here’s the generated entry for the paragraph (p) tag (unlike the various class styles, the HTML element styles are at least consistent):
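The exact values vary between exports, but it was along these lines (the colour and margin properties here are illustrative):

p { margin: 0; color: #000000; font-size: 11pt; font-family: "Arial" }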
My updated version looks like this (I also removed the font-family):
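Something like this (again, the other properties are illustrative):

p { margin: 0; color: #000000; font-size: 1em }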
I didn’t find there was a need to replace any of the other parts of the styles. KindleGen will ignore any that don’t apply.
9. If you like, also remove all the ‘font-family’ entries (as above). If you leave them in, the Kindle will be able to cope with the fonts used by the Google Doc.
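If hand-editing all those values sounds tedious, the two substitutions above can be scripted. Here’s a sketch in Python (the MyBook.html file name is an assumption; point it at your own exported file):

```python
import pathlib
import re

def kindle_friendly(css: str) -> str:
    """Make Google Docs CSS Kindle-friendly: convert pt font sizes
    to em (roughly pt / 10) and drop font-family declarations."""
    # font-size:11pt -> font-size:1.1em
    css = re.sub(
        r"font-size:\s*(\d+(?:\.\d+)?)pt",
        lambda m: "font-size:{:g}em".format(float(m.group(1)) / 10),
        css,
    )
    # Remove font-family entries so the Kindle falls back to its defaults.
    css = re.sub(r"font-family:\s*[^;}]+;?", "", css)
    return css

path = pathlib.Path("MyBook.html")  # illustrative name
if path.exists():
    path.write_text(kindle_friendly(path.read_text(encoding="utf-8")),
                    encoding="utf-8")
```

This applies the substitution to the whole file, which is fine here since the styles all live in the one ‘style’ element.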
10. By default, the ‘title’ element will contain the original file name, which may not be your preferred document title. If necessary, change the content of the ‘title’ element at the beginning of the file to the one you want:
<title>My Book Title</title>
11. Near the top of the HTML source, between the title element and the style element, you should find the following ‘meta’ element:
<meta content="text/html; charset=UTF-8" http-equiv="content-type">
Leave this alone, but add the following element above or beneath it:
<meta name="author" content="my name">
If you don’t do this, when the document appears in your Kindle book list, there will be no author name associated with it.
If you want your document to have a cover image (a JPEG), you will also need to add the following element:
<meta name="cover" content="mycoverfile.jpg">
This assumes that your cover JPEG is going to be in the same folder as the HTML document when you convert it. If you have a cover image, add it to your working folder.
12. Open a command window in your working folder and run KindleGen against your HTML file:
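In my case the command was along these lines (the file names are illustrative; the -o switch names the output file):

kindlegen MyBook.html -o MyBook.mobi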
You may get some warnings, for example if you haven’t defined a cover image, or there are CSS styles that don’t apply. These won’t matter. In this example I didn’t provide a cover file, and the ‘max-width’ CSS property is being ignored.
Assuming there are no fatal errors, the tool will create a .mobi file in the same folder.
13. Connect your Kindle using a USB cable. Navigate to the ‘documents’ folder on the Kindle and copy your .mobi file into it (if you want, you can put it in a subfolder; the Kindle will still pick it up).
14. Eject the Kindle and check the book list. You should find your document has been added and is readable.
Here’s my file on my elderly Kindle.
I was recently asked to deliver a one-day Scrum workshop that was supposed to conclude with an agile project simulation activity, but there was no specific guidance as to which activity to use. I’ve used several different types of process miniature for agile project management. A few years ago I even wrote one, the Agile (Technique) Hour, with a colleague in the UK, and I use the XP Game with my students. I’ve also found Lego to be a good way to make activities suitably tactile and creative; in particular, I’ve used the latest version of Lego Mindstorms with my postgrad students.
Pondering what to do in the workshop, I looked for something that used Lego but was also Scrum-specific. It didn’t take long to find the Scrum Simulation with Lego by Alex Krivitsky. It’s a pretty simple simulation, using the most basic Lego bricks, but it works well. The team have to build a city, over three sprints, from a product backlog of story cards provided by the product owner. The best things that came out of our exercise were, I think, the following:
- I deliberately didn’t give the team story priorities, or business values. Like an unhelpful customer I told them all my requirements were equally important. All they had were effort points. As a consequence I ended up with a city with no schools or hospitals.
- In the first sprint I gave them only one of the three ‘C’s: the (story) card. I didn’t give them either the conversation (clarifying requirements) or the confirmation (defining acceptance criteria). As a result, the buildings at the end of sprint one were terrible and I rejected nearly all of them. Like a typical customer, I didn’t know what I wanted, but I knew that I didn’t want what they had done. After the review and retrospective, quality improved hugely in the second sprint.
- In the second sprint the team knew much better what their task was, but their teamwork was dysfunctional. Some found themselves idle while others did their own thing. Again, following the review and retrospective, teamwork improved remarkably in sprint three.
- Team velocity was all over the place (the burndown chart looked like a difficult ski run), but in the end they could have done more stories in sprint three than they had scheduled. They asked if they should add more stories from the product backlog. I told them no: if you finish a sprint early, go down the pub. I didn’t get my schools or hospitals, but in real life I would have a happier team.
Here’s my team’s Lego city. Note the stained glass window in the church and the wheels in the bicycle shop. Good work team!
One of the most important aspects of using a mobile device for learning is being able to use it to interact with your environment. A major part of that is the various sensors that enable you to gather data from your learning context. That has not always been easy to do in the past, with the need to find and install various apps that would allow you to access different combinations of sensors on your device.
Thankfully, the nQuire-it citizen inquiry site has been launched to help young people develop practical science skills. The nQuire-it platform includes the Sense-it app, the first open application to unlock the full range of sensors on mobile devices, so that people of any age can do science projects on their phones and tablets.
Sense-it provides a useful list of the sensors available on your particular device. My ‘legacy’ Galaxy SIII doesn’t have anything like the full set of sensors available on some of the newest phones, but still has a reasonable selection, as this screen capture from Sense-it’s handy ‘show/hide sensors’ tool shows.
Each sensor has an associated tool within the app. These appear on the main screen.
Each tool makes it easy to gather data from its sensor. Here, for example, is the light sensor being used to measure the light level in my office.
The nQuire-it site has lots of projects where you can try out these sensors, and you can also create your own projects. This should prove a great resource for science teachers and learners.
It seems that, for learning designers, learning analytics (mostly using log and performance data gathered from learning management systems) is the new black. I recently attended the annual conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), where every fourth presentation, it seemed, had something to do with learning analytics. Much of the content of these presentations was on the ‘what’ of learning analytics, i.e. what is technically possible in gathering data about how students are learning? The next question is ‘how’: how do we use this data? Finally, we have to address the ‘why’: why are we doing this, and what is our goal?
Perhaps the most interesting observation was given by Jaclyn Broadbent, talking about the Desire2Learn Intelligent Agent tool: http://ascilite2014.otago.ac.nz/sharing-practice/#78
One of the tasks of these agents is to send automated, customised emails to students: not only task reminders but also positive feedback on good performance. In other words, the system knows what the students are doing and can send targeted emails that reflect this performance. The ‘why’, of course, is to provide positive feedback in the hope that this will sustain good performance. Apparently, these automated emails are very well received by the students, but hardly any of them realise that the messages are being generated by a machine rather than being sent personally by the course tutors. Perhaps even more interestingly, the few who did realise that these emails were automated still liked receiving them. Perhaps this is partly because the course tutors created the message templates, so their personalities were still evident in the generated emails. I’d be interested to know whether this attitude still prevails as tools like this become more common and the novelty factor wears off. Once every student in higher education is receiving encouraging emails sent by the machine, will they still regard them as positive and valuable? Or will they become the next generation of annoying corporate spam? I guess in the end it depends on the content. As long as we are giving students insights they may not have gained on their own, for example their relative performance compared to their peers on a course, our cyber-motivation may still hold its value.
I spent some time over the weekend throwing away old data CDs. Many of these were for courses I’d delivered on customer sites. These days the course tools are shared on a (soon to be obsolete) USB 2 stick. Others were archive disks for my digital photos. I didn’t quite get to throwing those out, but as I put them back in the cupboard I reflected that my current laptop doesn’t have a CD drive (though I have an external drive that I hardly ever use). These days all my photos get uploaded automatically to Dropbox as soon as I’m on a WiFi network; no more cables, manual file copying and CD burning. No doubt, should I ever need these CDs again, I’ll have nothing left that can read them. My kids don’t respond to emails; I have to message them using social media. My colleagues judge each other on the dubious statistics generated automatically by the search algorithms in Google Scholar Citations. I video call people on the other side of the planet for free on a disposable mobile device. All of this is, of course, the new normal. Something happened over the last few years that moved our lives into the socio-mobile cloud, where we gave up ownership and control for convenience and immediacy.
The question I find myself asking as I trash my old CDs is what will the next new normal be? What will happen to us in the future that will make Facebook, WiFi, smartphones and cloud storage look like clunky old museum pieces? Relentless connectivity will be the first to arrive, since it is already well on the way. The immediate casualty of that will be that the blessed sanctuary of the aeroplane will be absorbed into the all-consuming expectations of 24/7 availability. We will lose what little control we have over our means of communication as the relative privacy of corporate email gets overtaken by misguided attempts to make us more ‘social’. We will lose ownership of any and all data that we generate, as private storage becomes obsolete. We will be unable to define ourselves in any domain other than the digital; your online profiles will be more powerful than the real you. At some point, we will be required to sell the last fragments of our individuality to the needs of corporate greed and national security. If the past is anything to go by, we will do it willingly and blindly, trading our inheritance for a few trinkets.
For the last year or so, one of my research activities has been exploring the design and delivery of coderetreats. Our first article on this topic, Coderetreats: Reflective Practice and the Game of Life, was published in IEEE Software; it was the first piece of academic work to be published in this area. In that article we reported on how running a standard coderetreat with our students helped develop their reflective practice. In a post late last year I mentioned the Global Day of Coderetreat. We also gathered data from that event, which raised some interesting questions about how well these activities support the use of test driven development (TDD) and learning about the four principles of simple design. The coderetreat website (coderetreat.org) says that ‘Practicing the basic principles of modular and object-oriented design, developers can improve their ability to write code that minimizes the cost of change over time.’ These principles are outlined elsewhere as the ‘XP Simplicity Rules’. However, there was some evidence from our research that the usual approaches to coderetreats were not particularly effective at making these design rules explicitly understood. We also observed that many participants struggled to get started with their first unit test.
To try to address these issues, we re-engineered our most recent coderetreat so that it scaffolded the intended learning outcomes explicitly. For the first session we provided the first unit test, and suggested the second. This had a remarkable effect on how quickly the participants got into the TDD cycle. We also designed each session so that it directly addressed one of the four principles of simple design, by providing various code and test components, building on the concept of the legacy coderetreats that have been run by others. In fact the last session was very much in the legacy coderetreat style, where we provided some poorly written ‘legacy code’ without tests, which the participants had to refactor by first adding unit tests.
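To give a flavour of that seed test, here’s a sketch in Python (the next_state function and the rule-per-cell design are illustrative, not the actual materials we handed out):

```python
# A first unit test for Conway's Game of Life, coderetreat style:
# start from the simplest rule and let TDD drive out the rest.
import unittest

def next_state(alive: bool, live_neighbours: int) -> bool:
    """One cell's fate under Conway's rules."""
    if alive:
        return live_neighbours in (2, 3)  # survival
    return live_neighbours == 3           # reproduction

class GameOfLifeTest(unittest.TestCase):
    # The test we provided:
    def test_live_cell_with_fewer_than_two_neighbours_dies(self):
        self.assertFalse(next_state(True, 1))

    # The test we suggested, for participants to make pass themselves:
    def test_live_cell_with_two_or_three_neighbours_survives(self):
        self.assertTrue(next_state(True, 2))
        self.assertTrue(next_state(True, 3))
```

Run with python -m unittest. Starting from a provided first test like this was what got pairs into the red-green-refactor cycle so much faster.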
We have yet to analyse the data we gathered in detail, but we do believe that there is a lot of scope to take the coderetreat idea forward with new ways of ensuring that the key elements of design understanding are made explicit in the outcomes.