Android Ecosystem: 2008-2018

Sometime in 2018 I realised we'd reached the point in the Android ecosystem that so many platforms reach as they approach the 10-year mark: there was more stuff coming out than it was possible to keep up with, or even try out.

So I thought, let’s draw up a high level map of most things Android developers have come into contact with, in general, since the start. Links to large-sized files are at the end…

Android Ecosystem: 2008-2018

Roughly speaking as you move out from the centre you are moving forward in time, although to group some items by theme I bend the rules. The lines represent relationships, though not always direct, and the dotted boxes are things that may no longer be in active use. There are also some 3rd-party honourable mentions in there.

I was prompted to finally make good on this diagram when I listened to Donn and Kaushik talking about Imposter Syndrome on Fragmented. Take a look at the image, this isn’t even the complete picture. At the same time many Android developers are doing other stuff; iOS, server-side, web, Flutter. No wonder it’s hard to keep up.

The same thing happened for me around 2008 with Flash. I started with Flash in '99; there was timeline animation, a scattering of "scripting", all highly creative. Over the next 10 years it evolved: XML layouts, 2-way data binding, ECMAScript 4th Edition (which eventually fed into JavaScript "Harmony"). It found a home in video, games and the enterprise; server-side generators came out costing $15,000 per CPU, and that was just the start. "RIAs" (Rich Internet Applications, aka thick clients) were light years ahead of the rest of the web. In the UK you could command a top-tier day rate working for banks as a freelancer, building internal tools that managed their data and generated reports. Then, as you know, it stopped.

Android is not going to live forever, but things have certainly moved at such a pace to keep things interesting. From phones to TVs, cars, smart speakers and more, the surfaces available have exploded, and the tools also. We’re gonna need a larger sheet of paper.

A huge thank you to the people who make learning this stuff possible. The bloggers, the developer advocates, conference speakers, podcasters and documentation writers. 🙌

Download the source and exported PDF/PNG files from the GitHub repo.

Bots as Celebrities

Messenger-based services, bots, agents, AI. It looks like app fatigue has led us to look to these for the next green field, something new for VCs to plough their money into, something that feels different.

From time to time technology comes full circle, and here we are again using something like IRC (in the UX slam dunk that is Slack) and setting loose upon it an army of bots, just as we do/did with IRC. Of course both have evolved significantly from their forebears. The semi-public messaging platform is now less suited to massive audiences, but the bots, once relegated to simple tasks like running file shares or hosting quizzes for a handful of geeks, are now powered by significant "AI" resources and connected to millions of people and myriad services, from Uber to Dominos.


But what for and why now?

AI, in the sci-fi movie sense, feels like it's been "a decade away" for as long as I can remember. In reality IA (as the case may actually be) is already in use and has been with us for some time, just in a very limited and low-profile capacity, with the exception perhaps of IBM's Watson kicking butt on Jeopardy. What we are now seeing is that potential being unleashed in the consumer space, and the results are going to change HCI yet again.

Who are the trailblazers? IBM's Watson we've mentioned, Facebook Messenger's "M", Amazon's Alexa, the agent that lives in the Quartz news app and of course the numerous bots that will be hatched through Slack bot startups, to name a few. Most of these interact through chat, be it text or voice, and when the AI isn't feeling chatty it's beating us at 2,500-year-old board games. I remember configuring an AliceBot maybe 10 years ago, and whilst at the time it felt like a scene from Blade Runner, it was positively naive compared with the complex behaviour on display today.

What caught my eye most recently, however, was Microsoft's entry to the bot scene with Tay. Designed "to be entertainment", Tay is a chat bot that pretends to be a 19-year-old American girl, complete with acronym-heavy "text speak", the ability to play games and strong opinions on some pretty heavy thought experiments. Tay will be available through Kik, GroupMe and Twitter initially, and over time will learn new skills and presumably perform better at the Turing test.


On the surface Tay seems like a bit of fun, a tech giant flexing its R&D muscle. But the ramifications could be profound. Tay got me thinking, how will these bots evolve, and how will we as a society perceive them?

Messaging bots + services: the ultimate brand advocate is a celebrity. If a brand can develop its own AI celebrities it can exert fine-grained control over its message, and worry less about post-club drunken photos of its current "face" appearing in Heat magazine.

The bots we've grown accustomed to in the last few years are agents: Siri, Cortana, Amy, Alexa and erm… "OK Google" (the latter lacking the necessary persona to really grow on us). They're fairly passive in their approach: they act on our requests, very rarely instigating anything. I think this is where a big shift is about to occur; we'll see more impetus from the agents to create original content, and ultimately they will begin to define their own goals.

It seems likely to me that agencies could in fact craft and tune personas powered by these underlying AI bot engines (AIaaS please?) to become nothing short of celebrities, with millions of followers across the (people) social networks and a genuine human connection, within certain groups at least.

Who might want this?

Well, any media outlet for sure: if you want to disseminate a message you had better have either a great story or a pretty face. Brands could engage experts to craft their ultimate brand advocate, an entirely constructed celebrity. Infinitely scalable and international, the Celebribot might engage in real-time media buying without the slightest instruction, based on the agenda and campaign package currently being relayed to it. Hey, if a mute Lara Croft can become a brand advocate for an energy drink, just think what could happen if she could talk, think and plan for herself.

So this is where I think we are going with the new wave of bots. Can we look forward to manifestations of AI personalities hovering over us in dressed-up drones, perhaps HAL 9000 from 2001: A Space Odyssey, or maybe, if we're lucky, something or someone more like Holly from Red Dwarf? Maybe I've been watching a little too much Black Mirror, but it certainly looks like our engagement with these entities is about to change pace.

How did you start coding?

Recently Usborne Books made their beautifully illustrated 1980s computing books for kids available for download. It turns out several of my friends and Twitter acquaintances picked up their love of coding from these books as youngsters, myself included.

I remember being in a dentist’s waiting room where an old battered copy of “Computer Space Games” lay on the bookshelf. I was so engrossed they actually let me take that book home, and thus began my journey.

Computer Space Games Book
Computer Space Games by Usborne Books

As an aside, today I’m a father of one (soon to be two), who absolutely loves Usborne’s latest “That’s not my…[Insert Subject]” touchy-feely book series. The pages of each book contain the phrase “That’s not my…” and the subject, which ranges from “Monkey” to “Snowman”. In some ways Usborne is continuing their logical thinking teachings with each page providing a condition that evaluates as true or false 😉 I highly recommend these for anyone with a young toddler.

That's Not My Puppy Book
Usborne’s That’s Not My Puppy book

Memory Lane

Flicking through these old computing books had me inadvertently taking a trip down memory lane. I didn't have a computer for some time after I started "coding" (writing down programs in BBC Basic), but that just made it all the more enticing: one day I'd be able to see these programs run (and crash) for real. The problem with my BBC Basic skills was that the BBC Micro was already a relic when I was a young teen. I did eventually get an Amiga 600, on which I learned the programming language Amiga E (closely related to C), and later a Gateway PC with Windows 95, a Cyrix 5x86 CPU (Intel was expensive!), a 56k modem, a CD-ROM drive, a VGA graphics card and a bucket load of power.

In those days kids like me hung around IRC, where after dinner I'd spend time chatting with quite a few "leet d00dz". In these circles I came across a fantastic range of things: from mIRC script and Sub7, to SoftICE and assembly language (ASM). ASM is something I would encourage any young coder to at least get some experience with. It may be all but useless day-to-day now, with even the most throw-away chips happily running the voluminous instructions output by much higher-level languages, but the main thing you learn from ASM is how a computer's brain takes your instructions and uses a much more limited set of constructs and variables (registers) to do anything. Ultimately, as a kid, this was the thing that sold me on computers: they can do anything, and all you needed was your brain and some time to create that anything.
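As a flavour of what that feels like, here is a toy "register machine" in Python (my own illustration, nothing to do with any real instruction set): everything is built from a handful of primitive operations on a couple of registers, which is the mental model ASM drills into you.

```python
def run(program):
    """Interpret a toy register machine: two registers, three instructions."""
    regs = {"A": 0, "B": 0}
    for op, *args in program:
        if op == "MOV":        # MOV reg, literal  ->  reg = literal
            regs[args[0]] = args[1]
        elif op == "ADD":      # ADD dst, src      ->  dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "SWAP":     # swap the two registers
            regs["A"], regs["B"] = regs["B"], regs["A"]
    return regs

# "2 + 3" spelled out one primitive step at a time
result = run([("MOV", "A", 2), ("MOV", "B", 3), ("ADD", "A", "B")])
```

Even an expression as simple as 2 + 3 becomes a sequence of loads followed by an add, which is exactly the shift in thinking the real thing teaches.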

Coding through necessity

When I was 15 or 16 we still used dialup modems to access the net. I think it cost something like 2p (£0.02 GBP) a minute to dial up, and during that time no-one could use the phone. It also made a racket so there was no sneaking online. We didn’t have a lot of money, so my internet time was limited to 30 minutes a day. So like a boy scout, in order to learn you had to be prepared. I ended up writing a Visual Basic app to spider and scrape sites, saving the pages to disk. This way I could dial up, have it scrape a bunch of sites to 3 levels deep and disconnect, reading at my leisure.

In chemistry class we were given homework balancing symbol equations, hundreds of the things to work through. They aren't hard; really it's just grunt work applying some basic rules. As I later found out, it's a core tenet of a coder to be lazy and never repeat the same task more than once. So I wrote another little VB app which let you press buttons to input the elements and the numbers of units (e.g. O₂) and hit go. I sold this program on floppy disk for £1 a pop to classmates, and the homework problem was solved.

With hindsight the above are early examples of situations where coding solved a real world problem for me personally, and I suspect that might be the case for a few of you reading. I also wonder if the huge amount and instant availability of free content gets in the way of this desire to create, but I like to think that this desire is universal.

Finding the “right” language?

At school we learned Pascal (and Delphi), a little Prolog, and for a final project we had an open choice (I opted for Visual C++ with MFC and Crystal Reports, so practical). We were also taught to finger trace, which I believe helps to minimise common typos in later years. From there I started to do "real work" with ActionScript (for my sins, 10 years as a Flash developer), JavaScript (web and later Node.js), some ColdFusion and ASP.NET, some iOS projects in Objective-C, and in recent years my days have for the large part been spent in Java (Android). If you're familiar with the 99 Bottles of Beer website you'll know there are hundreds and hundreds of programming languages. The other day I was wondering whether those 10 years of Flash and Flex, and the vast amounts of time (perhaps some 5,000 hours) learning the ins and outs of a huge enterprise SDK, were quite simply lost. What I've learned, though, is that it doesn't really matter which languages you've touched on over the years; it's never a step backwards. ActionScript was based on ECMAScript-262 (as is JavaScript) and eventually evolved into something like Harmony-meets-Java. What it taught me was how to use a dynamically typed language, how to architect apps with (Pure)MVC, and how to write testable code. It's almost never time lost. Well, maybe there are some exceptions.

Who knows what comes next? What I know for sure is that this is not the end of my journey; something new will come along and it'll be time to start again, leaning on previous experience but not being blinkered by it.

That was my story in a nutshell (with the passage of time I'm no doubt missing a lot out), but what did your journey look like? What were the key moments that made an impact on you, what did you learn, and why?

Are You OK? App

I've just published a companion site for my free app Are You OK?.

The app is aimed at people wishing to regularly check on family or friends who, for example, live alone and are vulnerable to accidents such as a fall at home that leaves them unable to call for help. It works something like the reverse of a panic button system: if they don't press a button every few hours, it sends an SMS message to selected contacts with a call to check in.

Head over to the website to read more about the app and find the download link.

Fragments and Activities in Android Apps

UPDATE: 5 years later this post is pretty out of date. Some of it still holds, but it is now possible to better architect primarily "single Activity" apps, especially with the advent of the Android Navigation component. For posterity the post remains below…

When asking "should I use a Fragment or an Activity?" it's not always immediately obvious how you should architect an app.

My advice is try to avoid a single “god” Activity (h/t Eric Burke) that manages navigation between tens of Fragments – it may seem to give you good control over transitions, but it gets messy quickly*.

My go to is always to use a combination of Activities and Fragments. So here are some tips:

  • If it’s a distinct part of an app (News, Settings, Write Post), use a new Activity. This Activity may be fairly lightweight, simply inflating a Fragment in its layout XML or in code.
  • For everything else use Fragments.
  • This gives you flexibility when combining Fragments in Activity layouts for tablet.
  • Create a BaseActivity class which handles setup/styling of ActionBar and SlidingDrawerLayout if you have that kind of navigation.
  • Nullify or customise the transitions between Activities if, for example, you don’t want an obvious transition when an ActionBar is already in place (and you can make use of the new L Activity transitions to transition smoothly).
  • Fragments don’t need to be visual; an Activity can use the FragmentManager to create a persistent headless Fragment with setRetainInstance(true), whose job may be to perform a background task (update, upload, refresh). This means the user can rotate the device without destroying and recreating the Fragment, and is sometimes an alternative to binding to a Service in onResume().

Some good sources on how to architect apps include, as always, the Google I/O Schedule app:

and Eric Burke’s 2012 talk, around half-way through:

*When does it get messy?

  • When dealing with deeper hierarchies, and with navigational requests that come from a user action within a Fragment.
  • When you need the ActionBar to be in overlay mode (for a full screen experience) but only in certain screens.
  • When you need to create new tasks (either shooting off to another app and back, or allowing other apps to start Activities in your app, for example with a Share action)
  • There are many more, please feel free to add some in the comments if you can think of any.

Load Testing Live Streaming Servers

There are two types of test I’ll describe below. First of all using Apple HLS streams, which is HTTP Live Streaming via port 80, supported by iOS and Safari, and also by Android (apps and browser). Then we have Adobe’s RTMP over port 1935, mostly used by Flash players on desktop, this covers browsers like Internet Explorer and Chrome on desktop. These tests apply to Wowza server but I think it’ll also cover Adobe Media Server.

All links to files and software mentioned are duplicated at the end of this post.

It’s worth noting that you can stick to HLS entirely by using an HLS plugin for Flash video players such as this one, and that is what we’re doing in order to make good use of Amazon’s CloudFront CDN.

For the purpose of testing you may also wish to simulate some live camera streams from static video files, see further down this post for info on how to do that on your computer, server or EC2.

Testing RTMP Live Streaming with Flazr

In this test we want to load test a Wowza origin server itself to see the direct effect of a lot of users on CPU load and RAM usage. This test is performed with Flazr, via RTMP on port 1935.

This assumes you’ve set up your Wowza or Adobe Media Server already, for example by using a pre-built Wowza Amazon EC2 AMI. We’re using an m3.xlarge instance for this test as it has high network availability and a tonne of RAM, and we’re streaming 4 unique 720p ~4Mbit streams to it, transcoded to multiple SD and HD outputs (CPU use from this alone is up to 80%).

Installing flazr

First up, for the instance size and test configuration I’m using, I modified flazr’s launch script to increase the Java heap size to 8GB, otherwise you run out of RAM. Next, FTP and upload (or wget) flazr to a directory on your server/EC2 instance. Then SSH in and:

sudo apt-get install default-jre
cd path/to/flazr
chmod +x client.sh
./client.sh -host your.server.com -version 00000000 -load 1000 -length 60000 -port 1935 -app yourAppName yourStreamName

The order of parameters does seem to matter in later versions of flazr; either way, this test runs for 60 seconds with a load of 1,000 viewers. Given all the transcoding, our CPU was already feeling the pain, but there was no sign of trouble. We managed 4,500 viewers before anything started to stutter in our test player running on another m3.xlarge instance.

Wowza CPU Usage

Of course this only matters if you are not using a CDN, but it’s good to know this EC2 instance can handle a lot of HD viewers.

Testing HLS Live Streaming (or a CDN such as Amazon CloudFront) with hlsprobe

On to HLS streaming, the standard for mobile apps and sites. We have used Wowza CloudFront Formations to set up HLS caching for content delivery, so that we can handle a very large number of viewers without impacting the CPU load or network throughput of the origin server, and to give us greater redundancy. CloudFront works with HLS streams rather than RTMP, so we cannot use Flazr again. To test HLS consumption (that being the continuous download of .m3u8 playlist files and their linked .ts video chunks) we can use a tool called hlsprobe, which is written in Python.
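To make that consumption loop concrete, here is a rough stdlib-only sketch in Python (my own simplification, not hlsprobe's actual code) of what a single simulated viewer does: poll the playlist, then fetch any chunks it hasn't seen yet.

```python
import time
import urllib.parse
import urllib.request

def parse_segment_uris(playlist_text):
    """Pull the media segment URIs out of an .m3u8 playlist.

    Segment URIs are the non-empty lines that aren't tags (tags start
    with '#'). Variant playlists, byte ranges etc. are ignored here.
    """
    return [line.strip() for line in playlist_text.splitlines()
            if line.strip() and not line.startswith("#")]

def simulate_viewer(playlist_url, duration_s=60, poll_interval_s=2):
    """Poll the live playlist and download each new .ts chunk once."""
    seen = set()
    deadline = time.time() + duration_s
    while time.time() < deadline:
        playlist = urllib.request.urlopen(playlist_url).read().decode("utf-8")
        for uri in parse_segment_uris(playlist):
            if uri not in seen:
                seen.add(uri)
                chunk_url = urllib.parse.urljoin(playlist_url, uri)
                urllib.request.urlopen(chunk_url).read()
        time.sleep(poll_interval_s)
```

Run a few hundred of these in threads or processes and you have a crude load test; hlsprobe handles the scheduling, error reporting and alerting for you.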

If you’re on a Mac and don’t have python I recommend you install it via brew to get up and running quickly. If you don’t have brew, get it here.

#on a mac
brew install python
#on ubuntu/amazon
sudo apt-get install python

hlsprobe also relies on there being an SMTP server running; it doesn’t need to be a fully functional one:

#on mac
sudo postfix start
#on ec2, this auto-starts
sudo apt-get install postfix

Then to install hlsprobe’s dependencies and hlsprobe itself:

pip install m3u8
pip install PyYAML
git clone <hlsprobe repo URL> # see the links at the end of this post
cd hlsprobe

A sample config is linked at the end of the post.

Running hlsprobe is as simple as this (note the -v verbose flag; you can turn that off once you have it working):

python hlsprobe -v -c config.yaml

Now if you fire up the Wowza Engine Manager admin interface you can watch the connection count and network traffic. If you’re testing a CDN such as CloudFront, you should see that the origin’s CPU usage and traffic do not increase substantially as you add thousands of clients.

Simulating cameras to Wowza via nodeJS

It’s good to be able to simulate live streams at any time, either from your computer or, in my case, from some EC2 instances. To do this I’ve written a simple Node.js script which loops a video, optionally transcoding as you go. I recommend against transcoding here due to the high CPU use and resulting frame loss; in my sample script I pass the video and audio through directly, the video having already been given the correct codecs, frame size and bitrate via Handbrake.

The script runs ffmpeg, so you’ll need to install that first:

#on a mac
brew install ffmpeg
#on ubuntu/Amazon you'll have to install/compile ffmpeg the usual way

Edit the js script to point to your server, port and video file, then run the script with:

node fakestream.js

If the video completes, the script restarts the stream, but there will be a second of downtime. Some video players automatically retry, but to be safe make sure your video is long enough for the test.
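If you would rather not run Node, the same trick is easy to sketch in Python; the important part is the ffmpeg argument list. This is my own stand-in for the linked fakestream.js, and it assumes an ffmpeg recent enough to have -stream_loop, plus placeholder server/app/stream names:

```python
import subprocess

def build_ffmpeg_args(video_path, rtmp_url):
    """ffmpeg arguments to loop a local file out to an RTMP endpoint,
    passing the already-encoded video and audio straight through."""
    return [
        "ffmpeg",
        "-re",                 # read input at its native frame rate
        "-stream_loop", "-1",  # loop the input indefinitely
        "-i", video_path,
        "-c", "copy",          # no transcoding: copy both streams as-is
        "-f", "flv",           # RTMP expects an FLV container
        rtmp_url,
    ]

def fake_stream(video_path, rtmp_url):
    """Restart ffmpeg forever, so the 'camera' comes back if it exits."""
    while True:
        subprocess.call(build_ffmpeg_args(video_path, rtmp_url))

# e.g. fake_stream("loop.mp4", "rtmp://localhost:1935/yourAppName/cam1")
```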

These are just a couple of ways of load testing a live streaming server. There are 3rd-party services out there, but we’ve not had great success with them so far, and this way you have a lot more control over the test environment.


fakestream.js – NodeJS script to simulate live streams
config.yaml – Sample config for hlsprobe
hlsprobe – Tool for testing HLS streams
Flazr – Tool for testing RTMP streams
OSMF-HLS – OSMF HLS Plugin to support HLS in Flash video players

Postman Collection to HTML (node script)

If you use the excellent Postman for testing and developing your APIs (and if you don’t yet, please give it a try!) you may find this little node script helpful when generating documentation.

It simply converts your downloaded Postman collection file to HTML (with tables) for inserting into documentation or sharing with a 3rd-party developer. The Postman collection itself is perfect for sharing with developers as it remains close to “live documentation”, but sometimes you need a more readable form.
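To give a feel for the transformation (sketched here in Python rather than Node, and assuming the older collection layout where requests sit in a top-level "requests" array), the core of such a converter might look like:

```python
import html
import json

def collection_to_html(collection_json):
    """Render a (simplified) Postman collection as an HTML table.

    Assumes the old v1-style layout where requests sit in a top-level
    "requests" array with "name", "method", "url" and "description".
    """
    coll = json.loads(collection_json)
    rows = []
    for req in coll.get("requests", []):
        cells = [req.get("name", ""), req.get("method", ""),
                 req.get("url", ""), req.get("description", "")]
        rows.append("<tr>" + "".join(
            "<td>%s</td>" % html.escape(str(c)) for c in cells) + "</tr>")
    return ("<table>\n<tr><th>Name</th><th>Method</th>"
            "<th>URL</th><th>Description</th></tr>\n"
            + "\n".join(rows) + "\n</table>")
```

The real script linked above handles folders and nested items; this sketch just shows the shape of the collection-to-table mapping.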

Registering Your Android App for File Types and Email Attachments

I’ve recently finished work on an app that registers itself as a handler for a given file extension, let’s call it “.mytype”, so if the user attempts to open a file named “file1.mytype” our app would launch and receive an Intent containing the information on the file’s location and its data can be imported. Specifically I wanted this to happen when the user opened an email attachment, as data is shared between users via email attachment for this app.

There are many pitfalls here, and the Stack Overflow answers I found for this question had various side-effects or problems. The most common was that your app would appear in the chooser dialog whenever the user tapped an email notification, for any email, not just those carrying your attachment. After some trial and error, I came up with the following method.

Create IntentFilters in AndroidManifest.xml

The first step is to add <intent-filter> nodes to the relevant <activity> node in AndroidManifest.xml. Here’s an example of that:

  <intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <action android:name="android.intent.action.EDIT" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="content" android:mimeType="application/mytype" />
  </intent-filter>
  <intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <action android:name="android.intent.action.EDIT" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="content" android:mimeType="application/octet-stream" />
  </intent-filter>

Something to note here: I’ve specified a filter for both the “application/mytype” mime type and the more generic “application/octet-stream”. The reason is that we can’t guarantee the attachment’s mime type has been set correctly. We have iOS and Android users sharing timers via email; with iOS the mime type is set, but on Android (at least in my tests on Android 4.2) the mime type reverts to application/octet-stream for attachments sent from within the app.


I initially put these IntentFilters on the “home” Activity of my app, however I soon started encountering security exceptions in LogCat detailing how my Activity didn’t have access to the data from the other process (Gmail). I realised this was because my Activity’s <activity> tag had a launch mode set which prevents multiple instances of it being launched. That is important when users can launch the app from either the launcher icon or, in this case, via an attachment (I didn’t want multiple instances of my home Activity running, as that would confuse the user). So the solution was simply to create a new “ImportDataActivity” that handled the data import from the attachment, and then launched the home Activity with the Intent.FLAG_ACTIVITY_CLEAR_TOP flag added.

Importing Data

So in ImportDataActivity we need to import the data stored in the attachment, in my case this was JSON. The following shows how you might go about doing this:

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);

  Uri data = getIntent().getData();
  if (data != null) {
    try {
      importData(data);
    } catch (Exception e) {
      // warn user about bad data here
    }
  }

  // launch home Activity (with FLAG_ACTIVITY_CLEAR_TOP) here…
}

private void importData(Uri data) throws Exception {
  final String scheme = data.getScheme();

  if (ContentResolver.SCHEME_CONTENT.equals(scheme)) {
    ContentResolver cr = getContentResolver();
    InputStream is = cr.openInputStream(data);
    if (is == null) return;

    BufferedReader reader = new BufferedReader(new InputStreamReader(is));
    StringBuilder buf = new StringBuilder();
    String str;
    while ((str = reader.readLine()) != null) {
      buf.append(str).append("\n");
    }
    reader.close();

    JSONObject json = new JSONObject(buf.toString());

    // perform your data import here…
  }
}


That’s all that’s needed to register-for, and read data from custom file-types.

Sending Email with Attachments

Now, how about sending an email with a custom attachment? Here’s a sample of how you might do that:

String recipient = "",
  subject = "Sharing example",
  message = "";

final Intent emailIntent = new Intent(android.content.Intent.ACTION_SEND);
emailIntent.setType("application/mytype");

emailIntent.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{recipient});
emailIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, subject);
emailIntent.putExtra(android.content.Intent.EXTRA_TEXT, message);

// create attachment
String filename = "example.mytype";

File file = new File(getExternalCacheDir(), filename);
FileOutputStream fos = new FileOutputStream(file);
fos.write(json.toString().getBytes());
fos.close();

if (!file.exists() || !file.canRead()) {
  Toast.makeText(this, "Problem creating attachment",
      Toast.LENGTH_SHORT).show();
  return;
}

Uri uri = Uri.parse("file://" + file.getAbsolutePath());
emailIntent.putExtra(Intent.EXTRA_STREAM, uri);

startActivityForResult(Intent.createChooser(emailIntent,
        "Email custom data using..."),
    REQUEST_SHARE_DATA);

Please note that REQUEST_SHARE_DATA is just a static int constant in the class, used in onActivityResult() when the user returns from sending the email. This code will prompt the user to select an email client if they have multiple installed.

As always, please do point out any inaccuracies or improvements in the comments.

Seconds Pro for Android

The latest Android app I’ve been working on for Runloop, the hugely successful iOS interval timer Seconds Pro, is now live. It’s packed with the following features:

• Quickly create timers for interval training, tabata, circuit training
• Save your timers, as many as you need
• Organize Timers into groups
• Text to speech
• Install timers from the timer repository
• Send your timers to your friends
• Full control over every interval
• Assign music to intervals or timers
• Large display
• The choice of personal trainers up and down the country


You can download the app now from the Google Play Store.

If you’re looking for high quality Android development, head over to my company’s website – Valis Interactive.