While I was researching more ways to visualize my Google Location History, I ran across Beneath the Data’s excellent post on exactly that. I thought I’d start by trying to replicate what Tyler had done (because his figures are way prettier than the ones I’ve made thus far), and then do my own thing from there. Little did I know that this would send me down a giant GIS/shapefile/geography rabbit hole.
I immediately went to download a bunch of shapefiles of San Francisco, California, etc., and got thoroughly obsessed with shapefiles. I saw a bunch used for ecological studies of the bay, but (though I was sorely tempted) I did not download those. Unfortunately, this is where I hit my first snag. Apparently, Seattle uploads shapefiles with WGS84 coordinates. San Francisco, alas, does not. Anywhere. Ever.
There is a boundless sea of vector files available for SF, but no WGS84 coordinates to be had. I spent an embarrassing number of hours researching how to convert to WGS84, trying to figure out whether the conversions were calculations I could automate, and working out how exactly the x, y coordinates for shapefile vectors are calculated. I even installed QGIS because somebody said you could save a layer as WGS84 through there. Well, I couldn’t get that to work. Maybe it’s due to my inexperience with GIS and shapefiles, or maybe it’s a function QGIS no longer has. I don’t know.
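(For what it’s worth, the standard command-line route for this kind of reprojection is GDAL’s ogr2ogr. Assuming the source shapefile carries its projection in its .prj file, something like the following should reproject it to WGS84/EPSG:4326 – file names here are placeholders:

```shell
# Reproject a shapefile to WGS84 (EPSG:4326); input/output names are placeholders
ogr2ogr -t_srs EPSG:4326 sf_wgs84.shp sf_original.shp
```

I only found this after the fact, so I can’t vouch for it against every SF dataset.)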
Anyhow, I spent so much time learning about maps and shapefiles that I haven’t yet done anything beyond simply replicating what Tyler did, but with my own data. Now that I’ve gotten all that out of the way, making the visualizations I actually want will be the next step.
So I’ve been doing some more investigating of my Google Timeline data here and there (as I started writing about here).
After my last post, a friend of mine pointed me towards the Haversine formula for calculating the distance between two lat/long coordinate pairs, and with that I was able to calculate day-by-day distances that were consistently close to Google’s estimates for those days. Encouraged, I moved on to calculating distances on a per-year basis, and that was fun. The numbers seemed reasonable to me:
Between 5/22/2015 and 5/22/2016 I went 20,505 miles
Between 9/20/2013 and 5/22/2016 I went 43,434 miles
Recall that these totals cover every sort of movement, including airplanes. My average miles/day went way up recently due to a few big airplane trips, so if the numbers seem high (~60 miles a day and ~45 miles a day, respectively), that’s why.
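The Haversine computation itself is short. Here’s a minimal, stdlib-only sketch of the formula (not necessarily the exact code I used), summed over consecutive points to get a day’s distance:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    R = 3958.8  # mean Earth radius in miles
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

def total_miles(points):
    """Sum the leg distances over an ordered list of (lat, lon) points."""
    return sum(haversine_miles(*a, *b) for a, b in zip(points, points[1:]))
```

One degree of latitude works out to about 69.1 miles with this radius, which is a handy sanity check.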
But then I wanted to visualize some of the data. I decided to use matplotlib to plot the points because it is easy to use and because Python has an easy way to load JSON data.
So I ended up breaking down the points by their assigned “activity.” True to my word, I only considered the highest-likelihood activity for each point and discarded all the less likely ones, to keep it simple.
You may recall that I had a total of 1,048,575 position data points. Well, only 288,922 had activities assigned to them. So just over a quarter. Still, it is enough data to have a bit of fun with.
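Counting those activity-tagged points is just filtering and tallying. Here’s a sketch with made-up stand-in records (the real export nests the activity fields more deeply, but the logic is the same):

```python
from collections import Counter

# Simplified stand-in records; the real geoJSON nests activities more deeply,
# but the counting logic is identical.
points = [
    {"timestampMs": "1379703360000",
     "activity": [{"type": "onBicycle", "confidence": 85},
                  {"type": "onFoot", "confidence": 15}]},
    {"timestampMs": "1379703420000"},  # no activity assigned to this point
    {"timestampMs": "1379703480000",
     "activity": [{"type": "walking", "confidence": 100}]},
]

# Keep only the highest-confidence activity per point, as described above
counts = Counter(
    max(p["activity"], key=lambda a: a["confidence"])["type"]
    for p in points
    if "activity" in p
)
print(counts)
```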
Of those 288,922 data points with activity, it turns out that there were only a total of 7 different activities:
The first obvious thing to do was to sort by activity type and then plot the coordinates segregated by activity type.
import csv
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap

latitudeArray = []
longitudeArray = []

with open("bicycle.csv") as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        lat, lon = float(row[0]), float(row[1])
        latitudeArray.append(lat)   # store latitudes and longitudes in separate
        longitudeArray.append(lon)  # arrays; the index is the same for each pair

m = Basemap(llcrnrlon=min(longitudeArray)-10,  # set the map's displayed bounds
            llcrnrlat=min(latitudeArray)-10,   # based on your set's max/min
            urcrnrlon=max(longitudeArray)+10,
            urcrnrlat=max(latitudeArray)+10,
            projection='merc',
            lat_ts=20,        # "latitude of true scale" for the Mercator projection
            resolution='h')   # 'c', 'l', 'i', 'h', or 'f' - crude, low, intermediate, high, or full

m.drawcoastlines()
x1, y1 = m(longitudeArray, latitudeArray)   # project the long, lat arrays to map x, y coordinates
m.scatter(x1, y1, s=25, c='r', marker="o")  # plot your markers and pick their size, color, and shape
plt.title("Steen's Bicycling Coordinates")
plt.show()
At the time, I wrote a script that saved each activity type into its own CSV, because I wanted to look at them and play with them individually, and I then loaded those CSV files into my plotting script – because they were there. If I went back and did it again, I’d probably skip the CSV intermediary and go straight from the geoJSON file.
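That splitting script was essentially just grouping by activity and writing each group out. A simplified sketch, with made-up points (the real script parsed the geoJSON first), looks like:

```python
import csv
from collections import defaultdict

# Hypothetical pre-parsed points: (activity type, latitude, longitude)
points = [
    ("onBicycle", 37.7749, -122.4194),
    ("walking",   37.7751, -122.4189),
    ("onBicycle", 37.7760, -122.4200),
]

# Group coordinates by activity type
by_activity = defaultdict(list)
for activity, lat, lon in points:
    by_activity[activity].append((lat, lon))

# Write one CSV per activity type, e.g. "onbicycle.csv"
for activity, coords in by_activity.items():
    with open(f"{activity.lower()}.csv", "w", newline="") as f:
        csv.writer(f).writerows(coords)
```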
And in this way I was able to see the different activities plotted onto Basemap’s default map:
I also plotted the other categories – including the mysterious “tilting” – but I couldn’t discern any meaning from those. They just looked like all the points from everywhere I’ve ever been, unlike the dramatic differences and obvious routes in the biking, in-vehicle, and walking plots. So there’s no need for you to see those.
I’d say this was a success. So my next questions are how much time I spend doing each activity, and how much time I spend in each place. I’ll have to think a bit about how to calculate the time. All those points with no activity associated have me concerned that it won’t be as straightforward as subtracting timestamps to get ΔT. It could be that during the “no activity” points I had actually been doing completely different things before returning to the original activity (in which case the calculated ΔT would be wrong). Then again, it’s probably very likely that the “no activity” points really are the same activity when they’re bookended by it. Hmm.
So now I’m wondering: given that a timepoint is uploaded every 60 seconds, should I just say 1 timepoint = 60 seconds? That doesn’t seem quite right to me, but I’ll work it out for a smaller data set and see if it even comes close to accurate. I’ll keep thinking about it, but if anybody has any suggestions on how to get around this problem, feel free to let me know!
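To make the idea concrete, here’s a rough sketch (with made-up timestamps) of one way to tally time per activity: credit each interval to the preceding activity, carry the label forward over unlabeled points, and cap long gaps rather than trusting them. The carry-forward is a simplification – true bookend checking would also look ahead to the next labeled point:

```python
# Hypothetical (timestamp_seconds, activity) pairs; None = no activity assigned
points = [
    (0,   "walking"),
    (60,  None),         # unlabeled gap surrounded by walking: assume still walking
    (120, "walking"),
    (180, "onBicycle"),
    (600, "onBicycle"),  # a long gap we cap rather than trust
]

MAX_GAP = 120  # seconds; assumption: points normally arrive about every 60 s

totals = {}
prev_t, prev_act = points[0]
for t, act in points[1:]:
    # Carry the previous label forward over unlabeled points (a simplification;
    # strict "bookending" would also check the next labeled point matches)
    effective = act if act is not None else prev_act
    dt = min(t - prev_t, MAX_GAP)  # cap long gaps instead of trusting them
    if prev_act is not None:
        totals[prev_act] = totals.get(prev_act, 0) + dt
    prev_t, prev_act = t, effective

print(totals)
```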
I really love the data collected by Google Timeline, and I have fun going through it. But some of the most obvious data aggregates are not available! You cannot get how many miles you’ve biked this year, how many hours you’ve spent at your office, how many hours you’ve spent walking, etc. You can see those stats on a day-by-day basis, which I’ll admit is interesting, but I usually already have an instinctual idea of what I’ve done on a given day. I’d like to imagine that I’d be totally surprised by what my stats would be for a full year.
So I downloaded my raw data from Google Takeout to play with, and it turns out to be one humongous geoJSON file. It’s laid out as a list of unique timestamps, each with the associated data for that point. In most cases, you just get a timestamp, latitude/longitude (in E7 format), and an accuracy value.
But sometimes there’s a list of “activities,” with Google’s “confidence” that the activity listed was the one… being done. The confidences seem to always add up to 100% if you assume that “on foot” is the same as “walking,” though it isn’t clear to me why both get listed in those cases.
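As an illustration (field names reconstructed from memory of the export, so treat this as approximate – the exact key names have varied between export versions), a single point with activity data looks roughly like this:

```json
{
  "timestampMs": "1379703360000",
  "latitudeE7": 377749000,
  "longitudeE7": -1224194000,
  "accuracy": 21,
  "activity": [{
    "timestampMs": "1379703360000",
    "activity": [
      {"type": "onFoot", "confidence": 62},
      {"type": "walking", "confidence": 62},
      {"type": "still", "confidence": 38}
    ]
  }]
}
```

Here “onFoot” and “walking” carry the same confidence, and “onFoot” plus “still” sum to 100 – the pattern I described above.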
So, obviously I’d have to do some calculations on the latitudes and longitudes to get distance data. Which I haven’t done. So I guess I’ll probably just consider the timestamps that have activities associated, and for simplicity’s sake I’ll most likely only consider the one with the highest “confidence.” With that method, it would be pretty easy to calculate total times. Distances… not as easy, mostly because I haven’t worked with latitudes and longitudes very much. But I’m sure there’s some module out there for handling that, so I probably won’t have to learn too much about it 😉
All I’ve done with my data thus far is convert all the timepoints with associated lat/long into human-readable dates with lat/long that are not… E7. So now I have 1,048,575 lines formatted like this:
Which is the first timepoint I have in my Google Location History.
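The conversion itself is simple: E7 just means degrees scaled by 10^7, and the timestamps are Unix milliseconds. A minimal sketch (the `humanize` helper name is mine, and I’m assuming UTC):

```python
from datetime import datetime, timezone

def humanize(point):
    """Convert one raw Takeout point to (date string, lat, lon)."""
    # timestampMs is Unix time in milliseconds, stored as a string
    when = datetime.fromtimestamp(int(point["timestampMs"]) / 1000, tz=timezone.utc)
    lat = point["latitudeE7"] / 1e7   # E7 = degrees scaled by 10^7
    lon = point["longitudeE7"] / 1e7
    return when.strftime("%Y-%m-%d %H:%M:%S"), lat, lon

print(humanize({"timestampMs": "1379703360000",
                "latitudeE7": 377749000,
                "longitudeE7": -1224194000}))
```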
This format makes it super easy for me to quickly generate maps like this:
Or even overlays of multiple days, etc. Which, I realize, is… baaaasically exactly what I can already get from Google Location History through my Timeline.
But! My next step is to calculate how many hours per year were spent doing each activity (walking/biking/driving/etc). There will obviously be far fewer than 1,048,575 timepoints for which there is activity data associated, because most timepoints don’t have any activity.
I haven’t started thinking about the mileage counter yet, so if anybody has any suggestions for handling latitude/longitude points and calculating mileage from those, I guess feel free to let me know. Otherwise, I’ll just let you all know what stats I come up with.
My ongoing saga of Google Location History data continues here…