My youngest has graduated high school and his party is in a week. We thought it would be fun to use an online slideshow of pics of him through the years that people could access with a QR code, and that got me thinking about seeing if I could build a game where party attendees could interact with the photos. Basically I thought it would be fun to see which of my kid’s friends could find him in 10 consecutive photos the fastest.
I figured I had a week to do it and knew that I’d have to refamiliarize myself with a bunch of different issues:
How to swap photos out (so they only see one at a time)
How to determine whether they’ve clicked/touched a photo and, if so, where they clicked
How to find where his face is in all the photos so that I can tell if the user touched the right person
How to restart if they miss him in a pic
How to collect the total time for the 10 pics if they are successful and put it on a leaderboard.
There’s a bunch of other things involved, but those were the ones I knew would take some time digging through old projects of mine where I figured that stuff out. I wasn’t intimidated by anything as I knew I’d done all of it before, but I knew it would take a while and I wasn’t really sure if I’d have the time/patience/excitement to keep it all going.
Then I thought I’d try something:
This is a screenshot of me asking Gemini (Google’s AI tool) to help.
I copied and pasted the code and tried it. It used placeholders for the images that really just looked like broken images, but I could already tell that the "if you click wrong it should start again" logic was working.
So I replaced the placeholders with real images where I knew where my kid’s face was (see below for those details) and . . . it worked!
Now a few of you are thinking “yep, we knew that’s what was going to happen”. To be honest, I was pretty confident too, especially after watching a few vids of others doing similar things. But I was still excited! Some of this post is about how I feel about this approach and how this tool fits into my tool bag.
I had to find the faces first
For this to work I had to find a bunch of images of my kid and figure out where his face was in each of them. The first part was easy, as Google Photos does a really great job of finding pictures with particular people in them. I found 81 (yep, 3^4) that I thought did the trick. Namely, they had him in them along with some other people who might confuse some folks (all three of my kids share a lot of the same features, for example). They also spanned his whole life, so only folks like me would be able to do really well in this game.
The second part (finding where his face was in the photo) I knew was not technically difficult, just logistically awkward. First I wondered if Gemini might be able to access Google Photos, especially their trained tool that knows what my kid looks like. No dice. Then I uploaded one of the photos to Gemini and asked for the coordinates of all the faces. That worked, but I would have to know which one was my kid and I’d have to manually submit all the photos.
Ultimately I realized that I could do it if I could just have an interface that showed all the photos and, if I clicked on one, it would paste the url of the image along with the coordinates of the click into a list I could paste into a spreadsheet:
This one correctly updated the captions (after I updated the code to use the actual images from Google Drive and not the placeholders), but I realized I needed the url and coordinates:
That gave me a web page with all 81 images in a photo grid. For each I just clicked on my kid’s face and then copied and pasted the list it generated into a Google Sheet tab that is driving the app.
What I had(?) to do
I knew I could build a Google Apps Script Web App to do all this. It could be driven by a Google Sheet that has the list generated above. So I:
Put all 81 photos in a single folder in Google Drive (and set the folder to viewable by anyone)
That url will let you host images in google drive and let anyone load them in a web site. It seems to work great, but I wouldn’t build a true image server this way. I don’t think Google would be happy with that. Since this is only going to be used on the party day, I’m not too worried about it (but we’ll see!).
Send all those urls/links to a web app so that they can replace the placeholder image sources that Gemini built
Gemini only built the leaderboard to work with local storage (so really it would just be each user's personal leaderboard), so I had to adjust that code to send the score and the username to the Google Apps Script, which updates a Google Sheet tab with that data. Similarly, I had to read that spreadsheet and send it as the initial leaderboard to each user when they access the page.
In some initial tests it kept saying I wasn't clicking on the correct face, but I knew I was. It turned out that Gemini treated my recorded coordinates as the top-left corner of the hit box rather than as its center. That was easily fixed once I looked at the logic Gemini was using to determine whether a click was correct (see the sketch below).
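The fix boiled down to something like this (a minimal sketch, not Gemini's exact code; the hit-box size is made up):
var HIT_SIZE = 80; // hypothetical hit-box size in pixels
function isCorrectClick(clickX, clickY, faceX, faceY) {
  // faceX/faceY are the coordinates I recorded by clicking on his face,
  // treated as the CENTER of the box rather than its top-left corner
  return Math.abs(clickX - faceX) <= HIT_SIZE / 2 &&
         Math.abs(clickY - faceY) <= HIT_SIZE / 2;
}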
Vibe coding or killing jobs or being lazy or not learning or . . .
I’ve been thinking a lot about this experience over the past few days. Here’s a list of some of the prominent thoughts:
It worked!
I got to work on the parts I love to work on (game play, big ideas, testing)
I didn’t have to work on the parts I don’t enjoy (things like scaling the pixel measurements, scaling the images, getting it to work well on a phone)
I was easily able to fix oddities like the corner-vs-center hit box because I fully understood that part of the code that Gemini created(?).
I am totally at the whim of Gemini for the CSS code because my CSS skills aren't that great. Mostly that means that instead of reading through the CSS and thinking of fixes so it looks better on phones, I just went back to Gemini and asked it to make it look better on a phone. That worked, so good enough, but I definitely didn't learn anything (I literally didn't look at what it did to change the CSS; I just tested it and was happy enough).
I’m not sure I can say “I made this”
I think I’m more excited to show this game at the party than if I spent the whole week working on it. Mostly I say that because I predict that I would have gotten crabby about some aspect of the work that had to be done and taken a short cut that ultimately would have driven me crazy.
I guess we’re comparing “I spent a lot of time on this and am proud of what I made” with “… and I used AI to do it!”. We’ll see how the party goes 🙂
I’m nervous that the biggest thing I learned while doing this project was how to use Gemini.
Your thoughts?
So I’d love to hear what you think about this. Here are some starters for you:
This is cool! How did you . . .?
This is dumb! Why didn’t you . . .?
Why didn’t you use [fill in this blank with a non-Gemini LLM tool]?
Can you share the full code?
It’s almost as if you don’t want to share your kid’s name or even show pictures of him [I WANT TO SEE IT IN ACTION!]
I think you learned a ton of things. For example . . .
I think you didn’t learn anything. This was a waste of your time and mine
Yes of course you can say “I made this”
No of course you can’t say “I made this”
When you say “I got to work on the parts I love . . .” you’re really saying that you got to stay in your comfort zone. That’s no way to learn.
When you say “I didn’t have to work on the parts I don’t enjoy” you’re ensuring that you’re able to keep your focus and be way more productive
I’m a LLM and I think this is great!
I’m a human coder and you owe me money
I’m not convinced you had to do anything here. It seems you could have done all of this with Vibe Coding (I’m using the definition where you never type any code, just LLM prompts).
I went on a bike camping trip last week and before I left I built something to show my friends and family my trip. I wanted to capture what I built here for my future self who can’t remember how to:
easily capture images with a date stamp and a geolocation in a mobile app
host a public web page that shows the journey with clickable markers that can open the images
add a path from fitbit data if you tracked your trip.
tl;dr check this out (note it takes ~20 seconds to load but once it’s loaded it’s quite responsive)
AppSheet for capturing images
I’ve talked about AppSheet before. It’s Google’s so-called “no-code” tool and it works really well on both iOS and Android devices. All I did was make a simple one that only I use. It’s got a very basic interface:
This is the interface for my AppSheet app. You hit the green plus to initiate a new trip. Then you hit the camera icon to add an image. When you do it saves the current datestamp and the current location.
When you add an image a simple Google Sheet gets updated:
This is the spreadsheet that the AppSheet App uses to store its data. There’s another tab for the trips, but this shows the “images” tab. The images are actually stored in Google Drive in the location and using the name shown in the “image” column.
So as long as I have my app installed on my phone I’m able to take quick photos, add a caption, and trust that it’ll get timestamped and geostamped in the spreadsheet.
Use Google Apps Script to host a web page
I've talked a lot about Google Apps Script before. Here I'm using the web app feature. Here's the gist of what I wrote (with a sketch after the list):
I grab the tripId from the end of the url (…?tripId=XXXXX)
I filter all the rows in the spreadsheet tab shown above, grabbing only those with the tripId from step 1.
For each image I grab the Google Drive id of the file.
Assuming you’ve set the sharing of all the images to be public, that url will load the image even in an incognito window
I check to see if there’s any fitbit data (see below) and extract the lat/long paths so that they can be added to the map
I use an html template that has all the data in a global namespace so that I can use Leaflet to make the map
When the page loads the underlying map is made, the markers are added at the location of the image and set to be popups if they’re clicked, and I add either a line connecting all the numbered markers or I add the fitbit path.
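Put together, the gist looks something like this sketch (the spreadsheet id, sheet name, and column order are my assumptions):
function doGet(e) {
  var tripId = e.parameter.tripId; // step 1: from ...?tripId=XXXXX
  var rows = SpreadsheetApp.openById("SPREADSHEET_ID")
    .getSheetByName("images")
    .getDataRange().getValues()
    .slice(1) // drop the header row
    .filter(r => r[0] == tripId); // step 2: keep only this trip's rows
  var t = HtmlService.createTemplateFromFile("main");
  t.globals = {images: rows}; // the template drops this into a global namespace for Leaflet
  return t.evaluate();
}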
Really I just used all the lessons I learned from this project. The key is uploading your .tcx files into Google Drive and then just saving their urls in another tab of the spreadsheet:
Another tab in the spreadsheet where I manually add in links to .tcx files from my fitbit (really my Pixel Watch)
The function at the bottom of the javascript listed above is really all you need (in fact that one grabs more than just the coordinates but they’re all I’m using for this project).
Highlights
What I like about this project is how I can provide my family and friends some context-rich information about my camping trips. They can see where I’ve been (or watch it grow as I travel) and they can see some images with my thoughts. It’s all free to use (appsheet with just one user is free, the map tiles are free for small projects, and the website hosting is free using google apps script) and relatively easy to set up for yourself:
Make a new appsheet and make a simple view with a couple of actions (like “add image”)
Make a spreadsheet to store the info
Attach a google apps script web app to that spreadsheet with the code above
Your thoughts?
This was fun to build and I think I’ll keep using it as I go on a bunch of camping trips this summer. I’d love to hear your thoughts. Here are some starters for you:
This is cool! I especially like . . .
This is dumb! I can’t believe you ripped off . . .
How do you make the camera icon work to add an image in AppSheet?
Why don’t you just make appsheet do all of this (I know it has mapping features)? (answer: you can only have 10 people use it before they charge you)
Why in the world does it take so long to load? (answer: all I'm grabbing from the image files is their Google Drive ID, but that seems to take a while, and I'll keep digging to see if I can make it faster. The last 10 seconds or so is loading and transmitting all the .tcx data; the files in the link above contain over 7,000 geo points each)
I see on line 56 that you’ve tried other tile sources. What’s wrong with them (answer: some don’t zoom in far enough)
A bunch of your markers are on top of each other. Fix it! (for now you can just drag them around to uncover ones underneath)
I see on line 65 that for a while you tried waiting for the images to load before updating the map. Do you not need to do that?
What in the world are you doing starting on line 21 of the html template?
I realized that if you know of a situation that can be described in some strange coordinate system as a circle, there’s likely a way to calculate pi from that situation. What I decided to play around with is a disk bouncing off trees in a forest without any friction. It’s kind of like a plinko machine on ice, I guess, without using gravity to drive the disk. Here’s a version of it:
Disk (black) bouncing around a forest of trees (red) and constrained in a circle (green)
While you see a green circle in that setup, it's not the circle I had in mind that'll get me pi. Instead I realized that if there's no energy loss, there are times when the energy is entirely kinetic (I do the tree collisions using half-springs), and that kinetic energy will always have the same maximum value. What's cool is that if you plot the vertical velocity against the horizontal velocity, those maximum points fall on a circle:
Plot with the x-component of the disk’s velocity on the x-axis and the vertical component on the y-axis. The red dots indicate when the disk hits a local maximum of its kinetic energy.
The black circle in that image is defined by setting $latex v_x^2 + v_y^2 = 2E_0/m$, where the constant on the right-hand side is two times the initial energy of the system (divided by the mass, which is 1 here). I found the red dots by using the "EventLocator" method for NDSolve in the Wolfram Engine (which, as I say all the time now, I run for free on my home computer).
So how do I get pi? Well, I find a path that hits all the red dots with the smallest possible distance. Clearly that's not going to each red dot in order, as that constantly crisscrosses the circle. Instead, I use the traveling salesman optimization, i.e. the "FindShortestTour" function in the Wolfram Language. That leads to this:
The red dots are the points in a vx vs vy plot where the kinetic energy is at a local maximum. The black path is the shortest tour of all of those points. The blue circle represents locations where the system would have a kinetic energy equal to the original energy of the simulation.
To get pi I just determine the length of that tour and divide by two times the radius of the circle, which I know from the energy of the system. For this particular simulation I get (drumroll please):
See, I told you it was a pretty dumb way to calculate pi.
Your thoughts?
I welcome your thoughts. Here are some starters for you:
I love this! I particularly like . . .
I hate this! What is especially egregious is . . .
Why do you keep saying "local maxima" for the kinetic energy? Shouldn't they all be on the circle?
Wait, they’re not all on the circle!
Are you really going to keep saying “Wolfram Engine” instead of Mathematica now?
Wait, what do you mean you have the Wolfram Engine running for free, and why do you say “home computer”? Surely you’re just using your work’s site license somehow [answer: I bought a new computer and set everything up for the Wolfram Engine and Jupyter without ever logging into my work account]
What is a half-spring?
You’re going to link to that cool block collision vid and not bother trying to do this in real life with measurements? Jerk
Why are some trees outside of the green circle? Surely you selected random tree locations inside of the circle instead of stupidly selecting them in a square and not bothering to fix it.
I see you’re not bothering to give us your code once again
trees=RandomReal[{-1,1},{100,2}]; (* 100 random tree locations in a square *)
limit=0.05; (* disk radius / interaction distance for the half-springs *)
k=2/limit^2; (* spring constant *)
m=1;
(* half-spring potential for one tree: only acts when the disk is within "limit" *)
vSingle[d_][rpin_]:=(dist=Sqrt[(rpin-d).(rpin-d)]; Piecewise[{{1/2 k (dist-limit)^2, dist<limit},{0,True}}]);
(* total potential: all the trees plus the constraining green circle at radius 1.2 *)
vAll[d_]:=(rad=Sqrt[d.d];Total[vSingle[d]/@trees]+Piecewise[{{1/2 k (1.2-rad)^2,rad>1.2},{0,True}}]);
r[t_]:={x[t],y[t]};
KE=1/2 m r'[t].r'[t];
kefunc[t_]:=1/2 m r'[t].r'[t]; (* kinetic energy as a function, so I can differentiate it for the event *)
PE=vAll[r[t]];
L=KE-PE;
lag[x_]:=D[L,x[t]]-D[L,x'[t],t]==0; (* Euler-Lagrange equations *)
(* integrate, Sowing the velocity every time the kinetic energy hits a local max,
   i.e. its derivative crosses zero heading downward *)
points=Reap[sol=First[NDSolve[{lag/@{x,y},
x[0]==-1.1, y[0]==0,
x'[0]==1, y'[0]==0},{x,y},{t,0,tmax=10},MaxStepSize->0.1,Method -> {"EventLocator", "Event" ->kefunc'[t] , "Direction" -> -1,
"EventAction" :> Sow[r'[t]]}]]];
Show[ParametricPlot[r[t]/.sol,{t,0,tmax}],ListPlot[trees,PlotStyle->Red]] (* trajectory through the forest *)
Show[Graphics[Circle[]],ParametricPlot[r'[t]/.sol,{t,0,tmax}]]; (* velocity-space plot *)
frame[t_]:=Show[Graphics[{Disk[r[t]/.sol,limit],Red,Point[trees],Green,Circle[{0,0},1.2]},PlotRange->1.3],ParametricPlot[r[t2]/.sol,{t2,0,t},PlotPoints->Round[1000 t/tmax]]];
frameList=Table[frame[t],{t,0.1,tmax,tmax/100}];
Export["pi from tree collisions.gif",frameList,"DisplayDurations"->0.1];
circlePlot=Show[Graphics[{Circle[],Red,Point[points[[2,1]]]}],ParametricPlot[r'[t]/.sol,{t,0,tmax}]];
Export["pi from tree circle plot.png", circlePlot];
fst=FindShortestTour[points[[2,1]]]; (* {tour length, ordering of the points} *)
fstPath=Graphics[{Red,Point[points[[2,1]]], Black,Line[points[[2,1]][[fst[[2]]]]],Blue,Circle[]}];
Export["pi from tree path.png",fstPath]
This post describes how I figured out how to ride to work while racing earlier versions of myself. Think of it like Mario Kart’s Ghost mode, only without cool 3D virtual reality and with a phone screen that shows your position and the position of 3 random rides along the same route.
These are all taken from one of my rides. The first one shows some poor phone GPS, since I was actually on that big bridge. The rest are from different points on a ride I was pretty proud of, since I won! (Note that I also came in second, third, and fourth!) The gray rectangle at the bottom is a mismatch between my html (see below) and my phone screen size that I haven't bothered to fix yet.
The screen updates every time the phone gets a new GPS location. For my phone (Google Pixel 7 with Verizon) it’s about every 8 seconds. The map is set to keep all four markers in the frame, so it constantly adjusts the pan and zoom to do that.
Don’t care about the details, just tell me how to do it
I’ll put the details of what I learned below. Here’s how to do it yourself:
Get yourself a device that can track your workout and export .tcx files. That's what fitbit uses, but I gather the format was invented by Garmin, so there are lots of ways to do that.
Create a folder in Google Drive that will hold all the tcx files (make a different folder for different types of rides).
Create a new Google Apps Script file.
In code.gs, put in this:
var funcs=[];
var allData=[];
function doGet(e) {
var t=HtmlService.createTemplateFromFile("main");
t.funcs=funcs;
t.funcnames=t.funcs.map(f=>f.name);
prepData(e.parameter.dir);
var timeTotal=Math.max(...allData.map(m=>m.times).flat());
t.globals={allData:allData,
currentTime:0,
timeTotal:timeTotal,
colors:["red","blue","green"],
initTime:0,
map:null,
markers:[],
locationMarker:null,
locationCircle:null,
group:null,
};
return t.evaluate();
}
const getLocations=(time)=>
{
var locations=[];
allData.forEach(a=>
{
var targetIndex=a.times.findIndex(f=>f>=time);
if(targetIndex==-1) targetIndex=a.times.length-1;
locations.push([a.lats[targetIndex], a.longs[targetIndex]]);
})
return locations;
}
funcs.push(getLocations);
function prepData(dir="to")
{
var folder=DriveApp.getFolderById(dir=="to"?"[FOLDERID FOR THE 'to' RIDES]":"[FOLDERID FOR THE 'from' RIDES]");
var files=folder.getFiles();
var fileIds=[];
while(files.hasNext())
{
var file=files.next();
fileIds.push(file.getId());
}
var upToThree=[];
while(upToThree.length<3)
{
var id=fileIds[Math.floor(Math.random()*fileIds.length)];
if(!upToThree.includes(id)) upToThree.push(id);
}
allData=upToThree.map(id=>readTcxFile(id));
}
function readTcxFile(id)
{
var file=DriveApp.getFileById(id);
var text=file.getBlob().getDataAsString();
// Logger.log(text.slice(0,10));
var times=[...text.matchAll(/<Time>(.*)<\/Time>/g)].map(m=>new Date(m[1]).getTime());
var fT=times[0];
times=times.map(m=>m-fT);
var lats=[...text.matchAll(/<LatitudeDegrees>(.*)<\/LatitudeDegrees>/g)].map(m=>m[1]);
var longs=[...text.matchAll(/<LongitudeDegrees>(.*)<\/LongitudeDegrees>/g)].map(m=>m[1]);
var alts=[...text.matchAll(/<AltitudeMeters>(.*)<\/AltitudeMeters>/g)].map(m=>m[1]);
return {times:times, lats:lats, longs:longs, alts:alts};
}
Change the “[FOLDERID FOR THE ‘to’ RIDES]” to the folder id for your “to” rides and similar for your “from” rides.
Note that you could change the logic on line 38 to handle any number of different folders. I mostly use this for riding to and from work, so I use this logic.
Make a new file in the script and call it main.html
Set up a new web app deployment with this script and you’re good to go! (note that you need to run the doGet function once just to get all the permissions set).
To use the logic on line 38 of the code.gs, add “?dir=to” or “?dir=from” to the url you get from the deployment.
What I learned (or: “I care more about your journey of learning than just taking what you’ve done and implementing it for myself”)
Here’s a quick vid showing what I was originally excited to build. It shows what it does and walks through how to do the code:
Below I’ll talk about all the interesting technical things I learned how to do:
.tcx files
With my new Pixel Watch (technically the Pixel Watch 2 if you must know), you get fitbit for free (though they strongly encourage you to move up to premium which I haven’t done). Tracking a ride is really easy. Just wake up the watch, swipe once, then hit “bike” and it’s tracking. At the end of the day you just have to export the tcx file wherever you want it:
Typically it seems I get a new “Trackpoint” every second. So most of what I did had to do with scraping all of the Times, Latitudes, Longitudes, Altitudes, Distances, and HeartRates. Typically I do that using some regex. In Google Apps Script you can see an example on lines 63-65 in the code up above. I just grab them all and store them in javascript arrays so that they’re all indexed the same (if I’m looking at the 51st time, then the 51st element of the “lats” array is the latitude at that time).
Maps
As you can likely tell, I ultimately went with Leaflet.js for controlling the maps on the web pages and with OpenStreetMap for what are called the tiles that Leaflet needs to show. I started out doing a ton with the Google Maps Static API, but those maps are static and not meant to be used with Leaflet. Also, I can make 1000 Google static maps per day using Google Apps Script, but connecting with Leaflet would mean using their normal API, and that starts costing money much quicker. OpenStreetMap, at least for small projects like this, is a perfect choice, it seems to me.
Brief aside about how to choose the appropriate initial zoom when you know all your latitudes and longitudes: It turns out that all the mapping tools I've played with for this project use the same calculation for zoom level. I was interested in picking the integer zoom level that for sure contained all my position data. So I'd find the extremes (most northerly, southerly, westerly, and easterly) and then figure out what zoom level for sure contained them. At first I was constantly making 600×400 maps, so I just needed to make sure that everything would fit. The key is to know that at zoom level zero, the full equator fits in 256 pixels, and every zoom level after that adds a factor of two. I wanted all my horizontal stuff to fit in 600 pixels and all my vertical stuff to fit in 400 pixels, so I'd figure out the decimal zoom that would achieve each (two different answers, often), take the more zoomed-out of the two, and round down to the nearest integer for the initial zoom level. You can see that in the video above.
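Here's a sketch of that calculation (my own reconstruction; the latitude math is the standard web-map Mercator version):
// at zoom 0 the equator fits in 256 px; every zoom level doubles that
function latToMercatorY(lat) {
  var rad = lat * Math.PI / 180;
  return (1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2; // 0..1, top to bottom
}
function initialZoom(north, south, east, west, width, height) {
  var lonFraction = (east - west) / 360; // fraction of the world's width
  var latFraction = latToMercatorY(south) - latToMercatorY(north); // fraction of the world's height
  var zoomX = Math.log2(width / 256 / lonFraction);
  var zoomY = Math.log2(height / 256 / latFraction);
  return Math.floor(Math.min(zoomX, zoomY)); // the more zoomed-out of the two, rounded down
}
// e.g. initialZoom(45.0, 44.9, -93.0, -93.2, 600, 400)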
Once you tell Leaflet to use openstreetmaps, you can then load the map with additional markers and lines. What you see in all the things above are the lines of all the routes shown in the race and markers for the current location. Here’s some pseudo code that makes a map and adds some lines and markers:
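Something like this (a sketch with made-up coordinates, using the real Leaflet calls):
var map = L.map('map').setView([44.97, -93.16], 13); // made-up starting point
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);
var route = L.polyline([[44.97, -93.16], [44.98, -93.15], [44.99, -93.13]], {color: 'red'}).addTo(map); // one ride's path
var marker = L.marker([44.97, -93.16]).addTo(map); // current position
map.fitBounds(route.getBounds()); // keep everything in frame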
At first I thought I had to do a bunch of external drawing (basically an html canvas element on top of the map), but Leaflet lets me do all of that just like my pseudo-code above. However, for the speed, heart rate, altitude, and distance plots you see in the youtube vid above, I still had to figure out how to make those (animated!) plots.
Because those plots show all the data all the time plus a moving set of markers, I actually use two canvas elements on top of each other. The lower one has the full plots and the top one is constantly redrawn (see the animation section below) with the moving markers.
To have stacked canvas elements, just put them in the same div and make sure their css uses absolute location:
<!-- put this in the head somewhere -->
<style>
.wrapper {
position: relative;
width: 600px;
height: 400px;
}
.wrapper canvas {
position: absolute;
top: 0;
left: 0;
}
</style>
<!-- put this in the body somewhere -->
<div id="speedDiv" class="wrapper col">
<canvas id="speedsCanvas" width="300" height="200"></canvas>
<canvas id="speedsCanvasDot" width="300" height="200"></canvas>
</div>
Here’s the code to draw a simple line to the bottom canvas and a marker on the top canvas:
var bottom = document.getElementById("speedsCanvas");
var topCanvas = document.getElementById("speedsCanvasDot"); // "top" would shadow window.top, so use a different name
var context = bottom.getContext('2d');
context.clearRect(0,0,bottom.width,bottom.height); // wipe the canvas first
context.beginPath();
context.moveTo(100,50); // where to start the line
context.lineTo(110,60); // draw a line from the start to here
context.lineTo(120, 70); // draw a line from where you are now to here
context.stroke(); // actually draw the line
context=topCanvas.getContext('2d'); // now grab the other canvas and draw a marker
context.fillStyle="red";
context.fillRect(100,50,10,10); // at 100 from the left, 50 from the top, 10 pixels wide and tall
Here’s the part that sucks: the origin (of the pixels) is at the upper left of the canvas. So your horizontal instincts are all correct, but your vertical ones are all backwards.
Animation on a web page
For any tcx files that I’m trying to animate, I reset all of their times to start at zero. I also convert all time stamps to the number of milliseconds since 1/1/1970 using new Date(datestamp).getTime(). Then, when I’m animating things, I just have javascript constantly recheck the time, determine how much time has gone by since the animation started, and then choose the appropriate latitudes and longitudes to display on the map. All of this is done with the magical window.requestAnimationFrame().
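Here's a minimal sketch of that loop (my own names; times/lats/longs are the arrays scraped from the tcx file, with times already zeroed):
var startTime = null;
function step(now) {
  if (startTime === null) startTime = now;
  var elapsed = now - startTime; // ms since the animation started
  var i = times.findIndex(t => t > elapsed); // first trackpoint we haven't reached yet
  if (i === -1) i = times.length;
  if (i > 0) drawMarkerAt(lats[i - 1], longs[i - 1]); // hypothetical drawing function
  window.requestAnimationFrame(step);
}
window.requestAnimationFrame(step);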
Actually, for the ghost mode app, I don’t have it constantly recheck the time. Instead, I wait until there’s a new GPS measurement, and then I check the time. On my phone that’s about every eight seconds. There is vanilla javascript to check the gps location, but leaflet has it built in with the command map.locate(). It’s that command that I repeat over and over again, waiting for the results before updating the map. Here’s the pseudocode for that:
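Something like this (a sketch; updateMarkers is a stand-in for the real map-updating code):
map.on('locationfound', function (e) {
  updateMarkers(e.latlng); // move my marker and look up where the three ghosts were at this elapsed time
  map.locate(); // ask for the next fix; the next 'locationfound' re-triggers this
});
map.on('locationerror', function (e) { console.log(e.message); });
map.locate(); // kick things off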
AppSheet is a google product that enables you to use Google Sheets as a backend database for a mobile app that works in both the Android and Apple ecosystems. I encourage you to check it out, as it’s become a go-to tool for me to think about interacting with students and my colleagues at my school. Quick note: it’s really designed to be used by people on the same domain (yourcoolschool.edu, for example), not really to produce a true mobile app for the world. This post is about some brainstorming I’ve been doing to think about a campus app that could be useful.
What I’ve done so far
Here’s a quick list of the apps I’ve built and some of the cool features they make use of:
Hamline Go
This is an app I built to try to help build community on campus. Students can see faculty and staff avatars on a map and collect them by clicking on them. They get one point for every collection unless it's a faculty member who teaches in their major, in which case it's 5 points. If instead they actually talk to a faculty or staff member and get their hourly code, they get 10 points. Certain offices on campus also have hourly codes. I also built in flash (1 hour), daily, and weekly challenges like "write a haiku about your major" or "prove you found this spot on campus". Point leaders at the end of each month get declining balance (money) prizes.
Tools used:
Google Maps
Login features (checks user email and crosschecks the faculty in their declared major)
Editable images (you can draw on images before submitting them for the challenges)
Google Apps Script running independently to update both the avatar locations (randomized on our campus) and the hourly codes
Majors t-shirts
Every major program was encouraged to design a tshirt that we would then order for any declared major that wanted one. I built an app to show students the designs available to them (only the designs for their declared majors) and let them submit the size that they wanted.
Tools used:
Login (check their major and crosscheck against the submitted designs)
Selfie collection (students are encouraged to submit a selfie wearing their tshirts so we can make a collage)
Scavenger hunt
I made an app that encourages first year students to learn about the various offices on campus. Each office had a QR code displayed that, if the students scanned it using the app, indicated that the student had visited that office.
Tools used:
QR code reader
Login
I’ve made it quite a bit up the learning curve, including some of these really useful tips and tricks:
Make a slice of the users data called "me". Have it be a filter of all the users that match the useremail() (a built-in function). Essentially it's a data source with just one row in it. Then I use index(me[id], 1) to get the current user's id when doing a lot of the crosschecking mentioned above. I do the same thing on any of the related collections. In Hamline Go, for example, I make a "my avatars" slice to show the avatars that have been collected by that user. Then I can send that slice as the core data to a "view" that shows avatars. (See the sketch after this list.)
Use an automation to send a notification whenever there’s a new flash challenge. They only last an hour, so it’s helpful to let the students know there’s a new one. These notifications come as push notifications to the students phones.
Pre-fill forms using LinkToForm that can take a collection of values to pre-fill elements. You can also then not show those elements in the form view and so it looks magical!
Send email notifications to people that include live forms right in gmail. This works really great for approvals.
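For concreteness, the "me" trick looks roughly like this (my paraphrase; the table and column names are made up):
Row filter condition for the "me" slice: [email] = USEREMAIL()
Current user's id: INDEX(me[id], 1)
Row filter for a "my avatars" slice: [user_id] = INDEX(me[id], 1)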
What I’m hoping to do
I’d love to create an app that I’m currently calling “Hamline All Around”. Here are some of the features I hope to build in:
Somehow provide a navigation aid through our various systems. We get a lot of feedback from students who say they just don’t understand how to accomplish various things, especially the ones that are supposed to be providing valuable resources to them. A great example is our emergency grant program.
One thing I’m thinking of is a dynamic FAQ that might ask what sort of thing they’d like to accomplish and then quickly get them to the office they should start with.
“Find a study buddy!” I gather other schools do this and so I built a quick test to see if this would work. The student logs in and it shows them the classes they’re taking. For each they can “raise their hand.” If they do, they see all the other students who’ve done the same, all of whom are indicating that they’re interested in finding study buddies. They can see each other’s emails, or potentially use the app’s push notification system to communicate. I also made a form that lets them say “I plan to study tomorrow night in the library from 7-8” and the others who’ve raised their hands can indicate if they plan to come. This is displayed in a nice calendar format that looks a lot like google calendar.
Show the student group and athletic team calendar events in one place (right now you have to separately subscribe to both ical streams, but appsheet can do the subscription and display them in the app)
Event check-in: All employees and students have bar codes on the back of their IDs, and appsheet can read them (it's a setting for one of the built-in form elements). It would be very easy to build an event check-in system where the event runners (multiple of them) could have the app running on their phones and scan in the students/staff/faculty who attend. Once they do, the app on the attendees' phones could become much more functional (i.e., the app, upon being updated, would detect that the person is checked in and would then show all the relevant functionality for the event).
Directions: Often it’s not enough to tell people what building things are in. Directions for how to actually get to the office can come in handy. I’ve been wondering about vids showing someone literally walking from a common place to an office, all the while describing what useful things the office can do for students.
Help!
I’m excited to work on this, but I’d love some more brainstorming, including poking some of these ideas a little. Here are some starters for you:
I love this. What I especially love is . . .
This is dumb. What I especially dislike is . . .
Wait, you’re tracking the location of all faculty and staff in real time?! (note: it took me a long time to convince people I wasn’t doing that. I call the things avatars and try to make it clear that I just randomly place them on the campus map. However, several faculty and staff were deeply suspicious of this app and asked to be taken out of the app.)
I think the “find a study buddy” app is really interesting. Have you thought about . . .
I think the “find a study buddy” app is really disturbing. Instead you should . . .
So after you figure out all the views and the data connections you have to learn how to write it for both the Android and iOS systems?! (no! Appsheet does all that for me. All the students have to install is Appsheet which already exists in both ecosystems)
We’ve got a student app on our campus and I love it. Here are my favorite parts . . .
We’ve got a student app on our campus and I hate it. What would make it so much better is . . .
Here’s some additional functionality you should think of . . .
Can you please do some tutorials on how to do all this in Appsheet?
which is actually referring to a really old post of mine about how to dig a well.
In order to add in the effects of a spinning planet, there are two or three major routes I could take:
Set the constraints on the path to be for a spinning path.
Just add the coriolis and centrifugal forces to my other calculations
Do either (1) or (2) with some Lagrange Multipliers so that we could interrogate the constraint forces keeping the ball in the tunnel
This post is really about (3), though I’ve done (2) and it’s pretty cool. I just put in the fictitious forces, but only those components that are along the tunnel (using a dot product). Here’s the result:
What constraints do we use for the Lagrange Multiplier approach?
When you do Lagrange Multipliers, which you usually do because you want to learn something about the forces that enforce the constraint, you need to have a formula of something that (typically) stays constant for all the dynamics.
I first thought I could do this with a single constraint. I figured if we have points a and b on the line and we want to find a measure of how far a third point, p, is from the line, we'd get a distance, and the constraint would be that the distance should always be zero. I can get that distance by asking for the length of the vector given by crossing a normalized vector on the line with the vector from the point of interest, p, to either end point. As I tell my students, when you use a cross product, one vector is saying to the other: "hey you, how much of you is perpendicular to me?" and that perpendicular distance is exactly what I'm looking for. So, I figured this would work for a constraint (it should be zero for any point on the line): $latex \left|\left(\vec{r}-\vec{p}_1\right)\times\left(\vec{p}_2-\vec{p}_1\right)\right|^2=0$
So I tried it! Holy crow, it’s slow in the Wolfram Engine (Mathematica for free). Actually, if I start the simulation on the line, the WE just gives up:
r[t_]:={x[t],y[t],z[t]};
constraint=Cross[r[t]-p1,p2-p1].Cross[r[t]-p1,p2-p1]; (* squared cross product; zero whenever r is on the line through p1 and p2 *)
KE=1/2 r'[t].r'[t];
PE=4/3 Pi r[t].r[t]; (* potential inside a uniform-density planet, with G=m=1 *)
L=KE-PE;
el[a_]:=D[L,a[t]]-D[L,a'[t],t]+lm[t] D[constraint,a[t]]==0; (* Euler-Lagrange with the multiplier term *)
sol=First[NDSolve[{el/@{x,y,z},
x[0]==p1[[1]],
y[0]==p1[[2]],
z[0]==p1[[3]],
x'[0]==0,
y'[0]==0,
z'[0]==0,
D[constraint,t,t]==0},{x,y,z,lm},{t,0,1}]] (* differentiate the constraint twice so NDSolve can handle it *)
NDSolve::ndcf: Repeated convergence test failure at t == 0.; unable to continue.
If I start it just off the line, it meanders up and down the line, maintaining its distance. But, wow, it’s a really slow integration (note: all I did was change the x[0] value to p1[[1]]+0.01 on line 8)
Here’s a plot of the constraint function (note that it stays pretty constant):
Since it was so slow, I wanted to find another way. That’s when I realized that a straight line is given by the intersection of two planes. So maybe I could just say “hey, it’s on this plane and on that plane at all times.” That’s equivalent to saying that we have two constraints, not just the one like above.
So how do you find the equations for a plane (or two, for that matter) that the original line is on? Well, the formula for a plane is $latex ax + by + cz = d$, so we just need to find two sets of a, b, c, and d that do the trick. What I did was:
and the WE told me the relationships you have among them. Then I just arbitrarily chose two sets of a and b (<0,1> and <1,0>) and calculated two sets of c and d. Then I just used those two equations in the Solve command as my (2!) constraints and it worked. And it was really fast.
A general path in 3D can be given by three parametric equations where you use a fourth variable to connect them all:
x=f(t)
y=g(t)
z=h(t)
That looks like three constraints but when using Lagrange Multipliers you don’t really want to have that extra variable (t in this case). You want to only use x, y, and z and have extra unknown force terms that the ODE solver solves for.
So really you have to invert one of those 3 equations. Often one of them is amazingly simple, like f(t)=t. If that’s true, then you have two constraints:
y-g(x)
z-h(x)
and you’re off and running!
Here’s an example where g(x) is sin(x) and h(x) is cos(x). This is a helical path:
Note that I put in a potential energy in all of these examples. I could have just set that to zero, but then to get things to move I’d have to set the initial velocity. For the straight lines, that’s relatively easy, but for the helix I’d have to think carefully about what direction it starts moving in. So much easier to start things at rest and let a (conservative) force take over.
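Here's roughly what that helix setup looks like (my reconstruction, following the pattern of the straight-line code above, started at rest on the constraint per the note above):
r[t_]:={x[t],y[t],z[t]};
c1=y[t]-Sin[x[t]]; (* the y-g(x) constraint *)
c2=z[t]-Cos[x[t]]; (* the z-h(x) constraint *)
KE=1/2 r'[t].r'[t];
PE=4/3 Pi r[t].r[t];
L=KE-PE;
el[a_]:=D[L,a[t]]-D[L,a'[t],t]+lm1[t] D[c1,a[t]]+lm2[t] D[c2,a[t]]==0; (* one multiplier per constraint *)
sol=First[NDSolve[{el/@{x,y,z},
x[0]==1, y[0]==Sin[1], z[0]==Cos[1], (* start on the helix *)
x'[0]==0, y'[0]==0, z'[0]==0, (* at rest *)
D[c1,t,t]==0, D[c2,t,t]==0},{x,y,z,lm1,lm2},{t,0,10}]];
ParametricPlot3D[r[t]/.sol,{t,0,10}]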
Your thoughts?
I’d love to hear your thoughts. Here are some starters for you:
This is super helpful. What I especially like is . . .
This is dumb. I hope future you realizes that.
This is future you. Thanks for this, it’s really come in handy.
This is future you, watch out for that thing tomorrow.
Really? You’re going to start using WE instead of “the mathematica kernel you can use for free”? That seems too informal.
I’m still not sure I get why you insisted on adding a potential energy to all these examples.
Why do you use $latex \frac{4}{3}\pi r^2$ for the potential energy in a planet?
What do you mean by a conservative force? What’s wrong with a liberal force?
I greatly enjoyed this recent video from StandUpMaths:
The set up is a hoop with a mass attached at one point. It’s rough so it rolls without slipping. It’s released with some angular momentum with the extra mass starting at the top. The question is whether the hoop loses contact with the ground as it rolls around. The original analysis from decades ago focused on a mass-less hoop and suggested that the hoop has to hop when the mass has rolled around just 90 degrees.
I wanted to see what it would take to model this. What I thought would be useful would be to use a Lagrange Multiplier technique so that I could easily check the normal force on the ground, watching to see if it ever goes to zero, as that’s when it would hop. If the normal force goes negative, that means the ground is pulling the hoop down. And the ground doesn’t usually do that.
Math/physics setup
When you use Lagrange Multipliers, you tend to have more degrees of freedom in the problem. In fact you have the normal number of degrees of freedom (one in this case: $latex \theta$) plus however many constraints you model with Lagrange Multipliers. I want to model both the normal force and the rolling-without-slipping force. For the normal force, my constraint is:
$latex y - r = 0$
where r is the radius of the hoop and y is the y-component of the center of the hoop.
For the rolling-without-slipping constraint I use:
$latex x - r\theta = 0$
where x is the x-component of the center of the hoop and $latex \theta$ is the rotation angle, with $latex \theta = 0$ when the mass is at the top.
The kinetic energy is given by:
$latex T = \tfrac{1}{2}M\left(\dot{x}^2+\dot{y}^2\right) + \tfrac{1}{2}Mr^2\dot{\theta}^2 + \tfrac{1}{2}m\left(\dot{x}_m^2+\dot{y}_m^2\right)$
where we treat the center of mass of the hoop as one free particle (plus the hoop's rotation) and the attached mass as another. Note, however, that we stipulate that the location of the mass is tied to the location of the center:
$latex x_m = x + r\sin\theta, \qquad y_m = y + r\cos\theta$
Then we just do the usual Euler Lagrange tricks and we’re off and running.
The "WhenEvent" trick stops the integration right at the point where the hoop would hop (see the sketch below). I don't always use that, as you'll see in my animations below.
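In case it's handy, that trick is roughly this one-liner added to the NDSolve equation list (a sketch; lm1 is whatever you've named the normal-force multiplier):
WhenEvent[lm1[t]==0, "StopIntegration"] (* stop right when the normal force hits zero *)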
Results
Here’s a plot of the normal force for the case of a truly massless hoop:
You see that it starts at 9.8 because the mass is 1 kg and the ground is really just holding the hoop+mass up (remember that the hoop is massless). It's not actually exactly 9.8, though, since right at the beginning there's a non-zero angular speed, $latex \dot{\theta}$, and so some of the gravity force needs to be dedicated to centripetal acceleration.
Here’s an animation that stops right at the point of hopping:
Now let's give the hoop just a little bit of mass: 0.01 kg (100x less than the attached mass). Here's a normal force plot showing that I allowed the simulation to go past the hopping point. Note that I don't let it hop. Rather, I allow the normal force to go negative.
Here’s an animation of that setup:
Finally let’s consider a situation where the mass of the hoop equals the mass of the attached point. Here’s a normal force plot:
Note that it never goes to zero. Here’s an animation:
Cool, huh?
Your thoughts?
I’d love to hear what you think. Here are some starters for you:
This is cool! What I liked the most was . . .
This is dumb! What you totally got wrong was . . .
Wait, I thought you saw this on a math(s) youtube channel. This seems like physics.
I don’t think that mass is undergoing circular motion, so saying that some of the gravity is providing a centripetal force doesn’t seem right.
That youtube video talks about looking at when the hoop starts to slip. Couldn’t you do that by looking at your rolling Lagrange Multiplier and seeing if it gets larger than the expected friction?
Pick a point on the earth and start digging. It doesn’t have to be straight down. Keep it straight (careful! it’s harder than you might think) and keep going until you come back to the surface. Ok, now drop a ball into your hole. It’ll come back to you eventually because while gravity will pull it down the hole at the beginning, eventually it’ll start getting further from the center of the earth and start slowing down. Assuming the earth’s density is spherically symmetric, it’ll come to a stop right at the edge of the other end, turn around, and come back.
If, in addition to being spherically symmetric, the earth's density were also uniform, the travel time of the ball would be independent of the direction you dug the hole. In other words, connect any two points on the surface of this idealized earth and the time it takes the ball to tunnel from one point to the other and back will be the same! Pretty cool, right?
I wanted to see if I could 1) make cool animations of that, and 2) see if my intuition would be right for non-uniform spherically symmetric densities. Before you scroll any further, make a prediction: If the planet is more dense towards the center, will a tunnel that gets close to the center take more or less time than a shallow tunnel?
The model
I modeled this using the Wolfram Engine (for free!), which is basically Mathematica but using a Jupyter interface. I used a Lagrangian approach, which just means I needed to identify the degrees of freedom (the distance down the tunnel) and figure out how to express both the kinetic and potential energy for a particle using that variable. If p1 and p2 are the two points on the surface of the earth, I parametrized my problem with the variable s such that if s is zero we're at p1 and if s is one we're at p2. When s is between zero and one it's in the tunnel:
$latex \vec{r}(s) = \vec{p}_1 + s\left(\vec{p}_2 - \vec{p}_1\right)$
Then the kinetic energy is:
$latex T = \tfrac{1}{2}\,m\left|\frac{d\vec{r}}{ds}\right|^2\dot{s}^2$
which will be a function of s.
The potential energy is a little trickier, but basically it'll always be:
$latex U(r) = \int \frac{G\,m\,M(r)}{r^2}\,dr$
(up to an additive constant), where the total mass, M, that is closer to the center than the point of interest is given by:
$latex M(r) = \int_0^{r} 4\pi r'^2\,\rho(r')\,dr'$
assuming r is less than the radius of the planet.
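In the Wolfram Engine that boils down to something like this sketch (my reconstruction, in the G=1, m=1, radius=1 units used below):
rho[rp_]:=1/rp; (* e.g. the 1/r density discussed below *)
Mass[r_]:=Integrate[4 Pi rp^2 rho[rp],{rp,0,r}]; (* mass closer to the center than r *)
U[r_]:=Integrate[Mass[rp]/rp^2,{rp,1,r}]; (* constant chosen so U=0 at the surface *)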
Simulations
Here’s a plot of s as a function of time for a uniform density planet in a universe where G=1, m=1, and the radius of the planet = 1:
Here’s an animation of 10 different random tunnels all dug from the north pole. Note how they stay in sync:
Ok, time to check your intuition. Consider a planet whose density varies as 1/r. That means it’s technically infinitely dense right at the center and it progressively gets less dense as you get further out. Which tunnels will be faster? The ones that are shallow, and so only ever experience the lower densities, which means they always feel nearly the full gravitational pull of the planet? Or the ones that dive deep and explore portions of the planet with high densities?
Here’s what I was thinking: The shallow ones never penetrate where the bulk of the mass is. That means their force stays high. Therefore I think they should be faster.
Here's a plot of 10 random tunnels; note how they don't stay in sync:
Here’s the animation:
For me it’s easiest to see the effect if you stare at the upper left purple one and keep your peripheral attention on the central green one. You can tell that the shorter path takes longer! I was wrong!
Ok, what about the opposite: a density that grows linearly with distance from the center. That means there’s no density at the center and the maximum density is at the outer edge.
First a plot of the trajectories:
and here’s the animation:
See how the lower right red one really falls behind? Cool, huh?
Your thoughts?
I’d love to hear what you think about all this. Here are some starters for you:
This is really cool! What about a planet that . . .?
This is really dumb! What you forgot about was . . .?
My intuition was right! Here’s what I was thinking . . .
My intuition was wrong! Therefore all of physics is wrong.
I just assume the potential energy is always GMm/r and you just have to figure out the M. (that’s what I naively thought too as I started but realized that you have to do the integrations to get it right)
Did you set all the planets to have the same total mass? (yes)
Why do you insist on using an expensive piece of software that normal people can’t afford . . . wait, what did you do two posts ago?
What happens if the density is not spherically symmetric? What are you, lazy?
You clearly didn’t model the effects of a spinning planet, even though you forced us to click through to a whole other blog post about digging holes. Be consistent!
Why didn’t you dig identical tunnels for all three planets? It’s super tough for me to directly compare them. My guess is you got all excited about using RandomPoint[Sphere[], n] in the Wolfram Engine.
I modeled the rungs of the ladder as rigid sticks with a mass, a length, and moment of inertia
I modeled the ground as a one-way spring (it only pushes up if either end of the rung goes below the ground, it doesn’t pull back down when either end is above the ground)
I modeled the strings holding the rungs together as one-way springs as well. If they stretch, they pull. If they're compressed, they do nothing.
I modeled frictional energy loss by treating the ground as a viscous fluid. Any time either end is under the ground, there’s a frictional force proportional to the speed of that end that is in the opposite direction of the motion. Here’s a youtube vid I made about that approach.
I did all of that in a Lagrangian formalism. The one-way springs are piecewise potential energy functions, for example (see the sketch after this list).
I had to play around with the strength of the ground one-way spring to make sure the bounces seemed realistic and also with the amount of viscous friction so that the rungs came to a stop relatively quickly.
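For example, the ground one-way spring is a piecewise potential like this (a sketch; kGround is a made-up name for that strength I tuned):
vGround[yEnd_]:=Piecewise[{{1/2 kGround yEnd^2, yEnd<0},{0,True}}]; (* pushes up only when the rung's end is below ground *)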
What it looks like
Here’s an animated gif showing 4 very similar ladders. From left to right:
Moment of inertia of each rung is much smaller than a uniform rung's (see the discussion below)
Moment of inertia of each rung is $latex \frac{1}{12}mL^2$ (a uniform rung)
Moment of inertia of each rung is $latex \frac{1}{4}mL^2$ (all the mass at the ends)
Free fall (doesn’t hit the ground so the moment of inertia doesn’t matter)
Here’s a plot (along with a zoomed-in inset) of the height of the middle of the top rung for each of those ladders:
You can see that the smaller the moment of inertia of the rung, the faster it falls.
Using Rhett's explanation (which I really think is right): when a tilted (that's important!) rung hits the ground, it slaps down the other end, which stretches the string tied to that end, which then pulls down, at least slightly, on the rung above it. This continues in a domino-like fashion. Ultimately all that "pulling down" is what accelerates the top rung faster than free fall.
The differences among the moments of inertia are what I find interesting. The $latex \frac{1}{12}mL^2$ one is a rung that has a uniform cross-section (i.e., the mass is uniformly spread along the length). The $latex \frac{1}{4}mL^2$ one is what you get if all the mass of the rung is at the ends. Really this is the highest possible moment of inertia for a rung. The lowest possible is zero, if all the mass is in a point mass at the center, but for fun I used a much smaller moment of inertia to see a more obvious effect. With a higher moment of inertia, the "slaps down" effect I talk about above is lessened, because the rung is harder to rotate. That means the "pulls down" effect is lessened as well, and so you see that there's less additional acceleration for the top rung. In the plot above you can see that the maximum moment of inertia is indistinguishable from free-fall.
Your thoughts
I'd love to hear what you think. Here are some starters for you:
I love this! I’m going to build a minimal moment of inertia ladder right now.
This is dumb. What you totally missed in your model was . . .
I don’t know why you insist on using such an expensive tool like Mathematica . . . wait, what did you say in your last post?
I don’t like your friction model. Instead I think you should . . .
I think your rungs are bouncing too high, you should raise your friction amount. (I did that but found that it reduces the “slap down” too as it doesn’t just cause rotation, it also moves the end a little bit. Something to think about, I guess)
I think all ladder rungs are uniform. This is stupid.
I think Veritasium doctored that first video. There’s no way there’s two ladders with every rung angle being exactly the same.
Hang on a sec, it looks like you’ve pinned the center of the rungs on a vertical line. That’s crap and you know it.
Are you saying that if all the rungs are perfectly horizontal nothing happens? How boring.
In this post I'm going to try to capture the steps I took today to get a Jupyter Notebook to run Mathematica commands. I did it on a Windows laptop, so if you're on a Mac or Linux you'll have to make appropriate adjustments.
Why free?
Mathematica is not free. A single license can be over $1000. Student versions are more like $200, but there's no question that it's pricey, especially when compared with python-based computing. Of course, it definitely has value! The amount of development that's been poured into it, including just in the last few years, is amazing. Want easy-to-use neural networks? Check. Want lightning-fast prototyping? Check. Want access to tons of curated data sets that all play well together? Check.
But, ugh, what a cost. And it’s hard to convince students to embrace it if they know they’ll lose it as soon as they graduate. I’ve spent lots of time looking for a replacement for the types of things I tend to work on, and it’s been a mixed bag.
But ever since I noticed that Wolfram (the company that makes Mathematica) put a free mathematica kernel (what they now call a Wolfram Engine) on the Raspberry Pis, I started keeping an eye out for ways to use the Wolfram Engine for free.
If you go here you’ll see that you can download the engine for developers and use it for free as long as you’re not trying to make money with it. When you download it and install it you can then run wolframscript in a cmd window and you get old-school command-line Mathematica. Graphics don’t work but general calculations do.
To get a more useful interface, you can pair it with a Jupyter Notebook. These are free interfaces that people mostly use to do Python programming. But you can swap out the kernel from Python to Wolfram Engine with the steps that I outline below.
So the kernel (Wolfram Engine) is free (assuming you’re not trying to make money) and Jupyter Notebooks are free. Off we go!
Then, on the WolframLanguageForJupyter releases page on GitHub, go down to "Assets" and download the file called WolframLanguageForJupyter-x.y.z.paclet (for me x=0, y=9, z=3)
Make sure to save it to the same folder where python and pip are now installed
Run a few commands in command-line mathematica
In that directory, run the wolfram engine with this command: wolframscript
In Mathematica run these commands one at a time:
PacletInstall["WolframLanguageForJupyter-x.y.z.paclet"] (make sure to use the right x, y, and z)
Needs["WolframLanguageForJupyter`"] (note the extra back tick toward the end there)
ConfigureJupyter["Add"]
And you’re done! Now you just need to run jupyter by typing jupyter notebook on the command line and it should launch a new tab in your browser (it takes about a minute on my machine, it seems). Then you can select a new file using the Mathematica kernel:
Finally, here's a youtube vid of me using it (showing how to model a chain of 10 pendula)
Your thoughts?
I’d love to hear your thoughts. Here are some starters for you:
This is cool! My favorite part is . . .
This is dumb. The dumbest part is . . .
Why don’t you italicize Mathematica?
Can it do Manipulate commands? (sort of)
Can it access the curated data? (haven’t tried yet)