Ten short thoughts about AGU25

Minor dispatches written about 400 feet from Bourbon Street
Published

December 19, 2025

This was my first time at AGU in New Orleans – and my first time in New Orleans in general. My review is that I really enjoy New Orleans, and I really enjoy AGU, and I really hate the New Orleans convention center. Mostly, I hate that the convention center is about a mile long from end to end, and talks were in rooms on either side of a single hallway spanning the entire mile – meaning I often had to pass all 36,000 attendees in that single mile-long hall as I went from Global Change (at ~0.2 miles deep) to Informatics sessions (at ~1.0).¹


I found a third wave coffee place (Fourth Wall) via Reddit and went there each morning. The line at 7:45am got notably longer every single day – including Friday, when I’d assume a decent chunk of folks had already gone home. I wonder if the human geographers have insights into how conferences, with their odd, massive groups of tourists with correlated commutes and schedules, interact with their host cities over the span of the event. Feels like it’d be a great AGU talk.


I’m always surprised at how much better the “science and society” talks are than the average session. Then I’m surprised that I’m surprised professional science communicators give good talks. We should fund more science communication. I’m pretty sure I don’t just think this because I work for our “Web Communications Branch”.


Why were only random doors unlocked? It seemed like doors marked “Enter Here” had a 50% chance of being open, while the rest of the doors had a 10% chance.


We all agree ten-minute talks are bad, right? Maybe we don’t need ten talks per session?


I’ve always been a skeptic when it comes to automated data harmonization/interoperability. The idea is nice – augment your data with some magic set of metadata fields and boom, it can automatically be rbind()’d with any other data that does the same – but real data is usually too messy for this to be practical. My favorite example is that, in 2021, a court in New York State idiotically changed the definition of a “tree” for some purposes from anything above 3 inches in diameter to anything above 1 inch. So if you’re measuring trees, now you need to add whether you’re using the standard from before this case or after it – or perhaps the federal standard, which starts at 5 inches. This is before we even get to the fact that “tree” isn’t a well-defined category; rhododendrons usually aren’t included in the category, for instance, even though they can get to 33 feet!

All that to say, resolving methodology differences down to a level where data can be automatically harmonized (or rejected) seems to me a tall task for metadata alone, and I don’t see the AI era solving this problem any time soon.
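To make the problem concrete, here’s a toy sketch (in Python, with made-up plot IDs and diameters) of roughly the best that threshold metadata can buy you: you can truncate everything down to the strictest shared standard, but the small stems the strict survey never measured are gone for good – no metadata field can conjure them back.

```python
# Hypothetical tree inventories: each carries a metadata field recording
# the minimum-diameter standard its crew used.
survey_a = {"min_diameter_in": 1.0,  # post-2021 NY standard
            "records": [("p1", 1.4), ("p1", 2.8), ("p1", 6.1)]}
survey_b = {"min_diameter_in": 5.0,  # federal standard
            "records": [("p2", 5.5), ("p2", 7.2)]}

def harmonize(*surveys):
    """Truncate every survey to the strictest shared threshold.

    Dropping small stems from the permissive survey is easy; recovering
    the stems the strict survey simply never measured is impossible.
    """
    threshold = max(s["min_diameter_in"] for s in surveys)
    return [(plot, d)
            for s in surveys
            for plot, d in s["records"]
            if d >= threshold]

combined = harmonize(survey_a, survey_b)
# Only stems >= 5.0 in survive; survey_a's 1.4 in and 2.8 in stems are lost.
```

And that’s the easy case, where both crews at least agreed on what counts as a “tree” in the first place.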


The coffee stations this year all used compostable coffee cups, despite the conference center not having a single compost waste bin. That has to be higher impact than the alternative, right? Don’t get me wrong – I flew to Louisiana; this isn’t a meaningful source of my emissions for the week.² It’s just a little odd seeing this sort of greenwashing at a conference like this.


For a conference of geospatial scientists, not a lot of spatial awareness in the crowd.


I think we’d all be better off if we were more honest about the role of money in shaping studies. I saw several talks where the sampling region seemed to be defined as roughly “where I could get on half a tank of gas”, and depending on what you’re doing and what conclusions you’re drawing, that’s basically fine! We don’t need to pretend you chose the state your university is in because it’s a unique understudied ecotype that the rest of the researchers at your university have somehow overlooked.


It seems like a problem that we basically can’t run meaningful and low-stakes experiments with data sharing and discoverability anymore.³ I was in a few sessions where NASA ESDIS was discussing their project of moving 190 petabytes of imagery into S3, and the solutions they’ve built out to deal with it. And I think the tech looks great – at least, I love STAC and I hear nice enough things about CMR, as someone who’s never touched it. But it seems like a shame that it’s basically impossible for non-NASA people to try building other access patterns, to see if there are places for improvement that we’ve missed. I mean, where would you even start? You don’t have 190 petabytes of data to test with – and you certainly don’t have the money to pay for it, if you did.
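For what it’s worth, a big part of STAC’s appeal is how little there is to it – an Item is just GeoJSON with a handful of required fields describing when, where, and which files. A hand-written sketch (the ID, coordinates, datetime, and asset URL below are all invented for illustration):

```python
# A minimal STAC Item, written out by hand to show the shape of the
# metadata that makes a pile of imagery searchable. Everything specific
# here (id, bbox, URL) is made up.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-scene-001",
    "geometry": {"type": "Polygon", "coordinates": [[
        [-90.1, 29.9], [-90.0, 29.9], [-90.0, 30.0],
        [-90.1, 30.0], [-90.1, 29.9]]]},
    "bbox": [-90.1, 29.9, -90.0, 30.0],
    "properties": {"datetime": "2025-12-19T00:00:00Z"},
    "links": [],
    "assets": {"visual": {"href": "https://example.com/scene.tif",
                          "type": "image/tiff"}},
}

# Top-level fields every STAC Item carries under the 1.0.0 spec:
required = {"type", "stac_version", "id", "geometry",
            "properties", "links", "assets"}
assert required <= item.keys()
```

Which is exactly why it’d be fun to experiment with other ways of indexing and serving this stuff – the format itself puts up no barrier at all. The barrier is the 190 petabytes.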

Of course, it’s not like folks were able to build open-source versions of the Apollo missions, either. Maybe it’s not a bad thing to have smart, dedicated people collaborating on a single solution to a hard problem. I like the tech they’ve got so far!

Footnotes

  1. Currently, I’m annoyed enough that I’m tempted to skip the conference the next time we’re in NOLA, but still grab a hotel nearby and do my usual handshakes and meetings. Realistically, I know myself well enough to know I’ll have completely forgotten by January.

  2. Exercise left for the reader: how many coffee cups would you need to go through before it was meaningful? And could you do it before the staff stopped you?

  3. Emphasis on meaningful – I specifically mean that independent researchers basically can’t contribute to modern data repository infrastructure, at least not for repositories at operational scales.