
BLOG

New posts weekly!

A Secret Map Addendum

9/11/2025


 
[Image: preview map of the model grid, with colored squares marking the cells relevant to my study sites]
While last week's map is still a secret, this week I worked on another map for my own research, although this map was useful for only a few days. The oyster monitoring data I have from the Mississippi Department of Marine Resources includes temperature, salinity, and dissolved oxygen, but these are point data: they represent conditions at one point in time, the moment when employees counted and recorded oysters. However, we know that organisms respond to cues in their environment at different times, perhaps around spawning, reproduction, or lunar cycles, and these response times are often asynchronous with our sampling and monitoring efforts. Therefore, while the environmental data collected during the oyster monitoring is useful, data leading up to these sampling events may (key word) provide more explanatory power, as sketched below.
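In case you're curious what that kind of pairing looks like in code, here is a minimal sketch in Python, not my actual analysis: it assumes daily modeled conditions sitting in a pandas table, made-up column names, and an arbitrary 30-day window before each sampling date.

    import pandas as pd

    # Daily modeled environmental data (placeholder values), indexed by date
    env = pd.DataFrame(
        {"temperature_c": [28.1, 27.9, 27.5], "salinity_psu": [14.2, 15.0, 15.8]},
        index=pd.to_datetime(["2023-08-01", "2023-08-02", "2023-08-03"]),
    )

    # Dates when oysters were counted and recorded (placeholder)
    sampling_dates = pd.to_datetime(["2023-08-03"])

    # Average conditions over the window leading up to each sampling date
    window = pd.Timedelta(days=30)
    rows = []
    for date in sampling_dates:
        prior = env.loc[(env.index > date - window) & (env.index <= date)]
        summary = prior.mean().add_suffix("_30d_mean")
        summary["sampling_date"] = date
        rows.append(summary)

    lagged_env = pd.DataFrame(rows).set_index("sampling_date")
    print(lagged_env)

Each sampling event then carries a summary of the conditions that preceded it, rather than only the snapshot taken on the day the oysters were counted.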

Many research teams within the United States and across the globe build complex mathematical and hydrodynamic models that describe features of our oceans, including the movement of sediment, wave action, currents, temperature, and salinity. For many of these models, the researchers create accessible data files, stored and available for public use, although these files are quite massive (a few gigabytes for half a day of data). Since my work spans more than a decade, the storage required for all my data would be enormous, which is why I turned to the maps. The model I found for my work provides a preview image for each file that looks like the one depicted here. The image is interactive, meaning I can click on any modeled area of the image and extract the data for that feature. As long as I know which grid square belongs to which location in my own research, I can click and extract the temperature and salinity information from the otherwise massive data file.
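For anyone wondering what that click-and-extract step looks like when done in code instead, here is a minimal sketch in Python using xarray. It assumes the model output is distributed as NetCDF files on a regular latitude/longitude grid; the file name, variable names, and coordinates are placeholders, not the real product.

    import xarray as xr

    # Open one (hypothetical) day of model output; xarray reads lazily,
    # so nothing large is pulled into memory yet
    ds = xr.open_dataset("model_output_2023-08-01.nc")

    # Temperature and salinity at the grid cell nearest one of my reef sites
    point = ds[["temp", "salt"]].sel(lon=-89.3, lat=30.3, method="nearest")

    # The time series at that single cell fits easily in a small table
    df = point.to_dataframe()
    print(df.head())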

Why would I click through the files and points manually rather than program my computer to extract the data? That's a great question, and I'm so glad you and I had the same thought. I started this data extraction process by writing lines of computer code to automate it. Today I ran those lines of code, and it took 2.5 hours to extract data from 28 files, or 28 days' worth of data. Considering that I need to complete this process for perhaps 1,000 days, I wanted to increase my processing speed by doing the manual clicking while the computer completed the automated processing. Hence the colored squares I added to this map, so that I could recognize the grid cells I needed to click.

Today, though, I happily retired the manual system, because I thought more carefully about how my code extracted and processed the data files. What makes the files so large is the number of data points and the number of values per data point. Instead of extracting all the data within a single file, I rewrote the code to extract just the variables I wanted at only the locations relevant to my research. This modification turned a 2.5-hour extraction and processing step into a 3-minute step for 28 days of data, which is immensely helpful and means no more manual clicks. So thank you, map, for your service, and while it was fun to make an MS Paint creation, I'll stick to the automated processing.
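To give a sense of why that change matters, here is a rough illustration of the idea (again with placeholder file names, variable names, and coordinates, and assuming the same xarray-readable files as above): open each daily file lazily, keep only the two variables and the handful of reef cells of interest, and only then read those small slices into memory.

    import glob
    import xarray as xr

    # Hypothetical study-site coordinates
    reef_lons = [-89.3, -88.9]
    reef_lats = [30.3, 30.2]

    records = []
    for path in sorted(glob.glob("model_output_*.nc")):
        with xr.open_dataset(path) as ds:  # lazy open; nothing read yet
            subset = ds[["temp", "salt"]].sel(  # just two variables...
                lon=xr.DataArray(reef_lons, dims="site"),
                lat=xr.DataArray(reef_lats, dims="site"),
                method="nearest",  # ...at just the reef cells
            )
            records.append(subset.load())  # read only the tiny subset

    # Stitch the daily extractions into one long time series
    extracted = xr.concat(records, dim="time")

Selecting first and loading second means the gigabytes of values that were never needed stay on disk, which is the kind of change that can turn hours of processing into minutes.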

I am hoping to write an extended feature next week on the BlueBoat, now that it has arrived at the lab, but that post may be delayed by our plans to outfit it with additional gear. There will still be a blog next week, though, don't worry!


