Calculating the Nutritional Content of Popular Online Recipes

I just started business school at Duke University two months ago, and it’s been amazing! I feel like I’ve already made lifelong friends, and there are lots of great events to kick things off with the business school as well as fun things to do in the Raleigh-Durham area.

Our program, the Master of Quantitative Management (MQM), recently hosted its Summer Data Competition. The basic idea was to produce an interesting data set (ostensibly not including any insights taken from it) using any means available. We’d be judged on criteria like originality, cleverness, and usability/potential for insights – of course, demonstrating that potential means performing at least some analysis yourself…

An entry I made with my friend Mayank ended up making it into the finals. I thought the idea was really cool. Here’s what we did:

Premise

“Pick two.”

Like many students, I’ve been trying to maintain a good diet on a low budget, and I’ve come to notice a basic, inescapable dilemma for all eaters. Basically, you can eat cheaply, eat healthily, or eat out. Pick two. Students/early career folks like me generally end up sacrificing the convenience and time savings of having someone else make our meals in favor of cost savings.

If we’re lucky, we also get to maintain our overall health. It’s obviously not guaranteed you even get two of them. The broke college student chowing down on instant ramen every night is a cliché for a reason.

There are plenty of reasons why it can be difficult to cook healthy meals for yourself all the time, especially when you’re low on ingredients or money, or have to follow a specific diet or nutritional guidelines. But sometimes it’s simply not obvious from the ingredient list whether a recipe is healthy. You might notice that the vast majority of recipes don’t include nutrition facts, and the ones that do have narrow selections and mostly include health-first, taste-second recipes. That’s no good.

A lack of easily accessible basic nutritional information for common recipes should never be a reason to sacrifice your health. We thought that, with some simple data transformations, it would be possible to scrape nutritional information for recipes online.

Introduction

Our dataset focuses on the nutritional profiles of publicly available food and drink recipes on various popular culinary websites; we chose to focus on US-based recipe catalogues to avoid language confusion and to ensure a stronger cultural grasp of the recipes we analyzed.

The dataset is arranged into 2976 rows and 19 columns, with each row corresponding to a single recipe entry. Five columns are reserved for recipe metadata (e.g. title, average rating, URL), and the rest are nutrition-based. We used the USDA Food Composition Databases API to access nutritional information for each ingredient, then applied natural language processing techniques to parse the units of measurement in each recipe – think pounds, cups, teaspoons, or grams – and converted each to a mass standard that the API could retrieve.

Data Acquisition

While the data acquisition process was relatively straightforward in principle, our team had to overcome significant technical hurdles to obtain the recipe data and convert it to useful nutritional information.

First, we needed to design a web crawler that found directories in target websites that matched a particular signature pointing only to recipe pages. After tinkering for a while, we found that most of the sites we tested had a “recipe” tag in their url path that made this distinction obvious. We used dirhunt, a command-line open source site analyzer that attempts to compile all directories while minimizing requests to the server (Nekmo).

Here’s what “dirhunt” looks like in action. There are a lot of blog posts and stories we don’t want, but we can filter for URLs whose second-to-last path segment includes “recipe” to get actual recipes we can use!

Next, we needed to scrape the data from each recipe URL. We ended up using recipe-scrapers, an open-source Python package for gathering basic data from popular recipe site formats (hhursev). This package gave us easy access to the recipe titles, average ratings, and ingredient lists, among other important data.
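
To give a sense of the scraping step, here’s roughly what our usage looked like (method names follow the recipe-scrapers README; the URL is just an illustrative example):

from recipe_scrapers import scrape_me

scraper = scrape_me("https://www.allrecipes.com/recipe/20144/banana-banana-bread/")  # any supported recipe URL

title = scraper.title()              # recipe name
rating = scraper.ratings()           # average user rating, where the site exposes one
ingredients = scraper.ingredients()  # list of raw ingredient strings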

Critically, the ingredients were formatted as a Python list of strings in their raw form. For instance, one item could look like “1 1/2 cups unbleached white flour”. We first needed to convert the “1 1/2” into a proper floating point number, as well as change all measurements into the grams that the USDA nutritional database expects. Python offers a built-in “fractions” module for converting fraction strings into numbers, and there’s a “word2number” package for converting number words (e.g. “three” to 3).
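
Here’s a minimal sketch of that quantity-parsing idea (not our exact code – the real version had to handle messier strings):

from fractions import Fraction
from word2number import w2n

def parse_quantity(quantity_string):
    """Convert a leading quantity like '1 1/2' or 'three' into a float."""
    total = 0.0
    for token in quantity_string.split():
        try:
            total += float(Fraction(token))   # handles "1", "1/2", "0.75"
        except ValueError:
            total += w2n.word_to_num(token)   # handles "three", "twelve", etc.
    return total

parse_quantity("1 1/2")  # -> 1.5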

We wrote a lookup table for converting all masses into grams, as well as all volumes into grams based on the ingredient type. For volume-based ingredients not found in our lookup table, the script defaulted to using the conversion factor for water (approx. 240 grams per cup), which proved to be a close estimate for a wide range of food types – most food is mostly water!
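
The conversion step boiled down to a dictionary lookup plus the water fallback – something like this (the numbers here are illustrative; our real table covered far more units and ingredients):

GRAMS_PER_UNIT = {"gram": 1.0, "ounce": 28.35, "pound": 453.6}      # mass units
CUPS_PER_UNIT = {"cup": 1.0, "tablespoon": 1/16, "teaspoon": 1/48}  # volume units, normalized to cups
GRAMS_PER_CUP = {"flour": 120.0, "sugar": 200.0, "butter": 227.0}   # ingredient-specific densities

def to_grams(quantity, unit, ingredient):
    if unit in GRAMS_PER_UNIT:                       # masses convert directly
        return quantity * GRAMS_PER_UNIT[unit]
    cups = quantity * CUPS_PER_UNIT[unit]            # volumes go through cups...
    density = GRAMS_PER_CUP.get(ingredient, 240.0)   # ...then a density, defaulting to water (~240 g/cup)
    return cups * density

to_grams(1.5, "cup", "flour")  # -> 180.0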

Finally, we used the USDA Food Composition Databases API to search for these ingredients and obtain nutritional data. The API allows for searching with natural language, though some foods were still impossible to find through the API; we decided to discard any recipes that had untraceable ingredients given the time restrictions of the competition.
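
The USDA Food Composition Databases API we used has since been superseded by USDA FoodData Central, but the lookup step would look roughly the same today. Here’s a sketch against the current search endpoint (endpoint and field names are from the public FDC docs, so double-check them; DEMO_KEY works for light testing):

import requests

def usda_nutrients(ingredient, api_key="DEMO_KEY"):
    """Search for an ingredient and return its nutrients (per 100 g) from the top hit."""
    resp = requests.get(
        "https://api.nal.usda.gov/fdc/v1/foods/search",
        params={"query": ingredient, "pageSize": 1, "api_key": api_key},
    )
    resp.raise_for_status()
    foods = resp.json().get("foods", [])
    if not foods:
        return None  # untraceable ingredient -> we dropped the whole recipe
    return {n["nutrientName"]: n["value"] for n in foods[0]["foodNutrients"]}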

The request limit on this API also meant that we were realistically limited to a few hundred recipes per site for our final dataset; we decided to spread relatively evenly over the sites to include a wide range of recipe influences.

Dataset Description

Recipes-Meta is a database of recipes scraped from popular websites, with detailed nutrition data computed from USDA sources for each ingredient, intended to help consumers make more informed eating choices and to offer insights into the relationships between ingredients, nutrients, and site visitor opinions. Each row is a recipe entry that can be uniquely referenced by its URL.

Columns:

Title: Name of the recipe
Rating: Average rating of the recipe as given by users (some sites do not have this feature)
URL: Web address of the recipe (unique/primary key)
Servings: Number of people the recipe serves, i.e. serving size (nutrition data is divided by this)
Ingredients: List of ingredients in the recipe
Energy (kcal): Total calories of the recipe per serving in kcal
Carbohydrate, by difference (g): Total carbohydrates of the recipe per serving in g
Protein (g): Total protein of the recipe per serving in g
Calcium, Ca (mg): Total calcium of the recipe per serving in mg
Cholesterol (mg): Total cholesterol of the recipe per serving in mg
Fatty acids, total saturated (g): Total saturated fat of the recipe per serving in g
Fatty acids, total trans (g): Total trans fat of the recipe per serving in g
Fiber, total dietary (g): Total dietary fiber of the recipe per serving in g
Iron, Fe (mg): Total iron of the ingredients used in the recipe per serving in mg
Sodium, Na (mg): Total sodium of the ingredients used in the recipe per serving in mg
Sugars, total (g): Total sugar content of the recipe per serving in g
Total lipid (fat) (g): Total lipids/fats of the recipe per serving in g
Vitamin A, IU (IU): Total vitamin A of the recipe per serving in IU
Vitamin C, total ascorbic acid (mg): Total vitamin C of the recipe per serving in mg

  • Red indicates nutrition-related data
  • Blue indicates recipe-related data

Potential Insights

There exists a critical gap between growing consumer demand for health-conscious eating options and readily available nutrition data for recipes online. Most consumers looking to eat balanced, tasty, and affordable meals while meeting their health goals must eventually learn to cook their own meals. However, convenient data to make informed choices for recipe-based meal planning does not exist for most popular recipe sources online.

We also noticed that the few websites that do show nutrition data for their recipes are geared towards consumers that already follow a diet plan or practice healthy eating as a part of their lifestyle. Further, these websites are often limited in scope, including only a small set of specific recipe choices or community-generated recipes from a small user base.

Considering that access to healthy eating options and food education in America is growing increasingly unequal, our approach to spreading awareness about nutrition aims to target the ‘average eater’ or general public (Hiza et al.). This requires us to access nutrition data for a wide range of popular websites, rather than the few that readily offer this information. While our algorithm is not perfect, it can serve as a starting point and proof-of-concept for similar endeavours in the future.

We suggest the following potential insights, though there are many more viable routes for analysis:

  1. Determine if “healthiness”/nutrition stats somehow relate to the average rating of recipes. 
  2. Generate a custom list of recipes that fit a specific range of macronutrients (protein/carbs/fats).
  3. Define overall nutrition metrics across all recipes, for example to find meals with an especially high protein-to-calorie ratio.
  4. Check if recipes that include certain ingredients tend to be more or less healthy.
  5. Analyze which websites tend to post healthier and/or more well balanced recipes.
  6. Produce a nutritional ranking of all recipes according to their adherence to USDA guidelines (or some other metric).
  7. Flag recipes with especially high sugar, fat, trans fat, or cholesterol content, and make this flag obvious whenever the recipe is retrieved.
  8. Write an algorithm that generates customized eating plans that meet daily nutritional requirements.

For a more business oriented goal, this data could be integrated with personalised consumer-level data to design customized eating plans that follow individual nutritional requirements based on height, age, weight, BMI, or other factors. There are surely many interesting possibilities we did not discuss in this report. Happy hacking.

Sources

Hiza, Hazel A.B. et al. “Diet Quality of Americans Differs by Age, Sex, Race/Ethnicity, Income, and Education Level”. Journal of the Academy of Nutrition and Dietetics, Volume 113, Issue 2, 297 – 306.

hhursev. “recipe-scrapers – Python package for scraping recipes data”. Github Repository, 2019, https://github.com/hhursev/recipe-scrapers.

Nekmo. “Dirhunt – Find web directories without bruteforce”. Github Repository, 2019, https://github.com/Nekmo/dirhunt.

Recipe websites referenced:

https://cooking.nytimes.com/

https://allrecipes.com/

https://epicurious.com/

https://seriouseats.com/

Vocabulary Games


Hi there! Long time no see.

Let’s play a game. 

I’m going to give you all the vowels of a word, but none of the consonants. Instead of those, I’m putting empty spaces. The empty spaces are precise—if there’s only one space, there’s only one missing consonant. If two, then two. Then you’re going to guess which word I started with.

Here’s an example:

_ _ e _ e

There. What do you think it is?

Oops. I’ve already given it away. It’s the first word I used after showing you the puzzle. That’s the word I intended to be the solution, at least.

But you probably realized that a lot of other words could’ve worked too. You could’ve answered “where,” “scene,” “theme,” “these,” “crepe,” “abele,” or “prese.” All of those fit the vowel scheme I wrote down (some more possible answers here).

As a side note, “niece” or “sieve” would not have worked, since I would’ve had to show you the “i.” The link I just gave you also includes some of these false positives.

Let’s try a more difficult and interesting vowel scheme, which only has one common solution (a few, technically, but they all share the same root).

  1. _ eio _ i _ e _

Hope you like chemistry (all the answers are at the bottom, if you want to check).

There are some interesting properties to this game.

First, the number of possible solutions to a given vowel scheme is pretty unpredictable. It follows the obvious pattern of more common vowels usually giving more possible combinations, but their placement matters too.

As a general rule, the simpler and less specific the scheme, the more words can fit it, up to a point. Vowel schemes that include common combos like

o _ _ e (-orne, -ople, -ophe, -orse)

a _ e (-ane, -ace, -ale)

_ io _ (-tion, -cion, -sion)

also tend to have higher word counts.
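
If you want to check counts like these yourself, translating a scheme into a regular expression and running it over a word list is a quick way to do it (this assumes a words.txt like the one I use in the next section, and treats y as a consonant – debatable):

import re

def scheme_to_regex(scheme):
    """Turn a scheme like '_ _ e _ e' into a regex where each blank is one non-vowel letter."""
    parts = ["[^aeiou]" if token == "_" else token for token in scheme.split()]
    return re.compile("^" + "".join(parts) + "$")

words = [line.strip().lower() for line in open("words.txt", encoding="utf8")]
pattern = scheme_to_regex("_ _ e _ e")
matches = [w for w in words if pattern.match(w)]
print(len(matches), matches[:10])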

In fact, one of the best vowel schemes I found for maximizing possible words is (note it includes a super common a _ e ):

_ a _ e _

Imagine capturing all the countless verbs that would fit the first four letters of that scheme and then effectively tripling that number (e.g. baked, bakes, baker). Then add all the other possibilities.

In cryptographic terms, every empty space multiplies the number of raw letter combinations by about 20 (y usually acts as a vowel, so it rarely counts as a candidate consonant). This isn’t quite a code, though, so the comparison isn’t great: vowel scheme solutions always have to be actual words.

Increasing the empty space is a good way to increase the number of combinations, but again, only up to a point. Few words have three consonants in a row unless the structure is designed to allow for them (coincidentally, the word “three” is part of one such structure) and even fewer have four in a row. Also, multi-letter combos generally have to follow a set of structures which, depending on the word, might end up giving fewer possibilities than just a single letter (e.g. “tr”, “ch”, “qu”, etc. for two letters).

So changing word length in general is unpredictable, unless you’re at an extreme low or high. Take, for example:

_ o

which can clearly only ever have up to 20 or 21 solutions for all the consonants and possibly ‘y’.

On the other extreme end, you have:

  2. _ e _ i _ e _ i _ e _ i _ ua _ e _

or

  3. _ _ o _ _ i _ a u _ i _ i _ i _ i _ i _ i _ i _ a _ i o _

which are so long and convoluted that, even without having any idea of the actual word, you can see they should clearly define only one solution (this time I’m sure of it).

But (and you guessed it) there are still exceptions. Some oddly long and specific designations can actually allow for way more words than you might expect. Take, for example:

  4. _ u _ _ i _ a _ io _

How many solutions can you find? Once you get one, the others follow a similar sort of pattern, and you’ll start to see why it supports so many words relative to other vowel schemes of its length.

I’m thinking that even a machine learning/natural language processing solution would have trouble predicting the number of combinations a given vowel scheme will have. The structure of words feels too unpredictable and organic. I could totally be wrong and I still want to try, but that’s for another day.

Similar Words


The title of this post is vocabulary games. That’s plural. I’ve only got two, but I saved the best for last:

Try to find a word where simply switching one letter drastically changes the meaning. Bonus points for using longer words.

This doesn’t have that many interesting properties (granted, it’s not even really a game), but it can be pretty funny.

Attaching and attacking.

Altercation and alternation.

Clinginess and cringiness.

Heroes and herpes.

Morphine and morphing.

Artistic and autistic.

Revenge and revenue.

There’s a lot of these in English. Find your own.

OR you can write a program to find every pair of English words that are just a single letter apart. I did this, actually.

About a year ago, a friend of mine came up with this “game” and I wanted to take it to its logical end. It took a lot of lines of Python code and a long time to run. Recently, I revisited the project and tried to improve on it with all the programming knowledge I’ve gained over that year:

First, just for bragging rights, I can now do this in one line.

match_dict = {'length_%s_matches'%str(length):[comb for comb in itertools.combinations([w for w in [line.rstrip('\n') for line in open("words.txt", encoding="utf8")] if len(w) == length],2) if len(comb[0])-len([l1 for l1, l2 in zip(*comb) if l1==l2])==1] for length in [7,8,9,10,11,12,13,14,15,16,17,18,19,20]} 

This is not a readable, editable, or in any sense advisable way to write code. But once I started shortening it, I immediately wanted to know if this was possible. There you go. All the matching pairs get saved into “match_dict”, with the keys being “length_[7,8,9,etc.]_matches”.

Here’s a better method that has readable code:

import itertools

words = [line.rstrip('\n') for line in open("words.txt", encoding="utf8")] #Strips the newline delimiter '\n' while reading the file into a list called 'words'
accepted_lengths = [7,8,9,10,11,12,13,14,15,16,17,18,19,20]

def match_finder(array):
    return [comb for comb in itertools.combinations(array,2) if len(comb[0])-len([l1 for l1, l2 in zip(*comb) if l1==l2])==1]

length_dict = {"length_%s_list"%str(length):[w for w in words if len(w) == length] for length in accepted_lengths}
match_dict = {'length_%s_matches'%str(length):match_finder(length_dict['length_%s_list'%str(length)]) for length in accepted_lengths}
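
As a quick sanity check, on a toy list match_finder picks out exactly the kind of pairs we’re after:

match_finder(["attaching", "attacking", "attaining"])
# -> [('attaching', 'attacking')]  ("attaining" differs from both of the others in two places)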

And here’s one way to format it into a single file:

with open('Similar Words.txt','w') as similarwords:
    for length in accepted_lengths:
        similarwords.write('### Similar Words of Length %s ###\n\n'%length)
        for pair in match_dict['length_%s_matches'%length]:
            similarwords.write("%s and %s\n" %(pair[0].capitalize(),pair[1].capitalize()))
        similarwords.write('\n\n\n')

If you want to run it yourself, you’re also going to need a list complete with all 400000-odd English words. You can find one online pretty easily, but I got you covered.

Here are the results if you just want to look at those. There’s too much to sort through by myself, so have a look and let me know if you find anything good that I missed.

That’s all my games for now. Happy word-ing.

Answers


  1. Deionized, deionizes, deionizer (Bonus solution: Meionites).
  2. Hemidemisemiquaver (Semidemisemiquaver is an alternate spelling, but I don’t count it as unique).
  3. Floccinaucinihilipilification (Fun fact: this has the most “consonant + i groups” in a row of any word).
  4. Duplication, culmination, publication, lubrication, sublimation, etc.

Generating 3D Coordinates for Practically Any Crystal Lattice

It’s generally pretty hard to find analytical solutions for properties of complex crystal lattices—by complex, I mean really anything outside the scope of your average CHEM 101 equivalent (i.e. simple cubic, bcc, fcc and hexagonal structures). To simulate certain properties of a rigid lattice, a good method to employ is a direct numerical sum on a computer generated lattice, which usually converges as you add more atoms. But, how do you generate complex crystal lattice coordinates (if they aren’t already available online in a crystallographic database)? By nature, shouldn’t they be, well, complex?

Good question! But before we get into that, here’s a quick Python script that will generate simple cubic coordinates at increasing shell sizes S:

 
import itertools 
S = 10 
S_range = list(range(-S,S+1)) 
triplets = list(itertools.product(S_range, repeat=3)) 

Plotting in 3D for S=1:

from mpl_toolkits.mplot3d import Axes3D 
import matplotlib.pyplot as plt 
import numpy as np 

triplets = np.array(triplets) 
fig = plt.figure() 
ax = fig.add_subplot(111, projection='3d') 
ax.scatter(triplets[:,0], triplets[:,1], triplets[:,2], s = 200) 
plt.show()

Useful stuff. If you’ve read my post on the Madelung Constant finder, you might notice that this snippet can actually do more than the entire generator I had in that post, since it actually covers all the coordinates in the lattice, circumventing the need for the “equidistant atom finder.”

So why didn’t I use it back then? Two reasons: First, I liked the mathematical fun of figuring out the equidistant atom sequence, which turned out to be the number of ways to write an integer as the sum of three squares. Second, even once I did come across the more complete generator, the original code, despite its length, still proved much faster in execution (and it had the added benefit of having already been written).

We’ll definitely need the full generator here though, and you can probably already see why: If we want to generate a complex lattice from a simple cubic, it’s better to have all the atoms to manipulate. Multiplying by equidistant atoms to cover the ones you don’t have requires knowledge of the lattice that we can’t easily work out for non-simple cubic arrangements. Luckily, all you need are four lines in Python to start this up.

Step 1: Layers


The first step is to work out how many layers the unit cell of the crystal has. This is pretty easy: pick any arbitrary axis to be your vertical/z axis, and count how many unique heights there are, so that any atoms on the exact same height are said to be on the same layer.

We’ll be making extensive use of the modulus function here (represented by a ‘%’ in Python), which allows us to perform operations on the lattice that are essentially analogous to “every X layers, do Y”. The idea of it is simple: take the modulus of the z coordinate and the amount of layers (or less if you found a symmetry), then do something to each layer to make it like the desired lattice. After the z coordinate passes the last layer of the unit cell, it’ll reset to the first, hence the modulus.
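
As a bare-bones illustration of that dispatch (reusing the triplets list from the snippet above, with a made-up two-layer cell):

n_layers = 2                    # hypothetical unit cell with 2 unique layers
layer_1, layer_2 = [], []
for x, y, z in triplets:
    if z % n_layers == 0:
        layer_1.append([x, y, z])   # "do Y" for the first layer
    else:
        layer_2.append([x, y, z])   # "do Y" for the second layer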

Step 2: Mapping


Next, based on its position in the simple cubic lattice, we’ll remove some atoms that don’t fit into the final lattice. This one is tricky to visualize, but think of it like mapping atoms in our generated simple cubic lattice to one in the target lattice. Sometimes you’ll need to remove every other atom to checker the pattern, or flip them along some coordinate line, before multiplying them all by some number to move them into place according to the atomic coordinates. That’s okay too.

You’ll need to do some logic to figure out how to exactly move the atoms into place, but the principle is fairly simple. The best way to learn how to do this is to apply the method to actual crystal lattices, so let’s take a look at two quick examples.

Example 1: URu2Si2


Figure taken from ‘Rotational Symmetry Breaking in the Hidden-Order Phase of URu2Si2‘ by R. Okazaki et al.

We’ll use uranium ruthenium silicide as an initial pedagogic model for the method. It’s a fairly straightforward lattice (the “122”) but complex enough where the layer-fitting method is probably one of the best ways to model its coordinates. In fact, the grid-like nature of it really lends itself to this method, which we’ll see shortly.

Here are a few quick facts about the material if you’re interested: URu2Si2 is a low-temperature superconductor with an interesting “hidden order” phase at around 17.5 K, below which it suddenly becomes magnetic. Apparently, there’s still debate as to the exact mechanism that causes that phenomenon. Below ~1.5 K it superconducts.

URu2Si2 has a unit cell with 8 unique layers before it repeats. That means our logic tree could look something like this:

for i in range(len(triplets)):
    coordset = triplets[i]
    if coordset[2]%8 == 0:
        #do stuff for layer 1
    elif coordset[2]%8 == 1:
        #do stuff for layer 2
    elif coordset[2]%8 == 2:
        #do stuff for layer 3
    elif coordset[2]%8 == 3:
        #do stuff for layer 4
    elif coordset[2]%8 == 4:
        #do stuff for layer 5
    elif coordset[2]%8 == 5:
        #do stuff for layer 6
    elif coordset[2]%8 == 6:
        #do stuff for layer 7
    elif coordset[2]%8 == 7:
        #do stuff for layer 8

Let’s do these layers one by one starting from the bottom.

The first thing you should notice is that every layer of the unit cell should be able to be described as a 2D grid of 3×3, where each of the 9 places for atoms can either be filled or not. The uranium and silicon atoms occupy the corners or the center spots and ruthenium atoms occupy the sides. You can imagine this pattern repeating through the unit cells adjacent to this one.

Assuming [0,0,0] is the point at the center-bottom of the unit cell, the first layer [x,y,0] should follow a trend like this:

The [0,0] is the center, and [-1,-1], [-1,1], [1,-1], and [1,1] are the corners of the 3×3 unit cell grid. I’ve also included the extra atoms that would be introduced by the unit cells directly adjacent to the unit cell in the center. Do you notice a pattern for when the uranium atoms show up or not?

Here’s one way to think about it: it appears U atoms are showing up when the x and y coordinates are both not multiples of 2. In other words, when x mod 2 and y mod 2 evaluate to 1, rather than 0.

In Python speak, this would look like:

if coordset[2]%8 == 0:
    if coordset[0]%2 == 1:
        if coordset[1]%2 == 1:
            coordset[2] = coordset[2]/8 * cheight
            U.append(coordset)

Alternatively:

if coordset[2]%8 == 0:
    if (coordset[0]%2 == 1) and (coordset[1]%2 == 1):
        coordset[2] = coordset[2]/8 * cheight
        U.append(coordset)

The first IF statement checks if it’s in layer 1, the next IF statement checks if x is not a multiple of 2, and the final IF does the same for y. Then, if all conditions are met, it appends the coordinate set to a list called ‘U’ after multiplying by the correct unit cell height (we’ll do the widths manually later). It’s useful to separate the atom types into different lists even if they serve the same purpose in whatever calculation you plan to do, so that you can plot them differently later to easily check if they’re correct.

Notice that the first layer is not the only one that follows this pattern. Take a look at the picture—layers 4 and 6, both layers of silicon atoms, also do the same thing. Which means:

if coordset[2]%8 == 3:
    if (coordset[0]%2 == 1) and (coordset[1]%2 == 1):
        coordset[2] = coordset[2]//8 * cheight + Si2height
        Si.append(coordset)

and

if coordset[2]%8 == 5:
    if (coordset[0]%2 == 1) and (coordset[1]%2 == 1):
        coordset[2] = coordset[2]//8 * cheight + Si3height
        Si.append(coordset)

seem to be in order.

The “coordset[2]//8 * cheight + Siheight” statements do floor division to find out what unit cell the set is in vertically, and then multiply that identifying number by the height of a cell (cheight). The Si2height and Si3height correspond to the heights of the 2nd and 3rd appearances of silicon, going from layers bottom to top.

With the same logic, you can easily figure out that the 2nd, 5th, and 8th layers (where just the center of the 3×3 appears to be filled) should follow a similar pattern, except x mod 2 and y mod 2 evaluate to 0, not 1. Here’s a graph of layer 2 for better intuition:

Now only layer 3 and layer 7 remain, both composed of ruthenium atoms. Their pattern is slightly different from what we’ve dealt with before; it’s like a checkerboard, and the boolean logic behind it will involve an “either” rather than an “and”.

Take a look at the graph of layer 3 here:
What’s the pattern this time?

An easy way to think about it is that ruthenium atoms only show up when x mod 2 and y mod 2 are not equal to each other.

In other words, if x mod 2 = 1 and y mod 2 = 0, or if x mod 2 = 0 and y mod 2 = 1.

if coordset[2]%8 == 2:
    if coordset[0]%2 == 1:
        if coordset[1]%2 == 0:
            Ru.append(coordset)
    if coordset[0]%2 == 0:
        if coordset[1]%2 == 1:
            Ru.append(coordset)

Since those are the only options, a simpler way to write it would be:

if coordset[2]%8 == 2:
    if coordset[0]%2 != coordset[1]%2:
        Ru.append(coordset)

Now we have all eight layers! Let’s put them all together in the final tree:

for i in range(len(triplets)):
    coordset = triplets[i]
    if coordset[2]%8 == 0:
        if (coordset[0]%2 == 1) and (coordset[1]%2 == 1):
            coordset[2] = coordset[2]/8 * cheight
            U.append(coordset)
    elif coordset[2]%8 == 1:
        if (coordset[0]%2 == 0) and (coordset[1]%2 == 0):
            coordset[2] = (coordset[2]//8)*cheight + 0.125
            Si.append(coordset)
    elif coordset[2]%8 == 2:
        if coordset[0]%2 != coordset[1]%2:
            coordset[2] = (coordset[2]//8)*cheight + 0.25
            Ru.append(coordset)
    elif coordset[2]%8 == 3:
        if (coordset[0]%2 == 1) and (coordset[1]%2 == 1):
            coordset[2] = (coordset[2]//8)*cheight + 0.375
            Si.append(coordset)
    elif coordset[2]%8 == 4:
        if (coordset[0]%2 == 0) and (coordset[1]%2 == 0):
            coordset[2] = (coordset[2]//8)*cheight + 0.5
            U.append(coordset)
    elif coordset[2]%8 == 5:
        if (coordset[0]%2 == 1) and (coordset[1]%2 == 1):
            coordset[2] = (coordset[2]//8)*cheight + 0.625
            Si.append(coordset)
    elif coordset[2]%8 == 6:
        if coordset[0]%2 != coordset[1]%2:
            coordset[2] = (coordset[2]//8)*cheight + 0.75
            Ru.append(coordset)
    elif coordset[2]%8 == 7:
        if (coordset[0]%2 == 0) and (coordset[1]%2 == 0):
            coordset[2] = (coordset[2]//8)*cheight + 0.875
            Si.append(coordset)

I assumed the layers were spaced evenly, but that’s only an approximation suitable for a teaching example. You could get the spacings right by finding the exact atomic coordinates in the literature and then fitting the size of the unit cell using axis-wise operations on the NumPy array. We do this in the next example, if you’re interested.

Still, the graph looks pretty good (after doing some quick adjustments to the input triplets to reduce it to one unit cell):

S = S*8
S_range = list(range(-S,(S+1)))
trips = list(list(tup) for tup in itertools.product(S_range, repeat=3))
triplets = []
for i in range(len(trips)):
    if (trips[i][0] <= (S/4)-1) and (trips[i][0] >= -((S/4)-1)) and (trips[i][1] <= (S/4)-1) and (trips[i][1] >= -((S/4)-1)) and (trips[i][2]>=0):
        triplets.append(trips[i])
U, Ru, Si = [],[],[]
#Logic tree goes here.
U, Ru, Si = np.array(U), np.array(Ru), np.array(Si)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d',)
ax.scatter(U[:,0], U[:,1], U[:,2], c='white', s = 550)
ax.scatter(Ru[:,0], Ru[:,1], Ru[:,2], c='orange', s = 300)
ax.scatter(Si[:,0], Si[:,1], Si[:,2], c='blue', s = 225)
plt.show()

Drum roll, please…

Hey, not bad! It looks pretty similar to the lattice we wanted originally. Again, it’d look a little bit better with the exact atomic coordinates. Let’s look at the next example for some ideas on how to fit that.

Example 2: LaO1−xFxBiS2


We’ll use LaO1−xFxBiS2 as our next example lattice, which has a structure that is a fair bit more complicated than that of uranium ruthenium disilicide.

LaO1 (as we’ll now call it to save your mental reading voice some syllables) is a BiS2-based superconductor with a few interesting properties (namely, that its T1 relaxation time is generally not inversely proportional to its temperature) and it’s a material I’ve worked a lot with before.

Figure taken from ‘The Crystal Structure of Superconducting LaO1−xFxBiS2‘ by A. Athauda et al.

It has 9 “layers” per unit cell (the Bi and S1 atoms are close—not quite on the same layer). We could construct the logic tree like we did for URu2Si2, except with 9 layers instead of 8, but there’s a shortcut here: if you take each individual slice along the x axis, there are only 5 unique layers. The lattice simply alternates between two 5-layer arrangements, where one is a flipped version of the other along the z axis.

In other words, looking at the figure from the front face, you’ll notice that every other column looks the same, just vertically flipped and translated half a unit cell towards or away from you as you move across column by column. This gives us our first tip: we want to make sure every other x-axis position has reversed z coordinates.

To implement it, we can use a logic tree that looks something like this:

for i in range(len(triplets)):
    coordset = triplets[i]
    if coordset[0]%2 == 0:
        if coordset[2]%5 == 0:
            if coordset[1]%2 == 0:
                #do stuff for layer 1, even slices
        elif coordset[2]%5 == 1:
            if coordset[1]%2 == 1:
                #do stuff for layer 2, even slices
        elif coordset[2]%5 == 2:
            if coordset[1]%2 == 1:
                #do stuff for layer 3, even slices
        elif coordset[2]%5 == 3:
            if coordset[1]%2 == 1:
                #do stuff for layer 4, even slices
        else:
            if coordset[1]%2 == 1:
                #do stuff for layer 5, even slices
    else:
        if coordset[2]%5 == 0:
            if coordset[1]%2 == 1:
                #do stuff for layer 1, odd slices
        elif coordset[2]%5 == 1:
            if coordset[1]%2 == 0:
                #do stuff for layer 2, odd slices
        elif coordset[2]%5 == 2:
            if coordset[1]%2 == 0:
                #do stuff for layer 3, odd slices
        elif coordset[2]%5 == 3:
            if coordset[1]%2 == 0:
                #do stuff for layer 4, odd slices
        else:
            if coordset[1]%2 == 0:
                #do stuff for layer 5, odd slices

A shorter way to write this that takes advantage of the symmetry:

for i in range(len(triplets)):
    coordset = triplets[i]
    x_type = coordset[0]%2
    if coordset[2]%5 == 0:
        if coordset[1]%2 == 0+x_type:
            #do stuff for layer 1, either slice
    elif coordset[2]%5 == 1:
        if coordset[1]%2 == 1-x_type:
            #do stuff for layer 2, either slice
    elif coordset[2]%5 == 2:
        if coordset[1]%2 == 1-x_type:
            #do stuff for layer 3, either slice
    elif coordset[2]%5 == 3:
        if coordset[1]%2 == 1-x_type:
            #do stuff for layer 4, either slice
    else:
        if coordset[1]%2 == 1-x_type:
            #do stuff for layer 5, either slice

Then, within the slices, we’ll need to multiply the coordinates by either 1 or -1 depending on if it’s even or odd. The variable “x_type” should come in handy here (e.g. sgn(x_type-0.5)).

LaO1 has these atomic coordinates (taken from Y. Mizuguchi, et al.):

Site x y z Occupancy
La1 0.5 0 0.1015 1
Bi1 0.5 0 0.6231 1
S1 0.5 0 0.3657 1
S2 0.5 0 0.8198 1
O/F 0 0 0 0.5/0.5(Fixed)

The ‘occupancy’ is just the proportion of the atom that’s in the site. Oxygen and fluorine are evenly distributed throughout the lattice. These coordinates are in atomic units, so they’re only valid if you assume that 1 is the width/depth of the unit cell for the x and y coordinates, and that 1 is the height of the unit cell for z. Since 1 isn’t the actual physical distance, we’ll need to “realign” these later with the correct width, depth, and height.

We already know how to “checker” or stagger the pattern from our earlier example, and it’s always a simple mod 2 for this lattice, so I’ll skip over that. We’ll use np.sign(x_type-0.5) to flip the z coordinate every other column (it evaluates to 1 if x_type = 1 and -1 if x_type = 0). Then we’ll alter the z-heights to reflect the coordinates in atomic units, leaving the widths alone (they’re already exactly twice as far as 0.5 times the unit cell, so we’ll just realign them by a factor of two times the actual distance later). Finally, we can realign by the actual physical width and height of the unit cell and plot the resulting coordinates.

Putting it all together:

OF,La,S,Bi = [],[],[],[]

for i in range(len(triplets)):
    coordset = triplets[i]
    x_type = coordset[0]%2
    if coordset[2]%5 == 0:
        if coordset[1]%2 == 0+x_type:
            coordset[2] = (coordset[2]/5)*np.sign(x_type-0.5)
            OF.append(coordset)
    elif coordset[2]%5 == 1:
        if coordset[1]%2 == 1-x_type:
            coordset[2] = ((coordset[2]//5) + 0.1015)*np.sign(x_type-0.5)
            La.append(coordset)
    elif coordset[2]%5 == 2:
        if coordset[1]%2 == 1-x_type:
            coordset[2] = ((coordset[2]//5) + 0.3657)*np.sign(x_type-0.5)
            S.append(coordset)
    elif coordset[2]%5 == 3:
        if coordset[1]%2 == 1-x_type:
            coordset[2] = ((coordset[2]//5) + 0.6231)*np.sign(x_type-0.5)
            Bi.append(coordset)
    else:
        if coordset[1]%2 == 1-x_type:
            coordset[2] = ((coordset[2]//5) + 0.8198)*np.sign(x_type-0.5)
            S.append(coordset)

OF,La,S,Bi = np.array(OF),np.array(La),np.array(S),np.array(Bi)

#From atomic units to actual distances
def reallign(array):
    array[:,0] = array[:,0]*4.0596e-8/2
    array[:,1] = array[:,1]*4.0596e-8/2
    array[:,2] = array[:,2]*13.293e-8

reallign(OF), reallign(La), reallign(S), reallign(Bi)

The actual width and depth are equivalent at 4.0596 angstroms (4.0596e-8 cm – the code works in centimeters), and the height is 13.293 angstroms. We divided the width/depth realignment by 2 because the width of a unit cell is 2 in our original lattice (e.g. -1 to 1).

Finally, let’s plot (using another quick function I whipped up that allows you to choose if you want negative, positive, or all z-values and also set the width/depth ranges):

width = 1*0.21e-7 #0.21e-7 is approx. the width of a unit cell.
height = 1*1.4e-7 #1.4e-7 is approx. the height of a unit cell.

def prep_plot(arr,zrange = "all"):
    new_arr = np.copy(arr)
    new_arr[new_arr[:,0] > width] = np.nan
    new_arr[new_arr[:,0] < -width] = np.nan
    new_arr[new_arr[:,1] > width] = np.nan
    new_arr[new_arr[:,1] < -width] = np.nan
    if zrange in ["positive","Positive","+"]:
        new_arr[new_arr[:,2] > height] = np.nan
        new_arr[new_arr[:,2] < 0] = np.nan
    elif zrange in ["negative","Negative","-"]:
        new_arr[new_arr[:,2] > 0] = np.nan
        new_arr[new_arr[:,2] < -height] = np.nan
    else:
        new_arr[new_arr[:,2] > height] = np.nan
        new_arr[new_arr[:,2] < -height] = np.nan
    return new_arr

set_range = "+"

plot_OF = prep_plot(OF,zrange = set_range)
plot_La = prep_plot(La,zrange = set_range)
plot_S = prep_plot(S,zrange = set_range)
plot_Bi = prep_plot(Bi,zrange = set_range)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d',)
ax.scatter(plot_OF[:,0], plot_OF[:,1], plot_OF[:,2], c='r', s = 150)
ax.scatter(plot_La[:,0], plot_La[:,1], plot_La[:,2], c='g', s = 800)
ax.scatter(plot_S[:,0], plot_S[:,1], plot_S[:,2], c='y', s = 250)
ax.scatter(plot_Bi[:,0], plot_Bi[:,1], plot_Bi[:,2], c='purple', s = 800)
plt.show()

Let’s see what we get…

Sweet, it works! Notice that the O/F (reference) atom at (0,0,0) is missing, because we want to avoid a divide-by-zero error in any calculation that involves distance. Now we can do whatever we want with this lattice. As an example, my research requires that I calculate the Van Vleck second moment of LaO1, a simple sum that requires the distance and angle from each atom to the reference. As you might imagine, having coordinates for the crystal lattice is a big help for this. But you can apply it to practically any sum. Happy modeling!

Some Final Remarks


A few caveats: this method is only really useful for experimental crystal lattices. For well-known crystals, there tend to be coordinates available online (e.g. the CCDC or COD). Also, for many parts of my code, there are probably a number of ways to make it more succinct or faster (especially in the logic trees), but I wanted to keep it as readable as possible for the scope of this post.

Let me know if there’s something to add, something to get rid of, or something I missed. Have at it, and tell me how it goes.

Spirographs, Hypotrochoids, and Chaos

Earlier this week, in a course on ancient astronomy, I was introduced to the concept of “epicycles” and their place in Ptolemy’s geocentric theory. In the domain of astronomy, epicycles are basically orbits on orbits, and you can have epicycles on epicycles, and so on…

For some reason, we as a species were stuck on the idea of having only ideal circles as the paths for planets, not realizing that they could also be ellipses. Oh, and we also thought the Earth was the center of the solar system.

By the time humanity at large switched over to Kepler’s heliocentrism, the leading theory had some 84 epicycles in its full description. As it turns out, “adding epicycles” has since become a byword for bad science—adding parameters in an attempt to get a fundamentally flawed theory to fit increasingly uncooperative data. Go figure.

While the science behind epicycle astronomy is very much false, epicycles do draw out some frankly beautiful patterns if you follow their orbits. The patterns reminded me of spirographs and, as far as I can tell, for very good reason: a spirograph is really just a special case of an epicycle path, where the outer orbit perfectly matches up with the inner orbit’s rotation speed.

Spirograph-esque, no?

I then wondered if it was possible to generate spirographs myself to mess around with. It turned out to be fairly straightforward: spirograph patterns are equivalent to a mathematical hypotrochoid with spinner distance equal to or less than the inner radius, which is basically the shape you get by following a point on a circle rotating inside a larger circle.

Hypotrochoids are even more interesting, though, since the spinner distance can be greater than the radius of the inner rotating circle if you want it to be. In fact, you can even set the radius of the inner rotating circle to be larger than the outer radius it rotates in, or even negative! This isn’t something you could replicate with a real, physical spirograph, but since it’s just a mathematical model living inside my computer, we can do whatever we want with it.

In total, we have five parameters to work with—inner radius, outer radius, spinner radius, revolutions, and iterations. By messing around with the numbers, I’ve got it to produce some utterly insane graphs. I’d be lying if I said I fully understood the path behind some of these, but I’ll try my best to show and explain what I’ve found, and I’ll later propose a mathematical challenge behind the methods that I’m currently working on.

Anyways, let’s take a look at some spirographs (more accurately, hypotrochoids).

Proof Of Concept


The function for a hypotrochoid is traditionally defined parametrically—in terms of x and y. Python lets us define lambda functions easily to model these like so:

import numpy as np

x = lambda d,r,R,theta: (R-r)*np.cos(theta) + d*np.cos(((R-r)/r)*theta)
y = lambda d,r,R,theta: (R-r)*np.sin(theta) - d*np.sin(((R-r)/r)*theta)

Let’s set the theta to be in terms of a more intuitive quantity (revolutions):

revs = 30
Niter = 9999
thetas = np.linspace(0,revs*2*np.pi,num=Niter)

Now, “revs” (revolutions) sets the number of times the inner circle makes a complete rotation (2 pi radians each), while “Niter” (N iterations) is the number of points we sample along the path (the “resolution” of our graph).

As an initial test, let’s set the relevant variables like so:

d = 5 #Spinner distance from center of smaller circle
r = 3 #Smaller circle radius
R = 20 #Larger circle radius
revs = 6
Niter = 5000

The first result of many

What do we have here? Looks pretty spirograph-y to me.
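
I haven’t shown the plotting code itself, but it’s just evaluating the two lambdas over the theta array and letting matplotlib join consecutive samples with straight segments – roughly:

import matplotlib.pyplot as plt

thetas = np.linspace(0,revs*2*np.pi,num=Niter)  # recompute after changing revs/Niter
xs = x(d, r, R, thetas)   # the lambdas broadcast over the whole array of angles
ys = y(d, r, R, thetas)

plt.figure(figsize=(6, 6))
plt.plot(xs, ys, linewidth=0.5)   # consecutive samples are joined by straight segments
plt.axis("equal")
plt.show()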

We can change the number and size of the “loops” by changing the ratios between the two radii and the spinner distance. With different parameters:

d = 3 #Spinner distance from center of smaller circle
r = 2 #Smaller circle radius
R = 5 #Larger circle radius
revs = 6
Niter = 5000

The ever-elusive 5-leaf clover

So this is fun and all, but I’ve found that we’re mostly limited to these loopy patterns (I call them “clovers”) if we don’t start messing with the iterations.

Lower the “resolution” of the graph, so that we only sample points every once in a while, and we can produce much more interesting plots.

d = 5 #Spinner distance from center of smaller circle
r = 3 #Smaller circle radius
R = 20 #Larger circle radius
revs = 200
Niter = 1000

All tangled up!

Cool, right? Check this out, though:

d = 5 #Spinner distance from center of smaller circle 
r = 3 #Smaller circle radius 
R = 20 #Larger circle radius 
revs = 200 
Niter = 1020

???

Holy smokes, what happened here?

Keep in mind that the parameters used for this one are nearly identical to the one before it, differing only in iterations (1020 compared to 1000). That means that the only difference between them is a slight difference in angle spacing between the samples.

A small difference in spacing becomes a big deal when we let it accumulate over a few hundred rotations. Arguably, this makes the system a rather chaotic one.

Here’s a few more examples of the low-resolution regular interval trick making insane graphs:

d = 5 #Spinner distance from center of smaller circle
r = 17 #Smaller circle radius
R = 11 #Larger circle radius
revs = 160
Niter = 500

This one’s my current personal favorite.

d = 1 #Spinner distance from center of smaller circle
r = 11 #Smaller circle radius
R = 12 #Larger circle radius
revs = 121
Niter = 500

This one has a certain elegance that I’m really digging.

d = 3 #Spinner distance from center of smaller circle
r = 5 #Smaller circle radius
R = 7 #Larger circle radius
revs = 3598
Niter = 629

My friend picked five random numbers for this one. She named it “byssustortafiguraphobia,” after searching for the latin roots that most closely translate to “fear of twisted shapes.”

If I were to show every graph that I thought was interesting while messing around with this, this webpage would take a very, very long time to load (I’m probably already pushing it). But feel free to check all of them out in my public document here. It includes many that I didn’t find a place for in this post.

Chaotic Systems


Remember that pair of completely different graphs we made earlier, where the only actual difference between the generation was a slight change in angle spacing between the samples? I actually found a lot of those, and the results are pretty wonderful.

But before we look at a few more examples of graphs changing wildly with small changes to their parameters (in my opinion, the coolest situations), here’s a more generic situation. Basically, I just wanted to point out that, most of the time, small changes can only make… well, small changes. See for yourself:

Yes, this is a screenshot of a Facebook Messenger album that was composed of photos of a computer screen that I took using my phone. Sorry, not sorry. It’s mayhem over here.

For reference, the parameters used for those 9 snapshots were:

d = 6 #Spinner distance from center of smaller circle
r = 7 #Smaller circle radius
R = 8 #Larger circle radius
revs = 2000

—where Niter was 452-460. Just take my word for it that this boring sameness happens almost all of the time.

As for the times where it doesn’t happen…

d = 5 #Spinner distance from center of smaller circle
r = 11 #Smaller circle radius
R = 12.6000 #Larger circle radius
revs = 1000

With these parameters, and 200 iterations we get:


Okay, all good. With those same parameters and 201 iterations, we get:


Um, what happened? The simple explanation is that, at 200 divisions of the rotating angle of the inner circle, it happened to be offset slightly, and it created a (relatively) normal spirograph pattern that we’re used to. At 201 divisions though, it just happened to perfectly line up on the same points each time it sampled them around the function. Funky.

Okay, so here’s another, even more insane example. Prepare to have your mind blown.

d = 5 #Spinner distance from center of smaller circle
r = 3 #Smaller circle radius
R = 8 #Larger circle radius
revs = 50050

In the following graphs, the iteration count runs from 997-1003:

Looks like… just some squares? Kinda lame.

Oh my.

The symmetry here makes NO sense. And how does this follow from what we just had?

Looking like a Picasso now.

Yeah. Same function, I swear.

And we’re back to “normal.”

Crazy, right? I won’t pretend to know exactly what went down in this example; the extent of my knowledge is pretty much the combination of my earlier explanations on where the “chaos” of this system comes from and how that means that sometimes small changes make immense differences in the final graph.

Extra Graphs


Before we cap things off, I wanted to show off a final few spirographs that I like a lot.

Here’s one that demonstrates how curved lines can be approximated using a series of straight ones:

d = 1 #Spinner distance from center of smaller circle
r = 4 #Smaller circle radius
R = 5 #Larger circle radius
revs = 100
Niter = 325

The dark blue line is the generating function, and the cyan lines are the spirograph that it makes. As discussed earlier, the spirograph is really just a series of points connected from a generating function.

Curiously, it looks like the spirograph itself maps out another generating function, something that could be found under the same set of rules (a mathematical hypotrochoid)! I’ll leave it up to you to figure that one out.

Here’s another:

d = 20 #Spinner distance from center of smaller circle
r = 69 #Smaller circle radius
R = 63.6 #Larger circle radius
revs = 740
Niter = 2030

Spooky.

I called that one “Doom Hands.” Pretty hellish, right?

Okay, last one:

d = 20 #Spinner distance from center of smaller circle
r = 69 #Smaller circle radius
R = 61.6 #Larger circle radius
revs = 10000
Niter = 10000

The Homer Simpson curve.

I call that one the “Very Filling Donut” because, well, you know.

Final Notes


So first off, I want to say that I did actually show this to the astronomy professor I mentioned at the beginning of my post. He’s an older fellow who mostly teaches required, main-series intro physics courses (read: uninterested engineering students), so I figured I could brighten his day up by showing him that someone made some pretty cool stuff mostly inspired by what he taught.

I showed it to a few other teachers and friends earlier who liked the idea, but without his approval in particular, the whole effort felt almost incomplete. Of course, being the guy that inspired it, I was expecting him to like it the most. I showed it to him with high hopes.

He didn’t seem all that interested. You win some.

Second, if you’re wondering how I created that cool animation at the top, I used Desmos, an online graphing calculator with surprisingly robust animation functions. Here’s the exact notebook I used (complete with a bonus animation!).

Lastly, there’s actually an interesting class of problems that arises from hypotrochoids that I’ve been working on for a while now, and I’ve had a bit of progress. Take a look at this graph:

d = 1 #Spinner distance from center of smaller circle
r = 11 #Smaller circle radius
R = 12 #Larger circle radius

(The exact numbers for revs/iterations aren’t really important if you just want generating functions/plain hypotrochoids—just make them really large relative to the other numbers. See the first two “clover” graphs at the beginning)

Here’s an interesting question: how many closed regions are in that graph? It’s kind of a lot (P.S. I don’t actually know, since counting them seems like a dry exercise, but have at it if you want to kill a few minutes).

I thought the total amount of regions in this graph was an interesting problem, in the sense that trying to figure it out analytically would be a lot of fun. In clear terms, the challenge would be as follows:

Find a function of the three relevant parameters of a hypotrochoid (i.e. inner radius, outer radius, and spinner distance) for the amount of closed regions that its graph will form.

The problem is stated simply but it’s not trivial to solve (at least for me, strictly a non-mathematician). So far, I’ve figured out that the number of “loops” L (i.e. the number of revolutions the inner circle performs before it returns to its exact original state) can be consistently found with this formula:

L = \frac{\mathrm{lcm}(r,R)}{r}

—where r and R are the inner and outer radii, respectively, and lcm() is the least common multiple of its inputs. In certain ideal situations, the number of loops (or that number plus one) equals the number of closed regions, but most cases don’t follow that trend. The loops usually cross over each other (like in the graph above), immensely complicating the problem.
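
For integer radii, that count is a two-liner in Python (math.lcm needs Python 3.9+):

from math import lcm

def loop_count(r, R):
    """Revolutions of the inner circle before the curve repeats (integer radii only)."""
    return lcm(r, R) // r

loop_count(2, 5)  # -> 5, consistent with the "5-leaf clover" example earlier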

What do you think? Have at it, and tell me how it goes.

A Python Script to Pick Me Outfits Based on the Weather

There’s a well-supported theory in psychology called “decision fatigue,” which predicts that your decision-making ability goes down as you’re forced to make more decisions throughout the day. As a real-life example, supermarkets regularly place candy and processed snacks near the cash register* to take advantage of your decision fatigue after a long stint of deciding which groceries to buy.

*Pictured here: the culprit (Also: an interesting read on decision fatigue’s role in day-to-day life).

On a similar note, there are actually many examples of powerful politicians and businessmen reducing their wardrobes down to a few or even just one outfit in order to minimize the amount of trivial decisions that have to be made throughout a day—think Steve Jobs or Mark Zuckerberg in their simple, iconic garbs.

As former president Barack Obama said of his famously slim wardrobe, “You’ll see I wear only gray or blue suits. I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing, because I have too many other decisions to make.”

A few years ago, I (still in my early teens) was probably most concerned with two things in my life:

    1. How I looked (and by extension, what I was wearing).
    2. Feeling like I made good decisions (emphasis on “feeling”).

But decision fatigue seems to indicate that these two goals are incompatible; if I really wanted to stop wasting mental energy on picking out an outfit every morning, I should’ve just adopted a uniform to wear daily, like Barack or Steve. When I thought about it though, I really didn’t like the idea of wearing the same thing every day, or even outfits from the same, small set of clothes (i.e. a “capsule” wardrobe). Still, I also wasn’t about to just give up on trying to rid myself of my clothing-based decision fatigue.

The clear compromise was to get my computer to do so for me; every morning, instead of laboring over what to wear, I would load up a program that spits out an outfit or a few on the daily, down to the smallest accessory. I’d follow it without question, and so (presumably) would never again have to painstakingly consider what to put on my body in the morning. This is what passed for a good idea in the mind of 15-year-old me.

The thing is, I actually ended up (mostly) finishing the project. And while I don’t ever really use it anymore, I figured it would be a fun thing to share and pick apart today (the file is here if you want it, and the spreadsheet required to run it is here – you’ll also need PyOWM).

Let’s take a look.

Moods and Weather


Before picking outfits at random, it seems reasonable that we would need to “prune” the potential list based on a few categories. Weather was the first and most obvious; if it’s 90 degrees outside in LA, I’d better not be told to wear a parka and ski pants.

Luckily, for the weather, we can use the free Open Weather Map API (OWM) along with a wrapper for Python called PyOWM for easy interfacing with the local weather data. OWM is a commercial API designed mostly for business or agricultural use, so it was fun letting them know exactly what I planned on using my API key for:

I think this image sums up my early teenage years pretty handily.

The other important category was something I called “mood,” which was supposed to be the feeling you wanted out of your clothes that day (outfits could encompass multiple moods).

My four preselected “moods” were:

    1. Cool (Default, daily-driver outfits)
    2. Sexy (For when I’m feeling extra confident)
    3. Cozy (Comfortable and lazy)
    4. Fancy (Anything from business casual to a full on suit-and-tie ensemble)

So the user-input loop would have you select a mood and then automatically find the weather in your city for that day. It would then take the list of outfits at the intersection of those two categories and pick a few at random.

If you’re curious, the input loop looked like this:

while 1:
    category_choice = input("Today I'm feeling... 1)Cool 2)Sexy 3)Cozy 4)Fancy 5|Other Options ")
    if category_choice in accepted_in:
        choice = category_list[int(category_choice) - 1]
        break
    elif category_choice == '5':
        while 1:
            option_choice = input("Other Options: 1)Themes 2)Add an Outfit 3)Force Weather 4)Back to Selection ")
            if option_choice == '4':
                break
            elif option_choice in accepted_in:
                while 1:
                    if option_choice == '3':
                        force_temp = True
                        try:
                            temp = float(input("What is the temperature (high) for today (in fahrenheit)? "))
                        except:
                            ("Oops. Enter a valid number.")
                        break
                #outfitter()
                #themer
            else:
                print("Oops! Enter '1', '2', '3', or '4'")
                continue
    else:
        print("Oops! Enter '1', '2', '3', or '4'")
        continue

(“Outfitter” was supposed to be the function for adding new outfits, but I never got around to implementing that. The “themer” function is in the code, but not put into the loop here. “Force weather” lets you manually set the weather, if you want to wear cold weather clothes in hot weather for some inexplicable reason)

And the PyOWM “weather finder” looked like this:

#Takes input in degrees and outputs the array truncated by temperature. Prints the weather category.

def select_by_degrees(degrees,categ_array):
    if degrees >= 90:
        weather_array = categ_array[categ_array['Weather_value'] >= 3]
        print("Today is hot! (~%.1f F\xb0)" %degrees)
    elif degrees >= 70:
        weather_array = categ_array[(categ_array['Weather_value'] >= 1) & (categ_array['Weather_value'] <= 4)]
        print("Today has fair weather. (~%.1f F\xb0)" %degrees)
    else:
        weather_array = categ_array[(categ_array['Weather_value'] <= 1) | (categ_array['Weather_value'] == 3)]
        print("Today is cold... (~%.1f degrees F\xb0)" %degrees)
    return(weather_array)

(This is what prunes the full outfit array into only those that can “work” in the right temperature. Degrees are in fahrenheit.)

#OWM implementation: uses the Open Weather Map API to find today's forecast for East LA. Falls back to manual input if it can't be accessed (e.g. no WiFi connection).

def select_by_inputs(categ_array):
    global temp  # may already be set via the "Force Weather" option in the input loop
    try:
        owm = OWM('0d68d0be097dc01d8a14a1ff41785d03', version= '2.5')
        fc = owm.daily_forecast(city, limit = 1) 
        f = fc.get_forecast()
        #print(f.get_weathers)
        w = f.get_weathers()[0]
        #print(w.get_temperature)
        if force_temp == False:
            temp = (float(w.get_temperature('fahrenheit')['max'])+5)
        rain = fc.will_have_rain()
        if rain == True:
            print("Rainy day! Bring an umbrella.")
    except:
        while 1:
            try:
                temp = float(input("What is the temperature (high) for today? (in fahrenheit) "))
                break
            except:
                print("Oops! Enter a number.")
                continue
    weather_array = select_by_degrees(temp, categ_array)
    return list(weather_array['Outfits'])

(This is the OWM implementation, courtesy of PyOWM. The ‘0d68d0be097dc01d8a14a1ff41785d03’ is my API key. I’d recommend generating your own key if you download this code to try it, but you don’t have to. Extensive PyOWM documentation can be found on its GitHub, at the link above.)

Design Philosophy


I initially toyed with the idea of having each piece of an outfit be put together at random, and even a neural network that learned over time which pieces go with each other. Neither idea seemed very good or tenable, so I went with user-defined, static outfits instead. Each outfit would be stored in a spreadsheet, along with a few variables to define its “categories.”

I had a column for each mood, with ones and zeros telling whether it fit those moods or not, and another column for a “weather value,” which encoded the temperature ranges it could be worn in.

The spreadsheet looked something like this:

Honestly, I’d still wear a lot of these

And the temperature column values break down like this:

0) Cold weather only
1) Cold and fair weather
2) Fair weather only
3) All weather
4) Fair and hot weather
5) Hot weather only

Six values represent all of the distinct, reasonable combinations of three broad types of weather (clothes that work only in hot and cold weather, but not fair weather, wouldn’t make sense).
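To make the layout concrete, here’s a rough, made-up miniature of the spreadsheet loaded as a pandas DataFrame. Only the Outfits and Weather_value columns appear in the actual code; the outfit names and the exact mood column names are my own guesses.

import pandas as pd

# A made-up miniature of the outfit spreadsheet: 0/1 flags for each mood,
# plus a Weather_value code from the list above.
outfits_df = pd.DataFrame({
    "Outfits":       ["Jeans + grey hoodie", "Blazer + chinos", "Shorts + linen shirt"],
    "Cool":          [1, 0, 1],
    "Sexy":          [0, 1, 0],
    "Cozy":          [1, 0, 0],
    "Fancy":         [0, 1, 0],
    "Weather_value": [1, 3, 5],   # cold/fair, all weather, hot only
})

# Pruning by a mood is then a one-line boolean filter:
cozy_outfits = outfits_df[outfits_df["Cozy"] == 1]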

I started adding the option for “themes”—special outfit types that only really work for a specific occasion: Christmas/holiday parties, going to a rock concert, blacklight parties, etc. I never got around to adding many themes, but the code for them is in there and working.

Planned Features


I planned a lot of features that I never actually finished. For example, the ability to add or delete outfit entries through the program itself, instead of editing the spreadsheet directly. This was tricky to do with pandas, the data manipulation library I used to handle the spreadsheet.
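For what it’s worth, appending a row through pandas is doable; here’s a minimal sketch of how the planned “add an outfit” feature could have worked, assuming the outfits live in an Excel file (the file name, helper name, and column defaults are all placeholders, not the program’s actual code):

import pandas as pd

def add_outfit(path, outfit, moods, weather_value):
    # Load the spreadsheet, append one new row, and write it back out.
    df = pd.read_excel(path)
    new_row = {"Outfits": outfit, "Weather_value": weather_value,
               "Cool": 0, "Sexy": 0, "Cozy": 0, "Fancy": 0}
    for mood in moods:
        new_row[mood] = 1
    df = pd.concat([df, pd.DataFrame([new_row])], ignore_index=True)
    df.to_excel(path, index=False)

add_outfit("outfits.xlsx", "Denim jacket + black tee", ["Cool", "Cozy"], weather_value=1)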

Another feature I wanted to add was a way to count how many times I wore a certain outfit. Then, each time I wore an outfit, it would add to each of the garments’ “wear count.” At the end of every year, I’d find out which clothes I wore the least (or not at all) and get rid of them. Again, editing spreadsheets through pandas proved difficult, and I never got around to it before I decided that using the outfit selector daily was too tedious.

Even with my planned features, I’m not too sure how useful the program would really be (to be fair, I also just enjoy picking out clothes in the morning). The counter for cleaning out your closet annually sounds useful, but you could do the same just as easily with pen and paper, or an online note-taking app. Combined with an automatic outfit selector, though, it might prove useful (provided you remember to run it every morning).

But there are just too many situations where I’d want total control over what I wear (an interview, seeing old friends, a house party, a night out) for me to rely on the program to count everything, though things might be different if you could increase the counts manually.

On second thought, it’s quite possible I’m wrong about this program and those like it. Download it, edit it, and try it out for yourself. Maybe you can find a practical use for automatic outfit selection where I couldn’t.

As always: Have at it, and tell me how it goes.

Monte Carlo Petals

I finished finals today, and I’m pretty glad; it’s been a long, grueling spring semester and summer should prove one hell of a reward this time around.

After submitting my last assignment of the 2016-2017 school year, a finicky problem set for a thermodynamics course, I walked to my bus stop past a tree boasting a great deal of flowering buds—not uncommon for the season. And as I strolled by, its petals floated down and were carried by the wind, which made for a pleasant scene.

So I moved in to get a closer look and to take a picture of it. Here it is.

It doesn’t do it justice. I know people always say this, but you really had to be there.

I also ended up noticing that they landed in a very particular pattern on the ground. Basically, as you got closer to the tree, there were more petals.

And that makes sense: wind can only carry the petals so far from where they start on their branches. But so many petals had been falling for so long that, as a whole, they formed a remarkably clear map of their landing distribution.

Incredibly, you could easily make out a sort of density gradient toward the center, kind of like the field around a point charge in 2D: heavy in the middle, and lighter as you move further out.

But it wasn’t this pattern the entire way through: nearing the center of the tree, the petal density only got higher up to a point, after which it started decreasing again. This also makes sense because, well, tree trunks don’t grow petals.

But it was undeniable that the pattern had a nice unique beauty to it, and so I wanted to see if I could remake it myself. I also figured that simulating the random falling of petals to get a sense of the end distribution was similar to a Monte Carlo method, which uses random sampling to solve problems in statistics. 

It ended up looking pretty awesome, and it was actually a relatively simple and short method. Here’s what I did:

Part Zero: The Game Plan


“Monte Carlo Petals” should consist of three main steps. First, we have to set the petal starting positions. Then, we need an algorithm to generate a random wind vector for each petal. Finally, we add each wind vector to the starting position and plot the results.

Let’s get this big one out of the way first: how do we generate random vector directions? If you think about it, that’s basically the backbone of this whole project. We need some way to make a set of equal-length (preferably unit-length) vectors that are distributed randomly with respect to direction. After that, we can scale the magnitudes using any probability distribution we like.

I found this online for Python, which seems to do the trick:

import numpy as np

def random_vec(dims, number):
    vecs = np.random.normal(size = (number,dims))
    mags = np.linalg.norm(vecs, axis=-1)
    return(vecs/mags[...,np.newaxis])

It seems fairly robust, too: because each component is drawn from a standard normal, the joint distribution is rotationally symmetric, so normalizing the samples gives directions that are uniform over the circle. Plotting:

import matplotlib.pyplot as plt
from matplotlib import collections
def main():
    ends = random_vec(2,1000)
    vectors = np.insert(ends[:,np.newaxis], 0, 0, axis=1)
    figure,axis = plt.subplots()
    axis.add_collection(collections.LineCollection(vectors))
    axis.axis((-1,1,-1,1))
    plt.show()
main()
x = random_vec(2,200)
plt.scatter(x[:,0],x[:,1])
plt.show()

We get this graph:

So, we’ll just use that code. Why not? A little help can speed things up tremendously.

Part One: Petal Ring


My model of petal starting positions was pretty basic; I just created a randomly distributed ring of them, modeling how the trunk and inner branches wouldn’t carry petals.

Take a random direction vector of length one and multiply it by some random value in between a larger number and a smaller number. Do this enough times, and you’ll map out a ring. The inner radius of your ring is the smaller number, and the outer radius is the larger one.

Using our random_vec() function:

Niter = 2000

startvecs = np.array(random_vec(2,Niter))
mags = np.random.uniform(low = 12, high = 15, size = Niter)
for i in range(Niter):
    startvecs[i] = startvecs[i]*mags[i]
plt.scatter(startvecs[:,0],startvecs[:,1])
plt.show()

Basically, we made 2000 “petals” in a ring with inner radius 12 and outer radius 15. It’s worth noting that the outermost part of the ring is less dense than the innermost, since there’s slightly more area to fill out there with essentially the same chance of a petal landing at any given radius.
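(As a small aside, the per-petal loop above, and the similar one in the wind section below, can be collapsed into a single broadcasted multiply. This is just an equivalent, more idiomatic NumPy rewrite, not what the original script does:)

startvecs = random_vec(2, Niter) * mags[:, np.newaxis]  # scale each unit vector by its own magnitude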

Part Two: Some Wind Vectors


How strong is wind, on average? It’s gotta be more than zero, if there’s any wind at all. The wind blowing near the tree when I first saw it was pretty strong, but not overwhelming.

How each petal is directionally affected by wind should be pretty random as well (even if the wind itself is quite biased, it’ll change over time). We’ll start with the same random_vec() function usage as last time to make 2000 random directions. Then, we’ll multiply each direction by a length determined by a normal distribution around some set average.

This should make for a better wind model than, say, a uniform distribution, like we used for the petal ring. A normal distribution lets us make most of the wind around a set average strength, with less likely deviations from that average.

Here’s what it looks like written down:

avgwind = 5.0
winddev = 3.0

windvecs = np.array(random_vec(2,Niter))
windmags = np.random.normal(loc=avgwind,scale=winddev,size=Niter)
for i in range(Niter):
    windvecs[i] = windvecs[i]*windmags[i]
plt.scatter(windvecs[:,0],windvecs[:,1])
plt.show()

And the resulting graph:

You’ll notice that our average wind is low enough that there’s still a lot of density around the center, especially since a negative wind strength just reverses the vector. This is intentional: we want a good portion of the petals not to move much, so the final plot still reflects the starting positions, but we also want plenty of petals to travel some distance, which is why the average is above zero.

Part Three: Putting It All Together


Now we just need to add the wind vectors to our petal ring. Since we started off using arrays, we can do this in one fell swoop, like so:

petalvecs = windvecs + startvecs
plt.scatter(petalvecs[:,0],petalvecs[:,1])
plt.show()

This produces the following graph:

Finally, I messed around with the numbers (petal number, average wind strength, wind deviation, and inner/outer ring radii), made the graph a little bigger and more square, changed the marker color/size/type, and added a background for some finishing touches.

As promised, these are our Monte Carlo Petals:

Nice. They’re not quite as visually appealing as the real thing, but I think they have a simple beauty of their own.

There’s still a lot more we can play around with, too.

Here’s the graph imposed on top of the (darker) starting positions:

And here’s a graph of each wind vector from its starting position to the final petal location (I reduced the petal amount, since it’s not really interesting in the form of a jumbled mass of white and magenta):

What else? We could directly plot the density as the darkness of a color, and figure out a function for the expected density value based on the input parameters.
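I never got around to the density-as-color idea, but a quick sketch using Matplotlib’s hexbin would look something like this (the bin size and colormap are arbitrary choices of mine):

plt.hexbin(petalvecs[:,0], petalvecs[:,1], gridsize=40, cmap="magma")  # shade each bin by petal count
plt.colorbar(label="petals per bin")
plt.gca().set_aspect("equal")
plt.show()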

And we could also mess around more with the variables themselves—wind deviation, average wind, petal amount, and the inner/outer ring radii—to make other pretty graphs and figure out the patterns that emerge from changing those numbers. Needless to say, there’s a lot I left unexplored.

What do you think? Have at it, and tell me how it goes.

The Madelung Constant Finder

Note: This is a follow-up to my post on the Madelung constant (it’s for NaCl). If you don’t know what that is yet, take a look at it here.

I wrote the Madelung constant finder code in Python, so you’ll need some interpreter if you want to run it alongside this explanation.

If you don’t have it already, I recommend downloading the Anaconda distribution for Python. It comes preloaded with a bunch of useful packages (NumPy, SciPy, Matplotlib, etc.) and Jupyter, a notebook environment that’s great for demonstrating what your code can do in an organized, runnable format. I’m uploading the Jupyter notebook of this code as well as the raw .py scripts – I highly recommend the Jupyter option if you want to follow along closely.

Coordinate Generator

The first thing I did was establish a logical framework for computing the constant. In my opinion, the best option is to generate a list of lists, with each nested list representing a “triplet” coordinate set, and then calculate the influence of each one. (They aren’t triplets in the strict sense, since the three things inside aren’t identical; it’s just what I call them in my syntax, so I’ll use the name here too for consistency.) You might be quick to point out shorter, more elegant, vectorized solutions here, but the triplet-list method is surprisingly fast (especially if you only generate a few triplets and multiply, as I demonstrate) and it has the benefit of being easy to understand just from looking at the code.

So, how do you generate a list of all possible 3-d coordinate sets? You don’t. Well, you can, but it’s slower. You’ll see what I’m talking about soon.

My friend whipped up a script for generating coordinate sets, which I refined to stop at certain shells (you just end it when the maximum number in a set is past your desired shell number). It looks like this:

# Generate ascending coordinate triplets, stopping once the final shell is complete.
# (The smallest coordinate only reaches shell_number at [n, n, n], the last triplet of shell n.)
while triplets[-1][0] < shell_number:
    lasttrip = triplets[-1]
    if lasttrip[0] == lasttrip[1]:
        if lasttrip[1] == lasttrip[2]:
            triplets.append([0, 0, lasttrip[2]+1])
        elif lasttrip[1] < lasttrip[2]:
            triplets.append([0, lasttrip[1]+1, lasttrip[2]])
    elif lasttrip[0] < lasttrip[1] and lasttrip[1] <= lasttrip[2]:
        triplets.append([lasttrip[0]+1, lasttrip[1], lasttrip[2]])

The first line keeps the loop running until the final shell is complete: since the coordinates in each set are kept in ascending order, the smallest coordinate only reaches the shell number at [n, n, n], the last triplet of that shell. From there, the code goes through a series of conditional statements that determine the next coordinate set based on the last one generated: sometimes adding one to a term, other times flipping a term to zero and incrementing the next, or, rarely, rolling the whole thing over (like from [1,1,1] to [0,0,2]).

You may notice something strange about this code. Namely, that it doesn’t actually generate every possible coordinate set. If you run this with, say, shell_number = 3, you’ll get this as the output into triplets (to run this snippet yourself, you’ll first need to set triplets as a list of lists, with its first term as [0,0,1]):

[0, 0, 1]
[0, 1, 1]
[1, 1, 1]
[0, 0, 2]
[0, 1, 2]
[1, 1, 2]
[0, 2, 2]
[1, 2, 2]
[2, 2, 2]
[0, 0, 3]
[0, 1, 3]
[1, 1, 3]
[0, 2, 3]
[1, 2, 3]
[2, 2, 3]
[0, 3, 3]
[1, 3, 3]
[2, 3, 3]
[3, 3, 3]
...

They sort of just count up, as if it were some kind of weird binary on the first shell, ternary on the second, and so on. This, of course, misses all of the negative coordinates and the different orderings of the numbers in between. But we can solve that by multiplying the influence of each triplet by the number of atoms that sit at the same distance. Since direction doesn’t matter (only distance and the number of atoms do), it’s effectively the same thing.

Equal Distance Set Generator

The logic tree for doing this is pretty long, but it’s also pretty interesting. In code form, it looks like this:

eqdistset = []  # number of atoms equidistant from the reference for each triplet
for i in range(len(triplets)):
    coordset=triplets[i]
    if coordset[0] == 0:
        if coordset[1] == 0:
            eqdist = 6
        elif coordset[1] == coordset[2]:
            eqdist = 12
        else:
            eqdist = 24
    elif coordset[0] == coordset[1]:
        if coordset[1] == coordset[2]:
            eqdist = 8
        else:
            eqdist = 24
    else:
        if coordset[1] == coordset[2]:
            eqdist = 24
        else:
            eqdist = 48
    eqdistset.append(eqdist)

This makes a list the same length as triplets that’s filled with the number of atoms equidistant to its corresponding triplets entry.

This list also gives the number of atoms on each sphere of increasing radius (I touched on this sequence a bit in my original post; it’s also on the On-Line Encyclopedia of Integer Sequences). It was pretty fun to work out how many atoms sit at the same distance based on some general facts about a particular coordinate set. If you have time, I would highly recommend trying it out yourself for fun; it’ll also give you more insight into the weirdness of that sequence.
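If you’d rather check the counts than derive them, a brute-force verification is straightforward: take a triplet, generate every signed permutation of it, and count the distinct points (the helper name below is my own, not part of the original script):

from itertools import permutations, product

def count_equidistant(coordset):
    # Every distinct lattice point reachable by permuting the coordinates and
    # flipping signs sits at the same distance from the reference atom.
    points = set()
    for perm in permutations(coordset):
        for signs in product((1, -1), repeat=3):
            points.add(tuple(s*c for s, c in zip(signs, perm)))
    return len(points)

print(count_equidistant([0, 0, 1]))  # 6
print(count_equidistant([1, 1, 2]))  # 24
print(count_equidistant([1, 2, 3]))  # 48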

Denominator Generator

A diagram to help visualize the fractional atom concept

I also wrote in my last post that, in order to converge faster, we need to account for the fraction of each atom that lies inside the shell. To picture how this works, imagine each shell as the surface of a cube whose corners are centered on atoms. The atoms at those corners have only 1/8 of their volume inside the cube, the edge atoms have 1/4, and the face atoms 1/2.

To write code for this, I first thought you would need to add up the fractions for each layer and then carry the remainder onto the next shell up. But if you really think about it, you only need to fractionalize the last layer: for every shell below the outermost one, both the inside and outside portions of each atom end up within the region you’re summing over, so leaving those atoms whole is already correct.
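As a quick sanity check on this bookkeeping (my own arithmetic, not from the original post): for the first shell, fractional counting gives 6·(1/2) + 12·(1/4) + 8·(1/8) = 7 atoms, and adding the reference atom in the middle brings the total to 8, exactly the number of unit cells in a 2×2×2 cube. The fractions account for every atom without double counting.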

This code generates a list of denominators for the fraction in the shell of the corresponding atom in triplets:

denomlist = []  # denominator of the shell fraction for each triplet (8, 4, or 2)
for i in range(len(triplets)):
    coordset = triplets[i]
    if coordset[0] == coordset[1]:
        if coordset[1] == coordset[2]:
            denom = 8
        else:
            denom = 2
    elif coordset[1] == coordset[2]:
        denom = 4
    else:
        denom = 2
    denomlist.append(denom)

Short and sweet. It’s just checking if a set represents a corner, edge, or face atom and then appending a value to denomlist accordingly.

Putting it All Together

Now let’s do the sum. We’ll generate a list of addends based on the other lists we made. The generating algorithm needs to determine whether each coordinate set represents a sodium or a chlorine atom, so we know whether to add or subtract (a simple way to do this is to add the coordinate values together: if the sum is odd, the atom is the opposite species from the reference; if it’s even, it’s the same). It then needs to determine whether the coordinate set sits on the outermost shell, so it knows whether or not to use denomlist.
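Here’s a quick illustration of the parity rule on a few of the triplets from earlier (with the reference ion sitting at the origin):

for coordset in ([0, 0, 1], [0, 1, 1], [1, 1, 1]):
    opposite = sum(coordset) % 2 == 1   # odd coordinate sum -> opposite species
    print(coordset, "opposite ion" if opposite else "same ion")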

Finding the distances is a trivial task with numpy:

triplets = np.array(triplets)
distset = np.sqrt(np.sum(triplets**2,axis = 1))

And so the final generator looks like this:

addlist = []  # signed contribution of each triplet to the running sum
for i in range(len(triplets)):
    coordset=triplets[i]
    if abs(coordset[0]+coordset[1]+coordset[2])%2 == 1:
        if max(coordset) == shell_number:
            addlist.append(eqdistset[i]/(denomlist[i]*distset[i]))
        else:
            addlist.append(eqdistset[i]/distset[i])
    else:
        if max(coordset) == shell_number:
            addlist.append(-eqdistset[i]/(denomlist[i]*distset[i]))
        else:
            addlist.append(-eqdistset[i]/distset[i])

Now we just need to extract our data.

print("Final calculated Madelung constant for NaCl is:", sum(addlist))

And we’re done! For reference, the accepted value of the Madelung constant for NaCl is about 1.7476, and thanks to the shell-fraction correction the sum converges toward it quickly as you raise shell_number.
