Applied machine learning relies on translating the structure of a problem into a computational model. This arises in applications as diverse as statistical physics and food recommender systems. The pattern of connectivity in an undirected graphical model, or the fact that datapoints in food recommendation are unordered collections of features, can inform the structure of a model. First, consider undirected graphical models from statistical physics, such as the ubiquitous Ising model. Basic research in physics requires scalable simulations for comparing the behavior of a model to its experimental counterpart. The Ising model consists of binary random variables with local connectivity; interactions between neighboring nodes can lead to long-range correlations. Modeling these correlations is necessary to capture physical phenomena such as phase transitions. To mirror the local structure of these models, we use flow-based convolutional generative models that can capture long-range correlations. Combining flow-based models designed for continuous variables with recent work on hierarchical variational approximations enables the modeling of discrete random variables. Compared to existing variational inference methods, this approach scales to statistical physics models with tens of thousands of correlated random variables and uses fewer parameters.

Just as computational choices can be made by considering the structure of an undirected graphical model, model construction itself can be guided by the structure of individual datapoints. Consider a recommendation task where datapoints consist of unordered sets, and the objective is to maximize top-K recall, a common recommendation metric. Simple results show that a classifier with zero worst-case error achieves maximum top-K recall. Further, the unordered structure of the data suggests the use of a permutation-invariant classifier for statistical and computational efficiency.
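The local connectivity of the Ising model described above can be made concrete with a small numerical sketch. This is an illustrative NumPy example, not code from the thesis: the nearest-neighbor energy of a 2D lattice of binary spins, with the coupling strength `J`, lattice size, and periodic boundaries chosen purely for demonstration.

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Energy of a 2D Ising configuration with nearest-neighbor
    coupling J and periodic boundaries (spins take values +/-1)."""
    # Pair each site with its right and down neighbor so every
    # nearest-neighbor bond is counted exactly once.
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * right + spins * down)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
print(ising_energy(spins))
```

A fully aligned 4x4 lattice has 32 bonds and thus energy -32 with J=1; configurations like this, where every spin agrees with its neighbors, are the long-range-correlated states that a generative model of the Ising model must capture.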
We evaluate such a classifier on human dietary behavior data, where every meal is an unordered collection of ingredients, and find that it outperforms probabilistic matrix factorization methods. Finally, we show that building problem structure into an approximate inference algorithm improves the accuracy of probabilistic modeling methods.
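The permutation invariance exploited above can be sketched in a few lines. This is a minimal illustration, not the classifier from the thesis: a meal is scored by sum-pooling toy ingredient embeddings and applying a linear readout, so the score cannot depend on the order in which ingredients are listed.

```python
import numpy as np

def set_classifier(ingredient_ids, embeddings, weights):
    """Score a meal (an unordered set of ingredient ids) by
    sum-pooling ingredient embeddings, then applying a linear
    readout. Sum pooling makes the score permutation invariant."""
    pooled = embeddings[ingredient_ids].sum(axis=0)
    return float(pooled @ weights)

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))  # toy vocabulary of 100 ingredients
w = rng.normal(size=8)
meal = [3, 17, 42]

# Reordering the ingredients leaves the score unchanged.
print(np.isclose(set_classifier(meal, emb, w),
                 set_classifier(meal[::-1], emb, w)))  # True
```

Because the pooled representation is shared across all orderings of the same set, the model needs no parameters for positional information, which is one source of the statistical and computational efficiency mentioned above.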