The main component of the color helper file is the RGB class. Various modules (e.g. Plotly and Matplotlib) use different conventions for representing red, green and blue color intensities. The RGB class accepts as input and provides as output (via associated methods) the following possible formats:
A 3-tuple of floats between 0 and 1 such as (0.6, 0.4, 1.0) (corresponds to the method asTupleFloat)
A 3-tuple of integers between 0 and 255 such as (153, 102, 255) (corresponds to the method asTupleInt)
A string representing a 3-tuple of integers between 0 and 255 such as "rgb(153,102,255)" (corresponds to the method asStringTuple)
A string representing a 6 digit hexadecimal number such as "#7F66FF" (corresponds to the method asStringHex)
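To make that interface concrete, here is a minimal sketch of such a class. The method names come from the descriptions above, but the internals (canonical storage as integers, no input parsing or validation) are illustrative assumptions rather than the repository's actual implementation.

```python
class RGBSketch:
    """Toy stand-in for the RGB class: stores integer channels 0-255
    and converts to the four output formats described above."""

    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

    def asTupleInt(self):
        return (self.r, self.g, self.b)

    def asTupleFloat(self):
        return (self.r / 255, self.g / 255, self.b / 255)

    def asStringTuple(self):
        return f"rgb({self.r},{self.g},{self.b})"

    def asStringHex(self):
        return f"#{self.r:02X}{self.g:02X}{self.b:02X}"
```

For example, `RGBSketch(153, 102, 255).asStringHex()` yields "#9966FF", the hex form of the tuple example above.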
The color helper file also contains two additional components which I have found to be helpful. The first is a pair of dictionaries containing all colormaps of Matplotlib (i.e. ALL_MATPLOTLIB_COLORMAPS) and all color scales of Plotly sorted by type (i.e. ALL_PLOTLY_COLOR_SCALES_BY_TYPE); keys of the dictionaries are sequence names and values are lists of RGB objects. These dictionaries exist as a matter of personal convenience, as I grew tired of having to search documentation for the existing color sequences in these modules. The second additional component is the customSpectrum function, which takes as input a value between 0 and 1 and outputs an RGB object. The list DEFAULT_SPECTRUM is passed to this function by default and contains a ROYGBIV-like sequence of colors (see below); however, the user can specify any list of RGB objects they wish to interpolate via the optional rgb_spectrum argument.
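The interpolation behind customSpectrum can be sketched as follows. This is a guess at the mechanism (piecewise-linear interpolation over the list), not the repository's code, and it works on plain float 3-tuples rather than RGB objects for simplicity.

```python
def custom_spectrum(value, rgb_spectrum):
    """Linearly interpolate a value in [0, 1] across a list of
    (r, g, b) float tuples. Illustrative stand-in for customSpectrum."""
    assert 0.0 <= value <= 1.0
    # Scale the value onto the index range of the spectrum list.
    scaled = value * (len(rgb_spectrum) - 1)
    i = min(int(scaled), len(rgb_spectrum) - 2)
    t = scaled - i
    lo, hi = rgb_spectrum[i], rgb_spectrum[i + 1]
    return tuple((1 - t) * a + t * b for a, b in zip(lo, hi))
```

With a red-green-blue spectrum, `custom_spectrum(0.25, [(1,0,0), (0,1,0), (0,0,1)])` lands halfway between red and green.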
The file latex_helper.py is a simple script which uses Matplotlib to render LaTeX equations stored in txt files. There isn't too much to say here beyond the fact that the main purpose of this script is to allow me to render the equation images used on this website.
The privacy_helper.py script contains the privacyDecorator function, an attempt to add private class attributes to the code base. The implementation is by no means secure against a determined user, but the general use case is to prevent users from accidentally modifying stored values they shouldn't modify, or from accessing internal functions not meant for their use.
There are a few important design factors worth noting here. First, any attribute that begins with a double underscore, or that follows the typical Python name-mangling convention, is automatically considered private. Second, users can provide a list of strings naming any additional constants, variables or functions they want considered private. Finally, the default behavior allows calling the standard library's deepcopy function on the class, but this can be blocked by simply setting the deepcopy_flag argument to False.
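One way such a decorator could be built is sketched below. Everything here is an illustrative assumption (the repository's privacyDecorator likely differs): this version only guards writes after initialization, whereas guarding reads as well would require overriding __getattribute__.

```python
import copy

def privacy_decorator(private_names=(), deepcopy_flag=True):
    """Sketch of the privacyDecorator idea: attributes starting with a
    double underscore, following the name-mangling convention, or listed
    in private_names cannot be reassigned once __init__ has finished."""
    def wrap(cls):
        original_setattr = cls.__setattr__
        original_init = cls.__init__
        mangled_prefix = f"_{cls.__name__}__"

        def is_private(name):
            return (name.startswith("__")
                    or name.startswith(mangled_prefix)
                    or name in private_names)

        def guarded_setattr(self, name, value):
            # Block writes to private attributes after initialization.
            if is_private(name) and getattr(self, "_initialized", False):
                raise AttributeError(f"attribute '{name}' is private")
            original_setattr(self, name, value)

        def guarded_init(self, *args, **kwargs):
            original_init(self, *args, **kwargs)
            original_setattr(self, "_initialized", True)

        cls.__setattr__ = guarded_setattr
        cls.__init__ = guarded_init
        if not deepcopy_flag:
            # copy.deepcopy consults __deepcopy__ first, so raising
            # here disables deep copies of decorated instances.
            def no_deepcopy(self, memo):
                raise TypeError(f"deepcopy of {cls.__name__} is disabled")
            cls.__deepcopy__ = no_deepcopy
        return cls
    return wrap
```

As with the real decorator, this is a guard rail against accidents, not a security boundary: object.__setattr__ can still bypass it.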
Suppose we have the data points (x_1, y_1) through (x_n, y_n) such that x_1 < x_2 < … < x_n. Define the polynomials f_1(x) through f_{n-1}(x) such that (1) f_i(x_i) = y_i and (2) f_i(x_{i+1}) = y_{i+1} for each i in the set {1, ..., n-1}. Such a mathematical construction is called a "spline", and there are of course infinitely many ways to connect finitely many points using piecewise polynomials. However, two common methods are linear splines and natural cubic splines.
First, the linear spline is the simpler of these two versions as it takes the form f_i(x) = a_i(x - x_i) + b_i for each polynomial. The above Conditions (1) and (2) are sufficient for allowing us to solve for our coefficients a_i and b_i:
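Condition (1) forces b_i = y_i, and Condition (2) then gives a_i = (y_{i+1} - y_i)/(x_{i+1} - x_i). A quick sketch of that solve (function name is mine, not the repository's):

```python
def linear_spline_coefficients(x, y):
    """Solve Conditions (1) and (2) for the linear spline
    f_i(x) = a_i (x - x_i) + b_i over sorted knots x with values y."""
    # a_i is the slope between consecutive points; b_i is the left value.
    a = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    b = list(y[:-1])
    return a, b
```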
Next, the natural cubic spline has significantly more complexity because our polynomials are of the form f_i(x) = a_i(x - x_i)^3 + b_i(x - x_i)^2 + c_i(x - x_i) + d_i. Without any extra constraints we would have too many degrees of freedom for our spline to be uniquely defined. As such, we have the following additional requirements: (3) f_i'(x_{i+1}) = f_{i+1}'(x_{i+1}) to make slopes match at the defining points, (4) f_i''(x_{i+1}) = f_{i+1}''(x_{i+1}) to make concavities match at the defining points, and (5) the "natural" choice of setting f_1''(x_1) and f_{n-1}''(x_n) both equal to 0. This system of 4n - 4 equations and unknowns is not too tricky to solve, but there are a few key steps for making it simpler. The first is to note that d_i = y_i for each i, simply because of Condition (1). The second is to define Δx_i = x_{i+1} - x_i and Δy_i = y_{i+1} - y_i for each i in the set {1, ..., n-1}. The third and final step is to imagine a hypothetical nth cubic polynomial which, according to Condition (5), would have quadratic coefficient b_n = 0.
With this setup work completed, it turns out to be easiest to compute the quadratic coefficients b_1, ..., b_n by solving a linear system of equations. It is tricky to cleanly write the matrices of this system for arbitrary values of n; however, the following example of setting n = 5 should be sufficient for demonstrating the general pattern used in larger systems:
From here, the following equations allow one to solve for the cubic and linear coefficients respectively:
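The whole derivation can be collected into a short sketch (my own function names; the system for the b_i is solved densely with NumPy for clarity, though a dedicated tridiagonal solver would be faster):

```python
import numpy as np

def natural_cubic_spline_coefficients(x, y):
    """Solve for (a_i, b_i, c_i, d_i) of the natural cubic spline
    f_i(t) = a_i (t - x_i)^3 + b_i (t - x_i)^2 + c_i (t - x_i) + d_i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    dx, dy = np.diff(x), np.diff(y)  # the Δx_i and Δy_i defined above
    # Tridiagonal system for b_1, ..., b_n with b_1 = b_n = 0 (Condition 5).
    A, rhs = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = dx[i - 1]
        A[i, i] = 2.0 * (dx[i - 1] + dx[i])
        A[i, i + 1] = dx[i]
        rhs[i] = 3.0 * (dy[i] / dx[i] - dy[i - 1] / dx[i - 1])
    b = np.linalg.solve(A, rhs)
    # Back-substitute for the cubic and linear coefficients,
    # then d_i = y_i from Condition (1).
    a = (b[1:] - b[:-1]) / (3.0 * dx)
    c = dy / dx - dx * (2.0 * b[:-1] + b[1:]) / 3.0
    return a, b[:-1], c, y[:-1]
```

By construction the resulting pieces interpolate the knots and have matching first and second derivatives at the interior points.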
The linear and natural cubic splines shown above are currently implemented in spline_helper.py. However, in the future it would be easy to implement additional spline options because both LinearSpline and NaturalCubicSpline are sub-classes of the abstract Spline class contained in that Python file. For now, take a look at the following figure generated using these two classes!
The main goal of sqlite3_helper.py is to create a wrapper around SQLite3 which allows one to use SQL databases similarly to how one might use a Pandas dataframe or a log file. Although this is perhaps not the typical use for a SQL database, I have found it convenient to store progressively-generated data sets in a standalone file rather than in short-term memory; such a file can then be read by data visualization software to create a live view of the data as it is generated. While this functionality could be implemented by periodically writing the relevant dataframes to associated csv files, I have found a variety of benefits to keeping all information in a single db file.
The first main component of the sqlite3_helper.py file is the custom ConnectionManager class. The idea is that one provides a path to a db file and the connection manager maintains an active connection and cursor for the file through which all other functionality is implemented. Beyond this, the user provides a "maximum buffer size" which controls the frequency at which temporary changes stored in a journal file are committed to the main db file; this feature is the primary reason for the manager's existence as controlling the commit frequency is an important step for optimization.
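The buffering idea can be sketched with the standard sqlite3 module. The class and argument names below are illustrative (the actual ConnectionManager API may differ): commits are deferred until a fixed number of statements have accumulated, which is the optimization knob described above.

```python
import sqlite3

class BufferedConnectionManager:
    """Sketch of a connection manager that batches commits: changes sit in
    SQLite's journal until max_buffer_size statements have accumulated."""

    def __init__(self, db_path, max_buffer_size=100):
        self.connection = sqlite3.connect(db_path)
        self.cursor = self.connection.cursor()
        self.max_buffer_size = max_buffer_size
        self._pending = 0

    def execute(self, sql, params=()):
        self.cursor.execute(sql, params)
        self._pending += 1
        if self._pending >= self.max_buffer_size:
            self.commit()

    def commit(self):
        self.connection.commit()
        self._pending = 0

    def close(self):
        # Flush any uncommitted statements before disconnecting.
        self.commit()
        self.connection.close()
```

Committing every statement individually forces a disk sync per write, so batching like this is usually the single biggest speedup for progressively-generated data.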
Beyond the ConnectionManager class, there are many functions contained in the sqlite3_helper.py script. As such, here is a high-level summary of these various functions and their arguments:
Functions for adding and deleting tables in a db file
addTable(connection_manager, table_name, column_names, column_types, replace_flag = True)
deleteTable(connection_manager, table_name)
Functions for reading from a db file
getExistingTables(connection_manager) -> list
getColumnNames(connection_manager, table_name) -> list
getColumnTypes(connection_manager, table_name) -> list
getRowCount(connection_manager, table_name) -> int
readTable(connection_manager, table_name) -> dict
readColumn(connection_manager, table_name, column_name) -> list
readRow(connection_manager, table_name, row_index) -> list
readEntry(connection_manager, table_name, column_name, row_index) -> Any
Functions for writing to a db file
appendColumn(connection_manager, table_name, column_name, column_type, new_column = None)
appendRow(connection_manager, table_name, new_row = None)
deleteColumn(connection_manager, table_name, column_name)
deleteRow(connection_manager, table_name, row_index)
replaceColumn(connection_manager, table_name, column_name, new_column)
replaceRow(connection_manager, table_name, row_index, new_row)
replaceEntry(connection_manager, table_name, column_name, row_index, new_entry)
swapRows(connection_manager, table_name, row_index_1, row_index_2)
Function for sorting a table by column ordering
sortTable(connection_manager, table_name, column_name, ascending_flag)
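To illustrate the kind of wrapper these functions involve, here is a hypothetical stand-in for readTable built directly on the sqlite3 standard library; the actual function routes through the ConnectionManager and likely differs in detail, but the column-dictionary output matches the signature listed above.

```python
import sqlite3

def read_table_as_dict(cursor, table_name):
    """Return a table as {column_name: list_of_values}, i.e. the dict
    shape that readTable is described as producing."""
    # PRAGMA table_info yields one row per column; index 1 is the name.
    cols = [row[1] for row in cursor.execute(f"PRAGMA table_info({table_name})")]
    rows = cursor.execute(f"SELECT * FROM {table_name}").fetchall()
    return {col: [r[i] for r in rows] for i, col in enumerate(cols)}
```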
Across the Presently Level code base, tkinter is mainly used to create dialog boxes for saving and loading files. For now this is the only real use of the script; however, the long-term plan is to make a set of wrapper functions that make it easier to build tkinter-based apps.
The contents of the type helper script are generally the lowest-level functions of the Presently Level repositories. Type hints are certainly helpful in Python, but they are not as robust as the more formal type declarations used in other languages. Personally, I have typically used simple assert statements to check object types, but this has proved repetitive over time; how many times must I verify that an object is numeric, or that an object is a list of strings? The functions of type_helper.py are simply collections of assert statements that serve as shortcuts for this low-level operation.
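For instance, the "list of strings" check mentioned above might look like the following (an illustrative stand-in, not the repository's actual function):

```python
def assert_list_of_strings(obj, name="object"):
    """Bundle of assert statements verifying obj is a list of strings,
    replacing the same two checks written out repeatedly by hand."""
    assert isinstance(obj, list), f"{name} must be a list"
    for i, item in enumerate(obj):
        assert isinstance(item, str), f"{name}[{i}] must be a string"
```

Calling such a helper at the top of a function turns a silent type mismatch into an immediate, descriptive failure.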
The primary content of the vector_helper.py script is the VectorField and VectorFieldGenerator classes. The general idea of the generator class is that a 2-dimensional grid is created, several points on the grid are randomly selected to be given "base vectors", and finally vectors at the remaining points are generated using proximity-based interpolation. These interpolated vectors can be passed as a pair of numpy arrays to the vector field class; more generally, any pair of arrays the user generated themselves is also acceptable. From here, metrics derived from the Jacobian of the field (such as the curl, divergence and determinant) can be computed at each point.
There are two main features of these vector field classes to be aware of. The first is that the heavy computations (such as the interpolation and the Jacobian) take advantage of multiprocessing to speed up generation of the values. The second is that the vector field can be visualized in several ways: a red/blue plot of the negative/positive curl of the field, a red/blue plot of the negative/positive divergence, a red/blue plot of the negative/positive Jacobian determinant, and full RGB plots of the previous images combined using PCA.
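The Jacobian-derived metrics themselves are straightforward to sketch with finite differences via np.gradient (the repository's multiprocessing implementation will differ, but the math is the same):

```python
import numpy as np

def field_metrics(u, v, spacing=1.0):
    """Compute 2-D curl, divergence and Jacobian determinant of a vector
    field given component grids u (x-component) and v (y-component),
    with rows indexing y and columns indexing x."""
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x).
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    curl = dv_dx - du_dy                          # z-component of curl
    divergence = du_dx + dv_dy
    determinant = du_dx * dv_dy - du_dy * dv_dx   # det of the Jacobian
    return curl, divergence, determinant
```

On the radially expanding field (u, v) = (x, y), for example, the curl is 0 everywhere while the divergence is 2.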