Transforming the QuadTree algorithm to numpy indexing

Think about it: pointers are 8 bytes long on AMD64, and we have fewer than 2^32 data points. So why not give 32-bit integers a chance against pointers as an optimization?
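A quick back-of-the-envelope sketch (the point count below is made up for illustration): storing references as 32-bit indices instead of 64-bit pointers halves the memory per reference.

import numpy as np

n = 1_000_000                     # well below the 2**32 limit
refs = np.empty(n, dtype=np.int32)
print(refs.nbytes)                # 4_000_000 bytes as 32-bit indices
print(n * 8)                      # 8_000_000 bytes as 64-bit pointers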

The QuadTree algorithm only needs to know:

  • The tree nodes, which know their subnodes
  • The bounding box of each tree node
  • The data point assigned to each tree node

So why not come up with a flat data structure like this:

# The points to process: a (num_points, 2) NumPy array
self.points = points
# The return value computed for each point
self.levels = np.empty((len(self.points), 1), dtype=float)
# The bounding box of each QuadTree node
self.rect_points = np.empty((4 * len(self.points), 4), dtype=float)
# Global index of the next free storage position
self.next_store_idx = 0
# Each QuadTree node is a row of 5 indices: the 4 indices of its four
# subdivisions, plus a fifth index for the data point (in self.points)
# contained in this node; -1 means "unset"
self.tree = np.full((4 * len(self.points), 5), -1, dtype=int)

This approach will run.
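As a sketch of how the tree is navigated with nothing but integer indices (the helper functions below are hypothetical, not part of the original code): the first four columns of a row are the child indices, the fifth is the data point index.

import numpy as np

def children(tree, node):
    # Columns 0..3: indices of the four subdivisions, -1 = no child yet
    return tree[node, :4]

def stored_point(tree, node):
    # Column 4: index into the points array of the data point
    # assigned to this node, -1 = node holds no point
    return tree[node, 4]

tree = np.full((20, 5), -1, dtype=int)  # toy tree with room for 20 nodes
root = 0
print(children(tree, root))      # [-1 -1 -1 -1] -> no subdivisions yet
print(stored_point(tree, root))  # -1            -> no point assigned yet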

But we have a problem with the memory. Since we do not know beforehand how many tree nodes we will use, we have to size the arrays for the theoretical maximum, which is 4 times the number of data points.
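To put a number on that (a sketch with an assumed point count and NumPy's default 64-bit integer dtype):

import numpy as np

n = 1_000_000
tree = np.full((4 * n, 5), -1, dtype=np.int64)
print(tree.nbytes)  # 160_000_000 bytes -> ~160 MB before any point is inserted

And the rect_points array, at 4 * n rows of 4 floats, adds another 128 MB on top.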

That is ugly and needs further investigation.